Hello,
we are using Solr 7.7.2 and sometimes we perform a full reindex
of a core. Therefore we stop replication on the master
(solr//replication?command=disablereplication),
back up and delete the index, and finally rebuild the index and
re-enable replication.
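For later readers, the sequence described here can be sketched as replication API calls; the host and core name ("mycore") below are placeholder assumptions, not the real setup:

```shell
# Sketch of the full-reindex procedure above; host and core name are
# placeholder assumptions. Run each URL through curl against the master.
CORE=http://localhost:8983/solr/mycore

echo "$CORE/replication?command=disablereplication"  # 1) stop replication
echo "$CORE/replication?command=backup"              # 2) back up the index
# ... delete data/index and rebuild the index here ...
echo "$CORE/replication?command=enablereplication"   # 3) resume replication
```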
However, the
All, please help on this.
On Tue, Nov 3, 2020, 6:01 PM Parshant Kumar
wrote:
> Hi team,
>
> We are having solr architecture as *master->repeater-> 3 slave servers.*
>
> We are doing incremental indexing on the master server(every 20 min) .
> Replication of index is done
Hello,
We need a recommendation for Solr replication throttling. What are your
recommendations for the maxWriteMBPerSec value? Our indexes contain 18 locales
and the size of all indexes is 188GB and growing.
Also, will replication throttling work with Solr 4.10.3?
Thanks,
Pino Alu | HCL
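For anyone searching the archives later: throttling is configured on the replication handler. A sketch follows; the 100 MB/s figure is purely illustrative, and I believe maxWriteMBPerSec only arrived in the 5.x line, so verify against your version's Reference Guide before relying on it in 4.10.3 (the exact placement of the parameter should also be checked there):

```xml
<!-- Sketch only: value is illustrative, not a recommendation; check the
     Ref Guide for your release for the exact placement of this option. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
  <str name="maxWriteMBPerSec">100</str>
</requestHandler>
```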
Hi team,
We are having solr architecture as *master->repeater-> 3 slave servers.*
We are doing incremental indexing on the master server (every 20 min).
Replication of the index is done from master to repeater server (every 10 mins)
and from repeater to the 3 slave servers (every 3 hours).
*We are
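The polling intervals described above live in the slave section of each downstream node's replication handler; a sketch for the repeater's 10-minute poll (the masterUrl host is a placeholder):

```xml
<!-- On the repeater: pull from the master every 10 minutes.
     Host name is a placeholder assumption. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/mycore/replication</str>
    <str name="pollInterval">00:10:00</str>
  </lst>
</requestHandler>
```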
Hi all, please check the details
On Sat, Oct 17, 2020 at 5:52 PM Parshant Kumar
wrote:
>
>
> *Architecture is master->repeater->slave servers in hierarchy.*
>
> *One of the Below exceptions are occuring whenever replication fails.*
>
> 1)WARN : Error in fetching file
*Architecture is master->repeater->slave servers in hierarchy.*
*One of the below exceptions occurs whenever replication fails.*
1) WARN: Error in fetching file: _4rnu_t.liv (downloaded 0 of 11505507
bytes)
java.io.EOFException: Unexpected end of ZLIB input stream
Parshant Kumar wrote:
> Hi all,
>
> We are having solr architecture as below.
>
>
>
> We are facing the frequent replication failure between master to repeater
> server as well as between repeater to slave servers.
> On checking logs found every time one of t
Architecture image: If not visible in previous mail
[image: image.png]
On Sat, Oct 17, 2020 at 2:38 PM Parshant Kumar
wrote:
> Hi all,
>
> We are having solr architecture as below.
>
>
>
> *We are facing the frequent replication failure between master to repeater
> se
Hi all,
We are having solr architecture as below.
*We are facing the frequent replication failure between master to repeater
server as well as between repeater to slave servers.*
On checking the logs we found that one of the below exceptions occurred
every time the replication failed.
1
have to replicate the data to slave immediately.
Regards,
Tushar
On Thu, 3 Sep 2020 at 16:17, Emir Arnautović
wrote:
> Hi Tushar,
> Replication is file based process and hard commit is when segment is
> flushed to disk. It is not common that you use soft commits on master. The
> only us
Hi Tushar,
Replication is a file-based process, and a hard commit is when a segment is
flushed to disk. It is not common to use soft commits on a master. The only
use case I can think of is when you read your index as part of the indexing
process, but even that is bad practice and should be
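The point above is that replication ships segment files, which only exist after a hard commit, so the master's hard-commit cadence bounds replication freshness. A minimal sketch of the master-side commit config (the 60-second value is illustrative):

```xml
<!-- On the master: hard commits create the segment files that replication
     copies; soft commits produce nothing for a slave to fetch. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>              <!-- hard commit every 60 s (illustrative) -->
    <openSearcher>false</openSearcher>    <!-- flush segments without reopening -->
  </autoCommit>
</updateHandler>
```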
Hi,
I want to ask whether soft commits work with replication.
One of our use cases deals with indexing the data every second on a master
server. And then it has to replicate to slaves. So if we use soft commit,
then does the data replicate immediately to the slave server or after the
hard commit
Main info: SOLRCloud 7.7.3, Zookeeper 3.4.14
I have a 2 node SOLRCloud installation, 3 zookeeper instances, configured in
AWS to autoscale. I am currently testing with 9 collections. My issue is that
when I scale out and a node is added to the SOLRCloud cluster,
I get replication to the new
Hi Monica,
Replication is working fine for me. You just have to add the
_schema_feature-store.json and _schema_model-store.json to confFiles under
/replication in solrconfig.xml
I think the issue you are seeing is where the model is referencing a
feature which is not present in the feature store
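The confFiles change described above might look like the following sketch (file names are as given in the message; verify them against your own config set):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- ship the LTR stores alongside the index -->
    <str name="confFiles">_schema_feature-store.json,_schema_model-store.json</str>
  </lst>
</requestHandler>
```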
Hi Christine,
I am using Solr 7.7
I am able to get it replicated now. I didn't know that the feature and
model store are saved as files in the config structure, and that by providing
these names in the /replication handler I can replicate them.
I guess this is something that can be provided in th
Adding more details here
I need some help on how to enable the solr LTR model and features on all
nodes of a solr cluster.
I am unable to replicate the model and the feature store from any
master to its slaves with the replication API, and I am unable to find any
documentation for the same.
Bump. Does anyone have an idea how to proceed here?
On Wed, Jul 8, 2020 at 5:41 PM krishan goyal wrote:
> Hi,
>
> How do I enable replication of the model and feature store ?
>
> Thanks
> Krishan
>
Hi,
How do I enable replication of the model and feature store ?
Thanks
Krishan
Hi all,
We are running Solr 8.2 Cloud in a cluster where we have a single TLOG
replica per shard and multiple PULL replicas for each shard. We have
noticed an issue recently where some of the PULL replicas stop replicating
from the masters. They will have a replication which outputs
be implemented in
> CouchDB by using filtered replication. Ideally, I would like to have
> one-way sync i.e. from the server to the client only. We may update the
> documents in the client-side Solr.
>
> How can I implement something like this in Solr?
>
> thanks
> Sachin
>
rching for some solution by which I can replicate only those
documents to the client-side Solr. For instance, it can be implemented in
CouchDB by using filtered replication. Ideally, I would like to have
one-way sync i.e. from the server to the client only. We may update the
documents in the client
Hi!
I have some custom cache set up in solrconfig XML for a solr cloud cluster in
Kubernetes. Each node has Kubernetes persistence set up. After I execute a
“delete pod” command to restart a node it goes into Replication Recovery
successfully but my custom cache’s warm() method never gets
large indices you can run into
issues with the network unless you throttle replication (which will again
result in longer replication time). When it comes to caches, there are some
per-segment caches, but the majority of caches are invalidated on any index
searcher reopening, so it does not matter if you
necessary.
*Query: *I was reading articles about optimizations, merging and
committing. My basic queries can be summed up as:
1. Should we optimize the index after atomic updates before replication?
This equates to a single optimization operation daily during low traffic
(morning).
2. Since
Hi Akreeti,
How much should I set "commitReserveDuration" for 2.62 GB ?
That's why I asked you about the time taken by the replication. You can
easily get a hint about it after manually starting replication. The
commitReserveDuration should roughly be set to the time taken to do
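For reference, commitReserveDuration lives in the master section of the replication handler and takes an hh:mm:ss value (the default is 00:00:10). A sketch, with the one-minute value purely illustrative of "roughly the time a full copy takes":

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- reserve the commit point roughly as long as a full copy takes -->
    <str name="commitReserveDuration">00:01:00</str>
  </lst>
</requestHandler>
```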
Hi,
I have no idea about how much time is taken for successful replication for 2.62
GB. How much should I set "commitReserveDuration" for 2.62 GB ?
Thanks & Regards,
Akreeti Agarwal
-Original Message-
From: Paras Lehana
Sent: Thursday, September 12, 2019 6:46 PM
Akreeti Agarwal
-Original Message-
From: Jon Kjær Amundsen
Sent: Wednesday, September 11, 2019 7:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication Iteration
Is it every time it fails, or just sometimes?
What are the timestamps on the failed and passed iterations?
And how much disk s
My index size is 2.62 GB, and :
00:00:10
Thanks & Regards,
Akreeti Agarwal
-Original Message-
From: Paras Lehana
Sent: Wednesday, September 11, 2019 5:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication Iteration
What is the size of your index? Is it too big? How fas
org.apache.solr.common.SolrException: Unable to
> download segments_znow completely. Downloaded 0!=2217
>
> Thanks & Regards,
> Akreeti Agarwal
>
> -Original Message-
> From: Paras Lehana
> Sent: Wednesday, September 11, 2019 5:17 PM
> To: solr-user@lucene.apache.org
>
Hi Akreeti,
Have you tried using the old UI to see errors? I had always experienced not
seeing status updates about replication in the newer UI. Check for the
option on top right of Solr UI.
And where are you seeing logs - on solr UI or from a file?
On Wed, 11 Sep 2019 at 16:12, Akreeti Agarwal
In the logs I don't see any errors; replication fails roughly every 1-2
minutes and I am not able to identify the root cause.
Thanks & Regards,
Akreeti Agarwal
-Original Message-
From: Jon Kjær Amundsen
Sent: Wednesday, September 11, 2019 12:15 PM
To:
It depends on the timestamps.
The red iterations are failed replications and the green are passed
replications.
If the newest timestamp is green the latest replication went well, if it is
red, it failed.
You should check the Solr log on the slave if a recent replication has
failed to see the
Hi All,
I am using solr-5.5.5, in which I have one master and two slaves. I see some
red and some green replication iterations on my slave side.
What do these red and green iterations mean?
Will this cause a problem?
Thanks & Regards,
Akreeti Agarwal
ader. Is this a known error? New Jira?
Regards,
Markus
-Original message-
> From:Ere Maijala
> Sent: Friday 23rd August 2019 11:24
> To: solr-user@lucene.apache.org
> Subject: Re: 8.2.0 After changing replica types, state.json is wrong and
> replication no longer takes p
Hi,
We've had PULL replicas stop replicating a couple of times in Solr 7.x.
Restarting Solr has got it going again. No errors in logs, and I've been
unable to reproduce the issue at will. At least once it happened when I
reloaded a collection, but other times that hasn't caused any issues.
I'll m
Hello,
There is a newly created 8.2.0 all-NRT cluster for which I replaced each
NRT replica with a TLOG type replica. Now the replicas no longer replicate
when the leader receives data. The situation is odd, because some shard
replicas kept replicating up until eight hours ago, another on
I am not sure, but just guessing: is this node acting as a repeater?
This seems legitimate; as Jai mentioned above, the discrepancy could be
because of unsuccessful replication due to disk space constraints.
On Thu, Aug 1, 2019 at 6:19 AM Aman Tandon wrote:
> Yes, that is what my understand
Yes, that is my understanding, but if you look at the replication
handler response, it is referring to the index folder, not to
the one shown in index.properties. Due to that confusion I am not able to
delete the folder.
Is this some bug or default behavior where irrespective of the
It's correct behaviour; Solr puts replica index files in this format, and
you can find the latest index pointed to in the index.properties file. Usually
after a successful full replication Solr removes the old timestamp dir.
On Wed, 31 Jul, 2019, 8:02 PM Aman Tandon, wrote:
> Hi,
>
> We are havi
replication API I am seeing it is pointing to *index *folder. Am I
missing something? Kindly advise.
directory:
drwxrwxr-x. 2 fusion fusion 69632 Jul 30 23:24 index
drwxrwxr-x. 2 fusion fusion 28672 Jul 31 03:02 index.20190731005047763
drwxrwxr-x. 2 fusion fusion  4096 Jul 31 10:20 index
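For context, index.properties is a small file next to those directories that names the one Solr actually uses; a sketch matching the timestamped directory in the listing above would contain something like:

```properties
# data/index.properties (sketch): names the directory Solr currently uses
index=index.20190731005047763
```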
Hi, we have a new setup of Solr 7.7 (no cloud) in a master/slave
configuration. Periodically our core stops responding to queries and must be
restarted on the slave.
Two hosts
is06 solr 7.7 master
ss06 solr 7.7 slave
simple replication is setup no solr cloud
so on the primary is06 we see this error
One other question related to this.
I know the change was made for a specific problem that was occurring but has
this caused a similar problem as mine with anyone else?
We're looking to try changing the second 'if' statement to add an extra
conditional to prevent it from performing the "deleteAll
I removed the replicate after startup from our solrconfig.xml file. However
that didn't solve the issue. When I rebuilt the primary, the associated
replicas all went to 0 documents.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
OK, probably dropping startup will help. Another idea:
set replication.enable.master=false and enable it when the master index is
built after restart.
On Tue, Jun 25, 2019 at 6:18 PM Patrick Bordelon <
patrick.borde...@coxautoinc.com> wrote:
> We are currently using the replicate after commit and sta
We are currently using the replicate after commit and startup
${replication.enable.master:false}
commit
startup
schema.xml,stopwords.txt
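The configuration fragment above appears to have lost its XML markup in the archive; a plausible reconstruction (the element names are my assumption, the values are from the message) is:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${replication.enable.master:false}</str>
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>
```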
SOLR-11293 changes
> <
> https://issues.apache.org/jira/browse/SOLR-11293?focusedCommentId=16182379&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16182379>
>
>
> This fix changed the way the replication handler checks before updating a
>
After some research we believe we've tracked down the issue.
SOLR-11293.
SOLR-11293 changes
<https://issues.apache.org/jira/browse/SOLR-11293?focusedCommentId=16182379&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16182379>
This fix changed the
successfully processed is provided. The timestamp of the update
operation is the original timestamp, i.e., the time this operation was
processed on the source SolrCloud. This allows an estimate of the latency of
the replication process.
The timestamp of the update operation in the source solrcloud is
On 5/21/2019 8:48 AM, Michael Tracey wrote:
Is it possible set up an existing SolrCloud cluster as the master for
legacy replication to a slave server or two? It looks like another option
is to use Uni-direction CDCR, but not sure what is the best option in this
case.
You're askin
Is it possible to set up an existing SolrCloud cluster as the master for
legacy replication to a slave server or two? It looks like another option
is to use Uni-direction CDCR, but not sure what is the best option in this
case.
--
Michael Tracey
The MetricsHistory issue is now resolved. I think the replication issue already
has a JIRA here: https://issues.apache.org/jira/browse/SOLR-11904 ?
Feel free to vote on that issue and perhaps add your own comments. And if you
have an idea
auth and index replication
Hey Solr community.
I’ve been following a couple of open JIRA tickets relating to use of the basic
auth plugin in a Solr cluster (https://issues.apache.org/jira/browse/SOLR-12584
, https://issues.apache.org/jira/browse/SOLR-12860) and recently I’ve noticed
similar
Hi Tulsi,
this sounds more like which replication types are preferably chosen for
distributed queries with multiple shards per collection.
What I'd like to achieve is creating TLOG replicas as default in the first
place.
But to be honest, I haven't tried it out since I'm now u
Hi Roger,
Have you tried the shards.preference parameter? You can specify the
replica type as TLOG or PULL (the default is NRT) in solrconfig.xml using this
parameter.
Example:
shards.preference=replica.type:TLOG
Note: earlier this parameter was preferLocalShards which has been
deprecated.
/mycollection_shard1_replica_n17/replication. Reason:
require authentication
Hello,
I'm using SolrCloud version 7.7.1 and want to try using *only* the TLOG
replication type (excepting the leader of course).
Where can I configure that as default? Ideally cluster-wide. Can I use the
*set-cluster-policy* or *set-trigger *(for my nodeAdded trigger) API calls
to define
Thanks.
That resolves the issue.
Thanks again.
-Original Message-
From: Shawn Heisey
Sent: Tuesday, March 19, 2019 7:10 PM
To: solr-user@lucene.apache.org
Subject: Re: is df needed for SolrCloud replication?
On 3/19/2019 4:48 PM, Oakley, Craig (NIH/NLM/NCBI) [C] wrote:
> I recen
section of solrconfig.xml (in order to streamline out
parts of the code which he does not use).
I have not (yet) noticed any ill effects from this error. Is this error benign?
Or shall I ask the user to reinstate df in the defaults section of
solrconfig.xml? Or can SolrCloud replication be configured to work around any
ill effects that there may be?
Please advise
Hey all,
I’m looking for some support with replication errors we’re seeing in SolrCloud
7.7.x (tried both .0 and .1).
I’ve created a StackOverflow issue:
We have errors in SolrCloud (7.7.1) during replication, which we can't
understand. We thought it may be
https://issues.apache.org
ticular, you're indexing during node restart.
>
> That means that
> 1> you'll almost inevitably get a full sync on start given your update
> rate.
> 2> while you're doing the full sync, all new updates are sent to the
> recovering replica and pu
doing the full sync, all new updates are sent to the
recovering replica and put in the tlog.
3> When the initial replication is done, the documents sent to the
tlog while recovering are indexed. This is 7 hours of accumulated
updates.
4> If much goes wrong in this situation, then
ering) almost freezes with
100% CPU usage and 80%+ memory usage. Follower node's memory usage is 80%+
but CPU is very healthy. Also Follower node's log is filled up with updates
forwarded from the leader ("...PRE_UPDATE FINISH
{update.distrib=FROMLEADER&distrib.from=..."
layed after
the full index replication is accomplished.
Much of the retry logic for replication has been improved starting
with Solr 7.3 and,
in particular, Solr 7.5. That might address your replicas that just
fail to replicate ever, but won't help with replicas needing a full sync
anyway.
T
Hello Solr gurus,
So I have a scenario where on Solr cluster restart the replica node goes
into full index replication for about 7 hours. Both replica nodes are
restarted around the same time for maintenance. Also, during usual times,
if one node goes down for whatever reason, upon restart it
Thanks.
I am not explicitly asking solr to optimize. I do send -commit yes in the POST
command when I execute the delete query.
In the master-slave node where replication is hung I see this:
On the master:
-bash-4.1$ ls -al data/index/segments_*
-rw-rw-r--. 1 u g 1269 Jan 29 16:23 data/index
a forced merge and can take a very long time to complete.
Getting around that problem involves using deleteById instead of
deleteByQuery.
I have no idea whether replication would be affected by the blocking
that deleteByQuery causes. I wouldn't expect it to be affected, but
I've
replication hangs. No errors, but it is trying to
download a segments_* file (e.g. segments_1bnx7) and just sits there. No logs.
I am unable to stop replication (using abortfetch) once it reaches this state.
Disabling polling works (which is set to 60 seconds) but that doesn't help. The
only
In SolrCloud there are Data Centers.
Your Cluster 1 is DataCenter 1 and your Cluster 2 is Data Center 2.
You can then use CDCR (Cross Data Center Replication).
http://lucene.apache.org/solr/guide/7_0/cross-data-center-replication-cdcr.html
Nevertheless I would spend your Cluster 2 another 2
, bkpsolr2
Master / Slave : solr1 / bkpsolr1
solr2 / bkpsolr2
Is it possible to have master/slave replication configured for the Solr
instances running in cluster1 & cluster2 (for failover)? Kindly let me know
the possibility.
> goes into recovery, concentrate on the replica that goes into
> > recovery and the corresponding leader's log.
> >
> > Best,
> > Erick
> >
> > On Sat, Dec 29, 2018 at 6:23 PM Doss wrote:
> > >
> > > we are using 3 node solr (64GB r
we are using a 3-node Solr (64GB RAM / 8 CPU / 12GB heap) cloud setup with
version 7.x. We have 3 indexes/collections on each node. Index size was about
250GB. NRT with 5 sec soft / 10 min hard commit. Sometimes on any one node we
are seeing full index replication start running. Is there any
We are using Solr 7.2. We have two solrclouds that are hosted on Google clouds.
These are targets for an on Prem solr cloud where we run our ETL loads and
have CDCR replicate it to the Google clouds. This mostly works pretty well.
However, networks can fail. When the network has a brief outage
Hi
I am working on backups.
I have created a backup with below command:
http://dev/solr/XXX/replication?command=backup&name=XXXBackup&location=D:\Backup
all worked fine, files have been created.
I wanted to restore the index from this backup with below command:
/http://dev/
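The restore call was truncated in the archive; based on the backup command quoted above, a request of the following shape would be the likely counterpart (the host, core "XXX", and backup name simply mirror that command, and command=restore requires Solr 5.2 or later — both points are assumptions to verify):

```shell
# Hedged sketch of the restore counterpart to the quoted backup command;
# host, core and backup name mirror that command. Run the URL through curl.
URL='http://dev/solr/XXX/replication?command=restore&name=XXXBackup&location=D:\Backup'
echo "$URL"
```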
Hmmm, afraid I'm out of my depth, perhaps some of the Lucene
folks can chime in.
Sorry I can't be more help.
Erick
On Mon, Nov 12, 2018 at 12:27 AM damian.pawski wrote:
>
> Hi
>
> I had to re-create the index as some tokenizers are not longer supported on
> the 7.x version.
> I have a fresh 7.x
Hi
I had to re-create the index as some tokenizers are no longer supported on
the 7.x version.
I have a fresh 7.x index.
Thank you
Damian
Hi
I had to re-create the index, as some Tokenizers are no longer supported on
7.X, so I have a fresh 7.x index, but still having issues with the
replication.
Thank you
Damian
Hi,
We have switched from 5.4 to 7.2.1 and we have started to see more issues
with the replication.
I think it may be related to the fact that a delta import was started during
a full import (this was not the case for Solr 5.4).
I am getting below error:
XXX
Hmmm, ok. The replication failure could lead to the scenario I
outlined, but that's a secondary issue to the update not getting to
the follower in the first place as you say.
On Tue, Nov 6, 2018 at 12:19 PM Jeremy Smith wrote:
>
> Thanks everyone. I added SOLR-12969.
>
>
>
Thanks everyone. I added SOLR-12969.
Erick - those sound like important questions, but I think this issue is
slightly different. In this case, replication is failing even if the leader
never goes down.
From: Erick Erickson
Sent: Tuesday, November 6, 2018 2
en though the version number is
> > lower.
> >
> >
> >-Jeremy
> >
> > ____
> > From: Susheel Kumar
> > Sent: Thursday, November 1, 2018 2:57:00 PM
> > To: solr-user@lucene.apache.org
> > Subject: R
solr-user@lucene.apache.org
Subject: Re: SolrCloud Replication Failure
Are we saying it has something to do with stopping and restarting replicas?
Otherwise I haven't seen/heard any issues with document updates and
forwarding to replicas...
Thanks,
Susheel
On Thu, Nov 1, 2018 at 12:58 PM Erick Eri
t; > https://github.com/risdenk/test-solr-start-stop-replica-consistency
> > >> >
> > >> > I don't even see the first update getting applied from num 10 -> 20.
> > >> After
> > >> > the first update there is no more change.
> >
> the first update there is no more change.
> >> >
> >> > Kevin Risden
> >> >
> >> >
> >> > On Wed, Oct 31, 2018 at 8:26 PM Jeremy Smith
> >> wrote:
> >> >
> >> > > Thanks Erick, this is 7.5.0.