Great. Thanks for the work on this patch!
Jim
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Cloud-A-B-Deployment-Issue-tp4302810p4303357.html
Sent from the Solr - User mailing list archive at Nabble.com.
Nodes will still go into recovery but only for a short duration.
On Oct 26, 2016 1:26 PM, "jimtronic" wrote:
> It appears this has all been resolved by the following ticket:
> https://issues.apache.org/jira/browse/SOLR-9446
> My scenario fails in 6.2.1, but works in 6.3 and Master where this bug has
> been fixed.
This is due to leader-initiated recovery. Take a look at
https://issues.apache.org/jira/browse/SOLR-9446
On Oct 24, 2016 1:23 PM, "jimtronic" wrote:
> We are running into a timing issue when trying to do a scripted deployment
> of our Solr Cloud cluster.
>
> Scenario to reproduce (someti…
It appears this has all been resolved by the following ticket:
https://issues.apache.org/jira/browse/SOLR-9446
My scenario fails in 6.2.1, but works in 6.3 and Master where this bug has
been fixed.
In the meantime, we can use our workaround to issue a simple delete command
that deletes a non-existent document.
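A minimal sketch of that workaround, assuming a collection named "mycollection" on localhost:8983 (both names are placeholders, not from the thread):

```shell
# Delete an ID that does not exist; Solr still records the operation
# in the transaction log, which is the effect the workaround relies on.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update?commit=true' \
  -d '{"delete": {"id": "does-not-exist"}}'
```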
Also, if we issue a delete by query where the query is "_version_:0", it also
creates a transaction log and then has no trouble transferring leadership
between old and new nodes.
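The delete-by-query variant could look like this (again with a placeholder host and collection name):

```shell
# _version_:0 matches no documents, but the delete-by-query is still
# written to the transaction log on each replica.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update?commit=true' \
  -d '{"delete": {"query": "_version_:0"}}'
```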
Still, it seems like when we ADDREPLICA, some sort of transaction log should
be started.
Jim
Interestingly, if I simply add one document to the full cluster after all 6
nodes are active, this entire problem goes away. This appears to be because
a transaction log entry is created, which in turn prevents the new nodes from
going into full replication recovery upon leader change.
Adding a doc…
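Adding that single "warm-up" document could be sketched as follows; the document id and collection name are illustrative only:

```shell
# Index one trivial document so every replica writes a tlog entry
# before leadership changes hands.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update?commit=true' \
  -d '[{"id": "deploy-warmup-doc"}]'
```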