No. There's a "peer sync" that will try to update from the leader's
transaction log if (and only if) the replica has fallen behind. By
"fallen behind" I mean it was unable to accept any updates for
some period of time. The default peer sync window is 100 docs;
you can make it larger via numRecordsToKeep, see:
http://lucene.apache.org/solr/guide/7_6/updatehandlers-in-solrconfig.html
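For reference, tlog retention is set in the updateLog section of
solrconfig.xml. A minimal sketch (500 is just an example value, not a
recommendation):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
    <!-- Peer sync can only cover a gap up to this many docs;
         the default is 100. Raising it trades tlog size and
         replay time for a better chance of avoiding full recovery. -->
    <int name="numRecordsToKeep">500</int>
    <!-- How many tlog files to keep around (default 10). -->
    <int name="maxNumLogsToKeep">10</int>
  </updateLog>
</updateHandler>
```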

Some observations though:
12G heap for 250G of index on disk _may_ work, but I'd be looking at
the GC characteristics, particularly stop-the-world pauses.

Your hard commit interval looks too long. I'd shorten it to < 1 minute
with openSearcher=false. See:
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
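Concretely, that looks like this in solrconfig.xml (15 seconds is just
an example interval under a minute; the 5-second soft commit matches
your NRT setting):

```xml
<autoCommit>
  <!-- Hard commit every 15s: flushes segments and truncates the tlog,
       but does NOT open a new searcher, so it's cheap. -->
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <!-- Soft commit controls visibility of new docs. -->
  <maxTime>5000</maxTime>
</autoSoftCommit>
```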

I'd concentrate on _why_ the replica goes into recovery in the first
place. You say you're on 7x, which one? Starting in 7.3 the recovery
logic was pretty thoroughly reworked, so _which_ 7x version is
important to know.

The Solr logs should give you some idea of _why_ the replica
goes into recovery; concentrate on the log of the replica that goes
into recovery and the corresponding leader's log.
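As a starting point, something like the grep below will pull out the
relevant lines. This is only a sketch: the log location varies by
install, and the sample lines here are illustrative, not exact Solr
messages.

```shell
# Point LOG at your real solr.log; a small sample is used here so the
# snippet is self-contained.
LOG=sample_solr.log
cat > "$LOG" <<'EOF'
2018-12-29 18:00:01 INFO  RecoveryStrategy Starting recovery process
2018-12-29 18:00:02 INFO  PeerSyncWithLeader Attempting to sync with leader
2018-12-29 18:00:05 WARN  IndexFetcher Starting full index replication
2018-12-29 18:00:06 INFO  QuerySenderListener Some unrelated line
EOF
# Pull out recovery/peer-sync/replication lines to see which path was taken.
grep -iE 'recover|peersync|replicat' "$LOG"
```

If you see IndexFetcher starting a full replication right after a failed
peer sync, that points back at the numRecordsToKeep discussion above.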

Best,
Erick

On Sat, Dec 29, 2018 at 6:23 PM Doss <itsmed...@gmail.com> wrote:
>
> we are using a 3-node Solr (64GB RAM/8 CPU/12GB heap) cloud setup with
> version 7.X. We have 3 indexes/collections on each node; index size is
> about 250GB. NRT with 5sec soft / 10min hard commit. Sometimes on one
> node we see a full index replication start running. Is there any
> configuration which forces Solr to replicate fully, like a 100/200
> update difference a node sees with the leader? - Thanks.