Not at this point; the limit is, I think, 100 documents. I actually spoke imprecisely earlier: over that limit an old-style replication happens, which _may_ mean a full index copy, but it usually only moves over the most recent segments that have changed. If you're optimizing, though, it will be the whole index (and you shouldn't optimize, or forceMerge as it's called now).
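If it helps, one way to watch what a replica actually pulls during one of these recoveries (a handful of changed segment files vs. the whole index) is the replication handler's details command. Just a sketch -- the host/port and core name are assumptions taken from the log messages quoted further down:

  # host/port and core name ("mycore") are assumptions from the logs below
  curl 'http://localhost:8983/solr/mycore/replication?command=details'

That should show the replica's replication status and what it is fetching.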
Why do you want to force a full replication? If you have a suspicious
replica, perhaps just shut it down, delete its index directory, and start
it back up again? (quick sketch at the very end of this mail)

Best
Erick

On Sat, Jan 5, 2013 at 1:33 AM, Sai Gadde <gadde....@gmail.com> wrote:
> Hi Erick,
>
> The issue was with zookeeper: when we tried to force a full replication by
> cleaning the datadir in zookeeper, it caused the index removal.
>
> Our index always replicated full even on a short outage or restart. I think
> "too far out of date" could be the reason. We felt zookeeper was to blame
> here. We continuously add documents to the index on the leader node. Usually
> we would have 1k - 2k more docs by the time the server restarts. We only do
> soft commits and use commitWithin while indexing.
>
> Is there a way to change this "too far out of date" property through
> solr config?
>
> Thanks
> Shyam
>
> On Jan 4, 2013 8:48 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:
> >
> > That is very odd. Have there been any hard commits performed at all? Even
> > if not, there should still be an index directory.
> >
> > Solr will do a full replication if the replica is too far out of date, but
> > that shouldn't create (I don't think) a new index directory unless it's a
> > misleading message. Is the cluster still receiving updates while the
> > instance is down? "Too far out of date" is about 100 documents currently.
> >
> > Are you sure you aren't just seeing a full replication happen? When you
> > say "only replicates new documents", how long are you waiting?
> >
> > If none of this is germane, we need more details on how you're bringing
> > the nodes up and down, because this shouldn't be happening as you describe.
> > Also, there have been a lot of changes since 4.0; if you have the
> > bandwidth you might try with a current build.
> >
> > Best
> > Erick
> >
> >
> > On Fri, Jan 4, 2013 at 2:02 AM, Sai Gadde <gadde....@gmail.com> wrote:
> >
> > > I have a single collection and shard in my Solr cloud setup with 3
> > > nodes. The zookeeper ensemble is running on three different machines.
> > >
> > > When we restart one of the servers other than the leader in the cloud,
> > > the index directory is getting deleted in that Solr instance. The index
> > > starts with '0' documents and the instance only replicates new documents.
> > >
> > > These are the messages from the solr admin panel logging. Solr version:
> > > 4.0.0
> > >
> > > 10:48:26 WARNING SolrCore New index directory detected: old=null
> > >   new=/solr/mycore/data/index/
> > > 10:48:26 WARNING SolrCore [mycore] Solr index directory
> > >   '/solr/mycore/data/index' doesn't exist. Creating new index...
> > >
> > > Any help regarding this issue would be appreciated.
> > >
> > > Thanks
> > > Shyam
> > > gadde....@gmail.com
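The sketch I mentioned above, for resetting a suspicious replica -- the index
path is the one from your log messages, and how you stop and start the node
depends on how you run Solr:

  # stop the Solr node hosting the replica (however you normally run/stop it)
  # remove just that core's index directory (path taken from your logs):
  rm -rf /solr/mycore/data/index
  # start the node again; it should then do a full recovery from the leader

That way only the one replica is touched and nothing in zookeeper needs to be
cleaned.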