As far as I know, if replication fails, the old index will still be used. There will be some performance impact during replication simply because of the network and disk IO, and of course the JVM running Solr will be doing more work. But I think you can throttle replication... or maybe not; I can't see anything about it on http://wiki.apache.org/solr/SolrReplication
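For what it's worth, here is a rough, untested sketch of how you could check that yourself: trigger fetchindex on the live instance, poll the ReplicationHandler's details command until it stops replicating, and compare indexversion/generation before and after. The hostnames, ports and core names are my guesses based on your mail (I've written it with the live core pulling from the stage master, which I think is the direction you want; flip the URLs if I've read it backwards), and the exact shape of the details response can vary by Solr version and slave config, so treat it as an illustration rather than something I've run against your setup.

#!/usr/bin/env python3
# Sketch only: trigger fetchindex on the "live" instance and poll until the
# slave reports it has finished replicating, then compare index versions.
# URLs/core names below are assumptions based on the original mail.
import json
import time
import urllib.request

LIVE = "http://search-live:9084/solr/live/replication"    # core that pulls the new index
STAGE = "http://search-stage:9084/solr/stage/replication"  # core that was just rebuilt

def solr_json(url):
    # GET a ReplicationHandler command and parse the JSON response.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Remember what the live core is serving right now.
before = solr_json(f"{LIVE}?command=indexversion&wt=json")
print("live index before fetch:", before.get("indexversion"), before.get("generation"))

# Kick off the pull. fetchindex returns immediately; the copy happens in the background.
solr_json(f"{LIVE}?command=fetchindex&masterUrl={STAGE}&wt=json")

# Poll the details command until the slave section says it is no longer replicating.
# If your live core isn't configured as a slave, this section may be missing; in
# that case you could poll indexversion instead until it changes or times out.
while True:
    details = solr_json(f"{LIVE}?command=details&wt=json")
    slave = details.get("details", {}).get("slave", {})
    if str(slave.get("isReplicating", "false")).lower() != "true":
        break
    time.sleep(5)

after = solr_json(f"{LIVE}?command=indexversion&wt=json")
print("live index after fetch:", after.get("indexversion"), after.get("generation"))
# If the fetch failed partway, indexversion/generation should be unchanged and the
# old index keeps serving queries; only on a successful pull should they move forward.

Obviously not a definitive implementation, but that is roughly how I'd verify that the old index survives a failed pull.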
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html


On Thu, Dec 13, 2012 at 2:13 PM, Lan <dung....@gmail.com> wrote:
> In our current architecture, we use a staging core to perform full
> re-indexes while the live core continues to serve queries. After a full
> re-index we use the core admin to swap the live and stage index. Both the
> live and stage core are on the same Solr instance.
>
> In our new architecture we want to have the live core and stage core
> running on separate Solr instances. Using core admin to swap is no longer
> possible, so we use the replication command below to push the stage index
> to the live index.
>
> http://search-stage:9084/solr/replication?command=fetchindex&masterUrl=http://search-live:9084/solr/live/replication
>
> Is this operation guaranteed to be atomic? For example, if replication
> fails halfway through, will the old live index still be good? Also, during
> replication, will the live server continue to serve queries without a
> performance penalty?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Is-replication-an-atomic-operation-tp4026813.html
> Sent from the Solr - User mailing list archive at Nabble.com.