So what I'm playing with now is creating a new collection on the target
cluster, shutting the target cluster down, wiping its indexes, manually
copying the indexes over into the correct directories, and starting it
up again. In the middle, you can run an optimize or use the Lucene
IndexUpgrader tool to bring the index up to the new version.
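For the upgrade step, something like this is what I have in mind (a
sketch only; the jar name, Lucene version, and index path are
placeholders for whatever your installation actually uses):

  java -cp lucene-core-4.9.0.jar \
      org.apache.lucene.index.IndexUpgrader \
      -delete-prior-commits \
      /var/solr/mycollection_shard1_replica1/data/index

You'd run that once per shard index directory before copying it over.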
Part of this for me is a migration to HDFSDirectory, so there's an added
level of complication there.
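If it helps, the HDFS piece is mostly a directoryFactory swap in
solrconfig.xml, roughly like the following (the namenode URL and Hadoop
conf dir here are invented; substitute your own):

  <directoryFactory name="DirectoryFactory"
                    class="solr.HdfsDirectoryFactory">
    <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
    <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
  </directoryFactory>

plus setting <lockType>${solr.lock.type:hdfs}</lockType> so the index
locks live in HDFS as well.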

I would assume that, since you only need to preserve reads, you could
cut over once your collections were created on the new cloud?
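If it were me, I'd probably point readers at a collection alias and flip
it once the new cloud is ready, something like this (hostname and
collection names invented for the example):

  curl 'http://newcloud:8983/solr/admin/collections?action=CREATEALIAS&name=mycollection&collections=mycollection_new'

That only helps if your clients address the collection by name through
the cloud, of course.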

Michael Della Bitta

Applications Developer

o: +1 646 532 3062

appinions inc.

“The Science of Influence Marketing”

18 East 41st Street

New York, NY 10017

t: @appinions <https://twitter.com/Appinions> | g+:
plus.google.com/appinions
<https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts>
w: appinions.com <http://www.appinions.com/>


On Tue, Jun 24, 2014 at 3:25 PM, heaven <aheave...@gmail.com> wrote:

> Zero read downtime would be enough; we can safely stop index updates
> for a while. But we have some API endpoints where read downtime is
> very undesirable.
>
> Best,
> Alex
>
