Hi, sorry for the delay.
Yes, we thought about simply copying the index over, but that sounds risky and
time-consuming: our index is too big to copy over the internet quickly.
We decided to re-index our data, switch, and then re-index again. It's a
pity there's no built-in way to do this like with MySQL :)
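For the "switch" step, one option worth noting is Solr's Collections API CREATEALIAS action: clients query a stable alias name, and you repoint the alias at the freshly re-indexed collection. A minimal sketch (the host, alias, and collection names below are made up):

```python
from urllib.parse import urlencode

def createalias_url(solr_host: str, alias: str, collection: str) -> str:
    """Build the Collections API call that points `alias` at `collection`.

    Repointing the alias is how readers get switched to the newly
    re-indexed collection without clients changing their URLs.
    """
    params = {"action": "CREATEALIAS", "name": alias, "collections": collection}
    return f"http://{solr_host}/solr/admin/collections?{urlencode(params)}"

# Hypothetical names; issue this with any HTTP client once the re-index is done.
url = createalias_url("localhost:8983", "products", "products_v2")
```

Re-running the same CREATEALIAS with a new collection name atomically moves the alias, which is what makes the switch effectively read-downtime-free.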
So what I'm playing with now is: create a new collection on the target
cluster, shut the target cluster down, wipe its indexes, manually copy the
indexes over into the correct directories, and start it again. In between,
you can run an optimize or use the Lucene IndexUpgrader tool.
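The IndexUpgrader step above is just a `java -cp` invocation of `org.apache.lucene.index.IndexUpgrader` against each index directory. A sketch of driving it over every core, assuming a stock Solr layout where each core's index lives under `<core>/data/index` (the jar path is hypothetical — use the lucene-core jar matching the target Solr version):

```python
import subprocess
from pathlib import Path

# Hypothetical jar location; point this at the lucene-core jar shipped
# with the *target* Solr version.
LUCENE_CORE_JAR = "/opt/solr/server/solr-webapp/webapp/WEB-INF/lib/lucene-core.jar"

def upgrader_cmd(index_dir: str, jar: str = LUCENE_CORE_JAR) -> list:
    """Java invocation of Lucene's IndexUpgrader for one index directory."""
    return ["java", "-cp", jar, "org.apache.lucene.index.IndexUpgrader", index_dir]

def upgrade_all_cores(solr_home: str) -> None:
    # Each core keeps its Lucene index under <core>/data/index.
    for index_dir in Path(solr_home).glob("*/data/index"):
        subprocess.run(upgrader_cmd(str(index_dir)), check=True)
```

Run this only while the cluster is down, since IndexUpgrader must be the sole writer on the index.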
Zero *read* downtime would be enough; we can safely stop index updates for a
while. But we have some API endpoints where read downtime is very undesirable.
Best,
Alex
I'm currently playing around with SolrCloud migration strategies, too. I'm
wondering... when you say "zero downtime," do you mean zero *read*
downtime, or zero downtime altogether?
Michael Della Bitta
Applications Developer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
I've just realized that the old and new clusters use different installations,
configs, and lib paths. So the nodes from the new cluster will probably
simply refuse to start with the configs from the old ZooKeeper.
It would only work if there were a way to run them against their own ZooKeeper
and then manually add them as replicas.
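If the new nodes can be brought up against their own ZooKeeper first, attaching them later would go through the Collections API's ADDREPLICA action. A sketch of the request it takes (host, collection, shard, and node names below are made up):

```python
from urllib.parse import urlencode

def addreplica_url(solr_host: str, collection: str, shard: str, node: str) -> str:
    """Collections API call asking `node` to host a new replica of `shard`.

    `node` uses Solr's node-name form, e.g. "10.0.0.5:8983_solr".
    """
    params = {
        "action": "ADDREPLICA",
        "collection": collection,
        "shard": shard,
        "node": node,
    }
    return f"http://{solr_host}/solr/admin/collections?{urlencode(params)}"

# Hypothetical values:
url = addreplica_url("old-cluster-host:8983", "mycoll", "shard1", "10.0.0.5:8983_solr")
```

The catch, as noted above, is that ADDREPLICA only works for nodes registered in the *same* ZooKeeper ensemble as the collection, so the two-ZooKeeper setup would still have to be reconciled first.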