Hi,

In our setup there are two SolrClouds:

Cloud A: the production cloud, which serves both writes and reads

Cloud B: the backup cloud, which serves only writes

Clouds A and B have the same shard configuration.

Write requests are sent to both Cloud A and Cloud B. In certain
circumstances, when Cloud A's updates lag behind, we want to bulk-copy the
binary index from B to A.
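
For context, the dual-write path looks roughly like the Python sketch below
(the host names, collection name, and example field are placeholders, not
our real configuration):

    import requests

    # Placeholder endpoints; adjust host, port, and collection name.
    CLOUD_A = "http://cloud-a-host:8983/solr/mycollection/update?commit=true"
    CLOUD_B = "http://cloud-b-host:8983/solr/mycollection/update?commit=true"

    def dual_write(docs):
        # Send the same batch of documents to both clouds.
        for url in (CLOUD_A, CLOUD_B):
            resp = requests.post(url, json=docs, timeout=30)
            resp.raise_for_status()

    dual_write([{"id": "1", "title_s": "example doc"}])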

We have tried two approaches:

Approach 1 (rough sketch after the list).
      For Cloud A:
      a. delete the collection to wipe out everything
      b. create a new collection (data is empty now)
      c. shut down the Solr server
      d. copy the binary index from Cloud B to the corresponding shard replicas in Cloud A
      e. start the Solr server
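
Concretely, Approach 1 looks roughly like this; the paths, collection name,
shard counts, and start/stop flags are placeholders rather than our exact
setup:

    import shutil
    import subprocess
    import requests

    SOLR_A = "http://cloud-a-host:8983/solr"   # placeholder Cloud A node
    COLLECTION = "mycollection"                # placeholder collection name
    SOLR_BIN = "/opt/solr/bin/solr"            # placeholder install path

    # a. delete the collection to wipe out everything
    requests.get(SOLR_A + "/admin/collections",
                 params={"action": "DELETE", "name": COLLECTION}).raise_for_status()

    # b. recreate it empty with the same shard layout as Cloud B
    requests.get(SOLR_A + "/admin/collections",
                 params={"action": "CREATE", "name": COLLECTION,
                         "numShards": 2, "replicationFactor": 1}).raise_for_status()

    # c. shut down the Solr server (add ZooKeeper flags as your install requires)
    subprocess.run([SOLR_BIN, "stop", "-p", "8983"], check=True)

    # d. copy the binary index from Cloud B into the matching replica core on A
    src = "/mnt/cloudB/mycollection_shard1_replica_n1/data/index"   # placeholder
    dst = "/var/solr/data/mycollection_shard1_replica_n1/data/index"
    shutil.rmtree(dst, ignore_errors=True)
    shutil.copytree(src, dst)

    # e. start the Solr server again in cloud mode
    subprocess.run([SOLR_BIN, "start", "-c", "-p", "8983"], check=True)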

Approach 2 (rough sketch after the list).
      For Cloud A:
      a. shut down the Solr server
      b. remove the whole data/ folder (which contains index/) in each replica
      c. copy the binary index from Cloud B to the corresponding shard replicas in Cloud A
      d. start the Solr server
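
In the same rough sketch form, Approach 2 would be (same placeholder paths
and flags as above):

    import shutil
    import subprocess

    SOLR_BIN = "/opt/solr/bin/solr"                              # placeholder install path
    REPLICA_A = "/var/solr/data/mycollection_shard1_replica_n1"  # placeholder core dir on A
    REPLICA_B = "/mnt/cloudB/mycollection_shard1_replica_n1"     # placeholder copy of B's core

    # a. shut down the Solr server on Cloud A
    subprocess.run([SOLR_BIN, "stop", "-p", "8983"], check=True)

    # b. remove the replica's whole data/ directory (index/, tlog/, etc.)
    shutil.rmtree(REPLICA_A + "/data", ignore_errors=True)

    # c. copy the data/ directory (binary index) from Cloud B's corresponding replica
    shutil.copytree(REPLICA_B + "/data", REPLICA_A + "/data")

    # d. start the Solr server again in cloud mode
    subprocess.run([SOLR_BIN, "start", "-c", "-p", "8983"], check=True)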

Is Approach 2 sufficient? I am wondering whether deleting and recreating the
collection each time is necessary to get the cloud into a "clean" state
before copying the binary index between SolrClouds.

Thanks for your advice!
