I won't be able to achieve the correct mapping, as I did not store the
mapping info anywhere. I don't know whether core_node1 was mapped to
shard1_replica1 or shard2_replica1 in my old collection. But I am not
worried about that, as I am not going to update any existing documents.
This is what I did.
You've got it. You should be quite safe if you
1> create the same number of shards as you used to have
2> match the shard bits, i.e. collection1_shard1_replica1; as long as
the collection1_shard# parts match, you should be fine. If this isn't
done correctly, the symptom will be that when you update a
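The naming rule above can be sketched in shell. This is a minimal illustration, assuming the default core naming scheme (collection1_shardN_replicaM) and a hypothetical old core name; only the shard number has to carry over to the new core, the replica suffix can differ:

```shell
# Given an old core directory name, pull out the part that must match
# in the new collection: the "shardN" segment. The replica suffix can differ.
old_core="collection1_shard7_replica1"   # hypothetical old core name

without_coll="${old_core#collection1_}"  # strip collection prefix -> shard7_replica1
shard_id="${without_coll%%_*}"           # keep up to first underscore -> shard7

# The old index must land on a core belonging to the same shard, e.g.:
new_core="collection1_${shard_id}_replica1"
echo "$shard_id"   # shard7
echo "$new_core"   # collection1_shard7_replica1
```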
Thanks Erick.
I had replicationFactor=1 in my old collection and am going to have the same
config for the new collection.
When I create a new collection with numShards=20 and maxShardsPerNode=1,
the shards are going to start on 20 of the 25 hosts in my Solr
cluster. When you say "get
That should work. The caveat here is that you need to get each
shard's index to the corresponding shard of your new collection.
Of course I'd back up _all_ of these indexes before even starting.
And one other trick. First create your collection with 1 replica per
shard (leader-only). Then copy
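The copy step might look something like this on HDFS. The paths are hypothetical and depend on how solr.hdfs.home is configured; the target core must belong to the matching shard, and the index directory should be swapped in while Solr is stopped (or the core unloaded):

```shell
# Hypothetical paths: adjust to your solr.hdfs.home layout.
OLD=/solr/old_collection/core_node1/data/index   # backed-up index for (say) shard1
NEW=/solr/collection1/core_node1/data/index      # new, empty shard1 core

# Back everything up first, as suggested above.
hdfs dfs -cp "$OLD" /backups/old_collection/core_node1_index

# Replace the new core's empty index with the old one.
hdfs dfs -rm -r "$NEW"
hdfs dfs -cp "$OLD" "$NEW"
```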
I have SolrCloud deployed on top of HDFS.
I accidentally deleted a collection using the Collections API, so the ZooKeeper
cluster has lost all the info related to that collection. I don't have a
backup that I can restore from. However, I have indices and transaction
logs on HDFS.
If I create a new
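Before recreating anything, it is worth confirming that the index and transaction-log directories really did survive the collection delete. A quick check, with hypothetical paths under solr.hdfs.home:

```shell
# Hypothetical layout; adjust to your deployment.
hdfs dfs -ls /solr/old_collection                        # one directory per core
hdfs dfs -du -h /solr/old_collection/core_node1/data/index
hdfs dfs -ls /solr/old_collection/core_node1/data/tlog
```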