Hi again,

A follow-up on this: I ended up fixing it by uploading a new version of 
clusterstate.json to ZooKeeper with the missing hash ranges filled in (they 
were easy to deduce, since the shards were sorted by shard name).
I still don't know what the correct way would be to handle index corruption 
(where all replicas of a shard need to be repaired) while keeping the cloud 
available for search.
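For reference, here is a rough sketch of how such even hash ranges can be 
recomputed. This assumes Solr's compositeId router splits the full signed 
32-bit hash space evenly across the shards; the exact boundary rounding may 
differ from what Solr itself generates, so treat the values as illustrative:

```python
def shard_ranges(num_shards):
    """Compute evenly split hash ranges over the signed 32-bit space,
    rendered as the unsigned hex strings used in clusterstate.json."""
    start = -(1 << 31)          # 0x80000000 interpreted as a signed int
    total = 1 << 32             # size of the 32-bit hash space
    step = total // num_shards
    ranges = []
    for i in range(num_shards):
        lo = start + i * step
        if i < num_shards - 1:
            hi = start + (i + 1) * step - 1
        else:
            # last shard absorbs any remainder from the integer division
            hi = (1 << 31) - 1
        ranges.append(f"{lo & 0xffffffff:08x}-{hi & 0xffffffff:08x}")
    return ranges

# e.g. for the 12-shard setup described below
for name, r in zip((f"shard{i}" for i in range(1, 13)), shard_ranges(12)):
    print(name, r)
```

For 12 shards the first range comes out as 80000000-95555554 and the last one 
ends at 7fffffff, so the whole hash space is covered with no gaps.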

Thanks,

Rikke

On Aug 22, 2013, at 21:27 , Rikke Willer <r...@dtic.dtu.dk> wrote:


Hi,

I have a Solr cloud set up with 12 shards with 2 replicas each, divided on 6 
servers (each server hosting 4 cores). Solr version is 4.3.1.
Due to memory errors on one machine, 3 of its 4 indexes became corrupted. I 
unloaded the cores, repaired the indexes with the Lucene CheckIndex tool, and 
added the cores again.
Afterwards, the hash range for the shards with the corrupted indexes has been 
set to null in the cloud state.
Could anybody point me to why this has occurred, and more importantly, how to 
set the range on those shards again?
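For illustration, this is roughly what the shard entries in clusterstate.json 
look like (collection and shard names here are placeholders, and the range 
strings assume an even 12-way split, so the actual values may differ); after 
the repair, the "range" fields for the affected shards read null instead:

```json
{
  "collection1": {
    "shards": {
      "shard1": {
        "range": "80000000-95555554",
        "replicas": {}
      },
      "shard2": {
        "range": "95555555-aaaaaaa9",
        "replicas": {}
      }
    }
  }
}
```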
Thank you.

Best,

Rikke
