Hi,

I have a SolrCloud cluster (on HDFS) of 52 nodes, with 3 collections of 50
shards each; maxShardsPerNode is 1 for every collection.

I am having a problem restarting a Solr shard for one collection.

When I restart, one particular shard of one particular collection always
stays down. The two shards on the same host belonging to the other
collections come up and run fine.

Before restarting, I delete all the write.lock files from the data
directory, but every restart fails with the same exception:
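For reference, this is roughly the cleanup I run. Since lockType is hdfs,
the lock files live in the index directories on HDFS, not on the local
disk; the paths below are placeholders for my layout, not something
standard:

```shell
# Sketch only: remove leftover write.lock files from the HDFS index dirs
# before restarting. /solr and the core_node* layout are assumptions about
# my own data dir structure, not Solr defaults.
hdfs dfs -rm -skipTrash '/solr/*/core_node*/data/index/write.lock'
```

I do this with all Solr nodes stopped, so no live core should be holding
the lock when I delete it.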

index dir yyy of core xxx is already locked. The most likely cause is
another Solr server (or another solr core in this server) also configured
to use this directory; other possible causes may be specific to lockType:
hdfs

Thanks!
