bq: Before restarting, I delete all the write.lock files from the data dir. But
every time I restart I get the same exception.

First, this shouldn't be necessary. Are you by any chance killing the
Solr instances with the equivalent of "kill -9"? Allow them to shut
down gracefully. That said, until recently the bin/solr script would
kill them forcefully after 5 seconds, which is too short an interval.
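
FWIW, a cleaner way to bounce a node looks roughly like the below.
This is only a sketch assuming a stock bin/solr install; the port is
an example, and SOLR_STOP_WAIT is only present in recent versions of
solr.in.sh, so check whether your version has it:

    # stop the node gracefully so Lucene can release its write locks
    bin/solr stop -p 8983

    # in solr.in.sh, if your version supports it: give the JVM more time
    # to exit cleanly before the script falls back to a forceful kill
    SOLR_STOP_WAIT=180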

But the error really is telling you that somehow two or more Solr
cores are pointing at the same data directory. Whichever one gets
there first will block any later cores with the message you see. My
first guess would be to check your core.properties files and your
HDFS configuration to see how two cores end up sharing a directory.
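
If it helps, here's a quick way to look for that from the shell. It
assumes your cores set dataDir explicitly in core.properties and that
your index root in HDFS is /solr -- both paths are just examples,
substitute your own:

    # any dataDir value printed here appears in more than one
    # core.properties, i.e. two cores point at the same index directory
    grep -h '^dataDir=' /var/solr/data/*/core.properties | sort | uniq -d

    # see which index directories currently hold a lock in HDFS
    hdfs dfs -ls -R /solr | grep write.lock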

Best,
Erick

On Wed, Nov 16, 2016 at 1:38 PM, Chetas Joshi <chetas.jo...@gmail.com> wrote:
> Hi,
>
> I have a SolrCloud (on HDFS) of 52 nodes. I have 3 collections, each with 50
> shards, and maxShardsPerNode for every collection is 1.
>
> I am having a problem restarting a Solr shard for a collection.
>
> When I restart, there is always a particular shard of a particular
> collection that remains down. The 2 shards on the same host belonging to the
> other collections are up and running.
>
> Before restarting, I delete all the write.lock files from the data dir. But
> every time I restart I get the same exception.
>
> index dir yyy of core xxx is already locked. The most likely cause is
> another Solr server (or another solr core in this server) also configured
> to use this directory; other possible causes may be specific to lockType:
> hdfs
>
> Thanks!
