SplitShardCmd assumes that its main phase (when the Lucene index is being
split) always executes on the local file system of the shard leader, and indeed
SplitShardCmd.checkDiskSpace() checks the local file system's free disk
space - even though in reality in your case the actual data is written to HDFS.
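To illustrate the mismatch, here is a minimal sketch (assumptions only, not the actual SplitShardCmd code; the 2x-index-size threshold, the class name, and the error message are illustrative) of a pre-split disk-space check that consults the local file system. When the index actually lives in HDFS, the usable space reported by the local FileStore says nothing about HDFS capacity, so a check like this can fail even though HDFS has ample room.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

/**
 * Illustrative sketch only, NOT the real Solr implementation:
 * a pre-split check that compares the index size against the
 * free space of the LOCAL file system hosting the index directory.
 */
public class LocalDiskSpaceCheckSketch {

  /** Rough on-disk size of the index directory, in bytes. */
  static long indexSizeBytes(Path indexDir) throws IOException {
    try (Stream<Path> files = Files.walk(indexDir)) {
      return files.filter(Files::isRegularFile)
                  .mapToLong(p -> p.toFile().length())
                  .sum();
    }
  }

  /**
   * Fails unless the local file system has roughly 2x the index size free
   * (assumed threshold). For an HDFS-backed index, getUsableSpace() here
   * reflects the node's local disk, not HDFS, which is exactly the problem.
   */
  static void checkDiskSpace(Path indexDir) throws IOException {
    long indexSize = indexSizeBytes(indexDir);
    FileStore store = Files.getFileStore(indexDir); // local FS only
    long usable = store.getUsableSpace();
    if (usable < 2 * indexSize) {
      throw new IOException("Not enough free disk space for index split:"
          + " required=" + (2 * indexSize) + " available=" + usable);
    }
  }
}
```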
Hi All - added a couple more solr nodes to an existing solr cloud
cluster where the index is in HDFS. When I try to split a shard, I
get an error saying there is not enough disk space. It looks like it is
looking on the local file system, and not in HDFS.
"Operation splitshard casued
exce