Hello,

I have a SolrCloud cluster in a test environment running Solr 6.1, where I am
looking at using the Collections API BACKUP and RESTORE commands to manage
data integrity.

When restoring from a backup, I see the same behavior every time: after the
restore, all shards end up hosted on a single node. What's especially
surprising is that there are 6 live nodes beforehand, the collection has
maxShardsPerNode set to 1, and this happens even when I pass
maxShardsPerNode=1 explicitly to the RESTORE call. Is there somewhere else I
need to configure this, or a step I'm missing? If I'm misunderstanding the
intent of these parameters, could you clarify and let me know how to get
different shards restored onto different nodes?

Full repro below.

Thanks!


*Repro*

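*Collection Creation (for context)*

The collection was created before the backup with roughly this call; I'm
reconstructing it from the cluster state shown further below, so the exact
original parameters may have differed slightly:

http://54.85.30.39:8983/solr/admin/collections?action=CREATE
&name=panopto
&numShards=2
&replicationFactor=1
&maxShardsPerNode=1
&collection.configName=panopto
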
*Cluster state before*

http://54.85.30.39:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json

{
  "responseHeader" : {"status" : 0,"QTime" : 4},
  "cluster" : {
    "collections" : {},
    "live_nodes" : [
      "172.18.7.153:8983_solr",
       "172.18.2.20:8983_solr",
       "172.18.10.88:8983_solr",
       "172.18.6.224:8983_solr",
       "172.18.8.255:8983_solr",
       "172.18.2.21:8983_solr"]
  }
}

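*Backup Command (formatted for ease of reading)*

For completeness, the backup being restored was taken with a call of this
shape (reconstructed from my notes, so treat it as approximate; the location
and name match the restore parameters below):

http://54.85.30.39:8983/solr/admin/collections?action=BACKUP

&collection=panopto

&location=/mnt/beta_solr_backups
&name=2016-09-02
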

*Restore Command (formatted for ease of reading)*

http://54.85.30.39:8983/solr/admin/collections?action=RESTORE

&collection=panopto
&async=backup-4

&location=/mnt/beta_solr_backups
&name=2016-09-02

&maxShardsPerNode=1

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">16</int>
</lst>
<str name="requestid">backup-4</str>
</response>
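
Since the restore is submitted async, its progress can be checked with
REQUESTSTATUS, e.g.:

http://54.85.30.39:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=backup-4&wt=json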


*Cluster state after*

http://54.85.30.39:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json

{
  "responseHeader" : {"status" : 0,"QTime" : 8},
  "cluster" : {
    "collections" : {
      "panopto" : {
        "replicationFactor" : "1",
        "shards" : {
          "shard2" : {
            "range" : "0-7fffffff",
            "state" : "construction",
            "replicas" : {
              "core_node1" : {
                "core" : "panopto_shard2_replica0",
                "base_url" : "http://172.18.2.21:8983/solr";,
                "node_name" : "172.18.2.21:8983_solr",
                "state" : "active",
                "leader" : "true"
              }
            }
          },
          "shard1" : {
            "range" : "80000000-ffffffff",
            "state" : "construction",
            "replicas" : {
              "core_node2" : {
                "core" : "panopto_shard1_replica0",
                "base_url" : "http://172.18.2.21:8983/solr";,
                "node_name" : "172.18.2.21:8983_solr",
                "state" : "active",
                "leader" : "true"
              }
            }
          }
        },
        "router" : {
          "name" : "compositeId"
        },
        "maxShardsPerNode" : "1",
        "autoAddReplicas" : "false",
        "znodeVersion" : 44,
        "configName" : "panopto"
      }
    },
    "live_nodes" : ["172.18.7.153:8983_solr", "172.18.2.20:8983_solr",
"172.18.10.88:8983_solr", "172.18.6.224:8983_solr", "172.18.8.255:8983_solr",
"172.18.2.21:8983_solr"]
  }
}

-- 
Stephen

(206)753-9320
stephen-lewis.net
