Hi. I have 8 collections in a 3-node SolrCloud cluster running Solr 6.3.
Given the following scenario:

1. The preferredleader REPLICAPROP is set on core_node2 for all collections.
2. zookeeper -> overseer_elect -> leader is core_node1.
3. The BACKUP command always writes to storage from core_node1. Why?

Notes:

1. All collections have exactly one shard.
2. preferredleader was set because several collections' shard leaders had drifted away from core_node1.
3. All collections have been through REBALANCELEADERS, so according to healthcheck the shard leaders are all on core_node2.

Non-canonical: I know BACKUP is supposed to have a shared filesystem mounted on every node, but experimentation shows that when there is only one shard, only one node writes to storage, and if that storage is a local filesystem there are no issues. I expected the writes to come from the shard leaders, but they are coming from the zookeeper->leader (Overseer) node. The workflow has been rock-solid as long as the shard leaders and the SolrCloud leader are consistent with each other.

Is my expectation wrong, i.e. that writes happen on the shard leader for single-shard collections? I need defined behavior so that I know where to pick up the backup files. This is all implemented in a script, and a deterministic understanding of what writes where will make it a success.

thanks --will
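P.S. In case it helps, here is a minimal Python sketch of the check my script runs before picking up the backup files. BASE_URL and the collection list are placeholders from my setup, and "backup lands on the Overseer leader" is only what I've observed in testing, not documented behavior; the OVERSEERSTATUS and CLUSTERSTATUS calls themselves are standard Collections API.

import requests

BASE_URL = "http://core_node1:8983/solr"   # any live node will do
COLLECTIONS = ["coll1", "coll2"]           # placeholder for my 8 collections

def overseer_leader():
    """Node currently holding the Overseer (zookeeper->overseer_elect->leader)."""
    r = requests.get(f"{BASE_URL}/admin/collections",
                     params={"action": "OVERSEERSTATUS", "wt": "json"})
    r.raise_for_status()
    return r.json()["leader"]

def shard_leader(collection):
    """node_name of the leader replica of the collection's single shard."""
    r = requests.get(f"{BASE_URL}/admin/collections",
                     params={"action": "CLUSTERSTATUS",
                             "collection": collection, "wt": "json"})
    r.raise_for_status()
    shards = r.json()["cluster"]["collections"][collection]["shards"]
    (shard,) = shards.values()             # exactly one shard by assumption
    for replica in shard["replicas"].values():
        if replica.get("leader") == "true":
            return replica["node_name"]
    raise RuntimeError(f"no leader found for {collection}")

if __name__ == "__main__":
    ovs = overseer_leader()
    print(f"Overseer leader: {ovs}")
    for coll in COLLECTIONS:
        lead = shard_leader(coll)
        # In my testing, a mismatch here is exactly when the backup files
        # turn up on a node other than the shard leader.
        mark = "OK" if lead == ovs else "MISMATCH (backup may land elsewhere)"
        print(f"{coll}: shard leader {lead} -> {mark}")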
