“no registered leader” is usually the effect of some underlying problem, not the root cause. For instance, you could be running out of file handles and seeing other errors like “too many open files”. That’s just one example.
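As a quick sanity check, something like the following can show the current per-process limit and the back-of-the-envelope arithmetic (a sketch assuming a Linux host; the process lookup is illustrative, not from this thread):

```shell
# Current per-process open-file limit for this shell:
ulimit -n

# Rough handle estimate: replicas * segments per replica * files per segment
# (the 1000/50/10 figures are the example numbers, not measurements)
replicas=1000
segments_per_replica=50
files_per_segment=10
echo $((replicas * segments_per_replica * files_per_segment))   # 500000

# Files actually held open by a running Solr process (PID lookup is a guess
# at your setup -- adjust the pattern to match how you start Solr):
# lsof -p "$(pgrep -f solr | head -n1)" | wc -l
```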
One common problem is that Solr needs a lot of file handles and the system defaults are too low. We usually recommend you start with 65K file handles (ulimit) and bump the number of processes up to 65K too.

To throw some numbers out: with 1,000 replicas, let’s say you have 50 segments in the index in each replica. Each segment consists of multiple files (I’m skipping “compound files” here as an advanced topic), so let’s say each segment has 10 files. 1,000 * 50 * 10 would require 500,000 file handles on your system.

Bottom line: look for other, lower-level errors in the log to try to understand what limit you’re running into.

All that said, there are a number of “gotchas” when running that many replicas on a particular node, so I second Jörn’s question...

Best,
Erick

> On Aug 30, 2019, at 3:18 AM, Jörn Franke <jornfra...@gmail.com> wrote:
> 
> What is the reason for this number of replicas? Solr should work fine, but
> maybe it is worth consolidating some collections to also avoid the
> administrative overhead.
> 
>> On 29.08.2019 at 05:27, Hongxu Ma <inte...@outlook.com> wrote:
>> 
>> Hi
>> I have a solr-cloud cluster, but it's unstable when the collection count is
>> big: 1000 replicas/cores per solr node.
>> 
>> To solve this issue, I have read the performance guide:
>> https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems
>> 
>> I noted there is a sentence in the solr-cloud section:
>> "Recent Solr versions perform well with thousands of replicas."
>> 
>> I want to know: does it mean a single solr node can handle thousands of
>> replicas? Or that a solr cluster can (and if so, what's the size of the
>> cluster)?
>> 
>> My solr versions are 7.3.1 and 6.6.2 (they look the same in performance).
>> 
>> Thanks for your help.
>> 