We are running Solr 6.6.
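
For reference, here is a minimal sketch of the kind of RESTORE call in question (Python against the Collections API; the host, backup name, location, and the replicationFactor/maxShardsPerNode hints are placeholders and assumptions on my part, not our exact command):

    # Sketch of a Collections API RESTORE request against Solr 6.6.
    # Backup name, target collection, location, and host are placeholders;
    # maxShardsPerNode/replicationFactor are optional hints that may or may
    # not affect where the restored leaders land (see SOLR-9527).
    import requests

    params = {
        "action": "RESTORE",
        "name": "my_backup",            # placeholder backup name
        "collection": "my_collection",  # placeholder target collection
        "location": "/mnt/backups",     # placeholder shared backup location
        "replicationFactor": 3,
        "maxShardsPerNode": 1,          # assumption: ask Solr to spread shards
        "wt": "json",
    }

    resp = requests.get("http://10.xxx.xxx.75:10001/solr/admin/collections",
                        params=params)
    resp.raise_for_status()
    print(resp.json())

If the restore honored those hints, I would expect the shard leaders to spread across the 12 instances instead of all landing on 10.xxx.xxx.75:10002 as shown below.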
> On May 25, 2019, at 12:45 PM, Mikhail Khludnev <m...@apache.org> wrote:
>
> Hello, Chuck.
>
> Which version do you run?
> Could you be hitting
> https://issues.apache.org/jira/browse/SOLR-9527 ?
>
> On Fri, May 24, 2019 at 7:52 PM Chuck Reynolds <creyno...@ancestry.com>
> wrote:
>
>> I have 4 instances of Solr running on 3 servers with a replication factor
>> of 3. They are using ports 10001-10004.
>>
>> Server 1  10.xxx.xxx.75
>> Server 2  10.xxx.xxx.220
>> Server 3  10.xxx.xxx.245
>>
>> When I execute the command to restore to a new cluster, it creates each
>> master with the same IP address and port, but all subsequent replicas are
>> created correctly. Why does it create the master using the same server
>> and port?
>>
>> This means that one of the 4 Solr instances on server 10.xxx.xxx.75 is
>> managing 4 shards of data while some of the other instances are not
>> managing any shards of data.
>>
>> I know that running more than one instance of Solr on the same server is
>> not standard, but I can set up this same cluster with 4 instances of Solr
>> running on a single server and the create collection command is smart
>> enough to figure it out.
>>
>> *Shard1*
>> 10.xxx.xxx.75:10002  *master*
>> 10.xxx.xxx.220:10001 repl
>> 10.xxx.xxx.220:10003 repl
>>
>> *Shard2*
>> 10.xxx.xxx.75:10002  *master*
>> 10.xxx.xxx.245:10003 repl
>> 10.xxx.xxx.75:10004  repl
>>
>> *Shard3*
>> 10.xxx.xxx.75:10002  *master*
>> 10.xxx.xxx.245:10004 repl
>> 10.xxx.xxx.75:10003  repl
>>
>> *Shard4*
>> 10.xxx.xxx.75:10002  *master*
>> 10.xxx.xxx.245:10001 repl
>> 10.xxx.xxx.220:10002 repl
>>
>> Any help would be appreciated.
>
> --
> Sincerely yours
> Mikhail Khludnev