If you have a properly secured cluster (e.g. with Kerberos), then you should not update files in ZK directly. Use the corresponding Solr REST interfaces; that way you are also less likely to mess something up.
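For example, the update.autoCreateFields change you mention can be made through the Config API instead of hand-editing the solrconfig.xml stored in ZK. A minimal sketch, assuming Java 11+; the host, port and collection name "mycollection" are placeholders for your setup:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DisableAutoCreateFields {
        public static void main(String[] args) throws Exception {
            // Config API request that disables schemaless field guessing;
            // Solr updates the stored config itself, no direct ZK writes needed.
            String body = "{\"set-user-property\": {\"update.autoCreateFields\": \"false\"}}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8983/solr/mycollection/config"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

If I remember correctly, "bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false" does the same from the command line.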
If you want to have HA, you should have at least 3 Solr nodes and replicate the collection to all three of them (more is not needed from an HA point of view). This would also allow you to upgrade the cluster without downtime. A sketch of creating such a collection follows below the quoted message.

> On 03.09.2019 at 15:22, Porritt, Ian <ian.porr...@unisys.com> wrote:
>
> Hi,
>
> I am relatively new to Solr, especially SolrCloud, and have been using it for a few days now. I think I have set up SolrCloud correctly, but would like some guidance to ensure I am doing it right. Ideally I want to be able to process 40 million documents in production via SolrCloud. The number of fields is undefined, as the documents may differ, but it could be around 20+.
>
> The current setup I have at present is as follows (note this is all on one machine for now): a 3-node ZooKeeper ensemble (all running on different ports), which works as expected.
>
> 3 Solr nodes started on separate ports (note: directory path → D:\solr-7.7.1\example\cloud\Node (1/2/3)).
>
> <image001.jpg>
>
> The setup of Solr would be similar to the above except it is on my local machine; the below is the Graph status in SolrCloud.
>
> <image002.jpg>
>
> I have a few questions which I cannot seem to find the answer to on the web.
>
> We have a schema which I have managed to upload to ZooKeeper along with the solrconfig. How do I get the system to recognise both a lib/.jar extension and a custom core.properties file? I bypassed the core.properties issue by amending update.autoCreateFields in the solrconfig.xml to false, but would like to include it as a colleague has done on Solr standalone.
>
> Also, from a high-availability aspect, if I effectively lost 2 of the Solr servers due to an outage, would the system still work as expected? Would I expect any data loss?
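On the HA point above: a minimal SolrJ sketch of creating a 1-shard collection with one replica on each of the three nodes, so any single surviving node still holds a full copy of the data. This assumes SolrJ 7.x (matching your 7.7.1) on the classpath; the ZK ports, collection name and configset name are placeholders:

    import java.util.Arrays;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateHaCollection {
        public static void main(String[] args) throws Exception {
            // Connect through the ZooKeeper ensemble rather than a single
            // Solr node, so the client keeps working if a node goes down.
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Arrays.asList("localhost:2181", "localhost:2182", "localhost:2183"),
                    Optional.empty()).build()) {
                // 1 shard, replicationFactor 3: one replica per Solr node,
                // so a complete copy of the index remains on each node.
                CollectionAdminRequest.createCollection("mycollection", "myconfig", 1, 3)
                        .process(client);
            }
        }
    }

With one replica per node as above, losing two of the three Solr nodes still leaves a full copy of the collection on the remaining node.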