Hi,

About the number of ZooKeeper nodes in an ensemble, you can find good information on this page. It applies to Solr as well:
https://www.cloudkarafka.com/blog/2018-07-04-cloudkarafka-how-many-zookeepers-in-a-cluster.html
- 1 Node: no fault tolerance, no maintenance possibilities
- 3 Nodes: an ensemble with 3 nodes will tolerate one failure without loss of service, which is probably fine for most users, and is also the most popular setup
- 5 Nodes (recommended for real fault tolerance): a five-node ensemble allows you to take one server out for maintenance or upgrade and still absorb a second, unexpected failure without interrupting your service
- 7 Nodes: the same as for a 5-node ensemble, but with the ability to bear the failure of three nodes

The ZooKeeper ensemble size does not depend on the number of Solr nodes. ZooKeeper activity is not related to update or query volume. It is related to:
- Solr node stop / start, and the resulting recovery
- Solr collection and alias creation / destruction ...
- Solr configset management

If your Solr cluster is stable in terms of live Solr nodes and collections, the ZooKeeper ensemble won't be stressed, even with huge volumes of updates and/or queries.

About the upgrade method, I am not sure a hot operation is possible. In order to minimise downtime:

Step 1 - Set up the 2 new ZooKeeper servers with the 3-server declaration:
server.1=xxx:2881:3881
server.2=yyy:2882:3882
server.3=zzz:2883:3883

Step 2 - Add the 2 new server declarations to the running ZooKeeper's configuration:
server.2=yyy:2882:3882
server.3=zzz:2883:3883

Step 3 - Restart the running ZooKeeper and start the 2 new ZooKeepers
Step 4 - Wait for data synchronization
Step 5 - Stop, move and start the first ZooKeeper

Warning: do not use IP addresses, use server names in your ZooKeeper configuration. A sketch of the target configuration is included at the end of this message, after the quoted question.

Regards

Dominique

On Fri, Aug 14, 2020 at 05:48, yaswanth kumar <yaswanth...@gmail.com> wrote:

> Hi Team
>
> Can someone let me know if we can do an upgrade to zookeeper ensemble
> from standalone ??
>
> I have 3 solr nodes with one zookeeper running on one of the node .. and
> it’s a solr cloud .. so now can I install zookeeper on another node just to
> make sure it’s not a single point of failure when the solr node that got
> zookeeper is down??
>
> Also want to understand what’s the best formula of choosing no of
> zookeeper that’s needed for solr cloud like for how many solr nodes .. how
> many zookeeper do we need to maintain for best fault tolerance
>
> Sent from my iPhone
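
Configuration sketch for the final 3-node ensemble - a minimal example only, assuming a statically configured ZooKeeper (3.4.x style), the example host names xxx, yyy and zzz from the steps above, a data directory of /var/lib/zookeeper, and Solr started with the bin/solr scripts; adjust host names, ports and paths to your environment.

zoo.cfg (identical on the three servers):

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181
  server.1=xxx:2881:3881
  server.2=yyy:2882:3882
  server.3=zzz:2883:3883

Each server also needs a myid file in its dataDir containing its own id:

  echo 1 > /var/lib/zookeeper/myid    (on xxx)
  echo 2 > /var/lib/zookeeper/myid    (on yyy)
  echo 3 > /var/lib/zookeeper/myid    (on zzz)

Once the ensemble is healthy, point every Solr node at all three servers, for example in solr.in.sh:

  ZK_HOST="xxx:2181,yyy:2181,zzz:2181"

and restart the Solr nodes one at a time so the cluster stays available.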