Hi Will,

Thanks for your response. These SolrCloud instances are 8-core machines, each with 24 GB of RAM assigned to Tomcat. The indexer machine starts with -Xmx16g. All of these machines are connected to the same switch.
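With heaps this size, multi-second full-GC pauses are a common cause of ZooKeeper session timeouts. One way to check is to enable GC logging on the Solr and indexer JVMs and look for pauses approaching the session timeout. A sketch using the standard HotSpot flags for the Java 7/8 JVMs of that era (the log path is illustrative):

```
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-Xloggc:/var/log/solr/gc.log
```

If the GC log shows stop-the-world pauses of several seconds around the time of the disconnects, the heap and collector settings are the place to look.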
The batch size is 5,000 documents, and there are 8 threads, each of which adds a batch of 5,000 documents to SolrCloud. I have tried a bigger batch size, but that caused an OutOfMemoryError. I can see that the SolrCloud instances are not running out of memory or running low on memory, and CPU utilization is around 50% on each core. The indexer, on the other hand, uses most of its assigned heap (-Xmx16g) but does not run out of memory.

Thanks,
Modassar

On Tue, Oct 28, 2014 at 12:18 PM, Will Martin <wmartin...@gmail.com> wrote:

> Modassar:
>
> Can you share your hw setup? And what size are your batches? Can you make
> them smaller? It doesn't mean your throughput will necessarily suffer.
>
> Re
> Will
>
> -----Original Message-----
> From: Modassar Ather [mailto:modather1...@gmail.com]
> Sent: Tuesday, October 28, 2014 2:12 AM
> To: solr-user@lucene.apache.org
> Subject: Log message "zkClient has disconnected".
>
> Hi,
>
> I am getting the following INFO log messages many times during indexing.
> The indexing process reads records from a database and, using multiple
> threads, sends them for indexing in batches.
> There are four shards and one embedded ZooKeeper on one of the shards.
>
> org.apache.zookeeper.ClientCnxn$SendThread run
> INFO: Client session timed out, have not heard from server in 9276ms for
> sessionid <id>, closing socket connection and attempting reconnect
> org.apache.solr.common.cloud.ConnectionManager process
> INFO: Watcher org.apache.solr.common.cloud.ConnectionManager@3debc153
> name:ZooKeeperConnection Watcher:<host>:<port> got event WatchedEvent
> state:Disconnected type:None path:null path:null type:None
> org.apache.solr.common.cloud.ConnectionManager process
> INFO: zkClient has disconnected
>
> Kindly help me understand the possible cause of the ZooKeeper state
> disconnection.
>
> Thanks,
> Modassar
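For what it's worth, a session timeout like the 9276ms one in the log above usually means the Solr node could not heartbeat ZooKeeper in time, most often because of GC pauses or an overloaded node during heavy indexing. A common mitigation is to raise zkClientTimeout. A sketch, assuming a Solr 4.x-style solr.xml (the 30000 ms value is illustrative, not a recommendation):

```xml
<solr>
  <solrcloud>
    <!-- ZooKeeper session timeout in milliseconds; the default is
         often too tight for nodes under heavy indexing load -->
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```

The same value can typically be supplied as a system property (-DzkClientTimeout=30000) without editing the file. Raising the timeout only masks long pauses, so it is worth confirming the underlying GC or load issue as well.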