mcvsubbu commented on issue #7704:
URL: https://github.com/apache/pinot/issues/7704#issuecomment-961504359


   In the version of Helix that Pinot currently uses, data larger than 1 MB is 
automatically compressed by Helix before it is written to ZooKeeper. I think your 
data is exceeding this limit even after compression (Helix 0.9.x provides only a 
single limit; we have requested separate limits, which they will provide in 1.x).
   
   
https://github.com/apache/helix/blob/master/zookeeper-api/src/main/java/org/apache/helix/zookeeper/datamodel/ZNRecord.java#L63:27
   
   The best way forward is to remove some segments, or to configure a larger 
segment size so that fewer segments are created.
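   To illustrate the behavior described above, here is a minimal sketch (not 
Helix's actual code; the constant name and threshold are assumptions for 
illustration, and the linked `ZNRecord.java` is the authoritative source): the 
serializer writes small records as-is, GZIP-compresses records over the limit, 
and the write still fails if even the compressed bytes exceed the limit.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the size check Helix performs before writing a
// serialized ZNRecord to ZooKeeper. Names and the 1 MB threshold are
// illustrative assumptions, not Helix's real API.
public class ZnRecordSizeCheck {

    // Assumed single limit (0.9.x exposes only one), 1 MB for illustration.
    static final int SIZE_LIMIT_BYTES = 1024 * 1024;

    // GZIP-compress a byte array in memory.
    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // Returns true if the record fits under the limit, compressing it
    // first when the uncompressed form is too large.
    static boolean fitsAfterCompression(byte[] serialized) throws IOException {
        if (serialized.length <= SIZE_LIMIT_BYTES) {
            return true; // small enough: written uncompressed
        }
        byte[] compressed = gzip(serialized);
        return compressed.length <= SIZE_LIMIT_BYTES; // still too big -> write fails
    }

    public static void main(String[] args) throws IOException {
        byte[] small = new byte[512 * 1024];             // 0.5 MB: fits as-is
        byte[] compressible = new byte[8 * 1024 * 1024]; // 8 MB of zeros: compresses well
        byte[] random = new byte[8 * 1024 * 1024];       // 8 MB random: barely compresses
        new java.util.Random(42).nextBytes(random);

        System.out.println(fitsAfterCompression(small));        // true
        System.out.println(fitsAfterCompression(compressible)); // true
        System.out.println(fitsAfterCompression(random));       // false
    }
}
```

   This is why reducing the number of segments (or making each segment larger) 
helps: the metadata that must fit in one ZNRecord shrinks below the limit.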


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


