On Nov 2, 2011, at 7:47 AM, Phil Hoy wrote:

> Hi,
> 
> I am running SolrCloud, and one of the files in the -Dbootstrap_confdir 
> directory is a large synonym file (~50 MB) used by a SynonymFilterFactory 
> configured in schema.xml. When I start Solr I get a ZooKeeper exception, 
> presumably because the file size is too large. 
> 
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /configs/recordsets_conf/firstnames.csv
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
>       at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1038)
> 
> Is there a way to either increase the limit in ZooKeeper, or perhaps configure 
> the SynonymFilterFactory to read the file from somewhere external to the 
> -Dbootstrap_confdir directory?
> 
> Phil  


As a workaround you can try:

(Java system property: jute.maxbuffer)


    This option can only be set as a Java system property. There is no
    zookeeper prefix on it. It specifies the maximum size of the data
    that can be stored in a znode. The default is 0xfffff, or just under
    1M. If this option is changed, the system property must be set on
    all servers and clients; otherwise, problems will arise. This is
    really a sanity check. ZooKeeper is designed to store data on the
    order of kilobytes in size.
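Concretely, for a ~50 MB file you would raise jute.maxbuffer on both the ZooKeeper servers and the Solr (client) JVMs. A rough sketch follows; the start scripts, the JVMFLAGS variable, and the exact paths are assumptions about a typical setup, not exact instructions:

```shell
# On each ZooKeeper server: pass the property via the JVM flags your
# distribution's start script reads (e.g. conf/java.env / JVMFLAGS).
export JVMFLAGS="-Djute.maxbuffer=62914560"   # ~60 MB, above the 50 MB file
bin/zkServer.sh restart

# On each Solr instance (the ZooKeeper client side), add the same property
# alongside the usual bootstrap flags:
java -Djute.maxbuffer=62914560 -Dbootstrap_confdir=./solr/conf -jar start.jar
```

As the quoted documentation warns, the value must be at least as large on every server and every client, or writes that succeed on one side will fail on the other.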

Eventually there are other ways to solve this that we may offer:

- Optional compression of files
- Store a file across multiple zk nodes transparently when the size is too large
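The second idea amounts to splitting the file's bytes into pieces that each fit under jute.maxbuffer and writing one piece per znode. This is not an existing Solr/ZooKeeper feature; the class name, the chunk layout, and the znode naming hinted at in the comments are purely hypothetical, but the core splitting step might look like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: split a large config file into znode-sized chunks.
// A real implementation would write each chunk to a numbered child znode
// (e.g. /configs/myconf/firstnames.csv/part-00000) and concatenate the
// children back together on read.
public class ZnodeChunker {
    // Default jute.maxbuffer: 0xfffff bytes, just under 1 MB.
    static final int DEFAULT_MAX = 0xfffff;

    static List<byte[]> chunk(byte[] data, int maxChunk) {
        List<byte[]> parts = new ArrayList<>();
        for (int off = 0; off < data.length; off += maxChunk) {
            // Copy [off, off + maxChunk), clamped to the end of the data.
            parts.add(Arrays.copyOfRange(data, off,
                    Math.min(off + maxChunk, data.length)));
        }
        return parts;
    }
}
```

A 50 MB file would split into 51 chunks at the default limit, each storable in its own znode without touching jute.maxbuffer at all; the cost is the extra round trips and the bookkeeping to reassemble the parts.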

- Mark Miller
lucidimagination.com
