Jan,

Thank you for your answer.
I've opened a JIRA issue with your suggestion.
https://issues.apache.org/jira/browse/SOLR-4793

Son

-----Original Message-----
From: Jan Høydahl [mailto:jan....@cominvent.com] 
Sent: Tuesday, May 07, 2013 4:16 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud with large synonyms.txt

Hi,

SolrCloud is designed with the assumption that you should be able to upload your 
whole disk-based conf folder into ZK, and that you should be able to add an 
empty Solr node to a cluster and have it download all config from ZK. So one way 
forward could be a splitting strategy for large files, handled automatically by 
ZkSolrResourceLoader, i.e. store synonyms.txt as e.g. 
__001_synonyms.txt, __002_synonyms.txt, ...
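To make the splitting idea concrete, here is a minimal sketch (plain Python, not the actual ZkSolrResourceLoader behavior — just an illustration of the proposed naming scheme) of cutting a line-based synonyms.txt into numbered parts that each stay under ZooKeeper's default 1MB znode limit:

```python
import os

LIMIT = 1024 * 1024  # ZooKeeper's default znode size limit (jute.maxbuffer)

def split_synonyms(path, limit=LIMIT):
    """Split a line-based synonyms file into __NNN_<name> parts under `limit` bytes."""
    parts, chunk, size = [], [], 0
    with open(path, "rb") as f:
        for line in f:
            # Flush the current chunk before it would exceed the limit,
            # so parts always break on line boundaries.
            if size + len(line) > limit and chunk:
                parts.append(b"".join(chunk))
                chunk, size = [], 0
            chunk.append(line)
            size += len(line)
    if chunk:
        parts.append(b"".join(chunk))

    names = []
    base = os.path.basename(path)
    for i, data in enumerate(parts, 1):
        name = "__%03d_%s" % (i, base)  # e.g. __001_synonyms.txt
        with open(os.path.join(os.path.dirname(path) or ".", name), "wb") as out:
            out.write(data)
        names.append(name)
    return names
```

A loader on the other side would then list the config directory, sort the `__NNN_` parts, and concatenate them back into one logical file before parsing.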

Feel free to open a JIRA issue for this so we can get a proper resolution.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

7. mai 2013 kl. 09:55 skrev Roman Chyla <roman.ch...@gmail.com>:

> We have synonym files bigger than 5MB, so even with compression that
> would probably be failing (not using Solr Cloud yet).
>
> Roman
>
> On 6 May 2013 23:09, "David Parks" <davidpark...@yahoo.com> wrote:
> 
>> Wouldn't it make more sense to only store a pointer to a synonyms 
>> file in zookeeper? Maybe just make the synonyms file accessible via 
>> http so other boxes can copy it if needed? Zookeeper was never meant 
>> for storing significant amounts of data.
>> 
>> 
>> -----Original Message-----
>> From: Jan Høydahl [mailto:jan....@cominvent.com]
>> Sent: Tuesday, May 07, 2013 4:35 AM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr Cloud with large synonyms.txt
>> 
>> See the discussion here:
>> http://lucene.472066.n3.nabble.com/gt-1MB-file-to-Zookeeper-td3958614.html
>> 
>> One idea was compression. Perhaps if we add gzip support to
>> SynonymFilter, it could read synonyms.txt.gz, which would let larger
>> raw dictionaries fit under the limit?
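To make the compression idea concrete: a hedged sketch (plain Python, not Solr code — SynonymFilter itself does not read .gz files as of this thread) of writing synonyms.txt.gz and streaming it back line by line, which is the behavior a gzip-aware loader would need:

```python
import gzip

def write_gzipped(path_gz, lines):
    """Write synonym lines to a gzip-compressed file."""
    with gzip.open(path_gz, "wt", encoding="utf-8") as f:
        for line in lines:
            f.write(line + "\n")

def read_gzipped(path_gz):
    """Stream synonym lines back out of the .gz file, as a loader would."""
    with gzip.open(path_gz, "rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]
```

Synonym dictionaries tend to be highly repetitive, so they compress well; a 1.7MB raw file may well drop under the 1MB limit, though (as Roman notes above) a 5MB+ file may still not fit even compressed.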
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> 6. mai 2013 kl. 18:32 skrev Son Nguyen <s...@trancorp.com>:
>> 
>>> Hello,
>>> 
>>> I'm building a Solr Cloud (version 4.1.0) with 2 shards and a
>>> Zookeeper (the Zookeeper is on a different machine, version 3.4.5).
>>> I've tried to start with a 1.7MB synonyms.txt, but got a
>>> "ConnectionLossException":
>>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /configs/solr1/synonyms.txt
>>>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>       at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>       at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266)
>>>       at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:270)
>>>       at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:267)
>>>       at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:65)
>>>       at org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:267)
>>>       at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:436)
>>>       at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:315)
>>>       at org.apache.solr.cloud.ZkController.uploadToZK(ZkController.java:1135)
>>>       at org.apache.solr.cloud.ZkController.uploadConfigDir(ZkController.java:955)
>>>       at org.apache.solr.core.CoreContainer.initZooKeeper(CoreContainer.java:285)
>>>       ... 43 more
>>> 
>>> I did some research on the internet and found out that it is because the
>>> Zookeeper znode size limit is 1MB. I tried to increase the system property
>>> "jute.maxbuffer", but it didn't work.
>>> Does anyone have experience dealing with this?
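For what it's worth, jute.maxbuffer generally has to be raised to the same value on the ZooKeeper server *and* on every client JVM (each Solr node here); setting it on only one side is a common reason the override seems to have no effect. A sketch of where the flag would go (the paths and the 4MB value are illustrative assumptions, not a recommendation):

```shell
# On the ZooKeeper server (e.g. via conf/java.env or the launch environment):
export JVMFLAGS="-Djute.maxbuffer=4194304"   # 4MB, illustrative value

# On each Solr node (the ZK client side), pass the same value to the JVM:
java -Djute.maxbuffer=4194304 -jar start.jar
```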
>>> 
>>> Thanks,
>>> Son
>> 
>> 
