Thank you, this is clear.
Regards,
Roopa

Sent from my iPhone

> On Mar 13, 2018, at 6:35 PM, Markus Jelsma <markus.jel...@openindex.io> wrote:
> 
> Hi - configure it for all servers that connect to ZK and need jute.maxbuffer 
> to be high, and ZK itself of course.
> 
> So if your Solr cluster needs a large buffer, your Solr environment 
> variables need to match those of ZK. If you simultaneously use ZK for a Hadoop 
> cluster but don't need that buffer size there, you can omit it in Hadoop's settings.
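> 
> A rough sketch of where the flag usually goes (the variable names below - 
> SOLR_OPTS, SERVER_JVMFLAGS, CLIENT_JVMFLAGS - are the usual ones, but file 
> locations vary per install and ZK version, so treat this as an example rather 
> than the exact recipe). The value is in bytes, here 16 MB:
> 
>   # Solr side, e.g. in bin/solr.in.sh
>   SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=16777216"
> 
>   # ZooKeeper server side, e.g. in conf/java.env or conf/zookeeper-env.sh
>   SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=16777216"
> 
>   # Any other ZK client that reads/writes the large znodes, e.g. zkCli.sh
>   CLIENT_JVMFLAGS="$CLIENT_JVMFLAGS -Djute.maxbuffer=16777216"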
> 
> Markus
> 
> 
> 
> -----Original message-----
>> From:Roopa ML <roop...@gmail.com>
>> Sent: Tuesday 13th March 2018 23:18
>> To: solr-user@lucene.apache.org
>> Subject: Re: How to store files larger than zNode limit
>> 
>> The documentation says: "If this option is changed, the system property must
>> be set on all servers and clients otherwise problems will arise."
>> 
>> Other than the ZooKeeper Java system property, where else should this be 
>> set?
>> 
>> Thank you
>> Roopa
>> 
>> Sent from my iPhone
>> 
>>> On Mar 13, 2018, at 5:56 PM, Markus Jelsma <markus.jel...@openindex.io> 
>>> wrote:
>>> 
>>> Hi - For now, the only option is to allow larger blobs via jute.maxbuffer 
>>> (whatever jute means). Despite ZK being designed for KB-sized blobs, Solr 
>>> forces us to abuse it. I think there was a ticket for compression support, 
>>> but that only stretches the limit.
>>> 
>>> We are running ZK with jute.maxbuffer set to 16 MB. It holds the large 
>>> dictionaries and runs fine. 
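>>> 
>>> (In bytes that is 16 * 1024 * 1024 = 16777216, so the flag reads
>>> -Djute.maxbuffer=16777216, set on both the ZK servers and the Solr JVMs.)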
>>> 
>>> Regards,
>>> Markus
>>> 
>>> -----Original message-----
>>>> From:Atita Arora <atitaar...@gmail.com>
>>>> Sent: Tuesday 13th March 2018 22:38
>>>> To: solr-user@lucene.apache.org
>>>> Subject: How to store files larger than zNode limit
>>>> 
>>>> Hi ,
>>>> 
>>>> I have a use case supporting multiple clients and multiple languages in a
>>>> single application.
>>>> So, in order to improve the language support, we want to leverage Solr
>>>> dictionary (userdict.txt) files as large as 10 MB.
>>>> I understand that ZooKeeper's default zNode size limit is 1 MB.
>>>> I'm not sure if someone has tried increasing it before and how that
>>>> fares in terms of performance.
>>>> Looking at https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html,
>>>> it states:
>>>> Unsafe Options
>>>> 
>>>> The following options can be useful, but be careful when you use them. The
>>>> risk of each is explained along with the explanation of what the variable
>>>> does.
>>>> jute.maxbuffer:
>>>> 
>>>> (Java system property: jute.maxbuffer)
>>>> 
>>>> This option can only be set as a Java system property. There is no
>>>> zookeeper prefix on it. It specifies the maximum size of the data that can
>>>> be stored in a znode. The default is 0xfffff, or just under 1M. If this
>>>> option is changed, the system property must be set on all servers and
>>>> clients otherwise problems will arise. This is really a sanity check.
>>>> ZooKeeper is designed to store data on the order of kilobytes in size.
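>>>> 
>>>> (For scale: 0xfffff is 1,048,575 bytes, i.e. just under 1 MiB, so a 10 MB
>>>> dictionary would be roughly ten times the default limit.)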
>>>> I would appreciate any suggestions on best practices for handling large
>>>> config/dictionary files in ZK.
>>>> 
>>>> Thanks ,
>>>> Atita
>>>> 
>> 
