Given the way Solr uses ZK, unless you are also using ZK for something else, I 
wouldn't worry about it at all. In a steady state, the cluster doesn't really 
talk to ZK in any intensive manner.

- Mark

On May 16, 2013, at 5:07 PM, Furkan KAMACI <furkankam...@gmail.com> wrote:

> Hi Shawn;
> 
> I will have a total of 18 Solr nodes in my current pre-prototype environment,
> all in one collection, and I don't have large config files. I know that the
> best (and only recommended) practice for estimating the heap size my system
> needs is to run load tests, and I will.
> 
> I asked this question because of an example at Zookeeper wiki:
> 
> "You should take special care to set your Java max heap size correctly. In
> particular, you should not create a situation in which ZooKeeper swaps to
> disk. The disk is death to ZooKeeper. Everything is ordered, so if
> processing one request swaps the disk, all other queued requests will
> probably do the same. DON'T SWAP.
> 
> Be conservative in your estimates: if you have 4G of RAM, do not set the
> Java max heap size to 6G or even 4G. For example, it is more likely you
> would use a 3G heap for a 4G machine, as the operating system and the cache
> also need memory."
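> 
> (For my own understanding, a rough sketch of that advice, assuming the stock
> zkEnv.sh, which sources conf/java.env and passes JVMFLAGS to the JVM: the
> 3G-heap-on-a-4G-machine example would be roughly
> 
>     export JVMFLAGS="-Xms3g -Xmx3g"
> 
> in conf/java.env, leaving the remaining ~1G for the OS and its cache.)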
> 
> This may be more of a ZooKeeper question than a Solr one, but one more thing:
> is there any recommendation against running ZooKeeper on a virtual machine
> because of performance issues?
> 
> 
> 2013/5/16 Shawn Heisey <s...@elyograg.org>
> 
>> On 5/16/2013 2:34 PM, Furkan KAMACI wrote:
>> 
>>> You have some tips about JVM parameters for starting a Solr node. What do
>>> you do specially for Solr when you start a ZooKeeper ensemble, e.g. heap size?
>>> 
>> 
>> I haven't given it any JVM options.  The ZK process on my primary server
>> has a 5GB virtual memory size and is using 131MB of system memory.  If
>> you're not going to be creating a large number of collections or replicas
>> and you're not using super-large config files, you could probably limit the
>> max heap to a pretty small number and be OK.
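>> 
>> As a rough sketch of what I mean (paths are placeholders; zkServer.sh normally
>> writes zookeeper_server.pid into the configured dataDir, and zkEnv.sh sources
>> conf/java.env if it exists):
>> 
>>   # resident/virtual size of the running ZK JVM, in KB
>>   ps -o rss=,vsz= -p "$(cat /var/lib/zookeeper/zookeeper_server.pid)"
>> 
>>   # cap the heap well below the RAM you can spare, e.g. in conf/java.env:
>>   export JVMFLAGS="-Xms512m -Xmx512m"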
>> 
>> Thanks,
>> Shawn
>> 
>> 
