<grin>. I've had recurring discussions with "executive level folks" that no
matter how many VMs you host on a machine, and no matter how big that
machine is, there really, truly, *is* some hardware underlying it all that
really, truly, *does* have some limits.

And adding more VMs doesn't somehow get around those limits......

Good Luck!
Erick

On Mon, Feb 6, 2012 at 10:55 AM, Per Steffensen <st...@designware.dk> wrote:
> Sami Siren wrote:
>
>> On Mon, Feb 6, 2012 at 2:53 PM, Per Steffensen <st...@designware.dk>
>> wrote:
>>
>>
>>
>>>
>>> Actually, right now I am trying to find out what my bottleneck is. The
>>> setup is more complex than I would bother you with, but basically I have
>>> servers with 80-90% IO-wait and only 5-10% "real CPU usage". It might not
>>> be a Solr-related problem; I am investigating different things, but I
>>> just wanted to know a little more about how Jetty/Solr works in order to
>>> make a qualified guess.
>>>
>>
>>
>> What kind of (and how many) disks do you have for your shards? Also, what
>> kind of server are you experimenting with?
>>
>
> Grrr, that's where I have a little fight with "operations". For now they
> gave me one (fairly big) machine with XenServer, and I create my "machines"
> as Xen VMs on top of that. One of the things I don't like about this
> (besides the fact that I don't trust Xen to do its virtualization right, or
> at least not to give me correct readings on IO) is that disk space is
> assigned from an iSCSI-connected SAN that they all share (including the
> line out to it). But for now it actually doesn't look like a disk IO
> problem. It looks like network bottlenecks (and to some extent they all
> also share the network) among all the components in our setup - our client
> plus the Lily stack (HDFS, HBase, ZK, Lily Server, Solr etc.). Well, it is
> complex, but anyway ...
>>
>> --
>>  Sami Siren
>>
>>
>
>
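A minimal way to sanity-check, from inside one of those Xen guests, whether
the time is going to the disks or to the shared network is to sample iowait,
disk throughput and network throughput over a short interval. This is only a
rough sketch, assuming a Linux guest with /proc available; the xvd*/sd*/vd*
device-name filter is just a guess for typical virtual block-device names.

# Sample iowait, disk and network throughput from /proc on a Linux guest.
# Field positions follow the standard procfs layout; the device-name filter
# (xvd*/sd*/vd*, no trailing digit) is a rough guess for Xen-style guests.
import time

def read_cpu():
    # /proc/stat, first line: cpu user nice system idle iowait irq softirq ...
    with open("/proc/stat") as f:
        vals = list(map(int, f.readline().split()[1:]))
    return sum(vals), vals[4]              # total jiffies, iowait jiffies

def read_net_bytes():
    # /proc/net/dev: per-interface counters; rx_bytes is field 0, tx_bytes field 8
    rx = tx = 0
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:
            iface, data = line.split(":", 1)
            if iface.strip() == "lo":
                continue
            fields = data.split()
            rx += int(fields[0])
            tx += int(fields[8])
    return rx, tx

def read_disk_sectors():
    # /proc/diskstats: fields 6 and 10 (0-based 5 and 9) are sectors read/written
    rd = wr = 0
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            name = fields[2]
            if name.startswith(("xvd", "sd", "vd")) and not name[-1].isdigit():
                rd += int(fields[5])
                wr += int(fields[9])
    return rd, wr

interval = 5
t0, io0 = read_cpu(); rx0, tx0 = read_net_bytes(); rd0, wr0 = read_disk_sectors()
time.sleep(interval)
t1, io1 = read_cpu(); rx1, tx1 = read_net_bytes(); rd1, wr1 = read_disk_sectors()

print("iowait: %.1f%%" % (100.0 * (io1 - io0) / max(t1 - t0, 1)))
print("net  MB/s: rx %.2f  tx %.2f" % ((rx1 - rx0) / 1e6 / interval,
                                        (tx1 - tx0) / 1e6 / interval))
print("disk MB/s: read %.2f  write %.2f" % ((rd1 - rd0) * 512 / 1e6 / interval,
                                            (wr1 - wr0) * 512 / 1e6 / interval))

If iowait is high while both the guest's disk and network throughput stay low,
the contention is more likely further down (the shared iSCSI link or the
hypervisor) than inside the guest itself.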
