>in theory you could cap the performance interference using VM's and
>cgroup controls, but i'm not sure how effective that actually is (no
>data) in HPC.

I looked quite heavily at performance capping for RDMA applications in
cgroups about a year ago.
It is very doable; however, you need a recent 4.x-series kernel. Sadly we were
using 3.x-series kernels on RHEL.
Parav Pandit is the go-to guy for this:
https://www.openfabrics.org/images/eventpresos/2016presentations/115rdmacont.pdf
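
For anyone who wants to experiment: what landed upstream (around 4.11, if I
remember right) is an 'rdma' cgroup controller that caps the number of HCA
handles and HCA objects (QPs, CQs, MRs and friends) a group of processes may
create, rather than bandwidth as such. Below is a rough sketch of wiring one
job into it; the mount point, cgroup name and device name (mlx5_0) are
placeholders for whatever your system actually uses.

#!/usr/bin/env python3
# Sketch only: cap RDMA resource usage for one job via the kernel rdma
# cgroup controller (4.x kernels). The mount point, cgroup name and HCA
# name below are assumptions for illustration, not a recipe.
import os
import sys

CGROUP_ROOT = "/sys/fs/cgroup/rdma"                   # assumed v1-style mount
JOB_CGROUP = os.path.join(CGROUP_ROOT, "hpc_job_42")  # hypothetical job cgroup

def cap_rdma(pid: int, device: str = "mlx5_0",
             hca_handles: int = 2, hca_objects: int = 2000) -> None:
    """Create a cgroup, set per-device RDMA limits, and attach a PID."""
    os.makedirs(JOB_CGROUP, exist_ok=True)

    # rdma.max takes one line per device:
    #   "<dev> hca_handle=<n> hca_object=<n>"
    with open(os.path.join(JOB_CGROUP, "rdma.max"), "w") as f:
        f.write(f"{device} hca_handle={hca_handles} hca_object={hca_objects}\n")

    # Attach the process (cgroup v1 uses the "tasks" file).
    with open(os.path.join(JOB_CGROUP, "tasks"), "w") as f:
        f.write(str(pid))

    # Current usage can be read back from rdma.current for monitoring.
    with open(os.path.join(JOB_CGROUP, "rdma.current")) as f:
        print(f.read().strip())

if __name__ == "__main__":
    cap_rdma(int(sys.argv[1]))

In practice you would let the batch system or container runtime do the cgroup
plumbing rather than a one-off script like this, but it shows which knobs the
controller actually exposes.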

On Thu, 26 Jul 2018 at 15:27, John Hearns <hear...@googlemail.com> wrote:

> For VM substitute 'container' - since containerisation is intimately
> linked with cgroups anyway.
> Google 'CEPH Docker' and there is plenty of information.
>
> Someone I work with tried out CEPH on Docker the other day, and got into
> some knots regarding access to the actual hardware devices.
> He then downloaded Minio and got it working very rapidly. Sorry - I am
> only repeating this story second-hand.
>
> On Thu, 26 Jul 2018 at 15:20, Michael Di Domenico <mdidomeni...@gmail.com>
> wrote:
>
>> On Thu, Jul 26, 2018 at 3:14 AM, Jörg Saßmannshausen
>> <sassy-w...@sassy.formativ.net> wrote:
>> > I once had this idea as well: using the spinning discs which I have in
>> the
>> > compute nodes as part of a distributed scratch space. I was using
>> glusterfs
>> > for that as I thought it might be a good idea. It was not.
>>
>> I split the thread so as not to pollute the other discussion.
>>
>> I'm curious whether anyone has any hard data on the above, but with the
>> compute encapsulated away from the storage using VMs instead of just
>> separate processes?
>>
>> in theory you could cap the performance interference using VM's and
>> cgroup controls, but i'm not sure how effective that actually is (no
>> data) in HPC.
>>
>> I've been thinking about this recently to rebalance some of the rack
>> loading throughout my data center. Yes, I can move things around
>> within the racks, but then it turns into a cabling nightmare.
>>
>> discuss?
>>
>
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf