Wondering if anyone out there is doing both I/O to storage and MPI over the same IB fabric. Following along in the Mellanox User's Guide, I see a section on how to implement QoS for both MPI and Lustre storage. I am curious, though, what happens to MPI performance when a heavy I/O load is placed on the storage.
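For context, my rough reading of that section is that the separation is done with an opensm QoS policy plus an SL-to-VL mapping, along these lines. The SL numbers, service ID, and arbitration weights below are placeholders I sketched myself, not values from the guide or from a working config:

    # /etc/opensm/qos-policy.conf -- simplified qos-ulps section
    qos-ulps
        default                              : 0   # everything else (MPI) stays on SL 0
        any, service-id 0x0000000000000001   : 1   # match Lustre o2ib connections by service ID (placeholder ID)
    end-qos-ulps

    # /etc/opensm/opensm.conf -- enable QoS, map SL 1 to its own VL, weight the arbitration
    qos TRUE
    qos_max_vls 2
    qos_sl2vl 0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
    qos_high_limit 0
    qos_vlarb_high 0:64
    qos_vlarb_low 1:32

If anyone has this running in production, I'd be glad to hear whether that is roughly the right shape, and how well the arbitration actually protects MPI latency under load.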
In our current implementation, we are using blades that are 50% blocking (2:1 oversubscribed) when moving from a 16-blade chassis to other nodes. Would putting storage traffic on top of that dictate moving to a fully non-blocking fabric?

Thanks,
Bill