Thanks for the quick response, Alex. There's no problem yet; it's a new
install with 100 Gb/s Mellanox InfiniBand, currently at defaults except that I
bumped the number of NFS daemons to 128. I'm just researching ways to optimize
performance for 36 nodes concurrently accessing the storage server. My
understanding is that setting the MTU to 9000 is recommended, but that advice
seems aimed at 10GbE Ethernet rather than IPoIB.
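For reference, here is a minimal sketch of the two settings mentioned above. The interface name "ib0" and the RHEL-style config paths are assumptions, and IPoIB in connected mode typically allows an MTU well beyond Ethernet's 9000:

```shell
# Raise the NFS server thread count to 128.
# Persistent (assumed /etc/nfs.conf layout):
#   [nfsd]
#   threads=128
# Or at runtime:
sudo rpc.nfsd 128

# Check the current thread count:
cat /proc/fs/nfsd/threads

# Set a larger MTU on the IPoIB interface ("ib0" is an assumption).
# In connected mode, IPoIB commonly supports MTUs up to 65520,
# so the 9000-byte jumbo-frame guidance is Ethernet-specific:
sudo ip link set ib0 mtu 65520
ip link show ib0
```

Whether the larger MTU helps depends on whether the interface is in datagram or connected mode; datagram mode caps the MTU at roughly 4 KB.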


Regards,

John McCulloch | PCPC Direct, Ltd.
________________________________
From: Alex Chekholko <a...@calicolabs.com>
Sent: Friday, June 12, 2020 3:53 PM
To: John McCulloch
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] NFS over IPoIB

I think you should start with all defaults and then describe the problem you're 
having with those settings.  IIRC last time I ran NFS over IPoIB I didn't tune 
anything and it was fine.

On Fri, Jun 12, 2020 at 12:10 PM John McCulloch 
<jo...@pcpcdirect.com> wrote:

Can anyone comment on experience with compute nodes mounting NFS v4.1 shares
over IPoIB, i.e., which tuning parameters are likely to be most effective?
We looked at NFS over RDMA, but that would require a kernel upgrade.


https://www.admin-magazine.com/HPC/Articles/Useful-NFS-Options-for-Tuning-and-Management
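As a concrete illustration of the kind of client-side tuning the linked article covers, here is a sketch of an NFS v4.1 mount over IPoIB. The hostname, export path, mount point, and option values are illustrative assumptions, not tested recommendations:

```shell
# Hypothetical NFS v4.1 mount over an IPoIB interface; "storage-ib",
# the export path, and the option values are assumptions.
# rsize/wsize set the largest read/write transfer size the client
# will request; hard + timeo control retry behavior on server stalls.
sudo mount -t nfs4 \
  -o vers=4.1,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600 \
  storage-ib:/export/scratch /mnt/scratch

# Verify the options the client actually negotiated with the server:
nfsstat -m
```

Note that some newer options such as nconnect require a relatively recent kernel (5.3+), so they may be off the table given the kernel-upgrade constraint mentioned above.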


Cheers,

John McCulloch | PCPC Direct, Ltd.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
