Sorry if this is inappropriate here. I'm finally moving from clusters
of single-CPU nodes to machines with multiple CPUs per node, which
means I need to start paying attention to NUMA issues. I'm looking for
information on how to handle that with MPI under Linux. I'm currently
using MPICH2, but I don't mind switching if needed.

Things are actually more complex, as this is a mixed CPU/GPU (CUDA)
system, so I'm also looking for how to effectively transfer data
between GPUs sitting in different PCIe slots, and how to find the
affinity between GPUs and CPUs. Also, what is the current state of
support for using MPI to copy directly between GPUs?

Thanks for any pointers


_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf