Kilian CAVALOTTI wrote:
> We've also encountered some oddities, like CUDA code freezing a machine running X.org (and using the proprietary NVIDIA driver),
I'm glad you mentioned this. I've read through much of the information on their web site, and I still don't understand the usage model for CUDA. By that I mean: on a desktop machine, are you supposed to have two graphics cards, one for running CUDA code and one for regular graphics? If one card suffices for both, how do you avoid the freezing problem you mentioned, which is also mentioned in the documentation?

Or, if you have a compute node that will sit in a dark room, you won't be running an X server at all, so is there nothing to worry about hanging?

How does this behavior change, if at all, when running Windows?

I'm planning to start a pilot program to get the chemists in my department using CUDA, but I'm waiting for V2 of the SDK to come out.

Cordially,

--
Jon Forrest
Research Computing Support
College of Chemistry
173 Tan Hall
University of California Berkeley
Berkeley, CA 94720-1460
510-643-1032
[EMAIL PROTECTED]

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
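[Editorial note appended for context: one way to tell whether a given GPU is subject to the display watchdog (the usual cause of the freeze described above is a long-running kernel tripping the driver's watchdog timer on a GPU that is also driving X) is the CUDA runtime's `kernelExecTimeoutEnabled` device property. The sketch below is a host-side query only; it assumes the CUDA toolkit is installed and makes no claims about any particular driver version.]

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable devices found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // kernelExecTimeoutEnabled is nonzero when the driver enforces a
        // run-time limit on kernels -- typically because this GPU is also
        // driving a display, so long kernels risk hanging the desktop.
        printf("Device %d (%s): kernel timeout %s\n",
               dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "ENABLED (drives a display)"
                                             : "disabled (safe for long kernels)");
    }
    return 0;
}
```

On a two-card setup, the card with the timeout disabled is the one to select for long-running compute kernels (e.g. via `cudaSetDevice`).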