Does anyone know of easy-to-run code that would swallow the CPU/memory
on the chassis and also load up a Tesla card?  A lot of the tools I've
used in the past that have been ported to GPUs don't seem to use much
of the GPU memory, or don't keep the GPU busy constantly.  I'm running
NAMD at the moment, which does seem to make pretty good use of the GPU
processor, but it doesn't seem to use much, if any, of the memory.
CUDA-Linpack coughs up an error at runtime; hopefully I'll get that
going, but I'm curious whether there's anything else I don't know
about.
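
In case nothing off the shelf turns up, here's a minimal sketch of the
kind of thing I have in mind (the file name burn_gpu.cu and the 90%
allocation fraction are just placeholders I made up): it asks
cudaMemGetInfo how much device memory is free, cudaMallocs most of it,
and then spins a grid-stride kernel over the whole buffer forever, so
both the memory footprint and the SM load stay pegged until you kill
it.

  // burn_gpu.cu -- a sketch, not a benchmark; compile with:
  //   nvcc burn_gpu.cu -o burn_gpu
  #include <cstdio>
  #include <cuda_runtime.h>

  // Grid-stride loop so a fixed-size grid covers any buffer size,
  // even on older Tesla parts with small grid-dimension limits.
  __global__ void burn(float *buf, size_t n) {
      for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
           i < n;
           i += (size_t)gridDim.x * blockDim.x) {
          float v = buf[i];
          for (int k = 0; k < 10000; ++k)   // busywork to keep the SMs loaded
              v = v * 0.9999f + 1.0f;       // converges, so no inf/nan drift
          buf[i] = v;
      }
  }

  int main() {
      size_t free_b = 0, total_b = 0;
      cudaMemGetInfo(&free_b, &total_b);
      // Grab ~90% of whatever device memory is currently free.
      size_t n = (free_b / 10 * 9) / sizeof(float);
      float *buf = 0;
      if (cudaMalloc(&buf, n * sizeof(float)) != cudaSuccess) {
          fprintf(stderr, "cudaMalloc of %zu bytes failed\n",
                  n * sizeof(float));
          return 1;
      }
      cudaMemset(buf, 0, n * sizeof(float));
      while (1) {                           // run until killed (Ctrl-C)
          burn<<<1024, 256>>>(buf, n);
          cudaDeviceSynchronize();
      }
      return 0;
  }

For the host side of the chassis you can just run stress (CPU and
memory hogs) or memtester alongside it; the GPU loop above leaves the
host cores mostly idle apart from the synchronize spin.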