Hi,

I've been considering purchasing new NVIDIA RTX6000 or RTX8000 GPUs to add to 
our existing GPU partitions on our Slurm cluster.

The RTX6000 has 24GB of on-board memory and the RTX8000 has 48GB; both are 
single-precision cards. Besides the additional 24GB of memory, the RTX8000 
supports something NVIDIA calls virtual GPU (vGPU), which the RTX6000 does not. 
This allows one to carve up the RTX8000 so that it appears as multiple GPUs.
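
For context, my (possibly naive) assumption is that each vGPU instance would be 
presented to Slurm like any other GPU, so the GRES configuration on a node with 
a single RTX8000 split into two vGPU instances might look roughly like the 
sketch below (the node name, type string, and device paths are hypothetical):

    # gres.conf (sketch, assuming each vGPU instance appears as its own device file)
    NodeName=gpu-node01 Name=gpu Type=rtx8000_vgpu File=/dev/nvidia0
    NodeName=gpu-node01 Name=gpu Type=rtx8000_vgpu File=/dev/nvidia1

    # slurm.conf (sketch)
    GresTypes=gpu
    NodeName=gpu-node01 Gres=gpu:rtx8000_vgpu:2

I'm not sure whether vGPU instances actually show up as separate /dev/nvidia* 
device files on a bare-metal host, or only inside guest VMs, which is part of 
what I'm hoping to learn.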

Are there any Slurm installations that have tried or are currently using vGPU 
technology with these or other NVIDIA GPUs? Can you share your experiences 
with this technology?

Was it easy to configure?
Any issues getting it to work with Slurm?
How stable has the technology been for you?
Has it caused any of the GPUs to become unstable or to crash more often?
Is there any overhead to using vGPU?

I would appreciate any feedback on these questions, as well as your thoughts 
and concerns about using vGPU with Slurm.

Kind regards

--
Mick Timony
Senior DevOps Engineer
Harvard Medical School
--
