On Mon, Sep 17, 2018 at 04:00:26PM +0200, Guilherme Amadio wrote:
> Hi everyone,
> 
> We have several packages (~35) with a local USE=cuda. Should we make that
> a global USE flag? It's quite a generic flag for GPU support, so I was
> surprised to learn it was still local when I added support for it to a
> package recently.

Sounds good to me; I was also surprised when I added the cuda
USE flag to TensorFlow.

> Another thing we might want to discuss is a global setting for the CUDA
> architecture to be used¹. We compile from source, so it would be a shame
> not to compile with the right features for newer GPUs². Maybe just
> advertising the NVCCFLAGS one should put in make.conf is enough?

Is this the CUDA Compute Capability (i.e., https://developer.nvidia.com/cuda-gpus)?
TensorFlow's configure currently autodetects it based on what the card
supports and builds for that version (or versions). Or you can set
TF_CUDA_COMPUTE_CAPABILITIES="6.1" in make.conf. If you would like to
standardize this I'll gladly change to use that.
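For reference, a make.conf sketch of what that could look like (the values below are examples for a Pascal-class card, not recommendations; whether a given package honors NVCCFLAGS depends on its build system):

```shell
# /etc/portage/make.conf -- sketch, values are examples only

# Pin TensorFlow to a specific CUDA compute capability instead of letting
# configure autodetect it from the installed card (6.1 = Pascal, e.g. GTX 1080):
TF_CUDA_COMPUTE_CAPABILITIES="6.1"

# Equivalent idea for packages that pass NVCCFLAGS through to nvcc:
# emit SASS for sm_61 plus PTX for forward compatibility with newer GPUs.
NVCCFLAGS="-gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61"
```

Multiple capabilities could be supported the same way, e.g. TF_CUDA_COMPUTE_CAPABILITIES="6.1,7.0", at the cost of longer builds and larger binaries.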

I'm also updating cuda.eclass for EAPI 7; I will send patches to the list
soon. I'm also adding a couple of other functions for things I use in TF.
If there are more things you want to add, they should all go in at once.

-- Jason

> Cheers,
> —Guilherme
> 
> 1. 
> https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#options-for-steering-gpu-code-generation
> 2. 
> https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list
> 
