jdoerfert added a comment.

I have said this before, many times:

We don't want host and device libraries that are incompatible with each other.
Effectively, what we really want is for the host environment to just work on the
GPU; that includes extensions in the host headers, macros, taking the address of
things, etc.
This became clear when we made (c)math.h available on the GPU (for OpenMP).

For most of libc we might get away with custom GPU headers, but eventually they
will break user code that is expected to work, at the latest once we arrive at libc++.
A user can, right now, map a std::vector from the host to the device and,
assuming they did the deep copy properly, it will work.
If we end up with different sizes, alignments, or layouts, this will not only
break, it will also break any structure that depends on those sizes; e.g.,
mapping an object with a std::map inside will cause problems even if the map
is never accessed.
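
To make the layout hazard concrete, here is a minimal sketch (the struct and
field names are made up for illustration):

```
#include <vector>

struct Payload {               // hypothetical user type
  std::vector<double> samples; // typically three pointers on the host
  int id;                      // offset depends on sizeof(std::vector)
};

void bump(Payload &p) {
  // Bitwise map of the object; the user is responsible for deep-
  // copying the vector's heap storage separately.
  #pragma omp target map(tofrom: p)
  {
    // `samples` is never touched, yet if the device headers give
    // std::vector<double> a different size or alignment than the
    // host headers, `id` is read from the wrong offset here.
    p.id += 1;
  }
}
```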

In addition, systems are never "vanilla". We want to include the system headers
to get the extensions users might rely on. Providing only alternative headers
can even break working code (in the OpenMP case), e.g., when we auto-translate
definitions in a header to the device (not a CUDA thing, I think).
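
To illustrate the auto-translation point (the header content and function name
are hypothetical): with OpenMP offloading, Clang implicitly compiles functions
for the device when they are called from a target region, including functions
defined in system headers.

```
// Imagine this inline definition lives in a system header, perhaps
// as a non-standard extension (hypothetical example):
static inline double scale2(double x) { return 2.0 * x; }

void f(double *x) {
  #pragma omp target map(tofrom: x[0:1])
  {
    // scale2() is implicitly emitted for the device because it is
    // called here. A replacement header that drops the definition
    // turns this previously working code into a build error.
    x[0] = scale2(x[0]);
  }
}
```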

I strongly suggest including our GPU headers first; in them we set up the
overlays for the system headers, and then we include the system versions.
This already works for (c)math.h, complex, and other parts of libc and libc++,
even though we don't ship them as libraries.
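
For reference, a rough sketch of the overlay shape, simplified from what the
existing openmp_wrappers in clang/lib/Headers do (the exact declarations and
match clauses differ; C++ shown for brevity):

```
// math.h -- GPU overlay header, found first on the include path.
#ifndef __GPU_MATH_H_OVERLAY__
#define __GPU_MATH_H_OVERLAY__

#if defined(_OPENMP)
// Provide device variants before the system header is parsed, so
// calls inside target regions resolve to GPU implementations.
extern "C" double __nv_sin(double); // CUDA libdevice entry point
#pragma omp begin declare variant match(device = {arch(nvptx64)})
static inline double sin(double __x) { return __nv_sin(__x); }
#pragma omp end declare variant
#endif

// Then include the real system header: users still see every macro,
// declaration, and extension their host environment provides.
#include_next <math.h>

#endif
```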


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146973/new/

https://reviews.llvm.org/D146973
