sivachandra added a comment.

In D146973#4225983 <https://reviews.llvm.org/D146973#4225983>, @jdoerfert wrote:

> For most of libc, we might get away with custom GPU headers, but eventually
> it will break "expected-to-work" user code, at the latest when we arrive at
> libc++.
> A user can, right now, map a std::vector from the host to the device, and, 
> assuming they properly did the deep copy, it will work.
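
(If I understand the scenario, it is something like the sketch below, assuming
OpenMP target offloading; the "deep copy" amounts to mapping the vector's
underlying buffer explicitly, since the std::vector object itself is not
bitwise-copyable to the device:)

  #include <cstddef>
  #include <vector>

  int main() {
    std::vector<int> v(1024, 1);
    int *data = v.data();      // raw buffer behind the vector
    std::size_t n = v.size();

    // Map the underlying storage explicitly and operate on the raw
    // pointer on the device; this is the "deep copy" part.
    #pragma omp target teams distribute parallel for map(tofrom: data[0:n])
    for (std::size_t i = 0; i < n; ++i)
      data[i] *= 2;

    return v[0] == 2 ? 0 : 1;
  }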

I do not have any strong opinions about how things ought to work. However, it
seems to me that the above assumes the type topology on the host matches the
one on the GPU. I am not sure whether that is an assumption, a restriction, or
a requirement; maybe host/device compatibility guarantees it. I know almost
nothing about GPUs. But in general, I would expect a host <=> device
communication channel to be a curated one: the communication model can
understand and serialize/deserialize only a small set of primitive types, and
compound types crafted from those primitives, as in the sketch below.
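
To illustrate what I mean by a curated channel, here is a rough sketch; the
Packet type and the serialize helper are hypothetical, just to show the shape
of the idea:

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <type_traits>
  #include <vector>

  // A compound type crafted only from primitive types. Because it is
  // trivially copyable, the channel can move it across the host/device
  // boundary as raw bytes without understanding host-only types.
  struct Packet {
    std::uint64_t id;
    double payload[4];
  };
  static_assert(std::is_trivially_copyable_v<Packet>);

  // Hypothetical host-side half of the channel: flatten a Packet into a
  // byte buffer; a device-side counterpart following the same layout
  // rules would deserialize it.
  std::vector<std::byte> serialize(const Packet &p) {
    std::vector<std::byte> bytes(sizeof(Packet));
    std::memcpy(bytes.data(), &p, sizeof p);
    return bytes;
  }

A std::vector does not fit such a model directly because its representation
contains host pointers, which is presumably why the deep copy above has to be
done by hand.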


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146973/new/

https://reviews.llvm.org/D146973
