Re: [PATCH v3 5/6] libgomp, nvptx: Cuda pinned memory

2023-12-12 Thread Tobias Burnus
On 11.12.23 18:04, Andrew Stubbs wrote: Use Cuda to pin memory, instead of Linux mlock, when available. There are two advantages: firstly, this gives a significant speed boost for NVPTX offloading, and secondly, it side-steps the usual OS ulimit/rlimit setting. The design adds a device independent plugin API for allocating pinned memory […]

[PATCH v3 5/6] libgomp, nvptx: Cuda pinned memory

2023-12-11 Thread Andrew Stubbs
Use Cuda to pin memory, instead of Linux mlock, when available. There are two advantages: firstly, this gives a significant speed boost for NVPTX offloading, and secondly, it side-steps the usual OS ulimit/rlimit setting. The design adds a device independent plugin API for allocating pinned memory […]
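The CUDA-side pinning the patch describes would be built on the driver API's host-allocation entry points (`cuMemHostAlloc`, or `cuMemHostRegister` for pre-existing buffers). The following is a minimal illustrative sketch of that mechanism, not the patch's actual plugin code; it needs a CUDA driver and GPU to run, so it is compile-only here:

```c
/* Sketch: pin host memory via the CUDA driver API rather than mlock.
   Memory page-locked through the driver can be DMA'd directly by the
   GPU (the speed benefit for NVPTX offloading) and is not accounted
   against the process's RLIMIT_MEMLOCK the way mlock'd pages are.
   Illustrative only -- not libgomp's plugin implementation.  */
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

static void *
pinned_alloc (size_t size)
{
  void *ptr = NULL;
  /* CU_MEMHOSTALLOC_PORTABLE makes the pinned buffer usable from any
     CUDA context, which is what a device-independent allocator wants.  */
  if (cuMemHostAlloc (&ptr, size, CU_MEMHOSTALLOC_PORTABLE) != CUDA_SUCCESS)
    return NULL;  /* Caller can fall back to mlock or plain malloc.  */
  return ptr;
}

int
main (void)
{
  CUdevice dev;
  CUcontext ctx;

  if (cuInit (0) != CUDA_SUCCESS
      || cuDeviceGet (&dev, 0) != CUDA_SUCCESS
      || cuCtxCreate (&ctx, 0, dev) != CUDA_SUCCESS)
    return 1;

  void *buf = pinned_alloc (1 << 20);  /* 1 MiB of pinned host memory.  */
  printf ("pinned buffer: %p\n", buf);

  if (buf)
    cuMemFreeHost (buf);
  cuCtxDestroy (ctx);
  return 0;
}
```

A plugin API along these lines lets libgomp ask whichever offload plugin is loaded to do the pinning, falling back to mlock when no device-specific mechanism is available.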