The answer to my original question appears to be "no", something that would
be really nice to change. In the meantime, here is my solution which allows
the VBO to be used with an ElementwiseKernel:

    # Make sure the GL context is set
    self._gl_context_fn()
    # Map the CUDA buffer so we can use it.
    # It's important to only do this while
    # using CUDA, so CUDA and OpenGL don't
    # interfere with each other.
    vbomapped = self._vbo_cubuf.map()
    try:
        # There doesn't seem to be a way to make a gpuarray out of
        # vbomapped, so instead we duck-type our way into satisfying
        # what the ElementwiseKernel function call expects
        vbo_devptr, vbo_size = vbomapped.device_ptr_and_size()
        class GPUArrayLikeADuck(object):
            class FlagsLikeADuck(object):
                c_contiguous = True
                f_contiguous = True
                forc = True
            flags = FlagsLikeADuck()
            class DeviceAllocationLikeADuck(object):
                def __init__(self, ptr):
                    self._ptr = ptr
                def __int__(self):
                    return self._ptr
                def __long__(self):
                    return long(self._ptr)
            gpudata = DeviceAllocationLikeADuck(vbo_devptr)
            # vbo_size bytes of interleaved (x, y) doubles,
            # i.e. 16 bytes per element
            shape = (vbo_size // 16,)
            dtype = np.dtype([('x', np.float64), ('y', np.float64)])
            mem_size = vbo_size
            nbytes = vbo_size
            strides = (16,)
            ptr = vbo_devptr
        vbo_gpu = GPUArrayLikeADuck()
        self.update_cuda_timestep_fn()
        self._cached_elementwise_fn(vbo_gpu, dt,
                range=slice(0, vbo_size // 16, 1))
    finally:
        vbomapped.unmap()
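As a sanity check on the layout assumptions baked into the duck-typed class (shape, strides, nbytes), the same structured dtype can be exercised host-side with plain NumPy, no GPU needed. This is just an illustrative sketch; the array here stands in for the mapped VBO:

```python
import numpy as np

# The same structured dtype used for the duck-typed array above
double2 = np.dtype([('x', np.float64), ('y', np.float64)])

# A toy host-side buffer standing in for the mapped VBO
n = 4
host = np.zeros(n, dtype=double2)

# These match the attributes faked on GPUArrayLikeADuck
print(host.shape)    # (4,)
print(host.strides)  # (16,)
print(host.nbytes)   # 64, i.e. n * 16
```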

I also had to add the following line in compyte/dtype.py so that I
could use the "double2" CUDA type:

    register_dtype(np.dtype([('x', np.float64), ('y', np.float64)]),
                   "double2")
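This mapping works because the structured dtype matches CUDA's double2 struct in size and field offsets: 16 bytes total, with y at byte offset 8. A quick NumPy-only check of those assumptions:

```python
import numpy as np

double2 = np.dtype([('x', np.float64), ('y', np.float64)])

# CUDA's double2 is two packed doubles: 16 bytes, y at offset 8
print(double2.itemsize)        # 16
print(double2.fields['x'][1])  # 0
print(double2.fields['y'][1])  # 8
```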

With this hackery, what I was trying to do works.

Thanks,
Mark

On Wed, Feb 29, 2012 at 1:58 PM, Mark Wiebe <[email protected]> wrote:

> Further to this, it appears that the context lifetime management doesn't
> properly tie the creation of a CUDA device context to its deletion. I'm
> thinking of a scenario where a user can create multiple windows with OpenGL
> viewports in them, each with its own OpenGL context and corresponding CUDA
> context. Since PyCUDA is using a global stack to maintain these contexts,
> creating windows and destroying them in arbitrary order will probably do
> something funky.
>
> -Mark
>
>
> On Wed, Feb 29, 2012 at 11:28 AM, Mark Wiebe <[email protected]> wrote:
>
>> I'd like to create a VBO using OpenGL, then manipulate it in PyCUDA as a
>> gpuarray. Is this possible? I looked a bit into how I would set the
>> deallocation policy, which needs to use OpenGL calls to free the VBO, but
>> gpuarray seems to hardcode CUDA. I would expect it to work like the NumPy
>> ndarray.base object, which owns the memory used by the ndarray. Is this
>> possible?
>>
>> Thanks,
>> Mark
>>
>
>
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
