I have a question about when PyCUDA frees texture memory that has been
allocated on the device. Suppose I have a function which does the following:
>>> def foo():
...     a = np.zeros(100, dtype=np.uint32)
...     a_gpu = cuda.to_device(a)
...     a_tex = mod.get_texref('a_tex')
...     a_tex.set_address(a_gpu, a.nbytes)
...     a_tex.set_format(cuda.array_format.UNSIGNED_INT32, 1)
...     return a_tex
If I call foo() I get back the texture reference, but the DeviceAllocation
returned by to_device has gone out of scope. Is it safe to assume that the
memory is still available on the GPU?
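For reference, here is a pure-Python sketch of the lifetime behavior I'm worried about. The FakeDeviceAllocation and FakeTexRef classes are made-up stand-ins (not part of pycuda); they just mimic an allocation that is freed when garbage-collected and a texture reference that records only the raw pointer without keeping a Python reference:

```python
import gc

class FakeDeviceAllocation:
    """Stand-in for a device allocation: 'frees' itself when collected."""
    freed = []
    def __del__(self):
        FakeDeviceAllocation.freed.append(id(self))

class FakeTexRef:
    """Stand-in for a texture reference; stores only the pointer value."""
    def set_address(self, devptr, nbytes):
        self.address = id(devptr)   # no Python reference kept to devptr

def foo():
    a_gpu = FakeDeviceAllocation()
    a_tex = FakeTexRef()
    a_tex.set_address(a_gpu, 400)
    return a_tex                    # a_gpu's refcount drops to zero here

tex = foo()
gc.collect()
print(len(FakeDeviceAllocation.freed))   # the allocation was collected

def foo_safe():
    a_gpu = FakeDeviceAllocation()
    a_tex = FakeTexRef()
    a_tex.set_address(a_gpu, 400)
    a_tex._keepalive = a_gpu        # pin the allocation to the texref
    return a_tex

FakeDeviceAllocation.freed.clear()
tex2 = foo_safe()
gc.collect()
print(len(FakeDeviceAllocation.freed))   # nothing freed; memory alive
```

If the real set_address behaves like this sketch, a workaround would be to return (or attach) the DeviceAllocation alongside the texture reference so the memory cannot be freed behind the texref's back.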
Thanks,
Tony
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda