yaoyaoding commented on issue #264:
URL: https://github.com/apache/tvm-ffi/issues/264#issuecomment-3543614310

   > This does mean that we should perhaps have a default EnvAllocator for CPU 
if it is not set.
   
   We can ask the host (like PyTorch or TensorRT) to initialize the callbacks for tensor allocation when it loads tvm-ffi, and raise an error if it does not do so. I don't think it's necessary for us to implement a memory allocator ourselves: keeping two memory pools (tvm-ffi's and the host's) would prevent one pool from reclaiming the unused tensors held by the other.
   
   > this is because we do not know where the object gets allocated(e.g. they 
can get allocated in rust) so we need customized deleter to safely call the 
deletion logic of the allocation dll.
   
   After reading the related sources, I get the idea now. I think the memory allocator should either 1) be defined in a separate runtime library that all kernel libraries from the provider link against, or 2) be a custom allocator owned by `libtvm_ffi.so`.
   It would be strange for every short-lived kernel library (e.g., `kernel1.so`, `kernel2.so`) to carry its own copy of the memory allocator.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

