tp-nan commented on issue #338:
URL: https://github.com/apache/tvm-ffi/issues/338#issuecomment-3648567916

   @junrushao @tqchen
   Thank you for your replies and the helpful insights. The directions you 
mentioned are indeed more thorough and well-considered.
   
   Let me briefly clarify our use case. Our scenario is similar to extending the backend of [Triton Inference Server](https://github.com/triton-inference-server/server) (Ensemble, [BLS](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/bls.html)), but with a twist: we aim to allow a backend to define both computation and scheduling backends. For computation-related data types such as cv::Mat (or CV-CUDA), we can implement a sequence of computation and type-conversion backends (e.g., DecodeMat, ResizeMat, Mat2TVMTensor), which introduces backend boundaries.
   
   However, at the boundaries between multiple backends we also encounter scheduling-related types (such as Event, Status, etc.) that need to be type-erased and passed across backend boundaries, for example via a structure like Ptr<Map<String, Any>>. Within each backend, the developer would then cast these values back to their original concrete types. These types may be newly introduced by backend authors when adding new backends, and it is often impractical, or even impossible for third-party types outside our control, to wrap every such type into the FFI system.
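   To make the pattern concrete, here is a minimal standalone sketch of what we mean by "type-erase at the boundary, cast back inside the backend". It uses `std::any` and `std::map` as stand-ins for the `Any`/`Map` containers; the `Event` type, the field names, and the function names are all hypothetical, not part of any existing API:

   ```cpp
   #include <any>
   #include <iostream>
   #include <map>
   #include <memory>
   #include <string>

   // Hypothetical scheduling-related type introduced by a backend author.
   // It is deliberately NOT registered with any FFI type system.
   struct Event {
       int id;
   };

   using AnyMap = std::map<std::string, std::any>;

   // Producer backend: type-erase the scheduling values at the boundary.
   std::shared_ptr<AnyMap> MakeBoundaryPayload() {
       auto payload = std::make_shared<AnyMap>();
       (*payload)["event"] = Event{42};
       (*payload)["status"] = std::string("ok");
       return payload;
   }

   // Consumer backend: cast the values back to their concrete types.
   int ConsumeBoundaryPayload(const std::shared_ptr<AnyMap>& payload) {
       const auto& ev = std::any_cast<const Event&>(payload->at("event"));
       const auto& status = std::any_cast<const std::string&>(payload->at("status"));
       if (status != "ok") return -1;
       return ev.id;
   }

   int main() {
       auto payload = MakeBoundaryPayload();
       std::cout << ConsumeBoundaryPayload(payload) << std::endl;  // prints 42
       return 0;
   }
   ```

   Since the backend granularity is coarse, the cost of `std::any`-style erasure at each boundary crossing is negligible for us; what matters is that backend authors can move their own types across the boundary without registering them anywhere.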
   
   Because our backend granularity is quite coarse, we’re not particularly 
sensitive to the performance overhead of TypeErase.
   
   I’m not sure whether this use case falls somewhat outside the current scope of tvm_ffi, but I’d be glad to learn tvm_ffi’s perspective on supporting broader types.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
