tlopex commented on code in PR #18546:
URL: https://github.com/apache/tvm/pull/18546#discussion_r2608912114
##########
src/runtime/vm/builtin.cc:
##########
@@ -122,10 +122,11 @@ TVM_FFI_STATIC_INIT_BLOCK() {
* \sa MatchShapeCode
*/
void MatchShape(ffi::PackedArgs args, ffi::Any* rv) {
- // input shape the first argument can take in tensor or shape.
+ // input shape the first argument can take in tensor, DLTensor* or shape.
ffi::Shape input_shape;
- if (auto opt_nd = args[0].as<Tensor>()) {
- input_shape = opt_nd.value().Shape();
+ if (auto opt_nd = args[0].try_cast<DLTensor*>()) {
+ DLTensor* ptr = opt_nd.value();
+ input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
} else {
input_shape = args[0].cast<ffi::Shape>();
}
Review Comment:
The main reason here is not the performance difference between `as<Tensor>()` and
`try_cast<DLTensor*>()`; it is type coverage and backward compatibility. A `Tensor`
cannot be assumed to always cast safely to `DLTensor*`, and existing callers still
pass `tvm::runtime::Tensor` as the argument. If we replace the `Tensor` branch with a
`DLTensor*` branch outright, the previous behavior may break.
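
For illustration, a minimal sketch of keeping both branches side by side, reusing only the calls already shown in the diff above. This is a suggestion of the direction, not the final implementation:

```cpp
void MatchShape(ffi::PackedArgs args, ffi::Any* rv) {
  // The first argument may be a Tensor, a raw DLTensor*, or a Shape.
  ffi::Shape input_shape;
  if (auto opt_tensor = args[0].as<Tensor>()) {
    // Existing path: callers that pass tvm::runtime::Tensor keep working.
    input_shape = opt_tensor.value().Shape();
  } else if (auto opt_dl = args[0].try_cast<DLTensor*>()) {
    // New path: raw DLTensor* arguments are also accepted.
    DLTensor* ptr = opt_dl.value();
    input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
  } else {
    // Fallback: the argument is already a shape.
    input_shape = args[0].cast<ffi::Shape>();
  }
  // ... remaining shape-matching logic unchanged ...
}
```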
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]