gemini-code-assist[bot] commented on code in PR #18546:
URL: https://github.com/apache/tvm/pull/18546#discussion_r2591752511
##########
src/runtime/vm/builtin.cc:
##########
@@ -122,10 +122,11 @@ TVM_FFI_STATIC_INIT_BLOCK() {
* \sa MatchShapeCode
*/
void MatchShape(ffi::PackedArgs args, ffi::Any* rv) {
- // input shape the first argument can take in tensor or shape.
+ // input shape the first argument can take in tensor, DLTensor* or shape.
ffi::Shape input_shape;
- if (auto opt_nd = args[0].as<Tensor>()) {
- input_shape = opt_nd.value().Shape();
+ if (auto opt_nd = args[0].try_cast<DLTensor*>()) {
+ DLTensor* ptr = opt_nd.value();
+ input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
} else {
input_shape = args[0].cast<ffi::Shape>();
}
Review Comment:

The current change correctly adds support for `DLTensor*` arguments, which
fixes the reported issue. However, it removes the existing support for
`tvm::runtime::Tensor` arguments by replacing the check instead of adding to
it. This will break existing code that passes a `Tensor` to `MatchShape`.
The logic should be updated to handle `Tensor`, `DLTensor*`, and
`ffi::Shape` arguments, as indicated by the updated comment on line 125. You
can chain the checks to support all three types.
```cpp
if (auto opt_tensor = args[0].as<Tensor>()) {
input_shape = opt_tensor.value().Shape();
} else if (auto opt_dltensor = args[0].try_cast<DLTensor*>()) {
DLTensor* ptr = opt_dltensor.value();
input_shape = ffi::Shape(ptr->shape, ptr->shape + ptr->ndim);
} else {
input_shape = args[0].cast<ffi::Shape>();
}
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
For additional commands, e-mail: [email protected]