gemini-code-assist[bot] commented on code in PR #335:
URL: https://github.com/apache/tvm-ffi/pull/335#discussion_r2612680920
##########
include/tvm/ffi/container/tensor.h:
##########
@@ -348,6 +361,40 @@ class Tensor : public ObjectRef {
* \return True if the Tensor data is aligned to the given alignment, false otherwise.
*/
bool IsAligned(size_t alignment) const { return tvm::ffi::IsAligned(*get(), alignment); }
+
+ /*!
+ * \brief Create a new Tensor as a strided view of the current Tensor.
+ * \param shape The shape of the new Tensor.
+ * \param strides The strides of the new Tensor.
+ * \param element_offset The element offset of the new Tensor in the unit of dtype elements.
+ * \return The new Tensor.
+ * \note element_offset is in the unit of dtype elements not bytes.
+ */
+ Tensor as_strided(ShapeView shape, ShapeView strides,
+                   std::optional<int64_t> element_offset = std::nullopt) const {
+ DLTensor prototype;
+ prototype = *static_cast<const DLTensor*>(get());
+ prototype.shape = const_cast<int64_t*>(shape.data());
+ prototype.ndim = static_cast<int>(shape.size());
+ prototype.strides = const_cast<int64_t*>(strides.data());
Review Comment:

Using `const_cast` here is a code smell and can be unsafe. `ShapeView` might
point to temporary memory (e.g., from an `initializer_list`), and while the
current implementation seems to copy the data before the temporary is
destroyed, this approach is brittle. To make this safer and clearer, please
avoid `const_cast`. You can copy the shape and strides into local
`std::vector<int64_t>`s to get mutable data pointers for the `DLTensor`
prototype.
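
A minimal sketch of the suggested change, assuming `ShapeView` exposes the `data()`/`size()` accessors already used in the hunk above (requires `<vector>`; the remaining view construction is elided because it is not shown in the diff):

```cpp
Tensor as_strided(ShapeView shape, ShapeView strides,
                  std::optional<int64_t> element_offset = std::nullopt) const {
  // Local copies give mutable int64_t* pointers for the DLTensor prototype,
  // so no const_cast on the ShapeView storage is needed. The vectors stay
  // alive for the rest of this function, which is all the downstream copy
  // of the prototype requires.
  std::vector<int64_t> shape_vec(shape.data(), shape.data() + shape.size());
  std::vector<int64_t> strides_vec(strides.data(), strides.data() + strides.size());

  DLTensor prototype = *static_cast<const DLTensor*>(get());
  prototype.shape = shape_vec.data();
  prototype.ndim = static_cast<int>(shape_vec.size());
  prototype.strides = strides_vec.data();
  // ... remaining view construction as in the original patch ...
}
```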
##########
include/tvm/ffi/container/tensor.h:
##########
@@ -417,6 +464,41 @@ class Tensor : public ObjectRef {
num_extra_i64_at_tail, alloc, shape, dtype, device,
std::forward<ExtraArgs>(extra_args)...));
}
+
+ /*!
+ * \brief A variant of FromNDAlloc that allows explicitly passing strides.
+ *
+ * \note This function needs to ensure that strides are well-defined
+ * with respect to the allocated compact shape.
+ *
+ * \param alloc The NDAllocator.
+ * \param shape The shape of the Tensor.
+ * \param strides The strides of the Tensor.
+ * \param dtype The data type of the Tensor.
+ * \param device The device of the Tensor.
+ * \param extra_args Extra arguments to be forwarded to TNDAlloc.
+ * \return The created Tensor.
+ * \tparam TNDAlloc The type of the NDAllocator; implements Alloc and Free.
+ * \tparam ExtraArgs Extra arguments to be passed to Alloc.
+ */
+ template <typename TNDAlloc, typename... ExtraArgs>
+ static Tensor FromNDAllocStrided(TNDAlloc alloc, ffi::ShapeView shape, ffi::ShapeView strides,
+                                  DLDataType dtype, DLDevice device, ExtraArgs&&... extra_args) {
+ TVM_FFI_CHECK(shape.size() == strides.size(), ValueError)
+ << "shape and strides must have the same size.";
+ // shape and strides are allocated in-place after the data structure (hence the factor of 2)
+ size_t num_extra_i64_at_tail = shape.size() * 2;
+ DLTensor prototype;
+ prototype.data = nullptr;
+ prototype.device = device;
+ prototype.dtype = dtype;
+ prototype.shape = const_cast<int64_t*>(shape.data());
+ prototype.ndim = static_cast<int>(shape.size());
+ prototype.strides = const_cast<int64_t*>(strides.data());
Review Comment:

Similar to my comment in `as_strided`, the use of `const_cast` here is
concerning. Please copy the data from `shape` and `strides` `ShapeView`s into
local `std::vector`s to obtain mutable pointers for the `DLTensor` prototype.
This will make the code safer and avoid potential issues with temporary objects.
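
Roughly, the same pattern applied to this hunk (only the lines the suggestion touches are shown; the rest of the factory is assumed unchanged):

```cpp
// Sketch: copy the views into locals that remain valid for the whole call,
// so the DLTensor prototype can take mutable pointers without const_cast.
std::vector<int64_t> shape_vec(shape.data(), shape.data() + shape.size());
std::vector<int64_t> strides_vec(strides.data(), strides.data() + strides.size());

DLTensor prototype;
prototype.data = nullptr;
prototype.device = device;
prototype.dtype = dtype;
prototype.shape = shape_vec.data();
prototype.ndim = static_cast<int>(shape_vec.size());
prototype.strides = strides_vec.data();
```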