gemini-code-assist[bot] commented on code in PR #273:
URL: https://github.com/apache/tvm-ffi/pull/273#discussion_r2534124136


##########
docs/guides/julia_guide.md:
##########
@@ -0,0 +1,254 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Guide
+
+This guide demonstrates how to use TVM FFI from Julia applications.
+
+## Installation
+
+### Prerequisites
+
+The Julia support depends on `libtvm_ffi`. First, build the TVM FFI library:
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### Adding to Your Project
+
+Add the TVMFFI package to your Julia project:
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+```
+
+### Environment Setup
+
+Set the library path so `libtvm_ffi` can be found at runtime:
+
+```bash
+export LD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$LD_LIBRARY_PATH
+```

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The build and environment setup instructions use Linux-specific commands 
(`nproc`, `LD_LIBRARY_PATH`). It would be helpful to provide alternatives for 
other operating systems like macOS (`sysctl -n hw.ncpu`, `DYLD_LIBRARY_PATH`) 
and Windows (`%PATH%`) to improve cross-platform usability.
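As a sketch of what such a cross-platform note could look like (the `/path/to/tvm-ffi` placeholder is kept from the guide; `getconf _NPROCESSORS_ONLN` works on both Linux and macOS):

```bash
# Portable CPU-count detection for the -j flag (falls back to 4 jobs)
CORES=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)
echo "building with ${CORES} parallel jobs"

# Runtime library path differs per OS:
#   Linux:   export LD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$LD_LIBRARY_PATH
#   macOS:   export DYLD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$DYLD_LIBRARY_PATH
#   Windows: set PATH=C:\path\to\tvm-ffi\build\lib;%PATH%
```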



##########
julia/TVMFFI/Project.toml:
##########
@@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+name = "TVMFFI"
+uuid = "6e4b1e8f-3e1d-4f5e-9b1e-5e1e1e1e1e1e"

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   The UUID `6e4b1e8f-3e1d-4f5e-9b1e-5e1e1e1e1e1e` appears to be a placeholder. 
For a production-ready package that can be registered, a unique UUID is 
required. You can generate one in the Julia REPL using `using UUIDs; uuid4()`.
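For example, from the Julia REPL (UUIDs is a standard library, no extra dependency needed):

```julia
# Generate a fresh, unique v4 UUID for the [name]/[uuid] fields in Project.toml
using UUIDs

id = uuid4()
println(id)  # a random v4 UUID, different on every run
```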



##########
docs/guides/julia_guide.md:
##########
@@ -0,0 +1,254 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Guide
+
+This guide demonstrates how to use TVM FFI from Julia applications.
+
+## Installation
+
+### Prerequisites
+
+The Julia support depends on `libtvm_ffi`. First, build the TVM FFI library:
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### Adding to Your Project
+
+Add the TVMFFI package to your Julia project:
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+```
+
+### Environment Setup
+
+Set the library path so `libtvm_ffi` can be found at runtime:
+
+```bash
+export LD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$LD_LIBRARY_PATH
+```
+
+## Basic Usage
+
+### Loading a Module
+
+Load a compiled TVM FFI module and call its functions:
+
+```julia
+using TVMFFI
+
+# Load compiled module
+mod = load_module("build/add_one_cpu.so")
+
+# Get function by name
+add_one = get_function(mod, "add_one_cpu")
+
+# Or use bracket notation (Python-style)
+add_one = mod["add_one_cpu"]
+```
+
+### Calling Functions
+
+Call functions with automatic array conversion:
+
+```julia
+# Create input and output arrays
+x = Float32[1, 2, 3, 4, 5]
+y = zeros(Float32, 5)
+
+# Call function - arrays auto-converted!
+add_one(x, y)
+
+println(y)  # [2.0, 3.0, 4.0, 5.0, 6.0]
+```
+
+### Working with Slices
+
+Julia's `@view` creates zero-copy slices:
+
+```julia
+matrix = Float32[1 2 3; 4 5 6; 7 8 9]
+col = @view matrix[:, 2]  # Column slice (zero-copy)
+
+add_one(col, output)  # Pass slice directly
+```
+
+## Advanced Topics
+
+### Global Functions
+
+Access globally registered functions:
+
+```julia
+# Get global function
+func = get_global_func("my_function")
+
+if func !== nothing
+    result = func(arg1, arg2)

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The variables `arg1` and `arg2` are not defined in this example. Using 
concrete values would make the example clearer and runnable for users.
   
   ```suggestion
       result = func(10, 20)
   ```



##########
julia/README.md:
##########
@@ -0,0 +1,177 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Interface for TVM FFI
+
+This directory contains the Julia language bindings for TVM FFI.
+
+## Features
+
+- ✅ **Module Loading** - Load compiled TVM modules (.so files)
+- ✅ **Function Calling** - Call TVM functions with type safety
+- ✅ **Zero-Copy Tensors** - Efficient array passing
+- ✅ **CPU Execution** - Verified working with real examples
+- ✅ **GPU Support** - CUDA integration via CUDA.jl
+- ✅ **Automatic Memory** - GC-based, no manual management
+- ✅ **Error Handling** - Julia exceptions with detailed messages
+
+## Quick Start
+
+### 1. Build TVM FFI Library
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### 2. Run Working Demo
+
+```bash
+cd tvm-ffi/julia/TVMFFI
+
+# CPU example (verified working!)
+julia examples/load_add_one.jl
+# Output: ✅ SUCCESS! Output matches expected values!
+
+# Complete demo
+julia examples/complete_demo.jl
+```
+
+### 3. Use in Your Code
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+
+using TVMFFI
+
+# Load TVM module
+mod = load_module("my_module.so")
+
+# Get function
+my_func = get_function(mod, "my_function")
+
+# Or use bracket notation
+my_func = mod["my_function"]
+
+# Create arrays
+x = Float32[1, 2, 3, 4, 5]
+y = zeros(Float32, 5)
+
+# NEW: Direct array passing! (Auto-conversion)
+# Arrays are automatically converted to DLTensorHolder
+my_func(x, y)  # ← Simple! Just pass arrays!
+
+# Check results
+println(y)  # Results from TVM!
+
+# Works with slices too! (Zero-copy)
+matrix = Float32[1 2 3; 4 5 6; 7 8 9]
+col = @view matrix[:, 2]           # Column slice (contiguous)
+my_func(col)                       # Pass slice directly!

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The function `my_func` is called with a single argument `my_func(col)`, but 
it was previously shown to take two arguments `my_func(x, y)`. This could be 
confusing. If the function can be called with one argument, it would be good to 
clarify that. If it always requires two arguments (e.g., input and output), 
this example should be corrected.
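One way the snippet could be made consistent, assuming `my_func` keeps the same (input, output) signature as the earlier `my_func(x, y)` call. The `add_one`-style stand-in below is illustrative; the real `my_func` would come from `get_function(mod, ...)`:

```julia
# Stand-in for the loaded TVM function (input, output) -> output .= input .+ 1
my_func = (input, output) -> (output .= input .+ 1)

matrix = Float32[1 2 3; 4 5 6; 7 8 9]
col = @view matrix[:, 2]   # zero-copy column slice (contiguous in column-major Julia)
out = zeros(Float32, 3)    # output buffer matching the slice length

my_func(col, out)          # same two-argument shape as the earlier example
println(out)               # Float32[3.0, 6.0, 9.0]
```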



##########
julia/README.md:
##########
@@ -0,0 +1,177 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Interface for TVM FFI
+
+This directory contains the Julia language bindings for TVM FFI.
+
+## Features
+
+- ✅ **Module Loading** - Load compiled TVM modules (.so files)
+- ✅ **Function Calling** - Call TVM functions with type safety
+- ✅ **Zero-Copy Tensors** - Efficient array passing
+- ✅ **CPU Execution** - Verified working with real examples
+- ✅ **GPU Support** - CUDA integration via CUDA.jl
+- ✅ **Automatic Memory** - GC-based, no manual management
+- ✅ **Error Handling** - Julia exceptions with detailed messages
+
+## Quick Start
+
+### 1. Build TVM FFI Library
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### 2. Run Working Demo
+
+```bash
+cd tvm-ffi/julia/TVMFFI
+
+# CPU example (verified working!)
+julia examples/load_add_one.jl
+# Output: ✅ SUCCESS! Output matches expected values!
+
+# Complete demo
+julia examples/complete_demo.jl
+```
+
+### 3. Use in Your Code
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+
+using TVMFFI
+
+# Load TVM module
+mod = load_module("my_module.so")
+
+# Get function
+my_func = get_function(mod, "my_function")
+
+# Or use bracket notation
+my_func = mod["my_function"]
+
+# Create arrays
+x = Float32[1, 2, 3, 4, 5]
+y = zeros(Float32, 5)
+
+# NEW: Direct array passing! (Auto-conversion)
+# Arrays are automatically converted to DLTensorHolder
+my_func(x, y)  # ← Simple! Just pass arrays!
+
+# Check results
+println(y)  # Results from TVM!
+
+# Works with slices too! (Zero-copy)
+matrix = Float32[1 2 3; 4 5 6; 7 8 9]
+col = @view matrix[:, 2]           # Column slice (contiguous)
+my_func(col)                       # Pass slice directly!
+
+# For optimization: pre-create holders
+holder = from_julia_array(x)
+for i in 1:1000000
+    my_func(holder)  # Reuse holder, no allocation
+end
+```
+
+## Directory Structure
+
+```text
+julia/
+├── TVMFFI/              Main package directory
+│   ├── Project.toml     Package metadata
+│   ├── README.md        Package documentation
+│   ├── DESIGN.md        Design philosophy and decisions
+│   ├── src/             Source code
+│   │   ├── TVMFFI.jl    Main module
+│   │   ├── LibTVMFFI.jl C API bindings
+│   │   ├── error.jl     Error handling
+│   │   ├── dtype.jl     Data types
+│   │   ├── device.jl    Devices
+│   │   ├── string.jl    Strings and bytes
+│   │   ├── object.jl    Object system
+│   │   ├── function.jl  Function calling
+│   │   └── tensor.jl    Tensor support
+│   ├── test/            Unit tests
+│   │   └── runtests.jl  Test suite
+│   └── examples/        Usage examples
+│       ├── README.md    Examples documentation
+│       └── basic_usage.jl  Basic usage example
+└── README.md            This file
+```
+
+## Documentation
+
+- **[Package README](TVMFFI/README.md)**: User-facing documentation
+- **[Design Document](TVMFFI/DESIGN.md)**: Architecture and implementation details
+- **[Examples](TVMFFI/examples/)**: Code examples
+
+## Design Philosophy
+
+TVMFFI.jl follows these principles:
+
+1. **Simplicity** - Direct, clear code over complex abstractions
+2. **Trust the Runtime** - Use Julia's GC instead of manual memory management
+3. **Zero-Copy** - Efficient interop with Julia arrays
+4. **Idiomatic Julia** - Follow Julia conventions and idioms
+
+For details, see [DESIGN.md](TVMFFI/DESIGN.md).
+
+## Testing
+
+```julia
+using Pkg
+Pkg.activate("/path/to/tvm-ffi/julia/TVMFFI")
+Pkg.test("TVMFFI")

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   Recommending `Pkg.activate` with a hardcoded absolute path is not ideal. 
It's better practice to instruct users to `cd` into the project directory and 
run `Pkg.activate(".")`.
   
   ```suggestion
   Pkg.activate(".") # Assuming you have cd'd into the TVMFFI directory
   Pkg.test("TVMFFI")
   ```



##########
julia/README.md:
##########
@@ -0,0 +1,177 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Interface for TVM FFI
+
+This directory contains the Julia language bindings for TVM FFI.
+
+## Features
+
+- ✅ **Module Loading** - Load compiled TVM modules (.so files)
+- ✅ **Function Calling** - Call TVM functions with type safety
+- ✅ **Zero-Copy Tensors** - Efficient array passing
+- ✅ **CPU Execution** - Verified working with real examples
+- ✅ **GPU Support** - CUDA integration via CUDA.jl
+- ✅ **Automatic Memory** - GC-based, no manual management
+- ✅ **Error Handling** - Julia exceptions with detailed messages
+
+## Quick Start
+
+### 1. Build TVM FFI Library
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### 2. Run Working Demo
+
+```bash
+cd tvm-ffi/julia/TVMFFI
+
+# CPU example (verified working!)
+julia examples/load_add_one.jl
+# Output: ✅ SUCCESS! Output matches expected values!
+
+# Complete demo
+julia examples/complete_demo.jl

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   This example refers to `examples/complete_demo.jl`, but this file does not 
seem to be included in the pull request. This will lead to a "file not found" 
error for users.



##########
docs/guides/julia_guide.md:
##########
@@ -0,0 +1,254 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Guide
+
+This guide demonstrates how to use TVM FFI from Julia applications.
+
+## Installation
+
+### Prerequisites
+
+The Julia support depends on `libtvm_ffi`. First, build the TVM FFI library:
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### Adding to Your Project
+
+Add the TVMFFI package to your Julia project:
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+```
+
+### Environment Setup
+
+Set the library path so `libtvm_ffi` can be found at runtime:
+
+```bash
+export LD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$LD_LIBRARY_PATH
+```
+
+## Basic Usage
+
+### Loading a Module
+
+Load a compiled TVM FFI module and call its functions:
+
+```julia
+using TVMFFI
+
+# Load compiled module
+mod = load_module("build/add_one_cpu.so")
+
+# Get function by name
+add_one = get_function(mod, "add_one_cpu")
+
+# Or use bracket notation (Python-style)
+add_one = mod["add_one_cpu"]
+```
+
+### Calling Functions
+
+Call functions with automatic array conversion:
+
+```julia
+# Create input and output arrays
+x = Float32[1, 2, 3, 4, 5]
+y = zeros(Float32, 5)
+
+# Call function - arrays auto-converted!
+add_one(x, y)
+
+println(y)  # [2.0, 3.0, 4.0, 5.0, 6.0]
+```
+
+### Working with Slices
+
+Julia's `@view` creates zero-copy slices:
+
+```julia
+matrix = Float32[1 2 3; 4 5 6; 7 8 9]
+col = @view matrix[:, 2]  # Column slice (zero-copy)
+
+add_one(col, output)  # Pass slice directly

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The variable `output` is not defined in this code snippet. This could be 
confusing for users trying to run the example. It should probably be `y`, which 
was defined in the previous snippet, or a new variable should be defined.
   
   ```suggestion
   add_one(col, y)  # Pass slice directly, assuming `y` is defined and has the correct size
   ```



##########
docs/guides/julia_guide.md:
##########
@@ -0,0 +1,254 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Julia Guide
+
+This guide demonstrates how to use TVM FFI from Julia applications.
+
+## Installation
+
+### Prerequisites
+
+The Julia support depends on `libtvm_ffi`. First, build the TVM FFI library:
+
+```bash
+cd tvm-ffi
+mkdir -p build && cd build
+cmake .. && make -j$(nproc)
+```
+
+### Adding to Your Project
+
+Add the TVMFFI package to your Julia project:
+
+```julia
+using Pkg
+Pkg.add(path="/path/to/tvm-ffi/julia/TVMFFI")
+```
+
+### Environment Setup
+
+Set the library path so `libtvm_ffi` can be found at runtime:
+
+```bash
+export LD_LIBRARY_PATH=/path/to/tvm-ffi/build/lib:$LD_LIBRARY_PATH
+```
+
+## Basic Usage
+
+### Loading a Module
+
+Load a compiled TVM FFI module and call its functions:
+
+```julia
+using TVMFFI
+
+# Load compiled module
+mod = load_module("build/add_one_cpu.so")
+
+# Get function by name
+add_one = get_function(mod, "add_one_cpu")
+
+# Or use bracket notation (Python-style)
+add_one = mod["add_one_cpu"]
+```
+
+### Calling Functions
+
+Call functions with automatic array conversion:
+
+```julia
+# Create input and output arrays
+x = Float32[1, 2, 3, 4, 5]
+y = zeros(Float32, 5)
+
+# Call function - arrays auto-converted!
+add_one(x, y)
+
+println(y)  # [2.0, 3.0, 4.0, 5.0, 6.0]
+```
+
+### Working with Slices
+
+Julia's `@view` creates zero-copy slices:
+
+```julia
+matrix = Float32[1 2 3; 4 5 6; 7 8 9]
+col = @view matrix[:, 2]  # Column slice (zero-copy)
+
+add_one(col, output)  # Pass slice directly
+```
+
+## Advanced Topics
+
+### Global Functions
+
+Access globally registered functions:
+
+```julia
+# Get global function
+func = get_global_func("my_function")
+
+if func !== nothing
+    result = func(arg1, arg2)
+end
+```
+
+### GPU Support
+
+Work with GPU arrays using CUDA.jl, AMDGPU.jl, Metal.jl, or oneAPI.jl:
+
+```julia
+using CUDA
+
+# Create CUDA arrays
+x_gpu = CUDA.CuArray(Float32[1, 2, 3, 4, 5])
+y_gpu = CUDA.zeros(Float32, 5)
+
+# Same API - device auto-detected!
+add_one_cuda(x_gpu, y_gpu)
+
+CUDA.synchronize()
+println(Array(y_gpu))
+```
+
+The Julia bindings support multiple GPU backends:
+
+```julia
+# Check available backends
+backends = list_available_gpu_backends()
+
+# NVIDIA CUDA
+using CUDA
+x = CUDA.CuArray(data)
+
+# AMD ROCm
+using AMDGPU
+x = AMDGPU.ROCArray(data)
+
+# Apple Metal
+using Metal
+x = Metal.MtlArray(data)
+
+# All work with the same API
+func(x)  # Auto-detects backend

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The variables `data` and `func` are used in this code snippet but are not 
defined. This makes the example incomplete and not runnable. Please define 
these variables to provide a complete example.
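For instance, the snippet could open by defining both names (the stand-in below is illustrative; the real `func` would come from `get_global_func` or `get_function`):

```julia
# `data` is the host array the GPU examples upload; `func` stands in for a
# loaded TVM function (illustrative only -- not the real TVM FFI call)
data = Float32[1, 2, 3, 4, 5]
func = x -> x .+ 1

println(func(data))  # Float32[2.0, 3.0, 4.0, 5.0, 6.0]
```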



##########
julia/TVMFFI/src/tensor.jl:
##########
@@ -0,0 +1,420 @@
+#=
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+=#
+
+using .LibTVMFFI
+
+"""
+    DLTensor
+
+DLPack tensor structure (from dlpack.h).
+This is a C-compatible struct representing a multi-dimensional array.
+
+# Note on GPU Pointers
+For GPU arrays (CuArray, ROCArray, etc.), the data field contains a GPU device
+pointer, not a CPU pointer. We use UInt64 to store the pointer value and
+reinterpret it as needed, since GPU pointers can't be directly converted to Ptr{Cvoid}.
+"""
+struct DLTensor
+    data::Ptr{Cvoid}
+    device::DLDevice
+    ndim::Int32
+    dtype::DLDataType
+    shape::Ptr{Int64}
+    strides::Ptr{Int64}
+    byte_offset::UInt64
+end
+
+"""
+    DLTensor(data_ptr, device, ndim, dtype, shape, strides, byte_offset)
+
+Construct DLTensor with automatic GPU pointer handling.
+"""
+function DLTensor(data_ptr, device::DLDevice, ndim::Int32, dtype::DLDataType,
+                  shape::Ptr{Int64}, strides::Ptr{Int64}, byte_offset::UInt64)
+    # Convert GPU pointers (CuPtr, etc.) to generic pointer
+    # by reinterpreting through UInt
+    ptr_as_uint = if data_ptr isa Ptr
+        UInt(data_ptr)
+    else
+        # GPU pointer (CuPtr, ROCPtr, etc.)
+        # Get the raw pointer value
+        reinterpret(UInt, data_ptr)
+    end
+
+    # Convert to Ptr{Cvoid}
+    data_cvoid = reinterpret(Ptr{Cvoid}, ptr_as_uint)
+
+    return DLTensor(data_cvoid, device, ndim, dtype, shape, strides, byte_offset)
+end
+
+"""
+    TVMTensor
+
+TVM tensor wrapper with automatic memory management.
+
+Provides accessors for shape, dtype, and device information.
+"""
+mutable struct TVMTensor
+    handle::LibTVMFFI.TVMFFIObjectHandle
+
+    """
+        TVMTensor(handle; own=true)
+
+    Create a TVMTensor from a raw handle.
+
+    # Arguments
+    - `handle`: The raw tensor handle
+    - `own`: If true, increment refcount (default). If false, take ownership without IncRef.
+    """
+    function TVMTensor(handle::LibTVMFFI.TVMFFIObjectHandle; own::Bool=true)
+        if handle == C_NULL
+            error("Cannot create TVMTensor from NULL handle")
+        end
+
+        # Optionally increase reference count
+        if own
+            LibTVMFFI.TVMFFIObjectIncRef(handle)
+        end
+
+        tensor = new(handle)
+
+        # Finalizer
+        finalizer(tensor) do t
+            if t.handle != C_NULL
+                LibTVMFFI.TVMFFIObjectDecRef(t.handle)
+            end
+        end
+
+        return tensor
+    end
+end
+
+"""
+    get_dltensor_ptr(tensor::TVMTensor) -> Ptr{DLTensor}
+
+Get pointer to the underlying DLTensor structure.
+"""
+function get_dltensor_ptr(tensor::TVMTensor)
+    # DLTensor follows immediately after TVMFFIObject header
+    Ptr{DLTensor}(tensor.handle + sizeof(LibTVMFFI.TVMFFIObject))
+end
+
+"""
+    shape(tensor::TVMTensor) -> Vector{Int64}
+
+Get the shape of the tensor as a vector.
+
+# Note
+Julia arrays typically use `size()` which returns a tuple.
+This function returns a vector for compatibility with some use cases.
+"""
+function shape(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+
+    ndim = Int(dltensor.ndim)
+    if ndim == 0
+        return Int64[]
+    end
+
+    shape_vec = unsafe_wrap(Array, dltensor.shape, ndim)
+    return copy(shape_vec)
+end
+
+"""
+    Base.size(tensor::TVMTensor) -> Tuple
+
+Get the size of the tensor as a tuple (Julia standard).
+"""
+Base.size(tensor::TVMTensor) = Tuple(shape(tensor))
+
+"""
+    Base.size(tensor::TVMTensor, dim::Int) -> Int
+
+Get the size of a specific dimension.
+"""
+function Base.size(tensor::TVMTensor, dim::Int)
+    s = size(tensor)
+    if dim < 1 || dim > length(s)
+        return 1  # Julia convention for out-of-bounds dimensions
+    end
+    return s[dim]
+end
+
+"""
+    Base.ndims(tensor::TVMTensor) -> Int
+
+Get the number of dimensions.
+"""
+function Base.ndims(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+    return Int(dltensor.ndim)
+end
+
+"""
+    Base.length(tensor::TVMTensor) -> Int
+
+Get the total number of elements.
+"""
+function Base.length(tensor::TVMTensor)
+    prod(size(tensor))
+end
+
+"""
+    dtype(tensor::TVMTensor) -> DLDataType
+
+Get the data type of the tensor.
+"""
+function dtype(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+    return dltensor.dtype
+end
+
+"""
+    device(tensor::TVMTensor) -> DLDevice
+
+Get the device of the tensor.
+"""
+function device(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+    return dltensor.device
+end
+
+"""
+    strides(tensor::TVMTensor) -> Vector{Int64}
+
+Get the strides of the tensor.
+"""
+function strides(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+
+    if dltensor.strides == C_NULL
+        # Compute default C-contiguous strides
+        shape_vec = shape(tensor)
+        ndim = length(shape_vec)
+        strides_vec = ones(Int64, ndim)
+
+        for i in (ndim-1):-1:1
+            strides_vec[i] = strides_vec[i+1] * shape_vec[i+1]
+        end
+
+        return strides_vec
+    else
+        ndim = Int(dltensor.ndim)
+        strides_vec = unsafe_wrap(Array, dltensor.strides, ndim)
+        return copy(strides_vec)
+    end
+end
+
+"""
+    is_contiguous(tensor::TVMTensor) -> Bool
+
+Check if the tensor is contiguous in memory.
+"""
+function is_contiguous(tensor::TVMTensor)
+    shape_vec = shape(tensor)
+    strides_vec = strides(tensor)
+    ndim = length(shape_vec)
+
+    expected_stride = 1
+    for i in ndim:-1:1
+        if strides_vec[i] != expected_stride
+            return false
+        end
+        expected_stride *= shape_vec[i]
+    end
+
+    return true
+end
+
+"""
+    data_ptr(tensor::TVMTensor) -> Ptr{Cvoid}
+
+Get the raw data pointer of the tensor.
+"""
+function data_ptr(tensor::TVMTensor)
+    dltensor_ptr = get_dltensor_ptr(tensor)
+    dltensor = unsafe_load(dltensor_ptr)
+    return dltensor.data
+end
+
+"""
+    to_julia_array(tensor::TVMTensor, ::Type{T}) -> Array{T}
+
+Convert TVM tensor to Julia array (zero-copy for CPU contiguous tensors).
+
+Returns an array that shares memory with the tensor.
+"""
+function to_julia_array(tensor::TVMTensor, ::Type{T}) where T
+    # Check device
+    dev = device(tensor)
+    if dev.device_type != Int32(LibTVMFFI.kDLCPU)
+        error("Can only convert CPU tensors to Julia arrays. " *
+              "Tensor is on device type $(dev.device_type)")
+    end
+
+    # Verify dtype matches
+    expected_dtype = DLDataType(T)
+    actual_dtype = dtype(tensor)
+
+    if expected_dtype.code != actual_dtype.code ||
+       expected_dtype.bits != actual_dtype.bits
+        error("Type mismatch: tensor has dtype $(string(actual_dtype)), " *
+              "but requested type $T (dtype $(string(expected_dtype)))")
+    end
+
+    # Check contiguity
+    if !is_contiguous(tensor)
+        error("Can only convert contiguous tensors. For non-contiguous: use copy(to_julia_array())")
+    end
+
+    # Get shape and data pointer
+    shape_tuple = size(tensor)
+    ptr = Ptr{T}(data_ptr(tensor))
+
+    # Create zero-copy view
+    # Note: The array keeps a reference to the tensor to prevent GC
+    arr = unsafe_wrap(Array, ptr, shape_tuple)
+
+    return arr
+end
+
+# Pretty printing
+function Base.show(io::IO, tensor::TVMTensor)
+    shape_tuple = size(tensor)
+    dt = dtype(tensor)
+    dev = device(tensor)
+
+    print(io, "TVMTensor{", string(dt), "}(")
+    print(io, "shape=", shape_tuple, ", ")
+    print(io, "device=", dev, ")")
+end
+
+"""
+    Base.summary(io::IO, tensor::TVMTensor)
+
+Get a summary string for the tensor.
+"""
+function Base.summary(io::IO, tensor::TVMTensor)
+    shape_tuple = size(tensor)
+    dt = dtype(tensor)
+    print(io, join(shape_tuple, "×"), " TVMTensor{", string(dt), "}")
+end
+
+"""
+    DLTensorHolder{T, S}
+
+Self-contained DLTensor holder with automatic lifetime management.
+
+Works for both CPU and GPU arrays, including slices (SubArray).
+
+# Fields
+- `tensor::DLTensor`: DLPack tensor structure
+- `shape::Vector{Int64}`: Shape array
+- `strides::Vector{Int64}`: Strides array
+- `source::S`: Source array (Array, CuArray, SubArray, etc.)
+"""
+mutable struct DLTensorHolder{T, S}
+    tensor::DLTensor
+    shape::Vector{Int64}
+    strides::Vector{Int64}
+    source::S  # Can be any array type (CPU or GPU)
+end
+
+# Outer constructor for CPU arrays
+function DLTensorHolder(arr::Union{Array{T}, SubArray{T}}, device::DLDevice=cpu()) where T
+    # Get shape
+    shape_tuple = size(arr)
+    ndim = length(shape_tuple)
+    shape_vec = collect(Int64, shape_tuple)
+
+    # Get strides - Julia provides this for both Array and SubArray
+    arr_strides = Base.strides(arr)
+    strides_vec = collect(Int64, arr_strides)
+
+    # Get data pointer - Julia's pointer() handles SubArray correctly
+    data_ptr = pointer(arr)
+
+    # byte_offset is 0 (pointer already points to first element)
+    byte_offset = UInt64(0)
+
+    # Get dtype
+    dt = DLDataType(T)
+
+    # Create DLTensor
+    tensor = DLTensor(
+        data_ptr,
+        device,
+        Int32(ndim),
+        dt,
+        pointer(shape_vec),
+        pointer(strides_vec),
+        byte_offset
+    )
+
+    return DLTensorHolder{T, typeof(arr)}(tensor, shape_vec, strides_vec, arr)
+end
+
+"""
+    Base.Ref(holder::DLTensorHolder) -> Ref{DLTensor}
+
+Get a reference to the tensor for passing to C functions.
+The holder keeps all data alive.
+"""
+Base.Ref(holder::DLTensorHolder) = Ref(holder.tensor)
+
+"""
+    Base.unsafe_convert(::Type{Ptr{DLTensor}}, holder::DLTensorHolder)
+
+Convert holder to pointer for C calls.
+This is called automatically by ccall.
+"""
+function Base.unsafe_convert(::Type{Ptr{DLTensor}}, holder::DLTensorHolder)
+    # Get pointer to the tensor field within the holder
+    # The tensor is the first field, so we can get its address
+    return Ptr{DLTensor}(pointer_from_objref(holder))
+end
+
+"""
+    from_julia_array(arr, device=cpu()) -> DLTensorHolder
+
+Create DLTensor holder from Julia array or slice.
+
+Supports CPU and GPU arrays, including zero-copy slices.
+
+# Examples
+```julia
+# Array
+holder = from_julia_array(x)
+
+# Slice (zero-copy)
+col = @view matrix[:, 2]
+holder = from_julia_array(col)
+
+# GPU array (auto-detects device)
+gpu_holder = from_julia_array(cuda_array)
+```
+"""
+function from_julia_array(arr::Union{Array{T}, SubArray{T}}, device::DLDevice=cpu()) where T
+    return DLTensorHolder(arr, device)
+end

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   This implementation of `from_julia_array` for CPU arrays allows specifying a 
`device` other than CPU. This can lead to creating a `DLTensorHolder` that 
incorrectly claims a CPU-backed array is on a GPU, which will likely cause 
memory access errors or crashes when used. It's safer to either remove the 
`device` parameter or add a check to ensure the device is a CPU device.
   
   ```julia
    function from_julia_array(arr::Union{Array{T}, SubArray{T}}, device::DLDevice=cpu()) where T
        if device.device_type != LibTVMFFI.kDLCPU
            error("Cannot create a DLTensorHolder for a CPU array with a non-CPU device. Device type provided: $(device.device_type)")
        end
        return DLTensorHolder(arr, device)
    end
   ```
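
   A quick way to sanity-check the suggested guard is to exercise both branches. The sketch below is illustrative only: the `DLDevice` constructor call and `kDLCUDA` constant are assumed for the sake of the example and may differ from the package's actual API.

   ```julia
   # Hypothetical usage of the guarded from_julia_array.
   x = rand(Float32, 4, 4)

   holder = from_julia_array(x)       # CPU device: succeeds
   @assert holder.source === x        # zero-copy: holder references the array

   # Passing a non-CPU device for a CPU-backed array should now fail fast
   # instead of producing a holder that misreports where the data lives.
   gpu_dev = DLDevice(Int32(LibTVMFFI.kDLCUDA), Int32(0))  # assumed constructor
   try
       from_julia_array(x, gpu_dev)
       @assert false  # unreachable if the guard is in place
   catch err
       @info "guard rejected mismatched device" err
   end
   ```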



##########
julia/TVMFFI/src/LibTVMFFI.jl:
##########
@@ -0,0 +1,417 @@
+#=
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+=#
+
+"""
+    LibTVMFFI
+
+Low-level C API bindings for TVM FFI.
+
+This module provides direct Julia bindings to the TVM FFI C API.
+All functions follow the naming convention from c_api.h.
+
+# Design Notes
+- Uses ccall for direct C function invocation
+- Matches C struct layouts exactly with Julia struct definitions
+- No intermediate abstractions - keep it simple and direct
+"""
+module LibTVMFFI
+
+using Libdl
+
+# Find the TVM FFI shared library
+# This assumes libtvm_ffi.so is in a standard location or LD_LIBRARY_PATH
+function _find_libtvm_ffi()
+    candidates = [
+        "libtvm_ffi.$(Libdl.dlext)",           # System library path
+        joinpath(@__DIR__, "..", "..", "..", "build", "lib", "libtvm_ffi.$(Libdl.dlext)")     # Build directory
+    ]
+
+    for lib in candidates
+        try
+            # Try to dlopen to verify it exists
+            if Sys.iswindows()
+                # On Windows, just check if file exists
+                if isfile(lib)
+                    return lib
+                end
+            else
+                # On Unix, try dlopen
+                handle = Libc.Libdl.dlopen(lib, Libc.Libdl.RTLD_LAZY)
+                if handle != C_NULL
+                    Libc.Libdl.dlclose(handle)

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The code uses `Libc.Libdl.dlopen` and `Libc.Libdl.dlclose`. The recommended 
public API is in the `Libdl` standard library, so `Libdl.dlopen` and 
`Libdl.dlclose` should be used instead. `Libc.Libdl` is an internal 
implementation detail and its use is discouraged.
   
   ```julia
                   handle = Libdl.dlopen(lib, Libdl.RTLD_LAZY)
                   if handle != C_NULL
                       Libdl.dlclose(handle)
   ```
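
   As a self-contained illustration of the public `Libdl` API suggested above, the probe can also use the `throw_error=false` keyword, which makes `dlopen` return `nothing` instead of raising when a candidate cannot be opened. This is a sketch under those assumptions, not the PR's implementation:

   ```julia
   using Libdl

   # Return true if `path` names a loadable shared library, using only the
   # public Libdl API. RTLD_LAZY defers symbol resolution until first use.
   function probe_library(path::AbstractString)
       handle = Libdl.dlopen(path, Libdl.RTLD_LAZY; throw_error=false)
       handle === nothing && return false
       Libdl.dlclose(handle)
       return true
   end

   # Example: take the index of the first candidate that actually loads.
   candidates = ["libtvm_ffi.$(Libdl.dlext)"]
   idx = findfirst(probe_library, candidates)  # `nothing` if none load
   ```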



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

