gemini-code-assist[bot] commented on code in PR #209:
URL: https://github.com/apache/tvm-ffi/pull/209#discussion_r2476315574


##########
docs/guides/build_from_source.md:
##########
@@ -24,7 +24,7 @@ This guide covers two common workflows:
 ```{admonition} Prerequisite
 :class: tip
 
-- Python: 3.9 or newer
+- Python: 3.11 or newer

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   The required Python version has been updated to 3.11 or newer. However, the
   development Docker environment in `tests/docker/Dockerfile` is based on
   Ubuntu 22.04, which ships with Python 3.10 by default, and the Dockerfile
   does not appear to install a newer version. This inconsistency could cause
   confusion or build failures for developers using the documented Docker
   setup. Please either update the Dockerfile to install Python 3.11+ or, if
   Python 3.10 is sufficient, relax the requirement here to match the
   environment (e.g., Python 3.10+).
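
   For illustration, a minimal sketch of what a fix on the Dockerfile side
   could look like, installing Python 3.11 on Ubuntu 22.04 via the deadsnakes
   PPA. The exact package set and the `update-alternatives` priority are
   assumptions for this sketch, not taken from `tests/docker/Dockerfile`:

   ```sh
   # Hypothetical RUN step (shell form) for tests/docker/Dockerfile:
   # install Python 3.11 from the deadsnakes PPA on Ubuntu 22.04 and
   # make it the default python3.
   apt-get update && apt-get install -y software-properties-common
   add-apt-repository -y ppa:deadsnakes/ppa
   apt-get update && apt-get install -y python3.11 python3.11-venv python3.11-dev
   update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
   ```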



##########
examples/quickstart/raw_compile.sh:
##########
@@ -59,3 +59,25 @@ g++ -fvisibility=hidden -O3                 \
 build/load_cpp
 # [load_cpp.end]
 fi
+
+# Example 4. Load and run `add_one_cuda.so` in C++
+# Before running this example, make sure you have a CUDA-capable GPU and the CUDA toolkit installed.
+# See CONTRIBUTING.md to use a pre-built Docker image with CUDA support.
+
+# if [ -f "$BUILD_DIR/add_one_cuda.so" ] && command -v nvcc >/dev/null 2>&1; then
+# # [load_cuda.begin]
+# g++ -fvisibility=hidden -O3                 \
+#     load/load_cuda.cc                       \
+#     $(tvm-ffi-config --cxxflags)            \
+#     $(tvm-ffi-config --ldflags)             \
+#     $(tvm-ffi-config --libs)                \
+#     -I/usr/local/cuda/include               \
+#     -L/usr/local/cuda/lib64                 \
+#     -lcudart                                \
+#     -Wl,-rpath,$(tvm-ffi-config --libdir)   \
+#     -Wl,-rpath,/usr/local/cuda/lib64        \

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The CUDA include and library paths are hardcoded to `/usr/local/cuda/`.
   This is not portable, since the CUDA toolkit can be installed in other
   locations. It would be more robust to locate the CUDA installation via an
   environment variable such as `$CUDA_HOME`, which would make the example
   script adaptable to different developer setups.
   
   ```suggestion
   #     -I${CUDA_HOME:-/usr/local/cuda}/include         \
   #     -L${CUDA_HOME:-/usr/local/cuda}/lib64           \
   #     -lcudart                                        \
   #     -Wl,-rpath,$(tvm-ffi-config --libdir)           \
   #     -Wl,-rpath,${CUDA_HOME:-/usr/local/cuda}/lib64  \
   ```
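
   Going one step further, the script could derive `CUDA_HOME` from the `nvcc`
   binary that the surrounding `command -v nvcc` guard already requires. A
   minimal sketch, assuming the standard toolkit layout where `nvcc` lives in
   `$CUDA_HOME/bin`:

   ```sh
   # Resolve CUDA_HOME: honor the caller's value if set, otherwise walk up two
   # levels from the nvcc binary (.../cuda/bin/nvcc -> .../cuda), and finally
   # fall back to the conventional /usr/local/cuda location.
   if [ -z "${CUDA_HOME:-}" ] && command -v nvcc >/dev/null 2>&1; then
     CUDA_HOME="$(dirname "$(dirname "$(command -v nvcc)")")"
   fi
   CUDA_HOME="${CUDA_HOME:-/usr/local/cuda}"
   ```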



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
