gemini-code-assist[bot] commented on code in PR #120:
URL: https://github.com/apache/tvm-ffi/pull/120#discussion_r2430234958


##########
README.md:
##########
@@ -23,16 +23,17 @@ Apache TVM FFI is an open ABI and FFI for machine learning systems. It is a mini
 yet flexible open convention with the following systems in mind:
 
 - Kernel libraries: ship one wheel to support multiple frameworks, Python versions, and different languages.
-- Kernel DSLs: reusable open ABI for JIT and AOT kernel exposure to PyTorch, JAX, and other machine learning systems.
+- Kernel DSLs: reusable open ABI for JIT and AOT kernel exposure to PyTorch, JAX, and other ML runtimes.
 - ML frameworks and runtimes: unified mechanism to connect libraries and DSLs that adopt the ABI convention.
 - Coding agents: unified mechanism to package and ship generated code to production environments.
-- ML infrastructure: cross-language support for Python, C++, Rust, and other languages that interface with the ABI.
+- ML infrastructure: cross-language support for Python, C++, and Rust, and DSLs.
 
 It has the following technical features:
 
+- DLPack-compatible Tensor data ABI to seamlessly support many frameworks such as PyTorch, JAX, cuPy and others that support DLPack convention.

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   There's a small typo in the name "cuPy". It should be capitalized as "CuPy" to correctly refer to the library.
   
   ```suggestion
   - DLPack-compatible Tensor data ABI to seamlessly support many frameworks such as PyTorch, JAX, CuPy and others that support DLPack convention.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

