gemini-code-assist[bot] commented on code in PR #18421:
URL: https://github.com/apache/tvm/pull/18421#discussion_r2533752398


##########
tests/python/codegen/test_target_codegen_cuda.py:
##########
@@ -877,5 +877,37 @@ def main(A: T.Buffer((16, 16), "float32"), B: T.Buffer((16, 16), "float32")):
     assert "return;" in cuda_code
 
 
+@tvm.testing.requires_gpu
+@tvm.testing.requires_cuda
+def test_cuda_loop_step():
+    @T.prim_func
+    def cuda_loop_step(
+        A: T.Buffer((1024,), "float32"),
+        B: T.Buffer((1024,), "float32"),
+        C: T.Buffer((1024,), "float32"),
+    ):
+        # Each thread computes a strided subset of the i loop: start = tx*3, step = 96 (3 * 32 threads)
+        for bx in T.thread_binding(1, "blockIdx.x"):
+            for tx in T.thread_binding(96, "threadIdx.x"):
+                for i in T.serial(tx, 4096, step=96):

Review Comment:
   It seems the previous suggestion was not applied. The loop's stop condition of `4096` will still cause out-of-bounds memory accesses, since the buffers `A`, `B`, and `C` are all declared with a size of 1024. The test expects every element to be computed, so the loop must cover indices 0 through 1023. With the given threading scheme (96 threads, step 96), that means the stop value should be 1024.
   
   This is a `critical` issue as it can lead to incorrect behavior or crashes due to memory corruption.
   
   ```suggestion
                   for i in T.serial(tx, 1024, step=96):
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

