gemini-code-assist[bot] commented on code in PR #18673:
URL: https://github.com/apache/tvm/pull/18673#discussion_r2704026659


##########
python/tvm/relax/frontend/onnx/onnx_frontend.py:
##########
@@ -1675,9 +1691,14 @@ def _impl_v1(cls, bb, inputs, attr, params):
             output = cls.numpy_op(*np_inputs)  # pylint: disable=not-callable
             return relax.const(output, output.dtype)
 
-        # Expand inputs, stack them, then perform minimum over the new axis.
-        inputs = [bb.normalize(relax.op.expand_dims(i, axis=0)) for i in inputs]
-        stacked_tensor = relax.op.concat(inputs, axis=0)
+        input_shapes = [inp.struct_info.shape for inp in inputs]
+        current_target_shape = input_shapes[0]
+        for next_shape in input_shapes[1:]:
+            current_target_shape = compute_broadcast_shape(current_target_shape, next_shape)

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   For conciseness, you could use `functools.reduce` to compute the target broadcast shape, which would replace this loop with a single line.
   
   For example:
   ```python
   import functools
   ...
   current_target_shape = functools.reduce(compute_broadcast_shape, input_shapes)
   ```
   This would require adding `import functools` at the top of the file.
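   To illustrate the suggestion outside the TVM codebase, here is a minimal, self-contained sketch. The `compute_broadcast_shape` below is a hypothetical stand-in for the helper used in the PR, implementing NumPy-style broadcasting over plain integer tuples rather than Relax shape expressions:
   ```python
   import functools

   # Hypothetical stand-in for the PR's compute_broadcast_shape helper:
   # broadcasts two shapes (tuples of ints) under NumPy-style rules.
   def compute_broadcast_shape(a, b):
       # Right-align the shapes by left-padding the shorter one with 1s.
       a = (1,) * (len(b) - len(a)) + tuple(a)
       b = (1,) * (len(a) - len(b)) + tuple(b)
       result = []
       for x, y in zip(a, b):
           if x != 1 and y != 1 and x != y:
               raise ValueError(f"incompatible dimensions {x} and {y}")
           result.append(max(x, y))
       return tuple(result)

   input_shapes = [(3, 1), (1, 4), (2, 1, 1)]
   # reduce folds the pairwise broadcast over the whole list in one call,
   # replacing the explicit loop from the diff.
   target_shape = functools.reduce(compute_broadcast_shape, input_shapes)
   # target_shape == (2, 3, 4)
   ```
   The `reduce` form is equivalent to the loop because broadcasting is associative over a list of shapes; it simply folds the pairwise operation left to right.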



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
