[quote="jnwang, post:1, topic:12367"]
I would like to know if it is possible to combine two reduce stages (te.sum)
into one reduce stage in te
[/quote]
I'm not sure what you mean here, but if I take it literally, that wouldn't be
feasible.
But there is an example of scheduling fused conv2d -
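To make the "combine two reduce stages" idea concrete, here is a minimal numpy sketch of the underlying algebra: reducing over one axis and then another gives the same result as a single reduction over both axes. In TE this corresponds to declaring two `te.reduce_axis` variables inside a single `te.sum`, rather than chaining two `te.sum` stages (a sketch of the idea, not a statement about what the TE scheduler will accept in every case).

```python
import numpy as np

# Two chained reductions vs. one reduction over both axes.
# This is the algebraic identity behind merging two te.sum stages
# into one te.sum with two te.reduce_axis variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))

two_stage = A.sum(axis=2).sum(axis=1)  # reduce axis 2, then axis 1
one_stage = A.sum(axis=(1, 2))         # one reduction over both axes

assert np.allclose(two_stage, one_stage)
```

This identity only holds because summation is associative and commutative; any nonlinear operation between the two stages breaks it.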
Hello, I am trying to fuse one layer's convolution computation and its ReLU
result into the next layer's convolution computation. I tried two methods: one
is to use a te.sum expression as a parameter of another te.sum, and the other
is to use s.compute_inline(), but both fail. I would like to know if it is
possible to combine two reduce stages (te.sum) into one reduce stage in te.
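One reason the conv + ReLU + conv case resists a naive merge is that ReLU is nonlinear, so it does not commute with the reduction: the first convolution's sum cannot be pushed through the ReLU into the second convolution's sum. A tiny numpy illustration (my own example, not from the thread):

```python
import numpy as np

# ReLU does not commute with summation, so two reductions separated
# by a ReLU cannot be folded into a single closed-form reduction.
def relu(x):
    return np.maximum(x, 0.0)

x = np.array([3.0, -5.0, 4.0])
after = relu(x.sum())   # ReLU applied after the reduction  -> 2.0
before = relu(x).sum()  # ReLU applied before the reduction -> 7.0
```

Because the two orders disagree, fusing the stages has to keep the ReLU as an intermediate compute stage (e.g. via scheduling) rather than merging the sums algebraically.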
I believe you will need to do this if some inputs in your ONNX model don't
have fully defined shapes. E.g. you might have the batch dimension not
defined, so in your ONNX model the shape will be something like
['?', 3, 224, 224]. In this case, if you have a fixed shape, it probably is
helpful; otherwise yo
I'm facing the same issue, did you finally solve it?
The CMA memory isn't freed in any of the TVM-VTA examples except for the
deploy_classification.py example.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/vta-cma-settings-for-de10-nano/9696/8)
to respond.
Will there be any difference if 'shape_dict' is NOT set?
Here is the normal code, which sets the parameter:
`relay.frontend.from_onnx(model, shape_dict=shape_dict)`
---
[Visit
Topic](https://discuss.tvm.apache.org/t/why-should-we-set-parameter-shape-dict-when-importing-models/12365/1)
to respond.
Hi,
I am running autotuning from autotvm.
The running time is very long, and I would like to know the currently measured
network time. Is it possible to get it from the log?
The related log looks like the following; I could not find documentation that
describes it well.
DEBUG:autotvm: SA iter: 200 ... max-0
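The `SA iter` lines come from the simulated-annealing model optimizer and don't report measured times directly. The per-config measured times are easier to read from the JSON record file the tuner writes (e.g. via `autotvm.callback.log_to_file`). A sketch of parsing one record line follows; the line itself is made up, and the `"result"` schema of `[costs, error_no, all_cost, timestamp]` with costs in seconds reflects my understanding of 0.x-era autotvm logs, so verify it against your own log file.

```python
import json

# Hypothetical autotvm record line (format assumed, see lead-in):
# "result" = [list of measured costs in seconds, error_no,
#             total wall time for the trial, timestamp]
line = ('{"input": ["llvm", "conv2d", [], {}], '
        '"result": [[0.0012, 0.0011], 0, 2.5, 1600000000.0], '
        '"version": 0.2}')

record = json.loads(line)
costs, error_no, all_cost, timestamp = record["result"]
if error_no == 0:  # error_no != 0 marks a failed measurement
    mean_s = sum(costs) / len(costs)
    print(f"mean measured time: {mean_s * 1e3:.3f} ms")
```

Iterating this over every line of the log file and tracking the minimum mean cost gives the best measured time so far during tuning.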