[Apache TVM Discuss] [Questions] Can One reduce stage fuse into another reduce stage?

2022-03-21 Thread masahi via Apache TVM Discuss
[quote="jnwang, post:1, topic:12367"] I would like to know if it is possible to combine two reduce stages (te.sum) into one reduce stage in te [/quote] I'm not sure what you mean here, but if I take it literally, that wouldn't be feasible. But there is an example of scheduling fused conv2d -

[Apache TVM Discuss] [Questions] Can One reduce stage fuse into another reduce stage?

2022-03-21 Thread JiaNan Wang via Apache TVM Discuss
Hello, I am trying to fuse one layer's convolution computation and its ReLU result into the next layer's convolution computation. I tried two methods: one is to use a te.sum expression as a parameter of another te.sum, and the other is to use s.compute_inline(), but both fail. I would like to know if i
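For reference, a minimal TE sketch (hypothetical shapes and stage names, not the poster's actual kernels) of why one of the two approaches fails: inlining an elementwise stage such as ReLU into a consumer that reduces over it works, while inlining a stage that itself contains a te.sum (e.g. the previous layer's convolution) does not.

```python
import tvm
from tvm import te

# Hypothetical shapes for illustration.
N, C, H, W = 1, 64, 14, 14
data = te.placeholder((N, C, H, W), name="data")

# Elementwise stage: ReLU (no reduction of its own).
relu = te.compute(
    data.shape,
    lambda n, c, h, w: te.max(data[n, c, h, w], tvm.tir.const(0.0, data.dtype)),
    name="relu",
)

# Reduction stage consuming the ReLU result.
k = te.reduce_axis((0, C), name="k")
out = te.compute(
    (N, H, W), lambda n, h, w: te.sum(relu[n, k, h, w], axis=k), name="out"
)

s = te.create_schedule(out.op)
# Inlining an elementwise producer into a reducing consumer is allowed...
s[relu].compute_inline()
# ...but calling compute_inline() on a stage that itself contains a te.sum
# raises an error, matching the failure described above.
print(tvm.lower(s, [data, out], simple_mode=True))
```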

[Apache TVM Discuss] [Questions] Why should we set parameter 'shape_dict' when importing models?

2022-03-21 Thread Andrew Zhao Luo via Apache TVM Discuss
I believe you will need to do this if some inputs in your ONNX model don't have fully defined shapes. E.g. you might have the batch dimension left undefined, so in your ONNX model the input shape will be something like ['?', 3, 224, 224]. In this case, if you have a fixed shape, setting it is probably helpful; otherwise yo
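A minimal sketch of what the reply above describes, assuming a hypothetical model file and input name: pinning the dynamic batch dimension to a concrete value via the shape dictionary before importing.

```python
import onnx
from tvm import relay

# Hypothetical file name and input name, for illustration only.
onnx_model = onnx.load("model.onnx")

# If the ONNX graph declares its input as ['?', 3, 224, 224] (dynamic batch),
# pinning the batch dimension to a fixed value lets Relay build a static graph.
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
```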

[Apache TVM Discuss] [Questions] [VTA] cma settings for de10 nano

2022-03-21 Thread Apvgithub via Apache TVM Discuss
I'm facing the same issue; did you finally solve it? The CMA memory isn't freed in any of the tvm-vta examples except for the deploy_classification.py example. --- [Visit Topic](https://discuss.tvm.apache.org/t/vta-cma-settings-for-de10-nano/9696/8) to respond.

[Apache TVM Discuss] [Questions] Why should we set parameter 'shape_dict' when importing models?

2022-03-21 Thread Liu Yuchen via Apache TVM Discuss
Will there be any difference when 'shape_dict' is NOT set? Here is the normal code which sets the parameter: `relay.frontend.from_onnx(model, shape_dict=shape_dict)` --- [Visit Topic](https://discuss.tvm.apache.org/t/why-should-we-set-parameter-shape-dict-when-importing-models/12365/1) to respond.

[Apache TVM Discuss] [Questions] How to interpret the log from autotuning

2022-03-21 Thread Ming-Hsuan-Tu via Apache TVM Discuss
Hi, I am running autotuning with autotvm. The tuning takes very long, and I want to know the currently measured network time; is it possible to get it from the log? The related log looks like the following, and I could not find good documentation describing it. DEBUG:autotvm: SA iter: 200 ... max-0
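The `SA iter` lines come from the simulated-annealing search over the cost model rather than from on-device measurements, so the measured times have to be read from the tuning log records themselves. One way to do that (a sketch, assuming a hypothetical log file name) is to parse the log with autotvm.record.load_from_file and keep the best cost seen per workload:

```python
from tvm import autotvm

# Hypothetical log file name for illustration.
log_file = "tune.log"

# Each record holds the measured run times of one trial; the smallest mean
# cost seen so far per workload is the current best measured kernel time.
best = {}
for inp, res in autotvm.record.load_from_file(log_file):
    if res.error_no != 0:  # skip failed measurements
        continue
    cost = sum(res.costs) / len(res.costs)
    wkl = inp.task.workload
    if wkl not in best or cost < best[wkl]:
        best[wkl] = cost

for wkl, cost in best.items():
    print(f"{wkl}: {cost * 1e3:.3f} ms")
```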