[Apache TVM Discuss] [Questions] FoldConstant doesn't fold two consecutive add

2020-10-09 Thread Lily Orth-Smith via Apache TVM Discuss
If I understand your question, you want the two adds to fuse into one add. To do this, I would try using the FuseOps pass after FoldConstant and FoldScaleAxis. (Without seeing more of the program, I can't tell what your two adds are adding -- can you post an excerpt of the relay program here a

[Apache TVM Discuss] [Questions] FoldConstant doesn't fold two consecutive add

2020-10-09 Thread JoeyChou via Apache TVM Discuss
Hi, I am using TVM to load an MXNet model and saw that there are two consecutive `add` ops that do not get constant folded. Below are three screenshots showing **(1) the original MXNet model**, **(2) the model without `FoldConstant` and `FoldScaleAxis`**, and **(3) with the relay transforms as below, which ha


[Apache TVM Discuss] [Questions] Where to find Generated RTL Files

2020-10-09 Thread sachacon via Apache TVM Discuss
Hello, I've been running the VTA tutorials, specifically the "Deploy Pretrained Vision Model from MxNet on VTA" tutorial, on my PYNQ board. Where should I look to find the netlist and generated RTL files? Would this be somewhere in the tvm directory, or on the PYNQ board after runtime? O

[Apache TVM Discuss] [Questions] Support for pre-quantized model int8/uint8 conversion

2020-10-09 Thread JoeyChou via Apache TVM Discuss
Yes, I really appreciate your help! --- [Visit Topic](https://discuss.tvm.apache.org/t/support-for-pre-quantized-model-int8-uint8-conversion/8064/5) to respond. You are receiving this because you enabled mailing list mode. To unsubscribe from these emails, [click here](https://discuss.tvm

[Apache TVM Discuss] [Questions] Support for pre-quantized model int8/uint8 conversion

2020-10-09 Thread Animesh Jain via Apache TVM Discuss
Yes, it does. The legalize pass can do this. --- [Visit Topic](https://discuss.tvm.apache.org/t/support-for-pre-quantized-model-int8-uint8-conversion/8064/4) to respond.

[Apache TVM Discuss] [Questions] Support for pre-quantized model int8/uint8 conversion

2020-10-09 Thread JoeyChou via Apache TVM Discuss
Hi @anijain2305, thanks for the reply. I should've made myself clear: what I meant was, if the model (weights and biases) was quantized to uint8, does TVM have a way to convert the uint8 weights and biases to int8? I will certainly try what you suggested, thank you. --- [Visit Topi

[Apache TVM Discuss] [Questions] What is the best approach to convert a pytorch model to TVM?

2020-10-09 Thread Leandro Nunes (Arm) via Apache TVM Discuss
I remember seeing this PR, which is not merged yet, mentioning torch 1.6. Maybe @masahi can comment here. https://github.com/apache/incubator-tvm/pull/6602 --- [Visit Topic](https://discuss.tvm.apache.org/t/what-is-the-best-approach-to-convert-a-pytorch-model-to-tvm/8122/4) to respond.

[Apache TVM Discuss] [Questions] What is the best approach to convert a pytorch model to TVM?

2020-10-09 Thread Nauman007 via Apache TVM Discuss
Can I use torch==1.6.0 and torchvision==0.7? Because I trained my model using these versions... --- [Visit Topic](https://discuss.tvm.apache.org/t/what-is-the-best-approach-to-convert-a-pytorch-model-to-tvm/8122/3) to respond.