@jinchenglee @acapone13 @Augusto  
I am sorry, the shared link can no longer be used. I have accelerated the 52 convolution layers on VTA in the dev branch of my GitHub fork:
https://github.com/i24361/incubator-tvm

The consistency problem with VTA on the ZCU104 platform turned out to be an internal logic bug in VTA; see
https://discuss.tvm.ai/t/rfc-vta-a-hls-c-vta-bug/6743

Due to the characteristics of BRAM, the fallback schedule for VTA conv2d produces incorrect results on a real FPGA. There are two ways to solve this problem: one is auto-tuning the conv2d schedules, the other is constructing a bypass for VTA. A sketch of the auto-tuning option follows below.
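To make the auto-tuning option concrete, here is a minimal sketch (not code from my dev branch) that tunes the conv2d tasks for the VTA target so the fallback schedule is never used on the board. It assumes `mod` and `params` are the Relay module and parameters of the network, already graph-packed for VTA (e.g. via `vta.top.graph_pack`), and that an RPC tracker for the ZCU104 board is running; the tracker host/port and log file name are placeholders.

```python
# Hedged sketch: tune every nn.conv2d task for VTA with AutoTVM.
import tvm
from tvm import autotvm, relay
import vta

env = vta.get_env()            # reads vta_config.json (ZCU104 in this case)
target = env.target            # VTA "ext_dev" target
tracker_host = "0.0.0.0"       # placeholder: your RPC tracker host
tracker_port = 9190            # placeholder: your RPC tracker port
log_file = "vta_conv2d.log"    # placeholder: tuning log

# Extract the tunable conv2d tasks from the (graph-packed) Relay program.
tasks = autotvm.task.extract_from_program(
    mod["main"],
    params=params,
    ops=(relay.op.get("nn.conv2d"),),
    target=target,
    target_host=env.target_host,
)

# Measure candidate schedules on the real board through the RPC tracker.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.RPCRunner(
        env.TARGET,            # device key registered with the tracker
        host=tracker_host,
        port=tracker_port,
        number=5,
        timeout=60,
    ),
)

for task in tasks:
    tuner = autotvm.tuner.RandomTuner(task)
    tuner.tune(
        n_trial=min(1000, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file(log_file)],
    )

# Then compile under the tuned schedules instead of the fallback ones:
#   with autotvm.apply_history_best(log_file):
#       ... relay.build(...) as in the usual VTA deployment flow ...
```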