I'm not sure if it works with the latest version ... I worked it out to make a
bitstream with 2019.2, which I had installed. In summary (for 2019.2), the DRAM
config needs a new parameter.
Also, .sysdef is gone ...
Here is my diff:
```diff
--- a/hardware/xilinx/scripts/vivado.tcl
+++ b/hardware/xilinx/s
```
Yes, I remember that TVM's implementation of deformable conv is modeled after MXNet's.
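For context, the core step that both implementations share is sampling the input at fractional (offset) positions via bilinear interpolation, with out-of-bounds samples commonly treated as zero. A minimal NumPy sketch of that sampling step (names are illustrative, not either library's API):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a 2-D array at fractional coords (y, x) via bilinear
    interpolation; positions outside the image contribute zero."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):          # the four integer neighbors
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                wy = 1.0 - abs(y - yy)   # linear weight along y
                wx = 1.0 - abs(x - xx)   # linear weight along x
                val += wy * wx * img[yy, xx]
    return val

img = np.arange(4, dtype=float).reshape(2, 2)  # [[0, 1], [2, 3]]
center = bilinear_sample(img, 0.5, 0.5)        # average of all four -> 1.5
```

A deformable conv then runs an ordinary convolution over values gathered this way, with the offsets predicted by a separate conv branch.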
---
[Visit Topic](https://discuss.tvm.ai/t/deformable-conv-implementations-differences-between-pytorch-torhcvision-tvm/7702/5) to respond.
Try using the MXNet DCN; it turns out that the TVM and MXNet DCN
implementations give the same results. The code can be found at
https://github.com/irvingzhang0512/tvm_tests/blob/master/dcn/dcn_tests.py
```
deformable_conv2d is not optimized for this platform.
pytorch torchvision dcn vs tvm relay dcn 10.221999
mmcv dcn vs to
```
Thank you for your reply.
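The comparison the linked script makes boils down to measuring the maximum absolute difference between the outputs of two implementations. A self-contained sketch of that check (the arrays here are synthetic stand-ins, not real DCN outputs):

```python
import numpy as np

def max_abs_diff(a, b):
    """Largest elementwise absolute difference between two outputs."""
    return float(np.max(np.abs(a - b)))

rng = np.random.default_rng(0)
out_a = rng.standard_normal((1, 8, 16, 16))        # e.g. reference output
out_b = out_a + 1e-7 * rng.standard_normal(out_a.shape)  # near-identical copy

diff = max_abs_diff(out_a, out_b)
# A tiny diff (~1e-7 here) means the implementations agree up to
# floating-point noise; a diff like 10.22 means they genuinely differ.
```

By that criterion, the 10.22 figure in the log above indicates a real semantic difference between the torchvision and TVM DCN, while the MXNet comparison agreeing suggests TVM follows MXNet's convention.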
---
[Visit Topic](https://discuss.tvm.ai/t/vta-when-can-i-make-vta-bitstream-file-with-hls-blackbox/7651/3) to respond.
Hi @kazum, thank you for your suggestion. I'm able to start the RPC tracker.
I also tried mostly the same approach as @jacobpostman, but while tuning the
sample model I got the logs below.
```
[Task 1/16] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (15/100) | 600.01 s
[19:07:37] /Users/Dileep/L
```
I got this error before. It was caused by LLVM returning a generic target
triple as the default. TVM asks LLVM for the default target triple during
compilation, and I guess that LLVM has it set to generic and has no codegen
for it.
A fast fix is to specify explicitly which triple TVM should use.
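In TVM that means putting an explicit triple in the target string, e.g. `"llvm -mtriple=x86_64-apple-darwin"` (check the exact option name against your TVM version). To illustrate what a triple encodes, here is a small parser sketch, purely illustrative and not TVM or LLVM API:

```python
def parse_triple(triple):
    """Split an LLVM-style target triple "<arch>-<vendor>-<os>[-<abi>]"
    into its components, e.g. "x86_64-apple-darwin"."""
    parts = triple.split("-")
    arch = parts[0]
    vendor = parts[1] if len(parts) > 1 else "unknown"
    # OS (plus optional ABI suffix) is everything after the vendor.
    os_abi = "-".join(parts[2:]) if len(parts) > 2 else "unknown"
    return arch, vendor, os_abi

arch, vendor, os_abi = parse_triple("armv7-none-linux-gnueabihf")
```

When LLVM reports a bare/generic triple, the codegen lookup keyed on the arch component fails, which is why pinning the triple in the target string fixes it.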
Hi @kazum - Thank you for the previous suggestions. I am also looking at how to
use autotvm to tune a model on iOS.
Below is a modified version of 'tutorials/autotvm/tune_relay_arm.py' that is
based on your previous suggestion of adding a build_func, but something
isn't working quite right.