Does TVM's ONNX frontend support CUDA? I want to run Faster R-CNN from ONNX on TVM. It throws an error in the Relay frontend function.
---
[Visit Topic](https://discuss.tvm.ai/t/regarding-onnx-cuda-support/3723/1) to
respond.
You are receiving this because you enabled mailing list mode.
CC: @were if you have bandwidth
---
[Visit
Topic](https://discuss.tvm.ai/t/can-i-use-topi-operators-in-hybrid-script/3722/2)
to respond.
Generally speaking, TVM favors small ops over large ones, because the compiler can then optimize automatically (e.g. fuse operators where possible).
In your particular case (using topi operators in hybrid script), I would suggest building up a Relay IR instead.
Python tuple sh
When you set `TARGET` to `tsim` in the `vta_config.json` file, everything related to VTA executes on the cycle-accurate hardware simulation (Chisel), including the resnet-deploy example. However, there is still some debugging work to be done for ResNet on the Chisel version, because of remaining accuracy issues.
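For reference, the switch described above is a single field in `vta_config.json`; the file contains other fields as well, omitted here:

```json
{
  "TARGET": "tsim"
}
```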
Thanks for your reply, Luis.
As for 'passing': do you mean that when we set 'tsim' as the target in vta_config.json, all benchmarks/tests run on top of vta/hardware/chisel, but the real ResNet model still runs on top of the TSIM simulator?
---
The TVM runtime currently traverses the graph topologically and executes each node sequentially.
https://github.com/dmlc/tvm/blob/5f9c5e43020a602427b7995afb9eedf2b695eea8/src/runtime/graph/graph_runtime.cc#L329
The execution order of the "parallel" nodes is consistent with the order in which they appear in the serialized graph.
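The behavior described above can be sketched in plain Python (this is an illustration of the idea, not the actual C++ graph runtime; node names mirror the GoogLeNet question below):

```python
# Sequential execution in stored (topological) order: even "parallel"
# branches run one after another, in the order the nodes are serialized.
def run_graph(nodes, deps, execute):
    """nodes: list in serialized order; deps: node -> list of input nodes."""
    done = set()
    order = []
    for node in nodes:
        # The serialized order is assumed topological: inputs come first.
        assert all(d in done for d in deps.get(node, [])), "not topological"
        execute(node)
        done.add(node)
        order.append(node)
    return order

# Three parallel branches after T1 (T2->T3, T4->T5, T6->T7).
nodes = ["T0", "T1", "T2", "T3", "T4", "T5", "T6", "T7"]
deps = {"T1": ["T0"], "T2": ["T1"], "T3": ["T2"],
        "T4": ["T1"], "T5": ["T4"], "T6": ["T1"], "T7": ["T6"]}
order = run_graph(nodes, deps, lambda n: None)
print(order)  # runs in the stored order T0..T7
```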
Hey,
There is only one version of VTA in Chisel. As the name suggests, `tsim_example` is an example of TSIM (cycle-accurate simulation) for a really simple accelerator using some components of VTA, i.e. DPI.
This Chisel version is currently `passing` all microbenchmarks/tests except the end-to-end ResNet run.
Hi FrozenGene,
In GoogLeNet there are parallel branches, as shown in the following figure:

Possible execution sequences are:
T0->T1->T2->T3->T4->T5->T6->T7
T0->T1->T6->T7->T4->T5->T2->T3
How does Relay/TVM determine the OP execution sequence?
Does anyone know how to debug OPs in topi?
I'm trying to add new OPs in topi, and it is hard to debug. Is there any way to
easily debug?
---
[Visit
Topic](https://discuss.tvm.ai/t/does-anyone-know-how-to-debug-ops-in-topi/3715/1)
to respond.
Modify your line 11:
`shape_dict = {"input.1": resnet_input.shape}`
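Spelled out a little more: the key in `shape_dict` must match the input name in the ONNX graph (`"input.1"` here), and the value is that input's shape. The shape below is an assumed example for a typical ResNet input:

```python
# Hypothetical sketch: shape_dict maps the ONNX graph's input name to its
# shape; the (1, 3, 224, 224) shape is an assumption, not from the thread.
import numpy as np

resnet_input = np.random.randn(1, 3, 224, 224).astype("float32")
shape_dict = {"input.1": resnet_input.shape}
# The dict is then passed to the Relay ONNX frontend, e.g.:
# mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
print(shape_dict)
```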
---
[Visit Topic](https://discuss.tvm.ai/t/error-relay-frontend-from-onnx/3714/2)
to respond.
The Facebook team is already doing this work to support TVM for PyTorch training acceleration; see:
https://github.com/pytorch/tvm
As far as I know, MXNet also supports TVM as a training backend.
---
[Visit Topic](https://discuss.tvm.ai/t/is-tvm-applicable-for-training/3060/4)
to respond.
@alopez_13 thanks for the nice initial draft! I hope other experienced members can also contribute to describing each of those steps in more detail. What is difficult about this is not only the number of files that have to be modified, but also that there are differences from one operator to another.