When doing a reduction, the predicate guarding the
[initialization](https://github.com/apache/tvm/blob/adf560ebed8465c22bf58f406d0a8d20663cdd1d/src/te/operation/compute_op.cc#L488)
skips the boundary checks on the original itervars by setting the
`skip_ivar_domain` argument in
[MakeBoundCheck](ht
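To see the initialization step this refers to, you can lower a small reduction and inspect the init statement the compiler emits; a minimal sketch using the TE API (shapes and names are illustrative):
```python
import tvm
from tvm import te

n, k = 16, 32
A = te.placeholder((n, k), name="A")
r = te.reduce_axis((0, k), name="r")
# The lowered code first emits an init statement (B[i] = 0) and then the
# accumulation over r; the predicate discussed above guards that init step.
B = te.compute((n,), lambda i: te.sum(A[i, r], axis=r), name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```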
If you are using the `relay.build()` -> `graph_executor.GraphModule` path, the
key point, as I remember it, is to pass a multi-target dict into the `target`
argument of `build` and a device list into `GraphModule`, like:
```python
lib = relay.build(relay_mod, target={"cpu": "llvm", "gpu": "cuda"},
                  params=params)  # completing the truncated call; `relay_mod`/`params` come from your frontend import
```
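and then, for the device-list side, something like the following sketch, assuming the usual factory pattern where `lib["default"]` takes one device per target:
```python
import tvm
from tvm.contrib import graph_executor

# One device per entry in the target dict.
devs = [tvm.cpu(0), tvm.cuda(0)]
module = graph_executor.GraphModule(lib["default"](*devs))
```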
Hi~ Can this unittest case help you?
https://github.com/apache/tvm/blob/be03d62e5b0afd607964365bc73e94f72fdfaaef/tests/python/relay/test_vm.py#L1071
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-do-heterogeneous-execution-on-cpu-and-gpu/11561/2) to respond.
Same question when trying to convert ConvBERT. Any help?
---
[Visit Topic](https://discuss.tvm.apache.org/t/notimplementederror-the-following-operators-are-not-implemented-aten-im2col/10334/2) to respond.
We use threads for parallelism within an operator. By "concurrency" I meant
something like asynchronous execution among operators (also called
inter-operator parallelism).
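As a plain-Python illustration (hypothetical, not TVM's runtime API): intra-operator parallelism happens inside one operator, while inter-operator parallelism runs independent operators side by side.
```python
from concurrent.futures import ThreadPoolExecutor

def op_a(x):
    # A single operator; inside it, TVM's thread pool would provide
    # intra-operator parallelism.
    return x * 2

def op_b(x):
    return x + 1

# Inter-operator parallelism ("concurrency"): two independent operators
# execute asynchronously at the same time.
with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(op_a, 3)
    fb = pool.submit(op_b, 4)
    print(fa.result(), fb.result())  # -> 6 5
```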
---
[Visit Topic](https://discuss.tvm.apache.org/t/confused-about-kmaxnumgpus-in-runtime/11536/3) to respond.
Hello,
I have read some posts on the forum, but I am still confused about this.
1) If I want to use Relay to build a simple network and heterogeneously
execute some ops on GPU and others on CPU, there seem to be two different
ways.
* One is through `relay.annotation.on_device` and `relay.device_copy` (a minimal sketch follows below)
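For illustration, here is a minimal sketch of the first way, assuming a Relay version where `relay.annotation.on_device` takes an expression and a device (the network and names are made up):
```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
y = relay.nn.relu(x)
# Pin the relu to the CPU; unannotated ops fall back to the default device,
# and the device-planning pass inserts device_copy nodes at the boundaries.
y = relay.annotation.on_device(y, tvm.cpu(0))
z = relay.add(y, relay.const(1.0))
func = relay.Function([x], z)
```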
@grant-arm
Thanks a lot for the link. I'll have a look at the repo and see if standalone
execution is possible for any MCUs.
---
[Visit Topic](https://discuss.tvm.apache.org/t/standalone-execution-is-possible-after-all/11558/3) to respond.
Hi @sho,
If you're looking for a demo of a standalone application running on an MCU, you
could take a look at
https://github.com/apache/tvm/tree/main/apps/microtvm/ethosu . Although this
demo is an example of how to use TVM to run a model and offload operators to
the microNPU, it should pro
Thank you for looking at my question.
I'm currently trying to run inference on bare-metal devices. More specifically,
I'd like to use microTVM (or the outputs from microTVM) on **MCUs**.
However, I went through some tutorials and notebooks created by developers:
https://github.com/areusch/microtvm-bl
Currently, I am trying to learn and understand TVM. I have an MXNet model and I
want to run AutoTVM, so I am trying to use `autotvm.task.extract_from_program`.
The code is similar to this:
```python
import tvm
from tvm import autotvm

# `mod` and `params` come from relay.frontend.from_mxnet(...)
target = tvm.target.cuda()
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, target_host=target_host, params=params
)
```