[Apache TVM Discuss] [Questions] Why do reduction's init predicate and its main body have different boundary checking behavior

2021-11-25 Thread jinkun via Apache TVM Discuss
When doing a reduction, the predicate guarding the [initialization](https://github.com/apache/tvm/blob/adf560ebed8465c22bf58f406d0a8d20663cdd1d/src/te/operation/compute_op.cc#L488) skips the boundary checking of the original itervars by setting the `skip_ivar_domain` argument in [MakeBoundCheck](ht
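Not part of the original post, but one way to see the two bodies being discussed is to lower a simple TE reduction with a split that does not evenly divide the extent: the printed TIR contains both the init statement and the main reduction body, along with the boundary predicates that `MakeBoundCheck` emits around each. A minimal sketch against the TVM 0.8-era `te` API:

```python
import tvm
from tvm import te

# Sum-reduce each row of A into B; the split with factor 32 forces boundary
# conditions whenever n % 32 != 0, which is where the init/main-body
# predicate difference becomes visible in the lowered TIR.
n = te.var("n")
A = te.placeholder((n, n), name="A")
k = te.reduce_axis((0, n), name="k")
B = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="B")

s = te.create_schedule(B.op)
io, ii = s[B].split(B.op.axis[0], factor=32)
print(tvm.lower(s, [A, B], simple_mode=True))
```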

[Apache TVM Discuss] [Questions] How to do heterogeneous execution on cpu and gpu?

2021-11-25 Thread wrongtest via Apache TVM Discuss
If you are using the `relay.build()` -> `graph_executor.GraphModule` path, the point I remember is that you should pass a multi-target dict into the `target` argument of `relay.build` and a device list into `GraphModule`, like
```python
lib = relay.build(relay_mod, target={"cpu": "llvm", "gpu": "cuda"}, par
```
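The snippet above is cut off in the digest; the following is a rough, untested sketch of that path, assuming a CUDA-enabled TVM build (the exact target-dict keys and the device order can differ between TVM releases):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Build a tiny Relay module against two targets and run it with one device
# per target; each op runs on the device it was assigned at build time.
x = relay.var("x", shape=(4,), dtype="float32")
relay_mod = tvm.IRModule.from_expr(relay.Function([x], relay.add(x, x)))

lib = relay.build(relay_mod, target={"cpu": "llvm", "gpu": "cuda"})

module = graph_executor.GraphModule(lib["default"](tvm.cpu(0), tvm.cuda(0)))
module.set_input("x", np.ones((4,), dtype="float32"))
module.run()
print(module.get_output(0).numpy())
```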

[Apache TVM Discuss] [Questions] How to do heterogeneous execution on cpu and gpu?

2021-11-25 Thread wrongtest via Apache TVM Discuss
Hi~ Can this unittest case help you? https://github.com/apache/tvm/blob/be03d62e5b0afd607964365bc73e94f72fdfaaef/tests/python/relay/test_vm.py#L1071 --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-do-heterogeneous-execution-on-cpu-and-gpu/11561/2) to respond.

[Apache TVM Discuss] [Questions] NotImplementedError: The following operators are not implemented: ['aten::im2col']

2021-11-25 Thread popojames via Apache TVM Discuss
Same question when trying to convert ConvBERT. Any help? --- [Visit Topic](https://discuss.tvm.apache.org/t/notimplementederror-the-following-operators-are-not-implemented-aten-im2col/10334/2) to respond.

[Apache TVM Discuss] [Questions] Confused about kMaxNumGPUs in runtime

2021-11-25 Thread masahi via Apache TVM Discuss
We use threads for parallelism within an operator. By "concurrency" I meant something like asynchronous execution among operators (also called inter-operator parallelism). --- [Visit Topic](https://discuss.tvm.apache.org/t/confused-about-kmaxnumgpus-in-runtime/11536/3) to respond.

[Apache TVM Discuss] [Questions] How to do heterogeneous execution on cpu and gpu?

2021-11-25 Thread yanyu1268 via Apache TVM Discuss
Hello, I have read some posts from the forum, but I am still confused.
1) If I want to use Relay to build a simple network and execute it heterogeneously, with some ops on the GPU and others on the CPU, there seem to be two different ways (a rough sketch of the first one follows below).
* One is through relay.annotation.on_device and relay.device_copy
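As an illustration of the first way mentioned above (not code from the thread), `relay.annotation.on_device` pins individual expressions to a device before building; argument names and defaults for `on_device` vary a bit across TVM releases, so treat this as a sketch:

```python
import tvm
from tvm import relay

# Annotate which device each op should run on; device_copy nodes are
# inserted automatically at device boundaries during compilation.
x = relay.var("x", shape=(4,), dtype="float32")
y = relay.annotation.on_device(relay.add(x, x), tvm.cpu())  # add on CPU
z = relay.annotation.on_device(
    relay.multiply(y, relay.const(2.0, "float32")), tvm.cuda()  # multiply on GPU
)
mod = tvm.IRModule.from_expr(relay.Function([x], z))
```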

[Apache TVM Discuss] [Questions] Standalone execution is possible after all?

2021-11-25 Thread sho via Apache TVM Discuss
@grant-arm Thanks a lot for the link. I'll have a look at the repo and see if standalone execution is possible for any MCUs. --- [Visit Topic](https://discuss.tvm.apache.org/t/standalone-execution-is-possible-after-all/11558/3) to respond.

[Apache TVM Discuss] [Questions] Standalone execution is possible after all?

2021-11-25 Thread Grant Watson via Apache TVM Discuss
Hi @sho, if you're looking for a demo of a standalone application running on an MCU, you could take a look at https://github.com/apache/tvm/tree/main/apps/microtvm/ethosu. Although this demo is an example of how to use TVM to run a model and offload operators to the microNPU, it should pro

[Apache TVM Discuss] [Questions] Standalone execution is possible after all?

2021-11-25 Thread sho via Apache TVM Discuss
Thank you for visiting my question. I'm now trying to run inference on bare-metal devices. More specifically, I'd like to use microTVM (or the outputs from microTVM) on **MCUs**. However, I went through some tutorials and notebooks created by developers: https://github.com/areusch/microtvm-bl

[Apache TVM Discuss] [Questions] Get x86 task from `autotvm.task.extract_from_program`

2021-11-25 Thread ERROR via Apache TVM Discuss
Currently I am trying to learn and understand TVM. I have an MXNet model and I want to run AutoTVM, so I am trying to use `autotvm.task.extract_from_program`. The code is similar to this:
```
target = tvm.target.cuda()
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, target_host
```
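To get x86 tasks instead of CUDA ones, the usual change is to pass an LLVM target rather than `tvm.target.cuda()`. A sketch only, assuming `mod` and `params` come from `relay.frontend.from_mxnet(...)`, which the (truncated) post does not show:

```python
import tvm
from tvm import autotvm

# Extract tunable tasks for the CPU backend; an -mcpu flag such as
# "llvm -mcpu=core-avx2" can be appended to match the host machine.
target = tvm.target.Target("llvm")
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params
)
for task in tasks:
    print(task.name)
```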