Thanks for the reply. I'll send a clarification PR once this is clearer to
me. I tried Target.current(), but I am still not getting the desired output; it
just returns None. Am I doing something wrong here?

Also, it always says cuda, even though I turned the CUDA flag off while
building TVM and set target='llvm' in my code. What could be the cause of
this?

---
Is there any way to count the number of CPU cycles during inference using the
TVM runtime? I am currently using the hwcounter Python library, but it doesn't
seem to be accurate.
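One alternative to a hardware-counter library is to measure wall-clock time and convert to an estimated cycle count using an assumed clock frequency. A hedged, stdlib-only sketch (`CPU_HZ` is an assumption you must set for your machine; because of frequency scaling this is an estimate, not an exact count):

```python
import time

# Assumed nominal clock frequency; adjust for your CPU. This is an
# assumption, so the result is an estimate of cycles, not a true count.
CPU_HZ = 3.0e9  # 3 GHz

def estimate_cycles(fn, *args, repeat=10):
    """Run fn several times and return the minimum estimated cycle count.

    Taking the minimum over repeats reduces noise from other processes.
    """
    best_ns = float("inf")
    for _ in range(repeat):
        start = time.perf_counter_ns()
        fn(*args)
        best_ns = min(best_ns, time.perf_counter_ns() - start)
    return int(best_ns * CPU_HZ / 1e9)

# Example: estimate cycles for summing 100k integers.
cycles = estimate_cycles(sum, range(100_000))
print(cycles)
```

For timing a compiled TVM function itself, the runtime module's `time_evaluator` helper (if your TVM version provides it) gives more stable wall-clock measurements than timing from Python, which you can convert to cycles the same way.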
---
[Visit
Topic](https://discuss.tvm.apache.org/t/cpu-cycle-during-inference/8251/1) to
respond.
Can you please re-post the link to the tutorial? It seems to be broken.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/profiling-a-tvm-run-on-cpu/4787/5) to
respond.