[The debugger](https://tvm.apache.org/docs/dev/debugger.html?highlight=debug)
can provide some time breakdowns for different operations.
However, I'm not sure it will give you the granularity you need. For
example, I have looked into the Conv2D op and wanted time breakdowns for the
individual stages within it, which is finer-grained than the per-operator
times the debugger reports.
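For the per-operator numbers, the usage from the linked debugger docs is
roughly this, assuming `graph`, `lib`, `ctx`, and `params` come from a normal
`relay.build` flow and `/tmp/tvmdbg` is an arbitrary dump directory:

```python
# Drop-in replacement for the normal graph runtime, per the linked docs.
from tvm.contrib.debugger import debug_runtime as graph_runtime

m = graph_runtime.create(graph, lib, ctx, dump_root="/tmp/tvmdbg")
m.set_input(**params)
m.run()  # prints a per-operator time breakdown and dumps trace files
```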
---
I'm building some functions in Relay which have reshape stages for some
tensors. However, there are special cases where, from a memory-layout
perspective, the reshape operation is the identity.
E.g. I might have a volume **A** of shape `[a, b, c]`, but I have an operation
which reshapes it into `[a*b, c]`; since the data is contiguous in row-major
order, the underlying buffer is unchanged.
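A minimal sketch of what I mean (shapes are purely illustrative): the NumPy
reshape returns a view of the same buffer, and the corresponding Relay op is
the identity on the data.

```python
import numpy as np
from tvm import relay

# Purely illustrative shapes.
a, b, c = 2, 3, 4

# In contiguous row-major memory the reshape touches no data...
A = np.arange(a * b * c, dtype="float32").reshape(a, b, c)
assert np.shares_memory(A, A.reshape(a * b, c))

# ...so the corresponding Relay reshape is the identity on the buffer.
x = relay.var("x", shape=(a, b, c), dtype="float32")
print(relay.Function([x], relay.reshape(x, newshape=(a * b, c))))
```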
---
Note, I have also encountered this issue a couple of times. It was caused by
the versions of TVM on the host and the device being different, and was fixed
by ensuring that both are built from the same commit.
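A quick sanity check is to print the version string on both machines (for two
checkouts of the same release this may not disambiguate commits, so comparing
`git rev-parse HEAD` in the two source trees is the reliable test):

```python
# Run on both the host and the device; the outputs should match.
import tvm
print(tvm.__version__)
```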
---
Thanks, I've been using the v0.6 release, rather than the development branch.
This Relay Op Strategy design seems to bring a lot more clarity to the process,
and hopefully I'll get an MWE (minimal working example) off the ground soon.
---
I'm trying to integrate an autotuning schedule that I've created for a special
case of convolution to work with TOPI.
However, I'm having difficulty getting it integrated correctly with autotvm.
When I run my compute description and schedule as a standalone template, I am
able to tune and execute it successfully.
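For context, the standalone part looks roughly like the sketch below, in the
style of the AutoTVM simple-template tutorial; the task name `"special_conv"`,
the matmul-style compute standing in for the convolution, and the shapes are
all illustrative assumptions:

```python
import tvm
from tvm import autotvm, te

# Hypothetical template: a matmul-style compute stands in for the
# special-case convolution; all names and shapes are illustrative.
@autotvm.template("special_conv")
def special_conv(N, M, K):
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

    s = te.create_schedule(C.op)
    cfg = autotvm.get_config()

    # Expose tiling decisions to the tuner.
    y, x = s[C].op.axis
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, yi, xi)
    return s, [A, B, C]

# Standalone tuning task; this part works for me.
task = autotvm.task.create("special_conv", args=(128, 128, 128), target="llvm")
print(task.config_space)
```

The missing piece is getting this picked up when the op is compiled through
Relay, which on recent TVM seems to go through the op strategy registration
rather than the template alone.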
---
Since TVM is a compiler infrastructure, even though the convolution is defined
using a Python API, the Python code only describes the computation. When the
operator runs, that computation has already been compiled for a backend, e.g.
LLVM, OpenCL, or CUDA, so using Python here adds no overhead at inference time.
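A minimal sketch of that split, assuming an `llvm` target, illustrative
shapes, and the graph-executor API of recent TVM releases (names differ
slightly in older ones):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Python only *describes* the computation (shapes are illustrative).
x = relay.var("x", shape=(1, 3, 32, 32), dtype="float32")
w = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
y = relay.nn.conv2d(x, w, kernel_size=(3, 3), channels=8)
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

# Compilation to native code for the chosen backend happens once, here.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Inference runs the compiled module; Python stays out of the hot path.
dev = tvm.cpu()
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input("x", np.zeros((1, 3, 32, 32), dtype="float32"))
m.set_input("w", np.zeros((8, 3, 3, 3), dtype="float32"))
m.run()
```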