[TVM Discuss] [Questions] How do you test the percentage of time spent on several CUDA kernels

2020-09-03 Thread Wheest via TVM Discuss
[The debugger](https://tvm.apache.org/docs/dev/debugger.html?highlight=debug) can provide time breakdowns for the different operations. However, I'm not sure it will give you the granularity you need. For example, I have looked into the Conv2D op, and I wanted to get time breakdowns
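Once you have per-op timings from the debugger's breakdown, turning them into percentages is straightforward. A minimal sketch (the op names and millisecond values below are made up for illustration, not real debugger output):

```python
# hypothetical per-op timings in ms, as you might collect from the
# debug executor's per-layer report
timings = {"conv2d_0": 4.2, "conv2d_1": 3.1, "dense_0": 0.9, "softmax": 0.05}

total = sum(timings.values())
# percentage of total inference time spent in each op
percent = {op: 100.0 * t / total for op, t in timings.items()}

for op, p in sorted(percent.items(), key=lambda kv: -kv[1]):
    print(f"{op}: {p:.1f}%")
```

This gives a per-op share of total runtime, which is often enough to see whether one kernel dominates.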

[TVM Discuss] [Application] Reshape in-place using Relay

2020-06-01 Thread Wheest via TVM Discuss
I'm building some functions in Relay which have reshape stages for some tensors. However, there are special cases where, from a memory-layout perspective, the reshape operation is the identity. E.g. I might have a volume **A** of shape `[a, b, c]`, but I have an operation which reshapes it in
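The "identity reshape" case can be seen directly from row-major addressing: a reshape like `[a, b, c] -> [a*b, c]` only relabels indices, so every element keeps the same linear offset and no data needs to move. A small sketch of that check (pure index arithmetic, not Relay code):

```python
# row-major linear offset of element (i, j, k) in a [a, b, c] volume
def offset3(i, j, k, a, b, c):
    return (i * b + j) * c + k

# row-major linear offset of element (m, k) in the reshaped [a*b, c] volume
def offset2(m, k, rows, c):
    return m * c + k

a, b, c = 2, 3, 4
# the reshape maps (i, j, k) -> (i*b + j, k); offsets are identical,
# so the reshape is a no-op on the underlying buffer
for i in range(a):
    for j in range(b):
        for k in range(c):
            assert offset3(i, j, k, a, b, c) == offset2(i * b + j, k, a * b, c)
```

This is why such reshapes can in principle be performed in place: only the shape metadata changes.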

[TVM Discuss] [Questions] [SOLVED] [AutoTVM] RuntimeError (Return code=4) during autotuning

2020-03-25 Thread Wheest via TVM Discuss
Note, I have also encountered this issue a couple of times. It was caused by the host and the device running different versions of TVM, and was fixed by ensuring that both are built from the same commit. --- [Visit Topic](https://discuss.tvm.ai/t/solved-autotvm-runtimeerror-return-code-4-durin

[TVM Discuss] [Application] TOPI autotuning integration

2020-03-25 Thread Wheest via TVM Discuss
Thanks, I've been using the v0.6 release rather than the development branch. This Relay Op Strategy design seems to bring a lot more clarity to the process, and hopefully I'll get an MWE off the ground soon. --- [Visit Topic](https://discuss.tvm.ai/t/topi-autotuning-integration/6079/3) to

[TVM Discuss] [Application] TOPI autotuning integration

2020-03-24 Thread Wheest via TVM Discuss
I'm trying to integrate an autotuning schedule that I've created for a special case of convolution to work with TOPI. However, I'm having difficulty getting it integrated correctly with autotvm. When running my compute description and schedule as a standalone system, I am successfully able
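For context on what autotvm explores: a tuning template defines a config space of knobs (tile sizes, unroll flags, etc.), and the tuner searches over their combinations. A toy stand-in for that enumeration (the knob names and values here are invented for illustration; real templates declare knobs via autotvm's config API inside the schedule):

```python
from itertools import product

# toy analogue of an autotvm config space: each knob lists its
# candidate values, and the space is their Cartesian product
knobs = {
    "tile_x": [1, 2, 4, 8],
    "tile_y": [1, 2, 4],
    "unroll": [0, 1],
}

# one dict per candidate configuration the tuner could measure
space = [dict(zip(knobs, vals)) for vals in product(*knobs.values())]
print(len(space))  # 4 * 3 * 2 = 24 candidate configs
```

A real tuner measures a subset of these configs on the target device and keeps the fastest, rather than exhaustively benchmarking all of them.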

[TVM Discuss] [Questions] Why convolution written in python

2020-03-24 Thread Wheest via TVM Discuss
Since TVM is a compiler infrastructure, the convolution, though defined using a Python API, is only a description of the computation. When the operation runs, that description has already been compiled for a backend, e.g. LLVM, OpenCL, or CUDA. So there isn't an inference-time overhead from using Python here.
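The declare-then-build split can be sketched in plain Python (this is a toy analogue of the pattern, not TVM's actual `te.compute` API): the lambda only *describes* the value at each output index, and nothing is computed until the description is materialised.

```python
# toy "compute" declaration: f describes the output value at index i,
# but no arithmetic happens until we build the result
def compute(n, f):
    return [f(i) for i in range(n)]

# a 1-D convolution described declaratively, in the spirit of TOPI's
# operator definitions
A = [1, 2, 3, 4, 5]
W = [1, 0, -1]
out = compute(len(A) - len(W) + 1,
              lambda i: sum(A[i + k] * W[k] for k in range(len(W))))
print(out)  # [-2, -2, -2]
```

In TVM, the analogous description is lowered and compiled to the target backend, so the Python layer never executes the inner loops at inference time.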