It looks like existing TVM (v0.7) doesn't allow a write cache to be computed at the "k" axis in GEMM? Please correct me if I'm wrong.
Say I want to create a write cache for matrix C in GEMM and make "k" the outermost axis in the loop nest, so the schedule code I wrote would look like this:
`CC = s.cache_write(C, "global")`
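A fuller, self-contained sketch of the pattern being asked about (matrix sizes and names here are my own illustration, not from the post) also shows why the placement is rejected:

```
import tvm
from tvm import te

# Plain GEMM in the tensor-expression language.
M = N = K = 1024
k = te.reduce_axis((0, K), name="k")
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
C = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)
CC = s.cache_write(C, "global")

# After cache_write, the reduction axis k belongs to CC's loop nest and
# stage C becomes a plain (i, j) copy-out stage.  There is no k axis
# left on C to anchor the cache at, so a schedule such as
#
#     s[CC].compute_at(s[C], k)   # k is not an axis of stage C
#
# cannot be expressed; k can only be manipulated inside CC itself:
ko, ki = s[CC].split(CC.op.reduce_axis[0], factor=4)
print(tvm.lower(s, [A, B, C], simple_mode=True))
```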
I'm trying to create a PackedFunc manually for my baremetal app, following the usual way of calling the macro below:
example.c
```
#include <tvm/runtime/packed_func.h>

int A_wrapper(blahblah);
TVM_DLL_EXPORT_TYPED_FUNC(A, A_wrapper);
```
Linking the program complains: **undefined reference to `__dso_handle'**. I wonder where it comes from.
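`__dso_handle` is normally provided by the C++ startup files (crtbegin) and is used by `__cxa_atexit` to register static destructors; baremetal toolchains often link without those files, so the symbol goes missing. A common workaround, assuming you control the link and never unload the image, is to define the symbol yourself:

```
/* Baremetal stub: satisfies the reference emitted for __cxa_atexit
 * destructor registration.  With no loader there is nothing to
 * unregister against, so the value is effectively unused. */
void *__dso_handle = (void *)&__dso_handle;
```

Compiling the C++ translation units with `-fno-use-cxa-atexit` (GCC) is another way to avoid the reference altogether.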
Currently I'm trying to integrate TVM into TensorFlow via a custom op, but I have run into many obstacles.
I have considered two integration approaches: (1) export_library from a Python script and load it from C++, just like the examples in tvm/apps/howto_deploy/ or the tftvm project; (2) export the CUDA code and compile it.
@lfengad Have you figured out the reason?
---
Thanks for your reply. I only want to replace the conv2d computation and keep everything else as-is for x86 CPU. I don't know how to register my conv2d computation API with TVM so that, when the model runs, it will call my API to do the work.
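One way to wire this up, sketched below with hypothetical names: register your implementation as a global packed function, then call it from a tensor-expression stage via te.extern, the same pattern tvm.contrib.cblas uses. (To substitute it for nn.conv2d across a whole Relay model you would additionally register a conv2d compute/strategy that emits this extern stage; the snippet only shows the registration and call mechanics.)

```
import numpy as np
import tvm
from tvm import te

# Expose a custom compute as a packed function; a real version would
# forward to your own C/C++ conv2d kernel.  The name is illustrative.
@tvm.register_func("tvm.contrib.my_conv2d")
def _my_conv2d(data, weight, out):
    # Placeholder body so the sketch runs end to end.
    tvm.nd.array(np.zeros(out.shape, dtype=out.dtype)).copyto(out)

data = te.placeholder((1, 3, 8, 8), name="data")
weight = te.placeholder((8, 3, 3, 3), name="weight")
conv = te.extern(
    (1, 8, 6, 6),
    [data, weight],
    lambda ins, outs: tvm.tir.call_packed(
        "tvm.contrib.my_conv2d", ins[0], ins[1], outs[0]
    ),
    name="my_conv2d",
)
s = te.create_schedule(conv.op)
mod = tvm.build(s, [data, weight, conv], target="llvm")
```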