[Apache TVM Discuss] [Questions] Best way to deal with kernel layout?

2021-04-01 Thread JC Li via Apache TVM Discuss
Thanks for the suggestion, @comaniac. Adding a matmul operator with implementations for all combinations of input layouts seems like overkill to me. Instead, adding a target-specific Relay pass to deal with such a target-specific case would be a better solution, which is lightweight and orthogonal to
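As a rough illustration of what such a target-specific pass could look like (not the poster's actual implementation), one option is a dataflow-pattern rewrite that re-stores each nn.dense constant as [in_dim, out_dim] and transposes it back in the graph, so the semantics stay unchanged while the BYOC codegen can consume the pre-transposed constant. The class name PreTransposeDenseWeight and the driver function are assumptions made here for illustration:

```
# A minimal sketch, assuming the rewrite is "store dense weights as
# [in_dim, out_dim] and transpose them back in the graph"; the class name,
# driver, and overall strategy are illustrative, not an existing TVM API.
from tvm import relay
from tvm.relay.dataflow_pattern import (
    DFPatternCallback, is_constant, is_op, rewrite, wildcard)


class PreTransposeDenseWeight(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.data = wildcard()
        self.weight = is_constant()
        self.pattern = is_op("nn.dense")(self.data, self.weight)

    def callback(self, pre, post, node_map):
        data = node_map[self.data][0]
        weight = node_map[self.weight][0]
        # Re-store the constant as [in_dim, out_dim] for the BYOC kernel, then
        # transpose it back so nn.dense still sees [out_dim, in_dim]; the math
        # is unchanged, and the codegen can later absorb the transpose.
        pre_transposed = relay.const(weight.data.numpy().transpose())
        return relay.nn.dense(data, relay.transpose(pre_transposed, axes=[1, 0]))


def pre_transpose_dense_weights(mod):
    # Hypothetical driver: apply the rewrite to the module's main function.
    mod["main"] = rewrite(PreTransposeDenseWeight(), mod["main"])
    return mod
```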

[Apache TVM Discuss] [Questions] Best way to deal with kernel layout?

2021-03-31 Thread JC Li via Apache TVM Discuss
This looks more like a hack :) If I want to do it in Relay, I should add a version of nn.dense (say, name it nn.dense_transposed_kernel) and then register a function convert_dense(...) with register_convert_op_layouts("nn.dense"), right?
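For reference, the hook in current TVM is spelled register_convert_op_layout (singular), and a registered converter takes (attrs, inputs, tinfos, desired_layouts). A bare signature skeleton, assuming one went this route, might look like the following; nn.dense carries no layout attributes, so whether ConvertLayout would drive this usefully is exactly the open question in this thread, and the body below is only an identity placeholder:

```
# Signature skeleton only: register_convert_op_layout is a real Relay hook,
# but the identity body below is a placeholder, not a working layout converter.
from tvm import relay
from tvm.relay.op import op as reg


@reg.register_convert_op_layout("nn.dense")
def convert_dense(attrs, inputs, tinfos, desired_layouts):
    # inputs: relay expressions feeding nn.dense; tinfos: their checked types;
    # desired_layouts: whatever ConvertLayout was asked to use for this op.
    data, weight = inputs
    return relay.nn.dense(data, weight)
```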

[Apache TVM Discuss] [Questions] Best way to deal with kernel layout?

2021-03-31 Thread JC Li via Apache TVM Discuss
Hi, @comaniac. I looked into your example and did a simple experiment similar to it. My example network imported into Relay is shown below:

#[version = "0.0.5"]
def @main(%input.1: Tensor[(1, 1, 32, 16), float32], %conv.0.bias: Tensor[(1), float32], %conv.0.weight: Tensor[(1, 1, 3, 3), fl
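For anyone reproducing an experiment like this, the usual pattern for exercising layout conversion on an imported module is roughly the following; the tiny stand-in network and the NHWC/HWIO layouts are placeholders, not necessarily what this thread used:

```
# A stand-in module mirroring the shapes above; requested layouts are placeholders.
import tvm
from tvm import relay

data = relay.var("input", shape=(1, 1, 32, 16), dtype="float32")
weight = relay.var("weight", shape=(1, 1, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

desired_layouts = {"nn.conv2d": ["NHWC", "HWIO"]}  # example request only
seq = tvm.transform.Sequential(
    [
        relay.transform.RemoveUnusedFunctions(),
        relay.transform.ConvertLayout(desired_layouts),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)
```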

[Apache TVM Discuss] [Questions] Best way to deal with kernel layout?

2021-03-30 Thread JC Li via Apache TVM Discuss
Well, I have a special BYOC **dense** kernel that uses a kernel layout different from the default topi.nn implementation. The default implementation has a *weight tensor with shape [out_dim, in_dim]*, while I need [in_dim, out_dim]. Two questions here: 1. How can I change the default behavior o
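For context, Relay's default nn.dense takes the weight as [out_dim, in_dim] and computes Y = X * W^T, so a kernel that wants [in_dim, out_dim] is effectively asking for the transposed operand. A quick numpy sketch of that equivalence (the shapes here are made up):

```
import numpy as np

x = np.random.rand(4, 16).astype("float32")     # [batch, in_dim]
w_oi = np.random.rand(8, 16).astype("float32")  # [out_dim, in_dim], default layout
w_io = w_oi.T                                   # [in_dim, out_dim], the BYOC layout

y_default = x @ w_oi.T   # what the default dense computes: Y = X * W^T
y_byoc = x @ w_io        # what a kernel taking [in_dim, out_dim] computes
assert np.allclose(y_default, y_byoc)
```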

[Apache TVM Discuss] [Questions] How to create PackedFunc manually?

2020-11-08 Thread JC Li via Apache TVM Discuss
I'm trying to create a PackedFunc manually for my bare-metal app, following the way the macro below is called:

example.c
```
#include 
int A_wrapper(blahblah);
TVM_DLL_EXPORT_TYPED_FUNC(A, A_wrapper_);
```

Linking the program complains: **undefined reference to `__dso_handle'.** **I wonder where does i
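The truncated question is about a bare-metal C build (the undefined __dso_handle reference usually comes from C++ static-destructor registration that bare-metal toolchains don't provide), but as a point of reference for what a PackedFunc is, the Python API creates and looks one up like this:

```
# Reference only: register a Python callable as a PackedFunc in the global
# function table, then fetch it back; this does not address the bare-metal
# __dso_handle link error above.
import tvm


@tvm.register_func("demo.add_one")
def add_one(x):
    return x + 1


f = tvm.get_global_func("demo.add_one")
assert f(41) == 42
```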

[Apache TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-11-05 Thread JC Li via Apache TVM Discuss
First of all, I'm by no means an expert in TVM, so just my two cents. I believe the Relay -> TIR transform happens in the so-called "lowering" process inside python/tvm/relay/backend/compile_engine.py, CompileEngine::lower(blah).
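A minimal way to watch that lowering get triggered from user code is simply to build a small module; relay.build drives the compile engine, which lowers each fused primitive function from Relay to TE/TIR before codegen (the one-layer network below is just a stand-in):

```
# Stand-in one-layer network; building it exercises the Relay -> TIR lowering
# path discussed above.
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 16), dtype="float32")
w = relay.var("w", shape=(8, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")  # lowering happens inside this call
```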