[Apache TVM Discuss] [Development] Do we have a C host?

2021-04-14 Thread Jian Weng via Apache TVM Discuss
Let me make my question clearer. I am working on some cross-compilation work, and the kernel is going to be integrated into a C/C++ project. An ideal interface would generate the kernel for me as a pair of (.o, .h). Even more aggressively, I want a pair of (.ll, .h), since I want to ru

[Apache TVM Discuss] [Development] Do we have a C host?

2021-04-13 Thread Jian Weng via Apache TVM Discuss
I know TVM can easily invoke a compiled kernel from Python. I want to export the .o file and integrate it into a C/C++ binary. Is there any existing SDK to do that? If not, I am willing to contribute one. --- [Visit Topic](https://discuss.tvm.apache.org/t/do-we-have-a-c-host/9687/1) to respond

Re: [apache/incubator-tvm] [DEV] TVM v0.7 Roadmap (#4845)

2020-02-19 Thread Jian Weng
I am currently working on some end-to-end model stuff, and Relay's optimization passes are too slow. Are there any plans for improving the compilation speed? -- You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub: https://github.com/apache/incub

Re: [dmlc/tvm] [RFC] Auto TensorCore CodeGen (#4105)

2019-10-18 Thread Jian Weng
I have a proposal to minimize the invasiveness in TVM while still fundamentally supporting TensorCore in TVM. It is a middle ground between the methodology of #4052 and this RFC. I suppose the current pain point of supporting TensorCore is the data structure provided by NVIDIA, which introduces non-standard b

Re: [dmlc/tvm] [RFC][Relay][HalideIR] Automatically generate the AST (#3501)

2019-07-26 Thread Jian Weng
Yeah, I strongly agree with the point that we need to decouple schema reading and the generation. This is somewhat like LLVM's TableGen, which manages repetitive, regular code in a centralized description file to minimize the changes needed to add new IR nodes. -- You are receiving this becau

[TVM Discuss] [RFC] About the tensorization interface

2019-07-25 Thread Jian Weng via TVM Discuss
That's another problem: AVX512 instructions are mostly 1-D, so they often do not care about the shape (I hope my assertion is correct). The offloaded intrin still requires the shape of the small tensor, which makes the intrin definition ad hoc. Sometimes, as when doing NCHWxc, it is an across dimens

Re: [dmlc/tvm] [RFC] Add AVX512VNNI support for TVM (#3388)

2019-07-24 Thread Jian Weng
I am not sure if tensorize is a good way to support VNNI: 1. VNNI is not true tensorization; although a reduction dimension is introduced, it still operates on 1-D inputs. Due to the design of the `tensorization` interface, you need to provide the declared intrin with the shape of the tensors offloaded, but esse