Let me make my question clearer: I am working on cross-compilation, and the
kernel will be integrated into a C/C++ project.
An ideal interface would generate the kernel as a pair of (.o, .h) files.
Even more aggressively, I would like a pair of (.ll, .h), since I want to ru
I know TVM can easily invoke a compiled kernel from Python, but I want to
export the .o file and integrate it into a C/C++ binary.
Is there an existing SDK for this? If not, I am willing to contribute one.
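As a sketch of the workflow being asked for: TVM modules built for the `llvm` target can already be saved as object files or LLVM IR from Python via `Module.save`, which covers the .o and .ll halves of the proposed (.o, .h) / (.ll, .h) pairs (the .h side would still need to be generated by hand or by a new tool). Everything here assumes a TVM installation; the tensor names and sizes are just an example:

```python
# Sketch only: requires a TVM installation ("pip install apache-tvm" or a
# source build). Builds a trivial kernel and exports .o / .ll artifacts.
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)
mod = tvm.build(s, [A, B, C], target="llvm", name="vecadd")

# Module.save picks the format from the file extension.
mod.save("vecadd.o")    # relocatable object, linkable into a C/C++ binary
mod.save("vecadd.ll")   # LLVM IR text

# The C/C++ side would then link vecadd.o and call the exported symbol
# through the TVM C runtime (tvm/runtime/c_runtime_api.h); a generated
# header declaring that entry point is the piece that does not exist yet.
```

The exported symbol follows TVM's packed-function calling convention, so the hand-written header would declare the `TVMBackendPackedCFunc`-style entry point rather than a plain `float*`-argument signature.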
---
[Visit Topic](https://discuss.tvm.apache.org/t/do-we-have-a-c-host/9687/1) to
respond
I am currently working on end-to-end models, and Relay's optimization passes
are too slow. Are there any plans to improve compilation speed?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incub
I have a proposal that minimizes the invasiveness of changes to TVM while
still fundamentally supporting TensorCore. It is a middle ground between the
methodology of #4052 and this RFC.
I think the current pain point in supporting TensorCore is the data structure
provided by NVIDIA, which introduces non-standard b
Yeah, I strongly agree that we need to decouple schema reading from code
generation.
This is somewhat like LLVM's TableGen, which manages repetitive, regular code
in a centralized description file, minimizing the changes needed to add new
IR nodes.
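To illustrate the TableGen-style idea of a centralized schema driving repetitive code, here is a minimal Python sketch. The schema format and the emitted C++ are invented for illustration, not an actual TVM or TableGen format:

```python
# TableGen-style sketch: IR-node declarations live in one declarative
# table, and the repetitive C++ boilerplate is generated from it, so
# adding a new node means adding one schema entry, not editing N files.
SCHEMA = [
    {"name": "AddNode",  "fields": [("lhs", "Expr"), ("rhs", "Expr")]},
    {"name": "CastNode", "fields": [("value", "Expr"), ("dtype", "DataType")]},
]

def emit_node(node):
    """Emit a C++ struct definition for one schema entry."""
    lines = [f"struct {node['name']} : public ExprNode {{"]
    for fname, ftype in node["fields"]:
        lines.append(f"  {ftype} {fname};")
    lines.append("};")
    return "\n".join(lines)

def emit_all(schema):
    return "\n\n".join(emit_node(n) for n in schema)

if __name__ == "__main__":
    print(emit_all(SCHEMA))
```

The same schema could drive other repetitive artifacts too (visitors, serializers, Python bindings), which is where the centralization really pays off.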
--
That is a separate problem: AVX-512 instructions are mostly 1-D, so they
often do not care about the shape (I hope this assertion is correct).
The offloaded intrin still requires the shape of the small tensor, which
makes the intrin definition ad hoc. Sometimes, as with NCHWxc, it is an
across-dimens
I am not sure tensorize is a good way to support VNNI:
1. VNNI is not true tensorization: although a reduction dimension is
introduced, it still operates on 1-D inputs. Due to the design of the
`tensorization` interface, you need to provide the declared intrin with the
shape of the tensors being offloaded, but esse
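For reference, the "1-D with a reduction dimension" semantics being described can be modeled in pure Python after VNNI's `vpdpbusd` (unsigned-by-signed byte dot products accumulated into 32-bit lanes). This model is only an illustration of the instruction's shape, not how TVM would lower it:

```python
def vpdpbusd_model(acc, a_bytes, b_bytes):
    """Model of VNNI's vpdpbusd: for each 32-bit lane i, accumulate the
    dot product of 4 unsigned bytes of a with 4 signed bytes of b.
    The inputs are flat 1-D; the only 'tensor' structure is the length-4
    reduction axis, which is why tensorize's shaped-intrin interface
    fits it awkwardly."""
    n_lanes = len(acc)
    assert len(a_bytes) == len(b_bytes) == 4 * n_lanes
    out = list(acc)
    for i in range(n_lanes):
        out[i] += sum(a_bytes[4 * i + k] * b_bytes[4 * i + k]
                      for k in range(4))
    return out

# Example: 2 accumulator lanes, 8 input bytes each.
acc = [0, 100]
a = [1, 2, 3, 4, 5, 6, 7, 8]   # unsigned bytes
b = [1, 1, 1, 1, 2, 2, 2, 2]   # signed bytes
print(vpdpbusd_model(acc, a, b))  # → [10, 152]
```

Note there is no 2-D tile anywhere: both inputs are flat byte vectors, which is the point being made about declaring a tensor shape for the intrin.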