Makes sense, thanks!
I'm wondering if there is any good starter issue to take up for someone w/
reasonable deep learning dev experience but no compiler stack experience? I saw
3 issues marked #beginner-friendly
https://github.com/apache/tvm/issues?q=is%3Aopen+is%3Aissue+label%3Abeginner-frie
---
We do have a Discord server for quick chats :-) For questions, I believe it's better to use the forum, because it's archivable and publicly available, which could save others time if they encounter the same issue.
---
Thanks for replying @junrushao1994! Yes, I just figured it out. It was caused by a shared-library conflict: I had installed one version of googletest locally, while the instructions in the wiki had me install another version from the project HEAD, and the two don't work well with each other.
---
Thanks for reporting, this is very valuable information! Given that it's not reproducible in our CI system right now, would you like to dig in a bit with gdb/lldb and let us know what happens at those segfaults?
---
The MergeComposite pass is not aware of the constant index. In general, constant indices are not managed by most transform passes, because it isn't necessary: it's fine as long as we can find the right constant via its index at run time.
You have two options:
1. Include constants in the composite function (see the sketch below).
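A minimal sketch of option 1, using `tvm.relay.dataflow_pattern` (the composite name `my_codegen.conv2d_bias` and the conv2d/bias_add pattern are hypothetical, chosen only for illustration):

```python
from tvm import relay
from tvm.relay.dataflow_pattern import is_constant, is_op, wildcard


def conv2d_bias_pattern():
    # Match the weight and bias with is_constant() so the matched constants
    # become part of the composite function's body rather than arguments it
    # receives from outside.
    data = wildcard()
    weight = is_constant()
    bias = is_constant()
    conv = is_op("nn.conv2d")(data, weight)
    return is_op("nn.bias_add")(conv, bias)


# Hypothetical composite name; use whatever your codegen expects.
pattern_table = [("my_codegen.conv2d_bias", conv2d_bias_pattern())]


def run_merge_composite(mod):
    return relay.transform.MergeComposite(pattern_table)(mod)
```

The point is that anything matched by `is_constant()` ends up inside the composite function, so later stages don't have to resolve it by constant index.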
---
Hi, I have watched the developer tutorial given by @Lunderberg at TVM Conf 21. That great talk gave me an outline of how to add a new device. However, after checking the source code in the [CUDA runtime](https://github.com/apache/tvm/tree/main/src/runtime/cuda), I have the following questions about
---
I have a graph like below.
def @main(%input: Tensor[(1, 3, 3, 3), float32]) -> Tensor[(1, 2, 3, 3), float32] {
  %0 = nn.conv2d(%input, meta[relay.Constant][0], padding=[1, 1, 1, 1], channels=2, kernel_size=[3, 3]);
  %1 = nn.bias_add(%0, meta[relay.Constant][1]);
  %2 = nn.rel
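For reference, a minimal Python sketch that builds a graph of this shape (the weight/bias values are placeholders, and the truncated `%2` is assumed to be `nn.relu`):

```python
import numpy as np
import tvm
from tvm import relay

inp = relay.var("input", shape=(1, 3, 3, 3), dtype="float32")
# Embedded constants; the text printer shows them as
# meta[relay.Constant][0] and meta[relay.Constant][1].
weight = relay.const(np.random.rand(2, 3, 3, 3).astype("float32"))
bias = relay.const(np.zeros(2, dtype="float32"))

conv = relay.nn.conv2d(inp, weight, padding=(1, 1, 1, 1),
                       channels=2, kernel_size=(3, 3))
biased = relay.nn.bias_add(conv, bias)
out = relay.nn.relu(biased)  # assumption: the truncated %2 is nn.relu

mod = tvm.IRModule.from_expr(relay.Function([inp], out))
print(mod)
```

Printing the module shows the two constants with the meta[relay.Constant][i] indices discussed above.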