When I run it on Linux, a "segmentation fault" error occurs. Debug info:
```
(gdb) bt
#0  std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) () from /lib64/libstdc++.so.6
#1  tvm::runtime::GraphRuntime::SetupStorage() from /tvm/build/libtvm.so
#2  tvm::runtime::GraphRuntime::Init(std::str
```
I personally like `ShapePattern` and `DTypePattern` more as they are more
straightforward in the pattern language, but I'd also like to see others'
opinions.
cc @zhiics @tqchen @masahi
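For concreteness, here is a hypothetical sketch of how the proposed patterns might read; `has_dtype`/`has_shape` are assumed sugar for the `DTypePattern`/`ShapePattern` ideas above, not a settled API:

```python
from tvm.relay.dataflow_pattern import is_op, wildcard

# Match an add whose output dtype is float16, regardless of shape.
fp16_add = is_op("add")(wildcard(), wildcard()).has_dtype("float16")

# Match an add whose output shape is (4, 8), regardless of dtype.
shaped_add = is_op("add")(wildcard(), wildcard()).has_shape((4, 8))
```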
---
What do you guys think? Which would be easier to use?
---
@JakeStevens @tico make sure to check out our latest work that is now
upstreamed with reproducible examples here:
https://tvm.apache.org/2020/06/04/tinyml-how-tvm-is-taming-tiny
And many more things to come! (e.g. see
https://discuss.tvm.ai/t/utvm-embedded-focus-online-meetup/6908)
---
Yeah, it's a bit complicated. The current pattern uses the `tvm::ir::Type*`
classes, so you can match a number of type variants, but as this question
reveals, we may want finer granularity on some types.
Unfortunately, since we're using a lower-level Type object, we won't be able to
embed
Makes sense. The ideal interface for this case would leverage `has_type` like
other type matching. We may need new patterns like `AnyShape`, or to support
`Wildcard` in tensor shapes (which seems much harder to me).
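For context, a minimal sketch of what this looks like today with `has_type`, assuming the standard dataflow-pattern API: the full `TensorType` has to be spelled out even when only the dtype matters.

```python
from tvm import relay
from tvm.relay.dataflow_pattern import has_type

# Today's TypePattern compares the complete type, so "any float32 tensor"
# cannot be expressed; a concrete shape has to be baked into the pattern.
pat = has_type(relay.TensorType((2, 4), "float32"))

x = relay.var("x", shape=(2, 4), dtype="float32")
assert pat.match(x)      # exact shape and dtype: matches
y = relay.var("y", shape=(3, 4), dtype="float32")
assert not pat.match(y)  # same dtype, different shape: no match
```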
---
My apologies! I somehow missed this last week.
Yeah, the current TypePattern matches the full type via StructuralEqual.
One possibility to clean this up slightly is to add a rule:
1) If it's a TensorType
2) and the pattern's shape is ()
3) only check the dtype
That would only take a handful of lines.
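A minimal Python sketch of that rule, assuming it were wired into the matcher (the real TypePattern logic lives in C++, and `type_matches` is a hypothetical helper, not TVM code):

```python
import tvm
from tvm import relay

def type_matches(pattern_type, node_type):
    # Hypothetical helper illustrating the proposed rule.
    if (isinstance(pattern_type, relay.TensorType)
            and isinstance(node_type, relay.TensorType)
            and len(pattern_type.shape) == 0):
        # Empty shape in the pattern: treat it as "match dtype only".
        return pattern_type.dtype == node_type.dtype
    # Otherwise keep today's behavior: full structural equality.
    return tvm.ir.structural_equal(pattern_type, node_type)
```

One caveat with this rule: a `TensorType` with shape `()` is also a legitimate scalar type, so the pattern could no longer distinguish "any shape" from "scalar".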
I got very close to matching PyTorch's bmm on Vega 20 (Radeon VII), and to
about 1.5x on the 1080Ti for the 1024 example (with fixed dims).
One of the limiting things on the path ahead is, of course, the "-1" issue in
the output configurations.
Best regards
Thomas
---
Hi @jinchenglee,
[quote="jinchenglee, post:2, topic:6891"]
TVMBackendRegisterSystemLibSymbol() function. However, I cannot find where it
was invoked in TVM source tree. Thanks in advance.
[/quote]
In my observation, the `TVMBackendRegisterSystemLibSymbol` function is called
inside the generated code.
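For illustration, a small sketch of the user-visible side, assuming the standard TE workflow of that era (the kernel and the name `add_one` are arbitrary examples): building with `--system-lib` asks codegen to emit a static initializer that calls `TVMBackendRegisterSystemLibSymbol` for each packed function when the object is loaded.

```python
import tvm
from tvm import te

# Trivial kernel built as a system library.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
mod = tvm.build(s, [A, B], target="llvm --system-lib", name="add_one")

# Link this object into your application; its static initializer registers
# "add_one" via TVMBackendRegisterSystemLibSymbol at load time, after which:
#   f = tvm.runtime.system_lib()["add_one"]
mod.save("add_one.o")
```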