@matt-arm : you may find this interesting.
---
In my experience plain loop unrolling has always been a blunt hammer and is not
useful in the general case, so turning it off by default in LLVM makes sense.
Targeted unrolling combined with vectorization and other loop optimizations is
more beneficial.
I hadn't realized that LLVM turned on plain loop unrolling by default.
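
As a hedged aside (not from the original post): the kind of targeted unrolling mentioned above can be expressed explicitly in a TVM TE schedule instead of being left to LLVM. The workload, split factors, and names below are made up purely for illustration:

```python
import tvm
from tvm import te

# Hypothetical element-wise workload, just to show the schedule primitives.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=8)   # innermost 8 lanes
xoo, xoi = s[B].split(xo, factor=4)           # small fixed-trip-count loop
s[B].vectorize(xi)   # vectorize the innermost lanes
s[B].unroll(xoi)     # unroll only this 4-iteration loop, nothing else
print(tvm.lower(s, [A, B], simple_mode=True))
```

Here only a small, known-size loop is unrolled alongside vectorization, rather than letting a blanket unroller expand every loop it can.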
What kind of optimizations can a user expect at the various opt_levels by default
in TVM? It would be good to document this clearly so that we know whether TVM's
opt_levels line up with what people expect from -O0, -O1, -O2 and -O3 in
static-compiler land.
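
For context, and as my own sketch rather than anything settled in this thread: opt_level is supplied through the pass context, and each Relay pass is registered with its own opt_level, so a pass only runs when its level is at or below the context's level (or when it is explicitly required). A minimal example, assuming a Relay module `mod` with `params` obtained from some frontend importer and "llvm" as a placeholder target:

```python
import tvm
from tvm import relay

# mod and params are assumed to come from a frontend importer,
# e.g. relay.frontend.from_onnx; "llvm" is just an example target.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```

Part of the documentation ask above would be listing which passes are registered at which opt_level, so the -O0/-O1/-O2/-O3 analogy can be checked directly.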
A lo