Interesting point. I agree that having a hierarchy would make the IR more
readable. Perhaps the nested structure can be achieved in A-normal form and
then flattened to graph normal form when we need to tune the model?
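As a rough illustration of the idea (plain Python, not TVM's actual Relay API), a nested representation keeps the hierarchy explicit, and flattening it recovers the flat op list that graph-level tuning works on. The names and structure here are made up for the example:

```python
# Conceptual sketch: a "model" is a nested list of (name, item) pairs,
# where an item is either a leaf operator name or a nested sub-block.
# Flattening walks the nesting and records the hierarchy in the path.

def flatten(block, prefix="", out=None):
    """Flatten a nested block into a flat (path, op) list."""
    if out is None:
        out = []
    for name, item in block:
        path = f"{prefix}/{name}" if prefix else name
        if isinstance(item, list):   # nested sub-block: recurse
            flatten(item, path, out)
        else:                        # leaf operator: record it
            out.append((path, item))
    return out

# Toy two-layer "model": the hierarchy is explicit in the nesting.
model = [
    ("layer0", [("dense", "nn.dense"), ("relu", "nn.relu")]),
    ("layer1", [("dense", "nn.dense"), ("relu", "nn.relu")]),
]
print(flatten(model))
```

The flattened form still encodes which layer each operator came from via the path prefix, which is the kind of information the hierarchy discussion is about preserving.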
---
[Visit Topic](https://discuss.tvm.apache.org/t/hierarchy-in-tvm/12306/4) to respond.
Is it possible to extend TF/PyTorch to keep this information?
---
[Visit Topic](https://discuss.tvm.apache.org/t/hierarchy-in-tvm/12306/3) to respond.
This is an interesting question, and I've been looking into it recently too.
It depends on how the model was implemented. If the model was implemented in
another framework (e.g., TensorFlow, PyTorch), then there's no way for TVM
to keep this information, because the hierarchy isn't part of the exported graph.
---
Hey all,
I've been working with the TVM stack lately, and love it!
Does the TVM stack support a concept of hierarchy? That is, when compiling a
model with repeating operations (e.g., BERT), is there any way to extract the
fact that there are 12 identical layers, and which operators belong to each layer?
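Even without hierarchy metadata from the frontend, identical layers can in principle be recovered by structurally hashing subgraphs and grouping the matches. This is a toy sketch of that grouping step in plain Python (the operator names and the BERT-like layer shape are invented for illustration; a real implementation would hash actual subgraph structure, not just an op-name sequence):

```python
from collections import Counter

def layer_signature(ops):
    """A stand-in for structural hashing: hash a layer by its
    operator sequence. Identical layers get identical signatures."""
    return tuple(ops)

# Toy BERT-like model: 12 identical encoder layers.
encoder_layer = ["nn.dense", "nn.softmax", "nn.dense", "nn.layer_norm"]
model_layers = [encoder_layer] * 12

# Group layers by signature; each group is a set of identical layers.
counts = Counter(layer_signature(layer) for layer in model_layers)
print(counts)
```

All 12 layers collapse into one signature group, which is exactly the "these layers are the same" fact the question asks about extracting.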