https://github.com/apache/incubator-tvm/pull/5826
---
Indeed, A1 can address the problem better: in the proposal there is a
dyn-to-static pass that tries to convert constant inputs to attributes wherever
possible. After this pass, every reshape with a constant shape becomes static,
and then we can apply CSE easily. Of course, we can also
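For reference, a minimal sketch of what that promotion could look like from the
Python side. It assumes `relay.reshape` builds the `dyn.reshape` variant when
the target shape is a Relay expression rather than a Python constant, and that
the dyn-to-static pass is exposed as `relay.transform.DynamicToStatic()`; both
names are assumptions about the eventual API, not something fixed by this
thread.

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 4), dtype="float32")
t = relay.var("t", shape=(8,), dtype="float32")

# The target shape is a Relay expression, so (by assumption) this builds the
# dynamic variant, dyn.reshape, taking the shape as a tensor input.
y = relay.reshape(x, relay.shape_of(t))
mod = tvm.IRModule.from_expr(relay.Function([x, t], y))

# Assumed pass name: fold the now-constant shape input back into an attribute,
# turning dyn.reshape into an ordinary static reshape that CSE can then merge.
mod = relay.transform.InferType()(mod)
mod = relay.transform.DynamicToStatic()(mod)
print(mod["main"])
```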
---
Just a quick shout regarding potential difficulties: I think this difficulty in
common subexpression elimination with reshape is a consequence of the A0
approach for reshape:
https://discuss.tvm.ai/t/discuss-pass-for-merging-shape-tensors/6955
Best regards
Thomas
---
To keep things simple, we can disallow symbolic vars in attributes and force
attributes to be constant; if a value is symbolically dependent, we should use
the dyn variant.
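To make the boundary concrete, a small sketch, with the caveat that the exact
Python spelling is an assumption: a constant target shape stays an attribute on
the static op, while a runtime-computed shape is passed as a tensor input to
the `dyn.` variant.

```python
from tvm import relay

x = relay.var("x", shape=(2, 4), dtype="float32")

# Constant attribute: the ordinary static reshape, fully typed at compile time.
static = relay.reshape(x, newshape=(8,))

# Shape only known at runtime (here a graph input): it cannot live in an
# attribute, so it becomes a tensor argument of the dynamic variant
# (assumed to lower to the proposed "dyn.reshape" op).
newshape = relay.var("newshape", shape=(2,), dtype="int64")
dynamic = relay.reshape(x, newshape)
```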
---
I'm also in favor of the A1 approach. I have one more question about dynamic
ops. Currently Relay allows using a symbolic var to represent a dimension. In
the world of A1, if an attribute contains a symbolic var, such as the new shape
in `reshape`, are we treating the op as a dynamic op or a static op?
---
Just wanted to add a +1 to A1 as it seems like the best way to gradually move
TVM towards dynamism without breaking things in the meantime.
---
Seems we have converged on A1 with the additional clarifications in this thread
:slight_smile:
---
Now I think dynamic rank support should be a separate issue. Maybe we can
discuss that in another thread.
---
@kevinthesun @tqchen Are you guys agreeing to A1?
@lixiaoquan I haven't put much thought into dynamic rank. I'm not sure how we
would produce it with the current opset, do you have any use cases in mind?
---
Makes sense. For API users, we will provide more dynamic APIs to support any
input case. In the backend, we separate the purely static case (which probably
requires no shape func?) from the dynamic cases to make it easier to maintain
the related passes.
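A quick sketch of that split (same assumptions as above about how `relay.reshape`
dispatches): the purely static form gets a concrete result type from type
inference, so nothing has to compute shapes at runtime, while the dyn form only
types as `Any` and therefore relies on a shape function when the model runs.

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 4), dtype="float32")

# Purely static case: the output type is resolved at compile time,
# so no shape function is needed at runtime.
mod = tvm.IRModule.from_expr(relay.Function([x], relay.reshape(x, newshape=(8,))))
mod = relay.transform.InferType()(mod)
print(mod["main"].ret_type)  # roughly: Tensor[(8), float32]

# Dynamic case: the shape is a tensor input, so the result only types as Any
# and a shape function has to produce the concrete shape at runtime.
s = relay.var("s", shape=(1,), dtype="int64")
mod = tvm.IRModule.from_expr(relay.Function([x, s], relay.reshape(x, s)))
mod = relay.transform.InferType()(mod)
print(mod["main"].ret_type)  # roughly: Tensor[(?), float32]
```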
---
I think the main topic of interest here is the way we define function
signatures, not necessarily how to broaden the scope of the same function to
support more flexible inputs (e.g. `Any`).
I think the main goal of A1 concerns the semantics of the attribute. Since an
attribute has always been considered
---
Any thoughts about dynamic rank support?
---
Correct me if my understanding is wrong: is the goal of A1 to finally merge
static and dynamic ops into a single dynamic API, in which input tensors allow
dynamic inputs and attributes only allow constants (like TensorFlow)? Also, in
terms of the boundary between static and dynamic ops, we still need to
---
Both A0 and A1 should be able to reduce the complexity of the frontend logic,
since conversion can always go to the dynamic variants first, and a follow-up
pass then promotes the dynamic variants to their static counterparts.
From the interface-design PoV, A0 somewhat creates additional duplication
---
@mbrookhart Thank you for this RFC. IMHO, one advantage of A0 is that it is
more user friendly to have a single unified API handling both static and
dynamic shape cases. Though this adds complexity to the type inference of each
op, it reduces 1) the number of Relay ops and 2) the complexity of the frontend
logic to handle
---
Frameworks and state-of-the-art models are moving more and more toward
dynamism, where the shapes of tensors in a model are calculated at runtime,
either from the shapes of inputs or from the values of inputs.
There are a number of efforts underway in TVM to better support dynamic models,
including