My impression is that one drawback of this switch is that in the Python interface, a lot of conversions to standard types seem to be needed. When using shapes outside of TVM, one has to convert Arrays of IntImm to lists of int. Even when feeding shapes back to TVM, I seem to need a lot of `list(...)` calls.
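A minimal sketch of the kind of conversion I mean (variable names just for illustration):

```python
from tvm import relay

x = relay.var("x", shape=(2, 3), dtype="float32")
shape = x.type_annotation.shape  # tvm.ir.Array of IntImm, not a Python list

# outside of TVM, explicit conversion to standard types is needed:
py_shape = [int(dim) for dim in shape]

# and one ends up wrapping things again when building new types:
y = relay.var("y", shape=py_shape, dtype="float32")
```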
Would a reasonable first step towards this be to define a "CustomFunction" Relay IR node that parallels much of tvm.relay.Function (but subclasses BaseFunc) when it comes to types in and out, except that instead of a body it holds a PackedFunc reference to what is to be called?
Best regards
I'm currently working with a signature of `visualize(expr, collapse_small=True, node_attr_dict={})`, where `node_attr_dict` maps `Expr -> Dict[str, str]`, i.e. it provides the kwargs for a given node.
(And yeah, I know about the lint complaint regarding mutable objects as default values.)
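For concreteness, a sketch of that signature with the usual None-default workaround (assuming `Expr` is `tvm.relay.Expr`):

```python
from typing import Dict, Optional

from tvm import relay

# minimal sketch of the signature described above; defaulting to None
# instead of {} sidesteps the mutable-default-argument lint complaint
def visualize(expr: relay.Expr, collapse_small: bool = True,
              node_attr_dict: Optional[Dict[relay.Expr, Dict[str, str]]] = None):
    if node_attr_dict is None:
        node_attr_dict = {}
    ...  # walk expr and emit nodes, looking up extra kwargs in node_attr_dict
```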
---
I'm a bit hesitant to comment here, but it looks like the PR from this
discussion is stalled. I wonder if part of that is that the RelayViz
abstraction is too ambitious at this point.
I got to this question after I had been looking into adapting the visualization, starting from the PR
Thank you.
I filed [PR 5853](https://github.com/apache/incubator-tvm/pull/5853).
To put in some long-term perspective: I wonder whether one could somehow systematically detect when things of class Object itself, rather than of a subclass, are instantiated, and not allow that.
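As a rough sketch of the kind of check I have in mind (the helper is made up):

```python
import tvm

# hypothetical helper: flag values that come back as plain tvm.runtime.Object,
# i.e. without a more specific Python subclass registered for them
def assert_not_bare_object(value):
    if type(value) is tvm.runtime.Object:
        raise TypeError("bare Object instantiated; a subclass mapping seems to be missing")
    return value
```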
Best regards
Thomas
---
I noticed that while most attrs classes inherit from Attrs, some don't and are only present on the C++ side (thus being mapped to plain Object). In particular, they don't have the `keys` function.
Now defining them with a short docstring like the others is easy, but is that
an OK patch?
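The patch would essentially mirror what the existing Python-side attrs classes do, along these lines (the type key `relay.attrs.ExampleAttrs` is a made-up placeholder for one of the C++-only nodes):

```python
import tvm._ffi
from tvm.ir import Attrs

# registering a Python-side mirror class makes the node map to Attrs
# (with `keys` etc.) instead of plain Object
@tvm._ffi.register_object("relay.attrs.ExampleAttrs")
class ExampleAttrs(Attrs):
    """Attributes used by the (hypothetical) example operator."""
```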
Best regards
Thomas
While we're at the topic of names: the params are currently just numbered. I must admit I think it would be prettier if we used the state_dict names instead.
What do you think?
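To illustrate what I mean by state_dict names (torchvision's resnet18 merely as an example):

```python
import torchvision

model = torchvision.models.resnet18()
# state_dict keys are stable, human-readable names such as "conv1.weight"
# or "layer1.0.conv1.weight", which would be prettier than plain numbering
print(list(model.state_dict().keys())[:3])
```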
---
Actually, this can happen in the body of the function, but not here, because the inputs come from the function signature.
You can print `traced_module.code` to witness the translation (that is how I tracked down the function that reproduces the non-processed names).
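For example, with a toy module:

```python
import torch

class Double(torch.nn.Module):
    def forward(self, some_input):
        return some_input * 2

traced_module = torch.jit.trace(Double(), torch.randn(2))
# the printed TorchScript source shows the input names after translation
print(traced_module.code)
```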
So far the PR only changes the default. Is there an example of the strict mode
that could be followed?
Also, my changes for the PyTorch backend are intertwined with the fixes to deal with non-fp32 types in general (probably a property of my branch rather than a necessity), and I would not want to r
Indeed, I'd wait for that.
---
Just a quick shout regarding potential difficulties: I think this difficulty in
common subexpression elimination with reshape is a consequence of the A0
approach for reshape:
https://discuss.tvm.ai/t/discuss-pass-for-merging-shape-tensors/6955
Best regards
Thomas
---
Just to warm this up a bit. While graph input debug names can change, PyTorch
does keep the stem stable. This is used e.g. for `script_module.code` and to
give an error for missing inputs (try `script_module()`).
https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.cpp#L735
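A quick way to see this (toy example):

```python
import torch

@torch.jit.script
def fn(my_input: torch.Tensor) -> torch.Tensor:
    return my_input + 1

# calling without arguments raises an error that mentions the input by
# name, showing that (the stem of) the debug name is kept around
try:
    fn()
except RuntimeError as err:
    print(err)
```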
Hello,
I recently stumbled over the fact that `reshape` is typically hard for TVM's
common subexpression elimination pass to work with. This is because the target
shape (which also comes in the attrs) can be a distinct (even if equal) tensor.
In particular, converting reshape from, say, PyTorch
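To sketch the difficulty (assuming the A0-style reshape that carries the target shape as a tensor; the constants are just for illustration):

```python
import numpy as np
import tvm
from tvm import relay

# equal in value, but two distinct expressions: a structural common
# subexpression elimination pass sees two different reshape inputs
shape_a = relay.const(np.array([3, 4], dtype="int64"))
shape_b = relay.const(np.array([3, 4], dtype="int64"))
print(shape_a.same_as(shape_b))                   # False: distinct nodes
print(tvm.ir.structural_equal(shape_a, shape_b))  # True: same values
```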
So it seems that "float = float32", but with a warning, might be good?
Personally, I had been thinking of a Python warning, so anyone can decide to treat it as an error / ignore it / ..., but @comaniac, is [this autotvm
warning](https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/recor
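As a sketch of what I mean (the helper `normalize_dtype` is hypothetical):

```python
import warnings

# lenient variant: accept "float" but warn, so downstream users can
# escalate the warning to an error or silence it via the warnings module
def normalize_dtype(dtype: str) -> str:
    if dtype == "float":
        warnings.warn('dtype "float" is ambiguous; assuming "float32"')
        return "float32"
    return dtype
```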
Hi,
so here is something I bumped into: `"float"` means different things in
different places, not always as expected.
The background is that C/C++, PyTorch, and others will interpret float to mean 32-bit floating point numbers, aka float32, and arguably float32 is the most common datatype in deep learning.
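For example, the interpretation differs between libraries:

```python
import numpy as np
import torch

print(np.dtype("float"))  # float64: NumPy follows Python's float, a C double
print(torch.float)        # torch.float32
```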
I would appreciate an invitation as well for t...@beamnet.de (t-vi on github).
Thank you.
Thomas