I do this using CLion (which supports Python as well as C++). You can set up
two different launch configurations for your program, one that launches it as a
Python app and the other as a C++ app. Then if I need to debug the C++, I can
launch it in debug using the C++ config and use the graphical debugger.
---
The sort of case I'm thinking of is when a mutation takes place: the mutated
part of the graph won't have types associated with it (at least, not until
type_infer is called on the expression again). It's not immediately obvious to
me whether that's happening in this example. But now I've thought
---
There is another way types can go awry in the dataflow matcher. When things get
mutated they lose their type info until the rewrite is completed. We might want
to start treating that behaviour as a bug because it's caught me out before.
Maybe @mbrookhart can comment?
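To make that concrete, here's a minimal sketch (the toy expression is just for illustration): nodes created during a rewrite carry no type information, so accessing `checked_type` on them fails until type inference is run on the module again.

```python
import tvm
from tvm import relay

# Stand-in for a freshly rewritten sub-graph: these nodes have no types yet,
# so reading .checked_type on them at this point would raise an error.
x = relay.var("x", shape=(1, 8))
mod = tvm.IRModule.from_expr(relay.nn.relu(x))

# Re-running type inference repopulates the types.
mod = relay.transform.InferType()(mod)
print(mod["main"].body.checked_type)  # TensorType([1, 8], float32)
```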
---
Have you tried using `checked_type` rather than `_checked_type_`?
---
[Visit Topic](https://discuss.tvm.ai/t/same-shape-pattern/7012/5) to respond.
Just in terms of BYOC's capabilities, it's worth mentioning that one of the
reasons all the codegens accept Relay rather than TIR is that BYOC is
implemented in Relay. There's no infrastructure currently to partition TIR
functions and I don't think it would be a simple extension (maybe the
---
I have a candidate fix in this PR:
https://github.com/apache/incubator-tvm/pull/5476
---
[Visit Topic](https://discuss.tvm.ai/t/byoc-problem-about-subgraph-with-tupletypenode-inputs/6522/6) to respond.
Hi, welcome to the forum :) I'm working on fixing this exact issue at the
moment. It comes about because constant tuples are not correctly propagated
into the partitioned regions, so you can't see the data of the tuple, only its
type. I hope to have a fix in review either later today or tomorrow.
---
This should be resolved by this PR:
https://github.com/apache/incubator-tvm/pull/5320 :)
---
[Visit Topic](https://discuss.tvm.ai/t/incorrect-generated-function-after-partitiongraph-pass/6380/2) to respond.
Status update! I've put up the following two PRs which hopefully will allow for
composite function annotation:
[5261](https://github.com/apache/incubator-tvm/pull/5261),
[5262](https://github.com/apache/incubator-tvm/pull/5262). Feel free to take a
look.
---
I've found out I can do this by just passing each function as its own argument
and then collecting them together from TVMArgs.
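A minimal sketch of the pattern, in case it helps anyone else (the name `example.set_callbacks` and the Python receiver are made up for illustration; the real receiver would be a C++ `TVM_REGISTER_GLOBAL` whose body walks `TVMArgs` and stores each `PackedFunc` in order):

```python
import tvm

# Hypothetical receiver, registered from Python here only so the snippet is
# self-contained. It sees one PackedFunc per positional argument, so the
# original ordering of the functions is preserved.
@tvm.register_func("example.set_callbacks")
def _set_callbacks(*funcs):
    for i, f in enumerate(funcs):
        print("callback", i, "returned", f(i))

# Pass each callable as its own argument rather than inside a tvm::Array,
# since PackedFunc does not derive from ObjectRef.
callbacks = [lambda x, k=k: x + k for k in range(3)]
tvm.get_global_func("example.set_callbacks")(*callbacks)
```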
---
[Visit Topic](https://discuss.tvm.ai/t/how-can-i-transfer-a-list-of-functions/6239/2) to respond.
I'm trying to pass a list of functions from Python to C++. However, this
doesn't seem possible at the moment because tvm::Array can only contain objects
deriving from ObjectRef (which PackedFunc does not). Is there another approach
I can take? In particular, it's important that the order of the functions is preserved.
---
Ah, I understand now. We'll have a look at how viable that'll be for ACL.
Thanks for the suggestion!
---
[Visit Topic](https://discuss.tvm.ai/t/external-codegen-constant-tensors-in-c-codegen/5890/19) to respond.
How are you compiling? We could serialize the graph, but we'd then need to
codegen the relevant ACL API calls on the remote and compile it into something
that can be executed. We can't do that without a toolchain on the remote,
though, and that can't be guaranteed.
---
I've had a chance to look at this now and it seems like it's quite a
fundamental issue with C codegen, not just ACL. This will make a lot of
compile-time optimisations impossible as there's no reasonable way to handle
large constant tensors in the codegen. This will be especially prevalent when
---
AnnotateTarget doesn't support composite functions at the moment. I intend to
send a PR to resolve this very soon (hopefully this week). You can use
MergeCompilerRegions if you like, but this will only be applicable if you also
support conv2d, bias and relu individually as well as merged.
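For reference, here's a sketch of how the passes are meant to fit together once composite annotation lands (the pattern table, toy network and the target name `example_target` are placeholders rather than a real codegen, so nothing here is actually compiled externally):

```python
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

# Placeholder composite pattern: conv2d followed by relu.
def conv2d_relu_pattern():
    return is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))

pattern_table = [("example_target.conv2d_relu", conv2d_relu_pattern())]

# Toy network so the pipeline has something to partition.
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
out = relay.nn.relu(relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=16))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

# Fuse the composite, annotate it for the external target, merge adjacent
# regions, then split the annotated regions out into external functions.
seq = tvm.transform.Sequential([
    relay.transform.MergeComposite(pattern_table),
    relay.transform.AnnotateTarget("example_target"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
print(seq(mod))
```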
---
I've been thinking about a more robust solution to this for a while and have
some scripts that implement a sort of AutoTVM 'cache' that I use when
autotuning. I'll try and turn these into a PR and open an RFC to see if anyone
would be interested.
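As a very rough illustration of the idea (only a sketch, not the actual scripts; `tune_fn` and the log path are placeholders): skip tuning when a cached record file already exists and just apply it at build time.

```python
import os
from tvm import autotvm, relay

LOG_FILE = "autotvm_cache.log"  # cached tuning records shared across runs

def build_with_cache(mod, params, target, tune_fn):
    # Only tune when there is no cached log yet; tune_fn is a user-supplied
    # routine that appends its records to LOG_FILE.
    if not os.path.exists(LOG_FILE):
        tune_fn(mod, params, target, LOG_FILE)
    # Apply the cached records at build time, whether fresh or reused.
    with autotvm.apply_history_best(LOG_FILE):
        return relay.build(mod, target=target, params=params)
```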
---