I think another situation where `SaveToFile` is hard is when we have multiple
modules imported. For example, a `MetadataModule` could contain a DSOModule and
one or more CSourceModule/JSONRuntimeModule. It seems a bit hard to save them
out as one file for compilation though.
I think this is n
Glad to see this proposed, since we have wanted to do it for a while. I also
agree that P2 is better. Another use case is heterogeneous execution, where we
can have both llvm and cuda targets in it.
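As a hedged sketch of the heterogeneous use case (this is an illustrative data layout, not the schema the RFC actually adopts), a composite target could pair each sub-target with the portion of the graph it handles:

```python
# Hypothetical composite-target description: each entry pairs a target
# string with a filter naming the ops/subgraphs it should handle.
composite_target = {
    "kind": "composite",
    "targets": [
        {"target": "cuda", "filter": "fused_ops"},   # GPU portion
        {"target": "llvm", "filter": "default"},     # CPU fallback
    ],
}

def pick_target(op_kind, spec):
    """Pick the first sub-target whose filter matches, else the last entry
    as the fallback."""
    for entry in spec["targets"]:
        if entry["filter"] == op_kind:
            return entry["target"]
    return spec["targets"][-1]["target"]
```

With such a spec, dispatch per op becomes a simple lookup, e.g. `pick_target("fused_ops", composite_target)` selects `"cuda"` while anything unmatched falls back to `"llvm"`.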
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-composite-target/7744/4) to respond.
Thanks for the discussion.
I don't think we really need to tie this feature to the BYOC flow. The problem
it tries to solve is providing calibration data to 3rd-party codegens with
quantizers, as @anijain2305 pointed out. This is not required by QNN or AutoQ.
It is also optional to 3rd-party codegen or BYOC
@kazum Thanks for the effort. It is very interesting. It sounds like you only
need BYOC to do annotation and partitioning, as you don't really have a
backend/library for it, right? I am wondering how you package the subgraphs;
do you prepare them manually? Thanks.
cc @anijain2305 as well
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-byoc-data-calibration-flow/7099/3)
to respond.
+1 for making fp32 the default, as fp64 may not be that useful, and it could
increase the memory footprint and reduce performance (e.g. by occupying more
SIMD lanes).
I also agree that we can make `float` more explicit.
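The footprint point can be checked with nothing but the standard library's `array` module: an fp64 buffer is exactly twice the size of an fp32 buffer of the same length, and correspondingly a 128-bit SIMD register holds 4 fp32 lanes but only 2 fp64 lanes.

```python
from array import array

n = 1 << 10
buf32 = array('f', [0.0] * n)   # single precision, 4 bytes per element
buf64 = array('d', [0.0] * n)   # double precision, 8 bytes per element

bytes32 = buf32.itemsize * len(buf32)
bytes64 = buf64.itemsize * len(buf64)
# bytes64 is exactly twice bytes32 for the same element count.
```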
Yeah, I thought about positional ordering as well, but it looks like passing
variables by name might be safer. For a CSourceModule external codegen we
generate a wrapper like `float* a = const_0;`, where `const_0` would need to be
produced by the initializer later, so we would need a name for it anyway.
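A minimal sketch of why the names matter (the `emit_wrapper` helper is hypothetical, not TVM's actual emitter): the generated C wrapper refers to each constant by symbol, and the initializer that runs later has to fill in exactly those symbols, so positional order alone is not enough.

```python
def emit_wrapper(var_names):
    """Hypothetical emitter for a CSourceModule-style wrapper.

    Each constant is referred to by name (const_0, const_1, ...) because
    the initializer that produces it runs later and must know which
    symbol to fill in.
    """
    lines = []
    for i, var in enumerate(var_names):
        lines.append(f"float* {var} = const_{i};")
    return "\n".join(lines)
```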
BTW, we will need to have the variables as well, i.e. %x1, %x2, %x3, as I
mentioned above. This is because we need to know which variable an NDArray
should be assigned to.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/29) to
respond.
Yeah, let me give it a try.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/28) to
respond.
Yeah, I would prefer C1 or C2. C2 was pretty much what I was doing.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/26) to
respond.
Yeah, I think I didn't make it very clear. The problem is that we may have
multiple subgraphs, and each of them may have its own "var_name: NDArray"
pairs. I was trying to have just one `ModuleInitWrapper` take charge of the
initialization of the engines for all subgraphs, so that users don't need to
o
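A minimal Python sketch of that idea (plain dicts stand in for TVM's Map and NDArray, and the class name here is only a stand-in for the real `ModuleInitWrapper`): one wrapper keeps a per-subgraph constant table and hands each engine its own bindings.

```python
class ModuleInitWrapperSketch:
    """Hypothetical single entry point that initializes every subgraph's
    engine from its own {var_name: ndarray} mapping."""
    def __init__(self):
        self.consts = {}           # subgraph symbol -> {var_name: array}

    def set_consts(self, symbol, var_to_array):
        self.consts[symbol] = dict(var_to_array)

    def init_engine(self, symbol):
        # In the real flow this would hand the arrays to the subgraph's
        # engine; here we just return the binding it would receive.
        return self.consts.get(symbol, {})
```

The point is that the user populates one object once, instead of wiring up an initializer per subgraph.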
I thought about an array as well. Passing an array to initialize is relatively
simple. The trickier part is packing the data and passing them around via
PackedFunc.
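A sketch of the packing concern (illustrative only; these helpers are not TVM APIs): a variadic PackedFunc-style call takes a flat argument list, so the name/array pairs have to be interleaved on the caller side and rebuilt on the callee side.

```python
def pack_args(var_to_array):
    """Flatten {name: array} into [name0, arr0, name1, arr1, ...] so the
    pairs can travel through a variadic PackedFunc-style call."""
    flat = []
    for name, arr in var_to_array.items():
        flat.extend([name, arr])
    return flat

def unpack_args(flat):
    """Rebuild the mapping on the callee side from the flat argument list."""
    return {flat[i]: flat[i + 1] for i in range(0, len(flat), 2)}
```

The round trip `unpack_args(pack_args(d)) == d` is what the calling convention has to guarantee.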
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/19) to
respond.
cc @junrushao1994 as well
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/17) to
respond.
Here is the draft PR: https://github.com/apache/incubator-tvm/pull/5770
We may need to use a Map to save the variable-to-constant/NDArray mapping.
Should we move `ModuleInitWrapper` out of runtime, because it otherwise needs
to have Map in the runtime namespace?
I used `SourceMetadataModule`
I think we actually need two things. One is thinking about how we should enable
the tests to make sure other changes in TVM don't break this functionality.
The other is adding an official tutorial. There are examples under docs/dev;
you can probably take a look at them and add it there. Pl
I am not sure if the clarification of the packaging part is clear enough, but
there is actually a potential problem. The goal is to be able to conveniently
assemble code and metadata separately from the frontend in a modular way. The
generated artifact is intended to be usable by AOT, graph runtime
@Leo-arm Thanks for the proposal and the interest in BYOC. I have a few
questions: 1) are you using the CSourceModule runtime/serialization or
something different? 2) Is the codegen toolchain ACL, and do you plan to set up
CI for testing, because I see there are several stages for testing?
@tqchen Thanks for the comment and for sharing your thoughts. Yes, the
fundamental problem here is the serialization of code and weights. Code is
relatively easy to handle; weights are the real problem. I agree that a JSON
runtime introduces another layer of abstraction for the graph, which the curren
We have currently built the infra for Bring-Your-Own-Codegen. For demonstration
purposes, a simple CSourceModule-style codegen and runtime is used for
`ccompiler` and `dnnl` (now called oneDNN). The CSourceModule runtime works
reasonably well on small examples and is easy to understand. However, it
I have another thought on this: how about just putting this one in
backend/utils.h, since its current users would be the code under there? For
general passes, it might be different though (like to_a_norm_form, to_cps, PE,
etc.).
To be honest, among C0-C3 I would not want to introduce ANF to the codegen.
This means we would either need to run ANF on the whole program or run the pass
internally in the extern codegen to convert it. If we run it on the whole
program, I think some passes that work on the DFG would not work well/or
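For readers unfamiliar with the transform being discussed, here is a toy model of A-normal form conversion (this is not Relay's `ToANormalForm` pass; expressions are modeled as nested tuples with string leaves). The key effect is that every intermediate result gets its own let-binding, which changes the program's structure for any pass expecting a dataflow graph.

```python
import itertools

def to_anf(expr, bindings, counter):
    """Flatten a nested expression of the form (op, arg, ...) into
    let-style bindings; leaves are variable names. Returns the name
    (or leaf) holding the expression's value."""
    if isinstance(expr, str):
        return expr
    op, *args = expr
    flat_args = [to_anf(a, bindings, counter) for a in args]
    name = f"t{next(counter)}"
    bindings.append((name, (op, *flat_args)))
    return name
```

For example, `add(mul(x, y), z)` becomes `let t0 = mul(x, y); let t1 = add(t0, z)`.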
ahh, I didn't notice we have this one. Thanks.
---
[Visit
Topic](https://discuss.tvm.ai/t/missing-memoization-in-exprfunctor/6334/12) to
respond.
Yeah, I am not a big fan of introducing this base class either, as I think the
only duplicated code would really just be the caching map. If you are concerned
about those 10 LOCs, I can actually just do it this way: I can remove them and
replace them by calling the
Functor::VisitExpr(e
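The caching map in question is small enough to sketch in a few lines. Here is a toy Python stand-in for a memoized expression visitor (expressions are nested tuples, leaves are strings; this is not TVM's ExprFunctor, just an illustration of what the ~10 lines buy): shared sub-expressions are visited only once.

```python
class MemoExprVisitor:
    """Toy memoized expression visitor. Computes expression size while
    caching results per node, so a shared sub-expression is only
    traversed the first time it is seen."""
    def __init__(self):
        self.memo = {}      # id(expr) -> cached result
        self.visits = 0     # number of real (non-cached) visits

    def visit(self, expr):
        key = id(expr)
        if key in self.memo:
            return self.memo[key]
        self.visits += 1
        if isinstance(expr, str):
            result = 1
        else:
            result = 1 + sum(self.visit(a) for a in expr[1:])
        self.memo[key] = result
        return result
```

On a DAG where one subtree is referenced twice, the visit count stays linear in the number of distinct nodes rather than the tree size.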
I am not sure, but I sort of remember that strided_slice may also need to
change `begin` and `end` into exprs for dynamic shapes. @kevinthesun and
@yongwww can comment more on this.
---
[Visit
Topic](https://discuss.tvm.ai/t/slice-like-cant-be-constant-folded/6206/2) to
respond.
Thanks for clarification. I think this change makes sense to me.
---
[Visit
Topic](https://discuss.tvm.ai/t/pytorch-frontend-graph-input-names-can-change-using-loaded-torchscript/6055/7)
to respond.
The input names are really annoying. I think one use case of the name-to-shape
dict is to avoid passing the inputs in the wrong order. How hard is it for
users to supply the inputs in the correct order? And is it possible to connect
the names after `_run_jit_passes`?
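What the name-to-shape dict buys the user can be sketched in a few lines (a hypothetical helper, not the PyTorch frontend's actual code): given the graph's expected input order, a name-keyed dict can always be reordered into the right positional list, and a missing name fails loudly instead of silently misbinding.

```python
def order_inputs(graph_input_names, user_inputs):
    """Reorder a {name: value} dict into the positional order the graph
    expects; raise if an input is missing."""
    missing = [n for n in graph_input_names if n not in user_inputs]
    if missing:
        raise KeyError(f"missing inputs: {missing}")
    return [user_inputs[n] for n in graph_input_names]
```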
@jonso Thanks for making these points; I am very glad to work together. Most of
the questions were answered by @comaniac. One thing is that putting extern in
the target string might not be sufficient, because 1) we would need to change
the way the target is parsed now, and 2) what if there are multiple targets
# Bring your own codegen to TVM + Graph Partitioning
The goal is to come up with the right Relay subgraph data structure/abstraction
so that we can more conveniently allow third-party library and hardware vendors
to bring their own codegen tools to TVM.
This RFC involves design and implementation