+1
--
+1
--
Based on my experience at several organizations, dynamic shape support is clearly very important, particularly given the popularity of large language models. Also, efficiently supporting dynamic shapes would be one of the major appealing features of a "modern" DLC. I think the above commen
Please join us to welcome @Lunderberg as a new committer to TVM.
Eric has greatly contributed to the testing framework and CI, TIR buffer
allocation, and the Vulkan backend, among others. He has also been actively
participating in the RFC and forum discussions around the related areas, where he has shared many
Yeah, there are different uses of "context" in the codebase. `Device` makes more
sense to me as well. Would the change to DLPack break other projects that take
it as a submodule?
---
It is really nice to add regression tests against a selected set of models,
since downstream users usually have to spend quite an amount of time finding
the root cause once there is a regression. Otherwise they have to sync with the
upstream codebase as frequently as possible and test regression loca
This looks okay to me, but I have one comment: this sounds like we need
to add one more argument to the build interface, whose details users may not
need to know. Another possible option is to bake it into
`PassContext` as a config. However, I understand that this configure
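For concreteness, here is a minimal sketch of the `PassContext` route. The key `relay.FuseOps.max_depth` is an existing registered option used purely as a stand-in for whatever config this discussion settles on:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# The config dict lets us pass a build-time knob without widening
# the build interface itself.
with tvm.transform.PassContext(opt_level=3,
                               config={"relay.FuseOps.max_depth": 10}):
    lib = relay.build(mod, target="llvm")
```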
Thanks for the reminder. I think we should probably close this for now.
--
Closed #2566.
--
Please join us to welcome Junru Shao (@junrushao1994) as a new Committer. Junru
has been actively contributing to various aspects of the TVM codebase. He
reimplemented and refactored the Target system, which greatly helped code
lowering and code generation. Junru also largely contributed to the ru
Ahh, thanks for the reminder. This is closed by #6337.
--
Closed #4178.
--
Please join us to welcome @areusch as a new TVM reviewer. Andrew has been
actively contributing to uTVM, the on-device RPC server, and various runtime
changes. He proposed the roadmap for uTVM and presented the work at the online
meetup. Andrew has also been very actively sharing his thoughts at the
+1 (binding)
- Checked the signature and hash
- The code compiles
- Checked LICENSE and NOTICE
--
@comaniac cool, thanks. We plan to make a cut tomorrow.
--
cc @tqchen @ZihengJiang
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/6554
-- Commit Summary --
* Zhi's key for ASF release
-- File Changes --
M KEYS (56)
-- Patch Links --
https://github.com/apache/incubator-tvm/pull/65
Yeah, this could be a useful tool to generate the generic templates or the code
with a fixed pattern, which is actually the major part of a node. For some
other members, e.g. `SEqualReduce` and `SHashReduce`, we may still need users to
manually check/add them since they are not always `Equal(this->a,
Please join us to welcome @lhutton1 as a new reviewer. He has been actively
contributing to bring-your-own-codegen (BYOC), ConvertLayout, and integrating
the Arm Compute Library into TVM. He also helped review BYOC and Relay pass PRs.
- [Commits History](https://github.com/apache/incubator-tvm/
Yeah, I also prefer to document it instead of throwing many warnings. In
addition, we have some checkers in the codebase claiming that some APIs will be
deprecated in the next release. We probably want to take some action on them
as well.
--
I think another situation where `SaveToFile` is hard is when we have multiple
modules imported. For example, a `MetadataModule` could contain a DSOModule and
one or more CSourceModules/JSONRuntimeModules. It seems a bit hard to save them
out as one file for compilation, though.
I think this is n
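For what it's worth, a minimal sketch of the single-artifact path on the Python side: `export_library` (rather than a raw `SaveToFile`) walks the imported modules, compiles any source modules, and links everything together:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

lib = relay.build(mod, target="llvm")
# export_library recursively handles imported modules (e.g. a
# CSourceModule emitted by an external codegen) and links them
# into a single shared object.
lib.export_library("deploy.so")
loaded = tvm.runtime.load_module("deploy.so")
```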
Glad to see this proposed, since we have wanted to do it for a while. I also
agree that P2 is better. Another use case is heterogeneous execution, where we
can have both llvm and cuda targets in it.
---
ACK
On Thu, Aug 27, 2020 at 5:53 PM Henry Saputra wrote:
> Hear ya
>
> On Thu, Aug 27, 2020 at 10:37 AM Dave Fisher wrote:
>
> > This is a test message to see if the project is listening on the dev@tvm
> > mailing list or is treating this only as an archive.
+1 (binding)
--
+1
Looking forward to the continued success after graduation.
--
Thanks for the discussion.
I think we don't really need to tie this feature to the BYOC flow. The problem
it tries to solve is providing calibration data to third-party codegens with
quantizers, as @anijain2305 pointed out. This is not required by QNN or AutoQ.
It is also optional to third-party codegen or BYO
@kazum Thanks for the effort. It is very interesting. It sounds like you only
need BYOC to do annotation and partitioning, as you don't really have a
backend/library for it, right? I am wondering how you package the subgraphs;
do you manually prepare them? Thanks.
---
cc @anijain2305 as well
---
+1 for making fp32 the default, as fp64 may not be that useful and it could
increase the memory footprint and reduce performance (i.e., occupying more
SIMD lanes).
I also agree that we can make float more explicit.
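A small illustration of what "more explicit" could look like at the TE level (a sketch, assuming the usual `te` APIs):

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), dtype="float32", name="A")
# Spelling the constant as fp32 avoids relying on whatever default
# a bare Python float would otherwise get.
B = te.compute((n,), lambda i: A[i] + tvm.tir.const(1.0, "float32"), name="B")
```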
---
Yeah, I thought about positional ordering as well, but it looks like passing
variables might be safer. For a CSourceModule external codegen we generate a
wrapper like `float* a = const_0;`, where `const_0` would need to be produced
by the initializer later. So we would need a name for it anyway.
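A toy sketch (hypothetical helper, not the actual codegen) of why a stable name beats a positional index here:

```python
def emit_const_binding(var_name: str, const_id: int) -> str:
    # The generated C wrapper binds a local pointer to a named symbol;
    # the initializer later fills in const_<id> by that same name, so a
    # purely positional scheme would not survive reordering.
    return f"float* {var_name} = const_{const_id};"

print(emit_const_binding("a", 0))  # -> float* a = const_0;
```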
---
BTW, we will need to have the variables as well, i.e. `%x1`, `%x2`, `%x3`,
something like what I mentioned above. This is because we need to know which
variable an NDArray should be assigned to.
---
Yeah, let me give it a try.
---
Yeah, I would prefer C1 or C2. C2 was pretty much what I was doing.
---
Yeah, I think I didn't make it very clear. The problem is that we may have
multiple subgraphs, each of which may have "var_name: NDArray" pairs. I was
trying to have just one `ModuleInitWrapper` take charge of the initialization
of engines for all subgraphs so that users don't need to o
I thought about an array as well. Passing an array to the initializer is
relatively simple. The trickier part is packing the data and passing it around
using a PackedFunc.
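A minimal sketch of that packing, where the global name `example.init_engine` is made up for illustration:

```python
import numpy as np
import tvm

@tvm.register_func("example.init_engine")  # hypothetical global name
def init_engine(*args):
    # Each argument arrives as a tvm.nd.NDArray unpacked from the call.
    print("received", len(args), "tensors")

arrs = [tvm.nd.array(np.zeros((2, 2), "float32")) for _ in range(3)]
f = tvm.get_global_func("example.init_engine")
f(*arrs)  # the NDArrays travel through the PackedFunc calling convention
```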
---
cc @junrushao1994 as well
---
Here is the draft PR: https://github.com/apache/incubator-tvm/pull/5770
We may need to use `Map` to save the variable-to-constant/NDArray mapping.
Should we move `ModuleInitWrapper` out of the runtime, since it otherwise
needs to have `Map` in the runtime namespace?
I used `SourceMetadataModule`
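For illustration only (the draft PR is where the real design lives), the mapping could look like this on the Python side:

```python
import numpy as np
import tvm

# convert() turns the Python dict into a TVM runtime Map so that it
# can be handed across the FFI boundary to whatever consumes the
# constants; the keys are the generated constant names.
var_to_const = tvm.runtime.convert({
    "const_0": tvm.nd.array(np.ones((4,), "float32")),
    "const_1": tvm.nd.array(np.zeros((2, 2), "float32")),
})
```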
I think we actually need two things. One is thinking about how we should enable
the tests to make sure other changes in TVM wouldn't break this functionality.
The other is adding an official tutorial. There are examples under docs/dev;
you can probably take a look at them and add it there. Pl
I am not sure if the clarification of the packaging part is clear enough, but
there is actually a potential problem. The goal is to be able to conveniently
assemble code and metadata separately from the frontend in a modular way. The
generated artifact is intended to be usable by AOT, graph runtim
@Leo-arm Thanks for the proposal and the interest in BYOC. I have a few
questions: 1) are you using the CSourceModule runtime/serialization or
something different? 2) Is the codegen toolchain ACL, and do you plan to set up
CI for testing, since I see there are several stages of testing?
@tqchen Thanks for the comments and for sharing your thoughts. Yes, the
fundamental problem here is the serialization of code and weights. Code is
relatively easy to handle; weights are the real problem. I agree that a JSON
runtime introduces another layer of abstraction for the graph, which the curren
We have now built the infra for Bring-Your-Own-Codegen. For demonstration
purposes, a simple CSourceModule-style codegen and runtime is used for ccompiler
and dnnl (now called oneDNN). The CSourceModule runtime works reasonably well on
small examples and is easy to understand. However, it
I have another thought on this: how about just putting this one in
backend/utils.h, since its current usage would be for the code under there?
For general passes it might be different, though (like to_a_normal_form,
to_cps, PE, etc.)
---
To be honest, among C0-C3 I would not want to introduce ANF to codegen. This
means we either want to run ANF on the whole program or run the pass internally
in the external codegen to convert it. If we run it on the whole program, I think
some passes that work on the DFG would not work well/or
ahh, I didn't notice we have this one. Thanks.
---
Yeah, I am not a big fan of introducing this base class either, as I think the
only duplicated code would really just be the caching map. If you are
concerned about those 10 LOCs, I can actually just remove them and replace
them by calling the
Functor::VisitExpr(e
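For readers following along, the caching map in question is just this pattern (a generic sketch, not TVM's actual class):

```python
class MemoizedFunctor:
    """Visit each expression node once, keyed by object identity."""

    def __init__(self):
        self.memo = {}

    def visit(self, expr):
        if id(expr) in self.memo:
            return self.memo[id(expr)]
        result = self.visit_expr(expr)  # per-node dispatch goes here
        self.memo[id(expr)] = result
        return result

    def visit_expr(self, expr):
        raise NotImplementedError
```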
I am not sure, but I sort of remember that strided_slice may also need to
change the `begin` and `end` into expr for dynamic shapes. @kevinthesun and
@yongwww can comment more on this.
---
Thanks for the clarification. I think this change makes sense to me.
---
The input names are really annoying. I think one use case of the name-to-shape
dict is to avoid the wrong ordering of the inputs. How hard is it for users to
supply the inputs in the correct order? And is it possible to connect the names
after `_run_jit_passes`?
---
Yes, @yongwww had one months back:
https://github.com/apache/incubator-tvm/pull/4312
--
Ahh, one more reminder: for all these models, we will have an OOM problem for
pretty printing after the ANF pass. It is very likely because recursively
visiting the AST saves all the intermediate results.
--
Just a reminder: to support these models we need some patches for tensor array
as well. mask_rcnn seems to require some more debugging.
--
+1
--
@jonso Thanks for making these points; I am very glad to work together. Most of
the questions have been answered by @comaniac. One thing is that putting extern
in the target string might not be sufficient because 1) we need to change the
way the target is parsed now, and 2) what if there are multiple ta
# Bring your own codegen to TVM + Graph Partitioning
The goal is to come up with the right Relay subgraph data structure/abstraction
so that we can more conveniently allow third-party library and hardware vendors
to bring their own codegen tools to TVM.
This RFC involves design and implementati
+1
--
Merged #4115 into master.
--
@tqchen thanks. This is now merged.
--
# TVM Monthly - August 2019
https://discuss.tvm.ai/t/tvm-monthly-august-2019
--
Closed #3594.
--
#3647
--
The serialization itself doesn't have much to do with quantization. If a
quantized model needs new opcodes in the VM, we need to introduce them first and
then extend the serialization/deserialization to support these instructions.
--
I feel heterogeneous execution will mainly be related to memory management in
the VM. We don't need to encode any information in the VM for compilation and
codegen. I think we probably need to handle `AllocTensor` a little differently,
e.g. by making it device aware.
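A hypothetical sketch of what "device aware" could mean for the instruction; the field names are invented for illustration, not the actual VM encoding:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AllocTensor:
    dst_register: int
    shape: Tuple[int, ...]
    dtype: str
    device_index: int  # the added piece: which device to allocate on
```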
--
@tqchen My bad. The APIs starting with `Serialize` and `Deserialize` are
actually not exposed.
--
@icemelon9 The length is mainly for a sanity check before we decode the
instructions. We could remove it. There could be multiple fields with variable
length; I thought we should always have a field among the fixed fields to
indicate the length of the variable one, is this right?
For example, https:
@icemelon9 We probably don't need to store the length for each variable-length
field, because we should be able to derive it from the fixed fields? That means
we usually put its length as a field of the instruction, right?
--
@MarisaKirisame I think we need it to make deserialization easier. Otherwise,
we may need many checks.
--
@icemelon9 Yeah, thanks. Putting the `length` before the field with variable
length seems reasonable.
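Concretely, the layout under discussion looks like this (a sketch using `struct`, not the actual VM byte format):

```python
import struct

def encode_instr(opcode, operands):
    # Fixed-width header first: opcode, then the operand count, so the
    # decoder knows how much variable-width data follows.
    header = struct.pack("<ii", opcode, len(operands))
    body = struct.pack("<%dq" % len(operands), *operands)
    return header + body

def decode_instr(buf):
    opcode, n = struct.unpack_from("<ii", buf, 0)
    operands = struct.unpack_from("<%dq" % n, buf, 8)
    return opcode, list(operands)
```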
--
Is this ready for review? Have we converged on the design in the
quantization RFC?
--
@icemelon9 #3353 is the draft I haven't finished yet.
--
Yeah, this is exactly what I think as well.
--
Yes, I agree this is annoying. It looks like we might need to introduce some
metadata for a pass. Usually when we run sequential passes, we may need to
consider preserving information from the updated passes and also validate
whether we can proceed. We should think about it more when we start resolvi
Sounds like a good plan. I think `main` is currently used as the entry function.
--
Why is `transform` a better namespace than `pass`? I am fine with `Sequential`,
as it is also used by PyTorch.
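A sketch of the usage being debated, with the names as they eventually landed in `tvm.transform`/`relay.transform`:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], x + relay.const(1.0)))

seq = tvm.transform.Sequential([
    relay.transform.FoldConstant(),
    relay.transform.FuseOps(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```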
--
For anyone who is interested in this, please comment. We appreciate your
thoughts and suggestions. @MarisaKirisame and I will start working on it once
we get some cycles.
--
# Porting Relay Passes to Pass Manager
Now that the pass manager framework has been merged, we should start moving
passes to it. This RFC proposes a plan for moving the Relay passes.
## Proposal (taking constant folding as an example):
The proposal needs to solve problems from both the bac
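As a concrete example of the target state, constant folding invoked through the pass manager (a minimal sketch):

```python
import tvm
from tvm import relay

f = relay.Function([], relay.add(relay.const(1.0), relay.const(2.0)))
mod = tvm.IRModule.from_expr(f)

# FoldConstant is exposed as a module-level pass managed by the pass infra.
mod = relay.transform.FoldConstant()(mod)
print(mod)  # the body folds to the constant 3.0
```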
@masahi I see, thanks. Another option is probably using a copy operator if
there are duplicates.
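A quick sketch of that workaround, assuming `relay.copy` as the copy operator:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
y = relay.add(x, x)
# Without the copy the tuple would hold the same tensor twice; the copy
# gives the second element its own buffer.
out = relay.Tuple([y, relay.copy(y)])
func = relay.Function([x], out)
```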
--
BTW, I am not certain that stopping the fusion of the return tuple will fully
solve the problem, because it looks to me that we will still have two identical
tensors in the tuple, right? Am I missing something?
--
@masahi Can we prevent passing duplicated tensors instead? It looks like we
would otherwise need to change all schedules for all targets in topi, right?
--
@junrushao1994
> @tqchen where is %2?
There might be some code emitted, but the idea is to illustrate the problem
when dealing with duplicate values in return tuples.
> why is the example bad for codegen
The output tensor is scheduled twice in compute_engine here:
https://github.com/dmlc/tvm/blob/552d4a
+1
--
+1
--
closed by #2830
--
Closed #2812.
--
+1 for refactoring.
BTW, we probably also need some discussion about adding regression tests to the
CI pipeline, because some passes could noticeably affect performance. But this
can be a separate issue.
--