+1
--
https://github.com/apache/tvm/issues/17471#issuecomment-2429852984
+1
--
https://github.com/apache/tvm/issues/17179#issuecomment-2245792306
Merged #17156 into main.
--
https://github.com/apache/tvm/pull/17156#event-13509139467
Merged #17078 into main.
--
https://github.com/apache/tvm/pull/17078#event-13138474534
+1
--
https://github.com/apache/tvm/issues/16912#issuecomment-2081084861
Merged #16881 into main.
--
https://github.com/apache/tvm/pull/16881#event-12458804770
+1
--
https://github.com/apache/tvm/issues/16368#issuecomment-1881948062
@ysh329 you may need to set up your GitHub account following
https://cwiki.apache.org/confluence/display/OPENWHISK/Accessing+Apache+GitHub+as+a+Committer
--
https://github.com/apache/tvm/issues/16277#issuecomment-1876431553
@ysh329 a tag will be created automatically if you create a release on GitHub
--
https://github.com/apache/tvm/issues/16277#issuecomment-1876406386
Merged #16326 into main.
--
https://github.com/apache/tvm/pull/16326#event-11372927165
It's possible to modify the Jenkinsfile to enable automatic deployment of docs
for the release branch:
https://github.com/apache/tvm/blob/main/ci/jenkins/templates/gpu_jenkinsfile.groovy.j2#L157-L215
--
https://github.com/apache/tvm/issues/15812#is
The link is in the Jenkins CI step that uploads the artifact, but I believe the
link has expired.
--
https://github.com/apache/tvm/issues/15812#issuecomment-1873595787
The docs can be built via Docker or downloaded from the build artifacts
generated by CI (there's a link in the CI output). We should update the website
to document this process.
--
https://github.com/apache/tvm/issues/15812#issuecomment-1873557971
+1
--
https://github.com/apache/tvm/issues/15974#issuecomment-1785725040
+1. I'm supportive after reviewing the community discussions, wearing my hat
as a PMC member.
--
https://github.com/apache/tvm/issues/15521#issuecomment-1673600754
Closed #15134 as completed.
--
https://github.com/apache/tvm/issues/15134#event-10039303189
I've updated the website
--
https://github.com/apache/tvm/issues/15134#issuecomment-1670179542
+1
--
https://github.com/apache/tvm/issues/15313#issuecomment-1638563594
There is no big difference, apart from some missing details in the doc. The
link to the artifact can be found in the CI log, but I found the link was
broken: the last build of the v0.12.0 branch was two weeks ago and the
artifact had been removed, so I reran the CI and downloaded the artifact.
--
I have pushed to the SVN server and sent a PR to update the download page:
https://github.com/apache/tvm-site/pull/41
--
https://github.com/apache/tvm/issues/14505#issuecomment-1546796900
@ysh329 Thanks for spotting this. The download page is broken now; it only
includes links for the v0.8 release. This is because previous PRs to tvm-site
weren't sent to the main branch, so a manual rebuild of the site overwrites
the contents.
The update of the site should follow these steps: 1) send a PR to tv
+1
--
https://github.com/apache/tvm/issues/14710#issuecomment-1537114794
+1
--
https://github.com/apache/tvm/issues/14260#issuecomment-1472324324
+1
--
https://github.com/apache/tvm/issues/14129#issuecomment-1451129138
+1
--
https://github.com/apache/tvm/issues/13026#issuecomment-1276839145
It would be great to have
https://github.com/apache/tvm/commit/7aef584c0f8fb3b516afde3fb5fac9c2d0969c0a
cherry-picked.
--
https://github.com/apache/tvm/issues/12832#issuecomment-1254346733
+1
--
https://github.com/apache/tvm/issues/12743#issuecomment-1241307038
+1
--
https://github.com/apache/tvm/issues/12651#issuecomment-1231979807
+1
--
https://github.com/apache/tvm/issues/12103#issuecomment-1187895239
Merged #77 into main.
--
https://github.com/apache/tvm-rfcs/pull/77#event-7004931146
Thanks everyone for the discussions. We have agreed on the design principles
and will continue to explore scheduling options. Let's keep the RFC open for
final comments until the end of this week.
--
https://github.com/apache/tvm-rfcs/pull/77#i
> > For example, we may introduce explicit cache stage to add the padding, and
> > mark this block for later processing.
>
> Wouldn't that require a "remove entirely" annotation that was suggested
> against
> [here](https://github.com/apache/tvm-rfcs/pull/77#issuecomment-1163019805)? I
> could
Indeed, if a buffer is used in an annotation value, that will change the
semantics of a node; however, there are different ways to represent this, as
long as it can be reconstructed later. For example, we may introduce an
explicit cache stage to add the padding, and mark this block for later
processing.
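A minimal TensorIR sketch of that idea; the PrimFunc, the block name, and the
"explicit_padding" annotation key are all illustrative, not an agreed-on
convention:
```
import tvm
from tvm import tir
from tvm.script import tir as T

@T.prim_func
def compute(A: T.Buffer((16,), "float32"), B: T.Buffer((16,), "float32")):
    for i in range(16):
        with T.block("compute"):
            vi = T.axis.spatial(16, i)
            B[vi] = A[vi] + 1.0

sch = tir.Schedule(compute)
block = sch.get_block("compute")
# Explicit cache stage: the padding would be materialized here rather than
# being implied by an annotation on the original block.
cache = sch.cache_read(block, 0, "global")
# Mark the cache block so a later pass can find and process it.
sch.annotate(cache, ann_key="explicit_padding", ann_val=1)
```
This keeps the computation semantics of the original block unchanged; the
annotation only carries a hint that later passes can act on or drop.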
--
> So long as the constraints can be statically searched for, this approach
> makes sense to me. I would be more concerned about adding additional
> semantics to existing nodes, such as an AttrStmt node
It doesn't add additional semantics; the computation semantics stay the same.
It is a hint to t
Thanks @csullivan for providing the overview. I agree that the non-local
approaches 2-4 are necessary. From the examples in this RFC I can also see how
the components C0-C2 can be used to support these non-local approaches. C0 + C1
allow us to specify the constraints during scheduling, and propagate b
Thanks for the discussion. To provide more context, the A0 approach we
discussed is TIR-Relax layout rewriting
https://github.com/tlc-pack/relax/issues/162 (the general idea is to lift such
transformations from TIR scheduling into the graph, and then cancel out
redundant intermediate transformati
@areusch @Hzfengsy I've updated the RFC. It is ready for another look
--
https://github.com/apache/tvm-rfcs/pull/70#issuecomment-1150342730
It seems we all agree that introducing `DeclBuffer` is helpful. The only
unresolved question is how TVMScript should be updated, as @wrongtest
mentioned. As discussed above, we have these options:
* B1: In TVMScript, `T.allocate` and `T.decl_buffer` strictly map to the
corresponding TIR nodes. To
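For concreteness, a hedged TVMScript sketch of what B1 would look like, with
`T.allocate` and `T.decl_buffer` each mapping to exactly one TIR node (the
syntax shown is illustrative, not the final design):
```
from tvm.script import tir as T

@T.prim_func
def func() -> None:
    # One tir.Allocate node: raw storage, no buffer semantics attached.
    data = T.allocate([128], "float32", scope="global")
    # One tir.DeclBuffer node: an explicit buffer view over that storage,
    # replacing today's implicit buffer declaration.
    buf = T.decl_buffer((128,), "float32", data=data)
    buf[0] = 0.0
```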
@wrongtest I've thought about option A3 vs A4. For parsing / translation from
TVMScript to TIR, it is acceptable to have `T.allocate` translated into two
nodes, `Allocate` + `DeclBuffer`. But it will be tricky for
`TVMScriptPrinter`: we will need to find both `Allocate` and `DeclBuffer` nod
@wrongtest Thanks for bringing this up. There are a few options for the
behavior in TVMScript; I'm open to discussion.
* A1: The original behavior before https://github.com/apache/tvm/pull/9727:
`T.allocate` returns a `Var`, which can later be used in `T.load / T.store`.
* A2: Current behavior:
This is a follow-up of https://github.com/apache/tvm/pull/9727 and
[RFC#63](https://github.com/apache/tvm-rfcs/pull/63). Currently, a buffer can
be implicitly declared and then used. This implicit behavior can be error-prone
and makes analysis more difficult. This RFC introduces `DeclBuffer`, a new
@areusch I added some minor updates based on the comments; see the last two
commits.
--
https://github.com/apache/tvm-rfcs/pull/63#issuecomment-1113659174
@areusch it is ready for another look
--
https://github.com/apache/tvm-rfcs/pull/63#issuecomment-1104314240
In https://github.com/apache/tvm/pull/9727 and
[RFC#39](https://github.com/apache/tvm-rfcs/blob/main/rfcs/0039-buffer-physical-layout.md),
we deprecated `Load` and `Store` in favor of `BufferLoad` and `BufferStore` in
order to support generalized multi-dimensional physical buffer access. This is
a f
+1
--
https://github.com/apache/tvm/issues/10471#issuecomment-1058568846
The thread extents are part of the params of the `PrimFunc` for the device
code; the kernel launch parameters are set here
(https://github.com/apache/tvm/blob/main/src/tir/transforms/split_host_device.cc#L294-L296).
In `LaunchParamConfig`, `base` and `arg_index_map` are used to map the index of
p
@wrongtest I'm working on the TensorIR side and have a draft version of
`transform_layout`. The current implementation is
```
void TransformLayout(ScheduleState self, const StmtSRef& block_sref,
                     int buffer_index, bool is_write_index,
                     const IndexMap& index_map);
```
It applies the mapping function
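For illustration, a hedged sketch of how the primitive might be driven from
the Python side; the workload, block name, and tiling map are made up, and the
final API may differ from this draft:
```
import tvm
from tvm import tir
from tvm.script import tir as T

@T.prim_func
def matmul(A: T.Buffer((128, 128), "float32"),
           B: T.Buffer((128, 128), "float32"),
           C: T.Buffer((128, 128), "float32")):
    for i, j, k in T.grid(128, 128, 128):
        with T.block("matmul"):
            vi, vj, vk = T.axis.remap("SSR", [i, j, k])
            with T.init():
                C[vi, vj] = T.float32(0)
            C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]

sch = tir.Schedule(matmul)
block = sch.get_block("matmul")
# Rewrite the layout of the block's first write buffer (C) into 16x16 tiles;
# the lambda plays the role of the IndexMap argument in the C++ signature.
sch.transform_layout(block, ("write", 0),
                     lambda i, j: (i // 16, j // 16, i % 16, j % 16))
```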
The API side of `transform_layout` looks good; let's add additional examples
of scheduling with the returned new axes and get this in.
--
https://github.com/apache/tvm-rfcs/pull/39#iss
Hi all,
The Apache TVM community is happy to announce the release of Apache TVM
v0.8.0.
Apache TVM v0.8.0 brings several major exciting experimental features,
including:
- PaddlePaddle frontend
- TVMScript: round-trippable python-based syntax for TIR
- TorchScript integration
- TensorIR scheduli
Closed #9504.
--
https://github.com/apache/tvm/issues/9504#event-5662071047
Thanks everyone for voting. The result has been sent out in
https://lists.apache.org/thread/4rdndw0n8mz5mbbwz4p2po7h7y0hv4h2
--
https://github.com/apache/tvm/issues/9504#issuecomment-976942438
iro Masuda (binding)
- Ziheng Jiang (binding)
- Wenxi Zhu
- Christopher Sidebottom
- Cody Yu (binding)
- Wuwei Lin
- Lily Orth-Smith
- Chris Sullivan
- Thierry Moreau (binding)
- Yuchen Jin
- Mehrdad Hessar
- Andrew Reusch
- Josh Fromm
- Masahiro Hiramori
0 votes
- No votes
-1 votes
- No votes
Vote thr
+1
--
https://github.com/apache/tvm/issues/9504#issuecomment-969203268
Thanks for adding the discussion points.
I understand the difficulty of implementing it as an eager transform in TE,
mainly because most other schedule primitives are not applied eagerly the way
they are in TIR. So adding a rewrite pass for `BufferTransform` makes sense to
me.
> Should BufferTransform apply only to
Thanks for updating the RFC. Here are some follow-up thoughts:
Usage of `te.AXIS_SEPARATOR`: it seems this is only used on the API side but
not in `BufferTransform`; it would be good to get some clarification. I can
also see some tradeoffs here that are worth discussing:
- T0: using `te.AXIS_SEPARATO
I'd suggest adding the `BufferTransform` data structure here, which will be
very helpful to other readers.
--
https://github.com/apache/tvm-rfcs/pull/39#issuecomment-955098088
Thanks for the RFC. A quick question: RFC #39 mentioned the usage of
`PHYSICAL_AXIS_SEPARATOR` to support n-d physical allocation (if supported by
the runtime); how will it work with the 1-d extent here?
--
One way to represent the layout mapping in TIR is to introduce different
storage scopes and have a registry of pre-defined layout mappings (for
example, we already did a similar thing for [`wmma`
fragments](https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/python/tvm/topi/
Thanks @Lunderberg for the RFC. Logical-physical mapping is definitely an
important feature. I also implemented something similar for warp memory to
support tensor core instructions on GPU; I'm happy to collaborate more to
reach a unified design.
Some preliminary comments:
The current representat
+1
--
https://github.com/apache/tvm/issues/9057#issuecomment-924189855
+1
--
https://github.com/apache/tvm/issues/8928#issuecomment-912827349
I recently encountered similar issues. We can extend legalization/lowering to
match the pattern `Evaluate(call_intrin)` and lower it to a `Stmt`.
---
https://discuss.tvm.apache.org/t/handling-of-prefetch-legalization-lowering/10718/2
+1 (non-binding)
--
https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679437092
+1, this is very exciting
--
https://github.com/apache/incubator-tvm/issues/6299#issuecomment-675867143
When I used `VMExecutor` to run a CNN model, it threw an error:
```
RuntimeError: Check failed: VerifyMemory(func): Direct host side access to
device memory is detected. Did you forget to bind?
PrimFunc([placeholder, transform_weight]) attrs={"global_symbol":
"fused_nn_contrib_conv2d_winograd_wei
Unfortunately I'm not able to reproduce it in a Docker container right now.
I'll update here if I find a way to reproduce it.
---
https://discuss.tvm.ai/t/conflict-with-xgboost-when-thrust-is-enabled/6889/4
When `USE_THRUST=ON`, an unknown CUDA error happened:
```
File "/home/ubuntu/tvm/src/runtime/cuda/cuda_device_api.cc", line 108
CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: unknown
error
```
It can be reproduced with the following script
```
import numpy as np
import tvm
It is a missing feature. Rules should be added to
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/quantize/_annotate.py
and
https://github.com/apache/incubator-tvm/blob/master/src/relay/quantize/calibrate.cc
For the performance part, you might also need to take a look at `conv
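For illustration, a hedged sketch of what an annotate rule could look like,
modeled on the existing rules in `_annotate.py`; the op name `nn.leaky_relu`
is a stand-in for whichever op is missing a rule, and the helper names follow
that file at the time and may have changed since:
```
from tvm.relay.quantize._annotate import (
    register_annotate_function, QAnnotateExpr, _get_expr_kind, _forward_op)
from tvm.relay.quantize.quantize import quantize_context

@register_annotate_function("nn.leaky_relu")
def leaky_relu_rewrite(ref_call, new_args, ctx):
    """Propagate the simulated-quantize annotation through the op."""
    if quantize_context().check_to_skip(ref_call):
        return None
    x_expr, x_kind = _get_expr_kind(new_args[0])
    if x_kind is None:
        # The input is not annotated; leave this call untouched.
        return None
    expr = _forward_op(ref_call, [x_expr])
    return QAnnotateExpr(expr, x_kind)
```
The calibration side in `calibrate.cc` would then need a matching entry so the
collected statistics cover the new op.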
+1
--
https://github.com/apache/incubator-tvm/issues/4443#issuecomment-55239
@yzhliu let's get #4295 in; it's ready to merge once CI passes.
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-555737813
typo: enhence -> enhance
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-554602399
+1
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-544497287
You need to call `traverse_inline` in your schedule function, which should be
similar to `schedule_conv2d_nchw`.
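A minimal sketch of that pattern, in the style of the existing TOPI schedules;
the tag `"my_conv2d"` and the inner split are placeholders for your actual
compute (in recent TVM the helper lives at `tvm.topi.utils.traverse_inline`;
older versions used `topi.util`):
```
from tvm import te
from tvm.topi.utils import traverse_inline  # topi.util in older TVM

def schedule_my_conv2d(outs):
    """Inline elementwise stages, then schedule the conv2d stage."""
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    s = te.create_schedule([x.op for x in outs])

    def _callback(op):
        if op.tag == "my_conv2d":  # the tag set by your compute op
            conv = op.output(0)
            # Placeholder scheduling for the conv stage.
            xo, xi = s[conv].split(s[conv].op.axis[-1], factor=16)

    traverse_inline(s, outs[0].op, _callback)
    return s
```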
---
Okay I will send a patch
---
https://discuss.tvm.ai/t/quantization-broken-due-to-pr-3135/3237/4
Solved in the latest master.
---
https://discuss.tvm.ai/t/solved-relay-broken-case-of-type-infer/3169/2
The above example after annotation:
```
data
||
sim_quantize(QINPUT) sim_quantize(QINPUT)
||
add(bn_bias)
|
... /
|
add
```
data is usually the output of a previous conv2d. There are duplicated
simula
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-480658092
This issue was introduced by https://github.com/dmlc/tvm/pull/2605
It is useful to print the Relay IR after passes for debugging. When
`print(ir_pass.pass_debug_print(func, show_meta_data=False))` is added after
`ir_pass.fuse_ops` in `relay.build_module`, sub-functions are printed in
reverse order, which are d