Re: [apache/tvm] [VOTE] Release Apache TVM v0.18.0.rc0 (Issue #17471)

2024-10-22 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/17471#issuecomment-2429852984

Re: [apache/tvm] [VOTE] Release Apache TVM v0.17.0.rc0 (Issue #17179)

2024-07-23 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/17179#issuecomment-2245792306

Re: [apache/tvm] [release][Dont Squash] Update version to 0.17.0 and 0.18.0.dev on main branch (PR #17156)

2024-07-15 Thread Wuwei Lin
Merged #17156 into main.
View on GitHub: https://github.com/apache/tvm/pull/17156#event-13509139467

Re: [apache/tvm] [Bugfix][NCCL] Release NCCL thread_local resources in destructor (PR #17078)

2024-06-12 Thread Wuwei Lin
Merged #17078 into main.
View on GitHub: https://github.com/apache/tvm/pull/17078#event-13138474534

Re: [apache/tvm] [VOTE] Release Apache TVM v0.16.0.rc0 (Issue #16912)

2024-04-27 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/16912#issuecomment-2081084861

Re: [apache/tvm] [release][Dont Squash] Update version to 0.16.0 and 0.17.0.dev on main branch (PR #16881)

2024-04-13 Thread Wuwei Lin
Merged #16881 into main.
View on GitHub: https://github.com/apache/tvm/pull/16881#event-12458804770

Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-08 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/16368#issuecomment-1881948062

Re: [apache/tvm] [Release] v0.15.0 release schedule (Issue #16277)

2024-01-03 Thread Wuwei Lin
@ysh329 you may need to set up your GitHub account following https://cwiki.apache.org/confluence/display/OPENWHISK/Accessing+Apache+GitHub+as+a+Committer
View on GitHub: https://github.com/apache/tvm/issues/16277#issuecomment-1876431553

Re: [apache/tvm] [Release] v0.15.0 release schedule (Issue #16277)

2024-01-03 Thread Wuwei Lin
@ysh329 a tag will be created automatically if you create a release on GitHub.
View on GitHub: https://github.com/apache/tvm/issues/16277#issuecomment-1876406386

Re: [apache/tvm] [release][Dont Squash] Update version to 0.15.0 and 0.16.0.dev on main branch (PR #16326)

2024-01-02 Thread Wuwei Lin
Merged #16326 into main.
View on GitHub: https://github.com/apache/tvm/pull/16326#event-11372927165

Re: [apache/tvm] [Release] v0.14.0 release schedule (Issue #15812)

2024-01-01 Thread Wuwei Lin
It's possible that we modify the Jenkinsfile to enable automatic deployment of docs for the release branch: https://github.com/apache/tvm/blob/main/ci/jenkins/templates/gpu_jenkinsfile.groovy.j2#L157-L215

Re: [apache/tvm] [Release] v0.14.0 release schedule (Issue #15812)

2024-01-01 Thread Wuwei Lin
The link is in the Jenkins CI step that uploads the artifact, but I believe the link has expired.
View on GitHub: https://github.com/apache/tvm/issues/15812#issuecomment-1873595787

Re: [apache/tvm] [Release] v0.14.0 release schedule (Issue #15812)

2024-01-01 Thread Wuwei Lin
The documents can be built via Docker or downloaded from the build artifacts generated by CI (there's a link in the CI output). We should update the website with this process.
View on GitHub: https://github.com/apache/tvm/issues/15812#issuecomment-1873557971

Re: [apache/tvm] [VOTE] Release Apache TVM v0.14.0.rc0 (Issue #15974)

2023-10-30 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/15974#issuecomment-1785725040

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-10 Thread Wuwei Lin
+1. I'm supportive after reviewing the community discussions and wearing my hat as a PMC member.
View on GitHub: https://github.com/apache/tvm/issues/15521#issuecomment-1673600754

Re: [apache/tvm] [Release] v0.13.0 release schedule (Issue #15134)

2023-08-08 Thread Wuwei Lin
Closed #15134 as completed.
View on GitHub: https://github.com/apache/tvm/issues/15134#event-10039303189

Re: [apache/tvm] [Release] v0.13.0 release schedule (Issue #15134)

2023-08-08 Thread Wuwei Lin
I've updated the website.
View on GitHub: https://github.com/apache/tvm/issues/15134#issuecomment-1670179542

Re: [apache/tvm] [VOTE] Release Apache TVM v0.13.0.rc0 (Issue #15313)

2023-07-17 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/15313#issuecomment-1638563594

Re: [apache/tvm] [Release] v0.12.0 release schedule (Issue #14505)

2023-05-14 Thread Wuwei Lin
There is no big difference except some missing details in the doc. The link to the artifact can be found in the CI log, but I found the link was broken because the last build of the v0.12.0 branch was two weeks ago and the artifact had been removed, so I reran the CI and downloaded the artifact.

Re: [apache/tvm] [Release] v0.12.0 release schedule (Issue #14505)

2023-05-13 Thread Wuwei Lin
I have pushed to the SVN server and sent a PR to update the download page: https://github.com/apache/tvm-site/pull/41
View on GitHub: https://github.com/apache/tvm/issues/14505#issuecomment-1546796900

Re: [apache/tvm] [Release] v0.12.0 release schedule (Issue #14505)

2023-05-12 Thread Wuwei Lin
@ysh329 Thanks for spotting this. The download page is broken now; it only includes links for the v0.8 release. This is because previous PRs to tvm-site weren't sent to the main branch, so a manual rebuild of the site overwrites the contents. The update of the site should follow these steps: 1) send PR to tv

Re: [apache/tvm] [VOTE] Release Apache TVM v0.12.0.rc0 (Issue #14710)

2023-05-06 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/14710#issuecomment-1537114794

Re: [apache/tvm] [VOTE] Release Apache TVM v0.11.1 (Issue #14260)

2023-03-16 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/14260#issuecomment-1472324324

Re: [apache/tvm] [VOTE] Release Apache TVM v0.11.0.rc0 (Issue #14129)

2023-03-01 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/14129#issuecomment-1451129138

Re: [apache/tvm] [VOTE] Release Apache TVM v0.10.0.rc0 (Issue #13026)

2022-10-12 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/13026#issuecomment-1276839145

Re: [apache/tvm] [release] v0.10.0 Release Schedule (Issue #12832)

2022-09-21 Thread Wuwei Lin
Would be great to have https://github.com/apache/tvm/commit/7aef584c0f8fb3b516afde3fb5fac9c2d0969c0a cherry-picked.
View on GitHub: https://github.com/apache/tvm/issues/12832#issuecomment-1254346733

Re: [apache/tvm] [VOTE] Issue Triage Workflow RFC (Issue #12743)

2022-09-08 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/12743#issuecomment-1241307038

Re: [apache/tvm] [VOTE] Establish TVM Unity Connection Technical Strategy (Issue #12651)

2022-08-30 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/12651#issuecomment-1231979807

Re: [apache/tvm] [VOTE] Release Apache TVM v0.9.0.rc0 (Issue #12103)

2022-07-18 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/12103#issuecomment-1187895239

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-07-15 Thread Wuwei Lin
Merged #77 into main.
View on GitHub: https://github.com/apache/tvm-rfcs/pull/77#event-7004931146

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-07-12 Thread Wuwei Lin
Thanks everyone for the discussions. We have agreed on the design principles and will continue to explore scheduling options. Let's keep the RFC open for final comments until the end of this week.

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-06-22 Thread Wuwei Lin
> > For example, we may introduce explicit cache stage to add the padding, and mark this block for later processing.
>
> Wouldn't that require a "remove entirely" annotation that was suggested against [here](https://github.com/apache/tvm-rfcs/pull/77#issuecomment-1163019805)? I could

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-06-22 Thread Wuwei Lin
Indeed, if a buffer is used in an annotation value, that will change the semantics of a node; however, there are different ways to represent this, as long as it can be reconstructed later. For example, we may introduce an explicit cache stage to add the padding, and mark this block for later processing.
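
A minimal sketch of what that could look like with today's `tir.Schedule` API; the block name and the annotation key `"preproc"` are assumptions of this sketch, not part of the proposal:
```
import tvm
from tvm import tir

def mark_padding_stage(mod: tvm.IRModule) -> tir.Schedule:
    sch = tir.Schedule(mod)
    blk = sch.get_block("compute")            # block that reads the padded buffer (assumed name)
    cache = sch.cache_read(blk, 0, "global")  # explicit stage producing the padded copy
    sch.annotate(cache, "preproc", "pad")     # hint for later processing (assumed key)
    return sch
```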

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-06-22 Thread Wuwei Lin
> So long as the constraints can be statically searched for, this approach makes sense to me. I would be more concerned about adding additional semantics to existing nodes, such as a AttrStmt node

It doesn't add additional semantics; the computation semantics stay the same. It is a hint to t

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-06-14 Thread Wuwei Lin
Thanks @csullivan for providing the overview. I agree that the non-local approaches 2-4 are necessary. From the examples in this RFC I can also see how the components C0-C2 can be used to support these non-local approaches. C0 + C1 allow specifying the constraints during scheduling, and propagate b

Re: [apache/tvm-rfcs] [RFC] Buffer Layout Padding (PR #77)

2022-06-11 Thread Wuwei Lin
Thanks for the discussion. To provide more context, the A0 approach we discussed is TIR-Relax layout rewriting (https://github.com/tlc-pack/relax/issues/162); the general idea is to lift such transformations in TIR scheduling into the graph, and then cancel out redundant intermediate transformati

Re: [apache/tvm-rfcs] [RFC] Introducing DeclBuffer (PR #70)

2022-06-08 Thread Wuwei Lin
@areusch @Hzfengsy I've updated the RFC. It is ready for another look.
View on GitHub: https://github.com/apache/tvm-rfcs/pull/70#issuecomment-1150342730

Re: [apache/tvm-rfcs] [RFC] Introducing DeclBuffer (PR #70)

2022-06-07 Thread Wuwei Lin
Seems we all agree that introducing `DeclBuffer` is helpful. The only unresolved question is how TVMScript should be updated, as @wrongtest mentioned. As discussed above, we have these options:
* B1: In TVMScript, `T.allocate` and `T.decl_buffer` strictly map to the corresponding TIR nodes. To
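
For reference, a sketch of option B1 in the TVMScript syntax that eventually landed; treat the exact spellings as an assumption of this sketch, not the RFC's final form:
```
from tvm.script import tir as T

@T.prim_func
def func():
    # T.allocate maps 1:1 to tir.Allocate; T.decl_buffer declares a Buffer
    # over the allocated storage (tir.DeclBuffer).
    data = T.allocate([16], "float32", "global")
    buf = T.decl_buffer((16,), "float32", data=data)
    buf[0] = T.float32(0.0)
```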

Re: [apache/tvm-rfcs] [RFC] Introducing DeclBuffer (PR #70)

2022-05-20 Thread Wuwei Lin
@wrongtest I've thought about option A3 vs A4. For parsing/translation from TVMScript to TIR, it is acceptable to have `T.allocate` translated to the two nodes `Allocate + DeclBuffer`. But it will be tricky for `TVMScriptPrinter`: we will need to find both `Allocate` and `DeclBuffer` nod

Re: [apache/tvm-rfcs] [RFC] Introducing DeclBuffer (PR #70)

2022-05-11 Thread Wuwei Lin
@wrongtest Thanks for bringing this up. There are a few options for the behavior in TVMScript; I'm open to discussion.
* A1: The original behavior before https://github.com/apache/tvm/pull/9727: `T.allocate` returns a `Var`, which can later be used in `T.load / T.store`.
* A2: Current behavior:

[apache/tvm-rfcs] [RFC] Introducing DeclBuffer (PR #70)

2022-05-10 Thread Wuwei Lin
This is a follow-up of https://github.com/apache/tvm/pull/9727 and [RFC#63](https://github.com/apache/tvm-rfcs/pull/63). Currently, a buffer can be implicitly declared and then used. This implicit behavior can be error-prone and makes analysis more difficult. This RFC introduces `DeclBuffer`, a new

Re: [apache/tvm-rfcs] RFC: clarifying buffer declaration and access (PR #63)

2022-04-29 Thread Wuwei Lin
@areusch added some minor updates based on the comments, see the last two commits.
View on GitHub: https://github.com/apache/tvm-rfcs/pull/63#issuecomment-1113659174

Re: [apache/tvm-rfcs] RFC: clarifying buffer declaration and access (PR #63)

2022-04-20 Thread Wuwei Lin
@areusch it is ready for another look.
View on GitHub: https://github.com/apache/tvm-rfcs/pull/63#issuecomment-1104314240

[apache/tvm-rfcs] RFC: clarifying buffer declaration and access (PR #63)

2022-03-18 Thread Wuwei Lin
In https://github.com/apache/tvm/pull/9727 and [RFC#39](https://github.com/apache/tvm-rfcs/blob/main/rfcs/0039-buffer-physical-layout.md), we deprecated Load and Store in favor of BufferLoad and BufferStore in order to support generalized multi-dimensional physical buffer access. This is a f
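
For illustration (not part of the RFC text), multi-dimensional access with the new nodes can be constructed like this:
```
import tvm
from tvm import tir

buf = tir.decl_buffer((4, 4), "float32", name="B")
# Indices are a list with one element per dimension, unlike the flat
# index of the deprecated Load/Store nodes.
load = tir.BufferLoad(buf, [0, 1])                                   # B[0, 1]
store = tir.BufferStore(buf, tir.FloatImm("float32", 1.0), [0, 1])   # B[0, 1] = 1.0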

Re: [apache/tvm] [VOTE] Replace codeowners with more relevant automation (Issue #10471)

2022-03-03 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/10471#issuecomment-1058568846

[Apache TVM Discuss] [Development] LaunchParamConfig documentation?

2022-02-11 Thread Wuwei Lin via Apache TVM Discuss
The thread extents are part of the params of the `PrimFunc` for the device code; kernel launch parameters are set here: https://github.com/apache/tvm/blob/main/src/tir/transforms/split_host_device.cc#L294-L296. In `LaunchParamConfig`, `base` and `arg_index_map` are used to map the index of p
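
A minimal sketch of where those launch parameters come from: binding loops to thread axes gives the device `PrimFunc` its `blockIdx`/`threadIdx` extents, which `SplitHostDevice` later turns into kernel launch parameters (the compute and names here are just an example):
```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# After lowering, the extents of these two axes (16 blocks x 64 threads)
# become the kernel launch parameters of the device function.
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

mod = tvm.lower(s, [A, B], name="add_one")
```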

Re: [apache/tvm-rfcs] [RFC][TIR] Layout transformations on buffer access (#39)

2021-12-14 Thread Wuwei Lin
@wrongtest I'm working on the TensorIR side and have a draft version of `transform_layout`. The current implementation is:
```
void TransformLayout(ScheduleState self, const StmtSRef& block_sref, int buffer_index, bool is_write_index, const IndexMap& index_map);
```
It applies the mapping function
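
A sketch of how the primitive surfaces on the Python side, using the `tir.Schedule` API that eventually landed (details of the draft in this thread may differ):
```
import tvm
from tvm import tir
from tvm.script import tir as T

@T.prim_func
def before(A: T.Buffer((128, 128), "float32"), B: T.Buffer((128, 128), "float32")):
    for i, j in T.grid(128, 128):
        with T.block("B"):
            vi, vj = T.axis.remap("SS", [i, j])
            B[vi, vj] = A[vi, vj] * 2.0

sch = tir.Schedule(before)
blk = sch.get_block("B")
# Rewrite the layout of the write buffer (index 0) of block "B" into 16x16 tiles.
sch.transform_layout(blk, ("write", 0), lambda i, j: (i // 16, j // 16, i % 16, j % 16))
```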

Re: [apache/tvm-rfcs] [RFC][TIR] Layout transformations on buffer access (#39)

2021-12-07 Thread Wuwei Lin
The API side of `transform_layout` looks good; let's add additional examples of scheduling with the returned new axes and get this in.

[Announcement] Apache TVM v0.8.0 Release

2021-11-24 Thread Wuwei Lin
Hi all, The Apache TVM community is happy to announce the release of Apache TVM v0.8.0. Apache TVM v0.8.0 brings several major exciting experimental features, including:
- PaddlePaddle frontend
- TVMScript: round-trippable python-based syntax for TIR
- TorchScript integration
- TensorIR scheduli

Re: [apache/tvm] [VOTE] Release Apache TVM v0.8.0.rc0 (Issue #9504)

2021-11-23 Thread Wuwei Lin
Closed #9504.
View on GitHub: https://github.com/apache/tvm/issues/9504#event-5662071047

Re: [apache/tvm] [VOTE] Release Apache TVM v0.8.0.rc0 (Issue #9504)

2021-11-23 Thread Wuwei Lin
Thanks everyone for voting. The result has been sent out in https://lists.apache.org/thread/4rdndw0n8mz5mbbwz4p2po7h7y0hv4h2
View on GitHub: https://github.com/apache/tvm/issues/9504#issuecomment-976942438

[apache/tvm] [RESULT][VOTE] Release Apache TVM v0.8.0.rc0 (Issue #9566)

2021-11-23 Thread Wuwei Lin
- Masahiro Masuda (binding)
- Ziheng Jiang (binding)
- Wenxi Zhu
- Christopher Sidebottom
- Cody Yu (binding)
- Wuwei Lin
- Lily Orth-Smith
- Chris Sullivan
- Thierry Moreau (binding)
- Yuchen Jin
- Mehrdad Hessar
- Andrew Reusch
- Josh Fromm
- Masahiro Hiramori

0 votes:
- No votes

-1 votes:
- No votes

Vote thr

Re: [apache/tvm] [VOTE] Release Apache TVM v0.8.0.rc0 (Issue #9504)

2021-11-15 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/9504#issuecomment-969203268

Re: [apache/tvm-rfcs] [RFC][TIR] Layout transformations on buffer access (#39)

2021-11-03 Thread Wuwei Lin
Thanks for adding the discussion points. I understand the difficulty of implementing it as an eager transform in TE, mainly because most other schedule primitives are not done eagerly as they are in TIR. So adding a rewrite pass for `BufferTransform` makes sense to me.
> Should BufferTransform apply only to

Re: [apache/tvm-rfcs] [RFC][TIR] Layout transformations on buffer access (#39)

2021-11-02 Thread Wuwei Lin
Thanks for updating the RFC. Here are some follow-up thoughts. Usage of `te.AXIS_SEPARATOR`: it seems this is only used on the API side but not in `BufferTransform`; it would be good to get some clarification. I can also see some tradeoffs here that are worth discussing:
- T0: using `te.AXIS_SEPARATO

Re: [apache/tvm-rfcs] [RFC][TIR] Layout transformations on buffer access (#39)

2021-10-29 Thread Wuwei Lin
I'd suggest adding the `BufferTransform` data structure here, which will be very helpful to other readers.
View on GitHub: https://github.com/apache/tvm-rfcs/pull/39#issuecomment-955098088

Re: [apache/tvm-rfcs] [RFC][TIR] Change AllocateNode::extent to 1-D (#40)

2021-10-08 Thread Wuwei Lin
Thanks for the RFC. A quick question: RFC #39 mentioned the usage of `PHYSICAL_AXIS_SEPARATOR` to support n-d physical allocation (if supported by the runtime); how will that work with the 1-d extent here?

Re: [apache/tvm-rfcs] [RFC][TIR] Separate physical and logical layout of buffers (#39)

2021-10-06 Thread Wuwei Lin
One way to represent the layout mapping in TIR is to introduce different storage scopes and have a registry of pre-defined layout mappings (for example, we already did a similar thing for [`wmma` fragments](https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/python/tvm/topi/

Re: [apache/tvm-rfcs] [RFC][TIR] Separate physical and logical layout of buffers (#39)

2021-10-05 Thread Wuwei Lin
Thanks @Lunderberg for the RFC. Logical-physical mapping is definitely an important feature. I also implemented something similar for warp memory to support tensor core instructions on GPU; I'm happy to collaborate more to get a unified design. Some preliminary comments: the current representat

Re: [apache/tvm] [VOTE] Adopt round-robin assignment of reviewers for GitHub pull request reviewer assignment. (#9057)

2021-09-21 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/9057#issuecomment-924189855

Re: [apache/tvm] [VOTE] Adopt New Code Review Guideline (#8928)

2021-09-03 Thread Wuwei Lin
+1
View on GitHub: https://github.com/apache/tvm/issues/8928#issuecomment-912827349

[Apache TVM Discuss] [Development] Handling of `prefetch` (legalization/lowering)

2021-08-05 Thread Wuwei Lin via Apache TVM Discuss
I recently encountered similar issues. We can extend legalization/lowering to match the pattern `Evaluate(call_intrin(...))` and lower it to a `Stmt`.
[Visit Topic](https://discuss.tvm.apache.org/t/handling-of-prefetch-legalization-lowering/10718/2) to respond.
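
A rough sketch of such a lowering pass; the intrinsic name "tir.my_prefetch" and the replacement statement are placeholders, not real TVM intrinsics:
```
import tvm
from tvm import tir

@tvm.tir.transform.prim_func_pass(opt_level=0)
def LowerMyPrefetch(func, mod, ctx):
    def postorder(stmt):
        # Match Evaluate(call_intrin(..., "tir.my_prefetch", ...)).
        if isinstance(stmt, tir.Evaluate) and isinstance(stmt.value, tir.Call) \
                and getattr(stmt.value.op, "name", "") == "tir.my_prefetch":
            return tir.Evaluate(0)  # replace with the real prefetch Stmt here
        return None  # keep other statements unchanged
    body = tir.stmt_functor.ir_transform(func.body, None, postorder, ["tir.Evaluate"])
    return func.with_body(body)
```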

Re: [apache/incubator-tvm] [VOTE] Apache TVM Graduation (#6332)

2020-08-24 Thread Wuwei Lin
+1 (non-binding)
View on GitHub: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679437092

Re: [apache/incubator-tvm] [DISCUSS][RFC] Apache TVM Graduation (#6299)

2020-08-18 Thread Wuwei Lin
+1, this is very exciting
View on GitHub: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-675867143

[TVM Discuss] [Development] VM executor AlterOpLayout broken

2020-06-24 Thread Wuwei Lin via TVM Discuss
When I used VMExecutor to run a CNN model, it threw an error:
```
RuntimeError: Check failed: VerifyMemory(func): Direct host side access to device memory is detected. Did you forget to bind? PrimFunc([placeholder, transform_weight]) attrs={"global_symbol": "fused_nn_contrib_conv2d_winograd_wei
```

[TVM Discuss] [Development] Conflict with XGBoost when Thrust is enabled

2020-06-11 Thread Wuwei Lin via TVM Discuss
Unfortunately, I'm not able to reproduce it in a Docker container right now. I'll update here if I find a way to reproduce it.
[Visit Topic](https://discuss.tvm.ai/t/conflict-with-xgboost-when-thrust-is-enabled/6889/4) to respond.

[TVM Discuss] [Development] Conflict with XGBoost when Thrust is enabled

2020-06-04 Thread Wuwei Lin via TVM Discuss
When `USE_THRUST=ON`, an unknown CUDA error happened:
```
File "/home/ubuntu/tvm/src/runtime/cuda/cuda_device_api.cc", line 108
CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: unknown error
```
It can be reproduced with the following script:
```
import numpy as np
import tvm
```

[TVM Discuss] [Development] [Quantization] Add support for conv2D transpose

2020-05-02 Thread Wuwei Lin via TVM Discuss
It is a missing feature. Rules should be added to https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/quantize/_annotate.py and https://github.com/apache/incubator-tvm/blob/master/src/relay/quantize/calibrate.cc. For the performance part, you might also need to take a look at `conv
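
A sketch of what an annotate rule for `nn.conv2d_transpose` might look like, modeled on the existing conv2d rule in `_annotate.py`; the helper names and import paths are taken from that era's codebase and should be treated as assumptions:
```
from tvm.relay.quantize.quantize import QAnnotateKind, quantize_context, _forward_op
from tvm.relay.quantize._annotate import (
    QAnnotateExpr, _get_expr_kind, attach_simulated_quantize,
    register_annotate_function)

@register_annotate_function("nn.conv2d_transpose")
def conv2d_transpose_rewrite(ref_call, new_args, ctx):
    # Annotate data with INPUT and weight with WEIGHT, like the conv2d rule.
    if quantize_context().check_to_skip(ref_call):
        return None
    lhs_expr, lhs_kind = _get_expr_kind(new_args[0])
    rhs_expr, rhs_kind = _get_expr_kind(new_args[1])
    if lhs_kind is None or lhs_kind == QAnnotateKind.ACTIVATION:
        lhs_expr = attach_simulated_quantize(lhs_expr, QAnnotateKind.INPUT)
    if rhs_kind is None:
        rhs_expr = attach_simulated_quantize(rhs_expr, QAnnotateKind.WEIGHT)
    expr = _forward_op(ref_call, [lhs_expr, rhs_expr])
    return QAnnotateExpr(expr, QAnnotateKind.ACTIVATION)
```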

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.0.rc2 (#4443)

2019-11-30 Thread Wuwei Lin
+1

Re: [apache/incubator-tvm] [DEV][DRAFT] TVM v0.6 Release candidate (#4259)

2019-11-19 Thread Wuwei Lin
@yzhliu let's get #4295 in; it's ready to merge once CI passes.
View on GitHub: https://github.com/apache/incubator-tvm/issues/4259#issuecomment-555737813

Re: [apache/incubator-tvm] [DEV][DRAFT] TVM v0.6 Release candidate (#4259)

2019-11-15 Thread Wuwei Lin
typo: enhence -> enhance
View on GitHub: https://github.com/apache/incubator-tvm/issues/4259#issuecomment-554602399

Re: [dmlc/tvm] [VOTE] Add "Organizations contributing using and contributing to TVM" Section to Community Webpage (#4162)

2019-10-21 Thread Wuwei Lin
+1
View on GitHub: https://github.com/dmlc/tvm/issues/4162#issuecomment-544497287

[TVM Discuss] [Development] Can customized convolution operator be fused with topi.nn.relu?

2019-10-17 Thread Wuwei Lin via TVM Discuss
You need to call `traverse_inline` in your schedule function, which should be similar to `schedule_conv2d_nchw`.
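
A minimal sketch of such a schedule function; the tag "my_conv" is whatever tag your compute op sets, and the module path is per current TVM (older versions used `topi.util`):
```
from tvm import te
from tvm.topi.utils import traverse_inline

def schedule_my_conv(outs):
    s = te.create_schedule([x.op for x in outs])

    def _callback(op):
        if op.tag == "my_conv":  # tag set in your compute definition (assumed)
            conv = op.output(0)
            # schedule the conv stage here (tiling, binding, ...); elementwise
            # ops like relu are inlined by traverse_inline so they fuse in

    traverse_inline(s, outs[0].op, _callback)
    return s
```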

[TVM Discuss] [Development] Quantization broken due to PR #3135

2019-07-06 Thread Wuwei Lin via TVM Discuss
Okay I will send a patch.
[Visit Topic](https://discuss.tvm.ai/t/quantization-broken-due-to-pr-3135/3237/4) to respond.

[TVM Discuss] [Development] [solved][Relay] Broken case of type infer

2019-07-02 Thread Wuwei Lin via TVM Discuss
Solved in the latest master.
[Visit Topic](https://discuss.tvm.ai/t/solved-relay-broken-case-of-type-infer/3169/2) to respond.

[TVM Discuss] [Development] Improving quantization accuracy with more precise bias

2019-05-08 Thread Wuwei Lin via TVM Discuss
The above example after annotation:
```
          data
         |    |
sim_quantize(QINPUT)   sim_quantize(QINPUT)
         |                   |
         |              add(bn_bias)
         |                   |
        ...                 /
          \                /
               add
```
data is usually the output of a previous conv2d. There are duplicated simula

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-07 Thread Wuwei Lin
+1 -- You are receiving this because you commented. Reply to this email directly or view it on GitHub: https://github.com/dmlc/tvm/issues/2973#issuecomment-480658092

[TVM Discuss] [Development] [Relay] Sub-functions printed in reverse order

2019-03-22 Thread Wuwei Lin via TVM Discuss
This issue was introduced by https://github.com/dmlc/tvm/pull/2605. It is useful to print Relay IR after passes for debugging. Adding `print(ir_pass.pass_debug_print(func, show_meta_data=False))` after `ir_pass.fuse_ops` in relay.build_module, sub-functions are printed in reverse order, which are d