Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-08 Thread Zhi
+1 -- https://github.com/apache/tvm/issues/16368#issuecomment-1882043633

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-10 Thread Zhi
+1 -- https://github.com/apache/tvm/issues/15521#issuecomment-1674098084

Re: [apache/tvm-rfcs] [RFC] Relax Upstreaming (PR #89)

2022-10-08 Thread Zhi
Based on my experience at several organizations, dynamic shape support is obviously very important, particularly with the popularity of large language models. Also, efficiently supporting dynamic shapes would be one of the major appealing features of a "modern" DLC. I think the above commen...

[Apache TVM Discuss] [Announcement] [COMMUNITY] @lunderberg -> Committer

2021-12-18 Thread Zhi via Apache TVM Discuss
Please join us to welcome @Lunderberg as a new committer to TVM. Eric has greatly contributed to the testing framework and CI, TIR buffer allocation, and the Vulkan backend, among others. He has also been actively participating in the RFC and forum discussions around the related areas, where he has shared many...

[Apache TVM Discuss] [Development/RFC] [RFC] Rename TVMContext to TVMDevice

2021-02-08 Thread Zhi via Apache TVM Discuss
Yeah, there are different uses of context in the codebase. Device makes more sense to me as well. Would the change to DLPack break other projects that take it as a submodule? --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-tvmcontext-to-tvmdevice/9090/7) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] Building a new reproducible benchmark for TVM

2020-11-21 Thread Zhi via Apache TVM Discuss
It is really nice to add regression tests against a selected set of models, since downstream users usually have to spend quite an amount of time finding the root cause once there is a regression, or they have to sync the upstream codebase as frequently as possible and test regressions loca...

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-12 Thread Zhi via Apache TVM Discuss
This looks okay to me. But I have one comment, because this sounds like we need to add one more argument to the build interface, the details of which users may not need to know. Another possible option is to bake it into `PassContext` as a config. However, I understand that this configure...
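For illustration, a minimal sketch of the `PassContext`-config option, written against today's Python API; the config key shown is the one that eventually landed and should be treated as an assumption relative to this RFC's draft:

```python
import tvm
from tvm import relay

# Toy module, just to make the snippet self-contained.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# The switch lives in PassContext rather than being a new build() argument.
with tvm.transform.PassContext(
    opt_level=3, config={"relay.backend.use_auto_scheduler": True}
):
    lib = relay.build(mod, target="llvm")
```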

Re: [apache/incubator-tvm] [Relay][RFC] Support layout altering for heterogeneous compilation (#2566)

2020-11-01 Thread Zhi
Thanks for the reminder. I think we should probably close this for now. -- https://github.com/apache/incubator-tvm/issues/2566#issuecomment-720120936

Re: [apache/incubator-tvm] [Relay][RFC] Support layout altering for heterogeneous compilation (#2566)

2020-11-01 Thread Zhi
Closed #2566. -- https://github.com/apache/incubator-tvm/issues/2566#event-3945389904

[Apache TVM Discuss] [Announcement] [COMMUNITY] junrushao1994 -> Committer

2020-10-20 Thread Zhi via Apache TVM Discuss
Please join us to welcome Junru Shao (@junrushao1994) as a new Committer. Junru has been actively contributing to various aspects of the TVM codebase. He reimplemented and refactored the Target system, which greatly helped code lowering and code generation. Junru also largely contributed to the...

[apache/incubator-tvm] [COMMUNITY] junrushao1994 -> committer (#6719)

2020-10-20 Thread Zhi
Please join us to welcome Junru Shao (@junrushao1994) as a new Committer. Junru has been actively contributing to various aspects of the TVM codebase. He reimplemented and refactored the Target system, which greatly helped code lowering and code generation. Junru also largely contributed to the ru...

Re: [apache/incubator-tvm] [RFC][VM] Heterogeneous execution in Relay VM (#4178)

2020-10-08 Thread Zhi
Ahh, thanks for the reminder. This is closed by #6337. -- https://github.com/apache/incubator-tvm/issues/4178#issuecomment-705664905

Re: [apache/incubator-tvm] [RFC][VM] Heterogeneous execution in Relay VM (#4178)

2020-10-08 Thread Zhi
Closed #4178. -- https://github.com/apache/incubator-tvm/issues/4178#event-3856212990

[apache/incubator-tvm] [COMMUNITY] areusch -> Reviewer (#6637)

2020-10-06 Thread Zhi
Please join us to welcome @areusch as a new TVM reviewer. Andrew has been actively contributing to uTVM, the on-device RPC server, and various runtime changes. He proposed the roadmap for uTVM and presented the work at the online meetup. Andrew has also been very actively sharing his thoughts at the...

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.7.0.rc0 (#6622)

2020-10-05 Thread Zhi
+1 (binding) - Checked the signature and hash - The code compiles - Checked LICENSE and NOTICE -- https://github.com/apache/incubator-tvm/issues/6622#issuecomment-703743085

Re: [apache/incubator-tvm] [RFC] v0.7 Release Planning (#6421)

2020-10-01 Thread Zhi
@comaniac cool, thanks. We plan to make a cut tomorrow. -- https://github.com/apache/incubator-tvm/issues/6421#issuecomment-702523862

[apache/incubator-tvm] [COMMUNITY] Zhi's key for ASF release (#6554)

2020-09-24 Thread Zhi
cc @tqchen @ZihengJiang You can view, comment on, or merge this pull request online at: https://github.com/apache/incubator-tvm/pull/6554 -- Commit Summary -- * Zhi's key for ASF release -- File Changes -- M KEYS (56)

[Apache TVM Discuss] [Development/RFC] [RFC] TVM Object Schema DSL

2020-09-16 Thread Zhi via Apache TVM Discuss
Yeah, this could be a useful tool to generate the generic templates or the code with a fixed pattern, which is actually the major part of a node. For some other members, e.g. `SEqualReduce` and `SHashReduce`, we may still need users to manually check/add them since they are not always `Equal(this->a,`...

[apache/incubator-tvm] [COMMUNITY] lhutton1 -> Reviewer (#6461)

2020-09-11 Thread Zhi
Please join us to welcome @lhutton1 as a new reviewer. He has been actively contributing to bring-your-own-codegen (BYOC), ConvertLayout, and the integration of the Arm Compute Library into TVM. He also helped review BYOC and Relay pass PRs.

Re: [apache/incubator-tvm] [RFC] v0.7 Release Planning (#6421)

2020-09-08 Thread Zhi
Yeah, I also prefer to document it instead of throwing many warnings. In addition, we have some checkers in the codebase claiming that some APIs will be deprecated in the next release. We probably want to take some action on them as well.

[TVM Discuss] [Development/RFC] [RFC] SaveToFile(file_name, format) expected behavior

2020-08-27 Thread Zhi via TVM Discuss
I think another situation where `SaveToFile` is hard is when we have multiple modules imported. For example, a `MetadataModule` could contain a `DSOModule` and one or more `CSourceModule`/`JSONRuntimeModule` instances. It seems a bit hard to save them out as one file for compilation though. I think this is n...

[TVM Discuss] [Development/RFC] [RFC] Composite Target

2020-08-27 Thread Zhi via TVM Discuss
Glad to see this proposed, since we have wanted to do it for a while. I also agree that P2 is better. Another use case is heterogeneous execution, where we can have llvm and cuda targets in it. --- [Visit Topic](https://discuss.tvm.ai/t/rfc-composite-target/7744/4) to respond.
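A small sketch of that heterogeneous use case, using the `Target(host=...)` form as an assumption about what a composite target could carry (the RFC's final syntax may differ):

```python
import tvm

# One target object holding both the device target and the llvm host.
dev_target = tvm.target.Target("cuda", host="llvm")
print(dev_target.kind.name, dev_target.host.kind.name)  # -> cuda llvm
```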

Re: Test to see if podling is listening

2020-08-27 Thread Zhi Chen
ACK

On Thu, Aug 27, 2020 at 5:53 PM Henry Saputra wrote:
> Hear ya
>
> On Thu, Aug 27, 2020 at 10:37 AM Dave Fisher wrote:
> > This is a test message to see if the project is listening on the dev@tvm mailing list or is treating this only as an archive.

Re: [apache/incubator-tvm] [VOTE] Apache TVM Graduation (#6332)

2020-08-24 Thread Zhi
+1 (binding) -- https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679418594

Re: [apache/incubator-tvm] [DISCUSS][RFC] Apache TVM Graduation (#6299)

2020-08-19 Thread Zhi
+1 Looking forward to the continued success after graduation. -- https://github.com/apache/incubator-tvm/issues/6299#issuecomment-676823099

[TVM Discuss] [Development/RFC] [RFC][BYOC] Data Calibration Flow

2020-07-01 Thread Zhi via TVM Discuss
Thanks for the discussion. I think we don't really need to tie this feature to the BYOC flow. The problem it tries to solve is providing calibration data to 3rd-party codegen with quantizers, as @anijain2305 pointed out. This is not required by QNN or AutoQ. It is also optional to 3rd-party codegen or BYO...

[TVM Discuss] [Development/RFC] [RFC][BYOC] Runtime module to offload subgraph to edge server

2020-06-30 Thread Zhi via TVM Discuss
@kazum Thanks for the effort. It is very interesting. It sounds like you only need BYOC to do annotation and partitioning, as you don't really have a backend/library for it, right? I am wondering how you package the subgraphs; do you manually prepare them? Thanks.

[TVM Discuss] [Development/RFC] [RFC][BYOC] Data Calibration Flow

2020-06-25 Thread Zhi via TVM Discuss
cc @anijain2305 as well --- [Visit Topic](https://discuss.tvm.ai/t/rfc-byoc-data-calibration-flow/7099/3) to respond.

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-11 Thread Zhi via TVM Discuss
+1 for making fp32 the default, as fp64 may not be that useful and it could increase the memory footprint and reduce performance (i.e., occupying more SIMD lanes). I also agree that we can make float more explicit.
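For example, with explicit dtypes there is no ambiguity about what "float" means; a minimal Relay sketch:

```python
from tvm import relay

# Explicit dtypes make the precision unambiguous; fp32 avoids the extra
# memory footprint and SIMD-lane cost of fp64 mentioned above.
x32 = relay.var("x", shape=(8,), dtype="float32")
x64 = relay.var("y", shape=(8,), dtype="float64")
print(x32.type_annotation.dtype, x64.type_annotation.dtype)  # float32 float64
```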

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-11 Thread Zhi via TVM Discuss
Yeah, I thought about positional ordering as well, but it looks like passing variables might be safer. For a CSourceModule external codegen we generate a wrapper like `float* a = const_0;`, and `const_0` would need to be produced by the initializer later. So we would need a name for it anyway.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-11 Thread Zhi via TVM Discuss
BTW, we will need to have the variables as well, i.e. %x1, %x2, %x3, as I mentioned above. This is because we need to know which variable an NDArray should be assigned to. --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/29) to respond.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-11 Thread Zhi via TVM Discuss
Yeah, let me give it a try. --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/28) to respond.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-11 Thread Zhi via TVM Discuss
Yeah, I would prefer C1 or C2. C2 was pretty much what I was doing. --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/26) to respond.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-11 Thread Zhi via TVM Discuss
Yeah, I think I didn't make it very clear. The problem is that we may have multiple subgraphs, and each of them may have "var_name: NDArray" pairs. I was trying to have just one `ModuleInitWrapper` take charge of the initialization of engines for all subgraphs so that users don't need to o...
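A hypothetical illustration of those "var_name: NDArray" pairs; the subgraph and constant names below are made up for the sketch:

```python
import numpy as np
import tvm

# Each external subgraph keeps a map from its constant variable names to the
# weights a single ModuleInitWrapper-style initializer would bind before
# creating the engines.
consts_by_subgraph = {
    "dnnl_0": {"dnnl_0_const_0": tvm.nd.array(np.ones((4, 4), "float32"))},
    "dnnl_1": {"dnnl_1_const_0": tvm.nd.array(np.zeros((4,), "float32"))},
}
for subgraph, consts in consts_by_subgraph.items():
    print(subgraph, list(consts))
```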

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-10 Thread Zhi via TVM Discuss
I thought about an array as well. Passing an array to initialize is relatively simple. The trickier part is packing the data and passing it around using a PackedFunc. --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/19) to respond.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-10 Thread Zhi via TVM Discuss
cc @junrushao1994 as well --- [Visit Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/17) to respond.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-10 Thread Zhi via TVM Discuss
Here is the draft PR: https://github.com/apache/incubator-tvm/pull/5770 We may need to use a Map to save the variable-to-constant/NDArray mapping. Should we move `ModuleInitWrapper` out of runtime, because it otherwise needs to have Map in the runtime namespace? I used `SourceMetadataModule`...

[TVM Discuss] [Development] Add the document for TVMDSOOp

2020-06-09 Thread Zhi via TVM Discuss
I think we actually need two things. One is thinking about how we should enable the tests to make sure other changes in TVM don't break this functionality. The other is adding an official tutorial. There are examples under docs/dev. You can probably take a look at them and add it there. Pl...

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-06-03 Thread Zhi via TVM Discuss
I am not sure if the clarification of the packaging part is clear enough, but there is actually a potential problem. The goal is to be able to conveniently assemble code and metadata separately from the frontend in a modular way. The generated artifact is intended to be usable by AOT, the graph runtim...

[TVM Discuss] [Development/RFC] [RFC] [ETHOSN] Arm Ethos-N integration

2020-05-15 Thread Zhi via TVM Discuss
@Leo-arm Thanks for the proposal and the interest in BYOC. I have a few questions: 1) Are you using the CSourceModule runtime/serialization or something different? 2) Is the codegen toolchain ACL, and do you plan to set up CI for testing? I see there are several stages for testing.

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-05-05 Thread Zhi via TVM Discuss
@tqchen Thanks for the comment and for sharing your thoughts. Yes, the fundamental problem here is the serialization of code and weights. Code is relatively easy to handle; weights are the real problem. I agree that a JSON runtime introduces another layer of abstraction for the graph, which the curren...

[TVM Discuss] [Development/RFC] [BYOC][runtime] JSON runtime for BYOC

2020-05-04 Thread Zhi via TVM Discuss
We have now built the infra for Bring-Your-Own-Codegen. For demonstration purposes, a simple CSourceModule-style codegen and runtime is used for ccompiler and dnnl (now called oneDNN). The CSourceModule runtime works reasonably well on small examples and is easy to understand. However, it...

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
I have another thought on this: how about just putting this one in backend/utils.h, since the current usage would be for the code under there? For general passes it might be different though (like to_a_norm_form, to_cps, PE, etc.).

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
To be honest, among C0-C3 I would not want to introduce ANF to codegen. This means we would either do ANF on the whole program or run the pass internally in the extern codegen to convert it. If we run it on the whole program, I think some passes that work on the DFG would not work well, or...

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
ahh, I didn't notice we have this one. Thanks. --- [Visit Topic](https://discuss.tvm.ai/t/missing-memoization-in-exprfunctor/6334/12) to respond.

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
Yeah, I am not a big fan of introducing this base class either, as I think the only duplicated code would really just be the caching map. If you are concerned about those 10 LOCs, I can just do it this way: I can remove them and replace them by calling Functor::VisitExpr(e...
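A pure-Python sketch of the pattern, not TVM's actual `ExprFunctor`, showing that the shared state really is just the caching map:

```python
# Nodes shared in a dataflow graph are visited once instead of repeatedly,
# because results are remembered in the memo map.
def count_nodes(expr, memo=None):
    memo = {} if memo is None else memo
    key = id(expr)
    if key in memo:
        return memo[key]
    children = getattr(expr, "args", [])  # assumption: Call-like nodes expose .args
    result = 1 + sum(count_nodes(c, memo) for c in children)
    memo[key] = result
    return result
```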

[TVM Discuss] [Development/RFC] Slice_like can't be constant folded

2020-04-03 Thread Zhi via TVM Discuss
I am not sure. But I sort of remember that strided_slice may also need to change `begin` and `end` into exprs for dynamic shapes. @kevinthesun and @yongwww can comment more on this. --- [Visit Topic](https://discuss.tvm.ai/t/slice-like-cant-be-constant-folded/6206/2) to respond.

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-03-23 Thread Zhi via TVM Discuss
Thanks for the clarification. I think this change makes sense to me. --- [Visit Topic](https://discuss.tvm.ai/t/pytorch-frontend-graph-input-names-can-change-using-loaded-torchscript/6055/7) to respond.

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-03-23 Thread Zhi via TVM Discuss
The input names are really annoying. I think one use case of the name-to-shape dict is to avoid the wrong ordering of the inputs. How hard is it for users to supply the inputs in the correct order? And is it possible to connect the names after _run_jit_passes?
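A sketch of how this looks with today's `from_pytorch` API, where the supplied names are bound to the traced graph inputs by position; the exact interface postdates this thread, so treat it as an assumption:

```python
import torch
import tvm
from tvm import relay

model = torch.nn.ReLU().eval()
scripted = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

# Only the order of the (name, shape) pairs has to match the traced inputs;
# the name strings themselves are chosen by the user.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 224, 224))])
print(mod["main"].params[0].name_hint)  # -> input0
```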

Re: [apache/incubator-tvm] [RFC] Enhance TensorFlow Frontend Control Flow Support (#4969)

2020-02-28 Thread Zhi
Yes, @yongwww had one months back: https://github.com/apache/incubator-tvm/pull/4312 -- https://github.com/apache/incubator-tvm/issues/4969#issuecomment-592746247

Re: [apache/incubator-tvm] [RFC] Enhance TensorFlow Frontend Control Flow Support (#4969)

2020-02-28 Thread Zhi
Ahh, one more reminder: for all these models, we will have an OOM problem for pretty printing after the ANF pass. It is very likely because recursively visiting the AST saves all the intermediate results.

Re: [apache/incubator-tvm] [RFC] Enhance TensorFlow Frontend Control Flow Support (#4969)

2020-02-28 Thread Zhi
Just a reminder: to support these models we need some patches for tensor array as well. mask_rcnn seems to require some more debugging.

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.0.rc2 (#4443)

2019-11-28 Thread Zhi
+1 -- https://github.com/apache/incubator-tvm/issues/4443#issuecomment-559613402

[TVM Discuss] [Development/RFC] Bring Your Own Codegen to TVM

2019-10-28 Thread Zhi via TVM Discuss
@jonso Thanks for making these points, and I am very glad to work together. Most of the questions are answered by @comaniac. One thing is that putting extern in the target string might not be sufficient because 1) we need to change the way the target is parsed now, and 2) what if there are multiple ta...

[TVM Discuss] [Development/RFC] Bring Your Own Codegen to TVM

2019-10-26 Thread Zhi via TVM Discuss
# Bring your own codegen to TVM + Graph Partitioning
The goal is to come up with the right Relay subgraph data structure/abstraction so that we can more conveniently allow third-party library and hardware vendors to bring their own codegen tools to TVM. This RFC involves design and implementati...
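A minimal sketch of the annotate/merge/partition flow, using the pass names that eventually landed in the codebase rather than this RFC's draft API:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 32), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Annotate ops for an external codegen, merge supported regions, then
# partition them into external functions.
seq = tvm.transform.Sequential([
    relay.transform.AnnotateTarget("dnnl"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
partitioned = seq(mod)
```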

Re: [dmlc/tvm] [VOTE] Add "Organizations contributing using and contributing to TVM" Section to Community Webpage (#4162)

2019-10-20 Thread Zhi
+1 -- https://github.com/dmlc/tvm/issues/4162#issuecomment-544302489

Re: [dmlc/tvm] [RFC][RUNTIME] Introduce new object protocol. (#4115)

2019-10-14 Thread Zhi
Merged #4115 into master. -- https://github.com/dmlc/tvm/pull/4115#event-2712622451

Re: [dmlc/tvm] [RFC][RUNTIME] Introduce new object protocol. (#4115)

2019-10-14 Thread Zhi
@tqchen thanks. This is now merged. -- https://github.com/dmlc/tvm/pull/4115#issuecomment-542047511

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-09-01 Thread Zhi
# TVM Monthly - August 2019: https://discuss.tvm.ai/t/tvm-monthly-august-2019 -- https://github.com/dmlc/tvm/issues/2623#issuecomment-527019813

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-31 Thread Zhi
Closed #3594. -- https://github.com/dmlc/tvm/issues/3594#event-2523847252

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-31 Thread Zhi
#3647 -- https://github.com/dmlc/tvm/issues/3594#issuecomment-516925996

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-29 Thread Zhi
The serialization itself doesn't have much to do with quantization. If a quantized model needs new opcodes in the VM, we need to introduce them first and then extend the serialization/deserialization to support these instructions.

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-26 Thread Zhi
I feel heterogeneous execution will mainly be related to memory management in the VM. We don't need to encode any information in the VM for compilation and codegen. I think we probably need to handle `AllocTensor` a little differently, e.g. making it device-aware.

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Zhi
@tqchen My bad. The APIs starting with `Serialize` and `Deserialize` are actually not exposed. -- https://github.com/dmlc/tvm/issues/3594#issuecomment-514012903

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Zhi
@icemelon9 The length is mainly for a sanity check before we decode the instructions. We could remove it. There could be multiple fields with variable length; I thought we should always have a field among the fixed fields to indicate the length of the variable one, is this right? For example, https:...

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Zhi
@icemelon9 We probably don't need to have the length for each variable-length field because we should be able to derive it from the fixed fields? That means we usually put its length as a field of the instruction, right?

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Zhi
@MarisaKirisame I think we need it to make deserialization easier. Otherwise, we may need many checks. -- https://github.com/dmlc/tvm/issues/3594#issuecomment-513988425

Re: [dmlc/tvm] [RFC][relay][vm] Relay virtual machine serialization (#3594)

2019-07-22 Thread Zhi
@icemelon9 Yeah, thanks. Putting the `length` before the field with variable length seems reasonable. -- https://github.com/dmlc/tvm/issues/3594#issuecomment-513987982
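A generic sketch of that layout, not TVM's actual byte format: the fixed fields come first, then the length, then the variable-length field it guards:

```python
import struct

def encode(opcode, dst, args):
    # Fixed fields (opcode, dst) plus the length, followed by the variable part.
    header = struct.pack("<iiq", opcode, dst, len(args))
    return header + struct.pack(f"<{len(args)}q", *args)

def decode(buf):
    # The length read from the fixed section tells us how much to decode next.
    opcode, dst, n = struct.unpack_from("<iiq", buf, 0)
    args = struct.unpack_from(f"<{n}q", buf, struct.calcsize("<iiq"))
    return opcode, dst, list(args)

print(decode(encode(7, 1, [10, 20, 30])))  # -> (7, 1, [10, 20, 30])
```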

Re: [dmlc/tvm] [RFC][Quantization] Designing and lowering of quantized ops (#3512)

2019-07-08 Thread Zhi
Is this ready for review? Have we converged on the design in the quantization RFC? -- https://github.com/dmlc/tvm/pull/3512#issuecomment-509430172

Re: [dmlc/tvm] [RFC][Frontend] Return module for Relay frontend converter (#3346)

2019-06-12 Thread Zhi
@icemelon9 #3353 is the draft I haven't finished yet. -- https://github.com/dmlc/tvm/issues/3346#issuecomment-501150175

Re: [dmlc/tvm] [RFC][Frontend] Return module for Relay frontend converter (#3346)

2019-06-11 Thread Zhi
Yeah, this is exactly what I think as well. -- https://github.com/dmlc/tvm/issues/3346#issuecomment-501072122

Re: [dmlc/tvm] [RFC][Relay] Feature Manager (#3236)

2019-05-23 Thread Zhi
Yes, I agree this is annoying. It looks like we might need to introduce some metadata for a pass. Usually when we run sequential passes, we may need to consider preserving information from the updated passes and also validating whether we can proceed. We should think about it more when we start resolvi...

Re: [dmlc/tvm] [RFC][Relay] Pass API Discussion (#3202)

2019-05-17 Thread Zhi
Sounds like a good plan. I think main is currently used as the entry function. -- https://github.com/dmlc/tvm/issues/3202#issuecomment-493577403

Re: [dmlc/tvm] [RFC][Relay] API Naming in Pass Manager (#3202)

2019-05-16 Thread Zhi
Why is `transform` a better namespace than `pass`? I am fine with `Sequential` as it is also used by PyTorch. -- https://github.com/dmlc/tvm/issues/3202#issuecomment-493266455

Re: [dmlc/tvm] [RFC][Relay] Port Relay passes to pass manager (#3146)

2019-05-10 Thread Zhi
For anyone who is interested in this, please comment. We appreciate your thoughts and suggestions. @MarisaKirisame and I will start working on it once we get some cycles.

[dmlc/tvm] [RFC][Relay] Port Relay passes to pass manager (#3146)

2019-05-07 Thread Zhi
# Porting Relay Passes to Pass Manager
As the pass manager framework has been merged, we should start to move passes to the pass manager. This RFC proposes the plans to move the Relay passes.
## Proposal (take constant folding as an example)
The proposal needs to solve problems from both the bac...
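As an illustration, constant folding invoked through the pass manager, written against today's API rather than the exact interface proposed here:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(2,), dtype="float32")
body = relay.add(x, relay.add(relay.const(1.0), relay.const(1.0)))
mod = tvm.IRModule.from_expr(relay.Function([x], body))

# FoldConstant is applied as a managed pass instead of a free function;
# the constant subexpression 1 + 1 is folded into a single constant.
with tvm.transform.PassContext(opt_level=2):
    mod = relay.transform.FoldConstant()(mod)
print(mod["main"])
```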

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-18 Thread Zhi
@masahi I see, thanks. Another option is probably using a copy operator if there are duplicates. -- https://github.com/dmlc/tvm/issues/3039#issuecomment-484385353

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
BTW, I am not certain that stopping fusion of the return tuple will fully solve the problem, because it looks to me that we will still have two identical tensors in the tuple, right? Am I missing something?

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
@masahi Can we prevent passing duplicated tensors instead? It looks like we otherwise need to change all schedules for all targets in topi, right?

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
@junrushao1994
> @tqchen where is %2?
There might be some code emitted, but the idea is to show the problem when dealing with duplicate values in return tuples.
> why is the example bad for codegen
The output tensor is scheduled twice in compute_engine here: https://github.com/dmlc/tvm/blob/552d4a...
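A minimal Relay sketch of the duplicate-output situation, with a return tuple holding the same tensor twice; this is a hypothetical example, not the code from the thread:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
y = relay.add(x, x)

# The fused function returns the same tensor twice, which is what makes the
# downstream schedule awkward; a copy op on one element is the workaround
# floated above.
func = relay.Function([x], relay.Tuple([y, y]))
print(func)
```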

Re: [dmlc/tvm] [Vote] Deprecate Python2 Support (#2994)

2019-04-09 Thread Zhi
+1 -- https://github.com/dmlc/tvm/issues/2994#issuecomment-481453561

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-08 Thread Zhi
+1 -- https://github.com/dmlc/tvm/issues/2973#issuecomment-481032819

Re: [dmlc/tvm] [RFC] Decompile TensorFlow Control Flow Primitives to Relay (#2812)

2019-03-24 Thread Zhi
closed by #2830 -- https://github.com/dmlc/tvm/issues/2812#issuecomment-475979226

Re: [dmlc/tvm] [RFC] Decompile TensorFlow Control Flow Primitives to Relay (#2812)

2019-03-24 Thread Zhi
Closed #2812. -- https://github.com/dmlc/tvm/issues/2812#event-2224962041

Re: [dmlc/tvm] [Relay][RFC] Improve testing infrastructure of Relay (#2884)

2019-03-23 Thread Zhi
+1 for refactoring. BTW, we probably also need to have some discussion about adding some regression tests to the CI pipeline, because some passes could noticeably affect perf. But this can be a separate issue.