[Apache TVM Discuss] [Questions] [Quantization] How to add new quantization method for TVM

2020-11-03 Thread fredjon via Apache TVM Discuss
Hi electriclilies, I'm glad to hear that! It would be great if the new quantization framework makes it easy to add new methods. Note that the quantization method depends on the graph structure. Thanks a lot. --- [Visit Topic](https://discuss.tvm.apache.org/t/quantization-how-to-add-new-quanti

[Apache TVM Discuss] [Questions] How to match the pattern of a function in Relay?

2020-11-03 Thread Matthew Brookhart via Apache TVM Discuss
Like a pattern that exists across multiple functions? Yes, we'll need a FunctionPattern. --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-match-the-pattern-of-a-function-in-relay/8283/21) to respond.

[Apache TVM Discuss] [Questions] How to match the pattern of a function in Relay?

2020-11-03 Thread moderato via Apache TVM Discuss
@mbrookhart So I tried the way you suggested and I'm able to rewrite the pattern inside a function. I wonder if it's also possible to partition and rewrite a pattern across multiple functions? I suspect this would need support from the proposed `FunctionPattern`. --- [Visit Topic](
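
For reference, here is a minimal sketch of rewriting a pattern inside a single function body with the existing dataflow pattern API; the `AddSelfToMul2` callback and the toy function are made up for illustration, and matching across multiple functions would still need the proposed `FunctionPattern`:

```python
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard, rewrite

class AddSelfToMul2(DFPatternCallback):
    """Illustrative callback: rewrite add(x, x) into multiply(x, 2)."""
    def __init__(self):
        super().__init__()
        self.x = wildcard()
        # Reusing the same wildcard forces both operands to match the same expression.
        self.pattern = is_op("add")(self.x, self.x)

    def callback(self, pre, post, node_map):
        x = node_map[self.x][0]
        return relay.multiply(x, relay.const(2.0))

x = relay.var("x", shape=(4,), dtype="float32")
func = relay.Function([x], relay.add(x, x))
# rewrite() only walks this one function body, which is the current limitation.
new_body = rewrite(AddSelfToMul2(), func.body)
print(new_body)
```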

[Apache TVM Discuss] [Questions] 【TIR】After auto tuning, what other optimizations based on tir will be done?

2020-11-03 Thread Cody H. Yu via Apache TVM Discuss
What @matt-arm pointed out is correct. In addition, your figure is not exactly accurate: relay.build actually goes through the same flow as the AutoTVM/TE schedule. When calling relay.build, it lowers each operator to TE according to the Relay op strategy. The op strategy will select a TE compute/schedule for
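
As a hedged illustration of that flow (`mod`, `params`, and `tuning.log` below are placeholders for your own module, parameters, and AutoTVM log), relay.build lowers through the op strategy either way; wrapping it in apply_history_best just lets the strategy pick the tuned schedules:

```python
import tvm
from tvm import relay, autotvm

# Placeholder inputs: `mod` / `params` come from a frontend importer,
# "tuning.log" from a previous AutoTVM run.
with autotvm.apply_history_best("tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        # Each Relay op is lowered to a TE compute/schedule chosen by the
        # op strategy, then lowered further to TIR and code-generated.
        lib = relay.build(mod, target="cuda", params=params)
```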

[Apache TVM Discuss] [Questions] [Quantization] How to add new quantization method for TVM

2020-11-03 Thread Lily Orth-Smith via Apache TVM Discuss
I am working on the new quantization framework right now -- it is currently in progress. Our rough timeline is an RFC in three weeks to a month from now, and then upstreaming the finalized quantization framework in the two weeks after that. It would be great to get your feedback on the final

[Apache TVM Discuss] [Questions] 【TIR】After auto tuning, what other optimizations based on tir will be done?

2020-11-03 Thread Matt Barrett via Apache TVM Discuss
I'm no expert on this, but if you take a look in build_module.py, you can see which TIR passes are run after the schedules have been lowered. I'll paste them here for convenience:
```
tvm.tir.transform.InjectPrefetch(),
tvm.tir.transform.StorageFlatten(64, instrument_bound_checkers),
tvm.tir.t
```
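
If you want to poke at these yourself, here is a hedged sketch (assuming `mod` is an IRModule you already obtained, e.g. from tvm.lower) that runs a couple of the standalone TIR passes manually:

```python
import tvm

# `mod` is assumed to be an IRModule, e.g. mod = tvm.lower(s, [A, B]).
seq = tvm.transform.Sequential([
    tvm.tir.transform.Simplify(),
    tvm.tir.transform.RemoveNoOp(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)
```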

[Apache TVM Discuss] [Questions] [Quantization] How to add new quantization method for TVM

2020-11-03 Thread fredjon via Apache TVM Discuss
Hi developers: How can I add a new quantization method to TVM? Is there a tutorial? Could the community experts help clarify this question? I would highly appreciate your response. Thanks a lot! Best Regards, Fred --- [Visit Topic](https://discuss.tvm.apache.org/t/quantization
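
For context, the current automatic quantization flow in Relay looks roughly like the sketch below (`mod` and `params` are placeholders for a model imported via a Relay frontend); adding a new method today generally means extending the passes under python/tvm/relay/quantize, which is what the upcoming framework aims to simplify:

```python
from tvm import relay

# `mod` and `params` are placeholders for an imported model.
with relay.quantize.qconfig(calibrate_mode="global_scale",
                            global_scale=8.0,
                            skip_conv_layers=[0]):
    qmod = relay.quantize.quantize(mod, params=params)
```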

[Apache TVM Discuss] [Questions] How can I test the performance of a single operator?

2020-11-03 Thread haozech via Apache TVM Discuss
Hi guys, I'm new to TVM and I was trying to test the performance of a single operator on an NVIDIA GPU. So far I found the doc about how to benchmark models (https://github.com/apache/incubator-tvm/blob/main/apps/benchmark/README.md), and the doc about [Tuning High Performance Convolution on NVIDIA
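
One way to do this is to build the operator on its own and time it with time_evaluator; below is a hedged sketch with a made-up elementwise op, where the sizes and schedule are only for illustration:

```python
import numpy as np
import tvm
from tvm import te

# Toy single operator: B[i] = A[i] * 2.
n = 1 << 20
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=256)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

func = tvm.build(s, [A, B], target="cuda")
ctx = tvm.gpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), ctx)
b = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)
# time_evaluator runs the kernel `number` times and reports the mean latency.
timer = func.time_evaluator(func.entry_name, ctx, number=100)
print("mean time: %.3f us" % (timer(a, b).mean * 1e6))
```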

[Apache TVM Discuss] [Questions] After auto tuning, what other optimizations based on tir will be done?

2020-11-03 Thread sqchao via Apache TVM Discuss
I am new to TVM. From my point of view, compute + schedule + auto-tuning convert the Relay IR to Tensor IR. This process includes many hardware-specific optimizations related to schedule primitives. After auto-tuning, the Tensor IR is generated. Q1: Besides AutoTVM (including schedules), what
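
To make that lowering step concrete, here is a small sketch (the names n, A, and B are illustrative) that lowers a TE compute + schedule to TIR so the result of the conversion can be inspected:

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
# tvm.lower runs the TIR pass pipeline and returns an IRModule
# containing the generated TIR PrimFunc.
print(tvm.lower(s, [A, B], simple_mode=True))
```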