Re: [apache/incubator-tvm] [COMMUNITY] @wpan11nv -> Reviewer (#5790)

2020-06-12 Thread Haichen Shen
Merged #5790 into master. https://github.com/apache/incubator-tvm/pull/5790#event-3439973095

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-06-12 Thread Thomas V via TVM Discuss
Just to warm this up a bit. While graph input debug names can change, PyTorch does keep the stem stable. This is used e.g. for `script_module.code` and to give an error for missing inputs (try `script_module()`). https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.cpp#L735
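A minimal sketch of what "stable stem" means in practice (assuming a recent PyTorch; the `Add` module here is made up for illustration): the debug names returned by `graph.inputs()` may gain a numeric suffix such as `x.1` after passes run, but the stem before the dot stays put.

```python
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

sm = torch.jit.script(Add())
for inp in sm.graph.inputs():
    # Prints "self", "x", "y"; after some passes a suffix like "x.1" can
    # appear, but the stem before the dot remains stable.
    print(inp.debugName())
```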

[TVM Discuss] [Development] [Q] "TVMError: Cannot convert type int64x4 to CUDA type on a L32 platform" - test_ewise.py::test_add fails

2020-06-12 Thread Yanming Wang via TVM Discuss
It seems that on Windows `sizeof(long) == 4` ([here](https://docs.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=vs-2019)), while it is typically 8 on other platforms. Since `longlong3` and `longlong4` are supported as of CUDA 10, maybe you can try replacing [L247](https://github.com/apach
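For reference, a quick way to confirm the size difference from Python (a small sketch, nothing TVM-specific):

```python
import ctypes

# On Windows (LLP64) c_long is 4 bytes; on typical 64-bit Linux/macOS (LP64) it is 8.
print(ctypes.sizeof(ctypes.c_long))
# c_longlong is 8 bytes everywhere, which is why longlong3/longlong4 (CUDA >= 10)
# are the natural mapping for int64 vectors on Windows.
print(ctypes.sizeof(ctypes.c_longlong))
```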

Re: [apache/incubator-tvm] [RFC] Improve quantized convolution performance for armv8 architectures (#5754)

2020-06-12 Thread Animesh Jain
@FrozenGene @giuseros If QNN legalization is causing issues, we can remove QNN legalization for ARM CPUs altogether and move the logic to alter op layout. Alter op layout might become more complicated (e.g. we might now have to handle uint8 x int8 input and kernel dtypes there). Just

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread Bing Xu via TVM Discuss
I agree with this. Defaulting to float64 will kill performance for most beginner users, which is not friendly. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/18) to respond.

[apache/incubator-tvm] [COMMUNITY] @wpan11nv -> Reviewer (#5790)

2020-06-12 Thread Tianqi Chen
Please join us to welcome @wpan11nv as a new reviewer :) He has been quite active contributing to the CUDA backend and has reviewed a lot of non-trivial code related to tensor cores and warp-level parallelism. - [Commits History](https://github.com/apache/incubator-tvm/commits?author=wpan11nv) - [Code R

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread tqchen via TVM Discuss
Something along that direction. In the meanwhile, it seems we are converging on: - convert the default to fp32 and add a warning - fix the occurrences of "float" to use fp32 --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/17) to respond.

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread Cody H. Yu via TVM Discuss
Ah, I see. That makes sense. Then how about putting it in config.cmake as something like `SET(STRICT_MODE ON)`? --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/16) to respond.

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread tqchen via TVM Discuss
I actually meant that `TVM_STRICT_MODE` changes the `"float"` handling behavior to throw directly, rather than intercepting the warnings. This way we can clean up the use of `"float"` in our own codebase but still allow users to use it. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread Cody H. Yu via TVM Discuss
Does `TVM_STRICT_MODE` fail the CI when warnings are thrown? That does not look sustainable to me, because this is not part of a normal logging system, so people can easily forget it. My understanding of how to decide on log levels is that if we hope to show messages to end users, then we should use INFO

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread tqchen via TVM Discuss
Here is another idea: - "float = float32", but with a warning - Add an env variable `TVM_STRICT_MODE` that forces any usage of "float" to throw, and enable the flag in the CI, so that we fix all the usage in our current codebase --- [Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning
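To make the proposal concrete, here is a hypothetical sketch (not TVM's actual implementation; `normalize_dtype` is an invented helper) of how the env variable could gate the behavior:

```python
import os
import warnings

def normalize_dtype(dtype: str) -> str:
    # Treat a bare "float" as "float32" with a warning, unless TVM_STRICT_MODE
    # is set (as it would be in CI), in which case throw right away.
    if dtype == "float":
        if os.environ.get("TVM_STRICT_MODE"):
            raise ValueError('bare "float" is ambiguous; write "float32" explicitly')
        warnings.warn('"float" is interpreted as "float32"; please be explicit')
        return "float32"
    return dtype

print(normalize_dtype("float"))  # warns and returns "float32" outside strict mode
```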

[TVM Discuss] [Development] [Q] "TVMError: Cannot convert type int64x4 to CUDA type on a L32 platform" - test_ewise.py::test_add fails

2020-06-12 Thread Leslie German via TVM Discuss
Hello all, I have built TVM with Python bindings on Windows. Now I'm testing it and found that some tests fail. I run `python -m pytest -v tvm_source/tests/python/integration` and `test_ewise.py::test_add` fails with: def test_add(): def run(dtype): run("float

Re: [apache/incubator-tvm] [RFC] Improve quantized convolution performance for armv8 architectures (#5754)

2020-06-12 Thread Zhao Wu
> Hi @FrozenGene, > I gave it another go, but switching legalization on the strategy seems very > hard (since we would need the auto-tuner to pick the best data type for us). > > So for now, we have to be content with the `_alter_conv2d_layout` workaround and > try to think a bit more about how we ca

[TVM Discuss] [Development] [DISCUSS] pass for merging shape tensors

2020-06-12 Thread Thomas V via TVM Discuss
Hello, I recently stumbled over the fact that `reshape` is typically hard for TVM's common subexpression elimination pass to work with. This is because the target shape (which also comes in the attrs) can be a distinct (even if equal) tensor. In particular, converting reshape from, say, PyTor
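A minimal sketch of why this trips up CSE (assuming a recent TVM build with `tvm.ir.structural_equal`; this only illustrates the problem, not the proposed pass): two shape tensors with identical contents are still two distinct nodes in the graph, so a pass has to prove their equality before it can merge the reshapes that consume them.

```python
import numpy as np
import tvm
from tvm import relay

s1 = relay.const(np.array([3, 4], dtype="int64"))
s2 = relay.const(np.array([3, 4], dtype="int64"))

print(tvm.ir.structural_equal(s1, s2))  # True: the contents are equal
print(s1.same_as(s2))                   # False: they are distinct graph nodes
```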

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-12 Thread Thomas V via TVM Discuss
So it seems that "float = float32", but with a warning, might be good? Personally, I had been thinking of a Python warning, so anyone can decide to treat it as an error / ignore it / ..., but @comaniac, is [this autotvm warning](https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/recor