Merged #5790 into master.
https://github.com/apache/incubator-tvm/pull/5790#event-3439973095
Just to warm this up a bit. While graph input debug names can change, PyTorch
does keep the stem stable. This is used e.g. for `script_module.code` and to
give an error for missing inputs (try `script_module()`).
https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.cpp#L735
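A minimal sketch of what this looks like from Python (the `Add` module here is just an illustration, not code from the PR):

```python
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

sm = torch.jit.script(Add())
# The unique debug names may get numeric suffixes (e.g. "x.1"),
# but the stem before the "." stays stable and matches the Python
# argument name, which is also what `sm.code` prints.
for inp in sm.graph.inputs():
    print(inp.debugName(), "->", inp.debugName().split(".")[0])
```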
It seems that on Windows `sizeof(long)=4`
([here](https://docs.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=vs-2019)),
while it is typically 8 on other platforms. Since `longlong3` and `longlong4`
have been supported since CUDA 10, maybe you can try replacing
[L247](https://github.com/apach
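A quick way to see the difference from Python, independent of TVM (just `ctypes` from the standard library):

```python
import ctypes

# On 64-bit Windows (LLP64) `long` is 4 bytes, while on typical 64-bit
# Linux/macOS (LP64) it is 8 bytes; `long long` is 8 bytes on both.
print(ctypes.sizeof(ctypes.c_long))      # 4 on Windows, 8 on Linux/macOS
print(ctypes.sizeof(ctypes.c_longlong))  # 8 everywhere
```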
@FrozenGene @giuseros If QNN legalization is causing issues, we can remove QNN
legalization for ARM CPUs altogether and move the logic to alter op layout.
Alter op layout might become more complicated (e.g., we might have to handle
uint8 x int8 input and kernel dtypes in alter op layout now). Just
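For context, as far as I understand, the usual uint8 → int8 legalization only shifts the data and its zero point by 128, which leaves the dequantized values unchanged; a small numpy illustration (not TVM code):

```python
import numpy as np

# Shifting uint8 data and its zero point by 128 gives an int8 tensor with the
# same dequantized values: scale * (x - zp) == scale * ((x - 128) - (zp - 128)).
data_u8 = np.array([0, 5, 127, 255], dtype=np.uint8)
zp_u8, scale = 128, 0.05

data_i8 = (data_u8.astype(np.int16) - 128).astype(np.int8)
zp_i8 = zp_u8 - 128

assert np.allclose(scale * (data_u8.astype(np.int32) - zp_u8),
                   scale * (data_i8.astype(np.int32) - zp_i8))
```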
I agree with this. Defaulting to float64 will kill perf for most beginner users,
which is not friendly.
Please join us to welcome @wpan11nv as a new reviewer :) He has been quite
active in contributing to the CUDA backend and has reviewed a lot of non-trivial
code related to tensor cores and warp-level parallelism.
- [Commits
History](https://github.com/apache/incubator-tvm/commits?author=wpan11nv)
- [Code
R
Something along that direction. In the meantime, it seems we are converging on:
- convert the default to fp32 and add a warning
- fix the `float` occurrences to use fp32
Ah, I see. That makes sense. Then how about putting it in config.cmake as
something like `SET(STRICT_MODE ON)`?
I actually meant a `TVM_STRICT_MODE` that changes the `"float"` handling behavior
to throw directly, rather than intercepting the warnings. This way we can clean up
the use of `"float"` in our own codebase but still allow users to use it
Does `TVM_STRICT_MODE` fail the CI if warnings are thrown? That does not look
sustainable to me, because this is not in a normal logging system, so people can
easily forget about it.
My understanding of how to decide on log messages is that if we hope to
show them to end users, then we should use INFO
Here is another idea:
- "float = float32", but with a warning
- Add an env variable `TVM_STRICT_MODE` to force the usage of `"float"` to throw,
and enable the flag in the CI, so that we fix all the usages in our current
codebase
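A rough sketch of what that could look like; the `normalize_dtype` helper and the exact `TVM_STRICT_MODE` handling below are hypothetical, not existing TVM behavior:

```python
import os
import warnings

def normalize_dtype(dtype: str) -> str:
    """Hypothetical helper: map bare "float" to "float32" with a warning,
    or raise when TVM_STRICT_MODE is set (e.g. in CI)."""
    if dtype == "float":
        msg = 'bare "float" is ambiguous; please write "float32" explicitly'
        if os.environ.get("TVM_STRICT_MODE"):
            raise ValueError(msg)
        warnings.warn(msg)
        return "float32"
    return dtype
```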
Hello all,
I have built TVM with Python bindings on Windows.
Now I'm testing it and have found that some tests fail.
I run `python -m pytest -v tvm_source/tests/python/integration`, and
`test_ewise.py::test_add` fails with:
    def test_add():
        def run(dtype):
            ...
        run("float
> Hi @FrozenGene ,
> I gave it another go, but switching legalization based on the strategy seems very
> hard (since we would need the auto-tuner to pick the best data type for us).
>
> So for now, we have to be content with the `_alter_conv2d_layout` workaround and
> try to think a bit more on how we ca
Hello,
I recently stumbled over the fact that `reshape` is typically hard for TVM's
common subexpression elimination pass to work with. This is because the target
shape (which also comes in the attrs) can be a distinct (even if equal) tensor.
In particular, converting reshape from, say, PyTor
So it seems that "float = float32", but with a warning, might be good?
Personally, I had been thinking of a Python warning, so anyone can decide to
treat it as an error / ignore it / ..., but @comaniac, is [this autotvm
warning](https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/recor
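For reference, a plain Python warning lets each user pick the policy themselves; the message pattern below is only a placeholder for whatever the hypothetical dtype warning would say:

```python
import warnings

# Escalate the (hypothetical) dtype warning to a hard error, e.g. in CI:
warnings.filterwarnings("error", message='.*"float" is ambiguous.*')

# Or silence it entirely:
# warnings.filterwarnings("ignore", message='.*"float" is ambiguous.*')
```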