Sure, I have opened a PR (#5131). I hope it helps. Thanks.
---
This is quite a valuable topic; it can help us figure out what kind of optimization-related information we can get from the TVM IR itself, after all the LLVM optimization passes have been applied. For x86 conv2d, my observation is that the work LLVM's unrolling is doing can be implemented in the TVM schedule itself.
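A rough sketch of what doing that unrolling explicitly in a TVM schedule can look like (a simple element-wise example with made-up sizes and split factors, not the actual conv2d schedule):

```python
import tvm
from tvm import te

n = 4096
A = te.placeholder((n,), name='A', dtype='float32')
B = te.compute((n,), lambda i: A[i] * 2.0 + 1.0, name='B')

s = te.create_schedule(B.op)
outer, inner = s[B].split(B.op.axis[0], factor=32)
inner_o, inner_i = s[B].split(inner, factor=8)
s[B].vectorize(inner_i)   # vectorize the innermost lanes
s[B].unroll(inner_o)      # unroll explicitly in the schedule instead of relying on LLVM

# Inspect the lowered IR to confirm the loop body is unrolled and vectorized.
print(tvm.lower(s, [A, B], simple_mode=True))
```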
This works nicely, and it is trivial to implement. Here's a complete example
for reference:
```python
import torch
import topi
import tvm
from tvm import te
from tvm.contrib import dlpack


def _codegen_function(d1, d2, name):
    bsz = te.var('bsz')  # bsz and d3 can be variables without impact on performance
```
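The example defines the computation with `te` and pulls in `dlpack`, which suggests the compiled kernel is meant to be called from PyTorch. A minimal sketch of that pattern, purely for context (the element-wise `fast_exp` compute, shapes, schedule, and function name below are my own placeholders, not the original author's code):

```python
import torch
import topi
import tvm
from tvm import te
from tvm.contrib import dlpack

def _fast_exp_func(d1, d2, name):
    # Placeholder sketch: element-wise fast_exp over a (bsz, d1, d2, d3) tensor,
    # with bsz and d3 left symbolic so one compiled kernel serves many shapes.
    bsz = te.var('bsz')
    d3 = te.var('d3')
    X = te.placeholder((bsz, d1, d2, d3), name='X', dtype='float32')
    Y = topi.fast_exp(X)
    s = te.create_schedule(Y.op)
    f = tvm.build(s, [X, Y], target='llvm', name=name)
    # Wrap the compiled TVM function so it accepts torch tensors via dlpack.
    return dlpack.to_pytorch_func(f)

# Usage: the wrapped function writes into a pre-allocated output tensor.
fast_exp = _fast_exp_func(16, 32, 'my_fast_exp')
x = torch.randn(2, 16, 32, 8)
y = torch.empty_like(x)
fast_exp(x, y)
```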
Thank you. Can you also register `fast_tanh`?
Also, a better way to use the FastMath pass is shown here:
https://github.com/apache/incubator-tvm/blob/a5d7bdab8771430be052c22d07ebe2df6b320be4/tests/python/relay/test_pass_fast_math.py#L32-L33
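For anyone who wants to try it, a minimal sketch of invoking the pass explicitly, in the spirit of the linked test (the shape below is a placeholder):

```python
import tvm
from tvm import relay

# A one-op program: exp(x).
x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.exp(x)))

# Run the FastMath pass explicitly; it rewrites exp into fast_exp.
fast_mod = relay.transform.FastMath()(mod)
print(fast_mod.astext())  # the printed IR should now use fast_exp
```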
---
Yes, it's probably an oversight that there is no schedule registration for `fast_exp`. Your solution looks good. Could you send a PR to fix this?
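For reference, a sketch of what such a registration could look like, placed alongside the other element-wise ops; the helper name and file are assumptions on my part, so check the actual PR for the real change:

```python
# In python/tvm/relay/op/_tensor.py (sketch; the helper name is an assumption):
from .op import register_broadcast_schedule

# Register the generic element-wise schedule for the fast math ops so that
# relay.build can lower them, just like the plain exp/tanh ops.
register_broadcast_schedule("fast_exp")
register_broadcast_schedule("fast_tanh")
```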
---
I encountered a problem with the Relay pass `FastMath()` and `topi.fast_exp`.
I made a test Relay program with `exp` and built it with
```python
with relay.build_config(opt_level=4):
    graph, lib, params = relay.build(mod, target, params=params)
```
to enable and build `Op(fast_exp)`, but it could not be built.
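For completeness, a self-contained version of such a test program might look like this (the shape and target below are placeholders):

```python
import tvm
from tvm import relay

# Hypothetical minimal reproducer: a single exp op.
x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.exp(x)))

target = "llvm"
# opt_level=4 enables the FastMath pass, which rewrites exp into fast_exp,
# so this build exercises the fast_exp op.
with relay.build_config(opt_level=4):
    graph, lib, params = relay.build(mod, target)
```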
In my experience, plain loop unrolling has always been a blunt hammer and is not useful in the general case, so turning it off by default in LLVM makes sense. Targeted unrolling combined with vectorization and other loop optimizations is more beneficial.
I hadn't realized that LLVM turned on plain loop unrolling