Hello! I made a very simple test to explore whether the `data_alignment` argument
affects the performance of the intrinsic function:
```python
from __future__ import absolute_import, print_function
import tvm
from tvm import te
import numpy as np
def intrin_gemm(m, n, p, alignment=64):
    # `te.pla` is presumably `te.placeholder`; the shape is an assumption,
    # since the rest of the snippet is cut off here.
    a = te.placeholder((m, p), name="a")
```
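For reference, `data_alignment` usually enters the picture through `tvm.tir.decl_buffer` when declaring the buffers the intrinsic will see. A minimal sketch along those lines (function name, shapes, and dtypes are assumptions, not the original code):

```python
# Minimal sketch, assuming the alignment is forwarded to the buffer declarations
# consumed by the tensor intrinsic; this is not the poster's full intrin_gemm.
import tvm
from tvm import te


def gemm_buffers(m, n, p, alignment=64):
    a = te.placeholder((m, p), name="a")
    b = te.placeholder((p, n), name="b")
    # data_alignment hints the byte alignment of the buffer's data pointer;
    # offset_factor constrains the element offset the intrinsic may assume.
    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="A_buf",
                             data_alignment=alignment, offset_factor=1)
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="B_buf",
                             data_alignment=alignment, offset_factor=1)
    return Ab, Bb
```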
---
Oh, yeah, forget my stupid question :sweat_smile:
---
Isn't `p0` the weight of conv2d and `p1` the bias?
---
Hi,
I am a beginner with TVM.
I compiled the VGG19 model and visualized it with Netron.

*The original model is just conv->relu->pooling->...*
I know TVM does some operator fusion, but I don't know wh…
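For reference, a minimal sketch of the compile step involved (`relay.testing.vgg` is only a self-contained stand-in for however the model was actually imported); operator fusion happens automatically inside `relay.build`:

```python
# Minimal sketch, not the poster's script: compile VGG19 with Relay.
import tvm
from tvm import relay
from tvm.relay.testing import vgg

# Stand-in workload; in practice the model would come from a frontend importer.
mod, params = vgg.get_workload(num_layers=19, batch_size=1)

# The FuseOps pass runs inside relay.build, which is why conv2d, bias_add and
# relu show up as a single fused function in the graph that Netron displays.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```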
[quote="matt-arm, post:2, topic:6867"]
it’s worth mentioning that one of the reasons all the codegens accept Relay
rather than TIR is because BYOC is implemented in Relay
[/quote]
I agree with you that the current infrastructure seems to be limited to Relay.
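For instance, the BYOC partitioning flow itself is a sequence of Relay-to-Relay passes; a rough sketch (the graph and names here are assumptions, using the upstream "dnnl" external codegen as an example):

```python
# Rough sketch: BYOC partitioning operates on a Relay IRModule, so the external
# codegen is handed Relay functions rather than TIR.
import tvm
from tvm import relay
import tvm.relay.op.contrib.dnnl  # registers which ops the "dnnl" codegen claims

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
out = relay.nn.relu(relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1)))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

mod = relay.transform.AnnotateTarget("dnnl")(mod)   # mark ops the codegen supports
mod = relay.transform.MergeCompilerRegions()(mod)   # merge them into regions
mod = relay.transform.PartitionGraph()(mod)         # split regions into external functions
```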
But tqchen did mention:
[quote="
---
Currently, we use the CUDA schedule (and op) on ROCm:
https://github.com/apache/incubator-tvm/blob/2cd987d92724be0f859bfb624ce797f9c70167bb/python/tvm/relay/op/strategy/rocm.py#L47-L50