You can add it here:
https://github.com/apache/tvm/blob/main/python/tvm/relay/op/_tensor_grad.py
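For reference, the entries in that file all follow the same decorator pattern: a function that takes the original call and the adjoint of its output, and returns one gradient expression per input. A minimal, hedged sketch of what a new entry could look like (the zero-gradient body is a placeholder to show the hook's shape, not correct batch_norm math; the guard just avoids double registration):

```
# Sketch of the registration pattern used in _tensor_grad.py. The body
# below returns PLACEHOLDER zero gradients; a real implementation must
# derive the actual batch_norm gradient.
from tvm import relay
from tvm.relay.op import register_gradient

# Only register if no gradient exists yet for this op.
if relay.op.get("nn.batch_norm").get_attr("FPrimalGradient") is None:

    @register_gradient("nn.batch_norm")
    def batch_norm_grad(orig, grad):
        # "orig" is the original call; return one gradient expression per
        # input: data, gamma, beta, moving_mean, moving_var.
        return [relay.zeros_like(arg) for arg in orig.args]
```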
---
I cannot reproduce the results you are getting. For me, the graph runtime and the VM are within 10% of each other when profiling, and both are close to the benchmark results.
Here are some questions that might help you debug this:
- Have you tried running on a different machine?
- Ha
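
For reference, this is roughly how I put the two executors side by side; a minimal sketch in which a one-op ReLU function stands in for the real model and `llvm` for your target:

```
# Hedged sketch: build one Relay module and time it under both the graph
# executor and the VM. The one-op function below stands in for a real model.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
target, dev = "llvm", tvm.cpu(0)
data = tvm.nd.array(np.random.rand(1, 3, 224, 224).astype("float32"), dev)

# Graph executor
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
gmod = graph_executor.GraphModule(lib["default"](dev))
gmod.set_input("data", data)
print(gmod.benchmark(dev))

# Virtual machine
with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target=target)
vm = tvm.runtime.vm.VirtualMachine(vm_exec, dev)
print(vm.benchmark(dev, data, func_name="main"))
```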
---
Hi there, I am trying to get the gradients of some popular models, but it seems that TVM does not currently register a gradient for the `nn.batch_norm` operator. Is there a way to register gradients for unsupported ops?
```
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
    nn.BatchNorm2d(3),  # the op whose gradient is missing
)
```
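For context, this is roughly how the missing gradient surfaces on my end (a sketch building on the snippet above; the input name and shape are illustrative):

```
# Hedged sketch of the failing path: convert the traced model to Relay,
# then request a gradient. Input name and shape are illustrative.
import torch
import tvm
from tvm import relay

inp = torch.randn(1, 3, 32, 32)
scripted = torch.jit.trace(model.eval(), inp)
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 32, 32))])
mod = relay.transform.InferType()(mod)
# Fails while nn.batch_norm has no registered gradient.
grad_fn = relay.transform.gradient(mod["main"], mod=mod, mode="first_order")
```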
@ZephyrSails
I guess you can take a look at
https://discuss.tvm.apache.org/t/can-tvm-split-work-into-different-layers-and-assign-layers-into-different-cores/11161/10?u=popojames
I think this is what you are looking for.
---
My desktop doesn't have such big and little core clusters, so I am not able to reproduce the result.
I did see that performance improves as the number of cores increases. However, small clusters outperforming big clusters still makes no sense to me.
May I kindly ask if there are any thoughts on this?
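One way to dig into this might be to pin TVM's thread pool explicitly to one cluster at a time and compare timings. A hedged sketch, assuming the device is reached over RPC; the host, port, and thread counts are placeholders:

```
# Hedged sketch: pin TVM's runtime thread pool to one cluster at a time
# on a big/little device reached over RPC, then time the same module on
# each cluster. Host/port and thread counts below are placeholders.
from tvm import rpc

remote = rpc.connect("192.168.0.10", 9090)
config_threadpool = remote.get_function("runtime.config_threadpool")

# First argument is the affinity mode: 1 = big cores, -1 = little cores.
# Second argument is the number of worker threads to use.
config_threadpool(1, 4)   # time the module on the big cluster
# ... run the benchmark ...
config_threadpool(-1, 4)  # then repeat on the little cluster
```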