Yes. Have you ever solved this problem?
---
[Visit Topic](https://discuss.tvm.ai/t/problem-start-rpc-server-in-pynq-z1/7115/3) to respond.
This has been addressed here:
https://discuss.tvm.ai/t/optimizing-matrix-multiplication-for-gpu/4212/22?u=ibeltagy
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-save-multiple-compiled-modules-into-one-so-file/6030/2) to respond.
I hit the same error. Are you using PYNQ image v2.4?
---
[Visit Topic](https://discuss.tvm.ai/t/problem-start-rpc-server-in-pynq-z1/7115/2) to respond.
Hi,
I imported the DeepLabV3+ (Xception) model 'xception65_coco_voc_trainval',
downloaded from the TF model zoo
(https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md).
It runs well on CPU but hits an error on GPU.
```
target = tvm.target.cuda()
ctx = tvm.gpu(0)
mo
```
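For context, here is a minimal sketch (not the original poster's full script) of how such a TensorFlow frozen graph is typically imported and built for the CUDA target with Relay. The file name, input tensor name, and input shape are assumptions and should be adjusted to the actual model.
```
# Minimal sketch, assuming the frozen DeepLabV3+ graph is imported with the
# Relay TensorFlow frontend. File name, input name, and shape are assumptions.
import tvm
from tvm import relay
from tvm.contrib import graph_runtime
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:   # assumed file name
    graph_def.ParseFromString(f.read())

shape_dict = {"ImageTensor": (1, 513, 513, 3)}        # assumed input name/shape
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)

target = tvm.target.cuda()
ctx = tvm.gpu(0)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

runtime = graph_runtime.GraphModule(lib["default"](ctx))
```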
Thank you for the answer.
So it is defined in a pass rather than as a direct implementation.
Thanks for the helpful information.
---
[Visit Topic](https://discuss.tvm.ai/t/where-is-the-batch-normalization-implementation-specified-in-tvm/7120/3) to respond.
I think it is not implemented directly.
There is a [`BatchNormToInferUnpack` function](https://github.com/apache/incubator-tvm/blob/78d79923756ea9ed4545d2faef7d514a300d3452/src/relay/transforms/simplify_inference.cc#L34), part of the [SimplifyInference pass](https://tvm.apache.org/docs/api/pytho
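To make the rewrite concrete, here is a small, hypothetical Relay snippet (not code from the pass itself, just an illustration) that builds a function containing `nn.batch_norm` and then runs `SimplifyInference` on it; printing the result shows batch_norm unpacked into the per-channel scale-and-shift ops used at inference time.
```
import tvm
from tvm import relay

# Hypothetical example: a tiny function with nn.batch_norm, then run the
# SimplifyInference pass to see the op rewritten into elementwise ops.
x = relay.var("x", shape=(1, 16, 32, 32))
gamma = relay.var("gamma", shape=(16,))
beta = relay.var("beta", shape=(16,))
mean = relay.var("mean", shape=(16,))
var = relay.var("var", shape=(16,))

bn = relay.nn.batch_norm(x, gamma, beta, mean, var)[0]
func = relay.Function([x, gamma, beta, mean, var], bn)
mod = tvm.IRModule.from_expr(func)

mod = relay.transform.InferType()(mod)         # SimplifyInference needs type info
mod = relay.transform.SimplifyInference()(mod)
print(mod)   # batch_norm is gone; only scale/shift-style elementwise ops remain
```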
Hello!
I would like to see the implementation of batch_normalization used by Relay.
However, I searched the source code and could not find the implementation anywhere.
Is it implemented in some other location?
---
[Visit Topic](https://discuss.tvm.ai/t/where-is-the-batch-normalization-implementation-specified-in-tvm/7120) to respond.