Is there a `device.py` in `tvm/python/tvm/micro`? I can't import it, though in some blogs it is imported successfully.
---
[Visit Topic](https://discuss.tvm.apache.org/t/where-to-find-device-py/8245/1)
to respond.
Can you send a PR to add your implementation of the LSTM converter? This is a
requested feature (see https://github.com/apache/incubator-tvm/issues/6474).
Unrolling is the standard way to implement LSTM op conversion; both the MXNet
and ONNX frontends do it. I don't recommend pursuing the approach of co
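To illustrate what "unrolling" means here: instead of emitting a single dynamic loop op, the converter emits one LSTM cell per time step of the (statically known) sequence. The sketch below is a hedged, pure-Python illustration of that idea, not the actual frontend code; all function and variable names are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c, w_ih, w_hh, b):
    # One LSTM step: gates = W_ih @ x + W_hh @ h + b, split into i, f, g, o.
    H = len(h)
    gates = []
    for r in range(4 * H):
        s = b[r]
        s += sum(w_ih[r][k] * x[k] for k in range(len(x)))
        s += sum(w_hh[r][k] * h[k] for k in range(H))
        gates.append(s)
    i = [sigmoid(v) for v in gates[0:H]]          # input gate
    f = [sigmoid(v) for v in gates[H:2 * H]]      # forget gate
    g = [math.tanh(v) for v in gates[2 * H:3 * H]]  # candidate cell state
    o = [sigmoid(v) for v in gates[3 * H:4 * H]]  # output gate
    c_new = [f[k] * c[k] + i[k] * g[k] for k in range(H)]
    h_new = [o[k] * math.tanh(c_new[k]) for k in range(H)]
    return h_new, c_new

def unrolled_lstm(seq, h0, c0, w_ih, w_hh, b):
    # "Unrolling": one cell invocation is emitted per time step,
    # so the converted graph contains no dynamic loop construct.
    h, c = h0, c0
    outputs = []
    for x in seq:
        h, c = lstm_cell(x, h, c, w_ih, w_hh, b)
        outputs.append(h)
    return outputs, (h, c)
```

A frontend converter does the same thing symbolically: each `lstm_cell` call becomes a small subgraph of Relay ops, repeated once per time step.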
Hi,
Jiali and I tried to compile the RNN-T PyTorch model with TVM, so we implemented an LSTM converter in
/incubator-tvm/python/tvm/relay/frontend/pytorch.py
```python
def _lstm():
    def _lstm_cell(input, hidden, params):
        # hidden state and cell state from the previous step
        hx = hidden[0]
        cx = hidden[1]
        # input-to-hidden and hidden-to-hidden weight matrices
        _w_ih = params[0]
        _w_hh = params[1]
```
I encountered the following error while converting the model.
```
File "../_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Error in operator 316: [10:41:59] src/top/nn/nn.cc:201:
Check failed: (size_t)param.axis < dshape.Size()
```
Anyone
Hi @Airbala, although this message looks scary, all it means is that you
haven't performed the autotuning process. TVM will still work fine, it just
might not be as fast as it could be. If you want to learn more about
autotuning, check out one of the tutorials here:
https://tvm.apache.org/doc
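Once you have run autotuning, the usual way to make that warning disappear is to compile inside an `autotvm.apply_history_best` context so TVM picks up the tuned schedules instead of falling back to defaults. This is a hedged sketch: the one-op ReLU module stands in for a real model, and `tuning_records.log` is a placeholder for the log your own tuning run produces (the call will fail if the file does not exist).

```python
import tvm
from tvm import relay, autotvm

# Placeholder model: a one-op ReLU network standing in for a real workload.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Apply previously collected tuning records; "tuning_records.log" is a
# placeholder for the log produced by your own autotuning run.
with autotvm.apply_history_best("tuning_records.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")
```

Without the context manager TVM still compiles correctly; it just uses untuned fallback schedules, which is exactly what the "Cannot find config for target" warning is telling you.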
I'm facing the same problem. Have you solved it?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/why-am-i-getting-cannot-find-config-for-target-opencl-device-intel-graphics-model-unknown-workload/5581/3)
to respond.