My TVM version - 0.9.dev334+g3c8de42a0
```python
import torch
import torch.nn as nn

x = torch.randn([1, 256, 35, 35])
upsample_layer = nn.ModuleList([nn.ConvTranspose2d(256, 256, 3, stride=2,
                                                   padding=1)])
# nn.ModuleList is not itself callable; index into it to invoke the layer
x = upsample_layer[0](x, output_size=torch.Size([1, 256, 69, 69]))
```
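As a side note, for this particular layer the `output_size` argument may not be needed at all: PyTorch's documented output-size formula for `ConvTranspose2d` already yields 69 for a 35-pixel input with kernel 3, stride 2, padding 1, and the default `output_padding=0`. A quick sketch of that arithmetic (the function name here is mine, not a PyTorch API):

```python
def conv_transpose2d_out(in_size, kernel, stride, padding,
                         output_padding=0, dilation=1):
    # PyTorch's documented output-size formula for ConvTranspose2d:
    # (in - 1)*stride - 2*padding + dilation*(kernel - 1) + output_padding + 1
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# The layer above: kernel 3, stride 2, padding 1, 35x35 input
print(conv_transpose2d_out(35, kernel=3, stride=2, padding=1))  # 69
```

If the default shape already matches, dropping `output_size` from the forward call may let the module trace cleanly for `relay.frontend.from_pytorch`, since the traced graph then has no dynamic output-size input.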
The error occurs at the line:
`mod, params = relay.frontend.from_pytorch(script_module, …)`
Are you following the instructions here:
https://tvm.apache.org/docs/install/from_source.html?
Assuming you have the `cmake` executable installed: note that the instructions
expect you to be working from inside a git checkout of TVM, so:
```bash
git clone https://github.com/apache/tvm.git
cd tvm
```
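From there, per the linked install guide, the usual next steps look roughly like the following (a sketch; the options you enable in `config.cmake` will depend on your target):

```shell
# Create an out-of-tree build directory and seed it with the default config
mkdir -p build
cp cmake/config.cmake build/

# Edit build/config.cmake to enable what you need, e.g. set(USE_LLVM ON)

cd build
cmake ..
make -j"$(nproc)"
```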
Sorry for the delay here: no, you don't need an RPC tracker. In the future,
when we support distributing tuning jobs to more than one board, a tracker may
be needed functionally, but I suspect we would hide that under the covers.
Hi,
I am planning to integrate a new accelerator to TVM.
I have been following the BYOC
[doc](https://tvm.apache.org/docs/dev/how_to/relay_bring_your_own_codegen.html)
and [blog
post](https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm)
thinking this was sufficient to
It seems like I am missing some bits in what I said above; here is where I am
now when running a build:
```
DEBUG:autotvm:Finish loading 35 records
Traceback (most recent call last):
  File "pt_relay_conv2d.py", line 69, in <module>
    mod = graph_executor.GraphModule(lib['default'](dev))
  File "/