I implemented conv3d with int8 as follows. I created the file
`python/tvm/topi/cuda/conv3d_int8.py`, which implements the operation itself:
```
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
```
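
Only the license header of that file survives in the excerpt above. As a rough sketch of what such a compute can look like (not the actual contents of `conv3d_int8.py`; the NCDHW layout, the absence of padding/dilation handling, and the helper name `conv3d_ncdhw_int8_sketch` are assumptions), a direct int8 conv3d in the TVM `te` API might be written as:

```
from tvm import te


def conv3d_ncdhw_int8_sketch(data, kernel, stride=1):
    """data: (N, C, D, H, W) int8 tensor; kernel: (O, C, KD, KH, KW) int8 tensor."""
    n, c, d, h, w = data.shape
    o, _, kd, kh, kw = kernel.shape
    # Reduction axes over input channels and the 3D kernel window.
    rc = te.reduce_axis((0, c), name="rc")
    rd = te.reduce_axis((0, kd), name="rd")
    rh = te.reduce_axis((0, kh), name="rh")
    rw = te.reduce_axis((0, kw), name="rw")
    return te.compute(
        (n, o, (d - kd) // stride + 1, (h - kh) // stride + 1, (w - kw) // stride + 1),
        lambda nn, oo, zz, yy, xx: te.sum(
            # int8 operands are widened so the reduction accumulates in int32.
            data[nn, rc, zz * stride + rd, yy * stride + rh, xx * stride + rw].astype("int32")
            * kernel[oo, rc, rd, rh, rw].astype("int32"),
            axis=[rc, rd, rh, rw],
        ),
        name="conv3d_int8",
    )


# Example usage: declare int8 placeholders and build the compute.
data = te.placeholder((1, 16, 8, 32, 32), name="data", dtype="int8")
kernel = te.placeholder((32, 16, 3, 3, 3), name="kernel", dtype="int8")
out = conv3d_ncdhw_int8_sketch(data, kernel)
```

Keeping the operands int8 while accumulating in int32 is the usual precondition for mapping the inner product to dp4a-style instructions on CUDA once a suitable schedule is applied.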
---

First of all, given that the schema generation itself is decoupled as a
frontend, there won't be a problem for the lower-level production system, as
the objects themselves are still present as part of the C++ code and built into
the system. The schema generation is run separately, just like clang-
---

Hello @OValery16, I believe the issue you are encountering is that you are
calling `te.thread_axis("threadIdx.z")` multiple times. Instead, can you try
creating the thread axis once with `thread_z = te.thread_axis("threadIdx.z")`
and then using it like so: `s[output].bind(s[output].fuse(tf, td), thread_z)`?
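
As a rough, self-contained sketch of that pattern (the shapes, the compute, and the axis names below are invented for illustration, not taken from the original schedule):

```
from tvm import te

# Illustrative only: a toy compute standing in for the real conv3d output.
A = te.placeholder((16, 16, 16), name="A")
output = te.compute((16, 16, 16), lambda f, d, w: A[f, d, w] * 2.0, name="output")

s = te.create_schedule(output.op)
tf, td, tw = output.op.axis

# Create each thread axis exactly once...
thread_z = te.thread_axis("threadIdx.z")
thread_x = te.thread_axis("threadIdx.x")

# ...and reuse the same IterVar objects when binding.
s[output].bind(s[output].fuse(tf, td), thread_z)
s[output].bind(tw, thread_x)
```

Reusing one IterVar per thread tag avoids the conflicts that come from creating a fresh `threadIdx.z` axis for every bind call.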
---

May I ask what version of the code (incubator-tvm) you used?
---
The 0.7 release, dated 2020-10-02.