[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-05 Thread Olivier Valery via Apache TVM Discuss
I implemented conv3d with int8 as follows: I created the file ```python/tvm/topi/cuda/conv3d_int8.py```, which implements the operation itself. ``` # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with
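For reference, the core of such a TOPI op is typically a `te.compute` over `int8` inputs accumulated into `int32`. The sketch below is a minimal illustration under assumed simplifications (NCDHW layout, stride 1, no padding); it is not the code from the actual file, and the name `conv3d_ncdhw_int8` is only illustrative.

```python
# Minimal sketch of an int8 conv3d compute in TVM's TE, accumulating in int32.
# Assumptions: NCDHW layout, stride 1, no padding.
import tvm
from tvm import te

def conv3d_ncdhw_int8(data, kernel):
    # data:   [N, C, D, H, W]   int8
    # kernel: [O, C, KD, KH, KW] int8
    n, ci, d, h, w = data.shape
    co, _, kd, kh, kw = kernel.shape
    rc = te.reduce_axis((0, ci), name="rc")
    rd = te.reduce_axis((0, kd), name="rd")
    rh = te.reduce_axis((0, kh), name="rh")
    rw = te.reduce_axis((0, kw), name="rw")
    od, oh, ow = d - kd + 1, h - kh + 1, w - kw + 1
    return te.compute(
        (n, co, od, oh, ow),
        lambda nn, ff, zz, yy, xx: te.sum(
            data[nn, rc, zz + rd, yy + rh, xx + rw].astype("int32")
            * kernel[ff, rc, rd, rh, rw].astype("int32"),
            axis=[rc, rd, rh, rw],
        ),
        name="conv3d_ncdhw_int8",
    )
```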

[Apache TVM Discuss] [Development/RFC] [RFC] TVM Object Schema DSL

2020-11-05 Thread tqchen via Apache TVM Discuss
First of all, given that the schema generation itself is decoupled as a frontend, there won't be a problem for the lower-level production system, as the objects themselves are still represented as part of the C++ code and built into the system. The schema generation is run separately, just like clang-
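To illustrate the decoupling (a purely hypothetical sketch; the names below are made up and are not the syntax proposed in the RFC): a schema could be declared as plain data in a frontend, and a separate generator step would emit the C++ object definitions that get compiled into the system as they are today.

```python
# Hypothetical example only: `Field`, `ObjectDef`, and `emit_cpp` are invented
# for illustration and are not part of the RFC or of TVM.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    cpp_type: str

@dataclass
class ObjectDef:
    name: str
    fields: list

# The schema is plain data in the frontend ...
add_node = ObjectDef("AddNode", [Field("a", "PrimExpr"), Field("b", "PrimExpr")])

# ... and a separate generation pass emits the C++ that is built into the system.
def emit_cpp(obj: ObjectDef) -> str:
    fields = "\n".join(f"  {f.cpp_type} {f.name};" for f in obj.fields)
    return f"class {obj.name} : public Object {{\n{fields}\n}};"

print(emit_cpp(add_node))
```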

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-05 Thread Tristan Konolige via Apache TVM Discuss
Hello @OValery16, I believe the issue you are encountering is that you are calling `te.thread_axis("threadIdx.z")` multiple times. Instead, can you try creating the thread axis once with `thread_z = te.thread_axis("threadIdx.z")` and then use it like so: `s[output].bind(s[output].fuse(tf, td),
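A minimal sketch of the suggested pattern, shown on a toy compute: create each thread axis exactly once and reuse that object in the bind calls, instead of calling `te.thread_axis(...)` at every bind site. The axis names `tf`, `td`, `tw` follow the quoted snippet and are assumptions, not the poster's full schedule.

```python
# Toy schedule illustrating "create the thread axis once, then reuse it".
import tvm
from tvm import te

n = 8
A = te.placeholder((n, n, n), name="A")
output = te.compute((n, n, n), lambda f, d, w: A[f, d, w] * 2.0, name="output")

s = te.create_schedule(output.op)
tf, td, tw = s[output].op.axis

thread_z = te.thread_axis("threadIdx.z")  # created exactly once ...
block_x = te.thread_axis("blockIdx.x")

fused = s[output].fuse(tf, td)
s[output].bind(fused, thread_z)           # ... and the same object is reused here
s[output].bind(tw, block_x)
```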

[Apache TVM Discuss] [Development] [VTA] Workaround for Autotuning with One PYNQ Z1 Board

2020-11-05 Thread thkim via Apache TVM Discuss
May I ask what version of the code (incubator-tvm) you used? --- [Visit Topic](https://discuss.tvm.apache.org/t/vta-workaround-for-autotuning-with-one-pynq-z1-board/8091/3) to respond.

[Apache TVM Discuss] [Development] [VTA] Workaround for Autotuning with One PYNQ Z1 Board

2020-11-05 Thread Hanting Huang via Apache TVM Discuss
The 0.7 release (2020-10-02). --- [Visit Topic](https://discuss.tvm.apache.org/t/vta-workaround-for-autotuning-with-one-pynq-z1-board/8091/4) to respond.