Do you have any advice on how to solve this bug? Which file should I look in?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/12)
to respond.
I am less familiar with this part of the code; cc @vinx13, who might know a bit more.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/11)
to respond.
I can reproduce it now. To me it looks like a bug in scheduling. Maybe @tqchen
knows why this is happening?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/10)
to respond.
Thank you for your quick reply.
I cloned the latest version (55d81720f3d05bce559d8b4d7972f54b0fa3eb60) and slightly modified the script, because some files were renamed (util => utils).
```
"""Test for NCHW[x]c convolution"""
import numpy as np
import tvm
from tvm import te
from tvm import autotvm
```
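For reference, the rename only changes the import path; a minimal sketch of the adjustment (`get_const_tuple` is just an illustrative symbol from that module):
```
# Before the rename (older checkouts):
# from tvm.topi.util import get_const_tuple
# After the rename on current main:
from tvm.topi.utils import get_const_tuple
```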
I believe this line is the issue as it occurs before `threadIdx.z` is defined.
[quote="OValery16, post:6, topic:8338"]
`allocate(compute, int32, [(((floordiv(((threadIdx.z: int32*2) + 1), 4)*32) + 32) - (floordiv(threadIdx.z, 2)*32))]);`
[/quote]
However, I cannot reproduce this issue with the
I also wrote a minimal example to reproduce the problem.
```
"""Test for NCHW[x]c convolution"""
import numpy as np
import tvm
from tvm import te
from tvm import autotvm
from tvm import topi
import tvm.testing
import tvm.topi.testing
from tvm.contrib.pickle_memoize import memoize
from tvm.top
```
@tkonolige Thanks a lot for your help.
Regarding `tvm.lower(s, args)`, you can find the generated code below.
Before tuning, I got:
```
#[version = "0.0.5"]
primfn(A_1: handle, W_1: handle, output_unpack_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffer
```
Could you print out the lowered code? You can use `tvm.lower(s, args)` where
`s` is the schedule. Also, if you provide a minimal example to run, I can take
a look at it.
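For illustration, here is a minimal, self-contained sketch of that workflow (the toy elementwise compute below is just a stand-in, not the conv3d workload):
```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Inspect the lowered TIR before building; allocations and thread
# bindings show up here, which makes scheduling bugs easier to spot.
print(tvm.lower(s, [A, B], simple_mode=True))
```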
---
[Visit
Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/5)
to respond.
Hi @tkonolige,
Thanks a lot for your help.
Unfortunately, your fix didn't solve the problem.
I am a bit confused, because my implementation is very close to the one for conv2d_NCHWc_int8:
```
def _schedule_conv2d_NCHWc_int8(cfg, s, output):
    conv = output.op.input_tensors[0]
    packed_d
```
Hello @OValery16, I believe the issue you are encountering is that you are calling `te.thread_axis("threadIdx.z")` multiple times. Instead, can you try creating the thread axis once with `thread_z = te.thread_axis("threadIdx.z")` and then using it like so: `s[output].bind(s[output].fuse(tf, td), thread_z)`?
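For context, a minimal runnable sketch of the suggested pattern, where each thread axis is created exactly once and the same IterVar is reused in every bind call (the toy compute and axis names are illustrative, not the actual conv3d schedule):
```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)

# Create each thread axis once...
block_x = te.thread_axis("blockIdx.x")
thread_x = te.thread_axis("threadIdx.x")
# ...and reuse the same IterVar for every bind, rather than
# calling te.thread_axis(...) again at each binding site.
s[B].bind(bx, block_x)
s[B].bind(tx, thread_x)
```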
I implemented the conv3d with int8 as follows:
I created the file `python/tvm/topi/cuda/conv3d_int8.py`, which implements the operation itself.
```
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with
```
Hi all,
I would like to contribute to this project by implementing 8-bit quantization for 3D convolution. Currently, my implementation works fine without auto-tuning. It is quite similar to what happens in 2D:
1. Reshape the input data and the kernel such that the convolution computation c
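For reference, a minimal sketch of the channel-packing idea from step 1, analogous to the 2D NCHW[x]c layout (the shapes and inner-channel factor below are illustrative assumptions, not the actual implementation):
```
import numpy as np

# NCDHW int8 input; pack ic_block input channels into the innermost axis,
# mirroring the NCHW -> NCHW[x]c transform used by conv2d_NCHWc_int8.
N, C, D, H, W = 1, 16, 8, 32, 32
ic_block = 4  # dp4a consumes 4 int8 values at a time

data = np.random.randint(-128, 128, size=(N, C, D, H, W), dtype=np.int8)
# NCDHW -> (N, C // ic_block, D, H, W, ic_block)
packed = data.reshape(N, C // ic_block, ic_block, D, H, W).transpose(0, 1, 3, 4, 5, 2)
print(packed.shape)  # (1, 4, 8, 32, 32, 4)
```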