Hello!
I am trying to use the graph debugger to measure the performance of VGG16 on the RK3399 board. I simply debugged it using the code below:
```Python
import numpy as np
from tvm import relay
from tvm.relay import testing
import tvm
from tvm import te
from tvm.contrib.debugger import debug_runtime
```
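(The snippet is cut off above. Assuming the truncated import was `tvm.contrib.debugger.debug_runtime`, a minimal sketch of a debug run under those imports, executed directly on the board, might look like the following; the workload shapes, target flags, and dump directory are placeholders, not the original poster's code.)

```Python
# A minimal sketch, assuming the build and run both happen on the RK3399;
# the target flags and dump directory are assumptions.
batch_size = 1
mod, params = testing.vgg.get_workload(num_layers=16, batch_size=batch_size)
target = "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu"

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target=target, params=params)

ctx = tvm.cpu(0)
m = debug_runtime.create(graph, lib, ctx, dump_root="/tmp/tvmdbg")
m.set_input(**params)
data = np.random.uniform(size=(batch_size, 3, 224, 224)).astype("float32")
m.set_input("data", data)
m.run()  # prints a per-operator time breakdown and dumps traces to dump_root
```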
---
Thanks, @kevinthesun, @comaniac
---
Could you provide values for these parameters?

```Python
b = tvm.var('b')  # batch size -> 12 or n
n = tvm.var('n')  # sequence length -> 512
h = tvm.var('h')  # number of heads -> ?
m = tvm.var('m')  # hidden dimension -> 768
w = tvm.var('w')  # window size -> 512
w_upper = tvm.var('w_upper')
```
---
If you want one or two specific configurations to work with, they would be the following (a sketch instantiating them comes after the list):
- batch size = 12 (but `batch_matmul_schedule` didn't require a constant batch
size, so maybe this doesn't need to be constant)
- embedding size: 768
- sequence length: 4,096
- window size: 512
- dilation: 0 and 3 (I th…
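A minimal sketch of plugging these values into the symbolic shapes from the previous post, keeping only the batch symbolic since `batch_matmul_schedule` reportedly tolerates a variable batch; the placeholder tensor is illustrative and follows the older `tvm.var`/`tvm.placeholder` API used in the snippet above:

```Python
import tvm

b = tvm.var('b')  # batch size, 12 at runtime (kept symbolic)
n = 4096          # sequence length
m = 768           # embedding size
w = 512           # window size
d = 3             # dilation (0 for the non-dilated case)

# e.g. an input tensor over these shapes: (batch, seq_len, embedding)
X = tvm.placeholder((b, n, m), name='X')
```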
---
It's true, but it should be fine in my opinion, because the search space for x86 workloads is still acceptable in most cases.
---
You can search for `tile_ow` in the GitHub repo for use cases. For example:
https://github.com/apache/incubator-tvm/blob/0cfdecdae99582998dae5c2c3fdfd7a2700f10c0/topi/python/topi/x86/conv2d_avx_1x1.py#L64
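For readers unfamiliar with the knob, here is a minimal sketch of how a `tile_ow`-style split is defined in an AutoTVM template. It is modeled on the AutoTVM tutorial, not the actual topi conv2d code; the template name and shapes are hypothetical:

```Python
import tvm
from tvm import te, autotvm

@autotvm.template("example/tile_ow_demo")  # hypothetical template name
def tile_ow_demo(n, ow):
    data = te.placeholder((n, ow), name="data")
    out = te.compute((n, ow), lambda i, j: data[i, j] + 1.0, name="out")
    s = te.create_schedule(out.op)

    cfg = autotvm.get_config()
    i, j = s[out].op.axis
    # let the tuner search over factorizations of the output-width axis
    cfg.define_split("tile_ow", j, num_outputs=2)
    jo, ji = cfg["tile_ow"].apply(s, out, j)
    s[out].vectorize(ji)  # vectorize along the inner ow chunk
    return s, [data, out]
```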
---
You don't need graph tuning when using cBLAS.
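A minimal sketch of what the cBLAS path looks like, assuming TVM was built with `USE_BLAS` enabled (the workload and CPU flags are placeholders): with `-libs=cblas`, dense/batch_matmul fall through to the BLAS library, so there are no layout choices left for the graph tuner to optimize.

```Python
import tvm
from tvm import relay
from tvm.relay import testing

# Hypothetical workload; "-libs=cblas" routes dense/batch_matmul to BLAS
# (requires a TVM build with USE_BLAS set).
mod, params = testing.mlp.get_workload(batch_size=12)
target = "llvm -mcpu=core-avx2 -libs=cblas"

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target=target, params=params)
```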
---
I am not sure whether the graph tuner is still applicable when cBLAS is used. Maybe @kevinthesun could provide more details about it.
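For context, the graph-tuning step in question is the layout-selection pass from the x86 tuning tutorial; a sketch of its usual invocation follows, with the workload, record file names, and input name as placeholders (the record file must come from a prior kernel-level AutoTVM run):

```Python
from tvm import relay
from tvm.relay import testing
from tvm.autotvm.graph_tuner import DPTuner

# Example workload; "kernel_tuning.log" and "graph_opt.log" are placeholders.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
target_ops = [relay.op.get("nn.conv2d")]

executor = DPTuner(mod["main"], {"data": (1, 3, 224, 224)},
                   "kernel_tuning.log", target_ops, target="llvm")
executor.benchmark_layout_transform(min_exec_num=100)
executor.run()
executor.write_opt_sch2record_file("graph_opt.log")
```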
---
@kazum Thanks for your patient reply.
I have converted the Keras model (*.h5) to a TensorFlow model (*.pb). Four ops are not supported by the TensorFlow frontend (`relay.frontend.from_tensorflow`):
```Python
NotImplementedError: The following operators are not implemented:
{'…
```
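For reference, a minimal sketch of the import path that raises this error, with the file name and input name/shape as placeholders:

```Python
import tensorflow as tf
from tvm import relay

# Load the frozen *.pb and hand it to the TensorFlow frontend; any graph
# node without a Relay converter triggers the NotImplementedError above.
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

shape_dict = {"input_1": (1, 224, 224, 3)}  # hypothetical input name/shape
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)
```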