My model is on the intranet and cannot be taken out. Are there any good suggestions or methods for checking errors?
---
[Visit Topic](https://discuss.tvm.ai/t/relay-what-does-the-mistake-mean-in-particular-dimensio-2-conflicts-3-does-not-match-2/6149/4) to respond.
Thank you, Robert!
This is really useful information. Thank you.
---
[Visit Topic](https://discuss.tvm.ai/t/arm-cpu-performance-is-too-slow-than-mali-gpu/6220/3) to respond.
Hi!
Does this appear because of this previous issue:
https://discuss.tvm.ai/t/bug-arm-significant-performance-degradation-of-execution-times-between-tvm-revisions/6029/
I was experiencing a similar slowdown on ARM CPUs, which was traced back to the limited Winograd algorithm...
Cheers
Robert
Hi, I am now trying to use AutoTVM to tune the templates for different operators, but I do not know where exactly the templates that TVM already implements are located. Could you help me with this? Also, are there any examples or tutorials on how to use these templates?
And from the Tuning H
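For reference, the templates of that era live in the TOPI part of the source tree (topi/python/topi/&lt;backend&gt;/), registered through autotvm.register_topi_compute and autotvm.register_topi_schedule, and the tune_relay_* tutorials under tutorials/autotvm/ show how to drive them. Below is a minimal, hedged sketch of that flow; the workload, target string, and log file name are placeholders, not taken from this thread.

```
# Hedged sketch of the usual AutoTVM flow; the target, workload, and log
# file name are placeholders for illustration only.
from tvm import autotvm, relay
from tvm.autotvm.tuner import XGBTuner
from tvm.relay import testing

target = "llvm"  # replace with your real target string
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Each extracted task is backed by a template registered in TOPI via
# autotvm.register_topi_compute / autotvm.register_topi_schedule.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
)

for task in tasks:
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=min(1000, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("tuning.log")],
    )

# Compile with the best configs found during tuning.
with autotvm.apply_history_best("tuning.log"):
    graph, lib, params = relay.build(mod, target=target, params=params)
```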
I am trying to mimic tests/cpp/relay_build_module_test.cc to construct a simple Dense + Relu + Add function, as below.
```
auto tensor_type_f32_16_8 = relay::TensorType({16, 8}, DataType::Float(32));
auto tensor_type_f32_8_8 = relay::TensorType({8, 8}, DataType::Float(32));
auto a = relay::Var("a", tensor_type_f32_16_8);
```
No, it's not available for cuda.
---
[Visit Topic](https://discuss.tvm.ai/t/auto-tvm-cuda-tune-graph-is-possible-in-cuda/6219/2) to respond.
This must be a problem with the shape of an array that cannot be reshaped to the target shape. Maybe you can attach the model and I can have a look into it.
---
[Visit Topic](https://discuss.tvm.ai/t/relay-what-does-the-mistake-mean-in-particular-dimensio-2-conflicts-3-does-not-match-2/6149/3) to respond.
Hello!
Currently I am trying to run VGG-16 inference on an Arm CPU.
```
import tvm
import tvm.relay as relay
from tvm.contrib import graph_runtime
import numpy as np
import topi
from tvm.relay.testing.temp_op_attr import TempOpAttr

target_arm_cpu = tvm.target.create('llvm
```
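For readers following along, here is a hedged sketch of how such a script usually continues; the AArch64 target triple and the use of the bundled relay.testing VGG-16 workload are assumptions, not taken from the truncated code above.

```
# Sketch only: the target triple and workload below are assumptions.
import numpy as np
import tvm
import tvm.relay as relay
from tvm.contrib import graph_runtime
from tvm.relay import testing

# Typical arm_cpu target for a 64-bit Linux board; adjust for your hardware.
target_arm_cpu = tvm.target.create(
    "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
)

# VGG-16 from the bundled test models (NCHW, 1x3x224x224 input named "data").
mod, params = testing.vgg.get_workload(num_layers=16, batch_size=1)

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target=target_arm_cpu, params=params)

# Run on the local CPU with the graph runtime.
ctx = tvm.cpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.set_input(**params)
module.run()
out = module.get_output(0).asnumpy()
```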
In auto-tuning, the graph_tuner currently only works for x86.
Is there any way to run the graph_tuner on CUDA?
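For context, the x86-only graph tuner lives under tvm.autotvm.graph_tuner; below is a hedged sketch of how it is typically driven on x86, roughly following the tune_relay_x86 tutorial of that era. The kernel-level log, output file, and workload are placeholders.

```
# Sketch of x86 graph-level tuning only; `records` is an existing kernel-level
# AutoTVM log and `opt_sch_file` is where the graph-level choices are written.
import tvm
import tvm.relay as relay
from tvm.autotvm.graph_tuner import DPTuner
from tvm.relay import testing

target = "llvm -mcpu=core-avx2"   # x86 target; graph tuning is x86/NCHWc only
records = "kernel_tuning.log"     # produced by the per-op AutoTVM tuning step
opt_sch_file = "graph_opt.log"

mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
dshape = (1, 3, 224, 224)

executor = DPTuner(mod["main"], {"data": dshape}, records,
                   [relay.op.get("nn.conv2d")], target)
executor.benchmark_layout_transform(min_exec_num=2000)
executor.run()
executor.write_opt_sch2record_file(opt_sch_file)

# The resulting log is then applied around relay.build, e.g.
# with autotvm.apply_graph_best(opt_sch_file): ...
```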
---
[Visit Topic](https://discuss.tvm.ai/t/auto-tvm-cuda-tune-graph-is-possible-in-cuda/6219/1) to respond.
Hello!
I use an RK3399 Firefly board with LLVM 8.0.0 and Ubuntu 18.04.
I ran VGG-16 on the board using the following code.
```
import tvm
from tvm import te
import tvm.relay as relay
from tvm.contrib import graph_runtime
import numpy as np
import topi
from tvm.relay import t
```
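Since this thread is about performance, here is a short hedged sketch of how inference time is usually measured once a script like this has built the model and created a graph runtime module; the `module` and `ctx` names are assumed to come from graph_runtime.create(...), as in the VGG-16 sketch earlier in this digest.

```
# Assumes `module` is a graph_runtime module and `ctx` the tvm.cpu(0) context
# it was created with, as in the earlier sketch; values are wall-clock times.
import numpy as np

ftimer = module.module.time_evaluator("run", ctx, number=10, repeat=3)
times_ms = np.array(ftimer().results) * 1000.0
print("Mean inference time: %.2f ms (std %.2f ms)" % (times_ms.mean(), times_ms.std()))
```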