Thank you so much for your immediate input. Let me try tuning on my local
machine.
---
[Visit
Topic](https://discuss.tvm.ai/t/quick-start-tutorial-gives-cannot-find-config-for-target-cuda-model-unknown-workload-conv2d-nchw-cuda/5874/5)
to respond.
You are receiving this because you enabled mailing list mode.
This thread proposes some changes, and there is an upcoming PR in the works:
https://discuss.tvm.ai/t/rfc-vta-support-for-cloud-devices-opencl-compatible/6676/22
---
[Visit
Topic](https://discuss.tvm.ai/t/tvm-vta-questions-use-computer-as-soc/3267/5)
to respond.
Hi @faku
Any updates on support for PCIe-based FPGAs?
---
[Visit
Topic](https://discuss.tvm.ai/t/tvm-vta-questions-use-computer-as-soc/3267/4)
to respond.
No, I just copied the files from a zip folder.
I restarted with `git clone --recursive` and it worked.
Thanks a lot!
---
[Visit Topic](https://discuss.tvm.ai/t/install-error-tvm-on-windows/6938/3) to
respond.
If you are using Python, another possible solution is to switch to C++. In my
experience, model deployment with C++ is more memory efficient.
---
[Visit
Topic](https://discuss.tvm.ai/t/tvm-contrib-graph-runtime-create-error/6868/7)
to respond.
Did you use `git clone --recursive` when you cloned the project?
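A quick way to tell whether a checkout pulled in the submodules is to look for
populated directories under `3rdparty/`. A minimal sketch in Python (the two
paths checked here are an illustrative subset, not TVM's full submodule list):

```python
from pathlib import Path
import tempfile

# Illustrative subset; the real tree has more submodules under 3rdparty/.
EXPECTED = ("3rdparty/dmlc-core", "3rdparty/dlpack")

def missing_submodules(repo_root, expected=EXPECTED):
    """Return the expected submodule paths that are absent or empty,
    the usual symptom of copying the source tree instead of running
    `git clone --recursive` (or `git submodule update --init --recursive`)."""
    root = Path(repo_root)
    return [p for p in expected if not any((root / p).glob("*"))]

# An empty directory reports every submodule as missing:
print(missing_submodules(tempfile.mkdtemp()))
```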
---
[Visit Topic](https://discuss.tvm.ai/t/install-error-tvm-on-windows/6938/2) to
respond.
Hi all!
I'm trying to install TVM on my Windows 10 computer, but I have been
unsuccessful so far.
I have followed all the steps from the “Building on Windows” section of
[https://docs.tvm.ai/install/from_source.html](https://docs.tvm.ai/install/from_source.html). Namely:
1. Installed Visual Studio Community
I am not sure, but the Titan, 2080 Ti, and 1080 Ti have tuned configurations in TVM.
So if you want to get the optimal performance for the 1070 Ti, tuning seems to
be the right choice.
There is also a simple tuning template provided in the TVM tutorial, so you can
use it to tune your model.
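The tuning template in the tutorial essentially searches a space of candidate
configurations, measures each on the real device, and keeps the fastest. A
TVM-free sketch of that loop (all names here are illustrative, not AutoTVM's API):

```python
import random

def tune(candidates, measure, n_trial=8):
    """Sample candidate configs, time each with `measure`, keep the best.
    In real AutoTVM, `measure` compiles the schedule and runs it on the
    target GPU; here it is just a stand-in cost function."""
    best_cfg, best_cost = None, float("inf")
    for cfg in random.sample(candidates, min(n_trial, len(candidates))):
        cost = measure(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

# Toy cost model: pretend a tile size of 16 is optimal for this workload.
tile_sizes = [1, 2, 4, 8, 16, 32]
best, cost = tune(tile_sizes, measure=lambda t: abs(t - 16) + 1.0)
print(best, cost)  # tile size 16 wins with cost 1.0
```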
---
I'm using TVM to convert a TensorFlow model.
When I call relay.build, the following warning appears:
> Cannot find config for target=cuda, workload=('conv2d_nchw.cuda', ('TENSOR',
> (1, 384, 35, 35), 'float32'), ('TENSOR', (224, 384, 1, 1), '
but in fact, I searched the TF model and can't find a conv2
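For what it's worth, the warning only means the schedule lookup missed: there
is no tuned entry for that (target, workload) pair, so TVM falls back to a
default schedule that still runs but may be slow. Schematically (illustrative
only, not TVM's actual internals):

```python
def lookup_config(tuning_log, target, workload, fallback="default schedule"):
    """Sketch of the lookup behind the warning: a miss on the
    (target, workload) key triggers the message and a fallback."""
    key = (target, workload)
    if key not in tuning_log:
        print(f"Cannot find config for target={target}, workload={workload}")
        return fallback
    return tuning_log[key]

log = {("cuda", "conv2d, tuned shape"): "tuned schedule"}
print(lookup_config(log, "cuda", "conv2d, untuned shape"))  # falls back
```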
Hi all,
I was wondering how I can call a specific LLVM intrinsic from within a compute
node.
Something along the lines of:
```
def test(x):
    def _compute(*indices):
        value = x(*indices)
        return call_llvm_aarch64_intrinsic(value)
    return te.compute(x.shape, _compute)
```
Hi,
I am also facing the same warning while running ***resnet-50*** on a **GeForce
GTX 1070** with *--model=1080ti*; it takes **4.58 ms** per
[inference](https://github.com/apache/incubator-tvm/blob/master/apps/benchmark/gpu_imagenet_bench.py),
which is nearly twice the expected value.