[quote="merrymercy, post:3, topic:571, full:true"]
You can try to restart all of them and see whether you can get two “free” rasp
on the queue status.
[/quote]
@gasgallo, sorry I haven't been working on this for two years.
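As an aside for anyone else landing here: one way to check that queue status is to connect to the tracker and print its summary. This is just a sketch; the host and port below are placeholders for your own tracker address.

```python
from tvm import rpc

# Sketch only: query the RPC tracker (host/port are placeholders) and print its
# queue summary, which shows how many devices per key are free, pending, or busy.
tracker = rpc.connect_tracker("127.0.0.1", 9190)
print(tracker.text_summary())
```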
---
Hi,
Recently I trained a resnest101 model downloaded from MXNet. Then I used TVM auto-tuning on an NVIDIA 1080 Ti, but I didn't get any speedup. After tuning, the speed is even lower than the MXNet model, and I can't find the reason.
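Not part of the original post, but a rough sketch of the first things to check, assuming the model comes from GluonCV's model zoo under the name `resnest101` and the tuning log is a file named `resnest101_tuning.log` (both are placeholders): make sure the log is applied with `apply_history_best` when building, and time the compiled module directly, because without the log TVM falls back to default schedules that can easily be slower than MXNet with cuDNN.

```python
import numpy as np
import tvm
from tvm import relay, autotvm
from tvm.contrib import graph_runtime
from gluoncv.model_zoo import get_model

# Sketch (model zoo name, log file, and input shape are assumptions):
# import the model, build it under the tuning log, and time inference.
block = get_model("resnest101", pretrained=True)
shape_dict = {"data": (1, 3, 224, 224)}
mod, params = relay.frontend.from_mxnet(block, shape_dict)

with autotvm.apply_history_best("resnest101_tuning.log"):
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(mod, target="cuda", params=params)

ctx = tvm.gpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input("data", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.set_input(**params)
ftimer = module.module.time_evaluator("run", ctx, number=10, repeat=3)
print("mean inference time: %.2f ms" % (np.mean(ftimer().results) * 1000))
```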
---
@Jokeren how did you solve this issue? On my host machine, when I query the tracker, I can see that multiple devices are registered, but only one is used during tuning; the other one only returns `INFO:RPCServer:no incoming connections, regenerate key ...`
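For context, the measurement setup this thread is about looks roughly like the sketch below; the device key `rasp`, host, and port are assumptions rather than values from the thread. `n_parallel=2` is what asks the runner to pull two free boards from the tracker queue at once.

```python
from tvm import autotvm

# Minimal sketch (device key, host, and port are assumed placeholders):
# n_parallel=2 lets the RPCRunner measure on two registered boards in parallel.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.RPCRunner(
        "rasp",
        host="127.0.0.1",
        port=9190,
        n_parallel=2,
        number=4,
        timeout=10,
    ),
)
```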
---
In the TVM frontend, operations that involve mixed data types (float16, float32, float64) as operands are not handled explicitly. I found a condition check that is missing in the src/tir/ir.op.cc file for the float case. After adding that check I ran the TVM unit test suite and noticed that one of the ex
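For concreteness, here is a tiny example of the kind of mixed-dtype expression involved; the explicit `astype` is only a workaround sketch at the TE level, not the missing check described above.

```python
import tvm
from tvm import te

# Illustration only (shapes and names are made up): a float16 operand combined
# with a float32 operand. Casting explicitly side-steps TIR's implicit promotion.
a = te.placeholder((16,), dtype="float16", name="a")
b = te.placeholder((16,), dtype="float32", name="b")
c = te.compute((16,), lambda i: a[i].astype("float32") + b[i], name="c")
s = te.create_schedule(c.op)
print(tvm.lower(s, [a, b, c], simple_mode=True))
```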
---
I'm comparing the performance of a model when using:
- `mxnet-cu100` with cuDNN
- TVM CUDA with `-libs=cudnn`

From my understanding the results should be basically the same, but instead TVM is a lot slower. When compiling the model I see the cuDNN log searching for the best algorithm, so I think the setup is fine.
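For reference, a minimal sketch of how the cuDNN offload is requested at build time; the toy conv2d below is just an illustration, not the model from this thread.

```python
import tvm
from tvm import relay

# Toy network (not the thread's model): one conv2d, built once for plain CUDA
# and once with conv2d offloaded to cuDNN via the -libs=cudnn target option.
data = relay.var("data", shape=(1, 64, 56, 56), dtype="float32")
weight = relay.var("weight", shape=(64, 64, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, padding=(1, 1), kernel_size=(3, 3))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

for target in ["cuda", "cuda -libs=cudnn"]:
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(mod, target=target)
    print("built for", target)
```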
---
Hi, how about trying to disable vectorize?

```python
import tvm
from tvm import relay

# func and params are the Relay function and parameters from your model import.
with tvm.target.build_config(disable_vectorize=True):
    graph, c_mod, params = relay.build(func, target="c", params=params)
```
---
The current MicroTVM implementation seems to be limited to C backends for the devices. In this case, I wonder if we're relying on LLVM's ability to flatten out SIMD types (such as the `<64 x f32>` type corresponding to `float32x64`) into scalar types when SIMD is not enabled. Shall this be fixed fr
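For concreteness, a small illustration (not MicroTVM-specific) of where such a type shows up: vectorizing an inner axis by 64 produces `float32x64` loads and stores in the lowered TIR, which a C backend then has to express without relying on LLVM's type legalization.

```python
import tvm
from tvm import te

# Illustration: vectorizing the inner axis by 64 yields float32x64 vector
# loads/stores in the lowered TIR printed below.
n = 1024
A = te.placeholder((n,), dtype="float32", name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=64)
s[B].vectorize(xi)
print(tvm.lower(s, [A, B], simple_mode=True))
```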