The Lambda layer can contain anything, so I think it's difficult to support.
If your Keras model contains Lambda, I'd suggest converting it to a TensorFlow
model and using the TensorFlow frontend instead. c.f.
https://github.com/amir-abdi/keras_to_tensorflow/
---
[Visit Topic](https://disc
NNVM is now deprecated. Please consider using Relay instead of NNVM as it gets
the latest updates.
---
[Visit Topic](https://discuss.tvm.ai/t/nnvm-issue-when-trying-to-convert-pytorch-conv2d-to-nnvm-conv2d/6389/2)
to respond.
You are receiving this because you enabled mailing list mode.
That is nice, wait for approval
---
[Visit Topic](https://discuss.tvm.ai/t/issue-with-static-tensor-array/6333/7)
To unsubscribe from these emails, [click
here](https://discuss.tvm.ai/email/unsubscribe/26a7a5e59f6c
I'm fixing some issues regarding the TF SSD models and will submit a PR soon.
---
[Visit Topic](https://discuss.tvm.ai/t/issue-with-static-tensor-array/6333/6)
Thank you very much. Tonight I will try what you said. The graph tuner threw
an exception, so I only tuned each op...
---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/5)
Dense is another issue, though. In this case you have to tune the model with
batch size 500. Did you try the graph tuner after tuning each op? Another
option is enabling cBLAS for dense ops by setting `target=llvm -libs=cblas`
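A quick sanity check on why the batch matters for dense: the FLOP count of a dense op scales linearly with batch size, so a schedule tuned at batch 1 is tuned for a workload 500x smaller than the one you run. A minimal sketch (function name and shapes are made up for illustration):

```python
def dense_flops(batch, in_dim, out_dim):
    """FLOPs of a dense (matmul) layer: one multiply and one add per MAC."""
    return 2 * batch * in_dim * out_dim

# Tuning at batch 1 vs. running at batch 500 changes the workload 500x.
print(dense_flops(1, 1024, 1000))    # 2,048,000
print(dense_flops(500, 1024, 1000))  # 1,024,000,000
```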
---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-
@liangfu, thanks for your reply. Those examples use tensor expressions directly
to construct the compute and schedule, then call vta.build(schedule, ...). I want
to use relay.build() to compile Relay IR directly, which is closer to the neural
network import flow.
Any idea?
---
[Visit Topic](http
Thank you for the insights @comaniac!!
---
[Visit Topic](https://discuss.tvm.ai/t/meaning-of-first-numbers-in-auto-tuning-logs/6399/7)
@jinchenglee You might be interested in looking into
[test_vta_insn.py](https://github.com/apache/incubator-tvm/blob/master/vta/tests/python/unittest/test_vta_insn.py)
to see how relu is mapped to the ALU, and into
[test_benchmark_topi_conv2d.py](https://github.com/apache/incubator-tvm/blob/master
My model does not contain conv2d; the most time-consuming op is nn.dense. Do
you mean using the tuning history to build the Relay module with batch size 500
and then running inference?
---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow
Hello, I also hit an ONNX auto-tune build problem. Can you run relay.build
after auto-tuning finished?
```
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(
        mod, target=target, params=params)
```
More info about my problem.
[ [Auto-tune finished, b
I guess the error may be caused by a difference in model loading between the
ONNX and MXNet models, focusing on `relay.Function` and `tvm.IRModule.from_expr`.
**1. I am not sure what they are used for, and what should I write for an ONNX
model?**
```
def customed_network_from_onnx(model_path, input_shapes,
You can make it work with the static version of the layout; I think something
is wrong in the loop handling with the tensor array.
---
[Visit Topic](https://discuss.tvm.ai/t/issue-with-static-tensor-array/6333/5)
I think if we could implement it in C++, we could boost AutoTVM tuning speed.
---
[Visit Topic](https://discuss.tvm.ai/t/can-you-do-auto-tuning-in-c/6362/3)
I think I'm facing the same error.
First, I merged the code from [Dynamic NMS and
strided_slice](https://github.com/apache/incubator-tvm/pull/4312/) to support
importing the [TensorFlow model:
ssd_resnet_50_fpn_coco](https://github.com/tensorflow/models/blob/master/research/object_detection
AutoTVM is a Python-only API, even though it does call into a lot of native
code.
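As an illustration of that split (this is generic ctypes, not TVM's actual FFI layer): a pure-Python API can still dispatch the heavy lifting to compiled code, which is the same pattern TVM uses for its hot paths.

```python
import ctypes
import ctypes.util

# A Python-facing API backed by compiled native code: call libm's sqrt
# directly, the same pattern a Python package uses to wrap a C/C++ core.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```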
---
[Visit Topic](https://discuss.tvm.ai/t/can-you-do-auto-tuning-in-c/6362/2)
When I auto-tuned my own ONNX model, it finished:
```
[Task 20/22]  Current/Best:    3.86/  14.62 GFLOPS | Progress: (5/5) | 4.90 s Done.
[Task 21/22]  Current/Best:    7.47/  12.78 GFLOPS | Progress: (5/5) | 2.42 s Done.
[Task 22/22]  Current/Best:    2.07/   2.07 GFLOPS | Progress: (5/5) | 2.5
```
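As an aside, these progress lines are easy to post-process; a small sketch (the regex is written to tolerate the collapsed spacing above) that extracts the best GFLOPS per task:

```python
import re

# Extract (task number, best GFLOPS) pairs from AutoTVM progress lines.
LINE = re.compile(
    r"\[Task\s+(\d+)/\d+\]\s*Current/Best:\s*[\d.]+/\s*([\d.]+)\s*GFLOPS"
)

def best_gflops(log_text):
    """Map task number -> best GFLOPS seen in the log text."""
    return {int(m.group(1)): float(m.group(2)) for m in LINE.finditer(log_text)}

log = "[Task 20/22] Current/Best:3.86/ 14.62 GFLOPS | Progress: (5/5) | 4.90 s"
print(best_gflops(log))  # {20: 14.62}
```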