Most of the VM-related work has already been pushed into TVM. We are working
on a more systematic way of tuning with symbolic shapes.
---
[Visit Topic](https://discuss.tvm.apache.org/t/codegen-of-nimble/9489/6) to
respond.
We first replace the symbolic dimension with a large constant (e.g., 64, 128)
and use standard AutoTVM tuning to search for schedules. We observe that
tuning on large sizes usually covers good schedules for other shapes.
After the tuning is done, we then choose the top 100 schedules
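Here is a minimal sketch of that idea using standard AutoTVM APIs; the template name `demo/vecadd` and the workload are made up for illustration and are not Nimble's actual tuning script. The symbolic dimension is pinned to the constant 64 and the resulting static workload is tuned as usual:

```python
import tvm
from tvm import te, autotvm

# Toy AutoTVM template; "demo/vecadd" is a hypothetical name for illustration.
@autotvm.template("demo/vecadd")
def vecadd(n):
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
    s = te.create_schedule(B.op)
    cfg = autotvm.get_config()
    i = s[B].op.axis[0]
    cfg.define_split("tile_i", i, num_outputs=2)  # search over tilings
    cfg["tile_i"].apply(s, B, i)
    return s, [A, B]

# Pin the symbolic dimension to a large constant (64) and tune the static task.
task = autotvm.task.create("demo/vecadd", args=(64,), target="llvm")
measure = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5),
)
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=20,
    measure_option=measure,
    callbacks=[autotvm.callback.log_to_file("vecadd.log")],
)
```

The logged records can then be ranked to pick the best-performing schedules to try on the other shapes.
---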
You might want to look into the BYOC flow.

[TVM Blog - How to Bring Your Own
Codegen to TVM](https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm)

It looks like a perfect solution for your task. You most likely need to do
three things:
1. Define which subgraphs and nodes need to be offloaded to your codegen (see the sketch below).
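For reference, here is a hedged sketch of step 1 using the standard Relay BYOC passes; `myaccel` is a hypothetical external codegen name, and the checker function's signature may differ between TVM versions:

```python
import tvm
from tvm import relay

# Mark ops the accelerator supports; "myaccel" is a hypothetical codegen name.
@tvm.ir.register_op_attr("nn.relu", "target.myaccel")
def _relu_supported(expr):
    return True  # offload every nn.relu

# A tiny Relay module to partition.
x = relay.var("x", shape=(1, 8))
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Standard BYOC partitioning pipeline.
mod = relay.transform.AnnotateTarget("myaccel")(mod)  # wrap supported ops
mod = relay.transform.MergeCompilerRegions()(mod)     # fuse adjacent regions
mod = relay.transform.PartitionGraph()(mod)           # split out external funcs
print(mod)  # partitioned functions are tagged with Compiler="myaccel"
```
---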
I am wondering: if I am using a custom accelerator, can I skip code generation
for the subgraphs that have first-class support in the accelerator? The
accelerator comes with its own SW stack and its own proprietary code
generation, which can't be exposed to TVM. However, some operator support
---
It seems that AutoTVM does not check correctness when applying the
`run_through_rpc` function.

Should AutoTVM use functions like `assert_allclose` to guarantee
correctness?
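As a workaround, one can verify a tuned kernel manually after the fact. Here is a minimal sketch, assuming a recent TVM; this is not AutoTVM's own checking logic, and the trivial kernel stands in for one rebuilt from the best tuning record:

```python
import numpy as np
import tvm
import tvm.testing
from tvm import te

# Build a trivial kernel (stand-in for one rebuilt from a tuning log).
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm")

# Run it and compare against a NumPy reference.
dev = tvm.cpu(0)
a_np = np.random.rand(n).astype("float32")
a = tvm.nd.array(a_np, dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
func(a, b)
tvm.testing.assert_allclose(b.numpy(), a_np + 1.0, rtol=1e-5)  # raises on mismatch
```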
---
[Visit
Topic](https://discuss.tvm.apache.org/t/
It solved the error, but it produced the errors below:

Check failed: ret == 0 (-1 vs. 0)
Check failed: e == CL_SUCCESS == false: OpenCL Error, code=-5:
CL_OUT_OF_RESOURCES
---
[Visit
Topic](https://discuss.tvm.apache.org/t/check-failed-allow-missing-false-device-api-gpu-is-not-enabled/9532/4)
to respond.
Thank you so much for your precise description! :grinning:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/how-to-measure-the-time-cost-when-inferencing-using-tvm/9433/3)
to respond.