Currently there's no way to do this, as Ansor generates the schedule sketch
from scratch instead of relying on existing templates. On the other hand, it's
actually sufficient to use AutoTVM to search TOPI schedules.
The more recent efforts, MetaSchedule and AutoTIR, would provide the capability you need.
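A rough sketch of the AutoTVM route mentioned above, i.e. tuning the existing TOPI schedule templates for a Relay model and then compiling with the best records. The workload, log file name, and trial count below are only placeholders:

```python
import tvm
from tvm import autotvm, relay
from tvm.relay import testing
from tvm.autotvm.tuner import XGBTuner

# Placeholder workload; any Relay module/params pair works here.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
target = "llvm"

# Extract AutoTVM tasks backed by the existing TOPI schedule templates.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
)
for task in tasks:
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=min(200, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("autotvm_tune.log")],
    )

# Compile with the best schedules found; any task without a tuning record
# falls back to the default TOPI schedule.
with autotvm.apply_history_best("autotvm_tune.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
```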
Thanks @driazati for your tips on packaging Python scripts into a single app.
I have successfully packaged a simple test_py.py + tvm + libtvm_runtime.so into
a single app using PyInstaller, and it runs OK. The only problem is the
resulting app's size, more than 440MB (in a newly created co
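For context, the script being bundled can be as small as the following. This is only a hypothetical stand-in for test_py.py (the artifact name, input name, and shape are made up); it just needs the TVM runtime to load and run a module compiled ahead of time:

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Hypothetical test_py.py: load a module built with relay.build() and
# exported via lib.export_library("deploy_lib.so").
lib = tvm.runtime.load_module("deploy_lib.so")
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))

# Input name and shape depend on the compiled model; these are placeholders.
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```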
I want the auto-scheduler's exploration space to include the default schedule, since the default
schedule is also an important candidate. Is there any way, or any plan, to support
converting it?
Thanks @mbrookhart, this is what I suspected :)
Hi @lhutton1. That was the intention (it's somewhat safer to do a dataflow
transformation without access to recursion at all), but MixedMode* was
introduced well after TVM became a mature project, and it seems like most
people prefer using the MixedModeMutator directly, since the API is closer
@areusch Thanks for the answer!
So, the AutoTvmModuleLoader eliminates the need to run the rpc_server on the
target board, correct? Running it there would be impossible anyway, because what
runs on the board is a bare-metal application, so that's fine.
Just to confirm: the rpc_tracker doesn't need to b
Hi,
I've been looking at the types of Relay passes recently and got a bit confused
about `MixedModeMutator` and when it should be used over `ExprRewriter`.
The RFC
(https://discuss.tvm.apache.org/t/performing-relay-passes-non-recursively/5696)
seems to me to suggest that `ExprRewriter`
I want to save a TVM (TensorRT backend) model and then load it using the TensorRT C++ API.
So I set the environment variable `TVM_TENSORRT_CACHE_DIR`, and the model was saved as
`tvmgen_default_tensorrt_main_0_fp32.meta` and
`tvmgen_default_tensorrt_main_0_fp32.plan`, but the size of
`tvmgen_default_tensorrt_main_0_fp32.pl
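For anyone trying to reproduce this setup, a rough sketch of the compile-side flow being described. The network, target, and file names are placeholders, it assumes TVM was built with the TensorRT codegen enabled, and the exact `partition_for_tensorrt` signature differs between TVM releases; the engines are built and written into `TVM_TENSORRT_CACHE_DIR` when the exported module is first run on a machine with TensorRT available:

```python
import os
import numpy as np
import tvm
from tvm import relay
from tvm.relay.op.contrib.tensorrt import partition_for_tensorrt

# Engines get serialized into this directory at runtime, producing the
# *.meta / *.plan files mentioned above.
os.environ["TVM_TENSORRT_CACHE_DIR"] = "trt_cache"

# Placeholder network; in practice mod/params come from a frontend importer.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
out = relay.nn.relu(relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1)))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {"weight": np.random.rand(16, 3, 3, 3).astype("float32")}

# Offload supported subgraphs to the TensorRT BYOC backend and compile.
# (Older TVM versions return a (mod, config) pair here instead.)
mod = partition_for_tensorrt(mod, params)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda", params=params)
lib.export_library("compiled_trt.so")
```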