Hi @apeskov!
Yes, sure! I'll clean up my code so I can extract the patch out of it, but the
change was rather simple:
@contextlib.contextmanager
def default_module_loader_mgr(remote_kwargs, build_result):
    remote = request_remote(**remote_kwargs)
    # if pre_load_function i
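For context, here is a self-contained sketch of the context-manager pattern the truncated snippet above follows. Note that `request_remote` is stubbed out below (the real one lives in TVM's RPC tooling and opens a session to a remote device), so this only illustrates the acquire/yield structure, not the actual patch:

```python
import contextlib

# Stub standing in for TVM's request_remote; the real call opens an RPC
# session to a remote device registered with a tracker.
def request_remote(**kwargs):
    class FakeRemote:
        def __init__(self, info):
            self.info = info
        def load_module(self, path):
            return f"module loaded from {path}"
    return FakeRemote(kwargs)

@contextlib.contextmanager
def default_module_loader_mgr(remote_kwargs, build_result):
    # Acquire the remote session on entry...
    remote = request_remote(**remote_kwargs)
    # ...then yield the session plus the loaded module to the caller;
    # teardown logic (if any) would run after the with-block exits.
    yield remote, remote.load_module(build_result)

# Usage:
with default_module_loader_mgr({"device_key": "ios"}, "net.dylib") as (remote, mod):
    print(mod)  # -> module loaded from net.dylib
```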
No, if you want to use dynamic shapes, the VM is always required. This is
because the graph executor assumes that everything is static and preallocates
all required memory based on static shape information.
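To illustrate the distinction, here is a conceptual pure-Python analogy (not TVM code): a graph-executor-style runtime reserves every buffer up front because shapes are fixed at load time, while a VM-style runtime allocates as it executes and can therefore accept any input shape:

```python
def _numel(shape):
    """Number of elements implied by a shape tuple."""
    n = 1
    for d in shape:
        n *= d
    return n

class GraphExecutorStyle:
    """Preallocates all buffers from static shape info at load time."""
    def __init__(self, static_shape):
        self.static_shape = static_shape
        # All memory is reserved once, before any input is seen.
        self.buffer = [0.0] * _numel(static_shape)

    def run(self, data, shape):
        if shape != self.static_shape:
            # A static runtime has no way to grow its buffers.
            raise ValueError(f"expected shape {self.static_shape}, got {shape}")
        self.buffer[: len(data)] = data
        return self.buffer

class VMStyle:
    """Allocates per invocation, so any batch size works."""
    def run(self, data, shape):
        out = [0.0] * _numel(shape)  # sized by the actual input, at call time
        out[: len(data)] = data
        return out
```

With a static batch of 1, `GraphExecutorStyle((1, 4))` raises on a batch of 2, while `VMStyle` handles both; TVM's VM makes an analogous trade, paying some runtime allocation cost for shape flexibility.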
---
[Visit
Topic](https://discuss.tvm.apache.org/t/dynamic-batch-input-support/10069/6)
Thanks for your great help.
I have one more question about dynamic input shapes. The method you suggested
is to use the 'vm' executor instead of the 'graph' executor. Is there any way
to support dynamic input shapes with the 'graph' executor?
---
[Visit
Topic](https://discuss.tvm.apache.org/
Hi @L1onKing!
As I can see, you have already dealt with that problem. The issue you described
in [Tune the model in iOS](/t/tune-the-model-in-ios/10083) is the next stage of
progress with iOS tuning.
But anyway, the issue with transferring Python objects between processes on
macOS is still present in tv
TVM_REGISTER_GLOBAL("ir.RegisterOpLowerIntrinsic") in src/ir/op.cc should be
built for the runtime. Without building this file, the VTA won't run after [PR
#7809](https://github.com/apache/tvm/pull/7809/). This has something to do with
[Problem - RPC server on
ZCU104](https://discuss.tvm.apache.org/t/pr
Hello @echuraev! I have solved the issue, thank you very much! Now I'm working
on tuning the iOS model and I'm getting the following error:
Traceback (most recent call last):
File
"/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/popen_spawn_posix.py",
line 47,