Hello @matt-arm, thank you for your reply.
I have gone through the latest 21.02 branch of the Compute Library, and for most of
the functions padding has been removed. So I tried importing the memory
the same way it is done in the NEON case. The import works correctly for the inputs of
layers; however,
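For reference, below is a minimal sketch (not the actual TVM runtime code) of what I mean by importing an externally owned buffer into an ACL tensor, assuming the 21.02 behaviour where most kernels no longer require padding. The shape, layout, and buffer are only illustrative:

```cpp
// Illustrative only: import an externally owned, padding-free buffer into an
// arm_compute::Tensor instead of letting ACL allocate its own memory.
#include "arm_compute/core/Error.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/Tensor.h"

#include <vector>

int main() {
  // Buffer owned by the caller (e.g. the data pointer of a TVM NDArray).
  // Here it is just a dense float buffer for a 224x224x3 input.
  const unsigned int channels = 3, width = 224, height = 224;
  std::vector<float> external(channels * width * height, 0.f);

  // Describe the tensor; do NOT call allocate(), since the memory is external.
  arm_compute::TensorInfo info(
      arm_compute::TensorShape(channels, width, height), 1,
      arm_compute::DataType::F32);
  info.set_data_layout(arm_compute::DataLayout::NHWC);

  arm_compute::Tensor tensor;
  tensor.allocator()->init(info);

  // Import the external memory. This only succeeds when the tensor requires
  // no padding, which is why the 21.02 padding removal matters here.
  arm_compute::Status status =
      tensor.allocator()->import_memory(external.data());
  if (status.error_code() != arm_compute::ErrorCode::OK) {
    return 1;  // e.g. fall back to ACL-managed allocation plus a copy
  }

  // ... configure and run the ACL function using `tensor` ...

  // Detach the imported buffer; ownership stays with the caller.
  tensor.allocator()->free();
  return 0;
}
```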
@dmitriy-arm Thanks for the reply :+1:
---
Hello,
I have been using the ACL backend for TVM and have observed the following:
without partitioning the graph for ACL, the size of graph.so is around 17 MB for
MobilenetV1; however, after partitioning it is around 33 MB. I have been observing
the same for other models as well. I doubt the weig