Ah, yes, thank you. I am not sure what I did wrong on the first try, but now
it is working.
Thank you very much :slight_smile:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/measure-memory-allocation-using-debug-executor/9679/8)
to respond.
You are receiving this because you enabled mailing list mode.
---
Hi @max1996,
The function metadata is of type:
`Map<String, FunctionInfo> function_metadata;`
When accessed via Python it behaves like a dictionary :) .
```cpp
struct FunctionInfoNode : public Object {
  Map<Target, Integer> workspace_sizes;
  Map<Target, Integer> io_sizes;
  Map<Target, Integer> constant_sizes;
  Map<Target, tir::PrimFunc> tir_primfuncs;
  Map<Target, Function> relay_primfuncs;
};
```
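To make the dictionary-like access concrete, here is a minimal Python sketch. It uses plain dicts standing in for TVM's `Map` objects; the key names mirror `FunctionInfoNode`, but the function names and byte counts are invented for illustration and are not real TVM output:

```python
# Hypothetical shape of function_metadata as seen from Python:
# function name -> per-target size maps (all values here are made up).
function_metadata = {
    "fused_nn_conv2d": {
        "workspace_sizes": {"llvm": 4096, "cuda": 8192},
        "io_sizes": {"llvm": 1024, "cuda": 1024},
        "constant_sizes": {"llvm": 512, "cuda": 512},
    },
    "fused_nn_dense": {
        "workspace_sizes": {"llvm": 2048, "cuda": 2048},
        "io_sizes": {"llvm": 256, "cuda": 256},
        "constant_sizes": {"llvm": 128, "cuda": 128},
    },
}

def total_workspace(metadata, target):
    """Sum the workspace bytes every function requests for one target."""
    return sum(info["workspace_sizes"].get(target, 0)
               for info in metadata.values())

print(total_workspace(function_metadata, "llvm"))  # 6144
```

Summing per-function workspace sizes like this gives an upper bound on scratch memory, not the peak: the actual peak depends on which allocations are live at the same time.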
---
Hi @manupa-arm,
Thank you for this work :slight_smile:
I tried to access this data after compiling with the GraphExecutorFactory, but
am a bit confused by the output: function_metadata is JSON, which seems to
represent a dictionary, but the values for each function look like this …
---
Hi @max1996,
Have a go at this: https://github.com/apache/tvm/pull/7938
The metadata.json is augmented to include peak memory usage in that PR. Please
note that if you are using the CRT graph runtime, it might copy the weights
(if link-params is used && load_params is called) and also maintain a …
---
Right now there is no way to collect this information. I'd like to add one,
but it is a bit complicated.