@tqchen: Any thoughts on the above point?
---
Thank you, I will start a new thread about that.
About the original post in the thread, I do have one small concern: whenever we provide `export_library` a path, it always has to be accompanied by the correct extension name, like below:
`mod.export_library("xx.so")`
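For illustration, here is a minimal, self-contained sketch of that extension-driven behavior on a trivial Relay model; the exact API may differ across TVM versions, especially around the module-based runtime under discussion:

```python
import tvm
from tvm import relay

# Build a trivial Relay function so the example is self-contained.
x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
lib = relay.build(mod, target="llvm")

# export_library picks the packaging format from the file extension:
lib.export_library("deploy.so")   # shared library, linked by the host compiler
lib.export_library("deploy.tar")  # tar of object files, no host compiler step
```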
As we are coming up with a Pack
---
Please start another discussion thread for new questions (on weight serialization).
The current proposal does have a `package_params` option that packages the weights.
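As a purely hypothetical sketch of how that option could surface in the API (the flag name comes from the proposal above; the actual signature may well differ in the final design):

```python
# Hypothetical usage: bundle the weights into the exported artifact so
# one file carries both the compiled code and the parameters.
lib.export_library("deploy.so", package_params=True)
```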
---
Thanks! Agree we can utilize rodata for that case. Maybe that is for another thread of discussion.
Would you please help me with the basic question I raised? What I am trying to figure out here, from the user's perspective, is the standard way to save and reuse weights. As in the current thre
---
Note that the parameters have to be loaded into DRAM, so there is no place where we could do a partial weight load.
For memory-limited scenarios like embedded devices, we would certainly need to go for a different solution, for example directly storing weights in the rodata section to remove the need to load them into RAM.
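A minimal C sketch of that idea, with illustrative names (not TVM's actual layout): `const` file-scope data is emitted into `.rodata`, which execute-in-place targets can read directly from flash:

```c
#include <stddef.h>

/* Illustrative weights; const file-scope arrays land in .rodata. */
static const float conv1_weight[8] = {
    0.12f, -0.53f, 0.98f, 0.04f, -0.77f, 0.31f, 0.66f, -0.21f
};

/* Read the weights in place; no copy into RAM is required. */
float dot_with_weights(const float *x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n && i < 8; ++i)
        acc += x[i] * conv1_weight[i];
    return acc;
}
```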
---
@tqchen: Thank you very much for your enlightening response!
I agree it will introduce an additional layer, but it may have an additional performance benefit as well, even when the store holds simple objects, using FlatBuffers or, more precisely, FlexBuffers. I was thinking of a scenario wh
---
It would be helpful to ask why and why not when introducing new dependencies; see some of the examples in the design decisions above. FlatBuffers could be useful when we need to serialize a complicated set of objects, but it also introduces an additional layer of abstraction.
Given that we are on
---
Hi all, I was wondering whether we can use [FlatBuffers](https://google.github.io/flatbuffers/) for serializing params. That way we can customize the framework to suit our needs, since it is open source, and it would be target agnostic.
I am working on a prototype currently. However I wanted
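To make the idea concrete, here is a rough sketch of schema-less parameter serialization with FlexBuffers, assuming the Python `flatbuffers` package and its `flexbuffers` module with `Dumps`/`Loads` helpers; this illustrates the proposal, not TVM's actual parameter format:

```python
import numpy as np
from flatbuffers import flexbuffers  # pip install flatbuffers

params = {"conv1_weight": np.arange(16, dtype="float32").reshape(4, 4)}

# Encode each tensor as raw bytes plus enough metadata to rebuild it.
blob = flexbuffers.Dumps({
    name: {"dtype": str(a.dtype), "shape": list(a.shape), "data": a.tobytes()}
    for name, a in params.items()
})

# Round-trip: decode and reconstruct the arrays.
rec = flexbuffers.Loads(blob)["conv1_weight"]
w = np.frombuffer(bytes(rec["data"]), dtype=rec["dtype"]).reshape(rec["shape"])
assert (w == params["conv1_weight"]).all()
```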
---
OK, makes sense. If all agree, we could improve our fallback path to put the TVM blob in the rodata section.
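A sketch of what that improved C-source fallback could emit; the `__tvm_dev_mblob` symbol and the `TVM_BLOB_SIG` header are the ones quoted elsewhere in this thread, and the payload bytes are elided:

```c
/* Generated C fallback: a const array that the compiler places in
 * .rodata, with no LLVM or assembler support required.           */
const unsigned char __tvm_dev_mblob[] = {
    0x54, 0x56, 0x4d, 0x5f, 0x42, 0x4c,  /* "TVM_BL"        */
    0x4f, 0x42, 0x5f, 0x53, 0x49, 0x47,  /* "OB_SIG" header */
    /* ... serialized module payload ... */
};
```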
---
I wasn't proposing that as a solution; that is one of the options. I'm merely stating that this is still a problem that will hit others, most notably anyone using the C backend.
Ramana
---
I think I should clarify your question: do you mean we should generate a .rodata section containing `unsigned char __tvm_data_blob[]`?
---
So the problem hasn't been fixed: there is a "solution" that depends on the presence of an LLVM target.
Ramana
---
When we don't have LLVM, we will fall back to our original way (calling the compiler to generate it).
---
This won't work by default for the C backend, where we don't necessarily rely on the presence of LLVM. Or are we saying that there needs to be an LLVM solution just to always produce this constant data object? Either way, we do need a general solution.
Ramana
---
I got it. Thanks FrozenGene.
---
CUDA can also use this, because CUDA's target host is LLVM. The example I showed is in fact a CUDA target, so you can see `NVIDIA NNVM Compiler` in the constant string.
---
Good solution! Thanks FrozenGene! But if we use LLVM, only LLVM-family targets can take advantage of this solution; I'm not sure whether other targets such as CUDA can use it.
---
Thanks for the response. In the end, we don't use this special hack; we generate the blob directly using LLVM IR, and LLVM will put it into the `rodata` section correctly.
Like this test:

[screenshot: test output showing the blob emitted into the rodata section]
---
`const unsigned char __tvm_dev_mblob[46788038] = {"TVM_BLOB_SIG"};` may not be enough, because 46788038 bytes is too big for many embedded systems, so I have to place `__tvm_dev_mblob` into a special section, for example a rodata section. So I mean I need to declare `__tvm_dev_mblob` as `const unsigned char`
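A minimal sketch of that kind of declaration, assuming GCC/Clang attribute syntax; the section name `.tvm_blob` is illustrative and would be placed by a linker script (e.g. into external flash):

```c
/* GCC/Clang: put the blob in a named read-only section so the linker
 * script can map it to external flash instead of on-chip memory.   */
__attribute__((section(".tvm_blob")))
const unsigned char __tvm_dev_mblob[46788038] = {
    0x54, /* 'T' -- remaining "TVM_BLOB_SIG" header and payload elided */
};
```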