Yes, we should build a solution that directly bakes the weights into the rodata
section without having to decode them from metadata. I think we have a good path
to make it work.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/external-modules-in-utvm/7993/12) to
respond.
That sounds pretty reasonable to me. I need to read more about the metadata
encoding, but it seems like we should avoid copying data out of flash.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/external-modules-in-utvm/7993/11) to
respond.
I see. That makes sense.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/sovled-dlpack-error-attributeerror-module-has-no-function-tvm-main/8029/8)
to respond.
I think the confusing part mainly comes from the different use cases: the
tutorial uses TE functions, while @whn09 is working with a Relay graph.
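To make the distinction concrete, here is a minimal sketch (toy shapes and
names are illustrative, not from this thread): `tvm.build` compiles a single TE
schedule into an operator, while `tvm.relay.build` compiles a whole Relay
module into a deployable graph module.
```
import tvm
from tvm import te, relay

# TE path: build one operator from a schedule
A = te.placeholder((8,), name="A")
B = te.compute((8,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
op_mod = tvm.build(s, [A, B], target="llvm")

# Relay path: build a graph-level module
x = relay.var("x", shape=(8,), dtype="float32")
f = relay.Function([x], relay.multiply(x, relay.const(2.0)))
graph_mod = relay.build(tvm.IRModule.from_expr(f), target="llvm")
```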
---
[Visit
Topic](https://discuss.tvm.apache.org/t/sovled-dlpack-error-attributeerror-module-has-no-function-tvm-main/8029/7)
to respond.
Could you kindly post where the confusing part is, and possibly submit
a PR to improve the documentation? That would be super helpful! Thanks!
---
[Visit
Topic](https://discuss.tvm.apache.org/t/sovled-dlpack-error-attributeerror-module-has-no-function-tvm-main/8029/6)
to respond.
Running the code on the current master seems to work fine. Given that we are
going to land another release, this problem may not appear in v0.7.
Notably, there is some constant memory used by global singletons, which will
always stay the same.
```
import os, psutil
import tvm
from tvm import te

def memory_mb():
    # (reconstructed) resident set size of the current process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / 2**20
```
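A minimal sketch of how the leak measurement might look in a loop (the toy
operator and iteration counts are illustrative, not from the original post):
```
import os, psutil
import tvm
from tvm import te

proc = psutil.Process(os.getpid())
for step in range(1000):
    # build the same trivial operator over and over
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda j: A[j] + 1.0, name="B")
    s = te.create_schedule(B.op)
    tvm.build(s, [A, B], target="llvm")
    if step % 100 == 0:
        # growing RSS across iterations indicates the reported leak
        print(step, proc.memory_info().rss // 2**20, "MB")
```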
Hello all,
I have sample code from the TVM official site below. I want to build thousands
of operators in the same process, but I found there is a memory leak and the
memory usage keeps increasing. I thought that after the build of one operator
finished, the corresponding resources would be freed. Do you have any ideas?
Sounds good! I'll take a look at your fork for now and see what we can do.
Regarding PackConstantsToLLVM, I think this is the intention behind the design
of the metadata module (cc @comaniac @zhiics). I believe a solution lies
where we could support it generally rather than using it to cater to a
specific use case.
@manupa-arm yeah exactly--the main difference is that µTVM wants a static
library by default. I'm okay with O1 (reusing export_library) so long as we
don't need to change export_library too much to accommodate µTVM (I don't
believe any changes are needed, after reviewing it here).
for my auto
@tqchen, I think we should handle the weights more generally. Here I was
referring to binary artifacts produced as part of lowering the external
function (which, unlike weights, are not present in the Relay graph initially)
and that are required at runtime.
---
I agree that putting weights in as constants would be an important question.
This is something that is probably orthogonal to the C source module, as we
might be able to create a similar utility via LLVM (like what we did in
PackImports).
---
@areusch looking at the design of export_library, it seems it is designed to
generate a shared object. Thus, the difference in µTVM (w.r.t. TVM) would be
that we would want to statically link it with the runtime at compile time
itself. What are your thoughts on re-using export_library?
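For reference, a sketch of the shared-object flow being contrasted here (the
tiny module and file name are illustrative, not from the thread):
```
import tvm
from tvm import relay

# A tiny Relay module stands in for a real model.
x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], x + relay.const(1.0)))

# relay.build produces a module; export_library links it into a shared
# object that the host TVM runtime loads dynamically -- the µTVM proposal
# would statically link the objects with the runtime instead.
lib = relay.build(mod, target="llvm")
lib.export_library("model.so")
loaded = tvm.runtime.load_module("model.so")
```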
Aren't they introduced by the quantization phase?
You could add `annotation.stop_fusion` to the pattern and deal with it
there.
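A minimal sketch of what that could look like with the dataflow pattern
language (the wrapped conv2d and the pattern name are illustrative):
```
from tvm.relay.dataflow_pattern import is_op, wildcard

# Match a conv2d followed by the stop_fusion annotation, so the composite
# swallows the annotation instead of being split by it.
def make_pattern():
    conv = is_op("nn.conv2d")(wildcard(), wildcard())
    return is_op("annotation.stop_fusion")(conv)

patterns = [("tinyai.conv2d_stop_fusion", make_pattern())]
```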
---
[Visit
Topic](https://discuss.tvm.apache.org/t/byoc-multi-layer-subgraphs/8013/6) to
respond.
First composite, then annotate.
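In pass form, the suggested order might look like this (a sketch: it reuses
the `patterns` list from the sketch above, and assumes `mod` is the Relay
IRModule being partitioned for the `tinyai` target):
```
import tvm
from tvm import relay

seq = tvm.transform.Sequential([
    relay.transform.MergeComposite(patterns),   # 1. composite first
    relay.transform.AnnotateTarget("tinyai"),   # 2. then annotate
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
partitioned = seq(mod)
```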
---
[Visit
Topic](https://discuss.tvm.apache.org/t/byoc-multi-layer-subgraphs/8013/5) to
respond.
I found that I can simply use DLPack to convert a TVM tensor to a PyTorch
tensor and vice versa. The problem has been solved!
```
from torch.utils.dlpack import to_dlpack, from_dlpack
from tvm.runtime import ndarray
...
m.set_input('input1', ndarray.from_dlpack(to_dlpack(input1)))
```
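For the reverse direction ("and vice versa"), a minimal sketch (the shape is
illustrative):
```
import numpy as np
from torch.utils.dlpack import from_dlpack
import tvm

# Wrap a TVM NDArray as a PyTorch tensor without copying, via DLPack.
tvm_arr = tvm.nd.array(np.zeros((1, 3), dtype="float32"))
torch_tensor = from_dlpack(tvm_arr.to_dlpack())
```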
I found that the example uses `tvm.build`, but my code uses `tvm.relay.build`.
I think that is the reason. But if I use `tvm.build` in my code, it may not
work. So, what's the difference between `tvm.build` and `tvm.relay.build`?
So you are saying that the `%82 = annotation.stop_fusion(%81)` is preventing
you from composite merging `@tinyai_2` and `@tinyai_3`?
Do you first annotate and then composite merge?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/byoc-multi-layer-subgraphs/8013/4) to
respond.
I removed the `(ctx)`, and that error was solved. But I got another error:
TVMError: Check failed: type_code_ == kTVMContext (13 vs. 6) : expected
TVMContext but get NDArrayContainer
---
Hey, when generating code for an OpenCL target, is there any way to set the
local_work_size argument (which is set in the clEnqueueNDRangeKernel function)
from the TVM expressions?
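For what it's worth, in a TE schedule the local work size is typically derived
from the extents bound to `threadIdx` axes rather than set directly; a minimal
sketch (sizes are illustrative):
```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))   # work groups
s[B].bind(tx, te.thread_axis("threadIdx.x"))  # extent 64 -> local work size
mod = tvm.build(s, [A, B], target="opencl")
```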
---
[Visit
Topic](https://discuss.tvm.apache.org/t/tvm-opencl-local-work-size/8031/1) to
respond.
Hey @jcf94 and @comaniac, thanks for the response. I was just trying to learn
the compiler flow of TVM and how new hardware-specific rules could be added if
needed. As of now there is no immediate need, but thanks nevertheless.
---
```
def @main(%input_1: Tensor[(1, 224, 224, 3), float32]) -> Tensor[(1, 1000), float32] {
  %69 = nn.pad(%input_1, pad_width=[[0, 0], [0, 1], [0, 1], [0, 0]]) /* ty=Tensor[(1, 225, 225, 3), float32] */;
  %70 = multiply(%69, 16f /* ty=float32 */) /* ty=Tensor[(1, 225, 225, 3), float32
```
Sample code:
```
from tvm.contrib.dlpack import to_pytorch_func
m_pytorch = to_pytorch_func(lib['default'](ctx))
torch_output = torch.empty(1, 40, 120, 256)
m_pytorch(input1, input2, input3, torch_output)
```
---
I am working on a case: compiling a PyTorch model using TVM on g4dn.
But I encountered a problem: the tensor conversion between TVM and PyTorch is
too slow, since some pre-processing and post-processing are implemented in
PyTorch.
I found that DLPack may be helpful in this case, so I tried