Thanks for your reply, I will try again!
---
[Visit Topic](https://discuss.tvm.apache.org/t/can-not-connect-andriod-device-with-rpc/11548/5) to respond.
You are receiving this because you enabled mailing list mode.
The Android RPC app tries to connect to the machine running the tracker (and, in turn, the tracker will try to connect to the phone during operation). The Android RPC app has special fields that you have to fill in before you flip the "Enable RPC" slider. The phone must be able to connect to the tracker machine. The simple
I see, I understand you. I ran "hostname -I" to get my machine's IP address; it is
10.12.17.189. Following your reply, I should run "python3 -m tvm.exec.rpc_tracker
--host=10.12.17.189 --port=9190". But I tried that, and it does not work.
My PC uses the school's local area network; should my phone also use loc
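For reference, a sketch of the two tracker-side commands involved here (assuming TVM is installed on the PC; the IP and port are taken from the post above). Binding to 0.0.0.0 makes the tracker listen on all interfaces, so the phone can reach it over the LAN:

```shell
# Start the tracker, listening on every interface of the PC:
python3 -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9190

# From another terminal (or another machine on the same subnet),
# confirm the tracker is up and list the registered device keys:
python3 -m tvm.exec.query_rpc_tracker --host=10.12.17.189 --port=9190
```

If the query works from the PC but not from another machine on the same network, a firewall on the PC is a likely culprit.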
Did you put the real IP address of the machine running the tracker into the Android RPC app? Android and the machine running the tracker should be in the same subnet, because the phone has to connect to that machine on port 3939 in your case.
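The reachability requirement above can be checked directly. A minimal stdlib-only sketch (not part of TVM) that tests whether a TCP connection to a given host and port succeeds; run it from any machine on the phone's subnet against the tracker's IP and port:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (values from this thread; adjust to your setup):
# can_reach("10.12.17.189", 9190)  -> tracker port reachable?
# can_reach("10.12.17.189", 3939)  -> RPC port reachable?
```

If this returns False from a device on the same subnet, the problem is networking (wrong IP, different subnet, or a firewall), not TVM.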
---
I used this tutorial to deploy a model to the device.
In the TVM app I set "Address: 0.0.0.0; Port: 3939; Key: andriod".
**But I cannot connect the Android device. Who can tell me what is wrong?**
https://tvm.apache.org/docs/how_to/deploy_models/deploy_model_on_android.html#sphx-glr-how-to-deploy-mod
Hi there.
It's been quite a while since I worked on this, so I will tell you based on what I
remember.
Since VTA is built using HLS, I looked at how HLS translates VTA into those Verilog
files.
Using that information, I put debug flags in the Verilog file synthesized by HLS.
Update:
According to: [PyTorch convert function for op 'dictconstruct' not implemented
· Issue #1157 · apple/coremltools
(github.com)](https://github.com/apple/coremltools/issues/1157)
After changing my code from
> model = transformers.SqueezeBertForSequenceClassification(config)
into
>
Hello @AndrewZhaoLuo, @masahi, thanks for your answers.
@AndrewZhaoLuo Yes, I can definitely try converting the model → onnx → relay,
but I still want to try PyTorch for now.
@masahi I have used "torch.jit.trace" to produce the traced model, and it looks
normal:
> SqueezeBertForSequenceCl
Please forgive my ignorance. What is the relationship between
libtvm_runtime.so and BYOC?
---
[Visit Topic](https://discuss.tvm.apache.org/t/if-target-c-how-to-execute-the-c-program/11519/17) to respond.
Thanks for your answer, that makes a lot of sense. After reading the BYOC
blog, I have one more question (I guess it's the last): BYOC can be applied to
specific operators. If I convert the frontend model to a Relay function which
contains more than one operator or function, can BYOC take
@haruhi Both the approach suggested by @comaniac and the one by @Mousius might
be appropriate for you, depending on the situation.
If you want to offload the _entire_ model and you want to use your own C
compiler, the `c` backend will indeed do what you want. We built a specialized
export fun
Q1: Yes. This is one important purpose of introducing BYOC. All existing graph
optimizations can be directly leveraged by custom codegens.
Q2: It's up to you. In general, most BYOC developers wish to generate compilable
code directly from the graph-level IR, because it is easier for them to
i
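The question above, whether BYOC can handle a Relay function containing several operators, comes down to partitioning. A toy stdlib-only sketch of the idea (illustrative only; this is not TVM's actual API): consecutive operators the external codegen supports are grouped into subgraphs handed to that codegen, and everything else stays with the default compiler.

```python
# Toy sketch of BYOC-style partitioning (illustrative, not TVM's API).
# A linear "graph" of ops is split into maximal runs of ops that a
# hypothetical external codegen supports; each run becomes one offloaded
# subgraph, while unsupported ops stay with the default backend.

SUPPORTED = {"conv2d", "relu", "add"}  # ops our hypothetical codegen handles

def partition(ops):
    """Group consecutive ops into (backend, [ops]) segments."""
    segments = []
    for op in ops:
        backend = "external" if op in SUPPORTED else "tvm"
        if segments and segments[-1][0] == backend:
            segments[-1][1].append(op)   # extend the current run
        else:
            segments.append((backend, [op]))  # start a new run
    return segments

graph = ["conv2d", "relu", "softmax", "add", "relu"]
print(partition(graph))
```

So a multi-operator function is not a problem: the supported subset is carved out automatically, which matches the Q1 answer that graph-level optimizations run before the hand-off.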
[quote="AndrewZhaoLuo, post:2, topic:11538"]
The onnx frontend is much more mature.
[/quote]
Be careful with making such claims :slightly_smiling_face: Actually, the PT frontend
is fairly good, and I can generally recommend it for PT users.
@popojames You are probably using `torch.jit.script`, since
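The trace-versus-script distinction at play here can be shown without PyTorch at all. A plain-Python analogy (illustrative only, no torch involved): "tracing" records the operations executed for one example input, so data-dependent control flow is frozen to the branch taken at trace time, which is why `torch.jit.trace` warns about such models while `torch.jit.script` compiles the control flow itself.

```python
# Plain-Python analogy for torch.jit.trace (no PyTorch required).
# Tracing replays the single execution path taken for the example input,
# so any data-dependent branch is frozen at trace time.

def model(x):
    if x > 0:             # data-dependent control flow
        return x * 2
    return x - 1

def trace(fn, example_input):
    """Record which branch example_input takes; freeze it into a 'traced' fn."""
    if example_input > 0:
        return lambda x: x * 2   # only this path survives tracing
    return lambda x: x - 1

traced = trace(model, example_input=3)   # traced with a positive example
print(model(-5), traced(-5))             # eager and traced now disagree
```

For inputs that take the traced branch the two agree; for inputs that would take the other branch, the traced version silently computes the wrong thing, which is the failure mode scripting avoids.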
I would suggest looking into converting the model --> onnx --> relay if
possible. The onnx frontend is much more mature.
---
[Visit Topic](https://discuss.tvm.apache.org/t/issue-converting-model-from-pytorch-to-relay-model/11538/2) to respond.
Here is a PR doing what you want: https://github.com/apache/tvm/pull/9553
(though it takes an IRModule instead of a PackedFunc).
---
[Visit Topic](https://discuss.tvm.apache.org/t/papi-counters-with-basic-matmul-relay-function/11263/8) to respond.
Hello TVM developers and community,
I am trying to convert Transformer-like models such as BERT from different
platforms (TensorFlow or PyTorch) to Relay models.
For TensorFlow models, I was able to convert them into Relay models successfully
by referring to this tutorial:
[Deploy a Huggin
I am studying the TVM source code, and I am confused about the constant
kMaxNumGPUs (= 32) in /src/runtime/cuda/cuda_module.h.
To my mind, when we run the compiled model, we can only choose one GPU card.
If this is true, why does the TVM runtime set kMaxNumGPUs to 32 and keep the memory
allocatio
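One plausible reading of that constant: the CUDA module keeps a fixed-size table with one slot per possible device index, and slots are only filled lazily for devices that are actually used, so supporting up to 32 GPUs costs little when you use one. A toy stdlib sketch of that pattern (names here are illustrative, not TVM's code):

```python
# Toy sketch of a fixed-size, lazily filled per-device table, mirroring the
# pattern suggested by kMaxNumGPUs. Illustrative only; not TVM's actual code.
K_MAX_NUM_GPUS = 32

class PerDeviceModule:
    def __init__(self):
        # One slot per possible device index. Slots stay None (cheap)
        # until that device is first used.
        self.handles = [None] * K_MAX_NUM_GPUS

    def get(self, device_id):
        """Return the module handle for device_id, loading it lazily."""
        if not 0 <= device_id < K_MAX_NUM_GPUS:
            raise ValueError("device_id out of range")
        if self.handles[device_id] is None:
            # Stand-in for loading the compiled module onto that GPU.
            self.handles[device_id] = f"loaded-module-for-gpu-{device_id}"
        return self.handles[device_id]

mods = PerDeviceModule()
print(mods.get(0))  # only slot 0 is ever initialized if you use one GPU
```

Under this pattern, a fixed upper bound avoids dynamic resizing of the table on the hot path while keeping the per-unused-device cost to a single null pointer.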
Sorry for my ignorance; I am gradually getting to know a little about the BYOC flow, so I
want to confirm a few things.
First: if I use the BYOC flow, will TVM's graph optimization passes still apply
to the Relay function?
Second: BYOC won't generate TIR; does it transform the Relay directly into C code?
Looking forward t