Relay VM ops are TVM specific, I don't think this is something you want to port.
---
[Visit Topic](https://discuss.tvm.apache.org/t/relay-vm-newbie-porting-relay-vm-question/9097/2) to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails
Probably `scatter_nd`, see https://github.com/apache/tvm/pull/6854
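For reference, the semantics being asked for (writing a list of values at a list of (row, col) coordinates) can be sketched in plain NumPy; the exact `relay.scatter_nd` signature is in the linked PR, so treat this as an illustration of the scatter semantics only, not of the Relay API:

```python
import numpy as np

# Scatter-style update: write updates[k] at position
# (indices[0][k], indices[1][k]) of A.
A = np.zeros((3, 3), dtype=np.float32)
indices = np.array([[0, 1, 2, 2],
                    [0, 1, 1, 2]])
updates = np.array([2.0, 4.0, 7.0, 9.0], dtype=np.float32)
A[indices[0], indices[1]] = updates
# A[0, 0] == 2.0, A[1, 1] == 4.0, A[2, 1] == 7.0, A[2, 2] == 9.0
```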
---
[Visit Topic](https://discuss.tvm.apache.org/t/index-put-operator-in-relay/9094/2) to respond.
Hi, I am a newbie to the Relay VM, and I have read in the docs (https://tvm.apache.org/docs/dev/virtual_machine.html) that the Relay VM has its own ISA. Can anyone suggest how to start porting the Relay VM to custom hardware? Where should I start? Has anyone ported the Relay VM to another hardware target?
What operator in Relay can be used to set multiple values in a given tensor? Let's say I have a 3x3 tensor `A` and I need to set:
```
A[0,0] = 2.0
A[1,1] = 4.0
A[2,1] = 7.0
A[2,2] = 9.0
```
PyTorch code:
```
A = torch.zeros(3, 3)
hs = torch.tensor([0, 1, 2, 2])
ws = torch.tensor([0, 1, 1, 2])
vs = torch.tensor([2.0, 4.0, 7.0, 9.0])
A[hs, ws] = vs
```
Since the partitioned function is an anonymous function, you cannot access it by name such as `mod["func_1"]`. I think the easiest way is to write a simple Relay pass to collect it.
```python
class FuncCollector(tvm.relay.ExprVisitor):
    def __init__(self):
        super().__init__()
        self.funcs = []

    def visit_function(self, func):
        self.funcs.append(func)
        super().visit_function(func)  # keep descending for nested functions
```
Hi @comaniac , @masahi and @jroesch,
Any suggestions for the above problem? How can I extract the partitioned Relay function from the Relay IRModule?
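As a self-contained illustration of the collector-pass idea suggested above, here is the same traversal pattern written against Python's own `ast` module instead of `tvm.relay.ExprVisitor` (a stand-in chosen only so the sketch runs without TVM; the Relay version would override `visit_function` analogously):

```python
import ast

class FuncCollector(ast.NodeVisitor):
    """Collect every function definition, including nested ones."""
    def __init__(self):
        self.funcs = []

    def visit_FunctionDef(self, node):
        self.funcs.append(node.name)
        self.generic_visit(node)  # keep descending to find nested defs

src = """
def outer():
    def inner():
        pass
"""
collector = FuncCollector()
collector.visit(ast.parse(src))
print(collector.funcs)  # → ['outer', 'inner']
```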
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-extract-a-relay-function-when-doing-partitioning-with-relay-pattern/9065/2) to respond.
@aakah18151 we don't quite have good enough debugging for this right now for me
to be certain, but based on your stack trace and that it's inside
> [bt] (6) /tvm_micro_with_debugger/tvm/build/libtvm.so(tvm::runtime::RPCClientSession::AllocDataSpace(DLContext, unsigned long, unsigned lon
Hi @gfvvz ,
Please have a read of a previous answer I gave here:
https://discuss.tvm.apache.org/t/symbolic-shape-stride-calculation/8531/2
The answer to your question is then yes: TVM does have the capability to support symbolic tensor shapes that change dynamically at run-time.
Howev
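To make the stride calculation from the linked topic concrete, here is a plain-Python sketch of row-major strides computed from a shape; with symbolic shapes the same formula is built from symbolic variables and only evaluated at run-time (the function name here is illustrative, not a TVM API):

```python
def row_major_strides(shape):
    # stride[i] = product of all dimensions to the right of i
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

print(row_major_strides([2, 3, 4]))  # → [12, 4, 1]
```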
@areusch I am trying to run a different tflite model from the tutorial given by @tgall_foo, and I am getting an error while running it.

> RPCError Traceback (most recent call last)
> in
> 2 with tvm.micro.Session(binary=micro_binary, flasher=flashe
@sosa3104 Good to know! Thank you too!!
---
[Visit Topic](https://discuss.tvm.apache.org/t/tvm-deploy-a-c-example/9027/4) to respond.
@juierror Thank you so much for the code.
With this code I got the following error:
```
terminate called after throwing an instance of 'dmlc::Error'
what(): [13:30:24] /home/sara/tvm/src/runtime/graph/graph_runtime.cc:181:
```
---
Thanks @areusch, this works. :slight_smile:
---
[Visit Topic](https://discuss.tvm.apache.org/t/measuring-utvm-inference-time/9064/4) to respond.