`scatter_nd` might work if we assume that `index_put_` usually writes its values
into a tensor that starts out empty (all zeros).
```
import numpy as np
import tvm
from tvm import relay

ctx = tvm.cpu(0)
target = 'llvm'
dtype = 'float32'
shape = (3, 3)
tp = relay.TensorType(shape)
data = np.random.rand(*shape).astype(dtype)
hs = [0, 1, 2, 2]
ws = [0, 1, 1, 2]
vs = [2.0, 4.0, 7.0,
```
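As a hedged illustration in plain NumPy (not the TVM API), the scatter pattern the snippet sets up can be sketched as follows; the fourth value of `vs` is an invented stand-in, since the original post is truncated:

```
import numpy as np

# Destination starts as zeros, matching the index_put_-on-empty-tensor case.
out = np.zeros((3, 3), dtype='float32')
hs = [0, 1, 2, 2]             # row indices
ws = [0, 1, 1, 2]             # column indices
vs = [2.0, 4.0, 7.0, 9.0]     # 9.0 is an invented stand-in value
out[hs, ws] = vs              # the update scatter_nd would perform
```

Because the destination is all zeros, this overwrite and a scatter produce the same result.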
@aakah18151 `tests/micro/qemu/test_zephyr.py` should work on the STM32F746xx
board; it tests sine_model.tflite. You can run it with:
`python tests/micro/qemu/test_zephyr.py --microtvm-platforms=stm32f746xx`
Is this the tutorial you're trying?
---
Thanks @areusch. Unfortunately, increasing the memory size did not work. The
other thing I tried was replacing my
/tvm/tests/micro/qemu/zephyr-runtime/src/main.c with the "main.c" from the
blog-post eval, which did not work either. Is there working microTVM code
apart from the sine model on the same
I am looking into DL optimizations for edge computing. Is there any work that
looks into TVM optimizations for edge computing?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/tvm-optimizations-for-edge-computing/9101/1)
to respond.
You are receiving this because you enabled mailing list mode.
We haven't posted one yet; we are doing some prerequisite work right now but
will try to post it in the next week or two.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/tvm-static-runtime-code-generator/8986/13)
to respond.
Actual TorchScript from a model looks like the following.
torchscript Python:
```
output = torch.zeros([_17, int(num_channels), 7, 7], dtype=6, layout=None,
                     pin_memory=False)  # dtype=6 is torch.float32 in TorchScript's ScalarType encoding
output0 = torch.index_put_(output, _20, _19, False)
```
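For intuition, `index_put_(output, indices, values, accumulate)` writes `values` at the coordinates given by the tuple of index tensors (here `_20` and `_19`, whose contents are unknown). A hedged NumPy equivalent with shrunken shapes and invented indices:

```
import numpy as np

# Stand-ins for the TorchScript values (_17, _20, _19 are unknown here).
output = np.zeros((2, 3, 7, 7), dtype='float32')
idx0 = np.array([0, 1])                       # hypothetical first index tensor
idx1 = np.array([2, 0])                       # hypothetical second index tensor
values = np.ones((2, 7, 7), dtype='float32')  # hypothetical update values
output[idx0, idx1] = values                   # accumulate=False: plain overwrite
```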
torchscript IR:
```
%output.1 : Float(1000, 256, 7, 7, str
```
Hi @areusch, this sounds intriguing. Where can I find more information on the
current status of the AOT?
Is there already an RFC?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/tvm-static-runtime-code-generator/8986/12)
to respond.
Note that Relay is a functional language, so modifying tensors in place is
awkward. In this case, the tensor you are modifying is all zeros by default, so
you can use `scatter_nd`.
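In NumPy terms (a hedged sketch, not the Relay API), the functional version replaces the in-place `index_put_` with a pure function that builds a fresh zero tensor and returns the scattered result:

```
import numpy as np

def scatter_into_zeros(shape, row_idx, col_idx, updates, dtype='float32'):
    """Pure scatter: no input tensor is mutated; a new array is returned."""
    out = np.zeros(shape, dtype=dtype)
    out[row_idx, col_idx] = updates
    return out

res = scatter_into_zeros((2, 2), [0, 1], [1, 0], [5.0, 6.0])
```

Because nothing is mutated, this form maps cleanly onto a functional IR like Relay.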
---
[Visit
Topic](https://discuss.tvm.apache.org/t/index-put-operator-in-relay/9094/3) to
respond.
Hello! I have spent some time analyzing the inner workings of TVM and VTA. I
would like to know if my following assumptions are correct. The VTA paper
states that
> The runtime performs JIT compilation of the accelerator binaries and manages
> heterogeneous execution between the CPU a