+1 (non-binding)
--
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679431456
@zhiics Thanks for your comment. Yes, I just use BYOC to specify which parts
should be offloaded. The subgraph can be a black box for users.
I tried two ways to prepare the package:
1. Cross-compile locally and upload the built lib to the remote server.
[[code](https://github.com/ka
The goal of this RFC is to offload subgraph inference from user devices to
high-performance edge servers. The initial code is available
[here](https://github.com/kazum/tvm/tree/remote_runtime), which implements
inference offloading based on BYOC.
# Motivation
The benefit of offloading infere
+1
--
https://github.com/apache/incubator-tvm/issues/5947#issuecomment-651304553
[quote="kazum, post:1, topic:6415"]
I’ve implemented [a prototype of
A0](https://github.com/kazum/tvm/tree/coreml_codegen)
[/quote]
The PR has been sent and is ready for review:
https://github.com/apache/incubator-tvm/pull/5634
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-coreml-codegen/6415) to respond.
I'm interested in WebAssembly as a next generation of portable and secure
binary images, which can run anywhere and be deployed on, e.g.,
[Krustlet](https://github.com/deislabs/krustlet). Pure WASM support without a
JavaScript layer looks like an area that other DL frameworks haven't worked on yet.
Adding CoreML codegen with the BYOC feature enables us to offload subgraphs to
Apple's Neural Engine on iOS devices. There are several approaches to building
a CoreML model in TVM.
- A0: Build with coremltools
I think this is the most intuitive way to construct CoreML models.
coremltools pr
@FrozenGene Sorry, my description was ambiguous. You can compile a CoreML
model with either of the following commands:
```
$(xcode-select -p)/usr/bin/coremlc compile [model.mlmodel] [outputFolder]
```
or
```
xcrun coremlcompiler compile [model.mlmodel] [outputFolder]
```
I tried Xcode 11.4 and 10.3, and I cou
In this RFC, we would like to propose adding a runtime to load and execute
CoreML models from TVM.
## Motivation
- Currently, using CoreML is the de facto standard approach to run inference on
iOS. This runtime is useful to obtain a baseline benchmark and compare it with TVM.
- Using CoreML
[quote="adobay, post:5, topic:6243"]
```
in_indices = tf.placeholder(tf.float32, np_indices.shape, name="in_indices")
out = tf.gather_nd(in_data, indices)
```
[/quote]
These lines should be
```
in_indices = tf.placeholder(tf.int32, np_indices.shape, name="in_indices")
out = tf.gather_nd(in_data, in_indices)
```
https://github.com/apache/incubator-tvm/pull/5279
---
[Visit Topic](https://discuss.tvm.ai/t/gather-nd-semantics/6243/3) to respond.
I don't think we need to change the current semantics. We can easily implement
TensorFlow's gather_nd with MXNet's gather_nd (and vice versa).
Here is pseudocode:
```
tf_gather_nd(data, indices) =
    relay.gather_nd(data, transpose(indices, [N-1, 0, 1, ..., N-2]))
```
where N is the rank of `indices`.
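The mapping above can be sketched in plain NumPy (illustrative only; `mx_gather_nd` and `tf_gather_nd` are hypothetical helper names, not TVM APIs): the MXNet/Relay convention takes the coordinate tuple along the *first* axis of `indices`, while TensorFlow takes it along the *last* axis, so moving the last axis to the front converts one into the other.

```python
import numpy as np

def mx_gather_nd(data, indices):
    # MXNet/Relay convention: indices[0], indices[1], ... are the
    # per-dimension coordinates, i.e. the coordinate tuple lives on axis 0.
    return data[tuple(indices)]

def tf_gather_nd(data, indices):
    # TensorFlow convention: the coordinate tuple lives on the LAST axis.
    # transpose(indices, [N-1, 0, 1, ..., N-2]) moves it to the front,
    # after which the MXNet-style gather applies directly.
    n = indices.ndim
    return mx_gather_nd(data, np.transpose(indices, [n - 1] + list(range(n - 1))))

data = np.arange(6).reshape(2, 3)        # [[0, 1, 2], [3, 4, 5]]
idx_tf = np.array([[0, 2], [1, 0]])      # TF layout: rows are (row, col) pairs
print(tf_gather_nd(data, idx_tf))        # -> [2 3]
```

This is only a sanity check of the transpose relationship, not how Relay executes the op.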
+1
--
https://github.com/apache/incubator-tvm/issues/5102#issuecomment-601403528
+1
--
https://github.com/dmlc/tvm/issues/2994#issuecomment-481581642
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-480433070