Hi Henry,

Thanks for asking the question. It depends on how and to what extent
AWS is going to release its low-level API. Putting on my Amazon hat (I
work closely with the Inferentia team), this is still not clear. We
need more time to investigate.

It is definitely feasible if we only treat TVM as a model converter
for neuron-sdk, though that would also require neuron-sdk to clean up
and expose some high-level APIs. Below is a rough Python sketch of
what the TVM side of such a converter path might look like.
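
This is only a sketch under assumptions: the Relay import and passes are
standard TVM, but the final neuron_sdk.compile hand-off is purely
hypothetical, since no such high-level API is exposed today.

    import tvm
    from tvm import relay
    from tvm.relay import testing

    # Build a small example network in Relay (stand-in for a model
    # imported through any TVM frontend).
    mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

    # Run generic Relay optimizations before any hand-off.
    mod = relay.transform.FoldConstant()(mod)

    # Hypothetical hand-off point: neuron-sdk would need to expose a
    # high-level compile entry roughly like the commented call below;
    # nothing like it is public at the moment.
    # compiled_artifact = neuron_sdk.compile(mod, params)

Whether this is worth pursuing depends on how much of that last step
AWS is willing to expose.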

On Wed, Jan 15, 2020 at 11:59 AM Henry Saputra <henry.sapu...@gmail.com> wrote:
>
> Hi All,
>
> I remember someone asked about the AWS Neuron SDK [1] during the TVM summit.
> AFAIK it is used to support AWS's new Inferentia chip, which focuses on
> low-power, low-precision inference jobs.
>
> Does anyone have a follow-up on whether it would make sense to add support
> in TVM for the new AWS Inferentia chip [2]?
>
> Thanks,
>
> Henry
>
>
> [1] https://github.com/aws/aws-neuron-sdk
> [2] https://aws.amazon.com/machine-learning/inferentia/



-- 
Yizhi Liu
