> > **Covered frameworks for now** - TFLite and MxNet
> > **Target network for now** - Inception V3 from TFLite. (I will create one
> > for Mxnet)
> > **Target platforms for now** - ARM and Intel (will create separate Issue as
> > the project progresses)
>
> A quick question here since I can't s…
I think lowering in Python makes sense.
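For reference, a minimal sketch of what lowering from Python looks like, assuming the pre-0.7 dmlc/tvm API where `tvm.placeholder`, `tvm.create_schedule`, and `tvm.lower` live at the top level (later releases moved the tensor-expression pieces under `tvm.te`):

```python
import tvm

# Minimal sketch (pre-0.7 dmlc/tvm API assumed): build a tiny compute,
# schedule it, and lower it to a statement from Python.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] * 2.0, name="B")
s = tvm.create_schedule(B.op)

# simple_mode=True returns the lowered statement (IR) rather than a
# LoweredFunc, which is convenient for inspecting it from Python.
stmt = tvm.lower(s, [A, B], simple_mode=True)
print(stmt)
```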
Thank you very much. If you could redirect me to, or provide a link on, how to
create a func (Operation) for a Realize/ProducerConsumer stmt, that would be
very helpful.
Moreover, since lower phase 0 in lower() generates the output (IR) as a
statement reflecting the …
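In case it helps with the Realize/ProducerConsumer question, here is a sketch (again assuming the pre-0.7 dmlc/tvm Python API) that runs the same two steps the start of lower() performs, `InferBound` followed by `ScheduleOps`, so the initial statement can be printed and inspected directly:

```python
import tvm

# Sketch (pre-0.7 dmlc/tvm API assumed): reproduce roughly what the
# beginning of lower() does before the phase-0 passes run.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

s = s.normalize()
bounds = tvm.schedule.InferBound(s)
stmt = tvm.schedule.ScheduleOps(s, bounds)
# The printed statement contains produce (ProducerConsumer) and realize
# blocks, each referencing the Operation of the corresponding stage.
print(stmt)
```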
A quick question here since I can't see this menti…
The serialization itself doesn't have much to do with quantization. If a
quantized model needs new opcodes in the VM, we need to introduce them first
and then extend the serialization/deserialization to support these instructions.
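Purely as a toy illustration of that order of work (none of this is TVM's actual VM code; every name below is hypothetical), the idea is to give the new instruction an opcode first and then make both serialization and deserialization aware of it:

```python
# Toy illustration only; these names are hypothetical, not TVM's VM API.
from enum import IntEnum
import struct

class Opcode(IntEnum):
    ALLOC_TENSOR = 1
    INVOKE_PACKED = 2
    QUANTIZED_OP = 3   # hypothetical new opcode a quantized model might need

def serialize(instr):
    opcode, arg = instr
    return struct.pack("<ii", int(opcode), arg)

def deserialize(data):
    opcode, arg = struct.unpack("<ii", data)
    return Opcode(opcode), arg  # raises ValueError on an unknown opcode

instr = (Opcode.QUANTIZED_OP, 42)
assert deserialize(serialize(instr)) == instr
```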
Will do today @tqchen, my bad.