Thanks @zhanghaohit for your proposal. It's quite interesting to bring the VTA 
framework to cloud devices. This RFC covers quite a large topic. I've read 
through the proposed change, and I'm still unclear about:

* OpenCL requires multi-core parallelism, and VTA has no multi-core support 
for now. (Bringing scalability to VTA has been discussed at 
https://discuss.tvm.ai/t/vta-scalability-for-data-center-fpgas/4853 )
* How will the current VTA hardware communicate with the TVM runtime over the 
PCIe interface?

As a side note, Xilinx HLS is quite different from Intel FPGA OpenCL in my 
observation. I think an easier (and more efficient) workaround is to reuse the 
Chisel VTA for PCIe-based FPGAs and implement a PCIe-based driver for DMA. 
@vegaluis would have more experience on this.
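To make that suggestion concrete: such a driver would mostly need to expose DMA transfers plus a launch/poll pair for the accelerator. Below is a minimal sketch of the interface shape in C. All names here (`vta_dev`, `pcie_dma_write`, `pcie_dma_read`) are hypothetical, and the "device memory" is simulated with a plain host buffer, so this only illustrates the control flow, not real PCIe access.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical handle for a PCIe-attached VTA instance.  On real
 * hardware `bar` would be an mmap'd PCIe BAR window; here it is a
 * plain host buffer so the sketch is self-contained. */
typedef struct {
    uint8_t *bar;      /* simulated device memory */
    size_t   bar_size;
} vta_dev;

static vta_dev *vta_open(size_t bar_size) {
    vta_dev *dev = malloc(sizeof(*dev));
    if (!dev) return NULL;
    dev->bar = calloc(1, bar_size);
    dev->bar_size = bar_size;
    return dev;
}

static void vta_close(vta_dev *dev) {
    if (dev) { free(dev->bar); free(dev); }
}

/* Host -> device DMA (simulated with memcpy). Returns 0 on success,
 * -1 if the transfer would fall outside the device window. */
static int pcie_dma_write(vta_dev *dev, size_t dev_off,
                          const void *src, size_t len) {
    if (dev_off + len > dev->bar_size) return -1;
    memcpy(dev->bar + dev_off, src, len);
    return 0;
}

/* Device -> host DMA (simulated with memcpy). Same return convention. */
static int pcie_dma_read(vta_dev *dev, size_t dev_off,
                         void *dst, size_t len) {
    if (dev_off + len > dev->bar_size) return -1;
    memcpy(dst, dev->bar + dev_off, len);
    return 0;
}
```

Calls like these would sit behind VTA's driver layer (the `VTAMemCopyFromHost` / `VTAMemCopyToHost` style hooks), replacing the memory-mapped Zynq path; a real implementation would also need a way to push the instruction stream and wait for completion.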

See also:

* [A Tutorial for How to Deploy TVM to AWS F1 FPGA 
instance](https://docs.tvm.ai/deploy/hls.html) - Experimental
* There is a WIP PR from @hjiang, [Add c++ and python local deploy 
example](https://github.com/apache/incubator-tvm-vta/pull/5), to enable 
running a workload without an RPC server.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-vta-support-for-cloud-devices-opencl-compatible/6676/4) to respond.
