> On Thu, Feb 18, 2021 at 06:38:07PM +0000, Shai Malin wrote:
> > So, as there are no more comments / questions, we understand the
> > direction is acceptable and will proceed to the full series.
>
> I do not think we should support offloads at all, and certainly not ones
> requiring extra drivers. Those drivers have caused unbelievable pain for
> iSCSI and we should not repeat that mistake.

Hi Christoph,

We are fully aware of the challenges the iSCSI offload faced - I was
there too (in bnx2i and qedi). In our mind, the heart of that hardship
was the iSCSI uio design, essentially a thin alternative networking
stack, which led to no end of compatibility challenges. But we were
also there for the RoCE and iWARP (TCP-based) RDMA offloads, where a
different approach was taken: working with the networking stack
instead of around it. We feel this is a much better approach, and it
is what we are attempting to implement here.

For exactly this reason we designed this offload to be completely
seamless. There is no alternate user-space stack; we plug directly
into the kernel networking stack, and there are zero changes to the
regular nvme-tcp code. We are just adding a new transport alongside
it, which interacts with the networking stack when needed and leaves
it alone most of the time (a rough sketch of the registration path
follows at the end of this mail). Our intention is to completely own
the maintenance of the new transport, including any compatibility
requirements, and we have purposefully designed it to be streamlined
in this respect.

Protocol offload is at the core of our technology, and our device
already offloads RoCE, iWARP, iSCSI and FCoE, all with upstream
drivers (qedr for RoCE and iWARP, qedi for iSCSI, qedf for FCoE). We
are especially excited about NVMeTCP offload because it brings huge
benefits: RDMA-like latency, a tremendous reduction in CPU
utilization, and the reliability of TCP.

We would be more than happy to incorporate any feedback you may have
on how to make the design more robust and correct. We are aware of
other work being done on creating special types of offloaded queues,
and we could model our design similarly, although our thinking was
that this would be more intrusive to the regular nvme-tcp code. In
our original RFC submission we were not adding a ULP driver, only our
own vendor driver, but Sagi pointed us in the direction of a
vendor-agnostic ULP layer, which made a lot of sense to us and which
we think is a good approach.

Thanks,
Ariel
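
As a rough illustration of the "new transport alongside nvme-tcp"
point above: a minimal sketch of how such a transport could register
through the existing nvme-fabrics interface, leaving nvme-tcp itself
untouched. The "tcp_offload" transport name, the nvme_tcp_ofld_*
identifiers and the option flags chosen here are hypothetical
placeholders for illustration, not the actual code from the series:

/*
 * Illustrative sketch only: a new fabrics transport registering next
 * to the existing "tcp" transport via nvmf_register_transport().
 * All names below are hypothetical placeholders.
 */
#include <linux/module.h>
#include <linux/err.h>

#include "fabrics.h"	/* struct nvmf_transport_ops, nvmf_register_transport() */

static struct nvme_ctrl *
nvme_tcp_ofld_create_ctrl(struct device *dev, struct nvmf_ctrl_options *opts)
{
	/*
	 * This is where the offloaded controller and its queues would be
	 * allocated and set up by the ULP / vendor driver.
	 */
	return ERR_PTR(-EOPNOTSUPP);	/* placeholder */
}

static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
	.name		= "tcp_offload",	/* hypothetical transport name */
	.module		= THIS_MODULE,
	.required_opts	= NVMF_OPT_TRADDR,
	.allowed_opts	= NVMF_OPT_TRSVCID | NVMF_OPT_NR_WRITE_QUEUES,
	.create_ctrl	= nvme_tcp_ofld_create_ctrl,
};

static int __init nvme_tcp_ofld_init_module(void)
{
	/* Registers alongside nvme-tcp; no changes to the "tcp" transport. */
	return nvmf_register_transport(&nvme_tcp_ofld_transport);
}

static void __exit nvme_tcp_ofld_cleanup_module(void)
{
	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
}

module_init(nvme_tcp_ofld_init_module);
module_exit(nvme_tcp_ofld_cleanup_module);
MODULE_LICENSE("GPL v2");

With a registration like this, connecting with the new transport name
would route to the offloaded path, while the existing "tcp" transport
and the regular nvme-tcp driver stay completely untouched.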