Hi Bruce, Vladimir, Anatoly,

Regarding the inter-device or inter-domain DMA capability, could you please
clarify whether the Intel idxd driver will support this feature? I believe
the changes Feng has suggested here are in line with the earlier
"[PATCH v1 0/3] Add support for inter-domain DMA operations" proposal.
We are planning to implement support for this feature in DPDK 25.11.
Your feedback would be appreciated; we are aiming for a more generic
solution.

Regards
Vamsi

>> On 2025/7/16 18:59, Vamsi Krishna Attunuru wrote:
>>>
>>>> Thanks for the explanation.
>>>>
>>>> Let me tell you what I understand:
>>>> 1\ Two dmadev (must they belong to the same DMA controller?) are each
>>>>    passed through to a different domain (VM or container).
>>>> 2\ The kernel DMA controller driver could configure access groups ---
>>>>    there is a secure mechanism (like Intel IDPTE), and the two dmadev
>>>>    could communicate if the kernel DMA controller driver has put them
>>>>    in the same access group.
>>>> 3\ The application sets up the access group and gets a handle (maybe
>>>>    the new 'dev_idx' which you announce in this commit), then sets up
>>>>    one vchan configured with that handle, and later launches copy
>>>>    requests on this vchan.
>>>> 4\ The driver will pass the request to the dmadev-1 hardware, the
>>>>    dmadev-1 hardware will do some verification, and maybe use the
>>>>    dmadev-2 stream ID for read/write operations?
>>>>
>>>> A few questions about this:
>>>> 1\ What is the prototype of 'dev_idx', is it uint16_t?
>>> Yes, it can be uint16_t, with two different dev_idx values
>>> (src_dev_idx & dest_dev_idx) used for read & write.
>>>
>>>> 2\ How to implement read/write between two dmadev? Use two different
>>>>    dev_idx values, the first for read and the second for write?
>>> Yes, two different dev_idx values will be used.
>>>
>>>> I also re-read the patchset "[PATCH v1 0/3] Add support for
>>>> inter-domain DMA operations". It introduces:
>>>> 1\ One 'int controller-id' in the rte_dma_info, which may be used in
>>>>    a vendor-specific secure mechanism.
>>>> 2\ Two new OP_FLAGs and two new datapath APIs.
>>>> The reason why this patchset didn't continue (I guess) is the question
>>>> of whether to set up one new vchan. Yes, the vchan was designed to
>>>> represent different transfer contexts. But each vchan has its own
>>>> enqueue/dequeue/ring, so it acts more like one logical dmadev; some
>>>> hardware can fit this model well, some may not (like Intel in this
>>>> case).
>>>>
>>>> So how about the following scheme:
>>>> 1\ Add inter-domain capability bits, for example:
>>>>    RTE_DMA_CAPA_INTER_PROCESS_DOMAIN, RTE_DMA_CAPA_INTER_OS_DOMAIN
>>>> 2\ Add one domain_controller_id in the rte_dma_info, which may be
>>>>    used in a vendor-specific secure mechanism.
>>>> 3\ Add four OP_FLAGs:
>>>>    RTE_DMA_OP_FLAG_SRC_INTER_PROCESS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_DST_INTER_PROCESS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE,
>>>>    RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE
>>>> 4\ Reserve 32 bits of the flags parameter (which all enqueue APIs
>>>>    support) as the src and dst handles, or reserve only 16 bits if we
>>>>    restrict it to not support 3rd-party transfers.
>>>
>>> Yes, the above approach seems acceptable to me. I believe the src & dst
>>> handles require 16-bit values. Reserving 32 bits of the flags parameter
>>> would leave 32 flags available, which should be fine.
>>
>> Great
>>
>> Tip: there are still 24 flag bits reserved after applying this scheme.
>>
>> Would like more comments.
>
> If there are no major comments at this time, can we proceed with
> accepting and merging this notice in this release? Further review can
> continue once the RFC is available next month.
>
> Thanks & Regards
> Vamsi
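For concreteness, here is a minimal C sketch of the datapath side of the
scheme discussed above. It assumes the four proposed OP_FLAGs exist and
that the upper 32 bits of the 64-bit flags argument of the enqueue APIs
carry the two 16-bit handles; the flag bit values, the shift positions,
and the helper name dma_inter_domain_flags() are illustrative assumptions,
not part of dmadev today.

    #include <stdint.h>
    #include <rte_bitops.h>
    #include <rte_dmadev.h>

    /* Proposed flags, not yet in DPDK; bit values are placeholders. */
    #define RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE RTE_BIT64(8)
    #define RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE RTE_BIT64(9)

    /* Assumed layout: bits 32-47 = src handle, bits 48-63 = dst handle. */
    #define DMA_OP_SRC_HANDLE_SHIFT 32
    #define DMA_OP_DST_HANDLE_SHIFT 48

    static inline uint64_t
    dma_inter_domain_flags(uint16_t src_handle, uint16_t dst_handle)
    {
            /* Pack both 16-bit handles into the reserved upper 32 bits
             * and set the flags that tell the driver to interpret them. */
            return ((uint64_t)src_handle << DMA_OP_SRC_HANDLE_SHIFT) |
                   ((uint64_t)dst_handle << DMA_OP_DST_HANDLE_SHIFT) |
                   RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE |
                   RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE;
    }

An enqueue would then stay on the existing datapath API, e.g.
rte_dma_copy(dev_id, vchan, src_iova, dst_iova, len,
dma_inter_domain_flags(src_hdl, dst_hdl)).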
>>>
>>>> Thanks
>>>>
>>>> On 2025/7/15 13:35, Vamsi Krishna Attunuru wrote:
>>>>> Hi Feng,
>>>>>
>>>>> Thanks for depicting the feature use case.
>>>>>
>>>>> From the application's perspective, inter-VM/process communication
>>>>> is required to exchange the src & dst buffer details; however, the
>>>>> specifics of this communication mechanism are outside the scope of
>>>>> this context. Regarding the address translations, these buffer
>>>>> addresses can be either IOVA as PA or IOVA as VA. The DMA hardware
>>>>> must use the appropriate IOMMU stream IDs when initiating the DMA
>>>>> transfers. For example, in the use case shown in the diagram,
>>>>> dmadev-1 and dmadev-2 would join an access group managed by the
>>>>> kernel DMA controller driver. This controller driver will configure
>>>>> the access group on the DMA hardware, enabling the hardware to
>>>>> select the correct stream IDs for read/write operations. New rte_dma
>>>>> APIs could be introduced to join or leave the access group or to
>>>>> query the access group details. Additionally, a secure token
>>>>> mechanism (similar to the vfio-pci token) can be implemented to
>>>>> validate any dmadev attempting to join the access group.
>>>>>
>>>>> Regards.
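To make the access-group idea above concrete, here is a minimal sketch of
what such rte_dma APIs could look like. None of these functions exist in
dmadev; the names, signatures, group/token types, and semantics are all
illustrative assumptions.

    #include <stdint.h>

    /* Hypothetical API surface: join/leave an access group managed by
     * the kernel DMA controller driver. 'token' is a shared secret
     * validated by the kernel, similar in spirit to the vfio-pci token;
     * how peers exchange it is out of scope here. */
    int rte_dma_access_group_join(int16_t dev_id, uint16_t group_id,
                                  uint64_t token);
    int rte_dma_access_group_leave(int16_t dev_id, uint16_t group_id);

    /* Query group membership: fills 'dev_ids' with up to 'max_devs'
     * members and returns the number of devices in the group. */
    int rte_dma_access_group_query(int16_t dev_id, uint16_t group_id,
                                   int16_t *dev_ids, uint16_t max_devs);

In the diagram's use case, the container and the VM would each join their
dmadev to the same group with the shared token, after which the controller
driver programs the hardware so each device may use the peer's stream ID.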
>>>>>
>>>>> From: fengchengwen <fengcheng...@huawei.com>
>>>>> Sent: Tuesday, July 15, 2025 6:29 AM
>>>>> To: Vamsi Krishna Attunuru <vattun...@marvell.com>; dev@dpdk.org;
>>>>> Pavan Nikhilesh Bhagavatula <pbhagavat...@marvell.com>;
>>>>> kevin.la...@intel.com; bruce.richard...@intel.com;
>>>>> m...@smartsharesystems.com
>>>>> Cc: Jerin Jacob <jer...@marvell.com>; tho...@monjalon.net
>>>>> Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device
>>>>> DMA capability support in dmadev
>>>>>
>>>>> Hi Vamsi,
>>>>>
>>>>> From the commit log, I guess this commit mainly wants to meet the
>>>>> following case:
>>>>>
>>>>>   ---------------      ----------------
>>>>>   |  Container  |      | VirtMachine  |
>>>>>   |             |      |              |
>>>>>   |  dmadev-1   |      |  dmadev-2    |
>>>>>   ---------------      ----------------
>>>>>          |                    |
>>>>>          ----------------------
>>>>>
>>>>> An app running in the container could launch a DMA transfer from a
>>>>> local buffer to the VirtMachine by configuring dmadev-1/2 (dmadev-1
>>>>> and dmadev-2 are passed through to different OS domains).
>>>>>
>>>>> Could you explain how to use it from the application perspective
>>>>> (for example, address translation) and the application & hardware
>>>>> restrictions?
>>>>>
>>>>> BTW: In this case there is communication between two OS domains, and
>>>>> I remember there is also an inter-process DMA RFC, so maybe we could
>>>>> design a more generic solution if you provide more info.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 2025/7/10 16:51, Vamsi Krishna wrote:
>>>>>> From: Vamsi Attunuru <vattun...@marvell.com>
>>>>>>
>>>>>> Modern DMA hardware supports data transfer between multiple
>>>>>> DMA devices, enabling data communication across isolated domains or
>>>>>> containers. To facilitate this, the ``dmadev`` library requires
>>>>>> changes to allow devices to register with or unregister from DMA
>>>>>> groups for inter-device communication. This feature is planned for
>>>>>> inclusion in DPDK 25.11.
>>>>>>
>>>>>> Signed-off-by: Vamsi Attunuru <vattun...@marvell.com>
>>>>>> ---
>>>>>>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>  1 file changed, 7 insertions(+)
>>>>>>
>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>>>>> index e2d4125308..46836244dd 100644
>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>> @@ -152,3 +152,10 @@ Deprecation Notices
>>>>>>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>>>>>>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK.
>>>>>>    Those API functions are used internally by DPDK core and netvsc PMD.
>>>>>> +
>>>>>> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV`` will be added
>>>>>> +  to advertise DMA device's inter-device DMA copy capability. To enable
>>>>>> +  this functionality, a few dmadev APIs will be added to configure the DMA
>>>>>> +  access groups, facilitating coordinated data communication between devices.
>>>>>> +  A new ``dev_idx`` field will be added to the ``struct rte_dma_vchan_conf``
>>>>>> +  structure to configure a vchan for data transfers between any two DMA devices.
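For reference, a minimal sketch of how an application might use the
additions announced in the notice above. ``RTE_DMA_CAPA_INTER_DEV`` and
the ``dev_idx`` field are only proposed for 25.11 and do not exist yet,
so they appear below in comments; everything else uses the current
dmadev API.

    #include <rte_dmadev.h>

    static int
    setup_inter_dev_vchan(int16_t dev_id, int16_t peer_dev_idx)
    {
            struct rte_dma_info info;
            struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
            struct rte_dma_vchan_conf vconf = {
                    .direction = RTE_DMA_DIR_MEM_TO_MEM,
                    .nb_desc = 1024,
                    /* .dev_idx = peer_dev_idx,  -- proposed field: the
                     * peer DMA device of the inter-device transfer */
            };

            (void)peer_dev_idx; /* used only once the proposed field exists */

            if (rte_dma_info_get(dev_id, &info) != 0)
                    return -1;
            /* Proposed capability check (flag not yet defined):
             * if (!(info.dev_capa & RTE_DMA_CAPA_INTER_DEV))
             *         return -ENOTSUP;
             */
            if (rte_dma_configure(dev_id, &dev_conf) != 0)
                    return -1;
            return rte_dma_vchan_setup(dev_id, 0, &vconf);
    }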