On 05.12.2018 22:32, David Miller wrote:
> From: Anssi Hannula <anssi.hann...@bitwise.fi>
> Date: Fri, 30 Nov 2018 20:21:35 +0200
> 
>> @@ -682,6 +682,11 @@ static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma_addr_
>>      if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
>>              desc_64 = macb_64b_desc(bp, desc);
>>              desc_64->addrh = upper_32_bits(addr);
>> +            /* The low bits of RX address contain the RX_USED bit, clearing
>> +             * of which allows packet RX. Make sure the high bits are also
>> +             * visible to HW at that point.
>> +             */
>> +            dma_wmb();
>>      }
> 
> I agree that dma_wmb() is what should be used here.
> 
> We are ordering CPU stores with DMA visibility, which is exactly what
> the dma_*() barriers are for.
> 
> If it doesn't work properly on some architecture's implementation of dma_*(),
> those should be fixed rather than papering over it in the drivers.
> 

Ok, agree. This driver uses rmb()/wmb() all over the place with regard to
DMA descriptor updates, so I thought there might be a reason for that.

Thank you,
Claudiu Beznea
