On Mon, Feb 6, 2017 at 3:50 PM, David Laight <david.lai...@aculab.com> wrote:
> From: Saeed Mahameed
>> Sent: 05 February 2017 11:24
>> On Thu, Feb 2, 2017 at 4:47 PM, Daniel Jurgens <dani...@mellanox.com> wrote:
>> > On 2/1/2017 5:12 AM, David Laight wrote:
>> >> From: Saeed Mahameed
>> >>> Sent: 31 January 2017 20:59
>> >>> From: Daniel Jurgens <dani...@mellanox.com>
>> >>>
>> >>> There is a hardware feature that will pad the start or end of a DMA to
>> >>> be cache line aligned to avoid RMWs on the last cache line. The default
>> >>> cache line size setting for this feature is 64B. This change configures
>> >>> the hardware to use 128B alignment on systems with 128B cache lines.
>> >> What guarantees that the extra bytes are actually inside the receive skb's
>> >> head and tail room?
>> >>
>> >>       David
>> >>
>> >>
>> > The hardware won't overwrite the length of the posted buffer. This
>> > feature is already enabled and defaults to a 64B stride; this patch
>> > just configures it properly for 128B cache line sizes.
>> >
>> Right, and the next patch will make sure the RX stride is aligned to
>> 128B when a 128B cache line size is configured into the HW.
>
> Doesn't that mean these patches are in the wrong order?
>

Right, will fix that.

>> > Thanks for reviewing it.
>
> Don't assume I've done anything other than look for obvious fubars.
>
>         David
>
