On Wed, 2013-11-06 at 10:54 -0500, Neil Horman wrote:
> On Wed, Nov 06, 2013 at 10:34:29AM -0500, Dave Jones wrote:
> > On Wed, Nov 06, 2013 at 10:23:19AM -0500, Neil Horman wrote:
> >  > do_csum was identified via perf recently as a hot spot when doing
> >  > receive on ip over infiniband workloads.  After a lot of testing and
> >  > ideas, we found the best optimization available to us currently is to
> >  > prefetch the entire data buffer prior to doing the checksum
[]
> I'll fix this up and send a v3, but I'll give it a day in case there are more
> comments first.

Perhaps a reduction in prefetch loop count helps.

Was capping the amount prefetched, and letting the hardware
prefetcher handle the rest, also tested?  Something like:

        prefetch_lines(buff, min(len, cache_line_size() * 8u));

Also, a couple of pedantic/trivial comments:

Use __always_inline instead of inline:
static __always_inline void prefetch_lines(const void *addr, size_t len)
{
        const void *end = addr + len;
...

buff doesn't need a void * cast in prefetch_lines
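
Taken together, the helper might end up looking something like this
(untested sketch; the loop body is my guess at what v2 already does,
stepping a cache line at a time with prefetch() from <linux/prefetch.h>):

static __always_inline void prefetch_lines(const void *addr, size_t len)
{
        const void *end = addr + len;

        /* sketch only: walk the buffer one cache line at a time;
         * addr is already const void *, so no cast is needed
         */
        for (; addr < end; addr += cache_line_size())
                prefetch(addr);
}

with the call site capped via the min() above.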

Besides the commit message, the comment above prefetch_lines
also needs updating to remove the "Manual Prefetching" line:

/*
 * Do a 64-bit checksum on an arbitrary memory area.
 * Returns a 32bit checksum.
 *
 * This isn't as time critical as it used to be because many NICs
 * do hardware checksumming these days.
 * 
 * Things tried and found to not make it faster:
 * Manual Prefetching
 * Unrolling to an 128 bytes inner loop.
 */
