> From: Scott Mitchell [mailto:[email protected]]
> Sent: Tuesday, 6 January 2026 19.16
> 
> On Tue, Jan 6, 2026 at 5:59 AM Morten Brørup <[email protected]>
> wrote:
> >
> > > From: Scott Mitchell <[email protected]>
> > >
> > > Optimize __rte_raw_cksum() by processing data in larger unrolled
> > > loops
> > > instead of iterating word-by-word. The new implementation processes
> > > 64-byte blocks (32 x uint16_t) in the hot path, followed by smaller
> > > 32/16/8/4/2-byte chunks.
> >
> > Good idea processing in 64-byte blocks!
> >
> > I wonder if there would be further gain by 64-byte aligning the
> > 64-byte chunks, so the compiler can use vector instructions for
> > summing the 32 2-byte words of each 64-byte chunk.
> > This would require a 3-step algorithm:
> > 1. Process the first 0..63 bytes preceding the first 64-byte
> > aligned address. (These bytes are unaligned; nothing new here.)
> > 2. Process 64-byte chunks, if any. These are now 64-byte aligned,
> > and you should ensure that the compiler knows it.
> > 3. Process the last 32/16/8/4/2/1-byte chunks. These are now
> > aligned, which eliminates the need for unaligned_uint16_t in this
> > step. Specifically, the 32-byte chunk will be 64-byte aligned,
> > allowing the compiler to use vector instructions. The 16-byte chunk
> > will be 32-byte aligned. Etc.
> >
> > <random idea>
> > Step 1 may be performed in reverse order of step 3, i.e. process in
> > chunks of 1/2/4/8/16/32 bytes (using the lowest bits of the address
> > as condition) - which will cause the alignment to increase
> > accordingly.
> > </random idea>
> >
> > <feature creep>
> > Checking the alignment at runtime has a non-zero cost, so an
> > alternative (simpler) code path might be beneficial for small
> > lengths (when the alignment is not known until runtime).
> > </feature creep>
> >
> 
> Good idea! I implemented your suggestion, but I didn't observe a
> measurable difference in cksum_perf_autotest. I suggest we proceed
> with the approach in this patch as an incremental step, and I can
> post a follow-up with your suggestion above to review/discuss.

Strongly agree to proceed with this patch first.
It brings a big performance benefit, while remaining relatively simple.
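
For readers of the archive, the general shape is roughly the sketch
below. This is my own illustrative rewrite, not the actual diff, and it
collapses the 32/16/8/4/2-byte tail cascade into a plain word loop for
brevity (unaligned_uint16_t is the DPDK typedef from rte_common.h for
safe unaligned loads):

#include <stdint.h>
#include <stddef.h>

#include <rte_common.h>

/* Illustrative sketch only, not the actual patch. */
static inline uint32_t
raw_cksum_sketch(const void *buf, size_t len, uint32_t sum)
{
        const unaligned_uint16_t *u16 = buf;

        /* Hot path: one 64-byte block (32 x uint16_t) per iteration. */
        while (len >= 64) {
                size_t i;

                for (i = 0; i < 32; i++)
                        sum += u16[i];
                u16 += 32;
                len -= 64;
        }
        /* Tail: remaining 16-bit words (the patch uses 32/16/8/4/2-byte
         * chunks here; a plain loop keeps the sketch short). */
        while (len >= 2) {
                sum += *u16++;
                len -= 2;
        }
        /* Odd trailing byte, kept byte-order independent. */
        if (len == 1) {
                uint16_t left = 0;

                *(unsigned char *)&left = *(const unsigned char *)u16;
                sum += left;
        }
        return sum;
}

The win is that the unrolled body gives the compiler a straight-line
block of loads and adds that it is free to reassociate and vectorize,
instead of a loop-carried one-word-per-iteration dependency.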

Then vector-optimized variants can be experimented with later.
Thanks for trying it out.
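
For the follow-up, the 3-step variant discussed above would look
roughly like the untested sketch below. RTE_PTR_ALIGN_CEIL and
unaligned_uint16_t are from rte_common.h; __builtin_assume_aligned is a
GCC/clang builtin, so a portable version would need an abstraction. For
brevity it falls back to the existing __rte_raw_cksum() for odd start
addresses (your point below):

#include <stdint.h>
#include <stddef.h>

#include <rte_common.h>
#include <rte_ip.h>

/* Untested sketch of the 3-step aligned variant. */
static inline uint32_t
raw_cksum_aligned_sketch(const void *buf, size_t len, uint32_t sum)
{
        const uint8_t *p = buf;
        const uint8_t *aligned;

        /* An odd start address breaks the 16-bit word phase; fall back
         * to the existing implementation rather than handling the
         * byte-swap trick here. */
        if (((uintptr_t)p & 1) != 0)
                return __rte_raw_cksum(p, len, sum);

        aligned = RTE_PTR_ALIGN_CEIL(p, 64);

        /* Step 1: unaligned head, 0..63 bytes. (The <random idea>
         * variant would instead consume 2/4/8/16/32-byte chunks keyed
         * on the low address bits, raising the alignment stepwise.) */
        while (p < aligned && len >= 2) {
                sum += *(const unaligned_uint16_t *)p;
                p += 2;
                len -= 2;
        }

        /* Step 2: 64-byte aligned blocks. The builtin tells the
         * compiler about the alignment so it may vectorize. */
        while (len >= 64) {
                const uint16_t *v = __builtin_assume_aligned(p, 64);
                size_t i;

                for (i = 0; i < 32; i++)
                        sum += v[i];
                p += 64;
                len -= 64;
        }

        /* Step 3: aligned tail words, plus a possible odd trailing
         * byte. (Per the <feature creep> note, small lengths could
         * skip the alignment checks via a separate simple path.) */
        while (len >= 2) {
                sum += *(const uint16_t *)p;
                p += 2;
                len -= 2;
        }
        if (len == 1) {
                uint16_t left = 0;

                *(unsigned char *)&left = *p;
                sum += left;
        }
        return sum;
}

Given your measurement that this did not move cksum_perf_autotest, it
can wait for the follow-up patch.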

> Note that the checksum computation must process data in 16-bit words
> for correctness, which requires special-case handling for odd
> lengths and odd buffer-address alignment, so the complexity and code
> size are higher.

Good point. The vector-optimized variant might not be as simple as initially
thought.
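
If it helps for the follow-up, the usual trick (see RFC 1071 on byte
order independence) keeps the 16-bit words in phase with the start of
the buffer. E.g. for the bytes 01 02 03 04, the little-endian sum is
0x0201 + 0x0403 = 0x0604; if an odd start address forces the word loop
to begin one byte in, the head byte must be added in the high lane and
the partial sum byte-swapped afterwards: (0x01 << 8) + 0x0302 + 0x04 =
0x0406, which byte-swapped is again 0x0604. That is extra branching in
every variant, aligned or not.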

> 
> > >
> > > Uses uint64_t accumulator to reduce carry propagation overhead
> >
> > You return (uint32_t)sum64 at the end, so why replace the existing
> > 32-bit "sum" with a 64-bit "sum64" accumulator?
> 
> Good catch. It gives more headroom against overflow, but it is not
> necessary, and I will revert.

Thanks.
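
For the archive, the headroom arithmetic: each addend is at most
0xffff, so a uint32_t accumulator cannot overflow before roughly 2^16
additions, i.e. about 64K 16-bit words (~128 KiB of data) summed in a
single call, and __rte_raw_cksum_reduce() folds the carries afterwards.
So 32 bits is plenty in practice.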

> 
> >
> > > and
> > > leverages unaligned_uint16_t for safe unaligned access on all
> > > platforms.
> > >
> > > Performance results from cksum_perf_autotest (TSC cycles/byte):
> > >   Block size    Before    After    Improvement
> > >          100  0.40-0.64  0.13-0.14    ~3-4x
> > >         1500  0.49-0.51  0.10-0.11    ~4-5x
> > >         9000  0.48-0.51  0.11-0.12    ~4x
> > >
> > > Signed-off-by: Scott Mitchell <[email protected]>
> >
