> From: Scott Mitchell <[email protected]>
> 
> Optimize __rte_raw_cksum() by processing data in larger unrolled loops
> instead of iterating word-by-word. The new implementation processes
> 64-byte blocks (32 x uint16_t) in the hot path, followed by smaller
> 32/16/8/4/2-byte chunks.

Good idea processing in 64-byte blocks!

I wonder if there would be further gain by 64-byte aligning the 64-byte chunks, 
so the compiler can use vector instructions for summing the 32 2-byte words of 
each 64-byte chunk.
This would require a 3-step algorithm, roughly sketched below:
1. Process the first 0..63 bytes preceding the first 64-byte aligned address. 
(These bytes are unaligned; nothing new here.)
2. Process 64-byte chunks, if any. These are now 64-byte aligned, and you 
should ensure that the compiler knows it.
3. Process the last 32/16/8/4/2/1-byte chunks. These are now aligned, which 
eliminates the need for unaligned_uint16_t in this step. Specifically, the 
32-byte chunk will be 64-byte aligned, allowing the compiler to use vector 
instructions. The 16-byte chunk will be 32-byte aligned. Etc.
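Roughly like this (untested, just to illustrate the idea; the function and 
helper names are invented, it assumes buf is at least 2-byte aligned in step 1 
- an odd start address would additionally need the byte-swap trick from 
RFC 1071 - and it relies on GCC/Clang's __builtin_assume_aligned in step 2):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper; the real code would keep using DPDK's types. */
static inline uint32_t
sum_words(const uint16_t *p, size_t n, uint32_t sum)
{
	while (n--)
		sum += *p++;
	return sum;
}

static uint32_t
raw_cksum_aligned(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t sum = 0;
	size_t head, chunk;

	/* Step 1: words up to the first 64-byte boundary (unaligned path,
	 * nothing new here). Assumes buf is at least 2-byte aligned. */
	head = (64 - ((uintptr_t)p & 63)) & 63;
	if (head > len)
		head = len;
	sum = sum_words((const uint16_t *)p, head / 2, sum);
	p += head & ~(size_t)1;
	len -= head & ~(size_t)1;

	/* Step 2: 64-byte aligned chunks; tell the compiler about the
	 * alignment so it can vectorize the 32-word inner loop. */
	while (len >= 64) {
		const uint16_t *w = __builtin_assume_aligned(p, 64);
		unsigned int i;

		for (i = 0; i < 32; i++)
			sum += w[i];
		p += 64;
		len -= 64;
	}

	/* Step 3: aligned tail in 32/16/8/4/2-byte chunks. */
	for (chunk = 32; chunk >= 2; chunk /= 2) {
		if (len >= chunk) {
			sum = sum_words((const uint16_t *)p, chunk / 2, sum);
			p += chunk;
			len -= chunk;
		}
	}
	if (len == 1) {	/* trailing odd byte, endian-neutral */
		uint16_t last = 0;

		*(unsigned char *)&last = *p;
		sum += last;
	}

	return sum;
}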

<random idea>
Step 1 may be performed in the reverse order of step 3, i.e. process in chunks 
of 1/2/4/8/16/32 bytes (using the lowest bits of the address as the condition), 
which will cause the alignment to increase accordingly - see the snippet below.
</random idea>
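
Something like this for step 1 (again untested, reusing the hypothetical 
sum_words() helper from the sketch above, and still assuming a 2-byte aligned 
start; the 1-byte case would additionally need the RFC 1071 byte swap):

	/* Step 1 as a mirror of step 3: consume a 2/4/8/16/32-byte chunk
	 * whenever the corresponding low address bit is set, so the
	 * alignment doubles at each step and ends at 64 bytes
	 * (provided enough data is left). */
	size_t chunk;

	for (chunk = 2; chunk <= 32 && len >= chunk; chunk *= 2) {
		if ((uintptr_t)p & chunk) {
			sum = sum_words((const uint16_t *)p, chunk / 2, sum);
			p += chunk;
			len -= chunk;
		}
	}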

<feature creep>
Checking the alignment at runtime has a non-zero cost, so an alternative 
(simpler) code path might be beneficial for small lengths (when the alignment 
is unknown at runtime) - see the snippet below.
</feature creep>
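
E.g. something like this at the top of the function (sketch only; the 128-byte 
threshold is a pure guess and would need benchmarking, and in the real function 
it would of course start from whatever initial sum the caller passes in):

	/* Early exit for short buffers: the plain word-by-word loop,
	 * with none of the alignment bookkeeping of the long path.
	 * The 128-byte threshold is a placeholder to be tuned. */
	if (len < 128) {
		const unaligned_uint16_t *u16 = buf;
		uint32_t sum = 0;

		while (len >= 2) {
			sum += *u16++;
			len -= 2;
		}
		if (len == 1) {	/* trailing odd byte, endian-neutral */
			uint16_t last = 0;

			*(unsigned char *)&last = *(const unsigned char *)u16;
			sum += last;
		}
		return sum;
	}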

> 
> Uses uint64_t accumulator to reduce carry propagation overhead

You return (uint32_t)sum64 at the end, so why replace the existing 32-bit "sum" 
with a 64-bit "sum64" accumulator?

> and
> leverages unaligned_uint16_t for safe unaligned access on all
> platforms.
> 
> Performance results from cksum_perf_autotest (TSC cycles/byte):
>   Block size    Before    After    Improvement
>          100  0.40-0.64  0.13-0.14    ~3-4x
>         1500  0.49-0.51  0.10-0.11    ~4-5x
>         9000  0.48-0.51  0.11-0.12    ~4x
> 
> Signed-off-by: Scott Mitchell <[email protected]>
