Sam Russell wrote: > I think we are massively overengineering this. We shouldn't need to > future-proof tests for changes that *might* be added in the future, and > having 32x32 hashes in a table at the start of the test is not a big deal > (it took less than a second to generate and it'll take the same amount of > time if we extend to 64 bytes in the future) but it does illustrate that we > are perhaps taking a bad approach to testing here.
As you can imagine, I disagree. Tests ought to be designed to - catch wrong behaviour that you can think of, - catch wrong behaviour that you haven't think of — after all, our imagination is limited, - be maintainable. > Let's look at some the scenarios for how the code could potentially be > broken, and how we could detect this: > > === > Bug: code incorrectly generates checksum in the slice-by-8 code > === > > === > Bug: code triggers a CPU fault when reading an unaligned word > === Bug: code reads one more word from memory than it should. (Cf. https://sourceware.org/git/?p=glibc.git;a=commit;h=2bd779ae3f3a86bce22fcb7665d740b14ac677ca ) Detection: Use zerosize-ptr.h, like in test-memchr.c. Bruno