In an effort to separate intentional arithmetic wrap-around from unexpected wrap-around, we need to refactor places that depend on this kind of math. One of the most common code patterns of this is:
	VAR + value < VAR

Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to the position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, regardless of whether they are signed[2], unsigned[3],
or pointer[4] types.

Refactor open-coded wrap-around addition test to use add_would_overflow().
This paves the way to enabling the wrap-around sanitizers in the future.

Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Herbert Xu <herb...@gondor.apana.org.au>
Cc: "David S. Miller" <da...@davemloft.net>
Cc: Aditya Srivastava <yashsri...@gmail.com>
Cc: Randy Dunlap <rdun...@infradead.org>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Kees Cook <keesc...@chromium.org>
---
 crypto/adiantum.c                   | 2 +-
 drivers/crypto/amcc/crypto4xx_alg.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 60f3883b736a..c2f62ca455af 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -190,7 +190,7 @@ static inline void le128_add(le128 *r, const le128 *v1, const le128 *v2)
 
 	r->b = cpu_to_le64(x + y);
 	r->a = cpu_to_le64(le64_to_cpu(v1->a) + le64_to_cpu(v2->a) +
-			   (x + y < x));
+			   (add_would_overflow(x, y)));
 }
 
 /* Subtraction in Z/(2^{128}Z) */
diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
index e0af611a95d8..33f73234ddd9 100644
--- a/drivers/crypto/amcc/crypto4xx_alg.c
+++ b/drivers/crypto/amcc/crypto4xx_alg.c
@@ -251,7 +251,7 @@ crypto4xx_ctr_crypt(struct skcipher_request *req, bool encrypt)
 	 * the whole IV is a counter. So fallback if the counter is going to
 	 * overlow.
 	 */
-	if (counter + nblks < counter) {
+	if (add_would_overflow(counter, nblks)) {
 		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher.cipher);
 		int ret;
 
-- 
2.34.1
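
For reference (not part of the patch): add_would_overflow()'s definition is
not included in this diff. Assuming from the usage above that it returns
true when the unsigned addition a + b would wrap, a minimal standalone
sketch of the equivalence with the open-coded "VAR + value < VAR" test
could look like the following. add_would_overflow_u64() is a made-up
stand-in; __builtin_add_overflow() is the GCC/Clang primitive that the
kernel's check_add_overflow() in <linux/overflow.h> also wraps.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for add_would_overflow(): true if a + b would wrap. */
static bool add_would_overflow_u64(uint64_t a, uint64_t b)
{
	uint64_t unused;

	return __builtin_add_overflow(a, b, &unused);
}

int main(void)
{
	uint64_t a = UINT64_MAX - 1;

	for (uint64_t b = 1; b <= 3; b++) {
		bool open_coded = a + b < a;                /* pattern being replaced */
		bool helper = add_would_overflow_u64(a, b); /* replacement semantics */

		/* Both tests agree: false for b = 1, true once a + b wraps. */
		printf("a=%" PRIu64 " b=%" PRIu64 " open_coded=%d helper=%d\n",
		       a, b, open_coded, helper);
	}
	return 0;
}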