There is no guarantee the record starts before the skb frags.
If we don't check for this condition, the copy amount will go
negative, leading to reads and writes at random memory locations.
Familiar hilarity ensues.

Fixes: 4799ac81e52a ("tls: Add rx inline crypto offload")
Signed-off-by: Jakub Kicinski <jakub.kicin...@netronome.com>
Reviewed-by: John Hurley <john.hur...@netronome.com>
---
 net/tls/tls_device.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index cc0256939eb6..96357060addc 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -628,14 +628,16 @@ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
        else
                err = 0;
 
-       copy = min_t(int, skb_pagelen(skb) - offset,
-                    rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+       if (skb_pagelen(skb) > offset) {
+               copy = min_t(int, skb_pagelen(skb) - offset,
+                            rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
 
-       if (skb->decrypted)
-               skb_store_bits(skb, offset, buf, copy);
+               if (skb->decrypted)
+                       skb_store_bits(skb, offset, buf, copy);
 
-       offset += copy;
-       buf += copy;
+               offset += copy;
+               buf += copy;
+       }
 
        skb_walk_frags(skb, skb_iter) {
                copy = min_t(int, skb_iter->len,
-- 
2.21.0
