A previous fix ("tls: Fix write space handling") assumed that the
user space application is informed about socket send buffer
availability when tls_push_sg() gets called. However, if
do_tcp_sendpages() returns 0, tls_push_sg() returns without calling
ctx->sk_write_space. Further, the new function tls_sw_write_space()
did not invoke ctx->sk_write_space. This leads to a situation where
the user space application locks up, forever waiting for the socket
send buffer to become available.

Rather than calling ctx->sk_write_space from tls_push_sg(), call it
from tls_sw_write_space(). This way, whenever the TCP stack invokes
sk->sk_write_space after freeing socket send buffer space, we always
propagate the notification to user space by invoking
ctx->sk_write_space. The function tls_device_write_space() already
invokes ctx->sk_write_space.

Fixes: 7463d3a2db0ef ("tls: Fix write space handling")
Signed-off-by: Vakul Garg <vakul.g...@nxp.com>
---
 net/tls/tls_main.c | 1 -
 net/tls/tls_sw.c   | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index 17e8667917aa..1d16562f86ed 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -146,7 +146,6 @@ int tls_push_sg(struct sock *sk,
        }
 
        ctx->in_tcp_sendpages = false;
-       ctx->sk_write_space(sk);
 
        return 0;
 }
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 425351ac2a9b..3d9f4cc0bb9c 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2143,6 +2143,8 @@ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
                                      &tx_ctx->tx_bitmask))
                        schedule_delayed_work(&tx_ctx->tx_work.work, 0);
        }
+
+       ctx->sk_write_space(sk);
 }
 
 int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
-- 
2.13.6
