On Fri, 2026-03-13 at 14:16 +0100, Sabrina Dubroca wrote:
> 2026-03-09, 15:48:36 +1000, Wilfred Mallawa wrote:
> > From: Wilfred Mallawa <[email protected]>
> > 
> > Currently, for TLS 1.3, ktls does not support record zero padding [1].
> > Record zero padding is used to allow the sender to hide the size of the
> > traffic patterns from an observer. TLS is susceptible to a variety of
> > traffic analysis attacks based on observing the length and timing of
> > encrypted packets [2]. Upcoming Western Digital NVMe-TCP hardware
> > controllers implement TLS 1.3, which, from a security perspective, can
> > benefit from having record zero padding enabled to mitigate traffic
> > analysis attacks [2].
> > 
> > Thus, for TX, add support for appending a randomized number of zero
> > padding bytes to end-of-record (EOR) records that are not full. The
> > number of zero
> 
> I don't think this is the right behavior. I expect that a user that
> enables zero-padding would want _every_ record they send to be
> padded,
> and their payload is going to be split into however many records that
> requires. This could mean that data that would just fit in a record
> will get split into one full + one very small record.
> 
> As it is, if I repeatedly call send with MSG_MORE to let ktls chunk
> this for me, zero-padding has no effect. That doesn't seem right.
> 
> Does that make sense?
> 

Hmm, it does... but I am also not sure that chunking records solely to
introduce zero padding is a good idea. Is the added overhead worth it?
For the NVMe TCP/TLS use case, for example, I think this would slow
things down noticeably. The current approach is meant to strike a
balance between some of the security benefits and performance.

But as you mentioned, we could introduce a fixed-size option, such that
all outgoing records are padded to the maximum record size. That should
address the security concern above(?). At the cost of performance, it
provides a stronger padding policy and would keep the logic quite
simple.

For context, when testing NVMe TCP TLS we saw a ~50% reduction in
performance (4K write IOPS) when padding all outgoing records to the
maximum record size limit.

> > padding bytes to append is determined by the remaining record room and
> > the user-specified upper bound (minimum of the two). That is,
> > rand([0, min(record_room, upper_bound)]).
> > 
> > For TLS 1.3, zero padding is added after the content type byte. As
> > such, if the record in context meets the above conditions for zero
> > padding, a zero padding buffer is attached to the content type byte
> > before the record is encrypted. The padding buffer is freed when the
> > record is freed.
> > 
> > By default, record zero padding is disabled, and userspace may enable
> > it by using the TLS_TX_RANDOM_PAD setsockopt option.
> > 
> > [1] https://datatracker.ietf.org/doc/html/rfc8446#section-5.4l
> 
> nit: there's a stray 'l' at the end of that link (and other
> references
> to that section in your commit messages within the series)
> 

oops! missed that... thanks!

> 
> > @@ -1033,6 +1055,8 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
> >     unsigned char record_type = TLS_RECORD_TYPE_DATA;
> >     bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
> >     bool eor = !(msg->msg_flags & MSG_MORE);
> > +   bool tls_13 = (prot->version == TLS_1_3_VERSION);
> > +   bool rec_zero_pad = eor && tls_13 && tls_ctx->tx_record_zero_pad;
> 
> Thus here, rec_zero_pad would simply be tls_ctx->tx_record_zero_pad
> (the tls_13 check should be redundant I think?).

Ah yes, the TLS version is already checked in setsockopt()!
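
For reference, this is roughly how I'd expect userspace to enable it.
TLS_TX_RANDOM_PAD comes from this series and is not in released uapi
headers yet, so the define here is a placeholder assumption, as is the
u32 upper-bound argument:

```c
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282 /* from linux/tls.h */
#endif

#ifndef TLS_TX_RANDOM_PAD
#define TLS_TX_RANDOM_PAD 10 /* placeholder value, assumed */
#endif

/* Enable randomized record zero padding on a kTLS socket, with the
 * given per-record upper bound on padding bytes. Only valid once the
 * socket is already a TLS 1.3 kTLS socket: the setsockopt() handler is
 * where the TLS 1.3 check lives, which is why the per-record tls_13
 * test above is redundant. */
static int enable_random_pad(int fd, unsigned int upper_bound)
{
	return setsockopt(fd, SOL_TLS, TLS_TX_RANDOM_PAD,
			  &upper_bound, sizeof(upper_bound));
}
```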


Wilfred
