On 3/21/2018 5:08 PM, Dave Watson wrote:
On 03/19/18 07:45 PM, Saeed Mahameed wrote:
+#define TLS_OFFLOAD_CONTEXT_SIZE                                       \
+	(ALIGN(sizeof(struct tls_offload_context), sizeof(void *)) +    \
+	 TLS_DRIVER_STATE_SIZE)
+
+ pfrag = sk_page_frag(sk);
+
+ /* KTLS_TLS_HEADER_SIZE is not counted as part of the TLS record, and
I think the define is actually TLS_HEADER_SIZE, no KTLS_ prefix
Fixed. Thanks.
+ memcpy(ctx->iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE, iv, iv_size);
+
+ ctx->rec_seq_size = rec_seq_size;
+ /* worst case is:
+ * MAX_SKB_FRAGS in tls_record_info
+ * MAX_SKB_FRAGS + 1 in SKB head an frags.
spelling
Fixed. Thanks.
+int tls_sw_fallback_init(struct sock *sk,
+ struct tls_offload_context *offload_ctx,
+ struct tls_crypto_info *crypto_info)
+{
+ int rc;
+ const u8 *key;
+
+ offload_ctx->aead_send =
+ crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
in tls_sw we went with async + crypto_wait_req, any reason to not do
that here? Otherwise I think you still get the software gcm on x86
instead of aesni without additional changes.
Yes. The synchronous crypto call is needed because the software fallback
runs from validate_xmit_skb, where sleeping (and therefore waiting on an
async completion) is not possible. I know Steffen recently added support
for calling async crypto from validate_xmit_skb, but it wasn't available
when we were writing these patches.
I think we could implement async support in the future based on the
infrastructure Steffen introduced.
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index d824d548447e..e0dface33017 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -54,6 +54,9 @@ enum {
enum {
TLS_BASE_TX,
TLS_SW_TX,
+#ifdef CONFIG_TLS_DEVICE
+ TLS_HW_TX,
+#endif
TLS_NUM_CONFIG,
};
I have posted SW_RX patches; do you foresee any issues with SW_RX + HW_TX?
No, but I haven't tested these patches with the SW_RX patches.
I'll try to rebase your V2 SW_RX patches over this series tomorrow and
run some tests.
Thanks