On 12/13/16 04:32 PM, Ilya Lesokhin wrote:
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -903,9 +903,11 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
>       *((__be32 *)(iv+12)) = counter;
>  
>       if (sg_is_last(req->src) &&
> -         req->src->offset + req->src->length <= PAGE_SIZE &&
> +         (!PageHighMem(sg_page(req->src)) ||
> +         req->src->offset + req->src->length <= PAGE_SIZE) &&
>           sg_is_last(req->dst) &&
> -         req->dst->offset + req->dst->length <= PAGE_SIZE) {
> +         (!PageHighMem(sg_page(req->dst)) ||
> +         req->dst->offset + req->dst->length <= PAGE_SIZE)) {
>               one_entry_in_sg = 1;
>               scatterwalk_start(&src_sg_walk, req->src);
>               assoc = scatterwalk_map(&src_sg_walk);

I was also experimenting with a similar patch that loosens the
restrictions here by checking for highmem.  Note that you can go even
further and check the AAD, data, and TAG separately, since the current
aesni crypto routines take them as separate buffers.  (This might fix
the RFC5288 patch's AAD size issue?)
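To illustrate (my own sketch, not code from the patch): the
single-page restriction only matters for highmem pages, because lowmem
stays virtually contiguous through the kernel direct map even across a
page boundary.  The patch's condition could be folded into one
predicate and applied to each buffer's sg entry individually; the
helper name below is hypothetical:

#include <linux/highmem.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical helper: true if this single sg entry can be mapped as
 * one linear buffer.  Lowmem pages remain contiguous in the kernel
 * direct map, so only highmem mappings (a kmap of a single page) need
 * offset + length to stay within PAGE_SIZE.
 */
static bool sg_maps_linearly(struct scatterlist *sg)
{
	return sg_is_last(sg) &&
	       (!PageHighMem(sg_page(sg)) ||
		sg->offset + sg->length <= PAGE_SIZE);
}

helper_rfc4106_encrypt() could then test the AAD, data, and TAG
regions with separate calls rather than a single check each on
req->src and req->dst.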

Long term, it would be nice to improve the asm routines instead to
support scatter/gather I/O and arbitrary AAD lengths, as the newer
Intel routines do:

https://github.com/01org/isa-l_crypto/tree/master/aes