On Mon, Apr 08, 2019 at 02:23:54PM +0800, Herbert Xu wrote:
> Eric Biggers <ebigg...@kernel.org> wrote:
> > From: Eric Biggers <ebigg...@google.com>
> > 
> > When the user-provided IV buffer is not aligned to the algorithm's
> > alignmask, skcipher_walk_virt() allocates an aligned buffer and copies
> > the IV into it.  However, skcipher_walk_virt() can fail after that
> > point, and in this case the buffer will be freed.
> > 
> > This causes a use-after-free read in callers that read from walk->iv
> > unconditionally, e.g. the LRW template.  For example, this can be
> > reproduced by trying to encrypt fewer than 16 bytes using "lrw(aes)".
> 
> This looks like a bug in LRW.  Relying on walk->iv to be set to
> anything after a failed skcipher_walk_virt call is wrong.  So we
> should fix it there instead.
> 
> Cheers,
> -- 

It's not just LRW, though.  There are actually 7 places that read
walk->iv after skcipher_walk_virt() may have failed:

        arch/arm/crypto/aes-neonbs-glue.c
        arch/arm/crypto/chacha-neon-glue.c
        arch/arm64/crypto/aes-neonbs-glue.c
        arch/arm64/crypto/chacha-neon-glue.c
        crypto/chacha-generic.c
        crypto/lrw.c
        crypto/salsa20-generic.c

Do you prefer that all those be updated?

- Eric
