From: Eric Biggers
Add a function to crypto_simd that registers an array of skcipher
algorithms, then allocates and registers the simd wrapper algorithms for
them. It assumes the naming scheme where the names of the underlying
algorithms are prefixed with two underscores.
Also add the
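A minimal sketch of the intended usage, assuming the helper signature this patch introduces (the Serpent algorithm array is illustrative only):

/*
 * Sketch: assumes
 *   int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
 *                                      struct simd_skcipher_alg **simd_algs);
 * The "__" prefix marks the internal, SIMD-only algorithms.
 */
static struct skcipher_alg serpent_algs[] = {
        {
                .base.cra_name          = "__ecb(serpent)",
                .base.cra_driver_name   = "__ecb-serpent-avx2",
                .base.cra_flags         = CRYPTO_ALG_INTERNAL,
                /* .setkey/.encrypt/.decrypt elided */
        },
};

static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)];

static int __init serpent_avx2_init(void)
{
        /* Registers the "__"-prefixed internal algorithms, then creates
         * and registers a SIMD wrapper for each one. */
        return simd_register_skciphers_compat(serpent_algs,
                                              ARRAY_SIZE(serpent_algs),
                                              serpent_simd_algs);
}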
From: Eric Biggers
Convert the AVX and AVX2 implementations of Serpent from the
(deprecated) ablkcipher and blkcipher interfaces over to the skcipher
interface. Note that this includes replacing the use of ablk_helper
with crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-avx2 algorithm which did this. Users who
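The same reasoning recurs in the lrw-* and xts-* removals throughout this series; as a hedged illustration, users of a removed algorithm now obtain the composed mode through the generic template:

/* Sketch: with lrw-serpent-avx2 gone, asking for LRW instantiates the
 * generic template over the accelerated ECB algorithm automatically,
 * e.g. lrw(ecb-serpent-avx2). */
struct crypto_skcipher *tfm = crypto_alloc_skcipher("lrw(serpent)", 0, 0);

if (IS_ERR(tfm))
        return PTR_ERR(tfm);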
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-twofish-3way algorithm which did this. Users who
From: Eric Biggers
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-twofish-3way algorithm which did this. Users who
From: Eric Biggers
Convert the SSE2 implementation of Serpent from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
Convert the 3-way implementation of Twofish from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/twofish_glue_3way.c | 151
crypto/Kconfig | 2
From: Eric Biggers
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-serpent-sse2 algorithm which did this. Users who
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-sse2 algorithm which did this. Users who
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-serpent-avx algorithm which did this. Users who
From: Eric Biggers
Add ECB, CBC, and CTR functions to glue_helper which use skcipher_walk
rather than blkcipher_walk. This will allow converting the remaining
x86 algorithms from the blkcipher interface over to the skcipher
interface, after which we'll be able to remove the blkcipher
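The general shape of such a skcipher_walk-based helper is roughly the following (a sketch, with the arch-specific cipher call and FPU management elided):

static int ecb_walk_crypt(struct skcipher_request *req)
{
        const unsigned int bsize = 16;  /* placeholder block size */
        struct skcipher_walk walk;
        unsigned int nbytes;
        int err;

        err = skcipher_walk_virt(&walk, req, false);
        while ((nbytes = walk.nbytes) != 0) {
                unsigned int done = nbytes - (nbytes % bsize);

                /* ... arch-specific ECB routine on 'done' bytes at
                 * walk.src.virt.addr / walk.dst.virt.addr ... */
                err = skcipher_walk_done(&walk, nbytes - done);
        }
        return err;
}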
From: Eric Biggers
With ecb-cast5-avx, if a 128+ byte scatterlist element followed a
shorter one, then the algorithm accidentally encrypted/decrypted only 8
bytes instead of the expected 128 bytes. Fix it by setting the
encryption/decryption 'fn' correctly.
Fixes: c12ab20b162c ("
From: Eric Biggers
Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from
the (deprecated) ablkcipher and blkcipher interfaces over to the
skcipher interface. Note that this includes replacing the use of
ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86
From: Eric Biggers
Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface. Note that this
includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/cast6_avx_glue.c
From: Eric Biggers
All users of ablk_helper have been converted over to crypto_simd, so
remove ablk_helper.
Signed-off-by: Eric Biggers
---
crypto/Kconfig | 4 --
crypto/Makefile | 1 -
crypto/ablk_helper.c | 150
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-cast6-avx algorithm which did this. Users who
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-asm algorithm which did this. Users who
From: Eric Biggers
The XTS template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic XTS code themselves via xts_crypt().
Remove the xts-camellia-asm algorithm which did this. Users who
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-aesni algorithm which did this. Users
From: Eric Biggers
Now that all users of xts_crypt() have been removed in favor of the XTS
template wrapping an ECB mode algorithm, remove xts_crypt().
Signed-off-by: Eric Biggers
---
crypto/xts.c | 72
include/crypto/xts.h | 17
From: Eric Biggers
Now that all users of lrw_crypt() have been removed in favor of the LRW
template wrapping an ECB mode algorithm, remove lrw_crypt(). Also
remove crypto/lrw.h as that is no longer needed either; and fold
'struct lrw_table_ctx' into 'struct priv', lrw_init
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-twofish-avx algorithm which did this. Users who
From: Eric Biggers
Convert the x86 asm implementation of Camellia from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/camellia_glue.c | 162
crypto/Kconfig | 2
From: Eric Biggers
Now that all glue_helper users have been switched from the blkcipher
interface over to the skcipher interface, remove the versions of the
glue_helper functions that handled the blkcipher interface.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/glue_helper.c
From: Eric Biggers
There are no users of the original glue_fpu_begin() anymore, so rename
glue_skwalk_fpu_begin() to glue_fpu_begin() so that it matches
glue_fpu_end() again.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/cast5_avx_glue.c | 4 ++--
arch/x86/crypto/glue_helper.c
From: Eric Biggers
The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly. Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().
Remove the lrw-camellia-aesni-avx2 algorithm which did this
From: Eric Biggers
Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher
and blkcipher interfaces over to the skcipher interface. Note that this
includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/cast5_avx_glue.c
From: Eric Biggers
Convert the AVX implementation of Twofish from the (deprecated)
ablkcipher and blkcipher interfaces over to the skcipher interface.
Note that this includes replacing the use of ablk_helper with
crypto_simd.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/twofish_avx_glue.c
From: Eric Biggers
Convert the x86 asm implementation of Blowfish from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/blowfish_glue.c | 230
crypto/Kconfig | 2
From: Eric Biggers
Convert the x86 asm implementation of Triple DES from the (deprecated)
blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/des3_ede_glue.c | 238
crypto/Kconfig | 2
Hi David,
On Thu, Feb 08, 2018 at 03:07:30PM +0000, David Howells wrote:
> Eric Biggers wrote:
>
> > The X.509 parser mishandles the case where the certificate's signature's
> > hash algorithm is not available in the crypto API. In this case,
> > x509_get_sig_
From: Eric Biggers
commit 9fa68f620041be04720d0cbfb1bd3ddfc6310b24 upstream.
[Please apply to 4.9-stable.]
Currently, almost none of the keyed hash algorithms check whether a key
has been set before proceeding. Some algorithms are okay with this and
will effectively just use a key of all 0
From: Eric Biggers
commit a208fa8f33031b9e0aba44c7d1b7e68eb0cbd29e upstream.
[Please apply to 4.9-stable.]
We need to consistently enforce that keyed hashes cannot be used without
setting the key. To do this we need a reliable way to determine whether
a given hash algorithm is keyed or not
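The enforcement mechanism these commits introduce reduces to a flag check before each hash operation; a hedged sketch:

/* Sketch: a keyed hash tfm now starts with CRYPTO_TFM_NEED_KEY set,
 * and the flag is only cleared by a successful ->setkey(). */
if (crypto_shash_get_flags(desc->tfm) & CRYPTO_TFM_NEED_KEY)
        return -ENOKEY;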
aren't recommended for most users. It's also because CRYPTO_SPECK just
refers to the generic implementation, which won't be fast enough for
many users; in practice, they'll need to enable a vectorized
implementation such as CRYPTO_SPECK_NEON to get acceptable performance.
Sig
> };
>
> -struct skcipher_alg des3_ede_skciphers[] = {
> +static struct skcipher_alg des3_ede_skciphers[] = {
> {
> .base.cra_name = "ecb(des3_ede)",
> .base.cra_driver_name = "ecb-des3_ede-asm",
Acked-by: Eric Biggers
Thanks!
Speck128/256-XTS (NEON) 292.5 MB/s 286.1 MB/s
Speck128/256-XTS (generic) 186.3 MB/s 181.8 MB/s
AES-128-XTS (NEON bit-sliced) 142.0 MB/s 124.3 MB/s
AES-256-XTS (NEON bit-sliced) 104.7 MB/s 91.1 MB/s
Signed-off-by: Eric Biggers
---
arch/ar
Hi Benjamin,
On Tue, Mar 06, 2018 at 09:23:08PM +0100, Benjamin Warnke wrote:
> Currently ZRAM uses compression-algorithms from the crypto-api. ZRAM
> compresses each page individually. As a result the compression algorithm is
> forced to use a very small sliding window. None of the available comp
From: Eric Biggers
If the pcrypt template is used multiple times in an algorithm, then a
deadlock occurs because all pcrypt instances share the same
padata_instance, which completes requests in the order submitted. That
is, the inner pcrypt request waits for the outer pcrypt request while
the
Hi Benjamin,
On Wed, Mar 07, 2018 at 12:50:08PM +0100, Benjamin Warnke wrote:
> Hi Eric,
>
>
> On 06.03.2018 at 23:13, Eric Biggers wrote:
> >
> > Hi Benjamin,
> >
> > On Tue, Mar 06, 2018 at 09:23:08PM +0100, Benjamin Warnke wrote:
> >> Curren
On Wed, Mar 14, 2018 at 02:17:30PM +0100, Salvatore Mesoraca wrote:
> All ciphers implemented in Linux have a block size less than or
> equal to 16 bytes and the most demanding hw require 16 bits
> alignment for the block buffer.
> We avoid 2 VLAs[1] by always allocating 16 bytes with 16 bits
> ali
[+Cc linux-crypto]
Hi Yael,
On Sun, Mar 25, 2018 at 07:41:30PM +0100, Yael Chemla wrote:
> Allow parallel processing of bio blocks by moving to async. completion
> handling. This allows for better resource utilization of both HW and
> software based hash tfm and therefore better performance in
mounts in some cases.
Fix it by removing the incorrect call to crypto_ahash_init().
Reported-by: Michael Young
Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
Fixes: fffdaef2eb4a ("gss_krb5: Add support for rc4-hmac encryption")
Cc: sta...@vge
[+Cc linux-crypto]
On Sun, Dec 10, 2017 at 05:33:01AM -0800, syzbot wrote:
> Hello,
>
> syzkaller hit the following crash on
> 82bcf1def3b5f1251177ad47c44f7e17af039b4b
> git://git.cmpxchg.org/linux-mmots.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console output is
On Fri, Mar 23, 2018 at 08:21:52AM +0800, Herbert Xu wrote:
> On Sat, Mar 10, 2018 at 03:22:31PM -0800, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > If the pcrypt template is used multiple times in an algorithm, then a
> > deadlock occurs because all pcry
On Mon, Apr 09, 2018 at 10:58:08AM +0200, Steffen Klassert wrote:
> On Sun, Apr 08, 2018 at 03:55:28PM -0700, Eric Biggers wrote:
> > On Fri, Mar 23, 2018 at 08:21:52AM +0800, Herbert Xu wrote:
> > > On Sat, Mar 10, 2018 at 03:22:31PM -0800, Eric Biggers wrote:
> >
On Wed, Apr 11, 2018 at 04:31:01PM +0200, Stephan Müller wrote:
> Sorry, this time with the proper subject line.
>
> ---8<---
>
> During freeing of the internal buffers used by the DRBG, set the pointer
> to NULL. It is possible that the context with the freed buffers is
> reused. In case of an e
On Mon, Apr 16, 2018 at 07:34:29PM +0000, Yann Collet wrote:
> Hi Singh
>
> I don't have any strong opinion on this topic.
>
> You made your case clear:
> your variant trades a little bit of speed for a little bit more compression
> ratio.
> In the context of zram, it makes sense, and I would e
From: Eric Biggers
Commit eb02c38f0197 ("crypto: api - Keep failed instances alive") is
making allocating crypto transforms sometimes fail with ELIBBAD, when
multiple processes try to access encrypted files with fscrypt for the
first time since boot. The problem is that the "requ
Hi Jason,
On Tue, Apr 24, 2018 at 06:11:26PM +0200, Jason A. Donenfeld wrote:
> Can we please not Speck?
>
> It was just rejected by the ISO/IEC.
>
> https://twitter.com/TomerAshur/status/988659711091228673
So, what do you propose replacing it with?
As I explained in the patch, the purpose of
Hi Jason,
On Tue, Apr 24, 2018 at 06:18:26PM +0200, Jason A. Donenfeld wrote:
> This NSA-designed cipher was rejected for inclusion in international
> standards by ISO/IEC. Before anyone actually starts using it by
> accident, let's just not ship it at all.
>
> Signed-off-by: Jason A. Donenfeld
Hi Jason,
On Tue, Apr 24, 2018 at 10:58:35PM +0200, Jason A. Donenfeld wrote:
> Hi Eric,
>
> On Tue, Apr 24, 2018 at 8:16 PM, Eric Biggers wrote:
> > So, what do you propose replacing it with?
>
> Something more cryptographically justifiable.
>
It's easy to
Hi Samuel,
On Wed, Apr 25, 2018 at 03:33:16PM +0100, Samuel Neves wrote:
> Let's put the provenance of Speck aside for a moment, and suppose that
> it is an ideal block cipher. There are still some issues with this
> patch as it stands.
>
> - The rationale seems off. Consider this bit from the c
Hi Samuel,
On Thu, Apr 26, 2018 at 03:05:44AM +0100, Samuel Neves wrote:
> On Wed, Apr 25, 2018 at 8:49 PM, Eric Biggers wrote:
> > I agree that my explanation should have been better, and should have
> > considered
> > more crypto algorithms. The main difficulty is
is a differential cryptanalysis
attack on 25 of 34 rounds with 2^253 time complexity and 2^125 chosen
plaintexts, i.e. only marginally faster than brute force. There is no
known attack on the full 34 rounds.
Signed-off-by: Eric Biggers
---
Changed since v1:
- Improved commit messag
i"
Note: algorithms can be dynamically added to the crypto API, which can
result in different implementations being used at different times. But
this is rare; for most users, showing the first will be good enough.
Signed-off-by: Eric Biggers
---
Note: this patch is on top of the other fscr
i"
Note: algorithms can be dynamically added to the crypto API, which can
result in different implementations being used at different times. But
this is rare; for most users, showing the first will be good enough.
Signed-off-by: Eric Biggers
---
Changed since v1:
- Added missin
Hi Ondrej,
On Fri, May 11, 2018 at 02:12:51PM +0200, Ondrej Mosnáček wrote:
> From: Ondrej Mosnacek
>
> This patch adds optimized implementations of AEGIS-128, AEGIS-128L,
> and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions.
>
> Signed-off-by: Ondrej Mosnacek
[...]
> +static int cryp
hash algorithm to have both unkeyed and keyed tests,
without relying on having it work by accident.
The new test vectors pass with the generic and x86 CRC implementations.
I haven't tested others yet; if any happen to be broken, they'll need to
be fixed.
Eric Biggers (6):
crypto: crc
From: Eric Biggers
The Blackfin CRC driver was removed by commit 9678a8dc53c1 ("crypto:
bfin_crc - remove blackfin CRC driver"), but it was forgotten to remove
the corresponding "hmac(crc32)" test vectors. I see no point in keeping
them since nothing else appears to implemen
From: Eric Biggers
Since testmgr uses a single tfm for all tests of each hash algorithm,
once a key is set the tfm won't be unkeyed anymore. But with crc32 and
crc32c, the key is really the "default initial state" and is optional;
those algorithms should have both keyed and unkey
From: Eric Biggers
crc32c-generic sets an alignmask, but actually its ->update() works with
any alignment; only its ->setkey() and outputting the final digest
assume an alignment. To prevent the buffer from having to be aligned by
the crypto API for just these cases, switch these cases o
From: Eric Biggers
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one
(which uses an empty input) to crc32 too, and also add a new one to both
that uses a nonempty input. These test vectors verify that crc32 and
crc32c implementations use the correct default initial state
From: Eric Biggers
crc32-generic doesn't have a cra_alignmask set, which is desired as its
->update() works with any alignment. However, it incorrectly assumes
4-byte alignment in ->setkey() and when outputting the final digest.
Fix this by using the unaligned access macros in
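A sketch of the kind of change involved, assuming the kernel's standard unaligned-access helpers:

#include <asm/unaligned.h>

/* Sketch: read the optional 4-byte "key" (the default initial CRC state)
 * without assuming the buffer is 4-byte aligned. */
static int crc32_setkey(struct crypto_shash *hash, const u8 *key,
                        unsigned int keylen)
{
        u32 *mctx = crypto_shash_ctx(hash);

        if (keylen != sizeof(u32))
                return -EINVAL;
        *mctx = get_unaligned_le32(key);
        return 0;
}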
From: Eric Biggers
The __crc32_le() wrapper function is pointless. Just call crc32_le()
directly instead.
Signed-off-by: Eric Biggers
---
crypto/crc32_generic.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/crypto/crc32_generic.c b/crypto/crc32_generic.c
From: Eric Biggers
One "cbc(des)" decryption test vector doesn't exactly match an
encryption test vector with input and result swapped. It's *almost* the
same as one, but the decryption version is "chunked" while the
encryption version is "unchunked". In
only includes my manual changes on top of the scripted changes.
Eric Biggers (5):
crypto: testmgr - add extra ecb(des) encryption test vectors
crypto: testmgr - make a cbc(des) encryption test vector chunked
crypto: testmgr - add extra ecb(tnepres) encryption test vectors
crypto: testm
From: Eric Biggers
None of the four "ecb(tnepres)" decryption test vectors exactly match an
encryption test vector with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test coverage.
From: Eric Biggers
Two "ecb(des)" decryption test vectors don't exactly match any of the
encryption test vectors with input and result swapped. In preparation
for removing the decryption test vectors, add these to the encryption
test vectors, so we don't lose any test cove
From: Eric Biggers
One "kw(aes)" decryption test vector doesn't exactly match an encryption
test vector with input and result swapped. In preparation for removing
the decryption test vectors, add this test vector to the encryption test
vectors, so we don't lose any test cove
Hi Yu,
On Thu, May 24, 2018 at 10:26:12AM +0800, Yu Chen wrote:
> Hi Stephan,
> thanks for your reply,
> On Wed, May 23, 2018 at 1:43 AM Stephan Mueller wrote:
>
> > On Tuesday, May 22, 2018, 05:00:40 CEST, Yu Chen wrote:
>
> > Hi Yu,
>
> > > Hi all,
> > > The request is that, we'd like to g
On Thu, May 24, 2018 at 09:36:15AM -0500, Denis Kenzior wrote:
> Hi Stephan,
>
> On 05/24/2018 12:57 AM, Stephan Mueller wrote:
> > On Thursday, May 24, 2018, 04:45:00 CEST, Eric Biggers wrote:
> >
> > Hi Eric,
> >
> > >
> > > "Not hav
Hi Denis,
On Thu, May 24, 2018 at 07:56:50PM -0500, Denis Kenzior wrote:
> Hi Ted,
>
> > > I'm not really here to criticize or judge the past. AF_ALG exists now. It
> > > is being used. Can we just make it better? Or are we going to whinge at
> > > every user that tries to use (and improve) ke
Hi Denis,
On Fri, May 25, 2018 at 09:48:36AM -0500, Denis Kenzior wrote:
> Hi Eric,
>
> > The solution to the "too many system calls" problem is trivial: just do SHA-512
> > in userspace. It's just math; you don't need a system call, any more than you
> > would call sys_add(1, 1) to co
salsa20-asm
implementations, which as far as I can tell are basically useless these
days; the x86_64 asm version in particular isn't actually any faster
than the C version anymore. (And possibly no one even uses these
anyway.) See the patch for the full explanation.
Eric Biggers (2):
cryp
From: Eric Biggers
The x86 assembly implementations of Salsa20 use the frame base pointer
register (%ebp or %rbp), which breaks frame pointer convention and
breaks stack traces when unwinding from an interrupt in the crypto code.
Recent (v4.10+) kernels will warn about this, e.g.
WARNING
From: Eric Biggers
This reverts commit eb772f37ae8163a89e28a435f6a18742ae06653b, as now the
x86 Salsa20 implementation has been removed and the generic helpers are
no longer needed outside of salsa20_generic.c.
We could keep this just in case someone else wants to add a new
optimized Salsa20
On Sat, May 12, 2018 at 10:43:08AM +0200, Dmitry Vyukov wrote:
> On Fri, Feb 2, 2018 at 11:18 PM, Eric Biggers wrote:
> > On Fri, Feb 02, 2018 at 02:57:32PM +0100, Dmitry Vyukov wrote:
> >> On Fri, Feb 2, 2018 at 2:48 PM, syzbot
> >> wrote:
> >> > Hello,
expect to be able to use the same aead_request for
another encryption/decryption without reinitializing everything. The
last patch removes the test workaround now that this bug is fixed.
Eric Biggers (9):
crypto: simd - support wrapping AEAD algorithms
crypto: x86/aesni - convert to use skcipher
From: Eric Biggers
Now that all AEAD algorithms (that I have the hardware to test, at
least) have been fixed to not modify the user-provided aead_request,
remove the workaround from testmgr that reset aead_request::tfm after
each AEAD encryption/decryption.
Signed-off-by: Eric Biggers
From: Eric Biggers
Update the crypto_simd module to support wrapping AEAD algorithms.
Previously it only supported skciphers. The code for each is similar.
I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS
to use this. Currently they each independently implement the
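A hedged sketch of the AEAD registration this enables, assuming the helper mirrors the existing skcipher one:

/* Sketch: assumes an AEAD counterpart of the skcipher helper, e.g.
 *   int simd_register_aeads_compat(struct aead_alg *algs, int count,
 *                                  struct simd_aead_alg **simd_algs);
 * where gcm_algs is a hypothetical array of internal "__gcm(aes)" algs. */
static struct simd_aead_alg *gcm_simd_algs[ARRAY_SIZE(gcm_algs)];

err = simd_register_aeads_compat(gcm_algs, ARRAY_SIZE(gcm_algs),
                                 gcm_simd_algs);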
From: Eric Biggers
Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
Convert the AES-NI glue code to use simd_register_skciphers_compat() to
create SIMD wrappers for all the internal skcipher algorithms at once,
rather than wrapping each one individually. This simplifies the code.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/aesni
From: Eric Biggers
Convert the x86 implementation of MORUS-640 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
Convert the AES-NI implementations of "gcm(aes)" and "rfc4106(gcm(aes))"
to use the AEAD SIMD helpers, rather than hand-rolling the same
functionality. This simplifies the code and also fixes the bug where
the user-provided aead_request is modified.
From: Eric Biggers
Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD
helpers, rather than hand-rolling the same functionality. This
simplifies the code and also fixes the bug where the user-provided
aead_request is modified.
Signed-off-by: Eric Biggers
---
arch/x86/crypto
From: Eric Biggers
The arm64 gcm-aes-ce algorithm is failing the extra crypto self-tests
following my patches to test the !may_use_simd() code paths, which
previously were untested. The problem is that in the !may_use_simd()
case, an odd number of AES blocks can be processed within each step of
From: Eric Biggers
All crypto API algorithms are supposed to support the case where they
are called in a context where SIMD instructions are unusable, e.g. IRQ
context on some architectures. However, this isn't tested for by the
self-tests, causing bugs to go undetected.
Now tha
This patch series is based on top of my other pending patch series
"crypto: add SIMD helpers for AEADs". It can also be found in git at:
URL: https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git
Branch: crypto-nosimd-tests
Eric Biggers (8):
crypto: chacha-ge
From: Eric Biggers
Replace all calls to may_use_simd() in the arm64 crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers
---
arch/arm64/crypto/aes-ce-ccm-glue.c | 7 ---
arch/arm64/crypto/aes-ce-glue.c | 5
From: Eric Biggers
The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested. The problem is as follows:
When !may_use_simd(), the arm64 NEON implementations fall back to
From: Eric Biggers
Replace all calls to irq_fpu_usable() in the x86 crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers
---
arch/x86/crypto/aesni-intel_glue.c | 8
arch/x86/crypto/chacha_glue.c
From: Eric Biggers
Replace all calls to may_use_simd() in the shared SIMD helpers with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers
---
crypto/simd.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/crypto
From: Eric Biggers
So that the no-SIMD fallback code can be tested by the crypto
self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(),
but also returns false if the crypto self-tests have set a per-CPU bool
to disable SIMD in crypto code on the current CPU.
Signed-off-by
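Such a macro could look roughly like this (the per-CPU variable name is an assumption):

/* Sketch: wrap may_use_simd() with a per-CPU override that only the
 * crypto self-tests set; the variable name here is an assumption. */
DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);

#define crypto_simd_usable() \
        (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))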
From: Eric Biggers
Replace all calls to may_use_simd() in the arm crypto code with
crypto_simd_usable(), in order to allow testing the no-SIMD code paths.
Signed-off-by: Eric Biggers
---
arch/arm/crypto/chacha-neon-glue.c | 5 +++--
arch/arm/crypto/crc32-ce-glue.c| 5 +++--
arch
On Wed, Feb 20, 2019 at 09:30:20PM -0800, Eric Biggers wrote:
> Hello,
>
> This series adds helper functions for testing AF_ALG (the userspace
> interface to algorithms in the Linux kernel's crypto API) to the
> Linux Test Project. It then adds a few sample regression tests.
Hi Zhang,
On Mon, Jan 28, 2019 at 11:14:32AM +0800, Tao Huang wrote:
> Hi Eric and Heiko:
>
> >> On Sat, 26 Jan 2019 at 22:05, Eric Biggers wrote:
> >>>
> >>> Hello,
> >>>
> >>> I don't know whether anyone is actually m
From: Eric Biggers
In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
Signed-off-by: Eric Biggers
---
crypto/chacha_generic.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/crypto
From: Eric Biggers
In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor().
This avoids having to memcpy() the src buffer to the dst buffer.
Signed-off-by: Eric Biggers
---
crypto/salsa20_generic.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/crypto
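Both this and the preceding chacha patch make the same two-lines-to-one substitution; roughly:

/* Before: copy the source, then XOR the keystream in place. */
memcpy(dst, src, bytes);
crypto_xor(dst, stream, bytes);

/* After: one pass, no intermediate copy. */
crypto_xor_cpy(dst, src, stream, bytes);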