On Mon, Oct 10, 2016 at 05:34:28 +0200, Stephan Mueller wrote:
> On Sunday, 9 October 2016, 20:16:27 CEST, Sami Farin wrote:
>
> Hi Sami,
>
> > commit e192be9d9a30555aae2ca1dc3aad37cba484cd4a
> >
> > + chacha20_block(&crng->state[0], out);
> > + if (crng->state[12] == 0)
> > + crng->state[13]++;
On Sunday, 9 October 2016, 20:16:27 CEST, Sami Farin wrote:
Hi Sami,
> commit e192be9d9a30555aae2ca1dc3aad37cba484cd4a
>
> + chacha20_block(&crng->state[0], out);
> + if (crng->state[12] == 0)
> + crng->state[13]++;
>
> Did you mean
> + if (++crng->state[12] == 0)
Hi Linus:
Here is the crypto update for 4.9:
API:
* The crypto engine code now supports hashes.
Algorithms:
* Allow keys >= 2048 bits in FIPS mode for RSA.
Drivers:
* Memory overwrite fix for vmx ghash.
* Add support for building ARM sha1-neon in Thumb2 mode.
* Reenable ARM ghash-ce code by
On Mon, Oct 03, 2016 at 12:07:25PM -0300, Marcelo Cerri wrote:
> Hi Herbert,
>
> Sorry for bothering you. I noticed you included two of the patches in
> the crypto-2.6 repository and the remaining one in cryptodev-2.6. Is
> that right? I thought all 3 patches would be included in the cryptodev
> repository.
The AES-CCM implementation that uses ARMv8 Crypto Extensions instructions
refers to the AES round keys as pairs of 64-bit quantities, which causes
failures when building the code for big endian. In addition, it byte swaps
the input counter unconditionally, while this is only required for little
endian.
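
As a rough user-space illustration of the counter point (plain C, not the kernel's CCM code; swap32() is a made-up helper): an unconditional byte swap only yields the big-endian counter layout on little endian hosts, whereas a conversion that is a no-op on big endian, such as htonl()/cpu_to_be32(), is correct on both.

#include <arpa/inet.h>	/* htonl() */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: unconditionally reverse the bytes of a 32-bit value. */
static uint32_t swap32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00) |
	       ((x << 8) & 0x00ff0000) | (x << 24);
}

int main(void)
{
	uint32_t ctr = 1;

	/* Matches big-endian wire layout only on little endian hosts; on a
	 * big endian host it produces a little-endian counter instead. */
	uint32_t always_swapped = swap32(ctr);

	/* Big-endian layout on every host (a no-op where the host is
	 * already big endian). */
	uint32_t wire = htonl(ctr);

	printf("unconditional swap: 0x%08x\n", (unsigned)always_swapped);
	printf("htonl:              0x%08x\n", (unsigned)wire);
	return 0;
}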
The SHA1 digest is an array of 5 32-bit quantities, so we should refer
to them as such in order for this code to work correctly when built for
big endian. So replace the 16-byte scalar loads and stores with 4x4 vector
ones where appropriate.
Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions")
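
The same pattern recurs in several patches in this series, so here is a rough user-space analogy (plain C, not the NEON assembly the patch actually changes; the names are invented): data that is really an array of 32-bit words has to be accessed with a 32-bit element size, otherwise the per-word byte order seen by the arithmetic differs between little and big endian hosts.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Hypothetical digest: five 32-bit words, as in SHA-1. */
	uint32_t digest[5] = { 0x67452301, 0xefcdab89, 0x98badcfe,
			       0x10325476, 0xc3d2e1f0 };
	unsigned char bytes[20];
	uint32_t words[5];
	int i;

	/* Byte-wise view: the analogue of a 16-byte load that ignores the
	 * 32-bit structure of the data.  The order of the bytes within
	 * each word depends on the host's endianness. */
	memcpy(bytes, digest, sizeof(digest));

	/* Word-wise view: the element size matches the data, so every
	 * host sees the same 32-bit values. */
	for (i = 0; i < 5; i++)
		words[i] = digest[i];

	/* Little endian prints 0x01 here, big endian prints 0x67. */
	printf("first byte of byte-wise view: 0x%02x\n", bytes[0]);
	/* Both print 0x67452301. */
	printf("first word of word-wise view: 0x%08x\n", (unsigned)words[0]);
	return 0;
}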
The AES implementation using pure NEON instructions relies on the generic
AES key schedule generation routines, which store the round keys in memory
as arrays of 32-bit quantities in native endianness. This means
we should refer to these round keys using 4x4 loads rather than 16x1 loads.
The SHA256 digest is an array of 8 32-bit quantities, so we should refer
to them as such in order for this code to work correctly when built for
big endian. So replace the 16-byte scalar loads and stores with 4x32 vector
ones where appropriate.
Fixes: 6ba6c74dfc6b ("arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions")
As it turns out, none of the accelerated crypto routines under arch/arm64/crypto
currently work, or have ever worked, correctly when built for big endian. So this
series fixes all of them.
Each of these patches carries a fixes tag, and could be backported to stable.
However, for patches #1 and #5,
The core AES cipher implementation that uses ARMv8 Crypto Extensions
instructions erroneously loads the round keys as 64-bit quantities,
which causes the algorithm to fail when built for big endian. In
addition, the key schedule generation routine fails to take endianness
into account when
The GHASH key and digest are both pairs of 64-bit quantities, but the
GHASH code does not always refer to them as such, causing failures when
built for big endian. So replace the 16x1 loads and stores with 2x8 ones.
Fixes: b913a6404ce2 ("arm64/crypto: improve performance of GHASH algorithm")
Signe
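
For the curious, here is a user-space sketch of the same idea at 64-bit granularity (again plain C, not the GHASH assembly; load_be64() is a made-up helper): building the two 64-bit limbs of a 16-byte field element with an explicit big-endian load gives the same values on every host, while reinterpreting the bytes through a native 64-bit view does not.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: read 8 bytes as a big-endian 64-bit value. */
static uint64_t load_be64(const unsigned char *p)
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

int main(void)
{
	/* A 16-byte field element as it sits in memory. */
	unsigned char h[16] = {
		0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
		0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
	};
	uint64_t hi, lo, native[2];

	/* Endian-independent limbs: identical on every host. */
	hi = load_be64(h);
	lo = load_be64(h + 8);

	/* Native reinterpretation: byte-swapped on one kind of host
	 * relative to the other, which is the kind of mismatch the patch
	 * fixes at the vector-register level. */
	memcpy(native, h, sizeof(native));

	printf("be64 limbs:   %016llx %016llx\n",
	       (unsigned long long)hi, (unsigned long long)lo);
	printf("native limbs: %016llx %016llx\n",
	       (unsigned long long)native[0], (unsigned long long)native[1]);
	return 0;
}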
commit e192be9d9a30555aae2ca1dc3aad37cba484cd4a
+ chacha20_block(&crng->state[0], out);
+ if (crng->state[12] == 0)
+ crng->state[13]++;
Did you mean
+ if (++crng->state[12] == 0)
?
--
Do what you love because life is too short for anything else.
https://samifa
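
To make the counter question above concrete, here is a standalone sketch (toy C, not the kernel code; the struct and function names are invented). It assumes, as lib/chacha20.c appears to do, that the block function advances state[12] itself before returning, in which case the quoted caller only needs to propagate the carry into state[13]; the "++" spelling folds the increment and the wrap test into one expression, for a block function that leaves the counter alone.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the ChaCha20 state: 16 32-bit words, with words 12
 * (low) and 13 (high) forming a 64-bit block counter. */
struct toy_state {
	uint32_t words[16];
};

/* Variant A: matches the quoted caller.  Assumes the block function has
 * already advanced words[12], so only the carry needs propagating. */
static void carry_after_block(struct toy_state *s)
{
	if (s->words[12] == 0)
		s->words[13]++;
}

/* Variant B: the "did you mean" spelling.  Increments and tests for the
 * wrap in one expression, for a block function that does not touch the
 * counter itself. */
static void increment_and_carry(struct toy_state *s)
{
	if (++s->words[12] == 0)
		s->words[13]++;
}

int main(void)
{
	struct toy_state a = { .words = { [12] = UINT32_MAX } };
	struct toy_state b = { .words = { [12] = UINT32_MAX } };

	a.words[12]++;			/* block function wraps the low word */
	carry_after_block(&a);		/* caller carries into the high word */

	increment_and_carry(&b);	/* same effect in one step */

	printf("A: low=%u high=%u\n", (unsigned)a.words[12], (unsigned)a.words[13]);
	printf("B: low=%u high=%u\n", (unsigned)b.words[12], (unsigned)b.words[13]);
	return 0;
}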
ping...
Hi Tyhicks,
We have occasionally observed an eCryptfs crash in Linux kernel 4.1.18. The call
trace is attached below. Is it a known issue? We look forward to hearing from you.
Thanks in advance!
[19314.529479s][pid:2694,cpu3,GAC_Executor[0]]Call trace:
[19314.529510s][pid:2694,cpu3,GAC_Execu