On Wed, 2018-03-07 at 15:29 -0800, James Bottomley wrote:
> By now, everybody knows we have a problem with the TPM2_RS_PW easy
> button on TPM2 in that transactions on the TPM bus can be intercepted
> and altered. The way to fix this is to use real sessions for HMAC
> capabilities to ensure integrity ...
As reported by Sebastian, the way the arm64 NEON crypto code currently
keeps kernel mode NEON enabled across calls into skcipher_walk_xxx() is
causing problems with RT builds, given that the skcipher walk API may
allocate and free temporary buffers it uses to present the input and
output arrays to the crypto algorithm ...
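A minimal glue-level sketch of the resulting pattern, assuming a hypothetical asm entry point aes_ecb_encrypt() with the usual prototype: the NEON is taken per walk step only, so skcipher_walk_done(), which may allocate or free buffers, runs with preemption enabled.

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/linkage.h>
#include <asm/neon.h>

asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[], u32 const rk[],
				int rounds, int blocks);

static int ecb_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct skcipher_walk walk;
	unsigned int blocks;
	int err;

	err = skcipher_walk_virt(&walk, req, false);

	while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
		kernel_neon_begin();	/* NEON held for this chunk only */
		aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
				ctx->key_enc, 6 + ctx->key_length / 4 /* rounds */,
				blocks);
		kernel_neon_end();	/* preemption legal again here */
		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
	}
	return err;
}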
When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that ...
CBC encryption is strictly sequential, and so the current AES code
simply processes the input one block at a time. However, we are
about to add yield support, which adds a bit of overhead, and which
we prefer to align with other modes in terms of granularity (i.e.,
it is better to have all routines yield ...
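For reference, the data dependency in plain C (aes_encrypt_block() is a hypothetical single-block primitive, not a kernel symbol):

#include <stddef.h>
#include <stdint.h>

/* hypothetical single-block AES primitive: dst = AES-Enc(key, src) */
void aes_encrypt_block(const void *key, uint8_t *dst, const uint8_t *src);

/*
 * Block N's input is XORed with block N-1's ciphertext, so iteration N
 * cannot start before iteration N-1 finishes: strictly sequential,
 * unlike ECB/CTR/XTS where all blocks are independent.
 */
void cbc_encrypt(const void *key, uint8_t *iv, uint8_t *buf, size_t blocks)
{
	while (blocks--) {
		for (int i = 0; i < 16; i++)
			buf[i] ^= iv[i];	/* chain in previous ciphertext */
		aes_encrypt_block(key, buf, buf);
		iv = buf;		/* this ciphertext feeds the next block */
		buf += 16;
	}
}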
The AES block mode implementation using Crypto Extensions or plain NEON
was written before real hardware existed, and so its interleave factor
was made build time configurable (as well as an option to instantiate
all interleaved sequences inline rather than as subroutines)
We ended up using INTERL...
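A rough C analogue of such a build-time interleave knob, reusing the hypothetical aes_encrypt_block() from the CBC sketch above (the real code is assembler, with the interleaved blocks held in NEON registers):

#ifndef INTERLEAVE
#define INTERLEAVE 4	/* independent blocks per main-loop iteration */
#endif

void ecb_encrypt_all(const void *key, uint8_t *buf, size_t blocks)
{
	while (blocks >= INTERLEAVE) {	/* main loop hides instruction latency */
		for (int i = 0; i < INTERLEAVE; i++)
			aes_encrypt_block(key, buf + 16 * i, buf + 16 * i);
		buf += 16 * INTERLEAVE;
		blocks -= INTERLEAVE;
	}
	while (blocks--) {		/* tail: leftover blocks one at a time */
		aes_encrypt_block(key, buf, buf);
		buf += 16;
	}
}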
In order to be able to test yield support under preempt, add a test
vector for CRC-T10DIF that is long enough to take multiple iterations
(and thus possible preemption between them) of the primary loop of the
accelerated x86 and arm64 implementations.
Signed-off-by: Ard Biesheuvel
---
crypto/tes
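For reference, a bitwise CRC-T10DIF in plain C (polynomial 0x8BB7, width 16, zero init, no reflection), of the kind that can generate the expected digest for such a long vector; this is the reference definition, not the accelerated code under test:

#include <stddef.h>
#include <stdint.h>

static uint16_t crc_t10dif_ref(uint16_t crc, const uint8_t *buf, size_t len)
{
	while (len--) {
		crc ^= (uint16_t)(*buf++) << 8;		/* MSB first */
		for (int i = 0; i < 8; i++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x8bb7
					     : crc << 1;
	}
	return crc;					/* no final XOR */
}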
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/sha512-ce-core.S | 27 +++-
1 file changed, 21 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/cr
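All the patches carrying this one-line description follow the same shape. Rendered in C (the real implementation is an assembler macro; the helper name here is illustrative), the per-block check is:

#include <linux/sched.h>
#include <asm/neon.h>

/*
 * If a preemptible kernel has flagged this task for rescheduling, drop
 * and re-take the NEON. kernel_neon_end() re-enables preemption, which
 * is where the actual reschedule happens.
 */
static void neon_yield_if_needed(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT) &&
	    test_thread_flag(TIF_NEED_RESCHED)) {
		kernel_neon_end();
		kernel_neon_begin();
	}
}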
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/crct10dif-ce-core.S | 32 +---
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/arch/arm64
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/ghash-ce-core.S | 113 ++--
arch/arm64/crypto/ghash-ce-glue.c | 28 +++--
2 files changed, 97 insertions(+) ...
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-neonbs-core.S | 305 +++-
1 file changed, 170 insertions(+), 135 deletions(-)
diff --git a/arch/arm
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-ce.S    |  15 +-
arch/arm64/crypto/aes-modes.S | 331 ...
2 files changed, 216 insertions(+), 130 deletions(-)
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/sha1-ce-core.S | 42 ++--
1 file changed, 29 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/cry
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/crc32-ce-core.S | 40 +++-
1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/cr
Test code to force a kernel_neon_end+begin sequence at every yield point,
and wipe the entire NEON state before resuming the algorithm.
---
arch/arm64/include/asm/assembler.h | 33
1 file changed, 33 insertions(+)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
Add support macros to conditionally yield the NEON (and thus the CPU)
that may be called from the assembler code.
In some cases, yielding the NEON involves saving and restoring a
non-trivial amount of context (especially in the CRC folding
algorithms), and so the macro is split into three, and the ...
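Approximately, in C (all helper names here are illustrative; the real macros are assembler in asm/assembler.h):

#include <linux/types.h>
#include <asm/neon.h>

struct fold_state;				/* opaque: algorithm-specific */
bool more_input(struct fold_state *st);
void fold_one_chunk(struct fold_state *st);
bool reschedule_pending(void);
void save_fold_state(struct fold_state *st);	/* e.g. partial CRC folds */
void restore_fold_state(struct fold_state *st);

static void fold_loop(struct fold_state *st)
{
	while (more_input(st)) {
		fold_one_chunk(st);		/* the real work */

		if (reschedule_pending()) {	/* "if will yield" part */
			save_fold_state(st);	/* only on the slow path */
			kernel_neon_end();	/* "do yield": reschedule */
			kernel_neon_begin();
			restore_fold_state(st);
		}				/* "endif" part */
	}
}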
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-ce-ccm-core.S | 150 +---
1 file changed, 95 insertions(+), 55 deletions(-)
diff --git a/arch/arm64
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/sha3-ce-core.S | 77 +---
1 file changed, 50 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/cry
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/sm3-ce-core.S | 30 +++-
1 file changed, 23 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/crypt
We are going to add code to all the NEON crypto routines that will
turn them into non-leaf functions, so we need to manage the stack
frames. To make this less tedious and error-prone, add some macros
that take the number of callee-saved registers to preserve and the
extra size to allocate in the stack frame ...
CBC MAC is strictly sequential, and so the current AES code simply
processes the input one block at a time. However, we are about to add
yield support, which adds a bit of overhead, and which we prefer to
align with other modes in terms of granularity (i.e., it is better to
have all routines yield ...
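In the same plain-C style as the CBC sketch earlier (aes_encrypt_block() again hypothetical), the dependency is identical: the running MAC is both chaining value and accumulator.

void cbcmac_update(const void *key, uint8_t mac[16],
		   const uint8_t *in, size_t blocks)
{
	while (blocks--) {
		for (int i = 0; i < 16; i++)
			mac[i] ^= in[i];	/* fold in next input block */
		aes_encrypt_block(key, mac, mac);
		in += 16;
	}
}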
Avoid excessive scheduling delays under a preemptible kernel by
conditionally yielding the NEON after every block of input.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/sha2-ce-core.S | 37 ++--
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/cry
Tweak the SHA256 update routines to invoke the SHA256 block transform
block by block, to avoid excessive scheduling delays caused by the
NEON algorithm running with preemption disabled.
Also, remove a stale comment which no longer applies now that kernel
mode NEON is actually disallowed in some contexts ...
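At the glue level, the bounded update looks roughly like this (sha256_block_neon() matches the arm64 driver's asm entry point; the chunk cap and wrapper are illustrative):

#include <crypto/sha256_base.h>
#include <linux/kernel.h>
#include <linux/linkage.h>
#include <asm/neon.h>

#define MAX_BLOCKS_PER_NEON	8	/* illustrative cap, not the real value */

asmlinkage void sha256_block_neon(u32 *digest, const void *data,
				  unsigned int num_blks);

static void sha256_update_bounded(struct sha256_state *sctx,
				  const u8 *data, unsigned int blocks)
{
	while (blocks) {
		unsigned int n = min(blocks, (unsigned int)MAX_BLOCKS_PER_NEON);

		kernel_neon_begin();	/* preemption disabled ...        */
		sha256_block_neon(sctx->state, data, n);
		kernel_neon_end();	/* ... for at most n blocks' work */

		data += n * SHA256_BLOCK_SIZE;
		blocks -= n;
	}
}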
On Sat, 2018-03-10 at 14:49 +0200, Jarkko Sakkinen wrote:
> On Wed, 2018-03-07 at 15:29 -0800, James Bottomley wrote:
> >
> > By now, everybody knows we have a problem with the TPM2_RS_PW easy
> > button on TPM2 in that transactions on the TPM bus can be
> > intercepted
> > and altered. The way to ...
By now, everybody knows we have a problem with the TPM2_RS_PW easy
button on TPM2 in that transactions on the TPM bus can be intercepted
and altered. The way to fix this is to use real sessions for HMAC
capabilities to ensure integrity and to use parameter and response
encryption to ensure confidentiality ...
This separates the old tpm_buf_... handling functions out from static
inlines in tpm.h and makes them their own tpm-buf.c file. It also adds
handling for TPM2B structures and incremental, pointer-advancing
parsers.
Signed-off-by: James Bottomley
---
v2: added this patch to separate out t...
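A sketch of what an incremental, pointer-advancing parser of this kind can look like (the cursor struct and helper names are hypothetical, not the tpm-buf.c API):

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/unaligned.h>

struct tpm_parse {
	const u8 *ptr;		/* cursor, advanced by each helper */
	const u8 *end;
};

static int tpm_parse_u32(struct tpm_parse *p, u32 *val)
{
	if (p->end - p->ptr < 4)
		return -EINVAL;
	*val = get_unaligned_be32(p->ptr);	/* TPM wire format: big endian */
	p->ptr += 4;
	return 0;
}

/* a TPM2B is a 16-bit big endian size followed by that many bytes */
static int tpm_parse_2b(struct tpm_parse *p, const u8 **buf, u16 *len)
{
	if (p->end - p->ptr < 2)
		return -EINVAL;
	*len = get_unaligned_be16(p->ptr);
	p->ptr += 2;
	if (p->end - p->ptr < *len)
		return -EINVAL;
	*buf = p->ptr;
	p->ptr += *len;
	return 0;
}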
This code adds true session-based HMAC authentication plus parameter
decryption and response encryption using AES.
The basic design of this code is to segregate all the nasty crypto,
hash and HMAC code into tpm2-sessions.c and export a usable API.
The API starts off by gaining a session ...
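A usage sketch of that flow (function names follow this series' description; the argument lists are assumptions, not the actual prototypes):

static int unseal_with_session(struct tpm_chip *chip, u32 handle)
{
	struct tpm2_auth *auth;
	struct tpm_buf buf;
	int rc;

	rc = tpm2_start_auth_session(chip, &auth);	/* 1: gain a session */
	if (rc)
		return rc;

	tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_UNSEAL);
	tpm_buf_append_u32(&buf, handle);

	/* 2: append the session, asking for response encryption */
	tpm2_buf_append_hmac_session(chip, &buf, auth, TPM2_SA_ENCRYPT);

	/* 3: fill in the command HMAC once all parameters are in place */
	tpm2_buf_fill_hmac_session(chip, &buf);

	rc = tpm_transmit_cmd(chip, &buf, 0, "unsealing");

	/* 4: verify the response HMAC, decrypt encrypted parameters */
	return tpm2_buf_check_hmac_response(chip, &buf, rc);
}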
We use tpm2_pcr_extend() in trusted keys to extend a PCR to prevent a
key from being re-loaded until the next reboot. To use this
functionality securely, that extend must be protected by a session
HMAC.
Signed-off-by: James Bottomley
---
v3: add error handling to sessions
---
drivers/char/tpm
If some entity is snooping the TPM bus, they can see the random
numbers we're extracting from the TPM and do prediction attacks
against their consumers. Foil this attack by using response
encryption to prevent the attacker from seeing the random sequence.
Signed-off-by: James Bottomley
---
v3:
If some entity is snooping the TPM bus, they can see the data going in
to be sealed and the data coming out as it is unsealed. Add parameter
and response encryption to these cases to ensure that no secrets are
leaked even if the bus is snooped.
As part of doing this conversion it was discovered th...
This runs through a preset sequence using sessions to demonstrate that
the session handling code functions. It exercises HMAC, encryption and
decryption by testing an encrypted sealing operation with authority
and proving that the same sealed data comes back again via an HMAC and
response encryption ...
From: Eric Biggers
If the pcrypt template is used multiple times in an algorithm, then a
deadlock occurs because all pcrypt instances share the same
padata_instance, which completes requests in the order submitted. That
is, the inner pcrypt request waits for the outer pcrypt request while
the outer ...
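For concreteness, triggering the deadlock needs nothing more exotic than instantiating a doubly wrapped template (the template string here is illustrative):

#include <crypto/aead.h>
#include <linux/err.h>

static int pcrypt_nesting_demo(void)
{
	/*
	 * Both pcrypt layers share one padata_instance; with strictly
	 * ordered completion, the inner request can end up waiting on
	 * the outer one.
	 */
	struct crypto_aead *tfm =
		crypto_alloc_aead("pcrypt(pcrypt(rfc4106(gcm(aes))))", 0, 0);

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	crypto_free_aead(tfm);
	return 0;
}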
Hi
How does this patchset affect the throughput performance of crypto?
Is it expected to increase?
Regards
Vakul
> -----Original Message-----
> From: linux-crypto-ow...@vger.kernel.org [mailto:linux-crypto-
> ow...@vger.kernel.org] On Behalf Of Ard Biesheuvel
> Sent: Saturday, March 10, 2018 8:...