On Tue, Sep 3, 2019 at 7:43 PM Fabio Estevam wrote:
>
> Hi Andrey,
>
> On Tue, Sep 3, 2019 at 11:37 PM Andrey Smirnov wrote:
> >
> > Use devres to unmap memory and drop explicit de-initialization
> > code.
> >
> > NOTE: There's no corresponding unmapping code in caam_jr_remove which
> > seems like a resource leak.
> >
> > Signed-off-by: Andrey Smirnov
> > Cc: Chris Healy
Use devres to de-initialize the RNG and drop explicit de-initialization
code in caam_remove().
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypt
irq_of_parse_and_map() will return zero in case of error, so add an
error check for that.
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypto/caam/j
Use devres to unmap memory and drop explicit de-initialization
code.
NOTE: There's no corresponding unmapping code in caam_jr_remove which
seems like a resource leak.
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-c
Use devres to remove debugfs and drop corresponding
debugfs_remove_recursive() call.
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypto/caam/ctr
Returning -EBUSY from a platform device's .remove() callback won't stop
the removal process, so the code in caam_jr_remove() is not going to
have the desired effect of preventing the JR from being removed.
In order to be able to deal with removal of the JR device, change the
code as follows:
1. To ma
In order to allow caam_jr_enqueue() to lock the underlying JR's
device (via device_lock(); see the commit that follows) we need to make
sure that no code calls caam_jr_enqueue() as a part of caam_jr_probe(),
to avoid a deadlock. Unfortunately, the current implementation of the
caamrng code does exactly that in caam
Use devres to de-initialize the QI and drop explicit de-initialization
code in caam_remove().
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypto
In order to access an IP block's registers we need to enable the
appropriate clocks first; otherwise we risk hanging the CPU.
The problem becomes very apparent when trying to use CAAM driver built
as a kernel module. In that case caam_probe() gets called after
clk_disable_unused() which means all
Everyone:
This series contains bugfixes and small improvements I made while doing
more testing of the CAAM code:
- "crypto: caam - make sure clocks are enabled first"
fixes a recent regression (and, coincidentally, a leak caused by one
of my i.MX8MQ patches)
- "crypto: caam - use devres to unmap JR's
With IRQ requesting being managed by devres, we need to make sure that
we dispose of the IRQ mapping after, and not before, the IRQ is freed
(otherwise we'll end up with a warning from the kernel). To achieve
that, simply convert the IRQ mapping to rely on devres as well.
Fixes: f314f12db65c ("crypto: caam - conv
Use devres to de-initialize the RNG and drop explicit de-initialization
code in caam_remove().
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypt
Move the call to devm_of_platform_populate() to the end of
caam_probe(), so we won't try to add any child devices until all of
the initialization is finished successfully.
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: lin
Use devres to unmap memory and drop corresponding iounmap() call.
Signed-off-by: Andrey Smirnov
Cc: Chris Healy
Cc: Lucas Stach
Cc: Horia Geantă
Cc: Herbert Xu
Cc: Iuliana Prodan
Cc: linux-crypto@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
---
drivers/crypto/caam/ctrl.c | 28 +-
On Tue, Sep 03, 2019 at 08:50:20AM -0500, Eric Biggers wrote:
>
> Doesn't this re-introduce the same bug that my patch fixed -- that
> scatterwalk_done() could be called after 0 bytes processed, causing a crash in
> scatterwalk_pagedone()?
No, because that crash is caused by the internal calls to t
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm/crypto/aes-ce-core.S b/arch/
The pure NEON AES implementation predates the bit-slicing one, and is
generally slower, unless the algorithm in question can only execute
sequentially.
So advertising the skciphers that the bit-slicing driver implements as
well serves no real purpose, and we can just disable them. Note that the
bi
Add the missing support for ciphertext stealing in the implementation
of AES-XTS, which is part of the XTS specification but was omitted up
until now due to lack of a need for it.
The asm helpers are updated so they can deal with any input size, as
long as the last full block and the final partial
Update the AES-XTS implementation based on NEON instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Since the bit slicing driver is only faster if i
Since the CTS-CBC code completes synchronously, there is no point in
keeping part of the scratch data it uses in the request context, so
move it to the stack instead.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 61 +---
1 file changed, 26 insertions(+), 35 de
Optimize away one of the tbl instructions in the decryption path,
which turns out to be unnecessary.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-modes.S | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-mode
The AES round keys are arrays of u32s in native endianness now, so
update the function prototypes accordingly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 18 -
arch/arm/crypto/aes-ce-glue.c | 40 ++--
2 files changed, 29 insertions(+), 29 deletions(
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-neonbs-core.S | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/crypto/aes-neonbs-co
After starting a skcipher walk, the only way to ensure that all
resources it has tied up are released is to complete it. In some
cases, it will be useful to be able to abort a walk cleanly after
it has started, so add this ability to the skcipher walk API.
Signed-off-by: Ard Biesheuvel
---
inclu
Replace the vector load from memory sequence with a simple instruction
sequence to compose the tweak vector directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-neonbs-core.S | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/arm/crypto/aes-neonbs-core.S
Import the AES-XTS test vectors from IEEE publication P1619/D16
that exercise the ciphertext stealing part of the XTS algorithm,
which we haven't supported in the Linux kernel implementation up
till now.
Tested-by: Pascal van Leeuwen
Signed-off-by: Ard Biesheuvel
---
crypto/testmgr.h | 60 +
From: Pascal van Leeuwen
This patch adds test vectors for AES-XTS that cover data inputs that are
not a multiple of 16 bytes and therefore require cipher text stealing
(CTS) to be applied. Vectors were added to cover all possible alignments
combined with various interesting (i.e. for vector imple
This is a collection of improvements for the ARM and arm64 implementations
of the AES-based skciphers.
NOTES:
- the last two patches add XTS ciphertext stealing test vectors and should
NOT be merged until all AES-XTS implementations have been confirmed to work
- tested for correctness [on both Q
When the ARM AES instruction based crypto driver was introduced, there
were no known implementations that could benefit from a 4-way interleave,
and so a 3-way interleave was used instead. Since we have sufficient
space in the SIMD register file, let's switch to a 4-way interleave to
align with the
Update the AES-XTS implementation based on AES instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm/cryp
Update the AES-XTS implementation based on NEON instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.
Signed-off-by: Ard Biesheuvel
---
arch/arm/cry
Reduce the scope of the kernel_neon_begin/end regions so that the SIMD
unit is released (and thus preemption re-enabled) if the crypto operation
cannot be completed in a single scatterwalk step. This avoids scheduling
blackouts due to preemption being enabled for unbounded periods, resulting
in a m
Instead of relying on the CTS template to wrap the accelerated CBC
skcipher, implement the ciphertext stealing part directly.
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 85 +
arch/arm/crypto/aes-ce-glue.c | 188 ++--
2 files changed, 256 insertions
From: Gilad Ben-Yossef
[ Upstream commit 1358c13a48c43f5e4de0c1835291837a27b9720c ]
We were enabling autosuspend, which uses data set by the
hash module, prior to the hash module being initialized, causing
a crash on resume as part of the startup sequence if the race
was lost.
This was never a r
From: Gilad Ben-Yossef
[ Upstream commit f1071c3e2473ae19a7f5d892a187c4cab1a61f2e ]
Commit 1358c13a48c4 ("crypto: ccree - fix resume race condition on init")
was missing an "inline" qualifier for the stub function used when CONFIG_PM
is not set, causing a build warning.
Fixes: 1358c13a48c4 ("crypto:
On Tue, Sep 03, 2019 at 04:54:38PM +1000, Herbert Xu wrote:
> int skcipher_walk_done(struct skcipher_walk *walk, int err)
> {
> - unsigned int n; /* bytes processed */
> - bool more;
> -
> - if (unlikely(err < 0))
> - goto finish;
> + unsigned int n = walk->nbytes - er
On Tue, Sep 3, 2019 at 10:51 AM Hans de Goede wrote:
>
> Hi,
>
> On 03-09-19 09:45, Gilad Ben-Yossef wrote:
> > On Sun, Sep 1, 2019 at 11:36 PM Hans de Goede wrote:
> >>
> >> Rename the algo_init arrays to cc_algo_init so that they do not conflict
> >> with the functions declared in crypto/sha256
Hi,
On 03-09-19 09:45, Gilad Ben-Yossef wrote:
> On Sun, Sep 1, 2019 at 11:36 PM Hans de Goede wrote:
> > Rename the algo_init arrays to cc_algo_init so that they do not conflict
> > with the functions declared in crypto/sha256.h.
> > This is a preparation patch for folding crypto/sha256.h into crypto/sha
On Sun, Sep 1, 2019 at 11:36 PM Hans de Goede wrote:
>
> Rename the algo_init arrays to cc_algo_init so that they do not conflict
> with the functions declared in crypto/sha256.h.
>
> This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
I'm fine with the renaming.
Signed-of
blkcipher_walk_done may be called with an error by internal or
external callers. For those internal callers we shouldn't unmap
pages but for external callers we must unmap any pages that are
in use.
This patch adds a new function blkcipher_walk_unwind so that we
can eliminate the internal callers
ablkcipher_walk_done may be called with an error by internal or
external callers. For those internal callers we shouldn't unmap
pages but for external callers we must unmap any pages that are
in use.
This patch adds a new function ablkcipher_walk_unwind so that we
can eliminate the internal calle