…implementation back to its original source
and correctly load the key registers and preserve their values by
*not* re-using the registers for other purposes.
Kudos to James for reporting the issue and providing a test case
showing the discrepancies.
Reported-by: James Yonan
Cc: Chandramouli Narayanan
On 15/12/2014 12:26, James Yonan wrote:
Mathias,
I'm seeing some anomalous results with the "by8" AVX CTR optimization in
3.18.
the patch you're replying to actually *disabled* the "by8" variant for
v3.17 as it had another bug related to wrong counter handling in GCM.
The fix for that particular issue only made it to v3.18, so the c…
I'm seeing some anomalous results with the "by8" AVX CTR optimization in
3.18.
In particular, crypto_aead_encrypt appears to produce different
ciphertext from the same plaintext depending on whether or not the
optimization is enabled.
See the attached patch to tcrypt that demonstrates the discrepancy.
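The tcrypt patch itself is attached to the original mail and not reproduced in the archive, but the spirit of the check is easy to sketch: encrypt the same fixed buffer through two different gcm(aes) drivers and compare the outputs. The following kernel-module sketch is mine, not James's patch; the two driver names and the use of the modern crypto_wait_req() helpers are assumptions that may need adjusting for a given kernel.

#include <crypto/aead.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>

#define PT_LEN	64	/* plaintext bytes */
#define TAG_LEN	16	/* GCM auth tag bytes */

/* Encrypt PT_LEN zero bytes in place, zero key/IV, via the named driver. */
static int gcm_encrypt_zeroes(const char *driver, u8 *out)
{
	struct crypto_aead *tfm;
	struct aead_request *req = NULL;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	u8 key[16] = {}, iv[12] = {};	/* gcm(aes) uses a 12-byte IV */
	u8 *buf = NULL;
	int err;

	tfm = crypto_alloc_aead(driver, 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, sizeof(key)) ?:
	      crypto_aead_setauthsize(tfm, TAG_LEN);
	if (err)
		goto out;

	buf = kzalloc(PT_LEN + TAG_LEN, GFP_KERNEL);
	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!buf || !req) {
		err = -ENOMEM;
		goto out;
	}

	/* One in-place buffer: ciphertext overwrites plaintext, tag follows. */
	sg_init_one(&sg, buf, PT_LEN + TAG_LEN);
	aead_request_set_callback(req, 0, crypto_req_done, &wait);
	aead_request_set_ad(req, 0);
	aead_request_set_crypt(req, &sg, &sg, PT_LEN, iv);

	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);
	if (!err)
		memcpy(out, buf, PT_LEN + TAG_LEN);
out:
	aead_request_free(req);
	kfree(buf);
	crypto_free_aead(tfm);
	return err;
}

static int __init gcm_check_init(void)
{
	u8 ref[PT_LEN + TAG_LEN], fast[PT_LEN + TAG_LEN];
	int err;

	/* Assumed driver names: the instantiated generic template vs. the
	 * AES-NI driver. Both must already be registered for this to work. */
	err = gcm_encrypt_zeroes("gcm_base(ctr(aes-generic),ghash-generic)", ref) ?:
	      gcm_encrypt_zeroes("generic-gcm-aesni", fast);
	if (err)
		return err;

	pr_info("gcm outputs %s\n",
		memcmp(ref, fast, sizeof(ref)) ? "DIFFER" : "match");
	return 0;
}
module_init(gcm_check_init);
MODULE_LICENSE("GPL");

On a fixed kernel both calls produce byte-identical output; ciphertext that differs between the two drivers for the same key, IV, and plaintext is exactly the anomaly reported here.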
On 24/11/2013 14:12, Cesar Eduardo Barros wrote:
Disabling compiler optimizations can be fragile, since a new
optimization could be added to -O0 or -Os that breaks the assumptions
the code is making.
Instead of disabling compiler optimizations, use a dummy inline assembly
(based on RELOC_HIDE) to block the problematic kinds of optimization,
while still allowing other optimizations to be applied to the code.
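This idea became the OPTIMIZER_HIDE_VAR() macro in <linux/compiler.h>; a minimal sketch of the trick (the exact kernel definition has varied across releases):

/*
 * A do-nothing asm statement that claims to read and write @var. The
 * compiler can no longer track the value across this point, so it cannot
 * constant-fold or short-circuit computations involving @var, yet it
 * emits zero instructions and every other optimization still applies.
 */
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "+r" (var))

Unlike building the file with -O0 or inserting a full barrier(), this hides exactly one variable from the optimizer, which is the point of the commit message above.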
* …using a switch so that future fast-path data widths
can be easily added.
* Reduced the number of #ifdefs by using sizeof(unsigned long) instead of
BITS_PER_LONG.
* Shortened the public function name to crypto_memneq.
James
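The resulting dispatch is small enough to sketch. This mirrors the shape that eventually landed in crypto/memneq.c under the shortened crypto_memneq name; the two helpers are sketched further down this thread:

/* Out-of-line so the compiler cannot see at the call sites that the
 * result is only ever tested against zero. The switch keeps the
 * ubiquitous 16-byte tag compare on a dedicated path and leaves room
 * for more fixed widths later. */
noinline unsigned long __crypto_memneq(const void *a, const void *b,
				       size_t size)
{
	switch (size) {
	case 16:
		return __crypto_memneq_16(a, b);
	default:
		return __crypto_memneq_generic(a, b, size);
	}
}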
On 26/09/2013 02:20, James Yonan wrote:
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it is
important not to leak timing information to a potential attacker…
…assembler implementations.
This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].
[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131
Signed-off-by: James Yonan
Signed-off-by: Daniel Borkmann
On 17/09/2013 13:07, Daniel Borkmann wrote:
On 09/16/2013 07:10 PM, James Yonan wrote:
On 16/09/2013 01:56, Daniel Borkmann wrote:
On 09/15/2013 06:59 PM, James Yonan wrote:
On 15/09/2013 09:45, Florian Weimer wrote:
* James Yonan:
+ * Constant-time equality testing of memory regions.
+ * Returns 0 when data is equal, non-zero otherwise.
+ * Fast path if size == 16.
+ */
+noinline unsigned long crypto_mem_not_equal(const void *a, const void *b,
+                                            size_t size)
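Behind that signature the function accumulates XOR differences rather than branching on them. A sketch of the generic path (a paraphrase, not the verbatim kernel source; the word-at-a-time loop assumes unaligned loads are cheap, which the kernel gates behind CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, and OPTIMIZER_HIDE_VAR is the barrier macro sketched earlier in the thread):

/* Returns 0 iff the regions are equal. Every byte is examined and no
 * data-dependent branch is taken, so run time depends only on size,
 * never on where (or whether) the buffers differ. */
static unsigned long __crypto_memneq_generic(const void *a, const void *b,
					     size_t size)
{
	unsigned long neq = 0;

	/* Word-at-a-time main loop. */
	while (size >= sizeof(unsigned long)) {
		neq |= *(const unsigned long *)a ^ *(const unsigned long *)b;
		OPTIMIZER_HIDE_VAR(neq);	/* block short-circuiting */
		a += sizeof(unsigned long);
		b += sizeof(unsigned long);
		size -= sizeof(unsigned long);
	}
	/* Byte-at-a-time tail. */
	while (size > 0) {
		neq |= *(const unsigned char *)a ^ *(const unsigned char *)b;
		OPTIMIZER_HIDE_VAR(neq);
		a++;
		b++;
		size--;
	}
	return neq;
}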
On 13/09/2013 02:33, Daniel Borkmann wrote:
On 09/11/2013 07:20 PM, James Yonan wrote:
On 10/09/2013 12:57, Daniel Borkmann wrote:
There was a similar patch posted some time ago [1] on lkml, where
Florian (CC) made a good point in [2] that future compiler optimizations
could short circuit on this…
Since crypto_mem_not_equal is often called with size == 16 by its
users in the Crypto API, we add a special fast path for this case.
Signed-off-by: James Yonan
---
crypto/Makefile              | 2 +-
crypto/asymmetric_keys/rsa.c | 5 +-
crypto/authenc.c             | …
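For the size == 16 fast path mentioned here, the loop disappears entirely. A 64-bit-only sketch of the idea (my paraphrase; the kernel version also handles 32-bit words and alignment-constrained architectures):

/* 16-byte compare as two fixed 64-bit loads; no loop control at all. */
static unsigned long __crypto_memneq_16(const void *a, const void *b)
{
	unsigned long neq = 0;

	BUILD_BUG_ON(sizeof(unsigned long) != 8);	/* 64-bit sketch only */
	neq |= *(const u64 *)a ^ *(const u64 *)b;
	OPTIMIZER_HIDE_VAR(neq);
	neq |= *(const u64 *)(a + 8) ^ *(const u64 *)(b + 8);
	OPTIMIZER_HIDE_VAR(neq);
	return neq;
}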
On 10/09/2013 12:57, Daniel Borkmann wrote:
There was a similar patch posted some time ago [1] on lkml, where
Florian (CC) made a good point in [2] that future compiler optimizations
could short circuit on this. This issue should probably be addressed in
such a patch here as well.
[1] https://l…
…file
because a very smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.
Signed-off-by: James Yonan
---
crypto/Makefile | 2 +-
c…
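At the call sites the conversion is mechanical; roughly (variable names here are illustrative, not taken from the patch):

/* memcmp() may return at the first differing byte, leaking the position
 * of the mismatch through timing. crypto_memneq() touches every byte,
 * so a forged tag takes exactly as long to reject as a nearly-right
 * one. */
if (crypto_memneq(computed_tag, received_tag, authsize))
	return -EBADMSG;	/* authentication failed */

Keeping the function noinline in its own translation unit is what stops a whole-program optimizer from noticing the zero/nonzero-only use and undoing the constant-time property.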
On 07/09/2013 19:32, Herbert Xu wrote:
On Fri, Sep 06, 2013 at 04:20:50PM -0700, Kees Cook wrote:
In the two-thread situation, the first thread gets a larval with
refcnt 2 via crypto_larval_add. (Why 2?) The next thread finds the
larval via crypto_larval_add's call to __crypto_alg_lookup() and…
I'm seeing a GPF when code on several CPUs calls crypto_alloc_aead at
the same time, and in order for crypto_alloc_aead to satisfy the
request, it needs to lookup a kernel module (in this case aesni_intel
and aes_x86_64).
Shouldn't the bulk of the code in crypto_alg_mod_lookup be protected by…
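As to the "(Why 2?)" above: the larval is allocated with its refcount already at 2 because it has two owners from birth. A hedged, abridged paraphrase of the relevant logic in crypto/api.c (not the verbatim source):

struct crypto_larval *crypto_larval_alloc(const char *name, u32 type,
					  u32 mask)
{
	struct crypto_larval *larval;

	larval = kzalloc(sizeof(*larval), GFP_KERNEL);
	if (!larval)
		return ERR_PTR(-ENOMEM);

	/* ... fill in name, type, mask; init the completion ... */

	/*
	 * Two references from the start: one owned by crypto_alg_list
	 * (dropped when crypto_larval_kill() unlinks the larval) and one
	 * owned by the caller (dropped after crypto_larval_wait()
	 * resolves or times out). A second thread that finds the larval
	 * through __crypto_alg_lookup() takes its own extra reference
	 * before sleeping on the completion.
	 */
	atomic_set(&larval->alg.cra_refcnt, 2);

	return larval;
}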