On 15 June 2016 at 15:39, Herbert Xu wrote:
> On Wed, Jun 15, 2016 at 03:38:02PM +0800, Baolin Wang wrote:
>>
>> But that means we should divide the bulk request into 512-byte
>> requests and break up the mapped sg table for each request. On the
>> other hand we shou
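The per-sector splitting being debated here can be sketched in plain C. This is a toy model of the arithmetic only; `SECTOR_SIZE` matches dm-crypt's 512-byte sector, but the function names are illustrative, not kernel APIs:

```c
#include <stddef.h>

#define SECTOR_SIZE 512u

/* Number of 512-byte sub-requests a bulk request of 'len' bytes is broken
 * into when the cipher needs a fresh IV per sector. Assumes 'len' is
 * sector-aligned, as dm-crypt guarantees. */
size_t bulk_to_sector_requests(size_t len)
{
    return len / SECTOR_SIZE;
}

/* Byte offset of the i-th 512-byte sub-request within the bulk request. */
size_t sector_request_offset(size_t i)
{
    return i * SECTOR_SIZE;
}
```

So a single 64KB bio would turn into 128 separate crypto requests, which is the overhead the bulk interface is trying to avoid.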
On 15 June 2016 at 14:49, Herbert Xu wrote:
> On Wed, Jun 15, 2016 at 02:27:04PM +0800, Baolin Wang wrote:
>>
>> After some investigation, I still think we should divide the bulk
>> request from dm-crypt into small requests (512 bytes each) if
>> this algorithm is
Hi Herbert,
On 8 June 2016 at 10:00, Baolin Wang wrote:
> Hi Herbert,
>
> On 7 June 2016 at 22:16, Herbert Xu wrote:
>> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>>> Now some cipher hardware engines prefer to handle bulk block rather than one
>>&
Hi Herbert,
On 7 June 2016 at 22:16, Herbert Xu wrote:
> On Tue, Jun 07, 2016 at 08:17:05PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause these cipher engines ca
In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
hardware engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio with scatterlists.
Signed-off-by: Baolin Wang
---
block/blk-merge.c | 19 +++
include/linux
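What blk_bio_map_sg() is described as doing can be sketched in stand-alone C: walk a bio's segments and coalesce physically contiguous ones into scatterlist entries. The structs and the flat address model below are simplifications for illustration, not the kernel's `struct bio`/`struct scatterlist`:

```c
#include <stdint.h>

/* Simplified stand-ins: a bio is a list of byte segments; an sg entry is
 * an (address, length) pair. The real function works on pages. */
struct seg    { uintptr_t addr; unsigned len; };
struct sg_ent { uintptr_t addr; unsigned len; };

/* Map segments into sg entries, merging each segment into the previous
 * entry when it starts exactly where that entry ends, and return the
 * number of entries written. Caller must size 'sg' for the worst case
 * (one entry per segment), matching the "caller must make sure sg can
 * hold bio segments entries" rule quoted in this thread. */
int map_segs_to_sg(const struct seg *segs, int nsegs, struct sg_ent *sg)
{
    int n = 0;

    for (int i = 0; i < nsegs; i++) {
        if (n > 0 && sg[n - 1].addr + sg[n - 1].len == segs[i].addr) {
            sg[n - 1].len += segs[i].len;   /* coalesce contiguous */
        } else {
            sg[n].addr = segs[i].addr;
            sg[n].len  = segs[i].len;
            n++;
        }
    }
    return n;
}
```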
always 512
bytes and thus increase the hardware engine processing speed.
So introduce 'CRYPTO_ALG_BULK' flag to indicate this cipher can support bulk
mode.
Signed-off-by: Baolin Wang
---
include/crypto/skcipher.h |7 +++
include/linux/crypto.h|6 ++
2 files c
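How dm-crypt would consume such a flag can be sketched as below. The bit value and the struct are placeholders (the actual definition lives in the patch to include/linux/crypto.h, which is truncated here):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical bit value; the real one is defined by the patch. */
#define CRYPTO_ALG_BULK 0x00002000

/* Minimal stand-in for the algorithm flags word carried by a cipher. */
struct cipher_alg_stub {
    uint32_t cra_flags;
};

/* True if the cipher advertises bulk support, so dm-crypt can hand it a
 * whole bio instead of one 512-byte sector at a time. */
bool cipher_supports_bulk(const struct cipher_alg_stub *alg)
{
    return (alg->cra_flags & CRYPTO_ALG_BULK) != 0;
}
```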
setup (beaglebone black board with ecb(aes)
cipher and dd testing) using 64KB I/Os on an eMMC storage device I saw about
127% improvement in throughput for encrypted writes, and about 206% improvement
for encrypted reads.
Signed-off-by: Baolin Wang
---
drivers/md/dm-crypt.c |
Since the ecb(aes) cipher does not need to handle an IV for encryption
or decryption, it can support bulk blocks when handling data.
Thus this patch adds the CRYPTO_ALG_BULK flag for the ecb(aes) cipher to
improve the hardware AES engine's efficiency.
Signed-off-by: Baolin
r the blk_bio_map_sg() function to avoid duplicated code.
- Move the sg table allocation to crypt_ctr_cipher() function to avoid memory
allocation in the IO path.
- Remove the crypt_sg_entry() function.
- Other optimization.
Baolin Wang (4):
block: Introduce blk_bio_map_sg() to map one bio
c
On 3 June 2016 at 22:35, Jens Axboe wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> In dm-crypt, it need to map one bio to scatterlist for improving the
>> hardware engine encryption efficiency. Thus this patch introduces the
>> blk_bio_map_sg() function to ma
On 3 June 2016 at 22:38, Jens Axboe wrote:
> On 05/27/2016 05:11 AM, Baolin Wang wrote:
>>
>> +/*
>> + * Map a bio to scatterlist, return number of sg entries setup. Caller
>> must
>> + * make sure sg can hold bio segments entries.
>> + */
>> +int b
On 3 June 2016 at 18:09, Herbert Xu wrote:
> On Fri, Jun 03, 2016 at 05:23:59PM +0800, Baolin Wang wrote:
>>
>> Assuming one 64K size bio coming, we can map the whole bio with one sg
>> table in crypt_convert_bulk_block() function. But if we send this bulk
>> request
On 3 June 2016 at 16:21, Herbert Xu wrote:
> On Fri, Jun 03, 2016 at 04:15:28PM +0800, Baolin Wang wrote:
>>
>> Suppose the cbc(aes) algorithm, which cannot be handled through the bulk
>> interface; it needs to map the data sector by sector.
>> If we also handle the c
On 3 June 2016 at 15:54, Herbert Xu wrote:
> On Fri, Jun 03, 2016 at 03:10:31PM +0800, Baolin Wang wrote:
>> On 3 June 2016 at 14:51, Herbert Xu wrote:
>> > On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>> >>
>> >> If we move the IV gener
On 3 June 2016 at 14:51, Herbert Xu wrote:
> On Fri, Jun 03, 2016 at 02:48:34PM +0800, Baolin Wang wrote:
>>
>> If we move the IV generation into the crypto API, we also can not
>> handle every algorithm with the bulk interface. Cause we also need to
>> use different m
Hi Herbert,
On 2 June 2016 at 16:26, Herbert Xu wrote:
> On Fri, May 27, 2016 at 07:11:23PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause these cipher engines ca
bio map or request map.
Signed-off-by: Baolin Wang
---
block/blk-merge.c | 36 +++-
include/linux/blkdev.h |2 ++
2 files changed, 33 insertions(+), 5 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..badae44 100644
--- a
he sg table allocation to crypt_ctr_cipher() function to avoid memory
allocation in the IO path.
- Remove the crypt_sg_entry() function.
- Other optimization.
Baolin Wang (4):
block: Introduce blk_bio_map_sg() to map one bio
crypto: Introduce CRYPTO_ALG_BULK flag
md: dm-crypt: Introduce the
setup (beaglebone black board and dd testing)
using 64KB I/Os on an eMMC storage device I saw about 127% improvement in
throughput for encrypted writes, and about 206% improvement for encrypted reads.
But this is not fit for other modes which need different IV for each sector.
Signed-off-by: Baolin
struct crypto_async_request *req);
> void crypto_finalize_request(struct crypto_engine *engine,
> -struct ablkcipher_request *req, int err);
> +struct crypto_async_request *req, int err);
> int crypto_engine_start(struct crypto_engine *engine);
> int crypto_engine_stop(struct crypto_engine *engine);
> struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
> --
> 2.7.3
>
Reviewed-by: Baolin Wang
--
Baolin.wang
Best Regards
struct omap_des_dev *dd = omap_des_find_dev(ctx);
> @@ -620,8 +621,9 @@ static int omap_des_prepare_req(struct crypto_engine
> *engine,
> }
>
> static int omap_des_crypt_req(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> +
On 18 May 2016 at 17:21, LABBE Corentin wrote:
> Since the crypto engine has been converted to use crypto_async_request
> instead of ablkcipher_request, minor changes are needed to use it.
I think you missed the conversion for omap des driver, please rebase
your patch. Beyond that I think you did
in the IO path.
- Remove the crypt_sg_entry() function.
- Other optimization.
Baolin Wang (3):
block: Introduce blk_bio_map_sg() to map one bio
crypto: Introduce CRYPTO_ALG_BULK flag
md: dm-crypt: Introduce the bulk mode method when sending request
block/blk-merge.c | 36 +--
dr
setup (beaglebone black board) using 64KB
I/Os on an eMMC storage device I saw about 60% improvement in throughput for
encrypted writes, and about 100% improvement for encrypted reads. But this
is not fit for other modes which need different IV for each sector.
Signed-off-by: Baolin Wang
---
drivers/
On 27 May 2016 at 15:53, Milan Broz wrote:
> On 05/27/2016 09:04 AM, Baolin Wang wrote:
>> Hi Milan,
>>
>> On 27 May 2016 at 14:31, Milan Broz wrote:
>>> On 05/25/2016 08:12 AM, Baolin Wang wrote:
>>>> Now some cipher hardware engines prefer to handle bu
Hi Milan,
On 27 May 2016 at 14:31, Milan Broz wrote:
> On 05/25/2016 08:12 AM, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block rather than one
>> sector (512 bytes) created by dm-crypt, cause these cipher engines can handle
>> the interm
> per-bio-data to avoid memory allocations in the IO path.
Make sense.
>
> On Wed, May 25 2016 at 2:12am -0400,
> Baolin Wang wrote:
>
>> In the current dm-crypt code, it is inefficient to map one segment (always one
>> sector) of one bio with only one scatterlist
On 25 May 2016 at 16:52, Ming Lei wrote:
>> /*
>> + * map a bio to scatterlist, return number of sg entries setup.
>> + */
>> +int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
>> + struct scatterlist *sglist,
>> + struct scatterlist **sg)
>> +{
>> +
In dm-crypt, one bio needs to be mapped to a scatterlist to improve the
hardware engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio with scatterlists.
Signed-off-by: Baolin Wang
---
block/blk-merge.c | 45
This patchset checks whether the cipher can support bulk mode; dm-crypt
will then choose how to send requests to the crypto layer according to
the cipher mode.
Looking forward to any comments and suggestions. Thanks.
Baolin Wang (3):
block: Introduce blk_bio_map_sg() to map one bio
crypto
framework can manage and process the requests automatically,
so remove the 'queue' and 'queue_task' things in omap des driver.
Signed-off-by: Baolin
---
drivers/crypto/Kconfig|1 +
drivers/crypto/omap-des.c | 97 -
2 files ch
Hi Robert,
On 5 April 2016 at 15:10, Baolin Wang wrote:
> Hi Robert,
>
> Sorry for the late reply.
>
> On 2 April 2016 at 23:00, Robert Jarzmik wrote:
>> Baolin Wang writes:
>>
>>> +/**
>>> + * sg_is_contiguous - Check if the scatterlists are conti
On 18 April 2016 at 16:41, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 04:40:36PM +0800, Baolin Wang wrote:
>>
>> Simply to say, now there are many different hardware engines for
>> different vendors, some engines can support bulk block but some can
>> not (or no cipher
On 18 April 2016 at 16:31, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 04:28:46PM +0800, Baolin Wang wrote:
>>
>> What I meaning is if the xts engine can support bulk block, then the
>> engine driver can select bulk mode to do encryption, but if their xts
>> engin
On 18 April 2016 at 16:17, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 04:14:48PM +0800, Baolin Wang wrote:
>> On 18 April 2016 at 16:04, Herbert Xu wrote:
>> > On Mon, Apr 18, 2016 at 03:58:59PM +0800, Baolin Wang wrote:
>> >>
>> >> That depends on
On 18 April 2016 at 16:04, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 03:58:59PM +0800, Baolin Wang wrote:
>>
>> That depends on the hardware engine. Some cipher hardware engines
>> (like xts(aes) engine) can handle the intermediate values (IV) by
>> themselves in one
On 18 April 2016 at 15:24, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 03:21:16PM +0800, Baolin Wang wrote:
>>
>> I don't think so, the dm-crypt can not send maximal requests at some
>> situations. For example, the 'cbc(aes)' cipher, it must be handled
>&
On 18 April 2016 at 15:04, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 02:02:51PM +0800, Baolin Wang wrote:
>>
>> If the crypto hardware engine can support bulk data
>> encryption/decryption, so the engine driver can select bulk mode to
>> handle the requests. I t
On 18 April 2016 at 13:45, Herbert Xu wrote:
> On Mon, Apr 18, 2016 at 01:31:09PM +0800, Baolin Wang wrote:
>>
>> We've tried to do this in dm-crypt, but it failed.
>> The dm-crypt maintainer explained to me that I should optimize the
>> driver, not add strange
Hi Herbert,
On 15 April 2016 at 21:48, Herbert Xu wrote:
> On Tue, Mar 15, 2016 at 03:47:58PM +0800, Baolin Wang wrote:
>> Now some cipher hardware engines prefer to handle bulk block by merging
>> requests
>> to increase the block size and thus increase the hardware engine
Hi Robert,
Sorry for the late reply.
On 2 April 2016 at 23:00, Robert Jarzmik wrote:
> Baolin Wang writes:
>
>> +/**
>> + * sg_is_contiguous - Check if the scatterlists are contiguous
>> + * @sga: SG entry
>> + * @sgb: SG entry
>> + *
>> + * Descr
If the crypto engine can support bulk mode, contiguous requests from one
block can be merged into one request to be handled by the crypto engine.
In that case, the crypto engine needs the sector number of each request
to do the merging.
Signed-off-by: Baolin Wang
---
drivers/md/dm
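The merging condition that makes the sector number necessary can be sketched as follows. Field and function names are illustrative stand-ins, not the ones the patch adds to dm-crypt:

```c
#include <stdint.h>
#include <stdbool.h>

#define SECTOR_SHIFT 9  /* 512-byte sectors */

/* Minimal stand-in for the per-request data the patch attaches: the
 * starting sector and the byte length of the request. */
struct bulk_req_stub {
    uint64_t sector;
    uint32_t len;
};

/* Two requests can be merged into one bulk request only if the second
 * starts exactly where the first ends; without the sector number the
 * engine cannot make this decision. */
bool reqs_are_contiguous(const struct bulk_req_stub *a,
                         const struct bulk_req_stub *b)
{
    return a->sector + (a->len >> SECTOR_SHIFT) == b->sector;
}
```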
increase the hardware engine processing speed.
This patch introduces some helper functions to help merge requests and
improve hardware engine efficiency.
Signed-off-by: Baolin Wang
---
crypto/ablk_helper.c | 135 ++
include/crypto/ablk_helper.h |3
(SECTOR_MODE) for
initializing omap aes engine.
Signed-off-by: Baolin Wang
---
crypto/Kconfig|1 +
crypto/crypto_engine.c| 122 +++--
drivers/crypto/omap-aes.c |2 +-
include/crypto/algapi.h | 23 -
4 files changed, 143
ty'
function to check if the sg table is empty.
Signed-off-by: Baolin Wang
---
include/linux/scatterlist.h | 33 +
lib/scatterlist.c | 69 +++
2 files changed, 102 insertions(+)
diff --git a/include/linux/scatterlist.
the sg_is_contiguous() function.
Baolin Wang (4):
scatterlist: Introduce some helper functions
crypto: Introduce some helper functions to help to merge requests
crypto: Introduce the bulk mode for crypto engine framework
md: dm-crypt: Initialize the sector number for one request
crypto
On 10 March 2016 at 17:42, Robert Jarzmik wrote:
>>
>>
>> Ah, sorry that's a mistake. It should check as below:
>> static inline bool sg_is_contiguous(struct scatterlist *sga, struct
>> scatterlist *sgb)
>> {
>> return (unsigned int)sg_virt(sga) + sga->length == (unsigned
>> int)sg_virt(sgb);
>>> + **/
>>> +static inline bool sg_is_contiguous(struct scatterlist *sga,
>>> + struct scatterlist *sgb)
>>> +{
>>> + return ((sga->page_link & ~0x3UL) + sga->offset + sga->length ==
>>> + (sgb->page_link & ~0x3UL));
>>> +}
>> I don't understand tha
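The corrected check from the exchange above can be modeled in stand-alone C. Here `struct sg_stub` is a stand-in for `struct scatterlist` (only the virtual address and length matter for this test), and `uintptr_t` replaces the `(unsigned int)` cast shown in the mail, which would truncate 64-bit pointers:

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in for struct scatterlist; 'virt' models what sg_virt() returns. */
struct sg_stub {
    void     *virt;
    unsigned  length;
};

/* Two entries are contiguous when the first ends exactly at the virtual
 * address where the second begins. */
bool sg_stub_is_contiguous(const struct sg_stub *sga,
                           const struct sg_stub *sgb)
{
    return (uintptr_t)sga->virt + sga->length == (uintptr_t)sgb->virt;
}
```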
Hi Robert,
On 4 March 2016 at 03:15, Robert Jarzmik wrote:
> Baolin Wang writes:
>
>> @@ -212,6 +212,37 @@ static inline void sg_unmark_end(struct scatterlist *sg)
>> }
>>
>> /**
>> + * sg_is_contiguous - Check if the scatterlists are contiguous
>>
Now some cipher hardware engines prefer to handle bulk blocks by merging
requests to increase the block size, and thus increase the hardware engine
processing speed.
This patchset introduces request bulk mode to help the crypto hardware drivers
improve in efficiency.
Baolin Wang (4
to check if two
scatterlists are contiguous, 'sg_alloc_empty_table()' function to
allocate one empty sg table, 'sg_add_sg_to_table()' function to add
one scatterlist into sg table and 'sg_table_is_empty' function to
check if the sg table is empty.
Signed-off-by: Baolin
On 1 February 2016 at 22:33, Herbert Xu wrote:
> On Tue, Jan 26, 2016 at 08:25:37PM +0800, Baolin Wang wrote:
>> Now block cipher engines need to implement and maintain their own
>> queue/thread
>> for processing requests, moreover currently helpers provided for only the
&
This patch introduces the crypto_queue_len() helper function to get the
current length of the crypto queue list.
Signed-off-by: Baolin Wang
---
include/crypto/algapi.h |4
1 file changed, 4 insertions(+)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index
hings like running the
request immediately, DMA map it or providing a thread to process the queue in)
even though a lot of that code really shouldn't vary that much from device to
device.
This patch introduces the crypto engine framework to help the crypto hardware
drivers to queue requests.
B
remove the 'queue' and 'queue_task' things in
omap aes driver.
Signed-off-by: Baolin Wang
---
drivers/crypto/Kconfig|1 +
drivers/crypto/omap-aes.c | 97 -
2 files changed, 45 insertions(+), 53 deletions(-)
diff --git a/dr
could use. And this framework is patterned
on the SPI code and has worked out well there.
(https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/
drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0)
Signed-off-by: Baolin Wang
---
crypto/Kconfig |3 +
crypt