The following 4 patches enable fsl-dma and talitos to offload RAID
operations, improving RAID performance and balancing CPU load. In
iozone testing, write performance improves by 40% and CPU load is
reduced by 8%.
Qiang Liu (4):
Talitos: move the data structure into header file
An error occurs when testing with a large amount of data:
"DMA-API: device driver tries to sync DMA memory it has not allocated";
"DMA-API: debugging out of memory - disabling"
The DMA-mapped memory of request->desc is not released by the right
device; it should be unmapped with private->dev, not dev.
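A minimal sketch of the kind of fix described above, assuming a
hypothetical request layout with a private pointer back to the device
that created the mapping (none of these names are the actual
talitos/fsl-dma identifiers):

#include <linux/dma-mapping.h>

/* Hypothetical request layout, for illustration only. */
struct my_request {
	struct my_private {
		struct device *dev;	/* device that created the mapping */
	} *private;
	dma_addr_t desc_dma;
	size_t desc_len;
};

/*
 * Unmap the descriptor with the device that originally mapped it
 * (private->dev), not whatever device the completion path happens to
 * hold; using the wrong device triggers the DMA-API warnings quoted
 * above when CONFIG_DMA_API_DEBUG is enabled.
 */
static void release_request_desc(struct my_request *request)
{
	dma_unmap_single(request->private->dev,		/* was: dev */
			 request->desc_dma, request->desc_len,
			 DMA_BIDIRECTIONAL);
}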
Cc: Herbert Xu
Cc:
- remove the DMA_INTERRUPT capability, because fsl-dma does not support
this operation and an exception is thrown if talitos is used to compute
xor at the same time (a sketch of the pattern follows this list);
- change the release process of the dma descriptors to avoid an exception
when CONFIG_NET_DMA is enabled; release dma descriptors from 1st to last s
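A hedged sketch of what dropping the capability looks like in a
dmaengine driver's probe path; this is the general pattern, not the
actual fsl-dma diff, and the function name is made up:

#include <linux/dmaengine.h>

/*
 * Advertise only the operations the hardware really supports.  Without
 * DMA_INTERRUPT in the mask, async_tx will not ask this channel for
 * interrupt descriptors it cannot generate.
 */
static void my_dma_set_caps(struct dma_device *dma_dev)
{
	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
	/* dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);   <-- dropped */
}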
Expose Talitos's XOR functionality to be used for RAID parity
calculation via the Async_tx layer.
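To show what "via the Async_tx layer" means for a RAID user, here is a
minimal, hedged async_xor() sketch; the offload channel (talitos, once
it advertises DMA_XOR) is selected transparently by async_tx, and the
function below is an illustration rather than code from this series:

#include <linux/async_tx.h>

/*
 * Compute parity = XOR of src_cnt source pages, using whichever xor
 * offload channel async_tx finds; it falls back to the CPU when no
 * channel is available.
 */
static struct dma_async_tx_descriptor *
compute_parity(struct page *parity, struct page **srcs, int src_cnt,
	       size_t len)
{
	struct async_submit_ctl submit;

	/* no dependency, no completion callback, no scribble buffer */
	init_async_submit(&submit, ASYNC_TX_XOR_ZERO_DST, NULL, NULL, NULL,
			  NULL);
	return async_xor(parity, srcs, 0, src_cnt, len, &submit);
}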
Cc: Herbert Xu
Cc: David S. Miller
Signed-off-by: Dipen Dudhat
Signed-off-by: Maneesh Gupta
Signed-off-by: Kim Phillips
Signed-off-by: Vishnu Suresh
Signed-off-by: Qiang Liu
---
drivers/crypto
Move the declarations of the talitos data structures into talitos.h.
Cc: Herbert Xu
Cc: David S. Miller
Signed-off-by: Qiang Liu
---
drivers/crypto/talitos.c | 108 --
drivers/crypto/talitos.h | 108 ++
2 files changed, 108 insertions(+), 108 deletions(-)
Hi, Tejun
Just nitpicks..
On Mon, 9 Jul 2012 11:41:51 -0700, Tejun Heo wrote:
> Move worklist and all worker management fields from global_cwq into
> the new struct worker_pool. worker_pool points back to the containing
> gcwq. worker and cpu_workqueue_struct are updated to point to
> worker_pool instead of gcwq too.
On 07/09/2012 10:54 AM, Jussi Kivilinna wrote:
> Quoting Randy Dunlap :
>
>> On 07/02/2012 12:23 AM, Stephen Rothwell wrote:
>>
>>> Hi all,
>>>
>>> Changes since 20120629:
>>>
>>
>>
>> on i386:
>>
>>
>> ERROR: "__divdi3" [drivers/crypto/hifn_795x.ko] undefined!
>>
>
> This is caused by commit feb7b7ab928afa97a79a9c424e4e0691f49d63be.
GCWQ_MANAGE_WORKERS, GCWQ_MANAGING_WORKERS and GCWQ_HIGHPRI_PENDING
are per-pool properties. Add worker_pool->flags and make the above
three flags per-pool flags.
The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.
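A rough sketch of the shape of this change; the flag names below are
illustrative placeholders for the per-pool versions of the three gcwq
flags and are not necessarily the identifiers used in the patch:

/* Per-pool flags, kept in worker_pool->flags instead of gcwq->flags. */
enum pool_flags {
	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
	POOL_MANAGING_WORKERS	= 1 << 1,	/* currently managing workers */
	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
};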
Modify all functions which deal with per-pool properties to pass
around @pool instead of @gcwq or @cpu.
The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.
Signed-off-by: Tejun Heo
---
kernel/workqueue.c | 218 +++
Unbound wqs aren't concurrency-managed and try to execute work items
as soon as possible. This is currently achieved by implicitly setting
%WQ_HIGHPRI on all unbound workqueues; however, WQ_HIGHPRI
implementation is about to be restructured and this usage won't be
valid anymore.
Add an explicit c
Currently, WQ_HIGHPRI workqueues share the same worker pool as the
normal priority ones. The only difference is that work items from
highpri wq are queued at the head instead of tail of the worklist. On
pathological cases, this simplistic highpri implementation doesn't
seem to be sufficient.
Fo
Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.
Note that nr_running is a per-pool property and is converted to an array
with
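For reference, a minimal sketch of what such an iterator looks like
while a gcwq still embeds a single pool; this mirrors the description
above and is not necessarily the exact macro from the patch:

/* With one pool embedded in the gcwq, visit &gcwq->pool once and stop. */
#define NR_WORKER_POOLS		1

#define for_each_worker_pool(pool, gcwq)			\
	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)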
WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist. Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.
Now that workqueue code has been updated
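As a hedged illustration of the old behaviour described above (the real
code chose the insertion position through a helper, so this is a
simplification and the function name is made up):

#include <linux/list.h>
#include <linux/workqueue.h>

/*
 * Simplified sketch of the old WQ_HIGHPRI behaviour: highpri work is
 * spliced to the front of the shared worklist instead of the back;
 * nothing else about its execution differs.  'worklist' stands in for
 * the gcwq's shared list.
 */
static void queue_on_shared_worklist(struct workqueue_struct *wq,
				     struct work_struct *work,
				     struct list_head *worklist)
{
	if (wq->flags & WQ_HIGHPRI)
		list_add(&work->entry, worklist);
	else
		list_add_tail(&work->entry, worklist);
}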
Move worklist and all worker management fields from global_cwq into
the new struct worker_pool. worker_pool points back to the containing
gcwq. worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.
This change is mechanical and doesn't introduce any functional difference.
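An abridged, hedged sketch of the resulting layout; the field names
follow the description above, but the set of fields is trimmed and not
guaranteed to match the patch line for line:

#include <linux/idr.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Worker-management state that used to sit directly in global_cwq now
 * lives in worker_pool, which points back at its containing gcwq.
 */
struct worker_pool {
	struct global_cwq	*gcwq;		/* the owning gcwq */
	unsigned int		flags;		/* per-pool flags */

	struct list_head	worklist;	/* pending work items */
	int			nr_workers;	/* total number of workers */
	int			nr_idle;	/* currently idle workers */
	struct list_head	idle_list;	/* list of idle workers */

	struct ida		worker_ida;	/* worker IDs for assignment */
};

struct global_cwq {
	spinlock_t		lock;		/* the gcwq lock */
	unsigned int		cpu;		/* the associated cpu */
	struct worker_pool	pool;		/* the worker pool */
	/* busy-worker hash, trustee state, ... */
};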
Quoting Randy Dunlap :
On 07/02/2012 12:23 AM, Stephen Rothwell wrote:
Hi all,
Changes since 20120629:
on i386:
ERROR: "__divdi3" [drivers/crypto/hifn_795x.ko] undefined!
This is caused by commit feb7b7ab928afa97a79a9c424e4e0691f49d63be.
hifn_795x has "DIV_ROUND_UP(NSEC_PER_SEC, de
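The underlying issue is the usual 32-bit one: a 64-bit division written
with "/" makes gcc emit a call to the libgcc helper __divdi3, which the
kernel does not provide. A hedged sketch of the general fix pattern (not
the actual hifn_795x change; both operands here are hypothetical):

#include <linux/math64.h>	/* div64_u64() */

/*
 * On 32-bit builds, 64-bit divisions must go through the kernel's
 * helpers (div_u64(), div64_u64(), do_div()); a plain '/' pulls in
 * __divdi3/__udivdi3 and breaks the link.
 */
static u64 div_round_up_u64(u64 dividend, u64 divisor)
{
	return div64_u64(dividend + divisor - 1, divisor);
}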
On Mon, 9 Jul 2012 03:38:54 -0500
Geanta Neag Horia Ioan-B05471 wrote:
> On Mon, 9 Jul 2012 11:19:35 +0300, Herbert Xu
> wrote:
> > On Mon, Jul 09, 2012 at 11:17:43AM +0300, Horia Geanta wrote:
> >> In case of AEAD, some crypto engines expect assoc data and iv to be
> >> contiguous. This is how native IPsec works; make testmgr's behaviour the same.
Hi,
On Sun, Jul 08, 2012 at 01:38:47PM +0800, cloudy.linux wrote:
> Newest result. Still couldn't boot up. This time the source was cloned
> from your git repository.
>
> MV-DMA: window at bar0: target 0, attr 14, base 0, size 800
> MV-DMA: window at bar1: target 5, attr 0, base f220, si
On Mon, 9 Jul 2012 11:19:35 +0300, Herbert Xu
wrote:
> On Mon, Jul 09, 2012 at 11:17:43AM +0300, Horia Geanta wrote:
>> In case of AEAD, some crypto engines expect assoc data and iv to be
>> contiguous. This is how native IPsec works; make testmgr's behaviour
>> the same. (Alternative would be to fix this in the crypto engine drivers,
>> but this is pricy since it would involve memory allocation and copy in the
>> hot path.)
On Mon, Jul 09, 2012 at 11:17:43AM +0300, Horia Geanta wrote:
> In case of AEAD, some crypto engines expect assoc data and iv to be
> contiguous.
> This is how native IPsec works; make testmgr's behaviour the same.
> (Alternative would be to fix this in the crypto engine drivers, but this is
> pricy since it would involve memory allocation and copy in the hot path.)
In case of AEAD, some crypto engines expect assoc data and iv to be contiguous.
This is how native IPsec works; make testmgr's behaviour the same.
(Alternative would be to fix this in the crypto engine drivers, but this is
pricy since it would involve memory allocation and copy in the hot path.)
S
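To make the "contiguous" requirement concrete, here is a hedged sketch
of the kind of buffer setup this implies for test code: place the IV
immediately after the associated data in a single allocation so both
land in one scatterlist entry. The function and variable names are
illustrative and not taken from testmgr:

#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>

/*
 * Build assoc||iv in one contiguous buffer and map it with a single
 * scatterlist entry, mimicking how native IPsec hands the data to the
 * crypto engine.
 */
static int map_assoc_and_iv(struct scatterlist *sg,
			    const u8 *assoc, unsigned int assoclen,
			    const u8 *iv, unsigned int ivlen)
{
	u8 *buf = kmalloc(assoclen + ivlen, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	memcpy(buf, assoc, assoclen);
	memcpy(buf + assoclen, iv, ivlen);	/* iv directly follows assoc */

	sg_init_one(sg, buf, assoclen + ivlen);
	return 0;
}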