[PATCH 05/10] workqueue: make workqueue->name[] fixed len

2013-03-19 Thread Tejun Heo
Currently workqueue->name[] is of flexible length. We want to use the flexible field for something more useful and there isn't much benefit in allowing arbitrary name lengths anyway. Make it fixed length, capped at 24 bytes. Signed-off-by: Tejun Heo --- kernel/workqueue.c | 19 ---
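A minimal userspace sketch of the change described above: the name becomes a fixed-size buffer filled with truncating `snprintf()`, freeing the struct's tail for other use. The struct and helper names here are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define WQ_NAME_LEN 24 /* the patch caps the name at 24 bytes */

/* Before: the struct ended in a flexible array member sized per name.
 * After: a fixed-size buffer, so the flexible slot is free for other use. */
struct wq_sketch {
    int flags;
    char name[WQ_NAME_LEN];
};

/* Copy a name, silently truncating to the fixed capacity. */
static void wq_set_name(struct wq_sketch *wq, const char *name)
{
    snprintf(wq->name, sizeof(wq->name), "%s", name);
}
```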

[PATCH 09/10] workqueue: implement NUMA affinity for unbound workqueues

2013-03-19 Thread Tejun Heo
Currently, an unbound workqueue has single current, or first, pwq (pool_workqueue) to which all new work items are queued. This often isn't optimal on NUMA machines as workers may jump around across node boundaries and work items get assigned to workers without any regard to NUMA affinity. This p

[PATCH 06/10] workqueue: move hot fields of workqueue_struct to the end

2013-03-19 Thread Tejun Heo
Move wq->flags and ->cpu_pwqs to the end of workqueue_struct and align them to the cacheline. These two fields are used in the work item issue path and thus hot. The scheduled NUMA affinity support will add a dispatch table at the end of workqueue_struct and relocating these two fields will allow u
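The layout idea can be sketched in plain C11: cold fields first, hot issue-path fields cacheline-aligned at the tail. The cacheline size and field names are assumptions for illustration (the kernel uses its own `____cacheline_aligned` annotation).

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

#define CACHELINE 64 /* assumed cacheline size for illustration */

/* Hypothetical layout: cold configuration fields up front, then the two
 * hot issue-path fields aligned to a cacheline at the end, leaving room
 * to append a per-node dispatch table in the same hot region. */
struct wq_layout {
    void *cold_list;
    long cold_counters[4];
    /* hot fields, cacheline-aligned at the tail */
    alignas(CACHELINE) unsigned int flags;
    void *cpu_pwqs;
};
```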

[PATCH 04/10] workqueue: add workqueue->unbound_attrs

2013-03-19 Thread Tejun Heo
Currently, when exposing attrs of an unbound workqueue via sysfs, the workqueue_attrs of first_pwq() is used as that should equal the current state of the workqueue. The planned NUMA affinity support will make unbound workqueues make use of multiple pool_workqueues for different NUMA nodes and the

[PATCH 03/10] workqueue: determine NUMA node of workers according to the allowed cpumask

2013-03-19 Thread Tejun Heo
When worker tasks are created using kthread_create_on_node(), currently only per-cpu ones have the matching NUMA node specified. All unbound workers are always created with NUMA_NO_NODE. Now that an unbound worker pool may have an arbitrary cpumask associated with it, this isn't optimal. Add pool
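The node-detection logic described above can be sketched with plain bitmasks: if a pool's allowed CPUs all fall inside one node's possible-CPU mask, use that node, otherwise fall back to NUMA_NO_NODE. The kernel uses cpumask_t and cpumask_subset(); the masks and helper here are illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define NUMA_NO_NODE (-1)
#define NR_NODES 2

/* Hypothetical per-node "possible CPUs" masks as plain 64-bit bitmasks. */
static const uint64_t node_cpumask[NR_NODES] = {
    0x0Full, /* node 0: CPUs 0-3 */
    0xF0ull, /* node 1: CPUs 4-7 */
};

/* Return the node whose possible-CPU mask covers all of the pool's
 * allowed CPUs, or NUMA_NO_NODE when the mask spans nodes. */
static int pool_detect_node(uint64_t allowed)
{
    for (int node = 0; node < NR_NODES; node++)
        if ((allowed & ~node_cpumask[node]) == 0)
            return node;
    return NUMA_NO_NODE;
}
```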

[PATCH 02/10] workqueue: drop 'H' from kworker names of unbound worker pools

2013-03-19 Thread Tejun Heo
Currently, all workqueue workers which have a negative nice value have 'H' postfixed to their names. This is necessary for per-cpu workers as they use the CPU number instead of pool->id to identify the pool and the 'H' postfix is the only thing distinguishing normal and highpri workers. As workers f
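The naming rule reads as follows in a small sketch: per-cpu workers keep the 'H' suffix because highpri and normal pools share the same CPU number, while unbound workers are named by pool id, which is already unique. The helper signature is invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Format a kworker name. Per-cpu: "kworker/<cpu>:<id>" plus 'H' for
 * highpri. Unbound: "kworker/u<pool_id>:<id>", no 'H' needed since the
 * pool id alone distinguishes the pools. */
static void worker_name(char *buf, size_t len, bool per_cpu,
                        int cpu_or_pool_id, int worker_id, bool highpri)
{
    if (per_cpu)
        snprintf(buf, len, "kworker/%d:%d%s",
                 cpu_or_pool_id, worker_id, highpri ? "H" : "");
    else
        snprintf(buf, len, "kworker/u%d:%d", cpu_or_pool_id, worker_id);
}
```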

[PATCH 10/10] workqueue: update sysfs interface to reflect NUMA awareness and a kernel param to disable NUMA affinity

2013-03-19 Thread Tejun Heo
Unbound workqueues are now NUMA aware. Let's add some control knobs and update sysfs interface accordingly. * Add kernel param workqueue.numa_disable which disables NUMA affinity globally. * Replace sysfs file "pool_id" with "pool_ids" which contain node:pool_id pairs. This change is userla
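A sketch of what the new "pool_ids" sysfs output might look like, assuming space-separated node:pool_id pairs; the exact format and function name are assumptions, not taken from the patch.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NR_NODES 2

/* Emit one "node:pool_id" pair per node, space-separated, newline
 * terminated (assumed format for the "pool_ids" sysfs file). */
static int format_pool_ids(char *buf, size_t len, const int pool_id[NR_NODES])
{
    int written = 0;
    for (int node = 0; node < NR_NODES; node++)
        written += snprintf(buf + written, len - written, "%d:%d%s",
                            node, pool_id[node],
                            node == NR_NODES - 1 ? "\n" : " ");
    return written;
}
```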

[PATCH 07/10] workqueue: map an unbound workqueue to multiple per-node pool_workqueues

2013-03-19 Thread Tejun Heo
Currently, an unbound workqueue has only one "current" pool_workqueue associated with it. It may have multiple pool_workqueues but only the first pool_workqueue serves new work items. For NUMA affinity, we want to change this so that there are multiple current pool_workqueues serving different NU
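A minimal sketch of the per-node dispatch described above: each node indexes into a table of pool_workqueues, with a default pwq as fallback for nodes without a dedicated one. The field names (`numa_pwq_tbl`, `dfl_pwq`) follow the patchset's apparent naming but are assumptions here.

```c
#include <assert.h>
#include <stddef.h>

#define NR_NODES 2

struct pwq_sketch { int node; };

/* Hypothetical per-node dispatch table on the workqueue. */
struct wq_sketch {
    struct pwq_sketch *dfl_pwq;                 /* fallback pwq */
    struct pwq_sketch *numa_pwq_tbl[NR_NODES];  /* per-node pwqs */
};

/* Pick the pwq serving @node, falling back to the default one. */
static struct pwq_sketch *pwq_by_node(struct wq_sketch *wq, int node)
{
    struct pwq_sketch *pwq = wq->numa_pwq_tbl[node];
    return pwq ? pwq : wq->dfl_pwq;
}
```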

[PATCH 08/10] workqueue: break init_and_link_pwq() into two functions and introduce alloc_unbound_pwq()

2013-03-19 Thread Tejun Heo
Break init_and_link_pwq() into init_pwq() and link_pwq() and move unbound-workqueue specific handling into apply_workqueue_attrs(). Also, factor out unbound pool and pool_workqueue allocation into alloc_unbound_pwq(). This reorganization is to prepare for NUMA affinity and doesn't introduce any fu

[PATCH 01/10] workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]

2013-03-19 Thread Tejun Heo
Unbound workqueues are going to be NUMA-affine. Add wq_numa_tbl_len and wq_numa_possible_cpumask[] in preparation. The former is the highest NUMA node ID + 1 and the latter is masks of possible CPUs for each NUMA node. This patch only introduces these. Future patches will make use of them. Si
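The two variables can be built in one pass over the possible CPUs, as in this userspace sketch; the CPU-to-node table stands in for the kernel's cpu_to_node(), and plain 64-bit bitmasks stand in for cpumask_t.

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 8

/* Hypothetical CPU->node mapping for illustration. */
static const int cpu_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static int wq_numa_tbl_len;                   /* highest node ID + 1 */
static uint64_t wq_numa_possible_cpumask[8];  /* per-node CPU bitmask */

/* Walk the possible CPUs once, recording the table length and each
 * node's possible-CPU mask. */
static void wq_numa_init(void)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        int node = cpu_node[cpu];
        if (node + 1 > wq_numa_tbl_len)
            wq_numa_tbl_len = node + 1;
        wq_numa_possible_cpumask[node] |= 1ull << cpu;
    }
}
```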

[PATCHSET wq/for-3.10] workqueue: NUMA affinity for unbound workqueues

2013-03-19 Thread Tejun Heo
Hello, There are two types of workqueues - per-cpu and unbound. The former is bound to each CPU and the latter isn't bound to any by default. While the recently added attrs support allows unbound workqueues to be confined to a subset of CPUs, it still is quite cumbersome for applications where

Re: [PATCH] Fix x509_key_preparse() not to reject keys outside their validity time range

2013-03-19 Thread Alexander Holler
On 14.03.2013 13:24, David Woodhouse wrote: The x509_key_preparse() function will refuse to even *parse* a certificate when the system clock happens to be set to a time before the ValidFrom or after the ValidTo date. This is wrong. If date checks are to be done, they need to be done at the tim

Re: [PATCH 0/2] lib,crypto: Add lz4 compressor module and crypto API

2013-03-19 Thread Andrew Morton
On Tue, 19 Mar 2013 15:42:04 +0100 Yann Collet wrote: > Thanks for pointing that out. > I've been looking into the document pointed by Andrew, > and here is my understanding : > > Signed-off-by is a one-line, so in this case : > > Signed-off-by: Yann Collet > > > or > > Signed-off-by follo

Re: [PATCH] [char] random: fix priming of last_data

2013-03-19 Thread Neil Horman
On Tue, Mar 19, 2013 at 12:18:09PM -0400, Jarod Wilson wrote: > Commit ec8f02da9e added priming of last_data per fips requirements. > Unfortunately, it did so in a way that can lead to multiple threads all > incrementing nbytes, but only one actually doing anything with the extra > data, which lead

Re: [PATCH] [char] random: fix priming of last_data

2013-03-19 Thread Jarod Wilson
On Tue, Mar 19, 2013 at 12:18:09PM -0400, Jarod Wilson wrote: > Commit ec8f02da9e added priming of last_data per fips requirements. > Unfortunately, it did so in a way that can lead to multiple threads all > incrementing nbytes, but only one actually doing anything with the extra > data, which lead

[PATCH] [char] random: fix priming of last_data

2013-03-19 Thread Jarod Wilson
Commit ec8f02da9e added priming of last_data per fips requirements. Unfortunately, it did so in a way that can lead to multiple threads all incrementing nbytes, but only one actually doing anything with the extra data, which leads to some fun random corruption and panics. The fix is to simply do
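A userspace analogy of the race being fixed: if the one-time priming of last_data runs under a lock with a done-flag, concurrent callers cannot each account for the extra bytes while only one consumes them. Names and structure here are illustrative, not the driver's actual code.

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

static pthread_mutex_t prime_lock = PTHREAD_MUTEX_INITIALIZER;
static int primed;
static unsigned char last_data[10];
static int prime_calls; /* instrumentation for the sketch */

/* Prime last_data exactly once, under the lock, so a second caller
 * neither re-primes nor double-counts the extra bytes. */
static void maybe_prime_last_data(const unsigned char *seed)
{
    pthread_mutex_lock(&prime_lock);
    if (!primed) {
        memcpy(last_data, seed, sizeof(last_data));
        prime_calls++;
        primed = 1;
    }
    pthread_mutex_unlock(&prime_lock);
}
```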

Re: [PATCH v3 2/2] crypto: sahara: Add driver for SAHARA2 accelerator.

2013-03-19 Thread javier Martin
Hi Herbert, would you please merge this driver, or is there anything else you want me to address first? Regards. On 1 March 2013 12:37, Javier Martin wrote: > SAHARA2 HW module is included in the i.MX27 SoC from > Freescale. It is capable of performing cipher algorithms > such as AES, 3DES..., ha