On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
> This test was a single CPU benchmark with no congestion or concurrency.
> But the code was compiled with CONFIG_NUMA=y.
>
> I don't know the SLAB code very well, but the kmem_cache_node->list_lock
> looks like a scalability issue. I guess that i
On Tue, 8 Sep 2015 10:22:32 -0500 (CDT)
Christoph Lameter wrote:
> On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
>
> > Also notice how well bulking maintains the performance when the bulk
> > size increases (which is a sore spot for the slub allocator).
>
> Well, you are not actually completing the free action in SLAB. This is
> simply queueing the item to be freed later
Implement a basic approach to bulking in the slab allocator: simply
use local_irq_{disable,enable} and call the single-object alloc/free
functions in a loop. This simple implementation approach is surprisingly fast.
Notice the normal slab fastpath is 96 cycles (24.119 ns). The table below
shows that single object bulking on