On Wed, May 25, 2016 at 01:17:21 +0300, Sergey Fedorov wrote:
> >> With this implementation we could:
> >> (1) get rid of qht_map::stale
> >> (2) don't waste cycles waiting for resize to complete
> > I'll include this in v6.
>
> How does it perform?
Not much of a difference, since resize is a slow path.

On 25/05/16 01:07, Emilio G. Cota wrote:
> On Mon, May 23, 2016 at 23:28:27 +0300, Sergey Fedorov wrote:
>> What if we turn qht::lock into a mutex and change the function as follows:
>>
>> static inline
>> struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht,
>>                                              uint32_t hash,
>>                                              struct qht_map **pmap)
>>
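For illustration, here is a self-contained model of the proposed scheme. This is a sketch, not the patch's code: pthread mutexes stand in for qemu_mutex and the per-bucket qemu_spin lock, and the qht/qht_map types are simplified stand-ins with a fixed bucket count.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define DEMO_N_BUCKETS 16 /* illustrative fixed size */

struct qht_bucket {
    pthread_mutex_t lock; /* stand-in for the per-bucket spinlock */
};

struct qht_map {
    struct qht_bucket buckets[DEMO_N_BUCKETS];
};

struct qht {
    pthread_mutex_t lock; /* the proposed qht::lock mutex */
    struct qht_map *map;
};

static struct qht_bucket *qht_map_to_bucket(struct qht_map *map, uint32_t hash)
{
    return &map->buckets[hash % DEMO_N_BUCKETS];
}

/*
 * Holding ht->lock across the map read and the bucket-lock acquisition
 * means a resizer (which would also have to take ht->lock) cannot retire
 * the map out from under us, so no stale check or retry loop is needed.
 */
static struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht,
                                                    uint32_t hash,
                                                    struct qht_map **pmap)
{
    struct qht_bucket *b;

    pthread_mutex_lock(&ht->lock);
    *pmap = ht->map;
    b = qht_map_to_bucket(*pmap, hash);
    pthread_mutex_lock(&b->lock);
    pthread_mutex_unlock(&ht->lock);
    return b;
}

/* demo-only initializer, not part of the proposal */
static void qht_demo_init(struct qht *ht, struct qht_map *map)
{
    pthread_mutex_init(&ht->lock, NULL);
    for (int i = 0; i < DEMO_N_BUCKETS; i++) {
        pthread_mutex_init(&map->buckets[i].lock, NULL);
    }
    ht->map = map;
}
```

The trade-off being discussed: the fast path always takes a global mutex briefly, but it avoids the stale-map retry and the qht_map::stale bookkeeping.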
On 14/05/16 06:34, Emilio G. Cota wrote:
> +/*
> + * Get a head bucket and lock it, making sure its parent map is not stale.
> + * @pmap is filled with a pointer to the bucket's parent map.
> + *
> + * Unlock with qemu_spin_unlock(&b->lock).
> + */
> +static inline
> +struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht, uint32_t hash,
> +                                             struct qht_map **pmap)

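The contract quoted above (lock a head bucket while making sure its parent map is not stale) can be sketched as a lock-then-validate retry loop. This is a simplified model rather than the patch's code: a pthread mutex stands in for the bucket spinlock, a pointer comparison stands in for the staleness check, and it assumes the resizer takes every old bucket lock before retiring a map.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

#define DEMO_N_BUCKETS 16 /* illustrative fixed size */

struct qht_bucket {
    pthread_mutex_t lock; /* stand-in for the per-bucket spinlock */
};

struct qht_map {
    struct qht_bucket buckets[DEMO_N_BUCKETS];
};

struct qht {
    _Atomic(struct qht_map *) map; /* swapped atomically by resize */
};

static struct qht_bucket *qht_map_to_bucket(struct qht_map *map, uint32_t hash)
{
    return &map->buckets[hash % DEMO_N_BUCKETS];
}

/*
 * Lock-then-validate: take the bucket lock, then confirm the map we used
 * is still current. If a concurrent resize installed a new map, drop the
 * lock and retry against the new map. A resizer that must acquire every
 * old bucket lock before retiring the map (as assumed here) cannot free
 * a map while we still hold one of its bucket locks.
 */
static struct qht_bucket *qht_bucket_lock__no_stale(struct qht *ht,
                                                    uint32_t hash,
                                                    struct qht_map **pmap)
{
    for (;;) {
        struct qht_map *map = atomic_load(&ht->map);
        struct qht_bucket *b = qht_map_to_bucket(map, hash);

        pthread_mutex_lock(&b->lock);
        if (map == atomic_load(&ht->map)) { /* common, non-stale case */
            *pmap = map;
            return b;
        }
        pthread_mutex_unlock(&b->lock); /* map went stale: retry */
    }
}

/* demo-only initializer */
static void qht_demo_init(struct qht *ht, struct qht_map *map)
{
    for (int i = 0; i < DEMO_N_BUCKETS; i++) {
        pthread_mutex_init(&map->buckets[i].lock, NULL);
    }
    atomic_store(&ht->map, map);
}
```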
The appended patch increases write scalability by allowing concurrent writes
to separate buckets. It also removes the need for an external lock when
operating on the hash table.
Lookup code remains the same; the insert and remove fast paths get an extra
smp_rmb() before reading the map pointer, plus a likely() hint for the
common case of the map not being stale.
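The ordering the smp_rmb() provides can be modeled with C11 acquire/release atomics. A standalone sketch (the demo_* names are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdatomic.h>

struct demo_map {
    int n_buckets; /* stands in for the map's contents */
};

static _Atomic(struct demo_map *) demo_current_map;

/*
 * Resize side: fully initialize the new map, then publish the pointer
 * with release semantics so the map's contents become visible to other
 * CPUs no later than the pointer itself.
 */
static void demo_publish(struct demo_map *new_map)
{
    atomic_store_explicit(&demo_current_map, new_map, memory_order_release);
}

/*
 * Fast-path side: the acquire load plays the role of the smp_rmb()
 * described above, ordering the map-pointer read before any subsequent
 * reads of the map's contents.
 */
static struct demo_map *demo_read_map(void)
{
    return atomic_load_explicit(&demo_current_map, memory_order_acquire);
}
```

Without the read-side barrier, a reader could observe the new pointer but stale contents on weakly ordered architectures.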