On 24/09/2016 22:44, Richard Henderson wrote:
> On 09/24/2016 04:51 AM, Paolo Bonzini wrote:
>>
>>
>> ----- Original Message -----
>>> From: "Richard Henderson" <[email protected]>
>>> To: "Paolo Bonzini" <[email protected]>, [email protected]
>>> Cc: "serge fdrv" <[email protected]>, [email protected], "alex
>>> bennee" <[email protected]>, "sergey fedorov"
>>> <[email protected]>
>>> Sent: Friday, September 23, 2016 8:06:09 PM
>>> Subject: Re: [Qemu-devel] [PATCH 15/16] tcg: Make tb_flush() thread safe
>>>
>>> On 09/23/2016 12:31 AM, Paolo Bonzini wrote:
>>>> +    unsigned tb_flush_req = (unsigned) (uintptr_t) data;
>>>
>>> Extra cast?
>>>
>>>> -    tcg_ctx.tb_ctx.tb_flush_count++;
>>>> +    atomic_inc(&tcg_ctx.tb_ctx.tb_flush_count);
>>>
>>> Since this is the only place this value is incremented, and we're
>>> under a
>>> lock,
>>> it should be cheaper to use
>>>
>>>    atomic_mb_set(&tcg_ctx.tb_ctx.tb_flush_count, tb_flush_req + 1);
>>
>> atomic_set will do even.  Though it's not really a fast path, which is
>> why I went for atomic_inc.
> 
> Don't we need the flush to be complete before the new count is seen? 
> That's why I was suggesting the mb_set.

You're right that the mb_set is more correct; a store-release would be
even better.

Actually even atomic_set works, though in a fairly surprising manner:
the final check is done by do_tb_flush under the mutex, so it does wait
for the flush to be complete.

If tb_flush_count is exposed too early to tb_flush, all that can happen
is that do_tb_flush sees a tb_flush_req that's more recent than it
should.  do_tb_flush then incorrectly redoes the flush, but that's not a
disaster.

But I'm changing it to mb_set as you suggested.

Paolo
