On Sun, Aug 23, 2015 at 18:14:58 -0700, Paolo Bonzini wrote:
> On 23/08/2015 17:23, Emilio G. Cota wrote:
> > This paves the way for a lockless tb_find_fast.
> >
> > Signed-off-by: Emilio G. Cota <[email protected]>
> > ---
(snip)
> > @@ -1707,12 +1735,14 @@ void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr)
> >      /* Discard jump cache entries for any tb which might potentially
> >         overlap the flushed page. */
> >      i = tb_jmp_cache_hash_page(addr - TARGET_PAGE_SIZE);
> > +    seqlock_write_lock(&cpu->tb_jmp_cache_sequence);
> >      memset(&cpu->tb_jmp_cache[i], 0,
> >             TB_JMP_PAGE_SIZE * sizeof(TranslationBlock *));
> >
> >      i = tb_jmp_cache_hash_page(addr);
> >      memset(&cpu->tb_jmp_cache[i], 0,
> >             TB_JMP_PAGE_SIZE * sizeof(TranslationBlock *));
> > +    seqlock_write_unlock(&cpu->tb_jmp_cache_sequence);
> >  }
> >
> > void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
> >
>
> I'm not sure how the last three patches compare with the existing "tcg:
> move tb_find_fast outside the tb_lock critical section"?
The seqlock for tb_jmp_cache becomes necessary the moment the
array can be wiped out with a memset(), as shown above. That
function (tb_flush_jmp_cache) is called by tlb_flush_page,
which has many callers.
One could argue that we could require tlb_flush_page to be
a) always called by the owner thread, or b) called only while
all other CPUs are paused.
I argue that worrying about that is not worth it; let's protect
the array with a seqlock, which on TSO is essentially free, and
worry about more important things.
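
To make the reader side concrete, here is a minimal sketch of how a
lockless lookup would pair with the write side above. This is not the
code from the series; the helper name tb_jmp_cache_get is made up, and
tb_jmp_cache_hash_func() is the existing hash used for the lookup:

  /* Sketch: retry the read if it races with the memset() above. */
  static TranslationBlock *tb_jmp_cache_get(CPUState *cpu, target_ulong pc)
  {
      TranslationBlock *tb;
      unsigned version;

      do {
          version = seqlock_read_begin(&cpu->tb_jmp_cache_sequence);
          tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
      } while (seqlock_read_retry(&cpu->tb_jmp_cache_sequence, version));

      return tb;
  }

On TSO the read side costs nothing beyond the two sequence loads,
which is why I think the trade-off is worth it.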
Wrt the next two patches:
Patch 27 is an improvement in that each TB has its own valid flag,
which makes sense because invalidating a TB should only affect the
TBs that try to chain to/from it, not all TBs.
Patch 28 uses the RCU QLIST, which to me seems cleaner and less
error-prone than open-coding an RCU list.
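To illustrate what I mean (a rough sketch with placeholder names, not
the code from patch 28), using the rcu_queue.h macros instead of
open-coded pointer manipulation looks roughly like:

  /* Placeholder struct; only the QLIST_ENTRY field matters here. */
  struct jmp_node {
      TranslationBlock *tb;
      QLIST_ENTRY(jmp_node) entry;
  };

  QLIST_HEAD(, jmp_node) jmp_list = QLIST_HEAD_INITIALIZER(jmp_list);

  /* writer side, under the appropriate lock */
  QLIST_INSERT_HEAD_RCU(&jmp_list, node, entry);

  /* reader side, between rcu_read_lock() and rcu_read_unlock() */
  QLIST_FOREACH_RCU(node, &jmp_list, entry) {
      /* use node->tb */
  }

The barriers and publication order are handled by the macros, which is
where open-coded versions tend to go wrong.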
Thanks,
Emilio