Sergey Fedorov <[email protected]> writes:

> From: Sergey Fedorov <[email protected]>
>
> Ensure atomicity of CPU's 'tb_flushed' access for future translation
> block lookup out of 'tb_lock'.
>
> This field can only be touched from another thread by tb_flush() in user
> mode emulation. So the only accesses that need to be atomic are:
>  * a single write in tb_flush();
>  * reads/writes out of 'tb_lock'.

It might be worth mentioning the barrier here.
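Roughly the pairing I would expect once the lookup really does move out
of 'tb_lock' - a sketch only, the reader side below is hypothetical and
not something this patch adds:

    /* Writer, tb_flush(): the implied barrier in atomic_mb_set()
     * orders the flag store after the tb_jmp_cache clearing above. */
    atomic_mb_set(&cpu->tb_flushed, true);

    /* Hypothetical lock-free reader in a future TB lookup: the paired
     * atomic_mb_read() guarantees that if we see the flag we also see
     * the cleared jump caches, so we never chain into flushed code. */
    if (atomic_mb_read(&cpu->tb_flushed)) {
        *last_tb = NULL;  /* don't patch jumps into stale TBs */
        atomic_mb_set(&cpu->tb_flushed, false);
    }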
>
> In future, before enabling MTTCG in system mode, tb_flush() must be safe
> and this field becomes unnecessary.
>
> Signed-off-by: Sergey Fedorov <[email protected]>
> Signed-off-by: Sergey Fedorov <[email protected]>
> ---
>  cpu-exec.c      | 16 +++++++---------
>  translate-all.c |  4 ++--
>  2 files changed, 9 insertions(+), 11 deletions(-)
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index d6178eab71d4..c973e3b85922 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -338,13 +338,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>                   tb->flags != flags)) {
>          tb = tb_find_slow(cpu, pc, cs_base, flags);
>      }
> -    if (cpu->tb_flushed) {
> -        /* Ensure that no TB jump will be modified as the
> -         * translation buffer has been flushed.
> -         */
> -        *last_tb = NULL;
> -        cpu->tb_flushed = false;
> -    }
>  #ifndef CONFIG_USER_ONLY
>      /* We don't take care of direct jumps when address mapping changes in
>       * system emulation. So it's not safe to make a direct jump to a TB
> @@ -356,7 +349,12 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>  #endif
>      /* See if we can patch the calling TB. */
>      if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
> -        tb_add_jump(last_tb, tb_exit, tb);
> +        /* Check if translation buffer has been flushed */
> +        if (cpu->tb_flushed) {
> +            cpu->tb_flushed = false;
> +        } else {
> +            tb_add_jump(last_tb, tb_exit, tb);
> +        }
>      }
>      tb_unlock();
>      return tb;
> @@ -618,7 +616,7 @@ int cpu_exec(CPUState *cpu)
>              }
>
>              last_tb = NULL; /* forget the last executed TB after exception */
> -            cpu->tb_flushed = false; /* reset before first TB lookup */
> +            atomic_mb_set(&cpu->tb_flushed, false); /* reset before first TB lookup */
>              for(;;) {
>                  cpu_handle_interrupt(cpu, &last_tb);
>                  tb = tb_find_fast(cpu, last_tb, tb_exit);
> diff --git a/translate-all.c b/translate-all.c
> index fdf520a86d68..788fed1e0765 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -845,7 +845,6 @@ void tb_flush(CPUState *cpu)
>             tcg_ctx.code_gen_buffer_size) {
>          cpu_abort(cpu, "Internal error: code buffer overflow\n");
>      }
> -    tcg_ctx.tb_ctx.nb_tbs = 0;
>
>      CPU_FOREACH(cpu) {
>          int i;
>
> @@ -853,9 +852,10 @@
>          for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
>              atomic_set(&cpu->tb_jmp_cache[i], NULL);
>          }
> -        cpu->tb_flushed = true;
> +        atomic_mb_set(&cpu->tb_flushed, true);
>      }
>
> +    tcg_ctx.tb_ctx.nb_tbs = 0;
>      qht_reset_size(&tcg_ctx.tb_ctx.htable, CODE_GEN_HTABLE_SIZE);

I can see the sense of moving the setting of nb_tbs, but is it strictly
required as part of this patch?
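To spell out the ordering I think the move buys you - the post-patch
sequence, condensed from the hunk above with my own comments:

    /* tb_flush(), condensed: per-CPU state is invalidated and the
     * flush flag published with a barrier *before* the global TB
     * bookkeeping is reset. */
    CPU_FOREACH(cpu) {
        for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
        atomic_mb_set(&cpu->tb_flushed, true); /* store + barrier */
    }
    tcg_ctx.tb_ctx.nb_tbs = 0;                 /* now after the flag */
    qht_reset_size(&tcg_ctx.tb_ctx.htable, CODE_GEN_HTABLE_SIZE);

If that ordering isn't needed until the lock-free lookup lands, it could
arguably be a separate patch, hence the question.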
>      page_flush_tb();

Otherwise:

Reviewed-by: Alex Bennée <[email protected]>

--
Alex Bennée