On 23/08/2015 17:24, Emilio G. Cota wrote:
> This is similar in intent to the async_safe_work mechanism. The main
> differences are:
>
> - Work is run on a single CPU thread *after* all others are put to sleep
>
> - Sleeping threads are woken up by the worker thread upon completing its job
>
> - A flag has been added to tcg_ctx so that only one thread can schedule
> work at a time. The flag is checked every time tb_lock is acquired.
>
> - Handles the possibility of CPU threads being created after the existing
> CPUs are put to sleep. This is easily triggered with many threads on
> a many-core host in usermode.
>
> - Works for both softmmu and usermode
>
> Signed-off-by: Emilio G. Cota <[email protected]>
I think this is a duplicate of the existing run_on_cpu code. If needed
in user-mode emulation, it should be extracted out of cpus.c.
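For reference, here is a minimal sketch of how the existing helper is
invoked today (this uses the run_on_cpu() signature from the 2015-era
cpus.c; the callback and the payload are purely illustrative):

/* Illustrative only: run_on_cpu() queues a callback on the target
 * CPU's thread and waits for it to finish before returning. */
static void do_queued_work(void *data)
{
    CPUState *cpu = data;      /* hypothetical payload */

    /* ... per-CPU work goes here ... */
    (void)cpu;
}

static void schedule_work_on(CPUState *cpu)
{
    run_on_cpu(cpu, do_queued_work, cpu);    /* blocks until done */
}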
Also I think it is dangerous (prone to deadlocks) to wait for other CPUs
with synchronize_cpu and a condvar. I would much prefer to _halt_
the CPUs if there is pending work, and keep them halted, like this:
static inline bool cpu_has_work(CPUState *cpu)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

+   if (tcg_ctx.tb_ctx.tcg_has_work) {
+       return false;
+   }
    g_assert(cc->has_work);
    return cc->has_work(cpu);
}
You can then run flush_queued_work from linux-user/main.c (and
bsd-user/main.c) when cpu_exec returns EXCP_HALTED.
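Something along these lines in the user-mode cpu_loop, purely as a
sketch (flush_queued_work() stands for the helper to be extracted from
cpus.c; its exact name and signature here are assumptions):

/* Sketch only: how linux-user/main.c's cpu_loop() could react when
 * cpu_exec() returns EXCP_HALTED. */
static void cpu_loop_fragment(CPUState *cs)
{
    for (;;) {
        int trapnr;

        cpu_exec_start(cs);
        trapnr = cpu_exec(cs);
        cpu_exec_end(cs);

        if (trapnr == EXCP_HALTED) {
            /* Halted only so that queued work can run: flush it,
             * then go back to executing guest code. */
            flush_queued_work(cs);
            continue;
        }

        /* ... existing per-architecture trap handling ... */
        break;
    }
}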
Paolo