On Wed, Jul 17, 2019 at 03:46:55PM +1000, Nicholas Piggin wrote:
> This is a bit of a proof of concept: in case mttcg becomes more important,
> yield could be handled like this. You can, by accident or deliberately,
> force vCPUs onto the same physical CPU and cause inversion issues when the
> lock holder was preempted by the waiter. This is lightly tested, but not
> to the point of measuring any performance difference.
>
> I really consider the previous confer/prod patches more important, just to
> provide a more complete guest environment and better test coverage than
> performance, but maybe someone wants to pursue this.
>
> Thanks,
> Nick
> ---
>  cpus.c                   |  6 ++++++
>  hw/ppc/spapr_hcall.c     | 14 +++++++-------
>  include/qemu/thread.h    |  1 +
>  util/qemu-thread-posix.c |  5 +++++
>  util/qemu-thread-win32.c |  4 ++++
>  5 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index 927a00aa90..f036e062d9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1760,6 +1760,12 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                  qemu_mutex_unlock_iothread();
>                  cpu_exec_step_atomic(cpu);
>                  qemu_mutex_lock_iothread();
> +                break;
> +            case EXCP_YIELD:
> +                qemu_mutex_unlock_iothread();
> +                qemu_thread_yield();
> +                qemu_mutex_lock_iothread();
> +                break;
>              default:
>                  /* Ignore everything else? */
>                  break;
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 57c1ee0fe1..9c24a64dfe 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1162,13 +1162,13 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
>          return H_SUCCESS;
>      }
>
> -    /*
> -     * The targeted confer does not do anything special beyond yielding
> -     * the current vCPU, but even this should be better than nothing.
> -     * At least for single-threaded tcg, it gives the target a chance to
> -     * run before we run again. Multi-threaded tcg does not really do
> -     * anything with EXCP_YIELD yet.
> -     */
> +    /*
> +     * The targeted confer does not do anything special beyond yielding
> +     * the current vCPU, but even this should be better than nothing.
> +     * For single-threaded tcg, it gives the target a chance to run
> +     * before we run again, multi-threaded tcg will yield the CPU to
> +     * another vCPU.
> +     */
Uh.. this looks like a whitespace fixup leaked in from your other patches.
>     }
>
>     cs->exception_index = EXCP_YIELD;
> diff --git a/include/qemu/thread.h b/include/qemu/thread.h
> index 55d83a907c..8525b0a70a 100644
> --- a/include/qemu/thread.h
> +++ b/include/qemu/thread.h
> @@ -160,6 +160,7 @@ void qemu_thread_get_self(QemuThread *thread);
> bool qemu_thread_is_self(QemuThread *thread);
> void qemu_thread_exit(void *retval);
> void qemu_thread_naming(bool enable);
> +void qemu_thread_yield(void);
>
> struct Notifier;
> /**
> diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
> index 1bf5e65dea..91b12a1082 100644
> --- a/util/qemu-thread-posix.c
> +++ b/util/qemu-thread-posix.c
> @@ -573,3 +573,8 @@ void *qemu_thread_join(QemuThread *thread)
>     }
>     return ret;
> }
> +
> +void qemu_thread_yield(void)
> +{
> +    pthread_yield();
> +}
> diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
> index 572f88535d..72fe406bef 100644
> --- a/util/qemu-thread-win32.c
> +++ b/util/qemu-thread-win32.c
> @@ -442,3 +442,7 @@ bool qemu_thread_is_self(QemuThread *thread)
> {
>     return GetCurrentThreadId() == thread->tid;
> }
> +
> +void qemu_thread_yield(void)
> +{
> +}
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson