On 15.10.2019 12:16, Peter Zijlstra wrote:
> On Mon, Oct 14, 2019 at 09:08:34AM +0300, Alexey Budankov wrote:
>>
>> Restore the Intel LBR call stack from a cloned inactive task perf context
>> on a context switch. This change inherently addresses an inconsistency in
>> the LBR call stack data provided on a sample in record profiling mode,
>> for example:
>>
>> $ perf record -N -B -T -R --call-graph lbr \
>>     -e cpu/period=0xcdfe60,event=0x3c,name=\'CPU_CLK_UNHALTED.THREAD\'/Duk \
>>     --clockid=monotonic_raw -- ./miniFE.x nx 25 ny 25 nz 25
>>
>> Let's assume threads A, B, C belong to the same process.
>> B and C are siblings of A, so their perf contexts are treated as equivalent.
>> At some point B blocks on a futex (a non-preempt context switch).
>> B's LBRs are preserved in B's perf context task_ctx_data, and B's events
>> are removed from the PMU and disabled. B's perf context becomes inactive.
>>
>> Later C gets on a CPU, runs, gets profiled, and eventually switches to
>> the awakened but not yet running B. The optimized context switch path is
>> executed, copying B's task_ctx_data to C's and updating B's perf context
>> pointer to refer to C's task_ctx_data, which contains B's preserved LBRs
>> after the copy.
>>
>> However, since B's perf context is inactive, there are no enabled events
>> in it, and B's task_ctx_data->lbr_callstack_users is equal to 0.
>> When B gets on the CPU, reviving B's events is skipped on the optimized
>> context switch path, and B's task_ctx_data->lbr_callstack_users
>> remains 0. Thus B's LBRs are not restored by the pmu sched_task() code
>> called at the end of the perf context switch sched_in callback for B.
>>
>> In the report this manifests as short fragments of B's call stack,
>> still tracked by the LBR hardware between adjacent samples,
>> while the whole thread call tree does not aggregate.
>>
>> Signed-off-by: Alexey Budankov <[email protected]>
>> ---
>>  kernel/events/core.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 2aad959e6def..74c2ff38e079 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3181,7 +3181,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
>>
>>  	rcu_read_lock();
>>  	next_ctx = next->perf_event_ctxp[ctxn];
>> -	if (!next_ctx)
>> +	if (!next_ctx || !next_ctx->is_active)
>>  		goto unlock;
>
> AFAICT this completely kills off the optimization. next_ctx->is_active
> cannot be set at this point.
Hmm, the intention was to skip the optimization path only once, when
switching to a just-resumed thread. Thanks for the observation.

~Alexey

