perf/core: Reduce context switch overhead

Skip most of the PMU context switching overhead when ctx->nr_events is 0.

A 50% performance overhead was observed under an extreme testcase.
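
Concretely, perf_event_context_sched_in() still takes ctx->lock as before, but
bails out to the unlock label when the context carries no events, skipping the
perf_pmu_disable()/perf_event_sched_in()/perf_pmu_enable() sequence entirely.
An abridged sketch of the function after this patch (simplified from
kernel/events/core.c; comments are explanatory, not verbatim):

    static void perf_event_context_sched_in(struct perf_event_context *ctx,
                                            struct task_struct *task)
    {
            struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);

            if (cpuctx->task_ctx == ctx)
                    return;

            perf_ctx_lock(cpuctx, ctx);
            /*
             * ctx->nr_events must be checked under ctx->lock: the lock
             * serializes this check against perf_install_in_context()
             * concurrently installing the first event.
             */
            if (!ctx->nr_events)
                    goto unlock;    /* empty context: skip all PMU work */

            perf_pmu_disable(ctx->pmu);
            if (!list_empty(&ctx->pinned_groups))
                    cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
            perf_event_sched_in(cpuctx, ctx, task);
            perf_pmu_enable(ctx->pmu);

    unlock:
            perf_ctx_unlock(cpuctx, ctx);
    }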

Signed-off-by: leilei.lin <leilei.lin@alibaba-inc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: alexander.shishkin@linux.intel.com
Cc: eranian@gmail.com
Cc: jolsa@redhat.com
Cc: linxiulei@gmail.com
Cc: yang_oliver@hotmail.com
Link: http://lkml.kernel.org/r/20170809002921.69813-1-leilei.lin@alibaba-inc.com
[ Rewrote the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
leilei.lin 2017-08-09 08:29:21 +08:00 committed by Ingo Molnar
parent ab027620e9
commit fdccc3fb7a
1 changed file with 9 additions and 0 deletions


@@ -3211,6 +3211,13 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 		return;
 
 	perf_ctx_lock(cpuctx, ctx);
+	/*
+	 * We must check ctx->nr_events while holding ctx->lock, such
+	 * that we serialize against perf_install_in_context().
+	 */
+	if (!ctx->nr_events)
+		goto unlock;
+
 	perf_pmu_disable(ctx->pmu);
 	/*
 	 * We want to keep the following priority order:
@@ -3224,6 +3231,8 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	perf_event_sched_in(cpuctx, ctx, task);
 	perf_pmu_enable(ctx->pmu);
+
+unlock:
 	perf_ctx_unlock(cpuctx, ctx);
 }