If a counter overflows during a perf stat profiling run, it may overtake
the last known value of the counter:
    0          prev   new                   0xffffffff
    |----------|-------|----------------------|
In this case, the number of events that have occurred is
(0xffffffff - prev) + new. Unfortunately, the event update code will
not realise an overflow has occurred and will instead report the event
delta as (new - prev) which may be considerably smaller than the real
count.
This patch adds an extra argument to armpmu_event_update which indicates
whether or not an overflow has occurred. If an overflow has occurred
then we use the maximum period of the counter to calculate the elapsed
events.
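A minimal sketch of the resulting update path (grounded in the
description above; armpmu, max_period and the local64 accessors follow
the ARM perf code of this era and may differ in other trees):

    static u64
    armpmu_event_update(struct perf_event *event,
                        struct hw_perf_event *hwc,
                        int idx, int overflow)
    {
            u64 delta, prev_raw_count, new_raw_count;

    again:
            prev_raw_count = local64_read(&hwc->prev_count);
            new_raw_count = armpmu->read_counter(idx);

            /* Retry if an interrupt updated prev_count under our feet. */
            if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
                                new_raw_count) != prev_raw_count)
                    goto again;

            new_raw_count &= armpmu->max_period;
            prev_raw_count &= armpmu->max_period;

            if (overflow)
                    /* Wrapped: (max_period - prev) + new, not new - prev. */
                    delta = armpmu->max_period - prev_raw_count + new_raw_count;
            else
                    delta = new_raw_count - prev_raw_count;

            local64_add(delta, &event->count);
            local64_sub(delta, &hwc->period_left);

            return new_raw_count;
    }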
Acked-by: Jamie Iles <jamie@jamieiles.com>
Reported-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv7 dictates that the interrupt-enable and count-enable registers for
each PMU counter are UNKNOWN following core reset.
This patch adds a new (optional) function pointer to struct arm_pmu for
resetting the PMU state during init. The reset function is called on
each CPU via an arch_initcall in the generic ARM perf_event code and
allows the PMU backend to write sane values to any UNKNOWN registers.
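As a sketch (the hook name follows the description above; its exact
placement in struct arm_pmu is illustrative):

    struct arm_pmu {
            /* ... existing callbacks ... */
            void    (*reset)(void *info);   /* optional */
    };

    /* Invoke the backend's reset hook once on every CPU during boot. */
    static int __init
    armpmu_reset(void)
    {
            if (armpmu && armpmu->reset)
                    on_each_cpu(armpmu->reset, NULL, 1);
            return 0;
    }
    arch_initcall(armpmu_reset);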
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARMv7 architecture does not guarantee that effects from co-processor
writes are immediately visible to following instructions.
This patch adds two isbs (instruction synchronisation barriers) to the
ARMv7 perf code:
(1) Immediately after selecting an event register, so that the PMU state
following this instruction is consistent with the new event.
(2) Immediately before writing to the PMCR, so that any previous writes
to the PMU have taken effect before (typically) enabling the
counters.
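Sketched in code (the CP15 encodings are the architected PMSELR and
PMCR accessors; the helper names are illustrative):

    static inline void armv7_pmnc_select_counter(unsigned int idx)
    {
            u32 val = idx & 0x1f;

            /* Write PMSELR to select the event counter... */
            asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (val));
            isb();  /* (1) ...and synchronise before using the banked regs. */
    }

    static inline void armv7_pmnc_write(u32 val)
    {
            isb();  /* (2) drain earlier PMU writes before touching PMCR. */
            asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (val));
    }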
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For kernels built with PREEMPT_RT, critical sections protected
by standard spinlocks are preemptible. This is not acceptable
for perf as (a) we may be scheduled onto a different CPU whilst
reading/writing banked PMU registers and (b) the latency when
reading the PMU registers becomes unpredictable.
This patch upgrades the pmu_lock spinlock to a raw_spinlock, which
remains a genuine spinning lock even under PREEMPT_RT.
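The conversion is mechanical; a sketch:

    static DEFINE_RAW_SPINLOCK(pmu_lock);  /* was: DEFINE_SPINLOCK(pmu_lock) */

    /* Illustrative caller: the critical section must stay on one CPU. */
    static void pmu_update_ctrl_regs(void)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&pmu_lock, flags);
            /* read-modify-write of the banked PMU registers */
            raw_spin_unlock_irqrestore(&pmu_lock, flags);
    }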
Reported-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Russell reported a number of warnings coming from sparse when
checking the ARM perf_event.c files:
| perf_event.c seems to also have problems too:
|
| CHECK arch/arm/kernel/perf_event.c
| arch/arm/kernel/perf_event.c:37:1: warning: symbol 'pmu_lock' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:70:1: warning: symbol 'cpu_hw_events' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1006:1: warning: symbol 'armv6pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1113:1: warning: symbol 'armv6pmu_stop' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1956:6: warning: symbol 'armv7pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:3072:14: warning: incorrect type in argument 1 (different address spaces)
| arch/arm/kernel/perf_event.c:3072:14: expected void const volatile [noderef] <asn:1>*<noident>
| arch/arm/kernel/perf_event.c:3072:14: got struct frame_tail *tail
| arch/arm/kernel/perf_event.c:3074:49: warning: incorrect type in argument 2 (different address spaces)
| arch/arm/kernel/perf_event.c:3074:49: expected void const [noderef] <asn:1>*from
| arch/arm/kernel/perf_event.c:3074:49: got struct frame_tail *tail
This patch resolves these issues so we can live in silence
again.
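The fixes fall into two classes: file-local symbols gain static, and
pointers into userspace gain __user annotations so the uaccess helpers
type-check. A sketch of the latter, matching the frame_tail warnings
above (function and struct shapes follow the ARM backtrace code;
details may vary):

    struct frame_tail {
            struct frame_tail __user *fp;
            unsigned long sp;
            unsigned long lr;
    } __attribute__((packed));

    static struct frame_tail __user *
    user_backtrace(struct frame_tail __user *tail,
                   struct perf_callchain_entry *entry)
    {
            struct frame_tail buftail;

            if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
                    return NULL;
            if (__copy_from_user_inatomic(&buftail, tail, sizeof(buftail)))
                    return NULL;

            /* ... record buftail.lr, then walk to the caller's frame ... */
            return buftail.fp - 1;
    }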
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM perf_event.c file contains all PMU backends and, as new PMUs
are introduced, will continue to grow.
This patch follows the example of x86 and splits the PMU implementations
into separate files which are then #included back into the main
file. Compile-time guards are added to each PMU file to avoid compiling
in code that is not relevant for the version of the architecture which
we are targeting.
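The resulting layout, sketched (file names follow the ARM convention
introduced here; the config guard shown is illustrative):

    /* arch/arm/kernel/perf_event.c: common framework, then the backends. */
    #include "perf_event_xscale.c"
    #include "perf_event_v6.c"
    #include "perf_event_v7.c"

    /* arch/arm/kernel/perf_event_v7.c: compiled out when not targeted. */
    #ifdef CONFIG_CPU_V7
    /* ... ARMv7 PMU backend proper ... */
    #else
    static const struct arm_pmu *__init armv7_a8_pmu_init(void)
    {
            return NULL;    /* ARMv7 not supported in this configuration */
    }
    #endif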
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>