Pull percpu consistent-ops changes from Tejun Heo:
"Way back, before the current percpu allocator was implemented, static
and dynamic percpu memory areas were allocated and handled separately
and had their own accessors. The distinction has been gone for many
years now; however, the now duplicate two sets of accessors remained
with the pointer based ones - this_cpu_*() - evolving various other
operations over time. During the process, we also accumulated other
inconsistent operations.
This pull request contains Christoph's patches to clean up the
duplicate accessor situation. __get_cpu_var() uses are replaced with
this_cpu_ptr() and __this_cpu_ptr() with raw_cpu_ptr().
Unfortunately, the former sometimes is tricky thanks to C being a bit
messy with the distinction between lvalues and pointers, which led to
a rather ugly solution for cpumask_var_t involving the introduction of
this_cpu_cpumask_var_ptr().
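The following illustrative sketch (demo_cpumask_var_ptr() is a stand-in
name, not the upstream macro) shows roughly why a dedicated helper is
needed: depending on CONFIG_CPUMASK_OFFSTACK, cpumask_var_t is either a
one-element array or a real pointer, so obtaining this CPU's cpumask
pointer needs either this_cpu_ptr() or this_cpu_read().

#ifdef CONFIG_CPUMASK_OFFSTACK
/* cpumask_var_t is a struct cpumask *: read the per-cpu pointer value */
#define demo_cpumask_var_ptr(x)		this_cpu_read(x)
#else
/* cpumask_var_t is struct cpumask[1]: take the address of this CPU's copy */
#define demo_cpumask_var_ptr(x)		this_cpu_ptr(x)
#endif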
This converts most of the uses but not all. Christoph will follow up
with the remaining conversions in this merge window and hopefully
remove the obsolete accessors"
* 'for-3.18-consistent-ops' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (38 commits)
irqchip: Properly fetch the per cpu offset
percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t -fix
ia64: sn_nodepda cannot be assigned to after this_cpu conversion. Use __this_cpu_write.
percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t
Revert "powerpc: Replace __get_cpu_var uses"
percpu: Remove __this_cpu_ptr
clocksource: Replace __this_cpu_ptr with raw_cpu_ptr
sparc: Replace __get_cpu_var uses
avr32: Replace __get_cpu_var with __this_cpu_write
blackfin: Replace __get_cpu_var uses
tile: Use this_cpu_ptr() for hardware counters
tile: Replace __get_cpu_var uses
powerpc: Replace __get_cpu_var uses
alpha: Replace __get_cpu_var
ia64: Replace __get_cpu_var uses
s390: cio driver &__get_cpu_var replacements
s390: Replace __get_cpu_var uses
mips: Replace __get_cpu_var uses
MIPS: Replace __get_cpu_var uses in FPU emulator.
arm: Replace __this_cpu_ptr with raw_cpu_ptr
...
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x). This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.
Other use cases are for storing and retrieving data from the current
processor's percpu area. __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.
__get_cpu_var() is defined as:
#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
__get_cpu_var() only ever performs an address calculation. However, store
and retrieve operations could use a segment prefix (or global register on
other platforms) to avoid the address calculation.
this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.
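Below is a rough sketch of the difference, using a made-up per-cpu
variable (demo_counter) and assuming a tree where __get_cpu_var() still
exists:

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, demo_counter);

/* Old style: form the address of this CPU's instance, then dereference. */
static unsigned long demo_read_old(void)
{
	return __get_cpu_var(demo_counter);	/* *this_cpu_ptr(&demo_counter) */
}

/* New style: the architecture can reach the per cpu area directly, e.g.
 * via a segment prefix on x86, without a separate address calculation. */
static unsigned long demo_read_new(void)
{
	return __this_cpu_read(demo_counter);
}

static void demo_inc_new(void)
{
	__this_cpu_inc(demo_counter);
}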
This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset. This avoids the address calculations and uses fewer
registers in the generated code.
At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.
The patch set also includes passes over all arches. Once these operations
are used throughout, specialized macros can be defined in non-x86
arches as well in order to optimize per cpu access, e.g. by using a global
register that may be set to the per cpu base.
Transformations done to __get_cpu_var()
1. Determine the address of the percpu instance of the current processor.
DEFINE_PER_CPU(int, y);
int *x = &__get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(&y);
2. Same as #1 but this time an array structure is involved.
DEFINE_PER_CPU(int, y[20]);
int *x = __get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(y);
3. Retrieve the content of the current processor's instance of a per cpu
variable.
DEFINE_PER_CPU(int, y);
int x = __get_cpu_var(y);
Converts to
int x = __this_cpu_read(y);
4. Retrieve the content of a percpu struct
DEFINE_PER_CPU(struct mystruct, y);
struct mystruct x = __get_cpu_var(y);
Converts to
memcpy(&x, this_cpu_ptr(&y), sizeof(x));
5. Assignment to a per cpu variable
DEFINE_PER_CPU(int, y);
__get_cpu_var(y) = x;
Converts to
__this_cpu_write(y, x);
6. Increment/Decrement etc of a per cpu variable
DEFINE_PER_CPU(int, y);
__get_cpu_var(y)++
Converts to
__this_cpu_inc(y)
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
In the RT kernel I ran into the following calltrace, which shows why PMU
interrupts cannot be threaded:
in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
INFO: lockdep is turned off.
Call Trace:
[<ffffffff8088595c>] dump_stack+0x1c/0x50
[<ffffffff801a958c>] __might_sleep+0x13c/0x148
[<ffffffff80891c54>] rt_spin_lock+0x3c/0xb0
[<ffffffff801ad29c>] __wake_up+0x3c/0x80
[<ffffffff80243ba4>] perf_event_wakeup+0x8c/0xf8
[<ffffffff80243c50>] perf_pending_event+0x40/0x78
[<ffffffff8023d88c>] irq_work_run+0x74/0xc0
[<ffffffff80152640>] mipsxx_pmu_handle_shared_irq+0x110/0x228
[<ffffffff8015276c>] mipsxx_pmu_handle_irq+0x14/0x30
[<ffffffff801ffda4>] handle_irq_event_percpu+0xbc/0x470
[<ffffffff80204478>] handle_percpu_irq+0x98/0xc8
[<ffffffff801ff284>] generic_handle_irq+0x4c/0x68
[<ffffffff8089748c>] do_IRQ+0x2c/0x48
[<ffffffff80105864>] plat_irq_dispatch+0x64/0xd0
[ralf@linux-mips.org: Based on this register dump I don't see why the
handler should be marked IRQF_NO_THREAD - but the handler is manipulating
per-CPU resources so we don't want it to be rescheduled to another CPU.]
Signed-off-by: Yang Wei <Wei.Yang@windriver.com>
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@redhat.com
Cc: acme@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7506/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add cases in perf_event_mipsxx.c for CPU_P5600. All the event numbers
listed for proAptiv also apply to P5600, so we use mipsxxcore_event_map2
and mipsxxcore_cache_map2 too, but the P5600 has 8-bit event numbers so
bit 8 (256) of the user ABI config is used for the parity bit (to
specify odd/even counter events).
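A rough sketch of the resulting raw config layout (the helper below is
illustrative, not code from the patch):

#include <stdbool.h>

/* 8-bit event number in bits 0-7; bit 8 (256) is the parity bit that
 * selects odd vs. even counters. */
static unsigned int p5600_demo_raw_config(unsigned int event, bool odd_counter)
{
	return (event & 0xff) | (odd_counter ? 0x100 : 0);
}

Under this reading, a raw event such as r112 would request event 0x12 on
an odd counter.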
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7242/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In mipsxx_pmu_map_raw_event(), set event_id to base_id after the cpu
type conditional code to allow that code to override the base_id to use
more bits from the config and a higher bit for parity.
This will allow cores with up to 512 events between all even/odd
counters (an 8-bit event id) such as P5600 to use bit 8 for parity.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7243/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The 74K and proAptiv share the same event/cache maps, so it's better to
rename the existing mipsxx74Kcore_[event|cache]_map accordingly.
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Reviewed-by: Markos Chandras <Markos.Chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven.Hill@imgtec.com
Patchwork: https://patchwork.linux-mips.org/patch/6526/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The 1074K is a multiprocessing coherent processing system (CPS) based
on modified 74K cores. This patch makes the 1074K an actual unique
CPU type, instead of a 74K derivative, which it is not.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Reviewed-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6389/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
According to the Software User's Manual, the event for last-level-cache
read/write misses is mapped to even counters. Odd counters of that
event number count miss cycles.
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6036/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with these kinds of patches trickling
in forever.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 4be3d2f396 ("MIPS: perf: Add XLP
support for hardware perf.") used UNSUPPORTED_PERF_EVENT_ID, which was
removed a while back.
Cc: Zi Shen Lim <zlim@netlogicmicro.com>
Cc: Jayachandran C <jchandra@broadcom.com>
Cc: Linux-MIPS <linux-mips@linux-mips.org>
Cc: John Crispin <blogic@openwrt.org>
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Acked-by: Jayachandran C <jchandra@broadcom.com>
Patchwork: https://patchwork.linux-mips.org/patch/4730/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add support for XLP performance counters register in perf. Update
mips/Kconfig so that perf events can be selected for XLP.
Signed-off-by: Zi Shen Lim <zlim@netlogicmicro.com>
Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Patchwork: http://patchwork.linux-mips.org/patch/4457
Signed-off-by: John Crispin <blogic@openwrt.org>
Add hardware performance counter support to kernel "perf" code for
BMIPS5000. The BMIPS5000 performance counters are similar to those of
MIPS MTI cores, so the changes were mostly made in perf_event_mipsxx.c,
which is typically used for MTI cores.
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4109/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Split the Kconfig option CONFIG_MIPS_MT_SMP into CONFIG_MIPS_MT_SMP
and CONFIG_MIPS_PERF_SHARED_TC_COUNTERS so some of the code used
for performance counters that are shared between threads can be used
for MIPS cores that are not MT_SMP.
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4108/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The #ifdef for CONFIG_HW_PERF_EVENTS is not needed because the
Makefile will only compile the module if this config option is set.
This means that the code under #else would never be compiled. This
may have been done to leave the original broken code around for
reference, but the FIXME comment above the code already shows the
broken code.
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4107/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The PCI (Program Counter Interrupt) bit in the "cause" register
is mandatory for MIPS32R2 cores, but has also been added to some R1
cores (BMIPS5000). This change adds a cpu feature bit to make it
easier to check for and use this feature.
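A minimal sketch of what the feature enables (the bit position follows
the architectural definition of Cause.PCI, bit 26; the helper name is
illustrative):

#include <stdbool.h>
#include <stdint.h>

#define DEMO_CAUSEF_PCI	(1u << 26)	/* Cause.PCI */

/* A handler can cheaply test whether a performance counter interrupt is
 * actually pending before doing any counter work. */
static bool demo_perf_irq_pending(uint32_t cause)
{
	return (cause & DEMO_CAUSEF_PCI) != 0;
}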
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4106/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Change the indicator from 0xffffffff in the "event_id" member to
zero in the "cntr_mask" member. This removes the need to initialize
entries that are unsupported. This also solves a problem where the
number of entries in the table was increased based on a global enum
used for all platforms, but the new unsupported entries were not added
for mips. This was leaving new table entries of all zeros that were not
marked UNSUPPORTED.
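A sketch of the new convention, using a simplified stand-in for the real
event descriptor:

struct demo_perf_event {
	unsigned int event_id;
	unsigned int cntr_mask;	/* bitmask of usable counters; 0 == unsupported */
};

static const struct demo_perf_event demo_event_map[4] = {
	[0] = { .event_id = 0x00, .cntr_mask = 0x3 },	/* counters 0 and 1 */
	[1] = { .event_id = 0x02, .cntr_mask = 0x3 },
	/* entries 2 and 3 stay zero-initialized and are therefore
	 * implicitly unsupported; no 0xffffffff marker is needed. */
};

static inline int demo_event_supported(const struct demo_perf_event *ev)
{
	return ev->cntr_mask != 0;
}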
Signed-off-by: Al Cooper <alcooperx@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4110/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Loongson 1B is a 32-bit SoC designed by Institute of Computing Technology
(ICT) and the Chinese Academy of Sciences (CAS), which implements the
MIPS32 release 2 instruction set.
[ralf@linux-mips.org: But it is not strictly a MIPS32-compliant device,
which is also why it identifies itself with the Legacy Vendor ID in the
PrID register. When applying the patch I shoveled some code around to
keep things in alphabetical order and avoid forward declarations.]
Signed-off-by: Kelvin Cheung <keguang.zhang@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: wuzhangjin@gmail.com
Cc: zhzhl555@gmail.com
Cc: Kelvin Cheung <keguang.zhang@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/3976/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
cc1: warnings being treated as errors
arch/mips/kernel/perf_event_mipsxx.c:166: error: 'counters_per_cpu_to_total' defined but not used
make[2]: *** [arch/mips/kernel/perf_event_mipsxx.o] Error 1
make[2]: *** Waiting for unfinished jobs....
It was first introduced by 82091564cf [MIPS:
perf: Add support for 64-bit perf counters.] in 3.2.
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Cc: linux-mips@linux-mips.org
Cc: david.daney@cavium.com
Patchwork: https://patchwork.linux-mips.org/patch/3357/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Pull MIPS updates from Ralf Baechle:
"The whole series has been sitting in -next for quite a while with no
complaints. The last change to the series, before the weekend, was the
removal of an SPI patch to which Grant - even though he had previously
acked it - appeared to raise objections. So I removed it until the
situation is clarified. Other than that all the patches have the acks
from their respective maintainers, all MIPS and x86 defconfigs are
building fine and I'm not aware of any problems introduced by this
series.
Among the key features for this patch series is a sizable patchset for
Lantiq which among other things introduces support for Lantiq's
flagship product, the FALCON SOC. It also means that the opensource
developers behind this patchset have overtaken Lantiq's competing
inhouse development team that was working behind closed doors.
Less noteworthy is the ath79 patchset, which adds support for a few more
chip variants, cleanups and fixes. Finally, the usual dose of tweaking
of generic code."
Fix up trivial conflicts in arch/mips/lantiq/xway/gpio_{ebu,stp}.c where
printk spelling fixes clashed with file move and eventual removal of the
printk.
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (81 commits)
MIPS: lantiq: remove orphaned code
MIPS: Remove all -Wall and almost all -Werror usage from arch/mips.
MIPS: lantiq: implement support for FALCON soc
MTD: MIPS: lantiq: verify that the NOR interface is available on falcon soc
MTD: MIPS: lantiq: implement OF support
watchdog: MIPS: lantiq: implement OF support and minor fixes
SERIAL: MIPS: lantiq: implement OF support
GPIO: MIPS: lantiq: convert gpio-stp-xway to OF
GPIO: MIPS: lantiq: convert gpio-mm-lantiq to OF and of_mm_gpio
GPIO: MIPS: lantiq: move gpio-stp and gpio-ebu to the subsystem folder
MIPS: pci: convert lantiq driver to OF
MIPS: lantiq: convert dma to platform driver
MIPS: lantiq: implement support for clkdev api
MIPS: lantiq: drop ltq_gpio_request() and gpio_to_irq()
OF: MIPS: lantiq: implement irq_domain support
OF: MIPS: lantiq: implement OF support
MIPS: lantiq: drop mips_machine support
OF: PCI: const usage needed by MIPS
MIPS: Cavium: Remove smp_reserve_lock.
MIPS: Move cache setup to setup_arch().
...
Make the oprofile code use the performance counters irq.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John Crispin <blogic@openwrt.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3723/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We always need to pass the last sample period to
perf_sample_data_init(), otherwise the event distribution will be
wrong. Thus, modify the function interface to take the required period
as an argument. So basically a pattern like this:
perf_sample_data_init(&data, ~0ULL);
data.period = event->hw.last_period;
will now look like this:
perf_sample_data_init(&data, ~0ULL, event->hw.last_period);
This avoids uninitialized data.period and simplifies the code.
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Why remove the pmu check:
Since 3.2-rc1, when arch level event init is called, the event is already
connected to its PMU. Also, validate_event() is _only_ called by
validate_group() in event init, so there is no need to check or
temporarily assign the event's pmu during validate_group().
Why remove the event state check:
Events can be created in PERF_EVENT_STATE_OFF (attr->disabled == 1); when
these events go through this check, validate_group() does dummy work.
But we do need to do group scheduling emulation for them in event init.
Again, validate_event() is _only_ called by validate_group().
Reference: http://www.spinics.net/lists/mips/msg42190.html
Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: David Daney <david.daney@cavium.com>
Cc: Eyal Barzilay <eyal@mips.com>
Cc: Zenon Fortuna <zenon@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/3108/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Port the following patch for ARM by Mark Rutland:
- 57ce9bb39b
ARM: 6902/1: perf: Remove erroneous check on active_events
When initialising a PMU, there is a check to protect against races with
other CPUs filling all of the available event slots. Since armpmu_add
checks that an event can be scheduled, we do not need to do this at
initialisation time. Furthermore the current code is broken because it
assumes that atomic_inc_not_zero will unconditionally increment
active_counts and then tries to decrement it again on failure.
This patch removes the broken, redundant code.
Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: David Daney <david.daney@cavium.com>
Cc: Eyal Barzilay <eyal@mips.com>
Cc: Zenon Fortuna <zenon@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/3106/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
MIPS licensees may want to modify performance counters to count extra
events. Also, a user working with raw events will be consulting the
core's manual anyway. And feeding unsupported event numbers shouldn't
cause hardware failure or the like.
[ralf@linux-mips.org: performance events also being used in internal
performance evaluation and have a tendency to change as the micro-
architecture evolves, even for minor revisions that may not be
distinguishable by PrID. It's not very practicable to maintain a list
of all events and there is no real benefit.]
Signed-off-by: Deng-Cheng Zhu <dczhu@mips.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: David Daney <david.daney@cavium.com>
Cc: Eyal Barzilay <eyal@mips.com>
Cc: Zenon Fortuna <zenon@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/3107/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
People (Linus) objected to using -ENOSPC to signal not having enough
resources on the PMU to satisfy the request. Use -EINVAL.
Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-xv8geaz2zpbjhlx0svmpp28n@git.kernel.org
[ merged to newer kernel, fixed up MIPS impact ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The hard coded constants are moved to struct mips_pmu. All counter
register accesses move to the read_counter and write_counter function
pointers, which are set to either 32-bit or 64-bit access methods at
initialization time.
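A rough sketch of that shape (simplified stand-ins, not the real
struct mips_pmu):

#include <stdint.h>

struct demo_mips_pmu {
	uint64_t (*read_counter)(unsigned int idx);
	void (*write_counter)(unsigned int idx, uint64_t val);
};

/* Stub access methods; the real ones use 32-bit or 64-bit CP0 accesses. */
static uint64_t demo_read_counter_32(unsigned int idx) { return 0; }
static uint64_t demo_read_counter_64(unsigned int idx) { return 0; }
static void demo_write_counter_32(unsigned int idx, uint64_t val) { }
static void demo_write_counter_64(unsigned int idx, uint64_t val) { }

static void demo_pmu_init(struct demo_mips_pmu *pmu, int counter_bits)
{
	/* pick the access methods once, at initialization time */
	if (counter_bits == 64) {
		pmu->read_counter  = demo_read_counter_64;
		pmu->write_counter = demo_write_counter_64;
	} else {
		pmu->read_counter  = demo_read_counter_32;
		pmu->write_counter = demo_write_counter_32;
	}
}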
Many of the function pointers in struct mips_pmu were not needed as
there was only a single implementation, these were removed.
I couldn't figure out what made struct cpu_hw_events.msbs[] at all
useful, so I removed it too.
Some functions and other declarations were reordered to reduce the
need for forward declarations.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2792/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The contents of arch/mips/kernel/perf_event.c and
arch/mips/kernel/perf_event_mipsxx.c were divided in a seemingly ad
hoc manner, with the first including the second.
I moved all the hardware counter support code to perf_event_mipsxx.c
and removed the gating #ifdefs to the Kconfig and Makefile.
Now perf_event.c contains only the callchain support, everything else
is in perf_event_mipsxx.c
There are no code changes, only moving of functions from one file to
the other, or removing empty unneeded functions.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Dezhong Diao <dediao@cisco.com>
Cc: Gabor Juhos <juhosg@openwrt.org>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2791/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Get rid of a bunch of useless inline declarations, and join a bunch of
improperly split lines.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2793/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add a NODE level to the generic cache events, which is used to measure
local vs. remote memory accesses. Like all other cache events, an
ACCESS is HIT+MISS; if there is no way to distinguish between reads
and writes, count reads only, etc.
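For reference, a user-side sketch of selecting the new level through the
standard cache-event encoding (the attribute setup is illustrative):

#include <string.h>
#include <linux/perf_event.h>

/* Generic cache events are encoded as id | (op << 8) | (result << 16). */
static void demo_node_read_miss(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->type = PERF_TYPE_HW_CACHE;
	attr->size = sizeof(*attr);
	attr->config = PERF_COUNT_HW_CACHE_NODE |
		       (PERF_COUNT_HW_CACHE_OP_READ << 8) |
		       (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
}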
The below needs filling out for !x86 (which I filled out with
unsupported events).
I'm fairly sure ARM can leave it like that since it doesn't strike me as
an architecture that even has NUMA support. SH might have something since
it does appear to have some NUMA bits.
Sparc64, PowerPC and MIPS certainly want a good look there since they
clearly are NUMA capable.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This is the MIPS part of the following commits by Peter Zijlstra:
- a4eaf7f146
perf: Rework the PMU methods
Replace pmu::{enable,disable,start,stop,unthrottle} with
pmu::{add,del,start,stop}, all of which take a flags argument.
The new interface extends the capability to stop a counter while
keeping it scheduled on the PMU. We replace the throttled state with
the generic stopped state.
This also allows us to efficiently stop/start counters over certain
code paths (like IRQ handlers).
It also allows scheduling a counter without it starting, allowing for
a generic frozen state (useful for rotating stopped counters).
The stopped state is implemented in two different ways, depending on
how the architecture implemented the throttled state:
1) We disable the counter:
a) the pmu has per-counter enable bits, we flip that
b) we program a NOP event, preserving the counter state
2) We store the counter state and ignore all read/overflow events
For MIPSXX, the stopped state is implemented as in 1.b above (a minimal
sketch of the new callback shapes follows this commit list).
- 33696fc0d1
perf: Per PMU disable
Changes perf_disable() into perf_pmu_disable().
- 24cd7f54a0
perf: Reduce perf_disable() usage
Since the current perf_disable() usage is only an optimization,
remove it for now. This eases the removal of the __weak
hw_perf_enable() interface.
- b0a873ebbf
perf: Register PMU implementations
Simple registration interface for struct pmu, this provides the
infrastructure for removing all the weak functions.
- 51b0fe3954
perf: Deconstify struct pmu
sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
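A minimal sketch of the reworked callback shapes from the first item
above (bodies are placeholders, not the MIPS port):

#include <linux/perf_event.h>

static void demo_pmu_start(struct perf_event *event, int flags)
{
	if (flags & PERF_EF_RELOAD) {
		/* reprogram the counter from the saved period */
	}
	/* enable the hardware counter */
}

static void demo_pmu_stop(struct perf_event *event, int flags)
{
	/* disable the counter (or program a NOP event, as in 1.b) */
	if (flags & PERF_EF_UPDATE) {
		/* fold the hardware count into event->count */
	}
}

static int demo_pmu_add(struct perf_event *event, int flags)
{
	/* claim a counter slot for the event, but don't start it yet */
	if (flags & PERF_EF_START)
		demo_pmu_start(event, PERF_EF_RELOAD);
	return 0;
}

static void demo_pmu_del(struct perf_event *event, int flags)
{
	demo_pmu_stop(event, PERF_EF_UPDATE);
	/* release the counter slot */
}

static struct pmu demo_pmu = {
	.add	= demo_pmu_add,
	.del	= demo_pmu_del,
	.start	= demo_pmu_start,
	.stop	= demo_pmu_stop,
};

Such a struct pmu is then handed to perf_pmu_register().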
Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: a.p.zijlstra@chello.nl
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: wuzhangjin@gmail.com
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: dengcheng.zhu@gmail.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Cc: ddaney@caviumnetworks.com
Patchwork: http://patchwork.linux-mips.org/patch/2012/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This is the MIPS part of the following commit by Peter Zijlstra:
- e360adbe29
irq_work: Add generic hardirq context callbacks
Provide a mechanism that allows running code in IRQ context. It is
most useful for NMI code that needs to interact with the rest of the
system -- like wakeup a task to drain buffers.
Perf currently has such a mechanism, so extract that and provide it as
a generic feature, independent of perf so that others may also
benefit.
The IRQ context callback is generated through self-IPIs where
possible, or on architectures like powerpc the decrementer (the
built-in timer facility) is set to generate an interrupt immediately.
Architectures that don't have anything like this have to make do with a
callback from the timer tick. These architectures can call
irq_work_run() at the tail of any IRQ handlers that might enqueue such
work (like the perf IRQ handler) to avoid undue latencies in
processing the work.
For MIPSXX, we need to call irq_work_run() at the tail of the perf IRQ
handler as described above.
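A sketch of the resulting handler tail (the handler itself is
illustrative, not the mipsxx code):

#include <linux/interrupt.h>
#include <linux/irq_work.h>

static irqreturn_t demo_perf_irq(int irq, void *dev_id)
{
	int handled = 0;

	/* ... read overflowed counters and push out samples, setting
	 * handled accordingly ... */

	/* flush work queued by perf (e.g. pending wakeups) before returning */
	irq_work_run();

	return handled ? IRQ_HANDLED : IRQ_NONE;
}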
Reported-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: fweisbec@gmail.com
To: will.deacon@arm.com
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: matt@console-pimps.org
Cc: sshtylyov@mvista.com
Patchwork: http://patchwork.linux-mips.org/patch/2011/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The perf hardware pmu got initialized at various points in the boot,
some before early_initcall(), some after (notably arch_initcall).
The problem is that the NMI lockup detector is run from early_initcall()
and expects the hardware pmu to be present.
Sanitize this by moving all architecture hardware pmu implementations to
initialize at early_initcall() and move the lockup detector to an explicit
initcall right after that.
Cc: paulus <paulus@samba.org>
Cc: davem <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290707759.2145.119.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch adds the mipsxx Perf-events support based on the skeleton.
Generic hardware events and cache events are now fully implemented for
the 24K/34K/74K/1004K cores. To support other cores in mipsxx (such as
R10000/SB1), the generic hardware event tables and cache event tables
need to be filled out. To support other CPUs which have a different PMU
from mipsxx, such as RM9000 and LOONGSON2, additional files named
perf_event_$cpu.c need to be created.
Raw events are an important part of Perf-events. They help the user collect
performance data for events that are not listed among the generic hardware
and cache events but ARE supported by the CPU's PMU.
This patch also adds this feature for mipsxx 24K/34K/74K/1004K. For how to
use it, please refer to processor core software user's manual and the
comments for mipsxx_pmu_map_raw_event() for more details.
Please note that this is a "precise" implementation, which means the
kernel will check whether the requested raw events are supported by this
CPU and which hardware counters can be assigned for them.
To test the functionality of Perf-event, you may want to compile the tool
"perf" for your MIPS platform. You can refer to the following URL:
http://www.linux-mips.org/archives/linux-mips/2010-10/msg00126.html
You also need to customize the CFLAGS and LDFLAGS in tools/perf/Makefile
for your libs, includes, etc.
In case you encounter a boot failure with the SMVP kernel on multi-threading
CPUs, you may take a look at:
http://www.linux-mips.org/git?p=linux-mti.git;a=commitdiff;h=5460815027d802697b879644c74f0e8365254020
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: jamie.iles@picochip.com
Cc: ddaney@caviumnetworks.com
Cc: matt@console-pimps.org
Patchwork: https://patchwork.linux-mips.org/patch/1689/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
create mode 100644 arch/mips/kernel/perf_event_mipsxx.c