Commit Graph

296 Commits

Author SHA1 Message Date
Linus Torvalds e835a65f7a ARC fixes for 4.5
- Corner case of returning to delay slot from interrupt
 - Changing default interrupt priority level
 - Kconfig'ize support for super pages
 - Other minor fixes
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJWvxdUAAoJEGnX8d3iisJeRkYP/2HZAt4J6c5MPk/NSy8rabVX
 2bB1m5jYXlBmJAIsmWm+WcDL72MdrB1Owtc5tEN+hIoQQa2QQpxolp32IslHg0o8
 C9CCzmF+iR8wz3caVk3javpsbze23XbHho/kdx/l2Ed3Fi+syI/9jF1GiboydRtR
 X22an1lslA6Y44pYxFmSFcMCv7XclFkJNe1ltxsgN9/QapnNrE/HWqUIy+SMr2Oo
 Tpo3m/Dc+IfMMejYyupc3keyAhyeux69lJXPuOzYiurgGUIyXz15Un2mQ9gZWf0u
 W56L/55VpQVuah46qrp5CBTLmdJA5cBqr0F8RqmZAqrEYLgn5SD4IhDjamo1qsP/
 FfFh0cG955SoEyCsUOPILWUFR5TeS4rJK+ZJjErUb+dwEC1BWZR0/Dn1s9KJN8b7
 GgGV8yXruDACFlFnCqnlxVs1TKOPOUqD2NZRAdsKunp+ywNrvGdD43xWONcriyvr
 2KW0nb+mH3RRk8HQzKjfqsVhLMoR7n1MD/+tg8ME8usLn1ik0hBerT56CX0Wh/yQ
 VnOUX6xqlaRydeJJgCUyByz3+jJVvj8sk/VZbr19F0p9id6wpiPQeNus2AcoHFKW
 OyvWcfxzqKegXrYtMsy8IoFzx73zJaXV3ht0I09rhAj3JkdF7vFEIUpKIhsWqxAK
 yWKKqLcVKga/2Yc8jduI
 =FNDd
 -----END PGP SIGNATURE-----

Merge tag 'arc-4.5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc

Pull ARC fixes from Vineet Gupta:
 "I've been sitting on some of these fixes for a while.

   - Corner case of returning to delay slot from interrupt
   - Changing default interrupt priority level
   - Kconfig'ize support for super pages
   - Other minor fixes"

* tag 'arc-4.5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARC: mm: Introduce explicit super page size support
  ARCv2: intc: Allow interruption by lowest priority interrupt
  ARCv2: Check for LL-SC livelock only if LLSC is enabled
  ARC: shrink cpuinfo by not saving full timer BCR
  ARCv2: clocksource: Rename GRTC -> GFRC ...
  ARCv2: STAR 9000950267: Handle return from intr to Delay Slot #2
2016-02-13 08:18:21 -08:00
Vineet Gupta dec2b2849c ARCv2: intc: Allow interruption by lowest priority interrupt
ARC HS cores support multiple configurable interrupt priorities of up to
16 levels.

There is a processor "interrupt preemption threshold" in STATUS32.E[4:1],
and several places need to set this up:
1. seed value as kernel is booting
2. seed value for user space programs
3. Arg to SLEEP instruction in idle task (what interrupt prio can wake)
4. Per-IRQ line priority (i.e. what is the priority of the interrupt
   raised by a peripheral or timer or perf counter...)

Currently the above sites use the highest priority, 0. This can be a potential
problem when multiple priorities are supported, e.g. user space could
only be interrupted by a P0 interrupt, not others...
So turn this over and instead make the default interruption level the
lowest possible priority, 15. This should be fine even if fewer
priority levels are configured (say two: P0 HIGH, P1 LOW).

This also effectively disables the FIRQ feature if present in the
hardware config. With the old code, a P0 interrupt would be a FIRQ, needing
special handling (ISR or Register Banks) which is NOT supported yet.
Now it will not be P0 (but P15 or whatever the lowest priority is), so FIRQ is
not triggered.
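
As a rough illustration of the bit packing described above (the macro names
below are placeholders for this write-up, not actual kernel symbols), the
threshold lives in STATUS32 bits [4:1]:

 ---------->8-----------
 #define IRQ_PRIO_LOWEST     15                  /* lowest level per the text above */
 #define STATUS32_E_SHIFT    1                   /* E field occupies bits [4:1] */
 #define STATUS32_E(prio)    ((prio) << STATUS32_E_SHIFT)

 /* default seed: allow interruption by any configured priority level */
 #define STATUS32_E_DEFAULT  STATUS32_E(IRQ_PRIO_LOWEST)
 ---------->8-----------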

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-02-10 06:38:50 +05:30
Vineet Gupta 4d0cb15fcc ARCv2: Check for LL-SC livelock only if LLSC is enabled
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-01-29 16:51:04 +05:30
Vineet Gupta b89bd1f4fb ARC: shrink cpuinfo by not saving full timer BCR
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-01-29 16:51:03 +05:30
Vineet Gupta d584f0fb04 ARCv2: clocksource: Rename GRTC -> GFRC ...
... it is now called Global Free Running Counter

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-01-29 16:51:02 +05:30
Vineet Gupta cbfe74a753 ARCv2: STAR 9000950267: Handle return from intr to Delay Slot #2
Returning to a delay slot, riding on an interrupt, had one loose end:
AUX_USER_SP, used for restoring the user mode SP upon RTIE, was not being
set up from the original task's saved value, causing the task to use the wrong
SP, leading to ProtV errors.

The reason being:
 - INTERRUPT_EPILOGUE returns to a kernel trampoline, thus not expected to restore it
 - EXCEPTION_EPILOGUE is not used at all

Fix that by restoring AUX_USER_SP explicitly in the trampoline.

This was broken in the original workaround, but the error scenarios got
reduced considerably since v3.14 due to the following:

 1. The LinuxThreads.old based userspace at the time caused many more
    exceptions in delay slots than the current NPTL based one.
    In fact, with current userspace the error doesn't happen at all.

 2. Return from interrupt (delay slot or otherwise) doesn't get exercised much
    after commit 4de0e52867 ("Really Re-enable interrupts to avoid deadlocks")
    since IRQ_ACTIVE.active being clear means most returns are as if from pure
    kernel mode (even for active interrupts)

In fact, the issue only happened in an experimental branch where I was tinkering
with a reverted 4de0e52867

Cc: stable@kernel.org # v4.2+
Fixes: 4255b07f2c ("ARCv2: STAR 9000793984: Handle return from intr to Delay Slot")
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-01-22 17:25:03 +05:30
Linus Torvalds 0f0836b7eb Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching
Pull livepatching updates from Jiri Kosina:

 - RO/NX attribute fixes for patch module relocations from Josh
   Poimboeuf.  As part of this effort, module.c has been cleaned up as
   well and livepatching is piggy-backing on this cleanup.  Rusty is OK
   with this whole lot going through livepatching tree.

 - symbol disambiguation support from Chris J Arges.  That series is
   also

        Reviewed-by: Miroslav Benes <mbenes@suse.cz>

   but this came in only after I'd already pushed out.  I didn't want to
   rebase because of that, hence I am mentioning it here.

 - symbol lookup fix from Miroslav Benes

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
  livepatch: Cleanup module page permission changes
  module: keep percpu symbols in module's symtab
  module: clean up RO/NX handling.
  module: use a structure to encapsulate layout.
  gcov: use within_module() helper.
  module: Use the same logic for setting and unsetting RO/NX
  livepatch: function,sympos scheme in livepatch sysfs directory
  livepatch: add sympos as disambiguator field to klp_reloc
  livepatch: add old_sympos as disambiguator field to klp_func
2016-01-14 16:38:02 -08:00
Vineet Gupta 6b538db7c6 ARC: dw2 unwind: Catch Dwarf SNAFUs early
Instead of seeing empty stack traces, let the kernel fail early so dwarf
issues can be fixed sooner

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-21 14:01:49 +05:30
Vineet Gupta 6d0d506012 ARC: dw2 unwind: Don't bail for CIE.version != 1
The rudimentary CIE.version == 3 handling is already present in the code
(for the return address register specification)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-21 14:01:25 +05:30
Vineet Gupta 2d64affc92 Revert "ARC: dw2 unwind: Ignore CIE version !=1 gracefully instead of bailing"
Blindly ignoring CIE.version != 1 was a bad idea.
It still leaves something to be desired when running perf with callgraphing,
where libgcc symbols might show up in hotspots.

More importantly, basic CIE.version == 3 support already exists in code:

|
|   retAddrReg = state.version <= 1 ? *ptr++ : get_uleb128(&ptr, end);
|

The next commit will simply add continue-not-bail for CIE.version != 1

This reverts commit 323f41f9e7.
2015-12-21 13:29:44 +05:30
Vineet Gupta 575a9d4e2c ARC: smp: Rename platform hook @init_cpu_smp -> @init_per_cpu
Makes it similar to smp_ops, which also has a callback with the same name

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17 12:56:56 +05:30
Noam Camus b474a02382 ARC: rename smp operation init_irq_cpu() to init_per_cpu()
This better reflects its description, i.e. "any needed setup...",
and not just an "IPI request".

Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17 12:56:43 +05:30
Vineet Gupta 323f41f9e7 ARC: dw2 unwind: Ignore CIE version !=1 gracefully instead of bailing
The ARC dwarf unwinder only supports CIE version == 1.
The boot time dwarf sanitizer (part of the binary lookup table constructor)
would simply bail if it saw CIE version == 3, leaving the unwinder with a
NULL lookup table.

It seems the libgcc linked with the kernel does have such entries.

With the fallback linear search removed, and a NULL binary lookup table,
the unwinder fails to generate any stack trace.

So allow graceful ignoring of unsupported CIE entries.

This problem was initially seen in Alexey's setup (and not mine) as he
was using a buildroot built toolchain (libgcc), which doesn't get built with
CFLAGS_FOR_TARGET="-gdwarf-2", which is my default

Fixes STAR 9000985048: "kernel unwinder broken with stock tools"

Fixes: 2e22502c08 ("ARC: dw2 unwind: Remove fallback linear search thru FDE entries")
Reported-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17 11:10:23 +05:30
Vineet Gupta bc79c9a721 ARC: dw2 unwind: Reinstate unwinding out of modules
The fix which removed the linear searching of dwarf info (because binary lookup
data always exists) missed the fact that modules don't get the
binary lookup table info. This caused unwinding out of modules to stop
working.

So add binary lookup header setup (equivalent of eh_frame_hdr setup) to
modules as well.

While at it, confine the header setup to within the unwinder code,
removing one API exposed out of the unwinder code.

Fixes: 2e22502c08 ("ARC: dw2 unwind: Remove fallback linear search thru FDE entries")
Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17 11:10:23 +05:30
Vineet Gupta c512c6ba7a ARC: intc: Document arc_request_percpu_irq() better
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-12 16:04:12 +05:30
Vineet Gupta c6317bc7c5 ARCv2: perf: Ensure perf intr gets enabled on all cores
This was the second perf intr issue

perf sampling on multicore requires the intr to be enabled on all cores.
The ARC perf probe code used the helper arc_request_percpu_irq(), which calls
 - request_percpu_irq() on core0
 - enable_percpu_irq() on all cores (including core0)

genirq requires that the request be made ahead of the enable call.
However, if the perf probe happened on a non-core0 cpu (observed on a 3.18 kernel),
enable would get called ahead of request, obviously failing and
rendering the perf intr disabled on all such cores:

[   11.120000] 1 ARC perf       : 8 counters (48 bits), 113 conditions, [overflow IRQ support]
[   11.130000] 1 -----> enable_percpu_irq() IRQ 20 failed
[   11.140000] 3 -----> enable_percpu_irq() IRQ 20 failed
[   11.140000] 2 -----> enable_percpu_irq() IRQ 20 failed
[   11.140000] 0 =====> request_percpu_irq() IRQ 20
[   11.140000] 0 -----> enable_percpu_irq() IRQ 20

Fix this fragility by calling request_percpu_irq() on whatever core
runs the probe (there is no requirement on which core calls this anyway)
and then calling enable on each core.
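
A minimal sketch of that request-once / enable-everywhere pattern (the handler,
cookie and function names below are illustrative, not the actual driver code):

 ---------->8-----------
 #include <linux/interrupt.h>
 #include <linux/irq.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>

 static DEFINE_PER_CPU(unsigned long, arc_pmu_cookie);  /* assumed per-cpu cookie */

 static irqreturn_t arc_pmu_intr(int irq, void *dev)    /* assumed overflow handler */
 {
         return IRQ_HANDLED;
 }

 static void arc_perf_irq_enable(void *data)
 {
         int irq = *(int *)data;

         /* runs on every core, including the one that did the request */
         enable_percpu_irq(irq, IRQ_TYPE_NONE);
 }

 static int arc_pmu_setup_irq(int irq)
 {
         /* request once, from whichever core happens to run the probe... */
         int ret = request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
                                      &arc_pmu_cookie);

         /* ...then enable it on each core, this one included */
         if (!ret)
                 on_each_cpu(arc_perf_irq_enable, &irq, 1);
         return ret;
 }
 ---------->8-----------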

Interestingly, this started as an investigation of STAR 9000838902:
"sporadically IRQs enabled on perf prob"

which was about occasional boot spew as request_percpu_irq() got called
non-locally (from an IPI) and re-enabled interrupts in the following path
proc_mkdir ->  spin_unlock_irq()

which the irq work code didn't like.

| ARC perf     : 8 counters (48 bits), 113 conditions, [overflow IRQ support]
|
| BUG: failure at ../kernel/irq_work.c:135/irq_work_run_list()!
| CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.18.10-01127-g285efb8e66d1 #2
|
| Stack Trace:
|  arc_unwind_core.constprop.1+0x94/0x104
|  dump_stack+0x62/0x98
|  irq_work_run_list+0xb0/0xb4
|  irq_work_run+0x22/0x3c
|  do_IPI+0x74/0x9c
|  handle_irq_event_percpu+0x34/0x164
|  handle_percpu_irq+0x58/0x78
|  generic_handle_irq+0x1e/0x2c
|  arch_do_IRQ+0x3c/0x60
|  ret_from_exception+0x0/0x8

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: <stable@vger.kernel.org> #4.2+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-12 16:03:59 +05:30
Vineet Gupta 5bf704c204 ARC: intc: No need to clear IRQ_NOAUTOEN
arc_request_percpu_irq() is called by all cores to request/enable percpu
irq. It has some "prep" calls needed by genirq:
 - setup percpu devid
 - disable IRQ_NOAUTOEN

However, given that enable_percpu_irq() is called anyway, the latter can be
avoided.

We are now left with the irq_set_percpu_devid() quirk, and that too for
ARCompact builds only, since the previous patch updated the ARCv2 intc to do this
in the "right" place, i.e. the irq map function.

By the next release, this will ultimately be fixed for ARCompact as well.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-12 16:03:59 +05:30
Vineet Gupta 8eb0984bf4 ARCv2: intc: Fix random perf irq disabling in SMP setup
As part of fixing another perf issue, I observed that after a perf run
the interrupt got disabled on one or more cores.

It turns out that despite requesting the perf irq as percpu, the flow handler
registered was not handle_percpu_irq().

Given that on ARCv2 cores IRQs < 24 are always private to the cpu, we now
register the right handler at the very onset.
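
A minimal sketch of doing that registration at irq map time (the callback and
chip names are assumed for illustration; this is not the actual intc driver):

 ---------->8-----------
 #include <linux/irq.h>
 #include <linux/irqdomain.h>

 static struct irq_chip arcv2_irq_chip;          /* assumed chip instance */

 static int arcv2_irq_map(struct irq_domain *d, unsigned int virq,
                          irq_hw_number_t hw)
 {
         if (hw < 24) {
                 /* core-private IRQ: mark percpu devid, use the percpu flow handler */
                 irq_set_percpu_devid(virq);
                 irq_set_chip_and_handler(virq, &arcv2_irq_chip, handle_percpu_irq);
         } else {
                 irq_set_chip_and_handler(virq, &arcv2_irq_chip, handle_level_irq);
         }
         return 0;
 }
 ---------->8-----------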

Before Fix

| [ARCLinux]# cat /proc/interrupts | grep perf
|  20:    0      0      0       0  ARCv2 core Intc  20 ARC perf counters
|
| [ARCLinux]# perf record -c 20000 /sbin/hackbench
| Running with 10*40 (== 400) tasks.
|
| [ARCLinux]# cat /proc/interrupts | grep perf
|  20:    0    522      8    51916  ARCv2 core Intc  20 ARC perf counters
|
| [ARCLinux]# perf record -c 20000 /sbin/hackbench
| Running with 10*40 (== 400) tasks.
|
| [ARCLinux]# cat /proc/interrupts | grep perf
|  20:    0    522      8   104368  ARCv2 core Intc  20 ARC perf counters

After Fix

| [ARCLinux]# cat /proc/interrupts | grep perf
|  20:    0      0      0       0  ARCv2 core Intc  20 ARC perf counters
|
| [ARCLinux]# perf record -c 20000 /sbin/hackbench
| Running with 10*40 (== 400) tasks.
|
| [ARCLinux]# cat /proc/interrupts | grep perf
|  20:  64198  62012  62697  67803  ARCv2 core Intc  20 ARC perf counters
|
| [ARCLinux]# perf record -c 20000 /sbin/hackbench
| Running with 10*40 (== 400) tasks.
|
| [ARCLinux]# cat /proc/interrupts | grep perf
|  20: 126014 122792 123301 133654  ARCv2 core Intc  20 ARC perf counters

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: stable@vger.kernel.org #4.2+
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-12 16:03:41 +05:30
Rusty Russell 7523e4dc50 module: use a structure to encapsulate layout.
Makes it easier to handle init vs core cleanly, though the change is
fairly invasive across random architectures.

It simplifies the rbtree code immediately, however, while keeping the
core data together in the same cacheline (now iff the rbtree code is
enabled).
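
The encapsulation is roughly of this shape (fields abridged; treat this as a
sketch rather than the exact upstream definition):

 ---------->8-----------
 struct module_layout {
         void            *base;
         unsigned int    size;
         unsigned int    text_size;
         unsigned int    ro_size;
         /* plus a tree node when the rbtree-based lookup is configured */
 };

 struct module {
         /* ... */
         struct module_layout core_layout;       /* lives for the module's lifetime */
         struct module_layout init_layout;       /* freed once init is done */
         /* ... */
 };
 ---------->8-----------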

Acked-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-12-04 22:46:25 +01:00
Vineet Gupta 2e22502c08 ARC: dw2 unwind: Remove fallback linear search thru FDE entries
Fixes STAR 9000953410: "perf callgraph profiling causing RCU stalls"

| perf record -g -c 15000 -e cycles /sbin/hackbench
|
| INFO: rcu_preempt self-detected stall on CPU
| 1: (1 GPs behind) idle=609/140000000000002/0 softirq=2914/2915 fqs=603
| Task dump for CPU 1:

The in-kernel dwarf unwinder has a fast binary lookup and a fallback linear
search (which iterates through each of ~11K entries) that takes 2 orders of
magnitude longer (~3 million cycles vs. 2000). Routines written in hand
assembler lack dwarf info (as we don't support assembler CFI pseudo-ops
yet), fail the unwinder binary lookup, and hit the linear search, which
nevertheless fails in the end.

However, the linear search is pointless, as the binary lookup tables are created
from the very same data in the first place. It is impossible for the binary
lookup to fail while the linear search succeeds. It is a pure waste of cycles
and is thus removed by this patch.
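
For intuition, the binary lookup amounts to a bsearch over a table of FDE start
addresses sorted at construction time; a hedged, self-contained sketch (types
and names invented for illustration, not the unwinder's actual structures):

 ---------->8-----------
 struct lookup_ent {
         unsigned long   start;          /* first PC covered by the FDE */
         const void      *fde;
 };

 /* O(log n) instead of ~11K linear probes; caller still validates the range */
 static const void *find_fde(const struct lookup_ent *tbl, int n, unsigned long pc)
 {
         int lo = 0, hi = n;

         while (lo < hi) {
                 int mid = (lo + hi) / 2;

                 if (tbl[mid].start <= pc)
                         lo = mid + 1;
                 else
                         hi = mid;
         }
         return lo ? tbl[lo - 1].fde : NULL;
 }
 ---------->8-----------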

This manifested as RCU stalls / NMI watchdog splats when running
hackbench under perf with callgraph profiling. The triggering condition
was a perf counter overflowing in a routine lacking dwarf info (like memset),
leading to the pathetic 3-million-cycle unwinder slow path, and by the time it
returned, new interrupts were already pending (Timer, IPI) and taken
right away. The original memset didn't make forward progress, the system kept
accruing more interrupts and more unwinder delays in a vicious feedback
loop, ultimately triggering the NMI diagnostic.

Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-23 21:36:49 +05:30
Vineet Gupta e81b75f7b2 ARC: remove SYNC from __switch_to()
SYNC in __switch_to() is a historic relic and not needed at all.

 - In a UP context it is obviously useless: why would we want to stall
   the core for all updates to the stack memory of t0 to complete before
   loading kernel mode callee registers from t1's stack memory.

 - In SMP, there could be a potential race in which the outgoing task could
   be concurrently picked for running on a different core, thus writes
   to the stack here need to be visible before the reads from the stack on
   the other core. Peter confirmed that the generic scheduler already has the
   needed barriers (by way of the rq lock), so there is no need for an
   additional arch barrier.

This came up when Noam was trying to replace this SYNC with EZChip
specific hardware thread scheduling instruction for their platform
support.

Link: http://lkml.kernel.org/r/20151102092654.GM17308@twins.programming.kicks-ass.net
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-17 22:05:30 +05:30
Vineet Gupta 512b5b89b9 ARC: Abstract out ISA specific SLEEP args
No semantic changes; prepares for an ARCv2 specific change in the next commit

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-16 14:17:02 +05:30
Vineet Gupta 541366da6a ARC: [arcompact] Handle bus error from userspace as Interrupt not exception
Bus errors from userspace on ARCompact based cores are handled by the core
as a high priority L2 interrupt, but the current code treated them as an
exception. Handling an interrupt like an exception is certainly not going to
go unnoticed (and it worked so far only because we never saw a bus error from
userspace, until the IPPK guys tested a DDR controller with ECC error detection
etc. and hence needed to explicitly trigger/handle such errors).

 - So move the mem_service exception handler from common code into ARCv2 code.
 - In ARCompact code, define mem_service as an L2 interrupt handler which
   just drops down to pure kernel mode and goes off to enqueue SIGBUS.

Reported-by: Nelson Pereira <npereira@synopsys.com>
Tested-by: Ana Martins <amartins@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-14 13:12:20 +05:30
Vineet Gupta 483bcc99c0 ARC: boot: Non Master cpus only need to call EARLY_CPU_SETUP once
With the previous fixes, all cores now start via the common entry point @stext,
which already calls EARLY_CPU_SETUP for all cores - so there is no need to
invoke it again.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:42 +05:30
Vineet Gupta aa0efcde45 ARCv2: smp: [plat-*]: No need to explicitly call mcip_init_smp()
MCIP now registers its own per-cpu setup routine (for the IPI IRQ request)
using smp_ops.init_irq_cpu().

So there is no need for platforms to do that. This now completely decouples
platforms from MCIP.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:41 +05:30
Vineet Gupta 286130ebf1 ARC: smp: Introduce smp hook @init_irq_cpu called for all cores
Note this is not part of the platform owned static machine_desc,
but more of the device owned plat_smp_ops (rather misnamed) which an IPI
provider or some such typically defines.

This will help us separate out the IPI registration from the platform
specific init_cpu_smp() into the device specific init_irq_cpu().
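
Roughly, the ops structure then carries both hooks (other fields elided; the
exact layout shown here is assumed, not the verbatim header):

 ---------->8-----------
 struct plat_smp_ops {
         const char      *info;
         void            (*init_early_smp)(void);       /* once, on the master core */
         void            (*init_irq_cpu)(int cpu);      /* per-cpu IPI/IRQ setup (this commit) */
         void            (*cpu_kick)(int cpu, unsigned long pc);
         void            (*ipi_send)(int cpu);
         void            (*ipi_clear)(int irq);
 };
 ---------->8-----------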

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:41 +05:30
Vineet Gupta 8721a7f5a6 ARC: smp: Rename platform hook @init_smp -> @init_cpu_smp
This conveys better that it is called for each cpu

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:40 +05:30
Vineet Gupta 26b8f99623 ARCv2: smp: [plat-*]: No need to explicitly call mcip_init_early_smp()
MCIP now registers its own probe callback with smp_ops.init_early_smp(),
which is called by ARC common code, so there is no need for platforms to do that.

This decouples the platforms from MCIP and helps confine MCIP details
to its own file.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:40 +05:30
Vineet Gupta e55af4da02 ARC: smp: Introduce smp hook @init_early_smp for Master core
This adds a platform agnostic early SMP init hook which is called on
Master core before calling setup_processor()

  setup_arch()
     smp_init_cpus()
         smp_ops.init_early_smp()
     ...
     setup_processor()

How this helps:
 - Used for one-time init of certain SMP centric IP blocks, before
   calling setup_processor() which probes various bits of the core,
   possibly including this block

 - Currently platforms need to call this IP block init from their
   init routines, which doesn't make sense as this is specific to the ARC
   core and not the platform, and otherwise requires copy/paste in all
   of them (and hence a possible point of failure)

e.g. MCIP init is currently called from 2 platforms (axs10x and sim),
which will go away once we have this.

This change only adds the hooks but they are empty for now. Next commit
will populate them and remove the explicit init calls from platforms.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:40 +05:30
Vineet Gupta 4c82f28617 ARC: remove @init_time, @init_irq platform callbacks
These are not in use for ARC platforms. Moreover, DT mechanisms exist to
probe them w/o explicit platform calls (see the sketch below):

 - clocksource drivers can use CLOCKSOURCE_OF_DECLARE()
 - intc IRQCHIP_DECLARE() calls + cascading inside DT allow an external
   intc to be probed automatically
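
A hedged sketch of the DT-declared probing for the clocksource case (the
compatible string and function name are assumed for illustration):

 ---------->8-----------
 #include <linux/clocksource.h>
 #include <linux/of.h>

 static void __init arc_gfrc_timer_init(struct device_node *np)
 {
         /* read clock-frequency etc. from np, then register the clocksource */
 }
 CLOCKSOURCE_OF_DECLARE(arc_gfrc, "snps,archs-timer-gfrc", arc_gfrc_timer_init);
 ---------->8-----------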

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:39 +05:30
Vineet Gupta e0868e6f67 ARC: smp: irqchip: handle IPI as percpu irq like timer
The reason this was not done so far was the lack of a genuine IPI_IRQ for
ARC700, as we don't have an SMP version of the core yet (which might change
soon thanks to EZChip). Nevertheless, to increase the build coverage, we
need to allow CONFIG_SMP for ARC700 and still be able to run it on a
UP platform (nsim or AXS101) with a UP Device Tree (SMP-on-UP).

The build itself requires some define for IPI_IRQ, and even a dummy
value is fine since that code won't run anyway.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:13:39 +05:30
Vineet Gupta 3971cdc202 ARC: boot: Support Halt-on-reset and Run-on-reset SMP booting modes
For Run-on-reset, non-masters need to spin-wait. For Halt-on-reset they
can jump to the entry point directly.

Also, while at it, made the reset vector handler "the" entry point for the
kernel, including host debugger based boot (which uses the ELF header
entry point).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 16:08:17 +05:30
Vineet Gupta f33e9c434b ARC: smp: Move default boot kick/wait code out of MCIP into common code
For the non halt-on-reset case, all cores start off simultaneously in @stext.
Master core0 proceeds with the kernel boot, while the others spin-wait on
@wake_flag being set by the master once it is ready. So NO hardware assist
is needed for the master to "kick" the others.

This patch moves this soft implementation out of mcip.c (as there is no
hardware assist) into common smp.c.
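
A hedged sketch of that soft kick/wait handshake (variable and function names
assumed; the real code also jumps to the secondary entry point afterwards):

 ---------->8-----------
 #include <linux/compiler.h>

 static int wake_flag;

 /* master core0, once it is ready to bring up 'cpu' (pc unused in this sketch) */
 static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)
 {
         WRITE_ONCE(wake_flag, cpu);
 }

 /* non-master cores, spinning at the common entry point */
 void arc_platform_smp_wait_to_boot(int cpu)
 {
         while (READ_ONCE(wake_flag) != cpu)
                 ;
         WRITE_ONCE(wake_flag, 0);       /* ack, so the next core can be kicked */
 }
 ---------->8-----------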

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:27 +05:30
Vineet Gupta 964cf28f9d ARC: boot log: move helper macros to header for reuse
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:25 +05:30
Vineet Gupta 9fabcc636b ARC: [arcompact] entry.S: Elide extra check/branch in exception ret path
This is done by improving the laddering logic!

Before:

   if Exception
      goto excep_or_pure_k_ret

   if !Interrupt(L2)
      goto l1_chk
   else
      INTERRUPT_EPILOGUE 2

 l1_chk:
   if !Interrupt(L1)  (i.e. pure kernel mode)
      goto excep_or_pure_k_ret
   else
      INTERRUPT_EPILOGUE 1

 excep_or_pure_k_ret:
   EXCEPTION_EPILOGUE

Now:

   if !Interrupt(L1 or L2) (i.e. exception or pure kernel mode)
      goto excep_or_pure_k_ret

  ; guaranteed to be an interrupt
   if !Interrupt(L2)
      goto l1_ret
   else
      INTERRUPT_EPILOGUE 2

 ; by virtue of above, no need to chk for L1 active
 l1_ret:
    INTERRUPT_EPILOGUE 1

 excep_or_pure_k_ret:
    EXCEPTION_EPILOGUE

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:23 +05:30
Vineet Gupta 5f88808745 ARC: [arcompact] entry.S: Document preemption games for L2 intr
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:23 +05:30
Vineet Gupta 9dbd3d9bfd ARC: [arcompact] don't check for hard isr calling local_irq_enable()
Historically this was done by the ARC IDE driver, which is long gone.
The IRQ core is pretty robust now and already checks if IRQs are enabled
in hard ISRs. Thus there is no point in checking this in arch code on every
call to enable irqs.

Further, if some driver does do that - let it bring down the system so we
notice/fix this sooner, rather than covering up for the offender.

This makes local_irq_enable() - for the L1-only case at least - simple enough
that we can inline it.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:22 +05:30
Thomas Gleixner bd0b9ac405 genirq: Remove irq argument from irq flow handlers
Most interrupt flow handlers do not use the irq argument. Those few
which use it can retrieve the irq number from the irq descriptor.

Remove the argument.
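
A hedged before/after sketch of what that means for a typical cascade handler
(the driver name is illustrative):

 ---------->8-----------
 #include <linux/irq.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/printk.h>

 /* before: static void foo_cascade_isr(unsigned int irq, struct irq_desc *desc) */
 static void foo_cascade_isr(struct irq_desc *desc)
 {
         struct irq_chip *chip = irq_desc_get_chip(desc);
         unsigned int irq = irq_desc_get_irq(desc);      /* the old argument, if still needed */

         chained_irq_enter(chip, desc);
         pr_debug("cascade on irq %u\n", irq);
         /* ... read child status, generic_handle_irq(child_virq) ... */
         chained_irq_exit(chip, desc);
 }
 ---------->8-----------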

Search and replace was done with coccinelle and some extra helper
scripts around it. Thanks to Julia for her help!

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
2015-09-16 15:47:51 +02:00
Linus Torvalds 17e6b00ac4 Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
 "This updated pull request does not contain the last few GIC related
  patches which were reported to cause a regression.  There is a fix
  available, but I let it breed for a couple of days first.

  The irq department provides:

   - new infrastructure to support non PCI based MSI interrupts
   - a couple of new irq chip drivers
   - the usual pile of fixlets and updates to irq chip drivers
   - preparatory changes for removal of the irq argument from interrupt
     flow handlers
   - preparatory changes to remove IRQF_VALID"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (129 commits)
  irqchip/imx-gpcv2: IMX GPCv2 driver for wakeup sources
  irqchip: Add bcm2836 interrupt controller for Raspberry Pi 2
  irqchip: Add documentation for the bcm2836 interrupt controller
  irqchip/bcm2835: Add support for being used as a second level controller
  irqchip/bcm2835: Refactor handle_IRQ() calls out of MAKE_HWIRQ
  PCI: xilinx: Fix typo in function name
  irqchip/gic: Ensure gic_cpu_if_up/down() programs correct GIC instance
  irqchip/gic: Only allow the primary GIC to set the CPU map
  PCI/MSI: pci-xgene-msi: Consolidate chained IRQ handler install/remove
  unicore32/irq: Prepare puv3_gpio_handler for irq argument removal
  tile/pci_gx: Prepare trio_handle_level_irq for irq argument removal
  m68k/irq: Prepare irq handlers for irq argument removal
  C6X/megamode-pic: Prepare megamod_irq_cascade for irq argument removal
  blackfin: Prepare irq handlers for irq argument removal
  arc/irq: Prepare idu_cascade_isr for irq argument removal
  sparc/irq: Use access helper irq_data_get_affinity_mask()
  sparc/irq: Use helper irq_data_get_irq_handler_data()
  parisc/irq: Use access helper irq_data_get_affinity_mask()
  mn10300/irq: Use access helper irq_data_get_affinity_mask()
  irqchip/i8259: Prepare i8259_irq_dispatch for irq argument removal
  ...
2015-09-01 14:33:35 -07:00
Vineet Gupta 3d5926599a ARCv2: entry: Fix reserved handler
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 16:25:37 +05:30
Vineet Gupta 9b28829d6d ARCv2: perf: Finally introduce HS perf unit
With all features in place, the ARC HS pct block can now be effectively
allowed to be probed/used

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:59:07 +05:30
Alexey Brodkin e525c37f84 ARCv2: perf: SMP support
* split off pmu info into singleton and per-cpu bits
* setup PMU on all cores

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:58:42 +05:30
Alexey Brodkin e6b1d126bb ARCv2: perf: implement exclusion of event counting in user or kernel mode
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:58:14 +05:30
Alexey Brodkin 36481cf7fb ARCv2: perf: Support sampling events using overflow interrupts
In the days of ARC 700, performance counters didn't have interrupt
support, and so for ARC we only had support for non-sampling events.

Put simply, only "perf stat" was functional.

Now with ARC HS the performance counters do support interrupts,
and this change introduces support for them.

ARC performance counters act in the following way with regard to
interrupt generation:
 [1] A counter counts starting from the value set in the PCT_COUNT register pair
 [2] Once the counter reaches the value set in PCT_INT_CNT, an interrupt is raised

The basic setup looks like this:
 [1] PCT_COUNT = 0;
 [2] PCT_INT_CNT = __limit_value__;
 [3] Enable interrupts for that counter and let it run
 [4] Let the counter reach its limit
 [5] Handle the interrupt when it happens

Note that the PCT HW block is built into the CPU core, and so its interrupt
line (which is basically an OR of all counters' IRQs) is wired directly to the
top-level IRQC. That means that to de-assert the PCT interrupt it is required to
reset the IRQs from all counters that have reached their limit values.
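
Mapping steps [1]-[3] above onto AUX register writes gives roughly the
following; register names follow the ARC perf driver, but the exact programming
sequence here is an abridged assumption:

 ---------->8-----------
 #include <linux/types.h>
 /* write_aux_reg() and the ARC_REG_PCT_* defines come from the ARC arch headers */

 static void arc_pmu_arm_counter(int idx, u64 limit)
 {
         write_aux_reg(ARC_REG_PCT_INDEX, idx);            /* select the counter */
         write_aux_reg(ARC_REG_PCT_COUNTL, 0);             /* [1] start counting from 0 */
         write_aux_reg(ARC_REG_PCT_COUNTH, 0);
         write_aux_reg(ARC_REG_PCT_INT_CNTL, (u32)limit);  /* [2] raise the intr at 'limit' */
         write_aux_reg(ARC_REG_PCT_INT_CNTH, limit >> 32);
         /* [3] enable the overflow interrupt for this counter and let it run */
         write_aux_reg(ARC_REG_PCT_INT_CTRL, 1 << idx);
 }
 ---------->8-----------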

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:43 +05:30
Alexey Brodkin 1fe8bfa5ff ARCv2: perf: implement "event_set_period"
This generalization prepares for support of overflow interrupts.

Hardware event counters on ARC work this way:
each counter counts from a programmed start value (set in
ARC_REG_PCT_COUNT) to a limit value (set in ARC_REG_PCT_INT_CNT), and
once the limit value is reached the counter generates an interrupt.

Even though this hardware implementation allows for more flexibility,
in the Linux kernel we decided to mimic the behavior of other architectures
this way:

 [1] Set the limit value as half of the counter's max value (to allow the
     counter to run after reaching its limit; see below for more explanation):
 ---------->8-----------
 arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
 ---------->8-----------

 [2] Set start value as "arc_pmu->max_period - sample_period" and then
count up to the limit

Our event counters don't stop on reaching the max value (the one we set in
ARC_REG_PCT_INT_CNT) but continue to count until the kernel explicitly
stops each of them.

And setting the limit as half of the counter's capacity is done to allow
capturing of the additional events that occur between the moment the interrupt
is triggered and the moment we actually process the PMU interrupt. That way
we're trying to be more precise.

For example, if we count CPU cycles, we keep track of cycles while
running through the generic IRQ handling code:

 [1] We set the counter period as, say, 100_000 events of type "crun"
 [2] The counter reaches that limit and raises its interrupt
 [3] Once we get into the PMU IRQ handler we read the current counter value from
     ARC_REG_PCT_SNAP and see there something like 105_000.

If counters stopped on reaching their limit value, we would miss those
additional 5000 cycles.
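
A hedged sketch of the period arithmetic that "event_set_period" boils down to
(the arc_pmu global and the final register write are assumed context, not the
exact driver code):

 ---------->8-----------
 #include <linux/perf_event.h>

 /* assumed driver global carrying the counter-width derived limit */
 static struct { u64 max_period; } *arc_pmu;

 static int arc_pmu_event_set_period(struct perf_event *event)
 {
         struct hw_perf_event *hwc = &event->hw;
         s64 left = local64_read(&hwc->period_left);
         u64 value;

         if (unlikely(left <= 0)) {              /* period fully consumed: re-arm */
                 left += hwc->sample_period;
                 local64_set(&hwc->period_left, left);
                 hwc->last_period = hwc->sample_period;
         }
         if (left > arc_pmu->max_period)
                 left = arc_pmu->max_period;

         /* program the start so the counter hits its limit after 'left' events */
         value = arc_pmu->max_period - left;
         local64_set(&hwc->prev_count, value);
         /* ... write 'value' into the ARC_REG_PCT_COUNT pair and re-enable the intr ... */
         return 0;
 }
 ---------->8-----------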

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:29 +05:30
Vineet Gupta fb7c572551 ARC: perf: cap the number of counters to hardware max of 32
The number of counters in the PCT can never be more than 32 (while
countable conditions could be 100+) for both ARCompact and ARCv2.

And while at it, update the copyright dates.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-27 14:57:03 +05:30
Vineet Gupta 090749502f ARC: add/fix some comments in code - no functional change
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 19:05:49 +05:30
Yuriy Kolerov 6de6066c0d ARC: change some branches to jumps to resolve linkage errors
When the kernel binary becomes large enough (32M and more), errors
may occur during the final linkage stage. This happens because
the build system uses short relocations for ARC by default.
This problem may be easily resolved by passing the -mlong-calls
option to GCC to use long absolute jumps (j) instead of short
relative branches (b).

But there are fragments of pure assembler code which use
branches in inappropriate places and cause a linkage error because
of relocation overflow.

The first of these fragments is the .fixup insertion in futex.h and
unaligned.c. It inserts code into a separate section (.fixup)
with a branch instruction, which leads to a linkage error when the
kernel becomes large.
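
A hedged sketch of that .fixup pattern with the long jump in place (ARC inline
asm; the register usage and labels are illustrative, not the actual
futex.h/unaligned.c code):

 ---------->8-----------
 static inline unsigned long load_with_fixup(unsigned long *addr)
 {
         unsigned long val;

         __asm__ __volatile__(
         "1:     ld      %0, [%1]                \n"
         "2:                                     \n"
         "       .section .fixup, \"ax\"         \n"
         "       .align  4                       \n"
         "3:     mov     %0, 0                   \n"
         "       j       2b                      \n" /* absolute jump: no short-branch reloc to overflow */
         "       .previous                       \n"
         "       .section __ex_table, \"a\"      \n"
         "       .align  4                       \n"
         "       .word   1b, 3b                  \n"
         "       .previous                       \n"
         : "=&r" (val)
         : "r" (addr));

         return val;
 }
 ---------->8-----------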

The second of these fragments is the calling of scheduler functions
(common kernel code) from ARC's entry.S. When the kernel
binary becomes large this may lead to a linkage error because the
scheduler may end up far enough away from ARC's code in the final
binary.

Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-20 18:53:15 +05:30
Vineet Gupta e78fdfef84 ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff
This is to work around the llock/scond livelock.

HS38x4 could get into an LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among the contending cores, leading to a never ending cycle. So break the cycle
by deferring the retry of a failed exclusive access (SCOND). The actual delay
needed is a function of the number of contending cores as well as the unrelated
coherency traffic from other cores. To keep the code simple, start off with a
small delay of 1, which suffices in most cases, and in case of contention
double the delay. Eventually the delay is sufficient such that the coherency
pipeline is drained and a subsequent exclusive access succeeds.
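
A C-level rendering of that backoff idea (the real implementation is inline
assembly; the llock/scond/spin helpers below are assumed placeholders):

 ---------->8-----------
 #include <linux/atomic.h>

 /* placeholders standing in for the LLOCK/SCOND instructions and a cycle delay */
 extern int llock(volatile int *addr);
 extern int scond(int val, volatile int *addr);  /* returns non-zero on success */
 extern void spin_cycles(unsigned int n);

 static inline void atomic_add_backoff(int i, atomic_t *v)
 {
         unsigned int delay = 1;

         while (1) {
                 int val = llock(&v->counter);

                 if (scond(val + i, &v->counter))
                         break;
                 spin_cycles(delay);     /* back off before retrying */
                 delay *= 2;             /* exponential growth under contention */
         }
 }
 ---------->8-----------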

Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-04 09:26:34 +05:30
Vineet Gupta e13c42ecbe ARCv2: Fix the peripheral address space detection
With the HS 2.1 release, the peripheral space register no longer contains
the uncached space specifics, causing the kernel to panic early on.
So read the newer NON VOLATILE AUX register to get that info.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-08-03 19:34:07 +05:30