Commit Graph

9 Commits

Author SHA1 Message Date
Vineet Gupta 88ec11b0f8 ARC: document memory clobber in irq control macros
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-12-12 16:02:46 +05:30
Vineet Gupta be64c997d9 ARC: remove extraneous __KERNEL__ guards
Verified by doing make headers_install as none of these files are
exported to userspace
2014-10-13 14:46:20 +05:30
Vineet Gupta 590892deb6 ARC: [intc] mask/unmask can be hidden again
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-07-23 11:22:02 +05:30
Vineet Gupta 0dafafc3ef ARC: Add support for irqflags tracing and lockdep
Lockdep required a small fix to the stacktrace API, which was incorrectly
unwinding out of __switch_to for the current call frame.
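
For context, irqflags tracing works by wrapping the arch-level
raw_local_irq_* primitives with trace_hardirqs_on()/trace_hardirqs_off()
hooks so lockdep always knows the current hard-IRQ state. A simplified
sketch of the generic wrappers (modeled on include/linux/irqflags.h; the
arch side only needs to supply correct raw_* primitives, which is what
this commit wires up for ARC):

    #define local_irq_save(flags)                           \
            do {                                            \
                    raw_local_irq_save(flags);              \
                    trace_hardirqs_off();   /* tell lockdep irqs are now off */ \
            } while (0)

    #define local_irq_restore(flags)                        \
            do {                                            \
                    if (raw_irqs_disabled_flags(flags)) {   \
                            raw_local_irq_restore(flags);   \
                            trace_hardirqs_off();   /* still off after restore */ \
                    } else {                                \
                            trace_hardirqs_on();    /* about to re-enable */ \
                            raw_local_irq_restore(flags);   \
                    }                                       \
            } while (0)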

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:40 +05:30
Vineet Gupta fce16bc35a ARC: Entry Handler tweaks: Optimize away redundant IRQ_DISABLE_SAVE
In the exception return path, for both U/K cases, interrupts are already
disabled (for various existing reasons). So when we drop down to
@restore_regs, we need not redo that.

There was a subtle issue - interrupts were NOT being disabled for the
ret-to-kernel-but-no-preemption case - now fixed by moving the
IRQ_DISABLE further up in @resume_kernel_mode.

So what do we gain:

* Shaves off a few insn in return path.

* Eliminates the need for IRQ_DISABLE_SAVE assembler macro for ARCv2
  hence allows for entry code sharing.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-26 09:40:25 +05:30
Vineet Gupta da1677b02d ARC: Disintegrate arcregs.h
* Move the various sub-system defines/types into relevant files/functions
  (reduces compilation time)

* Move CPU-specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-22 13:46:42 +05:30
Christian Ruppert 79e5f05edc ARC: Add implicit compiler barrier to raw_local_irq* functions
ARC irqsave/restore macros were missing the compiler barrier, causing a
value loaded in an irq-enabled region to be reused, stale, in an irq-safe
region - despite the underlying memory having changed - because the
register holding it was still live.

The problem manifested as random crashes in timer code when stress
testing ARCLinux (3.9-rc3) on a !SMP && !PREEMPT_COUNT build.

Here's the exact sequence which caused this:
 (0). tv1[x] <----> t1 <---> t2
 (1). mod_timer(t1) interrupted after it calls timer_pending()
 (2). mod_timer(t2) completes
 (3). mod_timer(t1) resumes but messes up the list
 (4). __run_timers( ) uses bogus timer_list entry / crashes in
      timer->function

Essentially mod_timer() was racing against itself: while the spinlock
serialized the tv1[] timer linked list, timer_pending(), called outside the
spinlock, had cached a timer linked-list element in a register.
With low register pressure (and a deep register file), and no barrier in
raw_local_irq_save() or in preempt_disable() (the !PREEMPT_COUNT version),
there was nothing to force gcc to reload across the spinlock, so a stale
value in a register was used for the linked-list manipulation - resulting
in corruption.

ARCompact disassembly showing the culprit generated code:

mod_timer:
    push_s blink
    mov_s r13,r0	# timer, timer
..
    ###### timer_pending( )
    ld_s r3,[r13]       # <------ <variable>.entry.next LOADED
    brne r3, 0, @.L163

.L163:
..
    ###### spin_lock_irq( )
    lr  r5, [status32]  # flags
    bic r4, r5, 6       # temp, flags,
    and.f 0, r5, 6      # flags,
    flag.nz r4

    ###### detach_if_pending( ) begins

    tst_s r3,r3  <--------------
			# timer_pending( ) checks timer->entry.next
                        # r3 is NOT reloaded by gcc, using stale value
    beq.d @.L169
    mov.eq r0,0

    #####  detach_timer( ): __list_del( )

    ld r4,[r13,4]    	# <variable>.entry.prev, D.31439
    st r4,[r3,4]     	# <variable>.prev, D.31439
    st r3,[r4]       	# <variable>.next, D.30246

We initially tried to fix this by adding barrier() to preempt_* macros
for !PREEMPT_COUNT but Linus clarified that it was anything but wrong.
http://www.spinics.net/lists/kernel/msg1512709.html
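
The fix (and what the later "document memory clobber" commit at the top of
this log annotates) is to put "memory" in the clobber list of the
irq-control inline asm, so gcc must assume memory changes across it and
reload values such as r3 above. A rough sketch of the fixed helper,
assuming the E1/E2 bit masks from the ARC headers (exact constraints may
differ from the actual patch):

    static inline unsigned long arch_local_irq_save(void)
    {
            unsigned long temp, flags;

            __asm__ __volatile__(
            "       lr    %1, [status32]    \n"   /* read current STATUS32       */
            "       bic   %0, %1, %2        \n"   /* temp = flags with E1/E2 off */
            "       and.f 0, %1, %2         \n"   /* were interrupts enabled?    */
            "       flag.nz %0              \n"   /* if so, write temp back      */
            : "=r"(temp), "=r"(flags)
            : "n"(STATUS_E1_MASK | STATUS_E2_MASK)
            : "memory", "cc");  /* "memory" is the barrier this commit adds */

            return flags;
    }

Without the clobber, gcc only assumes the registers named in the
constraints change, which is why it felt free to keep the cached
timer->entry.next in r3 across the irq-disable.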

[vgupta: updated commitlog]

Reported-by/Signed-off-by: Christian Ruppert <christian.ruppert@abilis.com>
Cc: Christian Ruppert <christian.ruppert@abilis.com>
Cc: Pierrick Hascoet <pierrick.hascoet@abilis.com>
Debugged-by/Signed-off-by: Vineet Gupta <vgupta@synopsys.com>

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-08 16:10:26 -07:00
Vineet Gupta 4788a5942b ARC: Support for high priority interrupts in the in-core intc
There is a bit of a hack/kludge right now where we disable preemption if an
L2 (High prio) IRQ is taken while L1 (Low prio) is active.

Need to revisit this.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-02-15 23:16:01 +05:30
Vineet Gupta ac4c244d4e ARC: irqflags - Interrupt enabling/disabling at in-core intc
ARC700 has an in-core intc which provides 2 priorities (a.k.a. "levels")
of interrupts (per IRQ), henceforth referred to as L1/L2 interrupts.

The CPU flags register STATUS32 has per-level Interrupt Enable bits (E1/E2)
to globally enable (or disable) all IRQs at a level. Hence the
implementation of arch_local_irq_{save,restore,enable,disable}( ).

The STATUS32 reg can be r/w only using the AUX Interface of ARC, hence
the use of LR/SR instructions. Further, E1/E2 bits in there can only be
updated using the FLAG insn.

The intc supports 32 interrupts - and per-IRQ enabling is controlled by
a bit in the AUX_IENABLE register, hence the implementation of the
arch_{,un}mask_irq( ) routines.
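
For illustration, a sketch of how those two pieces typically look in code
(names follow the ARC headers and intc driver; this is an approximation of
the patch, and the "memory" clobber was only added later in this series -
see the barrier fix above):

    /* Restore E1/E2: the FLAG insn writes the saved value back into STATUS32 */
    static inline void arch_local_irq_restore(unsigned long flags)
    {
            __asm__ __volatile__(
            "       flag %0         \n"
            :
            : "r"(flags)
            : "memory");
    }

    /* Per-IRQ enable/disable: each of the 32 in-core interrupts has one bit
     * in the AUX_IENABLE auxiliary register, accessed via the LR/SR wrappers
     * read_aux_reg()/write_aux_reg().
     */
    static void arch_mask_irq(unsigned int irq)
    {
            unsigned int ienb;

            ienb = read_aux_reg(AUX_IENABLE);
            ienb &= ~(1 << irq);
            write_aux_reg(AUX_IENABLE, ienb);
    }

    static void arch_unmask_irq(unsigned int irq)
    {
            unsigned int ienb;

            ienb = read_aux_reg(AUX_IENABLE);
            ienb |= (1 << irq);
            write_aux_reg(AUX_IENABLE, ienb);
    }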

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
2013-02-11 20:00:30 +05:30