Cc: Tim Bird <tim.bird@am.sony.com>
[rabin@rab.in: rebase on top of latest code,
keep code in ftrace.c instead of separate file,
check for ftrace_graph_entry also]
Signed-off-by: Rabin Vincent <rabin@rab.in>
Use assembler macros to avoid copy/pasting code between the
implementations of the two variants of the mcount call.
Signed-off-by: Rabin Vincent <rabin@rab.in>
When FUNCTION_GRAPH_TRACER is enabled, place do_IRQ() and friends in the
IRQ_ENTRY section so that the irq-related features of the function graph
tracer work.
Signed-off-by: Rabin Vincent <rabin@rab.in>
armv7_pmnc_counter_has_overflowed can return uninitialised data
if an invalid counter is specified.
This patch fixes the code to return 0 in this case, which squashes
the compiler warning from GCC 4.5.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When unwinding stack frames we must take care not to unwind
areas of memory that lie outside of the known extent of the stack.
This patch fixes an incorrect calculation of the stack base where
THREAD_SIZE is added to the stack pointer after it has already
been aligned to this value. Since the ALIGN macro performs this
addition internally, we end up overshooting the base by 8k.
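As an illustration of the calculation (a sketch, not the patch itself; the
ALIGN definition mirrors the kernel's):

#define THREAD_SIZE     8192    /* 8k kernel stacks on ARM */
#define ALIGN(x, a)     (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

static unsigned long stack_base(unsigned long sp)
{
        /* ALIGN() already rounds sp up to the next THREAD_SIZE boundary,
         * so adding THREAD_SIZE on top would overshoot the base by 8k. */
        return ALIGN(sp, THREAD_SIZE);
}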
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The SWP instruction was deprecated in the ARMv6 architecture,
superseded by the LDREX/STREX family of instructions for
load-linked/store-conditional operations. The ARMv7 multiprocessing
extensions mandate that SWP/SWPB instructions are treated as undefined
from reset, with the ability to enable them through the System Control
Register SW bit.
This patch adds an alternative solution to emulate the SWP and SWPB
instructions using LDREX/STREX sequences, and logs statistics to
/proc/cpu/swp_emulation. To correctly deal with copy-on-write, it also
modifies cpu_v7_set_pte_ext to change the mappings to privileged RO when
user RO.
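A minimal C sketch of the word-sized emulation loop (illustrative only; the
real handler decodes the trapped instruction, handles SWPB, checks the user
address and updates the /proc statistics):

static unsigned long emulate_swp_word(unsigned long *addr, unsigned long newval)
{
        unsigned long old, failed;

        do {
                __asm__ __volatile__(
                "       ldrex   %0, [%3]\n"
                "       strex   %1, %2, [%3]\n"
                : "=&r" (old), "=&r" (failed)
                : "r" (newval), "r" (addr)
                : "memory");
        } while (failed);

        return old;     /* the previous value, as SWP would have returned */
}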
Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the domain switching functionality via the set_fs and
__switch_to functions on cores that have a TLS register.
Currently, the ioremap and vmalloc areas share the same level 1 page
tables and therefore have the same domain (DOMAIN_KERNEL). When the
kernel domain is modified from Client to Manager (via set_fs() or in the
__switch_to() function), the XN (eXecute Never) bit is overridden and
newer CPUs can speculatively prefetch the ioremap'ed memory.
Linux performs the kernel domain switching to allow user-specific
functions (copy_to/from_user, get/put_user etc.) to access kernel
memory. In order for these functions to work with the kernel domain set
to Client, the patch modifies the LDRT/STRT and related instructions to
the LDR/STR ones.
The user pages access rights are also modified for kernel read-only
access rather than read/write so that the copy-on-write mechanism still
works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register
(CPU_32v6K is defined) since writing the TLS value to the high vectors page
isn't possible.
The user addresses passed to the kernel are checked by the access_ok()
function so that they do not point to the kernel space.
Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (215 commits)
ARM: memblock: setup lowmem mappings using memblock
ARM: memblock: move meminfo into find_limits directly
ARM: memblock: convert free_highpages() to use memblock
ARM: move freeing of highmem pages out of mem_init()
ARM: memblock: convert memory detail printing to use memblock
ARM: memblock: use memblock to free memory into arm_bootmem_init()
ARM: memblock: use memblock when initializing memory allocators
ARM: ensure membank array is always sorted
ARM: 6466/1: implement flush_icache_all for the rest of the CPUs
ARM: 6464/2: fix spinlock recursion in adjust_pte()
ARM: fix memblock breakage
ARM: 6465/1: Fix data abort accessing proc_info from __lookup_processor_type
ARM: 6460/1: ixp2000: fix type of ixp2000_timer_interrupt
ARM: 6449/1: Fix for compiler warning of uninitialized variable.
ARM: 6445/1: fixup TCM memory types
ARM: imx: Add wake functionality to GPIO
ARM: mx5: Add gpio-keys to mx51 babbage board
ARM: imx: Add gpio-keys to plat-mxc
mx31_3ds: Fix spi registration
mx31_3ds: Fix the logic for detecting the debug board
...
DBG_MAX_REG_NUM incorrectly had the number of indices in the GDB regs
array rather than the number of registers, leading to an oops when the
"rd" command is used in KDB.
Cc: stable@kernel.org
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Use the new 'datap' variable in order to remove unnecessary casts.
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix up the arguments to arch_ptrace() to take account of the fact that
@addr and @data are now unsigned long rather than long as of a preceding
patch in this series.
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Acked-by: Roland McGrath <roland@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 5085f3ff45 added better support for
CONFIG_HOTPLUG_CPU by keeping proc_info around. However, depending on
the Kconfig options selected, this can make the booting fail mysteriously
early on.
Turns out a data abort can happen in __lookup_processor_type in
"ldmia r5, {r3, r4}". When it happens, the address loaded into r5 is not
aligned. Fix the problem by aligning proc_info.
Reported-by: Anand Gadiyar <gadiyar@ti.com>
Tested-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
kexec does not disable the outer cache before disabling the inner
caches in cpu_proc_fin(). So L2 is enabled across the kexec jump. When
the new kernel enables caches again, it randomly crashes.
Disabling L2 before calling cpu_proc_fin() cures the problem.
Disabling L2 requires the following new functions: flush_all(),
inv_all() and disable(). Add them to outer_cache_fns and call them
from the kexec code.
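A sketch of the resulting shutdown ordering, assuming outer_flush_all() and
outer_disable() as the wrappers around the new outer_cache_fns hooks:

static void kexec_cache_shutdown(void)
{
        outer_flush_all();      /* push all dirty data out of L2 */
        outer_disable();        /* L2 off before the inner caches go away */
        cpu_proc_fin();         /* now disable the inner caches */
}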
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
* 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl:
vfs: make no_llseek the default
vfs: don't use BKL in default_llseek
llseek: automatically add .llseek fop
libfs: use generic_file_llseek for simple_attr
mac80211: disallow seeks in minstrel debug code
lirc: make chardev nonseekable
viotape: use noop_llseek
raw: use explicit llseek file operations
ibmasmfs: use generic_file_llseek
spufs: use llseek in all file operations
arm/omap: use generic_file_llseek in iommu_debug
lkdtm: use generic_file_llseek in debugfs
net/wireless: use generic_file_llseek in debugfs
drm: use noop_llseek
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (278 commits)
arm: remove machine_desc.io_pg_offst and .phys_io
arm: use addruart macro to establish debug mappings
arm: return both physical and virtual addresses from addruart
arm/debug: consolidate addruart macros for CONFIG_DEBUG_ICEDCC
ARM: make struct machine_desc definition coherent with its comment
eukrea_mbimxsd-baseboard: Pass the correct GPIO to gpio_free
cpuimx27: fix compile when ULPI is selected
mach-pcm037_eet: fix compile errors
Fixing ethernet driver compilation error for i.MX31 ADS board
cpuimx51: update board support
mx5: add cpuimx51sd module and its baseboard
iomux-mx51: fix GPIO_1_xx 's IOMUX configuration
imx-esdhc: update devices registration
mx51: add resources for SD/MMC on i.MX51
iomux-mx51: fix SD1 and SD2's iomux configuration
clock-mx51: rename CLOCK1 to CLOCK_CCGR for better readability
clock-mx51: factorize clk_set_parent and clk_get_rate
eukrea_mbimxsd: add support for DVI displays
cpuimx25 & cpuimx35: fix OTG port registration in host mode
i.MX31 and i.MX35 : fix errate TLSbo65953 and ENGcm09472
...
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (96 commits)
apic, x86: Use BIOS settings for IBS and MCE threshold interrupt LVT offsets
apic, x86: Check if EILVT APIC registers are available (AMD only)
x86: ioapic: Call free_irte only if interrupt remapping enabled
arm: Use ARCH_IRQ_INIT_FLAGS
genirq, ARM: Fix boot on ARM platforms
genirq: Fix CONFIG_GENIRQ_NO_DEPRECATED=y build
x86: Switch sparse_irq allocations to GFP_KERNEL
genirq: Switch sparse_irq allocator to GFP_KERNEL
genirq: Make sparse_lock a mutex
x86: lguest: Use new irq allocator
genirq: Remove the now unused sparse irq leftovers
genirq: Sanitize dynamic irq handling
genirq: Remove arch_init_chip_data()
x86: xen: Sanitise sparse_irq handling
x86: Use sane enumeration
x86: uv: Clean up the direct access to irq_desc
x86: Make io_apic.c local functions static
genirq: Remove irq_2_iommu
x86: Speed up the irq_remapped check in hot pathes
intr_remap: Simplify the code further
...
Fix up trivial conflicts in arch/x86/Kconfig
Since we're now using addruart to establish the debug mapping, we can
remove the io_pg_offst and phys_io members of struct machine_desc.
The various declarations were removed using the following script:
grep -rl MACHINE_START arch/arm | xargs \
sed -i '/MACHINE_START/,/MACHINE_END/ { /\.\(phys_io\|io_pg_offst\)/d }'
[ Initial patch was from Jeremy Kerr, example script from Russell King ]
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Eric Miao <eric.miao at canonical.com>
Since we can get both physical and virtual addresses from the addruart
macro, we can use this to establish the debug mappings.
In the case of CONFIG_DEBUG_ICEDCC, we don't need any mappings, but
may still need to setup r7 correctly.
Incorporating ASM changes from Nicolas Pitre <npitre@fluxnic.net>.
Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Kevin Hilman <khilman@deeprootsystems.com>
Rather than checking the MMU status in every instance of addruart, do it
once in kernel/debug.S, and change the existing addruart macros to
return both physical and virtual addresses. The main debug code can then
select the appropriate address to use.
This will also allow us to retrieve the address of a uart for the MMU
state that we're not currently in.
Updated with fixes for OMAP from Jason Wang <jason77.wang@gmail.com>
and Tony Lindgren <tony@atomide.com>, and fix for versatile express from
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>.
Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jason Wang <jason77.wang@gmail.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Tested-by: Kevin Hilman <khilman@deeprootsystems.com>
Provide a mechanism that allows running code in IRQ context. It is
most useful for NMI code that needs to interact with the rest of the
system -- like wakeup a task to drain buffers.
Perf currently has such a mechanism, so extract that and provide it as
a generic feature, independent of perf so that others may also
benefit.
The IRQ context callback is generated through self-IPIs where
possible, or on architectures like powerpc the decrementer (the
built-in timer facility) is set to generate an interrupt immediately.
Architectures that don't have anything like this make do with a
callback from the timer tick. These architectures can call
irq_work_run() at the tail of any IRQ handlers that might enqueue such
work (like the perf IRQ handler) to avoid undue latencies in
processing the work.
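A minimal usage sketch of the interface, assuming an init_irq_work() /
irq_work_queue() style API; the surrounding names are illustrative:

static struct task_struct *drain_thread;        /* illustrative consumer */
static struct irq_work drain_work;

static void drain_func(struct irq_work *work)
{
        /* runs later in hard-IRQ context, where waking a task is allowed */
        wake_up_process(drain_thread);
}

static void drain_setup(void)
{
        init_irq_work(&drain_work, drain_func);
}

static void from_nmi(void)
{
        irq_work_queue(&drain_work);    /* delivered via self-IPI, the
                                           decrementer, or the timer tick */
}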
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
[ various fixes ]
Signed-off-by: Huang Ying <ying.huang@intel.com>
LKML-Reference: <1287036094.7768.291.camel@yhuang-dev>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The core code now initializes the requested number of interrupts and
sets the flags in irq_desc.status which are requested by the
architecture via ARCH_IRQ_INIT_FLAGS.
Add ARCH_IRQ_INIT_FLAGS and remove the loop which sets those flags
after the irq descriptors are allocated.
[ This patch should have been in the original irq rework and got
dropped accidentaly ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@stericsson.com>
Cc: Anand Gadiyar <gadiyar@ti.com>
Commit b683de2b3 in linux-next as of 20101014 (genirq: Query
arch for number of early descriptors) seems to have broken
bootup on several ARM boards - my beagleboard gives the
following dump with earlyprintk:
NR_IRQS:402
Unable to handle kernel NULL pointer dereference at virtual address 00000028
pgd = c0004000
[00000028] *pgd=00000000
Internal error: Oops: 5 [#1]
last sysfs file:
Modules linked in:
CPU: 0    Not tainted  (2.6.36-rc7-next-20101014-linux-next-20101012+ #40)
PC is at init_IRQ+0x14/0x48
LR is at start_kernel+0x150/0x2c0
[...]
We seem to be using desc->status without assigning desc to
anything. Fix this by adding back the code that was originally
there.
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Tested-by: Linus Walleij <linus.walleij@stericsson.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
LKML-Reference: <1287077397-21781-1-git-send-email-gadiyar@ti.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
Fix this linux-next build failure that Stephen reported:
arch/arm/kernel/perf_event.c: In function 'armpmu_event_init':
arch/arm/kernel/perf_event.c:543: error: request for member 'num_events' in something not a structure or union
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
LKML-Reference: <20101014164925.4fa16b75.sfr@canb.auug.org.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
sparse irq sets up NR_IRQS_LEGACY irq descriptors and archs then go
ahead and allocate more.
Use the unused return value of arch_probe_nr_irqs() to let the
architecture return the number of early allocations. Fix up all users.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
The number of counters for the registered pmu is needed in a few places
so provide a helper function that returns this number.
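On ARM the helper can be as small as the following sketch, assuming the
backend keeps its PMU description in a global armpmu pointer:

int perf_num_counters(void)
{
        return armpmu ? armpmu->num_events : 0;
}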
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Add some additional documentation on register usage in __enable_mmu
to help complete the overall picture.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move these two functions, both of which are required for secondary
CPU booting, into the cpuinit section. Ensure bad processors call
__error_p for better diagnostics, rather than just __error.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
__enable_mmu is required to be executed in an identity mapped region
to ensure that variances in CPUs do not cause a crash. We currently
achieve this by assuming that it will be co-located with
__create_page_tables. With hotplug CPU support, this assumption
becomes invalid. Implement a better solution which ensures that
it will be appropriately mapped no matter where it is placed.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
__error and __error_p may be used by secondary CPUs, so these
need to be in the cpuinit section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move these functions, which are only ever used during boot CPU
initialization, to the init section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When hotplug CPU is enabled, we need to keep the list of supported CPUs,
their setup functions, and __lookup_processor_type in place so that we
can find and initialize secondary CPUs. Move these into the __CPUINIT
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make the entire kernel image available for secondary CPUs rather
than just the first MB of memory. This allows the startup code
to appear in the cpuinit sections.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
nommu can jump directly to __mmap_switched without the absolute
address branching which the mmuful kernel does.
Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order for CPUidle to work on SMP systems, an implementation of
cpu_idle_wait() is needed.
This patch duplicates the x86 implementation of cpu_idle_wait() for
ARM.
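The x86 version being duplicated is roughly the following (shown as a
sketch):

static void do_nothing(void *unused)
{
}

void cpu_idle_wait(void)
{
        smp_mb();
        /* kick all CPUs so they drop out of their current pm_idle call */
        smp_call_function(do_nothing, NULL, 1);
}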
Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The kernel does not compile for my ARM926EJ-S system U300 due to
the isb instruction inserted in a generic assembler statement by
commit 8925ec4c53, "ARM: 6385/1:
setup: detect aliasing I-cache when D-cache is non-aliasing".
The isb is only available when assembling for v7, so let's
use the generic isb() macro from setup.h instead.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, the Kernel assumes that if a CPU has a non-aliasing D-cache
then the I-cache is also non-aliasing. This may not be true on ARM cores
from v6 onwards, which may have aliasing I-caches but non-aliasing
D-caches.
This patch adds a cpu_has_aliasing_icache function, which is called from
cacheid_init and adds CACHEID_VIPT_I_ALIASING to the cacheid when
appropriate. A utility macro, icache_is_vipt_aliasing(), is also
provided.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
No need to send IPI if there's one CPU, especially when booting
systems with CONFIG_SMP_ON_UP that may not even support IPI.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
UP systems do not implement all the instructions that SMP systems have,
so in order to boot a SMP kernel on a UP system, we need to rewrite
parts of the kernel.
Do this using an 'alternatives' scheme, where the kernel code and data
is modified prior to initialization to replace the SMP instructions,
thereby rendering the problematical code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement values,
and run through these replacements when we detect a UP system.
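Conceptually the fixup pass looks like the sketch below; in the real patch
the table is emitted by the linker, the entries are encoded differently, and
the patching happens very early, before the caches and MMU are fully up:

struct smp_alt {
        unsigned long *loc;     /* location of the SMP-only instruction */
        unsigned long up_insn;  /* replacement encoding for UP */
};

static void fixup_smp(struct smp_alt *start, struct smp_alt *end)
{
        struct smp_alt *a;

        for (a = start; a < end; a++)
                *a->loc = a->up_insn;   /* caches flushed afterwards */
}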
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is done so as to be able to make use of the coresight components'
registers in assembler code (like omap sleep code). Also, there shouldn't
be any users of this structure outside the etm driver.
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Alexander Shishkin <virtuoso@slind.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The MOVW instruction moves a 16-bit immediate into the bottom halfword
of the destination register.
This patch ensures that kprobes leaves the 16-bit immediate intact, rather
than assume a 12-bit immediate and mask out the upper 4 bits.
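For reference, ARM MOVW encodes the immediate as imm4 (bits [19:16]) followed
by imm12 (bits [11:0]), so both fields must be kept; a sketch of the
extraction:

static unsigned int movw_imm16(unsigned int insn)
{
        return ((insn & 0x000f0000) >> 4) | (insn & 0x00000fff);
}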
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The kernel makes the high vector page visible to user space. This page
contains (amongst others) small code segments that can be executed in
user space. Make this page visible through ptrace and /proc/<pid>/mem
in order to let gdb perform code parsing needed for proper unwinding.
For example, the ERESTART_RESTARTBLOCK handler actually has a stack
frame -- it returns to a PC value stored on the user's stack. To
unwind after a "sleep" system call was interrupted twice, GDB would
have to recognize this situation and understand that stack frame
layout -- which it currently cannot do.
We could fix this by hard-coding addresses in the vector page range into
GDB, but that isn't really portable as not all of those addresses are
guaranteed to remain stable across kernel releases. And having the gdb
process make an exception for this page and get content from its own
address space for it looks strange, and it is not future proof either.
Being located above PAGE_OFFSET, this vma cannot be deleted by
user space code.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
* master.kernel.org:/home/rmk/linux-2.6-arm: (28 commits)
ARM: 6411/1: vexpress: set RAM latencies to 1 cycle for PL310 on ct-ca9x4 tile
ARM: 6409/1: davinci: map sram using MT_MEMORY_NONCACHED instead of MT_DEVICE
ARM: 6408/1: omap: Map only available sram memory
ARM: 6407/1: mmu: Setup MT_MEMORY and MT_MEMORY_NONCACHED L1 entries
ARM: pxa: remove pr_<level> uses of KERN_<level>
ARM: pxa168fb: clear enable bit when not active
ARM: pxa: fix cpu_is_pxa*() not expanding to zero when not configured
ARM: pxa168: fix corrected reset vector
ARM: pxa: Use PIO for PI2C communication on Palm27x
ARM: pxa: Fix Vpac270 gpio_power for MMC
ARM: 6401/1: plug a race in the alignment trap handler
ARM: 6406/1: at91sam9g45: fix i2c bus speed
leds: leds-ns2: fix locking
ARM: dove: fix __io() definition to use bus based offset
dmaengine: fix interrupt clearing for mv_xor
ARM: kirkwood: Unbreak PCIe I/O port
ARM: Fix build error when using KCONFIG_CONFIG
ARM: 6383/1: Implement phys_mem_access_prot() to avoid attributes aliasing
ARM: 6400/1: at91: fix arch_gettimeoffset fallout
ARM: 6398/1: add proc info for ARM11MPCore/Cortex-A9 from ARM
...
If a signal hits us outside of a syscall and another gets delivered
when we are in sigreturn (e.g. because it had been in sa_mask for
the first one and got sent to us while we'd been in the first handler),
we have a chance of returning from the second handler to location one
insn prior to where we ought to return. If r0 happens to contain -513
(-ERESTARTNOINTR), sigreturn will get confused into doing restart
syscall song and dance.
Incredible joy to debug, since it manifests as random, infrequent and
very hard to reproduce double execution of instructions in userland
code...
The fix is simple - mark it "don't bother with restarts" in wrapper,
i.e. set r8 to 0 in sys_sigreturn and sys_rt_sigreturn wrappers,
suppressing the syscall restart handling on return from these guys.
They can't legitimately return a restart-worthy error anyway.
Testcase:
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/time.h>
#include <errno.h>

void f(int n)
{
        __asm__ __volatile__(
                "ldr r0, [%0]\n"
                "b 1f\n"
                "b 2f\n"
                "1:b .\n"
                "2:\n" : : "r"(&n));
}

void handler1(int sig) { }
void handler2(int sig) { raise(1); }
void handler3(int sig) { exit(0); }

int main(void)
{
        struct sigaction s = { .sa_handler = handler2 };
        struct itimerval t1 = { .it_value = {1} };
        struct itimerval t2 = { .it_value = {2} };

        signal(1, handler1);
        sigemptyset(&s.sa_mask);
        sigaddset(&s.sa_mask, 1);
        sigaction(SIGALRM, &s, NULL);
        signal(SIGVTALRM, handler3);
        setitimer(ITIMER_REAL, &t1, NULL);
        setitimer(ITIMER_VIRTUAL, &t2, NULL);
        f(-513);        /* -ERESTARTNOINTR */
        write(1, "buggered\n", 9);
        return 1;
}
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Al Viro reports that calling "sys_sigsuspend(-ERESTARTNOHAND, 0, 0)"
with two signals coming and being handled in kernel space results
in the syscall restart being done twice.
Avoid this by clearing the 'why' flag when we call the signal handling
code to prevent further syscall restarts after the first.
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Neither the overcommit nor the reservation sysfs parameter were
actually working, remove them as they'll only get in the way.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Replace pmu::{enable,disable,start,stop,unthrottle} with
pmu::{add,del,start,stop}, all of which take a flags argument.
The new interface extends the capability to stop a counter while
keeping it scheduled on the PMU. We replace the throttled state with
the generic stopped state.
This also allows us to efficiently stop/start counters over certain
code paths (like IRQ handlers).
It also allows scheduling a counter without it starting, allowing for
a generic frozen state (useful for rotating stopped counters).
The stopped state is implemented in two different ways, depending on
how the architecture implemented the throttled state:
1) We disable the counter:
a) the pmu has per-counter enable bits, we flip that
b) we program a NOP event, preserving the counter state
2) We store the counter state and ignore all read/overflow events
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Since the current perf_disable() usage is only an optimization,
remove it for now. This eases the removal of the __weak
hw_perf_enable() interface.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Simple registration interface for struct pmu, this provides the
infrastructure for removing all the weak functions.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: paulus <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Cc: Yanmin <yanmin_zhang@linux.intel.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Miller <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
If we're targeting a v6 or v7 core and have at least software perf events
available, then automatically add support for hardware breakpoints.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For debuggers to take advantage of the hw-breakpoint framework in the kernel,
it is necessary to expose the API calls via a ptrace interface.
This patch exposes the hardware breakpoints framework as a collection of
virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS
requests. The breakpoints are stored in the debug_info struct of the running
thread.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The hw-breakpoint framework in the kernel requires architecture-specific
support in order to install, remove, validate and manage hardware
breakpoints.
This patch adds initial support for this framework to the ARM architecture,
but restricts the number of watchpoints to a single resource to get around
the fact that the Data Fault Address Register is unknown when a watchpoint
debug exception is taken.
On cores with v7 debug, the Kernel can handle breakpoint and watchpoint
exceptions occurring from userspace. Older cores require clients to handle
the exception themselves by registering an appropriate overflow handler
or, in the case of ptrace, handling the raised SIGTRAP.
The memory-mapped extended debug interface is unsupported due to its
unreliability in real implementations.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The validate_event function in the ARM perf events backend has the
following problems:
1.) Events that are disabled count towards the cost.
2.) Events associated with other PMUs [for example, software events or
breakpoints] do not count towards the cost, but do fail validation,
causing the group to fail.
This patch changes validate_event so that it ignores events in the
PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs.
Reported-by: Pawel Moll <pawel.moll@arm.com>
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With several sections per module, and dozens of modules, the
searches down the linked list of sections would dominate the
lookup time, dwarfing any savings from the binary search
within the section.
A simple move-to-front optimisation exploits the commonality
of the code paths taken, and in simple real-world tests reduces
the number of steps in the search to barely more than 1.
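The idea, as a sketch against the unwind table list (field names assumed):

static struct unwind_table *find_table(struct list_head *tables,
                                       unsigned long addr)
{
        struct unwind_table *t;

        list_for_each_entry(t, tables, list) {
                if (addr >= t->begin_addr && addr < t->end_addr) {
                        /* move the hit to the front so the next lookup,
                         * very likely for the same module, terminates
                         * after roughly one step */
                        list_move(&t->list, tables);
                        return t;
                }
        }
        return NULL;
}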
Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Without these, exit functions cannot be stack-traced, so to speak.
This implies that module unloads that perform allocations (don't
laugh) will cause noisy warnings on the console when kmemleak is
enabled, as it presumes that all code's call chains are traceable.
Similarly, BUGs and WARN_ONs will give additional console spam.
Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The various sections are all dealt with similarly, so factor out
that common behaviour. (Incorporating Peter Huewe's fix.)
Cc: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Less to read.
Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Handle the different nop and call instructions for Thumb-2. Also, we
need to adjust the recorded mcount_loc addresses because they have the
lsb set.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change]
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds mcount recording and updates dynamic ftrace for ARM to work
with the new ftrace dynamic tracing implementation. It also adds support
for the mcount format used by newer ARM compilers.
With dynamic tracing, mcount() is implemented as a nop. Callsites are
patched on startup with nops, and dynamically patched to call to the
ftrace_caller() routine as needed.
Acked-by: Steven Rostedt <rostedt@goodmis.org> [recordmcount.pl change]
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix the mcount routines to build and run on a kernel built with the
Thumb-2 instruction set by correcting the following errors using the
fixes suggested by Catalin Marinas:
- Problem: The following assembler errors appear at the "adr r0,
ftrace_stub" instruction:
entry-common.S: Assembler messages:
entry-common.S:179: Error: invalid immediate for address calculation (value = 0x00000004)
Fix: The errors don't occur with a non-global symbol, so use one.
- Problem: The "mov lr, pc" does not set the lsb when storing the pc in
lr. The called function returns with "bx lr", and the mode changes
to ARM.
Fix: Add a label on the return address and use "adr lr, BSYM(label)".
We don't modify the old mcount because it won't be built when using
Thumb-2.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When building as Thumb-2, the ".type foo, %function" annotation in
ENDPROC seems to be required in order for the assembly routines to be
recognized as Thumb-2 code. If the ENDPROC annotations are not present,
calls to these routines are generated as BLX instead of BL.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With a new enough GCC, ARM function tracing can be supported without the
need for frame pointers. This is essential for Thumb-2 support, since
frame pointers aren't available then.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 2.6.36-rc kernel added three new system calls:
fanotify_init, fanotify_mark, and prlimit64. This patch
wires them up on ARM.
The only non-trivial issue here is the u64 argument to
sys_fanotify_mark(), but it is the 3rd argument and thus
passed in r2/r3 in both kernel and user space, so it causes
no problems.
Tested with a 2.6.36-rc2 EABI kernel on an ixp4xx machine.
Tested-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is purely a cosmetic change to the ARM perf backend because the current
comments about the relationship between NMIs, interrupt context and
perf_event_do_pending are misleading.
This patch updates the comments so that they reflect what the code
actually does (which is in line with other architectures).
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
No one is using the tty argument, so let's get rid of it.
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Acked-by: Jason Wessel <jason.wessel@windriver.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Store the kernel and user contexts from the generic layer instead
of archs, this gathers some repetitive code.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
- Most archs use one callchain buffer per cpu, except x86 that needs
to deal with NMIs. Provide a default perf_callchain_buffer()
implementation that x86 overrides.
- Centralize all the kernel/user regs handling and invoke new arch
handlers from there: perf_callchain_user() / perf_callchain_kernel()
That avoids all the user_mode() and current->mm checks, and so on.
- Invert some parameters in perf_callchain_*() helpers: entry to the
left, regs to the right, following the traditional (dst, src).
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
callchain_store() is the same on every archs, inline it in
perf_event.h and rename it to perf_callchain_store() to avoid
any collision.
This removes repetitive code.
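The shape of the inlined helper is essentially this (shown as a sketch):

static inline void
perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
{
        if (entry->nr < PERF_MAX_STACK_DEPTH)
                entry->ip[entry->nr++] = ip;
}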
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
Drop the TASK_RUNNING test on user tasks for callchains as
this check doesn't seem to make any sense.
Also remove the tests for !current, which is not supposed to happen,
and for current->pid, as this should be handled at the generic level
with the exclude_idle attribute.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Borislav Petkov <bp@amd64.org>
* master.kernel.org:/home/rmk/linux-2.6-arm:
VIDEO: amba clcd: don't disable an already disabled clock
ARM: Tighten check for allowable CPSR values
ARM: 6329/1: wire up sys_accept4() on ARM
ARM: 6328/1: Build with -fno-dwarf2-cfi-asm
ARM: 6326/1: kgdb: fix GDB_MAX_REGS no longer used
Make do_execve() take a const filename pointer so that kernel_execve() compiles
correctly on ARM:
arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type
This also requires the argv and envp arguments to be consted twice, once for
the pointer array and once for the strings the array points to. This is
because do_execve() passes a pointer to the filename (now const) to
copy_strings_kernel(). A simpler alternative would be to cast the filename
pointer in do_execve() when it's passed to copy_strings_kernel().
do_execve() may not change any of the strings it is passed as part of the argv
or envp lists, as some of them are in .rodata, so marking these strings as
const should be fine.
Further kernel_execve() and sys_execve() need to be changed to match.
This has been test built on x86_64, frv, arm and mips.
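The resulting prototype, with argv/envp consted twice as described, looks
roughly like this (the trailing pt_regs argument reflects the signature of
that era):

int do_execve(const char *filename,
              const char __user *const __user *argv,
              const char __user *const __user *envp,
              struct pt_regs *regs);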
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
sys_accept4() was added in kernel 2.6.28, but ARM was not updated
to include it. The number and types of parameters is such that
no ARM-specific processing is needed, so wiring up sys_accept4()
just requires defining __NR_accept4 and adding a direct call in
the syscall entry table.
Tested with an EABI 2.6.35 kernel and Ulrich Drepper's original
accept4() test program, modified to define __NR_accept4 for ARM.
Using the updated unistd.h also eliminates a warning when building
glibc (2.10.2 and newer) about accept4() being unimplemented.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
According to commit 22eeef4bb2 ("kgdb,arm: Individual register get/set
for arm"), GDB_MAX_REGS is now replaced by DBG_MAX_REG_NUM.
Cc: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Mark arguments to certain system calls as being const where they should be but
aren't. The list includes:
(*) The filename arguments of various stat syscalls, execve(), various utimes
syscalls and some mount syscalls.
(*) The filename arguments of some syscall helpers relating to the above.
(*) The buffer argument of various write syscalls.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The first read from ETM OS save and restore register after the power
down bit deassertion returns garbage.
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Alexander Shishkin <virtuoso@slind.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a comment describing the mcount variants and what the callsites
look like.
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The mcount implementation currently uses a different indentation style
from the rest of the file (and the rest of the ARM assembly in the
kernel). Clean it up.
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
tracing/kprobes: unregister_trace_probe needs to be called under mutex
perf: expose event__process function
perf events: Fix mmap offset determination
perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
perf, powerpc: Convert the FSL driver to use local64_t
perf tools: Don't keep unreferenced maps when unmaps are detected
perf session: Invalidate last_match when removing threads from rb_tree
perf session: Free the ref_reloc_sym memory at the right place
x86,mmiotrace: Add support for tracing STOS instruction
perf, sched migration: Librarize task states and event headers helpers
perf, sched migration: Librarize the GUI class
perf, sched migration: Make the GUI class client agnostic
perf, sched migration: Make it vertically scrollable
perf, sched migration: Parameterize cpu height and spacing
perf, sched migration: Fix key bindings
perf, sched migration: Ignore unhandled task states
perf, sched migration: Handle ignored migrate out events
perf: New migration tool overview
tracing: Drop cpparg() macro
perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
...
Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb:
debug_core,kdb: fix crash when arch does not have single step
kgdb,x86: use macro HBP_NUM to replace magic number 4
kgdb,mips: remove unused kgdb_cpu_doing_single_step operations
mm,kdb,kgdb: Add a debug reference for the kdb kmap usage
KGDB: Remove set but unused newPC
ftrace,kdb: Allow dumping a specific cpu's buffer with ftdump
ftrace,kdb: Extend kdb to be able to dump the ftrace buffer
kgdb,powerpc: Replace hardcoded offset by BREAK_INSTR_SIZE
arm,kgdb: Add ability to trap into debugger on notify_die
gdbstub: do not directly use dbg_reg_def[] in gdb_cmd_reg_set()
gdbstub: Implement gdbserial 'p' and 'P' packets
kgdb,arm: Individual register get/set for arm
kgdb,mips: Individual register get/set for mips
kgdb,x86: Individual register get/set for x86
kgdb,kdb: individual register set and and get API
gdbstub: Optimize kgdb's "thread:" response for the gdb serial protocol
kgdb: remove custom hex_to_bin()implementation
Now that ARM implements the notify_die handlers, add the ability for
the kernel debugger to receive the notifications.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: linux-arm-kernel@lists.infradead.org
Implement the ability to individually get and set registers for kdb
and kgdb for arm.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: linux-arm-kernel@lists.infradead.org
Kernels compiled to ARM do not need to handle Thumb-2 module relocations
as interworking is not allowed. This patch #ifdef's out the handling of
such relocations.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reading back the upper and lower values in the R_ARM_THM_CALL and
R_ARM_THM_JUMP24 case was introduced by a previous commit, but it is
not needed.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The patch adds handling case for the R_ARM_THM_MOVW_ABS_NC and
R_ARM_THM_MOVT_ABS relocations in arch/arm/kernel/module.c. Such
relocations may appear in Thumb-2 compiled kernel modules.
Reported-by: Kyungmin Park <kmpark@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
x86 calls machine_shutdown() from the various machine_*() calls which
take the machine down ready for halting, restarting, etc, and uses
this to bring the system safely to a point where those actions can be
performed. Such actions are stopping the secondary CPUs.
So, change the ARM implementation of these to reflect what x86 does.
This solves kexec problems on ARM SMP platforms, where the secondary
CPUs were left running across the kexec call.
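On ARM this boils down to something like the following sketch:

void machine_shutdown(void)
{
#ifdef CONFIG_SMP
        smp_send_stop();        /* park the secondary CPUs first */
#endif
}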
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The TWD local timers are unable to wake up the CPU when it is placed
into a low power mode, eg. C3. Therefore, we need to adapt things
such that the TWD code can cope with this.
We do this by always providing a broadcast tick function, and marking
the fact that the TWD local timer will stop in low power modes. This
means that when the CPU is placed into a low power mode, the core
timer code marks this fact, and allows an IPI to be given to the core.
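In clockevents terms this amounts to flagging the TWD as a timer that stops
in deep C-states, roughly (function name illustrative):

static void twd_features_sketch(struct clock_event_device *clk)
{
        clk->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT |
                        CLOCK_EVT_FEAT_C3STOP;  /* stops in low power; core
                                                   code falls back to the
                                                   broadcast tick + IPI */
}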
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
All implementations of cpu_proc_fin() start by disabling interrupts
and then flush caches. Rather than have every processors proc_fin()
implementation do this, move it out into generic code - and move the
cache flush past setup_mm_for_reboot() (so it can benefit from having
caches still enabled.)
This allows cpu_proc_fin() to become independent of the L1/L2 cache
types, and eventually move the L2 cache flushing into the L2 support
code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This changes the TCM handling so that a fixed area is reserved at
0xfffe0000-0xfffeffff for TCM. This area is used by XScale but
XScale does not have TCM so the mechanisms are mutually exclusive.
This change is needed to make TCM detection more dynamic while
still being able to compile code into it, and is a must for the
unified ARM goals: the current TCM allocation at different places
in memory for each machine would be a nightmare if you want to
compile a single image for more than one machine with TCM so it
has to be nailed down in one place.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
CPUs v6 and up support multiple TCM banks, for example an ITCM of
8k is supplied in two 4k banks. This makes the TCM work on the
1176JZF-S devchip.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The earlier TCM memory regions were mapped as MT_MEMORY_UNCACHED
which doesn't really work on platforms supporting the new v6
features like the NX bit. Add unique MT_MEMORY_[I|D]TCM types
instead.
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Everything should now be using sparsemem rather than discontigmem, so
remove the code supporting discontigmem from ARM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
From: Bin Yang <bin.yang@marvell.com>
Cc: stable@kernel.org
Signed-off-by: Bin Yang <bin.yang@marvell.com>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
CPU: Testing write buffer coherency: ok
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:3145 check_flags+0xcc/0x1dc()
Modules linked in:
[<c0035120>] (unwind_backtrace+0x0/0xf8) from [<c0355374>] (dump_stack+0x20/0x24)
[<c0355374>] (dump_stack+0x20/0x24) from [<c0060c04>] (warn_slowpath_common+0x58/0x70)
[<c0060c04>] (warn_slowpath_common+0x58/0x70) from [<c0060c3c>] (warn_slowpath_null+0x20/0x24)
[<c0060c3c>] (warn_slowpath_null+0x20/0x24) from [<c008f224>] (check_flags+0xcc/0x1dc)
[<c008f224>] (check_flags+0xcc/0x1dc) from [<c00945dc>] (lock_acquire+0x50/0x140)
[<c00945dc>] (lock_acquire+0x50/0x140) from [<c0358434>] (_raw_spin_lock+0x50/0x88)
[<c0358434>] (_raw_spin_lock+0x50/0x88) from [<c00fd114>] (set_task_comm+0x2c/0x60)
[<c00fd114>] (set_task_comm+0x2c/0x60) from [<c007e184>] (kthreadd+0x30/0x108)
[<c007e184>] (kthreadd+0x30/0x108) from [<c0030104>] (kernel_thread_exit+0x0/0x8)
---[ end trace 1b75b31a2719ed1c ]---
possible reason: unannotated irqs-on.
irq event stamp: 3
hardirqs last enabled at (2): [<c0059bb0>] finish_task_switch+0x48/0xb0
hardirqs last disabled at (3): [<c002f0b0>] ret_slow_syscall+0xc/0x1c
softirqs last enabled at (0): [<c005f3e0>] copy_process+0x394/0xe5c
softirqs last disabled at (0): [<(null)>] (null)
Fix this by ensuring that the lockdep interrupt state is manipulated in
the appropriate places. We essentially treat userspace as an entirely
separate environment which isn't relevant to lockdep (lockdep doesn't
monitor userspace.) We don't tell lockdep that IRQs will be enabled
in that environment.
Instead, when creating kernel threads (which is a rare event compared
to entering/leaving userspace) we have to update the lockdep state. Do
this by starting threads with IRQs disabled, and in the kthread helper,
tell lockdep that IRQs are enabled, and enable them.
This provides lockdep with a consistent view of the current IRQ state
in kernel space.
This also reverts portions of 0d928b0b61
which didn't fix the problem.
Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This parameter is used by primary kernel to pass address of vmcore
header to the dump capture kernel.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This function is used by vmcore code to read a page from the old
kernel memory.
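A sketch of such a helper (the real version may differ in details such as
the physical address conversion):

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
                         unsigned long offset, int userbuf)
{
        void *vaddr;

        if (!csize)
                return 0;

        vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
        if (!vaddr)
                return -ENOMEM;

        if (userbuf) {
                if (copy_to_user(buf, vaddr + offset, csize)) {
                        iounmap(vaddr);
                        return -EFAULT;
                }
        } else {
                memcpy(buf, vaddr + offset, csize);
        }

        iounmap(vaddr);
        return csize;
}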
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When we are crashing there is no indirection page in place. Only
control page is present.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Implement function machine_crash_shutdown() which disables IRQs and
saves machine state to ELF notes structure.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Implement ARM support for the command line option
"crashkernel=size@start", which allows the user to reserve some memory
for a dump capture kernel.
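A heavily simplified sketch of the reservation path; total_ram_size() is a
placeholder, and whether memblock or bootmem performs the actual reservation
depends on the kernel version:

static void __init reserve_crashkernel_sketch(void)
{
        unsigned long long crash_size = 0, crash_base = 0;

        if (parse_crashkernel(boot_command_line, total_ram_size(),
                              &crash_size, &crash_base) || !crash_size)
                return;

        memblock_reserve(crash_base, crash_size);       /* assumption */
        crashk_res.start = crash_base;
        crashk_res.end   = crash_base + crash_size - 1;
}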
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The TLS register is only available on ARM1136 r1p0 and later.
Set HWCAP_TLS flags if hardware TLS is available and test for
it if CONFIG_CPU_32v6K is not set for V6.
Note that we set the TLS instruction in __kuser_get_tls
dynamically as suggested by Jamie Lokier <jamie@shareable.org>.
Also the __switch_to code is optimized out in most cases as
suggested by Nicolas Pitre <nico@fluxnic.net>.
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch enables the HAVE_REGS_AND_STACK_ACCESS_API option
for ARM which is required by the kprobe events tracer. Code based
on the PowerPC port.
Cc: Jean Pihet <jpihet@mvista.com>
Tested-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows NR_IRQS to be dynamic and lets platforms specify the number
of IRQs they really need.
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This was deprecated in 2001 and announced to live on for 5 years.
For now provide a kernel parameter for those who still need it.
Acked-by: Eric Miao <eric.miao@canonical.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Hardware performance counters on ARM are 32-bits wide but atomic64_t
variables are used to represent counter data in the hw_perf_event structure.
The armpmu_event_update function right-shifts a signed 64-bit delta variable
and adds the result to the event count. This can lead to shifting in sign-bits
if the MSB of the 32-bit counter value is set. This results in perf output
such as:
Performance counter stats for 'sleep 20':
18446744073460670464 cycles <-- 0xFFFFFFFFF12A6000
7783773 instructions # 0.000 IPC
465 context-switches
161 page-faults
1172393 branches
20.154242147 seconds time elapsed
This patch ensures that the delta value is treated as unsigned so that the
right shift sets the upper bits to zero.
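One way to express the width-limited, unsigned delta (a sketch of the idea
rather than the exact patch):

static u64 counter_delta(u64 prev_raw, u64 new_raw)
{
        const int shift = 64 - 32;      /* hardware counters are 32 bits */
        u64 delta = (new_raw << shift) - (prev_raw << shift);

        return delta >> shift;          /* unsigned: top bits become zero */
}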
Cc: <stable@kernel.org>
Acked-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A new random value for the canary is stored in the task struct whenever
a new task is forked. This is meant to allow for different canary values
per task. On ARM, GCC expects the canary value to be found in a global
variable called __stack_chk_guard. So this variable has to be updated
with the value stored in the task struct whenever a task switch occurs.
Because the variable GCC expects is global, this cannot work on SMP
unfortunately. So, on SMP, the same initial canary value is kept
throughout, making this feature a bit less effective although it is still
useful.
One way to overcome this GCC limitation would be to locate the
__stack_chk_guard variable into a memory page of its own for each CPU,
and then use TLB locking to have each CPU see its own page at the same
virtual address for each of them.
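On UP the task-switch hook amounts to the following sketch; the name
switch_stack_canary() is illustrative, and the real update happens in the
__switch_to assembly:

unsigned long __stack_chk_guard __read_mostly;

static inline void switch_stack_canary(struct task_struct *next)
{
        __stack_chk_guard = next->stack_canary;
}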
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This is the very basic stuff without the changing canary upon
task switch yet. Just the Kconfig option and a constant canary
value initialized at boot time.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
For this feature to take effect, CONFIG_COMPAT_BRK must be turned
off. This can safely be turned off for any EABI user space versions.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Since now all modification to event->count (and ->prev_count
and ->period_left) are local to a cpu, change then to local64_t so we
avoid the LOCK'ed ops.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>