... but produce a big warning about the problem as encouragement
for people to fix their drivers.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We need to round memory regions correctly -- specifically, we need to
round reserved regions in the more expansive direction (lower limit
down, upper limit up) whereas usable memory regions need to be rounded
in the more restrictive direction (lower limit up, upper limit down).
This introduces two sets of inlines:
memblock_region_memory_base_pfn()
memblock_region_memory_end_pfn()
memblock_region_reserved_base_pfn()
memblock_region_reserved_end_pfn()
Although they are antisymmetric (and therefore are technically
duplicates) the use of the different inlines explicitly documents the
programmer's intention.
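A minimal sketch of what these inlines boil down to, assuming the
PFN_UP/PFN_DOWN helpers from <linux/pfn.h> and a memblock_region
carrying base and size:

/* usable memory: round restrictively (base up, end down) */
static inline unsigned long
memblock_region_memory_base_pfn(const struct memblock_region *reg)
{
        return PFN_UP(reg->base);
}

static inline unsigned long
memblock_region_memory_end_pfn(const struct memblock_region *reg)
{
        return PFN_DOWN(reg->base + reg->size);
}

/* reserved regions: round expansively (base down, end up) */
static inline unsigned long
memblock_region_reserved_base_pfn(const struct memblock_region *reg)
{
        return PFN_DOWN(reg->base);
}

static inline unsigned long
memblock_region_reserved_end_pfn(const struct memblock_region *reg)
{
        return PFN_UP(reg->base + reg->size);
}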
The lack of proper rounding caused a bug on ARM, which was then found
to also affect other architectures.
Reported-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB4CDFD.4020105@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
When hotplug CPU is enabled, we need to keep the list of supported CPUs,
their setup functions, and __lookup_processor_type in place so that we
can find and initialize secondary CPUs. Move these into the __CPUINIT
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 14eff18126 added proper
detection for ARM11MPCore/Cortex-A9 instead of detecting them
as ARMv7. However, it was missing the HWCAP_TLS flags.
HWCAP_TLS is needed if support for earlier ARMv6 is compiled
into the same kernel. Without the HWCAP_TLS flag, userspace
won't work unless nosmp is specified:
Kernel panic - not syncing: Attempted to kill init!
CPU0: stopping
[<c005d5e4>] (unwind_backtrace+0x0/0xec) from [<c004c2f8>] (do_IPI+0xfc/0x184)
[<c004c2f8>] (do_IPI+0xfc/0x184) from [<c03f25bc>] (__irq_svc+0x9c/0x160)
Exception stack(0xc0565f80 to 0xc0565fc8)
5f80: 00000001 c05772a0 00000000 00003a61 c0564000 c05cf500 c003603c c0578600
5fa0: 80033ef0 410fc091 0000001f 00000000 00000000 c0565fc8 c00b91f8 c0057cb4
5fc0: 20000013 ffffffff
[<c03f25bc>] (__irq_svc+0x9c/0x160) from [<c0057cb4>] (default_idle+0x30/0x38)
[<c0057cb4>] (default_idle+0x30/0x38) from [<c005829c>] (cpu_idle+0x9c/0xf8)
[<c005829c>] (cpu_idle+0x9c/0xf8) from [<c0008d48>] (start_kernel+0x2a4/0x300)
[<c0008d48>] (start_kernel+0x2a4/0x300) from [<80008084>] (0x80008084)
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
copy_to_user_page can be used by access_process_vm to write to an
executable page of a process using a mapping acquired by kmap.
For systems with I-cache aliasing, flushing the I-cache using the
kernel mapping may leave stale data in the I-cache if the user
mapping is of a different colour.
This patch introduces a flush_icache_alias function to flush.c,
which calls flush_icache_range with a mapping of the specified
colour. flush_ptrace_access is then modified to call this new
function instead of coherent_kern_range in the case of an aliasing
I-cache and a non-aliasing D-cache.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Do this by adding flush_icache_all to cache_fns for ARMv6 and 7.
As flush_icache_all may need to be called from flush_kern_cache_all,
add it as the first entry in the cache_fns.
Note that we can now remove the ARM_ERRATA_411920 dependency
on !SMP so it can be selected on UP ARMv6 processors, such
as omap2.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
UP systems do not implement all the instructions that SMP systems have,
so in order to boot a SMP kernel on a UP system, we need to rewrite
parts of the kernel.
Do this using an 'alternatives' scheme, where the kernel code and data
is modified prior to initialization to replace the SMP instructions,
thereby rendering the problematical code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement values,
and run through these replacements when we detect a UP system.
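As a rough sketch of that patching pass (the section and symbol names
here are made up for illustration, not the actual implementation):

struct smp_alt {
        u32 *location;  /* word emitted by the SMP build */
        u32 up_value;   /* replacement to use on UP systems */
};

/* table generated by the linker from per-instruction fixup records */
extern struct smp_alt __smpalt_begin[], __smpalt_end[];

/* called during early init, only when a UP system is detected */
static void __init fixup_smp_on_up(void)
{
        struct smp_alt *a;

        for (a = __smpalt_begin; a < __smpalt_end; a++)
                *a->location = a->up_value;

        /* the patched text must still be flushed from the caches */
}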
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit f1a2481c0 sets up the default flags for MT_MEMORY and
MT_MEMORY_NONCACHED memory types. The L_PTE_USER flag is wrongly
set as default for these entries, so remove it. Also add the
L_PTE_WRITE flag so that these pages become read-write instead
of read-only
[this stops them being exposed to userspace, which is the main
concern here --rmk]
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On the r2p0, r2p1 and r2p2 versions of the Cortex-A9, data corruption
can occur under very rare conditions due to a store buffer optimisation.
This workaround sets a bit in the diagnostic register of the Cortex-A9,
disabling the optimisation and preventing the problem from occurring.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are very few legitimate use cases, if any, for directly accessing
system RAM through /dev/mem. So let's mimic what they do on x86 and
forbid it when CONFIG_STRICT_DEVMEM is turned on.
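On x86 this is done with a pfn filter along these lines (a sketch of
the approach being mimicked, not necessarily the exact ARM code):

/* non-zero if userspace may map this pfn through /dev/mem */
int devmem_is_allowed(unsigned long pfn)
{
        if (iomem_is_exclusive(pfn << PAGE_SHIFT))
                return 0;
        if (!page_is_ram(pfn))
                return 1;       /* device/register space: still allowed */
        return 0;               /* system RAM: forbidden */
}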
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This patch populates the L1 entries for the MT_MEMORY and
MT_MEMORY_NONCACHED types so that at boot-up we can map memory
outside system memory at page-level granularity.
Previously the mapping was limited to section level, which creates
unnecessary additional mappings for which physical memory may not
be present. On newer ARMs with speculation, this is dangerous and
can result in untraceable aborts.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When the policy for user space is to ignore misaligned accesses from user
space, the processor then performs a documented rotation on the accessed
data. This is the result of the access being trapped, and the kernel
disabling the alignment trap before returning to user space again.
In kernel space we always want misaligned accesses to be fixed up. This
is enforced by always re-enabling the alignment trap on every entry into
kernel space from user space. No such re-enabling is performed when an
exception occurs while already in kernel space as the alignment trap is
always supposed to be enabled in that case.
There is however a small race window when a misaligned access in user
space is trapped and the alignment trap disabled, but the CPU didn't
return to user space just yet. Any exception would be entered from kernel
space at that point and the kernel would then execute with the alignment
trap disabled.
Thanks to Maxime Bizon <mbizon@freebox.fr> for providing a test module
that made this issue reproducible.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv7 onwards requires that there are no aliases to the same physical
location using different memory types (i.e. Normal vs Strongly Ordered).
Access to SO mappings when the unaligned accesses are handled in
hardware is also Unpredictable (pgprot_noncached() mappings in user
space).
The /dev/mem driver requires uncached mappings with O_SYNC. The patch
implements the phys_mem_access_prot() function which generates Strongly
Ordered memory attributes if !pfn_valid() (independent of O_SYNC) and
Normal Noncacheable (writecombine) if O_SYNC.
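Treat the following as an illustrative sketch of that selection
rather than the exact code:

pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
                              unsigned long size, pgprot_t vma_prot)
{
        if (!pfn_valid(pfn))                    /* not system RAM */
                return pgprot_stronglyordered(vma_prot);
        else if (file->f_flags & O_SYNC)        /* uncached request */
                return pgprot_writecombine(vma_prot);
        return vma_prot;
}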
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv7 processors like Cortex-A9 broadcast the cache maintenance
operations in hardware. This patch allows the
flush_dcache_page/update_mmu_cache pair to work in lazy flushing mode
similar to the UP case.
Note that cache flushing on SMP systems now takes place via the
set_pte_at() call (__sync_icache_dcache) and there is no race with other
CPUs executing code from the new PTE before the cache flushing took
place.
Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On SMP systems, there is a small chance of a PTE becoming visible to a
different CPU before the current cache maintenance operations in
update_mmu_cache(). To avoid this, cache maintenance must be handled in
set_pte_at() (similar to IA-64 and PowerPC).
This patch provides a unified VIPT cache handling mechanism and
implements the __sync_icache_dcache() function for ARMv6 onwards
architectures. It is called from set_pte_at() and replaces the
update_mmu_cache(). The latter is still used on VIVT hardware where a
vm_area_struct is required.
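Schematically, set_pte_at() then takes a shape like this simplified
sketch (details such as the NG bit handling may differ):

#define set_pte_at(mm, addr, ptep, pteval) do {                 \
        if (addr >= TASK_SIZE)                                  \
                set_pte_ext(ptep, pteval, 0);                   \
        else {                                                  \
                __sync_icache_dcache(pteval);                   \
                set_pte_ext(ptep, pteval, PTE_EXT_NG);          \
        }                                                       \
} while (0)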
Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are places in Linux where writes to newly allocated page cache
pages happen without a subsequent call to flush_dcache_page() (several
PIO drivers including USB HCD). This patch changes the meaning of
PG_arch_1 to be PG_dcache_clean and always flush the D-cache for a newly
mapped page in update_mmu_cache().
The patch also sets the PG_arch_1 bit in the DMA cache maintenance
function to avoid additional cache flushing in update_mmu_cache().
Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit d73cd42 forced non-lazy cache flushing of highmem pages in
flush_dcache_page(). This isn't needed since __flush_dcache_page()
(called lazily from update_mmu_cache) can handle highmem pages (fixed by
commit 7e5a69e).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Setting these bits can cause issues on other SMP SoCs not produced
by ARM.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On the r2p0, r2p1 and r2p2 versions of the Cortex-A9, data corruption
can occur if a shared cache line is replaced on one CPU as another CPU
is accessing it.
This workaround sets two bits in the diagnostic register of the Cortex-A9,
reducing the linefill issuing capabilities of the processor and
avoiding the erroneous behaviour.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On versions of the Cortex-A9 up to and including r2p2, under rare
circumstances, a DMB instruction between 2 write operations may not
ensure the correct visibility ordering of the 2 writes.
This workaround sets a bit in the diagnostic register of the Cortex-A9,
causing the DMB instruction to behave like a DSB, which functions
correctly on the affected cores.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Kconfig doesn't have any knowledge of specific v7 cores, so it is possible
to select errata workarounds that may cause inadvertent behaviour when
executed on a core other than those targeted by the fix.
This patch improves the variant and revision checking in proc-v7.S so
that the primary part number is also considered when applying errata
workarounds.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Stephen Rothwell reported this build failure:
arch/arm/mm/init.c: In function 'arm_memory_present':
arch/arm/mm/init.c:260: warning: ISO C90 forbids mixed declarations and code
Caused by commit 719c1514f2 ("memblock/arm: Use new accessors")
which forgot a closing brace on a new for_each_memblock() in
arm_memory_present().
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <linux@arm.linux.org.uk>
LKML-Reference: <4C91C544.5050907@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Dave Hylands reports:
| We've observed a problem with dma_alloc_writecombine when the system
| is under heavy load (heavy bus traffic). We've managed to reduce the
| problem to the following snippet, which is run from a kthread in a
| continuous loop:
|
| void *virtAddr;
| dma_addr_t physAddr;
| unsigned int numBytes = 256;
| unsigned char *tmp;
|
| for (;;) {
| virtAddr = dma_alloc_writecombine(NULL,
| numBytes, &physAddr, GFP_KERNEL);
| if (virtAddr == NULL) {
| printk(KERN_ERR "Running out of memory\n");
| break;
| }
|
| /* access DMA memory allocated */
| tmp = virtAddr;
| *tmp = 0x77;
|
| /* free DMA memory */
| dma_free_writecombine(NULL,
| numBytes, virtAddr, physAddr);
|
| ...sleep here...
| }
|
| By itself, the code will run forever with no issues. However, as we
| increase our bus traffic (typically using DMA) then the *tmp = 0x77
| line will eventually cause a page fault. If we add a small delay (a
| few microseconds) before the *tmp = 0x77, then we don't see a page
| fault, even under heavy load.
A dsb() is required after modifying the PTE entries to ensure that they
will always be visible. Add this dsb().
Reported-by: Dave Hylands <dhylands@gmail.com>
Tested-by: Dave Hylands <dhylands@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM processors with hardware breakpoint and watchpoint support,
triggering these events results in a debug exception. These manifest
as prefetch and data aborts respectively.
arch/arm/mm/fault.c already provides hook_fault_code for hooking
into data aborts dependent on the DFSR. This patch adds a new function,
hook_ifault_code for hooking into prefetch aborts in the same manner.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
CPU_32v6K is selected by CPU_V7 but it only depends on CPU_V6.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
kunmap_atomic() is currently at level -4 on Rusty's "Hard To Misuse"
list[1] ("Follow common convention and you'll get it wrong"), except in
some architectures when CONFIG_DEBUG_HIGHMEM is set[2][3].
kunmap() takes a pointer to a struct page; kunmap_atomic(), however,
takes a pointer to within the page itself. This seems to once in a while
trip people up (the convention they are following is the one from
kunmap()).
Make it much harder to misuse, by moving it to level 9 on Rusty's list[4]
("The compiler/linker won't let you get it wrong"). This is done by
refusing to build if the type of its first argument is a pointer to a
struct page.
The real kunmap_atomic() is renamed to kunmap_atomic_notypecheck()
(which is what you would call in case for some strange reason calling it
with a pointer to a struct page is not incorrect in your code).
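The guard can be as small as this sketch (the exact macro shape may
differ):

#define kunmap_atomic(addr, idx) do {                           \
        BUILD_BUG_ON(__same_type((addr), struct page *));       \
        kunmap_atomic_notypecheck((addr), (idx));               \
} while (0)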
The previous version of this patch was compile tested on x86-64.
[1] http://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
[2] In these cases, it is at level 5, "Do it right or it will always
break at runtime."
[3] At least mips and powerpc look very similar, and sparc also seems to
share a common ancestor with both; there seems to be quite some
degree of copy-and-paste coding here. The include/asm/highmem.h file
for these three archs mentions x86 CPUs at its top.
[4] http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
[5] As an aside, could someone tell me why mn10300 uses unsigned long as
the first parameter of kunmap_atomic() instead of void *?
Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
Cc: Russell King <linux@arm.linux.org.uk> (arch/arm)
Cc: Ralf Baechle <ralf@linux-mips.org> (arch/mips)
Cc: David Howells <dhowells@redhat.com> (arch/frv, arch/mn10300)
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> (arch/mn10300)
Cc: Kyle McMartin <kyle@mcmartin.ca> (arch/parisc)
Cc: Helge Deller <deller@gmx.de> (arch/parisc)
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> (arch/parisc)
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> (arch/powerpc)
Cc: Paul Mackerras <paulus@samba.org> (arch/powerpc)
Cc: "David S. Miller" <davem@davemloft.net> (arch/sparc)
Cc: Thomas Gleixner <tglx@linutronix.de> (arch/x86)
Cc: Ingo Molnar <mingo@redhat.com> (arch/x86)
Cc: "H. Peter Anvin" <hpa@zytor.com> (arch/x86)
Cc: Arnd Bergmann <arnd@arndb.de> (include/asm-generic)
Cc: Rusty Russell <rusty@rustcorp.com.au> ("Hard To Misuse" list)
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
v2: Fixes from Mike Rapoport
- remove unused header files (mach/dma.h and mach/nand.h)
- remove tegra 1 references from Makefile.boot
v2: fixes from Russell King
- remove mach/io.h include from mach/iomap.h
- fix whitespace in Kconfig
v2: from Colin Cross
- fix invalid immediate in debug-macro.S
v3:
- allow selection of multiple boards
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Erik Gilling <konkers@android.com>
This patch adds the Kconfig and Makefile for the new S5PV310 SoC.
It also updates arch/arm Kconfig, Makefile and arch/arm/mm/Kconfig
to include support for the new S5PV310.
Signed-off-by: Changhwan Youn <chaos.youn@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
The implementation is pretty similar. There is a -small- added
overhead by having another function call and the address shift.
If that becomes a concern, I suppose we could actually have memblock
itself expose a memblock_pfn_valid() which then ARM can use directly
with an appropriate #define...
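The ARM side then reduces to a one-line wrapper; a sketch:

int pfn_valid(unsigned long pfn)
{
        /* the address shift mentioned above */
        return memblock_is_memory(pfn << PAGE_SHIFT);
}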
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (291 commits)
ARM: AMBA: Add pclk support to AMBA bus infrastructure
ARM: 6278/2: fix regression in RealView after the introduction of pclk
ARM: 6277/1: mach-shmobile: Allow users to select HZ, default to 128
ARM: 6276/1: mach-shmobile: remove duplicate NR_IRQS_LEGACY
ARM: 6246/1: mmci: support larger MMCIDATALENGTH register
ARM: 6245/1: mmci: enable hardware flow control on Ux500 variants
ARM: 6244/1: mmci: add variant data and default MCICLOCK support
ARM: 6243/1: mmci: pass power_mode to the translate_vdd callback
ARM: 6274/1: add global control registers definition header file for nuc900
mx2_camera: fix type of dma buffer virtual address pointer
mx2_camera: Add soc_camera support for i.MX25/i.MX27
arm/imx/gpio: add spinlock protection
ARM: Add support for the LPC32XX arch
ARM: LPC32XX: Arch config menu support and makefiles
ARM: LPC32XX: Phytec 3250 platform support
ARM: LPC32XX: Misc support functions
ARM: LPC32XX: Serial support code
ARM: LPC32XX: System suspend support
ARM: LPC32XX: GPIO, timer, and IRQ drivers
ARM: LPC32XX: Clock driver
...
smp_processor_id() must not be called from a preemptible context (this
is checked by CONFIG_DEBUG_PREEMPT). kmap_high_l1_vipt() was doing so.
This led to a problem where the wrong per_cpu kmap_high_l1_vipt_depth
could be incremented, causing a BUG_ON(*depth <= 0); in
kunmap_high_l1_vipt().
The solution is to move the call to smp_processor_id() after the call
to preempt_disable().
Originally by: Andrew Howe <ahowe@nvidia.com>
Signed-off-by: Gary King <gking@nvidia.com>
Acked-by: Nicolas Pitre <nico.as.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch is in preparation for a subsequent patch which adds barriers
to the I/O accessors. Since the mandatory barriers may do an L2 cache
sync, this patch avoids a recursive call into l2x0_cache_sync() via the
write*() accessors and wmb() and a call into l2x0_cache_sync() with the
l2x0_lock held.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All implementations of cpu_proc_fin() start by disabling interrupts
and then flush caches. Rather than have every processor's proc_fin()
implementation do this, move it out into generic code - and move the
cache flush past setup_mm_for_reboot() (so it can benefit from having
caches still enabled.)
This allows cpu_proc_fin() to become independent of the L1/L2 cache
types, and eventually move the L2 cache flushing into the L2 support
code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Statuses 3 (0b00011) and 6 (0b00110) of DFSR are Access Flag faults on
ARMv6K and ARMv7. Let's patch fsr_info[] at runtime if we are on ARMv7
or later.
Unfortunately, we don't have a runtime check for the 'K' extension, so we
can't check for it.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM one Linux PGD entry contains two hardware entries (see page
tables layout in pgtable.h). We normally guarantee that we always
fill both L1 entries. But create_mapping() doesn't follow this rule:
it can create individual L1 entries. So in do_translation_fault() we
have to apply the pmd_none() check to the entry that actually
corresponds to the address, not to the first of the pair.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add one more parameter to hook_fault_code() to be able to set 'code'
field of struct fsr_info.
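The extended prototype presumably ends up looking like this sketch:

void __init
hook_fault_code(int nr,
                int (*fn)(unsigned long, unsigned int, struct pt_regs *),
                int sig, int code, const char *name);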
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
POSIX specifies the use of signal SIGBUS with code BUS_ADRALN for
invalid address alignment.
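With the previous patch's extra parameter, the alignment hooks can
presumably be registered like this (fault-status numbers assumed):

hook_fault_code(1, do_alignment, SIGBUS, BUS_ADRALN,
                "alignment exception");
hook_fault_code(3, do_alignment, SIGBUS, BUS_ADRALN,
                "alignment exception");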
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The DMA coherent remap area is used to provide an uncached mapping
of memory for coherency with DMA engines. Currently, we look for
any free hole which our allocation will fit in with page alignment.
However, this can lead to fragmentation of the area, and allows small
allocations to cross L1 entry boundaries. This is undesirable as we
want to move towards allocating sections of memory.
Align allocations according to the size, limiting the alignment between
the page and section sizes.
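One way to express the size-based alignment, clamped between page and
section size (a sketch; SECTION_SHIFT here stands for the section-size
shift, 20 on classic ARM):

unsigned int bit = fls(size - 1);
unsigned int align;

if (bit < PAGE_SHIFT)
        bit = PAGE_SHIFT;       /* never less than page alignment */
if (bit > SECTION_SHIFT)
        bit = SECTION_SHIFT;    /* never more than section alignment */
align = 1 << bit;
/* then search the remap area for a free hole aligned to 'align' */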
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't need our own implementation of this, use the generic
library implementation instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This changes the TCM handling so that a fixed area is reserved at
0xfffe0000-0xfffeffff for TCM. This area is used by XScale, but
XScale does not have TCM so the mechanisms are mutually exclusive.
This change is needed to make TCM detection more dynamic while
still being able to compile code into it, and is a must for the
unified ARM goals: the current TCM allocation at different places
in memory for each machine would be a nightmare if you want to
compile a single image for more than one machine with TCM so it
has to be nailed down in one place.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If TCM is in use, we should display it in the virtual memory
layout along with everything else.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The earlier TCM memory regions were mapped as MT_MEMORY_UNCACHED
which doesn't really work on platforms supporting the new v6
features like the NX bit. Add unique MT_MEMORY_[I|D]TCM types
instead.
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a common early allocator function, in preparation for switching
over to LMB. When we do, this function will need to do a little more
than just allocating memory; we need it zero initialized too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the platform specific bootmem memory reservations out of
arch/arm/mm/mmu.c into their respective platform files.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Everything should now be using sparsemem rather than discontigmem, so
remove the code supporting discontigmem from ARM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than storing the minimum size of the vmalloc area, store the
maximum permitted address of the vmalloc area instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The TLS register is only available on ARM1136 r1p0 and later.
Set HWCAP_TLS flags if hardware TLS is available and test for
it if CONFIG_CPU_32v6K is not set for V6.
Note that we set the TLS instruction in __kuser_get_tls
dynamically as suggested by Jamie Lokier <jamie@shareable.org>.
Also the __switch_to code is optimized out in most cases as
suggested by Nicolas Pitre <nico@fluxnic.net>.
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On i.MX35 the L2X0_AUX_CTRL register does not have sensible reset
default values. Allow them to be overwritten with the aux_val/aux_mask
arguments passed to l2x0_init().
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
RealView boards with certain revisions of the L210/L220 cache controller
may have issues (hardware deadlock) with the mandatory barriers (DSB
followed by an L2 cache sync) when ARM_DMA_MEM_BUFFERABLE is enabled.
The patch disables ARM_DMA_MEM_BUFFERABLE for these boards.
Tested-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit f4d6477f introduced a workaround for the lack of hardware
broadcasting of the cache maintenance operations on ARM11MPCore.
However, the workaround is only valid on CPUs that do not do speculative
loads into the D-cache.
This patch adds a Kconfig option with the corresponding help to make the
above clear. When the DMA_CACHE_RWFO option is disabled, the kernel
behaviour is that prior to the f4d6477f commit. This also allows ARMv6
UP processors with speculative loads to work correctly.
For other processors, a different workaround may be needed.
Cc: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A recent patch for DMA cache maintenance on ARM11MPCore added a write
for ownership trick to the v6_dma_inv_range() function. Such operation
destroys data already present in the buffer. However, this function is
used with dma_sync_single_for_device(), which is supposed to
preserve the existing data transferred into the buffer. This patch adds a
combination of read/write for ownership to preserve the original data.
Reported-by: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This macro is not defined when !CONFIG_MMU so this patch moves the
CONSISTENT_* definitions to the CONFIG_MMU section.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and above have a restriction whereby aliasing virtual:physical
mappings must not have differing memory type and sharability
attributes. Strictly, this covers the memory type (strongly ordered,
device, memory), cache attributes (uncached, write combine, write
through, write back read alloc, write back write alloc) and the
shared bit.
However, using ioremap() and its variants on system RAM results in
mappings which differ in these attributes from the main system RAM
mapping. Other architectures with similar restrictions approach this
problem in the same way - they do not permit ioremap on main system
RAM.
Make ARM behave in the same way, with a WARN_ON() such that users can
be traced and an alternative approach found.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The VM subsystem assumes that there are valid memmap entries to
the bank end aligned to MAX_ORDER_NR_PAGES. It will try and read
these page structs, and so we cannot free any memmap entries that
it may inspect.
Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
When a function's incoming parameters are not in the input operands
list, gcc 4.5 does not load the parameters into registers before
calling the function, but the inline assembly assumes valid addresses
inside it. This breaks the code because r0 and r1 are invalid when
execution enters v4wb_copy_user_page().
Also the constant needs to be used as the third input operand, so
account for that as well.
Tested on qemu arm.
CC: <stable@kernel.org>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Instruction faults on pre-ARMv6 CPUs are interpreted as
a 'translation fault', but do_translation_fault doesn't
handle it well when user mode tries to run an instruction
above TASK_SIZE, resulting in infinite retry of that
instruction.
CC: <stable@kernel.org>
Signed-off-by: Anfei Zhou <anfei.zhou@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When CONFIG_DEBUG_HIGHMEM is used, the fixmap entry used for a highmem page
by kmap_atomic() is always cleared by kunmap_atomic(). This helps find
bad usages such as dereferences after the unmap, or overflow into the
adjacent fixmap areas.
But this debugging aid is completely bypassed when a kmap for the same
page already exists as the kmap is reused instead. On VIVT systems we
have no choice but to reuse that kmap due to cache coherency issues,
but on non VIVT systems we should always force the fixmap usage when
debugging is active.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This fixes a bug in mm/init.c when freeing the TCM compile memory:
it was being referred to as a char *, which is incorrect, since that
dereferences the pointer and feeds in the value at the location
instead of its address. Change it to a plain char and use &(char)
to reference it.
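The distinction at issue, sketched:

/* a linker symbol marks an address; declare it as an object ... */
extern char __tcm_end;
unsigned long tcm_top = (unsigned long)&__tcm_end;      /* correct */

/* ... whereas declaring it as a pointer and using its value, as the
 * buggy code effectively did, loads whatever bytes happen to be
 * stored at that address (symbol name here is illustrative): */
extern char *__tcm_end_wrong;
unsigned long bogus = (unsigned long)__tcm_end_wrong;   /* wrong */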
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes flush_cache_all for ARMv7 SMP. It was
missing from commit b8349b569a
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (224 commits)
ARM: remove 'select GENERIC_TIME'
ARM: 6136/1: ARCH_REQUIRE_GPIOLIB selects GENERIC_GPIO
ARM: 6074/1: oprofile: convert from sysdev to platform device
ARM: 6073/1: oprofile: remove old files and update KConfig
ARM: 6072/1: oprofile: use perf-events framework as backend
ARM: 6071/1: perf-events: allow modules to query the number of hardware counters
ARM: 6070/1: perf-events: add support for xscale PMUs
ARM: 6069/1: perf-events: use numeric ID to identify PMU
ARM: 6064/1: pmu: register IRQs at runtime
ARM: Optionally allow ARMv6 to use 'normal, bufferable' memory for DMA
ARM: 6134/1: Handle instruction cache maintenance fault properly
ARM: nwfpe: allow debugging output to be configured at runtime
ARM: rename mach_cpu_disable() to platform_cpu_disable()
ARM: 6132/1: PL330: Add common core driver
ARM: 6094/1: Extend cache-l2x0 to support the 16-way PL310
ARM: Move memory mapping into mmu.c
ARM: Ensure meminfo is sorted prior to sanity_check_meminfo
ARM: Remove useless linux/bootmem.h includes
ARM: convert /proc/cpu/aligment to seq_file
arm: use asm-generic/scatterlist.h
...
Provide a configuration option to allow the ARMv6 to use normal
bufferable memory for coherent DMA. This option is forced to 'y'
for ARMv7, and offered as a configuration option on ARMv6.
Enabling this option requires drivers to have the necessary barriers
to ensure that data in DMA coherent memory is visible prior to the
DMA operation commencing.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Between "clean D line..." and "invalidate I line" operations in
v7_coherent_user_range(), the memory page may get swapped out.
And the fault on "invalidate I line" could not be properly handled
causing the oops.
In ARMv6 "external abort on linefetch" replaced by "instruction cache
maintenance fault". Let's handle it as translation fault. It fixes the
issue.
I'm not sure if it's reasonable to check arch version in run-time.
Let's do it in compile time for now.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Siarhei Siamashka <siarhei.siamashka@nokia.com>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The L310 cache controller's interface is almost identical
to the L210. One major difference is that the PL310 can
have up to 16 ways.
This change uses the cache's part ID and the Associativity
bits in the AUX_CTRL register to determine the number of ways.
Also, this version prints out the CACHE_ID and AUX_CTRL registers.
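The determination presumably reduces to something like this sketch
(register offsets and field names per the L2x0 TRM; constant names
assumed):

u32 cache_id = readl(l2x0_base + L2X0_CACHE_ID);
u32 aux = readl(l2x0_base + L2X0_AUX_CTRL);
int ways;

switch (cache_id & L2X0_CACHE_ID_PART_MASK) {
case L2X0_CACHE_ID_PART_L310:
        ways = (aux & (1 << 16)) ? 16 : 8;      /* Associativity bit */
        break;
case L2X0_CACHE_ID_PART_L210:
        ways = (aux >> 13) & 0xf;
        break;
default:
        ways = 8;                               /* safe fallback */
}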
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jason S. McMullan <jason.mcmullan@netronome.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert code away from ->read_proc/->write_proc interfaces. Switch to
proc_create()/proc_create_data() which makes addition of proc entries
reliable wrt NULL ->proc_fops, NULL ->data and so on.
The problem with ->read_proc et al is described in commit
786d7e1612 ("Fix rmmod/read/write races in /proc entries").
This patch is part of an effort to remove the old simple procfs PAGE_SIZE
buffer interface.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Enable Tauros2 L2 in mmp2. Tauros2 L2 is shared across Marvell ARM cores.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
The standard I-cache Invalidate All (ICIALLU) and Branch Predication
Invalidate All (BPIALL) operations are not automatically broadcast to
the other CPUs in an ARMv7 MP system. The patch adds the Inner Shareable
variants, ICIALLUIS and BPIALLIS, if ARMv7 and SMP.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Snoop Control Unit on the ARM11MPCore hardware does not detect the
cache operations and the dma_cache_maint*() functions may leave stale
cache entries on other CPUs. The solution implemented in this patch
performs a Read or Write For Ownership in the ARMv6 DMA cache
maintenance functions. These LDR/STR instructions change the cache line
state to shared or exclusive so that the cache maintenance operation has
the desired effect.
Tested-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 7959722 introduced calls to copy_(to|from)_user_page() from
access_process_vm() in mm/nommu.c. The copy_to_user_page() was not
implemented on noMMU ARM.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 31aa8fd6 introduced the __arm_ioremap_caller() function but the
nommu.c version did not have the _caller suffix.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The show_mem() and mem_init() functions assume that the page map is
contiguous and calculate the start and end page of a bank using (map +
pfn). This fails with SPARSEMEM, where pfn_to_page() must be used.
Tested-by: Will Deacon <Will.Deacon@arm.com>
Tested-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Handle incorrectly reported permission faults for qsd8650. On
permission faults, retry the MVA-to-PA conversion. If the retry
detects a translation fault, report it as a translation fault.
Cc: Jamie Lokier <jamie@shareable.org>
Signed-off-by: Dave Estes <cestes@quicinc.com>
Fix a compiler error in copypage-fa.c:
struct vm_area_struct *vma was missing from the function
fa_copy_user_highpage()
Signed-off-by: Hans Ulli Kroll <ulli.kroll@googlemail.com>
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'
This is caused because:
.section .data
.section .text
.section .text
.previous
does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.
Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This enables the l2x0 support and ensures that the secondary
CPU can see the page table and secondary data at this point.
Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This cache flush occurs when we first insert a page into the page
tables, where a page did not exist previously. There can be no
cache lines associated with this virtual mapping, so this cache
flush is redundant.
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Mikael Pettersson <mikpe at it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When a crash happens in interrupt context there is no userspace context.
We always use current->active_mm in those cases.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The VIVT cache of a highmem page is always flushed before the page
is unmapped. This cache flush is explicit through flush_cache_kmaps()
in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
kunmap_atomic(). There is also an implicit flush of those highmem pages
that were part of a process that just terminated making those pages free
as the whole VIVT cache has to be flushed on every task switch. Hence
unmapped highmem pages need no cache maintenance in that case.
However unmapped pages may still be cached with a VIPT cache because the
cache is tagged with physical addresses. There is no need for a whole
cache flush during task switching for that reason, and despite the
explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
some highmem pages that were mapped in user space end up still cached
even when they become unmapped.
So, we do have to perform cache maintenance on those unmapped highmem
pages in the context of DMA when using a VIPT cache. Unfortunately,
it is not possible to perform that cache maintenance using physical
addresses as all the L1 cache maintenance coprocessor functions accept
virtual addresses only. Therefore we have no choice but to set up a
temporary virtual mapping for that purpose.
And of course the explicit cache flushing when unmapping a highmem page
on a system with a VIPT cache now can go, which should increase
performance.
While at it, because the code in __flush_dcache_page() has to be modified
anyway, let's also make sure the mapped highmem pages are pinned with
kmap_high_get() for the duration of the cache maintenance operation.
Because kunmap() does unmap highmem pages lazily, it was reported by
Gary King <GKing@nvidia.com> that those pages ended up being unmapped
during cache maintenance on SMP causing segmentation faults.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Write combining/cached device mappings are not setting the shared bit,
which could potentially cause problems on SMP systems since the cache
lines won't participate in the cache coherency protocol.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
wildly available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
The mandatory barriers (mb, rmb, wmb) are used even on uniprocessor
systems for things like ordering Normal Non-cacheable memory accesses
with DMA transfer (via Device memory writes). The current implementation
uses dmb() for mb() and friends but this is not sufficient. The DMB only
ensures the relative ordering of the observability of accesses by other
processors or devices acting as masters. In case of DMA transfers
started by writes to device memory, the relative ordering is not ensured
because accesses to slave ports of a device are not considered
observable by the DMB definition.
A DSB is required for the data to reach the main memory (even if mapped
as Normal Non-cacheable) before the device receives the notification to
begin the transfer. Furthermore, some L2 cache controllers (like L2x0 or
PL310) buffer stores to Normal Non-cacheable memory and this would need
to be drained with the outer_sync() function call.
The patch also allows platforms to define their own mandatory barriers
implementation by selecting CONFIG_ARCH_HAS_BARRIERS and providing a
mach/barriers.h file.
Note that the SMP barriers are unchanged (being DMBs as before) since
they are only guaranteed to work with Normal Cacheable memory.
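The resulting definitions presumably take a shape like this sketch:

#define mb()            do { dsb(); outer_sync(); } while (0)
#define rmb()           dsb()
#define wmb()           mb()

/* SMP barriers unchanged: Normal Cacheable memory only */
#define smp_mb()        dmb()
#define smp_rmb()       dmb()
#define smp_wmb()       dmb()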
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The L2x0 cache controllers need to explicitly drain their write buffer
even for Normal Noncacheable memory accesses.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch introduces the outer_cache_fns.sync function pointer together
with the OUTER_CACHE_SYNC config option that can be used to drain the
write buffer of the outer cache.
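A sketch of the new hook and its inline wrapper:

struct outer_cache_fns {
        void (*inv_range)(unsigned long, unsigned long);
        void (*clean_range)(unsigned long, unsigned long);
        void (*flush_range)(unsigned long, unsigned long);
#ifdef CONFIG_OUTER_CACHE_SYNC
        void (*sync)(void);
#endif
};

static inline void outer_sync(void)
{
#ifdef CONFIG_OUTER_CACHE_SYNC
        if (outer_cache.sync)
                outer_cache.sync();
#endif
}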
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add ARM_L1_CACHE_SHIFT_6 to arch/arm/Kconfig to allow CPUs with
L1 cache lines which are 64 bytes to indicate this without having to
alter the arch/arm/mm/Kconfig entry each time.
Update the mm Kconfig so that ARM_L1_CACHE_SHIFT default value
uses this and change OMAP3 and S5PC1XX to select ARM_L1_CACHE_SHIFT_6.
Acked-by: Ben Dooks <ben-linux@fluff.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
update_mmu_cache() is called with the page table for the faulted-in
page still mapped. We need to modify the PTE for this page to ensure
coherency with other shared mappings when multiple shared mappings
exist within a MM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies. We do this via make_coherent() by making the pages
uncacheable.
This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().
Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():
On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
to construct a pointer to the pte again. Passing a pte_t * is much
more elegant. Maybe we might even replace the pte argument with the
pte_t?
Ben Herrenschmidt would also like the pte pointer for PowerPC:
Passing the ptep in there is exactly what I want. I want that
-instead- of the PTE value, because I have issue on some ppc cases,
for I$/D$ coherency, where set_pte_at() may decide to mask out the
_PAGE_EXEC.
So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.
Includes a fix from Stephen Rothwell:
sparc: fix fallout from update_mmu_cache API change
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some glibc versions intentionally create lots of alignment faults in
their gconv code, which if not fixed up, results in segfaults during
boot. This can prevent systems booting properly.
There is no clear hard-configurable default for this; the desired
default depends on the nature of the userspace which is going to be
booted.
So, provide a way for the alignment fault handler to be configured via
the kernel command line.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Makes it consistent with VMALLOC_START
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Adds DMA area to 'virtual memory map' startup message
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Code based on parisc and x86_32.
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements the work-around for erratum 588369. The secure
API is used to alter the L2 debug register because of TrustZone.
This version is updated with comments from Russell and Catalin and
generated against the 2.6.33-rc6 mainline kernel. Detailed
comments can be found at:
http://www.spinics.net/lists/linux-omap/msg23431.html
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds L2 cache support for OMAP4. An external L2 cache
is used on OMAP4.
CC: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds the cache maintenance by line helper functions.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Otherwise the kernel built with both CPU_V6 and CPU_V7 will not
boot on omap2.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current ASID allocation algorithm doesn't ensure the notification
of the other CPUs when the ASID rolls over. This may lead to two
processes using the same ASID (but different generation) or multiple
threads of the same process using different ASIDs.
This patch adds the broadcasting of the ASID rollover event to the
other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
during the handling of the broadcast, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
to NR_CPUS.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM setup code includes its own parser for early params; there's
also one in the generic init code.
This patch removes __early_param (and related code) from
arch/arm/kernel/setup.c, and changes users to the generic early_param
macro instead.
The generic macro takes a char * argument, rather than char **, so we
need to update the parser functions a little.
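A hypothetical conversion ("foo" and foo_size are made-up names):

static unsigned long foo_size;

/* before: static void __init early_foo(char **p) registered with
 * __early_param("foo=", early_foo), parsing via memparse(*p, p) */
static int __init early_foo(char *p)
{
        foo_size = memparse(p, &p);
        return 0;
}
early_param("foo", early_foo);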
Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Always creating this directory avoids other users having to jump
through silly hoops when they want to share this directory.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows the procfs vmallocinfo file to show who created the ioremap
regions. Note: __builtin_return_address(0) doesn't do what's expected
if its used in an inline function, so we leave __arm_ioremap callers
in such places alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.
Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA, otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA Completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
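The direction-dependent policy, sketched with illustrative helper
names (not the actual ARM functions):

static void dma_cache_maint_map(const void *kaddr, size_t size,
                                enum dma_data_direction dir)
{
        if (dir == DMA_FROM_DEVICE)
                dcache_inv_area(kaddr, size);   /* kill writebacks too */
        else
                dcache_clean_area(kaddr, size); /* push data for device */
}

static void dma_cache_maint_unmap(const void *kaddr, size_t size,
                                  enum dma_data_direction dir)
{
        if (dir != DMA_TO_DEVICE)
                dcache_inv_area(kaddr, size);   /* drop speculative lines */
}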
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
These are now unused, and so can be removed.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
dma_cache_maint_contiguous is now simple enough to live inside
dma_cache_maint_page, so move it there.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
The perf events subsystem allows counting of both hardware and
software events. This patch implements the bare minimum for software
performance events.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We already know the pfn for the page to be modified in make_coherent,
so let's stop recalculating it unnecessarily.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
update_mmu_cache() is called with a page table already mapped. We
call make_coherent(), which then calls adjust_pte() which wants to
map other page tables. This causes kmap_atomic() to BUG() because
the slot it's trying to use is already taken.
Since do_adjust_pte() modifies the page tables, we are also missing
any form of locking, so we're risking corrupting the page tables.
Fix this by using pte_offset_map_nested(), and taking the pte page
table lock around do_adjust_pte().
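The fixed mapping-and-locking sequence, as a sketch:

pte_t *pte;
spinlock_t *ptl;
int ret;

/* the nested variant uses a second kmap slot, avoiding the one
 * already taken by our caller's mapped page table */
pte = pte_offset_map_nested(pmd, address);
ptl = pte_lockptr(vma->vm_mm, pmd);
spin_lock(ptl);

ret = do_adjust_pte(vma, address, pfn, pte);

spin_unlock(ptl);
pte_unmap_nested(pte);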
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>