Patch from Andrew Victor
This prepares the way for adding support for the new Atmel AT91SAM926x
processors.
Major changes:
- Rename time.c to at91rm9200_time.c
- Rename common.c to at91rm9200.c
- Introduce ARCH_AT91, on which ARCH_AT91RM9200, ARCH_AT91SAM9260 and
ARCH_AT91SAM9261 depend.
Signed-off-by: Andrew Victor <andrew@sanpeople.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Most MMU-based CPUs have a restriction on the setting of the data cache
enable and mmu enable bits in the control register, whereby if the data
cache is enabled, the MMU must also be enabled. Enabling the data
cache without the MMU is an invalid combination.
However, there are CPUs where the data cache can be enabled without the
MMU.
In order to allow these CPUs to take advantage of that, provide a
method whereby each proc-*.S file defines the control register value
for use with nommu (with the MMU disabled). Later on, when we add
support for enabling the MMU on these devices, we can adjust the
"crval" macro to also enable the data cache for nommu.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The "id(wb)BRR" suffix reports which CPU debugging options were (or
were not) selected at kernel build time. Rather than have every
proc-*.S file implement this, report the control register value,
from which this information can be deduced.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In nommu mode, several of the functions defined in mm/proc-*.S are
either invalid or need to be avoided. For example, switch_mm is not
needed and can simply return; this keeps the I & D caches valid, which
gives a significant performance improvement for task switching and IPC.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is a trivial patch to export flush_dcache_page in mm/nommu.c.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Remove fault-armv.o, mmap.o and mm-armv.o from uclinux builds - these
are concerned with MMU-ful operations, and as such are redundant for
uclinux.
Since this also removes iotable_init(), which is used extensively in
the platform support files, just make it a no-op.
Based upon a couple of patches by Hyok.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
TABLE_SIZE is never used in arch/arm/mm/init.c. create_memmap_holes(),
memtable_init(), and setup_io_desc() no longer exist in the kernel.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
nommu doesn't require a complex flush_dcache_page implementation
like the MMU-ful CPUs do, so provide a simplified version in nommu.c
and omit flush.c from the build as appropriate.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
nommu doesn't have any form of remapping support, so ioremap, etc
become stubs which just return the cast address, doing nothing
else.
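As a rough illustration (a minimal sketch, not the actual nommu code;
the argument list mirrors the current __ioremap prototype):

    void __iomem *__ioremap(unsigned long phys_addr, size_t size,
                            unsigned long flags)
    {
            /* no MMU, hence no remapping: the cookie is the physical address */
            return (void __iomem *)phys_addr;
    }

    void __iounmap(void __iomem *addr)
    {
            /* nothing was mapped, so there is nothing to undo */
    }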
Move ioport_map(), ioport_unmap(), pci_iomap(), pci_iounmap()
into a separate file which is always built.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since uclinux doesn't make use of the TLB, including the TLB
maintenance and CPU-optimised copypage functions does not
make sense. Remove them.
(This is part of one of Hyok's patches.)
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since there can be no fixed location for the TLS value with nommu
systems, we must provide TLS register emulation in order to support
TLS binaries on CPUs without the thread register.
Part of a patch from Hyok S. Choi, and cleaned up by rmk.
Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
proc-v6 contains some compatibility code to be able to use the V6
"cps" instruction. However, the kernel already makes extensive use
of this instruction elsewhere, so there's no point keeping this
compatibility code any more.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Ben Dooks
Select CONFIG_CPU_ARM926 when CONFIG_CPU_S3C2412 is
selected.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Sascha Hauer
This patch adds the base support for Hilscher's netX network
processors.
Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Vitaly Wool
This patch adds basic chip support for PNX4008 ARM platform.
It's basically the same as the previous one, but with rmk's
comments taken into account.
Signed-off-by: Vitaly Wool <vwool@ru.mvista.com>
Signed-off-by: Dmitry Pervushin <dpervushin@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM Architecture Reference Manual lists bit 4 of the PMD as "implementation
defined" and it must be set to zero on Intel XScale CPUs or the cache does
not behave properly. Found by Mike Rapoport while debugging a flash issue
on the PXA255:
http://marc.10east.com/?l=linux-arm-kernel&m=114845287600782&w=1
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We don't enable the BTB on the ixp2350 as that can cause weird
crashes (erratum #42.) However, some bootloaders enable the BTB,
which means that we have to disable the BTB explicitly.
Found thanks to Tom Rini.
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Catalin Marinas
This patch modifies the __ioremap_pfn and __iounmap functions in
arch/arm/mm/ioremap.c to use vunmap instead of vfree.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We had two implementations for flushing the cache, which meant StrongARM
caches weren't being correctly flushed. Fix this by always using the
v4wb_flush_kern_cache_all method, rather than duplicating it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
FLUSH_BASE must be visible to arch/arm/mm/init.c in order for the
memory region to be setup. Move these definitions from
asm-arm/arch-*/hardware.h into asm-arm/arch-*/memory.h where mm
stuff can see them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Lennert Buytenhek
This patch adds support for the I/O coherent cache available on the
xsc3. The approach is to provide a simple API to determine whether the
chipset supports coherency by calling arch_is_coherent() and then
setting the appropriate system memory PTE and PMD bits. In addition,
we call this API on dma_alloc_coherent() and dma_map_single() calls.
A generic version exists that will compile out all the coherency-related
code that is not needed on the majority of ARM systems.
Note that we do not check for coherency in the dma_alloc_writecombine()
function as that still requires a special PTE setting. We also don't
touch dma_mmap_coherent() as that is a special ARM-only API that is by
definition only used on non-coherent systems.
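As a sketch of the shape this takes (simplified; arch_is_coherent() is
the helper named above, everything else here is illustrative):

    #ifndef arch_is_coherent
    #define arch_is_coherent()      0       /* generic default: not coherent */
    #endif

    dma_addr_t dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                              enum dma_data_direction dir)
    {
            /* coherent chipsets can skip the cache maintenance entirely */
            if (!arch_is_coherent())
                    consistent_sync(cpu_addr, size, dir);
            return virt_to_dma(dev, cpu_addr);      /* bus address of the buffer */
    }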
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Lennert Buytenhek
Adapt xsc3 to the changes in 74945c8616
(xsc3 was written before but merged after the latter went in.)
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Lennert Buytenhek
This patch adds support for the new XScale v3 core. This is an
ARMv5 ISA core with the following additions:
- L2 cache
- I/O coherency support (on select chipsets)
- Low-Locality Reference cache attributes (replaces mini-cache)
- Supersections (v6 compatible)
- 36-bit addressing (v6 compatible)
- Single instruction cache line clean/invalidate
- LRU cache replacement (vs round-robin)
I attempted to merge the XSC3 support into proc-xscale.S, but XSC3
cores have separate errata and have to handle things like L2, so it
is simpler to keep it separate.
L2 cache support is currently a build option because the L2 enable
bit must be set before we enable the MMU and there is no easy way to
capture command line parameters at this point.
There are still optimizations that can be done such as using LLR for
copypage (in theory using the existing mini-cache code) but those
can be addressed down the road.
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
set_page_count usage outside mm/ is limited to setting the refcount to 1.
Remove set_page_count from outside mm/, and replace those users with
init_page_count() and set_page_refcounted().
This allows more debug checking, and tighter control on how code is allowed
to play around with page->_count.
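For the callers outside mm/ the conversion is mechanical; a sketch of
the typical before/after (init_page_count() simply sets the refcount
to 1):

    /* before */
    set_page_count(virt_to_page(addr), 1);

    /* after */
    init_page_count(virt_to_page(addr));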
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Have an explicit mm call to split higher order pages into individual pages.
Should help to avoid bugs and be more explicit about the code's intention.
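A sketch of the intended usage, modelled on the ARM DMA allocator
(`size' here stands for the number of bytes actually needed and is
illustrative):

    struct page *page = alloc_pages(gfp, order);

    if (page) {
            struct page *p, *end = page + (1 << order);

            split_page(page, order);        /* each page now has count 1 */

            /* hand back the tail pages beyond the requested size */
            for (p = page + ((size + PAGE_SIZE - 1) >> PAGE_SHIFT); p < end; p++)
                    __free_page(p);
    }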
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Lennert Buytenhek
This patch adds support for the Cirrus ep93xx series of CPUs. The
ep93xx is an ARM920T based CPU with two VICs, PL010 based UARTs,
IrDA, MaverickCrunch floating point coprocessor, between 24 and 64
GPIOs, ethernet, OHCI USB and, depending on the model, pcmcia, raster
engine, graphics accelerator, IDE controller and a bunch of other
stuff.
This patch adds the core ep93xx support code, and support for the
Glomation GESBC-9312-sx and the Technologic Systems TS-72xx SBCs.
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
asm/hardware.h is not required for the majority of processor support
files, ioremap support, mm initialisation, acorn IO support, nor
the debug code (which picks up its machine specific includes via
debug-macros.S)
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the hardware PMD and PTE page table definitions from pgtable.h
into pgtable-hwdef.h, and include pgtable-hwdef.h as necessary.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
ARM1136 erratum 371025 (category 2) specifies that, under rare
conditions, an invalidate I-cache by MVA (line or range) operation can
fail to invalidate a cache line. The recommended workaround is to
either invalidate the entire I-cache or invalidate the range by
set/way rather than MVA.
Note that for a 16K cache size, invalidating a 4K page by set/way is
equivalent to invalidating the entire I-cache.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
Chapter B2.7.3 in the latest ARM ARM (with v6 information) states that
the completion of a TLB maintenance operation is only guaranteed by
the execution of a DSB (Data Synchronization Barrier, formerly Data
Write Barrier or Drain Write Buffer).
Note that a DSB is only needed in the flush_tlb_kernel_* functions
since the completion is guaranteed by a mode change (i.e. switching
back to user mode) for the flush_tlb_user_* functions.
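For reference, the barrier in question can be expressed from C as a
one-line inline-assembly helper (a sketch; the kernel issues it from
the tlbflush macros rather than via a function like this):

    static inline void dsb(void)
    {
            /* ARMv6 Data Synchronization Barrier: CP15 c7, c10, 4 */
            asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r" (0) : "memory");
    }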
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Doing so adds a much larger cost to the loop than the cost implied by
simply invalidating the whole BTB at once.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
The mini I-cache issue is valid only for kernel space since debuggers
would not fly if they used user space addresses for their stubs.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from George G. Davis
This Freescale Semiconductor, Inc. contributed patch adds mem_types[]
support for ARMv6 non-shared device memory region attributes. This
implementation provides support for only first level section mapped
non-shared devices. Second level non-shared device mappings are not
yet supported.
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/mm/ioremap.c:145: warning: passing argument 1 of 'vfree' makes pointer from integer without a cast
resulted from commit id 9d4ae7276a
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Kevin Hilman
This patch increases the available DMA-consistent memory allocated by dma_coherent_alloc(). The default remains at 2M (defined in asm/memory.h) and each platform has the ability to override it in asm/arch-foo/memory.h.
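A sketch of the override mechanism (assuming the macro is spelled
CONSISTENT_DMA_SIZE; the platform header and 8MB value below are
hypothetical examples):

    /* include/asm-arm/memory.h: the default when the platform says nothing */
    #ifndef CONSISTENT_DMA_SIZE
    #define CONSISTENT_DMA_SIZE     SZ_2M
    #endif

    /* include/asm-arm/arch-foo/memory.h: a platform wanting a larger pool */
    #define CONSISTENT_DMA_SIZE     SZ_8M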
Signed-off-by: Kevin Hilman <kevin@hilman.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
In working on adding 36-bit addressed supersection support to ioremap(),
I came to the conclusion that it would be far simpler to do so by just
splitting __ioremap() into a main external interface and adding an
__ioremap_pfn() function that takes a pfn + offset into the page that
__ioremap() can call. This way existing callers of __ioremap() won't have
to change their code and 36-bit systems will just call __ioremap_pfn()
and we will not have to deal with unsigned long long variables.
Note that __ioremap_pfn() should _NOT_ be called directly by drivers
but is reserved for use by arch_ioremap() implementations that map
32-bit resource regions into the real 36-bit address and then call
this new function.
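Roughly, the split looks like this (signatures paraphrased from the
description, not copied from the diff):

    void __iomem *__ioremap_pfn(unsigned long pfn, unsigned long offset,
                                size_t size, unsigned long flags);

    void __iomem *__ioremap(unsigned long phys_addr, size_t size,
                            unsigned long flags)
    {
            /* 32-bit callers are unchanged; pfn + in-page offset carry the
             * (possibly 36-bit) physical address without any u64 */
            return __ioremap_pfn(phys_addr >> PAGE_SHIFT,
                                 phys_addr & ~PAGE_MASK, size, flags);
    }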
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from SAN People
The following changes were made to clock.c:
1) Replaced <asm/hardware/clock.h> with <linux/clk.h>
2) Removed old unused clk_enable & clk_disable.
3) Replaced clk_use/clk_unuse with clk_enable/clk_disable.
Otherwise it's the same as the previous patch.
Signed-off-by: Andrew Victor <andrew@sanpeople.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
EPXA10DB seems to be uncared for:
- the "PLD" code has never been merged
- no one has reported that this platform has been broken since
at least 2.6.10
- interest seems to have dried up around March 2003.
Therefore, remove EPXA10DB support.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/kernel/entry-armv.S has contained a comment suggesting
that asm/hardware.h and asm/arch/irqs.h should be moved into the
asm/arch/entry-macro.S include. So move the includes to these
two files as required.
Add missing includes (asm/hardware.h, asm/io.h) to asm/arch/system.h
includes which use those facilities, and remove asm/io.h from
kernel/process.c.
Remove other unnecessary includes from arch/arm/kernel, arch/arm/mm
and arch/arm/mach-footbridge.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Lazy flush_dcache_page() causes userspace instability on SMP
platforms, so disable it for now.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We must not call TLB maintenance operations with interrupts disabled,
otherwise we risk a lockup in the SMP IPI code.
This means that consistent_free() can not be called from a context with
IRQs disabled. In addition, we must not hold the lock in consistent_free
when we call flush_tlb_kernel_range(). However, we must continue to
prevent consistent_alloc() from re-using the memory region until we've
finished tearing down the mapping and dealing with the TLB.
Therefore, leave the vm_region entry in the list, but mark it inactive
before dropping the lock and starting the tear-down process. After the
mapping has been torn down, re-acquire the lock and remove the entry
from the list.
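In outline, the new ordering is (a sketch; vm_active is the flag added
here, while unmap_and_flush() merely summarises the tear-down step and
is not a real helper):

    spin_lock_irqsave(&consistent_lock, flags);
    c = vm_region_find(&consistent_head, (unsigned long)cpu_addr);
    c->vm_active = 0;                       /* keep the entry, block reuse */
    spin_unlock_irqrestore(&consistent_lock, flags);

    /* tear down the ptes and flush the TLB with IRQs enabled */
    unmap_and_flush(c);

    spin_lock_irqsave(&consistent_lock, flags);
    list_del(&c->vm_list);                  /* only now remove the entry */
    spin_unlock_irqrestore(&consistent_lock, flags);
    kfree(c);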
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Daniel Jacobowitz
After delivering a signal (creating its stack frame) we must check for
additional pending unblocked signals before returning to userspace.
Otherwise signals may be delayed past the next syscall or reschedule.
Once that was fixed it became obvious that the ARM signal mask manipulation
was broken. It was a little bit broken before the recent SA_NODEFER
changes, and then very broken after them. We must block the requested
signals before starting the handler or the same signal can be delivered
again before the handler even gets a chance to run.
Signed-off-by: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Document that the VMALLOC_END address must be aligned to 2MB since
it must align with a PGD boundary.
Allocate the vectors page early so that the flush_cache_all() later
will ensure that any dirty cache lines in the direct mapping are
safely written back.
Move the flush_cache_all() to the second local_flush_cache_tlb() and
remove the now redundant first local_flush_cache_tlb().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The "align" argument in ARMs __ioremap is unused and provides a
misleading expectation that it might do something. It doesn't.
Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Tony Lindgren
This patch adds support for omap24xx series of processors.
The files live in arch/arm/mach-omap2, and share common
files with omap15xx and omap16xx processors in
arch/arm/plat-omap.
Omap24xx support was originally added for 2.6.9 by TI.
This code was then improved and integrated to share common
code with omap15xx and omap16xx processors by various
omap developers, such as Paul Mundt, Juha Yrjola, Imre Deak,
Tony Lindgren, Richard Woodruff, Nishant Menon, Komal Shah
et al.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Tony Lindgren
This patch syncs the mainline kernel with linux-omap tree.
The highlights of the patch are:
- Omap1 serial port and framebuffer init updates by Imre Deak
- Add support for omap310 processor and Palm Tungsten E PDA
by Laurent Gonzales, Romain Goyet, et al. Omap310 and
omap1510 processors are now handled as omap15xx.
- Omap1 specific changes to shared omap clock framework
by Tony Lindgren
- Omap1 specific changes to shared omap pin mux framework
by Tony Lindgren
- Other misc fixes, such as update memory timings for smc91x,
omap1 specific device initialization etc.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We need to set the shared memory attribute in the page tables
on SMP systems to allow the cache coherency to operate.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Using a llx format to print addresses that might possibly be (only) 36
bits wide makes sense. However making it a zero-padded 16-char wide
field is a bit excessive and useless.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 'K' extension adds several new instructions to the ARMv6 ISA
which are primarily useful for SMP.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It seems that without the extra tlb flush, we may end up faulting
during the early kernel initialisation because the TLB can't see
the updated page tables.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We know what pgprot we're going to use, so don't #define it. Also,
since we select the nonaliasing/aliasing copypage implementation at
run time, there's no point having it globally visible.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
a many-threaded application which concurrently initializes different parts of
a large anonymous area.
This patch corrects that, by using a separate spinlock per page table page, to
guard the page table entries in that page, instead of using the mm's single
page_table_lock. (But even then, page_table_lock is still used to guard page
table allocation, and anon_vma allocation.)
In this implementation, the spinlock is tucked inside the struct page of the
page table page: with a BUILD_BUG_ON in case it overflows - which it would in
the case of 32-bit PA-RISC with spinlock debugging enabled.
Splitting the lock is not quite for free: another cacheline access. Ideally,
I suppose we would use split ptlock only for multi-threaded processes on
multi-cpu machines; but deciding that dynamically would have its own costs.
So for now enable it by config, at some number of cpus - since the Kconfig
language doesn't support inequalities, let preprocessor compare that with
NR_CPUS. But I don't think it's worth being user-configurable: for good
testing of both split and unsplit configs, split now at 4 cpus, and perhaps
change that to 8 later.
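A sketch of the resulting compile-time choice (macro bodies
simplified; the spinlock field in struct page is assumed to be
called ptl here):

    #if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
    /* the lock lives in the struct page of the page table page */
    #define pte_lockptr(mm, pmd)    (&pmd_page(*(pmd))->ptl)
    #else
    /* otherwise keep using the single per-mm lock */
    #define pte_lockptr(mm, pmd)    (&(mm)->page_table_lock)
    #endif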
There is a benefit even for singly threaded processes: kswapd can be attacking
one part of the mm while another part is busy faulting.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Prepare arm for the split page_table_lock: three issues.
Signal handling's preserve and restore of iwmmxt context currently involves
reading and writing that context to and from user space, while holding
page_table_lock to secure the user page(s) against kswapd. If we split the
lock, then the structure might span two pages secured by different locks.
So instead read into and write from a kernel stack buffer, copying that
out and in without locking (the
structure is 160 bytes in size, and here we're near the top of the kernel
stack). Or would the overhead be noticeable?
arm_syscall's cmpxchg emulation uses pte_offset_map_lock, instead of
pte_offset_map and mm-wide page_table_lock; and strictly, it should now also
take mmap_sem before descending to pmd, to guard against another thread
munmapping, and the page table pulled out beneath this thread.
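For the cmpxchg emulation the change is essentially the following
shape (a simplified sketch; error paths omitted):

    pgd_t *pgd;
    pmd_t *pmd;
    pte_t *pte;
    spinlock_t *ptl;

    pgd = pgd_offset(mm, addr);
    pmd = pmd_offset(pgd, addr);    /* ARM's two-level fold takes the pgd */
    pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
    if (pte_present(*pte) && pte_dirty(*pte)) {
            /* ... perform the emulated compare-and-swap on the user word ... */
    }
    pte_unmap_unlock(pte, ptl);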
Updated two comments in fault-armv.c. adjust_pte is interesting, since its
modification of a pte in one part of the mm depends on the lock held when
calling update_mmu_cache for a pte in some other part of that mm. This can't
be done with a split page_table_lock (and we've already taken the lowest lock
in the hierarchy here): so we'll have to disable split on arm, unless
CONFIG_CPU_CACHE_VIPT ensures that adjust_pte is never used.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Convert those few architectures which are calling pud_alloc, pmd_alloc,
pte_alloc_map on a user mm, not to take the page_table_lock first, nor drop it
after. Each of these can continue to use pte_alloc_map, no need to change
over to pte_alloc_map_lock, they're neither racy nor swappable.
In the sparc64 io_remap_pfn_range, flush_tlb_range then falls outside of the
page_table_lock: that's okay, on sparc64 it's like flush_tlb_mm, and that has
always been called from outside of page_table_lock in dup_mmap.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
First step in pushing down the page_table_lock. init_mm.page_table_lock has
been used throughout the architectures (usually for ioremap): not to serialize
kernel address space allocation (that's usually vmlist_lock), but because
pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
and drop it when allocating a new one, to check lest a racing task already
did. Similarly no page_table_lock in vmalloc's map_vm_area.
Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
user mms, which are converted only by a later patch, for now they have to lock
differently according to whether or not it's init_mm.
If sources get muddled, there's a danger that an arch source taking
init_mm.page_table_lock will be mixed with common source also taking it (or
neither take it). So break the rules and make another change, which should
break the build for such a mismatch: remove the redundant mm arg from
pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
took page_table_lock for no good reason.
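The build-breaking part of the interface change boils down to this
before/after (sketch):

    /* before: caller locked init_mm.page_table_lock and passed the mm */
    spin_lock(&init_mm.page_table_lock);
    pte = pte_alloc_kernel(&init_mm, pmd, addr);
    /* ... populate the pte ... */
    spin_unlock(&init_mm.page_table_lock);

    /* after: no mm argument; the allocator locks internally when needed */
    pte = pte_alloc_kernel(pmd, addr);
    /* ... populate the pte ... */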
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Nicolas Pitre
Fix XIP support after recent bootmem code refactoring.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
This patch adds support for 36-bit static mapped I/O. While there
are no platforms in the tree ATM that use it, it has been tested
on the IXP2350 NPU and I would like to get the support for
that chipset upstream one piece at a time. There are also other
Intel chipset ports in development that are waiting on this to go
upstream.
The patch replaces the print formats for physical addresses with
%016llx, which will create a bit of extraneous output on 32-bit systems,
but I think that is cleaner than having #ifdefs, especially since
users will only see the output in error cases.
Depends on 3016/1.
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
Convert map_desc.physical to map_desc.pfn. This allows us to add
support for 36-bit addressed physical devices in the static maps
without having to resort to u64 variables.
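The converted structure and a hypothetical platform entry look
roughly like this (the FOO_* names are made up for illustration):

    struct map_desc {
            unsigned long virtual;
            unsigned long pfn;              /* was: unsigned long physical */
            unsigned long length;
            unsigned int type;
    };

    #define __phys_to_pfn(paddr)    ((paddr) >> PAGE_SHIFT)

    static struct map_desc foo_io_desc[] __initdata = {
            { FOO_IO_VIRT, __phys_to_pfn(FOO_IO_PHYS), FOO_IO_SIZE, MT_DEVICE },
    };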
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make ARM independent of the way bootmem operates internally. We
now map each node as we initialise it, and place the bootmem bitmap
inside each node, rather than all in the first node.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix sparse warnings in arch/arm/kernel/module.c,
arch/arm/mm/consistent.c, drivers/pcmcia/sa1111_generic.c,
and platform support files.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Tony Lindgren
Machine restart calls cpu_proc_fin() to clean and disable
cache, and turn off interrupts. This patch adds a proper
cpu_v6_proc_fin.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from George G. Davis
Fix leading, trailing and other miscellaneous whitespace issues
in arch/arm/kernel/alignment.c.
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from George G. Davis
Add test for invalid LDRD/STRD Rd cases in ARM alignment handler
and restore SWP printk KERN_ERR.
Signed-off-by: Steve Longerbeam <slongerbeam@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
There is no reason to not allow these config options. They are useful when
the hardware has problems.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
Data abort caused by ldrex/strex can leave the exclusive monitor in an
unpredictable state. It is recommended that a clrex/strex is performed to
clear this state.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Gen FUKATSU
The "invalidate BTB entry" instruction flushes two instructions
at a time. Therefore this instruction should be executed four
times after invalidating an instruction cache line.
Signed-off-by: Gen Fukatsu
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
When CONFIG_CPU_CACHE_VIPT is defined, the flush_pfn_alias() function is
implicitly declared and it later conflicts with its actual definition.
This patch moves the function definition to the beginning of the file.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As per x86, we may deadlock while trying to get the mmap semaphore.
Implement the same fix, which allows (eg) recursive faults to cause
an oops instead of deadlocking.
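The pattern being borrowed is, in outline (a sketch based on the x86
code rather than the exact ARM hunk):

    if (!down_read_trylock(&mm->mmap_sem)) {
            /*
             * A fault from kernel mode with no exception-table fixup
             * means we may already hold mmap_sem: oops via no_context
             * rather than deadlocking here.
             */
            if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
                    goto no_context;
            down_read(&mm->mmap_sem);
    }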
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Ben Dooks
`make buildcheck` erroneously reports that the .proc.info list is
referencing items in the .init section, because the list is not itself
suffixed with .init.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds the necessary changes to ensure that we flush the
caches correctly with aliasing VIPT caches.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Timothy Baldwin
All data aborts were treated as read accesses: the existing code updates the wrong bit of r1, and the comments are also wrong in that the sense of the L bit is inverted.
Signed-off-by: Timothy E. Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We weren't explicitly setting the page table bits we desired
in user_prot in the protection table, which resulted in the
user mappings for v6 CPUs being marked global.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
No point checking what CPU architecture level we have each time
within the loop, so precompute the base PMD flags outside the
loop.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Steve Longerbeam
Adds an implementation of unaligned LDRD and STRD fixups.
Also fixes a bug where do_alignment() would misinterpret and
fixup an unaligned LDRD/STRD as LDRH/STRH, causing memory
corruption.
This is the same as Patch #2867/1, but with minor whitespace
and comments changes, plus a check for arch-level >= v5TE
before printing ai_dword count in proc_alignment_read().
Signed-off-by: Steve Longerbeam <stevel@mwwireless.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
While working on adding support for 36-bit static mappings for ARMv6 and
Intel's XSC3 core, I noticed that alloc_init_supersection currently
increments the phys addr by 1MB on each of the 16 iterations and then
forces alignment to supersection size (16MB). This is really unneeded
because we have already forced the phys address to be 16MB aligned in
create_mapping(). Furthermore, this breaks 36-bit addressing because bits
[23:20] of the PMD contain bits [35:32] of the physical address and
the masking causes us to lose those bits, thus ending up with an
incorrect virt -> phys translation. The other option is to have an
alloc_init_supersection36.
Tested on Intel IXP2350 CPU with 36-bit static I/O mappings.
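For reference, the supersection encoding being preserved is (a sketch
of the address fields only; attribute bits omitted, and phys is
assumed to be a 64-bit value):

    u64 phys;                       /* up to 36-bit physical address */
    unsigned long pmdval;

    pmdval  = phys & SUPERSECTION_MASK;             /* PA[31:24]               */
    pmdval |= ((phys >> 32) & 0xf) << 20;           /* PA[35:32] -> PMD[23:20] */
    pmdval |= PMD_TYPE_SECT | PMD_SECT_SUPER;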
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Sean Lee
In the arch/arm/mm/Kconfig file, the CPU_DCACHE_WRITETHROUGH
option depends on CPU_DISABLE_DCACHE, but the "Disable
D-Cache" option is actually named CPU_DCACHE_DISABLE.
The reference to CPU_DISABLE_DCACHE should be CPU_DCACHE_DISABLE.
Signed-off-by: Sean Lee <beginner2arm@eyou.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Unfortunately, we can't use the "user" bit in the page tables to
control whether a page table entry is "global" or "asid" specific,
since the vector page is mapped as "user" accessible but is not
process specific.
Therefore, give direct control of the ARMv6 "nG" (not global)
bit to the mm layers.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM fault handler is optimised to make the fast path, err, fast.
The renumbering of the VM_FAULT_* codes broke this because numbers
were used instead of the definitions. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
The XScale locking code is not something that has been validated
on 2.6 and needs to be replaced with a more generic API to use
with other ARMs that support locking features.
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 introduces memory types into the page tables. Mark devices
mappings with the "shared device" memory type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Tony Lindgren
This patch by Paul Mundt and other OMAP developers modifies
ARM specific Kconfig to allow sharing code between OMAP1 and
OMAP2 architectures.
In order to share code between OMAP1 and OMAP2, all OMAP1
specific code is moved into mach-omap1 directory in the
following patch. A new mach-omap2 directory will be added
later on.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Deepak Saxena
The code in mm-armv.c checks for the condition (cpu_architecture() <= ARMv5)
in a few places but should be checking for ARMv5TEJ as the MMU is shared
across all v5 variations.
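In other words (a sketch; CPU_ARCH_ARMv5TEJ is assumed to be the
relevant asm/system.h constant):

    /* before: only plain v5 was caught, v5TE/v5TEJ fell through */
    if (cpu_architecture() <= CPU_ARCH_ARMv5)
            /* apply the pre-v6 page table settings */;

    /* after: every v5 variant shares the same MMU, so catch them all */
    if (cpu_architecture() <= CPU_ARCH_ARMv5TEJ)
            /* apply the pre-v6 page table settings */;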
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
The VFP instructions trigger undefined exceptions because the access to
CP11 is disabled (only CP10 is currently enabled by the kernel). The patch
fixes this problem.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
The range for the ARMv6 block cache operations is inclusive but the
kernel doesn't re-calculate the end address, causing a page fault when
used (this only happens with support for cache aliasing, otherwise the
blk_flush_kern_dcache_page() is not called). This patch subtracts
L1_CACHE_BYTES from the end address.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
This patch fixes the V bit setting for the ARM1020x processors. At
reset, this bit is automatically set to the value of the HIVECSINIT
input signal which just happened to be 1 but it is not mandatory.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
This patch fixes a broken comment in the proc-arm1020.S file which
prevents the file from compiling.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If we receive an unrecognised abort during boot, don't try to
send a signal to pid0, but instead report the current state.
This leads to less confusing debug reports.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It doesn't make sense for this to be in mm-armv.c now that 26-bit
ARM support is no longer integrated into arch/arm.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It doesn't make sense to have the PGD kernel pointers initialisation
separate from the PGD user pointers, especially when we clean the
data cache over the whole range.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Catalin Marinas
cpu_v6_set_pte() sets the kernel access rights to r/o for user
pages (L_PTE_USER) when neither L_PTE_WRITE nor L_PTE_DIRTY are
set. This causes a kernel data abort when writing the TLS value
in the 0xffff0000 page. This patch enables the kernel r/w access.
Signed-off-by: Catalin Marinas
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since meminfo.bank[] array contains page-aligned start/size, we
no longer need to explicitly round up/down the addresses when
converting to PFNs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ingo recently introduced a great speedup for allocating new mmaps using the
free_area_cache pointer which boosts the specweb SSL benchmark by 4-5% and
causes huge performance increases in thread creation.
The downside of this patch is that it does lead to fragmentation in the
mmap-ed areas (visible via /proc/self/maps), such that some applications
that work fine under 2.4 kernels quickly run out of memory on any 2.6
kernel.
The problem is twofold:
1) the free_area_cache is used to continue a search for memory where
the last search ended. Before the change new areas were always
searched from the base address on.
So now new small areas are cluttering holes of all sizes
throughout the whole mmap-able region whereas before small holes
tended to close holes near the base leaving holes far from the base
large and available for larger requests.
2) the free_area_cache also is set to the location of the last
munmap-ed area so in scenarios where we allocate e.g. five regions of
1K each, then free regions 4 2 3 in this order the next request for 1K
will be placed in the position of the old region 3, whereas before we
appended it to the still active region 1, placing it at the location
of the old region 2. Before we had 1 free region of 2K, now we only
get two free regions of 1K -> fragmentation.
The patch addresses these issues by introducing yet another cache descriptor
cached_hole_size that contains the largest known hole size below the
current free_area_cache. If a new request comes in the size is compared
against the cached_hole_size and if the request can be filled with a hole
below free_area_cache the search is started from the base instead.
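The core of the modified search then looks like this (a sketch close
to the generic arch_get_unmapped_area path):

    if (len > mm->cached_hole_size) {
            /* no known hole below the cache is large enough: resume */
            start_addr = addr = mm->free_area_cache;
    } else {
            /* a large-enough hole exists below: restart from the base */
            start_addr = addr = TASK_UNMAPPED_BASE;
            mm->cached_hole_size = 0;
    }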
The results look promising: Whereas 2.6.12-rc4 fragments quickly and my
(earlier posted) leakme.c test program terminates after 50000+ iterations
with 96 distinct and fragmented maps in /proc/self/maps it performs nicely
(as expected) with thread creation, Ingo's test_str02 with 20000 threads
requires 0.7s system time.
Taking out Ingo's patch (un-patch available per request) by basically
deleting all mentions of free_area_cache from the kernel and starting the
search for new memory always at the respective bases we observe: leakme
terminates successfully with 11 distinctive hardly fragmented areas in
/proc/self/maps but thread creation is grindingly slow: 30+s(!) system
time for Ingo's test_str02 with 20000 threads.
Now - drumroll ;-) the appended patch works fine with leakme: it ends with
only 7 distinct areas in /proc/self/maps and also thread creation seems
sufficiently fast with 0.71s for 20000 threads.
Signed-off-by: Wolfgang Wander <wwc@rentec.com>
Credit-to: "Richard Purdie" <rpurdie@rpsys.net>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu> (partly)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Bellido Nicolas
Core support for AAEC-2000 based platforms.
This is an updated version of the previous patch, and takes
into account Russell's comments.
AAED-2000 default configuration will follow as soon
as some problems with the bootloader are sorted out...
Signed-off-by: Nicolas Bellido
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
flush_dcache_page() did nothing for these caches, but since they
suffer from I/D cache coherency issues, we need to ensure that data
is written back to RAM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
Not that there might be many of them on the planet, but at least RMK
apparently has one.
Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM copypage changes in 2.6.12-rc4-git1 removed the preempt locking
from the copypage functions which broke the XScale implementation.
This patch fixes the locking on XScale and removes the now unneeded
minicache code.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Checked-by: Richard Purdie
Patch from Nicolas Pitre
Not all ARMv6 processors implement the TLS register.
Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the locking for copy_user_page() and clear_user_page() into
the implementations which require locking. For simple memcpy/
memset based implementations, the locking is extra overhead which
is not necessary, and prevents preemption occurring.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Add pmd_off() and pmd_off_k() to obtain the pmd pointer for a
virtual address, and use them throughout the mm initialisation.
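They are small helpers; their likely shape (a sketch consistent with
the description above):

    static inline pmd_t *pmd_off(pgd_t *pgd, unsigned long virt)
    {
            return pmd_offset(pgd, virt);
    }

    static inline pmd_t *pmd_off_k(unsigned long virt)
    {
            return pmd_off(pgd_offset_k(virt), virt);
    }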
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Patch from Nicolas Pitre
This better expresses things, and should cover RMK's weird SMP toys.
Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from George G. Davis
This patch is required for kernel XIP support on ARMv6 machines. It ensures that the access permission bits for kernel XIP section descriptors are APX=1 and AP[1:0]=01, which is Kernel read-only/User no access permissions. Prior to this change, kernel XIP section descriptor access permissions were set to Kernel no access/User no access on ARMv6 machines and the kernel would therefore hang upon entry to userspace when set_fs(USER_DS) was executed.
Signed-off-by: Steve Longerbeam
Signed-off-by: George G. Davis
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This patch entirely reworks the kernel assistance for NPTL on ARM.
In particular this provides an efficient way to retrieve the TLS
value and perform atomic operations without any instruction emulation
nor special system call. This even allows for pre ARMv6 binaries to
be forward compatible with SMP systems without any penalty.
The problematic and performance critical operations are performed
through a segment of kernel-provided user code reachable from user space
at a fixed address in kernel memory. Those fixed entry points are
within the vector page so we basically get it for free as no extra
memory page is required and nothing else may be mapped at that
location anyway.
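As an illustration of what a fixed entry point means for user space
(a sketch; the 0xffff0fe0 address for the TLS helper is an assumption
here, as is the wrapper itself):

    typedef unsigned int (*kuser_get_tls_t)(void);
    #define __kuser_get_tls ((kuser_get_tls_t)0xffff0fe0)

    unsigned int tls = __kuser_get_tls();   /* no trap, no syscall */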
This is different from (but doesn't preclude) a full blown VDSO
implementation, however a VDSO would prevent some assembly tricks with
constants that allow for efficient branching to those code segments.
And since those code segments only use a few cycles before returning to
user code, the overhead of a VDSO far call would add a significant
overhead to such minimalistic operations.
The ARM_NR_set_tls syscall also changed number. This is done for two
reasons:
1) this patch changes the way the TLS value was previously meant to be
retrieved, therefore we ensure that any library using the old way
gets fixed (they only exist in private trees at the moment since the
NPTL work is still progressing).
2) the previous number was allocated in a range causing an undefined
instruction trap on kernels not supporting that syscall and it was
determined that allocating it in a range returning -ENOSYS would be
much nicer for libraries trying to determine if the feature is
present or not.
Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from George G. Davis
As noted in http://www.arm.com/linux/patch-2.6.9-arm1.gz, the "Faulty SWP instruction on 1136 doesn't set bit 11 in DFSR." So the v6_early_abort handler does not report the correct rd/wr direction for the SWP instruction which may result in SEGVS or hangs. In order to work around this problem, this patch merely updates the fix contained in the ARM Ltd. patch to use the macroised abort handler fixups.
Signed-off-by: George G. Davis
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
From: Russell King <rmk+lkml@arm.linux.org.uk>
Oddly, max_low_pfn/max_pfn end up being the number of pages in the system,
rather than the maximum PFN on ARM. This doesn't seem to cause any problems,
so just add a note about it.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: Russell King <rmk+lkml@arm.linux.org.uk>
ARM wasn't raising a SIGBUS with a siginfo structure. Fix
__do_user_fault() to allow us to use it for SIGBUS conditions, and arrange
for the sigbus path to use this.
We need to prevent the siginfo code being called if we do not have a user
space context to call it, so consolidate the "user_mode()" tests.
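The SIGBUS path then fills in a siginfo much like the SIGSEGV one
does (a sketch; field values are illustrative):

    struct siginfo si;

    si.si_signo = SIGBUS;
    si.si_errno = 0;
    si.si_code  = BUS_ADRERR;
    si.si_addr  = (void __user *)addr;
    force_sig_info(SIGBUS, &si, tsk);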
Thanks to Ian Campbell who spotted this oversight.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!