Compiling recent 2.6.29-rc kernels for ARM gives me the following warning:
arch/arm/mm/mmu.c: In function 'sanity_check_meminfo':
arch/arm/mm/mmu.c:697: warning: comparison between pointer and integer
This is because commit 3fd9825c42
"[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()"
in 2.6.29-rc5-git4 added a comparison of a pointer with PAGE_OFFSET,
which is an integer.
Fixed by casting PAGE_OFFSET to void *.
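For illustration, the change boils down to making both sides of the
comparison pointers (simplified; the real check lives in
sanity_check_meminfo()):

    /* before: __va() yields a pointer, PAGE_OFFSET is an integer */
    if (__va(bank->start) < PAGE_OFFSET)
    /* after */
    if (__va(bank->start) < (void *)PAGE_OFFSET)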
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
VIPT aliasing caches have issues of their own which are not yet handled.
Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
kmap/fixmap stuff doesn't take account of cache colouring, etc.
If/when those issues are handled then this could be reverted.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel. This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction. Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).
Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.
The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.
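For illustration, the atomic path flushes through the fixmap address
while the mapping still exists, roughly like this (simplified sketch,
not the exact code):

    if (kvaddr >= (void *)FIXADDR_START) {
        unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;

        /* flush via the about-to-disappear kernel mapping */
        __cpuc_flush_dcache_page((void *)vaddr);
        /* ...then clear the fixmap pte and flush its TLB entry... */
    }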
Signed-off-by: Nicolas Pitre <nico@marvell.com>
This patch adds a Non-cacheable Normal ARM executable memory type,
MT_MEMORY_NONCACHED.
On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
VDD2 voltage domain, and its clock frequency must change along with
voltage. The SDRC clock change code cannot run from SDRAM itself,
since SDRAM accesses are paused during the clock change. So the
current implementation of the DVFS code executes from OMAP on-chip
SRAM, aka "OCM RAM."
If the OCM RAM pages are marked as Cacheable, the ARM cache controller
will attempt to flush dirty cache lines to the SDRC, so it can fill
those lines with OCM RAM instruction code. The problem is that the
SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
subsystem to hang.
TI's original solution to this problem was to mark the OCM RAM
sections as Strongly Ordered memory, thus preventing caching. This is
overkill: since the memory is marked as non-bufferable, OCM RAM writes
become needlessly slow. The idea of "Strongly Ordered SRAM" is also
conceptually disturbing. Previous LAKML list discussion is here:
http://www.spinics.net/lists/arm-kernel/msg54312.html
This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
patch.
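For illustration, a platform would then request the new type in its
static mapping table along these lines (addresses and sizes below are
made up):

    static struct map_desc sram_desc __initdata = {
        .virtual = 0xfe400000,                /* assumed SRAM virtual base  */
        .pfn     = __phys_to_pfn(0x40200000), /* assumed SRAM physical base */
        .length  = SZ_64K,
        .type    = MT_MEMORY_NONCACHED,
    };

    iotable_init(&sram_desc, 1);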
Cc: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In the non highmem case, if two memory banks of 1GB each are provided,
the second bank would evade suppression since its virtual base would
be 0. Fix this by disallowing any memory bank whose virtual base
address is found to be lower than PAGE_OFFSET.
Reported-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As noted by Akinobu Mita in patch b1fceac2b9,
alloc_bootmem and related functions never return NULL and always return a
zeroed region of memory. Thus a NULL test or memset after calls to these
functions is unnecessary.
This was fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression E;
statement S;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)
@@
expression E,E1;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>
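In C terms, the transformation removes patterns like the following
(hypothetical caller):

    ptr = alloc_bootmem(size);
    BUG_ON(ptr == NULL);    /* removed: alloc_bootmem() never returns NULL */
    memset(ptr, 0, size);   /* removed: the memory is already zeroed */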
Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Update to use the asm/sections.h header rather than declaring these
symbols ourselves. Change __data_start to _data to conform with the
naming found within asm/sections.h.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 8d5796d2ec allows for the vmalloc
area to be resized from the kernel cmdline. Make sure it cannot overlap
with RAM entirely.
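The check amounts to clamping the reserved size so that some lowmem
always remains, roughly like this (the 32MB margin shown here is
illustrative, not the exact mainline constant):

    if (vmalloc_reserve > VMALLOC_END - (PAGE_OFFSET + SZ_32M)) {
        vmalloc_reserve = VMALLOC_END - (PAGE_OFFSET + SZ_32M);
        printk(KERN_WARNING
               "vmalloc area is too big, limiting to %luMB\n",
               vmalloc_reserve >> 20);
    }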
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Splitting memory banks that cross the lowmem/highmem boundary will
greatly simplify the bootmem initialization code, as each bank is then
entirely lowmem or highmem with no crossing between those zones.
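The split itself is straightforward, roughly (sketch only:
insert_bank_after() is a hypothetical helper, vmalloc_limit stands for
the lowmem/highmem boundary, and a per-bank highmem flag is assumed):

    if (bank->start < vmalloc_limit &&
        bank->start + bank->size > vmalloc_limit) {
        unsigned long low_size = vmalloc_limit - bank->start;

        insert_bank_after(bank);        /* duplicate the bank entry */
        bank[1].start   = vmalloc_limit;
        bank[1].size    = bank->size - low_size;
        bank[1].highmem = 1;
        bank->size      = low_size;     /* original entry keeps the lowmem part */
    }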
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently there are two instances of struct meminfo: one in
kernel/setup.c marked __initdata, and another in mm/init.c with
permanent storage. Let's keep only the latter, and directly populate
the permanent version from arm_add_memory().
Also move common validation tests between the MMU and non-MMU cases
into arm_add_memory() to remove some duplication. Protection against
overflowing the membank array is also moved in there in order to cover
the kernel cmdline parsing path as well.
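The guard in arm_add_memory() then looks roughly like this
(simplified):

    if (meminfo.nr_banks >= NR_BANKS) {
        printk(KERN_CRIT "NR_BANKS too low, "
               "ignoring memory at %#lx\n", start);
        return -EINVAL;
    }

    bank = &meminfo.bank[meminfo.nr_banks++];
    bank->start = start;
    bank->size  = size;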
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As suggested by Andrew Morton, remove memzero() - it's not supported
on other architectures, so use of it is a potential build-breaking bug.
Since the compiler optimizes memset(x,0,n) to __memzero() perfectly
well, we don't miss out on the underlying benefits of memzero().
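A typical conversion is simply:

    /* before (ARM-only) */
    memzero(buf, sizeof(buf));
    /* after (portable; the zero case still ends up in __memzero()) */
    memset(buf, 0, sizeof(buf));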
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Mikael Pettersson reported:
The 2.6.28-rc kernels fail to detect PCI device 0000:00:01.0
(the first ethernet port) on my Thecus n2100 XScale box.
There is however still a strange "ghost" device that gets partially
detected in 2.6.28-rc2 vanilla.
The IOP321 manual says:
The user designates the memory region containing the OCCDR as
non-cacheable and non-bufferable from the Intel® XScale™ core.
This guarantees that all load/stores to the OCCDR are only of
DWORD quantities.
Ensure that the OCCDR is so mapped.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As a result of the ptebits changes, we ended up marking device mappings
as normal memory on ARMv7 CPUs, resulting in undesirable behaviour with
serial ports and the like. While reviewing the section mapping table
entries, other errors in the memory type settings for devices were
detected and confirmed to prevent Xscale3 platforms booting.
Tested on:
OMAP34xx (ARMv7),
OMAP24xx (ARMv6),
OMAP16xx (ARM926T, ARMv5),
PXA311 (Xscale3),
PXA272 (Xscale),
PXA255 (Xscale),
IXP42x (Xscale),
S3C2410 (ARM920T, ARMv4T),
ARM720T (ARMv4T),
StrongARM-110 (ARMv4)
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Ben Dooks <ben-linux@fluff.org>
Tested-by: Anders Grafström <grfstrm@users.sourceforge.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As of 73bdf0a60e, the kernel needs
to know where modules are located in the virtual address space.
On ARM, we located this region between MODULE_START and MODULE_END.
Unfortunately, everyone else calls it MODULES_VADDR and MODULES_END.
Update ARM to use the same naming, so is_vmalloc_or_module_addr()
can work properly. Also update the comment in mm/vmalloc.c to
reflect that ARM also places modules in a separate region from the
vmalloc space.
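For reference, the region sits just below PAGE_OFFSET, so the renamed
definitions look along these lines (illustrative for a conventional
3:1 split without XIP; exact expressions vary by configuration):

    #define MODULES_VADDR   (PAGE_OFFSET - SZ_16M)
    #define MODULES_END     (PAGE_OFFSET)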
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As of the previous commit, MT_DEVICE_IXP2000 encodes to the same
PTE bit encoding as MT_DEVICE, so it's now redundant. Convert
MT_DEVICE_IXP2000 to use MT_DEVICE instead, and remove its aliases.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide L_PTE_MT_xxx definitions to describe the memory types that we
use in Linux/ARM. These definitions are carefully picked such that:
1. their LSBs match what is required for pre-ARMv6 CPUs.
2. they all have a unique encoding, including after modification
by build_mem_type_table() (the result being that some memory types
end up with more than one bit combination.)
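The basic types illustrate point 1: the low two bits of the field
mirror the pre-ARMv6 C and B bits (values shown for illustration of
the scheme):

    #define L_PTE_MT_UNCACHED     (0x00 << 2)  /* C=0 B=0 */
    #define L_PTE_MT_BUFFERABLE   (0x01 << 2)  /* C=0 B=1 */
    #define L_PTE_MT_WRITETHROUGH (0x02 << 2)  /* C=1 B=0 */
    #define L_PTE_MT_WRITEBACK    (0x03 << 2)  /* C=1 B=1 */
    #define L_PTE_MT_MASK         (0x0f << 2)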
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no point scattering this around the tree, the parsing
of the parameter might as well live beside the code which uses
it. That also means we can make vmalloc_reserve a static
variable.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The newly introduced sanity_check_meminfo() function should be
used to collect all validation of the meminfo array, which we
have in bootmem_init(). Move it there.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch provides an ARM implementation of ioremap_wc().
We use different page table attributes depending on which CPU we
are running on:
- Non-XScale ARMv5 and earlier systems: The ARMv5 ARM documents four
possible mapping types (CB=00/01/10/11). We can't use any of the
cached memory types (CB=10/11), since that breaks coherency with
peripheral devices. Both CB=00 and CB=01 are suitable for _wc, and
CB=01 (Uncached/Buffered) allows the hardware more freedom than
CB=00, so we'll use that.
(The ARMv5 ARM seems to suggest that CB=01 is allowed to delay stores
but isn't allowed to merge them, but there is no other mapping type
we can use that allows the hardware to delay and merge stores, so
we'll go with CB=01.)
- XScale v1/v2 (ARMv5): same as the ARMv5 case above, with the slight
difference that on these platforms, CB=01 actually _does_ allow
merging stores. (If you want noncoalescing bufferable behavior
on Xscale v1/v2, you need to use XCB=101.)
- Xscale v3 (ARMv5) and ARMv6+: on these systems, we use TEXCB=00100
mappings (Inner/Outer Uncacheable in xsc3 parlance, Uncached Normal
in ARMv6 parlance).
The ARMv6 ARM explicitly says that any accesses to Normal memory can
be merged, which makes Normal memory more suitable for _wc mappings
than Device or Strongly Ordered memory, as the latter two mapping
types are guaranteed to maintain transaction number, size and order.
We use the Uncached variety of Normal mappings for the same reason
that we can't use C=1 mappings on ARMv5.
The xsc3 Architecture Specification documents TEXCB=00100 as being
Uncacheable and allowing coalescing of writes, which is also just
what we need.
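Drivers then simply use the new interface, for example (hypothetical
framebuffer, names made up):

    void __iomem *fb = ioremap_wc(fb_phys, fb_size);

    if (!fb)
        return -ENOMEM;
    /* ... write-combined accesses to the frame buffer ... */
    iounmap(fb);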
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add asm/cputype.h, moving functions and definitions from asm/system.h
there. Convert all users of 'processor_id' to the more efficient
read_cpuid_id() function.
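Conversions are mechanical, e.g.:

    /* before */
    unsigned int cpuid = processor_id;
    /* after */
    unsigned int cpuid = read_cpuid_id();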
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch will truncate and/or ignore memory banks if their kernel
direct mappings would (partially) overlap with the vmalloc area or
the mappings between the vmalloc area and the address space top, to
prevent crashing during early boot if there happens to be more RAM
installed than we are expecting.
Since the start of the vmalloc area is not at a fixed address (but
the vmalloc end address is, via the per-platform VMALLOC_END define),
a default area of 128M is reserved for vmalloc mappings, which can
be shrunk or enlarged by passing an appropriate vmalloc= command line
option as it is done on x86.
On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe000000,
two 512M RAM banks and vmalloc=128M (the default), this patch gives:
Truncating RAM at 20000000-3fffffff to -35ffffff (vmalloc region overlap).
Memory: 512MB 352MB = 864MB total
On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe800000,
two 256M RAM banks and vmalloc=768M, this patch gives:
Truncating RAM at 00000000-0fffffff to -0e7fffff (vmalloc region overlap).
Ignoring RAM at 10000000-1fffffff (vmalloc region overlap).
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Riku Voipio <riku.voipio@iki.fi>
ext4 uses ZERO_PAGE(0) to zero out blocks. We need to export
different symbols in different arches for the usage of ZERO_PAGE
in modules.
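On ARM, for instance, ZERO_PAGE() is built on empty_zero_page, so the
per-arch change is essentially:

    EXPORT_SYMBOL(empty_zero_page);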
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
This patchset adds a flags variable to reserve_bootmem() and uses the
BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
between crashkernel area and already used memory.
This patch:
Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
If that flag is set, the function returns with -EBUSY if the memory
has already been reserved. This is to avoid conflicts.
Because that code runs before SMP initialisation, there's no race condition
inside reserve_bootmem_core().
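A caller that wants to detect collisions then does something like
this (sketch, variable names made up):

    ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
    if (ret < 0) {
        printk(KERN_WARNING "crashkernel reservation failed - "
               "memory is in use\n");
        return;
    }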
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix powerpc build]
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: <linux-arch@vger.kernel.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, Linux doesn't generate correct page tables for ARMv6 and
later cores if the cache policy is different from the default one (it
may lead to strongly ordered or shared device mappings). This patch
disallows cache policies other than writeback; the
CPU_[ID]CACHE_DISABLE options now only affect the CP15 system control
register rather than the page tables.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
AT91SAM9260 stopped booting with the recent changes to MM
initialisation - it was asking for a non-aligned virtual address
which caused non-terminating loops. Fix this by rounding
virtual addresses down, but remember to include the offset in
the length, and round the length up to the following page.
This means that asking for a mapping of 4K starting at 2K into
a page maps two pages as one would expect.
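In other words, the mapping code now rounds roughly like this
(sketch):

    offset = md->virtual & ~PAGE_MASK;        /* requested sub-page offset */
    virt   = md->virtual & PAGE_MASK;         /* round the address down    */
    length = PAGE_ALIGN(md->length + offset); /* grow the length to match  */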
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add cached device type for ioremap_cached(). Group all device memory
types together, and ensure that they all have a "MT_DEVICE" prefix.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Change the memory types table to define the L1 descriptor bit 4 to
be in terms of the ARMv6 definition - execute never.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add prot_pte_ext to the mem_types table to allow the extended pte
attributes to be passed to set_pte_ext(), thereby permitting us to
specify memory type information for the hardware PTE entries.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We really want to be using the memory type table in ioremap, so we
only have to do the CPU type fixups in one place.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than our three separate loops to setup mappings (by page
mappings up to a section boundary, then section mappings, and the
remainder by page mappings) convert this to a more conventional
Linux style of a loop over each page table level.
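The resulting structure follows the usual kernel idiom, roughly
(sketch; alloc_init_range() stands in for the per-pgd helper that
fills in sections or second-level ptes):

    pgd = pgd_offset_k(addr);
    end = addr + length;
    do {
        unsigned long next = pgd_addr_end(addr, end);

        alloc_init_range(pgd, addr, next, phys, type);
        phys += next - addr;
        addr = next;
    } while (pgd++, addr != end);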
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Catalin Marinas at ARM Ltd says:
> The CPU architects in ARM intended supersections only as a way to map
> addresses >= 4GB. Supersections are not mandated by the architecture
> and there is no easy way to detect their hardware support at run-time
> (other than checking for a specific core). From the analysis done in
> ARM, there wasn't a clear performance gain by using supersections
> rather than sections (no significant improvement in the TLB misses).
Therefore, we should avoid using supersections unless there's a real
need (iow, we're mapping addresses >= 4GB).
This means that we can simplify create_mapping() a bit since we will
only use supersection mappings for addresses >= 4GB, which means that
the physical address, virtual address and length must all be multiples
of the supersection mapping size.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's now no need to carry around each protection separately.
Instead, pass around the pointer to the entry in the mem_types
array which we're interested in.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than combining the domain for a particular memory type with
the protection information each time we want to use it, do so when
we fix up the mem_type array at initialisation time.
Rename struct mem_types to be mem_type - each structure is one
memory type description, not several.
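The fixup then happens once, in build_mem_type_table(), roughly:

    for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
        struct mem_type *t = &mem_types[i];

        if (t->prot_l1)
            t->prot_l1 |= PMD_DOMAIN(t->domain);
        if (t->prot_sect)
            t->prot_sect |= PMD_DOMAIN(t->domain);
    }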
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The PAGE_* user page protection macros don't take into account the
configured memory policy and other architecture specific bits like
the global/ASID and shared mapping bits. Instead of constants let
these depend on a variable fixed up at init just like PAGE_KERNEL.
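The macros then expand to pgprot values finalised at init, along
these lines (bit names illustrative):

    extern pgprot_t pgprot_user;    /* set up in build_mem_type_table() */

    #define PAGE_NONE    pgprot_user
    #define PAGE_SHARED  __pgprot(pgprot_val(pgprot_user) | \
                                  L_PTE_USER | L_PTE_WRITE)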
Signed-off-by: Imre Deak <imre.deak@solidboot.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move adjust_cr() into arch/arm/mm/mmu.c, and move irqflags.h to
a more appropriate place in the header file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
L_PTE_ASID is not really required to be stored in every PTE, since we
can identify it via the address passed to set_pte_at(). So, create
set_pte_ext() which takes the address of the PTE to set, the Linux
PTE value, and the additional CPU PTE bits which aren't encoded in
the Linux PTE value.
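Callers pass the extra hardware bits as a third argument, e.g.:

    /* before: everything had to be encoded in the Linux pte value */
    set_pte(ptep, pfn_pte(pfn, prot));
    /* after: CPU-only bits travel separately */
    set_pte_ext(ptep, pfn_pte(pfn, prot), ext);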
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The RX3715 is similar to the H1940 in the way
that suspend to RAM works, so we can use most
of the extant support for the H1940 with only
a few modifications.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add support for suspend and resume, using the
H1940's bootloader.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Arnaud Patard <arnaud.patard@rtp-net.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge L_PTE_COHERENT with L_PTE_SHARED and free up an L_PTE_* bit.
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
XIP kernels need to know the start/end of text, but we were
missing the declaration of _etext in mmu.c. Add it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>