Commit Graph

36 Commits

Author SHA1 Message Date
Christoffer Dall 4d9c5b89cf ARM: 7950/1: mm: Fix stage-2 device memory attributes
The stage-2 memory attributes are distinct from the Hyp memory
attributes and the Stage-1 memory attributes.  We were using the stage-1
memory attributes for stage-2 mappings causing device mappings to be
mapped as normal memory.  Add the S2 equivalent defines for memory
attributes and fix the comments explaining the defines while at it.

Add a prot_pte_s2 field to the mem_type struct and fill out the field
for device mappings accordingly.

Cc: <stable@vger.kernel.org>	[3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10 11:44:05 +00:00
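
A rough sketch of the split the entry above describes, with illustrative encodings (the real L_PTE_S2_MT_* values and the full mem_type layout live in arch/arm/include/asm/pgtable-3level.h and arch/arm/mm/mmu.c, so treat the numbers below as placeholders):

    /* Stage-2 memory attributes, distinct from the stage-1 L_PTE_MT_* set
     * (values shown are illustrative, not authoritative). */
    #define L_PTE_S2_MT_UNCACHED      (_AT(pteval_t, 0x0) << 2)  /* strongly ordered */
    #define L_PTE_S2_MT_DEV_SHARED    (_AT(pteval_t, 0x1) << 2)  /* device memory */
    #define L_PTE_S2_MT_WRITEBACK     (_AT(pteval_t, 0xf) << 2)  /* normal, write-back */

    struct mem_type {
        pteval_t prot_pte;     /* stage-1 attributes */
        pteval_t prot_pte_s2;  /* stage-2 attributes; device mappings must stay Device memory here */
        /* ... */
    };
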
Russell King 4dcfa60071 ARM: DMA-API: better handling of DMA masks for coherent allocations
We need to start treating DMA masks as something which is specific to
the bus that the device resides on, otherwise we're going to hit all
sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit
systems, where memory is offset from PFN 0.

In order to start doing this, we convert the DMA mask to a PFN using
the device specific dma_to_pfn() macro.  This is the reverse of the
pfn_to_dma() macro which is used to get the DMA address for the device.

This gives us a PFN mask, which we can then check against the PFN
limit of the DMA zone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31 14:49:21 +00:00
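
A hedged sketch of the check the entry above describes. dma_to_pfn() is the real per-device helper; the arm_dma_pfn_limit name and the exact comparison are assumptions made here for illustration:

    /* Convert the device's DMA mask into a PFN and compare it against the
     * PFN limit of the DMA zone, rather than comparing raw bus addresses. */
    static int mask_covers_dma_zone(struct device *dev, u64 mask)
    {
        /* treat the mask as the highest reachable bus address (sketch) */
        unsigned long mask_pfn = dma_to_pfn(dev, mask);   /* reverse of pfn_to_dma() */

        /* On systems where RAM is offset from PFN 0, a raw 32-bit mask can
         * still reach the whole DMA zone once translated to a PFN. */
        return mask_pfn >= arm_dma_pfn_limit - 1;
    }
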
Joonsoo Kim ed8fd2186a ARM: 7645/1: ioremap: introduce an infrastructure for static mapped area
In the current implementation, we use an ARM-specific flag, namely
VM_ARM_STATIC_MAPPING, to distinguish ARM static mapped areas.
The purpose of a static mapped area is to be re-used when the entire
physical address range of an ioremap request can be covered
by that area.

This implementation causes needless overhead in some cases.
For example, assume that there is only one static mapped area and
vmlist has 300 areas. Every time we call ioremap, we check all 300 areas
to decide whether one of them matches. Moreover, even if there is
no static mapped area at all and vmlist has 300 areas, every call to
ioremap still checks all 300 areas.

If we construct an extra list for static mapped areas, we can eliminate
the overhead mentioned above.
With such a list, if there is one static mapped area,
we check just that one area and proceed to the next operation quickly.

In fact, this is not a critical problem, because ioremap is not frequently
used. But reducing the overhead is still worthwhile.

Another reason for doing this work is to remove an architecture dependency
on the vmalloc layer. vmlist and vmlist_lock are internal data
structures of the vmalloc layer. Some debugging and statistics code
inevitably uses vmlist and vmlist_lock, but it is preferable that they are
used as little as possible outside of vmalloc.c.

Now, I introduce an ARM-specific infrastructure for static mapped areas. The
following patch will use it to resolve the problem described above.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-16 17:54:22 +00:00
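
A minimal sketch of the dedicated list the entry above introduces; the lookup helper name is hypothetical, but the shape follows the description (a small struct wrapping vm_struct plus its own list, so only static mappings are walked):

    struct static_vm {
        struct vm_struct vm;
        struct list_head list;
    };

    static LIST_HEAD(static_vmlist);

    /* Walk only the handful of static mappings instead of the whole vmlist. */
    static struct static_vm *find_static_vm_paddr(phys_addr_t paddr, size_t size)
    {
        struct static_vm *svm;

        list_for_each_entry(svm, &static_vmlist, list) {
            struct vm_struct *vm = &svm->vm;

            if (vm->phys_addr <= paddr &&
                paddr + size <= vm->phys_addr + vm->size)
                return svm;
        }
        return NULL;
    }
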
Russell King a849088aa1 ARM: Fix ioremap() of address zero
Murali Nalajala reports a regression that ioremapping address zero
results in an oops dump:

Unable to handle kernel paging request at virtual address fa200000
pgd = d4f80000
[fa200000] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0    Tainted: G        W (3.4.0-g3b5f728-00009-g638207a #13)
PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30
LR is at msm_pm_boot_config_before_pc+0x18/0x20
pc : [<c0078f84>]    lr : [<c007903c>]    psr: a0000093
sp : c0837ef0  ip : cfe00000  fp : 0000000d
r10: da7efc17  r9 : 225c4278  r8 : 00000006
r7 : 0003c000  r6 : c085c824  r5 : 00000001  r4 : fa101000
r3 : fa200000  r2 : c095080c  r1 : 002250fc  r0 : 00000000
Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM Segment kernel
Control: 10c5387d  Table: 25180059  DAC: 00000015
[<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20)
[<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04)
[<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0)
[<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c)
[<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4)
[<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0)
[<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c)
Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000)

This is caused by the 'reserved' entries which we insert (see
19b52abe3c - ARM: 7438/1: fill possible PMD empty section gaps)
which get matched for physical address zero.

Resolve this by marking these reserved entries with a different flag.

Cc: <stable@vger.kernel.org>
Tested-by: Murali Nalajala <mnalajal@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-08-25 09:11:40 +01:00
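
A hedged sketch of the flag split behind the fix above; the flag names and values are illustrative of the idea only (gap-filler entries no longer carry the flag that the ioremap reuse path matches on, so physical address zero is no longer "found" in them):

    #define VM_ARM_STATIC_MAPPING  0x40000000  /* real iotable_init() mappings, reusable by ioremap */
    #define VM_ARM_EMPTY_MAPPING   0x20000000  /* PMD gap fillers: never matched, phys 0 stays unmapped */

    /* inside the ioremap reuse walk (sketch): skip anything that is not a true static mapping */
    if (!(vm->flags & VM_ARM_STATIC_MAPPING))
        continue;
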
Marek Szyprowski e9da6e9905 ARM: dma-mapping: remove custom consistent dma region
This patch changes the dma-mapping subsystem to use generic vmalloc areas
for all consistent dma allocations. This increases the total size limit
of consistent allocations and removes platform hacks and a lot of
duplicated code.

Atomic allocations are served from a special pool preallocated at boot,
because vmalloc areas cannot be reliably created in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
2012-07-30 12:25:45 +02:00
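
A minimal sketch of the allocation split described above, assuming illustrative helper names (__alloc_from_pool, __alloc_remap_buffer are stand-ins); the point is only the gfp-based decision between the boot-time pool and the generic vmalloc-area path:

    static void *__alloc_from_pool(size_t size);                 /* carve from the boot-time atomic pool */
    static void *__alloc_remap_buffer(struct device *dev, size_t size,
                                      gfp_t gfp, pgprot_t prot); /* generic vmalloc-area path */

    void *consistent_alloc_sketch(struct device *dev, size_t size, gfp_t gfp, pgprot_t prot)
    {
        if (!(gfp & __GFP_WAIT))             /* atomic context: cannot build a vmalloc area */
            return __alloc_from_pool(size);

        return __alloc_remap_buffer(dev, size, gfp, prot);
    }
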
Russell King 09b2ad13da ARM: fix warning caused by wrongly typed arm_dma_limit
arch/arm/mm/init.c: In function 'arm_memblock_init':
arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast

by fixing the typecast in its definition when DMA_ZONE is disabled.
This was missed in 4986e5c7c (ARM: mm: fix type of the arm_dma_limit
global variable).

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-07-05 13:11:31 +01:00
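
For reference, a hedged sketch of how the definition looks once both this fix and 4986e5c7c (the entry below) are applied; the cast on the non-ZONE_DMA definition is what resolves the warning:

    #ifdef CONFIG_ZONE_DMA
    extern phys_addr_t arm_dma_limit;        /* highest physical address reachable by DMA */
    #else
    #define arm_dma_limit ((phys_addr_t)~0)  /* cast keeps comparisons against phys_addr_t values clean */
    #endif
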
Marek Szyprowski 4986e5c7cd ARM: mm: fix type of the arm_dma_limit global variable
arm_dma_limit stores the maximal physical address accessible by DMA,
so the phys_addr_t type makes much more sense for it than u32. This
patch fixes the following build warning:

arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2012-06-11 14:30:47 +02:00
Marek Szyprowski c790950928 ARM: integrate CMA with DMA-mapping subsystem
This patch adds CMA support to the dma-mapping subsystem for the ARM
architecture. By default a global CMA area is used, but specific devices
are allowed to have their own private memory areas if required (these can
be created with the dma_declare_contiguous() function during board
initialisation).

Contiguous memory areas reserved for DMA are remapped with 2-level page
tables on boot. Once a buffer is requested, the low memory kernel mapping
is updated to match the requested memory access type.

GFP_ATOMIC allocations are performed from a special pool created early
during boot. This way, remapping page attributes is not needed at
allocation time.

CMA has been enabled unconditionally for ARMv6+ systems.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21 15:09:38 +02:00
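
A hedged example of the per-device reservation mentioned above; camera_device and the 16 MiB size are made up, and the dma_declare_contiguous() call is only a sketch of how a board file would use it:

    /* Board init code: give one device its own contiguous area;
     * everything else falls back to the global CMA area. */
    static void __init board_reserve(void)
    {
        dma_declare_contiguous(&camera_device.dev, 16 * SZ_1M, 0, 0);
    }
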
Russell King 60db4fcf14 ARM: pgtable: get rid of TOP_PTE()
Get rid of the TOP_PTE() macro as we now have proper accessor functions
instead.  No one should be directly referencing the top pte table
anymore.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-26 20:07:59 +00:00
Russell King 0d31fe47b0 ARM: pgtable: provide get_top_pte() to complement set_top_pte()
Provide get_top_pte() to complement set_top_pte(), moving the only
users of TOP_PTE to arch/arm/mm/mm.h.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-26 20:07:41 +00:00
Russell King 67ece14431 ARM: pgtable: consolidate set_pte_ext(TOP_PTE,...) + tlb flush
A number of places establish a PTE in our top page table and
immediately flush the TLB.  Rather than having this at every callsite,
provide an inline function for this purpose.

This changes some global TLB flushes to be local; each time we set up
one of these mappings, we always do it with preemption disabled, which
prevents us from migrating to another CPU.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-26 20:06:28 +00:00
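
The helper the entry above describes looks roughly like this (a sketch; the real version lives in arch/arm/mm/mm.h):

    static inline void set_top_pte(unsigned long va, pte_t pte)
    {
        pte_t *ptep = pte_offset_kernel(top_pmd, va);

        set_pte_ext(ptep, pte, 0);
        local_flush_tlb_kernel_page(va);  /* local flush suffices: callers run with preemption disabled */
    }
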
Russell King de27c30822 ARM: pgtable: move TOP_PTE address definitions to arch/arm/mm/mm.h
Move the TOP_PTE address definitions to one central place so that it's
easy to discover what they're being used for.  This helps to ensure
that there are no overlaps.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-26 20:06:14 +00:00
Nicolas Pitre 576d2f2525 ARM: add generic ioremap optimization by reusing static mappings
Now that we have all the static mappings from iotable_init() located
in the vmalloc area, it is trivial to optimize ioremap by reusing those
static mappings when the requested physical area fits in one of them,
and to do so in a generic way for all platforms.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
2011-11-26 19:21:28 -05:00
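
A hedged sketch of the reuse check described above; the walk over the static mappings and the return expression are illustrative only, not the exact code:

    /* If an existing static mapping already covers [paddr, paddr + size),
     * hand back an offset into it instead of building a new mapping. */
    if (vm->phys_addr <= paddr &&
        paddr + size <= vm->phys_addr + vm->size)
        return (void __iomem *)(vm->addr + (paddr - vm->phys_addr));
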
Catalin Marinas 442e70c0b3 ARM: 7076/1: LPAE: Add (pte|pmd)val_t type definitions as u32
This patch defines the (pte|pmd)val_t as u32 and changes the page table
types to be based on these. The PMD bits are converted to the
corresponding type using the _AT macro.

The flush_pmd_entry/clean_pmd_entry argument was changed to (void *) to
allow them to be used with both PGD and PMD pointers and avoid code
duplication.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-06 15:40:05 +01:00
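
A sketch of the definitions the entry above describes, for the classic (non-LPAE) case; _AT() is the standard linux/const.h helper that casts in C but expands to the bare value in assembly, and the PMD bit shown is just one example of the conversion:

    typedef u32 pteval_t;
    typedef u32 pmdval_t;

    #ifndef __ASSEMBLY__
    #define _AT(T, X)  ((T)(X))
    #else
    #define _AT(T, X)  X
    #endif

    #define PMD_TYPE_TABLE  (_AT(pmdval_t, 1) << 0)  /* example bit carried in the new type */
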
Russell King 022ae537b2 ARM: dma: replace ISA_DMA_THRESHOLD with a variable
ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get
rid of it from ARM by replacing it with arm_dma_zone_mask.  Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-12 11:08:12 +01:00
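
A hedged sketch of the out-of-line check described above; the exact comparison in the tree may differ, but the shape is a runtime variable set by platform code instead of a compile-time threshold:

    u64 arm_dma_zone_mask;  /* set by platform code, replacing the fixed ISA_DMA_THRESHOLD */

    int dma_supported(struct device *dev, u64 mask)
    {
        if (arm_dma_zone_mask && mask < arm_dma_zone_mask)
            return 0;  /* device cannot address the whole DMA zone */
        return 1;
    }
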
Russell King cc780af5ac ARM: kill pmd_off()
pmd_off() has only one user, so let's consolidate it into that
user.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 19:50:30 +01:00
Russell King 516295e5ab ARM: pgtable: add pud-level code
Add pud_offset() et al. between the pgd and pmd code in preparation for
using pgtable-nopud.h rather than 4level-fixup.h.

This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for
uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-21 19:24:14 +00:00
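
With the pud level in place, a page-table walk reads as below; on 2-level ARM the pud_offset() step simply folds back onto the pgd (a sketch of the walk, not the patch itself):

    pgd_t *pgd = pgd_offset(mm, addr);
    pud_t *pud = pud_offset(pgd, addr);    /* no-op fold with pgtable-nopud.h */
    pmd_t *pmd = pmd_offset(pud, addr);
    pte_t *pte = pte_offset_map(pmd, addr);
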
Russell King f6e3354d02 ARM: pgtable: introduce pteval_t to represent a pte value
This makes everything that deals with pte values use the same type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-11-26 20:45:47 +00:00
Russell King 8d717a52d1 ARM: Convert platform reservations to use LMB rather than bootmem
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-27 08:48:23 +01:00
Russell King 2778f62056 ARM: initial LMB trial
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-27 08:48:22 +01:00
Russell King 98c672cf1f ARM: Move platform memory reservations out of generic code
Move the platform specific bootmem memory reservations out of
arch/arm/mm/mmu.c into their respective platform files.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-16 11:06:40 +01:00
Russell King be37030274 ARM: Remove DISCONTIGMEM support
Everything should now be using sparsemem rather than discontigmem, so
remove the code supporting discontigmem from ARM.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-16 10:57:35 +01:00
Russell King a2227120ee ARM: Move memory mapping into mmu.c
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-15 15:03:49 +01:00
Russell King 7b0a1003e7 ARM: Reduce __flush_dcache_page() visibility
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:50 +00:00
Nicolas Pitre 5f0fbf9eca [ARM] fixmap support
This is the minimum fixmap interface expected to be implemented by
architectures supporting highmem.

We have a second level page table already allocated and covering
0xfff00000-0xffffffff because the exception vector page is located
at 0xffff0000, and various cache tricks already use some entries above
0xffff0000.  Therefore the PTEs covering 0xfff00000-0xfffeffff are free
to be used.

However, the XScale cache flushing code already uses virtual addresses
between 0xfffe0000 and 0xfffeffff.

So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.

The Documentation/arm/memory.txt information is updated accordingly,
including the information about the actual top of the DMA memory mapping
region, which didn't match the code.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:20 -04:00
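
The resulting address arithmetic, roughly (the constants follow the ranges quoted above; the real definitions are in arch/arm/include/asm/fixmap.h):

    #define FIXADDR_START  0xfff00000UL
    #define FIXADDR_TOP    0xfffe0000UL   /* stays below the XScale cache-flush window */

    #define __fix_to_virt(idx)  (FIXADDR_START + ((idx) << PAGE_SHIFT))
    #define __virt_to_fix(va)   (((va) - FIXADDR_START) >> PAGE_SHIFT)
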
Russell King 37efe6427d [ARM] use asm/sections.h
Update to use the asm/sections.h header rather than declaring these
symbols ourselves.  Change __data_start to _data to conform with the
naming found within asm/sections.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-01 11:53:07 +00:00
Nicolas Pitre 4b5f32cee0 [ARM] rationalize memory configuration code some more
Currently there are two instances of struct meminfo: one in
kernel/setup.c marked __initdata, and another in mm/init.c with
permanent storage.  Let's keep only the latter and directly populate
the permanent version from arm_add_memory().

Also move common validation tests between the MMU and non-MMU cases
into arm_add_memory() to remove some duplication.  Protection against
overflowing the membank array is also moved in there in order to cover
the kernel cmdline parsing path as well.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:44 +00:00
Russell King 6a4690c22f Merge branch 'ptebits' into devel
Conflicts:

	arch/arm/Kconfig
2008-10-09 21:31:56 +01:00
Russell King 40d192b63d [ARM] remove 'prot_pte_ext' from memory type table
This member is now redundant; the memory type is encoded in the Linux
PTE bits.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-10-01 16:41:02 +01:00
Russell King 5ed5fdf50c [ARM] clean up a load of old declarations
... some of which are now in linux/*.h headers.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-06 11:23:30 +01:00
Russell King c172cc92c8 [ARM] mm 6: allow mem_types table to specify extended pte attributes
Add prot_pte_ext to the mem_types table to allow the extended pte
attributes to be passed to set_pte_ext(), thereby permitting us to
specify memory type information for the hardware PTE entries.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-04-21 20:36:02 +01:00
Russell King b29e9f5e64 [ARM] mm 5: Use mem_types table in ioremap
We really want to be using the memory type table in ioremap, so we
only have to do the CPU type fixups in one place.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-04-21 20:36:00 +01:00
Russell King 3ff1559eae [ARM] Fix nommu build
Fix warnings and errors in arch/arm/mm for the nommu build.
Remove a commented-out function prototype in pgtable-nommu.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-11-30 13:53:54 +00:00
Russell King ae8f154129 [ARM] Move rest of MMU setup code from mm-armv.c to mmu.c
If we're going to have mmu.c for code which is specific to the MMU
machines, we might as well move the other MMU initialisation
specific code from mm-armv.c into this new file.  This also allows
us to make some functions static.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-09-27 15:38:34 +01:00
Russell King d111e8f964 [ARM] Split ARM MM initialisation for !mmu
Move the MMU specific code from init.c into mmu.c, and add nommu
fixups to nommu.c

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-09-27 15:27:33 +01:00
Russell King 1b2e2b73b4 [ARM] Cleanup arch/arm/mm a little
Move top_pmd into arch/arm/mm/mm.h - nothing outside arch/arm/mm
references it.

Move the repeated definition of TOP_PTE into mm/mm.h, as well as
a few function prototypes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-09-20 14:58:35 +01:00