Provide a helper to indicate whether we need to perform special handling
for boot identity mapping aliases.
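A minimal sketch of the sort of helper described here (the
arm_has_idmap_alias() name and the arch_phys_to_idmap_offset variable are
assumptions, not confirmed by this text):

  /* sketch: true when the boot-time identity map is an alias that
   * callers such as kexec must handle specially */
  static inline bool arm_has_idmap_alias(void)
  {
          return IS_ENABLED(CONFIG_MMU) && arch_phys_to_idmap_offset != 0;
  }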
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Pratyush Anand <panand@redhat.com>
For kexec, we need more functionality from the IDMAP system. We need to
be able to convert physical addresses to their identity mapped versions
as well as virtual addresses.
Convert the existing arch_virt_to_idmap() to deal with physical
addresses instead.
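A sketch of what the converted interface could look like, working from
physical addresses; the phys_to_idmap() name and the
arch_phys_to_idmap_offset variable are assumptions here:

  extern unsigned long arch_phys_to_idmap_offset;

  static inline unsigned long phys_to_idmap(phys_addr_t addr)
  {
          /* platforms with a boot alias register a fixed offset */
          if (IS_ENABLED(CONFIG_MMU) && arch_phys_to_idmap_offset)
                  addr += arch_phys_to_idmap_offset;
          return addr;
  }

  static inline unsigned long virt_to_idmap(const void *p)
  {
          /* the virtual-address variant becomes a thin wrapper */
          return phys_to_idmap(virt_to_phys(p));
  }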
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All drivers that are relevant for rpc or footbridge stopped using
virt_to_bus a while ago, so we can remove it and avoid some
harmless randconfig warnings for drivers that we do not care about:
drivers/atm/zatm.c: In function 'poll_rx':
drivers/atm/zatm.c:401:18: warning: 'bus_to_virt' is deprecated [-Wdeprecated-declarations]
skb = ((struct rx_buffer_head *) bus_to_virt(here[2]))->skb;
FWIW, the remaining drivers using this are:
ATM: firestream, zatm, ambassador, horizon
ISDN: hisax/netjet
V4L: STA2X11, zoran
Net: Appletalk LTPC, Tulip DE4x5, Toshiba IrDA
WAN: comtrol sv11, cosa, lanmedia, sealevel
SCSI: DPT_I2O, buslogic
VME: CA91C142
My best guess is that all of the above are so hopelessly obsolete that
we are best off removing all of them from the kernel, but that can be
done another time.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The physical-relative calculation between the XIP text and data sections
introduced by the previous patch was far from obvious. Let's simplify it
by turning it into a macro which takes the two (virtual) addresses.
This allows us to arrange the calculation in a more obvious manner - we
can make it two sub-expressions which calculate the physical address for
each symbol, and then take the difference of those physical addresses.
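A sketch of the shape such a macro could take (the PHYS_RELATIVE name and
the exact XIP text translation are assumptions): each argument is first
converted to its physical address, then the two results are subtracted:

  #define PHYS_RELATIVE(v_data, v_text) \
          (((v_data) - PAGE_OFFSET + PLAT_PHYS_OFFSET) - \
           ((v_text) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR) + \
            CONFIG_XIP_PHYS_ADDR))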
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When XIP_KERNEL is enabled, the virt to phys address translation for RAM
is not the same as the virt to phys address translation for .text.
The only way to know where physical RAM is located is to use
PLAT_PHYS_OFFSET.
This macro will be useful in other places where there is a similar problem.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make virt_to_idmap() return an unsigned long rather than phys_addr_t.
Returning phys_addr_t here makes no sense, because the definition of
virt_to_idmap() is that it shall return a physical address which maps
identically with the virtual address. Since virtual addresses are
limited to 32-bit, identity mapped physical addresses are as well.
Almost all users already had an implicit narrowing cast to unsigned long
so let's make this official and part of this interface.
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
16MB alignment for ioremap mappings was added by commit a069c896d0 ("[ARM]
3705/1: add supersection support to ioremap()") in order to support supersection
mappings. But __arm_ioremap_pfn_caller uses section and supersection mappings
only in the !SMP && !LPAE case. There is no need for such a big alignment if either
SMP or LPAE is enabled.
After this change, ioremap will use the default maximum alignment of 128 pages.
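A sketch of the conditional, assuming IOREMAP_MAX_ORDER is the symbol
involved; dropping the define for SMP/LPAE lets the generic 128-page
default apply:

  #if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
  /* 16MB alignment only matters when (super)section mappings are used */
  #define IOREMAP_MAX_ORDER       24
  #endif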
Link: https://lkml.kernel.org/g/1419328813-2211-1-git-send-email-d.safonov@partner.samsung.com
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: James Bottomley <JBottomley@parallels.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd.bergmann@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Safonov <d.safonov@partner.samsung.com>
Signed-off-by: Sergey Dyasly <s.dyasly@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Three architectures already define these, and we'll need them generically
soon.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Fengguang Wu reports that building ARM with !MMU results in the
following build error:
arch/arm/kernel/built-in.o: In function `__soft_restart':
>> :(.text+0x1624): undefined reference to `arch_virt_to_idmap'
Fix this by adding an appropriate IS_ENABLED(CONFIG_MMU) into the
__virt_to_idmap() inline function.
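A sketch of the guarded inline (surrounding details are an approximation):

  static inline unsigned long __virt_to_idmap(unsigned long x)
  {
          /* arch_virt_to_idmap is only defined when an MMU is built in,
           * so the IS_ENABLED() check lets !MMU builds drop the reference */
          if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
                  return arch_virt_to_idmap(x);
          else
                  return __virt_to_phys(x);
  }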
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes pfn_to_kaddr() to use phys_addr_t. Without this,
this macro is broken on LPAE systems. For physical addresses above the
first 4GB, the result of shifting the pfn by PAGE_SHIFT may be truncated.
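A sketch of the fix: widen to phys_addr_t before shifting so the upper
bits of an LPAE address survive:

  /* before: the pfn was shifted as an unsigned long, truncating bits >= 4GB */
  #define pfn_to_kaddr(pfn)       __va((phys_addr_t)(pfn) << PAGE_SHIFT)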
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Re-engineer the LPAE TTBR setup code. Rather than passing some shifted
address in order to fit in a CPU register, pass either a full physical
address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).
This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
cpu_set_ttbr() in the secondary CPU startup code path (which was there
to re-set TTBR1 to the appropriate high physical address space on
Keystone2.)
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Include the generic I/O header file so that duplicate implementations
can be removed. This will also help to establish consistency across more
architectures regarding which accessors they support.
Signed-off-by: Thierry Reding <treding@nvidia.com>
The platforms selecting NEED_MACH_MEMORY_H defined the start address of
their physical memory in the respective <mach/memory.h>. With
ARM_PATCH_PHYS_VIRT=y (which is quite common today) this is useless,
because the definition isn't used; the value is determined dynamically instead.
So remove the definitions from all <mach/memory.h> and provide the
Kconfig symbol PHYS_OFFSET with the respective defaults in case
ARM_PATCH_PHYS_VIRT isn't enabled.
This allows dropping the dependency of PHYS_OFFSET on !NEED_MACH_MEMORY_H
which prevents compiling an integrator nommu-kernel.
(CONFIG_PAGE_OFFSET which has "default PHYS_OFFSET if !MMU" expanded to
"0x" because CONFIG_PHYS_OFFSET doesn't exist as INTEGRATOR selects
NEED_MACH_MEMORY_H.)
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With TASK_SIZE set to the maximal RAM address, booting fails in some XIP
configurations (e.g. on the efm32 DK3750). The problem is that
strncpy_from_user et al. check for the address not being above TASK_SIZE
(since 8c56cc8be5 (ARM: 7449/1: use generic strnlen_user and
strncpy_from_user functions)) and this makes booting fail if the XIP
flash is above the RAM address space.
This change is in line with blackfin, frv and m68k which also use
0xffffffff for TASK_SIZE with !MMU.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
It looks like the static mapping area for DMA was replaced by dynamic
allocation into the vmalloc area by commit e9da6e9905 but the
information in Documentation/arm/memory.txt was not removed accordingly.
CONSISTENT_END in arch/arm/include/asm/memory.h has no more users and
can be removed as well.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled. This is because we end up with this calculation:
page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]
in assembly. The asm virt_to_phys() is equivalent to this operation:
addr - PAGE_OFFSET + __pv_phys_offset
and we can see that because this is assembly, the compiler has no chance
to optimise some of that away. This should reduce down to:
page = &mem_map[(addr - PAGE_OFFSET) >> 12]
for the common cases. Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.
Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms. This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having
to deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:
a4c: e3009000 movw r9, #0
a4c: R_ARM_MOVW_ABS_NC __pv_phys_offset
a50: e3409000 movt r9, #0 ; r9 = &__pv_phys_offset
a50: R_ARM_MOVT_ABS __pv_phys_offset
a54: e3002000 movw r2, #0
a54: R_ARM_MOVW_ABS_NC __pv_phys_offset
a58: e3402000 movt r2, #0 ; r2 = &__pv_phys_offset
a58: R_ARM_MOVT_ABS __pv_phys_offset
a5c: e5999004 ldr r9, [r9, #4] ; r9 = high word of __pv_phys_offset
a60: e3001000 movw r1, #0
a60: R_ARM_MOVW_ABS_NC mem_map
a64: e592c000 ldr ip, [r2] ; ip = low word of __pv_phys_offset
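A sketch of the macro described above; with PAGE_OFFSET and
PHYS_PFN_OFFSET visible to the compiler as ordinary values, the common
virt_to_page() case can collapse to the simple indexed form shown earlier:

  #define virt_to_pfn(kaddr) \
          ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
           PHYS_PFN_OFFSET)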
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With noMMU, CONFIG_PAGE_OFFSET was not being set correctly. As there's
no MMU, PAGE_OFFSET should be equal to PHYS_OFFSET in all cases. This
commit makes that explicit.
Since we do this, we don't need to mess around in asm/memory.h with
ifdefs to sort this out, so let's get rid of that, and there's no point
offering the "Memory split" option for noMMU as that's meaningless
there.
Fixes: b9b32bf70f ("ARM: use linker magic for vectors and vector stubs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The definition of virt_addr_valid is that it should
return true if and only if virt_to_page returns a valid pointer.
The current definition of virt_addr_valid only checks against the
virtual address range. There's no guarantee that just because a
virtual address falls between PAGE_OFFSET and high_memory the
associated physical memory has a valid backing struct page. Follow
the example of other architectures and convert to pfn_valid to
verify that the virtual address is actually valid. The check for
an address between PAGE_OFFSET and high_memory is still necessary
as vmalloc/highmem addresses are not valid with virt_to_page.
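A sketch of the resulting check (the exact expression is an
approximation): the range test keeps vmalloc/highmem addresses out, and
pfn_valid() confirms that a backing struct page actually exists:

  #define virt_addr_valid(kaddr) \
          (((unsigned long)(kaddr) >= PAGE_OFFSET && \
            (unsigned long)(kaddr) < (unsigned long)high_memory) && \
           pfn_valid(virt_to_pfn(kaddr)))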
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
not defined:
In file included from arch/arm/include/asm/page.h:163:0,
from include/linux/mm_types.h:16,
from include/linux/sched.h:24,
from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)
Fixes: ca5a45c06c ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make sure that inline assembler that expects an 'r' operand
receives a 32-bit value.
Before this fix, in the CONFIG_ARCH_PHYS_ADDR_T_64BIT and
CONFIG_ARM_PATCH_PHYS_VIRT case, the __phys_to_virt function passed a
64-bit value to the __pv_stub inline assembler where an 'r' operand is
expected. Compiler behaviour in such a case is not well specified.
It worked in the little-endian case, but in the big-endian case
incorrect code was generated, where the compiler was confused about
which part of the 64-bit value it needed to modify. For example the BE
snippet looked like this:
N:0x80904E08 : MOV r2,#0
N:0x80904E0C : SUB r2,r2,#0x81000000
while the similar LE code looked like this:
N:0x808FCE2C : MOV r2,r0
N:0x808FCE30 : SUB r2,r2,#0xc0, 8 ; #0xc0000000
Note the 'r0' register is the va that has to be translated into phys.
To avoid this situation, use an explicit cast to 'unsigned long',
which discards the upper part of the phys address and converts the
value to 32 bits. Also add a comment so that the cast will not be
removed in the future.
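A sketch of the fixed function together with the comment mentioned above
(details are an approximation):

  static inline unsigned long __phys_to_virt(phys_addr_t x)
  {
          unsigned long t;

          /*
           * 'unsigned long' cast discards the upper word of 'x' so that
           * the inline asm 'r' operand is handed a well-defined 32-bit
           * value on both LE and BE.
           */
          __pv_stub((unsigned long) x, t, "sub", __PV_BITS_31_24);
          return t;
  }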
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Olof Johansson reported:
In file included from arch/arm/include/asm/page.h:163:0,
from include/linux/mm_types.h:16,
from include/linux/sched.h:24,
from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_idmap':
arch/arm/include/asm/memory.h:300:6: error: 'arch_virt_to_idmap' undeclared (first use in this function)
caused by arch_virt_to_idmap being placed inside a different
preprocessor conditional from its user. Move it alongside its user.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current phys_to_virt patching mechanism works only for 32 bit
physical addresses and this patch extends the idea for 64bit physical
addresses.
The 64-bit v2p patching mechanism patches the higher 8 bits of the physical
address with a constant using a 'mov' instruction, and the lower 32 bits are
patched using 'add'. While this is correct, on those platforms where the lowmem
addressable physical memory spans across the 4GB boundary, a carry bit can be
produced as a result of the addition of the lower 32 bits. This has to be taken
into account and added into the upper bits. The patched __pv_offset and va are
added in the lower 32 bits, where __pv_offset can be in two's complement form
when PA_START < VA_START, and that can result in a false carry bit.
e.g.:
1) PA = 0x80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0xC0000000 (2's complement)
2) PA = 0x2 80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0x1 C0000000
So adding __pv_offset + VA should never result in a true overflow for (1).
So in order to differentiate a true carry, __pv_offset is extended
to 64 bits and the upper 32 bits will hold 0xffffffff if __pv_offset is
in two's complement form. For the same reason, 'mvn #0' is inserted instead
of 'mov' while patching. Since the mov, add and sub instructions are to be
patched with different constants inside the same stub, the rotation field
of the opcode is used to differentiate between them.
So the above examples for v2p translation become, for VA=0xC0000000:
1) PA[63:32] = 0xffffffff
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry
PA[63:0] = 0x0 80000000
2) PA[63:32] = 0x1
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry
PA[63:0] = 0x2 80000000
The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
part of the review of first and second versions of the subject patch.
There is no corresponding change on the phys_to_virt() side, because
computations on the upper 32-bits would be discarded anyway.
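A sketch of how the C side could select between the 32-bit and 64-bit
patch paths (the stub helper names follow the description above and may
differ in detail):

  static inline phys_addr_t __virt_to_phys(unsigned long x)
  {
          phys_addr_t t;

          if (sizeof(phys_addr_t) == 4) {
                  __pv_stub(x, t, "add", __PV_BITS_31_24);
          } else {
                  __pv_stub_mov_hi(t);            /* mov/mvn patches the upper 32 bits */
                  __pv_add_carry_stub(x, t);      /* adds + adc folds the carry in     */
          }
          return t;
  }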
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
On some PAE systems (e.g. TI Keystone), memory is above the
32-bit addressable limit, and the interconnect provides an
aliased view of parts of physical memory in the 32-bit addressable
space. This alias is strictly for boot time usage, and is not
otherwise usable because of coherency limitations. On such systems,
the idmap mechanism needs to take this aliased mapping into account.
This patch introduces virt_to_idmap() and an arch function pointer which
can be populated by a platform which needs it. Also populate the necessary
idmap spots with the now available virt_to_idmap(). An #ifdef approach was
avoided so as to remain compatible with multi-platform builds.
Most platforms won't touch it, and in that case virt_to_idmap()
falls back to the existing virt_to_phys() macro.
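A sketch of the hook as described: a function pointer a platform may fill
in, with the virt_to_phys() behaviour as the default (the exact prototypes
are an approximation):

  extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x);

  static inline phys_addr_t virt_to_idmap(unsigned long x)
  {
          if (arch_virt_to_idmap)
                  return arch_virt_to_idmap(x);   /* e.g. Keystone boot-time alias */
          return __virt_to_phys(x);               /* default: idmap == physical    */
  }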
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Fix the remaining types used when converting back and forth between
physical and virtual addresses.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
VALID_PAGE() was removed from the kernel a long time ago,
so fix the comment.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On LPAE machines, PHYS_OFFSET evaluates to a phys_addr_t and this type is
inherited by the PHYS_PFN_OFFSET definition as well. Consequently, the kernel
build emits warnings of the form:
init/main.c: In function 'start_kernel':
init/main.c:588:7: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'phys_addr_t' [-Wformat]
This patch fixes this warning by pinning down the PFN type to unsigned long.
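A sketch of the pinned-down definition:

  /* PFNs fit in 32 bits even when physical addresses do not */
  #define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))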
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to
38-bit physical addresses.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'fixes-non-critical' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull non-critical ARM SoC bug fixes from Arnd Bergmann:
"Bug fixes that did not make it into v3.8, mostly because they were not
considered important enough, and in some cases because bugs only show
up in combination with other patches destined for 3.9. This includes
a few larger patches for GPIO on the Marvell PXA platform and a lot of
Samsung specific bug fixes, as well as a series from Arnd to fix older
build warnings."
* tag 'fixes-non-critical' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (54 commits)
ARM: SPEAr13xx: Enable CONFIG_ARCH_HAS_CPUFREQ
ARM: imx: MACH_MX31ADS_WM1133_EV1 needs REGULATOR_WM8350
scripts/sortextable: silence script output
ARM: s3c: i2c: add platform_device forward declaration
ARM: mvebu: allow selecting mvebu without Armada XP
ARM: pick Versatile by default for !MMU
ARM: integrator: fix build with INTEGRATOR_AP off
ARM: integrator/versatile: fix NOMMU warnings
ARM: sa1100: don't warn about mach/ide.h
ARM: shmobile: fix defconfig warning on CONFIG_USB
ARM: w90x900: fix legacy assembly syntax
ARM: samsung: fix assembly syntax for new gas
ARM: disable virt_to_bus/virt_to_bus almost everywhere
ARM: dts: Correct pin configuration of SD 4 for exynos4x12-pinctrl
ARM: SAMSUNG: Silence empty switch warning in fimc-core.h
ARM: SAMSUNG: Silence empty switch warning in sdhci.h
ARM: msm: proc_comm_boot_wait should not be __init
arm: vt8500: Update MAINTAINERS entry for arch-vt8500
ARM: integrator: ensure ap_syscon_base is initialised when !CONFIG_MMU
ARM: S5PV210: Fix early uart output in fifo mode
...
Parts of the virtual memory layout (mainly the modules area) are
described using open-coded immediate values.
Use the SZ_ definitions from linux/sizes.h instead to make the code
clearer.
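For example (a sketch, not the full set of sites touched), the 16MB
modules area below the kernel can be written as:

  #include <linux/sizes.h>

  /* 16MB below PAGE_OFFSET for modules, instead of an open-coded 0x01000000 */
  #define MODULES_VADDR   (PAGE_OFFSET - SZ_16M)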
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We are getting a number of warnings about the use of the deprecated
bus_to_virt function in drivers using the ARM ISA DMA API:
drivers/parport/parport_pc.c: In function 'parport_pc_fifo_write_block_dma':
drivers/parport/parport_pc.c:622:3: warning: 'bus_to_virt' is deprecated
(declared at arch/arm/include/asm/memory.h:253) [-Wdeprecated-declarations]
This is only because that function gets used by the inline
set_dma_addr() helper. We know that any driver for the ISA DMA API
is correctly using the DMA addresses, so we can change this
to use the __bus_to_virt() function instead, which does not warn.
After this, there are no remaining drivers that are used on
any defconfigs on ARM using virt_to_bus or bus_to_virt, with
the exception of the OSS sound driver. That driver is only used
on RiscPC, NetWinder and Shark, so we can set ARCH_NO_VIRT_TO_BUS
on all other platforms and hide the deprecated functions, which
is far more effective than marking them as deprecated, in order
to avoid any new users of that code.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
We have received multiple reports of mmap failures when running with a
2:2 vm split. These manifest as either -EINVAL with a non page-aligned
address (ending 0xaaa) or a SEGV, depending on the application. The
issue is commonly observed in children of make, which appears to use
bottom-up mmap (assumedly because it changes the stack rlimit).
Further investigation reveals that this regression was triggered by
394ef6403a ("mm: use vm_unmapped_area() on arm architecture"), whereby
TASK_UNMAPPED_BASE is no longer page-aligned for bottom-up mmap, causing
get_unmapped_area to choke on misaligned addresses.
This patch fixes the problem by defining TASK_UNMAPPED_BASE in terms of
TASK_SIZE and explicitly aligns the result to 16M, matching the other
end of the heap.
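A sketch of the resulting definition; ALIGN keeps the base 16M-aligned for
any vm split:

  /* bottom-up mmap starts at a third of TASK_SIZE, rounded up to 16MB */
  #define TASK_UNMAPPED_BASE      ALIGN(TASK_SIZE / 3, SZ_16M)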
Acked-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Steve Capper <steve.capper@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Reported-by: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With ixp2xxx removed, there are no platforms that define arch_is_coherent,
so the last occurrences of arch_is_coherent can be removed. Any new
platform with coherent i/o should use coherent dma mapping functions.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
During the p2v changes, the PHYS_OFFSET #define moved into a
!__ASSEMBLY__ section. This causes a XIP build to fail with
arch/arm/kernel/head.o: In function 'stext':
arch/arm/kernel/head.S:146: undefined reference to 'PHYS_OFFSET'
Momentarily leave the #ifndef __ASSEMBLY__ section so we can
define PHYS_OFFSET for all compilation units.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
XIP_VIRT_ADDR is needed for XIP builds and currently only defined for
builds with CONFIG_MMU.
Also provide it for no-MMU builds to make it possible to build an XIP
kernel for MMU-less machines. As these lack an MMU it has to be an
identity mapping.
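A sketch of the no-MMU definition; with no MMU there is nothing to remap,
so the macro is simply the identity:

  #define XIP_VIRT_ADDR(physaddr) (physaddr)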
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Given that we want the default to be no <mach/memory.h>, and given that
there are now fewer cases where it is still provided than cases where it
is not, at this point it makes sense to invert the logic and just
identify the exception cases.
The word "need" instead of "have" was chosen to construct the config
symbol so as not to suggest that having a mach/memory.h file is actually
a feature that one should aim for.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular
machine class, the machine specific memory.h include file is no longer
used and can be removed. In that case the equivalent information can
be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT
or by specifying the physical memory address at kernel configuration time.
If/when all instances of mach/memory.h are removed then this symbol could
be removed.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This function can be called during boot to increase the size of the consistent
DMA region above its default value of 2MB. It must be called before the memory
allocator is initialised, i.e. before any core_initcall.
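A sketch of how a platform might use it, assuming the function is named
init_consistent_dma_size(); the machine callback below is purely
illustrative:

  /* called from a machine's init_early hook, before any core_initcall runs */
  static void __init mymachine_init_early(void)
  {
          init_consistent_dma_size(SZ_8M);        /* raise the 2MB default */
  }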
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
This code can be removed now that MSM targets no longer need the 16-bit
offsets for P2V.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get
rid of it from ARM by replacing it with arm_dma_zone_mask. Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than each platform providing its own function to adjust the
zone sizes, use the new ARM_DMA_ZONE_SIZE definition to perform this
adjustment. This ensures that the actual DMA zone size and the
ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS definitions are consistent with
each other, and moves this complexity out of the platform code.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The values of ISA_DMA_THRESHOLD and MAX_DMA_ADDRESS are related; one is
the physical/bus address, the other is the virtual address. Both need
to be kept in step, so rather than having platforms define both, allow
them to define a single macro which sets both of these macros
appropriately.
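A sketch of how the two derived values can be tied to a single
platform-provided ARM_DMA_ZONE_SIZE (exact expressions are assumptions):

  #ifdef ARM_DMA_ZONE_SIZE
  #define ISA_DMA_THRESHOLD       (PHYS_OFFSET + ARM_DMA_ZONE_SIZE - 1)
  #define MAX_DMA_ADDRESS         (PAGE_OFFSET + ARM_DMA_ZONE_SIZE)
  #endif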
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.
This patch ensures that the address conversion code in asm/memory.h casts
to the correct type when handling physical addresses. The internal v2p
macros only deal with lowmem addresses, so these do not need to be modified.
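A sketch of the kind of change involved in the public helpers (the bodies
are an approximation): the public interface carries phys_addr_t, while the
internal lowmem-only macros keep working on unsigned long:

  static inline phys_addr_t virt_to_phys(const volatile void *x)
  {
          return (phys_addr_t)__virt_to_phys((unsigned long)(x));
  }

  static inline void *phys_to_virt(phys_addr_t x)
  {
          return (void *)__phys_to_virt((unsigned long)(x));
  }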
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
MSM's memory is aligned to 2MB, which is more than we can do with our
existing method as we're limited to the upper 8 bits. Extend this to
16 bits by using two instructions, automatically selected when MSM is
enabled.
Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This idea came from Nicolas, Eric Miao produced an initial version,
which was then rewritten into this.
Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.
As many translations are of the form:
physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.
Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.
Add a module version magic string for this feature to prevent
incompatible modules being loaded.
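A sketch of the patch-site stub implied by the description: the add/sub is
emitted with a placeholder immediate, and its address is recorded in a
table that the early boot code walks to rewrite the constant (the names
and the exact placeholder encoding are assumptions):

  #define __PV_BITS_31_24 0x81000000      /* placeholder immediate to patch */

  #define __pv_stub(from, to, instr, type)                \
          __asm__("@ __pv_stub\n"                         \
          "1:     " instr "       %0, %1, %2\n"           \
          "       .pushsection .pv_table,\"a\"\n"         \
          "       .long   1b\n"                           \
          "       .popsection\n"                          \
          : "=r" (to)                                     \
          : "r" (from), "I" (type))

  static inline unsigned long __virt_to_phys(unsigned long x)
  {
          unsigned long t;
          __pv_stub(x, t, "add", __PV_BITS_31_24);        /* patched to the real offset */
          return t;
  }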
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>