mm/cma: Cleanup highmem check

6b101e2a3c ("mm/CMA: fix boot regression due to physical address of
high_memory") added checks to use __pa_nodebug on x86 since
CONFIG_DEBUG_VIRTUAL complains about high_memory not being linearly
mapped. arm64 is now getting support for CONFIG_DEBUG_VIRTUAL as well.
Rather than add an explosion of arches to the #ifdef, switch to an
alternate method to calculate the physical start of highmem using
the page before highmem starts. This avoids the need for the #ifdef and
extra __pa_nodebug calls.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Laura Abbott 2017-01-10 13:35:41 -08:00 committed by Will Deacon
parent fa5b6ec9e5
commit 2dece445b6
1 changed file with 5 additions and 10 deletions


@@ -235,18 +235,13 @@ int __init cma_declare_contiguous(phys_addr_t base,
 	phys_addr_t highmem_start;
 	int ret = 0;
 
-#ifdef CONFIG_X86
 	/*
-	 * high_memory isn't direct mapped memory so retrieving its physical
-	 * address isn't appropriate.  But it would be useful to check the
-	 * physical address of the highmem boundary so it's justifiable to get
-	 * the physical address from it.  On x86 there is a validation check for
-	 * this case, so the following workaround is needed to avoid it.
+	 * We can't use __pa(high_memory) directly, since high_memory
+	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+	 * complain. Find the boundary by adding one to the last valid
+	 * address.
 	 */
-	highmem_start = __pa_nodebug(high_memory);
-#else
-	highmem_start = __pa(high_memory);
-#endif
 
 	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
 		__func__, &size, &base, &limit, &alignment);
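
For reference, a minimal user-space sketch of the arithmetic behind the new
highmem_start line. pa(), PAGE_OFFSET, PHYS_OFFSET and the address values here
are made-up stand-ins for illustration only, not the kernel's actual
definitions.

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_OFFSET 0xffff000000000000ULL /* hypothetical linear-map base VA */
	#define PHYS_OFFSET 0x0000000040000000ULL /* hypothetical start of DRAM */

	/* Stand-in for the kernel's __pa(): only meaningful for linear-map VAs. */
	static uint64_t pa(uint64_t va)
	{
		return va - PAGE_OFFSET + PHYS_OFFSET;
	}

	int main(void)
	{
		/*
		 * high_memory points one byte past the last directly mapped byte,
		 * so it is not itself a valid linear-map address and a checked
		 * __pa() (CONFIG_DEBUG_VIRTUAL) would warn on it.
		 */
		uint64_t high_memory = PAGE_OFFSET + 0x80000000ULL;

		/* Translate the last valid byte instead, then add the byte back. */
		uint64_t highmem_start = pa(high_memory - 1) + 1;

		printf("highmem_start = 0x%llx\n", (unsigned long long)highmem_start);
		return 0;
	}

Because the translation is applied only to high_memory - 1 (the last address
that is actually in the linear map), the same expression works on any
architecture without an arch-specific #ifdef or __pa_nodebug.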