arm64: lib: __arch_clear_user(): fold fixups into body

Like other functions, __arch_clear_user() places its exception fixups in
the `.fixup` section without any clear association with
__arch_clear_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_clear_user() itself will be missing from the backtrace.
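
As an illustration, here is a minimal sketch of the old layout;
example_func and its fixup are hypothetical stand-ins, not the actual
clear_user.S code:

SYM_FUNC_START(example_func)
USER(9f, ldtr	x1, [x0])	// may fault; exception table points at 9f
	mov	x0, #0
	ret
SYM_FUNC_END(example_func)	// the function's symbol ends here

	.section .fixup, "ax"	// the fixup lands far away, and gets
	.align	2		// symbolized against whatever symbol
9:	mov	x0, #-1		// happens to precede the .fixup section
	ret
	.previous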

This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, and this is not accurately captured in the
stacktrace.

To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_clear_user().
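
For comparison, the same hypothetical function with its fixup folded
into the body, after the usual fast-path return:

SYM_FUNC_START(example_func)
USER(9f, ldtr	x1, [x0])	// may fault; exception table points at 9f
	mov	x0, #0
	ret			// usual fast-path return
9:	mov	x0, #-1		// fixup now covered by example_func's symbol
	ret
SYM_FUNC_END(example_func)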

Inline assembly will be dealt with in subsequent patches.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -45,13 +45,11 @@ USER(9f, sttrh	wzr, [x0])
 USER(7f, sttrb	wzr, [x2, #-1])
 5:	mov	x0, #0
 	ret
-SYM_FUNC_END(__arch_clear_user)
-EXPORT_SYMBOL(__arch_clear_user)
 
-	.section .fixup,"ax"
-	.align	2
+	// Exception fixups
 7:	sub	x0, x2, #5	// Adjust for faulting on the final byte...
 8:	add	x0, x0, #4	// ...or the second word of the 4-7 byte case
 9:	sub	x0, x2, x0
 	ret
-	.previous
+SYM_FUNC_END(__arch_clear_user)
+EXPORT_SYMBOL(__arch_clear_user)