arm64: uaccess: remove unnecessary earlyclobber
author Mark Rutland <mark.rutland@arm.com>
Tue, 14 Mar 2023 15:37:00 +0000 (15:37 +0000)
committer Will Deacon <will@kernel.org>
Tue, 28 Mar 2023 20:13:44 +0000 (21:13 +0100)
Currently the asm constraints for __get_mem_asm() mark the value
register as an earlyclobber operand. This means that the compiler can't
reuse the same register for both the address and value, even when the
value is not subsequently used.

There's no need for the value register to be marked as earlyclobber, as
it's only written to after the address register is consumed, even when
the access faults.

Remove the unnecessary earlyclobber.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230314153700.787701-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arch/arm64/include/asm/uaccess.h

index 4ee5aa7..deaf4f8 100644
@@ -237,7 +237,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
        "1:     " load "        " reg "1, [%2]\n"                       \
        "2:\n"                                                          \
        _ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1)          \
-       : "+r" (err), "=&r" (x)                                         \
+       : "+r" (err), "=r" (x)                                          \
        : "r" (addr))
 
 #define __raw_get_mem(ldr, x, ptr, err, type)                                  \