sparc: mark __arch_xchg() as __always_inline
author	Arnd Bergmann <arnd@arndb.de>
	Wed, 28 Jun 2023 09:49:18 +0000 (11:49 +0200)
committer	Kees Cook <keescook@chromium.org>
	Thu, 13 Jul 2023 16:54:32 +0000 (09:54 -0700)
An otherwise correct change to the atomic operations uncovered an
existing bug in the sparc __arch_xchg() function, which calls
__xchg_called_with_bad_pointer() when its arguments are unknown at
compile time:

ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!

This now happens because gcc determines that it is better not to inline
the function, so the call to the intentionally undefined
__xchg_called_with_bad_pointer() survives in the generated object code
and fails at link (or modpost) time. Avoid this by marking the function
as __always_inline, forcing the compiler to inline it and discard the
dead branch.
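
The trick can be reproduced in isolation. A minimal sketch of the
link-time-error pattern follows; the names and the use of the GCC
__sync_lock_test_and_set() builtin are illustrative stand-ins, not the
kernel's actual implementation:

  #define __always_inline inline __attribute__((__always_inline__))

  /* Deliberately declared but never defined: if a call to it survives
   * optimization, the build fails at link (or modpost) time. */
  extern void __example_called_with_bad_pointer(void);

  static __always_inline unsigned long
  example_xchg(unsigned long x, volatile void *ptr, int size)
  {
  	switch (size) {
  	case 4:
  		/* GCC's atomic-exchange builtin stands in for the real
  		 * per-architecture implementation. */
  		return __sync_lock_test_and_set(
  				(volatile unsigned int *)ptr, x);
  	default:
  		/* Dead code whenever size is a compile-time constant 4;
  		 * __always_inline guarantees the constant is visible. */
  		__example_called_with_bad_pointer();
  		return x;
  	}
  }

With plain inline, an -O0 build or an unfavorable inlining heuristic
leaves an out-of-line copy of example_xchg() in which the undefined
call remains, reproducing an error like the one above.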

Reported-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@roeck-us.net/
Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://lore.kernel.org/r/20230628094938.2318171-1-arnd@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
arch/sparc/include/asm/cmpxchg_32.h
arch/sparc/include/asm/cmpxchg_64.h

--- a/arch/sparc/include/asm/cmpxchg_32.h
+++ b/arch/sparc/include/asm/cmpxchg_32.h
index 7a1339533d1d7ec214cff5ec9c51f60803c1aac0..d0af82c240b7318fcde01a6fe613d6a611fe4540 100644
@@ -15,7 +15,7 @@
 unsigned long __xchg_u32(volatile u32 *m, u32 new);
 void __xchg_called_with_bad_pointer(void);
 
-static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
        switch (size) {
        case 4:
--- a/arch/sparc/include/asm/cmpxchg_64.h
+++ b/arch/sparc/include/asm/cmpxchg_64.h
index 66cd61dde9ec1f0284a17037ec178fa5157a5411..3de25262c411803106a6f445e1a246ff6aee6c89 100644
@@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val)
        return (load32 & mask) >> bit_shift;
 }
 
-static inline unsigned long
+static __always_inline unsigned long
 __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
        switch (size) {