perf/x86/amd: fix potential integer overflow on shift of an int
author Colin Ian King <colin.i.king@gmail.com>
Fri, 2 Dec 2022 13:51:49 +0000 (13:51 +0000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 1 Feb 2023 07:34:51 +0000 (08:34 +0100)
commit 08245672cdc6505550d1a5020603b0a8d4a6dcc7 upstream.

The left shift of the 32-bit int constant 1 is evaluated using 32-bit
arithmetic, and the result is only then widened and OR-ed into the 64-bit
even_ctr_mask. In the case where i is 32 or more, the shift overflows
(undefined behaviour for a signed int). Avoid this by shifting with the
BIT_ULL() macro instead, which performs the shift in 64-bit arithmetic.
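
For illustration, a minimal userspace sketch of the bug class; this is
not kernel code, and BIT_ULL() is reproduced here in a simplified form
of its definition in include/linux/bits.h:

  #include <stdint.h>
  #include <stdio.h>

  /* Simplified from include/linux/bits.h: the shifted operand is a
   * 64-bit constant, so the whole expression is evaluated in 64-bit
   * arithmetic. */
  #define BIT_ULL(nr) (1ULL << (nr))

  int main(void)
  {
          uint64_t mask = 0;
          int i = 32;

          /* Buggy form: the constant 1 is a 32-bit int, so "1 << i"
           * is evaluated in 32-bit arithmetic; for i >= 32 this is
           * undefined behaviour, before the result is ever widened:
           *
           *     mask |= 1 << i;
           */

          /* Fixed form: the shift happens on an unsigned 64-bit value. */
          mask |= BIT_ULL(i);

          printf("mask = 0x%016llx\n", (unsigned long long)mask);
          return 0;
  }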

Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/20221202135149.1797974-1-colin.i.king@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/events/amd/core.c

index d6f3703..4386b10 100644
@@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void)
                 * numbered counter following it.
                 */
                for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-                       even_ctr_mask |= 1 << i;
+                       even_ctr_mask |= BIT_ULL(i);
 
                pair_constraint = (struct event_constraint)
                                    __EVENT_CONSTRAINT(0, even_ctr_mask, 0,