tick/sched: Preserve number of idle sleeps across CPU hotplug events
author Tim Chen <tim.c.chen@linux.intel.com>
Mon, 22 Jan 2024 23:35:34 +0000 (15:35 -0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 1 Feb 2024 00:19:13 +0000 (16:19 -0800)
commit 9a574ea9069be30b835a3da772c039993c43369b upstream.

Commit 71fee48f ("tick-sched: Fix idle and iowait sleeptime accounting vs
CPU hotplug") preserved total idle sleep time and iowait sleeptime across
CPU hotplug events.

The same reasoning applies to the number of idle calls and idle sleeps, which
are needed to compute the proper average sleep time per idle invocation.

Preserve those fields too.
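
For context, a rough sketch (not part of this patch) of how such an average
could be derived from these fields; the helper name and use of div64_ul() are
illustrative only:

	/*
	 * Illustrative only: average idle sleep time per idle invocation.
	 * Before this fix, idle_sleeptime was preserved across a hotplug
	 * cycle while idle_sleeps was reset to zero, which skews any such
	 * average computed after the CPU comes back online.
	 */
	static u64 avg_idle_sleep_ns(struct tick_sched *ts)
	{
		if (!ts->idle_sleeps)
			return 0;
		return div64_ul(ktime_to_ns(ts->idle_sleeptime), ts->idle_sleeps);
	}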

Fixes: 71fee48f ("tick-sched: Fix idle and iowait sleeptime accounting vs CPU hotplug")
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20240122233534.3094238-1-tim.c.chen@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/time/tick-sched.c

index 5cbd0ce..55cbc49 100644
@@ -1548,6 +1548,7 @@ void tick_cancel_sched_timer(int cpu)
 {
        struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
        ktime_t idle_sleeptime, iowait_sleeptime;
+       unsigned long idle_calls, idle_sleeps;
 
 # ifdef CONFIG_HIGH_RES_TIMERS
        if (ts->sched_timer.base)
@@ -1556,9 +1557,13 @@ void tick_cancel_sched_timer(int cpu)
 
        idle_sleeptime = ts->idle_sleeptime;
        iowait_sleeptime = ts->iowait_sleeptime;
+       idle_calls = ts->idle_calls;
+       idle_sleeps = ts->idle_sleeps;
        memset(ts, 0, sizeof(*ts));
        ts->idle_sleeptime = idle_sleeptime;
        ts->iowait_sleeptime = iowait_sleeptime;
+       ts->idle_calls = idle_calls;
+       ts->idle_sleeps = idle_sleeps;
 }
 #endif