From: Linus Torvalds
Date: Fri, 14 Aug 2020 21:17:51 +0000 (-0700)
Subject: Merge tag 'timers-core-2020-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git...
X-Git-Tag: v5.15~3090
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=b6b178e38f40f34842b719a8786d346d4cfec5dc;hp=-c;p=platform%2Fkernel%2Flinux-starfive.git

Merge tag 'timers-core-2020-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull more timer updates from Thomas Gleixner:
 "A set of posix CPU timer changes which allows deferring the heavy
  work of posix CPU timers into task work context. The tick interrupt
  is reduced to a quick check which queues the work that does the
  heavy lifting before returning to user space or going back to guest
  mode. Moving this out defers the signal delivery slightly, but posix
  CPU timers are inaccurate by nature as they depend on the tick, so
  there is no real damage. The relevant test cases all passed.

  This lifts the last offender for RT out of the hard interrupt
  context tick handler, but it also has the general benefit that the
  actual heavy work is accounted to the task/process and not to the
  tick interrupt itself.

  Further optimizations are possible to break up the long sighand lock
  hold and interrupt disabled (on !RT kernels) times when a massive
  amount of posix CPU timers (which are unprivileged) is armed for a
  task/process.

  This is currently only enabled for x86 because the architecture has
  to ensure that task work is handled in KVM before entering a guest,
  which was just established for x86 with the new common entry/exit
  code which got merged post 5.8 and is not the case for other KVM
  architectures"

* tag 'timers-core-2020-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86: Select POSIX_CPU_TIMERS_TASK_WORK
  posix-cpu-timers: Provide mechanisms to defer timer handling to task_work
  posix-cpu-timers: Split run_posix_cpu_timers()
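The shape of the deferral is easiest to see in a condensed sketch. The
names run_posix_cpu_timers(), fastpath_timer_check(),
handle_posix_cpu_timers() and posix_cputimers_work are taken from the
series above; the bodies, flag handling and locking here are stripped
down for illustration and are not the verbatim kernel code:

    /* Condensed sketch of the task_work deferral, not the actual patch. */
    #include <linux/sched.h>
    #include <linux/task_work.h>

    struct posix_cputimers_work {
            struct callback_head    work;       /* queued via task_work_add() */
            unsigned int            scheduled;  /* deferral already pending?  */
    };

    /* Task-context callback: runs on the way out to user/guest mode. */
    static void posix_cpu_timers_work(struct callback_head *work)
    {
            current->posix_cputimers_work.scheduled = 0;
            handle_posix_cpu_timers(current);   /* the heavy lifting */
    }

    /* Tick interrupt: only a cheap check plus, at most, a queue op. */
    void run_posix_cpu_timers(void)
    {
            struct task_struct *tsk = current;

            /* Work already queued, or nothing expired? Then done. */
            if (tsk->posix_cputimers_work.scheduled ||
                !fastpath_timer_check(tsk))
                    return;

            tsk->posix_cputimers_work.scheduled = 1;
            task_work_add(tsk, &tsk->posix_cputimers_work.work, TWA_RESUME);
    }

The callback is hooked up once per task, roughly via
init_task_work(&tsk->posix_cputimers_work.work, posix_cpu_timers_work).
With CONFIG_POSIX_CPU_TIMERS_TASK_WORK=n the expiry keeps running
directly from the tick, which is why the Kconfig switch in the diff
below is an opt-in that each architecture has to select.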
---

b6b178e38f40f34842b719a8786d346d4cfec5dc
diff --combined arch/x86/Kconfig
index 9a28495,a82e715..7101ac6
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@@ -209,6 -209,7 +209,7 @@@ config X86
  	select HAVE_PERF_REGS
  	select HAVE_PERF_USER_STACK_DUMP
  	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+ 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
  	select HAVE_REGS_AND_STACK_ACCESS_API
  	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
  	select HAVE_FUNCTION_ARG_ACCESS_API
@@@ -803,7 -804,6 +804,7 @@@ config KVM_GUEST
  	depends on PARAVIRT
  	select PARAVIRT_CLOCK
  	select ARCH_CPUIDLE_HALTPOLL
 +	select X86_HV_CALLBACK_VECTOR
  	default y
  	help
  	  This option enables various optimizations for running under the KVM
diff --combined include/linux/sched.h
index 53ddc02,e9942ce..93ecd930
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@@ -32,7 -32,6 +32,7 @@@
  #include <linux/task_io_accounting.h>
  #include <linux/posix-timers.h>
  #include <linux/rseq.h>
 +#include <linux/seqlock.h>
  #include <linux/kcsan.h>
  
  /* task_struct member predeclarations (sorted alphabetically): */
@@@ -890,6 -889,10 +890,10 @@@ struct task_struct
  	/* Empty if CONFIG_POSIX_CPUTIMERS=n */
  	struct posix_cputimers		posix_cputimers;
  
+ #ifdef CONFIG_POSIX_CPU_TIMERS_TASK_WORK
+ 	struct posix_cputimers_work	posix_cputimers_work;
+ #endif
+ 
  	/* Process credentials: */
  
  	/* Tracer's credentials at attach: */
@@@ -1050,7 -1053,7 +1054,7 @@@
  	/* Protected by ->alloc_lock: */
  	nodemask_t			mems_allowed;
  	/* Seqence number to catch updates: */
 -	seqcount_t			mems_allowed_seq;
 +	seqcount_spinlock_t		mems_allowed_seq;
  	int				cpuset_mem_spread_rotor;
  	int				cpuset_slab_spread_rotor;
  #endif
@@@ -1649,9 -1652,6 +1653,9 @@@ extern int idle_cpu(int cpu
  extern int available_idle_cpu(int cpu);
  extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *);
  extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *);
 +extern void sched_set_fifo(struct task_struct *p);
 +extern void sched_set_fifo_low(struct task_struct *p);
 +extern void sched_set_normal(struct task_struct *p, int nice);
  extern int sched_setattr(struct task_struct *, const struct sched_attr *);
  extern int sched_setattr_nocheck(struct task_struct *, const struct sched_attr *);
  extern struct task_struct *idle_task(int cpu);
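The three sched_set_*() helpers visible in the last hunk come in from
the scheduler side of this merge window; they give in-kernel users
sane default SCHED_FIFO priorities instead of hand-picked magic
numbers passed to sched_setscheduler(). A minimal usage sketch for a
hypothetical module with a worker kthread (only the sched_set_fifo()
and sched_set_normal() calls are the real API; everything named
example_* is made up):

    #include <linux/err.h>
    #include <linux/init.h>
    #include <linux/kthread.h>
    #include <linux/module.h>
    #include <linux/sched.h>

    static struct task_struct *example_tsk;

    /* Hypothetical worker: sleeps a second per iteration. */
    static int example_thread(void *unused)
    {
            while (!kthread_should_stop())
                    schedule_timeout_interruptible(HZ);
            return 0;
    }

    static int __init example_init(void)
    {
            example_tsk = kthread_run(example_thread, NULL, "example-rt");
            if (IS_ERR(example_tsk))
                    return PTR_ERR(example_tsk);

            /* Default in-kernel RT priority, no magic numbers. */
            sched_set_fifo(example_tsk);
            return 0;
    }

    static void __exit example_exit(void)
    {
            /* Back to SCHED_NORMAL (nice 0) before stopping the thread. */
            sched_set_normal(example_tsk, 0);
            kthread_stop(example_tsk);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

sched_set_fifo_low() is the same idea at the lowest RT priority, for
threads that must beat SCHED_NORMAL tasks but nothing else.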