platform/kernel/linux-starfive.git
22 months ago  tracing: Remove trace_hardirqs_{on,off}_caller()
Peter Zijlstra [Thu, 12 Jan 2023 19:43:47 +0000 (20:43 +0100)]
tracing: Remove trace_hardirqs_{on,off}_caller()

Per commit 56e62a737028 ("s390: convert to generic entry") the last
and only callers of trace_hardirqs_{on,off}_caller() went away; clean
them up.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.355283994@infradead.org
22 months ago  cpuidle, ACPI: Make noinstr clean
Peter Zijlstra [Thu, 12 Jan 2023 19:43:46 +0000 (20:43 +0100)]
cpuidle, ACPI: Make noinstr clean

objtool found cases where ACPI methods called out into instrumentation code:

  vmlinux.o: warning: objtool: io_idle+0xc: call to __inb.isra.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_idle_enter+0xfe: call to num_online_cpus() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_idle_enter+0x115: call to acpi_idle_fallback_to_c1.isra.0() leaves .noinstr.text section

Fix this by marking the IO in/out helpers, acpi_idle_fallback_to_c1()
and num_online_cpus() as __always_inline.
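
A minimal sketch of the pattern (hypothetical names, not the actual
patch): a helper called from .noinstr.text is forced inline, so the
compiler can never emit an out-of-line, instrumentable call for it:

  /* trivial helper used from idle code; must never be emitted out of line */
  static __always_inline bool idle_prefer_c1(unsigned int nr_cpus)
  {
          return nr_cpus == 1;
  }

  static noinstr void idle_enter_sketch(unsigned int nr_cpus)
  {
          if (idle_prefer_c1(nr_cpus))    /* inlined: no call leaves .noinstr.text */
                  return;
          /* ... enter a deeper idle state ... */
  }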

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.294846301@infradead.org
22 months ago  cpuidle, nospec: Make mds_idle_clear_cpu_buffers() noinstr clean
Peter Zijlstra [Thu, 12 Jan 2023 19:43:45 +0000 (20:43 +0100)]
cpuidle, nospec: Make mds_idle_clear_cpu_buffers() noinstr clean

objtool found that the mds_idle_clear_cpu_buffers() method got
uninlined by the compiler where it called out into instrumentation:

  vmlinux.o: warning: objtool: mwait_idle+0x47: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xa2: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle+0x91: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_s2idle+0x8c: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0xaa: call to mds_idle_clear_cpu_buffers() leaves .noinstr.text section

Solve this by marking mds_idle_clear_cpu_buffers() as __always_inline.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.233779815@infradead.org
22 months ago  cpuidle, xenpv: Make more PARAVIRT_XXL noinstr clean
Peter Zijlstra [Thu, 12 Jan 2023 19:43:44 +0000 (20:43 +0100)]
cpuidle, xenpv: Make more PARAVIRT_XXL noinstr clean

objtool found a few cases where this code called out into instrumented
code:

  vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
  vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
  vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section

Solve this by:

 - marking arch_safe_halt(), wbinvd(), native_wbinvd() and
   HYPERVISOR_sched_op() as __always_inline;

 - explicitly uninlining xen_safe_halt() and pv_native_wbinvd() [they were
   already uninlined by the compiler on use as function pointers] and
   annotating them as 'noinstr';

 - annotating pv_native_safe_halt() as 'noinstr'.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.171918174@infradead.org
22 months ago  cpuidle, tdx: Make TDX code noinstr clean
Peter Zijlstra [Thu, 12 Jan 2023 19:43:43 +0000 (20:43 +0100)]
cpuidle, tdx: Make TDX code noinstr clean

objtool found a few cases where this code called out into instrumented
code:

  vmlinux.o: warning: objtool: __halt+0x2c: call to hcall_func.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: __halt+0x3f: call to __tdx_hypercall() leaves .noinstr.text section
  vmlinux.o: warning: objtool: __tdx_hypercall+0x66: call to __tdx_hypercall_failed() leaves .noinstr.text section

Fix it by:

  - moving TDX tdcall assembly methods into .noinstr.text (they are already noinstr-clean)
  - marking __tdx_hypercall_failed() as 'noinstr'
  - annotating hcall_func() as __always_inline

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.111485720@infradead.org
22 months ago  cpuidle, mwait: Make the mwait code noinstr clean
Peter Zijlstra [Thu, 12 Jan 2023 19:43:42 +0000 (20:43 +0100)]
cpuidle, mwait: Make the mwait code noinstr clean

objtool found a few cases where this code called out into instrumented
code:

  vmlinux.o: warning: objtool: intel_idle_s2idle+0x6e: call to __monitor.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0x8c: call to __monitor.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle+0x73: call to __monitor.constprop.0() leaves .noinstr.text section

  vmlinux.o: warning: objtool: mwait_idle+0x88: call to clflush() leaves .noinstr.text section

Fix it by marking the affected methods as __always_inline.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195541.050542952@infradead.org
22 months ago  cpuidle, sched: Remove instrumentation from TIF_{POLLING_NRFLAG,NEED_RESCHED}
Peter Zijlstra [Thu, 12 Jan 2023 19:43:41 +0000 (20:43 +0100)]
cpuidle, sched: Remove instrumentation from TIF_{POLLING_NRFLAG,NEED_RESCHED}

objtool pointed out that various idle-TIF management methods
have instrumentation:

  vmlinux.o: warning: objtool: mwait_idle+0x5: call to current_set_polling_and_test() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xc5: call to current_set_polling_and_test() leaves .noinstr.text section
  vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle+0xbc: call to current_set_polling_and_test() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0xea: call to current_set_polling_and_test() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_s2idle+0xb4: call to current_set_polling_and_test() leaves .noinstr.text section

  vmlinux.o: warning: objtool: intel_idle+0xa6: call to current_clr_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0xbf: call to current_clr_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_s2idle+0xa1: call to current_clr_polling() leaves .noinstr.text section

  vmlinux.o: warning: objtool: mwait_idle+0xe: call to __current_set_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_processor_ffh_cstate_enter+0xc5: call to __current_set_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle+0xbc: call to __current_set_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0xea: call to __current_set_polling() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_s2idle+0xb4: call to __current_set_polling() leaves .noinstr.text section

  vmlinux.o: warning: objtool: cpu_idle_poll.isra.0+0x73: call to test_ti_thread_flag() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_s2idle+0x73: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_irq+0x91: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle+0x78: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: acpi_safe_halt+0xf: call to test_ti_thread_flag.constprop.0() leaves .noinstr.text section

Remove the instrumentation, because these methods are used by low-level
cpuidle code transitioning between idle states, which must not be
instrumented.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.988741683@infradead.org
22 months ago  time/tick-broadcast: Remove RCU_NONIDLE() usage
Peter Zijlstra [Thu, 12 Jan 2023 19:43:40 +0000 (20:43 +0100)]
time/tick-broadcast: Remove RCU_NONIDLE() usage

There are no callers left that have already disabled RCU.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.927904612@infradead.org
22 months ago  printk: Remove trace_.*_rcuidle() usage
Peter Zijlstra [Thu, 12 Jan 2023 19:43:39 +0000 (20:43 +0100)]
printk: Remove trace_.*_rcuidle() usage

The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
tracepoint"), was printk usage from the cpuidle path where RCU was
already disabled.

Per the patches earlier in this series, this is no longer the case.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Petr Mladek <pmladek@suse.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.865735001@infradead.org
22 months ago  arm64, smp: Remove trace_.*_rcuidle() usage
Peter Zijlstra [Thu, 12 Jan 2023 19:43:38 +0000 (20:43 +0100)]
arm64, smp: Remove trace_.*_rcuidle() usage

Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
normal interrupts") this function is called in regular IRQ context.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.804410487@infradead.org
22 months ago  arm, smp: Remove trace_.*_rcuidle() usage
Peter Zijlstra [Thu, 12 Jan 2023 19:43:37 +0000 (20:43 +0100)]
arm, smp: Remove trace_.*_rcuidle() usage

None of these functions should ever be run with RCU disabled anymore.

Specifically, do_handle_IPI() is only called from handle_IPI() which
explicitly does irq_enter()/irq_exit() which ensures RCU is watching.

The problem with smp_cross_call() was, per commit description:

   7c64cc0531fa ("arm: Use _rcuidle for smp_cross_call() tracepoints")

... that cpuidle_enter_state_coupled() already had RCU disabled, but that's
long been fixed by commit:

  1098582a0f6c ("sched,idle,rcu: Push rcu_idle deeper into the idle path")

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.743432118@infradead.org
22 months ago  x86/tdx: Remove TDX_HCALL_ISSUE_STI
Peter Zijlstra [Thu, 12 Jan 2023 19:43:36 +0000 (20:43 +0100)]
x86/tdx: Remove TDX_HCALL_ISSUE_STI

Now that arch_cpu_idle() is expected to return with IRQs disabled,
avoid the useless STI/CLI dance.

Per the specs this is supposed to work, but nobody has relied on this
behaviour yet, so broken implementations are possible.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.682137572@infradead.org
22 months ago  arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled
Peter Zijlstra [Thu, 12 Jan 2023 19:43:35 +0000 (20:43 +0100)]
arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled

Current arch_cpu_idle() is called with IRQs disabled, but will return
with IRQs enabled.

However, the very first thing the generic code does after calling
arch_cpu_idle() is raw_local_irq_disable(). This means that
architectures that can idle with IRQs disabled end up doing a
pointless 'enable-disable' dance.

Therefore, push this IRQ disabling into the idle function, meaning
that those architectures can avoid the pointless IRQ state flipping.
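
A minimal sketch of the new contract (with a hypothetical
my_arch_wait_for_interrupt() helper, not any particular port's code):

  void arch_cpu_idle(void)
  {
          /* low-power wait; assume the wakeup path re-enables IRQs on this arch */
          my_arch_wait_for_interrupt();

          /* new rule: always hand control back with IRQs disabled */
          raw_local_irq_disable();
  }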

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org
22 months ago  cpuidle, intel_idle: Fix CPUIDLE_FLAG_IBRS
Peter Zijlstra [Thu, 12 Jan 2023 19:43:34 +0000 (20:43 +0100)]
cpuidle, intel_idle: Fix CPUIDLE_FLAG_IBRS

objtool to the rescue:

  vmlinux.o: warning: objtool: intel_idle_ibrs+0x17: call to spec_ctrl_current() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_ibrs+0x27: call to wrmsrl.constprop.0() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.556912863@infradead.org
22 months ago  cpuidle, intel_idle: Fix CPUIDLE_FLAG_INIT_XSTATE
Peter Zijlstra [Thu, 12 Jan 2023 19:43:33 +0000 (20:43 +0100)]
cpuidle, intel_idle: Fix CPUIDLE_FLAG_INIT_XSTATE

Fix instrumentation bugs objtool found:

  vmlinux.o: warning: objtool: intel_idle_s2idle+0xd5: call to fpu_idle_fpregs() leaves .noinstr.text section
  vmlinux.o: warning: objtool: intel_idle_xstate+0x11: call to fpu_idle_fpregs() leaves .noinstr.text section
  vmlinux.o: warning: objtool: fpu_idle_fpregs+0x9: call to xfeatures_in_use() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.494977795@infradead.org
22 months ago  cpuidle, intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE *again*
Peter Zijlstra [Thu, 12 Jan 2023 19:43:32 +0000 (20:43 +0100)]
cpuidle, intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE *again*

So objtool found this bug:

  vmlinux.o: warning: objtool: intel_idle_irq+0x10c: call to trace_hardirqs_off() leaves .noinstr.text section

As per commit 32d4fd5751ea ("cpuidle,intel_idle: Fix CPUIDLE_FLAG_IRQ_ENABLE"):

  "must not have tracing in idle functions"

Clearly people can't read, and just tinker along until the splat
disappears. This straight-up reverts commit d295ad34f236 ("intel_idle:
Fix false positive RCU splats due to incorrect hardirqs state").

It doesn't re-introduce the problem because preceding patches fixed it
properly.

Fixes: d295ad34f236 ("intel_idle: Fix false positive RCU splats due to incorrect hardirqs state")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.434302128@infradead.org
22 months ago  objtool/idle: Validate __cpuidle code as noinstr
Peter Zijlstra [Thu, 12 Jan 2023 19:43:31 +0000 (20:43 +0100)]
objtool/idle: Validate __cpuidle code as noinstr

Idle code is very much like entry code in that RCU isn't available. As
such, add a little validation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.373461409@infradead.org
22 months ago  cpuidle: Annotate poll_idle()
Peter Zijlstra [Thu, 12 Jan 2023 19:43:30 +0000 (20:43 +0100)]
cpuidle: Annotate poll_idle()

The __cpuidle functions will become a noinstr class, as such they need
explicit annotations.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.312601331@infradead.org
22 months ago  acpi_idle: Remove tracing
Peter Zijlstra [Thu, 12 Jan 2023 19:43:29 +0000 (20:43 +0100)]
acpi_idle: Remove tracing

All the idle routines are called with RCU disabled, as such there must
not be any tracing inside.

While at it, clean up the io-port idle code.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.251666856@infradead.org
22 months ago  cpuidle, cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
Peter Zijlstra [Thu, 12 Jan 2023 19:43:28 +0000 (20:43 +0100)]
cpuidle, cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()

All callers should still have RCU enabled.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.190860672@infradead.org
22 months ago  cpuidle: Fix ct_idle_*() usage
Peter Zijlstra [Thu, 12 Jan 2023 19:43:27 +0000 (20:43 +0100)]
cpuidle: Fix ct_idle_*() usage

The whole disable-RCU, enable-IRQs dance is very intricate, since
changing the IRQ state is traced and tracing depends on RCU.

Add two helpers for the cpuidle case that mirror the entry code:

  ct_cpuidle_enter()
  ct_cpuidle_exit()

And fix all the cases where the enter/exit dance was buggy.
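
A rough sketch of the intended usage in a cpuidle driver's ->enter()
callback (hypothetical driver, assuming it is invoked with IRQs disabled):

  static int my_idle_enter(struct cpuidle_device *dev,
                           struct cpuidle_driver *drv, int index)
  {
          ct_cpuidle_enter();     /* RCU stops watching; no tracing past this point */
          my_hw_idle();           /* hypothetical low-level idle entry */
          ct_cpuidle_exit();      /* RCU is watching again */

          return index;
  }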

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.130014793@infradead.org
22 months ago  cpuidle, dt: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:26 +0000 (20:43 +0100)]
cpuidle, dt: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is suboptimal.

Notably: this converts all dt_init_idle_driver() and
__CPU_PM_CPU_IDLE_ENTER() users, because they are inextricably intertwined.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.068981667@infradead.org
22 months ago  cpuidle, OMAP4: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:25 +0000 (20:43 +0100)]
cpuidle, OMAP4: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again, some *four* times, before going idle is suboptimal.

Notably three times explicitly using RCU_NONIDLE() and once implicitly
through cpu_pm_*().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195540.007918454@infradead.org
22 months ago  cpuidle, armada: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:24 +0000 (20:43 +0100)]
cpuidle, armada: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is suboptimal.

Notably the cpu_pm_*() calls implicitly re-enable RCU for a bit.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.946630819@infradead.org
22 months ago  cpuidle, OMAP3: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:23 +0000 (20:43 +0100)]
cpuidle, OMAP3: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is suboptimal.

Notably the cpu_pm_*() calls implicitly re-enable RCU for a bit.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.883561913@infradead.org
22 months ago  cpuidle, ARM/imx6: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:22 +0000 (20:43 +0100)]
cpuidle, ARM/imx6: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is suboptimal.

Notably both cpu_pm_enter() and cpu_cluster_pm_enter() implicitly
re-enable RCU.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.821714572@infradead.org
22 months ago  cpuidle, psci: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:21 +0000 (20:43 +0100)]
cpuidle, psci: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is suboptimal.

Notably once implicitly through the cpu_pm_*() calls and once
explicitly doing ct_irq_*_irqon().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Guo Ren <guoren@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.760296658@infradead.org
22 months ago  cpuidle, tegra: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:20 +0000 (20:43 +0100)]
cpuidle, tegra: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is suboptimal.

Notably once implicitly through the cpu_pm_*() calls and once
explicitly doing RCU_NONIDLE().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.699546331@infradead.org
22 months ago  cpuidle, riscv: Push RCU-idle into driver
Peter Zijlstra [Thu, 12 Jan 2023 19:43:19 +0000 (20:43 +0100)]
cpuidle, riscv: Push RCU-idle into driver

Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is suboptimal.

That is, once implicitly through the cpu_pm_*() calls and once
explicitly doing ct_irq_*_irqon().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.637185846@infradead.org
22 months ago  cpuidle: Move IRQ state validation
Peter Zijlstra [Thu, 12 Jan 2023 19:43:18 +0000 (20:43 +0100)]
cpuidle: Move IRQ state validation

Make cpuidle_enter_state() consistent with the s2idle variant and
verify ->enter() always returns with interrupts disabled.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195539.576412812@infradead.org
22 months ago  cpuidle/poll: Ensure IRQs stay disabled after cpuidle_state::enter() calls
Peter Zijlstra [Thu, 12 Jan 2023 19:43:17 +0000 (20:43 +0100)]
cpuidle/poll: Ensure IRQs stay disabled after cpuidle_state::enter() calls

Make cpuidle_state::enter() methods IRQ state invariant on exit.

Additionally make sure to use raw_local_irq_*() methods since this
cpuidle callback will be called with RCU already disabled.
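
The resulting pattern looks roughly like this (a sketch, not the exact
poll_idle() code):

  raw_local_irq_enable();         /* raw variant: RCU is not watching here */
  while (!need_resched())
          cpu_relax();
  raw_local_irq_disable();        /* exit with the same IRQ state we entered with */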

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195539.515253662@infradead.org
22 months ago  x86/idle: Replace 'x86_idle' function pointer with a static_call
Peter Zijlstra [Thu, 12 Jan 2023 19:43:16 +0000 (20:43 +0100)]
x86/idle: Replace 'x86_idle' function pointer with a static_call

Typical boot time setup; no need to suffer an indirect call for that.
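
A sketch of the pattern (hypothetical names, simplified from the real code):

  static void my_default_idle(void) { /* halt */ }
  static void my_mwait_idle(void)   { /* mwait-based idle */ }

  DEFINE_STATIC_CALL(my_idle, my_default_idle);

  void arch_cpu_idle(void)
  {
          static_call(my_idle)();                 /* direct call, patched at boot */
  }

  static void __init my_select_idle(void)
  {
          /* one-time boot decision; no indirect call on the idle fast path */
          static_call_update(my_idle, my_mwait_idle);
  }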

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20230112195539.453613251@infradead.org
22 months ago  x86/perf/amd: Remove tracing from perf_lopwr_cb()
Peter Zijlstra [Thu, 12 Jan 2023 19:43:15 +0000 (20:43 +0100)]
x86/perf/amd: Remove tracing from perf_lopwr_cb()

The perf_lopwr_cb() function is called from the idle routines; there is
no RCU there, so we must not enter tracing.

Use __always_inline, noidle annotations and existing no-trace methods.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195539.392862891@infradead.org
22 months ago  rseq: Increase AT_VECTOR_SIZE_BASE to match rseq auxvec entries
Mathieu Desnoyers [Wed, 4 Jan 2023 19:20:54 +0000 (14:20 -0500)]
rseq: Increase AT_VECTOR_SIZE_BASE to match rseq auxvec entries

Two new auxiliary vector entries are introduced for rseq without a
matching increment of AT_VECTOR_SIZE_BASE, which causes failures with
CONFIG_HARDENED_USERCOPY=y.

Fixes: 317c8194e6ae ("rseq: Introduce feature size and alignment ELF auxiliary vector entries")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20230104192054.34046-1-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Revert "selftests/rseq: Add mm_numa_cid to test script"
Mathieu Desnoyers [Wed, 4 Jan 2023 16:35:42 +0000 (11:35 -0500)]
selftests/rseq: Revert "selftests/rseq: Add mm_numa_cid to test script"

The mm_numa_cid related rseq patches from the series were not picked up
into the tip tree, so enabling the mm_numa_cid test needs to be
reverted.

This reverts commit b344b8f2d88dbf095caf97ac57fd3645843fa70f.

Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/oe-lkp/202301040903.2dd1e25b-oliver.sang@intel.com
22 months ago  sched/cputime: Fix IA64 build error of missing arch_vtime_task_switch() prototype
Ingo Molnar [Wed, 11 Jan 2023 09:25:34 +0000 (10:25 +0100)]
sched/cputime: Fix IA64 build error of missing arch_vtime_task_switch() prototype

The following commit:

  c89970202a11 ("cputime: remove cputime_to_nsecs fallback")

Removed an <asm/cputime.h> inclusion from <linux/sched/cputime.h>, but this
broke the IA64 build:

    arch/ia64/kernel/time.c:110:6: warning: no previous prototype for 'arch_vtime_task_switch' [-Wmissing-prototypes]

Add in the missing <asm/cputime.h> header to fix it.

Fixes: c89970202a11 ("cputime: remove cputime_to_nsecs fallback")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
22 months ago  selftests/membarrier: Test MEMBARRIER_CMD_GET_REGISTRATIONS
Michal Clapinski [Wed, 7 Dec 2022 16:43:38 +0000 (17:43 +0100)]
selftests/membarrier: Test MEMBARRIER_CMD_GET_REGISTRATIONS

Keep track of previously issued registrations and compare the result
with the MEMBARRIER_CMD_GET_REGISTRATIONS return value.

Signed-off-by: Michal Clapinski <mclapinski@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20221207164338.1535591-3-mclapinski@google.com
22 months ago  sched/membarrier: Introduce MEMBARRIER_CMD_GET_REGISTRATIONS
Michal Clapinski [Wed, 7 Dec 2022 16:43:37 +0000 (17:43 +0100)]
sched/membarrier: Introduce MEMBARRIER_CMD_GET_REGISTRATIONS

Provide a method to query previously issued registrations.
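
A user-space sketch of querying the new command (assumes a uapi header
that already defines MEMBARRIER_CMD_GET_REGISTRATIONS):

  #include <linux/membarrier.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          /* bitmask describing the membarrier registrations of this process */
          long regs = syscall(__NR_membarrier,
                              MEMBARRIER_CMD_GET_REGISTRATIONS, 0, 0);

          printf("membarrier registrations: %#lx\n", regs);
          return 0;
  }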

Signed-off-by: Michal Clapinski <mclapinski@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20221207164338.1535591-2-mclapinski@google.com
22 months ago  cpufreq, sched/util: Optimize operations with single CPU capacity lookup
Lukasz Luba [Thu, 8 Dec 2022 16:02:56 +0000 (16:02 +0000)]
cpufreq, sched/util: Optimize operations with single CPU capacity lookup

The max CPU capacity is the same for all CPUs sharing frequency domain.
There is a way to avoid heavy operations in a loop for each CPU by
leveraging this knowledge: simplify the looping code in
sugov_next_freq_shared() and drop the heavy multiplications. Instead, use
a simple max() to get the highest utilization from these CPUs.

This is useful for platforms with many (4 or 6) little CPUs. We avoid
heavy 2*PD_CPU_NUM multiplications in that loop, which is called billions
of times, since it's not limited by the schedutil time delta filter in
sugov_should_update_freq(). When there was no need to change frequency
the code bailed out, not updating the sg_policy::last_freq_update_time.
Then every visit after a delta_ns longer than
sg_policy::freq_update_delay_ns goes through and triggers the next
frequency calculation. If the next frequency resulting from that is the
same as the current frequency, we still don't update
sg_policy::last_freq_update_time, and the story repeats (within a very
short period, sometimes a few microseconds).

The max CPU capacity must be fetched every time we are called, due to
difficulties during the policy setup, where we are not able to get the
normalized CPU capacity at the right time.

The fetched CPU capacity value is then used in sugov_iowait_apply() to
calculate the right boost. This required a few changes in the local
functions and arguments. The capacity value should hopefully be fetched
once when needed and then passed over CPU registers to those functions.
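
The simplified aggregation boils down to something like this (a sketch
with hypothetical my_cpu_capacity() and my_cpu_util() helpers; the real
code differs in detail):

  unsigned long util = 0, cap;
  unsigned int next_freq, j;

  cap = my_cpu_capacity(policy->cpu);             /* hypothetical: one capacity lookup */

  for_each_cpu(j, policy->cpus)
          util = max(util, my_cpu_util(j));       /* hypothetical per-CPU util read */

  next_freq = get_next_freq(sg_policy, util, cap);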

Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20221208160256.859-2-lukasz.luba@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
22 months ago  sched/core: Reorganize ttwu_do_wakeup() and ttwu_do_activate()
Chengming Zhou [Fri, 23 Dec 2022 10:32:57 +0000 (18:32 +0800)]
sched/core: Reorganize ttwu_do_wakeup() and ttwu_do_activate()

ttwu_do_activate() is used for a complete wakeup, in which we will
activate_task() and use ttwu_do_wakeup() to mark the task runnable
and perform wakeup-preemption, also call class->task_woken() callback
and update the rq->idle_stamp.

Since ttwu_runnable() is not a complete wakeup, it doesn't need all of
the work done in ttwu_do_wakeup(), so move that work into
ttwu_do_activate(). This simplifies ttwu_do_wakeup() to only mark the
task runnable, so it can be reused in ttwu_runnable() and try_to_wake_up().

This patch should not have any functional changes.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20221223103257.4962-2-zhouchengming@bytedance.com
22 months ago  sched/core: Micro-optimize ttwu_runnable()
Chengming Zhou [Fri, 23 Dec 2022 10:32:56 +0000 (18:32 +0800)]
sched/core: Micro-optimize ttwu_runnable()

ttwu_runnable() is used as a fast wakeup path when the wakee task
is running on CPU or runnable on RQ, in both cases we can just
set its state to TASK_RUNNING to prevent a sleep.

If the wakee task is on_cpu running, we don't need to update_rq_clock()
or check_preempt_curr().

But if the wakee task is on_rq && !on_cpu (e.g. an IRQ hit before the
task got to schedule() and the task was preempted), we should
check_preempt_curr() to see if it can preempt the currently running task.

This also removes the class->task_woken() callback from ttwu_runnable(),
which wasn't required per the RT/DL implementations: any required push
operation would have been queued during class->set_next_task() when p
got preempted.

ttwu_runnable() also loses the update to rq->idle_stamp, as by definition
the rq cannot be idle in this scenario.

Suggested-by: Valentin Schneider <vschneid@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20221223103257.4962-1-zhouchengming@bytedance.com
22 months ago  sched/documentation: Document the util clamp feature
Qais Yousef [Fri, 16 Dec 2022 23:57:16 +0000 (23:57 +0000)]
sched/documentation: Document the util clamp feature

Add a document explaining the util clamp feature: what it is and
how to use it. The new document hopefully covers everything one needs to
know about uclamp.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
Link: https://lore.kernel.org/r/20221216235716.201923-1-qyousef@layalina.io
Cc: Jonathan Corbet <corbet@lwn.net>
22 months ago  sched/topology: Add __init for sched_init_domains()
Bing Huang [Thu, 5 Jan 2023 01:49:43 +0000 (09:49 +0800)]
sched/topology: Add __init for sched_init_domains()

sched_init_domains() is only used during initialization.

Signed-off-by: Bing Huang <huangbing@kylinos.cn>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230105014943.9857-1-huangbing775@126.com
22 months ago  sched/rseq: Fix concurrency ID handling of usermodehelper kthreads
Mathieu Desnoyers [Mon, 2 Jan 2023 15:12:16 +0000 (10:12 -0500)]
sched/rseq: Fix concurrency ID handling of usermodehelper kthreads

sched_mm_cid_after_execve() does not expect NULL t->mm, but it may happen
if a usermodehelper kthread fails when attempting to execute a binary.

sched_mm_cid_fork() can be issued from a usermodehelper kthread, which
has t->flags PF_KTHREAD set.

Fixes: af7f588d8f73 ("sched: Introduce per-memory-map concurrency ID")
Reported-by: kernel test robot <yujie.liu@intel.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/oe-lkp/202212301353.5c959d72-yujie.liu@intel.com
22 months ago  cputime: remove cputime_to_nsecs fallback
Nicholas Piggin [Tue, 20 Dec 2022 07:07:05 +0000 (17:07 +1000)]
cputime: remove cputime_to_nsecs fallback

The archs that use cputime_to_nsecs() internally provide their own
definition and don't need the fallback. cputime_to_usecs() is unused
except in this fallback, and is not defined anywhere.

This removes the final remnant of the cputime_t code from the kernel.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Link: https://lore.kernel.org/r/20221220070705.2958959-1-npiggin@gmail.com
22 months ago  sched/core: Adjusting the order of scanning CPU
Hao Jia [Fri, 16 Dec 2022 06:24:06 +0000 (14:24 +0800)]
sched/core: Adjusting the order of scanning CPU

When select_idle_capacity() starts scanning for an idle CPU, it starts
with the target CPU, which has already been checked in
select_idle_sibling(). So start checking from the next CPU and try the
target CPU at the end. Similarly for task_numa_assign(): we have just
checked numa_migrate_on of dst_cpu, so start from the next CPU. This also
works for steal_cookie_task(): the first scan must fail, so start
directly from the next one.
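
The scan-order change amounts to this kind of loop (a sketch using the
generic cpumask helpers, with a hypothetical my_cpu_is_idle() test):

  /* start at target + 1; the wrap-around makes 'target' the last CPU tried */
  for_each_cpu_wrap(cpu, cpus, target + 1) {
          if (my_cpu_is_idle(cpu))                /* hypothetical idle test */
                  return cpu;
  }
  return -1;                                      /* no idle CPU found */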

Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lore.kernel.org/r/20221216062406.7812-3-jiahao.os@bytedance.com
22 months ago  sched/numa: Stop an exhaustive search if an idle core is found
Hao Jia [Fri, 16 Dec 2022 06:24:05 +0000 (14:24 +0800)]
sched/numa: Stop an exhaustive search if an idle core is found

In update_numa_stats() we try to find an idle CPU on the NUMA node,
preferably an idle core. We can stop looking for the next idle core or
idle CPU after finding an idle core. But we can't stop the whole
CPU-scanning loop, because we need to calculate approximate NUMA stats at
a point in time. For example, the src and dst nr_running are needed by
task_numa_find_cpu().

Signed-off-by: Hao Jia <jiahao.os@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lore.kernel.org/r/20221216062406.7812-2-jiahao.os@bytedance.com
22 months ago  sched: Make const-safe
Matthew Wilcox (Oracle) [Mon, 12 Dec 2022 14:49:46 +0000 (14:49 +0000)]
sched: Make const-safe

With a modified container_of() that preserves constness, the compiler
finds some pointers which should have been marked as const.  task_of()
also needs to become const-preserving for the !FAIR_GROUP_SCHED case so
that cfs_rq_of() can take a const argument.  No change to generated code.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221212144946.2657785-1-willy@infradead.org
22 months ago  selftests/rseq: Add mm_numa_cid to test script
Mathieu Desnoyers [Fri, 16 Dec 2022 14:53:32 +0000 (09:53 -0500)]
selftests/rseq: Add mm_numa_cid to test script

Add mm_numa_cid tests to the run_param_test.sh test script.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221216145332.205095-1-mathieu.desnoyers@efficios.com
22 months ago  tracing/rseq: Add mm_cid field to rseq_update
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:23 +0000 (15:39 -0500)]
tracing/rseq: Add mm_cid field to rseq_update

Add the mm_cid field to the rseq_update event, allowing tracers to
follow which mm_cid is observed by user-space, and whether negative
mm_cid values are visible in case of internal scheduler implementation
issues.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-22-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: parametrized test: Report/abort on negative concurrency ID
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:22 +0000 (15:39 -0500)]
selftests/rseq: parametrized test: Report/abort on negative concurrency ID

Report and abort when a negative concurrency ID value is observed by the
spinlock test.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-21-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Implement parametrized mm_cid test
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:21 +0000 (15:39 -0500)]
selftests/rseq: Implement parametrized mm_cid test

Adapt to the rseq.h API changes introduced by commits
"selftests/rseq: <arch>: Template memory ordering and percpu access mode".

Build a new param_test_mm_cid, param_test_mm_cid_benchmark, and
param_test_mm_cid_compare_twice executables to test the new "mm_cid"
rseq field.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-20-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Implement basic percpu ops mm_cid test
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:20 +0000 (15:39 -0500)]
selftests/rseq: Implement basic percpu ops mm_cid test

Adapt to the rseq.h API changes introduced by commits
"selftests/rseq: <arch>: Template memory ordering and percpu access mode".

Build a new basic_percpu_ops_mm_cid_test to test the new "mm_cid" rseq
field.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-19-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: riscv: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:19 +0000 (15:39 -0500)]
selftests/rseq: riscv: Template memory ordering and percpu access mode

Introduce a rseq-riscv-bits.h template header which is internally included
to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-18-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: s390: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:18 +0000 (15:39 -0500)]
selftests/rseq: s390: Template memory ordering and percpu access mode

Introduce a rseq-s390-bits.h template header which is internally included
to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-17-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: ppc: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:17 +0000 (15:39 -0500)]
selftests/rseq: ppc: Template memory ordering and percpu access mode

Introduce a rseq-ppc-bits.h template header which is internally included
to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-16-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: mips: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:16 +0000 (15:39 -0500)]
selftests/rseq: mips: Template memory ordering and percpu access mode

Introduce a rseq-mips-bits.h template header which is internally
included to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-15-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: arm64: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:15 +0000 (15:39 -0500)]
selftests/rseq: arm64: Template memory ordering and percpu access mode

Introduce a rseq-arm64-bits.h template header which is internally
included to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-14-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: arm: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:14 +0000 (15:39 -0500)]
selftests/rseq: arm: Template memory ordering and percpu access mode

Introduce a rseq-arm-bits.h template header which is internally included
to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-13-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: x86: Template memory ordering and percpu access mode
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:13 +0000 (15:39 -0500)]
selftests/rseq: x86: Template memory ordering and percpu access mode

Introduce a rseq-x86-bits.h template header which is internally included
to generate the static inline functions covering:

- relaxed and release memory ordering,
- per-cpu-id and per-mm-cid per-cpu data access.

This introduces changes to the rseq.h selftests API which require to
update the rseq selftest programs. Similar API/templating changes need
to be done for other architectures.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-12-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Implement rseq mm_cid field support
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:12 +0000 (15:39 -0500)]
selftests/rseq: Implement rseq mm_cid field support

Add support for the mm_cid field (per-memory-map concurrency ID) of
struct rseq to rseq selftests.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-11-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Remove RSEQ_SKIP_FASTPATH code
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:11 +0000 (15:39 -0500)]
selftests/rseq: Remove RSEQ_SKIP_FASTPATH code

This code is not currently built by the test Makefile, adds complexity,
and is not overall useful considering that the abort handling loops to
retry the fast-path.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-10-mathieu.desnoyers@efficios.com
22 months ago  rseq: Extend struct rseq with per-memory-map concurrency ID
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:10 +0000 (15:39 -0500)]
rseq: Extend struct rseq with per-memory-map concurrency ID

If a memory map has fewer threads than there are cores on the system, or
is limited to run on few cores concurrently through sched affinity or
cgroup cpusets, the concurrency IDs will be values close to 0, thus
allowing efficient use of user-space memory for per-cpu data structures.
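
A rough user-space sketch of why dense IDs matter (illustrative only;
MAX_CONCURRENCY is an assumed bound, and a real allocator would access
the shard from inside an rseq critical section):

  /* one shard per possible concurrency ID; with few threads or a small
   * cpuset only the first few shards are ever touched */
  struct shard {
          long nr_free;
          /* ... per-shard free lists, caches, ... */
  };
  static struct shard shards[MAX_CONCURRENCY];    /* assumed: nr of possible CPUs */

  static struct shard *current_shard(const volatile struct rseq *rs)
  {
          return &shards[rs->mm_cid];
  }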

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-9-mathieu.desnoyers@efficios.com
22 months ago  sched: Introduce per-memory-map concurrency ID
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:09 +0000 (15:39 -0500)]
sched: Introduce per-memory-map concurrency ID

This feature allows the scheduler to expose a per-memory map concurrency
ID to user-space. This concurrency ID is within the possible cpus range,
and is temporarily (and uniquely) assigned while threads are actively
running within a memory map. If a memory map has fewer threads than
cores, or is limited to run on few cores concurrently through sched
affinity or cgroup cpusets, the concurrency IDs will be values close
to 0, thus allowing efficient use of user-space memory for per-cpu
data structures.

This feature is meant to be exposed by a new rseq thread area field.

The primary purpose of this feature is to do the heavy-lifting needed
by memory allocators to allow them to use per-cpu data structures
efficiently in the following situations:

- Single-threaded applications,
- Multi-threaded applications on large systems (many cores) with limited
  cpu affinity mask,
- Multi-threaded applications on large systems (many cores) with
  restricted cgroup cpuset per container.

One of the key concerns from scheduler maintainers is the overhead
associated with additional spin locks or atomic operations in the
scheduler fast-path. This is why the following optimization is
implemented.

On context switch between threads belonging to the same memory map,
transfer the mm_cid from prev to next without any atomic ops. This
takes care of use-cases involving frequent context switch between
threads belonging to the same memory map.

Additional optimizations can be done if the spin locks added when
context switching between threads belonging to different memory maps end
up being a performance bottleneck. Those are left out of this patch
though. A performance impact would have to be clearly demonstrated to
justify the added complexity.

The credit goes to Paul Turner (Google) for the original virtual cpu id
idea. This feature is implemented based on the discussions with Paul
Turner and Peter Oskolkov (Google), but I took the liberty to implement
scheduler fast-path optimizations and my own NUMA-awareness scheme. Rumor
has it that Google has been running an rseq vcpu_id extension
internally in production for a year. The tcmalloc source code indeed has
comments hinting at a vcpu_id prototype extension to the rseq system
call [1].

The following benchmarks do not show any significant overhead added to
the scheduler context switch by this feature:

* perf bench sched messaging (process)

Baseline:                    86.5±0.3 ms
With mm_cid:                 86.7±2.6 ms

* perf bench sched messaging (threaded)

Baseline:                    84.3±3.0 ms
With mm_cid:                 84.7±2.6 ms

* hackbench (process)

Baseline:                    82.9±2.7 ms
With mm_cid:                 82.9±2.9 ms

* hackbench (threaded)

Baseline:                    85.2±2.6 ms
With mm_cid:                 84.4±2.9 ms

[1] https://github.com/google/tcmalloc/blob/master/tcmalloc/internal/linux_syscall_support.h#L26

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-8-mathieu.desnoyers@efficios.com
22 months ago  selftests/rseq: Implement rseq numa node id field selftest
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:08 +0000 (15:39 -0500)]
selftests/rseq: Implement rseq numa node id field selftest

Test the NUMA node id extension rseq field. Compare it against the value
returned by the getcpu(2) system call while pinned on a specific core.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-7-mathieu.desnoyers@efficios.com
22 months agoselftests/rseq: Use ELF auxiliary vector for extensible rseq
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:07 +0000 (15:39 -0500)]
selftests/rseq: Use ELF auxiliary vector for extensible rseq

Use the ELF auxiliary vector AT_RSEQ_FEATURE_SIZE to detect the RSEQ
features supported by the kernel.
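
For reference, the detection amounts to a getauxval() lookup, roughly as
in this sketch (the fallback defines cover libc headers that do not yet
provide the AT_RSEQ_* constants):

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef AT_RSEQ_FEATURE_SIZE
  #define AT_RSEQ_FEATURE_SIZE    27      /* rseq supported feature size */
  #endif
  #ifndef AT_RSEQ_ALIGN
  #define AT_RSEQ_ALIGN           28      /* rseq allocation alignment */
  #endif

  int main(void)
  {
          unsigned long feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
          unsigned long align = getauxval(AT_RSEQ_ALIGN);

          /* Both read as 0 on kernels that do not provide the entries. */
          printf("rseq feature size: %lu, alignment: %lu\n", feature_size, align);
          return 0;
  }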

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-6-mathieu.desnoyers@efficios.com
22 months agorseq: Extend struct rseq with numa node id
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:06 +0000 (15:39 -0500)]
rseq: Extend struct rseq with numa node id

Adding the NUMA node id to struct rseq is a straightforward thing to do,
and a good way to figure out if anything in the user-space ecosystem
prevents extending struct rseq.

This NUMA node id field allows memory allocators such as tcmalloc to
take advantage of fast access to the current NUMA node id to perform
NUMA-aware memory allocation.

It can also be useful for implementing fast-paths for NUMA-aware
user-space mutexes.

It also allows implementing getcpu(2) purely in user-space.
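
A user-space getcpu() could then be approximated along these lines (a
sketch; the struct layout is abbreviated, the thread-local pointer to
the registered area is assumed to be set up elsewhere, and the real ABI
lives in linux/rseq.h):

  #include <stdint.h>

  /* Abbreviated layout of the fields involved. */
  struct rseq_area {
          uint32_t cpu_id_start;
          uint32_t cpu_id;
          uint64_t rseq_cs;
          uint32_t flags;
          uint32_t node_id;       /* the field added by this patch */
  };

  /* Assumed to point at this thread's registered rseq area. */
  static __thread struct rseq_area *thread_rseq;

  static int rseq_getcpu(unsigned int *cpu, unsigned int *node)
  {
          /* Plain volatile reads: the kernel updates these fields on
           * migration, so no system call is needed. */
          *cpu  = *(volatile uint32_t *)&thread_rseq->cpu_id;
          *node = *(volatile uint32_t *)&thread_rseq->node_id;
          return 0;
  }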

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-5-mathieu.desnoyers@efficios.com
22 months agorseq: Introduce extensible rseq ABI
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:05 +0000 (15:39 -0500)]
rseq: Introduce extensible rseq ABI

Introduce the extensible rseq ABI, where the feature size supported by
the kernel and the required alignment are communicated to user-space
through ELF auxiliary vectors.

This allows user-space to call rseq registration with a rseq_len of
either 32 bytes for the original struct rseq size (which includes
padding), or larger.

If rseq_len is larger than 32 bytes, then it must be large enough to
contain the feature size communicated to user-space through ELF
auxiliary vectors.
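
Registration with the extended size could look roughly like this sketch
(error handling trimmed; ORIG_RSEQ_SIZE and the helper are illustrative,
and the rseq area itself must be allocated with the alignment reported
through AT_RSEQ_ALIGN):

  #define _GNU_SOURCE
  #include <stdint.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/rseq.h>

  #define ORIG_RSEQ_SIZE  32U     /* original struct rseq size, incl. padding */

  static long register_rseq(struct rseq *rs, unsigned long feature_size,
                            unsigned long align, uint32_t sig)
  {
          unsigned long rseq_len = ORIG_RSEQ_SIZE;

          /* Register with a larger length only if the kernel-reported
           * feature size does not fit in the original 32 bytes. */
          if (feature_size > ORIG_RSEQ_SIZE)
                  rseq_len = (feature_size + align - 1) & ~(align - 1);

          return syscall(__NR_rseq, rs, rseq_len, 0, sig);
  }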

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-4-mathieu.desnoyers@efficios.com
22 months agorseq: Introduce feature size and alignment ELF auxiliary vector entries
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:04 +0000 (15:39 -0500)]
rseq: Introduce feature size and alignment ELF auxiliary vector entries

Export the rseq feature size supported by the kernel as well as the
required allocation alignment for the rseq per-thread area to user-space
through ELF auxiliary vector entries.

This is part of the extensible rseq ABI.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-3-mathieu.desnoyers@efficios.com
22 months agoselftests/rseq: Fix: Fail thread registration when CONFIG_RSEQ=n
Mathieu Desnoyers [Tue, 22 Nov 2022 20:39:03 +0000 (15:39 -0500)]
selftests/rseq: Fix: Fail thread registration when CONFIG_RSEQ=n

When linking the selftests against a libc which does not handle rseq
registration (before 2.35), rseq thread registration silently succeeds
even with CONFIG_RSEQ=n because it erroneously thinks that libc is
handling rseq registration.

This is caused by setting the rseq ownership flag only after the
rseq_available() check. It should rather be set before the
rseq_available() check.

Set rseq_size to 0 (the error value) immediately when the
rseq_available() check fails, rather than in the thread registration
functions.
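
In other words, the intended ordering is roughly this (a sketch of the
logic only; the helpers are stubbed so it stands alone, and the real
selftest code differs in detail):

  static int rseq_ownership;              /* 1: this code, not libc, registers rseq */
  static unsigned int rseq_size = -1U;    /* 0 doubles as the "unavailable" error value */

  static int rseq_available(void) { return 0; }                  /* stub */
  static unsigned int get_rseq_feature_size(void) { return 32; } /* stub */

  static void rseq_init_sketch(void)
  {
          /* Take ownership before probing, so a failed probe is recorded. */
          rseq_ownership = 1;
          if (!rseq_available()) {
                  rseq_size = 0;          /* thread registration must now fail */
                  return;
          }
          rseq_size = get_rseq_feature_size();
  }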

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221122203932.231377-2-mathieu.desnoyers@efficios.com
22 months agosched: Async unthrottling for cfs bandwidth
Josh Don [Thu, 17 Nov 2022 00:54:18 +0000 (16:54 -0800)]
sched: Async unthrottling for cfs bandwidth

CFS bandwidth currently distributes new runtime and unthrottles cfs_rq's
inline in an hrtimer callback. Runtime distribution is a per-cpu
operation, and unthrottling is a per-cgroup operation, since a tg walk
is required. On machines with a large number of cpus and large cgroup
hierarchies, this cpus*cgroups work can be too much to do in a single
hrtimer callback: since IRQs are disabled, hard lockups may easily occur.
Specifically, we've found this scalability issue on configurations with
256 cpus, O(1000) cgroups in the hierarchy being throttled, and high
memory bandwidth usage.

To fix this, we can instead unthrottle cfs_rq's asynchronously via a
CSD. Each cpu is responsible for unthrottling itself, thus sharding the
total work more fairly across the system, and avoiding hard lockups.
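
Conceptually the hand-off looks like the fragment below (an illustrative
kernel-style sketch, not the actual scheduler code; field and helper
names are approximate):

  static void unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
  {
          struct rq *rq = rq_of(cfs_rq);
          bool first;

          if (rq == this_rq()) {
                  /* Local CPU: unthrottle right away. */
                  unthrottle_cfs_rq(cfs_rq);
                  return;
          }

          /* Remote CPU: queue the cfs_rq on that CPU's list and kick it
           * with a CSD, so it does the tg walk itself. Only send the CSD
           * when the list was empty; one IPI drains the whole list. */
          first = list_empty(&rq->cfsb_csd_list);
          list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
          if (first)
                  smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
  }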

Signed-off-by: Josh Don <joshdon@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221117005418.3499691-1-joshdon@google.com
22 months agosched/topology: Add __init for init_defrootdomain
Bing Huang [Fri, 18 Nov 2022 03:42:08 +0000 (11:42 +0800)]
sched/topology: Add __init for init_defrootdomain

init_defrootdomain is only used during initialization.

Signed-off-by: Bing Huang <huangbing@kylinos.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lkml.kernel.org/r/20221118034208.267330-1-huangbing775@126.com
22 months agoLinux 6.2-rc1
Linus Torvalds [Sun, 25 Dec 2022 21:41:39 +0000 (13:41 -0800)]
Linux 6.2-rc1

22 months agotreewide: Convert del_timer*() to timer_shutdown*()
Steven Rostedt (Google) [Tue, 20 Dec 2022 18:45:19 +0000 (13:45 -0500)]
treewide: Convert del_timer*() to timer_shutdown*()

Due to several bugs caused by timers being re-armed after they are
shut down and just before they are freed, a new timer state called
"shutdown" was added.  After a timer is set to this state, it can no
longer be re-armed.

The following script was run to find all the trivial locations where
del_timer() or del_timer_sync() is called in the same function in which
the object holding the timer is freed.  It also ignores any locations
where the timer->function is modified between the del_timer*() and the
free(), as that is not considered a "trivial" case.

This was created by using a coccinelle script and the following
commands:

    $ cat timer.cocci
    @@
    expression ptr, slab;
    identifier timer, rfield;
    @@
    (
    -       del_timer(&ptr->timer);
    +       timer_shutdown(&ptr->timer);
    |
    -       del_timer_sync(&ptr->timer);
    +       timer_shutdown_sync(&ptr->timer);
    )
      ... when strict
          when != ptr->timer
    (
            kfree_rcu(ptr, rfield);
    |
            kmem_cache_free(slab, ptr);
    |
            kfree(ptr);
    )

    $ spatch timer.cocci . > /tmp/t.patch
    $ patch -p1 < /tmp/t.patch
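
Applied to a driver, the transformation has the following shape (a
representative example rather than any specific call site):

  /* Before: the timer could still be re-armed between these two calls. */
  del_timer_sync(&foo->timer);
  kfree(foo);

  /* After: timer_shutdown_sync() waits for a running callback and also
   * prevents any future re-arming, so nothing can fire after the free. */
  timer_shutdown_sync(&foo->timer);
  kfree(foo);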

Link: https://lore.kernel.org/lkml/20221123201306.823305113@linutronix.de/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Acked-by: Pavel Machek <pavel@ucw.cz> [ LED ]
Acked-by: Kalle Valo <kvalo@kernel.org> [ wireless ]
Acked-by: Paolo Abeni <pabeni@redhat.com> [ networking ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
22 months agoMerge tag 'spi-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Linus Torvalds [Fri, 23 Dec 2022 22:44:08 +0000 (14:44 -0800)]
Merge tag 'spi-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi fix from Mark Brown:
 "One driver specific change here which handles the case where a SPI
  device for some reason tries to change the bus speed during a message
  on fsl_spi hardware, this should be very unusual"

* tag 'spi-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
  spi: fsl_spi: Don't change speed while chipselect is active

22 months agoMerge tag 'regulator-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 23 Dec 2022 22:38:00 +0000 (14:38 -0800)]
Merge tag 'regulator-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator

Pull regulator fixes from Mark Brown:
 "Two core fixes here, one for a long standing race which some Qualcomm
  systems have started triggering with their UFS driver and another
  fixing a problem with supply lookup introduced by the fixes for devm
  related use after free issues that were introduced in this merge
  window"

* tag 'regulator-fix-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
  regulator: core: fix deadlock on regulator enable
  regulator: core: Fix resolve supply lookup issue

22 months agoMerge tag 'coccinelle-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall...
Linus Torvalds [Fri, 23 Dec 2022 21:56:41 +0000 (13:56 -0800)]
Merge tag 'coccinelle-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux

Pull coccicheck update from Julia Lawall:
 "Modernize use of grep in coccicheck:

  Use 'grep -E' instead of 'egrep'"

* tag 'coccinelle-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux:
  scripts: coccicheck: use "grep -E" instead of "egrep"

22 months agoMerge tag 'hardening-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Fri, 23 Dec 2022 20:00:24 +0000 (12:00 -0800)]
Merge tag 'hardening-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull kernel hardening fixes from Kees Cook:

 - Fix CFI failure with KASAN (Sami Tolvanen)

 - Fix LKDTM + CFI under GCC 7 and 8 (Kristina Martsenko)

 - Limit CONFIG_ZERO_CALL_USED_REGS to Clang > 15.0.6 (Nathan
   Chancellor)

 - Ignore "contents" argument in LoadPin's LSM hook handling

 - Fix paste-o in /sys/kernel/warn_count API docs

 - Use READ_ONCE() consistently for oops/warn limit reading

* tag 'hardening-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  cfi: Fix CFI failure with KASAN
  exit: Use READ_ONCE() for all oops/warn limit reads
  security: Restrict CONFIG_ZERO_CALL_USED_REGS to gcc or clang > 15.0.6
  lkdtm: cfi: Make PAC test work with GCC 7 and 8
  docs: Fix path paste-o for /sys/kernel/warn_count
  LoadPin: Ignore the "contents" argument of the LSM hooks

22 months agoMerge tag 'pstore-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 23 Dec 2022 19:55:54 +0000 (11:55 -0800)]
Merge tag 'pstore-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull pstore fixes from Kees Cook:

 - Switch pmsg_lock to an rt_mutex to avoid priority inversion (John
   Stultz)

 - Correctly assign mem_type property (Luca Stefani)

* tag 'pstore-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  pstore: Properly assign mem_type property
  pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES
  pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion

22 months agoMerge tag 'dma-mapping-2022-12-23' of git://git.infradead.org/users/hch/dma-mapping
Linus Torvalds [Fri, 23 Dec 2022 19:44:20 +0000 (11:44 -0800)]
Merge tag 'dma-mapping-2022-12-23' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fixes from Christoph Hellwig:
 "Fix up the sound code to not pass __GFP_COMP to the non-coherent DMA
  allocator, as it copes with that just as badly as the coherent
  allocator, and then add a check to make sure no one passes the flag
  ever again"

* tag 'dma-mapping-2022-12-23' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: reject GFP_COMP for noncoherent allocations
  ALSA: memalloc: don't use GFP_COMP for non-coherent dma allocations

22 months agoMerge tag '9p-for-6.2-rc1' of https://github.com/martinetd/linux
Linus Torvalds [Fri, 23 Dec 2022 19:39:18 +0000 (11:39 -0800)]
Merge tag '9p-for-6.2-rc1' of https://github.com/martinetd/linux

Pull 9p updates from Dominique Martinet:

 - improve p9_check_errors to check buffer size instead of msize when
   possible (e.g. not zero-copy)

 - some more syzbot and KCSAN fixes

 - minor headers include cleanup

* tag '9p-for-6.2-rc1' of https://github.com/martinetd/linux:
  9p/client: fix data race on req->status
  net/9p: fix response size check in p9_check_errors()
  net/9p: distinguish zero-copy requests
  9p/xen: do not memcpy header into req->rc
  9p: set req refcount to zero to avoid uninitialized usage
  9p/net: Remove unneeded idr.h #include
  9p/fs: Remove unneeded idr.h #include

22 months agoMerge tag 'sound-6.2-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai...
Linus Torvalds [Fri, 23 Dec 2022 19:15:48 +0000 (11:15 -0800)]
Merge tag 'sound-6.2-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

Pull more sound updates from Takashi Iwai:
 "A few more updates for 6.2: most of changes are about ASoC
  device-specific fixes.

   - Lots of ASoC Intel AVS extensions and refactoring

   - Quirks for ASoC Intel SOF as well as regression fixes

   - ASoC Mediatek and Rockchip fixes

   - Intel HD-audio HDMI workarounds

   - Usual HD- and USB-audio device-specific quirks"

* tag 'sound-6.2-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (54 commits)
  ALSA: usb-audio: Add new quirk FIXED_RATE for JBL Quantum810 Wireless
  ALSA: azt3328: Remove the unused function snd_azf3328_codec_outl()
  ASoC: lochnagar: Fix unused lochnagar_of_match warning
  ASoC: Intel: Add HP Stream 8 to bytcr_rt5640.c
  ASoC: SOF: mediatek: initialize panic_info to zero
  ASoC: rt5670: Remove unbalanced pm_runtime_put()
  ASoC: Intel: bytcr_rt5640: Add quirk for the Advantech MICA-071 tablet
  ASoC: Intel: soc-acpi: update codec addr on 0C11/0C4F product
  ASoC: rockchip: spdif: Add missing clk_disable_unprepare() in rk_spdif_runtime_resume()
  ASoC: wm8994: Fix potential deadlock
  ASoC: mediatek: mt8195: add sof be ops to check audio active
  ASoC: SOF: Revert: "core: unregister clients and machine drivers in .shutdown"
  ASoC: SOF: Intel: pci-tgl: unblock S5 entry if DMA stop has failed"
  ALSA: hda/hdmi: fix stream-id config keep-alive for rt suspend
  ALSA: hda/hdmi: set default audio parameters for KAE silent-stream
  ALSA: hda/hdmi: fix i915 silent stream programming flow
  ALSA: hda: Error out if invalid stream is being setup
  ASoC: dt-bindings: fsl-sai: Reinstate i.MX93 SAI compatible string
  ASoC: soc-pcm.c: Clear DAIs parameters after stream_active is updated
  ASoC: codecs: wcd-clsh: Remove the unused function
  ...

22 months agoMerge tag 'drm-next-2022-12-23' of git://anongit.freedesktop.org/drm/drm
Linus Torvalds [Fri, 23 Dec 2022 19:09:44 +0000 (11:09 -0800)]
Merge tag 'drm-next-2022-12-23' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "Holiday fixes!

  Two batches from amd, and one group of i915 changes.

  amdgpu:
   - Spelling fix
   - BO pin fix
   - Properly handle polaris 10/11 overlap asics
   - GMC9 fix
   - SR-IOV suspend fix
   - DCN 3.1.4 fix
   - KFD userptr locking fix
   - SMU13.x fixes
   - GDS/GWS/OA handling fix
   - Reserved VMID handling fixes
   - FRU EEPROM fix
   - BO validation fixes
   - Avoid large variable on the stack
   - S0ix fixes
   - SMU 13.x fixes
   - VCN fix
   - Add missing fence reference

  amdkfd:
   - Fix init vm error handling
   - Fix double release of compute pasid

  i915
   - Documentation fixes
   - OA-perf related fix
   - VLV/CHV HDMI/DP audio fix
   - Display DDI/Transcoder fix
   - Migrate fixes"

* tag 'drm-next-2022-12-23' of git://anongit.freedesktop.org/drm/drm: (39 commits)
  drm/amdgpu: grab extra fence reference for drm_sched_job_add_dependency
  drm/amdgpu: enable VCN DPG for GC IP v11.0.4
  drm/amdgpu: skip mes self test after s0i3 resume for MES IP v11.0
  drm/amd/pm: correct the fan speed retrieving in PWM for some SMU13 asics
  drm/amd/pm: bump SMU13.0.0 driver_if header to version 0x34
  drm/amdgpu: skip MES for S0ix as well since it's part of GFX
  drm/amd/pm: avoid large variable on kernel stack
  drm/amdkfd: Fix double release compute pasid
  drm/amdkfd: Fix kfd_process_device_init_vm error handling
  drm/amd/pm: update SMU13.0.0 reported maximum shader clock
  drm/amd/pm: correct SMU13.0.0 pstate profiling clock settings
  drm/amd/pm: enable GPO dynamic control support for SMU13.0.7
  drm/amd/pm: enable GPO dynamic control support for SMU13.0.0
  drm/amdgpu: revert "generally allow over-commit during BO allocation"
  drm/amdgpu: Remove unnecessary domain argument
  drm/amdgpu: Fix size validation for non-exclusive domains (v4)
  drm/amdgpu: Check if fru_addr is not NULL (v2)
  drm/i915/ttm: consider CCS for backup objects
  drm/i915/migrate: fix corner case in CCS aux copying
  drm/amdgpu: rework reserved VMID handling
  ...

22 months agoMerge tag 'mips_6.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Linus Torvalds [Fri, 23 Dec 2022 18:49:45 +0000 (10:49 -0800)]
Merge tag 'mips_6.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux

Pull MIPS fixes from Thomas Bogendoerfer:
 "Fixes due to DT changes"

* tag 'mips_6.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: dts: bcm63268: Add missing properties to the TWD node
  MIPS: ralink: mt7621: avoid to init common ralink reset controller

22 months agoMerge tag 'mm-hotfixes-stable-2022-12-22-14-34' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Fri, 23 Dec 2022 18:45:00 +0000 (10:45 -0800)]
Merge tag 'mm-hotfixes-stable-2022-12-22-14-34' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull hotfixes from Andrew Morton:
 "Eight fixes, all cc:stable. One is for gcov and the remainder are MM"

* tag 'mm-hotfixes-stable-2022-12-22-14-34' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  gcov: add support for checksum field
  test_maple_tree: add test for mas_spanning_rebalance() on insufficient data
  maple_tree: fix mas_spanning_rebalance() on insufficient data
  hugetlb: really allocate vma lock for all sharable vmas
  kmsan: export kmsan_handle_urb
  kmsan: include linux/vmalloc.h
  mm/mempolicy: fix memory leak in set_mempolicy_home_node system call
  mm, mremap: fix mremap() expanding vma with addr inside vma

22 months agopstore: Properly assign mem_type property
Luca Stefani [Thu, 22 Dec 2022 13:10:49 +0000 (14:10 +0100)]
pstore: Properly assign mem_type property

If mem-type is specified in the device tree, it ends up overriding the
record_size field instead of populating mem_type.

As record_size is currently parsed after the improper assignment, with a
default size of 0, everything continued to work as expected regardless
of the value found in the device tree.

Simply changing the target field of the struct is enough to get mem-type
working as expected.
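
The shape of the fix, sketched (the parse helper and field names mirror
the ramoops DT parsing loosely; treat the exact call as illustrative of
the one-line change described above):

  /* Before: the value read from "mem-type" landed in record_size. */
  parse_u32("mem-type", pdata->record_size, pdata->mem_type);

  /* After: direct the parsed value at the mem_type field itself. */
  parse_u32("mem-type", pdata->mem_type, pdata->mem_type);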

Fixes: 9d843e8fafc7 ("pstore: Add mem_type property DT parsing support")
Cc: stable@vger.kernel.org
Signed-off-by: Luca Stefani <luca@osomprivacy.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221222131049.286288-1-luca@osomprivacy.com
22 months agopstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES
John Stultz [Wed, 21 Dec 2022 05:18:55 +0000 (05:18 +0000)]
pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES

In commit 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex
to avoid priority inversion") I changed a lock to an rt_mutex.

However, it's possible that CONFIG_RT_MUTEXES is not enabled,
which then results in a build failure, as the 0day bot detected:
  https://lore.kernel.org/linux-mm/202212211244.TwzWZD3H-lkp@intel.com/

Thus this patch changes CONFIG_PSTORE_PMSG to select
CONFIG_RT_MUTEXES, which ensures the build will not fail.

Cc: Wei Wang <wvw@google.com>
Cc: Midas Chien <midaschieh@google.com>
Cc: Connor O'Brien <connoro@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: kernel test robot <lkp@intel.com>
Cc: kernel-team@android.com
Fixes: 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221221051855.15761-1-jstultz@google.com
22 months agocfi: Fix CFI failure with KASAN
Sami Tolvanen [Thu, 22 Dec 2022 22:57:47 +0000 (22:57 +0000)]
cfi: Fix CFI failure with KASAN

When CFI_CLANG and KASAN are both enabled, LLVM doesn't generate a
CFI type hash for asan.module_ctor functions in translation units
where CFI is disabled, which leads to a CFI failure during boot when
do_ctors calls the affected constructors:

  CFI failure at do_basic_setup+0x64/0x90 (target:
  asan.module_ctor+0x0/0x28; expected type: 0xa540670c)

Specifically, this happens because CFI is disabled for
kernel/cfi.c. There's no reason to keep CFI disabled here anymore, so
fix the failure by not filtering out CC_FLAGS_CFI for the file.

Note that https://reviews.llvm.org/rG3b14862f0a96 fixed the issue
where LLVM didn't emit CFI type hashes for any sanitizer constructors,
but now type hashes are emitted correctly for TUs that use CFI.

Link: https://github.com/ClangBuiltLinux/linux/issues/1742
Fixes: 89245600941e ("cfi: Switch to -fsanitize=kcfi")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20221222225747.3538676-1-samitolvanen@google.com
22 months agoMerge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Thu, 22 Dec 2022 19:22:31 +0000 (11:22 -0800)]
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull more SCSI updates from James Bottomley:
 "Mostly small bug fixes and small updates.

  The only things of note is a qla2xxx fix for crash on hotplug and
  timeout and the addition of a user exposed abstraction layer for
  persistent reservation error return handling (which necessitates the
  conversion of nvme.c as well as SCSI)"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: qla2xxx: Fix crash when I/O abort times out
  nvme: Convert NVMe errors to PR errors
  scsi: sd: Convert SCSI errors to PR errors
  scsi: core: Rename status_byte to sg_status_byte
  block: Add error codes for common PR failures
  scsi: sd: sd_zbc: Trace zone append emulation
  scsi: libfc: Include the correct header

22 months agoMerge tag 'afs-next-20221222' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowel...
Linus Torvalds [Thu, 22 Dec 2022 19:17:34 +0000 (11:17 -0800)]
Merge tag 'afs-next-20221222' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull afs update from David Howells:
 "A fix for a couple of missing resource counter decrements, two small
  cleanups of now-unused bits of code and a patch to remove writepage
  support from afs"

* tag 'afs-next-20221222' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
  afs: Stop implementing ->writepage()
  afs: remove afs_cache_netfs and afs_zap_permits() declarations
  afs: remove variable nr_servers
  afs: Fix lost servers_outstanding count

22 months agoMerge tag 'perf-tools-for-v6.2-2-2022-12-22' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Thu, 22 Dec 2022 19:07:29 +0000 (11:07 -0800)]
Merge tag 'perf-tools-for-v6.2-2-2022-12-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux

Pull more perf tools updates from Arnaldo Carvalho de Melo:
 "perf tools fixes and improvements:

   - Don't stop building perf if python setuptools isn't installed, just
     disable the affected perf feature.

   - Remove explicit reference to python 2.x devel files, that warning
     is about python-devel, no matter what version, being unavailable
     and thus disabling the linking with libpython.

   - Don't use -Werror=switch-enum when building the python support that
     handles libtraceevent enumerations, as there is no good way to test
     if some specific enum entry is available with the libtraceevent
     installed on the system.

   - Introduce 'perf lock contention' --type-filter and --lock-filter,
     to filter by lock type and lock name:

        $ sudo ./perf lock record -a -- ./perf bench sched messaging

        $ sudo ./perf lock contention -E 5 -Y spinlock
         contended  total wait   max wait  avg wait      type  caller

               802     1.26 ms   11.73 us   1.58 us  spinlock  __wake_up_common_lock+0x62
                13   787.16 us  105.44 us  60.55 us  spinlock  remove_wait_queue+0x14
                12   612.96 us   78.70 us  51.08 us  spinlock  prepare_to_wait+0x27
               114   340.68 us   12.61 us   2.99 us  spinlock  try_to_wake_up+0x1f5
                83   226.38 us    9.15 us   2.73 us  spinlock  folio_lruvec_lock_irqsave+0x5e

        $ sudo ./perf lock contention -l
         contended  total wait  max wait  avg wait           address  symbol

                57     1.11 ms  42.83 us  19.54 us  ffff9f4140059000
                15   280.88 us  23.51 us  18.73 us  ffffffff9d007a40  jiffies_lock
                 1    20.49 us  20.49 us  20.49 us  ffffffff9d0d50c0  rcu_state
                 1     9.02 us   9.02 us   9.02 us  ffff9f41759e9ba0

        $ sudo ./perf lock contention -L jiffies_lock,rcu_state
         contended  total wait  max wait  avg wait      type  caller

                15   280.88 us  23.51 us  18.73 us  spinlock  tick_sched_do_timer+0x93
                 1    20.49 us  20.49 us  20.49 us  spinlock  __softirqentry_text_start+0xeb

        $ sudo ./perf lock contention -L ffff9f4140059000
         contended  total wait  max wait  avg wait      type  caller

                38   779.40 us  42.83 us  20.51 us  spinlock  worker_thread+0x50
                11   216.30 us  39.87 us  19.66 us  spinlock  queue_work_on+0x39
                 8   118.13 us  20.51 us  14.77 us  spinlock  kthread+0xe5

   - Fix splitting CC into compiler and options when checking if a
     option is present in clang to build the python binding, needed in
     systems such as yocto that set CC to, e.g.: "gcc --sysroot=/a/b/c".

   - Refresh metrics and events for Intel systems: alderlake,
     alderlake-n, bonnell, broadwell, broadwellde, broadwellx,
     cascadelakex, elkhartlake, goldmont, goldmontplus, haswell,
     haswellx, icelake, icelakex, ivybridge, ivytown, jaketown,
     knightslanding, meteorlake, nehalemep, nehalemex, sandybridge,
     sapphirerapids, silvermont, skylake, skylakex, snowridgex,
     tigerlake, westmereep-dp, westmereep-sp, westmereex.

   - Add vendor events files (JSON) for AMD Zen 4, from sections
     2.1.15.4 "Core Performance Monitor Counters", 2.1.15.5 "L3 Cache
     Performance Monitor Counter"s and Section 7.1 "Fabric Performance
     Monitor Counter (PMC) Events" in the Processor Programming
     Reference (PPR) for AMD Family 19h Model 11h Revision B1
     processors.

     This constitutes events which capture op dispatch, execution and
     retirement, branch prediction, L1 and L2 cache activity, TLB
     activity, L3 cache activity and data bandwidth for various links
     and interfaces in the Data Fabric.

   - Also, from the same PPR are metrics taken from Section 2.1.15.2
     "Performance Measurement", including pipeline utilization, which
     are new to Zen 4 processors and useful for finding performance
     bottlenecks by analyzing activity at different stages of the
     pipeline.

   - Greatly improve the 'srcline', 'srcline_from', 'srcline_to' and
     'srcfile' sort keys performance by postponing calling the external
     addr2line utility to the collapse phase of histogram bucketing.

   - Fix 'perf test' "all PMU test" to skip parametrized events, that
     requires setting up and are not supported by this test.

   - Update tools/ copies of kernel headers: features,
     disabled-features, fscrypt.h, i915_drm.h, msr-index.h, power pc
     syscall table and kvm.h.

   - Add .DELETE_ON_ERROR special Makefile target to clean up partially
     updated files on error.

   - Simplify the mksyscalltbl script for arm64 by avoiding running the
     host compiler to create the syscall table, doing it all just with
     the shell script.

   - Further fixes to honour quiet mode (-q)"

* tag 'perf-tools-for-v6.2-2-2022-12-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (67 commits)
  perf python: Fix splitting CC into compiler and options
  perf scripting python: Don't be strict at handling libtraceevent enumerations
  perf arm64: Simplify mksyscalltbl
  perf build: Remove explicit reference to python 2.x devel files
  perf vendor events amd: Add Zen 4 mapping
  perf vendor events amd: Add Zen 4 metrics
  perf vendor events amd: Add Zen 4 uncore events
  perf vendor events amd: Add Zen 4 core events
  perf vendor events intel: Refresh westmereex events
  perf vendor events intel: Refresh westmereep-sp events
  perf vendor events intel: Refresh westmereep-dp events
  perf vendor events intel: Refresh tigerlake metrics and events
  perf vendor events intel: Refresh snowridgex events
  perf vendor events intel: Refresh skylakex metrics and events
  perf vendor events intel: Refresh skylake metrics and events
  perf vendor events intel: Refresh silvermont events
  perf vendor events intel: Refresh sapphirerapids metrics and events
  perf vendor events intel: Refresh sandybridge metrics and events
  perf vendor events intel: Refresh nehalemex events
  perf vendor events intel: Refresh nehalemep events
  ...

22 months agoperf python: Fix splitting CC into compiler and options
Arnaldo Carvalho de Melo [Thu, 22 Dec 2022 13:56:25 +0000 (10:56 -0300)]
perf python: Fix splitting CC into compiler and options

Noticed this build failure on archlinux:base when building with clang:

  clang-14: error: optimization flag '-ffat-lto-objects' is not supported [-Werror,-Wignored-optimization-argument]

In tools/perf/util/setup.py we check if clang supports that option, but
since commit 3cad53a6f9cdbafa ("perf python: Account for multiple words
in CC") this got broken as in the common case where CC="clang":

  >>> cc="clang"
  >>> print(cc.split()[0])
  clang
  >>> option="-ffat-lto-objects"
  >>> print(str(cc.split()[1:]) + option)
  []-ffat-lto-objects
  >>>

And then the Popen will call clang with that bogus option name that in
turn will not produce the b"unknown argument" or b"is not supported"
that this function uses to detect if the option is not available and
thus later on clang will be called with an unknown/unsupported option.

Fix it by checking whether there really are options in the provided CC
variable, and if so, overriding 'cc' with the first token and appending
the options to the 'option' variable.

Fixes: 3cad53a6f9cdbafa ("perf python: Account for multiple words in CC")
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Fangrui Song <maskray@google.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Keeping <john@metanate.com>
Cc: Khem Raj <raj.khem@gmail.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Link: http://lore.kernel.org/lkml/Y6Rq5F5NI0v1QQHM@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
22 months agoafs: Stop implementing ->writepage()
David Howells [Fri, 18 Nov 2022 07:57:27 +0000 (07:57 +0000)]
afs: Stop implementing ->writepage()

We're trying to get rid of the ->writepage() hook[1].  Stop afs from using
it by unlocking the page and calling afs_writepages_region() rather than
folio_write_one().

A flag is passed to afs_writepages_region() to indicate that it should only
write a single region so that we don't flush the entire file in
->write_begin(), but do add other dirty data to the region being written to
try and reduce the number of RPC ops.

This requires ->migrate_folio() to be implemented, so point that at
filemap_migrate_folio() for files and also for symlinks and directories.

This can be tested by turning on the afs_folio_dirty tracepoint and then
doing something like:

   xfs_io -c "w 2223 7000" -c "w 15000 22222" -c "w 23 7" /afs/my/test/foo

and then looking in the trace to see if the write at position 15000 gets
stored before page 0 gets dirtied for the write at position 23.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Christoph Hellwig <hch@lst.de>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/20221113162902.883850-1-hch@lst.de/
Link: https://lore.kernel.org/r/166876785552.222254.4403222906022558715.stgit@warthog.procyon.org.uk/
22 months agoafs: remove afs_cache_netfs and afs_zap_permits() declarations
Gaosheng Cui [Fri, 9 Sep 2022 07:03:53 +0000 (15:03 +0800)]
afs: remove afs_cache_netfs and afs_zap_permits() declarations

afs_zap_permits() has been removed since
commit be080a6f43c4 ("afs: Overhaul permit caching").

afs_cache_netfs has been removed since
commit 523d27cda149 ("afs: Convert afs to use the new fscache API").

so remove their declarations from the header file.

Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/20220909070353.1160228-1-cuigaosheng1@huawei.com/
22 months agoafs: remove variable nr_servers
Colin Ian King [Thu, 20 Oct 2022 17:39:23 +0000 (18:39 +0100)]
afs: remove variable nr_servers

Variable nr_servers is no longer being used; the last reference to it
was removed in commit 45df8462730d ("afs: Fix server list handling"),
so clean up the code by removing it.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/20221020173923.21342-1-colin.i.king@gmail.com/
22 months agoafs: Fix lost servers_outstanding count
David Howells [Wed, 21 Dec 2022 14:30:48 +0000 (14:30 +0000)]
afs: Fix lost servers_outstanding count

The afs_fs_probe_dispatcher() work function is passed a count on
net->servers_outstanding when it is scheduled (which may come via its
timer).  This is passed back to the work_item, passed to the timer or
dropped at the end of the dispatcher function.

But, at the top of the dispatcher function, there are two checks which
skip the rest of the function: if the network namespace is being destroyed
or if there are no fileservers to probe.  These two return paths, however,
do not drop the count passed to the dispatcher, and so, sometimes, the
destruction of a network namespace, such as induced by rmmod of the kafs
module, may get stuck in afs_purge_servers(), waiting for
net->servers_outstanding to become zero.

Fix this by adding the missing decrements in afs_fs_probe_dispatcher().

Fixes: f6cbb368bcb0 ("afs: Actively poll fileservers to maintain NAT or firewall openings")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/167164544917.2072364.3759519569649459359.stgit@warthog.procyon.org.uk/
22 months agoMerge tag 'asoc-v6.2-3' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie...
Takashi Iwai [Thu, 22 Dec 2022 08:18:38 +0000 (09:18 +0100)]
Merge tag 'asoc-v6.2-3' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Updates for v6.2

Some more small fixes and board quirks that came in since my last
update, the main one being the fixes from Kai for issues around the
attempts to get kexec working well on SOF based systems.

22 months agoALSA: usb-audio: Add new quirk FIXED_RATE for JBL Quantum810 Wireless
Jaroslav Kysela [Thu, 15 Dec 2022 15:30:37 +0000 (16:30 +0100)]
ALSA: usb-audio: Add new quirk FIXED_RATE for JBL Quantum810 Wireless

It seems that the firmware is broken and does not accept
the UAC_EP_CS_ATTR_SAMPLE_RATE URB. There is only one rate (48000Hz)
available in the descriptors for the output endpoint.

Create a new quirk QUIRK_FLAG_FIXED_RATE to skip the rate setup
when only one rate is available (fixed).

BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216798
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20221215153037.1163786-1-perex@perex.cz
Signed-off-by: Takashi Iwai <tiwai@suse.de>
22 months agoALSA: azt3328: Remove the unused function snd_azf3328_codec_outl()
Jiapeng Chong [Tue, 13 Dec 2022 06:13:55 +0000 (14:13 +0800)]
ALSA: azt3328: Remove the unused function snd_azf3328_codec_outl()

The function snd_azf3328_codec_outl is defined in the azt3328.c file, but
not called elsewhere, so remove this unused function.

sound/pci/azt3328.c:367:1: warning: unused function 'snd_azf3328_codec_outl'.

Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3432
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Link: https://lore.kernel.org/r/20221213061355.62856-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
22 months agoMerge branch 'for-next' into for-linus
Takashi Iwai [Thu, 22 Dec 2022 08:11:48 +0000 (09:11 +0100)]
Merge branch 'for-next' into for-linus