Qiuxu Zhuo [Wed, 22 Mar 2023 11:42:41 +0000 (19:42 +0800)]
rcu/rcuscale: Stop kfree_scale_thread thread(s) after unloading rcuscale
[ Upstream commit 23fc8df26dead16687ae6eb47b0561a4a832e2f6 ]
Running the 'kfree_rcu_test' test case [1] results in a splat [2].
The root cause is that the kfree_scale_thread thread(s) continue running
after unloading the rcuscale module. This commit fixes that issue by
invoking kfree_scale_cleanup() from rcu_scale_cleanup() when removing
the rcuscale module.
[1] modprobe rcuscale kfree_rcu_test=1
// After some time
rmmod rcuscale
rmmod torture
[2] BUG: unable to handle page fault for address: ffffffffc0601a87
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
PGD 11de4f067 P4D 11de4f067 PUD 11de51067 PMD 112f4d067 PTE 0
Oops: 0010 [#1] PREEMPT SMP NOPTI
CPU: 1 PID: 1798 Comm: kfree_scale_thr Not tainted 6.3.0-rc1-rcu+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
RIP: 0010:0xffffffffc0601a87
Code: Unable to access opcode bytes at 0xffffffffc0601a5d.
RSP: 0018:ffffb25bc2e57e18 EFLAGS: 00010297
RAX: 0000000000000000 RBX: ffffffffc061f0b6 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff962fd0de RDI: ffffffff962fd0de
RBP: ffffb25bc2e57ea8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000
R13: 0000000000000000 R14: 000000000000000a R15: 00000000001c1dbe
FS:  0000000000000000(0000) GS:ffff921fa2200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffc0601a5d CR3: 000000011de4c006 CR4: 0000000000370ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
? kvfree_call_rcu+0xf0/0x3a0
? kthread+0xf3/0x120
? kthread_complete_and_exit+0x20/0x20
? ret_from_fork+0x1f/0x30
</TASK>
Modules linked in: rfkill sunrpc ... [last unloaded: torture]
CR2: ffffffffc0601a87
---[ end trace 0000000000000000 ]---
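The shape of the fix is small: make the rcuscale module-exit path also tear
down the kfree_rcu() test kthreads. A minimal sketch of the idea (names
follow kernel/rcu/rcuscale.c, but this is an illustration rather than the
literal patch):

    static void rcu_scale_cleanup(void)
    {
            /*
             * When loaded with kfree_rcu_test=1, the kthreads to stop are
             * the kfree_scale_thread() workers; reap them here so nothing
             * keeps running after rmmod.
             */
            if (kfree_rcu_test) {
                    kfree_scale_cleanup();
                    return;
            }

            /* ... existing teardown of the rcu_scale reader/writer kthreads ... */
    }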
Fixes: e6e78b004fa7 ("rcuperf: Add kfree_rcu() performance Tests")
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Qiuxu Zhuo [Wed, 22 Mar 2023 11:42:40 +0000 (19:42 +0800)]
rcu/rcuscale: Move rcu_scale_*() after kfree_scale_cleanup()
[ Upstream commit bf5ddd736509a7d9077c0b6793e6f0852214dbea ]
This code-movement-only commit moves the rcu_scale_cleanup() and
rcu_scale_shutdown() functions to follow kfree_scale_cleanup().
This code movement is in preparation for a bug-fix patch that invokes
kfree_scale_cleanup() from rcu_scale_cleanup().
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Stable-dep-of: 23fc8df26dea ("rcu/rcuscale: Stop kfree_scale_thread thread(s) after unloading rcuscale")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Paul E. McKenney [Tue, 31 Jan 2023 20:08:54 +0000 (12:08 -0800)]
rcuscale: Move shutdown from wait_event() to wait_event_idle()
[ Upstream commit ef1ef3d47677dc191b88650a9f7f91413452cc1b ]
The rcu_scale_shutdown() and kfree_scale_shutdown() kthreads/functions
use wait_event() to wait for the rcuscale test to complete. However,
each updater thread in such a test waits for at least 100 grace periods.
If each grace period takes more than 1.2 seconds, which is long, but
not insanely so, this can trigger the hung-task timeout.
This commit therefore replaces those wait_event() calls with calls to
wait_event_idle(), which do not trigger the hung-task timeout.
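Schematically, the change at each shutdown wait site looks like this (the
wait-queue and condition names here are stand-ins for the actual ones in
kernel/rcu/rcuscale.c):

    /* Before: a long (but finite) sleep can trip the hung-task detector. */
    wait_event(shutdown_wq,
               atomic_read(&n_writers_finished) >= nrealwriters);

    /* After: a TASK_IDLE sleep, invisible to khungtaskd, same wake-up semantics. */
    wait_event_idle(shutdown_wq,
                    atomic_read(&n_writers_finished) >= nrealwriters);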
Reported-by: kernel test robot <yujie.liu@intel.com>
Reported-by: Liam Howlett <liam.howlett@oracle.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Yujie Liu <yujie.liu@intel.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Stable-dep-of: 23fc8df26dea ("rcu/rcuscale: Stop kfree_scale_thread thread(s) after unloading rcuscale")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Paul E. McKenney [Tue, 21 Mar 2023 23:40:08 +0000 (16:40 -0700)]
rcutorture: Correct name of use_softirq module parameter
[ Upstream commit b409afe0268faeb77267f028ea85f2d93438fced ]
The BUSTED-BOOST and TREE03 scenarios specify a mythical tree.use_softirq
module parameter, which means a failure to get full test coverage. This
commit therefore corrects the name to rcutree.use_softirq.
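Concretely, the scenarios' boot-argument files change along these lines
(shown schematically; the real files live under
tools/testing/selftests/rcutorture/configs/rcu/):

    # Before: "tree." names no existing module, so the kernel ignores it.
    tree.use_softirq=0

    # After: matches the real rcutree parameter namespace.
    rcutree.use_softirq=0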
Fixes: e2b949d54392 ("rcutorture: Make TREE03 use real-time tree.use_softirq setting")
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Paul E. McKenney [Wed, 26 Apr 2023 18:11:29 +0000 (11:11 -0700)]
rcu-tasks: Stop rcu_tasks_invoke_cbs() from using never-onlined CPUs
[ Upstream commit 401b0de3ae4fa49d1014c8941e26d9a25f37e7cf ]
The rcu_tasks_invoke_cbs() function relies on queue_work_on() to silently
fall back to WORK_CPU_UNBOUND when the specified CPU is offline. However,
the queue_work_on() function's silent fallback mechanism relies on that
CPU having been online at some time in the past. When queue_work_on()
is passed a CPU that has never been online, workqueue lockups ensue,
which can be bad for your kernel's general health and well-being.
This commit therefore checks whether a given CPU has ever been online,
and, if not, substitutes WORK_CPU_UNBOUND in the subsequent call to
queue_work_on(). Why not simply omit the queue_work_on() call entirely?
Because this function is flooding callback-invocation notifications
to all CPUs, and must deal with possibilities that include a sparse
cpu_possible_mask.
This commit also moves the setting of the rcu_data structure's
->beenonline field to rcu_cpu_starting(), which executes on the
incoming CPU before that CPU has ever enabled interrupts. This ensures
that the required workqueues are present. In addition, because the
incoming CPU has not yet enabled its interrupts, there cannot yet have
been any softirq handlers running on this CPU, which means that the
WARN_ON_ONCE(!rdp->beenonline) within the RCU_SOFTIRQ handler cannot
have triggered yet.
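The queue_work_on() call therefore ends up with roughly this shape (a
sketch; the name of the "has this CPU ever been fully online" helper and
the per-CPU fields follow kernel/rcu/tasks.h but should be treated as
illustrative):

    /* Never hand queue_work_on() a CPU that has never been online. */
    int dest_cpu = rcu_cpu_beenfullyonline(cpu) ? cpu : WORK_CPU_UNBOUND;

    queue_work_on(dest_cpu, system_wq, &rtpcp->rtp_work);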
Fixes: d363f833c6d88 ("rcu-tasks: Use workqueues for multiple rcu_tasks_invoke_cbs() invocations")
Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Paul E. McKenney [Thu, 27 Apr 2023 17:50:47 +0000 (10:50 -0700)]
rcu: Make rcu_cpu_starting() rely on interrupts being disabled
[ Upstream commit 15d44dfa40305da1648de4bf001e91cc63148725 ]
Currently, rcu_cpu_starting() is written so that it might be invoked
with interrupts enabled. However, it is always called when interrupts
are disabled, either by rcu_init(), notify_cpu_starting(), or from a
call point prior to the call to notify_cpu_starting().
But why bother requiring that interrupts be disabled? The purpose is
to allow the rcu_data structure's ->beenonline flag to be set after all
early processing has completed for the incoming CPU, thus allowing this
flag to be used to determine when workqueues have been set up for the
incoming CPU, while still allowing this flag to be used as a diagnostic
within rcu_core().
This commit therefore makes rcu_cpu_starting() rely on interrupts being
disabled.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Stable-dep-of: 401b0de3ae4f ("rcu-tasks: Stop rcu_tasks_invoke_cbs() from using never-onlined CPUs")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Christophe JAILLET [Sun, 14 May 2023 18:46:05 +0000 (20:46 +0200)]
thermal/drivers/sun8i: Fix some error handling paths in sun8i_ths_probe()
[ Upstream commit 89382022b370dfd34eaae9c863baa123fcd4d132 ]
Should an error occur after calling sun8i_ths_resource_init() in the probe
function, some resources need to be released, as already done in the
.remove() function.
Switch to the devm_clk_get_enabled() helper and add a new devm_action to
turn sun8i_ths_resource_init() into a fully managed function.
Move the place where reset_control_deassert() is called so that the
recommended order of reset release/clock enable steps is kept.
A64 manual states that:
3.3.6.4. Gating and reset
Make sure that the reset signal has been released before the release of
module clock gating;
This fixes the issue and removes some LoC at the same time.
Fixes: dccc5c3b6f30 ("thermal/drivers/sun8i: Add thermal driver for H6/H5/H3/A64/A83T/R40")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Maxime Ripard <maxime@cerno.tech>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/a8ae84bd2dc4b55fe428f8e20f31438bf8bb6762.1684089931.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tero Kristo [Wed, 21 Jun 2023 06:58:39 +0000 (09:58 +0300)]
cpufreq: intel_pstate: Fix energy_performance_preference for passive
[ Upstream commit 03f44ffb3d5be2fceda375d92c70ab6de4df7081 ]
If the intel_pstate driver is set to passive mode, then writing the
same value to the energy_performance_preference sysfs twice will fail.
This is caused by the wrong return value used (index of the matched
energy_perf_string), instead of the length of the passed in parameter.
Fix by forcing the internal return value to zero when the same
preference is passed in by user. This same issue is not present when
active mode is used for the driver.
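A sketch of the idea inside the sysfs store handler (variable names
illustrative; the real code is store_energy_performance_preference() in
drivers/cpufreq/intel_pstate.c, whose final statement is of the
"return ret ?: count;" form):

    /*
     * 'ret' still holds the index of the matched string in
     * energy_perf_strings[].  If the requested preference is already the
     * current one, force it to 0 so sysfs reports 'count' bytes consumed
     * instead of leaking the index as the return value.
     */
    if (epp == cpu->epp_cached)
            ret = 0;

    return ret ?: count;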
Fixes: f6ebbcf08f37 ("cpufreq: intel_pstate: Implement passive mode with HWP enabled")
Reported-by: Niklas Neronin <niklas.neronin@intel.com>
Signed-off-by: Tero Kristo <tero.kristo@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Arnd Bergmann [Fri, 2 Jun 2023 18:28:42 +0000 (19:28 +0100)]
ARM: 9303/1: kprobes: avoid missing-declaration warnings
[ Upstream commit 1b9c3ddcec6a55e15d3e38e7405e2d078db02020 ]
checker_stack_use_t32strd() and kprobe_handler() can be made static since
they are not used from other files, while coverage_start_registers()
and __kprobes_test_case() are used from assembler code, and just need
a declaration to avoid a warning with the global definition.
arch/arm/probes/kprobes/checkers-common.c:43:18: error: no previous prototype for 'checker_stack_use_t32strd'
arch/arm/probes/kprobes/core.c:236:16: error: no previous prototype for 'kprobe_handler'
arch/arm/probes/kprobes/test-core.c:723:10: error: no previous prototype for 'coverage_start_registers'
arch/arm/probes/kprobes/test-core.c:918:14: error: no previous prototype for '__kprobes_test_case_start'
arch/arm/probes/kprobes/test-core.c:952:14: error: no previous prototype for '__kprobes_test_case_end_16'
arch/arm/probes/kprobes/test-core.c:967:14: error: no previous prototype for '__kprobes_test_case_end_32'
Fixes: 6624cf651f1a ("ARM: kprobes: collects stack consumption for store instructions")
Fixes: 454f3e132d05 ("ARM/kprobes: Remove jprobe arm implementation")
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Ulf Hansson [Tue, 30 May 2023 09:55:36 +0000 (11:55 +0200)]
PM: domains: Move the verification of in-params from genpd_add_device()
[ Upstream commit 4384a70c8813e8573d1841fd94eee873f80a7e1a ]
Commit f38d1a6d0025 ("PM: domains: Allocate governor data dynamically
based on a genpd governor") started to use the in-parameters in
genpd_add_device(), without first doing a verification of them.
This isn't really a big problem, as most callers do a verification already.
Therefore, let's drop the verification from genpd_add_device() and make
sure all the callers take care of it instead.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Fixes: f38d1a6d0025 ("PM: domains: Allocate governor data dynamically based on a genpd governor")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhang Rui [Tue, 6 Jun 2023 14:00:00 +0000 (22:00 +0800)]
powercap: RAPL: Fix CONFIG_IOSF_MBI dependency
[ Upstream commit 4658fe81b3f8afe8adf37734ec5fe595d90415c6 ]
After commit 3382388d7148 ("intel_rapl: abstract RAPL common code"),
access to the IOSF_MBI interface is done in the RAPL common code.
Thus it is CONFIG_INTEL_RAPL_CORE that has the dependency on
CONFIG_IOSF_MBI, while CONFIG_INTEL_RAPL_MSR does not.
This problem was not exposed previously because all the previous RAPL
common code users, aka, the RAPL MSR and MMIO I/F drivers, have
CONFIG_IOSF_MBI selected.
Fix the CONFIG_IOSF_MBI dependency in RAPL code. This also fixes a build
time failure when the RAPL TPMI I/F driver is introduced without
selecting CONFIG_IOSF_MBI.
x86_64-linux-ld: vmlinux.o: in function `set_floor_freq_atom':
intel_rapl_common.c:(.text+0x2dac9b8): undefined reference to `iosf_mbi_write'
x86_64-linux-ld: intel_rapl_common.c:(.text+0x2daca66): undefined reference to `iosf_mbi_read'
Reference to iosf_mbi.h is also removed from the RAPL MSR I/F driver.
Fixes: 3382388d7148 ("intel_rapl: abstract RAPL common code")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/all/20230601213246.3271412-1-arnd@kernel.org
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Junhao He [Thu, 8 Jun 2023 11:43:26 +0000 (19:43 +0800)]
drivers/perf: hisi: Don't migrate perf to the CPU going to teardown
[ Upstream commit 7a6a9f1c5a0a875a421db798d4b2ee022dc1ee1a ]
The driver needs to migrate the perf context if the CPU it is currently
using is going to be torn down. By the time the cpuhp::teardown() callback
is called, cpu_online_mask() has not been updated yet and still includes
the CPU going to teardown. In the current driver's implementation we may
migrate the context to the teardown CPU, which leads to the calltrace
below:
...
[ 368.104662][ T932] task:cpuhp/0 state:D stack: 0 pid: 15 ppid: 2 flags:0x00000008
[ 368.113699][ T932] Call trace:
[ 368.116834][ T932] __switch_to+0x7c/0xbc
[ 368.120924][ T932] __schedule+0x338/0x6f0
[ 368.125098][ T932] schedule+0x50/0xe0
[ 368.128926][ T932] schedule_preempt_disabled+0x18/0x24
[ 368.134229][ T932] __mutex_lock.constprop.0+0x1d4/0x5dc
[ 368.139617][ T932] __mutex_lock_slowpath+0x1c/0x30
[ 368.144573][ T932] mutex_lock+0x50/0x60
[ 368.148579][ T932] perf_pmu_migrate_context+0x84/0x2b0
[ 368.153884][ T932] hisi_pcie_pmu_offline_cpu+0x90/0xe0 [hisi_pcie_pmu]
[ 368.160579][ T932] cpuhp_invoke_callback+0x2a0/0x650
[ 368.165707][ T932] cpuhp_thread_fun+0xe4/0x190
[ 368.170316][ T932] smpboot_thread_fn+0x15c/0x1a0
[ 368.175099][ T932] kthread+0x108/0x13c
[ 368.179012][ T932] ret_from_fork+0x10/0x18
...
Use cpumask_any_but() to find a correct active CPU to fix
this issue.
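The CPU-selection step in the hotplug-offline callback then looks roughly
like this (a sketch; 'on_cpu' stands in for whatever field the driver uses
to record its active CPU):

    /* Pick an online CPU that is not the one being torn down. */
    unsigned int target = cpumask_any_but(cpu_online_mask, cpu);

    if (target >= nr_cpu_ids)       /* no other online CPU left */
            return 0;

    perf_pmu_migrate_context(&hisi_pmu->pmu, cpu, target);
    hisi_pmu->on_cpu = target;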
Fixes: 8404b0fbc7fb ("drivers/perf: hisi: Add driver for HiSilicon PCIe PMU")
Signed-off-by: Junhao He <hejunhao3@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230608114326.27649-1-hejunhao3@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kirill A. Shutemov [Tue, 6 Jun 2023 09:56:21 +0000 (12:56 +0300)]
x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
[ Upstream commit 195edce08b63d293377f615f4f7f086715d2d212 ]
tl;dr: There is a race in the TDX private<=>shared conversion code
which could kill the TDX guest. Fix it by changing conversion
ordering to eliminate the window.
TDX hardware maintains metadata to track which pages are private and
shared. Additionally, TDX guests use the guest x86 page tables to
specify whether a given mapping is intended to be private or shared.
Bad things happen when the intent and metadata do not match.
So there are two things in play:
1. "the page" -- the physical TDX page metadata
2. "the mapping" -- the guest-controlled x86 page table intent
For instance, an unrecoverable exit to VMM occurs if a guest touches a
private mapping that points to a shared physical page.
In summary:
* Private mapping => Private Page == OK (obviously)
* Shared mapping => Shared Page == OK (obviously)
* Private mapping => Shared Page == BIG BOOM!
* Shared mapping => Private Page == OK-ish
(A read will generate a recoverable #VE via handle_mmio())
Enter load_unaligned_zeropad(). It can touch memory that is adjacent but
otherwise unrelated to the memory it needs to touch. It will cause one
of those unrecoverable exits (aka. BIG BOOM) if it blunders into a
shared mapping pointing to a private page.
This is a problem when __set_memory_enc_pgtable() converts pages from
shared to private. It first changes the mapping and second modifies
the TDX page metadata. It's moving from:
* Shared mapping => Shared Page == OK
to:
* Private mapping => Shared Page == BIG BOOM!
This means that there is a window with a shared mapping pointing to a
private page where load_unaligned_zeropad() can strike.
Add a TDX handler for guest.enc_status_change_prepare(). This converts
the page from shared to private *before* the mapping becomes private. This
ensures that there is never a private mapping to a shared page.
Leave a guest.enc_status_change_finish() in place but only use it for
private=>shared conversions. This will delay updating the TDX metadata
marking the page private until *after* the mapping matches the metadata.
This also ensures that there is never a private mapping to a shared page.
[ dhansen: rewrite changelog ]
Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: https://lore.kernel.org/all/20230606095622.1939-3-kirill.shutemov%40linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kirill A. Shutemov [Tue, 6 Jun 2023 09:56:20 +0000 (12:56 +0300)]
x86/mm: Allow guest.enc_status_change_prepare() to fail
[ Upstream commit 3f6819dd192ef4f0c568ec3e9d6d408b3fa1ad3d ]
TDX code is going to provide guest.enc_status_change_prepare() that is
able to fail. TDX will use the call to convert the GPA range from shared
to private. This operation can fail.
Add a way to return an error from the callback.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: https://lore.kernel.org/all/20230606095622.1939-2-kirill.shutemov%40linux.intel.com
Stable-dep-of: 195edce08b63 ("x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Robin Murphy [Wed, 24 May 2023 16:44:32 +0000 (17:44 +0100)]
perf/arm-cmn: Fix DTC reset
[ Upstream commit 71746c995cac92fcf6a65661b51211cf2009d7f0 ]
It turns out that my naive DTC reset logic fails to work as intended,
since, after checking with the hardware designers, the PMU actually
needs to be fully enabled in order to correctly clear any pending
overflows. Therefore, invert the sequence to start with turning on both
enables so that we can reliably get the DTCs into a known state, then
moving to our normal counters-stopped state from there. Since all the
DTM counters have already been unpaired during the initial discovery
pass, we just need to additionally reset the cycle counters to ensure
that no other unexpected overflows occur during this period.
Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Reported-by: Geoff Blake <blakgeof@amazon.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/0ea4559261ea394f827c9aee5168c77a60aaee03.1684946389.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Nikita Zhandarovich [Tue, 18 Apr 2023 13:07:43 +0000 (06:07 -0700)]
PM: domains: fix integer overflow issues in genpd_parse_state()
[ Upstream commit e5d1c8722083f0332dcd3c85fa1273d85fb6bed8 ]
Currently, while calculating residency and latency values, the right-hand
operands may overflow if the resulting values are big enough.
To prevent this, albeit an unlikely case, play it safe and convert the
right-hand operands to the left-hand operands' type, s64.
Found by Linux Verification Center (linuxtesting.org) with static
analysis tool SVACE.
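The overflow itself is the classic 32-bit-intermediate problem: the
destination fields are s64, but the multiplication on the right-hand side
is evaluated in 32 bits first. A small standalone illustration (the
5-second latency value is made up):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t entry_latency_us = 5000000;          /* 5 s, fits in a u32 */

            int64_t wrong = 1000 * entry_latency_us;          /* 32-bit multiply wraps first */
            int64_t right = (int64_t)1000 * entry_latency_us; /* widened before multiplying */

            printf("wrong = %lld ns\nright = %lld ns\n",
                   (long long)wrong, (long long)right);
            return 0;
    }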
Fixes: 30f604283e05 ("PM / Domains: Allow domain power states to be read from DT")
Signed-off-by: Nikita Zhandarovich <n.zhandarovich@fintech.ru>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Feng Mingxi [Tue, 25 Apr 2023 06:56:11 +0000 (06:56 +0000)]
clocksource/drivers/cadence-ttc: Fix memory leak in ttc_timer_probe
[ Upstream commit 8b5bf64c89c7100c921bd807ba39b2eb003061ab ]
Smatch reports:
drivers/clocksource/timer-cadence-ttc.c:529 ttc_timer_probe()
warn: 'timer_baseaddr' from of_iomap() not released on lines: 498,508,516.
timer_baseaddr may not be released after use. Fix this by switching to
the devm_of_iomap() function and adding clk_put() calls to clean up
"clk_ce" and "clk_cs".
Fixes: e932900a3279 ("arm: zynq: Use standard timer binding")
Fixes: 70504f311d4b ("clocksource/drivers/cadence_ttc: Convert init function to return error")
Signed-off-by: Feng Mingxi <m202271825@hust.edu.cn>
Reviewed-by: Dongliang Mu <dzm91@hust.edu.cn>
Acked-by: Michal Simek <michal.simek@amd.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20230425065611.702917-1-m202271825@hust.edu.cn
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sebastian Andrzej Siewior [Tue, 18 Apr 2023 14:38:54 +0000 (16:38 +0200)]
tracing/timer: Add missing hrtimer modes to decode_hrtimer_mode().
[ Upstream commit 2951580ba6adb082bb6b7154a5ecb24e7c1f7569 ]
The trace output for the HRTIMER_MODE_.*_HARD modes is seen as a number
since these modes are not decoded. The author was not aware of the fancy
decoding function which makes life easier.
Extend decode_hrtimer_mode() with the additional HRTIMER_MODE_.*_HARD
modes.
Fixes: ae6683d815895 ("hrtimer: Introduce HARD expiry mode")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20230418143854.8vHWQKLM@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
Wen Yang [Thu, 4 May 2023 16:12:53 +0000 (00:12 +0800)]
tick/rcu: Fix bogus ratelimit condition
[ Upstream commit a7e282c77785c7eabf98836431b1f029481085ad ]
The ratelimit logic in report_idle_softirq() is broken because the
exit condition is always true:
    static int ratelimit;

    if (ratelimit < 10)
            return false;   ---> always returns here
    ratelimit++;            ---> no chance to run
Make it check for >= 10 instead.
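With that, the reporting path has roughly this shape (a sketch of
report_idle_softirq(); the warning text and the surrounding checks are
elided):

    static int ratelimit;

    if (ratelimit >= 10)    /* was: ratelimit < 10, which bailed out too early */
            return false;

    /* ... emit the "softirq work is pending while idle" warning here ... */
    ratelimit++;
    return true;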
Fixes: 0345691b24c0 ("tick/rcu: Stop allowing RCU_SOFTIRQ in idle")
Signed-off-by: Wen Yang <wenyang.linux@foxmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/tencent_5AAA3EEAB42095C9B7740BE62FBF9A67E007@qq.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Thomas Gleixner [Thu, 1 Jun 2023 20:16:34 +0000 (22:16 +0200)]
posix-timers: Prevent RT livelock in itimer_delete()
[ Upstream commit 9d9e522010eb5685d8b53e8a24320653d9d4cbbf ]
itimer_delete() has a retry loop when the timer is concurrently expired. On
non-RT kernels this just spin-waits until the timer callback has completed,
except for posix CPU timers which have HAVE_POSIX_CPU_TIMERS_TASK_WORK
enabled.
In that case, and on RT kernels, the exiting task could livelock when
preempting the task which does the timer delivery.
Replace spin_unlock() with an invocation of timer_wait_running() to handle
it the same way as the other retry loops in the posix timer code.
Fixes: ec8f954a40da ("posix-timers: Use a callback for cancel synchronization on PREEMPT_RT")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/87v8g7c50d.ffs@tglx
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Thu, 1 Jun 2023 11:23:41 +0000 (19:23 +0800)]
erofs: fix compact 4B support for 16k block size
[ Upstream commit 001b8ccd0650727e54ec16ef72bf1b8eeab7168e ]
In compact 4B, two adjacent lclusters are packed together as a unit to
form on-disk indexes for effective random access, as below:
(amortized = 4, vcnt = 2)
         _____________________________________________
        |___@_____ encoded bits __________|_ blkaddr _|
        0        .                        .            amortized * vcnt = 8
                 .                        .
                 .                  .     amortized * vcnt - 4 = 4
                 .                  .
                 .__________________.
                 |_type (2 bits)_|_clusterofs_|
Therefore, encoded bits for each pack are 32 bits (4 bytes). IOWs,
since each lcluster can get 16 bits for its type and clusterofs, the
maximum supported lclustersize for compact 4B format is 16k (14 bits).
Fix this to enable compact 4B format for 16k lclusters (blocks), which
is tested on an arm64 server with 16k page size.
Fixes: 152a333a5895 ("staging: erofs: add compacted compression indexes support")
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230601112341.56960-1-hsiangkao@linux.alibaba.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Sat, 14 Jan 2023 15:08:23 +0000 (23:08 +0800)]
erofs: simplify iloc()
[ Upstream commit b780d3fc6107464dcc43631a6208c43b6421f1e6 ]
Actually we could pass in inodes directly to clean up all callers.
Also rename iloc() as erofs_iloc().
Link: https://lore.kernel.org/r/20230114150823.432069-1-xiang@kernel.org
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Stable-dep-of: 001b8ccd0650 ("erofs: fix compact 4B support for 16k block size")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chuck Lever [Mon, 12 Jun 2023 14:10:20 +0000 (10:10 -0400)]
svcrdma: Prevent page release when nothing was received
[ Upstream commit baf6d18b116b7dc84ed5e212c3a89f17cdc3f28c ]
I noticed that svc_rqst_release_pages() was still unnecessarily
releasing a page when svc_rdma_recvfrom() returns zero.
Fixes: a53d5cb0646a ("svcrdma: Avoid releasing a page in svc_xprt_release()")
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
John Paul Adrian Glaubitz [Wed, 10 May 2023 16:33:42 +0000 (18:33 +0200)]
irqchip/jcore-aic: Fix missing allocation of IRQ descriptors
[ Upstream commit 4848229494a323eeaab62eee5574ef9f7de80374 ]
The initialization function for the J-Core AIC aic_irq_of_init() is
currently missing the call to irq_alloc_descs() which allocates and
initializes all the IRQ descriptors. Add missing function call and
return the error code from irq_alloc_descs() in case the allocation
fails.
Fixes: 981b58f66cfc ("irqchip/jcore-aic: Add J-Core AIC driver")
Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Tested-by: Rob Landley <rob@landley.net>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230510163343.43090-1-glaubitz@physik.fu-berlin.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
Antonio Borneo [Thu, 1 Jun 2023 15:56:14 +0000 (17:56 +0200)]
irqchip/stm32-exti: Fix warning on initialized field overwritten
[ Upstream commit 48f31e496488a25f443c0df52464da446fb1d10c ]
While compiling with W=1, both gcc and clang complain about a
tricky way of initializing an array by filling it with a non-zero
value and then overriding some of the array elements.
In this case the override is intentional, so just disable the
specific warning for only this part of the code.
Note: the flag "-Woverride-init" is recognized by both compilers,
but the warning message from clang reports "-Winitializer-overrides".
The clang documentation clarifies that the two flags are synonyms, so
use only the flag name common to both compilers here.
Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com>
Fixes: c297493336b7 ("irqchip/stm32-exti: Simplify irq description table")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230601155614.34490-1-antonio.borneo@foss.st.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yu Kuai [Sat, 10 Jun 2023 02:20:03 +0000 (10:20 +0800)]
block: fix blktrace debugfs entries leakage
[ Upstream commit dd7de3704af9989b780693d51eaea49a665bd9c2 ]
Commit 99d055b4fd4b ("block: remove per-disk debugfs files in
blk_unregister_queue") moved blk_trace_shutdown() from
blk_release_queue() to blk_unregister_queue(). This is safe if blktrace
is created through sysfs; however, there is a regression in a corner
case.
blktrace can still be enabled after del_gendisk() through ioctl if
the disk is opened before del_gendisk(), and if blktrace is not shut down
through ioctl before closing the disk, debugfs entries will be leaked.
Fix this problem by shutting down blktrace in disk_release(); this is safe
because blk_trace_remove() is reentrant.
Fixes: 99d055b4fd4b ("block: remove per-disk debugfs files in blk_unregister_queue")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20230610022003.2557284-4-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yu Kuai [Mon, 29 May 2023 13:11:03 +0000 (21:11 +0800)]
md/raid1-10: submit write io directly if bitmap is not enabled
[ Upstream commit 7db922bae3abdf0a1db81ef7228cc0b996a0c1e3 ]
Commit 6cce3b23f6f8 ("[PATCH] md: write intent bitmap support for raid10")
added bitmap support, and it changed write io to be submitted through the
daemon thread, because the bitmap needs to be updated before the write io.
Later, plugging was used to fix a performance regression, because sending
all write io to the daemon thread means io can't be issued concurrently.
However, if the bitmap is not enabled, the write io should not go to the
daemon thread in the first place, and plugging is not needed either.
Fixes: 6cce3b23f6f8 ("[PATCH] md: write intent bitmap support for raid10")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230529131106.2123367-5-yukuai1@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yu Kuai [Mon, 29 May 2023 13:11:02 +0000 (21:11 +0800)]
md/raid1-10: factor out a helper to submit normal write
[ Upstream commit 8295efbe68c080047e98d9c0eb5cb933b238a8cb ]
There are multiple places to do the same thing, factor out a helper to
prevent redundant code, and the helper will be used in following patch
as well.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230529131106.2123367-4-yukuai1@huaweicloud.com
Stable-dep-of: 7db922bae3ab ("md/raid1-10: submit write io directly if bitmap is not enabled")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yu Kuai [Mon, 29 May 2023 13:11:01 +0000 (21:11 +0800)]
md/raid1-10: factor out a helper to add bio to plug
[ Upstream commit 5ec6ca140a034682e421e2e808ef5ddfdfd65242 ]
The code in raid1 and raid10 is identical, prepare to limit the number
of plugged bios.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230529131106.2123367-3-yukuai1@huaweicloud.com
Stable-dep-of: 7db922bae3ab ("md/raid1-10: submit write io directly if bitmap is not enabled")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Fri, 2 Jun 2023 09:18:39 +0000 (17:18 +0800)]
md/raid10: fix io loss while replacement replace rdev
[ Upstream commit 2ae6aaf76912bae53c74b191569d2ab484f24bf3 ]
When removing a disk with a replacement, the replacement will be used to
replace rdev. During this process, there is a brief window in which both
rdev and replacement are read as NULL in raid10_write_request(). This
results in io not being submitted when it should be.
//remove                                //write
raid10_remove_disk                      raid10_write_request
 mirror->rdev = NULL
                                         read rdev -> NULL
 mirror->rdev = mirror->replacement
 mirror->replacement = NULL
                                         read replacement -> NULL
Fix it by reading replacement first and rdev later, and use smp_mb()
to prevent memory reordering.
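On the write side this amounts to the following ordering (a sketch; the
barrier pairs with one added on the removal side in raid10_remove_disk()):

    /* Read 'replacement' first, then 'rdev', never the other way around. */
    rrdev = rcu_dereference(mirror->replacement);
    smp_mb();       /* paired with the barrier in raid10_remove_disk() */
    rdev = rcu_dereference(mirror->rdev);

    if (!rdev && !rrdev) {
            /* Only now is it valid to conclude there is nowhere to write. */
    }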
Fixes: 475b0321a4df ("md/raid10: writes should get directed to replacement as well as original.")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230602091839.743798-3-linan666@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Sat, 27 May 2023 07:22:15 +0000 (15:22 +0800)]
md/raid10: fix null-ptr-deref of mreplace in raid10_sync_request
[ Upstream commit 34817a2441747b48e444cb0e05d84e14bc9443da ]
There are two checks of 'mreplace' in raid10_sync_request(). In the first
check, 'need_replace' will be set and 'mreplace' will be used later if a
non-Faulty 'mreplace' exists. In the second check, 'mreplace' will be
set to NULL if it is Faulty, but 'need_replace' will not be changed
accordingly. A null-ptr-deref occurs if Faulty is set between the two
checks.
Fix it by merging the two checks into one, and replace 'need_replace' with
'mreplace' because their values are always the same.
Fixes: ee37d7314a32 ("md/raid10: Fix raid10 replace hang when new added disk faulty")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230527072218.2365857-2-linan666@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Mon, 22 May 2023 07:25:34 +0000 (15:25 +0800)]
md/raid10: fix wrong setting of max_corr_read_errors
[ Upstream commit f8b20a405428803bd9881881d8242c9d72c6b2b2 ]
There is no input check when echoing to md/max_read_errors, so overflow
might occur. Add a check of the input number.
Fixes: 1e50915fe0bb ("raid: improve MD/raid10 handling of correctable read errors.")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230522072535.1523740-3-linan666@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Mon, 22 May 2023 07:25:33 +0000 (15:25 +0800)]
md/raid10: fix overflow of md/safe_mode_delay
[ Upstream commit 6beb489b2eed25978523f379a605073f99240c50 ]
There is no input check when echoing to md/safe_mode_delay in
safe_delay_store(), and msec might also overflow when HZ < 1000 in
safe_delay_show(). Fix this by checking for overflow in safe_delay_store()
and by using an unsigned long conversion in safe_delay_show().
Fixes: 72e02075a33f ("md: factor out parsing of fixed-point numbers")
Signed-off-by: Li Nan <linan122@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230522072535.1523740-2-linan666@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Mon, 15 May 2023 13:48:05 +0000 (21:48 +0800)]
md/raid10: check slab-out-of-bounds in md_bitmap_get_counter
[ Upstream commit 301867b1c16805aebbc306aafa6ecdc68b73c7e5 ]
If we write a large number to md/bitmap_set_bits, md_bitmap_checkpage()
will return -EINVAL because 'page >= bitmap->pages', but the return value
is not checked immediately in md_bitmap_get_counter(), which goes on to
set the *blocks value, and a slab-out-of-bounds access occurs.
Move the check of 'page >= bitmap->pages' into md_bitmap_get_counter() and
return directly if it is true.
Fixes: ef4256733506 ("md/bitmap: optimise scanning of empty bitmaps.")
Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230515134808.3936750-2-linan666@huaweicloud.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chaitanya Kulkarni [Fri, 28 Apr 2023 07:31:15 +0000 (00:31 -0700)]
nvme-core: fix dev_pm_qos memleak
[ Upstream commit 7ed5cf8e6d9bfb6a78d0471317edff14f0f2b4dd ]
Call dev_pm_qos_hide_latency_tolerance() in the error unwind path to
avoid the following kmemleak report:
blktests (master) # kmemleak-clear; ./check nvme/044;
blktests (master) # kmemleak-scan ; kmemleak-show
nvme/044 (Test bi-directional authentication) [passed]
runtime 2.111s ... 2.124s
unreferenced object 0xffff888110c46240 (size 96):
  comm "nvme", pid 33461, jiffies 4345365353 (age 75.586s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<0000000069ac2cec>] kmalloc_trace+0x25/0x90
    [<000000006acc66d5>] dev_pm_qos_update_user_latency_tolerance+0x6f/0x100
    [<00000000cc376ea7>] nvme_init_ctrl+0x38e/0x410 [nvme_core]
    [<000000007df61b4b>] 0xffffffffc05e88b3
    [<00000000d152b985>] 0xffffffffc05744cb
    [<00000000f04a4041>] vfs_write+0xc5/0x3c0
    [<00000000f9491baf>] ksys_write+0x5f/0xe0
    [<000000001c46513d>] do_syscall_64+0x3b/0x90
    [<00000000ecf348fe>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
Link: https://lore.kernel.org/linux-nvme/CAHj4cs-nDaKzMx2txO4dbE+Mz9ePwLtU0e3egz+StmzOUgWUrA@mail.gmail.com/
Fixes: f50fff73d620 ("nvme: implement In-Band authentication")
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chaitanya Kulkarni [Fri, 28 Apr 2023 07:31:14 +0000 (00:31 -0700)]
nvme-core: add missing fault-injection cleanup
[ Upstream commit 3a12a0b868a512fcada564699d00f5e652c0998c ]
Add missing fault-injection cleanup in nvme_init_ctrl() in the error
unwind path, which also fixes the following messages for blktests:
linux-block (for-next) # grep debugfs debugfs-err.log
[ 147.853464] debugfs: Directory 'nvme1' with parent '/' already present!
[ 147.853973] nvme1: failed to create debugfs attr
[ 148.802490] debugfs: Directory 'nvme1' with parent '/' already present!
[ 148.803244] nvme1: failed to create debugfs attr
[ 148.877304] debugfs: Directory 'nvme1' with parent '/' already present!
[ 148.877775] nvme1: failed to create debugfs attr
[ 149.816652] debugfs: Directory 'nvme1' with parent '/' already present!
[ 149.818011] nvme1: failed to create debugfs attr
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Stable-dep-of: 7ed5cf8e6d9b ("nvme-core: fix dev_pm_qos memleak")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Sun, 13 Nov 2022 11:24:10 +0000 (13:24 +0200)]
nvme-auth: don't ignore key generation failures when initializing ctrl keys
[ Upstream commit 193a8c7e5f1a8481841636cec9c185543ec5c759 ]
nvme_auth_generate_key can fail, don't ignore it upon initialization.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Stable-dep-of: 7ed5cf8e6d9b ("nvme-core: fix dev_pm_qos memleak")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chaitanya Kulkarni [Fri, 28 Apr 2023 07:31:13 +0000 (00:31 -0700)]
nvme-core: fix memory leak in dhchap_ctrl_secret
[ Upstream commit 99c2dcc8ffc24e210a3aa05c204d92f3ef460b05 ]
Free dhchap_secret in nvme_ctrl_dhchap_ctrl_secret_store() before we
return when nvme_auth_generate_key() returns error.
Fixes: f50fff73d620 ("nvme: implement In-Band authentication")
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chaitanya Kulkarni [Fri, 28 Apr 2023 07:31:12 +0000 (00:31 -0700)]
nvme-core: fix memory leak in dhchap_secret_store
[ Upstream commit a836ca33c5b07d34dd5347af9f64d25651d12674 ]
Free dhchap_secret in nvme_ctrl_dhchap_secret_store() before we return,
to fix the following kmemleak report:
unreferenced object 0xffff8886376ea800 (size 64):
  comm "check", pid 22048, jiffies 4344316705 (age 92.199s)
  hex dump (first 32 bytes):
    44 48 48 43 2d 31 3a 30 30 3a 6e 78 72 35 4b 67  DHHC-1:00:nxr5Kg
    75 58 34 75 6f 41 78 73 4a 61 34 63 2f 68 75 4c  uX4uoAxsJa4c/huL
  backtrace:
    [<0000000030ce5d4b>] __kmalloc+0x4b/0x130
    [<000000009be1cdc1>] nvme_ctrl_dhchap_secret_store+0x8f/0x160 [nvme_core]
    [<00000000ac06c96a>] kernfs_fop_write_iter+0x12b/0x1c0
    [<00000000437e7ced>] vfs_write+0x2ba/0x3c0
    [<00000000f9491baf>] ksys_write+0x5f/0xe0
    [<000000001c46513d>] do_syscall_64+0x3b/0x90
    [<00000000ecf348fe>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8886376eaf00 (size 64):
  comm "check", pid 22048, jiffies 4344316736 (age 92.168s)
  hex dump (first 32 bytes):
    44 48 48 43 2d 31 3a 30 30 3a 6e 78 72 35 4b 67  DHHC-1:00:nxr5Kg
    75 58 34 75 6f 41 78 73 4a 61 34 63 2f 68 75 4c  uX4uoAxsJa4c/huL
  backtrace:
    [<0000000030ce5d4b>] __kmalloc+0x4b/0x130
    [<000000009be1cdc1>] nvme_ctrl_dhchap_secret_store+0x8f/0x160 [nvme_core]
    [<00000000ac06c96a>] kernfs_fop_write_iter+0x12b/0x1c0
    [<00000000437e7ced>] vfs_write+0x2ba/0x3c0
    [<00000000f9491baf>] ksys_write+0x5f/0xe0
    [<000000001c46513d>] do_syscall_64+0x3b/0x90
    [<00000000ecf348fe>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
Fixes: f50fff73d620 ("nvme: implement In-Band authentication")
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Sun, 13 Nov 2022 11:24:17 +0000 (13:24 +0200)]
nvme-auth: no need to reset chap contexts on re-authentication
[ Upstream commit e8a420efb637f52c586596283d6fd96f2a7ecb5c ]
Now that the chap context is reset upon completion, this is no longer
needed. Also remove nvme_auth_reset as no callers are left.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Stable-dep-of: a836ca33c5b0 ("nvme-core: fix memory leak in dhchap_secret_store")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Sun, 13 Nov 2022 11:24:07 +0000 (13:24 +0200)]
nvme-auth: remove symbol export from nvme_auth_reset
[ Upstream commit 100b555bc204fc754108351676297805f5affa49 ]
Only the nvme module calls it.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Stable-dep-of: a836ca33c5b0 ("nvme-core: fix memory leak in dhchap_secret_store")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Sun, 13 Nov 2022 11:24:06 +0000 (13:24 +0200)]
nvme-auth: rename authentication work elements
[ Upstream commit 0c999e69c40a87285f910c400b550fad866e99d0 ]
Use nvme_ctrl_auth_work and nvme_queue_auth_work for better
readability.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Stable-dep-of: a836ca33c5b0 ("nvme-core: fix memory leak in dhchap_secret_store")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Sun, 13 Nov 2022 11:24:05 +0000 (13:24 +0200)]
nvme-auth: rename __nvme_auth_[reset|free] to nvme_auth[reset|free]_dhchap
[ Upstream commit 0a7ce375f83f4ade7c2a835444093b6870fb8257 ]
nvme_auth_[reset|free] operate on the controller while
__nvme_auth_[reset|free] operate on a chap struct (which maps to a queue
context). Rename it for clarity.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Stable-dep-of: a836ca33c5b0 ("nvme-core: fix memory leak in dhchap_secret_store")
Signed-off-by: Sasha Levin <sashal@kernel.org>
NeilBrown [Fri, 2 Jun 2023 21:14:14 +0000 (07:14 +1000)]
lockd: drop inappropriate svc_get() from locked_get()
[ Upstream commit 665e89ab7c5af1f2d260834c861a74b01a30f95f ]
The below-mentioned patch was intended to simplify refcounting on the
svc_serv used by lockd. The goal was to only ever have a single
reference from the single thread. To that end we dropped a call to
lockd_start_svc() (except when creating thread) which would take a
reference, and dropped the svc_put(serv) that would drop that reference.
Unfortunately we didn't also remove the svc_get() from
lockd_create_svc() in the case where the svc_serv already existed.
So after the patch:
- on the first call the svc_serv was allocated and the one reference
was given to the thread, so there are no extra references
- on subsequent calls svc_get() was called so there is now an extra
reference.
This is clearly not consistent.
The inconsistency is also clear in the current code in lockd_get()
takes *two* references, one on nlmsvc_serv and one by incrementing
nlmsvc_users. This clearly does not match lockd_put().
So: drop that svc_get() from lockd_get() (which used to be in
lockd_create_svc()).
Reported-by: Ido Schimmel <idosch@idosch.org>
Closes: https://lore.kernel.org/linux-nfs/ZHsI%2FH16VX9kJQX1@shredder/T/#u
Fixes: b73a2972041b ("lockd: move lockd_start_svc() call into lockd_create_svc()")
Signed-off-by: NeilBrown <neilb@suse.de>
Tested-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yu Kuai [Sat, 10 Jun 2023 02:30:43 +0000 (10:30 +0800)]
blk-mq: fix potential io hang by wrong 'wake_batch'
[ Upstream commit 4f1731df60f9033669f024d06ae26a6301260b55 ]
In __blk_mq_tag_busy/idle(), updating 'active_queues' and calculating
'wake_batch' is not atomic:
t1:                                     t2:
__blk_mq_tag_busy                       blk_mq_tag_busy
 inc active_queues
 // assume 1->2
                                         inc active_queues
                                         // 2 -> 3
                                         blk_mq_update_wake_batch
                                         // calculate based on 3
 blk_mq_update_wake_batch
 /* calculate based on 2, while active_queues is actually 3. */
Fix this problem by protecting them with 'tags->lock'. This is not a hot
path, so performance should not be a concern. And now that all writers
are inside the lock, switch 'active_queues' from atomic to unsigned
int.
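A sketch of the busy path after the change (the idle path mirrors it;
'tags' is the struct blk_mq_tags instance):

    unsigned int users;

    spin_lock_irq(&tags->lock);
    users = tags->active_queues + 1;        /* plain unsigned int now, not atomic_t */
    WRITE_ONCE(tags->active_queues, users);
    blk_mq_update_wake_batch(tags, users);  /* computed from the value just published */
    spin_unlock_irq(&tags->lock);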
Fixes: 180dccb0dba4 ("blk-mq: fix tag_get wait task can't be awakened")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230610023043.2559121-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Arnd Bergmann [Tue, 17 Jan 2023 17:13:56 +0000 (18:13 +0100)]
virt: sevguest: Add CONFIG_CRYPTO dependency
[ Upstream commit 84b9b44b99780d35fe72ac63c4724f158771e898 ]
This driver fails to link when CRYPTO is disabled, or in a loadable
module:
WARNING: unmet direct dependencies detected for CRYPTO_GCM
WARNING: unmet direct dependencies detected for CRYPTO_AEAD2
Depends on [m]: CRYPTO [=m]
Selected by [y]:
- SEV_GUEST [=y] && VIRT_DRIVERS [=y] && AMD_MEM_ENCRYPT [=y]
x86_64-linux-ld: crypto/aead.o: in function `crypto_register_aeads':
Fixes: fce96cf04430 ("virt: Add SEV-SNP guest driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230117171416.2715125-1-arnd@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tom Lendacky [Tue, 6 Jun 2023 14:51:22 +0000 (09:51 -0500)]
x86/sev: Fix calculation of end address based on number of pages
[ Upstream commit 5dee19b6b2b194216919b99a1f5af2949a754016 ]
When calculating an end address based on an unsigned int number of pages,
any value greater than or equal to 0x100000 that is shifted left by
PAGE_SHIFT bits results in a value of 0, producing an invalid end address.
Change the number-of-pages variable in the various routines from an
unsigned int to an unsigned long to calculate the end address correctly.
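The arithmetic, in isolation: with PAGE_SHIFT == 12, a page count of
0x100000 shifted left in 32 bits wraps to 0, so "vaddr + (npages <<
PAGE_SHIFT)" degenerates to "vaddr". A standalone illustration (assumes an
LP64 system where unsigned long is 64 bits):

    #include <stdio.h>

    int main(void)
    {
            unsigned int  npages32 = 0x100000;      /* 4 GiB worth of 4 KiB pages */
            unsigned long npages64 = 0x100000;

            unsigned long end32 = (unsigned long)(npages32 << 12);  /* wraps to 0 */
            unsigned long end64 = npages64 << 12;                   /* 0x100000000 */

            printf("unsigned int:  %#lx\nunsigned long: %#lx\n", end32, end64);
            return 0;
    }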
Fixes: 5e5ccff60a29 ("x86/sev: Add helper for validating pages in early enc attribute changes")
Fixes: dc3f3d2474b8 ("x86/mm: Validate memory when changing the C-bit")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/6a6e4eea0e1414402bac747744984fa4e9c01bb6.1686063086.git.thomas.lendacky@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li Nan [Sat, 27 May 2023 09:19:04 +0000 (17:19 +0800)]
blk-iocost: use spin_lock_irqsave in adjust_inuse_and_calc_cost
[ Upstream commit 8d211554679d0b23702bd32ba04aeac0c1c4f660 ]
adjust_inuse_and_calc_cost() uses spin_lock_irq(), so IRQs are
unconditionally enabled on unlock. A DEADLOCK might happen if we have
already held other locks and disabled IRQs before invoking it.
Fix it by using spin_lock_irqsave() instead, which keeps the IRQ state
on unlock consistent with what it was before the lock was taken.
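The change itself is the usual irq-save pattern (a sketch; the lockdep
report that motivated it follows below):

    unsigned long flags;

    /* Was spin_lock_irq()/spin_unlock_irq(), which re-enabled IRQs on unlock. */
    spin_lock_irqsave(&ioc->lock, flags);
    /* ... adjust inuse and recompute the cost ... */
    spin_unlock_irqrestore(&ioc->lock, flags);
    /* Interrupt state is now exactly what the caller had before the call. */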
================================
WARNING: inconsistent lock state
5.10.0-02758-g8e5f91fd772f #26 Not tainted
--------------------------------
inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
kworker/2:3/388 [HC0[0]:SC0[0]:HE0:SE1] takes:
ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: spin_lock_irq
ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: bfq_bio_merge+0x141/0x390
{IN-HARDIRQ-W} state was registered at:
__lock_acquire+0x3d7/0x1070
lock_acquire+0x197/0x4a0
__raw_spin_lock_irqsave
_raw_spin_lock_irqsave+0x3b/0x60
bfq_idle_slice_timer_body
bfq_idle_slice_timer+0x53/0x1d0
__run_hrtimer+0x477/0xa70
__hrtimer_run_queues+0x1c6/0x2d0
hrtimer_interrupt+0x302/0x9e0
local_apic_timer_interrupt
__sysvec_apic_timer_interrupt+0xfd/0x420
run_sysvec_on_irqstack_cond
sysvec_apic_timer_interrupt+0x46/0xa0
asm_sysvec_apic_timer_interrupt+0x12/0x20
irq event stamp: 837522
hardirqs last enabled at (837521): [<ffffffff84b9419d>] __raw_spin_unlock_irqrestore
hardirqs last enabled at (837521): [<ffffffff84b9419d>] _raw_spin_unlock_irqrestore+0x3d/0x40
hardirqs last disabled at (837522): [<ffffffff84b93fa3>] __raw_spin_lock_irq
hardirqs last disabled at (837522): [<ffffffff84b93fa3>] _raw_spin_lock_irq+0x43/0x50
softirqs last enabled at (835852): [<ffffffff84e00558>] __do_softirq+0x558/0x8ec
softirqs last disabled at (835845): [<ffffffff84c010ff>] asm_call_irq_on_stack+0xf/0x20
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&bfqd->lock);
<Interrupt>
lock(&bfqd->lock);
*** DEADLOCK ***
3 locks held by kworker/2:3/388:
#0: ffff888107af0f38 ((wq_completion)kthrotld){+.+.}-{0:0}, at: process_one_work+0x742/0x13f0
#1: ffff8881176bfdd8 ((work_completion)(&td->dispatch_work)){+.+.}-{0:0}, at: process_one_work+0x777/0x13f0
#2: ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: spin_lock_irq
#2: ffff888118c00c28 (&bfqd->lock){?.-.}-{2:2}, at: bfq_bio_merge+0x141/0x390
stack backtrace:
CPU: 2 PID: 388 Comm: kworker/2:3 Not tainted 5.10.0-02758-g8e5f91fd772f #26
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
Workqueue: kthrotld blk_throtl_dispatch_work_fn
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x107/0x167
print_usage_bug
valid_state
mark_lock_irq.cold+0x32/0x3a
mark_lock+0x693/0xbc0
mark_held_locks+0x9e/0xe0
__trace_hardirqs_on_caller
lockdep_hardirqs_on_prepare.part.0+0x151/0x360
trace_hardirqs_on+0x5b/0x180
__raw_spin_unlock_irq
_raw_spin_unlock_irq+0x24/0x40
spin_unlock_irq
adjust_inuse_and_calc_cost+0x4fb/0x970
ioc_rqos_merge+0x277/0x740
__rq_qos_merge+0x62/0xb0
rq_qos_merge
bio_attempt_back_merge+0x12c/0x4a0
blk_mq_sched_try_merge+0x1b6/0x4d0
bfq_bio_merge+0x24a/0x390
__blk_mq_sched_bio_merge+0xa6/0x460
blk_mq_sched_bio_merge
blk_mq_submit_bio+0x2e7/0x1ee0
__submit_bio_noacct_mq+0x175/0x3b0
submit_bio_noacct+0x1fb/0x270
blk_throtl_dispatch_work_fn+0x1ef/0x2b0
process_one_work+0x83e/0x13f0
process_scheduled_works
worker_thread+0x7e3/0xd80
kthread+0x353/0x470
ret_from_fork+0x1f/0x30
Fixes: b0853ab4a238 ("blk-iocost: revamp in-period donation snapbacks")
Signed-off-by: Li Nan <linan122@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20230527091904.3001833-1-linan666@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Shawn Wang [Mon, 15 May 2023 06:04:48 +0000 (14:04 +0800)]
x86/resctrl: Only show tasks' pid in current pid namespace
[ Upstream commit 2997d94b5dd0e8b10076f5e0b6f18410c73e28bd ]
When writing a task id to the "tasks" file in an rdtgroup,
rdtgroup_tasks_write() treats the pid as a number in the current pid
namespace. But when reading the "tasks" file, rdtgroup_tasks_show() shows
the list of global pids from the init namespace, which is confusing and
incorrect.
To be more robust, let the "tasks" file only show pids in the current pid
namespace.
Fixes: e02737d5b826 ("x86/intel_rdt: Add tasks files")
Signed-off-by: Shawn Wang <shawnwang@linux.alibaba.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Reinette Chatre <reinette.chatre@intel.com>
Acked-by: Fenghua Yu <fenghua.yu@intel.com>
Tested-by: Reinette Chatre <reinette.chatre@intel.com>
Link: https://lore.kernel.org/all/20230116071246.97717-1-shawnwang@linux.alibaba.com/
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Fri, 26 May 2023 20:14:56 +0000 (04:14 +0800)]
erofs: kill hooked chains to avoid loops on deduplicated compressed images
[ Upstream commit 967c28b23f6c89bb8eef6a046ea88afe0d7c1029 ]
After heavily stressing EROFS with several images which include a
hand-crafted image of repeated patterns for more than 46 days, I found
two chains could be linked with each other almost simultaneously and
form a loop so that the entire loop won't be submitted. As a
consequence, the corresponding file pages will remain locked forever.
It can be _only_ observed on data-deduplicated compressed images.
For example, consider two chains with five pclusters in total:
Chain 1: 2->3->4->5 -- The tail pcluster is 5;
Chain 2: 5->1->2 -- The tail pcluster is 2.
Chain 2 could link to Chain 1 with pcluster 5; and Chain 1 could link
to Chain 2 at the same time with pcluster 2.
Since hooked chains are all linked locklessly now, I have no idea how
to simply avoid the race. Instead, let's avoid hooked chains completely
until I could work out a proper way to fix this and end users finally
tell us that it's needed to add it back.
Actually, this optimization can be found with multi-threaded workloads
(especially even more often on deduplicated compressed images), yet I'm
not sure about the overall system impacts of not having this compared
with implementation complexity.
Fixes: 267f2492c8f7 ("erofs: introduce multi-reference pclusters (fully-referenced)")
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Link: https://lore.kernel.org/r/20230526201459.128169-4-hsiangkao@linux.alibaba.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Sat, 4 Feb 2023 09:30:38 +0000 (17:30 +0800)]
erofs: move zdata.h into zdata.c
[ Upstream commit a9a94d9373349e1a53f149d2015eb6f03a8517cf ]
Definitions in zdata.h are only used in zdata.c and for internal
use only. No logic changes.
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230204093040.97967-4-hsiangkao@linux.alibaba.com
Stable-dep-of: 967c28b23f6c ("erofs: kill hooked chains to avoid loops on deduplicated compressed images")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Sat, 4 Feb 2023 09:30:37 +0000 (17:30 +0800)]
erofs: remove tagged pointer helpers
[ Upstream commit b1ed220c6262bff63cdcb53692e492be0b05206c ]
Just open-code the remaining one to simplify the code.
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230204093040.97967-3-hsiangkao@linux.alibaba.com
Stable-dep-of: 967c28b23f6c ("erofs: kill hooked chains to avoid loops on deduplicated compressed images")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Sat, 4 Feb 2023 09:30:36 +0000 (17:30 +0800)]
erofs: avoid tagged pointers to mark sync decompression
[ Upstream commit cdba55067f2f9fdc7870ffcb6aef912d3468cff8 ]
We could just use a boolean in z_erofs_decompressqueue for sync
decompression to simplify the code.
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230204093040.97967-2-hsiangkao@linux.alibaba.com
Stable-dep-of:
967c28b23f6c ("erofs: kill hooked chains to avoid loops on deduplicated compressed images")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Gao Xiang [Tue, 6 Dec 2022 06:03:52 +0000 (14:03 +0800)]
erofs: clean up cached I/O strategies
[ Upstream commit
1282dea37b09087b8aec59f0774572c16b52276a ]
After commit
4c7e42552b3a ("erofs: remove useless cache strategy of
DELAYEDALLOC"), only one cached I/O allocation strategy is supported:
When cached I/O is preferred, page allocation is applied without
direct reclaim. If allocation fails, fall back to inplace I/O.
Let's get rid of z_erofs_cache_alloctype. No logical changes.
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Yue Hu <huyue2@coolpad.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20221206060352.152830-1-xiang@kernel.org
Stable-dep-of:
967c28b23f6c ("erofs: kill hooked chains to avoid loops on deduplicated compressed images")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Bart Van Assche [Wed, 17 May 2023 17:42:21 +0000 (10:42 -0700)]
block: Fix the type of the second bdev_op_is_zoned_write() argument
[ Upstream commit
3ddbe2a7e0d4a155a805f69c906c9beed30d4cc4 ]
Change the type of the second argument of bdev_op_is_zoned_write() from
blk_opf_t to enum req_op, because this function expects an operation
without flags as its second argument.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cc: Ming Lei <ming.lei@redhat.com>
Fixes:
8cafdb5ab94c ("block: adapt blk_mq_plug() to not plug for writes that require a zone lock")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20230517174230.897144-4-bvanassche@acm.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Arnd Bergmann [Tue, 16 May 2023 19:56:12 +0000 (21:56 +0200)]
fs: pipe: reveal missing function prototypes
[ Upstream commit
247c8d2f9837a3e29e3b6b7a4aa9c36c37659dd4 ]
A couple of functions from fs/pipe.c are used both internally
and by the watch queue code, but their declarations are only
visible when the latter is enabled:
fs/pipe.c:1254:5: error: no previous prototype for 'pipe_resize_ring'
fs/pipe.c:758:15: error: no previous prototype for 'account_pipe_buffers'
fs/pipe.c:764:6: error: no previous prototype for 'too_many_pipe_buffers_soft'
fs/pipe.c:771:6: error: no previous prototype for 'too_many_pipe_buffers_hard'
fs/pipe.c:777:6: error: no previous prototype for 'pipe_is_unprivileged_user'
Make them visible unconditionally to avoid these warnings.
Fixes:
c73be61cede5 ("pipe: Add general notification queue support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Message-Id: <20230516195629.551602-1-arnd@kernel.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
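A minimal sketch of the shape of the fix, assuming the declarations simply move out of the #ifdef CONFIG_WATCH_QUEUE block in include/linux/pipe_fs_i.h (signatures reflect my reading of fs/pipe.c, not a quote of the patch):
/* Declared unconditionally so fs/pipe.c always sees its own prototypes: */
int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots);
unsigned long account_pipe_buffers(struct user_struct *user,
				   unsigned long old, unsigned long new);
bool too_many_pipe_buffers_soft(unsigned long user_bufs);
bool too_many_pipe_buffers_hard(unsigned long user_bufs);
bool pipe_is_unprivileged_user(void);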
Jeff Layton [Wed, 19 Apr 2023 11:24:46 +0000 (07:24 -0400)]
drm: use mgr->dev in drm_dbg_kms in drm_dp_add_payload_part2
commit
54d217406afe250d7a768783baaa79a035f21d38 upstream.
I've been experiencing some intermittent crashes down in the display
driver code. The symptoms are usually a line like this in dmesg:
amdgpu 0000:30:00.0: [drm] Failed to create MST payload for port
000000006d3a3885: -5
...followed by an Oops due to a NULL pointer dereference.
Switch to using mgr->dev instead of state->dev since "state" can be
NULL in some cases.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2184855
Suggested-by: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230419112447.18471-1-jlayton@kernel.org
Cc: "Limonciello, Mario" <mario.limonciello@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
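An illustrative sketch of the change described above (argument names assumed, not quoted from the patch): the debug message is keyed off mgr->dev, which is valid on this path, instead of the possibly-NULL state->dev.
drm_dbg_kms(mgr->dev,
	    "Failed to create MST payload for port %p: %d\n",
	    payload->port, ret);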
Greg Kroah-Hartman [Wed, 5 Jul 2023 17:27:38 +0000 (18:27 +0100)]
Linux 6.1.38
Link: https://lore.kernel.org/r/20230703184519.121965745@linuxfoundation.org
Tested-by: Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com>
Link: https://lore.kernel.org/r/20230704084611.071971014@linuxfoundation.org
Tested-by: Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com>
Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Tested-by: Ron Economos <re@w6rz.net>
Tested-by: Markus Reichelt <lkt+2023@mareichelt.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rodrigo Siqueira [Fri, 24 Feb 2023 18:35:43 +0000 (11:35 -0700)]
drm/amd/display: Ensure vmin and vmax adjust for DCE
commit
2820433be2a33beb44b13b367e155cf221f29610 upstream.
[Why & How]
In the commit
32953485c558 ("drm/amd/display: Do not update DRR while
BW optimizations pending"), a modification was added to avoid adjusting
DRR if optimized bandwidth is set. This change was only intended for
DCN, but one part of the patch also changed the code path for DCE
devices and caused regressions in the kms_vrr test. To address this
problem, this commit ensures that dc_stream_adjust_vmin_vmax is fully
executed on DCE devices.
Fixes:
32953485c558 ("drm/amd/display: Do not update DRR while BW optimizations pending")
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo@amd.com>
Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Tested-by: Daniel Wheeler <daniel.wheeler@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bas Nieuwenhuizen [Sat, 13 May 2023 12:51:00 +0000 (14:51 +0200)]
drm/amdgpu: Validate VM ioctl flags.
commit
a2b308044dcaca8d3e580959a4f867a1d5c37fac upstream.
None have been defined yet, so reject anybody setting any. Mesa sets
it to 0 anyway.
Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
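A minimal sketch of the check, assuming the usual union drm_amdgpu_vm ioctl argument layout (illustrative, not a quote of the patch):
/* No VM ioctl flags are defined yet, so any non-zero value is invalid. */
if (args->in.flags)
	return -EINVAL;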
Ahmed S. Darwish [Mon, 15 May 2023 17:32:17 +0000 (19:32 +0200)]
docs: Set minimal gtags / GNU GLOBAL version to 6.6.5
commit
b230235b386589d8f0d631b1c77a95ca79bb0732 upstream.
The kernel build now uses the gtags "-C (--directory)" option, which has
been available since GNU GLOBAL v6.6.5. Update the documentation accordingly.
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Cc: <stable@vger.kernel.org>
Link: https://lists.gnu.org/archive/html/info-global/2020-09/msg00000.html
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ahmed S. Darwish [Mon, 15 May 2023 17:32:16 +0000 (19:32 +0200)]
scripts/tags.sh: Resolve gtags empty index generation
commit
e1b37563caffc410bb4b55f153ccb14dede66815 upstream.
gtags considers any file outside of its current working directory
"outside the source tree" and refuses to index it. For O= kernel builds,
or when "make" is invoked from a directory other then the kernel source
tree, gtags ignores the entire kernel source and generates an empty
index.
Force-set gtags current working directory to the kernel source tree.
Due to commit
9da0763bdd82 ("kbuild: Use relative path when building in
a subdir of the source tree"), if the kernel build is done in a
sub-directory of the kernel source tree, the kernel Makefile will set
the kernel's $srctree to ".." for shorter compile-time and run-time
warnings. Consequently, the list of files to be indexed will be in the
"../*" form, rendering all such paths invalid once gtags switches to the
kernel source tree as its current working directory.
If gtags indexing is requested and the build directory is not the kernel
source tree, index all files in absolute-path form.
Note, indexing in absolute-path form will not affect the generated
index, as paths in gtags indices are always relative to the gtags "root
directory" anyway (as evidenced by "gtags --dump").
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Krister Johansen [Wed, 25 Jan 2023 18:34:18 +0000 (10:34 -0800)]
perf symbols: Symbol lookup with kcore can fail if multiple segments match stext
commit
1c249565426e3a9940102c0ba9f63914f7cda73d upstream.
This problem was encountered on an arm64 system with a lot of memory.
Without kernel debug symbols installed, and with both kcore and kallsyms
available, perf managed to get confused and returned "unknown" for all
of the kernel symbols that it tried to look up.
On this system, stext fell within the vmalloc segment. The kcore symbol
matching code tries to find the first segment that contains stext and
uses that to replace the segment generated from just the kallsyms
information. In this case, however, there were two: a very large
vmalloc segment, and the text segment. This caused perf to get confused
because multiple overlapping segments were inserted into the RB tree
that holds the discovered segments. However, that alone wasn't
sufficient to cause the problem. Even when we could find the segment,
the offsets were adjusted in such a way that the newly generated symbols
didn't line up with the instruction addresses in the trace. The most
obvious solution would be to consult which segment type is text from
kcore, but this information is not exposed to users.
Instead of the first matching segment, select the smallest matching
segment that contains stext. This allows us to match the text segment
instead of vmalloc, if one is contained within the other.
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Reaver <me@davidreaver.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20230125183418.GD1963@templeofstupid.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
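A purely illustrative sketch of the selection rule (types and names invented for clarity, not the actual perf code): among the kcore segments that contain stext, keep the smallest one.
struct seg { u64 start, size; };

static struct seg *pick_stext_seg(struct seg *segs, int n, u64 stext)
{
	struct seg *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (stext < segs[i].start ||
		    stext >= segs[i].start + segs[i].size)
			continue;		/* does not contain stext */
		if (!best || segs[i].size < best->size)
			best = &segs[i];	/* smallest match wins */
	}
	return best;
}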
Finn Thain [Tue, 14 Mar 2023 08:51:59 +0000 (19:51 +1100)]
nubus: Partially revert proc_create_single_data() conversion
commit
0e96647cff9224db564a1cee6efccb13dbe11ee2 upstream.
The conversion to proc_create_single_data() introduced a regression
whereby reading a file in /proc/bus/nubus results in a seg fault:
# grep -r . /proc/bus/nubus/e/
Data read fault at 0x00000020 in Super Data (pc=0x1074c2)
BAD KERNEL BUSERR
Oops: 00000000
Modules linked in:
PC: [<001074c2>] PDE_DATA+0xc/0x16
SR: 2010 SP: 38284958 a2: 01152370
d0: 00000001 d1: 01013000 d2: 01002790 d3: 00000000
d4: 00000001 d5: 0008ce2e a0: 00000000 a1: 00222a40
Process grep (pid: 45, task=142f8727)
Frame format=B ssw=074d isc=2008 isb=4e5e daddr=00000020 dobuf=01199e70
baddr=001074c8 dibuf=ffffffff ver=f
Stack from 01199e48:
01199e70 00222a58 01002790 00000000 011a3000 01199eb0 015000c0 00000000
00000000 01199ec0 01199ec0 000d551a 011a3000 00000001 00000000 00018000
d003f000 00000003 00000001 0002800d 01052840 01199fa8 c01f8000 00000000
00000029 0b532b80 00000000 00000000 00000029 0b532b80 01199ee4 00103640
011198c0 d003f000 00018000 01199fa8 00000000 011198c0 00000000 01199f4c
000b3344 011198c0 d003f000 00018000 01199fa8 00000000 00018000 011198c0
Call Trace: [<00222a58>] nubus_proc_rsrc_show+0x18/0xa0
[<000d551a>] seq_read+0xc4/0x510
[<00018000>] fp_fcos+0x2/0x82
[<0002800d>] __sys_setreuid+0x115/0x1c6
[<00103640>] proc_reg_read+0x5c/0xb0
[<00018000>] fp_fcos+0x2/0x82
[<000b3344>] __vfs_read+0x2c/0x13c
[<00018000>] fp_fcos+0x2/0x82
[<00018000>] fp_fcos+0x2/0x82
[<000b8aa2>] sys_statx+0x60/0x7e
[<000b34b6>] vfs_read+0x62/0x12a
[<00018000>] fp_fcos+0x2/0x82
[<00018000>] fp_fcos+0x2/0x82
[<000b39c2>] ksys_read+0x48/0xbe
[<00018000>] fp_fcos+0x2/0x82
[<000b3a4e>] sys_read+0x16/0x1a
[<00018000>] fp_fcos+0x2/0x82
[<00002b84>] syscall+0x8/0xc
[<00018000>] fp_fcos+0x2/0x82
[<0000c016>] not_ext+0xa/0x18
Code: 4e5e 4e75 4e56 0000 206e 0008 2068 ffe8 <2068> 0020 2008 4e5e 4e75 4e56 0000 2f0b 206e 0008 2068 0004 2668 0020 206b ffe8
Disabling lock debugging due to kernel taint
Segmentation fault
The proc_create_single_data() conversion does not work because
single_open(file, nubus_proc_rsrc_show, PDE_DATA(inode)) is not
equivalent to the original code.
Fixes:
3f3942aca6da ("proc: introduce proc_create_single{,_data}")
Cc: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org # 5.6+
Signed-off-by: Finn Thain <fthain@linux-m68k.org>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/r/d4e2a586e793cc8d9442595684ab8a077c0fe726.1678783919.git.fthain@linux-m68k.org
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Mon, 3 Jul 2023 06:20:17 +0000 (23:20 -0700)]
execve: always mark stack as growing down during early stack setup
commit
f66066bc5136f25e36a2daff4896c768f18c211e upstream.
While our user stacks can grow either down (all common architectures) or
up (parisc and the ia64 register stack), the initial stack setup when we
copy the argument and environment strings to the new stack at execve()
time is always done by extending the stack downwards.
But it turns out that in commit
8d7071af8907 ("mm: always expand the
stack with the mmap write lock held"), as part of making the stack
growing code more robust, 'expand_downwards()' was now made to actually
check the vma flags:
if (!(vma->vm_flags & VM_GROWSDOWN))
return -EFAULT;
and that meant that this execve-time stack expansion started failing on
parisc, because on that architecture, the stack flags do not contain the
VM_GROWSDOWN bit.
At the same time the new check in expand_downwards() is clearly correct,
and simplified the callers, so let's not remove it.
The solution is instead to just codify the fact that yes, during
execve(), the stack grows down. This not only matches reality, it ends
up being particularly simple: we already have special execve-time flags
for the stack (VM_STACK_INCOMPLETE_SETUP) and use those flags to avoid
page migration during this setup time (see vma_is_temporary_stack() and
invalid_migration_vma()).
So just add VM_GROWSDOWN to that set of temporary flags, and now our
stack flags automatically match reality, and the parisc stack expansion
works again.
Note that the VM_STACK_INCOMPLETE_SETUP bits will be cleared when the
stack is finalized, so we only add the extra VM_GROWSDOWN bit on
CONFIG_STACK_GROWSUP architectures (ie parisc) rather than adding it in
general.
Link: https://lore.kernel.org/all/612eaa53-6904-6e16-67fc-394f4faa0e16@bell.net/
Link: https://lore.kernel.org/all/5fd98a09-4792-1433-752d-029ae3545168@gmx.de/
Fixes:
8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
Reported-by: John David Anglin <dave.anglin@bell.net>
Reported-and-tested-by: Helge Deller <deller@gmx.de>
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
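A sketch of the idea, with macro names reflecting my understanding of the change rather than the exact mm.h hunk: on CONFIG_STACK_GROWSUP architectures the temporary execve-time stack flags additionally carry VM_GROWSDOWN.
#ifdef CONFIG_STACK_GROWSUP
#define VM_STACK_EARLY	VM_GROWSDOWN
#else
#define VM_STACK_EARLY	0
#endif

/* Flags used only while the execve() stack is being set up. */
#define VM_STACK_INCOMPLETE_SETUP	(VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)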
Mario Limonciello [Tue, 20 Jun 2023 14:04:51 +0000 (09:04 -0500)]
PCI/ACPI: Call _REG when transitioning D-states
commit
112a7f9c8edbf76f7cb83856a6cb6b60a210b659 upstream.
ACPI r6.5, sec 6.5.4, describes how AML is unable to access an
OperationRegion unless _REG has been called to connect a handler:
The OS runs _REG control methods to inform AML code of a change in the
availability of an operation region. When an operation region handler is
unavailable, AML cannot access data fields in that region. (Operation
region writes will be ignored and reads will return indeterminate data.)
The PCI core does not call _REG at any time, leading to the undefined
behavior mentioned in the spec.
The spec explains that _REG should be executed to indicate whether a
given region can be accessed:
Once _REG has been executed for a particular operation region, indicating
that the operation region handler is ready, a control method can access
fields in the operation region. Conversely, control methods must not
access fields in operation regions when _REG method execution has not
indicated that the operation region handler is ready.
An example included in the spec demonstrates calling _REG when devices are
turned off: "when the host controller or bridge controller is turned off
or disabled, PCI Config Space Operation Regions for child devices are
no longer available. As such, ETH0’s _REG method will be run when it
is turned off and will again be run when PCI1 is turned off."
It is reported that ASMedia PCIe GPIO controllers fail functional tests
after the system has returned from suspend (S3 or s2idle). This is because
the BIOS checks whether the OSPM has called the _REG method to determine
whether it can interact with the OperationRegion assigned to the device as
part of the other AML called for the device.
To fix this issue, call acpi_evaluate_reg() when devices are transitioning
to D3cold or D0.
[bhelgaas: split pci_power_t checking to preliminary patch]
Link: https://uefi.org/specs/ACPI/6.5/06_Device_Configuration.html#reg-region
Link: https://lore.kernel.org/r/20230620140451.21007-1-mario.limonciello@amd.com
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
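An illustrative sketch of the D-state hook, assuming "adev" is the device's ACPI companion (not a quote of the patch):
/* Tell AML that PCI config space access for the device goes away in
 * D3cold and becomes available again in D0. */
if (state == PCI_D3cold)
	acpi_evaluate_reg(adev->handle, ACPI_ADR_SPACE_PCI_CONFIG,
			  ACPI_REG_DISCONNECT);
else if (state == PCI_D0)
	acpi_evaluate_reg(adev->handle, ACPI_ADR_SPACE_PCI_CONFIG,
			  ACPI_REG_CONNECT);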
Bjorn Helgaas [Wed, 21 Jun 2023 21:36:12 +0000 (16:36 -0500)]
PCI/ACPI: Validate acpi_pci_set_power_state() parameter
commit
5557b62634abbd55bab7b154ce4bca348ad7f96f upstream.
Previously acpi_pci_set_power_state() assumed the requested power state was
valid (PCI_D0 ... PCI_D3cold). If a caller supplied something else, we
could index outside the state_conv[] array and pass junk to
acpi_device_set_power().
Validate the pci_power_t parameter and return -EINVAL if it's invalid.
Link: https://lore.kernel.org/r/20230621222857.GA122930@bhelgaas
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
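A minimal sketch of the validation (illustrative):
/* Only PCI_D0 ... PCI_D3cold may index into state_conv[]. */
if (state < PCI_D0 || state > PCI_D3cold)
	return -EINVAL;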
Aric Cyr [Thu, 9 Feb 2023 00:51:42 +0000 (19:51 -0500)]
drm/amd/display: Do not update DRR while BW optimizations pending
commit
32953485c558cecf08f33fbfa251e80e44cef981 upstream.
[why]
While bandwidth optimizations are pending, it's possible a pstate change
will occur. During this time, the VSYNC handler should not also try to
update DRR parameters, as that can cause a pstate hang.
[how]
Do not adjust DRR if optimized bandwidth is set.
Reviewed-by: Aric Cyr <aric.cyr@amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo@amd.com>
Signed-off-by: Aric Cyr <aric.cyr@amd.com>
Tested-by: Daniel Wheeler <daniel.wheeler@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
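A sketch of the early-out in dc_stream_adjust_vmin_vmax(), with field names taken from my reading of the description (illustrative):
/* Don't touch DRR while bandwidth optimizations are still pending. */
if (dc->optimized_required || dc->wm_optimized_required)
	return false;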
Alvin Lee [Thu, 20 Oct 2022 15:46:49 +0000 (11:46 -0400)]
drm/amd/display: Remove optimization for VRR updates
commit
3442f4e0e55555d14b099c17382453fdfd2508d5 upstream.
The optimization caused an unexpected regression, so remove it for now.
Tested-by: Mark Broadworth <mark.broadworth@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Signed-off-by: Alvin Lee <Alvin.Lee2@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Max Filippov [Sat, 1 Jul 2023 10:31:55 +0000 (03:31 -0700)]
xtensa: fix lock_mm_and_find_vma in case VMA not found
commit
03f889378f33aa9a9d8e5f49ba94134cf6158090 upstream.
The MMU version of lock_mm_and_find_vma() releases the mm lock before
returning when the VMA is not found. Do the same in the noMMU version.
This fixes a hang on an attempt to handle a protection fault.
Fixes:
d85a143b69ab ("xtensa: fix NOMMU build with lock_mm_and_find_vma() conversion")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
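A sketch of the failure path in the noMMU lookup (illustrative, matching the MMU version's behaviour rather than quoting the patch):
vma = find_vma(mm, addr);
if (!vma || vma->vm_start > addr) {
	mmap_read_unlock(mm);	/* previously leaked on this path */
	return NULL;
}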
Greg Kroah-Hartman [Sat, 1 Jul 2023 11:16:27 +0000 (13:16 +0200)]
Linux 6.1.37
Link: https://lore.kernel.org/r/20230629184151.651069086@linuxfoundation.org
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Tested-by: Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com>
Link: https://lore.kernel.org/r/20230630055632.571288857@linuxfoundation.org
Link: https://lore.kernel.org/r/20230630072124.944461414@linuxfoundation.org
Tested-by: Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com>
Tested-by: Ron Economos <re@w6rz.net>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Tested-by: Markus Reichelt <lkt+2023@mareichelt.com>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Sat, 1 Jul 2023 01:24:49 +0000 (18:24 -0700)]
xtensa: fix NOMMU build with lock_mm_and_find_vma() conversion
commit
d85a143b69abb4d7544227e26d12c4c7735ab27d upstream.
It turns out that xtensa has a really odd configuration situation: you
can do a no-MMU config, but still have the page fault code enabled.
Which doesn't sound all that sensible, but it turns out that xtensa can
have protection faults even without the MMU, and we have this:
config PFAULT
bool "Handle protection faults" if EXPERT && !MMU
default y
help
Handle protection faults. MMU configurations must enable it.
noMMU configurations may disable it if used memory map never
generates protection faults or faults are always fatal.
If unsure, say Y.
which completely violated my expectations of the page fault handling.
End result: Guenter reports that the xtensa no-MMU builds all fail with
arch/xtensa/mm/fault.c: In function ‘do_page_fault’:
arch/xtensa/mm/fault.c:133:8: error: implicit declaration of function ‘lock_mm_and_find_vma’
because I never exposed the new lock_mm_and_find_vma() function for the
no-MMU case.
Doing so is simple enough, and fixes the problem.
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes:
a050ba1e7422 ("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Fri, 30 Jun 2023 06:34:29 +0000 (23:34 -0700)]
csky: fix up lock_mm_and_find_vma() conversion
commit
e55e5df193d247a38a5e1ac65a5316a0adcc22fa upstream.
As already mentioned in my merge message for the 'expand-stack' branch,
we have something like 24 different versions of the page fault path for
all our different architectures, all just _slightly_ different due to
various historical reasons (usually related to exactly when they
branched off the original i386 version, and the details of the other
architectures they had in their history).
And a few of them had some silly mistake in the conversion.
Most of the architectures call the faulting address 'address' in the
fault path. But not all. Some just call it 'addr'. And if you end up
doing a bit too much copy-and-paste, you end up with the wrong version
in the places that do it differently.
In this case it was csky.
Fixes:
a050ba1e7422 ("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Fri, 30 Jun 2023 06:04:57 +0000 (23:04 -0700)]
parisc: fix expand_stack() conversion
commit
ea3f8272876f2958463992f6736ab690fde7fa9c upstream.
In commit
8d7071af8907 ("mm: always expand the stack with the mmap write
lock held") I tried to deal with the remaining odd page fault handling
cases. The oddest one is ia64, which has stacks that grow both up and
down. And because ia64 was _so_ odd, I asked people to verify the end
result.
But a close second oddity is parisc, which is the only one that has a
main stack growing up (our "CONFIG_STACK_GROWSUP" config option). But
it looked obvious enough that I didn't worry about it.
I should have worried a bit more. Not because it was particularly
complex, but because I just used the wrong variable name.
The previous vma isn't called "prev", it's called "prev_vma". Blush.
Fixes:
8d7071af8907 ("mm: always expand the stack with the mmap write lock held")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Fri, 30 Jun 2023 03:41:24 +0000 (20:41 -0700)]
sparc32: fix lock_mm_and_find_vma() conversion
commit
0b26eadbf200abf6c97c6d870286c73219cdac65 upstream.
The sparc32 conversion to lock_mm_and_find_vma() in commit
a050ba1e7422
("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
missed the fact that we didn't actually have a 'regs' pointer available
in the 'force_user_fault()' case.
It's there in the regular page fault path ("do_sparc_fault()"), but not
the window underflow/overflow paths.
Which is all fine - we can just pass in a NULL pointer. The register
state is only used to avoid deadlock with kernel faults, which is not
the case for any of these register window faults.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes:
a050ba1e7422 ("mm/fault: convert remaining simple cases to lock_mm_and_find_vma()")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ricardo Cañuelo [Thu, 25 May 2023 12:18:11 +0000 (14:18 +0200)]
Revert "thermal/drivers/mediatek: Use devm_of_iomap to avoid resource leak in mtk_thermal_probe"
commit
86edac7d3888c715fe3a81bd61f3617ecfe2e1dd upstream.
This reverts commit
f05c7b7d9ea9477fcc388476c6f4ade8c66d2d26.
That change was causing a regression in the generic-adc-thermal-probed
bootrr test as reported in the kernelci-results list [1].
A proper rework will take longer, so revert it for now.
[1] https://groups.io/g/kernelci-results/message/42660
Fixes:
f05c7b7d9ea9 ("thermal/drivers/mediatek: Use devm_of_iomap to avoid resource leak in mtk_thermal_probe")
Signed-off-by: Ricardo Cañuelo <ricardo.canuelo@collabora.com>
Suggested-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20230525121811.3360268-1-ricardo.canuelo@collabora.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Hommey [Sat, 17 Jun 2023 23:09:57 +0000 (08:09 +0900)]
HID: logitech-hidpp: add HIDPP_QUIRK_DELAYED_INIT for the T651.
commit
5fe251112646d8626818ea90f7af325bab243efa upstream.
commit
498ba2069035 ("HID: logitech-hidpp: Don't restart communication if
not necessary") put restarting communication behind that flag, and this
was apparently necessary on the T651, but the flag was not set for it.
Fixes:
498ba2069035 ("HID: logitech-hidpp: Don't restart communication if not necessary")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Hommey <mh@glandium.org>
Link: https://lore.kernel.org/r/20230617230957.6mx73th4blv7owqk@glandium.org
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Gerecke [Thu, 8 Jun 2023 21:38:28 +0000 (14:38 -0700)]
HID: wacom: Use ktime_t rather than int when dealing with timestamps
commit
9a6c0e28e215535b2938c61ded54603b4e5814c5 upstream.
Code which interacts with timestamps needs to use the ktime_t type
returned by functions like ktime_get. The int type does not offer
enough space to store these values, and attempting to use it is a
recipe for problems. In this particular case, overflows would occur
when calculating/storing timestamps leading to incorrect values being
reported to userspace. In some cases these bad timestamps cause input
handling in userspace to appear hung.
Link: https://gitlab.freedesktop.org/libinput/libinput/-/issues/901
Fixes:
17d793f3ed53 ("HID: wacom: insert timestamp to packed Bluetooth (BT) events")
CC: stable@vger.kernel.org
Signed-off-by: Jason Gerecke <jason.gerecke@wacom.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Link: https://lore.kernel.org/r/20230608213828.2108-1-jason.gerecke@wacom.com
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
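A small illustration of the overflow (not the driver code): ktime_get() returns a 64-bit nanosecond count, which does not fit in an int.
ktime_t now = ktime_get();	/* full-width 64-bit timestamp */
int bad = (int)now;		/* truncated; an int holds only ~2 seconds of nanoseconds */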
Ludvig Michaelsson [Wed, 21 Jun 2023 11:17:43 +0000 (13:17 +0200)]
HID: hidraw: fix data race on device refcount
commit
944ee77dc6ec7b0afd8ec70ffc418b238c92f12b upstream.
The hidraw_open() function increments the hidraw device reference
counter. The counter has no dedicated synchronization mechanism,
resulting in a potential data race when concurrently opening a device.
The race is a regression introduced by commit
8590222e4b02 ("HID:
hidraw: Replace hidraw device table mutex with a rwsem"). While
minors_rwsem is intended to protect hidraw_table itself, acquiring the
lock for writing rather than reading also protects the reference counter.
This is symmetrical to hidraw_release().
Link: https://github.com/systemd/systemd/issues/27947
Fixes:
8590222e4b02 ("HID: hidraw: Replace hidraw device table mutex with a rwsem")
Cc: stable@vger.kernel.org
Signed-off-by: Ludvig Michaelsson <ludvig.michaelsson@yubico.com>
Link: https://lore.kernel.org/r/20230621-hidraw-race-v1-1-a58e6ac69bab@yubico.com
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
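A sketch of the locking change in hidraw_open() (simplified and illustrative, not the exact hunk):
down_write(&minors_rwsem);		/* was down_read() */
dev = hidraw_table[minor];
if (!dev) {
	err = -ENODEV;
	goto out_unlock;
}
dev->open++;				/* now updated under the write lock */
out_unlock:
up_write(&minors_rwsem);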
Zhang Shurong [Sat, 24 Jun 2023 16:16:49 +0000 (00:16 +0800)]
fbdev: fix potential OOB read in fast_imageblit()
commit
c2d22806aecb24e2de55c30a06e5d6eb297d161d upstream.
There is a potential OOB read in fast_imageblit(): the index in
"colortab[(*src >> 4)]" can become negative because of
"const char *s = image->data, *src".
This change makes sure the index into colortab is always positive
or zero.
Similar commit:
https://patchwork.kernel.org/patch/11746067
Potential bug report:
https://groups.google.com/g/syzkaller-bugs/c/9ubBXKeKXf4/m/k-QXy4UgAAAJ
Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
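An illustration of the sign problem (not the exact patch): with a plain signed char, source bytes >= 0x80 shift to a negative value and index colortab out of bounds; an unsigned type (or a 0xf mask) keeps the index in 0..15.
const u8 *src = (const u8 *)image->data;
u32 fg = colortab[(*src >> 4) & 0xf];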
Linus Torvalds [Sat, 24 Jun 2023 20:45:51 +0000 (13:45 -0700)]
mm: always expand the stack with the mmap write lock held
commit
8d7071af890768438c14db6172cc8f9f4d04e184 upstream
This finishes the job of always holding the mmap write lock when
extending the user stack vma, and removes the 'write_locked' argument
from the vm helper functions again.
For some cases, we just avoid expanding the stack at all: drivers and
page pinning really shouldn't be extending any stacks. Let's see if any
strange users really wanted that.
It's worth noting that architectures that weren't converted to the new
lock_mm_and_find_vma() helper function are left using the legacy
"expand_stack()" function, but it has been changed to drop the mmap_lock
and take it for writing while expanding the vma. This makes it fairly
straightforward to convert the remaining architectures.
As a result of dropping and re-taking the lock, the calling conventions
for this function have also changed, since the old vma may no longer be
valid. So it will now return the new vma if successful, and NULL - and
the lock dropped - if the area could not be extended.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[6.1: Patch drivers/iommu/io-pgfault.c instead]
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Mon, 19 Jun 2023 18:34:15 +0000 (11:34 -0700)]
execve: expand new process stack manually ahead of time
commit
f313c51d26aa87e69633c9b46efb37a930faca71 upstream.
This is a small step towards a model where GUP itself would not expand
the stack, and any user that needs GUP to not look up existing mappings,
but actually expand on them, would have to do so manually before-hand,
and with the mm lock held for writing.
It turns out that execve() already did almost exactly that, except it
didn't take the mm lock at all (it's single-threaded so no locking
technically needed, but it could cause lockdep errors). And it only did
it for the CONFIG_STACK_GROWSUP case, since in that case GUP has
obviously never expanded the stack downwards.
So just make that CONFIG_STACK_GROWSUP case do the right thing with
locking, and enable it generally. This will eventually help GUP, and in
the meantime avoids a special case and the lockdep issue.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[6.1 Minor context from still having FOLL_FORCE flags set]
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liam R. Howlett [Fri, 16 Jun 2023 22:58:54 +0000 (15:58 -0700)]
mm: make find_extend_vma() fail if write lock not held
commit
f440fa1ac955e2898893f9301568435eb5cdfc4b upstream.
Make calls to extend_vma() and find_extend_vma() fail if the write lock
is required.
To avoid making this a flag-day event, this still allows the old
read-locking case for the trivial situations, and passes in a flag to
say "is it write-locked". That way write-lockers can say "yes, I'm
being careful", and legacy users will continue to work in all the common
cases until they have been fully converted to the new world order.
Co-Developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Sat, 24 Jun 2023 18:17:05 +0000 (11:17 -0700)]
powerpc/mm: convert coprocessor fault to lock_mm_and_find_vma()
commit
2cd76c50d0b41cec5c87abfcdf25b236a2793fb6 upstream.
This is one of the simple cases, except there's no pt_regs pointer.
Which is fine, as lock_mm_and_find_vma() is set up to work fine with a
NULL pt_regs.
Powerpc already enabled LOCK_MM_AND_FIND_VMA for the main CPU faulting,
so we can just use the helper without any extra work.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Sat, 24 Jun 2023 17:55:38 +0000 (10:55 -0700)]
mm/fault: convert remaining simple cases to lock_mm_and_find_vma()
commit
a050ba1e7422f2cc60ff8bfde3f96d34d00cb585 upstream.
This does the simple pattern conversion of alpha, arc, csky, hexagon,
loongarch, nios2, sh, sparc32, and xtensa to the lock_mm_and_find_vma()
helper. They all have the regular fault handling pattern without odd
special cases.
The remaining architectures all have something that keeps us from a
straightforward conversion: ia64 and parisc have stacks that can grow
both up as well as down (and ia64 has special address region checks).
And m68k, microblaze, openrisc, sparc64, and um end up having extra
rules about only expanding the stack down a limited amount below the
user space stack pointer. That is something that x86 used to do too
(long long ago), and it probably could just be skipped, but it still
makes the conversion less than trivial.
Note that this conversion was done manually and with the exception of
alpha without any build testing, because I have a fairly limited cross-
building environment. The cases are all simple, and I went through the
changes several times, but...
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Thu, 22 Jun 2023 19:24:30 +0000 (21:24 +0200)]
arm/mm: Convert to using lock_mm_and_find_vma()
commit
8b35ca3e45e35a26a21427f35d4093606e93ad0a upstream.
arm has an additional check for address < FIRST_USER_ADDRESS before
expanding the stack. Since FIRST_USER_ADDRESS is defined everywhere
(generally as 0), move that check to the generic expand_downwards().
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Thu, 22 Jun 2023 18:18:18 +0000 (20:18 +0200)]
riscv/mm: Convert to using lock_mm_and_find_vma()
commit
7267ef7b0b77f4ed23b7b3c87d8eca7bd9c2d007 upstream.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[6.1: Kconfig context]
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Thu, 22 Jun 2023 16:47:40 +0000 (18:47 +0200)]
mips/mm: Convert to using lock_mm_and_find_vma()
commit
4bce37a68ff884e821a02a731897a8119e0c37b7 upstream.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Fri, 16 Jun 2023 05:51:29 +0000 (15:51 +1000)]
powerpc/mm: Convert to using lock_mm_and_find_vma()
commit
e6fe228c4ffafdfc970cf6d46883a1f481baf7ea upstream.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Fri, 16 Jun 2023 00:11:44 +0000 (17:11 -0700)]
arm64/mm: Convert to using lock_mm_and_find_vma()
commit
ae870a68b5d13d67cf4f18d47bb01ee3fee40acb upstream.
This converts arm64 to use the new page fault helper. It was very
straightforward, but still needed a fix for the "obvious" conversion I
initially did. Thanks to Suren for the fix and testing.
Fixed-and-tested-by: Suren Baghdasaryan <surenb@google.com>
Unnecessary-code-removal-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[6.1: Ignore CONFIG_PER_VMA_LOCK context]
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Thu, 15 Jun 2023 23:17:48 +0000 (16:17 -0700)]
mm: make the page fault mmap locking killable
commit
eda0047296a16d65a7f2bc60a408f70d178b2014 upstream.
This is done as a separate patch from introducing the new
lock_mm_and_find_vma() helper, because while it's an obvious change,
it's not what x86 used to do in this area.
We already abort the page fault on fatal signals anyway, so why should
we wait for the mmap lock only to then abort later? With the new helper
function that returns without the lock held on failure anyway, this is
particularly easy and straightforward.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Thu, 15 Jun 2023 22:17:36 +0000 (15:17 -0700)]
mm: introduce new 'lock_mm_and_find_vma()' page fault helper
commit
c2508ec5a58db67093f4fb8bf89a9a7c53a109e9 upstream.
.. and make x86 use it.
This basically extracts the existing x86 "find and expand faulting vma"
code, but extends it to also take the mmap lock for writing in case we
actually do need to expand the vma.
We've historically short-circuited that case, and have some rather ugly
special logic to serialize the stack segment expansion (since we only
hold the mmap lock for reading) that doesn't match the normal VM
locking.
That slight violation of locking worked well, right up until it didn't:
the maple tree code really does want proper locking even for simple
extension of an existing vma.
So extract the code for "look up the vma of the fault" from x86, fix it
up to do the necessary write locking, and make it available as a helper
function for other architectures that can use the common helper.
Note: I say "common helper", but it really only handles the normal
stack-grows-down case. Which is all architectures except for PA-RISC
and IA64. So some rare architectures can't use the helper, but if they
care they'll just need to open-code this logic.
It's also worth pointing out that this code really would like to have an
optimistic "mmap_upgrade_trylock()" to make it quicker to go from a
read-lock (for the common case) to taking the write lock (for having to
extend the vma) in the normal single-threaded situation where there is
no other locking activity.
But that _is_ all the very uncommon special case, so while it would be
nice to have such an operation, it probably doesn't matter in reality.
I did put in the skeleton code for such a possible future expansion,
even if it only acts as pseudo-documentation for what we're doing.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[6.1: Ignore CONFIG_PER_VMA_LOCK context]
Signed-off-by: Samuel Mendoza-Jonas <samjonas@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
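A simplified, stack-grows-down-only sketch of the helper's shape (hedged; the upstream code has more cases and an upgrade-trylock fast path):
struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
					    unsigned long addr,
					    struct pt_regs *regs)
{
	struct vm_area_struct *vma;

	if (mmap_read_lock_killable(mm))
		return NULL;

	vma = find_vma(mm, addr);
	if (vma && vma->vm_start <= addr)
		return vma;			/* common case, read lock held */

	/* The vma may need to be expanded: retry under the write lock. */
	mmap_read_unlock(mm);
	if (mmap_write_lock_killable(mm))
		return NULL;

	vma = find_vma(mm, addr);
	if (vma && vma->vm_start <= addr)
		goto success;
	if (!vma || !(vma->vm_flags & VM_GROWSDOWN) ||
	    expand_downwards(vma, addr)) {
		mmap_write_unlock(mm);
		return NULL;
	}
success:
	mmap_write_downgrade(mm);		/* callers expect the read lock */
	return vma;
}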
Peng Zhang [Sat, 6 May 2023 02:47:52 +0000 (10:47 +0800)]
maple_tree: fix potential out-of-bounds access in mas_wr_end_piv()
commit
cd00dd2585c4158e81fdfac0bbcc0446afbad26d upstream.
Check the write offset end bounds before using it as the offset into the
pivot array. This avoids a possible out-of-bounds access on the pivot
array if the write extends to the last slot in the node, in which case the
node maximum should be used as the end pivot.
akpm: this doesn't affect any current callers, but new users of maple
tree may encounter this problem if their code is backported into earlier
kernels, so let's fix it in -stable kernels just in case.
Link: https://lkml.kernel.org/r/20230506024752.2550-1-zhangpeng.00@bytedance.com
Fixes:
54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
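A hedged sketch of the bounds check, with field names taken from the changelog and treated as illustrative:
/* Only index the pivot array while the end offset is inside the node;
 * when the write runs to the last slot, use the node maximum instead. */
if (wr_mas->offset_end < wr_mas->node_end)
	wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
else
	wr_mas->end_piv = wr_mas->mas->max;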
Oliver Hartkopp [Wed, 7 Jun 2023 07:27:08 +0000 (09:27 +0200)]
can: isotp: isotp_sendmsg(): fix return error fix on TX path
commit
e38910c0072b541a91954682c8b074a93e57c09b upstream.
With commit
d674a8f123b4 ("can: isotp: isotp_sendmsg(): fix return
error on FC timeout on TX path") the missing correct return value in
the case of a protocol error was introduced.
But the way the error value has been read and sent to the user space
does not follow the common scheme to clear the error after reading
which is provided by the sock_error() function. This leads to an error
report at the following write() attempt although everything should be
working.
Fixes:
d674a8f123b4 ("can: isotp: isotp_sendmsg(): fix return error on FC timeout on TX path")
Reported-by: Carsten Schmidt <carsten.schmidt-achim@t-online.de>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/all/20230607072708.38809-1-socketcan@hartkopp.net
Cc: stable@vger.kernel.org
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
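A minimal sketch of the reporting scheme the fix switches to (illustrative):
/* sock_error() reads and clears sk->sk_err, so a later write() does not
 * trip over a stale error. */
err = sock_error(sk);
if (err)
	return err;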
Thomas Gleixner [Thu, 15 Jun 2023 20:33:57 +0000 (22:33 +0200)]
x86/smp: Cure kexec() vs. mwait_play_dead() breakage
commit
d7893093a7417527c0d73c9832244e65c9d0114f upstream.
TLDR: It's a mess.
When kexec() is executed on a system with offline CPUs, which are parked in
mwait_play_dead() it can end up in a triple fault during the bootup of the
kexec kernel or cause hard to diagnose data corruption.
The reason is that kexec() eventually overwrites the previous kernel's text,
page tables, data and stack. If it writes to the cache line which is
monitored by a previously offlined CPU, MWAIT resumes execution and ends
up executing the wrong text, dereferencing overwritten page tables or
corrupting the kexec kernel's data.
Cure this by bringing the offlined CPUs out of MWAIT into HLT.
Write to the monitored cache line of each offline CPU, which makes MWAIT
resume execution. The written control word tells the offlined CPUs to issue
HLT, which does not have the MWAIT problem.
That does not help if a stray NMI, MCE or SMI hits the offlined CPUs, as
those make them come out of HLT.
A follow up change will put them into INIT, which protects at least against
NMI and SMI.
Fixes:
ea53069231f9 ("x86, hotplug: Use mwait to offline a processor, fix the legacy case")
Reported-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230615193330.492257119@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
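A sketch of the handshake as I understand it (the per-CPU control structure "md" and the constant name are illustrative):
/* kexec/reboot path, for each CPU parked in mwait_play_dead(): */
WRITE_ONCE(md->control, CPUDEAD_MWAIT_KEXEC_HLT);	/* hits the monitored line */

/* mwait_play_dead() loop on the parked CPU: */
__monitor(md, 0, 0);
__mwait(eax, 0);
if (READ_ONCE(md->control) == CPUDEAD_MWAIT_KEXEC_HLT)
	while (1)
		native_halt();		/* HLT is immune to the text overwrite */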
Thomas Gleixner [Thu, 15 Jun 2023 20:33:55 +0000 (22:33 +0200)]
x86/smp: Use dedicated cache-line for mwait_play_dead()
commit
f9c9987bf52f4e42e940ae217333ebb5a4c3b506 upstream.
Monitoring idletask::thread_info::flags in mwait_play_dead() has been an
obvious choice, as all that is needed is a cache line which is not written
by other CPUs.
But there is a use case where a "dead" CPU needs to be brought out of
MWAIT: kexec().
This is required as kexec() can overwrite text, pagetables, stacks and the
monitored cacheline of the original kernel. The latter causes MWAIT to
resume execution, which obviously causes havoc on the kexec kernel and
usually results in triple faults.
Use a dedicated per CPU storage to prepare for that.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230615193330.434553750@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 15 Jun 2023 20:33:54 +0000 (22:33 +0200)]
x86/smp: Remove pointless wmb()s from native_stop_other_cpus()
commit
2affa6d6db28855e6340b060b809c23477aa546e upstream.
The wmb()s before sending the IPIs are not synchronizing anything.
If anything, then the APIC IPI functions have to provide or act as
appropriate barriers.
Remove these cargo-cult barriers, which have no explanation of what they
are synchronizing.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230615193330.378358382@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Battersby [Thu, 15 Jun 2023 20:33:52 +0000 (22:33 +0200)]
x86/smp: Don't access non-existing CPUID leaf
commit
9b040453d4440659f33dc6f0aa26af418ebfe70b upstream.
stop_this_cpu() tests CPUID leaf 0x8000001f::EAX unconditionally. Intel
CPUs return the content of the highest supported leaf when a non-existing
leaf is read, while AMD CPUs return all zeros for unsupported leaves.
So the result of the test on Intel CPUs is a lottery.
While harmless, it's incorrect and causes the conditional wbinvd() to be
issued where not required.
Check whether the leaf is supported before reading it.
[ tglx: Adjusted changelog ]
Fixes:
08f253ec3767 ("x86/cpu: Clear SME feature flag when not in use")
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/3817d810-e0f1-8ef8-0bbd-663b919ca49b@cybernetics.com
Link: https://lore.kernel.org/r/20230615193330.322186388@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
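A sketch of the corrected test (illustrative):
/* Only consult leaf 0x8000001f when the CPU actually implements it. */
if (cpuid_eax(0x80000000) >= 0x8000001f &&
    (cpuid_eax(0x8000001f) & BIT(0)))		/* SME supported */
	native_wbinvd();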
Thomas Gleixner [Wed, 26 Apr 2023 16:37:00 +0000 (18:37 +0200)]
x86/smp: Make stop_other_cpus() more robust
commit
1f5e7eb7868e42227ac426c96d437117e6e06e8e upstream.
Tony reported intermittent lockups on poweroff. His analysis identified the
wbinvd() in stop_this_cpu() as the culprit. This was added to ensure that
on SME enabled machines a kexec() does not leave any stale data in the
caches when switching from encrypted to non-encrypted mode or vice versa.
That wbinvd() is conditional on the SME feature bit which is read directly
from CPUID. But that readout does not check whether the CPUID leaf is
available or not. If it's not available the CPU will return the value of
the highest supported leaf instead. Depending on the content the "SME" bit
might be set or not.
That's incorrect but harmless. Making the CPUID readout conditional makes
the observed hangs go away, but it does not fix the underlying problem:
  CPU0                                  CPU1

 stop_other_cpus()
   send_IPIs(REBOOT);                   stop_this_cpu()
   while (num_online_cpus() > 1);         set_online(false);
   proceed... -> hang
                                          wbinvd()
WBINVD is an expensive operation, and if multiple CPUs issue it at the
same time the resulting delays are even larger.
But CPU0 already observed num_online_cpus() going down to 1 and proceeds,
which causes the system to hang.
This issue exists independent of WBINVD, but the delays caused by WBINVD
make it more prominent.
Make this more robust by adding a cpumask which is initialized to the
online CPU mask before sending the IPIs and CPUs clear their bit in
stop_this_cpu() after the WBINVD completed. Check for that cpumask to
become empty in stop_other_cpus() instead of watching num_online_cpus().
The cpumask cannot plug all holes either, but it's better than a raw
counter and allows restricting the NMI fallback IPI to be sent only to
the CPUs which have not reported within the timeout window.
Fixes:
08f253ec3767 ("x86/cpu: Clear SME feature flag when not in use")
Reported-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/3817d810-e0f1-8ef8-0bbd-663b919ca49b@cybernetics.com
Link: https://lore.kernel.org/r/87h6r770bv.ffs@tglx
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
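A simplified sketch of the scheme (hedged; helper and vector names as I understand them, not a quote of the patch):
static struct cpumask cpus_stop_mask;

/* stop_other_cpus(): */
cpumask_copy(&cpus_stop_mask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &cpus_stop_mask);
apic_send_IPI_allbutself(REBOOT_VECTOR);
while (!cpumask_empty(&cpus_stop_mask) && timeout--)
	udelay(1);
/* The NMI fallback IPI then goes only to CPUs still set in cpus_stop_mask. */

/* stop_this_cpu(), after the conditional wbinvd(): */
cpumask_clear_cpu(smp_processor_id(), &cpus_stop_mask);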
Borislav Petkov (AMD) [Tue, 2 May 2023 17:53:50 +0000 (19:53 +0200)]
x86/microcode/AMD: Load late on both threads too
commit
a32b0f0db3f396f1c9be2fe621e77c09ec3d8e7d upstream.
Do the same as early loading - load on both threads.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/20230605141332.25948-1-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>