Yang Yingliang [Wed, 15 Sep 2021 03:49:25 +0000 (11:49 +0800)]
usb: musb: tusb6010: check return value after calling platform_get_resource()
[ Upstream commit 14651496a3de6807a17c310f63c894ea0c5d858e ]
It will cause a null-ptr-deref if platform_get_resource() returns NULL, so we need to check the return value.
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20210915034925.2399823-1-yangyingliang@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
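For reference, the general shape of this class of fix is a NULL check right after platform_get_resource(); a minimal sketch with hypothetical names (not the actual tusb6010 code):

    #include <linux/platform_device.h>
    #include <linux/io.h>
    #include <linux/errno.h>

    /* Sketch (hypothetical driver): bail out when the memory resource is
     * missing instead of dereferencing a NULL pointer later on. */
    static int example_probe(struct platform_device *pdev)
    {
            struct resource *mem;
            void __iomem *base;

            mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            if (!mem)
                    return -ENODEV;

            base = devm_ioremap(&pdev->dev, mem->start, resource_size(mem));
            if (!base)
                    return -ENOMEM;

            return 0;
    }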
Tony Lindgren [Tue, 21 Sep 2021 09:42:25 +0000 (12:42 +0300)]
bus: ti-sysc: Use context lost quirk for otg
[ Upstream commit 9067839ff45a528bcb015cc2f24f656126b91e3f ]
Let's use SYSC_QUIRK_REINIT_ON_CTX_LOST quirk for am335x otg instead of
SYSC_QUIRK_REINIT_ON_RESUME quirk as we can now handle the context loss
in a more generic way.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tony Lindgren [Tue, 21 Sep 2021 09:42:25 +0000 (12:42 +0300)]
bus: ti-sysc: Add quirk handling for reinit on context lost
[ Upstream commit 9d881361206ebcf6285c2ec2ef275aff80875347 ]
Some interconnect target modules such as otg and gpmc on am335x need a
re-init after resume. As we also have PM runtime cases where the context
may be lost, let's handle these all with cpu_pm.
For the am335x resume path, we already have cpu_pm_resume() call
cpu_pm_cluster_exit().
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Selvin Xavier [Wed, 15 Sep 2021 12:32:42 +0000 (05:32 -0700)]
RDMA/bnxt_re: Check if the vlan is valid before reporting
[ Upstream commit 6bda39149d4b8920fdb8744090653aca3daa792d ]
When a VF is configured with a default VLAN, HW strips the VLAN from the packet and the driver receives it in the Rx completion. The VLAN needs to be reported for UD work completion only if a VLAN is configured on the host. Add a check for a valid VLAN in the UD receive path.
Link: https://lore.kernel.org/r/1631709163-2287-12-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Michael Walle [Mon, 30 Aug 2021 16:51:13 +0000 (18:51 +0200)]
arm64: dts: hisilicon: fix arm,sp805 compatible string
[ Upstream commit 894d4f1f77d0e88f1f81af2e1e37333c1c41b631 ]
According to Documentation/devicetree/bindings/watchdog/arm,sp805.yaml
the compatible is:
compatible = "arm,sp805", "arm,primecell";
The current compatible string doesn't exist at all. Fix it.
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Matthias Brugger [Thu, 15 Jul 2021 16:41:01 +0000 (18:41 +0200)]
arm64: dts: rockchip: Disable CDN DP on Pinebook Pro
[ Upstream commit 2513fa5c25d42f55ca5f0f0ab247af7c9fbfa3b1 ]
The CDN DP needs a PHY and an extcon to work correctly. But no extcon is provided by the device-tree, which leads to an error:
cdn-dp fec00000.dp: [drm:cdn_dp_probe [rockchipdrm]] *ERROR* missing extcon or phy
cdn-dp: probe of fec00000.dp failed with error -22
Disable the CDN DP to make graphics work on the Pinebook Pro.
Reported-by: Guillaume Gardet <guillaume.gardet@arm.com>
Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Link: https://lore.kernel.org/r/20210715164101.11486-1-matthias.bgg@kernel.org
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Bixuan Cui [Sat, 11 Sep 2021 08:12:46 +0000 (16:12 +0800)]
ASoC: mediatek: mt8195: Add missing of_node_put()
[ Upstream commit b2fc2c92d2fd34d93268f677e514936f50dd6b5c ]
The platform_node returned by of_parse_phandle() should have of_node_put() called on it before returning.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Bixuan Cui <cuibixuan@huawei.com>
Link: https://lore.kernel.org/r/20210911081246.33867-1-cuibixuan@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
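The pattern behind this fix, as a hedged sketch with placeholder property and function names (the mt8195 machine driver itself is not reproduced here):

    #include <linux/of.h>
    #include <linux/errno.h>

    /* Sketch: of_parse_phandle() takes a reference on the returned node,
     * so drop it with of_node_put() before returning. */
    static int example_get_platform(struct device_node *np)
    {
            struct device_node *platform_node;

            platform_node = of_parse_phandle(np, "mediatek,platform", 0);
            if (!platform_node)
                    return -EINVAL;

            /* ... look up the platform device, build DAI links, etc. ... */

            of_node_put(platform_node);     /* the call this fix adds */
            return 0;
    }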
James Smart [Fri, 10 Sep 2021 23:31:46 +0000 (16:31 -0700)]
scsi: lpfc: Fix list_add() corruption in lpfc_drain_txq()
[ Upstream commit 99154581b05c8fb22607afb7c3d66c1bace6aa5d ]
When parsing the txq list in lpfc_drain_txq(), the driver attempts to pass the requests to the adapter. If such an attempt fails, a local "fail_msg" string is set and a log message is output. The job is then added to a completions list for cancellation.
Processing of any further jobs from the txq list continues, but since "fail_msg" remains set, jobs are added to the completions list regardless of whether a WQE was passed to the adapter. If successfully added to the txcmplq, jobs end up on both lists, resulting in list corruption.
Fix by clearing the fail_msg string after adding a job to the completions
list. This stops the subsequent jobs from being added to the completions
list unless they had an appropriate failure.
Link: https://lore.kernel.org/r/20210910233159.115896-2-jsmart2021@gmail.com
Co-developed-by: Justin Tee <justin.tee@broadcom.com>
Signed-off-by: Justin Tee <justin.tee@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
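The bug boils down to a stale failure flag reused across loop iterations; a self-contained toy illustration of that pattern (plain C, not lpfc code):

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy illustration: a failure marker that is never cleared makes every
     * later iteration look like a failure too. Resetting it per iteration
     * restores the intended behaviour. */
    static bool submit(int job)
    {
            return job != 3;        /* pretend only job 3 fails to submit */
    }

    int main(void)
    {
            const char *fail_msg;

            for (int job = 0; job < 6; job++) {
                    fail_msg = NULL;        /* the essence of the fix: clear per job */
                    if (!submit(job))
                            fail_msg = "submit failed";
                    if (fail_msg)
                            printf("job %d moved to completions: %s\n", job, fail_msg);
            }
            return 0;
    }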
Ajish Koshy [Mon, 6 Sep 2021 17:04:04 +0000 (22:34 +0530)]
scsi: pm80xx: Fix memory leak during rmmod
[ Upstream commit 51e6ed83bb4ade7c360551fa4ae55c4eacea354b ]
The driver failed to release all allocated memory. This would lead to a memory leak during driver removal.
Properly free memory when the module is removed.
Link: https://lore.kernel.org/r/20210906170404.5682-5-Ajish.Koshy@microchip.com
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Ajish Koshy <Ajish.Koshy@microchip.com>
Signed-off-by: Viswas G <Viswas.G@microchip.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rafał Miłecki [Thu, 19 Aug 2021 12:26:06 +0000 (14:26 +0200)]
arm64: dts: broadcom: bcm4908: Move reboot syscon out of bus
[ Upstream commit 6cf9f70255b90b540b9cbde062f18fea29024a75 ]
This fixes the following error for every bcm4908 DTS file:
bus@ff800000: reboot: {'type': 'object'} is not allowed for {'compatible': ['syscon-reboot'], 'regmap': [[15]], 'offset': [[52]], 'mask': [[1]]}
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Matthew Hagan [Sun, 29 Aug 2021 22:37:48 +0000 (22:37 +0000)]
ARM: dts: NSP: Fix mpcore, mmc node names
[ Upstream commit 15a563d008ef9d04df525f0c476cd7d7127bb883 ]
Running dtbs_check yielded issues with bcm-nsp.dtsi.
Firstly, this patch fixes the following message by appending "-bus" to the mpcore node name:
mpcore@19000000: $nodename:0: 'mpcore@19000000' does not match '^([a-z][a-z0-9\\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
Secondly, it fixes the mmc node name; the label name can remain as is.
sdhci@21000: $nodename:0: 'sdhci@21000' does not match '^mmc(@.*)?$'
Signed-off-by: Matthew Hagan <mnhagan88@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rafał Miłecki [Thu, 19 Aug 2021 06:57:01 +0000 (08:57 +0200)]
ARM: dts: BCM5301X: Fix MDIO mux binding
[ Upstream commit 6ee0b56f7530e0ebb496fe15d0b54c5f3a1b5e17 ]
This fixes the following error for all BCM5301X dts files:
mdio-bus-mux@18003000: compatible: ['mdio-mux-mmioreg'] is too short
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rafał Miłecki [Thu, 19 Aug 2021 06:57:00 +0000 (08:57 +0200)]
ARM: dts: BCM5301X: Fix nodes names
[ Upstream commit 9dba049b6d32e95c0dd2a0d607f593ea288ac140 ]
This fixes the following errors for all BCM5301X dts files:
chipcommonA@18000000: $nodename:0: 'chipcommonA@18000000' does not match '^([a-z][a-z0-9\\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
mpcore@19000000: $nodename:0: 'mpcore@19000000' does not match '^([a-z][a-z0-9\\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
mdio-bus-mux@18003000: $nodename:0: 'mdio-bus-mux@18003000' does not match '^mdio-mux[\\-@]?'
dmu@1800c000: $nodename:0: 'dmu@1800c000' does not match '^([a-z][a-z0-9\\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jérôme Pouiller [Mon, 13 Sep 2021 13:02:03 +0000 (15:02 +0200)]
staging: wfx: ensure IRQ is ready before enabling it
[ Upstream commit 5e57c668dc097c6c27c973504706edec53f79281 ]
Since commit 5561770f80b1 ("staging: wfx: repair external IRQ for SDIO"), wfx_sdio_irq_subscribe() forces the device to use IRQs.
However, there is currently a race in this code: an IRQ may fire before the handler has been registered.
The problem was observed during a debug session when the device crashed before the IRQ was set up:
[ 1.546] wfx-sdio mmc0:0001:1: started firmware 3.12.2 "WF200_ASIC_WFM_(Jenkins)_FW3.12.2" (API: 3.7, keyset: C0, caps: 0x00000002)
[ 2.559] wfx-sdio mmc0:0001:1: time out while polling control register
[ 3.565] wfx-sdio mmc0:0001:1: chip is abnormally long to answer
[ 6.563] wfx-sdio mmc0:0001:1: chip did not answer
[ 6.568] wfx-sdio mmc0:0001:1: hardware request CONFIGURATION (0x09) on vif 2 returned error -110
[ 6.577] wfx-sdio mmc0:0001:1: PDS bytes 0 to 12: chip didn't reply (corrupted file?)
[ 6.585] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[ 6.592] pgd = c0004000
[ 6.595] [00000000] *pgd=00000000
[ 6.598] Internal error: Oops - BUG: 17 [#1] THUMB2
[ 6.603] Modules linked in:
[ 6.606] CPU: 0 PID: 23 Comm: kworker/u2:1 Not tainted 3.18.19 #78
[ 6.612] Workqueue: kmmcd mmc_rescan
[ 6.616] task: c176d100 ti: c0e50000 task.ti: c0e50000
[ 6.621] PC is at wake_up_process+0xa/0x14
[ 6.625] LR is at sdio_irq+0x61/0x250
[ 6.629] pc : [<c001e8ae>] lr : [<c00ec5bd>] psr: 600001b3
[ 6.629] sp : c0e51bd8 ip : c0e51cc8 fp : 00000001
[ 6.640] r10: 00000003 r9 : 00000000 r8 : c0003c34
[ 6.644] r7 : c0e51bd8 r6 : c0003c30 r5 : 00000001 r4 : c0e78c00
[ 6.651] r3 : 00000000 r2 : 00000000 r1 : 00000003 r0 : 00000000
[ 6.657] Flags: nZCv IRQs off FIQs on Mode SVC_32 ISA Thumb Segment kernel
[ 6.664] Control: 50c53c7d Table: 11fd8059 DAC: 00000015
[ 6.670] Process kworker/u2:1 (pid: 23, stack limit = 0xc0e501b0)
[ 6.676] Stack: (0xc0e51bd8 to 0xc0e52000)
[...]
[ 6.949] [<c001e8ae>] (wake_up_process) from [<c00ec5bd>] (sdio_irq+0x61/0x250)
[ 6.956] [<c00ec5bd>] (sdio_irq) from [<c0025099>] (handle_irq_event_percpu+0x17/0x92)
[ 6.964] [<c0025099>] (handle_irq_event_percpu) from [<c002512f>] (handle_irq_event+0x1b/0x24)
[ 6.973] [<c002512f>] (handle_irq_event) from [<c0026577>] (handle_level_irq+0x5d/0x76)
[ 6.981] [<c0026577>] (handle_level_irq) from [<c0024cc3>] (generic_handle_irq+0x13/0x1c)
[ 6.989] [<c0024cc3>] (generic_handle_irq) from [<c0024dd9>] (__handle_domain_irq+0x31/0x48)
[ 6.997] [<c0024dd9>] (__handle_domain_irq) from [<c0008359>] (ov_handle_irq+0x31/0xe0)
[ 7.005] [<c0008359>] (ov_handle_irq) from [<c000af5b>] (__irq_svc+0x3b/0x5c)
[ 7.013] Exception stack(0xc0e51c68 to 0xc0e51cb0)
[...]
[ 7.038] [<c000af5b>] (__irq_svc) from [<c01775aa>] (wait_for_common+0x9e/0xc4)
[ 7.045] [<c01775aa>] (wait_for_common) from [<c00e1dc3>] (mmc_wait_for_req+0x4b/0xdc)
[ 7.053] [<c00e1dc3>] (mmc_wait_for_req) from [<c00e1e83>] (mmc_wait_for_cmd+0x2f/0x34)
[ 7.061] [<c00e1e83>] (mmc_wait_for_cmd) from [<c00e7b2b>] (mmc_io_rw_direct_host+0x71/0xac)
[ 7.070] [<c00e7b2b>] (mmc_io_rw_direct_host) from [<c00e8f79>] (sdio_claim_irq+0x6b/0x116)
[ 7.078] [<c00e8f79>] (sdio_claim_irq) from [<c00d8415>] (wfx_sdio_irq_subscribe+0x19/0x94)
[ 7.086] [<c00d8415>] (wfx_sdio_irq_subscribe) from [<c00d5229>] (wfx_probe+0x189/0x2ac)
[ 7.095] [<c00d5229>] (wfx_probe) from [<c00d83bf>] (wfx_sdio_probe+0x8f/0xcc)
[ 7.102] [<c00d83bf>] (wfx_sdio_probe) from [<c00e7fbb>] (sdio_bus_probe+0x5f/0xa8)
[ 7.109] [<c00e7fbb>] (sdio_bus_probe) from [<c00be229>] (driver_probe_device+0x59/0x134)
[ 7.118] [<c00be229>] (driver_probe_device) from [<c00bd4d7>] (bus_for_each_drv+0x3f/0x4a)
[ 7.126] [<c00bd4d7>] (bus_for_each_drv) from [<c00be1a5>] (device_attach+0x3b/0x52)
[ 7.134] [<c00be1a5>] (device_attach) from [<c00bdc2b>] (bus_probe_device+0x17/0x4c)
[ 7.141] [<c00bdc2b>] (bus_probe_device) from [<c00bcd69>] (device_add+0x2c5/0x334)
[ 7.149] [<c00bcd69>] (device_add) from [<c00e80bf>] (sdio_add_func+0x23/0x44)
[ 7.156] [<c00e80bf>] (sdio_add_func) from [<c00e79eb>] (mmc_attach_sdio+0x187/0x1ec)
[ 7.164] [<c00e79eb>] (mmc_attach_sdio) from [<c00e31bd>] (mmc_rescan+0x18d/0x1fc)
[ 7.172] [<c00e31bd>] (mmc_rescan) from [<c001a14f>] (process_one_work+0xd7/0x170)
[ 7.179] [<c001a14f>] (process_one_work) from [<c001a59b>] (worker_thread+0x103/0x1bc)
[ 7.187] [<c001a59b>] (worker_thread) from [<c001c731>] (kthread+0x7d/0x90)
[ 7.194] [<c001c731>] (kthread) from [<c0008ce1>] (ret_from_fork+0x11/0x30)
[ 7.201] Code: 2103 b580 2200 af00 (681b) 46bd
[ 7.206] ---[ end trace 3ab50aced42eedb4 ]---
Signed-off-by: Jérôme Pouiller <jerome.pouiller@silabs.com>
Link: https://lore.kernel.org/r/20210913130203.1903622-33-Jerome.Pouiller@silabs.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maxime Ripard [Wed, 1 Sep 2021 09:18:49 +0000 (11:18 +0200)]
arm64: dts: allwinner: a100: Fix thermal zone node name
[ Upstream commit 5c34c4e46e601554bfa370b23c8ae3c3c734e9f7 ]
The thermal zones on the A100 are called $device-thermal-zone.
However, the thermal zone binding explicitly requires that zones are
called *-thermal. Let's fix it.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Link: https://lore.kernel.org/r/20210901091852.479202-50-maxime@cerno.tech
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maxime Ripard [Wed, 1 Sep 2021 09:18:47 +0000 (11:18 +0200)]
arm64: dts: allwinner: h5: Fix GPU thermal zone node name
[ Upstream commit 94a0f2b0e4e0953d8adf319c44244ef7a57de32c ]
The GPU thermal zone is named gpu_thermal. However, the underscore is
an invalid character for a node name and the thermal zone binding
explicitly requires that zones are called *-thermal. Let's fix it.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Link: https://lore.kernel.org/r/20210901091852.479202-48-maxime@cerno.tech
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maxime Ripard [Wed, 1 Sep 2021 09:18:42 +0000 (11:18 +0200)]
ARM: dts: sunxi: Fix OPPs node name
[ Upstream commit ffbe853a3f5a37fa0a511265b21abf097ffdbe45 ]
The operating-points-v2 nodes are named inconsistently, mostly either opp_table0 or gpu-opp-table. However, the underscore is an invalid character for a node name, and the operating-points-v2 binding explicitly requires that OPP table nodes are called opp-table-*. Let's fix it.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Link: https://lore.kernel.org/r/20210901091852.479202-43-maxime@cerno.tech
Signed-off-by: Sasha Levin <sashal@kernel.org>
Samuel Holland [Wed, 1 Sep 2021 05:05:19 +0000 (00:05 -0500)]
clk: sunxi-ng: Unregister clocks/resets when unbinding
[ Upstream commit 9bec2b9c6134052994115d2d3374e96f2ccb9b9d ]
Currently, unbinding a CCU driver unmaps the device's MMIO region, while
leaving its clocks/resets and their providers registered. This can cause
a page fault later when some clock operation tries to perform MMIO. Fix
this by separating the CCU initialization from the memory allocation,
and then using a devres callback to unregister the clocks and resets.
This also fixes a memory leak of the `struct ccu_reset`, and uses the
correct owner (the specific platform driver) for the clocks and resets.
Early OF clock providers are never unregistered, and limited error
handling is possible, so they are mostly unchanged. The error reporting
is made more consistent by moving the message inside of_sunxi_ccu_probe.
Signed-off-by: Samuel Holland <samuel@sholland.org>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Link: https://lore.kernel.org/r/20210901050526.45673-2-samuel@sholland.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Michal Simek [Fri, 6 Aug 2021 08:58:29 +0000 (10:58 +0200)]
arm64: zynqmp: Fix serial compatible string
[ Upstream commit 812fa2f0e9d33564bd0131a69750e0d165f4c82a ]
Based on commit 65a2c14d4f00 ("dt-bindings: serial: convert Cadence UART bindings to YAML"), the compatible string should look different; fix it to be aligned with the DT binding.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Link: https://lore.kernel.org/r/89b36e0a6187cc6b05b27a035efdf79173bd4486.1628240307.git.michal.simek@xilinx.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Amit Kumar Mahapatra [Mon, 14 Jun 2021 15:25:10 +0000 (17:25 +0200)]
arm64: zynqmp: Do not duplicate flash partition label property
[ Upstream commit 167721a5909f867f8c18c8e78ea58e705ad9bbd4 ]
In kernel 5.4, support has been added for reading MTD devices via the nvmem
API.
For this the mtd devices are registered as read-only NVMEM providers under
sysfs with the same name as the flash partition label property.
So if the flash partition label properties of multiple flash devices are identical, the second MTD device fails to get registered as an NVMEM provider.
This patch fixes the issue by using a different label property for each flash.
Signed-off-by: Amit Kumar Mahapatra <amit.kumar-mahapatra@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/6c4b9b9232b93d9e316a63c086540fd5bf6b8687.1623684253.git.michal.simek@xilinx.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Greg Kroah-Hartman [Sun, 21 Nov 2021 12:44:15 +0000 (13:44 +0100)]
Linux 5.15.4
Link: https://lore.kernel.org/r/20211119171444.640508836@linuxfoundation.org
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Fox Chen <foxhlchen@gmail.com>
Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Tested-by: Rudi Heitbaum <rudi@heitbaum.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-By: Scott Bruce <smbruce@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rafael J. Wysocki [Wed, 17 Nov 2021 16:05:41 +0000 (17:05 +0100)]
Revert "ACPI: scan: Release PM resources blocked by unused objects"
commit 3b2b49e6dfdcf423506a771bf44cee842596351a upstream.
Revert commit c10383e8ddf4 ("ACPI: scan: Release PM resources blocked by unused objects"), because it causes boot issues to appear on some platforms.
Reported-by: Kyle D. Pelton <kyle.d.pelton@intel.com>
Reported-by: Saranya Gopal <saranya.gopal@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subbaraman Narayanamurthy [Thu, 4 Nov 2021 23:57:07 +0000 (16:57 -0700)]
thermal: Fix NULL pointer dereferences in of_thermal_ functions
commit 96cfe05051fd8543cdedd6807ec59a0e6c409195 upstream.
of_parse_thermal_zones() parses the thermal-zones node and registers a
thermal_zone device for each subnode. However, if a thermal zone is
consuming a thermal sensor and that thermal sensor device hasn't probed
yet, an attempt to set trip_point_*_temp for that thermal zone device
can cause a NULL pointer dereference. Fix it.
console:/sys/class/thermal/thermal_zone87 # echo 120000 > trip_point_0_temp
...
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020
...
Call trace:
of_thermal_set_trip_temp+0x40/0xc4
trip_point_temp_store+0xc0/0x1dc
dev_attr_store+0x38/0x88
sysfs_kf_write+0x64/0xc0
kernfs_fop_write_iter+0x108/0x1d0
vfs_write+0x2f4/0x368
ksys_write+0x7c/0xec
__arm64_sys_write+0x20/0x30
el0_svc_common.llvm.7279915941325364641+0xbc/0x1bc
do_el0_svc+0x28/0xa0
el0_svc+0x14/0x24
el0_sync_handler+0x88/0xec
el0_sync+0x1c0/0x200
While at it, fix the possible NULL pointer dereference in other
functions as well: of_thermal_get_temp(), of_thermal_set_emul_temp(),
of_thermal_get_trend().
Suggested-by: David Collins <quic_collinsd@quicinc.com>
Signed-off-by: Subbaraman Narayanamurthy <quic_subbaram@quicinc.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
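The fix amounts to guarding the sensor-ops dereference; a hedged sketch of that shape, with a simplified private-data layout assumed rather than the exact thermal_of structures:

    #include <linux/thermal.h>
    #include <linux/errno.h>

    /* Sketch (assumed layout, not the real thermal_of code): if the backing
     * sensor has not probed yet, its ops are still NULL, so return instead
     * of dereferencing them. */
    struct example_sensor_data {
            const struct thermal_zone_of_device_ops *ops;
            void *sensor_data;
    };

    static int example_set_trip_temp(struct thermal_zone_device *tz, int trip, int temp)
    {
            struct example_sensor_data *data = tz->devdata;

            if (!data->ops || !data->ops->set_trip_temp)
                    return -EINVAL;

            return data->ops->set_trip_temp(data->sensor_data, trip, temp);
    }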
Greg Thelen [Thu, 11 Nov 2021 02:18:14 +0000 (18:18 -0800)]
perf/core: Avoid put_page() when GUP fails
commit 4716023a8f6a0f4a28047f14dd7ebdc319606b84 upstream.
PEBS PERF_SAMPLE_PHYS_ADDR events use perf_virt_to_phys() to convert PMU
sampled virtual addresses to physical using get_user_page_fast_only()
and page_to_phys().
Some get_user_page_fast_only() error cases return false, indicating no
page reference, but still initialize the output page pointer with an
unreferenced page. In these error cases perf_virt_to_phys() calls
put_page(). This causes page reference count underflow, which can lead
to unintentional page sharing.
Fix perf_virt_to_phys() to only put_page() if get_user_page_fast_only()
returns a referenced page.
Fixes: fc7ce9c74c3ad ("perf/core, x86: Add PERF_SAMPLE_PHYS_ADDR")
Signed-off-by: Greg Thelen <gthelen@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20211111021814.757086-1-gthelen@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
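A hedged sketch of the corrected flow described above (simplified, not a verbatim copy of perf_virt_to_phys()):

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/types.h>
    #include <asm/io.h>

    /* Sketch: only translate and put_page() when GUP reported success;
     * on failure the output page pointer must be treated as garbage. */
    static u64 example_virt_to_phys(unsigned long virt)
    {
            u64 phys_addr = 0;
            struct page *p;

            if (!current->mm)
                    return 0;

            if (get_user_page_fast_only(virt, 0, &p)) {
                    phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
                    put_page(p);            /* drop the reference GUP took */
            }

            return phys_addr;
    }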
Marc Zyngier [Thu, 4 Nov 2021 18:01:30 +0000 (18:01 +0000)]
PCI: Add MSI masking quirk for Nvidia ION AHCI
commit f21082fb20dbfb3e42b769b59ef21c2a7f2c7c1f upstream.
The ION AHCI device pretends that MSI masking isn't a thing, while it
actually implements it and needs MSIs to be unmasked to work. Add a quirk
to that effect.
Reported-by: Rui Salvaterra <rsalvaterra@gmail.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Rui Salvaterra <rsalvaterra@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <helgaas@kernel.org>
Link: https://lore.kernel.org/r/CALjTZvbzYfBuLB+H=fj2J+9=DxjQ2Uqcy0if_PvmJ-nU-qEgkg@mail.gmail.com
Link: https://lore.kernel.org/r/20211104180130.3825416-3-maz@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Thu, 4 Nov 2021 18:01:29 +0000 (18:01 +0000)]
PCI/MSI: Deal with devices lying about their MSI mask capability
commit 2226667a145db2e1f314d7f57fd644fe69863ab9 upstream.
It appears that some devices are lying about their mask capability,
pretending that they don't have it, while they actually do.
The net result is that we now don't enable MSIs on such an endpoint.
Add a new per-device flag to deal with this. Further patches will
make use of it, sadly.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20211104180130.3825416-2-maz@kernel.org
Cc: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Schnelle [Sat, 13 Nov 2021 19:41:17 +0000 (20:41 +0100)]
parisc/entry: fix trace test in syscall exit path
commit 3ec18fc7831e7d79e2d536dd1f3bc0d3ba425e8a upstream.
commit 8779e05ba8aa ("parisc: Fix ptrace check on syscall return") fixed testing of TI_FLAGS. This uncovered a bug in the test mask.
syscall_restore_rfi is only used when the kernel needs to exit to userspace with single or block stepping and the recovery counter enabled. The test however used _TIF_SYSCALL_TRACE_MASK, which includes a lot of bits that shouldn't be tested here.
Fix this by using TIF_SINGLESTEP and TIF_BLOCKSTEP directly.
I encountered this bug by enabling syscall tracepoints, both in QEMU and on real hardware. As soon as I enabled the tracepoint (sys_exit_read, but it probably doesn't matter which one), I got random page faults in userspace almost immediately.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Flintham [Thu, 30 Sep 2021 08:22:39 +0000 (09:22 +0100)]
Bluetooth: btusb: Add support for TP-Link UB500 Adapter
commit 4fd6d490796171bf786090fee782e252186632e4 upstream.
Add support for TP-Link UB500 Adapter (RTL8761B)
* /sys/kernel/debug/usb/devices
T: Bus=01 Lev=02 Prnt=05 Port=01 Cnt=01 Dev#= 78 Spd=12 MxCh= 0
D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
P: Vendor=2357 ProdID=0604 Rev= 2.00
S: Manufacturer=
S: Product=TP-Link UB500 Adapter
S: SerialNumber=E848B8C82000
C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA
I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=81(I) Atr=03(Int.) MxPS= 16 Ivl=1ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms
E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms
I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms
I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms
I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms
I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms
I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms
I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms
E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms
Signed-off-by: Nicholas Flintham <nick@flinny.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Cc: Szabolcs Sipos <labuwx@balfug.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xie Yongji [Tue, 26 Oct 2021 14:40:14 +0000 (22:40 +0800)]
loop: Use blk_validate_block_size() to validate block size
commit af3c570fb0df422b4906ebd11c1bf363d89961d5 upstream.
Remove loop_validate_block_size() and use the block layer helper
to validate block size.
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
Link: https://lore.kernel.org/r/20211026144015.188-4-xieyongji@bytedance.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xie Yongji [Tue, 26 Oct 2021 14:40:12 +0000 (22:40 +0800)]
block: Add a helper to validate the block size
commit 570b1cac477643cbf01a45fa5d018430a1fddbce upstream.
There is some duplicated code to validate the block size in block drivers. This limitation actually comes from the block layer, so this patch adds a new block layer helper for that.
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
Link: https://lore.kernel.org/r/20211026144015.188-2-xieyongji@bytedance.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Tadeusz Struk <tadeusz.struk@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
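Such a helper is essentially a three-condition check; a sketch, with the bounds below assumed from the usual block layer limits rather than quoted from the patch:

    #include <linux/log2.h>
    #include <linux/errno.h>
    #include <asm/page.h>

    /* Sketch: a valid logical block size is a power of two between
     * 512 bytes and PAGE_SIZE; everything else is rejected. */
    static inline int example_validate_block_size(unsigned long bsize)
    {
            if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
                    return -EINVAL;

            return 0;
    }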
Kees Cook [Thu, 13 May 2021 04:51:10 +0000 (21:51 -0700)]
fortify: Explicitly disable Clang support
commit a52f8a59aef46b59753e583bf4b28fccb069ce64 upstream.
Clang has never correctly compiled the FORTIFY_SOURCE defenses due to
a couple bugs:
Eliding inlines with matching __builtin_* names
https://bugs.llvm.org/show_bug.cgi?id=50322
Incorrect __builtin_constant_p() of some globals
https://bugs.llvm.org/show_bug.cgi?id=41459
In the process of making improvements to the FORTIFY_SOURCE defenses, the first (silent) bug is coincidentally worked around, but that exposes the latter bug, which breaks the build. As such, Clang must not be used with CONFIG_FORTIFY_SOURCE until at least the latter bug is fixed (in Clang 13) and the fortify routines have been rearranged.
Update the Kconfig to reflect the reality of the current situation.
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/lkml/CAKwvOd=A+ueGV2ihdy5GtgR2fQbcXjjAtVxv3=cPjffpebZB7A@mail.gmail.com
Cc: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:18 +0000 (17:58 +0900)]
btrfs: zoned: allow preallocation for relocation inodes
commit 960a3166aed015887cd54423a6589ae4d0b65bd5 upstream
Now that we use a dedicated block group and regular writes for data
relocation, we can preallocate the space needed for a relocated inode,
just like we do in regular mode.
Essentially this reverts commit 32430c614844 ("btrfs: zoned: enable relocation on a zoned filesystem") as it is not needed anymore.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:17 +0000 (17:58 +0900)]
btrfs: check for relocation inodes on zoned btrfs in should_nocow
commit 2adada886b26e998b5a624e72f0834ebfdc54cc7 upstream
Prepare for allowing preallocation for relocation inodes.
Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:16 +0000 (17:58 +0900)]
btrfs: zoned: use regular writes for relocation
commit e6d261e3b1f777b499ce8f535ed44dd1b69278b7 upstream
Now that we have a dedicated block group for relocation, we can use
REQ_OP_WRITE instead of REQ_OP_ZONE_APPEND for writing out the data on
relocation.
Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:15 +0000 (17:58 +0900)]
btrfs: zoned: only allow one process to add pages to a relocation inode
commit 35156d852762b58855f513b4f8bb7f32d69dc9c5 upstream
Don't allow more than one process to add pages to a relocation inode on
a zoned filesystem, otherwise we cannot guarantee the sequential write
rule once we're filling preallocated extents on a zoned filesystem.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:14 +0000 (17:58 +0900)]
btrfs: zoned: add a dedicated data relocation block group
commit c2707a25562343511bf9a3a6a636a16a822204eb upstream
Relocation in a zoned filesystem can fail with a transaction abort with
error -22 (EINVAL). This happens because the relocation code assumes that
the extents we relocated the data to have the same size the source extents
had and ensures this by preallocating the extents.
But in a zoned filesystem we currently can't preallocate the extents as
this would break the sequential write required rule. Therefore it can
happen that the writeback process kicks in while we're still adding pages
to a delalloc range and starts writing out dirty pages.
This then creates destination extents that are smaller than the source
extents, triggering the following safety check in get_new_location():
1034 if (num_bytes != btrfs_file_extent_disk_num_bytes(leaf, fi)) {
1035 ret = -EINVAL;
1036 goto out;
1037 }
Temporarily create a dedicated block group for the relocation process, so
no non-relocation data writes can interfere with the relocation writes.
This is needed so that we can switch the relocation process on a zoned filesystem from the REQ_OP_ZONE_APPEND writes we use for data to a scheme like in a non-zoned filesystem, using REQ_OP_WRITE and preallocation.
Fixes: 32430c614844 ("btrfs: zoned: enable relocation on a zoned filesystem")
Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Thu, 18 Nov 2021 08:58:13 +0000 (17:58 +0900)]
btrfs: introduce btrfs_is_data_reloc_root
commit 37f00a6d2e9c97d6e7b5c3d47c49b714c3d0b99f upstream
There are several places in our codebase where we check if a root is the
root of the data reloc tree and subsequent patches will introduce more.
Factor out the check into a small helper function instead of open coding
it multiple times.
Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
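The helper is a one-line predicate; an approximate sketch (assuming the in-tree btrfs definitions from fs/btrfs/ctree.h, not quoted verbatim):

    /* Sketch: a root is the data relocation root when its key objectid
     * matches BTRFS_DATA_RELOC_TREE_OBJECTID. */
    static inline bool example_is_data_reloc_root(const struct btrfs_root *root)
    {
            return root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID;
    }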
David Woodhouse [Sun, 14 Nov 2021 08:59:02 +0000 (08:59 +0000)]
KVM: Fix steal time asm constraints
commit 964b7aa0b040bdc6ec1c543ee620cda3f8b4c68a upstream.
In 64-bit mode, x86 instruction encoding allows us to use the low 8 bits
of any GPR as an 8-bit operand. In 32-bit mode, however, we can only use
the [abcd] registers. For which, GCC has the "q" constraint instead of
the less restrictive "r".
Also fix st->preempted, which is an input/output operand rather than an
input.
Fixes: 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <89bf72db1b859990355f9c40713a34e0d2d86c98.camel@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
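To see the constraint issue in isolation, here is a generic example (not the KVM steal-time code) of a byte-sized input operand and an input/output memory operand:

    /* Generic sketch: a byte-sized read-modify-write on memory. On 32-bit
     * x86 only %al/%bl/%cl/%dl can hold an 8-bit operand, which GCC
     * expresses with the "q" constraint, and *flag is both read and
     * written, so it is a "+m" operand rather than a plain input. */
    static inline void example_or_byte(unsigned char *flag, unsigned char bit)
    {
            asm volatile("orb %1, %0"
                         : "+m" (*flag)
                         : "q" (bit));
    }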
Greg Kroah-Hartman [Fri, 19 Nov 2021 11:30:13 +0000 (12:30 +0100)]
Revert "drm: fb_helper: fix CONFIG_FB dependency"
This reverts commit c95380ba527ae0aee29b2a133c5d0c481d472759 which is commit 606b102876e3741851dfb09d53f3ee57f650a52c upstream.
It causes some build problems as reported by Jiri.
Link: https://lore.kernel.org/r/9fdb2bf1-de52-1b9d-4783-c61ce39e8f51@kernel.org
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 19 Nov 2021 11:30:10 +0000 (12:30 +0100)]
Revert "drm: fb_helper: improve CONFIG_FB dependency"
This reverts commit 94e18f5a5dd1b5e3b89c665fc5ff780858b1c9f6 which is commit 9d6366e743f37d36ef69347924ead7bcc596076e upstream.
It causes some build problems as reported by Jiri.
Link: https://lore.kernel.org/r/9fdb2bf1-de52-1b9d-4783-c61ce39e8f51@kernel.org
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Javier Martinez Canillas <javierm@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Tue, 2 Nov 2021 14:24:20 +0000 (07:24 -0700)]
string: uninline memcpy_and_pad
commit 5c4e0a21fae877a7ef89be6dcc6263ec672372b8 upstream.
When building m68k:allmodconfig, recent versions of gcc generate the
following error if the length of UTS_RELEASE is less than 8 bytes.
In function 'memcpy_and_pad',
    inlined from 'nvmet_execute_disc_identify' at drivers/nvme/target/discovery.c:268:2:
arch/m68k/include/asm/string.h:72:25: error: '__builtin_memcpy' reading 8 bytes from a region of size 7
Discussions around the problem suggest that this only happens if an
architecture does not provide strlen(), if -ffreestanding is provided as
compiler option, and if CONFIG_FORTIFY_SOURCE=n. All of this is the case
for m68k. The exact reasons are unknown, but seem to be related to the
ability of the compiler to evaluate the return value of strlen() and
the resulting execution flow in memcpy_and_pad(). It would be possible
to work around the problem by using sizeof(UTS_RELEASE) instead of
strlen(UTS_RELEASE), but that would only postpone the problem until the
function is called in a similar way. Uninline memcpy_and_pad() instead
to solve the problem for good.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
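For reference, the semantics of memcpy_and_pad() are simple; a sketch (the fix only moves the definition out of line, it does not change behaviour):

    #include <string.h>
    #include <stddef.h>

    /* Sketch of the semantics: copy src into dest and pad the tail with
     * 'pad' when src is shorter than dest. */
    static void example_memcpy_and_pad(void *dest, size_t dest_len,
                                       const void *src, size_t count, int pad)
    {
            if (dest_len > count) {
                    memcpy(dest, src, count);
                    memset((char *)dest + count, pad, dest_len - count);
            } else {
                    memcpy(dest, src, dest_len);
            }
    }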
Greg Kroah-Hartman [Thu, 18 Nov 2021 18:17:21 +0000 (19:17 +0100)]
Linux 5.15.3
Link: https://lore.kernel.org/r/20211115165428.722074685@linuxfoundation.org
Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Tested-by: Fox Chen <foxhlchen@gmail.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Link: https://lore.kernel.org/r/20211116142631.571909964@linuxfoundation.org
Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Fox Chen <foxhlchen@gmail.com>
Link: https://lore.kernel.org/r/20211117101657.463560063@linuxfoundation.org
Tested-by: Fox Chen <foxhlchen@gmail.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Link: https://lore.kernel.org/r/20211118081919.507743013@linuxfoundation.org
Tested-by: Fox Chen <foxhlchen@gmail.com>
Tested-by: Rudi Heitbaum <rudi@heitbaum.com>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Tested-By: Scott Bruce <smbruce@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans de Goede [Mon, 1 Nov 2021 14:53:55 +0000 (14:53 +0000)]
media: videobuf2-dma-sg: Fix buf->vb NULL pointer dereference
commit d55c3ee6b4c7b76326eb257403762f8bd7cc48c2 upstream.
Commit a4b83deb3e76 ("media: videobuf2: rework vb2_mem_ops API") added a new vb member to struct vb2_dma_sg_buf, but it only added code setting it in the vb2_dma_sg_alloc() function and not in vb2_dma_sg_get_userptr() or vb2_dma_sg_attach_dmabuf(), which also create vb2_dma_sg_buf objects.
This is causing a crash due to a NULL pointer deref when using
libcamera on devices with an Intel IPU3 (qcam app).
Fix these crashes by assigning buf->vb in the other 2 functions too,
note libcamera tests the vb2_dma_sg_get_userptr() path, the change
to the vb2_dma_sg_attach_dmabuf() path is untested.
Fixes: a4b83deb3e76 ("media: videobuf2: rework vb2_mem_ops API")
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Tue, 28 Sep 2021 03:46:34 +0000 (04:46 +0100)]
media: videobuf2: always set buffer vb2 pointer
commit 67f85135c57c8ea20b5417b28ae65e53dc2ec2c3 upstream.
We need to always link allocated vb2_dc_buf back to vb2_buffer because
we dereference vb2 in prepare() and finish() callbacks.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Tested-by: Chen-Yu Tsai <wenst@chromium.org>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Borislav Petkov [Fri, 1 Oct 2021 19:41:20 +0000 (21:41 +0200)]
x86/sev: Make the #VC exception stacks part of the default stacks storage
commit 541ac97186d9ea88491961a46284de3603c914fd upstream.
The size of the exception stacks was increased by the commit in Fixes,
resulting in stack sizes greater than a page in size. The #VC exception
handling was only mapping the first (bottom) page, resulting in an
SEV-ES guest failing to boot.
Make the #VC exception stacks part of the default exception stacks
storage and allocate them with a CONFIG_AMD_MEM_ENCRYPT=y .config. Map
them only when a SEV-ES guest has been detected.
Rip out the custom VC stacks mapping and storage code.
[ bp: Steal and adapt Tom's commit message. ]
Fixes: 7fae4c24a2b8 ("x86: Increase exception stack sizes")
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Link: https://lkml.kernel.org/r/YVt1IMjIs7pIZTRR@zn.tnic
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tom Lendacky [Wed, 8 Sep 2021 22:58:34 +0000 (17:58 -0500)]
x86/sev: Add an x86 version of cc_platform_has()
commit aa5a461171f98fde0df78c4f6b5018a1e967cf81 upstream.
Introduce an x86 version of the cc_platform_has() function. This will be
used to replace vendor specific calls like sme_active(), sev_active(),
etc.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210928191009.32551-4-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tom Lendacky [Wed, 8 Sep 2021 22:58:33 +0000 (17:58 -0500)]
arch/cc: Introduce a function to check for confidential computing features
commit 46b49b12f3fc5e1347dba37d4639e2165f447871 upstream.
In preparation for other confidential computing technologies, introduce
a generic helper function, cc_platform_has(), that can be used to
check for specific active confidential computing attributes, like
memory encryption. This is intended to eliminate having to add multiple
technology-specific checks to the code (e.g. if (sev_active() ||
tdx_active() || ... ).
[ bp: s/_CC_PLATFORM_H/_LINUX_CC_PLATFORM_H/g ]
Co-developed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210928191009.32551-3-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
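Intended usage looks roughly like this (a sketch; CC_ATTR_MEM_ENCRYPT is one of the attributes introduced alongside the helper):

    #include <linux/cc_platform.h>
    #include <linux/types.h>

    /* Sketch: one attribute query instead of a chain of vendor-specific
     * checks such as sme_active() || sev_active() || ... */
    static bool example_needs_memory_encryption(void)
    {
            return cc_platform_has(CC_ATTR_MEM_ENCRYPT);
    }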
Andrii Nakryiko [Mon, 1 Nov 2021 23:01:18 +0000 (16:01 -0700)]
selftests/bpf: Fix also no-alu32 strobemeta selftest
commit a20eac0af02810669e187cb623bc904908c423af upstream.
The previous fix added bpf_clamp_umax() helper use to re-validate boundaries. While that works correctly, it introduces more branches, which blows up past 1 million instructions in the no-alu32 variant of the strobemeta selftests. Switching the len variable from u32 to u64 also fixes the issue and reduces the number of validated instructions, so use that instead. With this fix applied and bpf_clamp_umax() removed, both alu32 and no-alu32 selftests pass.
Fixes: 0133c20480b1 ("selftests/bpf: Fix strobemeta selftest regression")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211101230118.1273019-1-andrii@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Borislav Petkov [Fri, 29 Oct 2021 17:27:32 +0000 (19:27 +0200)]
selftests/x86/iopl: Adjust to the faked iopl CLI/STI usage
commit a72fdfd21e01c626273ddcf5ab740d4caef4be54 upstream.
Commit in Fixes changed the iopl emulation to not #GP on CLI and STI
because it would break some insane luserspace tools which would toggle
interrupts.
The corresponding selftest would rely on the fact that executing CLI/STI
would trigger a #GP and thus detect it this way but since that #GP is
not happening anymore, the detection is now wrong too.
Extend the test to actually look at the IF flag and whether executing
those insns had any effect on it. The STI detection needs to have the
fact that interrupts were previously disabled, passed in so do that from
the previous CLI test, i.e., STI test needs to follow a previous CLI one
for it to make sense.
Fixes: b968e84b509d ("x86/iopl: Fake iopl(3) CLI/STI usage")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20211030083939.13073-1-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
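Reading the IF flag directly can be sketched as follows (a generic userspace snippet, not the selftest code itself):

    #include <stdbool.h>

    /* Generic sketch: read EFLAGS and test bit 9 (IF) to observe whether a
     * preceding CLI/STI had any effect, instead of expecting it to raise #GP. */
    static bool example_interrupts_enabled(void)
    {
            unsigned long flags;

            asm volatile("pushf\n\tpop %0" : "=r" (flags));
            return flags & (1UL << 9);
    }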
Colin Ian King [Wed, 13 Oct 2021 10:00:52 +0000 (11:00 +0100)]
mmc: moxart: Fix null pointer dereference on pointer host
commit 0eab756f8821d255016c63bb55804c429ff4bdb1 upstream.
There are several error return paths that dereference the null pointer
host because the pointer has not yet been set to a valid value.
Fix this by adding a new out_mmc label and exiting via this label
to avoid the host clean up and hence the null pointer dereference.
Addresses-Coverity: ("Explicit null dereference")
Fixes: 8105c2abbf36 ("mmc: moxart: Fix reference count leaks in moxart_probe")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20211013100052.125461-1-colin.king@canonical.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Wed, 20 Oct 2021 08:59:07 +0000 (11:59 +0300)]
ath10k: fix invalid dma_addr_t token assignment
commit 937e79c67740d1d84736730d679f3cb2552f990e upstream.
Using a kernel pointer in place of a dma_addr_t token can
lead to undefined behavior if that makes it into cache
management functions. The compiler caught one such attempt
in a cast:
drivers/net/wireless/ath/ath10k/mac.c: In function 'ath10k_add_interface':
drivers/net/wireless/ath/ath10k/mac.c:5586:47: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
5586 | arvif->beacon_paddr = (dma_addr_t)arvif->beacon_buf;
| ^
Looking through how this gets used down the way, I'm fairly
sure that beacon_paddr is never accessed again for ATH10K_DEV_TYPE_HL
devices, and if it was accessed, that would be a bug.
Change the assignment to use a known-invalid address token
instead, which avoids the warning and makes it easier to catch
bugs if it does end up getting used.
Fixes: e263bdab9c0e ("ath10k: high latency fixes for beacon buffer")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20211014075153.3655910-1-arnd@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
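One known-invalid token the DMA API already provides is DMA_MAPPING_ERROR; whether this exact token is what the patch uses is an assumption, but the change plausibly reduces to something like:

    /* Sketch (assumption): store a token that can never be a valid DMA
     * address instead of casting a kernel pointer into a dma_addr_t.
     * DMA_MAPPING_ERROR is defined in <linux/dma-mapping.h>. */
    arvif->beacon_paddr = DMA_MAPPING_ERROR;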
Paulo Alcantara [Fri, 12 Nov 2021 17:53:36 +0000 (14:53 -0300)]
cifs: fix memory leak of smb3_fs_context_dup::server_hostname
commit 869da64d071142d4ed562a3e909deb18e4e72c4e upstream.
Fix memory leak of smb3_fs_context_dup::server_hostname when parsing
and duplicating fs contexts during mount(2) as reported by kmemleak:
unreferenced object 0xffff888125715c90 (size 16):
comm "mount.cifs", pid 3832, jiffies
4304535868 (age 190.094s)
hex dump (first 16 bytes):
7a 65 6c 64 61 2e 74 65 73 74 00 6b 6b 6b 6b a5 zelda.test.kkkk.
backtrace:
[<
ffffffff8168106e>] kstrdup+0x2e/0x60
[<
ffffffffa027a362>] smb3_fs_context_dup+0x392/0x8d0 [cifs]
[<
ffffffffa0136353>] cifs_smb3_do_mount+0x143/0x1700 [cifs]
[<
ffffffffa02795e8>] smb3_get_tree+0x2e8/0x520 [cifs]
[<
ffffffff817a19aa>] vfs_get_tree+0x8a/0x2d0
[<
ffffffff8181e3e3>] path_mount+0x423/0x1a10
[<
ffffffff8181fbca>] __x64_sys_mount+0x1fa/0x270
[<
ffffffff83ae364b>] do_syscall_64+0x3b/0x90
[<
ffffffff83c0007c>] entry_SYSCALL_64_after_hwframe+0x44/0xae
unreferenced object 0xffff888111deed20 (size 32):
comm "mount.cifs", pid 3832, jiffies
4304536044 (age 189.918s)
hex dump (first 32 bytes):
44 46 53 52 4f 4f 54 31 2e 5a 45 4c 44 41 2e 54 DFSROOT1.ZELDA.T
45 53 54 00 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 EST.kkkkkkkkkkk.
backtrace:
[<
ffffffff8168118d>] kstrndup+0x2d/0x90
[<
ffffffffa027ab2e>] smb3_parse_devname+0x9e/0x360 [cifs]
[<
ffffffffa01870c8>] cifs_setup_volume_info+0xa8/0x470 [cifs]
[<
ffffffffa018c469>] connect_dfs_target+0x309/0xc80 [cifs]
[<
ffffffffa018d6cb>] cifs_mount+0x8eb/0x17f0 [cifs]
[<
ffffffffa0136475>] cifs_smb3_do_mount+0x265/0x1700 [cifs]
[<
ffffffffa02795e8>] smb3_get_tree+0x2e8/0x520 [cifs]
[<
ffffffff817a19aa>] vfs_get_tree+0x8a/0x2d0
[<
ffffffff8181e3e3>] path_mount+0x423/0x1a10
[<
ffffffff8181fbca>] __x64_sys_mount+0x1fa/0x270
[<
ffffffff83ae364b>] do_syscall_64+0x3b/0x90
[<
ffffffff83c0007c>] entry_SYSCALL_64_after_hwframe+0x44/0xae
Fixes: 7be3248f3139 ("cifs: To match file servers, make sure the server hostname matches")
Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans Verkuil [Tue, 14 Sep 2021 07:21:25 +0000 (08:21 +0100)]
media: vidtv: move kfree(dvb) to vidtv_bridge_dev_release()
commit 112024a3b6dcfc62ec36ea0cf58b897f2ce54c59 upstream.
Adding kfree(dvb) to vidtv_bridge_remove() will remove the memory
too soon: if an application still has an open filehandle to the device
when the driver is unloaded, then when that filehandle is closed, a
use-after-free access takes place to the freed memory.
Move the kfree(dvb) to vidtv_bridge_dev_release() instead.
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Fixes: 76e21bb8be4f ("media: vidtv: Fix memory leak in remove")
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mario Limonciello [Tue, 2 Nov 2021 15:04:37 +0000 (10:04 -0500)]
drm/amd/display: Look at firmware version to determine using dmub on dcn21
commit 91adec9e07097e538691daed5d934e7886dd1dc3 upstream.
commit 652de07addd2 ("drm/amd/display: Fully switch to dmub for all dcn21 asics") switched over to using dmub on Renoir to fix Gitlab 1735, but this implied a new dependency on newer firmware which might not be met on older kernel versions.
Since sw_init runs before hw_init, there is an opportunity to determine
whether or not the firmware version is new to adjust the behavior.
Cc: Roman.Li@amd.com
BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1772
BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1735
Fixes: 652de07addd2 ("drm/amd/display: Fully switch to dmub for all dcn21 asics")
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Roman Li <Roman.Li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Mon, 12 Jul 2021 13:52:59 +0000 (09:52 -0400)]
SUNRPC: Partial revert of commit 6f9f17287e78
commit ea7a1019d8baf8503ecd6e3ec8436dec283569e6 upstream.
The premise of commit 6f9f17287e78 ("SUNRPC: Mitigate cond_resched() in xprt_transmit()") was that cond_resched() is expensive and unnecessary when there has been just a single send.
when there has been just a single send.
The point of cond_resched() is to ensure that tasks that should pre-empt
this one get a chance to do so when it is safe to do so. The code prior
to commit 6f9f17287e78 failed to take into account that it was keeping an rpc_task pinned for longer than it needed to, and so rather than doing a full revert, let's just move the cond_resched().
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pali Rohár [Tue, 5 Oct 2021 18:09:41 +0000 (20:09 +0200)]
PCI: aardvark: Fix PCIe Max Payload Size setting
commit a4e17d65dafdd3513042d8f00404c9b6068a825c upstream.
Change PCIe Max Payload Size setting in PCIe Device Control register to 512
bytes to align with PCIe Link Initialization sequence as defined in Marvell
Armada 3700 Functional Specification. According to the specification,
maximal Max Payload Size supported by this device is 512 bytes.
Without this, the kernel prints a suspicious line:
pci 0000:01:00.0: Upstream bridge's Max Payload Size set to 256 (was 16384, max 512)
With this change it prints:
pci 0000:01:00.0: Upstream bridge's Max Payload Size set to 256 (was 512, max 512)
Link: https://lore.kernel.org/r/20211005180952.6812-3-kabel@kernel.org
Fixes: 8c39d710363c ("PCI: aardvark: Add Aardvark PCI host controller driver")
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Marek Behún <kabel@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pali Rohár [Tue, 5 Oct 2021 18:09:40 +0000 (20:09 +0200)]
PCI: Add PCI_EXP_DEVCTL_PAYLOAD_* macros
commit 460275f124fb072dca218a6b43b6370eebbab20d upstream.
Define a macro PCI_EXP_DEVCTL_PAYLOAD_* for every possible Max Payload
Size in linux/pci_regs.h, in the same style as PCI_EXP_DEVCTL_READRQ_*.
Link: https://lore.kernel.org/r/20211005180952.6812-2-kabel@kernel.org
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Marek Behún <kabel@kernel.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
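Since Max Payload Size lives in bits 7:5 of the Device Control register, the new macros plausibly look like the sketch below (values follow from the register layout; the exact definitions in the patch may differ in spacing or naming):

    #define  PCI_EXP_DEVCTL_PAYLOAD_128B  0x0000 /* 128 Bytes */
    #define  PCI_EXP_DEVCTL_PAYLOAD_256B  0x0020 /* 256 Bytes */
    #define  PCI_EXP_DEVCTL_PAYLOAD_512B  0x0040 /* 512 Bytes */
    #define  PCI_EXP_DEVCTL_PAYLOAD_1024B 0x0060 /* 1024 Bytes */
    #define  PCI_EXP_DEVCTL_PAYLOAD_2048B 0x0080 /* 2048 Bytes */
    #define  PCI_EXP_DEVCTL_PAYLOAD_4096B 0x00a0 /* 4096 Bytes */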
Jernej Skrabec [Tue, 31 Aug 2021 18:48:19 +0000 (20:48 +0200)]
drm/sun4i: Fix macros in sun8i_csc.h
commit c302c98da646409d657a473da202f10f417f3ff1 upstream.
Macros SUN8I_CSC_CTRL() and SUN8I_CSC_COEFF() don't follow the usual recommendation of having arguments enclosed in parentheses. While that didn't change anything for quite some time, it actually became important after the CSC code rework in commit ea067aee45a8 ("drm/sun4i: de2/de3: Remove redundant CSC matrices").
Without this fix, colours are completely off for supported YVU formats
on SoCs with DE2 (A64, H3, R40, etc.).
Fix the issue by enclosing macro arguments in parentheses.
Cc: stable@vger.kernel.org # 5.12+
Fixes: 883029390550 ("drm/sun4i: Add DE2 CSC library")
Reported-by: Roman Stratiienko <r.stratiienko@gmail.com>
Signed-off-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20210831184819.93670-1-jernej.skrabec@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
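The class of bug is easiest to see with a generic example (made-up offsets, not the real sun8i_csc.h values):

    /* Without parentheses an expression argument binds incorrectly:
     * BAD_CSC_COEFF(base, i + 1) multiplies only 'i', not 'i + 1'. */
    #define BAD_CSC_COEFF(base, i)   base + 0x10 + 0x4 * i
    #define GOOD_CSC_COEFF(base, i)  ((base) + 0x10 + 0x4 * (i))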
Xiaoming Ni [Wed, 29 Sep 2021 03:36:46 +0000 (11:36 +0800)]
powerpc/85xx: fix timebase sync issue when CONFIG_HOTPLUG_CPU=n
commit c45361abb9185b1e172bd75eff51ad5f601ccae4 upstream.
When CONFIG_SMP=y, timebase synchronization is required when the second
kernel is started.
arch/powerpc/kernel/smp.c:
int __cpu_up(unsigned int cpu, struct task_struct *tidle)
{
...
if (smp_ops->give_timebase)
smp_ops->give_timebase();
...
}
void start_secondary(void *unused)
{
...
if (smp_ops->take_timebase)
smp_ops->take_timebase();
...
}
When CONFIG_HOTPLUG_CPU=n and CONFIG_KEXEC_CORE=n,
smp_85xx_ops.give_timebase is NULL,
smp_85xx_ops.take_timebase is NULL,
As a result, the timebase is not synchronized.
Timebase synchronization does not depend on CONFIG_HOTPLUG_CPU.
Fixes: 56f1ba280719 ("powerpc/mpc85xx: refactor the PM operations")
Cc: stable@vger.kernel.org # v4.6+
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210929033646.39630-3-nixiaoming@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nathan Lynch [Wed, 20 Oct 2021 19:47:03 +0000 (14:47 -0500)]
powerpc/pseries/mobility: ignore ibm,platform-facilities updates
commit 319fa1a52e438a6e028329187783a25ad498c4e6 upstream.
On VMs with NX encryption, compression, and/or RNG offload, these
capabilities are described by nodes in the ibm,platform-facilities device
tree hierarchy:
$ tree -d /sys/firmware/devicetree/base/ibm,platform-facilities/
/sys/firmware/devicetree/base/ibm,platform-facilities/
├── ibm,compression-v1
├── ibm,random-v1
└── ibm,sym-encryption-v1
3 directories
The acceleration functions that these nodes describe are not disrupted by
live migration, not even temporarily.
But the post-migration ibm,update-nodes sequence firmware always sends
"delete" messages for this hierarchy, followed by an "add" directive to
reconstruct it via ibm,configure-connector (log with debugging statements
enabled in mobility.c):
mobility: removing node /ibm,platform-facilities/ibm,random-v1: 4294967285
mobility: removing node /ibm,platform-facilities/ibm,compression-v1: 4294967284
mobility: removing node /ibm,platform-facilities/ibm,sym-encryption-v1: 4294967283
mobility: removing node /ibm,platform-facilities: 4294967286
...
mobility: added node /ibm,platform-facilities: 4294967286
Note we receive a single "add" message for the entire hierarchy, and what
we receive from the ibm,configure-connector sequence is the top-level
platform-facilities node along with its three children. The debug message
simply reports the parent node and not the whole subtree.
Also, significantly, the nodes added are almost completely equivalent to
the ones removed; even phandles are unchanged. ibm,shared-interrupt-pool in
the leaf nodes is the only property I've observed to differ, and Linux does
not use that. So in practice, the sum of update messages Linux receives for
this hierarchy is equivalent to minor property updates.
We succeed in removing the original hierarchy from the device tree. But the
vio bus code is ignorant of this, and does not unbind or relinquish its
references. The leaf nodes, still reachable through sysfs, of course still
refer to the now-freed ibm,platform-facilities parent node, which makes
use-after-free possible:
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 1706 at lib/refcount.c:25 refcount_warn_saturate+0x164/0x1f0
refcount_warn_saturate+0x160/0x1f0 (unreliable)
kobject_get+0xf0/0x100
of_node_get+0x30/0x50
of_get_parent+0x50/0xb0
of_fwnode_get_parent+0x54/0x90
fwnode_count_parents+0x50/0x150
fwnode_full_name_string+0x30/0x110
device_node_string+0x49c/0x790
vsnprintf+0x1c0/0x4c0
sprintf+0x44/0x60
devspec_show+0x34/0x50
dev_attr_show+0x40/0xa0
sysfs_kf_seq_show+0xbc/0x200
kernfs_seq_show+0x44/0x60
seq_read_iter+0x2a4/0x740
kernfs_fop_read_iter+0x254/0x2e0
new_sync_read+0x120/0x190
vfs_read+0x1d0/0x240
Moreover, the "new" replacement subtree is not correctly added to the
device tree, resulting in ibm,platform-facilities parent node without the
appropriate leaf nodes, and broken symlinks in the sysfs device hierarchy:
$ tree -d /sys/firmware/devicetree/base/ibm,platform-facilities/
/sys/firmware/devicetree/base/ibm,platform-facilities/
0 directories
$ cd /sys/devices/vio ; find . -xtype l -exec file {} +
./ibm,sym-encryption-v1/of_node: broken symbolic link to
../../../firmware/devicetree/base/ibm,platform-facilities/ibm,sym-encryption-v1
./ibm,random-v1/of_node: broken symbolic link to
../../../firmware/devicetree/base/ibm,platform-facilities/ibm,random-v1
./ibm,compression-v1/of_node: broken symbolic link to
../../../firmware/devicetree/base/ibm,platform-facilities/ibm,compression-v1
This is because add_dt_node() -> dlpar_attach_node() attaches only the
parent node returned from configure-connector, ignoring any children. This
should be corrected for the general case, but fixing that won't help with
the stale OF node references, which is the more urgent problem.
One way to address that would be to make the drivers respond to node
removal notifications, so that node references can be dropped
appropriately. But this would likely force the drivers to disrupt active
clients for no useful purpose: equivalent nodes are immediately re-added.
And recall that the acceleration capabilities described by the nodes remain
available throughout the whole process.
The solution I believe to be robust for this situation is to convert
remove+add of a node with an unchanged phandle to an update of the node's
properties in the Linux device tree structure. That would involve changing
and adding a fair amount of code, and may take several iterations to land.
Until that can be realized we have a confirmed use-after-free and the
possibility of memory corruption. So add a limited workaround that
discriminates on the node type, ignoring adds and removes. This should be
amenable to backporting in the meantime.
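A minimal sketch of the kind of check such a workaround implies (helper name and matching logic are illustrative, not the exact mobility.c change):

static bool is_platform_facilities_node(const struct device_node *np)
{
	/* Match /ibm,platform-facilities and its immediate children so both
	 * the "delete" and "add" updates for this subtree can be ignored. */
	if (of_node_name_eq(np, "ibm,platform-facilities"))
		return true;
	return np->parent && of_node_name_eq(np->parent, "ibm,platform-facilities");
}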
Fixes:
410bccf97881 ("powerpc/pseries: Partition migration in the kernel")
Cc: stable@vger.kernel.org
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211020194703.2613093-1-nathanl@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Tue, 26 Oct 2021 12:25:31 +0000 (22:25 +1000)]
powerpc/64s/interrupt: Fix check_return_regs_valid() false positive
commit
4a5cb51f3db4be547225a4bce7a43d41b231382b upstream.
The check_return_regs_valid() can cause a false positive if the return
regs are marked as norestart and they are an HSRR type interrupt,
because the low bit in the bottom of regs->trap causes interrupt type
matching to fail.
This can occur, for example, on bare metal with an HV privileged doorbell
interrupt that causes a signal, but do_signal returns early because
get_signal() fails and takes the "No signal to deliver" path. In this
case no signal was delivered, so the return location is not changed and
the return SRRs are not invalidated, yet set_trap_norestart() is called,
which breaks the interrupt type match. Building go-1.16.6 is known to
reproduce this.
Fix it by using the TRAP() accessor which masks out the low bit.
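The pattern, schematically (0xe80 is the HV doorbell vector; treat the exact comparison site as illustrative rather than the check_return_regs_valid() code):

	/* The norestart flag lives in the low bit of regs->trap, so a direct
	 * comparison against an interrupt vector can spuriously fail: */
	if (regs->trap == 0xe80)	/* may mismatch when norestart is set */
		...
	/* The TRAP() accessor masks the flag bit out before comparing: */
	if (TRAP(regs) == 0xe80)	/* HV doorbell */
		...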
Fixes:
6eaaf9de3599 ("powerpc/64s/interrupt: Check and fix srr_valid without crashing")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211026122531.3599918-1-npiggin@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Russell Currey [Wed, 27 Oct 2021 07:24:10 +0000 (17:24 +1000)]
powerpc/security: Use a mutex for interrupt exit code patching
commit
3c12b4df8d5e026345a19886ae375b3ebc33c0b6 upstream.
The mitigation-patching.sh script in the powerpc selftests toggles
all mitigations on and off simultaneously, revealing that rfi_flush
and stf_barrier cannot safely operate at the same time due to races
in updating the static key.
On some systems, the static key code throws a warning and the kernel
remains functional. On others, the kernel will hang or crash.
Fix this by slapping on a mutex.
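Schematically, the patching paths now share a mutex (the lock name and helper are illustrative, not the exact security.c change):

static DEFINE_MUTEX(exit_flush_lock);

static void toggle_mitigation_patching(bool enable)
{
	/* rfi_flush and stf_barrier both patch the interrupt exit code, so
	 * serialize the static-key update and the instruction patching. */
	mutex_lock(&exit_flush_lock);
	/* ... update static keys and patch instructions ... */
	mutex_unlock(&exit_flush_lock);
}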
Fixes:
13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211027072410.40950-1-ruscur@russell.cc
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vasant Hegde [Thu, 28 Oct 2021 16:57:16 +0000 (22:27 +0530)]
powerpc/powernv/prd: Unregister OPAL_MSG_PRD2 notifier during module unload
commit
52862ab33c5d97490f3fa345d6529829e6d6637b upstream.
Commit
587164cd introduced a new OPAL message type (OPAL_MSG_PRD2) and
added an OPAL notifier for it, but missed unregistering the notifier in
the module unload path. This results in the call trace below if you try
to unload and reload the opal_prd module.
Also add new notifier_block for OPAL_MSG_PRD2 message.
Sample calltrace (modprobe -r opal_prd; modprobe opal_prd)
BUG: Unable to handle kernel data access on read at 0xc0080000192200e0
Faulting instruction address: 0xc00000000018d1cc
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
CPU: 66 PID: 7446 Comm: modprobe Kdump: loaded Tainted: G E 5.14.0prd #759
NIP: c00000000018d1cc LR: c00000000018d2a8 CTR: c0000000000cde10
REGS: c0000003c4c0f0a0 TRAP: 0300 Tainted: G E (5.14.0prd)
MSR: 9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE> CR: 24224824 XER: 20040000
CFAR: c00000000018d2a4 DAR: c0080000192200e0 DSISR: 40000000 IRQMASK: 1
...
NIP notifier_chain_register+0x2c/0xc0
LR atomic_notifier_chain_register+0x48/0x80
Call Trace:
0xc000000002090610 (unreliable)
atomic_notifier_chain_register+0x58/0x80
opal_message_notifier_register+0x7c/0x1e0
opal_prd_probe+0x84/0x150 [opal_prd]
platform_probe+0x78/0x130
really_probe+0x110/0x5d0
__driver_probe_device+0x17c/0x230
driver_probe_device+0x60/0x130
__driver_attach+0xfc/0x220
bus_for_each_dev+0xa8/0x130
driver_attach+0x34/0x50
bus_add_driver+0x1b0/0x300
driver_register+0x98/0x1a0
__platform_driver_register+0x38/0x50
opal_prd_driver_init+0x34/0x50 [opal_prd]
do_one_initcall+0x60/0x2d0
do_init_module+0x7c/0x320
load_module+0x3394/0x3650
__do_sys_finit_module+0xd4/0x160
system_call_exception+0x140/0x290
system_call_common+0xf4/0x258
Fixes:
587164cd593c ("powerpc/powernv: Add new opal message type")
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211028165716.41300-1-hegdevasant@linux.vnet.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Thu, 28 Oct 2021 13:30:43 +0000 (23:30 +1000)]
powerpc/32e: Ignore ESR in instruction storage interrupt handler
commit
81291383ffde08b23bce75e7d6b2575ce9d3475c upstream.
A e5500 machine running a 32-bit kernel sometimes hangs at boot,
seemingly going into an infinite loop of instruction storage interrupts.
The ESR (Exception Syndrome Register) has a value of 0x800000 (store)
when this happens, which is likely set by a previous store. An
instruction TLB miss interrupt would then leave ESR unchanged, and if no
PTE exists it calls directly to the instruction storage interrupt
handler without changing ESR.
access_error() does not cause a segfault due to a store to a read-only
vma because is_exec is true. Most subsequent fault handling does not
check for a write fault on a read-only vma, and might do strange things
like creating a writable PTE or calling page_mkwrite on a read-only vma or
file. It's not clear what causes the infinite faulting in this case,
whether a fault handler failure or low-level PTE or TLB handling.
In any case this can be fixed by having the instruction storage
interrupt zero regs->dsisr rather than storing the ESR value to it.
Fixes:
a01a3f2ddbcd ("powerpc: remove arguments from fault handler functions")
Cc: stable@vger.kernel.org # v5.12+
Reported-by: Jacques de Laval <jacques.delaval@protonmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Jacques de Laval <jacques.delaval@protonmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211028133043.4159501-1-npiggin@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hari Bathini [Mon, 25 Oct 2021 05:56:49 +0000 (11:26 +0530)]
powerpc/bpf: Fix write protecting JIT code
commit
44a8214de96bafb5210e43bfa2c97c19bf75af3d upstream.
Running a program with bpf-to-bpf function calls results in a data access
exception (0x300) with the below call trace:
bpf_int_jit_compile+0x238/0x750 (unreliable)
bpf_check+0x2008/0x2710
bpf_prog_load+0xb00/0x13a0
__sys_bpf+0x6f4/0x27c0
sys_bpf+0x2c/0x40
system_call_exception+0x164/0x330
system_call_vectored_common+0xe8/0x278
as bpf_int_jit_compile() tries writing to the write-protected JIT code
location during the extra pass.
Fix it by holding off write protection of the JIT code until the extra
pass, where the branch target address fixup happens.
Fixes:
62e3d4210ac9 ("powerpc/bpf: Write protect JIT code")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211025055649.114728-1-hbathini@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Fri, 15 Oct 2021 05:03:45 +0000 (00:03 -0500)]
powerpc/vas: Fix potential NULL pointer dereference
commit
61cb9ac66b30374c7fd8a8b2a3c4f8f432c72e36 upstream.
(!ptr && !ptr->foo) strikes again. :)
The expression (!ptr && !ptr->foo) is bogus: in case ptr is NULL,
it leads to a NULL pointer dereference of ptr->foo.
Fix this by converting && to ||.
This issue was detected with the help of Coccinelle, and audited and
fixed manually.
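A standalone illustration of why the operator matters (generic types, not the VAS structures):

#include <stdio.h>
#include <stddef.h>

struct ops {
	void (*open_win)(void);
};

static int check(const struct ops *ptr)
{
	/* Buggy form: (!ptr && !ptr->open_win) evaluates ptr->open_win exactly
	 * when ptr is NULL, dereferencing a NULL pointer.
	 * Correct form: || short-circuits on !ptr, so the member access is
	 * never reached for a NULL pointer. */
	if (!ptr || !ptr->open_win)
		return -1;
	return 0;
}

int main(void)
{
	printf("%d\n", check(NULL));	/* prints -1 without crashing */
	return 0;
}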
Fixes:
1a0d0d5ed5e3 ("powerpc/vas: Add platform specific user window operations")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211015050345.GA1161918@embeddedor
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:41 +0000 (00:22 +0200)]
mtd: rawnand: au1550nd: Keep the driver compatible with on-die ECC engines
commit
7e3cdba176ba59eaf4d463d273da0718e3626140 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
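A minimal sketch of that pattern (field and enum names as used by the raw NAND core; the probe and attach_chip bodies here are illustrative, not this driver's code):

static int host_attach_chip(struct nand_chip *chip)
{
	/* Only finish software ECC setup if the resolved engine type is still
	 * the driver default; a DT-requested on-die engine is left untouched. */
	if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_SOFT)
		return 0;

	chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
	return 0;
}

/* ... and in probe(), before scanning: */
	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
	ret = nand_scan(chip, 1);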
Fixes:
dbffc8ccdf3a ("mtd: rawnand: au1550: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-3-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:46 +0000 (00:22 +0200)]
mtd: rawnand: plat_nand: Keep the driver compatible with on-die ECC engines
commit
325fd539fc84f0aaa0ceb9d7d3b8718582473dc5 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
612e048e6aab ("mtd: rawnand: plat_nand: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-8-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:44 +0000 (00:22 +0200)]
mtd: rawnand: orion: Keep the driver compatible with on-die ECC engines
commit
194ac63de6ff56d30c48e3ac19c8a412f9c1408e upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
553508cec2e8 ("mtd: rawnand: orion: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-6-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:45 +0000 (00:22 +0200)]
mtd: rawnand: pasemi: Keep the driver compatible with on-die ECC engines
commit
f16b7d2a5e810fcf4b15d096246d0d445da9cc88 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
8fc6f1f042b2 ("mtd: rawnand: pasemi: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-7-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:42 +0000 (00:22 +0200)]
mtd: rawnand: gpio: Keep the driver compatible with on-die ECC engines
commit
b5b5b4dc6fcd8194b9dd38c8acdc5ab71adf44f8 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
f6341f6448e0 ("mtd: rawnand: gpio: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-4-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:43 +0000 (00:22 +0200)]
mtd: rawnand: mpc5121: Keep the driver compatible with on-die ECC engines
commit
f9d8570b7fd6f4f08528ce2f5e39787a8a260cd6 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
6dd09f775b72 ("mtd: rawnand: mpc5121: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-5-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:48 +0000 (00:22 +0200)]
mtd: rawnand: xway: Keep the driver compatible with on-die ECC engines
commit
6bcd2960af1b7bacb2f1e710ab0c0b802d900501 upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
d525914b5bd8 ("mtd: rawnand: xway: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Cc: Jan Hoffmann <jan@3e8.eu>
Cc: Kestrel seventyfour <kestrelseventyfour@gmail.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Tested-by: Jan Hoffmann <jan@3e8.eu>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-10-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:22:40 +0000 (00:22 +0200)]
mtd: rawnand: ams-delta: Keep the driver compatible with on-die ECC engines
commit
d707bb74daae07879e0fc1b4b960f8f2d0a5fe5d upstream.
Following the introduction of the generic ECC engine infrastructure, it
was necessary to reorganize the code and move the ECC configuration in
the ->attach_chip() hook. Failing to do that properly led to a first
series of fixes supposed to stabilize the situation. Unfortunately, this
only fixed the use of software ECC engines, preventing any other kind of
engine from being used, including on-die ones.
It is now time to (finally) fix the situation by ensuring that we still
provide a default (eg. software ECC) but will still support different
ECC engines such as on-die ECC engines if properly described in the
device tree.
There are no changes needed on the core side in order to do this, but we
just need to leverage the logic there which allows:
1- a subsystem default (set to Host engines in the raw NAND world)
2- a driver specific default (here set to software ECC engines)
3- any type of engine requested by the user (ie. described in the DT)
As the raw NAND subsystem has not yet been fully converted to the ECC
engine infrastructure, in order to provide a default ECC engine for this
driver we need to set chip->ecc.engine_type *before* calling
nand_scan(). During the initialization step, the core will consider this
entry as the default engine for this driver. This value may of course
be overloaded by the user if the usual DT properties are provided.
Fixes:
59d93473323a ("mtd: rawnand: ams-delta: Move the ECC initialization to ->attach_chip()")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-2-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miquel Raynal [Tue, 28 Sep 2021 22:15:00 +0000 (00:15 +0200)]
mtd: rawnand: fsmc: Fix use of SM ORDER
commit
9be1446ece291a1f08164bd056bed3d698681f8b upstream.
The introduction of the generic ECC engine API led to a number of
changes in various drivers which broke some of them. Here is a typical
example: I expected the SM_ORDER option to be handled by the Hamming ECC
engine internals. Problem: the fsmc driver does not instantiate (yet) a
real ECC engine object, so we had to use a 'bare' ECC helper instead of
the shiny rawnand functions. However, when not initializing this engine
properly and using the bare helpers, we do not get the SM ORDER feature
handled automatically. It looks like this was lost in the process so
let's ensure we use the right SM ORDER now.
Fixes:
ad9ffdce4539 ("mtd: rawnand: fsmc: Fix external use of SW Hamming ECC helper")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20210928221507.199198-2-miquel.raynal@bootlin.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dong Aisheng [Fri, 10 Sep 2021 09:06:20 +0000 (17:06 +0800)]
remoteproc: imx_rproc: Fix rsc-table name
commit
e90547d59d4e29e269e22aa6ce590ed0b41207d2 upstream.
Usually the dash '-' is preferred in node names.
So far no dts in the upstream kernel uses this node, so we just update
the node name in the driver.
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Fixes:
5e4c1243071d ("remoteproc: imx_rproc: support remote cores booted before Linux Kernel")
Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210910090621.3073540-6-peng.fan@oss.nxp.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dong Aisheng [Fri, 10 Sep 2021 09:06:19 +0000 (17:06 +0800)]
remoteproc: imx_rproc: Fix ignoring mapping vdev regions
commit
afe670e23af91d8a74a8d7049f6e0984bbf6ea11 upstream.
vdev regions are typically named vdev0buffer, vdev0ring0, vdev0ring1 and
so on. Change to strncmp() to cover them all.
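A standalone illustration of the difference (region names taken from the commit message; the driver's actual match is an equivalent prefix comparison):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *names[] = { "vdev0buffer", "vdev0ring0", "vdev0ring1", "rsc-table" };

	for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
		/* strcmp() only matches one exact name; strncmp() with the
		 * "vdev" prefix covers vdev0buffer, vdev0ring0, vdev0ring1, ... */
		int is_vdev = strncmp(names[i], "vdev", strlen("vdev")) == 0;

		printf("%-12s -> %s\n", names[i], is_vdev ? "vdev region" : "other");
	}
	return 0;
}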
Fixes:
8f2d8961640f ("remoteproc: imx_rproc: ignore mapping vdev regions")
Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210910090621.3073540-5-peng.fan@oss.nxp.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dong Aisheng [Fri, 10 Sep 2021 09:06:17 +0000 (17:06 +0800)]
remoteproc: Fix the wrong default value of is_iomem
commit
970675f61bf5761d7e5326f6e4df995ecdba5e11 upstream.
Currently is_iomem is an uninitialized value on the stack, which may
default to true even on platforms that do not use iomem to store
firmware.
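The fix amounts to giving the flag a defined starting value before it is passed down (sketch; the surrounding loader code is elided):

	/* Do not leave is_iomem uninitialized on the stack: da_to_va()
	 * implementations only set it on some paths. */
	bool is_iomem = false;
	void *ptr = rproc_da_to_va(rproc, da, memsz, &is_iomem);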
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Fixes:
40df0a91b2a5 ("remoteproc: add is_iomem to da_to_va")
Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com>
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210910090621.3073540-3-peng.fan@oss.nxp.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peng Fan [Fri, 10 Sep 2021 09:06:16 +0000 (17:06 +0800)]
remoteproc: elf_loader: Fix loading segment when is_iomem true
commit
24acbd9dc934f5d9418a736c532d3970a272063e upstream.
It happens to work on the i.MX platform by luck, but it is wrong:
memcpy_toio() must be used here, not memcpy_fromio().
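Schematically, when loading a segment into device memory (variable names are illustrative):

	if (is_iomem)
		/* Copy from the firmware image in RAM into device (I/O) memory. */
		memcpy_toio((void __iomem *)ptr, elf_data + offset, filesz);
	else
		memcpy(ptr, elf_data + offset, filesz);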
Fixes:
40df0a91b2a5 ("remoteproc: add is_iomem to da_to_va")
Tested-by: Dong Aisheng <aisheng.dong@nxp.com> (i.MX8MQ)
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210910090621.3073540-2-peng.fan@oss.nxp.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Halil Pasic [Wed, 8 Sep 2021 15:36:23 +0000 (17:36 +0200)]
s390/cio: make ccw_device_dma_* more robust
commit
ad9a14517263a16af040598c7920c09ca9670a31 upstream.
Since commit
48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and
classic notifiers") we were supposed to make sure that
virtio_ccw_release_dev() completes before the ccw device and the
attached dma pool are torn down, but unfortunately we did not. Before
that commit it used to be OK to delay cleaning up the memory allocated
by virtio-ccw indefinitely (which isn't really intuitive for anyone used
to destruction happening in reverse construction order), but now we
trigger a BUG_ON if the genpool is destroyed before all memory allocated
from it is deallocated, which brings down the guest. We can observe this
problem, when unregister_virtio_device() does not give up the last
reference to the virtio_device (e.g. because a virtio-scsi attached scsi
disk got removed without previously unmounting its previously mounted
partition).
To make sure that the genpool is only destroyed after all the necessary
freeing is done let us take a reference on the ccw device on each
ccw_device_dma_zalloc() and give it up on each ccw_device_dma_free().
Actually there are multiple approaches to fixing the problem at hand
that can work. The upside of this one is that it is the safest one while
remaining simple. We don't crash the guest even if the driver does not
pair allocations and frees. The downside is the reference counting
overhead, that the reference counting for ccw devices becomes more
complex, in a sense that we need to pair the calls to the aforementioned
functions for it to be correct, and that if we happen to leak, we leak
more than necessary (the whole ccw device instead of just the genpool).
Some alternatives to this approach are taking a reference in
virtio_ccw_online() and giving it up in virtio_ccw_release_dev() or
making sure virtio_ccw_release_dev() completes its work before
virtio_ccw_remove() returns. The downside of these approaches is that
these are less safe against programming errors.
Cc: <stable@vger.kernel.org> # v5.3
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Fixes:
48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and classic notifiers")
Reported-by: bfu@redhat.com
Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Harald Freudenberger [Thu, 14 Oct 2021 07:58:24 +0000 (09:58 +0200)]
s390/ap: Fix hanging ioctl caused by orphaned replies
commit
3826350e6dd435e244eb6e47abad5a47c169ebc2 upstream.
When a queue is switched to soft offline during heavy load and later
switched to soft online again and now used, it may be that the caller
is blocked forever in the ioctl call.
The failure occurs because there is a pending reply after the queue(s)
have been switched to offline. This orphaned reply is received when
the queue is switched to online and is accidentally counted for the
outstanding replies. So when there was a valid outstanding reply and
this orphaned reply is received it counts as the outstanding one thus
dropping the outstanding counter to 0. Voila, with this counter the
receive function is not called any more and the real outstanding reply
is never received (until another request comes in...) and the ioctl
blocks.
The fix is simple. However, instead of readjusting the counter when an
orphaned reply is detected, I check whether the queue is non-empty and
compare this to the outstanding counter: if the queue is not empty,
the counter must not drop to 0 but must have a value of at least 1.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Schnelle [Tue, 2 Nov 2021 09:55:30 +0000 (10:55 +0100)]
s390/tape: fix timer initialization in tape_std_assign()
commit
213fca9e23b59581c573d558aa477556f00b8198 upstream.
commit
9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor
of timer_setup_on_stack()") changed the timer setup from
init_timer_on_stack() to timer_setup(), but missed changing the
mod_timer() call. While at it, use msecs_to_jiffies() instead
of the open-coded timeout calculation.
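A minimal sketch of the corrected pairing (names are illustrative, not the tape_std code):

#include <linux/timer.h>
#include <linux/jiffies.h>

static void assign_timeout(struct timer_list *t)
{
	/* handle the expired assignment request */
}

static void start_assign_timer(struct timer_list *t, unsigned int msecs)
{
	timer_setup(t, assign_timeout, 0);
	/* mod_timer() expects an absolute expiry in jiffies; use
	 * msecs_to_jiffies() rather than an open-coded HZ calculation. */
	mod_timer(t, jiffies + msecs_to_jiffies(msecs));
}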
Cc: stable@vger.kernel.org
Fixes:
9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor of timer_setup_on_stack()")
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineeth Vijayan [Fri, 5 Nov 2021 15:44:51 +0000 (16:44 +0100)]
s390/cio: check the subchannel validity for dev_busid
commit
a4751f157c194431fae9e9c493f456df8272b871 upstream.
Check the validity of the subchannel before reading other fields in
the schib.
Fixes:
d3683c055212 ("s390/cio: add dev_busid sysfs entry for each subchannel")
CC: <stable@vger.kernel.org>
Reported-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Link: https://lore.kernel.org/r/20211105154451.847288-1-vneethv@linux.ibm.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Richter [Wed, 3 Nov 2021 12:13:04 +0000 (13:13 +0100)]
s390/cpumf: cpum_cf PMU displays invalid value after hotplug remove
commit
9d48c7afedf91a02d03295837ec76b2fb5e7d3fe upstream.
When a CPU is hotplugged while the perf stat -e cycles command is
running, a wrong (very large) value is displayed immediately after the
CPU removal:
Check the values, shouldn't be too high as in
time counts unit events
1.001101919 29261846 cycles
2.002454499 17523405 cycles
3.003659292 24361161 cycles
4.004816983 18446744073638406144 cycles
5.005671647 <not counted> cycles
...
The CPU hotplug off took place after 3 seconds.
The issue is the read of the event count value after 4 seconds when
the CPU is not available and the read of the counter returns an
error. This is treated as a counter value of zero. This results
in a very large value (0 - previous_value).
Fix this by detecting the hotplugged-off CPU and reporting 0 instead
of a very large number.
Cc: stable@vger.kernel.org
Fixes:
a029a4eab39e ("s390/cpumf: Allow concurrent access for CPU Measurement Counter Facility")
Reported-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Reviewed-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rafael J. Wysocki [Thu, 4 Nov 2021 17:26:26 +0000 (18:26 +0100)]
PM: sleep: Avoid calling put_device() under dpm_list_mtx
commit
2aa36604e8243698ff22bd5fef0dd0c6bb07ba92 upstream.
It is generally unsafe to call put_device() with dpm_list_mtx held,
because the given device's release routine may carry out an action
depending on that lock which then may deadlock, so modify the
system-wide suspend and resume of devices to always drop dpm_list_mtx
before calling put_device() (and adjust white space somewhat while
at it).
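The resulting pattern, schematically (simplified; not the full dpm_resume() loop):

	/* before: put_device() ran with dpm_list_mtx held */
	put_device(dev);

	/* after: drop the list lock around the final reference drop, since
	 * the device's release routine may itself need dpm_list_mtx */
	mutex_unlock(&dpm_list_mtx);
	put_device(dev);
	mutex_lock(&dpm_list_mtx);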
For instance, this prevents the following splat from showing up in
the kernel log after a system resume in certain configurations:
[ 3290.969514] ======================================================
[ 3290.969517] WARNING: possible circular locking dependency detected
[ 3290.969519] 5.15.0+ #2420 Tainted: G S
[ 3290.969523] ------------------------------------------------------
[ 3290.969525] systemd-sleep/4553 is trying to acquire lock:
[ 3290.969529] ffff888117ab1138 ((wq_completion)hci0#2){+.+.}-{0:0}, at: flush_workqueue+0x87/0x4a0
[ 3290.969554] but task is already holding lock:
[ 3290.969556] ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0
[ 3290.969571]
which lock already depends on the new lock.
[ 3290.969573]
the existing dependency chain (in reverse order) is:
[ 3290.969575]
-> #3 (dpm_list_mtx){+.+.}-{3:3}:
[ 3290.969583] __mutex_lock+0x9d/0xa30
[ 3290.969591] device_pm_add+0x2e/0xe0
[ 3290.969597] device_add+0x4d5/0x8f0
[ 3290.969605] hci_conn_add_sysfs+0x43/0xb0 [bluetooth]
[ 3290.969689] hci_conn_complete_evt.isra.71+0x124/0x750 [bluetooth]
[ 3290.969747] hci_event_packet+0xd6c/0x28a0 [bluetooth]
[ 3290.969798] hci_rx_work+0x213/0x640 [bluetooth]
[ 3290.969842] process_one_work+0x2aa/0x650
[ 3290.969851] worker_thread+0x39/0x400
[ 3290.969859] kthread+0x142/0x170
[ 3290.969865] ret_from_fork+0x22/0x30
[ 3290.969872]
-> #2 (&hdev->lock){+.+.}-{3:3}:
[ 3290.969881] __mutex_lock+0x9d/0xa30
[ 3290.969887] hci_event_packet+0xba/0x28a0 [bluetooth]
[ 3290.969935] hci_rx_work+0x213/0x640 [bluetooth]
[ 3290.969978] process_one_work+0x2aa/0x650
[ 3290.969985] worker_thread+0x39/0x400
[ 3290.969993] kthread+0x142/0x170
[ 3290.969999] ret_from_fork+0x22/0x30
[ 3290.970004]
-> #1 ((work_completion)(&hdev->rx_work)){+.+.}-{0:0}:
[ 3290.970013] process_one_work+0x27d/0x650
[ 3290.970020] worker_thread+0x39/0x400
[ 3290.970028] kthread+0x142/0x170
[ 3290.970033] ret_from_fork+0x22/0x30
[ 3290.970038]
-> #0 ((wq_completion)hci0#2){+.+.}-{0:0}:
[ 3290.970047] __lock_acquire+0x15cb/0x1b50
[ 3290.970054] lock_acquire+0x26c/0x300
[ 3290.970059] flush_workqueue+0xae/0x4a0
[ 3290.970066] drain_workqueue+0xa1/0x130
[ 3290.970073] destroy_workqueue+0x34/0x1f0
[ 3290.970081] hci_release_dev+0x49/0x180 [bluetooth]
[ 3290.970130] bt_host_release+0x1d/0x30 [bluetooth]
[ 3290.970195] device_release+0x33/0x90
[ 3290.970201] kobject_release+0x63/0x160
[ 3290.970211] dpm_resume+0x164/0x3e0
[ 3290.970215] dpm_resume_end+0xd/0x20
[ 3290.970220] suspend_devices_and_enter+0x1a4/0xba0
[ 3290.970229] pm_suspend+0x26b/0x310
[ 3290.970236] state_store+0x42/0x90
[ 3290.970243] kernfs_fop_write_iter+0x135/0x1b0
[ 3290.970251] new_sync_write+0x125/0x1c0
[ 3290.970257] vfs_write+0x360/0x3c0
[ 3290.970263] ksys_write+0xa7/0xe0
[ 3290.970269] do_syscall_64+0x3a/0x80
[ 3290.970276] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 3290.970284]
other info that might help us debug this:
[ 3290.970285] Chain exists of:
(wq_completion)hci0#2 --> &hdev->lock --> dpm_list_mtx
[ 3290.970297] Possible unsafe locking scenario:
[ 3290.970299] CPU0 CPU1
[ 3290.970300] ---- ----
[ 3290.970302] lock(dpm_list_mtx);
[ 3290.970306] lock(&hdev->lock);
[ 3290.970310] lock(dpm_list_mtx);
[ 3290.970314] lock((wq_completion)hci0#2);
[ 3290.970319]
*** DEADLOCK ***
[ 3290.970321] 7 locks held by systemd-sleep/4553:
[ 3290.970325] #0: ffff888103bcd448 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xa7/0xe0
[ 3290.970341] #1: ffff888115a14488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x103/0x1b0
[ 3290.970355] #2: ffff888100f719e0 (kn->active#233){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x10c/0x1b0
[ 3290.970369] #3: ffffffff82661048 (autosleep_lock){+.+.}-{3:3}, at: state_store+0x12/0x90
[ 3290.970384] #4: ffffffff82658ac8 (system_transition_mutex){+.+.}-{3:3}, at: pm_suspend+0x9f/0x310
[ 3290.970399] #5: ffffffff827f2a48 (acpi_scan_lock){+.+.}-{3:3}, at: acpi_suspend_begin+0x4c/0x80
[ 3290.970416] #6: ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0
[ 3290.970428]
stack backtrace:
[ 3290.970431] CPU: 3 PID: 4553 Comm: systemd-sleep Tainted: G S 5.15.0+ #2420
[ 3290.970438] Hardware name: Dell Inc. XPS 13 9380/0RYJWW, BIOS 1.5.0 06/03/2019
[ 3290.970441] Call Trace:
[ 3290.970446] dump_stack_lvl+0x44/0x57
[ 3290.970454] check_noncircular+0x105/0x120
[ 3290.970468] ? __lock_acquire+0x15cb/0x1b50
[ 3290.970474] __lock_acquire+0x15cb/0x1b50
[ 3290.970487] lock_acquire+0x26c/0x300
[ 3290.970493] ? flush_workqueue+0x87/0x4a0
[ 3290.970503] ? __raw_spin_lock_init+0x3b/0x60
[ 3290.970510] ? lockdep_init_map_type+0x58/0x240
[ 3290.970519] flush_workqueue+0xae/0x4a0
[ 3290.970526] ? flush_workqueue+0x87/0x4a0
[ 3290.970544] ? drain_workqueue+0xa1/0x130
[ 3290.970552] drain_workqueue+0xa1/0x130
[ 3290.970561] destroy_workqueue+0x34/0x1f0
[ 3290.970572] hci_release_dev+0x49/0x180 [bluetooth]
[ 3290.970624] bt_host_release+0x1d/0x30 [bluetooth]
[ 3290.970687] device_release+0x33/0x90
[ 3290.970695] kobject_release+0x63/0x160
[ 3290.970705] dpm_resume+0x164/0x3e0
[ 3290.970710] ? dpm_resume_early+0x251/0x3b0
[ 3290.970718] dpm_resume_end+0xd/0x20
[ 3290.970723] suspend_devices_and_enter+0x1a4/0xba0
[ 3290.970737] pm_suspend+0x26b/0x310
[ 3290.970746] state_store+0x42/0x90
[ 3290.970755] kernfs_fop_write_iter+0x135/0x1b0
[ 3290.970764] new_sync_write+0x125/0x1c0
[ 3290.970777] vfs_write+0x360/0x3c0
[ 3290.970785] ksys_write+0xa7/0xe0
[ 3290.970794] do_syscall_64+0x3a/0x80
[ 3290.970803] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 3290.970811] RIP: 0033:0x7f41b1328164
[ 3290.970819] Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 80 00 00 00 00 8b 05 4a d2 2c 00 48 63 ff 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 55 53 48 89 d5 48 89 f3 48 83
[ 3290.970824] RSP: 002b:00007ffe6ae21b28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 3290.970831] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f41b1328164
[ 3290.970836] RDX: 0000000000000004 RSI: 000055965e651070 RDI: 0000000000000004
[ 3290.970839] RBP: 000055965e651070 R08: 000055965e64f390 R09: 00007f41b1e3d1c0
[ 3290.970843] R10: 000000000000000a R11: 0000000000000246 R12: 0000000000000004
[ 3290.970846] R13: 0000000000000001 R14: 000055965e64f2b0 R15: 0000000000000004
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Coly Li [Wed, 3 Nov 2021 15:10:41 +0000 (23:10 +0800)]
bcache: Revert "bcache: use bvec_virt"
commit
2878feaed543c35f9dbbe6d8ce36fb67ac803eef upstream.
This reverts commit
2fd3e5efe791946be0957c8e1eed9560b541fe46.
The above commit replaces page_address(bv->bv_page) with bvec_virt(bv) to
avoid direct access to bv->bv_page, but when bv->bv_offset is not zero,
page_address(bv->bv_page) is not equal to bvec_virt(bv). In such a case
memory corruption may happen because memory in the next page is tainted
by the following line in do_btree_node_write():
memcpy(bvec_virt(bv), addr, PAGE_SIZE);
This patch reverts the mentioned commit to avoid the memory corruption.
Fixes:
2fd3e5efe791 ("bcache: use bvec_virt")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org # 5.15
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211103151041.70516-1-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Coly Li [Wed, 3 Nov 2021 06:49:17 +0000 (14:49 +0800)]
bcache: fix use-after-free problem in bcache_device_free()
commit
8468f45091d2866affed6f6a7aecc20779139173 upstream.
In bcache_device_free(), the disk pointer is still referenced in
ida_simple_remove() after blk_cleanup_disk() has been called on it.
This may cause a potential panic due to use-after-free on the disk
pointer.
This patch fixes the problem by calling blk_cleanup_disk() after
ida_simple_remove().
Fixes:
bc70852fd104 ("bcache: convert to blk_alloc_disk/blk_cleanup_disk")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: stable@vger.kernel.org # v5.14+
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211103064917.67383-1-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marek Vasut [Tue, 21 Sep 2021 17:35:06 +0000 (19:35 +0200)]
video: backlight: Drop maximum brightness override for brightness zero
commit
33a5471f8da976bf271a1ebbd6b9d163cb0cb6aa upstream.
The note in
c2adda27d202f ("video: backlight: Add of_find_backlight helper
in backlight.c") says that gpio-backlight uses brightness as power state.
This has since been fixed in
ec665b756e6f7 ("backlight: gpio-backlight:
Correct initial power state handling") and other backlight drivers do not
require this workaround. Drop the workaround.
This fixes the case where e.g. pwm-backlight can perfectly well be set to
brightness 0 on boot in DT, which without this patch leads to the display
brightness to be max instead of off.
Fixes:
c2adda27d202f ("video: backlight: Add of_find_backlight helper in backlight.c")
Cc: <stable@vger.kernel.org> # 5.4+
Cc: <stable@vger.kernel.org> # 4.19.x: ec665b756e6f7: backlight: gpio-backlight: Correct initial power state handling
Signed-off-by: Marek Vasut <marex@denx.de>
Acked-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jack Andersen [Mon, 18 Oct 2021 11:25:41 +0000 (13:25 +0200)]
mfd: dln2: Add cell for initializing DLN2 ADC
commit
313c84b5ae4104e48c661d5d706f9f4c425fd50f upstream.
This patch extends the DLN2 driver, adding a cell for the adc_dln2 module.
The original patch[1] fell through the cracks when the driver was added
so ADC has never actually been usable. That patch did not have ACPI
support which was added in v5.9, so the oldest supported version this
current patch can be backported to is 5.10.
[1] https://www.spinics.net/lists/linux-iio/msg33975.html
Cc: <stable@vger.kernel.org> # 5.10+
Signed-off-by: Jack Andersen <jackoalan@gmail.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Link: https://lore.kernel.org/r/20211018112541.25466-1-noralf@tronnes.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rongwei Wang [Fri, 5 Nov 2021 20:43:44 +0000 (13:43 -0700)]
mm, thp: fix incorrect unmap behavior for private pages
commit
8468e937df1f31411d1e127fa38db064af051fe5 upstream.
When truncating the page cache on a file THP, the private pages of a
process should not be unmapped. This incorrect behavior on dynamically
shared libraries causes the related processes to dump core.
A simple test for a DSO (prerequisite: the DSO is mapped as file THP):
int main(int argc, char *argv[])
{
	int fd;

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
	}
	close(fd);
	return 0;
}
The test only opens a target DSO and does nothing else, but this
operation will cause one or more processes to dump core. This patch
fixes this bug.
Link: https://lkml.kernel.org/r/20211025092134.18562-3-rongwei.wang@linux.alibaba.com
Fixes:
eb6ecbed0aa2 ("mm, thp: relax the VM_DENYWRITE constraint on file-backed THPs")
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Tested-by: Xu Yu <xuyu@linux.alibaba.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Song Liu <song@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Collin Fijalkovich <cfijalkovich@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rongwei Wang [Fri, 5 Nov 2021 20:43:41 +0000 (13:43 -0700)]
mm, thp: lock filemap when truncating page cache
commit
55fc0d91746759c71bc165bba62a2db64ac98e35 upstream.
Patch series "fix two bugs for file THP".
This patch (of 2):
Transparent huge pages have supported read-only non-shmem files. The
file-backed THP is collapsed by khugepaged and truncated when written
(for shared libraries).
However, there is a race when multiple writers truncate the same page
cache concurrently.
In that case, subpage(s) of file THP can be revealed by find_get_entry
in truncate_inode_pages_range, which will trigger PageTail BUG_ON in
truncate_inode_page, as follows:
page:000000009e420ff2 refcount:1 mapcount:0 mapping:0000000000000000 index:0x7ff pfn:0x50c3ff
head:0000000075ff816d order:9 compound_mapcount:0 compound_pincount:0
flags: 0x37fffe0000010815(locked|uptodate|lru|arch_1|head)
raw: 37fffe0000000000 fffffe0013108001 dead000000000122 dead000000000400
raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
head: 37fffe0000010815 fffffe001066bd48 ffff000404183c20 0000000000000000
head: 0000000000000600 0000000000000000 00000001ffffffff ffff000c0345a000
page dumped because: VM_BUG_ON_PAGE(PageTail(page))
------------[ cut here ]------------
kernel BUG at mm/truncate.c:213!
Internal error: Oops - BUG: 0 [#1] SMP
Modules linked in: xfs(E) libcrc32c(E) rfkill(E) ...
CPU: 14 PID: 11394 Comm: check_madvise_d Kdump: ...
Hardware name: ECS, BIOS 0.0.0 02/06/2015
pstate: 60400005 (nZCv daif +PAN -UAO -TCO BTYPE=--)
Call trace:
truncate_inode_page+0x64/0x70
truncate_inode_pages_range+0x550/0x7e4
truncate_pagecache+0x58/0x80
do_dentry_open+0x1e4/0x3c0
vfs_open+0x38/0x44
do_open+0x1f0/0x310
path_openat+0x114/0x1dc
do_filp_open+0x84/0x134
do_sys_openat2+0xbc/0x164
__arm64_sys_openat+0x74/0xc0
el0_svc_common.constprop.0+0x88/0x220
do_el0_svc+0x30/0xa0
el0_svc+0x20/0x30
el0_sync_handler+0x1a4/0x1b0
el0_sync+0x180/0x1c0
Code: aa0103e0 900061e1 910ec021 9400d300 (d4210000)
This patch locks the filemap when entering truncate_pagecache(),
avoiding concurrent truncation of the same page cache.
Link: https://lkml.kernel.org/r/20211025092134.18562-1-rongwei.wang@linux.alibaba.com
Link: https://lkml.kernel.org/r/20211025092134.18562-2-rongwei.wang@linux.alibaba.com
Fixes:
eb6ecbed0aa2 ("mm, thp: relax the VM_DENYWRITE constraint on file-backed THPs")
Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Song Liu <song@kernel.org>
Cc: Collin Fijalkovich <cfijalkovich@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Hocko [Fri, 5 Nov 2021 20:38:06 +0000 (13:38 -0700)]
mm, oom: do not trigger out_of_memory from the #PF
commit
60e2793d440a3ec95abb5d6d4fc034a4b480472d upstream.
Any allocation failure during the #PF path will return with VM_FAULT_OOM
which in turn results in pagefault_out_of_memory. This can happen for 2
different reasons. a) Memcg is out of memory and we rely on
mem_cgroup_oom_synchronize to perform the memcg OOM handling or b)
normal allocation fails.
The latter is quite problematic because allocation paths already trigger
out_of_memory and the page allocator tries really hard to not fail
allocations. Anyway, if the OOM killer has been already invoked there
is no reason to invoke it again from the #PF path. Especially when the
OOM condition might be gone by that time and we have no way to find out
other than allocate.
Moreover if the allocation failed and the OOM killer hasn't been invoked
then we are unlikely to do the right thing from the #PF context because
we have already lost the allocation context and restrictions and
therefore might oom kill a task from a different NUMA domain.
This all suggests that there is no legitimate reason to trigger
out_of_memory from pagefault_out_of_memory so drop it. Just to be sure
that no #PF path returns with VM_FAULT_OOM without allocation print a
warning that this is happening before we restart the #PF.
[VvS: #PF allocation can hit into limit of cgroup v1 kmem controller.
This is a local problem related to memcg, however, it causes unnecessary
global OOM kills that are repeated over and over again and escalate into a
real disaster. This has been broken since kmem accounting has been
introduced for cgroup v1 (3.8). There was no kmem specific reclaim for
the separate limit so the only way to handle kmem hard limit was to return
with ENOMEM. In upstream the problem will be fixed by removing the
outdated kmem limit, however stable and LTS kernels cannot do it and are
still affected. This patch fixes the problem and should be backported
into stable/LTS.]
Link: https://lkml.kernel.org/r/f5fd8dd8-0ad4-c524-5f65-920b01972a42@virtuozzo.com
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vasily Averin [Fri, 5 Nov 2021 20:38:02 +0000 (13:38 -0700)]
mm, oom: pagefault_out_of_memory: don't force global OOM for dying tasks
commit
0b28179a6138a5edd9d82ad2687c05b3773c387b upstream.
Patch series "memcg: prohibit unconditional exceeding the limit of dying tasks", v3.
Memory cgroup charging allows killed or exiting tasks to exceed the hard
limit. It can be misused to trigger a global OOM from inside
a memcg-limited container. On the other hand, if memcg fails an allocation
called from inside the #PF handler, it triggers a global OOM from inside
pagefault_out_of_memory().
To prevent these problems this patchset:
(a) removes execution of out_of_memory() from
pagefault_out_of_memory(), because nobody can explain why it is
necessary.
(b) allows memcg to fail allocations of dying/killed tasks.
This patch (of 3):
Any allocation failure during the #PF path will return with VM_FAULT_OOM
which in turn results in pagefault_out_of_memory(), which in turn executes
out_of_memory() and can kill a random task.
An allocation might fail when the current task is the oom victim and
there are no memory reserves left. The OOM killer is already handled at
the page allocator level for the global OOM and at the charging level
for the memcg one. Both have much more information about the scope of
allocation/charge request. This means that either the OOM killer has
been invoked properly and didn't lead to the allocation success or it
has been skipped because it couldn't have been invoked. In both cases
triggering it from here is pointless and even harmful.
It makes much more sense to let the killed task die rather than to wake
up an eternally hungry oom-killer and send him to choose a fatter victim
for breakfast.
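The bailout this describes is, roughly, an early return for tasks that are
already being killed (a sketch; the exact placement in the upstream patch may
differ):
  /* In pagefault_out_of_memory(), after the memcg synchronization: */
  if (fatal_signal_pending(current))
          return;         /* the killed task will exit and free memory anyway */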
Link: https://lkml.kernel.org/r/0828a149-786e-7c06-b70a-52d086818ea3@virtuozzo.com
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vasily Averin [Fri, 5 Nov 2021 20:38:09 +0000 (13:38 -0700)]
memcg: prohibit unconditional exceeding the limit of dying tasks
commit
a4ebf1b6ca1e011289677239a2a361fde4a88076 upstream.
Memory cgroup charging allows killed or exiting tasks to exceed the hard
limit. It is assumed that the amount of the memory charged by those
tasks is bound and most of the memory will get released while the task
is exiting. This resembles the heuristic for the global OOM situation,
when tasks get access to memory reserves. There is no global memory
shortage at the memcg level, so the memcg heuristic is more relaxed.
The above assumption is overly optimistic though. E.g. vmalloc can
scale to really large requests and the heuristic would allow that. We
used to have an early break in the vmalloc allocator for killed tasks
but this has been reverted by commit
b8c8a338f75e ("Revert "vmalloc:
back off when the current task is killed""). There are likely other
similar code paths which do not check for fatal signals in an
allocation&charge loop. Also there are some kernel objects charged to a
memcg which are not bound to a process lifetime.
It has been observed that it is not really hard to trigger these
bypasses and cause a global OOM situation.
One potential way to address these runaways would be to limit the amount
of excess (similar to the global OOM with limited oom reserves). This
is certainly possible but it is not really clear how much of an excess
is desirable and still protects from global OOMs as that would have to
consider the overall memcg configuration.
This patch addresses the problem by removing the heuristic altogether.
Bypass is only allowed for requests which either cannot fail or where the
failure is not desirable while excess should still be limited (e.g. atomic
requests). Implementation-wise, a killed or dying task fails to charge if
it has passed the OOM killer stage. That should give all forms of reclaim
a chance to restore the limit before the failure (ENOMEM) and tell the
caller to back off.
In addition, this patch renames the should_force_charge() helper to
task_is_dying() because its use is no longer associated with forced
charging.
This patch depends on pagefault_out_of_memory() not triggering
out_of_memory(), because otherwise a memcg failure can unwind to
VM_FAULT_OOM and trigger the global OOM killer.
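A sketch of the renamed helper (a close approximation of what such a
predicate looks like, not necessarily the exact upstream code); the charge
path then returns ENOMEM for such tasks once the OOM stage has been passed
instead of bypassing the limit:
  #include <linux/oom.h>
  #include <linux/sched/signal.h>

  /* Formerly should_force_charge(): the condition no longer forces a charge,
   * it only identifies tasks that are already on their way out. */
  static bool task_is_dying(void)
  {
          return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
                  (current->flags & PF_EXITING);
  }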
Link: https://lkml.kernel.org/r/8f5cebbb-06da-4902-91f0-6566fc4b4203@virtuozzo.com
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthew Wilcox (Oracle) [Fri, 5 Nov 2021 20:37:10 +0000 (13:37 -0700)]
mm/filemap.c: remove bogus VM_BUG_ON
commit
d417b49fff3e2f21043c834841e8623a6098741d upstream.
It is not safe to check page->index without holding the page lock. It
can be changed if the page is moved between the swap cache and the page
cache for a shmem file, for example. There is a VM_BUG_ON below which
checks page->index is correct after taking the page lock.
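A sketch of the safe pattern (illustrative, with a hypothetical helper name;
not the literal diff): any assertion on page->index is made only while the
page lock is held, where the index is stable.
  #include <linux/mm.h>
  #include <linux/pagemap.h>

  /* Sketch: check page->index only under the page lock. */
  static void example_check_index(struct page *page, pgoff_t expected)
  {
          if (!trylock_page(page))
                  return;                 /* cannot check safely without the lock */
          VM_BUG_ON_PAGE(page->index != expected, page);
          unlock_page(page);
  }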
Link: https://lkml.kernel.org/r/20210818144932.940640-1-willy@infradead.org
Fixes:
5c211ba29deb ("mm: add and use find_lock_entries")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reported-by: <syzbot+c87be4f669d920c76330@syzkaller.appspotmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dominique Martinet [Tue, 2 Nov 2021 10:47:47 +0000 (19:47 +0900)]
9p/net: fix missing error check in p9_check_errors
commit
27eb4c3144f7a5ebef3c9a261d80cb3e1fa784dc upstream.
Link: https://lkml.kernel.org/r/99338965-d36c-886e-cd0e-1d8fff2b4746@gmail.com
Reported-by: syzbot+06472778c97ed94af66d@syzkaller.appspotmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Borkmann [Mon, 11 Oct 2021 12:12:36 +0000 (14:12 +0200)]
net, neigh: Enable state migration between NUD_PERMANENT and NTF_USE
[ Upstream commit
3dc20f4762c62d3b3f0940644881ed818aa7b2f5 ]
Currently, it is not possible to migrate a neighbor entry between the
NUD_PERMANENT state and the NTF_USE flag with a dynamic NUD state from a user
space control plane. Similarly, it is not possible to add/remove the
NTF_EXT_LEARNED flag from an existing neighbor entry in combination with the
NTF_USE flag.
This is because the NTF_USE path directly calls into neigh_event_send() without
any metadata updates as done in __neigh_update(). Thus, to enable this use
case, extend __neigh_update() with a NEIGH_UPDATE_F_USE flag which breaks the
NUD_PERMANENT state in particular, so that a later neigh_event_send() is able
to re-resolve the neighbor entry.
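The core of the new handling inside __neigh_update() looks roughly like this
(a sketch based on the description above, not guaranteed to match the upstream
hunk line for line):
  if (flags & NEIGH_UPDATE_F_USE) {
          new = old & ~NUD_PERMANENT;     /* break out of the PERMANENT state */
          neigh->nud_state = new;
          err = 0;
          goto out;
  }
The Before/After transitions below demonstrate the resulting behavior.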
Before fix, NUD_PERMANENT -> NUD_* & NTF_USE:
# ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT
[...]
As can be seen, despite the admin-triggered replace, the entry remains in the
NUD_PERMANENT state.
After fix, NUD_PERMANENT -> NUD_* & NTF_USE:
# ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn REACHABLE
[...]
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn STALE
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT
[...]
After the fix, the admin-triggered replace switches the entry to a dynamic
state via the NTF_USE flag, which triggers a new neighbor resolution. Likewise,
we can transition back from there, if needed, into NUD_PERMANENT.
Similar before/after behavior can be observed for below transitions:
Before fix, NTF_USE -> NTF_USE | NTF_EXT_LEARNED -> NTF_USE:
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE
[...]
After fix, NTF_USE -> NTF_USE | NTF_EXT_LEARNED -> NTF_USE:
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn REACHABLE
[...]
# ./ip/ip n replace 192.168.178.30 dev enp5s0 use
# ./ip/ip n
192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE
[...]
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Roopa Prabhu <roopa@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Anatolij Gustschin [Thu, 14 Oct 2021 09:40:12 +0000 (11:40 +0200)]
dmaengine: bestcomm: fix system boot lockups
commit
adec566b05288f2787a1f88dbaf77ed8b0c644fa upstream.
memset() and memcpy() on an MMIO region, as done here, result in a
lockup at startup on the mpc5200 platform (since this first happens
during probing of the ATA and Ethernet drivers). Use memset_io()
and memcpy_toio() instead.
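For illustration (the function and parameter names are hypothetical), the
difference boils down to using the I/O accessors on __iomem pointers:
  #include <linux/io.h>

  /* Sketch: clearing and filling an MMIO/SRAM area must go through the I/O
   * accessors; plain memset()/memcpy() on __iomem memory can hang the bus
   * on mpc5200. */
  static void example_init_io_area(void __iomem *dst, const void *src, size_t len)
  {
          memset_io(dst, 0, len);         /* instead of memset(dst, 0, len) */
          memcpy_toio(dst, src, len);     /* instead of memcpy(dst, src, len) */
  }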
Fixes:
2f9ea1bde0d1 ("bestcomm: core bestcomm support for Freescale MPC5200")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Link: https://lore.kernel.org/r/20211014094012.21286-1-agust@denx.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kishon Vijay Abraham I [Sun, 31 Oct 2021 03:24:11 +0000 (08:54 +0530)]
dmaengine: ti: k3-udma: Set r/tchan or rflow to NULL if request fail
commit
eb91224e47ec33a0a32c9be0ec0fcb3433e555fd upstream.
udma_get_*() checks if rchan/tchan/rflow is already allocated by checking
whether it has a non-NULL value. For the error cases, rchan/tchan/rflow will
hold an error value, and udma_get_*() considers this as already allocated
(PASS) since the error values are non-NULL. This results in a NULL pointer
dereference error while dereferencing rchan/tchan/rflow.
Reset the value of rchan/tchan/rflow to NULL if a channel request fails.
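The pattern, sketched for the tchan case (the reserve helper and field names
follow the driver's style but are assumptions, not the literal diff); the same
applies to rchan and rflow:
  uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, -1);
  if (IS_ERR(uc->tchan)) {
          ret = PTR_ERR(uc->tchan);
          uc->tchan = NULL;       /* don't leave an ERR_PTR behind: a later
                                   * udma_get_tchan() would treat it as valid */
          return ret;
  }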
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-3-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kishon Vijay Abraham I [Sun, 31 Oct 2021 03:24:10 +0000 (08:54 +0530)]
dmaengine: ti: k3-udma: Set bchan to NULL if a channel request fail
commit
5c6c6d60e4b489308ae4da8424c869f7cc53cd12 upstream.
bcdma_get_*() checks if bchan is already allocated by checking whether it
has a non-NULL value. For the error cases, bchan will hold an error value,
and bcdma_get_*() considers this as already allocated (PASS) since the
error values are non-NULL. This results in a NULL pointer dereference
error while dereferencing bchan.
Reset the value of bchan to NULL if a channel request fails.
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-2-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>