Shenwei Wang [Tue, 23 Jan 2024 16:51:41 +0000 (10:51 -0600)]
net: fec: fix the unhandled context fault from smmu
[ Upstream commit 5e344807735023cd3a67c37a1852b849caa42620 ]
When repeatedly changing the interface link speed using the command below:
ethtool -s eth0 speed 100 duplex full
ethtool -s eth0 speed 1000 duplex full
The following errors may sometimes be reported by the ARM SMMU driver:
[ 5395.035364] fec 5b040000.ethernet eth0: Link is Down
[ 5395.039255] arm-smmu 51400000.iommu: Unhandled context fault: fsr=0x402, iova=0x00000000, fsynr=0x100001, cbfrsynra=0x852, cb=2
[ 5398.108460] fec 5b040000.ethernet eth0: Link is Up - 100Mbps/Full - flow control off
The root cause is that the FEC driver does not properly stop the TX queue
during the link speed transitions; this results in invalid virtual
I/O address translations from the SMMU and causes the context faults.
Fixes: dbc64a8ea231 ("net: fec: move calls to quiesce/resume packet processing out of fec_restart()")
Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
Link: https://lore.kernel.org/r/20240123165141.2008104-1-shenwei.wang@nxp.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Hangbin Liu [Tue, 23 Jan 2024 07:59:17 +0000 (15:59 +0800)]
selftests: bonding: do not test arp/ns target with mode balance-alb/tlb
[ Upstream commit a2933a8759a62269754e54733d993b19de870e84 ]
The prio_arp/ns tests hard code the mode to active-backup. At the same
time, the balance-alb/tlb modes do not support arp/ns targets. So remove
the prio_arp/ns tests from the loop and only test active-backup mode.
Fixes: 481b56e0391e ("selftests: bonding: re-format bond option tests")
Reported-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Closes: https://lore.kernel.org/netdev/17415.1705965957@famine/
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Link: https://lore.kernel.org/r/20240123075917.1576360-1-liuhangbin@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhipeng Lu [Mon, 22 Jan 2024 17:24:42 +0000 (01:24 +0800)]
fjes: fix memleaks in fjes_hw_setup
[ Upstream commit f6cc4b6a3ae53df425771000e9c9540cce9b7bb1 ]
In fjes_hw_setup, several pieces of memory are allocated and their
deallocation is deferred to fjes_hw_exit in fjes_probe, via the
following call chain:
fjes_probe
|-> fjes_hw_init
|-> fjes_hw_setup
|-> fjes_hw_exit
However, when fjes_hw_setup fails, fjes_hw_exit won't be called and thus
all the resources allocated in fjes_hw_setup will be leaked. In this
patch, we free those resources in fjes_hw_setup and prevent such leaks.
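A hedged sketch of the usual error-unwinding shape of such a fix follows
(labels, buffers and the registration helper are hypothetical, not the
actual fjes code):

  buf_a = kzalloc(size_a, GFP_KERNEL);
  if (!buf_a)
      return -ENOMEM;
  buf_b = kzalloc(size_b, GFP_KERNEL);
  if (!buf_b) {
      ret = -ENOMEM;
      goto free_buf_a;           /* undo the earlier allocation ourselves */
  }
  ret = hypothetical_register_step(hw);
  if (ret)
      goto free_buf_b;           /* fjes_hw_exit() will not run on this path */
  return 0;

  free_buf_b:
      kfree(buf_b);
  free_buf_a:
      kfree(buf_a);
      return ret;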
Fixes: 2fcbca687702 ("fjes: platform_driver's .probe and .remove routine")
Signed-off-by: Zhipeng Lu <alexious@zju.edu.cn>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240122172445.3841883-1-alexious@zju.edu.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:16:02 +0000 (20:16 +0100)]
i40e: update xdp_rxq_info::frag_size for ZC enabled Rx queue
[ Upstream commit 0cbb08707c932b3f004bc1a8ec6200ef572c1f5f ]
Now that i40e driver correctly sets up frag_size in xdp_rxq_info, let us
make it work for ZC multi-buffer as well. i40e_ring::rx_buf_len for ZC
is being set via xsk_pool_get_rx_frame_size() and this needs to be
propagated up to xdp_rxq_info.
Fixes: 1c9ba9c14658 ("i40e: xsk: add RX multi-buffer support")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-12-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:16:01 +0000 (20:16 +0100)]
i40e: set xdp_rxq_info::frag_size
[ Upstream commit a045d2f2d03d23e7db6772dd83e0ba2705dfad93 ]
i40e supports XDP multi-buffer, so it is supposed to use
__xdp_rxq_info_reg() instead of xdp_rxq_info_reg() and set the
frag_size. It cannot simply be converted at the existing callsite because
rx_buf_len could be uninitialized, so let us register xdp_rxq_info
within i40e_configure_rx_ring(), which happens to be called with an
already initialized rx_buf_len value.
Commit 5180ff1364bc ("i40e: use int for i40e_status") converted 'err' to
int, so two variables to deal with return codes are not needed within
i40e_configure_rx_ring(). Remove 'ret' and use 'err' to handle the status
from xdp_rxq_info registration.
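For reference, a hedged sketch of what a frag_size-aware registration
looks like (ring field names are illustrative, not the exact i40e diff):

  err = __xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
                           rx_ring->queue_index,
                           rx_ring->q_vector->napi.napi_id,
                           rx_ring->rx_buf_len);
  if (err)
      return err;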
Fixes: e213ced19bef ("i40e: add support for XDP multi-buffer Rx")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-11-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:16:00 +0000 (20:16 +0100)]
xdp: reflect tail increase for MEM_TYPE_XSK_BUFF_POOL
[ Upstream commit fbadd83a612c3b7aad2987893faca6bd24aaebb3 ]
The XSK ZC Rx path calculates the size of data that will be posted to the
XSK Rx queue by subtracting xdp_buff::data from xdp_buff::data_end.
In bpf_xdp_frags_increase_tail(), when underlying memory type of
xdp_rxq_info is MEM_TYPE_XSK_BUFF_POOL, add offset to data_end in tail
fragment, so that later on user space will be able to take into account
the amount of bytes added by XDP program.
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-10-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:59 +0000 (20:15 +0100)]
ice: update xdp_rxq_info::frag_size for ZC enabled Rx queue
[ Upstream commit 3de38c87174225487fc93befeea7d380db80aef6 ]
Now that ice driver correctly sets up frag_size in xdp_rxq_info, let us
make it work for ZC multi-buffer as well. ice_rx_ring::rx_buf_len for ZC
is being set via xsk_pool_get_rx_frame_size() and this needs to be
propagated up to xdp_rxq_info.
Use a bigger hammer: instead of unregistering only xdp_rxq_info's
memory model, unregister it altogether, then register it again so that
xdp_rxq_info carries the correct frag_size value.
Fixes: 1bbc04de607b ("ice: xsk: add RX multi-buffer support")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-9-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:58 +0000 (20:15 +0100)]
intel: xsk: initialize skb_frag_t::bv_offset in ZC drivers
[ Upstream commit 290779905d09d5fdf6caa4f58ddefc3f4db0c0a9 ]
The ice and i40e ZC drivers currently set the offset of a frag within
skb_shared_info to 0, which is incorrect. xdp_buffs that come from
xsk_buff_pool always have 256 bytes of headroom, which needs to be
taken into account when retrieving xdp_buff::data via skb_frag_address().
Otherwise, bpf_xdp_frags_increase_tail() would start its job from
xdp_buff::data_hard_start, which would result in overwriting the existing
payload.
Fixes: 1c9ba9c14658 ("i40e: xsk: add RX multi-buffer support")
Fixes: 1bbc04de607b ("ice: xsk: add RX multi-buffer support")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-8-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:57 +0000 (20:15 +0100)]
ice: remove redundant xdp_rxq_info registration
[ Upstream commit 2ee788c06493d02ee85855414cca39825e768aaf ]
xdp_rxq_info struct can be registered by drivers via two functions -
xdp_rxq_info_reg() and __xdp_rxq_info_reg(). The latter one allows
drivers that support XDP multi-buffer to set up xdp_rxq_info::frag_size
which in turn will make it possible to grow the packet via
bpf_xdp_adjust_tail() BPF helper.
Currently, ice registers xdp_rxq_info in two spots:
1) ice_setup_rx_ring() // via xdp_rxq_info_reg(), BUG
2) ice_vsi_cfg_rxq() // via __xdp_rxq_info_reg(), OK
The commit cited under the Fixes tag took care of setting up frag_size and
updated the registration scheme in 2), but it did not help, as
1) is called before 2) and, as shown above, uses the old registration
function. This means that 2) sees that xdp_rxq_info is already
registered and never calls __xdp_rxq_info_reg(), which leaves us with
xdp_rxq_info::frag_size set to 0.
To fix this misbehavior, simply remove xdp_rxq_info_reg() call from
ice_setup_rx_ring().
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-7-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tirthendu Sarkar [Wed, 24 Jan 2024 19:15:56 +0000 (20:15 +0100)]
i40e: handle multi-buffer packets that are shrunk by xdp prog
[ Upstream commit 83014323c642b8faa2d64a5f303b41c019322478 ]
XDP programs can shrink packets by calling the bpf_xdp_adjust_tail()
helper function. For multi-buffer packets this may lead to reduction of
frag count stored in skb_shared_info area of the xdp_buff struct. This
results in issues with the current handling of XDP_PASS and XDP_DROP
cases.
For XDP_PASS, currently the skb is being built using the frag count of
the xdp_buff before it was processed by the XDP prog, which results in
an inconsistent skb when the frag count gets reduced by the XDP prog. To
fix this, get the correct frag count while building the skb instead of
using the pre-obtained frag count.
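A minimal sketch of that XDP_PASS idea (helpers are the generic ones from
include/net/xdp.h; the surrounding code is hypothetical):

  /* Read the frag count only after the XDP program has run, so a shrink
   * done by bpf_xdp_adjust_tail() is reflected in the skb being built. */
  struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
  u32 nr_frags = xdp_buff_has_frags(xdp) ? sinfo->nr_frags : 0;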
For XDP_DROP, the current page recycling logic will not reuse the page but
instead will adjust the pagecnt_bias so that the page can be freed. This
again results in inconsistent behavior, as the page refcnt has already
been changed by the helper while freeing the frag(s) as part of
shrinking the packet. To fix this, only adjust pagecnt_bias for buffers
that are still part of the packet post-XDP prog run.
Fixes: e213ced19bef ("i40e: add support for XDP multi-buffer Rx")
Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-6-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:55 +0000 (20:15 +0100)]
ice: work on pre-XDP prog frag count
[ Upstream commit ad2047cf5d9313200e308612aed516548873d124 ]
Fix an OOM panic in XDP_DRV mode when an XDP program shrinks a
multi-buffer packet by 4k bytes and then redirects it to an AF_XDP
socket.
Since support for handling multi-buffer frames was added to XDP, usage
of the bpf_xdp_adjust_tail() helper within an XDP program can free the
page that a given fragment occupies and in turn decrease the fragment
count within the skb_shared_info that is embedded in the xdp_buff struct.
In the current ice driver codebase, it can become problematic when the
page recycling logic decides not to reuse the page. In such a case,
__page_frag_cache_drain() is used with an ice_rx_buf::pagecnt_bias that
was not adjusted after the refcount of the page was changed by the XDP
prog, which in turn does not drain the refcount to 0, so the page is
never freed.
To address this, let us store the count of frags from before the XDP
program was executed on the Rx ring struct. This will be used to compare
with the current frag count from the skb_shared_info embedded in the
xdp_buff. A smaller value in the latter indicates that the XDP prog freed
frag(s). Then, for the given delta, decrement pagecnt_bias for the
XDP_DROP verdict.
While at it, let us also handle the EOP frag within
ice_set_rx_bufs_act() to make our life easier, so all of the adjustments
that need to be applied against freed frags are performed in a single
place.
Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-5-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:54 +0000 (20:15 +0100)]
xsk: fix usage of multi-buffer BPF helpers for ZC XDP
[ Upstream commit c5114710c8ce86b8317e9b448f4fd15c711c2a82 ]
Currently, when a packet is shrunk via bpf_xdp_adjust_tail() and the
memory type is set to MEM_TYPE_XSK_BUFF_POOL, a NULL pointer dereference
happens:
[ 1136314.192256] BUG: kernel NULL pointer dereference, address: 0000000000000034
[ 1136314.203943] #PF: supervisor read access in kernel mode
[ 1136314.213768] #PF: error_code(0x0000) - not-present page
[ 1136314.223550] PGD 0 P4D 0
[ 1136314.230684] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 1136314.239621] CPU: 8 PID: 54203 Comm: xdpsock Not tainted 6.6.0+ #257
[ 1136314.250469] Hardware name: Intel Corporation S2600WFT/S2600WFT, BIOS SE5C620.86B.02.01.0008.031920191559 03/19/2019
[ 1136314.265615] RIP: 0010:__xdp_return+0x6c/0x210
[ 1136314.274653] Code: ad 00 48 8b 47 08 49 89 f8 a8 01 0f 85 9b 01 00 00 0f 1f 44 00 00 f0 41 ff 48 34 75 32 4c 89 c7 e9 79 cd 80 ff 83 fe 03 75 17 <f6> 41 34 01 0f 85 02 01 00 00 48 89 cf e9 22 cc 1e 00 e9 3d d2 86
[ 1136314.302907] RSP: 0018:ffffc900089f8db0 EFLAGS: 00010246
[ 1136314.312967] RAX: ffffc9003168aed0 RBX: ffff8881c3300000 RCX: 0000000000000000
[ 1136314.324953] RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffffc9003168c000
[ 1136314.336929] RBP: 0000000000000ae0 R08: 0000000000000002 R09: 0000000000010000
[ 1136314.348844] R10: ffffc9000e495000 R11: 0000000000000040 R12: 0000000000000001
[ 1136314.360706] R13: 0000000000000524 R14: ffffc9003168aec0 R15: 0000000000000001
[ 1136314.373298] FS:  00007f8df8bbcb80(0000) GS:ffff8897e0e00000(0000) knlGS:0000000000000000
[ 1136314.386105] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1136314.396532] CR2: 0000000000000034 CR3: 00000001aa912002 CR4: 00000000007706f0
[ 1136314.408377] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1136314.420173] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1136314.431890] PKRU: 55555554
[ 1136314.439143] Call Trace:
[ 1136314.446058]  <IRQ>
[ 1136314.452465]  ? __die+0x20/0x70
[ 1136314.459881]  ? page_fault_oops+0x15b/0x440
[ 1136314.468305]  ? exc_page_fault+0x6a/0x150
[ 1136314.476491]  ? asm_exc_page_fault+0x22/0x30
[ 1136314.484927]  ? __xdp_return+0x6c/0x210
[ 1136314.492863]  bpf_xdp_adjust_tail+0x155/0x1d0
[ 1136314.501269]  bpf_prog_ccc47ae29d3b6570_xdp_sock_prog+0x15/0x60
[ 1136314.511263]  ice_clean_rx_irq_zc+0x206/0xc60 [ice]
[ 1136314.520222]  ? ice_xmit_zc+0x6e/0x150 [ice]
[ 1136314.528506]  ice_napi_poll+0x467/0x670 [ice]
[ 1136314.536858]  ? ttwu_do_activate.constprop.0+0x8f/0x1a0
[ 1136314.546010]  __napi_poll+0x29/0x1b0
[ 1136314.553462]  net_rx_action+0x133/0x270
[ 1136314.561619]  __do_softirq+0xbe/0x28e
[ 1136314.569303]  do_softirq+0x3f/0x60
This comes from a __xdp_return() call with the xdp_buff argument passed as
NULL, which is supposed to be consumed by an xsk_buff_free() call.
To address this properly, in the ZC case, the node that represents the
frag being removed has to be pulled out of xskb_list. Introduce
appropriate xsk helpers to do such a node operation and use them
accordingly within bpf_xdp_adjust_tail().
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> # For the xsk header part
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-4-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Daan De Meyer [Wed, 11 Oct 2023 18:51:05 +0000 (20:51 +0200)]
bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf
[ Upstream commit 53e380d21441909b12b6e0782b77187ae4b971c4 ]
As prep for adding unix socket support to the cgroup sockaddr hooks,
let's add a kfunc bpf_sock_addr_set_sun_path() that allows modifying a unix
sockaddr from bpf. While this is already possible for AF_INET and AF_INET6,
we'll need this kfunc when we add unix socket support since modifying the
address for those requires modifying both the address and the sockaddr
length.
Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-4-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Stable-dep-of: c5114710c8ce ("xsk: fix usage of multi-buffer BPF helpers for ZC XDP")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Daan De Meyer [Wed, 11 Oct 2023 18:51:04 +0000 (20:51 +0200)]
bpf: Propagate modified uaddrlen from cgroup sockaddr programs
[ Upstream commit fefba7d1ae198dcbf8b3b432de46a4e29f8dbd8c ]
As prep for adding unix socket support to the cgroup sockaddr hooks,
let's propagate the sockaddr length back to the caller after running
a bpf cgroup sockaddr hook program. While not important for AF_INET or
AF_INET6, the sockaddr length is important when working with AF_UNIX
sockaddrs as the size of the sockaddr cannot be determined just from the
address family or the sockaddr's contents.
__cgroup_bpf_run_filter_sock_addr() is modified to take the uaddrlen as
an input/output argument. After running the program, the modified sockaddr
length is stored in the uaddrlen pointer.
Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-3-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Stable-dep-of: c5114710c8ce ("xsk: fix usage of multi-buffer BPF helpers for ZC XDP")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:53 +0000 (20:15 +0100)]
xsk: make xsk_buff_pool responsible for clearing xdp_buff::flags
[ Upstream commit f7f6aa8e24383fbb11ac55942e66da9660110f80 ]
XDP multi-buffer support introduced the XDP_FLAGS_HAS_FRAGS flag, which is
used by drivers to notify the data path whether an xdp_buff contains
fragments or not. The data path looks up the mentioned flag on the first
buffer that occupies the linear part of the xdp_buff, so drivers only
modify it there. This is sufficient for SKB and XDP_DRV modes, as the
xdp_buff is usually allocated on the stack or resides within the struct
representing the driver's queue, and fragments are carried via skb_frag_t
structs. IOW, we are dealing with only one xdp_buff.
ZC mode though relies on a list of xdp_buff structs that is carried via
xsk_buff_pool::xskb_list, so the ZC data path has to make sure that
fragments do *not* have XDP_FLAGS_HAS_FRAGS set. Otherwise,
xsk_buff_free() could misbehave if it were executed against an xdp_buff
that carries a frag with the XDP_FLAGS_HAS_FRAGS flag set. Such a scenario
can take place when, within the supplied XDP program, bpf_xdp_adjust_tail()
is used with a negative offset that in turn releases the tail fragment
from a multi-buffer frame.
Calling xsk_buff_free() on the tail fragment with XDP_FLAGS_HAS_FRAGS
would result in releasing all the nodes from xskb_list that were produced
by the driver before XDP program execution, which is not what is intended -
only the tail fragment should be deleted from xskb_list and then put onto
xsk_buff_pool::free_list. Such a multi-buffer frame will never make it up
to user space, so from the AF_XDP application's POV there would be no
traffic running; however, due to free_list constantly getting new nodes,
the driver will be able to feed the HW Rx queue with recycled buffers.
The bottom line is that instead of traffic being redirected to user space,
it would be continuously dropped.
To fix this, let us clear the mentioned flag on xsk_buff_pool side
during xdp_buff initialization, which is what should have been done
right from the start of XSK multi-buffer support.
Fixes: 1bbc04de607b ("ice: xsk: add RX multi-buffer support")
Fixes: 1c9ba9c14658 ("i40e: xsk: add RX multi-buffer support")
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-3-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Fijalkowski [Wed, 24 Jan 2024 19:15:52 +0000 (20:15 +0100)]
xsk: recycle buffer in case Rx queue was full
[ Upstream commit 269009893146c495f41e9572dd9319e787c2eba9 ]
Add a missing xsk_buff_free() call for when __xsk_rcv_zc() failed to
produce a descriptor to the XSK Rx queue.
Fixes: 24ea50127ecf ("xsk: support mbuf on ZC RX")
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20240124191602.566724-2-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jakub Kicinski [Tue, 23 Jan 2024 06:05:29 +0000 (22:05 -0800)]
selftests: netdevsim: fix the udp_tunnel_nic test
[ Upstream commit 0879020a7817e7ce636372c016b4528f541c9f4d ]
This test is missing a whole bunch of checks for interface
renaming and one ifup. Presumably it was only used on a system
with renaming disabled and NetworkManager running.
Fixes: 91f430b2c49d ("selftests: net: add a test for UDP tunnel info infra")
Acked-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240123060529.1033912-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jakub Kicinski [Mon, 22 Jan 2024 19:58:15 +0000 (11:58 -0800)]
selftests: net: fix rps_default_mask with >32 CPUs
[ Upstream commit 0719b5338a0cbe80d1637a5fb03d8141b5bfc7a1 ]
If there are more than 32 CPUs the bitmask will start to contain
commas, leading to:
./rps_default_mask.sh: line 36: [: 00000000,00000000: integer expression expected
Remove the commas; bash doesn't interpret leading zeroes as octal,
so that should be good enough. Switch to bash, as Simon reports that
not all shells support this type of substitution.
Fixes: c12e0d5f267d ("self-tests: introduce self-tests for RPS default mask")
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240122195815.638997-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jenishkumar Maheshbhai Patel [Fri, 19 Jan 2024 03:59:14 +0000 (19:59 -0800)]
net: mvpp2: clear BM pool before initialization
[ Upstream commit 9f538b415db862e74b8c5d3abbccfc1b2b6caa38 ]
The BM pool register values persist after booting the kernel using
kexec, which results in a kernel panic. Thus, clear the
BM pool registers before initialisation to fix the issue.
Fixes: 3f518509dedc ("ethernet: Add new driver for Marvell Armada 375 network unit")
Signed-off-by: Jenishkumar Maheshbhai Patel <jpatel2@marvell.com>
Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Link: https://lore.kernel.org/r/20240119035914.2595665-1-jpatel2@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Bernd Edlinger [Mon, 22 Jan 2024 18:19:09 +0000 (19:19 +0100)]
net: stmmac: Wait a bit for the reset to take effect
[ Upstream commit a5f5eee282a0aae80227697e1d9c811b1726d31d ]
otherwise the synopsys_id value may be read out wrong,
because the GMAC_VERSION register might still be in its reset
state for at least 1 us after the reset is de-asserted.
Add a 10 us wait before continuing, to be on the safe side.
> From what have you got that delay value?
Just trial and error: with very old Linux versions and old gcc versions
the synopsys_id was read out correctly most of the time (but not always);
with recent Linux versions and recent gcc versions it was read out
wrongly most of the time, but again not always.
I don't have access to the VHDL code in question, so I cannot
tell why it takes so long to get the correct values, I also do not
have more than a few hardware samples, so I cannot tell how long
this timeout must be in worst case.
Experimentally I can tell that the register is read several times
as zero immediately after the reset is de-asserted; also, adding several
no-ops is not enough, adding a printk is enough, and udelay(1) also seems
to be enough, but I tried that not very often and I have no access to many
hardware samples to be 100% sure about the necessary delay.
And since the udelay here is only executed once per device instance,
it seems acceptable to delay the boot for 10 us.
BTW: my hardware's synopsys id is 0x37.
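A hedged sketch of the kind of change being described (the reset handle,
register name and read-back are assumptions for illustration):

  reset_control_deassert(priv->plat->stmmac_rst);
  udelay(10);     /* give the core time to leave reset before reading */
  hwid = readl(ioaddr + GMAC_VERSION);    /* synopsys_id read-out */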
Fixes: c5e4ddbdfa11 ("net: stmmac: Add support for optional reset control")
Signed-off-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Link: https://lore.kernel.org/r/AS8P193MB1285A810BD78C111E7F6AA34E4752@AS8P193MB1285.EURP193.PROD.OUTLOOK.COM
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Pablo Neira Ayuso [Tue, 23 Jan 2024 15:38:25 +0000 (16:38 +0100)]
netfilter: nf_tables: validate NFPROTO_* family
[ Upstream commit d0009effa8862c20a13af4cb7475d9771b905693 ]
Several expressions explicitly refer to NF_INET_* hook definitions
from expr->ops->validate, however, the family is not validated.
Bail out with EOPNOTSUPP in case they are used from unsupported
families.
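For illustration only (signature simplified, expression name and hook mask
hypothetical), an affected ->validate callback ends up doing something
along these lines:

  static int nft_example_validate(const struct nft_ctx *ctx,
                                  const struct nft_expr *expr)
  {
      if (ctx->family != NFPROTO_IPV4 &&
          ctx->family != NFPROTO_IPV6 &&
          ctx->family != NFPROTO_INET)
          return -EOPNOTSUPP;     /* unsupported family, bail out */

      return nft_chain_validate_hooks(ctx->chain,
                                      (1 << NF_INET_LOCAL_OUT));
  }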
Fixes: 0ca743a55991 ("netfilter: nf_tables: add compatibility layer for x_tables")
Fixes: a3c90f7a2323 ("netfilter: nf_tables: flow offload expression")
Fixes: 2fa841938c64 ("netfilter: nf_tables: introduce routing expression")
Fixes: 554ced0a6e29 ("netfilter: nf_tables: add support for native socket matching")
Fixes: ad49d86e07a4 ("netfilter: nf_tables: Add synproxy support")
Fixes: 4ed8eb6570a4 ("netfilter: nf_tables: Add native tproxy support")
Fixes: 6c47260250fc ("netfilter: nf_tables: add xfrm expression")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Florian Westphal [Fri, 19 Jan 2024 12:34:32 +0000 (13:34 +0100)]
netfilter: nf_tables: restrict anonymous set and map names to 16 bytes
[ Upstream commit b462579b2b86a8f5230543cadd3a4836be27baf7 ]
nftables has two types of sets/maps, one where userspace defines the
name, and anonymous sets/maps, where userspace defines a template name.
For the latter, the kernel requires the presence of exactly one "%d".
nftables uses "__set%d" and "__map%d" for this. The kernel will
expand the format specifier and replace it with the smallest unused
number.
As-is, userspace could define a template name that allows the set name
to move past the 256-byte upper limit (post-expansion).
I don't see how this could be a problem, but I would prefer if userspace
cannot do this, so add a limit of 16 bytes for the '%d' template name.
16 bytes is the old total upper limit for set names that existed when
nf_tables was merged initially.
Fixes: 387454901bd6 ("netfilter: nf_tables: Allow set names of up to 255 chars")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Florian Westphal [Fri, 19 Jan 2024 12:11:32 +0000 (13:11 +0100)]
netfilter: nft_limit: reject configurations that cause integer overflow
[ Upstream commit c9d9eb9c53d37cdebbad56b91e40baf42d5a97aa ]
Reject bogus configs where the internal token counter wraps around.
This only occurs with very, very large requests, such as 17 Gbyte/s.
It's better to reject this rather than have an incorrect ratelimit.
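A hedged sketch of the type of check involved (function and parameter
names hypothetical; check_mul_overflow() is the stock kernel helper):

  #include <linux/overflow.h>

  static int limit_tokens_sane(u64 rate, u64 unit)
  {
      u64 tokens;

      /* reject configs whose token math would wrap a u64 */
      if (check_mul_overflow(rate, unit, &tokens))
          return -EOVERFLOW;
      return 0;
  }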
Fixes: d2168e849ebf ("netfilter: nft_limit: add per-byte limiting")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Frederic Weisbecker [Mon, 18 Dec 2023 23:19:15 +0000 (00:19 +0100)]
rcu: Defer RCU kthreads wakeup when CPU is dying
[ Upstream commit e787644caf7628ad3269c1fbd321c3255cf51710 ]
When the CPU goes idle for the last time during the CPU down hotplug
process, RCU reports a final quiescent state for the current CPU. If
this quiescent state propagates up to the top, some tasks may then be
woken up to complete the grace period: the main grace period kthread
and/or the expedited main workqueue (or kworker).
If those kthreads have a SCHED_FIFO policy, the wake up can indirectly
arm the RT bandwidth timer on the local offline CPU. Since this happens
after hrtimers have been migrated at the CPUHP_AP_HRTIMERS_DYING stage,
the timer gets ignored. Therefore, if the RCU kthreads are waiting for RT
bandwidth to be available, they may never actually be scheduled.
This triggers TREE03 rcutorture hangs:
rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 4-...!: (1 GPs behind) idle=9874/1/0x4000000000000000 softirq=0/0 fqs=20 rcuc=21071 jiffies(starved)
rcu: (t=21035 jiffies g=938281 q=40787 ncpus=6)
rcu: rcu_preempt kthread starved for 20964 jiffies! g938281 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:14896 pid:14 tgid:14 ppid:2 flags:0x00004000
Call Trace:
<TASK>
__schedule+0x2eb/0xa80
schedule+0x1f/0x90
schedule_timeout+0x163/0x270
? __pfx_process_timeout+0x10/0x10
rcu_gp_fqs_loop+0x37c/0x5b0
? __pfx_rcu_gp_kthread+0x10/0x10
rcu_gp_kthread+0x17c/0x200
kthread+0xde/0x110
? __pfx_kthread+0x10/0x10
ret_from_fork+0x2b/0x40
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1b/0x30
</TASK>
The situation can't be solved with just unpinning the timer. The hrtimer
infrastructure and the nohz heuristics involved in finding the best
remote target for an unpinned timer would then also need to handle
enqueues from an offline CPU in the most horrendous way.
So fix this on the RCU side instead and defer the wake up to an online
CPU if it's too late for the local one.
Reported-by: Paul E. McKenney <paulmck@kernel.org>
Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.iitr10@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dinghao Liu [Tue, 28 Nov 2023 09:29:01 +0000 (17:29 +0800)]
net/mlx5e: fix a potential double-free in fs_any_create_groups
[ Upstream commit aef855df7e1bbd5aa4484851561211500b22707e ]
When kcalloc() for ft->g succeeds but kvzalloc() for 'in' fails,
fs_any_create_groups() will free ft->g. However, its caller
fs_any_create_table() will free ft->g again through calling
mlx5e_destroy_flow_table(), which will lead to a double-free.
Fix this by setting ft->g to NULL in fs_any_create_groups().
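Sketch of the pattern (illustrative, not the exact diff):

  in = kvzalloc(inlen, GFP_KERNEL);
  if (!in) {
      kfree(ft->g);
      ft->g = NULL;   /* so mlx5e_destroy_flow_table() won't free it again */
      return -ENOMEM;
  }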
Fixes: 0f575c20bf06 ("net/mlx5e: Introduce Flow Steering ANY API")
Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhipeng Lu [Wed, 17 Jan 2024 07:17:36 +0000 (15:17 +0800)]
net/mlx5e: fix a double-free in arfs_create_groups
[ Upstream commit 3c6d5189246f590e4e1f167991558bdb72a4738b ]
When the `in` buffer allocated by kvzalloc fails, arfs_create_groups will
free ft->g and return an error. However, arfs_create_table, the only
caller of arfs_create_groups, will act on this error and call
mlx5e_destroy_flow_table, in which ft->g will be freed again.
Fixes: 1cabe6b0965e ("net/mlx5e: Create aRFS flow tables")
Signed-off-by: Zhipeng Lu <alexious@zju.edu.cn>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Leon Romanovsky [Sun, 26 Nov 2023 09:08:10 +0000 (11:08 +0200)]
net/mlx5e: Ignore IPsec replay window values on sender side
[ Upstream commit 315a597f9bcfe7fe9980985031413457bee95510 ]
The XFRM stack doesn't prevent users from configuring a replay window
on the TX side, and strongswan sets replay_window to 1. This causes
failures in the validation logic when trying to offload the SA.
The replay window is not relevant on the TX side and should be ignored.
Fixes: cded6d80129b ("net/mlx5e: Store replay window in XFRM attributes")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Leon Romanovsky [Tue, 12 Dec 2023 11:52:55 +0000 (13:52 +0200)]
net/mlx5e: Allow software parsing when IPsec crypto is enabled
[ Upstream commit 20f5468a7988dedd94a57ba8acd65ebda6a59723 ]
All ConnectX devices have the software parsing capability enabled, but it
is more correct to set allow_swp only if the capability exists, which for
IPsec means that crypto offload is supported.
Fixes: 2451da081a34 ("net/mlx5: Unify device IPsec capabilities check")
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rahul Rameshbabu [Tue, 28 Nov 2023 22:01:54 +0000 (14:01 -0800)]
net/mlx5: Use mlx5 device constant for selecting CQ period mode for ASO
[ Upstream commit 20cbf8cbb827094197f3b17db60d71449415db1e ]
mlx5 devices have specific constants for choosing the CQ period mode. These
constants do not have to match the constants used by the kernel software
API for DIM period mode selection.
Fixes: cdd04f4d4d71 ("net/mlx5: Add support to create SQ and CQ for ASO")
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yevgeny Kliteynik [Sun, 17 Dec 2023 11:20:36 +0000 (13:20 +0200)]
net/mlx5: DR, Can't go to uplink vport on RX rule
[ Upstream commit 5b2a2523eeea5f03d39a9d1ff1bad2e9f8eb98d2 ]
Go-To-Vport action on RX is not allowed when the vport is the uplink.
In such a case, the packet should be dropped.
Fixes: 9db810ed2d37 ("net/mlx5: DR, Expose steering action functionality")
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yevgeny Kliteynik [Sun, 17 Dec 2023 09:24:08 +0000 (11:24 +0200)]
net/mlx5: DR, Use the right GVMI number for drop action
[ Upstream commit 5665954293f13642f9c052ead83c1e9d8cff186f ]
When FW provides ICM addresses for drop RX/TX, the provided capability
is 64 bits that contain its GVMI as well as the ICM address itself.
In case of TX DROP this GVMI is different from the GVMI that the
domain is operating on.
This patch fixes the action to use these GVMI IDs, as provided by FW.
Fixes: 9db810ed2d37 ("net/mlx5: DR, Expose steering action functionality")
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Moshe Shemesh [Sat, 30 Dec 2023 20:40:37 +0000 (22:40 +0200)]
net/mlx5: Bridge, fix multicast packets sent to uplink
[ Upstream commit ec7cc38ef9f83553102e84c82536971a81630739 ]
To enable multicast packets which are offloaded in bridge multicast
offload mode to also be sent to the uplink, the FTE bit uplink_hairpin_en
should be set. Add this bit to the FTE for the bridge multicast offload
rules.
Fixes: 18c2916cee12 ("net/mlx5: Bridge, snoop igmp/mld packets")
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Erez Shitrit [Mon, 28 Aug 2023 11:20:00 +0000 (14:20 +0300)]
net/mlx5: Bridge, Enable mcast in smfs steering mode
[ Upstream commit 653b7eb9d74426397c95061fd57da3063625af65 ]
In order to have mcast offloads, the driver needs the following:
it should know if that mcast comes from the wire port; in addition, the
flow should not be marked as coming from any specific source. That gives
the driver the flexibility not to depend on the way the iterator is
implemented in the FW.
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Stable-dep-of: ec7cc38ef9f8 ("net/mlx5: Bridge, fix multicast packets sent to uplink")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yishai Hadas [Sun, 31 Dec 2023 13:19:50 +0000 (15:19 +0200)]
net/mlx5: Fix a WARN upon a callback command failure
[ Upstream commit cc8091587779cfaddb6b29c9e9edb9079a282cad ]
The WARN below [1] is reported once a callback command fails.
As a callback runs in interrupt context, it needs to use the IRQ
save/restore variant.
[1]
DEBUG_LOCKS_WARN_ON(lockdep_hardirq_context())
WARNING: CPU: 15 PID: 0 at kernel/locking/lockdep.c:4353 lockdep_hardirqs_on_prepare+0x11b/0x180
Modules linked in: vhost_net vhost tap mlx5_vfio_pci vfio_pci vfio_pci_core vfio_iommu_type1 vfio mlx5_vdpa vringh vhost_iotlb vdpa nfnetlink_cttimeout openvswitch nsh ip6table_mangle ip6table_nat ip6table_filter ip6_tables iptable_mangle xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi rdma_cm iw_cm ib_umad ib_ipoib ib_cm mlx5_ib ib_uverbs ib_core fuse mlx5_core
CPU: 15 PID: 0 Comm: swapper/15 Tainted: G W 6.7.0-rc4+ #1587
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:lockdep_hardirqs_on_prepare+0x11b/0x180
Code: 00 5b c3 c3 e8 e6 0d 58 00 85 c0 74 d6 8b 15 f0 c3 76 01 85 d2 75 cc 48 c7 c6 04 a5 3b 82 48 c7 c7 f1 e9 39 82 e8 95 12 f9 ff <0f> 0b 5b c3 e8 bc 0d 58 00 85 c0 74 ac 8b 3d c6 c3 76 01 85 ff 75
RSP: 0018:ffffc900003ecd18 EFLAGS: 00010086
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000027
RDX: 0000000000000000 RSI: ffff88885fbdb880 RDI: ffff88885fbdb888
RBP: 00000000ffffff87 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 284e4f5f4e524157 R12: 00000000002c9aa1
R13: ffff88810aace980 R14: ffff88810aace9b8 R15: 0000000000000003
FS:  0000000000000000(0000) GS:ffff88885fbc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f731436f4c8 CR3: 000000010aae6001 CR4: 0000000000372eb0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
? __warn+0x81/0x170
? lockdep_hardirqs_on_prepare+0x11b/0x180
? report_bug+0xf8/0x1c0
? handle_bug+0x3f/0x70
? exc_invalid_op+0x13/0x60
? asm_exc_invalid_op+0x16/0x20
? lockdep_hardirqs_on_prepare+0x11b/0x180
? lockdep_hardirqs_on_prepare+0x11b/0x180
trace_hardirqs_on+0x4a/0xa0
raw_spin_unlock_irq+0x24/0x30
cmd_status_err+0xc0/0x1a0 [mlx5_core]
cmd_status_err+0x1a0/0x1a0 [mlx5_core]
mlx5_cmd_exec_cb_handler+0x24/0x40 [mlx5_core]
mlx5_cmd_comp_handler+0x129/0x4b0 [mlx5_core]
cmd_comp_notifier+0x1a/0x20 [mlx5_core]
notifier_call_chain+0x3e/0xe0
atomic_notifier_call_chain+0x5f/0x130
mlx5_eq_async_int+0xe7/0x200 [mlx5_core]
notifier_call_chain+0x3e/0xe0
atomic_notifier_call_chain+0x5f/0x130
irq_int_handler+0x11/0x20 [mlx5_core]
__handle_irq_event_percpu+0x99/0x220
? tick_irq_enter+0x5d/0x80
handle_irq_event_percpu+0xf/0x40
handle_irq_event+0x3a/0x60
handle_edge_irq+0xa2/0x1c0
__common_interrupt+0x55/0x140
common_interrupt+0x7d/0xa0
</IRQ>
<TASK>
asm_common_interrupt+0x22/0x40
RIP: 0010:default_idle+0x13/0x20
Code: c0 08 00 00 00 4d 29 c8 4c 01 c7 4c 29 c2 e9 72 ff ff ff cc cc cc cc 8b 05 ea 08 25 01 85 c0 7e 07 0f 00 2d 7f b0 26 00 fb f4 <fa> c3 90 66 2e 0f 1f 84 00 00 00 00 00 65 48 8b 04 25 80 d0 02 00
RSP: 0018:ffffc9000010fec8 EFLAGS: 00000242
RAX: 0000000000000001 RBX: 000000000000000f RCX: 4000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff811c410c
RBP: ffffffff829478c0 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
? do_idle+0x1ec/0x210
default_idle_call+0x6c/0x90
do_idle+0x1ec/0x210
cpu_startup_entry+0x26/0x30
start_secondary+0x11b/0x150
secondary_startup_64_no_verify+0x165/0x16b
</TASK>
irq event stamp: 833284
hardirqs last enabled at (833283): [<ffffffff811c410c>] do_idle+0x1ec/0x210
hardirqs last disabled at (833284): [<ffffffff81daf9ef>] common_interrupt+0xf/0xa0
softirqs last enabled at (833224): [<ffffffff81dc199f>] __do_softirq+0x2bf/0x40e
softirqs last disabled at (833177): [<ffffffff81178ddf>] irq_exit_rcu+0x7f/0xa0
Fixes: 34f46ae0d4b3 ("net/mlx5: Add command failures data to debugfs")
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Vlad Buslov [Fri, 10 Nov 2023 10:10:22 +0000 (11:10 +0100)]
net/mlx5e: Fix peer flow lists handling
[ Upstream commit d76fdd31f953ac5046555171620f2562715e9b71 ]
The cited change refactored mlx5e_tc_del_fdb_peer_flow() to only clear the
DUP flag when the list of peer flows has become empty. However, if any
concurrent user holds a reference to a peer flow (for example, the
neighbor update workqueue task is updating the peer flow's parent encap
entry concurrently), then the flow will not be removed from the peer list
and, consequently, the DUP flag will remain set. Since
mlx5e_tc_del_fdb_peers_flow() calls mlx5e_tc_del_fdb_peer_flow() for every
possible peer index, the algorithm will try to remove the flow from
eswitch instances that it has never peered with, causing either a NULL
pointer dereference (when trying to remove the flow from the peer list
head of a peer_index that was never initialized) or a warning if the
list debug config is enabled [0].
Fix the issue by always removing the peer flow from the list even when not
releasing the last reference to it.
[0]:
[ 3102.985806] ------------[ cut here ]------------
[ 3102.985806] ------------[ cut here ]------------
[ 3102.986223] list_del corruption, ffff888139110698->next is NULL
[ 3102.986757] WARNING: CPU: 2 PID: 22109 at lib/list_debug.c:53 __list_del_entry_valid_or_report+0x4f/0xc0
[ 3102.987561] Modules linked in: act_ct nf_flow_table bonding act_tunnel_key act_mirred act_skbedit vxlan cls_matchall nfnetlink_cttimeout act_gact cls_flower sch_ingress mlx5_vdpa vringh vhost_iotlb vdpa openvswitch nsh xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype xt_conntrack nf_nat br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm mlx5_ib ib_uverbs ib_core mlx5_core [last unloaded: bonding]
[ 3102.991113] CPU: 2 PID: 22109 Comm: revalidator28 Not tainted 6.6.0-rc6+ #3
[ 3102.991695] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ 3102.992605] RIP: 0010:__list_del_entry_valid_or_report+0x4f/0xc0
[ 3102.993122] Code: 39 c2 74 56 48 8b 32 48 39 fe 75 62 48 8b 51 08 48 39 f2 75 73 b8 01 00 00 00 c3 48 89 fe 48 c7 c7 48 fd 0a 82 e8 41 0b ad ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 fd 0a 82 e8 2d 0b ad ff 0f 0b
[ 3102.994615] RSP: 0018:ffff8881383e7710 EFLAGS: 00010286
[ 3102.995078] RAX: 0000000000000000 RBX: 0000000000000002 RCX: 0000000000000000
[ 3102.995670] RDX: 0000000000000001 RSI: ffff88885f89b640 RDI: ffff88885f89b640
[ 3102.997188] DEL flow 00000000be367878 on port 0
[ 3102.998594] RBP: dead000000000122 R08: 0000000000000000 R09: c0000000ffffdfff
[ 3102.999604] R10: 0000000000000008 R11: ffff8881383e7598 R12: dead000000000100
[ 3103.000198] R13: 0000000000000002 R14: ffff888139110000 R15: ffff888101901240
[ 3103.000790] FS:  00007f424cde4700(0000) GS:ffff88885f880000(0000) knlGS:0000000000000000
[ 3103.001486] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3103.001986] CR2: 00007fd42e8dcb70 CR3: 000000011e68a003 CR4: 0000000000370ea0
[ 3103.002596] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3103.003190] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 3103.003787] Call Trace:
[ 3103.004055] <TASK>
[ 3103.004297] ? __warn+0x7d/0x130
[ 3103.004623] ? __list_del_entry_valid_or_report+0x4f/0xc0
[ 3103.005094] ? report_bug+0xf1/0x1c0
[ 3103.005439] ? console_unlock+0x4a/0xd0
[ 3103.005806] ? handle_bug+0x3f/0x70
[ 3103.006149] ? exc_invalid_op+0x13/0x60
[ 3103.006531] ? asm_exc_invalid_op+0x16/0x20
[ 3103.007430] ? __list_del_entry_valid_or_report+0x4f/0xc0
[ 3103.007910] mlx5e_tc_del_fdb_peers_flow+0xcf/0x240 [mlx5_core]
[ 3103.008463] mlx5e_tc_del_flow+0x46/0x270 [mlx5_core]
[ 3103.008944] mlx5e_flow_put+0x26/0x50 [mlx5_core]
[ 3103.009401] mlx5e_delete_flower+0x25f/0x380 [mlx5_core]
[ 3103.009901] tc_setup_cb_destroy+0xab/0x180
[ 3103.010292] fl_hw_destroy_filter+0x99/0xc0 [cls_flower]
[ 3103.010779] __fl_delete+0x2d4/0x2f0 [cls_flower]
[ 3103.011207] fl_delete+0x36/0x80 [cls_flower]
[ 3103.011614] tc_del_tfilter+0x56f/0x750
[ 3103.011982] rtnetlink_rcv_msg+0xff/0x3a0
[ 3103.012362] ? netlink_ack+0x1c7/0x4e0
[ 3103.012719] ? rtnl_calcit.isra.44+0x130/0x130
[ 3103.013134] netlink_rcv_skb+0x54/0x100
[ 3103.013533] netlink_unicast+0x1ca/0x2b0
[ 3103.013902] netlink_sendmsg+0x361/0x4d0
[ 3103.014269] __sock_sendmsg+0x38/0x60
[ 3103.014643] ____sys_sendmsg+0x1f2/0x200
[ 3103.015018] ? copy_msghdr_from_user+0x72/0xa0
[ 3103.015265] ___sys_sendmsg+0x87/0xd0
[ 3103.016608] ? copy_msghdr_from_user+0x72/0xa0
[ 3103.017014] ? ___sys_recvmsg+0x9b/0xd0
[ 3103.017381] ? ttwu_do_activate.isra.137+0x58/0x180
[ 3103.017821] ? wake_up_q+0x49/0x90
[ 3103.018157] ? futex_wake+0x137/0x160
[ 3103.018521] ? __sys_sendmsg+0x51/0x90
[ 3103.018882] __sys_sendmsg+0x51/0x90
[ 3103.019230] ? exit_to_user_mode_prepare+0x56/0x130
[ 3103.019670] do_syscall_64+0x3c/0x80
[ 3103.020017] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 3103.020469] RIP: 0033:0x7f4254811ef4
[ 3103.020816] Code: 89 f3 48 83 ec 10 48 89 7c 24 08 48 89 14 24 e8 42 eb ff ff 48 8b 14 24 41 89 c0 48 89 de 48 8b 7c 24 08 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 30 44 89 c7 48 89 04 24 e8 78 eb ff ff 48 8b
[ 3103.022290] RSP: 002b:00007f424cdd9480 EFLAGS: 00000293 ORIG_RAX: 000000000000002e
[ 3103.022970] RAX: ffffffffffffffda RBX: 00007f424cdd9510 RCX: 00007f4254811ef4
[ 3103.023564] RDX: 0000000000000000 RSI: 00007f424cdd9510 RDI: 0000000000000012
[ 3103.024158] RBP: 00007f424cdda238 R08: 0000000000000000 R09: 00007f41d801a4b0
[ 3103.024748] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
[ 3103.025341] R13: 00007f424cdd9510 R14: 00007f424cdda240 R15: 00007f424cdd99a0
[ 3103.025931] </TASK>
[ 3103.026182] ---[ end trace 0000000000000000 ]---
[ 3103.027033] ------------[ cut here ]------------
Fixes: 9be6c21fdcf8 ("net/mlx5e: Handle offloads flows per peer")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rahul Rameshbabu [Thu, 23 Nov 2023 02:32:11 +0000 (18:32 -0800)]
net/mlx5e: Fix operation precedence bug in port timestamping napi_poll context
[ Upstream commit 3876638b2c7ebb2c9d181de1191db0de8cac143a ]
Indirection (*) is of lower precedence than postfix increment (++). The
logic in the napi_poll context would cause an out-of-bounds read by first
incrementing the pointer address and then dereferencing the value.
Rather, the intended logic was to dereference first and then increment the
underlying value.
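To illustrate the precedence pitfall in plain C (the variable is
hypothetical, not the driver's actual field):

  u16 *cnt = &ring->ts_counter;   /* hypothetical counter pointer */
  *cnt++;      /* parses as *(cnt++): advances the pointer, value untouched */
  (*cnt)++;    /* intended: increments the value that cnt points to */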
Fixes: 92214be5979c ("net/mlx5e: Update doorbell for port timestamping CQ before the software counter")
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Ido Schimmel [Mon, 22 Jan 2024 13:28:43 +0000 (15:28 +0200)]
net/sched: flower: Fix chain template offload
[ Upstream commit 32f2a0afa95fae0d1ceec2ff06e0e816939964b8 ]
When a qdisc is deleted from a net device the stack instructs the
underlying driver to remove its flow offload callback from the
associated filter block using the 'FLOW_BLOCK_UNBIND' command. The stack
then continues to replay the removal of the filters in the block for
this driver by iterating over the chains in the block and invoking the
'reoffload' operation of the classifier being used. In turn, the
classifier in its 'reoffload' operation prepares and emits a
'FLOW_CLS_DESTROY' command for each filter.
However, the stack does not do the same for chain templates and the
underlying driver never receives a 'FLOW_CLS_TMPLT_DESTROY' command when
a qdisc is deleted. This results in a memory leak [1] which can be
reproduced using [2].
Fix by introducing a 'tmplt_reoffload' operation and have the stack
invoke it with the appropriate arguments as part of the replay.
Implement the operation in the sole classifier that supports chain
templates (flower) by emitting the 'FLOW_CLS_TMPLT_{CREATE,DESTROY}'
command based on whether a flow offload callback is being bound to a
filter block or being unbound from one.
As far as I can tell, the issue happens since cited commit which
reordered tcf_block_offload_unbind() before tcf_block_flush_all_chains()
in __tcf_block_put(). The order cannot be reversed as the filter block
is expected to be freed after flushing all the chains.
[1]
unreferenced object 0xffff888107e28800 (size 2048):
  comm "tc", pid 1079, jiffies 4294958525 (age 3074.287s)
  hex dump (first 32 bytes):
    b1 a6 7c 11 81 88 ff ff e0 5b b3 10 81 88 ff ff  ..|......[......
    01 00 00 00 00 00 00 00 e0 aa b0 84 ff ff ff ff  ................
  backtrace:
    [<ffffffff81c06a68>] __kmem_cache_alloc_node+0x1e8/0x320
    [<ffffffff81ab374e>] __kmalloc+0x4e/0x90
    [<ffffffff832aec6d>] mlxsw_sp_acl_ruleset_get+0x34d/0x7a0
    [<ffffffff832bc195>] mlxsw_sp_flower_tmplt_create+0x145/0x180
    [<ffffffff832b2e1a>] mlxsw_sp_flow_block_cb+0x1ea/0x280
    [<ffffffff83a10613>] tc_setup_cb_call+0x183/0x340
    [<ffffffff83a9f85a>] fl_tmplt_create+0x3da/0x4c0
    [<ffffffff83a22435>] tc_ctl_chain+0xa15/0x1170
    [<ffffffff838a863c>] rtnetlink_rcv_msg+0x3cc/0xed0
    [<ffffffff83ac87f0>] netlink_rcv_skb+0x170/0x440
    [<ffffffff83ac6270>] netlink_unicast+0x540/0x820
    [<ffffffff83ac6e28>] netlink_sendmsg+0x8d8/0xda0
    [<ffffffff83793def>] ____sys_sendmsg+0x30f/0xa80
    [<ffffffff8379d29a>] ___sys_sendmsg+0x13a/0x1e0
    [<ffffffff8379d50c>] __sys_sendmsg+0x11c/0x1f0
    [<ffffffff843b9ce0>] do_syscall_64+0x40/0xe0
unreferenced object 0xffff88816d2c0400 (size 1024):
  comm "tc", pid 1079, jiffies 4294958525 (age 3074.287s)
  hex dump (first 32 bytes):
    40 00 00 00 00 00 00 00 57 f6 38 be 00 00 00 00  @.......W.8.....
    10 04 2c 6d 81 88 ff ff 10 04 2c 6d 81 88 ff ff  ..,m......,m....
  backtrace:
    [<ffffffff81c06a68>] __kmem_cache_alloc_node+0x1e8/0x320
    [<ffffffff81ab36c1>] __kmalloc_node+0x51/0x90
    [<ffffffff81a8ed96>] kvmalloc_node+0xa6/0x1f0
    [<ffffffff82827d03>] bucket_table_alloc.isra.0+0x83/0x460
    [<ffffffff82828d2b>] rhashtable_init+0x43b/0x7c0
    [<ffffffff832aed48>] mlxsw_sp_acl_ruleset_get+0x428/0x7a0
    [<ffffffff832bc195>] mlxsw_sp_flower_tmplt_create+0x145/0x180
    [<ffffffff832b2e1a>] mlxsw_sp_flow_block_cb+0x1ea/0x280
    [<ffffffff83a10613>] tc_setup_cb_call+0x183/0x340
    [<ffffffff83a9f85a>] fl_tmplt_create+0x3da/0x4c0
    [<ffffffff83a22435>] tc_ctl_chain+0xa15/0x1170
    [<ffffffff838a863c>] rtnetlink_rcv_msg+0x3cc/0xed0
    [<ffffffff83ac87f0>] netlink_rcv_skb+0x170/0x440
    [<ffffffff83ac6270>] netlink_unicast+0x540/0x820
    [<ffffffff83ac6e28>] netlink_sendmsg+0x8d8/0xda0
    [<ffffffff83793def>] ____sys_sendmsg+0x30f/0xa80
[2]
# tc qdisc add dev swp1 clsact
# tc chain add dev swp1 ingress proto ip chain 1 flower dst_ip 0.0.0.0/32
# tc qdisc del dev swp1 clsact
# devlink dev reload pci/0000:06:00.0
Fixes: bbf73830cd48 ("net: sched: traverse chains in block with tcf_get_next_chain()")
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jakub Kicinski [Mon, 22 Jan 2024 20:35:28 +0000 (12:35 -0800)]
selftests: fill in some missing configs for net
[ Upstream commit 04fe7c5029cbdbcdb28917f09a958d939a8f19f7 ]
We are missing a lot of config options from net selftests,
it seems:
tun/tap: CONFIG_TUN, CONFIG_MACVLAN, CONFIG_MACVTAP
fib_tests: CONFIG_NET_SCH_FQ_CODEL
l2tp: CONFIG_L2TP, CONFIG_L2TP_V3, CONFIG_L2TP_IP, CONFIG_L2TP_ETH
sctp-vrf: CONFIG_INET_DIAG
txtimestamp: CONFIG_NET_CLS_U32
vxlan_mdb: CONFIG_BRIDGE_VLAN_FILTERING
gre_gso: CONFIG_NET_IPGRE_DEMUX, CONFIG_IP_GRE, CONFIG_IPV6_GRE
srv6_end_dt*_l3vpn: CONFIG_IPV6_SEG6_LWTUNNEL
ip_local_port_range: CONFIG_MPTCP
fib_test: CONFIG_NET_CLS_BASIC
rtnetlink: CONFIG_MACSEC, CONFIG_NET_SCH_HTB, CONFIG_XFRM_INTERFACE
CONFIG_NET_IPGRE, CONFIG_BONDING
fib_nexthops: CONFIG_MPLS, CONFIG_MPLS_ROUTING
vxlan_mdb: CONFIG_NET_ACT_GACT
tls: CONFIG_TLS, CONFIG_CRYPTO_CHACHA20POLY1305
psample: CONFIG_PSAMPLE
fcnal: CONFIG_TCP_MD5SIG
Try to add them in a semi-alphabetical order.
Fixes: 62199e3f1658 ("selftests: net: Add VXLAN MDB test")
Fixes: c12e0d5f267d ("self-tests: introduce self-tests for RPS default mask")
Fixes: 122db5e3634b ("selftests/net: add MPTCP coverage for IP_LOCAL_PORT_RANGE")
Link: https://lore.kernel.org/r/20240122203528.672004-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhengchao Shao [Mon, 22 Jan 2024 10:20:01 +0000 (18:20 +0800)]
ipv6: init the accept_queue's spinlocks in inet6_create
[ Upstream commit 435e202d645c197dcfd39d7372eb2a56529b6640 ]
In commit 198bc90e0e73 ("tcp: make sure init the accept_queue's spinlocks
once"), the spinlocks of the accept_queue are initialized only when the
socket is created in the inet4 scenario. The locks are not initialized
when the socket is created in the inet6 scenario. The kernel reports the
following error:
INFO: trying to register non-static key.
The code is fine but needs lockdep annotation, or maybe
you didn't initialize this object before use?
turning off the locking correctness validator.
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
Call Trace:
<TASK>
dump_stack_lvl (lib/dump_stack.c:107)
register_lock_class (kernel/locking/lockdep.c:1289)
__lock_acquire (kernel/locking/lockdep.c:5015)
lock_acquire.part.0 (kernel/locking/lockdep.c:5756)
_raw_spin_lock_bh (kernel/locking/spinlock.c:178)
inet_csk_listen_stop (net/ipv4/inet_connection_sock.c:1386)
tcp_disconnect (net/ipv4/tcp.c:2981)
inet_shutdown (net/ipv4/af_inet.c:935)
__sys_shutdown (./include/linux/file.h:32 net/socket.c:2438)
__x64_sys_shutdown (net/socket.c:2445)
do_syscall_64 (arch/x86/entry/common.c:52)
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:129)
RIP: 0033:0x7f52ecd05a3d
Code: 5b 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 48 89 f8 48 89 f7
48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
ff 73 01 c3 48 8b 0d ab a3 0e 00 f7 d8 64 89 01 48
RSP: 002b:00007f52ecf5dde8 EFLAGS: 00000293 ORIG_RAX: 0000000000000030
RAX: ffffffffffffffda RBX: 00007f52ecf5e640 RCX: 00007f52ecd05a3d
RDX: 00007f52ecc8b188 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007f52ecf5de20 R08: 00007ffdae45c69f R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000293 R12: 00007f52ecf5e640
R13: 0000000000000000 R14: 00007f52ecc8b060 R15: 00007ffdae45c6e0
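A hedged sketch of the initialization the inet6 create path needs (the
exact helper used upstream may differ; the lock fields shown are those of
struct request_sock_queue):

  struct inet_connection_sock *icsk = inet_csk(sk);

  /* mirror what the inet4 path already does at socket creation time */
  spin_lock_init(&icsk->icsk_accept_queue.rskq_lock);
  spin_lock_init(&icsk->icsk_accept_queue.fastopenq.lock);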
Fixes: 198bc90e0e73 ("tcp: make sure init the accept_queue's spinlocks once")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240122102001.2851701-1-shaozhengchao@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhengchao Shao [Mon, 22 Jan 2024 01:18:07 +0000 (09:18 +0800)]
netlink: fix potential sleeping issue in mqueue_flush_file
[ Upstream commit 234ec0b6034b16869d45128b8cd2dc6ffe596f04 ]
I analyzed the potential sleeping issue in the following interleaving of
two threads:
Thread A                                Thread B
...                                     netlink_create                 //ref = 1
do_mq_notify                            ...
  sock = netlink_getsockbyfilp          ...                            //ref = 2
  info->notify_sock = sock;             ...
  ...                                   netlink_sendmsg
  ...                                     skb = netlink_alloc_large_skb  //skb->head is vmalloced
  ...                                     netlink_unicast
  ...                                       sk = netlink_getsockbyportid //ref = 3
  ...                                       netlink_sendskb
  ...                                         __netlink_sendskb
  ...                                           skb_queue_tail         //put skb to sk_receive_queue
  ...                                         sock_put                 //ref = 2
  ...                                   ...
  ...                                   netlink_release
  ...                                     deferred_put_nlk_sk          //ref = 1
mqueue_flush_file
  spin_lock
  remove_notification
    netlink_sendskb
      sock_put                          //ref = 0
        sk_free
          ...
          __sk_destruct
            netlink_sock_destruct
              skb_queue_purge           //get skb from sk_receive_queue
                ...
                __skb_queue_purge_reason
                  kfree_skb_reason
                    __kfree_skb
                      ...
                      skb_release_all
                        skb_release_head_state
                          netlink_skb_destructor
                            vfree(skb->head)  //sleeping while holding spinlock
In netlink_sendmsg, the memory pointed to by skb->head may be allocated by
vmalloc, and the skb is queued to sk_receive_queue without being freed.
When the mqueue is later flushed, the sleeping-while-holding-a-spinlock bug
above occurs. Use vfree_atomic instead of vfree in netlink_skb_destructor
to solve the issue.
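A minimal sketch of the destructor after the change (simplified; the real
function also handles cloned skbs):

    static void netlink_skb_destructor(struct sk_buff *skb)
    {
        if (is_vmalloc_addr(skb->head)) {
            /* may be called with a spinlock held (mqueue flush),
             * so the vmalloc'ed head must be freed without sleeping */
            vfree_atomic(skb->head);
            skb->head = NULL;
        }
        if (skb->sk != NULL)
            sock_rfree(skb);
    }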
Fixes: c05cdb1b864f ("netlink: allow large data transfers from user-space")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Link: https://lore.kernel.org/r/20240122011807.2110357-1-shaozhengchao@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kuniyuki Iwashima [Sat, 20 Jan 2024 03:16:42 +0000 (19:16 -0800)]
selftest: Don't reuse port for SO_INCOMING_CPU test.
[ Upstream commit
97de5a15edf2d22184f5ff588656030bbb7fa358 ]
Jakub reported that ASSERT_EQ(cpu, i) in so_incoming_cpu.c seems to
fire somewhat randomly.
# # RUN so_incoming_cpu.before_reuseport.test3 ...
# # so_incoming_cpu.c:191:test3:Expected cpu (32) == i (0)
# # test3: Test terminated by assertion
# # FAIL so_incoming_cpu.before_reuseport.test3
# not ok 3 so_incoming_cpu.before_reuseport.test3
When the test failed, not-yet-accepted CLOSE_WAIT sockets received
SYN with a "challenging" SEQ number, which was sent from an unexpected
CPU that did not create the receiver.
The test basically does:
1. for each cpu:
1-1. create a server
1-2. set SO_INCOMING_CPU
2. for each cpu:
2-1. set cpu affinity
2-2. create some clients
2-3. let clients connect() to the server on the same cpu
2-4. close() clients
3. for each server:
3-1. accept() all child sockets
3-2. check if all children have the same SO_INCOMING_CPU with the server
The root cause was the close() in 2-4. and net.ipv4.tcp_tw_reuse.
In a loop of 2., close() changed the client state to FIN_WAIT_2, and
the peer transitioned to CLOSE_WAIT.
In another loop of 2., connect() happened to select the same port of
the FIN_WAIT_2 socket, and it was reused as the default value of
net.ipv4.tcp_tw_reuse is 2.
As a result, the new client sent SYN to the CLOSE_WAIT socket from
a different CPU, and the receiver's sk_incoming_cpu was overwritten
with unexpected CPU ID.
Also, the SYN had a different SEQ number, so the CLOSE_WAIT socket
responded with Challenge ACK. The new client properly returned RST
and effectively killed the CLOSE_WAIT socket.
This way, all clients were created successfully, but the error was
detected later by 3-2., ASSERT_EQ(cpu, i).
To avoid the failure, let's make sure that (i) the number of clients
is less than the number of available ports and (ii) such reuse never
happens.
Fixes: 6df96146b202 ("selftest: Add test for SO_INCOMING_CPU.")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Tested-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/r/20240120031642.67014-1-kuniyu@amazon.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Salvatore Dipietro [Fri, 19 Jan 2024 19:01:33 +0000 (11:01 -0800)]
tcp: Add memory barrier to tcp_push()
[ Upstream commit
7267e8dcad6b2f9fce05a6a06335d7040acbc2b6 ]
On CPUs with weak memory models, reads and updates performed by tcp_push
to the sk variables can get reordered leaving the socket throttled when
it should not. The tasklet running tcp_wfree() may also not observe the
memory updates in time and will skip flushing any packets throttled by
tcp_push(), delaying the sending. This can pathologically cause 40ms
extra latency due to bad interactions with delayed acks.
Adding a memory barrier in tcp_push removes the bug, similarly to the
previous commit bf06200e732d ("tcp: tsq: fix nonagle handling").
smp_mb__after_atomic() is used to avoid unnecessary overhead on x86,
which is not affected.
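A hedged sketch of the autocork path with the added barrier (surrounding
code abbreviated, names as in the upstream TSQ machinery):

    if (tcp_should_autocork(sk, skb, size_goal)) {
        /* avoid atomic op if TSQ_THROTTLED bit is already set */
        if (!test_bit(TSQ_THROTTLED, &sk->sk_tsq_flags)) {
            NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAUTOCORKING);
            set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
            smp_mb__after_atomic();
        }
        /* it is possible TX completion already happened
         * before we set TSQ_THROTTLED */
        if (refcount_read(&sk->sk_wmem_alloc) > skb->truesize)
            return;
    }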
Patch has been tested using an AWS c7g.2xlarge instance with Ubuntu
22.04 and Apache Tomcat 9.0.83 running the basic servlet below:
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloWorldServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=utf-8");
        OutputStreamWriter osw = new OutputStreamWriter(response.getOutputStream(),"UTF-8");
        String s = "a".repeat(3096);
        osw.write(s,0,s.length());
        osw.flush();
    }
}
Load was applied using wrk2 (https://github.com/kinvolk/wrk2) from an AWS
c6i.8xlarge instance. Before the patch an additional 40ms latency from P99.99+
values is observed while, with the patch, the extra latency disappears.
No patch and tcp_autocorking=1
./wrk -t32 -c128 -d40s --latency -R10000 http://172.31.60.173:8080/hello/hello
...
50.000% 0.91ms
75.000% 1.13ms
90.000% 1.46ms
99.000% 1.74ms
99.900% 1.89ms
99.990% 41.95ms <<< 40+ ms extra latency
99.999% 48.32ms
100.000% 48.96ms
With patch and tcp_autocorking=1
./wrk -t32 -c128 -d40s --latency -R10000 http://172.31.60.173:8080/hello/hello
...
50.000% 0.90ms
75.000% 1.13ms
90.000% 1.45ms
99.000% 1.72ms
99.900% 1.83ms
99.990% 2.11ms <<< no 40+ ms extra latency
99.999% 2.53ms
100.000% 2.62ms
The patch has also been tested on x86 (m7i.2xlarge instance), which is not
affected by this issue, and it does not introduce any additional delay.
Fixes: 7aa5470c2c09 ("tcp: tsq: move tsq_flags close to sk_wmem_alloc")
Signed-off-by: Salvatore Dipietro <dipiets@amazon.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240119190133.43698-1-dipiets@amazon.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
David Howells [Mon, 8 Jan 2024 17:22:36 +0000 (17:22 +0000)]
afs: Hide silly-rename files from userspace
[ Upstream commit
57e9d49c54528c49b8bffe6d99d782ea051ea534 ]
There appears to be a race between silly-rename files being created/removed
and various userspace tools iterating over the contents of a directory,
leading to such errors as:
find: './kernel/.tmp_cpio_dir/include/dt-bindings/reset/.__afs2080': No such file or directory
tar: ./include/linux/greybus/.__afs3C95: File removed before we read it
when building a kernel.
Fix afs_readdir() so that it doesn't return .__afsXXXX silly-rename files
to userspace. This doesn't stop them being looked up directly by name as
we need to be able to look them up from within the kernel as part of the
silly-rename algorithm.
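Conceptually the filter is a name check in the directory iteration,
something like this hedged sketch (the real check lives in the AFS dir
iteration code):

    /* hide silly-rename entries from readdir; explicit lookups by
     * name still succeed, which the silly-rename algorithm relies on */
    if (nlen > 6 && memcmp(name, ".__afs", 6) == 0)
        continue;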
Fixes: 79ddbfa500b3 ("afs: Implement sillyrename for unlink and rename")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Petr Pavlu [Mon, 22 Jan 2024 15:09:28 +0000 (16:09 +0100)]
tracing: Ensure visibility when inserting an element into tracing_map
[ Upstream commit
2b44760609e9eaafc9d234a6883d042fc21132a7 ]
Running the following two commands in parallel on a multi-processor
AArch64 machine can sporadically produce an unexpected warning about
duplicate histogram entries:
$ while true; do
echo hist:key=id.syscall:val=hitcount > \
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
sleep 0.001
done
$ stress-ng --sysbadaddr $(nproc)
The warning looks as follows:
[ 2911.172474] ------------[ cut here ]------------
[ 2911.173111] Duplicates detected: 1
[ 2911.173574] WARNING: CPU: 2 PID: 12247 at kernel/trace/tracing_map.c:983 tracing_map_sort_entries+0x3e0/0x408
[ 2911.174702] Modules linked in: iscsi_ibft(E) iscsi_boot_sysfs(E) rfkill(E) af_packet(E) nls_iso8859_1(E) nls_cp437(E) vfat(E) fat(E) ena(E) tiny_power_button(E) qemu_fw_cfg(E) button(E) fuse(E) efi_pstore(E) ip_tables(E) x_tables(E) xfs(E) libcrc32c(E) aes_ce_blk(E) aes_ce_cipher(E) crct10dif_ce(E) polyval_ce(E) polyval_generic(E) ghash_ce(E) gf128mul(E) sm4_ce_gcm(E) sm4_ce_ccm(E) sm4_ce(E) sm4_ce_cipher(E) sm4(E) sm3_ce(E) sm3(E) sha3_ce(E) sha512_ce(E) sha512_arm64(E) sha2_ce(E) sha256_arm64(E) nvme(E) sha1_ce(E) nvme_core(E) nvme_auth(E) t10_pi(E) sg(E) scsi_mod(E) scsi_common(E) efivarfs(E)
[ 2911.174738] Unloaded tainted modules: cppc_cpufreq(E):1
[ 2911.180985] CPU: 2 PID: 12247 Comm: cat Kdump: loaded Tainted: G E 6.7.0-default #2 1b58bbb22c97e4399dc09f92d309344f69c44a01
[ 2911.182398] Hardware name: Amazon EC2 c7g.8xlarge/, BIOS 1.0 11/1/2018
[ 2911.183208] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[ 2911.184038] pc : tracing_map_sort_entries+0x3e0/0x408
[ 2911.184667] lr : tracing_map_sort_entries+0x3e0/0x408
[ 2911.185310] sp : ffff8000a1513900
[ 2911.185750] x29: ffff8000a1513900 x28: ffff0003f272fe80 x27: 0000000000000001
[ 2911.186600] x26: ffff0003f272fe80 x25: 0000000000000030 x24: 0000000000000008
[ 2911.187458] x23: ffff0003c5788000 x22: ffff0003c16710c8 x21: ffff80008017f180
[ 2911.188310] x20: ffff80008017f000 x19: ffff80008017f180 x18: ffffffffffffffff
[ 2911.189160] x17: 0000000000000000 x16: 0000000000000000 x15: ffff8000a15134b8
[ 2911.190015] x14: 0000000000000000 x13: 205d373432323154 x12: 5b5d313131333731
[ 2911.190844] x11: 00000000fffeffff x10: 00000000fffeffff x9 : ffffd1b78274a13c
[ 2911.191716] x8 : 000000000017ffe8 x7 : c0000000fffeffff x6 : 000000000057ffa8
[ 2911.192554] x5 : ffff0012f6c24ec0 x4 : 0000000000000000 x3 : ffff2e5b72b5d000
[ 2911.193404] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0003ff254480
[ 2911.194259] Call trace:
[ 2911.194626] tracing_map_sort_entries+0x3e0/0x408
[ 2911.195220] hist_show+0x124/0x800
[ 2911.195692] seq_read_iter+0x1d4/0x4e8
[ 2911.196193] seq_read+0xe8/0x138
[ 2911.196638] vfs_read+0xc8/0x300
[ 2911.197078] ksys_read+0x70/0x108
[ 2911.197534] __arm64_sys_read+0x24/0x38
[ 2911.198046] invoke_syscall+0x78/0x108
[ 2911.198553] el0_svc_common.constprop.0+0xd0/0xf8
[ 2911.199157] do_el0_svc+0x28/0x40
[ 2911.199613] el0_svc+0x40/0x178
[ 2911.200048] el0t_64_sync_handler+0x13c/0x158
[ 2911.200621] el0t_64_sync+0x1a8/0x1b0
[ 2911.201115] ---[ end trace 0000000000000000 ]---
The problem appears to be caused by CPU reordering of writes issued from
__tracing_map_insert().
The check for the presence of an element with a given key in this
function is:
val = READ_ONCE(entry->val);
if (val && keys_match(key, val->key, map->key_size)) ...
The write of a new entry is:
elt = get_free_elt(map);
memcpy(elt->key, key, map->key_size);
entry->val = elt;
The "memcpy(elt->key, key, map->key_size);" and "entry->val = elt;"
stores may become visible in the reversed order on another CPU. This
second CPU might then incorrectly determine that a new key doesn't match
an already present val->key and subsequently insert a new element,
resulting in a duplicate.
Fix the problem by adding a write barrier between
"memcpy(elt->key, key, map->key_size);" and "entry->val = elt;", and for
good measure, also use WRITE_ONCE(entry->val, elt) for publishing the
element. The sequence pairs with the mentioned "READ_ONCE(entry->val);"
and the "val->key" check which has an address dependency.
The barrier is placed on a path executed when adding an element for
a new key. Subsequent updates targeting the same key remain unaffected.
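In other words, the insertion path now publishes a new element roughly
like this (hedged sketch):

    elt = get_free_elt(map);
    memcpy(elt->key, key, map->key_size);
    /* make the key visible before the pointer that publishes it;
     * pairs with the READ_ONCE() and the val->key address dependency
     * on the lookup side */
    smp_wmb();
    WRITE_ONCE(entry->val, elt);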
From the user's perspective, the issue was introduced by commit
c193707dde77 ("tracing: Remove code which merges duplicates"), which
followed commit
cbf4100efb8f ("tracing: Add support to detect and avoid
duplicates"). The previous code operated differently; it inherently
expected potential races which result in duplicates but merged them
later when they occurred.
Link: https://lore.kernel.org/linux-trace-kernel/20240122150928.27725-1-petr.pavlu@suse.com
Fixes: c193707dde77 ("tracing: Remove code which merges duplicates")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dan Carpenter [Fri, 12 Jan 2024 06:59:41 +0000 (09:59 +0300)]
netfs, fscache: Prevent Oops in fscache_put_cache()
[ Upstream commit
3be0b3ed1d76c6703b9ee482b55f7e01c369cc68 ]
This function dereferences "cache" and then checks if it's
IS_ERR_OR_NULL(). Check first, then dereference.
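A minimal sketch of the reordering, assuming the upstream signature
(function body elided):

    void fscache_put_cache(struct fscache_cache *cache,
                           enum fscache_cache_trace where)
    {
        if (IS_ERR_OR_NULL(cache))
            return;
        /* only now is it safe to dereference cache */
    }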
Fixes: 9549332df4ed ("fscache: Implement cache registration")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/e84bc740-3502-4f16-982a-a40d5676615c@moroto.mountain/
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sharath Srinivasan [Sat, 20 Jan 2024 01:48:39 +0000 (17:48 -0800)]
net/rds: Fix UBSAN: array-index-out-of-bounds in rds_cmsg_recv
[ Upstream commit
13e788deb7348cc88df34bed736c3b3b9927ea52 ]
Syzcaller UBSAN crash occurs in rds_cmsg_recv(),
which reads inc->i_rx_lat_trace[j + 1] with index 4 (3 + 1),
but with array size of 4 (RDS_RX_MAX_TRACES).
Here 'j' is assigned from rs->rs_rx_trace[i] and in-turn from
trace.rx_trace_pos[i] in rds_recv_track_latency(),
with both arrays sized 3 (RDS_MSG_RX_DGRAM_TRACE_MAX). So fix the
off-by-one bounds check in rds_recv_track_latency() to prevent
a potential crash in rds_cmsg_recv().
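The corrected bound is essentially the following hedged sketch (the stored
value is later used as both index j and j + 1 into
i_rx_lat_trace[RDS_RX_MAX_TRACES]):

    /* reject positions that would make j + 1 run off the array */
    if (trace.rx_trace_pos[i] >= RDS_MSG_RX_DGRAM_TRACE_MAX)
        return -EFAULT;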
Found by syzcaller:
=================================================================
UBSAN: array-index-out-of-bounds in net/rds/recv.c:585:39
index 4 is out of range for type 'u64 [4]'
CPU: 1 PID: 8058 Comm: syz-executor228 Not tainted 6.6.0-gd2f51b3516da #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x136/0x150 lib/dump_stack.c:106
ubsan_epilogue lib/ubsan.c:217 [inline]
__ubsan_handle_out_of_bounds+0xd5/0x130 lib/ubsan.c:348
rds_cmsg_recv+0x60d/0x700 net/rds/recv.c:585
rds_recvmsg+0x3fb/0x1610 net/rds/recv.c:716
sock_recvmsg_nosec net/socket.c:1044 [inline]
sock_recvmsg+0xe2/0x160 net/socket.c:1066
__sys_recvfrom+0x1b6/0x2f0 net/socket.c:2246
__do_sys_recvfrom net/socket.c:2264 [inline]
__se_sys_recvfrom net/socket.c:2260 [inline]
__x64_sys_recvfrom+0xe0/0x1b0 net/socket.c:2260
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x40/0x110 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b
==================================================================
Fixes: 3289025aedc0 ("RDS: add receive message trace used by application")
Reported-by: Chenyuan Yang <chenyuan0y@gmail.com>
Closes: https://lore.kernel.org/linux-rdma/CALGdzuoVdq-wtQ4Az9iottBqC5cv9ZhcE5q8N7LfYFvkRsOVcw@mail.gmail.com/
Signed-off-by: Sharath Srinivasan <sharath.srinivasan@oracle.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Horatiu Vultur [Fri, 19 Jan 2024 10:47:50 +0000 (11:47 +0100)]
net: micrel: Fix PTP frame parsing for lan8814
[ Upstream commit
aaf632f7ab6dec57bc9329a438f94504fe8034b9 ]
The HW has the capability to check each frame: whether it is a PTP frame,
which domain it belongs to, which PTP frame type it is, and which IP
addresses are in the frame. If one of these checks fails, the frame is not
timestamped. Most of these checks were disabled, except the check of the
minorVersionPTP field inside the PTP header. This means that once a partner
sends a frame compliant with 802.1AS, which has minorVersionPTP set to 1,
the frame was not timestamped because the HW expected a value of 0 in
minorVersionPTP by default. This is exactly the same issue as on lan8841.
Fix this issue by removing this check so that userspace can decide how to handle it.
Fixes: ece19502834d ("net: phy: micrel: 1588 support for LAN8814 phy")
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Reviewed-by: Divya Koppera <divya.koppera@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yunjian Wang [Fri, 19 Jan 2024 10:22:56 +0000 (18:22 +0800)]
tun: add missing rx stats accounting in tun_xdp_act
[ Upstream commit
f1084c427f55d573fcd5688d9ba7b31b78019716 ]
The TUN can be used as a vhost-net backend, and it is necessary to
count the packets transmitted from TUN to vhost-net/virtio-net.
However, there are some places in the receive path that were not
taken into account when using XDP. It would be beneficial to also
include new accounting for successfully received bytes using
dev_sw_netstats_rx_add.
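The accounting itself is a one-line helper call in the XDP receive path
(hedged sketch; the length variable name is illustrative):

    /* count bytes of a frame successfully passed up via XDP */
    dev_sw_netstats_rx_add(tun->dev, datasize);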
Fixes: 761876c857cb ("tap: XDP support")
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yunjian Wang [Fri, 19 Jan 2024 10:22:35 +0000 (18:22 +0800)]
tun: fix missing dropped counter in tun_xdp_act
[ Upstream commit
5744ba05e7c4bff8fec133dd0f9e51ddffba92f5 ]
Commit 8ae1aff0b331 ("tuntap: split out XDP logic") added a
dropped counter for XDP_DROP, XDP_ABORTED, and invalid XDP actions.
Unfortunately, that commit missed the dropped counter when error
occurs during XDP_TX and XDP_REDIRECT actions. This patch fixes
this issue.
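A hedged sketch of the missing accounting in tun_xdp_act() (structure
abbreviated; the drop-counter helper shown is the generic per-device one,
used here for illustration):

    case XDP_TX:
        err = tun_xdp_tx(tun->dev, xdp);
        if (err < 0) {
            dev_core_stats_rx_dropped_inc(tun->dev);
            return err;
        }
        break;
    case XDP_REDIRECT:
        err = xdp_do_redirect(tun->dev, xdp, xdp_prog);
        if (err < 0) {
            dev_core_stats_rx_dropped_inc(tun->dev);
            return err;
        }
        break;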
Fixes: 8ae1aff0b331 ("tuntap: split out XDP logic")
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Jakub Kicinski [Fri, 19 Jan 2024 00:58:59 +0000 (16:58 -0800)]
net: fix removing a namespace with conflicting altnames
[ Upstream commit
d09486a04f5da0a812c26217213b89a3b1acf836 ]
Mark reports a BUG() when a net namespace is removed.
kernel BUG at net/core/dev.c:11520!
Physical interfaces moved outside of init_net get "refunded"
to init_net when that namespace disappears. The main interface
name may get overwritten in the process if it would have
conflicted. We need to also discard all conflicting altnames.
Recent fixes addressed ensuring that altnames get moved
with the main interface, which surfaced this problem.
Reported-by: Марк Коренберг <socketpair@gmail.com>
Link: https://lore.kernel.org/all/CAEmTpZFZ4Sv3KwqFOY2WKDHeZYdi0O7N5H1nTvcGp=SAEavtDg@mail.gmail.com/
Fixes: 7663d522099e ("net: check for altname conflicts when changing netdev's netns")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Eric Dumazet [Thu, 18 Jan 2024 20:17:49 +0000 (20:17 +0000)]
udp: fix busy polling
[ Upstream commit
a54d51fb2dfb846aedf3751af501e9688db447f5 ]
Generic sk_busy_loop_end() only looks at sk->sk_receive_queue
for presence of packets.
Problem is that for UDP sockets after blamed commit, some packets
could be present in another queue: udp_sk(sk)->reader_queue
In some cases, a busy poller could spin until timeout expiration,
even if some packets are available in udp_sk(sk)->reader_queue.
v3: - make sk_busy_loop_end() nicer (Willem)
v2: - add a READ_ONCE(sk->sk_family) in sk_is_inet() to avoid KCSAN splats.
- add a sk_is_inet() check in sk_is_udp() (Willem feedback)
- add a sk_is_inet() check in sk_is_tcp().
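The resulting loop-end condition looks roughly like this hedged sketch of
the generic helper (sk_is_udp() being the new predicate mentioned above):

    static bool sk_busy_loop_end(void *p, unsigned long start_time)
    {
        struct sock *sk = p;

        if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
            return true;
        if (sk_is_udp(sk) &&
            !skb_queue_empty_lockless(&udp_sk(sk)->reader_queue))
            return true;

        return sk_busy_loop_timeout(sk, start_time);
    }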
Fixes: 2276f58ac589 ("udp: use a separate rx queue for packet reception")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kuniyuki Iwashima [Fri, 19 Jan 2024 01:55:15 +0000 (17:55 -0800)]
llc: Drop support for ETH_P_TR_802_2.
[ Upstream commit
e3f9bed9bee261e3347131764e42aeedf1ffea61 ]
syzbot reported an uninit-value bug below. [0]
llc supports ETH_P_802_2 (0x0004) and used to support ETH_P_TR_802_2
(0x0011), and syzbot abused the latter to trigger the bug.
write$tun(r0, &(0x7f0000000040)={@val={0x0, 0x11}, @val, @mpls={[], @llc={@snap={0xaa, 0x1, ')', "90e5dd"}}}}, 0x16)
llc_conn_handler() initialises local variables {saddr,daddr}.mac
based on skb in llc_pdu_decode_sa()/llc_pdu_decode_da() and passes
them to __llc_lookup().
However, the initialisation is done only when skb->protocol is
htons(ETH_P_802_2), otherwise, __llc_lookup_established() and
__llc_lookup_listener() will read garbage.
The missing initialisation existed prior to commit
211ed865108e
("net: delete all instances of special processing for token ring").
It removed the part to kick out the token ring stuff but forgot to
close the door allowing ETH_P_TR_802_2 packets to sneak into llc_rcv().
Let's remove llc_tr_packet_type and complete the deprecation.
[0]:
BUG: KMSAN: uninit-value in __llc_lookup_established+0xe9d/0xf90
__llc_lookup_established+0xe9d/0xf90
__llc_lookup net/llc/llc_conn.c:611 [inline]
llc_conn_handler+0x4bd/0x1360 net/llc/llc_conn.c:791
llc_rcv+0xfbb/0x14a0 net/llc/llc_input.c:206
__netif_receive_skb_one_core net/core/dev.c:5527 [inline]
__netif_receive_skb+0x1a6/0x5a0 net/core/dev.c:5641
netif_receive_skb_internal net/core/dev.c:5727 [inline]
netif_receive_skb+0x58/0x660 net/core/dev.c:5786
tun_rx_batched+0x3ee/0x980 drivers/net/tun.c:1555
tun_get_user+0x53af/0x66d0 drivers/net/tun.c:2002
tun_chr_write_iter+0x3af/0x5d0 drivers/net/tun.c:2048
call_write_iter include/linux/fs.h:2020 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x8ef/0x1490 fs/read_write.c:584
ksys_write+0x20f/0x4c0 fs/read_write.c:637
__do_sys_write fs/read_write.c:649 [inline]
__se_sys_write fs/read_write.c:646 [inline]
__x64_sys_write+0x93/0xd0 fs/read_write.c:646
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b
Local variable daddr created at:
llc_conn_handler+0x53/0x1360 net/llc/llc_conn.c:783
llc_rcv+0xfbb/0x14a0 net/llc/llc_input.c:206
CPU: 1 PID: 5004 Comm: syz-executor994 Not tainted
6.6.0-syzkaller-14500-g1c41041124bd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
Fixes: 211ed865108e ("net: delete all instances of special processing for token ring")
Reported-by: syzbot+b5ad66046b913bc04c6f@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=b5ad66046b913bc04c6f
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240119015515.61898-1-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Eric Dumazet [Thu, 18 Jan 2024 18:36:25 +0000 (18:36 +0000)]
llc: make llc_ui_sendmsg() more robust against bonding changes
[ Upstream commit
dad555c816a50c6a6a8a86be1f9177673918c647 ]
syzbot was able to trick llc_ui_sendmsg(), allocating an skb with no
headroom, but subsequently trying to push 14 bytes of Ethernet header [1]
Like some others, llc_ui_sendmsg() releases the socket lock before
calling sock_alloc_send_skb().
Then it acquires it again, but does not redo all the sanity checks
that were performed.
This fix:
- Uses LL_RESERVED_SPACE() to reserve space.
- Check all conditions again after socket lock is held again.
- Do not account Ethernet header for mtu limitation.
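A hedged sketch of the allocation part (variable names illustrative):

    /* reserve the device's link-layer headroom instead of assuming none */
    size_t hdrlen = LL_RESERVED_SPACE(llc->dev);

    skb = sock_alloc_send_skb(sk, hdrlen + copied, noblock, &rc);
    if (skb)
        skb_reserve(skb, hdrlen);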
[1]
skbuff: skb_under_panic: text:ffff800088baa334 len:1514 put:14 head:ffff0000c9c37000 data:ffff0000c9c36ff2 tail:0x5dc end:0x6c0 dev:bond0
kernel BUG at net/core/skbuff.c:193 !
Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 6875 Comm: syz-executor.0 Not tainted 6.7.0-rc8-syzkaller-00101-g0802e17d9aca-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : skb_panic net/core/skbuff.c:189 [inline]
pc : skb_under_panic+0x13c/0x140 net/core/skbuff.c:203
lr : skb_panic net/core/skbuff.c:189 [inline]
lr : skb_under_panic+0x13c/0x140 net/core/skbuff.c:203
sp : ffff800096f97000
x29: ffff800096f97010 x28: ffff80008cc8d668 x27: dfff800000000000
x26: ffff0000cb970c90 x25: 00000000000005dc x24: ffff0000c9c36ff2
x23: ffff0000c9c37000 x22: 00000000000005ea x21: 00000000000006c0
x20: 000000000000000e x19: ffff800088baa334 x18: 1fffe000368261ce
x17: ffff80008e4ed000 x16: ffff80008a8310f8 x15: 0000000000000001
x14: 1ffff00012df2d58 x13: 0000000000000000 x12: 0000000000000000
x11: 0000000000000001 x10: 0000000000ff0100 x9 : e28a51f1087e8400
x8 : e28a51f1087e8400 x7 : ffff80008028f8d0 x6 : 0000000000000000
x5 : 0000000000000001 x4 : 0000000000000001 x3 : ffff800082b78714
x2 : 0000000000000001 x1 : 0000000100000000 x0 : 0000000000000089
Call trace:
skb_panic net/core/skbuff.c:189 [inline]
skb_under_panic+0x13c/0x140 net/core/skbuff.c:203
skb_push+0xf0/0x108 net/core/skbuff.c:2451
eth_header+0x44/0x1f8 net/ethernet/eth.c:83
dev_hard_header include/linux/netdevice.h:3188 [inline]
llc_mac_hdr_init+0x110/0x17c net/llc/llc_output.c:33
llc_sap_action_send_xid_c+0x170/0x344 net/llc/llc_s_ac.c:85
llc_exec_sap_trans_actions net/llc/llc_sap.c:153 [inline]
llc_sap_next_state net/llc/llc_sap.c:182 [inline]
llc_sap_state_process+0x1ec/0x774 net/llc/llc_sap.c:209
llc_build_and_send_xid_pkt+0x12c/0x1c0 net/llc/llc_sap.c:270
llc_ui_sendmsg+0x7bc/0xb1c net/llc/af_llc.c:997
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
sock_sendmsg+0x194/0x274 net/socket.c:767
splice_to_socket+0x7cc/0xd58 fs/splice.c:881
do_splice_from fs/splice.c:933 [inline]
direct_splice_actor+0xe4/0x1c0 fs/splice.c:1142
splice_direct_to_actor+0x2a0/0x7e4 fs/splice.c:1088
do_splice_direct+0x20c/0x348 fs/splice.c:1194
do_sendfile+0x4bc/0xc70 fs/read_write.c:1254
__do_sys_sendfile64 fs/read_write.c:1322 [inline]
__se_sys_sendfile64 fs/read_write.c:1308 [inline]
__arm64_sys_sendfile64+0x160/0x3b4 fs/read_write.c:1308
__invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
el0_svc+0x54/0x158 arch/arm64/kernel/entry-common.c:678
el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
Code: aa1803e6 aa1903e7 a90023f5 94792f6a (d4210000)
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-and-tested-by: syzbot+2a7024e9502df538e8ef@syzkaller.appspotmail.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://lore.kernel.org/r/20240118183625.4007013-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Lin Ma [Thu, 18 Jan 2024 13:03:06 +0000 (21:03 +0800)]
vlan: skip nested type that is not IFLA_VLAN_QOS_MAPPING
[ Upstream commit
6c21660fe221a15c789dee2bc2fd95516bc5aeaf ]
In the vlan_changelink function, a loop is used to parse the nested
attributes IFLA_VLAN_EGRESS_QOS and IFLA_VLAN_INGRESS_QOS in order to
obtain the struct ifla_vlan_qos_mapping. These two nested attributes are
checked in the vlan_validate_qos_map function, which calls
nla_validate_nested_deprecated with the vlan_map_policy.
However, this deprecated validator applies a LIBERAL strictness, allowing
the presence of an attribute with the type IFLA_VLAN_QOS_UNSPEC.
Consequently, the loop in vlan_changelink may parse an attribute of type
IFLA_VLAN_QOS_UNSPEC and believe it carries a payload of
struct ifla_vlan_qos_mapping, which is not necessarily true.
To address this issue and ensure compatibility, this patch introduces two
type checks that skip attributes whose type is not IFLA_VLAN_QOS_MAPPING.
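The added guard is essentially the following hedged sketch (showing one of
the two loops):

    nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) {
        if (nla_type(attr) != IFLA_VLAN_QOS_MAPPING)
            continue;	/* e.g. skip IFLA_VLAN_QOS_UNSPEC */
        m = nla_data(attr);
        vlan_dev_set_ingress_priority(dev, m->to, m->from);
    }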
Fixes: 07b5b17e157b ("[VLAN]: Use rtnl_link API")
Signed-off-by: Lin Ma <linma@zju.edu.cn>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240118130306.1644001-1-linma@zju.edu.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Michael Chan [Wed, 17 Jan 2024 23:45:14 +0000 (15:45 -0800)]
bnxt_en: Prevent kernel warning when running offline self test
[ Upstream commit
c20f482129a582455f02eb9a6dcb2a4215274599 ]
We call bnxt_half_open_nic() to setup the chip partially to run
loopback tests. The rings and buffers are initialized normally
so that we can transmit and receive packets in loopback mode.
That means page pool buffers are allocated for the aggregation ring
just like the normal case. NAPI is not needed because we are just
polling for the loopback packets.
When we're done with the loopback tests, we call bnxt_half_close_nic()
to clean up. When freeing the page pools, we hit a WARN_ON()
in page_pool_unlink_napi() because the NAPI state linked to the
page pool is uninitialized.
The simplest way to avoid this warning is just to initialize the
NAPIs during half open and delete the NAPIs during half close.
Trying to skip the page pool initialization or skip linking of
NAPI during half open will be more complicated.
This fix avoids this warning:
WARNING: CPU: 4 PID: 46967 at net/core/page_pool.c:946 page_pool_unlink_napi+0x1f/0x30
CPU: 4 PID: 46967 Comm: ethtool Tainted: G S W 6.7.0-rc5+ #22
Hardware name: Dell Inc. PowerEdge R750/06V45N, BIOS 1.3.8 08/31/2021
RIP: 0010:page_pool_unlink_napi+0x1f/0x30
Code: 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48 8b 47 18 48 85 c0 74 1b 48 8b 50 10 83 e2 01 74 08 8b 40 34 83 f8 ff 74 02 <0f> 0b 48 c7 47 18 00 00 00 00 c3 cc cc cc cc 66 90 90 90 90 90 90
RSP: 0018:ffa000003d0dfbe8 EFLAGS: 00010246
RAX: ff110003607ce640 RBX: ff110010baf5d000 RCX: 0000000000000008
RDX: 0000000000000000 RSI: ff110001e5e522c0 RDI: ff110010baf5d000
RBP: ff11000145539b40 R08: 0000000000000001 R09: ffffffffc063f641
R10: ff110001361eddb8 R11: 000000000040000f R12: 0000000000000001
R13: 000000000000001c R14: ff1100014553a080 R15: 0000000000003fc0
FS:  00007f9301c4f740(0000) GS:ff1100103fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f91344fa8f0 CR3: 00000003527cc005 CR4: 0000000000771ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
<TASK>
? __warn+0x81/0x140
? page_pool_unlink_napi+0x1f/0x30
? report_bug+0x102/0x200
? handle_bug+0x44/0x70
? exc_invalid_op+0x13/0x60
? asm_exc_invalid_op+0x16/0x20
? bnxt_free_ring.isra.123+0xb1/0xd0 [bnxt_en]
? page_pool_unlink_napi+0x1f/0x30
page_pool_destroy+0x3e/0x150
bnxt_free_mem+0x441/0x5e0 [bnxt_en]
bnxt_half_close_nic+0x2a/0x40 [bnxt_en]
bnxt_self_test+0x21d/0x450 [bnxt_en]
__dev_ethtool+0xeda/0x2e30
? native_queued_spin_lock_slowpath+0x17f/0x2b0
? __link_object+0xa1/0x160
? _raw_spin_unlock_irqrestore+0x23/0x40
? __create_object+0x5f/0x90
? __kmem_cache_alloc_node+0x317/0x3c0
? dev_ethtool+0x59/0x170
dev_ethtool+0xa7/0x170
dev_ioctl+0xc3/0x530
sock_do_ioctl+0xa8/0xf0
sock_ioctl+0x270/0x310
__x64_sys_ioctl+0x8c/0xc0
do_syscall_64+0x3e/0xf0
entry_SYSCALL_64_after_hwframe+0x6e/0x76
Fixes: 294e39e0d034 ("bnxt: hook NAPIs to page pools")
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/20240117234515.226944-5-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Michael Chan [Wed, 17 Jan 2024 23:45:11 +0000 (15:45 -0800)]
bnxt_en: Wait for FLR to complete during probe
[ Upstream commit
3c1069fa42872f95cf3c6fedf80723d391e12d57 ]
The first message to firmware may fail if the device is undergoing FLR.
The driver has some recovery logic for this failure scenario but we must
wait 100 msec for FLR to complete before proceeding. Otherwise the
recovery will always fail.
Fixes: ba02629ff6cb ("bnxt_en: log firmware status on firmware init failure")
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://lore.kernel.org/r/20240117234515.226944-2-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhengchao Shao [Thu, 18 Jan 2024 01:20:19 +0000 (09:20 +0800)]
tcp: make sure init the accept_queue's spinlocks once
[ Upstream commit
198bc90e0e734e5f98c3d2833e8390cac3df61b2 ]
When I run syz's reproduction C program locally, it causes the following
issue:
pvqspinlock: lock 0xffff9d181cd5c660 has corrupted value 0x0!
WARNING: CPU: 19 PID: 21160 at __pv_queued_spin_unlock_slowpath (kernel/locking/qspinlock_paravirt.h:508)
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
RIP: 0010:__pv_queued_spin_unlock_slowpath (kernel/locking/qspinlock_paravirt.h:508)
Code: 73 56 3a ff 90 c3 cc cc cc cc 8b 05 bb 1f 48 01 85 c0 74 05 c3 cc cc cc cc 8b 17 48 89 fe 48 c7 c7
30 20 ce 8f e8 ad 56 42 ff <0f> 0b c3 cc cc cc cc 0f 0b 0f 1f 40 00 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffa8d200604cb8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9d1ef60e0908
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffff9d1ef60e0900
RBP: ffff9d181cd5c280 R08: 0000000000000000 R09: 00000000ffff7fff
R10: ffffa8d200604b68 R11: ffffffff907dcdc8 R12: 0000000000000000
R13: ffff9d181cd5c660 R14: ffff9d1813a3f330 R15: 0000000000001000
FS:  00007fa110184640(0000) GS:ffff9d1ef60c0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000000 CR3: 000000011f65e000 CR4: 00000000000006f0
Call Trace:
<IRQ>
_raw_spin_unlock (kernel/locking/spinlock.c:186)
inet_csk_reqsk_queue_add (net/ipv4/inet_connection_sock.c:1321)
inet_csk_complete_hashdance (net/ipv4/inet_connection_sock.c:1358)
tcp_check_req (net/ipv4/tcp_minisocks.c:868)
tcp_v4_rcv (net/ipv4/tcp_ipv4.c:2260)
ip_protocol_deliver_rcu (net/ipv4/ip_input.c:205)
ip_local_deliver_finish (net/ipv4/ip_input.c:234)
__netif_receive_skb_one_core (net/core/dev.c:5529)
process_backlog (./include/linux/rcupdate.h:779)
__napi_poll (net/core/dev.c:6533)
net_rx_action (net/core/dev.c:6604)
__do_softirq (./arch/x86/include/asm/jump_label.h:27)
do_softirq (kernel/softirq.c:454 kernel/softirq.c:441)
</IRQ>
<TASK>
__local_bh_enable_ip (kernel/softirq.c:381)
__dev_queue_xmit (net/core/dev.c:4374)
ip_finish_output2 (./include/net/neighbour.h:540 net/ipv4/ip_output.c:235)
__ip_queue_xmit (net/ipv4/ip_output.c:535)
__tcp_transmit_skb (net/ipv4/tcp_output.c:1462)
tcp_rcv_synsent_state_process (net/ipv4/tcp_input.c:6469)
tcp_rcv_state_process (net/ipv4/tcp_input.c:6657)
tcp_v4_do_rcv (net/ipv4/tcp_ipv4.c:1929)
__release_sock (./include/net/sock.h:1121 net/core/sock.c:2968)
release_sock (net/core/sock.c:3536)
inet_wait_for_connect (net/ipv4/af_inet.c:609)
__inet_stream_connect (net/ipv4/af_inet.c:702)
inet_stream_connect (net/ipv4/af_inet.c:748)
__sys_connect (./include/linux/file.h:45 net/socket.c:2064)
__x64_sys_connect (net/socket.c:2073 net/socket.c:2070 net/socket.c:2070)
do_syscall_64 (arch/x86/entry/common.c:51 arch/x86/entry/common.c:82)
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:129)
RIP: 0033:0x7fa10ff05a3d
Code: 5b 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89
c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ab a3 0e 00 f7 d8 64 89 01 48
RSP: 002b:00007fa110183de8 EFLAGS: 00000202 ORIG_RAX: 000000000000002a
RAX: ffffffffffffffda RBX: 0000000020000054 RCX: 00007fa10ff05a3d
RDX: 000000000000001c RSI: 0000000020000040 RDI: 0000000000000003
RBP: 00007fa110183e20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 00007fa110184640
R13: 0000000000000000 R14: 00007fa10fe8b060 R15: 00007fff73e23b20
</TASK>
The issue triggering process is analyzed as follows:
Thread A                                Thread B
tcp_v4_rcv  //receive ack TCP packet    inet_shutdown
  tcp_check_req                           tcp_disconnect //disconnect sock
  ...                                       tcp_set_state(sk, TCP_CLOSE)
  inet_csk_complete_hashdance            ...
    inet_csk_reqsk_queue_add             inet_listen  //start listen
      spin_lock(&queue->rskq_lock)         inet_csk_listen_start
      ...                                    reqsk_queue_alloc
      ...                                      spin_lock_init
      spin_unlock(&queue->rskq_lock)    //warning
When the socket receives the ACK packet during the three-way handshake,
it holds the spinlock. If the user then actively shuts down the socket and
immediately listens on it again, the spinlock is re-initialized while held.
When the socket goes to release the spinlock, the warning above is generated.
The same issue applies to fastopenq.lock.
Move the spinlock initialization to inet_create and inet_accept to make sure
the accept_queue's spinlocks are initialized only once.
Fixes: fff1f3001cc5 ("tcp: add a spinlock to protect struct request_sock_queue")
Fixes: 168a8f58059a ("tcp: TCP Fast Open Server - main code path")
Reported-by: Ming Shu <sming56@aliyun.com>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240118012019.1751966-1-shaozhengchao@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Benjamin Poirier [Thu, 18 Jan 2024 00:12:32 +0000 (19:12 -0500)]
selftests: bonding: Increase timeout to 1200s
[ Upstream commit
b01f15a7571b7aa222458bc9bf26ab59bd84e384 ]
When tests are run by runner.sh, bond_options.sh gets killed before
it can complete:
make -C tools/testing/selftests run_tests TARGETS="drivers/net/bonding"
[...]
# timeout set to 120
# selftests: drivers/net/bonding: bond_options.sh
# TEST: prio (active-backup miimon primary_reselect 0) [ OK ]
# TEST: prio (active-backup miimon primary_reselect 1) [ OK ]
# TEST: prio (active-backup miimon primary_reselect 2) [ OK ]
# TEST: prio (active-backup arp_ip_target primary_reselect 0) [ OK ]
# TEST: prio (active-backup arp_ip_target primary_reselect 1) [ OK ]
# TEST: prio (active-backup arp_ip_target primary_reselect 2) [ OK ]
#
not ok 7 selftests: drivers/net/bonding: bond_options.sh # TIMEOUT 120 seconds
This test includes many sleep statements, at least some of which are
related to timers in the operation of the bonding driver itself. Increase
the test timeout to allow the test to complete.
I ran the test in slightly different VMs (including one without HW
virtualization support) and got runtimes of 13m39.760s, 13m31.238s, and
13m2.956s. Use a ~1.5x "safety factor" and set the timeout to 1200s.
Fixes: 42a8d4aaea84 ("selftests: bonding: add bonding prio option test")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Closes: https://lore.kernel.org/netdev/20240116104402.1203850a@kernel.org/#t
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com>
Reviewed-by: Hangbin Liu <liuhangbin@gmail.com>
Link: https://lore.kernel.org/r/20240118001233.304759-1-bpoirier@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Wen Gu [Thu, 18 Jan 2024 04:32:10 +0000 (12:32 +0800)]
net/smc: fix illegal rmb_desc access in SMC-D connection dump
[ Upstream commit
dbc153fd3c142909e564bb256da087e13fbf239c ]
A crash was found when dumping SMC-D connections. It can be reproduced
by following steps:
- run nginx/wrk test:
smc_run nginx
smc_run wrk -t 16 -c 1000 -d <duration> -H 'Connection: Close' <URL>
- continuously dump SMC-D connections in parallel:
watch -n 1 'smcss -D'
BUG: kernel NULL pointer dereference, address: 0000000000000030
CPU: 2 PID: 7204 Comm: smcss Kdump: loaded Tainted: G E 6.7.0+ #55
RIP: 0010:__smc_diag_dump.constprop.0+0x5e5/0x620 [smc_diag]
Call Trace:
<TASK>
? __die+0x24/0x70
? page_fault_oops+0x66/0x150
? exc_page_fault+0x69/0x140
? asm_exc_page_fault+0x26/0x30
? __smc_diag_dump.constprop.0+0x5e5/0x620 [smc_diag]
? __kmalloc_node_track_caller+0x35d/0x430
? __alloc_skb+0x77/0x170
smc_diag_dump_proto+0xd0/0xf0 [smc_diag]
smc_diag_dump+0x26/0x60 [smc_diag]
netlink_dump+0x19f/0x320
__netlink_dump_start+0x1dc/0x300
smc_diag_handler_dump+0x6a/0x80 [smc_diag]
? __pfx_smc_diag_dump+0x10/0x10 [smc_diag]
sock_diag_rcv_msg+0x121/0x140
? __pfx_sock_diag_rcv_msg+0x10/0x10
netlink_rcv_skb+0x5a/0x110
sock_diag_rcv+0x28/0x40
netlink_unicast+0x22a/0x330
netlink_sendmsg+0x1f8/0x420
__sock_sendmsg+0xb0/0xc0
____sys_sendmsg+0x24e/0x300
? copy_msghdr_from_user+0x62/0x80
___sys_sendmsg+0x7c/0xd0
? __do_fault+0x34/0x160
? do_read_fault+0x5f/0x100
? do_fault+0xb0/0x110
? __handle_mm_fault+0x2b0/0x6c0
__sys_sendmsg+0x4d/0x80
do_syscall_64+0x69/0x180
entry_SYSCALL_64_after_hwframe+0x6e/0x76
It is possible that the connection is in the process of being established
when we dump it: the connection has already been registered in a link group
by smc_conn_create(), but the rmb_desc has not yet been initialized by
smc_buf_create(), causing the illegal access to conn->rmb_desc. So fix it
by checking rmb_desc before dumping.
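The guard added to the dump path is essentially this hedged sketch
(attribute filling elided):

    /* only dump DMB info once the buffer has actually been created */
    if (smc->conn.lgr && smc->conn.lgr->is_smcd && smc->conn.rmb_desc) {
        /* fill struct smcd_diag_dmbinfo from conn->rmb_desc ... */
    }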
Fixes: 4b1b7d3b30a6 ("net/smc: add SMC-D diag support")
Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
Reviewed-by: Wenjia Zhang <wenjia@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Johannes Berg [Thu, 11 Jan 2024 16:17:44 +0000 (18:17 +0200)]
wifi: mac80211: fix potential sta-link leak
[ Upstream commit
b01a74b3ca6fd51b62c67733ba7c3280fa6c5d26 ]
When a station is allocated, links are added but not
set to valid yet (e.g. during connection to an AP MLD),
we might remove the station without ever marking links
valid, and leak them. Fix that.
Fixes: cb71f1d136a6 ("wifi: mac80211: add sta link addition/removal")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Reviewed-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://msgid.link/20240111181514.6573998beaf8.I09ac2e1d41c80f82a5a616b8bd1d9d8dd709a6a6@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Lucas Stach [Wed, 17 Jan 2024 21:06:28 +0000 (22:06 +0100)]
SUNRPC: use request size to initialize bio_vec in svc_udp_sendto()
[ Upstream commit
1d9cabe2817edd215779dc9c2fe5e7ab9aac0704 ]
Use the proper size when setting up the bio_vec, as otherwise only
zero-length UDP packets will be sent.
Fixes: baabf59c2414 ("SUNRPC: Convert svc_udp_sendto() to use the per-socket bio_vec array")
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Shyam Prasad N [Fri, 29 Dec 2023 11:30:07 +0000 (11:30 +0000)]
cifs: after disabling multichannel, mark tcon for reconnect
commit
27e1fd343f80168ff456785c2443136b6b7ca3cc upstream.
Once the server disables multichannel for an active multichannel
session, on the following reconnect, the client would reduce
the number of channels to 1. However, it could be the case that
the tree connect was active on one of these disabled channels.
This results in an unrecoverable state.
This change fixes that by making sure that whenever a channel
is being terminated, the session and tcon are marked for
reconnect too. This could mean a few redundant tree connect
calls to the server, but considering that this is not a frequent
event, we should be okay.
Fixes: ee1d21794e55 ("cifs: handle when server stops supporting multichannel")
Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shyam Prasad N [Fri, 15 Dec 2023 17:16:55 +0000 (17:16 +0000)]
cifs: fix a pending undercount of srv_count
commit
f30bbc38704e279c06d073ecb18fea376791ecab upstream.
The following commit reverted the changes to ref count
the server struct while scheduling a reconnect work:
823342524868 Revert "cifs: reconnect work should have reference on server struct"
However, a following change also introduced scheduling
of reconnect work, and assumed ref counting. This change
fixes that as well.
Fixes umount problems like:
[73496.157838] CPU: 5 PID: 1321389 Comm: umount Tainted: G W OE 6.7.0-060700rc6-generic #202312172332
[73496.157841] Hardware name: LENOVO 20MAS08500/20MAS08500, BIOS N2CET67W (1.50 ) 12/15/2022
[73496.157843] RIP: 0010:cifs_put_tcp_session+0x17d/0x190 [cifs]
[73496.157906] Code: 5d 31 c0 31 d2 31 f6 31 ff c3 cc cc cc cc e8 4a 6e 14 e6 e9 f6 fe ff ff be 03 00 00 00 48 89 d7 e8 78 26 b3 e5 e9 e4 fe ff ff <0f> 0b e9 b1 fe ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90
[73496.157908] RSP: 0018:ffffc90003bcbcb8 EFLAGS: 00010286
[73496.157911] RAX: 00000000ffffffff RBX: ffff8885830fa800 RCX: 0000000000000000
[73496.157913] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[73496.157915] RBP: ffffc90003bcbcc8 R08: 0000000000000000 R09: 0000000000000000
[73496.157917] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[73496.157918] R13: ffff8887d56ba800 R14: 00000000ffffffff R15: ffff8885830fa800
[73496.157920] FS:  00007f1ff0e33800(0000) GS:ffff88887ba80000(0000) knlGS:0000000000000000
[73496.157922] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[73496.157924] CR2: 0000115f002e2010 CR3: 00000003d1e24005 CR4: 00000000003706f0
[73496.157926] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[73496.157928] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[73496.157929] Call Trace:
[73496.157931] <TASK>
[73496.157933] ? show_regs+0x6d/0x80
[73496.157936] ? __warn+0x89/0x160
[73496.157939] ? cifs_put_tcp_session+0x17d/0x190 [cifs]
[73496.157976] ? report_bug+0x17e/0x1b0
[73496.157980] ? handle_bug+0x51/0xa0
[73496.157983] ? exc_invalid_op+0x18/0x80
[73496.157985] ? asm_exc_invalid_op+0x1b/0x20
[73496.157989] ? cifs_put_tcp_session+0x17d/0x190 [cifs]
[73496.158023] ? cifs_put_tcp_session+0x1e/0x190 [cifs]
[73496.158057] __cifs_put_smb_ses+0x2b5/0x540 [cifs]
[73496.158090] ? tconInfoFree+0xc2/0x120 [cifs]
[73496.158130] cifs_put_tcon.part.0+0x108/0x2b0 [cifs]
[73496.158173] cifs_put_tlink+0x49/0x90 [cifs]
[73496.158220] cifs_umount+0x56/0xb0 [cifs]
[73496.158258] cifs_kill_sb+0x52/0x60 [cifs]
[73496.158306] deactivate_locked_super+0x32/0xc0
[73496.158309] deactivate_super+0x46/0x60
[73496.158311] cleanup_mnt+0xc3/0x170
[73496.158314] __cleanup_mnt+0x12/0x20
[73496.158330] task_work_run+0x5e/0xa0
[73496.158333] exit_to_user_mode_loop+0x105/0x130
[73496.158336] exit_to_user_mode_prepare+0xa5/0xb0
[73496.158338] syscall_exit_to_user_mode+0x29/0x60
[73496.158341] do_syscall_64+0x6c/0xf0
[73496.158344] ? syscall_exit_to_user_mode+0x37/0x60
[73496.158346] ? do_syscall_64+0x6c/0xf0
[73496.158349] ? exit_to_user_mode_prepare+0x30/0xb0
[73496.158353] ? syscall_exit_to_user_mode+0x37/0x60
[73496.158355] ? do_syscall_64+0x6c/0xf0
Reported-by: Robert Morris <rtm@csail.mit.edu>
Fixes: 705fc522fe9d ("cifs: handle when server starts supporting multichannel")
Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shyam Prasad N [Tue, 14 Nov 2023 04:58:23 +0000 (04:58 +0000)]
cifs: fix lock ordering while disabling multichannel
commit
5eef12c4e3230f2025dc46ad8c4a3bc19978e5d7 upstream.
The code to handle the case of server disabling multichannel
was picking iface_lock with chan_lock held. This goes against
the lock ordering rules, as iface_lock is a higher order lock
(even if it isn't so obvious).
This change fixes the lock ordering by doing the following in
that order for each secondary channel:
1. store iface and server pointers in local variable
2. remove references to iface and server in channels
3. unlock chan_lock
4. lock iface_lock
5. dec ref count for iface
6. unlock iface_lock
7. dec ref count for server
8. lock chan_lock again
Since this function can only be called in smb2_reconnect, and
that cannot be called by two parallel processes, we should not
have races due to dropping chan_lock between steps 3 and 8.
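A hedged sketch of that ordering (structure only; error handling and the
exact surrounding code are elided):

    spin_lock(&ses->chan_lock);
    for (i = 1; i < ses->chan_count; i++) {
        struct cifs_server_iface *iface = ses->chans[i].iface;	/* step 1 */
        struct TCP_Server_Info *server = ses->chans[i].server;

        ses->chans[i].iface = NULL;				/* step 2 */
        ses->chans[i].server = NULL;
        spin_unlock(&ses->chan_lock);				/* step 3 */

        if (iface) {
            spin_lock(&ses->iface_lock);			/* step 4 */
            kref_put(&iface->refcount, release_iface);		/* step 5 */
            spin_unlock(&ses->iface_lock);			/* step 6 */
        }
        if (server)
            cifs_put_tcp_session(server, 0);			/* step 7 */

        spin_lock(&ses->chan_lock);				/* step 8 */
    }
    spin_unlock(&ses->chan_lock);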
Fixes: ee1d21794e55 ("cifs: handle when server stops supporting multichannel")
Reported-by: Paulo Alcantara <pc@manguebit.com>
Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jonathan Gray [Sat, 27 Jan 2024 01:01:50 +0000 (12:01 +1100)]
Revert "drm/amd: Enable PCIe PME from D3"
This reverts commit
847e6947afd3c46623172d2eabcfc2481ee8668e.
It duplicated a change already made in 6.6.5 by commit
49227bea27ebcd260f0c94a3055b14bbd8605c5e.
Cc: stable@vger.kernel.org # 6.6
Signed-off-by: Jonathan Gray <jsg@jsg.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:07:01 +0000 (04:07 +0200)]
selftests/bpf: check if max number of bpf_loop iterations is tracked
commit
57e2a52deeb12ab84c15c6d0fb93638b5b94001b upstream.
Check that even if bpf_loop() callback simulation does not converge to
a specific state, verification could proceed via "brute force"
simulation of maximal number of callback calls.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-12-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:07:00 +0000 (04:07 +0200)]
bpf: keep track of max number of bpf_loop callback iterations
commit
bb124da69c47dd98d69361ec13244ece50bec63e upstream.
In some cases verifier can't infer convergence of the bpf_loop()
iteration. E.g. for the following program:
static int cb(__u32 idx, struct num_context* ctx)
{
        ctx->i++;
        return 0;
}

SEC("?raw_tp")
int prog(void *_)
{
        struct num_context ctx = { .i = 0 };
        __u8 choice_arr[2] = { 0, 1 };
        bpf_loop(2, cb, &ctx, 0);
        return choice_arr[ctx.i];
}
Each 'cb' simulation would eventually return to 'prog' and reach the
'return choice_arr[ctx.i]' statement, at which point ctx.i would be
marked precise, thus forcing the verifier to track a multitude of separate
states with {.i=0}, {.i=1}, ... at bpf_loop() callback entry.
This commit allows "brute force" handling for such cases by limiting
number of callback body simulations using 'umax' value of the first
bpf_loop() parameter.
For this, extend bpf_func_state with 'callback_depth' field.
Increment this field when callback visiting state is pushed to states
traversal stack. For frame #N it's 'callback_depth' field counts how
many times callback with frame depth N+1 had been executed.
Use bpf_func_state specifically to allow independent tracking of
callback depths when multiple nested bpf_loop() calls are present.
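A hedged sketch of the new field (other members elided):

    struct bpf_func_state {
        /* ... existing members ... */
        u32 callback_depth;	/* for frame N: how many times the callback
                                 * at frame depth N+1 has been simulated */
    };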
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-11-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:59 +0000 (04:06 +0200)]
selftests/bpf: test widening for iterating callbacks
commit
9f3330aa644d6d979eb064c46e85c62d4b4eac75 upstream.
A test case to verify that imprecise scalars widening is applied to
callback entering state, when callback call is simulated repeatedly.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-10-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:58 +0000 (04:06 +0200)]
bpf: widening for callback iterators
commit
cafe2c21508a38cdb3ed22708842e957b2572c3e upstream.
Callbacks are similar to open coded iterators, so add imprecise
widening logic for callback body processing. This makes callback based
loops behave identically to open coded iterators, e.g. allowing to
verify programs like below:
struct ctx { u32 i; };

int cb(u32 idx, struct ctx* ctx)
{
        ++ctx->i;
        return 0;
}

...
struct ctx ctx = { .i = 0 };
bpf_loop(100, cb, &ctx, 0);
...
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-9-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:57 +0000 (04:06 +0200)]
selftests/bpf: tests for iterating callbacks
commit
958465e217dbf5fc6677d42d8827fb3073d86afd upstream.
A set of test cases to check behavior of callback handling logic,
check if verifier catches the following situations:
- program not safe on second callback iteration;
- program not safe on zero callback iterations;
- infinite loop inside a callback.
Verify that callback logic works for bpf_loop, bpf_for_each_map_elem,
bpf_user_ringbuf_drain, bpf_find_vma.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-8-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:56 +0000 (04:06 +0200)]
bpf: verify callbacks as if they are called unknown number of times
commit
ab5cfac139ab8576fb54630d4cca23c3e690ee90 upstream.
Prior to this patch callbacks were handled as regular function calls,
execution of callback body was modeled exactly once.
This patch updates callbacks handling logic as follows:
- introduces a function push_callback_call() that schedules callback
body verification in env->head stack;
- updates prepare_func_exit() to reschedule callback body verification
upon BPF_EXIT;
- as with calls to bpf_*_iter_next(), calls to callback-invoking functions
are marked as checkpoints;
- is_state_visited() is updated to stop callback based iteration when
some identical parent state is found.
Paths with callback function invoked zero times are now verified first,
which leads to necessity to modify some selftests:
- the following negative tests required adding release/unlock/drop
calls to avoid previously masked unrelated error reports:
- cb_refs.c:underflow_prog
- exceptions_fail.c:reject_rbtree_add_throw
- exceptions_fail.c:reject_with_cp_reference
- the following precision tracking selftests needed change in expected
log trace:
- verifier_subprog_precision.c:callback_result_precise
(note: r0 precision is no longer propagated inside callback and
I think this is a correct behavior)
- verifier_subprog_precision.c:parent_callee_saved_reg_precise_with_callback
- verifier_subprog_precision.c:parent_stack_slot_precise_with_callback
Reported-by: Andrew Werner <awerner32@gmail.com>
Closes: https://lore.kernel.org/bpf/CA+vRuzPChFNXmouzGG+wsy=6eMcfr1mFG0F3g7rbg-sedGKW3w@mail.gmail.com/
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-7-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:55 +0000 (04:06 +0200)]
bpf: extract setup_func_entry() utility function
commit
58124a98cb8eda69d248d7f1de954c8b2767c945 upstream.
Move code for simulated stack frame creation to a separate utility
function. This function would be used in the follow-up change for
callbacks handling.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-6-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:54 +0000 (04:06 +0200)]
bpf: extract __check_reg_arg() utility function
commit
683b96f9606ab7308ffb23c46ab43cecdef8a241 upstream.
Split check_reg_arg() into two utility functions:
- check_reg_arg() operating on registers from current verifier state;
- __check_reg_arg() operating on a specific set of registers passed as
a parameter;
The __check_reg_arg() function would be used by a follow-up change for
callbacks handling.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-5-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:52 +0000 (04:06 +0200)]
selftests/bpf: track string payload offset as scalar in strobemeta
commit
87eb0152bcc102ecbda866978f4e54db5a3be1ef upstream.
This change prepares strobemeta for the update in callbacks verification
logic. To allow bpf_loop() verification to converge when multiple
callback iterations are considered:
- track offset inside strobemeta_payload->payload directly as scalar
value;
- at each iteration make sure that remaining
strobemeta_payload->payload capacity is sufficient for execution of
read_{map,str}_var functions;
- make sure that offset is tracked as an unbound scalar between
iterations, otherwise the verifier won't be able to infer that the
bpf_loop callback reaches identical states.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 21 Nov 2023 02:06:51 +0000 (04:06 +0200)]
selftests/bpf: track tcp payload offset as scalar in xdp_synproxy
commit
977bc146d4eb7070118d8a974919b33bb52732b4 upstream.
This change prepares syncookie_{tc,xdp} for the update in callbacks
verification logic. To allow bpf_loop() verification to converge when
multiple callback iterations are considered:
- track offset inside TCP payload explicitly, not as a part of the
pointer;
- make sure that offset does not exceed MAX_PACKET_OFF enforced by
verifier;
- make sure that offset is tracked as an unbound scalar between
iterations, otherwise the verifier won't be able to infer that the
bpf_loop callback reaches identical states (see the sketch below).
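A hedged sketch of the pattern (illustrative only; names, bounds and option
sizes are not the actual syncookie_{tc,xdp} code): keep the offset as a plain
scalar, clamp it against MAX_PACKET_OFF, and re-derive the packet pointer on
every iteration so successive loop states stay comparable.
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/tcp.h>
    #include <bpf/bpf_helpers.h>

    #define MAX_PACKET_OFF 0xffff   /* upper bound the verifier enforces */

    SEC("xdp")
    int scan_options(struct xdp_md *ctx)
    {
            /* hypothetical start of the TCP options, kept as a scalar */
            __u64 off = sizeof(struct ethhdr) + sizeof(struct iphdr) +
                        sizeof(struct tcphdr);
            int i;

            for (i = 0; i < 10; i++) {
                    void *data = (void *)(long)ctx->data;
                    void *data_end = (void *)(long)ctx->data_end;
                    __u8 kind;

                    if (off >= MAX_PACKET_OFF)      /* explicit bound on the scalar */
                            return XDP_DROP;
                    if (data + off + 1 > data_end)
                            return XDP_DROP;
                    kind = *(__u8 *)(data + off);
                    off += kind ? 4 : 1;            /* illustrative option sizes */
            }
            return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";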
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:17 +0000 (03:09 +0300)]
bpf: print full verifier states on infinite loop detection
commit
b4d8239534fddc036abe4a0fdbf474d9894d4641 upstream.
Additional logging in is_state_visited(): if an infinite loop is detected,
print the full verifier state for both the current and the equivalent state.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-8-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:16 +0000 (03:09 +0300)]
selftests/bpf: test if state loops are detected in a tricky case
commit
64870feebecb7130291a55caf0ce839a87405a70 upstream.
A convoluted test case for the iterators convergence logic that
demonstrates that states with a branch count equal to 0 might still be
part of a not yet fully explored loop.
E.g. consider the following state diagram:
          initial                 Here state 'succ' was processed first,
             |                    it was eventually tracked to produce a
             V                    state identical to 'hdr'.
.---------> hdr                   All branches from 'succ' had been explored
|            |                    and thus 'succ' has its .branches == 0.
|            V
|    .------...                   Suppose states 'cur' and 'succ' correspond
|    |       |                    to the same instruction + callsites.
|    V       V                    In such case it is necessary to check
|   ...     ...                   whether 'succ' and 'cur' are identical.
|    |       |                    If 'succ' and 'cur' are a part of the same loop
|    V       V                    they have to be compared exactly.
|   succ <- cur
|    |
|    V
|   ...
|    |
'----'
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-7-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:15 +0000 (03:09 +0300)]
bpf: correct loop detection for iterators convergence
commit
2a0992829ea3864939d917a5c7b48be6629c6217 upstream.
It turns out that .branches > 0 in is_state_visited() is not a
sufficient condition to identify if two verifier states form a loop
when iterators convergence is computed. This commit adds logic to
distinguish situations like below:
(I)       initial           (II)      initial
             |                           |
             V                           V
.---------> hdr                         ..
|            |                           |
|            V                           V
|    .------...                      .------..
|    |       |                       |      |
|    V       V                       V      V
|   ...     ...                    .-> hdr  ..
|    |       |                     |    |    |
|    V       V                     |    V    V
|   succ <- cur                    |   succ <- cur
|    |                             |    |
|    V                             |    V
|   ...                            |   ...
|    |                             |    |
'----'                             '----'
For both (I) and (II) the successor 'succ' of the current state 'cur' was
previously explored and has a branches count of 0. However, the loop entry
'hdr' corresponding to 'succ' might be a part of the current DFS path.
If that is the case, 'succ' and 'cur' are members of the same loop
and have to be compared exactly.
Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Reviewed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-6-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:14 +0000 (03:09 +0300)]
selftests/bpf: tests with delayed read/precision marks in loop body
commit
389ede06c2974b2f878a7ebff6b0f4f707f9db74 upstream.
These test cases try to hide read and precision marks from the loop
convergence logic: marks would only be assigned on subsequent loop
iterations or after exploring states pushed to the env->head stack first.
Without the verifier fix to use exact state comparison logic for
iterators convergence these tests (except 'triple_continue') would be
erroneously marked as safe.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-5-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:13 +0000 (03:09 +0300)]
bpf: exact states comparison for iterator convergence checks
commit
2793a8b015f7f1caadb9bce9c63dc659f7522676 upstream.
Convergence for open coded iterators is computed in is_state_visited()
by examining states with branches count > 1 and using states_equal().
states_equal() computes sub-state relation using read and precision marks.
Read and precision marks are propagated from children states,
thus are not guaranteed to be complete inside a loop when branches
count > 1. This could be demonstrated using the following unsafe program:
 1. r7 = -16
 2. r6 = bpf_get_prandom_u32()
 3. while (bpf_iter_num_next(&fp[-8])) {
 4.   if (r6 != 42) {
 5.     r7 = -32
 6.     r6 = bpf_get_prandom_u32()
 7.     continue
 8.   }
 9.   r0 = r10
10.   r0 += r7
11.   r8 = *(u64 *)(r0 + 0)
12.   r6 = bpf_get_prandom_u32()
13. }
Here the verifier would first visit path 1-3, create a checkpoint at 3
with r7=-16, then continue to 4-7,3 with r7=-32.
Because instructions at 9-12 had not been visited yet, the existing
checkpoint at 3 does not have a read or precision mark for r7.
Thus states_equal() would return true, the verifier would discard the
current state, and the unsafe memory access at 11 would not be caught.
This commit fixes this loophole by introducing exact state comparisons
for iterator convergence logic:
- registers are compared using regs_exact() regardless of read or
precision marks;
- stack slots have to have identical type.
Unfortunately, this is too strict even for simple programs like below:
    i = 0;
    while (iter_next(&it))
            i++;
At each iteration step i++ would produce a new distinct state and
eventually the instruction processing limit would be reached.
To avoid such behavior, speculatively forget (widen) the range for
imprecise scalar registers if those registers were not precise at the
end of the previous iteration and do not match exactly.
This is a conservative heuristic that allows verification of a wide range
of programs, however it precludes verification of programs that conjure
an imprecise value on the first loop iteration and use it as precise
on the second.
Test case iter_task_vma_for_each() presents one such case:
    unsigned int seen = 0;
    ...
    bpf_for_each(task_vma, vma, task, 0) {
            if (seen >= 1000)
                    break;
            ...
            seen++;
    }
Here clang generates the following code:
<LBB0_4>:
      24: r8 = r6                            ; stash current value of
... body ...                                   'seen'
      29: r1 = r10
      30: r1 += -0x8
      31: call bpf_iter_task_vma_next
      32: r6 += 0x1                          ; seen++;
      33: if r0 == 0x0 goto +0x2 <LBB0_6>    ; exit on next() == NULL
      34: r7 += 0x10
      35: if r8 < 0x3e7 goto -0xc <LBB0_4>   ; loop on seen < 1000
<LBB0_6>:
... exit ...
Note that the counter in r6 is copied to r8 and then incremented, and the
conditional jump is done using r8. Because of this, the precision mark for
r6 lags one state behind the precision mark on r8 and the widening logic
kicks in.
Adding barrier_var(seen) after the conditional is sufficient to force
clang to use the same register for both counting and the conditional jump.
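For reference, the workaround looks roughly like this (assuming the
barrier_var() macro from the selftests' bpf_misc.h):
    bpf_for_each(task_vma, vma, task, 0) {
            if (seen >= 1000)
                    break;
            barrier_var(seen);   /* keep 'seen' in one register across the jump */
            ...
            seen++;
    }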
This issue was discussed in the thread [1], which was started by
Andrew Werner <awerner32@gmail.com> demonstrating a similar bug
in callback function handling. The callbacks are addressed
in a follow-up patch.
[1] https://lore.kernel.org/bpf/97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com/
Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:12 +0000 (03:09 +0300)]
bpf: extract same_callsites() as utility function
commit
4c97259abc9bc8df7712f76f58ce385581876857 upstream.
Extract same_callsites() from clean_live_states() as a utility function.
This function will be used by the next patch in the set.
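A sketch consistent with the description (the actual helper may differ in
details): two states share callsites when every frame in their call stacks
was entered from the same instruction.
    static bool same_callsites(struct bpf_verifier_state *a,
                               struct bpf_verifier_state *b)
    {
            int fr;

            if (a->curframe != b->curframe)
                    return false;

            for (fr = a->curframe; fr >= 0; fr--)
                    if (a->frame[fr]->callsite != b->frame[fr]->callsite)
                            return false;

            return true;
    }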
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eduard Zingerman [Tue, 24 Oct 2023 00:09:11 +0000 (03:09 +0300)]
bpf: move explored_state() closer to the beginning of verifier.c
commit
3c4e420cb6536026ddd50eaaff5f30e4f144200d upstream.
Subsequent patches will make use of the explored_state() function.
Move it up to avoid adding an unnecessary prototype.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rohan G Thomas [Sat, 16 Sep 2023 06:33:11 +0000 (14:33 +0800)]
dt-bindings: net: snps,dwmac: Tx coe unsupported
commit
6fb8c20a04be234cf1cfd4bdd8cfb8860c9d2d3b upstream.
Add dt-bindings for the coe-unsupported property per tx queue. Some DWMAC
IPs support tx checksum offloading (coe) only for a few tx queues.
The DW xGMAC IP can be synthesized such that it supports tx coe only
for a few initial tx queues. Also, as Serge pointed out, for the DW
QoS IP tx coe can be individually configured for each tx queue. This
property is added to have a sw fallback for checksum calculation if a
tx queue doesn't support tx coe.
Signed-off-by: Rohan G Thomas <rohan.g.thomas@intel.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namjae Jeon [Tue, 23 Jan 2024 11:40:31 +0000 (20:40 +0900)]
ksmbd: Add missing set_freezable() for freezable kthread
From: Kevin Hao <haokexin@gmail.com>
[ Upstream commit
8fb7b723924cc9306bc161f45496497aec733904 ]
The kernel thread function ksmbd_conn_handler_loop() invokes
try_to_freeze() in its loop. But all kernel threads are
non-freezable by default. So if we want to make a kernel thread
freezable, we have to invoke set_freezable() explicitly.
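The pattern, as a minimal sketch (not the ksmbd code itself):
    #include <linux/freezer.h>
    #include <linux/kthread.h>

    static int conn_handler_loop(void *arg)
    {
            set_freezable();                /* kthreads are non-freezable unless they opt in */

            while (!kthread_should_stop()) {
                    if (try_to_freeze())    /* now actually parks the thread during suspend */
                            continue;
                    /* ... handle one request ... */
            }
            return 0;
    }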
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Acked-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namjae Jeon [Tue, 23 Jan 2024 11:40:30 +0000 (20:40 +0900)]
ksmbd: send lease break notification on FILE_RENAME_INFORMATION
[ Upstream commit
3fc74c65b367476874da5fe6f633398674b78e5a ]
Send a lease break notification on a FILE_RENAME_INFORMATION request.
This patch fixes the smb2.lease.v2_epoch2 test failure.
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namjae Jeon [Tue, 23 Jan 2024 11:40:29 +0000 (20:40 +0900)]
ksmbd: don't increment epoch if current state and request state are same
[ Upstream commit
b6e9a44e99603fe10e1d78901fdd97681a539612 ]
If the existing lease state and the requested state are the same, don't
increment the epoch in the create context.
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namjae Jeon [Tue, 23 Jan 2024 11:40:28 +0000 (20:40 +0900)]
ksmbd: fix potential circular locking issue in smb2_set_ea()
[ Upstream commit
6fc0a265e1b932e5e97a038f99e29400a93baad0 ]
smb2_set_ea() can be called within the parent inode lock range.
So add a get_write argument to smb2_set_ea() so that it does not call
nested mnt_want_write().
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namjae Jeon [Tue, 23 Jan 2024 11:40:27 +0000 (20:40 +0900)]
ksmbd: set v2 lease version on lease upgrade
[ Upstream commit
bb05367a66a9990d2c561282f5620bb1dbe40c28 ]
If a file opened with a v2 lease is upgraded with a v1 lease, the smb server
should respond with a v2 lease create context to the client.
This patch fixes the smb2.lease.v2_epoch2 test failure.
This test case assumes the following scenario:
1. smb2 create with v2 lease (R, LEASE1 key)
2. smb server returns smb2 create response with v2 lease context (R,
LEASE1 key, epoch + 1)
3. smb2 create with v1 lease (RH, LEASE1 key)
4. smb server returns smb2 create response with v2 lease context (RH,
LEASE1 key, epoch + 2)
i.e. if the same client (same lease key) tries to open a file that is
already open with a v2 lease using a v1 lease, the smb server should
return a v2 lease.
Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
Acked-by: Tom Talpey <tom@talpey.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lino Sanfilippo [Wed, 3 Jan 2024 06:18:12 +0000 (07:18 +0100)]
serial: Do not hold the port lock when setting rx-during-tx GPIO
commit
07c30ea5861fb26a77dade8cdc787252f6122fb1 upstream.
Both the imx and stm32 driver set the rx-during-tx GPIO in rs485_config().
Since this function is called with the port lock held, this can be a
problem in case that setting the GPIO line can sleep (e.g. if a GPIO
expander is used which is connected via SPI or I2C).
Avoid this issue by moving the GPIO setting outside of the port lock into
the serial core and thus making it a generic feature.
Also with commit
c54d48543689 ("serial: stm32: Add support for rs485
RX_DURING_TX output GPIO") the SER_RS485_RX_DURING_TX flag is only set if a
rx-during-tx GPIO is _not_ available, which is wrong. Fix this, too.
Furthermore, reset the old GPIO settings in case changing the RS485
configuration failed.
Fixes: c54d48543689 ("serial: stm32: Add support for rs485 RX_DURING_TX output GPIO")
Fixes: ca530cfa968c ("serial: imx: Add support for RS485 RX_DURING_TX output GPIO")
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Lino Sanfilippo <l.sanfilippo@kunbus.com>
Link: https://lore.kernel.org/r/20240103061818.564-2-l.sanfilippo@kunbus.com
Signed-off-by: Lino Sanfilippo <l.sanfilippo@kunbus.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Charan Teja Kalla [Fri, 24 Nov 2023 10:57:25 +0000 (16:27 +0530)]
mm: page_alloc: unreserve highatomic page blocks before oom
commit
ac3f3b0a55518056bc80ed32a41931c99e1f7d81 upstream.
__alloc_pages_direct_reclaim() is called from the slowpath allocation where
high atomic reserves can be unreserved after there is progress in
reclaim and yet no suitable page is found. Later, should_reclaim_retry()
gets called from the slow path allocation to decide if the reclaim needs to
be retried before the OOM kill path is taken.
should_reclaim_retry() checks the available (reclaimable + free pages)
memory against the min wmark levels of a zone and returns:
a) true, if it is above the min wmark so that slow path allocation will
do the reclaim retries.
b) false, thus slowpath allocation takes oom kill path.
should_reclaim_retry() can also unreserve the high atomic reserves **but
only after all the reclaim retries are exhausted.**
In a case where there is almost no reclaimable memory and the free pages
consist mostly of the high atomic reserves, but the allocation context
can't use these high atomic reserves, the available memory drops below the
min wmark levels, hence false is returned from should_reclaim_retry(),
leading the allocation request to take the OOM kill path. This can turn
into an early oom kill if the high atomic reserves are holding a lot of
free memory and unreserving them is not attempted.
(early)OOM is encountered on a VM with the below state:
[ 295.998653] Normal free:7728kB boost:0kB min:804kB low:1004kB
high:1204kB reserved_highatomic:8192KB active_anon:4kB inactive_anon:0kB
active_file:24kB inactive_file:24kB unevictable:1220kB writepending:0kB
present:70732kB managed:49224kB mlocked:0kB bounce:0kB free_pcp:688kB
local_pcp:492kB free_cma:0kB
[ 295.998656] lowmem_reserve[]: 0 32
[ 295.998659] Normal: 508*4kB (UMEH) 241*8kB (UMEH) 143*16kB (UMEH)
33*32kB (UH) 7*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB
0*4096kB = 7752kB
Per the above log, ~7MB of free memory sitting in the high atomic reserves
is not freed up before falling back to the oom kill path.
Fix it by trying to unreserve the high atomic reserves in
should_reclaim_retry() before __alloc_pages_direct_reclaim() can fall back
to the oom kill path.
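A hedged sketch of the idea in should_reclaim_retry() (not necessarily the
exact diff): when the retry budget is exhausted, hand the high atomic
reserves back to the free lists and report whether that freed anything,
instead of unconditionally giving up.
    /*
     * Make sure we converge to OOM if we cannot make any progress
     * several times in the row.
     */
    if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
            /* Before OOM, exhaust the highatomic reserves */
            return unreserve_highatomic_pageblock(ac, true);
    }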
Link: https://lkml.kernel.org/r/1700823445-27531-1-git-send-email-quic_charante@quicinc.com
Fixes: 0aaa29a56e4f ("mm, page_alloc: reserve pageblocks for high-order atomic allocations on demand")
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
Reported-by: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Pavankumar Kondeti <quic_pkondeti@quicinc.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Huacai Chen [Wed, 8 Nov 2023 06:12:15 +0000 (14:12 +0800)]
LoongArch/smp: Call rcutree_report_cpu_starting() earlier
commit
a2ccf46333d7b2cf9658f0d82ac74097c1542fae upstream.
rcutree_report_cpu_starting() must be called before cpu_probe() to avoid
the following lockdep splat triggered by calling __alloc_pages() when
CONFIG_PROVE_RCU_LIST=y:
=============================
WARNING: suspicious RCU usage
6.6.0+ #980 Not tainted
-----------------------------
kernel/locking/lockdep.c:3761 RCU-list traversed in non-reader section!!
other info that might help us debug this:
RCU used illegally from offline CPU!
rcu_scheduler_active = 1, debug_locks = 1
1 lock held by swapper/1/0:
#0: 900000000c82ef98 (&pcp->lock){+.+.}-{2:2}, at: get_page_from_freelist+0x894/0x1790
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.6.0+ #980
Stack :
0000000000000001 9000000004f79508 9000000004893670 9000000100310000
90000001003137d0 0000000000000000 90000001003137d8 9000000004f79508
0000000000000000 0000000000000001 0000000000000000 90000000048a3384
203a656d616e2065 ca43677b3687e616 90000001002c3480 0000000000000008
000000000000009d 0000000000000000 0000000000000001 80000000ffffe0b8
000000000000000d 0000000000000033 0000000007ec0000 13bbf50562dad831
9000000005140748 0000000000000000 9000000004f79508 0000000000000004
0000000000000000 9000000005140748 90000001002bad40 0000000000000000
90000001002ba400 0000000000000000 9000000003573ec8 0000000000000000
00000000000000b0 0000000000000004 0000000000000000 0000000000070000
...
Call Trace:
[<9000000003573ec8>] show_stack+0x38/0x150
[<9000000004893670>] dump_stack_lvl+0x74/0xa8
[<900000000360d2bc>] lockdep_rcu_suspicious+0x14c/0x190
[<900000000361235c>] __lock_acquire+0xd0c/0x2740
[<90000000036146f4>] lock_acquire+0x104/0x2c0
[<90000000048a955c>] _raw_spin_lock_irqsave+0x5c/0x90
[<900000000381cd5c>] rmqueue_bulk+0x6c/0x950
[<900000000381fc0c>] get_page_from_freelist+0xd4c/0x1790
[<9000000003821c6c>] __alloc_pages+0x1bc/0x3e0
[<9000000003583b40>] tlb_init+0x150/0x2a0
[<90000000035742a0>] per_cpu_trap_init+0xf0/0x110
[<90000000035712fc>] cpu_probe+0x3dc/0x7a0
[<900000000357ed20>] start_secondary+0x40/0xb0
[<9000000004897138>] smpboot_entry+0x54/0x58
raw_smp_processor_id() is required in order to avoid calling into lockdep
before RCU has declared the CPU to be watched for readers.
See also commit
29368e093921 ("x86/smpboot: Move rcu_cpu_starting() earlier"),
commit
de5d9dae150c ("s390/smp: move rcu_cpu_starting() earlier") and commit
99f070b62322 ("powerpc/smp: Call rcu_cpu_starting() earlier").
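The shape of the fix in start_secondary(), as a hedged sketch (surrounding
secondary-CPU init calls elided):
    asmlinkage void start_secondary(void)
    {
            unsigned int cpu = raw_smp_processor_id();

            /* let RCU watch this CPU before anything that can call into
             * lockdep or allocate memory */
            rcutree_report_cpu_starting(cpu);

            cpu_probe();
            /* ... rest of secondary CPU bring-up ... */
    }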
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Thu, 21 Dec 2023 23:18:12 +0000 (18:18 -0500)]
serial: sc16is7xx: improve do/while loop in sc16is7xx_irq()
commit
d5078509c8b06c5c472a60232815e41af81c6446 upstream.
Simplify and improve readability by replacing the while(1) loop with a
do {} while loop, and by using the keep_polling variable as the exit
condition, making it more explicit.
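The resulting loop shape, as a sketch (field names approximate; it assumes
the driver's existing sc16is7xx_port_irq() reports whether more work is
pending):
    static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
    {
            struct sc16is7xx_port *s = dev_id;
            bool keep_polling;

            do {
                    int i;

                    keep_polling = false;

                    for (i = 0; i < s->devtype->nr_uart; ++i)
                            keep_polling |= sc16is7xx_port_irq(s, i);
            } while (keep_polling);

            return IRQ_HANDLED;
    }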
Fixes: 834449872105 ("sc16is7xx: Fix for multi-channel stall")
Cc: <stable@vger.kernel.org>
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231221231823.2327894-6-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Thu, 21 Dec 2023 23:18:11 +0000 (18:18 -0500)]
serial: sc16is7xx: remove obsolete loop in sc16is7xx_port_irq()
commit
ed647256e8f226241ecff7baaecdb8632ffc7ec1 upstream.
Commit
834449872105 ("sc16is7xx: Fix for multi-channel stall") changed
sc16is7xx_port_irq() from looping multiple times when there were still
interrupts to serve. It simply changed the do {} while(1) loop to a
do {} while(0) loop, which makes the loop itself now obsolete.
Clean the code by removing this obsolete do {} while(0) loop.
Fixes: 834449872105 ("sc16is7xx: Fix for multi-channel stall")
Cc: <stable@vger.kernel.org>
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231221231823.2327894-5-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Thu, 21 Dec 2023 23:18:08 +0000 (18:18 -0500)]
serial: sc16is7xx: fix invalid sc16is7xx_lines bitfield in case of probe error
commit
8a1060ce974919f2a79807527ad82ac39336eda2 upstream.
If an error occurs during probing, the sc16is7xx_lines bitfield may be left
in a state that doesn't represent the correct state of lines allocation.
For example, in a system with two SC16 devices, if an error occurs only
during probing of channel (port) B of the second device, sc16is7xx_lines
final state will be 00001011b instead of the expected 00000011b.
This is caused in part because of the "i--" in the for/loop located in
the out_ports: error path.
Fix this by checking the return value of uart_add_one_port() and setting the
line allocation bit only if it was successful. This allows refactoring the
obfuscated for(i--...) loop in the error path, properly calling
uart_remove_one_port() only when needed, and properly unsetting the line
allocation bits.
Also use the same mechanism in remove() when calling uart_remove_one_port().
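A hedged sketch of the resulting probe/error-path shape (field and label
names approximate):
    for (i = 0; i < devtype->nr_uart; ++i) {
            /* ... per-port setup ... */
            ret = uart_add_one_port(&sc16is7xx_uart, &s->p[i].port);
            if (ret)
                    goto out_ports;
            /* mark the line as taken only once the port is really added */
            set_bit(s->p[i].port.line, sc16is7xx_lines);
    }
    return 0;

    out_ports:
    for (i = 0; i < devtype->nr_uart; ++i)
            if (test_and_clear_bit(s->p[i].port.line, sc16is7xx_lines))
                    uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);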
Fixes: c64349722d14 ("sc16is7xx: support multiple devices")
Cc: <stable@vger.kernel.org>
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231221231823.2327894-2-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 11 Dec 2023 17:13:52 +0000 (12:13 -0500)]
serial: sc16is7xx: convert from _raw_ to _noinc_ regmap functions for FIFO
commit
dbf4ab821804df071c8b566d9813083125e6d97b upstream.
The SC16IS7XX IC supports a burst mode to access the FIFOs where the
initial register address is sent ($00), followed by all the FIFO data
without having to resend the register address each time. In this mode, the
IC doesn't increment the register address for each R/W byte.
The regmap_raw_read() and regmap_raw_write() are functions which can
perform IO over multiple registers. They are currently used to read/write
from/to the FIFO, and although they operate correctly in this burst mode on
the SPI bus, they would corrupt the regmap cache if it was not disabled
manually. The reason is that when the R/W size is more than 1 byte, these
functions assume that the register address is incremented and handle the
cache accordingly.
Convert FIFO R/W functions to use the regmap _noinc_ versions in order to
remove the manual cache control which was a workaround when using the
_raw_ versions. FIFO registers are properly declared as volatile so
cache will not be used/updated for FIFO accesses.
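The converted FIFO accessors then take roughly this form (a sketch;
regmap_noinc_read()/regmap_noinc_write() are the generic regmap API, the
surrounding helper names and register macros are approximated from the
driver):
    #include <linux/regmap.h>

    /* burst access: register address is not auto-incremented */
    static void fifo_read(struct regmap *regmap, u8 *rxbuf, size_t rxlen)
    {
            regmap_noinc_read(regmap, SC16IS7XX_RHR_REG, rxbuf, rxlen);
    }

    static void fifo_write(struct regmap *regmap, u8 *txbuf, size_t txlen)
    {
            regmap_noinc_write(regmap, SC16IS7XX_THR_REG, txbuf, txlen);
    }
For the _noinc_ accessors to be accepted, the regmap_config typically also
needs .readable_noinc_reg/.writeable_noinc_reg callbacks covering the FIFO
registers.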
Fixes: dfeae619d781 ("serial: sc16is7xx")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231211171353.2901416-6-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 11 Dec 2023 17:13:51 +0000 (12:13 -0500)]
serial: sc16is7xx: change EFR lock to operate on each channels
commit
4409df5866b7ff7686ba27e449ca97a92ee063c9 upstream.
Now that the driver has been converted to use one regmap per port, change
efr locking to operate on a channel basis instead of on the whole IC.
Fixes: 3837a0379533 ("serial: sc16is7xx: improve regmap debugfs by using one regmap per port")
Cc: <stable@vger.kernel.org> # 6.1.x: 3837a03 serial: sc16is7xx: improve regmap debugfs by using one regmap per port
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231211171353.2901416-5-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 11 Dec 2023 17:13:50 +0000 (12:13 -0500)]
serial: sc16is7xx: remove unused line structure member
commit
41a308cbedb2a68a6831f0f2e992e296c4b8aff0 upstream.
Now that the driver has been converted to use one regmap per port, the line
structure member is no longer used, so remove it.
Fixes: 3837a0379533 ("serial: sc16is7xx: improve regmap debugfs by using one regmap per port")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231211171353.2901416-4-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 11 Dec 2023 17:13:49 +0000 (12:13 -0500)]
serial: sc16is7xx: remove global regmap from struct sc16is7xx_port
commit
f6959c5217bd799bcb770b95d3c09b3244e175c6 upstream.
Remove global struct regmap so that it is more obvious that this
regmap is to be used only in the probe function.
Also add a comment to that effect in the probe function.
Fixes: 3837a0379533 ("serial: sc16is7xx: improve regmap debugfs by using one regmap per port")
Cc: <stable@vger.kernel.org>
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231211171353.2901416-3-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 11 Dec 2023 17:13:48 +0000 (12:13 -0500)]
serial: sc16is7xx: remove wasteful static buffer in sc16is7xx_regmap_name()
commit
6bcab3c8acc88e265c570dea969fd04f137c8a4c upstream.
Using a static buffer inside sc16is7xx_regmap_name() was a convenient and
simple way to set the regmap name without having to allocate and free a
buffer each time it is called. The drawback is that the static buffer
wastes memory for nothing once regmap is fully initialized.
Remove static buffer and use constant strings instead.
This also avoids a truncation warning when using "%d" or "%u" in snprintf
which was flagged by kernel test robot.
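A sketch of the approach (hedged; the actual helper may differ):
    static const char *sc16is7xx_regmap_name(u8 port_id)
    {
            switch (port_id) {
            case 0:
                    return "port0";
            case 1:
                    return "port1";
            default:
                    WARN_ON(true);
                    return NULL;
            }
    }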
Fixes: 3837a0379533 ("serial: sc16is7xx: improve regmap debugfs by using one regmap per port")
Cc: <stable@vger.kernel.org> # 6.1.x: 3837a03 serial: sc16is7xx: improve regmap debugfs by using one regmap per port
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231211171353.2901416-2-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hugo Villeneuve [Mon, 30 Oct 2023 21:14:47 +0000 (17:14 -0400)]
serial: sc16is7xx: improve regmap debugfs by using one regmap per port
commit
3837a0379533aabb9e4483677077479f7c6aa910 upstream.
With this current driver regmap implementation, it is hard to make sense
of the register addresses displayed using the regmap debugfs interface,
because they do not correspond to the actual register addresses documented
in the datasheet. For example, register 1 is displayed as registers 04 thru
07:
$ cat /sys/kernel/debug/regmap/spi0.0/registers
04: 10 -> Port 0, register offset 1
05: 10 -> Port 1, register offset 1
06: 00 -> Port 2, register offset 1 -> invalid
07: 00 -> port 3, register offset 1 -> invalid
...
The reason is that bits 0 and 1 of the register address correspond to the
channel (port) bits, so the register address itself starts at bit 2, and we
must 'mentally' shift each register address by 2 bits to get its real
address/offset.
Also, only channels 0 and 1 are supported by the chip, so channel mask
combinations of 10b and 11b are invalid, and the display of these
registers is useless.
This patch adds a separate regmap configuration for each port, similar to
what is done in the max310x driver, so that register addresses displayed
match the register addresses in the chip datasheet. Also, each port now has
its own debugfs entry.
Example with new regmap implementation:
$ cat /sys/kernel/debug/regmap/spi0.0-port0/registers
1: 10
2: 01
3: 00
...
$ cat /sys/kernel/debug/regmap/spi0.0-port1/registers
1: 10
2: 01
3: 00
As an added bonus, this also simplifies some operations (read/write/modify)
because it is no longer necessary to manually shift register addresses.
Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
Link: https://lore.kernel.org/r/20231030211447.974779-1-hugo@hugovil.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>