platform/kernel/linux-starfive.git
2 years ago  net: dm9051: Fix spelling mistake "eror" -> "error"
Colin Ian King [Tue, 15 Feb 2022 10:39:20 +0000 (10:39 +0000)]
net: dm9051: Fix spelling mistake "eror" -> "error"

There are spelling mistakes in debug messages. Fix them.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  dpaa2-eth: Simplify bool conversion
Yang Li [Tue, 15 Feb 2022 01:09:13 +0000 (09:09 +0800)]
dpaa2-eth: Simplify bool conversion

Fix the following coccicheck warnings:
./drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c:1199:42-47: WARNING:
conversion to bool not needed here
./drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c:1218:54-59: WARNING:
conversion to bool not needed here

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: bridge: vlan: check for errors from __vlan_del in __vlan_flush
Vladimir Oltean [Mon, 14 Feb 2022 23:36:46 +0000 (01:36 +0200)]
net: bridge: vlan: check for errors from __vlan_del in __vlan_flush

If the following call path returns an error from switchdev:

nbp_vlan_flush
-> __vlan_del
   -> __vlan_vid_del
      -> br_switchdev_port_vlan_del
-> __vlan_group_free
   -> WARN_ON(!list_empty(&vg->vlan_list));

then the deletion of the net_bridge_vlan is silently halted, which will
trigger the WARN_ON from __vlan_group_free().

The WARN_ON is rather unhelpful, because nothing about the source of the
error is printed. Add a print to catch errors from __vlan_del.
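
A minimal sketch of such a check, with hypothetical variable names and a plain
pr_err() rather than the bridge's own logging helpers:

	list_for_each_entry_safe(vlan, tmp, &vg->vlan_list, vlist) {
		int err = __vlan_del(vlan);

		if (err)
			pr_err("bridge: could not delete vlan %d: %pe\n",
			       vlan->vid, ERR_PTR(err));
	}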

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: hso: Use GFP_KERNEL instead of GFP_ATOMIC when possible
Christophe JAILLET [Mon, 14 Feb 2022 19:09:06 +0000 (20:09 +0100)]
net: hso: Use GFP_KERNEL instead of GFP_ATOMIC when possible

hso_create_device() is only called from functions that already use
GFP_KERNEL, and all of its callers are reached from the probe function.

So there is no need here to explicitly require a GFP_ATOMIC when
allocating memory.

Use GFP_KERNEL instead.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  virtio_net: Fix code indent error
Michael Catanzaro [Sun, 13 Feb 2022 08:01:41 +0000 (02:01 -0600)]
virtio_net: Fix code indent error

This patch fixes the checkpatch.pl warning:

ERROR: code indent should use tabs where possible
#3453: FILE: drivers/net/virtio_net.c:3453:
        ret = register_virtio_driver(&virtio_net_driver);$

An unnecessary newline was also removed, making line 3453 now 3452.

Signed-off-by: Michael Catanzaro <mcatanzaro.kernel@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge tag 'mlx5-updates-2022-02-14' of git://git.kernel.org/pub/scm/linux/kernel...
David S. Miller [Tue, 15 Feb 2022 10:35:09 +0000 (10:35 +0000)]
Merge tag 'mlx5-updates-2022-02-14' of git://git./linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2022-02-14

mlx5 TX routines improvements

1) From Aya and Tariq (first 3 patches): use the max size of the TX descriptor
as advertised by the device instead of the fixed value of 16 that the driver
always assumed. This is not a bug fix, since all existing devices advertise a
max value larger than 16, but the series is necessary for future proofing the driver.

2) TX Synchronization improvements from Maxim, last 12 patches

Maxim Mikityanskiy Says:
=======================
mlx5e: Synchronize ndo_select_queue with configuration changes

The kernel can call ndo_select_queue at any time, and there is no direct
way to block it. The implementation of ndo_select_queue in mlx5e expects
the parameters to be consistent and may crash (invalid pointer, division
by zero) if they aren't.

There were attempts to partially fix some of the most frequent crashes,
see commit 846d6da1fcdb ("net/mlx5e: Fix division by 0 in
mlx5e_select_queue") and commit 84c8a87402cf ("net/mlx5e: Fix division
by 0 in mlx5e_select_queue for representors"). However, they don't
address the issue completely.

This series introduces the proper synchronization mechanism between
mlx5e configuration and TX data path:

1. txq2sq updates are synchronized properly with ndo_start_xmit
   (mlx5e_xmit). The TX queue is stopped while its configuration is being
   updated, and memory barriers ensure the changes are visible before
   restarting.

2. The set of parameters needed for mlx5e_select_queue is reduced, and
   synchronization using RCU is implemented. This way, changes are
   atomic, and the state in mlx5e_select_queue is always consistent.

3. A few optimizations are applied to the new implementation of
   mlx5e_select_queue.

=======================

====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net/mlx5e: Optimize the common case condition in mlx5e_select_queue
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:53:00 +0000 (12:53 +0200)]
net/mlx5e: Optimize the common case condition in mlx5e_select_queue

Check all booleans for special queues at once, when deciding whether to
go to the fast path in mlx5e_select_queue. Pack them into bitfields to
have some room for extensibility.
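
A hypothetical sketch of the packing idea (struct and field names are
illustrative, not the driver's actual definitions):

	struct mlx5e_selq_flags {
		union {
			struct {
				u8 is_htb:1;
				u8 is_ptp:1;
			};
			u8 any_special;	/* non-zero: some special queue exists */
		};
	};

	static bool mlx5e_use_fast_path(const struct mlx5e_selq_flags *flags)
	{
		/* one load and compare instead of several boolean checks */
		return likely(!flags->any_special);
	}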

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Optimize modulo in mlx5e_select_queue
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:59 +0000 (12:52 +0200)]
net/mlx5e: Optimize modulo in mlx5e_select_queue

To improve the performance of the modulo operation (%), it's replaced by
subtracting the divisor in a loop. The modulo is used to fix up an
out-of-bounds value that might be returned by netdev_pick_tx or to
convert the queue number to the channel number when num_tcs > 1. Both
situations are unlikely, because XPS is configured not to pick higher
queues (qid >= num_channels) by default, so under normal circumstances
the flow won't go inside the loop, and it will be faster than %.

num_tcs == 8 adds at most 7 iterations to the loop. PTP adds at most 1
iteration to the loop. HTB would add at most 256 iterations (when
num_channels == 1), so there is an additional boundary check in the HTB
flow, which falls back to % if more than 7 iterations are expected.
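
A sketch of the replacement (hypothetical helper name, not the driver's exact
code); unlikely() keeps the common path branch-free:

	static u16 mlx5e_txq_fixup(u16 txq_ix, u16 num_channels)
	{
		/* rarely more than a few iterations, per the bounds above */
		while (unlikely(txq_ix >= num_channels))
			txq_ix -= num_channels;
		return txq_ix;
	}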

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Optimize mlx5e_select_queue
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:57 +0000 (12:52 +0200)]
net/mlx5e: Optimize mlx5e_select_queue

This commit optimizes mlx5e_select_queue for HTB and PTP cases by
short-cutting some checks, without sacrificing performance of the common
non-HTB non-PTP flow.

1. The HTB flow uses the fact that num_tcs == 1 to drop these checks
(it's not possible to attach both mqprio and htb as the root qdisc).
It's also enough to calculate `txq_ix % num_channels` only once, instead
of twice.

2. The PTP flow drops the check for HTB and the second calculation of
`txq_ix % num_channels`.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Use READ_ONCE/WRITE_ONCE for DCBX trust state
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:56 +0000 (12:52 +0200)]
net/mlx5e: Use READ_ONCE/WRITE_ONCE for DCBX trust state

trust_state can be written while mlx5e_select_queue() is reading it. To
avoid inconsistencies, use READ_ONCE and WRITE_ONCE for access and
updates, and touch the variable only once per operation.
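
A sketch of the access pattern, assuming the field lives at
priv->dcbx_dp.trust_state as in mlx5e:

	/* reader (mlx5e_select_queue): one marked read per operation */
	u8 trust_state = READ_ONCE(priv->dcbx_dp.trust_state);

	/* writer (DCBX configuration path): one marked write */
	WRITE_ONCE(priv->dcbx_dp.trust_state, new_trust_state);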

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Move repeating code that gets TC prio into a function
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:55 +0000 (12:52 +0200)]
net/mlx5e: Move repeating code that gets TC prio into a function

Both mlx5e_select_queue and mlx5e_select_ptpsq contain the same logic to
get user priority of a packet, according to the current trust state
settings. This commit moves this repeating code to its own function.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Use select queue parameters to sync with control flow
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:54 +0000 (12:52 +0200)]
net/mlx5e: Use select queue parameters to sync with control flow

Start using the select queue parameters introduced in the previous
commit to have proper synchronization with changing the configuration
(such as number of channels and queues). It ensures that the state that
mlx5e_select_queue() sees is always consistent and stays the same while
the function is running. Also it allows mlx5e_select_queue to stop using
data structures that weren't synchronized properly: txq2sq,
channel_tc2realtxq, port_ptp_tc2realtxq. The last two are removed
completely, as they were used only in mlx5e_select_queue.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Move mlx5e_select_queue to en/selq.c
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:52 +0000 (12:52 +0200)]
net/mlx5e: Move mlx5e_select_queue to en/selq.c

This commit moves mlx5e_select_queue and everything related to
ndo_select_queue into en/selq.c, so that all code working with selq lives
in a separate file.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Introduce select queue parameters
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:51 +0000 (12:52 +0200)]
net/mlx5e: Introduce select queue parameters

ndo_select_queue can be called at any time, and there is no way to stop
the kernel from calling it to synchronize with configuration changes
(real_num_tx_queues, num_tc). This commit introduces an internal way in
mlx5e to sync mlx5e_select_queue() with these changes. The configuration
needed by this function is stored in a struct mlx5e_selq_params, which
is modified and accessed in an atomic way using RCU methods. The whole
ndo_select_queue is called under an RCU lock, providing the necessary
guarantees.

The parameters stored in the new struct mlx5e_selq_params should only be
used from inside mlx5e_select_queue. It's the minimal set of parameters
needed for mlx5e_select_queue to do its job efficiently, derived from
parameters stored elsewhere. That means that when the configuration
change, mlx5e_selq_params may need to be updated. In such cases, the
mlx5e_selq_prepare/mlx5e_selq_apply API should be used.

struct mlx5e_selq contains two slots for the params: active and standby.
mlx5e_selq_prepare updates the standby slot, and mlx5e_selq_apply swaps
the slots in a safe atomic way using the RCU API. It integrates well
with the open/activate stages of the configuration change flow.
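
A sketch of the two-slot prepare/apply idea under the RCU scheme described
above (structure and helper names are illustrative, not the exact driver API):

	struct selq {
		struct mlx5e_selq_params __rcu *active;
		struct mlx5e_selq_params *standby;
	};

	static void selq_apply(struct selq *selq)
	{
		struct mlx5e_selq_params *old;

		/* publish the prepared standby slot to readers */
		old = rcu_replace_pointer(selq->active, selq->standby, true);
		synchronize_net();	/* wait for in-flight ndo_select_queue() */
		selq->standby = old;	/* the old slot becomes the new standby */
	}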

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Sync txq2sq updates with mlx5e_xmit for HTB queues
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:50 +0000 (12:52 +0200)]
net/mlx5e: Sync txq2sq updates with mlx5e_xmit for HTB queues

This commit makes necessary changes to guarantee that txq2sq remains
stable while mlx5e_xmit is running. Proper synchronization is added for
HTB TX queues.

All updates to txq2sq are performed while the corresponding queue is
disabled (i.e. mlx5e_xmit doesn't run on that queue). smp_wmb after each
change guarantees that mlx5e_xmit can see the updated value after the
queue is enabled. Comments explaining this mechanism are added to
mlx5e_xmit.

When an HTB SQ can be deleted (after deleting an HTB node), synchronize
with RCU to wait for mlx5e_select_queue to finish and stop selecting
that queue, before we re-enable it to avoid TX timeout watchdog alarms.
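
A sketch of the ordering contract described above (illustrative only; the real
code adds matching comments in mlx5e_xmit):

	/* control path: the queue is already stopped at this point */
	priv->txq2sq[txq_ix] = sq;
	smp_wmb();			/* publish the new mapping ...     */
	netif_tx_start_queue(txq);	/* ... before TX can run again     */

	/* mlx5e_xmit(): the queue can only be picked after it was started,
	 * so a plain load observes the updated pointer. */
	sq = priv->txq2sq[skb_get_queue_mapping(skb)];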

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Use a barrier after updating txq2sq
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:49 +0000 (12:52 +0200)]
net/mlx5e: Use a barrier after updating txq2sq

mlx5e_build_txq_maps updates txq2sq while TX queues are stopped. Add a
barrier to ensure that these changes are visible before the queues are
started and mlx5e_xmit reads from txq2sq.

This commit handles regular TX queues. Synchronization between HTB TX
queues and mlx5e_xmit is handled in the following commit.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Disable TX queues before registering the netdev
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:48 +0000 (12:52 +0200)]
net/mlx5e: Disable TX queues before registering the netdev

Normally, the queues are disabled when the channels are deactivated, and
enabled when the channels are activated. However, on register, the
channels are not active, but the queues are enabled by default. This
change fixes it, preventing mlx5e_xmit from running when the channels
are deactivated in the beginning.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Cleanup of start/stop all queues
Maxim Mikityanskiy [Tue, 25 Jan 2022 10:52:43 +0000 (12:52 +0200)]
net/mlx5e: Cleanup of start/stop all queues

mlx5e_activate_priv_channels() and mlx5e_deactivate_priv_channels()
start and stop all netdev TX queues. This commit removes the unneeded
call to netif_tx_stop_all_queues and adds explanatory comments why these
operations are needed.

netif_tx_disable() does the same thing as netif_tx_stop_all_queues(),
but it takes the TX lock, thus guaranteeing that ndo_start_xmit is not
running after it returns. That means that the netif_tx_stop_all_queues()
call is not really necessary.

The comments are improved: the TX watchdog timeout explanation is moved
to the start stage where it really belongs (it used to be in both
places, but was lost during some old refactoring) and rephrased in more
detail; the explanation for stopping all TX queues is added.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Use FW limitation for max MPW WQEBBs
Aya Levin [Mon, 10 May 2021 07:13:06 +0000 (10:13 +0300)]
net/mlx5e: Use FW limitation for max MPW WQEBBs

Calculate the maximal count of MPW WQEBBs on SQ creation and store it
there. Remove MLX5E_TX_MPW_MAX_NUM_DS and MLX5E_TX_MPW_MAX_WQEBBS.
Update mlx5e_tx_mpwqe_is_full() and mlx5e_xdp_mpqwe_is_full().

Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Read max WQEBBs on the SQ from firmware
Aya Levin [Mon, 17 Jan 2022 09:46:35 +0000 (11:46 +0200)]
net/mlx5e: Read max WQEBBs on the SQ from firmware

Prior to this patch the maximal value for max WQEBBs (WQE Basic Blocks,
where WQE is a Work Queue Element) on the TX side was assumed to be a
fixed value of 16. All firmware versions to date comply with this. In
order to be more flexible and resilient, read the corresponding capability
from FW: max_wqe_sz_sq. This value describes the maximum WQE size in bytes,
so the max number of WQEBBs is obtained by dividing it by the WQEBB byte
size. The driver then uses the smaller of 16 and the division result (see
the sketch below). This ensures synchronization between driver and
firmware and avoids unexpected behavior. Store this value on the different
SQs (Send Queues) for easy access.
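
A sketch of the calculation, assuming MLX5_SEND_WQE_BB is the WQEBB byte size
and that the effective limit is the smaller of the firmware value and the 16
WQEBBs the driver supports (helper name is an assumption):

	static u8 mlx5e_get_max_sq_wqebbs(struct mlx5_core_dev *mdev)
	{
		return min_t(u8, MLX5_SEND_WQE_MAX_WQEBBS,
			     MLX5_CAP_GEN(mdev, max_wqe_sz_sq) / MLX5_SEND_WQE_BB);
	}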

Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net/mlx5e: Remove unused tstamp SQ field
Tariq Toukan [Thu, 20 May 2021 09:31:44 +0000 (12:31 +0300)]
net/mlx5e: Remove unused tstamp SQ field

Remove tstamp pointer in mlx5e_txqsq as it's no longer used after
commit 7c39afb394c7 ("net/mlx5: PTP code migration to driver core section").

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2 years ago  net: dsa: mv88e6xxx: Fix validation of built-in PHYs on 6095/6097
Tobias Waldekranz [Sun, 13 Feb 2022 18:51:54 +0000 (19:51 +0100)]
net: dsa: mv88e6xxx: Fix validation of built-in PHYs on 6095/6097

These chips have 8 built-in FE PHYs and 3 SERDES interfaces that can
run at 1G. With the blamed commit, the built-in PHYs could no longer
be connected to, using an MII PHY interface mode.

Create a separate .phylink_get_caps callback for these chips, which
takes the FE/GE split into consideration.

Fixes: 2ee84cfefb1e ("net: dsa: mv88e6xxx: convert to phylink_generic_validate()")
Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://lore.kernel.org/r/20220213185154.3262207-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago  selftests: net: cmsg_sender: Fix spelling mistake "MONOTINIC" -> "MONOTONIC"
Colin Ian King [Mon, 14 Feb 2022 09:38:10 +0000 (09:38 +0000)]
selftests: net: cmsg_sender: Fix spelling mistake "MONOTINIC" -> "MONOTONIC"

There is a spelling mistake in an error message. Fix it.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: prestera: acl: add multi-chain support offload
Volodymyr Mytnyk [Mon, 14 Feb 2022 08:20:06 +0000 (10:20 +0200)]
net: prestera: acl: add multi-chain support offload

Add support for offloading rules added to chains with a non-zero index,
which was previously forbidden. The goto action is also offloaded,
allowing a jump to the desired chain for further processing.

Note that only the implicit chain 0 is bound to the device port(s) for
processing. The remaining chains have to be reached via goto actions.

Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'wwan-debugfs'
David S. Miller [Mon, 14 Feb 2022 14:09:59 +0000 (14:09 +0000)]
Merge branch 'wwan-debugfs'

M Chetan Kumar says:

====================
net: wwan: debugfs dev reference not dropped

This patch series contains WWAN subsystem & IOSM driver changes to
drop the dev reference obtained as part of wwan debugfs dir entry retrieval.

PATCH1: A new debugfs interface is introduced in the wwan subsystem so
that a wwan driver can drop the obtained dev reference after debugfs use.

PATCH2: The IOSM driver uses the new debugfs interface to drop the dev reference.

Please refer to commit messages for details.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: wwan: iosm: drop debugfs dev reference
M Chetan Kumar [Mon, 14 Feb 2022 07:16:53 +0000 (12:46 +0530)]
net: wwan: iosm: drop debugfs dev reference

After debugfs use, call wwan_put_debugfs_dir() to drop the
debugfs dev reference.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: wwan: debugfs obtained dev reference not dropped
M Chetan Kumar [Mon, 14 Feb 2022 07:16:52 +0000 (12:46 +0530)]
net: wwan: debugfs obtained dev reference not dropped

The WWAN driver calls wwan_get_debugfs_dir() to obtain the
WWAN debugfs dir entry. As part of this procedure it
returns a reference to the found device.

Since there is no debugfs interface available in the WWAN
subsystem, it is not possible to drop the dev reference after
debugfs use. This leads to side effects: for example, after a wwan
driver load and reload the wwan instance gets incremented
from wwanX to wwanX+1.

A new debugfs interface is added to the wwan subsystem so that
a wwan driver can drop the obtained dev reference after debugfs
use.

void wwan_put_debugfs_dir(struct dentry *dir)
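
A sketch of the intended pairing in a WWAN driver (variable names are
illustrative):

	struct dentry *dir = wwan_get_debugfs_dir(parent_dev);

	debugfs_create_dir("driver_dbg", dir);
	/* ... populate and use the debugfs entries ... */

	/* once done, drop the device reference taken by the _get call */
	wwan_put_debugfs_dir(dir);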

Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'dsa-realtek-next'
David S. Miller [Mon, 14 Feb 2022 14:06:11 +0000 (14:06 +0000)]
Merge branch 'dsa-realtek-next'

Luiz Angelo Daros de Luca says:

====================
net: dsa: realtek: realtek-mdio: reset before setup

This patch series cleans up the realtek-smi reset code and copies it to
realtek-mdio.

v1-v2)
- do not run reset code block if GPIO is missing. It was printing "RESET
  deasserted" even when there is no GPIO configured.
- reset switch after dsa_unregister_switch()
- demote reset messages to debug

v2-v3)
- do not assert the reset on gpiod_get. Do it explicitly afterwards.
- split the commit into two (one for each module)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dsa: realtek: realtek-mdio: reset before setup
Luiz Angelo Daros de Luca [Mon, 14 Feb 2022 02:20:12 +0000 (23:20 -0300)]
net: dsa: realtek: realtek-mdio: reset before setup

Some devices, like the switch in the Banana Pi BPI R64, only start to answer
after a HW reset. This is the same reset code as in realtek-smi.

Reported-by: Frank Wunderlich <frank-w@public-files.de>
Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Tested-by: Frank Wunderlich <frank-w@public-files.de>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Acked-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dsa: realtek: realtek-smi: clean-up reset
Luiz Angelo Daros de Luca [Mon, 14 Feb 2022 02:20:11 +0000 (23:20 -0300)]
net: dsa: realtek: realtek-smi: clean-up reset

When reset GPIO was missing, the driver was still printing an info
message and still trying to assert the reset. Although gpiod_set_value()
will silently ignore calls with NULL gpio_desc, it is better to make it
clear the driver might allow gpio_desc to be NULL.

The initial value for the reset pin was changed to GPIOD_OUT_LOW,
followed by a gpiod_set_value() asserting the reset. This way, it will
be easier to spot if and where the reset really happens.

A new "asserted RESET" message was added just after the reset is
asserted, similar to the existing "deasserted RESET" message. Both
messages were demoted to dbg. The code comment is not needed anymore.

Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Acked-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: blackhole_netdev needs snmp6 counters
Ido Schimmel [Mon, 14 Feb 2022 02:10:56 +0000 (18:10 -0800)]
ipv6: blackhole_netdev needs snmp6 counters

Whenever rt6_uncached_list_flush_dev() swaps rt->rt6_idev
to the blackhole device, parts of IPv6 stack might still need
to increment one SNMP counter.

Root cause, patch from Ido, changelog from Eric :)

This bug suggests that we need to audit rt->rt6_idev usages
and make sure they are properly using RCU protection.

Fixes: e5f80fcf869a ("ipv6: give an IPv6 dev to blackhole_netdev")
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dsa: realtek: rename macro to match filename
Luiz Angelo Daros de Luca [Sat, 12 Feb 2022 02:25:34 +0000 (23:25 -0300)]
net: dsa: realtek: rename macro to match filename

The macro was missed while renaming realtek-smi.h to realtek.h.

Fixes: f5f119077b1c (net: dsa: realtek: rename realtek_smi to)
Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Acked-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'netdev-RT'
David S. Miller [Mon, 14 Feb 2022 13:38:36 +0000 (13:38 +0000)]
Merge branch 'netdev-RT'

Sebastian Andrzej Siewior says:

====================
net: dev: PREEMPT_RT fixups.

this series removes or replaces preempt_disable() and local_irq_save()
sections which are problematic on PREEMPT_RT.
Patch 2 makes netif_rx() work from any context after I found suggestions
for it in an old thread. Should that work, then the context-specific
variants could be removed.

v2…v3:
   - #2
     - Export __netif_rx() so it can be used by everyone.
     - Add a lockdep assert to check for interrupt context.
     - Update the kernel doc and mention that the skb is posted to
       backlog NAPI.
     - Use __netif_rx() also in drivers/net/*.c.
     - Added Toke's review tag and kept Eric's despite the changes
       made.

v1…v2:
  - #1 and #2
    - merge patches 1 and 2 from the series (as per Toke).
    - updated patch description and corrected the first commit number (as
      per Eric).
   - #2
     - Provide netif_rx() as in v1 and additionally __netif_rx() without
       local_bh disable()+enable() for the loopback driver. __netif_rx() is
       not exported (loopback is built-in only) so it won't be used
       drivers. If this doesn't work then we can still export/ define a
       wrapper as Eric suggested.
     - Added a comment that netif_rx() considered legacy.
   - #3
     - Moved ____napi_schedule() into rps_ipi_queued() and
       renamed it napi_schedule_rps().
   https://lore.kernel.org/all/20220204201259.1095226-1-bigeasy@linutronix.de/

v1:
   https://lore.kernel.org/all/20220202122848.647635-1-bigeasy@linutronix.de
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dev: Make rps_lock() disable interrupts.
Sebastian Andrzej Siewior [Fri, 11 Feb 2022 23:38:39 +0000 (00:38 +0100)]
net: dev: Make rps_lock() disable interrupts.

Disabling interrupts and in the RPS case locking input_pkt_queue is
split into local_irq_disable() and optional spin_lock().

This breaks on PREEMPT_RT because the spinlock_t typed lock can not be
acquired with disabled interrupts.
The sections in which the lock is acquired are usually short, in the sense
that they do not cause long and unbounded latencies. One exception is the
skb_flow_limit() invocation which may invoke a BPF program (and may
require sleeping locks).

By moving local_irq_disable() + spin_lock() into rps_lock(), we can keep
interrupts disabled on !PREEMPT_RT and enabled on PREEMPT_RT kernels.
Without RPS on a PREEMPT_RT kernel, the needed synchronisation happens
as part of local_bh_disable() on the local CPU.
____napi_schedule() is only invoked if sd is from the local CPU. Replace
it with __napi_schedule_irqoff() which already disables interrupts on
PREEMPT_RT as needed. Move this call to rps_ipi_queued() and rename the
function to napi_schedule_rps as suggested by Jakub.
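
A sketch along the lines of the helpers described above (exact names in the
patch may differ):

	static inline void rps_lock_irq_disable(struct softnet_data *sd)
	{
		if (IS_ENABLED(CONFIG_RPS))
			spin_lock_irq(&sd->input_pkt_queue.lock);
		else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
			local_irq_disable();
	}

	static inline void rps_unlock_irq_enable(struct softnet_data *sd)
	{
		if (IS_ENABLED(CONFIG_RPS))
			spin_unlock_irq(&sd->input_pkt_queue.lock);
		else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
			local_irq_enable();
	}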

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dev: Makes sure netif_rx() can be invoked in any context.
Sebastian Andrzej Siewior [Fri, 11 Feb 2022 23:38:38 +0000 (00:38 +0100)]
net: dev: Makes sure netif_rx() can be invoked in any context.

Dave suggested a while ago (eleven years by now) "Let's make netif_rx()
work in all contexts and get rid of netif_rx_ni()". Eric agreed and
pointed out that modern devices should use netif_receive_skb() to avoid
the overhead.
In the meantime someone added another variant, netif_rx_any_context(),
which behaves as suggested.

netif_rx() must be invoked with disabled bottom halves to ensure that
pending softirqs, which were raised within the function, are handled.
netif_rx_ni() can be invoked only from process context (bottom halves
must be enabled) because the function handles pending softirqs without
checking if bottom halves were disabled or not.
netif_rx_any_context() dispatches to one of the former functions by checking
in_interrupt().

netif_rx() could be taught to handle both cases (disabled and enabled
bottom halves) by simply disabling bottom halves while invoking
netif_rx_internal(). The local_bh_enable() invocation will then invoke
pending softirqs only if the BH-disable counter drops to zero.

Eric is concerned about the overhead of BH-disable+enable especially in
regard to the loopback driver. As critical as this driver is, it will
receive a shortcut to avoid the additional overhead which is not needed.

Add a local_bh_disable() section in netif_rx() to ensure softirqs are
handled if needed.
Provide __netif_rx() which does not disable BH and has a lockdep assert
to ensure that interrupts are disabled. Use this shortcut in the
loopback driver and in drivers/net/*.c.
Make netif_rx_ni() and netif_rx_any_context() invoke netif_rx() so they
can be removed once there are no users left.
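
A simplified sketch of the resulting split (tracing, statistics and the exact
lockdep annotation omitted; not the literal kernel code):

	int __netif_rx(struct sk_buff *skb)
	{
		/* per the patch: a lockdep assert checks that BHs (or irqs)
		 * are already disabled when this variant is used */
		return netif_rx_internal(skb);
	}
	EXPORT_SYMBOL(__netif_rx);

	int netif_rx(struct sk_buff *skb)
	{
		int ret;

		local_bh_disable();
		ret = __netif_rx(skb);
		local_bh_enable();	/* runs any softirq raised above */
		return ret;
	}
	EXPORT_SYMBOL(netif_rx);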

Link: https://lkml.kernel.org/r/20100415.020246.218622820.davem@davemloft.net
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: dev: Remove preempt_disable() and get_cpu() in netif_rx_internal().
Sebastian Andrzej Siewior [Fri, 11 Feb 2022 23:38:37 +0000 (00:38 +0100)]
net: dev: Remove preempt_disable() and get_cpu() in netif_rx_internal().

The preempt_disable() section was introduced in commit
    cece1945bffcf ("net: disable preemption before call smp_processor_id()")

in case this function is invoked from preemptible context, and
because get_cpu() had been added later on.

The get_cpu() usage was added in commit
    b0e28f1effd1d ("net: netif_rx() must disable preemption")

because ip_dev_loopback_xmit() invoked netif_rx() with enabled preemption
causing a warning in smp_processor_id(). The function netif_rx() should
only be invoked from an interrupt context which implies disabled
preemption. The commit
   e30b38c298b55 ("ip: Fix ip_dev_loopback_xmit()")

addressed this and replaced netif_rx() with netif_rx_ni() in
ip_dev_loopback_xmit().

Based on the discussion on the list, the former patch (b0e28f1effd1d)
should not have been applied, only the latter (e30b38c298b55).

Remove get_cpu() and preempt_disable() since the function is supposed to
be invoked from context with stable per-CPU pointers. Bottom halves have
to be disabled at this point because the function may raise softirqs
which need to be processed.

Link: https://lkml.kernel.org/r/20100415.013347.98375530.davem@davemloft.net
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ice: Simplify tracking status of RDMA support
Dave Ertman [Fri, 11 Feb 2022 18:26:03 +0000 (10:26 -0800)]
ice: Simplify tracking status of RDMA support

The status of support for RDMA is currently being tracked with two
separate status flags. This is unnecessary with the current state of
the driver.

Simplify status tracking down to a single flag.

Rename the helper function to denote the RDMA specific status and
universally use the helper function to test the status bit.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Leszek Kaliszczuk <leszek.kaliszczuk@intel.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'ocelot-stats'
David S. Miller [Mon, 14 Feb 2022 13:24:29 +0000 (13:24 +0000)]
Merge branch 'ocelot-stats'

Colin Foster says:

====================
use bulk reads for ocelot statistics

Ocelot loops over memory regions to gather stats on different ports.
These regions are mostly contiguous, and are ordered. This patch set
uses that information to break the stats reads into regions that can be
read in bulk.

The motivation is general cleanup, but also SPI. Performing two
back-to-back reads on a SPI bus requires toggling the CS line, holding,
re-toggling the CS line, sending 3 address bytes, sending N padding
bytes, then actually performing the read. Bulk reads could reduce almost
all of that overhead, but require that the reads are performed via
regmap_bulk_read.
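
A sketch of what a bulk read of one contiguous counter region looks like with
the regmap API (variable names are illustrative):

	u32 vals[16];	/* one contiguous region of counters */
	int err;

	err = regmap_bulk_read(map, region_base, vals, ARRAY_SIZE(vals));
	if (err)
		return err;
	/* vals[] now holds the whole region, fetched in a single bus
	 * transaction instead of sixteen */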

Verified with eth0 hooked up to the CPU port:
NIC statistics:
     Good Rx Frames: 905
     Rx Octets: 78848
     Good Tx Frames: 691
     Tx Octets: 52516
     Rx + Tx 65-127 Octet Frames: 1574
     Rx + Tx 128-255 Octet Frames: 22
     Net Octets: 131364
     Rx DMA chan 0: head_enqueue: 1
     Rx DMA chan 0: tail_enqueue: 1032
     Rx DMA chan 0: busy_dequeue: 628
     Rx DMA chan 0: good_dequeue: 905
     Tx DMA chan 0: head_enqueue: 346
     Tx DMA chan 0: tail_enqueue: 345
     Tx DMA chan 0: misqueued: 345
     Tx DMA chan 0: empty_dequeue: 346
     Tx DMA chan 0: good_dequeue: 691
     p00_rx_octets: 52516
     p00_rx_unicast: 691
     p00_rx_frames_65_to_127_octets: 691
     p00_tx_octets: 78848
     p00_tx_unicast: 905
     p00_tx_frames_65_to_127_octets: 883
     p00_tx_frames_128_255_octets: 22
     p00_tx_green_prio_0: 905

And with swp2 connected to swp3 with STP enabled:
NIC statistics:
     tx_packets: 379
     tx_bytes: 19708
     rx_packets: 1
     rx_bytes: 46
     rx_octets: 64
     rx_multicast: 1
     rx_frames_below_65_octets: 1
     rx_classified_drops: 1
     tx_octets: 44630
     tx_multicast: 387
     tx_broadcast: 290
     tx_frames_below_65_octets: 379
     tx_frames_65_to_127_octets: 294
     tx_frames_128_255_octets: 4
     tx_green_prio_0: 298
     tx_green_prio_7: 379
NIC statistics:
     tx_packets: 1
     tx_bytes: 52
     rx_packets: 713
     rx_bytes: 34148
     rx_octets: 46982
     rx_multicast: 407
     rx_broadcast: 306
     rx_frames_below_65_octets: 399
     rx_frames_65_to_127_octets: 310
     rx_frames_128_to_255_octets: 4
     rx_classified_drops: 399
     rx_green_prio_0: 314
     tx_octets: 64
     tx_multicast: 1
     tx_frames_below_65_octets: 1
     tx_green_prio_7: 1

v1 > v2: reword commit messages
v2 > v3: correctly mark this for net-next when sending
v3 > v4: calloc array instead of zalloc per review
v4 > v5:
    Apply CR suggestions for whitespace
    Fix calloc / zalloc mixup
    Properly destroy workqueues
    Add third commit to split long macros
v5 > v6:
    Fix functionality - v5 was improperly tested
    Add bugfix for ethtool mutex lock
    Remove unnecessary ethtool stats reads
v6 > v7:
    Remove mutex bug patch that was applied via net
    Rename function based on CR
    Add missed error check
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: mscc: ocelot: use bulk reads for stats
Colin Foster [Sun, 13 Feb 2022 19:12:54 +0000 (11:12 -0800)]
net: mscc: ocelot: use bulk reads for stats

Create and utilize bulk regmap reads instead of single access for gathering
stats. The background reading of statistics happens frequently, and over
a few contiguous memory regions.

High speed PCIe buses and MMIO access will probably see negligible
performance increase. Lower speed buses like SPI and I2C could see
significant performance increase, since the bus configuration and register
access times account for a large percentage of data transfer time.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: mscc: ocelot: add ability to perform bulk reads
Colin Foster [Sun, 13 Feb 2022 19:12:53 +0000 (11:12 -0800)]
net: mscc: ocelot: add ability to perform bulk reads

Regmap supports bulk register reads. Ocelot does not. This patch adds
support for Ocelot to invoke bulk regmap reads. That will allow any driver
that performs consecutive reads over memory regions to optimize that
access.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: ocelot: align macros for consistency
Colin Foster [Sun, 13 Feb 2022 19:12:52 +0000 (11:12 -0800)]
net: ocelot: align macros for consistency

In the ocelot.h file, several read / write macros were split across
multiple lines, while others weren't. Split all macros that exceed the 80
character column width and match the style of the rest of the file.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: mscc: ocelot: remove unnecessary stat reading from ethtool
Colin Foster [Sun, 13 Feb 2022 19:12:51 +0000 (11:12 -0800)]
net: mscc: ocelot: remove unnecessary stat reading from ethtool

The ocelot_update_stats function only needs to read from one port, yet it
was updating the stats for all ports. Update to only read the stats that
are necessary.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: Add reasons for skb drops to __udp6_lib_rcv
David Ahern [Fri, 11 Feb 2022 17:15:07 +0000 (09:15 -0800)]
ipv6: Add reasons for skb drops to __udp6_lib_rcv

Add reasons to __udp6_lib_rcv for skb drops. The only twist is that the
NO_SOCKET takes precedence over the CSUM or other counters for that
path (motivation behind this patch - csum counter was misleading).

Signed-off-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'dm9051'
David S. Miller [Mon, 14 Feb 2022 11:18:47 +0000 (11:18 +0000)]
Merge branch 'dm9051'

Joseph CHAMG says:

====================
ADD DM9051 ETHERNET DRIVER

DM9051 is an SPI interface chip; it needs
CS/MOSI/MISO/clock plus an interrupt GPIO pin.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: Add dm9051 driver
Joseph CHAMG [Fri, 11 Feb 2022 09:27:56 +0000 (17:27 +0800)]
net: Add dm9051 driver

Add the Davicom dm9051 SPI Ethernet driver. The driver works on
device platforms that provide an SPI master.

Signed-off-by: Joseph CHAMG <josright123@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  dt-bindings: net: Add Davicom dm9051 SPI ethernet controller
Joseph CHAMG [Fri, 11 Feb 2022 09:27:55 +0000 (17:27 +0800)]
dt-bindings: net: Add Davicom dm9051 SPI ethernet controller

This is a new YAML-based binding file for configuring the Davicom dm9051
with device tree.

Signed-off-by: Joseph CHAMG <josright123@gmail.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net/smc: Add comment for smc_tx_pending
Tony Lu [Fri, 11 Feb 2022 06:52:21 +0000 (14:52 +0800)]
net/smc: Add comment for smc_tx_pending

The previous patch introduces smc_tx_pending(), a lock-free version of
smc_tx_work(), to avoid unnecessary lock contention; it is expected to be
called with the lock already held. So this adds a comment to remind people
to keep an eye on the locking.

Suggested-by: Stefan Raspl <raspl@linux.ibm.com>
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Generate netlink notification when default IPv6 route preference changes
Kalash Nainwal [Thu, 10 Feb 2022 22:09:35 +0000 (14:09 -0800)]
Generate netlink notification when default IPv6 route preference changes

Generate RTM_NEWROUTE netlink notification when the route preference
 changes on an existing kernel generated default route in response to
 RA messages. Currently netlink notifications are generated only when
 this route is added or deleted but not when the route preference
 changes, which can cause userspace routing application state to go
 out of sync with kernel.

Signed-off-by: Kalash Nainwal <kalash@arista.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net/sched: act_police: more accurate MTU policing
Davide Caratti [Thu, 10 Feb 2022 17:56:08 +0000 (18:56 +0100)]
net/sched: act_police: more accurate MTU policing

in current Linux, MTU policing does not take into account that packets at
the TC ingress have the L2 header pulled. Thus, the same TC police action
(with the same value of tcfp_mtu) behaves differently for ingress/egress.
In addition, the full GSO size is compared to tcfp_mtu: as a consequence,
the policer drops GSO packets even when individual segments have the L2 +
L3 + L4 + payload length below the configured value of tcfp_mtu.

Improve the accuracy of MTU policing as follows (see the sketch below):
 - account for mac_len for non-GSO packets at TC ingress.
 - compare MTU threshold with the segmented size for GSO packets.
Also, add a kselftest that verifies the correct behavior.
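
A sketch of the two checks above, using helpers that already exist in the
kernel (not necessarily the patch's exact code):

	if (skb_is_gso(skb)) {
		/* compare the threshold against each segment, L2 included */
		if (!skb_gso_validate_mac_len(skb, tcfp_mtu))
			goto drop;
	} else {
		u32 len = qdisc_pkt_len(skb);

		/* at TC ingress the mac header has been pulled: add it back */
		if (skb_at_tc_ingress(skb))
			len += skb->mac_len;
		if (len > tcfp_mtu)
			goto drop;
	}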

Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  etherdevice: Adjust ether_addr* prototypes to silence -Wstringop-overead
Kees Cook [Sat, 12 Feb 2022 17:14:49 +0000 (09:14 -0800)]
etherdevice: Adjust ether_addr* prototypes to silence -Wstringop-overead

With GCC 12, -Wstringop-overread was warning about an implicit cast from
char[6] to char[8]. However, the extra 2 bytes are always thrown away,
alignment doesn't matter, and the risk of hitting the edge of unallocated
memory has been accepted, so this prototype can just be converted to a
regular char *. Silences:

net/core/dev.c: In function 'bpf_prog_run_generic_xdp':
net/core/dev.c:4618:21: warning: 'ether_addr_equal_64bits' reading 8 bytes from a region of size 6 [-Wstringop-overread]
 4618 |         orig_host = ether_addr_equal_64bits(eth->h_dest, skb->dev->dev_addr);
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
net/core/dev.c:4618:21: note: referencing argument 1 of type 'const u8[8]' {aka 'const unsigned char[8]'}
net/core/dev.c:4618:21: note: referencing argument 2 of type 'const u8[8]' {aka 'const unsigned char[8]'}
In file included from net/core/dev.c:91:
include/linux/etherdevice.h:375:20: note: in a call to function 'ether_addr_equal_64bits'
  375 | static inline bool ether_addr_equal_64bits(const u8 addr1[6+2],
      |                    ^~~~~~~~~~~~~~~~~~~~~~~

Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Link: https://lore.kernel.org/netdev/20220212090811.uuzk6d76agw2vv73@pengutronix.de
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan966x: Fix when CONFIG_IPV6 is not set
Horatiu Vultur [Sat, 12 Feb 2022 20:03:43 +0000 (21:03 +0100)]
net: lan966x: Fix when CONFIG_IPV6 is not set

When CONFIG_IPV6 is not set, then the linking of the lan966x driver
fails with the following error:

drivers/net/ethernet/microchip/lan966x/lan966x_main.c:444: undefined
reference to `ipv6_mc_check_mld'

The fix consists of also adding a check for IS_ENABLED(CONFIG_IPV6).

Fixes: 47aeea0d57e80c ("net: lan966x: Implement the callback SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan966x: Fix when CONFIG_PTP_1588_CLOCK is compiled as module
Horatiu Vultur [Sat, 12 Feb 2022 20:45:44 +0000 (21:45 +0100)]
net: lan966x: Fix when CONFIG_PTP_1588_CLOCK is compiled as module

When CONFIG_PTP_1588_CLOCK is compiled as a module, the linking of
the lan966x driver fails because it can't find references to the following
functions: 'ptp_clock_index', 'ptp_clock_register' and
'ptp_clock_unregister'.

The fix consists of adding CONFIG_PTP_1588_CLOCK_OPTIONAL as a
dependency for the driver.

Fixes: d096459494a887 ("net: lan966x: Add support for ptp clocks")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Merge branch 'lan743x-enhancements'
David S. Miller [Sun, 13 Feb 2022 12:07:26 +0000 (12:07 +0000)]
Merge branch 'lan743x-enhancements'

Raju Lakkaraju says:

====================
net: lan743x: PCI11010 / PCI11414 devices Enhancements

This patch series adds support for the Ethernet function of the PCI11010 / PCI11414 devices to the LAN743x driver.
The PCI1xxxx family of devices consists of a PCIe switch with a variety of embedded PCI endpoints on its downstream ports.
The PCI11010 / PCI11414 devices include an Ethernet 10/100/1000/2500 function as one of those embedded endpoints.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan743x: Add support for Clause-45 MDIO PHY management
Raju Lakkaraju [Sat, 12 Feb 2022 15:53:15 +0000 (21:23 +0530)]
net: lan743x: Add support for Clause-45 MDIO PHY management

Add support for Clause-45 MDIO PHY management

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan743x: Add support for SGMII interface
Raju Lakkaraju [Sat, 12 Feb 2022 15:53:14 +0000 (21:23 +0530)]
net: lan743x: Add support for SGMII interface

This change facilitates the selection between the SGMII and (R)GMII
interfaces.

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan743x: Increase MSI(x) vectors to 16 and Int de-assertion timers to 10
Raju Lakkaraju [Sat, 12 Feb 2022 15:53:13 +0000 (21:23 +0530)]
net: lan743x: Increase MSI(x) vectors to 16 and Int de-assertion timers to 10

Increase MSI / MSI-X vectors supported from 8 to 16 and
Interrupt De-assertion timers from 8 to 10

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan743x: Add support for 4 Tx queues
Raju Lakkaraju [Sat, 12 Feb 2022 15:53:12 +0000 (21:23 +0530)]
net: lan743x: Add support for 4 Tx queues

Add support for 4 Tx queues

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: lan743x: Add PCI11010 / PCI11414 device IDs
Raju Lakkaraju [Sat, 12 Feb 2022 15:53:11 +0000 (21:23 +0530)]
net: lan743x: Add PCI11010 / PCI11414 device IDs

The PCI11010/PCI11414 devices are an enhancement of the Ethernet LAN743x chip family.

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  net: wwan: iosm: Enable M.2 7360 WWAN card support
M Chetan Kumar [Thu, 10 Feb 2022 15:34:45 +0000 (21:04 +0530)]
net: wwan: iosm: Enable M.2 7360 WWAN card support

This patch enables Intel M.2 7360 WWAN card support on
IOSM Driver.

The control path implementation is reused, whereas the data path
uses a different protocol called MUX
Aggregation. The major portion of this patch covers the MUX
Aggregation protocol implementation used for IP traffic
communication.

For the M.2 7360 WWAN card, the driver exposes 2 wwan AT ports for
control communication. The user space application or the
modem manager uses a wwan AT port for data path establishment.

During probe, the driver reads the mux protocol device capability
register to learn the mux protocol version supported by the device,
based on which the right mux protocol is initialized for data
path communication.

An overview of an Aggregation Protocol
1>  An IP packet is encapsulated with 16 octet padding header
    to form a Datagram & the start offset of the Datagram is
    indexed into Datagram Header (DH).
2>  Multiple such Datagrams are composed & the start offset of
    each DH is indexed into Datagram Table Header (DTH).
3>  The Datagram Table (DT) is IP session specific & table_length
    item in DTH holds the number of composed datagrams pertaining
    to that particular IP session.
4>  And finally the offset of first DTH is indexed into DBH (Datagram
    Block Header).

So in the TX/RX flow a Datagram Block (Datagram Block Header + Payload) is
exchanged between driver & device (a hypothetical layout sketch follows).
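
A purely hypothetical C sketch of the nesting described above (the driver's
real structures and field names differ):

	struct mux_datagram_hdr {		/* DH, 16 octets per IP packet */
		__le32 datagram_offset;		/* start of the padded packet  */
		__le16 datagram_length;
		u8     pad[10];
	} __packed;

	struct mux_datagram_table_hdr {		/* DTH, one per IP session;    */
		__le16 table_length;		/* followed by table_length DH */
		__le32 next_dth_offset;		/* offsets for that session    */
	} __packed;

	struct mux_datagram_block_hdr {		/* DBH, one per exchanged block */
		__le32 block_length;
		__le32 first_dth_offset;	/* where the first DTH starts  */
	} __packed;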

Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  Revert "net: ethernet: cavium: use div64_u64() instead of do_div()"
Jakub Kicinski [Fri, 11 Feb 2022 02:05:44 +0000 (18:05 -0800)]
Revert "net: ethernet: cavium: use div64_u64() instead of do_div()"

This reverts commit 038fcdaf0470de89619bc4cc199e329391e6566c.

Christophe points out div64_u64() and do_div() have different
calling conventions. One updates the param, the other returns
the result.

Reported-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/all/056a7276-c6f0-cd7e-9e46-1d8507a0b6b1@wanadoo.fr/
Fixes: 038fcdaf0470 ("net: ethernet: cavium: use div64_u64() instead of do_div()")
Link: https://lore.kernel.org/r/20220211020544.3262694-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago  net: moxa: use GFP_KERNEL
Julia Lawall [Thu, 10 Feb 2022 20:42:15 +0000 (21:42 +0100)]
net: moxa: use GFP_KERNEL

Platform_driver probe functions aren't called with locks
held and thus don't need GFP_ATOMIC. Use GFP_KERNEL instead.

Problem found with Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Link: https://lore.kernel.org/r/20220210204223.104181-1-Julia.Lawall@inria.fr
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago  octeontx2-af: fix array bound error
Hariprasad Kelam [Fri, 11 Feb 2022 15:55:39 +0000 (21:25 +0530)]
octeontx2-af: fix array bound error

This patch fixes the below error by using the proper data type.

drivers/net/ethernet/marvell/octeontx2/af/rpm.c: In function
'rpm_cfg_pfc_quanta_thresh':
include/linux/find.h:40:23: error: array subscript 'long unsigned
int[0]' is partly outside array bounds of 'u16[1]' {aka 'short unsigned
int[1]'} [-Werror=array-bounds]
   40 |                 val = *addr & GENMASK(size - 1, offset);

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Link: https://lore.kernel.org/r/20220211155539.13931-1-hkelam@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago  Merge tag 'wireless-next-2022-02-11' of git://git.kernel.org/pub/scm/linux/kernel...
David S. Miller [Fri, 11 Feb 2022 14:19:23 +0000 (14:19 +0000)]
Merge tag 'wireless-next-2022-02-11' of git://git./linux/kernel/git/wireless/wireless-next

wireless-next patches for v5.18

First set of patches for v5.18, with both wireless and stack patches.
rtw89 now has AP mode support and wcn36xx has survey support. But
otherwise pretty normal.

Major changes:

ath11k

* add LDPC FEC type in 802.11 radiotap header

* enable RX PPDU stats in monitor co-exist mode

wcn36xx

* implement survey reporting

brcmfmac

* add CYW43570 PCIE device

rtw88

* rtw8821c: enable RFE 6 devices

rtw89

* AP mode support

mt76

* mt7916 support

* background radar detection support

2 years ago  Merge branch 'ipv6-loopback'
David S. Miller [Fri, 11 Feb 2022 11:44:27 +0000 (11:44 +0000)]
Merge branch 'ipv6-loopback'

Eric Dumazet says:

====================
ipv6: remove addrconf reliance on loopback

The second patch in this series removes the IPv6 requirement that the netns
loopback device be the last device dismantled.

This was needed because rt6_uncached_list_flush_dev()
and ip6_dst_ifdown() had to switch dst dev to a known
device (loopback).

Instead of loopback, we can use the (hidden) blackhole_netdev
which is also always there.

This will allow future simplifications of netdev_run_todo()
and other parts of the stack like default_device_exit_batch().

Last two patches are optimizations for both IP families.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv4: add (struct uncached_list)->quarantine list
Eric Dumazet [Thu, 10 Feb 2022 21:42:31 +0000 (13:42 -0800)]
ipv4: add (struct uncached_list)->quarantine list

This is an optimization to keep the per-cpu lists as short as possible:

Whenever rt_flush_dev() changes one rtable dst.dev
matching the disappearing device, it can transfer the object
to a quarantine list, waiting for a final rt_del_uncached_list().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: add (struct uncached_list)->quarantine list
Eric Dumazet [Thu, 10 Feb 2022 21:42:30 +0000 (13:42 -0800)]
ipv6: add (struct uncached_list)->quarantine list

This is an optimization to keep the per-cpu lists as short as possible:

Whenever rt6_uncached_list_flush_dev() changes one rt6_info
matching the disappearing device, it can transfer the object
to a quarantine list, waiting for a final rt6_uncached_list_del().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: give an IPv6 dev to blackhole_netdev
Eric Dumazet [Thu, 10 Feb 2022 21:42:29 +0000 (13:42 -0800)]
ipv6: give an IPv6 dev to blackhole_netdev

IPv6 addrconf notifiers want the loopback device to
be the last device being dismantled at netns deletion.

This caused many limitations and workarounds.

Back in linux-5.3, Mahesh added a per host blackhole_netdev
that can be used whenever we need to make sure objects no longer
refer to a disappearing device.

If we attach to blackhole_netdev an ip6_ptr (allocate an idev),
then we can use this special device (which is never freed)
in place of the loopback_dev (which can be freed).

This will permit improvements in netdev_run_todo() and other parts
of the stack where we had steps to make sure loopback_dev was
the last device to disappear.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: get rid of net->ipv6.rt6_stats->fib_rt_uncache
Eric Dumazet [Thu, 10 Feb 2022 21:42:28 +0000 (13:42 -0800)]
ipv6: get rid of net->ipv6.rt6_stats->fib_rt_uncache

This counter has never been visible, so there is little point
trying to maintain it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  dsa: mv88e6xxx: make serdes SGMII/Fiber tx amplitude configurable
Holger Brunck [Thu, 10 Feb 2022 17:48:23 +0000 (18:48 +0100)]
dsa: mv88e6xxx: make serdes SGMII/Fiber tx amplitude configurable

The mv88e6352, mv88e6240 and mv88e6176 have a serdes interface. This patch
allows configuring the output swing to a desired value via the
phy-handle of the port. The value, which is peak-to-peak, has to be
specified in microvolts. As the chips only support eight dedicated
values, we return -EINVAL if the value in the DTS does not match one of
these values.

Signed-off-by: Holger Brunck <holger.brunck@hitachienergy.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Marek Behún <kabel@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  dt-bindings: phy: Add `tx-p2p-microvolt` property binding
Marek Behún [Thu, 10 Feb 2022 17:48:22 +0000 (18:48 +0100)]
dt-bindings: phy: Add `tx-p2p-microvolt` property binding

Common PHYs and network PCSes often have the possibility to specify
peak-to-peak voltage on the differential pair - the default voltage
sometimes needs to be changed for a particular board.

Add properties `tx-p2p-microvolt` and `tx-p2p-microvolt-names` for this
purpose. The second property is needed to specify the mode for the
corresponding voltage in the `tx-p2p-microvolt` property, if the voltage
is to be used only for a specific mode. More voltage-mode pairs can be
specified.

Example usage with only one voltage (it will be used for all supported
PHY modes, the `tx-p2p-microvolt-names` property is not needed in this
case):

  tx-p2p-microvolt = <915000>;

Example usage with voltages for multiple modes:

  tx-p2p-microvolt = <915000>, <1100000>, <1200000>;
  tx-p2p-microvolt-names = "2500base-x", "usb", "pcie";

Add these properties into a separate file phy/transmit-amplitude.yaml,
which should be referenced by any binding that uses it.

Signed-off-by: Marek Behún <kabel@kernel.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago  ipv6: Reject routes configurations that specify dsfield (tos)
Guillaume Nault [Thu, 10 Feb 2022 15:08:08 +0000 (16:08 +0100)]
ipv6: Reject routes configurations that specify dsfield (tos)

The ->rtm_tos option is normally used to route packets based on both
the destination address and the DS field. However it's ignored for
IPv6 routes. Setting ->rtm_tos for IPv6 is thus invalid as the route
is going to work only on the destination address anyway, so it won't
behave as specified.
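
For illustration, the rejection roughly amounts to a check of this shape
in the IPv6 route netlink parsing path (a sketch only; the helper name
and the exact extack wording are assumptions, not the actual hunk):

#include <linux/errno.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* sketch: reject any IPv6 route request that sets rtm_tos */
static int sketch_check_rtm_tos(const struct rtmsg *rtm,
                                struct netlink_ext_ack *extack)
{
    if (rtm->rtm_tos) {
        NL_SET_ERR_MSG(extack,
                       "Invalid dsfield (tos): option not available for IPv6");
        return -EINVAL;
    }
    return 0;
}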

Suggested-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'dsa-cleanup'
David S. Miller [Fri, 11 Feb 2022 11:17:33 +0000 (11:17 +0000)]
Merge branch 'dsa-cleanup'

Vladimir Oltean says:

====================
More aggressive DSA cleanup

This series deletes some code which is apparently not needed.

I've had these patches in my tree for a while, and testing on my boards
didn't reveal any issues.

Compared to the RFC v1 series, the only change is the addition of patch 3.
https://patchwork.kernel.org/project/netdevbpf/cover/20220107184842.550334-1-vladimir.oltean@nxp.com/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: dsa: remove lockdep class for DSA slave address list
Vladimir Oltean [Thu, 10 Feb 2022 13:45:00 +0000 (15:45 +0200)]
net: dsa: remove lockdep class for DSA slave address list

Since commit 2f1e8ea726e9 ("net: dsa: link interfaces with the DSA
master to get rid of lockdep warnings"), suggested by Cong Wang, the
DSA interfaces and their master have different dev->nested_level, which
makes netif_addr_lock() stop complaining about potentially recursive
locking on the same lock class.

So we no longer need DSA slave interfaces to have their own lockdep
class.

Cc: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: dsa: remove lockdep class for DSA master address list
Vladimir Oltean [Thu, 10 Feb 2022 13:44:59 +0000 (15:44 +0200)]
net: dsa: remove lockdep class for DSA master address list

Since commit 2f1e8ea726e9 ("net: dsa: link interfaces with the DSA
master to get rid of lockdep warnings"), suggested by Cong Wang, the
DSA interfaces and their master have different dev->nested_level, which
makes netif_addr_lock() stop complaining about potentially recursive
locking on the same lock class.

So we no longer need DSA masters to have their own lockdep class.

Cc: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: dsa: remove ndo_get_phys_port_name and ndo_get_port_parent_id
Vladimir Oltean [Thu, 10 Feb 2022 13:44:58 +0000 (15:44 +0200)]
net: dsa: remove ndo_get_phys_port_name and ndo_get_port_parent_id

There are no legacy ports; DSA registers a devlink instance with ports
unconditionally for all switch drivers. Therefore, delete the old-style
ndo operations used for determining bridge forwarding domains.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'smc-optimizations'
David S. Miller [Fri, 11 Feb 2022 11:14:58 +0000 (11:14 +0000)]
Merge branch 'smc-optimizations'

D. Wythe says:

====================
net/smc: Optimizing performance in short-lived scenarios

This patch set aims to optimize the performance of SMC in short-lived
connection scenarios, which is quite unsatisfactory right now.

In our benchmark, we test it with the following command:

./wrk -c 10000 -t 4 -H 'Connection: Close' -d 20 http://smc-server

Current performance figures look like this:

Running 20s test @ http://11.213.45.6
  4 threads and 10000 connections
  4956 requests in 20.06s, 3.24MB read
  Socket errors: connect 0, read 0, write 672, timeout 0
Requests/sec:    247.07
Transfer/sec:    165.28KB

There are many reasons for this phenomenon. This patch set doesn't
solve all of them, but the problem is well alleviated with it applied.

Patch 1/5  (Make smc_tcp_listen_work() independent) :

Separate smc_tcp_listen_work() from smc_listen_work() and make them
independent of each other, so that a busy SMC handshake can no longer
affect the acceptance of new TCP connections. This avoids discarding a
large number of TCP connections once the queue is overstocked, which
would otherwise raise the connection establishment time.

Patch 2/5 (Limit SMC backlog connections):

Since patch 1 has separated smc_tcp_listen_work() from
smc_listen_work(), an unrestricted TCP accept has come into being. This
patch tries to put a limit on SMC backlog connections, referring to the
implementation of TCP.

Patch 3/5 (Limit SMC visits when handshake workqueue congested):

Considering the complexity of the SMC handshake right now, its
performance in short-lived connection scenarios is still quite poor,
even though this may not be the main use case of SMC. This patch tries
to constrain the SMC handshake when the handshake workqueue is
congested, which in our opinion is the sign of SMC handshakes stacking
up.

Patch 4/5 (Dynamic control handshake limitation by socket options)

This patch allows applications to dynamically control SMC handshake
limitation. Since SMC didn't support its own socket options before,
this patch also has to add support for SMC's own socket options.

Patch 5/5 (Add global configure for handshake limitation by netlink)

This patch provides a way to benefit from handshake limitation without
modifying any application code, which is quite useful for most
existing applications.

After this patch set, the performance figures look like this:

Running 20s test @ http://11.213.45.6
  4 threads and 10000 connections
  693253 requests in 20.10s, 452.88MB read
Requests/sec:  34488.13
Transfer/sec:     22.53MB

That's quite a good performance improvement, about 6 to 7 times in my
environment.
---
changelog:
v1 -> v2:
- fix compile warning
- fix invalid dependencies in kconfig
v2 -> v3:
- correct spelling mistakes
- fix useless variable declare
v3 -> v4
- make smc_tcp_ls_wq be static
v4 -> v5
- add dynamic control for SMC auto fallback by socket options
- add global configure for SMC auto fallback through netlink
v5 -> v6
- move auto fallback to net namespace scope
- remove auto fallback attribute in SMC_GEN_SYS_INFO
- add independent attributes for auto fallback
v6 -> v7
- fix wording and the naming issues, rename 'auto fallback' to handshake
  limitation.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Add global configure for handshake limitation by netlink
D. Wythe [Thu, 10 Feb 2022 09:11:38 +0000 (17:11 +0800)]
net/smc: Add global configure for handshake limitation by netlink

We can control SMC handshake limitation through socket options, but
that means applications which need it must modify their code, which is
quite troublesome for many existing applications. This patch makes the
global default value of SMC handshake limitation configurable through
netlink, providing a way to constrain the handshake without modifying
any application code.

Suggested-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Dynamic control handshake limitation by socket options
D. Wythe [Thu, 10 Feb 2022 09:11:37 +0000 (17:11 +0800)]
net/smc: Dynamic control handshake limitation by socket options

This patch adds dynamic control of SMC handshake limitation for every
SMC socket. In a production environment, the same application may
handle different service types and may have different requirements
regarding SMC handshake limitation.

This patch uses socket options to accomplish this. Since we don't have
a socket option level for SMC yet, it has to be implemented at the same
time.

This patch does the following:

- add new socket option level: SOL_SMC.
- add new SMC socket option: SMC_LIMIT_HS.
- provide getter/setter for SMC socket options.
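
A minimal userspace sketch of how an application could enable the
limitation on its socket (illustrative only; it assumes AF_SMC, SOL_SMC
and SMC_LIMIT_HS come from the installed kernel headers, and the
fallback values below are assumptions):

#include <stdio.h>
#include <sys/socket.h>

#ifndef AF_SMC
#define AF_SMC 43          /* assumed value */
#endif
#ifndef SOL_SMC
#define SOL_SMC 286        /* assumed value */
#endif
#ifndef SMC_LIMIT_HS
#define SMC_LIMIT_HS 1     /* assumed value */
#endif

int main(void)
{
    int fd = socket(AF_SMC, SOCK_STREAM, 0);
    int on = 1;

    if (fd < 0) {
        perror("socket(AF_SMC)");
        return 1;
    }
    /* limit SMC handshakes on this socket when the handshake
     * workqueue is congested, falling back to plain TCP
     */
    if (setsockopt(fd, SOL_SMC, SMC_LIMIT_HS, &on, sizeof(on)) < 0)
        perror("setsockopt(SMC_LIMIT_HS)");
    return 0;
}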

Link: https://lore.kernel.org/all/20f504f961e1a803f85d64229ad84260434203bd.1644323503.git.alibuda@linux.alibaba.com/
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Limit SMC visits when handshake workqueue congested
D. Wythe [Thu, 10 Feb 2022 09:11:36 +0000 (17:11 +0800)]
net/smc: Limit SMC visits when handshake workqueue congested

This patch intends to provide a mechanism to constrain incoming SMC
connections according to the pressure of the SMC handshake process.
At present, frequent incoming connections will be backlogged in the SMC
handshake queue, raising the connection establishment time, which is
quite unacceptable for applications based on short-lived connections.

There are two ways to implement this mechanism:

1. Put limitation after TCP established.
2. Put limitation before TCP established.

In the first way, we need to wait for and receive the CLC messages that
the client will potentially send, and then actively reply with a decline
message. In a sense this is also a sort of SMC handshake, and it affects
the connection establishment time on its way.

In the second way, the only problem is that we need to inject SMC logic
into TCP when it is about to reply to the incoming SYN; since we already
do that, it no longer seems to be a problem. And the advantage is
obvious: few additional steps are required to enforce the constraint.

This patch uses the second way. After this patch, connections beyond the
constraint will not be given any SMC indication, and SMC will not be
involved in any of their subsequent processing.

Link: https://lore.kernel.org/all/1641301961-59331-1-git-send-email-alibuda@linux.alibaba.com/
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Limit backlog connections
D. Wythe [Thu, 10 Feb 2022 09:11:35 +0000 (17:11 +0800)]
net/smc: Limit backlog connections

The current implementation does not handle backlog semantics; one
potential risk is that the server will be flooded by an infinite number
of connections, even if the client is SMC-incapable.

This patch puts a limit on backlog connections. Referring to the TCP
implementation, we divide SMC connections into two categories:

1. Half SMC connections, which include all connections with TCP
established but SMC not yet established.

2. Full SMC connections, which include all SMC established connections.

For half SMC connections, since they all start with TCP established, we
can achieve our goal by putting a limit before TCP is established.
Referring to the implementation of TCP, this limit is based not only on
the half SMC connections but also on the full connections, so it is also
a constraint on full SMC connections.

For full SMC connections, although we know exactly where they start,
it's quite hard to put a limit before that point. The easiest way is to
block and wait before receiving the SMC confirm CLC message, but that
path runs under the protection of smc_server_lgr_pending, a global lock,
which would apply this limit to the entire host instead of a single
listen socket. Another way is to drop the full connections, but
considering the cost of SMC connections, we prefer to keep full SMC
connections.

Even so, a limit on full SMC connections still exists; see the
description of half SMC connections above.

After this patch, the backlog connection limits look like:

For SMC:

1. A client with SMC capability can make at most 2 * backlog full SMC
   connections, or 1 * backlog half SMC connections plus 1 * backlog
   full SMC connections.

2. A client without SMC capability can only make 1 * backlog half TCP
   connections and 1 * backlog full TCP connections.
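
A minimal sketch of a TCP-style backlog check that counts both
categories against the listen backlog (hypothetical helper and
parameter, not the actual patch):

#include <net/sock.h>

/* hypothetical: 'pending_full_smc' would count full SMC connections the
 * server is still holding, in addition to the regular TCP accept backlog
 */
static bool smc_backlog_full_sketch(const struct sock *parent,
                                    int pending_full_smc)
{
    return READ_ONCE(parent->sk_ack_backlog) + pending_full_smc >
           READ_ONCE(parent->sk_max_ack_backlog);
}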

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: Make smc_tcp_listen_work() independent
D. Wythe [Thu, 10 Feb 2022 09:11:34 +0000 (17:11 +0800)]
net/smc: Make smc_tcp_listen_work() independent

In a multithreaded benchmark with 10K connections, the backend TCP
connections are established very slowly, and lots of TCP connections
stay in the SYN_SENT state.

Client: smc_run wrk -c 10000 -t 4 http://server

the netstat output on the server host looks like:
    145042 times the listen queue of a socket overflowed
    145042 SYNs to LISTEN sockets dropped

One reason for this issue is that smc_tcp_listen_work() shares the
same workqueue (smc_hs_wq) with smc_listen_work(), while
smc_listen_work() blocks waiting for the SMC connection to be
established. Once the workqueue becomes congested, it blocks the
accept() on the TCP listen socket.

This patch creates an independent workqueue (smc_tcp_ls_wq) for
smc_tcp_listen_work(), separating it from smc_listen_work(), which is
quite acceptable considering that smc_tcp_listen_work() runs very fast.
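
A rough sketch of the separation (illustrative only; based on the names
mentioned above, not the exact patch hunks):

#include <linux/errno.h>
#include <linux/workqueue.h>

/* dedicated, fast workqueue for accepting TCP connections; the
 * heavyweight SMC handshake keeps using smc_hs_wq
 */
static struct workqueue_struct *smc_tcp_ls_wq;

static int smc_tcp_ls_wq_create(void)
{
    smc_tcp_ls_wq = alloc_workqueue("smc_tcp_ls_wq", 0, 0);
    return smc_tcp_ls_wq ? 0 : -ENOMEM;
}

/* the listen work would then be queued on the new workqueue, e.g.
 *     queue_work(smc_tcp_ls_wq, &lsmc->tcp_listen_work);
 */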

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodt-bindings: net: dsa: realtek: convert to YAML schema, add MDIO
Luiz Angelo Daros de Luca [Wed, 9 Feb 2022 18:41:16 +0000 (15:41 -0300)]
dt-bindings: net: dsa: realtek: convert to YAML schema, add MDIO

Schema changes:

- support for mdio-connected switches (mdio driver), recognized by
  checking the presence of property "reg"
- new compatible strings for rtl8367s and rtl8367rb
- "interrupt-controller" was not added as a required property. It might
  still work polling the ports when missing.

Examples changes:

- renamed "switch_intc" to make it unique between examples
- removed "dsa-mdio" from mdio compatible property
- renamed phy@0 to ethernet-phy@0 (not tested with real HW), since
  phy@ requires #phy-cells

Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski [Fri, 11 Feb 2022 01:29:56 +0000 (17:29 -0800)]
Merge git://git./linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge tag 'net-5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds [Fri, 11 Feb 2022 00:01:22 +0000 (16:01 -0800)]
Merge tag 'net-5.17-rc4' of git://git./linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter and can.

Current release - new code bugs:

   - sparx5: fix get_stat64 out-of-bound access and crash

   - smc: fix netdev ref tracker misuse

  Previous releases - regressions:

   - eth: ixgbevf: require large buffers for build_skb on 82599VF, avoid
     overflows

   - eth: ocelot: fix all IP traffic getting trapped to CPU with PTP
     over IP

   - bonding: fix rare link activation misses in 802.3ad mode

  Previous releases - always broken:

   - tcp: fix tcp sock mem accounting in zero-copy corner cases

   - remove the cached dst when uncloning an skb dst and its metadata,
     since we only have one ref it'd lead to a UaF

   - netfilter:
      - conntrack: don't refresh sctp entries in closed state
      - conntrack: re-init state for retransmitted syn-ack, avoid
        connection establishment getting stuck with strange stacks
      - ctnetlink: disable helper autoassign, avoid it getting lost
      - nft_payload: don't allow transport header access for fragments

   - dsa: fix use of devres for mdio throughout drivers

   - eth: amd-xgbe: disable interrupts during pci removal

   - eth: dpaa2-eth: unregister netdev before disconnecting the PHY

   - eth: ice: fix IPIP and SIT TSO offload"

* tag 'net-5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (53 commits)
  net: dsa: mv88e6xxx: fix use-after-free in mv88e6xxx_mdios_unregister
  net: mscc: ocelot: fix mutex lock error during ethtool stats read
  ice: Avoid RTNL lock when re-creating auxiliary device
  ice: Fix KASAN error in LAG NETDEV_UNREGISTER handler
  ice: fix IPIP and SIT TSO offload
  ice: fix an error code in ice_cfg_phy_fec()
  net: mpls: Fix GCC 12 warning
  dpaa2-eth: unregister the netdev before disconnecting from the PHY
  skbuff: cleanup double word in comment
  net: macb: Align the dma and coherent dma masks
  mptcp: netlink: process IPv6 addrs in creating listening sockets
  selftests: mptcp: add missing join check
  net: usb: qmi_wwan: Add support for Dell DW5829e
  vlan: move dev_put into vlan_dev_uninit
  vlan: introduce vlan_dev_free_egress_priority
  ax25: fix UAF bugs of net_device caused by rebinding operation
  net: dsa: fix panic when DSA master device unbinds on shutdown
  net: amd-xgbe: disable interrupts during pci removal
  tipc: rate limit warning for received illegal binding update
  net: mdio: aspeed: Add missing MODULE_DEVICE_TABLE
  ...

2 years agoMerge tag 'linux-kselftest-fixes-5.17-rc4' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Thu, 10 Feb 2022 23:42:48 +0000 (15:42 -0800)]
Merge tag 'linux-kselftest-fixes-5.17-rc4' of git://git./linux/kernel/git/shuah/linux-kselftest

Pull Kselftest fixes from Shuah Khan:
 "Build and run-time fixes to pidfd, clone3, and ir tests"

* tag 'linux-kselftest-fixes-5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests/ir: fix build with ancient kernel headers
  selftests: fixup build warnings in pidfd / clone3 tests
  pidfd: fix test failure due to stack overflow on some arches

2 years agoMerge tag 'linux-kselftest-kunit-fixes-5.17-rc4' of git://git.kernel.org/pub/scm...
Linus Torvalds [Thu, 10 Feb 2022 23:39:59 +0000 (15:39 -0800)]
Merge tag 'linux-kselftest-kunit-fixes-5.17-rc4' of git://git./linux/kernel/git/shuah/linux-kselftest

Pull KUnit fixes from Shuah Khan:
 "Fixes to the test and usage documentation"

* tag 'linux-kselftest-kunit-fixes-5.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  Documentation: KUnit: Fix usage bug
  kunit: fix missing f in f-string in run_checks.py

2 years agonet: dsa: mv88e6xxx: fix use-after-free in mv88e6xxx_mdios_unregister
Vladimir Oltean [Thu, 10 Feb 2022 17:40:17 +0000 (19:40 +0200)]
net: dsa: mv88e6xxx: fix use-after-free in mv88e6xxx_mdios_unregister

Since struct mv88e6xxx_mdio_bus *mdio_bus is the bus->priv of something
allocated with mdiobus_alloc_size(), this means that mdiobus_free(bus)
will free the memory backing the mdio_bus as well. Therefore, the
mdio_bus->list element is freed memory, but we continue to iterate
through the list of MDIO buses using that list element.

To fix this, use the proper list iterator that handles element deletion
by keeping a copy of the list element next pointer.
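
A generic sketch of the safe-iteration pattern relied on here
(hypothetical element type, not the actual driver hunk):

#include <linux/list.h>
#include <linux/slab.h>

struct entry {
    struct list_head list;
};

static void free_all(struct list_head *head)
{
    struct entry *e, *tmp;

    /* list_for_each_entry() would dereference e->list.next after e
     * has been freed; the _safe variant caches the next pointer in
     * tmp before the loop body runs
     */
    list_for_each_entry_safe(e, tmp, head, list) {
        list_del(&e->list);
        kfree(e);
    }
}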

Fixes: f53a2ce893b2 ("net: dsa: mv88e6xxx: don't use devres for mdiobus")
Reported-by: Rafael Richter <rafael.richter@gin.de>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://lore.kernel.org/r/20220210174017.3271099-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net...
Jakub Kicinski [Thu, 10 Feb 2022 19:45:35 +0000 (11:45 -0800)]
Merge branch '100GbE' of git://git./linux/kernel/git/tnguy/net-queue

Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2022-02-10

Dan Carpenter propagates an error in FEC configuration.

Jesse fixes TSO offloads of IPIP and SIT frames.

Dave adds a dedicated LAG unregister function to resolve a KASAN error
and moves auxiliary device re-creation after LAG removal to the service
task to avoid issues with RTNL lock.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  ice: Avoid RTNL lock when re-creating auxiliary device
  ice: Fix KASAN error in LAG NETDEV_UNREGISTER handler
  ice: fix IPIP and SIT TSO offload
  ice: fix an error code in ice_cfg_phy_fec()
====================

Link: https://lore.kernel.org/r/20220210170515.2609656-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mscc: ocelot: fix mutex lock error during ethtool stats read
Colin Foster [Thu, 10 Feb 2022 15:04:51 +0000 (07:04 -0800)]
net: mscc: ocelot: fix mutex lock error during ethtool stats read

An ongoing workqueue populates the stats buffer. At the same time, a user
might query the statistics. While writing to the buffer is mutex-locked,
reading from the buffer wasn't. This could lead to buggy reads by ethtool.

This patch fixes the former blamed commit, but the bug was introduced in
the latter.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Fixes: 1e1caa9735f90 ("ocelot: Clean up stats update deferred work")
Fixes: a556c76adc052 ("net: mscc: Add initial Ocelot switch support")
Reported-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://lore.kernel.org/all/20220210150451.416845-2-colin.foster@in-advantage.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dsa: qca8k: fix noderef.cocci warnings
kernel test robot [Wed, 9 Feb 2022 22:13:04 +0000 (06:13 +0800)]
net: dsa: qca8k: fix noderef.cocci warnings

drivers/net/dsa/qca8k.c:422:37-43: ERROR: application of sizeof to pointer

 sizeof when applied to a pointer typed expression gives the size of
 the pointer

Generated by: scripts/coccinelle/misc/noderef.cocci

Fixes: 90386223f44e ("net: dsa: qca8k: add support for larger read/write size with mgmt Ethernet")
CC: Ansuel Smith <ansuelsmth@gmail.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20220209221304.GA17529@d2214a582157
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoice: Avoid RTNL lock when re-creating auxiliary device
Dave Ertman [Fri, 21 Jan 2022 00:27:56 +0000 (16:27 -0800)]
ice: Avoid RTNL lock when re-creating auxiliary device

If a call to re-create the auxiliary device happens in a context that has
already taken the RTNL lock, then the call flow that recreates the auxiliary
device can hang if there is another attempt to claim the RTNL lock by the
auxiliary driver.

To avoid this, any call to re-create auxiliary devices that comes from
a source that is holding the RTNL lock (e.g. a netdev notifier when an
interface exits a bond) should execute in a separate thread.  To
accomplish this, add a flag to the PF that will be evaluated in the
service task and dealt with there.
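
A generic sketch of the deferral pattern described above (the flag and
function names are assumptions, not the actual ice hunks):

#include <linux/bitops.h>

/* hypothetical flag bit in the PF state bitmap */
#define SKETCH_FLAG_PLUG_AUX_DEV    0

/* notifier context (may hold RTNL): only record the request */
static void sketch_request_aux_recreate(unsigned long *pf_flags)
{
    set_bit(SKETCH_FLAG_PLUG_AUX_DEV, pf_flags);
}

/* service task context (RTNL not held): do the actual work */
static void sketch_service_task(unsigned long *pf_flags)
{
    if (test_and_clear_bit(SKETCH_FLAG_PLUG_AUX_DEV, pf_flags)) {
        /* re-create the auxiliary device here */
    }
}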

Fixes: f9f5301e7e2d ("ice: Register auxiliary device to provide RDMA")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Reviewed-by: Jonathan Toppins <jtoppins@redhat.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: Fix KASAN error in LAG NETDEV_UNREGISTER handler
Dave Ertman [Tue, 18 Jan 2022 21:08:20 +0000 (13:08 -0800)]
ice: Fix KASAN error in LAG NETDEV_UNREGISTER handler

Currently, the same handler is called for both a NETDEV_BONDING_INFO
LAG unlink notification as for a NETDEV_UNREGISTER call.  This is
causing a problem though, since the netdev_notifier_info passed has
a different structure depending on which event is passed.  The problem
manifests as a call trace from a BUG: KASAN stack-out-of-bounds error.

Fix this by creating a handler specific to NETDEV_UNREGISTER that only
is passed valid elements in the netdev_notifier_info struct for the
NETDEV_UNREGISTER event.

Also included is the removal of an unbalanced dev_put on the peer_netdev
and related braces.

Fixes: 6a8b357278f5 ("ice: Respond to a NETDEV_UNREGISTER event for LAG")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Acked-by: Jonathan Toppins <jtoppins@redhat.com>
Tested-by: Sunitha Mekala <sunithax.d.mekala@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: fix IPIP and SIT TSO offload
Jesse Brandeburg [Fri, 14 Jan 2022 23:38:39 +0000 (15:38 -0800)]
ice: fix IPIP and SIT TSO offload

The driver was avoiding offload for IPIP (at least) frames due to
parsing the inner header offsets incorrectly when trying to check
lengths.

This length check works for VXLAN frames but fails on IPIP frames
because skb_transport_offset points to the inner header in IPIP
frames, which means the subtraction of transport_header from
inner_network_header returns a negative value (-20).

With the code before this patch, everything continued to work, but GSO
was being used to segment, causing throughputs of 1.5Gb/s per thread.
After this patch, throughput is more like 10Gb/s per thread for IPIP
traffic.

Fixes: e94d44786693 ("ice: Implement filter sync, NDO operations and bump version")
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoice: fix an error code in ice_cfg_phy_fec()
Dan Carpenter [Fri, 7 Jan 2022 08:02:06 +0000 (11:02 +0300)]
ice: fix an error code in ice_cfg_phy_fec()

Propagate the error code from ice_get_link_default_override() instead
of returning success.

Fixes: ea78ce4dab05 ("ice: add link lenient and default override support")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agonet/switchdev: use struct_size over open coded arithmetic
Minghao Chi (CGEL ZTE) [Thu, 10 Feb 2022 06:10:08 +0000 (06:10 +0000)]
net/switchdev: use struct_size over open coded arithmetic

Replace zero-length array with flexible-array member and make use
of the struct_size() helper in kmalloc(). For example:

struct switchdev_deferred_item {
    ...
    unsigned long data[];
};

Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.
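
An illustrative sketch of the struct_size() pattern (hypothetical
structure and helper, not the actual switchdev hunk):

#include <linux/list.h>
#include <linux/overflow.h>
#include <linux/slab.h>

struct sketch_item {
    struct list_head list;
    unsigned long data[];    /* flexible-array member */
};

static struct sketch_item *sketch_item_alloc(size_t n_words)
{
    struct sketch_item *p;

    /* struct_size(p, data, n_words) evaluates to
     * sizeof(*p) + n_words * sizeof(p->data[0]), saturating on
     * overflow instead of wrapping around
     */
    p = kmalloc(struct_size(p, data, n_words), GFP_KERNEL);
    return p;
}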

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi (CGEL ZTE) <chi.minghao@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv4: Reject again rules with high DSCP values
Guillaume Nault [Thu, 10 Feb 2022 12:24:51 +0000 (13:24 +0100)]
ipv4: Reject again rules with high DSCP values

Commit 563f8e97e054 ("ipv4: Stop taking ECN bits into account in
fib4-rules") replaced the validation test on frh->tos. While the new
test is stricter for ECN bits, it doesn't detect the use of high order
DSCP bits. This would be fine if IPv4 could properly handle them. But
currently, most IPv4 lookups are done with the three high DSCP bits
masked. Therefore, using these bits doesn't lead to the expected
result.

Let's reject such configurations again, so that nobody starts to use
them or make assumptions about how the stack handles the three high
order DSCP bits in fib4 rules.

Fixes: 563f8e97e054 ("ipv4: Stop taking ECN bits into account in fib4-rules")
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoocteontx2-pf: Add TC feature for VFs
Subbaraya Sundeep [Thu, 10 Feb 2022 11:51:44 +0000 (17:21 +0530)]
octeontx2-pf: Add TC feature for VFs

This patch adds the TC feature for VFs also. When MCAM
rules are allocated for a VF, either TC or ntuple
filters can be used. Below are the commands to use the
TC feature for a VF (say lbk0):

devlink dev param set pci/0002:01:00.1 name mcam_count value 16 \
 cmode runtime
ethtool -K lbk0 hw-tc-offload on
ifconfig lbk0 up
tc qdisc add dev lbk0 ingress
tc filter add dev lbk0 parent ffff: protocol ip flower skip_sw \
 dst_mac 98:03:9b:83:aa:12 action police rate 100Mbit burst 5000

Also, to modify any fields of the hardware context with the
NIX_AQ_INSTOP_WRITE command, the corresponding masks of those
fields must be set as required by the hardware. This was missing in
the ingress rate-limiting context. This patch sets those masks as well.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: make net->dev_unreg_count atomic
Eric Dumazet [Thu, 10 Feb 2022 02:59:32 +0000 (18:59 -0800)]
net: make net->dev_unreg_count atomic

Having to acquire rtnl from netdev_run_todo() for every dismantled
device is not desirable when/if rtnl is under stress.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: mpls: Fix GCC 12 warning
Victor Erminpour [Thu, 10 Feb 2022 00:28:38 +0000 (16:28 -0800)]
net: mpls: Fix GCC 12 warning

When building with automatic stack variable initialization, GCC 12
complains about variables declared inside a switch body but outside of
any case statement. Move the variable outside the switch, which
silences the warning:

./net/mpls/af_mpls.c:1624:21: error: statement will never be executed [-Werror=switch-unreachable]
  1624 |                 int err;
       |                     ^~~
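
A generic before/after sketch of the pattern being fixed (hypothetical
event handler, not the af_mpls.c hunk):

#include <linux/errno.h>

/* Before (simplified): a declaration placed right after the opening
 * brace of the switch is never "executed", so GCC 12 warns when
 * automatic stack variable initialization is enabled:
 *
 *     switch (event) {
 *         int err;
 *     case NETDEV_UP:
 *         ...
 *     }
 */

/* After: declare the variable in the enclosing block instead */
static int sketch_handle_event(int event)
{
    int err = 0;

    switch (event) {
    case 1:
        err = -EINVAL;
        break;
    default:
        break;
    }
    return err;
}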

Signed-off-by: Victor Erminpour <victor.erminpour@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoqed: prevent a fw assert during device shutdown
Venkata Sudheer Kumar Bhavaraju [Wed, 9 Feb 2022 19:28:14 +0000 (11:28 -0800)]
qed: prevent a fw assert during device shutdown

Device firmware can assert if the device shutdown path in the driver
encounters async events from the mfw (processed in
qed_mcp_handle_events()) after qed_mcp_unload_req() returns.
A call to qed_mcp_unload_req() currently marks the device as inactive
and thus stops any new events, but there is a window where in-flight
events might still be received by the driver.

To prevent this race condition, atomically set QED_MCP_BYPASS_PROC_BIT
in qed_mcp_unload_req() to make sure qed_mcp_handle_events() ignores all
events. Wait for any event that might already be in-process to complete
by monitoring QED_MCP_IN_PROCESSING_BIT.
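
A generic sketch of the bypass/drain pattern described above (the bit
indices, helper names and polling interval are assumptions, not the
actual qed hunks):

#include <linux/bitops.h>
#include <linux/delay.h>

/* bit indices are assumptions for this sketch */
#define QED_MCP_BYPASS_PROC_BIT        0
#define QED_MCP_IN_PROCESSING_BIT      1

/* shutdown side: tell the event handler to ignore everything, then wait
 * for any event already being processed to finish
 */
static void sketch_bypass_and_drain(unsigned long *mcp_flags)
{
    set_bit(QED_MCP_BYPASS_PROC_BIT, mcp_flags);
    while (test_bit(QED_MCP_IN_PROCESSING_BIT, mcp_flags))
        usleep_range(1000, 2000);
}

/* event side: bail out early once bypass has been requested */
static bool sketch_events_bypassed(unsigned long *mcp_flags)
{
    return test_bit(QED_MCP_BYPASS_PROC_BIT, mcp_flags);
}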

Signed-off-by: Pravin Kumar Ganesh Dhende <pdhende@marvell.com>
Signed-off-by: Venkata Sudheer Kumar Bhavaraju <vbhavaraju@marvell.com>
Signed-off-by: Alok Prasad <palok@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>