Russell King (Oracle) [Tue, 19 Oct 2021 10:00:04 +0000 (11:00 +0100)]
net: phylink: rejig SFP interface selection in ksettings_set()
Commit
ea269a6f7207 ("net: phylink: Update SFP selected interface on
advertising changes") added a better solution to selecting the
interface mode for SFPs using the advertisement mask. This method will
work for mvneta and mvpp2 when selecting between 2500base-X and
1000base-X without needing to use the basex helper, or indicate that
we support both 1000base-X and 2500base-X when in either of these two
interface modes.
Hence, we need to eliminate the validation prior to selecting the
interface, otherwise when we clean up mvneta's validation function, we
will end up locking to 2500base-X as we validate with an interface mode
of PHY_INTERFACE_MODE_2500BASEX.
The supported mask will already have been reduced down to the union of
support for the SFP and MAC, so we can be confident that the
advertisement mask is already appropriately restricted. We only need to
select the appropriate interface, and then revalidate with the new
interface mode.
We also get rid of the check for pl->sfp_port; it is meaningless here
because it doesn't get cleared when a module is removed, so it doesn't
indicate whether a module is present. Just rely on pl->sfp_bus.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
luo penghao [Mon, 18 Oct 2021 08:51:54 +0000 (08:51 +0000)]
e1000e: Remove redundant statement
This assignment statement is redundant, because execution will
reach the label "set_itr_now" regardless.
The clang_analyzer complains as follows:
drivers/net/ethernet/intel/e1000e/netdev.c:2552:3 warning:
Value stored to 'current_itr' is never read.
Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: luo penghao <luo.penghao@zte.com.cn>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Oct 2021 11:46:25 +0000 (12:46 +0100)]
Merge branch 'eth_hw_addr_gen-for-switches'
Jakub Kicinski says:
====================
ethernet: add eth_hw_addr_gen() for switches
While doing the last polishing of the drivers/ethernet
changes I realized we have a handful of drivers offsetting
some base MAC addr by an id. So I decided to add a helper
for it. The helper takes care of wrapping, which is probably
not 100% necessary but seems like a good idea. And it saves
driver-side LoC (the diffstat is actually negative if we
compare against the changes I'd have to make if I was to
convert all these drivers to not operate directly on
netdev->dev_addr).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:07 +0000 (14:10 -0700)]
ethernet: sparx5: use eth_hw_addr_gen()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:06 +0000 (14:10 -0700)]
ethernet: mlxsw: use eth_hw_addr_gen()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:05 +0000 (14:10 -0700)]
ethernet: fec: use eth_hw_addr_gen()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:04 +0000 (14:10 -0700)]
ethernet: prestera: use eth_hw_addr_gen()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Vadym and Taras report that the current behavior of the driver
is not exactly what is expected, and that it's better to add in the
port id, like other drivers do.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:03 +0000 (14:10 -0700)]
ethernet: ocelot: use eth_hw_addr_gen()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 21:10:02 +0000 (14:10 -0700)]
ethernet: add a helper for assigning port addresses
We have 5 drivers which offset base MAC addr by port id.
Create a helper for them.
This helper takes care of overflows, which some drivers
did not do; please complain if that's going to break
anything!
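For illustration, a sketch of how a driver might use the new helper; the
wrapper below and its names are made up, only eth_hw_addr_gen() itself comes
from this series:

#include <linux/etherdevice.h>

/* Sketch: derive a per-port MAC by offsetting the switch base MAC by the
 * port id, letting the helper take care of byte overflow/wrapping.
 */
static void my_switch_port_set_mac(struct net_device *ndev,
                                   const u8 base_mac[ETH_ALEN],
                                   unsigned int port_id)
{
        eth_hw_addr_gen(ndev, base_mac, port_id);
}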
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Shannon Nelson <snelson@pensando.io>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Oct 2021 11:41:48 +0000 (12:41 +0100)]
Merge branch 'dev_addr-conversions-part-two'
Jakub Kicinski says:
====================
ethernet: manual netdev->dev_addr conversions (part 2)
Manual conversions of Ethernet drivers writing directly
to netdev->dev_addr (part 2 out of 3).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:32 +0000 (07:29 -0700)]
ethernet: smsc: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Break the address up into an array on the stack, then call
eth_hw_addr_set().
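The conversion pattern in sketch form (the register accessor below is
hypothetical, not the actual smsc code):

#include <linux/etherdevice.h>

/* hypothetical hardware accessor standing in for the driver's register read */
static u8 my_read_mac_byte(int i)
{
        return 0x02 + i;
}

static void my_get_hw_mac(struct net_device *ndev)
{
        u8 addr[ETH_ALEN];
        int i;

        for (i = 0; i < ETH_ALEN; i++)
                addr[i] = my_read_mac_byte(i);
        eth_hw_addr_set(ndev, addr);    /* instead of writing ndev->dev_addr directly */
}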
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:31 +0000 (07:29 -0700)]
ethernet: smc91x: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:30 +0000 (07:29 -0700)]
ethernet: sis900: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:29 +0000 (07:29 -0700)]
ethernet: sis190: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:28 +0000 (07:29 -0700)]
ethernet: sxgbe: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:27 +0000 (07:29 -0700)]
ethernet: rocker: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:26 +0000 (07:29 -0700)]
ethernet: renesas: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Break the address up into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:25 +0000 (07:29 -0700)]
ethernet: r8169: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:24 +0000 (07:29 -0700)]
ethernet: netxen: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Invert the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:23 +0000 (07:29 -0700)]
ethernet: lpc: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:22 +0000 (07:29 -0700)]
ethernet: sky2/skge: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Mon, 18 Oct 2021 14:29:21 +0000 (07:29 -0700)]
ethernet: mv643xx: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Oct 2021 11:24:52 +0000 (12:24 +0100)]
Merge branch 'mlxsw-multi-level-qdisc-offload'
Ido Schimmel says:
====================
mlxsw: Multi-level qdisc offload
Petr says:
Currently, mlxsw admits for offload a suitable root qdisc, and its
children. Thus up to two levels of hierarchy are offloaded. Often, this is
enough: one can configure TCs with RED and TCs with a shaper on, and can
even see counters for each TC by looking at a qdisc at a sufficiently
shallow position.
While simple, the system has obvious shortcomings. It is not possible to
configure both RED and shaping on one TC. It is not possible to place a
PRIO below root TBF, which would then be offloaded as port shaper. FIFOs
are only offloaded at root or directly below, which is confusing to users,
because RED and TBF of course have their own FIFO.
This patch set lifts assumptions that prevent offloading multi-level qdisc
trees.
In patch #1, offload of a graft operation is added to TBF. Grafts are
issued as another qdisc is linked to the qdisc in question, and give
drivers a chance to react to the linking. The absence of this event was not
a major issue so far, because TBF was not considered classful, which
changes with this patchset.
The codebase currently assumes that ETS and PRIO are the only classful
qdiscs. The following patches gradually lift this assumption.
In patch #2, calculation of traffic class and priomap of a qdisc is fixed.
Patch #3 fixes handling of future FIFOs. Child FIFO qdiscs may be created
and notified before their parent qdisc exists and therefore need special
handling.
Patches #4, #5 and #6 unify, respectively, child destruction, child
grafting, and cleanup of statistics.
Patch #7 adds a function that validates whether a given qdisc topology is
offloadable.
Finally in patch #8, TBF and RED become classful. At this point, FIFO
qdiscs grafted to an offloaded qdisc should always be offloaded.
Patch #9 adds a selftest to verify some offloadable and unoffloadable qdisc
trees.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:12 +0000 (11:07 +0300)]
selftests: mlxsw: Add a test for un/offloadable qdisc trees
This checks that various qdisc configurations either are or are not
offloaded.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:11 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Make RED, TBF offloads classful
Permit offloading qdiscs below RED and TBF. In order to avoid having to
implement trivial propagating callbacks for get_prio_bitmap and
get_tclass_num, extend mlxsw_sp_qdisc_get_prio_bitmap() and
..._get_tclass_num() to handle the lack of the callback as a cue to forward
the request to the parent.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:10 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Validate qdisc topology
A following patch will enable offloading qdiscs that are deeper than
directly under root qdisc. Currently the topology validation consists of
demanding a root qdisc position for ETS and PRIO. Since RED and TBF are
considered classless, this is enough. In order to prevent some nonsensical
combinations when RED and TBF become classful, introduce a more general
topology validator.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:09 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Clean stats recursively when priomap changes
On Spectrum, there are no per-TC TX counters. Instead, mlxsw uses per-prio
counters and aggregates them according to the priomap. Therefore when
priomap changes, the counter base values need to be reset to reflect the
change. Previously, this was only done for the sole child qdisc, but a
following patch makes RED and TBF classful. Thus apply the request to the
whole sub-tree.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:08 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Unify graft validation
Qdisc graft operations have so far been reported at PRIO, ETS and RED, with
RED events ignored, because RED was not considered a classful qdisc. A
following patch will make mlxsw recognize RED and TBF as classful qdiscs,
and thus it is necessary to validate grafting at these qdiscs as well.
Rename the existing graft validator to make it clear that it is a generic
function, and invoke it for RED and TBF graft events as well. Drop the
unnecessary PRIO helper and invoke the graft validator directly for PRIO as
well.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:07 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Destroy children in mlxsw_sp_qdisc_destroy()
Currently ETS and PRIO are the only offloaded classful qdiscs. Since they
are both similar, their destroy handler is the same, and it handles
children destruction itself. But now it is possible to do it generically
for any classful qdisc. Therefore promote the recursive destruction from
the ETS handler to mlxsw_sp_qdisc_destroy(), so that RED and TBF pick it up
in follow-up patches.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:06 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Extract two helpers for handling future FIFOs
Extract from __mlxsw_sp_qdisc_ets_replace() two helpers: one for handling a
single future FIFO and one for reinitializing the array of future FIFOs.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:05 +0000 (11:07 +0300)]
mlxsw: spectrum_qdisc: Query tclass / priomap instead of caching it
Currently when keeping track of qdiscs, mlxsw notes the TC and priomap
corresponding to each qdisc. That is fine currently, as there only ever is
one level of qdiscs to update: the direct children of ETS / PRIO. However
as deeper structures are made offloadable, ETS would need to update these
values for the complete subtree, and interim qdiscs would need to remember
to propagate the value.
Instead, reverse the responsibility: child qdiscs can ask their parent what
their TC and priomap are. ETS / PRIO know the answer right away, or there
are defaults for when the root qdisc does not assign them (e.g. when RED is
used as root qdisc). When RED and TBF become classful, they will simply
forward the request up to their parent.
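The scheme in sketch form, with made-up structure and callback names (not the
actual mlxsw identifiers):

/* ETS/PRIO provide the callback and answer directly; RED/TBF leave it NULL
 * and forward the question to their parent.
 */
struct my_qdisc {
        struct my_qdisc *parent;
        int (*get_tclass_num)(const struct my_qdisc *q);
};

static int my_qdisc_tclass_num(const struct my_qdisc *q)
{
        if (q->get_tclass_num)
                return q->get_tclass_num(q);
        if (q->parent)
                return my_qdisc_tclass_num(q->parent);
        return 0;       /* default, e.g. when RED is the root qdisc */
}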
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 19 Oct 2021 08:07:04 +0000 (11:07 +0300)]
net: sch_tbf: Add a graft command
As another qdisc is linked to the TBF, the latter should issue an event to
give drivers a chance to react to the grafting. In other qdiscs, this event
is called GRAFT, so follow suit with TBF as well.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Oct 2021 11:16:34 +0000 (12:16 +0100)]
Merge tag 'mlx5-updates-2021-10-18' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
mlx5-updates-2021-10-18
Maor Gottlieb says:
========================
Use hash to select the affinity port in VF LAG
Current VF LAG architecture is based on QP association with a port.
QP must be created after LAG is enabled to allow association with non-native port.
VM packets going on the slow path to the eSwitch manager (SW path or hairpin) will be transmitted
through a different QP than the VM. This means that different packets of the same flow might
egress from different physical ports.
This patch-set solves this issue by moving the port selection to be based on the hash function
defined by the bond.
When the device is moved to VF LAG mode, the driver creates TTC (traffic type classifier) flow
tables in order to classify the packet and steer it to the relevant hash function, similar to what
is done in the mlx5 RSS implementation.
Each rule in the TTC table forwards the packet to a port selection flow table, which has one hash
split flow group containing two "catch all" flow table entries. Each entry points to the
respective uplink port, as shown below:
            -------------------
            |       FT        |
TTC rule -> |   ------------  |
            |   |FG| FTE --|--|-----> uplink of port #1
            |   |  | FTE --|--|-----> uplink of port #2
            |   ------------  |
            -------------------
A hash split flow group is a flow group created with type HASH_SPLIT and associated with a match definer.
The match definer defines the fields included in the hash calculation.
The driver creates the match definer according to the xmit hash policy of the bond driver.
Patches overview:
========================
Minor E-Switch updates:
- Patch #12, dynamic allocation of dest array
- Patch #13, increase number of forward destinations to 32
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Oct 2021 11:12:21 +0000 (12:12 +0100)]
Merge branch '40GbE' of git://git./linux/kernel/git/tnguy/next-queue
Mateusz Palczewski says:
====================
40GbE Intel Wired LAN Driver Updates 2021-10-18
Use a single state machine for driver initialization
and for servicing the initialized driver. The init state
machine implemented in init_task() is merged
into the watchdog_task(). The init_task() function
is removed.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Maor Dickman [Sun, 5 Sep 2021 11:22:11 +0000 (14:22 +0300)]
net/mlx5: E-Switch, Increase supported number of forward destinations to 32
Increase supported number of forward destinations in the same rule, local
and remote, from 2 to 32.
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Dickman [Mon, 20 Sep 2021 07:51:05 +0000 (10:51 +0300)]
net/mlx5: E-Switch, Use dynamic alloc for dest array
Use dynamic allocation for the dest array in preparation for
the next patch, which increases MLX5_MAX_FLOW_FWD_VPORTS and
would cause the stack allocation to exceed 1024 bytes.
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Wed, 18 Aug 2021 21:19:14 +0000 (00:19 +0300)]
net/mlx5: Lag, use steering to select the affinity port in LAG
Use the steering-based solution to select the affinity port
when the LAG mode is based on a hash policy and the device supports
the port selection flow table.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Thu, 15 Jul 2021 07:13:52 +0000 (10:13 +0300)]
net/mlx5: Lag, add support to create/destroy/modify port selection
Add a create function that builds the steering tables, TTC and definers
according to the LAG hash type.
The destroy function destroys all the steering components.
The modify function is used when the bond mapping changes: it
iterates over all the rules in the definers and modifies them to steer
the packet to the relevant active ports.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Thu, 15 Jul 2021 06:43:35 +0000 (09:43 +0300)]
net/mlx5: Lag, add support to create TTC tables for LAG port selection
Add support to create inner and outer TTC tables for LAG port
selection. These tables are used to classify the packets in
order to select the related definer.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Tue, 17 Aug 2021 07:24:05 +0000 (10:24 +0300)]
net/mlx5: Lag, add support to create definers for LAG
Every definer will consist of a flow table with a single hash group
with exactly two flow table entries, one for each device port.
The destination of these entries is the uplink vport according to the
port state and hash policy.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Tue, 13 Jul 2021 12:30:45 +0000 (15:30 +0300)]
net/mlx5: Lag, set match mask according to the traffic type bitmap
Set the related bits in the match definer mask according to the
TT mapping.
This mask will be used to create the match definers.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Tue, 13 Jul 2021 12:20:10 +0000 (15:20 +0300)]
net/mlx5: Lag, set LAG traffic type mapping
Generate a traffic type bitmap that will define which
steering objects we need to create for the steering
based LAG.
Bits in this bitmap are set according to the LAG hash type.
In addition, add a field that indicates whether the LAG is in encap
mode or not.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Mon, 5 Jul 2021 12:34:00 +0000 (15:34 +0300)]
net/mlx5: Lag, move lag files into directory
Downstream patches add another LAG-related file, so it makes
sense to have all the LAG files in a dedicated directory.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Tue, 3 Aug 2021 07:04:41 +0000 (10:04 +0300)]
net/mlx5: Introduce new uplink destination type
The uplink destination type should be used in rules to steer the
packet to the uplink when the device is in steering based LAG mode.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Tue, 6 Jul 2021 14:48:26 +0000 (17:48 +0300)]
net/mlx5: Add support to create match definer
Introduce new APIs to create and destroy a flow matcher
for a given format id.
A flow match definer object is used for defining the fields and
mask used for the hash calculation. The user should mask the desired
fields as is done in the match criteria.
This object is assigned to a flow group of type hash. In this flow
group type, packet lookup is done based on the hash result.
This patch also adds the required bits to create such a flow group.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Sun, 23 May 2021 10:34:28 +0000 (13:34 +0300)]
net/mlx5: Introduce port selection namespace
Add new port selection flow steering namespace. Flow steering rules in
this namespace are used to determine the physical port for egress
packets.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Gottlieb [Sun, 4 Jul 2021 12:16:21 +0000 (15:16 +0300)]
net/mlx5: Support partial TTC rules
Add bitmasks to ttc_params to indicate whether each rule is valid or not.
This allows creating a TTC table that supports only part of the
traffic types.
In later patches, which introduce the steering-based LAG port selection,
the TTC will be created with only part of the rules, according to the hash
type.
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shai Malin [Fri, 15 Oct 2021 12:41:17 +0000 (15:41 +0300)]
qed: Change the TCP common variable - "iscsi_ooo"
Change the TCP common variable - "iscsi_ooo" to "ooo_opq".
This variable is common between all the TCP L5 protocols and not
specific to iSCSI.
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Link: https://lore.kernel.org/r/20211015124118.29041-2-smalin@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Shai Malin [Fri, 15 Oct 2021 12:41:16 +0000 (15:41 +0300)]
qed: Optimize the ll2 ooo flow
Optimize the ll2 TCP out-of-order likely flows:
- Optimize the non-error flows of the ll2 ooo data path.
- Optimize "QED_OOO_RIGHT_BUF" over "QED_OOO_LEFT_BUF".
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Link: https://lore.kernel.org/r/20211015124118.29041-1-smalin@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Lukas Bulwahn [Sat, 16 Oct 2021 05:58:15 +0000 (07:58 +0200)]
MAINTAINERS: adjust file entry for of_net.c after movement
Commit
e330fb14590c ("of: net: move of_net under net/") moves of_net.c
to ./net/core/, but misses to adjust the reference to this file in
MAINTAINERS.
Hence, ./scripts/get_maintainer.pl --self-test=patterns complains:
warning: no file matches F: drivers/of/of_net.c
Adjust the file entry after this file movement.
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Link: https://lore.kernel.org/r/20211016055815.14397-1-lukas.bulwahn@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Mateusz Palczewski [Thu, 19 Aug 2021 08:47:58 +0000 (08:47 +0000)]
iavf: Combine init and watchdog state machines
Use a single state machine for driver initialization and for servicing
the initialized driver. The init state machine implemented in init_task()
is merged into the watchdog_task(). The init_task() function is
removed.
Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com>
Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Mateusz Palczewski [Thu, 19 Aug 2021 08:47:49 +0000 (08:47 +0000)]
iavf: Add __IAVF_INIT_FAILED state
This commit adds a new state, __IAVF_INIT_FAILED, to the state machine.
From now on initialization functions report errors not by returning an
error value, but by changing the state to indicate that something went
wrong.
Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com>
Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Mateusz Palczewski [Thu, 19 Aug 2021 08:47:40 +0000 (08:47 +0000)]
iavf: Refactor iavf state machine tracking
Replace state changes of the iavf state machine
with a method that also tracks the previous
state the machine was in.
This change is required for further work on
refactoring the init and watchdog state machines.
Tracking the previous state will help us
recover iavf after a failure has occurred.
Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com>
Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jakub Kicinski [Mon, 18 Oct 2021 17:26:08 +0000 (10:26 -0700)]
mlx5: prevent 64bit divide
mlx5_tout_ms() returns a u64, so we can't divide it directly.
This is not a problem here: @timeout, which is the value
that actually matters here, is already a ulong, so this
implies storing the return value of mlx5_tout_ms() in a ulong
should be fine.
This fixes:
ERROR: modpost: "__udivdi3" [drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko] undefined!
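For reference, a generic sketch (not the mlx5 code) of the two usual ways to
avoid __udivdi3 when a u64 is involved on 32-bit builds:

#include <linux/math64.h>

/* explicit 64-bit division helper */
static unsigned long interval_ms_div(u64 total_ms, u32 parts)
{
        return div_u64(total_ms, parts);
}

/* or, as done here, store the u64 in an unsigned long first (fine when the
 * value is known to fit) and use native division on that
 */
static unsigned long interval_ms_ulong(u64 total_ms, u32 parts)
{
        unsigned long t = total_ms;

        return t / parts;
}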
Fixes:
32def4120e48 ("net/mlx5: Read timeout values from DTOR")
Link: https://lore.kernel.org/r/20211018172608.1069754-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Erik Ekman [Sun, 17 Oct 2021 17:16:57 +0000 (19:16 +0200)]
sfc: Fix reading non-legacy supported link modes
Everything except the first 32 bits was lost when the pause flags were
added. This makes the 50000baseCR2 mode flag (bit 34) not appear.
I have tested this with a 10G card (SFN5122F-R7) by modifying it to
return a non-legacy link mode (10000baseCR).
Signed-off-by: Erik Ekman <erik@kryo.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ansuel Smith [Sun, 17 Oct 2021 14:56:46 +0000 (16:56 +0200)]
net: dsa: qca8k: fix delay applied to wrong cpu in parse_port_config
Fix delay settings applied to the wrong CPU port in parse_port_config. The delay
values are set to the wrong index, as the cpu_port_index is incremented
too early. Start the cpu_port_index at -1 so the correct value is
applied; this also addresses the case of an invalid phy mode and an
unavailable port.
Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Oct 2021 13:05:25 +0000 (14:05 +0100)]
Merge git://git./linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
Netfilter/IPVS updates for net-next
The following patchset contains Netfilter/IPVS updates for net-next:
1) Add new run_estimation toggle to IPVS to stop the estimation_timer
logic, from Dust Li.
2) Relax superfluous dynset check on NFT_SET_TIMEOUT.
3) Add egress hook, from Lukas Wunner.
4) Nowadays, almost all hook functions in x_tables land just call the hook
evaluation loop. Remove remaining hook wrappers from iptables and IPVS.
From Florian Westphal.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Oct 2021 13:02:56 +0000 (14:02 +0100)]
Merge branch 'rtl8365mb-vc-support'
Alvin Šipraga says:
====================
net: dsa: add support for RTL8365MB-VC
This series adds support for Realtek's RTL8365MB-VC, a 4+1 port
10/100/1000M Ethernet switch. The driver - rtl8365mb - was developed by
Michael Ramussen and myself.
This version of the driver is relatively slim, implementing only the
standalone port functionality and no offload capabilities. It is based
on a previous RFC series [1] from August, and the main difference is the
removal of some spurious VLAN operations. Otherwise I have simply
addressed most of the feedback. Please see the respective patches for
more detail.
In parallel I am working on offloading the bridge layer capabilities,
but I would like to get the basic stuff upstreamed as soon as possible.
v3 -> v4:
- get irq before setting virq parents (fixes kernel test robot
warning)
- remove pad-to-72-bytes logic in tagger xmit (addresses DENG Qingfang's
suggestion); no longer needed as we set CPU minimum RX size to 64
bytes
- use mutex to protect MIB counter access instead of a spinlock (fixes
Jakub's feedback on v3 statistics refactoring)
v2 -> v3:
- move IRQ setup earlier in probe per Florian's suggestion
- fix compilation error on some archs due to FIELD_PREP use in v1
- follow Jakub's suggestion and use the standard ethtool stats API;
NOTE: new patch in the series for relevant DSA plumbing
- following the stats change, it became apparent that the rtl8366
helper library is no longer that helpful; scrap it and implement
the ethtool ops specifically for this chip
v1 -> v2:
- drop DSA port type checks during MAC configuration
- use OF properties to configure RGMII TX/RX delay
- don't set default fwd_offload_mark if packet is trapped to CPU
- remove port mapping macros
- update device tree bindings documentation with an example
- cosmetic changes to the tagging driver using FIELD_* macros
[1] https://lore.kernel.org/netdev/20210822193145.1312668-1-alvin@pqrs.dk/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:38:02 +0000 (11:38 +0200)]
net: phy: realtek: add support for RTL8365MB-VC internal PHYs
The RTL8365MB-VC ethernet switch controller has 4 internal PHYs for its
user-facing ports. All that is needed is to let the PHY driver core
pick up the IRQ made available by the switch driver.
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:38:01 +0000 (11:38 +0200)]
net: dsa: realtek-smi: add rtl8365mb subdriver for RTL8365MB-VC
This patch adds a realtek-smi subdriver for the RTL8365MB-VC 4+1 port
10/100/1000M switch controller. The driver has been developed based on a
GPL-licensed OS-agnostic Realtek vendor driver known as rtl8367c found
in the OpenWrt source tree.
Despite the name, the RTL8365MB-VC has an entirely different register
layout to the already-supported RTL8366RB ASIC. Notwithstanding this,
the structure of the rtl8365mb subdriver is loosely based on the rtl8366rb
subdriver. Like the 'rb, it establishes its own irqchip to handle
cascaded PHY link status interrupts.
The RTL8365MB-VC switch is capable of offloading a large number of
features from the software, but this patch introduces only the most
basic DSA driver functionality. The ports always function as standalone
ports, with bridging handled in software.
One more thing. Realtek's nomenclature for switches makes it hard to
know exactly what other ASICs might be supported by this driver. The
vendor driver goes by the name rtl8367c, but as far as I can tell, no
chip actually exists under this name. As such, the subdriver is named
rtl8365mb to emphasize the potentially limited support. But it is clear
from the vendor sources that a number of other more advanced switches
share a similar register layout, and further support should not be too
hard to add given access to the relevant hardware. With this in mind,
the subdriver has been written with as few assumptions about the
particular chip as is reasonable. But the RTL8365MB-VC is the only
hardware I have available, so some further work is surely needed.
Co-developed-by: Michael Rasmussen <mir@bang-olufsen.dk>
Signed-off-by: Michael Rasmussen <mir@bang-olufsen.dk>
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:38:00 +0000 (11:38 +0200)]
net: dsa: tag_rtl8_4: add realtek 8 byte protocol 4 tag
This commit implements a basic version of the 8 byte tag protocol used
in the Realtek RTL8365MB-VC unmanaged switch, which carries with it a
protocol version of 0x04.
The implementation itself only handles the parsing of the EtherType
value and Realtek protocol version, together with the source or
destination port fields. The rest is left unimplemented for now.
The tag format is described in a confidential document provided to my
company by Realtek Semiconductor Corp. Permission has been granted by
the vendor to publish this driver based on that material, together with
an extract from the document describing the tag format and its fields.
It is hoped that this will help future implementors who do not have
access to the material but who wish to extend the functionality of
drivers for chips which use this protocol.
In addition, two possible values of the REASON field are specified,
based on experiments on my end. Realtek does not specify what value this
field can take.
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:37:59 +0000 (11:37 +0200)]
dt-bindings: net: dsa: realtek-smi: document new compatible rtl8365mb
rtl8365mb is a new realtek-smi subdriver for the RTL8365MB-VC 4+1 port
10/100/1000M Ethernet switch controller. Its compatible string is
"realtek,rtl8365mb".
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Rob Herring <robh@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:37:58 +0000 (11:37 +0200)]
net: dsa: move NET_DSA_TAG_RTL4_A to right place in Kconfig/Makefile
Move things around a little so that this tag driver is alphabetically
ordered. The Kconfig file is sorted based on the tristate text.
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:37:57 +0000 (11:37 +0200)]
net: dsa: allow reporting of standard ethtool stats for slave devices
Jakub pointed out that we have a new ethtool API for reporting device
statistics in a standardized way, via .get_eth_{phy,mac,ctrl}_stats.
Add a small amount of plumbing to allow DSA drivers to take advantage of
this when exposing statistics.
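Roughly what the plumbing amounts to; the per-port dsa_switch_ops callback
shape below is an assumption based on the description, not a verified
signature:

#include <linux/ethtool.h>
#include <net/dsa.h>

/* DSA slave ethtool op forwarding the standardized MAC stats request to an
 * (assumed) per-port callback on the switch driver.
 */
static void my_slave_get_eth_mac_stats(struct net_device *dev,
                                       struct ethtool_eth_mac_stats *mac_stats)
{
        struct dsa_port *dp = dsa_slave_to_port(dev);
        struct dsa_switch *ds = dp->ds;

        if (ds->ops->get_eth_mac_stats)
                ds->ops->get_eth_mac_stats(ds, dp->index, mac_stats);
}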
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alvin Šipraga [Mon, 18 Oct 2021 09:37:56 +0000 (11:37 +0200)]
ether: add EtherType for proprietary Realtek protocols
Add a new EtherType ETH_P_REALTEK to the if_ether.h uapi header. The
EtherType 0x8899 is used in a number of different protocols from Realtek
Semiconductor Corp [1], so no general assumptions should be made when
trying to decode such packets. Observed protocols include:
0x1 - Realtek Remote Control protocol [2]
0x2 - Echo protocol [2]
0x3 - Loop detection protocol [2]
0x4 - RTL8365MB 4- and 8-byte switch CPU tag protocols [3]
0x9 - RTL8306 switch CPU tag protocol [4]
0xA - RTL8366RB switch CPU tag protocol [4]
[1] https://lore.kernel.org/netdev/CACRpkdYQthFgjwVzHyK3DeYUOdcYyWmdjDPG=Rf9B3VrJ12Rzg@mail.gmail.com/
[2] https://www.wireshark.org/lists/ethereal-dev/200409/msg00090.html
[3] https://lore.kernel.org/netdev/20210822193145.1312668-4-alvin@pqrs.dk/
[4] https://lore.kernel.org/netdev/20200708122537.1341307-2-linus.walleij@linaro.org/
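The addition itself boils down to a single uapi constant, roughly (comment
text illustrative):

/* include/uapi/linux/if_ether.h */
#define ETH_P_REALTEK   0x8899          /* Multiple proprietary protocols */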
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 21:53:04 +0000 (14:53 -0700)]
ethernet: use eth_hw_addr_set() in unmaintained drivers
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced an rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Fri, 15 Oct 2021 21:06:01 +0000 (23:06 +0200)]
octeontx2-nic: fix mixed module build
Building the VF and PF side of this driver differently, with one being
a loadable module and the other one built-in, results in a link failure
for the common PTP driver:
ld.lld: error: undefined symbol: __this_module
>>> referenced by otx2_ptp.c
>>> net/ethernet/marvell/octeontx2/nic/otx2_ptp.o:(otx2_ptp_init) in archive drivers/built-in.a
>>> referenced by otx2_ptp.c
>>> net/ethernet/marvell/octeontx2/nic/otx2_ptp.o:(otx2_ptp_init) in archive drivers/built-in.a
Move the otx2_ptp.c code into a separate module that gets built for
both configurations, making it built-in if at least one of the other
two is built-in.
Fixes:
43510ef4ddad ("octeontx2-nicvf: Add PTP hardware clock support to NIX VF")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Oct 2021 12:10:08 +0000 (13:10 +0100)]
Merge branch 'uniphier-nx1'
Kunihiko Hayashi says:
====================
net: ethernet: ave: Introduce UniPhier NX1 SoC support
This series includes patches to add basic support for the new UniPhier NX1
SoC. The NX1 SoC has the same kinds of controls as the other UniPhier
SoCs.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Kunihiko Hayashi [Mon, 18 Oct 2021 01:27:37 +0000 (10:27 +0900)]
net: ethernet: ave: Add compatible string and SoC-dependent data for NX1 SoC
Add basic support for UniPhier NX1 SoC. This includes a compatible string
and SoC-dependent data.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kunihiko Hayashi [Mon, 18 Oct 2021 01:27:36 +0000 (10:27 +0900)]
dt-bindings: net: ave: Add bindings for NX1 SoC
Update AVE binding document for UniPhier NX1 SoC.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Uwe Kleine-König [Fri, 15 Oct 2021 06:56:15 +0000 (08:56 +0200)]
net: w5100: Make w5100_remove() return void
Up to now w5100_remove() returns zero unconditionally. Make it return
void instead which makes it easier to see in the callers that there is
no error to handle.
Also the return value of platform and spi remove callbacks is ignored
anyway.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Uwe Kleine-König [Fri, 15 Oct 2021 06:56:14 +0000 (08:56 +0200)]
net: ks8851: Make ks8851_remove_common() return void
Up to now ks8851_remove_common() returns zero unconditionally. Make it
return void instead which makes it easier to see in the callers that
there is no error to handle.
Also the return value of platform and spi remove callbacks is ignored
anyway.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Oct 2021 11:54:41 +0000 (12:54 +0100)]
Merge branch 'remove-qdisc-running-counter'
Sebastian Andrzej Siewior says:
====================
Try to simplify gnet_stats and remove the qdisc->running sequence counter.
The first few patches are a follow-up to
https://lore.kernel.org/all/20211007175000.2334713-1-bigeasy@linutronix.de/
The remaining patches (#5+) remove the seqcount_t (Qdisc::running) from
the Qdisc. The statistics (Qdisc::bstats and Qdisc::cpu_bstats) use
u64_stats_t and the "running state" is now represented by a bit in
Qdisc::state.
By removing the seqcount_t from Qdisc and decoupling the bstats
statistics from the seqcount_t it is possible to query the statistics
even if the Qdisc is running instead of waiting until it is idle again.
The try-lock like usage of the seqcount_t in qdisc_run_begin() is
problematic on PREEMPT_RT. Inside the qdisc_run_begin/end() qdisc->running
sequence counter write sections, at sch_direct_xmit(), the seqcount write
serialization lock is released then re-acquired. This is fine for !RT, because
the writer is in a BH disabled region and there is no in-IRQ reader. For RT
though, BH sections are preemptible. The earlier introduced seqcount_LOCKNAME_t
mechanism, where for RT the reader acquires then releases the write
serialization lock to avoid infinite spinning if it preempts a seqcount write
section, cannot work: the qdisc->running write serialization lock is already
intermittently released inside the seqcount write section.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ahmed S. Darwish [Sat, 16 Oct 2021 08:49:10 +0000 (10:49 +0200)]
net: sched: Remove Qdisc::running sequence counter
The Qdisc::running sequence counter has two uses:
1. Reliably reading qdisc's tc statistics while the qdisc is running
(a seqcount read/retry loop at gnet_stats_add_basic()).
2. As a flag, indicating whether the qdisc in question is running
(without any retry loops).
For the first usage, the Qdisc::running sequence counter write section,
qdisc_run_begin() => qdisc_run_end(), covers a much wider area than what
is actually needed: the raw qdisc's bstats update. A u64_stats sync
point was thus introduced (in previous commits) inside the bstats
structure itself. A local u64_stats write section is then started and
stopped for the bstats updates.
Use that u64_stats sync point mechanism for the bstats read/retry loop
at gnet_stats_add_basic().
For the second qdisc->running usage, a __QDISC_STATE_RUNNING bit flag,
accessed with atomic bitops, is sufficient. Using a bit flag instead of
a sequence counter at qdisc_run_begin/end() and qdisc_is_running() leads
to the SMP barriers implicitly added through raw_read_seqcount() and
write_seqcount_begin/end() getting removed. All call sites have been
surveyed though, and no required ordering was identified.
Now that the qdisc->running sequence counter is no longer used, remove
it.
Note, using u64_stats implies no sequence counter protection for 64-bit
architectures. This can lead to the qdisc tc statistics "packets" vs.
"bytes" values getting out of sync on rare occasions. The individual
values will still be valid.
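A minimal sketch of the flag-based scheme; names are illustrative and the real
code keeps the TCQ_F_NOLOCK fast path separate:

#include <linux/bitops.h>

#define MY_QDISC_RUNNING        0       /* illustrative bit number */

static inline bool my_qdisc_run_begin(unsigned long *state)
{
        /* false means another CPU already owns the qdisc */
        return !test_and_set_bit(MY_QDISC_RUNNING, state);
}

static inline void my_qdisc_run_end(unsigned long *state)
{
        clear_bit(MY_QDISC_RUNNING, state);
}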
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ahmed S. Darwish [Sat, 16 Oct 2021 08:49:09 +0000 (10:49 +0200)]
net: sched: Merge Qdisc::bstats and Qdisc::cpu_bstats data types
The only factor differentiating per-CPU bstats data type (struct
gnet_stats_basic_cpu) from the packed non-per-CPU one (struct
gnet_stats_basic_packed) was a u64_stats sync point inside the former.
The two data types are now equivalent: earlier commits added a u64_stats
sync point to the latter.
Combine both data types into "struct gnet_stats_basic_sync". This
eliminates redundancy and simplifies the bstats read/write APIs.
Use u64_stats_t for bstats "packets" and "bytes" data types. On 64-bit
architectures, u64_stats sync points do not use sequence counter
protection.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ahmed S. Darwish [Sat, 16 Oct 2021 08:49:08 +0000 (10:49 +0200)]
net: sched: Use _bstats_update/set() instead of raw writes
The Qdisc::running sequence counter, used to protect Qdisc::bstats reads
from parallel writes, is in the process of being removed. Qdisc::bstats
read/writes will synchronize using an internal u64_stats sync point
instead.
Modify all bstats writes to use _bstats_update(). This ensures that
the internal u64_stats sync point is always acquired and released as
appropriate.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ahmed S. Darwish [Sat, 16 Oct 2021 08:49:07 +0000 (10:49 +0200)]
net: sched: Protect Qdisc::bstats with u64_stats
The not-per-CPU variant of qdisc tc (traffic control) statistics,
Qdisc::gnet_stats_basic_packed bstats, is protected with Qdisc::running
sequence counter.
This sequence counter is used for reliably protecting bstats reads from
parallel writes. Meanwhile, the seqcount's write section covers a much
wider area than bstats update: qdisc_run_begin() => qdisc_run_end().
That read/write section asymmetry can lead to needless retries of the
read section. To prepare for removing the Qdisc::running sequence
counter altogether, introduce a u64_stats sync point inside bstats
instead.
Modify _bstats_update() to start/end the bstats u64_stats write
section.
For bisectability, and finer commits granularity, the bstats read
section is still protected with a Qdisc::running read/retry loop and
qdisc_run_begin/end() still starts/ends that seqcount write section.
Once all call sites are modified to use _bstats_update(), the
Qdisc::running seqcount will be removed and bstats read/retry loop will
be modified to utilize the internal u64_stats sync point.
Note, using u64_stats implies no sequence counter protection for 64-bit
architectures. This can lead to the statistics "packets" vs. "bytes"
values getting out of sync on rare occasions. The individual values will
still be valid.
[bigeasy: Minor commit message edits, init all gnet_stats_basic_packed.]
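In outline (struct layout and names simplified, not the exact kernel
definitions):

#include <linux/u64_stats_sync.h>

struct my_bstats {
        u64 bytes;
        u64 packets;
        struct u64_stats_sync syncp;
};

/* writer: the sync point now brackets only the bstats update itself */
static inline void my_bstats_update(struct my_bstats *b, u64 bytes, u32 packets)
{
        u64_stats_update_begin(&b->syncp);
        b->bytes += bytes;
        b->packets += packets;
        u64_stats_update_end(&b->syncp);
}

/* reader: retry loop against the same sync point */
static inline u64 my_bstats_read_bytes(const struct my_bstats *b)
{
        unsigned int start;
        u64 bytes;

        do {
                start = u64_stats_fetch_begin(&b->syncp);
                bytes = b->bytes;
        } while (u64_stats_fetch_retry(&b->syncp, start));

        return bytes;
}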
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ahmed S. Darwish [Sat, 16 Oct 2021 08:49:06 +0000 (10:49 +0200)]
u64_stats: Introduce u64_stats_set()
Allow directly setting a u64_stats_t value, which is used to provide an init
function that sets it directly to zero instead of memset()-ing the value.
Add u64_stats_set() to the u64_stats API.
[bigeasy: commit message. ]
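A hedged usage sketch (counter and struct names are placeholders):

#include <linux/u64_stats_sync.h>

struct my_counters {
        u64_stats_t rx_bytes;
        u64_stats_t rx_packets;
        struct u64_stats_sync syncp;
};

static void my_counters_init(struct my_counters *c)
{
        /* set each counter directly to zero instead of memset()-ing */
        u64_stats_set(&c->rx_bytes, 0);
        u64_stats_set(&c->rx_packets, 0);
        u64_stats_init(&c->syncp);
}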
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sebastian Andrzej Siewior [Sat, 16 Oct 2021 08:49:05 +0000 (10:49 +0200)]
gen_stats: Move remaining users to gnet_stats_add_queue().
The gnet_stats_queue::qlen member is only used in the SMP-case.
qdisc_qstats_qlen_backlog() needs to add qdisc_qlen() to qstats.qlen to
have the same value as that provided by qdisc_qlen_sum().
gnet_stats_copy_queue() needs to overwrite the resulting qstats.qlen
field with the caller-submitted qlen value. It might differ from the
submitted value.
Let both functions use gnet_stats_add_queue() and remove unused
__gnet_stats_copy_queue().
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sebastian Andrzej Siewior [Sat, 16 Oct 2021 08:49:04 +0000 (10:49 +0200)]
mq, mqprio: Use gnet_stats_add_queue().
gnet_stats_add_basic() and gnet_stats_add_queue() add up the statistics
so they can be used directly for both the per-CPU and global case.
gnet_stats_add_queue() copies either Qdisc's per-CPU
gnet_stats_queue::qlen or the global member. The global
gnet_stats_queue::qlen isn't touched in the per-CPU case so there is no
need to consider it in the global-case.
In the per-CPU case, the sum of global gnet_stats_queue::qlen and
the per-CPU gnet_stats_queue::qlen was assigned to sch->q.qlen and
sch->qstats.qlen. Now both fields are copied individually.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sebastian Andrzej Siewior [Sat, 16 Oct 2021 08:49:03 +0000 (10:49 +0200)]
gen_stats: Add gnet_stats_add_queue().
This function will replace __gnet_stats_copy_queue(). It reads all
arguments and adds them into the passed gnet_stats_queue argument.
In contrast to __gnet_stats_copy_queue() it also copies the qlen member.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sebastian Andrzej Siewior [Sat, 16 Oct 2021 08:49:02 +0000 (10:49 +0200)]
gen_stats: Add instead Set the value in __gnet_stats_copy_basic().
__gnet_stats_copy_basic() always assigns the value to the bstats
argument overwriting the previous value. The later added per-CPU version
always accumulated the values in the returning gnet_stats_basic_packed
argument.
Based on review there are five users of that function as of today:
- est_fetch_counters(), ___gnet_stats_copy_basic()
memsets() bstats to zero, single invocation.
- mq_dump(), mqprio_dump(), mqprio_dump_class_stats()
memsets() bstats to zero, multiple invocation but does not use the
function due to !qdisc_is_percpu_stats().
Add the values in __gnet_stats_copy_basic() instead of overwriting. Rename
the function to gnet_stats_add_basic() to make it more obvious.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lukas Wunner [Sat, 16 Oct 2021 08:13:27 +0000 (10:13 +0200)]
netfilter: core: Fix clang warnings about unused static inlines
Unlike gcc, clang warns about unused static inlines that are not in an
include file:
net/netfilter/core.c:344:20: error: unused function 'nf_ingress_hook' [-Werror,-Wunused-function]
static inline bool nf_ingress_hook(const struct nf_hook_ops *reg, int pf)
^
net/netfilter/core.c:353:20: error: unused function 'nf_egress_hook' [-Werror,-Wunused-function]
static inline bool nf_egress_hook(const struct nf_hook_ops *reg, int pf)
^
According to commit
6863f5643dd7 ("kbuild: allow Clang to find unused
static inline functions for W=1 build"), the proper resolution is to
mark the affected functions as __maybe_unused. An alternative approach
would be to move them to include/linux/netfilter_netdev.h, but since
Pablo didn't do that in commit
ddcfa710d40b ("netfilter: add
nf_ingress_hook() helper function"), I'm guessing __maybe_unused is
preferred.
This fixes both the warning introduced by Pablo in v5.10 as well as the
one recently introduced by myself with commit
42df6e1d221d ("netfilter:
Introduce egress hook").
Fixes:
ddcfa710d40b ("netfilter: add nf_ingress_hook() helper function")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
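A sketch of the annotation pattern; the body below is a placeholder, not the
real nf_ingress_hook()/nf_egress_hook() check:

  static inline bool __maybe_unused
  nf_ingress_hook(const struct nf_hook_ops *reg, int pf)
  {
          /* ... the actual hooknum/pf check goes here ... */
          return false;
  }

With __maybe_unused, clang no longer emits -Wunused-function for the helper
even when every caller is compiled out.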
Kyungrok Chung [Sat, 16 Oct 2021 11:21:36 +0000 (20:21 +0900)]
net: make use of helper netif_is_bridge_master()
Make use of netdev helper functions to improve code readability.
Replace 'dev->priv_flags & IFF_EBRIDGE' with netif_is_bridge_master(dev).
Signed-off-by: Kyungrok Chung <acadx0@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
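The helper being switched to boils down to the same flag test; a simplified
restatement (the real definition lives in include/linux/netdevice.h):

  static inline bool netif_is_bridge_master(const struct net_device *dev)
  {
          return dev->priv_flags & IFF_EBRIDGE;
  }

  /* before: if (dev->priv_flags & IFF_EBRIDGE) ...
   * after:  if (netif_is_bridge_master(dev)) ...
   */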
David S. Miller [Sat, 16 Oct 2021 13:58:13 +0000 (14:58 +0100)]
Merge branch 'smc-rv23'
Karsten Graul says:
====================
net/smc: introduce SMC-Rv2 support
Please apply the following patch series for smc to netdev's net-next tree.
SMC-Rv2 support (see https://www.ibm.com/support/pages/node/6326337)
provides routable RoCE support for SMC-R, eliminating the current
same-subnet restriction, by exploiting the UDP encapsulation feature
of the RoCE adapter hardware.
v2: resend of the v1 patch series, and CC linux-rdma this time
v3: rebase after net tree was merged into net-next
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:52 +0000 (11:37 +0200)]
net/smc: stop links when their GID is removed
With SMC-Rv2 the GID is an IP address that can be deleted from the
device. When an IB_EVENT_GID_CHANGE event is received, iterate over
all active links and check whether their GID is still defined; if not,
stop the affected link.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
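A hedged sketch of the event handling described above; the helper names
link_is_active(), link_gid_still_defined() and stop_link() are illustrative
stand-ins, not the actual SMC symbols:

  static void handle_gid_change_event(struct smc_link_group *lgr)
  {
          int i;

          for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
                  struct smc_link *lnk = &lgr->lnk[i];

                  if (!link_is_active(lnk))               /* illustrative */
                          continue;
                  if (!link_gid_still_defined(lnk))       /* GID removed */
                          stop_link(lnk);                 /* illustrative */
          }
  }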
Karsten Graul [Sat, 16 Oct 2021 09:37:51 +0000 (11:37 +0200)]
net/smc: add netlink support for SMC-Rv2
Implement the netlink support for SMC-Rv2 related attributes that are
provided to user space.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:50 +0000 (11:37 +0200)]
net/smc: extend LLC layer for SMC-Rv2
Add support for large v2 LLC control messages in smc_llc.c.
The new large work request buffer allows control messages to be
combined into one packet where they previously had to be spread over
several packets.
Add handling of the new v2 LLC messages.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:49 +0000 (11:37 +0200)]
net/smc: add v2 support to the work request layer
In the work request layer define one large v2 buffer for each link group
that is used to transmit and receive large LLC control messages.
Add the completion queue handling for this buffer.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:48 +0000 (11:37 +0200)]
net/smc: retrieve v2 gid from IB device
In smc_ib.c, scan for RoCE devices that support UDP encapsulation.
Find an eligible device and check that there is a route to the
remote peer.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:47 +0000 (11:37 +0200)]
net/smc: add v2 format of CLC decline message
The CLC decline message changed with SMC-Rv2 and supports up to
4 additional diagnosis codes.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:46 +0000 (11:37 +0200)]
net/smc: add listen processing for SMC-Rv2
Implement the server side of the SMC-Rv2 processing. Process incoming
CLC messages, find eligible devices and check for a valid route to the
remote peer.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:45 +0000 (11:37 +0200)]
net/smc: add SMC-Rv2 connection establishment
Send a CLC proposal message; the remote side processes this type of
message and determines the target GID. Check for a valid route to this
GID, and complete the connection establishment.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:44 +0000 (11:37 +0200)]
net/smc: prepare for SMC-Rv2 connection
Prepare the connection establishment with SMC-Rv2. Detect eligible
RoCE cards and indicate all supported SMC modes for the connection.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Karsten Graul [Sat, 16 Oct 2021 09:37:43 +0000 (11:37 +0200)]
net/smc: save stack space and allocate smc_init_info
The struct smc_init_info grew over time; it's time to save stack space
and allocate this struct dynamically.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
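A minimal sketch of the pattern, assuming the usual kzalloc()/kfree() flow
(error paths and the actual field setup omitted):

  struct smc_init_info *ini;

  ini = kzalloc(sizeof(*ini), GFP_KERNEL);
  if (!ini)
          return -ENOMEM;
  /* ... use ini for the connection setup instead of a stack variable ... */
  kfree(ini);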
Jakub Kicinski [Fri, 15 Oct 2021 13:37:39 +0000 (06:37 -0700)]
net: stream: don't purge sk_error_queue in sk_stream_kill_queues()
sk_stream_kill_queues() can be called on close when there are
still outstanding skbs to transmit. Those skbs may try to queue
notifications to the error queue (e.g. timestamps).
If sk_stream_kill_queues() purges the queue without taking
its lock, the queue may get corrupted and skbs leaked.
This shows up as a warning about an rmem leak:
WARNING: CPU: 24 PID: 0 at net/ipv4/af_inet.c:154 inet_sock_destruct+0x...
The leak is always a multiple of 0x300 bytes (the value is in
%rax on my builds, so RAX: 0000000000000300). 0x300 is the truesize of
an empty sk_buff. Indeed if we dump the socket state at the time
of the warning the sk_error_queue is often (but not always)
corrupted. The ->next pointer points back at the list head,
but not the ->prev pointer. Indeed we can find the leaked skb
by scanning the kernel memory for something that looks like
an skb with ->sk = socket in question, and ->truesize = 0x300.
The contents of ->cb[] of the skb confirms the suspicion that
it is indeed a timestamp notification (as generated in
__skb_complete_tx_timestamp()).
Removing the purging of sk_error_queue should be okay, since
inet_sock_destruct() does it again once all socket refs
are gone. Eric suggests this may cause sockets that go
through disconnect() to retain notifications from the
previous incarnations of the socket, but that should be
okay since the race was there anyway, and disconnect()
is not exactly dependable.
Thanks to Jonathan Lemon and Omar Sandoval for help at various
stages of tracing the issue.
Fixes:
cb9eff097831 ("net: new user space API for time stamping of incoming and outgoing packets")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
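For context, a hedged sketch of why the locking matters here (simplified;
not the exact kernel bodies): skb_dequeue() takes the queue lock for each
unlink, whereas an unlocked purge walks the list directly and can race with
a concurrent skb_queue_tail() of a timestamp notification.

  static inline void locked_purge(struct sk_buff_head *list)
  {
          struct sk_buff *skb;

          /* skb_dequeue() holds list->lock around each unlink */
          while ((skb = skb_dequeue(list)) != NULL)
                  kfree_skb(skb);
  }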
David S. Miller [Sat, 16 Oct 2021 07:53:46 +0000 (08:53 +0100)]
Merge branch 'dev_addr-conversions-part-1'
Jakub Kicinski says:
====================
ethernet: manual netdev->dev_addr conversions (part 1)
Manual conversions of drivers writing directly
to netdev->dev_addr (part 1 out of 3).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Oct 2021 22:16:52 +0000 (15:16 -0700)]
ethernet: ixgb: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced a rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it got through appropriate helpers.
Read the address into an array on the stack, then call
eth_hw_addr_set(). ixgb_get_ee_mac_addr() is used with
a non-netdev->dev_addr pointer so we can't deal with the problem
inside it.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
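A hedged sketch of the conversion pattern (the surrounding driver names are
illustrative): read the MAC into a stack buffer, then install it through the
helper so netdev->dev_addr is only written via the rbtree-aware path.

  u8 addr[ETH_ALEN];

  ixgb_get_ee_mac_addr(&adapter->hw, addr);  /* fills the local buffer */
  eth_hw_addr_set(netdev, addr);             /* sole writer of dev_addr */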
Jakub Kicinski [Fri, 15 Oct 2021 22:16:51 +0000 (15:16 -0700)]
ethernet: ibmveth: use ether_addr_to_u64()
We'll want to make netdev->dev_addr const; remove the local
helper, which is missing a const qualifier on its argument,
and use ether_addr_to_u64() instead.
Similar story to mlx4.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
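The replacement helper takes a const pointer, so it keeps working once
netdev->dev_addr becomes const; a one-line sketch of the call:

  u64 mac = ether_addr_to_u64(netdev->dev_addr);  /* <linux/etherdevice.h> */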
Jakub Kicinski [Fri, 15 Oct 2021 22:16:50 +0000 (15:16 -0700)]
ethernet: enetc: use eth_hw_addr_set()
Commit
406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced a rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it got through appropriate helpers.
Pass a netdev into the helper instead of just the address,
read the address into an array on the stack, then call
eth_hw_addr_set().
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>