Vladimir Oltean says:
====================
DSA changes for multiple CPU ports (part 4)
Those who have been following part 1:
https://patchwork.kernel.org/project/netdevbpf/cover/20220511095020.562461-1-vladimir.oltean@nxp.com/
part 2:
https://patchwork.kernel.org/project/netdevbpf/cover/20220521213743.2735445-1-vladimir.oltean@nxp.com/
and part 3:
https://patchwork.kernel.org/project/netdevbpf/cover/20220819174820.3585002-1-vladimir.oltean@nxp.com/
will know that I am trying to enable the second internal port pair from
the NXP LS1028A Felix switch for DSA-tagged traffic via "ocelot-8021q".
This series represents the final part of that effort. We have:
- the introduction of new UAPI in the form of IFLA_DSA_MASTER (a netlink
usage sketch follows this list); the iproute2 patch for it is here:
https://patchwork.kernel.org/project/netdevbpf/patch/20220904190025.813574-1-vladimir.oltean@nxp.com/
- preparation for LAG DSA masters, suppressing those operations in the
DSA core that simply don't make sense when the master is a bonding/team
interface
- handling all the net device events that occur between DSA and a
LAG DSA master, including migration to a different DSA master when the
current master joins a LAG, or the LAG gets destroyed
- updating documentation
- adding an implementation for NXP LS1028A, where things are insanely
complicated due to hardware limitations. We have 2 tagging protocols:
* the native "ocelot" protocol (NPI port mode). This does not support
CPU ports in a LAG, and supports a single DSA master. The DSA master
can be changed between eno2 (2.5G) and eno3 (1G), but all ports must
be down while the change takes place, and user ports assigned to the
old DSA master will refuse to come up if the user requests that during
such a "transient" state.
* the "ocelot-8021q" software-defined protocol, where the Ethernet
ports connected to the CPU are not actually "god mode" ports as far
as the hardware is concerned. So here, static assignment between
user and CPU ports is possible by editing the PGID_SRC masks for
the port-based forwarding matrix, and "CPU ports in a LAG" simply
means "a LAG like any other" (see the conceptual sketch after this list).
The series was regression-tested on LS1028A using the local_termination.sh
kselftest, in most of the possible operating modes and tagging protocols.
I have not done a detailed performance evaluation yet, but using a LAG, it is
possible to exceed the termination bandwidth of a single CPU port in an
iperf3 test with multiple senders and multiple receivers.
v1 at:
https://patchwork.kernel.org/project/netdevbpf/cover/20220830195932.683432-1-vladimir.oltean@nxp.com/
Previous (older) RFC at:
https://lore.kernel.org/netdev/20220523104256.3556016-1-olteanv@gmail.com/
====================
Link: https://lore.kernel.org/r/20220911010706.2137967-1-vladimir.oltean@nxp.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>