David S. Miller [Tue, 20 Jul 2021 14:10:49 +0000 (07:10 -0700)]
Merge branch 'qcom-dts-updates'
Alex Elder says:
====================
arm64: dts: qcom: DTS updates
This series updates some IPA-related DT nodes.
Newer versions of IPA do not require an interconnect between IPA
and SoC internal memory. The first patch updates the DT binding
to reflect this.
The second patch adds IPA information to "sc7280.dtsi", using only
two interconnects. It includes the definition of the reserved
memory area used to hold IPA firmware.
The last patch defines the reserved IPA firmware memory area in
"sc7180.dtsi".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 19 Jul 2021 21:24:56 +0000 (16:24 -0500)]
arm64: dts: qcom: sc7180: define ipa_fw_mem node
Define the reserved memory space used for IPA firmware for the
Qualcomm SC7180 SoC.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 19 Jul 2021 21:24:55 +0000 (16:24 -0500)]
arm64: dts: qcom: sc7280: add IPA information
Add IPA-related nodes and definitions to "sc7280.dtsi", including
the reserved memory area used for AP-based IPA firmware loading.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 19 Jul 2021 21:24:54 +0000 (16:24 -0500)]
dt-bindings: net: qcom,ipa: make imem interconnect optional
On some newer SoCs, the interconnect between IPA and SoC internal
memory (imem) is not used. Reflect this in the binding by moving
the definition of the "imem" interconnect to the end and defining
minItems to be 2 for both the interconnects and interconnect-names
properties.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 19 Jul 2021 20:23:33 +0000 (15:23 -0500)]
net: ipa: fix IPA v4.11 interconnect data
Currently three interconnects are defined for the Qualcomm SC7280
SoC, but this was based on a misunderstanding. There should only be
two interconnects defined: one between the IPA and system memory;
and another between the AP and IPA config space. The bandwidths
defined for the memory and config interconnects do not match what I
understand to be proper values, so update these.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabio Estevam [Mon, 19 Jul 2021 23:26:39 +0000 (20:26 -0300)]
dt-bindings: net: fec: Fix indentation
The following warning is observed when running 'make dtbs_check':
Documentation/devicetree/bindings/net/fsl,fec.yaml:85:7: [warning] wrong indentation: expected 8 but found 6 (indentation)
Fix the indentation accordingly.
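For reference, the schema checks can typically be narrowed down to this one binding (a sketch; assumes yamllint is installed):
make DT_SCHEMA_FILES=Documentation/devicetree/bindings/net/fsl,fec.yaml dt_binding_check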
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Reviewed-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 14:04:37 +0000 (07:04 -0700)]
Merge branch 'fdb-fanout'
Vladimir Oltean says:
====================
Fan out FDB entries pointing towards the bridge to all switchdev member ports
The "DSA RX filtering" series has added some important support for
interpreting addresses towards the bridge device as host addresses and
installing them as FDB entries towards the CPU port, but it does not
cover all circumstances and needs further work.
To be precise, the mechanism introduced in that series only works as
long as the ports are fairly static and no port joins or leaves the
bridge once the configuration is done. If any port leaves, host FDB
entries that were installed during runtime (for example when the user changes
the MAC address of the bridge device) will be prematurely deleted,
resulting in a broken setup.
I see this work as targeted for "net-next" because technically it was
not supposed to work. Also, there are still corner cases and holes to be
plugged. For example, today, FDB entries on foreign interfaces are not
covered by br_fdb_replay(), which means that there are cases where some
host addresses are either lost, or never deleted by DSA. That will be
resolved once more work gets accepted, in particular the "Allow
forwarding for the software bridge data path to be offloaded to capable
devices" series, which moves the br_fdb_replay() call to the bridge core
and therefore would be required to solve the problem in a generic way
for every switchdev driver and not just for DSA.
These patches also pave the way for a cleaner implementation for FDB
entries pointing towards a LAG upper interface in DSA (that code needs
only to be added, nothing changed), however this is not done here.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 13:51:40 +0000 (16:51 +0300)]
net: dsa: use switchdev_handle_fdb_{add,del}_to_device
Using the new fan-out helper for FDB entries installed on the software
bridge, we can install host addresses with the proper refcount on the
CPU port, such that this case:
ip link set swp0 master br0
ip link set swp1 master br0
ip link set swp2 master br0
ip link set swp3 master br0
ip link set br0 address 00:01:02:03:04:05
ip link set swp3 nomaster
works properly and the br0 address remains installed as a host entry
with refcount 3 instead of getting deleted.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 13:51:39 +0000 (16:51 +0300)]
net: switchdev: introduce a fanout helper for SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE
Currently DSA has an issue with FDB entries pointing towards the bridge
in the presence of br_fdb_replay() being called at port join and leave
time.
In particular, each bridge port will ask for a replay for the FDB
entries pointing towards the bridge when it joins, and for another
replay when it leaves.
This means that for example, a bridge with 4 switch ports will notify
DSA 4 times of the bridge MAC address.
But if the MAC address of the bridge changes during the normal runtime
of the system, the bridge notifies switchdev [ once ] of the deletion of
the old MAC address as a local FDB towards the bridge, and of the
insertion [ again once ] of the new MAC address as a local FDB.
This is a problem, because DSA keeps the old MAC address as a host FDB
entry with refcount 4 (4 ports asked for it using br_fdb_replay). So the
old MAC address will not be deleted. Additionally, the new MAC address
will only be installed with refcount 1, and when the first switch port
leaves the bridge (leaving 3 others as still members), it will delete
with it the new MAC address of the bridge from the local FDB entries
kept by DSA (because the br_fdb_replay call on deletion will bring the
entry's refcount from 1 to 0).
So the problem, really, is that the number of br_fdb_replay() calls is
not matched with the refcount that a host FDB is offloaded to DSA during
normal runtime.
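A minimal sequence that hits the mismatch described above could look like this (a sketch; the swp* names are illustrative):
ip link add br0 type bridge
ip link set swp0 master br0
ip link set swp1 master br0
ip link set swp2 master br0
ip link set swp3 master br0                 # bridge MAC replayed 4 times -> host FDB refcount 4
ip link set br0 address 00:01:02:03:04:05   # old MAC deleted once, new MAC added once (refcount 1)
ip link set swp0 nomaster                   # replay on leave drops the new MAC from 1 to 0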
An elegant way to solve the problem would be to make the switchdev
notification emitted by br_fdb_change_mac_address() result in a host FDB
kept by DSA which has a refcount exactly equal to the number of ports
under that bridge. Then, no matter how many DSA ports join or leave that
bridge, the host FDB entry will always be deleted when there are exactly
zero remaining DSA switch ports members of the bridge.
To implement the proposed solution, we remember that the switchdev
objects and port attributes have some helpers provided by switchdev,
which can be optionally called by drivers:
switchdev_handle_port_obj_{add,del} and switchdev_handle_port_attr_set.
These helpers:
- fan out a switchdev object/attribute emitted for the bridge towards
all the lower interfaces that pass the check_cb().
- fan out a switchdev object/attribute emitted for a bridge port that is
a LAG towards all the lower interfaces that pass the check_cb().
In other words, this is the model we need for the FDB events too:
something that will keep an FDB entry emitted towards a physical port as
it is, but translate an FDB entry emitted towards the bridge into N FDB
entries, one per physical port.
Of course, there are many differences between fanning out a switchdev
object (VLAN) on 3 lower interfaces of a LAG and fanning out an FDB
entry on 3 lower interfaces of a LAG. Intuitively, an FDB entry towards
a LAG should be treated specially, because FDB entries are unicast and we
can't just install the same address towards 3 destinations. It is
imaginable that drivers might want to treat this case specifically, so
create some methods for this case and do not recurse into the LAG lower
ports, just the bridge ports.
DSA also listens for FDB entries on "foreign" interfaces, aka interfaces
bridged with us which are not part of our hardware domain: think an
Ethernet switch bridged with a Wi-Fi AP. For those addresses, DSA
installs host FDB entries. However, there we have the same problem
(those host FDB entries are installed with a refcount of only 1) and an
even bigger one which we did not have with FDB entries towards the
bridge:
br_fdb_replay() is currently not called for FDB entries on foreign
interfaces, just for the physical port and for the bridge itself.
So when DSA sniffs an address learned by the software bridge towards a
foreign interface like an e1000 port, and then that e1000 leaves the
bridge, DSA remains with the dangling host FDB address. That will be
fixed separately by replaying all FDB entries and not just the ones
towards the port and the bridge.
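For context, the "foreign interface" case above corresponds to a setup roughly like this (a sketch; names are illustrative):
ip link add br0 type bridge
ip link set swp0 master br0    # DSA user port
ip link set eth1 master br0    # foreign port (e.g. an e1000), outside the switch's hardware domain
# addresses learned towards eth1 are installed by DSA as host FDB entries
ip link set eth1 nomaster      # today this leaves those host entries dangling, to be fixed separately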
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 13:51:38 +0000 (16:51 +0300)]
net: switchdev: introduce helper for checking dynamically learned FDB entries
It is a bit difficult to understand what DSA checks when it tries to
avoid installing dynamically learned addresses on foreign interfaces as
local host addresses, so create a generic switchdev helper that can be
reused and is generally more readable.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xu Liang [Mon, 19 Jul 2021 05:32:12 +0000 (13:32 +0800)]
net: phy: add Maxlinear GPY115/21x/24x driver
Add driver to support the Maxlinear GPY115, GPY211, GPY212, GPY215,
GPY241, GPY245 PHYs. This is separate from the XWAY PHY driver because this
PHY family has a different register layout and new features not supported by
the XWAY PHY driver.
Signed-off-by: Xu Liang <lxu@maxlinear.com>
Acked-by: Hauke Mehrtens <hmehrtens@maxlinear.com>
Tested-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xu Liang [Mon, 19 Jul 2021 05:32:11 +0000 (13:32 +0800)]
net: phy: add API to read 802.3-c45 IDs
Add an API to read 802.3-c45 IDs so that C22/C45 mixed devices can use
C45 APIs without failing ID checks.
Signed-off-by: Xu Liang <lxu@maxlinear.com>
Acked-by: Hauke Mehrtens <hmehrtens@maxlinear.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:37:12 +0000 (06:37 -0700)]
Merge branch 'tag_8021q-cross-chip'
Vladimir Oltean says:
====================
Proper cross-chip support for tag_8021q
The cross-chip bridging support for tag_8021q/sja1105 introduced here:
https://patchwork.ozlabs.org/project/netdev/cover/20200510163743.18032-1-olteanv@gmail.com/
took some shortcuts and is not reusable in other topologies except for
the one it was written for: disjoint DSA trees. A diagram of this
topology can be seen here:
https://patchwork.ozlabs.org/project/netdev/patch/20200510163743.18032-3-olteanv@gmail.com/
However there are sja1105 switches on other boards using other
topologies, most notably:
- Daisy chained:
|
sw0p0 sw0p1 sw0p2 sw0p3 sw0p4
[ user ] [ user ] [ user ] [ dsa ] [ cpu ]
|
+---------+
|
sw1p0 sw1p1 sw1p2 sw1p3 sw1p4
[ user ] [ user ] [ user ] [ dsa ] [ dsa ]
|
+---------+
|
sw2p0 sw2p1 sw2p2 sw2p3 sw2p4
[ user ] [ user ] [ user ] [ user ] [ dsa ]
- "H" topology:
eth0 eth1
| |
CPU port CPU port
| DSA link |
sw0p0 sw0p1 sw0p2 sw0p3 sw0p4 -------- sw1p4 sw1p3 sw1p2 sw1p1 sw1p0
| | | | | |
user user user user user user
port port port port port port
In fact, the current code for tag_8021q cross-chip links works for
neither of these 2 classes of topologies.
The main reasons are:
(a) The sja1105 driver does not treat DSA links. In the "disjoint trees"
topology, the routing port towards any other switch is also the CPU
port, and that was already configured so it already worked.
This series does not deal with enabling DSA links in the sja1105
driver, that is a fairly trivial task that will be dealt with
separately.
(b) The tag_8021q code for cross-chip links assumes that any 2 switches
between which cross-chip forwarding needs to be enabled (i.e. which have
user ports part of the same bridge) are at most 1 hop away from each
other. This was true for the "disjoint trees" case because
once a packet reached the CPU port, VLAN-unaware bridging was done
by the DSA master towards the other switches based on destination
MAC address, so the tag_8021q header was not interpreted in any way.
However, in a daisy chain setup with 3 switches, all of them will
interpret the tag_8021q header, and all tag_8021q VLANs need to be
installed in all switches.
When looking at the O(n^2) real complexity of the problem, it is clear
that the current code had absolutely no chance of working in the general
case. So this patch series brings a redesign of tag_8021q, in light of
its new requirements. Anything with O(n^2) complexity (where n is the
number of switches in a DSA tree) is an obvious candidate for the DSA
cross-chip notifier support.
One by one, the patches are:
- The sja1105 driver is extremely entangled with tag_8021q, to be exact,
with that driver's best_effort_vlan_filtering support. We drop this
operating mode, which means that sja1105 temporarily loses network
stack termination for VLAN-aware bridges. That operating mode raced
itself to its own grave anyway due to some hardware limitations in
combination with PTP reported by NXP customers. I can't say a lot
more, but network stack termination for VLAN-aware bridges in sja1105
will be reimplemented soon with a much, much better solution.
- What remains of tag_8021q in sja1105 is support for standalone ports
mode and for VLAN-unaware bridging. We refactor the API surface of
tag_8021q to a single pair of dsa_tag_8021q_{register,unregister}
functions and we clean up everything else related to tag_8021q from
sja1105 and felix.
- Then we move tag_8021q into the DSA core. I thought about this a lot,
and there is really no other way to add a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
cross-chip notifier if DSA has no way to know if the individual
switches use tag_8021q or not. So it needs to be part of the core to
use notifiers.
- Then we modify tag_8021q to update dynamically on bridge_{join,leave}
events, instead of what we have today which is simply installing the
VLANs on all ports of a switch and leaving port isolation up to
somebody else. This change is necessary because port isolation over a
DSA link cannot be done in any other way except based on VLAN
membership, as opposed to bridging within the same switch which had 2
choices (at least on sja1105).
- Finally we add 2 new cross-chip notifiers for adding and deleting a
tag_8021q VLAN, which is properly refcounted similar to the bridge FDB
and MDB code, and complete cleanup is done on teardown (note that this
is unlike regular bridge VLANs, where we currently cannot do
refcounting because the user can run "bridge vlan add dev swp0 vid 100"
a gazillion times, and "bridge vlan del dev swp0 vid 100" just once,
and for some reason expect that the VLAN will be deleted. But I digress).
With this opportunity we remove a lot of hard-to-digest code and
replace it with much more idiomatic DSA-style code.
This series was regression-tested on:
- Single-switch boards with SJA1105T
- Disjoint-tree boards with SJA1105S and Felix (using ocelot-8021q)
- H topology boards using SJA1110A
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:52 +0000 (20:14 +0300)]
net: dsa: tag_8021q: add proper cross-chip notifier support
The big problem which mandates cross-chip notifiers for tag_8021q is
this:
|
sw0p0 sw0p1 sw0p2 sw0p3 sw0p4
[ user ] [ user ] [ user ] [ dsa ] [ cpu ]
|
+---------+
|
sw1p0 sw1p1 sw1p2 sw1p3 sw1p4
[ user ] [ user ] [ user ] [ dsa ] [ dsa ]
|
+---------+
|
sw2p0 sw2p1 sw2p2 sw2p3 sw2p4
[ user ] [ user ] [ user ] [ dsa ] [ dsa ]
When the user runs:
ip link add br0 type bridge
ip link set sw0p0 master br0
ip link set sw2p0 master br0
It doesn't work.
This is because dsa_8021q_crosschip_bridge_join() assumes that "ds" and
"other_ds" are at most 1 hop away from each other, so it is sufficient
to add the RX VLAN of {ds, port} into {other_ds, other_port} and vice
versa and presto, the cross-chip link works. When there is another
switch in the middle, such as in this case switch 1 with its DSA links
sw1p3 and sw1p4, somebody needs to tell it about these VLANs too.
Which is exactly why the problem is quadratic: when a port joins a
bridge, for each port in the tree that's already in that same bridge we
notify a tag_8021q VLAN addition of that port's RX VLAN to the entire
tree. It is a very complicated web of VLANs.
It must be mentioned that currently we install tag_8021q VLANs on too
many ports (DSA links - to be precise, on all of them). For example,
when sw2p0 joins br0, and assuming sw1p0 was part of br0 too, we add the
RX VLAN of sw2p0 on the DSA links of switch 0 too, even though there
isn't any port of switch 0 that is a member of br0 (at least yet).
In theory we could notify only the switches which sit in between the
port joining the bridge and the port reacting to that bridge_join event.
But in practice that is impossible, because of the way 'link' properties
are described in the device tree. The DSA bindings require DT writers to
list out not only the real/physical DSA links, but in fact the entire
routing table, like for example switch 0 above will have:
sw0p3: port@3 {
link = <&sw1p4 &sw2p4>;
};
This was done because:
/* TODO: ideally DSA ports would have a single dp->link_dp member,
* and no dst->rtable nor this struct dsa_link would be needed,
* but this would require some more complex tree walking,
* so keep it stupid at the moment and list them all.
*/
but it is a perfect example of a situation where too much information is
actively detrimental, because we are now in the position where we
cannot distinguish a real DSA link from one that is put there to avoid
the 'complex tree walking'. And because DT is ABI, there is not much we
can change.
And because we do not know which DSA links are real and which ones
aren't, we can't really know if DSA switch A is in the data path between
switches B and C, in the general case.
So this is why tag_8021q RX VLANs are added on all DSA links, and
probably why it will never change.
On the other hand, at least the number of additions/deletions is well
balanced, and this means that once we implement reference counting at
the cross-chip notifier level a la fdb/mdb, there is absolutely zero
need for a struct dsa_8021q_crosschip_link, it's all self-managing.
In fact, with the tag_8021q notifiers emitted from the bridge join
notifiers, it becomes so generic that sja1105 does not need to do
anything anymore, we can just delete its implementation of the
.crosschip_bridge_{join,leave} methods.
Among other things we can simply delete is the home-grown implementation
of sja1105_notify_crosschip_switches(). The reason why that is wrong is
because it is not quadratic - it only covers remote switches to which we
have a cross-chip bridging link and that does not cover in-between
switches. This deletion is part of the same patch because sja1105 used
to poke deep inside the guts of the tag_8021q context in order to do
that. Because the cross-chip links went away, the sja1105 code needs to go away too.
Last but not least, dsa_8021q_setup_port() is simplified (and also
renamed). Because our TAG_8021Q_VLAN_ADD notifier is designed to react
on the CPU port too, the four dsa_8021q_vid_apply() calls:
- 1 for RX VLAN on user port
- 1 for the user port's RX VLAN on the CPU port
- 1 for TX VLAN on user port
- 1 for the user port's TX VLAN on the CPU port
now get squashed into only 2 notifier calls via
dsa_port_tag_8021q_vlan_add.
And because the notifiers to add and to delete a tag_8021q VLAN are
distinct, now we finally break up the port setup and teardown into
separate functions instead of relying on a "bool enabled" flag which
tells us what to do. Arguably it should have been this way from the
get go.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:51 +0000 (20:14 +0300)]
net: dsa: tag_8021q: manage RX VLANs dynamically at bridge join/leave time
There has been at least one wasted opportunity for tag_8021q to be used
by a driver:
https://patchwork.ozlabs.org/project/netdev/patch/20200710113611.3398-3-kurt@linutronix.de/#2484272
because of a design decision: the declared purpose of tag_8021q is to
offer source port/switch identification for a tagging driver for packets
coming from a switch with no hardware DSA tagging support. It is not
intended to provide VLAN-based port isolation, because its first user,
sja1105, had another mechanism for bridging domain isolation, the L2
Forwarding Table. So even if 2 ports are in the same VLAN but they are
separated via the L2 Forwarding Table, they will not communicate with
one another. The L2 Forwarding Table is managed by the
sja1105_bridge_join() and sja1105_bridge_leave() methods.
As a consequence, today tag_8021q does not bother too much with hooking
into .port_bridge_join() and .port_bridge_leave() because that would
introduce yet another degree of freedom, it just iterates statically
through all ports of a switch and adds the RX VLAN of one port to all
the others. In this way, whenever .port_bridge_join() is called,
bridging will magically work because the RX VLANs are already installed
everywhere they need to be.
This is not to say that the reason for the change in this patch is to
satisfy the hellcreek and similar use cases, that is merely a nice side
effect. Instead it is to make sja1105 cross-chip links work properly
over a DSA link.
For context, sja1105 today supports a degenerate form of cross-chip
bridging, where the switches are interconnected through their CPU ports
("disjoint trees" topology). There is some code which has been
generalized into dsa_8021q_crosschip_link_{add,del}, but it is not
enough, and frankly it is impossible to build upon that.
Real multi-switch DSA trees, like daisy chains or H trees, which have
actual DSA links, do not work.
The problem is that sja1105 is unlike mv88e6xxx, and does not have a PVT
for cross-chip bridging, which is a table by which the local switch can
select the forwarding domain for packets from a certain ingress switch
ID and source port. The sja1105 switches cannot parse their own DSA
tags, because, well, they don't really have support for DSA tags, it's
all VLANs.
So to make something like cross-chip bridging between sw0p0 and sw1p0 work
over the sw0p3/sw1p3 DSA link with sja1105 in the topology
below:
| |
sw0p0 sw0p1 sw0p2 sw0p3 sw1p3 sw1p2 sw1p1 sw1p0
[ user ] [ user ] [ cpu ] [ dsa ] ---- [ dsa ] [ cpu ] [ user ] [ user ]
we need to ask ourselves 2 questions:
(1) how should the L2 Forwarding Table be managed?
(2) how should the VLAN Lookup Table be managed?
i.e. what should prevent packets from going to unwanted ports?
Since as mentioned, there is no PVT, the L2 Forwarding Table only
contains forwarding rules for local ports. So we can say "all user ports
are allowed to forward to all CPU ports and all DSA links".
If we allow forwarding to DSA links unconditionally, this means we must
prevent forwarding using the VLAN Lookup Table. This is in fact
asymmetric with what we do for tag_8021q on ports local to the same
switch, and it matters because now that we are making tag_8021q a core
DSA feature, we need to hook into .crosschip_bridge_join() to add/remove
the tag_8021q VLANs. So for symmetry it makes sense to manage the VLANs
for local forwarding in the same way as cross-chip forwarding.
Note that there is a very precise reason why tag_8021q hooks into
dsa_switch_bridge_join() which acts at the cross-chip notifier level,
and not at a higher level such as dsa_port_bridge_join(). We need to
install the RX VLAN of the newly joining port into the VLAN table of all
the existing ports across the tree that are part of the same bridge, and
the notifier already does the iteration through the switches for us.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:50 +0000 (20:14 +0300)]
net: dsa: tag_8021q: absorb dsa_8021q_setup into dsa_tag_8021q_{,un}register
Right now, setting up tag_8021q is a 2-step operation for a driver,
first the context structure needs to be created, then the VLANs need to
be installed on the ports. A similar thing is true for teardown.
Merge the 2 steps into the register/unregister methods, to be as
transparent as possible for the driver as to what tag_8021q does behind
the scenes. This also gets rid of the funny "bool setup == true means
setup, == false means teardown" API that tag_8021q used to expose.
Note that dsa_tag_8021q_register() must be called at least in the
.setup() driver method and never earlier (like in the driver probe
function). This is because the DSA switch tree is not initialized at
probe time, and the cross-chip notifiers will not work.
For symmetry with .setup(), the unregister method should be put in
.teardown().
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:49 +0000 (20:14 +0300)]
net: dsa: make tag_8021q operations part of the core
Make tag_8021q a more central element of DSA and move the 2 driver
specific operations outside of struct dsa_8021q_context (which is
supposed to hold dynamic data and not really constant function
pointers).
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:48 +0000 (20:14 +0300)]
net: dsa: let the core manage the tag_8021q context
The basic problem description is as follows:
Be there 3 switches in a daisy chain topology:
|
sw0p0 sw0p1 sw0p2 sw0p3 sw0p4
[ user ] [ user ] [ user ] [ dsa ] [ cpu ]
|
+---------+
|
sw1p0 sw1p1 sw1p2 sw1p3 sw1p4
[ user ] [ user ] [ user ] [ dsa ] [ dsa ]
|
+---------+
|
sw2p0 sw2p1 sw2p2 sw2p3 sw2p4
[ user ] [ user ] [ user ] [ user ] [ dsa ]
The CPU will not be able to ping through the user ports of the
bottom-most switch (like for example sw2p0), simply because tag_8021q
was not coded up for this scenario - it has always assumed DSA switch
trees with a single switch.
To add support for the topology above, we must admit that the RX VLAN of
sw2p0 must be added on some ports of switches 0 and 1 as well. This is
in fact a textbook example of a case that can use the cross-chip notifier
framework that DSA has set up in switch.c.
There is only one problem: core DSA (switch.c) is not able right now to
make the connection between a struct dsa_switch *ds and a struct
dsa_8021q_context *ctx. Right now, it is drivers who call into
tag_8021q.c and always provide a struct dsa_8021q_context *ctx pointer,
and tag_8021q.c calls them back with the .tag_8021q_vlan_{add,del}
methods.
But with cross-chip notifiers, it is possible for tag_8021q to call
drivers without drivers having ever asked for anything. A good example
is right above: when sw2p0 wants to set itself up for tag_8021q,
the .tag_8021q_vlan_add method needs to be called for switches 1 and 0,
so that they transport sw2p0's VLANs towards the CPU without dropping
them.
So instead of letting drivers manage the tag_8021q context, add a
tag_8021q_ctx pointer inside of struct dsa_switch, which will be
populated when dsa_tag_8021q_register() returns success.
The patch is fairly long-winded because we are partly reverting commit
5899ee367ab3 ("net: dsa: tag_8021q: add a context structure") which made
the driver-facing tag_8021q API use "ctx" instead of "ds". Now that we
can access "ctx" directly from "ds", this is no longer needed.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:47 +0000 (20:14 +0300)]
net: dsa: build tag_8021q.c as part of DSA core
Upcoming patches will add tag_8021q related logic to switch.c and
port.c, in order to allow it to make use of cross-chip notifiers.
In addition, a struct dsa_8021q_context *ctx pointer will be added to
struct dsa_switch.
It seems fairly low-reward to #ifdef the *ctx from struct dsa_switch and
to provide shim implementations of the entire tag_8021q.c calling
surface (not even clear what to do about the tag_8021q cross-chip
notifiers to avoid compiling them). The runtime overhead for switches
which don't use tag_8021q is fairly small because all helpers will check
for ds->tag_8021q_ctx being a NULL pointer and stop there.
So let's make it part of dsa_core.o.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:46 +0000 (20:14 +0300)]
net: dsa: tag_8021q: create dsa_tag_8021q_{register,unregister} helpers
In preparation of moving tag_8021q to core DSA, move all initialization
and teardown related to tag_8021q which is currently done by drivers in
2 functions called "register" and "unregister". These will gather more
functionality in future patches, which will better justify the chosen
naming scheme.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:45 +0000 (20:14 +0300)]
net: dsa: tag_8021q: remove struct packet_type declaration
This is no longer necessary since tag_8021q doesn't register itself as a
full-blown tagger anymore.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:44 +0000 (20:14 +0300)]
net: dsa: tag_8021q: use symbolic error names
Use %pe to give the user a string holding the error code instead of just
a number.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:43 +0000 (20:14 +0300)]
net: dsa: tag_8021q: use "err" consistently instead of "rc"
Some of the tag_8021q code has been taken out of sja1105, which uses
"rc" for its return code variables, whereas the DSA core uses "err".
Change tag_8021q for consistency.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 19 Jul 2021 17:14:42 +0000 (20:14 +0300)]
net: dsa: sja1105: delete the best_effort_vlan_filtering mode
Simply put, the best-effort VLAN filtering mode relied on VLAN retagging
from a bridge VLAN towards a tag_8021q sub-VLAN in order to be able to
decode the source port in the tagger, but the VLAN retagging
implementation inside the sja1105 chips is not the best and we were
relying on marginal operating conditions.
The most notable limitation of the best-effort VLAN filtering mode is
its incapacity to treat this case properly:
ip link add br0 type bridge vlan_filtering 1
ip link set swp2 master br0
ip link set swp4 master br0
bridge vlan del dev swp4 vid 1
bridge vlan add dev swp4 vid 1 pvid
When sending an untagged packet through swp2, the expectation is for it
to be forwarded to swp4 as egress-tagged (so it will contain VLAN ID 1
on egress). But the switch will send it as egress-untagged.
There was an attempt to fix this here:
https://patchwork.kernel.org/project/netdevbpf/patch/20210407201452.1703261-2-olteanv@gmail.com/
but it failed miserably because it broke PTP RX timestamping, in a way
that cannot be corrected due to hardware issues related to VLAN
retagging.
So with either PTP broken or pushing VLAN headers on egress for untagged
packets being broken, the sad reality is that the best-effort VLAN
filtering code is broken. Delete it.
Note that this means there will be a temporary loss of functionality in
this driver until it is replaced with something better (network stack
RX/TX capability for "mode 2" as described in
Documentation/networking/dsa/sja1105.rst, the "port under VLAN-aware
bridge" case). We simply cannot keep this code until that driver rework
is done, it is super bloated and tangled with tag_8021q.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:23:50 +0000 (06:23 -0700)]
Merge branch 's390-next'
Julian Wiedmann says:
====================
s390/qeth: updates 2021-07-20
please apply the following patch series for qeth to netdev's net-next tree.
This removes the deprecated support for OSN-mode devices, and does some
follow-on cleanups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:23:13 +0000 (06:23 -0700)]
Merge branch 'veth-flexible-channel-numbers'
Paolo Abeni says:
====================
veth: more flexible channels number configuration
XDP setups can benefit from multiple veth RX/TX queues. Currently
veth allow setting such number only at creation time via the
'numrxqueues' and 'numtxqueues' parameters.
This series introduces support for the ethtool set_channel operation
and allows configuring the queue number via a new module parameter.
The veth default configuration is not changed.
Finally self-tests are updated to check the new features, with both
valid and invalid arguments.
This iteration is a rebase of the most recent RFC; it does not provide
a module parameter to configure the default number of queues, but I
think it could be worthwhile.
RFC v1 -> RFC v2:
- report more consistent 'combined' count
- make set_channel as resilient as possible to errors
- drop module parameter - but I would still consider it.
- more self-tests
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:22:14 +0000 (06:22 -0700)]
Merge branch 'bridge-vlan-multicast'
Nikolay Aleksandrov says:
====================
net: bridge: multicast: add vlan support
This patchset adds initial per-vlan multicast support, most of the code
deals with moving to multicast context pointers from bridge/port pointers.
That allows us to switch them with the per-vlan contexts when a multicast
packet is being processed and vlan multicast snooping has been enabled.
That is controlled by a global bridge option added in patch 06 which is
off by default (BR_BOOLOPT_MCAST_VLAN_SNOOPING). It is important to note
that this option can change only under RTNL and doesn't require
multicast_lock, so we need to be careful when retrieving mcast contexts
in parallel. For packet processing they are switched only once in
br_multicast_rcv() and then used until the packet has been processed.
For the most part we need these contexts only to read config values and
check if they are disabled. The global mcast state which is maintained
consists of querier and router timers, the rest are config options.
The port mcast state which is maintained consists of query timer and
link to router port list if it's ever marked as a router port. Port
multicast contexts _must_ be used only with their respective global
contexts, that is a bridge port's mcast context must be used only with
bridge's global mcast context and a vlan/port's mcast context must be
used only with that vlan's global mcast context due to the router port
lists. This way a bridge port can be marked as a router in multiple
vlans, but might not be a router in some other vlan. Also this allows us
to have per-vlan querier elections, per-vlan queries and basically the
whole multicast state becomes per-vlan when the option is enabled.
One of the hardest parts is synchronization with vlan's memory
management, that is done through a new vlan flag: BR_VLFLAG_MCAST_ENABLED
which is changed only under multicast_lock. When a vlan is being
destroyed first that flag is removed under the lock, then the multicast
context is torn down which includes waiting for any outstanding context
timers. Since all of the vlan processing depends on BR_VLFLAG_MCAST_ENABLED
it must be checked first if the contexts are vlan and the multicast_lock
has been acquired. That is done by all IGMP/MLD packet processing
functions and timers. When processing a packet we have RCU so the vlan
memory won't be freed, but if the flag is missing we must not process it.
The timers are synchronized in the same way with the addition of waiting
for them to finish in case they are running after removing the flag
under multicast_lock (i.e. they were waiting for the lock). Multicast vlan
snooping requires vlan filtering to be enabled, if it's disabled then
snooping gets automatically disabled, too. BR_VLFLAG_GLOBAL_MCAST_ENABLED
controls if a vlan has BR_VLFLAG_MCAST_ENABLED set which is used in all
vlan disabled checks. We need both flags because one is controlled by
user-space globally (BR_VLFLAG_GLOBAL_MCAST_ENABLED) and the other is
for a particular bridge/vlan or port/vlan entry (BR_VLFLAG_MCAST_ENABLED).
Since the latter is also used for synchronization between the multicast
and vlan code, and also controlled by BR_VLFLAG_GLOBAL_MCAST_ENABLED we
rely on it when checking if a vlan context is disabled. The multicast
fast-path has 3 new bit tests on the cache-hot bridge flags field, I
didn't observe any measurable difference. I haven't forced either
context options to be always disabled when the other type is enabled
because the state consists of timers which either expire (router) or
don't affect the normal operation. Some options, like the mcast querier
one, won't be allowed to change for the disabled context type, that will
come with a future patch-set which adds per-vlan querier control.
Another important addition is the global vlan options, so far we had
only per bridge/port vlan options but in order to control vlan multicast
snooping globally we need to add a new type of global vlan options.
They can be changed only on the bridge device and are dumped only when a
special flag is set in the dump request. The first global option is vlan
mcast snooping control, it controls the vlan BR_VLFLAG_GLOBAL_MCAST_ENABLED
private flag. It can be set only on master vlan entries. There will be
many more global vlan options in the future both for multicast config
and other per-vlan options (e.g. STP).
There's a lot of room for improvements, I'll do some of the initial
ones but splitting the state to different contexts opens the door
for a lot more. Also any new multicast options become vlan-supported with
very little to no effort by using the same contexts.
Short patch description:
patches 01-04: initial mcast context add, no functional changes
patch 05: adds vlan mcast init and control helpers and uses them on
vlan create/destroy
patch 06: adds a global bridge mcast vlan snooping knob (default
off)
patches 07-08: add a helper for users which must derive the contexts
based on current bridge and vlan options (e.g. timers)
patch 09: adds checks for disabled vlan contexts in packet
processing and timers
patch 10: adds support for per-vlan querier and tagged queries
patch 11: adds router port vlan id in the notifications
patches 12-14: add global vlan options support (change, dump, notify)
patch 15: adds per-vlan global mcast snooping control
Future patch-sets which build on this one (in order):
- vlan state mcast handling
- user-space mdb contexts (currently only the bridge contexts are used
there)
- all bridge multicast config options added per-vlan global and per
vlan/port
- iproute2 support for all the new uAPIs
- selftests
This set has been stress-tested (deleting/adding ports/vlans while changing
vlan mcast snooping while processing IGMP/MLD packets), and also has
passed all bridge self-tests. I'm sending this set as early as possible
since there're a few more related sets that should go in the same
release to get proper and full mcast vlan snooping support.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 20 Jul 2021 06:38:49 +0000 (08:38 +0200)]
s390/qeth: clean up device_type management
qeth uses three device_type structs - a generic one, and one for each
sub-driver (which is used for fixed-layer devices only). Instead of
exporting these device_types back&forth between the driver's modules,
make all the logic self-contained within the sub-drivers.
On disc->setup() they either install their own device_type, or add the
sysfs attributes that are missing in the generic device_type. Later on
disc->remove() these attributes are removed again from any device that
has the generic device_type.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 20 Jul 2021 06:38:48 +0000 (08:38 +0200)]
s390/qeth: clean up QETH_PROT_* naming
The QETH_PROT_* naming is shared among two unrelated areas - one is
the MPC-level protocol identifiers, the other is the qeth_prot_version
enum.
Rename the MPC definitions to use QETH_MPC_PROT_*.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 20 Jul 2021 06:38:47 +0000 (08:38 +0200)]
s390/qeth: remove OSN support
Commit fb64de1bc36c ("s390/qeth: phase out OSN support") spelled out
why the OSN support in qeth is in a bad shape, and put any remaining
interested parties on notice to speak up before it gets ripped out.
It's 2021 now, so make true on that promise and remove all the
OSN-specific parts from qeth. This also means that we no longer need to
export various parts of the cmd & data path internals to the L2 driver.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:13:37 +0000 (06:13 -0700)]
Merge branch '40GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2021-07-19
This series contains updates to iavf and i40e drivers.
Stefan Assmann adds locking to a path that does not acquire a spinlock
where needed for i40e. He also adjusts locking of critical sections to
help avoid races and removes overriding of the adapter state during
pending reset for iavf driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 13:11:28 +0000 (06:11 -0700)]
Merge branch 'veth-flexible-channel-numbers'
Paolo Abeni says:
====================
veth: more flexible channels number configuration
XDP setups can benefit from multiple veth RX/TX queues. Currently
veth allow setting such number only at creation time via the
'numrxqueues' and 'numtxqueues' parameters.
This series introduces support for the ethtool set_channel operation
and allows configuring the queue number via a new module parameter.
The veth default configuration is not changed.
Finally self-tests are updated to check the new features, with both
valid and invalid arguments.
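For reference, the creation-time knobs mentioned above are used roughly like this (a sketch; interface names are illustrative):
ip link add veth-a numrxqueues 8 numtxqueues 8 type veth peer name veth-b   # fixed maximum of 8 RX / 8 TX queues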
This iteration is a rebase of the most recent RFC; it does not provide
a module parameter to configure the default number of queues, but I
think it could be worthwhile.
RFC v1 -> RFC v2:
- report more consistent 'combined' count
- make set_channel as resilient as possible to errors
- drop module parameter - but I would still consider it.
- more self-tests
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 20 Jul 2021 08:41:52 +0000 (10:41 +0200)]
selftests: net: veth: add tests for set_channel
Simple functional test for the newly exposed features.
Also add an optional stress test for the channel number
update under flood.
RFC v1 -> RFC v2:
- add the stress test
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 20 Jul 2021 08:41:51 +0000 (10:41 +0200)]
veth: create by default nr_possible_cpus queues
This allows easier XDP usage. The default number of active
queues is not changed: 1 RX and 1 TX, so that this does
not introduce overhead on the datapath for queue selection.
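A quick way to observe the new default (a sketch; names are illustrative):
ip link add veth-a type veth peer name veth-b
ethtool -l veth-a    # "Pre-set maximums" should track nr_possible_cpus; "Current" stays at 1 RX / 1 TX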
v1 -> v2:
- drop the module parameter, force default to nr_possible_cpus - Toke
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 20 Jul 2021 08:41:50 +0000 (10:41 +0200)]
veth: implement support for set_channel ethtool op
This change implements the set_channel() ethtool operation,
preserving the current defaults values and allowing up set
the number of queues in the range set ad device creation
time.
The update operation tries hard to leave the device in a
consistent status in case of errors.
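Assuming a pair created with, say, 'numrxqueues 8 numtxqueues 8' (names illustrative), the new operation can then be exercised like this:
ethtool -L veth-a rx 4 tx 4    # change the active queue count within the creation-time range
ethtool -l veth-a              # inspect current and maximum channel counts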
RFC v1 -> RFC v2:
- don't flip device status on set_channel()
- roll back the changes if possible on error - Jakub
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 20 Jul 2021 08:41:49 +0000 (10:41 +0200)]
veth: factor out initialization helper
Extract into simpler helpers the code to enable and disable a
range of xdp/napi instances, with the common property that the
"disable" helpers can't fail.
Will be used by the next patch. No functional change intended.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 20 Jul 2021 08:41:48 +0000 (10:41 +0200)]
veth: always report zero combined channels
veth get_channels currently reports all channels as being both RX/TX and
combined. As Jakub noted:
"""
ethtool man page is relatively clear, unfortunately the kernel code
is not and few read the man page. A channel is approximately an IRQ,
not a queue, and IRQ can't be dedicated and combined simultaneously
"""
This patch changes the information exposed by veth_get_channels,
setting max_combined to zero, being more consistent with the above
statement. The ethtool_channels struct is always cleared by the caller; we just
need to avoid setting the 'combined' fields.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:56 +0000 (13:44 +0300)]
memcg: enable accounting for scm_fp_list objects
Unix sockets allow sending file descriptors via SCM_RIGHTS type messages.
Each such send call forces the kernel to allocate up to 2Kb of memory for
struct scm_fp_list.
It makes sense to account for them to restrict the host's memory
consumption from inside the memcg-limited container.
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:50 +0000 (13:44 +0300)]
memcg: ipv6/sit: account and don't WARN on ip_tunnel_prl structs allocation
Author: Andrey Ryabinin <aryabinin@virtuozzo.com>
The size of the ip_tunnel_prl structs allocation is controllable from
user-space, thus it's better to avoid spamming dmesg if the allocation fails.
Also add __GFP_ACCOUNT as this is a good candidate for per-memcg
accounting. The allocation is temporary and limited to 4GB.
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:44 +0000 (13:44 +0300)]
memcg: enable accounting for VLAN group array
The vlan group array consumes up to 8 pages of memory per net device.
It makes sense to account for them to restrict the host's memory
consumption from inside the memcg-limited container.
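As a rough illustration of how a container can trigger these allocations (device names and VLAN IDs are illustrative):
ip link add dummy0 type dummy
for vid in $(seq 1 100); do
    ip link add link dummy0 name dummy0.$vid type vlan id $vid
done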
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:37 +0000 (13:44 +0300)]
memcg: enable accounting for inet_bin_bucket cache
A net namespace can create up to 64K tcp and dccp ports and force the
kernel to allocate up to several megabytes of memory per netns
for inet_bind_bucket objects.
It makes sense to account for them to restrict the host's memory
consumption from inside the memcg-limited container.
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:31 +0000 (13:44 +0300)]
memcg: enable accounting for IP address and routing-related objects
A netadmin inside a container can use 'ip a a' and 'ip r a'
to assign a large number of ipv4/ipv6 addresses and routing entries
and force the kernel to allocate megabytes of unaccounted memory
for long-lived per-netdevice kernel objects:
'struct in_ifaddr', 'struct inet6_ifaddr', 'struct fib6_node',
'struct rt6_info', 'struct fib_rules' and the ip_fib caches.
These objects can be removed manually, though usually they live
in memory until their net namespace is destroyed.
It makes sense to account for them to restrict the host's memory
consumption from inside the memcg-limited container.
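For example, something along these lines (names and counts are illustrative) creates a large number of such objects inside a netns:
ip netns add test
ip -n test link add dummy0 type dummy
ip -n test link set dummy0 up
for i in $(seq 1 1000); do
    ip -n test addr add 2001:db8::$(printf %x $i)/128 dev dummy0           # 'ip a a'
    ip -n test route add 10.$(( i / 256 )).$(( i % 256 )).0/24 dev dummy0  # 'ip r a'
done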
One of such objects is the 'struct fib6_node' mostly allocated in
net/ipv6/route.c::__ip6_ins_rt() inside the lock_bh()/unlock_bh() section:
write_lock_bh(&table->tb6_lock);
err = fib6_add(&table->tb6_root, rt, info, mxc);
write_unlock_bh(&table->tb6_lock);
In this case it is not enough to simply add SLAB_ACCOUNT to the corresponding
kmem cache. The proper memory cgroup still cannot be found due to the
incorrect 'in_interrupt()' check used in memcg_kmem_bypass().
The obsolete in_interrupt() does not describe the real execution context properly.
From include/linux/preempt.h:
The following macros are deprecated and should not be used in new code:
in_interrupt() - We're in NMI,IRQ,SoftIRQ context or have BH disabled
To verify the current execution context, a new macro should be used instead:
in_task() - We're in task context
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasily Averin [Mon, 19 Jul 2021 10:44:23 +0000 (13:44 +0300)]
memcg: enable accounting for net_device and Tx/Rx queues
A container netadmin can create a lot of fake net devices,
then create a new net namespace and repeat it again and again.
A net device can request the creation of up to 4096 tx and rx queues,
forcing the kernel to allocate up to several tens of megabytes of memory
per net device.
It makes sense to account for them to restrict the host's memory
consumption from inside the memcg-limited container.
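A single round of the pattern described above might look like this (names are illustrative):
ip netns add heavy
# each such device may pin tens of megabytes for its tx/rx queue arrays
ip -n heavy link add big0 numtxqueues 4096 numrxqueues 4096 type dummy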
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Jul 2021 12:41:29 +0000 (05:41 -0700)]
Merge branch 'bridge-vlan-multicast'
Nikolay Aleksandrov says:
====================
net: bridge: multicast: add vlan support
This patchset adds initial per-vlan multicast support, most of the code
deals with moving to multicast context pointers from bridge/port pointers.
That allows us to switch them with the per-vlan contexts when a multicast
packet is being processed and vlan multicast snooping has been enabled.
That is controlled by a global bridge option added in patch 06 which is
off by default (BR_BOOLOPT_MCAST_VLAN_SNOOPING). It is important to note
that this option can change only under RTNL and doesn't require
multicast_lock, so we need to be careful when retrieving mcast contexts
in parallel. For packet processing they are switched only once in
br_multicast_rcv() and then used until the packet has been processed.
For the most part we need these contexts only to read config values and
check if they are disabled. The global mcast state which is maintained
consists of querier and router timers, the rest are config options.
The port mcast state which is maintained consists of query timer and
link to router port list if it's ever marked as a router port. Port
multicast contexts _must_ be used only with their respective global
contexts, that is a bridge port's mcast context must be used only with
bridge's global mcast context and a vlan/port's mcast context must be
used only with that vlan's global mcast context due to the router port
lists. This way a bridge port can be marked as a router in multiple
vlans, but might not be a router in some other vlan. Also this allows us
to have per-vlan querier elections, per-vlan queries and basically the
whole multicast state becomes per-vlan when the option is enabled.
One of the hardest parts is synchronization with vlan's memory
management, that is done through a new vlan flag: BR_VLFLAG_MCAST_ENABLED
which is changed only under multicast_lock. When a vlan is being
destroyed first that flag is removed under the lock, then the multicast
context is torn down which includes waiting for any outstanding context
timers. Since all of the vlan processing depends on BR_VLFLAG_MCAST_ENABLED
it must be checked first if the contexts are vlan and the multicast_lock
has been acquired. That is done by all IGMP/MLD packet processing
functions and timers. When processing a packet we have RCU so the vlan
memory won't be freed, but if the flag is missing we must not process it.
The timers are synchronized in the same way with the addition of waiting
for them to finish in case they are running after removing the flag
under multicast_lock (i.e. they were waiting for the lock). Multicast vlan
snooping requires vlan filtering to be enabled, if it's disabled then
snooping gets automatically disabled, too. BR_VLFLAG_GLOBAL_MCAST_ENABLED
controls if a vlan has BR_VLFLAG_MCAST_ENABLED set which is used in all
vlan disabled checks. We need both flags because one is controlled by
user-space globally (BR_VLFLAG_GLOBAL_MCAST_ENABLED) and the other is
for a particular bridge/vlan or port/vlan entry (BR_VLFLAG_MCAST_ENABLED).
Since the latter is also used for synchronization between the multicast
and vlan code, and also controlled by BR_VLFLAG_GLOBAL_MCAST_ENABLED we
rely on it when checking if a vlan context is disabled. The multicast
fast-path has 3 new bit tests on the cache-hot bridge flags field, I
didn't observe any measurable difference. I haven't forced either
context options to be always disabled when the other type is enabled
because the state consists of timers which either expire (router) or
don't affect the normal operation. Some options, like the mcast querier
one, won't be allowed to change for the disabled context type, that will
come with a future patch-set which adds per-vlan querier control.
Another important addition is the global vlan options, so far we had
only per bridge/port vlan options but in order to control vlan multicast
snooping globally we need to add a new type of global vlan options.
They can be changed only on the bridge device and are dumped only when a
special flag is set in the dump request. The first global option is vlan
mcast snooping control, it controls the vlan BR_VLFLAG_GLOBAL_MCAST_ENABLED
private flag. It can be set only on master vlan entries. There will be
many more global vlan options in the future both for multicast config
and other per-vlan options (e.g. STP).
There's a lot of room for improvements, I'll do some of the initial
ones but splitting the state to different contexts opens the door
for a lot more. Also any new multicast options become vlan-supported with
very little to no effort by using the same contexts.
Short patch description:
patches 01-04: initial mcast context add, no functional changes
patch 05: adds vlan mcast init and control helpers and uses them on
vlan create/destroy
patch 06: adds a global bridge mcast vlan snooping knob (default
off)
patches 07-08: add a helper for users which must derive the contexts
based on current bridge and vlan options (e.g. timers)
patch 09: adds checks for disabled vlan contexts in packet
processing and timers
patch 10: adds support for per-vlan querier and tagged queries
patch 11: adds router port vlan id in the notifications
patches 12-14: add global vlan options support (change, dump, notify)
patch 15: adds per-vlan global mcast snooping control
Future patch-sets which build on this one (in order):
- vlan state mcast handling
- user-space mdb contexts (currently only the bridge contexts are used
there)
- all bridge multicast config options added per-vlan global and per
vlan/port
- iproute2 support for all the new uAPIs
- selftests
This set has been stress-tested (deleting/adding ports/vlans while changing
vlan mcast snooping while processing IGMP/MLD packets), and also has
passed all bridge self-tests. I'm sending this set as early as possible
since there are a few more related sets that should go in the same
release to get proper and full mcast vlan snooping support.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:37 +0000 (20:06 +0300)]
net: bridge: vlan: add mcast snooping control
Add a new global vlan option which controls whether multicast snooping
is enabled or disabled for a single vlan. It controls the vlan private
flag: BR_VLFLAG_GLOBAL_MCAST_ENABLED.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:36 +0000 (20:06 +0300)]
net: bridge: vlan: notify when global options change
Add support for global options notifications. They use only RTM_NEWVLAN
since global options can only be set and are contained in a separate
vlan global options attribute. Notifications are compressed into ranges
where possible, i.e. when sequential vlans have equal options.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:35 +0000 (20:06 +0300)]
net: bridge: vlan: add support for dumping global vlan options
Add a new vlan options dump flag which causes only global vlan options
to be dumped. The dumps are done only for bridge devices; ports are
ignored. They support vlan range compression if the options of sequential
vlans are equal (currently always true).
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:34 +0000 (20:06 +0300)]
net: bridge: vlan: add support for global options
We can have two types of vlan options depending on context:
- per-device vlan options (split into per-bridge and per-port)
- global vlan options
The second type wasn't supported by the bridge until now, but we need
it for per-vlan multicast support, per-vlan STP support and other
options which require global vlan context. These options are contained in
the global bridge vlan context even if the vlan is not configured on the
bridge device itself. This patch adds the initial netlink attributes and
support for setting these global vlan options; they can only be set
(RTM_NEWVLAN) and the operation must use the bridge device. Since there
are no such options yet, it shouldn't have any functional effect.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:33 +0000 (20:06 +0300)]
net: bridge: multicast: include router port vlan id in notifications
Use the port multicast context to check if the router port context is a
vlan one and, in case it is, include its vlan id in the notification.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:32 +0000 (20:06 +0300)]
net: bridge: multicast: add vlan querier and query support
Add basic vlan context querier support: if the contexts passed to
multicast_alloc_query are vlan ones then the query will be tagged. Also
handle querier start/stop for vlan contexts.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:31 +0000 (20:06 +0300)]
net: bridge: multicast: check if should use vlan mcast ctx
Add helpers which check if the current bridge/port multicast context
should be used (i.e. they're not disabled) and use them for Rx IGMP/MLD
processing, timers and new group addition. It is important for vlans to
skip timer/packet processing after the multicast_lock is obtained if
the vlan context doesn't have BR_VLFLAG_MCAST_ENABLED. There are two
cases when that flag is missing:
- if the vlan is getting destroyed it will be removed and timers will
be stopped
- if the vlan mcast snooping is being disabled
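A purely illustrative sketch of that re-check pattern (the helper name and
fields are assumptions based on this description, not the exact bridge code):
/* Sketch: after taking multicast_lock, bail out if the vlan context
 * lost BR_VLFLAG_MCAST_ENABLED (vlan being destroyed or mcast
 * snooping being disabled); names are illustrative only. */
spin_lock(&brmctx->br->multicast_lock);
if (!br_multicast_ctx_should_use(brmctx, pmctx))
        goto unlock;
/* ... normal IGMP/MLD timer or packet processing ... */
unlock:
spin_unlock(&brmctx->br->multicast_lock);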
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:30 +0000 (20:06 +0300)]
net: bridge: multicast: use the port group to port context helper
We need to use the new port group to port context helper in places where
we cannot pass down the proper context (i.e. functions that can be
called by timers or outside the packet snooping paths).
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:29 +0000 (20:06 +0300)]
net: bridge: multicast: add helper to get port mcast context from port group
Add br_multicast_pg_to_port_ctx() which returns the proper port multicast
context from either port or vlan based on bridge option and vlan flags.
As the comment inside explains, the locking is a bit tricky: we rely on
the fact that BR_VLFLAG_MCAST_ENABLED requires multicast_lock to change,
and we also require the lock to be held when calling this helper. If we
find the vlan under RCU and it still has the flag, then we can be sure it
will stay alive until we unlock multicast_lock, which should be enough.
Note that the context might change from vlan to bridge between different
calls to this helper, as the mcast vlan knob requires only rtnl, so it
should be used carefully and only for read-only/check purposes.
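A rough, hypothetical sketch of the selection logic this helper performs;
all names (option check, lookup helper, context fields) are assumptions
drawn from the description above, not the exact kernel symbols:
/* Sketch only: choose between the plain port context and the
 * per-vlan port context. Caller holds multicast_lock, which keeps
 * BR_VLFLAG_MCAST_ENABLED stable for any vlan found under RCU. */
static struct net_bridge_mcast_port *
pg_to_port_ctx(struct net_bridge_port *port, struct net_bridge_port_group *pg)
{
        struct net_bridge_vlan *vlan;

        if (!vlan_mcast_snooping_enabled(port->br) || !pg->key.addr.vid)
                return &port->multicast_ctx;

        vlan = find_port_vlan_rcu(port, pg->key.addr.vid);
        if (vlan && (vlan->priv_flags & BR_VLFLAG_MCAST_ENABLED))
                return &vlan->port_mcast_ctx;

        return &port->multicast_ctx;
}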
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:28 +0000 (20:06 +0300)]
net: bridge: add vlan mcast snooping knob
Add a global knob that controls whether vlan multicast snooping is enabled.
The proper contexts (vlan or bridge-wide) will be chosen based on the knob
when processing packets and changing bridge device state. Note that
vlans have their individual mcast snooping enabled by default, but this
knob is needed to turn on bridge vlan snooping. It is disabled by
default. To enable the knob, vlan filtering must also be enabled; it
doesn't make sense to have vlan mcast snooping without vlan filtering
since that would lead to inconsistencies. Disabling vlan filtering will
also automatically disable vlan mcast snooping.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:27 +0000 (20:06 +0300)]
net: bridge: multicast: add vlan state initialization and control
Add helpers to enable/disable vlan multicast based on its flags. We need
two flags because we need to know whether the vlan has multicast enabled
globally (user-controlled) and whether it has it enabled on the specific device
(bridge or port). The new private vlan flags are:
- BR_VLFLAG_MCAST_ENABLED: locally enabled multicast on the device, used
when removing a vlan, toggling vlan mcast snooping and controlling
single vlan (kernel-controlled, valid under RTNL and multicast_lock)
- BR_VLFLAG_GLOBAL_MCAST_ENABLED: globally enabled multicast for the
vlan, used to control the bridge-wide vlan mcast snooping for a
single vlan (user-controlled, can be checked under any context)
Bridge vlan contexts are created with multicast snooping enabled by
default to be in line with the current bridge snooping defaults. In
order to actually activate per-vlan snooping and context usage, a
bridge-wide knob will be added later which will default to disabled.
If that knob is enabled then all vlan snooping will automatically be
enabled. All vlan contexts are initialized with the current bridge
multicast context defaults.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:26 +0000 (20:06 +0300)]
net: bridge: vlan: add global and per-port multicast context
Add global and per-port vlan multicast contexts, which are only initialized
and not yet used. No functional changes intended.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:25 +0000 (20:06 +0300)]
net: bridge: multicast: use multicast contexts instead of bridge or port
Pass multicast context pointers to multicast functions instead of bridge/port.
This makes it easier to later switch these contexts to their per-vlan
versions. The patch is basically a search and replace; no functional changes.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:24 +0000 (20:06 +0300)]
net: bridge: multicast: factor out bridge multicast context
Factor out the bridge's global multicast context into a separate
structure which will later be used for per-vlan global context.
No functional changes intended.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 19 Jul 2021 17:06:23 +0000 (20:06 +0300)]
net: bridge: multicast: factor out port multicast context
Factor out the port's multicast context into a separate structure which
will later be shared with the per-port,vlan context. No functional changes
intended. We need the structure even when bridge multicast is not defined,
so it can be passed down as a pointer to the forwarding functions.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kurt Kanzenbach [Mon, 19 Jul 2021 10:16:40 +0000 (12:16 +0200)]
Revert "igc: Export LEDs"
This reverts commit cf8331825a8d10e46fa574fdf015a65cb5a6db86.
There are better Linux interfaces to export the different LED modes
and blinking reasons.
Revert this patch for now and come up with a better solution later.
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Link: https://lore.kernel.org/r/20210719101640.16047-1-kurt@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 19 Jul 2021 10:11:07 +0000 (03:11 -0700)]
net/tcp_fastopen: remove tcp_fastopen_ctx_lock
Remove the (per netns) spinlock in favor of xchg() atomic operations.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Wei Wang <weiwan@google.com>
Link: https://lore.kernel.org/r/20210719101107.3203943-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yajun Deng [Mon, 19 Jul 2021 05:18:16 +0000 (13:18 +0800)]
netlink: Deal with ESRCH error in nlmsg_notify()
Yonghong Song reported:
The bpf selftest tc_bpf failed with latest bpf-next.
The following is the command to run and the result:
$ ./test_progs -n 132
[ 40.947571] bpf_testmod: loading out-of-tree module taints kernel.
test_tc_bpf:PASS:test_tc_bpf__open_and_load 0 nsec
test_tc_bpf:PASS:bpf_tc_hook_create(BPF_TC_INGRESS) 0 nsec
test_tc_bpf:PASS:bpf_tc_hook_create invalid hook.attach_point 0 nsec
test_tc_bpf_basic:PASS:bpf_obj_get_info_by_fd 0 nsec
test_tc_bpf_basic:PASS:bpf_tc_attach 0 nsec
test_tc_bpf_basic:PASS:handle set 0 nsec
test_tc_bpf_basic:PASS:priority set 0 nsec
test_tc_bpf_basic:PASS:prog_id set 0 nsec
test_tc_bpf_basic:PASS:bpf_tc_attach replace mode 0 nsec
test_tc_bpf_basic:PASS:bpf_tc_query 0 nsec
test_tc_bpf_basic:PASS:handle set 0 nsec
test_tc_bpf_basic:PASS:priority set 0 nsec
test_tc_bpf_basic:PASS:prog_id set 0 nsec
libbpf: Kernel error message: Failed to send filter delete notification
test_tc_bpf_basic:FAIL:bpf_tc_detach unexpected error: -3 (errno 3)
test_tc_bpf:FAIL:test_tc_internal ingress unexpected error: -3 (errno 3)
The failure seems to be due to commit
cfdf0d9ae75b ("rtnetlink: use nlmsg_notify() in rtnetlink_send()")
Deal with the ESRCH error in nlmsg_notify() even when the report variable
is zero.
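The idea behind the fix, as an illustrative fragment only (not the exact
patch):
/* Treat -ESRCH ("no one is listening") from the delivery path as
 * success for a notification, even when 'report' is zero. */
err = nlmsg_multicast(sk, skb, portid, group, flags);
if (err == -ESRCH)
        err = 0;        /* no listeners is not an error */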
Reported-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Link: https://lore.kernel.org/r/20210719051816.11762-1-yajun.deng@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Stefan Assmann [Tue, 16 Mar 2021 10:01:41 +0000 (11:01 +0100)]
iavf: fix locking of critical sections
To avoid races between iavf_init_task(), iavf_reset_task(),
iavf_watchdog_task(), iavf_adminq_task(), as well as the shutdown and
remove functions, more locking is required.
The current protection by __IAVF_IN_CRITICAL_TASK is needed in
additional places.
- The reset task performs state transitions, therefore needs locking.
- The adminq task acts on replies from the PF in
iavf_virtchnl_completion() which may alter the states.
- The init task is not only run during probe but also if a VF gets stuck
to reinitialize it.
- The shutdown function performs a state transition.
- The remove function performs a state transition and also frees
resources.
iavf_lock_timeout() is introduced to avoid waiting indefinitely and
causing a deadlock; rather, it unlocks and prints a warning.
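A purely illustrative sketch of what such a bounded lock helper can look
like; the signature and the bit-lock mechanism are assumptions based on the
description, not the driver's exact code:
/* Sketch: try to grab the critical-section bit for up to 'msecs',
 * then give up so the caller can warn instead of deadlocking. */
static int iavf_lock_timeout(unsigned long *crit_section, int msecs)
{
        int i;

        for (i = 0; i < msecs / 10; i++) {
                if (!test_and_set_bit(__IAVF_IN_CRITICAL_TASK, crit_section))
                        return 0;
                msleep(10);
        }
        return -1;      /* caller prints a warning and continues */
}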
Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Stefan Assmann [Fri, 5 Mar 2021 12:38:56 +0000 (13:38 +0100)]
iavf: do not override the adapter state in the watchdog task
The iavf watchdog task overrides adapter->state to __IAVF_RESETTING
when it detects a pending reset, then schedules iavf_reset_task(), which
takes care of the reset.
The reset task is capable of handling the reset without changing
adapter->state. In fact we lose the state information when the watchdog
task prematurely changes the adapter state. This may lead to a crash if
the iavf_remove() function gets called before the reset task runs.
In that case (if we were in state __IAVF_RUNNING previously) the
iavf_remove() function triggers iavf_close() which fails to close the
device because of the incorrect state information.
This may result in a crash due to pending interrupts.
kernel BUG at drivers/pci/msi.c:357!
[...]
Call Trace:
[<ffffffffbddf24dd>] pci_disable_msix+0x3d/0x50
[<ffffffffc08d2a63>] iavf_reset_interrupt_capability+0x23/0x40 [iavf]
[<ffffffffc08d312a>] iavf_remove+0x10a/0x350 [iavf]
[<ffffffffbddd3359>] pci_device_remove+0x39/0xc0
[<ffffffffbdeb492f>] __device_release_driver+0x7f/0xf0
[<ffffffffbdeb49c3>] device_release_driver+0x23/0x30
[<ffffffffbddcabb4>] pci_stop_bus_device+0x84/0xa0
[<ffffffffbddcacc2>] pci_stop_and_remove_bus_device+0x12/0x20
[<ffffffffbddf361f>] pci_iov_remove_virtfn+0xaf/0x160
[<ffffffffbddf3bcc>] sriov_disable+0x3c/0xf0
[<ffffffffbddf3ca3>] pci_disable_sriov+0x23/0x30
[<ffffffffc0667365>] i40e_free_vfs+0x265/0x2d0 [i40e]
[<ffffffffc0667624>] i40e_pci_sriov_configure+0x144/0x1f0 [i40e]
[<ffffffffbddd5307>] sriov_numvfs_store+0x177/0x1d0
Code: 00 00 e8 3c 25 e3 ff 49 c7 86 88 08 00 00 00 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 48 8b 7b 28 e8 0d 44
RIP [<ffffffffbbbf1068>] free_msi_irqs+0x188/0x190
The solution is to not touch the adapter->state in iavf_watchdog_task()
and let the reset task handle the state transition.
Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Stefan Assmann [Thu, 4 Mar 2021 09:34:30 +0000 (10:34 +0100)]
i40e: improve locking of mac_filter_hash
i40e_config_vf_promiscuous_mode() calls
i40e_getnum_vf_vsi_vlan_filters() without acquiring the
mac_filter_hash_lock spinlock.
This is unsafe because mac_filter_hash may get altered in another thread
while i40e_getnum_vf_vsi_vlan_filters() traverses the hashes.
Simply adding the spinlock in i40e_getnum_vf_vsi_vlan_filters() is not
possible as it already gets called in i40e_get_vlan_list_sync() with the
spinlock held. Therefore add a wrapper that acquires the spinlock and
calls the correct function where appropriate.
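The wrapper pattern being described looks roughly like this sketch; the
function names and lock flavour are assumptions mirroring the description,
with the lock-free worker being the original function renamed:
/* Callers that do not already hold mac_filter_hash_lock use the
 * locked wrapper; i40e_get_vlan_list_sync() keeps calling the
 * lock-free variant directly. Illustrative only. */
static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
{
        int num_vlans;

        spin_lock_bh(&vsi->mac_filter_hash_lock);
        num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
        spin_unlock_bh(&vsi->mac_filter_hash_lock);

        return num_vlans;
}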
Fixes: 37d318d7805f ("i40e: Remove scheduling while atomic possibility")
Fix-suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Marek Vasut [Sat, 17 Jul 2021 12:32:49 +0000 (14:32 +0200)]
net: phy: Fix data type in DP83822 dp8382x_disable_wol()
The last argument of phy_clear_bits_mmd(..., u16 val) is u16 and not
int, so just inline the value into the function call arguments.
No functional change.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Liu Jian [Fri, 16 Jul 2021 04:06:17 +0000 (12:06 +0800)]
igmp: Add ip_mc_list lock in ip_check_mc_rcu
I got the below panic when doing a fuzz test:
Kernel panic - not syncing: panic_on_warn set ...
CPU: 0 PID: 4056 Comm: syz-executor.3 Tainted: G B 5.14.0-rc1-00195-gcff5c4254439-dirty #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
Call Trace:
dump_stack_lvl+0x7a/0x9b
panic+0x2cd/0x5af
end_report.cold+0x5a/0x5a
kasan_report+0xec/0x110
ip_check_mc_rcu+0x556/0x5d0
__mkroute_output+0x895/0x1740
ip_route_output_key_hash_rcu+0x2d0/0x1050
ip_route_output_key_hash+0x182/0x2e0
ip_route_output_flow+0x28/0x130
udp_sendmsg+0x165d/0x2280
udpv6_sendmsg+0x121e/0x24f0
inet6_sendmsg+0xf7/0x140
sock_sendmsg+0xe9/0x180
____sys_sendmsg+0x2b8/0x7a0
___sys_sendmsg+0xf0/0x160
__sys_sendmmsg+0x17e/0x3c0
__x64_sys_sendmmsg+0x9e/0x100
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x462eb9
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8
48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48>
3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3df5af1c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462eb9
RDX: 0000000000000312 RSI: 0000000020001700 RDI: 0000000000000007
RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3df5af26bc
R13: 00000000004c372d R14: 0000000000700b10 R15: 00000000ffffffff
It is a use-after-free in ip_check_mc_rcu.
In ip_mc_del_src, the ip_sf_list of pmc has been freed under pmc->lock protection.
But access to ip_sf_list in ip_check_mc_rcu is not protected by the lock.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 17 Jul 2021 00:32:15 +0000 (17:32 -0700)]
Merge branch 'vmxnet3-version-6'
Ronak Doshi says:
====================
vmxnet3: upgrade to version 6
vmxnet3 emulation has recently added several new features, which include
an increase in the number of supported queues, removal of the power-of-2
limitation on queues, RSS for ESP IPv6, etc. This patch series extends the
vmxnet3 driver
to leverage these new features.
Compatibility is maintained using existing vmxnet3 versioning mechanism as
follows:
- new features added to vmxnet3 emulation are associated with new vmxnet3
version viz. vmxnet3 version 6.
- emulation advertises all the versions it supports to the driver.
- during initialization, the vmxnet3 driver picks the highest version number
supported by both the emulation and the driver and configures the emulation
to run at that version (see the sketch after this list).
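A simplified sketch of that negotiation step; the parameters and bit layout
below are placeholders for illustration, not the driver's real registers or
symbols:
/* Pick the highest version supported by both sides. Bit N-1 of each
 * mask is assumed to stand for version N. */
static int pick_vmxnet3_version(u32 emulation_versions, u32 driver_versions)
{
        u32 common = emulation_versions & driver_versions;
        int ver;

        for (ver = 6; ver >= 1; ver--)
                if (common & (1U << (ver - 1)))
                        return ver;     /* highest mutually supported */
        return -1;                      /* no common version */
}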
In particular, the following changes are introduced:
Patch 1:
This patch introduces utility macros for vmxnet3 version 6 comparison
and updates Copyright information.
Patch 2:
This patch adds support to increase maximum Tx/Rx queues from 8 to 32.
Patch 3:
This patch removes the limitation of power of 2 on the queues.
Patch 4:
Uses existing get_rss_hash_opts and set_rss_hash_opts methods to add
support for ESP IPv6 RSS.
Patch 5:
This patch reports correct RSS hash type based on the type of RSS
performed.
Patch 6:
This patch updates maximum configurable mtu to 9190.
Patch 7:
With all vmxnet3 version 6 changes incorporated in the vmxnet3 driver,
with this patch, the driver can configure emulation to run at vmxnet3
version 6.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:26 +0000 (15:36 -0700)]
vmxnet3: update to version 6
With all vmxnet3 version 6 changes incorporated in the vmxnet3 driver,
the driver can configure emulation to run at vmxnet3 version 6, provided
the emulation advertises support for version 6.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:25 +0000 (15:36 -0700)]
vmxnet3: increase maximum configurable mtu to 9190
This patch increases the maximum configurable mtu to 9190
to accommodate jumbo packets of overlay traffic.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:24 +0000 (15:36 -0700)]
vmxnet3: set correct hash type based on rss information
As vmxnet3 supports IP/TCP/UDP RSS, this patch sets appropriate
hash type based on the type of RSS performed.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:23 +0000 (15:36 -0700)]
vmxnet3: add support for ESP IPv6 RSS
Vmxnet3 version 4 added support for ESP RSS. However, only IPv4 was
supported. With vmxnet3 version 6, this patch enables RSS for ESP
IPv6 packets as well.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:22 +0000 (15:36 -0700)]
vmxnet3: remove power of 2 limitation on the queues
With version 6, vmxnet3 relaxes the restriction that the number of queues
be a power of two. This is helpful in cases (e.g. Edge VMs) where there
are fewer than 8 vcpus and the device requires more than 4 queues.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:21 +0000 (15:36 -0700)]
vmxnet3: add support for 32 Tx/Rx queues
Currently, vmxnet3 supports a maximum of 8 Tx/Rx queues. With the increase
in the number of vcpus on a VM, to achieve better performance and utilize
idle vcpus, we need to increase the maximum number of queues supported.
This patch enhances vmxnet3 to support a maximum of 32 Tx/Rx queues.
Increasing the Rx queues also increases the probability of distributing
the traffic from different flows to different queues with RSS.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ronak Doshi [Fri, 16 Jul 2021 22:36:20 +0000 (15:36 -0700)]
vmxnet3: prepare for version 6 changes
vmxnet3 is currently at version 4 and this patch initiates the
preparation to accommodate changes for up to version 6. Introduce
utility macros for vmxnet3 version 6 comparison and update the Copyright
information.
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang <gyang@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Fri, 16 Jul 2021 21:44:07 +0000 (17:44 -0400)]
tipc: keep the skb in rcv queue until the whole data is read
Currently, when userspace reads a datagram with a buffer that is
smaller than this datagram, the data will be truncated and only
part of it can be received by users. It doesn't seem right that
users don't know the datagram size and have to use a huge buffer
to read it to avoid the truncation.
This patch fixes it by keeping the skb in the rcv queue until the
whole datagram is read by users. Only the last msg of the datagram
will be marked with MSG_EOR, just as TCP/SCTP does.
Note that this will work as above only when MSG_EOR is set in the
flags parameter of recvmsg(), so that it won't break any old user
applications.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 17 Jul 2021 00:25:13 +0000 (17:25 -0700)]
Merge branch '1GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
1GbE Intel Wired LAN Driver Updates 2021-07-16
Vinicius Costa Gomes says:
Add support for steering traffic to specific RX queues using Flex Filters.
As the name implies, Flex Filters are more flexible than using
Layer-2, VLAN or MAC address filters; one of the reasons is that they
allow "AND" operations more easily, e.g. when the user wants to steer
some traffic based on both the source MAC address and the packet ethertype.
Future work includes adding support for offloading tc-u32 filters to
the hardware.
The series is divided as follows:
Patch 1/5, add the low level primitives for configuring Flex filters.
Patch 2/5 and 3/5, allow ethtool to manage Flex filters.
Patch 4/5, when specifying filters that have multiple predicates, use
Flex filters.
Patch 5/5, adds support for exposing the i225 LEDs using the LED subsystem.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Kurt Kanzenbach [Tue, 29 Jun 2021 04:43:32 +0000 (21:43 -0700)]
igc: Export LEDs
Each i225 has three LEDs. Export them via the LED class framework.
Each LED is controllable via sysfs. Example:
$ cd /sys/class/leds/igc_led0
$ cat brightness # Current Mode
$ cat max_brightness # 15
$ echo 0 > brightness # Mode 0
$ echo 1 > brightness # Mode 1
The brightness field here reflects the different LED modes ranging
from 0 to 15.
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Kurt Kanzenbach [Tue, 29 Jun 2021 04:43:31 +0000 (21:43 -0700)]
igc: Make flex filter more flexible
Currently flex filters are only used for filters containing user data.
However, it makes sense to utilize them also for filters having
multiple conditions, because that's not supported by the driver at the
moment. Add it.
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Vinicius Costa Gomes [Tue, 29 Jun 2021 04:43:30 +0000 (21:43 -0700)]
igc: Allow for Flex Filters to be installed
Allow Flex Filters to be installed.
The previous restriction on the types of filters that can be installed
can now be lifted.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Kurt Kanzenbach [Tue, 29 Jun 2021 04:43:29 +0000 (21:43 -0700)]
igc: Integrate flex filter into ethtool ops
Use the flex filter mechanism to extend the current ethtool filter
operations by incorporating the user data. This allows matching
eight more bytes within an Ethernet frame in addition to MACs, ether
types and VLANs.
The matching pattern looks like this:
* dest_mac [6]
* src_mac [6]
* tpid [2]
* vlan tci [2]
* ether type [2]
* user data [8]
This can be used to match Profinet traffic classes by FrameID range.
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Kurt Kanzenbach [Tue, 29 Jun 2021 04:43:28 +0000 (21:43 -0700)]
igc: Add possibility to add flex filter
The Intel i225 NIC has the possibility to add flex filters which can
match against up to the first 128 bytes of a packet. These filters are
useful for all kinds of packet matching. One particular use case is
Profinet, as the different traffic classes are distinguished by the
frame id range, which cannot be matched by any other means.
Add code to configure and enable flex filters.
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Voon Weifeng [Fri, 16 Jul 2021 13:46:45 +0000 (21:46 +0800)]
net: phy: marvell10g: enable WoL for 88X3310 and 88E2110
Implement the Wake-on-LAN feature for 88X3310 and 88E2110.
This is done by enabling the WoL interrupt and WoL detection and
configuring the MAC address into the WoL magic packet registers.
Signed-off-by: Voon Weifeng <weifeng.voon@intel.com>
Signed-off-by: Ling Pei Lee <pei.lee.ling@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joakim Zhang [Fri, 16 Jul 2021 10:29:11 +0000 (18:29 +0800)]
ARM: dts: imx7-mba7: remove un-used "phy-reset-delay" property
Remove the unused "phy-reset-delay" property, which was found when running
dtbs_check (with additionalProperties: false set in fsl,fec.yaml).
Double-checking the current driver and commit history, "phy-reset-delay"
never comes up, so it should be safe to remove it.
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dtbs_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/net/fsl,fec.yaml
arch/arm/boot/dts/imx7d-mba7.dt.yaml: ethernet@30be0000: 'phy-reset-delay' does not match any of the regexes: 'pinctrl-[0-9]+'
arch/arm/boot/dts/imx7d-mba7.dt.yaml: ethernet@30bf0000: 'phy-reset-delay' does not match any of the regexes: 'pinctrl-[0-9]+'
arch/arm/boot/dts/imx7s-mba7.dt.yaml: ethernet@30be0000: 'phy-reset-delay' does not match any of the regexes: 'pinctrl-[0-9]+'
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joakim Zhang [Fri, 16 Jul 2021 10:29:10 +0000 (18:29 +0800)]
ARM: dts: imx35: correct node name for FEC
Correct the node name for FEC, which was found when running dtbs_check.
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dtbs_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/net/fsl,fec.yaml
arch/arm/boot/dts/imx35-eukrea-mbimxsd35-baseboard.dt.yaml: fec@50038000: $nodename:0: 'fec@50038000' does not match '^ethernet(@.*)?$'
arch/arm/boot/dts/imx35-pdk.dt.yaml: fec@50038000: $nodename:0: 'fec@50038000' does not match '^ethernet(@.*)?$'
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joakim Zhang [Fri, 16 Jul 2021 10:29:09 +0000 (18:29 +0800)]
dt-bindings: net: fec: convert fsl,*fec bindings to yaml
In order to automate the verification of DT nodes, convert fsl-fec.txt to
fsl,fec.yaml; the binding passes the check with the below command.
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/net/fsl,fec.yaml
DTEX Documentation/devicetree/bindings/net/fsl,fec.example.dts
DTC Documentation/devicetree/bindings/net/fsl,fec.example.dt.yaml
CHECK Documentation/devicetree/bindings/net/fsl,fec.example.dt.yaml
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Peilin Ye [Fri, 16 Jul 2021 01:52:45 +0000 (18:52 -0700)]
netdevsim: Add multi-queue support
Currently netdevsim only supports a single queue per port, which is
insufficient for testing multi-queue TC schedulers, e.g. sch_mq. Extend
the current sysfs interface so that users can create ports with multiple
queues:
$ echo "[ID] [PORT_COUNT] [NUM_QUEUES]" > /sys/bus/netdevsim/new_device
As an example, echoing "2 4 8" creates 4 ports, with 8 queues per port.
Note that this is compatible with the current interface, with the default
number of queues set to 1. For example, echoing "2 4" creates 4 ports with 1
queue per port; echoing "2" simply creates 1 port with 1 queue.
Reviewed-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Peilin Ye <peilin.ye@bytedance.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mark Gray [Thu, 15 Jul 2021 12:27:54 +0000 (08:27 -0400)]
openvswitch: Introduce per-cpu upcall dispatch
The Open vSwitch kernel module uses the upcall mechanism to send
packets from kernel space to user space when it misses in the kernel
space flow table. The upcall sends packets via a Netlink socket.
Currently, a Netlink socket is created for every vport. In this way,
there is a 1:1 mapping between a vport and a Netlink socket.
When a packet is received by a vport, if it needs to be sent to
user space, it is sent via the corresponding Netlink socket.
This mechanism, with various iterations of the corresponding user
space code, has seen some limitations and issues:
* On systems with a large number of vports, there is a correspondingly
large number of Netlink sockets which can limit scaling.
(https://bugzilla.redhat.com/show_bug.cgi?id=1526306)
* Packet reordering on upcalls.
(https://bugzilla.redhat.com/show_bug.cgi?id=1844576)
* A thundering herd issue.
(https://bugzilla.redhat.com/show_bug.cgi?id=1834444)
This patch introduces an alternative, feature-negotiated, upcall
mode using a per-cpu dispatch rather than a per-vport dispatch.
In this mode, the Netlink socket to be used for the upcall is
selected based on the CPU of the thread that is executing the upcall.
In this way, it resolves the issues above as:
a) The number of Netlink sockets scales with the number of CPUs
rather than the number of vports.
b) Ordering per-flow is maintained as packets are distributed to
CPUs based on mechanisms such as RSS and flows are distributed
to a single user space thread.
c) Packets from a flow can only wake up one user space thread.
The corresponding user space code can be found at:
https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385139.html
Bugzilla: https://bugzilla.redhat.com/1844576
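Conceptually, the selection step reduces to something like the following
sketch; the structure and field names are hypothetical, only the idea of
indexing by the executing CPU comes from the description above:
/* Hypothetical types/names for illustration only. */
struct upcall_portids {
        u32 n_ids;
        u32 ids[];              /* one Netlink portid per CPU slot */
};

static u32 upcall_portid_for_this_cpu(const struct upcall_portids *p)
{
        /* Select the socket based on the executing CPU, not the vport. */
        return p->ids[smp_processor_id() % p->n_ids];
}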
Signed-off-by: Mark Gray <mark.d.gray@redhat.com>
Acked-by: Flavio Leitner <fbl@sysclose.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bill Wendling [Wed, 14 Jul 2021 09:17:46 +0000 (02:17 -0700)]
bnx2x: remove unused variable 'cur_data_offset'
Fix the clang build warning:
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c:1862:13: error: variable 'cur_data_offset' set but not used [-Werror,-Wunused-but-set-variable]
dma_addr_t cur_data_offset;
Signed-off-by: Bill Wendling <morbo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christophe JAILLET [Wed, 14 Jul 2021 08:32:33 +0000 (10:32 +0200)]
net: switchdev: Simplify 'mlxsw_sp_mc_write_mdb_entry()'
Use 'bitmap_alloc()/bitmap_free()' instead of hand-writing it.
This makes the code less verbose.
Also, use 'bitmap_alloc()' instead of 'bitmap_zalloc()' because the bitmap
is fully overwritten by a 'bitmap_copy()' call just after its allocation.
While at it, remove an extra and unneeded space.
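The pattern being described is roughly the following sketch; the function
and parameters are illustrative, not the exact mlxsw code:
/* bitmap_alloc() is enough when the buffer is immediately
 * overwritten by bitmap_copy(). */
static unsigned long *clone_bitmap(const unsigned long *src, unsigned int nbits)
{
        unsigned long *bits;

        bits = bitmap_alloc(nbits, GFP_KERNEL);  /* no zeroing needed */
        if (!bits)
                return NULL;
        bitmap_copy(bits, src, nbits);           /* fully overwrites it */
        return bits;                             /* caller: bitmap_free(bits) */
}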
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yajun Deng [Thu, 15 Jul 2021 12:24:24 +0000 (20:24 +0800)]
net/sched: Remove unnecessary if statement
The 'if (err)' check has already been dealt with in rtnetlink_send()
and rtnl_unicast(), so remove the unnecessary if statement.
v2: use the raw name rtnetlink_send().
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yajun Deng [Thu, 15 Jul 2021 12:12:57 +0000 (20:12 +0800)]
rtnetlink: use nlmsg_notify() in rtnetlink_send()
netlink_{broadcast, unicast} don't deal with the 'if (err > 0)' check
but nlmsg_{multicast, unicast} do, and nlmsg_notify() contains them,
so use nlmsg_notify() instead. That way the caller doesn't have to deal
with the 'if (err > 0)' check.
v2: use nlmsg_notify(), which handles this well.
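After the change, rtnetlink_send() essentially reduces to a call like the
following sketch (close to, but not necessarily identical to, the actual
patch):
/* nlmsg_notify() already handles the multicast/unicast split and the
 * 'err > 0' convention, so the caller no longer needs to. */
int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid,
                   unsigned int group, int echo)
{
        struct sock *rtnl = net->rtnl;

        return nlmsg_notify(rtnl, skb, pid, group, echo, GFP_KERNEL);
}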
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyue Wang [Wed, 14 Jul 2021 07:34:59 +0000 (15:34 +0800)]
gve: fix the wrong AdminQ buffer overflow check
The 'tail' pointer is also a free-running count, so it needs to be masked
as 'adminq_prod_cnt' is, to become an index into the AdminQ buffer.
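In other words, as an illustrative fragment (not the driver's exact code):
/* Both counters are free-running and become ring indices only after
 * masking; 'adminq_mask' stands in for the (ring_size - 1) mask and
 * is an assumed name. */
u32 head_idx = adminq_prod_cnt & adminq_mask;
u32 tail_idx = tail & adminq_mask;      /* the mask the fix adds */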
Fixes: 5cdad90de62c ("gve: Batch AQ commands for creating and destroying queues.")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 16 Jul 2021 05:40:10 +0000 (22:40 -0700)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2021-07-15
The following pull-request contains BPF updates for your *net-next* tree.
We've added 45 non-merge commits during the last 15 day(s) which contain
a total of 52 files changed, 3122 insertions(+), 384 deletions(-).
The main changes are:
1) Introduce bpf timers, from Alexei.
2) Add sockmap support for unix datagram socket, from Cong.
3) Fix potential memleak and UAF in the verifier, from He.
4) Add bpf_get_func_ip helper, from Jiri.
5) Improvements to generic XDP mode, from Kumar.
6) Support for passing xdp_md to XDP programs in bpf_prog_run, from Zvi.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Fri, 16 Jul 2021 01:17:51 +0000 (18:17 -0700)]
Merge branch 'sockmap: add sockmap support for unix datagram socket'
Cong Wang says:
====================
From: Cong Wang <cong.wang@bytedance.com>
This is the last patchset of the original large patchset. In the
previous patchset, a new BPF sockmap program BPF_SK_SKB_VERDICT
was introduced and UDP began to support it too. In this patchset,
we add BPF_SK_SKB_VERDICT support to Unix datagram socket, so that
we can finally splice Unix datagram socket and UDP socket. Please
check each patch description for more details.
To see the big picture, the previous patchsets are available here:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=1e0ab70778bd86a90de438cc5e1535c115a7c396
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=89d69c5d0fbcabd8656459bc8b1a476d6f1efee4
and this patchset is available here:
https://github.com/congwang/linux/tree/sockmap3
Acked-by: John Fastabend <john.fastabend@gmail.com>
---
v5: lift socket state check for dgram
remove ->unhash() case
add retries for EAGAIN in all test cases
remove an unused parameter of __unix_dgram_recvmsg()
rebase on the latest bpf-next
v4: fix af_unix disconnect case
add unix_unhash()
split out two small patches
reduce u->iolock critical section
remove an unused parameter of __unix_dgram_recvmsg()
v3: fix Kconfig dependency
make unix_read_sock() static
fix a UAF in unix_release()
add a missing header unix_bpf.c
v2: separate out from the original large patchset
rebase to the latest bpf-next
clean up unix_read_sock()
export sock_map_close()
factor out some helpers in selftests for code reuse
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cong Wang [Sun, 4 Jul 2021 19:02:52 +0000 (12:02 -0700)]
selftests/bpf: Add test cases for redirection between udp and unix
Add two test cases to ensure redirection between udp and unix
sockets works bidirectionally.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210704190252.11866-12-xiyou.wangcong@gmail.com
Cong Wang [Sun, 4 Jul 2021 19:02:51 +0000 (12:02 -0700)]
selftests/bpf: Add a test case for unix sockmap
Add a test case to ensure redirection between two AF_UNIX
datagram sockets works.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210704190252.11866-11-xiyou.wangcong@gmail.com
Cong Wang [Sun, 4 Jul 2021 19:02:50 +0000 (12:02 -0700)]
selftests/bpf: Factor out add_to_sockmap()
Factor out a common helper add_to_sockmap() which adds two
sockets into a sockmap.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210704190252.11866-10-xiyou.wangcong@gmail.com
Cong Wang [Sun, 4 Jul 2021 19:02:49 +0000 (12:02 -0700)]
selftests/bpf: Factor out udp_socketpair()
Factor out a common helper udp_socketpair() which creates
a pair of connected UDP sockets.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210704190252.11866-9-xiyou.wangcong@gmail.com
Cong Wang [Sun, 4 Jul 2021 19:02:48 +0000 (12:02 -0700)]
af_unix: Implement unix_dgram_bpf_recvmsg()
We have to implement unix_dgram_bpf_recvmsg() to replace the
original ->recvmsg() to retrieve skmsg from ingress_msg.
AF_UNIX is again special here because of the lack of
sk_prot->recvmsg(). I simply add a special case inside
unix_dgram_recvmsg() to call sk->sk_prot->recvmsg() directly.
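A rough sketch of that special case; the prototypes shown are simplified
and should be treated as assumptions rather than the exact af_unix code:
/* If sockmap replaced sk->sk_prot, dispatch to its recvmsg (which
 * reads from ingress_msg); otherwise use the regular path. */
static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
                              size_t size, int flags)
{
        struct sock *sk = sock->sk;

#ifdef CONFIG_BPF_SYSCALL
        const struct proto *prot = READ_ONCE(sk->sk_prot);

        if (prot != &unix_dgram_proto)
                return prot->recvmsg(sk, msg, size, flags & MSG_DONTWAIT,
                                     flags & ~MSG_DONTWAIT, NULL);
#endif
        return __unix_dgram_recvmsg(sk, msg, size, flags);
}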
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210704190252.11866-8-xiyou.wangcong@gmail.com