Jakub Kicinski [Sat, 10 Dec 2022 03:46:14 +0000 (19:46 -0800)]
Merge branch 'mptcp-miscellaneous-cleanup'
Mat Martineau says:
====================
mptcp: Miscellaneous cleanup
Two code cleanup patches for the 6.2 merge window that don't change
behavior:
Patch 1 makes proper use of nlmsg_free(), as suggested by Jakub while
reviewing f8c9dfbd875b ("mptcp: add pm listener events").
Patch 2 clarifies success status in a few mptcp functions, which
prevents some smatch false positives.
====================
Link: https://lore.kernel.org/r/20221209004431.143701-1-mathew.j.martineau@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Matthieu Baerts [Fri, 9 Dec 2022 00:44:31 +0000 (16:44 -0800)]
mptcp: return 0 instead of 'err' var
When 'err' is 0, it looks clearer to return '0' instead of the variable
called 'err'.
The behaviour is not modified; the code just becomes clearer.
By doing this, we can also avoid false positive smatch warnings like
this one:
net/mptcp/pm_netlink.c:1169 mptcp_pm_parse_pm_addr_attr() warn: missing error code? 'err'
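To illustrate the pattern (a standalone sketch with hypothetical names, not
the actual mptcp code):

  #include <errno.h>

  static int validate(int value)
  {
          return value < 0 ? -EINVAL : 0;
  }

  int parse_value(int value, int *out)
  {
          int err;

          err = validate(value);
          if (err)
                  return err;

          *out = value;
          return 0;       /* was "return err;" - err is always 0 here */
  }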
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <error27@gmail.com>
Suggested-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Geliang Tang [Fri, 9 Dec 2022 00:44:30 +0000 (16:44 -0800)]
mptcp: use nlmsg_free instead of kfree_skb
Use nlmsg_free() instead of kfree_skb() in pm_netlink.c.
The SKBs have been created by nlmsg_new(), so the proper way to clean
them up is nlmsg_free().
For the moment, nlmsg_free() is simply calling kfree_skb() so we don't
change the behaviour here.
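A hedged sketch of the resulting pattern, with a hypothetical generic netlink
event helper (names, command and attribute types are illustrative, not the
actual pm_netlink.c code):

  #include <net/genetlink.h>

  static void pm_send_event(struct genl_family *family, u32 value)
  {
          struct sk_buff *skb;
          void *hdr;

          skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
          if (!skb)
                  return;

          hdr = genlmsg_put(skb, 0, 0, family, 0, 1 /* hypothetical cmd */);
          if (!hdr)
                  goto free;

          if (nla_put_u32(skb, 1 /* hypothetical attr */, value))
                  goto free;

          genlmsg_end(skb, hdr);
          genlmsg_multicast(family, skb, 0, 0, GFP_ATOMIC);
          return;

  free:
          nlmsg_free(skb);        /* skb came from nlmsg_new(), not kfree_skb() */
  }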
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Sat, 10 Dec 2022 03:44:42 +0000 (19:44 -0800)]
Merge tag 'mlx5-updates-2022-12-08' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2022-12-08
1) Support range match action in SW steering
Yevgeny Kliteynik says:
=======================
The following patch series adds support for a range match action in
SW Steering.
SW steering is able to match only on the exact values of the packet fields,
as requested by the user: the user provides a mask for the fields that are of
interest, and the exact values to be matched on when the traffic is handled.
The following patch series adds a new type of action - Range Match, where the
user provides a field to be matched on and a range of values (min to max)
that will be considered a hit.
There are several new notions that were implemented in order to support
Range Match:
- MATCH_RANGES Steering Table Entry (STE): the new STE type that allows
matching the packets' fields on a range of values instead of a specific
value.
- Match Definer: this is a general FW object that defines which fields
in the packet will be referenced by the mask and tag of each STE.
Match definer ID is part of STE fields, and it defines how the HW needs
to interpret the STE's mask/tag values.
Till now SW steering used the definers that were managed by FW and
implemented the STE layout as described by the HW spec.
Now that we're adding a new type of STE, SW steering needs to also be
able to define this new STE's layout, and this is done with a programmable
match definer.
=======================
2) From Oz, add support for meter mtu offload
2.1: Refactor the code to allow both metering and range post actions as a
pre-step for adding police mtu offload support.
2.2: Instantiate mtu green/red flow tables with a single match-all rule.
Add the green/red actions to the hit/miss table accordingly.
2.3: Initialize the meter object with the TC police mtu parameter.
Use the hardware range match action feature.
3) From MaorD, support routes with more than 2 nexthops in multipath
4) Michael and Or, improve and extend vport representor counters.
* tag 'mlx5-updates-2022-12-08' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5: Expose steering dropped packets counter
net/mlx5: Refactor and expand rep vport stat group
net/mlx5e: multipath, support routes with more than 2 nexthops
net/mlx5e: TC, add support for meter mtu offload
net/mlx5e: meter, add mtu post meter tables
net/mlx5e: meter, refactor to allow multiple post meter tables
net/mlx5: DR, Add support for range match action
net/mlx5: DR, Add function that tells if STE miss addr has been initialized
net/mlx5: DR, Some refactoring of miss address handling
net/mlx5: DR, Manage definers with refcounts
net/mlx5: DR, Handle FT action in a separate function
net/mlx5: DR, Rework is_fw_table function
net/mlx5: DR, Add functions to create/destroy MATCH_DEFINER general object
net/mlx5: fs, add match on ranges API
net/mlx5: mlx5_ifc updates for MATCH_DEFINER general object
====================
Link: https://lore.kernel.org/r/20221209001420.142794-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Sat, 10 Dec 2022 03:42:16 +0000 (19:42 -0800)]
Merge branch '100GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2022-12-08 (ice)
Jacob Keller says:
This series of patches primarily consists of changes to fix some corner
cases that can cause Tx timestamp failures. The issues were discovered and
reported by Siddaraju DH and primarily affect E822 hardware, though this
series also includes some improvements that affect E810 hardware as well.
The primary issue is regarding the way that E822 determines when to generate
timestamp interrupts. If the driver reads timestamp indexes which do not
have a valid timestamp, the E822 interrupt tracking logic can get stuck.
This is due to the way that E822 hardware tracks timestamp index reads
internally. I was previously unaware of this behavior as it is significantly
different in E810 hardware.
Most of the fixes target refactors to ensure that the ice driver does not
read timestamp indexes which are not valid on E822 hardware. This is done by
using the Tx timestamp ready bitmap register from the PHY. This register
indicates what timestamp indexes have outstanding timestamps waiting to be
captured.
Care must be taken in all cases where we read the timestamp registers, and
thus all flows which might have read these registers are refactored. The
ice_ptp_tx_tstamp function is modified to consolidate as much of the logic
relating to these registers as possible. It now handles discarding stale
timestamps which are old or which occurred after a PHC time update. This
replaces previously standalone thread functions like the periodic work
function and the ice_ptp_flush_tx_tracker function.
In addition, some minor cleanups noticed while writing these refactors are
included.
The remaining patches refactor the E822 implementation to remove the
"bypass" mode for timestamps. The E822 hardware has the ability to provide a
more precise timestamp by making use of measurements of the precise way that
packets flow through the hardware pipeline. These measurements are known as
"Vernier" calibration. The "bypass" mode disables many of these measurements
in favor of a faster start up time for Tx and Rx timestamping. Instead, once
these measurements were captured, the driver tries to reconfigure the PHY to
enable the vernier calibrations.
Unfortunately this recalibration does not work. Testing indicates that the
PHY simply remains in bypass mode without the increased timestamp precision.
Remove the attempt at recalibration and always use vernier mode. This has
one disadvantage that Tx and Rx timestamps cannot begin until after at least
one packet of that type goes through the hardware pipeline. Because of this,
further refactor the driver to separate Tx and Rx vernier calibration.
Complete the Tx and Rx independently, enabling the appropriate type of
timestamp as soon as the relevant packet has traversed the hardware
pipeline. This was reported by Milena Olech.
Note that although these might be considered "bug fixes", the changes
required to appropriately resolve these issues are large. Thus it does
not feel suitable to send this series to net.
* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
ice: reschedule ice_ptp_wait_for_offset_valid during reset
ice: make Tx and Rx vernier offset calibration independent
ice: only check set bits in ice_ptp_flush_tx_tracker
ice: handle flushing stale Tx timestamps in ice_ptp_tx_tstamp
ice: cleanup allocations in ice_ptp_alloc_tx_tracker
ice: protect init and calibrating check in ice_ptp_request_ts
ice: synchronize the misc IRQ when tearing down Tx tracker
ice: check Tx timestamp memory register for ready timestamps
ice: handle discarding old Tx requests in ice_ptp_tx_tstamp
ice: always call ice_ptp_link_change and make it void
ice: fix misuse of "link err" with "link status"
ice: Reset TS memory for all quads
ice: Remove the E822 vernier "bypass" logic
ice: Use more generic names for ice_ptp_tx fields
====================
Link: https://lore.kernel.org/r/20221208213932.1274143-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
wangchuanlei [Wed, 7 Dec 2022 01:38:57 +0000 (20:38 -0500)]
net: openvswitch: Add support to count upcall packets
Add support to count upcall packets: when the openvswitch kernel module
sends an upcall, count the number of packets for which the upcall
succeeded or failed. This gives a better view of how many packets were
upcalled on every interface.
Signed-off-by: wangchuanlei <wangchuanlei@inspur.com>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tejun Heo [Tue, 6 Dec 2022 21:36:32 +0000 (11:36 -1000)]
rhashtable: Allow rhashtable to be used from irq-safe contexts
rhashtable currently only does bh-safe synchronization making it impossible
to use from irq-safe contexts. Switch it to use irq-safe synchronization to
remove the restriction.
v2: Update the lock functions to return the ulong flags value and unlock
functions to take the value directly instead of passing around the
pointer. Suggested by Linus.
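A minimal sketch of the v2 locking convention described above, assuming
illustrative helper names (not the actual rhashtable internals):

  #include <linux/spinlock.h>

  /* The lock helper returns the saved irq flags and the unlock helper
   * takes them by value, so no pointer needs to be passed around. */
  static inline unsigned long bucket_lock(spinlock_t *lock)
  {
          unsigned long flags;

          spin_lock_irqsave(lock, flags);
          return flags;
  }

  static inline void bucket_unlock(spinlock_t *lock, unsigned long flags)
  {
          spin_unlock_irqrestore(lock, flags);
  }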
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: David Vernet <dvernet@meta.com>
Acked-by: Josh Don <joshdon@google.com>
Acked-by: Hao Luo <haoluo@google.com>
Acked-by: Barret Rhoden <brho@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 9 Dec 2022 09:18:08 +0000 (09:18 +0000)]
Merge branch 'net-sched-retpoline'
Pedro Tammela says:
====================
net/sched: retpoline wrappers for tc
In tc all qdiscs, classifiers and actions can be compiled as modules.
Today this results in indirect calls for all transitions in the tc hierarchy.
Due to CONFIG_RETPOLINE, CPUs with mitigations=on might pay an extra cost on
indirect calls. For newer Intel cpus with IBRS the extra cost is
nonexistent, but AMD Zen cpus and older x86 cpus still go through the
retpoline thunk.
Known built-in symbols can be optimized into direct calls, thus
avoiding the retpoline thunk. So far, tc has not been leveraging this
build information and leaving out a performance optimization for some
CPUs. In this series we wire up 'tcf_classify()' and 'tcf_action_exec()'
with direct calls when known modules are compiled as built-in as an
opt-in optimization.
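As a rough standalone illustration of the idea (plain C with hypothetical
names, not the actual tc code), known built-in handlers are compared against
and called directly, keeping the indirect call only as a fallback:

  #include <stdio.h>

  struct pkt { int len; };

  static int cls_a(const struct pkt *p) { return p->len > 100; }
  static int cls_b(const struct pkt *p) { return p->len < 64; }

  /* Indirect dispatch: always goes through the function pointer, which
   * hits the retpoline thunk on mitigated CPUs. */
  static int classify_indirect(int (*fn)(const struct pkt *), const struct pkt *p)
  {
          return fn(p);
  }

  /* Wrapper dispatch: known built-in handlers become direct calls that
   * the compiler can resolve; unknown (module) handlers still go through
   * the indirect call. */
  static int classify_wrapped(int (*fn)(const struct pkt *), const struct pkt *p)
  {
          if (fn == cls_a)
                  return cls_a(p);
          if (fn == cls_b)
                  return cls_b(p);
          return fn(p);
  }

  int main(void)
  {
          struct pkt p = { .len = 128 };

          printf("%d %d\n", classify_indirect(cls_a, &p),
                 classify_wrapped(cls_a, &p));
          return 0;
  }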
We measured these changes in one AMD Zen 4 cpu (Retpoline), one AMD Zen 3 cpu (Retpoline),
one Intel 10th Gen CPU (IBRS), one Intel 3rd Gen cpu (Retpoline) and one
Intel Xeon CPU (IBRS) using pktgen with 64b udp packets. Our test setup is a
dummy device with clsact and matchall in a kernel compiled with every
tc module as built-in. We observed a 3-8% speed up on the retpoline CPUs,
when going through 1 tc filter, and a 60-100% speed up when going through 100 filters.
For the IBRS CPUs we observed a 1-2% degradation in both scenarios. We
believe the extra branch checks introduced a small overhead, so we added
a static key that bypasses the wrapper on kernels compiled with
CONFIG_RETPOLINE but not using the retpoline mitigation.
1 filter:
CPU | before (pps) | after (pps) | diff
R9 7950X | 5914980 | 6380227 | +7.8%
R9 5950X | 4237838 | 4412241 | +4.1%
R9 5950X | 4265287 | 4413757 | +3.4% [*]
i5-3337U | 1580565 | 1682406 | +6.4%
i5-10210U | 3006074 | 3006857 | +0.0%
i5-10210U | 3160245 | 3179945 | +0.6% [*]
Xeon 6230R | 3196906 | 3197059 | +0.0%
Xeon 6230R | 3190392 | 3196153 | +0.01% [*]
100 filters:
CPU | before (pps) | after (pps) | diff
R9 7950X | 373598 | 820396 | +119.59%
R9 5950X | 313469 | 633303 | +102.03%
R9 5950X | 313797 | 633150 | +101.77% [*]
i5-3337U | 127454 | 211210 | +65.71%
i5-10210U | 389259 | 381765 | -1.9%
i5-10210U | 408812 | 412730 | +0.9% [*]
Xeon 6230R | 415420 | 406612 | -2.1%
Xeon 6230R | 416705 | 405869 | -2.6% [*]
[*] In these tests we ran pktgen with clone set to 1000.
On the 7950X system we also tested the impact of the filter's placement in
the iteration order, first by compiling a kernel with the filter under test
being the first one in the static iteration and then repeating it with the
filter being last (of the 15 classifiers existing today).
We saw a difference of +0.5-1% in pps between being the first in the iteration vs being the last.
Therefore we order the classifiers and actions according to relevance per our current thinking.
v5->v6:
- Address Eric Dumazet suggestions
v4->v5:
- Rebase
v3->v4:
- Address Eric Dumazet suggestions
v2->v3:
- Address suggestions by Jakub, Paolo and Eric
- Dropped RFC tag (I forgot to add it on v2)
v1->v2:
- Fix build errors found by the bots
- Address Kuniyuki Iwashima suggestions
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Pedro Tammela [Tue, 6 Dec 2022 13:55:13 +0000 (10:55 -0300)]
net/sched: avoid indirect classify functions on retpoline kernels
Expose the necessary tc classifier functions and wire up cls_api to use
direct calls in retpoline kernels.
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Victor Nogueira <victor@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pedro Tammela [Tue, 6 Dec 2022 13:55:12 +0000 (10:55 -0300)]
net/sched: avoid indirect act functions on retpoline kernels
Expose the necessary tc act functions and wire up act_api to use
direct calls in retpoline kernels.
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Victor Nogueira <victor@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pedro Tammela [Tue, 6 Dec 2022 13:55:11 +0000 (10:55 -0300)]
net/sched: add retpoline wrapper for tc
On kernels using retpoline as a spectrev2 mitigation,
optimize actions and filters that are compiled as built-ins into a direct call.
On subsequent patches we expose the classifiers and actions functions
and wire up the wrapper into tc.
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Victor Nogueira <victor@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pedro Tammela [Tue, 6 Dec 2022 13:55:10 +0000 (10:55 -0300)]
net/sched: move struct action_ops definition out of ifdef
The type definition should be visible even in configurations not using
CONFIG_NET_CLS_ACT.
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Victor Nogueira <victor@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Randy Dunlap [Wed, 7 Dec 2022 04:42:57 +0000 (20:42 -0800)]
net: phy: remove redundant "depends on" lines
Delete a few lines of "depends on PHYLIB" since they are inside
an "if PHYLIB / endif # PHYLIB" block, i.e., they are redundant
and the other 50+ drivers there don't use "depends on PHYLIB"
since it is not needed.
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Link: https://lore.kernel.org/r/20221207044257.30036-1-rdunlap@infradead.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Willem de Bruijn [Wed, 7 Dec 2022 14:37:01 +0000 (09:37 -0500)]
net_tstamp: add SOF_TIMESTAMPING_OPT_ID_TCP
Add an option to initialize SOF_TIMESTAMPING_OPT_ID for TCP sockets from
write_seq instead of snd_una.
This should have been the behavior from the start. Because processes
may now exist that rely on the established behavior, do not change
behavior of the existing option, but add the right behavior with a new
flag. It is encouraged to always set SOF_TIMESTAMPING_OPT_ID_TCP on
stream sockets along with the existing SOF_TIMESTAMPING_OPT_ID.
Intuitively the contract is that the counter is zero after the
setsockopt, so that the next write N results in a notification for
the last byte N - 1.
On idle sockets snd_una == write_seq and this holds for both. But on
sockets with data in transmission, snd_una records the unacked offset
in the stream. This depends on the ACK response from the peer. A
process cannot learn this in a race free manner (ioctl SIOCOUTQ is one
racy approach).
write_seq records the offset at the last byte written by the process.
This is a better starting point. It matches the intuitive contract in
all circumstances, unaffected by external behavior.
The new timestamp flag necessitates increasing sk_tsflags to 32 bits.
Move the field in struct sock to avoid growing the socket (for some
common CONFIG variants). The UAPI interface so_timestamping.flags is
already int, so 32 bits wide.
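A minimal userspace sketch of opting in on a TCP socket; the fallback
defines are assumptions for older headers, not part of this patch:

  #include <sys/socket.h>
  #include <linux/net_tstamp.h>

  #ifndef SO_TIMESTAMPING
  #define SO_TIMESTAMPING 37                      /* most architectures */
  #endif
  #ifndef SOF_TIMESTAMPING_OPT_ID_TCP
  #define SOF_TIMESTAMPING_OPT_ID_TCP (1 << 16)   /* assumed bit value */
  #endif

  /* Enable software TX timestamps with IDs counted from write_seq. */
  int enable_tx_tstamp_ids(int fd)
  {
          unsigned int flags = SOF_TIMESTAMPING_TX_SOFTWARE |
                               SOF_TIMESTAMPING_SOFTWARE |
                               SOF_TIMESTAMPING_OPT_ID |
                               SOF_TIMESTAMPING_OPT_ID_TCP;

          return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                            &flags, sizeof(flags));
  }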
Reported-by: Sotirios Delimanolis <sotodel@meta.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/r/20221207143701.29861-1-willemdebruijn.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 9 Dec 2022 03:47:45 +0000 (19:47 -0800)]
Merge branch 'fix-possible-deadlock-during-wed-attach'
Lorenzo Bianconi says:
====================
fix possible deadlock during WED attach
Fix a possible deadlock in mtk_wed_attach if mtk_wed_wo_init routine fails.
Check wo pointer is properly allocated before running mtk_wed_wo_reset() and
mtk_wed_wo_deinit().
====================
Link: https://lore.kernel.org/r/cover.1670421354.git.lorenzo@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Lorenzo Bianconi [Wed, 7 Dec 2022 14:04:55 +0000 (15:04 +0100)]
net: ethernet: mtk_wed: fix possible deadlock if mtk_wed_wo_init fails
Introduce __mtk_wed_detach() in order to avoid a deadlock in the
mtk_wed_attach routine if mtk_wed_wo_init fails, since both
mtk_wed_attach and mtk_wed_detach run holding the hw_lock mutex.
Fixes: 4c5de09eb0d0 ("net: ethernet: mtk_wed: add configure wed wo support")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Lorenzo Bianconi [Wed, 7 Dec 2022 14:04:54 +0000 (15:04 +0100)]
net: ethernet: mtk_wed: fix some possible NULL pointer dereferences
Fix a possible NULL pointer dereference in the mtk_wed_detach routine by
checking that the wo pointer is properly allocated before running
mtk_wed_wo_reset() and mtk_wed_wo_deinit().
Even if it is just a theoretical issue at the moment, check that the wo
pointer is not NULL in mtk_wed_mcu_msg_update.
Moreover, honor the mtk_wed_mcu_send_msg return value in mtk_wed_wo_reset().
Fixes: 799684448e3e ("net: ethernet: mtk_wed: introduce wed wo support")
Fixes: 4c5de09eb0d0 ("net: ethernet: mtk_wed: add configure wed wo support")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Ian King [Wed, 7 Dec 2022 09:43:12 +0000 (09:43 +0000)]
nfp: Fix spelling mistake "tha" -> "the"
There is a spelling mistake in a nn_dp_warn message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20221207094312.2281493-1-colin.i.king@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Björn Töpel [Tue, 6 Dec 2022 10:28:38 +0000 (11:28 +0100)]
selftests: net: Fix O=dir builds
The BPF Makefile in net/bpf did incorrect path substitution for O=dir
builds, e.g.
make O=/tmp/kselftest headers
make O=/tmp/kselftest -C tools/testing/selftests
would fail in the net/ selftest builds [1] with
clang-16: error: no such file or directory: 'kselftest/net/bpf/nat6to4.c'
clang-16: error: no input files
Add a pattern prerequisite and an order-only-prerequisite (for
creating the directory), to resolve the issue.
[1] https://lore.kernel.org/all/202212060009.34CkQmCN-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Fixes: 837a3d66d698 ("selftests: net: Add cross-compilation support for BPF programs")
Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20221206102838.272584-1-bjorn@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 9 Dec 2022 02:46:33 +0000 (18:46 -0800)]
Merge branch 'mlxsw-add-spectrum-1-ip6gre-support'
Petr Machata says:
====================
mlxsw: Add Spectrum-1 ip6gre support
Ido Schimmel writes:
Currently, mlxsw only supports ip6gre offload on Spectrum-2 and newer
ASICs. Spectrum-1 can also offload ip6gre tunnels, but it needs double
entry router interfaces (RIFs) for the RIFs representing these tunnels.
In addition, the RIF index needs to be even. This is handled in
patches #1-#3.
The implementation can otherwise be shared between all Spectrum
generations. This is handled in patches #4-#5.
Patch #6 moves a mlxsw ip6gre selftest to a shared directory, as ip6gre
is no longer only supported on Spectrum-2 and newer ASICs.
This work is motivated by users that require multiple GRE tunnels that
all share the same underlay VRF. Currently, mlxsw only supports
decapsulation based on the underlay destination IP (i.e., not taking the
GRE key into account), so users need to configure these tunnels with
different source IPs, and IPv6 addresses are easier to spare than IPv4 ones.
Tested using existing ip6gre forwarding selftests.
====================
Link: https://lore.kernel.org/r/cover.1670414573.git.petrm@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:47 +0000 (13:36 +0100)]
selftests: mlxsw: Move IPv6 decap_error test to shared directory
Now that Spectrum-1 gained ip6gre support we can move the test out of
the Spectrum-2 directory.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:46 +0000 (13:36 +0100)]
mlxsw: spectrum_ipip: Add Spectrum-1 ip6gre support
As explained in the previous patch, the existing Spectrum-2 ip6gre
implementation can be reused for Spectrum-1. Change the Spectrum-1
ip6gre operations structure to use the common operations.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:45 +0000 (13:36 +0100)]
mlxsw: spectrum_ipip: Rename Spectrum-2 ip6gre operations
There are two main differences between Spectrum-1 and newer ASICs in
terms of IP-in-IP support:
1. In Spectrum-1, RIFs representing ip6gre tunnels require two entries
in the RIF table.
2. In Spectrum-2 and newer ASICs, packets ingress the underlay (during
encapsulation) and egress the underlay (during decapsulation) via a
special generic loopback RIF.
The first difference was handled in previous patches by adding the
'double_rif_entry' field to the Spectrum-1 operations structure of
ip6gre RIFs. The second difference is handled during RIF creation, by
only creating a generic loopback RIF in Spectrum-2 and newer ASICs.
Therefore, the ip6gre operations can be shared between Spectrum-1 and
newer ASICs in a similar fashion to how the ipgre operations are shared.
Rename the operations to not be Spectrum-2 specific and move them
earlier in the file so that they could later be used for Spectrum-1.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:44 +0000 (13:36 +0100)]
mlxsw: spectrum_router: Add support for double entry RIFs
In Spectrum-1, loopback router interfaces (RIFs) used for IP-in-IP
encapsulation with an IPv6 underlay require two RIF entries and the RIF
index must be even.
Prepare for this change by extending the RIF parameters structure with a
'double_entry' field that indicates if the RIF being created requires
two RIF entries or not. Only set it for RIFs representing ip6gre tunnels
in Spectrum-1.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:43 +0000 (13:36 +0100)]
mlxsw: spectrum_router: Parametrize RIF allocation size
Currently, each router interface (RIF) consumes one entry in the RIFs
table. This is going to change in subsequent patches where some RIFs
will consume two table entries.
Prepare for this change by parametrizing the RIF allocation size. For
now, always pass '1'.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ido Schimmel [Wed, 7 Dec 2022 12:36:42 +0000 (13:36 +0100)]
mlxsw: spectrum_router: Use gen_pool for RIF index allocation
Currently, each router interface (RIF) consumes one entry in the RIFs
table and there are no alignment constraints. This is going to change in
subsequent patches where some RIFs will consume two table entries and
their indexes will need to be aligned to the allocation size (even).
Prepare for this change by converting the RIF index allocation to use
gen_pool with the 'gen_pool_first_fit_order_align' algorithm.
No Kconfig changes necessary as mlxsw already selects
'GENERIC_ALLOCATOR'.
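A hedged sketch of such a gen_pool-based index allocator (illustrative
constants and names, not the exact mlxsw code):

  #include <linux/genalloc.h>
  #include <linux/errno.h>

  /* Offset so that gen_pool_alloc()'s 0 return can mean "failure"; it is
   * subtracted again to recover the RIF index. */
  #define RIF_GENALLOC_OFFSET 0x100

  static struct gen_pool *rif_pool_create(unsigned int max_rifs)
  {
          struct gen_pool *pool;

          pool = gen_pool_create(0, -1);
          if (!pool)
                  return NULL;

          /* Align each allocation to its own size (1 or 2 entries). */
          gen_pool_set_algo(pool, gen_pool_first_fit_order_align, NULL);

          if (gen_pool_add(pool, RIF_GENALLOC_OFFSET, max_rifs, -1)) {
                  gen_pool_destroy(pool);
                  return NULL;
          }
          return pool;
  }

  static int rif_index_alloc(struct gen_pool *pool, u8 rif_entries,
                             u16 *p_rif_index)
  {
          unsigned long index;

          index = gen_pool_alloc(pool, rif_entries);
          if (!index)
                  return -ENOBUFS;

          *p_rif_index = index - RIF_GENALLOC_OFFSET;
          return 0;
  }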
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 9 Dec 2022 00:07:53 +0000 (16:07 -0800)]
Merge git://git./linux/kernel/git/netdev/net
No conflicts.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Michael Guralnik [Thu, 8 Dec 2022 04:23:46 +0000 (06:23 +0200)]
net/mlx5: Expose steering dropped packets counter
Add rx steering discarded packets counter to the vnic_diag debugfs.
Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Or Har-Toov [Tue, 22 Nov 2022 10:42:18 +0000 (12:42 +0200)]
net/mlx5: Refactor and expand rep vport stat group
Expand representor vport stat group to support all counters from the
vport stat group, to count all the traffic passing through the vport.
Fix current implementation where fill_stats and update_stats use
different structs.
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Maor Dickman [Sun, 9 Oct 2022 06:53:22 +0000 (09:53 +0300)]
net/mlx5e: multipath, support routes with more than 2 nexthops
Today multipath offload is only supported when the number of nexthops
is 2, which blocks its use on systems with 2 NICs.
This patch solves it by enabling multipath offload per NIC if 2 nexthops
of the route are its uplinks.
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Oz Shlomo [Wed, 2 Nov 2022 14:36:51 +0000 (14:36 +0000)]
net/mlx5e: TC, add support for meter mtu offload
Initialize the meter object with the TC police mtu parameter.
Use the hardware range destination to compare the pkt len to the mtu setting.
Assign the range destination hit/miss ft to the police conform/exceed
attributes.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Oz Shlomo [Wed, 24 Aug 2022 12:45:43 +0000 (12:45 +0000)]
net/mlx5e: meter, add mtu post meter tables
TC police action may configure the maximum packet size to be handled by
the policer, in addition to byte/packet rate.
MTU check is realized in hardware using the range destination, specifying
a hit ft, if packet len is in the range, or miss ft otherwise.
Instantiate mtu green/red flow tables with a single match-all rule.
Add the green/red actions to the hit/miss table accordingly.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Oz Shlomo [Thu, 1 Dec 2022 12:35:36 +0000 (12:35 +0000)]
net/mlx5e: meter, refactor to allow multiple post meter tables
TC police action may configure the maximum packet size to be handled by
the policer, in addition to byte/packet rate.
Currently the post meter table steers the packet according to the meter
aso output.
Refactor the code to allow both metering and range post actions as a
pre-step for adding police mtu offload support.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Tue, 29 Nov 2022 09:21:38 +0000 (11:21 +0200)]
net/mlx5: DR, Add support for range match action
Add support for matching on range.
The supported type of range is L2 frame size.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Tue, 29 Nov 2022 09:01:23 +0000 (11:01 +0200)]
net/mlx5: DR, Add function that tells if STE miss addr has been initialized
Up until now the miss address in all the STEs was used to connect miss
lists and to link the last STE in the list to the end anchor.
Match range STE will require special handling because its miss address is
part of the 'action'. That is, range action has hit and miss addresses.
Since the range action is always the last action, need to make sure that
its miss address isn't overwritten by the end anchor.
Adding new function mlx5dr_ste_is_miss_addr_set() to answer the question
whether the STE's miss address has already been set as part of STE
initialization. Use a callback that always returns false right now. Once
match range is added, a different callback will be used for that STE type.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Tue, 29 Nov 2022 22:26:05 +0000 (00:26 +0200)]
net/mlx5: DR, Some refactoring of miss address handling
In preparation for MATCH RANGE STE support, create a function
to set the miss address of an STE.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Thu, 25 Aug 2022 12:35:59 +0000 (15:35 +0300)]
net/mlx5: DR, Manage definers with refcounts
In many cases different actions will ask for the same definer format.
Instead of allocating a new definer general object each time and running
out of definers, have an xarray of allocated definers and keep track of
their usage with refcounts: allocate a new definer only when there isn't
one with the same format already created, and destroy a definer only
when its refcount drops to zero.
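A hedged sketch of the refcounted reuse described above (illustrative
structure and a hypothetical FW helper, not the actual mlx5 DR code):

  #include <linux/xarray.h>
  #include <linux/refcount.h>
  #include <linux/slab.h>
  #include <linux/err.h>

  struct definer {
          u32 obj_id;
          u16 format_id;
          refcount_t refcount;
  };

  static struct definer *definer_get(struct xarray *definers, u16 format_id)
  {
          struct definer *definer;
          unsigned long id;
          int err;

          /* Reuse an existing definer with the same format if possible. */
          xa_for_each(definers, id, definer) {
                  if (definer->format_id == format_id) {
                          refcount_inc(&definer->refcount);
                          return definer;
                  }
          }

          definer = kzalloc(sizeof(*definer), GFP_KERNEL);
          if (!definer)
                  return ERR_PTR(-ENOMEM);

          definer->format_id = format_id;
          definer->obj_id = fw_create_definer(format_id); /* hypothetical */
          refcount_set(&definer->refcount, 1);

          err = xa_insert(definers, definer->obj_id, definer, GFP_KERNEL);
          if (err) {
                  kfree(definer);
                  return ERR_PTR(err);
          }
          return definer;
  }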
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Fri, 5 Aug 2022 23:36:54 +0000 (02:36 +0300)]
net/mlx5: DR, Handle FT action in a separate function
As preparation for range action support, move the handling of the final
ICM address for the flow table action to a separate function.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Fri, 5 Aug 2022 10:42:24 +0000 (13:42 +0300)]
net/mlx5: DR, Rework is_fw_table function
This patch handles the following two changes w.r.t. the is_fw_table function:
1. When SW steering is asked to create/destroy a FW table, we allow
creation/destruction of only termination tables. Rename mlx5_dr_is_fw_table
both to comply with the static function naming and to reflect that we're
actually checking for a FW termination table.
2. When the 'go to flow table' action is created, the destination flow
table can be any FW table, not only a termination table. Add a function
to check if the dest table is a FW table. This function will also be used
later when creating the range match action, so put it in the header file.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Tue, 31 May 2022 13:25:17 +0000 (16:25 +0300)]
net/mlx5: DR, Add functions to create/destroy MATCH_DEFINER general object
SW steering is able to match only on the exact values of the packet fields,
as requested by the user: the user provides a mask for the fields that are
of interest, and the exact values to be matched on when the traffic is handled.
Match Definer is a general FW object that defines which fields in the
packet will be referenced by the mask and tag of each STE. Match definer ID
is part of STE fields, and it defines how the HW needs to interpret the STE's
mask/tag values.
Till now SW steering used the definers that were managed by FW and implemented
the STE layout as described by the HW spec. Now that we're adding a new type
of STE, SW steering needs to define for the HW how it should interpret this
new STE's layout.
This is done with a programmable match definer.
The programmable definer allows selecting which fields will be included in
the definer, and their layout: it has up to 9 DW selectors and 8 Byte selectors.
Each selector indicates a DW/Byte worth of fields out of the table that
is defined by HW spec by referencing the offset of the required DW/Byte.
This patch adds dr_cmd function to create and destroy MATCH_DEFINER
general object.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Mon, 15 Aug 2022 09:45:28 +0000 (12:45 +0300)]
net/mlx5: fs, add match on ranges API
Range is a new flow destination type which allows matching on
a range of values instead of matching on a specific value.
Range flow destination has the following fields:
- hit_ft: flow table to forward the traffic in case of hit
- miss_ft: flow table to forward the traffic in case of miss
- field: which packet characteristic to match on
- min: minimal value for the selected field
- max: maximal value for the selected field
Note:
- In order to match, the value in the packet should meet
the following criteria: min <= value < max
- Currently, the only supported field type is L2 packet length
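For clarity, the hit criteria above expressed as a small helper (a sketch,
not the mlx5 API):

  #include <linux/types.h>

  /* A packet hits the range destination, and is forwarded to hit_ft,
   * when min <= value < max; otherwise it is forwarded to miss_ft. */
  static bool range_dest_hit(u32 value, u32 min, u32 max)
  {
          return value >= min && value < max;
  }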
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Yevgeny Kliteynik [Tue, 31 May 2022 10:55:28 +0000 (13:55 +0300)]
net/mlx5: mlx5_ifc updates for MATCH_DEFINER general object
Update full structure of match definer and add an ID of
the SELECT match definer type.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Linus Torvalds [Thu, 8 Dec 2022 23:32:13 +0000 (15:32 -0800)]
Merge tag 'net-6.1-rc9' of git://git./linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from bluetooth, can and netfilter.
Current release - new code bugs:
- bonding: ipv6: correct address used in Neighbour Advertisement
parsing (src vs dst typo)
- fec: properly scope IRQ coalesce setup during link up to supported
chips only
Previous releases - regressions:
- Bluetooth fixes for fake CSR clones (knockoffs):
- re-add ERR_DATA_REPORTING quirk
- fix crash when device is replugged
- Bluetooth:
- silence a user-triggerable dmesg error message
- L2CAP: fix u8 overflow, oob access
- correct vendor codec definition
- fix support for Read Local Supported Codecs V2
- ti: am65-cpsw: fix RGMII configuration at SPEED_10
- mana: fix race on per-CQ variable NAPI work_done
Previous releases - always broken:
- af_unix: diag: fetch user_ns from in_skb in unix_diag_get_exact(),
avoid null-deref
- af_can: fix NULL pointer dereference in can_rcv_filter
- can: slcan: fix UAF with a freed work
- can: can327: flush TX_work on ldisc .close()
- macsec: add missing attribute validation for offload
- ipv6: avoid use-after-free in ip6_fragment()
- nft_set_pipapo: actually validate intervals in fields after the
first one
- mvneta: prevent oob access in mvneta_config_rss()
- ipv4: fix incorrect route flushing when table ID 0 is used, or when
source address is deleted
- phy: mxl-gpy: add workaround for IRQ bug on GPY215B and GPY215C"
* tag 'net-6.1-rc9' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (77 commits)
net: dsa: sja1105: avoid out of bounds access in sja1105_init_l2_policing()
s390/qeth: fix use-after-free in hsci
macsec: add missing attribute validation for offload
net: mvneta: Fix an out of bounds check
net: thunderbolt: fix memory leak in tbnet_open()
ipv6: avoid use-after-free in ip6_fragment()
net: plip: don't call kfree_skb/dev_kfree_skb() under spin_lock_irq()
net: phy: mxl-gpy: add MDINT workaround
net: dsa: mv88e6xxx: accept phy-mode = "internal" for internal PHY ports
xen/netback: don't call kfree_skb() under spin_lock_irqsave()
dpaa2-switch: Fix memory leak in dpaa2_switch_acl_entry_add() and dpaa2_switch_acl_entry_remove()
ethernet: aeroflex: fix potential skb leak in greth_init_rings()
tipc: call tipc_lxc_xmit without holding node_read_lock
can: esd_usb: Allow REC and TEC to return to zero
can: can327: flush TX_work on ldisc .close()
can: slcan: fix freed work crash
can: af_can: fix NULL pointer dereference in can_rcv_filter
net: dsa: sja1105: fix memory leak in sja1105_setup_devlink_regions()
ipv4: Fix incorrect route flushing when table ID 0 is used
ipv4: Fix incorrect route flushing when source address is deleted
...
Jakub Kicinski [Thu, 8 Dec 2022 22:27:50 +0000 (14:27 -0800)]
Merge branch 'mlx4-better-big-tcp-support'
Eric Dumazet says:
====================
mlx4: better BIG-TCP support
mlx4 uses a bounce buffer in TX whenever the tx descriptors
wrap around the right edge of the ring.
Size of this bounce buffer was hard coded and can be
increased if/when needed.
====================
Link: https://lore.kernel.org/r/20221207141237.2575012-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Wed, 7 Dec 2022 14:12:37 +0000 (14:12 +0000)]
net/mlx4: small optimization in mlx4_en_xmit()
The test against MLX4_MAX_DESC_TXBBS only matters if the TX
bounce buffer is going to be used.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Wei Wang <weiwan@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Wed, 7 Dec 2022 14:12:36 +0000 (14:12 +0000)]
net/mlx4: MLX4_TX_BOUNCE_BUFFER_SIZE depends on MAX_SKB_FRAGS
Google production kernel has increased MAX_SKB_FRAGS to 45
for BIG-TCP rollout.
Unfortunately mlx4 TX bounce buffer is not big enough whenever
an skb has up to 45 page fragments.
This can happen often with TCP TX zero copy, as one frag usually
holds 4096 bytes of payload (order-0 page).
Tested:
Kernel built with MAX_SKB_FRAGS=45
ip link set dev eth0 gso_max_size 185000
netperf -t TCP_SENDFILE
I made sure that "ethtool -G eth0 tx 64" was properly working,
ring->full_size being set to 15.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Wei Wang <weiwan@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Wed, 7 Dec 2022 14:12:35 +0000 (14:12 +0000)]
net/mlx4: rename two constants
MAX_DESC_SIZE is really the size of the bounce buffer used
when reaching the right side of the TX ring buffer.
MAX_DESC_TXBBS gets an MLX4_ prefix.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jacob Keller [Mon, 5 Dec 2022 19:52:53 +0000 (11:52 -0800)]
ice: reschedule ice_ptp_wait_for_offset_valid during reset
If the ice_ptp_wait_for_offset_valid function is scheduled to run while the
driver is resetting, it will exit without completing calibration. The work
function gets scheduled by ice_ptp_port_phy_restart which will be called as
part of the reset recovery process.
It is possible for the first execution to occur before the driver has
completely cleared its resetting flags. Ensure calibration completes by
rescheduling the task until reset is fully completed.
Reported-by: Siddaraju DH <siddaraju.dh@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Siddaraju DH [Mon, 5 Dec 2022 19:52:52 +0000 (11:52 -0800)]
ice: make Tx and Rx vernier offset calibration independent
The Tx and Rx calibration and timestamp generation blocks are independent.
However, the ice driver waits until both blocks are ready before
configuring either block.
This can result in delay of configuring one block because we have not yet
received a packet in the other block.
There is no reason to wait to finish programming Tx just because we haven't
received a packet. Similarly there is no reason to wait to program Rx just
because we haven't transmitted a packet.
Instead of checking both offset status before programming either block,
refactor the ice_phy_cfg_tx_offset_e822 and ice_phy_cfg_rx_offset_e822
functions so that they perform their own offset status checks.
Additionally, make them also check the offset ready bit to determine if
the offset values have already been programmed.
Call the individual configure functions directly in
ice_ptp_wait_for_offset_valid. The functions will now correctly check
status, and program the offsets if ready. Once the offset is programmed,
the functions will exit quickly after just checking the offset ready
register.
Remove the ice_phy_calc_vernier_e822 in ice_ptp_hw.c, as well as the offset
valid check functions in ice_ptp.c entirely as they are no longer
necessary.
With this change, the Tx and Rx blocks will each be enabled as soon as
possible without waiting for the other block to complete calibration. This
can enable timestamps faster in setups which have a low rate of transmitted
or received packets. In particular, it can stop a situation where one port
never receives traffic, and thus never finishes calibration of the Tx
block, resulting in continuous faults reported by the ptp4l daemon
application.
Signed-off-by: Siddaraju DH <siddaraju.dh@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:51 +0000 (11:52 -0800)]
ice: only check set bits in ice_ptp_flush_tx_tracker
The ice_ptp_flush_tx_tracker function is called to clear all outstanding Tx
timestamp requests when the port is being brought down. This function
iterates over the entire list, but this is unnecessary. We only need to
check the bits which are actually set in the ready bitmap.
Replace this logic with for_each_set_bit, and follow a similar flow as in
ice_ptp_tx_tstamp_cleanup. Note that it is safe to call dev_kfree_skb_any
on a NULL pointer as it will perform a no-op, so we do not need to check
whether the skb is NULL first.
The new implementation also avoids clearing (and thus reading!) the PHY
timestamp unless the index is marked as having a valid timestamp in the
timestamp status bitmap. This ensures that we properly clear the status
registers as appropriate.
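A hedged sketch of that flush flow, assuming a tracker with an skb array,
an in_use bitmap and a len field (illustrative names, locking omitted, not
the actual ice code):

  static void tx_tracker_flush(struct tx_tracker *tx, unsigned long *tstamp_ready)
  {
          u8 idx;

          /* Walk only the indexes whose in_use bits are set. */
          for_each_set_bit(idx, tx->in_use, tx->len) {
                  struct sk_buff *skb;

                  /* Clear the PHY timestamp only if one is actually latched. */
                  if (test_bit(idx, tstamp_ready))
                          clear_phy_tstamp(tx, idx);      /* hypothetical */

                  skb = tx->entries[idx].skb;
                  tx->entries[idx].skb = NULL;
                  clear_bit(idx, tx->in_use);

                  /* dev_kfree_skb_any() is a no-op on NULL. */
                  dev_kfree_skb_any(skb);
          }
  }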
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:50 +0000 (11:52 -0800)]
ice: handle flushing stale Tx timestamps in ice_ptp_tx_tstamp
In the event of a PTP clock time change due to .adjtime or .settime, the
ice driver needs to update the cached copy of the PHC time and also discard
any outstanding Tx timestamps.
This is required because otherwise the wrong copy of the PHC time will be
used when extending the Tx timestamp. This could result in reporting
incorrect timestamps to the stack.
The current approach taken to handle this is to call
ice_ptp_flush_tx_tracker, which will discard any timestamps which are not
yet complete.
This is problematic for two reasons:
1) it could lead to a potential race condition where the wrong timestamp is
associated with a future packet.
This can occur with the following flow:
1. Thread A gets request to transmit a timestamped packet, and picks an
index and transmits the packet
2. Thread B calls ice_ptp_flush_tx_tracker and sees the index in use,
marking it as discarded. No timestamp read occurs because the status
bit is not set, but the index is released for re-use
3. Thread A gets a new request to transmit another timestamped packet,
picks the same (now unused) index and transmits that packet.
4. The PHY transmits the first packet and updates the timestamp slot and
generates an interrupt.
5. The ice_ptp_tx_tstamp thread executes, sees the interrupt and a
valid timestamp, but associates it with the new Tx SKB rather than the
packet the timestamp was actually generated for.
This could result in the previous timestamp being assigned to a new
packet producing incorrect timestamps and leading to incorrect behavior
in PTP applications.
This is most likely to occur when the packet rate for Tx timestamp
requests is very high.
2) on E822 hardware, we must avoid reading a timestamp index more than once
each time its status bit is set and an interrupt is generated by
hardware.
We do have some extensive checks for the unread flag to ensure that only
one of either the ice_ptp_flush_tx_tracker or ice_ptp_tx_tstamp threads
read the timestamp. However, even with this we can still have cases
where we "flush" a timestamp that was actually completed in hardware.
This can lead to cases where we don't read the timestamp index as
appropriate.
To fix both of these issues, we must avoid calling ice_ptp_flush_tx_tracker
outside of the teardown path.
Rather than using ice_ptp_flush_tx_tracker, introduce a new state bitmap,
the stale bitmap. Start this as cleared when we begin a new timestamp
request. When we're about to extend a timestamp and send it up to the
stack, first check to see if that stale bit was set. If so, drop the
timestamp without sending it to the stack.
When we need to update the cached PHC timestamp out of band, just mark all
currently outstanding timestamps as stale. This will ensure that once
hardware completes the timestamp we'll ignore it correctly and avoid
reporting bogus timestamps to userspace.
With this change, we fix potential issues caused by calling
ice_ptp_flush_tx_tracker during normal operation.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:49 +0000 (11:52 -0800)]
ice: cleanup allocations in ice_ptp_alloc_tx_tracker
The ice_ptp_alloc_tx_tracker function must allocate the timestamp array and
the bitmap for tracking the currently in use indexes. A future change is
going to add yet another allocation to this function.
If these allocations fail we need to ensure that we properly cleanup and
ensure that the pointers in the ice_ptp_tx structure are NULL.
Simplify this logic by allocating to local variables first. If any
allocation fails, then free everything and exit. Only update the ice_ptp_tx
structure if all allocations succeed.
This ensures that we have no side effects on the Tx structure unless all
allocations have succeeded. Thus, no code will see an invalid pointer and
we don't need to re-assign NULL on cleanup.
This is safe because kernel "free" functions are designed to be NULL safe
and perform no action if passed a NULL pointer. Thus it's safe to simply
always call kfree or bitmap_free even if one of those pointers was NULL.
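A hedged sketch of the allocation pattern just described (illustrative
structure and names, not the actual ice code):

  #include <linux/slab.h>
  #include <linux/bitmap.h>
  #include <linux/skbuff.h>

  struct tstamp_entry {
          struct sk_buff *skb;
  };

  struct tx_tracker {
          struct tstamp_entry *entries;
          unsigned long *in_use;
          u8 len;
  };

  static int tx_tracker_alloc(struct tx_tracker *tx, u8 len)
  {
          struct tstamp_entry *entries;
          unsigned long *in_use;

          /* Allocate into locals first; publish to the tracker only if
           * every allocation succeeded. */
          entries = kcalloc(len, sizeof(*entries), GFP_KERNEL);
          in_use = bitmap_zalloc(len, GFP_KERNEL);
          if (!entries || !in_use) {
                  /* kfree() and bitmap_free() are NULL-safe. */
                  kfree(entries);
                  bitmap_free(in_use);
                  return -ENOMEM;
          }

          tx->entries = entries;
          tx->in_use = in_use;
          tx->len = len;
          return 0;
  }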
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:47 +0000 (11:52 -0800)]
ice: protect init and calibrating check in ice_ptp_request_ts
When requesting a new timestamp, the ice_ptp_request_ts function does not
hold the Tx tracker lock while checking init and calibrating. This means
that we might issue a new timestamp request just after the Tx timestamp
tracker starts being deinitialized. This could lead to incorrect access of
the timestamp structures. Correct this by moving the init and calibrating
checks under the lock, and updating the flows which modify these fields to
use the lock.
Note that we do not need to hold the lock while checking for tx->init in
ice_ptp_tx_tstamp. This is because the teardown function will use
synchronize_irq after clearing the flag to ensure that the threaded
interrupt completes. Either a) the tx->init flag will be cleared before the
ice_ptp_tx_tstamp function starts, thus it will exit immediately, or b) the
threaded interrupt will be executing and the synchronize_irq will wait
until the threaded interrupt has completed, at which point we know the init
field has definitely been cleared and new interrupts will not execute the Tx
timestamp thread function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jakub Kicinski [Thu, 8 Dec 2022 21:04:57 +0000 (13:04 -0800)]
Merge branch 'mlx5-Support-tc-police-jump-conform-exceed-attribute'
Saeed Mahameed says:
====================
Support tc police jump conform-exceed attribute
The tc police action conform-exceed option defines how to handle
packets which exceed or conform to the configured bandwidth limit.
One of the possible conform-exceed values is jump, which skips over
a specified number of actions.
This series adds support for conform-exceed jump action.
The series adds platform support for branching actions by providing
true/false flow attributes to the branching action.
This is necessary for supporting police jump, as each branch may
execute a different action list.
The first five patches are preparation patches:
- Patches 1 and 2 add support for actions with no destinations (e.g. drop)
- Patch 3 refactor the code for subsequent function reuse
- Patch 4 defines an abstract way for identifying terminating actions
- Patch 5 updates action list validations logic considering branching actions
The following three patches introduce an interface for abstracting branching
actions:
- Patch 6 introduces an abstract api for defining branching actions
- Patch 7 generically instantiates the branching flow attributes using
the abstract API
Patch 8 adds the platform support for jump actions, by executing the following
sequence:
a. Store the jumping flow attr
b. Identify the jump target action while iterating the actions list.
c. Instantiate a new flow attribute after the jump target action.
This is the flow attribute that the branching action should jump to.
d. Set the target post action id on:
d.1. The jumping attribute, thus realizing the jump functionality.
d.2. The attribute preceding the target jump attr, if not terminating.
The next patches apply the platform's branching attributes to the police
action:
- Patch 9 is a refactor patch
- Patch 10 initializes the post meter table with the red/green flow attributes,
as were initialized by the platform
- Patch 11 enables the offload of meter actions using jump conform-exceed
value.
====================
Link: https://lore.kernel.org/all/20221203221337.29267-1-saeed@kernel.org/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:33 +0000 (14:13 -0800)]
net/mlx5e: TC, allow meter jump control action
Separate the matchall police action validation from flower validation.
Isolate the action validation logic in the police action parser.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-12-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:32 +0000 (14:13 -0800)]
net/mlx5e: TC, init post meter rules with branching attributes
Instantiate the post meter actions with the platform initialized branching
action attributes.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-11-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:31 +0000 (14:13 -0800)]
net/mlx5e: TC, rename post_meter actions
Currently post meter supports only the pipe/drop conform-exceed policy.
This assumption is reflected in several variable names.
Rename the following variables as a pre-step for using the generalized
branching action platform.
Rename fwd_green_rule/drop_red_rule to green_rule/red_rule respectively.
Repurpose red_counter/green_counter to act_counter/drop_counter to allow
police conform-exceed configurations that do not drop.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-10-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:30 +0000 (14:13 -0800)]
net/mlx5e: TC, initialize branching action with target attr
Identify the jump target action when iterating the action list.
Initialize the jump target attr with the jumping attribute during the
parsing phase. Initialize the jumping attr post action with the target
during the offload phase.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-9-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:29 +0000 (14:13 -0800)]
net/mlx5e: TC, initialize branch flow attributes
Initialize flow attribute for drop, accept, pipe and jump branching actions.
Instantiate a flow attribute instance according to the specified branch
control action. Store the branching attributes on the branching action
flow attribute during the parsing phase. Then, during the offload phase,
allocate the relevant mod header objects to the branching actions.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-8-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:28 +0000 (14:13 -0800)]
net/mlx5e: TC, set control params for branching actions
Extend the act tc api to set the branch control params aligning with
the police conform/exceed use case.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-7-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:27 +0000 (14:13 -0800)]
net/mlx5e: TC, validate action list per attribute
Currently the entire flow action list is validated for offload limitations.
For example, flows with both forward and drop actions are declared invalid
due to hardware restrictions.
However, a multi-table hardware model changes the limitations from a flow
scope to a single flow attribute scope.
Apply offload limitations to flow attributes instead of the entire flow.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-6-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:26 +0000 (14:13 -0800)]
net/mlx5e: TC, add terminating actions
Extend act api to identify actions that terminate action list.
Pre-step for terminating branching actions.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-5-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:25 +0000 (14:13 -0800)]
net/mlx5e: TC, reuse flow attribute post parser processing
After the tc action parsing phase the flow attribute is initialized with
relevant eswitch offload objects such as tunnel, vlan, header modify and
counter attributes. The post processing is done both for fdb and post-action
attributes.
Reuse the flow attribute post parsing logic by both fdb and post-action
offloads.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-4-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:24 +0000 (14:13 -0800)]
net/mlx5: fs, assert null dest pointer when dest_num is 0
Currently create_flow_handle() assumes a null dest pointer when there
are no destinations.
This might not be the case as the caller may pass an allocated dest
array while setting the dest_num parameter to 0.
Assert null dest array for flow rules that have no destinations (e.g. drop
rule).
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-3-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oz Shlomo [Sat, 3 Dec 2022 22:13:23 +0000 (14:13 -0800)]
net/mlx5e: E-Switch, handle flow attribute with no destinations
Rules with drop action are not required to have a destination.
Currently the destination list is allocated with the maximum number of
destinations and passed to the fs_core layer along with the actual number
of destinations.
Remove the redundant passing of the dest pointer when the destination count is 0.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221203221337.29267-2-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Linus Torvalds [Thu, 8 Dec 2022 20:37:42 +0000 (12:37 -0800)]
Merge tag 'for-linus-2022120801' of git://git./linux/kernel/git/hid/hid
Pull HID fixes from Jiri Kosina:
"A regression fix for handling Logitech HID++ devices and memory
corruption fixes:
- regression fix (revert) for catch-all handling of Logitech HID++
Bluetooth devices; there are devices that turn out not to work with
this, and the root cause is yet to be properly understood. So we
are dropping it for now, and it will be revisited for 6.2 or 6.3
(Benjamin Tissoires)
- memory corruption fix in HID core (ZhangPeng)
- memory corruption fix in hid-lg4ff (Anastasia Belova)
- Kconfig fix for I2C_HID (Benjamin Tissoires)
- a few device-id specific quirks that piggy-back on top of the
important fixes above"
* tag 'for-linus-2022120801' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
Revert "HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth devices"
Revert "HID: logitech-hidpp: Remove special-casing of Bluetooth devices"
HID: usbhid: Add ALWAYS_POLL quirk for some mice
HID: core: fix shift-out-of-bounds in hid_report_raw_event
HID: uclogic: Add HID_QUIRK_HIDINPUT_FORCE quirk
HID: fix I2C_HID not selected when I2C_HID_OF_ELAN is
HID: hid-lg4ff: Add check for empty lbuf
HID: ite: Enable QUIRK_TOUCHPAD_ON_OFF_REPORT on Acer Aspire Switch V 10
HID: uclogic: Fix frame templates for big endian architectures
Linus Torvalds [Thu, 8 Dec 2022 19:22:27 +0000 (11:22 -0800)]
Merge tag 'soc-fixes-6.1-5' of git://git./linux/kernel/git/soc/soc
Pull ARM SoC fix from Arnd Bergmann:
"One last build fix came in, addressing a link failure when building
without CONFIG_OUTER_CACHE"
* tag 'soc-fixes-6.1-5' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
ARM: at91: fix build for SAMA5D3 w/o L2 cache
Benjamin Tissoires [Wed, 7 Dec 2022 14:24:33 +0000 (15:24 +0100)]
Revert "HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth devices"
This reverts commit 532223c8ac57605a10e46dc0ab23dcf01c9acb43.
As reported in [0], hid-logitech-hidpp now binds on all bluetooth mice,
but there are corner cases where hid-logitech-hidpp just gives up on
the mouse. This leaves the end user with a dead mouse.
Given that we are at -rc8, we are definitely too late to find a proper
fix. We already identified 2 issues less than 24 hours after the bug
report. One is that ->match() was never designed to be used anywhere else
than in hid-generic, and the other is that hid-logitech-hidpp has corner
cases where it gives up on devices it is not supposed to.
So we have no choice but postpone this patch to the next kernel release.
[0] https://lore.kernel.org/linux-input/CAJZ5v0g-_o4AqMgNwihCb0jrwrcJZfRrX=jv8aH54WNKO7QB8A@mail.gmail.com/
Reported-by: Rafael J . Wysocki <rjw@rjwysocki.net>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Benjamin Tissoires [Wed, 7 Dec 2022 14:24:32 +0000 (15:24 +0100)]
Revert "HID: logitech-hidpp: Remove special-casing of Bluetooth devices"
This reverts commit 8544c812e43ab7bdf40458411b83987b8cba924d.
We need to revert commit 532223c8ac57 ("HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth devices") because that commit might make
hid-logitech-hidpp bind on mice that are not well enough supported by
hid-logitech-hidpp, and the end result is that the probe of those mice
is now returning -ENODEV, leaving the end user with a dead mouse.
Given that commit 8544c812e43a ("HID: logitech-hidpp: Remove special-casing of Bluetooth devices") is a direct dependency of 532223c8ac57, revert it too.
Note that this also adapts according to commit 908d325e1665 ("HID:
logitech-hidpp: Detect hi-res scrolling support") to re-add support for
the devices that were removed in that commit too.
I have an MX Master locally and I tested this device with that revert,
ensuring we still have high-res scrolling.
Reported-by: Rafael J . Wysocki <rjw@rjwysocki.net>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Linus Torvalds [Thu, 8 Dec 2022 19:16:15 +0000 (11:16 -0800)]
Merge tag 'loongarch-fixes-6.1-3' of git://git./linux/kernel/git/chenhuacai/linux-loongson
Pull LoongArch fixes from Huacai Chen:
"Export smp_send_reschedule() for modules use, fix a huge page entry
update issue, and add documents for booting description"
* tag 'loongarch-fixes-6.1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
docs/zh_CN: Add LoongArch booting description's translation
docs/LoongArch: Add booting description
LoongArch: mm: Fix huge page entry update for virtual machine
LoongArch: Export symbol for function smp_send_reschedule()
Jacob Keller [Mon, 5 Dec 2022 19:52:46 +0000 (11:52 -0800)]
ice: synchronize the misc IRQ when tearing down Tx tracker
Since commit 1229b33973c7 ("ice: Add low latency Tx timestamp read") the
ice driver has used a threaded IRQ for handling Tx timestamps. This change
did not add a call to synchronize_irq during ice_ptp_release_tx_tracker.
Thus it is possible that an interrupt could occur just as the tracker is
being removed. This could lead to a use-after-free of the Tx tracker
structure data.
Fix this by calling synchronize_irq in ice_ptp_release_tx_tracker after
we've cleared the init flag. In addition, make sure that we re-check the
init flag at the end of ice_ptp_tx_tstamp before we exit ensuring that we
will stop polling for new timestamps once the tracker de-initialization has
begun.
Refactor the ts_handled variable into "more_timestamps" so that we can
directly assign this boolean instead of relying on an initialized
value of true. This makes the new combined check easier to read.
With this change, the ice_ptp_release_tx_tracker function will now wait for
the threaded interrupt to complete if it was executing while the init flag
was cleared.
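As a rough illustration of the ordering described above (not the exact driver
code; the lock, flag and vector names are assumptions), the teardown and the
poll-side re-check look roughly like this:

    /* ice_ptp_release_tx_tracker(): stop new polling passes, then wait
     * for any in-flight threaded handler before freeing the tracker */
    spin_lock(&tx->lock);
    tx->init = 0;
    spin_unlock(&tx->lock);

    synchronize_irq(misc_irq);  /* misc_irq: misc interrupt vector (assumed name) */

    /* ... only now free the skbs and the tracker memory ... */

    /* ice_ptp_tx_tstamp(): stop reporting pending work once
     * de-initialization has begun */
    more_timestamps = tx->init && !bitmap_empty(tx->in_use, tx->len);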
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:45 +0000 (11:52 -0800)]
ice: check Tx timestamp memory register for ready timestamps
The PHY for E822 based hardware has a register which indicates which
timestamps are valid in the PHY timestamp memory block. Each bit in the
register indicates whether the associated index in the timestamp memory is
valid.
Hardware sets this bit when the timestamp is captured, and clears the bit
when the timestamp is read. Use of this register is important as reading
timestamp registers can impact the way that hardware generates timestamp
interrupts.
This occurs because the PHY has an internal value which is incremented
when hardware captures a timestamp and decremented when software reads a
timestamp. Reading timestamps which are not marked as valid still decrement
the internal value and can result in the Tx timestamp interrupt not
triggering in the future.
To prevent this, use the timestamp memory value to determine which
timestamps are ready to be read. The ice_get_phy_tx_tstamp_ready function
reads this value. For E810 devices, this just always returns with all bits
set.
Skip any timestamp which is not set in this bitmap, avoiding reading extra
timestamps on E822 devices.
The stale check against a cached timestamp value is no longer necessary for
PHYs which support the timestamp ready bitmap properly. E810 devices still
need this. Introduce a new verify_cached flag to the ice_ptp_tx structure.
Use this to determine if we need to perform the verification against the
cached timestamp value. Set this to 1 for the E810 Tx tracker init
function. Notice that many of the fields in ice_ptp_tx are simple 1 bit
flags. Save some structure space by using bitfields of length 1 for these
values.
Modify the ICE_PTP_TS_VALID check to simply drop the timestamp immediately,
so that in the event of getting such an invalid timestamp the driver does
not attempt to re-read the timestamp in a future poll of the
register.
With these changes, the driver now reads each timestamp register exactly
once, and does not attempt any re-reads. This ensures the interrupt
tracking logic in the PHY will not get stuck.
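A hedged sketch of that flow (the signature of ice_get_phy_tx_tstamp_ready()
and the tracker field names are assumptions here, not the literal driver code):

    u64 tstamp_ready;
    u8 idx;

    /* E822: bitmap of captured-but-unread entries; E810: all bits set */
    if (ice_get_phy_tx_tstamp_ready(&pf->hw, tx->block, &tstamp_ready))
        return;

    for_each_set_bit(idx, tx->in_use, tx->len) {
        /* skip entries the PHY has not marked as ready */
        if (!(tstamp_ready & BIT_ULL(tx->offset + idx)))
            continue;

        /* read the timestamp register exactly once and complete the skb */
    }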
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:44 +0000 (11:52 -0800)]
ice: handle discarding old Tx requests in ice_ptp_tx_tstamp
Currently the driver uses the PTP kthread to process handling and
discarding of stale Tx timestamp requests. The function
ice_ptp_tx_tstamp_cleanup is used for this.
A separate thread creates complications for the driver as we now have both
the main Tx timestamp processing IRQ checking timestamps as well as the
kthread.
Rather than using the kthread to handle this, simply check for stale
timestamps within the ice_ptp_tx_tstamp function. This function must
already process the timestamps anyways.
If a Tx timestamp has been waiting for 2 seconds we simply clear the bit
and discard the SKB. This avoids the complication of having separate
threads polling, reducing overall CPU work.
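A minimal sketch of the in-line check, assuming each tracker entry records the
jiffies value at which the request was started (field names are illustrative):

    /* inside the per-index loop of ice_ptp_tx_tstamp() */
    if (time_is_before_jiffies(tx->tstamps[idx].start + 2 * HZ)) {
        struct sk_buff *skb = tx->tstamps[idx].skb;

        /* request has been pending for more than 2 seconds: drop it */
        tx->tstamps[idx].skb = NULL;
        clear_bit(idx, tx->in_use);
        dev_kfree_skb_any(skb);
        continue;
    }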
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:43 +0000 (11:52 -0800)]
ice: always call ice_ptp_link_change and make it void
The ice_ptp_link_change function is currently only called for E822 based
hardware. Future changes are going to extend this function to perform
additional tasks on link change.
Always call this function, moving the E810 check from the callers down to
just before we call the E822-specific function required to restart the PHY.
This function also returns an error value, but none of the callers actually
check it. In general, the errors it produces are more likely systemic
problems such as invalid or corrupt port numbers. No caller checks these,
and so no warning is logged.
Re-order the flag checks so that ICE_FLAG_PTP is checked first. Drop the
unnecessary check for ICE_FLAG_PTP_SUPPORTED, as ICE_FLAG_PTP will not be
set except when ICE_FLAG_PTP_SUPPORTED is set.
Convert the port checks to WARN_ON_ONCE, in order to generate a kernel
stack trace when they are hit.
Convert the function to void since no caller actually checks these return
values.
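The resulting shape of the helper, sketched below with assumed field and
constant names (flag check first, WARN_ON_ONCE on the port, void return,
E810 bail-out before the E822-specific restart):

    void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
    {
        struct ice_ptp_port *ptp_port;

        if (!test_bit(ICE_FLAG_PTP, pf->flags))
            return;

        if (WARN_ON_ONCE(port >= ICE_NUM_EXTERNAL_PORTS))
            return;

        ptp_port = &pf->ptp.port;
        if (WARN_ON_ONCE(ptp_port->port_num != port))
            return;

        ptp_port->link_up = linkup;

        if (ice_is_e810(&pf->hw))
            return;    /* nothing more to do on E810 */

        ice_ptp_port_phy_restart(ptp_port);
    }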
Co-developed-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Jacob Keller [Mon, 5 Dec 2022 19:52:42 +0000 (11:52 -0800)]
ice: fix misuse of "link err" with "link status"
The ice_ptp_link_change function has a comment which mentions "link
err" when referring to the current link status. We are storing the status
of whether link is up or down, which is not an error.
It appears that this use of "err" accidentally got included due to an
overzealous search and replace when removing the ice_status enum and local
status variable.
Fix the wording to use the correct term.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Karol Kolacinski [Mon, 5 Dec 2022 19:52:41 +0000 (11:52 -0800)]
ice: Reset TS memory for all quads
In E822 products, the owner PF should reset memory for all quads, not
only for the one where the assigned lport is.
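A small sketch of the idea (the per-quad reset helper and the ICE_MAX_QUAD
bound are assumed names, used here only for illustration):

    u8 quad;

    for (quad = 0; quad < ICE_MAX_QUAD; quad++)
        ice_ptp_reset_ts_memory_quad(hw, quad);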
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Milena Olech [Mon, 5 Dec 2022 19:52:40 +0000 (11:52 -0800)]
ice: Remove the E822 vernier "bypass" logic
The E822 devices support an extended "vernier" calibration which enables
higher precision timestamps by accounting for delays in the PHY, and
compensating for them. These delays are measured by hardware as part of its
vernier calibration logic.
The driver currently starts the PHY in "bypass" mode which skips
the compensation. Then it later attempts to switch from bypass to vernier.
This unfortunately does not work as expected. Instead of properly
compensating for the delays, the hardware continues operating in bypass
without the improved precision expected.
Because we cannot dynamically switch between bypass and vernier mode,
refactor the driver to always operate in vernier mode. This has a slight
downside: Tx timestamp and Rx timestamp requests for the very first
packets sent after link up will not complete properly and may be
reported to applications as missing timestamps.
This occurs frequently in test environments where traffic is light or
targeted specifically at testing PTP. However, in practice most
environments will have transmitted or received some data over the network
before such initial requests are made.
Signed-off-by: Milena Olech <milena.olech@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Sergey Temerkhanov [Mon, 5 Dec 2022 19:52:39 +0000 (11:52 -0800)]
ice: Use more generic names for ice_ptp_tx fields
Some supported devices have per-port timestamp memory blocks while
others have shared ones within quads. Rename the struct ice_ptp_tx
fields to reflect the block entities the structure works with.
Signed-off-by: Sergey Temerkhanov <sergey.temerkhanov@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Linus Torvalds [Thu, 8 Dec 2022 19:11:06 +0000 (11:11 -0800)]
Merge tag 'for-linus-xsa-6.1-rc9b-tag' of git://git./linux/kernel/git/xen/tip
Pull xen fix from Juergen Gross:
"A single fix for the recent security issue XSA-423"
* tag 'for-linus-xsa-6.1-rc9b-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/netback: fix build warning
Linus Torvalds [Thu, 8 Dec 2022 19:00:42 +0000 (11:00 -0800)]
Merge tag 'gpio-fixes-for-v6.1' of git://git./linux/kernel/git/brgl/linux
Pull gpio fixes from Bartosz Golaszewski:
- fix a memory leak in gpiolib core
- fix reference leaks in gpio-amd8111 and gpio-rockchip
* tag 'gpio-fixes-for-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
gpio/rockchip: fix refcount leak in rockchip_gpiolib_register()
gpio: amd8111: Fix PCI device reference count leak
gpiolib: fix memory leak in gpiochip_setup_dev()
Linus Torvalds [Thu, 8 Dec 2022 18:46:52 +0000 (10:46 -0800)]
Merge tag 'ata-6.1-rc8' of git://git./linux/kernel/git/dlemoal/libata
Pull ATA fix from Damien Le Moal:
- Avoid a NULL pointer dereference in the libahci platform code that
can happen on initialization when a device tree does not specify
names for the adapter clocks (from Anders)
* tag 'ata-6.1-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata:
ata: libahci_platform: ahci_platform_find_clk: oops, NULL pointer
Tejun Heo [Thu, 8 Dec 2022 02:53:15 +0000 (16:53 -1000)]
memcg: Fix possible use-after-free in memcg_write_event_control()
memcg_write_event_control() accesses the dentry->d_name of the specified
control fd to route the write call. As a cgroup interface file can't be
renamed, it's safe to access d_name as long as the specified file is a
regular cgroup file. Also, as these cgroup interface files can't be
removed before the directory, it's safe to access the parent too.
Prior to 347c4a874710 ("memcg: remove cgroup_event->cft"), there was a
call to __file_cft() which verified that the specified file is a regular
cgroupfs file before further accesses. The cftype pointer returned from
__file_cft() was no longer necessary and the commit inadvertently
dropped the file type check with it allowing any file to slip through.
With the invariants broken, the d_name and parent accesses can now race
against renames and removals of arbitrary files and cause
use-after-frees.
Fix the bug by resurrecting the file type check in __file_cft(). Now
that cgroupfs is implemented through kernfs, checking the file
operations needs to go through a layer of indirection. Instead, let's
check the superblock and dentry type.
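A sketch of the restored check along those lines (variable names are
illustrative; the point is to reject anything that is not a regular file on a
cgroup superblock before touching d_name or the parent):

    cdentry = cfile.file->f_path.dentry;
    if (cdentry->d_sb->s_type != &cgroup_fs_type || !d_is_reg(cdentry)) {
        ret = -EINVAL;
        goto out_put_cfile;
    }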
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 347c4a874710 ("memcg: remove cgroup_event->cft")
Cc: stable@kernel.org # v3.14+
Reported-by: Jann Horn <jannh@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Radu Nicolae Pirea (OSS) [Wed, 7 Dec 2022 13:23:47 +0000 (15:23 +0200)]
net: dsa: sja1105: avoid out of bounds access in sja1105_init_l2_policing()
The SJA1105 family has 45 L2 policing table entries
(SJA1105_MAX_L2_POLICING_COUNT) and SJA1110 has 110
(SJA1110_MAX_L2_POLICING_COUNT). Keeping the table structure but
accounting for the difference in port count (5 in SJA1105 vs 10 in
SJA1110) does not fully explain the difference. Rather, the SJA1110 also
has L2 ingress policers for multicast traffic. If a packet is classified
as multicast, it will be processed by the policer index 99 + SRCPORT.
The sja1105_init_l2_policing() function initializes all L2 policers such
that they don't interfere with normal packet reception by default. To share
code between SJA1105 and SJA1110, the index of the multicast policer for
the port is always calculated, even though it is out of bounds for SJA1105
while in bounds for SJA1110, and a bounds check is performed.
The code fails to do the proper thing when determining what to do with the
multicast policer of port 0 on SJA1105 (ds->num_ports = 5). The "mcast"
index will be equal to 45, which is also equal to
table->ops->max_entry_count (SJA1105_MAX_L2_POLICING_COUNT). So it passes
through the check. But at the same time, SJA1105 doesn't have multicast
policers. So the code programs the SHARINDX field of an out-of-bounds
element in the L2 Policing table of the static config.
The comparison between index 45 and 45 entries should have caused the
code not to access this policer index on SJA1105, since its memory wasn't
even allocated.
With enough bad luck, the out-of-bounds write could even overwrite other
valid kernel data, but in this case, the issue was detected using KASAN.
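In other words, the bound has to be strict; something along these lines,
following the variable names used in the description above (illustrative,
not necessarily the literal patch):

    /* indices are 0..max_entry_count-1, so index 45 must be rejected
     * on SJA1105 */
    if (mcast < table->ops->max_entry_count)
        policing[mcast].sharindx = port;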
Kernel log:
sja1105 spi5.0: Probed switch chip: SJA1105Q
==================================================================
BUG: KASAN: slab-out-of-bounds in sja1105_setup+0x1cbc/0x2340
Write of size 8 at addr ffffff880bd57708 by task kworker/u8:0/8
...
Workqueue: events_unbound deferred_probe_work_func
Call trace:
...
sja1105_setup+0x1cbc/0x2340
dsa_register_switch+0x1284/0x18d0
sja1105_probe+0x748/0x840
...
Allocated by task 8:
...
sja1105_setup+0x1bcc/0x2340
dsa_register_switch+0x1284/0x18d0
sja1105_probe+0x748/0x840
...
Fixes: 38fbe91f2287 ("net: dsa: sja1105: configure the multicast policers, if present")
CC: stable@vger.kernel.org # 5.15+
Signed-off-by: Radu Nicolae Pirea (OSS) <radu-nicolae.pirea@oss.nxp.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20221207132347.38698-1-radu-nicolae.pirea@oss.nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alexandra Winter [Wed, 7 Dec 2022 10:53:04 +0000 (11:53 +0100)]
s390/qeth: fix use-after-free in hsci
KASAN found that addr was dereferenced after br2dev_event_work was freed.
==================================================================
BUG: KASAN: use-after-free in qeth_l2_br2dev_worker+0x5ba/0x6b0
Read of size 1 at addr 00000000fdcea440 by task kworker/u760:4/540
CPU: 17 PID: 540 Comm: kworker/u760:4 Tainted: G E 6.1.0-20221128.rc7.git1.5aa3bed4ce83.300.fc36.s390x+kasan #1
Hardware name: IBM 8561 T01 703 (LPAR)
Workqueue: 0.0.8000_event qeth_l2_br2dev_worker
Call Trace:
[<000000016944d4ce>] dump_stack_lvl+0xc6/0xf8
[<000000016942cd9c>] print_address_description.constprop.0+0x34/0x2a0
[<000000016942d118>] print_report+0x110/0x1f8
[<0000000167a7bd04>] kasan_report+0xfc/0x128
[<000000016938d79a>] qeth_l2_br2dev_worker+0x5ba/0x6b0
[<00000001673edd1e>] process_one_work+0x76e/0x1128
[<00000001673ee85c>] worker_thread+0x184/0x1098
[<000000016740718a>] kthread+0x26a/0x310
[<00000001672c606a>] __ret_from_fork+0x8a/0xe8
[<00000001694711da>] ret_from_fork+0xa/0x40
Allocated by task 108338:
kasan_save_stack+0x40/0x68
kasan_set_track+0x36/0x48
__kasan_kmalloc+0xa0/0xc0
qeth_l2_switchdev_event+0x25a/0x738
atomic_notifier_call_chain+0x9c/0xf8
br_switchdev_fdb_notify+0xf4/0x110
fdb_notify+0x122/0x180
fdb_add_entry.constprop.0.isra.0+0x312/0x558
br_fdb_add+0x59e/0x858
rtnl_fdb_add+0x58a/0x928
rtnetlink_rcv_msg+0x5f8/0x8d8
netlink_rcv_skb+0x1f2/0x408
netlink_unicast+0x570/0x790
netlink_sendmsg+0x752/0xbe0
sock_sendmsg+0xca/0x110
____sys_sendmsg+0x510/0x6a8
___sys_sendmsg+0x12a/0x180
__sys_sendmsg+0xe6/0x168
__do_sys_socketcall+0x3c8/0x468
do_syscall+0x22c/0x328
__do_syscall+0x94/0xf0
system_call+0x82/0xb0
Freed by task 540:
kasan_save_stack+0x40/0x68
kasan_set_track+0x36/0x48
kasan_save_free_info+0x4c/0x68
____kasan_slab_free+0x14e/0x1a8
__kasan_slab_free+0x24/0x30
__kmem_cache_free+0x168/0x338
qeth_l2_br2dev_worker+0x154/0x6b0
process_one_work+0x76e/0x1128
worker_thread+0x184/0x1098
kthread+0x26a/0x310
__ret_from_fork+0x8a/0xe8
ret_from_fork+0xa/0x40
Last potentially related work creation:
kasan_save_stack+0x40/0x68
__kasan_record_aux_stack+0xbe/0xd0
insert_work+0x56/0x2e8
__queue_work+0x4ce/0xd10
queue_work_on+0xf4/0x100
qeth_l2_switchdev_event+0x520/0x738
atomic_notifier_call_chain+0x9c/0xf8
br_switchdev_fdb_notify+0xf4/0x110
fdb_notify+0x122/0x180
fdb_add_entry.constprop.0.isra.0+0x312/0x558
br_fdb_add+0x59e/0x858
rtnl_fdb_add+0x58a/0x928
rtnetlink_rcv_msg+0x5f8/0x8d8
netlink_rcv_skb+0x1f2/0x408
netlink_unicast+0x570/0x790
netlink_sendmsg+0x752/0xbe0
sock_sendmsg+0xca/0x110
____sys_sendmsg+0x510/0x6a8
___sys_sendmsg+0x12a/0x180
__sys_sendmsg+0xe6/0x168
__do_sys_socketcall+0x3c8/0x468
do_syscall+0x22c/0x328
__do_syscall+0x94/0xf0
system_call+0x82/0xb0
Second to last potentially related work creation:
kasan_save_stack+0x40/0x68
__kasan_record_aux_stack+0xbe/0xd0
kvfree_call_rcu+0xb2/0x760
kernfs_unlink_open_file+0x348/0x430
kernfs_fop_release+0xc2/0x320
__fput+0x1ae/0x768
task_work_run+0x1bc/0x298
exit_to_user_mode_prepare+0x1a0/0x1a8
__do_syscall+0x94/0xf0
system_call+0x82/0xb0
The buggy address belongs to the object at 00000000fdcea400
which belongs to the cache kmalloc-96 of size 96
The buggy address is located 64 bytes inside of
96-byte region [00000000fdcea400, 00000000fdcea460)
The buggy address belongs to the physical page:
page:000000005a9c26e8 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xfdcea
flags: 0x3ffff00000000200(slab|node=0|zone=1|lastcpupid=0x1ffff)
raw: 3ffff00000000200 0000000000000000 0000000100000122 000000008008cc00
raw: 0000000000000000 0020004100000000 ffffffff00000001 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
00000000fdcea300: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
00000000fdcea380: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
>00000000fdcea400: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
^
00000000fdcea480: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
00000000fdcea500: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
==================================================================
Fixes: f7936b7b2663 ("s390/qeth: Update MACs of LEARNING_SYNC device")
Reported-by: Thorsten Winkler <twinkler@linux.ibm.com>
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Wenjia Zhang <wenjia@linux.ibm.com>
Reviewed-by: Thorsten Winkler <twinkler@linux.ibm.com>
Link: https://lore.kernel.org/r/20221207105304.20494-1-wintera@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Emeel Hakim [Wed, 7 Dec 2022 10:16:18 +0000 (12:16 +0200)]
macsec: add missing attribute validation for offload
Add missing attribute validation for IFLA_MACSEC_OFFLOAD
to the netlink policy.
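The missing entry amounts to one line in macsec's rtnetlink policy table; a
sketch of the kind of entry added (the exact policy used by the patch may
differ):

    /* in macsec_rtnl_policy[] */
    [IFLA_MACSEC_OFFLOAD] = { .type = NLA_U8 },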
Fixes: 791bb3fcafce ("net: macsec: add support for specifying offload upon link creation")
Signed-off-by: Emeel Hakim <ehakim@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Link: https://lore.kernel.org/r/20221207101618.989-1-ehakim@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Dan Carpenter [Wed, 7 Dec 2022 07:06:31 +0000 (10:06 +0300)]
net: mvneta: Fix an out of bounds check
In an earlier commit, I added a bounds check to prevent an out of bounds
read and a WARN(). On further discussion and consideration that check
was probably too aggressive. Instead of returning -EINVAL, a better fix
would be to just prevent the out of bounds read but continue the process.
Background: The value of "pp->rxq_def" is a number between 0-7 by default,
or even higher depending on the value of "rxq_number", which is a module
parameter. If the value is more than the number of available CPUs then
it will trigger the WARN() in cpu_max_bits_warn().
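A sketch of the "guard and continue" approach, assuming the usual
fall-back-to-CPU-0 logic used when rxq_def does not map to a usable CPU (the
exact location of the check is not shown here):

    int elected_cpu = 0;

    /* use the CPU associated with rxq_def only when it is a valid,
     * online CPU; otherwise keep CPU 0 instead of erroring out */
    if (pp->rxq_def < nr_cpu_ids && cpu_online(pp->rxq_def))
        elected_cpu = pp->rxq_def;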
Fixes: e8b4fc13900b ("net: mvneta: Prevent out of bounds read in mvneta_config_rss()")
Signed-off-by: Dan Carpenter <error27@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/Y5A7d1E5ccwHTYPf@kadam
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Zhengchao Shao [Wed, 7 Dec 2022 01:50:01 +0000 (09:50 +0800)]
net: thunderbolt: fix memory leak in tbnet_open()
When tb_ring_alloc_rx() fails in tbnet_open(), the ida allocated in
tb_xdomain_alloc_out_hopid() is not released. Add
tb_xdomain_release_out_hopid() to the error path to release it.
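A sketch of the error path in question (surrounding variable names are
illustrative, and other cleanup on that path is omitted):

    if (!ring) {
        netdev_err(dev, "failed to allocate Rx ring\n");
        /* give back the output HopID reserved earlier by
         * tb_xdomain_alloc_out_hopid() */
        tb_xdomain_release_out_hopid(xd, hopid);
        return -ENOMEM;
    }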
Fixes: 180b0689425c ("thunderbolt: Allow multiple DMA tunnels over a single XDomain connection")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Link: https://lore.kernel.org/r/20221207015001.1755826-1-shaozhengchao@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Oleksij Rempel [Mon, 5 Dec 2022 05:29:04 +0000 (06:29 +0100)]
net: dsa: microchip: add stats64 support for ksz8 series of switches
Add stats64 support for ksz8xxx series of switches.
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://lore.kernel.org/r/20221205052904.2834962-1-o.rempel@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yanteng Si [Thu, 8 Dec 2022 06:59:15 +0000 (14:59 +0800)]
docs/zh_CN: Add LoongArch booting description's translation
Translate ../loongarch/booting.rst into Chinese.
Suggested-by: Xiaotian Wu <wuxiaotian@loongson.cn>
Signed-off-by: Yanteng Si <siyanteng@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Yanteng Si [Thu, 8 Dec 2022 06:59:15 +0000 (14:59 +0800)]
docs/LoongArch: Add booting description
1. Describe the information passed from the BootLoader to the kernel.
2. Describe the meaning and values of the kernel image header field.
Suggested-by: Xiaotian Wu <wuxiaotian@loongson.cn>
Signed-off-by: Yanteng Si <siyanteng@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Huacai Chen [Thu, 8 Dec 2022 06:59:15 +0000 (14:59 +0800)]
LoongArch: mm: Fix huge page entry update for virtual machine
In a virtual machine (guest mode), the tlbwr instruction can not write the
last entry of the MTLB, so we need to make it non-present by invtlb and then
write it by tlbfill. This also simplifies the whole logic.
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Bibo Mao [Thu, 8 Dec 2022 06:59:15 +0000 (14:59 +0800)]
LoongArch: Export symbol for function smp_send_reschedule()
Function smp_send_reschedule() is a standard kernel API, which is defined
in the header file include/linux/smp.h. However, on LoongArch it is defined
as an inline function, which is confusing, and kernel modules can not use
this function.
Now we define smp_send_reschedule() as a general function, and add an
EXPORT_SYMBOL_GPL on this function, so that kernel modules can use it.
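The change boils down to moving the body out of line and exporting it; a
sketch, with the IPI helper taken from the previous inline definition (its
exact name is assumed here):

    void smp_send_reschedule(int cpu)
    {
        loongson_send_ipi_single(cpu, SMP_RESCHEDULE);
    }
    EXPORT_SYMBOL_GPL(smp_send_reschedule);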
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Jakub Kicinski [Thu, 8 Dec 2022 04:17:35 +0000 (20:17 -0800)]
Merge branch 'net-ethernet-ti-am65-cpsw-fix-set-channel-operation'
Roger Quadros says:
====================
net: ethernet: ti: am65-cpsw: Fix set channel operation
This contains a critical bug fix for the recently merged suspend/resume
support [1] that broke set channel operation. (ethtool -L eth0 tx <n>)
As there were 2 dependent patches on top of the offending commit [1]
first revert them and then apply them back after the correct fix.
[1] fd23df72f2be ("net: ethernet: ti: am65-cpsw: Add suspend/resume support")
====================
Link: https://lore.kernel.org/r/20221206094419.19478-1-rogerq@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:19 +0000 (11:44 +0200)]
net: ethernet: ti: am65-cpsw: Fix hardware switch mode on suspend/resume
On low power during system suspend the ALE table context is lost.
Save the ALE context before suspend and restore it after resume.
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:18 +0000 (11:44 +0200)]
net: ethernet: ti: am65-cpsw: retain PORT_VLAN_REG after suspend/resume
During suspend/resume the context of PORT_VLAN_REG is lost, so
save it during suspend and restore it during resume for the
host port and the slave ports.
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:17 +0000 (11:44 +0200)]
net: ethernet: ti: am65-cpsw: Add suspend/resume support
Add PM handlers for System suspend/resume.
As the DMA driver doesn't yet support suspend/resume, we free up
the DMA channels at suspend and acquire and initialize them
at resume.
In this revised approach we do not free the TX/RX IRQs in
am65_cpsw_nuss_common_stop() as that causes problems.
We now free them only on .suspend(), as we need to release
the DMA channels (DMA loses context) and re-acquiring
them on .resume() may not necessarily give us the same
IRQs.
To make this easier:
- introduce am65_cpsw_nuss_remove_rx_chns() which is
similar to am65_cpsw_nuss_remove_tx_chns(). These will
be invoked in pm.suspend() to release the DMA channels
and free up the IRQs.
- move napi_add() and request_irq() calls to
am65_cpsw_nuss_init_rx/tx_chns() so we can invoke them
in pm.resume() to acquire the DMA channels and IRQs.
As CPTS loses context during suspend/resume, invoke the
necessary CPTS suspend/resume helpers.
The ALE_CLEAR command is issued in cpsw_ale_start(), so there is no need
to issue it before the call to cpsw_ale_start().
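A rough sketch of the resulting suspend/resume hooks (helper names follow the
description above; netdev detach/attach and CPTS handling are omitted, and the
real code differs in detail):

    static int am65_cpsw_nuss_suspend(struct device *dev)
    {
        struct am65_cpsw_common *common = dev_get_drvdata(dev);

        /* release the DMA channels and free their IRQs */
        am65_cpsw_nuss_remove_rx_chns(common);
        am65_cpsw_nuss_remove_tx_chns(common);
        return 0;
    }

    static int am65_cpsw_nuss_resume(struct device *dev)
    {
        struct am65_cpsw_common *common = dev_get_drvdata(dev);
        int ret;

        /* re-acquire the DMA channels; napi_add()/request_irq() now
         * happen in the init_rx/tx_chns helpers */
        ret = am65_cpsw_nuss_init_tx_chns(common);
        if (ret)
            return ret;
        return am65_cpsw_nuss_init_rx_chns(common);
    }

    static DEFINE_SIMPLE_DEV_PM_OPS(am65_cpsw_nuss_dev_pm_ops,
                                    am65_cpsw_nuss_suspend,
                                    am65_cpsw_nuss_resume);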
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:16 +0000 (11:44 +0200)]
Revert "net: ethernet: ti: am65-cpsw: Add suspend/resume support"
This reverts commit fd23df72f2be317d38d9fde0a8996b8e7454fd2a.
This commit broke set channel operation. Revert this and
implement it with a different approach in a separate patch.
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:15 +0000 (11:44 +0200)]
Revert "net: ethernet: ti: am65-cpsw: retain PORT_VLAN_REG after suspend/resume"
This reverts commit 643cf0e3ab5ccee37b3c53c018bd476c45c4b70e.
This is to make it easier to revert the offending commit fd23df72f2be ("net: ethernet: ti: am65-cpsw: Add suspend/resume support")
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Roger Quadros [Tue, 6 Dec 2022 09:44:14 +0000 (11:44 +0200)]
Revert "net: ethernet: ti: am65-cpsw: Fix hardware switch mode on suspend/resume"
This reverts commit 1af3cb3702d02167926a2bd18580cecb2d64fd94.
This is to make it easier to revert the offending commit fd23df72f2be ("net: ethernet: ti: am65-cpsw: Add suspend/resume support")
Signed-off-by: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 6 Dec 2022 10:13:51 +0000 (10:13 +0000)]
ipv6: avoid use-after-free in ip6_fragment()
The blamed commit claimed rcu_read_lock() was held by ip6_fragment() callers.
That does not seem to always be true, at least for the UDP stack.
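One way to make the function self-sufficient, sketched here as an assumption
rather than as the exact patch, is to take the RCU read lock inside
ip6_fragment() around the dst-derived accesses:

    rcu_read_lock();
    idev = ip6_dst_idev(skb_dst(skb));
    IP6_INC_STATS(net, idev, IPSTATS_MIB_FRAGOKS);
    rcu_read_unlock();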
syzbot reported:
BUG: KASAN: use-after-free in ip6_dst_idev include/net/ip6_fib.h:245 [inline]
BUG: KASAN: use-after-free in ip6_fragment+0x2724/0x2770 net/ipv6/ip6_output.c:951
Read of size 8 at addr ffff88801d403e80 by task syz-executor.3/7618
CPU: 1 PID: 7618 Comm: syz-executor.3 Not tainted 6.1.0-rc6-syzkaller-00012-g4312098baf37 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:284 [inline]
print_report+0x15e/0x45d mm/kasan/report.c:395
kasan_report+0xbf/0x1f0 mm/kasan/report.c:495
ip6_dst_idev include/net/ip6_fib.h:245 [inline]
ip6_fragment+0x2724/0x2770 net/ipv6/ip6_output.c:951
__ip6_finish_output net/ipv6/ip6_output.c:193 [inline]
ip6_finish_output+0x9a3/0x1170 net/ipv6/ip6_output.c:206
NF_HOOK_COND include/linux/netfilter.h:291 [inline]
ip6_output+0x1f1/0x540 net/ipv6/ip6_output.c:227
dst_output include/net/dst.h:445 [inline]
ip6_local_out+0xb3/0x1a0 net/ipv6/output_core.c:161
ip6_send_skb+0xbb/0x340 net/ipv6/ip6_output.c:1966
udp_v6_send_skb+0x82a/0x18a0 net/ipv6/udp.c:1286
udp_v6_push_pending_frames+0x140/0x200 net/ipv6/udp.c:1313
udpv6_sendmsg+0x18da/0x2c80 net/ipv6/udp.c:1606
inet6_sendmsg+0x9d/0xe0 net/ipv6/af_inet6.c:665
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg+0xd3/0x120 net/socket.c:734
sock_write_iter+0x295/0x3d0 net/socket.c:1108
call_write_iter include/linux/fs.h:2191 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x9ed/0xdd0 fs/read_write.c:584
ksys_write+0x1ec/0x250 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fde3588c0d9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fde365b6168 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007fde359ac050 RCX: 00007fde3588c0d9
RDX: 000000000000ffdc RSI: 00000000200000c0 RDI: 000000000000000a
RBP: 00007fde358e7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fde35acfb1f R14: 00007fde365b6300 R15: 0000000000022000
</TASK>
Allocated by task 7618:
kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
kasan_set_track+0x25/0x30 mm/kasan/common.c:52
__kasan_slab_alloc+0x82/0x90 mm/kasan/common.c:325
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook mm/slab.h:737 [inline]
slab_alloc_node mm/slub.c:3398 [inline]
slab_alloc mm/slub.c:3406 [inline]
__kmem_cache_alloc_lru mm/slub.c:3413 [inline]
kmem_cache_alloc+0x2b4/0x3d0 mm/slub.c:3422
dst_alloc+0x14a/0x1f0 net/core/dst.c:92
ip6_dst_alloc+0x32/0xa0 net/ipv6/route.c:344
ip6_rt_pcpu_alloc net/ipv6/route.c:1369 [inline]
rt6_make_pcpu_route net/ipv6/route.c:1417 [inline]
ip6_pol_route+0x901/0x1190 net/ipv6/route.c:2254
pol_lookup_func include/net/ip6_fib.h:582 [inline]
fib6_rule_lookup+0x52e/0x6f0 net/ipv6/fib6_rules.c:121
ip6_route_output_flags_noref+0x2e6/0x380 net/ipv6/route.c:2625
ip6_route_output_flags+0x76/0x320 net/ipv6/route.c:2638
ip6_route_output include/net/ip6_route.h:98 [inline]
ip6_dst_lookup_tail+0x5ab/0x1620 net/ipv6/ip6_output.c:1092
ip6_dst_lookup_flow+0x90/0x1d0 net/ipv6/ip6_output.c:1222
ip6_sk_dst_lookup_flow+0x553/0x980 net/ipv6/ip6_output.c:1260
udpv6_sendmsg+0x151d/0x2c80 net/ipv6/udp.c:1554
inet6_sendmsg+0x9d/0xe0 net/ipv6/af_inet6.c:665
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg+0xd3/0x120 net/socket.c:734
__sys_sendto+0x23a/0x340 net/socket.c:2117
__do_sys_sendto net/socket.c:2129 [inline]
__se_sys_sendto net/socket.c:2125 [inline]
__x64_sys_sendto+0xe1/0x1b0 net/socket.c:2125
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Freed by task 7599:
kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
kasan_set_track+0x25/0x30 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x40 mm/kasan/generic.c:511
____kasan_slab_free mm/kasan/common.c:236 [inline]
____kasan_slab_free+0x160/0x1c0 mm/kasan/common.c:200
kasan_slab_free include/linux/kasan.h:177 [inline]
slab_free_hook mm/slub.c:1724 [inline]
slab_free_freelist_hook+0x8b/0x1c0 mm/slub.c:1750
slab_free mm/slub.c:3661 [inline]
kmem_cache_free+0xee/0x5c0 mm/slub.c:3683
dst_destroy+0x2ea/0x400 net/core/dst.c:127
rcu_do_batch kernel/rcu/tree.c:2250 [inline]
rcu_core+0x81f/0x1980 kernel/rcu/tree.c:2510
__do_softirq+0x1fb/0xadc kernel/softirq.c:571
Last potentially related work creation:
kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
__kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:481
call_rcu+0x9d/0x820 kernel/rcu/tree.c:2798
dst_release net/core/dst.c:177 [inline]
dst_release+0x7d/0xe0 net/core/dst.c:167
refdst_drop include/net/dst.h:256 [inline]
skb_dst_drop include/net/dst.h:268 [inline]
skb_release_head_state+0x250/0x2a0 net/core/skbuff.c:838
skb_release_all net/core/skbuff.c:852 [inline]
__kfree_skb net/core/skbuff.c:868 [inline]
kfree_skb_reason+0x151/0x4b0 net/core/skbuff.c:891
kfree_skb_list_reason+0x4b/0x70 net/core/skbuff.c:901
kfree_skb_list include/linux/skbuff.h:1227 [inline]
ip6_fragment+0x2026/0x2770 net/ipv6/ip6_output.c:949
__ip6_finish_output net/ipv6/ip6_output.c:193 [inline]
ip6_finish_output+0x9a3/0x1170 net/ipv6/ip6_output.c:206
NF_HOOK_COND include/linux/netfilter.h:291 [inline]
ip6_output+0x1f1/0x540 net/ipv6/ip6_output.c:227
dst_output include/net/dst.h:445 [inline]
ip6_local_out+0xb3/0x1a0 net/ipv6/output_core.c:161
ip6_send_skb+0xbb/0x340 net/ipv6/ip6_output.c:1966
udp_v6_send_skb+0x82a/0x18a0 net/ipv6/udp.c:1286
udp_v6_push_pending_frames+0x140/0x200 net/ipv6/udp.c:1313
udpv6_sendmsg+0x18da/0x2c80 net/ipv6/udp.c:1606
inet6_sendmsg+0x9d/0xe0 net/ipv6/af_inet6.c:665
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg+0xd3/0x120 net/socket.c:734
sock_write_iter+0x295/0x3d0 net/socket.c:1108
call_write_iter include/linux/fs.h:2191 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x9ed/0xdd0 fs/read_write.c:584
ksys_write+0x1ec/0x250 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Second to last potentially related work creation:
kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
__kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:481
call_rcu+0x9d/0x820 kernel/rcu/tree.c:2798
dst_release net/core/dst.c:177 [inline]
dst_release+0x7d/0xe0 net/core/dst.c:167
refdst_drop include/net/dst.h:256 [inline]
skb_dst_drop include/net/dst.h:268 [inline]
__dev_queue_xmit+0x1b9d/0x3ba0 net/core/dev.c:4211
dev_queue_xmit include/linux/netdevice.h:3008 [inline]
neigh_resolve_output net/core/neighbour.c:1552 [inline]
neigh_resolve_output+0x51b/0x840 net/core/neighbour.c:1532
neigh_output include/net/neighbour.h:546 [inline]
ip6_finish_output2+0x56c/0x1530 net/ipv6/ip6_output.c:134
__ip6_finish_output net/ipv6/ip6_output.c:195 [inline]
ip6_finish_output+0x694/0x1170 net/ipv6/ip6_output.c:206
NF_HOOK_COND include/linux/netfilter.h:291 [inline]
ip6_output+0x1f1/0x540 net/ipv6/ip6_output.c:227
dst_output include/net/dst.h:445 [inline]
NF_HOOK include/linux/netfilter.h:302 [inline]
NF_HOOK include/linux/netfilter.h:296 [inline]
mld_sendpack+0xa09/0xe70 net/ipv6/mcast.c:1820
mld_send_cr net/ipv6/mcast.c:2121 [inline]
mld_ifc_work+0x720/0xdc0 net/ipv6/mcast.c:2653
process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
worker_thread+0x669/0x1090 kernel/workqueue.c:2436
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
The buggy address belongs to the object at ffff88801d403dc0
which belongs to the cache ip6_dst_cache of size 240
The buggy address is located 192 bytes inside of
240-byte region [ffff88801d403dc0, ffff88801d403eb0)
The buggy address belongs to the physical page:
page:ffffea00007500c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1d403
memcg:ffff888022f49c81
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 ffffea0001ef6580 dead000000000002 ffff88814addf640
raw: 0000000000000000 00000000800c000c 00000001ffffffff ffff888022f49c81
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112a20(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_HARDWALL), pid 3719, tgid 3719 (kworker/0:6), ts 136223432244, free_ts 136222971441
prep_new_page mm/page_alloc.c:2539 [inline]
get_page_from_freelist+0x10b5/0x2d50 mm/page_alloc.c:4288
__alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5555
alloc_pages+0x1aa/0x270 mm/mempolicy.c:2285
alloc_slab_page mm/slub.c:1794 [inline]
allocate_slab+0x213/0x300 mm/slub.c:1939
new_slab mm/slub.c:1992 [inline]
___slab_alloc+0xa91/0x1400 mm/slub.c:3180
__slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3279
slab_alloc_node mm/slub.c:3364 [inline]
slab_alloc mm/slub.c:3406 [inline]
__kmem_cache_alloc_lru mm/slub.c:3413 [inline]
kmem_cache_alloc+0x31a/0x3d0 mm/slub.c:3422
dst_alloc+0x14a/0x1f0 net/core/dst.c:92
ip6_dst_alloc+0x32/0xa0 net/ipv6/route.c:344
icmp6_dst_alloc+0x71/0x680 net/ipv6/route.c:3261
mld_sendpack+0x5de/0xe70 net/ipv6/mcast.c:1809
mld_send_cr net/ipv6/mcast.c:2121 [inline]
mld_ifc_work+0x720/0xdc0 net/ipv6/mcast.c:2653
process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
worker_thread+0x669/0x1090 kernel/workqueue.c:2436
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1459 [inline]
free_pcp_prepare+0x65c/0xd90 mm/page_alloc.c:1509
free_unref_page_prepare mm/page_alloc.c:3387 [inline]
free_unref_page+0x1d/0x4d0 mm/page_alloc.c:3483
__unfreeze_partials+0x17c/0x1a0 mm/slub.c:2586
qlink_free mm/kasan/quarantine.c:168 [inline]
qlist_free_all+0x6a/0x170 mm/kasan/quarantine.c:187
kasan_quarantine_reduce+0x184/0x210 mm/kasan/quarantine.c:294
__kasan_slab_alloc+0x66/0x90 mm/kasan/common.c:302
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook mm/slab.h:737 [inline]
slab_alloc_node mm/slub.c:3398 [inline]
kmem_cache_alloc_node+0x304/0x410 mm/slub.c:3443
__alloc_skb+0x214/0x300 net/core/skbuff.c:497
alloc_skb include/linux/skbuff.h:1267 [inline]
netlink_alloc_large_skb net/netlink/af_netlink.c:1191 [inline]
netlink_sendmsg+0x9a6/0xe10 net/netlink/af_netlink.c:1896
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg+0xd3/0x120 net/socket.c:734
__sys_sendto+0x23a/0x340 net/socket.c:2117
__do_sys_sendto net/socket.c:2129 [inline]
__se_sys_sendto net/socket.c:2125 [inline]
__x64_sys_sendto+0xe1/0x1b0 net/socket.c:2125
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Fixes: 1758fd4688eb ("ipv6: remove unnecessary dst_hold() in ip6_fragment()")
Reported-by: syzbot+8c0ac31aa9681abb9e2d@syzkaller.appspotmail.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Wei Wang <weiwan@google.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/r/20221206101351.2037285-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>