platform/kernel/linux-rpi.git
2 years agoigc: Correct the launchtime offset
Muhammad Husaini Zulkifli [Wed, 21 Sep 2022 02:49:40 +0000 (10:49 +0800)]
igc: Correct the launchtime offset

The launchtime offset should be corrected according to section 7.5.2.6,
Transmit Scheduling Latency, of the Intel Ethernet I225/I226 Software
User Manual.

Software can compensate for the latency between transmission scheduling
and the time the packet is actually transmitted to the network by setting
the GTxOffset register. Without setting this register, there may be a
significant delay between packet scheduling and the point at which the
packet reaches the network.
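
As a rough sketch of what the fix amounts to (the register macro, per-speed
value macros and the write helper below are illustrative assumptions, not
the exact igc definitions; the real values come from the user manual):

/* Illustrative only: macro names and the write helper are assumptions */
static void igc_set_launchtime_offset(struct igc_adapter *adapter, int speed)
{
        u32 offset;

        switch (speed) {
        case SPEED_10:
                offset = IGC_TXOFFSET_SPEED_10;         /* assumed macro */
                break;
        case SPEED_100:
                offset = IGC_TXOFFSET_SPEED_100;        /* assumed macro */
                break;
        case SPEED_1000:
                offset = IGC_TXOFFSET_SPEED_1000;       /* assumed macro */
                break;
        case SPEED_2500:
                offset = IGC_TXOFFSET_SPEED_2500;       /* assumed macro */
                break;
        default:
                offset = 0;
                break;
        }

        igc_write_reg(adapter, IGC_GTXOFFSET, offset);  /* assumed helper */
}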

This patch helps to reduce the latency for each of the link speeds.

Before:

10Mbps   : 11000 - 13800 nanoseconds
100Mbps  : 1300 - 1700 nanoseconds
1000Mbps : 190 - 600 nanoseconds
2500Mbps : 1400 - 1700 nanoseconds

After:

10Mbps   : less than 750 nanoseconds
100Mbps  : less than 192 nanoseconds
1000Mbps : less than 128 nanoseconds
2500Mbps : less than 128 nanoseconds

Test Setup:

Talker  : Use l2_tai.c to write the launchtime into the packet payload.
Listener: Use timedump.c to compute the delta between packet arrival and
the launchtime carried in the packet payload.

Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Acked-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoe1000: Remove unnecessary use of kmap_atomic()
Anirudh Venkataramanan [Mon, 19 Sep 2022 18:09:48 +0000 (11:09 -0700)]
e1000: Remove unnecessary use of kmap_atomic()

buffer_info->rxbuf.page accessed in e1000_clean_jumbo_rx_irq() is
allocated using GFP_ATOMIC. Pages allocated with GFP_ATOMIC can't come from
highmem and so there's no need to kmap() them. Just use page_address().
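
For illustration, the transformation is essentially the following simplified
sketch (skb and length stand in for the driver's local variables; this is not
the literal e1000 hunk):

/* Before: map the page even though it can never be in highmem */
u8 *vaddr = kmap_atomic(buffer_info->rxbuf.page);
memcpy(skb_tail_pointer(skb), vaddr, length);
kunmap_atomic(vaddr);

/* After: GFP_ATOMIC pages are always lowmem, so a plain lookup suffices */
memcpy(skb_tail_pointer(skb), page_address(buffer_info->rxbuf.page), length);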

I don't have access to a 32-bit system so did some limited testing on
qemu (qemu-system-i386 -m 4096 -smp 4 -device e1000e) with a 32-bit
Debian 11.04 image.

Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Suggested-by: Ira Weiny <ira.weiny@intel.com>
Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoe1000e: Remove unnecessary use of kmap_atomic()
Anirudh Venkataramanan [Fri, 30 Sep 2022 22:33:24 +0000 (15:33 -0700)]
e1000e: Remove unnecessary use of kmap_atomic()

alloc_rx_buf() allocates ps_page->page and buffer_info->page using either
GFP_ATOMIC or GFP_KERNEL. Memory allocated with GFP_KERNEL/GFP_ATOMIC can't
come from highmem and so there's no need to kmap() them. Just use
page_address().

I don't have access to a 32-bit system so did some limited testing on qemu
(qemu-system-i386 -m 4096 -smp 4 -device e1000e) with a 32-bit Debian 11.04
image.

Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Suggested-by: Ira Weiny <ira.weiny@intel.com>
Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoe1000e: Add e1000e trace module
Sasha Neftin [Wed, 21 Sep 2022 07:59:37 +0000 (10:59 +0300)]
e1000e: Add e1000e trace module

Add tracepoints to the driver via a new file, e1000e_trace.h, with new
trace calls added at interesting places in the driver. Add some tracing
for the s0ix flows to help debug shared resources with the CSME
firmware. The idea here is that tracepoints have such a low performance cost
when disabled that we can leave them in the upstream driver.

Performance is not affected, and this can be very useful for debugging and
for adding new trace events to other paths in the future.

Usage:
echo "e1000e_trace:*" > /sys/kernel/debug/tracing/set_event
echo 1 > /sys/kernel/debug/tracing/events/e1000e_trace/enable
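
For reference, a driver-local trace header of this kind follows the usual
TRACE_EVENT() pattern; the sketch below is illustrative only (the event name
and field are assumptions, not necessarily what e1000e_trace.h defines):

#undef TRACE_SYSTEM
#define TRACE_SYSTEM e1000e_trace

#if !defined(_E1000E_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
#define _E1000E_TRACE_H_

#include <linux/tracepoint.h>

TRACE_EVENT(e1000e_trace_mac_register,
        TP_PROTO(u32 reg),
        TP_ARGS(reg),
        TP_STRUCT__entry(__field(u32, reg)),
        TP_fast_assign(__entry->reg = reg;),
        TP_printk("mac register access: 0x%08x", __entry->reg)
);

#endif /* _E1000E_TRACE_H_ */

/* Trace headers are included twice; this part must stay outside the guard */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE e1000e_trace
#include <trace/define_trace.h>

The driver then calls the generated trace_e1000e_trace_mac_register() at the
interesting code paths.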

Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoe1000e: Add support for the next LOM generation
Sasha Neftin [Thu, 29 Sep 2022 08:08:59 +0000 (11:08 +0300)]
e1000e: Add support for the next LOM generation

Add device IDs for the next LOM generations that will be available on
upcoming Intel Client platforms.
This patch provides the initial support for these devices.

Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoe1000e: Separate MTP board type from ADP
Sasha Neftin [Tue, 26 Jul 2022 14:36:25 +0000 (17:36 +0300)]
e1000e: Separate MTP board type from ADP

We have the same LAN controller on different PCHs. Separate the MTP board
type from ADP, which will allow specific fixes to be applied for MTP
platforms.

Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2 years agoMerge branch 'renesas-eswitch'
David S. Miller [Wed, 2 Nov 2022 12:38:53 +0000 (12:38 +0000)]
Merge branch 'renesas-eswitch'

Yoshihiro Shimoda says:

====================
net: ethernet: renesas: Add support for "Ethernet Switch"

This patch series is based on next-20221027.

Add initial support for the Renesas "Ethernet Switch" device of R-Car S4-8.
The hardware has the forwarding features of an Ethernet switch, but for
now the driver operates it as a set of Ethernet controllers, so no
forwarding offload features are supported and neither the switchdev header
files nor the DSA framework are used.

Note that this driver requires some special settings on the marvell10g
side, especially the host MAC type and host speed, and further
investigation is needed to modify the marvell10g driver for upstream.
Once those special settings are applied, this rswitch driver works
correctly without any changes to the rswitch driver itself, so I believe
the rswitch driver can go upstream.

Changes from v6:
https://lore.kernel.org/all/20221028065458.2417293-1-yoshihiro.shimoda.uh@renesas.com/
 - Add Reviewed-by tag in the patch [1/3].
 - Fix ordering of initialization because NFS root can start mounting
   the filesystem before register_netdev() even returns

Changes from v5:
https://lore.kernel.org/all/20221027134034.2343230-1-yoshihiro.shimoda.uh@renesas.com/
 - Add maxItems for the ethernet-port/port/reg property.

Changes from v4:
https://lore.kernel.org/all/20221019083518.933070-1-yoshihiro.shimoda.uh@renesas.com/
 - Rebased on next-20221027.
 - Drop some unneeded properties on the dt-bindings doc.
 - Change the subject and commit descriptions on the patch [2/3].
 - Use phylink instead of phylib.
 - Modify struct rswitch_*_desc to remove similar functions ([gs]et_dptr).

====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: ethernet: renesas: rswitch: Add R-Car Gen4 gPTP support
Yoshihiro Shimoda [Mon, 31 Oct 2022 12:32:42 +0000 (21:32 +0900)]
net: ethernet: renesas: rswitch: Add R-Car Gen4 gPTP support

Add R-Car Gen4 gPTP support into the rswitch driver.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: ethernet: renesas: Add support for "Ethernet Switch"
Yoshihiro Shimoda [Mon, 31 Oct 2022 12:32:41 +0000 (21:32 +0900)]
net: ethernet: renesas: Add support for "Ethernet Switch"

Add initial support for the Renesas "Ethernet Switch" device of R-Car S4-8.
The hardware has the forwarding features of an Ethernet switch, but for
now the driver operates it as a set of Ethernet controllers, so no
forwarding offload features are supported and neither the switchdev header
files nor the DSA framework are used.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodt-bindings: net: renesas: Document Renesas Ethernet Switch
Yoshihiro Shimoda [Mon, 31 Oct 2022 12:32:40 +0000 (21:32 +0900)]
dt-bindings: net: renesas: Document Renesas Ethernet Switch

Document the Renesas Ethernet Switch for R-Car S4-8 (r8a779f0).

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'txgbe'
David S. Miller [Wed, 2 Nov 2022 12:31:24 +0000 (12:31 +0000)]
Merge branch 'txgbe'

Mengyuan Lou says:

====================
net: WangXun txgbe/ngbe ethernet driver

This patch series adds support for WangXun NICs, initializing the
interface from software down to the firmware.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: ngbe: Initialize sw info and register netdev
Mengyuan Lou [Mon, 31 Oct 2022 07:07:57 +0000 (15:07 +0800)]
net: ngbe: Initialize sw info and register netdev

Initialize ngbe mac/phy type.
Check whether the firmware is initialized.
Initialize ngbe hw and register netdev.

Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: txgbe: Add operations to interact with firmware
Jiawen Wu [Mon, 31 Oct 2022 07:07:56 +0000 (15:07 +0800)]
net: txgbe: Add operations to interact with firmware

Add firmware interaction to get EEPROM information.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: libwx: Implement interaction with firmware
Jiawen Wu [Mon, 31 Oct 2022 07:07:55 +0000 (15:07 +0800)]
net: libwx: Implement interaction with firmware

Add mailbox commands to interact with firmware.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoveth: Avoid drop packets when xdp_redirect performs
Heng Qi [Mon, 31 Oct 2022 06:19:22 +0000 (14:19 +0800)]
veth: Avoid drop packets when xdp_redirect performs

In the current processing logic, when xdp_redirect occurs, the xdp frame
is transmitted based on NAPI.

If NAPI of the peer veth is not ready, the veth will drop the packets.
This doesn't meet our expectations.

In this context, we enable NAPI of the peer veth automatically when the
veth loads an XDP program. Then, when the veth unloads its XDP program,
we need to correctly judge whether to disable NAPI of the peer veth,
because the peer veth may itself have loaded XDP, or the user may have
enabled GRO.

Signed-off-by: Heng Qi <henqqi@linux.alibaba.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonfc: Add KCOV annotations
Dmitry Vyukov [Sun, 30 Oct 2022 15:03:37 +0000 (16:03 +0100)]
nfc: Add KCOV annotations

Add remote KCOV annotations for NFC processing that is done
in background threads. This enables efficient coverage-guided
fuzzing of the NFC subsystem.

The intention is to add annotations to background threads that
process skbs that were allocated in syscall context
(and thus have a KCOV handle associated with the current fuzz test).
This includes nci_recv_frame(), which is called by the virtual NCI
driver in syscall context.
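
A minimal sketch of such an annotation in a background worker (the worker
itself is hypothetical; kcov_remote_start_common(), kcov_remote_stop() and
skb_get_kcov_handle() are the existing kernel APIs):

#include <linux/kcov.h>
#include <linux/skbuff.h>

static void nfc_example_rx_work(struct sk_buff *skb)   /* hypothetical worker */
{
        /* Attribute coverage collected here to the fuzz test that
         * allocated the skb in syscall context.
         */
        kcov_remote_start_common(skb_get_kcov_handle(skb));
        /* frame processing would happen here */
        consume_skb(skb);
        kcov_remote_stop();
}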

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Bongsu Jeon <bongsu.jeon@samsung.com>
Cc: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agogve: Reduce alloc and copy costs in the GQ rx path
Shailend Chand [Sat, 29 Oct 2022 16:53:22 +0000 (09:53 -0700)]
gve: Reduce alloc and copy costs in the GQ rx path

Previously, even if just one of the many fragments of a 9k packet
required a copy, we'd copy the whole packet into a freshly-allocated
9k-sized linear SKB, and this led to performance issues.

By having a pool of pages to copy into, each fragment can be
independently handled, leading to a reduced incidence of
allocation and copy.

Signed-off-by: Shailend Chand <shailend@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: wwan: iosm: add rpc interface for xmm modems
Shane Parslow [Sat, 29 Oct 2022 09:03:56 +0000 (02:03 -0700)]
net: wwan: iosm: add rpc interface for xmm modems

Add a new iosm wwan port that connects to the modem rpc interface. This
interface provides a configuration channel, and in the case of the 7360, is
the only way to configure the modem (as it does not support mbim).

The new interface is compatible with existing software, such as
open_xdatachannel.py from the xmm7360-pci project [1].

[1] https://github.com/xmm7360/xmm7360-pci

Signed-off-by: Shane Parslow <shaneparslow808@gmail.com>
Reviewed-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: wwan: t7xx: Add port for modem logging
M Chetan Kumar [Fri, 28 Oct 2022 15:35:34 +0000 (21:05 +0530)]
net: wwan: t7xx: Add port for modem logging

The Modem Logging (MDL) port provides an interface to collect modem
logs for debugging purposes. MDL is supported by the relay interface,
and the mtk_t7xx port infrastructure. MDL allows user-space apps to
control logging via mbim command and to collect logs via the relay
interface, while port infrastructure facilitates communication between
the driver and the modem.

Signed-off-by: Moises Veleta <moises.veleta@linux.intel.com>
Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
Signed-off-by: Devegowda Chandrashekar <chandrashekar.devegowda@intel.com>
Acked-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: wwan: t7xx: use union to group port type specific data
M Chetan Kumar [Fri, 28 Oct 2022 15:34:50 +0000 (21:04 +0530)]
net: wwan: t7xx: use union to group port type specific data

Use union inside t7xx_port to group port type specific data members.
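
The pattern looks roughly like this (member names are illustrative
assumptions, not the actual t7xx layout):

struct t7xx_port {
        /* fields common to all port types ... */
        union {
                struct {
                        struct wwan_port *wwan_port;
                } wwan;
                struct {
                        struct rchan *relaych;
                } log;
        };
};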

Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: refine tcp_prune_ofo_queue() logic
Eric Dumazet [Tue, 1 Nov 2022 03:52:34 +0000 (03:52 +0000)]
tcp: refine tcp_prune_ofo_queue() logic

After commits 36a6503fedda ("tcp: refine tcp_prune_ofo_queue()
to not drop all packets") and 72cd43ba64fc1
("tcp: free batches of packets in tcp_prune_ofo_queue()"),
tcp_prune_ofo_queue() drops a fraction of the ooo queue
to make room for the incoming packet.

However, it makes no sense to drop packets that come
before the incoming packet in sequence space.

In order to recover from packet losses faster,
it makes more sense to only drop ooo packets
which come after the incoming packet.
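
In other words, pruning should skip ooo skbs that precede the incoming skb
in sequence space. A simplified sketch of that predicate (not the actual
patch), using the kernel's existing sequence-number helpers:

#include <net/tcp.h>

/* Illustrative only: true if the ooo skb lies after the incoming skb
 * in sequence space and is therefore a candidate for pruning.
 */
static bool ooo_skb_after_incoming(struct sk_buff *ooo, struct sk_buff *in_skb)
{
        return after(TCP_SKB_CB(ooo)->end_seq, TCP_SKB_CB(in_skb)->seq);
}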

Tested:
packetdrill test:
   0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
   +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
   +0 setsockopt(3, SOL_SOCKET, SO_RCVBUF, [3800], 4) = 0
   +0 bind(3, ..., ...) = 0
   +0 listen(3, 1) = 0

   +0 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 7>
   +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 0>
  +.1 < . 1:1(0) ack 1 win 1024
   +0 accept(3, ..., ...) = 4

 +.01 < . 200:300(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 200:300>

 +.01 < . 400:500(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 400:500 200:300>

 +.01 < . 600:700(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 600:700 400:500 200:300>

 +.01 < . 800:900(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 800:900 600:700 400:500 200:300>

 +.01 < . 1000:1100(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 1000:1100 800:900 600:700 400:500>

 +.01 < . 1200:1300(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1 <nop,nop, sack 1200:1300 1000:1100 800:900 600:700>

// this packet is dropped because we have no room left.
 +.01 < . 1400:1500(100) ack 1 win 1024

 +.01 < . 1:200(199) ack 1 win 1024
// Make sure kernel did not drop 200:300 sequence
   +0 > . 1:1(0) ack 300 <nop,nop, sack 1200:1300 1000:1100 800:900 600:700>
// Make room, since our RCVBUF is very small
   +0 read(4, ..., 299) = 299

 +.01 < . 300:400(100) ack 1 win 1024
   +0 > . 1:1(0) ack 500 <nop,nop, sack 1200:1300 1000:1100 800:900 600:700>

 +.01 < . 500:600(100) ack 1 win 1024
   +0 > . 1:1(0) ack 700 <nop,nop, sack 1200:1300 1000:1100 800:900>

   +0 read(4, ..., 400) = 400

 +.01 < . 700:800(100) ack 1 win 1024
   +0 > . 1:1(0) ack 900 <nop,nop, sack 1200:1300 1000:1100>

 +.01 < . 900:1000(100) ack 1 win 1024
   +0 > . 1:1(0) ack 1100 <nop,nop, sack 1200:1300>

 +.01 < . 1100:1200(100) ack 1 win 1024
// This checks that 1200:1300 has not been removed from ooo queue
   +0 > . 1:1(0) ack 1300

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://lore.kernel.org/r/20221101035234.3910189-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: core: inet[46]_pton strlen len types
Dr. David Alan Gilbert [Sat, 29 Oct 2022 01:46:04 +0000 (02:46 +0100)]
net: core: inet[46]_pton strlen len types

inet[46]_pton check the input length against
a sane length limit (INET[6]_ADDRSTRLEN), but
the strlen value gets truncated because it is stored in an int,
so there is a theoretical potential for a >4G string to pass
the limit test.
Use size_t, since that is what strlen() actually returns.
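
The pattern being fixed looks roughly like this (simplified sketch):

/* Before: the size_t returned by strlen() is truncated to an int */
int srclen = strlen(src);

/* After: keep the full width that strlen() actually returns */
size_t srclen = strlen(src);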

I've had a hunt for callers that could hit this, but
I've not managed to find anything that doesn't get checked with
some other limit first; but it's possible that I've missed
something in the depth of the storage target paths.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Link: https://lore.kernel.org/r/20221029014604.114024-1-linux@treblig.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'inet-add-drop-monitor-support'
Jakub Kicinski [Tue, 1 Nov 2022 03:14:30 +0000 (20:14 -0700)]
Merge branch 'inet-add-drop-monitor-support'

Eric Dumazet says:

====================
inet: add drop monitor support

I recently tried to analyse flakes in ip_defrag selftest.
This failed miserably.

IPv4 and IPv6 reassembly units are causing false kfree_skb()
notifications. It is time to deal with this issue.

The first two patches change core networking to better
deal with eventual skb frag_list chains, with respect
to kfree_skb/consume_skb status.

The last three patches add three new drop reasons,
and make sure skbs that have been reassembled into
a large datagram are no longer viewed as dropped.

After this, understanding why the ip_defrag selftest is flaky
is possible using standard drop monitoring tools.
====================

Link: https://lore.kernel.org/r/20221029154520.2747444-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dropreason: add SKB_DROP_REASON_FRAG_TOO_FAR
Eric Dumazet [Sat, 29 Oct 2022 15:45:20 +0000 (15:45 +0000)]
net: dropreason: add SKB_DROP_REASON_FRAG_TOO_FAR

The IPv4 reassembly unit can decide to drop frags based on
the /proc/sys/net/ipv4/ipfrag_max_dist sysctl.

Add a specific drop reason to track this particular
and unusual case.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dropreason: add SKB_DROP_REASON_FRAG_REASM_TIMEOUT
Eric Dumazet [Sat, 29 Oct 2022 15:45:19 +0000 (15:45 +0000)]
net: dropreason: add SKB_DROP_REASON_FRAG_REASM_TIMEOUT

Used to track skbs freed after a timeout happened
in a reassembly unit.

Passing a @reason argument to inet_frag_rbtree_purge()
allows using the correct consumed status for frags
that have been successfully reassembled.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dropreason: add SKB_DROP_REASON_DUP_FRAG
Eric Dumazet [Sat, 29 Oct 2022 15:45:18 +0000 (15:45 +0000)]
net: dropreason: add SKB_DROP_REASON_DUP_FRAG

This is used to track when a duplicate segment received by various
reassembly units is dropped.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dropreason: propagate drop_reason to skb_release_data()
Eric Dumazet [Sat, 29 Oct 2022 15:45:17 +0000 (15:45 +0000)]
net: dropreason: propagate drop_reason to skb_release_data()

When an skb with a frag list is consumed, we currently
pretend all skbs in the frag list were dropped.

In order to fix this, add a @reason argument to skb_release_data()
and skb_release_all().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dropreason: add SKB_CONSUMED reason
Eric Dumazet [Sat, 29 Oct 2022 15:45:16 +0000 (15:45 +0000)]
net: dropreason: add SKB_CONSUMED reason

This will allow us to simply use, in the future:

kfree_skb_reason(skb, reason);

Instead of repeating sequences like:

if (dropped)
    kfree_skb_reason(skb, reason);
else
    consume_skb(skb);

For instance, the following patch in the series adds
@reason to skb_release_data() and skb_release_all(),
so that we can propagate a meaningful @reason whenever
consume_skb()/kfree_skb() has to take care of a potential frag_list.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: systemport: Add support for RDMA overflow statistic counter
Florian Fainelli [Fri, 28 Oct 2022 22:21:40 +0000 (15:21 -0700)]
net: systemport: Add support for RDMA overflow statistic counter

RDMA overflows can happen if the Ethernet controller does not have
enough bandwidth allocated at the memory controller level, report RDMA
overflows and deal with saturation, similar to the RBUF overflow
counter.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20221028222141.3208429-1-f.fainelli@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ftmac100: allow increasing MTU to make most use of single-segment buffers
Sergei Antonov [Fri, 28 Oct 2022 18:32:20 +0000 (21:32 +0300)]
net: ftmac100: allow increasing MTU to make most use of single-segment buffers

If the FTMAC100 is used as a DSA master, then it is expected that frames
which are MTU sized on the wire facing the external switch port (1500
octets in L2 payload, plus L2 header) also get a DSA tag when seen by
the host port.

This extra tag increases the length of the packet as the host port sees
it, and the FTMAC100 is not prepared to handle frames whose length
exceeds 1518 octets (including FCS) at all.

Only a minimal rework is needed to support this configuration. Since
MTU-sized DSA-tagged frames still fit within a single buffer (RX_BUF_SIZE),
we just need to optimize the resource management rather than implement
multi buffer RX.

In ndo_change_mtu(), we toggle the FTMAC100_MACCR_RX_FTL bit to tell the
hardware to drop (or not) frames with an L2 payload length larger than
1500. We need to replicate the MACCR configuration in ftmac100_start_hw()
as well, since there is a hardware reset there which clears previous
settings.

The advantage of dynamically changing FTMAC100_MACCR_RX_FTL is that when
dev->mtu is at the default value of 1500, large frames are automatically
dropped in hardware and we do not spend CPU cycles dropping them.

Suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-3-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ftmac100: report the correct maximum MTU of 1500
Vladimir Oltean [Fri, 28 Oct 2022 18:32:19 +0000 (21:32 +0300)]
net: ftmac100: report the correct maximum MTU of 1500

The driver uses MAX_PKT_SIZE (1518) both for MTU reporting and for
TX. However, the two places do not measure the same thing.

On TX, skb->len measures the entire L2 packet length (without FCS, which
software does not possess). So the comparison against 1518 there is
correct.

What is not correct is the reporting of dev->max_mtu as 1518. Since MTU
measures L2 *payload* length (excluding L2 overhead) and not total L2
packet length, it means that the correct max_mtu supported by this
device is the standard 1500. Anything higher than that will be dropped
on RX currently.

To fix this, subtract VLAN_ETH_HLEN from MAX_PKT_SIZE when reporting the
max_mtu, since that is the difference between L2 payload length and
total L2 length as seen by software.
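
Numerically, the change boils down to this (sketch, using the constants
named above):

/* MAX_PKT_SIZE (1518) - VLAN_ETH_HLEN (18) = 1500, the standard MTU */
netdev->max_mtu = MAX_PKT_SIZE - VLAN_ETH_HLEN;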

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-2-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ftmac100: prepare data path for receiving single segment packets > 1514
Vladimir Oltean [Fri, 28 Oct 2022 18:32:18 +0000 (21:32 +0300)]
net: ftmac100: prepare data path for receiving single segment packets > 1514

Eliminate one check in the data path and move it elsewhere, to where our
real limitation is. We'll want to start processing "too long" frames in
the driver (currently there is a hardware MAC setting which drops
these).

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20221028183220.155948-1-saproj@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dsa: mv88e6xxx: Add RGMII delay to 88E6320
Steffen Bätz [Fri, 28 Oct 2022 16:31:58 +0000 (13:31 -0300)]
net: dsa: mv88e6xxx: Add RGMII delay to 88E6320

Currently, the .port_set_rgmii_delay hook is missing for the 88E6320
family, which causes failure to retrieve an IP address via DHCP.

Add mv88e6320_port_set_rgmii_delay() that allows applying the RGMII
delay for ports 2, 5, and 6, which are the only ports that can be used
in RGMII mode.

Tested on a custom i.MX8MN board connected to an 88E6320 switch.

This change also applies safely to the 88E6321 variant.

The only difference between 88E6320 versus 88E6321 is the temperature
grade and pinout.

They share exactly the same MDIO register map for ports 2, 5, and 6,
which are the only ports that can be used in RGMII mode.

Signed-off-by: Steffen Bätz <steffen@innosonix.de>
[fabio: Improved commit log and extended it to mv88e6321_ops]
Signed-off-by: Fabio Estevam <festevam@denx.de>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20221028163158.198108-1-festevam@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'rtnetlink-honour-nlm_f_echo-flag-in-rtnl_-new-del-link'
Jakub Kicinski [Tue, 1 Nov 2022 01:10:25 +0000 (18:10 -0700)]
Merge branch 'rtnetlink-honour-nlm_f_echo-flag-in-rtnl_-new-del-link'

Hangbin Liu says:

====================
rtnetlink: Honour NLM_F_ECHO flag in rtnl_{new, del}link

Netlink messages are used for communicating between user and kernel space.
When user space configures the kernel with netlink messages, it can set the
NLM_F_ECHO flag to request that the kernel send the applied configuration back
to the caller. This allows user space to retrieve configuration information
that is filled in by the kernel (either because these parameters can only be
set by the kernel or because user space let the kernel choose a default
value).

The kernel already supports this feature in some places, such as
RTM_{NEW, DEL}ADDR and RTM_{NEW, DEL}ROUTE. This patch set handles the
NLM_F_ECHO flag and sends the link info back after rtnl_{new, del}link;
a userspace sketch of such a request is shown after this cover letter.
====================
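
A userspace sketch of a request that uses NLM_F_ECHO (the interface name
"dummy0" is an assumption and error handling is omitted):

#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct {
                struct nlmsghdr nlh;
                struct ifinfomsg ifm;
        } req;
        struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
        req.nlh.nlmsg_type = RTM_DELLINK;
        /* NLM_F_ECHO asks the kernel to send the affected link back to us */
        req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK | NLM_F_ECHO;
        req.ifm.ifi_family = AF_UNSPEC;
        req.ifm.ifi_index = if_nametoindex("dummy0");

        sendto(fd, &req, req.nlh.nlmsg_len, 0,
               (struct sockaddr *)&kernel, sizeof(kernel));
        /* With this series, recv() on fd now also returns the RTM_DELLINK
         * message describing the deleted link, not just the netlink ACK.
         */
        close(fd);
        return 0;
}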

Link: https://lore.kernel.org/r/20221028084224.3509611-1-liuhangbin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agortnetlink: Honour NLM_F_ECHO flag in rtnl_delete_link
Hangbin Liu [Fri, 28 Oct 2022 08:42:24 +0000 (04:42 -0400)]
rtnetlink: Honour NLM_F_ECHO flag in rtnl_delete_link

This patch uses the new helper unregister_netdevice_many_notify() for
rtnl_delete_link(), so that the kernel can reply via unicast when userspace
sets the NLM_F_ECHO flag to request info about the affected interface.

At the same time, the parameters of rtnl_delete_link() need to be updated,
since the nlmsghdr and portid info is needed.

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agortnetlink: Honour NLM_F_ECHO flag in rtnl_newlink_create
Hangbin Liu [Fri, 28 Oct 2022 08:42:23 +0000 (04:42 -0400)]
rtnetlink: Honour NLM_F_ECHO flag in rtnl_newlink_create

This patch passes the netlink message header in rtnl_newlink_create() to
the newly updated rtnl_configure_link(), so that the kernel can reply via
unicast when userspace sets the NLM_F_ECHO flag to request info about the
newly created interface.

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: add new helper unregister_netdevice_many_notify
Hangbin Liu [Fri, 28 Oct 2022 08:42:22 +0000 (04:42 -0400)]
net: add new helper unregister_netdevice_many_notify

Add a new helper, unregister_netdevice_many_notify(), which takes the
netlink message header and portid; these can be used to notify userspace
when the NLM_F_ECHO flag is set.

Make unregister_netdevice_many() a wrapper around the new
unregister_netdevice_many_notify().

Suggested-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agortnetlink: pass netlink message header and portid to rtnl_configure_link()
Hangbin Liu [Fri, 28 Oct 2022 08:42:21 +0000 (04:42 -0400)]
rtnetlink: pass netlink message header and portid to rtnl_configure_link()

This patch passes the netlink message header and portid to rtnl_configure_link().
All the functions in this call chain need to take the new parameters so we can
use them in the final rtnl_notify() call, and notify userspace about
the new link info if the NLM_F_ECHO flag is set.

- rtnl_configure_link()
  - __dev_notify_flags()
    - rtmsg_ifinfo()
      - rtmsg_ifinfo_event()
        - rtmsg_ifinfo_build_skb()
        - rtmsg_ifinfo_send()
  - rtnl_notify()

Also move __dev_notify_flags() declaration to net/core/dev.h, as Jakub
suggested.

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dpaa2: Add some debug prints on deferred probe
Sean Anderson [Thu, 27 Oct 2022 19:00:05 +0000 (15:00 -0400)]
net: dpaa2: Add some debug prints on deferred probe

When probing of this device is deferred, there is often no way to determine
the cause. Add some debug prints to make it easier to figure out
what is blocking the probe.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Link: https://lore.kernel.org/r/20221027190005.400839-1-sean.anderson@seco.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mvneta: Remove unused variable i
Colin Ian King [Fri, 28 Oct 2022 12:36:24 +0000 (13:36 +0100)]
net: mvneta: Remove unused variable i

Variable i is only ever incremented and is never used anywhere else. The
variable and the increment are redundant, so remove them.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'ptp-adjfine'
David S. Miller [Mon, 31 Oct 2022 11:14:16 +0000 (11:14 +0000)]
Merge branch 'ptp-adjfine'

Jacob Keller says:

====================
ptp: convert drivers to .adjfine

Many drivers implementing PTP have not yet migrated to the new .adjfine
frequency adjustment implementation.

A handful of these drivers use hardware with a simple increment value which
is adjusted by multiplying by the adjustment factor and then dividing by
1 billion. This calculation is very easy to convert to .adjfine, by simply
updating the divisor.

Introduce new helper functions, diff_by_scaled_ppm and adjust_by_scaled_ppm
which perform the most common calculations used by drivers for this purpose.

The adjust_by_scaled_ppm takes the base increment and scaled PPM value, and
calculates the new increment to use.

A few drivers need the difference and direction rather than a raw increment
value. The diff_by_scaled_ppm calculates the difference and returns true if
it should be a subtraction, false otherwise. This most closely aligns with
existing driver implementations.

I previously submitted v1 of this series at [1], and got some feedback only
on a handful of drivers. In the interest of merging the changes which have
received feedback, I've dropped the following drivers out of this send:

 * ptp_phc
 * ptp_ipx46x
 * tg3
 * hclge
 * stmac
 * cpts

I plan to submit those drivers changes again at a later date. As before,
there are some drivers which are not trivial to convert to the new helper
functions. While they may be able to work, their implementation is different
and I lack the hardware or datasheets to determine what the correct
implementation would be.

* drivers/net/ethernet/broadcom/bnx2x
* drivers/net/ethernet/broadcom/bnxt
* drivers/net/ethernet/cavium/liquidio
* drivers/net/ethernet/chelsio/cxgb4
* drivers/net/ethernet/freescale
* drivers/net/ethernet/qlogic/qed
* drivers/net/ethernet/qlogic/qede
* drivers/net/ethernet/sfc
* drivers/net/ethernet/sfc/siena
* drivers/net/ethernet/ti/am65-cpts.c
* drivers/ptp/ptp_dte.c

My end goal is to drop the .adjfreq implementation entirely, and to that end
I plan on modifying these drivers in the future to directly use
scaled_ppm_to_ppb as the simplest method to convert them.

Changes since v2:
* Rebased to allow landing in 6.2
* Added Richard's Acked-by

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Siva Reddy Kallam <siva.kallam@broadcom.com>
Cc: Prashant Sreedharan <prashant@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: Sergey Shtylyov <s.shtylyov@omp.ru>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Jose Abreu <joabreu@synopsys.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Vivek Thampi <vithampi@vmware.com>
Cc: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
Cc: Jie Wang <wangjie125@huawei.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Guangbin Huang <huangguangbin2@huawei.com>
Cc: Eran Ben Elisha <eranbe@nvidia.com>
Cc: Aya Levin <ayal@nvidia.com>
Cc: Cai Huoqing <cai.huoqing@linux.dev>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Cc: Phil Edworthy <phil.edworthy@renesas.com>
Cc: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Wan Jiabing <wanjiabing@vivo.com>
Cc: Lv Ruyi <lv.ruyi@zte.com.cn>
Cc: Arnd Bergmann <arnd@arndb.de>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: xgbe: convert to .adjfine and adjust_by_scaled_ppm
Jacob Keller [Fri, 28 Oct 2022 11:04:20 +0000 (04:04 -0700)]
ptp: xgbe: convert to .adjfine and adjust_by_scaled_ppm

The xgbe implementation of .adjfreq uses a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use adjust_by_scaled_ppm to calculate
the new addend value.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: ravb: convert to .adjfine and adjust_by_scaled_ppm
Jacob Keller [Fri, 28 Oct 2022 11:04:19 +0000 (04:04 -0700)]
ptp: ravb: convert to .adjfine and adjust_by_scaled_ppm

The ravb implementation of .adjfreq uses a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use the adjust_by_scaled_ppm helper
function to calculate the new addend.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Sergey Shtylyov <s.shtylyov@omp.ru>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Phil Edworthy <phil.edworthy@renesas.com>
Cc: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Cc: linux-renesas-soc@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: lan743x: use diff_by_scaled_ppm in .adjfine implementation
Jacob Keller [Fri, 28 Oct 2022 11:04:18 +0000 (04:04 -0700)]
ptp: lan743x: use diff_by_scaled_ppm in .adjfine implementation

Update the lan743x driver to use the recently added diff_by_scaled_ppm
helper function. This reduces the amount of code required in lan743x_ptp.c
driver file.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: UNGLinuxDriver@microchip.com
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: lan743x: remove .adjfreq implementation
Jacob Keller [Fri, 28 Oct 2022 11:04:17 +0000 (04:04 -0700)]
ptp: lan743x: remove .adjfreq implementation

The lan743x driver implements both .adjfreq and .adjfine, but the core PTP
subsystem prefers .adjfine if implemented. There is no reason to carry a
.adjfreq implementation, so we can remove it.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Bryan Whitehead <bryan.whitehead@microchip.com>
Cc: UNGLinuxDriver@microchip.com
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: mlx5: convert to .adjfine and adjust_by_scaled_ppm
Jacob Keller [Fri, 28 Oct 2022 11:04:16 +0000 (04:04 -0700)]
ptp: mlx5: convert to .adjfine and adjust_by_scaled_ppm

The mlx5 implementation of .adjfreq uses a straightforward
"base * ppb / 1 billion" calculation.

Convert this to the .adjfine interface and use adjust_by_scaled_ppm for the
calculation of the new mult value.

Note that the mlx5_ptp_adjfreq_real_time function expects input in terms of
ppb, so use scaled_ppm_to_ppb() to convert before passing the value to this
function.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Shirly Ohnona <shirlyo@nvidia.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Gal Pressman <gal@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Aya Levin <ayal@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: mlx4: convert to .adjfine and adjust_by_scaled_ppm
Jacob Keller [Fri, 28 Oct 2022 11:04:15 +0000 (04:04 -0700)]
ptp: mlx4: convert to .adjfine and adjust_by_scaled_ppm

The mlx4 implementation of .adjfreq uses a straightforward
"base * ppb / 1 billion" calculation.

Convert this driver to .adjfine and use adjust_by_scaled_ppm to perform the
calculation.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodrivers: convert unsupported .adjfreq to .adjfine
Jacob Keller [Fri, 28 Oct 2022 11:04:14 +0000 (04:04 -0700)]
drivers: convert unsupported .adjfreq to .adjfine

A few PTP drivers implement a .adjfreq handler which indicates the
operation is not supported. Convert all of these to .adjfine.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Vivek Thampi <vithampi@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: introduce helpers to adjust by scaled parts per million
Jacob Keller [Fri, 28 Oct 2022 11:04:13 +0000 (04:04 -0700)]
ptp: introduce helpers to adjust by scaled parts per million

Many drivers implement the .adjfreq or .adjfine PTP op function with the
same basic logic:

  1. Determine a base frequency value
  2. Multiply this by the abs() of the requested adjustment, then divide by
     the appropriate divisor (1 billion, or 65.536 billion).
  3. Add or subtract this difference from the base frequency to calculate a
     new adjustment.

A few drivers need the difference and direction rather than the combined
new increment value.

I recently converted the Intel drivers to .adjfine and the scaled parts per
million (65.536 parts per billion) logic. To avoid overflow with minimal
loss of precision, mul_u64_u64_div_u64 was used.

The basic logic used by all of these drivers is very similar, and leads to
a lot of duplicate code to perform the same task.

Rather than keep this duplicate code, introduce diff_by_scaled_ppm and
adjust_by_scaled_ppm. These helper functions calculate the difference or
adjustment necessary based on the scaled parts per million input.

The diff_by_scaled_ppm function returns true if the difference should be
subtracted, and false otherwise.
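
A minimal sketch of the resulting driver pattern (the driver structure and
register-write helper are hypothetical; adjust_by_scaled_ppm() is the new
helper introduced here):

static int foo_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
{
        struct foo_ptp *ptp = container_of(info, struct foo_ptp, info);
        /* Scale the nominal increment by the requested scaled-ppm offset */
        u64 incval = adjust_by_scaled_ppm(ptp->nominal_incval, scaled_ppm);

        foo_write_incval(ptp, incval);  /* hypothetical register write */
        return 0;
}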

Update the Intel drivers to use the new helper functions. Other vendor
drivers will be converted to .adjfine and this helper function in the
following changes.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoptp: add missing documentation for parameters
Jacob Keller [Fri, 28 Oct 2022 11:04:12 +0000 (04:04 -0700)]
ptp: add missing documentation for parameters

The ptp_find_pin_unlocked function and the ptp_system_timestamp structure
didn't document their parameters and fields. Fix this.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: phy: Add driver for Motorcomm yt8521 gigabit ethernet phy
Frank [Fri, 28 Oct 2022 09:26:21 +0000 (17:26 +0800)]
net: phy: Add driver for Motorcomm yt8521 gigabit ethernet phy

Add a driver for the Motorcomm YT8521 gigabit Ethernet PHY. We have verified
the driver on the StarFive VisionFive development board, which is developed by
Shanghai StarFive Technology Co., Ltd. On the board, the YT8521 gigabit
Ethernet PHY works in UTP mode with an RGMII interface, and supports
1000M/100M/10M speeds and WOL (magic packet).

Signed-off-by: Frank <Frank.Sae@motor-comm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: microchip: sparx5: kunit test: change test_callbacks and test_vctrl to static
Yang Yingliang [Fri, 28 Oct 2022 08:11:06 +0000 (16:11 +0800)]
net: microchip: sparx5: kunit test: change test_callbacks and test_vctrl to static

test_callbacks and test_vctrl are only used in vcap_api_kunit.c now,
change them to static.

Fixes: 67d637516fa9 ("net: microchip: sparx5: Adding KUNIT test for the VCAP API")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Steen Hegelund <Steen.Hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: geneve: fix array of flexible structures warnings
Jakub Kicinski [Fri, 28 Oct 2022 03:52:59 +0000 (20:52 -0700)]
net: geneve: fix array of flexible structures warnings

New compilers don't like flexible array of flexible structs:

  include/net/geneve.h:62:34: warning: array of flexible structures
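
The warning is triggered by the general pattern below (an illustrative
reconstruction, not the exact geneve.h layout): a flexible array whose
element type itself ends in a flexible array member.

struct opt {
        unsigned char type;
        unsigned char length;
        unsigned char data[];           /* flexible array member */
};

struct hdr {
        unsigned char ver;
        struct opt options[];           /* "array of flexible structures" */
};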

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: hns: hnae: remove unnecessary __module_get() and module_put()
Yang Yingliang [Fri, 28 Oct 2022 07:34:57 +0000 (15:34 +0800)]
net: hns: hnae: remove unnecessary __module_get() and module_put()

hnae_ae_register() is called from hns_dsaf_probe(); the refcount of
the hnae module has already been taken in resolve_symbol() when resolving
that function, so the __module_get()/module_put() calls can be removed.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonetlink: split up copies in the ack construction
Jakub Kicinski [Thu, 27 Oct 2022 21:25:53 +0000 (14:25 -0700)]
netlink: split up copies in the ack construction

Clean up the use of unsafe_memcpy() by adding a flexible array
at the end of netlink message header and splitting up the header
and data copies.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodrivers: net: convert to boolean for the mac_managed_pm flag
Denis Kirjanov [Thu, 27 Oct 2022 18:45:02 +0000 (21:45 +0300)]
drivers: net: convert to boolean for the mac_managed_pm flag

Signed-off-by: Dennis Kirjanov <dkirjanov@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodt-bindings: net: snps,dwmac: Document queue config subnodes
Sebastian Reichel [Thu, 27 Oct 2022 16:31:19 +0000 (18:31 +0200)]
dt-bindings: net: snps,dwmac: Document queue config subnodes

The queue configuration is referenced by snps,mtl-rx-config and
snps,mtl-tx-config. Some in-tree DTs and the example put the
referenced config nodes directly beneath the root node, but
most in-tree DTs put it as child node of the dwmac node.

This adds a proper description of this setup, which has the
advantage of validating the queue configuration node content.

The example is also updated to use the sub-node style, incl.
the axi bus configuration node, which got the same treatment
as the queues config in 5361660af6d3 ("dt-bindings: net: snps,dwmac:
Document stmmac-axi-config subnode").

Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: remove unused netdev_unregistering()
Juhee Kang [Thu, 27 Oct 2022 16:04:24 +0000 (01:04 +0900)]
net: remove unused netdev_unregistering()

Currently, dev->reg_state == NETREG_UNREGISTERING is checked directly
rather than by using the netdev_unregistering() helper. The
netdev_unregistering() helper in netdevice.h is therefore no longer used
anywhere, so remove it from netdevice.h.

Signed-off-by: Juhee Kang <claudiajkang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel...
Jakub Kicinski [Sat, 29 Oct 2022 05:07:47 +0000 (22:07 -0700)]
Merge tag 'mlx5-updates-2022-10-24' of git://git./linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2022-10-24

SW steering updates from Yevgeny Kliteynik:

1) 1st Four patches: small fixes / optimizations for SW steering:

 - Patch 1: Don't abort destroy flow if failed to destroy table - continue
   and free everything else.
 - Patches 2 and 3 deal with fast teardown:
    + Skip sync during fast teardown, as PCI device is not there any more.
    + Check device state when polling CQ - otherwise SW steering keeps polling
      the CQ forever, because nobody is there to flush it.
 - Patch 4: Removing unneeded function argument.

2) Deal with the hiccups that we get during rules insertion/deletion,
which sometimes reach 1/4 of a second. While insertion/deletion rate
improvement was not the focus here, it still is a by-product of removing these
hiccups.

Another by-product is the reduced standard deviation in measuring the duration
of rules insertion/deletion bursts.

In the testing we add K rules (warm-up phase), and then continuously do
insertion/deletion bursts of N rules.
During the test execution, the driver measures hiccups (amount and duration)
and total time for insertion/deletion of a batch of rules.

Here are some numbers, before and after these patches:

+--------------------------------------------+-----------------+----------------+
|                                            |   Create rules  |  Delete rules  |
|                                            +--------+--------+--------+-------+
|                                            | Before |  After | Before | After |
+--------------------------------------------+--------+--------+--------+-------+
| Max hiccup [msec]                          |    253 |     42 |    254 |    68 |
+--------------------------------------------+--------+--------+--------+-------+
| Avg duration of 10K rules add/remove [msec]| 140.07 | 124.32 | 106.99 | 99.51 |
+--------------------------------------------+--------+--------+--------+-------+
| Num of hiccups per 100K rules add/remove   |   7.77 |   7.97 |  12.60 | 11.57 |
+--------------------------------------------+--------+--------+--------+-------+
| Avg hiccup duration [msec]                 |  36.92 |  33.25 |  36.15 | 33.74 |
+--------------------------------------------+--------+--------+--------+-------+

 - Patch 5: Allocate a short array on stack instead of dynamically- it is
   destroyed at the end of the function.
 - Patch 6: Rather than cleaning the corresponding chunk's section of
   ste_arrays on chunk deletion, initialize these areas upon chunk creation.
   Chunk destruction tends to come in large batches (during pool syncing),
   so instead of doing huge memory initialization during pool sync,
   we amortize this by doing small initializations on chunk creation.
 - Patch 7: In order to simplify the error flow and allow cleaner addition
   of new pools, handle creation/destruction of all the domain's memory pools
   and other memory-related fields in separate init/uninit functions.
 - Patch 8: During rehash, write each table row immediately instead of waiting
   for the whole table to be ready and writing it all - saves allocations
   of ste_send_info structures and improves performance.
 - Patch 9: Instead of allocating/freeing send info objects dynamically,
   manage them in pool. The number of send info objects doesn't depend on
   number of rules, so after pre-populating the pool with an initial batch of
   send info objects, the pool is not expected to grow.
   This way we save alloc/free during writing STEs to ICM, which by itself can
   sometimes take up to 40msec.
 - Patch 10: Allocate icm_chunks from their own slab allocator, which lowered
   the alloc/free "hiccups" frequency.
 - Patch 11: Similar to patch 9, allocate htbl from its own slab allocator.
 - Patch 12: Lower sync threshold for ICM hot memory - set the threshold for
   sync to 1/4 of the pool instead of 1/2 of the pool. Although we will have
   more syncs, each sync will be shorter and will help with insertion rate
   stability. Also, notice that the overall number of hiccups wasn't increased
   due to all the other patches.
 - Patch 13: Keep track of hot ICM chunks in an array instead of list.
   After steering sync, we traverse the hot list and finally free all the
   chunks. It appears that traversing a long list takes an unusually long time
   due to cache misses on many entries, which causes a big "hiccup" during
   rule insertion. This patch replaces the list with pre-allocated array that
   stores only the bookkeeping information that is needed to later free the
   chunks in its buddy allocator.
 - Patch 14: Remove the unneeded buddy used_list - we don't need to have the
   list of used chunks, we only need the total amount of used memory.

* tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: DR, Remove the buddy used_list
  net/mlx5: DR, Keep track of hot ICM chunks in an array instead of list
  net/mlx5: DR, Lower sync threshold for ICM hot memory
  net/mlx5: DR, Allocate htbl from its own slab allocator
  net/mlx5: DR, Allocate icm_chunks from their own slab allocator
  net/mlx5: DR, Manage STE send info objects in pool
  net/mlx5: DR, In rehash write the line in the entry immediately
  net/mlx5: DR, Handle domain memory resources init/uninit separately
  net/mlx5: DR, Initialize chunk's ste_arrays at chunk creation
  net/mlx5: DR, For short chains of STEs, avoid allocating ste_arr dynamically
  net/mlx5: DR, Remove unneeded argument from dr_icm_chunk_destroy
  net/mlx5: DR, Check device state when polling CQ
  net/mlx5: DR, Fix the SMFS sync_steering for fast teardown
  net/mlx5: DR, In destroy flow, free resources even if FW command failed
====================

Link: https://lore.kernel.org/r/20221027145643.6618-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'net-ipa-start-adding-ipa-v5-0-functionality'
Jakub Kicinski [Sat, 29 Oct 2022 05:07:04 +0000 (22:07 -0700)]
Merge branch 'net-ipa-start-adding-ipa-v5-0-functionality'

Alex Elder says:

====================
net: ipa: start adding IPA v5.0 functionality

The biggest change for IPA v5.0 is that it supports more than 32
endpoints.  However there are two other unrelated changes:
  - The STATS_TETHERING memory region is not required
  - Filter tables no longer support a "global" filter

Beyond this, refactoring some code makes supporting more than 32
endpoints (in an upcoming series) easier.  So this series includes
a few other changes (not in this order):
  - The maximum endpoint ID in use is determined during config
  - Loops over all endpoints only involve those in use
  - Endpoint IDs and their directions are checked for validity
    differently to simplify comparison against the maximum
====================

Link: https://lore.kernel.org/r/20221027122632.488694-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: record and use the number of defined endpoint IDs
Alex Elder [Thu, 27 Oct 2022 12:26:32 +0000 (07:26 -0500)]
net: ipa: record and use the number of defined endpoint IDs

Define a new field in the IPA structure that records the maximum
number of entries that will be used in the IPA endpoint array.  Use
that value rather than IPA_ENDPOINT_MAX to determine the end
condition for two loops that iterate over all endpoints.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: determine the maximum endpoint ID
Alex Elder [Thu, 27 Oct 2022 12:26:31 +0000 (07:26 -0500)]
net: ipa: determine the maximum endpoint ID

Each endpoint ID has an entry in the IPA endpoint array.  But the
size of that array is defined at compile time.  Instead, rename
ipa_endpoint_data_valid() to be ipa_endpoint_max() and have it
return the maximum endpoint ID defined in configuration data.
That function will still validate configuration data.

Zero is returned on error; it's a valid endpoint ID, but we need
more than one, so it can't be the maximum.  The next patch makes use
of the returned maximum value.

Finally, rename the "initialized" mask of endpoints defined by
configuration data to be "defined".

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: refactor endpoint loops
Alex Elder [Thu, 27 Oct 2022 12:26:30 +0000 (07:26 -0500)]
net: ipa: refactor endpoint loops

Change two functions that iterate over all endpoints to use while
loops, using "endpoint_id" as the index variables in both spots.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: more completely check endpoint validity
Alex Elder [Thu, 27 Oct 2022 12:26:29 +0000 (07:26 -0500)]
net: ipa: more completely check endpoint validity

Ensure all defined TX endpoints are in the range [0, CONS_PIPES) and
defined RX endpoints are within [PROD_LOWEST, PROD_LOWEST+PROD_PIPES).

Modify the way local variables are used to make the checks easier
to understand.  Check for each endpoint being in valid range in the
loop, and drop the logical-AND check of initialized against
unavailable IDs.
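
A minimal sketch of the range checks described above (CONS_PIPES,
PROD_LOWEST and PROD_PIPES are the limits named in this commit; the
surrounding code and field usage are assumptions):

  if (data->toward_ipa) {                 /* TX endpoint */
          if (data->endpoint_id >= CONS_PIPES)
                  return false;
  } else {                                /* RX endpoint */
          if (data->endpoint_id < PROD_LOWEST ||
              data->endpoint_id >= PROD_LOWEST + PROD_PIPES)
                  return false;
  }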

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: no more global filtering starting with IPA v5.0
Alex Elder [Thu, 27 Oct 2022 12:26:28 +0000 (07:26 -0500)]
net: ipa: no more global filtering starting with IPA v5.0

IPA v5.0 eliminates the global filter table entry.  As a result,
there is no need to shift the filtered endpoint bitmap when it is
written to IPA local memory.

Update comments to explain this.  Also delete a redundant block of
comments above the function.
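
Roughly, the version-dependent behaviour reads like the sketch below
(the field and enum names are placeholders, not the driver's exact
symbols):

  u32 val = ipa->filtered;        /* bitmap of endpoints with a filter */

  /* Before IPA v5.0 the first table entry was the global filter, so
   * the endpoint bitmap was written shifted up by one; from v5.0 on
   * there is no global entry and the bitmap is written as-is.
   */
  if (ipa->version < IPA_VERSION_5_0)
          val <<= 1;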

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: change an IPA v5.0 memory requirement
Alex Elder [Thu, 27 Oct 2022 12:26:27 +0000 (07:26 -0500)]
net: ipa: change an IPA v5.0 memory requirement

Don't require IPA v5.0 to have a STATS_TETHERING memory region.
Downstream defines its size as 0, so it is apparently unused.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: ipa: define IPA v5.0
Alex Elder [Thu, 27 Oct 2022 12:26:26 +0000 (07:26 -0500)]
net: ipa: define IPA v5.0

In preparation for adding support for IPA v5.0, define it as an
understood version.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet/packet: add PACKET_FANOUT_FLAG_IGNORE_OUTGOING
Willem de Bruijn [Thu, 27 Oct 2022 21:10:14 +0000 (17:10 -0400)]
net/packet: add PACKET_FANOUT_FLAG_IGNORE_OUTGOING

Extend packet socket option PACKET_IGNORE_OUTGOING to fanout groups.

The socket option sets ptype.ignore_outgoing, which makes
dev_queue_xmit_nit skip the socket.

When the socket joins a fanout group, the option is not reflected in
the struct ptype of the group. dev_queue_xmit_nit only tests the
fanout ptype, so the flag is ignored once a socket joins a
fanout group.

Inheriting the option from a socket would change established behavior.
Different sockets in the group can set different flags, and can also
change them at runtime.

Testing in packet_rcv_fanout defeats the purpose of the original
patch, which is to avoid skb_clone in dev_queue_xmit_nit (esp. for
MSG_ZEROCOPY packets).

Instead, introduce a new fanout group flag with the same behavior.

Tested with https://github.com/wdebruij/kerneltools/blob/master/tests/test_psock_fanout_ignore_outgoing.c
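
A hedged userspace sketch of joining a fanout group with the new flag
(group id 1 and PACKET_FANOUT_HASH are arbitrary choices here; needs
CAP_NET_RAW):

  #include <stdio.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>
  #include <linux/if_ether.h>
  #include <linux/if_packet.h>

  int main(void)
  {
          int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
          /* low 16 bits: fanout group id; high 16 bits: mode and flags */
          unsigned int arg = 1 | ((PACKET_FANOUT_HASH |
                                   PACKET_FANOUT_FLAG_IGNORE_OUTGOING) << 16);

          if (fd < 0 ||
              setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &arg, sizeof(arg)))
                  perror("PACKET_FANOUT");
          return 0;
  }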

Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20221027211014.3581513-1-willemdebruijn.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoice: Add additional CSR registers to ETHTOOL_GREGS
Lukasz Czapnik [Thu, 27 Oct 2022 10:42:39 +0000 (03:42 -0700)]
ice: Add additional CSR registers to ETHTOOL_GREGS

In the event of a Tx hang it can be useful to read a variety of hardware
registers to capture some state about why the transmit queue got stuck.

Extend the ETHTOOL_GREGS dump provided by the ice driver with several CSR
registers that provide such relevant information regarding the hardware Tx
state. This makes it possible to capture the relevant data when
debugging such a Tx hang.

Signed-off-by: Lukasz Czapnik <lukasz.czapnik@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Link: https://lore.kernel.org/r/20221027104239.1691549-1-jacob.e.keller@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'clean-up-sfp-register-definitions'
Jakub Kicinski [Sat, 29 Oct 2022 04:56:22 +0000 (21:56 -0700)]
Merge branch 'clean-up-sfp-register-definitions'

Russell King says:

====================
Clean up SFP register definitions

This two-part patch series cleans up the SFP register definitions by
1. converting them from hex to decimal, as all the definitions in the
   documents use decimal; this makes it easier to cross-reference.
2. moving the bit definitions for each register alongside their
   register address definition
====================

Link: https://lore.kernel.org/r/Y1qFvaDlLVM1fHdG@shell.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: sfp: move field definitions alongside register index
Russell King (Oracle) [Thu, 27 Oct 2022 13:21:21 +0000 (14:21 +0100)]
net: sfp: move field definitions alongside register index

Just as we do for the A2h enum, arrange the A0h enum to have the
field definitions next to their corresponding register index.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: sfp: convert register indexes from hex to decimal
Russell King (Oracle) [Thu, 27 Oct 2022 13:21:16 +0000 (14:21 +0100)]
net: sfp: convert register indexes from hex to decimal

The register indexes in the standards are in decimal rather than hex,
so let's specify them in decimal in the header file so we can easily
cross-reference without converting between hex and decimal.
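
For illustration, a couple of A0h register indexes after the change
(names as in include/linux/sfp.h; the selection here is just an
example):

  enum {
          SFP_PHYS_ID     = 0,    /* was 0x00 */
          SFP_PHYS_EXT_ID = 1,    /* was 0x01 */
          SFP_CONNECTOR   = 2,    /* was 0x02 */
          /* ... */
  };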

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'net-mtk_eth_soc-improve-pcs-implementation'
Jakub Kicinski [Sat, 29 Oct 2022 04:48:39 +0000 (21:48 -0700)]
Merge branch 'net-mtk_eth_soc-improve-pcs-implementation'

Russell King says:

====================
net: mtk_eth_soc: improve PCS implementation

As a result of investigations from Frank Wunderlich, we know a lot more
about the Mediatek "SGMII" PCS block, and can implement the PCS support
correctly. This series achieves that, and Frank has tested the final
result and reports that it works for him. The series could do with
further testing by others, but I suspect that is unlikely to happen
until it is merged, based on past performance with this driver.

Briefly, the patches in order:

1. Add a new helper to get the link timer duration in nanoseconds
2. Add definitions for the newly discovered registers and updates to
   bit definitions, including bitmasks for the BMCR, BMSR and two
   advertisement registers.
3. Remove unnecessary/unused error handling (functions always returning
   zero).
4. Adding the missing pcs_get_state() implementation.
5. Converting the code to use regmap_update_bits() rather than
   open-coding read-modify-write sequences.
6. Adding out-of-band speed and duplex forcing for all non-inband modes,
   not just the 802.3z link modes as the code currently does.
7. Moving the release of the PHY power down to the main pcs_config()
   function.
8. Moving the interface speed selection to the main pcs_config()
   function.
9. Adding advertisement programming.
10. Adding correct link timer programming using the new helper in the
    first patch.
11. Adding support for 802.3z negotiation.

There is one remaining issue: when configuring the PCS for in-band,
for some reason the AN restart bit is always set. This should not be
needed, but further investigation with the hardware is required to
confirm whether it is really necessary. I suspect this was a
workaround for a previous poor implementation.
====================

Link: https://lore.kernel.org/r/Y1qDMw+DJLAJHT40@shell.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: add support for in-band 802.3z negotiation
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:28 +0000 (14:11 +0100)]
net: mtk_eth_soc: add support for in-band 802.3z negotiation

As a result of help from Frank Wunderlich to investigate and test, we
now know how to program this PCS for in-band 802.3z negotiation. Add
support for this by moving the contents of the two functions into the
common mtk_pcs_config() function and adding the register settings for
802.3z negotiation.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: move and correct link timer programming
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:23 +0000 (14:11 +0100)]
net: mtk_eth_soc: move and correct link timer programming

Program the link timer appropriately for the interface mode being
used, using the newly introduced phylink helper that provides the
nanosecond link timer interval.

The intervals are 1.6ms for SGMII based protocols and 10ms for
802.3z based protocols.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: add advertisement programming
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:18 +0000 (14:11 +0100)]
net: mtk_eth_soc: add advertisement programming

Program the advertisement into the mtk PCS block.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: move interface speed selection
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:13 +0000 (14:11 +0100)]
net: mtk_eth_soc: move interface speed selection

Move the selection of the underlying interface speed to the pcs_config
function, so we always program the interface speed.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: move PHY power up
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:08 +0000 (14:11 +0100)]
net: mtk_eth_soc: move PHY power up

The PHY power up is common to both configuration paths, so move it into
the parent function. We need to do this for all serdes modes.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: add out of band forcing of speed and duplex in pcs_link_up
Russell King (Oracle) [Thu, 27 Oct 2022 13:11:03 +0000 (14:11 +0100)]
net: mtk_eth_soc: add out of band forcing of speed and duplex in pcs_link_up

Add support for forcing the link speed and duplex setting in the
pcs_link_up() method for out of band modes, which will be useful when
we finish converting the pcs_config() method. Until then, we still have
to force duplex for 802.3z modes to work correctly.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: convert mtk_sgmii to use regmap_update_bits()
Russell King (Oracle) [Thu, 27 Oct 2022 13:10:58 +0000 (14:10 +0100)]
net: mtk_eth_soc: convert mtk_sgmii to use regmap_update_bits()

mtk_sgmii does a lot of read-modify-write operations, for which there
is a specific regmap function. Use this function instead of open-coding
the operations.
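
As a generic illustration of the conversion (the register offset and
field mask names are placeholders):

  unsigned int val;

  /* open-coded read-modify-write */
  regmap_read(regmap, SGMSYS_SOME_REG, &val);
  val &= ~SOME_FIELD_MASK;
  val |= FIELD_PREP(SOME_FIELD_MASK, new_value);
  regmap_write(regmap, SGMSYS_SOME_REG, val);

  /* equivalent single call */
  regmap_update_bits(regmap, SGMSYS_SOME_REG, SOME_FIELD_MASK,
                     FIELD_PREP(SOME_FIELD_MASK, new_value));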

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: add pcs_get_state() implementation
Russell King (Oracle) [Thu, 27 Oct 2022 13:10:52 +0000 (14:10 +0100)]
net: mtk_eth_soc: add pcs_get_state() implementation

Add a pcs_get_state() implementation which uses the advertisements
to compute the resulting link modes, and BMSR contents to determine
negotiation and link status.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: eliminate unnecessary error handling
Russell King (Oracle) [Thu, 27 Oct 2022 13:10:47 +0000 (14:10 +0100)]
net: mtk_eth_soc: eliminate unnecessary error handling

The functions called by the pcs_config() method always return zero, so
there is no point trying to handle an error from these functions. Make
these functions void, eliminate the "err" variable and simply return
zero from the pcs_config() function itself.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: mtk_eth_soc: add definitions for PCS
Russell King (Oracle) [Thu, 27 Oct 2022 13:10:42 +0000 (14:10 +0100)]
net: mtk_eth_soc: add definitions for PCS

As a result of help from Frank Wunderlich to investigate and test, we
know a bit more about the PCS on the Mediatek platforms. Update the
definitions from this investigation.

This PCS appears similar, but not identical to the Lynx PCS.

Although not included in this patch, for future reference: the PHY
ID registers at offset 4 read as 0x4d544950 'MTIP'.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: phylink: add phylink_get_link_timer_ns() helper
Russell King (Oracle) [Thu, 27 Oct 2022 13:10:37 +0000 (14:10 +0100)]
net: phylink: add phylink_get_link_timer_ns() helper

Add a helper to convert the PHY interface mode to the required link
timer setting as stated by the appropriate standard. Inappropriate
interface modes return an error.
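
A hedged sketch of a PCS driver using the helper (how the returned
value is scaled into a device register is hardware specific and not
shown):

  int link_timer;

  link_timer = phylink_get_link_timer_ns(interface);
  if (link_timer < 0)
          return link_timer;      /* no defined timer for this mode */

  /* per this series: 1600000 ns (1.6 ms) for SGMII-based modes and
   * 10000000 ns (10 ms) for 802.3z-based modes */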

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'net-remove-the-obsolte-u64_stats_fetch_-_irq'
Jakub Kicinski [Sat, 29 Oct 2022 03:13:57 +0000 (20:13 -0700)]
Merge branch 'net-remove-the-obsolte-u64_stats_fetch_-_irq'

Sebastian Andrzej Siewior says:

====================
net: Remove the obsolete u64_stats_fetch_*_irq()

This is the removal of u64_stats_fetch_*_irq() users in networking. The
prerequisites are part of v6.1-rc1.

The spi and bpf bits are not part of the series and have been routed
directly.
====================

Link: https://lore.kernel.org/r/20221026132215.696950-1-bigeasy@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: Remove the obsolete u64_stats_fetch_*_irq() users (net).
Thomas Gleixner [Wed, 26 Oct 2022 13:22:15 +0000 (15:22 +0200)]
net: Remove the obsolete u64_stats_fetch_*_irq() users (net).

Now that the 32bit UP oddity is gone and 32bit always uses a sequence
count, there is no need for the fetch_irq() variants anymore.

Convert to the regular interface.
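
The conversion follows the usual pattern, e.g. (the stats structure is
illustrative):

  unsigned int start;
  u64 packets, bytes;

  /* previously u64_stats_fetch_begin_irq() / u64_stats_fetch_retry_irq() */
  do {
          start = u64_stats_fetch_begin(&stats->syncp);
          packets = stats->packets;
          bytes = stats->bytes;
  } while (u64_stats_fetch_retry(&stats->syncp, start));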

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: Remove the obsolete u64_stats_fetch_*_irq() users (drivers).
Thomas Gleixner [Wed, 26 Oct 2022 13:22:14 +0000 (15:22 +0200)]
net: Remove the obsolete u64_stats_fetch_*_irq() users (drivers).

Now that the 32bit UP oddity is gone and 32bit always uses a sequence
count, there is no need for the fetch_irq() variants anymore.

Convert to the regular interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoKalle Valo says:
Jakub Kicinski [Sat, 29 Oct 2022 01:31:39 +0000 (18:31 -0700)]
Kalle Valo says:

====================
pull-request: wireless-next-2022-10-28

First set of patches for v6.2. mac80211 refactoring continues for Wi-Fi 7.
All mac80211 drivers are now converted to use internal TX queues; this
might cause some regressions, so we wanted to do this early in the
cycle.

Note: wireless tree was merged[1] to wireless-next to avoid some
conflicts with mac80211 patches between the trees. Unfortunately there
are still two smaller conflicts in net/mac80211/util.c which Stephen
also reported[2]. In the first conflict initialise scratch_len to
"params->scratch_len ?: 3 * params->len" (note number 3, not 2!) and
in the second conflict take the version which uses elems->scratch_pos.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git/commit/?id=dfd2d876b3fda1790bc0239ba4c6967e25d16e91
[2] https://lore.kernel.org/all/20221020032340.5cf101c0@canb.auug.org.au/

mac80211
 - preparation for Wi-Fi 7 Multi-Link Operation (MLO) continues
 - add API to show the link STAs in debugfs
 - all mac80211 drivers are now using mac80211 internal TX queues (iTXQs)

rtw89
 - support 8852BE

rtl8xxxu
 - support RTL8188FU

brmfmac
 - support two station interfaces concurrently

bcma
 - support SPROM rev 11
====================

Link: https://lore.kernel.org/r/20221028132943.304ECC433B5@smtp.kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonfc: s3fwrn5: use devm_clk_get_optional_enabled() helper
Dmitry Torokhov [Thu, 27 Oct 2022 07:34:02 +0000 (00:34 -0700)]
nfc: s3fwrn5: use devm_clk_get_optional_enabled() helper

Because we enable the clock immediately after acquiring it in probe,
we can combine the two operations and use the
devm_clk_get_optional_enabled() helper.
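
The resulting simplification, roughly (clock lookup name and error
handling abbreviated):

  /* before */
  clk = devm_clk_get_optional(dev, NULL);
  if (IS_ERR(clk))
          return PTR_ERR(clk);
  ret = clk_prepare_enable(clk);
  if (ret)
          return ret;

  /* after: acquire and enable in one devm-managed step */
  clk = devm_clk_get_optional_enabled(dev, NULL);
  if (IS_ERR(clk))
          return PTR_ERR(clk);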

Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'txgbe'
David S. Miller [Fri, 28 Oct 2022 10:25:53 +0000 (11:25 +0100)]
Merge branch 'txgbe'

Jiawen Wu says:

====================
net: WangXun txgbe ethernet driver

This patch series adds support for the WangXun 10 gigabit NIC: it
initializes the hardware, sets the MAC address, and registers the netdev.

Change log:
v6: address comments:
    Jakub Kicinski: check with scripts/kernel-doc
v5: address comments:
    Jakub Kicinski: clean build with W=1 C=1
v4: address comments:
    Andrew Lunn: https://lore.kernel.org/all/YzXROBtztWopeeaA@lunn.ch/
v3: address comments:
    Andrew Lunn: remove hw function ops, reorder functions, use BIT(n)
                 for register bit offset, move the same code of txgbe
                 and ngbe to libwx
v2: address comments:
    Andrew Lunn: https://lore.kernel.org/netdev/YvRhld5rD%2FxgITEg@lunn.ch/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: txgbe: Set MAC address and register netdev
Jiawen Wu [Thu, 27 Oct 2022 06:11:16 +0000 (14:11 +0800)]
net: txgbe: Set MAC address and register netdev

Add MAC address related operations, and register netdev.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: txgbe: Reset hardware
Jiawen Wu [Thu, 27 Oct 2022 06:11:15 +0000 (14:11 +0800)]
net: txgbe: Reset hardware

Reset and initialize the hardware by configuring the MAC layer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: txgbe: Store PCI info
Jiawen Wu [Thu, 27 Oct 2022 06:11:14 +0000 (14:11 +0800)]
net: txgbe: Store PCI info

Get PCI config space info, set LAN id and check flash status.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'tcp-plb'
David S. Miller [Fri, 28 Oct 2022 09:47:42 +0000 (10:47 +0100)]
Merge branch 'tcp-plb'

Mubashir Adnan Qureshi says:

====================
net: Add PLB functionality to TCP

This patch series adds PLB (Protective Load Balancing) to TCP and hooks
it up to DCTCP. PLB is disabled by default and can be enabled using
relevant sysctls and support from underlying CC.

PLB (Protective Load Balancing) is a host-based mechanism for load
balancing across switch links. It leverages congestion signals (e.g. ECN)
from transport layer to randomly change the path of the connection
experiencing congestion. PLB changes the path of the connection by
changing the outgoing IPv6 flow label for IPv6 connections (implemented
in Linux by calling sk_rethink_txhash()). Because of this implementation
mechanism, PLB can currently only work for IPv6 traffic. For more
information, see the SIGCOMM 2022 paper:
  https://doi.org/10.1145/3544216.3544226
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: add rcv_wnd and plb_rehash to TCP_INFO
Mubashir Adnan Qureshi [Wed, 26 Oct 2022 13:51:15 +0000 (13:51 +0000)]
tcp: add rcv_wnd and plb_rehash to TCP_INFO

rcv_wnd can be useful to diagnose TCP performance where the receiver
window becomes the bottleneck. rehash reports the PLB- and
timeout-triggered rehash attempts by the TCP connection.
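
Reading the new fields from userspace is a normal TCP_INFO query; a
hedged sketch (the tcpi_rcv_wnd and tcpi_rehash field names are
assumptions based on this patch):

  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <netinet/in.h>         /* IPPROTO_TCP */
  #include <linux/tcp.h>          /* struct tcp_info, TCP_INFO */

  /* fd is an already-connected TCP socket */
  static void dump_plb_stats(int fd)
  {
          struct tcp_info ti;
          socklen_t len = sizeof(ti);

          memset(&ti, 0, sizeof(ti));
          if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
                  printf("rcv_wnd=%u rehash=%u\n",
                         ti.tcpi_rcv_wnd, ti.tcpi_rehash);
  }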

Signed-off-by: Mubashir Adnan Qureshi <mubashirq@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: add u32 counter in tcp_sock and an SNMP counter for PLB
Mubashir Adnan Qureshi [Wed, 26 Oct 2022 13:51:14 +0000 (13:51 +0000)]
tcp: add u32 counter in tcp_sock and an SNMP counter for PLB

A u32 counter is added to tcp_sock for counting the number of PLB
triggered rehashes for a TCP connection. An SNMP counter is also
added to count overall PLB triggered rehash events for a host. These
counters are hooked up to PLB implementation for DCTCP.

TCP_NLA_REHASH is added to SCM_TIMESTAMPING_OPT_STATS; it reports
the rehash attempts triggered due to PLB or timeouts. This gives
a historical view of sustained congestion or timeouts experienced
by the TCP connection.

Signed-off-by: Mubashir Adnan Qureshi <mubashirq@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: add support for PLB in DCTCP
Mubashir Adnan Qureshi [Wed, 26 Oct 2022 13:51:13 +0000 (13:51 +0000)]
tcp: add support for PLB in DCTCP

PLB support is added to TCP DCTCP code. As DCTCP uses ECN as the
congestion signal, PLB also uses ECN to make decisions whether to change
the path or not upon sustained congestion.

Signed-off-by: Mubashir Adnan Qureshi <mubashirq@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: add PLB functionality for TCP
Mubashir Adnan Qureshi [Wed, 26 Oct 2022 13:51:12 +0000 (13:51 +0000)]
tcp: add PLB functionality for TCP

Congestion control algorithms track PLB state and cause the connection
to trigger a path change when either of the two conditions below is
satisfied:

- No packets are in flight and (# consecutive congested rounds >=
  sysctl_tcp_plb_idle_rehash_rounds)
- (# consecutive congested rounds >= sysctl_tcp_plb_rehash_rounds)

A round (RTT) is marked as congested when the congestion signal
(ECN ce_ratio) over an RTT is greater than sysctl_tcp_plb_cong_thresh.
In the event of an RTO, PLB (via tcp_write_timeout()) triggers a path
change and disables congestion-triggered path changes for a random time
between sysctl_tcp_plb_suspend_rto_sec and 2*sysctl_tcp_plb_suspend_rto_sec
to avoid hopping onto the "connectivity blackhole". RTO-triggered
path changes can still happen during this cool-off period.
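
A condensed sketch of the decision described above (the names here are
illustrative, not the exact kernel symbols):

  /* returns true when PLB should repath the connection, e.g. via
   * sk_rethink_txhash() */
  static bool plb_should_rehash(u32 consec_cong_rounds, bool pkts_in_flight,
                                u32 idle_rehash_rounds, u32 rehash_rounds)
  {
          if (!pkts_in_flight && consec_cong_rounds >= idle_rehash_rounds)
                  return true;
          return consec_cong_rounds >= rehash_rounds;
  }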

Signed-off-by: Mubashir Adnan Qureshi <mubashirq@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: add sysctls for TCP PLB parameters
Mubashir Adnan Qureshi [Wed, 26 Oct 2022 13:51:11 +0000 (13:51 +0000)]
tcp: add sysctls for TCP PLB parameters

PLB (Protective Load Balancing) is a host-based mechanism for load
balancing across switch links. It leverages congestion signals (e.g. ECN)
from transport layer to randomly change the path of the connection
experiencing congestion. PLB changes the path of the connection by
changing the outgoing IPv6 flow label for IPv6 connections (implemented
in Linux by calling sk_rethink_txhash()). Because of this implementation
mechanism, PLB can currently only work for IPv6 traffic. For more
information, see the SIGCOMM 2022 paper:
  https://doi.org/10.1145/3544216.3544226

This commit adds new sysctl knobs and sets their default values for
TCP PLB.

Signed-off-by: Mubashir Adnan Qureshi <mubashirq@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'mxl-gpy-MDI-X'
David S. Miller [Fri, 28 Oct 2022 09:35:51 +0000 (10:35 +0100)]
Merge branch 'mxl-gpy-MDI-X'

Raju Lakkaraju says:

====================
net: phy: mxl-gpy: Add MDI-X

This patch series adds the MDI-X feature to GPY211 PHYs and also
changes the return type of the gpy_update_interface() function.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>