platform/kernel/linux-rpi.git
2 years agoethernet/ti: delete if NULL check before devm_kfree
Bernard Zhao [Mon, 16 May 2022 01:52:05 +0000 (18:52 -0700)]
ethernet/ti: delete if NULL check before devm_kfree

devm_kfree() already checks the pointer itself, so there is no need to
check it before the devm_kfree() call.
This change cleans up the code a bit.
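As an illustration of the pattern being removed (a generic sketch; 'dev'
and 'priv->buf' are placeholder names, not the actual driver fields):

    /* before: redundant NULL check around devm_kfree() */
    if (priv->buf)
            devm_kfree(dev, priv->buf);

    /* after: devm_kfree() already ignores a NULL pointer */
    devm_kfree(dev, priv->buf);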

Signed-off-by: Bernard Zhao <bernard@vivo.com>
Link: https://lore.kernel.org/r/20220516015208.6526-1-bernard@vivo.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2 years agonet: ethernet: Fix unmet direct dependencies detected for NVMEM_SUNPLUS_OCOTP
Wells Lu [Fri, 13 May 2022 11:57:16 +0000 (19:57 +0800)]
net: ethernet: Fix unmet direct dependencies detected for NVMEM_SUNPLUS_OCOTP

Removed unnecessary:

select COMMON_CLK_SP7021
select RESET_SUNPLUS
select NVMEM_SUNPLUS_OCOTP

from Kconfig.

Reported-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: Wells Lu <wellslutw@gmail.com>
Link: https://lore.kernel.org/r/1652443036-24731-1-git-send-email-wellslutw@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge tag 'linux-can-next-for-5.19-20220516' of git://git.kernel.org/pub/scm/linux...
Jakub Kicinski [Tue, 17 May 2022 00:06:20 +0000 (17:06 -0700)]
Merge tag 'linux-can-next-for-5.19-20220516' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2022-05-16

the first 2 patches are by me and target the CAN raw protocol. The 1st
removes an unneeded assignment, the other one adds support for
SO_TXTIME/SCM_TXTIME.

Oliver Hartkopp contributes 2 patches for the ISOTP protocol. The 1st
adds support for transmission without flow control, the other lets
bind() return an error on incorrect CAN ID formatting.

Geert Uytterhoeven contributes a patch to clean up ctucanfd's Kconfig
file.

Vincent Mailhol's patch for the slcan driver uses the proper function
to check for invalid CAN frames in the xmit callback.

The next patch is by Geert Uytterhoeven and makes the interrupt-names
of the renesas,rcar-canfd dt bindings mandatory.

A patch by me updates the ctucanfd dt bindings to include the common
CAN controller bindings.

The last patch is by Akira Yokosawa and fixes a breakage in the
ctucanfd documentation.

* tag 'linux-can-next-for-5.19-20220516' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
  docs: ctucanfd: Use 'kernel-figure' directive instead of 'figure'
  dt-bindings: can: ctucanfd: include common CAN controller bindings
  dt-bindings: can: renesas,rcar-canfd: Make interrupt-names required
  can: slcan: slc_xmit(): use can_dropped_invalid_skb() instead of manual check
  can: ctucanfd: Let users select instead of depend on CAN_CTUCANFD
  can: isotp: isotp_bind(): return -EINVAL on incorrect CAN ID formatting
  can: isotp: add support for transmission without flow control
  can: raw: add support for SO_TXTIME/SCM_TXTIME
  can: raw: raw_sendmsg(): remove not needed setting of skb->sk
====================

Link: https://lore.kernel.org/r/20220516202625.1129281-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'net-skb-remove-skb_data_area_size'
Jakub Kicinski [Mon, 16 May 2022 20:45:41 +0000 (13:45 -0700)]
Merge branch 'net-skb-remove-skb_data_area_size'

Ricardo Martinez says:

====================
net: skb: Remove skb_data_area_size()

This patch series removes the skb_data_area_size() helper,
replacing it in t7xx driver with the size used during skb allocation.

https://lore.kernel.org/netdev/CAHNKnsTmH-rGgWi3jtyC=ktM1DW2W1VJkYoTMJV2Z_Bt498bsg@mail.gmail.com/
====================

Link: https://lore.kernel.org/r/20220513173400.3848271-1-ricardo.martinez@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: skb: Remove skb_data_area_size()
Ricardo Martinez [Fri, 13 May 2022 17:34:00 +0000 (10:34 -0700)]
net: skb: Remove skb_data_area_size()

skb_data_area_size() is not needed. As Jakub pointed out [1]:
For Rx, drivers can use the size passed during skb allocation or
use skb_tailroom().
For Tx, drivers should use skb_headlen().

[1] https://lore.kernel.org/netdev/CAHNKnsTmH-rGgWi3jtyC=ktM1DW2W1VJkYoTMJV2Z_Bt498bsg@mail.gmail.com/
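A rough sketch of the suggested replacements (illustrative fragment;
'dev', 'rx_buf_len' and the mapping directions are assumptions, not code
taken from any driver):

    /* Rx: size is known from the allocation, or derivable from the skb */
    skb = __dev_alloc_skb(rx_buf_len, GFP_KERNEL);
    dma = dma_map_single(dev, skb->data, rx_buf_len, DMA_FROM_DEVICE);
    /* alternatively: dma_map_single(dev, skb->data, skb_tailroom(skb), ...) */

    /* Tx: only the linear part of the skb is handed to the HW */
    dma = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);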

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: wwan: t7xx: Avoid calls to skb_data_area_size()
Ricardo Martinez [Fri, 13 May 2022 17:33:59 +0000 (10:33 -0700)]
net: wwan: t7xx: Avoid calls to skb_data_area_size()

The skb_data_area_size() helper was used to calculate the size of the
DMA mapped buffer passed to the HW. Instead of doing this, use the
size passed when allocating the skbs.

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'mptcp-updates-for-net-next'
Jakub Kicinski [Mon, 16 May 2022 20:11:33 +0000 (13:11 -0700)]
Merge branch 'mptcp-updates-for-net-next'

Mat Martineau says:

====================
mptcp: Updates for net-next

Three independent fixes/features from the MPTCP tree:

Patch 1 is a selftest workaround for older iproute2 packages.

Patch 2 removes superfluous locks that were added with recent MP_FAIL
patches.

Patch 3 adds support for the TCP_DEFER_ACCEPT sockopt.
====================

Link: https://lore.kernel.org/r/20220514002115.725976-1-mathew.j.martineau@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agomptcp: sockopt: add TCP_DEFER_ACCEPT support
Florian Westphal [Sat, 14 May 2022 00:21:15 +0000 (17:21 -0700)]
mptcp: sockopt: add TCP_DEFER_ACCEPT support

Support this via passthrough to the underlying tcp listener socket.
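A minimal user-space sketch of how an MPTCP listener can now use the
option, inside its setup path (not taken from the patch; the 5 second
value and the IPPROTO_MPTCP fallback define are illustrative):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262       /* fallback for older libc headers */
    #endif

    int s = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
    int secs = 5;   /* wait up to 5s for data before waking accept() */

    setsockopt(s, IPPROTO_TCP, TCP_DEFER_ACCEPT, &secs, sizeof(secs));
    /* bind()/listen()/accept() as usual */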

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/271
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoRevert "mptcp: add data lock for sk timers"
Paolo Abeni [Sat, 14 May 2022 00:21:14 +0000 (17:21 -0700)]
Revert "mptcp: add data lock for sk timers"

This reverts commit 4293248c6704b854bf816aa1967e433402bee11c.

Additional locks are not needed; all the touched sections
are already under mptcp socket lock protection.

Fixes: 4293248c6704 ("mptcp: add data lock for sk timers")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoselftests: mptcp: fix a mp_fail test warning
Geliang Tang [Sat, 14 May 2022 00:21:13 +0000 (17:21 -0700)]
selftests: mptcp: fix a mp_fail test warning

Old tc versions (iproute2 5.3) show actions in multiple lines, not a
single line. Then the following unexpected MP_FAIL selftest output
occurs:

 file received by server has inverted byte at 169
 ./mptcp_join.sh: line 1277: [: [{"total acts":1},{"actions":[{"order":0 pedit ,"control_action":{"type":"pipe"}keys 1
         index 1 ref 1 bind 1,"installed":0,"last_used":0
         key #0  at 148: val ff000000 mask ffffffff
 5: integer expression expected
 001 Infinite map                      syn[ ok ] - synack[ ok ] - ack[ ok ]
                                       sum[ ok ] - csum  [ ok ]
                                       ftx[ ok ] - failrx[ ok ]
                                       rtx[ ok ] - rstrx [ ok ]
                                       itx[ ok ] - infirx[ ok ]
                                       ftx[ ok ] - failrx[ ok ] invert

This patch adds a 'grep' before 'sed' to fix this.

Fixes: b6e074e171bc ("selftests: mptcp: add infinite map testcase")
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agodocs: ctucanfd: Use 'kernel-figure' directive instead of 'figure'
Akira Yokosawa [Tue, 10 May 2022 23:45:43 +0000 (08:45 +0900)]
docs: ctucanfd: Use 'kernel-figure' directive instead of 'figure'

Two issues were observed in the ReST doc added by commit c3a0addefbde
("docs: ctucanfd: CTU CAN FD open-source IP core documentation.")
with Sphinx versions 2.4.4 and 4.5.0.

The plain "figure" directive broke "make pdfdocs" due to a missing
PDF figure.  For conversion of SVG -> PDF to work, the "kernel-figure"
directive, which is an extension for kernel documentation, should
be used instead.

The directive of "code:: raw" causes a warning from both
"make htmldocs" and "make pdfdocs", which reads:

    [...]/can/ctu/ctucanfd-driver.rst:75: WARNING: Pygments lexer name
    'raw' is not known

A plain literal-block marker should suffice where no syntax
highlighting is intended.

Fix the issues by using suitable directive and marker.

Fixes: c3a0addefbde ("docs: ctucanfd: CTU CAN FD open-source IP core documentation.")
Link: https://lore.kernel.org/all/5986752a-1c2a-5d64-f91d-58b1e6decd17@gmail.com
Signed-off-by: Akira Yokosawa <akiyks@gmail.com>
Acked-by: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Cc: Martin Jerabek <martin.jerabek01@gmail.com>
Cc: Ondrej Ille <ondrej.ille@gmail.com>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agodt-bindings: can: ctucanfd: include common CAN controller bindings
Marc Kleine-Budde [Wed, 4 May 2022 06:21:16 +0000 (08:21 +0200)]
dt-bindings: can: ctucanfd: include common CAN controller bindings

Since commit

1f9234401ce0 ("dt-bindings: can: add can-controller.yaml")

there is a common CAN controller binding. Add this to the ctucanfd
binding.

Cc: Ondrej Ille <ondrej.ille@gmail.com>
Acked-by: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agonet: dsa: realtek: rtl8366rb: Serialize indirect PHY register access
Alvin Šipraga [Fri, 13 May 2022 21:36:18 +0000 (23:36 +0200)]
net: dsa: realtek: rtl8366rb: Serialize indirect PHY register access

Lock the regmap during the whole PHY register access routines in
rtl8366rb.

Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20220513213618.2742895-1-linus.walleij@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agodt-bindings: can: renesas,rcar-canfd: Make interrupt-names required
Geert Uytterhoeven [Mon, 2 May 2022 17:33:53 +0000 (19:33 +0200)]
dt-bindings: can: renesas,rcar-canfd: Make interrupt-names required

The Renesas R-Car CAN FD Controller always uses two or more interrupts.
Make the interrupt-names property a required one, to make it
easier to identify the individual interrupts.

Update the example accordingly.

Link: https://lore.kernel.org/all/a68e65955e0df4db60233d468f348203c2e7b940.1651512451.git.geert+renesas@glider.be
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: slcan: slc_xmit(): use can_dropped_invalid_skb() instead of manual check
Vincent Mailhol [Sat, 14 May 2022 14:16:47 +0000 (23:16 +0900)]
can: slcan: slc_xmit(): use can_dropped_invalid_skb() instead of manual check

slcan does a manual check in slc_xmit() to verify if the skb is valid.
This check is incomplete; use can_dropped_invalid_skb() instead.

Link: https://lore.kernel.org/all/20220514141650.1109542-2-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: ctucanfd: Let users select instead of depend on CAN_CTUCANFD
Geert Uytterhoeven [Mon, 9 May 2022 14:02:59 +0000 (16:02 +0200)]
can: ctucanfd: Let users select instead of depend on CAN_CTUCANFD

The CTU CAN-FD IP core is only useful when used with one of the
corresponding PCI/PCIe or platform (FPGA, SoC) drivers, which depend on
PCI or OF, respectively.

Hence make the users select the core driver code, instead of letting
them depend on it.  Keep the core code config option visible when
compile-testing, to maintain compile coverage.

Link: https://lore.kernel.org/all/887b7440446b6244a20a503cc6e8dc9258846706.1652104941.git.geert+renesas@glider.be
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: isotp: isotp_bind(): return -EINVAL on incorrect CAN ID formatting
Oliver Hartkopp [Sun, 15 May 2022 18:16:33 +0000 (20:16 +0200)]
can: isotp: isotp_bind(): return -EINVAL on incorrect CAN ID formatting

Commit 3ea566422cbd ("can: isotp: sanitize CAN ID checks in
isotp_bind()") checks the given CAN ID address information by
sanitizing the input values.

This check (silently) removes obsolete bits by masking the given CAN
IDs.

Derek Will suggested giving feedback to the application programmer
when the 'sanitizing' was actually needed, which means the programmer
provided CAN ID content in a wrong format (e.g. SFF CAN IDs with a CAN
ID > 0x7FF).
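A simplified sketch of the kind of check this results in (field and macro
names follow linux/can.h; the actual patch may differ in detail):

    /* an SFF CAN ID must fit into 11 bits; reject badly formatted
     * addresses instead of silently masking them
     */
    if (!(addr->can_addr.tp.tx_id & CAN_EFF_FLAG) &&
        (addr->can_addr.tp.tx_id & ~CAN_SFF_MASK))
            return -EINVAL;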

Link: https://lore.kernel.org/all/20220515181633.76671-1-socketcan@hartkopp.net
Suggested-by: Derek Will <derekrobertwill@gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: isotp: add support for transmission without flow control
Oliver Hartkopp [Sat, 7 May 2022 11:55:58 +0000 (13:55 +0200)]
can: isotp: add support for transmission without flow control

Usually the ISO 15765-2 protocol is a point-to-point protocol to transfer
segmented PDUs to a dedicated receiver. This receiver sends a flow control
message to specify protocol options and timings (e.g. block size / STmin).

The so-called functional addressing communication allows 1:N
communication but is limited to a single frame length.

This new CAN_ISOTP_CF_BROADCAST allows an unconfirmed 1:N communication
with PDU length that would not fit into a single frame. This feature is
not covered by the ISO 15765-2 standard.
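A hedged user-space sketch of requesting such an unconfirmed 1:N transfer
(socket setup only; bind()/write() and error handling omitted):

    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/isotp.h>

    struct can_isotp_options opts = {
            .flags = CAN_ISOTP_CF_BROADCAST,  /* unconfirmed 1:N, no FC */
    };
    int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

    setsockopt(s, SOL_CAN_ISOTP, CAN_ISOTP_OPTS, &opts, sizeof(opts));
    /* then bind() to the interface / tx CAN ID and write() large PDUs */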

Link: https://lore.kernel.org/all/20220507115558.19065-1-socketcan@hartkopp.net
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: raw: add support for SO_TXTIME/SCM_TXTIME
Marc Kleine-Budde [Thu, 21 Apr 2022 10:31:52 +0000 (12:31 +0200)]
can: raw: add support for SO_TXTIME/SCM_TXTIME

This patch calls into sock_cmsg_send() to parse the user supplied
control information into a struct sockcm_cookie. Then assign the
requested transmit time to the skb.

This makes it possible to use the Earliest TXTIME First (ETF) packet
scheduler with the CAN_RAW protocol. The user can send a CAN_RAW frame
with a TXTIME and the kernel (with the ETF scheduler) will take care
of sending it to the network interface.
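A sketch of the user-space side (assumes a CAN_RAW socket 's', an iovec
'iov' pointing at a struct can_frame, an ETF qdisc on the interface and a
caller-computed absolute 'txtime_ns'; no error handling):

    struct sock_txtime st = { .clockid = CLOCK_TAI };
    char cbuf[CMSG_SPACE(sizeof(__u64))] = {};
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = cbuf,
                          .msg_controllen = sizeof(cbuf) };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

    setsockopt(s, SOL_SOCKET, SO_TXTIME, &st, sizeof(st));

    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type  = SCM_TXTIME;
    cm->cmsg_len   = CMSG_LEN(sizeof(__u64));
    *(__u64 *)CMSG_DATA(cm) = txtime_ns;    /* requested transmit time */

    sendmsg(s, &msg, 0);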

Link: https://lore.kernel.org/all/20220502091946.1916211-3-mkl@pengutronix.de
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agocan: raw: raw_sendmsg(): remove not needed setting of skb->sk
Marc Kleine-Budde [Thu, 21 Apr 2022 08:29:03 +0000 (10:29 +0200)]
can: raw: raw_sendmsg(): remove not needed setting of skb->sk

The skb in raw_sendmsg() is allocated with sock_alloc_send_skb(),
which subsequently calls sock_alloc_send_pskb() -> skb_set_owner_w(),
which assigns "skb->sk = sk".

This patch removes the not needed setting of skb->sk.

Link: https://lore.kernel.org/all/20220502091946.1916211-2-mkl@pengutronix.de
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2 years agonet: phy: micrel: Use the kszphy probe/suspend/resume
Fabio Estevam [Fri, 13 May 2022 11:46:13 +0000 (08:46 -0300)]
net: phy: micrel: Use the kszphy probe/suspend/resume

Now that it is possible to use .probe without having .driver_data, let
KSZ8061 use the kszphy-specific hooks for probe, suspend and resume,
which is preferred.

Switch to using the dedicated kszphy probe/suspend/resume functions.

Signed-off-by: Fabio Estevam <festevam@denx.de>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220513114613.762810-2-festevam@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: phy: micrel: Allow probing without .driver_data
Fabio Estevam [Fri, 13 May 2022 11:46:12 +0000 (08:46 -0300)]
net: phy: micrel: Allow probing without .driver_data

Currently, if the .probe element is present in the phy_driver structure
and the .driver_data is not, a NULL pointer dereference happens.

Allow passing .probe without .driver_data by inserting NULL checks
for priv->type.
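Roughly, the added guards look like this (a sketch; the exact call sites
in micrel.c differ):

    struct kszphy_priv *priv = phydev->priv;

    /* priv->type is now allowed to be NULL when .driver_data is absent */
    if (priv->type && priv->type->led_mode_reg)
            kszphy_setup_led(phydev, priv->type->led_mode_reg,
                             priv->led_mode);

    if (priv->type && priv->type->has_broadcast_disable)
            kszphy_broadcast_disable(phydev);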

Signed-off-by: Fabio Estevam <festevam@denx.de>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220513114613.762810-1-festevam@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoocteontx2-pf: Remove unnecessary synchronize_irq() before free_irq()
Minghao Chi [Fri, 13 May 2022 08:19:18 +0000 (08:19 +0000)]
octeontx2-pf: Remove unnecessary synchronize_irq() before free_irq()

Calling synchronize_irq() right before free_irq() is quite useless. On one
hand the IRQ can easily fire again before free_irq() is entered, on the
other hand free_irq() itself calls synchronize_irq() internally (in a
race-condition-free way), before any state associated with the IRQ is freed.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: wwan: t7xx: Fix return type of t7xx_dl_add_timedout()
YueHaibing [Fri, 13 May 2022 07:56:11 +0000 (15:56 +0800)]
net: wwan: t7xx: Fix return type of t7xx_dl_add_timedout()

t7xx_dl_add_timedout() now returns the int 'ret', but the return type
is bool. Change the return type to int so the error code can be
propagated further upstream.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoocteon_ep: delete unnecessary NULL check
Ziyang Xuan [Fri, 13 May 2022 07:29:28 +0000 (15:29 +0800)]
octeon_ep: delete unnecessary NULL check

vfree(NULL) is safe. A NULL check before vfree() is not needed.
Delete such checks to simplify the code.

Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoocteon_ep: add missing destroy_workqueue in octep_init_module
Zheng Bin [Fri, 13 May 2022 07:10:18 +0000 (15:10 +0800)]
octeon_ep: add missing destroy_workqueue in octep_init_module

octep_init_module() misses a destroy_workqueue() call in its error path;
this patch fixes that.
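Schematically, the fix is the usual unwind pattern (identifiers such as
'octep_wq' and 'octep_pci_driver' are shortened/illustrative, not the
exact driver names):

    static struct workqueue_struct *octep_wq;

    static int __init octep_init_module(void)
    {
            int ret;

            octep_wq = create_singlethread_workqueue("octep_wq");
            if (!octep_wq)
                    return -ENOMEM;

            ret = pci_register_driver(&octep_pci_driver);
            if (ret < 0)
                    destroy_workqueue(octep_wq);    /* previously missing */

            return ret;
    }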

Fixes: 862cd659a6fb ("octeon_ep: Add driver framework and device initialization")
Signed-off-by: Zheng Bin <zhengbin13@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'net-skb-defer-freeing-polish'
David S. Miller [Mon, 16 May 2022 10:33:59 +0000 (11:33 +0100)]
Merge branch 'net-skb-defer-freeing-polish'

Eric Dumazet says:

====================
net: polish skb defer freeing

While testing this recently added feature on a variety
of platforms/configurations, I found the following issues:

1) A race leading to concurrent calls to smp_call_function_single_async()

2) Missed opportunity to use napi_consume_skb()

3) Need to limit the max length of the per-cpu lists.

4) Process the per-cpu list more frequently, for the
   (unusual) case where net_rx_action() has multiple
   napi_poll() calls to process per round.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: call skb_defer_free_flush() before each napi_poll()
Eric Dumazet [Mon, 16 May 2022 04:24:56 +0000 (21:24 -0700)]
net: call skb_defer_free_flush() before each napi_poll()

skb_defer_free_flush() can consume cpu cycles;
it seems better to call it in the inner loop:

- Potentially frees page/skb that will be reallocated while hot.

- Account for the cpu cycles in the @time_limit determination.

- Keep softnet_data.defer_count small to reduce chances for
  skb_attempt_defer_free() to send an IPI.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: add skb_defer_max sysctl
Eric Dumazet [Mon, 16 May 2022 04:24:55 +0000 (21:24 -0700)]
net: add skb_defer_max sysctl

commit 68822bdf76f1 ("net: generalize skb freeing
deferral to per-cpu lists") added another per-cpu
cache of skbs. It was expected to be small,
and an IPI was forced whenever the list reached 128
skbs.

We might need to be able to control the queue capacity
and the added latency more precisely.

An IPI is generated whenever queue reaches half capacity.

Default value of the new limit is 64.
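A condensed sketch of the enqueue-side logic this implies (variable names
assumed from the surrounding softnet_data code, not copied from the patch;
'sd' is the remote cpu's softnet_data and 'cpu' its id):

    unsigned int defer_max = READ_ONCE(sysctl_skb_defer_max);
    bool kick;

    if (READ_ONCE(sd->defer_count) >= defer_max)
            goto nodefer;                   /* queue full: free locally */

    spin_lock_bh(&sd->defer_lock);
    kick = sd->defer_count == (defer_max >> 1); /* IPI at half capacity */
    skb->next = sd->defer_list;
    sd->defer_list = skb;
    sd->defer_count++;
    spin_unlock_bh(&sd->defer_lock);

    if (unlikely(kick))
            smp_call_function_single_async(cpu, &sd->defer_csd);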

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: use napi_consume_skb() in skb_defer_free_flush()
Eric Dumazet [Mon, 16 May 2022 04:24:54 +0000 (21:24 -0700)]
net: use napi_consume_skb() in skb_defer_free_flush()

skb_defer_free_flush() runs from softirq context, so
we have the opportunity to refill the napi_alloc_cache,
and/or use kmem_cache_free_bulk() when this cache is full.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: fix possible race in skb_attempt_defer_free()
Eric Dumazet [Mon, 16 May 2022 04:24:53 +0000 (21:24 -0700)]
net: fix possible race in skb_attempt_defer_free()

A cpu can observe sd->defer_count reaching 128,
and call smp_call_function_single_async().

The problem is that the remote CPU can clear sd->defer_count
before the IPI is run/acknowledged.

Other cpus can queue more packets and also decide
to call smp_call_function_single_async() while the pending
IPI was not yet delivered.

This is a common issue with smp_call_function_single_async().
Callers must ensure correct synchronization and serialization.

I triggered this issue while experimenting with a smaller threshold.
Performing the call to smp_call_function_single_async()
under sd->defer_lock protection did not solve the problem.

Commit 5a18ceca6350 ("smp: Allow smp_call_function_single_async()
to insert locked csd") replaced an informative WARN_ON_ONCE()
with a return of -EBUSY, which is often ignored.
Test of CSD_FLAG_LOCK presence is racy anyway.

Fixes: 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: tulip: convert to devres
Rolf Eike Beer [Fri, 13 May 2022 17:23:59 +0000 (19:23 +0200)]
net: tulip: convert to devres

Works fine on my HP C3600:

[  274.452394] tulip0: no phy info, aborting mtable build
[  274.499041] tulip0:  MII transceiver #1 config 1000 status 782d advertising 01e1
[  274.750691] net eth0: Digital DS21142/43 Tulip rev 65 at MMIO 0xf4008000, 00:30:6e:08:7d:21, IRQ 17
[  283.104520] net eth0: Setting full-duplex based on MII#1 link partner capability of c1e1

Signed-off-by: Rolf Eike Beer <eike-kernel@sf-tec.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: hinic: add missing destroy_workqueue in hinic_pf_to_mgmt_init
Zheng Bin [Fri, 13 May 2022 07:09:22 +0000 (15:09 +0800)]
net: hinic: add missing destroy_workqueue in hinic_pf_to_mgmt_init

hinic_pf_to_mgmt_init() misses a destroy_workqueue() call in its error
path; this patch fixes that.

Fixes: 6dbb89014dc3 ("hinic: fix sending mailbox timeout in aeq event work")
Signed-off-by: Zheng Bin <zhengbin13@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'skb-drop-reason-boundary'
David S. Miller [Mon, 16 May 2022 09:47:44 +0000 (10:47 +0100)]
Merge branch 'skb-drop-reason-boundary'

Menglong Dong says:

====================
net: skb: check the boundary of skb drop reason

In commit 1330b6ef3313 ("skb: make drop reason booleanable"),
SKB_NOT_DROPPED_YET was added to the enum skb_drop_reason, which means
the invalid drop reason SKB_NOT_DROPPED_YET can leak to the kfree_skb
tracepoint. Once this happens (and it did happen, as the 4th patch
shows), it can cause a NULL pointer dereference in the drop monitor and
result in a kernel panic.

Therefore, check the boundary of the drop reason in both kfree_skb_reason
(2nd patch) and the drop monitor (1st patch) to prevent such a case from
happening again.

Meanwhile, fix the invalid drop reason passed to kfree_skb_reason() in
tcp_v4_rcv() and tcp_v6_rcv().

Changes since v2:
1/4 - don't reset the reason and print the debug warning only (Jakub
      Kicinski)
4/4 - remove new lines between tags

Changes since v1:
- consider tcp_v6_rcv() in the 4th patch
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: tcp: reset 'drop_reason' to NOT_SPECIFIED in tcp_v{4,6}_rcv()
Menglong Dong [Fri, 13 May 2022 03:03:39 +0000 (11:03 +0800)]
net: tcp: reset 'drop_reason' to NOT_SPECIFIED in tcp_v{4,6}_rcv()

The 'drop_reason' that is passed to kfree_skb_reason() in tcp_v4_rcv()
and tcp_v6_rcv() can be SKB_NOT_DROPPED_YET(0), as it is used as the
return value of tcp_inbound_md5_hash().

And it can panic the kernel with a NULL pointer dereference in
net_dm_packet_report_size() if the reason is 0, as drop_reasons[0]
is NULL.

Fixes: 1330b6ef3313 ("skb: make drop reason booleanable")
Reviewed-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Hao Peng <flyingpeng@tencent.com>
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: skb: change the definition SKB_DR_SET()
Menglong Dong [Fri, 13 May 2022 03:03:38 +0000 (11:03 +0800)]
net: skb: change the definition SKB_DR_SET()

SKB_DR_OR() is used to set the drop reason to a value when it is
not set or specified yet. SKB_NOT_DROPPED_YET should also be considered
as not set.
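A sketch of the adjusted helpers (close to, but not necessarily identical
with, the final definitions):

    #define SKB_DR_SET(name, reason) \
            ((name) = SKB_DROP_REASON_##reason)

    #define SKB_DR_OR(name, reason)                                 \
            do {                                                    \
                    if ((name) == SKB_DROP_REASON_NOT_SPECIFIED ||  \
                        (name) == SKB_NOT_DROPPED_YET)              \
                            SKB_DR_SET(name, reason);               \
            } while (0)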

Reviewed-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Hao Peng <flyingpeng@tencent.com>
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: skb: check the boundary of drop reason in kfree_skb_reason()
Menglong Dong [Fri, 13 May 2022 03:03:37 +0000 (11:03 +0800)]
net: skb: check the boundary of drop reason in kfree_skb_reason()

Sometimes, we may forget to reset the skb drop reason to NOT_SPECIFIED
after making it the return value of a function whose return type is
enum skb_drop_reason, such as tcp_inbound_md5_hash. Therefore, its value
can be SKB_NOT_DROPPED_YET(0), which is invalid for kfree_skb_reason().

So we check the range of the drop reason in kfree_skb_reason() with
DEBUG_NET_WARN_ON_ONCE().
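Conceptually the check amounts to the following line near the top of
kfree_skb_reason() (a sketch, not the exact hunk):

    /* catch callers leaking SKB_NOT_DROPPED_YET (0) or an out-of-range
     * value before it reaches the kfree_skb tracepoint
     */
    DEBUG_NET_WARN_ON_ONCE(reason <= 0 || reason >= SKB_DROP_REASON_MAX);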

Reviewed-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Hao Peng <flyingpeng@tencent.com>
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: dm: check the boundary of skb drop reasons
Menglong Dong [Fri, 13 May 2022 03:03:36 +0000 (11:03 +0800)]
net: dm: check the boundary of skb drop reasons

The 'reason' will be set to 'SKB_DROP_REASON_NOT_SPECIFIED' if it is not
smaller than SKB_DROP_REASON_MAX in net_dm_packet_trace_kfree_skb_hit(),
but that does not prevent it from being 0, which is invalid and can cause
a NULL pointer dereference in drop_reasons.

Therefore, reset it to SKB_DROP_REASON_NOT_SPECIFIED when 'reason <= 0'.

Reviewed-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Hao Peng <flyingpeng@tencent.com>
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/smc: align the connect behaviour with TCP
Guangguan Wang [Fri, 13 May 2022 02:24:53 +0000 (10:24 +0800)]
net/smc: align the connect behaviour with TCP

A connect() with O_NONBLOCK will not complete immediately
and returns -EINPROGRESS. It is possible to use select()/poll()
for completion by selecting the socket for writing. After select
indicates writability, a second connect() call will return
0 to indicate a successful connection, as TCP does, but smc returns
-EISCONN. Use the socket state for smc to indicate the connect state,
which helps smc align the connect behaviour with TCP.
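The behaviour being aligned is the classic non-blocking connect sequence,
sketched below ('addr' setup, the AF_SMC fallback define and error
handling are illustrative):

    #ifndef AF_SMC
    #define AF_SMC 43
    #endif

    int s = socket(AF_SMC, SOCK_STREAM | SOCK_NONBLOCK, 0);

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0 &&
        errno == EINPROGRESS) {
            struct pollfd pfd = { .fd = s, .events = POLLOUT };

            poll(&pfd, 1, -1);      /* wait until writable */
            /* with this patch the second connect() returns 0, as TCP
             * does, instead of failing with -EISCONN
             */
            connect(s, (struct sockaddr *)&addr, sizeof(addr));
    }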

Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'sk_bound_dev_if-annotations'
David S. Miller [Mon, 16 May 2022 09:31:06 +0000 (10:31 +0100)]
Merge branch 'sk_bound_dev_if-annotations'

Eric Dumazet says:

====================
net: add annotations for sk->sk_bound_dev_if

While writes on sk->sk_bound_dev_if are protected by the socket lock,
we have many lockless reads all over the place.

This is based on syzbot report found in the first patch changelog.

v2: inline ipv6 function only defined if IS_ENABLED(CONFIG_IPV6) (kernel bots)
    Change the INET6_MATCH() to inet6_match(), this is no longer a macro.
    Change INET_MATCH() to inet_match() (Olivier Hartkopp & Jakub Kicinski)

====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoinet: rename INET_MATCH()
Eric Dumazet [Fri, 13 May 2022 18:55:50 +0000 (11:55 -0700)]
inet: rename INET_MATCH()

This is no longer a macro, but an inlined function.

INET_MATCH() -> inet_match()

Signed-off-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Olivier Hartkopp <socketcan@hartkopp.net>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv6: add READ_ONCE(sk->sk_bound_dev_if) in INET6_MATCH()
Eric Dumazet [Fri, 13 May 2022 18:55:49 +0000 (11:55 -0700)]
ipv6: add READ_ONCE(sk->sk_bound_dev_if) in INET6_MATCH()

INET6_MATCH() runs without holding a lock on the socket.

We probably need to annotate most reads.

This patch makes INET6_MATCH() an inline function
to ease our changes.

v2: inline function only defined if IS_ENABLED(CONFIG_IPV6)
    Change the name to inet6_match(), this is no longer a macro.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agol2tp: add READ_ONCE() to fetch sk->sk_bound_dev_if
Eric Dumazet [Fri, 13 May 2022 18:55:48 +0000 (11:55 -0700)]
l2tp: add READ_ONCE() to fetch sk->sk_bound_dev_if

Use READ_ONCE() in paths not holding the socket lock.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet_sched: em_meta: add READ_ONCE() in var_sk_bound_if()
Eric Dumazet [Fri, 13 May 2022 18:55:47 +0000 (11:55 -0700)]
net_sched: em_meta: add READ_ONCE() in var_sk_bound_if()

sk->sk_bound_dev_if can change under us, use READ_ONCE() annotation.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoinet: add READ_ONCE(sk->sk_bound_dev_if) in inet_csk_bind_conflict()
Eric Dumazet [Fri, 13 May 2022 18:55:46 +0000 (11:55 -0700)]
inet: add READ_ONCE(sk->sk_bound_dev_if) in inet_csk_bind_conflict()

inet_csk_bind_conflict() can access sk->sk_bound_dev_if for
unlocked sockets.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodccp: use READ_ONCE() to read sk->sk_bound_dev_if
Eric Dumazet [Fri, 13 May 2022 18:55:45 +0000 (11:55 -0700)]
dccp: use READ_ONCE() to read sk->sk_bound_dev_if

When reading listener sk->sk_bound_dev_if locklessly,
we must use READ_ONCE().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: core: add READ_ONCE/WRITE_ONCE annotations for sk->sk_bound_dev_if
Eric Dumazet [Fri, 13 May 2022 18:55:44 +0000 (11:55 -0700)]
net: core: add READ_ONCE/WRITE_ONCE annotations for sk->sk_bound_dev_if

sock_bindtoindex_locked() needs to use WRITE_ONCE(sk->sk_bound_dev_if, val),
because other cpus/threads might locklessly read this field.

sock_getbindtodevice(), sock_getsockopt() need READ_ONCE()
because they run without socket lock held.
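The pattern, in short (illustrative fragment):

    /* writer, with the socket lock held (sock_bindtoindex_locked()): */
    WRITE_ONCE(sk->sk_bound_dev_if, ifindex);

    /* lockless readers (e.g. sock_getsockopt()): */
    int bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);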

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp: read sk->sk_bound_dev_if once in inet_request_bound_dev_if()
Eric Dumazet [Fri, 13 May 2022 18:55:43 +0000 (11:55 -0700)]
tcp: read sk->sk_bound_dev_if once in inet_request_bound_dev_if()

inet_request_bound_dev_if() reads sk->sk_bound_dev_if twice
while listener socket is not locked.

Another cpu could change this field under us.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agosctp: read sk->sk_bound_dev_if once in sctp_rcv()
Eric Dumazet [Fri, 13 May 2022 18:55:42 +0000 (11:55 -0700)]
sctp: read sk->sk_bound_dev_if once in sctp_rcv()

sctp_rcv() reads sk->sk_bound_dev_if twice while the socket
is not locked. Another cpu could change this field under us.

Fixes: 0fd9a65a76e8 ("[SCTP] Support SO_BINDTODEVICE socket option on incoming packets.")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: annotate races around sk->sk_bound_dev_if
Eric Dumazet [Fri, 13 May 2022 18:55:41 +0000 (11:55 -0700)]
net: annotate races around sk->sk_bound_dev_if

UDP sendmsg() is lockless, and reads sk->sk_bound_dev_if while
this field can be changed by another thread.

Adds minimal annotations to avoid KCSAN splats for UDP.
Following patches will add more annotations to potential lockless readers.

BUG: KCSAN: data-race in __ip6_datagram_connect / udpv6_sendmsg

write to 0xffff888136d47a94 of 4 bytes by task 7681 on cpu 0:
 __ip6_datagram_connect+0x6e2/0x930 net/ipv6/datagram.c:221
 ip6_datagram_connect+0x2a/0x40 net/ipv6/datagram.c:272
 inet_dgram_connect+0x107/0x190 net/ipv4/af_inet.c:576
 __sys_connect_file net/socket.c:1900 [inline]
 __sys_connect+0x197/0x1b0 net/socket.c:1917
 __do_sys_connect net/socket.c:1927 [inline]
 __se_sys_connect net/socket.c:1924 [inline]
 __x64_sys_connect+0x3d/0x50 net/socket.c:1924
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x50 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

read to 0xffff888136d47a94 of 4 bytes by task 7670 on cpu 1:
 udpv6_sendmsg+0xc60/0x16e0 net/ipv6/udp.c:1436
 inet6_sendmsg+0x5f/0x80 net/ipv6/af_inet6.c:652
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg net/socket.c:725 [inline]
 ____sys_sendmsg+0x39a/0x510 net/socket.c:2413
 ___sys_sendmsg net/socket.c:2467 [inline]
 __sys_sendmmsg+0x267/0x4c0 net/socket.c:2553
 __do_sys_sendmmsg net/socket.c:2582 [inline]
 __se_sys_sendmmsg net/socket.c:2579 [inline]
 __x64_sys_sendmmsg+0x53/0x60 net/socket.c:2579
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x50 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

value changed: 0x00000000 -> 0xffffff9b

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 7670 Comm: syz-executor.3 Tainted: G        W         5.18.0-rc1-syzkaller-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011

I chose not to add a Fixes: tag because the race has minor consequences
and the stable teams are busy enough.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'big-tcp'
David S. Miller [Mon, 16 May 2022 09:18:56 +0000 (10:18 +0100)]
Merge branch 'big-tcp'

Eric Dumazet says:

====================
tcp: BIG TCP implementation

This series implements BIG TCP as presented in netdev 0x15:

https://netdevconf.info/0x15/session.html?BIG-TCP

Jonathan Corbet made a nice summary: https://lwn.net/Articles/884104/

Standard TSO/GRO packet limit is 64KB

With BIG TCP, we allow bigger TSO/GRO packet sizes for IPv6 traffic.

Note that this feature is not enabled by default, because it might
break some eBPF programs that assume the TCP header immediately follows
the IPv6 header.

While tcpdump recognizes the HBH/Jumbo header, standard pcap filters
are unable to skip over IPv6 extension headers.

Reducing the number of packets traversing the networking stack usually
improves performance, as shown in this experiment using a 100Gbit NIC
and a 4K MTU.

'Standard' performance with current (74KB) limits.
for i in {1..10}; do ./netperf -t TCP_RR -H iroa23  -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
77           138          183          8542.19
79           143          178          8215.28
70           117          164          9543.39
80           144          176          8183.71
78           126          155          9108.47
80           146          184          8115.19
71           113          165          9510.96
74           113          164          9518.74
79           137          178          8575.04
73           111          171          9561.73

Now enable BIG TCP on both hosts.

ip link set dev eth0 gro_max_size 185000 gso_max_size 185000
for i in {1..10}; do ./netperf -t TCP_RR -H iroa23  -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
57           83           117          13871.38
64           118          155          11432.94
65           116          148          11507.62
60           105          136          12645.15
60           103          135          12760.34
60           102          134          12832.64
62           109          132          10877.68
58           82           115          14052.93
57           83           124          14212.58
57           82           119          14196.01

We see an increase of transactions per second, and lower latencies as well.

v7: adopt unsafe_memcpy() in mlx5 to avoid FORTIFY warnings.

v6: fix a compilation error for CONFIG_IPV6=n in
    "net: allow gso_max_size to exceed 65536", reported by kernel bots.

v5: Replaced two patches (that were adding new attributes) with patches
    from Alexander Duyck. Idea is to reuse existing gso_max_size/gro_max_size

v4: Rebased on top of Jakub series (Merge branch 'tso-gso-limit-split')
    max_tso_size is now family independent.

v3: Fixed a typo in RFC number (Alexander)
    Added Reviewed-by: tags from Tariq on mlx4/mlx5 parts.

v2: Removed the MAX_SKB_FRAGS change; it belongs to a different series.
    Addressed feedback from Alexander and the nvidia folks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomlx5: support BIG TCP packets
Eric Dumazet [Fri, 13 May 2022 18:34:08 +0000 (11:34 -0700)]
mlx5: support BIG TCP packets

mlx5 supports LSOv2.

IPv6 gro/tcp stacks insert a temporary Hop-by-Hop header
with JUMBO TLV for big packets.

We need to ignore/skip this HBH header when populating TX descriptor.

Note that ipv6_has_hopopt_jumbo() only recognizes very specific packet
layout, thus mlx5e_sq_xmit_wqe() is taking care of this layout only.

v7: adopt unsafe_memcpy() and MLX5_UNSAFE_MEMCPY_DISCLAIMER
v2: clear hopbyhop in mlx5e_tx_get_gso_ihs()
v4: fix compile error for CONFIG_MLX5_CORE_IPOIB=y

Signed-off-by: Coco Li <lixiaoyan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomlx4: support BIG TCP packets
Eric Dumazet [Fri, 13 May 2022 18:34:07 +0000 (11:34 -0700)]
mlx4: support BIG TCP packets

mlx4 supports LSOv2 just fine.

IPv6 stack inserts a temporary Hop-by-Hop header
with JUMBO TLV for big packets.

We need to ignore the HBH header when populating TX descriptor.

Tested:

Before: (not enabling bigger TSO/GRO packets)

ip link set dev eth0 gso_max_size 65536 gro_max_size 65536

netperf -H lpaa18 -t TCP_RR -T2,2 -l 10 -Cc -- -r 70000,70000
MIGRATED TCP REQUEST/RESPONSE TEST from ::0 (::) port 0 AF_INET6 to lpaa18.prod.google.com () port 0 AF_INET6 : first burst 0 : cpu bind
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

262144 540000 70000   70000  10.00   6591.45  0.86   1.34   62.490  97.446
262144 540000

After: (enabling bigger TSO/GRO packets)

ip link set dev eth0 gso_max_size 185000 gro_max_size 185000

netperf -H lpaa18 -t TCP_RR -T2,2 -l 10 -Cc -- -r 70000,70000
MIGRATED TCP REQUEST/RESPONSE TEST from ::0 (::) port 0 AF_INET6 to lpaa18.prod.google.com () port 0 AF_INET6 : first burst 0 : cpu bind
Local /Remote
Socket Size   Request Resp.  Elapsed Trans.   CPU    CPU    S.dem   S.dem
Send   Recv   Size    Size   Time    Rate     local  remote local   remote
bytes  bytes  bytes   bytes  secs.   per sec  % S    % S    us/Tr   us/Tr

262144 540000 70000   70000  10.00   8383.95  0.95   1.01   54.432  57.584
262144 540000

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoveth: enable BIG TCP packets
Eric Dumazet [Fri, 13 May 2022 18:34:06 +0000 (11:34 -0700)]
veth: enable BIG TCP packets

Set the TSO driver limit to GSO_MAX_SIZE (512 KB).

This allows the admin/user to set a GSO limit up to this value.

ip link set dev veth10 gso_max_size 200000
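In driver terms this is essentially a one-liner in the setup path, along
the lines of (a sketch, assuming the netif_set_tso_max_size() helper from
the recent TSO/GSO limit split):

    /* in veth_setup(): allow gso_max_size to be raised up to 512 KB */
    netif_set_tso_max_size(dev, GSO_MAX_SIZE);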

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: loopback: enable BIG TCP packets
Eric Dumazet [Fri, 13 May 2022 18:34:05 +0000 (11:34 -0700)]
net: loopback: enable BIG TCP packets

Set the driver limit to GSO_MAX_SIZE (512 KB).

This allows the admin/user to set a GSO limit up to this value.

Tested:

ip link set dev lo gso_max_size 200000
netperf -H ::1 -t TCP_RR -l 100 -- -r 80000,80000 &

tcpdump shows :

18:28:42.962116 IP6 ::1 > ::1: HBH 40051 > 63780: Flags [P.], seq 3626480001:3626560001, ack 3626560001, win 17743, options [nop,nop,TS val 3771179265 ecr 3771179265], length 80000
18:28:42.962138 IP6 ::1.63780 > ::1.40051: Flags [.], ack 3626560001, win 17743, options [nop,nop,TS val 3771179265 ecr 3771179265], length 0
18:28:42.962152 IP6 ::1 > ::1: HBH 63780 > 40051: Flags [P.], seq 3626560001:3626640001, ack 3626560001, win 17743, options [nop,nop,TS val 3771179265 ecr 3771179265], length 80000
18:28:42.962157 IP6 ::1.40051 > ::1.63780: Flags [.], ack 3626640001, win 17743, options [nop,nop,TS val 3771179265 ecr 3771179265], length 0
18:28:42.962180 IP6 ::1 > ::1: HBH 40051 > 63780: Flags [P.], seq 3626560001:3626640001, ack 3626640001, win 17743, options [nop,nop,TS val 3771179265 ecr 3771179265], length 80000
18:28:42.962214 IP6 ::1.63780 > ::1.40051: Flags [.], ack 3626640001, win 17743, options [nop,nop,TS val 3771179266 ecr 3771179265], length 0
18:28:42.962228 IP6 ::1 > ::1: HBH 63780 > 40051: Flags [P.], seq 3626640001:3626720001, ack 3626640001, win 17743, options [nop,nop,TS val 3771179266 ecr 3771179265], length 80000
18:28:42.962233 IP6 ::1.40051 > ::1.63780: Flags [.], ack 3626720001, win 17743, options [nop,nop,TS val 3771179266 ecr 3771179266], length 0

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv6: Add hop-by-hop header to jumbograms in ip6_output
Coco Li [Fri, 13 May 2022 18:34:04 +0000 (11:34 -0700)]
ipv6: Add hop-by-hop header to jumbograms in ip6_output

Instead of simply forcing a 0 payload_len in IPv6 header,
implement RFC 2675 and insert a custom extension header.

Note that only the TCP stack currently potentially generates
jumbograms, and that this extension header is purely local;
it won't be sent on a physical link.

This is needed so that packet capture (tcpdump and friends)
can properly dissect these large packets.

Signed-off-by: Coco Li <lixiaoyan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: allow gro_max_size to exceed 65536
Alexander Duyck [Fri, 13 May 2022 18:34:03 +0000 (11:34 -0700)]
net: allow gro_max_size to exceed 65536

Allow the gro_max_size to exceed a value larger than 65536.

There weren't really any external limitations that prevented this other
than the fact that IPv4 only supports a 16 bit length field. Since we have
the option of adding a hop-by-hop header for IPv6 we can allow IPv6 to
exceed this value and for IPv4 and non-TCP flows we can cap things at 65536
via a constant rather than relying on gro_max_size.

[edumazet] limit GRO_MAX_SIZE to (8 * 65535) to avoid overflows.

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv6/gro: insert temporary HBH/jumbo header
Eric Dumazet [Fri, 13 May 2022 18:34:02 +0000 (11:34 -0700)]
ipv6/gro: insert temporary HBH/jumbo header

Following patch will add GRO_IPV6_MAX_SIZE, allowing gro to build
BIG TCP ipv6 packets (bigger than 64K).

This patch changes ipv6_gro_complete() to insert a HBH/jumbo header
so that resulting packet can go through IPv6/TCP stacks.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv6/gso: remove temporary HBH/jumbo header
Eric Dumazet [Fri, 13 May 2022 18:34:01 +0000 (11:34 -0700)]
ipv6/gso: remove temporary HBH/jumbo header

ipv6 tcp and gro stacks will soon be able to build big TCP packets,
with an added temporary Hop By Hop header.

If GSO is involved for these large packets, we need to remove
the temporary HBH header before segmentation happens.

v2: perform HBH removal from ipv6_gso_segment() instead of
    skb_segment() (Alexander feedback)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoipv6: add struct hop_jumbo_hdr definition
Eric Dumazet [Fri, 13 May 2022 18:34:00 +0000 (11:34 -0700)]
ipv6: add struct hop_jumbo_hdr definition

Following patches will need to add and remove local IPv6 jumbogram
options to enable BIG TCP.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotcp_cubic: make hystart_ack_delay() aware of BIG TCP
Eric Dumazet [Fri, 13 May 2022 18:33:59 +0000 (11:33 -0700)]
tcp_cubic: make hystart_ack_delay() aware of BIG TCP

hystart_ack_delay() had the assumption that a TSO packet
would not be bigger than GSO_MAX_SIZE.

This will no longer be true.

We should use sk->sk_gso_max_size instead.

This reduces chances of spurious Hystart ACK train detections.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: limit GSO_MAX_SIZE to 524280 bytes
Eric Dumazet [Fri, 13 May 2022 18:33:58 +0000 (11:33 -0700)]
net: limit GSO_MAX_SIZE to 524280 bytes

Make sure we will not overflow shinfo->gso_segs

Minimal TCP MSS size is 8 bytes, and shinfo->gso_segs
is a 16bit field.

TCP_MIN_GSO_SIZE is currently defined in include/net/tcp.h,
it seems cleaner to not bring tcp details into include/linux/netdevice.h
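In other words the cap follows from: 65535 maximal gso_segs * 8 bytes
minimal MSS = 524280 bytes.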

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: allow gso_max_size to exceed 65536
Alexander Duyck [Fri, 13 May 2022 18:33:57 +0000 (11:33 -0700)]
net: allow gso_max_size to exceed 65536

The code for gso_max_size was added originally to allow for debugging and
workaround of buggy devices that couldn't support TSO with blocks 64K in
size. The original reason for limiting it to 64K was that this was the
existing limit of the IPv4 and non-jumbogram IPv6 length fields.

With the addition of Big TCP we can remove this limit and allow the value
to potentially go up to UINT_MAX and instead be limited by the tso_max_size
value.

So in order to support this we need to go through and clean up the
remaining users of the gso_max_size value so that the values will cap at
64K for non-TCPv6 flows. In addition we can clean up the GSO_MAX_SIZE value
so that 64K becomes GSO_LEGACY_MAX_SIZE and UINT_MAX will now be the upper
limit for GSO_MAX_SIZE.

v6: (edumazet) fixed a compile error if CONFIG_IPV6=n,
               in a new sk_trim_gso_size() helper.
               netif_set_tso_max_size() caps the requested TSO size
               with GSO_MAX_SIZE.

Signed-off-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: add IFLA_TSO_{MAX_SIZE|SEGS} attributes
Eric Dumazet [Fri, 13 May 2022 18:33:56 +0000 (11:33 -0700)]
net: add IFLA_TSO_{MAX_SIZE|SEGS} attributes

New netlink attributes IFLA_TSO_MAX_SIZE and IFLA_TSO_MAX_SEGS
are used to report to user-space the device TSO limits.

ip -d link sh dev eth1
...
   tso_max_size 65536 tso_max_segs 65535

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'Renesas-RSZ-V2M-support'
David S. Miller [Mon, 16 May 2022 09:14:27 +0000 (10:14 +0100)]
Merge branch 'Renesas-RSZ-V2M-support'

Phil Edworthy says:

====================
Add Renesas RZ/V2M Ethernet support

The RZ/V2M Ethernet is very similar to R-Car Gen3 Ethernet-AVB, though
some small parts are the same as R-Car Gen2.
Other differences are:
* It has separate data (DI), error (Line 1) and management (Line 2) irqs
  rather than one irq for all three.
* Instead of using the High-speed peripheral bus clock for gPTP, it has
  a separate gPTP reference clock.

v4:
 * Add clk_disable_unprepare() for gptp ref clk

v3:
 * Really renamed irq_en_dis_regs to irq_en_dis this time
 * Modified ravb_ptp_extts() to use irq_en_dis
 * Added Reviewed-by tags

v2:
 * Just net patches in this series
 * Instead of reusing ch22 and ch24 interrupt names, use the proper names
 * Renamed irq_en_dis_regs to irq_en_dis
 * Squashed use of GIC reg versus GIE/GID and got rid of separate gptp_ptm_gic feature.
 * Move err_mgmt_irqs code under multi_irqs
 * Minor editing of the commit msgs
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoravb: Add support for RZ/V2M
Phil Edworthy [Thu, 12 May 2022 11:47:22 +0000 (12:47 +0100)]
ravb: Add support for RZ/V2M

RZ/V2M Ethernet is very similar to R-Car Gen3 Ethernet-AVB, though
some small parts are the same as R-Car Gen2.
Other differences to R-Car Gen3 and Gen2 are:
* It has separate data (DI), error (Line 1) and management (Line 2) irqs
  rather than one irq for all three.
* Instead of using the High-speed peripheral bus clock for gPTP, it has a
  separate gPTP reference clock.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Reviewed-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoravb: Use separate clock for gPTP
Phil Edworthy [Thu, 12 May 2022 11:47:21 +0000 (12:47 +0100)]
ravb: Use separate clock for gPTP

RZ/V2M has a separate gPTP reference clock that is used when the
AVB-DMAC Mode Register (CCC) gPTP Clock Select (CSEL) bits are
set to "01: High-speed peripheral bus clock".
Therefore, add a feature that allows this clock to be used for
gPTP.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Reviewed-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoravb: Support separate Line0 (Desc), Line1 (Err) and Line2 (Mgmt) irqs
Phil Edworthy [Thu, 12 May 2022 11:47:20 +0000 (12:47 +0100)]
ravb: Support separate Line0 (Desc), Line1 (Err) and Line2 (Mgmt) irqs

R-Car has a combined interrupt line, ch22 = Line0_DiA | Line1_A | Line2_A.
RZ/V2M has separate interrupt lines for each of these, so add a feature
that allows the driver to get these interrupts and call the common handler.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Reviewed-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoravb: Separate handling of irq enable/disable regs into feature
Phil Edworthy [Thu, 12 May 2022 11:47:19 +0000 (12:47 +0100)]
ravb: Separate handling of irq enable/disable regs into feature

Currently, when the HW has a single interrupt, the driver uses the
GIC, TIC, RIC0 registers to enable and disable interrupts.
When the HW has multiple interrupts, it uses the GIE, GID, TIE, TID,
RIE0, RID0 registers.

However, other devices, e.g. RZ/V2M, have multiple irqs and only have
the GIC, TIC, RIC0 registers.
Therefore, split this into a separate feature.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Reviewed-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodt-bindings: net: renesas,etheravb: Document RZ/V2M SoC
Phil Edworthy [Thu, 12 May 2022 11:47:18 +0000 (12:47 +0100)]
dt-bindings: net: renesas,etheravb: Document RZ/V2M SoC

Document the Ethernet AVB IP found on RZ/V2M SoC.
It includes the Ethernet controller (E-MAC) and Dedicated Direct memory
access controller (DMAC) for transferring transmitted Ethernet frames
to and received Ethernet frames from respective storage areas in the
RAM at high speed.
The AVB-DMAC is compliant with IEEE 802.1BA, IEEE 802.1AS timing and
synchronization protocol, IEEE 802.1Qav real-time transfer, and the
IEEE 802.1Qat stream reservation protocol.

R-Car has a pair of combined interrupt lines:
 ch22 = Line0_DiA | Line1_A | Line2_A
 ch23 = Line0_DiB | Line1_B | Line2_B
Line0 for descriptor interrupts (which we call dia and dib).
Line1 for error related interrupts (which we call err_a and err_b).
Line2 for management and gPTP related interrupts (mgmt_a and mgmt_b).

RZ/V2M hardware has separate interrupt lines for each of these.

It has 3 clocks; the main AXI clock, the AMBA CHI (Coherent Hub
Interface) clock and a gPTP reference clock.

Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com>
Reviewed-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next
David S. Miller [Mon, 16 May 2022 09:10:37 +0000 (10:10 +0100)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next

Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

This is v2 including deadlock fix in conntrack ecache rework
reported by Jakub Kicinski.

The following patchset contains Netfilter updates for net-next,
mostly updates to conntrack from Florian Westphal.

1) Add a dedicated list for conntrack event redelivery.

2) Include event redelivery list in conntrack dumps of dying type.

3) Remove per-cpu dying list for event redelivery, not used anymore.

4) Add netns .pre_exit to cttimeout to zap timeout objects before
   synchronize_rcu() call.

5) Remove nf_ct_unconfirmed_destroy.

6) Add generation id for conntrack extensions for conntrack
   timeout and helpers.

7) Detach timeout policy from conntrack on cttimeout module removal.

8) Remove __nf_ct_unconfirmed_destroy.

9) Remove unconfirmed list.

10) Remove unconditional local_bh_disable in init_conntrack().

11) Consolidate conntrack iterator nf_ct_iterate_cleanup().

12) Detect if ctnetlink listeners exist to short-circuit event
    path early.

13) Un-inline nf_ct_ecache_ext_add().

14) Add nf_conntrack_events autodetect ctnetlink listener mode
    and make it default.

15) Add nf_ct_ecache_exist() to check for event cache extension.

16) Extend flowtable reverse route lookup to include source, iif,
    tos and mark, from Sven Auhagen.

17) Do not verify zero checksum UDP packets in nf_reject,
    from Kevin Mitchell.

====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoice: Expose RSS indirection tables for queue groups via ethtool
Sridhar Samudrala [Thu, 12 May 2022 21:32:49 +0000 (14:32 -0700)]
ice: Expose RSS indirection tables for queue groups via ethtool

When ADQ queue groups (TCs) are created via tc mqprio command,
RSS contexts and associated RSS indirection tables are configured
automatically per TC based on the queue ranges specified for
each traffic class.

For example:
tc qdisc add dev enp175s0f0 root mqprio num_tc 3 map 0 1 2 \
queues 2@0 8@2 4@10 hw 1 mode channel

will create 3 queue groups (TC 0-2) with 2, 8 and 4 queues respectively.
Each queue group is associated with its own RSS context and RSS
indirection table.

Add support to expose the RSS indirection tables of all ADQ queue
groups via the ethtool RSS contexts interface:
ethtool -x enp175s0f0 context <tc-num>

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com>
Tested-by: Bharathi Sreenivas <bharathi.sreenivas@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Link: https://lore.kernel.org/r/20220512213249.3747424-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoixgbe: add xdp frags support to ndo_xdp_xmit
Lorenzo Bianconi [Thu, 12 May 2022 21:26:21 +0000 (14:26 -0700)]
ixgbe: add xdp frags support to ndo_xdp_xmit

Add the capability to map non-linear xdp frames in the XDP_TX and
ndo_xdp_xmit callbacks.
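
A rough sketch of the multi-buffer pattern (queue_tx_buffer() and the
surrounding ring context are hypothetical; the xdp_frame accessors are the
generic kernel helpers):

  struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
  u32 nr_frags = xdp_frame_has_frags(xdpf) ? sinfo->nr_frags : 0;
  u32 i;

  queue_tx_buffer(xdpf->data, xdpf->len);          /* linear area first */
  for (i = 0; i < nr_frags; i++) {
          skb_frag_t *frag = &sinfo->frags[i];

          queue_tx_buffer(skb_frag_address(frag),  /* then each fragment */
                          skb_frag_size(frag));
  }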

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Link: https://lore.kernel.org/r/20220512212621.3746140-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoeth: sfc: remove remnants of the out-of-tree napi_weight module param
Jakub Kicinski [Thu, 12 May 2022 20:56:03 +0000 (13:56 -0700)]
eth: sfc: remove remnants of the out-of-tree napi_weight module param

Remove napi_weight statics which are set to 64 and never modified,
remnants of the out-of-tree napi_weight module param.

Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://lore.kernel.org/r/20220512205603.1536771-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec...
Jakub Kicinski [Fri, 13 May 2022 17:25:08 +0000 (10:25 -0700)]
Merge branch 'master' of git://git./linux/kernel/git/klassert/ipsec-next

Steffen Klassert says:

====================
pull request (net-next): ipsec-next 2022-05-13

1) Cleanups for the code behind the XFRM offload API. This is a
   preparation for the extension of the API for policy offload.
   From Leon Romanovsky.

* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next:
  xfrm: drop not needed flags variable in XFRM offload struct
  net/mlx5e: Use XFRM state direction instead of flags
  netdevsim: rely on XFRM state direction instead of flags
  ixgbe: propagate XFRM offload state direction instead of flags
  xfrm: store and rely on direction to construct offload flags
  xfrm: rename xfrm_state_offload struct to allow reuse
  xfrm: delete not used number of external headers
  xfrm: free not used XFRM_ESP_NO_TRAILER flag
====================

Link: https://lore.kernel.org/r/20220513151218.4010119-1-steffen.klassert@secunet.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agosfc: siena: Fix Kconfig dependencies
Ren Zhijie [Fri, 13 May 2022 01:27:21 +0000 (09:27 +0800)]
sfc: siena: Fix Kconfig dependencies

If CONFIG_PTP_1588_CLOCK=m and CONFIG_SFC_SIENA=y, the siena driver will fail to link:

drivers/net/ethernet/sfc/siena/ptp.o: In function `efx_ptp_remove_channel':
ptp.c:(.text+0xa28): undefined reference to `ptp_clock_unregister'
drivers/net/ethernet/sfc/siena/ptp.o: In function `efx_ptp_probe_channel':
ptp.c:(.text+0x13a0): undefined reference to `ptp_clock_register'
ptp.c:(.text+0x1470): undefined reference to `ptp_clock_unregister'
drivers/net/ethernet/sfc/siena/ptp.o: In function `efx_ptp_pps_worker':
ptp.c:(.text+0x1d29): undefined reference to `ptp_clock_event'
drivers/net/ethernet/sfc/siena/ptp.o: In function `efx_siena_ptp_get_ts_info':
ptp.c:(.text+0x301b): undefined reference to `ptp_clock_index'

To fix this build error, make SFC_SIENA depend on PTP_1588_CLOCK.

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: d48523cb88e0 ("sfc: Copy shared files needed for Siena (part 2)")
Signed-off-by: Ren Zhijie <renzhijie2@huawei.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Link: https://lore.kernel.org/r/20220513012721.140871-1-renzhijie2@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonetfilter: conntrack: skip verification of zero UDP checksum
Kevin Mitchell [Sat, 30 Apr 2022 03:40:27 +0000 (20:40 -0700)]
netfilter: conntrack: skip verification of zero UDP checksum

The checksum is optional for UDP packets. However, nf_reject would
previously require a valid checksum to elicit a response such as
ICMP_DEST_UNREACH.

Add some logic to nf_reject_verify_csum to determine if a UDP packet has
a zero checksum and should therefore not be verified.
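
A minimal sketch of such a check (simplified, not the exact nf_reject code):

  /* A UDP packet with a zero checksum carries nothing to verify. */
  static bool udp_csum_needs_verify(const struct sk_buff *skb, int thoff)
  {
          struct udphdr _udph;
          const struct udphdr *uh;

          uh = skb_header_pointer(skb, thoff, sizeof(_udph), &_udph);
          return uh && uh->check != 0;
  }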

Signed-off-by: Kevin Mitchell <kevmitch@arista.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: flowtable: nft_flow_route use more data for reverse route
Sven Auhagen [Wed, 27 Apr 2022 07:15:15 +0000 (09:15 +0200)]
netfilter: flowtable: nft_flow_route use more data for reverse route

When creating a flow table entry, the reverse route is looked
up based on the current packet.
There can be scenarios where the user creates a custom ip rule
to route the traffic differently.
In order to support those scenarios, the lookup needs to add
more information based on the current packet.
The patch adds several new pieces of information (source address, iif,
tos and mark) to the route lookup, as sketched below.
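
For IPv4 this roughly amounts to filling more of the flowi4 key before the
route lookup; a hedged sketch (fl, ct, dir and pkt are assumed to come from
the surrounding nft_flow_route() context, and the iif choice is illustrative):

  fl.u.ip4.saddr       = ct->tuplehash[!dir].tuple.src.u3.ip;
  fl.u.ip4.flowi4_iif  = nft_out(pkt)->ifindex;          /* reverse input dev */
  fl.u.ip4.flowi4_tos  = RT_TOS(ip_hdr(pkt->skb)->tos);
  fl.u.ip4.flowi4_mark = pkt->skb->mark;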

Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: prefer extension check to pointer check
Florian Westphal [Mon, 25 Apr 2022 13:15:44 +0000 (15:15 +0200)]
netfilter: prefer extension check to pointer check

The pointer check usually results in a 'false positive': it is likely
that the ctnetlink module is loaded but no event monitoring is enabled.

After the recent change to autodetect ctnetlink usage and only allocate
the ecache extension if a listener is active, check if the extension
is present on a given conntrack.

If it is not there, there is nothing to report and calls to the
notification framework can be elided.
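
Conceptually the test boils down to a per-conntrack extension lookup along
these lines (a sketch; the helper name follows the one used in this series):

  /* Report events only if the ecache extension was ever attached. */
  static inline bool nf_ct_ecache_exist(const struct nf_conn *ct)
  {
          return nf_ct_ext_exist(ct, NF_CT_EXT_ECACHE);
  }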

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: add nf_conntrack_events autodetect mode
Florian Westphal [Mon, 25 Apr 2022 13:15:43 +0000 (15:15 +0200)]
netfilter: conntrack: add nf_conntrack_events autodetect mode

This adds the new nf_conntrack_events=2 mode and makes it the
default.

This leverages the earlier flag in struct net to avoid allocating
the event extension as long as no event listener is active in
the namespace.

This avoids, for most cases, allocation of ct->ext area.
A followup patch will take further advantage of this by avoiding
calls down into the event framework if the extension isn't present.
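
In autodetect mode the allocation decision roughly looks like this (a hedged
sketch; the listener-flag field name is an assumption):

  switch (net->ct.sysctl_events) {
  case 0:         /* events disabled: never allocate the extension */
          return true;
  case 2:         /* autodetect: skip unless a ctnetlink listener subscribed */
          if (!READ_ONCE(net->ct.ctnetlink_has_listener))
                  return true;
          break;
  default:        /* nf_conntrack_events=1: always allocate */
          break;
  }
  /* otherwise fall through and allocate the ecache extension */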

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: un-inline nf_ct_ecache_ext_add
Florian Westphal [Mon, 25 Apr 2022 13:15:42 +0000 (15:15 +0200)]
netfilter: conntrack: un-inline nf_ct_ecache_ext_add

This is only called when a new ct is allocated or the extension isn't
present.  This function will be extended, so place it in the conntrack
module instead of inlining it.

The callers already depend on the nf_conntrack module.
The return value is changed to bool; no one used the returned pointer.

Make sure that the core drops the newly allocated conntrack
if the extension is requested but can't be added.
This makes it necessary to ifdef the section: since the stub
always returns false, we'd otherwise drop every new conntrack if the
ecache extension is disabled in kconfig.

Adding from the data path (xt_CT, nft_ct) is unchanged.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: nfnetlink: allow to detect if ctnetlink listeners exist
Florian Westphal [Mon, 25 Apr 2022 13:15:41 +0000 (15:15 +0200)]
netfilter: nfnetlink: allow to detect if ctnetlink listeners exist

At this time, every new conntrack gets the 'event cache extension'
enabled for it.

This is because the 'net.netfilter.nf_conntrack_events' sysctl defaults
to 1.

Changing the default to 0 means that commands that rely on the event
notification extension, e.g. 'conntrack -E' or conntrackd, stop working.

We COULD detect if there is a listener by means of
'nfnetlink_has_listeners()' and only add the extension if this is true.

The downside is a dependency from conntrack module to nfnetlink module.

This adds a different way: increment/decrement a counter whenever a
ctnetlink group is (un)subscribed and toggle a flag in struct net.

Next patches will take advantage of this and will only add the event
extension if the flag is set.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: add nf_ct_iter_data object for nf_ct_iterate_cleanup*()
Pablo Neira Ayuso [Fri, 8 Apr 2022 11:10:19 +0000 (13:10 +0200)]
netfilter: conntrack: add nf_ct_iter_data object for nf_ct_iterate_cleanup*()

This patch adds a structure to collect all the context data that is
passed to the cleanup iterator.

 struct nf_ct_iter_data {
       struct net *net;
       void *data;
       u32 portid;
       int report;
 };

There is a netns field that makes it possible to clean up conntrack
entries owned by the specified netns.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: avoid unconditional local_bh_disable
Florian Westphal [Mon, 11 Apr 2022 11:01:25 +0000 (13:01 +0200)]
netfilter: conntrack: avoid unconditional local_bh_disable

Now that the conntrack entry isn't placed on the pcpu list anymore,
the bh only needs to be disabled in the 'expectation present' case.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: remove unconfirmed list
Florian Westphal [Mon, 11 Apr 2022 11:01:24 +0000 (13:01 +0200)]
netfilter: conntrack: remove unconfirmed list

It has no function anymore and can be removed.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: remove __nf_ct_unconfirmed_destroy
Florian Westphal [Mon, 11 Apr 2022 11:01:23 +0000 (13:01 +0200)]
netfilter: conntrack: remove __nf_ct_unconfirmed_destroy

It's not needed anymore:

A. If entry is totally new, then the rcu-protected resource
must already have been removed from global visibility before call
to nf_ct_iterate_destroy.

B. If entry was allocated before, but is not yet in the hash table
   (unconfirmed case), genid gets incremented and the synchronize_rcu() call
   makes sure access has completed.

C. Next attempt to peek at extension area will fail for unconfirmed
  conntracks, because ext->genid != genid.

D. Conntracks in the hash are iterated as before.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: cttimeout: decouple unlink and free on netns destruction
Florian Westphal [Mon, 11 Apr 2022 11:01:22 +0000 (13:01 +0200)]
netfilter: cttimeout: decouple unlink and free on netns destruction

Increment the extid on module removal; this makes sure that even
in extreme cases any old unconfirmed entry that happened to be kept
e.g. on the nfnetlink_queue list will not trip over a stale timeout
reference.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: extensions: introduce extension genid count
Florian Westphal [Mon, 11 Apr 2022 11:01:21 +0000 (13:01 +0200)]
netfilter: extensions: introduce extension genid count

Multiple netfilter extensions store pointers to external data
in their extension area struct.

Examples:
1. Timeout policies
2. Connection tracking helpers.

No references are taken for these.

When a helper or timeout policy is removed, the conntrack table gets
traversed and affected extensions are cleared.

Conntrack entries not yet in the hashtable are referenced via a special
list, the unconfirmed list.

On removal of a policy or connection tracking helper, the unconfirmed
list gets traversed and all entries are marked as dying; this prevents
them from getting committed to the table at insertion time: the core checks
for the dying bit and, if it is set, the conntrack entry gets destroyed at
confirm time.

The disadvantage is that each new conntrack has to be added to the percpu
unconfirmed list, and each insertion needs to remove it from this list.
The list is only ever needed when a policy or helper is removed -- a rare
occurrence.

Add a generation ID count: Instead of adding to the list and then
traversing that list on policy/helper removal, increment a counter
that is stored in the extension area.

For unconfirmed conntracks, the extension stores the genid that was valid
at ct allocation time.

Removal of a helper/policy etc. increments the counter.
At confirmation time, validate that ext->genid == global_id.

If the stored number is not the same, do not allow the conntrack
insertion, just as if a confirmed-list traversal had flagged
the entry as dying.
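
At confirm time the check is conceptually just a comparison of the stored id
with the global counter; a sketch with illustrative names:

  /* Refuse to confirm a conntrack whose extension area is stale. */
  struct ext_area {                       /* stand-in for the extension header */
          u32 gen_id;                     /* 0 once no longer relevant */
  };

  static bool ext_still_valid(const struct ext_area *ext, const atomic_t *genid)
  {
          u32 id = (u32)atomic_read(genid);

          return !ext->gen_id || ext->gen_id == id;
  }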

After insertion, the genid is no longer relevant (conntrack entries
are now reachable via the conntrack table iterators) and it is set to 0.

This allows removal of the percpu unconfirmed list.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: remove nf_ct_unconfirmed_destroy helper
Florian Westphal [Mon, 11 Apr 2022 11:01:20 +0000 (13:01 +0200)]
netfilter: remove nf_ct_unconfirmed_destroy helper

This helper tags connections not yet in the conntrack table as
dying.  These nf_conn entries will be dropped instead when the
core attempts to insert them from the input or postrouting
'confirm' hook.

After the previous change, the entries get unlinked from the
list earlier, so that by the time the actual exit hook runs,
new connections no longer have a timeout policy assigned.

It's enough to walk the hashtable instead.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: cttimeout: decouple unlink and free on netns destruction
Florian Westphal [Mon, 11 Apr 2022 11:01:19 +0000 (13:01 +0200)]
netfilter: cttimeout: decouple unlink and free on netns destruction

Make it so netns pre_exit unlinks the objects from the pernet list, so
they cannot be found anymore.

The netns core issues a synchronize_rcu() before calling the exit hooks, so
by the time the exit hooks run, unconfirmed nf_conn entries have either been
freed or committed to the hashtable.
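
The shape of the change follows the generic pernet_operations split; a sketch
with hypothetical function bodies:

  /* Unlink in .pre_exit, free in .exit; the netns core runs
   * synchronize_rcu() between the two phases.
   */
  static void cttimeout_pre_exit_sketch(struct net *net)
  {
          /* unlink timeout objects from the pernet list */
  }

  static void cttimeout_exit_sketch(struct net *net)
  {
          /* free the objects; unconfirmed entries can no longer reference them */
  }

  static struct pernet_operations cttimeout_ops_sketch = {
          .pre_exit = cttimeout_pre_exit_sketch,
          .exit     = cttimeout_exit_sketch,
  };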

The exit hook still tags unconfirmed entries as dying; this can
now be removed in a followup change.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: remove the percpu dying list
Florian Westphal [Mon, 11 Apr 2022 11:01:18 +0000 (13:01 +0200)]
netfilter: conntrack: remove the percpu dying list

It's no longer needed. Entries that need event redelivery are placed
on the new pernet dying list.

The advantage is that there is no need to take an additional spinlock on
conntrack removal unless event redelivery failed or the conntrack entry
was never added to the table in the first place (confirmed bit not set).

The IPS_CONFIRMED bit now needs to be set as soon as the entry has been
unlinked from the unconfirmed list, else the destroy function may
attempt to unlink it a second time.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: conntrack: include ecache dying list in dumps
Florian Westphal [Mon, 11 Apr 2022 11:01:17 +0000 (13:01 +0200)]
netfilter: conntrack: include ecache dying list in dumps

The new pernet dying list includes conntrack entries that await
delivery of the 'destroy' event via ctnetlink.

The old percpu dying list will be removed soon.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agonetfilter: ecache: use dedicated list for event redelivery
Florian Westphal [Mon, 11 Apr 2022 11:01:16 +0000 (13:01 +0200)]
netfilter: ecache: use dedicated list for event redelivery

This disentangles event redelivery and the percpu dying list.

Because entries are now stored on a dedicated list, all
entries are in NFCT_ECACHE_DESTROY_FAIL state and all entries
still have confirmed bit set -- the reference count is at least 1.

The 'struct net' back-pointer can be removed as well.

The pcpu dying list will be removed eventually; it no longer has any function.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2 years agoMerge branch 'bnxt_en-next'
David S. Miller [Fri, 13 May 2022 11:47:41 +0000 (12:47 +0100)]
Merge branch 'bnxt_en-next'

Michael Chan says:

====================
bnxt_en: Updates for net-next

This small patchset updates the firmware interface, adds timestamping
support for all receive packets, and adds revised NVRAM package error
messages for ethtool and devlink.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt_en: parse and report result field when NVRAM package install fails
Kalesh AP [Fri, 13 May 2022 02:40:24 +0000 (22:40 -0400)]
bnxt_en: parse and report result field when NVRAM package install fails

Instead of always returning -ENOPKG, decode the firmware error
code further when the HWRM_NVM_INSTALL_UPDATE firmware call fails.
Return a more suitable error code to userspace and log an error
in dmesg.

This is version 2 of the earlier patch that was reverted:

02acd399533e ("bnxt_en: parse result field when NVRAM package install fails")

In this new version, if the call is made through devlink instead of
ethtool, we'll also set the error message in extack.
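
A hedged sketch of such decoding (the result codes and messages here are made
up for illustration, not the actual firmware values):

  /* Translate a firmware result code into an errno plus extack message. */
  static int nvm_result_to_errno_sketch(u8 result, struct netlink_ext_ack *extack)
  {
          switch (result) {
          case 0:
                  return 0;
          case 1:                         /* hypothetical "no space" code */
                  NL_SET_ERR_MSG_MOD(extack, "Not enough NVRAM space");
                  return -ENOSPC;
          default:
                  NL_SET_ERR_MSG_MOD(extack, "NVRAM package install failed");
                  return -ENOPKG;
          }
  }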

Link: https://lore.kernel.org/netdev/20220307141358.4d52462e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt_en: Enable packet timestamping for all RX packets
Pavan Chebbi [Fri, 13 May 2022 02:40:23 +0000 (22:40 -0400)]
bnxt_en: Enable packet timestamping for all RX packets

Add driver support to enable timestamping on all RX packets
that are received by the NIC. This capability can be requested
by applications using the SIOCSHWTSTAMP ioctl with filter type
HWTSTAMP_FILTER_ALL.
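
From userspace that request is the standard hwtstamp_config ioctl; roughly
(the socket and device name are just examples):

  /* Ask the NIC to timestamp every received packet. */
  #include <string.h>
  #include <sys/ioctl.h>
  #include <net/if.h>
  #include <linux/net_tstamp.h>
  #include <linux/sockios.h>

  static int enable_rx_timestamps(int sock, const char *dev)
  {
          struct hwtstamp_config cfg = {
                  .tx_type   = HWTSTAMP_TX_OFF,
                  .rx_filter = HWTSTAMP_FILTER_ALL,
          };
          struct ifreq ifr;

          memset(&ifr, 0, sizeof(ifr));
          strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
          ifr.ifr_data = (char *)&cfg;

          return ioctl(sock, SIOCSHWTSTAMP, &ifr);
  }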

Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt_en: Configure ptp filters during bnxt open
Pavan Chebbi [Fri, 13 May 2022 02:40:22 +0000 (22:40 -0400)]
bnxt_en: Configure ptp filters during bnxt open

For correctness, we need to configure the packet filters for timestamping
during bnxt_open.  This way they are always configured after firmware
reset or chip reset.  We should not assume that the filters will always
be retained across resets.

This patch modifies the ioctl handler and always configures the PTP
filters in the bnxt_open() path.

Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt_en: Update firmware interface to 1.10.2.95
Michael Chan [Fri, 13 May 2022 02:40:21 +0000 (22:40 -0400)]
bnxt_en: Update firmware interface to 1.10.2.95

The main changes are timestamp support for all RX packets and new PCIe
statistics.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: axienet: Use NAPI for TX completion path
Robert Hancock [Thu, 12 May 2022 17:18:53 +0000 (11:18 -0600)]
net: axienet: Use NAPI for TX completion path

This driver was using the TX IRQ handler to perform all TX completion
tasks. Under heavy TX network load, this can cause significant irqs-off
latencies (found to be in the hundreds of microseconds using ftrace).
This can cause other issues, such as overrunning serial UART FIFOs when
using high baud rates with limited UART FIFO sizes.

Switch to using a NAPI poll handler to perform the TX completion work
to get this out of hard IRQ context and avoid the IRQ latency impact.
A separate poll handler is used for TX and RX since they have separate
IRQs on this controller, so that the completion work for each of them
stays on the same CPU as the interrupt.
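
The TX side then follows the usual NAPI shape; a sketch where the queue
structure and the ring/IRQ helpers are hypothetical:

  struct my_tx_queue {                    /* hypothetical per-queue state */
          struct napi_struct napi;
  };

  /* Reclaim completed TX descriptors from NAPI (softirq) context. */
  static int tx_poll_sketch(struct napi_struct *napi, int budget)
  {
          struct my_tx_queue *q = container_of(napi, struct my_tx_queue, napi);
          int done = my_clean_tx_ring(q, budget);   /* free sent buffers */

          if (done < budget && napi_complete_done(napi, done))
                  my_enable_tx_irq(q);              /* re-arm the TX interrupt */

          return done;
  }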

Testing on a Xilinx MPSoC ZU9EG platform using iperf3 from a Linux PC
through a switch at 1G link speed showed no significant change in TX or
RX throughput, with approximately 941 Mbps before and after. Hard IRQ
time in the TX throughput test was significantly reduced from 12% to
below 1% on the CPU handling TX interrupts, with total hard+soft IRQ CPU
usage dropping from about 56% down to 48%.

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: axienet: Be more careful about updating tx_bd_tail
Robert Hancock [Thu, 12 May 2022 17:18:52 +0000 (11:18 -0600)]
net: axienet: Be more careful about updating tx_bd_tail

The axienet_start_xmit function was updating the tx_bd_tail variable
multiple times, with potential rollbacks on error or invalid
intermediate positions, even though this variable is also used in the
TX completion path. Use READ_ONCE where this variable is read and
WRITE_ONCE where it is written to make this update more atomic, and
move the write before the MMIO write to start the transfer, so it is
protected by that implicit write barrier.
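
The ordering concern, sketched with a hypothetical doorbell helper:

  /* Compute locally, publish the index once, then kick the hardware. */
  struct txq_sketch {
          u32 tx_bd_tail;                 /* also read by the TX completion path */
  };

  static void kick_tx_sketch(struct txq_sketch *q, u32 new_tail,
                             void __iomem *doorbell)
  {
          WRITE_ONCE(q->tx_bd_tail, new_tail);   /* single, final update */
          writel(new_tail, doorbell);            /* MMIO write ordered after it */
  }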

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Signed-off-by: David S. Miller <davem@davemloft.net>