platform/kernel/linux-starfive.git
3 years ago  net/mlx5: Use ida_alloc_range() instead of ida_simple_alloc()
Roi Dayan [Mon, 15 Mar 2021 11:31:40 +0000 (13:31 +0200)]
net/mlx5: Use ida_alloc_range() instead of ida_simple_alloc()

The ida_simple_get() and ida_simple_remove() functions are deprecated.
Related change:
commit 3264ceec8f17 ("lib/idr.c: document that ida_simple_{get,remove}() are deprecated")
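For reference, a minimal conversion sketch (the ida and its bounds below are made up, not taken from the mlx5 driver); note that ida_simple_get() takes an exclusive 'end' while ida_alloc_range() takes an inclusive 'max':

#include <linux/idr.h>

static DEFINE_IDA(example_ida);

/* Before: deprecated API, 'end' (64) is exclusive. */
static int example_get_id_old(void)
{
        return ida_simple_get(&example_ida, 0, 64, GFP_KERNEL);
}

/* After: preferred API, 'max' is inclusive, so 64 becomes 63. */
static int example_get_id_new(void)
{
        return ida_alloc_range(&example_ida, 0, 63, GFP_KERNEL);
}

static void example_put_id(int id)
{
        ida_free(&example_ida, id);     /* replaces ida_simple_remove() */
}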

Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: E-Switch, move QoS specific fields to existing qos struct
Parav Pandit [Wed, 3 Feb 2021 13:32:50 +0000 (15:32 +0200)]
net/mlx5: E-Switch, move QoS specific fields to existing qos struct

Function QoS related fields are already defined in the qos struct; only
the min and max rate were left out in the mlx5_vport_info struct.

Move them to the existing qos struct.

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: E-Switch, cut down mlx5_vport_info structure size by 8 bytes
Parav Pandit [Wed, 3 Feb 2021 04:59:32 +0000 (06:59 +0200)]
net/mlx5: E-Switch, cut down mlx5_vport_info structure size by 8 bytes

Structure mlx5_vport_info consumes 40 bytes of space due to a hole
in it. After packing it reduces to 32 bytes.

Currently:
pahole -C mlx5_vport_info drivers/net/ethernet/mellanox/mlx5/core/eswitch.o
struct mlx5_vport_info {
        u8                         mac[6];               /*     0     6 */
        u16                        vlan;                 /*     6     2 */
        u8                         qos;                  /*     8     1 */

        /* XXX 7 bytes hole, try to pack */

        u64                        node_guid;            /*    16     8 */
        int                        link_state;           /*    24     4 */
        u32                        min_rate;             /*    28     4 */
        u32                        max_rate;             /*    32     4 */
        bool                       spoofchk;             /*    36     1 */
        bool                       trusted;              /*    37     1 */

        /* size: 40, cachelines: 1, members: 9 */
        /* sum members: 31, holes: 1, sum holes: 7 */
        /* padding: 2 */
        /* last cacheline: 40 bytes */
};

After packing:

$ pahole -C mlx5_vport_info drivers/net/ethernet/mellanox/mlx5/core/eswitch.o

struct mlx5_vport_info {
        u8                         mac[6];               /*     0     6 */
        u16                        vlan;                 /*     6     2 */
        u64                        node_guid;            /*     8     8 */
        int                        link_state;           /*    16     4 */
        u32                        min_rate;             /*    20     4 */
        u32                        max_rate;             /*    24     4 */
        u8                         qos;                  /*    28     1 */
        u8                         spoofchk:1;           /*    29: 0  1 */
        u8                         trusted:1;            /*    29: 1  1 */

        /* size: 32, cachelines: 1, members: 9 */
        /* padding: 2 */
        /* bit_padding: 6 bits */
        /* last cacheline: 32 bytes */
};

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Pair mutex_destory with mutex_init for rate limit table
Parav Pandit [Fri, 19 Feb 2021 07:36:54 +0000 (09:36 +0200)]
net/mlx5: Pair mutex_destory with mutex_init for rate limit table

Add missing mutex_destroy() to pair with mutex_init().

This should be done only when the table is initialized; hence, perform
mutex_init() only when the table is initialized.
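Illustratively (hypothetical names, not the actual mlx5 rate-limit code), the pairing looks like:

#include <linux/mutex.h>
#include <linux/types.h>

struct example_rl_table {
        struct mutex lock;      /* protects the rate entries */
        bool initialized;
};

static void example_table_init(struct example_rl_table *tbl)
{
        mutex_init(&tbl->lock);
        tbl->initialized = true;
}

static void example_table_cleanup(struct example_rl_table *tbl)
{
        if (!tbl->initialized)
                return;
        /* pair every mutex_init() with a mutex_destroy() */
        mutex_destroy(&tbl->lock);
        tbl->initialized = false;
}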

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Allocate rate limit table when rate is configured
Parav Pandit [Fri, 19 Feb 2021 10:06:54 +0000 (12:06 +0200)]
net/mlx5: Allocate rate limit table when rate is configured

A device supports 128 rate limiters. A static table allocation consumes
8KB of memory even when no rate is configured.

Instead, allocate the table only when at least one rate is configured.
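A minimal sketch of the on-demand allocation (hypothetical names, not the actual mlx5 code):

#include <linux/slab.h>
#include <linux/types.h>

struct example_rl_entry_lazy { u32 rate; };

struct example_rl_table_lazy {
        struct example_rl_entry_lazy *entries;  /* NULL until a rate is configured */
};

/* Allocate the 128-entry array only when the first rate is configured. */
static int example_rl_table_get(struct example_rl_table_lazy *tbl)
{
        if (tbl->entries)
                return 0;

        tbl->entries = kcalloc(128, sizeof(*tbl->entries), GFP_KERNEL);
        return tbl->entries ? 0 : -ENOMEM;
}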

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Use helper to increment, decrement rate entry refcount
Parav Pandit [Thu, 25 Feb 2021 11:38:07 +0000 (13:38 +0200)]
net/mlx5: Use helper to increment, decrement rate entry refcount

The rate limit entry refcount can be incremented uniformly whether the
entry is newly allocated or reused, so simplify the code to increment
the refcount in one place.

Use the decrement refcount helper in two routines.
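Schematically (hypothetical helpers, not the actual driver functions; the table mutex is assumed to be held by the caller):

#include <linux/types.h>

struct example_rl_entry {
        u64 refcount;
        u32 rate;
};

/* Single place where the refcount is taken, whether the entry is new or reused. */
static void example_rl_entry_get(struct example_rl_entry *entry)
{
        entry->refcount++;
}

static void example_rl_entry_put(struct example_rl_entry *entry)
{
        if (--entry->refcount)
                return;
        entry->rate = 0;        /* last user gone, entry can be released */
}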

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Use helpers to allocate and free rl table entries
Parav Pandit [Fri, 19 Feb 2021 07:29:56 +0000 (09:29 +0200)]
net/mlx5: Use helpers to allocate and free rl table entries

Use helper routines to allocate and free rate limit table entries.
A subsequent patch extends the use of these helpers to do allocation
during the rate entry allocation callback.

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Do not hold mutex while reading table constants
Parav Pandit [Wed, 24 Feb 2021 09:04:19 +0000 (11:04 +0200)]
net/mlx5: Do not hold mutex while reading table constants

Table max_size, min and max rate are constants initialized when the
table is created. Reading them doesn't require holding the table mutex.
Hence, read them without holding the table mutex.
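Schematically (hypothetical struct and helper, not the actual mlx5 code):

#include <linux/mutex.h>
#include <linux/types.h>

struct example_rl_table_ro {
        u32 max_size;   /* constants, set once at table creation */
        u32 min_rate;
        u32 max_rate;
        struct mutex lock;      /* protects the mutable entries only */
};

static bool example_rate_is_supported(const struct example_rl_table_ro *tbl,
                                      u32 rate)
{
        /* No mutex needed: these fields are immutable after init. */
        return tbl->max_size && rate >= tbl->min_rate && rate <= tbl->max_rate;
}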

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Pack mlx5_rl_entry structure
Parav Pandit [Fri, 19 Feb 2021 06:18:12 +0000 (08:18 +0200)]
net/mlx5: Pack mlx5_rl_entry structure

The mlx5_rl_entry structure is not properly packed, as shown below. Due
to this, an array of size 9144 bytes is allocated, which gets aligned up
to 16 Kbytes. Hence, pack the structure and avoid the wastage.

This offers 8 Kbytes of savings per mlx5_core_dev struct.

pahole -C mlx5_rl_entry  drivers/net/ethernet/mellanox/mlx5/core/en_main.o

Existing layout:

struct mlx5_rl_entry {
        u8                         rl_raw[48];           /*     0    48 */
        u16                        index;                /*    48     2 */

        /* XXX 6 bytes hole, try to pack */

        u64                        refcount;             /*    56     8 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        u16                        uid;                  /*    64     2 */
        u8                         dedicated:1;          /*    66: 0  1 */

        /* size: 72, cachelines: 2, members: 5 */
        /* sum members: 60, holes: 1, sum holes: 6 */
        /* sum bitfield members: 1 bits (0 bytes) */
        /* padding: 5 */
        /* bit_padding: 7 bits */
        /* last cacheline: 8 bytes */
};

After alignment:

struct mlx5_rl_entry {
        u8                         rl_raw[48];           /*     0    48 */
        u64                        refcount;             /*    48     8 */
        u16                        index;                /*    56     2 */
        u16                        uid;                  /*    58     2 */
        u8                         dedicated:1;          /*    60: 0  1 */

        /* size: 64, cachelines: 1, members: 5 */
        /* padding: 3 */
        /* bit_padding: 7 bits */
};

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: Use unsigned int for free_count
Parav Pandit [Tue, 9 Feb 2021 07:44:27 +0000 (09:44 +0200)]
net/mlx5: Use unsigned int for free_count

Fix the following checkpatch warning caused by the missing 'int':

WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
+       unsigned free_count;

Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net/mlx5: CT: Add support for matching on ct_state inv and rel flags
Ariel Levkovich [Mon, 15 Mar 2021 17:59:52 +0000 (19:59 +0200)]
net/mlx5: CT: Add support for matching on ct_state inv and rel flags

Add support for matching on ct_state inv and rel flags.

Currently the support is only for match on -inv and -rel.
Matching on +inv and +rel will be rejected.

Example:
$ tc filter add dev ens1f0_0 ingress prio 1 chain 1 proto ip flower \
  ct_state -est-rel+trk \
  action mirred egress redirect dev ens1f0_1
$ tc filter add dev ens1f0_1 ingress prio 1 chain 1 proto ip flower \
  ct_state +trk+est-inv \
  action mirred egress redirect dev ens1f0_0

Signed-off-by: Ariel Levkovich <lariel@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
3 years ago  net: usb: ax88179_178a: initialize local variables before use
Phillip Potter [Thu, 1 Apr 2021 22:36:07 +0000 (23:36 +0100)]
net: usb: ax88179_178a: initialize local variables before use

Use memset() to initialize a local array in drivers/net/usb/ax88179_178a.c,
and also set a local u16 and a u32 variable to 0. This fixes a KMSAN
uninit-value bug reported by syzbot at:
https://syzkaller.appspot.com/bug?id=00371c73c72f72487c1d0bfe0cc9d00de339d5aa
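Illustratively (hypothetical function, not the actual ax88179_178a code), the pattern is:

#include <linux/string.h>
#include <linux/types.h>

static void example_read_regs(void)
{
        u8 buf[6];
        u16 reg16 = 0;  /* initialize scalars the device may fail to fill */
        u32 reg32 = 0;

        memset(buf, 0, sizeof(buf));    /* avoid passing uninitialized bytes around */

        /* ... device reads into buf/reg16/reg32 may fail and leave them untouched ... */
        (void)buf; (void)reg16; (void)reg32;
}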

Reported-by: syzbot+4993e4a0e237f1b53747@syzkaller.appspotmail.com
Signed-off-by: Phillip Potter <phil@philpotter.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: phy: broadcom: Add statistics for all Gigabit PHYs
Florian Fainelli [Thu, 1 Apr 2021 16:42:33 +0000 (09:42 -0700)]
net: phy: broadcom: Add statistics for all Gigabit PHYs

All Gigabit PHYs use the same register layout as far as fetching
statistics goes. Fast Ethernet PHYs do not all support statistics, and
the BCM54616S would require some switching between the copper and fiber
modes to fetch the appropriate statistics, which is not supported yet.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: document a side effect of ip_local_reserved_ports
Otto Hollmann [Thu, 1 Apr 2021 15:57:05 +0000 (17:57 +0200)]
net: document a side effect of ip_local_reserved_ports

If there is overlap between ip_local_port_range and ip_local_reserved_ports,
and the reserved block is huge, it skews the probability of selecting
ephemeral ports; see net/ipv4/inet_hashtables.c:723:

    int __inet_hash_connect(
    ...
            for (i = 0; i < remaining; i += 2, port += 2) {
                    if (unlikely(port >= high))
                            port -= remaining;
                    if (inet_is_local_reserved_port(net, port))
                            continue;

E.g. if there is a reserved block of 10000 ports, the two ports right after
this block will be about 5000 times more likely to be selected than others.
If this was intended, we can/should add a note into the documentation as
proposed in this commit; otherwise we should think about a different
solution. One option could be a mapping table of contiguous port ranges.
Another option could be letting the user modify the step (port += 2) in the
above loop, e.g. using a new sysctl parameter.

Signed-off-by: Otto Hollmann <otto.hollmann@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  lan743x: remove redundant semi-colon
Yang Yingliang [Thu, 1 Apr 2021 14:20:15 +0000 (22:20 +0800)]
lan743x: remove redundant semi-colon

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: hns: Fix some typos
Lu Wei [Thu, 1 Apr 2021 09:27:01 +0000 (17:27 +0800)]
net: hns: Fix some typos

Fix some typos.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Lu Wei <luwei32@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: smc: Remove repeated struct declaration
Wan Jiabing [Thu, 1 Apr 2021 08:40:29 +0000 (16:40 +0800)]
net: smc: Remove repeated struct declaration

struct smc_clc_msg_local is declared twice. One declaration is at
line 301. The one below is not needed. Remove the duplicate.

Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  include: net: Remove repeated struct declaration
Wan Jiabing [Thu, 1 Apr 2021 07:08:22 +0000 (15:08 +0800)]
include: net: Remove repeated struct declaration

struct ctl_table_header is declared twice. One declaration is at
line 46. The one below is not needed. Remove the duplicate.

Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: stmmac: remove unnecessary pci_enable_msi() call
Wong Vee Khee [Thu, 1 Apr 2021 06:06:28 +0000 (14:06 +0800)]
net: stmmac: remove unnecessary pci_enable_msi() call

The commit d2a029bde37b ("stmmac: pci: add MSI support for Intel Quark
X1000") introduced a pci_enable_msi() call in stmmac_pci.c.

With the commit 58da0cfa6cf1 ("net: stmmac: create dwmac-intel.c to
contain all Intel platform"), Intel Quark platform related codes
have been moved to the newly created driver.

Remove this unnecessary pci_enable_msi() call as there are no other
devices that use stmmac-pci and need MSI to be enabled.

Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  stmmac: intel: use managed PCI function on probe and resume
Wong Vee Khee [Thu, 1 Apr 2021 06:02:50 +0000 (14:02 +0800)]
stmmac: intel: use managed PCI function on probe and resume

Update dwmac-intel to use the managed function, i.e. pcim_enable_device().

This will allow the devres framework to call the resource free function
for us.
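A minimal sketch of a managed probe (device-specific setup omitted; not the actual dwmac-intel code):

#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        int ret;

        /* Managed variant: devres disables the device automatically on
         * driver detach or probe failure, so no explicit
         * pci_disable_device() is needed in error/remove paths.
         */
        ret = pcim_enable_device(pdev);
        if (ret)
                return ret;

        pci_set_master(pdev);
        return 0;
}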

Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: ipv6: Refactor in rt6_age_examine_exception
Xu Jia [Thu, 1 Apr 2021 03:22:23 +0000 (11:22 +0800)]
net: ipv6: Refactor in rt6_age_examine_exception

The logic in rt6_age_examine_exception is confusing. Refactor the code
to make it easier to follow.

Signed-off-by: Xu Jia <xujia39@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  tipc: fix unique bearer names sanity check
Hoang Le [Thu, 1 Apr 2021 02:30:48 +0000 (09:30 +0700)]
tipc: fix unique bearer names sanity check

When enabling a bearer by name, we don't sanity-check its name against
bearers in higher slots of the bearer list. This may have the effect
that the name of an already enabled bearer bypasses the check.

To fix the above issue, we just perform an extra check against all
existing bearers.

Fixes: cb30a63384bc9 ("tipc: refactor function tipc_enable_bearer()")
Cc: stable@vger.kernel.org
Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next...
David S. Miller [Thu, 1 Apr 2021 22:41:08 +0000 (15:41 -0700)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2021-03-31

This series contains updates to ice driver only.

Benita adds support for XPS.

Ani moves netdev registration to the end of probe to prevent use before
the interface is ready and moves up an error check to possibly avoid
an unneeded call. He also consolidates the VSI state and flag fields to
a single field.

Dan changes the segment where package information is pulled.

Paul S ensures correct ITR values are set when increasing ring size.

Paul G rewords a link misconfiguration message as this could be
expected.

Bruce removes setting an unnecessary AQ flag and corrects a memory
allocation call. Also fixes checkpatch issues for 'COMPLEX_MACRO'.

Qi aligns PTYPE bitmap naming by adding 'ptype' prefix to the bitmaps
missing it.

Brett removes limiting Rx queue mapping to RSS size as there is not a
dependency on this. He also refactors RSS configuration by introducing
individual functions for LUT and key configuration and by passing a
structure containing pertinent information instead of individual
arguments.

Tony corrects a comment block to follow netdev style.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  selftests/net: so_txtime multi-host support
Carlos Llamas [Thu, 1 Apr 2021 00:40:20 +0000 (00:40 +0000)]
selftests/net: so_txtime multi-host support

SO_TXTIME hardware offload requires testing across devices, either
between machines or separate network namespaces.

Split up SO_TXTIME test into tx and rx modes, so traffic can be
sent from one process to another. Create a veth-pair on different
namespaces and bind each process to an endpoint via the [-S]ource and
[-D]estination parameters. An optional start [-t]ime parameter can be
passed to synchronize the test across the hosts (with synchronized
clocks).

Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: mediatek: add flow offload for mt7623
Frank Wunderlich [Wed, 31 Mar 2021 13:34:37 +0000 (15:34 +0200)]
net: mediatek: add flow offload for mt7623

mt7623 uses offload version 2 too.

Tested on a Bananapi-R2.

Signed-off-by: Frank Wunderlich <frank-w@public-files.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: stmmac: enable MTL ECC Error Address Status Over-ride by default
Voon Weifeng [Wed, 31 Mar 2021 16:18:25 +0000 (00:18 +0800)]
net: stmmac: enable MTL ECC Error Address Status Over-ride by default

Turn on the MEEAO field of MTL_ECC_Control_Register by default.

As the MTL ECC Error Address Status Over-ride(MEEAO) is set by default,
the following error address fields will hold the last valid address
where the error is detected.

Signed-off-by: Voon Weifeng <weifeng.voon@intel.com>
Signed-off-by: Tan Tee Min <tee.min.tan@intel.com>
Co-developed-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  Merge branch 'nxp-enetc-xdp'
David S. Miller [Wed, 31 Mar 2021 21:57:44 +0000 (14:57 -0700)]
Merge branch 'nxp-enetc-xdp'

Vladimir Oltean says:

====================
XDP for NXP ENETC

This series adds support to the enetc driver for the basic XDP primitives.
The ENETC is a network controller found inside the NXP LS1028A SoC,
which is a dual-core Cortex A72 device for industrial networking,
with the CPUs clocked at up to 1.3 GHz. On this platform, there are 4
ENETC ports and a 6-port embedded DSA switch, in a topology that looks
like this:

  +-------------------------------------------------------------------------+
  |                    +--------+ 1 Gbps (typically disabled)               |
  | ENETC PCI          |  ENETC |--------------------------+                |
  | Root Complex       | port 3 |-----------------------+  |                |
  | Integrated         +--------+                       |  |                |
  | Endpoint                                            |  |                |
  |                    +--------+ 2.5 Gbps              |  |                |
  |                    |  ENETC |--------------+        |  |                |
  |                    | port 2 |-----------+  |        |  |                |
  |                    +--------+           |  |        |  |                |
  |                                         |  |        |  |                |
  |                        +------------------------------------------------+
  |                        |             |  Felix |  |  Felix |             |
  |                        | Switch      | port 4 |  | port 5 |             |
  |                        |             +--------+  +--------+             |
  |                        |                                                |
  | +--------+  +--------+ | +--------+  +--------+  +--------+  +--------+ |
  | |  ENETC |  |  ENETC | | |  Felix |  |  Felix |  |  Felix |  |  Felix | |
  | | port 0 |  | port 1 | | | port 0 |  | port 1 |  | port 2 |  | port 3 | |
  +-------------------------------------------------------------------------+
         |          |             |           |            |          |
         v          v             v           v            v          v
       Up to      Up to                      Up to 4x 2.5Gbps
      2.5Gbps     1Gbps

The ENETC ports 2 and 3 can act as DSA masters for the embedded switch.
Because 4 out of the 6 externally-facing ports of the SoC are switch
ports, the most interesting use case for XDP on this device is in fact
XDP_TX on the 2.5Gbps DSA master.

Nonetheless, the results presented below are for IPv4 forwarding between
ENETC port 0 (eno0) and port 1 (eno1) both configured for 1Gbps.
There are two streams of IPv4/UDP datagrams with a frame length of 64
octets delivered at 100% port load to eno0 and to eno1. eno0 has a flow
steering rule to process the traffic on RX ring 0 (CPU 0), and eno1 has
a flow steering rule towards RX ring 1 (CPU 1).

For the IPFWD test, standard IP routing was enabled in the netns.
For the XDP_DROP test, the samples/bpf/xdp1 program was attached to both
eno0 and to eno1.
For the XDP_TX test, the samples/bpf/xdp2 program was attached to both
eno0 and to eno1.
For the XDP_REDIRECT test, the samples/bpf/xdp_redirect program was
attached once to the input of eno0/output of eno1, and twice to the
input of eno1/output of eno0.

Finally, the preliminary results are as follows:

        | IPFWD | XDP_TX | XDP_REDIRECT | XDP_DROP
--------+-------+--------+--------------+---------
fps     | 761   | 2535   | 1735         | 2783
Gbps    | 0.51  | 1.71   | 1.17         | n/a

There is a strange phenomenon in my testing system where it appears that
one CPU is processing more than the other. I have not investigated this
too much. Also, the code might not be very well optimized (for example,
dma_sync_for_device is called with the full ENETC_RXB_DMA_SIZE_XDP).

Design wise, the ENETC is a PCI device with BD rings, so it uses the
MEM_TYPE_PAGE_SHARED memory model, as can typically be seen in Intel
devices. The strategy was to build upon the existing model that the
driver uses, and not change it too much. So you will see things like a
separate NAPI poll function for XDP.

I have only tested with PAGE_SIZE=4096, and since we split pages in
half, it means that MTU-sized frames are scatter/gather (the XDP
headroom + skb_shared_info only leaves us 1476 bytes of data per
buffer). This is sub-optimal, but I would rather keep it this way and
help speed up Lorenzo's series for S/G support through testing, rather
than change the enetc driver to use some other memory model like page_pool.
My code is already structured for S/G, and that works fine for XDP_DROP
and XDP_TX, just not for XDP_REDIRECT, even between two enetc ports.
So the S/G XDP_REDIRECT is stubbed out (the frames are dropped), but
obviously I would like to remove that limitation soon.

Please note that I am rather new to this kind of stuff, I am more of a
control path person, so I would appreciate feedback.

Enough talking, on to the patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: add support for XDP_REDIRECT
Vladimir Oltean [Wed, 31 Mar 2021 20:08:57 +0000 (23:08 +0300)]
net: enetc: add support for XDP_REDIRECT

The driver implementation of the XDP_REDIRECT action reuses parts from
XDP_TX, most notably the enetc_xdp_tx function which transmits an array
of TX software BDs. Only this time, the buffers don't have DMA mappings,
we need to create them.

When a BPF program reaches the XDP_REDIRECT verdict for a frame, we can
employ the same buffer reuse strategy as for the normal processing path
and for XDP_PASS: we can flip to the other page half and seed that to
the RX ring.

Note that scatter/gather support is there, but disabled due to the lack
of multi-buffer support in XDP (which is being added by the series below):
https://patchwork.kernel.org/project/netdevbpf/cover/cover.1616179034.git.lorenzo@kernel.org/

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: increase RX ring default size
Vladimir Oltean [Wed, 31 Mar 2021 20:08:56 +0000 (23:08 +0300)]
net: enetc: increase RX ring default size

As explained in the XDP_TX patch, when receiving a burst of frames with
the XDP_TX verdict, there is a momentary dip in the number of available
RX buffers. The system will eventually recover as TX completions will
start kicking in and refilling our RX BD ring again. But until that
happens, we need to survive with as few out-of-buffer discards as
possible.

This increases the memory footprint of the driver in order to avoid
discards at 2.5Gbps line rate 64B packet sizes, the maximum speed
available for testing on 1 port on NXP LS1028A.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: add support for XDP_TX
Vladimir Oltean [Wed, 31 Mar 2021 20:08:55 +0000 (23:08 +0300)]
net: enetc: add support for XDP_TX

For reflecting packets back into the interface they came from, we create
an array of TX software BDs derived from the RX software BDs. Therefore,
we need to extend the TX software BD structure to contain most of the
stuff that's already present in the RX software BD structure, for
reasons that will become evident in a moment.

For a frame with the XDP_TX verdict, we don't reuse any buffer right
away as we do for XDP_DROP (the same page half) or XDP_PASS (the other
page half, same as the skb code path).

Because the buffer transfers ownership from the RX ring to the TX ring,
reusing any page half right away is very dangerous. So what we can do is
we can recycle the same page half as soon as TX is complete.

The code path is:
enetc_poll
-> enetc_clean_rx_ring_xdp
   -> enetc_xdp_tx
   -> enetc_refill_rx_ring
(time passes, another MSI interrupt is raised)
enetc_poll
-> enetc_clean_tx_ring
   -> enetc_recycle_xdp_tx_buff

But that creates a problem, because there is a potentially large time
window between enetc_xdp_tx and enetc_recycle_xdp_tx_buff, period in
which we'll have less and less RX buffers.

Basically, when the ship starts sinking, the knee-jerk reaction is to
let enetc_refill_rx_ring do what it does for the standard skb code path
(refill every 16 consumed buffers), but that turns out to be very
inefficient. The problem is that we have no rx_swbd->page at our
disposal from the enetc_reuse_page path, so enetc_refill_rx_ring would
have to call enetc_new_page for every buffer that we refill (if we
choose to refill at this early stage). Very inefficient, it only makes
the problem worse, because page allocation is an expensive process, and
CPU time is exactly what we're lacking.

Additionally, there is an even bigger problem: if we let
enetc_refill_rx_ring top up the ring's buffers again from the RX path,
remember that the buffers sent to transmission haven't disappeared
anywhere. They will be eventually sent, and processed in
enetc_clean_tx_ring, and an attempt will be made to recycle them.
But surprise, the RX ring is already full of new buffers, because we
were premature in deciding that we should refill. So not only we took
the expensive decision of allocating new pages, but now we must throw
away perfectly good and reusable buffers.

So what we do is we implement an elastic refill mechanism, which keeps
track of the number of in-flight XDP_TX buffer descriptors. We top up
the RX ring only up to the total ring capacity minus the number of BDs
that are in flight (because we know that those BDs will return to us
eventually).
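A schematic sketch of that budget computation (hypothetical names, not the actual enetc code):

/* Hypothetical sketch of the elastic refill decision. */
static int example_rx_ring_refill_budget(int ring_size, int bds_in_use,
                                         int xdp_tx_in_flight)
{
        /* Never refill past (ring size - in-flight XDP_TX BDs): those
         * buffers still belong to us and will be recycled when their TX
         * completions arrive.
         */
        int budget = ring_size - xdp_tx_in_flight - bds_in_use;

        return budget > 0 ? budget : 0;
}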

The enetc driver manages 1 RX ring per CPU, and the default TX ring
management is the same. So we do XDP_TX towards the TX ring of the same
index, because it is affined to the same CPU. This will probably not
produce great results when we have a tc-taprio/tc-mqprio qdisc on the
interface, because in that case, the number of TX rings might be
greater, but I didn't add any checks for that yet (mostly because I
didn't know what checks to add).

It should also be noted that we need to change the DMA mapping direction
for RX buffers, since they may now be reflected into the TX ring of the
same device. We choose to use DMA_BIDIRECTIONAL instead of unmapping and
remapping as DMA_TO_DEVICE, because performance is better this way.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: add support for XDP_DROP and XDP_PASS
Vladimir Oltean [Wed, 31 Mar 2021 20:08:54 +0000 (23:08 +0300)]
net: enetc: add support for XDP_DROP and XDP_PASS

For the RX ring, enetc uses an allocation scheme based on pages split
into two buffers, which is already very efficient in terms of preventing
reallocations / maximizing reuse, so I see no reason why I would change
that.

 +--------+--------+--------+--------+--------+--------+--------+
 |        |        |        |        |        |        |        |
 | half B | half B | half B | half B | half B | half B | half B |
 |        |        |        |        |        |        |        |
 +--------+--------+--------+--------+--------+--------+--------+
 |        |        |        |        |        |        |        |
 | half A | half A | half A | half A | half A | half A | half A | RX ring
 |        |        |        |        |        |        |        |
 +--------+--------+--------+--------+--------+--------+--------+
     ^                                                     ^
     |                                                     |
 next_to_clean                                       next_to_alloc
                                                      next_to_use

                   +--------+--------+--------+--------+--------+
                   |        |        |        |        |        |
                   | half B | half B | half B | half B | half B |
                   |        |        |        |        |        |
 +--------+--------+--------+--------+--------+--------+--------+
 |        |        |        |        |        |        |        |
 | half B | half B | half A | half A | half A | half A | half A | RX ring
 |        |        |        |        |        |        |        |
 +--------+--------+--------+--------+--------+--------+--------+
 |        |        |   ^                                   ^
 | half A | half A |   |                                   |
 |        |        | next_to_clean                   next_to_use
 +--------+--------+
              ^
              |
         next_to_alloc

then when enetc_refill_rx_ring is called, whose purpose is to advance
next_to_use, it sees that it can take buffers up to next_to_alloc, and
it says "oh, hey, rx_swbd->page isn't NULL, I don't need to allocate
one!".

The only problem is that for default PAGE_SIZE values of 4096, buffer
sizes are 2048 bytes. While this is enough for normal skb allocations at
an MTU of 1500 bytes, for XDP it isn't, because the XDP headroom is 256
bytes, and including skb_shared_info and alignment, we end up being able
to make use of only 1472 bytes, which is insufficient for the default
MTU.

To solve that problem, we implement scatter/gather processing in the
driver, because we would really like to keep the existing allocation
scheme. A packet of 1500 bytes is received in a buffer of 1472 bytes and
another one of 28 bytes.

Because the headroom required by XDP is different (and much larger) than
the one required by the network stack, whenever a BPF program is added
or deleted on the port, we drain the existing RX buffers and seed new
ones with the required headroom. We also keep the required headroom in
rx_ring->buffer_offset.

The simplest way to implement XDP_PASS, where an skb must be created, is
to create an xdp_buff based on the next_to_clean RX BDs, but not clear
those BDs from the RX ring yet, just keep the original index at which
the BDs for this frame started. Then, if the verdict is XDP_PASS,
instead of converting the xdb_buff to an skb, we replay a call to
enetc_build_skb (just as in the normal enetc_clean_rx_ring case),
starting from the original BD index.

We would also like to be minimally invasive to the regular RX data path,
and not check whether there is a BPF program attached to the ring on
every packet. So we create a separate RX ring processing function for
XDP.

Because we only install/remove the BPF program while the interface is
down, we forgo the rcu_read_lock() in enetc_clean_rx_ring, since there
shouldn't be any circumstance in which we are processing packets and
there is a potentially freed BPF program attached to the RX ring.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: move up enetc_reuse_page and enetc_page_reusable
Vladimir Oltean [Wed, 31 Mar 2021 20:08:53 +0000 (23:08 +0300)]
net: enetc: move up enetc_reuse_page and enetc_page_reusable

For XDP_TX, we need to call enetc_reuse_page from enetc_clean_tx_ring,
so we need to avoid a forward declaration.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: clean the TX software BD on the TX confirmation path
Vladimir Oltean [Wed, 31 Mar 2021 20:08:52 +0000 (23:08 +0300)]
net: enetc: clean the TX software BD on the TX confirmation path

With the future introduction of some new fields into enetc_tx_swbd such
as is_xdp_tx, is_xdp_redirect etc, we need not only to set these bits
to true from the XDP_TX/XDP_REDIRECT code path, but also to false from
the old code paths.

This is because TX software buffer descriptors are kept in a ring that
is shadow of the hardware TX ring, so these structures keep getting
reused, and there is always the possibility that when a software BD is
reused (after we ran a full circle through the TX ring), the old user of
the tx_swbd had set is_xdp_tx = true, and now we are sending a regular
skb, which would need to set is_xdp_tx = false.

To be minimally invasive to the old code paths, let's just scrub the
software TX BD in the TX confirmation path (enetc_clean_tx_ring), once
we know that nobody uses this software TX BD (tx_ring->next_to_clean
hasn't yet been updated, and the TX paths check enetc_bd_unused which
tells them if there's any more space in the TX ring for a new enqueue).

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: add a dedicated is_eof bit in the TX software BD
Vladimir Oltean [Wed, 31 Mar 2021 20:08:51 +0000 (23:08 +0300)]
net: enetc: add a dedicated is_eof bit in the TX software BD

In the transmit path, if we have a scatter/gather frame, it is put into
multiple software buffer descriptors, the last of which has the skb
pointer populated (which is necessary for rearming the TX MSI vector and
for collecting the two-step TX timestamp from the TX confirmation path).

At the moment, this is sufficient, but with XDP_TX, we'll need to
service TX software buffer descriptors that don't have an skb pointer,
however they might be final nonetheless. So add a dedicated bit for
final software BDs that we populate and check explicitly. Also, we keep
looking just for an skb when doing TX timestamping, because we don't
want/need that for XDP.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: move skb creation into enetc_build_skb
Vladimir Oltean [Wed, 31 Mar 2021 20:08:50 +0000 (23:08 +0300)]
net: enetc: move skb creation into enetc_build_skb

We need to build an skb from two code paths now: from the plain RX data
path and from the XDP data path when the verdict is XDP_PASS.

Create a new enetc_build_skb function which contains the essential steps
for building an skb based on the first and last positions of buffer
descriptors within the RX ring.

We also squash the enetc_process_skb function into enetc_build_skb,
because what that function did wasn't very meaningful on its own.

The "rx_frm_cnt++" instruction has been moved around napi_gro_receive
for cosmetic reasons, to be in the same spot as rx_byte_cnt++, which
itself must be before napi_gro_receive, because that's when we lose
ownership of the skb.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: enetc: consume the error RX buffer descriptors in a dedicated function
Vladimir Oltean [Wed, 31 Mar 2021 20:08:49 +0000 (23:08 +0300)]
net: enetc: consume the error RX buffer descriptors in a dedicated function

We can and should check the RX BD errors before starting to build the
skb. The only apparent reason why things are done in this backwards
order is to spare one call to enetc_rxbd_next.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv6: remove extra dev_hold() for fallback tunnels
Eric Dumazet [Wed, 31 Mar 2021 21:38:11 +0000 (14:38 -0700)]
ipv6: remove extra dev_hold() for fallback tunnels

My previous commits added a dev_hold() in the tunnels' ndo_init(), but
forgot to remove it from the special functions setting up fallback tunnels.

Fallback tunnels do call their respective ndo_init().

This leads to various reports like:

unregister_netdevice: waiting for ip6gre0 to become free. Usage count = 2

Fixes: 48bb5697269a ("ip6_tunnel: sit: proper dev_{hold|put} in ndo_[un]init methods")
Fixes: 6289a98f0817 ("sit: proper dev_{hold|put} in ndo_[un]init methods")
Fixes: 40cb881b5aaa ("ip6_vti: proper dev_{hold|put} in ndo_[un]init methods")
Fixes: 7f700334be9a ("ip6_gre: proper dev_{hold|put} in ndo_[un]init methods")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net/tipc: fix missing destroy_workqueue() on error in tipc_crypto_start()
Yang Yingliang [Wed, 31 Mar 2021 08:36:02 +0000 (16:36 +0800)]
net/tipc: fix missing destroy_workqueue() on error in tipc_crypto_start()

Add the missing destroy_workqueue() before return from
tipc_crypto_start() in the error handling case.

Fixes: 1ef6f7c9390f ("tipc: add automatic session key exchange")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  Merge branch 'inet-shrink-netns'
David S. Miller [Wed, 31 Mar 2021 21:48:20 +0000 (14:48 -0700)]
Merge branch 'inet-shrink-netns'

Eric Dumazet says:

====================
inet: shrink netns_ipv{4|6}

This patch series work on reducing footprint of netns_ipv4
and netns_ipv6. Some sysctls are converted to bytes,
and some fields are moves to reduce number of holes
and paddings.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv6: move ip6_dst_ops first in netns_ipv6
Eric Dumazet [Wed, 31 Mar 2021 17:52:13 +0000 (10:52 -0700)]
ipv6: move ip6_dst_ops first in netns_ipv6

ip6_dst_ops has cache line alignment.

Moving it to the beginning of netns_ipv6 removes a 48 byte hole, and
shrinks netns_ipv6 from 12 to 11 cache lines.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv6: convert elligible sysctls to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:12 +0000 (10:52 -0700)]
ipv6: convert elligible sysctls to u8

Convert most sysctls that can fit in a byte.
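The conversion pattern, as a standalone sketch (the sysctl name and table below are made up; proc_dou8vec_minmax() is the byte-sized handler used for these conversions):

#include <linux/sysctl.h>
#include <linux/types.h>

static u8 example_flag;         /* was: static int example_flag; */

static struct ctl_table example_table[] = {
        {
                .procname       = "example_flag",
                .data           = &example_flag,
                .maxlen         = sizeof(u8),           /* was sizeof(int) */
                .mode           = 0644,
                .proc_handler   = proc_dou8vec_minmax,  /* was proc_dointvec */
                .extra1         = SYSCTL_ZERO,
                .extra2         = SYSCTL_ONE,
        },
        { }
};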

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  tcp: convert tcp_comp_sack_nr sysctl to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:11 +0000 (10:52 -0700)]
tcp: convert tcp_comp_sack_nr sysctl to u8

tcp_comp_sack_nr max value was already 255.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv4: convert igmp_link_local_mcast_reports sysctl to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:10 +0000 (10:52 -0700)]
ipv4: convert igmp_link_local_mcast_reports sysctl to u8

This sysctl is a bool, can use less storage.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv4: convert fib_multipath_{use_neigh|hash_policy} sysctls to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:09 +0000 (10:52 -0700)]
ipv4: convert fib_multipath_{use_neigh|hash_policy} sysctls to u8

Make room for better packing of netns_ipv4

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv4: convert udp_l3mdev_accept sysctl to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:08 +0000 (10:52 -0700)]
ipv4: convert udp_l3mdev_accept sysctl to u8

Reduce footprint of sysctls.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ipv4: convert fib_notify_on_flag_change sysctl to u8
Eric Dumazet [Wed, 31 Mar 2021 17:52:07 +0000 (10:52 -0700)]
ipv4: convert fib_notify_on_flag_change sysctl to u8

Reduce footprint of sysctls.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  inet: shrink netns_ipv4 by another cache line
Eric Dumazet [Wed, 31 Mar 2021 17:52:06 +0000 (10:52 -0700)]
inet: shrink netns_ipv4 by another cache line

By shuffling around some fields to remove 8 bytes of hole,
we can save one cache line.

pahole result before/after the patch :

/* size: 768, cachelines: 12, members: 139 */
/* sum members: 673, holes: 11, sum holes: 39 */
/* padding: 56 */
/* paddings: 2, sum paddings: 7 */
/* forced alignments: 1 */

->

/* size: 704, cachelines: 11, members: 139 */
/* sum members: 673, holes: 10, sum holes: 31 */
/* paddings: 2, sum paddings: 7 */
/* forced alignments: 1 */

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  inet: shrink inet_timewait_death_row by 48 bytes
Eric Dumazet [Wed, 31 Mar 2021 17:52:05 +0000 (10:52 -0700)]
inet: shrink inet_timewait_death_row by 48 bytes

struct inet_timewait_death_row uses two cache lines, because we want
tw_count to use a full cache line to avoid false sharing.

Rework its definition and placement in netns_ipv4 so that:

1) We add 60 bytes of padding after tw_count to avoid
  false sharing, knowing that tcp_death_row will
  have ____cacheline_aligned_in_smp attribute.

2) We do not risk padding before tcp_death_row, because
  we move it at the beginning of netns_ipv4, even if new
 fields are added later.

3) We do not waste 48 bytes of padding after it.
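Schematically (hypothetical field names, not the actual netns_ipv4 or inet_timewait_death_row definitions), the arrangement described in points 1)-3) is:

#include <linux/atomic.h>
#include <linux/cache.h>

struct example_death_row {
        atomic_t        tw_count;       /* heavily updated counter */
        /* 60 bytes of padding keep the fields below out of the cache
         * line that tw_count keeps dirtying (avoids false sharing).
         */
        char            tw_count_pad[60];
        int             sysctl_max_tw_buckets;
};

struct example_netns_ipv4 {
        /* Placed first so the forced alignment cannot create a hole
         * before it, and no trailing padding is wasted after it.
         */
        struct example_death_row tcp_death_row ____cacheline_aligned_in_smp;
        /* ... the rest of the per-netns ipv4 state ... */
};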

Note that I have not changed dccp.

pahole result for struct netns_ipv4 before/after the patch :

/* size: 832, cachelines: 13, members: 139 */
/* sum members: 721, holes: 12, sum holes: 95 */
/* padding: 16 */
/* paddings: 2, sum paddings: 55 */

->

/* size: 768, cachelines: 12, members: 139 */
/* sum members: 673, holes: 11, sum holes: 39 */
/* padding: 56 */
/* paddings: 2, sum paddings: 7 */
/* forced alignments: 1 */

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  Merge branch 'net-coding-style'
David S. Miller [Wed, 31 Mar 2021 21:34:09 +0000 (14:34 -0700)]
Merge branch 'net-coding-style'

Weihang Li says:

====================
net: fix some coding style issues

Do some cleanups according to the coding style of kernel, including wrong
print type, redundant and missing spaces and so on.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: lpc_eth: fix format warnings of block comments
Yangyang Li [Wed, 31 Mar 2021 08:18:34 +0000 (16:18 +0800)]
net: lpc_eth: fix format warnings of block comments

Fix the following format warnings:
1. Block comments use * on subsequent lines
2. Block comments use a trailing */ on a separate line

Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: toshiba: fix the trailing format of some block comments
Yixing Liu [Wed, 31 Mar 2021 08:18:33 +0000 (16:18 +0800)]
net: toshiba: fix the trailing format of some block comments

Use a trailing */ on a separate line for block comments.

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: ocelot: fix a trailling format issue with block comments
Yixing Liu [Wed, 31 Mar 2021 08:18:32 +0000 (16:18 +0800)]
net: ocelot: fix a trailling format issue with block comments

Use a trailing */ on a separate line for block comments.

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: amd: correct some format issues
Yixing Liu [Wed, 31 Mar 2021 08:18:31 +0000 (16:18 +0800)]
net: amd: correct some format issues

There should be a blank line after declarations.

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: amd8111e: fix inappropriate spaces
Yixing Liu [Wed, 31 Mar 2021 08:18:30 +0000 (16:18 +0800)]
net: amd8111e: fix inappropriate spaces

Delete unnecessary spaces and add some reasonable spaces according to
the coding style of the kernel.

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: ena: remove extra words from comments
Yixing Liu [Wed, 31 Mar 2021 08:18:29 +0000 (16:18 +0800)]
net: ena: remove extra words from comments

Remove the redundant "for" from the comment.

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: ena: fix inaccurate print type
Yixing Liu [Wed, 31 Mar 2021 08:18:28 +0000 (16:18 +0800)]
net: ena: fix inaccurate print type

Use "%u" to replace "hu%".

Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  qrtr: Convert qrtr_ports from IDR to XArray
Matthew Wilcox (Oracle) [Wed, 31 Mar 2021 04:36:42 +0000 (05:36 +0100)]
qrtr: Convert qrtr_ports from IDR to XArray

The XArray interface is easier for this driver to use.  Also fixes a
bug reported by the improper use of GFP_ATOMIC.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  net: ethernet: stmicro: Remove duplicate struct declaration
Wan Jiabing [Wed, 31 Mar 2021 02:35:53 +0000 (10:35 +0800)]
net: ethernet: stmicro: Remove duplicate struct declaration

struct stmmac_safety_stats is declared twice. One declaration is at
line 29. Remove the duplicate.

Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years ago  ice: Correct comment block style
Tony Nguyen [Tue, 2 Mar 2021 18:15:43 +0000 (10:15 -0800)]
ice: Correct comment block style

The following is reported by checkpatch, correct it.

-----------------------------------------------
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
-----------------------------------------------
WARNING:NETWORKING_BLOCK_COMMENT_STYLE: networking block comments don't use an empty /* line, use /* Comment...
FILE: drivers/net/ethernet/intel/ice/ice_adminq_cmd.h:1428:
+/*
+ * Send to PF command (indirect 0x0801) ID is only used by PF
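For illustration, the difference checkpatch is flagging (using the same comment text):

Flagged (empty opening line):

/*
 * Send to PF command (indirect 0x0801) ID is only used by PF
 */

Networking style (text starts on the opening line):

/* Send to PF command (indirect 0x0801) ID is only used by PF
 */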

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
3 years ago  ice: cleanup style issues
Bruce Allan [Tue, 2 Mar 2021 18:15:42 +0000 (10:15 -0800)]
ice: cleanup style issues

A few style issues reported by checkpatch have snuck into the code; resolve
the style issues.

COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: Consolidate VSI state and flags
Anirudh Venkataramanan [Tue, 2 Mar 2021 18:15:37 +0000 (10:15 -0800)]
ice: Consolidate VSI state and flags

struct ice_vsi has two fields, state and flags which seem to
be serving the same purpose. Consolidate them into one field
'state'.

enum ice_state is used to represent state information of the PF.
While some of these enum values can be used to represent VSI state,
it makes more sense to represent VSI state with its own enum. So
derive a new enum ice_vsi_state from ice_vsi_flags and ice_state
and use it. Also rename enum ice_state to ice_pf_state for clarity.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: Refactor ice_set/get_rss into LUT and key specific functions
Brett Creeley [Tue, 2 Mar 2021 18:15:36 +0000 (10:15 -0800)]
ice: Refactor ice_set/get_rss into LUT and key specific functions

Currently ice_set/get_rss are used to set/get the RSS LUT and/or RSS
key. However nearly everywhere these functions are called only the LUT
or key are set/get. Also, making this change reduces how many things
ice_set/get_rss are doing. Fix this by adding ice_set/get_rss_lut and
ice_set/get_rss_key functions.

Also, consolidate all calls for setting/getting the RSS LUT and RSS Key
to use ice_set/get_rss_lut() and ice_set/get_rss_key().

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: Refactor get/set RSS LUT to use struct parameter
Brett Creeley [Tue, 2 Mar 2021 18:15:35 +0000 (10:15 -0800)]
ice: Refactor get/set RSS LUT to use struct parameter

Update ice_aq_get_rss_lut() and ice_aq_set_rss_lut() to take a new
structure ice_aq_get_set_rss_params instead of passing individual
parameters. This is done for 2 reasons:

1. Reduce the number of parameters passed to the functions.
2. Reduce the amount of change required if the arguments ever need to be
   updated in the future.

Also, reduce duplicate code that was checking for an invalid vsi_handle
and lut parameter by moving the checks to the lower level
__ice_aq_get_set_rss_lut().
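The shape of the refactor, schematically (the struct name comes from the text above; its fields, the before-call and the prototype here are illustrative guesses, not the actual driver definition):

#include <linux/types.h>

struct ice_hw; /* opaque here */

/* Before (illustrative):
 *   ice_aq_get_rss_lut(hw, vsi_handle, lut_type, lut, lut_size);
 * After: one parameter block that can grow without touching every caller.
 */
struct ice_aq_get_set_rss_params {      /* illustrative fields only */
        u16 vsi_handle;
        u8  lut_type;
        u8  *lut;
        u16 lut_size;
};

int example_aq_get_rss_lut(struct ice_hw *hw,
                           struct ice_aq_get_set_rss_params *params);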

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: Change ice_vsi_setup_q_map() to not depend on RSS
Brett Creeley [Tue, 2 Mar 2021 18:15:33 +0000 (10:15 -0800)]
ice: Change ice_vsi_setup_q_map() to not depend on RSS

Currently, ice_vsi_setup_q_map() depends on the VSI's rss_size. However,
the Rx Queue Mapping section of the VSI context has no dependency on RSS.
Instead, limit the maximum number of Rx queues per TC based on the Rx
Queue mapping section of the VSI context, which currently allows for up
to 256 Rx queues per TC.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: rename ptype bitmap
Qi Zhang [Tue, 2 Mar 2021 18:12:11 +0000 (10:12 -0800)]
ice: rename ptype bitmap

Align all ptype bitmap to follow ice_ptypes_xxx prefix.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: correct memory allocation call
Bruce Allan [Tue, 2 Mar 2021 18:12:10 +0000 (10:12 -0800)]
ice: correct memory allocation call

Use *malloc() instead of *calloc() when allocating only a single object as
opposed to an array of objects.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: Check for bail out condition early
Anirudh Venkataramanan [Tue, 2 Mar 2021 18:12:09 +0000 (10:12 -0800)]
ice: Check for bail out condition early

Check for the bail-out condition before calling ice_aq_sff_eeprom().

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: remove unnecessary duplicated AQ command flag setting
Bruce Allan [Tue, 2 Mar 2021 18:12:08 +0000 (10:12 -0800)]
ice: remove unnecessary duplicated AQ command flag setting

Commit a012dca9f7a2 ("ice: add ethtool -m support for reading i2c eeprom
modules") unnecessarily added the ICE_AQ_FLAG_BUF flag to the descriptor
when sending the indirect Read/Write SFF EEPROM AQ command. The flag is
already added later in the code flow for all indirect AQ commands, i.e.
commands that provide an additional data buffer.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: change link misconfiguration message
Paul Greenwalt [Tue, 2 Mar 2021 18:12:07 +0000 (10:12 -0800)]
ice: change link misconfiguration message

Change link misconfiguration message since the configuration
could be intended by the user.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years ago  ice: handle increasing Tx or Rx ring sizes
Paul M Stillwell Jr [Tue, 2 Mar 2021 18:12:05 +0000 (10:12 -0800)]
ice: handle increasing Tx or Rx ring sizes

There is an issue when the Tx or Rx ring size increases using
'ethtool -L ...' where the new rings don't get the correct ITR
values because when we rebuild the VSI we don't know that some
of the rings may be new.

Fix this by looking at the original number of rings and
determining if the rings in ice_vsi_rebuild_set_coalesce()
were not present in the original rings received in
ice_vsi_rebuild_get_coalesce().
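
A rough sketch of the idea, with a hypothetical per-vector helper (the
driver's actual coalesce handling is more involved):

  /* coalesce[] holds the ITR settings saved for the old vector count */
  for (i = 0; i < vsi->num_q_vectors; i++) {
          if (i < prev_num_q_vectors)
                  /* pre-existing vector: restore its saved settings */
                  set_q_vector_coalesce(vsi, i, &coalesce[i]);
          else
                  /* new vector: apply sane defaults, the original had
                   * no data for it
                   */
                  set_q_vector_coalesce(vsi, i, &coalesce[0]);
  }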

Also change the code to return an error if we can't allocate
memory for the coalesce data in ice_vsi_rebuild().

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years agoice: Update to use package info from ice segment
Dan Nowlin [Tue, 2 Mar 2021 18:12:04 +0000 (10:12 -0800)]
ice: Update to use package info from ice segment

There are two package versions in the package binary. Today, these two
version numbers are the same. However, in the future that may change.

Update code to use the package info from the ice segment metadata
section, which is the package information that is actually downloaded to
the firmware during the download package process.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years agoice: Delay netdev registration
Anirudh Venkataramanan [Tue, 2 Mar 2021 18:12:03 +0000 (10:12 -0800)]
ice: Delay netdev registration

Once a netdev is registered, the corresponding network interface can
be immediately used by userspace utilities (like say NetworkManager).
This can be problematic if the driver technically isn't fully up yet.

Move netdev registration to the end of probe, as by this time the
driver data structures and device will be initialized as expected.

However, delaying netdev registration causes a failure in the aRFS flow
where netdev->reg_state == NETREG_REGISTERED condition is checked. It's
not clear why this check was added to begin with, so remove it.
Local testing didn't indicate any issues with this change.

The state bit check in ice_open was put in as a stop-gap measure to
prevent a premature interface up operation. This is no longer needed,
so remove it.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years agoice: Add Support for XPS
Benita Bose [Tue, 2 Mar 2021 18:12:02 +0000 (10:12 -0800)]
ice: Add Support for XPS

Enable and configure XPS. The implemented driver code sets up the
Transmit Packet Steering map, which in turn is used by the kernel for
queue selection during Tx.
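
A minimal sketch of how a driver typically wires this up per Tx queue
(generic names; it assumes each queue's interrupt vector carries an
affinity cpumask):

  static void foo_cfg_xps_tx_ring(struct foo_tx_ring *ring)
  {
          /* map this Tx queue to the CPUs its interrupt vector is affined to */
          netif_set_xps_queue(ring->netdev, &ring->q_vector->affinity_mask,
                              ring->q_index);
  }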

Signed-off-by: Benita Bose <benita.bose@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
3 years agoip6_tunnel: sit: proper dev_{hold|put} in ndo_[un]init methods
Eric Dumazet [Tue, 30 Mar 2021 06:45:51 +0000 (23:45 -0700)]
ip6_tunnel: sit: proper dev_{hold|put} in ndo_[un]init methods

Same reasons as for the previous commits:
6289a98f0817 ("sit: proper dev_{hold|put} in ndo_[un]init methods")
40cb881b5aaa ("ip6_vti: proper dev_{hold|put} in ndo_[un]init methods")
7f700334be9a ("ip6_gre: proper dev_{hold|put} in ndo_[un]init methods")

After adopting the CONFIG_PCPU_DEV_REFCNT=n option, syzbot was able to
trigger a warning [1].

The issue here is that:

- All dev_put() calls should be paired with a corresponding prior dev_hold().

- A driver doing a dev_put() in its ndo_uninit() MUST also
  do a dev_hold() in its ndo_init(), but only when ndo_init()
  returns 0.

Otherwise, register_netdevice() would call ndo_uninit()
in its error path and release a refcount too soon.
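
A minimal sketch of the required pairing, using generic driver names
(the foo_* helpers are hypothetical):

  static int foo_tunnel_init(struct net_device *dev)
  {
          int err;

          err = foo_tunnel_setup(dev);    /* hypothetical setup step */
          if (err)
                  return err;             /* failure: no dev_hold() taken */

          dev_hold(dev);                  /* taken only when returning 0 */
          return 0;
  }

  static void foo_tunnel_uninit(struct net_device *dev)
  {
          foo_tunnel_teardown(dev);       /* hypothetical teardown step */
          dev_put(dev);                   /* pairs with the hold in ndo_init */
  }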

[1]
WARNING: CPU: 1 PID: 21059 at lib/refcount.c:31 refcount_warn_saturate+0xbf/0x1e0 lib/refcount.c:31
Modules linked in:
CPU: 1 PID: 21059 Comm: syz-executor.4 Not tainted 5.12.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:refcount_warn_saturate+0xbf/0x1e0 lib/refcount.c:31
Code: 1d 6a 5a e8 09 31 ff 89 de e8 8d 1a ab fd 84 db 75 e0 e8 d4 13 ab fd 48 c7 c7 a0 e1 c1 89 c6 05 4a 5a e8 09 01 e8 2e 36 fb 04 <0f> 0b eb c4 e8 b8 13 ab fd 0f b6 1d 39 5a e8 09 31 ff 89 de e8 58
RSP: 0018:ffffc900025aefe8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000040000 RSI: ffffffff815c51f5 RDI: fffff520004b5def
RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff815bdf8e R11: 0000000000000000 R12: ffff888023488568
R13: ffff8880254e9000 R14: 00000000dfd82cfd R15: ffff88802ee2d7c0
FS:  00007f13bc590700(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f0943e74000 CR3: 0000000025273000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __refcount_dec include/linux/refcount.h:344 [inline]
 refcount_dec include/linux/refcount.h:359 [inline]
 dev_put include/linux/netdevice.h:4135 [inline]
 ip6_tnl_dev_uninit+0x370/0x3d0 net/ipv6/ip6_tunnel.c:387
 register_netdevice+0xadf/0x1500 net/core/dev.c:10308
 ip6_tnl_create2+0x1b5/0x400 net/ipv6/ip6_tunnel.c:263
 ip6_tnl_newlink+0x312/0x580 net/ipv6/ip6_tunnel.c:2052
 __rtnl_newlink+0x1062/0x1710 net/core/rtnetlink.c:3443
 rtnl_newlink+0x64/0xa0 net/core/rtnetlink.c:3491
 rtnetlink_rcv_msg+0x44e/0xad0 net/core/rtnetlink.c:5553
 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502
 netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline]
 netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338
 netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927
 sock_sendmsg_nosec net/socket.c:654 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:674
 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2350
 ___sys_sendmsg+0xf3/0x170 net/socket.c:2404
 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2433
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 919067cc845f ("net: add CONFIG_PCPU_DEV_REFCNT")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'ethtool-fec-netlink'
David S. Miller [Wed, 31 Mar 2021 21:15:23 +0000 (14:15 -0700)]
Merge branch 'ethtool-fec-netlink'

Jakub Kicinski says:

====================
ethtool: support FEC configuration over netlink

This series adds support for the equivalents of ETHTOOL_GFECPARAM
and ETHTOOL_SFECPARAM over netlink.

As a reminder - this is an API which allows the user to query the
current FEC mode, as well as set FEC manually if autoneg is disabled.
It does not configure anything if autoneg is enabled (that said,
few/no drivers currently reject .set_fecparam calls while autoneg
is enabled; hopefully FW will just ignore the settings).

The existing functionality is mostly preserved in the new API.
The ioctl interface uses a set of flags, and link modes to tell
the user which modes are supported. Here is how the flags translate
to the new interface (skipping descriptions for actual FEC modes):

  ioctl flag      |   description         |  new API
================================================================
ETHTOOL_FEC_OFF   | disabled (supported)  | \
ETHTOOL_FEC_RS    |                       |  ` link mode bitset
ETHTOOL_FEC_BASER |                       |  / .._A_FEC_MODES
ETHTOOL_FEC_LLRS  |                       | /
ETHTOOL_FEC_AUTO  | pick based on cable   | bool .._A_FEC_AUTO
ETHTOOL_FEC_NONE  | not supported         | no bit, no AUTO reported

Since link modes are already depended on (although somewhat implicitly)
for expressing supported modes, the new interface uses them for
the manual configuration as well, and uses the link mode bit number
to communicate the active mode.

Use of link modes allows us to define any number of FEC modes we want,
and reuse the strset we already have defined.

Separating AUTO as its own attribute is the biggest change compared
to the ioctl. It means drivers can no longer report AUTO as the
active FEC mode because there is no link mode for AUTO.
active_fec == AUTO makes little sense in the first place IMHO;
active_fec should be the actual mode, so hopefully this is fine.

The other minor departure is that None is no longer explicitly
expressed in the API. But drivers are reasonable in their handling
of this somewhat pointless bit, so I'm not expecting any issues there.

One extension which could be considered would be moving active FEC
to ETHTOOL_MSG_LINKMODE_*, but then why not move all of FEC into
link modes? I don't know where to draw the line.

netdevsim support and a simple self test are included.

Next step is adding stats similar to the ones added for pause.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoselftests: ethtool: add a netdevsim FEC test
Jakub Kicinski [Tue, 30 Mar 2021 03:59:54 +0000 (20:59 -0700)]
selftests: ethtool: add a netdevsim FEC test

Test FEC settings, iterate over configs.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonetdevsim: add FEC settings support
Jakub Kicinski [Tue, 30 Mar 2021 03:59:53 +0000 (20:59 -0700)]
netdevsim: add FEC settings support

Add support for ethtool FEC and some ethtool error injection.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoethtool: support FEC settings over netlink
Jakub Kicinski [Tue, 30 Mar 2021 03:59:52 +0000 (20:59 -0700)]
ethtool: support FEC settings over netlink

Add FEC API to netlink.

This is not a 1-to-1 conversion.

FEC settings already depend on link modes to tell the user which
modes are supported. Take this further and use link modes for
manual configuration. The old struct ethtool_fecparam is still
used to talk to the drivers, so we need to translate back
and forth. We can revisit the internal API if the number of FEC
encodings starts to grow.

Enforce only one active FEC bit (by using a bit position
rather than another mask).

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: ethernet: Fix typo of 'network' in comment
Eric Lin [Wed, 31 Mar 2021 01:04:17 +0000 (09:04 +0800)]
net: ethernet: Fix typo of 'network' in comment

Signed-off-by: Eric Lin <dslin1010@gmail.com>
Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agomlxsw: spectrum_router: Only perform atomic nexthop bucket replacement when requested
Ido Schimmel [Tue, 30 Mar 2021 06:58:41 +0000 (09:58 +0300)]
mlxsw: spectrum_router: Only perform atomic nexthop bucket replacement when requested

When cleared, the 'force' parameter in nexthop bucket replacement
notifications indicates that a driver should try to perform an atomic
replacement. Meaning, only update the contents of the bucket if it is
inactive.

Since mlxsw only queries buckets' activity once every second, there is
no point in trying an atomic replacement if the idle timer interval is
smaller than 1 second.

Currently, mlxsw ignores the original value of 'force' and will always
try an atomic replacement if the idle timer is not smaller than 1
second.

Fix this by taking the original value of 'force' into account and never
promoting a non-atomic replacement to an atomic one.
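
A rough sketch of the corrected decision (variable and field names here
are illustrative, not the exact mlxsw code):

  /* Start from what the core requested: force set means "replace even
   * if the bucket is active", i.e. a non-atomic replacement.
   */
  bool force = info->force;

  /* We may demote an atomic request to a forced one when our activity
   * data is too coarse, but we must never promote a forced request to
   * an atomic one.
   */
  if (!force && idle_timer_ms < activity_poll_interval_ms)
          force = true;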

Fixes: 617a77f044ed ("mlxsw: spectrum_router: Add nexthop bucket replacement support")
Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'mptcp-subflow-disconnected'
David S. Miller [Wed, 31 Mar 2021 00:42:23 +0000 (17:42 -0700)]
Merge branch 'mptcp-subflow-disconnected'

Mat Martineau says:

====================
MPTCP: Allow initial subflow to be disconnected

An MPTCP connection is aggregated from multiple TCP subflows, and can
involve multiple IP addresses on either peer. The addresses used in the
initial subflow connection are assigned address id 0 on each side of the
link. More addresses can be added and shared with the peer using address
IDs of 1 or larger. MPTCP in Linux shares non-zero address IDs across
all MPTCP connections in a net namespace, which allows userspace to
manage subflow connections across a number of sockets. However, this
makes the address with id 0 a special case, since the IP address
associated with id 0 is potentially different for each socket.

This patch set allows the initial subflow to be disconnected when
userspace specifies an address to remove using both id 0 and an IP
address, or when the peer sends an RM_ADDR for id 0.

Patches 1 and 3 implement the change for requests from the peer and
userspace, respectively.

Patch 2 consolidates some code for disconnecting subflows.

Patches 4-6 update the self tests to cover removal of subflows using
address id 0.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoselftests: mptcp: remove id 0 address testcases
Geliang Tang [Wed, 31 Mar 2021 00:08:56 +0000 (17:08 -0700)]
selftests: mptcp: remove id 0 address testcases

This patch adds the testcases for removing the id 0 subflow and the id 0
address.

In do_transfer, use the removing-addresses number '9' for deleting the id
0 address.

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoselftests: mptcp: add addr argument for del_addr
Geliang Tang [Wed, 31 Mar 2021 00:08:55 +0000 (17:08 -0700)]
selftests: mptcp: add addr argument for del_addr

For the id 0 address, different MPTCP connections could be using
different IP addresses for id 0.

This patch adds an extra IP address argument to del_addr for use
with id 0.

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoselftests: mptcp: avoid calling pm_nl_ctl with bad IDs
Matthieu Baerts [Wed, 31 Mar 2021 00:08:54 +0000 (17:08 -0700)]
selftests: mptcp: avoid calling pm_nl_ctl with bad IDs

IDs are supposed to be between 0 and 255.

In pm_nl_ctl, for both the 'add' and 'get' instructions, the ID is cast
to a u_int8_t. So if we give 256, we will delete ID 0. Obviously, the
goal is not to delete this ID by giving 256.

We could modify pm_nl_ctl to stop if the ID is negative or higher than
255, but it is probably better not to increase the number of lines for
such things in this tool, which is only used in selftests. Instead, we
use it within its limits.

This modification also means that we will no longer add a new ID for the
2nd entry. That's why an expected entry, introduced with
commit dc8eb10e95a8 ("selftests: mptcp: add testcases for setting the address ID"),
has been removed from the dump.

So now we delete ID 9 like before and we add entries for IDs 10 to 255
that are deleted just after.

Note that this could be seen as a fix, but it was not really an issue so
far: we were simply playing with ID 0/1 once again. With the following
commit ("selftests: mptcp: add addr argument for del_addr"), it will be
different because ID 0 is going to require an address. We don't want
errors when trying to delete ID 0 without the address argument.

Acked-and-tested-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agomptcp: remove id 0 address
Geliang Tang [Wed, 31 Mar 2021 00:08:53 +0000 (17:08 -0700)]
mptcp: remove id 0 address

This patch adds a new function, mptcp_nl_remove_id_zero_address, to
remove the id 0 address.

In this function, traverse all the existing msk sockets to find the
msk matching the input IP address. Then fill the removal list with
id 0, and pass it to mptcp_pm_remove_addr and mptcp_pm_remove_subflow.

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Suggested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agomptcp: unify RM_ADDR and RM_SUBFLOW receiving
Geliang Tang [Wed, 31 Mar 2021 00:08:52 +0000 (17:08 -0700)]
mptcp: unify RM_ADDR and RM_SUBFLOW receiving

There is some duplicated code in mptcp_pm_nl_rm_addr_received and
mptcp_pm_nl_rm_subflow_received. This patch unifies them into a new
function named mptcp_pm_nl_rm_addr_or_subflow, which uses the input
parameter rm_type to identify whether it is removing an address or a
subflow.

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agomptcp: remove all subflows involving id 0 address
Geliang Tang [Wed, 31 Mar 2021 00:08:51 +0000 (17:08 -0700)]
mptcp: remove all subflows involving id 0 address

There's only one subflow involving a non-zero id address, but there
may be multiple subflows involving the id 0 address.

Here's an example:

 local_id=0, remote_id=0
 local_id=1, remote_id=0
 local_id=0, remote_id=1

If the removing address id is 0, all the subflows involving the id 0
address need to be removed.

In mptcp_pm_nl_rm_addr_received/mptcp_pm_nl_rm_subflow_received, the
"break" prevents iterating to the next subflow, so this patch drops
them.

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agonet: fix icmp_echo_enable_probe sysctl
Eric Dumazet [Tue, 30 Mar 2021 21:06:13 +0000 (14:06 -0700)]
net: fix icmp_echo_enable_probe sysctl

sysctl_icmp_echo_enable_probe is a u8.

The ipv4_net_table entry should use
 .maxlen       = sizeof(u8),
 .proc_handler = proc_dou8vec_minmax,
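
For context, a sketch of how the full table entry might then look (the
fields other than .maxlen and .proc_handler follow the usual per-netns
sysctl layout and are assumptions here):

 	{
 		.procname	= "icmp_echo_enable_probe",
 		.data		= &init_net.ipv4.sysctl_icmp_echo_enable_probe,
 		.maxlen		= sizeof(u8),
 		.mode		= 0644,
 		.proc_handler	= proc_dou8vec_minmax,
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= SYSCTL_ONE,
 	},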

Fixes: f1b8fa9fa586 ("net: add sysctl for enabling RFC 8335 PROBE messages")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Andreas Roeseler <andreas.a.roeseler@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'ionic-cleanups'
David S. Miller [Wed, 31 Mar 2021 00:37:13 +0000 (17:37 -0700)]
Merge branch 'ionic-cleanups'

Shannon Nelson says:

====================
ionic: code cleanup for heartbeat, dma error counts, sizeof, stats

These patches are a few more bits of code cleanup found in
testing and review: count all our dma error instances, make
better use of sizeof, fix a race in our device heartbeat check,
and clean up code formatting in the ethtool stats collection.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: pull per-q stats work out of queue loops
Shannon Nelson [Tue, 30 Mar 2021 19:52:10 +0000 (12:52 -0700)]
ionic: pull per-q stats work out of queue loops

Abstract out the per-queue data collection work into separate
functions from the per-queue loops in the stats reporting,
similar to what Alex did for the data label strings in
commit acebe5b6107c ("ionic: Update driver to use ethtool_sprintf")

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: avoid races in ionic_heartbeat_check
Shannon Nelson [Tue, 30 Mar 2021 19:52:09 +0000 (12:52 -0700)]
ionic: avoid races in ionic_heartbeat_check

Rework the heartbeat checks to be sure that we're getting an
atomic operation.  Through testing we found occasions where a
separate thread could clash with this check and cause erroneous
heartbeat check results.

Signed-off-by: Allen Hubbe <allenbh@pensando.io>
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: fix sizeof usage
Shannon Nelson [Tue, 30 Mar 2021 19:52:08 +0000 (12:52 -0700)]
ionic: fix sizeof usage

Use the actual pointer that we care about as the subject of the
sizeof, rather than a struct name.
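
A generic illustration of the pattern (not the exact ionic call sites):

  struct foo *obj;    /* hypothetical object */

  /* before: the allocation size is tied to the type name */
  obj = kzalloc(sizeof(struct foo), GFP_KERNEL);

  /* after: the size follows the pointer even if its type changes later */
  obj = kzalloc(sizeof(*obj), GFP_KERNEL);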

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoionic: count dma errors
Shannon Nelson [Tue, 30 Mar 2021 19:52:07 +0000 (12:52 -0700)]
ionic: count dma errors

Increment our dma-error counter in a couple of spots
that were missed before.

Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agoMerge branch 'dpaa2-switch-STP'
David S. Miller [Wed, 31 Mar 2021 00:18:26 +0000 (17:18 -0700)]
Merge branch 'dpaa2-switch-STP'

Ioana Ciornei says:

====================
dpaa2-switch: add STP support

This patch set adds support for STP to the dpaa2-switch.

First of all, it fixes a bug caused by improperly using bridge
BR_STATE_* values directly in the MC ABI.
The next patches deal with creating an ACL table per port and trapping
the STP frames to the control interface by adding an entry into each
table.
The last patch configures proper learning state depending on the STP
state.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodpaa2-switch: setup learning state on STP state change
Ioana Ciornei [Tue, 30 Mar 2021 14:54:19 +0000 (17:54 +0300)]
dpaa2-switch: setup learning state on STP state change

Depending on what STP state a port is in, the learning on that port
should be enabled or disabled.

When the STP state is DISABLED, BLOCKING or LISTENING, no learning should
happen irrespective of what the bridge previously requested. The
learning state is changed to the one set up by the bridge when the STP
state is LEARNING or FORWARDING.
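
A rough sketch of the decision (helper and field names are hypothetical;
the driver uses its own port and learning setters):

  static int port_update_learning(struct port_priv *port_priv, u8 stp_state)
  {
          bool learn_ena;

          switch (stp_state) {
          case BR_STATE_DISABLED:
          case BR_STATE_BLOCKING:
          case BR_STATE_LISTENING:
                  learn_ena = false;                       /* never learn here */
                  break;
          case BR_STATE_LEARNING:
          case BR_STATE_FORWARDING:
          default:
                  learn_ena = port_priv->bridge_learn_ena; /* honor bridge request */
                  break;
          }

          return port_set_learning(port_priv, learn_ena);  /* hypothetical setter */
  }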

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodpaa2-switch: trap STP frames to the CPU
Ioana Ciornei [Tue, 30 Mar 2021 14:54:18 +0000 (17:54 +0300)]
dpaa2-switch: trap STP frames to the CPU

Add an ACL entry in each port's ACL table to redirect any frame that
has the destination MAC address equal to the STP dmac to the control
interface.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodpaa2-switch: keep track of the current learning state per port
Ioana Ciornei [Tue, 30 Mar 2021 14:54:17 +0000 (17:54 +0300)]
dpaa2-switch: keep track of the current learning state per port

Keep track of the current learning state per port so that we can
reference it in the next patches when setting the STP state.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3 years agodpaa2-switch: create and assign an ACL table per port
Ioana Ciornei [Tue, 30 Mar 2021 14:54:16 +0000 (17:54 +0300)]
dpaa2-switch: create and assign an ACL table per port

In order to trap frames to the CPU, the DPAA2 switch uses the ACL table.
At probe time, create an ACL table for each switch port so that in the
next patches we can use this to trap STP frames and redirect them to the
control interface.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>