Jakub Kicinski [Wed, 27 Oct 2021 18:58:11 +0000 (11:58 -0700)]
Merge branch 'two-reverts-to-calm-down-devlink-discussion'
Leon Romanovsky says:
====================
Two reverts to calm down devlink discussion
Two reverts, as discussed in [1], of a fast, easy, and wrong-in-the-long-run
solution to the syzkaller bug [2].
[1] https://lore.kernel.org/all/20211026120234.3408fbcc@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com
[2] https://lore.kernel.org/netdev/000000000000af277405cf0a7ef0@google.com/
====================
Link: https://lore.kernel.org/r/cover.1635276828.git.leonro@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Leon Romanovsky [Tue, 26 Oct 2021 19:40:42 +0000 (22:40 +0300)]
Revert "devlink: Remove not-executed trap policer notifications"
This reverts commit 22849b5ea5952d853547cc5e0651f34a246b2a4f, as it
revealed that mlxsw and netdevsim (copy/paste from mlxsw) reregister
devlink objects during another devlink user-triggered command.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Leon Romanovsky [Tue, 26 Oct 2021 19:40:41 +0000 (22:40 +0300)]
Revert "devlink: Remove not-executed trap group notifications"
This reverts commit 8bbeed4858239ac956a78e5cbaf778bd6f3baef8, as it
revealed that mlxsw and netdevsim (copy/paste from mlxsw) reregister
devlink objects during another devlink user-triggered command.
Fixes: 22849b5ea595 ("devlink: Remove not-executed trap policer notifications")
Reported-by: syzbot+93d5accfaefceedf43c1@syzkaller.appspotmail.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
David S. Miller [Wed, 27 Oct 2021 13:54:02 +0000 (14:54 +0100)]
Merge branch 'br-fdb-refactoring'
Vladimir Oltean says:
====================
Bridge FDB refactoring
This series refactors the br_fdb.c, br_switchdev.c and switchdev.c files
to offer the same level of functionality with a bit less code, and to
clarify the purpose of some functions.
No functional change intended.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:43 +0000 (17:27 +0300)]
net: switchdev: merge switchdev_handle_fdb_{add,del}_to_device
To reduce code churn, the same patch makes multiple changes, since they
all touch the same lines:
1. The implementations of these two are identical, just with different
function pointers. Reduce the duplication and name the function pointer
"mod_cb" instead of "add_cb" and "del_cb". Pass the event as an argument
(a generic sketch of this shape follows the list).
2. Drop the "const" attribute from "orig_dev". If the driver needs to
check whether orig_dev belongs to itself and then
call_switchdev_notifiers(orig_dev, SWITCHDEV_FDB_OFFLOADED), it
can't, because call_switchdev_notifiers takes a non-const struct
net_device *.
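A rough sketch of the merged shape, for illustration only; the names and the
exact signature below are hypothetical, not the actual switchdev API:
#include <linux/netdevice.h>
enum example_fdb_event { EXAMPLE_FDB_ADD, EXAMPLE_FDB_DEL };
/* One handler taking the event as an argument and a single "mod_cb"
 * replaces two near-identical add/del handlers. orig_dev is non-const so a
 * driver can pass it straight back into notifier helpers that expect a
 * non-const struct net_device *. */
static int example_handle_fdb(struct net_device *orig_dev,
                              enum example_fdb_event event,
                              int (*mod_cb)(struct net_device *dev,
                                            enum example_fdb_event event))
{
        return mod_cb(orig_dev, event);
}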
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:42 +0000 (17:27 +0300)]
net: bridge: create a common function for populating switchdev FDB entries
There are two places where a switchdev FDB entry is constructed, one is
br_switchdev_fdb_notify() and the other is br_fdb_replay(). One uses a
struct initializer, and the other declares the structure as
uninitialized and populates the elements one by one.
One problem when introducing new members of struct
switchdev_notifier_fdb_info is that there is a risk of one of these
functions running with an uninitialized value.
So centralize the logic of populating such a structure into a dedicated
function. Being the primary location where these structures are created,
it is fine for that function to use an uninitialized variable and populate
the members one by one, since it is supposed to assign values to all of
them.
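A sketch of such a centralized populate helper; the member set shown is
abridged and only illustrative of the pattern (the designated initializer
zeroes anything not listed):
#include <linux/types.h>
#include <net/switchdev.h>
/* Single place that fills a switchdev_notifier_fdb_info; every new struct
 * member gets a defined value here instead of at scattered call sites. */
static void example_populate_fdb_info(struct switchdev_notifier_fdb_info *info,
                                      const unsigned char *addr, u16 vid,
                                      bool is_local)
{
        *info = (struct switchdev_notifier_fdb_info) {
                .addr     = addr,
                .vid      = vid,
                .is_local = is_local,
                /* members not named here are implicitly zeroed */
        };
}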
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:41 +0000 (17:27 +0300)]
net: bridge: move br_fdb_replay inside br_switchdev.c
br_fdb_replay is only called from switchdev code paths, so it makes
sense for it to be compiled out if switchdev is not enabled in the first place.
As opposed to br_mdb_replay and br_vlan_replay which might be turned off
depending on bridge support for multicast and VLANs, FDB support is
always on. So moving br_mdb_replay and br_vlan_replay inside
br_switchdev.c would mean adding some #ifdef's in br_switchdev.c, so we
keep those where they are.
The reason for the movement is that in future changes there will be some
code reuse between br_switchdev_fdb_notify and br_fdb_replay.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:40 +0000 (17:27 +0300)]
net: bridge: reduce indentation level in fdb_create
We can express the same logic without an "if" condition as big as the
function, just return early if the kmem_cache_alloc() call fails.
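The resulting shape, in a generic sketch (names here are illustrative, not
the bridge code itself):
#include <linux/slab.h>
struct example_entry { unsigned long flags; };
static struct example_entry *example_create(struct kmem_cache *cache)
{
        struct example_entry *e = kmem_cache_alloc(cache, GFP_ATOMIC);
        /* bail out early instead of wrapping the whole body in "if (e)" */
        if (!e)
                return NULL;
        e->flags = 0;
        return e;
}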
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:39 +0000 (17:27 +0300)]
net: bridge: rename br_fdb_insert to br_fdb_add_local
br_fdb_insert() is a wrapper over fdb_insert() that also takes the
bridge hash_lock.
With fdb_insert() being renamed to fdb_add_local(), rename
br_fdb_insert() to br_fdb_add_local().
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:38 +0000 (17:27 +0300)]
net: bridge: rename fdb_insert to fdb_add_local
fdb_insert() is not a descriptive name for this function, and it is also
easy to confuse with __br_fdb_add(), fdb_add_entry() and br_fdb_update().
Even more confusingly, it is not related in any way to those functions;
neither one calls the other.
Since fdb_insert() basically deals with the creation of a BR_FDB_LOCAL
entry and is called only from functions where that is the intention:
- br_fdb_changeaddr
- br_fdb_change_mac_address
- br_fdb_insert
then rename it to fdb_add_local(), because its removal counterpart is
called fdb_delete_local().
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:37 +0000 (17:27 +0300)]
net: bridge: remove fdb_insert forward declaration
fdb_insert() has a forward declaration because its first caller,
br_fdb_changeaddr(), is declared before fdb_create(), a function which
fdb_insert() needs.
This patch moves the 2 functions above br_fdb_changeaddr() and deletes
the forward declaration for fdb_insert().
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 14:27:36 +0000 (17:27 +0300)]
net: bridge: remove fdb_notify forward declaration
fdb_notify() has a forward declaration because its first caller,
fdb_delete(), is declared before 3 functions that fdb_notify() needs:
fdb_to_nud(), fdb_fill_info() and fdb_nlmsg_size().
This patch moves the aforementioned 4 functions above fdb_delete() and
deletes the forward declaration.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 27 Oct 2021 13:50:11 +0000 (14:50 +0100)]
Merge branch 'mvneta-phylink'
Russell King says:
====================
Convert mvneta to phylink supported_interfaces
This patch series converts mvneta to use phylink's supported_interfaces
bitmap to simplify the validate() implementation. The patches:
1) Add the supported interface modes to the supported_interfaces bitmap.
2) Removes the checks for the interface type being supported from
the validate callback
3) Removes the now unnecessary checks and call to
phylink_helper_basex_speed() to support switching between
1000base-X and 2500base-X for SFPs
(3) becomes possible because when asking the MAC for its complete
support, we walk all supported interfaces which will include 1000base-X
and 2500base-X only if the comphy is present.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King (Oracle) [Wed, 27 Oct 2021 09:03:53 +0000 (10:03 +0100)]
net: mvneta: drop use of phylink_helper_basex_speed()
Now that we have a better method to select SFP interface modes, we
no longer need to use phylink_helper_basex_speed() in a driver's
validation function, and we can also get rid of our hack to indicate
both 1000base-X and 2500base-X if the comphy is present to make that
work. Remove this hack and use of phylink_helper_basex_speed().
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King (Oracle) [Wed, 27 Oct 2021 09:03:48 +0000 (10:03 +0100)]
net: mvneta: remove interface checks in mvneta_validate()
As phylink checks the interface mode against the supported_interfaces
bitmap, we no longer need to validate the interface mode in the
validation function. Remove this to simplify it.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Wed, 27 Oct 2021 09:03:43 +0000 (10:03 +0100)]
net: mvneta: populate supported_interfaces member
Populate the phy_interface_t bitmap for the Marvell mvneta driver with
the interface modes supported by the MAC.
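Roughly, this amounts to setting bits in the phylink_config
supported_interfaces mask at probe time; a simplified sketch (the exact set
of modes depends on the SoC and comphy, the function name is illustrative):
#include <linux/phy.h>
#include <linux/phylink.h>
static void example_fill_supported_interfaces(struct phylink_config *config,
                                              bool has_comphy)
{
        __set_bit(PHY_INTERFACE_MODE_SGMII, config->supported_interfaces);
        __set_bit(PHY_INTERFACE_MODE_RGMII, config->supported_interfaces);
        /* 1000base-X/2500base-X are only reachable through the comphy */
        if (has_comphy) {
                __set_bit(PHY_INTERFACE_MODE_1000BASEX,
                          config->supported_interfaces);
                __set_bit(PHY_INTERFACE_MODE_2500BASEX,
                          config->supported_interfaces);
        }
}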
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 27 Oct 2021 13:39:57 +0000 (14:39 +0100)]
Merge tag 'mlx5-updates-2021-10-26' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2021-10-26
HW-GRO support in mlx5
Besides the HW-GRO support, this series includes two trivial non-mlx5 patches:
- net: Prevent HW-GRO and LRO features operate together
- lib: bitmap: Introduce node-aware alloc API
Khalid Manaa says:
==================
This series implements the HW-GRO offload using the HW feature SHAMPO.
HW-GRO: Hardware offload for the Generic Receive Offload feature.
SHAMPO: Split Headers And Merge Payload Offload.
This feature performs header/data split for each received packet and
merges the payloads of packets belonging to the same session.
There are new HW components for this feature:
The headers buffer:
- a cyclic buffer where the packet headers will be located.
Reservation buffer:
- the capability to divide RQ WQEs into reservations of a definite size, in
a granularity of 4KB; the reservation is used to define the largest segment
that we can create by packet stitching.
Each reservation will have a session, and a newly received packet can be
merged into the session, terminate it, or open a new one according to the
match criteria.
When a new packet is received, its headers are written to the headers buffer
and its data is written to the reservation; if the packet matches the
session, the data is written contiguously, otherwise it is written after
performing an alignment.
SHAMPO RQ, WQ and CQE changes:
-----------------------------
RQ (receive queue) new params:
-shampo_no_match_alignment_granularity: the HW alignment granularity in case
the received packet doesn't match the current session.
-shampo_match_criteria_type: the type of match criteria.
-reservation_timeout: the maximum time that the HW will hold the reservation.
- Each RQ has an SKB that represents the currently open flow.
WQ (work queue) new params:
-headers_mkey: mkey that represents the headers buffer, where the packets
headers will be written by the HW.
-shampo_enable: flag to verify if the WQ supports SHAMPO feature.
-log_reservation_size: the log of the reservation size where the data of
the packet will be written by the HW.
-log_max_num_of_packets_per_reservation: log of the maximum number of packets
that can be written to the same reservation.
-log_headers_entry_size: log of the header entry size of the headers buffer.
-log_headers_buffer_entry_num: log of the entries number of the headers buffer.
CQEs (Completion queue entry) SHAMPO fields:
-match: in case it is set, then the current packet matches the opened session.
-flush: in case it is set, the opened session must be flushed.
-header_size: the size of the packet’s headers.
-header_entry_index: the entry index in the headers buffer of the received
packet headers.
-data_offset: the offset of the received packet data in the WQE.
HW-GRO works as follows:
------------------------
The feature can be enabled on the interface with the ethtool command by
turning on rx-gro-hw. When the feature is on, the mlx5 driver will reopen
the RQ to support the SHAMPO feature: it will allocate the headers buffer
and fill in the parameters regarding the reservation and the match criteria.
Receive packet flow:
Each RQ will hold an SKB that represents the currently open GRO session.
The driver has a new CQE handler, mlx5e_handle_rx_cqe_mpwrq_shampo, which
uses the CQE SHAMPO params to extract the location of the packet's headers
in the headers buffer and the location of the packet's data in the RQ.
Also, the CQE has two flags, flush and match, that indicate whether the
current packet matches the current session and whether the session needs
to be closed.
If there is an open session and we receive a matching packet, the handler
merges the packet's payload into the current SKB; if there is no match,
the handler flushes the SKB and creates a new one for the new packet.
If the flush flag is set, the driver closes the session and the SKB is
passed to the network stack.
If the driver merged packets into the SKB, it updates the checksum of the
packet's headers before passing the SKB to the network stack.
SKB build:
---------
The driver will build a new SKB in the following situations:
- there is no currently open session.
- the current packet doesn't match the current session.
- there is no room left to add the packet's data to the SKB that represents
the current session.
Otherwise, the driver will add the packet's data to the existing SKB.
When the driver builds a new SKB, the linear area will contain only the
packet headers and the data will be added to the SKB fragments.
If the entry size of the headers buffer is sufficient to build the SKB, it
will be used; otherwise the driver will allocate new memory to build the SKB.
==================
====================
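The per-CQE decision flow described in the cover letter above can be
summarized with a small standalone model; all types and helpers here are
made up for illustration and are not mlx5 driver code:
#include <stdbool.h>
#include <stdio.h>
struct cqe { bool match; bool flush; int len; };
struct session { bool open; int bytes; };
static void deliver(struct session *s)
{
        printf("pass %d merged bytes to the stack\n", s->bytes);
        s->open = false;
        s->bytes = 0;
}
static void handle_cqe(struct session *s, const struct cqe *c)
{
        if (!s->open) {                 /* no open session: start one */
                s->open = true;
                s->bytes = c->len;
        } else if (!c->match) {         /* no match: flush, then start anew */
                deliver(s);
                s->open = true;
                s->bytes = c->len;
        } else {                        /* match: stitch the payload in */
                s->bytes += c->len;
        }
        if (c->flush)                   /* flush flag: close the session */
                deliver(s);
}
int main(void)
{
        const struct cqe pkts[] = {
                { true, false, 1448 }, { true, false, 1448 }, { true, true, 500 },
        };
        struct session s = { false, 0 };
        for (unsigned int i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++)
                handle_cqe(&s, &pkts[i]);
        return 0;
}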
Signed-off-by: David S. Miller <davem@davemloft.net>
Maor Dickman [Mon, 25 Oct 2021 13:54:12 +0000 (16:54 +0300)]
net/mlx5: Lag, Make mlx5_lag_is_multipath() be static inline
Fix "no previous prototype" W=1 warnings when CONFIG_MLX5_CORE_EN is not set:
drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h:34:6: error: no previous prototype for ‘mlx5_lag_is_multipath’ [-Werror=missing-prototypes]
34 | bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev) { return false; }
| ^~~~~~~~~~~~~~~~~~~~~
Fixes: 14fe2471c628 ("net/mlx5: Lag, change multipath and bonding to be mutually exclusive")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Khalid Manaa [Mon, 11 Oct 2021 07:36:24 +0000 (10:36 +0300)]
net/mlx5e: Prevent HW-GRO and CQE-COMPRESS features operate together
HW-GRO and CQE-COMPRESS are mutually exclusive; this commit adds this
restriction.
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Khalid Manaa [Wed, 26 May 2021 07:01:34 +0000 (10:01 +0300)]
net/mlx5e: Add HW-GRO offload
This commit introduces HW-GRO offload by using the SHAMPO feature:
- Add a set-feature handler for HW-GRO.
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Khalid Manaa [Tue, 13 Oct 2020 07:34:35 +0000 (10:34 +0300)]
net/mlx5e: Add HW_GRO statistics
This patch adds HW_GRO counters to the RX packets statistics:
- gro_match_packets: counter of received packets with the match flag set.
- gro_packets: counter of packets received over the HW_GRO feature;
this counter is increased by one for every received HW_GRO CQE.
- gro_bytes: counter of bytes received over the HW_GRO feature;
this counter is increased by the received bytes for every
received HW_GRO CQE.
- gro_skbs: counter of built HW_GRO SKBs,
increased by one when we flush an HW_GRO SKB
(when we call napi_gro_receive with an hw_gro SKB).
- gro_large_hds: counter of received packets with a large header size;
in case the packet needs a new SKB, the driver will allocate
a new one and will not use the headers entry to build it.
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Khalid Manaa [Mon, 14 Sep 2020 12:55:26 +0000 (15:55 +0300)]
net/mlx5e: HW_GRO cqe handler implementation
This patch updates the SHAMPO CQE handler to support HW_GRO.
Changes in the SHAMPO CQE handler:
- The CQE match and flush fields are used to determine whether to build a
new skb using the newly received packet or to add the received packet's
data to the existing RQ.hw_gro_skb; these fields are also used to
determine when to flush the skb.
- At the end of mlx5e_poll_rx_cq, the RQ.hw_gro_skb is flushed.
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-Ishay [Mon, 14 Sep 2020 09:51:57 +0000 (12:51 +0300)]
net/mlx5e: Add data path for SHAMPO feature
The header buffer is used to store the headers of the RX packets.
The header buffer size is deduced from the WorkQueue size plus the
restriction of max packets per WorkQueueElement.
This commit adds the functionality for posting/updating memory for
the header buffer during the posting/updating of WQEs.
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Khalid Manaa [Tue, 19 May 2020 12:45:38 +0000 (15:45 +0300)]
net/mlx5e: Add handle SHAMPO cqe support
This patch adds the new CQE SHAMPO fields:
- flush: indicates that we must close the current session and pass the SKB
to the network stack.
- match: indicates that the current packet matches the opened session; the
packet will be merged into the current SKB.
- header_size: the size of the packet headers that are written into the
headers buffer.
- header_entry_index: the entry index in the headers buffer.
- data_offset: the packet's data offset in the WQE.
Also, a new CQE handler is added to handle SHAMPO packets:
- The new handler uses the CQE SHAMPO fields to build the SKB.
The CQE's flush and match fields are not used in this patch; packets are
not merged yet.
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-Ishay [Wed, 9 Jun 2021 09:28:57 +0000 (12:28 +0300)]
net/mlx5e: Add control path for SHAMPO feature
This commit introduces the control path infrastructure for the SHAMPO
feature.
The SHAMPO feature enables packet stitching by splitting packets into
header and payload; the header is placed on a dedicated buffer and the
payload on the RX ring. This allows stitching the data part of a flow
together contiguously in the receive buffer.
The SHAMPO feature is implemented as a linked-list striding RQ feature.
To support packet splitting and payload stitching:
- Enlarge the ICOSQ and the corresponding CQ to support the header buffer
memory regions.
- Add support for creating a linked-list striding RQ with the SHAMPO
feature set in the open_rq function.
- Add a deallocation function and corresponding calls for the SHAMPO
header buffer.
- Add mlx5e_create_umr_klm_mkey to support a KLM mkey for the header
buffer.
- Rename mlx5e_create_umr_mkey to mlx5e_create_umr_mtt_mkey.
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-Ishay [Tue, 14 Jul 2020 11:40:32 +0000 (14:40 +0300)]
net/mlx5e: Add support to klm_umr_wqe
This commit adds the definitions needed for using the klm_umr_wqe.
UMR stands for user-mode memory registration; it is a mechanism for
altering the address translation properties of an MKEY by posting a
WorkQueueElement, aka WQE, on the send queue.
MKEY stands for memory key; MKEYs are used to describe a region in memory
that can later be used by the HW.
KLM stands for {Key, Length, MemVa}; a KLM_MKEY is an indirect MKEY that
enables mapping multiple memory spaces with different sizes into a unified
MKEY.
klm_umr_wqe is a UMR WQE that is used to update a KLM_MKEY.
The SHAMPO feature uses a KLM_MKEY for memory registration of its header
buffer.
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Khalid Manaa [Wed, 9 Jun 2021 09:27:32 +0000 (12:27 +0300)]
net/mlx5e: Rename TIR lro functions to TIR packet merge functions
This series introduces a new packet merge type, therefore rename the LRO
functions to packet merge functions to support the new merge type:
- Generalize + rename mlx5e_build_tir_ctx_lro to
mlx5e_build_tir_ctx_packet_merge.
- Rename mlx5e_modify_tirs_lro to mlx5e_modify_tirs_packet_merge.
- Rename the lro bit in mlx5_ifc_modify_tir_bitmask_bits to packet_merge.
- Rename lro_en in mlx5e_params to a packet_merge_type type and combine
the packet_merge params into one struct mlx5e_packet_merge_param.
Signed-off-by: Khalid Manaa <khalidm@nvidia.com>
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-Ishay [Wed, 9 Sep 2020 14:36:39 +0000 (17:36 +0300)]
net/mlx5: Add SHAMPO caps, HW bits and enumerations
This commit adds the SHAMPO bit to hca_cap, the SHAMPO capabilities
structure, and SHAMPO-related HW spec hardware fields and enumerations.
SHAMPO stands for: split headers and merge payload offload.
SHAMPO new fields:
WQ:
- headers_mkey: mkey that represents the headers buffer, where the packets
headers will be written by the HW.
- shampo_enable: flag to verify if the WQ supports SHAMPO feature.
- log_reservation_size: the log of the reservation size where the data of
the packet will be written by the HW.
- log_max_num_of_packets_per_reservation: log of the maximum number of
packets that can be written to the same reservation.
- log_headers_entry_size: log of the header entry size of the headers buffer.
- log_headers_buffer_entry_num: log of the entries number of the headers buffer.
RQ:
- shampo_no_match_alignment_granularity: the HW alignment granularity
in case the received packet doesn't match the current session.
- shampo_match_criteria_type: the type of match criteria.
- reservation_timeout: the maximum time that the HW will hold the
reservation.
mlx5_ifc_shampo_cap_bits, the capabilities of the SHAMPO feature:
- shampo_log_max_reservation_size: the maximum allowed value of the field
WQ.log_reservation_size.
- log_reservation_size: the minimum allowed value of the field
WQ.log_reservation_size.
- shampo_min_mss_size: the minimum payload size of packet that can open
a new session or be merged to a session.
- shampo_max_log_headers_entry_size: the maximum allowed value of the field
WQ.log_headers_entry_size
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-Ishay [Thu, 2 Jul 2020 14:22:45 +0000 (17:22 +0300)]
net/mlx5e: Rename lro_timeout to packet_merge_timeout
TIR stands for transport interface receive; the TIR object is
responsible for performing all transport-related operations on
the receive side, like packet processing, demultiplexing the packets
to different RQs, etc.
lro_timeout is a field in the TIR that is used to set the timeout for an
LRO session. This series introduces a new packet merge type, therefore
rename lro_timeout to packet_merge_timeout for all packet merge types.
Signed-off-by: Ben Ben-Ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Ben Ben-ishay [Wed, 16 Dec 2020 10:32:24 +0000 (12:32 +0200)]
net: Prevent HW-GRO and LRO features operate together
LRO and HW-GRO are mutually exclusive; this commit adds this restriction
in netdev_fix_features(). HW-GRO is preferred, meaning that if both the
HW-GRO and LRO features are requested, LRO is cleared.
Signed-off-by: Ben Ben-ishay <benishay@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Tariq Toukan [Wed, 30 Dec 2020 09:41:52 +0000 (11:41 +0200)]
lib: bitmap: Introduce node-aware alloc API
Expose new node-aware API for bitmap allocation:
bitmap_alloc_node() / bitmap_zalloc_node().
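A minimal usage sketch of the new helpers, pairing with the existing
bitmap_free() (the function names here are illustrative):
#include <linux/bitmap.h>
#include <linux/slab.h>
static unsigned long *example_alloc_mask(unsigned int nbits, int node)
{
        /* allocate the bitmap on the given NUMA node, zero-initialized */
        return bitmap_zalloc_node(nbits, GFP_KERNEL, node);
}
static void example_free_mask(unsigned long *bm)
{
        bitmap_free(bm);
}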
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Luo Jie [Tue, 26 Oct 2021 10:29:57 +0000 (18:29 +0800)]
net: phy: fixed warning: Function parameter not described
Fixed warning: Function parameter or member 'enable' not
described in 'genphy_c45_fast_retrain'
Signed-off-by: Luo Jie <luoj@codeaurora.org>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20211026102957.17100-1-luoj@codeaurora.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 26 Oct 2021 15:29:39 +0000 (08:29 -0700)]
net/mlx5: remove the recent devlink params
revert commit 46ae40b94d88 ("net/mlx5: Let user configure io_eq_size param")
revert commit a6cb08daa3b4 ("net/mlx5: Let user configure event_eq_size param")
revert commit 554604061979 ("net/mlx5: Let user configure max_macs param")
The EQE parameters are applicable to more drivers; they should
be configured via a standard API, probably ethtool. Example of
another driver needing something similar:
https://lore.kernel.org/all/1633454136-14679-3-git-send-email-sbhatta@marvell.com/
The last param, "max_macs", is probably fine but the documentation
is severely lacking. The meaning and implications of changing the
param need to be stated.
Link: https://lore.kernel.org/r/20211026152939.3125950-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
David S. Miller [Tue, 26 Oct 2021 14:10:37 +0000 (15:10 +0100)]
Merge branch 'phy-supported-interfaces-bitmap'
Russell King says:
====================
Introduce supported interfaces bitmap
This series introduces a new bitmap to allow us to indicate which
phy_interface_t modes are supported.
Currently, phylink will call ->validate with PHY_INTERFACE_MODE_NA to
request all link mode capabilities from the MAC driver before choosing
an interface to use. This leads in some cases to some rather hairy
code. This can be simplified if phylink is aware of the interface modes
that the MAC supports, and it can instead walk those modes, calling
->validate for each one, and combining the results.
This series merely introduces the support; there is no change of
behaviour until MAC drivers populate their supported_interfaces bitmap.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King (Oracle) [Tue, 26 Oct 2021 10:06:11 +0000 (11:06 +0100)]
net: phylink: use supported_interfaces for phylink validation
If the network device supplies a supported interface bitmap, we can use
that during phylink's validation to simplify MAC drivers in two ways by
using the supported_interfaces bitmap to:
1. reject unsupported interfaces before calling into the MAC driver.
2. generate the set of all supported link modes across all supported
interfaces (used mainly for SFP, but also some 10G PHYs.)
Suggested-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 26 Oct 2021 10:06:06 +0000 (11:06 +0100)]
net: phylink: add MAC phy_interface_t bitmap
Add a phy_interface_t bitmap so the MAC driver can specify which PHY
interface modes it supports.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King (Oracle) [Tue, 26 Oct 2021 10:06:01 +0000 (11:06 +0100)]
net: phy: add phy_interface_t bitmap support
Add support for a bitmap for phy interface modes (a brief usage sketch
follows the list), which includes:
- a macro to declare the interface bitmap
- an inline helper to zero the interface bitmap
- an inline helper to detect an empty interface bitmap
- inline helpers to do bitwise AND and OR operations on two interface
bitmaps
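A usage sketch of these helpers, assuming the macro and helper names land
as described (the surrounding function is illustrative):
#include <linux/phy.h>
static bool example_shares_interface_mode(const unsigned long *mac_modes,
                                          const unsigned long *phy_modes)
{
        /* declare a temporary interface bitmap on the stack */
        DECLARE_PHY_INTERFACE_MASK(common);
        /* zero it (shown for completeness), AND the two masks, then test
         * whether any mode is common to both */
        phy_interface_zero(common);
        phy_interface_and(common, mac_modes, phy_modes);
        return !phy_interface_empty(common);
}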
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 26 Oct 2021 14:07:36 +0000 (15:07 +0100)]
Merge branch 'dsa-isolation-prep'
Vladimir Oltean says:
====================
DSA preparations for FDB isolation between bridges
This series makes 2 small changes to DSA's SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE
handler, which will make it possible to offer switch drivers a stable
association between a FDB entry and a bridge device in a future series.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 09:25:56 +0000 (12:25 +0300)]
net: dsa: stop calling dev_hold in dsa_slave_fdb_event
Now that we guarantee that SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE events have
finished executing by the time we leave our bridge upper interface,
we've established a stronger boundary condition for how long the
dsa_slave_switchdev_event_work() might run.
As such, it is no longer possible for DSA slave interfaces to become
unregistered, since they are still bridge ports.
So delete the unnecessary dev_hold() and dev_put().
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Tue, 26 Oct 2021 09:25:55 +0000 (12:25 +0300)]
net: dsa: flush switchdev workqueue when leaving the bridge
DSA is preparing to offer switch drivers an API through which they can
associate each FDB entry with a struct net_device *bridge_dev. This can
be used to perform FDB isolation (the FDB lookup performed on the
ingress of a standalone, or bridged port, should not find an FDB entry
that is present in the FDB of another bridge).
In preparation of that work, DSA needs to ensure that by the time we
call the switch .port_fdb_add and .port_fdb_del methods, the
dp->bridge_dev pointer is still valid, i.e. the port is still a bridge
port.
This is not guaranteed because the SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE API
requires drivers that must have sleepable context to handle those events
to schedule the deferred work themselves. DSA does this through the
dsa_owq.
It can happen that a port leaves a bridge, del_nbp() flushes the FDB on
that port, SWITCHDEV_FDB_DEL_TO_DEVICE is notified in atomic context,
DSA schedules its deferred work, but del_nbp() finishes unlinking the
bridge as a master from the port before DSA's deferred work is run.
Fundamentally, the port must not be unlinked from the bridge until all
FDB deletion deferred work items have been flushed. The bridge must wait
for the completion of these hardware accesses.
An attempt has been made to address this issue centrally in switchdev by
making SWITCHDEV_FDB_DEL_TO_DEVICE deferred (=> blocking) at the switchdev
level, which would offer implicit synchronization with del_nbp:
https://patchwork.kernel.org/project/netdevbpf/cover/20210820115746.3701811-1-vladimir.oltean@nxp.com/
but it seems that any attempt to modify switchdev's behavior and make
the events blocking there would introduce undesirable side effects in
other switchdev consumers.
The most undesirable behavior seems to be that
switchdev_deferred_process_work() takes the rtnl_mutex itself, which
would be worse off than having the rtnl_mutex taken individually from
drivers which is what we have now (except DSA which has removed that
lock since commit 0faf890fc519 ("net: dsa: drop rtnl_lock from
dsa_slave_switchdev_event_work")).
So to offer the needed guarantee to DSA switch drivers, I have come up
with a compromise solution that does not require switchdev rework:
we already have a hook at the last moment in time when the bridge is
still an upper of ours: the NETDEV_PRECHANGEUPPER handler. We can flush
the dsa_owq manually from there, which makes all FDB deletions
synchronous.
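In rough terms, the compromise looks like the sketch below (generic, not
the actual DSA code; the workqueue name is illustrative):
#include <linux/netdevice.h>
#include <linux/workqueue.h>
extern struct workqueue_struct *example_owq;    /* deferred FDB work queue */
static int example_prechangeupper(struct netdev_notifier_changeupper_info *info)
{
        /* about to be unlinked from a bridge upper: make sure every
         * deferred FDB deletion has hit the hardware before the unlink
         * completes */
        if (netif_is_bridge_master(info->upper_dev) && !info->linking)
                flush_workqueue(example_owq);
        return 0;
}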
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lukas Wunner [Tue, 26 Oct 2021 05:15:32 +0000 (07:15 +0200)]
ifb: Depend on netfilter alternatively to tc
IFB originally depended on NET_CLS_ACT for traffic redirection.
But since v4.5, that may be achieved with NFT_FWD_NETDEV as well.
Fixes: 39e6dea28adc ("netfilter: nf_tables: add forward expression to the netdev family")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: <stable@vger.kernel.org> # v4.5+: bcfabee1afd9: netfilter: nft_fwd_netdev: allow to redirect to ifb via ingress
Cc: <stable@vger.kernel.org> # v4.5+
Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Tue, 26 Oct 2021 01:57:28 +0000 (09:57 +0800)]
mctp: Implement extended addressing
This change allows an extended address struct - struct sockaddr_mctp_ext
- to be passed to sendmsg/recvmsg. This allows userspace to specify
output ifindex and physical address information (for sendmsg) or receive
the input ifindex/physaddr for incoming messages (for recvmsg). This is
typically used by userspace for MCTP address discovery and assignment
operations.
The extended addressing facility is conditional on a new sockopt:
MCTP_OPT_ADDR_EXT; userspace must explicitly enable addressing before
the kernel will consume/populate the extended address data.
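For example, a userspace consumer might opt in roughly like this, assuming
the UAPI names from this change (AF_MCTP, SOL_MCTP, MCTP_OPT_ADDR_EXT and
struct sockaddr_mctp_ext) are exposed via <linux/mctp.h>:
#include <sys/socket.h>
#include <linux/mctp.h>
int example_open_mctp_ext(void)
{
        int sd = socket(AF_MCTP, SOCK_DGRAM, 0);
        int on = 1;
        if (sd < 0)
                return -1;
        /* without this opt-in the kernel neither consumes nor fills the
         * extended (ifindex + physical address) part of the address */
        if (setsockopt(sd, SOL_MCTP, MCTP_OPT_ADDR_EXT, &on, sizeof(on)) < 0)
                return -1;
        /* sendmsg()/recvmsg() may now use struct sockaddr_mctp_ext in
         * msg_name to carry the ifindex and physical address */
        return sd;
}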
Includes a fix for an uninitialised var:
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nathan Chancellor [Mon, 25 Oct 2021 21:12:39 +0000 (14:12 -0700)]
net: ax88796c: Remove pointless check in ax88796c_open()
Clang warns:
drivers/net/ethernet/asix/ax88796c_main.c:851:24: error: address of
array 'ax_local->phydev->advertising' will always evaluate to 'true'
[-Werror,-Wpointer-bool-conversion]
if (ax_local->phydev->advertising &&
~~~~~~~~~~~~~~~~~~^~~~~~~~~~~ ~~
advertising cannot be NULL here if ax_local is not NULL, which cannot
happen due to the check in ax88796c_probe(). Remove the check.
Link: https://github.com/ClangBuiltLinux/linux/issues/1492
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nathan Chancellor [Mon, 25 Oct 2021 21:12:38 +0000 (14:12 -0700)]
net: ax88796c: Fix clang -Wimplicit-fallthrough in ax88796c_set_mac()
Clang warns:
drivers/net/ethernet/asix/ax88796c_main.c:696:2: error: unannotated fall-through between switch labels [-Werror,-Wimplicit-fallthrough]
case SPEED_10:
^
drivers/net/ethernet/asix/ax88796c_main.c:696:2: note: insert 'break;' to avoid fall-through
case SPEED_10:
^
break;
drivers/net/ethernet/asix/ax88796c_main.c:706:2: error: unannotated fall-through between switch labels [-Werror,-Wimplicit-fallthrough]
case DUPLEX_HALF:
^
drivers/net/ethernet/asix/ax88796c_main.c:706:2: note: insert 'break;' to avoid fall-through
case DUPLEX_HALF:
^
break;
Clang is a little more pedantic than GCC, which permits implicit
fallthroughs to cases that contain just break or return. Clang's version
is more in line with the kernel's own stance in deprecated.rst, which
states that all switch/case blocks must end in either break,
fallthrough, continue, goto, or return. Add the missing breaks to fix
the warning.
Link: https://github.com/ClangBuiltLinux/linux/issues/1491
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Mon, 25 Oct 2021 18:37:34 +0000 (11:37 -0700)]
net: mana: Allow setting the number of queues while the NIC is down
The existing code doesn't allow setting the number of queues while the
NIC is down.
Update the ethtool handler functions to support setting the number of
queues while the NIC is in the down state.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andreas Oetken [Mon, 25 Oct 2021 18:56:18 +0000 (20:56 +0200)]
net: hsr: Add support for redbox supervision frames
Add support for the RedBox supervision frames as defined in
IEC 62439-3:2018.
Signed-off-by: Andreas Oetken <andreas.oetken@siemens-energy.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 26 Oct 2021 13:45:12 +0000 (14:45 +0100)]
Merge branch 'tcp_stream_alloc_skb'
Eric Dumazet says:
====================
tcp: tcp_stream_alloc_skb() changes
sk_stream_alloc_skb() is only used by TCP.
Rename it to tcp_stream_alloc_skb() and apply small
optimizations.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 25 Oct 2021 22:13:42 +0000 (15:13 -0700)]
tcp: remove unneeded code from tcp_stream_alloc_skb()
Aligning @size argument to 4 bytes is not needed.
The header alignment has nothing to do with @size.
It really depends on skb->head alignment and MAX_TCP_HEADER.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 25 Oct 2021 22:13:41 +0000 (15:13 -0700)]
tcp: use MAX_TCP_HEADER in tcp_stream_alloc_skb
Both IPv4 and IPv6 use the same reserve; no need to risk
cache line misses to fetch its value.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 25 Oct 2021 22:13:40 +0000 (15:13 -0700)]
tcp: rename sk_stream_alloc_skb
sk_stream_alloc_skb() is only used by TCP.
Rename it to make this clear, and move its declaration
to include/net/tcp.h
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 25 Oct 2021 18:15:55 +0000 (11:15 -0700)]
net: annotate data-race in neigh_output()
neigh_output() reads n->nud_state and hh->hh_len locklessly.
This is fine, but we need to add annotations and document this.
We evaluate skip_cache first to avoid reading these fields
if the cache has to be bypassed.
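The annotated, lockless read side then looks roughly like this sketch
(simplified, not a verbatim copy of include/net/neighbour.h):
#include <linux/skbuff.h>
#include <net/neighbour.h>
static int example_neigh_output(struct neighbour *n, struct sk_buff *skb,
                                bool skip_cache)
{
        const struct hh_cache *hh = &n->hh;
        /* lockless, racy reads: annotate with READ_ONCE() and check
         * skip_cache first so they are not done at all when bypassing */
        if (!skip_cache &&
            (READ_ONCE(n->nud_state) & NUD_CONNECTED) &&
            READ_ONCE(hh->hh_len))
                return neigh_hh_output(hh, skb);
        return n->output(n, skb);
}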
syzbot report:
BUG: KCSAN: data-race in __neigh_event_send / ip_finish_output2
write to 0xffff88810798a885 of 1 bytes by interrupt on cpu 1:
__neigh_event_send+0x40d/0xac0 net/core/neighbour.c:1128
neigh_event_send include/net/neighbour.h:444 [inline]
neigh_resolve_output+0x104/0x410 net/core/neighbour.c:1476
neigh_output include/net/neighbour.h:510 [inline]
ip_finish_output2+0x80a/0xaa0 net/ipv4/ip_output.c:221
ip_finish_output+0x3b5/0x510 net/ipv4/ip_output.c:309
NF_HOOK_COND include/linux/netfilter.h:296 [inline]
ip_output+0xf3/0x1a0 net/ipv4/ip_output.c:423
dst_output include/net/dst.h:450 [inline]
ip_local_out+0x164/0x220 net/ipv4/ip_output.c:126
__ip_queue_xmit+0x9d3/0xa20 net/ipv4/ip_output.c:525
ip_queue_xmit+0x34/0x40 net/ipv4/ip_output.c:539
__tcp_transmit_skb+0x142a/0x1a00 net/ipv4/tcp_output.c:1405
tcp_transmit_skb net/ipv4/tcp_output.c:1423 [inline]
tcp_xmit_probe_skb net/ipv4/tcp_output.c:4011 [inline]
tcp_write_wakeup+0x4a9/0x810 net/ipv4/tcp_output.c:4064
tcp_send_probe0+0x2c/0x2b0 net/ipv4/tcp_output.c:4079
tcp_probe_timer net/ipv4/tcp_timer.c:398 [inline]
tcp_write_timer_handler+0x394/0x520 net/ipv4/tcp_timer.c:626
tcp_write_timer+0xb9/0x180 net/ipv4/tcp_timer.c:642
call_timer_fn+0x2e/0x1d0 kernel/time/timer.c:1421
expire_timers+0x135/0x240 kernel/time/timer.c:1466
__run_timers+0x368/0x430 kernel/time/timer.c:1734
run_timer_softirq+0x19/0x30 kernel/time/timer.c:1747
__do_softirq+0x12c/0x26e kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu kernel/softirq.c:636 [inline]
irq_exit_rcu+0x4e/0xa0 kernel/softirq.c:648
sysvec_apic_timer_interrupt+0x69/0x80 arch/x86/kernel/apic/apic.c:1097
asm_sysvec_apic_timer_interrupt+0x12/0x20
native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
acpi_safe_halt drivers/acpi/processor_idle.c:109 [inline]
acpi_idle_do_entry drivers/acpi/processor_idle.c:553 [inline]
acpi_idle_enter+0x258/0x2e0 drivers/acpi/processor_idle.c:688
cpuidle_enter_state+0x2b4/0x760 drivers/cpuidle/cpuidle.c:237
cpuidle_enter+0x3c/0x60 drivers/cpuidle/cpuidle.c:351
call_cpuidle kernel/sched/idle.c:158 [inline]
cpuidle_idle_call kernel/sched/idle.c:239 [inline]
do_idle+0x1a3/0x250 kernel/sched/idle.c:306
cpu_startup_entry+0x15/0x20 kernel/sched/idle.c:403
secondary_startup_64_no_verify+0xb1/0xbb
read to 0xffff88810798a885 of 1 bytes by interrupt on cpu 0:
neigh_output include/net/neighbour.h:507 [inline]
ip_finish_output2+0x79a/0xaa0 net/ipv4/ip_output.c:221
ip_finish_output+0x3b5/0x510 net/ipv4/ip_output.c:309
NF_HOOK_COND include/linux/netfilter.h:296 [inline]
ip_output+0xf3/0x1a0 net/ipv4/ip_output.c:423
dst_output include/net/dst.h:450 [inline]
ip_local_out+0x164/0x220 net/ipv4/ip_output.c:126
__ip_queue_xmit+0x9d3/0xa20 net/ipv4/ip_output.c:525
ip_queue_xmit+0x34/0x40 net/ipv4/ip_output.c:539
__tcp_transmit_skb+0x142a/0x1a00 net/ipv4/tcp_output.c:1405
tcp_transmit_skb net/ipv4/tcp_output.c:1423 [inline]
tcp_xmit_probe_skb net/ipv4/tcp_output.c:4011 [inline]
tcp_write_wakeup+0x4a9/0x810 net/ipv4/tcp_output.c:4064
tcp_send_probe0+0x2c/0x2b0 net/ipv4/tcp_output.c:4079
tcp_probe_timer net/ipv4/tcp_timer.c:398 [inline]
tcp_write_timer_handler+0x394/0x520 net/ipv4/tcp_timer.c:626
tcp_write_timer+0xb9/0x180 net/ipv4/tcp_timer.c:642
call_timer_fn+0x2e/0x1d0 kernel/time/timer.c:1421
expire_timers+0x135/0x240 kernel/time/timer.c:1466
__run_timers+0x368/0x430 kernel/time/timer.c:1734
run_timer_softirq+0x19/0x30 kernel/time/timer.c:1747
__do_softirq+0x12c/0x26e kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu kernel/softirq.c:636 [inline]
irq_exit_rcu+0x4e/0xa0 kernel/softirq.c:648
sysvec_apic_timer_interrupt+0x69/0x80 arch/x86/kernel/apic/apic.c:1097
asm_sysvec_apic_timer_interrupt+0x12/0x20
native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
acpi_safe_halt drivers/acpi/processor_idle.c:109 [inline]
acpi_idle_do_entry drivers/acpi/processor_idle.c:553 [inline]
acpi_idle_enter+0x258/0x2e0 drivers/acpi/processor_idle.c:688
cpuidle_enter_state+0x2b4/0x760 drivers/cpuidle/cpuidle.c:237
cpuidle_enter+0x3c/0x60 drivers/cpuidle/cpuidle.c:351
call_cpuidle kernel/sched/idle.c:158 [inline]
cpuidle_idle_call kernel/sched/idle.c:239 [inline]
do_idle+0x1a3/0x250 kernel/sched/idle.c:306
cpu_startup_entry+0x15/0x20 kernel/sched/idle.c:403
rest_init+0xee/0x100 init/main.c:734
arch_call_rest_init+0xa/0xb
start_kernel+0x5e4/0x669 init/main.c:1142
secondary_startup_64_no_verify+0xb1/0xbb
value changed: 0x20 -> 0x01
Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.15.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 26 Oct 2021 12:35:59 +0000 (13:35 +0100)]
Merge branch 'mlxsw-rif-mac-prefixes'
Ido Schimmel says:
====================
mlxsw: Support multiple RIF MAC prefixes
Currently, mlxsw enforces that all the netdevs used as router interfaces
(RIFs) have the same MAC prefix (e.g., same 38 MSBs in Spectrum-1).
Otherwise, an error is returned to user space with extack. This patchset
relaxes the limitation through the use of RIF MAC profiles.
A RIF MAC profile is a hardware entity that represents a particular MAC
prefix which multiple RIFs can reference. Therefore, the number of
possible MAC prefixes is no longer one, but the number of profiles
supported by the device.
The ability to change the MAC of a particular netdev is useful, for
example, for users who use the netdev to connect to an upstream provider
that performs MAC filtering. Currently, such users are either forced to
negotiate with the provider or change the MAC address of all other
netdevs so that they share the same prefix.
Patchset overview:
Patches #1-#3 are preparations.
Patch #4 adds actual support for RIF MAC profiles.
Patch #5 exposes RIF MAC profiles as a devlink resource, so that user
space has visibility into the maximum number of profiles and current
occupancy. Useful for debugging and testing (next 3 patches).
Patches #6-#8 add both scale and functional tests.
Patch #9 removes tests that validated the previous limitation. It is now
covered by patch #6 for devices that support a single profile.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:25 +0000 (12:42 +0300)]
selftests: mlxsw: Remove deprecated test cases
After adding the previous patches, the constraint that all the router
interface MAC addresses have the same prefix is no longer relevant.
Remove the test cases that validated that this constraint is honored.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:24 +0000 (12:42 +0300)]
selftests: Add an occupancy test for RIF MAC profiles
When all the RIF MAC profiles are in use, test that it is possible to
change the MAC of a netdev (i.e., a RIF) when its MAC profile is not
shared with other RIFs. Test that replacement fails when the MAC profile
is shared.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:23 +0000 (12:42 +0300)]
selftests: mlxsw: Add forwarding test for RIF MAC profiles
Verify that MAC profile changes are indeed applied and that packets are
forwarded with the correct source MAC.
Output example:
$ ./rif_mac_profiles.sh
TEST: h1->h2: new mac profile [ OK ]
TEST: h2->h1: new mac profile [ OK ]
TEST: h1->h2: edit mac profile [ OK ]
TEST: h2->h1: edit mac profile [ OK ]
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:22 +0000 (12:42 +0300)]
selftests: mlxsw: Add a scale test for RIF MAC profiles
Query the maximum number of supported RIF MAC profiles using
devlink-resource and verify that all available MAC profiles can be utilized
and that an error is generated when user space tries to exceed this number.
Output example in Spectrum-2:
$ TESTS='rif_mac_profile' ./resource_scale.sh
TEST: 'rif_mac_profile' 4 [ OK ]
TEST: 'rif_mac_profile' overflow 5 [ OK ]
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:21 +0000 (12:42 +0300)]
mlxsw: spectrum_router: Expose RIF MAC profiles to devlink resource
Expose via devlink-resource the maximum number of RIF MAC profiles and
their current occupancy, so it can be used for debug and writing generic
tests, like in the next patch.
Example for Spectrum-2 output:
$ devlink resource show pci/0000:06:00.0
...
name rif_mac_profiles size 4 occ 0 unit entry dpipe_tables none
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:20 +0000 (12:42 +0300)]
mlxsw: spectrum_router: Add RIF MAC profiles support
Currently, mlxsw enforces that all the router interfaces (RIFs) have the
same MAC prefix.
Relax this limitation by using RIF MAC profiles. Each profile is
associated with a particular MAC prefix and multiple RIFs can use the
same profile. Therefore, the number of possible MAC prefixes is no
longer one, but the number of profiles supported by the device.
Store the profiles in an IDR and reference count them according to the
number of RIFs using them.
Associate a RIF with a profile when the RIF is created and remove the
association when the RIF is deleted.
Change the association following 'NETDEV_CHANGEADDR' events, except when
only one RIF is using the profile, in which case change the MAC prefix of
the profile itself instead of associating the RIF with a new profile.
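The reference-counting scheme can be pictured with a generic sketch; the
struct name, EXAMPLE_MAX_PROFILES and the prefix length are illustrative
and not the mlxsw implementation:
#include <linux/idr.h>
#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/etherdevice.h>
#define EXAMPLE_MAX_PROFILES 4
struct example_mac_profile {
        refcount_t ref_count;
        u8 mac_prefix[ETH_ALEN];
        int id;
};
static struct example_mac_profile *example_profile_create(struct idr *idr,
                                                          const u8 *addr)
{
        struct example_mac_profile *p = kzalloc(sizeof(*p), GFP_KERNEL);
        int id;
        if (!p)
                return NULL;
        ether_addr_copy(p->mac_prefix, addr);
        refcount_set(&p->ref_count, 1); /* first RIF using this profile */
        id = idr_alloc(idr, p, 0, EXAMPLE_MAX_PROFILES, GFP_KERNEL);
        if (id < 0) {
                kfree(p);
                return NULL;            /* all profiles are in use */
        }
        p->id = id;
        return p;
}
static void example_profile_put(struct idr *idr, struct example_mac_profile *p)
{
        /* drop the RIF's reference; free the profile with the last one */
        if (!refcount_dec_and_test(&p->ref_count))
                return;
        idr_remove(idr, p->id);
        kfree(p);
}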
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:19 +0000 (12:42 +0300)]
mlxsw: spectrum_router: Propagate extack further
The next patch will set the MAC profile of a router interface (RIF) as
part of its configure() callback. The operation can fail in case the
maximum number of profiles was exceeded.
Add extack to mlxsw_sp_rif_ops::configure() in order to communicate such
failures to user space.
In addition, the MAC profile of a RIF can change following a
'NETDEV_CHANGEADDR' notification. Propagate extack to
mlxsw_sp_router_port_change_event() so that failures could be
communicated in this path as well.
No functional changes intended.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:18 +0000 (12:42 +0300)]
mlxsw: resources: Add resource identifier for RIF MAC profiles
Add a resource identifier for maximum RIF MAC profiles so that it could
be later used to query the information from firmware.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Tue, 26 Oct 2021 09:42:17 +0000 (12:42 +0300)]
mlxsw: reg: Add MAC profile ID field to RITR register
Add MAC profile ID field to RITR register so that it could be used for
associating a RIF with a MAC profile ID by a later patch.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 26 Oct 2021 12:21:10 +0000 (13:21 +0100)]
Merge branch 'netfilter-vrf-rework'
Florian Westphal says:
====================
vrf: rework interaction with netfilter/conntrack
V2:
- fix 'plain integer as null pointer' warning
- reword commit message in patch 2 to clarify loss of 'ct set untracked'
This patch series aims to solve the to-be-reverted change 09e856d54bda5f288e
("vrf: Reset skb conntrack connection on VRF rcv") in a different way.
Rather than have skbs pass through conntrack and nat hooks twice, suppress
conntrack invocation if the conntrack/nat hook is called from the vrf driver.
First patch deals with 'incoming connection' case:
1. suppress NAT transformations
2. skip conntrack confirmation
NAT and conntrack confirmation is done when ip/ipv6 stack calls
the postrouting hook.
Second patch deals with local packets:
in vrf driver, mark the skbs as 'untracked', so conntrack output
hook ignores them. This skips all nat hooks as well.
Afterwards, remove the untracked state again so the second
round will pick them up.
One alternative to the chosen implementation would be to add a 'caller
id' field to 'struct nf_hook_state' and then use that, these patches
use the more straightforward check of VRF flag on the state->out device.
The two patches apply to both net and net-next. I am targeting -next
because I think that since SNAT did not work correctly for so long, we
can take the longer route. If you disagree, apply to net at your
discretion.
The patches apply both with 09e856d54bda5f288e reverted or still
in place, but only with the revert in place do ingress conntrack settings
(zone, notrack, etc.) start working again.
I've already submitted selftests for vrf+nfqueue and conntrack+vrf.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Mon, 25 Oct 2021 14:14:00 +0000 (16:14 +0200)]
vrf: run conntrack only in context of lower/physdev for locally generated packets
The VRF driver invokes netfilter for output+postrouting hooks so that users
can create rules that check for 'oif $vrf' rather than lower device name.
This is a problem when NAT rules are configured.
To avoid any conntrack involvement in round 1, tag skbs as 'untracked'
to prevent conntrack from picking them up.
This gets cleared before the packet gets handed to the ip stack so
conntrack will be active on the second iteration.
One remaining issue is that a rule like
output ... oif $vrfname notrack
won't propagate to the second round because we can't tell
'notrack set via ruleset' and 'notrack set by vrf driver' apart.
However, this isn't a regression: the 'notrack' removal happens
instead of unconditional nf_reset_ct().
I'd also like to avoid leaking more vrf specific conditionals into the
netfilter infra.
For ingress, conntrack has already been done before the packet makes it
to the vrf driver; with this patch, egress does connection tracking with
the lower/physical device as well.
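The tagging side boils down to something like this sketch (the clearing
before round 2 is elided here, and the function name is illustrative):
#include <linux/skbuff.h>
#include <net/netfilter/nf_conntrack.h>
static void example_vrf_mark_untracked(struct sk_buff *skb)
{
        /* mark the skb as untracked so the conntrack/NAT output hooks
         * invoked from the VRF device (round 1) leave it alone; the mark
         * is removed again before the packet is handed to the IP stack */
        nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
}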
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Mon, 25 Oct 2021 14:13:59 +0000 (16:13 +0200)]
netfilter: conntrack: skip confirmation and nat hooks in postrouting for vrf
The VRF driver invokes netfilter for output+postrouting hooks so that users
can create rules that check for 'oif $vrf' rather than lower device name.
Afterwards, ip stack calls those hooks again.
This is a problem when conntrack is used with IP masquerading.
masquerading has an internal check that re-validates the output
interface to account for route changes.
This check will trigger in the vrf case.
If the -j MASQUERADE rule matched on the first iteration, then round 2
finds state->out->ifindex != nat->masq_index: the latter is the vrf
index, but out->ifindex is the lower device.
The packet gets dropped and the conntrack entry is invalidated.
This change makes conntrack postrouting skip the nat hooks.
Also skip confirmation. This allows the second round
(postrouting invocation from ipv4/ipv6) to create nat bindings.
This also prevents the second round from seeing packets that had their
source address changed by the nat hook.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 26 Oct 2021 12:17:45 +0000 (13:17 +0100)]
Merge tag 'mlx5-updates-2021-10-25' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2021-10-25
Misc updates for mlx5 driver:
1) Misc updates and cleanups:
- Don't write directly to netdev->dev_addr, From Jakub Kicinski
- Remove unnecessary checks for slow path flag in tc module
- Fix unused function warning of mlx5i_flow_type_mask
- Bridge, support replacing existing FDB entry
2) Sub Functions, Reduction in memory usage:
- Reduce flow counters bulk query buffer size
- Implement max_macs devlink parameter
- Add devlink vendor params to control Event Queue sizes
- Added SF life cycle trace points, by Parav
3) From Aya, Firmware health buffer reporting improvements
- Print health buffer by log level and more missing information
- Periodic update of host time to firmware
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maxwell [Sun, 24 Oct 2021 23:59:03 +0000 (10:59 +1100)]
tcp: don't free a FIN sk_buff in tcp_remove_empty_skb()
v1: Implement a more general statement as recommended by Eric Dumazet. The
sequence number will be advanced, so this check will fix the FIN case and
other cases.
A customer reported sockets stuck in the CLOSING state. A Vmcore revealed that
the write_queue was not empty as determined by tcp_write_queue_empty() but the
sk_buff containing the FIN flag had been freed and the socket was zombied in
that state. Corresponding pcaps show no FIN from the Linux kernel on the wire.
Some instrumentation was added to the kernel and it was found that there is a
timing window where tcp_sendmsg() can run after tcp_send_fin().
tcp_sendmsg() will hit an error, for example:
1269 ▹ if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))↩
1270 ▹ ▹ goto do_error;↩
tcp_remove_empty_skb() will then free the FIN sk_buff because
"skb->len == 0". The TCP socket is then wedged in the FIN-WAIT-1 state
because the FIN is never sent.
If the other side sends a FIN packet the socket will transition to CLOSING and
remain that way until the system is rebooted.
Fix this by checking for the FIN flag in the sk_buff and not freeing it
if that is the case. Testing confirmed that this fixed the issue.
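A hedged illustration of the idea, not the literal patch (the helper name
is made up): a FIN consumes a sequence number, so end_seq advances past
seq even though skb->len stays 0, and testing seq == end_seq spares a
pure-FIN skb.

  #include <net/tcp.h>

  /* An skb is truly empty only if it covers no sequence space at all;
   * a pure-FIN skb has len == 0 but end_seq == seq + 1.
   */
  static bool tcp_skb_truly_empty(const struct sk_buff *skb)
  {
          return TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq;
  }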
Fixes: fdfc5c8594c2 ("tcp: remove empty skb from write queue in error cases")
Signed-off-by: Jon Maxwell <jmaxwell37@gmail.com>
Reported-by: Monir Zouaoui <Monir.Zouaoui@mail.schwarz>
Reported-by: Simon Stier <simon.stier@mail.schwarz>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 26 Oct 2021 02:11:17 +0000 (19:11 -0700)]
Merge branch 'small-fixes-for-true-expression-checks'
Jean Sacren says:
====================
Small fixes for true expression checks
This series fixes checks of an always-true !rc expression.
====================
Link: https://lore.kernel.org/r/cover.1634974124.git.sakiwit@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jean Sacren [Sat, 23 Oct 2021 09:26:15 +0000 (03:26 -0600)]
net: qed_dev: fix check of true !rc expression
Remove the check of !rc in (!rc && !resc_lock_params.b_granted) since it
is always true.
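A generic illustration of the dead check; the struct and helpers below
are made up for the example and are not the driver's actual code:

  #include <linux/errno.h>
  #include <linux/types.h>

  struct resc_lock_params { bool b_granted; };

  static int try_acquire(struct resc_lock_params *p)
  {
          p->b_granted = true;    /* stand-in for the real lock request */
          return 0;
  }

  static int acquire_example(struct resc_lock_params *p)
  {
          int rc = try_acquire(p);

          if (rc)
                  return rc;

          /* was: if (!rc && !p->b_granted) - "!rc" is always true here */
          if (!p->b_granted)
                  return -EBUSY;

          return 0;
  }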
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jean Sacren [Sat, 23 Oct 2021 09:26:14 +0000 (03:26 -0600)]
net: qed_ptp: fix check of true !rc expression
Remove the check of !rc in (!rc && !params.b_granted) since it is always
true.
We should also return the constant 0 directly.
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 26 Oct 2021 01:02:16 +0000 (18:02 -0700)]
Merge branch 'tcp-receive-path-optimizations'
Eric Dumazet says:
====================
tcp: receive path optimizations
This series aims to reduce cache line misses in the RX path.
I am still working on better cache locality in tcp_sock, but that will
wait a few more weeks.
====================
Link: https://lore.kernel.org/r/20211025164825.259415-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:25 +0000 (09:48 -0700)]
ipv6/tcp: small drop monitor changes
Two kfree_skb() calls must be replaced by consume_skb()
for skbs that are not technically dropped.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:24 +0000 (09:48 -0700)]
ipv4: guard IP_MINTTL with a static key
The RFC 5082 IP_MINTTL option is rarely used on hosts.
Add a static key to remove otherwise useless code from the TCP fast path,
along with a potential cache line miss to fetch inet_sk(sk)->min_ttl.
Note that once the ip4_min_ttl static key has been enabled, it stays
enabled until the next boot.
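A minimal sketch of the static-key pattern; the key name follows the
commit message, while the helper and its use are illustrative only:

  #include <linux/jump_label.h>
  #include <net/inet_sock.h>
  #include <linux/ip.h>

  DEFINE_STATIC_KEY_FALSE(ip4_min_ttl);

  /* Compiles down to a patched-out branch until some socket sets
   * IP_MINTTL, at which point the setsockopt path would call
   * static_branch_enable(&ip4_min_ttl).
   */
  static bool ip4_min_ttl_drop(const struct sock *sk, const struct iphdr *iph)
  {
          if (static_branch_unlikely(&ip4_min_ttl))
                  return iph->ttl < READ_ONCE(inet_sk(sk)->min_ttl);
          return false;
  }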
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:23 +0000 (09:48 -0700)]
ipv4: annotate data races around inet->min_ttl
No report from KCSAN yet, but the races are worth documenting.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:22 +0000 (09:48 -0700)]
ipv6: guard IPV6_MINHOPCOUNT with a static key
The RFC 5082 IPV6_MINHOPCOUNT option is rarely used on hosts.
Add a static key to remove otherwise useless code from the TCP fast path,
along with a potential cache line miss to fetch
tcp_inet6_sk(sk)->min_hopcount.
Note that once the ip6_min_hopcount static key has been enabled, it stays
enabled until the next boot.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:21 +0000 (09:48 -0700)]
ipv6: annotate data races around np->min_hopcount
No report from KCSAN yet, but the races are worth documenting.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:20 +0000 (09:48 -0700)]
net: annotate accesses to sk->sk_rx_queue_mapping
sk->sk_rx_queue_mapping can be modified locklessly; add a couple of
READ_ONCE()/WRITE_ONCE() annotations to document this fact.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:19 +0000 (09:48 -0700)]
net: avoid dirtying sk->sk_rx_queue_mapping
sk_rx_queue_mapping is located in a cache line that should be kept read mostly.
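A small sketch of the "don't dirty a read-mostly cache line" idea,
assuming CONFIG_SOCK_RX_QUEUE_MAPPING is enabled; the helper name is
illustrative:

  #include <net/sock.h>

  static void sk_rx_queue_update(struct sock *sk, u16 rx_queue)
  {
          /* only write (and dirty the cache line) when the value changes */
          if (READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue)
                  WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue);
  }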
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:18 +0000 (09:48 -0700)]
net: avoid dirtying sk->sk_napi_id
sk_napi_id is located in a cache line that can be kept read mostly.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:17 +0000 (09:48 -0700)]
ipv6: move inet6_sk(sk)->rx_dst_cookie to sk->sk_rx_dst_cookie
Increase cache locality by moving rx_dst_cookie next to sk->sk_rx_dst.
This removes one or two cache line misses in IPv6 early demux (TCP/UDP).
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Mon, 25 Oct 2021 16:48:16 +0000 (09:48 -0700)]
tcp: move inet->rx_dst_ifindex to sk->sk_rx_dst_ifindex
Increase cache locality by moving rx_dst_ifindex next to sk->sk_rx_dst.
This is part of an effort to reduce cache line misses in TCP fast path.
This removes one cache line miss in early demux.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alexander Lobakin [Sat, 23 Oct 2021 12:19:16 +0000 (12:19 +0000)]
ax88796c: fix fetching error stats from percpu containers
rx_dropped, tx_dropped, rx_frame_errors and rx_crc_errors are wrongly
fetched from the target container rather than from the source percpu
ones.
No idea whether that comes from the vendor driver or is a braino
introduced during the refactoring, but fix it either way.
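For reference, the usual percpu aggregation looks like the sketch below;
the struct and field layout are made up, and the real driver additionally
uses u64_stats_sync around its counters:

  #include <linux/netdevice.h>
  #include <linux/percpu.h>

  struct ax_err_stats {                     /* illustrative only */
          u64 rx_dropped, tx_dropped;
          u64 rx_frame_errors, rx_crc_errors;
  };

  static void ax_sum_err_stats(struct ax_err_stats __percpu *src,
                               struct rtnl_link_stats64 *dst)
  {
          int cpu;

          for_each_possible_cpu(cpu) {
                  /* read from the source percpu counters, not from dst */
                  const struct ax_err_stats *p = per_cpu_ptr(src, cpu);

                  dst->rx_dropped      += p->rx_dropped;
                  dst->tx_dropped      += p->tx_dropped;
                  dst->rx_frame_errors += p->rx_frame_errors;
                  dst->rx_crc_errors   += p->rx_crc_errors;
          }
  }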
Fixes: a97c69ba4f30e ("net: ax88796c: ASIX AX88796C SPI Ethernet Adapter Driver")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Acked-by: Łukasz Stelmach <l.stelmach@samsung.com>
Link: https://lore.kernel.org/r/20211023121148.113466-1-alobakin@pm.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Parav Pandit [Tue, 5 Oct 2021 08:26:05 +0000 (11:26 +0300)]
net/mlx5: SF_DEV Add SF device trace points
Add SF device add and delete specific trace points.
echo mlx5:mlx5_sf_dev_add >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_dev_del >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_vhca_event >> /sys/kernel/debug/tracing/set_event
Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Parav Pandit [Tue, 21 Sep 2021 13:12:28 +0000 (16:12 +0300)]
net/mlx5: SF, Add SF trace points
Add support for trace events for SFs to improve debugging.
This covers:
(a) port add and free trace points
(b) device level trace points
(c) SF hardware context add and free trace points
(d) SF function activate/deactivate and state trace points
SF events examples:
echo mlx5:mlx5_sf_add >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_free >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_hwc_alloc >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_hwc_free >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_hwc_deferred_free >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_update_state >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_activate >> /sys/kernel/debug/tracing/set_event
echo mlx5:mlx5_sf_deactivate >> /sys/kernel/debug/tracing/set_event
Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Mon, 16 Aug 2021 05:41:08 +0000 (08:41 +0300)]
net/mlx5: Let user configure max_macs param
Currently, max_macs takes 70 Kbytes of memory per function. This size is
not needed in all use cases and is critical at large scale.
Hence, allow the user to configure the number of max_macs.
For example, to reduce the number of max_macs to 1, execute:
$ devlink dev param set pci/0000:00:0b.0 name max_macs value 1 \
cmode driverinit
$ devlink dev reload pci/0000:00:0b.0
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Wed, 13 Oct 2021 06:57:54 +0000 (09:57 +0300)]
net/mlx5: Let user configure event_eq_size param
The event EQ is an EQ which receives notifications of almost all the
events generated by the NIC.
Currently, each event EQ takes 512KB of memory. This size is not needed
in most use cases and is critical at large scale. Hence, allow the user
to configure the size of the event EQ.
For example, to reduce the event EQ size to 64, execute:
$ devlink resource set pci/0000:00:0b.0 path /event_eq_size/ size 64
$ devlink dev reload pci/0000:00:0b.0
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Thu, 12 Aug 2021 08:53:34 +0000 (11:53 +0300)]
net/mlx5: Let user configure io_eq_size param
Currently, each I/O EQ takes 128KB of memory. This size is not needed in
all use cases and is critical at large scale.
Hence, allow the user to configure the size of the I/O EQs.
For example, to reduce the I/O EQ size to 64, execute:
$ devlink resource set pci/0000:00:0b.0 path /io_eq_size/ size 64
$ devlink dev reload pci/0000:00:0b.0
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Vlad Buslov [Tue, 19 Oct 2021 15:45:28 +0000 (18:45 +0300)]
net/mlx5: Bridge, support replacing existing FDB entry
The SWITCHDEV_FDB_ADD_TO_DEVICE event is used both for adding a new FDB
entry and for replacing an existing one. Implement support for replacing
existing FDB entries in the mlx5 offload code.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Vlad Buslov [Tue, 19 Oct 2021 15:17:19 +0000 (18:17 +0300)]
net/mlx5: Bridge, extract code to lookup and del/notify entry
The following two patterns in the bridge code are used in multiple places
where similar code is duplicated:
- Lookup of an FDB entry in the hashtable by address+vid pair.
- Notifying the software bridge and then deleting an existing FDB entry.
In order to improve code quality and prepare for a following patch series
that also uses the described patterns, extract the code into dedicated
helper functions.
This commit doesn't change functionality.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Aya Levin [Wed, 13 Oct 2021 06:45:22 +0000 (09:45 +0300)]
net/mlx5: Add periodic update of host time to firmware
Firmware also logs its asserts to non-volatile memory. In order to
reduce drift between the NIC and the host, the driver sends the host
epoch time to the firmware every hour.
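A minimal sketch of such a once-an-hour update using delayed work; the
names are made up and the firmware command itself is only hinted at in a
comment:

  #include <linux/workqueue.h>
  #include <linux/jiffies.h>
  #include <linux/timekeeping.h>

  static void host_time_sync(struct work_struct *work)
  {
          struct delayed_work *dwork = to_delayed_work(work);

          /* hypothetical: push ktime_get_real_seconds() to firmware here */

          schedule_delayed_work(dwork, 60 * 60 * HZ);
  }

  static DECLARE_DELAYED_WORK(host_time_sync_work, host_time_sync);

  /* at probe time: schedule_delayed_work(&host_time_sync_work, 0); */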
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Aya Levin [Mon, 11 Oct 2021 14:19:23 +0000 (17:19 +0300)]
net/mlx5: Print health buffer by log level
Add a log macro which takes the log level as a parameter. Use the
severity read from the health buffer together with the new log macro to
log the health buffer with the severity as the log level. Prior to this
patch, the health buffer was printed at error log level regardless of its
severity. Now the user may filter dmesg (--level) or change the kernel
log level to focus on different severity levels of firmware errors.
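A hedged sketch of a severity-aware log macro; the macro name and the
plumbing are assumptions, not the actual mlx5 implementation:

  #include <linux/device.h>

  /* 'level' is a KERN_* string chosen from the severity field read out
   * of the health buffer.
   */
  #define mlx5_health_log(dev, level, fmt, ...) \
          dev_printk(level, dev, fmt, ##__VA_ARGS__)

  /* usage: mlx5_health_log(&pdev->dev, KERN_ERR, "synd 0x%x\n", synd); */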
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Aya Levin [Mon, 11 Oct 2021 10:14:28 +0000 (13:14 +0300)]
net/mlx5: Extend health buffer dump
Enhance the health buffer to include:
- assert_var5: expose the 6th assert variable.
- time: the error's timestamp in seconds (epoch time).
- rfr: Recovery Flow Required. When set, indicates that the error
cannot be recovered without a flow involving a reset.
- severity: the error's severity value, ranging from emergency to debug.
Expose them in the health buffer dump (dmesg and devlink fw reporter).
Health buffer in dmesg:
mlx5_core 0000:08:00.0: print_health_info:425:(pid 912): Health issue observed, firmware internal error, severity(3) ERROR:
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[0] 0x08040700
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[1] 0x00000000
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[2] 0x00000000
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[3] 0x00000000
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[4] 0x00000000
mlx5_core 0000:08:00.0: print_health_info:429:(pid 912): assert_var[5] 0x00000000
mlx5_core 0000:08:00.0: print_health_info:432:(pid 912): assert_exit_ptr 0x00aaf800
mlx5_core 0000:08:00.0: print_health_info:434:(pid 912): assert_callra 0x00aaf70c
mlx5_core 0000:08:00.0: print_health_info:436:(pid 912): fw_ver 16.32.492
mlx5_core 0000:08:00.0: print_health_info:437:(pid 912): time 1634819758
mlx5_core 0000:08:00.0: print_health_info:438:(pid 912): hw_id 0x0000020d
mlx5_core 0000:08:00.0: print_health_info:439:(pid 912): rfr 0
mlx5_core 0000:08:00.0: print_health_info:440:(pid 912): severity 3 (ERROR)
mlx5_core 0000:08:00.0: print_health_info:441:(pid 912): irisc_index 9
mlx5_core 0000:08:00.0: print_health_info:442:(pid 912): synd 0x1: firmware internal error
mlx5_core 0000:08:00.0: print_health_info:444:(pid 912): ext_synd 0x802b
mlx5_core 0000:08:00.0: print_health_info:445:(pid 912): raw fw_ver 0x102001ec
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Avihai Horon [Wed, 6 Oct 2021 09:19:40 +0000 (12:19 +0300)]
net/mlx5: Reduce flow counters bulk query buffer size for SFs
Currently, the flow counters bulk query buffer takes a little more than
512KB of memory, which is then aligned up to the next power of 2, i.e. 1MB.
The buffer size determines the maximum number of flow counters that can
be queried at a time. Thus, having a bigger buffer can improve
performance for users that need to query many flow counters.
SFs don't use many flow counters and don't need a big buffer. Since this
size is critical with large scale, reduce the size of the bulk query
buffer for SFs.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Mon, 18 Oct 2021 06:18:39 +0000 (09:18 +0300)]
net/mlx5: Fix unused function warning of mlx5i_flow_type_mask
The cited commit causes an unused-function warning[1] when
CONFIG_MLX5_EN_RXNFC is not set.
Fix this by moving the function into the ifdef block where it is used.
[1]
warning: ‘mlx5i_flow_type_mask’ defined but not used [-Wunused-function]
Fixes: 9fbe1c25ecca ("net/mlx5i: Enable Rx steering for IPoIB via ethtool")
Signed-off-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Paul Blakey [Sun, 24 Oct 2021 13:29:24 +0000 (16:29 +0300)]
net/mlx5: Remove unnecessary checks for slow path flag
After previous changes, the caller (mlx5e_tc_offload_fdb_rules()) already
checks for the slow path flag and, if it is set, won't call the sample
offload/unoffload code.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jakub Kicinski [Wed, 13 Oct 2021 20:20:01 +0000 (13:20 -0700)]
net/mlx5e: don't write directly to netdev->dev_addr
Use a local buffer and eth_hw_addr_set() instead of writing directly to
netdev->dev_addr.
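The conversion pattern, sketched with a caller-supplied query helper so
it stays self-contained (not the actual mlx5e code):

  #include <linux/etherdevice.h>

  static int example_set_mac(struct net_device *netdev,
                             int (*query_hw_mac)(u8 *addr))
  {
          u8 addr[ETH_ALEN];
          int err;

          err = query_hw_mac(addr);       /* e.g. a firmware query */
          if (err)
                  return err;

          /* instead of memcpy()/direct writes into netdev->dev_addr */
          eth_hw_addr_set(netdev, addr);
          return 0;
  }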
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jakub Kicinski [Mon, 25 Oct 2021 18:01:32 +0000 (11:01 -0700)]
Merge branch 'bluetooth-don-t-write-directly-to-netdev-dev_addr'
Jakub Kicinski says:
====================
bluetooth: don't write directly to netdev->dev_addr
The usual conversions.
====================
Link: https://lore.kernel.org/r/20211022231834.2710245-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 22 Oct 2021 23:18:34 +0000 (16:18 -0700)]
bluetooth: use dev_addr_set()
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced a rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 22 Oct 2021 23:18:33 +0000 (16:18 -0700)]
bluetooth: use eth_hw_addr_set()
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount
of VLANs...") introduced a rbtree for faster Ethernet address look
up. To maintain netdev->dev_addr in this tree we need to make all
the writes to it go through appropriate helpers.
Convert bluetooth from memcpy(... ETH_ADDR) to eth_hw_addr_set():
@@
expression dev, np;
@@
- memcpy(dev->dev_addr, np, ETH_ALEN)
+ eth_hw_addr_set(dev, np)
Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Mon, 25 Oct 2021 16:00:00 +0000 (09:00 -0700)]
fddi: defza: add missing pointer type cast
hw_addr is a uint AKA unsigned int. dev_addr_set() takes
a u8 *.
drivers/net/fddi/defza.c:1383:27: error: passing argument 2 of 'dev_addr_set' from incompatible pointer type [-Werror=incompatible-pointer-types]
Reported-by: kernel test robot <lkp@intel.com>
Fixes: 1e9258c389ee ("fddi: defxx,defza: use dev_addr_set()")
Acked-by: Maciej W. Rozycki <macro@orcam.me.uk>
Link: https://lore.kernel.org/r/20211025160000.2803818-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Tianjia Zhang [Mon, 25 Oct 2021 13:05:00 +0000 (21:05 +0800)]
net/tls: getsockopt supports complete algorithm list
AES_CCM_128 and CHACHA20_POLY1305 are already supported by tls. Similar
to setsockopt, getsockopt also needs to support these two algorithms.
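A minimal sketch of the cipher coverage the getsockopt path now needs;
this only lists the UAPI cipher types and is not the actual tls
getsockopt implementation:

  #include <linux/types.h>
  #include <linux/tls.h>

  static bool tls_getsockopt_cipher_supported(u16 cipher_type)
  {
          switch (cipher_type) {
          case TLS_CIPHER_AES_GCM_128:
          case TLS_CIPHER_AES_GCM_256:
          case TLS_CIPHER_AES_CCM_128:
          case TLS_CIPHER_CHACHA20_POLY1305:
                  return true;
          default:
                  return false;
          }
  }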
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>