Casper Andersson [Tue, 6 Sep 2022 06:58:15 +0000 (08:58 +0200)]
net: sparx5: fix function return type to match actual type
The function returns an error integer, not a bool.
This has no impact on functionality.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Casper Andersson <casper.casan@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220906065815.3856323-1-casper.casan@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Heiner Kallweit [Mon, 5 Sep 2022 19:23:12 +0000 (21:23 +0200)]
r8169: merge support for chip versions 10, 13, 16
These chip versions are closely related, and none of them has
chip-specific MAC/PHY initialization. Therefore merge support
for the three chip versions.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/469d27e0-1d06-9b15-6c96-6098b3a52e35@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kurt Kanzenbach [Mon, 5 Sep 2022 13:01:55 +0000 (15:01 +0200)]
net: stmmac: Disable automatic FCS/Pad stripping
The stmmac hardware can automatically strip the padding/FCS from IEEE
802.3 type frames. This feature is enabled conditionally, so the stmmac
receive path needs logic to determine whether the FCS still has to be
stripped in software.
For DSA this ACS feature is disabled, but the determination logic doesn't
check for that properly. For instance, when using DSA in combination with
an older stmmac (pre version 4), the FCS is stripped neither by hardware nor
by software, which is problematic.
So either add another check for DSA to the fast path, or simply disable the
ACS feature completely. The latter approach has been chosen, because most of
the time the FCS is stripped in software anyway, and it removes conditionals
from the receive fast path.
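For reference, with ACS disabled the software fallback reduces to an unconditional trim of the 4-byte FCS in the receive path; a minimal sketch (variable names assumed, not the driver's exact code):
	/* ACS is off, so every completed frame still carries its FCS;
	 * trim it unconditionally instead of branching on hw features */
	if (likely(frame_len > ETH_FCS_LEN))
		frame_len -= ETH_FCS_LEN;
	skb_put(skb, frame_len);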
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/87v8q8jjgh.fsf@kurt/
Link: https://lore.kernel.org/r/20220905130155.193640-1-kurt@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
David S. Miller [Wed, 7 Sep 2022 15:20:15 +0000 (16:20 +0100)]
Merge branch 'hns3-new-features'
Guangbin Huang says:
====================
hns3: add some new features
This series adds some new features for the HNS3 ethernet driver.
Patches #1-#3 support configuring the dscp-to-tc map.
Patch #4 supports querying FEC statistics with the command "ethtool -I --show-fec eth0".
Patch #5 supports querying and setting the serdes lane number.
Change logs:
V1 -> V2:
- fix build error of patch #1 reported by the robot lkp@intel.com.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Hao Chen [Tue, 6 Sep 2022 09:12:23 +0000 (17:12 +0800)]
net: hns3: add support to query and set lane number by ethtool
When the serdes lanes support 25Gb/s or 50Gb/s speeds and the user wants to
set the port speed to 50Gb/s, it can be configured as either one 50Gb/s serdes
lane or two 25Gb/s serdes lanes.
So, this patch adds support for querying and setting the lane number by
ethtool to cover this scenario.
Signed-off-by: Hao Chen <chenhao418@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hao Lan [Tue, 6 Sep 2022 09:12:22 +0000 (17:12 +0800)]
net: hns3: add querying fec statistics
FEC statistics can be used to check the transmission quality of links.
This patch implements the get_fec_stats callback of ethtool_ops to support
querying FEC statistics with the command "ethtool -I --show-fec eth0".
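For context, ethtool_ops declares this hook as void (*get_fec_stats)(struct net_device *, struct ethtool_fec_stats *), and the core reports the filled counters through "ethtool -I --show-fec". A minimal sketch of such a callback (the firmware-query helpers are hypothetical):
	static void example_get_fec_stats(struct net_device *netdev,
					  struct ethtool_fec_stats *fec_stats)
	{
		/* hypothetical helpers standing in for the firmware reads */
		fec_stats->corrected_blocks.total =
			example_read_fec_corrected(netdev);
		fec_stats->uncorrectable_blocks.total =
			example_read_fec_uncorrectable(netdev);
	}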
Signed-off-by: Hao Lan <lanhao@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Tue, 6 Sep 2022 09:12:21 +0000 (17:12 +0800)]
net: hns3: debugfs add dump dscp map info
This patch adds a debugfs dump of the mapping relation between dscp,
priority and TC, and of the current tc map mode.
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Tue, 6 Sep 2022 09:12:20 +0000 (17:12 +0800)]
net: hns3: support ndo_select_queue()
To let tx packets select their queue according to the dscp field once the
dscp and tc map relationship has been set, this patch implements
ndo_select_queue() to set skb->priority according to the dscp-to-priority
mapping configured by the user.
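A hedged sketch of that selection logic (the map-lookup helper is hypothetical; netdev_pick_tx() performs the stack's normal priority-aware queue selection):
	static u16 example_select_queue(struct net_device *netdev,
					struct sk_buff *skb,
					struct net_device *sb_dev)
	{
		if (skb->protocol == htons(ETH_P_IP)) {
			u8 dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2;
			/* hypothetical lookup of the user's dscp->prio map */
			skb->priority = example_dscp_to_priority(netdev, dscp);
		}
		return netdev_pick_tx(netdev, skb, sb_dev);
	}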
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Tue, 6 Sep 2022 09:12:19 +0000 (17:12 +0800)]
net: hns3: add support config dscp map to tc
This patch adds support for configuring the dscp-to-tc map by implementing
the ieee_setapp and ieee_delapp callbacks of struct dcbnl_rtnl_ops. The
driver converts the mapping relationship from dscp-prio to dscp-tc.
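For reference, ieee_setapp receives a struct dcb_app: for IEEE_8021QAZ_APP_SEL_DSCP entries, .protocol carries the DSCP value (0-63) and .priority the mapped priority. A hedged sketch of the conversion described above (the device-programming helpers are hypothetical):
	static int example_ieee_setapp(struct net_device *ndev, struct dcb_app *app)
	{
		if (app->selector != IEEE_8021QAZ_APP_SEL_DSCP ||
		    app->protocol >= 64)	/* DSCP is a 6-bit field */
			return -EINVAL;
		/* dscp->prio comes from the user; prio->tc is the device's
		 * existing mapping (both helpers hypothetical) */
		return example_set_dscp_tc(ndev, app->protocol,
					   example_prio_to_tc(ndev, app->priority));
	}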
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 7 Sep 2022 14:57:48 +0000 (15:57 +0100)]
Merge branch '40GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2022-09-06 (i40e, iavf)
This series contains updates to i40e and iavf drivers.
Stanislaw adds support for a new device id for i40e.
Jaroslaw tidies up some code around MSI-X configuration by adding/
reworking comments and introducing a couple of macros for i40e.
Michal resolves some races around reset and close by deferring and deleting
some pending AdminQ operations and reworking filter additions and deletions
during these operations for iavf.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 7 Sep 2022 14:56:21 +0000 (15:56 +0100)]
Merge branch '100GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2022-09-06 (ice)
This series contains updates to ice driver only.
Tony reduces device MSI-X request/usage when the entire request can't be fulfilled.
Michal adds check for reset when waiting for PTP offsets.
Paul refactors firmware version checks to use a common helper.
Christophe Jaillet changes a couple of local memory allocations to not
use the devm variant.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Liu Shixin [Mon, 5 Sep 2022 12:50:42 +0000 (20:50 +0800)]
net: sysctl: remove unused variable long_max
The variable long_max has been replaced by bpf_jit_limit_max and is no
longer used, so remove it.
No functional change.
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Mon, 5 Sep 2022 12:46:01 +0000 (14:46 +0200)]
net: ethernet: mtk_eth_soc: remove mtk_foe_entry_timestamp
Get rid of the mtk_foe_entry_timestamp routine since it is no longer used.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 7 Sep 2022 13:02:09 +0000 (14:02 +0100)]
Merge branch 'macsec-offload-mlx5'
Saeed Mahameed says:
====================
Introduce MACsec skb_metadata_dst and mlx5 macsec offload
v1->v2:
- attach mlx5 implementation patches.
This patchset introduces MACsec skb_metadata_dst to lay the groundwork
for MACsec HW offload.
MACsec is an IEEE standard (IEEE 802.1AE) for MAC security.
It defines a way to establish a protocol independent connection
between two hosts with data confidentiality, authenticity and/or
integrity, using GCM-AES. MACsec operates on the Ethernet layer and
as such is a layer 2 protocol, which means it’s designed to secure
traffic within a layer 2 network, including DHCP or ARP requests.
Linux has a software implementation of the MACsec standard and
HW offloading support.
The offloading is re-using the logic, netlink API and data
structures of the existing MACsec software implementation.
For Tx:
In the current MACsec offload implementation, MACsec interfaces share
the same MAC address by default.
Therefore, HW can't distinguish which MACsec interface the traffic
originated from.
The MACsec stack will use skb_metadata_dst to store the SCI value, which is
unique per MACsec interface; skb_metadata_dst will be used later by the
offloading device driver to associate the SKB with the corresponding
offloaded interface (SCI) to facilitate HW MACsec offload.
For Rx:
Like in the Tx changes, if there are more than one MACsec device with
the same MAC address as in the packet's destination MAC, the packet will
be forward only to one of the devices and not neccessarly to the desired one.
Offloading device driver sets the MACsec skb_metadata_dst sci
field with the appropriaate Rx SCI for each SKB so the MACsec rx handler
will know to which port to divert those skbs, instead of wrongly solely
relaying on dst MAC address comparison.
1) patches 1,2: add skb_metadata_dst support to the MACsec code:
net/macsec: Add MACsec skb_metadata_dst Tx Data path support
net/macsec: Add MACsec skb_metadata_dst Rx Data path support
2) patch 3: move some MACsec driver code for sharing with various
drivers that implement offload:
net/macsec: Move some code for sharing with various drivers that
implements offload
3) The rest of the patches introduce the mlx5 implementation for macsec
offloads, TX and RX, via steering tables.
a) TX: intercept skbs with the macsec offload mark in skb_metadata_dst and
mark the descriptor for offload.
b) RX: intercept offloaded frames and prepare the proper
skb_metadata_dst to mark offloaded rx frames.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:29 +0000 (22:21 -0700)]
net/mlx5e: Add support to configure more than one macsec offload device
Add the ability to add up to 16 MACsec offload interfaces
over the same physical interface.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:28 +0000 (22:21 -0700)]
net/mlx5e: Add MACsec stats support for Rx/Tx flows
Add the following statistics:
RX successfully decrypted MACsec packets:
macsec_rx_pkts : Number of packets decrypted successfully
macsec_rx_bytes : Number of bytes decrypted successfully
Rx dropped MACsec packets:
macsec_rx_pkts_drop : Number of MACsec packets dropped
macsec_rx_bytes_drop : Number of MACsec bytes dropped
TX successfully encrypted MACsec packets:
macsec_tx_pkts : Number of packets encrypted/authenticated successfully
macsec_tx_bytes : Number of bytes encrypted/authenticated successfully
Tx dropped MACsec packets:
macsec_tx_pkts_drop : Number of MACsec packets dropped
macsec_tx_bytes_drop : Number of MACsec bytes dropped
The above can be seen using:
ethtool -S <ifc> |grep macsec
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:27 +0000 (22:21 -0700)]
net/mlx5e: Add MACsec offload SecY support
Add offload support for the MACsec SecY callbacks - add/update/delete.
add_secy is called when a new MACsec interface needs to be created.
upd_secy is called when the source MAC address or the tx SC has changed.
del_secy is called when the MACsec interface needs to be destroyed.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:26 +0000 (22:21 -0700)]
net/mlx5e: Implement MACsec Rx data path using MACsec skb_metadata_dst
The MACsec driver needs to distinguish which offload device the MACsec
traffic is targeted to, in order to handle it correctly.
This can be done by attaching a metadata_dst carrying an SCI to a SKB
when there is a match on a MACsec rule.
To achieve that, there is a map between fs_id and SCI: for each RX SC,
a unique fs_id is allocated when the RX SC is created.
The fs_id is passed to the device driver as metadata for packets that passed
Rx MACsec offload, to aid the driver in retrieving the matching SCI.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:25 +0000 (22:21 -0700)]
net/mlx5e: Add MACsec RX steering rules
Rx flow steering consists of two flow tables (FTs).
The first FT (crypto table) has one default miss rule so non-MACsec-
offloaded packets bypass the MACsec tables.
All other flow table entries (FTEs) are divided into two equal-size
groups, both of them for MACsec packets:
The first group is for MACsec packets which contain the SCI field in the
SecTAG header.
The second group is for MACsec packets which don't contain an SCI,
where a match on the source MAC address is needed (only if the SCI
is built from the default MACsec port).
Destination MAC address, ethertype and some SecTAG fields
are also matched for both groups.
In case of a match, the decrypt action is invoked on the packet.
For each MACsec Rx offloaded SA two rules are created: one with SCI
and one without SCI.
The second FT (check table) has two fixed rules:
One rule is for verifying that the previous offload actions finished
successfully. In this case, the SecTAG header needs to be decapped and the
packet forwarded for further processing.
Another default rule drops packets that failed the previous
decrypt actions.
The MACsec FTs are created on demand when the first MACsec rule is
added and destroyed when the last MACsec rule is deleted.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:24 +0000 (22:21 -0700)]
net/mlx5: Add MACsec Rx tables support to fs_core
Add a new namespace for MACsec RX flows.
Encrypted MACsec packets should first be decrypted and stripped
of the MACsec header, and then continue through the kernel's steering
pipeline.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:23 +0000 (22:21 -0700)]
net/mlx5e: Add MACsec offload Rx command support
Add support for Connect-X MACsec offload Rx SA & SC commands:
add, update and delete.
SCs are created on demand, are not limited in number, and are unique per SCI.
Each Rx SA must be associated with an Rx SC according to its SCI.
Follow-up patches will implement the Rx steering.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:22 +0000 (22:21 -0700)]
net/mlx5e: Implement MACsec Tx data path using MACsec skb_metadata_dst
The MACsec driver marks Tx packets for device offload using a dedicated
skb_metadata_dst which holds the 64-bit SCI number.
A previously set rule will match on this number so the correct SA is used
for the MACsec operation.
As the device driver can only provide 32 bits of metadata to the flow
tables, a mapping from the 64-bit value to a 32-bit marker or id is needed.
This is achieved by allocating a unique 32-bit flow id in the control path,
and using a hash table to map the 64-bit value to that unique id in the
datapath.
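In outline, the control path pairs each 64-bit SCI with a unique 32-bit flow id and the datapath resolves one from the other through a hash table; a generic sketch of the pattern (names invented, not the mlx5 code):
	struct sci_to_fs_id {
		struct hlist_node node;
		u64 sci;	/* 64-bit key from skb_metadata_dst */
		u32 fs_id;	/* 32-bit flow id the device matches on */
	};
	static DEFINE_HASHTABLE(sci_map, 8);
	static int sci_map_add(u64 sci, u32 fs_id)	/* control path */
	{
		struct sci_to_fs_id *e = kmalloc(sizeof(*e), GFP_KERNEL);
		if (!e)
			return -ENOMEM;
		e->sci = sci;
		e->fs_id = fs_id;
		hash_add(sci_map, &e->node, sci);
		return 0;
	}
	static u32 sci_map_lookup(u64 sci)	/* datapath */
	{
		struct sci_to_fs_id *e;
		hash_for_each_possible(sci_map, e, node, sci)
			if (e->sci == sci)
				return e->fs_id;
		return 0;	/* no offload id for this SCI */
	}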
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:21 +0000 (22:21 -0700)]
net/mlx5e: Add MACsec TX steering rules
Tx flow steering consists of two flow tables (FTs).
The first FT (crypto table) has two fixed rules:
One default miss rule so non-MACsec-offloaded packets bypass the MACsec
tables, and another rule to make sure that MACsec key exchange (MKE) traffic
passes unencrypted as expected (matched on ethertype).
On each new MACsec offload flow, a new MACsec rule is added.
This rule is matched on metadata_reg_a (which contains the id of the
flow) and invokes the MACsec offload action on match.
The second FT (check table) has two fixed rules:
One rule for verifying that the previous offload actions finished
successfully and the packet needs to be transmitted.
Another default rule for dropping packets that failed in the
offload actions.
The MACsec FTs should be created on demand when the first MACsec rule is
added and destroyed when the last MACsec rule is deleted.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:20 +0000 (22:21 -0700)]
net/mlx5: Add MACsec Tx tables support to fs_core
Change the EGRESS_KERNEL namespace to EGRESS_IPSEC and add a new
namespace for MACsec TX.
This namespace should be the last namespace for transmitted packets.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:19 +0000 (22:21 -0700)]
net/mlx5: Add MACsec offload Tx command support
This patch adds support for Connect-X MACsec offload Tx SA commands:
add, update and delete.
In Connect-X MACsec, a Security Association (SA) is added or deleted
by allocating a HW context for an encryption/decryption key and
a HW context for the matching SA (MACsec object).
When a new SA is added:
- Use a separate crypto key HW context.
- Create a separate MACsec context in HW to include the SA properties.
Introduce a new compilation flag MLX5_EN_MACSEC for it.
Follow-up patches will implement the Tx steering.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:18 +0000 (22:21 -0700)]
net/mlx5: Introduce MACsec Connect-X offload hardware bits and structures
Add MACsec offload related IFC structs, layouts and enumerations.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:17 +0000 (22:21 -0700)]
net/mlx5: Generalize Flow Context for new crypto fields
In order to support MACsec offload (and maybe some other crypto features
in the future), generalize the flow action parameters / defines to be usable
by crypto offloads other than IPsec.
The following changes were made:
The ipsec_obj_id field in the flow action context was changed to
crypto_obj_id, and a new crypto_type field was introduced, where IPsec is the
default zero type for backward compatibility.
Action ipsec_decrypt was changed to crypto_decrypt.
Action ipsec_encrypt was changed to crypto_encrypt.
IPsec offload code was updated accordingly for backward compatibility.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:16 +0000 (22:21 -0700)]
net/mlx5: Removed esp_id from struct mlx5_flow_act
esp_id is no longer used.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:15 +0000 (22:21 -0700)]
net/macsec: Move some code for sharing with various drivers that implements offload
Move some MACsec infrastructure, like defines and functions,
in order to avoid code duplication for future drivers which
implement MACsec offload.
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Ben Ben-Ishay <benishay@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:14 +0000 (22:21 -0700)]
net/macsec: Add MACsec skb_metadata_dst Rx Data path support
As in the Tx changes, if there is more than one MACsec device with
the same MAC address as the packet's destination MAC, the packet will
be forwarded to only one device and not necessarily to the desired one.
Offloading device drivers will mark offloaded MACsec SKBs with the
corresponding SCI in the skb_metadata_dst, so the macsec rx handler
knows to which port to divert those skbs, instead of relying solely,
and wrongly, on dst MAC address comparison.
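A hedged sketch of the rx-handler side of this (the METADATA_MACSEC type and the macsec_info field follow this patch's description):
	/* inside the rx handler's walk over candidate MACsec devices:
	 * prefer the offload-provided SCI over a bare dst MAC compare */
	struct metadata_dst *md_dst = skb_metadata_dst(skb);
	if (md_dst && md_dst->type == METADATA_MACSEC &&
	    md_dst->u.macsec_info.sci != rx_sc_sci)
		continue;	/* skb belongs to a different MACsec device */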
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lior Nahmanson [Tue, 6 Sep 2022 05:21:13 +0000 (22:21 -0700)]
net/macsec: Add MACsec skb_metadata_dst Tx Data path support
In the current MACsec offload implementation, MACsec interfaces share
the same MAC address by default.
Therefore, HW can't distinguish which MACsec interface the traffic
originated from.
The MACsec stack will use skb_metadata_dst to store the SCI value, which is
unique per MACsec interface; skb_metadata_dst will be used by the
offloading device driver to associate the SKB with the corresponding
offloaded interface (SCI).
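A hedged sketch of the tx side (METADATA_MACSEC and the macsec_info field follow this patch's description; error handling trimmed):
	/* once per offloaded MACsec interface */
	struct metadata_dst *md_dst = metadata_dst_alloc(0, METADATA_MACSEC,
							 GFP_KERNEL);
	md_dst->u.macsec_info.sci = secy->sci;
	/* per packet on the offloaded tx path */
	skb_dst_drop(skb);
	dst_hold(&md_dst->dst);
	skb_dst_set(skb, &md_dst->dst);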
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 7 Sep 2022 11:33:44 +0000 (12:33 +0100)]
Merge branch 'netlink-be-policy'
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Mon, 5 Sep 2022 10:09:37 +0000 (12:09 +0200)]
netfilter: nft_payload: reject out-of-range attributes via policy
Now that nla_policy allows range checks for big-endian data, make use of
this to reject out-of-range attributes. At this time, the reject happens
later, from the init or select_ops callbacks, but that is prone to errors.
In the future, new attributes can be handled via NLA_POLICY_MAX_BE
and existing ones can be converted one by one.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Mon, 5 Sep 2022 10:09:36 +0000 (12:09 +0200)]
netlink: introduce NLA_POLICY_MAX_BE
netlink allows specifying allowed ranges for integer types.
Unfortunately, nfnetlink passes integers in big endian, so the existing
NLA_POLICY_MAX() cannot be used.
At the moment, nfnetlink users, such as nf_tables, need to resort to
programmatic checking via helpers such as nft_parse_u32_check().
This is both cumbersome and error prone. This adds NLA_POLICY_MAX_BE,
which adds range-check support for BE16, BE32 and BE64 integers.
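Usage mirrors the existing NLA_POLICY_MAX(); a hypothetical policy entry for an attribute carried as a big-endian u32 might look like (attribute names invented):
	static const struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] = {
		/* the value arrives in network byte order; the policy layer
		 * now byte-swaps before comparing against the maximum */
		[EXAMPLE_ATTR_OFFSET] = NLA_POLICY_MAX_BE(NLA_U32, 255),
	};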
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 7 Sep 2022 11:23:25 +0000 (12:23 +0100)]
Merge branch 'sfc-ptp'
Edward Cree says:
====================
sfc: add support for PTP over IPv6 and 802.3
Most recent cards (8000 series and newer) have enough hardware support
for this, but it was not enabled in the driver. The transmission of PTP
packets over these protocols was already added in commit bd4a2697e5e2
("sfc: use hardware tx timestamps for more than PTP"), but receiving
them was still unsupported, so synchronization didn't happen.
These patches add support for timestamping received packets over
IPv6/UDP and IEEE 802.3.
v2: fixed weird indentation in efx_ptp_init_filter
v3: fixed a bug caused by the use of htons in the PTP_EVENT_PORT
definition. htons was also applied at the places where the constant
was used, so applying it twice left the value in host order again.
I didn't detect it in my tests because it only matters when
timestamping through the MC, and the model I used does it through the
MAC. Detected by kernel test robot <lkp@intel.com>
v4: removed `inline` specifiers from 2 local functions
v5: restored a deleted comment with a useful explanation about packet
reordering. Deleted useless whitespace.
====================
Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Íñigo Huguet [Mon, 5 Sep 2022 08:23:23 +0000 (10:23 +0200)]
sfc: support PTP over Ethernet
The previous patch added support for PTP over IPv6/UDP (only for the 8000
series and newer) and this one adds support for PTP over 802.3.
Tested: sync as master and as slave is correct with ptp4l. PTP over IPv4
and IPv6 still works fine.
Suggested-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Íñigo Huguet [Mon, 5 Sep 2022 08:23:22 +0000 (10:23 +0200)]
sfc: support PTP over IPv6/UDP
Commit bd4a2697e5e2 ("sfc: use hardware tx timestamps for more than
PTP") added support for hardware timestamping on TX for cards of the
8000 series and newer, in an effort to support transports other than
IPv4/UDP.
However, timestamping was still not working on RX for these other
transports. This patch adds support for PTP over IPv6/UDP.
Tested: sync as master and as slave is correct using ptp4l from linuxptp
package, both with IPv4 and IPv6.
Suggested-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Íñigo Huguet [Mon, 5 Sep 2022 08:23:21 +0000 (10:23 +0200)]
sfc: allow more flexible way of adding filters for PTP
In preparation for the support of PTP over IPv6/UDP and Ethernet in next
patches, allow a more flexible way of adding and removing RX filters for
PTP. Right now, only 2 filters are allowed, which are the ones needed
for PTP over IPv4/UDP.
Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerry Ray [Fri, 2 Sep 2022 21:30:21 +0000 (16:30 -0500)]
net: dsa: LAN9303: Add basic support for LAN9354
Adding support for the LAN9354 device by allowing it to use
the LAN9303 DSA driver. These devices have the same underlying
access and control methods, and from a feature-set point of view
the LAN9354 is a superset of the LAN9303.
The MDIO access method has been tested on a SAMA5D3-EDS board
with a LAN9354 RMII daughter card.
While the SPI access method should also be the same, it has not
been tested and as such is not included at this time.
Signed-off-by: Jerry Ray <jerry.ray@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerry Ray [Fri, 2 Sep 2022 21:30:20 +0000 (16:30 -0500)]
net: dsa: LAN9303: Add early read to sync
Add initial BYTE_ORDER read to sync the 32-bit accesses over the 16-bit
mdio bus to improve driver robustness.
The lan9303 expects two mdio read transactions back-to-back to read a
32-bit register. The first read transaction causes the other half of the
32-bit register to get latched. The subsequent read returns the latched
second half of the 32-bit read. The BYTE_ORDER register is an exception to
this rule. As it is a constant value, there is no need to latch the second
half. We read this register first in case there were reads during the boot
loader process that might have occurred prior to this driver taking over
ownership of accessing this device.
This patch has been tested on the SAMA5D3-EDS with a LAN9303 RMII daughter
card.
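A hedged sketch of the paired-read pattern described above (helper name and register-offset convention are assumptions):
	/* a 32-bit register is fetched as two back-to-back 16-bit mdio reads:
	 * the first read latches the other half, the second returns it */
	static int example_lan9303_read32(struct mii_bus *bus, int addr,
					  int reg, u32 *val)
	{
		int lo, hi;
		lo = mdiobus_read(bus, addr, reg);
		if (lo < 0)
			return lo;
		hi = mdiobus_read(bus, addr, reg + 1);
		if (hi < 0)
			return hi;
		*val = (u32)hi << 16 | lo;	/* word order per BYTE_ORDER */
		return 0;
	}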
Signed-off-by: Jerry Ray <jerry.ray@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Naour [Fri, 2 Sep 2022 10:16:10 +0000 (12:16 +0200)]
net: dsa: microchip: add regmap_range for KSZ9896 chip
Add register validation for KSZ9896.
Signed-off-by: Romain Naour <romain.naour@skf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Naour [Fri, 2 Sep 2022 10:16:09 +0000 (12:16 +0200)]
net: dsa: microchip: ksz9477: remove 0x033C and 0x033D addresses from regmap_access_tables
According to the KSZ9477S datasheet, there are no global registers
at addresses 0x033C and 0x033D.
Signed-off-by: Romain Naour <romain.naour@skf.com>
Cc: Oleksij Rempel <o.rempel@pengutronix.de>
Tested-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Naour [Fri, 2 Sep 2022 10:16:08 +0000 (12:16 +0200)]
net: dsa: microchip: add KSZ9896 to KSZ9477 I2C driver
Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the
ksz9477 driver. The KSZ9896 supports both SPI (already in the driver) and I2C.
Signed-off-by: Romain Naour <romain.naour@skf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Naour [Fri, 2 Sep 2022 10:16:07 +0000 (12:16 +0200)]
net: dsa: microchip: add KSZ9896 switch support
Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the
ksz9477 driver.
Although the KSZ9896 has been listed in the device tree binding
documentation since commit a1c0ed24fe9b ("dt-bindings: net: dsa: document
additional Microchip KSZ9477 family switches"), the chip id
(0x00989600) is not recognized by ksz_switch_detect() and is rejected
by the driver.
The KSZ9896 is similar to KSZ9897 but has only one configurable
MII/RMII/RGMII/GMII cpu port.
Signed-off-by: Romain Naour <romain.naour@skf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 6 Sep 2022 21:21:14 +0000 (23:21 +0200)]
Merge https://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2022-09-05
The following pull-request contains BPF updates for your *net-next* tree.
We've added 106 non-merge commits during the last 18 day(s) which contain
a total of 159 files changed, 5225 insertions(+), 1358 deletions(-).
There are two small merge conflicts, resolve them as follows:
1) tools/testing/selftests/bpf/DENYLIST.s390x
Commit 27e23836ce22 ("selftests/bpf: Add lru_bug to s390x deny list") in
bpf tree was needed to get BPF CI green on s390x, but it conflicted with
newly added tests on bpf-next. Resolve by adding both hunks, result:
[...]
lru_bug # prog 'printk': failed to auto-attach: -524
setget_sockopt # attach unexpected error: -524 (trampoline)
cb_refs # expected error message unexpected error: -524 (trampoline)
cgroup_hierarchical_stats # JIT does not support calling kernel function (kfunc)
htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline)
[...]
2) net/core/filter.c
Commit 1227c1771dd2 ("net: Fix data-races around sysctl_[rw]mem_(max|default).")
from net tree conflicts with commit 29003875bd5b ("bpf: Change
bpf_setsockopt(SOL_SOCKET) to reuse sk_setsockopt()") from bpf-next tree.
Take the code as it is from bpf-next tree, result:
[...]
if (getopt) {
	if (optname == SO_BINDTODEVICE)
		return -EINVAL;
	return sk_getsockopt(sk, SOL_SOCKET, optname,
			     KERNEL_SOCKPTR(optval),
			     KERNEL_SOCKPTR(optlen));
}
return sk_setsockopt(sk, SOL_SOCKET, optname,
		     KERNEL_SOCKPTR(optval), *optlen);
[...]
The main changes are:
1) Add an any-context BPF specific memory allocator, useful in particular for BPF
tracing, with the bonus of performance equal to full prealloc, from Alexei Starovoitov.
2) Big batch to remove duplicated code from bpf_{get,set}sockopt() helpers as an effort
to reuse the existing core socket code as much as possible, from Martin KaFai Lau.
3) Extend BPF flow dissector for BPF programs to just augment the in-kernel dissector
with custom logic. In other words, allow for partial replacement, from Shmulik Ladkani.
4) Add a new cgroup iterator to BPF with different traversal options, from Hao Luo.
5) Support for BPF to collect hierarchical cgroup statistics efficiently through BPF
integration with the rstat framework, from Yosry Ahmed.
6) Support bpf_{g,s}et_retval() under more BPF cgroup hooks, from Stanislav Fomichev.
7) BPF hash table and local storages fixes under fully preemptible kernel, from Hou Tao.
8) Add various improvements to BPF selftests and libbpf for compilation with gcc BPF
backend, from James Hilliard.
9) Fix verifier helper permissions and reference state management for synchronous
callbacks, from Kumar Kartikeya Dwivedi.
10) Add support for BPF selftest's xskxceiver to also be used against real devices that
support MAC loopback, from Maciej Fijalkowski.
11) Various fixes to the bpf-helpers(7) man page generation script, from Quentin Monnet.
12) Document BPF verifier's tnum_in(tnum_range(), ...) gotchas, from Shung-Hsi Yu.
13) Various minor misc improvements all over the place.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (106 commits)
bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
bpf: Remove usage of kmem_cache from bpf_mem_cache.
bpf: Remove prealloc-only restriction for sleepable bpf programs.
bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
bpf: Remove tracing program restriction on map types
bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
bpf: Add percpu allocation support to bpf_mem_alloc.
bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
bpf: Adjust low/high watermarks in bpf_mem_cache
bpf: Optimize call_rcu in non-preallocated hash map.
bpf: Optimize element count in non-preallocated hash map.
bpf: Relax the requirement to use preallocated hash maps in tracing progs.
samples/bpf: Reduce syscall overhead in map_perf_test.
selftests/bpf: Improve test coverage of test_maps
bpf: Convert hash map to bpf_mem_alloc.
bpf: Introduce any context BPF specific memory allocator.
selftest/bpf: Add test for bpf_getsockopt()
bpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()
bpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()
bpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()
...
====================
Link: https://lore.kernel.org/r/20220905161136.9150-1-daniel@iogearbox.net
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Michal Jaron [Thu, 18 Aug 2022 11:32:33 +0000 (13:32 +0200)]
iavf: Fix race between iavf_close and iavf_reset_task
During stress tests that add a VF to a namespace and change the VF's
trust, there was a race between iavf_reset_task and iavf_close.
Sometimes, when IAVF_FLAG_AQ_DISABLE_QUEUES from iavf_close was sent
to the PF after a reset and before IAVF_AQ_GET_CONFIG was sent, the PF
returned the error IAVF_NOT_SUPPORTED to the disable queues request and
to the following requests. get_config needs to be sent before the other
aq_required requests, but iavf_close clears all flags; if get_config
was not sent before iavf_close, it will not be sent at all.
When IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS was sent before
IAVF_FLAG_AQ_DISABLE_QUEUES, there was an rtnl_lock deadlock
between iavf_close and iavf_adminq_task until iavf_close timed out,
and disable queues was sent after iavf_close ended.
There was also a problem with sending delete/add filters.
Sometimes, when filters had not yet been added to the PF and
iavf_close marked all filters for removal, the driver could try
to remove nonexistent filters on the PF.
Add aq_required_tmp to save the aq_required flags and send them after
disable_queues has been handled. Clear the flags passed to iavf_down,
except IAVF_FLAG_AQ_GET_CONFIG, as this flag is necessary
to send the other aq_required requests. Remove some flags that we don't
want to send, as we are in iavf_close and want to disable the
interface. Remove filters which have not yet been sent, and send the del
filters flags only when there are filters to remove.
Signed-off-by: Michal Jaron <michalx.jaron@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Christophe JAILLET [Sun, 4 Sep 2022 14:22:46 +0000 (16:22 +0200)]
ice: Simplify memory allocation in ice_sched_init_port()
'buf' is locale to the ice_sched_init_port() function.
There is no point in using devm_kzalloc()/devm_kfree().
use kzalloc()/kfree() instead.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Christophe JAILLET [Sun, 4 Sep 2022 14:18:02 +0000 (16:18 +0200)]
ice: switch: Simplify memory allocation
'rbuf' is local to the ice_get_initial_sw_cfg() function.
There is no point in using devm_kzalloc()/devm_kfree();
use kzalloc()/kfree() instead.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Paul Greenwalt [Wed, 24 Aug 2022 20:28:40 +0000 (13:28 -0700)]
ice: add helper function to check FW API version
Several functions in ice_common.c check the firmware API version to see if
the current API version meets some minimum requirement.
Improve the readability of these checks by introducing
ice_is_fw_api_min_ver, a helper function to perform that check.
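A sketch of the helper's likely shape (field names assumed from the API version that struct ice_hw caches):
	static bool ice_is_fw_api_min_ver(struct ice_hw *hw, u8 maj, u8 min, u8 patch)
	{
		if (hw->api_maj_ver == maj) {
			if (hw->api_min_ver == min)
				return hw->api_patch >= patch;
			return hw->api_min_ver > min;
		}
		return hw->api_maj_ver > maj;
	}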
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Michal Michalik [Tue, 23 Aug 2022 11:56:26 +0000 (13:56 +0200)]
ice: Check if reset in progress while waiting for offsets
Occasionally, while waiting for valid offsets from the hardware, we get reset.
Add a check for reset before proceeding to execute the scheduled work.
Co-developed-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Michal Michalik <michal.michalik@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tony Nguyen [Mon, 22 Aug 2022 18:56:54 +0000 (11:56 -0700)]
ice: Allow operation with reduced device MSI-X
The driver currently takes an all-or-nothing approach for device MSI-X
vectors, meaning that if it does not get its full allocation, it will fail
and not load. There is no reason it can't work with a reduced number of
MSI-X vectors. Take a similar approach as commit 741106f7bd8d ("ice: Improve
MSI-X fallback logic") and, instead, adjust the MSI-X request to make use
of what is available.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Petr Oros <poros@redhat.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Jaroslaw Gawin [Tue, 23 Aug 2022 08:28:03 +0000 (10:28 +0200)]
i40e: add description and modify interrupts configuration procedure
Add descriptions for the values written into the QINT_XXXX registers,
and make small cosmetic changes so that the MSI/legacy interrupt
configuration is done the same way as for MSI-X.
The descriptions confirm the code is written correctly and
make the code clearer.
Signed-off-by: Jaroslaw Gawin <jaroslawx.gawin@intel.com>
Signed-off-by: Andrii Staikov <andrii.staikov@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Stanislaw Grzeszczak [Mon, 29 Aug 2022 11:59:46 +0000 (13:59 +0200)]
i40e: Add basic support for I710 devices
Intel introduces a new line of 1G ethernet adapters with Device ID 0x0DD2
Signed-off-by: Stanislaw Grzeszczak <stanislaw.a.grzeszczak@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Sergei Antonov [Fri, 2 Sep 2022 12:50:37 +0000 (15:50 +0300)]
net: moxa: fix endianness-related issues from 'sparse'
Sparse checker found two endianness-related issues:
.../moxart_ether.c:34:15: warning: incorrect type in assignment (different base types)
.../moxart_ether.c:34:15: expected unsigned int [usertype]
.../moxart_ether.c:34:15: got restricted __le32 [usertype]
.../moxart_ether.c:39:16: warning: cast to restricted __le32
Fix them by using __le32 type instead of u32.
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20220902125037.1480268-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Sergei Antonov [Fri, 2 Sep 2022 11:37:49 +0000 (14:37 +0300)]
net: ftmac100: fix endianness-related issues from 'sparse'
Sparse found a number of endianness-related issues of these kinds:
.../ftmac100.c:192:32: warning: restricted __le32 degrades to integer
.../ftmac100.c:208:23: warning: incorrect type in assignment (different base types)
.../ftmac100.c:208:23: expected unsigned int rxdes0
.../ftmac100.c:208:23: got restricted __le32 [usertype]
.../ftmac100.c:249:23: warning: invalid assignment: &=
.../ftmac100.c:249:23: left side has type unsigned int
.../ftmac100.c:249:23: right side has type restricted __le32
.../ftmac100.c:527:16: warning: cast to restricted __le32
Change type of some fields from 'unsigned int' to '__le32' to fix it.
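The fix follows the standard pattern for device-endian descriptor words; an illustrative sketch (struct and bit position invented):
	/* descriptor words are little-endian on the device side */
	struct example_rxdes {
		__le32 rxdes0;	/* was: unsigned int rxdes0; */
	};
	static bool example_rxdes_owned_by_dma(const struct example_rxdes *d)
	{
		/* convert before testing bits so sparse can check both sides */
		return le32_to_cpu(d->rxdes0) & BIT(31);
	}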
Signed-off-by: Sergei Antonov <saproj@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220902113749.1408562-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Horatiu Vultur [Fri, 2 Sep 2022 11:15:48 +0000 (13:15 +0200)]
net: lan966x: Extend lan966x with RGMII support
Extend lan966x with RGMII support. The MAC supports all RGMII_* modes.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20220902111548.614525-1-horatiu.vultur@microchip.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Heiner Kallweit [Sat, 3 Sep 2022 11:15:13 +0000 (13:15 +0200)]
r8169: remove not needed net_ratelimit() check
We're not in a hot path and don't want to miss this message,
therefore remove the net_ratelimit() check.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Sat, 3 Sep 2022 04:37:49 +0000 (21:37 -0700)]
netlink: Bounds-check struct nlmsgerr creation
In preparation for FORTIFY_SOURCE doing bounds-check on memcpy(),
switch from __nlmsg_put to nlmsg_put(), and explain the bounds check
for dealing with the memcpy() across a composite flexible array struct.
Avoids this future run-time warning:
memcpy: detected field-spanning write (size 32) of single field "&errmsg->msg" at net/netlink/af_netlink.c:2447 (size 16)
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@netfilter.org>
Cc: Florian Westphal <fw@strlen.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: syzbot <syzkaller@googlegroups.com>
Cc: netfilter-devel@vger.kernel.org
Cc: coreteam@netfilter.org
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220901071336.1418572-1-keescook@chromium.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 5 Sep 2022 13:33:07 +0000 (15:33 +0200)]
Merge branch 'bpf-allocator'
Alexei Starovoitov says:
====================
Introduce any context BPF specific memory allocator.
Tracing BPF programs can attach to kprobe and fentry. Hence they run in an
unknown context where calling plain kmalloc() might not be safe. Front-end
kmalloc() with a per-cpu cache of free elements. Refill this cache
asynchronously from irq_work.
Major achievements enabled by bpf_mem_alloc:
- Dynamically allocated hash maps used to be 10 times slower than fully
preallocated. With bpf_mem_alloc and subsequent optimizations the speed
of dynamic maps is equal to full prealloc.
- Tracing bpf programs can use dynamically allocated hash maps, potentially
saving lots of memory; a typical hash map is sparsely populated.
- Sleepable bpf programs can use dynamically allocated hash maps.
Future work:
- Expose bpf_mem_alloc as uapi FD to be used in dynptr_alloc, kptr_alloc
- Convert lru map to bpf_mem_alloc
- Further cleanup htab code. Example: htab_use_raw_lock can be removed.
Changelog:
v5->v6:
- Debugged the reason for selftests/bpf/test_maps ooming in a small VM that BPF CI is using.
Added patch 16 that optimizes the usage of rcu_barrier-s between bpf_mem_alloc and
hash map. It drastically improved the speed of htab destruction.
v4->v5:
- Fixed missing migrate_disable in hash tab free path (Daniel)
- Replaced impossible "memory leak" with WARN_ON_ONCE (Martin)
- Dropped sysctl kernel.bpf_force_dyn_alloc patch (Daniel)
- Added Andrii's ack
- Added new patch 15 that removes kmem_cache usage from bpf_mem_alloc.
It saves memory and speeds up map create/destroy operations
while maintaining hash map update/delete performance.
v3->v4:
- fix build issue due to missing local.h on 32-bit arch
- add Kumar's ack
- proposal for next steps from Delyan:
https://lore.kernel.org/bpf/d3f76b27f4e55ec9e400ae8dcaecbb702a4932e8.camel@fb.com/
v2->v3:
- Rewrote the free_list algorithm based on discussions with Kumar. Patch 1.
- Allowed sleepable bpf progs use dynamically allocated maps. Patches 13 and 14.
- Added sysctl to force bpf_mem_alloc in hash map even if pre-alloc is
requested to reduce memory consumption. Patch 15.
- Fix: zero-fill percpu allocation
- Single rcu_barrier at the end instead of each cpu during bpf_mem_alloc destruction
v2 thread:
https://lore.kernel.org/bpf/20220817210419.95560-1-alexei.starovoitov@gmail.com/
v1->v2:
- Moved unsafe direct call_rcu() from hash map into safe place inside bpf_mem_alloc. Patches 7 and 9.
- Optimized atomic_inc/dec in hash map with percpu_counter. Patch 6.
- Tuned watermarks per allocation size. Patch 8
- Adopted this approach to per-cpu allocation. Patch 10.
- Fully converted hash map to bpf_mem_alloc. Patch 11.
- Removed tracing prog restriction on map types. Combination of all patches and final patch 12.
v1 thread:
https://lore.kernel.org/bpf/20220623003230.37497-1-alexei.starovoitov@gmail.com/
LWN article:
https://lwn.net/Articles/899274/
====================
Link: https://lore.kernel.org/r/
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:58 +0000 (14:10 -0700)]
bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
User space might be creating and destroying a lot of hash maps. Synchronous
rcu_barrier-s in the destruction path of a hash map delay the freeing of hash
buckets and other map memory, and may cause an artificial OOM situation under
stress.
Optimize rcu_barrier usage between the bpf hash map and bpf_mem_alloc:
- remove rcu_barrier from the hash map, since htab doesn't use call_rcu
directly and there are no callbacks to wait for.
- bpf_mem_alloc has a call_rcu_in_progress flag that indicates pending callbacks.
Use it to avoid barriers in the fast path.
- When barriers are needed, copy bpf_mem_alloc into a temp structure
and wait for the rcu barrier-s in the worker, to let the rest of the
hash map freeing proceed.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220902211058.60789-17-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:57 +0000 (14:10 -0700)]
bpf: Remove usage of kmem_cache from bpf_mem_cache.
For bpf_mem_cache based hash maps the following stress test:
	for (i = 1; i <= 512; i <<= 1)
		for (j = 1; j <= 1 << 18; j <<= 1)
			fd = bpf_map_create(BPF_MAP_TYPE_HASH, NULL, i, j, 2, 0);
creates many kmem_cache-s that are not mergeable in debug kernels
and consume an unnecessary amount of memory.
It turned out that bpf_mem_cache's free_list logic does batching well,
so usage of kmem_cache for fixed-size allocations doesn't bring
any performance benefits vs normal kmalloc.
Hence get rid of kmem_cache in bpf_mem_cache.
That saves memory and speeds up map create/destroy operations,
while maintaining hash map update/delete performance.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220902211058.60789-16-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:56 +0000 (14:10 -0700)]
bpf: Remove prealloc-only restriction for sleepable bpf programs.
Since the hash map is now converted to bpf_mem_alloc, and it waits for rcu and
rcu_tasks_trace GPs before freeing elements into the global memory slabs, it is
safe to use dynamically allocated hash maps in sleepable bpf programs.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-15-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:55 +0000 (14:10 -0700)]
bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
Use call_rcu_tasks_trace() to wait for sleepable progs to finish.
Then use call_rcu() to wait for normal progs to finish
and finally do free_one() on each element when freeing objects
into global memory pool.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-14-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:54 +0000 (14:10 -0700)]
bpf: Remove tracing program restriction on map types
The hash map is now fully converted to bpf_mem_alloc. Its implementation is not
allocating synchronously and not calling call_rcu() directly. It's now safe to
use non-preallocated hash maps in all types of tracing programs including
BPF_PROG_TYPE_PERF_EVENT that runs out of NMI context.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-13-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:53 +0000 (14:10 -0700)]
bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
Convert dynamic allocations in the percpu hash map from alloc_percpu() to
bpf_mem_cache_alloc() from a per-cpu bpf_mem_alloc. Since bpf_mem_alloc frees
objects after an RCU gp, the call_rcu() is removed. pcpu_init_value() now needs
to zero-fill per-cpu allocations, since dynamically allocated map elements are
now similar to full prealloc: alloc_percpu() is not called inline and the
elements are reused from the freelist.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-12-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:52 +0000 (14:10 -0700)]
bpf: Add percpu allocation support to bpf_mem_alloc.
Extend bpf_mem_alloc to cache a free list of fixed-size per-cpu allocations.
Once such a cache is created, bpf_mem_cache_alloc() will return per-cpu objects.
bpf_mem_cache_free() will free them back into the global per-cpu pool after
observing an RCU grace period.
The per-cpu flavor of bpf_mem_alloc is going to be used by per-cpu hash maps.
The free list cache consists of tuples { llist_node, per-cpu pointer }.
Unlike alloc_percpu(), which returns a per-cpu pointer,
bpf_mem_cache_alloc() returns a pointer to a per-cpu pointer and
bpf_mem_cache_free() expects to receive it back.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-11-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:51 +0000 (14:10 -0700)]
bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
SLAB_TYPESAFE_BY_RCU makes kmem_caches non-mergeable and slows down
kmem_cache_destroy. All bpf_mem_cache are safe to share across different maps
and programs. Convert SLAB_TYPESAFE_BY_RCU to batched call_rcu. This change
solves the memory consumption issue, avoids kmem_cache_destroy latency and
keeps bpf hash map performance the same.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-10-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:50 +0000 (14:10 -0700)]
bpf: Adjust low/high watermarks in bpf_mem_cache
The same low/high watermarks for every bucket in bpf_mem_cache consume a
significant amount of memory. Preallocating 64 elements of 4096 bytes each in
the free list is not efficient. Make low/high watermarks and batching value
dependent on element size. This change brings significant memory savings.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-9-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:49 +0000 (14:10 -0700)]
bpf: Optimize call_rcu in non-preallocated hash map.
Doing call_rcu() a million times a second becomes a bottleneck.
Convert the non-preallocated hash map from call_rcu to SLAB_TYPESAFE_BY_RCU.
The rcu critical section is no longer observed for one htab element,
which makes the non-preallocated hash map behave just like a preallocated one.
The map elements are released back to kernel memory after observing
the rcu critical section.
This improves 'map_perf_test 4' performance from 100k events per second
to 250k events per second.
bpf_mem_alloc + percpu_counter + typesafe_by_rcu provide a 10x performance
boost to the non-preallocated hash map and bring it within a few % of the
preallocated map while consuming a fraction of the memory.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-8-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:48 +0000 (14:10 -0700)]
bpf: Optimize element count in non-preallocated hash map.
The atomic_inc/dec might cause extreme cache line bouncing when multiple cpus
access the same bpf map. Based on specified max_entries for the hash map
calculate when percpu_counter becomes faster than atomic_t and use it for such
maps. For example samples/bpf/map_perf_test is using hash map with max_entries
1000. On a system with 16 cpus the 'map_perf_test 4' shows 14k events per
second using atomic_t. On a system with 15 cpus it shows 100k events per second
using percpu. map_perf_test is an extreme case where all cpus collide on
atomic_t, which causes extreme cache bouncing. Note that the slow path of
percpu_counter is 5k events per second vs 14k for atomic, so the heuristic is
necessary. See comment in the code why the heuristic is based on
num_online_cpus().
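Expressed compactly, the heuristic amounts to something like this (a sketch consistent with the description; PERCPU_COUNTER_BATCH is percpu_counter's default batch size):
	/* use percpu_counter only when the map is large enough that the
	 * per-cpu batch slack cannot distort the element count much */
	if (attr->max_entries / 2 > num_online_cpus() * PERCPU_COUNTER_BATCH)
		htab->use_percpu_counter = true;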
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-7-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:47 +0000 (14:10 -0700)]
bpf: Relax the requirement to use preallocated hash maps in tracing progs.
Since the bpf hash map was converted to use bpf_mem_alloc, it is safe to use
it from tracing programs and in RT kernels.
But the per-cpu hash map is still using dynamic allocation for per-cpu map
values, hence keep the warning for this map type.
In the future alloc_percpu_gfp can be front-ended with bpf_mem_cache
and this restriction will be completely lifted.
perf_event (NMI) bpf programs have to use preallocated hash maps,
because free_htab_elem() is using call_rcu, which might crash if re-entered.
Sleepable bpf programs have to use preallocated hash maps, because
the lifetime of the map elements is not protected by rcu_read_lock/unlock.
This restriction can be lifted in the future as well.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-6-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:46 +0000 (14:10 -0700)]
samples/bpf: Reduce syscall overhead in map_perf_test.
Make map_perf_test for the preallocated and non-preallocated hash maps
spend more time inside the bpf program, to focus performance analysis
on the speed of update/lookup/delete operations performed by the bpf program.
It makes 'perf report' of bpf_mem_alloc look like:
11.76% map_perf_test [k] _raw_spin_lock_irqsave
11.26% map_perf_test [k] htab_map_update_elem
9.70% map_perf_test [k] _raw_spin_lock
9.47% map_perf_test [k] htab_map_delete_elem
8.57% map_perf_test [k] memcpy_erms
5.58% map_perf_test [k] alloc_htab_elem
4.09% map_perf_test [k] __htab_map_lookup_elem
3.44% map_perf_test [k] syscall_exit_to_user_mode
3.13% map_perf_test [k] lookup_nulls_elem_raw
3.05% map_perf_test [k] migrate_enable
3.04% map_perf_test [k] memcmp
2.67% map_perf_test [k] unit_free
2.39% map_perf_test [k] lookup_elem_raw
Reduce default iteration count as well to make 'map_perf_test' quick enough
even on debug kernels.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-5-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:45 +0000 (14:10 -0700)]
selftests/bpf: Improve test coverage of test_maps
Make test_maps more stressful with more parallelism in
update/delete/lookup/walk including different value sizes.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-4-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:44 +0000 (14:10 -0700)]
bpf: Convert hash map to bpf_mem_alloc.
Convert bpf hash map to use bpf memory allocator.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-3-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:43 +0000 (14:10 -0700)]
bpf: Introduce any context BPF specific memory allocator.
Tracing BPF programs can attach to kprobe and fentry. Hence they
run in an unknown context, where calling plain kmalloc() might not be safe.
Front-end kmalloc() with a minimal per-cpu cache of free elements,
refilled asynchronously from irq_work.
BPF programs always run with migration disabled, so it's safe to allocate
from the cache of the current cpu with irqs disabled.
Freeing is likewise always done into the bucket of the current cpu.
irq_work trims extra free elements from the buckets with kfree
and refills them with kmalloc, so the global kmalloc logic takes care
of objects allocated by one cpu and freed on another.
struct bpf_mem_alloc supports two modes:
- When size != 0, create a kmem_cache and a bpf_mem_cache for each cpu.
This is the typical bpf hash map use case, where all elements have equal size.
- When size == 0, allocate 11 bpf_mem_cache-s for each cpu, then rely on
kmalloc/kfree. The max allocation size is 4096 in this case.
This is the bpf_dynptr and bpf_kptr use case.
bpf_mem_alloc/bpf_mem_free are bpf-specific 'wrappers' of kmalloc/kfree.
bpf_mem_cache_alloc/bpf_mem_cache_free are 'wrappers' of kmem_cache_alloc/kmem_cache_free.
The allocators are NMI-safe from bpf programs only. They are not NMI-safe in general.
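A hedged sketch of the two modes from a caller's point of view; the init
signature and header name are assumed from the description above and may
differ from the final API.
#include <linux/bpf_mem_alloc.h>

/* Mode 1: fixed-size elements, as a hash map would use (size != 0). */
static int example_fixed_size(struct bpf_mem_alloc *ma, int elem_size)
{
        void *elem;
        int err;

        err = bpf_mem_alloc_init(ma, elem_size);
        if (err)
                return err;

        elem = bpf_mem_cache_alloc(ma);         /* kmem_cache_alloc-style */
        if (elem)
                bpf_mem_cache_free(ma, elem);

        bpf_mem_alloc_destroy(ma);
        return 0;
}

/* Mode 2: size == 0 at init; per-call sizes up to 4096 afterwards. */
static int example_any_size(struct bpf_mem_alloc *ma)
{
        void *p;
        int err;

        err = bpf_mem_alloc_init(ma, 0);        /* 11 caches per cpu */
        if (err)
                return err;

        p = bpf_mem_alloc(ma, 64);              /* kmalloc-style */
        if (p)
                bpf_mem_free(ma, p);

        bpf_mem_alloc_destroy(ma);
        return 0;
}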
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-2-alexei.starovoitov@gmail.com
Sean Anderson [Fri, 2 Sep 2022 22:02:39 +0000 (18:02 -0400)]
net: phy: Add 1000BASE-KX interface mode
Add 1000BASE-KX interface mode. This is 1G backplane Ethernet as described in
Clause 70. Clause 73 autonegotiation is mandatory, and only full-duplex
operation is supported.
Although at the PMA level this interface mode is identical to
1000BASE-X, it uses a different form of in-band autonegotiation. This
justifies a separate interface mode, since the interface mode (along
with the MLO_AN_* autonegotiation mode) sets the type of autonegotiation
which will be used on a link. This results in more than just electrical
differences between the link modes.
With regard to 1000BASE-X, 1000BASE-KX holds a similar position to
SGMII: same signaling, but different autonegotiation. PCS drivers
(which typically handle in-band autonegotiation) may only support
1000BASE-X, and not 1000BASE-KX. Similarly, the phy mode is used to
configure serdes phys with phy_set_mode_ext. Due to the different
electrical standards (SFI or XFI vs Clause 70), they will likely want to
use different configurations. Adding a phy interface mode for
1000BASE-KX helps simplify configuration in these areas.
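As a hedged illustration of that last point (the function is hypothetical,
not driver code), a serdes driver might key its electrical settings off the
interface mode it receives as the phy_set_mode_ext() submode:
#include <linux/phy.h>
#include <linux/phy/phy.h>

static int example_serdes_set_interface(struct phy *serdes,
                                        phy_interface_t interface)
{
        switch (interface) {
        case PHY_INTERFACE_MODE_1000BASEX:
                /* SFI/XFI-style electricals, 802.3z in-band autoneg */
                break;
        case PHY_INTERFACE_MODE_1000BASEKX:
                /* Clause 70 electricals, Clause 73 autoneg */
                break;
        default:
                return -EOPNOTSUPP;
        }

        return phy_set_mode_ext(serdes, PHY_MODE_ETHERNET, interface);
}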
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 13:27:40 +0000 (14:27 +0100)]
Merge branch 'dpaa-cleanups'
Sean Anderson says:
====================
net: dpaa: Cleanups in preparation for phylink conversion (part 2)
This series contains several cleanup patches for dpaa/fman. While they
are intended to prepare for a phylink conversion, they stand on their
own. This series was originally submitted as part of [1].
[1] https://lore.kernel.org/netdev/20220715215954.1449214-1-sean.anderson@seco.com
Changes in v5:
- Reduce line length of tgec_config
- Reduce line length of qman_update_cgr_safe
- Rebase onto net-next/master
Changes in v4:
- weer -> were
- tricy -> tricky
- Use mac_dev for calling change_addr
- qman_cgr_create -> qman_create_cgr
Changes in v2:
- Fix prototype for dtsec_initialization
- Fix warning if sizeof(void *) != sizeof(resource_size_t)
- Specify type of mac_dev for exception_cb
- Add helper for sanity checking cgr ops
- Add CGR update function
- Adjust queue depth on rate change
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:36 +0000 (17:57 -0400)]
net: dpaa: Adjust queue depth on rate change
Instead of setting the queue depth once during probe, adjust it on the
fly whenever we configure the link. This is a bit unusual, since usually
the DPAA driver calls into the FMAN driver, but here we do the opposite.
We need to add a netdev to struct mac_device for this, but it will soon
live in the phylink config.
I haven't tested this extensively, but it doesn't seem to break
anything. We could possibly optimize this a bit by keeping track of the
last rate, but for now we just update every time. 10GEC probably doesn't
need to call into this at all, but I've added it for consistency.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:35 +0000 (17:57 -0400)]
soc: fsl: qbman: Add CGR update function
This adds a function to update a CGR with new parameters. qman_create_cgr
can almost be used for this (with flags=0), but it's not suitable because
it also registers the callback function. The _safe variant was modeled on
qman_delete_cgr_safe(). Unlike delete, however, update has to handle
multiple arguments and a return value.
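A hedged sketch of that pattern (the affine-cpu lookup is a stub, and
qman_update_cgr stands for the plain update added here): pack the arguments
and the return value into one struct so the work can be bounced to the right
CPU with smp_call_function_single().
#include <linux/smp.h>
#include <soc/fsl/qman.h>

struct cgr_update_params {
        struct qman_cgr *cgr;
        struct qm_mcc_initcgr *opts;
        int ret;
};

/* Hypothetical: the real code would look up the portal cpu the CGR was
 * created on; returning 0 just keeps the sketch self-contained.
 */
static int example_affine_cpu_of(struct qman_cgr *cgr)
{
        return 0;
}

static void example_update_cgr_smp_call(void *p)
{
        struct cgr_update_params *params = p;

        params->ret = qman_update_cgr(params->cgr, params->opts);
}

static int example_update_cgr_safe(struct qman_cgr *cgr,
                                   struct qm_mcc_initcgr *opts)
{
        struct cgr_update_params params = { .cgr = cgr, .opts = opts };

        /* wait=true so params.ret is valid when this returns */
        smp_call_function_single(example_affine_cpu_of(cgr),
                                 example_update_cgr_smp_call, &params, true);

        return params.ret;
}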
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:34 +0000 (17:57 -0400)]
soc: fsl: qbman: Add helper for sanity checking cgr ops
This breaks out/combines get_affine_portal and the cgr sanity check in
preparation for the next commit. No functional change intended.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:33 +0000 (17:57 -0400)]
net: dpaa: Use mac_dev variable in dpaa_netdev_init
There are several references to mac_dev in dpaa_netdev_init. Make things a
bit more concise by adding a local variable for it.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:32 +0000 (17:57 -0400)]
net: fman: Change return type of disable to void
When disabling, there is nothing we can do about errors. In fact, the
only error which can occur is misuse of the API. Just warn in the mac
driver instead.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:31 +0000 (17:57 -0400)]
net: fman: Clean up error handling
This removes the _return label, since something like
err = -EFOO;
goto _return;
can be replaced by the briefer
return -EFOO;
Additionally, this skips going to _return_of_node_put when dev_node has
already been put (preventing a double put).
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:30 +0000 (17:57 -0400)]
net: fman: Specify type of mac_dev for exception_cb
Instead of using a void pointer for mac_dev, specify its type
explicitly.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:29 +0000 (17:57 -0400)]
net: fman: Use mac_dev for some params
Some params are already present in mac_dev. Use them directly instead of
passing them through params.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:28 +0000 (17:57 -0400)]
net: fman: Pass params directly to mac init
Instead of having the mac init functions call back into the fman core to
get their params, just pass them directly to the init functions.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:27 +0000 (17:57 -0400)]
net: fman: Map the base address once
We don't need to remap the base address from the resource twice (once in
mac_probe() and again in set_fman_mac_params()). We still need the
resource to get the end address, but we can use a single function call
to get both at once.
While we're at it, use platform_get_mem_or_io and devm_request_resource
to map the resource. I think this is the more "correct" way to do things
here, since we use the pdev resource, instead of creating a new one.
It's still a bit tricky, since we need to ensure that the resource is a
child of the fman region when it gets requested.
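A hedged sketch of that flow (names and the parent-region argument are
illustrative; the APIs themselves are real kernel interfaces):
#include <linux/err.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static void __iomem *example_map_mac(struct platform_device *pdev,
                                     struct resource *fman_region)
{
        struct device *dev = &pdev->dev;
        struct resource *res;
        int err;

        /* fetch the pdev resource once instead of remapping twice */
        res = platform_get_mem_or_io(pdev, 0);
        if (!res)
                return IOMEM_ERR_PTR(-EINVAL);

        /* request it as a child of the parent fman region */
        err = devm_request_resource(dev, fman_region, res);
        if (err)
                return IOMEM_ERR_PTR(err);

        /* single mapping, shared by probe and the fman params */
        return devm_ioremap(dev, res->start, resource_size(res));
}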
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:26 +0000 (17:57 -0400)]
net: fman: Remove internal_phy_node from params
This member was used to pass the phy node between mac_probe and the
mac-specific initialization function. But now that the phy node is
gotten in the initialization function, this parameter does not serve a
purpose. Remove it, and grab the node and the phy in the same place.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:25 +0000 (17:57 -0400)]
net: fman: Inline several functions into initialization
There are several small functions which were only necessary because the
initialization functions didn't have access to the mac private data. Now
that they do, just do things directly.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:24 +0000 (17:57 -0400)]
net: fman: Mark mac methods static
These methods are no longer accessed outside of the driver file, so mark
them as static.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:23 +0000 (17:57 -0400)]
net: fman: Move initialization to mac-specific files
This moves mac-specific initialization to mac-specific files. This will
make it easier to work with individual macs. It will also make it easier
to refactor the initialization to simplify the control flow. No
functional change intended.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 2 Sep 2022 21:16:52 +0000 (23:16 +0200)]
r8169: remove useless PCI region size check
Let's trust the hardware here and remove this useless check.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 12:06:40 +0000 (13:06 +0100)]
Merge branch 'lan937x-phy-link-interrupt'
Arun Ramadoss says:
====================
net: dsa: microchip: lan937x: enable interrupt for internal phy link detection
This patch series enables internal phy link detection for lan937x using
interrupts. The lan937x acts as the interrupt controller for the internal
ports and phys: an irq_domain is registered for the individual ports and, in
turn, for the individual interrupts of each port.
RFC v3 -> Patch v1
- Removed the RFC v3 1/3 from the series - changing exit from reset
- Changed the variable name in ksz_port from irq to pirq
- Added the check for the return value of irq_find_mapping during phy irq
registration.
- Moved the clearing of POR_READY_INT from girq_thread_fn to
lan937x_reset_switch
RFC v2 -> v3
- Used the interrupt controller implementation of phy link
Changes in RFC v2
- fixed the compilation issue
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:10 +0000 (16:02 +0530)]
net: dsa: microchip: lan937x: add interrupt support for port phy link
This patch enables the interrupts for internal phy link detection on
LAN937x. The interrupt enable bits are active low. There is a global
interrupt mask for each port, and each port has individual interrupt
masks for TAS, QCI, SGMII, PTP, PHY and ACL.
The first level of the interrupt domain is registered for the global port
interrupt, and the second level for the individual port
interrupts. The phy interrupt is enabled in the lan937x_mdio_register
function. The port an interrupt was raised from is detected based on
the interrupt host data, as sketched below.
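A hedged sketch of that dispatch (struct fields, the hwirq number, and the
register handling are assumptions for illustration):
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>

#define EXAMPLE_PHY_HWIRQ       1       /* assumed hwirq of the PHY source */

struct example_port {
        struct irq_domain *pirq_domain; /* second-level, per-port domain */
        /* ... pointer to the parent device and port number ... */
};

/* Threaded handler for one port's summary interrupt: the host data is
 * the port itself, so no lookup is needed to find which port fired.
 * Reading the (active-low-masked) status register is elided.
 */
static irqreturn_t example_port_irq_thread(int irq, void *dev_id)
{
        struct example_port *port = dev_id;
        unsigned int child;

        /* after seeing the PHY bit set in the per-port status: */
        child = irq_find_mapping(port->pirq_domain, EXAMPLE_PHY_HWIRQ);
        if (child)
                handle_nested_irq(child);

        return IRQ_HANDLED;
}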
Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:09 +0000 (16:02 +0530)]
net: dsa: microchip: lan937x: clear the POR_READY_INT status bit
In lan937x_reset_switch(), all the switch and port interrupts are
masked. In the Global_Int_Status register, the POR ready bit is
write-1-to-clear and all other bits are read only. So, this patch clears
the POR_READY_INT status bit by writing 1.
Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:08 +0000 (16:02 +0530)]
net: dsa: microchip: add reference to ksz_device inside the ksz_port
struct ksz_port doesn't have a reference to ksz_device as of now. In order
to find out from which port an interrupt was triggered, we need to pass the
struct ksz_port as the host data. When the interrupt is triggered, we can
get the port it came from, but to identify it as a phy
interrupt we have to read a status register. The regmap structure for
accessing the device registers is present in the ksz_device struct. To
access the ksz_device from the ksz_port, a reference is added to it,
along with the port number.
Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 11:47:02 +0000 (12:47 +0100)]
Merge branch 'ipa-transaction-IDs'
Alex Elder says:
====================
net: ipa: start using transaction IDs
A previous group of patches added ID fields to track the state of
transactions:
https://lore.kernel.org/netdev/
20220831224017.377745-1-elder@linaro.org
This series starts using those IDs instead of the lists used
previously. Most of this series involves reworking the function
that determines which transaction is the "last", which determines
when a channel has been quiesed. The last patch is mainly used to
prove that the new index method of tracking transaction state is
equivalent to the previous use of lists.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:18 +0000 (16:02 -0500)]
net: ipa: verify a few more IDs
The completed transaction list is used in gsi_channel_trans_complete()
to return the next transaction in completed state.
Add some temporary checks to verify that the transaction indicated by
the completed ID matches the first one in this list.
Similarly, we use the pending and completed transaction lists when
cancelling pending transactions in gsi_channel_trans_cancel_pending().
Add temporary checks there to verify the transactions indicated by
IDs match those tracked by these lists.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:17 +0000 (16:02 -0500)]
net: ipa: further simplify gsi_channel_trans_last()
Do a little more refactoring in gsi_channel_trans_last() to simplify
it further. The resulting code should behave exactly as before.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:16 +0000 (16:02 -0500)]
net: ipa: simplify gsi_channel_trans_last()
Using a little logic we can simplify gsi_channel_trans_last().
The first condition in that function looks like this:
if (trans_info->allocated_id != trans_info->free_id)
And if that's false, we proceed to the next one:
if (trans_info->committed_id != trans_info->allocated_id)
Failure of the first test implies:
trans_info->allocated_id == trans_info->free_id
And therefore, the second one can be rewritten this way:
if (trans_info->committed_id != trans_info->free_id)
Substituting free_id for allocated_id and committed_id can also be
done in the code blocks executed when these conditions yield true.
The net result is that all three blocks for TX endpoints can be
consolidated into just one.
The two blocks of code at the end of that function (used for both TX
and RX channels) can be similarly consolidated into a single block.
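A compilable stand-in for that substitution (illustrative ID arithmetic, not
the driver code):
#include <stdbool.h>
#include <stdint.h>

struct example_trans_info {
        uint16_t free_id;       /* IDs advance free -> allocated -> committed */
        uint16_t allocated_id;
        uint16_t committed_id;
};

/* Returns true and sets *trans_id to the "last" transaction's ID. */
static bool example_trans_last(const struct example_trans_info *ti,
                               uint16_t *trans_id)
{
        /* The else arm originally compared committed_id with
         * allocated_id; it only runs when allocated_id == free_id,
         * so comparing against free_id is equivalent. Once every arm
         * refers only to free_id, the bodies match and the blocks
         * consolidate into one.
         */
        if (ti->allocated_id != ti->free_id)
                *trans_id = ti->free_id - 1;    /* last allocated */
        else if (ti->committed_id != ti->free_id)
                *trans_id = ti->free_id - 1;    /* last committed */
        else
                return false;                   /* channel quiesced */

        return true;
}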
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>