Oliver Hartkopp [Wed, 4 Jan 2023 20:18:44 +0000 (21:18 +0100)]
can: isotp: check CAN address family in isotp_bind()
Add missing check to block non-AF_CAN binds.
Syzbot created some code which matched the right sockaddr struct size
but used AF_XDP (0x2C) instead of AF_CAN (0x1D) in the address family
field:
bind$xdp(r2, &(0x7f0000000540)={0x2c, 0x0, r4, 0x0, r2}, 0x10)
^^^^
This has no functional impact, but userspace should be notified about
the wrong address family field content.
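A minimal sketch of the kind of check described here (the placement and
the ISOTP_MIN_NAMELEN length check are shown only for context, not
copied from the patch):

    /* in isotp_bind(), sketch only */
    struct sockaddr_can *addr = (struct sockaddr_can *)uaddr;

    if (len < ISOTP_MIN_NAMELEN)
            return -EINVAL;

    if (addr->can_family != AF_CAN)     /* reject e.g. AF_XDP binds */
            return -EINVAL;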
Link: https://syzkaller.appspot.com/text?tag=CrashLog&x=11ff9d8c480000
Reported-by: syzbot+5aed6c3aaba661f5b917@syzkaller.appspotmail.com
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/all/20230104201844.13168-1-socketcan@hartkopp.net
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Oliver Hartkopp [Wed, 25 Jan 2023 05:54:07 +0000 (06:54 +0100)]
can: gw: give feedback on missing CGW_FLAGS_CAN_IIF_TX_OK flag
To send CAN traffic back to the incoming interface, a special flag has to
be set. When creating a routing job for identical interfaces without this
flag, the rule is created but has no effect.
This patch adds an error return value in the case that the CAN interfaces
are identical but the CGW_FLAGS_CAN_IIF_TX_OK flag was not set.
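A sketch of the check being described (variable names are illustrative,
not taken from can/gw.c):

    /* when creating the routing job, sketch only */
    if (src_ifindex == dst_ifindex &&
        !(flags & CGW_FLAGS_CAN_IIF_TX_OK))
            return -EINVAL;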
Reported-by: Jannik Hartung <jannik.hartung@tu-bs.de>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/all/20230125055407.2053-1-socketcan@hartkopp.net
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Sunil Goutham [Wed, 1 Feb 2023 04:03:01 +0000 (09:33 +0530)]
octeontx2-af: Removed unnecessary debug messages.
The NPC exact match feature is supported only on one silicon
variant; remove the debug messages which print that this
feature is not available on all other silicon variants.
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Ratheesh Kannoth <rkannoth@marvell.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20230201040301.1034843-1-rkannoth@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Heng Qi [Tue, 31 Jan 2023 08:50:04 +0000 (16:50 +0800)]
virtio-net: fix possible unsigned integer overflow
When the single-buffer xdp is loaded and after xdp_linearize_page()
is called, *num_buf becomes 0 and (*num_buf - 1) may overflow into
a large integer in virtnet_build_xdp_buff_mrg(), resulting in
unexpected packet dropping.
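An illustrative guard for the situation described above (not the
literal diff from the driver):

    /* *num_buf is unsigned; if it is already 0 here, *num_buf - 1
     * would wrap to a huge value, so bail out instead
     */
    if (unlikely(*num_buf == 0))
            return -EINVAL;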
Fixes:
ef75cb51f139 ("virtio-net: build xdp_buff with multi buffers")
Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://lore.kernel.org/r/20230131085004.98687-1-hengqi@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Leon Romanovsky [Tue, 31 Jan 2023 13:31:57 +0000 (15:31 +0200)]
netlink: provide an ability to set default extack message
In a common netdev pattern, the extack pointer is forwarded to the
drivers to be filled with an error message. However, the caller can then
easily overwrite the message the driver filled in.
Instead of adding an "if (!extack->_msg)" check before every
NL_SET_ERR_MSG() call that appears after the call into the driver, let's
add a new macro to common code.
[1] https://lore.kernel.org/all/Y9Irgrgf3uxOjwUm@unreal
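The resulting macro looks roughly like this sketch of the pattern being
factored out:

    #define NL_SET_ERR_MSG_WEAK(extack, msg)            \
    do {                                                \
            if ((extack) && !(extack)->_msg)            \
                    NL_SET_ERR_MSG((extack), (msg));    \
    } while (0)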
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/6993fac557a40a1973dfa0095107c3d03d40bec1.1675171790.git.leon@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Brian Haley [Mon, 30 Jan 2023 17:14:28 +0000 (12:14 -0500)]
neighbor: fix proxy_delay usage when it is zero
When set to zero, the neighbor sysctl proxy_delay value
does not cause an immediate reply for ARP/ND requests
as expected; instead it causes a random delay between
[0, U32_MAX). This comment from
__get_random_u32_below() explains the reason:
/*
* This function is technically undefined for ceil == 0, and in fact
* for the non-underscored constant version in the header, we build bug
* on that. But for the non-constant case, it's convenient to have that
* evaluate to being a straight call to get_random_u32(), so that
* get_random_u32_inclusive() can work over its whole range without
* undefined behavior.
*/
Added a helper function that does not call get_random_u32_below()
when proxy_delay is zero and instead uses the current value of
jiffies, so that pneigh_enqueue() responds immediately.
Also added the definition of proxy_delay to ip-sysctl.txt since
it was missing.
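A sketch of such a helper (the name and exact placement are assumed
here, not quoted from the patch):

    static unsigned long neigh_proxy_delay(struct neigh_parms *p)
    {
            int delay = NEIGH_VAR(p, PROXY_DELAY);

            /* respond immediately when proxy_delay is zero */
            return delay ? jiffies + get_random_u32_below(delay) : jiffies;
    }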
Signed-off-by: Brian Haley <haleyb.dev@gmail.com>
Link: https://lore.kernel.org/r/20230130171428.367111-1-haleyb.dev@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Thu, 2 Feb 2023 04:54:29 +0000 (20:54 -0800)]
Merge branch 'net-support-ipv4-big-tcp'
Xin Long says:
====================
net: support ipv4 big tcp
This is similar to the BIG TCP patchset added by Eric for IPv6:
https://lwn.net/Articles/895398/
Unlike IPv6, IPv4 tot_len is only 16 bits, and the IPv4 header
doesn't have exthdrs (options) to carry the BIG TCP packets' length. To make
it simple, as David and Paolo suggested, we set IPv4 tot_len to 0 to
indicate this might be a BIG TCP packet and use skb->len as the real
IPv4 total length.
This will work safely, as all BIG TCP packets are GSO/GRO packets and
processed on the same host as they were created; there is no padding
in GSO/GRO packets, and skb->len - network_offset is exactly the IPv4
packet total length. Also, before implementing the feature, all the
places that may read the iph tot_len of BIG TCP packets are taken care
of with some new APIs:
Patch 1 adds some APIs for iph tot_len setting and getting, which are
used in all the places where IPv4 BIG TCP packets may reach in Patches
2-7; Patch 8 adds a GSO_TCP tp_status for af_packet users; Patch 9
adds new netlink attributes to make IPv4 BIG TCP configurable
independently of IPv6 BIG TCP; and Patch 10 implements the feature.
Note that similar changes to those in Patches 2-6 are also needed for
IPv6 BIG TCP packets; they will be addressed in another patchset.
A similar performance test was done for IPv4 BIG TCP with a 25Gbit NIC
and 1.5K MTU:
No BIG TCP:
for i in {1..10}; do netperf -t TCP_RR -H 192.168.100.1 -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
168 322 337 3776.49
143 236 277 4654.67
128 258 288 4772.83
171 229 278 4645.77
175 228 243 4678.93
149 239 279 4599.86
164 234 268 4606.94
155 276 289 4235.82
180 255 268 4418.95
168 241 249 4417.82
Enable BIG TCP:
ip link set dev ens1f0np0 gro_ipv4_max_size 128000 gso_ipv4_max_size 128000
for i in {1..10}; do netperf -t TCP_RR -H 192.168.100.1 -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
161 241 252 4821.73
174 205 217 5098.28
167 208 220 5001.43
164 228 249 4883.98
150 233 249 4914.90
180 233 244 4819.66
154 208 219 5004.92
157 209 247 4999.78
160 218 246 4842.31
174 206 217 5080.99
Thanks for the feedback from Eric and David Ahern.
====================
Link: https://lore.kernel.org/r/cover.1674921359.git.lucien.xin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:39 +0000 (10:58 -0500)]
net: add support for ipv4 big tcp
Similar to Eric's IPv6 BIG TCP, this patch is to enable IPv4 BIG TCP.
Firstly, allow sk->sk_gso_max_size to be set to a value greater than
GSO_LEGACY_MAX_SIZE by not trimming gso_max_size in sk_trim_gso_size()
for IPv4 TCP sockets.
Then on the TX path, set the IP header tot_len to 0 when
skb->len > IP_MAX_MTU in __ip_local_out() to allow sending BIG TCP
packets; this implies that skb->len is the length of the IPv4 packet.
On the RX path, use skb->len as the length of the IPv4 packet when the
IP header tot_len is 0 and skb->len > IP_MAX_MTU in ip_rcv_core(). As
the APIs iph_set_totlen() and skb_ip_totlen() are used in
__ip_local_out() and ip_rcv_core(), we only need to update these APIs.
Also in GRO receive, add a check for ETH_P_IP/IPPROTO_TCP and allow
the merged packet size >= GRO_LEGACY_MAX_SIZE in skb_gro_receive(). In
GRO complete, set the IP header tot_len to 0 via iph_set_totlen() when
the merged packet size is greater than IP_MAX_MTU, so that it can be
processed on the RX path.
Note that the skb_is_gso_tcp() check in iph_totlen() makes it safe for
this implementation to use iph->tot_len == 0 to indicate IPv4 BIG TCP
packets.
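On the TX side the change boils down to something like this sketch:

    /* in __ip_local_out(): tot_len becomes 0 for BIG TCP packets */
    iph_set_totlen(iph, skb->len);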
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:38 +0000 (10:58 -0500)]
net: add gso_ipv4_max_size and gro_ipv4_max_size per device
This patch introduces gso_ipv4_max_size and gro_ipv4_max_size
per device and adds netlink attributes for them, so that IPv4
BIG TCP can be guarded by a separate tunable in the next patch.
To avoid breaking old applications that use "gso/gro_max_size" for
IPv4 GSO packets, this patch updates "gso/gro_ipv4_max_size"
in netif_set_gso/gro_max_size() if the new size isn't greater
than GSO_LEGACY_MAX_SIZE, so that nothing changes even if
userspace is not aware of the new netlink attributes.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:37 +0000 (10:58 -0500)]
packet: add TP_STATUS_GSO_TCP for tp_status
Introduce TP_STATUS_GSO_TCP tp_status flag to tell the af_packet user
that this is a TCP GSO packet. When parsing IPv4 BIG TCP packets in
tcpdump/libpcap, it can use tp_len as the IPv4 packet len when this
flag is set, as iph tot_len is set to 0 for IPv4 BIG TCP packets.
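A sketch of how a ring consumer might use the new flag (variable names
are illustrative):

    /* tp points at the tpacket header of the received frame */
    unsigned int ip_len;

    if (tp->tp_status & TP_STATUS_GSO_TCP)
            ip_len = tp->tp_len;            /* iph->tot_len is 0 here */
    else
            ip_len = ntohs(iph->tot_len);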
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:36 +0000 (10:58 -0500)]
ipvlan: use skb_ip_totlen in ipvlan_get_L3_hdr
ipvlan devices call netif_inherit_tso_max() to get tso_max_size/segs
from the lower device, so when the lower device supports BIG TCP, the
ipvlan devices support it too. We should therefore also take care of how
their iph tot_len is accessed.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:35 +0000 (10:58 -0500)]
cipso_ipv4: use iph_set_totlen in skbuff_setattr
cipso_v4_skbuff_setattr() may process IPv4 TCP GSO packets, so
the iph->tot_len update should use iph_set_totlen().
Note that for non-GSO packets the new iph tot_len, with the extra
iph option length added, may become greater than 65535; the old code
would cast it and assign the truncated value to iph->tot_len, which is
a bug. In theory, iph options shouldn't be added to such big packets
here, and a proper fix may be needed in the future. For now this patch
only sets iph->tot_len to 0 when that happens.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:34 +0000 (10:58 -0500)]
netfilter: use skb_ip_totlen and iph_totlen
There are also quite a few places in netfilter that may process IPv4 TCP
GSO packets; we need to convert them too.
In length_mt(), we have to use u_int32_t/int to hold the skb_ip_totlen()
return value, otherwise it may overflow and the match may give the wrong
result. This change will also help us add a selftest for IPv4 BIG TCP in
a following patch.
Note that we don't need to convert the access in tcpmss_tg4(), as
tcpmss_mangle_packet() returns early if there is data after the tcphdr.
The same holds for mangle_contents() in nf_nat_helper.c: enlarge_skb()
returns false when skb->len + extra > 65535.
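A simplified sketch of the length_mt() adjustment mentioned above:

    /* skb_ip_totlen() can exceed 65535 for BIG TCP, so keep it in u32 */
    u_int32_t pktlen = skb_ip_totlen(skb);

    return (pktlen >= info->min && pktlen <= info->max) ^ info->invert;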
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:33 +0000 (10:58 -0500)]
net: sched: use skb_ip_totlen and iph_totlen
There are one action and one qdisc that may process IPv4 TCP GSO packets
and access iph->tot_len; replace those accesses with skb_ip_totlen() and
iph_totlen() accordingly.
Note that we don't need to convert the access in tcf_csum_ipv4(), as
tcf_csum_ipv4_tcp() returns early for TCP GSO packets.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:32 +0000 (10:58 -0500)]
openvswitch: use skb_ip_totlen in conntrack
IPv4 GSO packets may get processed in ovs_skb_network_trim(),
and we need to use skb_ip_totlen() to get iph totlen.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:31 +0000 (10:58 -0500)]
bridge: use skb_ip_totlen in br netfilter
These 3 places in bridge netfilter are called on the RX path after GRO,
where IPv4 TCP GSO packets may come through, so replace the iph tot_len
accesses there with skb_ip_totlen().
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Xin Long [Sat, 28 Jan 2023 15:58:30 +0000 (10:58 -0500)]
net: add a couple of helpers for iph tot_len
This patch adds three APIs to replace the iph->tot_len setting
and getting in all places where IPv4 BIG TCP packets may reach,
they will be used in the following patches.
Note that iph_totlen() will be used when iph is not in linear
data of the skb.
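The three helpers end up roughly as follows (a sketch consistent with
the description above):

    static inline unsigned int iph_totlen(const struct sk_buff *skb,
                                          const struct iphdr *iph)
    {
            u32 len = ntohs(iph->tot_len);

            /* 0 means BIG TCP, but only for GSO TCP packets */
            return (len || !skb_is_gso(skb) || !skb_is_gso_tcp(skb)) ?
                   len : skb->len - skb_network_offset(skb);
    }

    static inline unsigned int skb_ip_totlen(const struct sk_buff *skb)
    {
            return iph_totlen(skb, ip_hdr(skb));
    }

    static inline void iph_set_totlen(struct iphdr *iph, unsigned int len)
    {
            iph->tot_len = len <= IP_MAX_MTU ? htons(len) : 0;
    }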
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Thu, 2 Feb 2023 04:53:06 +0000 (20:53 -0800)]
Merge branch 'virtio_net-vdpa-update-mac-address-when-it-is-generated-by-virtio-net'
Laurent Vivier says:
====================
virtio_net: vdpa: update MAC address when it is generated by virtio-net
When the MAC address is not provided by the vdpa device, the virtio_net
driver assigns a random one without notifying the device.
The consequence, in the case of mlx5_vdpa, is that the internal routing
tables of the device are not updated, which can block
communication between two namespaces.
To fix this problem, use virtnet_send_command(VIRTIO_NET_CTRL_MAC)
to set the address from virtnet_probe() when the MAC address is
not provided by the device.
====================
Link: https://lore.kernel.org/r/20230127204500.51930-1-lvivier@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Laurent Vivier [Fri, 27 Jan 2023 20:45:00 +0000 (21:45 +0100)]
virtio_net: notify MAC address change on device initialization
In virtnet_probe(), if the device doesn't provide a MAC address the
driver assigns a random one.
As we modify the MAC address we need to notify the device to allow it
to update all the related information.
The problem can be seen with vDPA and the mlx5_vdpa driver, as it doesn't
assign a MAC address by default. The virtio_net device uses a random
MAC address (we can see it with "ip link"), but we can't ping a net
namespace from another one using the virtio-vdpa device because the
new MAC address has not been provided to the hardware:
RX packets are dropped since they don't go through the receive filters,
TX packets go through unaffected.
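A sketch of the probe-time notification, using the driver's existing
control-queue helper (the call site is simplified):

    struct scatterlist sg;

    /* tell the device about the randomly generated MAC address */
    sg_init_one(&sg, dev->dev_addr, dev->addr_len);
    virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC,
                         VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg);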
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Laurent Vivier [Fri, 27 Jan 2023 20:44:59 +0000 (21:44 +0100)]
virtio_net: disable VIRTIO_NET_F_STANDBY if VIRTIO_NET_F_MAC is not set
failover relies on the MAC address to pair the primary and the standby
devices:
"[...] the hypervisor needs to enable VIRTIO_NET_F_STANDBY
feature on the virtio-net interface and assign the same MAC address
to both virtio-net and VF interfaces."
Documentation/networking/net_failover.rst
This patch disables the STANDBY feature if the MAC address is not
provided by the hypervisor.
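The change amounts to something like this sketch of a validate-time
check:

    /* STANDBY only makes sense when the device supplies the MAC
     * address used to pair with the VF
     */
    if (!virtio_has_feature(vdev, VIRTIO_NET_F_MAC))
            __virtio_clear_bit(vdev, VIRTIO_NET_F_STANDBY);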
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Huayu Chen [Tue, 31 Jan 2023 16:30:33 +0000 (17:30 +0100)]
nfp: correct cleanup related to DCB resources
This patch corrects two oversights relating to releasing resources
and DCB initialisation.
1. If mapping of the dcbcfg_tbl area fails: an error should be
propagated, allowing partial initialisation (probe) to be unwound.
2. Conversely, where dcbcfg_tbl is successfully mapped: it should
be unmapped in nfp_nic_dcb_clean(), which is called via various error
cleanup paths and on shutdown or removal of the PCIe device.
Fixes:
9b7fe8046d74 ("nfp: add DCB IEEE support")
Signed-off-by: Huayu Chen <huayu.chen@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20230131163033.981937-1-simon.horman@corigine.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jiapeng Chong [Tue, 31 Jan 2023 06:34:56 +0000 (14:34 +0800)]
ipv6: ICMPV6: Use swap() instead of open coding it
swap() is a helper that exchanges two values. To avoid
open-coding that exchange, we can use swap().
./net/ipv6/icmp.c:344:25-26: WARNING opportunity for swap().
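For illustration, the cleanup replaces a manual three-assignment
exchange with the kernel's swap() helper:

    /* before: tmp = a; a = b; b = tmp; */
    swap(a, b);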
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3896
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20230131063456.76302-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 18:57:03 +0000 (10:57 -0800)]
Merge branch 'devlink-trivial-names-cleanup'
Jiri Pirko says:
====================
devlink: trivial names cleanup
This is a follow-up to Jakub's devlink code split and dump iteration
helper patchset. No functional changes, just couple of renames to makes
things consistent and perhaps easier to follow.
====================
Link: https://lore.kernel.org/r/20230131090613.2131740-1-jiri@resnulli.us
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jiri Pirko [Tue, 31 Jan 2023 09:06:13 +0000 (10:06 +0100)]
devlink: rename and reorder instances of struct devlink_cmd
In order to maintain naming consistency, rename and reorder all usages
of struct devlink_cmd in the following way:
1) Remove "gen" and replace it with "cmd" to match the struct name
2) Order devl_cmds[] and the header file to match the order
of enum devlink_command
3) Move devl_cmd_rate_get among the peers
4) Remove "inst" for DEVLINK_CMD_GET
5) Add "_get" suffix to all to match DEVLINK_CMD_*_GET (only rate had it
done correctly)
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jiri Pirko [Tue, 31 Jan 2023 09:06:12 +0000 (10:06 +0100)]
devlink: remove "gen" from struct devlink_gen_cmd name
No need to have "gen" inside name of the structure for devlink commands.
Remove it.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jiri Pirko [Tue, 31 Jan 2023 09:06:11 +0000 (10:06 +0100)]
devlink: rename devlink_nl_instance_iter_dump() to "dumpit"
To have the name of the function consistent with the struct cb name,
rename devlink_nl_instance_iter_dump() to
devlink_nl_instance_iter_dumpit().
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 05:45:53 +0000 (21:45 -0800)]
Merge branch 'net-ipa-remaining-ipa-v5-0-support'
Alex Elder says:
====================
net: ipa: remaining IPA v5.0 support
This series includes almost all remaining IPA code changes required
to support IPA v5.0. IPA register definitions and configuration
data for IPA v5.0 will be sent later (soon). Note that the GSI
register definitions still require work. GSI for IPA v5.0 supports
up to 256 (rather than 32) channels, and this changes the way GSI
register offsets are calculated. A few GSI register fields also
change.
The first patch in this series increases the number of IPA endpoints
supported by the driver, from 32 to 36. The next updates the width
of the destination field for the IP_PACKET_INIT immediate command so
it can represent up to 256 endpoints rather than just 32. The next
adds a few definitions of some IPA registers and fields that are
first available in IPA v5.0.
The next two patches update the code that handles router and filter
table caches. Previously these were referred to as "hashed" tables,
and the IPv4 and IPv6 tables are now combined into one "unified"
table. The sixth and seventh patches add support for a new pulse
generator, which allows time periods to be specified with a wider
range of clock resolution. And the last patch just defines two new
memory regions that were not previously used.
====================
Link: https://lore.kernel.org/r/20230130210158.4126129-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:58 +0000 (15:01 -0600)]
net: ipa: define two new memory regions
IPA v5.0 uses two memory regions not previously used. Define them
and treat them as valid only for IPA v5.0.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:57 +0000 (15:01 -0600)]
net: ipa: support a third pulse register
The AP has a third pulse generator available starting with IPA v5.0.
Redefine ipa_qtime_val() to support that possibility. Pass the IPA
pointer as an argument so the version can be determined. And stop
using the sign of the returned tick count to indicate which of two
pulse generators to use.
Instead, have the caller provide the address of a variable that will
hold the selected pulse generator for the Qtime value. And for
version 5.0, check whether the third pulse generator best represents
the time period.
Add code in ipa_qtime_config() to configure the fourth pulse
generator for IPA v5.0+; in that case configure both the third and
fourth pulse generators to use 10 msec granularity.
Consistently use "ticks" for local variables that represent a tick
count.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:56 +0000 (15:01 -0600)]
net: ipa: greater timer granularity options
Starting with IPA v5.0, the head-of-line blocking timer has more
than two pulse generators available to define timer granularity.
To prepare for that, change the way the field value is encoded
to use ipa_reg_encode() rather than ipa_reg_bit().
The aggregation granularity selection could (in principle) also use
an additional pulse generator starting with IPA v5.0. Encode the
AGGR_GRAN_SEL field differently to allow that as well.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:55 +0000 (15:01 -0600)]
net: ipa: support zeroing new cache tables
IPA v5.0+ separates the configuration of entries in the cached
(previously "hashed") routing and filtering tables into distinct
registers. Previously a single "filter and router" register updated
entries in both tables at once; now the routing and filter table
caches have separate registers that define their content.
This patch updates the code that zeroes entries in the cached filter
and router tables to support IPA versions including v5.0+.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:54 +0000 (15:01 -0600)]
net: ipa: update table cache flushing
Update the code that causes filter and router table caches to be
flushed so that it supports IPA versions 5.0+. It adds a comment in
ipa_hardware_config_hashing() that explains that caching does not
need to be enabled, just as before, because it's enabled by default.
(For the record, the FILT_ROUT_CACHE_CFG register would have been
used if we wanted to explicitly enable these.)
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:53 +0000 (15:01 -0600)]
net: ipa: define IPA v5.0+ registers
Define some new registers that appear starting with IPA v5.0, along
with enumerated types identifying their fields. Code that uses
these will be added by upcoming patches.
Most of the new registers are related to filter and routing tables,
and in particular, their "hashed" variant. These tables are better
described as "cached", where a hash value determines which entries
are cached. From now on, naming related to this functionality will
use "cache" instead of "hash", and that is reflected in these new
register names. Some registers for managing these caches and their
contents have changed as well.
A few other new field definitions for registers (unrelated to table
caches) are also defined.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:52 +0000 (15:01 -0600)]
net: ipa: extend endpoints in packet init command
The IP_PACKET_INIT immediate command defines the destination
endpoint to which a packet should be sent. Prior to IPA v5.0, a
5 bit field in that command represents the endpoint, but starting
with IPA v5.0, the field is extended to 8 bits to support more than
32 endpoints.
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alex Elder [Mon, 30 Jan 2023 21:01:51 +0000 (15:01 -0600)]
net: ipa: support more endpoints
Increase the number of endpoints supported by the driver to 36,
which IPA v5.0 supports. This makes it impossible to check at build
time whether the supported number is too big to fit within the
(5-bit) PACKET_INIT destination endpoint field. Instead, convert
the build time check to compare against what fits in 8 bits.
Add a check in ipa_endpoint_config() to also ensure the hardware
reports an endpoint count that's in the expected range. Just
open-code 32 as the limit (the PACKET_INIT field mask is not
available where we'd want to use it).
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 05:35:34 +0000 (21:35 -0800)]
Merge tag 'mlx5-updates-2023-01-30' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2023-01-30
Add fast update encryption key
Jianbo Liu Says:
================
Data encryption keys (DEKs) are the keys used for data encryption and
decryption operations. Starting from version 22.33.0783, firmware is
optimized to accelerate the update of user keys into DEK object in
hardware. The support for bulk allocation and destruction of DEK
objects is added, and the bulk allocated DEKs are uninitialized, as
the bulk creation requires no input key. When offloading
encryption/decryption, the user gets one object from a bulk and updates
its key with a new "modify DEK" command. This command is the same as create
DEK object, but requires no heavy context memory allocation in
firmware, which consumes most of the CPU cycles of the create DEK command.
DEKs are cached internally by the NIC, so invalidating internal NIC
caches is required before reusing DEKs. The SYNC_CRYPTO command is
added to support it. A DEK object can be reused; the keys in it can be
updated after this command is executed.
This patchset enhances the key creation and destruction flow to make
use of this new feature. Any user, for example ktls, ipsec and
macsec, can use it to offload keys. But only ktls uses it, as the others
don't need many keys, and caching too many DEKs in the pool is wasteful.
There are two new data structs added:
a. DEK pool. One pool is created for each key type. The bulks of that
type are placed in the pool's different bulk lists, according to
the number of available and in_used DEKs in the bulk.
b. DEK bulk. All DEKs in one bulk allocation are stored here. There
are two bitmaps to indicate the state of each DEK.
New APIs are then added. When a user needs a DEK object,
a. Fetch one bulk with available DEKs from the partial_list or
avail_list, otherwise create a new one.
b. Pick one DEK, and set its need_sync and in_used bits to 1.
Move the bulk to full_list if no more available keys, or put it to
partial_list if the bulk is newly created.
c. Update DEK object's key with user key, by the "modify DEK"
command.
d. Return the DEK struct to the user, who then gets the object id and
fills it into the offload commands.
When a user frees a DEK,
a. Set its in_use bit to 0. If all need_sync bits are 1 and all in_use
bits of this bulk are 0, move it to sync_list.
b. If the number of DEKs freed by users is over the
threshold (128), schedule a workqueue to do the sync process.
For the sync process, the SYNC_CRYPTO command is executed first. Then,
for each bulk in partial_list, full_list and sync_list, reset the
need_sync bits of the freed DEK objects. If all need_sync bits in one
bulk are zero, move it to avail_list.
We already support a TIS pool to recycle the TISes. With this series
and the TIS pool, TLS CPS performance is improved greatly.
And we tested https on the system:
CPU: dual AMD EPYC 7763 64-Core processors
RAM: 512G
DEV: ConnectX-6 DX, with FW ver 22.33.0838 and TLS_OPTIMISE=true
TLS CPS performance numbers are:
Before: 11k connections/sec
After: 101k connections/sec
================
* tag 'mlx5-updates-2023-01-30' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5e: kTLS, Improve connection rate by using fast update encryption key
net/mlx5: Keep only one bulk of full available DEKs
net/mlx5: Add async garbage collector for DEK bulk
net/mlx5: Reuse DEKs after executing SYNC_CRYPTO command
net/mlx5: Use bulk allocation for fast update encryption key
net/mlx5: Add bulk allocation and modify_dek operation
net/mlx5: Add support SYNC_CRYPTO command
net/mlx5: Add new APIs for fast update encryption key
net/mlx5: Refactor the encryption key creation
net/mlx5: Add const to the key pointer of encryption key creation
net/mlx5: Prepare for fast crypto key update if hardware supports it
net/mlx5: Change key type to key purpose
net/mlx5: Add IFC bits and enums for crypto key
net/mlx5: Add IFC bits for general obj create param
net/mlx5: Header file for crypto
====================
Link: https://lore.kernel.org/r/20230131031201.35336-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 05:04:25 +0000 (21:04 -0800)]
Merge branch '10GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN: Remove redundant Device Control Error Reporting Enable
Bjorn Helgaas says:
Since
f26e58bf6f54 ("PCI/AER: Enable error reporting when AER is native"),
the PCI core sets the Device Control bits that enable error reporting for
PCIe devices.
This series removes redundant calls to pci_enable_pcie_error_reporting()
that do the same thing from several NIC drivers.
There are several more drivers where this should be removed; I started with
just the Intel drivers here.
* '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
ixgbe: Remove redundant pci_enable_pcie_error_reporting()
igc: Remove redundant pci_enable_pcie_error_reporting()
igb: Remove redundant pci_enable_pcie_error_reporting()
ice: Remove redundant pci_enable_pcie_error_reporting()
iavf: Remove redundant pci_enable_pcie_error_reporting()
i40e: Remove redundant pci_enable_pcie_error_reporting()
fm10k: Remove redundant pci_enable_pcie_error_reporting()
e1000e: Remove redundant pci_enable_pcie_error_reporting()
====================
Link: https://lore.kernel.org/r/20230130192519.686446-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 05:02:12 +0000 (21:02 -0800)]
Merge branch 'selftests-mlxsw-convert-to-iproute2-dcb'
Petr Machata says:
====================
selftests: mlxsw: Convert to iproute2 dcb
There is a dedicated tool for configuration of DCB in iproute2. Use it
in the selftests instead of lldpad.
Patches #1-#3 convert three tests. Patch #4 drops the now-unnecessary
lldpad helpers.
====================
Link: https://lore.kernel.org/r/cover.1675096231.git.petrm@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Petr Machata [Mon, 30 Jan 2023 16:40:04 +0000 (17:40 +0100)]
selftests: net: forwarding: lib: Drop lldpad_app_wait_set(), _del()
The existing users of these helpers have been converted to iproute2 dcb.
Drop the helpers.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Petr Machata [Mon, 30 Jan 2023 16:40:03 +0000 (17:40 +0100)]
selftests: mlxsw: qos_defprio: Convert from lldptool to dcb
Set up default port priority through the iproute2 dcb tool, which is easier
to understand and manage.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Petr Machata [Mon, 30 Jan 2023 16:40:02 +0000 (17:40 +0100)]
selftests: mlxsw: qos_dscp_router: Convert from lldptool to dcb
Set up DSCP prioritization through the iproute2 dcb tool, which is easier
to understand and manage.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Petr Machata [Mon, 30 Jan 2023 16:40:01 +0000 (17:40 +0100)]
selftests: mlxsw: qos_dscp_bridge: Convert from lldptool to dcb
Set up DSCP prioritization through the iproute2 dcb tool, which is easier
to understand and manage.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 04:59:09 +0000 (20:59 -0800)]
Merge branch 'net-mdio-add-amlogic-gxl-mdio-mux-support'
Jerome Brunet says:
====================
net: mdio: add amlogic gxl mdio mux support
Add support for the MDIO multiplexer found in the Amlogic GXL SoC family.
This multiplexer allows choosing between the external (SoC pins) MDIO bus,
or the internal one leading to the integrated 10/100M PHY.
This multiplexer has been handled with the mdio-mux-mmioreg generic driver
so far. When it was added, it was thought the logic was handled by a
single register.
It turns out more than a single register needs to be properly set.
As long as the device uses the Amlogic vendor bootloader, or upstream
u-boot with net support, things work fine since the kernel inherits
the bootloader settings. Without net support in the bootloader, this glue
is left unconfigured and only the external path may operate properly.
With this driver (and the associated change in
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi), the kernel no longer relies
on the bootloader to set things up, fixing the problem.
====================
Link: https://lore.kernel.org/r/20230130151616.375168-1-jbrunet@baylibre.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jerome Brunet [Mon, 30 Jan 2023 15:16:16 +0000 (16:16 +0100)]
net: mdio: add amlogic gxl mdio mux support
Add support for the mdio mux and internal phy glue of the GXL SoC
family
Reported-by: Da Xue <da@lessconfused.com>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jerome Brunet [Mon, 30 Jan 2023 15:16:15 +0000 (16:16 +0100)]
dt-bindings: net: add amlogic gxl mdio multiplexer
Add documentation for the MDIO bus multiplexer found on the Amlogic GXL
SoC family
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 1 Feb 2023 04:36:05 +0000 (20:36 -0800)]
Merge branch 'tools-ynl-more-docs-and-basic-ethtool-support'
Jakub Kicinski says:
====================
tools: ynl: more docs and basic ethtool support
I got discouraged from supporting ethtool in specs, because
generating the user space C code seems a little tricky.
The messages are ID'ed in a "directional" way (to and from
kernel are separate ID "spaces"). There is value, however,
in having the spec and being able to for example use it
in Python.
After paying off some technical debt - add a partial
ethtool spec. Partial because the header for ethtool is almost
a 1000 LoC, so converting in one sitting is tough. But adding
new commands should be trivial now.
Last but not least I add more docs; I realized that I've been
sending a similar "instructions" email to people working on
new families. It's now intro-specs.rst.
====================
Link: https://lore.kernel.org/r/20230131023354.1732677-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:54 +0000 (18:33 -0800)]
tools: net: use python3 explicitly
The scripts require Python 3 and some distros are dropping
Python 2 support.
Reported-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:53 +0000 (18:33 -0800)]
docs: netlink: add a starting guide for working with specs
We have a bit of documentation about the internals of Netlink
and the specs, but really the goal is for most people to not
worry about those. Add a practical guide for beginners who
want to poke at the specs.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:52 +0000 (18:33 -0800)]
netlink: specs: add partial specification for ethtool
Ethtool is one of the most actively developed families.
With the changes to the CLI it should be possible to use
the YNL based code for easy prototyping and development.
Add a partial family definition. I've tested the string
set and rings. I don't have any MAC Merge implementation
to test with, but I added the definition for it, anyway,
because it's last. New commands can simply be added at
the end without having to worry about manually providing
IDs / values.
Set (with notification support - None is the response,
the data is from the notification):
$ sudo ./tools/net/ynl/cli.py \
--spec Documentation/netlink/specs/ethtool.yaml \
--do rings-set \
--json '{"header":{"dev-name":"enp0s31f6"}, "rx":129}' \
--subscribe monitor
None
[{'msg': {'header': {'dev-index': 2, 'dev-name': 'enp0s31f6'},
'rx': 136,
'rx-max': 4096,
'tx': 256,
'tx-max': 4096,
'tx-push': 0},
'name': 'rings-ntf'}]
Do / dump (yes, the kernel requires that even for dump and even
if empty - the "header" nest must be there):
$ ./tools/net/ynl/cli.py \
--spec Documentation/netlink/specs/ethtool.yaml \
--do rings-get \
--json '{"header":{"dev-index": 2}}'
{'header': {'dev-index': 2, 'dev-name': 'enp0s31f6'},
'rx': 136,
'rx-max': 4096,
'tx': 256,
'tx-max': 4096,
'tx-push': 0}
$ ./tools/net/ynl/cli.py \
--spec Documentation/netlink/specs/ethtool.yaml \
--dump rings-get \
--json '{"header":{}}'
[{'header': {'dev-index': 2, 'dev-name': 'enp0s31f6'},
'rx': 136,
'rx-max': 4096,
'tx': 256,
'tx-max': 4096,
'tx-push': 0},
{'header': {'dev-index': 3, 'dev-name': 'wlp0s20f3'}, 'tx-push': 0},
{'header': {'dev-index': 19, 'dev-name': 'enp58s0u1u1'},
'rx': 100,
'rx-max': 4096,
'tx-push': 0}]
And error reporting:
$ ./tools/net/ynl/cli.py \
--spec Documentation/netlink/specs/ethtool.yaml \
--dump rings-get \
--json '{"header":{"flags":5}}'
Netlink error: Invalid argument
nl_len = 68 (52) nl_flags = 0x300 nl_type = 2
error: -22 extack: {'msg': 'reserved bit set',
'bad-attr-offs': 24,
'bad-attr': '.header.flags'}
None
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:51 +0000 (18:33 -0800)]
netlink: specs: finish up operation enum-models
I had a (bright?) idea of introducing the concept of enum-models
to account for all the weird ways families enumerate their messages.
I've never finished it because generating C code for each of them
is pretty daunting. But for languages which can use ID values directly
the support is simple enough, so clean this up a bit.
"unified" model is what I recommend going forward.
"directional" model is what ethtool uses.
"notify-split" is used by the proposed DPLL code, but we can just
make them use "unified", it hasn't been merged :)
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:50 +0000 (18:33 -0800)]
tools: ynl: load jsonschema on demand
The CLI script tries to validate jsonschema by default.
It seems better to validate too many times than too few.
However, when copying the scripts to random servers, having
to install jsonschema is tedious. Load jsonschema via
importlib, and let the user opt out.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:49 +0000 (18:33 -0800)]
tools: ynl: use operation names from spec on the CLI
When I wrote the first version of the Python code I was quite
excited that we can generate class methods directly from the
spec. Unfortunately we need to use valid identifiers for method
names (specifically no dashes are allowed). Don't reuse those
names on the CLI; it's much more natural to use the operation
names exactly as listed in the spec.
Instead of:
./cli --do rings_get
use:
./cli --do rings-get
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:48 +0000 (18:33 -0800)]
tools: ynl: support pretty printing bad attribute names
One of my favorite features of the Netlink specs is that they
make decoding structured extack a ton easier.
Implement pretty printing bad attribute names in YNL.
For example it will now say:
'bad-attr': '.header.flags'
rather than the useless:
'bad-attr-offs': 32
Proof:
$ ./cli.py --spec ethtool.yaml --do rings_get \
--json '{"header":{"dev-index":1, "flags":4}}'
Netlink error: Invalid argument
nl_len = 68 (52) nl_flags = 0x300 nl_type = 2
error: -22 extack: {'msg': 'reserved bit set',
'bad-attr': '.header.flags'}
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:47 +0000 (18:33 -0800)]
tools: ynl: support multi-attr
Ethtool uses multi-attr; add support for it to YNL.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:46 +0000 (18:33 -0800)]
tools: ynl: support directional enum-model in CLI
Support families which use different IDs for messages
to and from the kernel.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:45 +0000 (18:33 -0800)]
tools: ynl: add support for types needed by ethtool
Ethtool needs support for a handful of extra types.
It doesn't have a definitions section yet.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:44 +0000 (18:33 -0800)]
tools: ynl: use the common YAML loading and validation code
Adapt the common object hierarchy in code gen and CLI.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:43 +0000 (18:33 -0800)]
tools: ynl: add an object hierarchy to represent parsed spec
There's a lot of copy and pasting going on between the "cli"
and code gen when it comes to representing the parsed spec.
Create a library which both can use.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:42 +0000 (18:33 -0800)]
tools: ynl: move the cli and netlink code around
Move the CLI code out of samples/ and the library part
of it into tools/net/ynl/lib/. This way we can start
sharing some code with the code gen.
Initially I thought that code gen is too C-specific to
share anything but basic stuff like calculating values
for enums can easily be shared.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Tue, 31 Jan 2023 02:33:41 +0000 (18:33 -0800)]
tools: ynl-gen: prevent do / dump reordering
An earlier fix tried to address generated code jumping around
from one code-gen run to another. It turns out dict()s have been
ordered since Python 3.7; the problem is that we iterate over
operation modes using a set(), and sets are unordered in Python.
Caleb Connolly [Fri, 27 Jan 2023 20:27:58 +0000 (20:27 +0000)]
net: ipa: use dev PM wakeirq handling
Replace the enable_irq_wake() call with one to dev_pm_set_wake_irq().
This will let the dev PM framework automatically manage the
wakeup capability of the IPA IRQ and ensure that userspace requests
to enable/disable wakeup for the IPA via sysfs are respected.
Signed-off-by: Caleb Connolly <caleb.connolly@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20230127202758.2913612-1-caleb.connolly@linaro.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Arnd Bergmann [Mon, 30 Jan 2023 13:17:51 +0000 (14:17 +0100)]
net: dsa: microchip: ptp: fix up PTP dependency
When NET_DSA_MICROCHIP_KSZ_COMMON is built-in but PTP is a loadable
module, the ksz_ptp support still causes a link failure:
ld.lld-16: error: undefined symbol: ptp_clock_index
>>> referenced by ksz_ptp.c
>>> drivers/net/dsa/microchip/ksz_ptp.o:(ksz_get_ts_info) in archive vmlinux.a
This can happen if NET_DSA_MICROCHIP_KSZ8863_SMI is enabled, or
even if none of the KSZ9477_I2C/KSZ_SPI/KSZ8863_SMI ones are active
but only the common module is.
The most straightforward way to address this is to move the
dependency to NET_DSA_MICROCHIP_KSZ_PTP itself, which can now
only be enabled if PTP_1588_CLOCK support is reachable
from NET_DSA_MICROCHIP_KSZ_COMMON. Alternatively, one could make
NET_DSA_MICROCHIP_KSZ_COMMON a hidden Kconfig symbol and extend the
PTP_1588_CLOCK_OPTIONAL dependency to NET_DSA_MICROCHIP_KSZ8863_SMI as
well, but that is a little more fragile.
Fixes:
eac1ea20261e ("net: dsa: microchip: ptp: add the posix clock support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20230130131808.1084796-1-arnd@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Randy Dunlap [Sun, 29 Jan 2023 23:10:48 +0000 (15:10 -0800)]
Documentation: networking: correct spelling
Correct spelling problems for Documentation/networking/ as reported
by codespell.
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Cc: Jiri Pirko <jiri@nvidia.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: netdev@vger.kernel.org
Link: https://lore.kernel.org/r/20230129231053.20863-5-rdunlap@infradead.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Nick Child [Fri, 27 Jan 2023 21:43:58 +0000 (15:43 -0600)]
ibmvnic: Toggle between queue types in affinity mapping
Previously, ibmvnic IRQs were assigned to CPU numbers by assigning all
the IRQs for transmit queues then assigning all the IRQs for receive
queues. With multi-threaded processors, in a heavy RX or TX environment,
physical cores would either be overloaded or underutilized (due to the
IRQ assignment algorithm). This approach is sub-optimal because IRQs for
the same subprocess (RX or TX) would be bound to adjacent CPU numbers,
meaning they were more likely to be contending for the same core.
For example, in a system with 64 CPUs and 32 queues, the IRQs would
be bound to CPUs in the following pattern:
IRQ type | CPU number
-----------------------
TX0 | 0-1
TX1 | 2-3
<etc>
RX0 | 32-33
RX1 | 34-35
<etc>
Observe that in SMT-8, the first 4 tx queues would be sharing the
same core.
A more optimal algorithm would balance the number of RX and TX IRQs across
the physical cores. Therefore, to increase performance, distribute RX and
TX IRQs across cores by alternating between assigning IRQs for RX and TX
queues to CPUs.
With a system with 64 CPUs and 32 queues, this results in the following
pattern:
IRQ type | CPU number
-----------------------
TX0 | 0-1
RX0 | 2-3
TX1 | 4-5
RX1 | 6-7
<etc>
Observe that in SMT-8, there is equal distribution of RX and TX IRQs
per core. In the above case, each core handles 2 TX and 2 RX IRQs.
Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Reviewed-by: Haren Myneni <haren@linux.ibm.com>
Link: https://lore.kernel.org/r/20230127214358.318152-1-nnac123@linux.ibm.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Jakub Kicinski [Tue, 31 Jan 2023 05:07:22 +0000 (21:07 -0800)]
Merge branch 'add-support-for-the-the-vsc7512-internal-copper-phys'
Colin Foster says:
====================
add support for the vsc7512 internal copper phys
This patch series is a continuation to add support for the VSC7512:
https://patchwork.kernel.org/project/netdevbpf/list/?series=674168&state=*
That series added the framework and initial functionality for the
VSC7512 chip. Several of these patches grew during the initial
development of the framework, which is why v1 will include changelogs.
It was during v9 of that original MFD patch set that these were dropped.
With that out of the way, the VSC7512 is mainly a subset of the VSC7514
chip. The 7512 lacks an internal MIPS processor, but otherwise many of
the register definitions are identical. That is why several of these
patches are simply to expose common resources from
drivers/net/ethernet/mscc/*.
This patch only adds support for the first four ports (swp0-swp3). The
remaining ports require more significant changes to the felix driver,
and will be handled in the future.
====================
Link: https://lore.kernel.org/r/20230127193559.1001051-1-colin.foster@in-advantage.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:59 +0000 (11:35 -0800)]
mfd: ocelot: add external ocelot switch control
Utilize the existing ocelot MFD interface to add switch functionality to
the Microsemi VSC7512 chip.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Acked-for-MFD-by: Lee Jones <lee@kernel.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:58 +0000 (11:35 -0800)]
net: dsa: ocelot: add external ocelot switch control
Add control of an external VSC7512 chip.
Currently the four copper phy ports are fully functional. Communication to
external phys is also functional, but the SGMII / QSGMII interfaces are
currently non-functional.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:57 +0000 (11:35 -0800)]
dt-bindings: mfd: ocelot: add ethernet-switch hardware support
The main purpose of the Ocelot chips is the Ethernet switching
functionality. Document the support for these features.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:56 +0000 (11:35 -0800)]
dt-bindings: net: mscc,vsc7514-switch: add dsa binding for the vsc7512
The VSC7511, VSC7512, VSC7513 and VSC7514 all have the ability to be
controlled either internally by a memory-mapped CPU, or externally via
interfaces like SPI and PCIe. The internal CPU of the VSC7511 and 7512
doesn't have the resources to run Linux, so these chips must be controlled
via these external interfaces in a DSA configuration.
Add mscc,vsc7512-switch compatible string to indicate that the chips are
being controlled externally in a DSA configuration.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:55 +0000 (11:35 -0800)]
mfd: ocelot: prepend resource size macros to be 32-bit
The *_RES_SIZE macros are initially <= 0x100. Future resource sizes will be
upwards of 0x200000 in size.
To keep things clean, fully align the RES_SIZE macros to 32-bit to do
nothing more than make the code more consistent.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Acked-for-MFD-by: Lee Jones <lee@kernel.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:54 +0000 (11:35 -0800)]
net: dsa: felix: add functionality when not all ports are supported
When the Felix driver would probe the ports and verify functionality, it
would fail if it hit a single port mode that wasn't supported by the driver.
The initial case for the VSC7512 driver will have physical ports that
exist, but aren't supported by the driver implementation. Add the
OCELOT_PORT_MODE_NONE macro to handle this scenario, and allow the Felix
driver to continue with all the ports that are currently functional.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:53 +0000 (11:35 -0800)]
net: dsa: felix: add support for MFD configurations
The architecture around the VSC7512 differs from existing felix drivers. In
order to add support for all the chip's features (pinctrl, MDIO, gpio) the
device had to be laid out as a multi-function device (MFD).
One difference between an MFD and a standard platform device is that the
regmaps are allocated to the parent device before the child devices are
probed. As such, there is no need for felix to initialize new regmaps in
these configurations, they can simply be requested from the parent device.
Add support for MFD configurations by performing this request from the
parent device.
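A sketch of the MFD path described above (resource naming and error
handling simplified):

    struct regmap *map;

    /* the parent MFD device already owns the regmap; just look it up */
    map = dev_get_regmap(dev->parent, res->name);
    if (!map)
            return ERR_PTR(-ENOENT);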
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:52 +0000 (11:35 -0800)]
net: dsa: felix: add configurable device quirks
The define FELIX_MAC_QUIRKS was used directly in the felix.c shared driver.
Other devices (VSC7512 for example) don't require the same quirks, so they
need to be configured on a per-device basis.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:51 +0000 (11:35 -0800)]
net: mscc: ocelot: expose vsc7514_regmap definition
The VSC7514 target regmap is identical to the ones used by similar
hardware, specifically the VSC7512. Share this resource, and change the
name to match the pattern of other exported resources.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:50 +0000 (11:35 -0800)]
net: mscc: ocelot: expose ocelot_reset routine
Resetting the switch core is the same whether it is done internally or
externally. Move this routine to the ocelot library so it can be used by
other drivers.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:49 +0000 (11:35 -0800)]
net: mscc: ocelot: expose vcap_props structure
The vcap_props structure is common to other devices, specifically the
VSC7512 chip that can only be controlled externally. Export this structure
so it doesn't need to be recreated.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:48 +0000 (11:35 -0800)]
net: mscc: ocelot: expose regfield definition to be used by other drivers
The ocelot_regfields struct is common between several different chips, some
of which can only be controlled externally. Export this structure so it
doesn't have to be duplicated in these other drivers.
Rename the structure as well, to follow the conventions of other shared
resources.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Colin Foster [Fri, 27 Jan 2023 19:35:47 +0000 (11:35 -0800)]
net: mscc: ocelot: expose ocelot wm functions
Expose ocelot_wm functions so they can be shared with other drivers.
Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> # regression
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Frank Sae [Sat, 28 Jan 2023 06:35:58 +0000 (14:35 +0800)]
net: phy: motorcomm: change the phy id of yt8521 and yt8531s to lowercase
The phy id is usually defined in lower case.
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230128063558.5850-2-Frank.Sae@motor-comm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Frank Sae [Sat, 28 Jan 2023 06:35:57 +0000 (14:35 +0800)]
net: phy: fix the spelling problem of Sentinel
CHECK: 'sentinal' may be misspelled - perhaps 'sentinel'?
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230128063558.5850-1-Frank.Sae@motor-comm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Sat, 28 Jan 2023 07:31:08 +0000 (23:31 -0800)]
sh: checksum: add missing linux/uaccess.h include
SuperH does not include uaccess.h, even though it calls access_ok().
Fixes:
68f4eae781dd ("net: checksum: drop the linux/uaccess.h include")
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Tested-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230128073108.1603095-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jiapeng Chong [Sat, 28 Jan 2023 09:04:13 +0000 (17:04 +0800)]
net: b44: Remove the unused function __b44_cam_read()
The function __b44_cam_read() is defined in the b44.c file but is not
called anywhere else, so remove this unused function.
drivers/net/ethernet/broadcom/b44.c:199:20: warning: unused function '__b44_cam_read'.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3858
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230128090413.79824-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jianbo Liu [Mon, 28 Nov 2022 00:55:16 +0000 (00:55 +0000)]
net/mlx5e: kTLS, Improve connection rate by using fast update encryption key
As the fast DEK update is fully implemented, use it for kTLS to get
better performance.
The TIS pool is already supported to recycle TISes. With this series
and the TIS pool, TLS CPS is improved about 9x, from 11k/s to 101k/s.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Mon, 25 Apr 2022 03:17:47 +0000 (03:17 +0000)]
net/mlx5: Keep only one bulk of full available DEKs
One bulk with all of its keys available is left undestroyed, to quickly
service possible requests from users.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Mon, 25 Apr 2022 03:17:47 +0000 (03:17 +0000)]
net/mlx5: Add async garbage collector for DEK bulk
After invalidation, an idle bulk, with all of its DEKs available for
use, is destroyed to free the keys and memory.
To get better performance, the firmware destruction operation is done
asynchronously: idle bulks are first enqueued in a destroy_list, then
destroyed in the system workqueue. This improves performance, as the
destruction doesn't need to hold the pool's mutex.
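A rough sketch of the asynchronous destruction described above
(hypothetical names and structures, not the actual mlx5 code):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

struct example_dek_pool {
	struct mutex		lock;		/* protects the lists */
	struct list_head	destroy_list;	/* idle bulks queued for destruction */
	struct work_struct	destroy_work;
};

/* Called with pool->lock held: queue the idle bulk, don't destroy it here. */
static void example_queue_bulk_destroy(struct example_dek_pool *pool,
				       struct list_head *bulk_entry)
{
	list_add_tail(bulk_entry, &pool->destroy_list);
	schedule_work(&pool->destroy_work);	/* runs on the system workqueue */
}

/* Work handler: splice the list out under the lock, destroy without it. */
static void example_destroy_work_handler(struct work_struct *work)
{
	struct example_dek_pool *pool =
		container_of(work, struct example_dek_pool, destroy_work);
	LIST_HEAD(tmp);

	mutex_lock(&pool->lock);
	list_splice_init(&pool->destroy_list, &tmp);
	mutex_unlock(&pool->lock);

	/* The firmware destroy commands for each bulk on 'tmp' would go here. */
}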
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 16 Mar 2022 02:06:22 +0000 (02:06 +0000)]
net/mlx5: Reuse DEKs after executing SYNC_CRYPTO command
To fast update encryption keys, the freed keys in a bulk, those with
need_sync bit 1 and in_use bit 0, can be recycled. The keys are cached
internally by the NIC, so invalidating the internal NIC caches with the
SYNC_CRYPTO command is required before reusing them. A threshold is
added in the driver to avoid invalidating on every update: the sync
process starts only when the number of DEKs that need to be synced
exceeds this threshold. The sync is done in the system workqueue.
After the SYNC_CRYPTO command is executed successfully, the bitmaps of
each bulk must be reset accordingly, so that the freed DEKs can be
reused. From the analysis in the previous patch, the number of reused
DEKs can be calculated as hweight_long(need_sync XOR in_use), and the
need_sync bits can be reset by simply copying them from the in_use bits.
Two more lists (avail_list and sync_list) are added to each pool. The
avail_list holds bulks whose need_sync bits have all been reset after a
sync. If a bulk has no available DEKs and all of them have been freed by
users, it is moved to the sync_list to wait for the invalidation,
instead of being destroyed as in the previous patch. When the sync runs,
its need_sync bits are simply reset and the bulk is moved to the
avail_list.
Besides, add a wait_for_free list for the to-be-freed DEKs. It avoids
this corner case: thread A has finished SYNC_CRYPTO but has not yet
started to reset the bitmaps, while thread B allocates a DEK and frees
it immediately. This DEK obviously can't be reused this time, so put it
on the waiting list and free it after the bulk bitmap reset is finished.
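As an illustration of the bitmap arithmetic above (a sketch only, with
hypothetical names, not the driver code):

#include <linux/bitops.h>

/*
 * Count the DEKs that become reusable after SYNC_CRYPTO and reset
 * need_sync. Since (need_sync=0, in_use=1) is invalid, any differing
 * bits must be (1,0), i.e. freed keys, and resetting need_sync is a
 * plain copy of in_use.
 */
static unsigned int example_bulk_sync_done(unsigned long *need_sync,
					   const unsigned long *in_use,
					   unsigned int nbits)
{
	unsigned int i, reused = 0;

	for (i = 0; i < BITS_TO_LONGS(nbits); i++) {
		reused += hweight_long(need_sync[i] ^ in_use[i]);
		need_sync[i] = in_use[i];
	}

	return reused;
}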
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 20 Jul 2022 07:20:01 +0000 (07:20 +0000)]
net/mlx5: Use bulk allocation for fast update encryption key
We create a pool for each key type. For the pool, there is a struct to
store the info for all DEK objects of one bulk allocation. As we use
crypto->log_dek_obj_range, which was set to 12 in the previous patch,
for the log_obj_range of the bulk allocation, 4096 DEKs are allocated
at a time.
To track the state of all the keys in a bulk, two bitmaps are created.
The need_sync bitmap indicates the availability of the corresponding
key. If the bit is 0, the key can be used (available), either because
it was newly created by the FW, or because SYNC_CRYPTO was executed and
the bit was reset after the key was freed by the upper-layer user (this
case is handled in a later patch). Otherwise, the key needs to be
synced. The in_use bitmap indicates that the key is being used, and is
reset when the user frees it.
When kTLS, IPsec or MACsec needs a key from a bulk, it gets one with the
need_sync bit 0, then sets both the need_sync and in_use bits to 1. When
the user frees a key, only the in_use bit is reset to 0. So, for the
combinations of (need_sync, in_use) of one DEK object,
- (0,0) means the key is ready for use,
- (1,1) means the key is currently being used by a user,
- (1,0) means the key is freed, and waiting to be synced,
- (0,1) is an invalid state.
There are two lists in each pool, partial_list and full_list, according
to the number of available DEKs in a bulk. When a user needs a key, it
gets a bulk, either from the partial list, or by creating a new one from
the FW. The bulk is then put on the appropriate pool list according to
the number of available DEKs it has. If a bulk has no available DEKs and
all of them have been freed by users, for now, the bulk is destroyed.
To speed up the bitmap search, a variable (avail_start) is added to
indicate where to start searching the need_sync bitmap for an available
key.
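A hedged sketch of the (need_sync, in_use) handling described above; the
structure and function names are illustrative, not the mlx5 code:

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/errno.h>

#define EXAMPLE_BULK_DEKS	4096	/* 1 << log_dek_obj_range, with range = 12 */

struct example_dek_bulk {
	DECLARE_BITMAP(need_sync, EXAMPLE_BULK_DEKS);
	DECLARE_BITMAP(in_use, EXAMPLE_BULK_DEKS);
	unsigned int avail_start;	/* hint: where to start searching */
};

/* Get an available DEK: find a need_sync=0 bit and mark it (1,1). */
static int example_bulk_get_dek(struct example_dek_bulk *bulk)
{
	unsigned int idx;

	idx = find_next_zero_bit(bulk->need_sync, EXAMPLE_BULK_DEKS,
				 bulk->avail_start);
	if (idx >= EXAMPLE_BULK_DEKS)
		return -ENOSPC;

	set_bit(idx, bulk->need_sync);
	set_bit(idx, bulk->in_use);
	bulk->avail_start = idx + 1;
	return idx;
}

/* Free a DEK: clear only in_use, leaving (1,0) = waiting to be synced. */
static void example_bulk_put_dek(struct example_dek_bulk *bulk, unsigned int idx)
{
	clear_bit(idx, bulk->in_use);
}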
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 20 Jul 2022 03:57:37 +0000 (03:57 +0000)]
net/mlx5: Add bulk allocation and modify_dek operation
To support fast update of keys into hardware, we optimize the firmware
to achieve the maximum rate. The approach is to create DEK objects in
bulk, and to update each of them with a modify command.
This patch supports the bulk allocation and modify_dek commands for new
firmware. However, as log_obj_range is 0 for now, only one DEK object is
allocated each time, and it is then updated with the user key via
modify_dek.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 20 Jul 2022 02:43:40 +0000 (02:43 +0000)]
net/mlx5: Add support SYNC_CRYPTO command
Add support for the SYNC_CRYPTO command. For now, it is executed only
when initializing the DEK, but it is needed for reusing keys in a later
patch.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 20 Jul 2022 02:36:05 +0000 (02:36 +0000)]
net/mlx5: Add new APIs for fast update encryption key
New APIs are added to support fast update of DEKs. As a pool is created
for each key purpose (type), one pair of pool APIs gets/puts a pool.
Another pair of DEK APIs gets a DEK object from the pool and updates it
with the user key, or frees it back to the pool. As bulk allocation and
destruction will be supported in later patches, the old implementation
is used here.
To support these APIs, pool and dek structs are defined first. Only a
small number of fields are stored in them: for example, key_purpose and
refcnt in the pool struct, and the DEK object id in the dek struct. More
fields will be added to these structs in later patches, for example the
different bulk lists for the pool struct, the bulk pointer the dek
struct belongs to, and a list_entry for the per-pool list used to save
keys waiting to be freed while another thread is doing the sync.
Besides the creation and destruction interfaces, a new one is also added
to get the object id.
Currently these APIs are planned to be used by TLS only.
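As a hedged sketch, the shape such an API might take (all names and
signatures below are illustrative placeholders, not the actual mlx5
functions):

/* One pool per key purpose; DEKs are taken from and returned to it. */
struct example_dek_pool;
struct example_dek;

struct example_dek_pool *example_dek_pool_get(struct mlx5_core_dev *mdev,
					      int key_purpose);
void example_dek_pool_put(struct example_dek_pool *pool);

/* Get a DEK object and load it with the user key, or return it. */
struct example_dek *example_dek_get(struct example_dek_pool *pool,
				    const void *key, u32 key_len);
void example_dek_put(struct example_dek_pool *pool, struct example_dek *dek);

/* Accessor for the underlying DEK object id. */
u32 example_dek_obj_id(struct example_dek *dek);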
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Thu, 28 Jul 2022 03:59:29 +0000 (03:59 +0000)]
net/mlx5: Refactor the encryption key creation
Move the common code to general functions which can be used by fast
update encryption key in later patches.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Mon, 8 Aug 2022 02:47:17 +0000 (02:47 +0000)]
net/mlx5: Add const to the key pointer of encryption key creation
Change the key pointer to const void *, as there is no need to change
the key content. This also avoids modifying the key by mistake.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Mon, 23 May 2022 04:10:02 +0000 (04:10 +0000)]
net/mlx5: Prepare for fast crypto key update if hardware supports it
Add the CAP for crypto offload and do the simple initialization if the
hardware supports it. Currently, log_dek_obj_range is set to 12, so 4k
DEKs will be created in one bulk allocation.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Tue, 22 Nov 2022 06:09:22 +0000 (06:09 +0000)]
net/mlx5: Change key type to key purpose
Change the naming of key type in DEK fields and macros, to be
consistent with the device spec.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Thu, 17 Feb 2022 09:29:35 +0000 (09:29 +0000)]
net/mlx5: Add IFC bits and enums for crypto key
Add and extend structure layouts and defines for fast crypto key
update. This is a prerequisite to support bulk creation, key
modification and destruction, software wrapped DEK, and SYNC_CRYPTO
command.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jianbo Liu [Wed, 6 Jul 2022 13:06:30 +0000 (13:06 +0000)]
net/mlx5: Add IFC bits for general obj create param
Before this patch, log_obj_range was defined inside
general_obj_in_cmd_hdr to support bulk allocation. However, we need to
modify/query one of the objects in the bulk in a later patch, so change
those fields into param bits for parameters specific to the command
header, and add general_obj_create_param according to what was updated
in the spec.
We will also add general_obj_query_param for modify/query later.
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Tariq Toukan [Sun, 13 Mar 2022 22:22:40 +0000 (00:22 +0200)]
net/mlx5: Header file for crypto
Take crypto API out of the generic mlx5.h header into a dedicated
header.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Bjorn Helgaas [Wed, 18 Jan 2023 23:46:12 +0000 (17:46 -0600)]
ixgbe: Remove redundant pci_enable_pcie_error_reporting()
pci_enable_pcie_error_reporting() enables the device to send ERR_*
Messages. Since
f26e58bf6f54 ("PCI/AER: Enable error reporting when AER is
native"), the PCI core does this for all devices during enumeration.
Remove the redundant pci_enable_pcie_error_reporting() call from the
driver. Also remove the corresponding pci_disable_pcie_error_reporting()
from the driver .remove() path.
Note that this doesn't control interrupt generation by the Root Port; that
is controlled by the AER Root Error Command register, which is managed by
the AER service driver.
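Illustrative only (not the actual ixgbe code): the probe path simply
drops the explicit AER call, relying on the PCI core having enabled
reporting during enumeration.

#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev)
{
	int err;

	err = pci_enable_device_mem(pdev);
	if (err)
		return err;

	/* pci_enable_pcie_error_reporting(pdev) used to be called here;
	 * the PCI core now enables AER reporting during enumeration.
	 */
	pci_set_master(pdev);
	return 0;
}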
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Bjorn Helgaas [Wed, 18 Jan 2023 23:46:11 +0000 (17:46 -0600)]
igc: Remove redundant pci_enable_pcie_error_reporting()
pci_enable_pcie_error_reporting() enables the device to send ERR_*
Messages. Since
f26e58bf6f54 ("PCI/AER: Enable error reporting when AER is
native"), the PCI core does this for all devices during enumeration.
Remove the redundant pci_enable_pcie_error_reporting() call from the
driver. Also remove the corresponding pci_disable_pcie_error_reporting()
from the driver .remove() path.
Note that this doesn't control interrupt generation by the Root Port; that
is controlled by the AER Root Error Command register, which is managed by
the AER service driver.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Bjorn Helgaas [Wed, 18 Jan 2023 23:46:10 +0000 (17:46 -0600)]
igb: Remove redundant pci_enable_pcie_error_reporting()
pci_enable_pcie_error_reporting() enables the device to send ERR_*
Messages. Since
f26e58bf6f54 ("PCI/AER: Enable error reporting when AER is
native"), the PCI core does this for all devices during enumeration.
Remove the redundant pci_enable_pcie_error_reporting() call from the
driver. Also remove the corresponding pci_disable_pcie_error_reporting()
from the driver .remove() path.
Note that this doesn't control interrupt generation by the Root Port; that
is controlled by the AER Root Error Command register, which is managed by
the AER service driver.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>