Petr Machata [Thu, 11 Mar 2021 18:03:18 +0000 (19:03 +0100)]
nexthop: Implement notifiers for resilient nexthop groups
Implement the following notifications towards drivers:
- NEXTHOP_EVENT_REPLACE, when a resilient nexthop group is created.
- NEXTHOP_EVENT_BUCKET_REPLACE any time there is a change in assignment of
next hops to hash table buckets. That includes replacements, deletions,
and delayed upkeep cycles. Some bucket notifications can be vetoed by the
driver, to make it possible to propagate bucket busy-ness flags from the
HW back to the algorithm. Some are however forced, e.g. if a next hop is
deleted, all buckets that use this next hop simply must be migrated,
whether the HW wishes so or not.
- NEXTHOP_EVENT_RES_TABLE_PRE_REPLACE, before a resilient nexthop group is
replaced. Usually the driver will get the bucket notifications as well,
and could veto those. But in some cases, a bucket may not be migrated
immediately, but during delayed upkeep, and that is too late to roll the
transaction back. This notification allows the driver to take a look and
veto the new proposed group up front, before anything is committed.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 11 Mar 2021 18:03:17 +0000 (19:03 +0100)]
nexthop: Add data structures for resilient group notifications
Add data structures that will be used for in-kernel notifications about
addition / deletion of a resilient nexthop group and about changes to a
hash bucket within a resilient group.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 11 Mar 2021 18:03:16 +0000 (19:03 +0100)]
nexthop: Add implementation of resilient next-hop groups
At this moment, there is only one type of next-hop group: an mpath group,
which implements the hash-threshold algorithm.
To select a next hop, the hash-threshold algorithm first assigns a range of
hashes to each next hop in the group, and then selects the next hop by
comparing the SKB hash with the individual ranges. When a next hop is
removed from the group, the ranges are recomputed, which leads to
reassignment of parts of hash space from one next hop to another. While
there will usually be some overlap between the previous and the new
distribution, some traffic flows change the next hop that they resolve to.
That causes problems e.g. as established TCP connections are reset, because
the traffic is forwarded to a server that is not familiar with the
connection.
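For illustration only, the hash-threshold selection step boils down to comparing
the packet hash against per-next-hop upper bounds; the names below are made up
for the sketch and are not the actual kernel symbols:

/* Hash-threshold sketch: next hop i owns the hash range
 * (upper_bound[i-1], upper_bound[i]].  Removing a next hop causes the
 * remaining bounds to be recomputed, remapping part of the hash space.
 */
static int ht_select_nh(const u32 *upper_bound, int num_nh, u32 pkt_hash)
{
        int i;

        for (i = 0; i < num_nh; i++)
                if (pkt_hash <= upper_bound[i])
                        return i;
        return num_nh - 1;
}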
Resilient hashing is a technique to address the above problem. A resilient
next-hop group has another layer of indirection between the group itself
and its constituent next hops: a hash table. The selection algorithm uses a
straightforward modulo operation to choose a hash bucket, and then reads
the next hop that this bucket contains, and forwards traffic there.
This indirection brings an important feature. In the hash-threshold
algorithm, the range of hashes associated with a next hop must be
continuous. With a hash table, mapping between the hash table buckets and
the individual next hops is arbitrary. Therefore, when a next hop is deleted,
the buckets that held it are simply reassigned to other next hops. When
weights of next hops in a group are altered, it may be possible to choose a
subset of buckets that are currently not used for forwarding traffic, and
use those to satisfy the new next-hop distribution demands, keeping the
"busy" buckets intact. This way, established flows are ideally kept being
forwarded to the same endpoints through the same paths as before the
next-hop group change.
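A minimal sketch of the resulting selection step, with illustrative structure
and field names rather than the actual kernel symbols:

/* Resilient selection sketch: the hash picks a bucket, the bucket holds
 * the next hop.  A flow is remapped only if its bucket is reassigned,
 * not whenever the group composition changes.
 */
static struct nexthop *res_select_nh(const struct res_table *tbl, u32 pkt_hash)
{
        u32 bucket = pkt_hash % tbl->num_buckets;

        return tbl->buckets[bucket].nh;
}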
In a nutshell, the algorithm works as follows. Each next hop has a number
of buckets that it wants to have, according to its weight and the number of
buckets in the hash table. In case of an event that might cause bucket
allocation change, the numbers for individual next hops are updated,
similarly to how ranges are updated for mpath group next hops. Following
that, a new "upkeep" algorithm runs, and for idle buckets that belong to a
next hop that is currently occupying more buckets than it wants (it is
"overweight"), it migrates the buckets to one of the next hops that has
fewer buckets than it wants (it is "underweight"). If, after this, there
are still underweight next hops, another upkeep run is scheduled for a
future time.
Chances are there are not enough "idle" buckets to satisfy the new demands.
The algorithm has knobs to select both what it means for a bucket to be
idle, and whether and when to forcefully migrate buckets if the number of
idle buckets remains insufficient.
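The upkeep pass can be pictured roughly like this (pseudo-C with made-up helper
names; the real implementation also honours the idle and unbalanced timers
described above):

/* Upkeep sketch: move idle buckets away from overweight next hops
 * towards underweight ones, and reschedule if demands are still unmet.
 */
static void res_table_upkeep(struct res_table *tbl)
{
        int i;

        for (i = 0; i < tbl->num_buckets; i++) {
                struct res_bucket *b = &tbl->buckets[i];

                if (nh_is_overweight(b->nh) && bucket_is_idle(tbl, b))
                        res_bucket_migrate(tbl, b, pick_underweight_nh(tbl));
        }

        if (has_underweight_nh(tbl))
                schedule_delayed_work(&tbl->upkeep_dw, tbl->unbalanced_timer);
}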
There are three users of the resilient data structures.
- The forwarding code accesses them under RCU, and does not modify them
except for updating the time a selected bucket was last used.
- Netlink code, running under RTNL, which may modify the data.
- The delayed upkeep code, which may modify the data. This runs unlocked,
and mutual exclusion between the RTNL code and the delayed upkeep is
maintained by canceling the delayed work synchronously before the RTNL
code touches anything. Later it restarts the delayed work if necessary.
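The mutual exclusion described above follows the usual delayed-work pattern,
roughly (res_table and upkeep_dw are illustrative names for the resilient
table and its delayed work item):

        /* RTNL-side writer (sketch): stop upkeep, modify, restart. */
        cancel_delayed_work_sync(&res_table->upkeep_dw);
        /* ... modify buckets / next-hop assignment under RTNL ... */
        schedule_delayed_work(&res_table->upkeep_dw, delay);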
The RTNL code has to implement next-hop group replacement, next hop
removal, etc. For removal, the mpath code uses a neat trick of having a
backup next hop group structure, doing the necessary changes offline, and
then RCU-swapping them in. However, the hash tables for resilient hashing
are about an order of magnitude larger than the groups themselves (the size
might be e.g. 4K entries), and it was felt that keeping two of them is an
overkill. Both the primary next-hop group and the spare therefore use the
same resilient table, and writers are careful to keep all references valid
for the forwarding code. The hash table references next-hop group entries
from the next-hop group that is currently in the primary role (i.e. not
spare). During the transition from primary to spare, the table references a
mix of both the primary group and the spare. When a next hop is deleted,
the corresponding buckets are not set to NULL, but instead marked as empty,
so that the pointer is valid and can be used by the forwarding code. The
buckets are then migrated to a new next-hop group entry during upkeep. The
only times that the hash table is invalid are the very beginning and very
end of its lifetime. Between those points, it is always kept valid.
This patch introduces the core support code itself. It does not handle
notifications towards drivers, which are kept as if the group were an mpath
one. It does not handle netlink either. The only bit currently exposed to
user space is the new next-hop group type, and that is currently bounced.
There is therefore no way to actually access this code.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 11 Mar 2021 18:03:15 +0000 (19:03 +0100)]
nexthop: Add netlink defines and enumerators for resilient NH groups
- RTM_NEWNEXTHOP et al. that handle resilient groups will have a new nested
attribute, NHA_RES_GROUP, whose elements are attributes NHA_RES_GROUP_*
(a fill sketch follows this list).
- RTM_NEWNEXTHOPBUCKET et al. is a suite of new messages that will
currently serve only for dumping of individual buckets of resilient next
hop groups. For nexthop group buckets, these messages will carry a nested
attribute NHA_RES_BUCKET, whose elements are attributes NHA_RES_BUCKET_*.
There are several reasons why a new suite of messages is created for
nexthop buckets instead of overloading the information on the existing
RTM_{NEW,DEL,GET}NEXTHOP messages.
First, a nexthop group can contain a large number of nexthop buckets (4k
is not unheard of). This imposes limits on the amount of information that
can be encoded for each nexthop bucket given that a netlink message is
limited to 64KB.
Second, while RTM_NEWNEXTHOPBUCKET is only used for notifications at
this point, in the future it can be extended to provide user space with
control over nexthop bucket configuration.
- The new group type is NEXTHOP_GRP_TYPE_RES. Note that nexthop code is
adjusted to bounce groups with that type for now.
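For orientation, filling the new nested group attribute follows the usual
netlink nesting pattern. The sketch below uses the NHA_RES_GROUP_* naming
convention from this series, but the exact attribute set and types should be
taken from the uapi header rather than from this example:

static int nla_put_nh_res_group(struct sk_buff *skb, u16 num_buckets,
                                u32 idle_timer, u32 unbalanced_timer)
{
        struct nlattr *nest;

        nest = nla_nest_start(skb, NHA_RES_GROUP);
        if (!nest)
                return -EMSGSIZE;

        if (nla_put_u16(skb, NHA_RES_GROUP_BUCKETS, num_buckets) ||
            nla_put_u32(skb, NHA_RES_GROUP_IDLE_TIMER, idle_timer) ||
            nla_put_u32(skb, NHA_RES_GROUP_UNBALANCED_TIMER, unbalanced_timer))
                goto nla_put_failure;

        nla_nest_end(skb, nest);
        return 0;

nla_put_failure:
        nla_nest_cancel(skb, nest);
        return -EMSGSIZE;
}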
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 11 Mar 2021 18:03:14 +0000 (19:03 +0100)]
nexthop: Add a dedicated flag for multipath next-hop groups
With the introduction of resilient nexthop groups, there will be two types
of multipath groups: the current hash-threshold "mpath" ones, and resilient
groups. Both are multipath, but to determine that fact, the system needs to
consider two flags. This might prove costly in the datapath. Therefore,
introduce a new flag that is set for next-hop groups that have more
than one nexthop and should be considered multipath.
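The intended datapath effect is that a single flag test replaces a check of
two type-specific flags, along these lines (field names illustrative):

        /* Before: every multipath group type has to be tested. */
        if (nhg->hash_threshold || nhg->resilient)
                nh = nexthop_select_path(nh, hash);

        /* After: one flag covers all multipath group types. */
        if (nhg->is_multipath)
                nh = nexthop_select_path(nh, hash);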
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 11 Mar 2021 18:03:13 +0000 (19:03 +0100)]
nexthop: __nh_notifier_single_info_init(): Make nh_info an argument
The cited function currently uses rtnl_dereference() to get nh_info from a
handed-in nexthop. However, under the resilient hashing scheme, this
function will not always be called under RTNL, sometimes the mutual
exclusion will be achieved differently. Therefore move the nh_info
extraction from the function to its callers to make it possible to use a
different synchronization guarantee.
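The shape of the change (simplified; the real function fills in more fields)
is roughly:

/* Before: the function dereferences nh_info itself, requiring RTNL. */
static void __nh_notifier_single_info_init(struct nh_notifier_single_info *info,
                                           const struct nexthop *nh)
{
        struct nh_info *nhi = rtnl_dereference(nh->nh_info);

        info->dev = nhi->fib_nhc.nhc_dev;
        /* ... */
}

/* After: the caller obtains nh_info under whatever synchronization it
 * runs with, and passes it in directly.
 */
static void __nh_notifier_single_info_init(struct nh_notifier_single_info *info,
                                           const struct nh_info *nhi)
{
        info->dev = nhi->fib_nhc.nhc_dev;
        /* ... */
}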
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 11 Mar 2021 18:03:12 +0000 (19:03 +0100)]
nexthop: Pass nh_config to replace_nexthop()
Currently, replace assumes that the new group that is given is a
fully-formed object. But mpath groups really only have one attribute, and
that is the constituent next hop configuration. This may not be universally
true. From the usability perspective, it is desirable to allow the replace
operation to adjust just the constituent next hop configuration and leave
the group attributes as such intact.
But the object that keeps track of whether an attribute was or was not
given is the nh_config object, not the next hop or next-hop group. To allow
(selective) attribute updates during NH group replacement, propagate `cfg'
to replace_nexthop() and further to replace_nexthop_grp().
Signed-off-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 12 Mar 2021 00:09:21 +0000 (16:09 -0800)]
Merge branch 'seg6-next'
Julien Massonneau says:
====================
SRv6: SRH processing improvements
Add support for IPv4 decapsulation in ipv6_srh_rcv() and
ignore routing header with segments left equal to 0 for
seg6local actions that don't perform decapsulation.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Julien Massonneau [Thu, 11 Mar 2021 15:53:19 +0000 (16:53 +0100)]
seg6: ignore routing header with segments left equal to 0
When there are two segment routing headers, after an End.B6 action
for example, the second SRH will never be handled by an action: the packet
will be dropped when the first SRH has segments left equal to 0.
For actions that don't perform decapsulation (currently: End, End.X,
End.T, End.B6, End.B6.Encaps), this patch adds the IP6_FH_F_SKIP_RH flag
in arguments for ipv6_find_hdr().
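The change essentially amounts to passing the extra flag when the SRH is
looked up for those actions, in the spirit of (simplified):

        int srhoff = 0;
        int flags = IP6_FH_F_SKIP_RH;   /* skip RHs with segments_left == 0 */

        if (ipv6_find_hdr(skb, &srhoff, IPPROTO_ROUTING, NULL, &flags) < 0)
                return NULL;            /* no usable SRH found */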
Signed-off-by: Julien Massonneau <julien.massonneau@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julien Massonneau [Thu, 11 Mar 2021 15:53:18 +0000 (16:53 +0100)]
seg6: add support for IPv4 decapsulation in ipv6_srh_rcv()
As specified in IETF RFC 8754, section 4.3.1.2, if the upper layer
header is IPv4 or IPv6, perform IPv6 decapsulation and resubmit the
decapsulated packet to the IPv4 or IPv6 module.
Only IPv6 decapsulation was implemented. This patch adds support for IPv4
decapsulation.
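A rough sketch of the added branch (illustrative helper, not the actual
in-line code; checksum and dst handling are omitted):

/* Resubmit the decapsulated inner packet that starts 'inner_off' bytes
 * into the current network header, as IPv4 or IPv6 depending on the
 * upper layer header type.
 */
static void srh_decap_resubmit(struct sk_buff *skb, int inner_off, u8 nexthdr)
{
        skb_pull(skb, inner_off);
        skb_reset_network_header(skb);
        skb_reset_transport_header(skb);

        skb->protocol = nexthdr == NEXTHDR_IPV4 ? htons(ETH_P_IP)
                                                : htons(ETH_P_IPV6);
        netif_rx(skb);
}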
Link: https://tools.ietf.org/html/rfc8754#section-4.3.1.2
Signed-off-by: Julien Massonneau <julien.massonneau@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 12 Mar 2021 00:01:10 +0000 (16:01 -0800)]
Merge branch 'hns3-next'
Huazhong Tan says:
====================
net: hns3: two updates for -next
This series includes two updates for the HNS3 ethernet driver.
====================
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yufeng Mo [Thu, 11 Mar 2021 02:14:12 +0000 (10:14 +0800)]
net: hns3: use pause capability queried from firmware
For maintainability and compatibility, add support to use pause
capability queried from firmware, and add debugfs support to dump
this capability.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yufeng Mo [Thu, 11 Mar 2021 02:14:11 +0000 (10:14 +0800)]
net: hns3: use FEC capability queried from firmware
For maintainability and compatibility, add support to use FEC
capability queried from firmware.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiapeng Chong [Thu, 11 Mar 2021 07:11:01 +0000 (15:11 +0800)]
netdevsim: fib: Remove redundant code
Fix the following coccicheck warnings:
./drivers/net/netdevsim/fib.c:874:5-8: Unneeded variable: "err". Return
"0" on line 889.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Wed, 10 Mar 2021 22:12:43 +0000 (14:12 -0800)]
net: phy: Expose phydev::dev_flags through sysfs
phydev::dev_flags contains a bitmask of configuration bits requested by
the consumer of a PHY device (Ethernet MAC or switch) towards the PHY
driver. Since these flags are often used for requesting LED or other
types of configuration, being able to quickly audit them without
instrumenting the kernel is useful.
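A read-only attribute of this sort typically reduces to a one-line show
callback, roughly as sketched below (the exact attribute name and file
placement are per the patch, not guaranteed by this example):

static ssize_t phy_dev_flags_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
{
        struct phy_device *phydev = to_phy_device(dev);

        return sprintf(buf, "0x%08x\n", phydev->dev_flags);
}
static DEVICE_ATTR_RO(phy_dev_flags);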
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Wed, 10 Mar 2021 18:52:26 +0000 (10:52 -0800)]
net: dsa: b53: Add debug prints in b53_vlan_enable()
Having dynamic debug prints in b53_vlan_enable() has been helpful to
uncover a recent bug. Update the function to indicate the port being
configured (or -1 for the initial setup) and include the global VLAN
enabled and VLAN filtering enable status.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bhaskar Chowdhury [Wed, 10 Mar 2021 22:51:23 +0000 (04:21 +0530)]
net: fddi: skfp: Mundane typo fixes throughout the file smt.h
A few spelling fixes throughout the file.
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shubhankar Kuranagatti [Wed, 10 Mar 2021 21:13:43 +0000 (02:43 +0530)]
net: ipv4: route.c: fix space before tab
An extra space before a tab has been removed.
Signed-off-by: Shubhankar Kuranagatti <shubhankarvk@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 23:34:28 +0000 (15:34 -0800)]
Merge branch 'ionic-next'
Shannon Nelson says:
====================
ionic Rx updates
The ionic driver's Rx path is due for an overhaul in order to
better use memory buffers and to clean up the data structures.
The first two patches convert the driver to using page sharing
between buffers so as to lessen the page alloc and free overhead.
The remaining patches clean up the structs and fastpath code for
better efficiency.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:31 +0000 (11:26 -0800)]
ionic: simplify use of completion types
Make better use of our struct types and type checking by passing
the actual Rx or Tx completion type rather than a generic void
pointer type.
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:30 +0000 (11:26 -0800)]
ionic: rebuild debugfs on qcq swap
When a queue is reconfigured, the matching debugfs information needs to
be rebuilt.
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:29 +0000 (11:26 -0800)]
ionic: simplify rx skb alloc
Remove an unnecessary layer over rx skb allocation.
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:28 +0000 (11:26 -0800)]
ionic: optimize fastpath struct usage
Clean up a couple of struct uses to make for better fast path
access.
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:27 +0000 (11:26 -0800)]
ionic: implement Rx page reuse
Rework the Rx buffer allocations to use pages twice when using
normal MTU in order to cut down on buffer allocation and mapping
overhead.
Instead of tracking individual pages, in which we may have
wasted half the space when using standard 1500 MTU, we track
buffers which use half pages, so we can use the second half
of the page rather than allocate and map a new page once the
first buffer has been used.
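The gist, with illustrative names rather than the driver's actual structures:

struct rx_buf_info {
        struct page *page;
        dma_addr_t dma_addr;
        unsigned int page_offset;       /* 0 or PAGE_SIZE / 2 */
};

/* Flip to the unused half of the page instead of allocating a new one;
 * only fall back to a fresh allocation when both halves have been used.
 */
static bool rx_buf_try_reuse(struct rx_buf_info *buf)
{
        if (buf->page_offset != 0)
                return false;           /* second half already consumed */

        buf->page_offset = PAGE_SIZE / 2;
        get_page(buf->page);            /* hold a reference for the new half */
        return true;
}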
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson [Wed, 10 Mar 2021 19:26:26 +0000 (11:26 -0800)]
ionic: move rx_page_alloc and free
Move ionic_rx_page_alloc() and ionic_rx_page_free() to earlier
in the file to make the next patch easier to review.
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 21:30:36 +0000 (13:30 -0800)]
Merge branch 'dpaa2-switch-next'
Ioana Ciornei says:
====================
dpaa2-switch: CPU terminated traffic and move out of staging
This patch set adds support for Rx/Tx capabilities on DPAA2 switch port
interfaces as well as fixing up some major blunders in how we take care
of the switching domains. The last patch actually moves the driver out
of staging now that the minimum requirements are met.
I am sending this directly towards the net-next tree so that I can use
the rest of the development cycle adding new features on top of the
current driver without worrying about merge conflicts between the
staging and net-next tree.
The control interface is comprised of 3 queues in total: Rx, Rx error
and Tx confirmation. In this patch set we only enable Rx and Tx conf.
All switch ports share the same queues when frames are redirected to the
CPU. Information regarding the ingress switch port is passed through
frame metadata - the flow context field of the descriptor.
NAPI instances are also shared between switch net_devices and are
enabled when .dev_open() was called on at least one of the switch ports,
and disabled when no switch port is up anymore.
Since the last version of this feature was submitted to the list, I
reworked how the switching and flooding domains are taken care of by the
driver, thus the switch is now able to also add the control port (the
queues that the CPU can dequeue from) into the flooding domains of a
port (broadcast, unknown unicast etc). With this, we are able to receive
and send traffic from the switch interfaces.
Also, the capability to properly partition the DPSW object into multiple
switching domains was added so that when not under a bridge, the ports
are not actually capable of switching between themselves. This is possible by
adding a private FDB table per switch interface. When multiple switch
interfaces are under the same bridge, they will all use the same FDB
table.
Another thing that is fixed in this patch set is how the driver handles
VLAN awareness. The DPAA2 switch is not capable of running VLAN-unaware,
but this was not reflected in how the driver responded to requests to
change the VLAN awareness. In the last patch, this is fixed by
describing the switch interfaces as Rx VLAN filtering on [fixed] and
declining any request to join a VLAN unaware bridge.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:52 +0000 (14:14 +0200)]
staging: dpaa2-switch: move the driver out of staging
Now that the dpaa2-switch driver has basic I/O capabilities on the
switch port net_devices and multiple bridging domains are supported,
move the driver out of staging.
The dpaa2-switch driver is placed right next to the dpaa2-eth driver
since, in the near future, they will be sharing most of the data path.
I didn't implement code reuse in this patch series because I wanted to
keep it as small as possible.
Also, the README is removed from staging with the intention to add
proper rst documentation afterwards to actually match what is supported
by the driver.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:51 +0000 (14:14 +0200)]
staging: dpaa2-switch: prevent joining a bridge while VLAN uppers are present
Each time a switch port joins a bridge, it will start to use a FDB table
common with all the other switch ports that are under the same bridge.
This means that any VLAN added prior to a bridge join, will retain its
previous FDB table destination. With this patch, I choose to restrict
when a switch port can change its upper device (either join or leave)
so that the driver does not have to delete all the previously installed
VLANs from the previous FDB and add them into the new one.
Thus, in the PRECHANGEUPPER notification we check if there are any VLAN
type upper devices and if that's true, deny the CHANGEUPPER.
This way, the user is not restricted in the topology but rather in the
order in which the setup is done: it must first create the bridging
domain layout and only after that add the necessary VLAN devices. The
teardown is similar: the VLAN devices will need to be
destroyed prior to a change in the bridging layout.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:50 +0000 (14:14 +0200)]
staging: dpaa2-switch: add fast-ageing on bridge leave
Upon leaving a bridge, any MAC addresses learnt on the switch port prior
to this point have to be removed so that we preserve the bridging domain
configuration.
Restructure the dpaa2_switch_port_fdb_dump() function in order to have a
common dpaa2_switch_fdb_iterate() function between the FDB dump callback
and the fast age procedure. To accomplish this, add a new callback -
dpaa2_switch_fdb_cb_t - which will be called on each MAC addr and,
depending on the situation, will either dump the FDB entry into a
netlink message or will delete the address from the FDB table, in case
of the fast-age.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:49 +0000 (14:14 +0200)]
staging: dpaa2-switch: accept only vlan-aware upper devices
The DPAA2 Switch is not capable of handling traffic in a VLAN-unaware
fashion, thus the previous handling of both the accepted upper devices
and the SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING flag was wrong.
Fix this by checking if the bridge that we are joining is indeed VLAN
aware, if not return an error. Also, the RX VLAN filtering feature is
defined as 'on [fixed]' and the .ndo_vlan_rx_add_vid() and
.ndo_vlan_rx_kill_vid() callbacks are implemented just by recreating a
switchdev_obj_port_vlan object and then calling the same functions used
on the switchdev notifier path.
In addition, changing the vlan_filtering flag to 0 on a bridge under
which a DPAA2 switch interface is present is not supported, thus
rejected when SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING is received with
such a request.
This patch is also adding the use of the switchdev_handle_port_attr_set
function so that we can iterate through all the lower devices of the
bridge that the notification was received on and actually catch if the
user is trying to change the vlan_filtering state. Since on a VLAN
filtering change the net_device is the bridge, we also move the
dpaa2_switch_port_dev_check call so that we do not return NOTIFY_DONE
right away.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:48 +0000 (14:14 +0200)]
staging: dpaa2-switch: move the notifier register to module_init()
Move the notifier blocks register into the module_init() step, instead of
object probe, so that all DPSW devices probed by the dpaa2-switch driver
can use the same notifiers.
This will enable us to have a more straightforward approach in
determining if an event is intended for an object managed by this driver
or not. Previously, the dpaa2_switch_port_dev_check() function was
forced to also check the notifier block beside the net_device_ops
structure to determine if the event is for us or not.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:47 +0000 (14:14 +0200)]
staging: dpaa2-switch: properly setup switching domains
Until now, the DPAA2 switch was not capable of properly setting up its
switching domains depending on the existence, or lack thereof, of an
upper bridge device. This meant that all switch ports of a DPSW object
were switching by default even though they were not under the same
bridge device.
Another issue was the inability to actually add the CPU to the flooding
domains (broadcast, unknown unicast etc) of a particular switch port.
This meant that a simple ping on a switch interface was not possible
since no broadcast ARP frame would actually reach the CPU queues.
This patch tries to fix exactly these problems by:
* Creating and managing a FDB table for each flooding domain. This means
that when a switch interface is not bridged it will use its own FDB
table. While in bridged mode all DPAA2 switch interfaces under the
same upper will use the same FDB table, thus leveraging the same FDB
entries.
* Adding a new MC firmware command - dpsw_set_egress_flood() - through
which the driver can setup the flooding domains as needed. For
example, when the switch interface is standalone, thus not in a
bridge with any other DPAA2 switch port, it will setup its broadcast
and unknown unicast flooding domains to only include the control
interface (the queues that reach the CPU and the driver can dequeue
from). This flooding domain changes when the interface joins a bridge
and is configured to include, beside the control interface, all other
DPAA2 switch interfaces.
We impose a minimum limit of FDB tables available equal to the number of
switch interfaces so that we guarantee that, in the maximal
configuration (all interfaces standalone), each switch port will
have a private FDB table. At the same time, we only probe DPSW objects
that have the flooding and broadcast replicators configured to be per
FDB (DPSW_*_PER_FDB). Without this, the dpaa2-switch driver would not
be able to configure multiple switching domains.
At probe time, a FDB table will be allocated for each port. At a bridge
join event, the switch port will either continue to use the current FDB
table (if it's the first dpaa2-switch port to join that bridge) or will
switch to use the FDB table associated with the port that it's already
under the bridge. If a FDB switch is necessary, the private FDB table
which was previously used will be returned to the pool of unused FDBs.
Upon a bridge leave, the switch port needs a private FDB table thus it
will search and get the first unused FDB table. This way, all the other
ports remaining under the bridge will continue to use the same FDB
table.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:46 +0000 (14:14 +0200)]
staging: dpaa2-switch: enable the control interface
Enable the CTRL_IF of the switch object, now that all the pieces are in
place (buffer and queue management, interrupts, NAPI instances etc).
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:45 +0000 (14:14 +0200)]
staging: dpaa2-switch: add .ndo_start_xmit() callback
Implement the .ndo_start_xmit() callback for the switch port interfaces.
For each of the switch ports, gather the corresponding queue
destination ID (QDID) necessary for Tx enqueueing.
We'll reserve 64 bytes for software annotations, where we keep a skb
backpointer used on the Tx confirmation side for releasing the allocated
memory. At the moment, we only support linear skbs.
Also, add support for the Tx confirmation path which for the most part
shares the code path with the normal Rx queue.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:44 +0000 (14:14 +0200)]
staging: dpaa2-switch: handle Rx path on control interface
The dpaa2-ethsw supports only one Rx queue that is shared by all switch
ports. This means that information about which port was the ingress port
for a specific frame needs to be passed in metadata. In our case, the
Flow Context (FLC) field from the frame descriptor holds this
information. Besides the interface ID of the ingress port we also
receive the virtual QDID of the port. Below is a visual description of
the 64 bits of FLC.
63            47            31            15            0
+--------------------------------------------------------+
|             |             |             |              |
|  RESERVED   |    IF_ID    |  RESERVED   |   IF QDID    |
|             |             |             |              |
+--------------------------------------------------------+
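Given that layout, extracting the two fields is plain shifting and masking,
e.g. (macro names are illustrative):

        #define FLC_IF_QDID(flc)        ((u16)((flc) & 0xffff))
        #define FLC_IF_ID(flc)          ((u16)(((flc) >> 32) & 0xffff))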
Because all switch ports share the same Rx and Tx conf queues, NAPI
management takes into account whether at least one switch interface is
open before enabling the NAPI instance.
The Rx path is common, for the most part, for both Rx and Tx conf, with
the note that each of them has its own function for consuming a frame
descriptor. Dequeueing from an FQ, consuming the dequeued store and the
NAPI poll function are common to both queues.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:43 +0000 (14:14 +0200)]
staging: dpaa2-switch: setup dpio
Setup interrupts on the control interface queues. We do not force an
exact affinity between the interrupts received from a specific queue and
a cpu.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:42 +0000 (14:14 +0200)]
staging: dpaa2-switch: setup buffer pool and RX path rings
Allocate and setup a buffer pool, needed on the Rx path of the control
interface. Also, define the Rx buffer size seen by the WRIOP from the
PAGE_SIZE buffers seeded.
Also, create the needed Rx rings for both frame queues used on the
control interface. On the Rx path, when a pull-dequeue operation is
performed on a software portal, available frame descriptors are put in a
ring - a DMA memory storage - for further usage.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:41 +0000 (14:14 +0200)]
staging: dpaa2-switch: get control interface attributes
Introduce a new structure to hold all necessary info related to an RX
queue for the control interface and populate the FQ IDs.
We only have one Rx queue and one Tx confirmation queue on the control
interface, both shared by all the switch ports.
Also, increase the minimum version of the object supported by the driver
since for basic switch driver support we'll be in need of some ABIs
added in the latest version of firmware.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:40 +0000 (14:14 +0200)]
staging: dpaa2-switch: remove obsolete .ndo_fdb_{add|del} callbacks
Since the dpaa2-switch already listens for SWITCHDEV_FDB_ADD_TO_DEVICE /
SWITCHDEV_FDB_DEL_TO_DEVICE events emitted by the bridge, we don't need
the bridge bypass operations, and now is a good time to delete them. All
'bridge fdb' commands need the 'master' flag specified now.
In fact, having the obsolete .ndo_fdb_{add|del} callbacks would even
complicate the bridge leave/join procedures without any real benefit.
Every FDB entry is installed in an FDB ID as far as the hardware is
concerned, and the dpaa2-switch ports change their FDB ID when they join
or leave a bridge. So we would need to manually delete these FDB entries
when the FDB ID changes. That's because, unlike FDB entries added
through switchdev, where the bridge automatically deletes those on
leave, there isn't anybody who will remove the static FDB entries
installed via the bridge bypass operations upon a change in the upper
device.
Note that we still need .ndo_fdb_dump though. The dpaa2-switch does not
emit any interrupts when a new address is learnt, so we cannot keep the
bridge FDB in sync with the hardware FDB. Therefore, we need this
callback to get a chance to print the FDB entries that were dynamically
learnt by our hardware.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:39 +0000 (14:14 +0200)]
staging: dpaa2-switch: fix up initial forwarding configuration done by firmware
By default, the DPSW object is configured with VLAN ID 1 in the VLAN
table, which all ports are member of. This entry in the VLAN table
selects the same FDB ID for all ports, meaning that forwarding between
ports is permitted. This is unlike the switchdev model, where each port
should operate as standalone by default.
To make the switch operate in standalone ports mode, we need the VLAN
table to select a unique FDB ID for each port. In order to do that, we
need to simply delete the VLAN 1 created automatically by firmware, and
let dpaa2_switch_port_init take over, by re-adding VLAN ID 1, but
pointing towards a unique FDB ID.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 10 Mar 2021 12:14:38 +0000 (14:14 +0200)]
staging: dpaa2-switch: remove broken learning and flooding support
This patch removes the current configuration of learning and
flooding states per switch port because they are essentially broken in
terms of integration with the switchdev APIs and the bridge
understanding of these states.
First of all, the learning state is a per switch port configuration
while the dpaa2-switch driver was using it to configure the entire
bridging domain. This is broken since the software learning state could
be out of sync with the hardware state when ports from the same bridging
domain are configured by the user with different learning parameters.
The BR_FLOOD flag has been misinterpreted as well. Instead of denoting
whether unicast traffic for which there is no FDB entry will be flooded
towards a given port, the dpaa2-switch used the flag to configure
whether or not a frame with an unknown destination received on a given
port should be flooded or not. In summary, it was used as an ingress
setting instead of an egress one.
Also, remove the unnecessary call to dpsw_if_set_broadcast() and the API
definition. The HW default is to let all switch ports flood broadcast
traffic, thus there is no need to call the API again.
Instead of trying to patch things up, just remove the support for the
moment so that we'll add it back cleanly once the driver is out of
staging.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 21:14:15 +0000 (13:14 -0800)]
Merge branch 'enetc-cleanups'
Vladimir Oltean says:
====================
Refactoring/cleanup for NXP ENETC
This series performs the following:
- makes the API for Control Buffer Descriptor Rings in enetc_cbdr.c a
bit more tightly knit.
- moves more logic into enetc_rxbd_next to make the callers simpler
- moves more logic into enetc_refill_rx_ring to make the callers simpler
- removes forward declarations
- simplifies the probe path to unify probing for used and unused PFs.
Nothing radical.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:51 +0000 (14:03 +0200)]
net: enetc: make enetc_refill_rx_ring update the consumer index
Since commit fd5736bf9f23 ("enetc: Workaround for MDIO register access
issue"), enetc_refill_rx_ring no longer updates the RX BD ring's
consumer index; that is left to be done by the caller. This has led to
bugs such as the ones found in 96a5223b918c ("net: enetc: remove bogus
write to SIRXIDR from enetc_setup_rxbdr") and 3a5d12c9be6f ("net: enetc:
keep RX ring consumer index in sync with hardware"), so it is desirable
that we move back the update of the consumer index into
enetc_refill_rx_ring.
The trouble with that is the different MDIO locking context for the two
callers of enetc_refill_rx_ring:
- enetc_clean_rx_ring runs under enetc_lock_mdio()
- enetc_setup_rxbdr runs outside enetc_lock_mdio()
Simplify the callers of enetc_refill_rx_ring by making enetc_setup_rxbdr
explicitly take enetc_lock_mdio() around the call. It will be the only
place in need of ensuring the hot accessors can be used.
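The resulting call pattern on the enetc_setup_rxbdr side is roughly (sketch):

        /* enetc_clean_rx_ring() already runs under enetc_lock_mdio(), so it
         * calls the refill helper directly.  enetc_setup_rxbdr() now wraps
         * the call itself so that the hot register accessors may be used.
         */
        enetc_lock_mdio();
        enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
        enetc_unlock_mdio();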
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:50 +0000 (14:03 +0200)]
net: enetc: remove forward declaration for enetc_map_tx_buffs
There is no other reason why this forward declaration exists rather than
poor ordering of the functions.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:49 +0000 (14:03 +0200)]
net: enetc: remove forward-declarations of enetc_clean_{rx,tx}_ring
This patch moves the NAPI enetc_poll after enetc_clean_rx_ring such that
we can delete the forward declarations.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:48 +0000 (14:03 +0200)]
net: enetc: use enum enetc_active_offloads
The active_offloads variable of enetc_ndev_priv has an enum type, use it.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:47 +0000 (14:03 +0200)]
net: enetc: simplify callers of enetc_rxbd_next
When we iterate through the BDs in the RX ring, the software producer
index (which is already passed by value to enetc_rxbd_next) lags behind,
and we end up with this funny looking "++i == rx_ring->bd_count" check
so that we drag it after us.
Let's pass the software producer index "i" by reference, so that
enetc_rxbd_next can increment it by itself (mod rx_ring->bd_count),
especially since enetc_rxbd_next has to increment the index anyway.
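Seen from a caller, the change looks roughly like this:

        /* Before: the caller advances and wraps the index by hand. */
        rxbd = enetc_rxbd_next(rx_ring, rxbd, i);
        if (unlikely(++i == rx_ring->bd_count))
                i = 0;

        /* After: the helper advances the index modulo bd_count itself. */
        rxbd = enetc_rxbd_next(rx_ring, rxbd, &i);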
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:46 +0000 (14:03 +0200)]
net: enetc: don't initialize unused ports from a separate code path
Since commit 3222b5b613db ("net: enetc: initialize RFS/RSS memories for
unused ports too") there is a requirement to initialize the memories of
unused PFs too, which has left the probe path in a bit of a rough shape,
because we basically have a minimal initialization path for unused PFs
which is separate from the main initialization path.
Now that initializing a control BD ring is as simple as calling
enetc_setup_cbdr, let's move that outside of enetc_alloc_si_resources
(unused PFs don't need classification rules, so no point in allocating
them just to free them later).
But enetc_alloc_si_resources is called both for PFs and for VFs, so now
that enetc_setup_cbdr is no longer called from this common function, it
means that the VF probe path needs to explicitly call enetc_setup_cbdr
too.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:45 +0000 (14:03 +0200)]
net: enetc: pass bd_count as an argument to enetc_setup_cbdr
It makes no sense from an API perspective to first initialize some
portion of struct enetc_cbdr outside enetc_setup_cbdr, then leave that
function to initialize the rest. enetc_setup_cbdr should be able to
perform all initialization given a zero-initialized struct enetc_cbdr.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:44 +0000 (14:03 +0200)]
net: enetc: squash clear_cbdr and free_cbdr into teardown_cbdr
All call sites call enetc_clear_cbdr and enetc_free_cbdr one after
another, so let's combine the two functions into a single method named
enetc_teardown_cbdr which does both, and in the same order.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:43 +0000 (14:03 +0200)]
net: enetc: save the mode register address inside struct enetc_cbdr
enetc_clear_cbdr depends on struct enetc_hw because it must disable the
ring through a register write. We'd like to remove that dependency, so
let's do what's already done with the producer and consumer indices,
which is to save the iomem address in a variable kept in struct enetc_cbdr.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:42 +0000 (14:03 +0200)]
net: enetc: squash enetc_alloc_cbdr and enetc_setup_cbdr
enetc_alloc_cbdr and enetc_setup_cbdr are always called one after
another, so we can simplify the callers and make enetc_setup_cbdr do
everything that's needed.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:41 +0000 (14:03 +0200)]
net: enetc: save the DMA device for enetc_free_cbdr
We shouldn't need to pass the struct device *dev to enetc CBDR APIs over
and over again, so save this inside struct enetc_cbdr::dma_dev and avoid
calling it from the enetc_free_cbdr functions.
This breaks the dependency of the cbdr API from struct enetc_si (the
station interface).
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 12:03:40 +0000 (14:03 +0200)]
net: enetc: move the CBDR API to enetc_cbdr.c
Since there is a dedicated file in this driver for interacting with
control BD rings, it makes sense to move these functions there.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 21:08:15 +0000 (13:08 -0800)]
Merge branch 'defxx-updates'
Maciej W. Rozycki says:
====================
FDDI: defxx: CSR access fixes and improvements
As a lab upgrade I have recently replaced a dated 32-bit x86 server with
a new POWER9 system. One of the purposes of the system has been providing
network based resources to clients over my FDDI network. As such the new
server has also received a new DEFPA FDDI network adapter.
As it turned out the interface did not work with the driver as shipped by
the most recent stable Debian release (Linux version 5.9.15) for ppc64el.
Symptoms were inconclusive, and the DEFPA adapter turned out to have a
manufacturing defect as well; however, eventually I figured out that the
PCIe host bridge used with the system, Power Systems Host Bridge 4 (PHB4),
does not (anymore) implement PCI I/O transactions, while the binary defxx
driver as shipped by Debian comes configured for port I/O, and then a bug
in resource handling causes the driver to try and use an unassigned port
I/O range for the adapter's PDQ main ASIC's CSR access.
Fortunately the PFI PCI interface ASIC used with the DEFPA adapter has
been designed such as to provide for both PCI I/O and PCI memory accesses
to be used for PDQ CSR access, via a pair of BARs to be alternatively
used.
Originally the defxx driver only supported port I/O access, but in the
course of interfacing it to the TURBOchannel bus I had to implement MMIO
access too, and while at it I have added a kernel configuration option to
globally switch between port I/O and MMIO at compilation time, however
conservatively defaulting to port I/O for EISA bus support where the use
of MMIO currently requires the adapter to have been suitably configured
via ECU (EISA Configuration Utility), supplied externally.
With the kernel configuration option set to MMIO the DEFPA interface
works correctly with my POWER9 system. Therefore I have prepared this
small patch series consisting of a pair of conservative bug fixes, to be
backported to stable branches, and then a pair of improvements for the
robustness of the driver.
So changes 1/4 and 2/4 apply both to net and net-next, and then changes
3/4 and 4/4 apply on top of them to net-next only. In particular there
are diff context dependencies going like this: 1/4 -> 3/4 -> 4/4. Let me
know if this submission needs to be sorted differently.
See individual change descriptions for further details as to the actual
changes made.
NB the ESIC interface chip used for slave address decoding with the DEFEA
EISA adapter has decoding implemented for address bits 31:10 and therefore
supports full 32-bit range for the allocation of the CSR decoding window.
For DOS compatibility reasons ECU however only allows allocations between
0x000c0000 and 0x000effff.
Given that for other compatibility reasons EISA is subtractively decoded
on mixed PCI/EISA systems we could allocate an MMIO region from arbitrary
unoccupied memory space and program the ESIC suitably without regard for
that compatibility limitation. In fact I have a proof-of-concept change
and it seems to work reliably.
However with these patches applied the driver continues supporting port
I/O as fallback and the EISA product ID register is located in the EISA
slot-specific port I/O address space, so any EISA system however modern
(sounds like a joke, eh?) also has to support port I/O access somehow.
So while I think such a dynamic MMIO allocation would be an example of
good engineering, it would require changes to our EISA core and
therefore it may have made sense 25 years ago when EISA was still
mainstream, but not nowadays when EISA systems are, I suppose, more of a
curiosity than the usual equipment.
This patch series has been thoroughly verified with Linux 5.11.0 as
released and then a Raptor Talos II POWER9 system and a Malta 5Kc MIPS64
system for PCI DEFPA adapter support, an Advanced Integrated Research
486EI x86 system for EISA DEFEA adapter support, and a Digital Equipment
DECstation 5000 model 260 MIPS III system for TURBOchannel DEFTA adapter
support, covering both port I/O and MMIO operation where applicable.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:03:24 +0000 (13:03 +0100)]
FDDI: defxx: Use driver's name with resource requests
Replace repeated "defxx" strings with a reference to the DRV_NAME macro
and then use the driver's name rather that the bus address with resource
requests so as to have contents of /proc/iomem and /proc/ioports more
meaningful to the user, in line with what drivers usually do.
So rather than say:
5000-50ff : DEC FDDIcontroller/EISA Adapter
5000-503f : 00:05
5040-5043 : 00:05
5400-54ff : DEC FDDIcontroller/EISA Adapter
5800-58ff : DEC FDDIcontroller/EISA Adapter
5c00-5cff : DEC FDDIcontroller/EISA Adapter
5c80-5cbf : 00:05
or:
620c080020000-620c08002007f : 0031:02:04.0
620c080020000-620c08002007f : 0031:02:04.0
620c080030000-620c08003ffff : 0031:02:04.0
or:
1f100000-1f10003f : tc2
we report:
5000-50ff : DEC FDDIcontroller/EISA Adapter
5000-503f : defxx
5040-5043 : defxx
5400-54ff : DEC FDDIcontroller/EISA Adapter
5800-58ff : DEC FDDIcontroller/EISA Adapter
5c00-5cff : DEC FDDIcontroller/EISA Adapter
5c80-5cbf : defxx
and:
620c080020000-620c08002007f : 0031:02:04.0
620c080020000-620c08002007f : defxx
620c080030000-620c08003ffff : 0031:02:04.0
and:
1f100000-1f10003f : defxx
respectively for the DEFEA (EISA), DEFPA (PCI), and DEFTA (TURBOchannel)
adapters.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:03:19 +0000 (13:03 +0100)]
FDDI: defxx: Implement dynamic CSR I/O address space selection
Recent versions of the PCI Express specification have deprecated support
for I/O transactions and actually some PCIe host bridges, such as Power
Systems Host Bridge 4 (PHB4), do not implement them. Conversely a DEFEA
adapter can have its MMIO decoding disabled with ECU (EISA Configuration
Utility) and therefore not available for us with the resource allocation
infrastructure we implement.
However either I/O address space will always be available for use with
the DEFEA (EISA) and DEFPA (PCI) adapters and both have double address
decoding implemented in hardware for Control and Status Register access.
The two kinds of adapters can be present both at once in a single mixed
PCI/EISA system. For the DEFTA (TURBOchannel) variant there is no issue
as there has been no port I/O address space defined for that bus.
To make people's life easier and the driver more robust remove the
DEFXX_MMIO configuration option so as to rather than making the choice
for the I/O address space to use at build time for all the adapters
installed in the system let the driver choose the most suitable address
space dynamically on a case-by-case basis at run time. Make MMIO the
default and resort to port I/O should the default fail for some reason.
This way multiple adapters installed in one system can use different I/O
address spaces each, in particular in the presence of DEFEA adapters in
a pure-EISA or a mixed EISA/PCI system (it is expected that DEFPA boards
will use MMIO in normal circumstances).
The choice of the I/O address space to use continues being reported by
the driver on startup, e.g.:
eisa 00:05: EISA: slot 5: DEC3002 detected
defxx: v1.12 2021/03/10 Lawrence V. Stefani and others
00:05: DEFEA at I/O addr = 0x5000, IRQ = 10, Hardware addr = 00-00-f8-c8-b3-b6
00:05: registered as fddi0
and:
defxx: v1.12 2021/03/10 Lawrence V. Stefani and others
0031:02:04.0: DEFPA at MMIO addr = 0x620c080020000, IRQ = 57, Hardware addr = 00-60-6d-93-91-98
0031:02:04.0: registered as fddi0
and:
defxx: v1.12 2021/03/10 Lawrence V. Stefani and others
tc2: DEFTA at MMIO addr = 0x1f100000, IRQ = 21, Hardware addr = 08-00-2b-b0-8b-1e
tc2: registered as fddi0
so there is no need to add further information.
The change is supposed to cause a negligible performance hit as I/O
accessors will now have code executed conditionally at run time.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:03:14 +0000 (13:03 +0100)]
FDDI: defxx: Make MMIO the configuration default except for EISA
Recent versions of the PCI Express specification have deprecated support
for I/O transactions and actually some PCIe host bridges, such as Power
Systems Host Bridge 4 (PHB4), do not implement them.
The default kernel configuration choice for the defxx driver is the use
of I/O ports rather than MMIO for PCI and EISA systems. It may have
made sense as a conservative backwards compatible choice back when MMIO
operation support was added to the driver as a part of TURBOchannel bus
support. However nowadays this configuration choice makes the driver
unusable with systems that do not implement I/O transactions for PCIe.
Make DEFXX_MMIO the configuration default then, except where configured
for EISA. This exception is because an EISA adapter can have its MMIO
decoding disabled with ECU (EISA Configuration Utility) and therefore
not available with the resource allocation infrastructure we implement,
while port I/O is always readily available as it uses slot-specific
addressing, directly mapped to the slot an option card has been placed
in and handled with our EISA bus support core. Conversely a kernel that
supports modern systems which may not have I/O transactions implemented
for PCIe will usually not be expected to handle legacy EISA systems.
The change of the default will make it easier for people, including but
not limited to distribution packagers, to make a working choice for the
driver.
Update the option description accordingly and while at it replace the
potentially ambiguous PIO acronym with IOP for "port I/O" vs "I/O ports"
according to our nomenclature used elsewhere.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: e89a2cfb7d7b ("[TC] defxx: TURBOchannel support")
Cc: stable@vger.kernel.org # v2.6.21+
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:03:09 +0000 (13:03 +0100)]
FDDI: defxx: Bail out gracefully with unassigned PCI resource for CSR
Recent versions of the PCI Express specification have deprecated support
for I/O transactions and actually some PCIe host bridges, such as Power
Systems Host Bridge 4 (PHB4), do not implement them.
For those systems the PCI BARs that request a mapping in the I/O space
have the length recorded in the corresponding PCI resource set to zero,
which makes it unassigned:
# lspci -s 0031:02:04.0 -v
0031:02:04.0 FDDI network controller: Digital Equipment Corporation PCI-to-PDQ Interface Chip [PFI] FDDI (DEFPA) (rev 02)
Subsystem: Digital Equipment Corporation FDDIcontroller/PCI (DEFPA)
Flags: bus master, medium devsel, latency 136, IRQ 57, NUMA node 8
Memory at 620c080020000 (32-bit, non-prefetchable) [size=128]
I/O ports at <unassigned> [disabled]
Memory at 620c080030000 (32-bit, non-prefetchable) [size=64K]
Capabilities: [50] Power Management version 2
Kernel driver in use: defxx
Kernel modules: defxx
#
Regardless the driver goes ahead and requests it (here observed with a
Raptor Talos II POWER9 system), resulting in an odd /proc/ioport entry:
# cat /proc/ioports
00000000-ffffffffffffffff : 0031:02:04.0
#
Furthermore, the system gets confused as the driver actually continues
and pokes at those locations, causing a flood of messages being output
to the system console by the underlying system firmware, like:
defxx: v1.11 2014/07/01 Lawrence V. Stefani and others
defxx 0031:02:04.0: enabling device (0140 -> 0142)
LPC[000]: Got SYNC no-response error. Error address reg: 0xd0010000
IPMI: dropping non severe PEL event
LPC[000]: Got SYNC no-response error. Error address reg: 0xd0010014
IPMI: dropping non severe PEL event
LPC[000]: Got SYNC no-response error. Error address reg: 0xd0010014
IPMI: dropping non severe PEL event
and so on and so on (possibly intermixed actually, as there's no locking
between the kernel and the firmware in console port access with this
particular system, but cleaned up above for clarity), and once some 10k
of such pairs of the latter two messages have been produced an interface
eventually shows up in a useless state:
0031:02:04.0: DEFPA at I/O addr = 0x0, IRQ = 57, Hardware addr = 00-00-00-00-00-00
This was not expected to happen as resource handling was added to the
driver a while ago, because it was not known at that time that a PCI
system would be possible that cannot assign port I/O resources, and
oddly enough `request_region' does not fail, which would have caught it.
Correct the problem then by checking the CSR resource for a length of
zero and bailing out gracefully, refusing to register an interface if
that turns out to be the case, producing messages like:
defxx: v1.11 2014/07/01 Lawrence V. Stefani and others
0031:02:04.0: Cannot use I/O, no address set, aborting
0031:02:04.0: Recompile driver with "CONFIG_DEFXX_MMIO=y"
Keep the original check for the EISA MMIO resource as implemented,
because in that case the length is hardwired to 0x400 as a consequence
of how the compare/mask address decoding works in the ESIC chip and it
is only the base address that is set to zero if MMIO has been disabled
for the adapter in EISA configuration, which in turn could be a valid
bus address in a legacy-free system implementing PCI, especially for
port I/O.
Where the EISA MMIO resource has been disabled for the adapter in EISA
configuration this arrangement keeps producing messages like:
eisa 00:05: EISA: slot 5: DEC3002 detected
defxx: v1.11 2014/07/01 Lawrence V. Stefani and others
00:05: Cannot use MMIO, no address set, aborting
00:05: Recompile driver with "CONFIG_DEFXX_MMIO=n"
00:05: Or run ECU and set adapter's MMIO location
with the last two lines now swapped for easier handling in the driver.
There is no need to check for and catch the case of a port I/O resource
not having been assigned for EISA as the adapter uses the slot-specific
I/O space, which gets assigned by how EISA has been specified and maps
directly to the particular slot an option card has been placed in. And
the EISA variant of the adapter has additional registers that are only
accessible via the port I/O space anyway.
While at it factor out the error message calls into helpers and fix an
argument order bug with the `pr_err' call now in `dfx_register_res_err'.
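As a rough illustration of the check described above (a hedged sketch based
on the message's wording, not the literal defxx probe code; names such as
dfx_use_mmio and bar_len are illustrative):

	/* Bail out if the BAR used for the CSRs was left unassigned, e.g. on
	 * PCIe hosts that do not implement I/O transactions. */
	bar_len = pci_resource_len(pdev, dfx_use_mmio ? 0 : 1);
	if (bar_len == 0) {
		pr_err("%s: Cannot use %s, no address set, aborting\n",
		       dev_name(&pdev->dev), dfx_use_mmio ? "MMIO" : "I/O");
		pr_err("%s: Recompile driver with \"CONFIG_DEFXX_MMIO=%s\"\n",
		       dev_name(&pdev->dev), dfx_use_mmio ? "n" : "y");
		return -ENXIO;
	}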
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: 4d0438e56a8f ("defxx: Clean up DEFEA resource management")
Cc: stable@vger.kernel.org # v3.19+
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 21:04:58 +0000 (13:04 -0800)]
Merge branch 'mlxsw-misc-updates'
Ido Schimmel says:
====================
mlxsw: Misc updates
This patch set contains miscellaneous updates for mlxsw.
Patches #1-#2 reword an extack message to make it clearer and fix a
comment.
Patch #3 bumps the minimum firmware version enforced by mlxsw. This is
needed for two upcoming features: Resilient hashing and per-flow
sampling.
Patches #4-#6 improve the information reported via devlink-health for
'fw_fatal' events.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Wed, 10 Mar 2021 11:02:20 +0000 (13:02 +0200)]
mlxsw: Adjust some MFDE fields shift and size to fw implementation
MFDE.irisc_id and MFDE.event_id were adjusted according to what is
actually implemented in firmware.
Adjust the shift and size of these fields in mlxsw as well.
Note that the displacement of the first field is not a regression.
It was always incorrect and therefore reported "0".
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Wed, 10 Mar 2021 11:02:19 +0000 (13:02 +0200)]
mlxsw: core: Expose MFDE.log_ip to devlink health
Add the MFDE.log_ip field to devlink health reporter in order to ease
firmware debug. This field encodes the instruction pointer that triggered
the CR space timeout.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Wed, 10 Mar 2021 11:02:18 +0000 (13:02 +0200)]
mlxsw: reg: Extend MFDE register with new log_ip field
Extend MFDE (Monitoring FW Debug) register with new field specifying the
instruction pointer that triggered the CR space timeout.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Wed, 10 Mar 2021 11:02:17 +0000 (13:02 +0200)]
mlxsw: spectrum: Bump minimum FW version to xx.2008.2406
The indicated version fixes the following two issues:
- MIRROR_SAMPLER_ACTION.mirror_probability_rate inverted. This has
implications for per-flow sampling.
- When adjacency is replaced-if-inactive (RATR.opcode=3), bad parameter
was reported when replacing an active entry. This breaks offload of
resilient next-hop groups.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Wed, 10 Mar 2021 11:02:16 +0000 (13:02 +0200)]
mlxsw: reg: Fix comment about slot_index field in PMAOS register
The comment did not include the register name.
Add `pmaos` to align the comment with other comments.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Wed, 10 Mar 2021 11:02:15 +0000 (13:02 +0200)]
mlxsw: spectrum: Reword an error message for Q-in-Q veto
'Uppers' is not clear enough for all users when referring to upper
devices.
Reword the error message so it will be clearer.
Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shubhankar Kuranagatti [Wed, 10 Mar 2021 20:33:14 +0000 (02:03 +0530)]
net: ipv6: route.c:fix indentation
Sequences of spaces have been replaced with tabs wherever required.
Signed-off-by: Shubhankar Kuranagatti <shubhankarvk@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Wed, 10 Mar 2021 14:50:44 +0000 (16:50 +0200)]
net: add a helper to avoid issues with HW TX timestamping and SO_TXTIME
As explained in commit 29d98f54a4fe ("net: enetc: allow hardware
timestamping on TX queues with tc-etf enabled"), hardware TX
timestamping requires an skb with skb->tstamp = 0. When a packet is sent
with SO_TXTIME, the skb->skb_mstamp_ns corrupts the value of skb->tstamp,
so the drivers need to explicitly reset skb->tstamp to zero after
consuming the TX time.
Create a helper named skb_txtime_consumed() which does just that. All
drivers which offload TC_SETUP_QDISC_ETF should implement it, and it
would make it easier to assess during review whether they do the right
thing in order to be compatible with hardware timestamping or not.
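A minimal sketch of what such a helper could look like (the exact placement
and body in the tree may differ; this only illustrates "reset skb->tstamp to
zero after consuming the TX time"):

	static inline void skb_txtime_consumed(struct sk_buff *skb)
	{
		/* The launch time has been handed to the hardware; clear the
		 * timestamp so HW TX timestamping still works for this skb. */
		skb->tstamp = ktime_set(0, 0);
	}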
Suggested-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:02:54 +0000 (13:02 +0100)]
FDDI: defza: Update my e-mail address
Following the recent update to MAINTAINERS update my e-mail address.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:02:49 +0000 (13:02 +0100)]
FDDI: defxx: Update my e-mail address
Following the recent update to MAINTAINERS update my e-mail address.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maciej W. Rozycki [Wed, 10 Mar 2021 12:02:42 +0000 (13:02 +0100)]
FDDI: if_fddi.h: Update my e-mail address
Following the recent update to MAINTAINERS update my e-mail address.
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 10 Mar 2021 10:33:20 +0000 (12:33 +0200)]
sched: act_sample: Implement stats_update callback
Implement this callback in order to get the offloaded stats added to the
kernel stats.
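A hedged sketch of such a callback (the argument list is modeled on other tc
actions and may not match the tree exactly):

	static void tcf_sample_stats_update(struct tc_action *a, u64 bytes,
					    u64 packets, u64 drops,
					    u64 lastuse, bool hw)
	{
		struct tcf_sample *s = to_sample(a);

		/* fold the hardware-reported counters into the action stats */
		tcf_action_update_stats(a, bytes, packets, drops, hw);
		s->tcf_tm.lastuse = max_t(u64, s->tcf_tm.lastuse, lastuse);
	}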
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yang Li [Wed, 10 Mar 2021 08:53:04 +0000 (16:53 +0800)]
isdn: mISDN: remove unneeded variable 'ret'
Fix the following coccicheck warning:
./drivers/isdn/mISDN/dsp_core.c:956:6-9: Unneeded variable: "err".
Return "0" on line 1001
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rafał Miłecki [Wed, 10 Mar 2021 08:48:13 +0000 (09:48 +0100)]
net: broadcom: bcm4908_enet: read MAC from OF
BCM4908 devices have their MAC address accessible via NVMEM, so use the
OF helper to read it.
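A hedged sketch of the probe-time change (the of_get_mac_address() error
convention assumed here is the one from this kernel era and may differ):

	/* Read the MAC address from DT / NVMEM; fall back to a random one. */
	mac = of_get_mac_address(dev->of_node);
	if (!IS_ERR(mac))
		ether_addr_copy(netdev->dev_addr, mac);
	else
		eth_hw_addr_random(netdev);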
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Wed, 10 Mar 2021 08:28:58 +0000 (16:28 +0800)]
skbuff: remove some unnecessary operation in skb_segment_list()
gro list uses skb_shinfo(skb)->frag_list to link two skbs together,
and NAPI_GRO_CB(p)->last->next is used when there are more skbs,
see skb_gro_receive_list(). gso expects each segmented skb to be
linked together using skb->next, so only the first skb->next needs to
be set to skb_shinfo(skb)->frag_list when doing gso list segmentation.
For the same reason, nskb->next does not need to be set to list_skb
before going to the error handling, because nskb->next already
points to list_skb.
And nskb is also the last skb at the end of the loop, so remove the
tail variable and use nskb instead.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 06:07:01 +0000 (00:07 -0600)]
qed: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix multiple
warnings by explicitly adding a couple of break statements instead of
just letting the code fall through to the next case.
Link: https://github.com/KSPP/linux/issues/115
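A generic before/after illustration of this class of change (hypothetical
code, not the actual qed sources):

	switch (mode) {
	case MODE_A:
		setup_a();
		break;		/* explicit break added; previously fell through */
	default:
		setup_default();
		break;
	}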
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:45:22 +0000 (23:45 -0600)]
net: plip: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix multiple
warnings by explicitly adding multiple break statements instead of
letting the code fall through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:43:45 +0000 (23:43 -0600)]
net: rose: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix multiple
warnings by explicitly adding multiple break statements instead of
letting the code fall through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:42:43 +0000 (23:42 -0600)]
net: core: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:41:15 +0000 (23:41 -0600)]
net: bridge: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:39:41 +0000 (23:39 -0600)]
net: ax25: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:38:11 +0000 (23:38 -0600)]
decnet: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:37:02 +0000 (23:37 -0600)]
net: cassini: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of just letting the code
fall through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:34:53 +0000 (23:34 -0600)]
net: 3c509: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of just letting the code
fall through to the next case.
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:33:16 +0000 (23:33 -0600)]
net: mscc: ocelot: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of just letting the code
fall through to the next case.
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Wed, 10 Mar 2021 05:30:20 +0000 (23:30 -0600)]
net: fddi: skfp: smt: Replace one-element array with flexible-array member
There is a regular need in the kernel to provide a way to declare having
a dynamically sized set of trailing elements in a structure. Kernel code
should always use “flexible array members”[1] for these cases. The older
style of one-element or zero-length arrays should no longer be used[2].
Refactor the code according to the use of flexible-array members in
smt_sif_operation structure, instead of one-element arrays. Also, make
use of the struct_size() helper instead of the open-coded version
to calculate the size of the struct-with-flex-array. Additionally, make
use of the typeof operator to properly determine the object type to be
passed to macro smtod().
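An abridged, hypothetical illustration of the conversion pattern (the real
smt_sif_operation has more members than shown; ports stands for the number
of trailing lem[] entries):

	struct smt_sif_operation {
		u16			num_phys;
		struct smt_p_lem	lem[];	/* was: struct smt_p_lem lem[1]; */
	};

	/* struct_size() replaces the open-coded header-plus-elements math */
	struct smt_sif_operation *sif;

	sif = kzalloc(struct_size(sif, lem, ports), GFP_KERNEL);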
Also, this helps the ongoing efforts to enable -Warray-bounds by fixing
the following warnings:
CC [M] drivers/net/fddi/skfp/smt.o
drivers/net/fddi/skfp/smt.c: In function ‘smt_send_sif_operation’:
drivers/net/fddi/skfp/smt.c:1084:30: warning: array subscript 1 is above array bounds of ‘struct smt_p_lem[1]’ [-Warray-bounds]
1084 | smt_fill_lem(smc,&sif->lem[i],i) ;
| ~~~~~~~~^~~
In file included from drivers/net/fddi/skfp/h/smc.h:42,
from drivers/net/fddi/skfp/smt.c:15:
drivers/net/fddi/skfp/h/smt.h:767:19: note: while referencing ‘lem’
767 | struct smt_p_lem lem[1] ; /* phy lem status */
| ^~~
(the same -Warray-bounds warning, include trace, and note about 'lem' are
repeated several more times for the same sif->lem[i] access and are omitted
here)
[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.10/process/deprecated.html#zero-length-and-one-element-arrays
Link: https://github.com/KSPP/linux/issues/79
Link: https://github.com/KSPP/linux/issues/109
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yejune Deng [Wed, 10 Mar 2021 03:23:43 +0000 (11:23 +0800)]
net/rds: Drop duplicate sin and sin6 assignments
There is no need to assign msg->msg_name to sin or sin6, because the
DECLARE_SOCKADDR statement already does that.
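For reference, DECLARE_SOCKADDR declares and initializes the pointer from
its third argument, so after

	DECLARE_SOCKADDR(struct sockaddr_in *, sin, msg->msg_name);

a separate "sin = msg->msg_name;" assignment is redundant.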
Signed-off-by: Yejune Deng <yejune.deng@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wang Qing [Wed, 10 Mar 2021 03:06:46 +0000 (11:06 +0800)]
net: ethernet: chelsio: fix spelling typo of 'rewriteing'
rewriteing -> rewriting
Signed-off-by: Wang Qing <wangqing@vivo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wang Qing [Wed, 10 Mar 2021 03:06:03 +0000 (11:06 +0800)]
drivers: isdn: mISDN: fix spelling typo of 'wheter'
wheter -> whether
Signed-off-by: Wang Qing <wangqing@vivo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xuan Zhuo [Wed, 10 Mar 2021 02:24:45 +0000 (10:24 +0800)]
virtio-net: support XDP when not more queues
The number of queues implemented by many virtio backends is limited,
especially on machines with a large number of CPUs. In this case, it
is often impossible to allocate a separate queue for
XDP_TX/XDP_REDIRECT, so XDP cannot be loaded at all, even if it does
not use XDP_TX/XDP_REDIRECT.
This patch allows XDP_TX/XDP_REDIRECT to run by reusing an existing SQ
with __netif_tx_lock() held when there are not enough queues.
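A hedged sketch of the idea (virtio_net field names are illustrative): when
no dedicated XDP queue is available, an existing send queue is picked per
CPU and XDP transmission is serialized with the regular TX path via the
netdev TX queue lock:

	sq = &vi->sq[smp_processor_id() % vi->curr_queue_pairs];
	txq = netdev_get_tx_queue(vi->dev, sq - vi->sq);
	__netif_tx_lock(txq, smp_processor_id());
	/* ... transmit the XDP frame(s) on sq ... */
	__netif_tx_unlock(txq);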
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Wed, 10 Mar 2021 01:51:35 +0000 (17:51 -0800)]
net: socket: use BIT() for MSG_*
The bit masks for MSG_* are a little confusing here. Replace them
with BIT() to make them easier to read.
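An illustration of the style change for a few of the flags (the values shown
are the standard MSG_* ones):

	#define MSG_OOB		BIT(0)	/* was 1 */
	#define MSG_PEEK	BIT(1)	/* was 2 */
	#define MSG_DONTROUTE	BIT(2)	/* was 4 */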
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 02:07:05 +0000 (18:07 -0800)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2021-03-09
The following pull-request contains BPF updates for your *net-next* tree.
We've added 90 non-merge commits during the last 17 day(s) which contain
a total of 114 files changed, 5158 insertions(+), 1288 deletions(-).
The main changes are:
1) Faster bpf_redirect_map(), from Björn.
2) skmsg cleanup, from Cong.
3) Support for floating point types in BTF, from Ilya.
4) Documentation for sys_bpf commands, from Joe.
5) Support for sk_lookup in bpf_prog_test_run, from Lorenz.
6) Enable task local storage for tracing programs, from Song.
7) bpf_for_each_map_elem() helper, from Yonghong.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Wed, 10 Mar 2021 01:15:56 +0000 (17:15 -0800)]
Merge git://git./linux/kernel/git/netdev/net
Pull networking fixes from David Miller:
1) Fix transmissions in dynamic SMPS mode in ath9k, from Felix Fietkau.
2) TX skb error handling fix in mt76 driver, also from Felix.
3) Fix BPF_FETCH atomic in x86 JIT, from Brendan Jackman.
4) Avoid double free of percpu pointers when freeing a cloned bpf prog.
From Cong Wang.
5) Use correct printf format for dma_addr_t in ath11k, from Geert
Uytterhoeven.
6) Fix resolve_btfids build with older toolchains, from Kun-Chuan
Hsieh.
7) Don't report truncated frames to mac80211 in mt76 driver, from
Lorenzo Bianconi.
8) Fix watchdog timeout on suspend/resume of stmmac, from Joakim Zhang.
9) mscc ocelot needs NET_DEVLINK select in Kconfig, from Arnd Bergmann.
10) Fix sign comparison bug in TCP_ZEROCOPY_RECEIVE getsockopt(), from
Arjun Roy.
11) Ignore routes with deleted nexthop object in mlxsw, from Ido
Schimmel.
12) Need to undo tcp early demux lookup sometimes in nf_nat, from
Florian Westphal.
13) Fix gro aggregation for udp encaps with zero csum, from Daniel
Borkmann.
14) Make sure to always use icmp*_ndo_send when necessary, from Jason A.
Donenfeld.
15) Fix TRSCER masks in sh_eth driver from Sergey Shtylyov.
16) Prevent overly huge skb allocations in qrtr, from Pavel Skripkin.
17) Prevent rx ring consumer index loss of sync in enetc, from Vladimir
Oltean.
18) Make sure textsearch control block is large enough, from Willem de
Bruijn.
19) Revert MAC changes to r8152 leading to instability, from Hayes Wang.
20) Advance iov in 9p even for empty reads, from Jisheng Zhang.
21) Double hook unregister in nftables, from Pablo Neira Ayuso.
22) Fix memleak in ixgbe, from Dinghao Liu.
23) Avoid dups in pkt scheduler class dumps, from Maximilian Heyne.
24) Various mptcp fixes from Florian Westphal, Paolo Abeni, and Geliang
Tang.
25) Fix DOI refcount bugs in cipso, from Paul Moore.
26) One too many irqsave in ibmvnic, from Junlin Yang.
27) Fix infinite loop with MPLS gso segmenting via virtio_net, from
Balazs Nemeth.
* git://git.kernel.org:/pub/scm/linux/kernel/git/netdev/net: (164 commits)
s390/qeth: fix notification for pending buffers during teardown
s390/qeth: schedule TX NAPI on QAOB completion
s390/qeth: improve completion of pending TX buffers
s390/qeth: fix memory leak after failed TX Buffer allocation
net: avoid infinite loop in mpls_gso_segment when mpls_hlen == 0
net: check if protocol extracted by virtio_net_hdr_set_proto is correct
net: dsa: xrs700x: check if partner is same as port in hsr join
net: lapbether: Remove netif_start_queue / netif_stop_queue
atm: idt77252: fix null-ptr-dereference
atm: uPD98402: fix incorrect allocation
atm: fix a typo in the struct description
net: qrtr: fix error return code of qrtr_sendmsg()
mptcp: fix length of ADD_ADDR with port sub-option
net: bonding: fix error return code of bond_neigh_init()
net: enetc: allow hardware timestamping on TX queues with tc-etf enabled
net: enetc: set MAC RX FIFO to recommended value
net: davicom: Use platform_get_irq_optional()
net: davicom: Fix regulator not turned off on driver removal
net: davicom: Fix regulator not turned off on failed probe
net: dsa: fix switchdev objects on bridge master mistakenly being applied on ports
...
Linus Torvalds [Wed, 10 Mar 2021 01:08:41 +0000 (17:08 -0800)]
Merge git://git./linux/kernel/git/davem/sparc
Pull sparc fixes from David Miller:
"Fix opcode filtering for exceptions, and clean up defconfig"
* git://git.kernel.org:/pub/scm/linux/kernel/git/davem/sparc:
sparc: sparc64_defconfig: remove duplicate CONFIGs
sparc64: Fix opcode filtering in handling of no fault loads
Corentin Labbe [Mon, 8 Mar 2021 09:51:26 +0000 (09:51 +0000)]
sparc: sparc64_defconfig: remove duplicate CONFIGs
After my patch, CONFIG_ATA is defined twice. Remove the duplicate.
The same problem exists for CONFIG_HAPPYMEAL, except that one I added
as builtin for boot testing with NFS.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: a57cdeb369ef ("sparc: sparc64_defconfig: add necessary configs for qemu")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rob Gardner [Mon, 1 Mar 2021 05:48:16 +0000 (22:48 -0700)]
sparc64: Fix opcode filtering in handling of no fault loads
is_no_fault_exception() has two bugs which were discovered via random
opcode testing with stress-ng. Both are caused by improper filtering
of opcodes.
The first bug can be triggered by a floating point store with a no-fault
ASI, for instance "sta %f0, [%g0] #ASI_PNF", opcode C1A01040.
The code first tests op3[5] (0x1000000), which denotes a floating
point instruction, and then tests op3[2] (0x200000), which denotes a
store instruction. But these bits are not mutually exclusive, and the
above mentioned opcode has both bits set. The intent is to filter out
stores, so the test for stores must be done first in order to have
any effect.
The second bug can be triggered by a floating point load with one of
the invalid ASI values 0x8e or 0x8f, which pass this check in
is_no_fault_exception():
if ((asi & 0xf2) == ASI_PNF)
An example instruction is "ldqa [%l7 + %o7] #ASI 0x8f, %f38", opcode
CF95D1EF. ASI values greater than 0x8b (ASI_SNFL) are fatal
in handle_ldf_stq(), and is_no_fault_exception() must not allow these
invalid asi values to make it that far.
In both of these cases, handle_ldf_stq() reacts by calling
sun4v_data_access_exception() or spitfire_data_access_exception(),
which call is_no_fault_exception() and results in an infinite
recursion.
Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Mar 2021 00:14:54 +0000 (16:14 -0800)]
Merge branch 's390-qeth-fixes'
Julian Wiedmann says:
====================
s390/qeth: fixes 2021-03-09
please apply the following patch series to netdev's net tree.
This brings one fix for a memleak in an error path of the setup code.
Also several fixes for dealing with pending TX buffers - two for old
bugs in their completion handling, and one recent regression in a
teardown path.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 9 Mar 2021 16:52:21 +0000 (17:52 +0100)]
s390/qeth: fix notification for pending buffers during teardown
The cited commit reworked the state machine for pending TX buffers.
In qeth_iqd_tx_complete() it turned PENDING into a transient state, and
uses NEED_QAOB for buffers that get parked while waiting for their QAOB
completion.
But it missed adjusting the check in qeth_tx_complete_buf(). So if
qeth_tx_complete_pending_bufs() is called during teardown to drain
the parked TX buffers, we no longer raise a notification for af_iucv.
Instead of updating the checked state, just move this code into
qeth_tx_complete_pending_bufs() itself. This also gets rid of the
special-case in the common TX completion path.
Fixes: 8908f36d20d8 ("s390/qeth: fix af_iucv notification race")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 9 Mar 2021 16:52:20 +0000 (17:52 +0100)]
s390/qeth: schedule TX NAPI on QAOB completion
When a QAOB notifies us that a pending TX buffer has been delivered, the
actual TX completion processing by qeth_tx_complete_pending_bufs()
is done within the context of a TX NAPI instance. We shouldn't rely on
this instance being scheduled by some other TX event, but just do it
ourselves.
qeth_qdio_handle_aob() is called from qeth_poll(), i.e. our main NAPI
instance. To avoid touching the TX queue's NAPI instance
before/after it is (un-)registered, reorder the code in qeth_open()
and qeth_stop() accordingly.
Fixes: 0da9581ddb0f ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Tue, 9 Mar 2021 16:52:19 +0000 (17:52 +0100)]
s390/qeth: improve completion of pending TX buffers
The current design attaches a pending TX buffer to a custom
single-linked list, which is anchored at the buffer's slot on the
TX ring. The buffer is then checked for final completion whenever
this slot is processed during a subsequent TX NAPI poll cycle.
But if there's insufficient traffic on the ring, we might never make
enough progress to get back to this ring slot and discover the pending
buffer's final TX completion. This is a particular problem when the
missing TX completion blocks the application from sending further
traffic.
So convert the custom single-linked list code to a per-queue list_head,
and scan this list on every TX NAPI cycle.
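A hedged sketch of the conversion (field and helper names are illustrative,
not the literal qeth code):

	/* per-queue anchor, set up when the queue is initialized */
	INIT_LIST_HEAD(&queue->pending_bufs);

	/* when a TX buffer goes pending, park it on the queue's list */
	list_add_tail(&buf->list_entry, &queue->pending_bufs);

	/* on every TX NAPI poll, scan the whole list for final completion */
	list_for_each_entry_safe(buf, tmp, &queue->pending_bufs, list_entry) {
		if (qaob_completed(buf)) {	/* hypothetical completion check */
			list_del(&buf->list_entry);
			/* ... finish and release the buffer ... */
		}
	}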
Fixes: 0da9581ddb0f ("qeth: exploit asynchronous delivery of storage blocks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>