David S. Miller [Tue, 2 Jul 2019 02:34:46 +0000 (19:34 -0700)]
Merge branch 'blackhole-device-to-invalidate-dst'
Mahesh Bandewar says:
====================
blackhole device to invalidate dst
When we invalidate a dst or mark it "dead", we assign 'lo' to
dst->dev. First of all, this assignment is racy, and moreover it
has MTU implications.
The standard device MTU is 1500 while the loopback MTU is 64k. The
TCP code does not check whether the dst is still valid when
dereferencing it. So TCP, when dereferencing a dead dst while
negotiating a new connection, may end up using the dst device,
which is 'lo', instead of the correct device. Consider the
following scenario:
A SYN arrives on an interface and the TCP layer, while processing
the SYNACK, finds a dst and associates it with the SYNACK skb. Now,
before the skb gets passed to L3 for processing, if that dst goes
"dead" (because the virtual device disappeared and then reappeared),
'lo' gets assigned to that dst (lo MTU = 64k). Let's assume the SYN
has ADV_MSS set to 9k while the output device through which this
SYNACK is going to go out has a standard MTU of 1500.
The MTU check during the route check passes since MIN(9k, 64k)
is 9k, and TCP successfully negotiates a 9k MSS. The subsequent,
larger data packet gets passed to the device without being marked
for GSO, since the assumed MTU of the device is 9k.
This can crash the NIC, and we have seen fixes go into drivers to
handle this scenario: 8914a595110a ('bnx2x:
disable GSO where gso_size is too big for hardware') and
2b16f048729b ('net: create skb_gso_validate_mac_len()').
With those fixes TCP eventually recovers, but not before
a few dropped segments.
Well, I'm not a TCP expert and though we have experienced
these corner cases in our environment, I could not reproduce
this case reliably in my test setup to try this fix myself.
However, Michael Chan <michael.chan@broadcom.com> had a setup
where these fixes helped him mitigate the issue and not cause
the crash.
The idea here is to not alter the data path with additional
locks or wmb()/rmb() barriers to avoid the racy assignment, but
to create a new device that has a really low MTU and whose
.ndo_start_xmit is essentially a kfree_skb(). Make use of this
device instead of 'lo' when marking the dst dead.
The first patch implements the blackhole device, the second
patch uses it in the IPv4 and IPv6 stacks, and the third patch
is a self-test that ensures the sanity of this device.
v1->v2
fixed the self-test patch to handle the conflict
v2 -> v3
fixed Kconfig text/string.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:39:01 +0000 (14:39 -0700)]
blackhole_dev: add a selftest
Since this is not really a device with all capabilities, this test
ensures that it has *enough* of them to make it through the data path
without causing unwanted side effects (read: a crash!).
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:38:57 +0000 (14:38 -0700)]
blackhole_netdev: use blackhole_netdev to invalidate dst entries
Use blackhole_netdev instead of 'lo' device with lower MTU when marking
dst "dead".
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Tested-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:38:49 +0000 (14:38 -0700)]
loopback: create blackhole net device similar to loopback.
Create a blackhole net device that can be used for "dead"
dst entries instead of the loopback device. This blackhole device differs
from loopback in a few aspects: (a) it's not per-netns, (b) the MTU on this
device is ETH_MIN_MTU, (c) the xmit function is essentially a kfree_skb(),
and (d) since it's not registered it won't have an ifindex.
The lower MTU effectively makes the device fail the MTU check during
the route check when the dst associated with the skb is dead.
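To illustrate the idea, a minimal sketch of such an xmit routine (names are
illustrative, not necessarily the exact upstream code): the device keeps no
queues and simply frees every skb handed to it.
static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb,
					 struct net_device *dev)
{
	/* drop everything; a dead dst must never transmit real traffic */
	kfree_skb(skb);
	net_warn_ratelimited("%s(): Dropping skb.\n", __func__);
	return NETDEV_TX_OK;
}

static const struct net_device_ops blackhole_netdev_ops = {
	.ndo_start_xmit = blackhole_netdev_xmit,
};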
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Kelam [Sun, 30 Jun 2019 14:29:49 +0000 (19:59 +0530)]
net: ethernet: broadcom: bcm63xx_enet: Remove unneeded memset
Remove the unneeded memset, since alloc_etherdev uses kvzalloc with the
__GFP_ZERO flag.
Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:27:08 +0000 (19:27 -0700)]
Merge branch 'net-netsec-Add-XDP-Support'
Ilias Apalodimas says:
====================
net: netsec: Add XDP Support
This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
Since the page_pool API fixes are merged into net-next, we can now safely
use its DMA mapping capabilities.
The first patch changes the buffer allocation from napi/netdev_alloc_frag()
to the page_pool API. Although this will lead to slightly reduced
performance (on raw packet drops only), we can use the API for XDP buffer
recycling. Another side effect is a slight increase in memory usage, due to
using a single page per packet.
The second patch adds XDP support to the driver.
There's a bunch of interesting options that come up due to the single
Tx queue.
Locking is needed (to avoid messing up the Tx queue, since ndo_xdp_xmit
and the normal stack can co-exist). We also need to track the
'buffer type' for Tx and properly free or recycle the packet depending
on its nature.
Changes since RFC:
- Bug fixes from Jesper and Maciej
- Added page pool API to retrieve the DMA direction
Changes since v1:
- Use page_pool_free correctly if xdp_rxq_info_reg() failed
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:25 +0000 (08:23 +0300)]
net: netsec: add XDP support
The interface only supports 1 Tx queue, so locking is introduced on
the Tx queue if XDP is enabled to make sure .ndo_start_xmit and
.ndo_xdp_xmit won't corrupt the Tx ring (a minimal locking sketch
follows the performance numbers below).
- Performance (SMMU off)
Benchmark XDP_SKB XDP_DRV
xdp1 291kpps 344kpps
rxdrop 282kpps 342kpps
- Performance (SMMU on)
Benchmark XDP_SKB XDP_DRV
xdp1 167kpps 324kpps
rxdrop 164kpps 323kpps
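A minimal sketch of the locking scheme described above (struct and field
names are illustrative, not the driver's actual ones): the lock is only
taken when an XDP program is attached, so the non-XDP path is unaffected.
struct netsec_tx_ring_sketch {
	spinlock_t lock;	/* protects the single hardware Tx ring */
	bool xdp_enabled;	/* only lock when ndo_xdp_xmit can race */
};

static void netsec_tx_post_sketch(struct netsec_tx_ring_sketch *tx)
{
	if (tx->xdp_enabled)
		spin_lock_bh(&tx->lock);
	/* ... fill descriptors and kick the Tx ring ... */
	if (tx->xdp_enabled)
		spin_unlock_bh(&tx->lock);
}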
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:24 +0000 (08:23 +0300)]
net: page_pool: add helper function for retrieving dma direction
Since the DMA direction is stored in the page pool params, offer an API
helper for drivers that choose not to keep track of it locally.
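The helper is presumably along these lines (a sketch based on the
description above; the exact name and field layout are assumptions):
it simply hands back the direction supplied in the page_pool_params
at pool creation time.
static inline enum dma_data_direction
page_pool_get_dma_dir(struct page_pool *pool)
{
	return pool->p.dma_dir;
}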
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:23 +0000 (08:23 +0300)]
net: netsec: Use page_pool API
Use page_pool and its DMA mapping capabilities for Rx buffers instead
of netdev/napi_alloc_frag().
Although this will result in a slight performance penalty on small sized
packets (~10%), the use of the API will allow us to easily add XDP support.
The penalty won't be visible in network testing (i.e. iperf/netperf etc.);
it only happens during raw packet drops.
Furthermore we intend to add recycling capabilities to the API
in the future. Once the recycling is added the performance penalty will
go away.
The only 'real' penalty is the slightly increased memory usage, since we
now allocate a page per packet instead of the amount of bytes we need +
skb metadata (the difference is roughly 2KB per packet).
With a minimum of 4GB of RAM on the only SoC that has this NIC, the
extra memory usage is negligible (a bit more on 64K pages).
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 21:32:01 +0000 (17:32 -0400)]
tc-testing: added tdc tests for prio qdisc
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:18:04 +0000 (19:18 -0700)]
Merge branch 'mirred-batch-fixes'
Roman Mashak says:
====================
Fix batched event generation for mirred action
When adding or deleting a batch of entries, the kernel sends up to
TCA_ACT_MAX_PRIO entries in an event to user space. However, it does not
consider that the action sizes may vary and require different skb sizes.
For example:
% cat tc-batch.sh
TC="sudo /mnt/iproute2.git/tc/tc"
$TC actions flush action mirred
for i in `seq 1 $1`;
do
cmd="action mirred egress redirect dev lo index $i "
args=$args$cmd
done
$TC actions add $args
%
% ./tc-batch.sh 32
Error: Failed to fill netlink attributes while adding TC action.
We have an error talking to the kernel
%
Patch 1 adds a callback in tc_action_ops of the mirred action, which
calculates the action size and passes that size to
tcf_add_notify()/tcf_del_notify().
Patch 2 updates the TDC test suite with relevant test cases.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 18:30:18 +0000 (14:30 -0400)]
tc-testing: updated mirred action tests with batch create/delete
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 18:30:17 +0000 (14:30 -0400)]
net sched: update mirred action for batched events operations
Add get_fill_size() routine used to calculate the action size
when building a batch of events.
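For the mirred action the callback can be as small as the sketch below
(hedged; the real implementation may account for more attributes): it
returns the worst-case netlink attribute size one action contributes to
the notification skb.
static size_t tcf_mirred_get_fill_size_sketch(const struct tc_action *act)
{
	return nla_total_size(sizeof(struct tc_mirred));
}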
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason A. Donenfeld [Fri, 28 Jun 2019 14:40:21 +0000 (16:40 +0200)]
netlink: use 48 byte ctx instead of 6 signed longs for callback
People are inclined to stuff random things into cb->args[n] because it
looks like an array of integers. Sometimes people even put u64s in there
with comments noting that a certain member takes up two slots. The
horror! Really this should mirror the usage of skb->cb, which is just
48 opaque bytes suitable for casting a struct onto. Then people can create
their usual casting macros for accessing strongly typed members of a
struct.
As a plus, this also gives us the same amount of space on 32-bit and 64-bit.
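Intended usage looks roughly like this (the context struct and its members
are made up for illustration): overlay a strongly typed per-dump context on
cb->ctx, with a build-time check that it fits in the 48 bytes.
struct foo_dump_ctx {			/* hypothetical per-dump state */
	u32 last_ifindex;
	u64 resume_cookie;
};

static inline struct foo_dump_ctx *foo_dump_ctx(struct netlink_callback *cb)
{
	BUILD_BUG_ON(sizeof(struct foo_dump_ctx) > sizeof(cb->ctx));
	return (struct foo_dump_ctx *)cb->ctx;
}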
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Fri, 28 Jun 2019 15:06:20 +0000 (17:06 +0200)]
tipc: embed jiffies in macro TIPC_BC_RETR_LIM
The macro TIPC_BC_RETR_LIM is always used in combination with 'jiffies',
so we can just as well perform the addition in the macro itself. This
way, we get a few shorter code lines and one less line break.
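The change boils down to the following before/after (the 10 ms interval and
the call-site variable are illustrative, not copied from the patch):
/* Before: every call site adds jiffies itself */
#define TIPC_BC_RETR_LIM	msecs_to_jiffies(10)
	nxt_retr = jiffies + TIPC_BC_RETR_LIM;

/* After: the addition lives in the macro */
#define TIPC_BC_RETR_LIM	(jiffies + msecs_to_jiffies(10))
	nxt_retr = TIPC_BC_RETR_LIM;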
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eiichi Tsukata [Fri, 28 Jun 2019 02:37:14 +0000 (11:37 +0900)]
net/ipv6: Fix misuse of proc_dointvec "flowlabel_reflect"
/proc/sys/net/ipv6/flowlabel_reflect assumes the written value to be in the
range 0 to 3. Use proc_dointvec_minmax instead of proc_dointvec.
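The fix is the usual minmax sysctl pattern; a sketch (the table is
abbreviated and the bound variables are local to this example):
static int flowlabel_reflect_min;	/* 0 */
static int flowlabel_reflect_max = 3;

static struct ctl_table flowlabel_reflect_table_sketch[] = {
	{
		.procname	= "flowlabel_reflect",
		/* .data would point at the per-netns sysctl variable */
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &flowlabel_reflect_min,
		.extra2		= &flowlabel_reflect_max,
	},
	{ }
};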
Fixes: 323a53c41292 ("ipv6: tcp: enable flowlabel reflection in some RST packets")
Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 28 Jun 2019 01:13:19 +0000 (09:13 +0800)]
net: link_watch: prevent starvation when processing linkwatch wq
When the user has configured a large number of virtual netdevs, such
as 4K VLANs, a carrier on/off operation on the real netdev will also
cause the link state of its virtual netdevs to be processed in
linkwatch. Currently, the processing is done in a work queue, which
may cause an rtnl locking starvation problem and a worker starvation
problem for other work queues, such as the irqfd_inject wq.
This patch releases the CPU when the link watch worker has processed
a fixed number of netdev link watch events, and schedules the work
queue again when there are still link watch events remaining.
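The general shape of such a bounded worker is sketched below (a generic
illustration, not the patch itself; the budget value and all helper names
are hypothetical):
#define LW_PER_RUN_BUDGET	128	/* hypothetical per-run budget */

static void linkwatch_run_bounded_sketch(struct work_struct *work)
{
	int budget = LW_PER_RUN_BUDGET;

	while (budget-- > 0 && lw_have_pending_event())	/* hypothetical */
		lw_handle_one_event();			/* hypothetical */

	/* Yield the CPU and rtnl_lock; pick up the rest on the next run. */
	if (lw_have_pending_event())
		schedule_delayed_work(&lw_work, 0);	/* hypothetical item */
}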
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 01:58:35 +0000 (18:58 -0700)]
Merge branch 'mlxsw-PTP-timestamping-support'
Ido Schimmel says:
====================
mlxsw: PTP timestamping support
This is the second patchset adding PTP support in mlxsw. The next patchset
will add PTP shapers, which are required to maintain accuracy at rates
lower than 40Gb/s, while subsequent patchsets will add tracepoints and
selftests.
Petr says:
This patch set introduces support for retrieving and processing hardware
timestamps for PTP packets.
The way PTP timestamping works on Spectrum-1 is that there are two queues
associated with each front panel port. When a packet is timestamped, the
timestamp is put to one of the queues: timestamps for transmitted packets
to one and for received packets to the other. Activity on these queues is
signaled through the events PTP_ING_FIFO and PTP_EGR_FIFO.
Packets themselves arrive through two traps: PTP0 and PTP1. It is possible
to configure which PTP messages should be trapped under which PTP trap. On
Spectrum systems, mlxsw will use PTP0 for event messages (which need
timestamping), and PTP1 for general messages (which do not).
There are therefore four relevant traps: receive of PTP event resp. general
message, and receive of timestamp for a transmitted resp. received PTP
packet. The obvious place to put the new logic is a custom listener
for the mentioned traps.
Besides handling ingress traffic (be it packets or timestamps), the driver
also needs to handle timestamping of transmitted packets. One option would
be to invoke the relevant logic from mlxsw_core_ptp_transmitted(). However
on Spectrum-2, the timestamps are actually delivered through the completion
queue, and for that reason this patchset opts to invoke the logic from the
PCI code, via core and the driver, to a chip-specific operation. That way
the invocation will be done in a place where a Spectrum-2 implementation
will have an opportunity to extract the timestamp.
As indicated above, the PTP FIFO signaling happens independently from
packet delivery. A packet corresponding to any given timestamp could be
delivered sooner or later than the timestamp itself. Additionally, the
queues are only four elements deep, and it is therefore possible that the
timestamp for a delivered packet never arrives at all. Similarly a PTP
packet might be dropped due to CPU traffic pressure, and never be delivered
even if the corresponding timestamp was.
The driver thus needs to hold a cache of as-yet-unmatched SKBs and
timestamps. The first piece to arrive (be it timestamp or SKB) is put to
this cache. When the other piece arrives, the timestamp is attached to the
SKB and that is passed on. A delayed work is run at regular intervals to
prune the old unmatched entries.
As mentioned above, the mechanism for timestamp delivery changes on
Spectrum-2, where timestamps are part of completion queue elements, and all
packets are timestamped. All this bookkeeping is therefore unnecessary on
Spectrum-2. For this reason, this patchset spends some time introducing
Spectrum-1 specific artifacts such as a possibility to register a given
trap only on Spectrum-1.
Patches #1-#4 describe new registers.
Patches #5 and #6 introduce the possibility to register certain traps
only on some systems. The list of Spectrum-1 specific traps is left empty
at this point.
Patch #7 hooks into packet receive path by registering PTP traps
and appropriate handlers (that however do nothing of substance yet).
Patch #8 adds a helper to allow storing custom data to SKB->cb.
Patch #9 adds a call into the PCI completion queue handler that invokes,
via core and spectrum code, a PTP transmit handler. (Which also does not do
anything interesting yet.)
Patch #10 introduces code to invoke PTP initialization and adds data types
for the cache of unmatched entries.
Patches #11 and #12 implement the timestamping itself. In #11, the PHC
spin_locks are converted to _bh variants, because unlike normal PHC path,
which runs in process context, timestamp processing runs as soft interrupt.
Then #12 introduces the code for saving and retrieval of unmatched entries,
invokes PTP classifier to identify packets of interest, registers timestamp
FIFO events, and handles decoding and attaching timestamps to packets.
Patch #13 introduces a garbage collector for left-behind entries that have
not been matched for about a second.
In patch #14, PTP message types are configured to arrive as PTP0
(events) or PTP1 (everything else) as appropriate. At this point, the PTP
packets start arriving through the traps, but because PTP is disabled and
there is no way to enable it yet, they are always just passed to the usual
receive path right away.
Finally patches #15 and #16 add the plumbing to actually make it possible
to enable this code through SIOCSHWTSTAMP ioctl, and to advertise the
hardware timestamping capabilities through ethtool.
v2:
- Patch #12:
- In mlxsw_sp1_ptp_fifo_event_func(), post-increment when iterating over PTP
FIFO records.
- Patch #14:
- Change namespace of message type enumerators from MLXSW_ to MLXSW_SP_.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:05:00 +0000 (09:05 +0300)]
mlxsw: spectrum: PTP: Support ethtool get_ts_info
The get_ts_info callback is used for obtaining information about
timestamping capabilities of a network device. On Spectrum-1, implement
it to advertise the PHC and the capability to do HW timestamping, and
the supported RX and TX filters.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:59 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Support SIOCGHWTSTAMP, SIOCSHWTSTAMP ioctls
The SIOCSHWTSTAMP ioctl configures HW timestamping on a given port.
Dispatch the ioctls to a per-chip handler (added to ptp_ops). Find
which PTP messages need to be timestamped and configure MTPPPC
accordingly.
The SIOCGHWTSTAMP ioctl is a getter for the current configuration.
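The per-port handler follows the generic SIOCSHWTSTAMP pattern, roughly as
sketched here (not mlxsw's exact code): copy the config in, program the
hardware, and report back what was actually enabled.
static int port_hwtstamp_set_sketch(struct net_device *dev, struct ifreq *ifr)
{
	struct hwtstamp_config config;

	if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
		return -EFAULT;

	/* ... validate config.tx_type / config.rx_filter and program the
	 * hardware (MTPPPC) accordingly, possibly widening rx_filter ...
	 */

	return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
	       -EFAULT : 0;
}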
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:58 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Configure PTP traps and FIFO events
Configure MTPTPT to set which message types should arrive under which
PTP trap, and MOGCR to clear the timestamp queue after its contents are
reported through PTP_ING_FIFO or PTP_EGR_FIFO.
With this configuration, PTP packets start arriving through the PTP
traps. However since timestamping is disabled by default and there is
currently no way to enable it, they will not be timestamped.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:57 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Garbage-collect unmatched entries
On Spectrum-1, timestamped PTP packets and the corresponding timestamps
need to be kept in caches until both are available, at which point they are
matched up and packets forwarded as appropriate. However, not all packets
will ever see their timestamp, and not all timestamps will ever see their
packet. It is therefore necessary to dispose of such abandoned entries.
To that end, introduce a garbage collector to collect entries that have
not had their counterpart turn up within about a second. The GC
maintains a monotonically increasing GC cycle value. Every entry that
is put to the hash table is annotated with the GC cycle at which it
should be collected. When the GC runs, it walks the hash table, and
collects the objects according to their GC cycle annotation.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:56 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Support timestamping on Spectrum-1
On Spectrum-1, timestamps arrive through a pair of dedicated events:
MLXSW_TRAP_ID_PTP_ING_FIFO and _EGR_FIFO. The payload delivered with
those traps is contents of the timestamp FIFO at a given port in a given
direction. Add a Spectrum-1-specific handler for these two events which
decodes the timestamps and forwards them to the PTP module.
Add a function that parses a packet, dispatching to ptp_classify_raw(),
and decodes PTP message type, domain number, and sequence ID. Add a new
mlxsw dependency on the PTP classifier.
Add helpers that can store and retrieve unmatched timestamps and SKBs to
the hash table added in a preceding patch.
Add the matching code itself: upon arrival of a timestamp or a packet,
look up the corresponding unmatched entry, and match it up. If there is
none, add a new unmatched entry. This logic is the same on ingress as on
egress.
Packets and timestamps that never matched need to be eventually disposed
of. A garbage collector added in a follow-up patch will take care of
that. Since currently all this code is turned off, no crud will
accumulate in the hash table.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:55 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Disable BH when working with PHC
Up until now, the PTP hardware clock code was only invoked in the process
context (SYS_clock_adjtime -> do_clock_adjtime -> k_clock::clock_adj ->
pc_clock_adjtime -> posix_clock_operations::clock_adjtime ->
ptp_clock_info::adjtime -> mlxsw_spectrum).
In order to enable HW timestamping, which is tied into trap handling, it
will be necessary to take the clock lock from the PCI queue handler
tasklets as well.
Therefore use the _bh variants when handling the clock lock. Incidentally,
Documentation/ptp/ptp.txt recommends _irqsave variants, but that's
unnecessarily strong for our needs.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:54 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Add PTP initialization / finalization
Add two ptp_ops: init and fini, to initialize and finalize the PTP
subsystem. Call as appropriate from mlxsw_sp_init() and _fini().
Lay the groundwork for Spectrum-1 support. On Spectrum-1, the received
timestamped packets and their corresponding timestamps arrive
independently, and need to be matched up. Introduce the related data types
and add to struct mlxsw_sp_ptp_state the hash table that will keep the
unmatched entries.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:53 +0000 (09:04 +0300)]
mlxsw: pci: PTP: Hook into packet transmit path
On Spectrum-1, timestamps are delivered separately from the packets, and
need to be paired up. Therefore, at some point after mlxsw_sp_port_xmit()
is invoked, it is necessary to involve the chip-specific driver code to
allow it to do the necessary bookkeeping and matching.
On Spectrum-2, timestamps are delivered in CQE. For that reason,
position the point of driver involvement into mlxsw_pci_cqe_sdq_handle()
to make it hopefully easier to extend for Spectrum-2 in the future.
To tell the driver what port the packet was sent on, keep tx_info
in SKB control buffer.
Introduce a new driver core interface mlxsw_core_ptp_transmitted(), a
driver callback ptp_transmitted, and a PTP op transmitted. The callee is
responsible for taking care of releasing the SKB passed to the new
interfaces, and correspondingly have the new stub callbacks just call
dev_kfree_skb_any().
Follow-up patches will introduce the actual content into
mlxsw_sp1_ptp_transmitted() in particular.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:52 +0000 (09:04 +0300)]
mlxsw: core: Add support for using SKB control buffer
The SKB control buffer is useful (and used) for bookkeeping of information
related to that SKB. Add helpers so that the mlxsw driver(s) can safely use
the buffer as well. The structure is currently empty; individual users will
add members to it as necessary.
Note that SKB allocation functions already clear the buffer, so the cleanup
is only necessary when ndo_start_xmit is called.
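Such a helper typically looks like the sketch below (the member shown is
illustrative): a driver struct is overlaid on skb->cb with a build-time
check that it fits.
struct mlxsw_skb_cb_sketch {
	u32 local_port;		/* example member added by a later user */
};

static inline struct mlxsw_skb_cb_sketch *mlxsw_skb_cb_sketch(struct sk_buff *skb)
{
	BUILD_BUG_ON(sizeof(struct mlxsw_skb_cb_sketch) > sizeof(skb->cb));
	return (struct mlxsw_skb_cb_sketch *)skb->cb;
}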
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:51 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Hook into packet receive path
When configured, the Spectrum hardware can recognize PTP packets and
trap them to the CPU using dedicated traps, PTP0 and PTP1.
One reason to get PTP packets under dedicated traps is to have a
separate policer suitable for the amount of PTP traffic expected when
switch is operated as a boundary clock. For this, add two new trap
groups, MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0 and _PTP1, and associate the
two PTP traps with these two groups.
In the driver, specifically for Spectrum-1, event PTP packets will need
to be paired up with their timestamps. Those arrive through a different
set of traps, added later in the patch set. To support this future use,
introduce a new PTP op, ptp_receive.
It is possible to configure which PTP messages should be trapped under
which PTP trap. On Spectrum systems, we will use PTP0 for event
packets (which need timestamping), and PTP1 for control packets (which
do not). Thus configure PTP0 trap with a custom callback that defers to
the ptp_receive op.
Additionally, L2 PTP packets are actually trapped through the LLDP trap,
not through any of the PTP traps. So treat the LLDP trap the same way as
the PTP0 trap. Unlike PTP traps, which are currently still disabled,
LLDP trap is active. Correspondingly, have all the implementations of
the ptp_receive op return true, which the handler treats as a signal to
forward the packet immediately.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:50 +0000 (09:04 +0300)]
mlxsw: spectrum: Add support for traps specific to Spectrum-1
On Spectrum-1, timestamps for PTP packets are delivered through queues
of ingress and egress timestamps. There are two event traps
corresponding to activity on each of those queues. This mechanism is
absent on Spectrum-2, and therefore the traps should only be registered
on Spectrum-1.
Carry a chip-specific listener array in mlxsw_sp->listeners and
listeners_count. Register listeners from that array in
mlxsw_sp_traps_init(). Add a new listener array for Spectrum-1 traps and
configure the newly-added mlxsw_sp->listeners with this array.
The listener array is empty for now, the events will be added in a later
patch.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:49 +0000 (09:04 +0300)]
mlxsw: spectrum: Extract a helper for trap registration
On Spectrum-1, timestamps for PTP packets are delivered through queues
of ingress and egress timestamps. There are two event traps
corresponding to activity on each of those queues. This mechanism is
absent on Spectrum-2, and therefore the traps should only be registered
on Spectrum-1.
Extract out of mlxsw_sp_traps_init() a generic helper,
mlxsw_sp_traps_register(), and likewise with _unregister(). The new helpers
will later be called with Spectrum-1-specific traps.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:48 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Global Configuration Register
This register serves to configure global parameters of certain
monitoring operations. The following patches will use it to configure
that when PTP timestamps are delivered through the PTP FIFO traps, the
FIFO in question is cleared as well.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:47 +0000 (09:04 +0300)]
mlxsw: reg: Add Time Precision Packet Timestamping Reading
The MTPPTR is used for reading the per port PTP timestamp FIFO.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:46 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Precision Time Protocol Trap Register
This register is used for configuring under which trap to deliver PTP
packets, depending on the type of the packet.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:45 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Time Precision Packet Port Configuration Register
This register serves for configuration of which PTP messages should be
timestamped. This is a global configuration, despite the register name.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel T. Lee [Sat, 29 Jun 2019 13:33:58 +0000 (22:33 +0900)]
samples: pktgen: allow to specify destination port
Currently, kernel pktgen has a feature to specify the UDP destination port
for the packets it sends (e.g. pgset "udp_dst_min 9"),
but none of the sample scripts has an option to make use of it.
This commit adds the DST_PORT option to specify the target port(s) in the scripts:
-p : ($DST_PORT) destination PORT range (e.g. 433-444) is also allowed
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel T. Lee [Sat, 29 Jun 2019 13:33:57 +0000 (22:33 +0900)]
samples: pktgen: add some helper functions for port parsing
This commit adds port parsing and port validation helper functions to parse
a single port or a range of ports from a given string (e.g. 1234, 443-444).
These helpers will later be used to set the target port(s) in samples/pktgen.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 1 Jul 2019 13:39:36 +0000 (06:39 -0700)]
ipv6: icmp: allow flowlabel reflection in echo replies
Extend the flowlabel_reflect bitmask to allow conditional
reflection of incoming flow labels in echo replies.
Note this takes precedence over auto flowlabels.
Add a flowlabel_reflect enum to replace hard-coded
values.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 1 Jul 2019 01:41:13 +0000 (18:41 -0700)]
Merge tag 'mlx5e-updates-2019-06-28' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5e-updates-2019-06-28
This series adds some misc updates for mlx5e driver
1) Allow adding the same mac more than once in MPFS table
2) Move to HW checksumming advertising
3) Report netdevice MPLS features
4) Correct physical port name of the PF representor
5) Reduce stack usage in mlx5_eswitch_termtbl_create
6) Refresh TIR improvement for representors
7) Expose same physical switch_id for all representors
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 30 Jun 2019 23:03:35 +0000 (16:03 -0700)]
Merge branch '10GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2019-06-28
This series contains a smorgasbord of updates to many of the Intel
drivers.
Gustavo A. R. Silva updates the ice and iavf drivers to use the
struct_size() helper where possible.
Miguel increases the pause and refresh time for flow control in the
e1000e driver during reset for certain devices.
Dann Frazier fixes a potential NULL pointer dereference in ixgbe driver
when using non-IPSec enabled devices.
Colin Ian King fixes a potential overflow during a shift in the ixgbe
driver. Also fixes a potential NULL pointer dereference in the iavf
driver by adding a check.
Venkatesh Srinivas converts the e1000 driver to use dma_wmb() instead of
wmb() for doorbell writes to avoid SFENCEs in the transmit and receive
paths.
Arjan updates the e1000e driver to improve boot time by over 100 msec by
reducing the usleep ranges during system startup.
Artem updates the igb driver's register dump in ethtool: he first prepares
the register dump for future additions of registers, then adds the
RR2DCDELAY register to the dump. When dealing with time-sensitive
networks, this register is helpful in determining the latency from the
device to the ring.
Alex fixes the ixgbevf driver to use the current cached link state,
rather than trying to re-check the value from the PF.
Harshitha adds support for MACVLAN offloads in i40e by using channels as
MACVLAN interfaces.
Detlev Casanova updates the e1000e driver to use delayed work instead of
timers to run the watchdog.
Vitaly fixes an issue in e1000e where, when disconnecting and
reconnecting the physical cable, the NIC enters a DMoff
state. This state causes a mismatch in link and duplex, so check the
PCIm function state and perform a PHY reset when in this state to
resolve the issue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:19:09 +0000 (23:19 +0200)]
r8169: remove not needed call to dma_sync_single_for_device
DMA_API_HOWTO.txt includes an example explaining when
dma_sync_single_for_device() is not needed, and that example matches
our use case. The buffer isn't changed by the CPU and direction is
DMA_FROM_DEVICE, so we can remove the call to
dma_sync_single_for_device().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:12:39 +0000 (23:12 +0200)]
r8169: consider that 32 Bit DMA is the default
Documentation/DMA-API-HOWTO.txt states:
By default, the kernel assumes that your device can address 32-bits of
DMA addressing. For a 64-bit capable device, this needs to be increased,
and for a device with limitations, it needs to be decreased.
Therefore we don't need the 32 Bit DMA fallback configuration and can
remove it.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:06:33 +0000 (23:06 +0200)]
r8169: improve handling VLAN tag
The VLAN tag is stored in the descriptor in network byte order.
Using swab16 works on little endian host systems only. Better play safe
and use ntohs or htons respectively.
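On the receive side the byte-order-safe handling looks roughly like this
(a sketch; the opts2 layout and the RxVlanTag bit follow the driver's
conventions but are not copied verbatim):
static void rtl_rx_vlan_tag_sketch(struct sk_buff *skb, u32 opts2)
{
	/* The tag sits in the low 16 bits of opts2 in network byte order,
	 * so convert with ntohs() instead of unconditionally swapping.
	 */
	if (opts2 & RxVlanTag)
		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
				       ntohs((__force __be16)(opts2 & 0xffff)));
}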
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Thu, 27 Jun 2019 15:12:42 +0000 (17:12 +0200)]
selftests: rtnetlink: skip ipsec offload tests if netdevsim isn't present
Running the script on systems without netdevsim now prints:
SKIP: ipsec_offload can't load netdevsim
instead of an error message and a failed status.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 29 Jun 2019 18:15:12 +0000 (11:15 -0700)]
Merge branch 'em_ipt-add-support-for-addrtype'
Nikolay Aleksandrov says:
====================
em_ipt: add support for addrtype
We would like to be able to use addrtype from tc for ACL rules, and
em_ipt seems the best place to add support for the already existing xt
match. The biggest issue is that addrtype revision 1 (with IPv6 support)
is NFPROTO_UNSPEC and currently em_ipt can't differentiate between v4/v6
if such an xt match is used, because it passes the match's family instead
of the packet's one. The first 3 patches make em_ipt match only on IP
traffic (currently both policy and addrtype recognize such traffic
only) and make it pass the actual packet's protocol instead of the xt
match family when the latter is unspecified. They also add support for
NFPROTO_UNSPEC xt matches. The last patch allows adding addrtype rules
via em_ipt.
We need to keep the user-specified nfproto for dumping in order to be
compatible with libxtables; we cannot dump NFPROTO_UNSPEC as the nfproto
or we'll get an error from libxtables, thus the nfproto is limited to
ipv4/ipv6 in patch 03 and is recorded.
v3: don't use the user nfproto for matching, only for dumping, more
information is available in the commit message in patch 03
v2: change patch 02 to set the nfproto only when unspecified and drop
patch 04 from v1 (Eyal Birger)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:47 +0000 (11:10 +0300)]
net: sched: em_ipt: add support for addrtype matching
Allow em_ipt to use addrtype for matching. Restrict the use only to
revision 1, which has IPv6 support. Since it's an NFPROTO_UNSPEC xt match,
we use the user-specified nfproto for matching; in case it's unspecified,
both v4 and v6 will be matched by the rule.
v2: no changes, was patch 5 in v1
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:46 +0000 (11:10 +0300)]
net: sched: em_ipt: keep the user-specified nfproto and dump it
If we dump NFPROTO_UNSPEC as nfproto user-space libxtables can't handle
it and would exit with an error like:
"libxtables: unhandled NFPROTO in xtables_set_nfproto"
In order to avoid the error, return the user-specified nfproto. If we
don't record it, then the match family is used, which can be
NFPROTO_UNSPEC. Even if we add support to mask NFPROTO_UNSPEC in
iproute2, we have to be compatible with older versions, which would
also be allowed to add NFPROTO_UNSPEC matches (e.g. addrtype after the
last patch).
v3: don't use the user nfproto for matching, only for dumping the rule,
also don't allow the nfproto to be unspecified (explained above)
v2: adjust changes to missing patch, was patch 04 in v1
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:45 +0000 (11:10 +0300)]
net: sched: em_ipt: set the family based on the packet if it's unspecified
Set the family based on the packet if it's unspecified, otherwise
protocol-neutral matches will have wrong information (e.g. NFPROTO_UNSPEC).
This is in preparation for using NFPROTO_UNSPEC xt matches.
v2: set the nfproto only when unspecified
Suggested-by: Eyal Birger <eyal.birger@gmail.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:44 +0000 (11:10 +0300)]
net: sched: em_ipt: match only on ip/ipv6 traffic
Restrict matching only to ip/ipv6 traffic and make sure we can use the
headers, otherwise matches will be attempted on any protocol which can
be unexpected by the xt matches. Currently policy supports only ipv4/6.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xue Chaojing [Sat, 29 Jun 2019 02:26:27 +0000 (02:26 +0000)]
hinic: add vlan offload support
This patch adds vlan offload support for the HINIC driver.
Signed-off-by: Xue Chaojing <xuechaojing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Blakey [Mon, 24 Jun 2019 12:04:58 +0000 (15:04 +0300)]
net/mlx5e: Disallow tc redirect offload cases we don't support
After changing the parent_id to be the same for both NICs of the same
hardware device, netdev_port_same_parent_id now returns true for
more cases (all the lower devices in the hierarchy are on the same
hardware device).
If merged eswitch isn't enabled, these cases aren't supported, so disallow
them.
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Paul Blakey [Thu, 16 May 2019 12:27:17 +0000 (15:27 +0300)]
net/mlx5e: Expose same physical switch_id for all representors
Report system_image_guid as the E-Switch switch_id. This ensures
that when a NIC contains multiple PCI functions and
has the merged eswitch capability, all representors from
the multiple PFs publish the same switch_id.
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Thu, 23 May 2019 06:58:56 +0000 (09:58 +0300)]
net/mlx5e: Don't refresh TIRs when updating representor SQs
Refreshing TIRs is done in order to update the TIRs with the current
state of SQs in the transport domain, so that the TIRs can filter out
undesired self-loopback packets based on the source SQ of the packet.
Representor TIRs will only receive packets that originate from their
associated vport, due to dedicated steering, and therefore will never
receive self-loopback packets, whose source vport will be the vport of
the E-Switch manager, and therefore not the vport associated with the
representor. As such, it is not necessary to refresh the representors'
TIRs, since self-loopback packets can't reach them.
Since representors only exist in switchdev mode, and there is no
scenario in which a representor will exist in the transport domain
alongside a non-representor, it is not necessary to refresh the
transport domain's TIRs upon changing the state of a representor's
queues. Therefore, do not refresh TIRs upon such a change. Achieve
this by adding an update_rx callback to the mlx5e_profile, which
refreshes TIRs for non-representors and does nothing for representors,
and replace instances of mlx5e_refresh_tirs() upon changing the state
of the queues with update_rx().
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Arnd Bergmann [Tue, 18 Jun 2019 11:15:06 +0000 (13:15 +0200)]
net/mlx5e: reduce stack usage in mlx5_eswitch_termtbl_create
Putting an empty 'mlx5_flow_spec' structure on the stack is a bit
wasteful and causes a warning on 32-bit architectures when building
with clang -fsanitize-coverage:
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c: In function 'mlx5_eswitch_termtbl_create':
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c:90:1: error: the frame size of 1032 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Since the structure is never written to, we can statically allocate
it to avoid the stack usage. To be on the safe side, mark all
subsequent function arguments that we pass it into as 'const'
as well.
Fixes: 10caabdaad5a ("net/mlx5e: Use termination table for VLAN push actions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Mon, 27 May 2019 09:47:10 +0000 (04:47 -0500)]
net/mlx5e: Set drvinfo in generic manner
Consider PCI and non-PCI device types while setting the device name
in the get_drvinfo() callback, using the existing generic device.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Wed, 5 Jun 2019 06:29:05 +0000 (01:29 -0500)]
net/mlx5e: Correct phys_port_name for PF port
Currently the PF's phys_port_name is reported as pfNvf-1, since the vport
number for the PF vport is 65535.
Correct the PF's phys_port_name to the agreed-upon name pfN.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Ariel Levkovich [Wed, 5 Jun 2019 17:01:08 +0000 (20:01 +0300)]
net/mlx5e: Report netdevice MPLS features
Set supported device features in the netdevice MPLS features mask.
This will enable HW checksumming and TSO for MPLS tagged traffic.
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Ariel Levkovich [Wed, 5 Jun 2019 16:40:09 +0000 (19:40 +0300)]
net/mlx5e: Move to HW checksumming advertising
This patch changes the way the driver advertises its checksum offload
capabilities within the net device features bit mask.
Instead of advertising protocol-specific checksumming capabilities,
which are limited today to IPv4 and IPv6, we move to reporting
generic HW checksumming capabilities.
This will allow the network stack to let the mlx5 device offload checksum
for cases where the IP header is encapsulated within another protocol
and skb->protocol doesn't indicate one of the IP version protocols,
specifically in the case of an MPLS label encapsulating the IP header and
skb->protocol indicating the MPLS ethertype rather than IP.
Moving to HW_CSUM reporting is required in the basic net device hw
features mask and also in the extensions (vlan and encapsulation
features) since the extensions are always multiplied by the basic
features set during the packet's traversal through the stack's tx flow.
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Tue, 11 Jun 2019 09:29:41 +0000 (12:29 +0300)]
net/mlx5: MPFS, Allow adding the same MAC more than once
Remove the limitation preventing adding a vport's MAC address to the
Multi-Physical Function Switch (MPFS) more than once per E-switch, as
there is no difference in the MPFS if an address is being used by an
E-switch more than once.
This allows the E-switch to have multiple vports with the same MAC
address, allowing vports to be classified by VLAN id instead of by MAC
if desired.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Tue, 11 Jun 2019 08:54:36 +0000 (11:54 +0300)]
net/mlx5: MPFS, Cleanup add MAC flow
Unify and isolate the error handling flow in mlx5_mpfs_add_mac(),
removing code duplication.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Saeed Mahameed [Fri, 28 Jun 2019 22:49:59 +0000 (15:49 -0700)]
Merge branch 'mlx5-next' of git://git./linux/kernel/git/mellanox/linux
Misc updates from mlx5-next branch:
1) E-Switch vport metadata support for source vport matching
2) Convert mkey_table to XArray
3) Shared IRQs and to use single IRQ for all async EQs
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Vitaly Lifshits [Tue, 25 Jun 2019 14:39:11 +0000 (17:39 +0300)]
e1000e: PCIm function state support
Due to commit 5d8682588605 ("[misc] mei: me: allow runtime pm for
platform with D0i3"), when disconnecting the cable and reconnecting it,
the NIC enters the DMoff state. This causes a wrong link indication
and a duplex mismatch. This bug is described in:
https://bugzilla.redhat.com/show_bug.cgi?id=1689436
Checking the PCIm function state and performing a PHY reset after a
timeout in the watchdog task solves this issue.
Signed-off-by: Vitaly Lifshits <vitaly.lifshits@intel.com>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Detlev Casanova [Sun, 23 Jun 2019 03:14:37 +0000 (23:14 -0400)]
e1000e: Make watchdog use delayed work
Use delayed work instead of timers to run the watchdog of the e1000e
driver.
Simplify the code with one less middle function.
Signed-off-by: Detlev Casanova <detlev.casanova@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Harshitha Ramamurthy [Wed, 19 Jun 2019 18:45:40 +0000 (14:45 -0400)]
i40e: Add macvlan support on i40e
This patch enables macvlan offloads for i40e. The idea is to use
channels as macvlan interfaces. The channels are VSIs of
type VMDQ. When the first macvlan is created, the maximum number of
channels possible are created. From then on, as a macvlan interface
is created, a macvlan filter is added to these already created
channels (VSIs).
This patch utilizes subordinate device traffic classes to make queue
groups(channels) available for an upper device like a macvlan.
Steps to configure macvlan offloads:
1. ethtool -K ethx l2-fwd-offload on
2. ip link add link ethx name macvlan1 type macvlan
3. ip addr add <address> dev macvlan1
4. ip link set macvlan1 up
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Wed, 19 Jun 2019 16:58:53 +0000 (09:58 -0700)]
ixgbevf: Use cached link state instead of re-reading the value for ethtool
Change the ethtool link settings call to just read the cached state out of
the adapter structure instead of trying to recheck the value from the PF.
Doing this should prevent excessive reading of the mailbox.
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Wed, 19 Jun 2019 14:30:44 +0000 (15:30 +0100)]
iavf: fix dereference of null rx_buffer pointer
A recent commit efa14c3985828d ("iavf: allow null RX descriptors") added
a null pointer sanity check on rx_buffer; however, rx_buffer is being
dereferenced before that check, which implies a null pointer dereference
bug can potentially occur. Fix this by only dereferencing rx_buffer
after the null pointer check.
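Generic illustration of the reordering (the struct and function names are
made up and the body is abbreviated):
struct rx_buffer_sketch {
	struct page *page;
};

static struct sk_buff *build_skb_sketch(struct rx_buffer_sketch *rx_buffer)
{
	if (!rx_buffer)			/* sanity check first ...        */
		return NULL;

	prefetchw(rx_buffer->page);	/* ... only then dereference it  */
	/* ... build and return the skb ... */
	return NULL;
}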
Addresses-Coverity: ("Dereference before null check")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Artem Bityutskiy [Tue, 18 Jun 2019 11:55:13 +0000 (14:55 +0300)]
igb: add RR2DCDELAY to ethtool registers dump
This patch adds the RR2DCDELAY register to the ethtool registers dump.
RR2DCDELAY exists on I210 and I211 Intel Gigabit Ethernet chips and it stands
for "Read Request To Data Completion Delay". Here is how this register is
described in the I210 datasheet:
"This field captures the maximum PCIe split time in 16 ns units, which is the
maximum delay between the read request to the first data completion. This is
giving an estimation of the PCIe round trip time."
In other words, whenever I210 reads from the host memory (e.g., fetches a
descriptor from the ring), the chip measures every PCI DMA read transaction and
captures the maximum value. So it ends up containing the longest DMA
transaction time.
This register is very useful for troubleshooting and research purposes. If you
are dealing with time-sensitive networks, this register can help you get
an idea of your "I210-to-ring" latency. This helps answer questions like
"should I have PCIe ASPM enabled?" or "should I enable deep C-states?" on
a given system.
It is safe to read this register at any point, reading it has no effect on
the I210 chip functionality.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Artem Bityutskiy [Tue, 18 Jun 2019 11:55:12 +0000 (14:55 +0300)]
igb: minor ethtool regdump amendment
This patch has no functional impact and it is just a preparation
for the following patch. It removes an early return from the
'igb_get_regs()' function by moving the 82576-only registers
dump into an "if" block. With this preparation, we can dump more
non-82576 registers at the end of this function.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeff Kirsher [Mon, 17 Jun 2019 23:10:58 +0000 (16:10 -0700)]
iavf: Fix up debug print macro
This aligns the iavf_debug() macro with the other Intel drivers.
Add the bus number (a bus_id field) to i40e_bus_info so the output shows
each physical port (i.e. function) in the following format:
[[[[<domain>]:]<bus>]:][<slot>][.[<func>]]
Domains are numbered from 0 to ffff, buses from 0 to ff, slots from 0 to 1f
and functions from 0 to 7.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Arjan van de Ven [Sat, 15 Jun 2019 00:29:35 +0000 (17:29 -0700)]
e1000e: Reduce boot time by tightening sleep ranges
The e1000e driver is a great user of the usleep_range() API,
and has nice ranges that in principle help power management.
However, the ranges that are used only during system startup are
very long (and can easily add 100 msec to the boot time), while
the power savings of such long ranges are irrelevant due to the
one-off, boot-only nature of these functions.
This patch shrinks some of the longest ranges to be shorter
(while still using a power-friendly 1 msec range); this saves
100 msec+ of boot time on my BDW NUCs.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Gustavo A. R. Silva [Fri, 14 Jun 2019 23:23:20 +0000 (16:23 -0700)]
iavf: use struct_size() helper
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, replace code of the following form:
sizeof(struct virtchnl_ether_addr_list) + (count * sizeof(struct virtchnl_ether_addr))
with:
struct_size(veal, list, count)
and so on...
This code was detected with the help of Coccinelle.
Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Venkatesh Srinivas [Mon, 10 Jun 2019 21:27:50 +0000 (14:27 -0700)]
e1000: Use dma_wmb() instead of wmb() before doorbell writes
e1000 writes to doorbells to post transmit descriptors and fill the
receive ring. After writing descriptors to memory but before
writing to doorbells, use dma_wmb() rather than wmb(). wmb() is more
heavyweight than necessary for a device to see descriptor writes.
On x86, this avoids SFENCEs before doorbell writes in both the
Tx and Rx paths. On ARM, this converts DSB ST -> DMB OSHST.
Tested: 82576EB / x86; QEMU (qemu emulates an 8257x)
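The pattern boils down to the following (a sketch; descriptor field names
follow e1000 conventions but the helper itself is illustrative):
static void post_tx_desc_sketch(struct e1000_tx_desc *desc, void __iomem *tdt,
				dma_addr_t dma, u32 cmd_len, u32 next_to_use)
{
	desc->buffer_addr = cpu_to_le64(dma);
	desc->lower.data = cpu_to_le32(cmd_len);

	dma_wmb();			/* order descriptor writes ...   */
	writel(next_to_use, tdt);	/* ... before the doorbell write */
}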
Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Fri, 7 Jun 2019 18:19:20 +0000 (19:19 +0100)]
ixgbe: fix potential u32 overflow on shift
The u32 variable rem is being shifted using u32 arithmetic; however,
it is being passed to div_u64, which expects the expression to be a u64.
The 32-bit shift may potentially overflow, so cast rem to a u64 before
shifting to avoid this. Also remove the comment about overflow.
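Generic illustration of the fix (the shift amount and names are made up):
static u64 scale_remainder_sketch(u32 rem, u32 divisor)
{
	/* Cast before shifting: "rem << 22" would be evaluated in 32-bit
	 * arithmetic and could silently overflow.
	 */
	return div_u64((u64)rem << 22, divisor);
}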
Addresses-Coverity: ("Unintentional integer overflow")
Fixes: cd4583206990 ("ixgbe: implement support for SDP/PPS output on X550 hardware")
Fixes: 68d9676fc04e ("ixgbe: fix PTP SDP pin setup on X540 hardware")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Dann Frazier [Wed, 22 May 2019 23:22:58 +0000 (17:22 -0600)]
ixgbe: Avoid NULL pointer dereference with VF on non-IPsec hw
An ipsec structure will not be allocated if the hardware does not support
offload. Fixes the following Oops:
[ 191.045452] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
[ 191.054232] Mem abort info:
[ 191.057014] ESR = 0x96000004
[ 191.060057] Exception class = DABT (current EL), IL = 32 bits
[ 191.065963] SET = 0, FnV = 0
[ 191.069004] EA = 0, S1PTW = 0
[ 191.072132] Data abort info:
[ 191.074999] ISV = 0, ISS = 0x00000004
[ 191.078822] CM = 0, WnR = 0
[ 191.081780] user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000043d9e467
[ 191.088382] [0000000000000000] pgd=0000000000000000
[ 191.093252] Internal error: Oops: 96000004 [#1] SMP
[ 191.098119] Modules linked in: vhost_net vhost tap vfio_pci vfio_virqfd vfio_iommu_type1 vfio xt_CHECKSUM iptable_mangle ipt_MASQUERADE iptable_nat nf_nat_ipv4 nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter devlink ebtables ip6table_filter ip6_tables iptable_filter bpfilter ipmi_ssif nls_iso8859_1 input_leds joydev ipmi_si hns_roce_hw_v2 ipmi_devintf hns_roce ipmi_msghandler cppc_cpufreq sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 ses enclosure btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_generic usbhid hid raid6_pq libcrc32c raid1 raid0 multipath linear ixgbevf hibmc_drm ttm
[ 191.168607] drm_kms_helper aes_ce_blk aes_ce_cipher syscopyarea crct10dif_ce sysfillrect ghash_ce qla2xxx sysimgblt sha2_ce sha256_arm64 hisi_sas_v3_hw fb_sys_fops sha1_ce uas nvme_fc mpt3sas ixgbe drm hisi_sas_main nvme_fabrics usb_storage hclge scsi_transport_fc ahci libsas hnae3 raid_class libahci xfrm_algo scsi_transport_sas mdio aes_neon_bs aes_neon_blk crypto_simd cryptd aes_arm64
[ 191.202952] CPU: 94 PID: 0 Comm: swapper/94 Not tainted 4.19.0-rc1+ #11
[ 191.209553] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.20.01 04/26/2019
[ 191.218064] pstate: 20400089 (nzCv daIf +PAN -UAO)
[ 191.222873] pc : ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe]
[ 191.228093] lr : ixgbe_msg_task+0x2d0/0x1088 [ixgbe]
[ 191.233044] sp : ffff000009b3bcd0
[ 191.236346] x29: ffff000009b3bcd0 x28: 0000000000000000
[ 191.241647] x27: ffff000009628000 x26: 0000000000000000
[ 191.246946] x25: ffff803f652d7600 x24: 0000000000000004
[ 191.252246] x23: ffff803f6a718900 x22: 0000000000000000
[ 191.257546] x21: 0000000000000000 x20: 0000000000000000
[ 191.262845] x19: 0000000000000000 x18: 0000000000000000
[ 191.268144] x17: 0000000000000000 x16: 0000000000000000
[ 191.273443] x15: 0000000000000000 x14: 0000000100000026
[ 191.278742] x13: 0000000100000025 x12: ffff8a5f7fbe0df0
[ 191.284042] x11: 000000010000000b x10: 0000000000000040
[ 191.289341] x9 : 0000000000001100 x8 : ffff803f6a824fd8
[ 191.294640] x7 : ffff803f6a825098 x6 : 0000000000000001
[ 191.299939] x5 : ffff000000f0ffc0 x4 : 0000000000000000
[ 191.305238] x3 : ffff000028c00000 x2 : ffff803f652d7600
[ 191.310538] x1 : 0000000000000000 x0 : ffff000000f205f0
[ 191.315838] Process swapper/94 (pid: 0, stack limit = 0x00000000addfed5a)
[ 191.322613] Call trace:
[ 191.325055] ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe]
[ 191.329927] ixgbe_msg_task+0x2d0/0x1088 [ixgbe]
[ 191.334536] ixgbe_msix_other+0x274/0x330 [ixgbe]
[ 191.339233] __handle_irq_event_percpu+0x78/0x270
[ 191.343924] handle_irq_event_percpu+0x40/0x98
[ 191.348355] handle_irq_event+0x50/0xa8
[ 191.352180] handle_fasteoi_irq+0xbc/0x148
[ 191.356263] generic_handle_irq+0x34/0x50
[ 191.360259] __handle_domain_irq+0x68/0xc0
[ 191.364343] gic_handle_irq+0x84/0x180
[ 191.368079] el1_irq+0xe8/0x180
[ 191.371208] arch_cpu_idle+0x30/0x1a8
[ 191.374860] do_idle+0x1dc/0x2a0
[ 191.378077] cpu_startup_entry+0x2c/0x30
[ 191.381988] secondary_start_kernel+0x150/0x1e0
[ 191.386506] Code: 6b15003f 54000320 f1404a9f 54000060 (79400260)
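The shape of the fix is presumably an early bail-out when no ipsec context was ever allocated; a hedged sketch only (signature and field names assumed, not the literal patch):
	void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
	{
		struct ixgbe_ipsec *ipsec = adapter->ipsec;

		/* hw without IPsec offload never allocates adapter->ipsec */
		if (!ipsec)
			return;

		/* ... walk and clear the SAs owned by 'vf' as before ... */
	}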
Fixes:
eda0333ac2930 ("ixgbe: add VF IPsec management")
Signed-off-by: Dann Frazier <dann.frazier@canonical.com>
Acked-by: Shannon Nelson <snelson@pensando.io>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Miguel Bernal Marin [Mon, 27 Mar 2017 22:01:56 +0000 (16:01 -0600)]
e1000e: Increase pause and refresh time
Suggested-by: Tim Pepper <timothy.c.pepper@linux.intel.com>
Signed-off-by: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Gustavo A. R. Silva [Fri, 29 Mar 2019 23:38:47 +0000 (16:38 -0700)]
ice: Use struct_size() helper
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
struct boo entry[];
};
size = sizeof(struct foo) + count * sizeof(struct boo);
instance = alloc(size, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
size = struct_size(instance, entry, count);
This code was detected with the help of Coccinelle.
Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
David S. Miller [Fri, 28 Jun 2019 21:45:34 +0000 (14:45 -0700)]
Merge branch 'net-sched-Add-txtime-assist-support-for-taprio'
Vedang Patel says:
====================
net/sched: Add txtime-assist support for taprio.
Changes in v6:
- Use _BITUL() instead of BIT() in UAPI for etf. (patch #1)
- Fix a bug reported by kbuild test bot in length_to_duration(). (patch #6)
- Remove an unused function (get_cycle_start()). (Patch #6)
Changes in v5:
- Commit message improved for the igb patch (patch #1).
- Fixed typo in commit message for etf patch (patch #2).
Changes in v4:
- Remove inline directive from functions in foo.c.
- Fix spacing in pkt_sched.h (for etf patch).
Changes in v3:
- Simplify implementation for taprio flags.
- txtime_delay can only be set if txtime-assist mode is enabled.
- txtime_delay and flags will only be visible in tc output if set by user.
- Minor changes in error reporting.
Changes in v2:
- Txtime-offload has now been renamed to txtime-assist mode.
- Renamed the offload parameter to flags.
- Removed the code which introduced the hardware offloading functionality.
Original Cover letter (with above changes included)
--------------------------------------------------
Currently, we are seeing packets being transmitted outside their
timeslices. We can confirm that the packets are being dequeued at the right
time. So, the delay is induced after the packet is dequeued, because
taprio, without any offloading, has no control of when a packet is actually
transmitted.
In order to solve this, we are making use of the txtime feature provided by
ETF qdisc. Hardware offloading needs to be supported by the ETF qdisc in
order to take advantage of this feature. The taprio qdisc will assign
txtime (in skb->tstamp) for all the packets which do not have the txtime
allocated via the SO_TXTIME socket option. For the packets which already
have SO_TXTIME set, taprio will validate whether the packet will be
transmitted in the correct interval.
In order to support this, the following parameters have been added:
- flags (taprio): This is added in order to support different offloading
modes which will be added in the future.
- txtime-delay (taprio): This indicates the minimum time it will take for
the packet to hit the wire after it reaches taprio_enqueue(). This is
useful in determining whether we can transmit the packet in the remaining
time if the gate corresponding to the packet is currently open.
- skip_sock_check (ETF): ETF currently drops any packet which does not have
the SO_TXTIME socket option set. This check can be skipped by specifying
this option.
Following is an example configuration:
tc qdisc replace dev $IFACE parent root handle 100 taprio \\
num_tc 3 \\
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \\
queues 1@0 1@0 1@0 \\
base-time $BASE_TIME \\
sched-entry S 01 300000 \\
sched-entry S 02 300000 \\
sched-entry S 04 400000 \\
flags 0x1 \\
txtime-delay 200000 \\
clockid CLOCK_TAI
tc qdisc replace dev $IFACE parent 100:1 etf \\
offload delta 200000 clockid CLOCK_TAI skip_sock_check
Here, the "flags" parameter is indicating that the txtime-assist mode is
enabled. Also, all the traffic classes have been assigned the same queue.
This is to prevent the traffic classes in the lower priority queues from
getting starved. Note that this configuration is specific to the i210
ethernet card. Other network cards, where the hardware queues are given the
same priority, might be able to utilize more than one queue.
Following are some of the other highlights of the series:
- Fix a bug where hardware timestamping and SO_TXTIME options cannot be
used together. (Patch 1)
- Introduces the skip_skb_check option. (Patch 2)
- Make TxTime assist mode work with TCP packets (Patch 7).
The following changes are recommended to be done in order to get the best
performance from taprio in this mode:
ip link set dev enp1s0 mtu 1514
ethtool -K eth0 gso off
ethtool -K eth0 tso off
ethtool --set-eee eth0 eee off
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:19 +0000 (15:07 -0700)]
taprio: Adjust timestamps for TCP packets
When the taprio qdisc is running in "txtime offload" mode, it will
set the launchtime value (in skb->tstamp) for all the packets which do
not have the SO_TXTIME socket option. But, the TCP packets already have
this value set and it indicates the earliest departure time represented
in CLOCK_MONOTONIC clock.
We need to respect the timestamp set by the TCP subsystem. So, convert
this time to the clock which taprio is using and ensure that the packet
is not transmitted before the deadline set by TCP.
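Conceptually this means converting the TCP earliest-departure stamp and keeping the later of the two deadlines; a sketch only (assuming the qdisc clock is CLOCK_TAI; ktime_mono_to_any() is the stock helper for the clock hop, the function name is hypothetical):
	static ktime_t respect_tcp_deadline(ktime_t tcp_edt_mono, ktime_t taprio_txtime)
	{
		/* TCP stamps skb->tstamp in CLOCK_MONOTONIC; taprio here uses TAI */
		ktime_t tcp_edt_tai = ktime_mono_to_any(tcp_edt_mono, TK_OFFS_TAI);

		/* never launch earlier than TCP asked for */
		return max(tcp_edt_tai, taprio_txtime);
	}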
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:18 +0000 (15:07 -0700)]
taprio: make clock reference conversions easier
Later in this series we will need to transform from
CLOCK_MONOTONIC (used in TCP) to the clock reference used in TAPRIO.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:17 +0000 (15:07 -0700)]
taprio: Add support for txtime-assist mode
Currently, we are seeing non-critical packets being transmitted outside of
their timeslice. We can confirm that the packets are being dequeued at the
right time. So, the delay is induced in the hardware side. The most likely
reason is the hardware queues are starving the lower priority queues.
In order to improve the performance of taprio, we will be making use of the
txtime feature provided by the ETF qdisc. For all the packets which do not
have the SO_TXTIME option set, taprio will set the transmit timestamp (set
in skb->tstamp) in this mode. The taprio qdisc will ensure that the transmit
time for the packet is set to when the gate is open. If SO_TXTIME is set,
taprio will instead validate whether the timestamp (in skb->tstamp) falls
within the interval when the gate corresponding to the skb's traffic class is open.
The following two parameters are added to support this mode:
- flags: used to enable txtime-assist mode. Will also be used to enable
other modes (like hardware offloading) later.
- txtime-delay: This indicates the minimum time it will take for the packet
to hit the wire. This is useful in determining whether we can transmit
the packet in the remaining time if the gate corresponding to the packet is
currently open.
An example configuration for enabling txtime-assist:
tc qdisc replace dev eth0 parent root handle 100 taprio \\
num_tc 3 \\
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \\
queues 1@0 1@0 1@0 \\
base-time 1558653424279842568 \\
sched-entry S 01 300000 \\
sched-entry S 02 300000 \\
sched-entry S 04 400000 \\
flags 0x1 \\
txtime-delay 40000 \\
clockid CLOCK_TAI
tc qdisc replace dev $IFACE parent 100:1 etf skip_sock_check \\
offload delta 200000 clockid CLOCK_TAI
Note that all the traffic classes are mapped to the same queue. This is
only possible in taprio when txtime-assist is enabled. Also, note that the
ETF Qdisc is enabled with offload mode set.
In this mode, if the gate for the packet's traffic class is open and the
complete packet can be transmitted, taprio will try to transmit the packet
immediately. This is done by setting skb->tstamp to current_time plus the
delta indicated in the txtime-delay parameter, which represents the time
taken (in software) for the packet to reach the network adapter.
If the packet cannot be transmitted in the current interval, or if the
packet's traffic class is not currently transmitting, skb->tstamp is set to
the next available timestamp value. This is tracked in the next_launchtime
parameter in struct sched_entry.
The behaviour w.r.t admin and oper schedules is not changed from what is
present in software mode.
The transmit time is already known in advance. So, we do not need the HR
timers to advance the schedule and wakeup the dequeue side of taprio. So,
HR timer won't be run when this mode is enabled.
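A rough sketch of the launch-time selection described above (all names here are hypothetical, not the taprio symbols):
	static ktime_t pick_launchtime(ktime_t now, ktime_t txtime_delay,
				       bool gate_open_and_fits,
				       ktime_t next_launchtime)
	{
		if (gate_open_and_fits)
			/* transmit as soon as the stack can hand it to the NIC */
			return ktime_add(now, txtime_delay);

		/* otherwise defer to the next interval tracked in the schedule */
		return next_launchtime;
	}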
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:16 +0000 (15:07 -0700)]
taprio: Remove inline directive
Remove inline directive from length_to_duration(). We will let the compiler
make the decisions.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:15 +0000 (15:07 -0700)]
taprio: calculate cycle_time when schedule is installed
cycle time for a particular schedule is calculated only when it is first
installed. So, it makes sense to just calculate it once right after the
'cycle_time' parameter has been parsed and store it in cycle_time.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:14 +0000 (15:07 -0700)]
etf: Add skip_sock_check
Currently, etf expects a socket with SO_TXTIME option set for each packet
it encounters, so it will drop all other packets. But in future commits we
are planning to add functionality where the tstamp value will be set
by another qdisc. Also, some packets which are generated from within the
kernel (e.g. ICMP packets) do not have any socket associated with them.
So, this commit adds support for skip_sock_check. When this option is set,
etf will skip checking for a socket and other associated options for all
skbs.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:13 +0000 (15:07 -0700)]
etf: Don't use BIT() in UAPI headers.
The BIT() macro isn't exported as part of the UAPI interface, so the
compile test that ensures UAPI headers are self-contained fails. Use
_BITUL() instead.
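For reference, a UAPI-safe flag definition looks like the following (the first two names exist in pkt_sched.h; the third is the flag this series adds and is shown here as an assumption):
	#include <linux/const.h>

	#define TC_ETF_DEADLINE_MODE_ON   _BITUL(0)
	#define TC_ETF_OFFLOAD_ON         _BITUL(1)
	#define TC_ETF_SKIP_SOCK_CHECK    _BITUL(2)   /* added later in this series */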
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:12 +0000 (15:07 -0700)]
igb: clear out skb->tstamp after reading the txtime
If a packet which is utilizing the launchtime feature (via SO_TXTIME socket
option) also requests the hardware transmit timestamp, the hardware
timestamp is not delivered to the userspace. This is because the value in
skb->tstamp is mistaken as the software timestamp.
Applications, like ptp4l, request a hardware timestamp by setting the
SOF_TIMESTAMPING_TX_HARDWARE socket option. Whenever a new timestamp is
detected by the driver (this work is done in igb_ptp_tx_work() which calls
igb_ptp_tx_hwtstamps() in igb_ptp.c[1]), it will queue the timestamp in the
ERR_QUEUE for the userspace to read. When the userspace is ready, it will
issue a recvmsg() call to collect this timestamp. The problem is in this
recvmsg() call. If the skb->tstamp is not cleared out, it will be
interpreted as a software timestamp and the hardware tx timestamp will not
be successfully sent to the userspace. Look at skb_is_swtx_tstamp() and the
callee function __sock_recv_timestamp() in net/socket.c for more details.
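In other words, once the launch time has been consumed by the Tx path the field has to be zeroed; roughly (a sketch, not the exact igb hunk):
	/* read the requested launch time ... */
	ktime_t txtime = skb->tstamp;

	/* ... program it into the Tx descriptor, then clear the field so a
	 * later hardware Tx timestamp is not mistaken for a software
	 * timestamp by __sock_recv_timestamp() */
	skb->tstamp = ktime_set(0, 0);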
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 28 Jun 2019 21:36:25 +0000 (14:36 -0700)]
Merge branch 'mirred-recurse'
John Hurley says:
====================
Track recursive calls in TC act_mirred
These patches aim to prevent act_mirred causing stack overflow events from
recursively calling packet xmit or receive functions. Such events can
occur with poor TC configuration that causes packets to travel in loops
within the system.
Florian Westphal advises that a recursion crash and packets looping are
separate issues and should be treated as such. David Miller further points
out that pcpu counters cannot track the precise skb context required to
detect loops. Hence these patches are not aimed at detecting packet loops;
rather, they prevent stack overflows arising from such loops.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Mon, 24 Jun 2019 22:13:36 +0000 (23:13 +0100)]
net: sched: protect against stack overflow in TC act_mirred
TC hooks allow the application of filters and actions to packets at both
ingress and egress of the network stack. It is possible, with poor
configuration, that this can produce loops whereby an ingress hook calls
a mirred egress action that has an egress hook that redirects back to
the first ingress etc. The TC core classifier protects against loops when
doing reclassifies but there is no protection against a packet looping
between multiple hooks and recursively calling act_mirred. This can lead
to stack overflow panics.
Add a per-CPU counter to act_mirred that is incremented for each recursive
call of the action function when processing a packet. If the limit is
exceeded, the packet is dropped and the per-CPU counter is reset.
Note that this patch does not protect against loops in TC datapaths. Its
aim is to prevent stack overflow kernel panics that can be a consequence
of such loops.
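A minimal sketch of such a guard (names, limit and return value are illustrative, not the exact act_mirred symbols):
	static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
	#define MIRRED_NEST_LIMIT 4

	static int mirred_xmit_guarded(struct sk_buff *skb)
	{
		unsigned int nest = __this_cpu_inc_return(mirred_nest_level);
		int ret;

		if (unlikely(nest > MIRRED_NEST_LIMIT)) {
			/* a loop is eating the stack: drop and reset the counter */
			kfree_skb(skb);
			__this_cpu_write(mirred_nest_level, 0);
			return -ELOOP;
		}

		ret = dev_queue_xmit(skb);	/* may re-enter act_mirred */
		__this_cpu_dec(mirred_nest_level);
		return ret;
	}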
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Mon, 24 Jun 2019 22:13:35 +0000 (23:13 +0100)]
net: sched: refactor reinsert action
The TC_ACT_REINSERT return type was added as an in-kernel only option to
allow a packet ingress or egress redirect. This is used to avoid
unnecessary skb clones in situations where they are not required. If a TC
hook returns this code then the packet is 'reinserted' and no skb consume
is carried out as no clone took place.
This return type is only used in act_mirred. Rather than have the reinsert
called from the main datapath, call it directly in act_mirred. Instead of
returning TC_ACT_REINSERT, change the type to the new TC_ACT_CONSUMED
which tells the caller that the packet has been stolen by another process
and that no consume call is required.
Moving all redirect calls to the act_mirred code is in preparation for
tracking recursion created by act_mirred.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Brauner [Mon, 24 Jun 2019 13:29:23 +0000 (15:29 +0200)]
ipv4: enable route flushing in network namespaces
Tools such as vpnc try to flush routes when run inside network
namespaces by writing 1 into /proc/sys/net/ipv4/route/flush. This
currently does not work because flush is not enabled in non-initial
network namespaces.
Since routes are per network namespace it is safe to enable
/proc/sys/net/ipv4/route/flush in there.
Link: https://github.com/lxc/lxd/issues/4257
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 28 Jun 2019 16:48:24 +0000 (09:48 -0700)]
Merge tag 'batadv-next-for-davem-20190627v2' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This feature/cleanup patchset includes the following patches:
- bump version strings, by Simon Wunderlich
- fix includes for _MAX constants, atomic functions and fwdecls,
by Sven Eckelmann (3 patches)
- shorten multicast tt/tvlv worker spinlock section, by Linus Luessing
- routeable multicast preparations: implement MAC multicast filtering,
by Linus Luessing (2 patches, David Millers comments integrated)
- remove return value checks for debugfs_create, by Greg Kroah-Hartman
- add routable multicast optimizations, by Linus Luessing (2 patches)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 28 Jun 2019 16:39:50 +0000 (09:39 -0700)]
Merge branch 'hns3-next'
Huazhong Tan says:
====================
net: hns3: some code optimizations & cleanups & bugfixes
[patch 01/12] fixes a TX timeout issue.
[patch 02/12 - 04/12] add some patches related to the TM module.
[patch 05/12] fixes a compile warning.
[patch 06/12] adds Asym Pause support for autoneg
[patch 07/12] optimizes the error handler for VF reset.
[patch 08/12] deals with the empty interrupt case.
[patch 09/12 - 12/12] adds some cleanups & optimizations.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Peng Li [Fri, 28 Jun 2019 11:50:18 +0000 (19:50 +0800)]
net: hns3: optimize the CSQ cmd error handling
If the CMDQ ring is full, hclge_cmd_send() may return directly, but the IMP
keeps working and the HW pointer changes, so the SW ring pointer no longer
matches the HW pointer. This patch updates the SW pointer every time the
ring is full, so the next send can work normally as long as the IMP and HW
are still running.
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 28 Jun 2019 11:50:17 +0000 (19:50 +0800)]
net: hns3: remove RXD_VLD check in hns3_handle_bdinfo
The HNS3_RXD_VLD_B bit has already been checked in hns3_add_frag
or hns3_handle_rx_bd before calling hns3_handle_bdinfo, so when
hns3_handle_bdinfo is called, the HNS3_RXD_VLD_B bit is always
set, which makes the checking in hns3_handle_bdinfo unnecessary.
This patch removes the RXD_VLD_B checking in hns3_handle_bdinfo.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jian Shen [Fri, 28 Jun 2019 11:50:16 +0000 (19:50 +0800)]
net: hns3: remove unused linkmode definition
This patch removes unused linkmode definition.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yufeng Mo [Fri, 28 Jun 2019 11:50:15 +0000 (19:50 +0800)]
net: hns3: fix a statistics issue about l3l4 checksum error
The ifconfig 'frame' column is based on rx_crc_errors and rx_frame_errors, so
l3l4 checksum errors should not be counted in rx_crc_errors. Instead,
l3l4 checksum errors should be counted in the ifconfig 'error' column.
Fixes:
d3ec4ef66937 ("net: hns3: refactor the statistics updating for netdev")
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Fri, 28 Jun 2019 11:50:14 +0000 (19:50 +0800)]
net: hns3: handle empty unknown interrupt
Since some MSI-X interrupts' status may be cleared by hardware,
when the driver receives such an interrupt, reading the
HCLGE_VECTOR0_PF_OTHER_INT_STS_REG register yields an empty,
unknown interrupt. In this case, the irq handler should re-enable
the vector0 interrupt. This patch also uses dev_info() instead of
dev_dbg() in hclge_check_event_cause(), since this information
is useful during normal usage.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Fri, 28 Jun 2019 11:50:13 +0000 (19:50 +0800)]
net: hns3: re-schedule reset task while VF reset fail
The VF reset may fail for probabilistic reasons, such as a timeout
waiting for the hardware reset or for the mailbox response, so this
patch re-schedules the reset task while the number of reset failures
is under HCLGEVF_RESET_MAX_FAIL_CNT. This patch also adds a function,
hclgevf_reset_err_handle(), to handle reset failures.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yonglong Liu [Fri, 28 Jun 2019 11:50:12 +0000 (19:50 +0800)]
net: hns3: add Asym Pause support to fix autoneg problem
The local device and the link partner both have auto-negotiation enabled;
the local device configures pause frames as rx on/tx off, while the
link partner configures them as rx off/tx on.
We expect the result to be:
Local device:
Autonegotiate: on
RX: on
TX: off
RX negotiated: on
TX negotiated: off
Link partner:
Autonegotiate: on
RX: off
TX: on
RX negotiated: off
TX negotiated: on
But actually, the result of Local device and link partner is both:
Autonegotiate: on
RX: off
TX: off
RX negotiated: off
TX negotiated: off
The root cause is that the supported flag has only Pause set; see the
function genphy_config_advert():
static int genphy_config_advert(struct phy_device *phydev)
{
...
linkmode_and(phydev->advertising, phydev->advertising,
phydev->supported);
...
}
The link partner's pause-frame configuration is rx off/tx on, so its
advertising sets only the Asym_Pause bit, while the supported mask
sets only the Pause bit; the result of linkmode_and() is therefore
rx off/tx off.
This patch adds Asym_Pause to the supported flag to fix it.
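Walking through the linkmode math with the scenario above (a worked illustration using the stock linkmode helpers, not code taken from the driver):
	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported);

	linkmode_zero(advertising);
	linkmode_zero(supported);

	/* link partner wants rx off / tx on, so it advertises only Asym_Pause */
	linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, advertising);
	/* the driver's supported mask carries only Pause */
	linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, supported);

	/* genphy_config_advert() does effectively this: */
	linkmode_and(advertising, advertising, supported);
	/* advertising is now empty, so the negotiated pause is rx off / tx off
	 * on both ends; adding Asym_Pause to supported keeps the bit alive. */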
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yonglong Liu [Fri, 28 Jun 2019 11:50:11 +0000 (19:50 +0800)]
net: hns3: fix a -Wformat-nonliteral compile warning
When setting -Wformat=2, there is a compiler warning like this:
hclge_main.c:xxx:x: warning: format not a string literal and no
format arguments [-Wformat-nonliteral]
strs[i].desc);
^~~~
This patch adds missing format parameter "%s" to snprintf() to
fix it.
Fixes:
46a3df9f9718 ("Add HNS3 Acceleration Engine & Compatibility Layer Support")
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 28 Jun 2019 11:50:10 +0000 (19:50 +0800)]
net: hns3: add some error checking in hclge_tm module
When hdev->tx_sch_mode is HCLGE_FLAG_VNET_BASE_SCH_MODE,
hclge_tm_schd_mode_vnet_base_cfg calls hclge_tm_pri_schd_mode_cfg
with vport->vport_id as pri_id, which is used as an index into
hdev->tm_info.tc_info; this causes an out-of-bounds access if
vport_id is equal to or larger than HNAE3_MAX_TC.
Also, the hardware only supports a maximum speed of HCLGE_ETHER_MAX_RATE.
So this patch adds two checks for the above cases.
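The checks described above amount to something like the following sketch (the speed variable, error codes and placement are assumptions):
	if (vport->vport_id >= HNAE3_MAX_TC)
		return -EINVAL;		/* would index past tm_info.tc_info */

	if (speed > HCLGE_ETHER_MAX_RATE)
		return -EINVAL;		/* beyond what the shaper supports */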
Fixes:
848440544b41 ("net: hns3: Add support of TX Scheduler & Shaper to HNS3 driver")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 28 Jun 2019 11:50:09 +0000 (19:50 +0800)]
net: hns3: change SSU's buffer allocation according to UM
Currently, when there is shared buffer in the SSU (storage
switching unit), the low waterline for the RX private buffer is
too low to keep the hardware running. Hardware may have
processed all the packets stored in the private buffer below the
low waterline before a new packet arrives, because hardware
only tells the peer to send packets again when the private buffer
is under the low waterline.
So this patch only allocates RX private buffer if there is
enough buffer according to the hardware user manual.
This patch also reserves some buffer for reuse when the TC num
is less than or equal to 2, and changes PAUSE_TRANS_GAP &
HCLGE_NON_DCB_ADDITIONAL_BUF according to the hardware user
manual.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>