Petr Machata [Tue, 10 Jul 2018 07:02:59 +0000 (10:02 +0300)]
mlxsw: spectrum_span: Change LAG lower selection
When offloading mirror-to-gretap, mlxsw needs to preroute the path that
the encapsulated packet will take. That path may include a LAG device
above a front panel port. So far, mlxsw resolved the path to the first
front panel slave of the LAG interface that was up, but that only reflects
the administrative state of the port. It neglects to consider whether the
port actually has a carrier, and what the LACP state is.
So instead of checking whether the device is administratively up, check its
carrier state and txability.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 10 Jul 2018 07:02:58 +0000 (10:02 +0300)]
net: Add lag.h, net_lag_port_dev_txable()
LAG devices (team or bond) recognize for each one of their slave devices
whether LAG traffic is going to be sent through that device. Bond calls
such devices "active", team calls them "txable". When this state
changes, a NETDEV_CHANGELOWERSTATE notification is distributed, together
with a netdev_notifier_changelowerstate_info structure that for LAG
devices includes a tx_enabled flag that refers to the new state. The
notification thus makes it possible to react to the changes in txability
in drivers.
However there's no way to query txability from the outside on demand.
That is problematic namely for mlxsw which, when resolving the ERSPAN
packet path, may encounter a LAG device and needs to determine which of the
slaves it should choose.
To that end, introduce a new function, net_lag_port_dev_txable(), which
determines whether a given slave device is "active" or
"txable" (depending on the flavor of the LAG device). That function then
dispatches to the per-LAG-flavor helpers, bond_is_active_slave_dev() and
team_port_dev_txable(), respectively.
Because there currently is no good place where net_lag_port_dev_txable()
should be added, introduce a new header file, lag.h, which should from
now on hold any logic common to both team and bond. (But keep
netif_is_lag_master() together with the rest of netif_is_*_master()
functions).
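A minimal sketch of such a dispatcher, based on the description above (assuming netif_is_team_port() is used to pick the LAG flavor; the exact body in lag.h may differ):

static inline bool net_lag_port_dev_txable(const struct net_device *port_dev)
{
    if (netif_is_team_port(port_dev))
        return team_port_dev_txable(port_dev);
    else
        return bond_is_active_slave_dev(port_dev);
}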
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Tue, 10 Jul 2018 07:02:57 +0000 (10:02 +0300)]
team: Publish team_port_get_rcu()
A follow-up patch adds a new entry point, team_port_dev_txable(). Making
it an ordinary exported function would mean that any module that may
need the service in one of the supported configurations also
unconditionally needs to pull in the team module, whether or not the
user actually intends to create team interfaces.
To prevent that, team_port_dev_txable() is defined in if_team.h, and
therefore all dependencies of that function also need to be
publicly visible.
Therefore move team_port_get_rcu() from team.c to if_team.h.
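For reference, the moved helper is a thin RCU accessor; a sketch, assuming the team port is stored as the slave's rx_handler_data, as the team driver does:

static inline struct team_port *team_port_get_rcu(const struct net_device *dev)
{
    return rcu_dereference(dev->rx_handler_data);
}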
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Travis Brown [Tue, 10 Jul 2018 00:35:01 +0000 (00:35 +0000)]
macvlan: Change status when lower device goes down
Today macvlan ignores the notification when a lower device goes
administratively down, preventing the lack of connectivity from
bubbling up.
Processing NETDEV_DOWN results in a macvlan state of LOWERLAYERDOWN
with NO-CARRIER which should be easy to interpret in userspace.
2: lower: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
3: macvlan@lower: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
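A hedged sketch of the notifier change described above, written as the case labels would appear inside the macvlan lower-device notifier (the surrounding switch and variables are assumed from the existing macvlan code):

    case NETDEV_UP:
    case NETDEV_DOWN:
    case NETDEV_CHANGE:
        list_for_each_entry(vlan, &port->vlans, list)
            netif_stacked_transfer_operstate(vlan->lowerdev,
                                             vlan->dev);
        break;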
Signed-off-by: Suresh Krishnan <skrishnan@arista.com>
Signed-off-by: Travis Brown <travisb@arista.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 12 Jul 2018 06:06:14 +0000 (23:06 -0700)]
Merge branch 'tipc-make-link-protocol-more-resilient'
Jon Maloy says:
====================
tipc: make link protocol more resilient
These two commits make the link protocol more resilient to
infrastructures with frequent packet duplication and long delays.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Mon, 9 Jul 2018 23:07:36 +0000 (01:07 +0200)]
tipc: check session number before accepting link protocol messages
In some virtual environments we observe a significantly higher number of
packet reorderings and delays than we have traditionally been used to.
This makes it necessary to apply stricter checks to incoming link protocol
messages' session numbers, which until now have only been validated for
RESET messages.
Since the other two message types, ACTIVATE and STATE messages, also
carry this number, it is easy to extend the validation check to those
messages.
We also introduce a flag indicating whether a link has a valid peer session
number or not. This eliminates the mixing of 32- and 16-bit arithmetic
we are currently using to achieve this.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Mon, 9 Jul 2018 23:07:35 +0000 (01:07 +0200)]
tipc: add sequence number check for link STATE messages
Some switch infrastructures produce huge amounts of packet duplicates.
This becomes a problem if those messages are STATE/NACK protocol
messages, causing unnecessary retransmissions of already accepted
packets.
We now introduce a unique sequence number per STATE protocol message
so that duplicates can be identified and ignored. This will also be
useful when tracing such cases, and to avert replay attacks when TIPC
is encrypted.
For compatibility reasons we have to introduce a new capability flag
TIPC_LINK_PROTO_SEQNO to handle this new feature.
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 12 Jul 2018 06:03:32 +0000 (23:03 -0700)]
Merge branch '10GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
L2 Fwd Offload & 10GbE Intel Driver Updates 2018-07-09
This patch series is meant to allow support for the L2 forward offload, aka
MACVLAN offload, without the need for using ndo_select_queue.
The existing solution currently requires that we use ndo_select_queue in
the transmit path if we want to associate specific Tx queues with a given
MACVLAN interface. In order to get away from this we need to repurpose the
tc_to_txq array and XPS pointer for the MACVLAN interface and use those as
a means of accessing the queues on the lower device. As a result we cannot
offload a device that is configured as multiqueue; however, it doesn't
really make sense to configure a macvlan interface as multiqueue anyway,
since it doesn't really have a qdisc of its own in the first place.
The big changes in this set are:
Allow lower device to update tc_to_txq and XPS map of offloaded MACVLAN
Disable XPS for single queue devices
Replace accel_priv with sb_dev in ndo_select_queue
Add sb_dev parameter to fallback function for ndo_select_queue
Consolidated ndo_select_queue functions that appeared to be duplicates
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Deepti Raghavan [Mon, 9 Jul 2018 17:53:39 +0000 (17:53 +0000)]
tcp: expose both send and receive intervals for rate sample
Congestion control algorithms, which access the rate sample
through the tcp_cong_control function, only have access to the maximum
of the send and receive interval, for cases where the acknowledgment
rate may be inaccurate due to ACK compression or decimation. Algorithms
may want to use send rates and receive rates as separate signals.
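A hedged sketch of how a congestion control module's cong_control hook might consume the two intervals as separate signals (the snd/rcv interval field names follow the patch description; the arithmetic is purely illustrative):

static void example_cong_control(struct sock *sk, const struct rate_sample *rs)
{
    u64 snd_bw = 0, rcv_bw = 0;    /* delivered packets per second */

    if (rs->delivered <= 0)
        return;
    if (rs->snd_interval_us > 0)   /* send-side (pacing-limited) estimate */
        snd_bw = (u64)rs->delivered * USEC_PER_SEC / rs->snd_interval_us;
    if (rs->rcv_interval_us > 0)   /* ACK-arrival-based estimate */
        rcv_bw = (u64)rs->delivered * USEC_PER_SEC / rs->rcv_interval_us;
    /* ... feed snd_bw and rcv_bw into the algorithm as separate signals ... */
}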
Signed-off-by: Deepti Raghavan <deeptir@mit.edu>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Mon, 9 Jul 2018 17:26:47 +0000 (20:26 +0300)]
net: sched: fix unprotected access to rcu cookie pointer
Fix the action attribute size calculation function to take the RCU read
lock and access the act_cookie pointer with rcu_dereference().
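A minimal sketch of the fixed pattern (struct and field names as used by the act API; treat the snippet as illustrative):

    struct tc_cookie *act_cookie;
    u32 cookie_len = 0;

    rcu_read_lock();
    act_cookie = rcu_dereference(act->act_cookie);
    if (act_cookie)
        cookie_len = nla_total_size(act_cookie->len);
    rcu_read_unlock();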
Fixes: eec94fdb0480 ("net: sched: use rcu for action cookie update")
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 12 Jul 2018 05:59:39 +0000 (22:59 -0700)]
Merge branch 'cxgb4-move-stats-fetched-from-firmware-to-debugfs'
Rahul Lakkireddy says:
====================
cxgb4: move stats fetched from firmware to debugfs
Some stats are fetched via slow firmware mailbox, which can cause
packet drops under heavy load. So, this series removes these stats
from ethtool -S and exposes them via debugfs.
Patch 1 removes stats fetched via firmware from ethtool -S.
Patch 2 exposes stats removed in Patch 1 via debugfs.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Mon, 9 Jul 2018 16:12:47 +0000 (21:42 +0530)]
cxgb4: expose stats fetched from firmware via debugfs
Expose stats obtained from firmware via debugfs. These stats can't
be part of ethtool -S because the slow firmware mailbox can cause
packet drops under heavy load.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Mon, 9 Jul 2018 16:12:46 +0000 (21:42 +0530)]
cxgb4: remove stats fetched from firmware
When running ethtool -S, some stats are requested from firmware.
Since getting these stats via firmware mailbox is slow, some packets
get dropped under heavy load while running ethtool -S.
So, remove these stats from ethtool -S.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Antoine Tenart [Mon, 9 Jul 2018 15:00:43 +0000 (17:00 +0200)]
net: mvpp2: explicitly include linux/interrupt.h
The Marvell PPv2 driver uses interrupts and tasklet but does not
explicitly include linux/interrupt.h, relying on implicit includes. This
one in particular is included only by chance, through a long and convoluted
chain of inclusions. Fix this so we do not get future build breaks.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jan Dakinevich [Mon, 9 Jul 2018 13:51:19 +0000 (16:51 +0300)]
cnic: use kvzalloc to allocate memory for csk_tbl
The size of csk_tbl is about 58K, which means a 3rd-order page allocation.
kvzalloc() provides a fallback if no high-order memory is available.
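A hedged sketch of the allocation change (table and macro names follow the driver; the exact call site may differ):

    /* kvzalloc() falls back to vmalloc() when a 3rd-order page is not
     * available; the matching free must then be kvfree(). */
    cp->csk_tbl = kvzalloc(array_size(MAX_CM_SK_TBL_SZ,
                                      sizeof(struct cnic_sock)),
                           GFP_KERNEL);
    if (!cp->csk_tbl)
        return -ENOMEM;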
Signed-off-by: Jan Dakinevich <jan.dakinevich@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Mon, 9 Jul 2018 12:23:13 +0000 (13:23 +0100)]
wimax/i2400m: remove redundant variables ack_status, bcf and protocol
Variables ack_status, bcf and protocol are being assigned but are
never used, hence they are redundant and can be removed.
Also declare ack_type as unsigned int rather than unsigned to clean
up a checkpatch warning.
Cleans up clang warnings:
warning: variable 'ack_status' set but not used [-Wunused-but-set-variable]
warning: variable 'bcf' set but not used [-Wunused-but-set-variable]
warning: variable 'protocol' set but not used [-Wunused-but-set-variable]
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Mon, 9 Jul 2018 11:33:26 +0000 (14:33 +0300)]
net: sched: act_ife: fix memory leak in ife init
Free params if tcf_idr_check_alloc() returned error.
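A minimal sketch of the error path described (variable names assumed for illustration):

    ret = tcf_idr_check_alloc(tn, &index, a, bind);
    if (ret < 0) {
        kfree(p);   /* don't leak the already-allocated params */
        return ret;
    }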
Fixes: 0190c1d452a9 ("net: sched: atomically check-allocate action")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arjun Vynipadath [Mon, 9 Jul 2018 11:22:03 +0000 (16:52 +0530)]
cxgb4: specify IQTYPE in fw_iq_cmd
The congestion argument passed to t4_sge_alloc_rxq() is used
to differentiate between NIC and offload (ofld) queues.
Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 12 Jul 2018 05:50:46 +0000 (22:50 -0700)]
Merge branch 'net-ipv6-addr_gen_mode-fixes'
Sabrina Dubroca says:
====================
net/ipv6: addr_gen_mode fixes
This series fixes bugs in handling of the addr_gen_mode option, mainly
related to the sysctl. A minor netlink issue was also present in the
initial commit introducing the option on a per-netdevice basis.
v2: add patch 4, requested by David Ahern during review of v1
add patch 5, missing documentation for the sysctl
patches 1, 2, 3 are unchanged
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Mon, 9 Jul 2018 10:25:18 +0000 (12:25 +0200)]
Documentation: ip-sysctl.txt: document addr_gen_mode
addr_gen_mode was introduced without documentation; add it now.
Fixes: d35a00b8e33d ("net/ipv6: allow sysctl to change link-local address generation mode")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Mon, 9 Jul 2018 10:25:17 +0000 (12:25 +0200)]
net/ipv6: propagate net.ipv6.conf.all.addr_gen_mode to devices
This aligns the addr_gen_mode sysctl with the expected behavior of the
"all" variant.
Fixes: d35a00b8e33d ("net/ipv6: allow sysctl to change link-local address generation mode")
Suggested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Mon, 9 Jul 2018 10:25:16 +0000 (12:25 +0200)]
net/ipv6: reserve room for IFLA_INET6_ADDR_GEN_MODE
inet6_ifla6_size() is called to check how much space is needed by
inet6_fill_link_af() and inet6_fill_ifinfo(), both of which include
the IFLA_INET6_ADDR_GEN_MODE attribute. Reserve some room for it.
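A hedged, partial sketch of the size accounting described (only the relevant terms are shown; the real function sums several more attributes):

static inline size_t inet6_ifla6_size(void)
{
    return nla_total_size(4)    /* IFLA_INET6_FLAGS */
         + nla_total_size(sizeof(struct ifla_cacheinfo))
         /* ... stats, conf and token attributes omitted here ... */
         + nla_total_size(1);   /* IFLA_INET6_ADDR_GEN_MODE (u8) */
}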
Fixes: bc91b0f07ada ("ipv6: addrconf: implement address generation modes")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Mon, 9 Jul 2018 10:25:15 +0000 (12:25 +0200)]
net/ipv6: don't reinitialize ndev->cnf.addr_gen_mode on new inet6_dev
The value has already been copied from this netns's devconf_dflt; it
shouldn't be reset to the global kernel default.
Fixes: d35a00b8e33d ("net/ipv6: allow sysctl to change link-local address generation mode")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Mon, 9 Jul 2018 10:25:14 +0000 (12:25 +0200)]
net/ipv6: fix addrconf_sysctl_addr_gen_mode
addrconf_sysctl_addr_gen_mode() has multiple problems. First, it ignores
the errors returned by proc_dointvec().
addrconf_sysctl_addr_gen_mode() calls proc_dointvec() directly, which
writes the value to memory, and then checks if it's valid and may return
EINVAL. If a bad value is given, the value displayed when reading
net.ipv6.conf.foo.addr_gen_mode next time will be invalid. In case the
value provided by the user was valid, addrconf_dev_config() won't be
called since idev->cnf.addr_gen_mode has already been updated.
Fix this in the usual way we deal with values that need to be checked
after the proc_do*() helper has returned: define a local ctl_table and
storage, call proc_dointvec() on that temporary area, then check and
store.
addrconf_sysctl_addr_gen_mode() also writes the new value to the global
ipv6_devconf_dflt, when we're writing to some netns's default, so that
new netns will inherit the value that was set by the change occurring in
any netns. That doesn't make any sense, so let's drop this assignment.
Finally, since addr_gen_mode is a __u32, switch to proc_douintvec().
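A hedged sketch of the temporary-table pattern described above (the validity helper is hypothetical; only the shape of the handler matters here):

static int addr_gen_mode_sysctl_sketch(struct ctl_table *ctl, int write,
                                       void __user *buffer, size_t *lenp,
                                       loff_t *ppos)
{
    u32 new_val = *((u32 *)ctl->data);
    struct ctl_table tmp = *ctl;
    int ret;

    tmp.data = &new_val;                    /* let proc_douintvec() write here */
    ret = proc_douintvec(&tmp, write, buffer, lenp, ppos);
    if (ret || !write)
        return ret;
    if (!addr_gen_mode_is_valid(new_val))   /* hypothetical validity check */
        return -EINVAL;
    *((u32 *)ctl->data) = new_val;          /* commit only once validated */
    return 0;
}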
Fixes: d35a00b8e33d ("net/ipv6: allow sysctl to change link-local address generation mode")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianbo Liu [Mon, 9 Jul 2018 02:26:20 +0000 (02:26 +0000)]
net/sched: flower: Fix null pointer dereference when run tc vlan command
Zahari issued a tc vlan command without setting vlan_ethtype, which will
crash the kernel. To avoid this, we must check that
tb[TCA_FLOWER_KEY_VLAN_ETH_TYPE] is not null before using it.
Also we don't need to dump vlan_ethtype or cvlan_ethtype in this case.
Fixes: d64efd0926ba ('net/sched: flower: Add supprt for matching on QinQ vlan headers')
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Reported-by: Zahari Doychev <zahari.doychev@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 8 Jul 2018 17:58:55 +0000 (19:58 +0200)]
selftests: forwarding: mirror_lib: Tighten up VLAN capture
The function do_test_span_vlan_dir_ips() is used for testing whether
mirrored packets are VLAN-encapsulated. But since it only considers
VLAN encapsulation, it may end up matching unmirrored ARP traffic as
well. One consequence is a rare failure of mirror_gre_vlan_bridge_1q's
test_gretap_untagged_egress. Decreasing ping cadence in mirror_test()
makes the problem easily reproducible.
Therefore tighten up the match criterion to only count those 802.1q
packets where the next header is IP.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 11 Jul 2018 03:06:35 +0000 (20:06 -0700)]
Merge branch 'cake-qdisc'
Toke Høiland-Jørgensen says:
====================
sched: Add Common Applications Kept Enhanced (cake) qdisc
This patch series adds the CAKE qdisc, and has been split up to ease
review.
I have attempted to split out each configurable feature into its own patch.
The first commit adds the base shaper and packet scheduler, while
subsequent commits add the optional features. The full userspace API and
most data structures are included in this commit, but options not
understood in the base version will be ignored.
The result of applying the entire series is identical to the out-of-tree
version that has seen extensive testing in previous deployments, most
notably as an out-of-tree patch to OpenWrt. However, note that I have only
compile-tested the individual patches, so the whole series should be
considered as a unit.
---
Changelog
v19:
- Rebase to current net-next.
- Don't rely on the value of sch->q.qlen to break loops; fixes possible
infinite loop on multi-queue devices.
- Don't overwrite NAT flag when setting flow mode.
v18:
- Rework classification logic in the diffserv case to always hash if
filter doesn't select a queue, and to run TC filters before
selecting the diffserv tin (allowing filter to influence this).
- Make sure we always call qdisc_watchdog_init() in cake_init(), so we
don't crash in cake_destroy().
v17:
- Rebase to newest net-next and move the conntrack callback to
nf_ct_hook
- Fix a compile error when NF_CONNTRACK is unset.
v16:
- Move conntrack lookup function into conntrack core and read it via
RCU so it is only active when the nf_conntrack module is loaded.
This avoids the module dependency on conntrack for NAT mode. Thanks
to Pablo for the idea.
v15:
- Handle ECN flags in ACK filter
v14:
- Handle seqno wraps and DSACKs in ACK filter
v13:
- Avoid ktime_t to scalar compares
- Add class dumping and basic stats
- Fail with ENOTSUPP when requesting NAT mode and conntrack is not
available.
- Parse all TCP options in ACK filter and make sure to only drop safe
ones. Also handle SACK ranges properly.
v12:
- Get rid of custom time typedefs. Use ktime_t for time and u64 for
duration instead.
v11:
- Fix overhead compensation calculation for GSO packets
- Change configured rate to be u64 (I ran out of bits before I ran out
of CPU when testing the effects of the above)
v10:
- Christmas tree gardening (fix variable declarations to be in reverse
line length order)
v9:
- Remove duplicated checks around kvfree() and just call it
unconditionally.
- Don't pass __GFP_NOWARN when allocating memory
- Move options in cake_dump() that are related to optional features to
later patches implementing the features.
- Support attaching filters to the qdisc and use the classification
result to select flow queue.
- Support overriding diffserv priority tin from skb->priority
v8:
- Remove inline keyword from function definitions
- Simplify ACK filter; remove the complex state handling to make the
logic easier to follow. This will potentially be a bit less efficient,
but I have not been able to measure a difference.
v7:
- Split up patch into a series to ease review.
- Constify the ACK filter.
v6:
- Fix 6in4 encapsulation checks in ACK filter code
- Checkpatch fixes
v5:
- Refactor ACK filter code and hopefully fix the safety issues
properly this time.
v4:
- Only split GSO packets if shaping at speeds <= 1Gbps
- Fix overhead calculation code to also work for GSO packets
- Don't re-implement kvzalloc()
- Remove local header include from out-of-tree build (fixes kbuild-bot
complaint).
- Several fixes to the ACK filter:
- Check pskb_may_pull() before deref of transport headers.
- Don't run ACK filter logic on split GSO packets
- Fix TCP sequence number compare to deal with wraparounds
v3:
- Use IS_REACHABLE() macro to fix compilation when sch_cake is
built-in and conntrack is a module.
- Switch the stats output to use nested netlink attributes instead
of a versioned struct.
- Remove GPL boilerplate.
- Fix array initialisation style.
v2:
- Fix kbuild test bot complaint
- Clean up the netlink ABI
- Fix checkpatch complaints
- A few tweaks to the behaviour of cake based on testing carried out
while writing the paper.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Conditionally split GSO segments
At lower bandwidths, the transmission time of a single GSO segment can add
an unacceptable amount of latency due to HOL blocking. Furthermore, with a
software shaper, any tuning mechanism employed by the kernel to control the
maximum size of GSO segments is thrown off by the artificial limit on
bandwidth. For this reason, we split GSO segments into their individual
packets iff the shaper is active and configured to a bandwidth <= 1 Gbps.
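A hedged sketch of the conditional split at enqueue time (the rate field and the per-segment enqueue helper are assumptions for illustration):

    if (skb_is_gso(skb) && q->rate_bps && q->rate_bps <= 1000000000ULL) {
        netdev_features_t features = netif_skb_features(skb);
        struct sk_buff *segs, *next;

        segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
        if (IS_ERR_OR_NULL(segs))
            return qdisc_drop(skb, sch, to_free);
        while (segs) {                   /* enqueue each segment on its own */
            next = segs->next;
            segs->next = NULL;
            cake_enqueue_one(segs, sch); /* hypothetical per-segment helper */
            segs = next;
        }
        consume_skb(skb);                /* the original super-packet is done */
        return NET_XMIT_SUCCESS;
    }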
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Add overhead compensation support to the rate shaper
This commit adds configurable overhead compensation support to the rate
shaper. With this feature, userspace can configure the actual bottleneck
link overhead and encapsulation mode used, which will be used by the shaper
to calculate the precise duration of each packet on the wire.
This feature is needed because CAKE is often deployed one or two hops
upstream of the actual bottleneck (which can be, e.g., inside a DSL or
cable modem). In this case, the link layer characteristics and overhead
reported by the kernel do not match the actual bottleneck. Being able to
set the actual values in use makes it possible to configure the shaper rate
much closer to the actual bottleneck rate (our experience shows it is
possible to get within 0.1% of the actual physical bottleneck rate), thus
keeping latency low without sacrificing bandwidth.
The overhead compensation has three tunables: A fixed per-packet overhead
size (which, if set, will be accounted from the IP packet header), a
minimum packet size (MPU) and a framing mode supporting either ATM or PTM
framing. We include a set of common keywords in TC to help users configure
the right parameters. If no overhead value is set, the value reported by
the kernel is used.
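A hedged sketch of the compensation arithmetic described (function name is illustrative):

static u32 cake_wire_len_sketch(u32 net_len, int overhead, u32 mpu, bool atm)
{
    u32 len = net_len + overhead;     /* accounted from the IP header */

    if (len < mpu)                    /* enforce the minimum packet unit */
        len = mpu;
    if (atm)                          /* 48 payload bytes per 53-byte cell */
        len = DIV_ROUND_UP(len, 48) * 53;
    return len;
}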
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Add DiffServ handling
This adds support for DiffServ-based priority queueing to CAKE. If the
shaper is in use, each priority tier gets its own virtual clock, which
limits that tier's rate to a fraction of the overall shaped rate, to
discourage trying to game the priority mechanism.
CAKE defaults to a simple, three-tier mode that interprets most code points
as "best effort", but places CS1 traffic into a low-priority "bulk" tier
which is assigned 1/16 of the total rate, and a few code points indicating
latency-sensitive or control traffic (specifically TOS4, VA, EF, CS6, CS7)
into a "latency sensitive" high-priority tier, which is assigned 1/4 rate.
The other supported DiffServ modes are a 4-tier mode matching the 802.11e
precedence rules, as well as two 8-tier modes, one of which implements
strict precedence of the eight priority levels.
This commit also adds an optional DiffServ 'wash' mode, which will zero out
the DSCP fields of any packet passing through CAKE. While this can
technically be done with other mechanisms in the kernel, having the feature
available in CAKE significantly decreases configuration complexity; and the
implementation cost is low on top of the other DiffServ-handling code.
Filters and applications can set the skb->priority field to override the
DSCP-based classification into tiers. If TC_H_MAJ(skb->priority) matches
CAKE's qdisc handle, the minor number will be interpreted as a priority
tier if it is less than or equal to the number of configured priority
tiers.
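A hedged sketch of the skb->priority override described in the last paragraph (the tin-count field and DSCP helper are assumptions):

    u32 tin;

    if (TC_H_MAJ(skb->priority) == sch->handle &&
        TC_H_MIN(skb->priority) > 0 &&
        TC_H_MIN(skb->priority) <= q->tin_cnt)
        tin = TC_H_MIN(skb->priority) - 1;   /* explicit tier from priority */
    else
        tin = cake_dscp_to_tin(q, skb);      /* hypothetical DSCP mapping */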
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Add NAT awareness to packet classifier
When CAKE is deployed on a gateway that also performs NAT (which is a
common deployment mode), the host fairness mechanism cannot distinguish
internal hosts from each other, and so fails to work correctly.
To fix this, we add an optional NAT awareness mode, which will query the
kernel conntrack mechanism to obtain the pre-NAT addresses for each packet
and use that in the flow and host hashing.
When the shaper is enabled and the host is already performing NAT, the cost
of this lookup is negligible. However, in unlimited mode with no NAT being
performed, there is a significant CPU cost at higher bandwidths. For this
reason, the feature is turned off by default.
Cc: netfilter-devel@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
netfilter: Add nf_ct_get_tuple_skb global lookup function
This adds a global netfilter function to extract a conntrack tuple from an
skb. The function uses a new function added to nf_ct_hook, which will try
to get the tuple from skb->_nfct, and do a full lookup if that fails. This
makes it possible to use the lookup function before the skb has passed
through the conntrack init hooks (e.g., in an ingress qdisc). The tuple is
copied to the caller to avoid issues with reference counting.
The function returns false if conntrack is not loaded, allowing it to be
used without incurring a module dependency on conntrack. This is used by
the NAT mode in sch_cake.
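A hedged sketch of the dispatch through nf_ct_hook as described (the hook member name is assumed from the text):

bool nf_ct_get_tuple_skb(struct nf_conntrack_tuple *dst_tuple,
                         const struct sk_buff *skb)
{
    const struct nf_ct_hook *ct_hook;
    bool ret = false;

    rcu_read_lock();
    ct_hook = rcu_dereference(nf_ct_hook);
    if (ct_hook)                /* conntrack module is loaded */
        ret = ct_hook->get_tuple_skb(dst_tuple, skb);
    rcu_read_unlock();
    return ret;                 /* false when conntrack is absent */
}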
Cc: netfilter-devel@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Add optional ACK filter
The ACK filter is an optional feature of CAKE which is designed to improve
performance on links with very asymmetrical rate limits. On such links
(which are unfortunately quite prevalent, especially for DSL and cable
subscribers), the downstream throughput can be limited by the number of
ACKs capable of being transmitted in the *upstream* direction.
Filtering ACKs can, in general, have adverse effects on TCP performance
because it interferes with ACK clocking (especially in slow start), and it
reduces the flow's resiliency to ACKs being dropped further along the path.
To alleviate these drawbacks, the ACK filter in CAKE tries its best to
always keep enough ACKs queued to ensure forward progress in the TCP flow
being filtered. It does this by only filtering redundant ACKs. In its
default 'conservative' mode, the filter will always keep at least two
redundant ACKs in the queue, while in 'aggressive' mode, it will filter
down to a single ACK.
The ACK filter works by inspecting the per-flow queue on every packet
enqueue. Starting at the head of the queue, the filter looks for another
eligible packet to drop (so the ACK being dropped is always closer to the
head of the queue than the packet being enqueued). An ACK is eligible only
if it ACKs *fewer* bytes than the new packet being enqueued, including any
SACK options. This prevents duplicate ACKs from being filtered, to avoid
interfering with retransmission logic. In addition, we check TCP header
options and only drop those that are known to not interfere with sender
state. In particular, packets with unknown option codes are never dropped.
In aggressive mode, an eligible packet is always dropped, while in
conservative mode, at least two ACKs are kept in the queue. Only pure ACKs
(with no data segments) are considered eligible for dropping, but when an
ACK with data segments is enqueued, this can cause another pure ACK to
become eligible for dropping.
The approach described above ensures that this ACK filter avoids most of
the drawbacks of a naive filtering mechanism that only keeps flow state but
does not inspect the queue. This is the rationale for including the ACK
filter in CAKE itself rather than as separate module (as the TC filter, for
instance).
Our performance evaluation has shown that on a 30/1 Mbps link with a
bidirectional traffic test (RRUL), turning on the ACK filter on the
upstream link improves downstream throughput by ~20% (both modes) and
upstream throughput by ~12% in conservative mode and ~40% in aggressive
mode, at the cost of ~5ms of inter-flow latency due to the increased
congestion.
In *really* pathological cases, the effect can be a lot more; for instance,
the ACK filter increases the achievable downstream throughput on a link
with 100 Kbps in the upstream direction by an order of magnitude (from ~2.5
Mbps to ~25 Mbps).
Finally, even though we consider the ACK filter to be safer than most, we
do not recommend turning it on everywhere: on more symmetrical link
bandwidths the effect is negligible at best.
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sch_cake: Add ingress mode
The ingress mode is meant to be enabled when CAKE runs downlink of the
actual bottleneck (such as on an IFB device). The mode changes the shaper
to also account dropped packets to the shaped rate, as these have already
traversed the bottleneck.
Enabling ingress mode will also tune the AQM to always keep at least two
packets queued *for each flow*. This is done by scaling the minimum queue
occupancy level that will disable the AQM by the number of active bulk
flows. The rationale for this is that retransmits are more expensive in
ingress mode, since dropped packets have to traverse the bottleneck again
when they are retransmitted; thus, being more lenient and keeping a minimum
number of packets queued will improve throughput in cases where the number
of active flows are so large that they saturate the bottleneck even at
their minimum window size.
This commit also adds a separate switch to enable ingress mode rate
autoscaling. If enabled, the autoscaling code will observe the actual
traffic rate and adjust the shaper rate to match it. This can help avoid
latency increases in the case where the actual bottleneck rate decreases
below the shaped rate. The scaling filters out spikes by an EWMA filter.
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toke Høiland-Jørgensen [Fri, 6 Jul 2018 15:37:19 +0000 (17:37 +0200)]
sched: Add Common Applications Kept Enhanced (cake) qdisc
sch_cake targets the home router use case and is intended to squeeze the
most bandwidth and latency out of even the slowest ISP links and routers,
while presenting an API simple enough that even an ISP can configure it.
Example of use on a cable ISP uplink:
tc qdisc add dev eth0 cake bandwidth 20Mbit nat docsis ack-filter
To shape a cable download link (ifb and tc-mirred setup elided)
tc qdisc add dev ifb0 cake bandwidth 200mbit nat docsis ingress wash
CAKE is filled with:
* A hybrid Codel/Blue AQM algorithm, "Cobalt", tied to an FQ_Codel
derived Flow Queuing system, which autoconfigures based on the bandwidth.
* A novel "triple-isolate" mode (the default) which balances per-host
and per-flow FQ even through NAT.
* A deficit-based shaper, which can also be used in an unlimited mode.
* 8-way set-associative hashing to reduce flow collisions to a minimum.
* A reasonable interpretation of various diffserv latency/loss tradeoffs.
* Support for zeroing diffserv markings for entering and exiting traffic.
* Support for interacting well with Docsis 3.0 shaper framing.
* Extensive support for DSL framing types.
* Support for ack filtering.
* Extensive statistics for measuring loss, ECN markings, and latency
variation.
A paper describing the design of CAKE is available at
https://arxiv.org/abs/1804.07617, and will be published at the 2018 IEEE
International Symposium on Local and Metropolitan Area Networks (LANMAN).
This patch adds the base shaper and packet scheduler, while subsequent
commits add the optional (configurable) features. The full userspace API
and most data structures are included in this commit, but options not
understood in the base version will be ignored.
Various versions have been available as an out-of-tree build for kernel
versions going back to 3.10, as the embedded router world has been
running a few years behind mainline Linux. A stable version has been
generally available on lede-17.01 and later.
sch_cake replaces a combination of iptables, tc filter, htb and fq_codel
in the sqm-scripts, with sane defaults and vastly simpler configuration.
CAKE's principal author is Jonathan Morton, with contributions from
Kevin Darbyshire-Bryant, Toke Høiland-Jørgensen, Sebastian Moeller,
Ryan Mounce, Tony Ambardar, Dean Scarff, Nils Andreas Svee, Dave Täht,
and Loganaden Velvindron.
Testing from Pete Heist, Georgios Amanakis, and the many other members of
the cake@lists.bufferbloat.net mailing list.
tc -s qdisc show dev eth2
qdisc cake 8017: root refcnt 2 bandwidth 1Gbit diffserv3 triple-isolate split-gso rtt 100.0ms noatm overhead 38 mpu 84
Sent 51504294511 bytes 37724591 pkt (dropped 6, overlimits 64958695 requeues 12)
backlog 0b 0p requeues 12
memory used: 1053008b of 15140Kb
capacity estimate: 970Mbit
min/max network layer size:           28 /    1500
min/max overhead-adjusted size:       84 /    1538
average network hdr offset:           14
                  Bulk   Best Effort         Voice
  thresh     62500Kbit         1Gbit       250Mbit
  target         5.0ms         5.0ms         5.0ms
  interval     100.0ms       100.0ms       100.0ms
  pk_delay         5us           5us           6us
  av_delay         3us           2us           2us
  sp_delay         2us           1us           1us
  backlog           0b            0b            0b
  pkts          3164050      25030267       9530280
  bytes      3227519915   35396974782   12879808898
  way_inds            0             8             0
  way_miss           21           366            25
  way_cols            0             0             0
  drops               5             0             1
  marks               0             0             0
  ack_drop            0             0             0
  sp_flows            1             3             0
  bk_flows            0             1             1
  un_flows            0             0             0
  max_len         68130         68130         68130
Tested-by: Pete Heist <peteheist@gmail.com>
Tested-by: Georgios Amanakis <gamanakis@gmail.com>
Signed-off-by: Dave Taht <dave.taht@gmail.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesus Sanchez-Palencia [Mon, 9 Jul 2018 23:20:56 +0000 (16:20 -0700)]
net: Use __u32 in uapi net_stamp.h
We are not supposed to use u32 in uapi, so change the flags member of
struct sock_txtime from u32 to __u32 instead.
Fixes: 80b14dee2bea ("net: Add a new socket option for a future transmit time")
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 9 Jul 2018 23:24:18 +0000 (16:24 -0700)]
Merge branch 'mlxsw-More-Spectrum-2-preparations'
Ido Schimmel says:
====================
mlxsw: More Spectrum-2 preparations
This is the second and last set of preparations towards initial
Spectrum-2 support in mlxsw. It mainly re-arranges parts of the code
that need to work with both ASICs, but somewhat differ.
The first three patches allow different ASICs to register different set
of operations for KVD linear (KVDL) management. In Spectrum-2 there is
no linear memory and instead entries that reside there in Spectrum
(e.g., nexthops) are hashed and inserted to the hash-based KVD memory.
The fourth patch does a similar restructuring in the low-level multicast
router code. This is necessary because multicast routing is implemented
using regular circuit TCAM (C-TCAM) in Spectrum, whereas Spectrum-2 uses
an algorithmic TCAM (A-TCAM).
Next six patches prepare the ACL code for the introduction of A-TCAM in
follow-up patch sets.
Last two patches allow different ASICs to require different firmware
versions and add two resources that need to be queried from firmware by
Spectrum-2 specific code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:27 +0000 (23:51 +0300)]
mlxsw: resources: Add couple of Spectrum-2 KVD resources
These resources are needed for Spectrum-2 KVD linear management
implementation.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:26 +0000 (23:51 +0300)]
mlxsw: spectrum: Prepare for multiple FW versions for Spectrum and Spectrum-2
Prepare for Spectrum-2 FW version checking by making
mlxsw_sp_fw_rev_validate() per-ASIC, along with the required FW revision
and FW filename.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:25 +0000 (23:51 +0300)]
mlxsw: spectrum_acl: Implement priority setting for rules inserted to TCAM
For Spectrum-2, we need to insert the priority into the C-TCAM because the
HW needs that info in order to correctly process scenarios where rules
are in both the C-TCAM and the A-TCAM.
So extend the mlxsw_sp_acl_ctcam_entry_add() args to accept an indication
of whether the priority needs to be filled in, and implement the priority
computation and fill-in.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:24 +0000 (23:51 +0300)]
mlxsw: reg: Add priority field for PTCEV2 register
This is going to be needed for Spectrum-2 C-TCAM implementation.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:23 +0000 (23:51 +0300)]
mlxsw: spectrum_acl: Move block items encoding into Spectrum op
Since Spectrum-2 encodes blocks into a different HW layout, push this
code into a Spectrum-specific op.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:22 +0000 (23:51 +0300)]
mlxsw: spectrum_acl: Convert mlxsw_afk_create args to ops
Since the flex keys for Spectrum-2 differ not only in block definitions
but also in encoding layout, prepare for the implementation and pass
Spectrum/Spectrum-2 specific ops down to mlxsw_afk_create.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:21 +0000 (23:51 +0300)]
mlxsw: spectrum_acl: Add tcam init/fini ops
Add ops to be called on driver instance init and fini.
This is needed to make it possible to do Spectrum-2 specific init
and fini work.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:20 +0000 (23:51 +0300)]
mlxsw: spectrum_acl: Split TCAM handling 3 ways
To allow easy and clean Spectrum-2 implementation for things that differ
from Spectrum, split the existing ACL TCAM code 3 ways:
1) common code that calls Spectrum/Spectrum-2 specific ops
2) Spectrum ops implementations
3) common C-TCAM code that is going to be shared between Spectrum and
Spectrum-2 implementations
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:19 +0000 (23:51 +0300)]
mlxsw: spectrum_mr_tcam: Push Spectrum-specific operations into a separate file
Since Spectrum-2 has different handling of the TCAM, push the Spectrum MR
TCAM bits into a separate file accessible via ops, which allows implementing
Spectrum-2 specific ops.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:18 +0000 (23:51 +0300)]
mlxsw: spectrum_kvdl: Pass entry_count to free function
For the Spectrum-2 KVD linear manager implementation, entry_count will be
needed even for the free function. So pass it down.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:17 +0000 (23:51 +0300)]
mlxsw: spectrum_kvdl: Pass entry type to alloc/free
The future Spectrum-2 KVD linear manager implementation needs to know the
type of the entry to alloc and free. So define the types in an enum and
pass it down to the alloc and free functions. Once the entry type
is passed down, the KVDL common part knows the sizes of each entry type,
so replace the size function arg with an entry count.
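A hedged sketch of the resulting interface (enum values and prototypes are illustrative, not the exact driver API):

enum mlxsw_sp_kvdl_entry_type {
    MLXSW_SP_KVDL_ENTRY_TYPE_ADJ,       /* nexthop adjacency */
    MLXSW_SP_KVDL_ENTRY_TYPE_ACTSET,    /* ACL action set */
    MLXSW_SP_KVDL_ENTRY_TYPE_PBS,       /* port-based services */
};

int mlxsw_sp_kvdl_alloc(struct mlxsw_sp *mlxsw_sp,
                        enum mlxsw_sp_kvdl_entry_type type,
                        unsigned int entry_count, u32 *p_entry_index);
void mlxsw_sp_kvdl_free(struct mlxsw_sp *mlxsw_sp,
                        enum mlxsw_sp_kvdl_entry_type type,
                        unsigned int entry_count, int entry_index);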
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 20:51:16 +0000 (23:51 +0300)]
mlxsw: spectrum_kvdl: Push out KVD linear management into ops
In Spectrum-2 there is a different implementation of KVD linear
management. Unlike in Spectrum where there is a single index space,
in Spectrum-2 the indexes are per-resource. Also, there is a need to
explicitly tell the HW that an entry is no longer used.
So push out the existing implementation into spectrum1_kvdl.c and
prepare ops infrastructure to allow new implementation in a follow-up.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Wed, 4 Jul 2018 17:28:47 +0000 (10:28 -0700)]
net/mlx5: Use 2-factor allocator calls
This restores the use of 2-factor allocation helpers that were already
fixed treewide. Please do not use open-coded multiplication; prefer,
instead, using 2-factor allocation helpers.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Mon, 9 Jul 2018 07:45:14 +0000 (09:45 +0200)]
tcp: remove SG-related comment in tcp_sendmsg()
Since commit 74d4a8f8d378 ("tcp: remove sk_can_gso() use"), the code
doesn't care whether the interface supports SG.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 9 Jul 2018 21:55:54 +0000 (14:55 -0700)]
Merge branch 'fix-use-after-free-bugs-in-skb-list-processing'
Edward Cree says:
====================
fix use-after-free bugs in skb list processing
A couple of bugs in skb list handling were spotted by Dan Carpenter, with
the help of Smatch; following up on them I found a couple more similar
cases. This series fixes them by changing the relevant loops to use the
dequeue-enqueue model (rather than in-place list modification).
v3: fixed another similar bug in __netif_receive_skb_list_core().
v2: dropped patch #3 (new list.h helper), per DaveM's request.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 9 Jul 2018 17:10:19 +0000 (18:10 +0100)]
net: core: fix use-after-free in __netif_receive_skb_list_core
__netif_receive_skb_core can free the skb, so we have to use the dequeue-
enqueue model when calling it from __netif_receive_skb_list_core.
Fixes: 88eb1944e18c ("net: core: propagate SKB lists through packet_type lookup")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 9 Jul 2018 17:10:02 +0000 (18:10 +0100)]
netfilter: fix use-after-free in NF_HOOK_LIST
nf_hook() can free the skb, so we need to remove it from the list before
calling, and add passed skbs to a sublist afterwards.
Fixes: 17266ee93984 ("net: ipv4: listified version of ip_rcv")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 9 Jul 2018 17:09:54 +0000 (18:09 +0100)]
net: core: fix uses-after-free in list processing
In netif_receive_skb_list_internal(), all of skb_defer_rx_timestamp(),
do_xdp_generic() and enqueue_to_backlog() can lead to kfree(skb). Thus,
we cannot wait until after they return to remove the skb from the list;
instead, we remove it first and, in the pass case, add it to a sublist
afterwards.
In the case of enqueue_to_backlog() we have already decided not to pass
when we call the function, so we do not need a sublist.
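A hedged sketch of the dequeue-enqueue pattern these fixes apply (the per-skb handler is hypothetical; the point is that the skb leaves the list before any call that may free it):

    struct sk_buff *skb, *next;
    LIST_HEAD(sublist);

    list_for_each_entry_safe(skb, next, head, list) {
        list_del(&skb->list);            /* dequeue before the call */
        if (process_one(skb))            /* hypothetical: true = skb survived */
            list_add_tail(&skb->list, &sublist);
    }
    list_splice(&sublist, head);         /* survivors go back on the list */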
Fixes: 7da517a3bc52 ("net: core: Another step of skb receive list processing")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Mon, 9 Jul 2018 16:20:04 +0000 (12:20 -0400)]
net: allow fallback function to pass netdev
For most of these calls we can just pass NULL through to the fallback
function as the sb_dev. The only cases where we cannot are the cases where
we might be dealing with either an upper device or a driver that would
have configured things to support an sb_dev itself.
The only driver that has any significant change in this patch set should be
ixgbe as we can drop the redundant functionality that existed in both the
ndo_select_queue function and the fallback function that was passed through
to us.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:59 +0000 (12:19 -0400)]
net: allow ndo_select_queue to pass netdev
This patch makes it so that instead of passing a void pointer as the
accel_priv we instead pass a net_device pointer as sb_dev. Making this
change allows us to pass the subordinate device through to the fallback
function eventually so that we can keep the actual code in the
ndo_select_queue call as focused as possible on the exception cases.
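For reference, a hedged sketch of the resulting hook shape after this change (the exact prototype in netdevice.h may differ slightly):

    u16 (*ndo_select_queue)(struct net_device *dev,
                            struct sk_buff *skb,
                            struct net_device *sb_dev,
                            select_queue_fallback_t fallback);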
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:54 +0000 (12:19 -0400)]
net: Add generic ndo_select_queue functions
This patch adds a generic version of the ndo_select_queue functions for
either returning 0 or selecting a queue based on the processor ID. This is
generally meant to just reduce the number of functions we have to change
in the future when we have to deal with ndo_select_queue changes.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:48 +0000 (12:19 -0400)]
net: Add support for subordinate traffic classes to netdev_pick_tx
This change makes it so that we can support the concept of subordinate
device traffic classes to the core networking code. In doing this we can
start pulling out the driver specific bits needed to support selecting a
queue based on an upper device.
The solution as it currently stands is only partially implemented. I have
the start of some XPS bits in here, but I would still need to allow for
configuration of the XPS maps on the queues reserved for the subordinate
devices. For now I am using the reference to the sb_dev XPS map just as a
way to skip the lookup of the lower device XPS map, as that would
result in the wrong queue being picked.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:43 +0000 (12:19 -0400)]
ixgbe: Add code to populate and use macvlan TC to Tx queue map
This patch makes it so that we use the tc_to_txq mapping in the macvlan
device in order to select the Tx queue for outgoing packets.
The idea here is to try and move away from using ixgbe_select_queue and to
come up with a generic way to make this work for devices going forward. By
encoding this information in the netdev this can become something that can
be used generically as a solution for similar setups going forward.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:38 +0000 (12:19 -0400)]
net: Add support for subordinate device traffic classes
This patch is meant to provide the basic tools needed to allow us to create
subordinate device traffic classes. The general idea here is to allow
subdividing the queues of a device into queue groups accessible through an
upper device such as a macvlan.
The idea here is to enforce that an upper device has to be a
single-queue device, ideally with IFF_NO_QUEUE set. With that being the
case we can pretty much guarantee that the tc_to_txq mappings and XPS maps
for the upper device are unused. As such we could reuse those in order to
support subdividing the lower device and distributing those queues between
the subordinate devices.
In order to distinguish between a regular set of traffic classes and if a
device is carrying subordinate traffic classes I changed num_tc from a u8
to a s16 value and use the negative values to represent the subordinate
pool values. So starting at -1 and running to -32768 we can encode those as
pool values, and the existing values of 0 to 15 can be maintained.
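A hedged illustration of the described encoding (the helper name is hypothetical):

/* dev->num_tc is now s16: values 0..15 are ordinary traffic classes,
 * while -1..-32768 identify a subordinate queue pool. */
static inline bool netdev_is_sb_channel_sketch(const struct net_device *dev)
{
    return dev->num_tc < 0;
}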
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Mon, 9 Jul 2018 16:19:32 +0000 (12:19 -0400)]
net-sysfs: Drop support for XPS and traffic_class on single queue device
This patch makes it so that we do not report the traffic class or allow XPS
configuration on single queue devices. This is mostly to avoid unnecessary
complexity with changes I have planned that will allow us to reuse
the unused tc_to_txq and XPS configuration on a single queue device to
allow it to make use of a subset of queues on an underlying device.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Eric Dumazet [Sun, 8 Jul 2018 06:15:56 +0000 (23:15 -0700)]
tcp: remove redundant SOCK_DONE checks
In both tcp_splice_read() and tcp_recvmsg(), we already test
sock_flag(sk, SOCK_DONE) right before evaluating sk->sk_state,
so "!sock_flag(sk, SOCK_DONE)" is always true.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 8 Jul 2018 08:05:20 +0000 (17:05 +0900)]
Merge branch 'mlxsw-Spectrum2-acl-prep'
Ido Schimmel says:
====================
mlxsw: Spectrum-2 small ACL preparations
This is the first set of changes towards Spectrum-2 support in the mlxsw
driver. It contains small changes that prepare the code for the later
introduction of Spectrum-2 support.
The Spectrum-2 ASIC uses an algorithmic TCAM (A-TCAM) instead of a
circuit TCAM (C-TCAM) as in Spectrum, and thus most of the changes are
around the ACL code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:21 +0000 (10:00 +0300)]
mlxsw: core_acl_flex_actions: Fix helper to get the first KVD linear index
The helper should always return the KVD linear index of the second set.
It is unused now, but it is going to be used soon.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:20 +0000 (10:00 +0300)]
mlxsw: core_acl_flex_actions: Allow the first set to be dummy
In Spectrum-2, the real action sets are always in KVD linear. The first
set is always empty and contains only a pointer to the first real set in
KVD linear. So provide the possibility to specify that the first set is the
dummy one.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:19 +0000 (10:00 +0300)]
mlxsw: spectrum: Put pointer to flex action ops to mlxsw_sp
Spectrum-2 needs slightly different handling of flexible actions. So
put an ops pointer in the mlxsw_sp struct and rename it.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:18 +0000 (10:00 +0300)]
mlxsw: core_acl_flex_keys: Change SRC_SYS_PORT flex key element size
The SRC_SYS_PORT is passed as an 8-bit value down to the HW anyway, so cap
it in the driver as well. Also, in Spectrum-2 the FW interface for
SRC_SYS_PORT is only 8 bits wide, so prepare for it.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:17 +0000 (10:00 +0300)]
mlxsw: core_acl_flex_keys: Split MAC and IP address flex key elements
Since in Spectrum-2 MACs are split and IP addresses are split as well,
split them now in order to use the same elements for both Spectrum and
Spectrum-2.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:16 +0000 (10:00 +0300)]
mlxsw: spectrum_acl: Ignore always-zeroed bits in tp->prio
The lowest 16 bits of tp->prio are always zero, so ignore them with a
shift.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:15 +0000 (10:00 +0300)]
mlxsw: reg: Introduce Flex2 key type for PTAR register
Introduce Flex2 key type for PTAR register which is used in Spectrum-2.
Also, extend mlxsw_reg_ptar_pack() to set the value according to the
caller.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sun, 8 Jul 2018 07:00:14 +0000 (10:00 +0300)]
mlxsw: spectrum: Change name of mlxsw_sp_afk_blocks to mlxsw_sp1_afk_blocks
This is specific to Spectrum, as Spectrum-2 has completely different key
blocks.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 8 Jul 2018 08:02:59 +0000 (17:02 +0900)]
net: sched: Fix warnings from xchg() on RCU'd cookie pointer.
The kbuild test robot reports:
>> net/sched/act_api.c:71:15: sparse: incorrect type in initializer (different address spaces) @@ expected struct tc_cookie [noderef] <asn:4>*__ret @@ got [noderef] <asn:4>*__ret @@
net/sched/act_api.c:71:15: expected struct tc_cookie [noderef] <asn:4>*__ret
net/sched/act_api.c:71:15: got struct tc_cookie *new_cookie
>> net/sched/act_api.c:71:13: sparse: incorrect type in assignment (different address spaces) @@ expected struct tc_cookie *old @@ got struct tc_cookie [noderef] <struct tc_cookie *old @@
net/sched/act_api.c:71:13: expected struct tc_cookie *old
net/sched/act_api.c:71:13: got struct tc_cookie [noderef] <asn:4>*[assigned] __ret
>> net/sched/act_api.c:132:48: sparse: dereference of noderef expression
Handle this in the usual way by force casting away the __rcu annotation
when we are using xchg() on it.
Fixes:
eec94fdb0480 ("net: sched: use rcu for action cookie update")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
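For illustration, a minimal sketch of the cast pattern described above,
assuming a tc_cookie that embeds an rcu_head; everything other than the
__force cast and xchg() is hypothetical:
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <net/act_api.h>	/* struct tc_cookie */

static struct tc_cookie __rcu *example_cookie;

static void example_cookie_swap(struct tc_cookie *new_cookie)
{
	struct tc_cookie *old;

	/* __force tells sparse the __rcu address-space change is intentional */
	old = xchg((__force struct tc_cookie **)&example_cookie, new_cookie);
	if (old)
		kfree_rcu(old, rcu);	/* assumes an rcu_head member named 'rcu' */
}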
David S. Miller [Sun, 8 Jul 2018 03:42:29 +0000 (12:42 +0900)]
Merge branch 'Modify-action-API-for-implementing-lockless-actions'
Vlad Buslov says:
====================
Modify action API for implementing lockless actions
Currently, all netlink protocol handlers for updating rules, actions and
qdiscs are protected with single global rtnl lock which removes any
possibility for parallelism. This patch set is a first step to remove
rtnl lock dependency from TC rules update path.
Recently, a new rtnl registration flag, RTNL_FLAG_DOIT_UNLOCKED, was added.
Handlers registered with this flag are called without the RTNL taken. The
end goal is to have the rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER,
etc.) registered with the UNLOCKED flag to allow parallel execution.
However, there is no intention to completely remove or split the rtnl lock
itself. This patch set addresses specific problems in the action API that
prevent it from being executed concurrently. This patch set does not
completely unlock the rules or actions update path. Additional patch sets
are required to refactor individual action and filter updates for
parallel execution.
As preparation for executing TC rule update handlers without the rtnl
lock, the action API code was audited to determine areas that assume
external synchronization with the rtnl lock and must be changed to allow
safe concurrent access, with the following results:
1. Action idr is already protected with spinlock. However, some code
paths assume that idr state does not change between several
consecutive tcf_idr_* function calls.
2. tc_action reference and bind counters are implemented as plain
integers. Their purpose was to allow single actions to be shared
between multiple filters, not to provide means for concurrent
modification.
3. tc_action 'cookie' pointer field is not protected against
modification.
4. Action API functions that work with a set of actions use an intrusive
linked list, which cannot be used concurrently without additional
synchronization.
5. Action API functions don't take a reference to actions while using
them, assuming external synchronization with the rtnl lock.
The following solutions to these problems are implemented:
1. To remove assumption that idr state doesn't change between tcf_idr_*
calls, implement new functions that atomically perform several
operations on idr without releasing idr spinlock. (function to
atomically lookup and delete action by index, function to atomically
check if action exists and allocate new one if necessary, etc.)
2. Use atomic operations on counters to make them suitable for
concurrent get/put operations.
3. Data that 'cookie' points to is never modified, so it is enough to
refactor it into an RCU pointer to prevent concurrent de-allocation.
4. The action API doesn't actually use any linked-list-specific operations
on the actions' intrusive linked list, so it can be refactored into an
array in a straightforward manner.
5. Always take a reference to an action while accessing it in the action
API. The tcf_idr_search function is modified to take a reference to the
action before returning it, so there is no way to look up an action
without incrementing its reference counter. All users of this function
are modified to release the reference after they are done using the
action. With all users using reference counting, it is now safe to
concurrently delete actions.
Additionally, the action init function signature was expanded with an
'rtnl_held' argument that allows actions with an internal dependency on
the rtnl lock to take/release it when necessary.
Since the only shared state in the action API module is the actions
themselves and the action idr, these changes are sufficient to avoid
relying on the global rtnl lock for protection of internal action API
data structures.
Changes from V5 to V6:
- Rebase on current net-next
- When action is deleted, set pointer in actions array to NULL to
prevent double freeing.
Changes from V4 to V5:
- Change action delete API to track actions that were deleted, to
prevent releasing them on error.
Changes from V3 to V4:
- Expand cover letter.
- Reduce actions array size in tcf_action_init_1.
- Rebase on latest net-next.
Changes from V2 to V3:
- Re-send with changelog copied to individual patches.
Changes from V1 to V2:
- Removed redundant actions ops lookup during delete.
- Merge action ops delete definition and implementation.
- Assume all actions have delete implemented and don't check for it
explicitly.
- Resplit action lookup/release code to prevent memory leaks in
individual patches.
- Make __tcf_idr_check function static
- Remove unique idr insertion function. Change original idr insert to do
the same thing.
- Merge changes that take reference to action when performing lookup and
changes that account for this additional reference when dumping action
to user space into single patch.
- Change convoluted commit message.
- Rename "unlocked" to "rtnl_held" for clarity.
- Remove estimator lock add patch.
- Refactor action check-alloc code into standalone function.
- Rename tcf_idr_find_delete to tcf_idr_delete_index.
- Rearrange variable definitions in tc_action_delete.
- Add patch that refactors action API code to use array of pointers to
actions instead of intrusive linked list.
- Expand cover letter.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:33 +0000 (17:24 +0300)]
net: sched: change action API to use array of pointers to actions
The act API used a linked list to pass a set of actions to functions. It
is an intrusive data structure that stores list nodes inside the action
structure itself, which means it is not safe to modify such a list
concurrently. However, the action API doesn't use any linked-list-specific
operations on this set of actions, so it can be safely refactored into a
plain pointer array.
Refactor the action API to use an array of pointers to tc_actions instead
of a linked list. Change the type of the 'actions' argument of the
exported action init, destroy and dump functions.
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
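A minimal sketch of the array-based form described above; the walk helper
is hypothetical, TCA_ACT_MAX_PRIO is the existing uapi bound on the number
of actions in a set:
#include <linux/pkt_cls.h>	/* TCA_ACT_MAX_PRIO */
#include <net/act_api.h>	/* struct tc_action */

/* Walk a fixed-size array of action pointers, skipping empty slots. */
static void example_walk_actions(struct tc_action *actions[], int max)
{
	int i;

	for (i = 0; i < max; i++) {
		if (!actions[i])
			continue;
		/* operate on actions[i], e.g. dump or release it */
	}
}

/* Usage:
 *	struct tc_action *actions[TCA_ACT_MAX_PRIO] = {};
 *	example_walk_actions(actions, TCA_ACT_MAX_PRIO);
 */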
Vlad Buslov [Thu, 5 Jul 2018 14:24:32 +0000 (17:24 +0300)]
net: sched: atomically check-allocate action
Implement a function that atomically checks if an action exists and either
takes a reference to it or allocates an idr slot for the action index, to
prevent concurrent allocations of actions with the same index. Use an
EBUSY error pointer to indicate that the idr slot is reserved.
Implement a cleanup helper function that removes the temporary error
pointer from the idr (in case of an error between idr allocation and
insertion of the newly created action at the specified index).
Refactor all action init functions to insert new actions into the idr
using this API.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
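A schematic sketch of the reservation idea (not the actual act_api code):
park ERR_PTR(-EBUSY) in the idr so concurrent allocators see the index as
taken, then replace the placeholder once the real object is built:
#include <linux/kernel.h>
#include <linux/idr.h>
#include <linux/err.h>
#include <linux/spinlock.h>

static int example_reserve_index(struct idr *idr, spinlock_t *lock, u32 *index)
{
	int ret;

	idr_preload(GFP_KERNEL);
	spin_lock(lock);
	/* store an error pointer so the slot reads as occupied */
	ret = idr_alloc_u32(idr, ERR_PTR(-EBUSY), index, UINT_MAX, GFP_ATOMIC);
	spin_unlock(lock);
	idr_preload_end();
	return ret;
}

static void example_publish(struct idr *idr, spinlock_t *lock, u32 index, void *obj)
{
	spin_lock(lock);
	idr_replace(idr, obj, index);	/* swap out the -EBUSY placeholder */
	spin_unlock(lock);
}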
Vlad Buslov [Thu, 5 Jul 2018 14:24:31 +0000 (17:24 +0300)]
net: sched: use reference counting action init
Change the action API to assume that the action init function always takes
a reference to the action, even when overwriting an existing action. This
is necessary because the action API continues to use the action pointer
after the init function is done. At this point the action becomes
accessible for concurrent modifications, so the user must always hold a
reference to it.
Implement a helper put-list function to atomically release a list of
actions after the action API init code is done using them.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:30 +0000 (17:24 +0300)]
net: sched: don't release reference on action overwrite
Return from the action init function with a reference to the action taken,
even when overwriting an existing action.
The action init API initializes its fourth argument (pointer to pointer to
tc action) to either an existing action with the same index or a newly
created action. In the case of an existing index (and a zero bind
argument), the init function returns without incrementing the action
reference counter. The caller of action init then proceeds to work with
the action without actually holding a reference to it. This means that the
action could be deleted concurrently.
Change the action init behavior to always take a reference to the action
before returning successfully, in order to protect against concurrent
deletion.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:29 +0000 (17:24 +0300)]
net: sched: implement reference counted action release
Implement a helper delete function that uses the new action ops 'delete',
instead of destroying the action directly. This is required so the act API
can delete actions by index without holding any references to the action
being deleted.
Implement a function, __tcf_action_put(), that releases a reference to an
action and frees it if necessary. Refactor the action deletion code to use
the new put function and not rely on the rtnl lock. Remove rtnl lock
assertions that are no longer needed.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:28 +0000 (17:24 +0300)]
net: sched: add 'delete' function to action ops
Extend action ops with a 'delete' function. Each action type implements
its own delete function that doesn't depend on the rtnl lock.
Implement the delete function that is required to delete actions without
holding the rtnl lock. Use the action API function that atomically deletes
an action only if it is still in the action idr. This implementation
prevents concurrent threads from deleting the same action twice.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
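A schematic sketch of the per-type dispatch this enables; the ops layout
and signature below are assumptions for illustration, not quoted from the
patch:
#include <linux/errno.h>
#include <linux/types.h>

struct net;	/* opaque here */

struct example_act_ops {
	/* assumed shape: delete an action of this type by idr index */
	int (*delete)(struct net *net, u32 index);
};

static int example_act_delete(const struct example_act_ops *ops,
			      struct net *net, u32 index)
{
	if (!ops->delete)
		return -EOPNOTSUPP;
	return ops->delete(net, index);
}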
Vlad Buslov [Thu, 5 Jul 2018 14:24:27 +0000 (17:24 +0300)]
net: sched: implement action API that deletes action by index
Implement a new action API function that atomically finds and deletes an
action from the idr by index. It is intended to be used by lockless
actions that do not rely on the rtnl lock.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:26 +0000 (17:24 +0300)]
net: sched: always take reference to action
Without rtnl lock protection it is no longer safe to use a pointer to a tc
action without holding a reference to it (it can be destroyed
concurrently).
Remove the unsafe action idr lookup function. Instead, implement a safe
tcf idr check function that atomically looks up the action in the idr and
increments its reference and bind counters. Implement both action search
and check using the new safe function.
The reference taken by the idr check is temporary and should not be
accounted to userspace clients (both logically and to preserve current API
behavior). Subtract the temporary reference when dumping the action to
userspace, using the existing tca_get_fill function arguments.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:25 +0000 (17:24 +0300)]
net: sched: implement unlocked action init API
Add an additional 'rtnl_held' argument to the act API init functions. It
is required to implement actions that need to release the rtnl lock before
loading a kernel module and reacquire it afterwards.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Thu, 5 Jul 2018 14:24:24 +0000 (17:24 +0300)]
net: sched: change type of reference and bind counters
Change the type of the action reference counter to refcount_t.
Change the type of the action bind counter to atomic_t.
This type is used to allow decrementing the bind counter without testing
for a zero result.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
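A minimal sketch of the counter types described above; the structure and
field names are illustrative, only the refcount_t/atomic_t split reflects
the change:
#include <linux/types.h>
#include <linux/refcount.h>
#include <linux/atomic.h>

struct example_act_counters {
	refcount_t refcnt;	/* saturating, warns on misuse */
	atomic_t bindcnt;	/* plain atomic, no need to test for zero */
};

static void example_counters_init(struct example_act_counters *c, bool bind)
{
	refcount_set(&c->refcnt, 1);
	atomic_set(&c->bindcnt, bind ? 1 : 0);
}

static void example_counters_put(struct example_act_counters *c, bool bind)
{
	if (bind)
		atomic_dec(&c->bindcnt);
	if (refcount_dec_and_test(&c->refcnt)) {
		/* last reference gone: free the action here */
	}
}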
Vlad Buslov [Thu, 5 Jul 2018 14:24:23 +0000 (17:24 +0300)]
net: sched: use rcu for action cookie update
Implement functions to atomically update and free the action cookie using
the RCU mechanism.
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
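A minimal sketch of the read side that an RCU-managed cookie allows;
assumes the tc_cookie data/len layout, the helper name is hypothetical:
#include <linux/rcupdate.h>
#include <net/act_api.h>	/* struct tc_cookie */

static size_t example_cookie_len(struct tc_cookie __rcu **cookiep)
{
	struct tc_cookie *c;
	size_t len = 0;

	rcu_read_lock();
	c = rcu_dereference(*cookiep);
	if (c)
		len = c->len;
	rcu_read_unlock();
	return len;
}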
Yifeng Sun [Mon, 2 Jul 2018 15:18:03 +0000 (08:18 -0700)]
openvswitch: kernel datapath clone action
Add 'clone' action to kernel datapath by using existing functions.
When actions within clone don't modify the current flow, the flow
key is not cloned before executing clone actions.
This is a follow-up patch for this incomplete work:
https://patchwork.ozlabs.org/patch/722096/
v1 -> v2:
Refactor as advised by reviewer.
Signed-off-by: Yifeng Sun <pkusunyifeng@gmail.com>
Signed-off-by: Andy Zhou <azhou@ovn.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Randy Dunlap [Sat, 7 Jul 2018 15:31:15 +0000 (08:31 -0700)]
isdn/capi: fix defined but not used warnings
Fix build warnings in drivers/isdn/capi/ when CONFIG_PROC_FS is not
enabled by marking the unused functions as __maybe_unused.
../drivers/isdn/capi/capi.c:1324:12: warning: 'capi20_proc_show' defined but not used [-Wunused-function]
../drivers/isdn/capi/capi.c:1347:12: warning: 'capi20ncci_proc_show' defined but not used [-Wunused-function]
../drivers/isdn/capi/capidrv.c:2454:12: warning: 'capidrv_proc_show' defined but not used [-Wunused-function]
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Karsten Keil <isdn@linux-pingi.de>
Cc: isdn4linux@listserv.isdn4linux.de (subscribers-only)
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
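A minimal sketch of the fix pattern, which applies equally to the
connector change that follows; the function is illustrative:
#include <linux/compiler.h>
#include <linux/seq_file.h>

/* When the only caller is compiled out with CONFIG_PROC_FS=n, marking the
 * function __maybe_unused silences -Wunused-function without #ifdefs. */
static int __maybe_unused example_proc_show(struct seq_file *m, void *v)
{
	seq_puts(m, "example\n");
	return 0;
}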
Randy Dunlap [Sat, 7 Jul 2018 15:27:53 +0000 (08:27 -0700)]
connector: fix defined but not used warning
Fix a build warning in connector.c when CONFIG_PROC_FS is not enabled
by marking the unused function as __maybe_unused.
../drivers/connector/connector.c:242:12: warning: 'cn_proc_show' defined but not used [-Wunused-function]
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Sat, 7 Jul 2018 12:47:06 +0000 (13:47 +0100)]
drivers: net: lmc: remove redundant variable next_rx
Variable next_rx is assigned but never used, hence it is redundant and can
be removed.
Cleans up clang warning:
warning: variable 'next_rx' set but not used [-Wunused-but-set-variable]
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 7 Jul 2018 12:22:57 +0000 (21:22 +0900)]
Merge branch 'cpsw-allow-PTP-224.0.0.107-to-be-timestamped'
Ivan Khoronzhuk says:
====================
allow PTP 224.0.0.107 to be timestamped
Allows PTP packets with the 224.0.0.107 multicast address to be timestamped.
Only for cpsw versions > v1.15.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 6 Jul 2018 18:44:45 +0000 (21:44 +0300)]
net: ethernet: ti: cpsw: allow PTP 224.0.0.107 to be timestamped
Tested on AM572x with cpsw v1.15 for PTP sync and delay_req messages.
It doesn't work on cpsw v1.12, so added only for cpsw v > 1.15.
Command for testing:
ptp4l -P -4 -H -i eth0 -l 6 -m -q -p /dev/ptp0 -f ptp.cfg
where ptp.cfg:
[global]
tx_timestamp_timeout 20
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 6 Jul 2018 18:44:44 +0000 (21:44 +0300)]
net: ethernet: ti: cpsw: use BIT macro
It's needed to avoid checkpatch warnings for further changes.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Fri, 6 Jul 2018 13:36:07 +0000 (15:36 +0200)]
stmmac: fix signed 64-bit division
A link error on 32-bit ARM points to yet another arithmetic bug:
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.o: In function `tc_setup_cbs':
stmmac_tc.c:(.text+0x148): undefined reference to `__aeabi_uldivmod'
stmmac_tc.c:(.text+0x1fc): undefined reference to `__aeabi_uldivmod'
stmmac_tc.c:(.text+0x308): undefined reference to `__aeabi_uldivmod'
stmmac_tc.c:(.text+0x320): undefined reference to `__aeabi_uldivmod'
stmmac_tc.c:(.text+0x33c): undefined reference to `__aeabi_uldivmod'
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.o:stmmac_tc.c:(.text+0x3a4): more undefined references to `__aeabi_uldivmod' follow
I observe that the last change to add the 'ul' prefix was incorrect,
as it did not turn the result of the multiplication into a 64-bit
expression on 32-bit architectures. Further, it seems that the
do_div() macro gets confused by the fact that we pass a signed
variable rather than unsigned into it.
This changes the code to instead use the div_s64() helper that is
meant for signed division, along with changing the constant suffix
to 'll' to actually make it a 64-bit argument everywhere, fixing
both of the issues I pointed out.
I'm not completely convinced that this makes the code correct, but
I'm fairly sure that we have two fewer problems than before.
Fixes:
1f705bc61aee ("net: stmmac: Add support for CBS QDISC")
Fixes:
c18a9c096683 ("net: stmmac_tc: use 64-bit arithmetic instead of 32-bit")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
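A minimal sketch of the pattern; function and variable names are
illustrative, div_s64() is the kernel helper for signed 64-bit division by
a 32-bit divisor:
#include <linux/math64.h>

static s64 example_scale(s64 value, s32 divisor)
{
	/* a plain 'value / divisor' would pull in the libgcc division
	 * helpers on 32-bit ARM and fail to link; do_div() also expects
	 * an unsigned 64-bit dividend */
	return div_s64(value, divisor);
}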
Jon Maloy [Fri, 6 Jul 2018 13:22:36 +0000 (15:22 +0200)]
tipc: extend link reset criteria for stale packet retransmission
Currently a link is declared stale and reset if there have been 100
repeated attempts to retransmit the same packet. However, in certain
infrastructures we see that packet (NACK) duplicates and delays may
cause such retransmit attempts to occur at a high rate, so that the
peer doesn't have a reasonable chance to acknowledge the reception
before the 100-attempt limit is hit. This may take much less than the
stipulated link tolerance time, and despite that, probe/probe replies
otherwise go through as normal.
We now extend the criteria for link reset to also be time based.
I.e., we don't reset the link until the link tolerance time has passed
AND we have made 100 retransmission attempts.
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
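A schematic of the combined criterion; the field and constant names are
hypothetical, not TIPC's:
#include <linux/types.h>
#include <linux/jiffies.h>

static bool example_link_stale(unsigned long last_progress,	/* jiffies */
			       unsigned long tolerance_ms,
			       unsigned int retransmits)
{
	/* reset only when both the retransmit count and the tolerance
	 * interval criteria are met */
	return retransmits >= 100 &&
	       time_after(jiffies, last_progress + msecs_to_jiffies(tolerance_ms));
}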
David S. Miller [Sat, 7 Jul 2018 11:51:53 +0000 (20:51 +0900)]
Merge branch 'Introduce-matching-on-double-vlan-QinQ-headers-for-TC-flower'
Jianbo Liu says:
====================
Introduce matching on double vlan/QinQ headers for TC flower
Currently TC flower supports only one vlan tag; it doesn't match on both
outer and inner vlan headers for QinQ. To do this, we add support for
extracting both outer and inner vlan headers in the flow dissector, and
then TC flower matches on that information.
We also plan to extend the TC command to support this feature. We add new
cvlan_id/cvlan_prio/cvlan_ethtype keywords for the inner vlan header. The
existing vlan_id/vlan_prio/vlan_ethtype keywords are for the outer vlan
header, and vlan_ethtype must be 802.1q or 802.1ad.
Example command and output:
flower vlan_id 1000 vlan_ethtype 802.1q \
cvlan_id 100 cvlan_ethtype ipv4 \
action vlan pop \
action vlan pop \
action mirred egress redirect dev ens1f1_0
filter protocol 802.1ad pref 33 flower chain 0
filter protocol 802.1ad pref 33 flower chain 0 handle 0x1
vlan_id 1000
vlan_ethtype 802.1Q
cvlan_id 100
cvlan_ethtype ip
eth_type ipv4
in_hw
...
v2:
fix sparse warning.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianbo Liu [Fri, 6 Jul 2018 05:38:16 +0000 (05:38 +0000)]
net/sched: flower: Add support for matching on QinQ vlan headers
As dissecting of QinQ inner and outer vlan headers is supported, users can
add rules to match on QinQ vlan headers.
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianbo Liu [Fri, 6 Jul 2018 05:38:15 +0000 (05:38 +0000)]
net/sched: flower: Dump the ethertype encapsulated in vlan
Currently the encapsulated ethertype is not dumped, as it's the same as
the TCA_FLOWER_KEY_ETH_TYPE key value. But the dump result is inconsistent
with the input, so we add dumping it with TCA_FLOWER_KEY_VLAN_ETH_TYPE.
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianbo Liu [Fri, 6 Jul 2018 05:38:14 +0000 (05:38 +0000)]
net/flow_dissector: Add support for QinQ dissection
Dissect QinQ packets to get both outer and inner vlan information, then
store it in the extended flow keys.
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
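A minimal sketch of how a classifier can request both vlan layers from the
dissector; the container struct and setup function are hypothetical, the
VLAN/CVLAN key ids are the outer/inner keys this series deals with:
#include <linux/kernel.h>
#include <net/flow_dissector.h>

struct example_flow_key {
	struct flow_dissector_key_vlan vlan;	/* outer header */
	struct flow_dissector_key_vlan cvlan;	/* inner header */
};

static const struct flow_dissector_key example_keys[] = {
	{
		.key_id = FLOW_DISSECTOR_KEY_VLAN,
		.offset = offsetof(struct example_flow_key, vlan),
	}, {
		.key_id = FLOW_DISSECTOR_KEY_CVLAN,
		.offset = offsetof(struct example_flow_key, cvlan),
	},
};

static void example_dissector_setup(struct flow_dissector *d)
{
	skb_flow_dissector_init(d, example_keys, ARRAY_SIZE(example_keys));
}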
Jianbo Liu [Fri, 6 Jul 2018 05:38:13 +0000 (05:38 +0000)]
net/sched: flower: Add support for matching on vlan ethertype
As the flow dissector stores the vlan ethertype, tc flower can now match
on it. This is in preparation for supporting QinQ.
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianbo Liu [Fri, 6 Jul 2018 05:38:12 +0000 (05:38 +0000)]
net/flow_dissector: Save vlan ethertype from headers
Change the vlan dissector key to save the vlan tpid, in order to support
both the 802.1Q and 802.1AD ethertypes.
Signed-off-by: Jianbo Liu <jianbol@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>