platform/kernel/linux-rpi.git
5 years agonetfilter: physdev: relax br_netfilter dependency
Florian Westphal [Fri, 11 Jan 2019 13:46:15 +0000 (14:46 +0100)]
netfilter: physdev: relax br_netfilter dependency

Following command:
  iptables -D FORWARD -m physdev ...
causes connectivity loss in some setups.

The reason is that iptables userspace will probe the kernel for the module
revision of the physdev match, and physdev has an artificial dependency on
br_netfilter (using xt_physdev makes no sense unless the br_netfilter module
is loaded).

This causes the "physdev" module to be loaded, which in turn pulls in
br_netfilter and enables the "call-iptables" infrastructure.

Bridged packets might then get dropped by the iptables ruleset.

The better fix would be to change the "call-iptables" defaults to 0 and
enforce explicit setting to 1, but that breaks backwards compatibility.

This does the next best thing: add a request_module() call to checkentry.
This way a stray '-D ... -m physdev' won't activate br_netfilter
anymore.
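
A simplified sketch of the approach described above; this is not the exact
upstream diff, and the existing flag validation is elided:

static int physdev_mt_check(const struct xt_mtchk_param *par)
{
    static bool brnf_probed __read_mostly;

    /* ... existing validation of the physdev match flags ... */

    /* Only pull in br_netfilter when a rule is really added, not when
     * iptables merely probes the match revision. */
    if (!brnf_probed) {
        brnf_probed = true;
        request_module("br_netfilter");
    }

    return 0;
}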

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: conntrack: remove helper hook again
Florian Westphal [Wed, 9 Jan 2019 16:19:34 +0000 (17:19 +0100)]
netfilter: conntrack: remove helper hook again

Remove the dedicated helper hook and place the helper/seqadj calls into the
confirm hook instead.

Old:
 hook (300): ipv4/6_help() first call helper, then seqadj.
 hook (INT_MAX): confirm

Now:
 hook (INT_MAX): confirm hook: first call helper, then seqadj, then confirm

Not having the extra call is noticeable in benchmarks.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: nf_tables: add direct calls for all builtin expressions
Florian Westphal [Tue, 8 Jan 2019 16:35:34 +0000 (17:35 +0100)]
netfilter: nf_tables: add direct calls for all builtin expressions

With CONFIG_RETPOLINE it's faster to add an if (ptr == &foo_func)
check and use direct calls for all the built-in expressions.

~15% improvement in pathological cases.

checkpatch doesn't like the X macro due to the embedded return statement,
but the macro has a very limited scope so I don't think it's a problem.

I would like to avoid bugs of the form
  if (e->ops->eval == (unsigned long)nft_foo_eval)
 nft_bar_eval();

and an open-coded if ()/else if()/else cascade, thus the macro.
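
A minimal sketch of the direct-call pattern such a macro generates (the
dispatcher name and the set of expressions listed are illustrative, not the
exact upstream code):

#define X(e, fn)                                \
    do {                                        \
        if ((e)->ops->eval == fn) {             \
            fn(e, regs, pkt);                   \
            return;                             \
        }                                       \
    } while (0)

static void expr_call_ops_eval(const struct nft_expr *e,
                               struct nft_regs *regs,
                               const struct nft_pktinfo *pkt)
{
    X(e, nft_payload_eval);
    X(e, nft_cmp_eval);
    X(e, nft_meta_get_eval);
    /* ... one X() line per builtin expression ... */

    e->ops->eval(e, regs, pkt);    /* fall back to the indirect call */
}
#undef X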

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: nf_tables: handle nft_object lookups via rhltable
Florian Westphal [Tue, 8 Jan 2019 14:45:59 +0000 (15:45 +0100)]
netfilter: nf_tables: handle nft_object lookups via rhltable

Instead of linear search, use rhlist interface to look up the objects.
This speeds up rulesets with thousands of named objects (quota, counters and
the like).

We only use a single table for this and consider the address of the
table we're doing the lookup in as a part of the key.

This reduces restore time of a sample ruleset with ~20k named counters
from 37 seconds to 0.8 seconds.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: nf_tables: prepare nft_object for lookups via hashtable
Florian Westphal [Tue, 8 Jan 2019 14:45:58 +0000 (15:45 +0100)]
netfilter: nf_tables: prepare nft_object for lookups via hashtable

Add a 'key' structure for objects, so we can look them up by name + table
combination (the same object name may exist in different tables).
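
A minimal sketch of such a key, following the description above (treat the
field order and naming as assumptions rather than the exact upstream layout):

/* Objects are unique per (table, name) pair; the name alone is not. */
struct nft_object_hash_key {
    const char             *name;
    const struct nft_table *table;
};

The key can then be embedded in the object and used as the hashtable key in
the follow-up rhltable patch.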

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agoMerge branch 'tcp_openreq_child'
David S. Miller [Fri, 18 Jan 2019 06:19:05 +0000 (22:19 -0800)]
Merge branch 'tcp_openreq_child'

Eric Dumazet says:

====================
tcp: remove code from tcp_create_openreq_child()

tcp_create_openreq_child() essentially clones a listener, and then
must initialize some fields that cannot be inherited.

Listeners are either fresh sockets, or sockets that came through
tcp_disconnect() after a session that dirtied many fields.

By moving code to tcp_disconnect(), we can shorten the time taken
to create a clone, since a tcp_disconnect() operation is very
unlikely.
====================

Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move rx_opt & syn_data_acked init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:42 +0000 (11:23 -0800)]
tcp: move rx_opt & syn_data_acked init to tcp_disconnect()

If we make sure all listeners have these fields cleared, then a clone
will also inherit zero values.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move tp->rack init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:41 +0000 (11:23 -0800)]
tcp: move tp->rack init to tcp_disconnect()

If we make sure all listeners have proper tp->rack value,
then a clone will also inherit proper initial value.

Note that fresh sockets init tp->rack from tcp_init_sock().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move app_limited init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:40 +0000 (11:23 -0800)]
tcp: move app_limited init to tcp_disconnect()

If we make sure all listeners have app_limited set to ~0U,
then a clone will also inherit proper initial value.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move retrans_out, sacked_out, tlp_high_seq, last_oow_ack_time init to tcp_discon...
Eric Dumazet [Thu, 17 Jan 2019 19:23:39 +0000 (11:23 -0800)]
tcp: move retrans_out, sacked_out, tlp_high_seq, last_oow_ack_time init to tcp_disconnect()

If we make sure all listeners have these fields cleared, then a clone
will also inherit zero values.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: do not clear urg_data in tcp_create_openreq_child
Eric Dumazet [Thu, 17 Jan 2019 19:23:38 +0000 (11:23 -0800)]
tcp: do not clear urg_data in tcp_create_openreq_child

All listeners have this field cleared already, since tcp_disconnect()
clears it and newly created sockets also have a zero value here.

So a clone will inherit a zero value here.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move snd_cwnd & snd_cwnd_cnt init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:37 +0000 (11:23 -0800)]
tcp: move snd_cwnd & snd_cwnd_cnt init to tcp_disconnect()

Passive connections can inherit proper values by cloning,
if we make sure all listeners have the proper values there.

tcp_disconnect() was setting snd_cwnd to 2, which seems
quite obsolete since IW10 adoption.

Also remove an obsolete comment.
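
A minimal sketch of the resulting initialization in tcp_disconnect(), assuming
the IW10 constant TCP_INIT_CWND; this illustrates the pattern rather than the
literal diff:

/* Leave the (future) listener with sane initial values, so that
 * tcp_create_openreq_child() clones can simply inherit them. */
tp->snd_cwnd = TCP_INIT_CWND;    /* IW10, instead of the obsolete value 2 */
tp->snd_cwnd_cnt = 0;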

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move mdev_us init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:36 +0000 (11:23 -0800)]
tcp: move mdev_us init to tcp_disconnect()

If we make sure a listener always has its mdev_us
field set to TCP_TIMEOUT_INIT, we do not need to rewrite
this field after a new clone is created.

tcp_disconnect() is very seldom used in real applications.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: do not clear srtt_us in tcp_create_openreq_child
Eric Dumazet [Thu, 17 Jan 2019 19:23:35 +0000 (11:23 -0800)]
tcp: do not clear srtt_us in tcp_create_openreq_child

All listeners have this field cleared already, since tcp_disconnect()
clears it and newly created sockets also have a zero value here.

So a clone will inherit a zero value here.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: do not clear packets_out in tcp_create_openreq_child()
Eric Dumazet [Thu, 17 Jan 2019 19:23:34 +0000 (11:23 -0800)]
tcp: do not clear packets_out in tcp_create_openreq_child()

New sockets have this field cleared, and tcp_disconnect()
calls tcp_write_queue_purge() which among other things
also clears tp->packets_out.

So a listener is guaranteed to have this field cleared.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: move icsk_rto init to tcp_disconnect()
Eric Dumazet [Thu, 17 Jan 2019 19:23:33 +0000 (11:23 -0800)]
tcp: move icsk_rto init to tcp_disconnect()

If we make sure a listener always has its icsk_rto
field set to TCP_TIMEOUT_INIT, we do not need to rewrite
this field after a new clone is created.

tcp_disconnect() is very seldom used in real applications.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: do not set snd_ssthresh in tcp_create_openreq_child()
Eric Dumazet [Thu, 17 Jan 2019 19:23:32 +0000 (11:23 -0800)]
tcp: do not set snd_ssthresh in tcp_create_openreq_child()

New sockets get this field set to TCP_INFINITE_SSTHRESH in tcp_init_sock().
In case a socket had this field changed and later transitions to TCP_LISTEN
state, tcp_disconnect() also makes sure snd_ssthresh is set to
TCP_INFINITE_SSTHRESH.

So a listener has this field set to TCP_INFINITE_SSTHRESH already.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet/mlx4: remove unneeded semicolon
YueHaibing [Thu, 17 Jan 2019 13:03:56 +0000 (21:03 +0800)]
net/mlx4: remove unneeded semicolon

Remove unneeded semicolon.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: ethernet: ti: cpsw-phy-sel: remove unneeded semicolon
YueHaibing [Thu, 17 Jan 2019 12:59:19 +0000 (20:59 +0800)]
net: ethernet: ti: cpsw-phy-sel: remove unneeded semicolon

Remove unneeded semicolon.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotipc: remove unneeded semicolon in trace.c
YueHaibing [Thu, 17 Jan 2019 12:57:08 +0000 (20:57 +0800)]
tipc: remove unneeded semicolon in trace.c

Remove unneeded semicolon

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoqed: remove duplicated include from qed_if.h
YueHaibing [Thu, 17 Jan 2019 07:22:20 +0000 (15:22 +0800)]
qed: remove duplicated include from qed_if.h

Remove duplicated include.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Denis Bolotin <dbolotin@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agosb1000: fix a couple of indentation issues and remove assignment in if statements
Colin Ian King [Thu, 17 Jan 2019 00:35:43 +0000 (00:35 +0000)]
sb1000: fix a couple of indentation issues and remove assignment in if statements

There is an if statement and a return statement that are incorrectly
indented. Fix these.  Also replace the assignment-in-if statements
with an assignment followed by an if, to keep to the coding style.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: add a route cache full diagnostic message
Peter Oskolkov [Wed, 16 Jan 2019 16:50:28 +0000 (08:50 -0800)]
net: add a route cache full diagnostic message

In some testing scenarios, the dst/route cache can fill up so quickly
that even an explicit GC call occasionally fails to clean it up. This leads
to sporadically failing calls to dst_alloc() and "network unreachable" errors
returned to the user, which is confusing.

This patch adds a diagnostic message to make the cause of the failure
easier to determine.
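
A sketch of what such a rate-limited diagnostic can look like in a dst
allocation failure path (placement, condition and wording are assumptions):

if (unlikely(!dst)) {
    net_dbg_ratelimited("%s: route cache is full, dst allocation failed\n",
                        __func__);
    return NULL;
}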

Signed-off-by: Peter Oskolkov <posk@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodpaa2-eth: Fix ndo_stop routine
Ioana Ciocoi Radulescu [Wed, 16 Jan 2019 16:51:44 +0000 (16:51 +0000)]
dpaa2-eth: Fix ndo_stop routine

In the current implementation, on interface down we disable NAPI and
then manually drain any remaining ingress frames. This could lead
to a situation where, under heavy traffic, the data availability
notification for some of the channels would not get rearmed correctly.

Change the implementation such that we let all remaining ingress frames
be processed as usual and only disable NAPI once the hardware queues
are empty.

We also add a wait on the Tx side, to allow the hardware time to process
all in-flight Tx frames before issuing the disable command.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agowan: dscc4: fix various indentation issues
Colin Ian King [Wed, 16 Jan 2019 17:08:17 +0000 (17:08 +0000)]
wan: dscc4: fix various indentation issues

There are some lines that have indentation issues, fix these.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'vxlan-FDB-veto'
David S. Miller [Thu, 17 Jan 2019 23:18:47 +0000 (15:18 -0800)]
Merge branch 'vxlan-FDB-veto'

Petr Machata says:

====================
vxlan: Allow vetoing FDB operations

mlxsw does not implement handling of the more advanced types of VXLAN
FDB entries. In order to provide visibility to users, it is important to
be able to reject such FDB entries, ideally with an explanation passed
in extended ack. This patch set implements this.

In patches #1-#4, vxlan is gradually transformed to support vetoing of
FDB entries added (or modified) through vxlan_fdb_update(), and the
default FDB entry added in __vxlan_dev_create().

Patches #5-#7 deal with vxlan_changelink(). The existing code recognizes
that vxlan_fdb_update() may fail, but doesn't attempt to keep things
intact if it does. These patches change the function in several steps to
gracefully handle vetoes (or other failures).

Then in patches #8-#11, extack arguments are added, respectively, to
ndo_fdb_add(), mlxsw's mlxsw_sp_nve_ops.fdb_replay, the functions that
connect to the VXLAN vetoing code, and call_switchdev_notifiers(). Note
that call_switchdev_blocking_notifiers() already does support extack.

Finally in patch #12, mlxsw is extended to add extack messages to
rejected FDB entries. In patch #13, the functionality is tested.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoselftests: mlxsw: Test veto of unsupported VXLAN FDBs
Petr Machata [Wed, 16 Jan 2019 23:07:00 +0000 (23:07 +0000)]
selftests: mlxsw: Test veto of unsupported VXLAN FDBs

mlxsw doesn't implement offloading of all types of FDB entries that the
VXLAN driver supports. Test that such FDB entries are rejected. That
makes sure that the decision made by the existing validation code in
mlxsw propagates up the stack. It also exercises rollback functionality
in VXLAN, and tests that extack is returned.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agomlxsw: spectrum: Add extack messages to VXLAN FDB rejection
Petr Machata [Wed, 16 Jan 2019 23:06:58 +0000 (23:06 +0000)]
mlxsw: spectrum: Add extack messages to VXLAN FDB rejection

Annotate the rejections in mlxsw_sp_switchdev_vxlan_work_prepare() with
textual reasons.

Because this code ends up being invoked for FDB replay as well, drop the
default message from there, so that the more accurate error message is
not overwritten.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoswitchdev: Add extack argument to call_switchdev_notifiers()
Petr Machata [Wed, 16 Jan 2019 23:06:56 +0000 (23:06 +0000)]
switchdev: Add extack argument to call_switchdev_notifiers()

A follow-up patch will enable vetoing of FDB entries. Make it possible
to communicate details of why an FDB entry is not acceptable back to the
user.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: Add extack to switchdev operations
Petr Machata [Wed, 16 Jan 2019 23:06:54 +0000 (23:06 +0000)]
vxlan: Add extack to switchdev operations

There are four sources of VXLAN switchdev notifier calls:

- the changelink() link operation, which already supports extack,
- ndo_fdb_add() which got extack support in a previous patch,
- FDB updates due to packet forwarding,
- and vxlan_fdb_replay().

Extend vxlan_fdb_switchdev_call_notifiers() to include extack in the
switchdev message that it sends, and propagate the argument upwards to
the callers. For the first two cases, pass in the extack obtained through
the operation. For case #3, pass in NULL.

To cover the last case, extend vxlan_fdb_replay() to take extack
argument, which might come from whatever operation necessitated the FDB
replay.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agomlxsw: Add extack to mlxsw_sp_nve_ops.fdb_replay
Petr Machata [Wed, 16 Jan 2019 23:06:52 +0000 (23:06 +0000)]
mlxsw: Add extack to mlxsw_sp_nve_ops.fdb_replay

A follow-up patch will extend vxlan_fdb_replay() with an extack
argument. Extend the fdb_replay callback in mlxsw likewise so that the
argument is ready for the vxlan conversion.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: Add extack argument to ndo_fdb_add()
Petr Machata [Wed, 16 Jan 2019 23:06:50 +0000 (23:06 +0000)]
net: Add extack argument to ndo_fdb_add()

Drivers may not be able to support certain FDB entries, and an error
code is insufficient to give clear hints as to the reason for rejection.

In order to make it possible to communicate the rejection reason, extend
ndo_fdb_add() with an extack argument. Adapt the existing
implementations of ndo_fdb_add() to take the parameter (and ignore it).
Pass the extack parameter when invoking ndo_fdb_add() from rtnl_fdb_add().
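
For reference, a sketch of the extended callback prototype after such a change
(an approximation; see include/linux/netdevice.h for the authoritative form):

/* net_device_ops member, now taking an extack for error reporting. */
int (*ndo_fdb_add)(struct ndmsg *ndm, struct nlattr *tb[],
                   struct net_device *dev, const unsigned char *addr,
                   u16 vid, u16 flags,
                   struct netlink_ext_ack *extack);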

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: changelink: Delete remote after update
Petr Machata [Wed, 16 Jan 2019 23:06:43 +0000 (23:06 +0000)]
vxlan: changelink: Delete remote after update

If a change in remote address prompts a change in a default FDB entry,
that change might be vetoed. If that happens, it would then be necessary
to reinstate the already-removed default FDB entry corresponding to the
previous remote address.

Instead, arrange to have the previous address removed only after the
FDB is successfully vetted.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: changelink: Postpone vxlan_config_apply()
Petr Machata [Wed, 16 Jan 2019 23:06:41 +0000 (23:06 +0000)]
vxlan: changelink: Postpone vxlan_config_apply()

When an FDB entry is vetoed, it is necessary to unroll the changes that
have already been done. To avoid having to unroll vxlan_config_apply(),
postpone the call after the point where the vetoing takes place. Since
the call can't fail, it doesn't necessitate any cleanups in the
preceding FDB update logic.

Correspondingly, move down the mod_timer() call as well.

References to *dst need to be replaced with references to conf.
Additionally, old_dst and old_age_interval are not necessary anymore,
and therefore drop them.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: changelink: Inline vxlan_dev_configure()
Petr Machata [Wed, 16 Jan 2019 23:06:39 +0000 (23:06 +0000)]
vxlan: changelink: Inline vxlan_dev_configure()

The changelink operation may cause change in remote address, and
therefore an FDB update, which can be vetoed. To properly handle
vetoing, vxlan_changelink() needs to be gradually updated.

In this patch simply replace vxlan_dev_configure() with the two
constituent calls.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: Allow vetoing of FDB notifications
Petr Machata [Wed, 16 Jan 2019 23:06:38 +0000 (23:06 +0000)]
vxlan: Allow vetoing of FDB notifications

Change vxlan_fdb_switchdev_call_notifiers() to return the result from
calling switchdev notifiers. Propagate the error number up the stack.

In vxlan_fdb_update_existing() and vxlan_fdb_update_create() add
rollbacks to clean up the work that was done before the veto.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: Have vxlan_fdb_replace() save original rdst value
Petr Machata [Wed, 16 Jan 2019 23:06:34 +0000 (23:06 +0000)]
vxlan: Have vxlan_fdb_replace() save original rdst value

To enable rollbacks after vetoed FDB updates, extend vxlan_fdb_replace()
to take an additional argument where it should store the original values
of a modified rdst. Update the sole caller.

The following patch will make use of the saved value.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: Split vxlan_fdb_update() in two
Petr Machata [Wed, 16 Jan 2019 23:06:32 +0000 (23:06 +0000)]
vxlan: Split vxlan_fdb_update() in two

In order to make it easier to implement rollbacks after FDB update
vetoing, separate the FDB update code to two parts: one that deals with
updates of existing FDB entries, and one that creates new entries.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agovxlan: Move up vxlan_fdb_free(), vxlan_fdb_destroy()
Petr Machata [Wed, 16 Jan 2019 23:06:30 +0000 (23:06 +0000)]
vxlan: Move up vxlan_fdb_free(), vxlan_fdb_destroy()

These functions will be needed for rollbacks of vetoed FDB entries. Move
them up so that they are visible at their intended point of use.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'improving-TCP-behavior-on-host-congestion'
David S. Miller [Thu, 17 Jan 2019 23:12:26 +0000 (15:12 -0800)]
Merge branch 'improving-TCP-behavior-on-host-congestion'

Yuchung Cheng says:

====================
improving TCP behavior on host congestion

This patch set aims to improve how TCP handles local qdisc congestion
by simplifying the previous implementation.  Previously when an
skb fails to (re)transmit due to local qdisc congestion or other
resource issues, TCP refrains from setting the skb timestamp or the
recovery starting time.

This design makes determining when to abort a stalling socket more
complicated, as the timestamps of these transmission attempts were
missing. The stack needs to sort of infer when the original attempt
happened. A by-product is that a socket may disregard the system timeout
limit (i.e. sysctl net.ipv4.tcp_retries2 or the USER_TIMEOUT option),
and continue to retry until the transmission is successful.

In data-center environments where the TCP RTO is small, this could cause
the socket to retry frequently for a long time during qdisc congestion.

The solution is to first unconditionally timestamp the skb and the recovery
attempt. Then retry more conservatively (twice a second) on local
qdisc congestion but abort the sockets according to the system limit.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: less aggressive window probing on local congestion
Yuchung Cheng [Wed, 16 Jan 2019 23:05:35 +0000 (15:05 -0800)]
tcp: less aggressive window probing on local congestion

Previously when the sender fails to send (original) data packets or
window probes due to congestion in the local host (e.g. throttling
in qdisc), it'll retry within an RTO or two up to 500ms.

In low-RTT networks such as data-centers, RTO is often far below
the default minimum 200ms. Then local host congestion could trigger
a retry storm pouring gas on the fire. Worse yet, the probe counter
(icsk_probes_out) is not properly updated so the aggressive retry
may exceed the system limit (15 rounds) until the packet finally
slips through.

On such rare events, it's wise to retry more conservatively
(500ms) and update the stats properly to reflect these incidents
and follow the system limit. Note that this is consistent with
the behaviors when a keep-alive probe or RTO retry is dropped
due to local congestion.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: retry more conservatively on local congestion
Yuchung Cheng [Wed, 16 Jan 2019 23:05:34 +0000 (15:05 -0800)]
tcp: retry more conservatively on local congestion

Previously when the sender fails to retransmit a data packet on
timeout due to congestion in the local host (e.g. throttling in
qdisc), it'll retry within an RTO up to 500ms.

In low-RTT networks such as data-centers, RTO is often far
below the default minimum 200ms (and the cap 500ms). Then local
host congestion could trigger a retry storm pouring gas on the
fire. Worse yet, the retry counter (icsk_retransmits) is not
properly updated so the aggressive retry may exceed the system
limit (15 rounds) until the packet finally slips through.

On such rare events, it's wise to retry more conservatively (500ms)
and update the stats properly to reflect these incidents and follow
the system limit. Note that this is consistent with the behavior
when a keep-alive probe is dropped due to local congestion.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: simplify window probe aborting on USER_TIMEOUT
Yuchung Cheng [Wed, 16 Jan 2019 23:05:33 +0000 (15:05 -0800)]
tcp: simplify window probe aborting on USER_TIMEOUT

Previously we used the next unsent skb's timestamp to determine
when to abort a socket stalling on window probes. This no longer
works as the skb timestamp reflects the last instead of the first
transmission.

Instead we can estimate how long the socket has been stalling
with the probe count and the exponential backoff behavior.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: create a helper to model exponential backoff
Yuchung Cheng [Wed, 16 Jan 2019 23:05:32 +0000 (15:05 -0800)]
tcp: create a helper to model exponential backoff

Create a helper to model TCP exponential backoff for the next patch.
This is a pure refactor with no behavior change.
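
A minimal sketch of what such an exponential-backoff model can look like (the
helper name is made up; the real helper may differ in detail):

/* Estimate how long 'boundary' retransmissions take when the timer starts
 * at rto_base and doubles every round, capped at TCP_RTO_MAX. */
static unsigned int backoff_timeout_ms(unsigned int boundary,
                                       unsigned int rto_base)
{
    unsigned int linear_backoff_thresh, timeout;

    /* Number of doublings before the backoff hits the TCP_RTO_MAX cap. */
    linear_backoff_thresh = ilog2(TCP_RTO_MAX / rto_base);

    if (boundary <= linear_backoff_thresh)
        timeout = ((2 << boundary) - 1) * rto_base;
    else
        timeout = ((2 << linear_backoff_thresh) - 1) * rto_base +
                  (boundary - linear_backoff_thresh) * TCP_RTO_MAX;

    return jiffies_to_msecs(timeout);
}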

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: properly track retry time on passive Fast Open
Yuchung Cheng [Wed, 16 Jan 2019 23:05:31 +0000 (15:05 -0800)]
tcp: properly track retry time on passive Fast Open

This patch addresses a corner issue on timeout behavior of a
passive Fast Open socket.  A passive Fast Open server may write
and close the socket when it is re-trying SYN-ACK to complete
the handshake. After the handshake is complete, the server does
not properly stamp the recovery start time (tp->retrans_stamp is
0), and the socket may abort immediately on the very first FIN
timeout, instead of retrying until it passes the system or user
specified limit.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: always set retrans_stamp on recovery
Yuchung Cheng [Wed, 16 Jan 2019 23:05:30 +0000 (15:05 -0800)]
tcp: always set retrans_stamp on recovery

Previously a TCP socket's retrans_stamp is not set if the
retransmission has failed to send. As a result, if a socket is
experiencing local issues retransmitting packets, determining when
to abort a socket is complicated without knowing the starting time of
the recovery, since retrans_stamp may remain zero.

This complication causes sub-optimal behavior in which TCP may use the
latest, instead of the first, retransmission time to compute the
elapsed time of a connection stalling due to local issues. Then TCP
may disregard the TCP retries settings and keep retrying until it finally
succeeds: not a good idea when the local host is already strained.

The simple fix is to always timestamp the start of a recovery.
It's worth noting that retrans_stamp is also used to compare echo
timestamp values to detect spurious recovery. This patch does
not break that because retrans_stamp is still later than when the
original packet was sent.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: always timestamp on every skb transmission
Yuchung Cheng [Wed, 16 Jan 2019 23:05:29 +0000 (15:05 -0800)]
tcp: always timestamp on every skb transmission

Previously TCP skbs are not always timestamped if the transmission
failed due to memory or other local issues. This makes deciding
when to abort a socket tricky and complicated because the first
unacknowledged skb's timestamp may be 0 on TCP timeout.

The straight-forward fix is to always timestamp skb on every
transmission attempt. Also every skb retransmission needs to be
flagged properly to avoid RTT under-estimation. This can happen
upon receiving an ACK for the original packet when a previous
(spurious) retransmission has failed.

It's worth noting that this reverts to the old time-stamping
style before commit 8c72c65b426b ("tcp: update skb->skb_mstamp more
carefully") which addresses a problem in computing the elapsed time
of a stalled window-probing socket. The problem will be addressed
differently in the next patches with a simpler approach.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotcp: exit if nothing to retransmit on RTO timeout
Yuchung Cheng [Wed, 16 Jan 2019 23:05:28 +0000 (15:05 -0800)]
tcp: exit if nothing to retransmit on RTO timeout

Previously TCP only warns if its RTO timer fires and the
retransmission queue is empty, but this would cause a null pointer
dereference later on. It's better to avoid such a catastrophic failure
and simply exit with a warning.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: micrel: use phy_read_mmd and phy_write_mmd
Heiner Kallweit [Wed, 16 Jan 2019 20:52:22 +0000 (21:52 +0100)]
net: phy: micrel: use phy_read_mmd and phy_write_mmd

This driver implements open-coded versions of phy_read_mmd() and
phy_write_mmd() for KSZ9031. That's not needed, let's use the
phylib functions directly.

This is compile-tested only because I have no such hardware.
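
For illustration, the phylib accessors being switched to look like this in use,
assuming a struct phy_device *phydev in scope (the devad and register values
below are arbitrary example values, not KSZ9031 specifics):

/* Read-modify-write a register in MMD device 2 (example values only). */
int val = phy_read_mmd(phydev, 2, 0x4);

if (val < 0)
    return val;

return phy_write_mmd(phydev, 2, 0x4, val | BIT(0));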

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodavicom: Annotate implicit fall through in dm9000_set_io
Mathieu Malaterre [Wed, 16 Jan 2019 19:49:25 +0000 (20:49 +0100)]
davicom: Annotate implicit fall through in dm9000_set_io

There is a plan to build the kernel with -Wimplicit-fallthrough and
this place in the code produced a warning (W=1).

This commit removes the following warning:

  include/linux/device.h:1480:5: warning: this statement may fall through [-Wimplicit-fallthrough=]
  drivers/net/ethernet/davicom/dm9000.c:397:3: note: in expansion of macro 'dev_dbg'
  drivers/net/ethernet/davicom/dm9000.c:398:2: note: here
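
The usual fix for this class of warning at the time was an explicit
fall-through comment; a generic sketch follows (the case values and the helper
are made up, not the actual dm9000_set_io code):

switch (iosize) {
case 1:
    pr_debug("using 8-bit I/O\n");
    /* fall through */
case 2:
    setup_io(iosize);    /* hypothetical shared helper */
    break;
}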

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet/ipv6/udp_tunnel: prefer SO_BINDTOIFINDEX over SO_BINDTODEVICE
David Herrmann [Tue, 15 Jan 2019 13:42:16 +0000 (14:42 +0100)]
net/ipv6/udp_tunnel: prefer SO_BINDTOIFINDEX over SO_BINDTODEVICE

The udp-tunnel setup allows binding sockets to a network device. Prefer
the new SO_BINDTOIFINDEX to avoid temporarily resolving the device-name
just to look it up in the ioctl again.

Reviewed-by: Tom Gundersen <teg@jklm.no>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet/ipv4/udp_tunnel: prefer SO_BINDTOIFINDEX over SO_BINDTODEVICE
David Herrmann [Tue, 15 Jan 2019 13:42:15 +0000 (14:42 +0100)]
net/ipv4/udp_tunnel: prefer SO_BINDTOIFINDEX over SO_BINDTODEVICE

The udp-tunnel setup allows binding sockets to a network device. Prefer
the new SO_BINDTOIFINDEX to avoid temporarily resolving the device-name
just to look it up in the ioctl again.

Reviewed-by: Tom Gundersen <teg@jklm.no>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: introduce SO_BINDTOIFINDEX sockopt
David Herrmann [Tue, 15 Jan 2019 13:42:14 +0000 (14:42 +0100)]
net: introduce SO_BINDTOIFINDEX sockopt

This introduces a new generic SOL_SOCKET-level socket option called
SO_BINDTOIFINDEX. It behaves similar to SO_BINDTODEVICE, but takes a
network interface index as argument, rather than the network interface
name.

User-space often refers to network-interfaces via their index, but has
to temporarily resolve it to a name for a call into SO_BINDTODEVICE.
This might pose problems when the network-device is renamed
asynchronously by other parts of the system. When this happens, the
SO_BINDTODEVICE call might either fail, or worse, it might bind to the wrong
device.

In most cases user-space only ever operates on devices which they
either manage themselves, or otherwise have a guarantee that the device
name will not change (e.g., devices that are UP cannot be renamed).
However, particularly in libraries this guarantee is non-obvious and it
would be nice if that race-condition would simply not exist. It would
make it easier for those libraries to operate even in situations where
the device-name might change under the hood.

A real use-case that we recently hit is trying to start the network
stack early in the initrd but make it survive into the real system.
Existing distributions rename network-interfaces during the transition
from initrd into the real system. This, obviously, cannot affect
devices that are up and running (unless you also consider moving them
between network-namespaces). However, the network manager now has to
make sure its management engine for dormant devices will not run in
parallel to these renames. Particularly, when you offload operations
like DHCP into separate processes, these might setup their sockets
early, and thus have to resolve the device-name possibly running into
this race-condition.

By avoiding a call to resolve the device-name, we no longer depend on
the name and can run network setup of dormant devices in parallel to
the transition off the initrd. The SO_BINDTOIFINDEX socket option plugs this
race.
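
A minimal user-space sketch of the new option (SO_BINDTOIFINDEX is defined
here for older headers; 62 is its value on Linux, and the interface index is
whatever the application already has at hand):

#include <sys/socket.h>

#ifndef SO_BINDTOIFINDEX
#define SO_BINDTOIFINDEX 62    /* asm-generic value; for older headers */
#endif

/* Bind a socket to a device by index, no name resolution involved. */
static int bind_to_ifindex(int fd, int ifindex)
{
    return setsockopt(fd, SOL_SOCKET, SO_BINDTOIFINDEX,
                      &ifindex, sizeof(ifindex));
}

Compare with SO_BINDTODEVICE, which would first require mapping the index back
to a name with if_indextoname().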

Reviewed-by: Tom Gundersen <teg@jklm.no>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agotls: Fix recvmsg() to be able to peek across multiple records
Vakul Garg [Wed, 16 Jan 2019 10:40:16 +0000 (10:40 +0000)]
tls: Fix recvmsg() to be able to peek across multiple records

This fixes recvmsg() to be able to peek across multiple tls records.
Without this patch, the tls selftest case
'recv_peek_large_buf_mult_recs' fails. Each tls receive context now
maintains a 'rx_list' to retain incoming skb carrying tls records. If a
tls record needs to be retained e.g. for peek case or for the case when
the buffer passed to recvmsg() has a length smaller than decrypted
record length, then it is added to 'rx_list'. Additionally, records are
added in 'rx_list' if the crypto operation runs in async mode. The
records are dequeued from 'rx_list' after the decrypted data is consumed
by copying into the buffer passed to recvmsg(). If the MSG_PEEK
flag is used in recvmsg(), then records are not consumed or removed
from the 'rx_list'.
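
A user-space sketch of the behavior being fixed: peeking an amount that may
span several TLS records, then consuming the same bytes (fd and buffer are the
caller's):

#include <sys/socket.h>

/* Peek across record boundaries without consuming, then read for real. */
static ssize_t peek_then_read(int fd, char *buf, size_t len)
{
    ssize_t peeked = recv(fd, buf, len, MSG_PEEK | MSG_WAITALL);

    if (peeked < 0)
        return peeked;

    /* The same bytes must be returned again by the consuming read. */
    return recv(fd, buf, len, MSG_WAITALL);
}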

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'dsa-lantiq_gswip-probe-fixes-and-remove-cleanup'
David S. Miller [Thu, 17 Jan 2019 20:12:19 +0000 (12:12 -0800)]
Merge branch 'dsa-lantiq_gswip-probe-fixes-and-remove-cleanup'

Johan Hovold says:

====================
net: dsa: lantiq_gswip: probe fixes and remove cleanup

This series fixes a few issues found through inspection when fixing up new
bad uses of of_find_compatible_node() that have crept in since 4.19.

Note that these have only been compile tested.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: lantiq_gswip: drop bogus drvdata check
Johan Hovold [Wed, 16 Jan 2019 10:23:35 +0000 (11:23 +0100)]
net: dsa: lantiq_gswip: drop bogus drvdata check

The platform-device driver data is set on successful probe and will
never be NULL on remove (or we have much bigger problems).

Signed-off-by: Johan Hovold <johan@kernel.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: lantiq_gswip: fix OF child-node lookups
Johan Hovold [Wed, 16 Jan 2019 10:23:34 +0000 (11:23 +0100)]
net: dsa: lantiq_gswip: fix OF child-node lookups

Use the new of_get_compatible_child() helper to look up child nodes to
avoid ever matching non-child nodes elsewhere in the tree.

Also fix up the related struct device_node leaks.
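
For reference, a sketch of the lookup pattern the fix switches to (the
compatible string is a placeholder, and np is the device's own node):

struct device_node *child;

/* Matches children of 'np' only, never nodes elsewhere in the tree. */
child = of_get_compatible_child(np, "example,compatible");
if (!child)
    return -ENODEV;

/* ... use 'child' ... */

of_node_put(child);    /* drop the reference taken by the lookup */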

Fixes: 14fceff4771e ("net: dsa: Add Lantiq / Intel DSA driver for vrx200")
Cc: stable <stable@vger.kernel.org> # 4.20
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: Johan Hovold <johan@kernel.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: lantiq_gswip: fix use-after-free on failed probe
Johan Hovold [Wed, 16 Jan 2019 10:23:33 +0000 (11:23 +0100)]
net: dsa: lantiq_gswip: fix use-after-free on failed probe

Make sure to disable and deregister the switch on late probe errors to
avoid use-after-free when the device-resource-managed switch is freed.

Fixes: 14fceff4771e ("net: dsa: Add Lantiq / Intel DSA driver for vrx200")
Cc: stable <stable@vger.kernel.org> # 4.20
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: Johan Hovold <johan@kernel.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agosfc: extend MTD support for newer hardware
Bert Kenward [Wed, 16 Jan 2019 10:00:39 +0000 (10:00 +0000)]
sfc: extend MTD support for newer hardware

The X2 family of NICs (based on the SFC9250) have additional
MTD partitions for firmware and configuration. This includes
partitions that are read-only.

The NICs also have extended versions of the NVRAM interface,
allowing more detailed status information to be returned.

Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoselftests/tls: Fix recv partial/large_buff test cases
Vakul Garg [Wed, 16 Jan 2019 08:40:58 +0000 (08:40 +0000)]
selftests/tls: Fix recv partial/large_buff test cases

TLS test cases recv_partial & recv_peek_large_buf_mult_recs expect to
receive a certain amount of data and then compare it against known
strings using memcmp. To prevent recvmsg() from returning fewer than the
expected number of bytes (compared in memcmp), MSG_WAITALL needs to be
passed in recvmsg().
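
A minimal sketch of the change's effect in a test (buffer, length and expected
data are the test's own):

char buf[256];
/* MSG_WAITALL blocks until all 'len' bytes arrive, so memcmp() never
 * sees a short read. */
ssize_t n = recv(fd, buf, len, MSG_WAITALL);

if (n != (ssize_t)len || memcmp(buf, expected, len) != 0)
    return -1;    /* short or mismatching read */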

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: check return code when requesting PHY driver module
Heiner Kallweit [Wed, 16 Jan 2019 07:07:38 +0000 (08:07 +0100)]
net: phy: check return code when requesting PHY driver module

When requesting the PHY driver module fails we'll bind the genphy
driver later. This isn't obvious to the user and may cause, depending
on the PHY, different types of issues. Therefore check the return code
of request_module(). Note that we only check for failures in loading
the module, not whether a module exists for the respective PHY ID.

v2:
- add comment explaining what is checked and what is not
- return error from phy_device_create() if loading module fails

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet/tls: Make function tls_sw_do_sendpage static
YueHaibing [Wed, 16 Jan 2019 02:39:28 +0000 (10:39 +0800)]
net/tls: Make function tls_sw_do_sendpage static

Fixes the following sparse warning:

 net/tls/tls_sw.c:1023:5: warning:
 symbol 'tls_sw_do_sendpage' was not declared. Should it be static?

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet/tls: remove unused function tls_sw_sendpage_locked
YueHaibing [Wed, 16 Jan 2019 02:39:27 +0000 (10:39 +0800)]
net/tls: remove unused function tls_sw_sendpage_locked

There are no in-tree callers.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoOptimize sk_msg_clone() by data merge to end dst sg entry
Vakul Garg [Wed, 16 Jan 2019 01:42:44 +0000 (01:42 +0000)]
Optimize sk_msg_clone() by data merge to end dst sg entry

Function sk_msg_clone() has been modified to merge the data from the source sg
entry into the destination sg entry if the cloned data resides in the same page
and is contiguous with the end entry of the destination sk_msg. This improves
kernel tls throughput to the tune of 10%.

When the user space tls application calls sendmsg() with MSG_MORE, it leads
to calling sk_msg_clone() with the new data being cloned placed contiguously
after the previously cloned data. Without this optimization, a new SG entry in
the destination sk_msg i.e. rec->msg_plaintext in tls_clone_plaintext_msg()
gets used. This leads to exhaustion of sg entries in rec->msg_plaintext
even before a full 16K of allowable record data is accumulated. Hence we
lose the opportunity to encrypt and send a full 16K record.

With this patch, the kernel tls can accumulate full 16K of record data
irrespective of the size of data passed in sendmsg() with MSG_MORE.

Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: hns: Use struct_size() in devm_kzalloc()
Gustavo A. R. Silva [Wed, 16 Jan 2019 01:04:56 +0000 (19:04 -0600)]
net: hns: Use struct_size() in devm_kzalloc()

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    struct boo entry[];
};

instance = devm_kzalloc(dev, sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: Add helpers to determine if PHY driver is generic
Florian Fainelli [Tue, 15 Jan 2019 23:09:35 +0000 (15:09 -0800)]
net: phy: Add helpers to determine if PHY driver is generic

We are already checking in phy_detach() that the PHY driver is of
generic kind (1G or 10G) and we are going to make use of that in the SFP
layer as well for 1000BaseT SFP modules, so expose helper functions to
return that information.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'dsa-Split-platform-data-to-header-file'
David S. Miller [Thu, 17 Jan 2019 19:31:24 +0000 (11:31 -0800)]
Merge branch 'dsa-Split-platform-data-to-header-file'

Florian Fainelli says:

====================
net: dsa: Split platform data to header file

This patch series decouples the DSA platform data structures from
net/dsa.h which was getting used for all sorts of DSA related
structures.

It would probably make sense for this series to go via David's net-next
tree to avoid conflicts on the ARM part, since we obviously cannot
include a header that does not yet exist.

No functional changes intended.
====================

Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: Include platform_data header file
Florian Fainelli [Tue, 15 Jan 2019 23:06:13 +0000 (15:06 -0800)]
net: dsa: Include platform_data header file

b53 and mv88e6xxx support passing platform_data, and now that we have
split the platform_data portion from the main net/dsa.h header file,
include only the relevant parts.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoARM: orion5x: Include platform_data/dsa.h
Florian Fainelli [Tue, 15 Jan 2019 23:06:12 +0000 (15:06 -0800)]
ARM: orion5x: Include platform_data/dsa.h

Now that we have split the DSA platform data structures from the main
net/dsa.h header file, include only the relevant header file.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: Split platform data to header file
Florian Fainelli [Tue, 15 Jan 2019 23:06:11 +0000 (15:06 -0800)]
net: dsa: Split platform data to header file

Instead of having net/dsa.h contain both the internal switch tree/driver
structures, split the relevant platform_data parts into
include/linux/platform_data/dsa.h and make that header be included by
net/dsa.h in order not to break any setup. A subsequent set of patches
will update code including net/dsa.h to include only the platform_data
header.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet-next/hinic: replace disable_irq_nosync/enable_irq
Xue Chaojing [Tue, 15 Jan 2019 17:48:52 +0000 (17:48 +0000)]
net-next/hinic: replace disable_irq_nosync/enable_irq

In order to avoid frequent system interrupts when sending and
receiving packets, we replace disable_irq_nosync()/enable_irq()
with hinic_set_msix_state(). hinic_set_msix_state() is used to
access memory-mapped hinic devices.

Signed-off-by: Xue Chaojing <xuechaojing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: dsa: Add ndo_get_phys_port_name() for CPU port
Florian Fainelli [Tue, 15 Jan 2019 22:43:04 +0000 (14:43 -0800)]
net: dsa: Add ndo_get_phys_port_name() for CPU port

There is currently no way to infer through sysfs which port number is
being used as the CPU port. Overlay an ndo_get_phys_port_name()
operation onto the DSA master network device in order to retrieve that
information.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoDocumentation: networking: dsa: Update documentation
Florian Fainelli [Tue, 15 Jan 2019 22:35:02 +0000 (14:35 -0800)]
Documentation: networking: dsa: Update documentation

Since 83c0afaec7b7 ("net: dsa: Add new binding implementation"), DSA is
no longer a platform device exclusively and can support registering DSA
switches from other bus drivers (PCI, USB, I2C, etc.).

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agocxgb4/l2t: Use struct_size() in kvzalloc()
Gustavo A. R. Silva [Tue, 15 Jan 2019 21:44:52 +0000 (15:44 -0600)]
cxgb4/l2t: Use struct_size() in kvzalloc()

One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    struct boo entry[];
};

instance = kvzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoopenvswitch: meter: Use struct_size() in kzalloc()
Gustavo A. R. Silva [Tue, 15 Jan 2019 21:19:17 +0000 (15:19 -0600)]
openvswitch: meter: Use struct_size() in kzalloc()

One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    struct boo entry[];
};

instance = kzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: don't include asm/irq.h directly
Heiner Kallweit [Tue, 15 Jan 2019 20:40:51 +0000 (21:40 +0100)]
net: phy: don't include asm/irq.h directly

There's no need to include asm/irq.h directly, and one shouldn't.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: improve logging in phylib
Heiner Kallweit [Tue, 15 Jan 2019 20:50:25 +0000 (21:50 +0100)]
net: phy: improve logging in phylib

Some time ago phydev_info() and friends were added. They allow us to
improve and simplify logging.
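
For example, instead of plain dev_info()/pr_info() calls a PHY driver can use
the PHY-aware helpers (message text is illustrative):

/* Logs are automatically prefixed with the PHY device context. */
phydev_info(phydev, "link is up at %d Mbps\n", phydev->speed);
phydev_err(phydev, "failed to read PHY register\n");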

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: phy: remove preliminary workaround for not loading PHY driver
Heiner Kallweit [Tue, 15 Jan 2019 20:11:14 +0000 (21:11 +0100)]
net: phy: remove preliminary workaround for not loading PHY driver

This workaround attempt helped for some but not all affected users.
With commit 11287b693d03 ("r8169: load Realtek PHY driver module
before r8169") we have a better workaround now, so we an remove
the first attempt.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'nfp-flower-improve-flower-resilience'
David S. Miller [Wed, 16 Jan 2019 23:23:15 +0000 (15:23 -0800)]
Merge branch 'nfp-flower-improve-flower-resilience'

Jakub Kicinski says:

====================
nfp: flower: improve flower resilience

This series contains mostly changes which improve nfp flower
offload's resilience, but are too large or risky to push into net.

Fred makes the driver wait for flower FW responses uninterruptibly,
and a little longer (~40ms).

Pieter adds support for cards with multiple rule memories.

John reworks the MAC offloads.  He says:
> When potential tunnel end-point MACs are offloaded, they are assigned an
> index. This index may be associated with a port number meaning that if a
> packet matches an offloaded MAC address on the card, then the ingress
> port for that MAC can also be verified. In the case of shared MACs (e.g.
> on a linux bond) there may be situations where this index maps to only
> one of the ports that share the MAC.
>
> To handle this, 'global' MAC indexes are supported that bypass the check on
> ingress port on the NFP. The patchset tracks shared MACs and assigns
> global indexes to these. It also ensures that port based indexes are
> re-applied if a single port becomes the only user of an offloaded MAC.
>
> Other patches in the set aim to tidy code without changing functionality.
> There is also a delete offload message introduced to ensure that MACs no
> longer in use in kernel space are removed from the firmware lookup tables.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: enable MAC address sharing for offloadable devs
John Hurley [Wed, 16 Jan 2019 03:06:59 +0000 (19:06 -0800)]
nfp: flower: enable MAC address sharing for offloadable devs

A MAC address is not necessarily a unique identifier for a netdev. Drivers
such as Linux bonds, for example, can apply the same MAC address to the
upper layer device and all lower layer devices.

NFP MAC offload for tunnel decap includes port verification for reprs but
also supports the offload of non-repr MAC addresses by assigning 'global'
indexes to these. This means that the FW will not verify the incoming port
of a packet matching this destination MAC.

Modify the MAC offload logic to assign global indexes based on MAC address
instead of net device (as it currently does). Use this to allow multiple
devices to share the same MAC. In other words, if a repr shares its MAC
address with another device then give the offloaded MAC a global index
rather than associate it with an ingress port. Track this so that changes
can be reverted as MACs stop being shared.

Implement this by removing the current list based assignment of global
indexes and replacing it with an rhashtable that maps an offloaded MAC
address to the number of devices sharing it, distributing global indexes
based on this.
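
A sketch of the kind of mapping described here, using an rhashtable keyed on
the MAC address (struct and field names are illustrative, not the driver's
actual ones):

/* How many netdevs currently share an offloaded MAC, and which global
 * index was handed out for it. */
struct offloaded_mac_entry {
    struct rhash_head ht_node;
    u8                addr[ETH_ALEN];    /* hash key */
    u16               global_index;
    int               ref_count;         /* netdevs sharing this MAC */
};

static const struct rhashtable_params offloaded_mac_params = {
    .key_offset          = offsetof(struct offloaded_mac_entry, addr),
    .head_offset         = offsetof(struct offloaded_mac_entry, ht_node),
    .key_len             = ETH_ALEN,
    .automatic_shrinking = true,
};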

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: ensure MAC cleanup on address change
John Hurley [Wed, 16 Jan 2019 03:06:58 +0000 (19:06 -0800)]
nfp: flower: ensure MAC cleanup on address change

It is possible to receive a MAC address change notification without the
net device being down (e.g. when an OvS bridge is assigned the same MAC as
a port added to it). This means that an offloaded MAC address may not be
removed if its device gets a new address.

Maintain a record of the offloaded MAC addresses for each repr and netdev
assigned a MAC offload index. Use this to delete the (now expired) MAC if
a change of address event occurs. Only handle change address events if the
device is already up - if not then the netdev up event will handle it.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: add infrastructure for non-repr priv data
John Hurley [Wed, 16 Jan 2019 03:06:57 +0000 (19:06 -0800)]
nfp: flower: add infrastructure for non-repr priv data

NFP repr netdevs contain private data that can store per port information.
In certain cases, the NFP driver offloads information from non-repr ports
(e.g. tunnel ports). As the driver does not have control over non-repr
netdevs, it cannot add/track private data directly to the netdev struct.

Add infrastructure to store private information on any non-repr netdev that
is offloaded at a given time. This is used in a following patch to track
offloaded MAC addresses for non-reprs and enable correct housekeeping on
address changes.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: ensure deletion of old offloaded MACs
John Hurley [Wed, 16 Jan 2019 03:06:56 +0000 (19:06 -0800)]
nfp: flower: ensure deletion of old offloaded MACs

When a potential tunnel end point goes down then its MAC address should
not be matchable on the NFP.

Implement a delete message for offloaded MACs and call this on net device
down. While at it, remove the actions on register and unregister netdev
events. A MAC should only be offloaded if the device is up. Note that the
netdev notifier will replay any notifications for UP devices on
registration so NFP can still offload ports that exist before the driver
is loaded. Similarly, devices need to go down before they can be
unregistered so removal of offloaded MACs is only required on down events.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: remove list infrastructure from MAC offload
John Hurley [Wed, 16 Jan 2019 03:06:55 +0000 (19:06 -0800)]
nfp: flower: remove list infrastructure from MAC offload

Potential MAC destination addresses for tunnel end-points are offloaded to
firmware. This was done by building a list of such MACs and writing them to
firmware as blocks of addresses.

Simplify this code by removing the list format and sending a new message
for each offloaded MAC.

This is in preparation for delete MAC messages. There will be one delete
flag per message so we cannot assume that this applies to all addresses
in a list.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: ignore offload of VF and PF repr MAC addresses
John Hurley [Wed, 16 Jan 2019 03:06:54 +0000 (19:06 -0800)]
nfp: flower: ignore offload of VF and PF repr MAC addresses

Currently MAC addresses of all repr netdevs, along with selected non-NFP
controlled netdevs, are offloaded to FW as potential tunnel end-points.
However, the addresses of VF and PF reprs are meaningless outside of
internal communication, and only those of physical port reprs are
required.

Modify the MAC address offload selection code to ignore VF/PF repr devs.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: tidy tunnel related private data
John Hurley [Wed, 16 Jan 2019 03:06:53 +0000 (19:06 -0800)]
nfp: flower: tidy tunnel related private data

Recent additions to the flower app private data have grouped the variables
of a given feature into a struct and added that struct to the main private
data struct.

In keeping with this, move all tunnel related private data into its own
struct. This has no effect on functionality but improves readability and
maintenance of the code.

Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: support multiple memory units for filter offloads
Pieter Jansen van Vuuren [Wed, 16 Jan 2019 03:06:52 +0000 (19:06 -0800)]
nfp: flower: support multiple memory units for filter offloads

Adds support for multiple memory units which are used for filter
offloads. Each filter is assigned a stats id; the MSBs of the id are used
to determine which memory unit the filter should be offloaded to. The
number of available memory units that can be used for filter offload is
obtained from HW. A simple round-robin technique is used to allocate and
distribute the ids across memory units.
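
A sketch of the round-robin idea, with the id-field width and all names
chosen purely for illustration:

#include <linux/types.h>

#define EXAMPLE_STATS_ID_BITS 20   /* assumed width of the per-unit id field */

/* Hand out ids round-robin across memory units; the MSBs above the id
 * field select the unit the filter is offloaded to.
 */
static u32 example_next_stats_id(u32 *next, u32 num_mems)
{
    u32 unit = *next % num_mems;          /* memory unit for this filter */
    u32 id_in_unit = *next / num_mems;    /* id within that unit */

    (*next)++;
    return (unit << EXAMPLE_STATS_ID_BITS) | id_in_unit;
}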

Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonfp: flower: increase cmesg reply timeout
Fred Lotter [Wed, 16 Jan 2019 03:06:51 +0000 (19:06 -0800)]
nfp: flower: increase cmesg reply timeout

QA tests report occasional timeouts on REIFY message replies. Profiling
of the two cmesg reply types under burst conditions, with a 12-core host
under heavy cpu and io load (stress --cpu 12 --io 12), shows that both PHY
MTU change and REIFY replies can exceed the 10ms timeout. The maximum MTU
reply wait under burst is 16ms, while the maximum REIFY wait under a 40 VF
burst is 12ms. Using a 4 VF REIFY burst results in an 8ms maximum wait.
A larger VF burst does increase the delay, but not linearly enough to
justify a scaled REIFY delay. The worst-case values for MTU and REIFY
appear close enough to justify a common timeout. Pick a conservative 40ms
as a safer, future-proof common reply timeout. The delay only affects the
failure case.

Change the REIFY timeout mechanism to use wait_event_timeout() instead
of wait_event_interruptible_timeout(), to match the MTU code. In the
current implementation a signal could, in theory, interrupt the REIFY
waiting period, with a return code of ERESTARTSYS; however, this is
caught under the general timeout error code EIO. There is no clear
benefit to exposing the REIFY waiting period to signals for such a short
delay (40ms), while the MTU mechanism does not use the same logic. In the
absence of any reply (wakeup() call), both reply types wake up the task
after the timeout period. The REIFY timeout applies to the entire
representor group being instantiated (e.g. VFs), while the MTU timeout
applies to a single PHY MTU change.

Signed-off-by: Fred Lotter <frederik.lotter@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: sungem: fix indentation, remove a tab
Colin Ian King [Mon, 14 Jan 2019 15:41:25 +0000 (15:41 +0000)]
net: sungem: fix indentation, remove a tab

The declaration of variable 'found' is one level too deep; fix this by
removing a tab.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodrivers: net: atp: fix various indentation issues
Colin Ian King [Mon, 14 Jan 2019 15:37:01 +0000 (15:37 +0000)]
drivers: net: atp: fix various indentation issues

There are various lines that have indentation issues; fix these.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agobnx2x: fix various indentation issues
Colin Ian King [Mon, 14 Jan 2019 15:15:16 +0000 (15:15 +0000)]
bnx2x: fix various indentation issues

There are lines that have indentation issues; fix these.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonetworking: Documentation: fix snmp_counters.rst Sphinx warnings
Randy Dunlap [Mon, 14 Jan 2019 04:17:41 +0000 (20:17 -0800)]
networking: Documentation: fix snmp_counters.rst Sphinx warnings

Fix over 100 documentation warnings in snmp_counter.rst by
extending the underline string lengths and inserting a blank line
after bullet items.

Examples:

Documentation/networking/snmp_counter.rst:1: WARNING: Title overline too short.
Documentation/networking/snmp_counter.rst:14: WARNING: Bullet list ends without a blank line; unexpected unindent.

Fixes: 2b96547223e3 ("add document for TCP OFO, PAWS and skip ACK counters")
Fixes: 8e2ea53a83df ("add snmp counters document")
Fixes: 712ee16c230f ("add documents for snmp counters")
Fixes: 80cc49507ba4 ("net: Add part of TCP counts explanations in snmp_counters.rst")
Fixes: b08794a922c4 ("documentation of some IP/ICMP snmp counters")

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: yupeng <yupeng0921@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet, decnet: use struct_size() in kzalloc()
Gustavo A. R. Silva [Tue, 15 Jan 2019 19:50:06 +0000 (13:50 -0600)]
net, decnet: use struct_size() in kzalloc()

One of the more common cases of allocation size calculations is finding the
size of a structure that has a zero-sized array at the end, along with memory
for some number of elements for that array. For example:

struct foo {
    int stuff;
    struct boo entry[];
};

instance = kzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now
use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agomlxsw: spectrum_nve: Use struct_size() in kzalloc()
Gustavo A. R. Silva [Tue, 15 Jan 2019 23:14:29 +0000 (17:14 -0600)]
mlxsw: spectrum_nve: Use struct_size() in kzalloc()

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    struct boo entry[];
};

instance = kzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This issue was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agomlxsw: spectrum_acl_bloom_filter: use struct_size() in kzalloc()
Gustavo A. R. Silva [Tue, 15 Jan 2019 23:05:39 +0000 (17:05 -0600)]
mlxsw: spectrum_acl_bloom_filter: use struct_size() in kzalloc()

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct foo {
    int stuff;
    void *entry[];
};

instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This issue was detected with the help of Coccinelle.

Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodt-bindings: net: dsa: ksz9477: fix indentation for switch spi bindings
Sergio Paracuellos [Sun, 13 Jan 2019 08:56:48 +0000 (09:56 +0100)]
dt-bindings: net: dsa: ksz9477: fix indentation for switch spi bindings

Switch bindings for spi managed mode are using spaces instead of tabs.
Fix them to get a file with a proper kernel indentation style.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sergio Paracuellos <sergio.paracuellos@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoFix ERROR:do not initialise statics to 0 in af_vsock.c
Lepton Wu [Wed, 9 Jan 2019 23:45:41 +0000 (15:45 -0800)]
Fix ERROR:do not initialise statics to 0 in af_vsock.c

Found by scripts/checkpatch.pl
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Tue, 15 Jan 2019 23:48:00 +0000 (15:48 -0800)]
Merge branch '100GbE' of git://git./linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
100GbE Intel Wired LAN Driver Updates 2019-01-15

This series contains updates to the ice driver only.

Bruce fixes an unused variable build warning, which was introduced with
commit 2fd527b72bb6 ("net: ndo_bridge_setlink: Add extack").  Added
ethtool support for the get_eeprom and get_eeprom_len operations.  Added
support for optionally bringing down the PHY link when the interface is
administratively downed.

Anirudh refactors the transmit scheduler functions, reducing code
duplication and adding a helper function that all the scheduler functions
call instead.  Added an LED blinking handler to ethtool.  Reworked the
queue management code to allow for reuse in future XDP feature support.
Updated the driver to preserve the aggregator list after reset by moving
it out of port_info and into ice_hw.  Added the ability to offload SCTP
checksum calculation to the hardware.  Added support for new PHY types,
which support higher link speeds.

Md Fahad makes sure that the RSS lookup table and hash key get configured
during the rebuild path after a reset.

Brett updates the driver to set the physical link state according to the
netdev state (up/down).  Added support for adaptive/dynamic interrupt
moderation in the ice driver, along with the ethtool operations needed.

Tony adds software timestamping support by using
ethtool_op_get_ts_info(); a minimal sketch of this wiring follows the
quoted summary.
====================
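
As a minimal sketch of how software timestamp reporting is typically
wired up through the stock ethtool helper (the ops struct name here is
illustrative, not the driver's):

#include <linux/ethtool.h>

static const struct ethtool_ops example_ethtool_ops = {
    .get_ts_info = ethtool_op_get_ts_info,  /* report SW timestamping only */
};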

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoice: add const qualifier to mac_addr parameter
Jacob Keller [Wed, 19 Dec 2018 18:03:34 +0000 (10:03 -0800)]
ice: add const qualifier to mac_addr parameter

The function ice_aq_manage_mac_write takes a pointer to a MAC address.
The parameter is not marked const, even though the function doesn't need
to modify it. This prevents passing a parameter that is already marked
const. Update the function prototype to take a const pointer, to allow
passing constant pointers to this function.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
5 years agoice: Add support for new PHY types
Anirudh Venkataramanan [Wed, 19 Dec 2018 18:03:33 +0000 (10:03 -0800)]
ice: Add support for new PHY types

This patch adds code for the detection and operation of several
additional PHY types that support higher link speeds.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>