platform/kernel/linux-rpi.git
mlxsw: spectrum_router: Propagate neighbour update errors
Ido Schimmel [Sun, 14 Apr 2019 18:57:47 +0000 (18:57 +0000)]
mlxsw: spectrum_router: Propagate neighbour update errors

The next patch will add an offload indication to neighbours, but the
indication should only be altered if the neighbour was successfully
added to / deleted from the device.

Propagate neighbour update errors, so that they can be taken into
account by the next patch.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
David S. Miller [Mon, 15 Apr 2019 19:07:35 +0000 (12:07 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next

Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter updates for net-next:

1) Remove the broute pseudo hook, implement this from the bridge
   prerouting hook instead. Now broute becomes a real table in ebtables,
   from Florian Westphal. This also includes a size reduction patch for the
   bridge control buffer area via squashing booleans into bitfields, and
   a selftest.

2) Add OS passive fingerprint version matching, from Fernando Fernandez.

3) Support for gue encapsulation for IPVS, from Jacky Hu.

4) Add support for NAT to the inet family, from Florian Westphal.
   This includes support for masquerade, redirect and nat extensions.

5) Skip interface lookup in flowtable, use device in the dst object.

6) Add jiffies64_to_msecs() and use it, from Li RongQing.

7) Remove unused parameter in nf_tables_set_desc_parse(), from Colin Ian King.

8) Statify several functions, patches from YueHaibing and Florian Westphal.

9) Add an optimized version of nf_inet_addr_cmp(), from Li RongQing.

10) Merge route extension to core, also from Florian.

11) Use IS_ENABLED(CONFIG_NF_NAT) instead of NF_NAT_NEEDED, from Florian.

12) Merge ip/ip6 masquerade extensions, from Florian. This includes
    netdevice notifier unification.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
bridge: only include nf_queue.h if needed
Stephen Rothwell [Sat, 13 Apr 2019 04:03:36 +0000 (14:03 +1000)]
bridge: only include nf_queue.h if needed

After merging the netfilter-next tree, today's linux-next build (powerpc
ppc44x_defconfig) failed like this:

In file included from net/bridge/br_input.c:19:
include/net/netfilter/nf_queue.h:16:23: error: field 'state' has incomplete type
  struct nf_hook_state state;
                       ^~~~~

Fixes: 971502d77faa ("bridge: netfilter: unroll NF_HOOK helper in bridge input path")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
r8169: change irq handler to always trigger NAPI polling
Heiner Kallweit [Sun, 14 Apr 2019 09:48:39 +0000 (11:48 +0200)]
r8169: change irq handler to always trigger NAPI polling

This check isn't really needed and we can simplify the code and save
some CPU cycles by removing it. Only in case of an error none of these
bits are set, and calling the NAPI callback doesn't hurt in this case.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'r8169-phy-func-ptr-arrays'
David S. Miller [Sun, 14 Apr 2019 20:50:05 +0000 (13:50 -0700)]
Merge branch 'r8169-phy-func-ptr-arrays'

Heiner Kallweit says:

====================
r8169: create function pointer arrays for PHY and chip hw init functions

Using function pointer arrays makes the code easier to read and more
maintainable. AFAIK function pointer arrays cause some performance
drawback due to Spectre mitigation, but we're not in a hot path.
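
As a rough illustration of the pattern (a standalone sketch; the names
are hypothetical, not the driver's actual symbols):

/* Standalone sketch of the function-pointer-array pattern. */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

typedef void (*hw_init_fn)(void *priv);

static void init_chip_a(void *priv) { /* chip A specific setup */ }
static void init_chip_b(void *priv) { /* chip B specific setup */ }

/* Indexed by chip version; replaces a long switch statement. */
static const hw_init_fn hw_init_table[] = {
        [0] = init_chip_a,
        [1] = init_chip_b,
};

static void chip_hw_init(unsigned int ver, void *priv)
{
        if (ver < ARRAY_SIZE(hw_init_table) && hw_init_table[ver])
                hw_init_table[ver](priv);
}
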
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
r8169: create function pointer array for chip hw init functions
Heiner Kallweit [Sun, 14 Apr 2019 08:32:07 +0000 (10:32 +0200)]
r8169: create function pointer array for chip hw init functions

Using a function pointer array makes this easier to read and more
maintainable.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
r8169: create function pointer array for PHY init functions
Heiner Kallweit [Sun, 14 Apr 2019 08:30:24 +0000 (10:30 +0200)]
r8169: create function pointer array for PHY init functions

Using a function pointer array makes this easier to read and more
maintainable. AFAIK function pointer arrays cause some performance
drawback due to Spectre mitigation, but we're not in a hot path here.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'hns3-next'
David S. Miller [Sun, 14 Apr 2019 20:47:35 +0000 (13:47 -0700)]
Merge branch 'hns3-next'

Huazhong Tan says:

====================
code optimizations & bugfixes for HNS3 driver

This patch-set includes code optimizations and bugfixes for the HNS3
ethernet controller driver.

[patch 1/12 - 4/12] optimizes the VLAN feature, adds support for port
based VLAN, and fixes some related bugs in the current implementation.

[patch 5/12 - 12/12] includes some other code optimizations for the HNS3
ethernet controller driver.

Change log:
V1->V2: modifies some patches' commit log and code.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: code optimization for command queue's spin lock
Peng Li [Sun, 14 Apr 2019 01:47:46 +0000 (09:47 +0800)]
net: hns3: code optimization for command queue's spin lock

This patch removes some redundant BH disabling when initializing
and uninitializing the command queue.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: free the pending skb when cleaning RX ring
Peng Li [Sun, 14 Apr 2019 01:47:45 +0000 (09:47 +0800)]
net: hns3: free the pending skb when cleaning RX ring

If there is a pending skb in the RX flow when the port is closed, and
the pending buffer is not cleaned, the new packet will be added to the
pending skb when the port opens again, and the first new packet will
contain erroneous data.

This patch cleans the pending skb when cleaning the RX ring.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: do not initialize MDIO bus when PHY is nonexistent
Jian Shen [Sun, 14 Apr 2019 01:47:44 +0000 (09:47 +0800)]
net: hns3: do not initialize MDIO bus when PHY is nonexistent

In some cases, the PHY may not be connected to the MDIO bus, so
the driver will fail to initialize since the MDIO bus initialization
fails.

This patch fixes it by skipping the MDIO bus initialization
when the PHY is nonexistent.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: set individual reset level for all RAS and MSI-X errors
Weihang Li [Sun, 14 Apr 2019 01:47:43 +0000 (09:47 +0800)]
net: hns3: set individual reset level for all RAS and MSI-X errors

According to the hardware description, the reset levels that should be
triggered are not consistent within a module. For example, for SSU
common errors, the first two bits need no reset,
but the other bits need a global reset.

This patch sets a separate reset level for all RAS and MSI-X
interrupts by adding a reset_level field in struct hclge_hw_error,
and fixes some incorrect reset levels.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: divide shared buffer between TC
Yunsheng Lin [Sun, 14 Apr 2019 01:47:42 +0000 (09:47 +0800)]
net: hns3: divide shared buffer between TC

Currently hardware may not have enough buffer to receive packets
when it has used more than two MPS (maximum packet size) worth of
buffer, while a lot of shared buffer is still left unused
when the TC num is small.

This patch divides the shared buffer between TCs when
the port supports DCB, and adjusts the waterline and threshold
according to the user manual for ports that do not support
DCB.

This patch also changes hclge_get_tc_num's return type to u32
to avoid a signed-unsigned mix in division.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: always assume no drop TC for performance reason
Yunsheng Lin [Sun, 14 Apr 2019 01:47:41 +0000 (09:47 +0800)]
net: hns3: always assume no drop TC for performance reason

Currently the RX shared buffer's threshold size for a specific TC is
set to a smaller value when the TC's PFC is not enabled, which may
cause a performance problem because hardware may not have enough
buffer when PFC is not enabled.

This patch sets the same threshold size for all TCs regardless of
whether the specific TC's PFC is enabled.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: add hns3_gro_complete for HW GRO process
Yunsheng Lin [Sun, 14 Apr 2019 01:47:40 +0000 (09:47 +0800)]
net: hns3: add hns3_gro_complete for HW GRO process

When a GRO packet is received by the driver, the cwr field in the
struct tcphdr needs to be checked to decide whether to set the
SKB_GSO_TCP_ECN for skb_shinfo(skb)->gso_type.

So this patch adds hns3_gro_complete to do that, and adds
hns3_handle_bdinfo to handle hns3_gro_complete and
hns3_rx_checksum.
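
As a minimal sketch of that check (simplified, assuming the usual
<net/tcp.h> helpers; not the driver's exact code):

static void sketch_gro_complete(struct sk_buff *skb)
{
        /* Propagate ECN for an aggregated TCP packet by checking
         * the cwr bit of the inner TCP header. */
        struct tcphdr *th = tcp_hdr(skb);

        if (th->cwr)
                skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
}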

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: minor refactor for hns3_rx_checksum
Yunsheng Lin [Sun, 14 Apr 2019 01:47:39 +0000 (09:47 +0800)]
net: hns3: minor refactor for hns3_rx_checksum

Change the parameters of hns3_rx_checksum to be more specific to
what is used internally, rather than passing in a pointer to the
whole hns3_desc. This reduces duplicate code and brings this function
in line with the approach used in hns3_set_gro_param.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: fix set port based VLAN issue for VF
Jian Shen [Sun, 14 Apr 2019 01:47:38 +0000 (09:47 +0800)]
net: hns3: fix set port based VLAN issue for VF

In the original code, ndo_set_vf_vlan() in the hns3 driver was
implemented incorrectly. It added or removed a VLAN in the VLAN filter
for a VF, but the VF was unaware of it.

This patch fixes it. When the VF loads up, it first queries the port
based VLAN state from the PF. When the user changes the port based VLAN
state from the PF, the PF first checks whether the VF is alive. If the
VF is alive, the PF notifies the VF of the modification; otherwise the
PF configures the port based VLAN state directly.

Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: fix set port based VLAN for PF
Jian Shen [Sun, 14 Apr 2019 01:47:37 +0000 (09:47 +0800)]
net: hns3: fix set port based VLAN for PF

In the original code, ndo_set_vf_vlan() in the hns3 driver was
implemented incorrectly. It added or removed a VLAN in the VLAN filter
for a VF, but the VF was unaware of it.

Indeed, ndo_set_vf_vlan() is expected to enable or disable port based
VLAN (hardware inserts a specified VLAN tag into all TX packets for a
specified VF). When enabling port based VLAN, we use the port based
VLAN id as the VLAN filter entry. When disabling port based VLAN, we
use the VLAN id of the VLAN device.

This patch fixes it for the PF, enabling/disabling port based VLAN when
ndo_set_vf_vlan() is called.

Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: fix VLAN offload handle for VLAN inserted by port
Jian Shen [Sun, 14 Apr 2019 01:47:36 +0000 (09:47 +0800)]
net: hns3: fix VLAN offload handle for VLAN inserted by port

Currently, in the TX direction, the driver implements TX VLAN offload
by checking the VLAN header in the skb and filling it into the TX
descriptor. Usually this works well, but if inserting a VLAN header
based on port is enabled, it may conflict when the out_tag field of the
TX descriptor is already used, and cause a RAS error.

In the RX direction, hardware supports stripping at most two VLAN
headers. Since vlan_tci in the skb can only store one VLAN tag, when RX
VLAN offload is enabled, the driver tells hardware to strip one VLAN
header from the RX packet; when RX VLAN offload is disabled, the driver
tells hardware not to strip the VLAN header from the RX packet. Now if
port based VLAN insertion is enabled, all RX packets will have the port
based VLAN header. This header is useless for the stack, so the driver
needs to ask hardware to strip it. Unfortunately, hardware can't drop
this VLAN header, and always fills it into the RX descriptor, so the
driver has to identify and drop it.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns3: modify VLAN initialization to be compatible with port based VLAN
Jian Shen [Sun, 14 Apr 2019 01:47:35 +0000 (09:47 +0800)]
net: hns3: modify VLAN initialization to be compatible with port based VLAN

Our hardware supports inserting a specified VLAN header for each
function when sending packets. The user can enable it with the command
"ip link set <devname> vf <vfid> vlan <vlan id>".
Since this VLAN header is inserted by hardware, not by the stack,
hardware also needs to strip it from received packets before
sending them to the stack.  In this case, the driver needs to tell
hardware which VLAN to insert or strip.

The current VLAN initialization doesn't allow inserting a
VLAN header by hardware; this patch modifies it in order to be
compatible with VLAN insertion based on port.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'net-phy-shrink-PHY-settings-array-and-add-200Gbps-support'
David S. Miller [Sun, 14 Apr 2019 20:37:06 +0000 (13:37 -0700)]
Merge branch 'net-phy-shrink-PHY-settings-array-and-add-200Gbps-support'

Heiner Kallweit says:

====================
net: phy: shrink PHY settings array and add 200Gbps support

The definition of the settings[] array has become quite lengthy. Add a
macro to shrink the definition.

When doing this I saw that the new 200Gbps modes and a few 100Gbps/50Gbps
modes aren't supported in phylib yet. So add this.

To avoid ethtool and phylib mode definitions getting out of sync, add
a build bug to check for this.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
phy: warn if phylib and ethtool PHY mode definitions are out of sync
Heiner Kallweit [Sat, 13 Apr 2019 18:53:43 +0000 (20:53 +0200)]
phy: warn if phylib and ethtool PHY mode definitions are out of sync

If new PHY modes are added, people may forget to update all relevant places
in the kernel. Therefore add a build bug check for new modes in enum
ethtool_link_mode_bit_indices that haven't been added to phylib yet.
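
A minimal sketch of such a check (the table name here is hypothetical,
not the exact symbol used by the patch):

/* Break the build if ethtool defines more link modes than the
 * phylib settings table covers (phylib_modes is illustrative). */
static inline void phy_modes_sanity_check(void)
{
        BUILD_BUG_ON(ARRAY_SIZE(phylib_modes) <
                     __ETHTOOL_LINK_MODE_MASK_NBITS);
}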

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: phy: add support for new modes in phylib
Heiner Kallweit [Sat, 13 Apr 2019 18:50:24 +0000 (20:50 +0200)]
net: phy: add support for new modes in phylib

Recently new modes have been added to ethtool.h, but the related
extension to phylib hasn't been done yet. So add support for these
modes.

v2:
- add missing 100Gbps and 50Gbps modes

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: phy: shrink PHY settings array
Heiner Kallweit [Sat, 13 Apr 2019 18:48:55 +0000 (20:48 +0200)]
net: phy: shrink PHY settings array

The definition of the settings[] array has become quite lengthy. Add a
macro to shrink the definition.
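
The macro has roughly this shape (a sketch; see the patch for the
exact definition):

/* Build one phy_setting entry from a speed, duplex and ethtool
 * link mode bit via token pasting, e.g.
 * PHY_SETTING(10000, FULL, 10000baseT_Full). */
#define PHY_SETTING(s, d, b)                               \
        {                                                  \
                .speed  = SPEED_ ## s,                     \
                .duplex = DUPLEX_ ## d,                    \
                .bit    = ETHTOOL_LINK_MODE_ ## b ## _BIT, \
        }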

v2:
- Fix an indentation issue

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'rhashtable-bit-locking-m68k'
David S. Miller [Sat, 13 Apr 2019 00:34:46 +0000 (17:34 -0700)]
Merge branch 'rhashtable-bit-locking-m68k'

NeilBrown says:

====================
Fix rhashtable bit-locking for m68k

As reported by Guenter Roeck, the new rhashtable bit-locking
doesn't work on m68k, as m68k only requires 2-byte alignment, so BIT(1)
in addresses is not unused.

We currently use BIT(0) to identify a NULLS marker, but that is only
needed in ->next pointers.  The bucket head does not need a NULLS
marker, so the lsb there can be used for locking.

The first 4 patches make some small improvements and re-arrange some
code.  The final patch converts to using only BIT(0) for these two
different special purposes.

I had previously suggested dropping the series until I fix it.  Given
that this was fairly easy, I retract that; I think it best simply to
add these patches to fix the code.
====================

Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: use BIT(0) for locking.
NeilBrown [Fri, 12 Apr 2019 01:52:08 +0000 (11:52 +1000)]
rhashtable: use BIT(0) for locking.

As reported by Guenter Roeck, the new bit-locking using
BIT(1) doesn't work on the m68k architecture.  m68k only requires
2-byte alignment for words and longwords, so there is only one
unused bit in pointers to structs - we currently use two: one for the
NULLS marker at the end of the linked list, and one for the bit-lock
in the head of the list.

The two uses don't need to conflict as we never need the head of the
list to be a NULLS marker - the marker is only needed to check if an
object has moved to a different table, and the bucket head cannot
move.  The NULLS marker is only needed in a ->next pointer.

As we already have different types for the bucket head pointer (struct
rhash_lock_head) and the ->next pointers (struct rhash_head), it is
fairly easy to treat the lsb differently in each.

So: initialize bucket heads to NULL, and use the lsb for locking.
When loading the pointer from the bucket head, if it is NULL (ignoring
the lock bit), report it as being the expected NULLS marker.
When storing a value into a bucket head, if it is a NULLS marker,
store NULL instead.

And convert all places that used bit 1 for locking, to use bit 0.
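
A minimal sketch of the pointer-tagging idea (illustrative only, not
the kernel's actual rhashtable helpers):

/* With at least 2-byte alignment, bit 0 of a bucket head pointer
 * is always zero and can carry the lock flag. */
static inline unsigned long bkt_is_locked(unsigned long v)
{
        return v & 1UL;                 /* lock bit (bit 0) set? */
}

static inline void *bkt_ptr(unsigned long v)
{
        return (void *)(v & ~1UL);      /* strip the lock bit */
}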

Fixes: 8f0db018006a ("rhashtable: use bit_spin_locks to protect hash bucket.")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: replace rht_ptr_locked() with rht_assign_locked()
NeilBrown [Fri, 12 Apr 2019 01:52:08 +0000 (11:52 +1000)]
rhashtable: replace rht_ptr_locked() with rht_assign_locked()

The only time rht_ptr_locked() is used is to store a new
value in a bucket head.  This is also the only time it makes sense
to use it.  So replace it with a function which does the
whole task: sets the lock bit and assigns to a bucket head.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: move dereference inside rht_ptr()
NeilBrown [Fri, 12 Apr 2019 01:52:08 +0000 (11:52 +1000)]
rhashtable: move dereference inside rht_ptr()

Rather than dereferencing a pointer to a bucket and then passing the
result to rht_ptr(), we now pass in the pointer and do the dereference
in rht_ptr().

This requires that we pass in the tbl and hash as well to support RCU
checks, and means that the various rht_for_each functions can expect a
pointer that can be dereferenced without further care.
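
The accessor ends up with roughly this shape (a sketch of the
interface change, not the verbatim kernel code):

/* The bucket dereference and RCU check now happen inside rht_ptr()
 * itself, so callers pass the bucket pointer plus the tbl and hash
 * needed for the RCU check. */
static inline struct rhash_head *rht_ptr(
        struct rhash_lock_head __rcu * const *bkt,
        struct bucket_table *tbl, unsigned int hash)
{
        return __rht_ptr(rht_dereference_bucket(*bkt, tbl, hash));
}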

There are two places where we dereference a bucket pointer
where there is no testable protection - in each case we know
that we must have exclusive access without having taken a lock.
The previous code used rht_dereference() to pretend that holding
the mutex provided protection, but holding the mutex never provides
protection for accessing buckets.

So instead introduce rht_ptr_exclusive() that can be used when
there is known to be exclusive access without holding any locks.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: reorder some inline functions and macros.
NeilBrown [Fri, 12 Apr 2019 01:52:08 +0000 (11:52 +1000)]
rhashtable: reorder some inline functions and macros.

This patch only moves some code around; it doesn't
change the code at all.
A subsequent patch will benefit from this as it needs
to add calls to functions which are now defined before the
call-site, but weren't before.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: fix some __rcu annotation errors
NeilBrown [Fri, 12 Apr 2019 01:52:07 +0000 (11:52 +1000)]
rhashtable: fix some __rcu annotation errors

With these annotations, the rhashtable now gets no
warnings when compiled with "C=1" for sparse checking.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rhashtable: use struct_size() in kvzalloc()
Gustavo A. R. Silva [Thu, 11 Apr 2019 23:43:06 +0000 (18:43 -0500)]
rhashtable: use struct_size() in kvzalloc()

One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along with
memory for some number of elements for that array.  For example:

struct foo {
    int stuff;
    struct boo entry[];
};

size = sizeof(struct foo) + count * sizeof(struct boo);
instance = kvzalloc(size, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:

instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'nfp-update-to-control-structures'
David S. Miller [Sat, 13 Apr 2019 00:29:15 +0000 (17:29 -0700)]
Merge branch 'nfp-update-to-control-structures'

Jakub Kicinski says:

====================
nfp: update to control structures

This series prepares NFP control structures for crypto offloads.
So far we mostly dealt with configuration requests under rtnl lock.
This will no longer be the case with crypto.  Additionally we will
try to reuse the BPF control message format, so we move common code
out of BPF.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
nfp: split out common control message handling code
Jakub Kicinski [Fri, 12 Apr 2019 03:27:07 +0000 (20:27 -0700)]
nfp: split out common control message handling code

BPF's control message handler seems like a good base to build
on for request-reply control messages.  Split it out to allow
for reuse.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp: move vNIC reset before netdev init
Jakub Kicinski [Fri, 12 Apr 2019 03:27:06 +0000 (20:27 -0700)]
nfp: move vNIC reset before netdev init

During probe we clear vNIC configuration in case the device
wasn't closed cleanly by the previous driver.  Move that code
before netdev init, so netdev init can already try to apply
its config parameters.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp: add a mutex lock for the vNIC ctrl BAR
Jakub Kicinski [Fri, 12 Apr 2019 03:27:05 +0000 (20:27 -0700)]
nfp: add a mutex lock for the vNIC ctrl BAR

Soon we will try to write to the vNIC mailbox without RTNL held.
Add a new mutex to protect access to specific parts of the PCI
control BAR.

Move the mailbox size checking to the mailbox lock() helper, where
it can be more effective (happen prior to potential overwrite of
other data).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp: opportunistically poll for reconfig result
Dirk van der Merwe [Fri, 12 Apr 2019 03:27:04 +0000 (20:27 -0700)]
nfp: opportunistically poll for reconfig result

If the reconfig was a quick update, we could have results available from
firmware within 200us.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Remove flowi6_oif compare from __ip6_route_redirect
David Ahern [Fri, 12 Apr 2019 17:31:14 +0000 (10:31 -0700)]
ipv6: Remove flowi6_oif compare from __ip6_route_redirect

In the review of 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect"),
Martin noted that the flowi6_oif compare is moved to the new helper and
should be removed from __ip6_route_redirect. Fix the oversight.

Fixes: 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect")
Reported-by: Martin Lau <kafai@fb.com>
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'netdevsim-Mostly-cleanup-in-sdev-bpf-iface-area'
David S. Miller [Fri, 12 Apr 2019 23:49:54 +0000 (16:49 -0700)]
Merge branch 'netdevsim-Mostly-cleanup-in-sdev-bpf-iface-area'

Jiri Pirko says:

====================
netdevsim: Mostly cleanup in sdev/bpf iface area

This patchset mainly shuffles internal netdevsim code. Nothing
serious, just small changes to help readability and preparations for
future work.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
netdevsim: move sdev-specific init/uninit code into separate functions
Jiri Pirko [Fri, 12 Apr 2019 12:49:29 +0000 (14:49 +0200)]
netdevsim: move sdev-specific init/uninit code into separate functions

In order to improve readability and prepare for future code changes,
move sdev specific init/uninit code into separate functions.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netdevsim: make bpf_offload_dev_create() per-sdev instead of first ns
Jiri Pirko [Fri, 12 Apr 2019 12:49:28 +0000 (14:49 +0200)]
netdevsim: make bpf_offload_dev_create() per-sdev instead of first ns

The offload dev is stored in the sdev struct. However, the first
netdevsim instance is used as the priv. Change this to be sdev, as it
is shared among multiple netdevsim instances.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netdevsim: move sdev specific bpf debugfs files to sdev dir
Jiri Pirko [Fri, 12 Apr 2019 12:49:27 +0000 (14:49 +0200)]
netdevsim: move sdev specific bpf debugfs files to sdev dir

Some netdevsim bpf debugfs files are per-sdev, yet they are defined per
netdevsim instance. Move them under the sdev directory.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netdevsim: move shared dev creation and destruction into separate file
Jiri Pirko [Fri, 12 Apr 2019 12:49:26 +0000 (14:49 +0200)]
netdevsim: move shared dev creation and destruction into separate file

To make code easier to read, move shared dev bits into a separate file.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Documentation: net: dsa: transition to the rst format
Ioana Ciornei [Fri, 12 Apr 2019 11:55:18 +0000 (14:55 +0300)]
Documentation: net: dsa: transition to the rst format

This patch also performs some minor adjustments such as numbering for
the receive path sequence, conversion of keywords to inline literals and
adding an index page so it looks better in the output of 'make htmldocs'.

Signed-off-by: Ioana Ciornei <ciorneiioana@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: veth: use generic helper to report timestamping info
Julian Wiedmann [Fri, 12 Apr 2019 11:06:15 +0000 (13:06 +0200)]
net: veth: use generic helper to report timestamping info

For reporting the common set of SW timestamping capabilities, use
ethtool_op_get_ts_info() instead of re-implementing it.
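
The change amounts to pointing the driver's ethtool_ops entry at the
common helper, roughly (a sketch; the ops struct is abbreviated):

static const struct ethtool_ops veth_ethtool_ops = {
        /* ... other ops ... */
        .get_ts_info    = ethtool_op_get_ts_info,
};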

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: loopback: use generic helper to report timestamping info
Julian Wiedmann [Fri, 12 Apr 2019 11:06:14 +0000 (13:06 +0200)]
net: loopback: use generic helper to report timestamping info

For reporting the common set of SW timestamping capabilities, use
ethtool_op_get_ts_info() instead of re-implementing it.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: dummy: use generic helper to report timestamping info
Julian Wiedmann [Fri, 12 Apr 2019 11:06:13 +0000 (13:06 +0200)]
net: dummy: use generic helper to report timestamping info

For reporting the common set of SW timestamping capabilities, use
ethtool_op_get_ts_info() instead of re-implementing it.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'smc-next'
David S. Miller [Fri, 12 Apr 2019 17:50:56 +0000 (10:50 -0700)]
Merge branch 'smc-next'

Ursula Braun says:

====================
net/smc: patches 2019-04-12

here are patches for SMC:
* patch 1 improves behavior of non-blocking connect
* patches 2, 3, 5, 7, and 8 improve connecting return codes
* patches 4 and 6 are cleanups without functional change
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: improve smc_conn_create reason codes
Karsten Graul [Fri, 12 Apr 2019 10:57:30 +0000 (12:57 +0200)]
net/smc: improve smc_conn_create reason codes

Rework smc_conn_create() to always return a valid DECLINE reason code.
This removes the need to translate the return codes in 4 different
places and allows to easily add more detailed return codes by changing
smc_conn_create() only.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: improve smc_listen_work reason codes
Karsten Graul [Fri, 12 Apr 2019 10:57:29 +0000 (12:57 +0200)]
net/smc: improve smc_listen_work reason codes

Rework smc_listen_work() to provide improved reason codes when an
SMC connection is declined. This allows better debugging on user side.
This also adds 3 more detailed reason codes in smc_clc.h to indicate
what type of device was not found (ism or rdma or both), or if ism
cannot talk to the peer.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: code cleanup smc_listen_work
Karsten Graul [Fri, 12 Apr 2019 10:57:28 +0000 (12:57 +0200)]
net/smc: code cleanup smc_listen_work

In smc_listen_work() the variables rc and reason_code are defined which
have the same meaning. Eliminate reason_code in favor of the shorter
name rc. No functional changes.
Rename the functions smc_check_ism() and smc_check_rdma() into
smc_find_ism_device() and smc_find_rdma_device() to make their purpose
clearer. No functional changes.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: cleanup of get vlan id
Karsten Graul [Fri, 12 Apr 2019 10:57:27 +0000 (12:57 +0200)]
net/smc: cleanup of get vlan id

The vlan_id of the underlying CLC socket was retrieved twice
during listen handshake processing. Change this to get the
vlan id once in connect and in listen processing, and reuse the id.
And add a new CLC DECLINE return code for the case when the retrieval
of the vlan id fails.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: consolidate function parameters
Karsten Graul [Fri, 12 Apr 2019 10:57:26 +0000 (12:57 +0200)]
net/smc: consolidate function parameters

During initialization of an SMC socket a lot of function parameters need
to get passed down the function call path. Consolidate the parameters
in a helper struct so there are few enough parameters for them all to
be passed in registers.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: check for ip prefix and subnet
Karsten Graul [Fri, 12 Apr 2019 10:57:25 +0000 (12:57 +0200)]
net/smc: check for ip prefix and subnet

The check for a matching ip prefix and subnet was only done for SMC-R
in smc_listen_rdma_check() but not when an SMC-D connection was
possible. Rename the function to smc_listen_prfx_check() and move its
call to a place where it is called for both SMC variants.
And add a new CLC DECLINE reason for the case when the IP prefix or
subnet check fails so the reason for the failing SMC connection can be
found out more easily.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: fallback to TCP after connect problems
Karsten Graul [Fri, 12 Apr 2019 10:57:24 +0000 (12:57 +0200)]
net/smc: fallback to TCP after connect problems

Correct the CLC decline reason codes for internal problems to not have
the sign bit set; negative reason codes are interpreted as not eligible
for TCP fallback.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: nonblocking connect rework
Ursula Braun [Fri, 12 Apr 2019 10:57:23 +0000 (12:57 +0200)]
net/smc: nonblocking connect rework

For nonblocking sockets move the kernel_connect() from the connect
worker into the initial smc_connect part to return kernel_connect()
errors other than -EINPROGRESS to user space.

Reviewed-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
xen-netback: add reference from xenvif to backend_info to facilitate coredump analysis
Dongli Zhang [Fri, 12 Apr 2019 06:53:24 +0000 (14:53 +0800)]
xen-netback: add reference from xenvif to backend_info to facilitate coredump analysis

During coredump analysis, it is not easy to obtain the address of
backend_info in xen-netback.

So far there are two ways to obtain backend_info:

1. Do what xenbus_device_find() does for vmcore to find the xenbus_device
and then derive it from dev_get_drvdata().

2. Extract backend_info from callstack of xenwatch (e.g., netback_remove()
or frontend_changed()).

This patch adds a reference from xenvif to backend_info so that it is
much easier to obtain backend_info during coredump analysis.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'sctp-skb-list'
David S. Miller [Fri, 12 Apr 2019 04:33:37 +0000 (21:33 -0700)]
Merge branch 'sctp-skb-list'

David Miller says:

====================
SCTP: Event skb list overhaul.

This patch series eliminates the explicit reference to the skb list
implementation via skb->prev dereferences.

The approach used is to pass a non-empty skb list around instead of an
event skb object which may or may not be on a list.

I'd like to thank Marcelo Leitner, Xin Long, and Neil Horman for
reviewing previous versions of this series.

Testing would be very much appreciated, in addition to the review of
course.

v4 --> v5: Rebase to net-next

v3 --> v4: Fix the logic in patch #4 so that we don't miss cases
           where we should add event to the on-stack temp list.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Pass sk_buff_head explicitly to sctp_ulpq_tail_event().
David Miller [Thu, 11 Apr 2019 22:02:07 +0000 (15:02 -0700)]
sctp: Pass sk_buff_head explicitly to sctp_ulpq_tail_event().

Now the SKB list implementation assumption can be removed.

And now that we know that the list head is always non-NULL
we can remove the code blocks dealing with that as well.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Make sctp_enqueue_event take an skb list.
David Miller [Thu, 11 Apr 2019 22:02:04 +0000 (15:02 -0700)]
sctp: Make sctp_enqueue_event take an skb list.

Pass this, instead of an event.  Then everything trickles down and we
always have events on a non-empty list.

Then we need a list-creating stub to place into .enqueue_event for sctp_stream_interleave_1.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Use helper for sctp_ulpq_tail_event() when hooked up to ->enqueue_event
David Miller [Thu, 11 Apr 2019 22:02:01 +0000 (15:02 -0700)]
sctp: Use helper for sctp_ulpq_tail_event() when hooked up to ->enqueue_event

This way we can make sure events sent this way to
sctp_ulpq_tail_event() are on a list as well.  Now all such code paths
are fully covered.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Always pass skbs on a list to sctp_ulpq_tail_event().
David Miller [Thu, 11 Apr 2019 22:01:57 +0000 (15:01 -0700)]
sctp: Always pass skbs on a list to sctp_ulpq_tail_event().

This way we can simplify the logic and remove assumptions
about the implementation of skb lists.
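
A minimal sketch of the calling pattern this establishes (assuming the
post-series signature of sctp_ulpq_tail_event() taking a list):

static void sketch_tail_event(struct sctp_ulpq *ulpq,
                              struct sctp_ulpevent *event)
{
        /* Wrap a single event in an on-stack list so the callee
         * always gets a non-empty sk_buff_head, never a bare skb. */
        struct sk_buff_head temp;

        skb_queue_head_init(&temp);
        __skb_queue_tail(&temp, sctp_event2skb(event));
        sctp_ulpq_tail_event(ulpq, &temp);
}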

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Remove superfluous test in sctp_ulpq_reasm_drain().
David Miller [Thu, 11 Apr 2019 22:01:53 +0000 (15:01 -0700)]
sctp: Remove superfluous test in sctp_ulpq_reasm_drain().

Inside the loop, we always start with event non-NULL.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: sched: flower: fix filter net reference counting
Vlad Buslov [Thu, 11 Apr 2019 21:54:19 +0000 (00:54 +0300)]
net: sched: flower: fix filter net reference counting

Fix net reference counting in fl_change() and remove redundant call to
tcf_exts_get_net() from __fl_delete(). __fl_put() already tries to get net
before releasing exts and deallocating a filter, so this code caused flower
classifier to obtain net twice per filter that is being deleted.

The implementation of __fl_delete() called tcf_exts_get_net() to pass
its result as the 'async' flag to fl_mask_put(). However, the 'async'
flag is redundant and only complicates the fl_mask_put() implementation.
This functionality seems to have been copied from the filter cleanup
code, where it was added by Cong with the following explanation:

    This patchset tries to fix the race between call_rcu() and
    cleanup_net() again. Without holding the netns refcnt the
    tc_action_net_exit() in netns workqueue could be called before
    filter destroy works in tc filter workqueue. This patchset
    moves the netns refcnt from tc actions to tcf_exts, without
    breaking per-netns tc actions.

This doesn't apply to flower mask, which doesn't call any tc action code
during cleanup. Simplify fl_mask_put() by removing the flag parameter and
always use tcf_queue_work() to free mask objects.

Fixes: 061775583e35 ("net: sched: flower: introduce reference counting for filters")
Fixes: 1f17f7742eeb ("net: sched: flower: insert filter to ht before offloading it to hw")
Fixes: 05cd271fd61a ("cls_flower: Support multiple masks per priority")
Reported-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
selftests: Add debugging options to pmtu.sh
David Ahern [Thu, 11 Apr 2019 18:51:50 +0000 (11:51 -0700)]
selftests: Add debugging options to pmtu.sh

The pmtu.sh script runs a number of tests and dumps a summary of
pass/fail. If a test fails, it is nearly impossible to debug why. For
example:

    TEST: ipv6: PMTU exceptions                       [FAIL]

There are a lot of commands run behind the scenes for this test. Which
one is failing?

Add a VERBOSE option to show commands that are run and any output from
those commands. Add a PAUSE_ON_FAIL option to halt the script if a test
fails allowing users to poke around with the setup in the failed state.

In the process, rename tracing to TRACING and move its declaration to
the top with the new variables.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller [Fri, 12 Apr 2019 00:00:05 +0000 (17:00 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2019-04-12

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Improve BPF verifier scalability for large programs through two
   optimizations: i) remove verifier states that are not useful in pruning,
   ii) stop walking parentage chain once first LIVE_READ is seen. Combined
   gives approx 20x speedup. Increase limits for accepting large programs
   under root, and add various stress tests, from Alexei.

2) Implement global data support in BPF. This enables static global variables
   for .data, .rodata and .bss sections to be properly handled which allows
   for more natural program development. This also opens up the possibility
   to optimize program workflow by compiling ELFs only once and later only
   rewriting section data before reload, from Daniel and with test cases and
   libbpf refactoring from Joe.

3) Add config option to generate BTF type info for vmlinux as part of the
   kernel build process. DWARF debug info is converted via pahole to BTF.
   Latter relies on libbpf and makes use of BTF deduplication algorithm which
   results in 100x savings compared to DWARF data. Resulting .BTF section is
   typically about 2MB in size, from Andrii.

4) Add BPF verifier support for stack access with variable offset from
   helpers and add various test cases along with it, from Andrey.

5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header
   so that L2 encapsulation can be used for tc tunnels, from Alan.

6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that
   users can define a subset of allowed __sk_buff fields that get fed into
   the test program, from Stanislav.

7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up
   various UBSAN warnings in bpftool, from Yonghong.

8) Generate a pkg-config file for libbpf, from Luca.

9) Dump program's BTF id in bpftool, from Prashant.

10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP
    program, from Magnus.

11) kallsyms related fixes for the case when symbols are not present in
    BPF selftests and samples, from Daniel.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
bridge: broute: make broute a real ebtables table
Florian Westphal [Thu, 11 Apr 2019 14:36:42 +0000 (16:36 +0200)]
bridge: broute: make broute a real ebtables table

This makes broute a normal ebtables table, hooking at PREROUTING.
The broute hook is removed.

It uses skb->cb to signal to the bridge rx handler that the skb should be
routed instead of being bridged.
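
The signalling boils down to a single bit in the bridge's control
buffer, roughly (a sketch; the flag name is illustrative):

/* Broute verdict handler: mark the skb so that the bridge rx
 * handler returns RX_HANDLER_PASS and the packet is routed
 * instead of bridged. */
BR_INPUT_SKB_CB(skb)->br_netfilter_broute = 1;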

This change is backwards compatible with ebtables as no userspace visible
parts are changed.

This means we can also remove the !ops test in ebt_register_table;
it was only there for the broute table's sake.

Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
bridge: netfilter: unroll NF_HOOK helper in bridge input path
Florian Westphal [Thu, 11 Apr 2019 14:36:41 +0000 (16:36 +0200)]
bridge: netfilter: unroll NF_HOOK helper in bridge input path

Replace NF_HOOK() based invocation of the netfilter hooks with a private
copy of nf_hook_slow().

This copy has one difference: it can return the rx handler value expected
by the stack, i.e. RX_HANDLER_CONSUMED or RX_HANDLER_PASS.

This is needed by the next patch to invoke the ebtables
"broute" table via the standard netfilter hooks rather than the custom
"br_should_route_hook" indirection that is used now.

When the skb is to be "brouted", we must return RX_HANDLER_PASS from the
bridge rx input handler, but there is no way to indicate this via
NF_HOOK(), unless perhaps by some hack such as exposing bridge_cb in the
netfilter core or a percpu flag.

  text    data     bss     dec   filename
  3369      56       0    3425   net/bridge/br_input.o.before
  3458      40       0    3498   net/bridge/br_input.o.after

This allows removal of the "br_should_route_hook" in the next patch.

Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
bridge: reduce size of input cb to 16 bytes
Florian Westphal [Thu, 11 Apr 2019 14:36:40 +0000 (16:36 +0200)]
bridge: reduce size of input cb to 16 bytes

Reduce the size of br_input_skb_cb from 24 to 16 bytes by
using bitfields for those values that can only be 0 or 1.

igmp is the igmp type value, so it needs to be at least u8.
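
The shape of the change, as a sketch (a simplified view of the cb
struct, not its complete field list):

struct br_input_skb_cb {
        struct net_device *brdev;
        u16 frag_max_size;
        u8 igmp;                /* IGMP type value, needs all 8 bits */
        u8 mrouters_only:1;     /* squashed boolean */
        u8 proxyarp_replied:1;  /* squashed boolean */
};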

Furthermore, the bridge currently relies on step-by-step initialization
of br_input_skb_cb fields as the skb passes through the stack.

Explicitly zero out the bridge input cb instead; this avoids having to
review/validate that no BR_INPUT_SKB_CB(skb)->foo test can see a
'random' value from previous protocol cb.

AFAICS all current fields are always set up before they are read again,
so this is not a bug fix.

Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
selftests: netfilter: add ebtables broute test case
Florian Westphal [Thu, 11 Apr 2019 14:36:39 +0000 (16:36 +0200)]
selftests: netfilter: add ebtables broute test case

ebtables -t broute allows redirecting packets in a way that
they get pushed up the stack, even if the interface is part
of a bridge.

In the case of IP packets to a non-local address, this means
those IP packets are routed instead of bridge-forwarded, just
as if the bridge did not exist.

Expected test output is:
PASS: netns connectivity: ns1 and ns2 can reach each other
PASS: ns1/ns2 connectivity with active broute rule
PASS: ns1/ns2 connectivity with active broute rule and bridge forward drop

Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
bpf: explicitly prohibit ctx_{in, out} in non-skb BPF_PROG_TEST_RUN
Stanislav Fomichev [Thu, 11 Apr 2019 22:47:07 +0000 (15:47 -0700)]
bpf: explicitly prohibit ctx_{in, out} in non-skb BPF_PROG_TEST_RUN

This should allow us later to extend BPF_PROG_TEST_RUN for the non-skb
case and be sure that nobody is erroneously setting ctx_{in,out}.

Fixes: b0b9395d865e ("bpf: support input __sk_buff context in BPF_PROG_TEST_RUN")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
tools: add smp_* barrier variants to include infrastructure
Daniel Borkmann [Tue, 9 Apr 2019 09:44:46 +0000 (11:44 +0200)]
tools: add smp_* barrier variants to include infrastructure

Add the definitions for smp_rmb(), smp_wmb(), and smp_mb() to the
tools include infrastructure: this patch adds the implementation
for x86-64 and arm64, and has it fall back, as it currently does, for
other archs which do not have it implemented at this point. The
x86-64 one uses a lock + add combination for smp_mb(), with an address
below the red zone.
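
The x86-64 smp_mb() described above looks roughly like this (a sketch;
the offset lands below the 128-byte red zone):

#define smp_mb()                                                \
        asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")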

This is on top of 09d62154f613 ("tools, perf: add and use optimized
ring_buffer_{read_head, write_tail} helpers"), which didn't touch
smp_* barrier implementations. Magnus recently rightfully reported
however that the latter on x86-64 still wrongly falls back to sfence,
lfence and mfence respectively, thus fix that for applications under
tools making use of these to avoid such ugly surprises. The main
header under tools (include/asm/barrier.h) will in that case not
select the fallback implementation.

Reported-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Merge branch 'ipv6-Refactor-nexthop-selection-helpers-during-a-fib-lookup'
David S. Miller [Thu, 11 Apr 2019 21:24:07 +0000 (14:24 -0700)]
Merge branch 'ipv6-Refactor-nexthop-selection-helpers-during-a-fib-lookup'

David Ahern says:

====================
ipv6: Refactor nexthop selection helpers during a fib lookup

IPv6 has a fib6_nh embedded within each fib6_info and a separate
fib6_info for each path in a multipath route. A side effect is that
a fib6_info is passed all the way down the stack when selecting a path
on a fib lookup. Refactor the fib lookup functions and associated
helper functions to take a fib6_nh when appropriate to enable IPv6
to work with nexthop objects where the fib6_nh is not directly part
of a fib entry.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Refactor __ip6_route_redirect
David Ahern [Tue, 9 Apr 2019 21:41:19 +0000 (14:41 -0700)]
ipv6: Refactor __ip6_route_redirect

Move the nexthop evaluation of a fib entry to a helper that can be
leveraged for each fib6_nh in a multipath nexthop object.

In the move, 'continue' statements mean the helper returns false
(loop should continue) and 'break' means return true (found the entry
of interest).

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Refactor rt6_device_match
David Ahern [Tue, 9 Apr 2019 21:41:18 +0000 (14:41 -0700)]
ipv6: Refactor rt6_device_match

Move the device and gateway checks in the fib6_next loop to a helper
that can be called per fib6_nh entry.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Move fib6_multipath_select down in ip6_pol_route
David Ahern [Tue, 9 Apr 2019 21:41:17 +0000 (14:41 -0700)]
ipv6: Move fib6_multipath_select down in ip6_pol_route

Move the siblings and fib6_multipath_select after the null entry check
since a null entry cannot have siblings.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Be smarter with null_entry handling in ip6_pol_route_lookup
David Ahern [Tue, 9 Apr 2019 21:41:16 +0000 (14:41 -0700)]
ipv6: Be smarter with null_entry handling in ip6_pol_route_lookup

Clean up the fib6_null_entry handling in ip6_pol_route_lookup.
rt6_device_match can return fib6_null_entry, but fib6_multipath_select
can not. Consolidate the fib6_null_entry handling and on the final
null_entry check set rt and goto out - no need to defer to a second
check after rt6_find_cached_rt.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Refactor find_rr_leaf
David Ahern [Tue, 9 Apr 2019 21:41:15 +0000 (14:41 -0700)]
ipv6: Refactor find_rr_leaf

find_rr_leaf has 3 loops over fib_entries calling find_match. The loops
are very similar with differences in start point and whether the metric
is evaluated:
    1. start at rr_head, no extra loop compare, check fib metric
    2. start at leaf, compare rt against rr_head, check metric
    3. start at cont (potential saved point from earlier loops), no
       extra loop compare, no metric check

Create 1 loop that is called 3 different times. This will make a
later change with multipath nexthop objects much simpler.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Refactor find_match
David Ahern [Tue, 9 Apr 2019 21:41:14 +0000 (14:41 -0700)]
ipv6: Refactor find_match

find_match primarily needs a fib6_nh (and fib6_flags which it passes
through to rt6_score_route). Move fib6_check_expired up to the call
sites so find_match is only called for relevant entries. Remove the
match argument which is mostly a pass through and use the return
boolean to decide if match gets set in the call sites.

The end result is a helper that can be called per fib6_nh struct
which is needed once fib entries reference nexthop objects that
have more than one fib6_nh.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Pass fib6_nh and flags to rt6_score_route
David Ahern [Tue, 9 Apr 2019 21:41:13 +0000 (14:41 -0700)]
ipv6: Pass fib6_nh and flags to rt6_score_route

rt6_score_route only needs the fib6_flags and nexthop data. Change
it accordingly. Allows re-use later for nexthop based fib6_nh.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Change rt6_probe to take a fib6_nh
David Ahern [Tue, 9 Apr 2019 21:41:12 +0000 (14:41 -0700)]
ipv6: Change rt6_probe to take a fib6_nh

rt6_probe sends probes for gateways in a nexthop. As such it really
depends on a fib6_nh, not a fib entry. Move last_probe to fib6_nh and
update rt6_probe to a fib6_nh struct.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Remove rt6_check_dev
David Ahern [Tue, 9 Apr 2019 21:41:11 +0000 (14:41 -0700)]
ipv6: Remove rt6_check_dev

rt6_check_dev is a simple helper with only 1 caller. Fold the code
into rt6_score_route.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Only call rt6_check_neigh for nexthop with gateway
David Ahern [Tue, 9 Apr 2019 21:41:10 +0000 (14:41 -0700)]
ipv6: Only call rt6_check_neigh for nexthop with gateway

Change rt6_check_neigh to take a fib6_nh instead of a fib entry.
Move the check on fib_flags and whether the nexthop has a gateway
up to the one caller.

Remove the inline from the definition as well. Not necessary.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
dns: remove redundant zero length namelen check
Colin Ian King [Tue, 9 Apr 2019 12:59:12 +0000 (13:59 +0100)]
dns: remove redundant zero length namelen check

The zero namelen check is redundant as it has already been checked
for zero at the start of the function.  Remove the redundant check.

Addresses-Coverity: ("Logically Dead Code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'bpf-l2-encap'
Daniel Borkmann [Thu, 11 Apr 2019 20:50:58 +0000 (22:50 +0200)]
Merge branch 'bpf-l2-encap'

Alan Maguire says:

====================
Extend bpf_skb_adjust_room growth to mark the inner MAC header so
that L2 encapsulation can be used for tc tunnels.

Patch #1 extends the existing test_tc_tunnel to support UDP
encapsulation; later we want to be able to test MPLS over UDP
and MPLS over GRE encapsulation.

Patch #2 adds the BPF_F_ADJ_ROOM_ENCAP_L2(len) macro, which
allows specification of inner mac length.  Other approaches were
explored prior to taking this approach.  Specifically, I tried
automatically computing the inner mac length on the basis of the
specified flags (so inner maclen for GRE/IPv4 encap is the len_diff
specified to bpf_skb_adjust_room minus GRE + IPv4 header length
for example).  The problem with this is that we don't know for sure
what form of GRE/UDP header we have; is it a full GRE header, a FOU
UDP header, or a generic UDP encap header? My fear here was that we'd
end up with an explosion of flags.  The other approach tried was to
support inner L2 header marking as a separate room adjustment, i.e.
adjust for L3/L4 encap, then call bpf_skb_adjust_room again for L2
encap.  This can be made to work, but because it imposed an order on
operations it felt a bit clunky.
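
For reference, the macro packs the inner mac length into the upper
bits of the 64-bit flags word, roughly along these lines (a sketch;
the authoritative definition is the one added to
include/uapi/linux/bpf.h in patch 2):

        #define BPF_F_ADJ_ROOM_ENCAP_L2_MASK    0xff
        #define BPF_F_ADJ_ROOM_ENCAP_L2_SHIFT   56
        #define BPF_F_ADJ_ROOM_ENCAP_L2(len)    (((__u64)len & \
                                                  BPF_F_ADJ_ROOM_ENCAP_L2_MASK) \
                                                 << BPF_F_ADJ_ROOM_ENCAP_L2_SHIFT)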

Patch #3 syncs tools/ bpf.h.

Patch #4 extends the tests again to support MPLSoverGRE,
MPLSoverUDP, and transparent ethernet bridging (TEB) where
the inner L2 header is an ethernet header.  Testing of BPF
encap against tunnels is done for cases where configuration
of such tunnels is possible (MPLSoverGRE[6], MPLSoverUDP,
gre[6]tap), and skipped otherwise.  Testing of BPF encap/decap
is always carried out.

Changes since v2:
 - updated tools/testing/selftest/bpf/config with FOU/MPLS CONFIG
   variables (patches 1, 4)
 - reduced noise in patch 1 by avoiding unnecessary movement of code
 - eliminated inner_mac variable in bpf_skb_net_grow (patch 2)

Changes since v1:
 - fixed formatting of commit references.
 - BPF_F_ADJ_ROOM_FIXED_GSO flag enabled on all variants (patch 1)
 - fixed fou6 options for UDP encap; checksum errors observed were
   due to the fact fou6 tunnel was not set up with correct ipproto
   options (41 -6).  0 checksums work fine (patch 1)
 - added definitions for mask and shift used in setting L2 length
   (patch 2)
 - allow udp encap with fixed GSO (patch 2)
 - changed "elen" to "l2_len" to be more descriptive (patch 4)
====================

Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agoselftests_bpf: add L2 encap to test_tc_tunnel
Alan Maguire [Tue, 9 Apr 2019 14:06:43 +0000 (15:06 +0100)]
selftests_bpf: add L2 encap to test_tc_tunnel

Update test_tc_tunnel to verify adding inner L2 header
encapsulation (an MPLS label or ethernet header) works.

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agobpf: sync bpf.h to tools/ for BPF_F_ADJ_ROOM_ENCAP_L2
Alan Maguire [Tue, 9 Apr 2019 14:06:42 +0000 (15:06 +0100)]
bpf: sync bpf.h to tools/ for BPF_F_ADJ_ROOM_ENCAP_L2

Sync include/uapi/linux/bpf.h with tools/ equivalent to add
BPF_F_ADJ_ROOM_ENCAP_L2(len) macro.

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agobpf: add layer 2 encap support to bpf_skb_adjust_room
Alan Maguire [Tue, 9 Apr 2019 14:06:41 +0000 (15:06 +0100)]
bpf: add layer 2 encap support to bpf_skb_adjust_room

commit 868d523535c2 ("bpf: add bpf_skb_adjust_room encap flags")
introduced support to bpf_skb_adjust_room for GSO-friendly GRE
and UDP encapsulation.

For GSO to work for skbs, the inner headers (mac and network) need to
be marked.  For L3 encapsulation using bpf_skb_adjust_room, the mac
and network headers are identical.  Here we provide a way of specifying
the inner mac header length for cases where L2 encap is desired.  Such
an approach can support encapsulated ethernet headers, MPLS headers etc.
For example, to convert a packet of the form [eth][ip][tcp] to
[eth][ip][udp][inner mac][ip][tcp], something like the following could
be done:

        headroom = sizeof(iph) + sizeof(struct udphdr) + inner_maclen;

        ret = bpf_skb_adjust_room(skb, headroom, BPF_ADJ_ROOM_MAC,
                                  BPF_F_ADJ_ROOM_ENCAP_L4_UDP |
                                  BPF_F_ADJ_ROOM_ENCAP_L3_IPV4 |
                                  BPF_F_ADJ_ROOM_ENCAP_L2(inner_maclen));

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agoselftests_bpf: extend test_tc_tunnel for UDP encap
Alan Maguire [Tue, 9 Apr 2019 14:06:40 +0000 (15:06 +0100)]
selftests_bpf: extend test_tc_tunnel for UDP encap

commit 868d523535c2 ("bpf: add bpf_skb_adjust_room encap flags")
introduced support to bpf_skb_adjust_room for GSO-friendly GRE
and UDP encapsulation and later introduced associated test_tc_tunnel
tests.  Here those tests are extended to cover UDP encapsulation also.

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agotipc: use standard write_lock & unlock functions when creating node
Jon Maloy [Thu, 11 Apr 2019 19:56:28 +0000 (21:56 +0200)]
tipc: use standard write_lock & unlock functions when creating node

In the function tipc_node_create() we protect the peer capabilities
field using the node rw_lock. However, we access the lock directly
instead of using the dedicated functions for this, as we do everywhere
else in node.c. This cosmetic inconsistency is fixed here.
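
A minimal sketch of the change, assuming the dedicated helpers are
tipc_node_write_lock()/tipc_node_write_unlock_fast():

        /* before: raw lock access */
        write_lock_bh(&n->lock);
        n->capabilities = capabilities;
        write_unlock_bh(&n->lock);

        /* after: dedicated node helpers */
        tipc_node_write_lock(n);
        n->capabilities = capabilities;
        tipc_node_write_unlock_fast(n);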

Fixes: 40999f11ce67 ("tipc: make link capability update thread safe")
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agobpf: fix missing bpf_check_uarg_tail_zero in BPF_PROG_TEST_RUN
Stanislav Fomichev [Thu, 11 Apr 2019 16:12:02 +0000 (09:12 -0700)]
bpf: fix missing bpf_check_uarg_tail_zero in BPF_PROG_TEST_RUN

Commit b0b9395d865e ("bpf: support input __sk_buff context in
BPF_PROG_TEST_RUN") started using bpf_check_uarg_tail_zero in
BPF_PROG_TEST_RUN. However, bpf_check_uarg_tail_zero is not defined
for !CONFIG_BPF_SYSCALL:

net/bpf/test_run.c: In function ‘bpf_ctx_init’:
net/bpf/test_run.c:142:9: error: implicit declaration of function ‘bpf_check_uarg_tail_zero’ [-Werror=implicit-function-declaration]
   err = bpf_check_uarg_tail_zero(data_in, max_size, size);
         ^~~~~~~~~~~~~~~~~~~~~~~~

Let's not build net/bpf/test_run.c when CONFIG_BPF_SYSCALL is not set.
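
A sketch of the corresponding Makefile change (assuming the object was
previously built unconditionally):

        # net/bpf/Makefile
        obj-$(CONFIG_BPF_SYSCALL)       += test_run.o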

Reported-by: kbuild test robot <lkp@intel.com>
Fixes: b0b9395d865e ("bpf: support input __sk_buff context in BPF_PROG_TEST_RUN")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agonetfilter: nf_nat_masquerade: unify ipv4/6 notifier registration
Florian Westphal [Tue, 9 Apr 2019 08:44:08 +0000 (10:44 +0200)]
netfilter: nf_nat_masquerade: unify ipv4/6 notifier registration

The only reason for having two different register functions was that
ipt_MASQUERADE and ip6t_MASQUERADE were two different modules.

The previous patch merged those into xt_MASQUERADE, so we can merge
this too.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: x_tables: merge ip and ipv6 masquerade modules
Florian Westphal [Tue, 9 Apr 2019 08:44:07 +0000 (10:44 +0200)]
netfilter: x_tables: merge ip and ipv6 masquerade modules

No need to have separate modules for this.
before:
 text    data   bss    dec  filename
 2038    1168     0   3206  net/ipv4/netfilter/ipt_MASQUERADE.ko
 1526    1024     0   2550  net/ipv6/netfilter/ip6t_MASQUERADE.ko
after:
 text    data   bss    dec  filename
 2521    1296     0   3817  net/netfilter/xt_MASQUERADE.ko

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonetfilter: nf_nat: merge ip/ip6 masquerade headers
Florian Westphal [Tue, 9 Apr 2019 08:44:06 +0000 (10:44 +0200)]
netfilter: nf_nat: merge ip/ip6 masquerade headers

Both are now implemented by nf_nat_masquerade.c, so no need to keep
different headers.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
5 years agonet: sched: flower: use correct ht function to prevent duplicates
Vlad Buslov [Thu, 11 Apr 2019 16:12:20 +0000 (19:12 +0300)]
net: sched: flower: use correct ht function to prevent duplicates

The implementation of rhashtable_insert_fast() checks whether its
internal helper __rhashtable_insert_fast() returns a non-NULL pointer
and seemingly returns -EEXIST in that case. However, since
__rhashtable_insert_fast() is called with a NULL key pointer, it never
actually checks for duplicates, which means that -EEXIST is never
returned to the user. Use the rhashtable_lookup_insert_fast() hash
table API instead. In order to verify that it works as expected and to
prevent the problem from recurring, extend tc-tests with a new test
that verifies that no new filter with an existing key can be inserted
into the flower classifier.
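
A sketch of the insertion path after the change (identifiers borrowed
from fl_change(); error label illustrative):

        err = rhashtable_lookup_insert_fast(&fnew->mask->ht, &fnew->ht_node,
                                            fnew->mask->filter_ht_params);
        if (err)
                /* a filter with an identical key already exists */
                goto errout_hw;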

Fixes: 1f17f7742eeb ("net: sched: flower: insert filter to ht before offloading it to hw")
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonetns: read NETNSA_NSID as s32 attribute in rtnl_net_getid()
Guillaume Nault [Thu, 11 Apr 2019 14:45:57 +0000 (16:45 +0200)]
netns: read NETNSA_NSID as s32 attribute in rtnl_net_getid()

NETNSA_NSID is signed. Use nla_get_s32() to avoid confusion.
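
Illustratively, the accessor swap amounts to:

        /* NETNSA_NSID carries a signed nsid */
        id = nla_get_s32(tb[NETNSA_NSID]);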

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoselftests: bpf: add selftest for __sk_buff context in BPF_PROG_TEST_RUN
Stanislav Fomichev [Tue, 9 Apr 2019 18:49:11 +0000 (11:49 -0700)]
selftests: bpf: add selftest for __sk_buff context in BPF_PROG_TEST_RUN

A simple test that sets cb to {1,2,3,4,5} and priority to 6, then runs
a bpf program that fails if cb is not what we expect and increments
cb[i] and priority. When the test finishes, we check that cb is now
{2,3,4,5,6} and priority is 7.

We also test the sanity checks:
* ctx_in is provided, but ctx_size_in is zero (same for
  ctx_out/ctx_size_out)
* unexpected non-zero fields in __sk_buff return EINVAL

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agolibbpf: add support for ctx_{size, }_{in, out} in BPF_PROG_TEST_RUN
Stanislav Fomichev [Tue, 9 Apr 2019 18:49:10 +0000 (11:49 -0700)]
libbpf: add support for ctx_{size, }_{in, out} in BPF_PROG_TEST_RUN

Support the recently introduced input/output context for test runs.
We extend only bpf_prog_test_run_xattr; bpf_prog_test_run is
unextendable and is left as is.
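
A usage sketch (assuming prog_fd and a packet buffer pkt are set up
elsewhere):

        struct __sk_buff ctx_in = { .priority = 6 };
        struct __sk_buff ctx_out = {};
        struct bpf_prog_test_run_attr tattr = {
                .prog_fd        = prog_fd,
                .data_in        = &pkt,
                .data_size_in   = sizeof(pkt),
                .ctx_in         = &ctx_in,
                .ctx_size_in    = sizeof(ctx_in),
                .ctx_out        = &ctx_out,
                .ctx_size_out   = sizeof(ctx_out),
        };

        err = bpf_prog_test_run_xattr(&tattr);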

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agobpf: support input __sk_buff context in BPF_PROG_TEST_RUN
Stanislav Fomichev [Tue, 9 Apr 2019 18:49:09 +0000 (11:49 -0700)]
bpf: support input __sk_buff context in BPF_PROG_TEST_RUN

Add a new set of arguments to bpf_attr for BPF_PROG_TEST_RUN:
* ctx_in/ctx_size_in - input context
* ctx_out/ctx_size_out - output context

The intended use case is to pass some metadata to the test runs that
operate on an skb (this has been brought up at a recent LPC).

For programs that use bpf_prog_test_run_skb, support __sk_buff input
and output. Initially, from the input __sk_buff, copy _only_ cb and
priority into the skb; all other non-zero fields are rejected with
EINVAL. If the user has set ctx_out/ctx_size_out, copy the potentially
modified __sk_buff back to userspace.

We require all fields of the input __sk_buff except the ones we
explicitly support to be set to zero. The expectation is that in the
future we might add support for more fields, and we want to fail
explicitly if the user runs the program on a kernel where we don't yet
support them.

The API is intentionally vague (i.e. we don't explicitly add __sk_buff
to bpf_attr, but ctx_in) to potentially let other test_run types use
this interface in the future (this could be xdp_md for xdp program
types, for example).
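
A sketch of what this looks like from userspace via the raw syscall
(skb_in/skb_out are struct __sk_buff; prog_fd assumed loaded
elsewhere):

        union bpf_attr attr = {};

        attr.test.prog_fd      = prog_fd;
        attr.test.ctx_in       = (__u64)(unsigned long)&skb_in;
        attr.test.ctx_size_in  = sizeof(skb_in);
        attr.test.ctx_out      = (__u64)(unsigned long)&skb_out;
        attr.test.ctx_size_out = sizeof(skb_out);

        err = syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr));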

v4:
  * don't copy more than allowed in bpf_ctx_init [Martin]

v3:
  * handle case where ctx_in is NULL, but ctx_out is not [Martin]
  * convert size==0 checks to ptr==NULL checks and add some extra ptr
    checks [Martin]

v2:
  * Addressed comments from Martin Lau

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agotools/bpftool: show btf id in program information
Prashant Bhole [Wed, 10 Apr 2019 04:56:42 +0000 (13:56 +0900)]
tools/bpftool: show btf id in program information

Let's add a way to know whether a program has BTF context. This patch
adds 'btf_id' to the output of the program listing. When btf_id is
present, the program has BTF context.

Sample output:
user@test# bpftool prog list
25: xdp  name xdp_prog1  tag 539ec6ce11b52f98  gpl
        loaded_at 2019-04-10T11:44:20+0900  uid 0
        xlated 488B  not jited  memlock 4096B  map_ids 23
        btf_id 1

Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agolibbpf: Fix build with gcc-8
Andrey Ignatov [Thu, 11 Apr 2019 01:36:43 +0000 (18:36 -0700)]
libbpf: Fix build with gcc-8

Reported in [1].

With gcc 8.3.0 the following error is issued:

  cc -Ibpf@sta -I. -I.. -I.././include -I.././include/uapi
  -fdiagnostics-color=always -fsanitize=address,undefined -fno-omit-frame-pointer
  -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Werror -g -fPIC -g -O2
  -Werror -Wall -Wno-pointer-arith -Wno-sign-compare  -MD -MQ
  'bpf@sta/src_libbpf.c.o' -MF 'bpf@sta/src_libbpf.c.o.d' -o
  'bpf@sta/src_libbpf.c.o' -c ../src/libbpf.c
  ../src/libbpf.c: In function 'bpf_object__elf_collect':
  ../src/libbpf.c:947:18: error: 'map_def_sz' may be used uninitialized in this
  function [-Werror=maybe-uninitialized]
     if (map_def_sz <= sizeof(struct bpf_map_def)) {
         ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ../src/libbpf.c:827:18: note: 'map_def_sz' was declared here
    int i, map_idx, map_def_sz, nr_syms, nr_maps = 0, nr_maps_glob = 0;
                    ^~~~~~~~~~

According to [2], -Wmaybe-uninitialized is enabled by -Wall. The same
error is generated by clang's -Wconditional-uninitialized.
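
A minimal sketch of the kind of change that silences the warning
(assuming initialization at declaration; the actual fix may differ):

        int i, map_idx, map_def_sz = 0, nr_syms, nr_maps = 0, nr_maps_glob = 0;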

[1] https://github.com/libbpf/libbpf/pull/29#issuecomment-481902601
[2] https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

Fixes: d859900c4c56 ("bpf, libbpf: support global data/bss/rodata sections")
Reported-by: Evgeny Vereshchagin <evvers@ya.ru>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>