Arjun Vynipadath [Thu, 24 May 2018 14:03:37 +0000 (19:33 +0530)]
cxgb4/cxgb4vf: Notify link changes to OS-dependent code
We have a confusion of two different abstractions in the Common
Code: Physical Link (Port) and Logical Network Interface (Virtual
Interface), and we haven't been properly managing the state of the
intersection of those two abstractions.
On the one hand we have the Physical state of the Link -- up or down --
and on the other we have the logical state of the VI, enabled or not.
{ethN} refers to both the Physical and Logical State. In this case,
ifconfig only affects/interrogates the Logical State of a VI,
and ethtool only deals with the Physical State. And these are different.
Just because we disable the VI doesn't mean we want to change the
Physical Link Up/Down state. Thus, the previous hack of setting
"lc->link_ok = 0" when we disable a VI is completely incorrect.
Where we get into trouble is where the Physical Link State and the
Logical VI State cross swords. And that happens in
t4_handle_get_port_info() where we need to manage/save the Physical
Link State, but we also need to know when the Logical VI State has
changed and pass that back up to the OS-dependent Driver routine
t4_os_link_changed() which is concerned about the Logical Interface.
So we enable a VI and that causes Firmware to send us a new Port
Information message, but if none of the Physical Link State
particulars have changed, we don't call t4_os_link_changed().
This fix uses the existing OS Contract APIs for the Common Code to
inform the OS-dependent portion of the Host Driver when the "Link" (really
Logical Network Interface) is "up" or "down". A new API
t4_enable_pi_params() is added which calls t4_enable_vi_params() and,
if that is successful, then calls back to the OS Contract API
t4_os_link_changed() notifying the OS-dependent layer of the
potential Link State change.
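As a rough illustration of the intended call ordering (a sketch only; the
parameter list below is assumed and may not match the driver's actual
prototype):
/* Sketch: enable the Virtual Interface first, and only on success
 * report the (potential) logical link change to the OS-dependent layer.
 */
int t4_enable_pi_params(struct adapter *adap, unsigned int mbox,
			struct port_info *pi, bool rx_en, bool tx_en)
{
	int ret;

	ret = t4_enable_vi_params(adap, mbox, pi->viid, rx_en, tx_en, false);
	if (ret)
		return ret;

	/* only now tell the OS-dependent code about the logical change */
	t4_os_link_changed(adap, pi->port_id, rx_en && tx_en);
	return 0;
}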
Original work by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Santosh Rastapur <santosh@chelsio.com>
Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ganesh Goudar [Thu, 24 May 2018 13:02:15 +0000 (18:32 +0530)]
cxgb4: clean up init_one
Clean up init_one() and use chip_ver consistently throughout it for the
chip version.
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ganesh Goudar [Thu, 24 May 2018 12:19:30 +0000 (17:49 +0530)]
cxgb4/cxgb4vf: link management changes for new SFP
Newer SFPs like SFP28 and QSFP28 Transceiver Modules present
several new possibilities which we haven't faced before. Fix the
assumptions in the code that reflect the more limited capabilities
of previous Transceiver Module systems.
Original work by Casey Leedom <leedom@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Thu, 24 May 2018 11:27:07 +0000 (19:27 +0800)]
net: fec: remove stale comment
This comment is outdated as fec_ptp_ioctl has been replaced by fec_ptp_set/fec_ptp_get
since commit
1d5244d0e43b ("fec: Implement the SIOCGHWTSTAMP ioctl")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin Habets [Thu, 24 May 2018 09:14:00 +0000 (10:14 +0100)]
sfc: stop the TX queue before pushing new buffers
efx_enqueue_skb() can push new buffers for the xmit_more functionality.
We must stop the TX queue before this, or else the TX queue does not get
restarted and we get a netdev watchdog.
In the error handling we may now need to unwind more than 1 packet, and
we may need to push the new buffers onto the partner queue.
v2: In the error leg also push this queue if xmit_more is set
Fixes:
e9117e5099ea ("sfc: Firmware-Assisted TSO version 2")
Reported-by: Jarod Wilson <jarod@redhat.com>
Tested-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Martin Habets <mhabets@solarflare.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 24 May 2018 08:56:48 +0000 (11:56 +0300)]
net: bridge: add support for port isolation
This patch adds support for a new port flag - BR_ISOLATED. If it is set
then isolated ports cannot communicate between each other, but they can
still communicate with non-isolated ports. The same can be achieved via
ACLs, but those don't scale with a large number of ports and the
complexity of the rules grows. This feature can be used to achieve
isolated vlan functionality (similar to pvlan) as well, though currently
it will be port-wide (for all vlans on the port). The new test in
should_deliver uses data that is already cache hot and the new boolean
is used to avoid an additional source port test in should_deliver.
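A sketch of the kind of check this adds to should_deliver() (names taken
from the description above; treat the exact field and helper spelling as
illustrative and see br_forward.c in the patch for the real code):
/* Illustrative only: skip delivery when both the source port and the
 * destination port carry the BR_ISOLATED flag.
 */
static bool br_skb_isolated(const struct net_bridge_port *to,
			    const struct sk_buff *skb)
{
	return BR_INPUT_SKB_CB(skb)->src_port_isolated &&
	       (to->flags & BR_ISOLATED);
}
should_deliver() then simply requires !br_skb_isolated(p, skb) in its
existing chain of checks.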
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 03:10:57 +0000 (23:10 -0400)]
Merge branch 'nfp-offload-LAG-for-tc-flower-egress'
Jakub Kicinski says:
====================
nfp: offload LAG for tc flower egress
This series from John adds bond offload to the nfp driver. Patch 5
exposes the hash type for NETDEV_LAG_TX_TYPE_HASH to make sure nfp
hashing matches that of the software LAG. This may be unnecessarily
conservative; let's see what LAG maintainers think :)
John says:
This patchset sets up the infrastructure and offloads output actions for
when a TC flower rule attempts to egress a packet to a LAG port.
Firstly it adds some of the infrastructure required to the flower app and
to the nfp core. This includes the ability to change the MAC address of a
repr, a function for combining lookup and write to a FW symbol, and the
addition of private data to a repr on a per app basis.
Patch 6 continues by implementing notifiers that track Linux bonds and
communicate to the FW those which enslave reprs, along with the current
state of reprs within the bond.
Patch 7 ensures bonds are synchronised with FW by receiving and acting
upon cmsgs sent to the kernel. These may request that a bond message is
retransmitted when FW can process it, or may request a full sync of the
bonds defined in the kernel.
Patch 8 offloads a flower action when that action requires egressing to a
pre-defined Linux bond.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:55 +0000 (19:22 -0700)]
nfp: flower: compute link aggregation action
If the egress device of an offloaded rule is a LAG port, then encode the
output port to the NFP with a LAG identifier and the offloaded group ID.
A prelag action is also offloaded which must be the first action of the
series (although it may appear after other pre-actions, e.g. tunnels). This
causes the FW to check that it has the necessary information to output to
the requested LAG port. If it does not, the packet is sent to the kernel
before any other actions are applied to it.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:54 +0000 (19:22 -0700)]
nfp: flower: implement host cmsg handler for LAG
Adds the control message handler to synchronize offloaded group config
with that of the kernel. Such messages are sent from fw to driver and
feature the following 3 flags:
- Data: an attached cmsg could not be processed - store for retransmission
- Xon: FW can accept new messages - retransmit any stored cmsgs
- Sync: full sync requested so retransmit all kernel LAG group info
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:53 +0000 (19:22 -0700)]
nfp: flower: monitor and offload LAG groups
Monitor LAG events via the NETDEV_CHANGEUPPER/NETDEV_CHANGELOWERSTATE
notifiers to maintain a list of offloadable groups. Sync these groups with
HW via a delayed workqueue to prevent excessive re-configuration. When the
workqueue is triggered it may generate multiple control messages for
different groups. These messages are linked via a batch ID and flags to
indicate a new batch and the end of a batch.
Update private data in each repr to track their LAG lower state flags. The
state of a repr is used to determine the active netdevs that can be
offloaded. For example, in active-backup mode, we only offload the netdev
currently active.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:52 +0000 (19:22 -0700)]
net: include hash policy in LAG changeupper info
LAG upper event notifiers contain the tx type used by the LAG device.
Extend this to also include the hash policy used for tx types that
utilize hashing.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:51 +0000 (19:22 -0700)]
nfp: flower: add per repr private data for LAG offload
Add a bitmap to each flower repr to track its state if it is enslaved by a
bond. This LAG state may be different to the port state - for example, the
port may be up but LAG state may be down due to the selection in an
active/backup bond.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:50 +0000 (19:22 -0700)]
nfp: flower: check for/turn on LAG support in firmware
Check if the fw contains the _abi_flower_balance_sync_enable symbol. If it
does, write a 1 to it, indicating that the driver is willing to
receive NIC-to-kernel LAG-related control messages.
If the write is successful, update the list of extra features supported by
the fw and add a stub to accept LAG cmsgs.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:49 +0000 (19:22 -0700)]
nfp: nfpcore: add rtsym writing function
Add an rtsym API function that combines the lookup of a symbol and the
writing of a value to it. Values can be written as unsigned 32 or 64 bits.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Thu, 24 May 2018 02:22:48 +0000 (19:22 -0700)]
nfp: add ndo_set_mac_address for representors
Adding a netdev to a bond requires that its mac address can be modified.
The default eth_mac_addr is sufficient to satisfy this requirement.
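In practice this amounts to a one-line hookup along these lines (the ops
struct name is taken from the nfp repr code and is illustrative here):
static const struct net_device_ops nfp_repr_netdev_ops = {
	/* existing repr callbacks elided */
	.ndo_set_mac_address	= eth_mac_addr,	/* stock kernel helper */
};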
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stephen Hemminger [Thu, 24 May 2018 01:02:00 +0000 (18:02 -0700)]
hv_netvsc: fix bogus ifalias on network device
If the guest network adapter is not configured with DeviceNaming
enabled on the host, then the query for the friendly name will return
success but with a zero-length name, which then leads to a garbage value
(stack contents) for ifalias.
The fix is simple: just don't set the name if the host doesn't return one.
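The guard boils down to something like the following sketch (variable
names are assumptions; dev_set_alias() is the generic netdev helper for
setting an alias, and the driver's actual call site may differ):
/* Only set the interface alias when the host returned a non-empty name. */
if (str && str[0] != '\0')
	dev_set_alias(net, str, strlen(str));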
Fixes:
0fe554a46a0f ("hv_netvsc: propogate Hyper-V friendly name into interface alias")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 03:01:23 +0000 (23:01 -0400)]
Merge branch 'net-Update-fib_table_lookup-tracepoints'
David Ahern says:
====================
net: Update fib_table_lookup tracepoints
Update the FIB lookup tracepoints to include ip proto and port fields
from the flow struct. In the process, bring the IPv4 tracepoint in line
with the IPv6 one, which makes it much easier to follow the lookup and its
result.
Remove the tracepoint in fib_validate_source, which does not provide
value beyond the fib_table_lookup tracepoint that immediately follows it.
v2
- move CREATE_TRACE_POINTS for the v6 tracepoint to route.c to handle
its need for an internal function to convert route type to error and
handle IPv6 as a module or builtin. Reported by kbuild robot.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 24 May 2018 00:08:49 +0000 (17:08 -0700)]
net/ipv4: Remove tracepoint in fib_validate_source
The tracepoint does not add value: the call to fib_lookup that follows
it shows the same information plus the fib lookup result.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 24 May 2018 00:08:48 +0000 (17:08 -0700)]
net/ipv6: Update fib6_table_lookup tracepoint
Commit
bb0ad1987e96 ("ipv6: fib6_rules: support for match on sport, dport
and ip proto") added support for protocol and ports to FIB rules.
Update the FIB lookup tracepoint to dump the parameters.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 24 May 2018 00:08:47 +0000 (17:08 -0700)]
net/ipv4: Update fib_table_lookup tracepoint
Commit
4a2d73a4fb36 ("ipv4: fib_rules: support match on sport, dport
and ip proto") added support for protocol and ports to FIB rules.
Update the FIB lookup tracepoint to dump the parameters.
In addition, make the IPv4 tracepoint similar to the IPv6 one, where
the lookup parameters and result are dumped in one event. This makes it
much easier to use and to understand the outcome of the lookup.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang [Wed, 23 May 2018 22:26:53 +0000 (15:26 -0700)]
net_sched: switch to rcu_work
Commit
05f0fe6b74db ("RCU, workqueue: Implement rcu_work") introduces
new API's for dispatching work in a RCU callback. Now we can just
switch to the new API's for tc filters. This could get rid of a lot
of code.
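For reference, the rcu_work pattern this switches to looks roughly like
the following (a minimal sketch, not the actual tc filter code; function
names here are made up):
#include <linux/workqueue.h>

/* Runs from a workqueue only after an RCU grace period has elapsed,
 * replacing an open-coded call_rcu() + schedule_work() pair.
 */
static void tcf_example_destroy_work(struct work_struct *work)
{
	struct rcu_work *rwork = to_rcu_work(work);

	/* free the filter that embeds 'rwork' here (via container_of) */
}

static void tcf_example_queue_destroy(struct rcu_work *rwork)
{
	INIT_RCU_WORK(rwork, tcf_example_destroy_work);
	queue_rcu_work(system_wq, rwork);
}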
Cc: Tejun Heo <tj@kernel.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 02:26:20 +0000 (22:26 -0400)]
Merge branch 'Mirroring-tests-involving-VLAN'
Petr Machata says:
====================
Mirroring tests involving VLAN
This patchset tests mirror-to-gretap with various underlay
configurations involving VLAN netdevice in particular. Some of the tests
involve bridges as well, but tests aimed specifically at testing bridges
(i.e. FDB, STP) are not part of this patchset.
In patches #1-#6, the codebase is adapted to support the new tests.
In patch #7, a test for mirroring to VLAN is introduced.
Patches #8-#10 add three tests where VLAN is part of underlay path after
gretap encapsulation.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:28:06 +0000 (16:28 +0200)]
selftests: forwarding: Test mirror-to-gre w/ UL 802.1d+VLAN
Test for "tc action mirred egress mirror" that mirrors to GRE when the
underlay route points at an 802.1d bridge and packet egresses through a
VLAN device.
Besides testing basic connectivity, this also tests that the traffic is
properly tagged.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:28:01 +0000 (16:28 +0200)]
selftests: forwarding: Test mirror-to-gre w/ UL VLAN
Test for "tc action mirred egress mirror" that mirrors to a gretap
netdevice whose underlay route points at a vlan device.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:55 +0000 (16:27 +0200)]
selftests: forwarding: Test mirror-to-gre w/ UL VLAN+802.1q
Test for "tc action mirred egress mirror" that mirrors to GRE when the
underlay route points at a vlan device on top of a bridge device with
vlan filtering (802.1q).
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:48 +0000 (16:27 +0200)]
selftests: forwarding: Test mirror-to-vlan
Test for "tc action mirred egress mirror" that mirrors to a vlan device.
- test_vlan() tests that the packets get mirrored
- test_tagged_vlan() tests that the mirrored packets have correct inner
VLAN tag.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:35 +0000 (16:27 +0200)]
selftests: forwarding: lib: Extract trap_{, un}install()
A mirror-to-vlan test that's coming next needs to install the trap
unconditionally. Therefore extract from slow_path_trap_{,un}install()
the more generic functions trap_install() and trap_uninstall(), and convert
the former two to conditional wrappers around these.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:31 +0000 (16:27 +0200)]
selftests: forwarding: mirror_gre_lib: Support VLAN
Add full_test_span_gre_dir_vlan_ips() and full_test_span_gre_dir_vlan()
to support mirror-to-gre tests that involve VLAN.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:26 +0000 (16:27 +0200)]
selftests: forwarding: lib: Support VLAN devices
Add vlan_create() and vlan_destroy() to manage VLAN netdevices.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:21 +0000 (16:27 +0200)]
selftests: forwarding: Add $h3's clsact to mirror_topo_lib.sh
Having a clsact qdisc on $h3 is useful in several tests, and will be
useful in more tests to come. Move the registration from all the tests
that need it into the topology file itself.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:16 +0000 (16:27 +0200)]
selftests: forwarding: mirror_gre_lib: Extract generic functions
For non-GRE mirroring tests, functions along the lines of
do_test_span_gre_dir_ips() and test_span_gre_dir_ips() are necessary,
but such that they don't assume tunnels are involved. Extract the code
from mirror_gre_lib.sh to mirror_lib.sh and convert to just use a given
device without assuming it's named "h3-$tundev". Convert the two
above-mentioned functions to wrappers that pass along the correct device
name.
Add test_span_dir() and fail_test_span_dir() to round out the API for use
by following patches.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 24 May 2018 14:27:10 +0000 (16:27 +0200)]
selftests: forwarding: Split mirror_gre_topo_lib.sh
Move generic parts of mirror_gre_topo_lib.sh into a new file
mirror_topo_lib.sh. Reuse the functions in GRE topo, adding the tunnel
devices as necessary.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 02:20:51 +0000 (22:20 -0400)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2018-05-24
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Björn Töpel cleans up AF_XDP (removes rebind, explicit cache alignment from uapi, etc).
2) David Ahern adds mtu checks to bpf_ipv{4,6}_fib_lookup() helpers.
3) Jesper Dangaard Brouer adds bulking support to ndo_xdp_xmit.
4) Jiong Wang adds support for indirect and arithmetic shifts to NFP.
5) Martin KaFai Lau cleans up BTF uapi and makes the btf_header extensible.
6) Mathieu Xhonneux adds an End.BPF action to seg6local with BPF helpers allowing
to edit/grow/shrink a SRH and apply on a packet generic SRv6 actions.
7) Sandipan Das adds support for bpf2bpf function calls in ppc64 JIT.
8) Yonghong Song adds BPF_TASK_FD_QUERY command for introspection of tracing events.
9) other misc fixes from Gustavo A. R. Silva, Sirio Balmelli, John Fastabend, and Magnus Karlsson
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 02:19:26 +0000 (22:19 -0400)]
Merge branch 'ibmvnic-Failover-hardening'
Thomas Falcon says:
====================
ibmvnic: Failover hardening
Introduce additional transport event hardening to handle
events during device reset. In the driver's current state,
if a transport event is received during device reset, it can
cause the device to become unresponsive as invalid operations
are processed as the backing device context changes. After
a transport event, the device expects a request to begin the
initialization process. If the driver is still processing
a previously queued device reset in this state, it is likely
to fail as firmware will reject any commands other than the
one to initialize the client driver's Command-Response Queue.
Instead of failing and becoming dormant, the driver will make
one more attempt to recover and continue operation. This is
achieved by setting a state flag, which if true will direct
the driver to clean up all allocated resources and perform
a hard reset in an attempt to bring the driver back to an
operational state.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:38:02 +0000 (13:38 -0500)]
ibmvnic: Introduce hard reset recovery
Introduce a recovery hard reset to handle reset failure as a result of
change of device context following a transport event, such as a
backing device failover or partition migration. These operations reset
the device context to its initial state. If this occurs during a reset,
any initialization commands are likely to fail with an invalid state
error as backing device firmware requests reinitialization.
When this happens, make one more attempt by performing a hard reset,
which frees any resources currently allocated and performs device
initialization. If a transport event occurs during a device reset, a
flag is set which will trigger a new hard reset following the
completion of the current reset event.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:38:01 +0000 (13:38 -0500)]
ibmvnic: Set resetting state at earliest possible point
Set device resetting state at the earliest possible point: as soon as a
reset is successfully scheduled. The reset state is toggled off when
all resets have been processed to completion.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:38:00 +0000 (13:38 -0500)]
ibmvnic: Create separate initialization routine for resets
Instead of having one initialization routine for all cases, create
a separate, simpler function for standard initialization, such as during
device probe. Use the original initialization function to handle
device reset scenarios. The goal of this patch is to avoid having
a single, cluttered init function to handle all possible
scenarios.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:37:59 +0000 (13:37 -0500)]
ibmvnic: Handle error case when setting link state
If setting the link state is not successful, print a warning
with the resulting return code and return it to be handled
by the caller.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:37:58 +0000 (13:37 -0500)]
ibmvnic: Return error code if init interrupted by transport event
If device init is interrupted by a failover, set the init return
code so that it can be checked and handled appropriately by the
init routine.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:37:57 +0000 (13:37 -0500)]
ibmvnic: Check CRQ command return codes
Check whether CRQ command is successful before awaiting a response
from the management partition. If the command was not successful, the
driver may hang waiting for a response that will never come.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:37:56 +0000 (13:37 -0500)]
ibmvnic: Introduce active CRQ state
Introduce an "active" state for a IBM vNIC Command-Response Queue. A CRQ
is considered active once it has initialized or linked with its partner by
sending an initialization request and getting a successful response back
from the management partition. Until this has happened, do not allow CRQ
commands to be sent other than the initialization request.
This change will avoid a protocol error in case of a device transport
event occurring during initialization. When the driver receives a
transport event notification indicating that the backing hardware
has changed and needs reinitialization, any further commands other
than the initialization handshake with the VIOS management partition
will result in an invalid state error. Instead of sending a command
that will be returned with an error, print a warning and return an
error that will be handled by the caller.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 23 May 2018 18:37:55 +0000 (13:37 -0500)]
ibmvnic: Mark NAPI flag as disabled when released
Mark the adapter NAPI state as disabled when the NAPI structures are
removed. This will allow them to be enabled again if they are reallocated
in case of a hard reset.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 25 May 2018 02:14:37 +0000 (22:14 -0400)]
Merge branch 'gretap-mirroring-selftests'
Petr Machata says:
====================
selftests: forwarding: Additions to mirror-to-gretap tests
This patchset is for a handful of edge cases in mirror-to-gretap
scenarios: removal of mirrored-to netdevice (#1), removal of underlay
route for tunnel remote endpoint (#2) and cessation of mirroring upon
removal of flower mirroring rule (#3).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Wed, 23 May 2018 16:35:07 +0000 (18:35 +0200)]
selftests: forwarding: Test removal of mirroring
Test that when flower-based mirror action is removed, mirroring stops.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Wed, 23 May 2018 16:35:01 +0000 (18:35 +0200)]
selftests: forwarding: Test removal of underlay route
When underlay route is removed, the mirrored traffic should not be
forwarded.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Wed, 23 May 2018 16:34:56 +0000 (18:34 +0200)]
selftests: forwarding: Test mirroring to deleted device
Tests that the mirroring code catches up with deletion of a mirrored-to
device.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Tue, 22 May 2018 07:07:18 +0000 (15:07 +0800)]
cxgb4: Check for kvzalloc allocation failure
t4_prep_fw doesn't check the card_fw pointer before storing the read data,
which could lead to a NULL pointer dereference if kvzalloc failed.
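The missing check is the usual allocation-failure guard, roughly as below
(variable names from the text above; surrounding code omitted):
card_fw = kvzalloc(sizeof(*card_fw), GFP_KERNEL);
if (!card_fw)
	return -ENOMEM;	/* bail out before t4_prep_fw() dereferences it */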
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Fri, 25 May 2018 01:36:16 +0000 (18:36 -0700)]
Merge branch 'xdp_xmit-bulking'
Jesper Dangaard Brouer says:
====================
This patchset changes the ndo_xdp_xmit API to take a bulk of xdp frames.
When the kernel is compiled with CONFIG_RETPOLINE, every indirect function
pointer (branch) call hurts performance. For XDP this has a huge
negative performance impact.
This patchset reduces the needed (indirect) calls to ndo_xdp_xmit, but
also prepares for further optimizations. The DMA APIs' use of indirect
function pointer calls is the primary source of the regression. It is
left for a followup patchset, to use bulking calls towards the DMA API
(via the scatter-gather calls).
The other advantage of this API change is that drivers can more easily
amortize the cost of any sync/locking scheme over the bulk of
packets. The assumption of the current API is that the driver
implementing the NDO will also allocate a dedicated XDP TX queue for
every CPU in the system. Which is not always possible or practical to
configure. E.g. ixgbe cannot load an XDP program on a machine with
more than 96 CPUs, due to limited hardware TX queues. E.g. virtio_net
is hard to configure as it requires manually increasing the
queues. E.g. the tun driver chooses to use a per-XDP-frame producer lock,
taken modulo smp_processor_id over the available queues.
I've considered adding 'flags' to ndo_xdp_xmit, but it's not part of
this patchset. This will be a followup patchset, once we know if this
will be needed (e.g. for non-map xdp_redirect flush-flag, and if
AF_XDP chooses to use ndo_xdp_xmit for TX).
---
V5: Fixed up issues spotted by Daniel and John
V4: Split the series out from 4 to 8 patches. I cannot split the
driver changes from the NDO change, but I've tried to isolate the NDO
change together with the driver change as much as possible.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:46:22 +0000 (16:46 +0200)]
samples/bpf: xdp_monitor use err code from tracepoint xdp:xdp_devmap_xmit
Update xdp_monitor to use the recently added err code introduced
in tracepoint xdp:xdp_devmap_xmit, to show if the drop count is
caused by a general driver delivery problem. Other kinds of drops
will likely just be more normal TX space issues.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:46:17 +0000 (16:46 +0200)]
xdp/trace: extend tracepoint in devmap with an err
Extend the tracepoint xdp:xdp_devmap_xmit in devmap with an err code to
allow people to more easily identify why the ndo_xdp_xmit
call to a given driver is failing.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:46:12 +0000 (16:46 +0200)]
xdp: change ndo_xdp_xmit API to support bulking
This patch changes the API for ndo_xdp_xmit to support bulking
xdp_frames.
When kernel is compiled with CONFIG_RETPOLINE, XDP sees a huge slowdown.
Most of the slowdown is caused by DMA API indirect function calls, but
also the net_device->ndo_xdp_xmit() call.
Benchmarked patch with CONFIG_RETPOLINE, using xdp_redirect_map with
single flow/core test (CPU E5-1650 v4 @ 3.60GHz), showed
performance improved:
for driver ixgbe: 6,042,682 pps -> 6,853,768 pps = +811,086 pps
for driver i40e : 6,187,169 pps -> 6,724,519 pps = +537,350 pps
With frames available as a bulk inside the driver's ndo_xdp_xmit call,
further optimizations are possible, like bulk DMA-mapping for TX.
Testing without CONFIG_RETPOLINE shows the same performance for
physical NIC drivers.
The virtual NIC driver tun sees a huge performance boost, as it can
avoid doing per frame producer locking, but instead amortize the
locking cost over the bulk.
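For orientation, the bulked NDO has roughly the following shape (a sketch
only; see include/linux/netdevice.h in this patch for the authoritative
prototype and return-value semantics):
/* The driver now receives an array of xdp_frame pointers and reports
 * how many it handled, instead of being called once per frame.
 */
int (*ndo_xdp_xmit)(struct net_device *dev, int n,
		    struct xdp_frame **frames);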
V2: Fix compile errors reported by kbuild test robot <lkp@intel.com>
V4: Isolated ndo, driver changes and callers.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:46:07 +0000 (16:46 +0200)]
xdp: introduce xdp_return_frame_rx_napi
When sending an xdp_frame through an xdp_do_redirect call, error
cases can happen where the xdp_frame needs to be dropped, and
returning an -errno code isn't sufficient/possible any longer
(e.g. in the cpumap case). This is already fully supported by simply
calling xdp_return_frame.
This patch is an optimization which provides xdp_return_frame_rx_napi,
a faster variant for these error cases. It takes advantage of
the protection provided by XDP RX running under NAPI protection.
This change is mostly relevant for drivers using the page_pool
allocator as it can take advantage of this. (Tested with mlx5).
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:46:02 +0000 (16:46 +0200)]
samples/bpf: xdp_monitor use tracepoint xdp:xdp_devmap_xmit
The xdp_monitor sample/tool is updated to use the new tracepoint
xdp:xdp_devmap_xmit that the previous patch just introduced.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:45:57 +0000 (16:45 +0200)]
xdp: add tracepoint for devmap like cpumap have
Notice how this allows us to get XDP statistics without affecting XDP
performance, as the tracepoint is no longer activated on a per-packet basis.
V5: Spotted by John Fastabend.
Fix: 'sent' also counted 'drops' in this patch; a later patch corrected
this, but it was a mistake in this intermediate step.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:45:51 +0000 (16:45 +0200)]
bpf: devmap prepare xdp frames for bulking
Like cpumap, create a queue for xdp frames that will be bulked. For now,
this patch simply invokes ndo_xdp_xmit for each frame. This happens
either when the map flush operation is invoked, or when the limit
DEV_MAP_BULK_SIZE is reached.
V5: Avoid memleak on error path in dev_map_update_elem()
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jesper Dangaard Brouer [Thu, 24 May 2018 14:45:46 +0000 (16:45 +0200)]
bpf: devmap introduce dev_map_enqueue
Functionality is the same, but the ndo_xdp_xmit call is now
simply invoked from inside the devmap.c code.
V2: Fix compile issue reported by kbuild test robot <lkp@intel.com>
V5: Cleanups requested by Daniel
- Newlines before func definition
- Use BUILD_BUG_ON checks
- Remove unnecessary use return value store in dev_map_enqueue
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Fri, 25 May 2018 01:18:20 +0000 (18:18 -0700)]
Merge branch 'bpf-task-fd-query'
Yonghong Song says:
====================
Currently, suppose a userspace application has loaded a bpf program
and attached it to a tracepoint/kprobe/uprobe, and a bpf
introspection tool, e.g., bpftool, wants to show which bpf program
is attached to which tracepoint/kprobe/uprobe. Such attachment
information will be really useful to understand the overall bpf
deployment in the system.
There is a name field (16 bytes) for each program, which could
be used to encode the attachment point. There are some drawbacks
to this approach. First, a bpftool user (e.g., an admin) may not
really understand the association between the name and the
attachment point. Second, if one program is attached to multiple
places, encoding a proper name which can imply all these
attachments becomes difficult.
This patch introduces a new bpf subcommand BPF_TASK_FD_QUERY.
Given a pid and fd, this command will return bpf related information
to user space. Right now it only supports tracepoint/kprobe/uprobe
perf event fd's. For such a fd, BPF_TASK_FD_QUERY will return
. prog_id
. tracepoint name, or
. k[ret]probe funcname + offset or kernel addr, or
. u[ret]probe filename + offset
to the userspace.
The user can use "bpftool prog" to find more information about
bpf program itself with prog_id.
Patch #1 adds function perf_get_event() in kernel/events/core.c.
Patch #2 implements the bpf subcommand BPF_TASK_FD_QUERY.
Patch #3 syncs tools bpf.h header and also add bpf_task_fd_query()
in the libbpf library for samples/selftests/bpftool to use.
Patch #4 adds ksym_get_addr() utility function.
Patch #5 add a test in samples/bpf for querying k[ret]probes and
u[ret]probes.
Patch #6 add a test in tools/testing/selftests/bpf for querying
raw_tracepoint and tracepoint.
Patch #7 add a new subcommand "perf" to bpftool.
Changelogs:
v4 -> v5:
. return strlen(buf) instead of strlen(buf) + 1
in the attr.buf_len. As long as user provides
non-empty buffer, it will be filed with empty
string, truncated string, or full string
based on the buffer size and the length of
to-be-copied string.
v3 -> v4:
. made attr buf_len input/output. The length of the
actual buffer is written to buf_len so user space knows
what is actually needed. If the user provides a buffer
with length >= 1 but less than required, do partial
copy and return -ENOSPC.
. code simplification with put_user.
. changed query result attach_info to fd_type.
. add tests at selftests/bpf to test zero len, null buf and
insufficient buf.
v2 -> v3:
. made perf_get_event() return perf_event pointer const.
this was to ensure that event fields are not meddled.
. detect whether newly BPF_TASK_FD_QUERY is supported or
not in "bpftool perf" and warn users if it is not.
v1 -> v2:
. changed bpf subcommand name from BPF_PERF_EVENT_QUERY
to BPF_TASK_FD_QUERY.
. fixed various "bpftool perf" issues and added documentation
and auto-completion.
====================
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:58 +0000 (11:21 -0700)]
tools/bpftool: add perf subcommand
The new command "bpftool perf [show | list]" will traverse
all processes under /proc, and if any fd is associated
with a perf event, it will print out related perf event
information. Documentation is also added.
Below is an example to show the results using bcc commands.
Running the following 4 bcc commands:
kprobe: trace.py '__x64_sys_nanosleep'
kretprobe: trace.py 'r::__x64_sys_nanosleep'
tracepoint: trace.py 't:syscalls:sys_enter_nanosleep'
uprobe: trace.py 'p:/home/yhs/a.out:main'
The bpftool command line and result:
$ bpftool perf
pid 21711 fd 5: prog_id 5 kprobe func __x64_sys_write offset 0
pid 21765 fd 5: prog_id 7 kretprobe func __x64_sys_nanosleep offset 0
pid 21767 fd 5: prog_id 8 tracepoint sys_enter_nanosleep
pid 21800 fd 5: prog_id 9 uprobe filename /home/yhs/a.out offset 1159
$ bpftool -j perf
[{"pid":21711,"fd":5,"prog_id":5,"fd_type":"kprobe","func":"__x64_sys_write","offset":0}, \
{"pid":21765,"fd":5,"prog_id":7,"fd_type":"kretprobe","func":"__x64_sys_nanosleep","offset":0}, \
{"pid":21767,"fd":5,"prog_id":8,"fd_type":"tracepoint","tracepoint":"sys_enter_nanosleep"}, \
{"pid":21800,"fd":5,"prog_id":9,"fd_type":"uprobe","filename":"/home/yhs/a.out","offset":1159}]
$ bpftool prog
5: kprobe name probe___x64_sys tag
e495a0c82f2c7a8d gpl
loaded_at 2018-05-15T04:46:37-0700 uid 0
xlated 200B not jited memlock 4096B map_ids 4
7: kprobe name probe___x64_sys tag
f2fdee479a503abf gpl
loaded_at 2018-05-15T04:48:32-0700 uid 0
xlated 200B not jited memlock 4096B map_ids 7
8: tracepoint name tracepoint__sys tag
5390badef2395fcf gpl
loaded_at 2018-05-15T04:48:48-0700 uid 0
xlated 200B not jited memlock 4096B map_ids 8
9: kprobe name probe_main_1 tag
0a87bdc2e2953b6d gpl
loaded_at 2018-05-15T04:49:52-0700 uid 0
xlated 200B not jited memlock 4096B map_ids 9
$ ps ax | grep "python ./trace.py"
21711 pts/0 T 0:03 python ./trace.py __x64_sys_write
21765 pts/0 S+ 0:00 python ./trace.py r::__x64_sys_nanosleep
21767 pts/2 S+ 0:00 python ./trace.py t:syscalls:sys_enter_nanosleep
21800 pts/3 S+ 0:00 python ./trace.py p:/home/yhs/a.out:main
22374 pts/1 S+ 0:00 grep --color=auto python ./trace.py
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:57 +0000 (11:21 -0700)]
tools/bpf: add two BPF_TASK_FD_QUERY tests in test_progs
The new tests are added to query perf_event information
for raw_tracepoint and tracepoint attachment. For tracepoint,
both syscall and non-syscall tracepoints are queried, as
they are treated slightly differently inside the kernel.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:56 +0000 (11:21 -0700)]
samples/bpf: add a samples/bpf test for BPF_TASK_FD_QUERY
This is mostly to test kprobe/uprobe which needs kernel headers.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:11 +0000 (11:21 -0700)]
tools/bpf: add ksym_get_addr() in trace_helpers
Given a kernel function name, ksym_get_addr() will return the kernel
address for this function, or 0 if it cannot find this function name
in /proc/kallsyms. This function will be used later when a kernel
address is used to initiate a kprobe perf event.
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:10 +0000 (11:21 -0700)]
tools/bpf: sync kernel header bpf.h and add bpf_task_fd_query in libbpf
Sync kernel header bpf.h to tools/include/uapi/linux/bpf.h and
implement bpf_task_fd_query() in libbpf. The test programs
in samples/bpf and tools/testing/selftests/bpf, and later bpftool
will use this libbpf function to query kernel.
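A minimal userspace sketch of the new wrapper (prototype as added to
tools/lib/bpf in this patch; double-check the header for the exact include
path and argument order):
#include <stdio.h>
#include <bpf/bpf.h>

/* Ask the kernel what the perf event behind (pid, fd) is attached to. */
int query_example(int pid, int fd)
{
	char buf[256];
	__u32 len = sizeof(buf), prog_id, fd_type;
	__u64 probe_offset, probe_addr;
	int err;

	err = bpf_task_fd_query(pid, fd, 0, buf, &len, &prog_id,
				&fd_type, &probe_offset, &probe_addr);
	if (err)
		return err;

	printf("prog_id %u fd_type %u name %s\n", prog_id, fd_type, buf);
	return 0;
}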
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:09 +0000 (11:21 -0700)]
bpf: introduce bpf subcommand BPF_TASK_FD_QUERY
Currently, suppose a userspace application has loaded a bpf program
and attached it to a tracepoint/kprobe/uprobe, and a bpf
introspection tool, e.g., bpftool, wants to show which bpf program
is attached to which tracepoint/kprobe/uprobe. Such attachment
information will be really useful to understand the overall bpf
deployment in the system.
There is a name field (16 bytes) for each program, which could
be used to encode the attachment point. There are some drawbacks
to this approach. First, a bpftool user (e.g., an admin) may not
really understand the association between the name and the
attachment point. Second, if one program is attached to multiple
places, encoding a proper name which can imply all these
attachments becomes difficult.
This patch introduces a new bpf subcommand BPF_TASK_FD_QUERY.
Given a pid and fd, if the <pid, fd> is associated with a
tracepoint/kprobe/uprobe perf event, BPF_TASK_FD_QUERY will return
. prog_id
. tracepoint name, or
. k[ret]probe funcname + offset or kernel addr, or
. u[ret]probe filename + offset
to the userspace.
The user can use "bpftool prog" to find more information about
bpf program itself with prog_id.
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 24 May 2018 18:21:08 +0000 (11:21 -0700)]
perf/core: add perf_get_event() to return perf_event given a struct file
A new extern function, perf_get_event(), is added to return a perf event
given a struct file. This function will be used in later patches.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Heiner Kallweit [Wed, 23 May 2018 06:05:20 +0000 (08:05 +0200)]
net: phy: replace bool members in struct phy_device with bit-fields
In struct phy_device we have a number of flags being defined as type
bool. Similar to e.g. struct pci_dev we can save some space by using
bit-fields.
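Illustratively, the change is of this form (these two field names are
examples only, not the full set of members converted):
/* before */
bool	is_internal;
bool	suspended;

/* after */
unsigned	is_internal:1;
unsigned	suspended:1;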
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 24 May 2018 15:00:24 +0000 (11:00 -0400)]
Merge tag 'batadv-next-for-davem-20180524' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This feature/cleanup patchset includes the following patches:
- bump version strings, by Simon Wunderlich
- Disable batman-adv debugfs by default, by Sven Eckelmann
- Improve handling mesh nodes with multicast optimizations disabled,
by Linus Luessing
- Avoid bool in structs, by Sven Eckelmann
- Allocate less memory when debugfs is disabled, by Sven Eckelmann
- Fix batadv_interface_tx return data type, by Luc Van Oostenryck
- improve link speed handling for virtual interfaces, by Marek Lindner
- Enable BATMAN V algorithm by default, by Marek Lindner
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Thu, 24 May 2018 07:41:19 +0000 (00:41 -0700)]
bpfilter: don't pass O_CREAT when opening console for debug
Passing O_CREAT (00000100) to open means we should also pass the file
mode as the third parameter. Creating /dev/console as a regular
file may not be helpful anyway, so simply drop the flag when
opening debug_fd.
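The resulting call is essentially the following sketch (the bpfilter
usermode helper may spell the flags numerically, and the exact access mode
used in the fix may differ; the point is dropping O_CREAT):
/* No O_CREAT, so no mode argument is needed. */
debug_fd = open("/dev/console", O_WRONLY);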
Fixes:
d2ba09c17a06 ("net: add skeleton of bpfilter kernel module")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Thu, 24 May 2018 04:29:05 +0000 (21:29 -0700)]
bpfilter: fix build dependency
BPFILTER could have been enabled without INET causing this build error:
ERROR: "bpfilter_process_sockopt" [net/bpfilter/bpfilter.ko] undefined!
Fixes:
d2ba09c17a06 ("net: add skeleton of bpfilter kernel module")
Reported-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Thu, 24 May 2018 09:57:37 +0000 (11:57 +0200)]
Merge branch 'bpf-ipv6-seg6-bpf-action'
Mathieu Xhonneux says:
====================
As of Linux 4.14, it is possible to define advanced local processing for
IPv6 packets with a Segment Routing Header through the seg6local LWT
infrastructure. This LWT implements the network programming principles
defined in the IETF "SRv6 Network Programming" draft.
The implemented operations are generic, and it would be very interesting to
be able to implement user-specific seg6local actions, without having to
modify the kernel directly. To do so, this patchset adds an End.BPF action
to seg6local, powered by some specific Segment Routing-related helpers,
which provide SR functionalities that can be applied on the packet. This
BPF hook would then allow implementing specific actions at native kernel
speed such as OAM features, advanced SR SDN policies, SRv6 actions like
Segment Routing Header (SRH) encapsulation depending on the content of
the packet, etc.
This patchset is divided in 6 patches, whose main features are :
- A new seg6local action End.BPF with the corresponding new BPF program
type BPF_PROG_TYPE_LWT_SEG6LOCAL. Such attached BPF program can be
passed to the LWT seg6local through netlink, the same way as the LWT
BPF hook operates.
- 3 new BPF helpers for the seg6local BPF hook, allowing to edit/grow/
shrink a SRH and apply on a packet some of the generic SRv6 actions.
- 1 new BPF helper for the LWT BPF IN hook, allowing to add a SRH through
encapsulation (via IPv6 encapsulation or inlining if the packet contains
already an IPv6 header).
As this patchset adds a new LWT BPF hook, I took into account the result
of the discussions when the LWT BPF infrastructure got merged. Hence, the
seg6local BPF hook doesn't allow write access to skb->data directly, only
the SRH can be modified through specific helpers, which ensures that the
integrity of the packet is maintained. More details are available in the
related patches messages.
The performances of this BPF hook have been assessed with the BPF JIT
enabled on an Intel Xeon X3440 processors with 4 cores and 8 threads
clocked at 2.53 GHz. No throughput losses are noted with the seg6local
BPF hook when the BPF program does nothing (440kpps). Adding a 8-bytes
TLV (1 call each to bpf_lwt_seg6_adjust_srh and bpf_lwt_seg6_store_bytes)
drops the throughput to 410kpps, and inlining a SRH via bpf_lwt_seg6_action
drops the throughput to 420kpps. All throughputs are stable.
Changelog:
v2: move the SRH integrity state from skb->cb to a per-cpu buffer
v3: - document helpers in man-page style
- fix kbuild bugs
- un-break BPF LWT out hook
- bpf_push_seg6_encap is now static
- preempt_enable is now called when the packet is dropped in
input_action_end_bpf
v4: fix kbuild bugs when CONFIG_IPV6=m
v5: fix kbuild sparse warnings when CONFIG_IPV6=m
v6: fix skb pointers-related bugs in helpers
v7: - fix memory leak in error path of End.BPF setup
- add freeing of BPF data in seg6_local_destroy_state
- new enums SEG6_LOCAL_BPF_* instead of re-using ones of lwt bpf for
netlink nested bpf attributes
- SEG6_LOCAL_BPF_PROG attr now contains prog->aux->id when dumping
state
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:17 +0000 (14:58 +0100)]
selftests/bpf: test for seg6local End.BPF action
Add a new test for the seg6local End.BPF action. The following helpers
are also tested:
- bpf_lwt_push_encap within the LWT BPF IN hook
- bpf_lwt_seg6_action
- bpf_lwt_seg6_adjust_srh
- bpf_lwt_seg6_store_bytes
A chain of End.BPF actions is built. The SRH is injected through a LWT
BPF IN hook before entering this chain. Each End.BPF action validates
the previous one, otherwise the packet is dropped. The test succeeds
if the last node in the chain receives the packet and the UDP datagram
contained can be retrieved from userspace.
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:16 +0000 (14:58 +0100)]
ipv6: sr: Add seg6local action End.BPF
This patch adds the End.BPF action to the LWT seg6local infrastructure.
This action works like any other seg6local End action, meaning that an IPv6
header with SRH is needed, whose DA has to be equal to the SID of the
action. It will also advance the SRH to the next segment, the BPF program
does not have to take care of this.
Since the BPF program must not be a source of instability in the kernel, it
is important to ensure that the integrity of the packet is maintained
before yielding it back to the IPv6 layer. The hook hence keeps track of
whether the SRH has been altered through the helpers, and re-validates its
content if needed with seg6_validate_srh. The state kept for validation is
stored in a per-CPU buffer. The BPF program is not allowed to directly
write into the packet, and only some fields of the SRH can be altered
through the helper bpf_lwt_seg6_store_bytes.
Performance profiling has shown that the SRH re-validation does not induce
a significant overhead. If the altered SRH is deemed as invalid, the packet
is dropped.
This validation is also done before executing any action through
bpf_lwt_seg6_action, and will not be performed again if the SRH is not
modified after calling the action.
The BPF program may return 3 types of return codes:
- BPF_OK: the End.BPF action will look up the next destination through
seg6_lookup_nexthop.
- BPF_REDIRECT: if an action has been executed through the
bpf_lwt_seg6_action helper, the BPF program should return this
value, as the skb's destination is already set and the default
lookup should not be performed.
- BPF_DROP : the packet will be dropped.
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Acked-by: David Lebrun <dlebrun@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:15 +0000 (14:58 +0100)]
bpf: Split lwt inout verifier structures
The new bpf_lwt_push_encap helper should only be accessible within the
LWT BPF IN hook, and not the OUT one, as using it there may cause a
panic when handling the skb.
At the moment, both LWT BPF IN and OUT share the same list of helpers,
whose calls are authorized by the verifier. This patch separates the
verifier ops for the IN and OUT hooks, and allows the IN hook to call the
bpf_lwt_push_encap helper.
This patch is also the occasion to put all lwt_*_func_proto functions
together for clarity. At the moment, sock_ops_func_proto sits between
lwt_inout_func_proto and lwt_xmit_func_proto.
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Acked-by: David Lebrun <dlebrun@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:14 +0000 (14:58 +0100)]
bpf: Add IPv6 Segment Routing helpers
The BPF seg6local hook should be powerful enough to enable users to
implement most of the use-cases one could think of. After some thinking,
we figured out that the following actions should be possible on a SRv6
packet, requiring 3 specific helpers :
- bpf_lwt_seg6_store_bytes: Modify non-sensitive fields of the SRH
- bpf_lwt_seg6_adjust_srh: Allow to grow or shrink a SRH
(to add/delete TLVs)
- bpf_lwt_seg6_action: Apply some SRv6 network programming actions
(specifically End.X, End.T, End.B6 and
End.B6.Encap)
The specifications of these helpers are provided in the patch (see
include/uapi/linux/bpf.h).
The non-sensitive fields of the SRH are the following : flags, tag and
TLVs. The other fields can not be modified, to maintain the SRH
integrity. Flags, tag and TLVs can easily be modified as their validity
can be checked afterwards via seg6_validate_srh. It is not allowed to
modify the segments directly. If one wants to add segments on the path,
he should stack a new SRH using the End.B6 action via
bpf_lwt_seg6_action.
Growing, shrinking or editing TLVs via the helpers will flag the SRH as
invalid, and it will have to be re-validated before re-entering the IPv6
layer. This flag is stored in a per-CPU buffer, along with the current
header length in bytes.
Storing the SRH len in bytes in the control block is mandatory when using
bpf_lwt_seg6_adjust_srh. The Header Ext. Length field contains the SRH
len rounded to 8 bytes (a padding TLV can be inserted to ensure the 8-bytes
boundary). When adding/deleting TLVs within the BPF program, the SRH may
temporarily be in an invalid state where its length cannot be rounded to 8
bytes without remainder, hence the need to store the length in bytes
separately. The caller of the BPF program can then ensure that the SRH's
final length is valid using this value. Again, a final SRH modified by a
BPF program which doesn't respect the 8-byte boundary will be discarded
as it will be considered invalid.
Finally, a fourth helper is provided, bpf_lwt_push_encap, which is
available from the LWT BPF IN hook, but not from the seg6local BPF one.
This helper allows to encapsulate a Segment Routing Header (either with
a new outer IPv6 header, or by inlining it directly in the existing IPv6
header) into a non-SRv6 packet. This helper is required if we want to
offer the possibility to dynamically encapsulate a SRH for non-SRv6 packet,
as the BPF seg6local hook only works on traffic already containing a SRH.
This is the BPF equivalent of the seg6 LWT infrastructure, which achieves
the same purpose but with a static SRH per route.
These helpers require CONFIG_IPV6=y (and not =m).
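To make the helper interplay concrete, here is a minimal, hypothetical
End.BPF program body (helper prototypes as documented in this patch's
include/uapi/linux/bpf.h; the TLV contents, offset and section name are
made up, and a bpf_helpers.h declaring these helpers is assumed):
#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("lwt_seg6local")
int add_tlv(struct __sk_buff *skb)
{
	/* hypothetical 8-byte TLV: type 1, length 6, zero value */
	__u8 tlv[8] = { 0x01, 0x06, 0, 0, 0, 0, 0, 0 };
	int tlv_off = 56;	/* made-up offset inside the SRH */

	/* grow the SRH by 8 bytes at tlv_off, then write the TLV there */
	if (bpf_lwt_seg6_adjust_srh(skb, tlv_off, 8) < 0)
		return BPF_DROP;
	if (bpf_lwt_seg6_store_bytes(skb, tlv_off, tlv, sizeof(tlv)) < 0)
		return BPF_DROP;

	/* End.BPF then performs the regular seg6 nexthop lookup */
	return BPF_OK;
}

char _license[] SEC("license") = "GPL";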
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Acked-by: David Lebrun <dlebrun@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:13 +0000 (14:58 +0100)]
ipv6: sr: export function lookup_nexthop
The function lookup_nexthop is essential to implement most of the seg6local
actions. As we want to provide a BPF helper that applies some of these
actions on the packet being processed, the helper should be able to call
this function, hence the need to make it public.
Moreover, if one argument is incorrect or if the next hop can not be found,
an error should be returned by the BPF helper so the BPF program can adapt
its processing of the packet (return an error, properly force the drop,
...). This patch hence makes this function return dst->error to indicate a
possible error.
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Acked-by: David Lebrun <dlebrun@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Mathieu Xhonneux [Sun, 20 May 2018 13:58:12 +0000 (14:58 +0100)]
ipv6: sr: make seg6.h includable without IPv6
include/net/seg6.h cannot be included in a source file if CONFIG_IPV6 is
not enabled:
include/net/seg6.h: In function 'seg6_pernet':
>> include/net/seg6.h:52:14: error: 'struct net' has no member named
'ipv6'; did you mean 'ipv4'?
return net->ipv6.seg6_data;
^~~~
ipv4
This commit makes seg6_pernet return NULL if IPv6 is not compiled, hence
allowing seg6.h to be included regardless of the configuration.
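The resulting accessor presumably follows the usual config-guarded
pattern; a sketch (the exact placement of the guard in the patch may
differ slightly):
static inline struct seg6_pernet_data *seg6_pernet(struct net *net)
{
#if IS_ENABLED(CONFIG_IPV6)
	return net->ipv6.seg6_data;
#else
	/* IPv6 not built in: no per-netns seg6 state exists. */
	return NULL;
#endif
}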
Signed-off-by: Mathieu Xhonneux <m.xhonneux@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Thu, 24 May 2018 07:20:50 +0000 (09:20 +0200)]
Merge branch 'bpf-multi-prog-improvements'
Sandipan Das says:
====================
[1] Support for bpf-to-bpf function calls in the powerpc64 JIT compiler.
[2] Provide a way for resolving function calls because of the way JITed
images are allocated in powerpc64.
[3] Fix to get JITed instruction dumps for multi-function programs from
the bpf system call.
[4] Fix for bpftool to show delimited multi-function JITed image dumps.
v4:
- Incorporate review comments from Jakub.
- Fix JSON output for bpftool.
v3:
- Change base tree tag to bpf-next.
- Incorporate review comments from Alexei, Daniel and Jakub.
- Make sure that the JITed image does not grow or shrink after
the last pass due to the way the instruction sequence used
to load a callee's address may be optimized.
- Make additional changes to the bpf system call and bpftool to
make multi-function JITed dumps easier to correlate.
v2:
- Incorporate review comments from Jakub.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:54 +0000 (12:26 +0530)]
tools: bpftool: add delimiters to multi-function JITed dumps
This splits up the contiguous JITed dump obtained via the bpf
system call into more relatable chunks for each function in
the program. If the kernel symbols corresponding to these are
known, they are printed in the header for each JIT image dump;
otherwise the masked start address is printed.
Before applying this patch:
# bpftool prog dump jited id 1
0: push %rbp
1: mov %rsp,%rbp
...
70: leaveq
71: retq
72: push %rbp
73: mov %rsp,%rbp
...
dd: leaveq
de: retq
# bpftool -p prog dump jited id 1
[{
"pc": "0x0",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0x71",
"operation": "retq",
"operands": [null
]
},{
"pc": "0x72",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0xde",
"operation": "retq",
"operands": [null
]
}
]
After applying this patch:
# echo 0 > /proc/sys/net/core/bpf_jit_kallsyms
# bpftool prog dump jited id 1
0xffffffffc02c7000:
0: push %rbp
1: mov %rsp,%rbp
...
70: leaveq
71: retq
0xffffffffc02cf000:
0: push %rbp
1: mov %rsp,%rbp
...
6b: leaveq
6c: retq
# bpftool -p prog dump jited id 1
[{
"name": "0xffffffffc02c7000",
"insns": [{
"pc": "0x0",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0x71",
"operation": "retq",
"operands": [null
]
}
]
},{
"name": "0xffffffffc02cf000",
"insns": [{
"pc": "0x0",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0x6c",
"operation": "retq",
"operands": [null
]
}
]
}
]
# echo 1 > /proc/sys/net/core/bpf_jit_kallsyms
# bpftool prog dump jited id 1
bpf_prog_b811aab41a39ad3d_foo:
0: push %rbp
1: mov %rsp,%rbp
...
70: leaveq
71: retq
bpf_prog_cf418ac8b67bebd9_F:
0: push %rbp
1: mov %rsp,%rbp
...
6b: leaveq
6c: retq
# bpftool -p prog dump jited id 1
[{
"name": "bpf_prog_b811aab41a39ad3d_foo",
"insns": [{
"pc": "0x0",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0x71",
"operation": "retq",
"operands": [null
]
}
]
},{
"name": "bpf_prog_cf418ac8b67bebd9_F",
"insns": [{
"pc": "0x0",
"operation": "push",
"operands": ["%rbp"
]
},{
...
},{
"pc": "0x6c",
"operation": "retq",
"operands": [null
]
}
]
}
]
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:53 +0000 (12:26 +0530)]
tools: bpf: sync bpf uapi header
Syncing the bpf.h uapi header with tools so that struct
bpf_prog_info has the two new fields for passing on the
JITed image lengths of each function in a multi-function
program.
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:52 +0000 (12:26 +0530)]
bpf: get JITed image lengths of functions via syscall
This adds two new fields to struct bpf_prog_info. For
multi-function programs, these fields can be used to pass
a list of the JITed image lengths of each function for a
given program to userspace using the bpf system call with
the BPF_OBJ_GET_INFO_BY_FD command.
This can be used by userspace applications like bpftool
to split up the contiguous JITed dump, also obtained via
the system call, into more relatable chunks corresponding
to each function.
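For example, a userspace consumer would typically issue
BPF_OBJ_GET_INFO_BY_FD twice: once to learn how many functions there
are, then again with a buffer for the per-function lengths. A sketch
using libbpf's bpf_obj_get_info_by_fd() wrapper (field names as added
to struct bpf_prog_info; error handling kept minimal):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>
/* Print the JITed image length of each function of a loaded program. */
static void print_jited_func_lens(int prog_fd)
{
	struct bpf_prog_info info = {};
	__u32 info_len = sizeof(info);
	__u32 nr, i, *lens;
	/* First call: only the counters are filled in. */
	if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
		return;
	nr = info.nr_jited_func_lens;
	if (!nr)
		return;		/* single-function program or not JITed */
	lens = calloc(nr, sizeof(*lens));
	if (!lens)
		return;
	/* Second call: pass a buffer so the kernel copies the lengths. */
	memset(&info, 0, sizeof(info));
	info.nr_jited_func_lens = nr;
	info.jited_func_lens = (__u64)(unsigned long)lens;
	info_len = sizeof(info);
	if (!bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
		for (i = 0; i < info.nr_jited_func_lens; i++)
			printf("func %u: %u bytes of JITed code\n", i, lens[i]);
	free(lens);
}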
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:51 +0000 (12:26 +0530)]
bpf: fix multi-function JITed dump obtained via syscall
Currently, for multi-function programs, we cannot get the JITed
instructions using the bpf system call's BPF_OBJ_GET_INFO_BY_FD
command. Because of this, userspace tools such as bpftool fail
to identify a multi-function program as being JITed or not.
With the JIT enabled and the test program running, this can be
verified as follows:
# cat /proc/sys/net/core/bpf_jit_enable
1
Before applying this patch:
# bpftool prog list
1: kprobe name foo tag
b811aab41a39ad3d gpl
loaded_at 2018-05-16T11:43:38+0530 uid 0
xlated 216B not jited memlock 65536B
...
# bpftool prog dump jited id 1
no instructions returned
After applying this patch:
# bpftool prog list
1: kprobe name foo tag
b811aab41a39ad3d gpl
loaded_at 2018-05-16T12:13:01+0530 uid 0
xlated 216B jited 308B memlock 65536B
...
# bpftool prog dump jited id 1
0: nop
4: nop
8: mflr r0
c: std r0,16(r1)
10: stdu r1,-112(r1)
14: std r31,104(r1)
18: addi r31,r1,48
1c: li r3,10
...
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:50 +0000 (12:26 +0530)]
tools: bpftool: resolve calls without using imm field
Currently, we resolve the callee's address for a JITed function
call by using the imm field of the call instruction as an offset
from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
use this address to get the callee's kernel symbol's name.
For some architectures, such as powerpc64, the imm field is not
large enough to hold this offset. So, instead of assigning this
offset to the imm field, the verifier now assigns the subprog
id. Also, a list of kernel symbol addresses for all the JITed
functions is provided in the program info. We now use the imm
field as an index into this list to look up a callee's symbol
address and resolve its name.
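In effect, the address calculation in the dumper turns from
"__bpf_call_base + imm" into an array lookup; a simplified,
illustrative sketch (not bpftool's actual code):
#include <linux/bpf.h>
/* Resolve a bpf-to-bpf call target from the kernel symbol addresses
 * reported in bpf_prog_info, using imm as an index (illustrative).
 */
static unsigned long resolve_call_target(const struct bpf_insn *insn,
					 const __u64 *jited_ksyms,
					 __u32 nr_jited_ksyms)
{
	if ((__u32)insn->imm >= nr_jited_ksyms)
		return 0;	/* unknown target */
	return jited_ksyms[insn->imm];
}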
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:49 +0000 (12:26 +0530)]
tools: bpf: sync bpf uapi header
Syncing the bpf.h uapi header with tools so that struct
bpf_prog_info has the two new fields for passing on the
addresses of the kernel symbols corresponding to each
function in a program.
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:48 +0000 (12:26 +0530)]
bpf: get kernel symbol addresses via syscall
This adds two new fields to struct bpf_prog_info. For
multi-function programs, these fields can be used to pass
a list of kernel symbol addresses for all functions in a
given program to userspace using the bpf system call with
the BPF_OBJ_GET_INFO_BY_FD command.
When bpf_jit_kallsyms is enabled, we can get the address
of the corresponding kernel symbol for a callee function
and resolve the symbol's name. The address is determined
by adding the value of the call instruction's imm field
to __bpf_call_base. This offset gets assigned to the imm
field by the verifier.
For some architectures, such as powerpc64, the imm field
is not large enough to hold this offset.
We resolve this by:
[1] Assigning the subprog id to the imm field of a call
instruction in the verifier instead of the offset of
the callee's symbol's address from __bpf_call_base.
[2] Determining the address of a callee's corresponding
symbol by using the imm field as an index for the
list of kernel symbol addresses now available from
the program info.
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:47 +0000 (12:26 +0530)]
bpf: powerpc64: add JIT support for multi-function programs
This adds support for bpf-to-bpf function calls in the powerpc64
JIT compiler. The JIT compiler converts the bpf call instructions
to native branch instructions. After a round of the usual passes,
the start addresses of the JITed images for the callee functions
are known. Finally, to fix up the branch target addresses, we need
to perform an extra pass.
Because of the address range in which JITed images are allocated
on powerpc64, the offsets of the start addresses of these images
from __bpf_call_base are as large as 64 bits. So, for a function
call, we cannot use the imm field of the instruction to determine
the callee's address. Instead, we use the alternative method of
getting it from the list of function addresses in the auxiliary
data of the caller by using the off field as an index.
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:46 +0000 (12:26 +0530)]
bpf: powerpc64: pad function address loads with NOPs
For multi-function programs, loading the address of a callee
function to a register requires emitting instructions whose
count varies from one to five depending on the nature of the
address.
Since the callee's address becomes known only just before the extra
pass, the number of instructions required to load this address may
differ from what was previously generated. This can make the JITed
image grow or shrink.
To avoid this, we generate a constant five-instruction sequence when
loading function addresses, padding the optimized load sequence with
NOPs.
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sandipan Das [Thu, 24 May 2018 06:56:45 +0000 (12:26 +0530)]
bpf: support 64-bit offsets for bpf function calls
The imm field of a bpf instruction is a signed 32-bit integer.
For JITed bpf-to-bpf function calls, it holds the offset of the
start address of the callee's JITed image from __bpf_call_base.
For some architectures, such as powerpc64, this offset may be
as large as 64 bits and cannot be accommodated in the imm field
without truncation.
We resolve this by:
[1] Additionally using the auxiliary data of each function to
keep a list of start addresses of the JITed images for all
functions determined by the verifier.
[2] Retaining the subprog id inside the off field of the call
instructions and using it to index into the list mentioned
above and look up the callee's address.
To make sure that the existing JIT compilers continue to work
without requiring changes, we keep the imm field as it is.
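On the JIT side, the extra pass can then recover the callee's image
address from the caller's auxiliary data rather than from imm; roughly
(a sketch against the in-kernel struct bpf_prog / bpf_prog_aux layout,
not the literal patch code):
#include <linux/filter.h>
/* Return the JITed entry point of the callee of a BPF_CALL insn,
 * where insn->off holds the subprog id (sketch, not the patch code).
 */
static u64 callee_image_addr(const struct bpf_prog *prog,
			     const struct bpf_insn *insn)
{
	if (insn->off < 0 || (u32)insn->off >= prog->aux->func_cnt)
		return 0;
	return (u64)(unsigned long)prog->aux->func[insn->off]->bpf_func;
}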
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Martin KaFai Lau [Wed, 23 May 2018 18:32:36 +0000 (11:32 -0700)]
bpf: btf: Avoid variable length array
Sparse warning:
kernel/bpf/btf.c:1985:34: warning: Variable length array is used.
This patch directly uses ARRAY_SIZE().
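The pattern is generic C rather than anything btf.c-specific; a small
userspace illustration (names are made up):
#include <stdio.h>
#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))	/* as in the kernel */
static const int sec_ids[] = { 1, 2, 3, 4 };	/* illustrative fixed table */
int main(void)
{
	/* Sizing the scratch array with ARRAY_SIZE() keeps its length a
	 * compile-time constant, so no variable length array is created.
	 */
	int scratch[ARRAY_SIZE(sec_ids)];
	printf("%zu entries\n", ARRAY_SIZE(scratch));
	return 0;
}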
Fixes:
f80442a4cd18 ("bpf: btf: Change how section is supported in btf_header")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
David S. Miller [Wed, 23 May 2018 20:37:11 +0000 (16:37 -0400)]
Merge git://git./linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for your net-next
tree, they are:
1) Remove obsolete nf_log tracing from nf_tables, from Florian Westphal.
2) Add support for map lookups to numgen, random and hash expressions,
from Laura Garcia.
3) Allow nat hooks for iptables and nftables to be registered at the
same time. Patchset from Florian Westphal.
4) Timeout support for rbtree sets.
5) ip6_rpfilter needs the interface for link-local addresses, from
Vincent Bernat.
6) Add nf_ct_hook and nf_nat_hook structures and use them.
7) Do not drop packets racing to insert conntrack entries into the
hashes; this is particularly a problem in nfqueue setups.
8) Address fallout from xt_osf separation to nf_osf, patches
from Florian Westphal and Fernando Mancera.
9) Remove reference to struct nft_af_info, which doesn't exist anymore.
From Taehee Yoo.
This batch comes with a conflict between
25fd386e0bc0 ("netfilter: core: add missing __rcu annotation") in your
tree and
2c205dd3981f ("netfilter: add struct nf_nat_hook and use it") coming in
this batch.
This conflict can be solved by leaving the __rcu tag on
__netfilter_net_init() - added by 25fd386e0bc0 - and removing all code
related to nf_nat_decode_session_hook - which is gone after
2c205dd3981f - as described by:
diff --cc net/netfilter/core.c
index e0ae4aae96f5,206fb2c4c319..168af54db975
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@@ -611,7 -580,13 +611,8 @@@ const struct nf_conntrack_zone nf_ct_zo
EXPORT_SYMBOL_GPL(nf_ct_zone_dflt);
#endif /* CONFIG_NF_CONNTRACK */
- static void __net_init __netfilter_net_init(struct nf_hook_entries **e, int max)
-#ifdef CONFIG_NF_NAT_NEEDED
-void (*nf_nat_decode_session_hook)(struct sk_buff *, struct flowi *);
-EXPORT_SYMBOL(nf_nat_decode_session_hook);
-#endif
-
+ static void __net_init
+ __netfilter_net_init(struct nf_hook_entries __rcu **e, int max)
{
int h;
I can also merge your net-next tree into nf-next, solve the conflict and
resend the pull request if you prefer.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 23 May 2018 20:33:01 +0000 (16:33 -0400)]
Merge branch 'amd-xgbe-next'
Tom Lendacky says:
====================
amd-xgbe: AMD XGBE driver updates 2018-05-21
The following updates are included in this driver update series:
- Fix the debug output for the max channels count
- Read (once) and save the port property registers during probe
- Remove the use of the comm_owned field
- Remove unused SFP diagnostic support indicator field
- Add ethtool --module-info support
- Add ethtool --show-ring/--set-ring support
- Update the driver in preparation for ethtool --set-channels support
- Add ethtool --show-channels/--set-channels support
- Update the driver to always perform link training in KR mode
- Advertise FEC support when using a KR re-driver
- Update the BelFuse quirk to now support SGMII
- Improve 100Mbps auto-negotiation for BelFuse parts
This patch series is based on net-next.
---
Changes since v1:
- Update the --set-channels support to use the combined, rx and tx
options as specified in the ethtool man page (in other words, don't
create combined channels based on the minimum of the tx and rx channel
counts specified).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:47 +0000 (11:39 -0500)]
amd-xgbe: Improve SFP 100Mbps auto-negotiation
After changing speed to 100Mbps as a result of auto-negotiation (AN),
some 10/100/1000Mbps SFPs indicate a successful link (no faults or loss
of signal), but cannot successfully transmit or receive data. These
SFPs require an extra auto-negotiation after the speed change in order
to operate properly. Add a quirk for these SFPs so that if the outcome
of AN actually results in a change to a new speed, AN is re-initiated
at that new speed.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:39 +0000 (11:39 -0500)]
amd-xgbe: Update the BelFuse quirk to support SGMII
Instead of using a quirk to make the BelFuse 1GBT-SFP06 part look like
a 1000baseX part, program the SFP PHY to support SGMII and 10/100/1000
baseT.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:31 +0000 (11:39 -0500)]
amd-xgbe: Advertise FEC support with the KR re-driver
When a KR re-driver is present, indicate the FEC support is available
during auto-negotiation.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:21 +0000 (11:39 -0500)]
amd-xgbe: Always attempt link training in KR mode
Link training is always attempted when in KR mode, but the code is
structured to check if link training has been enabled before attempting
to perform it. Since that check will always be true, simplify the code
to always enable and start link training during KR auto-negotiation.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:13 +0000 (11:39 -0500)]
amd-xgbe: Add ethtool show/set channels support
Add ethtool support to show and set the device channel configuration.
Changing the channel configuration will result in a device restart.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:39:04 +0000 (11:39 -0500)]
amd-xgbe: Prepare for ethtool set-channel support
In order to support being able to dynamically set/change the number of
Rx and Tx channels, update the code to:
- Move alloc and free of device memory into callable functions
- Move setting of the real number of Rx and Tx channels to device startup
- Move mapping of the RSS channels to device startup
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:38:56 +0000 (11:38 -0500)]
amd-xgbe: Add ethtool show/set ring parameter support
Add ethtool support to show and set the number of the Rx and Tx ring
descriptors. Changing the ring configuration will result in a device
restart.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:38:46 +0000 (11:38 -0500)]
amd-xgbe: Add ethtool support to retrieve SFP module info
Add support to get SFP module information using ethtool.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:38:38 +0000 (11:38 -0500)]
amd-xgbe: Remove field that indicates SFP diagnostic support
The driver currently sets an indication of whether the SFP supports
diagnostics data and whether the driver can obtain it. This isn't
currently used by the driver, and the logic to set this indicator is
flawed because the field is cleared each time the SFP is checked and
only set when a new SFP is detected. Remove this field and the logic
supporting it.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:38:29 +0000 (11:38 -0500)]
amd-xgbe: Remove use of comm_owned field
The comm_owned field can hide logic where double locking is attempted
and prevent multiple threads for the same device from accessing the
mutex properly. Remove the comm_owned field and use the mutex API
exclusively for gaining ownership. The current driver has been audited
and is obtaining communications ownership properly.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Lendacky [Wed, 23 May 2018 16:38:20 +0000 (11:38 -0500)]
amd-xgbe: Read and save the port property registers during probe
Read and save the port property registers once during the device probe
and then use the saved values as they are needed.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>