David S. Miller [Wed, 20 Nov 2019 20:34:37 +0000 (12:34 -0800)]
Merge branch 'page_pool-DMA-sync'
Lorenzo Bianconi says:
====================
add DMA-sync-for-device capability to page_pool API
Introduce the possibility to sync DMA memory for device in the page_pool API.
This feature allows syncing only the required DMA size rather than always the full buffer
(dma_sync_single_for_device can be very costly).
Please note DMA-sync-for-CPU is still device driver responsibility.
Relying on the page_pool DMA sync, the mvneta driver improves XDP_DROP performance
by about 170Kpps:
- XDP_DROP DMA sync managed by mvneta driver: ~420Kpps
- XDP_DROP DMA sync managed by page_pool API: ~585Kpps
Do not change naming convention for the moment since the changes will hit other
drivers as well. I will address it in another series.
Changes since v4:
- do not allow the driver to set max_len to 0
- convert PP_FLAG_DMA_MAP/PP_FLAG_DMA_SYNC_DEV to BIT() macro
Changes since v3:
- move dma_sync_for_device before putting the page in ptr_ring in
__page_pool_recycle_into_ring since ptr_ring can be consumed
concurrently. Simplify the code moving dma_sync_for_device
before running __page_pool_recycle_direct/__page_pool_recycle_into_ring
Changes since v2:
- rely on PP_FLAG_DMA_SYNC_DEV flag instead of dma_sync
Changes since v1:
- rename sync to dma_sync
- set dma_sync_size to 0xFFFFFFFF in page_pool_recycle_direct and
page_pool_put_page routines
- Improve documentation
====================
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Wed, 20 Nov 2019 14:54:19 +0000 (16:54 +0200)]
net: mvneta: get rid of huge dma sync in mvneta_rx_refill
Get rid of the costly dma_sync_single_for_device in mvneta_rx_refill,
since the driver can now let the page_pool API manage the needed DMA
sync with a proper size.
- XDP_DROP DMA sync managed by mvneta driver: ~420Kpps
- XDP_DROP DMA sync managed by page_pool API: ~585Kpps
Tested-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Wed, 20 Nov 2019 14:54:18 +0000 (16:54 +0200)]
net: page_pool: add the possibility to sync DMA memory for device
Introduce the following parameters in order to add the possibility to sync
DMA memory for device before putting allocated pages in the page_pool
caches:
- PP_FLAG_DMA_SYNC_DEV: if set in page_pool_params flags, all pages that
the driver gets from page_pool will be DMA-synced-for-device according
to the length provided by the device driver. Please note DMA-sync-for-CPU
is still device driver responsibility
- offset: DMA address offset where the DMA engine starts copying rx data
- max_len: maximum DMA memory size page_pool is allowed to flush. This
is currently used in __page_pool_alloc_pages_slow routine when pages
are allocated from page allocator
These parameters are supposed to be set by device drivers.
This optimization reduces the length of the DMA-sync-for-device.
The optimization is valid because pages are initially
DMA-synced-for-device as defined via max_len. At RX time, the driver
will perform a DMA-sync-for-CPU on the memory for the packet length.
What matters is the memory occupied by the packet payload, because
this is the area the CPU is allowed to read and modify. As we don't track
cache lines written into by the CPU, simply use the packet payload length
as dma_sync_size at page_pool recycle time. This also takes into account
any tail-extend.
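For illustration, a minimal sketch of how a driver might opt in to the new flag
(the helper name and headroom value are hypothetical, not taken from mvneta):
  #include <net/page_pool.h>

  #define RX_HEADROOM 256  /* headroom reserved before the DMA area (illustrative) */

  static struct page_pool *rxq_page_pool_create(struct device *dev,
                                                unsigned int pool_size)
  {
          struct page_pool_params pp = {
                  .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
                  .order     = 0,
                  .pool_size = pool_size,
                  .nid       = NUMA_NO_NODE,
                  .dev       = dev,
                  .dma_dir   = DMA_FROM_DEVICE,
                  .offset    = RX_HEADROOM,               /* DMA engine starts writing here */
                  .max_len   = PAGE_SIZE - RX_HEADROOM,   /* worst-case sync length */
          };

          return page_pool_create(&pp);
  }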
Tested-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Wed, 20 Nov 2019 14:54:17 +0000 (16:54 +0200)]
net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp
Rely on page_pool_recycle_direct and not on xdp_return_buff in
mvneta_run_xdp. This is a preliminary patch to limit the dma sync length
to the one strictly necessary.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gautam Ramakrishnan [Wed, 20 Nov 2019 14:13:54 +0000 (19:43 +0530)]
net: sched: pie: enable timestamp based delay calculation
RFC 8033 suggests an alternative approach to calculate the queue
delay in PIE by using a timestamp on every enqueued packet. This
patch adds an implementation of that approach and sets it as the
default method to calculate queue delay. The previous method (based
on Little's law) to calculate queue delay is set as optional.
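Conceptually (a simplified sketch, not the actual sch_pie code), the timestamp
approach measures the delay directly instead of estimating it from queue length
and dequeue rate:
  struct stamped_pkt {
          ktime_t enqueue_time;
          /* packet data follows */
  };

  static inline void pkt_stamp_enqueue(struct stamped_pkt *p)
  {
          p->enqueue_time = ktime_get();  /* stamp at enqueue */
  }

  static inline s64 pkt_queue_delay_ns(const struct stamped_pkt *p)
  {
          /* queue delay = dequeue time - enqueue time */
          return ktime_to_ns(ktime_sub(ktime_get(), p->enqueue_time));
  }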
Signed-off-by: Gautam Ramakrishnan <gautamramk@gmail.com>
Signed-off-by: Leslie Monis <lesliemonis@gmail.com>
Signed-off-by: Mohit P. Tahiliani <tahiliani@nitk.edu.in>
Acked-by: Dave Taht <dave.taht@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Krzysztof Kozlowski [Wed, 20 Nov 2019 13:41:20 +0000 (21:41 +0800)]
isdn: Fix Kconfig indentation
Adjust indentation from spaces to tab (+optional two spaces) as in
coding style with command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Krzysztof Kozlowski [Wed, 20 Nov 2019 13:40:44 +0000 (21:40 +0800)]
nfc: Fix Kconfig indentation
Adjust indentation from spaces to tab (+optional two spaces) as in
coding style with command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Nov 2019 20:05:24 +0000 (12:05 -0800)]
Merge branch 'cxgb4-add-TC-MATCHALL-classifier-offload'
Rahul Lakkireddy says:
====================
cxgb4: add TC-MATCHALL classifier offload
This series of patches add support to offload TC-MATCHALL classifier
to hardware to classify all outgoing and incoming traffic on the
underlying port. Only 1 egress and 1 ingress rule each can be
offloaded on the underlying port.
Patch 1 adds support for TC-MATCHALL classifier offload on the egress
side. TC-POLICE is the only action that can be offloaded on the egress
side and is used to rate limit all outgoing traffic to specified max
rate.
Patch 2 adds logic to reject the current rule offload if its priority
conflicts with existing rules in the TCAM.
Patch 3 adds support for TC-MATCHALL classifier offload on the ingress
side. The same set of actions supported by existing TC-FLOWER
classifier offload can be applied on all the incoming traffic.
v5:
- Fixed commit message and comment to include comparison for equal
priority in patch 2.
v4:
- Removed check in patch 1 to reject police offload if prio is not 1.
- Moved TC_SETUP_BLOCK code to separate function in patch 1.
- Added logic to ensure the prio passed by TC doesn't conflict with
other rules in TCAM in patch 2.
- Higher index has lower priority than lower index in TCAM. So, rework
cxgb4_get_free_ftid() to search free index from end of TCAM in
descending order in patch 2.
- Added check to ensure the matchall rule's prio doesn't conflict with
other rules in TCAM in patch 3.
- Added logic to fill default mask for VIID, if none has been
provided, to prevent conflict with duplicate VIID rules in patch 3.
- Used existing variables in private structure to fill VIID info,
instead of extracting the info manually in patch 3.
v3:
- Added check in patch 1 to reject police offload if prio is not 1.
- Assign block_shared variable only for TC_SETUP_BLOCK in patch 1.
v2:
- Added check to reject flow block sharing for policers in patch 1.
- Removed logic to fetch free index from end of TCAM in patch 2.
Must maintain the same ordering as in kernel.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Wed, 20 Nov 2019 00:16:08 +0000 (05:46 +0530)]
cxgb4: add TC-MATCHALL classifier ingress offload
Add TC-MATCHALL classifier ingress offload support. The same actions
supported by existing TC-FLOWER offload can be applied to all incoming
traffic on the underlying interface.
Ensure the rule priority doesn't conflict with existing rules in the
TCAM. Only 1 ingress matchall rule can be active at a time on the
underlying interface.
v5:
- No change.
v4:
- Added check to ensure the matchall rule's prio doesn't conflict with
other rules in TCAM.
- Added logic to fill default mask for VIID, if none has been
provided, to prevent conflict with duplicate VIID rules.
- Used existing variables in private structure to fill VIID info,
instead of extracting the info manually.
v3:
- No change.
v2:
- Removed logic to fetch free index from end of TCAM. Must maintain
same ordering as in kernel.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Wed, 20 Nov 2019 00:16:07 +0000 (05:46 +0530)]
cxgb4: check rule prio conflicts before offload
Only offload rule if it satisfies both of the following conditions:
1. The immediate previous rule has priority <= current rule's priority.
2. The immediate next rule has priority >= current rule's priority.
Also rework free entry fetch logic to search from end of TCAM, instead
of beginning, because higher indices have lower priority than lower
indices. This is similar to how TC auto generates priority values.
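In other words (an illustrative helper, not the actual cxgb4 code):
  /* A rule at priority 'prio' may be offloaded only if it fits between
   * its prospective TCAM neighbours.
   */
  static bool rule_prio_fits(u32 prev_prio, u32 prio, u32 next_prio)
  {
          return prev_prio <= prio && prio <= next_prio;
  }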
v5:
- Fixed commit message and comment to include comparison for equal
priority.
v4:
- Patch added in this version.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Wed, 20 Nov 2019 00:16:06 +0000 (05:46 +0530)]
cxgb4: add TC-MATCHALL classifier egress offload
Add TC-MATCHALL classifier offload with TC-POLICE action applied for
all outgoing traffic on the underlying interface. Split flow block
offload to support both egress and ingress classification.
For example, to rate limit all outgoing traffic to 1 Gbps:
$ tc qdisc add dev enp2s0f4 clsact
$ tc filter add dev enp2s0f4 egress matchall skip_sw \
action police rate 1Gbit burst 8Kbit
Note that skip_sw is important. Otherwise, both stack and hardware
will end up doing policing. Policing can't be shared across flow
blocks. Only 1 egress matchall rule can be active at a time on the
underlying interface.
v5:
- No change.
v4:
- Removed check to reject police offload if prio is not 1.
- Moved TC_SETUP_BLOCK code to separate function.
v3:
- Added check to reject police offload if prio is not 1.
- Assign block_shared variable only for TC_SETUP_BLOCK.
v2:
- Added check to reject flow block sharing for policers.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Nov 2019 19:47:36 +0000 (11:47 -0800)]
Merge branch 'page_pool-API-for-numa-node-change-handling'
Saeed Mahameed says:
====================
page_pool: API for numa node change handling
This series extends the page pool API to allow page pool consumers to update
the page pool numa node on the fly. This is required since on some systems
rx ring irqs can migrate between numa nodes, due to the irq balancer or
user-defined scripts; the current page pool has no way to know of such migration
and will keep allocating and holding on to pages from the wrong numa node,
which is bad for consumer performance.
1) Add API to update numa node id of the page pool
Consumers will call this API to update the page pool numa node id.
2) Don't recycle non-reusable pages:
Page pool will check upon page return whether a page is suitable for
recycling or not.
2.1) when it belongs to a different numa node.
2.2) when it was allocated under memory pressure.
3) mlx5 will use the new API to update page pool numa id on demand.
The series is joint work between me and Jonathan. We tested it, and it
proved itself worthy: it avoids page allocator bottlenecks and improves
packet rate and cpu utilization significantly for the scenarios
described above.
Performance testing:
XDP drop/tx rate and TCP single/multi stream, on mlx5 driver
while migrating rx ring irq from close to far numa:
mlx5 internal page cache was locally disabled to get pure page pool
results.
CPU: Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz
NIC: Mellanox Technologies MT27700 Family [ConnectX-4] (100G)
XDP Drop/TX single core:
NUMA | XDP | Before | After
---------------------------------------
Close | Drop | 11 Mpps | 10.9 Mpps
Far | Drop | 4.4 Mpps | 5.8 Mpps
Close | TX | 6.5 Mpps | 6.5 Mpps
Far | TX | 3.5 Mpps | 4 Mpps
The improvement is about 30% in drop packet rate and 15% in tx packet rate
for the numa-far test.
No degradation for numa close tests.
TCP single/multi cpu/stream:
NUMA | #cpu | Before | After
--------------------------------------
Close | 1 | 18 Gbps | 18 Gbps
Far | 1 | 15 Gbps | 18 Gbps
Close | 12 | 80 Gbps | 80 Gbps
Far | 12 | 68 Gbps | 80 Gbps
In all test cases we see improvement for the far numa case, and no
impact on the close numa case.
==================
Performance analysis and conclusions by Jesper [1]:
Impact on XDP drop on x86_64 is inconclusive and shows only a 0.3459ns
slow-down, which is below the measurement accuracy of the system.
v2->v3:
- Rebase on top of latest net-next and Jesper's page pool object
release patchset [2]
- No code changes
- Performance analysis by Jesper added to the cover letter.
v1->v2:
- Drop last patch, as requested by Ilias and Jesper.
- Fix documentation's performance numbers order.
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool04_inflight_changes.org#performance-notes
[2] https://patchwork.ozlabs.org/cover/1192098/
====================
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed [Wed, 20 Nov 2019 00:15:21 +0000 (00:15 +0000)]
net/mlx5e: Rx, Update page pool numa node when changed
Once every napi poll cycle, check if the numa node is different from the
page pool's numa id, and if so update it using page_pool_update_nid().
Alternatively, we could have registered an irq affinity change handler,
but page_pool_update_nid() must be called from napi context anyways, so
the handler won't actually help.
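A minimal sketch of that per-poll check (illustrative helper, not the exact
mlx5e code):
  static void rq_page_pool_nid_check(struct page_pool *pool)
  {
          int nid = numa_mem_id();        /* node of the CPU running napi */

          if (pool->p.nid != nid)
                  page_pool_update_nid(pool, nid);
  }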
Performance testing:
XDP drop/tx rate and TCP single/multi stream, on mlx5 driver
while migrating rx ring irq from close to far numa:
mlx5 internal page cache was locally disabled to get pure page pool
results.
CPU: Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz
NIC: Mellanox Technologies MT27700 Family [ConnectX-4] (100G)
XDP Drop/TX single core:
NUMA | XDP | Before | After
---------------------------------------
Close | Drop | 11 Mpps | 10.9 Mpps
Far | Drop | 4.4 Mpps | 5.8 Mpps
Close | TX | 6.5 Mpps | 6.5 Mpps
Far | TX | 3.5 Mpps | 4 Mpps
The improvement is about 30% in drop packet rate and 15% in tx packet rate
for the numa-far test.
No degradation for numa close tests.
TCP single/multi cpu/stream:
NUMA | #cpu | Before | After
--------------------------------------
Close | 1 | 18 Gbps | 18 Gbps
Far | 1 | 15 Gbps | 18 Gbps
Close | 12 | 80 Gbps | 80 Gbps
Far | 12 | 68 Gbps | 80 Gbps
In all test cases we see improvement for the far numa case, and no
impact on the close numa case.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed [Wed, 20 Nov 2019 00:15:19 +0000 (00:15 +0000)]
page_pool: Don't recycle non-reusable pages
A page is NOT reusable when at least one of the following is true:
1) allocated when system was under some pressure. (page_is_pfmemalloc)
2) belongs to a different NUMA node than pool->p.nid.
To update pool->p.nid users should call page_pool_update_nid().
Holding on to such pages in the pool will hurt the consumer performance
when the pool migrates to a different numa node.
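The recycle-time check described above boils down to roughly the following
(a sketch; the exact helper in the patch may differ slightly):
  static bool pool_page_reusable(struct page_pool *pool, struct page *page)
  {
          /* Reject pfmemalloc pages and pages from a foreign NUMA node. */
          return !page_is_pfmemalloc(page) &&
                 page_to_nid(page) == pool->p.nid;
  }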
Performance testing:
XDP drop/tx rate and TCP single/multi stream, on mlx5 driver
while migrating rx ring irq from close to far numa:
mlx5 internal page cache was locally disabled to get pure page pool
results.
CPU: Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz
NIC: Mellanox Technologies MT27700 Family [ConnectX-4] (100G)
XDP Drop/TX single core:
NUMA | XDP | Before | After
---------------------------------------
Close | Drop | 11 Mpps | 10.9 Mpps
Far | Drop | 4.4 Mpps | 5.8 Mpps
Close | TX | 6.5 Mpps | 6.5 Mpps
Far | TX | 3.5 Mpps | 4 Mpps
The improvement is about 30% in drop packet rate and 15% in tx packet rate
for the numa-far test.
No degradation for numa close tests.
TCP single/multi cpu/stream:
NUMA | #cpu | Before | After
--------------------------------------
Close | 1 | 18 Gbps | 18 Gbps
Far | 1 | 15 Gbps | 18 Gbps
Close | 12 | 80 Gbps | 80 Gbps
Far | 12 | 68 Gbps | 80 Gbps
In all test cases we see improvement for the far numa case, and no
impact on the close numa case.
The impact of adding a per-page check is negligible and shows no
performance degradation whatsoever. Functionality-wise it also seems more
correct and more robust for the page pool to verify when pages should be
recycled, since the page pool can't guarantee where pages are coming from.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Saeed Mahameed [Wed, 20 Nov 2019 00:15:17 +0000 (00:15 +0000)]
page_pool: Add API to update numa node
Add page_pool_update_nid() to be called by page pool consumers when they
detect numa node changes.
It will update the page pool nid value to start allocating from the new
effective numa node.
This mitigates the page pool allocating pages from the wrong numa node (the
one where the pool was originally allocated) and holding on to pages that
belong to a different numa node, which causes performance degradation.
For pages that are already being consumed and could be returned to the pool
by the consumer, the next patch adds a per-page check to avoid recycling them
back into the pool, returning them to the page allocator instead.
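Conceptually, the new call just retargets the pool's allocation node
(simplified sketch; the real implementation may add tracing or sanity checks):
  void page_pool_update_nid(struct page_pool *pool, int new_nid)
  {
          /* Subsequent page allocations will be attempted on new_nid. */
          pool->p.nid = new_nid;
  }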
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Nov 2019 19:25:24 +0000 (11:25 -0800)]
Merge branch 'cpsw-switchdev'
Grygorii Strashko says:
====================
net: ethernet: ti: introduce new cpsw switchdev based driver
Thank you All for review of v6.
There are no significant changes in this version, just fixed comments to v6.
--- v6
The major change in this version is the DT bindings conversion to json-schema,
plus fixes for other comments to v5. Also added a patch to clean up the ALE on
init and netif restart.
--- v5
The major part of the work done in this iteration is rebasing on top of net-next
with the XDP series from Ivan Khoronzhuk [3], and enabling XDP support in the new
CPSW switchdev driver (it was a little bit painful ;(). There are mostly no
functional changes in the new CPSW driver, just a few fixes, a sync with the old
driver and cleanups/optimizations. So, I've kept the rest of the cover letter unchanged.
---
This series originally based on work [1][2] done by
Ilias Apalodimas <ilias.apalodimas@linaro.org>.
This is the RFC v5 which introduces the new CPSW switchdev based driver, which
operates in dual-emac mode by default, thus working as 2 individual
network interfaces. The switch mode can be enabled by configuring the devlink
driver parameter "switch_mode" to 1/true:
devlink dev param set platform/48484000.switch \
name switch_mode value 1 cmode runtime
This can be done regardless of the state of the Port netdev devices (UP/DOWN), but
the Port netdev devices have to be UP before joining the bridge, to avoid
overwriting the bridge configuration, as the CPSW switch driver completely reloads
its configuration when the first Port changes its state to UP.
When both interfaces have joined the bridge, the CPSW switch driver will start
marking packets with the offload_fwd_mark flag unless "ale_bypass=0".
All configuration is implemented via switchdev API.
The previous solution of tracking when both Ports joined the bridge
(from netdevice_notifier) proved to be incorrect: changing the CPSW switch
driver mode requires a cleanup of the ALE table and CPSW settings, which happens
while the second Port is joining the bridge, and as a result the configuration
loaded by the bridge for the first Port became corrupted.
The introduction of the new CPSW switchdev based driver (cpsw_new.c) is split
into two parts: Part 1 - basic dual-emac driver; Part 2 - switchdev support.
Such an approach has simplified code development and testing a lot. And, I hope,
it will help with better review.
patches #1 - 5: preparation patches which also move common code to cpsw_priv.c
patches #6 - 9: Introduce TI CPSW switch driver based on switchdev and new
DT bindings
patch #10: new CPSW switchdev driver documentation
patch #11: adds DT nodes for new CPSW switchdev driver added for DRA7 SoC
patch #12: adds DT nodes for new cpsw switchdev driver for am571x-idk board
patch #13: enables build of TI CPSW driver
Most of the contents of the previous cover-letter have been added in
new driver documentation, so please refer to that for configuration,
testing and future work.
These patches can be found at (branch contains some additional patches required
for testing on top of net-next):
https://github.com/grygoriyS/linux.git
branch: lkml-5.4-switch-tbd-v7
changes in v7:
- patch 2: added check for devm_kmalloc_array() return value
- patch 6: fixed comments
changes in v6: https://lkml.org/lkml/2019/11/9/108
- DT bindings converted to json-schema
- netdev initialization is split on creation and registration.
The netdev registration now happens at the end of the probe.
- reworked cpsw_set_pauseparam() to use PHYlib APIs.
- other comments for v5 fixed
v5: https://patchwork.kernel.org/cover/11208785/
- rebase on top of net-next with XDP series from Ivan Khoronzhuk [3],
and enable XDP support in the new CPSW switchdev driver
cpsw driver (tested XDP_DROP only)
- sync with old cpsw driver
- implement comments from Ivan Khoronzhuk and Rob Herring
- fixed "NETDEV WATCHDOG: .." warning after interface after interface UP/DOWN,
missed TX wake in cpsw_adjust_link()
v4: https://patchwork.kernel.org/cover/11010523/
- finished split of common CPSW code
- added devlink support
- changed CPSW mode configuration approach: from netdevice_notifier to devlink
parameter
- refactor and clean up ALE changes which allows to modify VLANs/MDBs entries
- added missed support for port QDISC_CBS and QDISC_MQPRIO
- the CPSW is split on two parts: basic dual_mac driver and switchdev support
- added missed callback .ndo_get_port_parent_id()
- reworked ingress frames marking in switch mode (offload_fwd_mark)
- applied comments from Andrew Lunn
v3: https://lwn.net/Articles/786677/
Changes in v3:
- a lot of work done to properly split common code between the legacy and
switchdev CPSW drivers and clean up the code
- CPSW switchdev interface updated to the current LKML switchdev interface
- actually new CPSW switchdev based driver introduced
- optimized dual_mac mode in new driver. Main change is that in promiscuous
mode, P0_UNI_FLOOD (both ports) is enabled in addition to ALLMULTI (current
port) instead of ALE_BYPASS. So, a port in non-promiscuous mode will keep
the ability to do mcast and vlan filtering.
- changed bridge join sequence: now switch mode will be enabled only when
both ports have joined the bridge. CPSW will be switched to dual_mac mode if any
port leaves the bridge. The ALE table is completely cleared and then refilled
while switching to switch mode - this simplifies the code a lot, but introduces
some limitations on the bridge setup sequence:
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000
ip link set dev br0 type bridge vlan_filtering 0 <- disable
echo 0 > /sys/class/net/br0/bridge/default_vlan
ip link set dev sw0p1 up <- add ports
ip link set dev sw0p2 up
ip link set dev sw0p1 master br0
ip link set dev sw0p2 master br0
echo 1 > /sys/class/net/br0/bridge/default_vlan <- enable
ip link set dev br0 type bridge vlan_filtering 1
bridge vlan add dev br0 vid 1 pvid untagged self
- STP tested with vlan_filtering 1/0. To make STP work I've had to set
NO_SA_UPDATE for all slave ports (see comment in code). It also required
statically registering the STP mcast address {0x01, 0x80, 0xc2, 0x0, 0x0, 0x0};
- allowed building both the TI_CPSW and TI_CPSW_SWITCHDEV drivers
- PTP can be enabled on both ports in dual_mac mode
[1] https://patchwork.ozlabs.org/cover/929367/
[2] https://patches.linaro.org/cover/136709/
[3] https://patchwork.kernel.org/cover/11035813/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:25 +0000 (00:19 +0200)]
arm: omap2plus_defconfig: enable new cpsw switchdev driver
Add the CONFIG_TI_CPSW_SWITCHDEV option to enable the new cpsw switchdev driver.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:24 +0000 (00:19 +0200)]
ARM: dts: am571x-idk: enable for new cpsw switch dev driver
Add DT nodes for new cpsw switchdev driver for am571x-idk board for now to
enable testing of the new solution.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:23 +0000 (00:19 +0200)]
ARM: dts: dra7: add dt nodes for new cpsw switch dev driver
Add DT nodes for new cpsw switch dev driver.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Tue, 19 Nov 2019 22:19:22 +0000 (00:19 +0200)]
Documentation: networking: add cpsw switchdev based driver documentation
A new cpsw driver based on switchdev was added. Add documentation about
basic configuration and future features.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:21 +0000 (00:19 +0200)]
phy: ti: phy-gmii-sel: dependency from ti cpsw-switchdev driver
Add a dependency on TI_CPSW_SWITCHDEV.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Tue, 19 Nov 2019 22:19:20 +0000 (00:19 +0200)]
net: ethernet: ti: introduce cpsw switchdev based driver part 2 - switch
The CPSW switchdev based driver operates in dual-emac mode by
default, thus working as 2 individual network interfaces. The switch mode
can be enabled by configuring the devlink driver parameter "switch_mode" to 1:
devlink dev param set platform/48484000.switch \
name switch_mode value 1 cmode runtime
This can be done regardless of the state of the Port netdevs (UP/DOWN), but
the Port netdev devices have to be UP before joining the bridge, to avoid
overwriting the bridge configuration, as the CPSW switch driver completely
reloads its configuration when the first Port changes its state to UP.
When both interfaces have joined the bridge, the CPSW switch driver will start
marking packets with the offload_fwd_mark flag unless "ale_bypass=0".
All configuration is implemented via switchdev API and notifiers.
Supported:
- SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS
- SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS: BR_MCAST_FLOOD
- SWITCHDEV_ATTR_ID_PORT_STP_STATE
- SWITCHDEV_OBJ_ID_PORT_VLAN
- SWITCHDEV_OBJ_ID_PORT_MDB
- SWITCHDEV_OBJ_ID_HOST_MDB
Hence CPSW switchdev driver supports:
- FDB offloading
- MDB offloading
- VLAN filtering and offloading
- STP
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Tue, 19 Nov 2019 22:19:19 +0000 (00:19 +0200)]
net: ethernet: ti: introduce cpsw switchdev based driver part 1 - dual-emac
Part 1:
Introduce basic CPSW dual_mac driver (cpsw_new.c) which is operating in
dual-emac mode by default, thus working as 2 individual network interfaces.
Main differences from legacy CPSW driver are:
- optimized promiscuous mode: The P0_UNI_FLOOD (both ports) is enabled in
addition to ALLMULTI (current port) instead of ALE_BYPASS. So, Ports in
promiscuous mode will keep the ability to do mcast and vlan filtering, which
provides significant benefits when ports are joined to the same bridge,
but without enabling "switch" mode, or to different bridges.
- learning disabled on ports, as it does not make much sense for
segregated ports - no forwarding in HW.
- enabled basic support for devlink.
devlink dev show
platform/48484000.switch
devlink dev param show
platform/48484000.switch:
name ale_bypass type driver-specific
values:
cmode runtime value false
- "ale_bypass" devlink driver parameter allows to enable
ALE_CONTROL(4).BYPASS mode for debug purposes.
- updated DT bindings.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:18 +0000 (00:19 +0200)]
dt-bindings: net: ti: add new cpsw switch driver bindings
Add bindings for the new TI CPSW switch driver. Compared to the legacy
bindings (net/cpsw.txt):
- ports definition follows DSA bindings (net/dsa/dsa.txt) and ports can be
marked as "disabled" if not physically wired.
- all deprecated properties dropped;
- all legacy properties dropped which represent constant HW capabilities
(cpdma_channels, ale_entries, bd_ram_size, mac_control, slaves,
active_slave)
- TI CPTS DT properties are reused as is, but grouped in "cpts" sub-node
- TI Davinci MDIO DT bindings are reused as is, because Davinci MDIO is
reused.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:17 +0000 (00:19 +0200)]
net: ethernet: ti: cpsw: move set of common functions in cpsw_priv
As a preparatory patch to add support for a switchdev based cpsw driver,
move common functions to cpsw-priv.c so that they can be used across both
drivers.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:16 +0000 (00:19 +0200)]
net: ethernet: ti: cpsw: resolve build deps of cpsw drivers
The following patches introduce a new CPSW switchdev driver which shares common
code with the legacy CPSW driver. This introduces a build dependency between the
CPSW switchdev and CPSW legacy drivers related to for_each_slave() and
cpsw_slave_index() - both can be compiled, but only one of them will be
functional depending on Kconfig settings, due to differences in Slave
Port index calculation.
To fix this, make for_each_slave() local (it's now used only by the legacy CPSW
driver) and convert cpsw_slave_index() to a function pointer which is
assigned in probe. The driver to probe is defined by DT.
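A rough sketch of the idea (names and signature are approximations for
illustration, not the exact cpsw code):
  /* Common code calls through a pointer instead of a compile-time helper. */
  int (*cpsw_slave_index)(struct cpsw_common *cpsw, struct cpsw_priv *priv);

  static int cpsw_slave_index_priv(struct cpsw_common *cpsw,
                                   struct cpsw_priv *priv)
  {
          return priv->emac_port;         /* per-driver port numbering */
  }

  static int cpsw_probe(struct platform_device *pdev)
  {
          /* The DT compatible string selected this driver, so install its
           * slave-index helper for the shared code to use.
           */
          cpsw_slave_index = cpsw_slave_index_priv;
          return 0;                       /* rest of probe omitted */
  }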
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Tue, 19 Nov 2019 22:19:15 +0000 (00:19 +0200)]
net: ethernet: ti: ale: modify vlan/mdb api for switchdev
A subsequent patch introduces switchdev functionality, so modify the
ALE engine VLAN/MDB API:
- cpsw_ale_del_mcast(): update so it will remove only selected ports from
mcast port_mask or delete whole mcast record if !port_mask
- cpsw_ale_del_vlan(): update so it will remove only selected ports from
all VLAN record's masks or delete whole VLAN record if !port_mask
- add cpsw_ale_vlan_add_modify() to add or modify existing VLAN record's
masks
- add cpsw_ale_set_unreg_mcast() for enabling unreg mcast on port VLANs
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:14 +0000 (00:19 +0200)]
net: ethernet: ti: cpsw: allow untagged traffic on host port
Currently, untagged vlan traffic is not supported on the Host P0 port. This patch
adds, in the ALE context, a bitmap of VLANs for which the Host P0 port bit is set
in the Force Untagged Packet Egress bitmask of the VLAN ALE entries, and adds a
corresponding check in the VLAN encapsulation header parsing function
cpsw_rx_vlan_encap().
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Grygorii Strashko [Tue, 19 Nov 2019 22:19:13 +0000 (00:19 +0200)]
net: ethernet: ti: ale: clean ale tbl on init and intf restart
Clean CPSW ALE on init and intf restart (up/down) to avoid reading obsolete
or garbage entries from ALE table.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Nov 2019 19:21:35 +0000 (11:21 -0800)]
Merge branch 'nf_tables_offload-vlan-matching-support'
Pablo Neira Ayuso says:
====================
nf_tables_offload: vlan matching support
The following patchset contains Netfilter support for vlan matching
offloads:
1) Constify nft_reg_load() as a preparation patch.
2) Restrict rule matching to ingress interface type ARPHRD_ETHER.
3) Add new vlan_tci field to flow_dissector_key_vlan structure,
to allow to set up vlan_id, vlan_dei and vlan_priority in one go.
4) C-VLAN matching support.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Neira Ayuso [Tue, 19 Nov 2019 22:05:55 +0000 (23:05 +0100)]
netfilter: nft_payload: add C-VLAN offload support
Match on h_vlan_encapsulated_proto and set up protocol dependency. Check
for protocol dependency before accessing the tci field. Allow matching
on the encapsulated ethertype too.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Neira Ayuso [Tue, 19 Nov 2019 22:05:54 +0000 (23:05 +0100)]
netfilter: nft_payload: add VLAN offload support
Match on ethertype and set up protocol dependency. Check for protocol
dependency before accessing the tci field. Allow matching on the
encapsulated ethertype too.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Neira Ayuso [Tue, 19 Nov 2019 22:05:53 +0000 (23:05 +0100)]
netfilter: nf_tables_offload: allow ethernet interface type only
Hardware offload support at this stage assumes an ethernet device in
place. The flow dissector provides the intermediate representation to
express this selector, so extend it to allow storing the interface
type. Flower does not use this, so skb_flow_dissect_meta() is not
extended to match on this new field.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Neira Ayuso [Tue, 19 Nov 2019 22:05:52 +0000 (23:05 +0100)]
netfilter: nf_tables: constify nft_reg_load{8, 16, 64}()
This patch constifies the pointer to source register data that is passed
as an input parameter.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Tue, 19 Nov 2019 09:39:11 +0000 (17:39 +0800)]
lwtunnel: add support for multiple geneve opts
geneve RFC (draft-ietf-nvo3-geneve-14) allows a geneve packet to carry
multiple geneve opts, so it's necessary for lwtunnel to support adding
multiple geneve opts in one lwtunnel route. But vxlan and erspan opts
are still limited to a single option each.
With this patch, iproute2 could make it like:
# ip r a 1.1.1.0/24 encap ip id 1 geneve_opts 0:0:12121212,1:2:12121212 \
dst 10.1.0.2 dev geneve1
# ip r a 1.1.1.0/24 encap ip id 1 vxlan_opts 456 \
dst 10.1.0.2 dev erspan1
# ip r a 1.1.1.0/24 encap ip id 1 erspan_opts 1:123:0:0 \
dst 10.1.0.2 dev erspan1
Which are pretty much like cls_flower and act_tunnel_key.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 19 Nov 2019 07:30:56 +0000 (13:00 +0530)]
cxgb4: remove unneeded semicolon for switch block
Semicolon is not required at the end of switch block. So, remove it.
Addresses coccinelle warning:
drivers/net/ethernet/chelsio/cxgb4/sge.c:2260:2-3: Unneeded semicolon
Fixes: 4846d5330daf ("cxgb4: add Tx and Rx path for ETHOFLD traffic")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 18 Nov 2019 18:16:57 +0000 (20:16 +0200)]
net: dsa: felix: Fix CPU port assignment when not last port
On the NXP LS1028A, there are 2 Ethernet links between the Felix switch
and the ENETC:
- eno2 <-> swp4, at 2.5G
- eno3 <-> swp5, at 1G
Only one of the above Ethernet port pairs can act as a DSA link for
tagging.
When adding initial support for the driver, it was tested only on the 1G
eno3 <-> swp5 interface, due to the necessity of using PHYLIB initially
(which treats fixed-link interfaces as emulated C22 PHYs, so it doesn't
support fixed-link speeds higher than 1G).
After making PHYLINK work, it appears that swp4 still can't act as CPU
port. So it looks like ocelot_set_cpu_port was being called for swp4,
but then it was called again for swp5, overwriting the CPU port assigned
in the DT.
It appears that when you call dsa_upstream_port for a port that is not
defined in the device tree (such as swp5 when using swp4 as CPU port),
its dp->cpu_dp pointer is not initialized by dsa_tree_setup_default_cpu,
and this trips up the following condition in dsa_upstream_port:
if (!cpu_dp)
return port;
So the moral of the story is: don't call dsa_upstream_port for a port
that is not defined in the device tree, and therefore its dsa_port
structure is not completely initialized (ds->num_ports is still 6).
Fixes: 56051948773e ("net: dsa: ocelot: add driver for Felix switch family")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Mon, 18 Nov 2019 11:48:35 +0000 (11:48 +0000)]
net: phy: dp83869: fix return of uninitialized variable ret
In the case where the call to phy_interface_is_rgmii returns zero
the variable ret is left uninitialized and this is returned at
the end of the function dp83869_configure_rgmii. Fix this by
returning 0 instead of the uninitialized value in ret.
Addresses-Coverity: ("Uninitialized scalar variable")
Fixes: 01db923e8377 ("net: phy: dp83869: Add TI dp83869 phy")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 18 Nov 2019 10:10:12 +0000 (18:10 +0800)]
lwtunnel: change to use nla_put_u8 for LWTUNNEL_IP_OPT_ERSPAN_VER
LWTUNNEL_IP_OPT_ERSPAN_VER is u8 type, and nla_put_u8 should have
been used instead of nla_put_u32(). This is a copy-paste error.
Fixes: b0a21810bd5e ("lwtunnel: add options setting and dumping for erspan")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Nov 2019 01:13:29 +0000 (17:13 -0800)]
Merge branch 'bnxt_en-Updates'
Michael Chan says:
====================
bnxt_en: Updates.
This series has the firmware interface update that changes the aRFS/ntuple
interface on 57500 chips. The 2nd patch adds a counter and improves
the hardware buffer error handling on the 57500 chips. The rest of the
series is mainly enhancements to error recovery and firmware reset.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Pavan Chebbi [Mon, 18 Nov 2019 08:56:43 +0000 (03:56 -0500)]
bnxt_en: Abort waiting for firmware response if there is no heartbeat.
This is especially beneficial during the NVRAM related firmware
commands that have longer timeouts. If the BNXT_STATE_FW_FATAL_COND
flag gets set while waiting for firmware response, abort and return
error.
Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:42 +0000 (03:56 -0500)]
bnxt_en: Add a warning message for driver initiated reset
During loss of heartbeat, log this warning message.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:41 +0000 (03:56 -0500)]
bnxt_en: Return proper error code for non-existent NVM variable
For NVM params that are not supported in the current NVM
configuration, return the error as -EOPNOTSUPP.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:40 +0000 (03:56 -0500)]
bnxt_en: Report health status update after reset is done
Report health status update to devlink health reporter, once
reset is completed.
Cc: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:39 +0000 (03:56 -0500)]
bnxt_en: Set MASTER flag during driver registration.
The Linux driver is capable of being the master function to handle
resets, so we set the flag to let firmware know. Some other
drivers, such as DPDK, are not capable and will not set the flag.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:38 +0000 (03:56 -0500)]
bnxt_en: Extend ETHTOOL_RESET to hot reset driver.
If firmware supports hot reset, extend ETHTOOL_RESET to hot-reset the
driver, which does not require a driver reload after
ETHTOOL_RESET. The driver will go through the same coordinated
reset sequence as a firmware initiated fatal/non-fatal reset.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 18 Nov 2019 08:56:37 +0000 (03:56 -0500)]
bnxt_en: Increase firmware response timeout for coredump commands.
Use the larger HWRM_COREDUMP_TIMEOUT value for coredump related
data response from the firmware. These commands take longer than
normal commands.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Mon, 18 Nov 2019 08:56:36 +0000 (03:56 -0500)]
bnxt_en: Improve RX buffer error handling.
When hardware reports RX buffer errors, the latest 57500 chips do not
require reset. The packet is discarded by the hardware and the
ring will continue to operate.
Also, add an rx_buf_errors counter for this type of error. It can help
the user to identify if the aggregation ring is too small.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Mon, 18 Nov 2019 08:56:35 +0000 (03:56 -0500)]
bnxt_en: Update firmware interface spec to 1.10.1.12.
The aRFS ring table interface has changed for the 57500 chips. Updating
it accordingly so it will work with the latest production firmware.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Nov 2019 01:11:54 +0000 (17:11 -0800)]
Merge branch 'selftests-Add-ethtool-and-scale-tests'
Ido Schimmel says:
====================
selftests: Add ethtool and scale tests
This patch set adds generic ethtool tests and a mlxsw-specific router
scale test for Spectrum-2.
Patches #1-#2 from Danielle add the router scale test for Spectrum-2. It
re-uses the same test as Spectrum-1, but it is invoked with a different
scale, according to what is queried from devlink-resource.
Patches #3-#5 from Amit are a re-work of the ethtool tests that were
posted in the past [1]. Patches #3-#4 add the necessary library
routines, whereas patch #5 adds the test itself. The test checks both
good and bad flows with autoneg on and off. The test plan it detailed in
the commit message.
Last time Andrew and Florian (copied) provided very useful feedback that
is incorporated in this set. Namely:
* Parse the value of the different link modes from
/usr/include/linux/ethtool.h
* Differentiate between supported and advertised speeds and use the
latter in autoneg tests
* Make the test generic and move it to net/forwarding/ instead of being
mlxsw-specific
[1] https://patchwork.ozlabs.org/cover/1112903/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Mon, 18 Nov 2019 07:50:02 +0000 (09:50 +0200)]
selftests: forwarding: Add speed and auto-negotiation test
Check configurations and packet transfer with different variations
of autoneg and speed.
Test plan:
1. Test force of same speed with autoneg off
2. Test force of different speeds with autoneg off (should fail)
3. One side is autoneg on and other side sets force of common speeds
4. One side is autoneg on and other side only advertises a subset of the
common speeds (one speed of the subset)
5. One side is autoneg on and other side only advertises a subset of the
common speeds. Check that highest speed is negotiated
6. Test autoneg on, but each side advertises different speeds (should
fail)
Signed-off-by: Amit Cohen <amitc@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Mon, 18 Nov 2019 07:50:01 +0000 (09:50 +0200)]
selftests: forwarding: lib.sh: Add wait for dev with timeout
Add a function that waits for a device, with a maximum number of iterations.
It makes it possible to limit the waiting and prevent an infinite loop.
This will be used by the subsequent patch which will set two ports to
different speeds in order to make sure they cannot negotiate a link.
Waiting for all the setup is limited to 10 minutes for each device.
Signed-off-by: Amit Cohen <amitc@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Mon, 18 Nov 2019 07:50:00 +0000 (09:50 +0200)]
selftests: forwarding: Add ethtool_lib.sh
Functions:
1. speeds_arr_get
The function returns an array of speed values from
/usr/include/linux/ethtool.h. The array looks as follows:
[10baseT/Half] = 0,
[10baseT/Full] = 1,
...
2. ethtool_set:
params: cmd
The function runs ethtool by cmd (ethtool -s cmd) and checks if
there was an error in configuration
3. dev_speeds_get:
params: dev, with_mode (0 or 1), adver (0 or 1)
return value: Array of supported/Advertised link modes
with/without mode
* Example 1:
speeds_get swp1 0 0
return: 1000 10000 40000
* Example 2:
speeds_get swp1 1 1
return: 1000baseKX/Full 10000baseKR/Full 40000baseCR4/Full
4. common_speeds_get:
params: dev1, dev2, with_mode (0 or 1), adver (0 or 1)
return value: Array of common speeds of dev1 and dev2
* Example:
common_speeds_get swp1 swp2 0 0
return: 1000 10000
Assuming that swp1 supports 1000 10000 40000 and swp2 supports
1000 10000
Signed-off-by: Amit Cohen <amitc@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Mon, 18 Nov 2019 07:49:59 +0000 (09:49 +0200)]
selftests: mlxsw: Check devlink device before running test
The scale test for Spectrum-2 should only be invoked for Spectrum-2.
Skip the test otherwise.
Signed-off-by: Danielle Ratson <danieller@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Mon, 18 Nov 2019 07:49:58 +0000 (09:49 +0200)]
selftests: mlxsw: Add router scale test for Spectrum-2
Same as for Spectrum-1, test the ability to add the maximum number of
routes possible to the switch.
Invoke the test from the 'resource_scale' wrapper script.
Signed-off-by: Danielle Ratson <danieller@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Nov 2019 01:03:18 +0000 (17:03 -0800)]
Merge branch 'page_pool-followup-changes-to-restore-tracepoint-features'
Jesper Dangaard says:
====================
page_pool: followup changes to restore tracepoint features
This patchset is a followup to Jonathan's patch that does not release the
pool until inflight == 0. That changed page_pool to be responsible for
its own delayed destruction instead of relying on the xdp memory model.
As the page_pool maintainer, I'm promoting the use of tracepoints to
troubleshoot and help driver developers verify correctness when
converting a driver to use page_pool. The role of xdp:mem_disconnect
has changed, which broke my bpftrace tools for shutdown verification.
With these changes, the same capabilities are regained.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer [Sat, 16 Nov 2019 11:22:48 +0000 (12:22 +0100)]
page_pool: extend tracepoint to also include the page PFN
The MM tracepoint for page free (called kmem:mm_page_free) doesn't provide
the page pointer directly; instead it provides the PFN (Page Frame Number).
This is annoying when writing a page_pool leak detector in BPF.
This patch changes the page_pool tracepoints to also provide the PFN.
The page pointer is still provided to allow other kinds of
troubleshooting from BPF.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer [Sat, 16 Nov 2019 11:22:43 +0000 (12:22 +0100)]
page_pool: add destroy attempts counter and rename tracepoint
When Jonathan changed the page_pool to become responsible for its
own shutdown via a deferred work queue, the disconnect_cnt
counter was removed from the xdp memory model tracepoint.
This patch changes the page_pool_inflight tracepoint name to
page_pool_release, because it reflects the new responsibility
better. And it reintroduces a counter that reflects the number of
times page_pool_release has been tried.
The counter is also used by the code to only empty the alloc
cache once. With a stuck work queue running every second and the
counter being 64-bit, it will overrun in approx 584 billion
years. For comparison, the Earth's life expectancy is 7.5 billion
years, before the Sun will engulf, and destroy, the Earth.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer [Sat, 16 Nov 2019 11:22:38 +0000 (12:22 +0100)]
xdp: remove memory poison on free for struct xdp_mem_allocator
When looking at the details I realised that the memory poison in
__xdp_mem_allocator_rcu_free doesn't make sense. This is because the
SLUB allocator uses the first 16 bytes (on 64 bit) for its freelist,
which overlaps with the members in struct xdp_mem_allocator that were
updated. Thus, SLUB already does the "poisoning" for us.
I still believe that poisoning memory makes sense in other cases.
The kernel has gained different use-after-free detection mechanisms, but
enabling those comes with a huge overhead. Experience shows that
debugging facilities can change the timing so much that a race
condition will not be provoked when they are enabled. Thus, I'm still in
favour of poisoning memory where it makes sense.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Nov 2019 20:08:37 +0000 (20:08 +0000)]
net: phy: avoid matching all-ones clause 45 PHY IDs
We currently match clause 45 PHYs using any ID read from a MMD marked
as present in the "Devices in package" registers 5 and 6. However,
this is incorrect. 45.2 says:
"The definition of the term package is vendor specific and could be
a chip, module, or other similar entity."
so a package could be more or less than the whole PHY - a PHY could be
made up of several modules instantiated onto a single chip such as the
Marvell 88x3310, or some of the MMDs could be disabled according to
chip configuration, such as the Broadcom 84881.
In the case of Broadcom 84881, the "Devices in package" registers
contain 0xc000009b, meaning that there is a PHYXS present in the
package, but all registers in MMD 4 return 0xffff. This leads to our
matching code incorrectly binding this PHY to one of our generic PHY
drivers.
This patch changes the way we determine whether to attempt to match a
MMD identifier, or use it to request a module - if the identifier is
all-ones, then we skip over it. When reading the identifiers, we
initialise phydev->c45_ids.device_ids to all-ones, only reading the
device ID if the "Devices in package" registers indicates we should.
This avoids the generic drivers incorrectly matching on a PHY ID of
0xffffffff.
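The resulting rule is roughly (an illustrative sketch, not the exact phylib code):
  static bool c45_mmd_id_usable(u32 devs_in_package, int mmd, u32 phy_id)
  {
          if (!(devs_in_package & BIT(mmd)))
                  return false;           /* MMD not listed in the package */

          /* All-ones means the MMD's ID registers are not implemented. */
          return phy_id != 0xffffffff;
  }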
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Nov 2019 00:56:13 +0000 (16:56 -0800)]
Merge branch 'Add-support-for-SFPs-behind-PHYs'
Russell King says:
====================
Add support for SFPs behind PHYs
This series adds partial support for SFP cages connected to PHYs,
specifically optical SFPs.
We add core infrastructure to phylib for this, and arrange for
minimal code in the PHY driver - currently, this is code to verify
that the module is one that we can support for Marvell 10G PHYs.
v2: add yaml binding patch
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Nov 2019 19:56:56 +0000 (19:56 +0000)]
net: phy: marvell10g: add SFP+ support
Add support for SFP+ cages to the Marvell 10G PHY driver. This is
slightly complicated by the way phylib works in that we need to use
a multi-step process to attach the SFP bus, and we also need to track
the phylink state machine to know when the module's transmit disable
signal should change state.
With appropriate DT changes, this allows the SFP+ cages on the
Macchiatobin platform to be functional.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Nov 2019 19:56:51 +0000 (19:56 +0000)]
net: phy: add core phylib sfp support
Add core phylib helpers for supporting SFP sockets on PHYs. This provides
a mechanism to inform the SFP layer about PHY up/down events, and also to
unregister the SFP bus when the PHY is going away.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Nov 2019 19:56:46 +0000 (19:56 +0000)]
dt-bindings: net: add ethernet controller and phy sfp property
Document the missing sfp property for ethernet controllers (which
has existed for some time) which is being extended to ethernet PHYs.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Rob Herring <robh@kernel.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Nov 2019 00:43:05 +0000 (16:43 -0800)]
Merge git://git./linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for net-next:
1) Wildcard support for the net,iface set from Kristian Evensen.
2) Offload support for matching on the input interface.
3) Simplify matching on vlan header fields.
4) Add nft_payload_rebuild_vlan_hdr() function to rebuild the vlan
header from the vlan sk_buff metadata.
5) Pass extack to nft_flow_cls_offload_setup().
6) Add C-VLAN matching support.
7) Use time64_t in xt_time to fix y2038 overflow, from Arnd Bergmann.
8) Use time_t in nft_meta to fix y2038 overflow, also from Arnd.
9) Add flow_action_entry_next() helper function to flowtable offload
infrastructure.
10) Add IPv6 support to the flowtable offload infrastructure.
11) Support for input interface matching from postrouting,
from Phil Sutter.
12) Missing check for ndo callback in flowtable offload, from wenxu.
13) Remove conntrack parameter from flow_offload_fill_dir(), from wenxu.
14) Do not pass flow_rule object for rule removal, cookie is sufficient
to achieve this.
15) Release flow_rule object in case of error from the offload commit
path.
16) Undo offload ruleset updates if transaction fails.
17) Check for error when binding flowtable callbacks, from wenxu.
18) Always unbind flowtable callbacks when unregistering hooks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 17 Nov 2019 02:47:31 +0000 (18:47 -0800)]
Merge git://git./linux/kernel/git/netdev/net
Lots of overlapping changes and parallel additions, stuff
like that.
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Sun, 17 Nov 2019 02:14:32 +0000 (18:14 -0800)]
Merge branch 'linus' of git://git./linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
"This reverts a number of changes to the khwrng thread which feeds the
kernel random number pool from hwrng drivers. They were trying to fix
issues with suspend-and-resume but ended up causing regressions"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
Revert "hwrng: core - Freeze khwrng thread during suspend"
Herbert Xu [Sun, 17 Nov 2019 00:48:17 +0000 (08:48 +0800)]
Revert "hwrng: core - Freeze khwrng thread during suspend"
This reverts commits 03a3bb7ae631 ("hwrng: core - Freeze khwrng thread
during suspend"), ff296293b353 ("random: Support freezable kthreads in
add_hwgenerator_randomness()") and 59b569480dc8 ("random: Use
wait_event_freezable() in add_hwgenerator_randomness()").
These patches introduced regressions and we need more time to
get them ready for mainline.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Linus Torvalds [Sun, 17 Nov 2019 00:10:59 +0000 (16:10 -0800)]
Merge branch 'x86-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
"Two fixes: disable unreliable HPET on Intel Coffe Lake platforms, and
fix a lockdep splat in the resctrl code"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/resctrl: Fix potential lockdep warning
x86/quirks: Disable HPET on Intel Coffe Lake platforms
Linus Torvalds [Sun, 17 Nov 2019 00:08:46 +0000 (16:08 -0800)]
Merge branch 'timers-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull timer fix from Ingo Molnar:
"Fix integer truncation bug in __do_adjtimex()"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
ntp/y2038: Remove incorrect time_t truncation
Linus Torvalds [Sat, 16 Nov 2019 23:56:01 +0000 (15:56 -0800)]
Merge branch 'perf-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull perf fixes from Ingo Molnar:
"Misc fixes: a handful of AUX event handling related fixes, a Sparse
fix and two ABI fixes"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Fix missing static inline on perf_cgroup_switch()
perf/core: Consistently fail fork on allocation failures
perf/aux: Disallow aux_output for kernel events
perf/core: Reattach a misplaced comment
perf/aux: Fix the aux_output group inheritance fix
perf/core: Disallow uncore-cgroup events
Linus Torvalds [Sat, 16 Nov 2019 23:52:00 +0000 (15:52 -0800)]
Merge git://git./linux/kernel/git/netdev/net
Pull networking fixes from David Miller:
1) Fix memory leak in xfrm_state code, from Steffen Klassert.
2) Fix races between devlink reload operations and device
setup/cleanup, from Jiri Pirko.
3) Null deref in NFC code, from Stephan Gerhold.
4) Refcount fixes in SMC, from Ursula Braun.
5) Memory leak in slcan open error paths, from Jouni Hogander.
6) Fix ETS bandwidth validation in hns3, from Yonglong Liu.
7) Info leak on short USB request answers in ax88172a driver, from
Oliver Neukum.
8) Release mem region properly in ep93xx_eth, from Chuhong Yuan.
9) PTP config timestamp flags validation, from Richard Cochran.
10) Dangling pointers after SKB data realloc in seg6, from Andrea Mayer.
11) Missing free_netdev() in gemini driver, from Chuhong Yuan.
* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (56 commits)
ipmr: Fix skb headroom in ipmr_get_route().
net: hns3: cleanup of stray struct hns3_link_mode_mapping
net/smc: fix fastopen for non-blocking connect()
rds: ib: update WR sizes when bringing up connection
net: gemini: add missed free_netdev
net: dsa: tag_8021q: Fix dsa_8021q_restore_pvid for an absent pvid
seg6: fix skb transport_header after decap_and_validate()
seg6: fix srh pointer in get_srh()
net: stmmac: Use the correct style for SPDX License Identifier
octeontx2-af: Use the correct style for SPDX License Identifier
ptp: Extend the test program to check the external time stamp flags.
mlx5: Reject requests to enable time stamping on both edges.
igb: Reject requests that fail to enable time stamping on both edges.
dp83640: Reject requests to enable time stamping on both edges.
mv88e6xxx: Reject requests to enable time stamping on both edges.
ptp: Introduce strict checking of external time stamp options.
renesas: reject unsupported external timestamp flags
mlx5: reject unsupported external timestamp flags
igb: reject unsupported external timestamp flags
dp83640: reject unsupported external timestamp flags
...
kbuild test robot [Fri, 15 Nov 2019 22:38:34 +0000 (06:38 +0800)]
mscc.c: fix semicolon.cocci warnings
drivers/net/phy/mscc.c:1683:3-4: Unneeded semicolon
Remove unneeded semicolon.
Generated by: scripts/coccinelle/misc/semicolon.cocci
Fixes: 75a1ccfe6c72 ("mscc.c: Add support for additional VSC PHYs")
CC: Bryan Whitehead <Bryan.Whitehead@microchip.com>
Signed-off-by: kbuild test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 16 Nov 2019 01:55:54 +0000 (17:55 -0800)]
selftests: net: avoid ptl lock contention in tcp_mmap
tcp_mmap is used as a reference program for TCP rx zerocopy,
so it is important to point out some potential issues.
If multiple threads are concurrently using getsockopt(...
TCP_ZEROCOPY_RECEIVE), there is a chance the low-level mm
functions compete on a shared ptl lock if the VMAs are arbitrarily placed.
Instead of letting the mm layer place the chunks back to back,
this patch enforces an alignment so that each thread uses
a different ptl lock.
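A minimal userspace sketch of the alignment trick, assuming a fixed
2 MiB alignment stands in for the Hugepagesize-based value the test
program uses:
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#define CHUNK_ALIGN (2UL << 20)         /* assumed: huge page size */
/* Map 'len' bytes so the returned address is CHUNK_ALIGN-aligned, making
 * each thread's receive area fall under a different page-table page and
 * therefore a different ptl spinlock. */
static void *map_aligned(size_t len)
{
        size_t span = len + CHUNK_ALIGN;
        uint8_t *raw = mmap(NULL, span, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        uintptr_t p;
        if (raw == MAP_FAILED)
                return NULL;
        p = ((uintptr_t)raw + CHUNK_ALIGN - 1) & ~(CHUNK_ALIGN - 1);
        /* trim the unused head and tail so only the aligned window remains */
        if (p > (uintptr_t)raw)
                munmap(raw, p - (uintptr_t)raw);
        if ((uintptr_t)raw + span > p + len)
                munmap((void *)(p + len), (uintptr_t)raw + span - (p + len));
        return (void *)p;
}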
Performance measured on a 100 Gbit NIC, with 8 tcp_mmap clients
launched at the same time:
$ for f in {1..8}; do ./tcp_mmap -H 2002:a05:6608:290:: & done
In the following run, we reproduce the old behavior by requesting no alignment:
$ tcp_mmap -sz -C $((128*1024)) -a 4096
received 32768 MB (100 % mmap'ed) in 9.69532 s, 28.3516 Gbit
cpu usage user:0.08634 sys:3.86258, 120.511 usec per MB, 171839 c-switches
received 32768 MB (100 % mmap'ed) in 25.4719 s, 10.7914 Gbit
cpu usage user:0.055268 sys:21.5633, 659.745 usec per MB, 9065 c-switches
received 32768 MB (100 % mmap'ed) in 28.5419 s, 9.63069 Gbit
cpu usage user:0.057401 sys:23.8761, 730.392 usec per MB, 14987 c-switches
received 32768 MB (100 % mmap'ed) in 28.655 s, 9.59268 Gbit
cpu usage user:0.059689 sys:23.8087, 728.406 usec per MB, 18509 c-switches
received 32768 MB (100 % mmap'ed) in 28.7808 s, 9.55074 Gbit
cpu usage user:0.066042 sys:23.4632, 718.056 usec per MB, 24702 c-switches
received 32768 MB (100 % mmap'ed) in 28.8259 s, 9.5358 Gbit
cpu usage user:0.056547 sys:23.6628, 723.858 usec per MB, 23518 c-switches
received 32768 MB (100 % mmap'ed) in 28.8808 s, 9.51767 Gbit
cpu usage user:0.059357 sys:23.8515, 729.703 usec per MB, 14691 c-switches
received 32768 MB (100 % mmap'ed) in 28.8879 s, 9.51534 Gbit
cpu usage user:0.047115 sys:23.7349, 725.769 usec per MB, 21773 c-switches
With the new behavior (automatic alignment based on Hugepagesize),
we can see the system overhead is dramatically reduced.
$ tcp_mmap -sz -C $((128*1024))
received 32768 MB (100 % mmap'ed) in 13.5339 s, 20.3103 Gbit
cpu usage user:0.122644 sys:3.4125, 107.884 usec per MB, 168567 c-switches
received 32768 MB (100 % mmap'ed) in 16.0335 s, 17.1439 Gbit
cpu usage user:0.132428 sys:3.55752, 112.608 usec per MB, 188557 c-switches
received 32768 MB (100 % mmap'ed) in 17.5506 s, 15.6621 Gbit
cpu usage user:0.155405 sys:3.24889, 103.891 usec per MB, 226652 c-switches
received 32768 MB (100 % mmap'ed) in 19.1924 s, 14.3222 Gbit
cpu usage user:0.135352 sys:3.35583, 106.542 usec per MB, 207404 c-switches
received 32768 MB (100 % mmap'ed) in 22.3649 s, 12.2906 Gbit
cpu usage user:0.142429 sys:3.53187, 112.131 usec per MB, 250225 c-switches
received 32768 MB (100 % mmap'ed) in 22.5336 s, 12.1986 Gbit
cpu usage user:0.140654 sys:3.61971, 114.757 usec per MB, 253754 c-switches
received 32768 MB (100 % mmap'ed) in 22.5483 s, 12.1906 Gbit
cpu usage user:0.134035 sys:3.55952, 112.718 usec per MB, 252997 c-switches
received 32768 MB (100 % mmap'ed) in 22.6442 s, 12.139 Gbit
cpu usage user:0.126173 sys:3.71251, 117.147 usec per MB, 253728 c-switches
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Arjun Roy <arjunroy@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 15 Nov 2019 21:38:25 +0000 (22:38 +0100)]
r8169: load firmware for RTL8168fp/RTL8117
Load Realtek-provided firmware for RTL8168fp/RTL8117. Unlike the
firmware for other chip versions which is for the PHY, firmware for
RTL8168fp/RTL8117 is for the MAC.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 15 Nov 2019 20:35:22 +0000 (21:35 +0100)]
r8169: improve conditional firmware loading for RTL8168d
Using constant MII_EXPANSION is misleading here because register 0x06
has a different meaning on page 0x0005. Here a proprietary PHY
parameter is read by writing the parameter id to register 0x05 on page
0x0005, followed by reading the parameter value from register 0x06.
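A hedged sketch of that access pattern (the page-select mechanics and
every helper/struct name below are placeholders, not the r8169
internals):
/* Read a proprietary PHY parameter: select page 0x0005, write the
 * parameter id to register 0x05, then read the value from register 0x06. */
static u16 example_read_phy_param(struct example_phy *phy, u16 param_id)
{
        example_select_page(phy, 0x0005);       /* placeholder page-select helper */
        example_phy_write(phy, 0x05, param_id); /* placeholder write helper */
        return example_phy_read(phy, 0x06);     /* placeholder read helper */
}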
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Nov 2019 20:05:45 +0000 (20:05 +0000)]
net: phylink: update to use phy_support_asym_pause()
Use phy_support_asym_pause() rather than open-coding it.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 15 Nov 2019 17:29:52 +0000 (18:29 +0100)]
ipmr: Fix skb headroom in ipmr_get_route().
In route.c, inet_rtm_getroute_build_skb() creates an skb with no
headroom. This skb is then used by inet_rtm_getroute() which may pass
it to rt_fill_info() and, from there, to ipmr_get_route(). The latter
might try to reuse this skb by cloning it and prepending an IPv4
header. But since the original skb has no headroom, skb_push() triggers
skb_under_panic():
skbuff: skb_under_panic: text:00000000ca46ad8a len:80 put:20 head:00000000cd28494e data:000000009366fd6b tail:0x3c end:0xec0 dev:veth0
------------[ cut here ]------------
kernel BUG at net/core/skbuff.c:108!
invalid opcode: 0000 [#1] SMP KASAN PTI
CPU: 6 PID: 587 Comm: ip Not tainted 5.4.0-rc6+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-2.fc30 04/01/2014
RIP: 0010:skb_panic+0xbf/0xd0
Code: 41 a2 ff 8b 4b 70 4c 8b 4d d0 48 c7 c7 20 76 f5 8b 44 8b 45 bc 48 8b 55 c0 48 8b 75 c8 41 54 41 57 41 56 41 55 e8 75 dc 7a ff <0f> 0b 0f 1f 44 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00
RSP: 0018:ffff888059ddf0b0 EFLAGS: 00010286
RAX: 0000000000000086 RBX: ffff888060a315c0 RCX: ffffffff8abe4822
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff88806c9a79cc
RBP: ffff888059ddf118 R08: ffffed100d9361b1 R09: ffffed100d9361b0
R10: ffff88805c68aee3 R11: ffffed100d9361b1 R12: ffff88805d218000
R13: ffff88805c689fec R14: 000000000000003c R15: 0000000000000ec0
FS:  00007f6af184b700(0000) GS:ffff88806c980000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffc8204a000 CR3: 0000000057b40006 CR4: 0000000000360ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
skb_push+0x7e/0x80
ipmr_get_route+0x459/0x6fa
rt_fill_info+0x692/0x9f0
inet_rtm_getroute+0xd26/0xf20
rtnetlink_rcv_msg+0x45d/0x630
netlink_rcv_skb+0x1a5/0x220
rtnetlink_rcv+0x15/0x20
netlink_unicast+0x305/0x3a0
netlink_sendmsg+0x575/0x730
sock_sendmsg+0xb5/0xc0
___sys_sendmsg+0x497/0x4f0
__sys_sendmsg+0xcb/0x150
__x64_sys_sendmsg+0x48/0x50
do_syscall_64+0xd2/0xac0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Actually the original skb used to have enough headroom, but the
skb_reserve() call was lost with the introduction of
inet_rtm_getroute_build_skb() by commit 404eb77ea766 ("ipv4: support
sport, dport and ip_proto in RTM_GETROUTE").
We could reserve some headroom again in inet_rtm_getroute_build_skb(),
but this function shouldn't be responsible for handling the special
case of ipmr_get_route(). Let's handle that directly in
ipmr_get_route() by calling skb_realloc_headroom() instead of
skb_clone().
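The shape of that fix, as a hedged sketch rather than the exact hunk:
/* In ipmr_get_route(): take a private copy with room for the IPv4
 * header that will be pushed, instead of a clone that shares the
 * original, headroom-less skb head. */
struct sk_buff *skb2 = skb_realloc_headroom(skb, sizeof(struct iphdr));
if (!skb2)
        return -ENOMEM;
skb_push(skb2, sizeof(struct iphdr));   /* now safe: headroom reserved */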
Fixes: 404eb77ea766 ("ipv4: support sport, dport and ip_proto in RTM_GETROUTE")
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Nov 2019 21:06:07 +0000 (13:06 -0800)]
Merge tag 'wireless-drivers-next-2019-11-15' of git://git./linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.5
Second set of patches for v5.5. Nothing special this time, smaller
features to various drivers and of course fixes all over.
Major changes:
iwlwifi
* update scan FW API
* bump the supported FW API version
* add debug dump collection on assert in WoWLAN
* enable adaptive dwell on P2P interfaces
ath10k
* request for PM_QOS_CPU_DMA_LATENCY to improve firmware initialisation time
qtnfmac
* add support for getting/setting transmit power
* handle MIC failure event from firmware
rtl8xxxu
* add support for Edimax EW-7611ULB
wil6210
* add SPDX license identifiers
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Salil Mehta [Fri, 15 Nov 2019 11:52:32 +0000 (11:52 +0000)]
net: hns3: cleanup of stray struct hns3_link_mode_mapping
This patch cleans up stray leftover code. It has no functional impact.
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun [Fri, 15 Nov 2019 11:39:30 +0000 (12:39 +0100)]
net/smc: fix fastopen for non-blocking connect()
FASTOPEN does not work with SMC sockets. Since SMC allows fallback to
native TCP during connection start, the FASTOPEN setsockopts trigger
this fallback if the SMC socket is still in state SMC_INIT.
But if a FASTOPEN setsockopt is called after a non-blocking connect(),
this is broken and fallback does not make sense.
This change complements commit cd2063604ea6 ("net/smc: avoid fallback
in case of non-blocking connect")
and fixes the syzbot reported problem "WARNING in smc_unhash_sk".
Reported-by: syzbot+8488cc4cf1c9e09b8b86@syzkaller.appspotmail.com
Fixes: e1bbdd570474 ("net/smc: reduce sock_put() for fallback sockets")
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matteo Croce [Fri, 15 Nov 2019 11:10:37 +0000 (12:10 +0100)]
bonding: symmetric ICMP transmit
A bonding with layer2+3 or layer3+4 hashing uses the IP addresses and the ports
to balance packets between slaves. On some network errors, we receive an ICMP
error packet from the remote host or from a router. If it is sent by a router,
the source IP can differ from that of the remote host. Additionally, the ICMP
protocol has no port numbers, so a layer3+4 bonding will compute a different
hash than for the original flow. These two conditions can let the packet go
through a different interface than the other packets of the same flow:
# tcpdump -qltnni veth0 |sed 's/^/0: /' &
# tcpdump -qltnni veth1 |sed 's/^/1: /' &
# hping3 -2 192.168.0.2 -p 9
0: IP 192.168.0.1.2251 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
1: IP 192.168.0.1.2252 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
1: IP 192.168.0.1.2253 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
0: IP 192.168.0.1.2254 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
An ICMP error packet contains the header of the packet that caused the
network error, so inspect it and match the flow against it, allowing the
ICMP to be sent via the same interface as the previous packets in the flow.
Move the IP and port dissection code into a generic function,
bond_flow_ip(), and if we are dissecting an ICMP error packet, call it
again with the offset adjusted to the embedded header.
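A simplified sketch of the idea (struct and helper names are
illustrative, not the bonding code; the real logic lives in the bonding
flow dissector):
#include <linux/jhash.h>
#include <linux/types.h>
/* Hash on the flow embedded in an ICMP error instead of the outer
 * header, so the error follows the same slave as the flow it reports. */
struct example_flow {
        u32 src, dst;
        u16 sport, dport;       /* 0 for the outer header of ICMP errors */
        u8  proto;
};
static u32 example_bond_hash(const struct example_flow *outer,
                             const struct example_flow *inner_or_null)
{
        const struct example_flow *f = inner_or_null ? inner_or_null : outer;
        /* layer3+4 style hash; the inner (original) flow keeps it symmetric */
        return jhash_3words(f->src ^ f->dst,
                            ((u32)f->sport << 16) | f->dport, f->proto, 0);
}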
# hping3 -2 192.168.0.2 -p 9
1: IP 192.168.0.1.1224 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
1: IP 192.168.0.1.1225 > 192.168.0.2.9: UDP, length 0
1: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
0: IP 192.168.0.1.1226 > 192.168.0.2.9: UDP, length 0
0: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
0: IP 192.168.0.1.1227 > 192.168.0.2.9: UDP, length 0
0: IP 192.168.0.2 > 192.168.0.1: ICMP 192.168.0.2 udp port 9 unreachable, length 36
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Horatiu Vultur [Fri, 15 Nov 2019 10:11:15 +0000 (11:11 +0100)]
net: mscc: ocelot: omit error check from of_get_phy_mode
Commit 0c65b2b90d13c ("net: of_get_phy_mode: Change API to solve
int/unit warnings") updated the of_get_phy_mode() declaration. It now
returns an error code, and in case the node doesn't contain the
'phy-mode' or 'phy-connection-type' property it returns -EINVAL and
sets the phy_interface_t to PHY_INTERFACE_MODE_NA.
Ocelot VSC7514 has 4 internal PHYs which use the PHY_INTERFACE_MODE_NA
interface. Because of_get_phy_mode() assigns PHY_INTERFACE_MODE_NA to
phy_mode when there is an error, there is no need to add the error check.
Updates for v2:
- drop the error check because of_get_phy_mode() already assigns
PHY_INTERFACE_MODE_NA to the phy_interface in case of error.
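For reference, a sketch of the calling convention after that API change
(caller-side only; the device-node variable name is an assumption, not
the ocelot hunk):
phy_interface_t phy_mode;
/* of_get_phy_mode() now returns an error code, but always assigns the
 * interface, falling back to PHY_INTERFACE_MODE_NA; callers that accept
 * NA can simply ignore the return value. */
of_get_phy_mode(portnp, &phy_mode);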
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Lobakin [Fri, 15 Nov 2019 09:11:35 +0000 (12:11 +0300)]
net: core: allow fast GRO for skbs with Ethernet header in head
Commit 78d3fd0b7de8 ("gro: Only use skb_gro_header for completely
non-linear packets") back in May'09 (v2.6.31-rc1) changed the original
condition '!skb_headlen(skb)' to 'skb->mac_header == skb->tail' in
gro_reset_offset(), saying: "Since the drivers that need this
optimisation all provide completely non-linear packets" (note that this
condition later became the current
'skb_mac_header(skb) == skb_tail_pointer(skb)' with commit
ced14f6804a9 ("net: Correct comparisons and calculations using
skb->tail and skb-transport_header") without any functional changes).
For now, we have the following rough statistics for v5.4-rc7:
1) napi_gro_frags: 14
2) napi_gro_receive with skb->head containing (most of) payload: 83
3) napi_gro_receive with skb->head containing all the headers: 20
4) napi_gro_receive with skb->head containing only Ethernet header: 2
With the current condition, fast GRO with the usage of
NAPI_GRO_CB(skb)->frag0 is available only in the [1] case.
Packets pushed by [2] and [3] go through the 'slow' path, but
it's not a problem for them as they already contain all the needed
headers in skb->head, so pskb_may_pull() only moves skb->data.
The layout of skbs in the fourth [4] case at the moment of
dev_gro_receive() is identical to skbs that have come through [1],
as napi_frags_skb() pulls Ethernet header to skb->head. The only
difference is that the mentioned condition is always false for them,
because skb_put() and friends irreversibly alter the tail pointer.
They also go through the 'slow' path, but now every single
pskb_may_pull() in every single .gro_receive() will call the *really*
slow __pskb_pull_tail() to pull headers to head. This significantly
decreases the overall performance for no visible reasons.
The only two users of method [4] are:
* drivers/staging/qlge
* drivers/net/wireless/iwlwifi (all three variants: dvm, mvm, mvm-mq)
Note that in the case of the wireless drivers we can't use [1]
(napi_gro_frags()), at least for now, and the mac80211 stack always
performs pushes and pulls anyway, so the performance hit is unavoidable.
At the moment of v2.6.31 the mentioned change was necessary (that's
why I don't add the "Fixes:" tag), but it became obsolete once
skb_gro_mac_header() was removed in commit a50e233c50db ("net-gro:
restore frag0 optimization"), so we can simply revert the condition in
gro_reset_offset() to allow skbs from [4] to go through the 'fast' path
just like in case [1].
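A hedged sketch of the resulting check in gro_reset_offset()
(simplified; not necessarily the exact hunk):
/* Use the frag0 fast path whenever the linear area holds no payload,
 * instead of requiring mac_header == tail. */
const skb_frag_t *frag0 = &skb_shinfo(skb)->frags[0];
NAPI_GRO_CB(skb)->frag0 = NULL;
NAPI_GRO_CB(skb)->frag0_len = 0;
if (!skb_headlen(skb) && skb_shinfo(skb)->nr_frags &&
    !PageHighMem(skb_frag_page(frag0))) {
        NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
        NAPI_GRO_CB(skb)->frag0_len = skb_frag_size(frag0);
}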
This was tested on a 600 MHz MIPS CPU and a custom driver and this
patch gave boosts up to 40 Mbps to method [4] in both directions
comparing to net-next, which made overall performance relatively
close to [1] (without it, [4] is the slowest).
v2:
- Add more references and explanations to commit message
- Fix some typos ibid
- No functional changes
Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dag Moxnes [Fri, 15 Nov 2019 08:56:01 +0000 (09:56 +0100)]
rds: ib: update WR sizes when bringing up connection
Currently WR sizes are updated from rds_ib_sysctl_max_send_wr and
rds_ib_sysctl_max_recv_wr when a connection is shut down. As a result,
a connection that is down while rds_ib_sysctl_max_send_wr or
rds_ib_sysctl_max_recv_wr is updated will not pick up the new sizes
when it comes back up.
Move resizing of WRs to rds_ib_setup_qp so that connections will be setup
with the most current WR sizes.
Signed-off-by: Dag Moxnes <dag.moxnes@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Chuhong Yuan [Fri, 15 Nov 2019 06:24:54 +0000 (14:24 +0800)]
net: gemini: add missed free_netdev
This driver forgets to free the allocated netdev in its remove path,
as is done in the probe failure path.
Add the missing free_netdev() call to fix it.
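The fix follows the usual pattern; a hedged sketch (names are generic,
not the gemini code verbatim):
#include <linux/netdevice.h>
#include <linux/platform_device.h>
static int example_remove(struct platform_device *pdev)
{
        struct net_device *dev = platform_get_drvdata(pdev);
        unregister_netdev(dev);
        /* ... release clocks, irqs and ioremaps as before ... */
        free_netdev(dev);       /* the previously missing release */
        return 0;
}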
Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Nov 2019 20:50:57 +0000 (12:50 -0800)]
Merge branch 'bnx2x-Remove-function-casts'
Kees Cook says:
====================
bnx2x: Remove function casts
In order to make the entire kernel usable under Clang's Control Flow
Integrity protections, function prototype casts need to be avoided
because they trip CFI checks at runtime (i.e. on a mismatch between
the caller's expected function prototype and the destination function's
prototype). Many of these cases can be found with -Wcast-function-type,
which flagged a bunch of needless (or at least confusing) function
casts in bnx2x. This series removes them all.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Fri, 15 Nov 2019 05:07:15 +0000 (21:07 -0800)]
bnx2x: Remove hw_reset_t function casts
All .hw_reset callbacks except bnx2x_84833_hw_reset_phy() use a
void return type. No callers of .hw_reset check a return value and
bnx2x_84833_hw_reset_phy() unconditionally returns 0. Remove all
hw_reset_t casts and fix the return type to void.
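To illustrate the class of problem (a generic example, not bnx2x code):
the prototype cast compiles, but an indirect call through it is exactly
what Clang's CFI rejects at runtime.
#include <linux/types.h>
struct example_phy { int id; };
typedef void (*hw_reset_fn)(struct example_phy *phy);
static u8 mismatched_reset(struct example_phy *phy) { return 0; }  /* wrong return type */
static void matching_reset(struct example_phy *phy) { }
static hw_reset_fn reset_bad  = (hw_reset_fn)mismatched_reset;  /* cast hides the mismatch */
static hw_reset_fn reset_good = matching_reset;                 /* no cast needed */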
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Fri, 15 Nov 2019 05:07:14 +0000 (21:07 -0800)]
bnx2x: Remove format_fw_ver_t function casts
The return values for format_fw_ver_t callbacks are supposed to be
"int", not "u8". Ultimately, the top-level caller doesn't actually check
the return value at all, but just clean this all up anyway and fix the
prototypes so that casts are no longer needed.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Fri, 15 Nov 2019 05:07:13 +0000 (21:07 -0800)]
bnx2x: Remove config_init_t function casts
No callers of .config_init check return values. Remove the casting and
change all callbacks to have the correct function prototype.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Fri, 15 Nov 2019 05:07:12 +0000 (21:07 -0800)]
bnx2x: Remove read_status_t function casts
The function casts for .read_status callbacks end up casting some int
return values to u8. This seems to be bug-prone (-EINVAL being returned
into something that appears to be true/false), but fixing the function
prototypes doesn't change the existing behavior. Fix the return values
to remove the casts.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Fri, 15 Nov 2019 05:07:11 +0000 (21:07 -0800)]
bnx2x: Drop redundant callback function casts
NULL is already "void *" so it will auto-cast in assignments and
initializers. Additionally, all the callbacks for .link_reset,
.config_loopback, .set_link_led, and .phy_specific_func are already
correct. No casting is needed for these, so remove them.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Fri, 15 Nov 2019 03:33:41 +0000 (03:33 +0000)]
enetc: update TSN Qbv PSPEED set according to adjust link speed
ENETC has a PSPEED register that indicates the hardware link speed, and
it needs to be kept up to date: the PSPEED field must match the port
speed for Qbv scheduling purposes. Otherwise a gate slot may not be
freed by the frame occupying the MAC when PSPEED and the PHY speed do
not match. So update PSPEED whenever the link is adjusted; this is
implemented via the adjust_link callback.
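A hedged sketch of the idea (the register constants and MMIO helper
below are placeholders, not the ENETC definitions):
#include <linux/netdevice.h>
#include <linux/phy.h>
static void example_adjust_link(struct net_device *ndev)
{
        struct phy_device *phydev = ndev->phydev;
        u32 pspeed;
        switch (phydev->speed) {
        case SPEED_2500: pspeed = EX_PSPEED_2500M; break;  /* placeholder values */
        case SPEED_1000: pspeed = EX_PSPEED_1000M; break;
        case SPEED_100:  pspeed = EX_PSPEED_100M;  break;
        default:         pspeed = EX_PSPEED_10M;   break;
        }
        /* keep the port's PSPEED in sync so Qbv gates are sized correctly */
        example_port_update_pspeed(ndev, pspeed);       /* placeholder MMIO helper */
}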
Signed-off-by: Po Liu <Po.Liu@nxp.com>
Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Fri, 15 Nov 2019 03:33:33 +0000 (03:33 +0000)]
enetc: Configure the Time-Aware Scheduler via tc-taprio offload
ENETC supports time-based egress shaping in hardware according to
IEEE 802.1Qbv. This patch implements the Qbv enablement via the
tc-taprio qdisc hardware offload method.
Also lift the cbdr writeback handling up a level, since the control BD
ring may write data back to the control BD ring.
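A hedged sketch of how the offload request typically reaches a driver
(handler names are illustrative; the taprio offload type and structure
come from the generic tc offload API):
#include <linux/netdevice.h>
#include <net/pkt_sched.h>
static int example_setup_tc(struct net_device *ndev, enum tc_setup_type type,
                            void *type_data)
{
        switch (type) {
        case TC_SETUP_QDISC_TAPRIO: {
                struct tc_taprio_qopt_offload *taprio = type_data;
                /* program base_time, cycle_time and the per-entry gate
                 * masks/intervals into the hardware gate control list */
                return example_program_qbv(ndev, taprio);       /* placeholder */
        }
        default:
                return -EOPNOTSUPP;
        }
}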
Signed-off-by: Po Liu <Po.Liu@nxp.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jonathan Lemon [Thu, 14 Nov 2019 22:13:00 +0000 (14:13 -0800)]
page_pool: do not release pool until inflight == 0.
The page pool keeps track of the number of pages in flight, and
it isn't safe to remove the pool until all pages are returned.
Disallow removing the pool until all pages are back, so the pool
is always available for page producers.
Make the page pool responsible for its own delayed destruction
instead of relying on XDP, so the page pool can be used without
the xdp memory model.
When all pages are returned, free the pool and notify xdp if the
pool is registered with the xdp memory system. Have the callback
perform a table walk since some drivers (cpsw) may share the pool
among multiple xdp_rxq_info.
Note that the increment of pages_state_release_cnt may result in
inflight == 0, resulting in the pool being released.
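The destruction rule reduces to a simple invariant; a hedged, simplified
sketch (not the in-tree accounting):
#include <linux/types.h>
/* The pool may be freed only once the producer has let go of it and
 * every outstanding page has been returned. */
static bool example_pool_can_free(u32 pages_allocated, u32 pages_returned,
                                  bool producer_released)
{
        s32 inflight = (s32)(pages_allocated - pages_returned);
        return producer_released && inflight == 0;
}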
Fixes: d956a048cd3f ("xdp: force mem allocator removal and periodic warning")
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 16 Nov 2019 20:26:49 +0000 (12:26 -0800)]
Merge branch 'smc-last-part-of-termination-improvements'
Karsten Graul says:
====================
last part of termination improvements
Patches 1 and 2 finish the set of termination patches, introducing
a reboot handler that terminates all link groups. Patch 3 adds an
rcu_barrier before the module is unloaded, and patch 4 is cleanup.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun [Sat, 16 Nov 2019 16:47:32 +0000 (17:47 +0100)]
net/smc: remove unused constant
Constant SMC_CLOSE_WAIT_LISTEN_CLCSOCK_TIME is defined, but has not
been used since commit 3d502067599f ("net/smc: simplify wait when
closing listen socket"). Remove it.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun [Sat, 16 Nov 2019 16:47:31 +0000 (17:47 +0100)]
net/smc: use rcu_barrier() on module unload
Add rcu_barrier() to make sure no RCU readers or callbacks are
pending when the module is unloaded.
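The pattern is the standard one for module teardown; a minimal sketch:
#include <linux/init.h>
#include <linux/rcupdate.h>
static void __exit example_exit(void)
{
        /* ... unregister everything that can still queue call_rcu() work ... */
        rcu_barrier();  /* wait for all pending RCU callbacks to complete */
}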
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun [Sat, 16 Nov 2019 16:47:30 +0000 (17:47 +0100)]
net/smc: guarantee removal of link groups in reboot
When rebooting it should be guaranteed all link groups are cleaned
up and freed.
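A hedged sketch of the mechanism (function names are illustrative, not
the smc symbols):
#include <linux/notifier.h>
#include <linux/reboot.h>
static int example_reboot_event(struct notifier_block *nb,
                                unsigned long action, void *data)
{
        example_terminate_all_link_groups();    /* placeholder teardown */
        return NOTIFY_DONE;
}
static struct notifier_block example_reboot_notifier = {
        .notifier_call = example_reboot_event,
};
/* at module init: register_reboot_notifier(&example_reboot_notifier); */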
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ursula Braun [Sat, 16 Nov 2019 16:47:29 +0000 (17:47 +0100)]
net/smc: introduce bookkeeping of SMCR link groups
If the smc module is unloaded, return control from the exit routine only
once all link groups are freed.
If an IB device is removed, return control from device removal only once
all link groups belonging to this device are freed.
Counters for the total number of SMCR link groups and for the total
number of SMCR links per IB device are introduced. smc module unloading
continues only if the total number of SMCR link groups is zero. IB device
removal continues only if the total number of SMCR links per IB device
has decreased to zero.
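A hedged sketch of the bookkeeping pattern (identifier names are
illustrative, not the smc symbols):
#include <linux/atomic.h>
#include <linux/wait.h>
static atomic_t example_lgr_cnt = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(example_lgrs_deleted);
static void example_lgr_created(void)
{
        atomic_inc(&example_lgr_cnt);
}
static void example_lgr_freed(void)
{
        if (atomic_dec_and_test(&example_lgr_cnt))
                wake_up(&example_lgrs_deleted);
}
static void example_module_exit(void)
{
        /* block unload until every link group is gone */
        wait_event(example_lgrs_deleted, !atomic_read(&example_lgr_cnt));
}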
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>