platform/kernel/linux-rpi.git
Romain Naour [Fri, 2 Sep 2022 10:16:08 +0000 (12:16 +0200)]
net: dsa: microchip: add KSZ9896 to KSZ9477 I2C driver

Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the
ksz9477 driver. The KSZ9896 supports both SPI (already supported) and I2C.

Signed-off-by: Romain Naour <romain.naour@skf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Romain Naour [Fri, 2 Sep 2022 10:16:07 +0000 (12:16 +0200)]
net: dsa: microchip: add KSZ9896 switch support

Add support for the KSZ9896 6-port Gigabit Ethernet Switch to the
ksz9477 driver.

Although the KSZ9896 has been listed in the device tree binding
documentation since commit a1c0ed24fe9b ("dt-bindings: net: dsa: document
additional Microchip KSZ9477 family switches"), the chip id
(0x00989600) is not recognized by ksz_switch_detect() and is rejected
by the driver.

The KSZ9896 is similar to the KSZ9897 but has only one configurable
MII/RMII/RGMII/GMII CPU port.

Signed-off-by: Romain Naour <romain.naour@skf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Tue, 6 Sep 2022 21:21:14 +0000 (23:21 +0200)]
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2022-09-05

The following pull-request contains BPF updates for your *net-next* tree.

We've added 106 non-merge commits during the last 18 day(s) which contain
a total of 159 files changed, 5225 insertions(+), 1358 deletions(-).

There are two small merge conflicts, resolve them as follows:

1) tools/testing/selftests/bpf/DENYLIST.s390x

  Commit 27e23836ce22 ("selftests/bpf: Add lru_bug to s390x deny list") in
  bpf tree was needed to get BPF CI green on s390x, but it conflicted with
  newly added tests on bpf-next. Resolve by adding both hunks, result:

  [...]
  lru_bug                                  # prog 'printk': failed to auto-attach: -524
  setget_sockopt                           # attach unexpected error: -524                                               (trampoline)
  cb_refs                                  # expected error message unexpected error: -524                               (trampoline)
  cgroup_hierarchical_stats                # JIT does not support calling kernel function                                (kfunc)
  htab_update                              # failed to attach: ERROR: strerror_r(-524)=22                                (trampoline)
  [...]

2) net/core/filter.c

  Commit 1227c1771dd2 ("net: Fix data-races around sysctl_[rw]mem_(max|default).")
  from net tree conflicts with commit 29003875bd5b ("bpf: Change bpf_setsockopt(SOL_SOCKET)
  to reuse sk_setsockopt()") from bpf-next tree. Take the code as it is from
  bpf-next tree, result:

  [...]
	if (getopt) {
		if (optname == SO_BINDTODEVICE)
			return -EINVAL;
		return sk_getsockopt(sk, SOL_SOCKET, optname,
				     KERNEL_SOCKPTR(optval),
				     KERNEL_SOCKPTR(optlen));
	}

	return sk_setsockopt(sk, SOL_SOCKET, optname,
			     KERNEL_SOCKPTR(optval), *optlen);
  [...]

The main changes are:

1) Add any-context BPF specific memory allocator which is useful in particular for BPF
   tracing, with the bonus of performance equal to full prealloc, from Alexei Starovoitov.

2) Big batch to remove duplicated code from bpf_{get,set}sockopt() helpers as an effort
   to reuse the existing core socket code as much as possible, from Martin KaFai Lau.

3) Extend BPF flow dissector for BPF programs to just augment the in-kernel dissector
   with custom logic. In other words, allow for partial replacement, from Shmulik Ladkani.

4) Add a new cgroup iterator to BPF with different traversal options, from Hao Luo.

5) Support for BPF to collect hierarchical cgroup statistics efficiently through BPF
   integration with the rstat framework, from Yosry Ahmed.

6) Support bpf_{g,s}et_retval() under more BPF cgroup hooks, from Stanislav Fomichev.

7) BPF hash table and local storage fixes under fully preemptible kernels, from Hou Tao.

8) Add various improvements to BPF selftests and libbpf for compilation with gcc BPF
   backend, from James Hilliard.

9) Fix verifier helper permissions and reference state management for synchronous
   callbacks, from Kumar Kartikeya Dwivedi.

10) Add support for BPF selftest's xskxceiver to also be used against real devices that
    support MAC loopback, from Maciej Fijalkowski.

11) Various fixes to the bpf-helpers(7) man page generation script, from Quentin Monnet.

12) Document BPF verifier's tnum_in(tnum_range(), ...) gotchas, from Shung-Hsi Yu.

13) Various minor misc improvements all over the place.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (106 commits)
  bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
  bpf: Remove usage of kmem_cache from bpf_mem_cache.
  bpf: Remove prealloc-only restriction for sleepable bpf programs.
  bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
  bpf: Remove tracing program restriction on map types
  bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
  bpf: Add percpu allocation support to bpf_mem_alloc.
  bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
  bpf: Adjust low/high watermarks in bpf_mem_cache
  bpf: Optimize call_rcu in non-preallocated hash map.
  bpf: Optimize element count in non-preallocated hash map.
  bpf: Relax the requirement to use preallocated hash maps in tracing progs.
  samples/bpf: Reduce syscall overhead in map_perf_test.
  selftests/bpf: Improve test coverage of test_maps
  bpf: Convert hash map to bpf_mem_alloc.
  bpf: Introduce any context BPF specific memory allocator.
  selftest/bpf: Add test for bpf_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()
  bpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()
  bpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()
  ...
====================

Link: https://lore.kernel.org/r/20220905161136.9150-1-daniel@iogearbox.net
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Sergei Antonov [Fri, 2 Sep 2022 12:50:37 +0000 (15:50 +0300)]
net: moxa: fix endianness-related issues from 'sparse'

Sparse checker found two endianness-related issues:

.../moxart_ether.c:34:15: warning: incorrect type in assignment (different base types)
.../moxart_ether.c:34:15:    expected unsigned int [usertype]
.../moxart_ether.c:34:15:    got restricted __le32 [usertype]

.../moxart_ether.c:39:16: warning: cast to restricted __le32

Fix them by using the __le32 type instead of u32.
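
A minimal sketch of the general pattern behind such a fix (the field and
variable names here are illustrative, not the actual moxart_ether.c code):

  __le32 txdes1;                      /* was: u32 txdes1; */

  txdes1 = cpu_to_le32(value);        /* CPU byte order -> LE on store */
  value  = le32_to_cpu(txdes1);       /* LE -> CPU byte order on load  */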

Signed-off-by: Sergei Antonov <saproj@gmail.com>
Link: https://lore.kernel.org/r/20220902125037.1480268-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Sergei Antonov [Fri, 2 Sep 2022 11:37:49 +0000 (14:37 +0300)]
net: ftmac100: fix endianness-related issues from 'sparse'

Sparse found a number of endianness-related issues of these kinds:

.../ftmac100.c:192:32: warning: restricted __le32 degrades to integer

.../ftmac100.c:208:23: warning: incorrect type in assignment (different base types)
.../ftmac100.c:208:23:    expected unsigned int rxdes0
.../ftmac100.c:208:23:    got restricted __le32 [usertype]

.../ftmac100.c:249:23: warning: invalid assignment: &=
.../ftmac100.c:249:23:    left side has type unsigned int
.../ftmac100.c:249:23:    right side has type restricted __le32

.../ftmac100.c:527:16: warning: cast to restricted __le32

Change the type of these fields from 'unsigned int' to '__le32' to fix them.

Signed-off-by: Sergei Antonov <saproj@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20220902113749.1408562-1-saproj@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Horatiu Vultur [Fri, 2 Sep 2022 11:15:48 +0000 (13:15 +0200)]
net: lan966x: Extend lan966x with RGMII support

Extend lan966x with RGMII support. The MAC supports all RGMII_* modes.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20220902111548.614525-1-horatiu.vultur@microchip.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Heiner Kallweit [Sat, 3 Sep 2022 11:15:13 +0000 (13:15 +0200)]
r8169: remove not needed net_ratelimit() check

We're not in a hot path and don't want to miss this message;
therefore, remove the net_ratelimit() check.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kees Cook [Sat, 3 Sep 2022 04:37:49 +0000 (21:37 -0700)]
netlink: Bounds-check struct nlmsgerr creation

In preparation for FORTIFY_SOURCE doing bounds-check on memcpy(),
switch from __nlmsg_put to nlmsg_put(), and explain the bounds check
for dealing with the memcpy() across a composite flexible array struct.
Avoids this future run-time warning:

  memcpy: detected field-spanning write (size 32) of single field "&errmsg->msg" at net/netlink/af_netlink.c:2447 (size 16)
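
As a rough sketch of that switch (a generic shape only, not the exact
af_netlink.c hunk):

  /* was: rep = __nlmsg_put(skb, portid, seq, NLMSG_ERROR, payload, flags);
   * nlmsg_put() additionally verifies that the skb has tailroom for the
   * whole message and returns NULL instead of writing past the end. */
  rep = nlmsg_put(skb, portid, seq, NLMSG_ERROR, payload, flags);
  if (!rep)
          return;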

Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@netfilter.org>
Cc: Florian Westphal <fw@strlen.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: syzbot <syzkaller@googlegroups.com>
Cc: netfilter-devel@vger.kernel.org
Cc: coreteam@netfilter.org
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220901071336.1418572-1-keescook@chromium.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 5 Sep 2022 13:33:07 +0000 (15:33 +0200)]
Merge branch 'bpf-allocator'

Alexei Starovoitov says:

====================
Introduce any context BPF specific memory allocator.

Tracing BPF programs can attach to kprobe and fentry. Hence they run in an
unknown context where calling plain kmalloc() might not be safe. The
allocator front-ends kmalloc() with a per-cpu cache of free elements and
refills this cache asynchronously from irq_work.

Major achievements enabled by bpf_mem_alloc:
- Dynamically allocated hash maps used to be 10 times slower than fully
  preallocated. With bpf_mem_alloc and subsequent optimizations the speed
  of dynamic maps is equal to full prealloc.
- Tracing bpf programs can use dynamically allocated hash maps, potentially
  saving lots of memory, since a typical hash map is sparsely populated.
- Sleepable bpf programs can use dynamically allocated hash maps.

Future work:
- Expose bpf_mem_alloc as uapi FD to be used in dynptr_alloc, kptr_alloc
- Convert lru map to bpf_mem_alloc
- Further clean up the htab code. Example: htab_use_raw_lock can be removed.

Changelog:

v5->v6:
- Debugged the reason for selftests/bpf/test_maps ooming in a small VM that BPF CI is using.
  Added patch 16 that optimizes the usage of rcu_barrier-s between bpf_mem_alloc and
  hash map. It drastically improved the speed of htab destruction.

v4->v5:
- Fixed missing migrate_disable in hash tab free path (Daniel)
- Replaced impossible "memory leak" with WARN_ON_ONCE (Martin)
- Dropped sysctl kernel.bpf_force_dyn_alloc patch (Daniel)
- Added Andrii's ack
- Added new patch 15 that removes kmem_cache usage from bpf_mem_alloc.
  It saves memory and speeds up map create/destroy operations,
  while maintaining hash map update/delete performance.

v3->v4:
- fix build issue due to missing local.h on 32-bit arch
- add Kumar's ack
- proposal for next steps from Delyan:
https://lore.kernel.org/bpf/d3f76b27f4e55ec9e400ae8dcaecbb702a4932e8.camel@fb.com/

v2->v3:
- Rewrote the free_list algorithm based on discussions with Kumar. Patch 1.
- Allowed sleepable bpf progs to use dynamically allocated maps. Patches 13 and 14.
- Added sysctl to force bpf_mem_alloc in hash map even if pre-alloc is
  requested to reduce memory consumption. Patch 15.
- Fix: zero-fill percpu allocation
- Single rcu_barrier at the end instead of each cpu during bpf_mem_alloc destruction

v2 thread:
https://lore.kernel.org/bpf/20220817210419.95560-1-alexei.starovoitov@gmail.com/

v1->v2:
- Moved unsafe direct call_rcu() from hash map into safe place inside bpf_mem_alloc. Patches 7 and 9.
- Optimized atomic_inc/dec in hash map with percpu_counter. Patch 6.
- Tuned watermarks per allocation size. Patch 8.
- Adopted this approach to per-cpu allocation. Patch 10.
- Fully converted hash map to bpf_mem_alloc. Patch 11.
- Removed tracing prog restriction on map types. Combination of all patches and final patch 12.

v1 thread:
https://lore.kernel.org/bpf/20220623003230.37497-1-alexei.starovoitov@gmail.com/

LWN article:
https://lwn.net/Articles/899274/
====================

Link: https://lore.kernel.org/r/
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:58 +0000 (14:10 -0700)]
bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.

User space might be creating and destroying a lot of hash maps. Synchronous
rcu_barrier-s in the destruction path of a hash map delay freeing of hash
buckets and other map memory and may cause an artificial OOM situation under
stress.
Optimize rcu_barrier usage between bpf hash map and bpf_mem_alloc:
- remove rcu_barrier from hash map, since htab doesn't use call_rcu
  directly and there are no callbacks to wait for.
- bpf_mem_alloc has call_rcu_in_progress flag that indicates pending callbacks.
  Use it to avoid barriers in fast path.
- When barriers are needed, copy bpf_mem_alloc into a temp structure
  and wait for rcu barrier-s in the worker to let the rest of
  hash map freeing proceed.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220902211058.60789-17-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:57 +0000 (14:10 -0700)]
bpf: Remove usage of kmem_cache from bpf_mem_cache.

For bpf_mem_cache based hash maps the following stress test:
for (i = 1; i <= 512; i <<= 1)
  for (j = 1; j <= 1 << 18; j <<= 1)
    fd = bpf_map_create(BPF_MAP_TYPE_HASH, NULL, i, j, 2, 0);
creates many kmem_cache-s that are not mergeable in debug kernels
and consume an unnecessary amount of memory.
It turned out that bpf_mem_cache's free_list logic does batching well,
so using kmem_cache for fixed size allocations doesn't bring any
performance benefit vs normal kmalloc.
Hence get rid of kmem_cache in bpf_mem_cache.
That saves memory and speeds up map create/destroy operations,
while maintaining hash map update/delete performance.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220902211058.60789-16-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:56 +0000 (14:10 -0700)]
bpf: Remove prealloc-only restriction for sleepable bpf programs.

Since the hash map is now converted to bpf_mem_alloc, and it waits for rcu and
rcu_tasks_trace GPs before freeing elements into global memory slabs, it's safe
to use dynamically allocated hash maps in sleepable bpf programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-15-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:55 +0000 (14:10 -0700)]
bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.

Use call_rcu_tasks_trace() to wait for sleepable progs to finish.
Then use call_rcu() to wait for normal progs to finish
and finally do free_one() on each element when freeing objects
into global memory pool.
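
A hedged sketch of the chained grace periods described above (the callback
names are assumptions based on this message, not verified memalloc.c code):

  static void __free_rcu(struct rcu_head *head)
  {
          struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu);

          /* both tasks-trace and regular RCU GPs have now elapsed */
          free_all_elems(c);      /* illustrative: free_one() per element */
  }

  static void __free_rcu_tasks_trace(struct rcu_head *head)
  {
          struct bpf_mem_cache *c = container_of(head, struct bpf_mem_cache, rcu);

          /* sleepable progs are done; now wait out normal progs */
          call_rcu(&c->rcu, __free_rcu);
  }

  /* kick off the two-stage wait when freeing into the global pool */
  call_rcu_tasks_trace(&c->rcu, __free_rcu_tasks_trace);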

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-14-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:54 +0000 (14:10 -0700)]
bpf: Remove tracing program restriction on map types

The hash map is now fully converted to bpf_mem_alloc. Its implementation is not
allocating synchronously and not calling call_rcu() directly. It's now safe to
use non-preallocated hash maps in all types of tracing programs including
BPF_PROG_TYPE_PERF_EVENT that runs out of NMI context.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-13-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:53 +0000 (14:10 -0700)]
bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.

Convert dynamic allocations in percpu hash map from alloc_percpu() to
bpf_mem_cache_alloc() from per-cpu bpf_mem_alloc. Since bpf_mem_alloc frees
objects after an RCU gp, the call_rcu() is removed. pcpu_init_value() now needs
to zero-fill per-cpu allocations, since dynamically allocated map elements are
now similar to full prealloc: alloc_percpu() is not called inline and the
elements are reused in the freelist.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-12-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:52 +0000 (14:10 -0700)]
bpf: Add percpu allocation support to bpf_mem_alloc.

Extend bpf_mem_alloc to cache free list of fixed size per-cpu allocations.
Once such cache is created bpf_mem_cache_alloc() will return per-cpu objects.
bpf_mem_cache_free() will free them back into global per-cpu pool after
observing RCU grace period.
The per-cpu flavor of bpf_mem_alloc is going to be used by per-cpu hash maps.

The free list cache consists of tuples { llist_node, per-cpu pointer }.
Unlike alloc_percpu(), which returns a per-cpu pointer,
bpf_mem_cache_alloc() returns a pointer to the per-cpu pointer, and
bpf_mem_cache_free() expects to receive it back.
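
A minimal usage sketch of that convention (assuming a per-cpu flavored
'ma'; the names and casts are illustrative):

  void **obj = bpf_mem_cache_alloc(&ma); /* pointer to the per-cpu pointer */
  u32 *val = this_cpu_ptr((u32 __percpu *)*obj); /* this cpu's copy */

  *val = 42;
  bpf_mem_cache_free(&ma, obj);           /* hand the same tuple back */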

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-11-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:51 +0000 (14:10 -0700)]
bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.

SLAB_TYPESAFE_BY_RCU makes kmem_caches non-mergeable and slows down
kmem_cache_destroy. All bpf_mem_cache are safe to share across different maps
and programs. Convert SLAB_TYPESAFE_BY_RCU to batched call_rcu. This change
solves the memory consumption issue, avoids kmem_cache_destroy latency and
keeps bpf hash map performance the same.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-10-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:50 +0000 (14:10 -0700)]
bpf: Adjust low/high watermarks in bpf_mem_cache

The same low/high watermarks for every bucket in bpf_mem_cache consume a
significant amount of memory. Preallocating 64 elements of 4096 bytes each in
the free list is not efficient. Make low/high watermarks and batching value
dependent on element size. This change brings significant memory savings.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-9-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:49 +0000 (14:10 -0700)]
bpf: Optimize call_rcu in non-preallocated hash map.

Doing call_rcu() a million times a second becomes a bottleneck.
Convert the non-preallocated hash map from call_rcu to SLAB_TYPESAFE_BY_RCU.
The rcu critical section is no longer observed for one htab element,
which makes the non-preallocated hash map behave just like a preallocated one.
The map elements are released back to kernel memory after an rcu grace
period is observed.
This improves 'map_perf_test 4' performance from 100k events per second
to 250k events per second.

bpf_mem_alloc + percpu_counter + typesafe_by_rcu provide a 10x performance
boost to the non-preallocated hash map, bringing it within a few % of the
preallocated map while consuming a fraction of the memory.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-8-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:48 +0000 (14:10 -0700)]
bpf: Optimize element count in non-preallocated hash map.

The atomic_inc/dec might cause extreme cache line bouncing when multiple cpus
access the same bpf map. Based on the specified max_entries for the hash map,
calculate when percpu_counter becomes faster than atomic_t and use it for such
maps. For example samples/bpf/map_perf_test is using a hash map with max_entries
1000. On a system with 16 cpus the 'map_perf_test 4' shows 14k events per
second using atomic_t. On a system with 15 cpus it shows 100k events per second
using percpu. map_perf_test is an extreme case where all cpus collide on
atomic_t, which causes extreme cache bouncing. Note that the slow path of
percpu_counter is 5k events per second vs 14k for atomic, so the heuristic is
necessary. See the comment in the code for why the heuristic is based on
num_online_cpus().
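
A hedged sketch of that heuristic (the exact threshold and field names in
kernel/bpf/hashtab.c may differ; treat this as an assumption):

  /* percpu_counter only wins once enough cpus contend on the counter;
   * for small maps its slow path would dominate instead */
  if (attr->max_entries / 2 > num_online_cpus() * PERCPU_COUNTER_BATCH)
          htab->use_percpu_counter = true;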

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-7-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:47 +0000 (14:10 -0700)]
bpf: Relax the requirement to use preallocated hash maps in tracing progs.

Since the bpf hash map was converted to use bpf_mem_alloc, it is safe to use
from tracing programs and in RT kernels.
But per-cpu hash map is still using dynamic allocation for per-cpu map
values, hence keep the warning for this map type.
In the future alloc_percpu_gfp can be front-end-ed with bpf_mem_cache
and this restriction will be completely lifted.
perf_event (NMI) bpf programs have to use preallocated hash maps,
because free_htab_elem() is using call_rcu which might crash if re-entered.

Sleepable bpf programs have to use preallocated hash maps, because the
lifetime of the map elements is not protected by rcu_read_lock/unlock.
This restriction can be lifted in the future as well.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-6-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:46 +0000 (14:10 -0700)]
samples/bpf: Reduce syscall overhead in map_perf_test.

Make map_perf_test for preallocated and non-preallocated hash map
spend more time inside bpf program to focus performance analysis
on the speed of update/lookup/delete operations performed by bpf program.

It makes 'perf report' of bpf_mem_alloc look like:
 11.76%  map_perf_test    [k] _raw_spin_lock_irqsave
 11.26%  map_perf_test    [k] htab_map_update_elem
  9.70%  map_perf_test    [k] _raw_spin_lock
  9.47%  map_perf_test    [k] htab_map_delete_elem
  8.57%  map_perf_test    [k] memcpy_erms
  5.58%  map_perf_test    [k] alloc_htab_elem
  4.09%  map_perf_test    [k] __htab_map_lookup_elem
  3.44%  map_perf_test    [k] syscall_exit_to_user_mode
  3.13%  map_perf_test    [k] lookup_nulls_elem_raw
  3.05%  map_perf_test    [k] migrate_enable
  3.04%  map_perf_test    [k] memcmp
  2.67%  map_perf_test    [k] unit_free
  2.39%  map_perf_test    [k] lookup_elem_raw

Reduce default iteration count as well to make 'map_perf_test' quick enough
even on debug kernels.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-5-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:45 +0000 (14:10 -0700)]
selftests/bpf: Improve test coverage of test_maps

Make test_maps more stressful with more parallelism in
update/delete/lookup/walk including different value sizes.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-4-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:44 +0000 (14:10 -0700)]
bpf: Convert hash map to bpf_mem_alloc.

Convert bpf hash map to use bpf memory allocator.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-3-alexei.starovoitov@gmail.com
Alexei Starovoitov [Fri, 2 Sep 2022 21:10:43 +0000 (14:10 -0700)]
bpf: Introduce any context BPF specific memory allocator.

Tracing BPF programs can attach to kprobe and fentry. Hence they
run in an unknown context where calling plain kmalloc() might not be safe.

Front-end kmalloc() with a minimal per-cpu cache of free elements.
Refill this cache asynchronously from irq_work.

BPF programs always run with migration disabled.
It's safe to allocate from the cache of the current cpu with irqs disabled.
Freeing is always done into the bucket of the current cpu as well.
irq_work trims extra free elements from buckets with kfree
and refills them with kmalloc, so the global kmalloc logic takes care
of freeing objects allocated by one cpu and freed on another.

struct bpf_mem_alloc supports two modes:
- When size != 0 create kmem_cache and bpf_mem_cache for each cpu.
  This is typical bpf hash map use case when all elements have equal size.
- When size == 0 allocate 11 bpf_mem_cache-s for each cpu, then rely on
  kmalloc/kfree. Max allocation size is 4096 in this case.
  This is bpf_dynptr and bpf_kptr use case.

bpf_mem_alloc/bpf_mem_free are bpf specific 'wrappers' of kmalloc/kfree.
bpf_mem_cache_alloc/bpf_mem_cache_free are 'wrappers' of kmem_cache_alloc/kmem_cache_free.

The allocators are NMI-safe from bpf programs only. They are not NMI-safe in general.
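
A hedged usage sketch of that API as it stands at the end of this series
(the third init argument comes from the per-cpu patch above; treat the
exact signatures as assumptions):

  struct bpf_mem_alloc ma;
  void *el;

  bpf_mem_alloc_init(&ma, 64, false); /* size != 0: fixed-size mode      */
  el = bpf_mem_cache_alloc(&ma);      /* safe in any context             */
  bpf_mem_cache_free(&ma, el);        /* back onto the per-cpu free list */
  bpf_mem_alloc_destroy(&ma);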

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-2-alexei.starovoitov@gmail.com
Sean Anderson [Fri, 2 Sep 2022 22:02:39 +0000 (18:02 -0400)]
net: phy: Add 1000BASE-KX interface mode

Add 1000BASE-KX interface mode. This is 1G backplane Ethernet as described
in clause 70. Clause 73 autonegotiation is mandatory, and only full duplex
operation is supported.

Although at the PMA level this interface mode is identical to
1000BASE-X, it uses a different form of in-band autonegotiation. This
justifies a separate interface mode, since the interface mode (along
with the MLO_AN_* autonegotiation mode) sets the type of autonegotiation
which will be used on a link. This results in more than just electrical
differences between the link modes.

With regard to 1000BASE-X, 1000BASE-KX holds a similar position to
SGMII: same signaling, but different autonegotiation. PCS drivers
(which typically handle in-band autonegotiation) may only support
1000BASE-X, and not 1000BASE-KX. Similarly, the phy mode is used to
configure serdes phys with phy_set_mode_ext. Due to the different
electrical standards (SFI or XFI vs Clause 70), they will likely want to
use different configurations. Adding a phy interface mode for
1000BASE-KX helps simplify configuration in these areas.
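
For instance, a serdes consumer could then request this mode explicitly
(a sketch; PHY_INTERFACE_MODE_1000BASEKX is the new mode added here, the
rest is the generic phy API):

  ret = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET,
                         PHY_INTERFACE_MODE_1000BASEKX);
  if (ret)
          dev_err(dev, "failed to set serdes mode: %d\n", ret);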

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 13:27:40 +0000 (14:27 +0100)]
Merge branch 'dpaa-cleanups'

Sean Anderson says:

====================
net: dpaa: Cleanups in preparation for phylink conversion (part 2)

This series contains several cleanup patches for dpaa/fman. While they
are intended to prepare for a phylink conversion, they stand on their
own. This series was originally submitted as part of [1].

[1] https://lore.kernel.org/netdev/20220715215954.1449214-1-sean.anderson@seco.com

Changes in v5:
- Reduce line length of tgec_config
- Reduce line length of qman_update_cgr_safe
- Rebase onto net-next/master

Changes in v4:
- weer -> were
- tricy -> tricky
- Use mac_dev for calling change_addr
- qman_cgr_create -> qman_create_cgr

Changes in v2:
- Fix prototype for dtsec_initialization
- Fix warning if sizeof(void *) != sizeof(resource_size_t)
- Specify type of mac_dev for exception_cb
- Add helper for sanity checking cgr ops
- Add CGR update function
- Adjust queue depth on rate change
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:36 +0000 (17:57 -0400)]
net: dpaa: Adjust queue depth on rate change

Instead of setting the queue depth once during probe, adjust it on the
fly whenever we configure the link. This is a bit unusual, since usually
the DPAA driver calls into the FMAN driver, but here we do the opposite.
We need to add a netdev to struct mac_device for this, but it will soon
live in the phylink config.

I haven't tested this extensively, but it doesn't seem to break
anything. We could possibly optimize this a bit by keeping track of the
last rate, but for now we just update every time. 10GEC probably doesn't
need to call into this at all, but I've added it for consistency.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:35 +0000 (17:57 -0400)]
soc: fsl: qbman: Add CGR update function

This adds a function to update a CGR with new parameters. qman_create_cgr
can almost be used for this (with flags=0), but it's not suitable because
it also registers the callback function. The _safe variant was modeled off
of qman_cgr_delete_safe. However, we handle multiple arguments and a return
value.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:34 +0000 (17:57 -0400)]
soc: fsl: qbman: Add helper for sanity checking cgr ops

This breaks out/combines get_affine_portal and the cgr sanity check in
preparation for the next commit. No functional change intended.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:33 +0000 (17:57 -0400)]
net: dpaa: Use mac_dev variable in dpaa_netdev_init

There are several references to mac_dev in dpaa_netdev_init. Make things a
bit more concise by adding a local variable for it.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:32 +0000 (17:57 -0400)]
net: fman: Change return type of disable to void

When disabling, there is nothing we can do about errors. In fact, the
only error which can occur is misuse of the API. Just warn in the mac
driver instead.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:31 +0000 (17:57 -0400)]
net: fman: Clean up error handling

This removes the _return label, since something like

	err = -EFOO;
	goto _return;

can be replaced by the briefer

	return -EFOO;

Additionally, this skips going to _return_of_node_put when dev_node has
already been put (preventing a double put).

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:30 +0000 (17:57 -0400)]
net: fman: Specify type of mac_dev for exception_cb

Instead of using a void pointer for mac_dev, specify its type
explicitly.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:29 +0000 (17:57 -0400)]
net: fman: Use mac_dev for some params

Some params are already present in mac_dev. Use them directly instead of
passing them through params.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:28 +0000 (17:57 -0400)]
net: fman: Pass params directly to mac init

Instead of having the mac init functions call back into the fman core to
get their params, just pass them directly to the init functions.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:27 +0000 (17:57 -0400)]
net: fman: Map the base address once

We don't need to remap the base address from the resource twice (once in
mac_probe() and again in set_fman_mac_params()). We still need the
resource to get the end address, but we can use a single function call
to get both at once.

While we're at it, use platform_get_mem_or_io and devm_request_resource
to map the resource. I think this is the more "correct" way to do things
here, since we use the pdev resource, instead of creating a new one.
It's still a bit tricky, since we need to ensure that the resource is a
child of the fman region when it gets requested.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:26 +0000 (17:57 -0400)]
net: fman: Remove internal_phy_node from params

This member was used to pass the phy node between mac_probe and the
mac-specific initialization function. But now that the phy node is
obtained in the initialization function, this parameter does not serve a
purpose. Remove it, and do the grabbing of the node/grabbing of the phy
in the same place.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:25 +0000 (17:57 -0400)]
net: fman: Inline several functions into initialization

There are several small functions which were only necessary because the
initialization functions didn't have access to the mac private data. Now
that they do, just do things directly.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:24 +0000 (17:57 -0400)]
net: fman: Mark mac methods static

These methods are no longer accessed outside of the driver file, so mark
them as static.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Anderson [Fri, 2 Sep 2022 21:57:23 +0000 (17:57 -0400)]
net: fman: Move initialization to mac-specific files

This moves mac-specific initialization to mac-specific files. This will
make it easier to work with individual macs. It will also make it easier
to refactor the initialization to simplify the control flow. No
functional change intended.

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 2 Sep 2022 21:16:52 +0000 (23:16 +0200)]
r8169: remove useless PCI region size check

Let's trust the hardware here and remove this useless check.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 12:06:40 +0000 (13:06 +0100)]
Merge branch 'lan937x-phy-link-interrupt'

Arun Ramadoss says:

====================
net: dsa: microchip: lan937x: enable interrupt for internal phy link detection

This patch series enables the internal phy link detection for lan937x using the
interrupt method. lan937x acts as the interrupt controller for the internal
ports and phy; the irq_domain is registered for the individual ports and in
turn for the individual port interrupts.

RFC v3 -> Patch v1
- Removed the RFC v3 1/3 from the series - changing exit from reset
- Changed the variable name in ksz_port from irq to pirq
- Added the check for return value of irq_find_mapping during phy irq
  registeration.
- Moved the clearing of POR_READY_INT from girq_thread_fn to
  lan937x_reset_switch

RFC v2 -> v3
- Used the interrupt controller implementation of phy link

Changes in RFC v2
- fixed the compilation issue
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:10 +0000 (16:02 +0530)]
net: dsa: microchip: lan937x: add interrupt support for port phy link

This patch enables the interrupts for internal phy link detection for
LAN937x. The interrupt enable bits are active low. There is a global
interrupt mask for each port, and each port has an individual interrupt
mask for TAS, QCI, SGMII, PTP, PHY and ACL.
The first level of the interrupt domain is registered for the global port
interrupt and the second level for the individual port interrupts. The phy
interrupt is enabled in the lan937x_mdio_register function. The port from
which an interrupt is raised is detected based on the interrupt host data.

Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:09 +0000 (16:02 +0530)]
net: dsa: microchip: lan937x: clear the POR_READY_INT status bit

lan937x_reset_switch() masks all the switch and port
registers. In the Global_Int_Status register, the POR ready bit is a
write-1-to-clear bit and all other bits are read only. So, this patch
clears the por_ready_int status bit by writing 1.
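
A minimal sketch of that kind of write-1-to-clear (the register and bit
macro names here are assumptions for illustration):

  /* writing 1 clears the W1C bit; the read-only bits ignore the write */
  ret = ksz_write32(dev, REG_SW_INT_STATUS__4, POR_READY_INT);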

Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arun Ramadoss [Fri, 2 Sep 2022 10:32:08 +0000 (16:02 +0530)]
net: dsa: microchip: add reference to ksz_device inside the ksz_port

struct ksz_port doesn't have a reference to ksz_device as of now. In order
to find out from which port an interrupt has triggered, we need to pass the
struct ksz_port as host data. When the interrupt is triggered, we can
get the port from which it triggered, but to identify it as a phy
interrupt we have to read the status register. The regmap structure for
accessing the device registers is present in the ksz_device struct. To
access the ksz_device from the ksz_port, a reference is added to it,
along with the port number.

Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 11:47:02 +0000 (12:47 +0100)]
Merge branch 'ipa-transaction-IDs'

Alex Elder says:

====================
net: ipa: start using transaction IDs

A previous group of patches added ID fields to track the state of
transactions:
  https://lore.kernel.org/netdev/20220831224017.377745-1-elder@linaro.org

This series starts using those IDs instead of the lists used
previously.  Most of this series involves reworking the function
that determines which transaction is the "last", which determines
when a channel has been quiesced.  The last patch is mainly used to
prove that the new index method of tracking transaction state is
equivalent to the previous use of lists.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:18 +0000 (16:02 -0500)]
net: ipa: verify a few more IDs

The completed transaction list is used in gsi_channel_trans_complete()
to return the next transaction in completed state.

Add some temporary checks to verify the transaction indicated by the
completed ID matches the first one in this list.

Similarly, we use the pending and completed transaction lists when
cancelling pending transactions in gsi_channel_trans_cancel_pending().

Add temporary checks there to verify the transactions indicated by
IDs match those tracked by these lists.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:17 +0000 (16:02 -0500)]
net: ipa: further simplify gsi_channel_trans_last()

Do a little more refactoring in gsi_channel_trans_last() to simplify
it further.  The resulting code should behave exactly as before.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:16 +0000 (16:02 -0500)]
net: ipa: simplify gsi_channel_trans_last()

Using a little logic we can simplify gsi_channel_trans_last().

The first condition in that function looks like this:
    if (trans_info->allocated_id != trans_info->free_id)
And if that's false, we proceed to the next one:
    if (trans_info->committed_id != trans_info->allocated_id)

Failure of the first test implies:
    trans_info->allocated_id == trans_info->free_id
And therefore, the second one can be rewritten this way:
    if (trans_info->committed_id != trans_info->free_id)

Substituting free_id for allocated_id and committed_id can also be
done in the code blocks executed when these conditions yield true.
The net result is that all three blocks for TX endpoints can be
consolidated into just one.
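
A sketch of why those blocks collapse (IDs as named above; the index
arithmetic is illustrative):

  /* If allocated_id != free_id, the last allocated transaction sits at
   * free_id - 1.  Otherwise allocated_id == free_id, and if
   * committed_id != free_id the last committed one sits at
   * allocated_id - 1 == free_id - 1.  One test covers both: */
  if (trans_info->committed_id != trans_info->free_id)
          trans_id = trans_info->free_id - 1;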

The two blocks of code at the end of that function (used for both TX
and RX channels) can be similarly consolidated into a single block.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:15 +0000 (16:02 -0500)]
net: ipa: use IDs exclusively for last transaction

Always use transaction IDs when finding the "last" transaction to
await when quiescing a channel.  This basically extends what was
done in the previous patch to all other transaction state IDs.

As a result we are no longer updating any transaction lists inside
gsi_channel_trans_last(), so there's no need to take the spinlock.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:14 +0000 (16:02 -0500)]
net: ipa: use IDs for last allocated transaction

Use the allocated and free transaction IDs to determine whether the
"last" transaction used for quiescing a channel is in allocated
state.  The last allocated transaction that has not been committed
(if any) immediately precedes the first free transaction in the
transaction array.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 2 Sep 2022 21:02:13 +0000 (16:02 -0500)]
net: ipa: rework last transaction determination

When quiescing a channel, we find the "last" transaction, which is
the latest one to have been allocated.  (New transaction allocation
will have been prevented by the time this is called.)

Currently we do this by looking for the first non-empty transaction
list in each state, then returning the last entry from that list.
Instead, determine the last entry in each list (if any) and return
that entry if found.

Temporarily (locally) introduce list_last_entry_or_null() as a
helper for this, mirroring list_first_entry_or_null().  This macro
definition will be removed by an upcoming patch.
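
For reference, a sketch of such a helper, mirroring the existing
list_first_entry_or_null() in <linux/list.h>:

  #define list_last_entry_or_null(ptr, type, member) \
          (!list_empty(ptr) ? list_last_entry(ptr, type, member) : NULL)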

Remove the temporary warnings added by the previous commit.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 2 Sep 2022 20:52:34 +0000 (22:52 +0200)]
r8169: use devm_clk_get_optional_enabled() to simplify the code

Now that we have devm_clk_get_optional_enabled(), we don't have to
open-code it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 2 Sep 2022 20:21:57 +0000 (22:21 +0200)]
r8169: remove comment about apparently non-existing chip versions

It's not clear where these entries came from, and as I wrote in the
comment: not even Realtek's r8101 driver knows these chip IDs.
So remove the comment.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 2 Sep 2022 20:10:53 +0000 (22:10 +0200)]
r8169: merge handling of chip versions 12 and 17 (RTL8168B)

It's not clear why XIDs 380 and 381..387 ever got different chip
version IDs. VER_12 and VER_17 are handled exactly the same.
Therefore merge handling under the VER_17 umbrella.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 5 Sep 2022 09:16:53 +0000 (10:16 +0100)]
Merge branch 'altera-tse-phylink'

Maxime Chevallier says:

====================
net: altera: tse: phylink conversion

This is V4 of a series converting the Altera TSE driver to phylink,
introducing a new PCS driver along the way.

The Altera TSE can be built with an SGMII/1000BaseX PCS, allowing the use of
SFP ports with this MAC, which is the end goal of adding phylink support
and a proper PCS driver.

The PCS itself can either be mapped in the MAC's register space, in which
case it's accessed through 32-bit registers, with the higher 16 bits
always 0. Alternatively, it can sit in its own register space, exposing
16-bit registers, some of which resemble the standard PHY registers.

To tackle that rework, several things need updating, starting with the DT
binding, since we add support for a new register range for the PCS.

Hence, the first patch of the series is a conversion to YAML of the
existing binding.

Then, patch 2 does a bit of simple cleanup to the TSE driver, using nice
reverse xmas tree definitions.

Patch 3 adds the actual PCS driver, as a standalone driver. Some future
series will then reuse that PCS driver from the dwmac-socfpga driver,
which implements support for this exact PCS too, allowing the code to be
shared nicely.

Patch 4 is then a phylink conversion of the altera_tse driver, to use
this new PCS driver.

Finally, patch 5 updates the newly converted DT binding to support the
pcs register range.

This series contains bits and pieces for this conversion; please tell me if
you want me to send it as individual patches.

V4 Changes:
 - Add missing MODULE_* macros to the TSE PCS driver

V3 Changes:
 - YAML binding conversion changes and PCS addition changes thanks to
   Krzysztof's reviews

V2 Changes :
 - Fixed the binding after the YAML conversion
 - Added a pcs_validate() callback
 - Introduced a comment to justify a soft reset for the PCS
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Fri, 2 Sep 2022 08:32:05 +0000 (10:32 +0200)]
dt-bindings: net: altera: tse: add an optional pcs register range

Some implementations of the TSE have their PCS as an external block,
exposed at its own register range. Document this, and add a new example
showing a case using the pcs and the new phylink conversion to connect
an sfp port to a TSE mac.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Fri, 2 Sep 2022 08:32:04 +0000 (10:32 +0200)]
net: altera: tse: convert to phylink

Convert the Altera Triple Speed Ethernet Controller to phylink.
This controller supports MII, GMII and RGMII with its MAC, and
SGMII + 1000BaseX through a small embedded PCS.

The PCS itself has a register set very similar to what is found in a
typical 802.3 ethernet PHY, but this register set is memory-mapped instead
of lying on an mdio bus.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Fri, 2 Sep 2022 08:32:03 +0000 (10:32 +0200)]
net: pcs: add new PCS driver for altera TSE PCS

The Altera Triple Speed Ethernet has an SGMII/1000BaseX PCS that can be
integrated in several ways. It can either be part of the TSE MAC's
address space, accessed through 32-bit accesses on the mapped mdio
device 0, or through a dedicated 16-bit register set.

This driver allows using the TSE PCS outside of altera TSE's driver,
since it can be used standalone by other MACs.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Fri, 2 Sep 2022 08:32:02 +0000 (10:32 +0200)]
net: altera: tse: cosmetic change to use reverse xmas tree ordering

Make the driver code cleaner through a strictly cosmetic change, using
the reverse xmas tree variable declaration ordering.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Fri, 2 Sep 2022 08:32:01 +0000 (10:32 +0200)]
dt-bindings: net: Convert Altera TSE bindings to yaml

Convert the bindings for the Altera Triple-Speed Ethernet to yaml.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 4 Sep 2022 10:24:34 +0000 (11:24 +0100)]
Merge tag 'wireless-next-2022-09-03' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Johannes Berg says:

====================
drivers
 - rtw89: large update across the map, e.g. coex, pci(e), etc.
 - ath9k: uninit memory read fix
 - ath10k: small peer map fix and a WCN3990 device fix
 - wfx: underflow

stack
 - the "change MAC address while IFF_UP" change from James
   we discussed
 - more MLO work, including a set of fixes for the previous
   code; now that we have more code we can exercise it more
 - prevent some features with MLO that aren't ready yet
   (AP_VLAN and 4-address connections)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Johannes Berg [Fri, 2 Sep 2022 14:12:49 +0000 (16:12 +0200)]
wifi: mac80211_hwsim: fix multi-channel handling in netlink RX

In netlink RX, now that we can actually have multiple
channel contexts for MLO, things don't work well as we
only keep a single pointer, and then on link switching
we might NULL it, and hit the return if the channel is
NULL.

However, we already use mac80211_hwsim_tx_iter() which
deals with all this, so remove the test and adjust the
remaining code a bit.

This then means we no longer use the chanctx pointer,
so remove it as well.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Johannes Berg [Fri, 2 Sep 2022 14:12:51 +0000 (16:12 +0200)]
wifi: mac80211: call drv_sta_state() under sdata_lock() in reconfig

Currently, other paths calling drv_sta_state() hold the mutex
and therefore drivers can assume that, and look at links with
that protection. Fix that for the reconfig path as well; to
do it more easily use ieee80211_reconfig_stations() for the
AP/AP_VLAN station reconfig as well.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Johannes Berg [Fri, 2 Sep 2022 14:12:50 +0000 (16:12 +0200)]
wifi: nl80211: add MLD address to assoc BSS entries

Add an MLD address attribute to BSS entries that the interface
is currently associated with to help userspace figure out what's
going on.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Johannes Berg [Fri, 2 Sep 2022 14:12:46 +0000 (16:12 +0200)]
wifi: mac80211: mlme: refactor QoS settings code

Refactor the code to apply QoS settings to the driver so
we can call it on link switch.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211_hwsim: warn on invalid link address
Johannes Berg [Fri, 2 Sep 2022 14:12:39 +0000 (16:12 +0200)]
wifi: mac80211_hwsim: warn on invalid link address

Catch the bugs fixed in mac80211 by the previous commits
and warn if an invalid address is added (or removed).

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: fix double SW scan stop
Johannes Berg [Fri, 2 Sep 2022 14:12:55 +0000 (16:12 +0200)]
wifi: mac80211: fix double SW scan stop

When we stop a not-yet-started scan, we erroneously call
into the driver, causing a sequence of sw_scan_start()
followed by sw_scan_complete() twice. This will trigger a
warning in hwsim with the next commit in line, which validates
the address passed to wmediumd/virtio. Fix this by doing
the calls only if we were actually scanning.
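
The shape of the fix, sketched (simplified; the actual patch guards
on mac80211's scan state):

  /* simplified: skip the driver notification when the SW scan
   * never actually started */
  if (test_bit(SCAN_SW_SCANNING, &local->scanning))
          drv_sw_scan_complete(local, scan_sdata);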

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: mlme: assign link address correctly
Johannes Berg [Fri, 2 Sep 2022 14:12:38 +0000 (16:12 +0200)]
wifi: mac80211: mlme: assign link address correctly

Right now, we assign the link address only after we add
the link to the driver, which is quite obviously wrong.
It happens to work in many cases because the address gets
updated immediately afterwards, and later link_conf updates
may fix it up, but it's clearly not right.

Set the link address during ieee80211_mgd_setup_link()
so it's set before telling the driver about the link.

Fixes: 81151ce462e5 ("wifi: mac80211: support MLO authentication/association with one link")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: move link code to a new file
Johannes Berg [Fri, 2 Sep 2022 14:12:37 +0000 (16:12 +0200)]
wifi: mac80211: move link code to a new file

We probably should've done that originally; we already have
about 300 lines of code there, and will add more. Move all
the link code we wrote to a new file.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211_hwsim: refactor RX a bit
Johannes Berg [Fri, 2 Sep 2022 14:12:36 +0000 (16:12 +0200)]
wifi: mac80211_hwsim: refactor RX a bit

Refactor some common RX functionality between the netlink
and non-netlink paths, adding the special hwsim TLV (if
compiled) also in the netlink path.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211_hwsim: check STA magic in change_sta_links
Johannes Berg [Fri, 2 Sep 2022 14:12:35 +0000 (16:12 +0200)]
wifi: mac80211_hwsim: check STA magic in change_sta_links

Just as an additional check that mac80211 isn't doing
anything strange, add a check of the STA magic (which
gets assigned when the station is added, and cleared
when the station is removed).
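
A sketch of the added check, using hwsim's existing magic helper
(the exact placement in the callback is approximate):

  static int mac80211_hwsim_change_sta_links(struct ieee80211_hw *hw,
                                             struct ieee80211_vif *vif,
                                             struct ieee80211_sta *sta,
                                             u16 old_links, u16 new_links)
  {
          hwsim_check_sta_magic(sta); /* WARNs unless sta_add set the magic */
          return 0;
  }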

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: remove unused arg to ieee80211_chandef_eht_oper
Johannes Berg [Fri, 2 Sep 2022 14:12:34 +0000 (16:12 +0200)]
wifi: mac80211: remove unused arg to ieee80211_chandef_eht_oper

We don't need the sdata argument, and it doesn't make any
sense for a direct conversion from one value to another,
so just remove the argument.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211_hwsim: remove multicast workaround
Johannes Berg [Fri, 2 Sep 2022 14:12:33 +0000 (16:12 +0200)]
wifi: mac80211_hwsim: remove multicast workaround

Now that we have proper multicast TX in mac80211, there's
no longer a need to fake something here.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: nl80211: remove redundant err variable
Jinpeng Cui [Mon, 29 Aug 2022 11:29:53 +0000 (11:29 +0000)]
wifi: nl80211: remove redundant err variable

Return the value from rdev_set_mcast_rate() directly instead of
storing it in a redundant variable.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Jinpeng Cui <cui.jinpeng2@zte.com.cn>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: Support POWERED_ADDR_CHANGE feature
James Prestwood [Fri, 26 Aug 2022 17:00:32 +0000 (10:00 -0700)]
wifi: mac80211: Support POWERED_ADDR_CHANGE feature

Adds support in mac80211 for NL80211_EXT_FEATURE_POWERED_ADDR_CHANGE.
The motivation behind this functionality is to fix limitations of
address randomization on frequencies which are disallowed in world
roaming.

The way things work now, if a client wants to randomize their address
per-connection it must power down the device, change the MAC, and
power back up. Here lies a problem since powering down the device
may result in frequencies being disabled (until the regdom is set).
If the desired BSS is on one such frequency the client is unable to
connect once the phy is powered again.

For mac80211 based devices changing the MAC while powered is possible
but currently disallowed (-EBUSY). This patch adds some logic to
allow a MAC change while powered by removing the interface, changing
the MAC, and adding it again. mac80211 will advertise support for
this feature so userspace can determine the best course of action e.g.
disallow address randomization on certain frequencies if not
supported.

There are certain limitations put on this which simplify the logic:
 - No active connection
 - No offchannel work, including scanning.
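
A heavily simplified sketch of the described flow (hypothetical
helper, not the actual mac80211 patch):

  /* hypothetical: bounce the interface through the driver to change
   * the MAC without a full power cycle */
  static int change_mac_while_powered(struct ieee80211_sub_if_data *sdata,
                                      const u8 *new_addr)
  {
          /* enforce the limitations above: no connection, no scan */
          if (netif_carrier_ok(sdata->dev) || sdata->local->scanning)
                  return -EBUSY;

          ieee80211_do_stop(sdata, false);        /* remove from driver */
          eth_hw_addr_set(sdata->dev, new_addr);  /* change the MAC */
          return ieee80211_do_open(&sdata->wdev, false);  /* re-add */
  }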

Signed-off-by: James Prestwood <prestwoj@gmail.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: nl80211: Add POWERED_ADDR_CHANGE feature
James Prestwood [Fri, 26 Aug 2022 17:00:31 +0000 (10:00 -0700)]
wifi: nl80211: Add POWERED_ADDR_CHANGE feature

Add a new extended feature bit signifying that the wireless hardware
supports changing the MAC address while the underlying net_device is
powered. Note that this has a different meaning from
IFF_LIVE_ADDR_CHANGE as additional restrictions might be imposed by
the hardware, such as:

 - No connection is active on this interface, carrier is off
 - No scan is in progress
 - No offchannel operations are in progress

Signed-off-by: James Prestwood <prestwoj@gmail.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: prevent 4-addr use on MLDs
Johannes Berg [Fri, 2 Sep 2022 14:12:58 +0000 (16:12 +0200)]
wifi: mac80211: prevent 4-addr use on MLDs

We haven't tried this yet, and it's not very likely to
work well right now, so for now disable 4-addr use on
interfaces that are MLDs.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Link: https://lore.kernel.org/r/20220902161143.f2e4cc2efaa1.I5924e8fb44a2d098b676f5711b36bbc1b1bd68e2@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agowifi: mac80211: prevent VLANs on MLDs
Johannes Berg [Fri, 2 Sep 2022 14:12:59 +0000 (16:12 +0200)]
wifi: mac80211: prevent VLANs on MLDs

Do not allow VLANs to be added to AP interfaces that are
MLDs, this isn't going to work because the link structs
aren't propagated to the VLAN interfaces yet.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Link: https://lore.kernel.org/r/20220902161144.8c88531146e9.If2ef9a3b138d4f16ed2fda91c852da156bdf5e4d@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
22 months agoMerge branch 'net_sched-redundant-resource-cleanups'
David S. Miller [Sat, 3 Sep 2022 09:40:40 +0000 (10:40 +0100)]
Merge branch 'net_sched-redundant-resource-cleanups'

Zhengchao Shao says:

====================
net: sched: remove redundant resource cleanup when init() fails

qdisc_create() calls .init() to initialize the qdisc. If the
initialization fails, qdisc_create() will call .destroy() to release
resources.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agonet: sched: htb: remove redundant resource cleanup in htb_init()
Zhengchao Shao [Fri, 2 Sep 2022 08:34:30 +0000 (16:34 +0800)]
net: sched: htb: remove redundant resource cleanup in htb_init()

If htb_init() fails, qdisc_create() invokes htb_destroy() to clear
resources. Therefore, remove redundant resource cleanup in htb_init().
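
The relevant pattern in qdisc_create() looks roughly like this
(simplified from net/sched/sch_api.c):

  err = ops->init(sch, tca[TCA_OPTIONS], extack);
  if (err != 0)
          goto err_out;
  [...]
  err_out:
          /* ops->destroy() runs even though ops->init() failed,
           * so .init() itself does not need to unwind */
          if (ops->destroy)
                  ops->destroy(sch);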

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agonet: sched: fq_codel: remove redundant resource cleanup in fq_codel_init()
Zhengchao Shao [Fri, 2 Sep 2022 08:34:29 +0000 (16:34 +0800)]
net: sched: fq_codel: remove redundant resource cleanup in fq_codel_init()

If fq_codel_init() fails, qdisc_create() invokes fq_codel_destroy() to
clear resources. Therefore, remove redundant resource cleanup in
fq_codel_init().

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agonet: fec: add stop mode support for imx8 platform
Wei Fang [Fri, 2 Sep 2022 02:30:01 +0000 (10:30 +0800)]
net: fec: add stop mode support for imx8 platform

The current driver supports stop mode by calling the machine API.
This patch adds device tree support to set the GPR register for the
stop request.

imx8mq enters/exits stop mode by setting a GPR bit, which can
be accessed by the A core.
imx8qm enters/exits stop mode by calling IMX_SC IPC APIs that
communicate with the M core IPC service, and the M core then sets
the related GPR bit.

Signed-off-by: Wei Fang <wei.fang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agor8152: Add MAC passthrough support for Lenovo Travel Hub
André Apitzsch [Thu, 1 Sep 2022 17:00:13 +0000 (19:00 +0200)]
r8152: Add MAC passthrough support for Lenovo Travel Hub

The Lenovo USB-C Travel Hub supports MAC passthrough.

Signed-off-by: André Apitzsch <git@apitzsch.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agonet/ipv4: Use __DECLARE_FLEX_ARRAY() helper
Gustavo A. R. Silva [Wed, 31 Aug 2022 19:12:42 +0000 (14:12 -0500)]
net/ipv4: Use __DECLARE_FLEX_ARRAY() helper

We now have a cleaner way to keep compatibility with user-space
(a.k.a. not breaking it) when we need to keep in place a one-element
array (for its use in user-space) together with a flexible-array
member (for its use in kernel-space) without making it hard to read
at the source level. This is through the use of the new
__DECLARE_FLEX_ARRAY() helper macro.

The size and memory layout of the structure is preserved after the
changes. See below.

Before changes:

$ pahole -C ip_msfilter net/ipv4/igmp.o
struct ip_msfilter {
union {
struct {
__be32     imsf_multiaddr_aux;   /*     0     4 */
__be32     imsf_interface_aux;   /*     4     4 */
__u32      imsf_fmode_aux;       /*     8     4 */
__u32      imsf_numsrc_aux;      /*    12     4 */
__be32     imsf_slist[1];        /*    16     4 */
};                                       /*     0    20 */
struct {
__be32     imsf_multiaddr;       /*     0     4 */
__be32     imsf_interface;       /*     4     4 */
__u32      imsf_fmode;           /*     8     4 */
__u32      imsf_numsrc;          /*    12     4 */
__be32     imsf_slist_flex[0];   /*    16     0 */
};                                       /*     0    16 */
};                                               /*     0    20 */

/* size: 20, cachelines: 1, members: 1 */
/* last cacheline: 20 bytes */
};

After changes:

$ pahole -C ip_msfilter net/ipv4/igmp.o
struct ip_msfilter {
__be32                     imsf_multiaddr;       /*     0     4 */
__be32                     imsf_interface;       /*     4     4 */
__u32                      imsf_fmode;           /*     8     4 */
__u32                      imsf_numsrc;          /*    12     4 */
union {
__be32             imsf_slist[1];        /*    16     4 */
struct {
struct {
} __empty_imsf_slist_flex;       /*    16     0 */
__be32     imsf_slist_flex[0];   /*    16     0 */
};                                       /*    16     0 */
};                                               /*    16     4 */

/* size: 20, cachelines: 1, members: 5 */
/* last cacheline: 20 bytes */
};

In the past, we had to duplicate the whole original structure within
a union and update the names of all the members. Now, we just need to
declare the flexible-array member to be used in kernel-space through
the __DECLARE_FLEX_ARRAY() helper together with the one-element array,
within a union. This makes the source code cleaner and easier to read.
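
The resulting structure shape, as implied by the pahole output above
(the helper lives in include/uapi/linux/stddef.h):

  struct ip_msfilter {
          __be32          imsf_multiaddr;
          __be32          imsf_interface;
          __u32           imsf_fmode;
          __u32           imsf_numsrc;
          union {
                  __be32  imsf_slist[1];          /* legacy, user-space */
                  __DECLARE_FLEX_ARRAY(__be32, imsf_slist_flex); /* kernel */
          };
  };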

Link: https://github.com/KSPP/linux/issues/193
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
22 months agonet/sched: cls_api: remove redundant 0 check in tcf_qevent_init()
Zhengchao Shao [Thu, 1 Sep 2022 01:16:17 +0000 (09:16 +0800)]
net/sched: cls_api: remove redundant 0 check in tcf_qevent_init()

tcf_qevent_parse_block_index() never returns a zero block_index.
Therefore, it is unnecessary to check the value of block_index in
tcf_qevent_init().

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Link: https://lore.kernel.org/r/20220901011617.14105-1-shaozhengchao@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
22 months agonet: lantiq_etop: Fix return type for implementation of ndo_start_xmit
GUO Zihua [Fri, 2 Sep 2022 08:15:21 +0000 (16:15 +0800)]
net: lantiq_etop: Fix return type for implementation of ndo_start_xmit

Since Linux now supports CFI, it is a good idea to fix mismatched
return types in implementations of hooks. Otherwise, such mismatches
might get caught by CFI and cause a panic.

ltq_etop_tx() would return either NETDEV_TX_BUSY or NETDEV_TX_OK, so
change the return type to netdev_tx_t directly.
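
The shape of the change (illustrative; the prototype follows the
.ndo_start_xmit hook in struct net_device_ops):

  /* before: int does not match the hook's netdev_tx_t, which CFI
   * can flag at the indirect call site */
  static int ltq_etop_tx(struct sk_buff *skb, struct net_device *dev);

  /* after: */
  static netdev_tx_t ltq_etop_tx(struct sk_buff *skb, struct net_device *dev);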

Signed-off-by: GUO Zihua <guozihua@huawei.com>
Link: https://lore.kernel.org/r/20220902081521.59867-1-guozihua@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
22 months agonet: sunplus: Fix return type for implementation of ndo_start_xmit
GUO Zihua [Fri, 2 Sep 2022 08:15:50 +0000 (16:15 +0800)]
net: sunplus: Fix return type for implementation of ndo_start_xmit

Since Linux now supports CFI, it is a good idea to fix mismatched
return types in implementations of hooks. Otherwise, such mismatches
might get caught by CFI and cause a panic.

spl2sw_ethernet_start_xmit() would return either NETDEV_TX_BUSY or
NETDEV_TX_OK, so change the return type to netdev_tx_t directly.

Signed-off-by: GUO Zihua <guozihua@huawei.com>
Link: https://lore.kernel.org/r/20220902081550.60095-1-guozihua@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
22 months agonet: xscale: Fix return type for implementation of ndo_start_xmit
GUO Zihua [Fri, 2 Sep 2022 08:16:12 +0000 (16:16 +0800)]
net: xscale: Fix return type for implementation of ndo_start_xmit

Since Linux now supports CFI, it is a good idea to fix mismatched
return types in implementations of hooks. Otherwise, such mismatches
might get caught by CFI and cause a panic.

eth_xmit() would return either NETDEV_TX_BUSY or NETDEV_TX_OK, so
change the return type to netdev_tx_t directly.

Signed-off-by: GUO Zihua <guozihua@huawei.com>
Link: https://lore.kernel.org/r/20220902081612.60405-1-guozihua@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
22 months agonet: broadcom: Fix return type for implementation of ndo_start_xmit
GUO Zihua [Fri, 2 Sep 2022 07:54:07 +0000 (15:54 +0800)]
net: broadcom: Fix return type for implementation of ndo_start_xmit

Since Linux now supports CFI, it is a good idea to fix mismatched
return types in implementations of hooks. Otherwise, such mismatches
might get caught by CFI and cause a panic.

bcm4908_enet_start_xmit() would return either NETDEV_TX_BUSY or
NETDEV_TX_OK, so change the return type to netdev_tx_t directly.

Signed-off-by: GUO Zihua <guozihua@huawei.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20220902075407.52358-1-guozihua@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
22 months agoMerge branch 'bpf: net: Remove duplicated code from bpf_getsockopt()'
Alexei Starovoitov [Sat, 3 Sep 2022 03:34:32 +0000 (20:34 -0700)]
Merge branch 'bpf: net: Remove duplicated code from bpf_getsockopt()'

Martin KaFai Lau says:

====================

From: Martin KaFai Lau <martin.lau@kernel.org>

The earlier commits [0] removed duplicated code from bpf_setsockopt().
This series removes duplicated code from bpf_getsockopt().

Unlike setsockopt(), which had already been changed to take
a sockptr_t argument, getsockopt() had not. Making that change
is the extra step taken in this series.

[0]: https://lore.kernel.org/all/20220817061704.4174272-1-kafai@fb.com/

v2:
- The previous v2 did not reach the list. It is a resend.
- Add comments on bpf_getsockopt() should not free
  the saved_syn (Stanislav)
- Explicitly null-terminate the tcp-cc name (Stanislav)
====================

Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agoselftest/bpf: Add test for bpf_getsockopt()
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:37 +0000 (17:29 -0700)]
selftest/bpf: Add test for bpf_getsockopt()

This patch removes the __bpf_getsockopt() which directly
reads the sk by using PTR_TO_BTF_ID. Instead, the test now directly
uses the kernel bpf helper bpf_getsockopt(), which now supports all
the required optnames.

TCP_SAVE[D]_SYN and TCP_MAXSEG are not tested in a loop for all
the hooks and sock_ops callbacks. TCP_SAVE[D]_SYN only works
on passive connections. TCP_MAXSEG only works when it is set via
setsockopt() before the connection is established, and its
getsockopt() return value can only be tested after the connection
is established.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002937.2896904-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:31 +0000 (17:29 -0700)]
bpf: Change bpf_getsockopt(SOL_IPV6) to reuse do_ipv6_getsockopt()

This patch changes bpf_getsockopt(SOL_IPV6) to reuse
do_ipv6_getsockopt().  It removes the duplicated code from
bpf_getsockopt(SOL_IPV6).

This also makes bpf_getsockopt(SOL_IPV6) support the same
set of optnames as bpf_setsockopt(SOL_IPV6). In particular,
this adds IPV6_AUTOFLOWLABEL support to bpf_getsockopt(SOL_IPV6).

ipv6 can be compiled as a module. Like other code that solves this
with stubs in ipv6_stubs.h, this patch adds do_ipv6_getsockopt
to the ipv6_bpf_stub.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002931.2896218-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:25 +0000 (17:29 -0700)]
bpf: Change bpf_getsockopt(SOL_IP) to reuse do_ip_getsockopt()

This patch changes bpf_getsockopt(SOL_IP) to reuse
do_ip_getsockopt() and remove the duplicated code.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002925.2895416-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:18 +0000 (17:29 -0700)]
bpf: Change bpf_getsockopt(SOL_TCP) to reuse do_tcp_getsockopt()

This patch changes bpf_getsockopt(SOL_TCP) to reuse
do_tcp_getsockopt().  It removes the duplicated code from
bpf_getsockopt(SOL_TCP).

Before this patch, there were some optnames available to
bpf_setsockopt(SOL_TCP) but missing in bpf_getsockopt(SOL_TCP).
For example, TCP_NODELAY, TCP_MAXSEG, TCP_KEEPIDLE, TCP_KEEPINTVL,
and a few more.  It surprises users from time to time.  This patch
automatically closes this gap without duplicating more code.

bpf_getsockopt(TCP_SAVED_SYN) does not free the saved_syn,
so it stays in sol_tcp_sockopt().

For a string value like TCP_CONGESTION, bpf expects it to
always be null-terminated, so sol_tcp_sockopt() decrements
optlen by one before calling do_tcp_getsockopt(), and the
'if (optlen < saved_optlen) memset(..,0,..);'
in __bpf_getsockopt() then always null-terminates the result.
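
A rough sketch of that flow (names and details approximate, not the
exact patch):

  /* sol_tcp_sockopt(): reserve one byte so the cc name can be
   * NUL-terminated */
  if (optname == TCP_CONGESTION) {
          if (*optlen < 2)
                  return -EINVAL;
          *optlen -= 1;
  }
  err = do_tcp_getsockopt(sk, level, optname, optval, optlen);
  [...]
  /* __bpf_getsockopt(): zeroing the tail NUL-terminates strings */
  if (*optlen < saved_optlen)
          memset(optval + *optlen, 0, saved_optlen - *optlen);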

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002918.2894511-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: Change bpf_getsockopt(SOL_SOCKET) to reuse sk_getsockopt()
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:12 +0000 (17:29 -0700)]
bpf: Change bpf_getsockopt(SOL_SOCKET) to reuse sk_getsockopt()

This patch changes bpf_getsockopt(SOL_SOCKET) to reuse
sk_getsockopt().  It removes all duplicated code from
bpf_getsockopt(SOL_SOCKET).

Before this patch, there were some optnames available to
bpf_setsockopt(SOL_SOCKET) but missing in bpf_getsockopt(SOL_SOCKET).
It surprises users from time to time.  For example, SO_REUSEADDR,
SO_KEEPALIVE, SO_RCVLOWAT, and SO_MAX_PACING_RATE.  This patch
automatically closes this gap without duplicating more code.
The only exception is SO_BINDTODEVICE because it needs to acquire a
blocking lock.  Thus, SO_BINDTODEVICE is not supported.
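
As a usage illustration (hypothetical BPF program, assuming
libbpf-style headers), an optname such as SO_KEEPALIVE can now be
read directly:

  #include <linux/bpf.h>
  #include <sys/socket.h>
  #include <bpf/bpf_helpers.h>

  SEC("sockops")
  int log_keepalive(struct bpf_sock_ops *skops)
  {
          int val;

          /* previously unsupported by bpf_getsockopt(SOL_SOCKET) */
          if (!bpf_getsockopt(skops, SOL_SOCKET, SO_KEEPALIVE,
                              &val, sizeof(val)))
                  bpf_printk("SO_KEEPALIVE=%d", val);
          return 1;
  }

  char LICENSE[] SEC("license") = "GPL";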

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002912.2894040-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: Embed kernel CONFIG check into the if statement in bpf_getsockopt
Martin KaFai Lau [Fri, 2 Sep 2022 00:29:06 +0000 (17:29 -0700)]
bpf: Embed kernel CONFIG check into the if statement in bpf_getsockopt

This patch moves the "#ifdef CONFIG_XXX" check into the "if/else"
statement itself.  The change is done for the bpf_getsockopt()
function only.  It will make the latter patches easier to follow
without the surrounding ifdef macro.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002906.2893572-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: net: Avoid do_ipv6_getsockopt() taking sk lock when called from bpf
Martin KaFai Lau [Fri, 2 Sep 2022 00:28:59 +0000 (17:28 -0700)]
bpf: net: Avoid do_ipv6_getsockopt() taking sk lock when called from bpf

Similar to the earlier patch that changed sk_getsockopt() to
use sockopt_{lock,release}_sock() so that it can avoid taking the
lock when called from bpf, this patch changes do_ipv6_getsockopt()
to use sockopt_{lock,release}_sock() so that bpf_getsockopt(SOL_IPV6)
can reuse do_ipv6_getsockopt().

Although bpf_getsockopt(SOL_IPV6) currently does not support any
optname that requires lock_sock(), using sockopt_{lock,release}_sock()
consistently across *_getsockopt() makes it harder to miss the
required usage when a future optname is added, e.g. a new optname
that requires a lock and is also needed in bpf_getsockopt().

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002859.2893064-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
22 months agobpf: net: Change do_ipv6_getsockopt() to take the sockptr_t argument
Martin KaFai Lau [Fri, 2 Sep 2022 00:28:53 +0000 (17:28 -0700)]
bpf: net: Change do_ipv6_getsockopt() to take the sockptr_t argument

Similar to the earlier patch that changed sk_getsockopt() to
take a sockptr_t argument, this patch changes
do_ipv6_getsockopt() to take a sockptr_t argument so that
a later patch can make bpf_getsockopt(SOL_IPV6) reuse
do_ipv6_getsockopt().

Note the change in ip6_mc_msfget(). This function returns an
array of sockaddr_storage entries in optval. It
is shared between ipv6_get_msfilter() and compat_ipv6_get_msfilter().
However, the sockaddr_storage is stored at a different offset in the
optval because of the difference between group_filter and
compat_group_filter. Thus, a new 'ss_offset' argument is
added to ip6_mc_msfget().
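
A sketch of the resulting copy loop in ip6_mc_msfget() (approximate):

  /* write each sockaddr_storage at the ABI-dependent offset inside
   * optval, advancing by sizeof(ss) per entry */
  for (i = 0; i < copycount; i++) {
          struct sockaddr_in6 *psin6;
          struct sockaddr_storage ss;

          memset(&ss, 0, sizeof(ss));
          psin6 = (struct sockaddr_in6 *)&ss;
          psin6->sin6_family = AF_INET6;
          psin6->sin6_addr = psl->sl_addr[i];
          if (copy_to_sockptr_offset(optval, ss_offset, &ss, sizeof(ss)))
                  return -EFAULT;
          ss_offset += sizeof(ss);
  }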

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20220902002853.2892532-1-kafai@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>