- drivers/net/e100.c
- drivers/net/e1000
- drivers/net/e1000e
- - drivers/net/ixgb
- drivers/net/ixgbe
- drivers/net/cxgb3
- drivers/net/s2io.c
Q: Users are allowed to embed bpf_spin_lock, bpf_timer fields in their BPF map
values (when using BTF support for BPF maps). This allows the use of helpers for
such objects on these fields inside map values. Users are also allowed to embed
-pointers to some kernel types (with __kptr and __kptr_ref BTF tags). Will the
+pointers to some kernel types (with __kptr_untrusted and __kptr BTF tags). Will the
kernel preserve backwards compatibility for these features?
A: It depends. For bpf_spin_lock, bpf_timer: YES, for kptr and everything else:
NO, but see below.
For struct types that have been added already, like bpf_spin_lock and bpf_timer,
the kernel will preserve backwards compatibility, as they are part of UAPI.
For kptrs, they are also part of UAPI, but only with respect to the kptr
-mechanism. The types that you can use with a __kptr and __kptr_ref tagged
+mechanism. The types that you can use with a __kptr_untrusted and __kptr tagged
pointer in your struct are NOT part of the UAPI contract. The supported types can
and will change across kernel releases. However, operations like accessing kptr
fields and bpf_kptr_xchg() helper will continue to be supported across kernel
releases for the supported types.
net-next are both run by David S. Miller. From there, they will go
into the kernel mainline tree run by Linus Torvalds. To read up on the
process of net and net-next being merged into the mainline tree, see
-the :ref:`netdev-FAQ`
+the `netdev-FAQ`_.
Q: How do I indicate which tree (bpf vs. bpf-next) my patch should be applied to?
---------------------------------------------------------------------------------
-A: The process is the very same as described in the :ref:`netdev-FAQ`,
+A: The process is the very same as described in the `netdev-FAQ`_,
so please read up on it. The subject line must indicate whether the
patch is a fix or rather "next-like" content in order to let the
maintainers know whether it is targeted at bpf or bpf-next.
Once the BPF pull request has been accepted by David S. Miller, the
patches end up in the net or net-next tree, respectively, and
make their way from there further into mainline. Again, see the
-:ref:`netdev-FAQ` for additional information e.g. on how often they are
+`netdev-FAQ`_ for additional information e.g. on how often they are
merged to mainline.
Q: How long do I need to wait for feedback on my BPF patches?
-----------------------------------------------------------------
A: For the time when the merge window is open, bpf-next will not be
processed. This is roughly analogous to net-next patch processing,
-so feel free to read up on the :ref:`netdev-FAQ` about further details.
+so feel free to read up on the `netdev-FAQ`_ for further details.
During those two weeks of merge window, we might ask you to resend
your patch series once bpf-next is open again. Once Linus released
netdev@vger.kernel.org
The process in general is the same as on netdev itself, see also the
-:ref:`netdev-FAQ`.
+`netdev-FAQ`_.
Q: Do you also backport to kernels not currently maintained as stable?
----------------------------------------------------------------------
What should I do?
A: The same rules apply as with netdev patch submissions in general, see
-the :ref:`netdev-FAQ`.
+the `netdev-FAQ`_.
Never add "``Cc: stable@vger.kernel.org``" to the patch description, but
ask the BPF maintainers to queue the patches instead. This can be done
.. Links
-.. _netdev-FAQ: Documentation/process/maintainer-netdev.rst
+.. _netdev-FAQ: https://www.kernel.org/doc/html/latest/process/maintainer-netdev.html
.. _selftests:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/
.. code-block:: c
struct cpumask_map_value {
- struct bpf_cpumask __kptr_ref * cpumask;
+ struct bpf_cpumask __kptr * cpumask;
};
struct array_map {
/* struct containing the struct bpf_cpumask kptr which is stored in the map. */
struct cpumasks_kfunc_map_value {
- struct bpf_cpumask __kptr_ref * bpf_cpumask;
+ struct bpf_cpumask __kptr * bpf_cpumask;
};
/* The map containing struct cpumasks_kfunc_map_value entries. */
* the wide instruction encoding, which appends a second 64-bit immediate (i.e.,
constant) value after the basic instruction for a total of 128 bits.
-The basic instruction encoding is as follows, where MSB and LSB mean the most significant
-bits and least significant bits, respectively:
+The fields conforming an encoded basic instruction are stored in the
+following order::
-============= ======= ======= ======= ============
-32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB)
-============= ======= ======= ======= ============
-imm offset src_reg dst_reg opcode
-============= ======= ======= ======= ============
+ opcode:8 src_reg:4 dst_reg:4 offset:16 imm:32 // In little-endian BPF.
+ opcode:8 dst_reg:4 src_reg:4 offset:16 imm:32 // In big-endian BPF.
**imm**
signed integer immediate value
**opcode**
operation to perform
+Note that the contents of multi-byte fields ('imm' and 'offset') are
+stored using big-endian byte ordering in big-endian BPF and
+little-endian byte ordering in little-endian BPF.
+
+For example::
+
+ opcode offset imm assembly
+ src_reg dst_reg
+ 07 0 1 00 00 44 33 22 11 r1 += 0x11223344 // little
+ dst_reg src_reg
+ 07 1 0 00 00 11 22 33 44 r1 += 0x11223344 // big
+
Note that most instructions do not use all of the fields.
Unused fields shall be cleared to zero.
using the same format but with opcode, dst_reg, src_reg, and offset all set to zero,
and imm containing the high 32 bits of the immediate value.
-================= ==================
-64 bits (MSB) 64 bits (LSB)
-================= ==================
-basic instruction pseudo instruction
-================= ==================
+This is depicted in the following figure::
+
+ basic_instruction
+ .-----------------------------.
+ | |
+ code:8 regs:8 offset:16 imm:32 unused:32 imm:32
+ | |
+ '--------------'
+ pseudo instruction
Thus the 64-bit immediate value is constructed as follows:
imm64 = (next_imm << 32) | imm
where 'next_imm' refers to the imm value of the pseudo instruction
-following the basic instruction.
+following the basic instruction. The unused bytes in the pseudo
+instruction are reserved and shall be cleared to zero.
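As an illustrative sketch (not part of the instruction set specification
itself), a loader or disassembler written in C could reconstruct the 64-bit
immediate from two consecutive ``struct bpf_insn`` entries as follows::

  #include <linux/bpf.h>      /* struct bpf_insn */
  #include <stdint.h>

  /* Combine the basic instruction's imm (low 32 bits) with the following
   * pseudo instruction's imm (high 32 bits): imm64 = (next_imm << 32) | imm.
   */
  static uint64_t bpf_imm64(const struct bpf_insn *insn)
  {
          uint64_t lo = (uint32_t)insn[0].imm;   /* basic instruction */
          uint64_t hi = (uint32_t)insn[1].imm;   /* pseudo instruction */

          return (hi << 32) | lo;
  }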
Instruction classes
-------------------
size parameter, and the value of the constant matters for program safety, __k
suffix should be used.
+2.2.2 __uninit Annotation
+-------------------------
+
+This annotation is used to indicate that the argument will be treated as
+uninitialized.
+
+An example is given below::
+
+ __bpf_kfunc int bpf_dynptr_from_skb(..., struct bpf_dynptr_kern *ptr__uninit)
+ {
+ ...
+ }
+
+Here, the dynptr will be treated as an uninitialized dynptr. Without this
+annotation, the verifier will reject the program if the dynptr passed in is
+not initialized.
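From the BPF program side, the dynptr is simply declared on the stack and
passed in without being initialized. Below is a minimal sketch; the extern
prototype is an assumption here and would normally come from a kernel-provided
header (for example the selftests' ``bpf_kfuncs.h``)::

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* Assumed declaration of the kfunc; must match the kernel-side definition. */
  extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
                                 struct bpf_dynptr *ptr__uninit) __ksym;

  SEC("tc")
  int use_uninit_dynptr(struct __sk_buff *skb)
  {
          struct bpf_dynptr ptr;  /* deliberately left uninitialized */

          /* Accepted by the verifier only because the argument is __uninit. */
          if (bpf_dynptr_from_skb(skb, 0, &ptr))
                  return 0;

          /* ... use the now-initialized dynptr ... */
          return 0;
  }

  char _license[] SEC("license") = "GPL";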
+
.. _BPF_kfunc_nodef:
2.3 Using an existing kernel function
2.4.8 KF_RCU flag
-----------------
-The KF_RCU flag is used for kfuncs which have a rcu ptr as its argument.
-When used together with KF_ACQUIRE, it indicates the kfunc should have a
-single argument which must be a trusted argument or a MEM_RCU pointer.
-The argument may have reference count of 0 and the kfunc must take this
-into consideration.
+The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with
+KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees
+that the objects are valid and there is no use-after-free. The pointers are not
+NULL, but the object's refcount could have reached zero. The kfuncs need to
+consider doing refcnt != 0 check, especially when returning a KF_ACQUIRE
+pointer. Note as well that a KF_ACQUIRE kfunc that is KF_RCU should very likely
+also be KF_RET_NULL.
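For illustration, a sketch modeled on the in-tree bpf_task_acquire() kfunc,
which is registered with KF_ACQUIRE | KF_RCU | KF_RET_NULL, might look like
this::

  /* The argument may be MEM_RCU: the memory stays valid for the RCU
   * read-side critical section, but the refcount may already be zero,
   * so only hand out a reference if the object is still live.
   */
  __bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
  {
          if (refcount_inc_not_zero(&p->rcu_users))
                  return p;

          return NULL;
  }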
.. _KF_deprecated_flag:
/* struct containing the struct task_struct kptr which is actually stored in the map. */
struct __cgroups_kfunc_map_value {
- struct cgroup __kptr_ref * cgroup;
+ struct cgroup __kptr * cgroup;
};
/* The map containing struct __cgroups_kfunc_map_value entries. */
----
-Another kfunc available for interacting with ``struct cgroup *`` objects is
-bpf_cgroup_ancestor(). This allows callers to access the ancestor of a cgroup,
-and return it as a cgroup kptr.
+Other kfuncs available for interacting with ``struct cgroup *`` objects are
+bpf_cgroup_ancestor() and bpf_cgroup_from_id(), allowing callers to access
+the ancestor of a cgroup and find a cgroup by its ID, respectively. Both
+return a cgroup kptr.
.. kernel-doc:: kernel/bpf/helpers.c
:identifiers: bpf_cgroup_ancestor
+.. kernel-doc:: kernel/bpf/helpers.c
+ :identifiers: bpf_cgroup_from_id
+
Eventually, BPF should be updated to allow this to happen with a normal memory
load in the program itself. This is currently not possible without more work in
the verifier. bpf_cgroup_ancestor() can be used as follows:
`man-pages`_ for `bpf-helpers(7)`_.
BPF maps are accessed from user space via the ``bpf`` syscall, which provides
-commands to create maps, lookup elements, update elements and delete
-elements. More details of the BPF syscall are available in
-:doc:`/userspace-api/ebpf/syscall` and in the `man-pages`_ for `bpf(2)`_.
+commands to create maps, lookup elements, update elements and delete elements.
+More details of the BPF syscall are available in `ebpf-syscall`_ and in the
+`man-pages`_ for `bpf(2)`_.
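For illustration only, a minimal user space sketch using libbpf's thin
wrappers around the bpf(2) syscall (the map name and values below are
arbitrary) could look like this::

  #include <bpf/bpf.h>    /* bpf_map_create(), bpf_map_update_elem(), ... */
  #include <stdio.h>

  int main(void)
  {
          __u32 key = 0, value = 42, out = 0;
          int fd;

          /* BPF_MAP_CREATE */
          fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "example_map",
                              sizeof(key), sizeof(value), 1, NULL);
          if (fd < 0)
                  return 1;

          bpf_map_update_elem(fd, &key, &value, BPF_ANY);   /* BPF_MAP_UPDATE_ELEM */
          bpf_map_lookup_elem(fd, &key, &out);              /* BPF_MAP_LOOKUP_ELEM */
          printf("value = %u\n", out);
          return 0;
  }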
Map Types
=========
.. _man-pages: https://www.kernel.org/doc/man-pages/
.. _bpf(2): https://man7.org/linux/man-pages/man2/bpf.2.html
.. _bpf-helpers(7): https://man7.org/linux/man-pages/man7/bpf-helpers.7.html
+.. _ebpf-syscall: https://docs.kernel.org/userspace-api/ebpf/syscall.html
+++ /dev/null
-MediaTek SGMIISYS controller
-============================
-
-The MediaTek SGMIISYS controller provides various clocks to the system.
-
-Required Properties:
-
-- compatible: Should be:
- - "mediatek,mt7622-sgmiisys", "syscon"
- - "mediatek,mt7629-sgmiisys", "syscon"
- - "mediatek,mt7981-sgmiisys_0", "syscon"
- - "mediatek,mt7981-sgmiisys_1", "syscon"
- - "mediatek,mt7986-sgmiisys_0", "syscon"
- - "mediatek,mt7986-sgmiisys_1", "syscon"
-- #clock-cells: Must be 1
-
-The SGMIISYS controller uses the common clk binding from
-Documentation/devicetree/bindings/clock/clock-bindings.txt
-The available clocks are defined in dt-bindings/clock/mt*-clk.h.
-
-Example:
-
-sgmiisys: sgmiisys@1b128000 {
- compatible = "mediatek,mt7622-sgmiisys", "syscon";
- reg = <0 0x1b128000 0 0x1000>;
- #clock-cells = <1>;
-};
- st,stm32-syscfg
- st,stm32-power-config
- st,stm32-tamp
+ - st,stm32f4-gcan
- const: syscon
- items:
- const: st,stm32-tamp
contains:
enum:
- st,stm32mp157-syscfg
+ - st,stm32f4-gcan
then:
required:
- clocks
operation modes at 10/100 Mb/s data transfer rates.
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
properties:
compatible:
title: Allwinner A10 EMAC Ethernet Controller
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
maintainers:
- Chen-Yu Tsai <wens@csie.org>
- Maxime Ripard <mripard@kernel.org>
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
# Select every compatible, including the deprecated ones. This way, we
# will be able to report a warning when we have that compatible, since
- tx-fifo-depth
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/amlogic,meson-dwmac.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/amlogic,meson-dwmac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Amlogic Meson DWMAC Ethernet controller
MAC.
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
- Florian Fainelli <f.fainelli@gmail.com>
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
- phy-mode
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
unevaluatedProperties: false
- compatible
dependencies:
- brcm,requires-autobaud-mode: [ 'shutdown-gpios' ]
+ brcm,requires-autobaud-mode: [ shutdown-gpios ]
if:
not:
boot loader. This property should only be used if the used operating
system doesn't support the clocks and clock-names property.
+ power-domains:
+ maxItems: 1
+
xceiver-supply:
description: Regulator that powers the CAN transceiver.
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/can/st,stm32-bxcan.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: STMicroelectronics bxCAN controller
+
+description: STMicroelectronics BxCAN controller for CAN bus
+
+maintainers:
+ - Dario Binacchi <dario.binacchi@amarulasolutions.com>
+
+allOf:
+ - $ref: can-controller.yaml#
+
+properties:
+ compatible:
+ enum:
+ - st,stm32f4-bxcan
+
+ st,can-primary:
+ description:
+ Primary and secondary mode of the bxCAN peripheral is only relevant
+ if the chip has two CAN peripherals. In that case they share some
+ of the required logic.
+ To avoid misunderstandings, it should be noted that ST documentation
+ uses the terms master/slave instead of primary/secondary.
+ type: boolean
+
+ reg:
+ maxItems: 1
+
+ interrupts:
+ items:
+ - description: transmit interrupt
+ - description: FIFO 0 receive interrupt
+ - description: FIFO 1 receive interrupt
+ - description: status change error interrupt
+
+ interrupt-names:
+ items:
+ - const: tx
+ - const: rx0
+ - const: rx1
+ - const: sce
+
+ resets:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+ st,gcan:
+ $ref: /schemas/types.yaml#/definitions/phandle-array
+ description:
+ The phandle to the gcan node which allows access to the 512-byte
+ SRAM memory shared by the two bxCAN cells (CAN1 primary and CAN2
+ secondary) in dual CAN peripheral configuration.
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - resets
+ - clocks
+ - st,gcan
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/clock/stm32fx-clock.h>
+ #include <dt-bindings/mfd/stm32f4-rcc.h>
+
+ can1: can@40006400 {
+ compatible = "st,stm32f4-bxcan";
+ reg = <0x40006400 0x200>;
+ interrupts = <19>, <20>, <21>, <22>;
+ interrupt-names = "tx", "rx0", "rx1", "sce";
+ resets = <&rcc STM32F4_APB1_RESET(CAN1)>;
+ clocks = <&rcc 0 STM32F4_APB1_CLOCK(CAN1)>;
+ st,can-primary;
+ st,gcan = <&gcan>;
+ };
maxItems: 1
tx-fifo-depth:
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
description: CAN Tx fifo depth (Zynq, Axi CAN).
rx-fifo-depth:
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
description: CAN Rx fifo depth (Zynq, Axi CAN, CAN FD in sequential Rx mode)
tx-mailbox-count:
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
description: CAN Tx mailbox buffer count (CAN FD)
required:
- const: brcm,bcm53115
- const: brcm,bcm53125
- const: brcm,bcm53128
+ - const: brcm,bcm53134
- const: brcm,bcm5365
- const: brcm,bcm5395
- const: brcm,bcm5389
- items:
- enum:
- brcm,bcm3384-switch
+ - brcm,bcm6318-switch
- brcm,bcm6328-switch
+ - brcm,bcm6362-switch
- brcm,bcm6368-switch
+ - brcm,bcm63268-switch
- const: brcm,bcm63xx-switch
required:
supports reporting the number of packets in-flight in a switch queue
type: boolean
- "#address-cells":
- const: 1
-
- "#size-cells":
- const: 0
-
ports:
type: object
required:
- reg
- interrupts
- - "#address-cells"
- - "#size-cells"
allOf:
- - $ref: "dsa.yaml#"
+ - $ref: dsa.yaml#
- if:
properties:
compatible:
- |
switch@f0b00000 {
compatible = "brcm,bcm7445-switch-v4.0";
- #address-cells = <1>;
- #size-cells = <0>;
reg = <0xf0b00000 0x40000>,
<0xf0b40000 0x110>,
<0xf0b40340 0x30>,
- Landen Chao <Landen.Chao@mediatek.com>
- DENG Qingfang <dqfext@gmail.com>
- Sean Wang <sean.wang@mediatek.com>
+ - Daniel Golle <daniel@makrotopia.org>
description: |
- There are two versions of MT7530, standalone and in a multi-chip module.
+ There are three versions of MT7530: standalone, in a multi-chip module and
+ built into a SoC.
MT7530 is a part of the multi-chip module in MT7620AN, MT7620DA, MT7620DAN,
MT7620NN, MT7621AT, MT7621DAT, MT7621ST and MT7623AI SoCs.
+ The MT7988 SoC comes with a built-in switch similar to MT7531 as well as four
+ Gigabit Ethernet PHYs. The switch registers are directly mapped into the SoC's
+ memory map rather than using MDIO. The switch has an internally connected 10G
+ CPU port and 4 user ports connected to the built-in Gigabit Ethernet PHYs.
+
MT7530 in MT7620AN, MT7620DA, MT7620DAN and MT7620NN SoCs has got 10/100 PHYs
and the switch registers are directly mapped into SoC's memory map rather than
- using MDIO. The DSA driver currently doesn't support this.
+ using MDIO. The DSA driver currently doesn't support MT7620 variants.
There is only the standalone version of MT7531.
Multi-chip module MT7530 in MT7621AT, MT7621DAT and MT7621ST SoCs
const: mediatek,mt7621
+ - description:
+ Built-in switch of the MT7988 SoC
+ const: mediatek,mt7988-switch
+
reg:
maxItems: 1
gpio-controller:
type: boolean
- description:
+ description: |
If defined, LED controller of the MT7530 switch will run on GPIO mode.
There are 15 controllable pins.
maxItems: 1
io-supply:
- description:
+ description: |
Phandle to the regulator node necessary for the I/O power.
See Documentation/devicetree/bindings/regulator/mt6323-regulator.txt for
details for the regulator setup on these boards.
switch is a part of the multi-chip module.
reset-gpios:
- description:
+ description: |
GPIO to reset the switch. Use this if mediatek,mcm is not used.
This property is optional because some boards share the reset line with
other components which makes it impossible to probe the switch if the
required:
- mediatek,mcm
+ - if:
+ properties:
+ compatible:
+ const: mediatek,mt7988-switch
+ then:
+ $ref: "#/$defs/mt7530-dsa-port"
+ properties:
+ gpio-controller: false
+ mediatek,mcm: false
+ reset-names: false
+
unevaluatedProperties: false
examples:
With the legacy mapping the reg corresponding to the internal
mdio is the switch reg with an offset of -1.
-$ref: "dsa.yaml#"
+$ref: dsa.yaml#
patternProperties:
"^(ethernet-)?ports$":
mdio:
type: object
- $ref: "mdio.yaml#"
+ $ref: mdio.yaml#
description: optional node for embedded MDIO controller
required:
0: Disable 2.4 Vpp operating mode.
1: Request 2.4 Vpp operating mode from link partner.
Absence of this property will leave configuration to default values.
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
enum: [0, 1]
broken-turn-around:
type: object
description: Ethernet switch ports
+ required:
+ - "#address-cells"
+ - "#size-cells"
+
oneOf:
- required:
- ports
description:
Regulator that powers the Ethernet PHY.
+ power-domains:
+ maxItems: 1
+
fsl,num-tx-queues:
$ref: /schemas/types.yaml#/definitions/uint32
description:
located under the 'dpmacs' node for the fsl-mc bus DTS node.
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
properties:
compatible:
# Copyright 2018 Linaro Ltd.
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/intel,ixp46x-ptp-timer.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/intel,ixp46x-ptp-timer.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel IXP46x PTP Timer (TSYNC)
# Copyright 2018 Linaro Ltd.
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/intel,ixp4xx-ethernet.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/intel,ixp4xx-ethernet.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel IXP4xx ethernet
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
maintainers:
- Linus Walleij <linus.walleij@linaro.org>
description: Ethernet MMIO address range
queue-rx:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the RX queue node
description: phandle to the RX queue on the NPE
queue-txready:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the TX READY queue node
phy-handle: true
intel,npe-handle:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the NPE this ethernet instance is using
# Copyright 2021 Linaro Ltd.
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/intel,ixp4xx-hss.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/intel,ixp4xx-hss.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel IXP4xx V.35 WAN High Speed Serial Link (HSS)
description: The HSS instance
intel,npe-handle:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
items:
- description: phandle to the NPE this HSS instance is using
and the instance to use in the second cell
intel,queue-chl-rxtrig:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the RX trigger queue on the NPE
description: phandle to the RX trigger queue on the NPE
intel,queue-chl-txready:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the TX ready queue on the NPE
description: phandle to the TX ready queue on the NPE
intel,queue-pkt-rx:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the RX queue on the NPE
description: phandle to the packet RX queue on the NPE
intel,queue-pkt-tx:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
maxItems: 4
items:
items:
description: phandle to the packet TX0, TX1, TX2 and TX3 queues on the NPE
intel,queue-pkt-rxfree:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
maxItems: 4
items:
items:
RXFREE3 queues on the NPE
intel,queue-pkt-txdone:
- $ref: '/schemas/types.yaml#/definitions/phandle-array'
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the TXDONE queue on the NPE
definition.
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/marvell-bluetooth.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/marvell-bluetooth.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Marvell Bluetooth chips
- Russell King <linux@armlinux.org.uk>
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
- mediatek,mt7623-eth
- mediatek,mt7622-eth
- mediatek,mt7629-eth
+ - mediatek,mt7981-eth
- mediatek,mt7986-eth
- ralink,rt5350-eth
description:
List of phandles to wireless ethernet dispatch nodes.
+ mediatek,wed-pcie:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description:
+ Phandle to the mediatek wed-pcie controller.
+
dma-coherent: true
mdio-bus:
const: 0
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
mediatek,wed: false
+ mediatek,wed-pcie: false
+
- if:
properties:
compatible:
description:
Phandle to the mediatek pcie-mirror controller.
+ mediatek,wed-pcie: false
+
- if:
properties:
compatible:
mediatek,wed: false
+ mediatek,wed-pcie: false
+
+ - if:
+ properties:
+ compatible:
+ contains:
+ const: mediatek,mt7981-eth
+ then:
+ properties:
+ interrupts:
+ minItems: 4
+
+ clocks:
+ minItems: 15
+ maxItems: 15
+
+ clock-names:
+ items:
+ - const: fe
+ - const: gp2
+ - const: gp1
+ - const: wocpu0
+ - const: sgmii_ck
+ - const: sgmii_tx250m
+ - const: sgmii_rx250m
+ - const: sgmii_cdr_ref
+ - const: sgmii_cdr_fb
+ - const: sgmii2_tx250m
+ - const: sgmii2_rx250m
+ - const: sgmii2_cdr_ref
+ - const: sgmii2_cdr_fb
+ - const: netsys0
+ - const: netsys1
+
+ mediatek,sgmiisys:
+ minItems: 2
+ maxItems: 2
+
- if:
properties:
compatible:
minItems: 2
maxItems: 2
- mediatek,wed-pcie:
- $ref: /schemas/types.yaml#/definitions/phandle
- description:
- Phandle to the mediatek wed-pcie controller.
-
patternProperties:
"^mac@[0-1]$":
type: object
modes with flow-control as well as CRC offloading and VLAN tags.
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
properties:
compatible:
"^port@[0-9a-f]+$":
type: object
- $ref: "/schemas/net/ethernet-controller.yaml#"
+ $ref: /schemas/net/ethernet-controller.yaml#
unevaluatedProperties: false
properties:
microchip,bandwidth:
description: Specifies bandwidth in Mbit/s allocated to the port.
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
maximum: 25000
microchip,sd-sgpio:
Index of the ports Signal Detect SGPIO in the set of 384 SGPIOs
This is optional, and only needed if the default used index is not correct.
- $ref: "/schemas/types.yaml#/definitions/uint32"
+ $ref: /schemas/types.yaml#/definitions/uint32
minimum: 0
maximum: 383
- Alexandre Belloni <alexandre.belloni@bootlin.com>
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
maxItems: 1
reset-n-io:
- $ref: "/schemas/types.yaml#/definitions/phandle-array"
+ $ref: /schemas/types.yaml#/definitions/phandle-array
maxItems: 1
description: |
Output GPIO pin used to reset the chip (active low)
- compatible
dependencies:
- interrupts: [ 'reg' ]
+ interrupts: [ reg ]
additionalProperties: false
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/pcs/mediatek,sgmiisys.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek SGMIISYS Controller
+
+maintainers:
+ - Matthias Brugger <matthias.bgg@gmail.com>
+
+description:
+ The MediaTek SGMIISYS controller provides an SGMII PCS and some clocks
+ to the ethernet subsystem to which it is attached.
+
+properties:
+ compatible:
+ items:
+ - enum:
+ - mediatek,mt7622-sgmiisys
+ - mediatek,mt7629-sgmiisys
+ - mediatek,mt7981-sgmiisys_0
+ - mediatek,mt7981-sgmiisys_1
+ - mediatek,mt7986-sgmiisys_0
+ - mediatek,mt7986-sgmiisys_1
+ - const: syscon
+
+ reg:
+ maxItems: 1
+
+ '#clock-cells':
+ const: 1
+
+ mediatek,pnswap:
+ description: Invert polarity of the SGMII data lanes
+ type: boolean
+
+required:
+ - compatible
+ - reg
+ - '#clock-cells'
+
+additionalProperties: false
+
+examples:
+ - |
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ sgmiisys: syscon@1b128000 {
+ compatible = "mediatek,mt7622-sgmiisys", "syscon";
+ reg = <0 0x1b128000 0 0x1000>;
+ #clock-cells = <1>;
+ };
+ };
by the PHY node to control power injection to the Ethernet cable.
allOf:
- - $ref: "pse-controller.yaml#"
+ - $ref: pse-controller.yaml#
properties:
compatible:
- qcom,sc7280-ipa
- qcom,sdm845-ipa
- qcom,sdx55-ipa
+ - qcom,sdx65-ipa
- qcom,sm6350-ipa
- qcom,sm8350-ipa
- "#size-cells"
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
- if:
properties:
used to communicate with the gmac phy connected.
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
reg = <0x10>;
ports {
- /* ... */
+ #address-cells = <1>;
+ #size-cells = <0>;
+ /* ... */
};
};
};
- mdio
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/rockchip-dwmac.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/rockchip-dwmac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip 10/100/1000 Ethernet driver(GMAC)
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/sff,sfp.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/sff,sfp.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Small Form Factor (SFF) Committee Small Form-factor Pluggable (SFP)
Transceiver
snps,reset-delays-us: ["snps,reset-gpio"]
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
- if:
properties:
compatible:
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/stm32-dwmac.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/stm32-dwmac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: STMicroelectronics STM32 / MCU DWMAC glue layer controller
- compatible
allOf:
- - $ref: "snps,dwmac.yaml#"
+ - $ref: snps,dwmac.yaml#
properties:
compatible:
- ptp_ref
st,syscon:
- $ref: "/schemas/types.yaml#/definitions/phandle-array"
+ $ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to the syscon node which encompasses the glue register
interrupt-names:
items:
- - const: "rx_thresh"
- - const: "rx"
- - const: "tx"
- - const: "misc"
+ - const: rx_thresh
+ - const: rx
+ - const: tx
+ - const: misc
pinctrl-names: true
type: object
description:
CPSW MDIO bus.
- $ref: "ti,davinci-mdio.yaml#"
+ $ref: ti,davinci-mdio.yaml#
required:
TI SoC Davinci/Keystone2 MDIO Controller
allOf:
- - $ref: "mdio.yaml#"
+ - $ref: mdio.yaml#
properties:
compatible:
# Copyright (C) 2020 Texas Instruments Incorporated
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/ti,dp83822.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/ti,dp83822.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI DP83822 ethernet PHY
http://www.ti.com/lit/ds/symlink/dp83822i.pdf
allOf:
- - $ref: "ethernet-phy.yaml#"
+ - $ref: ethernet-phy.yaml#
properties:
reg:
# Copyright (C) 2019 Texas Instruments Incorporated
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/ti,dp83867.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/ti,dp83867.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI DP83867 ethernet PHY
allOf:
- - $ref: "ethernet-controller.yaml#"
+ - $ref: ethernet-controller.yaml#
maintainers:
- Andrew Davis <afd@ti.com>
# Copyright (C) 2019 Texas Instruments Incorporated
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/ti,dp83869.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/ti,dp83869.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI DP83869 ethernet PHY
allOf:
- - $ref: "ethernet-phy.yaml#"
+ - $ref: ethernet-phy.yaml#
maintainers:
- Andrew Davis <afd@ti.com>
compatible:
enum:
+ - ti,am642-cpsw-nuss
- ti,am654-cpsw-nuss
- ti,j7200-cpswxg-nuss
- ti,j721e-cpsw-nuss
- ti,j721e-cpswxg-nuss
- - ti,am642-cpsw-nuss
+ - ti,j784s4-cpswxg-nuss
reg:
maxItems: 1
description: CPSW port number
phys:
- maxItems: 1
- description: phandle on phy-gmii-sel PHY
+ minItems: 1
+ items:
+ - description: CPSW MAC's PHY.
+ - description: Serdes PHY. Serdes PHY is required only if
+ the Serdes has to be configured in the
+ Single-Link configuration.
+
+ phy-names:
+ minItems: 1
+ items:
+ - const: mac
+ - const: serdes
label:
description: label associated with this port
properties:
compatible:
contains:
- const: ti,j721e-cpswxg-nuss
+ enum:
+ - ti,j721e-cpswxg-nuss
+ - ti,j784s4-cpswxg-nuss
then:
properties:
ethernet-ports:
compatible:
contains:
enum:
- - ti,j721e-cpswxg-nuss
- ti,j7200-cpswxg-nuss
+ - ti,j721e-cpswxg-nuss
+ - ti,j784s4-cpswxg-nuss
then:
properties:
ethernet-ports:
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/toshiba,visconti-dwmac.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/toshiba,visconti-dwmac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Toshiba Visconti DWMAC Ethernet controller
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/net/vertexcom-mse102x.yaml#"
-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+$id: http://devicetree.org/schemas/net/vertexcom-mse102x.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
title: The Vertexcom MSE102x (SPI)
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink, genetlink-c ]
- # Start genetlink-c
uapi-header:
description: Path to the uAPI header, default is linux/${family-name}.h
type: string
+ # Start genetlink-c
c-family-name:
description: Name of the define for the family name.
type: string
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink, genetlink-c, genetlink-legacy ] # Trim
- # Start genetlink-c
uapi-header:
description: Path to the uAPI header, default is linux/${family-name}.h
type: string
+ # Start genetlink-c
c-family-name:
description: Name of the define for the family name.
type: string
description: Max length for a string or a binary attribute.
$ref: '#/$defs/len-or-define'
sub-type: *attr-type
+ # Start genetlink-legacy
+ struct:
+ description: Name of the struct type used for the attribute.
+ type: string
+ # End genetlink-legacy
# Make sure name-prefix does not appear in subsets (subsets inherit naming)
dependencies:
async-enum:
description: Name for the enum type with notifications/events.
type: string
+ # Start genetlink-legacy
+ fixed-header: &fixed-header
+ description: |
+ Name of the structure defining the optional fixed-length protocol
+ header. This header is placed in a message after the netlink and
+ genetlink headers and before any attributes.
+ type: string
+ # End genetlink-legacy
list:
description: List of commands
type: array
type: array
items:
enum: [ strict, dump ]
+ # Start genetlink-legacy
+ fixed-header: *fixed-header
+ # End genetlink-legacy
do: &subop-type
description: Main command handler.
type: object
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink ]
+ uapi-header:
+ description: Path to the uAPI header, default is linux/${family-name}.h
+ type: string
definitions:
description: List of type and constant definitions (enums, flags, defines).
--- /dev/null
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+
+name: devlink
+
+protocol: genetlink-legacy
+
+doc: Partial family for Devlink.
+
+attribute-sets:
+ -
+ name: devlink
+ attributes:
+ -
+ name: bus-name
+ type: string
+ value: 1
+ -
+ name: dev-name
+ type: string
+ -
+ name: port-index
+ type: u32
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: info-driver-name
+ type: string
+ value: 98
+ -
+ name: info-serial-number
+ type: string
+ -
+ name: info-version-fixed
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-info-version
+ -
+ name: info-version-running
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-info-version
+ -
+ name: info-version-stored
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-info-version
+ -
+ name: info-version-name
+ type: string
+ -
+ name: info-version-value
+ type: string
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: reload-failed
+ type: u8
+ value: 136
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: reload-action
+ type: u8
+ value: 153
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: dev-stats
+ type: nest
+ value: 156
+ nested-attributes: dl-dev-stats
+ -
+ name: reload-stats
+ type: nest
+ nested-attributes: dl-reload-stats
+ -
+ name: reload-stats-entry
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-reload-stats-entry
+ -
+ name: reload-stats-limit
+ type: u8
+ -
+ name: reload-stats-value
+ type: u32
+ -
+ name: remote-reload-stats
+ type: nest
+ nested-attributes: dl-reload-stats
+ -
+ name: reload-action-info
+ type: nest
+ nested-attributes: dl-reload-act-info
+ -
+ name: reload-action-stats
+ type: nest
+ nested-attributes: dl-reload-act-stats
+ -
+ name: dl-dev-stats
+ subset-of: devlink
+ attributes:
+ -
+ name: reload-stats
+ type: nest
+ -
+ name: remote-reload-stats
+ type: nest
+ -
+ name: dl-reload-stats
+ subset-of: devlink
+ attributes:
+ -
+ name: reload-action-info
+ type: nest
+ -
+ name: dl-reload-act-info
+ subset-of: devlink
+ attributes:
+ -
+ name: reload-action
+ type: u8
+ -
+ name: reload-action-stats
+ type: nest
+ -
+ name: dl-reload-act-stats
+ subset-of: devlink
+ attributes:
+ -
+ name: reload-stats-entry
+ type: nest
+ -
+ name: dl-reload-stats-entry
+ subset-of: devlink
+ attributes:
+ -
+ name: reload-stats-limit
+ type: u8
+ -
+ name: reload-stats-value
+ type: u32
+ -
+ name: dl-info-version
+ subset-of: devlink
+ attributes:
+ -
+ name: info-version-name
+ type: string
+ -
+ name: info-version-value
+ type: string
+
+operations:
+ enum-model: directional
+ list:
+ -
+ name: get
+ doc: Get devlink instances.
+ attribute-set: devlink
+
+ do:
+ request:
+ value: 1
+ attributes: &dev-id-attrs
+ - bus-name
+ - dev-name
+ reply: &get-reply
+ value: 3
+ attributes:
+ - bus-name
+ - dev-name
+ - reload-failed
+ - reload-action
+ - dev-stats
+ dump:
+ reply: *get-reply
+
+ # TODO: fill in the operations in between
+
+ -
+ name: info-get
+ doc: Get device information, like driver name, hardware and firmware versions etc.
+ attribute-set: devlink
+
+ do:
+ request:
+ value: 51
+ attributes: *dev-id-attrs
+ reply:
+ value: 51
+ attributes:
+ - bus-name
+ - dev-name
doc: Partial family for Ethtool Netlink.
+definitions:
+ -
+ name: udp-tunnel-type
+ type: enum
+ entries: [ vxlan, geneve, vxlan-gpe ]
+
attribute-sets:
-
name: header
-
name: bit
type: nest
+ multi-attr: true
nested-attributes: bitset-bit
-
name: bitset
nested-attributes: bitset-bits
-
+ name: u64-array
+ attributes:
+ -
+ name: u64
+ type: nest
+ multi-attr: true
+ nested-attributes: u64
+ -
+ name: s32-array
+ attributes:
+ -
+ name: s32
+ type: nest
+ multi-attr: true
+ nested-attributes: s32
+ -
name: string
attributes:
-
-
name: rx-push
type: u8
+ -
+ name: tx-push-buf-len
+ type: u32
+ -
+ name: tx-push-buf-len-max
+ type: u32
-
name: mm-stat
name: stats
type: nest
nested-attributes: mm-stat
+ -
+ name: linkinfo
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: port
+ type: u8
+ -
+ name: phyaddr
+ type: u8
+ -
+ name: tp-mdix
+ type: u8
+ -
+ name: tp-mdix-ctrl
+ type: u8
+ -
+ name: transceiver
+ type: u8
+ -
+ name: linkmodes
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: autoneg
+ type: u8
+ -
+ name: ours
+ type: nest
+ nested-attributes: bitset
+ -
+ name: peer
+ type: nest
+ nested-attributes: bitset
+ -
+ name: speed
+ type: u32
+ -
+ name: duplex
+ type: u8
+ -
+ name: master-slave-cfg
+ type: u8
+ -
+ name: master-slave-state
+ type: u8
+ -
+ name: master-slave-lanes
+ type: u32
+ -
+ name: rate-matching
+ type: u8
+ -
+ name: linkstate
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: link
+ type: u8
+ -
+ name: sqi
+ type: u32
+ -
+ name: sqi-max
+ type: u32
+ -
+ name: ext-state
+ type: u8
+ -
+ name: ext-substate
+ type: u8
+ -
+ name: down-cnt
+ type: u32
+ -
+ name: debug
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: msgmask
+ type: nest
+ nested-attributes: bitset
+ -
+ name: wol
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: modes
+ type: nest
+ nested-attributes: bitset
+ -
+ name: sopass
+ type: binary
+ -
+ name: features
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: hw
+ type: nest
+ nested-attributes: bitset
+ -
+ name: wanted
+ type: nest
+ nested-attributes: bitset
+ -
+ name: active
+ type: nest
+ nested-attributes: bitset
+ -
+ name: nochange
+ type: nest
+ nested-attributes: bitset
+ -
+ name: channels
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: rx-max
+ type: u32
+ -
+ name: tx-max
+ type: u32
+ -
+ name: other-max
+ type: u32
+ -
+ name: combined-max
+ type: u32
+ -
+ name: rx-count
+ type: u32
+ -
+ name: tx-count
+ type: u32
+ -
+ name: other-count
+ type: u32
+ -
+ name: combined-count
+ type: u32
-operations:
- enum-model: directional
- list:
- -
- name: strset-get
- doc: Get string set from the kernel.
-
- attribute-set: strset
-
- do: &strset-get-op
- request:
- attributes:
- - header
- - stringsets
- - counts-only
- reply:
- attributes:
- - header
- - stringsets
- dump: *strset-get-op
-
- # TODO: fill in the requests in between
-
- -
- name: privflags-get
- doc: Get device private flags.
-
- attribute-set: privflags
-
- do: &privflag-get-op
- request:
- value: 13
- attributes:
- - header
- reply:
- value: 14
- attributes:
- - header
- - flags
- dump: *privflag-get-op
- -
- name: privflags-set
- doc: Set device private flags.
-
- attribute-set: privflags
-
- do:
- request:
- attributes:
- - header
- - flags
- -
- name: privflags-ntf
- doc: Notification for change in device private flags.
- notify: privflags-get
-
- -
- name: rings-get
- doc: Get ring params.
-
- attribute-set: rings
-
- do: &ring-get-op
- request:
- attributes:
- - header
- reply:
- attributes:
- - header
- - rx-max
- - rx-mini-max
- - rx-jumbo-max
- - tx-max
- - rx
- - rx-mini
- - rx-jumbo
- - tx
- - rx-buf-len
- - tcp-data-split
- - cqe-size
- - tx-push
- - rx-push
- dump: *ring-get-op
- -
- name: rings-set
- doc: Set ring params.
-
- attribute-set: rings
-
- do:
- request:
- attributes:
- - header
- - rx
- - rx-mini
- - rx-jumbo
- - tx
- - rx-buf-len
- - tcp-data-split
- - cqe-size
- - tx-push
- - rx-push
- -
- name: rings-ntf
- doc: Notification for change in ring params.
- notify: rings-get
-
- # TODO: fill in the requests in between
-
+ -
+ name: coalesce
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: rx-usecs
+ type: u32
+ -
+ name: rx-max-frames
+ type: u32
+ -
+ name: rx-usecs-irq
+ type: u32
+ -
+ name: rx-max-frames-irq
+ type: u32
+ -
+ name: tx-usecs
+ type: u32
+ -
+ name: tx-max-frames
+ type: u32
+ -
+ name: tx-usecs-irq
+ type: u32
+ -
+ name: tx-max-frames-irq
+ type: u32
+ -
+ name: stats-block-usecs
+ type: u32
+ -
+ name: use-adaptive-rx
+ type: u8
+ -
+ name: use-adaptive-tx
+ type: u8
+ -
+ name: pkt-rate-low
+ type: u32
+ -
+ name: rx-usecs-low
+ type: u32
+ -
+ name: rx-max-frames-low
+ type: u32
+ -
+ name: tx-usecs-low
+ type: u32
+ -
+ name: tx-max-frames-low
+ type: u32
+ -
+ name: pkt-rate-high
+ type: u32
+ -
+ name: rx-usecs-high
+ type: u32
+ -
+ name: rx-max-frames-high
+ type: u32
+ -
+ name: tx-usecs-high
+ type: u32
+ -
+ name: tx-max-frames-high
+ type: u32
+ -
+ name: rate-sample-interval
+ type: u32
+ -
+ name: use-cqe-mode-tx
+ type: u8
+ -
+ name: use-cqe-mode-rx
+ type: u8
+ -
+ name: tx-aggr-max-bytes
+ type: u32
+ -
+ name: tx-aggr-max-frames
+ type: u32
+ -
+ name: tx-aggr-time-usecs
+ type: u32
+ -
+ name: pause-stat
+ attributes:
+ -
+ name: pad
+ type: u32
+ -
+ name: tx-frames
+ type: u64
+ -
+ name: rx-frames
+ type: u64
+ -
+ name: pause
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: autoneg
+ type: u8
+ -
+ name: rx
+ type: u8
+ -
+ name: tx
+ type: u8
+ -
+ name: stats
+ type: nest
+ nested-attributes: pause-stat
+ -
+ name: stats-src
+ type: u32
+ -
+ name: eee
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: modes-ours
+ type: nest
+ nested-attributes: bitset
+ -
+ name: modes-peer
+ type: nest
+ nested-attributes: bitset
+ -
+ name: active
+ type: u8
+ -
+ name: enabled
+ type: u8
+ -
+ name: tx-lpi-enabled
+ type: u8
+ -
+ name: tx-lpi-timer
+ type: u32
+ -
+ name: tsinfo
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: timestamping
+ type: nest
+ nested-attributes: bitset
+ -
+ name: tx-types
+ type: nest
+ nested-attributes: bitset
+ -
+ name: rx-filters
+ type: nest
+ nested-attributes: bitset
+ -
+ name: phc-index
+ type: u32
+ -
+ name: cable-test-nft-nest-result
+ attributes:
+ -
+ name: pair
+ type: u8
+ -
+ name: code
+ type: u8
+ -
+ name: cable-test-nft-nest-fault-length
+ attributes:
+ -
+ name: pair
+ type: u8
+ -
+ name: cm
+ type: u32
+ -
+ name: cable-test-nft-nest
+ attributes:
+ -
+ name: result
+ type: nest
+ nested-attributes: cable-test-nft-nest-result
+ -
+ name: fault-length
+ type: nest
+ nested-attributes: cable-test-nft-nest-fault-length
+ -
+ name: cable-test
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: status
+ type: u8
+ -
+ name: nest
+ type: nest
+ nested-attributes: cable-test-nft-nest
+ -
+ name: cable-test-tdr-cfg
+ attributes:
+ -
+ name: first
+ type: u32
+ -
+ name: last
+ type: u32
+ -
+ name: step
+ type: u32
+ -
+ name: pari
+ type: u8
+ -
+ name: cable-test-tdr
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: cfg
+ type: nest
+ nested-attributes: cable-test-tdr-cfg
+ -
+ name: tunnel-info-udp-entry
+ attributes:
+ -
+ name: port
+ type: u16
+ byte-order: big-endian
+ -
+ name: type
+ type: u32
+ enum: udp-tunnel-type
+ -
+ name: tunnel-info-udp-table
+ attributes:
+ -
+ name: size
+ type: u32
+ -
+ name: types
+ type: nest
+ nested-attributes: bitset
+ -
+ name: udp-ports
+ type: nest
+ nested-attributes: tunnel-info-udp-entry
+ -
+ name: tunnel-info
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: udp-ports
+ type: nest
+ nested-attributes: tunnel-info-udp-table
+ -
+ name: fec-stat
+ attributes:
+ -
+ name: pad
+ type: u8
+ -
+ name: corrected
+ type: nest
+ nested-attributes: u64-array
+ -
+ name: uncorr
+ type: nest
+ nested-attributes: u64-array
+ -
+ name: corr-bits
+ type: nest
+ nested-attributes: u64-array
+ -
+ name: fec
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: modes
+ type: nest
+ nested-attributes: bitset
+ -
+ name: auto
+ type: u8
+ -
+ name: active
+ type: u32
+ -
+ name: stats
+ type: nest
+ nested-attributes: fec-stat
+ -
+ name: module-eeprom
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: offset
+ type: u32
+ -
+ name: length
+ type: u32
+ -
+ name: page
+ type: u8
+ -
+ name: bank
+ type: u8
+ -
+ name: i2c-address
+ type: u8
+ -
+ name: data
+ type: binary
+ -
+ name: stats-grp
+ attributes:
+ -
+ name: pad
+ type: u32
+ -
+ name: id
+ type: u32
+ -
+ name: ss-id
+ type: u32
+ -
+ name: stat
+ type: nest
+ nested-attributes: u64
+ -
+ name: hist-rx
+ type: nest
+ nested-attributes: u64
+ -
+ name: hist-tx
+ type: nest
+ nested-attributes: u64
+ -
+ name: hist-bkt-low
+ type: u32
+ -
+ name: hist-bkt-hi
+ type: u32
+ -
+ name: hist-bkt-val
+ type: u64
+ -
+ name: stats
+ attributes:
+ -
+ name: pad
+ type: u32
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: groups
+ type: nest
+ nested-attributes: bitset
+ -
+ name: grp
+ type: nest
+ nested-attributes: stats-grp
+ -
+ name: src
+ type: u32
+ -
+ name: phc-vclocks
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: num
+ type: u32
+ -
+ name: index
+ type: nest
+ nested-attributes: s32-array
+ -
+ name: module
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: power-mode-policy
+ type: u8
+ -
+ name: power-mode
+ type: u8
+ -
+ name: pse
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: admin-state
+ type: u32
+ -
+ name: admin-control
+ type: u32
+ -
+ name: pw-d-status
+ type: u32
+ -
+ name: rss
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: context
+ type: u32
+ -
+ name: hfunc
+ type: u32
+ -
+ name: indir
+ type: binary
+ -
+ name: hkey
+ type: binary
+ -
+ name: plca
+ attributes:
+ -
+ name: header
+ type: nest
+ nested-attributes: header
+ -
+ name: version
+ type: u16
+ -
+ name: enabled
+ type: u8
+ -
+ name: status
+ type: u8
+ -
+ name: node-cnt
+ type: u32
+ -
+ name: node-id
+ type: u32
+ -
+ name: to-tmr
+ type: u32
+ -
+ name: burst-cnt
+ type: u32
+ -
+ name: burst-tmr
+ type: u32
+
+operations:
+ enum-model: directional
+ list:
+ -
+ name: strset-get
+ doc: Get string set from the kernel.
+
+ attribute-set: strset
+
+ do: &strset-get-op
+ request:
+ attributes:
+ - header
+ - stringsets
+ - counts-only
+ reply:
+ attributes:
+ - header
+ - stringsets
+ dump: *strset-get-op
+ -
+ name: linkinfo-get
+ doc: Get link info.
+
+ attribute-set: linkinfo
+
+ do: &linkinfo-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &linkinfo
+ - header
+ - port
+ - phyaddr
+ - tp-mdix
+ - tp-mdix-ctrl
+ - transceiver
+ dump: *linkinfo-get-op
+ -
+ name: linkinfo-set
+ doc: Set link info.
+
+ attribute-set: linkinfo
+
+ do:
+ request:
+ attributes: *linkinfo
+ -
+ name: linkinfo-ntf
+ doc: Notification for change in link info.
+ notify: linkinfo-get
+ -
+ name: linkmodes-get
+ doc: Get link modes.
+
+ attribute-set: linkmodes
+
+ do: &linkmodes-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &linkmodes
+ - header
+ - autoneg
+ - ours
+ - peer
+ - speed
+ - duplex
+ - master-slave-cfg
+ - master-slave-state
+ - master-slave-lanes
+ - rate-matching
+ dump: *linkmodes-get-op
+ -
+ name: linkmodes-set
+ doc: Set link modes.
+
+ attribute-set: linkmodes
+
+ do:
+ request:
+ attributes: *linkmodes
+ -
+ name: linkmodes-ntf
+ doc: Notification for change in link modes.
+ notify: linkmodes-get
+ -
+ name: linkstate-get
+ doc: Get link state.
+
+ attribute-set: linkstate
+
+ do: &linkstate-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - link
+ - sqi
+ - sqi-max
+ - ext-state
+ - ext-substate
+ - down-cnt
+ dump: *linkstate-get-op
+ -
+ name: debug-get
+ doc: Get debug message mask.
+
+ attribute-set: debug
+
+ do: &debug-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &debug
+ - header
+ - msgmask
+ dump: *debug-get-op
+ -
+ name: debug-set
+ doc: Set debug message mask.
+
+ attribute-set: debug
+
+ do:
+ request:
+ attributes: *debug
+ -
+ name: debug-ntf
+ doc: Notification for change in debug message mask.
+ notify: debug-get
+ -
+ name: wol-get
+ doc: Get WOL params.
+
+ attribute-set: wol
+
+ do: &wol-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &wol
+ - header
+ - modes
+ - sopass
+ dump: *wol-get-op
+ -
+ name: wol-set
+ doc: Set WOL params.
+
+ attribute-set: wol
+
+ do:
+ request:
+ attributes: *wol
+ -
+ name: wol-ntf
+ doc: Notification for change in WOL params.
+ notify: wol-get
+ -
+ name: features-get
+ doc: Get features.
+
+ attribute-set: features
+
+ do: &feature-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &feature
+ - header
+ # User-changeable features.
+ - hw
+ # User-requested features.
+ - wanted
+ # Currently active features.
+ - active
+ # Unchangeable features.
+ - nochange
+ dump: *feature-get-op
+ -
+ name: features-set
+ doc: Set features.
+
+ attribute-set: features
+
+ do: &feature-set-op
+ request:
+ attributes: *feature
+ reply:
+ attributes: *feature
+ -
+ name: features-ntf
+ doc: Notification for change in features.
+ notify: features-get
+ -
+ name: privflags-get
+ doc: Get device private flags.
+
+ attribute-set: privflags
+
+ do: &privflag-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &privflag
+ - header
+ - flags
+ dump: *privflag-get-op
+ -
+ name: privflags-set
+ doc: Set device private flags.
+
+ attribute-set: privflags
+
+ do:
+ request:
+ attributes: *privflag
+ -
+ name: privflags-ntf
+ doc: Notification for change in device private flags.
+ notify: privflags-get
+
+ -
+ name: rings-get
+ doc: Get ring params.
+
+ attribute-set: rings
+
+ do: &ring-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &ring
+ - header
+ - rx-max
+ - rx-mini-max
+ - rx-jumbo-max
+ - tx-max
+ - rx
+ - rx-mini
+ - rx-jumbo
+ - tx
+ - rx-buf-len
+ - tcp-data-split
+ - cqe-size
+ - tx-push
+ - rx-push
+ - tx-push-buf-len
+ - tx-push-buf-len-max
+ dump: *ring-get-op
+ -
+ name: rings-set
+ doc: Set ring params.
+
+ attribute-set: rings
+
+ do:
+ request:
+ attributes: *ring
+ -
+ name: rings-ntf
+ doc: Notification for change in ring params.
+ notify: rings-get
+ -
+ name: channels-get
+ doc: Get channel params.
+
+ attribute-set: channels
+
+ do: &channel-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &channel
+ - header
+ - rx-max
+ - tx-max
+ - other-max
+ - combined-max
+ - rx-count
+ - tx-count
+ - other-count
+ - combined-count
+ dump: *channel-get-op
+ -
+ name: channels-set
+ doc: Set channel params.
+
+ attribute-set: channels
+
+ do:
+ request:
+ attributes: *channel
+ -
+ name: channels-ntf
+ doc: Notification for change in channel params.
+ notify: channels-get
+ -
+ name: coalesce-get
+ doc: Get coalesce params.
+
+ attribute-set: coalesce
+
+ do: &coalesce-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &coalesce
+ - header
+ - rx-usecs
+ - rx-max-frames
+ - rx-usecs-irq
+ - rx-max-frames-irq
+ - tx-usecs
+ - tx-max-frames
+ - tx-usecs-irq
+ - tx-max-frames-irq
+ - stats-block-usecs
+ - use-adaptive-rx
+ - use-adaptive-tx
+ - pkt-rate-low
+ - rx-usecs-low
+ - rx-max-frames-low
+ - tx-usecs-low
+ - tx-max-frames-low
+ - pkt-rate-high
+ - rx-usecs-high
+ - rx-max-frames-high
+ - tx-usecs-high
+ - tx-max-frames-high
+ - rate-sample-interval
+ - use-cqe-mode-tx
+ - use-cqe-mode-rx
+ - tx-aggr-max-bytes
+ - tx-aggr-max-frames
+ - tx-aggr-time-usecs
+ dump: *coalesce-get-op
+ -
+ name: coalesce-set
+ doc: Set coalesce params.
+
+ attribute-set: coalesce
+
+ do:
+ request:
+ attributes: *coalesce
+ -
+ name: coalesce-ntf
+ doc: Notification for change in coalesce params.
+ notify: coalesce-get
+ -
+ name: pause-get
+ doc: Get pause params.
+
+ attribute-set: pause
+
+ do: &pause-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &pause
+ - header
+ - autoneg
+ - rx
+ - tx
+ - stats
+ - stats-src
+ dump: *pause-get-op
+ -
+ name: pause-set
+ doc: Set pause params.
+
+ attribute-set: pause
+
+ do:
+ request:
+ attributes: *pause
+ -
+ name: pause-ntf
+ doc: Notification for change in pause params.
+ notify: pause-get
+ -
+ name: eee-get
+ doc: Get eee params.
+
+ attribute-set: eee
+
+ do: &eee-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &eee
+ - header
+ - modes-ours
+ - modes-peer
+ - active
+ - enabled
+ - tx-lpi-enabled
+ - tx-lpi-timer
+ dump: *eee-get-op
+ -
+ name: eee-set
+ doc: Set eee params.
+
+ attribute-set: eee
+
+ do:
+ request:
+ attributes: *eee
+ -
+ name: eee-ntf
+ doc: Notification for change in eee params.
+ notify: eee-get
+ -
+ name: tsinfo-get
+ doc: Get tsinfo params.
+
+ attribute-set: tsinfo
+
+ do: &tsinfo-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - timestamping
+ - tx-types
+ - rx-filters
+ - phc-index
+ dump: *tsinfo-get-op
+ -
+ name: cable-test-act
+ doc: Cable test.
+
+ attribute-set: cable-test
+
+ do:
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - cable-test-nft-nest
+ -
+ name: cable-test-tdr-act
+ doc: Cable test TDR.
+
+ attribute-set: cable-test-tdr
+
+ do:
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - cable-test-tdr-cfg
+ -
+ name: tunnel-info-get
+ doc: Get tunnel info.
+
+ attribute-set: tunnel-info
+
+ do: &tunnel-info-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - udp-ports
+ dump: *tunnel-info-get-op
+ -
+ name: fec-get
+ doc: Get FEC params.
+
+ attribute-set: fec
+
+ do: &fec-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &fec
+ - header
+ - modes
+ - auto
+ - active
+ - stats
+ dump: *fec-get-op
+ -
+ name: fec-set
+ doc: Set FEC params.
+
+ attribute-set: fec
+
+ do:
+ request:
+ attributes: *fec
+ -
+ name: fec-ntf
+ doc: Notification for change in FEC params.
+ notify: fec-get
+ -
+ name: module-eeprom-get
+ doc: Get module EEPROM params.
+
+ attribute-set: module-eeprom
+
+ do: &module-eeprom-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - offset
+ - length
+ - page
+ - bank
+ - i2c-address
+ - data
+ dump: *module-eeprom-get-op
+ -
+ name: stats-get
+ doc: Get statistics.
+
+ attribute-set: stats
+
+ do: &stats-get-op
+ request:
+ attributes:
+ - header
+ - groups
+ reply:
+ attributes:
+ - header
+ - groups
+ - grp
+ - src
+ dump: *stats-get-op
+ -
+ name: phc-vclocks-get
+ doc: Get PHC VCLOCKs.
+
+ attribute-set: phc-vclocks
+
+ do: &phc-vclocks-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - num
+ dump: *phc-vclocks-get-op
+ -
+ name: module-get
+ doc: Get module params.
+
+ attribute-set: module
+
+ do: &module-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &module
+ - header
+ - power-mode-policy
+ - power-mode
+ dump: *module-get-op
+ -
+ name: module-set
+ doc: Set module params.
+
+ attribute-set: module
+
+ do:
+ request:
+ attributes: *module
+ -
+ name: module-ntf
+ doc: Notification for change in module params.
+ notify: module-get
+ -
+ name: pse-get
+ doc: Get Power Sourcing Equipment params.
+
+ attribute-set: pse
+
+ do: &pse-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &pse
+ - header
+ - admin-state
+ - admin-control
+ - pw-d-status
+ dump: *pse-get-op
+ -
+ name: pse-set
+ doc: Set Power Sourcing Equipment params.
+
+ attribute-set: pse
+
+ do:
+ request:
+ attributes: *pse
+ -
+ name: rss-get
+ doc: Get RSS params.
+
+ attribute-set: rss
+
+ do: &rss-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes:
+ - header
+ - context
+ - hfunc
+ - indir
+ - hkey
+ dump: *rss-get-op
+ -
+ name: plca-get
+ doc: Get PLCA params.
+
+ attribute-set: plca
+
+ do: &plca-get-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: &plca
+ - header
+ - version
+ - enabled
+ - status
+ - node-cnt
+ - node-id
+ - to-tmr
+ - burst-cnt
+ - burst-tmr
+ dump: *plca-get-op
+ -
+ name: plca-set
+ doc: Set PLCA params.
+
+ attribute-set: plca
+
+ do:
+ request:
+ attributes: *plca
+ -
+ name: plca-get-status
+ doc: Get PLCA status params.
+
+ attribute-set: plca
+
+ do: &plca-get-status-op
+ request:
+ attributes:
+ - header
+ reply:
+ attributes: *plca
+ dump: *plca-get-status-op
+ -
+ name: plca-ntf
+ doc: Notification for change in PLCA params.
+ notify: plca-get
-
name: mm-get
doc: Get MAC Merge configuration and state
do: &mm-get-op
request:
- value: 42
attributes:
- header
reply:
- value: 42
attributes:
- header
- pmac-enabled
--- /dev/null
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+
+name: ovs_datapath
+version: 2
+protocol: genetlink-legacy
+
+doc:
+ OVS datapath configuration over generic netlink.
+
+definitions:
+ -
+ name: ovs-header
+ type: struct
+ members:
+ -
+ name: dp-ifindex
+ type: u32
+ -
+ name: user-features
+ type: flags
+ entries:
+ -
+ name: unaligned
+ doc: Allow last Netlink attribute to be unaligned
+ -
+ name: vport-pids
+ doc: Allow datapath to associate multiple Netlink PIDs to each vport
+ -
+ name: tc-recirc-sharing
+ doc: Allow tc offload recirc sharing
+ -
+ name: dispatch-upcall-per-cpu
+ doc: Allow per-cpu dispatch of upcalls
+ -
+ name: datapath-stats
+ type: struct
+ members:
+ -
+ name: hit
+ type: u64
+ -
+ name: missed
+ type: u64
+ -
+ name: lost
+ type: u64
+ -
+ name: flows
+ type: u64
+ -
+ name: megaflow-stats
+ type: struct
+ members:
+ -
+ name: mask-hit
+ type: u64
+ -
+ name: masks
+ type: u32
+ -
+ name: padding
+ type: u32
+ -
+ name: cache-hits
+ type: u64
+ -
+ name: pad1
+ type: u64
+
+attribute-sets:
+ -
+ name: datapath
+ attributes:
+ -
+ name: name
+ type: string
+ -
+ name: upcall-pid
+ doc: upcall pid
+ type: u32
+ -
+ name: stats
+ type: binary
+ struct: datapath-stats
+ -
+ name: megaflow-stats
+ type: binary
+ struct: megaflow-stats
+ -
+ name: user-features
+ type: u32
+ enum: user-features
+ enum-as-flags: true
+ -
+ name: pad
+ type: unused
+ -
+ name: masks-cache-size
+ type: u32
+ -
+ name: per-cpu-pids
+ type: binary
+ sub-type: u32
+
+operations:
+ fixed-header: ovs-header
+ list:
+ -
+ name: dp-get
+ doc: Get / dump OVS data path configuration and state
+ value: 3
+ attribute-set: datapath
+ do: &dp-get-op
+ request:
+ attributes:
+ - name
+ reply:
+ attributes:
+ - name
+ - upcall-pid
+ - stats
+ - megaflow-stats
+ - user-features
+ - masks-cache-size
+ - per-cpu-pids
+ dump: *dp-get-op
+ -
+ name: dp-new
+ doc: Create new OVS data path
+ value: 1
+ attribute-set: datapath
+ do:
+ request:
+ attributes:
+ - dp-ifindex
+ - name
+ - upcall-pid
+ - user-features
+ -
+ name: dp-del
+ doc: Delete existing OVS data path
+ value: 2
+ attribute-set: datapath
+ do:
+ request:
+ attributes:
+ - dp-ifindex
+ - name
+
+mcast-groups:
+ list:
+ -
+ name: ovs_datapath
--- /dev/null
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+
+name: ovs_vport
+version: 2
+protocol: genetlink-legacy
+
+doc:
+ OVS vport configuration over generic netlink.
+
+definitions:
+ -
+ name: ovs-header
+ type: struct
+ members:
+ -
+ name: dp-ifindex
+ type: u32
+ -
+ name: vport-type
+ type: enum
+ entries: [ unspec, netdev, internal, gre, vxlan, geneve ]
+ -
+ name: vport-stats
+ type: struct
+ members:
+ -
+ name: rx-packets
+ type: u64
+ -
+ name: tx-packets
+ type: u64
+ -
+ name: rx-bytes
+ type: u64
+ -
+ name: tx-bytes
+ type: u64
+ -
+ name: rx-errors
+ type: u64
+ -
+ name: tx-errors
+ type: u64
+ -
+ name: rx-dropped
+ type: u64
+ -
+ name: tx-dropped
+ type: u64
+
+attribute-sets:
+ -
+ name: vport-options
+ attributes:
+ -
+ name: dst-port
+ type: u32
+ -
+ name: extension
+ type: u32
+ -
+ name: upcall-stats
+ attributes:
+ -
+ name: success
+ type: u64
+ value: 0
+ -
+ name: fail
+ type: u64
+ -
+ name: vport
+ attributes:
+ -
+ name: port-no
+ type: u32
+ -
+ name: type
+ type: u32
+ enum: vport-type
+ -
+ name: name
+ type: string
+ -
+ name: options
+ type: nest
+ nested-attributes: vport-options
+ -
+ name: upcall-pid
+ type: binary
+ sub-type: u32
+ -
+ name: stats
+ type: binary
+ struct: vport-stats
+ -
+ name: pad
+ type: unused
+ -
+ name: ifindex
+ type: u32
+ -
+ name: netnsid
+ type: u32
+ -
+ name: upcall-stats
+ type: nest
+ nested-attributes: upcall-stats
+
+operations:
+ list:
+ -
+ name: vport-get
+ doc: Get / dump OVS vport configuration and state
+ value: 3
+ attribute-set: vport
+ fixed-header: ovs-header
+ do: &vport-get-op
+ request:
+ attributes:
+ - dp-ifindex
+ - name
+ reply: &dev-all
+ attributes:
+ - dp-ifindex
+ - port-no
+ - type
+ - name
+ - upcall-pid
+ - stats
+ - ifindex
+ - netnsid
+ - upcall-stats
+ dump: *vport-get-op
+
+mcast-groups:
+ list:
+ -
+ name: ovs_vport
enabling interrupts, handling an incoming IRQ in ISR, re-enabling the
softirq and switching context back to softirq.
-More detailed documentation of NAPI may be found on the pages of Linux
-Foundation `<https://wiki.linuxfoundation.org/networking/napi>`_.
+See :ref:`Documentation/networking/napi.rst <napi>` for more information.
Integrating the core to Xilinx Zynq
-----------------------------------
intel/fm10k
intel/igb
intel/igbvf
- intel/ixgb
intel/ixgbe
intel/ixgbevf
intel/i40e
NAPI (Rx polling mode) is supported in the e100 driver.
-See https://wiki.linuxfoundation.org/networking/napi for more
-information on NAPI.
+See :ref:`Documentation/networking/napi.rst <napi>` for more information.
Multiple Interfaces on Same Ethernet Broadcast Network
------------------------------------------------------
For general information, go to the Intel support website at:
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-http://sourceforge.net/projects/e1000
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
=======
For general information, go to the Intel support website at:
-
- http://support.intel.com
-
-or the Intel Wired Networking project hosted by Sourceforge at:
-
- http://sourceforge.net/projects/e1000
+http://support.intel.com
If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
-to the issue to e1000-devel@lists.sf.net
+to the issue to intel-wired-lan@lists.osuosl.org.
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
NAPI
----
NAPI (Rx polling mode) is supported in the i40e driver.
-For more information on NAPI, see
-https://wiki.linuxfoundation.org/networking/napi
+
+See :ref:`Documentation/networking/napi.rst <napi>` for more information.
Flow Control
------------
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
Support
=======
For general information, go to the Intel support website at:
-
https://support.intel.com
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on the supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net
+to intel-wired-lan@lists.osuosl.org.
NAPI
----
+
This driver supports NAPI (Rx polling mode).
-For more information on NAPI, see
-https://wiki.linuxfoundation.org/networking/napi
+See :ref:`Documentation/networking/napi.rst <napi>` for more information.
MACVLAN
-------
For general information, go to the Intel support website at:
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
Trademarks
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
+++ /dev/null
-.. SPDX-License-Identifier: GPL-2.0+
-
-=====================================================================
-Linux Base Driver for 10 Gigabit Intel(R) Ethernet Network Connection
-=====================================================================
-
-October 1, 2018
-
-
-Contents
-========
-
-- In This Release
-- Identifying Your Adapter
-- Command Line Parameters
-- Improving Performance
-- Additional Configurations
-- Known Issues/Troubleshooting
-- Support
-
-
-
-In This Release
-===============
-
-This file describes the ixgb Linux Base Driver for the 10 Gigabit Intel(R)
-Network Connection. This driver includes support for Itanium(R)2-based
-systems.
-
-For questions related to hardware requirements, refer to the documentation
-supplied with your 10 Gigabit adapter. All hardware requirements listed apply
-to use with Linux.
-
-The following features are available in this kernel:
- - Native VLANs
- - Channel Bonding (teaming)
- - SNMP
-
-Channel Bonding documentation can be found in the Linux kernel source:
-/Documentation/networking/bonding.rst
-
-The driver information previously displayed in the /proc filesystem is not
-supported in this release. Alternatively, you can use ethtool (version 1.6
-or later), lspci, and iproute2 to obtain the same information.
-
-Instructions on updating ethtool can be found in the section "Additional
-Configurations" later in this document.
-
-
-Identifying Your Adapter
-========================
-
-The following Intel network adapters are compatible with the drivers in this
-release:
-
-+------------+------------------------------+----------------------------------+
-| Controller | Adapter Name | Physical Layer |
-+============+==============================+==================================+
-| 82597EX | Intel(R) PRO/10GbE LR/SR/CX4 | - 10G Base-LR (fiber) |
-| | Server Adapters | - 10G Base-SR (fiber) |
-| | | - 10G Base-CX4 (copper) |
-+------------+------------------------------+----------------------------------+
-
-For more information on how to identify your adapter, go to the Adapter &
-Driver ID Guide at:
-
- https://support.intel.com
-
-
-Command Line Parameters
-=======================
-
-If the driver is built as a module, the following optional parameters are
-used by entering them on the command line with the modprobe command using
-this syntax::
-
- modprobe ixgb [<option>=<VAL1>,<VAL2>,...]
-
-For example, with two 10GbE PCI adapters, entering::
-
- modprobe ixgb TxDescriptors=80,128
-
-loads the ixgb driver with 80 TX resources for the first adapter and 128 TX
-resources for the second adapter.
-
-The default value for each parameter is generally the recommended setting,
-unless otherwise noted.
-
-Copybreak
----------
-:Valid Range: 0-XXXX
-:Default Value: 256
-
- This is the maximum size of packet that is copied to a new buffer on
- receive.
-
-Debug
------
-:Valid Range: 0-16 (0=none,...,16=all)
-:Default Value: 0
-
- This parameter adjusts the level of debug messages displayed in the
- system logs.
-
-FlowControl
------------
-:Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
-:Default Value: 1 if no EEPROM, otherwise read from EEPROM
-
- This parameter controls the automatic generation(Tx) and response(Rx) to
- Ethernet PAUSE frames. There are hardware bugs associated with enabling
- Tx flow control so beware.
-
-RxDescriptors
--------------
-:Valid Range: 64-4096
-:Default Value: 1024
-
- This value is the number of receive descriptors allocated by the driver.
- Increasing this value allows the driver to buffer more incoming packets.
- Each descriptor is 16 bytes. A receive buffer is also allocated for
- each descriptor and can be either 2048, 4056, 8192, or 16384 bytes,
- depending on the MTU setting. When the MTU size is 1500 or less, the
- receive buffer size is 2048 bytes. When the MTU is greater than 1500 the
- receive buffer size will be either 4056, 8192, or 16384 bytes. The
- maximum MTU size is 16114.
-
-TxDescriptors
--------------
-:Valid Range: 64-4096
-:Default Value: 256
-
- This value is the number of transmit descriptors allocated by the driver.
- Increasing this value allows the driver to queue more transmits. Each
- descriptor is 16 bytes.
-
-RxIntDelay
-----------
-:Valid Range: 0-65535 (0=off)
-:Default Value: 72
-
- This value delays the generation of receive interrupts in units of
- 0.8192 microseconds. Receive interrupt reduction can improve CPU
- efficiency if properly tuned for specific network traffic. Increasing
- this value adds extra latency to frame reception and can end up
- decreasing the throughput of TCP traffic. If the system is reporting
- dropped receives, this value may be set too high, causing the driver to
- run out of available receive descriptors.
-
-TxIntDelay
-----------
-:Valid Range: 0-65535 (0=off)
-:Default Value: 32
-
- This value delays the generation of transmit interrupts in units of
- 0.8192 microseconds. Transmit interrupt reduction can improve CPU
- efficiency if properly tuned for specific network traffic. Increasing
- this value adds extra latency to frame transmission and can end up
- decreasing the throughput of TCP traffic. If this value is set too high,
- it will cause the driver to run out of available transmit descriptors.
-
-XsumRX
-------
-:Valid Range: 0-1
-:Default Value: 1
-
- A value of '1' indicates that the driver should enable IP checksum
- offload for received packets (both UDP and TCP) to the adapter hardware.
-
-RxFCHighThresh
---------------
-:Valid Range: 1,536-262,136 (0x600 - 0x3FFF8, 8 byte granularity)
-:Default Value: 196,608 (0x30000)
-
- Receive Flow control high threshold (when we send a pause frame)
-
-RxFCLowThresh
--------------
-:Valid Range: 64-262,136 (0x40 - 0x3FFF8, 8 byte granularity)
-:Default Value: 163,840 (0x28000)
-
- Receive Flow control low threshold (when we send a resume frame)
-
-FCReqTimeout
-------------
-:Valid Range: 1-65535
-:Default Value: 65535
-
- Flow control request timeout (how long to pause the link partner's tx)
-
-IntDelayEnable
---------------
-:Value Range: 0,1
-:Default Value: 1
-
- Interrupt Delay, 0 disables transmit interrupt delay and 1 enables it.
-
-
-Improving Performance
-=====================
-
-With the 10 Gigabit server adapters, the default Linux configuration will
-very likely limit the total available throughput artificially. There is a set
-of configuration changes that, when applied together, will increase the ability
-of Linux to transmit and receive data. The following enhancements were
-originally acquired from settings published at https://www.spec.org/web99/ for
-various submitted results using Linux.
-
-NOTE:
- These changes are only suggestions, and serve as a starting point for
- tuning your network performance.
-
-The changes are made in three major ways, listed in order of greatest effect:
-
-- Use ip link to modify the mtu (maximum transmission unit) and the txqueuelen
- parameter.
-- Use sysctl to modify /proc parameters (essentially kernel tuning)
-- Use setpci to modify the MMRBC field in PCI-X configuration space to increase
- transmit burst lengths on the bus.
-
-NOTE:
- setpci modifies the adapter's configuration registers to allow it to read
- up to 4k bytes at a time (for transmits). However, for some systems the
- behavior after modifying this register may be undefined (possibly errors of
- some kind). A power-cycle, hard reset or explicitly setting the e6 register
- back to 22 (setpci -d 8086:1a48 e6.b=22) may be required to get back to a
- stable configuration.
-
-- COPY these lines and paste them into ixgb_perf.sh:
-
-::
-
- #!/bin/bash
- echo "configuring network performance , edit this file to change the interface
- or device ID of 10GbE card"
- # set mmrbc to 4k reads, modify only Intel 10GbE device IDs
- # replace 1a48 with appropriate 10GbE device's ID installed on the system,
- # if needed.
- setpci -d 8086:1a48 e6.b=2e
- # set the MTU (max transmission unit) - it requires your switch and clients
- # to change as well.
- # set the txqueuelen
- # your ixgb adapter should be loaded as eth1 for this to work, change if needed
- ip li set dev eth1 mtu 9000 txqueuelen 1000 up
- # call the sysctl utility to modify /proc/sys entries
- sysctl -p ./sysctl_ixgb.conf
-
-- COPY these lines and paste them into sysctl_ixgb.conf:
-
-::
-
- # some of the defaults may be different for your kernel
- # call this file with sysctl -p <this file>
- # these are just suggested values that worked well to increase throughput in
- # several network benchmark tests, your mileage may vary
-
- ### IPV4 specific settings
- # turn TCP timestamp support off, default 1, reduces CPU use
- net.ipv4.tcp_timestamps = 0
- # turn SACK support off, default on
- # on systems with a VERY fast bus -> memory interface this is the big gainer
- net.ipv4.tcp_sack = 0
- # set min/default/max TCP read buffer, default 4096 87380 174760
- net.ipv4.tcp_rmem = 10000000 10000000 10000000
- # set min/pressure/max TCP write buffer, default 4096 16384 131072
- net.ipv4.tcp_wmem = 10000000 10000000 10000000
- # set min/pressure/max TCP buffer space, default 31744 32256 32768
- net.ipv4.tcp_mem = 10000000 10000000 10000000
-
- ### CORE settings (mostly for socket and UDP effect)
- # set maximum receive socket buffer size, default 131071
- net.core.rmem_max = 524287
- # set maximum send socket buffer size, default 131071
- net.core.wmem_max = 524287
- # set default receive socket buffer size, default 65535
- net.core.rmem_default = 524287
- # set default send socket buffer size, default 65535
- net.core.wmem_default = 524287
- # set maximum amount of option memory buffers, default 10240
- net.core.optmem_max = 524287
- # set number of unprocessed input packets before kernel starts dropping them; default 300
- net.core.netdev_max_backlog = 300000
-
-Edit the ixgb_perf.sh script if necessary to change eth1 to whatever interface
-your ixgb driver is using and/or replace '1a48' with appropriate 10GbE device's
-ID installed on the system.
-
-NOTE:
- Unless these scripts are added to the boot process, these changes will
- only last only until the next system reboot.
-
-
-Resolving Slow UDP Traffic
---------------------------
-If your server does not seem to be able to receive UDP traffic as fast as it
-can receive TCP traffic, it could be because Linux, by default, does not set
-the network stack buffers as large as they need to be to support high UDP
-transfer rates. One way to alleviate this problem is to allow more memory to
-be used by the IP stack to store incoming data.
-
-For instance, use the commands::
-
- sysctl -w net.core.rmem_max=262143
-
-and::
-
- sysctl -w net.core.rmem_default=262143
-
-to increase the read buffer memory max and default to 262143 (256k - 1) from
-defaults of max=131071 (128k - 1) and default=65535 (64k - 1). These variables
-will increase the amount of memory used by the network stack for receives, and
-can be increased significantly more if necessary for your application.
-
-
-Additional Configurations
-=========================
-
-Configuring the Driver on Different Distributions
--------------------------------------------------
-Configuring a network driver to load properly when the system is started is
-distribution dependent. Typically, the configuration process involves adding
-an alias line to /etc/modprobe.conf as well as editing other system startup
-scripts and/or configuration files. Many popular Linux distributions ship
-with tools to make these changes for you. To learn the proper way to
-configure a network device for your system, refer to your distribution
-documentation. If during this process you are asked for the driver or module
-name, the name for the Linux Base Driver for the Intel 10GbE Family of
-Adapters is ixgb.
-
-Viewing Link Messages
----------------------
-Link messages will not be displayed to the console if the distribution is
-restricting system messages. In order to see network driver link messages on
-your console, set dmesg to eight by entering the following::
-
- dmesg -n 8
-
-NOTE: This setting is not saved across reboots.
-
-Jumbo Frames
-------------
-The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
-enabled by changing the MTU to a value larger than the default of 1500.
-The maximum value for the MTU is 16114. Use the ip command to
-increase the MTU size. For example::
-
- ip li set dev ethx mtu 9000
-
-The maximum MTU setting for Jumbo Frames is 16114. This value coincides
-with the maximum Jumbo Frames size of 16128.
-
-Ethtool
--------
-The driver utilizes the ethtool interface for driver configuration and
-diagnostics, as well as displaying statistical information. The ethtool
-version 1.6 or later is required for this functionality.
-
-The latest release of ethtool can be found from
-https://www.kernel.org/pub/software/network/ethtool/
-
-NOTE:
- The ethtool version 1.6 only supports a limited set of ethtool options.
- Support for a more complete ethtool feature set can be enabled by
- upgrading to the latest version.
-
-NAPI
-----
-NAPI (Rx polling mode) is supported in the ixgb driver.
-
-See https://wiki.linuxfoundation.org/networking/napi for more information on
-NAPI.
-
-
-Known Issues/Troubleshooting
-============================
-
-NOTE:
- After installing the driver, if your Intel Network Connection is not
- working, verify in the "In This Release" section of the readme that you have
- installed the correct driver.
-
-Cable Interoperability Issue with Fujitsu XENPAK Module in SmartBits Chassis
-----------------------------------------------------------------------------
-Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4
-Server adapter is connected to a Fujitsu XENPAK CX4 module in a SmartBits
-chassis using 15 m/24AWG cable assemblies manufactured by Fujitsu or Leoni.
-The CRC errors may be received either by the Intel(R) PRO/10GbE CX4
-Server adapter or the SmartBits. If this situation occurs using a different
-cable assembly may resolve the issue.
-
-Cable Interoperability Issues with HP Procurve 3400cl Switch Port
------------------------------------------------------------------
-Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4 Server
-adapter is connected to an HP Procurve 3400cl switch port using short cables
-(1 m or shorter). If this situation occurs, using a longer cable may resolve
-the issue.
-
-Excessive CRC errors may be observed using Fujitsu 24AWG cable assemblies that
-Are 10 m or longer or where using a Leoni 15 m/24AWG cable assembly. The CRC
-errors may be received either by the CX4 Server adapter or at the switch. If
-this situation occurs, using a different cable assembly may resolve the issue.
-
-Jumbo Frames System Requirement
--------------------------------
-Memory allocation failures have been observed on Linux systems with 64 MB
-of RAM or less that are running Jumbo Frames. If you are using Jumbo
-Frames, your system may require more than the advertised minimum
-requirement of 64 MB of system memory.
-
-Performance Degradation with Jumbo Frames
------------------------------------------
-Degradation in throughput performance may be observed in some Jumbo frames
-environments. If this is observed, increasing the application's socket buffer
-size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
-See the specific application manual and /usr/src/linux*/Documentation/
-networking/ip-sysctl.txt for more details.
-
-Allocating Rx Buffers when Using Jumbo Frames
----------------------------------------------
-Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
-the available memory is heavily fragmented. This issue may be seen with PCI-X
-adapters or with packet split disabled. This can be reduced or eliminated
-by changing the amount of available memory for receive buffer allocation, by
-increasing /proc/sys/vm/min_free_kbytes.
-
-Multiple Interfaces on Same Ethernet Broadcast Network
-------------------------------------------------------
-Due to the default ARP behavior on Linux, it is not possible to have
-one system on two IP networks in the same Ethernet broadcast domain
-(non-partitioned switch) behave as expected. All Ethernet interfaces
-will respond to IP traffic for any IP address assigned to the system.
-This results in unbalanced receive traffic.
-
-If you have multiple interfaces in a server, do either of the following:
-
- - Turn on ARP filtering by entering::
-
- echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
-
- - Install the interfaces in separate broadcast domains - either in
- different switches or in a switch partitioned to VLANs.
-
-UDP Stress Test Dropped Packet Issue
---------------------------------------
-Under small packets UDP stress test with 10GbE driver, the Linux system
-may drop UDP packets due to the fullness of socket buffers. You may want
-to change the driver's Flow Control variables to the minimum value for
-controlling packet reception.
-
-Tx Hangs Possible Under Stress
-------------------------------
-Under stress conditions, if TX hangs occur, turning off TSO
-"ethtool -K eth0 tso off" may resolve the problem.
-
-
-Support
-=======
-For general information, go to the Intel support website at:
-
-https://www.intel.com/support/
-
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
-If an issue is identified with the released source code on a supported kernel
-with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
Support
=======
For general information, go to the Intel support website at:
-
https://www.intel.com/support/
-or the Intel Wired Networking project hosted by Sourceforge at:
-
-https://sourceforge.net/projects/e1000
-
If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
-to e1000-devel@lists.sf.net.
+to intel-wired-lan@lists.osuosl.org.
- The number of receive packets with CQE compression on ring i [#accel]_.
- Acceleration
- * - `rx[i]_cache_reuse`
- - The number of events of successful reuse of a page from a driver's
- internal page cache.
- - Acceleration
-
- * - `rx[i]_cache_full`
- - The number of events of full internal page cache where driver can't put a
- page back to the cache for recycling (page will be freed).
- - Acceleration
-
- * - `rx[i]_cache_empty`
- - The number of events where cache was empty - no page to give. Driver
- shall allocate new page.
- - Acceleration
-
- * - `rx[i]_cache_busy`
- - The number of events where cache head was busy and cannot be recycled.
- Driver allocated new page.
- - Acceleration
-
- * - `rx[i]_cache_waive`
- - The number of cache evacuation. This can occur due to page move to
- another NUMA node or page was pfmemalloc-ed and should be freed as soon
- as possible.
- - Acceleration
-
* - `rx[i]_arfs_err`
- Number of flow rules that failed to be added to the flow table.
- Error
$ devlink dev eswitch set pci/0000:06:00.0 mode switchdev
+hairpin_num_queues: Number of hairpin queues
+--------------------------------------------
+We refer to a TC NIC rule that involves forwarding as "hairpin".
+
+Hairpin queues are an mlx5 hardware-specific implementation of hardware
+forwarding for such packets.
+
+- Show the number of hairpin queues::
+
+ $ devlink dev param show pci/0000:06:00.0 name hairpin_num_queues
+ pci/0000:06:00.0:
+ name hairpin_num_queues type driver-specific
+ values:
+ cmode driverinit value 2
+
+- Change the number of hairpin queues::
+
+ $ devlink dev param set pci/0000:06:00.0 name hairpin_num_queues value 4 cmode driverinit
+
+hairpin_queue_size: Size of the hairpin queues
+----------------------------------------------
+Control the size of the hairpin queues.
+
+- Show the size of the hairpin queues::
+
+ $ devlink dev param show pci/0000:06:00.0 name hairpin_queue_size
+ pci/0000:06:00.0:
+ name hairpin_queue_size type driver-specific
+ values:
+ cmode driverinit value 1024
+
+- Change the size (in packets) of the hairpin queues::
+
+ $ devlink dev param set pci/0000:06:00.0 name hairpin_queue_size value 512 cmode driverinit
+
Health reporters
================
Default: disabled
+ * - ``hairpin_num_queues``
+ - u32
+ - driverinit
+ - We refer to a TC NIC rule that involves forwarding as "hairpin".
+ Hairpin queues are mlx5 hardware specific implementation for hardware
+ forwarding of such packets.
+
+ Control the number of hairpin queues.
+ * - ``hairpin_queue_size``
+ - u32
+ - driverinit
+ - Control the size (in packets) of the hairpin queues.
The ``mlx5`` driver supports reloading via ``DEVLINK_CMD_RELOAD``
Kernel response contents:
- ==================================== ====== ===========================
- ``ETHTOOL_A_RINGS_HEADER`` nested reply header
- ``ETHTOOL_A_RINGS_RX_MAX`` u32 max size of RX ring
- ``ETHTOOL_A_RINGS_RX_MINI_MAX`` u32 max size of RX mini ring
- ``ETHTOOL_A_RINGS_RX_JUMBO_MAX`` u32 max size of RX jumbo ring
- ``ETHTOOL_A_RINGS_TX_MAX`` u32 max size of TX ring
- ``ETHTOOL_A_RINGS_RX`` u32 size of RX ring
- ``ETHTOOL_A_RINGS_RX_MINI`` u32 size of RX mini ring
- ``ETHTOOL_A_RINGS_RX_JUMBO`` u32 size of RX jumbo ring
- ``ETHTOOL_A_RINGS_TX`` u32 size of TX ring
- ``ETHTOOL_A_RINGS_RX_BUF_LEN`` u32 size of buffers on the ring
- ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` u8 TCP header / data split
- ``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
- ``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
- ``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
- ==================================== ====== ===========================
+ ======================================= ====== ===========================
+ ``ETHTOOL_A_RINGS_HEADER`` nested reply header
+ ``ETHTOOL_A_RINGS_RX_MAX`` u32 max size of RX ring
+ ``ETHTOOL_A_RINGS_RX_MINI_MAX`` u32 max size of RX mini ring
+ ``ETHTOOL_A_RINGS_RX_JUMBO_MAX`` u32 max size of RX jumbo ring
+ ``ETHTOOL_A_RINGS_TX_MAX`` u32 max size of TX ring
+ ``ETHTOOL_A_RINGS_RX`` u32 size of RX ring
+ ``ETHTOOL_A_RINGS_RX_MINI`` u32 size of RX mini ring
+ ``ETHTOOL_A_RINGS_RX_JUMBO`` u32 size of RX jumbo ring
+ ``ETHTOOL_A_RINGS_TX`` u32 size of TX ring
+ ``ETHTOOL_A_RINGS_RX_BUF_LEN`` u32 size of buffers on the ring
+ ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` u8 TCP header / data split
+ ``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
+ ``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
+ ``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` u32 size of TX push buffer
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX`` u32 max size of TX push buffer
+ ======================================= ====== ===========================
``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` indicates whether the device is usable with
page-flipping TCP zero-copy receive (``getsockopt(TCP_ZEROCOPY_RECEIVE)``).
may increase the CPU cost. Drivers may enforce additional per-packet
eligibility checks (e.g. on packet size).
+``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` specifies the maximum number of bytes of a
+transmitted packet a driver can push directly to the underlying device
+('push' mode). Pushing some of the payload bytes to the device has the
+advantages of reducing latency for small packets by avoiding DMA mapping (same
+as ``ETHTOOL_A_RINGS_TX_PUSH`` parameter) as well as allowing the underlying
+device to process packet headers ahead of fetching its payload.
+This can help the device to make fast actions based on the packet's headers.
+This is similar to the "tx-copybreak" parameter, which copies the packet to a
+preallocated DMA memory area instead of mapping new memory. However,
+the tx-push-buff parameter copies the packet directly to the device to allow
+the device to take faster actions on the packet.
+
RINGS_SET
=========
``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` u32 size of TX push buffer
==================================== ====== ===========================
Kernel checks that requested ring sizes do not exceed limits reported by
mpls-sysctl
mptcp-sysctl
multiqueue
+ napi
netconsole
netdev-features
netdevices
--- /dev/null
+.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+
+.. _napi:
+
+====
+NAPI
+====
+
+NAPI is the event handling mechanism used by the Linux networking stack.
+The name NAPI no longer stands for anything in particular [#]_.
+
+In basic operation the device notifies the host about new events
+via an interrupt.
+The host then schedules a NAPI instance to process the events.
+The device may also be polled for events via NAPI without receiving
+interrupts first (:ref:`busy polling<poll>`).
+
+NAPI processing usually happens in the software interrupt context,
+but there is an option to use :ref:`separate kernel threads<threaded>`
+for NAPI processing.
+
+All in all NAPI abstracts away from the drivers the context and configuration
+of event (packet Rx and Tx) processing.
+
+Driver API
+==========
+
+The two most important elements of NAPI are the struct napi_struct
+and the associated poll method. struct napi_struct holds the state
+of the NAPI instance while the method is the driver-specific event
+handler. The method will typically free Tx packets that have been
+transmitted and process newly received packets.
+
+.. _drv_ctrl:
+
+Control API
+-----------
+
+netif_napi_add() and netif_napi_del() add/remove a NAPI instance
+from the system. The instances are attached to the netdevice passed
+as argument (and will be deleted automatically when netdevice is
+unregistered). Instances are added in a disabled state.
+
+napi_enable() and napi_disable() manage the disabled state.
+A disabled NAPI can't be scheduled and its poll method is guaranteed
+to not be invoked. napi_disable() waits for ownership of the NAPI
+instance to be released.
+
+The control APIs are not idempotent. Control API calls are safe against
+concurrent use of datapath APIs but an incorrect sequence of control API
+calls may result in crashes, deadlocks, or race conditions. For example,
+calling napi_disable() multiple times in a row will deadlock.
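+
+As a minimal sketch (the ``mydrv_*`` names and the private structure are
+made up for the example, and ``mydrv_poll`` is the poll method described in
+the next section), the control calls could be wired into the netdevice
+open/stop callbacks along these lines; many drivers add the instance at
+probe time instead:
+
+.. code-block:: c
+
+  static int mydrv_open(struct net_device *dev)
+  {
+      struct mydrv_priv *priv = netdev_priv(dev);
+
+      /* Attach the instance to the netdevice; it starts disabled. */
+      netif_napi_add(dev, &priv->napi, mydrv_poll);
+
+      /* Allow the instance to be scheduled from now on. */
+      napi_enable(&priv->napi);
+      return 0;
+  }
+
+  static int mydrv_stop(struct net_device *dev)
+  {
+      struct mydrv_priv *priv = netdev_priv(dev);
+
+      /* Waits for ownership of the instance to be released. */
+      napi_disable(&priv->napi);
+      netif_napi_del(&priv->napi);
+      return 0;
+  }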
+
+Datapath API
+------------
+
+napi_schedule() is the basic method of scheduling a NAPI poll.
+Drivers should call this function in their interrupt handler
+(see :ref:`drv_sched` for more info). A successful call to napi_schedule()
+will take ownership of the NAPI instance.
+
+Later, after NAPI is scheduled, the driver's poll method will be
+called to process the events/packets. The method takes a ``budget``
+argument - drivers can process completions for any number of Tx
+packets but should only process up to ``budget`` number of
+Rx packets. Rx processing is usually much more expensive.
+
+In other words, it is recommended to ignore the budget argument when
+performing TX buffer reclamation to ensure that the reclamation is not
+arbitrarily bounded; however, it is required to honor the budget argument
+for RX processing.
+
+.. warning::
+
+ The ``budget`` argument may be 0 if core tries to only process Tx completions
+ and no Rx packets.
+
+The poll method returns the amount of work done. If the driver still
+has outstanding work to do (e.g. ``budget`` was exhausted)
+the poll method should return exactly ``budget``. In that case,
+the NAPI instance will be serviced/polled again (without the
+need to be scheduled).
+
+If event processing has been completed (all outstanding packets
+processed) the poll method should call napi_complete_done()
+before returning. napi_complete_done() releases the ownership
+of the instance.
+
+.. warning::
+
+ The case of finishing all events and using exactly ``budget``
+ must be handled carefully. There is no way to report this
+ (rare) condition to the stack, so the driver must either
+ not call napi_complete_done() and wait to be called again,
+ or return ``budget - 1``.
+
+ If the ``budget`` is 0 napi_complete_done() should never be called.
+
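+Putting the rules above together, a poll method could be structured roughly
+as follows. This is only a sketch; the ``mydrv_*`` helpers stand in for
+driver-specific ring processing:
+
+.. code-block:: c
+
+  static int mydrv_poll(struct napi_struct *napi, int budget)
+  {
+      struct mydrv_priv *priv = container_of(napi, struct mydrv_priv, napi);
+      int work_done;
+
+      /* Tx completion processing is not bounded by the budget. */
+      mydrv_clean_tx_ring(priv);
+
+      /* Rx processing must honor the budget, which may be 0. */
+      work_done = mydrv_clean_rx_ring(priv, budget);
+
+      /* Do not complete when the budget was fully consumed; this also
+       * handles budget == 0 and the rare "exactly budget" case above.
+       */
+      if (work_done < budget && napi_complete_done(napi, work_done))
+          mydrv_unmask_rxtx_irq(priv);
+
+      /* If the budget was exhausted, returning it asks to be polled again. */
+      return work_done;
+  }
+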
+Call sequence
+-------------
+
+Drivers should not make assumptions about the exact sequencing
+of calls. The poll method may be called without the driver scheduling
+the instance (unless the instance is disabled). Similarly,
+it's not guaranteed that the poll method will be called, even
+if napi_schedule() succeeded (e.g. if the instance gets disabled).
+
+As mentioned in the :ref:`drv_ctrl` section - napi_disable() and subsequent
+calls to the poll method only wait for the ownership of the instance
+to be released, not for the poll method to exit. This means that
+drivers should avoid accessing any data structures after calling
+napi_complete_done().
+
+.. _drv_sched:
+
+Scheduling and IRQ masking
+--------------------------
+
+Drivers should keep the interrupts masked after scheduling
+the NAPI instance - until NAPI polling finishes any further
+interrupts are unnecessary.
+
+Drivers which have to mask the interrupts explicitly (as opposed
+to IRQ being auto-masked by the device) should use the napi_schedule_prep()
+and __napi_schedule() calls:
+
+.. code-block:: c
+
+ if (napi_schedule_prep(&v->napi)) {
+ mydrv_mask_rxtx_irq(v->idx);
+ /* schedule after masking to avoid races */
+ __napi_schedule(&v->napi);
+ }
+
+IRQ should only be unmasked after a successful call to napi_complete_done():
+
+.. code-block:: c
+
+ if (budget && napi_complete_done(&v->napi, work_done)) {
+ mydrv_unmask_rxtx_irq(v->idx);
+ return min(work_done, budget - 1);
+ }
+
+napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage
+of guarantees given by being invoked in IRQ context (no need to
+mask interrupts). Note that PREEMPT_RT forces all interrupts
+to be threaded so the interrupt may need to be marked ``IRQF_NO_THREAD``
+to avoid issues on real-time kernel configurations.
+
+Instance to queue mapping
+-------------------------
+
+Modern devices have multiple NAPI instances (struct napi_struct) per
+interface. There is no strong requirement on how the instances are
+mapped to queues and interrupts. NAPI is primarily a polling/processing
+abstraction without specific user-facing semantics. That said, most networking
+devices end up using NAPI in fairly similar ways.
+
+NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
+(queue pair is a set of a single Rx and single Tx queue).
+
+In less common cases a NAPI instance may be used for multiple queues
+or Rx and Tx queues can be serviced by separate NAPI instances on a single
+core. Regardless of the queue assignment, however, there is usually still
+a 1:1 mapping between NAPI instances and interrupts.
+
+It's worth noting that the ethtool API uses a "channel" terminology where
+each channel can be either ``rx``, ``tx`` or ``combined``. It's not clear
+what constitutes a channel; the recommended interpretation is to understand
+a channel as an IRQ/NAPI which services queues of a given type. For example,
+a configuration of 1 ``rx``, 1 ``tx`` and 1 ``combined`` channel is expected
+to utilize 3 interrupts, 2 Rx and 2 Tx queues.
+
+User API
+========
+
+User interactions with NAPI depend on NAPI instance ID. The instance IDs
+are only visible to the user through the ``SO_INCOMING_NAPI_ID`` socket option.
+It's not currently possible to query IDs used by a given device.
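+
+For example, an application can read the ID of the instance which delivered
+packets to a connected socket roughly as follows (a fragment rather than a
+complete program; ``fd`` is assumed to be an existing socket):
+
+.. code-block:: c
+
+  unsigned int napi_id = 0;
+  socklen_t len = sizeof(napi_id);
+
+  /* napi_id stays 0 if the socket has not yet received packets
+   * through a NAPI context.
+   */
+  getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len);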
+
+Software IRQ coalescing
+-----------------------
+
+NAPI does not perform any explicit event coalescing by default.
+In most scenarios batching happens due to IRQ coalescing which is done
+by the device. There are cases where software coalescing is helpful.
+
+NAPI can be configured to arm a repoll timer instead of unmasking
+the hardware interrupts as soon as all packets are processed.
+The ``gro_flush_timeout`` sysfs configuration of the netdevice
+is reused to control the delay of the timer, while
+``napi_defer_hard_irqs`` controls the number of consecutive empty polls
+before NAPI gives up and goes back to using hardware IRQs.
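+
+Both knobs are plain per-netdevice sysfs files; as an illustration only
+(``eth0`` stands in for the real interface name, the values are arbitrary
+and error handling is omitted), they could be set from C like this:
+
+.. code-block:: c
+
+  /* gro_flush_timeout takes nanoseconds: re-poll for up to 20us... */
+  FILE *f = fopen("/sys/class/net/eth0/gro_flush_timeout", "w");
+
+  fprintf(f, "20000\n");
+  fclose(f);
+
+  /* ...and give up after two consecutive empty polls. */
+  f = fopen("/sys/class/net/eth0/napi_defer_hard_irqs", "w");
+  fprintf(f, "2\n");
+  fclose(f);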
+
+.. _poll:
+
+Busy polling
+------------
+
+Busy polling allows a user process to check for incoming packets before
+the device interrupt fires. As is the case with any busy polling it trades
+off CPU cycles for lower latency (production uses of NAPI busy polling
+are not well known).
+
+Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
+selected sockets or using the global ``net.core.busy_poll`` and
+``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
+also exists.
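+
+As an illustration of the per-socket option (a fragment; ``fd`` is an
+existing socket and 50 microseconds is only an example value):
+
+.. code-block:: c
+
+  int busy_poll_usecs = 50;  /* spin for up to 50us waiting for packets */
+
+  setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_poll_usecs,
+             sizeof(busy_poll_usecs));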
+
+IRQ mitigation
+---------------
+
+While busy polling is supposed to be used by low latency applications,
+a similar mechanism can be used for IRQ mitigation.
+
+Very high request-per-second applications (especially routing/forwarding
+applications and especially applications using AF_XDP sockets) may not
+want to be interrupted until they finish processing a request or a batch
+of packets.
+
+Such applications can pledge to the kernel that they will perform a busy
+polling operation periodically, and the driver should keep the device IRQs
+permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
+socket option. To avoid system misbehavior the pledge is revoked
+if ``gro_flush_timeout`` passes without any busy poll call.
+
+The NAPI budget for busy polling is lower than the default (which makes
+sense given the low latency intention of normal busy polling). This is
+not the case with IRQ mitigation, however, so the budget can be adjusted
+with the ``SO_BUSY_POLL_BUDGET`` socket option.
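+
+A sketch combining the two socket options (again only a fragment; ``fd`` is
+an existing socket and the budget value is merely an example):
+
+.. code-block:: c
+
+  int prefer = 1;   /* keep device IRQs masked between busy poll calls */
+  int budget = 64;  /* raise the busy polling budget above its low default */
+
+  setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL, &prefer, sizeof(prefer));
+  setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET, &budget, sizeof(budget));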
+
+.. _threaded:
+
+Threaded NAPI
+-------------
+
+Threaded NAPI is an operating mode that uses dedicated kernel
+threads rather than software IRQ context for NAPI processing.
+The configuration is per netdevice and will affect all
+NAPI instances of that device. Each NAPI instance will spawn a separate
+thread (called ``napi/${ifc-name}-${napi-id}``).
+
+It is recommended to pin each kernel thread to a single CPU, the same
+CPU as the CPU which services the interrupt. Note that the mapping
+between IRQs and NAPI instances may not be trivial (and is driver
+dependent). The NAPI instance IDs will be assigned in the opposite
+order than the process IDs of the kernel threads.
+
+Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
+netdev's sysfs directory.
+
+.. rubric:: Footnotes
+
+.. [#] NAPI was originally referred to as New API in 2.4 Linux.
netdev patch review
-------------------
+.. _patch_status:
+
Patch status
~~~~~~~~~~~~
patch is a good way to ensure your patch is ignored or pushed to the
bottom of the priority list.
+Changes requested
+~~~~~~~~~~~~~~~~~
+
+Patches :ref:`marked<patch_status>` as ``Changes Requested`` need
+to be revised. The new version should come with a change log,
+preferably including links to previous postings, for example::
+
+ [PATCH net-next v3] net: make cows go moo
+
+ Even users who don't drink milk appreciate hearing the cows go "moo".
+
+ The amount of mooing will depend on packet rate so should match
+ the diurnal cycle quite well.
+
+  Signed-off-by: Joe Defarmer <joe@barn.org>
+ ---
+ v3:
+ - add a note about time-of-day mooing fluctuation to the commit message
+ v2: https://lore.kernel.org/netdev/123themessageid@barn.org/
+ - fix missing argument in kernel doc for netif_is_bovine()
+ - fix memory leak in netdev_register_cow()
+ v1: https://lore.kernel.org/netdev/456getstheclicks@barn.org/
+
+The commit message should be revised to answer any questions reviewers
+had to ask in previous discussions. Occasionally the update of
+the commit message will be the only change in the new version.
+
Partial resends
~~~~~~~~~~~~~~~
Occasionally a patch series gets applied before receiving critical feedback,
or the wrong version of a series gets applied.
-There is no revert possible, once it is pushed out, it stays like that.
+
+Making the patch disappear once it is pushed out is not possible; the commit
+history in netdev trees is immutable.
Please send incremental versions on top of what has been merged in order to fix
the patches the way they would look like if your latest patch series was to be
merged.
+In cases where a full revert is needed, the revert has to be submitted
+as a patch to the list with a commit message explaining the technical
+problems with the reverted commit. Reverts should be used as a last resort,
+when the original change is completely wrong; incremental fixes are preferred.
+
Stable tree
~~~~~~~~~~~
Structures
----------
-Legacy families can define C structures both to be used as the contents
-of an attribute and as a fixed message header. The plan is to define
-the structs in ``definitions`` and link the appropriate attrs.
+Legacy families can define C structures both to be used as the contents of
+an attribute and as a fixed message header. Structures are defined in
+``definitions`` and referenced in operations or attributes. Note that
+structures defined in YAML are implicitly packed according to C
+conventions. For example, the following struct is 4 bytes, not 6 bytes:
+
+.. code-block:: c
+
+ struct {
+ u8 a;
+ u16 b;
+ u8 c;
+ }
+
+Any padding must be explicitly added and C-like languages should infer the
+need for explicit padding from whether the members are naturally aligned.
+
+Here is the struct definition from above, declared in YAML:
+
+.. code-block:: yaml
+
+ definitions:
+ -
+ name: message-header
+ type: struct
+ members:
+ -
+ name: a
+ type: u8
+ -
+ name: b
+ type: u16
+ -
+ name: c
+ type: u8
+
+Fixed Headers
+~~~~~~~~~~~~~
+
+Fixed message headers can be added to operations using ``fixed-header``.
+The default ``fixed-header`` can be set in ``operations`` and it can be set
+or overridden for each operation.
+
+.. code-block:: yaml
+
+ operations:
+ fixed-header: message-header
+ list:
+ -
+ name: get
+ fixed-header: custom-header
+ attribute-set: message-attrs
+
+Attributes
+~~~~~~~~~~
+
+A ``binary`` attribute can be interpreted as a C structure using a
+``struct`` property with the name of the structure definition. The
+``struct`` property implies ``sub-type: struct`` so it is not necessary to
+specify a sub-type.
+
+.. code-block:: yaml
+
+ attribute-sets:
+ -
+ name: stats-attrs
+ attributes:
+ -
+ name: stats
+ type: binary
+ struct: vport-stats
+
+C Arrays
+--------
+
+Legacy families also use ``binary`` attributes to encapsulate C arrays. The
+``sub-type`` is used to identify the type of scalar to extract.
+
+.. code-block:: yaml
+
+ attributes:
+ -
+ name: ports
+ type: binary
+ sub-type: u32
Multi-message DO
----------------
The validation policy in the kernel is formed by combining the type
definition (``type`` and ``nested-attributes``) and the ``checks``.
+sub-type
+~~~~~~~~
+
+Legacy families have special ways of expressing arrays. ``sub-type`` can be
+used to define the type of array members in case array members are not
+fully defined as attributes (in a bona fide attribute space). For instance
+a C array of u32 values can be specified with ``type: binary`` and
+``sub-type: u32``. Binary types and legacy array formats are described in
+more detail in :doc:`genetlink-legacy`.
+
operations
----------
F: drivers/scsi/BusLogic.*
F: drivers/scsi/FlashPoint.*
+BXCAN CAN NETWORK DRIVER
+M: Dario Binacchi <dario.binacchi@amarulasolutions.com>
+L: linux-can@vger.kernel.org
+S: Maintained
+F: Documentation/devicetree/bindings/net/can/st,stm32-bxcan.yaml
+F: drivers/net/can/bxcan.c
+
C-MEDIA CMI8788 DRIVER
M: Clemens Ladisch <clemens@ladisch.de>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
F: Documentation/networking/mac80211-injection.rst
F: Documentation/networking/mac80211_hwsim/mac80211_hwsim.rst
-F: drivers/net/wireless/mac80211_hwsim.[ch]
+F: drivers/net/wireless/virtual/mac80211_hwsim.[ch]
F: include/net/mac80211.h
F: net/mac80211/
S: Maintained
F: drivers/net/ethernet/mediatek/
+MEDIATEK ETHERNET PCS DRIVER
+M: Alexander Couzens <lynxis@fe80.eu>
+M: Daniel Golle <daniel@makrotopia.org>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/pcs/pcs-mtk-lynxi.c
+F: include/linux/pcs/pcs-mtk-lynxi.h
+
MEDIATEK I2C CONTROLLER DRIVER
M: Qii Wang <qii.wang@mediatek.com>
L: linux-i2c@vger.kernel.org
M: Sean Wang <sean.wang@mediatek.com>
M: Landen Chao <Landen.Chao@mediatek.com>
M: DENG Qingfang <dqfext@gmail.com>
+M: Daniel Golle <daniel@makrotopia.org>
L: netdev@vger.kernel.org
S: Maintained
+F: drivers/net/dsa/mt7530-mdio.c
+F: drivers/net/dsa/mt7530-mmio.c
F: drivers/net/dsa/mt7530.*
F: net/dsa/tag_mtk.c
S: Maintained
F: Documentation/devicetree/bindings/net/nfc/
F: drivers/nfc/
-F: include/linux/platform_data/nfcmrvl.h
F: include/net/nfc/
F: include/uapi/linux/nfc.h
F: net/nfc/
S: Maintained
F: drivers/ptp/ptp_ocp.c
+INTEL PTP DFL ToD DRIVER
+M: Tianfei Zhang <tianfei.zhang@intel.com>
+L: linux-fpga@vger.kernel.org
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/ptp/ptp_dfl_tod.c
+
OPENCORES I2C BUS DRIVER
M: Peter Korsgaard <peter@korsgaard.com>
M: Andrew Lunn <andrew@lunn.ch>
RAYLINK/WEBGEAR 802.11 WIRELESS LAN DRIVER
L: linux-wireless@vger.kernel.org
S: Orphan
-F: drivers/net/wireless/ray*
+F: drivers/net/wireless/legacy/ray*
RC-CORE / LIRC FRAMEWORK
M: Sean Young <sean@mess.org>
M: Jussi Kivilinna <jussi.kivilinna@iki.fi>
L: linux-wireless@vger.kernel.org
S: Maintained
-F: drivers/net/wireless/rndis_wlan.c
+F: drivers/net/wireless/legacy/rndis_wlan.c
USB XHCI DRIVER
M: Mathias Nyman <mathias.nyman@intel.com>
WL3501 WIRELESS PCMCIA CARD DRIVER
L: linux-wireless@vger.kernel.org
S: Odd fixes
-F: drivers/net/wireless/wl3501*
+F: drivers/net/wireless/legacy/wl3501*
WOLFSON MICROELECTRONICS DRIVERS
L: patches@opensource.cirrus.com
slew-rate = <2>;
};
};
+
+ can1_pins_a: can1-0 {
+ pins1 {
+ pinmux = <STM32_PINMUX('B', 9, AF9)>; /* CAN1_TX */
+ };
+ pins2 {
+ pinmux = <STM32_PINMUX('B', 8, AF9)>; /* CAN1_RX */
+ bias-pull-up;
+ };
+ };
+
+ can2_pins_a: can2-0 {
+ pins1 {
+ pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
+ };
+ pins2 {
+ pinmux = <STM32_PINMUX('B', 5, AF9)>; /* CAN2_RX */
+ bias-pull-up;
+ };
+ };
+
+ can2_pins_b: can2-1 {
+ pins1 {
+ pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
+ };
+ pins2 {
+ pinmux = <STM32_PINMUX('B', 12, AF9)>; /* CAN2_RX */
+ bias-pull-up;
+ };
+ };
};
};
};
status = "disabled";
};
+ can1: can@40006400 {
+ compatible = "st,stm32f4-bxcan";
+ reg = <0x40006400 0x200>;
+ interrupts = <19>, <20>, <21>, <22>;
+ interrupt-names = "tx", "rx0", "rx1", "sce";
+ resets = <&rcc STM32F4_APB1_RESET(CAN1)>;
+ clocks = <&rcc 0 STM32F4_APB1_CLOCK(CAN1)>;
+ st,can-primary;
+ st,gcan = <&gcan>;
+ status = "disabled";
+ };
+
+ gcan: gcan@40006600 {
+ compatible = "st,stm32f4-gcan", "syscon";
+ reg = <0x40006600 0x200>;
+ clocks = <&rcc 0 STM32F4_APB1_CLOCK(CAN1)>;
+ };
+
+ can2: can@40006800 {
+ compatible = "st,stm32f4-bxcan";
+ reg = <0x40006800 0x200>;
+ interrupts = <63>, <64>, <65>, <66>;
+ interrupt-names = "tx", "rx0", "rx1", "sce";
+ resets = <&rcc STM32F4_APB1_RESET(CAN2)>;
+ clocks = <&rcc 0 STM32F4_APB1_CLOCK(CAN2)>;
+ st,gcan = <&gcan>;
+ status = "disabled";
+ };
+
dac: dac@40007400 {
compatible = "st,stm32f4-dac-core";
reg = <0x40007400 0x400>;
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
-CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
return prog;
}
+
+/* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
+bool bpf_jit_supports_subprog_tailcalls(void)
+{
+ return true;
+}
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
- select HAVE_EBPF_JIT if !CPU_MICROMIPS && \
- !CPU_DADDI_WORKAROUNDS && \
- !CPU_R4000_WORKAROUNDS && \
- !CPU_R4400_WORKAROUNDS
+ select HAVE_EBPF_JIT if !CPU_MICROMIPS
select HAVE_EXIT_THREAD
select HAVE_FAST_GUP
select HAVE_FTRACE_MCOUNT_RECORD
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
-CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
-CONFIG_IXGB=y
CONFIG_IXGBE=y
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
CONFIG_PCMCIA_FMVJ18X=m
CONFIG_E100=m
CONFIG_E1000=m
-CONFIG_IXGB=m
CONFIG_SKGE=m
CONFIG_SKY2=m
CONFIG_MYRI10GE=m
/* All legal eBPF values are valid */
return true;
case BPF_ADD:
+ if (IS_ENABLED(CONFIG_CPU_DADDI_WORKAROUNDS))
+ return false;
/* imm must be 16 bits */
return imm >= -0x8000 && imm <= 0x7fff;
case BPF_SUB:
+ if (IS_ENABLED(CONFIG_CPU_DADDI_WORKAROUNDS))
+ return false;
/* -imm must be 16 bits */
return imm >= -0x7fff && imm <= 0x8000;
case BPF_AND:
} else {
emit(ctx, dmultu, dst, src);
emit(ctx, mflo, dst);
+ /* Ensure multiplication is completed */
+ if (IS_ENABLED(CONFIG_CPU_R4000_WORKAROUNDS))
+ emit(ctx, mfhi, MIPS_R_ZERO);
}
break;
/* dst = dst / src */
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
-CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
-CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
CONFIG_TIGON3=y
CONFIG_E100=y
CONFIG_E1000=y
-CONFIG_IXGB=m
CONFIG_SUNGEM=y
CONFIG_BROADCOM_PHY=m
CONFIG_MARVELL_PHY=y
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
-CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_MV643XX_ETH=m
CONFIG_SKGE=m
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
-CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
CONFIG_MLX4_EN=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
-CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_I40E=m
# CONFIG_NET_VENDOR_MARVELL is not set
{
__build_epilogue(false, ctx);
}
+
+bool bpf_jit_supports_kfunc_call(void)
+{
+ return true;
+}
dev->id.id == BCMA_CORE_USB20_HOST;
}
-static inline u32 mips_read32(struct bcma_drv_mips *mcore,
- u16 offset)
-{
- return bcma_read32(mcore->core, offset);
-}
-
static u32 bcma_core_mips_irqflag(struct bcma_device *dev)
{
u32 flag;
struct bcma_device *core)
{
struct device_node *node;
- u64 size;
- const __be32 *reg;
+ int ret;
if (!parent->of_node)
return NULL;
for_each_child_of_node(parent->of_node, node) {
- reg = of_get_address(node, 0, &size, NULL);
- if (!reg)
+ struct resource res;
+ ret = of_address_to_resource(node, 0, &res);
+ if (ret)
continue;
- if (of_translate_address(node, reg) == core->addr)
+ if (res.start == core->addr)
return node;
}
return NULL;
return;
}
-static inline unsigned char
-cpld_read_reg(struct hfc_multi *hc, unsigned char reg)
-{
- unsigned char bytein;
-
- cpld_set_reg(hc, reg);
-
- /* Do data pin read low byte */
- HFC_outb(hc, R_GPIO_OUT1, reg);
-
- enablepcibridge(hc);
- bytein = readpcibridge(hc, 1);
- disablepcibridge(hc);
-
- return bytein;
-}
-
static inline void
vpm_write_address(struct hfc_multi *hc, unsigned short addr)
{
cpld_write_reg(hc, 1, 0x01 & (addr >> 8));
}
-static inline unsigned short
-vpm_read_address(struct hfc_multi *c)
-{
- unsigned short addr;
- unsigned short highbit;
-
- addr = cpld_read_reg(c, 0);
- highbit = cpld_read_reg(c, 1);
-
- addr = addr | (highbit << 8);
-
- return addr & 0x1ff;
-}
-
static inline unsigned char
vpm_in(struct hfc_multi *c, int which, unsigned short addr)
{
write_lock_irqsave(&card_lock, flags);
list_del(&card->list);
write_unlock_irqrestore(&card_lock, flags);
- pci_clear_master(card->pdev);
pci_disable_device(card->pdev);
pci_set_drvdata(card->pdev, NULL);
kfree(card);
#define VSC7512_SIO_CTRL_RES_START 0x710700f8
#define VSC7512_SIO_CTRL_RES_SIZE 0x00000100
+#define VSC7512_HSIO_RES_START 0x710d0000
+#define VSC7512_HSIO_RES_SIZE 0x00000128
+
#define VSC7512_ANA_RES_START 0x71880000
#define VSC7512_ANA_RES_SIZE 0x00010000
DEFINE_RES_REG_NAMED(VSC7512_SIO_CTRL_RES_START, VSC7512_SIO_CTRL_RES_SIZE, "gcb_sio"),
};
+static const struct resource vsc7512_serdes_resources[] = {
+ DEFINE_RES_REG_NAMED(VSC7512_HSIO_RES_START, VSC7512_HSIO_RES_SIZE, "hsio"),
+};
+
static const struct resource vsc7512_switch_resources[] = {
DEFINE_RES_REG_NAMED(VSC7512_ANA_RES_START, VSC7512_ANA_RES_SIZE, "ana"),
+ DEFINE_RES_REG_NAMED(VSC7512_HSIO_RES_START, VSC7512_HSIO_RES_SIZE, "hsio"),
DEFINE_RES_REG_NAMED(VSC7512_QS_RES_START, VSC7512_QS_RES_SIZE, "qs"),
DEFINE_RES_REG_NAMED(VSC7512_QSYS_RES_START, VSC7512_QSYS_RES_SIZE, "qsys"),
DEFINE_RES_REG_NAMED(VSC7512_REW_RES_START, VSC7512_REW_RES_SIZE, "rew"),
.num_resources = ARRAY_SIZE(vsc7512_miim1_resources),
.resources = vsc7512_miim1_resources,
}, {
+ .name = "ocelot-serdes",
+ .of_compatible = "mscc,vsc7514-serdes",
+ .num_resources = ARRAY_SIZE(vsc7512_serdes_resources),
+ .resources = vsc7512_serdes_resources,
+ }, {
.name = "ocelot-ext-switch",
.of_compatible = "mscc,vsc7512-switch",
.num_resources = ARRAY_SIZE(vsc7512_switch_resources),
This is a driver for the SoC CAN controller in Atmel's AT91SAM9263
and AT91SAM9X5 processors.
+config CAN_BXCAN
+ tristate "STM32 Basic Extended CAN (bxCAN) devices"
+ depends on OF || ARCH_STM32 || COMPILE_TEST
+ depends on HAS_IOMEM
+ select CAN_RX_OFFLOAD
+ help
+ Say yes here to build support for the STMicroelectronics STM32 basic
+ extended CAN Controller (bxCAN).
+
+ This driver can also be built as a module. If so, the module
+ will be called bxcan.
+
config CAN_CAN327
tristate "Serial / USB serial ELM327 based OBD-II Interfaces (can327)"
depends on TTY
obj-y += softing/
obj-$(CONFIG_CAN_AT91) += at91_can.o
+obj-$(CONFIG_CAN_BXCAN) += bxcan.o
obj-$(CONFIG_CAN_CAN327) += can327.o
obj-$(CONFIG_CAN_CC770) += cc770/
obj-$(CONFIG_CAN_C_CAN) += c_can/
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+//
+// bxcan.c - STM32 Basic Extended CAN controller driver
+//
+// Copyright (c) 2022 Dario Binacchi <dario.binacchi@amarulasolutions.com>
+//
+// NOTE: The ST documentation uses the terms master/slave instead of
+// primary/secondary.
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/bitfield.h>
+#include <linux/can.h>
+#include <linux/can/dev.h>
+#include <linux/can/error.h>
+#include <linux/can/rx-offload.h>
+#include <linux/clk.h>
+#include <linux/ethtool.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#define BXCAN_NAPI_WEIGHT 3
+#define BXCAN_TIMEOUT_US 10000
+
+#define BXCAN_RX_MB_NUM 2
+#define BXCAN_TX_MB_NUM 3
+
+/* Primary control register (MCR) bits */
+#define BXCAN_MCR_RESET BIT(15)
+#define BXCAN_MCR_TTCM BIT(7)
+#define BXCAN_MCR_ABOM BIT(6)
+#define BXCAN_MCR_AWUM BIT(5)
+#define BXCAN_MCR_NART BIT(4)
+#define BXCAN_MCR_RFLM BIT(3)
+#define BXCAN_MCR_TXFP BIT(2)
+#define BXCAN_MCR_SLEEP BIT(1)
+#define BXCAN_MCR_INRQ BIT(0)
+
+/* Primary status register (MSR) bits */
+#define BXCAN_MSR_ERRI BIT(2)
+#define BXCAN_MSR_SLAK BIT(1)
+#define BXCAN_MSR_INAK BIT(0)
+
+/* Transmit status register (TSR) bits */
+#define BXCAN_TSR_RQCP2 BIT(16)
+#define BXCAN_TSR_RQCP1 BIT(8)
+#define BXCAN_TSR_RQCP0 BIT(0)
+
+/* Receive FIFO 0 register (RF0R) bits */
+#define BXCAN_RF0R_RFOM0 BIT(5)
+#define BXCAN_RF0R_FMP0_MASK GENMASK(1, 0)
+
+/* Interrupt enable register (IER) bits */
+#define BXCAN_IER_SLKIE BIT(17)
+#define BXCAN_IER_WKUIE BIT(16)
+#define BXCAN_IER_ERRIE BIT(15)
+#define BXCAN_IER_LECIE BIT(11)
+#define BXCAN_IER_BOFIE BIT(10)
+#define BXCAN_IER_EPVIE BIT(9)
+#define BXCAN_IER_EWGIE BIT(8)
+#define BXCAN_IER_FOVIE1 BIT(6)
+#define BXCAN_IER_FFIE1 BIT(5)
+#define BXCAN_IER_FMPIE1 BIT(4)
+#define BXCAN_IER_FOVIE0 BIT(3)
+#define BXCAN_IER_FFIE0 BIT(2)
+#define BXCAN_IER_FMPIE0 BIT(1)
+#define BXCAN_IER_TMEIE BIT(0)
+
+/* Error status register (ESR) bits */
+#define BXCAN_ESR_REC_MASK GENMASK(31, 24)
+#define BXCAN_ESR_TEC_MASK GENMASK(23, 16)
+#define BXCAN_ESR_LEC_MASK GENMASK(6, 4)
+#define BXCAN_ESR_BOFF BIT(2)
+#define BXCAN_ESR_EPVF BIT(1)
+#define BXCAN_ESR_EWGF BIT(0)
+
+/* Bit timing register (BTR) bits */
+#define BXCAN_BTR_SILM BIT(31)
+#define BXCAN_BTR_LBKM BIT(30)
+#define BXCAN_BTR_SJW_MASK GENMASK(25, 24)
+#define BXCAN_BTR_TS2_MASK GENMASK(22, 20)
+#define BXCAN_BTR_TS1_MASK GENMASK(19, 16)
+#define BXCAN_BTR_BRP_MASK GENMASK(9, 0)
+
+/* TX mailbox identifier register (TIxR, x = 0..2) bits */
+#define BXCAN_TIxR_STID_MASK GENMASK(31, 21)
+#define BXCAN_TIxR_EXID_MASK GENMASK(31, 3)
+#define BXCAN_TIxR_IDE BIT(2)
+#define BXCAN_TIxR_RTR BIT(1)
+#define BXCAN_TIxR_TXRQ BIT(0)
+
+/* TX mailbox data length and timestamp register (TDTxR, x = 0..2) bits */
+#define BXCAN_TDTxR_DLC_MASK GENMASK(3, 0)
+
+/* RX FIFO mailbox identifier register (RIxR, x = 0..1) bits */
+#define BXCAN_RIxR_STID_MASK GENMASK(31, 21)
+#define BXCAN_RIxR_EXID_MASK GENMASK(31, 3)
+#define BXCAN_RIxR_IDE BIT(2)
+#define BXCAN_RIxR_RTR BIT(1)
+
+/* RX FIFO mailbox data length and timestamp register (RDTxR, x = 0..1) bits */
+#define BXCAN_RDTxR_TIME_MASK GENMASK(31, 16)
+#define BXCAN_RDTxR_DLC_MASK GENMASK(3, 0)
+
+#define BXCAN_FMR_REG 0x00
+#define BXCAN_FM1R_REG 0x04
+#define BXCAN_FS1R_REG 0x0c
+#define BXCAN_FFA1R_REG 0x14
+#define BXCAN_FA1R_REG 0x1c
+#define BXCAN_FiR1_REG(b) (0x40 + (b) * 8)
+#define BXCAN_FiR2_REG(b) (0x44 + (b) * 8)
+
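+/* Filter bank used by each instance: bank 0 for the primary (CAN1), bank 14 for the secondary (CAN2) */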
+#define BXCAN_FILTER_ID(primary) (primary ? 0 : 14)
+
+/* Filter primary register (FMR) bits */
+#define BXCAN_FMR_CANSB_MASK GENMASK(13, 8)
+#define BXCAN_FMR_FINIT BIT(0)
+
+enum bxcan_lec_code {
+ BXCAN_LEC_NO_ERROR = 0,
+ BXCAN_LEC_STUFF_ERROR,
+ BXCAN_LEC_FORM_ERROR,
+ BXCAN_LEC_ACK_ERROR,
+ BXCAN_LEC_BIT1_ERROR,
+ BXCAN_LEC_BIT0_ERROR,
+ BXCAN_LEC_CRC_ERROR,
+ BXCAN_LEC_UNUSED
+};
+
+/* Structure of the message buffer */
+struct bxcan_mb {
+ u32 id; /* can identifier */
+ u32 dlc; /* data length control and timestamp */
+ u32 data[2]; /* data */
+};
+
+/* Structure of the hardware registers */
+struct bxcan_regs {
+ u32 mcr; /* 0x00 - primary control */
+ u32 msr; /* 0x04 - primary status */
+ u32 tsr; /* 0x08 - transmit status */
+ u32 rf0r; /* 0x0c - FIFO 0 */
+ u32 rf1r; /* 0x10 - FIFO 1 */
+ u32 ier; /* 0x14 - interrupt enable */
+ u32 esr; /* 0x18 - error status */
+ u32 btr; /* 0x1c - bit timing */
+ u32 reserved0[88]; /* 0x20 */
+ struct bxcan_mb tx_mb[BXCAN_TX_MB_NUM]; /* 0x180 - tx mailbox */
+ struct bxcan_mb rx_mb[BXCAN_RX_MB_NUM]; /* 0x1b0 - rx mailbox */
+};
+
+struct bxcan_priv {
+ struct can_priv can;
+ struct can_rx_offload offload;
+ struct device *dev;
+ struct net_device *ndev;
+
+ struct bxcan_regs __iomem *regs;
+ struct regmap *gcan;
+ int tx_irq;
+ int sce_irq;
+ bool primary;
+ struct clk *clk;
+ spinlock_t rmw_lock; /* lock for read-modify-write operations */
+ unsigned int tx_head;
+ unsigned int tx_tail;
+ u32 timestamp;
+};
+
+static const struct can_bittiming_const bxcan_bittiming_const = {
+ .name = KBUILD_MODNAME,
+ .tseg1_min = 1,
+ .tseg1_max = 16,
+ .tseg2_min = 1,
+ .tseg2_max = 8,
+ .sjw_max = 4,
+ .brp_min = 1,
+ .brp_max = 1024,
+ .brp_inc = 1,
+};
+
+static inline void bxcan_rmw(struct bxcan_priv *priv, void __iomem *addr,
+ u32 clear, u32 set)
+{
+ unsigned long flags;
+ u32 old, val;
+
+ spin_lock_irqsave(&priv->rmw_lock, flags);
+ old = readl(addr);
+ val = (old & ~clear) | set;
+ if (val != old)
+ writel(val, addr);
+
+ spin_unlock_irqrestore(&priv->rmw_lock, flags);
+}
+
+static void bxcan_disable_filters(struct bxcan_priv *priv, bool primary)
+{
+ unsigned int fid = BXCAN_FILTER_ID(primary);
+ u32 fmask = BIT(fid);
+
+ regmap_update_bits(priv->gcan, BXCAN_FA1R_REG, fmask, 0);
+}
+
+static void bxcan_enable_filters(struct bxcan_priv *priv, bool primary)
+{
+ unsigned int fid = BXCAN_FILTER_ID(primary);
+ u32 fmask = BIT(fid);
+
+ /* Filter settings:
+ *
+ * Accept all messages.
+ * Assign filter 0 to CAN1 and filter 14 to CAN2 in identifier
+ * mask mode with 32 bits width.
+ */
+
+ /* Enter filter initialization mode and assign filters to CAN
+ * controllers.
+ */
+ regmap_update_bits(priv->gcan, BXCAN_FMR_REG,
+ BXCAN_FMR_CANSB_MASK | BXCAN_FMR_FINIT,
+ FIELD_PREP(BXCAN_FMR_CANSB_MASK, 14) |
+ BXCAN_FMR_FINIT);
+
+ /* Deactivate filter */
+ regmap_update_bits(priv->gcan, BXCAN_FA1R_REG, fmask, 0);
+
+ /* Two 32-bit registers in identifier mask mode */
+ regmap_update_bits(priv->gcan, BXCAN_FM1R_REG, fmask, 0);
+
+ /* Single 32-bit scale configuration */
+ regmap_update_bits(priv->gcan, BXCAN_FS1R_REG, fmask, fmask);
+
+ /* Assign filter to FIFO 0 */
+ regmap_update_bits(priv->gcan, BXCAN_FFA1R_REG, fmask, 0);
+
+ /* Accept all messages */
+ regmap_write(priv->gcan, BXCAN_FiR1_REG(fid), 0);
+ regmap_write(priv->gcan, BXCAN_FiR2_REG(fid), 0);
+
+ /* Activate filter */
+ regmap_update_bits(priv->gcan, BXCAN_FA1R_REG, fmask, fmask);
+
+ /* Exit filter initialization mode */
+ regmap_update_bits(priv->gcan, BXCAN_FMR_REG, BXCAN_FMR_FINIT, 0);
+}
+
+static inline u8 bxcan_get_tx_head(const struct bxcan_priv *priv)
+{
+ return priv->tx_head % BXCAN_TX_MB_NUM;
+}
+
+static inline u8 bxcan_get_tx_tail(const struct bxcan_priv *priv)
+{
+ return priv->tx_tail % BXCAN_TX_MB_NUM;
+}
+
+static inline u8 bxcan_get_tx_free(const struct bxcan_priv *priv)
+{
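+ /* tx_head and tx_tail are free-running counters; their difference is the number of TX mailboxes currently in use */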
+ return BXCAN_TX_MB_NUM - (priv->tx_head - priv->tx_tail);
+}
+
+static bool bxcan_tx_busy(const struct bxcan_priv *priv)
+{
+ if (bxcan_get_tx_free(priv) > 0)
+ return false;
+
+ netif_stop_queue(priv->ndev);
+
+ /* Memory barrier before checking tx_free (head and tail) */
+ smp_mb();
+
+ if (bxcan_get_tx_free(priv) == 0) {
+ netdev_dbg(priv->ndev,
+ "Stopping tx-queue (tx_head=0x%08x, tx_tail=0x%08x, len=%d).\n",
+ priv->tx_head, priv->tx_tail,
+ priv->tx_head - priv->tx_tail);
+
+ return true;
+ }
+
+ netif_start_queue(priv->ndev);
+
+ return false;
+}
+
+static int bxcan_chip_softreset(struct bxcan_priv *priv)
+{
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 value;
+
+ bxcan_rmw(priv, &regs->mcr, 0, BXCAN_MCR_RESET);
+ return readx_poll_timeout(readl, &regs->msr, value,
+ value & BXCAN_MSR_SLAK, BXCAN_TIMEOUT_US,
+ USEC_PER_SEC);
+}
+
+static int bxcan_enter_init_mode(struct bxcan_priv *priv)
+{
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 value;
+
+ bxcan_rmw(priv, &regs->mcr, 0, BXCAN_MCR_INRQ);
+ return readx_poll_timeout(readl, &regs->msr, value,
+ value & BXCAN_MSR_INAK, BXCAN_TIMEOUT_US,
+ USEC_PER_SEC);
+}
+
+static int bxcan_leave_init_mode(struct bxcan_priv *priv)
+{
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 value;
+
+ bxcan_rmw(priv, &regs->mcr, BXCAN_MCR_INRQ, 0);
+ return readx_poll_timeout(readl, &regs->msr, value,
+ !(value & BXCAN_MSR_INAK), BXCAN_TIMEOUT_US,
+ USEC_PER_SEC);
+}
+
+static int bxcan_enter_sleep_mode(struct bxcan_priv *priv)
+{
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 value;
+
+ bxcan_rmw(priv, &regs->mcr, 0, BXCAN_MCR_SLEEP);
+ return readx_poll_timeout(readl, &regs->msr, value,
+ value & BXCAN_MSR_SLAK, BXCAN_TIMEOUT_US,
+ USEC_PER_SEC);
+}
+
+static int bxcan_leave_sleep_mode(struct bxcan_priv *priv)
+{
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 value;
+
+ bxcan_rmw(priv, &regs->mcr, BXCAN_MCR_SLEEP, 0);
+ return readx_poll_timeout(readl, &regs->msr, value,
+ !(value & BXCAN_MSR_SLAK), BXCAN_TIMEOUT_US,
+ USEC_PER_SEC);
+}
+
+static inline
+struct bxcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
+{
+ return container_of(offload, struct bxcan_priv, offload);
+}
+
+static struct sk_buff *bxcan_mailbox_read(struct can_rx_offload *offload,
+ unsigned int mbxno, u32 *timestamp,
+ bool drop)
+{
+ struct bxcan_priv *priv = rx_offload_to_priv(offload);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ struct bxcan_mb __iomem *mb_regs = &regs->rx_mb[0];
+ struct sk_buff *skb = NULL;
+ struct can_frame *cf;
+ u32 rf0r, id, dlc;
+
+ rf0r = readl(&regs->rf0r);
+ if (unlikely(drop)) {
+ skb = ERR_PTR(-ENOBUFS);
+ goto mark_as_read;
+ }
+
+ if (!(rf0r & BXCAN_RF0R_FMP0_MASK))
+ goto mark_as_read;
+
+ skb = alloc_can_skb(offload->dev, &cf);
+ if (unlikely(!skb)) {
+ skb = ERR_PTR(-ENOMEM);
+ goto mark_as_read;
+ }
+
+ id = readl(&mb_regs->id);
+ if (id & BXCAN_RIxR_IDE)
+ cf->can_id = FIELD_GET(BXCAN_RIxR_EXID_MASK, id) | CAN_EFF_FLAG;
+ else
+ cf->can_id = FIELD_GET(BXCAN_RIxR_STID_MASK, id) & CAN_SFF_MASK;
+
+ dlc = readl(&mb_regs->dlc);
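+ /* Remember the RX timestamp; error frames are later queued with it to keep ordering in the rx-offload queue */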
+ priv->timestamp = FIELD_GET(BXCAN_RDTxR_TIME_MASK, dlc);
+ cf->len = can_cc_dlc2len(FIELD_GET(BXCAN_RDTxR_DLC_MASK, dlc));
+
+ if (id & BXCAN_RIxR_RTR) {
+ cf->can_id |= CAN_RTR_FLAG;
+ } else {
+ int i, j;
+
+ for (i = 0, j = 0; i < cf->len; i += 4, j++)
+ *(u32 *)(cf->data + i) = readl(&mb_regs->data[j]);
+ }
+
+ mark_as_read:
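+ /* Setting RFOM0 releases the FIFO output mailbox and marks the message as read */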
+ rf0r |= BXCAN_RF0R_RFOM0;
+ writel(rf0r, &regs->rf0r);
+ return skb;
+}
+
+static irqreturn_t bxcan_rx_isr(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 rf0r;
+
+ rf0r = readl(&regs->rf0r);
+ if (!(rf0r & BXCAN_RF0R_FMP0_MASK))
+ return IRQ_NONE;
+
+ can_rx_offload_irq_offload_fifo(&priv->offload);
+ can_rx_offload_irq_finish(&priv->offload);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t bxcan_tx_isr(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ struct net_device_stats *stats = &ndev->stats;
+ u32 tsr, rqcp_bit;
+ int idx;
+
+ tsr = readl(&regs->tsr);
+ if (!(tsr & (BXCAN_TSR_RQCP0 | BXCAN_TSR_RQCP1 | BXCAN_TSR_RQCP2)))
+ return IRQ_NONE;
+
+ while (priv->tx_head - priv->tx_tail > 0) {
+ idx = bxcan_get_tx_tail(priv);
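+ /* The RQCP0/1/2 completion flags sit 8 bits apart in TSR, hence the shift by idx * 8 */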
+ rqcp_bit = BXCAN_TSR_RQCP0 << (idx << 3);
+ if (!(tsr & rqcp_bit))
+ break;
+
+ stats->tx_packets++;
+ stats->tx_bytes += can_get_echo_skb(ndev, idx, NULL);
+ priv->tx_tail++;
+ }
+
+ writel(tsr, &regs->tsr);
+
+ if (bxcan_get_tx_free(priv)) {
+ /* Make sure that anybody stopping the queue after
+ * this sees the new tx_ring->tail.
+ */
+ smp_mb();
+ netif_wake_queue(ndev);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void bxcan_handle_state_change(struct net_device *ndev, u32 esr)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ enum can_state new_state = priv->can.state;
+ struct can_berr_counter bec;
+ enum can_state rx_state, tx_state;
+ struct sk_buff *skb;
+ struct can_frame *cf;
+
+ /* Early exit if no error flag is set */
+ if (!(esr & (BXCAN_ESR_EWGF | BXCAN_ESR_EPVF | BXCAN_ESR_BOFF)))
+ return;
+
+ bec.txerr = FIELD_GET(BXCAN_ESR_TEC_MASK, esr);
+ bec.rxerr = FIELD_GET(BXCAN_ESR_REC_MASK, esr);
+
+ if (esr & BXCAN_ESR_BOFF)
+ new_state = CAN_STATE_BUS_OFF;
+ else if (esr & BXCAN_ESR_EPVF)
+ new_state = CAN_STATE_ERROR_PASSIVE;
+ else if (esr & BXCAN_ESR_EWGF)
+ new_state = CAN_STATE_ERROR_WARNING;
+
+ /* state hasn't changed */
+ if (unlikely(new_state == priv->can.state))
+ return;
+
+ skb = alloc_can_err_skb(ndev, &cf);
+
+ tx_state = bec.txerr >= bec.rxerr ? new_state : 0;
+ rx_state = bec.txerr <= bec.rxerr ? new_state : 0;
+ can_change_state(ndev, cf, tx_state, rx_state);
+
+ if (new_state == CAN_STATE_BUS_OFF) {
+ can_bus_off(ndev);
+ } else if (skb) {
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = bec.txerr;
+ cf->data[7] = bec.rxerr;
+ }
+
+ if (skb) {
+ int err;
+
+ err = can_rx_offload_queue_timestamp(&priv->offload, skb,
+ priv->timestamp);
+ if (err)
+ ndev->stats.rx_fifo_errors++;
+ }
+}
+
+static void bxcan_handle_bus_err(struct net_device *ndev, u32 esr)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ enum bxcan_lec_code lec_code;
+ struct can_frame *cf;
+ struct sk_buff *skb;
+
+ lec_code = FIELD_GET(BXCAN_ESR_LEC_MASK, esr);
+
+ /* Early exit if no lec update or no error.
+ * No lec update means that no CAN bus event has been detected
+ * since CPU wrote BXCAN_LEC_UNUSED value to status reg.
+ */
+ if (lec_code == BXCAN_LEC_UNUSED || lec_code == BXCAN_LEC_NO_ERROR)
+ return;
+
+ /* Common for all type of bus errors */
+ priv->can.can_stats.bus_error++;
+
+ /* Propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(ndev, &cf);
+ if (skb)
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (lec_code) {
+ case BXCAN_LEC_STUFF_ERROR:
+ netdev_dbg(ndev, "Stuff error\n");
+ ndev->stats.rx_errors++;
+ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ break;
+
+ case BXCAN_LEC_FORM_ERROR:
+ netdev_dbg(ndev, "Form error\n");
+ ndev->stats.rx_errors++;
+ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_FORM;
+ break;
+
+ case BXCAN_LEC_ACK_ERROR:
+ netdev_dbg(ndev, "Ack error\n");
+ ndev->stats.tx_errors++;
+ if (skb) {
+ cf->can_id |= CAN_ERR_ACK;
+ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+ }
+ break;
+
+ case BXCAN_LEC_BIT1_ERROR:
+ netdev_dbg(ndev, "Bit error (recessive)\n");
+ ndev->stats.tx_errors++;
+ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_BIT1;
+ break;
+
+ case BXCAN_LEC_BIT0_ERROR:
+ netdev_dbg(ndev, "Bit error (dominant)\n");
+ ndev->stats.tx_errors++;
+ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_BIT0;
+ break;
+
+ case BXCAN_LEC_CRC_ERROR:
+ netdev_dbg(ndev, "CRC error\n");
+ ndev->stats.rx_errors++;
+ if (skb) {
+ cf->data[2] |= CAN_ERR_PROT_BIT;
+ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+ }
+ break;
+
+ default:
+ break;
+ }
+
+ if (skb) {
+ int err;
+
+ err = can_rx_offload_queue_timestamp(&priv->offload, skb,
+ priv->timestamp);
+ if (err)
+ ndev->stats.rx_fifo_errors++;
+ }
+}
+
+static irqreturn_t bxcan_state_change_isr(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 msr, esr;
+
+ msr = readl(&regs->msr);
+ if (!(msr & BXCAN_MSR_ERRI))
+ return IRQ_NONE;
+
+ esr = readl(&regs->esr);
+ bxcan_handle_state_change(ndev, esr);
+
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ bxcan_handle_bus_err(ndev, esr);
+
+ msr |= BXCAN_MSR_ERRI;
+ writel(msr, &regs->msr);
+ can_rx_offload_irq_finish(&priv->offload);
+
+ return IRQ_HANDLED;
+}
+
+static int bxcan_chip_start(struct net_device *ndev)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ struct can_bittiming *bt = &priv->can.bittiming;
+ u32 clr, set;
+ int err;
+
+ err = bxcan_chip_softreset(priv);
+ if (err) {
+ netdev_err(ndev, "failed to reset chip, error %pe\n",
+ ERR_PTR(err));
+ return err;
+ }
+
+ err = bxcan_leave_sleep_mode(priv);
+ if (err) {
+ netdev_err(ndev, "failed to leave sleep mode, error %pe\n",
+ ERR_PTR(err));
+ goto failed_leave_sleep;
+ }
+
+ err = bxcan_enter_init_mode(priv);
+ if (err) {
+ netdev_err(ndev, "failed to enter init mode, error %pe\n",
+ ERR_PTR(err));
+ goto failed_enter_init;
+ }
+
+ /* MCR
+ *
+ * select request order priority
+ * enable time triggered mode
+ * bus-off state left on sw request
+ * sleep mode left on sw request
+ * retransmit automatically on error
+ * do not lock RX FIFO on overrun
+ */
+ bxcan_rmw(priv, &regs->mcr,
+ BXCAN_MCR_ABOM | BXCAN_MCR_AWUM | BXCAN_MCR_NART |
+ BXCAN_MCR_RFLM, BXCAN_MCR_TTCM | BXCAN_MCR_TXFP);
+
+ /* Bit timing register settings */
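+ /* BRP, TS1, TS2 and SJW register fields hold (value - 1) */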
+ set = FIELD_PREP(BXCAN_BTR_BRP_MASK, bt->brp - 1) |
+ FIELD_PREP(BXCAN_BTR_TS1_MASK, bt->phase_seg1 +
+ bt->prop_seg - 1) |
+ FIELD_PREP(BXCAN_BTR_TS2_MASK, bt->phase_seg2 - 1) |
+ FIELD_PREP(BXCAN_BTR_SJW_MASK, bt->sjw - 1);
+
+ /* loopback + silent mode put the controller in test mode,
+ * useful for hot self-test
+ */
+ if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)
+ set |= BXCAN_BTR_LBKM;
+
+ if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
+ set |= BXCAN_BTR_SILM;
+
+ bxcan_rmw(priv, &regs->btr, BXCAN_BTR_SILM | BXCAN_BTR_LBKM |
+ BXCAN_BTR_BRP_MASK | BXCAN_BTR_TS1_MASK | BXCAN_BTR_TS2_MASK |
+ BXCAN_BTR_SJW_MASK, set);
+
+ bxcan_enable_filters(priv, priv->primary);
+
+ /* Clear all internal status */
+ priv->tx_head = 0;
+ priv->tx_tail = 0;
+
+ err = bxcan_leave_init_mode(priv);
+ if (err) {
+ netdev_err(ndev, "failed to leave init mode, error %pe\n",
+ ERR_PTR(err));
+ goto failed_leave_init;
+ }
+
+ /* Set a `lec` value so that we can check for updates later */
+ bxcan_rmw(priv, &regs->esr, BXCAN_ESR_LEC_MASK,
+ FIELD_PREP(BXCAN_ESR_LEC_MASK, BXCAN_LEC_UNUSED));
+
+ /* IER
+ *
+ * Enable interrupt for:
+ * bus-off
+ * passive error
+ * warning error
+ * last error code
+ * RX FIFO pending message
+ * TX mailbox empty
+ */
+ clr = BXCAN_IER_WKUIE | BXCAN_IER_SLKIE | BXCAN_IER_FOVIE1 |
+ BXCAN_IER_FFIE1 | BXCAN_IER_FMPIE1 | BXCAN_IER_FOVIE0 |
+ BXCAN_IER_FFIE0;
+ set = BXCAN_IER_ERRIE | BXCAN_IER_BOFIE | BXCAN_IER_EPVIE |
+ BXCAN_IER_EWGIE | BXCAN_IER_FMPIE0 | BXCAN_IER_TMEIE;
+
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ set |= BXCAN_IER_LECIE;
+ else
+ clr |= BXCAN_IER_LECIE;
+
+ bxcan_rmw(priv, &regs->ier, clr, set);
+
+ priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ return 0;
+
+failed_leave_init:
+failed_enter_init:
+failed_leave_sleep:
+ bxcan_chip_softreset(priv);
+ return err;
+}
+
+static int bxcan_open(struct net_device *ndev)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ int err;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err) {
+ netdev_err(ndev, "failed to enable clock, error %pe\n",
+ ERR_PTR(err));
+ return err;
+ }
+
+ err = open_candev(ndev);
+ if (err) {
+ netdev_err(ndev, "open_candev() failed, error %pe\n",
+ ERR_PTR(err));
+ goto out_disable_clock;
+ }
+
+ can_rx_offload_enable(&priv->offload);
+ err = request_irq(ndev->irq, bxcan_rx_isr, IRQF_SHARED, ndev->name,
+ ndev);
+ if (err) {
+ netdev_err(ndev, "failed to register rx irq(%d), error %pe\n",
+ ndev->irq, ERR_PTR(err));
+ goto out_close_candev;
+ }
+
+ err = request_irq(priv->tx_irq, bxcan_tx_isr, IRQF_SHARED, ndev->name,
+ ndev);
+ if (err) {
+ netdev_err(ndev, "failed to register tx irq(%d), error %pe\n",
+ priv->tx_irq, ERR_PTR(err));
+ goto out_free_rx_irq;
+ }
+
+ err = request_irq(priv->sce_irq, bxcan_state_change_isr, IRQF_SHARED,
+ ndev->name, ndev);
+ if (err) {
+ netdev_err(ndev, "failed to register sce irq(%d), error %pe\n",
+ priv->sce_irq, ERR_PTR(err));
+ goto out_free_tx_irq;
+ }
+
+ err = bxcan_chip_start(ndev);
+ if (err)
+ goto out_free_sce_irq;
+
+ netif_start_queue(ndev);
+ return 0;
+
+out_free_sce_irq:
+ free_irq(priv->sce_irq, ndev);
+out_free_tx_irq:
+ free_irq(priv->tx_irq, ndev);
+out_free_rx_irq:
+ free_irq(ndev->irq, ndev);
+out_close_candev:
+ can_rx_offload_disable(&priv->offload);
+ close_candev(ndev);
+out_disable_clock:
+ clk_disable_unprepare(priv->clk);
+ return err;
+}
+
+static void bxcan_chip_stop(struct net_device *ndev)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+
+ /* disable all interrupts */
+ bxcan_rmw(priv, &regs->ier, BXCAN_IER_SLKIE | BXCAN_IER_WKUIE |
+ BXCAN_IER_ERRIE | BXCAN_IER_LECIE | BXCAN_IER_BOFIE |
+ BXCAN_IER_EPVIE | BXCAN_IER_EWGIE | BXCAN_IER_FOVIE1 |
+ BXCAN_IER_FFIE1 | BXCAN_IER_FMPIE1 | BXCAN_IER_FOVIE0 |
+ BXCAN_IER_FFIE0 | BXCAN_IER_FMPIE0 | BXCAN_IER_TMEIE, 0);
+ bxcan_disable_filters(priv, priv->primary);
+ bxcan_enter_sleep_mode(priv);
+ priv->can.state = CAN_STATE_STOPPED;
+}
+
+static int bxcan_stop(struct net_device *ndev)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+
+ netif_stop_queue(ndev);
+ bxcan_chip_stop(ndev);
+ free_irq(ndev->irq, ndev);
+ free_irq(priv->tx_irq, ndev);
+ free_irq(priv->sce_irq, ndev);
+ can_rx_offload_disable(&priv->offload);
+ close_candev(ndev);
+ clk_disable_unprepare(priv->clk);
+ return 0;
+}
+
+static netdev_tx_t bxcan_start_xmit(struct sk_buff *skb,
+ struct net_device *ndev)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct can_frame *cf = (struct can_frame *)skb->data;
+ struct bxcan_regs __iomem *regs = priv->regs;
+ struct bxcan_mb __iomem *mb_regs;
+ unsigned int idx;
+ u32 id;
+ int i, j;
+
+ if (can_dropped_invalid_skb(ndev, skb))
+ return NETDEV_TX_OK;
+
+ if (bxcan_tx_busy(priv))
+ return NETDEV_TX_BUSY;
+
+ idx = bxcan_get_tx_head(priv);
+ priv->tx_head++;
+ if (bxcan_get_tx_free(priv) == 0)
+ netif_stop_queue(ndev);
+
+ mb_regs = &regs->tx_mb[idx];
+ if (cf->can_id & CAN_EFF_FLAG)
+ id = FIELD_PREP(BXCAN_TIxR_EXID_MASK, cf->can_id) |
+ BXCAN_TIxR_IDE;
+ else
+ id = FIELD_PREP(BXCAN_TIxR_STID_MASK, cf->can_id);
+
+ if (cf->can_id & CAN_RTR_FLAG) { /* Remote transmission request */
+ id |= BXCAN_TIxR_RTR;
+ } else {
+ for (i = 0, j = 0; i < cf->len; i += 4, j++)
+ writel(*(u32 *)(cf->data + i), &mb_regs->data[j]);
+ }
+
+ writel(FIELD_PREP(BXCAN_TDTxR_DLC_MASK, cf->len), &mb_regs->dlc);
+
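+ /* Queue the echo skb before requesting transmission; the TX-done interrupt may fire as soon as TXRQ is set */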
+ can_put_echo_skb(skb, ndev, idx, 0);
+
+ /* Start transmission */
+ writel(id | BXCAN_TIxR_TXRQ, &mb_regs->id);
+
+ return NETDEV_TX_OK;
+}
+
+static const struct net_device_ops bxcan_netdev_ops = {
+ .ndo_open = bxcan_open,
+ .ndo_stop = bxcan_stop,
+ .ndo_start_xmit = bxcan_start_xmit,
+ .ndo_change_mtu = can_change_mtu,
+};
+
+static const struct ethtool_ops bxcan_ethtool_ops = {
+ .get_ts_info = ethtool_op_get_ts_info,
+};
+
+static int bxcan_do_set_mode(struct net_device *ndev, enum can_mode mode)
+{
+ int err;
+
+ switch (mode) {
+ case CAN_MODE_START:
+ err = bxcan_chip_start(ndev);
+ if (err)
+ return err;
+
+ netif_wake_queue(ndev);
+ break;
+
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int bxcan_get_berr_counter(const struct net_device *ndev,
+ struct can_berr_counter *bec)
+{
+ struct bxcan_priv *priv = netdev_priv(ndev);
+ struct bxcan_regs __iomem *regs = priv->regs;
+ u32 esr;
+ int err;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ return err;
+
+ esr = readl(&regs->esr);
+ bec->txerr = FIELD_GET(BXCAN_ESR_TEC_MASK, esr);
+ bec->rxerr = FIELD_GET(BXCAN_ESR_REC_MASK, esr);
+ clk_disable_unprepare(priv->clk);
+ return 0;
+}
+
+static int bxcan_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
+ struct net_device *ndev;
+ struct bxcan_priv *priv;
+ struct clk *clk = NULL;
+ void __iomem *regs;
+ struct regmap *gcan;
+ bool primary;
+ int err, rx_irq, tx_irq, sce_irq;
+
+ regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(regs)) {
+ dev_err(dev, "failed to get base address\n");
+ return PTR_ERR(regs);
+ }
+
+ gcan = syscon_regmap_lookup_by_phandle(np, "st,gcan");
+ if (IS_ERR(gcan)) {
+ dev_err(dev, "failed to get shared memory base address\n");
+ return PTR_ERR(gcan);
+ }
+
+ primary = of_property_read_bool(np, "st,can-primary");
+ clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(clk)) {
+ dev_err(dev, "failed to get clock\n");
+ return PTR_ERR(clk);
+ }
+
+ rx_irq = platform_get_irq_byname(pdev, "rx0");
+ if (rx_irq < 0) {
+ dev_err(dev, "failed to get rx0 irq\n");
+ return rx_irq;
+ }
+
+ tx_irq = platform_get_irq_byname(pdev, "tx");
+ if (tx_irq < 0) {
+ dev_err(dev, "failed to get tx irq\n");
+ return tx_irq;
+ }
+
+ sce_irq = platform_get_irq_byname(pdev, "sce");
+ if (sce_irq < 0) {
+ dev_err(dev, "failed to get sce irq\n");
+ return sce_irq;
+ }
+
+ ndev = alloc_candev(sizeof(struct bxcan_priv), BXCAN_TX_MB_NUM);
+ if (!ndev) {
+ dev_err(dev, "alloc_candev() failed\n");
+ return -ENOMEM;
+ }
+
+ priv = netdev_priv(ndev);
+ platform_set_drvdata(pdev, ndev);
+ SET_NETDEV_DEV(ndev, dev);
+ ndev->netdev_ops = &bxcan_netdev_ops;
+ ndev->ethtool_ops = &bxcan_ethtool_ops;
+ ndev->irq = rx_irq;
+ ndev->flags |= IFF_ECHO;
+
+ priv->dev = dev;
+ priv->ndev = ndev;
+ priv->regs = regs;
+ priv->gcan = gcan;
+ priv->clk = clk;
+ priv->tx_irq = tx_irq;
+ priv->sce_irq = sce_irq;
+ priv->primary = primary;
+ priv->can.clock.freq = clk_get_rate(clk);
+ spin_lock_init(&priv->rmw_lock);
+ priv->tx_head = 0;
+ priv->tx_tail = 0;
+ priv->can.bittiming_const = &bxcan_bittiming_const;
+ priv->can.do_set_mode = bxcan_do_set_mode;
+ priv->can.do_get_berr_counter = bxcan_get_berr_counter;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
+ CAN_CTRLMODE_LISTENONLY | CAN_CTRLMODE_BERR_REPORTING;
+
+ priv->offload.mailbox_read = bxcan_mailbox_read;
+ err = can_rx_offload_add_fifo(ndev, &priv->offload, BXCAN_NAPI_WEIGHT);
+ if (err) {
+ dev_err(dev, "failed to add FIFO rx_offload\n");
+ goto out_free_candev;
+ }
+
+ err = register_candev(ndev);
+ if (err) {
+ dev_err(dev, "failed to register netdev\n");
+ goto out_can_rx_offload_del;
+ }
+
+ dev_info(dev, "clk: %d Hz, IRQs: %d, %d, %d\n", priv->can.clock.freq,
+ tx_irq, rx_irq, sce_irq);
+ return 0;
+
+out_can_rx_offload_del:
+ can_rx_offload_del(&priv->offload);
+out_free_candev:
+ free_candev(ndev);
+ return err;
+}
+
+static int bxcan_remove(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct bxcan_priv *priv = netdev_priv(ndev);
+
+ unregister_candev(ndev);
+ clk_disable_unprepare(priv->clk);
+ can_rx_offload_del(&priv->offload);
+ free_candev(ndev);
+ return 0;
+}
+
+static int __maybe_unused bxcan_suspend(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct bxcan_priv *priv = netdev_priv(ndev);
+
+ if (!netif_running(ndev))
+ return 0;
+
+ netif_stop_queue(ndev);
+ netif_device_detach(ndev);
+
+ bxcan_enter_sleep_mode(priv);
+ priv->can.state = CAN_STATE_SLEEPING;
+ clk_disable_unprepare(priv->clk);
+ return 0;
+}
+
+static int __maybe_unused bxcan_resume(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct bxcan_priv *priv = netdev_priv(ndev);
+
+ if (!netif_running(ndev))
+ return 0;
+
+ clk_prepare_enable(priv->clk);
+ bxcan_leave_sleep_mode(priv);
+ priv->can.state = CAN_STATE_ERROR_ACTIVE;
+
+ netif_device_attach(ndev);
+ netif_start_queue(ndev);
+ return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(bxcan_pm_ops, bxcan_suspend, bxcan_resume);
+
+static const struct of_device_id bxcan_of_match[] = {
+ {.compatible = "st,stm32f4-bxcan"},
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, bxcan_of_match);
+
+static struct platform_driver bxcan_driver = {
+ .driver = {
+ .name = KBUILD_MODNAME,
+ .pm = &bxcan_pm_ops,
+ .of_match_table = bxcan_of_match,
+ },
+ .probe = bxcan_probe,
+ .remove = bxcan_remove,
+};
+
+module_platform_driver(bxcan_driver);
+
+MODULE_AUTHOR("Dario Binacchi <dario.binacchi@amarulasolutions.com>");
+MODULE_DESCRIPTION("STMicroelectronics Basic Extended CAN controller driver");
+MODULE_LICENSE("GPL");
pci_iounmap(pdev, addr);
out_release_regions:
pci_disable_msi(pdev);
- pci_clear_master(pdev);
pci_release_regions(pdev);
out_disable_device:
pci_disable_device(pdev);
pci_iounmap(pdev, addr);
pci_disable_msi(pdev);
- pci_clear_master(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
}
err_pci_iounmap_bar1:
pci_iounmap(pdev, addr);
err_release_regions:
- if (msi_ok) {
+ if (msi_ok)
pci_disable_msi(pdev);
- pci_clear_master(pdev);
- }
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
pci_iounmap(pdev, bdata->bar1_base);
- if (bdata->use_msi) {
+ if (bdata->use_msi)
pci_disable_msi(pdev);
- pci_clear_master(pdev);
- }
pci_release_regions(pdev);
pci_disable_device(pdev);
free_irq(pcie->pci->irq, pcie);
- pci_clear_master(pdev);
pci_iounmap(pdev, pcie->reg_base);
pci_release_regions(pdev);
pci_disable_device(pdev);
/* Don't re-enable interrupts if the driver had a fatal error
* (e.g., FIFO read failure).
*/
- if (work_done >= 0)
- m_can_enable_all_interrupts(cdev);
+ if (work_done < 0)
+ m_can_disable_all_interrupts(cdev);
return work_done;
}
return IRQ_NONE;
/* ACK all irqs */
- if (ir & IR_ALL_INT)
- m_can_write(cdev, M_CAN_IR, ir);
+ m_can_write(cdev, M_CAN_IR, ir);
if (cdev->ops->clear_interrupts)
cdev->ops->clear_interrupts(cdev);
*/
if ((ir & IR_RF0N) || (ir & IR_ERR_ALL_30X)) {
cdev->irqstatus = ir;
- m_can_disable_all_interrupts(cdev);
- if (!cdev->is_peripheral)
+ if (!cdev->is_peripheral) {
+ m_can_disable_all_interrupts(cdev);
napi_schedule(&cdev->napi);
- else if (m_can_rx_peripheral(dev, ir) < 0)
+ } else if (m_can_rx_peripheral(dev, ir) < 0) {
goto out_fail;
+ }
}
if (cdev->version == 30) {
static int m_can_chip_config(struct net_device *dev)
{
struct m_can_classdev *cdev = netdev_priv(dev);
+ u32 interrupts = IR_ALL_INT;
u32 cccr, test;
int err;
return err;
}
+ /* Disable unused interrupts */
+ interrupts &= ~(IR_ARA | IR_ELO | IR_DRX | IR_TEFF | IR_TEFW | IR_TFE |
+ IR_TCF | IR_HPM | IR_RF1F | IR_RF1W | IR_RF1N |
+ IR_RF0F | IR_RF0W);
+
m_can_config_endisable(cdev, true);
/* RX Buffer/FIFO Element Size 64 bytes data field */
m_can_write(cdev, M_CAN_TEST, test);
/* Enable interrupts */
- m_can_write(cdev, M_CAN_IR, IR_ALL_INT);
- if (!(cdev->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING))
+ if (!(cdev->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)) {
if (cdev->version == 30)
- m_can_write(cdev, M_CAN_IE, IR_ALL_INT &
- ~(IR_ERR_LEC_30X));
+ interrupts &= ~(IR_ERR_LEC_30X);
else
- m_can_write(cdev, M_CAN_IE, IR_ALL_INT &
- ~(IR_ERR_LEC_31X));
- else
- m_can_write(cdev, M_CAN_IE, IR_ALL_INT);
+ interrupts &= ~(IR_ERR_LEC_31X);
+ }
+ m_can_write(cdev, M_CAN_IE, interrupts);
/* route all interrupts to INT0 */
m_can_write(cdev, M_CAN_ILS, ILS_ALL_INT0);
cdev->tx_skb = NULL;
destroy_workqueue(cdev->tx_wq);
cdev->tx_wq = NULL;
- }
-
- if (cdev->is_peripheral)
can_rx_offload_disable(&cdev->offload);
+ }
close_candev(dev);
#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_device.h>
+#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/types.h>
struct net_device *ndev;
struct rcar_canfd_global *gpriv; /* Controller reference */
void __iomem *base; /* Register base address */
+ struct phy *transceiver; /* Optional transceiver */
struct napi_struct napi;
u32 tx_head; /* Incremented on xmit */
u32 tx_tail; /* Incremented on xmit done */
struct rcar_canfd_global *gpriv = priv->gpriv;
int err;
+ err = phy_power_on(priv->transceiver);
+ if (err) {
+ netdev_err(ndev, "failed to power on PHY: %pe\n", ERR_PTR(err));
+ return err;
+ }
+
/* Peripheral clock is already enabled in probe */
err = clk_prepare_enable(gpriv->can_clk);
if (err) {
- netdev_err(ndev, "failed to enable CAN clock, error %d\n", err);
- goto out_clock;
+ netdev_err(ndev, "failed to enable CAN clock: %pe\n", ERR_PTR(err));
+ goto out_phy;
}
err = open_candev(ndev);
if (err) {
- netdev_err(ndev, "open_candev() failed, error %d\n", err);
+ netdev_err(ndev, "open_candev() failed: %pe\n", ERR_PTR(err));
goto out_can_clock;
}
close_candev(ndev);
out_can_clock:
clk_disable_unprepare(gpriv->can_clk);
-out_clock:
+out_phy:
+ phy_power_off(priv->transceiver);
return err;
}
napi_disable(&priv->napi);
clk_disable_unprepare(gpriv->can_clk);
close_candev(ndev);
+ phy_power_off(priv->transceiver);
return 0;
}
};
static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
- u32 fcan_freq)
+ u32 fcan_freq, struct phy *transceiver)
{
const struct rcar_canfd_hw_info *info = gpriv->info;
struct platform_device *pdev = gpriv->pdev;
int err = -ENODEV;
ndev = alloc_candev(sizeof(*priv), RCANFD_FIFO_DEPTH);
- if (!ndev) {
- dev_err(dev, "alloc_candev() failed\n");
+ if (!ndev)
return -ENOMEM;
- }
+
priv = netdev_priv(ndev);
ndev->netdev_ops = &rcar_canfd_netdev_ops;
ndev->flags |= IFF_ECHO;
priv->ndev = ndev;
priv->base = gpriv->base;
+ priv->transceiver = transceiver;
priv->channel = ch;
priv->gpriv = gpriv;
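+ /* Limit the bitrate to what the attached CAN transceiver supports */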
+ if (transceiver)
+ priv->can.bitrate_max = transceiver->attrs.max_link_rate;
priv->can.clock.freq = fcan_freq;
dev_info(dev, "can_clk rate is %u\n", priv->can.clock.freq);
rcar_canfd_channel_err_interrupt, 0,
irq_name, priv);
if (err) {
- dev_err(dev, "devm_request_irq CH Err(%d) failed, error %d\n",
- err_irq, err);
+ dev_err(dev, "devm_request_irq CH Err %d failed: %pe\n",
+ err_irq, ERR_PTR(err));
goto fail;
}
irq_name = devm_kasprintf(dev, GFP_KERNEL, "canfd.ch%d_trx",
rcar_canfd_channel_tx_interrupt, 0,
irq_name, priv);
if (err) {
- dev_err(dev, "devm_request_irq Tx (%d) failed, error %d\n",
- tx_irq, err);
+ dev_err(dev, "devm_request_irq Tx %d failed: %pe\n",
+ tx_irq, ERR_PTR(err));
goto fail;
}
}
gpriv->ch[priv->channel] = priv;
err = register_candev(ndev);
if (err) {
- dev_err(dev, "register_candev() failed, error %d\n", err);
+ dev_err(dev, "register_candev() failed: %pe\n", ERR_PTR(err));
goto fail_candev;
}
dev_info(dev, "device registered (channel %u)\n", priv->channel);
static int rcar_canfd_probe(struct platform_device *pdev)
{
+ struct phy *transceivers[RCANFD_NUM_CHANNELS] = { NULL, };
const struct rcar_canfd_hw_info *info;
struct device *dev = &pdev->dev;
void __iomem *addr;
for (i = 0; i < info->max_channels; ++i) {
name[7] = '0' + i;
of_child = of_get_child_by_name(dev->of_node, name);
- if (of_child && of_device_is_available(of_child))
+ if (of_child && of_device_is_available(of_child)) {
channels_mask |= BIT(i);
+ transceivers[i] = devm_of_phy_optional_get(dev,
+ of_child, NULL);
+ }
of_node_put(of_child);
+ if (IS_ERR(transceivers[i]))
+ return PTR_ERR(transceivers[i]);
}
if (info->shared_global_irqs) {
rcar_canfd_channel_interrupt, 0,
"canfd.ch_int", gpriv);
if (err) {
- dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
- ch_irq, err);
+ dev_err(dev, "devm_request_irq %d failed: %pe\n",
+ ch_irq, ERR_PTR(err));
goto fail_dev;
}
err = devm_request_irq(dev, g_irq, rcar_canfd_global_interrupt,
0, "canfd.g_int", gpriv);
if (err) {
- dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
- g_irq, err);
+ dev_err(dev, "devm_request_irq %d failed: %pe\n",
+ g_irq, ERR_PTR(err));
goto fail_dev;
}
} else {
"canfd.g_recc", gpriv);
if (err) {
- dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
- g_recc_irq, err);
+ dev_err(dev, "devm_request_irq %d failed: %pe\n",
+ g_recc_irq, ERR_PTR(err));
goto fail_dev;
}
rcar_canfd_global_err_interrupt, 0,
"canfd.g_err", gpriv);
if (err) {
- dev_err(dev, "devm_request_irq(%d) failed, error %d\n",
- g_err_irq, err);
+ dev_err(dev, "devm_request_irq %d failed: %pe\n",
+ g_err_irq, ERR_PTR(err));
goto fail_dev;
}
}
/* Enable peripheral clock for register access */
err = clk_prepare_enable(gpriv->clkp);
if (err) {
- dev_err(dev, "failed to enable peripheral clock, error %d\n",
- err);
+ dev_err(dev, "failed to enable peripheral clock: %pe\n",
+ ERR_PTR(err));
goto fail_reset;
}
err = rcar_canfd_reset_controller(gpriv);
if (err) {
- dev_err(dev, "reset controller failed\n");
+ dev_err(dev, "reset controller failed: %pe\n", ERR_PTR(err));
goto fail_clk;
}
}
for_each_set_bit(ch, &gpriv->channels_mask, info->max_channels) {
- err = rcar_canfd_channel_probe(gpriv, ch, fcan_freq);
+ err = rcar_canfd_channel_probe(gpriv, ch, fcan_freq,
+ transceivers[ch]);
if (err)
goto fail_channel;
}
};
/* Main message type used between library and application */
-struct __packed esd_usb_msg {
- union {
- struct header_msg hdr;
- struct version_msg version;
- struct version_reply_msg version_reply;
- struct rx_msg rx;
- struct tx_msg tx;
- struct tx_done_msg txdone;
- struct set_baudrate_msg setbaud;
- struct id_filter_msg filter;
- } msg;
+union __packed esd_usb_msg {
+ struct header_msg hdr;
+ struct version_msg version;
+ struct version_reply_msg version_reply;
+ struct rx_msg rx;
+ struct tx_msg tx;
+ struct tx_done_msg txdone;
+ struct set_baudrate_msg setbaud;
+ struct id_filter_msg filter;
};
static struct usb_device_id esd_usb_table[] = {
};
static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
- struct esd_usb_msg *msg)
+ union esd_usb_msg *msg)
{
struct net_device_stats *stats = &priv->netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
- u32 id = le32_to_cpu(msg->msg.rx.id) & ESD_IDMASK;
+ u32 id = le32_to_cpu(msg->rx.id) & ESD_IDMASK;
if (id == ESD_EV_CAN_ERROR_EXT) {
- u8 state = msg->msg.rx.ev_can_err_ext.status;
- u8 ecc = msg->msg.rx.ev_can_err_ext.ecc;
- u8 rxerr = msg->msg.rx.ev_can_err_ext.rec;
- u8 txerr = msg->msg.rx.ev_can_err_ext.tec;
+ u8 state = msg->rx.ev_can_err_ext.status;
+ u8 ecc = msg->rx.ev_can_err_ext.ecc;
+
+ priv->bec.rxerr = msg->rx.ev_can_err_ext.rec;
+ priv->bec.txerr = msg->rx.ev_can_err_ext.tec;
netdev_dbg(priv->netdev,
"CAN_ERR_EV_EXT: dlc=%#02x state=%02x ecc=%02x rec=%02x tec=%02x\n",
- msg->msg.rx.dlc, state, ecc, rxerr, txerr);
+ msg->rx.dlc, state, ecc,
+ priv->bec.rxerr, priv->bec.txerr);
+
+ /* if berr-reporting is off, only pass through on state change ... */
+ if (!(priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&
+ state == priv->old_state)
+ return;
skb = alloc_can_err_skb(priv->netdev, &cf);
+ if (!skb)
+ stats->rx_dropped++;
if (state != priv->old_state) {
enum can_state tx_state, rx_state;
break;
default:
new_state = CAN_STATE_ERROR_ACTIVE;
- txerr = 0;
- rxerr = 0;
+ priv->bec.txerr = 0;
+ priv->bec.rxerr = 0;
break;
}
if (new_state != priv->can.state) {
- tx_state = (txerr >= rxerr) ? new_state : 0;
- rx_state = (txerr <= rxerr) ? new_state : 0;
+ tx_state = (priv->bec.txerr >= priv->bec.rxerr) ? new_state : 0;
+ rx_state = (priv->bec.txerr <= priv->bec.rxerr) ? new_state : 0;
can_change_state(priv->netdev, cf,
tx_state, rx_state);
}
cf->data[3] = ecc & SJA1000_ECC_SEG;
}
- priv->bec.txerr = txerr;
- priv->bec.rxerr = rxerr;
-
if (skb) {
cf->can_id |= CAN_ERR_CNT;
- cf->data[6] = txerr;
- cf->data[7] = rxerr;
+ cf->data[6] = priv->bec.txerr;
+ cf->data[7] = priv->bec.rxerr;
netif_rx(skb);
- } else {
- stats->rx_dropped++;
}
}
}
static void esd_usb_rx_can_msg(struct esd_usb_net_priv *priv,
- struct esd_usb_msg *msg)
+ union esd_usb_msg *msg)
{
struct net_device_stats *stats = &priv->netdev->stats;
struct can_frame *cf;
if (!netif_device_present(priv->netdev))
return;
- id = le32_to_cpu(msg->msg.rx.id);
+ id = le32_to_cpu(msg->rx.id);
if (id & ESD_EVENT) {
esd_usb_rx_event(priv, msg);
}
cf->can_id = id & ESD_IDMASK;
- can_frame_set_cc_len(cf, msg->msg.rx.dlc & ~ESD_RTR,
+ can_frame_set_cc_len(cf, msg->rx.dlc & ~ESD_RTR,
priv->can.ctrlmode);
if (id & ESD_EXTID)
cf->can_id |= CAN_EFF_FLAG;
- if (msg->msg.rx.dlc & ESD_RTR) {
+ if (msg->rx.dlc & ESD_RTR) {
cf->can_id |= CAN_RTR_FLAG;
} else {
for (i = 0; i < cf->len; i++)
- cf->data[i] = msg->msg.rx.data[i];
+ cf->data[i] = msg->rx.data[i];
stats->rx_bytes += cf->len;
}
}
static void esd_usb_tx_done_msg(struct esd_usb_net_priv *priv,
- struct esd_usb_msg *msg)
+ union esd_usb_msg *msg)
{
struct net_device_stats *stats = &priv->netdev->stats;
struct net_device *netdev = priv->netdev;
if (!netif_device_present(netdev))
return;
- context = &priv->tx_contexts[msg->msg.txdone.hnd & (MAX_TX_URBS - 1)];
+ context = &priv->tx_contexts[msg->txdone.hnd & (MAX_TX_URBS - 1)];
- if (!msg->msg.txdone.status) {
+ if (!msg->txdone.status) {
stats->tx_packets++;
stats->tx_bytes += can_get_echo_skb(netdev, context->echo_index,
NULL);
}
while (pos < urb->actual_length) {
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
- msg = (struct esd_usb_msg *)(urb->transfer_buffer + pos);
+ msg = (union esd_usb_msg *)(urb->transfer_buffer + pos);
- switch (msg->msg.hdr.cmd) {
+ switch (msg->hdr.cmd) {
case CMD_CAN_RX:
- if (msg->msg.rx.net >= dev->net_count) {
+ if (msg->rx.net >= dev->net_count) {
dev_err(dev->udev->dev.parent, "format error\n");
break;
}
- esd_usb_rx_can_msg(dev->nets[msg->msg.rx.net], msg);
+ esd_usb_rx_can_msg(dev->nets[msg->rx.net], msg);
break;
case CMD_CAN_TX:
- if (msg->msg.txdone.net >= dev->net_count) {
+ if (msg->txdone.net >= dev->net_count) {
dev_err(dev->udev->dev.parent, "format error\n");
break;
}
- esd_usb_tx_done_msg(dev->nets[msg->msg.txdone.net],
+ esd_usb_tx_done_msg(dev->nets[msg->txdone.net],
msg);
break;
}
- pos += msg->msg.hdr.len << 2;
+ pos += msg->hdr.len << 2;
if (pos > urb->actual_length) {
dev_err(dev->udev->dev.parent, "format error\n");
struct esd_tx_urb_context *context = urb->context;
struct esd_usb_net_priv *priv;
struct net_device *netdev;
- size_t size = sizeof(struct esd_usb_msg);
+ size_t size = sizeof(union esd_usb_msg);
WARN_ON(!context);
}
static DEVICE_ATTR_RO(nets);
-static int esd_usb_send_msg(struct esd_usb *dev, struct esd_usb_msg *msg)
+static int esd_usb_send_msg(struct esd_usb *dev, union esd_usb_msg *msg)
{
int actual_length;
return usb_bulk_msg(dev->udev,
usb_sndbulkpipe(dev->udev, 2),
msg,
- msg->msg.hdr.len << 2,
+ msg->hdr.len << 2,
&actual_length,
1000);
}
static int esd_usb_wait_msg(struct esd_usb *dev,
- struct esd_usb_msg *msg)
+ union esd_usb_msg *msg)
{
int actual_length;
{
struct esd_usb *dev = priv->usb;
struct net_device *netdev = priv->netdev;
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
int err, i;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
* the number of the starting bitmask (0..64) to the filter.option
* field followed by only some bitmasks.
*/
- msg->msg.hdr.cmd = CMD_IDADD;
- msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
- msg->msg.filter.net = priv->index;
- msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
+ msg->hdr.cmd = CMD_IDADD;
+ msg->hdr.len = 2 + ESD_MAX_ID_SEGMENT;
+ msg->filter.net = priv->index;
+ msg->filter.option = ESD_ID_ENABLE; /* start with segment 0 */
for (i = 0; i < ESD_MAX_ID_SEGMENT; i++)
- msg->msg.filter.mask[i] = cpu_to_le32(0xffffffff);
+ msg->filter.mask[i] = cpu_to_le32(0xffffffff);
/* enable 29bit extended IDs */
- msg->msg.filter.mask[ESD_MAX_ID_SEGMENT] = cpu_to_le32(0x00000001);
+ msg->filter.mask[ESD_MAX_ID_SEGMENT] = cpu_to_le32(0x00000001);
err = esd_usb_send_msg(dev, msg);
if (err)
struct esd_tx_urb_context *context = NULL;
struct net_device_stats *stats = &netdev->stats;
struct can_frame *cf = (struct can_frame *)skb->data;
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
struct urb *urb;
u8 *buf;
int i, err;
int ret = NETDEV_TX_OK;
- size_t size = sizeof(struct esd_usb_msg);
+ size_t size = sizeof(union esd_usb_msg);
if (can_dev_dropped_skb(netdev, skb))
return NETDEV_TX_OK;
goto nobufmem;
}
- msg = (struct esd_usb_msg *)buf;
+ msg = (union esd_usb_msg *)buf;
- msg->msg.hdr.len = 3; /* minimal length */
- msg->msg.hdr.cmd = CMD_CAN_TX;
- msg->msg.tx.net = priv->index;
- msg->msg.tx.dlc = can_get_cc_dlc(cf, priv->can.ctrlmode);
- msg->msg.tx.id = cpu_to_le32(cf->can_id & CAN_ERR_MASK);
+ msg->hdr.len = 3; /* minimal length */
+ msg->hdr.cmd = CMD_CAN_TX;
+ msg->tx.net = priv->index;
+ msg->tx.dlc = can_get_cc_dlc(cf, priv->can.ctrlmode);
+ msg->tx.id = cpu_to_le32(cf->can_id & CAN_ERR_MASK);
if (cf->can_id & CAN_RTR_FLAG)
- msg->msg.tx.dlc |= ESD_RTR;
+ msg->tx.dlc |= ESD_RTR;
if (cf->can_id & CAN_EFF_FLAG)
- msg->msg.tx.id |= cpu_to_le32(ESD_EXTID);
+ msg->tx.id |= cpu_to_le32(ESD_EXTID);
for (i = 0; i < cf->len; i++)
- msg->msg.tx.data[i] = cf->data[i];
+ msg->tx.data[i] = cf->data[i];
- msg->msg.hdr.len += (cf->len + 3) >> 2;
+ msg->hdr.len += (cf->len + 3) >> 2;
for (i = 0; i < MAX_TX_URBS; i++) {
if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) {
context->echo_index = i;
/* hnd must not be 0 - MSB is stripped in txdone handling */
- msg->msg.tx.hnd = 0x80000000 | i; /* returned in TX done message */
+ msg->tx.hnd = 0x80000000 | i; /* returned in TX done message */
usb_fill_bulk_urb(urb, dev->udev, usb_sndbulkpipe(dev->udev, 2), buf,
- msg->msg.hdr.len << 2,
+ msg->hdr.len << 2,
esd_usb_write_bulk_callback, context);
urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
static int esd_usb_close(struct net_device *netdev)
{
struct esd_usb_net_priv *priv = netdev_priv(netdev);
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
int i;
msg = kmalloc(sizeof(*msg), GFP_KERNEL);
return -ENOMEM;
/* Disable all IDs (see esd_usb_start()) */
- msg->msg.hdr.cmd = CMD_IDADD;
- msg->msg.hdr.len = 2 + ESD_MAX_ID_SEGMENT;
- msg->msg.filter.net = priv->index;
- msg->msg.filter.option = ESD_ID_ENABLE; /* start with segment 0 */
+ msg->hdr.cmd = CMD_IDADD;
+ msg->hdr.len = 2 + ESD_MAX_ID_SEGMENT;
+ msg->filter.net = priv->index;
+ msg->filter.option = ESD_ID_ENABLE; /* start with segment 0 */
for (i = 0; i <= ESD_MAX_ID_SEGMENT; i++)
- msg->msg.filter.mask[i] = 0;
+ msg->filter.mask[i] = 0;
if (esd_usb_send_msg(priv->usb, msg) < 0)
netdev_err(netdev, "sending idadd message failed\n");
/* set CAN controller to reset mode */
- msg->msg.hdr.len = 2;
- msg->msg.hdr.cmd = CMD_SETBAUD;
- msg->msg.setbaud.net = priv->index;
- msg->msg.setbaud.rsvd = 0;
- msg->msg.setbaud.baud = cpu_to_le32(ESD_USB_NO_BAUDRATE);
+ msg->hdr.len = 2;
+ msg->hdr.cmd = CMD_SETBAUD;
+ msg->setbaud.net = priv->index;
+ msg->setbaud.rsvd = 0;
+ msg->setbaud.baud = cpu_to_le32(ESD_USB_NO_BAUDRATE);
if (esd_usb_send_msg(priv->usb, msg) < 0)
netdev_err(netdev, "sending setbaud message failed\n");
{
struct esd_usb_net_priv *priv = netdev_priv(netdev);
struct can_bittiming *bt = &priv->can.bittiming;
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
int err;
u32 canbtr;
int sjw_shift;
if (!msg)
return -ENOMEM;
- msg->msg.hdr.len = 2;
- msg->msg.hdr.cmd = CMD_SETBAUD;
- msg->msg.setbaud.net = priv->index;
- msg->msg.setbaud.rsvd = 0;
- msg->msg.setbaud.baud = cpu_to_le32(canbtr);
+ msg->hdr.len = 2;
+ msg->hdr.cmd = CMD_SETBAUD;
+ msg->setbaud.net = priv->index;
+ msg->setbaud.rsvd = 0;
+ msg->setbaud.baud = cpu_to_le32(canbtr);
netdev_info(netdev, "setting BTR=%#x\n", canbtr);
priv->can.state = CAN_STATE_STOPPED;
priv->can.ctrlmode_supported = CAN_CTRLMODE_LISTENONLY |
- CAN_CTRLMODE_CC_LEN8_DLC;
+ CAN_CTRLMODE_CC_LEN8_DLC |
+ CAN_CTRLMODE_BERR_REPORTING;
if (le16_to_cpu(dev->udev->descriptor.idProduct) ==
USB_CANUSBM_PRODUCT_ID)
const struct usb_device_id *id)
{
struct esd_usb *dev;
- struct esd_usb_msg *msg;
+ union esd_usb_msg *msg;
int i, err;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
}
/* query number of CAN interfaces (nets) */
- msg->msg.hdr.cmd = CMD_VERSION;
- msg->msg.hdr.len = 2;
- msg->msg.version.rsvd = 0;
- msg->msg.version.flags = 0;
- msg->msg.version.drv_version = 0;
+ msg->hdr.cmd = CMD_VERSION;
+ msg->hdr.len = 2;
+ msg->version.rsvd = 0;
+ msg->version.flags = 0;
+ msg->version.drv_version = 0;
err = esd_usb_send_msg(dev, msg);
if (err < 0) {
goto free_msg;
}
- dev->net_count = (int)msg->msg.version_reply.nets;
- dev->version = le32_to_cpu(msg->msg.version_reply.version);
+ dev->net_count = (int)msg->version_reply.nets;
+ dev->version = le32_to_cpu(msg->version_reply.version);
if (device_create_file(&intf->dev, &dev_attr_firmware))
dev_err(&intf->dev,
#include "kvaser_usb.h"
/* Kvaser USB vendor id. */
-#define KVASER_VENDOR_ID 0x0bfd
+#define KVASER_VENDOR_ID 0x0bfd
/* Kvaser Leaf USB devices product ids */
-#define USB_LEAF_DEVEL_PRODUCT_ID 10
-#define USB_LEAF_LITE_PRODUCT_ID 11
-#define USB_LEAF_PRO_PRODUCT_ID 12
-#define USB_LEAF_SPRO_PRODUCT_ID 14
-#define USB_LEAF_PRO_LS_PRODUCT_ID 15
-#define USB_LEAF_PRO_SWC_PRODUCT_ID 16
-#define USB_LEAF_PRO_LIN_PRODUCT_ID 17
-#define USB_LEAF_SPRO_LS_PRODUCT_ID 18
-#define USB_LEAF_SPRO_SWC_PRODUCT_ID 19
-#define USB_MEMO2_DEVEL_PRODUCT_ID 22
-#define USB_MEMO2_HSHS_PRODUCT_ID 23
-#define USB_UPRO_HSHS_PRODUCT_ID 24
-#define USB_LEAF_LITE_GI_PRODUCT_ID 25
-#define USB_LEAF_PRO_OBDII_PRODUCT_ID 26
-#define USB_MEMO2_HSLS_PRODUCT_ID 27
-#define USB_LEAF_LITE_CH_PRODUCT_ID 28
-#define USB_BLACKBIRD_SPRO_PRODUCT_ID 29
-#define USB_OEM_MERCURY_PRODUCT_ID 34
-#define USB_OEM_LEAF_PRODUCT_ID 35
-#define USB_CAN_R_PRODUCT_ID 39
-#define USB_LEAF_LITE_V2_PRODUCT_ID 288
-#define USB_MINI_PCIE_HS_PRODUCT_ID 289
-#define USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID 290
-#define USB_USBCAN_LIGHT_2HS_PRODUCT_ID 291
-#define USB_MINI_PCIE_2HS_PRODUCT_ID 292
-#define USB_USBCAN_R_V2_PRODUCT_ID 294
-#define USB_LEAF_LIGHT_R_V2_PRODUCT_ID 295
-#define USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 296
+#define USB_LEAF_DEVEL_PRODUCT_ID 0x000a
+#define USB_LEAF_LITE_PRODUCT_ID 0x000b
+#define USB_LEAF_PRO_PRODUCT_ID 0x000c
+#define USB_LEAF_SPRO_PRODUCT_ID 0x000e
+#define USB_LEAF_PRO_LS_PRODUCT_ID 0x000f
+#define USB_LEAF_PRO_SWC_PRODUCT_ID 0x0010
+#define USB_LEAF_PRO_LIN_PRODUCT_ID 0x0011
+#define USB_LEAF_SPRO_LS_PRODUCT_ID 0x0012
+#define USB_LEAF_SPRO_SWC_PRODUCT_ID 0x0013
+#define USB_MEMO2_DEVEL_PRODUCT_ID 0x0016
+#define USB_MEMO2_HSHS_PRODUCT_ID 0x0017
+#define USB_UPRO_HSHS_PRODUCT_ID 0x0018
+#define USB_LEAF_LITE_GI_PRODUCT_ID 0x0019
+#define USB_LEAF_PRO_OBDII_PRODUCT_ID 0x001a
+#define USB_MEMO2_HSLS_PRODUCT_ID 0x001b
+#define USB_LEAF_LITE_CH_PRODUCT_ID 0x001c
+#define USB_BLACKBIRD_SPRO_PRODUCT_ID 0x001d
+#define USB_OEM_MERCURY_PRODUCT_ID 0x0022
+#define USB_OEM_LEAF_PRODUCT_ID 0x0023
+#define USB_CAN_R_PRODUCT_ID 0x0027
+#define USB_LEAF_LITE_V2_PRODUCT_ID 0x0120
+#define USB_MINI_PCIE_HS_PRODUCT_ID 0x0121
+#define USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID 0x0122
+#define USB_USBCAN_LIGHT_2HS_PRODUCT_ID 0x0123
+#define USB_MINI_PCIE_2HS_PRODUCT_ID 0x0124
+#define USB_USBCAN_R_V2_PRODUCT_ID 0x0126
+#define USB_LEAF_LIGHT_R_V2_PRODUCT_ID 0x0127
+#define USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 0x0128
/* Kvaser USBCan-II devices product ids */
-#define USB_USBCAN_REVB_PRODUCT_ID 2
-#define USB_VCI2_PRODUCT_ID 3
-#define USB_USBCAN2_PRODUCT_ID 4
-#define USB_MEMORATOR_PRODUCT_ID 5
+#define USB_USBCAN_REVB_PRODUCT_ID 0x0002
+#define USB_VCI2_PRODUCT_ID 0x0003
+#define USB_USBCAN2_PRODUCT_ID 0x0004
+#define USB_MEMORATOR_PRODUCT_ID 0x0005
/* Kvaser Minihydra USB devices product ids */
-#define USB_BLACKBIRD_V2_PRODUCT_ID 258
-#define USB_MEMO_PRO_5HS_PRODUCT_ID 260
-#define USB_USBCAN_PRO_5HS_PRODUCT_ID 261
-#define USB_USBCAN_LIGHT_4HS_PRODUCT_ID 262
-#define USB_LEAF_PRO_HS_V2_PRODUCT_ID 263
-#define USB_USBCAN_PRO_2HS_V2_PRODUCT_ID 264
-#define USB_MEMO_2HS_PRODUCT_ID 265
-#define USB_MEMO_PRO_2HS_V2_PRODUCT_ID 266
-#define USB_HYBRID_2CANLIN_PRODUCT_ID 267
-#define USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID 268
-#define USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID 269
-#define USB_HYBRID_PRO_2CANLIN_PRODUCT_ID 270
-#define USB_U100_PRODUCT_ID 273
-#define USB_U100P_PRODUCT_ID 274
-#define USB_U100S_PRODUCT_ID 275
-#define USB_USBCAN_PRO_4HS_PRODUCT_ID 276
-#define USB_HYBRID_CANLIN_PRODUCT_ID 277
-#define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 278
+#define USB_BLACKBIRD_V2_PRODUCT_ID 0x0102
+#define USB_MEMO_PRO_5HS_PRODUCT_ID 0x0104
+#define USB_USBCAN_PRO_5HS_PRODUCT_ID 0x0105
+#define USB_USBCAN_LIGHT_4HS_PRODUCT_ID 0x0106
+#define USB_LEAF_PRO_HS_V2_PRODUCT_ID 0x0107
+#define USB_USBCAN_PRO_2HS_V2_PRODUCT_ID 0x0108
+#define USB_MEMO_2HS_PRODUCT_ID 0x0109
+#define USB_MEMO_PRO_2HS_V2_PRODUCT_ID 0x010a
+#define USB_HYBRID_2CANLIN_PRODUCT_ID 0x010b
+#define USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID 0x010c
+#define USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID 0x010d
+#define USB_HYBRID_PRO_2CANLIN_PRODUCT_ID 0x010e
+#define USB_U100_PRODUCT_ID 0x0111
+#define USB_U100P_PRODUCT_ID 0x0112
+#define USB_U100S_PRODUCT_ID 0x0113
+#define USB_USBCAN_PRO_4HS_PRODUCT_ID 0x0114
+#define USB_HYBRID_CANLIN_PRODUCT_ID 0x0115
+#define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 0x0116
static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = {
.quirks = KVASER_USB_QUIRK_HAS_HARDWARE_TIMESTAMP,
tristate "MediaTek MT7530 and MT7531 Ethernet switch support"
select NET_DSA_TAG_MTK
select MEDIATEK_GE_PHY
+ imply NET_DSA_MT7530_MDIO
+ imply NET_DSA_MT7530_MMIO
help
This enables support for the MediaTek MT7530 and MT7531 Ethernet
switch chips. Multi-chip module MT7530 in MT7621AT, MT7621DAT,
- MT7621ST and MT7623AI SoCs is supported.
+	  MT7621ST and MT7623AI SoCs, and the built-in switch in the MT7988 SoC
+	  are supported as well.
+
+config NET_DSA_MT7530_MDIO
+ tristate "MediaTek MT7530 MDIO interface driver"
+ depends on NET_DSA_MT7530
+ select PCS_MTK_LYNXI
+ help
+ This enables support for the MediaTek MT7530 and MT7531 switch
+ chips which are connected via MDIO, as well as multi-chip
+ module MT7530 which can be found in the MT7621AT, MT7621DAT,
+ MT7621ST and MT7623AI SoCs.
+
+config NET_DSA_MT7530_MMIO
+ tristate "MediaTek MT7530 MMIO interface driver"
+ depends on NET_DSA_MT7530
+ depends on HAS_IOMEM
+ help
+ This enables support for the built-in Ethernet switch found
+ in the MediaTek MT7988 SoC.
+	  The switch is a similar design to the MT7531, but the switch registers
+	  are directly mapped into the SoC's register space rather than being
+ accessible via MDIO.
config NET_DSA_MV88E6060
tristate "Marvell 88E6060 ethernet switch chip support"
endif
obj-$(CONFIG_NET_DSA_LANTIQ_GSWIP) += lantiq_gswip.o
obj-$(CONFIG_NET_DSA_MT7530) += mt7530.o
+obj-$(CONFIG_NET_DSA_MT7530_MDIO) += mt7530-mdio.o
+obj-$(CONFIG_NET_DSA_MT7530_MMIO) += mt7530-mmio.o
obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
obj-$(CONFIG_NET_DSA_RZN1_A5PSW) += rzn1_a5psw.o
obj-$(CONFIG_NET_DSA_SMSC_LAN9303) += lan9303-core.o
b53_write8(dev, B53_CTRL_PAGE, off, reg);
}
+static void b53_adjust_63xx_rgmii(struct dsa_switch *ds, int port,
+ phy_interface_t interface)
+{
+ struct b53_device *dev = ds->priv;
+ u8 rgmii_ctrl = 0, off;
+
+ if (port == dev->imp_port)
+ off = B53_RGMII_CTRL_IMP;
+ else
+ off = B53_RGMII_CTRL_P(port);
+
+ b53_read8(dev, B53_CTRL_PAGE, off, &rgmii_ctrl);
+
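+	/* The DLL_RXC/DLL_TXC bits control the switch's internal RGMII clock
+	 * delays: RGMII_ID enables delays on both clocks, RGMII_RXID/TXID on
+	 * the receive or transmit clock only, and plain RGMII disables both.
+	 */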
+ switch (interface) {
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ rgmii_ctrl |= (RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ break;
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ rgmii_ctrl &= ~(RGMII_CTRL_DLL_TXC);
+ rgmii_ctrl |= RGMII_CTRL_DLL_RXC;
+ break;
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+ rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC);
+ rgmii_ctrl |= RGMII_CTRL_DLL_TXC;
+ break;
+ case PHY_INTERFACE_MODE_RGMII:
+ default:
+ rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ break;
+ }
+
+ if (port != dev->imp_port) {
+ if (is63268(dev))
+ rgmii_ctrl |= RGMII_CTRL_MII_OVERRIDE;
+
+ rgmii_ctrl |= RGMII_CTRL_ENABLE_GMII;
+ }
+
+ b53_write8(dev, B53_CTRL_PAGE, off, rgmii_ctrl);
+
+ dev_dbg(ds->dev, "Configured port %d for %s\n", port,
+ phy_modes(interface));
+}
+
static void b53_adjust_link(struct dsa_switch *ds, int port,
struct phy_device *phydev)
{
tx_pause, rx_pause);
b53_force_link(dev, port, phydev->link);
+ if (is63xx(dev) && port >= B53_63XX_RGMII0)
+ b53_adjust_63xx_rgmii(ds, port, phydev->interface);
+
if (is531x5(dev) && phy_interface_is_rgmii(phydev)) {
if (port == dev->imp_port)
off = B53_RGMII_CTRL_IMP;
{
struct b53_device *dev = ds->priv;
+ if (is63xx(dev) && port >= B53_63XX_RGMII0)
+ b53_adjust_63xx_rgmii(ds, port, interface);
+
if (mode == MLO_AN_PHY)
return;
.jumbo_size_reg = B53_JUMBO_MAX_SIZE_63XX,
},
{
+ .chip_id = BCM63268_DEVICE_ID,
+ .dev_name = "BCM63268",
+ .vlans = 4096,
+ .enabled_ports = 0, /* pdata must provide them */
+ .arl_bins = 4,
+ .arl_buckets = 1024,
+ .imp_port = 8,
+ .vta_regs = B53_VTA_REGS_63XX,
+ .duplex_reg = B53_DUPLEX_STAT_63XX,
+ .jumbo_pm_reg = B53_JUMBO_PORT_MASK_63XX,
+ .jumbo_size_reg = B53_JUMBO_MAX_SIZE_63XX,
+ },
+ {
.chip_id = BCM53010_DEVICE_ID,
.dev_name = "BCM53010",
.vlans = 4096,
.jumbo_pm_reg = B53_JUMBO_PORT_MASK,
.jumbo_size_reg = B53_JUMBO_MAX_SIZE,
},
+ {
+ .chip_id = BCM53134_DEVICE_ID,
+ .dev_name = "BCM53134",
+ .vlans = 4096,
+ .enabled_ports = 0x12f,
+ .imp_port = 8,
+ .cpu_port = B53_CPU_PORT,
+ .vta_regs = B53_VTA_REGS,
+ .arl_bins = 4,
+ .arl_buckets = 1024,
+ .duplex_reg = B53_DUPLEX_STAT_GE,
+ .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
+ .jumbo_size_reg = B53_JUMBO_MAX_SIZE,
+ },
};
static int b53_switch_init(struct b53_device *dev)
case BCM53012_DEVICE_ID:
case BCM53018_DEVICE_ID:
case BCM53019_DEVICE_ID:
+ case BCM53134_DEVICE_ID:
dev->chip_id = id32;
break;
default:
#define B53_BRCM_OUI_2 0x03625c00
#define B53_BRCM_OUI_3 0x00406000
#define B53_BRCM_OUI_4 0x01410c00
+#define B53_BRCM_OUI_5 0xae025000
static int b53_mdio_probe(struct mdio_device *mdiodev)
{
if ((phy_id & 0xfffffc00) != B53_BRCM_OUI_1 &&
(phy_id & 0xfffffc00) != B53_BRCM_OUI_2 &&
(phy_id & 0xfffffc00) != B53_BRCM_OUI_3 &&
- (phy_id & 0xfffffc00) != B53_BRCM_OUI_4) {
+ (phy_id & 0xfffffc00) != B53_BRCM_OUI_4 &&
+ (phy_id & 0xfffffc00) != B53_BRCM_OUI_5) {
dev_err(&mdiodev->dev, "Unsupported device: 0x%08x\n", phy_id);
return -ENODEV;
}
{ .compatible = "brcm,bcm53115" },
{ .compatible = "brcm,bcm53125" },
{ .compatible = "brcm,bcm53128" },
+ { .compatible = "brcm,bcm53134" },
{ .compatible = "brcm,bcm5365" },
{ .compatible = "brcm,bcm5389" },
{ .compatible = "brcm,bcm5395" },
return -ENOMEM;
pdata->regs = mem;
- pdata->chip_id = BCM63XX_DEVICE_ID;
+ pdata->chip_id = (u32)(unsigned long)device_get_match_data(dev);
pdata->big_endian = of_property_read_bool(np, "big-endian");
of_ports = of_get_child_by_name(np, "ports");
}
static const struct of_device_id b53_mmap_of_table[] = {
- { .compatible = "brcm,bcm3384-switch" },
- { .compatible = "brcm,bcm6328-switch" },
- { .compatible = "brcm,bcm6368-switch" },
- { .compatible = "brcm,bcm63xx-switch" },
- { /* sentinel */ },
+ {
+ .compatible = "brcm,bcm3384-switch",
+ .data = (void *)BCM63XX_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm6318-switch",
+ .data = (void *)BCM63268_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm6328-switch",
+ .data = (void *)BCM63XX_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm6362-switch",
+ .data = (void *)BCM63XX_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm6368-switch",
+ .data = (void *)BCM63XX_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm63268-switch",
+ .data = (void *)BCM63268_DEVICE_ID,
+ }, {
+ .compatible = "brcm,bcm63xx-switch",
+ .data = (void *)BCM63XX_DEVICE_ID,
+ }, { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, b53_mmap_of_table);
BCM53125_DEVICE_ID = 0x53125,
BCM53128_DEVICE_ID = 0x53128,
BCM63XX_DEVICE_ID = 0x6300,
+ BCM63268_DEVICE_ID = 0x63268,
BCM53010_DEVICE_ID = 0x53010,
BCM53011_DEVICE_ID = 0x53011,
BCM53012_DEVICE_ID = 0x53012,
BCM583XX_DEVICE_ID = 0x58300,
BCM7445_DEVICE_ID = 0x7445,
BCM7278_DEVICE_ID = 0x7278,
+ BCM53134_DEVICE_ID = 0x5075,
};
struct b53_pcs {
{
return dev->chip_id == BCM53115_DEVICE_ID ||
dev->chip_id == BCM53125_DEVICE_ID ||
- dev->chip_id == BCM53128_DEVICE_ID;
+ dev->chip_id == BCM53128_DEVICE_ID ||
+ dev->chip_id == BCM53134_DEVICE_ID;
}
static inline int is63xx(struct b53_device *dev)
{
- return dev->chip_id == BCM63XX_DEVICE_ID;
+ return dev->chip_id == BCM63XX_DEVICE_ID ||
+ dev->chip_id == BCM63268_DEVICE_ID;
+}
+
+static inline int is63268(struct b53_device *dev)
+{
+ return dev->chip_id == BCM63268_DEVICE_ID;
}
static inline int is5301x(struct b53_device *dev)
return dev->chip_id == BCM58XX_DEVICE_ID ||
dev->chip_id == BCM583XX_DEVICE_ID ||
dev->chip_id == BCM7445_DEVICE_ID ||
- dev->chip_id == BCM7278_DEVICE_ID;
+ dev->chip_id == BCM7278_DEVICE_ID ||
+ dev->chip_id == BCM53134_DEVICE_ID;
}
+#define B53_63XX_RGMII0 4
#define B53_CPU_PORT_25 5
#define B53_CPU_PORT 8
#define B53_RGMII_CTRL_IMP 0x60
#define RGMII_CTRL_ENABLE_GMII BIT(7)
+#define RGMII_CTRL_MII_OVERRIDE BIT(6)
#define RGMII_CTRL_TIMING_SEL BIT(2)
#define RGMII_CTRL_DLL_RXC BIT(1)
#define RGMII_CTRL_DLL_TXC BIT(0)
static int hellcreek_led_setup(struct hellcreek *hellcreek)
{
struct device_node *leds, *led = NULL;
- const char *label, *state;
+ enum led_default_state state;
+ const char *label;
int ret = -EINVAL;
of_node_get(hellcreek->dev->of_node);
ret = of_property_read_string(led, "label", &label);
hellcreek->led_sync_good.name = ret ? "sync_good" : label;
- ret = of_property_read_string(led, "default-state", &state);
- if (!ret) {
- if (!strcmp(state, "on"))
- hellcreek->led_sync_good.brightness = 1;
- else if (!strcmp(state, "off"))
- hellcreek->led_sync_good.brightness = 0;
- else if (!strcmp(state, "keep"))
- hellcreek->led_sync_good.brightness =
- hellcreek_get_brightness(hellcreek,
- STATUS_OUT_SYNC_GOOD);
+ state = led_init_default_state_get(of_fwnode_handle(led));
+ switch (state) {
+ case LEDS_DEFSTATE_ON:
+ hellcreek->led_sync_good.brightness = 1;
+ break;
+ case LEDS_DEFSTATE_KEEP:
+ hellcreek->led_sync_good.brightness =
+ hellcreek_get_brightness(hellcreek, STATUS_OUT_SYNC_GOOD);
+ break;
+ default:
+ hellcreek->led_sync_good.brightness = 0;
}
hellcreek->led_sync_good.max_brightness = 1;
ret = of_property_read_string(led, "label", &label);
hellcreek->led_is_gm.name = ret ? "is_gm" : label;
- ret = of_property_read_string(led, "default-state", &state);
- if (!ret) {
- if (!strcmp(state, "on"))
- hellcreek->led_is_gm.brightness = 1;
- else if (!strcmp(state, "off"))
- hellcreek->led_is_gm.brightness = 0;
- else if (!strcmp(state, "keep"))
- hellcreek->led_is_gm.brightness =
- hellcreek_get_brightness(hellcreek,
- STATUS_OUT_IS_GM);
+ state = led_init_default_state_get(of_fwnode_handle(led));
+ switch (state) {
+ case LEDS_DEFSTATE_ON:
+ hellcreek->led_is_gm.brightness = 1;
+ break;
+ case LEDS_DEFSTATE_KEEP:
+ hellcreek->led_is_gm.brightness =
+ hellcreek_get_brightness(hellcreek, STATUS_OUT_IS_GM);
+ break;
+ default:
+ hellcreek->led_is_gm.brightness = 0;
}
hellcreek->led_is_gm.max_brightness = 1;
static struct i2c_driver lan9303_i2c_driver = {
.driver = {
.name = "LAN9303_I2C",
- .of_match_table = of_match_ptr(lan9303_i2c_of_match),
+ .of_match_table = lan9303_i2c_of_match,
},
.probe_new = lan9303_i2c_probe,
.remove = lan9303_i2c_remove,
static struct mdio_driver lan9303_mdio_driver = {
.mdiodrv.driver = {
.name = "LAN9303_MDIO",
- .of_match_table = of_match_ptr(lan9303_mdio_of_match),
+ .of_match_table = lan9303_mdio_of_match,
},
.probe = lan9303_mdio_probe,
.remove = lan9303_mdio_remove,
.ge_firmware_name = "lantiq/xrx300_phy11g_a21.bin",
};
-static const struct of_device_id xway_gphy_match[] = {
+static const struct of_device_id xway_gphy_match[] __maybe_unused = {
{ .compatible = "lantiq,xrx200-gphy-fw", .data = NULL },
{ .compatible = "lantiq,xrx200a1x-gphy-fw", .data = &xrx200a1x_gphy_data },
{ .compatible = "lantiq,xrx200a2x-gphy-fw", .data = &xrx200a2x_gphy_data },
int ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val);
int ksz8_r_dyn_mac_table(struct ksz_device *dev, u16 addr, u8 *mac_addr,
u8 *fid, u8 *src_port, u8 *timestamp, u16 *entries);
-int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
- struct alu_struct *alu);
-void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
- struct alu_struct *alu);
void ksz8_r_mib_cnt(struct ksz_device *dev, int port, u16 addr, u64 *cnt);
void ksz8_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
u64 *dropped, u64 *cnt);
void ksz8_port_init_cnt(struct ksz_device *dev, int port);
int ksz8_fdb_dump(struct ksz_device *dev, int port,
dsa_fdb_dump_cb_t *cb, void *data);
+int ksz8_fdb_add(struct ksz_device *dev, int port, const unsigned char *addr,
+ u16 vid, struct dsa_db db);
+int ksz8_fdb_del(struct ksz_device *dev, int port, const unsigned char *addr,
+ u16 vid, struct dsa_db db);
int ksz8_mdb_add(struct ksz_device *dev, int port,
const struct switchdev_obj_port_mdb *mdb, struct dsa_db db);
int ksz8_mdb_del(struct ksz_device *dev, int port,
}
}
-static void ksz8_r_table(struct ksz_device *dev, int table, u16 addr, u64 *data)
+static int ksz8_r_table(struct ksz_device *dev, int table, u16 addr, u64 *data)
{
const u16 *regs;
u16 ctrl_addr;
+ int ret;
regs = dev->info->regs;
ctrl_addr = IND_ACC_TABLE(table | TABLE_READ) | addr;
mutex_lock(&dev->alu_mutex);
- ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
- ksz_read64(dev, regs[REG_IND_DATA_HI], data);
+ ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
+ if (ret)
+ goto unlock_alu;
+
+ ret = ksz_read64(dev, regs[REG_IND_DATA_HI], data);
+unlock_alu:
mutex_unlock(&dev->alu_mutex);
+
+ return ret;
}
-static void ksz8_w_table(struct ksz_device *dev, int table, u16 addr, u64 data)
+static int ksz8_w_table(struct ksz_device *dev, int table, u16 addr, u64 data)
{
const u16 *regs;
u16 ctrl_addr;
+ int ret;
regs = dev->info->regs;
ctrl_addr = IND_ACC_TABLE(table) | addr;
mutex_lock(&dev->alu_mutex);
- ksz_write64(dev, regs[REG_IND_DATA_HI], data);
- ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
+ ret = ksz_write64(dev, regs[REG_IND_DATA_HI], data);
+ if (ret)
+ goto unlock_alu;
+
+ ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
+unlock_alu:
mutex_unlock(&dev->alu_mutex);
+
+ return ret;
}
static int ksz8_valid_dyn_entry(struct ksz_device *dev, u8 *data)
return rc;
}
-int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
- struct alu_struct *alu)
+static int ksz8_r_sta_mac_table(struct ksz_device *dev, u16 addr,
+ struct alu_struct *alu, bool *valid)
{
u32 data_hi, data_lo;
const u8 *shifts;
const u32 *masks;
u64 data;
+ int ret;
shifts = dev->info->shifts;
masks = dev->info->masks;
- ksz8_r_table(dev, TABLE_STATIC_MAC, addr, &data);
+ ret = ksz8_r_table(dev, TABLE_STATIC_MAC, addr, &data);
+ if (ret)
+ return ret;
+
data_hi = data >> 32;
data_lo = (u32)data;
- if (data_hi & (masks[STATIC_MAC_TABLE_VALID] |
- masks[STATIC_MAC_TABLE_OVERRIDE])) {
- alu->mac[5] = (u8)data_lo;
- alu->mac[4] = (u8)(data_lo >> 8);
- alu->mac[3] = (u8)(data_lo >> 16);
- alu->mac[2] = (u8)(data_lo >> 24);
- alu->mac[1] = (u8)data_hi;
- alu->mac[0] = (u8)(data_hi >> 8);
- alu->port_forward =
- (data_hi & masks[STATIC_MAC_TABLE_FWD_PORTS]) >>
- shifts[STATIC_MAC_FWD_PORTS];
- alu->is_override =
- (data_hi & masks[STATIC_MAC_TABLE_OVERRIDE]) ? 1 : 0;
- data_hi >>= 1;
- alu->is_static = true;
- alu->is_use_fid =
- (data_hi & masks[STATIC_MAC_TABLE_USE_FID]) ? 1 : 0;
- alu->fid = (data_hi & masks[STATIC_MAC_TABLE_FID]) >>
- shifts[STATIC_MAC_FID];
+
+ if (!(data_hi & (masks[STATIC_MAC_TABLE_VALID] |
+ masks[STATIC_MAC_TABLE_OVERRIDE]))) {
+ *valid = false;
return 0;
}
- return -ENXIO;
+
+ alu->mac[5] = (u8)data_lo;
+ alu->mac[4] = (u8)(data_lo >> 8);
+ alu->mac[3] = (u8)(data_lo >> 16);
+ alu->mac[2] = (u8)(data_lo >> 24);
+ alu->mac[1] = (u8)data_hi;
+ alu->mac[0] = (u8)(data_hi >> 8);
+ alu->port_forward =
+ (data_hi & masks[STATIC_MAC_TABLE_FWD_PORTS]) >>
+ shifts[STATIC_MAC_FWD_PORTS];
+ alu->is_override = (data_hi & masks[STATIC_MAC_TABLE_OVERRIDE]) ? 1 : 0;
+ data_hi >>= 1;
+ alu->is_static = true;
+ alu->is_use_fid = (data_hi & masks[STATIC_MAC_TABLE_USE_FID]) ? 1 : 0;
+ alu->fid = (data_hi & masks[STATIC_MAC_TABLE_FID]) >>
+ shifts[STATIC_MAC_FID];
+
+ *valid = true;
+
+ return 0;
}
-void ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
- struct alu_struct *alu)
+static int ksz8_w_sta_mac_table(struct ksz_device *dev, u16 addr,
+ struct alu_struct *alu)
{
u32 data_hi, data_lo;
const u8 *shifts;
data_hi &= ~masks[STATIC_MAC_TABLE_OVERRIDE];
data = (u64)data_hi << 32 | data_lo;
- ksz8_w_table(dev, TABLE_STATIC_MAC, addr, data);
+
+ return ksz8_w_table(dev, TABLE_STATIC_MAC, addr, data);
}
static void ksz8_from_vlan(struct ksz_device *dev, u32 vlan, u8 *fid,
return ret;
}
-int ksz8_mdb_add(struct ksz_device *dev, int port,
- const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+static int ksz8_add_sta_mac(struct ksz_device *dev, int port,
+ const unsigned char *addr, u16 vid)
{
struct alu_struct alu;
- int index;
+ int index, ret;
int empty = 0;
alu.port_forward = 0;
for (index = 0; index < dev->info->num_statics; index++) {
- if (!ksz8_r_sta_mac_table(dev, index, &alu)) {
- /* Found one already in static MAC table. */
- if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
- alu.fid == mdb->vid)
- break;
- /* Remember the first empty entry. */
- } else if (!empty) {
- empty = index + 1;
+ bool valid;
+
+ ret = ksz8_r_sta_mac_table(dev, index, &alu, &valid);
+ if (ret)
+ return ret;
+ if (!valid) {
+ /* Remember the first empty entry. */
+ if (!empty)
+ empty = index + 1;
+ continue;
}
+
+ if (!memcmp(alu.mac, addr, ETH_ALEN) && alu.fid == vid)
+ break;
}
/* no available entry */
if (index == dev->info->num_statics) {
index = empty - 1;
memset(&alu, 0, sizeof(alu));
- memcpy(alu.mac, mdb->addr, ETH_ALEN);
+ memcpy(alu.mac, addr, ETH_ALEN);
alu.is_static = true;
}
alu.port_forward |= BIT(port);
- if (mdb->vid) {
+ if (vid) {
alu.is_use_fid = true;
/* Need a way to map VID to FID. */
- alu.fid = mdb->vid;
+ alu.fid = vid;
}
- ksz8_w_sta_mac_table(dev, index, &alu);
- return 0;
+ return ksz8_w_sta_mac_table(dev, index, &alu);
}
-int ksz8_mdb_del(struct ksz_device *dev, int port,
- const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+static int ksz8_del_sta_mac(struct ksz_device *dev, int port,
+ const unsigned char *addr, u16 vid)
{
struct alu_struct alu;
- int index;
+ int index, ret;
for (index = 0; index < dev->info->num_statics; index++) {
- if (!ksz8_r_sta_mac_table(dev, index, &alu)) {
- /* Found one already in static MAC table. */
- if (!memcmp(alu.mac, mdb->addr, ETH_ALEN) &&
- alu.fid == mdb->vid)
- break;
- }
+ bool valid;
+
+ ret = ksz8_r_sta_mac_table(dev, index, &alu, &valid);
+ if (ret)
+ return ret;
+ if (!valid)
+ continue;
+
+ if (!memcmp(alu.mac, addr, ETH_ALEN) && alu.fid == vid)
+ break;
}
/* no available entry */
if (index == dev->info->num_statics)
- goto exit;
+ return 0;
/* clear port */
alu.port_forward &= ~BIT(port);
if (!alu.port_forward)
alu.is_static = false;
- ksz8_w_sta_mac_table(dev, index, &alu);
-exit:
- return 0;
+ return ksz8_w_sta_mac_table(dev, index, &alu);
+}
+
+int ksz8_mdb_add(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+{
+ return ksz8_add_sta_mac(dev, port, mdb->addr, mdb->vid);
+}
+
+int ksz8_mdb_del(struct ksz_device *dev, int port,
+ const struct switchdev_obj_port_mdb *mdb, struct dsa_db db)
+{
+ return ksz8_del_sta_mac(dev, port, mdb->addr, mdb->vid);
+}
+
+int ksz8_fdb_add(struct ksz_device *dev, int port, const unsigned char *addr,
+ u16 vid, struct dsa_db db)
+{
+ return ksz8_add_sta_mac(dev, port, addr, vid);
+}
+
+int ksz8_fdb_del(struct ksz_device *dev, int port, const unsigned char *addr,
+ u16 vid, struct dsa_db db)
+{
+ return ksz8_del_sta_mac(dev, port, addr, vid);
}
int ksz8_port_vlan_filtering(struct ksz_device *dev, int port, bool flag,
alu.is_override = true;
alu.port_forward = dev->info->cpu_ports;
- ksz8_w_sta_mac_table(dev, 0, &alu);
-
- return 0;
+ return ksz8_w_sta_mac_table(dev, 0, &alu);
}
int ksz8_setup(struct dsa_switch *ds)
static struct i2c_driver ksz9477_i2c_driver = {
.driver = {
.name = "ksz9477-switch",
- .of_match_table = of_match_ptr(ksz9477_dt_ids),
+ .of_match_table = ksz9477_dt_ids,
},
.probe_new = ksz9477_i2c_probe,
.remove = ksz9477_i2c_remove,
#include "ksz9477.h"
#include "lan937x.h"
-#define KSZ_CBS_ENABLE ((MTI_SCHEDULE_STRICT_PRIO << MTI_SCHEDULE_MODE_S) | \
- (MTI_SHAPING_SRP << MTI_SHAPING_S))
-#define KSZ_CBS_DISABLE ((MTI_SCHEDULE_WRR << MTI_SCHEDULE_MODE_S) |\
- (MTI_SHAPING_OFF << MTI_SHAPING_S))
#define MIB_COUNTER_NUM 0x20
struct ksz_stats_raw {
.freeze_mib = ksz8_freeze_mib,
.port_init_cnt = ksz8_port_init_cnt,
.fdb_dump = ksz8_fdb_dump,
+ .fdb_add = ksz8_fdb_add,
+ .fdb_del = ksz8_fdb_del,
.mdb_add = ksz8_mdb_add,
.mdb_del = ksz8_mdb_del,
.vlan_filtering = ksz8_port_vlan_filtering,
.port_nirqs = 3,
.num_tx_queues = 4,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &ksz9477_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 4,
.num_tx_queues = 4,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
.port_nirqs = 3,
.num_tx_queues = 4,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &ksz9477_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 3,
.num_tx_queues = 4,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &ksz9477_dev_ops,
.phy_errata_9477 = true,
.mib_names = ksz9477_mib_names,
.port_nirqs = 6,
.num_tx_queues = 8,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 6,
.num_tx_queues = 8,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 6,
.num_tx_queues = 8,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 6,
.num_tx_queues = 8,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
.port_nirqs = 6,
.num_tx_queues = 8,
.tc_cbs_supported = true,
+ .tc_ets_supported = true,
.ops = &lan937x_dev_ops,
.mib_names = ksz9477_mib_names,
.mib_cnt = ARRAY_SIZE(ksz9477_mib_names),
return 0;
}
+static int ksz_setup_tc_mode(struct ksz_device *dev, int port, u8 scheduler,
+ u8 shaper)
+{
+ return ksz_pwrite8(dev, port, REG_PORT_MTI_QUEUE_CTRL_0,
+ FIELD_PREP(MTI_SCHEDULE_MODE_M, scheduler) |
+ FIELD_PREP(MTI_SHAPING_M, shaper));
+}
+
static int ksz_setup_tc_cbs(struct dsa_switch *ds, int port,
struct tc_cbs_qopt_offload *qopt)
{
return ret;
if (!qopt->enable)
- return ksz_pwrite8(dev, port, REG_PORT_MTI_QUEUE_CTRL_0,
- KSZ_CBS_DISABLE);
+ return ksz_setup_tc_mode(dev, port, MTI_SCHEDULE_WRR,
+ MTI_SHAPING_OFF);
/* High Credit */
ret = ksz_pwrite16(dev, port, REG_PORT_MTI_HI_WATER_MARK,
return ret;
}
- return ksz_pwrite8(dev, port, REG_PORT_MTI_QUEUE_CTRL_0,
- KSZ_CBS_ENABLE);
+ return ksz_setup_tc_mode(dev, port, MTI_SCHEDULE_STRICT_PRIO,
+ MTI_SHAPING_SRP);
+}
+
+static int ksz_disable_egress_rate_limit(struct ksz_device *dev, int port)
+{
+ int queue, ret;
+
+ /* Configuration will not take effect until the last Port Queue X
+ * Egress Limit Control Register is written.
+ */
+ for (queue = 0; queue < dev->info->num_tx_queues; queue++) {
+ ret = ksz_pwrite8(dev, port, KSZ9477_REG_PORT_OUT_RATE_0 + queue,
+ KSZ9477_OUT_RATE_NO_LIMIT);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int ksz_ets_band_to_queue(struct tc_ets_qopt_offload_replace_params *p,
+ int band)
+{
+ /* Compared to queues, bands prioritize packets differently. In strict
+ * priority mode, the lowest priority is assigned to Queue 0 while the
+ * highest priority is given to Band 0.
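+	 * For example, with four bands, band 0 maps to queue 3 and band 3
+	 * maps to queue 0.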
+ */
+ return p->bands - 1 - band;
+}
+
+static int ksz_queue_set_strict(struct ksz_device *dev, int port, int queue)
+{
+ int ret;
+
+ ret = ksz_pwrite32(dev, port, REG_PORT_MTI_QUEUE_INDEX__4, queue);
+ if (ret)
+ return ret;
+
+ return ksz_setup_tc_mode(dev, port, MTI_SCHEDULE_STRICT_PRIO,
+ MTI_SHAPING_OFF);
+}
+
+static int ksz_queue_set_wrr(struct ksz_device *dev, int port, int queue,
+ int weight)
+{
+ int ret;
+
+ ret = ksz_pwrite32(dev, port, REG_PORT_MTI_QUEUE_INDEX__4, queue);
+ if (ret)
+ return ret;
+
+ ret = ksz_setup_tc_mode(dev, port, MTI_SCHEDULE_WRR,
+ MTI_SHAPING_OFF);
+ if (ret)
+ return ret;
+
+ return ksz_pwrite8(dev, port, KSZ9477_PORT_MTI_QUEUE_CTRL_1, weight);
+}
+
+static int ksz_tc_ets_add(struct ksz_device *dev, int port,
+ struct tc_ets_qopt_offload_replace_params *p)
+{
+ int ret, band, tc_prio;
+ u32 queue_map = 0;
+
+ /* In order to ensure proper prioritization, it is necessary to set the
+ * rate limit for the related queue to zero. Otherwise strict priority
+ * or WRR mode will not work. This is a hardware limitation.
+ */
+ ret = ksz_disable_egress_rate_limit(dev, port);
+ if (ret)
+ return ret;
+
+ /* Configure queue scheduling mode for all bands. Currently only strict
+ * prio mode is supported.
+ */
+ for (band = 0; band < p->bands; band++) {
+ int queue = ksz_ets_band_to_queue(p, band);
+
+ ret = ksz_queue_set_strict(dev, port, queue);
+ if (ret)
+ return ret;
+ }
+
+ /* Configure the mapping between traffic classes and queues. Note:
+	 * the priomap variable supports 16 traffic classes, but the chip can
+	 * handle only 8 classes.
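+	 * Each priority is assigned a 4-bit queue field in the 32-bit map
+	 * register, so only priorities 0-7 are mapped in hardware.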
+ */
+ for (tc_prio = 0; tc_prio < ARRAY_SIZE(p->priomap); tc_prio++) {
+ int queue;
+
+ if (tc_prio > KSZ9477_MAX_TC_PRIO)
+ break;
+
+ queue = ksz_ets_band_to_queue(p, p->priomap[tc_prio]);
+ queue_map |= queue << (tc_prio * KSZ9477_PORT_TC_MAP_S);
+ }
+
+ return ksz_pwrite32(dev, port, KSZ9477_PORT_MRI_TC_MAP__4, queue_map);
+}
+
+static int ksz_tc_ets_del(struct ksz_device *dev, int port)
+{
+ int ret, queue, tc_prio, s;
+ u32 queue_map = 0;
+
+ /* To restore the default chip configuration, set all queues to use the
+ * WRR scheduler with a weight of 1.
+ */
+ for (queue = 0; queue < dev->info->num_tx_queues; queue++) {
+ ret = ksz_queue_set_wrr(dev, port, queue,
+ KSZ9477_DEFAULT_WRR_WEIGHT);
+ if (ret)
+ return ret;
+ }
+
+ switch (dev->info->num_tx_queues) {
+ case 2:
+ s = 2;
+ break;
+ case 4:
+ s = 1;
+ break;
+ case 8:
+ s = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Revert the queue mapping for TC-priority to its default setting on
+ * the chip.
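+	 * Priorities are spread evenly across the queues: with four queues
+	 * (s = 1), priorities 0-1 map to queue 0, 2-3 to queue 1, 4-5 to
+	 * queue 2 and 6-7 to queue 3.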
+ */
+ for (tc_prio = 0; tc_prio <= KSZ9477_MAX_TC_PRIO; tc_prio++) {
+ int queue;
+
+ queue = tc_prio >> s;
+ queue_map |= queue << (tc_prio * KSZ9477_PORT_TC_MAP_S);
+ }
+
+ return ksz_pwrite32(dev, port, KSZ9477_PORT_MRI_TC_MAP__4, queue_map);
+}
+
+static int ksz_tc_ets_validate(struct ksz_device *dev, int port,
+ struct tc_ets_qopt_offload_replace_params *p)
+{
+ int band;
+
+	/* Since it is not feasible to share one port among multiple qdiscs,
+ * the user must configure all available queues appropriately.
+ */
+ if (p->bands != dev->info->num_tx_queues) {
+		dev_err(dev->dev, "Unsupported number of bands, it should be %d\n",
+ dev->info->num_tx_queues);
+ return -EOPNOTSUPP;
+ }
+
+ for (band = 0; band < p->bands; ++band) {
+ /* The KSZ switches utilize a weighted round robin configuration
+ * where a certain number of packets can be transmitted from a
+ * queue before the next queue is serviced. For more information
+ * on this, refer to section 5.2.8.4 of the KSZ8565R
+ * documentation on the Port Transmit Queue Control 1 Register.
+ * However, the current ETS Qdisc implementation (as of February
+ * 2023) assigns a weight to each queue based on the number of
+ * bytes or extrapolated bandwidth in percentages. Since this
+ * differs from the KSZ switches' method and we don't want to
+ * fake support by converting bytes to packets, it is better to
+ * return an error instead.
+ */
+ if (p->quanta[band]) {
+ dev_err(dev->dev, "Quanta/weights configuration is not supported.\n");
+ return -EOPNOTSUPP;
+ }
+ }
+
+ return 0;
+}
+
+static int ksz_tc_setup_qdisc_ets(struct dsa_switch *ds, int port,
+ struct tc_ets_qopt_offload *qopt)
+{
+ struct ksz_device *dev = ds->priv;
+ int ret;
+
+ if (!dev->info->tc_ets_supported)
+ return -EOPNOTSUPP;
+
+ if (qopt->parent != TC_H_ROOT) {
+ dev_err(dev->dev, "Parent should be \"root\"\n");
+ return -EOPNOTSUPP;
+ }
+
+ switch (qopt->command) {
+ case TC_ETS_REPLACE:
+ ret = ksz_tc_ets_validate(dev, port, &qopt->replace_params);
+ if (ret)
+ return ret;
+
+ return ksz_tc_ets_add(dev, port, &qopt->replace_params);
+ case TC_ETS_DESTROY:
+ return ksz_tc_ets_del(dev, port);
+ case TC_ETS_STATS:
+ case TC_ETS_GRAFT:
+ return -EOPNOTSUPP;
+ }
+
+ return -EOPNOTSUPP;
}
static int ksz_setup_tc(struct dsa_switch *ds, int port,
switch (type) {
case TC_SETUP_QDISC_CBS:
return ksz_setup_tc_cbs(ds, port, type_data);
+ case TC_SETUP_QDISC_ETS:
+ return ksz_tc_setup_qdisc_ets(ds, port, type_data);
default:
return -EOPNOTSUPP;
}
u8 port_nirqs;
u8 num_tx_queues;
bool tc_cbs_supported;
+ bool tc_ets_supported;
const struct ksz_dev_ops *ops;
bool phy_errata_9477;
bool ksz87xx_eee_link_erratum;
#define KSZ8_LEGAL_PACKET_SIZE 1518
#define KSZ9477_MAX_FRAME_SIZE 9000
+#define KSZ9477_REG_PORT_OUT_RATE_0 0x0420
+#define KSZ9477_OUT_RATE_NO_LIMIT 0
+
+#define KSZ9477_PORT_MRI_TC_MAP__4 0x0808
+
+#define KSZ9477_PORT_TC_MAP_S 4
+#define KSZ9477_MAX_TC_PRIO 7
+
/* CBS related registers */
#define REG_PORT_MTI_QUEUE_INDEX__4 0x0900
#define REG_PORT_MTI_QUEUE_CTRL_0 0x0914
-#define MTI_SCHEDULE_MODE_M 0x3
-#define MTI_SCHEDULE_MODE_S 6
+#define MTI_SCHEDULE_MODE_M GENMASK(7, 6)
#define MTI_SCHEDULE_STRICT_PRIO 0
#define MTI_SCHEDULE_WRR 2
-#define MTI_SHAPING_M 0x3
-#define MTI_SHAPING_S 4
+#define MTI_SHAPING_M GENMASK(5, 4)
#define MTI_SHAPING_OFF 0
#define MTI_SHAPING_SRP 1
#define MTI_SHAPING_TIME_AWARE 2
+#define KSZ9477_PORT_MTI_QUEUE_CTRL_1 0x0915
+#define KSZ9477_DEFAULT_WRR_WEIGHT 1
+
#define REG_PORT_MTI_HI_WATER_MARK 0x0916
#define REG_PORT_MTI_LO_WATER_MARK 0x0918
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/gpio/consumer.h>
+#include <linux/mdio.h>
+#include <linux/module.h>
+#include <linux/pcs/pcs-mtk-lynxi.h>
+#include <linux/of_irq.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/of_platform.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/regulator/consumer.h>
+#include <net/dsa.h>
+
+#include "mt7530.h"
+
+static int
+mt7530_regmap_write(void *context, unsigned int reg, unsigned int val)
+{
+ struct mii_bus *bus = context;
+ u16 page, r, lo, hi;
+ int ret;
+
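+	/* A 32-bit switch register is addressed over MDIO as a 10-bit page
+	 * and a 4-bit register index, with the 32-bit value transferred as
+	 * two 16-bit halves.
+	 */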
+ page = (reg >> 6) & 0x3ff;
+ r = (reg >> 2) & 0xf;
+ lo = val & 0xffff;
+ hi = val >> 16;
+
+ /* MT7530 uses 31 as the pseudo port */
+ ret = bus->write(bus, 0x1f, 0x1f, page);
+ if (ret < 0)
+ return ret;
+
+ ret = bus->write(bus, 0x1f, r, lo);
+ if (ret < 0)
+ return ret;
+
+ ret = bus->write(bus, 0x1f, 0x10, hi);
+ return ret;
+}
+
+static int
+mt7530_regmap_read(void *context, unsigned int reg, unsigned int *val)
+{
+ struct mii_bus *bus = context;
+ u16 page, r, lo, hi;
+ int ret;
+
+ page = (reg >> 6) & 0x3ff;
+ r = (reg >> 2) & 0xf;
+
+ /* MT7530 uses 31 as the pseudo port */
+ ret = bus->write(bus, 0x1f, 0x1f, page);
+ if (ret < 0)
+ return ret;
+
+ lo = bus->read(bus, 0x1f, r);
+ hi = bus->read(bus, 0x1f, 0x10);
+
+ *val = (hi << 16) | (lo & 0xffff);
+
+ return 0;
+}
+
+static void
+mt7530_mdio_regmap_lock(void *mdio_lock)
+{
+ mutex_lock_nested(mdio_lock, MDIO_MUTEX_NESTED);
+}
+
+static void
+mt7530_mdio_regmap_unlock(void *mdio_lock)
+{
+ mutex_unlock(mdio_lock);
+}
+
+static const struct regmap_bus mt7530_regmap_bus = {
+ .reg_write = mt7530_regmap_write,
+ .reg_read = mt7530_regmap_read,
+};
+
+static int
+mt7531_create_sgmii(struct mt7530_priv *priv)
+{
+ struct regmap_config *mt7531_pcs_config[2];
+ struct phylink_pcs *pcs;
+ struct regmap *regmap;
+ int i, ret = 0;
+
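+	/* Ports 5 and 6 each get a LynxI SGMII PCS, backed by a regmap that
+	 * reuses the switch's MDIO accessors and bus lock.
+	 */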
+ for (i = 0; i < 2; i++) {
+ mt7531_pcs_config[i] = devm_kzalloc(priv->dev,
+ sizeof(struct regmap_config),
+ GFP_KERNEL);
+ if (!mt7531_pcs_config[i]) {
+ ret = -ENOMEM;
+ break;
+ }
+
+ mt7531_pcs_config[i]->name = i ? "port6" : "port5";
+ mt7531_pcs_config[i]->reg_bits = 16;
+ mt7531_pcs_config[i]->val_bits = 32;
+ mt7531_pcs_config[i]->reg_stride = 4;
+ mt7531_pcs_config[i]->reg_base = MT7531_SGMII_REG_BASE(5 + i);
+ mt7531_pcs_config[i]->max_register = 0x17c;
+ mt7531_pcs_config[i]->lock = mt7530_mdio_regmap_lock;
+ mt7531_pcs_config[i]->unlock = mt7530_mdio_regmap_unlock;
+ mt7531_pcs_config[i]->lock_arg = &priv->bus->mdio_lock;
+
+ regmap = devm_regmap_init(priv->dev,
+ &mt7530_regmap_bus, priv->bus,
+ mt7531_pcs_config[i]);
+ if (IS_ERR(regmap)) {
+ ret = PTR_ERR(regmap);
+ break;
+ }
+ pcs = mtk_pcs_lynxi_create(priv->dev, regmap,
+ MT7531_PHYA_CTRL_SIGNAL3, 0);
+ if (!pcs) {
+ ret = -ENXIO;
+ break;
+ }
+ priv->ports[5 + i].sgmii_pcs = pcs;
+ }
+
+ if (ret && i)
+ mtk_pcs_lynxi_destroy(priv->ports[5].sgmii_pcs);
+
+ return ret;
+}
+
+static const struct of_device_id mt7530_of_match[] = {
+ { .compatible = "mediatek,mt7621", .data = &mt753x_table[ID_MT7621], },
+ { .compatible = "mediatek,mt7530", .data = &mt753x_table[ID_MT7530], },
+ { .compatible = "mediatek,mt7531", .data = &mt753x_table[ID_MT7531], },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mt7530_of_match);
+
+static int
+mt7530_probe(struct mdio_device *mdiodev)
+{
+ static struct regmap_config *regmap_config;
+ struct mt7530_priv *priv;
+ struct device_node *dn;
+ int ret;
+
+ dn = mdiodev->dev.of_node;
+
+ priv = devm_kzalloc(&mdiodev->dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->bus = mdiodev->bus;
+ priv->dev = &mdiodev->dev;
+
+ ret = mt7530_probe_common(priv);
+ if (ret)
+ return ret;
+
+	/* Use the mediatek,mcm property to distinguish hardware variants that
+	 * require slightly different power-on sequences.
+	 * Without MCM, the switch is a standalone chip and a GPIO pin is used
+	 * to complete the reset; with MCM, the reset is instead performed
+	 * through memory-mapped registers provided via syscon.
+ */
+ priv->mcm = of_property_read_bool(dn, "mediatek,mcm");
+ if (priv->mcm) {
+ dev_info(&mdiodev->dev, "MT7530 adapts as multi-chip module\n");
+
+ priv->rstc = devm_reset_control_get(&mdiodev->dev, "mcm");
+ if (IS_ERR(priv->rstc)) {
+ dev_err(&mdiodev->dev, "Couldn't get our reset line\n");
+ return PTR_ERR(priv->rstc);
+ }
+ } else {
+ priv->reset = devm_gpiod_get_optional(&mdiodev->dev, "reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(priv->reset)) {
+ dev_err(&mdiodev->dev, "Couldn't get our reset line\n");
+ return PTR_ERR(priv->reset);
+ }
+ }
+
+ if (priv->id == ID_MT7530) {
+ priv->core_pwr = devm_regulator_get(&mdiodev->dev, "core");
+ if (IS_ERR(priv->core_pwr))
+ return PTR_ERR(priv->core_pwr);
+
+ priv->io_pwr = devm_regulator_get(&mdiodev->dev, "io");
+ if (IS_ERR(priv->io_pwr))
+ return PTR_ERR(priv->io_pwr);
+ }
+
+ regmap_config = devm_kzalloc(&mdiodev->dev, sizeof(*regmap_config),
+ GFP_KERNEL);
+ if (!regmap_config)
+ return -ENOMEM;
+
+ regmap_config->reg_bits = 16;
+ regmap_config->val_bits = 32;
+ regmap_config->reg_stride = 4;
+ regmap_config->max_register = MT7530_CREV;
+ regmap_config->disable_locking = true;
+ priv->regmap = devm_regmap_init(priv->dev, &mt7530_regmap_bus,
+ priv->bus, regmap_config);
+ if (IS_ERR(priv->regmap))
+ return PTR_ERR(priv->regmap);
+
+ if (priv->id == ID_MT7531) {
+ ret = mt7531_create_sgmii(priv);
+ if (ret)
+ return ret;
+ }
+
+ return dsa_register_switch(priv->ds);
+}
+
+static void
+mt7530_remove(struct mdio_device *mdiodev)
+{
+ struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
+ int ret = 0, i;
+
+ if (!priv)
+ return;
+
+ ret = regulator_disable(priv->core_pwr);
+ if (ret < 0)
+ dev_err(priv->dev,
+ "Failed to disable core power: %d\n", ret);
+
+ ret = regulator_disable(priv->io_pwr);
+ if (ret < 0)
+ dev_err(priv->dev, "Failed to disable io pwr: %d\n",
+ ret);
+
+ mt7530_remove_common(priv);
+
+ for (i = 0; i < 2; ++i)
+ mtk_pcs_lynxi_destroy(priv->ports[5 + i].sgmii_pcs);
+}
+
+static void mt7530_shutdown(struct mdio_device *mdiodev)
+{
+ struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ dev_set_drvdata(&mdiodev->dev, NULL);
+}
+
+static struct mdio_driver mt7530_mdio_driver = {
+ .probe = mt7530_probe,
+ .remove = mt7530_remove,
+ .shutdown = mt7530_shutdown,
+ .mdiodrv.driver = {
+ .name = "mt7530-mdio",
+ .of_match_table = mt7530_of_match,
+ },
+};
+
+mdio_module_driver(mt7530_mdio_driver);
+
+MODULE_AUTHOR("Sean Wang <sean.wang@mediatek.com>");
+MODULE_DESCRIPTION("Driver for Mediatek MT7530 Switch (MDIO)");
+MODULE_LICENSE("GPL");
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <linux/regmap.h>
+#include <linux/regulator/consumer.h>
+#include <linux/reset.h>
+#include <net/dsa.h>
+
+#include "mt7530.h"
+
+static const struct of_device_id mt7988_of_match[] = {
+ { .compatible = "mediatek,mt7988-switch", .data = &mt753x_table[ID_MT7988], },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mt7988_of_match);
+
+static int
+mt7988_probe(struct platform_device *pdev)
+{
+ static struct regmap_config *sw_regmap_config;
+ struct mt7530_priv *priv;
+ void __iomem *base_addr;
+ int ret;
+
+ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->bus = NULL;
+ priv->dev = &pdev->dev;
+
+ ret = mt7530_probe_common(priv);
+ if (ret)
+ return ret;
+
+ priv->rstc = devm_reset_control_get(&pdev->dev, NULL);
+ if (IS_ERR(priv->rstc)) {
+ dev_err(&pdev->dev, "Couldn't get our reset line\n");
+ return PTR_ERR(priv->rstc);
+ }
+
+ base_addr = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(base_addr)) {
+ dev_err(&pdev->dev, "cannot request I/O memory space\n");
+ return -ENXIO;
+ }
+
+ sw_regmap_config = devm_kzalloc(&pdev->dev, sizeof(*sw_regmap_config), GFP_KERNEL);
+ if (!sw_regmap_config)
+ return -ENOMEM;
+
+ sw_regmap_config->name = "switch";
+ sw_regmap_config->reg_bits = 16;
+ sw_regmap_config->val_bits = 32;
+ sw_regmap_config->reg_stride = 4;
+ sw_regmap_config->max_register = MT7530_CREV;
+ priv->regmap = devm_regmap_init_mmio(&pdev->dev, base_addr, sw_regmap_config);
+ if (IS_ERR(priv->regmap))
+ return PTR_ERR(priv->regmap);
+
+ return dsa_register_switch(priv->ds);
+}
+
+static int
+mt7988_remove(struct platform_device *pdev)
+{
+ struct mt7530_priv *priv = platform_get_drvdata(pdev);
+
+ if (priv)
+ mt7530_remove_common(priv);
+
+ return 0;
+}
+
+static void mt7988_shutdown(struct platform_device *pdev)
+{
+ struct mt7530_priv *priv = platform_get_drvdata(pdev);
+
+ if (!priv)
+ return;
+
+ dsa_switch_shutdown(priv->ds);
+
+ dev_set_drvdata(&pdev->dev, NULL);
+}
+
+static struct platform_driver mt7988_platform_driver = {
+ .probe = mt7988_probe,
+ .remove = mt7988_remove,
+ .shutdown = mt7988_shutdown,
+ .driver = {
+ .name = "mt7530-mmio",
+ .of_match_table = mt7988_of_match,
+ },
+};
+module_platform_driver(mt7988_platform_driver);
+
+MODULE_AUTHOR("Daniel Golle <daniel@makrotopia.org>");
+MODULE_DESCRIPTION("Driver for Mediatek MT7530 Switch (MMIO)");
+MODULE_LICENSE("GPL");
}
static void
-core_write(struct mt7530_priv *priv, u32 reg, u32 val)
+mt7530_mutex_lock(struct mt7530_priv *priv)
{
- struct mii_bus *bus = priv->bus;
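+	/* priv->bus is only set for MDIO-connected switches; the MMIO-mapped
+	 * MT7988 has no MDIO bus lock to take.
+	 */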
+ if (priv->bus)
+ mutex_lock_nested(&priv->bus->mdio_lock, MDIO_MUTEX_NESTED);
+}
+
+static void
+mt7530_mutex_unlock(struct mt7530_priv *priv)
+{
+ if (priv->bus)
+ mutex_unlock(&priv->bus->mdio_lock);
+}
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+static void
+core_write(struct mt7530_priv *priv, u32 reg, u32 val)
+{
+ mt7530_mutex_lock(priv);
core_write_mmd_indirect(priv, reg, MDIO_MMD_VEND2, val);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
}
static void
core_rmw(struct mt7530_priv *priv, u32 reg, u32 mask, u32 set)
{
- struct mii_bus *bus = priv->bus;
u32 val;
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
val = core_read_mmd_indirect(priv, reg, MDIO_MMD_VEND2);
val &= ~mask;
val |= set;
core_write_mmd_indirect(priv, reg, MDIO_MMD_VEND2, val);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
}
static void
static int
mt7530_mii_write(struct mt7530_priv *priv, u32 reg, u32 val)
{
- struct mii_bus *bus = priv->bus;
- u16 page, r, lo, hi;
int ret;
- page = (reg >> 6) & 0x3ff;
- r = (reg >> 2) & 0xf;
- lo = val & 0xffff;
- hi = val >> 16;
+ ret = regmap_write(priv->regmap, reg, val);
- /* MT7530 uses 31 as the pseudo port */
- ret = bus->write(bus, 0x1f, 0x1f, page);
if (ret < 0)
- goto err;
-
- ret = bus->write(bus, 0x1f, r, lo);
- if (ret < 0)
- goto err;
-
- ret = bus->write(bus, 0x1f, 0x10, hi);
-err:
- if (ret < 0)
- dev_err(&bus->dev,
+ dev_err(priv->dev,
"failed to write mt7530 register\n");
+
return ret;
}
static u32
mt7530_mii_read(struct mt7530_priv *priv, u32 reg)
{
- struct mii_bus *bus = priv->bus;
- u16 page, r, lo, hi;
int ret;
+ u32 val;
- page = (reg >> 6) & 0x3ff;
- r = (reg >> 2) & 0xf;
-
- /* MT7530 uses 31 as the pseudo port */
- ret = bus->write(bus, 0x1f, 0x1f, page);
- if (ret < 0) {
- dev_err(&bus->dev,
+ ret = regmap_read(priv->regmap, reg, &val);
+ if (ret) {
+ WARN_ON_ONCE(1);
+ dev_err(priv->dev,
"failed to read mt7530 register\n");
- return ret;
+ return 0;
}
- lo = bus->read(bus, 0x1f, r);
- hi = bus->read(bus, 0x1f, 0x10);
-
- return (hi << 16) | (lo & 0xffff);
+ return val;
}
static void
mt7530_write(struct mt7530_priv *priv, u32 reg, u32 val)
{
- struct mii_bus *bus = priv->bus;
-
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
mt7530_mii_write(priv, reg, val);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
}
static u32
static u32
_mt7530_read(struct mt7530_dummy_poll *p)
{
- struct mii_bus *bus = p->priv->bus;
u32 val;
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(p->priv);
val = mt7530_mii_read(p->priv, p->reg);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(p->priv);
return val;
}
mt7530_rmw(struct mt7530_priv *priv, u32 reg,
u32 mask, u32 set)
{
- struct mii_bus *bus = priv->bus;
- u32 val;
-
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
- val = mt7530_mii_read(priv, reg);
- val &= ~mask;
- val |= set;
- mt7530_mii_write(priv, reg, val);
+ regmap_update_bits(priv->regmap, reg, mask, set);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
}
static void
mt7530_set(struct mt7530_priv *priv, u32 reg, u32 val)
{
- mt7530_rmw(priv, reg, 0, val);
+ mt7530_rmw(priv, reg, val, val);
}
static void
mt7531_ind_c45_phy_read(struct mt7530_priv *priv, int port, int devad,
int regnum)
{
- struct mii_bus *bus = priv->bus;
struct mt7530_dummy_poll p;
u32 reg, val;
int ret;
INIT_MT7530_DUMMY_POLL(&p, priv, MT7531_PHY_IAC);
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
ret = readx_poll_timeout(_mt7530_unlocked_read, &p, val,
!(val & MT7531_PHY_ACS_ST), 20, 100000);
ret = val & MT7531_MDIO_RW_DATA_MASK;
out:
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
return ret;
}
mt7531_ind_c45_phy_write(struct mt7530_priv *priv, int port, int devad,
int regnum, u16 data)
{
- struct mii_bus *bus = priv->bus;
struct mt7530_dummy_poll p;
u32 val, reg;
int ret;
INIT_MT7530_DUMMY_POLL(&p, priv, MT7531_PHY_IAC);
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
ret = readx_poll_timeout(_mt7530_unlocked_read, &p, val,
!(val & MT7531_PHY_ACS_ST), 20, 100000);
}
out:
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
return ret;
}
static int
mt7531_ind_c22_phy_read(struct mt7530_priv *priv, int port, int regnum)
{
- struct mii_bus *bus = priv->bus;
struct mt7530_dummy_poll p;
int ret;
u32 val;
INIT_MT7530_DUMMY_POLL(&p, priv, MT7531_PHY_IAC);
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
ret = readx_poll_timeout(_mt7530_unlocked_read, &p, val,
!(val & MT7531_PHY_ACS_ST), 20, 100000);
ret = val & MT7531_MDIO_RW_DATA_MASK;
out:
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
return ret;
}
mt7531_ind_c22_phy_write(struct mt7530_priv *priv, int port, int regnum,
u16 data)
{
- struct mii_bus *bus = priv->bus;
struct mt7530_dummy_poll p;
int ret;
u32 reg;
INIT_MT7530_DUMMY_POLL(&p, priv, MT7531_PHY_IAC);
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
ret = readx_poll_timeout(_mt7530_unlocked_read, &p, reg,
!(reg & MT7531_PHY_ACS_ST), 20, 100000);
}
out:
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
return ret;
}
return 0;
}
+static const char *p5_intf_modes(unsigned int p5_interface)
+{
+ switch (p5_interface) {
+ case P5_DISABLED:
+ return "DISABLED";
+ case P5_INTF_SEL_PHY_P0:
+ return "PHY P0";
+ case P5_INTF_SEL_PHY_P4:
+ return "PHY P4";
+ case P5_INTF_SEL_GMAC5:
+ return "GMAC5";
+ case P5_INTF_SEL_GMAC5_SGMII:
+ return "GMAC5_SGMII";
+ default:
+ return "unknown";
+ }
+}
+
static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
{
struct mt7530_priv *priv = ds->priv;
mt7530_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
{
struct mt7530_priv *priv = ds->priv;
- struct mii_bus *bus = priv->bus;
int length;
u32 val;
if (!dsa_is_cpu_port(ds, port))
return 0;
- mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
val = mt7530_mii_read(priv, MT7530_GMACCR);
val &= ~MAX_RX_PKT_LEN_MASK;
mt7530_mii_write(priv, MT7530_GMACCR, val);
- mutex_unlock(&bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
return 0;
}
u32 val;
int p;
- mutex_lock_nested(&priv->bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
val = mt7530_mii_read(priv, MT7530_SYS_INT_STS);
mt7530_mii_write(priv, MT7530_SYS_INT_STS, val);
- mutex_unlock(&priv->bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
for (p = 0; p < MT7530_NUM_PHYS; p++) {
if (BIT(p) & val) {
{
struct mt7530_priv *priv = irq_data_get_irq_chip_data(d);
- mutex_lock_nested(&priv->bus->mdio_lock, MDIO_MUTEX_NESTED);
+ mt7530_mutex_lock(priv);
}
static void
struct mt7530_priv *priv = irq_data_get_irq_chip_data(d);
mt7530_mii_write(priv, MT7530_SYS_INT_EN, priv->irq_enable);
- mutex_unlock(&priv->bus->mdio_lock);
+ mt7530_mutex_unlock(priv);
}
static struct irq_chip mt7530_irq_chip = {
};
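+/* On the MMIO-mapped MT7988 the interrupt enable register can be updated
+ * directly in the mask/unmask callbacks, instead of being deferred to the
+ * irq_bus_sync_unlock() path used for the MDIO-connected switches.
+ */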
static void
+mt7988_irq_mask(struct irq_data *d)
+{
+ struct mt7530_priv *priv = irq_data_get_irq_chip_data(d);
+
+ priv->irq_enable &= ~BIT(d->hwirq);
+ mt7530_mii_write(priv, MT7530_SYS_INT_EN, priv->irq_enable);
+}
+
+static void
+mt7988_irq_unmask(struct irq_data *d)
+{
+ struct mt7530_priv *priv = irq_data_get_irq_chip_data(d);
+
+ priv->irq_enable |= BIT(d->hwirq);
+ mt7530_mii_write(priv, MT7530_SYS_INT_EN, priv->irq_enable);
+}
+
+static struct irq_chip mt7988_irq_chip = {
+ .name = KBUILD_MODNAME,
+ .irq_mask = mt7988_irq_mask,
+ .irq_unmask = mt7988_irq_unmask,
+};
+
+static int
+mt7988_irq_map(struct irq_domain *domain, unsigned int irq,
+ irq_hw_number_t hwirq)
+{
+ irq_set_chip_data(irq, domain->host_data);
+ irq_set_chip_and_handler(irq, &mt7988_irq_chip, handle_simple_irq);
+ irq_set_nested_thread(irq, true);
+ irq_set_noprobe(irq);
+
+ return 0;
+}
+
+static const struct irq_domain_ops mt7988_irq_domain_ops = {
+ .map = mt7988_irq_map,
+ .xlate = irq_domain_xlate_onecell,
+};
+
+static void
mt7530_setup_mdio_irq(struct mt7530_priv *priv)
{
struct dsa_switch *ds = priv->ds;
return priv->irq ? : -EINVAL;
}
- priv->irq_domain = irq_domain_add_linear(np, MT7530_NUM_PHYS,
- &mt7530_irq_domain_ops, priv);
+ if (priv->id == ID_MT7988)
+ priv->irq_domain = irq_domain_add_linear(np, MT7530_NUM_PHYS,
+ &mt7988_irq_domain_ops,
+ priv);
+ else
+ priv->irq_domain = irq_domain_add_linear(np, MT7530_NUM_PHYS,
+ &mt7530_irq_domain_ops,
+ priv);
+
if (!priv->irq_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
static int
+mt7531_setup_common(struct dsa_switch *ds)
+{
+ struct mt7530_priv *priv = ds->priv;
+ struct dsa_port *cpu_dp;
+ int ret, i;
+
+ /* BPDU to CPU port */
+ dsa_switch_for_each_cpu_port(cpu_dp, ds) {
+ mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK,
+ BIT(cpu_dp->index));
+ break;
+ }
+ mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
+ MT753X_BPDU_CPU_ONLY);
+
+ /* Enable and reset MIB counters */
+ mt7530_mib_reset(ds);
+
+ for (i = 0; i < MT7530_NUM_PORTS; i++) {
+ /* Disable forwarding by default on all ports */
+ mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
+ PCR_MATRIX_CLR);
+
+ /* Disable learning by default on all ports */
+ mt7530_set(priv, MT7530_PSC_P(i), SA_DIS);
+
+ mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR);
+
+ if (dsa_is_cpu_port(ds, i)) {
+ ret = mt753x_cpu_port_enable(ds, i);
+ if (ret)
+ return ret;
+ } else {
+ mt7530_port_disable(ds, i);
+
+ /* Set default PVID to 0 on all user ports */
+ mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK,
+ G0_PORT_VID_DEF);
+ }
+
+ /* Enable consistent egress tag */
+ mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
+ PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
+ }
+
+ /* Flush the FDB table */
+ ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static int
mt7531_setup(struct dsa_switch *ds)
{
struct mt7530_priv *priv = ds->priv;
struct mt7530_dummy_poll p;
- struct dsa_port *cpu_dp;
u32 val, id;
int ret, i;
mt7531_ind_c45_phy_write(priv, MT753X_CTRL_PHY_ADDR, MDIO_MMD_VEND2,
CORE_PLL_GROUP4, val);
- /* BPDU to CPU port */
- dsa_switch_for_each_cpu_port(cpu_dp, ds) {
- mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK,
- BIT(cpu_dp->index));
- break;
- }
- mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
- MT753X_BPDU_CPU_ONLY);
-
- /* Enable and reset MIB counters */
- mt7530_mib_reset(ds);
-
- for (i = 0; i < MT7530_NUM_PORTS; i++) {
- /* Disable forwarding by default on all ports */
- mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
- PCR_MATRIX_CLR);
-
- /* Disable learning by default on all ports */
- mt7530_set(priv, MT7530_PSC_P(i), SA_DIS);
-
- mt7530_set(priv, MT7531_DBG_CNT(i), MT7531_DIS_CLR);
-
- if (dsa_is_cpu_port(ds, i)) {
- ret = mt753x_cpu_port_enable(ds, i);
- if (ret)
- return ret;
- } else {
- mt7530_port_disable(ds, i);
-
- /* Set default PVID to 0 on all user ports */
- mt7530_rmw(priv, MT7530_PPBV1_P(i), G0_PORT_VID_MASK,
- G0_PORT_VID_DEF);
- }
-
- /* Enable consistent egress tag */
- mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
- PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
- }
+ mt7531_setup_common(ds);
/* Setup VLAN ID 0 for VLAN-unaware bridges */
ret = mt7530_setup_vlan0(priv);
ds->assisted_learning_on_cpu_port = true;
ds->mtu_enforcement_ingress = true;
- /* Flush the FDB table */
- ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL);
- if (ret < 0)
- return ret;
-
return 0;
}
}
}
+static void mt7988_mac_port_get_caps(struct dsa_switch *ds, int port,
+ struct phylink_config *config)
+{
+ phy_interface_zero(config->supported_interfaces);
+
+ switch (port) {
+ case 0 ... 4: /* Internal phy */
+ __set_bit(PHY_INTERFACE_MODE_INTERNAL,
+ config->supported_interfaces);
+ break;
+
+ case 6:
+ __set_bit(PHY_INTERFACE_MODE_INTERNAL,
+ config->supported_interfaces);
+ config->mac_capabilities = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+ MAC_10000FD;
+ }
+}
+
static int
mt753x_pad_setup(struct dsa_switch *ds, const struct phylink_link_state *state)
{
return 0;
}
-static void mt7531_pcs_link_up(struct phylink_pcs *pcs, unsigned int mode,
- phy_interface_t interface, int speed, int duplex)
-{
- struct mt7530_priv *priv = pcs_to_mt753x_pcs(pcs)->priv;
- int port = pcs_to_mt753x_pcs(pcs)->port;
- unsigned int val;
-
- /* For adjusting speed and duplex of SGMII force mode. */
- if (interface != PHY_INTERFACE_MODE_SGMII ||
- phylink_autoneg_inband(mode))
- return;
-
- /* SGMII force mode setting */
- val = mt7530_read(priv, MT7531_SGMII_MODE(port));
- val &= ~MT7531_SGMII_IF_MODE_MASK;
-
- switch (speed) {
- case SPEED_10:
- val |= MT7531_SGMII_FORCE_SPEED_10;
- break;
- case SPEED_100:
- val |= MT7531_SGMII_FORCE_SPEED_100;
- break;
- case SPEED_1000:
- val |= MT7531_SGMII_FORCE_SPEED_1000;
- break;
- }
-
- /* MT7531 SGMII 1G force mode can only work in full duplex mode,
- * no matter MT7531_SGMII_FORCE_HALF_DUPLEX is set or not.
- *
- * The speed check is unnecessary as the MAC capabilities apply
- * this restriction. --rmk
- */
- if ((speed == SPEED_10 || speed == SPEED_100) &&
- duplex != DUPLEX_FULL)
- val |= MT7531_SGMII_FORCE_HALF_DUPLEX;
-
- mt7530_write(priv, MT7531_SGMII_MODE(port), val);
-}
-
static bool mt753x_is_mac_port(u32 port)
{
return (port == 5 || port == 6);
}
-static int mt7531_sgmii_setup_mode_force(struct mt7530_priv *priv, u32 port,
- phy_interface_t interface)
-{
- u32 val;
-
- if (!mt753x_is_mac_port(port))
- return -EINVAL;
-
- mt7530_set(priv, MT7531_QPHY_PWR_STATE_CTRL(port),
- MT7531_SGMII_PHYA_PWD);
-
- val = mt7530_read(priv, MT7531_PHYA_CTRL_SIGNAL3(port));
- val &= ~MT7531_RG_TPHY_SPEED_MASK;
- /* Setup 2.5 times faster clock for 2.5Gbps data speeds with 10B/8B
- * encoding.
- */
- val |= (interface == PHY_INTERFACE_MODE_2500BASEX) ?
- MT7531_RG_TPHY_SPEED_3_125G : MT7531_RG_TPHY_SPEED_1_25G;
- mt7530_write(priv, MT7531_PHYA_CTRL_SIGNAL3(port), val);
-
- mt7530_clear(priv, MT7531_PCS_CONTROL_1(port), MT7531_SGMII_AN_ENABLE);
-
- /* MT7531 SGMII 1G and 2.5G force mode can only work in full duplex
- * mode, no matter MT7531_SGMII_FORCE_HALF_DUPLEX is set or not.
- */
- mt7530_rmw(priv, MT7531_SGMII_MODE(port),
- MT7531_SGMII_IF_MODE_MASK | MT7531_SGMII_REMOTE_FAULT_DIS,
- MT7531_SGMII_FORCE_SPEED_1000);
-
- mt7530_write(priv, MT7531_QPHY_PWR_STATE_CTRL(port), 0);
-
- return 0;
-}
-
-static int mt7531_sgmii_setup_mode_an(struct mt7530_priv *priv, int port,
- phy_interface_t interface)
-{
- if (!mt753x_is_mac_port(port))
- return -EINVAL;
-
- mt7530_set(priv, MT7531_QPHY_PWR_STATE_CTRL(port),
- MT7531_SGMII_PHYA_PWD);
-
- mt7530_rmw(priv, MT7531_PHYA_CTRL_SIGNAL3(port),
- MT7531_RG_TPHY_SPEED_MASK, MT7531_RG_TPHY_SPEED_1_25G);
-
- mt7530_set(priv, MT7531_SGMII_MODE(port),
- MT7531_SGMII_REMOTE_FAULT_DIS |
- MT7531_SGMII_SPEED_DUPLEX_AN);
-
- mt7530_rmw(priv, MT7531_PCS_SPEED_ABILITY(port),
- MT7531_SGMII_TX_CONFIG_MASK, 1);
-
- mt7530_set(priv, MT7531_PCS_CONTROL_1(port), MT7531_SGMII_AN_ENABLE);
-
- mt7530_set(priv, MT7531_PCS_CONTROL_1(port), MT7531_SGMII_AN_RESTART);
-
- mt7530_write(priv, MT7531_QPHY_PWR_STATE_CTRL(port), 0);
-
- return 0;
-}
-
-static void mt7531_pcs_an_restart(struct phylink_pcs *pcs)
+static int
+mt7988_mac_config(struct dsa_switch *ds, int port, unsigned int mode,
+ phy_interface_t interface)
{
- struct mt7530_priv *priv = pcs_to_mt753x_pcs(pcs)->priv;
- int port = pcs_to_mt753x_pcs(pcs)->port;
- u32 val;
+ if (dsa_is_cpu_port(ds, port) &&
+ interface == PHY_INTERFACE_MODE_INTERNAL)
+ return 0;
- /* Only restart AN when AN is enabled */
- val = mt7530_read(priv, MT7531_PCS_CONTROL_1(port));
- if (val & MT7531_SGMII_AN_ENABLE) {
- val |= MT7531_SGMII_AN_RESTART;
- mt7530_write(priv, MT7531_PCS_CONTROL_1(port), val);
- }
+ return -EINVAL;
}
static int
phydev = dp->slave->phydev;
return mt7531_rgmii_setup(priv, port, interface, phydev);
case PHY_INTERFACE_MODE_SGMII:
- return mt7531_sgmii_setup_mode_an(priv, port, interface);
case PHY_INTERFACE_MODE_NA:
case PHY_INTERFACE_MODE_1000BASEX:
case PHY_INTERFACE_MODE_2500BASEX:
- return mt7531_sgmii_setup_mode_force(priv, port, interface);
+ /* handled in SGMII PCS driver */
+ return 0;
default:
return -EINVAL;
}
switch (interface) {
case PHY_INTERFACE_MODE_TRGMII:
+ return &priv->pcs[port].pcs;
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_1000BASEX:
case PHY_INTERFACE_MODE_2500BASEX:
- return &priv->pcs[port].pcs;
-
+ return priv->ports[port].sgmii_pcs;
default:
return NULL;
}
switch (port) {
case 0 ... 4: /* Internal phy */
- if (state->interface != PHY_INTERFACE_MODE_GMII)
+ if (state->interface != PHY_INTERFACE_MODE_GMII &&
+ state->interface != PHY_INTERFACE_MODE_INTERNAL)
goto unsupported;
break;
case 5: /* 2nd cpu port with phy of port 0 or 4 / external phy */
/* MT753x MAC works in 1G full duplex mode for all up-clocked
* variants.
*/
- if (interface == PHY_INTERFACE_MODE_TRGMII ||
+ if (interface == PHY_INTERFACE_MODE_INTERNAL ||
+ interface == PHY_INTERFACE_MODE_TRGMII ||
(phy_interface_mode_is_8023z(interface))) {
speed = SPEED_1000;
duplex = DUPLEX_FULL;
return 0;
}
+static int
+mt7988_cpu_port_config(struct dsa_switch *ds, int port)
+{
+ struct mt7530_priv *priv = ds->priv;
+
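+	/* The CPU port is wired internally to the SoC MAC, so force the link
+	 * up at 10 Gbps full duplex with both pause directions enabled.
+	 */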
+ mt7530_write(priv, MT7530_PMCR_P(port),
+ PMCR_CPU_PORT_SETTING(priv->id));
+
+ mt753x_phylink_mac_link_up(ds, port, MLO_AN_FIXED,
+ PHY_INTERFACE_MODE_INTERNAL, NULL,
+ SPEED_10000, DUPLEX_FULL, true, true);
+
+ return 0;
+}
+
static void mt753x_phylink_get_caps(struct dsa_switch *ds, int port,
struct phylink_config *config)
{
state->pause |= MLO_PAUSE_TX;
}
-static int
-mt7531_sgmii_pcs_get_state_an(struct mt7530_priv *priv, int port,
- struct phylink_link_state *state)
-{
- u32 status, val;
- u16 config_reg;
-
- status = mt7530_read(priv, MT7531_PCS_CONTROL_1(port));
- state->link = !!(status & MT7531_SGMII_LINK_STATUS);
- state->an_complete = !!(status & MT7531_SGMII_AN_COMPLETE);
- if (state->interface == PHY_INTERFACE_MODE_SGMII &&
- (status & MT7531_SGMII_AN_ENABLE)) {
- val = mt7530_read(priv, MT7531_PCS_SPEED_ABILITY(port));
- config_reg = val >> 16;
-
- switch (config_reg & LPA_SGMII_SPD_MASK) {
- case LPA_SGMII_1000:
- state->speed = SPEED_1000;
- break;
- case LPA_SGMII_100:
- state->speed = SPEED_100;
- break;
- case LPA_SGMII_10:
- state->speed = SPEED_10;
- break;
- default:
- dev_err(priv->dev, "invalid sgmii PHY speed\n");
- state->link = false;
- return -EINVAL;
- }
-
- if (config_reg & LPA_SGMII_FULL_DUPLEX)
- state->duplex = DUPLEX_FULL;
- else
- state->duplex = DUPLEX_HALF;
- }
-
- return 0;
-}
-
-static void
-mt7531_sgmii_pcs_get_state_inband(struct mt7530_priv *priv, int port,
- struct phylink_link_state *state)
-{
- unsigned int val;
-
- val = mt7530_read(priv, MT7531_PCS_CONTROL_1(port));
- state->link = !!(val & MT7531_SGMII_LINK_STATUS);
- if (!state->link)
- return;
-
- state->an_complete = state->link;
-
- if (state->interface == PHY_INTERFACE_MODE_2500BASEX)
- state->speed = SPEED_2500;
- else
- state->speed = SPEED_1000;
-
- state->duplex = DUPLEX_FULL;
- state->pause = MLO_PAUSE_NONE;
-}
-
-static void mt7531_pcs_get_state(struct phylink_pcs *pcs,
- struct phylink_link_state *state)
-{
- struct mt7530_priv *priv = pcs_to_mt753x_pcs(pcs)->priv;
- int port = pcs_to_mt753x_pcs(pcs)->port;
-
- if (state->interface == PHY_INTERFACE_MODE_SGMII) {
- mt7531_sgmii_pcs_get_state_an(priv, port, state);
- return;
- } else if ((state->interface == PHY_INTERFACE_MODE_1000BASEX) ||
- (state->interface == PHY_INTERFACE_MODE_2500BASEX)) {
- mt7531_sgmii_pcs_get_state_inband(priv, port, state);
- return;
- }
-
- state->link = false;
-}
-
static int mt753x_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
phy_interface_t interface,
const unsigned long *advertising,
.pcs_an_restart = mt7530_pcs_an_restart,
};
-static const struct phylink_pcs_ops mt7531_pcs_ops = {
- .pcs_validate = mt753x_pcs_validate,
- .pcs_get_state = mt7531_pcs_get_state,
- .pcs_config = mt753x_pcs_config,
- .pcs_an_restart = mt7531_pcs_an_restart,
- .pcs_link_up = mt7531_pcs_link_up,
-};
-
static int
mt753x_setup(struct dsa_switch *ds)
{
priv->pcs[i].pcs.ops = priv->info->pcs_ops;
priv->pcs[i].priv = priv;
priv->pcs[i].port = i;
- if (mt753x_is_mac_port(i))
- priv->pcs[i].pcs.poll = 1;
}
ret = priv->info->sw_setup(ds);
return 0;
}
-static const struct dsa_switch_ops mt7530_switch_ops = {
+static int mt7988_pad_setup(struct dsa_switch *ds, phy_interface_t interface)
+{
+ return 0;
+}
+
+static int mt7988_setup(struct dsa_switch *ds)
+{
+ struct mt7530_priv *priv = ds->priv;
+
+ /* Reset the switch */
+ reset_control_assert(priv->rstc);
+ usleep_range(20, 50);
+ reset_control_deassert(priv->rstc);
+ usleep_range(20, 50);
+
+ /* Reset the switch PHYs */
+ mt7530_write(priv, MT7530_SYS_CTRL, SYS_CTRL_PHY_RST);
+
+ return mt7531_setup_common(ds);
+}
+
+const struct dsa_switch_ops mt7530_switch_ops = {
.get_tag_protocol = mtk_get_tag_protocol,
.setup = mt753x_setup,
.get_strings = mt7530_get_strings,
.get_mac_eee = mt753x_get_mac_eee,
.set_mac_eee = mt753x_set_mac_eee,
};
+EXPORT_SYMBOL_GPL(mt7530_switch_ops);
-static const struct mt753x_info mt753x_table[] = {
+const struct mt753x_info mt753x_table[] = {
[ID_MT7621] = {
.id = ID_MT7621,
.pcs_ops = &mt7530_pcs_ops,
},
[ID_MT7531] = {
.id = ID_MT7531,
- .pcs_ops = &mt7531_pcs_ops,
+ .pcs_ops = &mt7530_pcs_ops,
.sw_setup = mt7531_setup,
.phy_read_c22 = mt7531_ind_c22_phy_read,
.phy_write_c22 = mt7531_ind_c22_phy_write,
.mac_port_get_caps = mt7531_mac_port_get_caps,
.mac_port_config = mt7531_mac_config,
},
+ [ID_MT7988] = {
+ .id = ID_MT7988,
+ .pcs_ops = &mt7530_pcs_ops,
+ .sw_setup = mt7988_setup,
+ .phy_read_c22 = mt7531_ind_c22_phy_read,
+ .phy_write_c22 = mt7531_ind_c22_phy_write,
+ .phy_read_c45 = mt7531_ind_c45_phy_read,
+ .phy_write_c45 = mt7531_ind_c45_phy_write,
+ .pad_setup = mt7988_pad_setup,
+ .cpu_port_config = mt7988_cpu_port_config,
+ .mac_port_get_caps = mt7988_mac_port_get_caps,
+ .mac_port_config = mt7988_mac_config,
+ },
};
+EXPORT_SYMBOL_GPL(mt753x_table);
-static const struct of_device_id mt7530_of_match[] = {
- { .compatible = "mediatek,mt7621", .data = &mt753x_table[ID_MT7621], },
- { .compatible = "mediatek,mt7530", .data = &mt753x_table[ID_MT7530], },
- { .compatible = "mediatek,mt7531", .data = &mt753x_table[ID_MT7531], },
- { /* sentinel */ },
-};
-MODULE_DEVICE_TABLE(of, mt7530_of_match);
-
-static int
-mt7530_probe(struct mdio_device *mdiodev)
+int
+mt7530_probe_common(struct mt7530_priv *priv)
{
- struct mt7530_priv *priv;
- struct device_node *dn;
-
- dn = mdiodev->dev.of_node;
-
- priv = devm_kzalloc(&mdiodev->dev, sizeof(*priv), GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
+ struct device *dev = priv->dev;
- priv->ds = devm_kzalloc(&mdiodev->dev, sizeof(*priv->ds), GFP_KERNEL);
+ priv->ds = devm_kzalloc(dev, sizeof(*priv->ds), GFP_KERNEL);
if (!priv->ds)
return -ENOMEM;
- priv->ds->dev = &mdiodev->dev;
+ priv->ds->dev = dev;
priv->ds->num_ports = MT7530_NUM_PORTS;
- /* Use medatek,mcm property to distinguish hardware type that would
- * casues a little bit differences on power-on sequence.
- */
- priv->mcm = of_property_read_bool(dn, "mediatek,mcm");
- if (priv->mcm) {
- dev_info(&mdiodev->dev, "MT7530 adapts as multi-chip module\n");
-
- priv->rstc = devm_reset_control_get(&mdiodev->dev, "mcm");
- if (IS_ERR(priv->rstc)) {
- dev_err(&mdiodev->dev, "Couldn't get our reset line\n");
- return PTR_ERR(priv->rstc);
- }
- }
-
/* Get the hardware identifier from the devicetree node.
* We will need it for some of the clock and regulator setup.
*/
- priv->info = of_device_get_match_data(&mdiodev->dev);
+ priv->info = of_device_get_match_data(dev);
if (!priv->info)
return -EINVAL;
return -EINVAL;
priv->id = priv->info->id;
-
- if (priv->id == ID_MT7530) {
- priv->core_pwr = devm_regulator_get(&mdiodev->dev, "core");
- if (IS_ERR(priv->core_pwr))
- return PTR_ERR(priv->core_pwr);
-
- priv->io_pwr = devm_regulator_get(&mdiodev->dev, "io");
- if (IS_ERR(priv->io_pwr))
- return PTR_ERR(priv->io_pwr);
- }
-
- /* Not MCM that indicates switch works as the remote standalone
- * integrated circuit so the GPIO pin would be used to complete
- * the reset, otherwise memory-mapped register accessing used
- * through syscon provides in the case of MCM.
- */
- if (!priv->mcm) {
- priv->reset = devm_gpiod_get_optional(&mdiodev->dev, "reset",
- GPIOD_OUT_LOW);
- if (IS_ERR(priv->reset)) {
- dev_err(&mdiodev->dev, "Couldn't get our reset line\n");
- return PTR_ERR(priv->reset);
- }
- }
-
- priv->bus = mdiodev->bus;
- priv->dev = &mdiodev->dev;
+ priv->dev = dev;
priv->ds->priv = priv;
priv->ds->ops = &mt7530_switch_ops;
mutex_init(&priv->reg_mutex);
- dev_set_drvdata(&mdiodev->dev, priv);
+ dev_set_drvdata(dev, priv);
- return dsa_register_switch(priv->ds);
+ return 0;
}
+EXPORT_SYMBOL_GPL(mt7530_probe_common);
-static void
-mt7530_remove(struct mdio_device *mdiodev)
+void
+mt7530_remove_common(struct mt7530_priv *priv)
{
- struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
- int ret = 0;
-
- if (!priv)
- return;
-
- ret = regulator_disable(priv->core_pwr);
- if (ret < 0)
- dev_err(priv->dev,
- "Failed to disable core power: %d\n", ret);
-
- ret = regulator_disable(priv->io_pwr);
- if (ret < 0)
- dev_err(priv->dev, "Failed to disable io pwr: %d\n",
- ret);
-
if (priv->irq)
mt7530_free_irq(priv);
dsa_unregister_switch(priv->ds);
- mutex_destroy(&priv->reg_mutex);
-}
-
-static void mt7530_shutdown(struct mdio_device *mdiodev)
-{
- struct mt7530_priv *priv = dev_get_drvdata(&mdiodev->dev);
-
- if (!priv)
- return;
-
- dsa_switch_shutdown(priv->ds);
- dev_set_drvdata(&mdiodev->dev, NULL);
+ mutex_destroy(&priv->reg_mutex);
}
-
-static struct mdio_driver mt7530_mdio_driver = {
- .probe = mt7530_probe,
- .remove = mt7530_remove,
- .shutdown = mt7530_shutdown,
- .mdiodrv.driver = {
- .name = "mt7530",
- .of_match_table = mt7530_of_match,
- },
-};
-
-mdio_module_driver(mt7530_mdio_driver);
+EXPORT_SYMBOL_GPL(mt7530_remove_common);
MODULE_AUTHOR("Sean Wang <sean.wang@mediatek.com>");
MODULE_DESCRIPTION("Driver for Mediatek MT7530 Switch");
ID_MT7530 = 0,
ID_MT7621 = 1,
ID_MT7531 = 2,
+ ID_MT7988 = 3,
};
#define NUM_TRGMII_CTRL 5
#define MT7531_MIRROR_PORT_SET(x) (((x) & MIRROR_MASK) << 16)
#define MT7531_CPU_PMAP_MASK GENMASK(7, 0)
-#define MT753X_MIRROR_REG(id) (((id) == ID_MT7531) ? \
+#define MT753X_MIRROR_REG(id) ((((id) == ID_MT7531) || ((id) == ID_MT7988)) ? \
MT7531_CFC : MT7530_MFC)
-#define MT753X_MIRROR_EN(id) (((id) == ID_MT7531) ? \
+#define MT753X_MIRROR_EN(id) ((((id) == ID_MT7531) || ((id) == ID_MT7988)) ? \
MT7531_MIRROR_EN : MIRROR_EN)
-#define MT753X_MIRROR_MASK(id) (((id) == ID_MT7531) ? \
+#define MT753X_MIRROR_MASK(id) ((((id) == ID_MT7531) || ((id) == ID_MT7988)) ? \
MT7531_MIRROR_MASK : MIRROR_MASK)
/* Registers for BPDU and PAE frame control*/
MT7531_FORCE_DPX | \
MT7531_FORCE_RX_FC | \
MT7531_FORCE_TX_FC)
-#define PMCR_FORCE_MODE_ID(id) (((id) == ID_MT7531) ? \
- MT7531_FORCE_MODE : \
- PMCR_FORCE_MODE)
+#define PMCR_FORCE_MODE_ID(id) ((((id) == ID_MT7531) || ((id) == ID_MT7988)) ? \
+ MT7531_FORCE_MODE : PMCR_FORCE_MODE)
#define PMCR_LINK_SETTINGS_MASK (PMCR_TX_EN | PMCR_FORCE_SPEED_1000 | \
PMCR_RX_EN | PMCR_FORCE_SPEED_100 | \
PMCR_TX_FC_EN | PMCR_RX_FC_EN | \
CCR_TX_OCT_CNT_BAD)
/* MT7531 SGMII register group */
-#define MT7531_SGMII_REG_BASE 0x5000
-#define MT7531_SGMII_REG(p, r) (MT7531_SGMII_REG_BASE + \
- ((p) - 5) * 0x1000 + (r))
-
-/* Register forSGMII PCS_CONTROL_1 */
-#define MT7531_PCS_CONTROL_1(p) MT7531_SGMII_REG(p, 0x00)
-#define MT7531_SGMII_LINK_STATUS BIT(18)
-#define MT7531_SGMII_AN_ENABLE BIT(12)
-#define MT7531_SGMII_AN_RESTART BIT(9)
-#define MT7531_SGMII_AN_COMPLETE BIT(21)
-
-/* Register for SGMII PCS_SPPED_ABILITY */
-#define MT7531_PCS_SPEED_ABILITY(p) MT7531_SGMII_REG(p, 0x08)
-#define MT7531_SGMII_TX_CONFIG_MASK GENMASK(15, 0)
-#define MT7531_SGMII_TX_CONFIG BIT(0)
-
-/* Register for SGMII_MODE */
-#define MT7531_SGMII_MODE(p) MT7531_SGMII_REG(p, 0x20)
-#define MT7531_SGMII_REMOTE_FAULT_DIS BIT(8)
-#define MT7531_SGMII_IF_MODE_MASK GENMASK(5, 1)
-#define MT7531_SGMII_FORCE_DUPLEX BIT(4)
-#define MT7531_SGMII_FORCE_SPEED_MASK GENMASK(3, 2)
-#define MT7531_SGMII_FORCE_SPEED_1000 BIT(3)
-#define MT7531_SGMII_FORCE_SPEED_100 BIT(2)
-#define MT7531_SGMII_FORCE_SPEED_10 0
-#define MT7531_SGMII_SPEED_DUPLEX_AN BIT(1)
-
-enum mt7531_sgmii_force_duplex {
- MT7531_SGMII_FORCE_FULL_DUPLEX = 0,
- MT7531_SGMII_FORCE_HALF_DUPLEX = 0x10,
-};
-
-/* Fields of QPHY_PWR_STATE_CTRL */
-#define MT7531_QPHY_PWR_STATE_CTRL(p) MT7531_SGMII_REG(p, 0xe8)
-#define MT7531_SGMII_PHYA_PWD BIT(4)
-
-/* Values of SGMII SPEED */
-#define MT7531_PHYA_CTRL_SIGNAL3(p) MT7531_SGMII_REG(p, 0x128)
-#define MT7531_RG_TPHY_SPEED_MASK (BIT(2) | BIT(3))
-#define MT7531_RG_TPHY_SPEED_1_25G 0x0
-#define MT7531_RG_TPHY_SPEED_3_125G BIT(2)
+#define MT7531_SGMII_REG_BASE(p) (0x5000 + ((p) - 5) * 0x1000)
+#define MT7531_PHYA_CTRL_SIGNAL3 0x128
/* Register for system reset */
#define MT7530_SYS_CTRL 0x7000
* @pm: The matrix used to show all connections with the port.
* @pvid: The VLAN specified is to be considered a PVID at ingress. Any
* untagged frames will be assigned to the related VLAN.
- * @vlan_filtering: The flags indicating whether the port that can recognize
- * VLAN-tagged frames.
+ * @sgmii_pcs: Pointer to PCS instance for SerDes ports
*/
struct mt7530_port {
bool enable;
u32 pm;
u16 pvid;
+ struct phylink_pcs *sgmii_pcs;
};
/* Port 5 interface select definitions */
P5_INTF_SEL_GMAC5_SGMII,
};
-static const char *p5_intf_modes(unsigned int p5_interface)
-{
- switch (p5_interface) {
- case P5_DISABLED:
- return "DISABLED";
- case P5_INTF_SEL_PHY_P0:
- return "PHY P0";
- case P5_INTF_SEL_PHY_P4:
- return "PHY P4";
- case P5_INTF_SEL_GMAC5:
- return "GMAC5";
- case P5_INTF_SEL_GMAC5_SGMII:
- return "GMAC5_SGMII";
- default:
- return "unknown";
- }
-}
-
struct mt7530_priv;
struct mt753x_pcs {
* @dev: The device pointer
* @ds: The pointer to the dsa core structure
* @bus: The bus used for the device and built-in PHY
+ * @regmap: The regmap instance representing all switch registers
* @rstc: The pointer to reset control used by MCM
* @core_pwr: The power supplied into the core
* @io_pwr: The power supplied into the I/O
struct device *dev;
struct dsa_switch *ds;
struct mii_bus *bus;
+ struct regmap *regmap;
struct reset_control *rstc;
struct regulator *core_pwr;
struct regulator *io_pwr;
p->reg = reg;
}
+int mt7530_probe_common(struct mt7530_priv *priv);
+void mt7530_remove_common(struct mt7530_priv *priv);
+
+extern const struct dsa_switch_ops mt7530_switch_ops;
+extern const struct mt753x_info mt753x_table[];
+
#endif /* __MT7530_H */
return mv88e6xxx_software_reset(chip);
}
-static void mv88e6xxx_teardown(struct dsa_switch *ds)
-{
- mv88e6xxx_teardown_devlink_params(ds);
- dsa_devlink_resources_unregister(ds);
- mv88e6xxx_teardown_devlink_regions_global(ds);
-}
-
-static int mv88e6xxx_setup(struct dsa_switch *ds)
-{
- struct mv88e6xxx_chip *chip = ds->priv;
- u8 cmode;
- int err;
- int i;
-
- chip->ds = ds;
- ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);
-
- /* Since virtual bridges are mapped in the PVT, the number we support
- * depends on the physical switch topology. We need to let DSA figure
- * that out and therefore we cannot set this at dsa_register_switch()
- * time.
- */
- if (mv88e6xxx_has_pvt(chip))
- ds->max_num_bridges = MV88E6XXX_MAX_PVT_SWITCHES -
- ds->dst->last_switch - 1;
-
- mv88e6xxx_reg_lock(chip);
-
- if (chip->info->ops->setup_errata) {
- err = chip->info->ops->setup_errata(chip);
- if (err)
- goto unlock;
- }
-
- /* Cache the cmode of each port. */
- for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
- if (chip->info->ops->port_get_cmode) {
- err = chip->info->ops->port_get_cmode(chip, i, &cmode);
- if (err)
- goto unlock;
-
- chip->ports[i].cmode = cmode;
- }
- }
-
- err = mv88e6xxx_vtu_setup(chip);
- if (err)
- goto unlock;
-
- /* Must be called after mv88e6xxx_vtu_setup (which flushes the
- * VTU, thereby also flushing the STU).
- */
- err = mv88e6xxx_stu_setup(chip);
- if (err)
- goto unlock;
-
- /* Setup Switch Port Registers */
- for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
- if (dsa_is_unused_port(ds, i))
- continue;
-
- /* Prevent the use of an invalid port. */
- if (mv88e6xxx_is_invalid_port(chip, i)) {
- dev_err(chip->dev, "port %d is invalid\n", i);
- err = -EINVAL;
- goto unlock;
- }
-
- err = mv88e6xxx_setup_port(chip, i);
- if (err)
- goto unlock;
- }
-
- err = mv88e6xxx_irl_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_mac_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_phy_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_pvt_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_atu_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_broadcast_setup(chip, 0);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_pot_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_rmu_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_rsvd2cpu_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_trunk_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_devmap_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_pri_setup(chip);
- if (err)
- goto unlock;
-
- /* Setup PTP Hardware Clock and timestamping */
- if (chip->info->ptp_support) {
- err = mv88e6xxx_ptp_setup(chip);
- if (err)
- goto unlock;
-
- err = mv88e6xxx_hwtstamp_setup(chip);
- if (err)
- goto unlock;
- }
-
- err = mv88e6xxx_stats_setup(chip);
- if (err)
- goto unlock;
-
-unlock:
- mv88e6xxx_reg_unlock(chip);
-
- if (err)
- return err;
-
- /* Have to be called without holding the register lock, since
- * they take the devlink lock, and we later take the locks in
- * the reverse order when getting/setting parameters or
- * resource occupancy.
- */
- err = mv88e6xxx_setup_devlink_resources(ds);
- if (err)
- return err;
-
- err = mv88e6xxx_setup_devlink_params(ds);
- if (err)
- goto out_resources;
-
- err = mv88e6xxx_setup_devlink_regions_global(ds);
- if (err)
- goto out_params;
-
- return 0;
-
-out_params:
- mv88e6xxx_teardown_devlink_params(ds);
-out_resources:
- dsa_devlink_resources_unregister(ds);
-
- return err;
-}
-
-static int mv88e6xxx_port_setup(struct dsa_switch *ds, int port)
-{
- return mv88e6xxx_setup_devlink_regions_port(ds, port);
-}
-
-static void mv88e6xxx_port_teardown(struct dsa_switch *ds, int port)
-{
- mv88e6xxx_teardown_devlink_regions_port(ds, port);
-}
-
/* prod_id for switch families which do not have a PHY model number */
static const u16 family_prod_id_table[] = {
[MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
bus->read_c45 = mv88e6xxx_mdio_read_c45;
bus->write_c45 = mv88e6xxx_mdio_write_c45;
bus->parent = chip->dev;
+ bus->phy_mask = ~GENMASK(chip->info->phy_base_addr +
+ mv88e6xxx_num_ports(chip) - 1,
+ chip->info->phy_base_addr);
if (!external) {
err = mv88e6xxx_g2_irq_mdio_setup(chip, bus);
}
}
-static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip,
- struct device_node *np)
+static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip)
{
+ struct device_node *np = chip->dev->of_node;
struct device_node *child;
int err;
return 0;
}
+static void mv88e6xxx_teardown(struct dsa_switch *ds)
+{
+ struct mv88e6xxx_chip *chip = ds->priv;
+
+ mv88e6xxx_teardown_devlink_params(ds);
+ dsa_devlink_resources_unregister(ds);
+ mv88e6xxx_teardown_devlink_regions_global(ds);
+ mv88e6xxx_mdios_unregister(chip);
+}
+
+static int mv88e6xxx_setup(struct dsa_switch *ds)
+{
+ struct mv88e6xxx_chip *chip = ds->priv;
+ u8 cmode;
+ int err;
+ int i;
+
+ err = mv88e6xxx_mdios_register(chip);
+ if (err)
+ return err;
+
+ chip->ds = ds;
+ ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);
+
+ /* Since virtual bridges are mapped in the PVT, the number we support
+ * depends on the physical switch topology. We need to let DSA figure
+ * that out and therefore we cannot set this at dsa_register_switch()
+ * time.
+ */
+ if (mv88e6xxx_has_pvt(chip))
+ ds->max_num_bridges = MV88E6XXX_MAX_PVT_SWITCHES -
+ ds->dst->last_switch - 1;
+
+ mv88e6xxx_reg_lock(chip);
+
+ if (chip->info->ops->setup_errata) {
+ err = chip->info->ops->setup_errata(chip);
+ if (err)
+ goto unlock;
+ }
+
+ /* Cache the cmode of each port. */
+ for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
+ if (chip->info->ops->port_get_cmode) {
+ err = chip->info->ops->port_get_cmode(chip, i, &cmode);
+ if (err)
+ goto unlock;
+
+ chip->ports[i].cmode = cmode;
+ }
+ }
+
+ err = mv88e6xxx_vtu_setup(chip);
+ if (err)
+ goto unlock;
+
+ /* Must be called after mv88e6xxx_vtu_setup (which flushes the
+ * VTU, thereby also flushing the STU).
+ */
+ err = mv88e6xxx_stu_setup(chip);
+ if (err)
+ goto unlock;
+
+ /* Setup Switch Port Registers */
+ for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
+ if (dsa_is_unused_port(ds, i))
+ continue;
+
+ /* Prevent the use of an invalid port. */
+ if (mv88e6xxx_is_invalid_port(chip, i)) {
+ dev_err(chip->dev, "port %d is invalid\n", i);
+ err = -EINVAL;
+ goto unlock;
+ }
+
+ err = mv88e6xxx_setup_port(chip, i);
+ if (err)
+ goto unlock;
+ }
+
+ err = mv88e6xxx_irl_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_mac_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_phy_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_pvt_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_atu_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_broadcast_setup(chip, 0);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_pot_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_rmu_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_rsvd2cpu_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_trunk_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_devmap_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_pri_setup(chip);
+ if (err)
+ goto unlock;
+
+ /* Setup PTP Hardware Clock and timestamping */
+ if (chip->info->ptp_support) {
+ err = mv88e6xxx_ptp_setup(chip);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_hwtstamp_setup(chip);
+ if (err)
+ goto unlock;
+ }
+
+ err = mv88e6xxx_stats_setup(chip);
+ if (err)
+ goto unlock;
+
+unlock:
+ mv88e6xxx_reg_unlock(chip);
+
+ if (err)
+ goto out_mdios;
+
+ /* Have to be called without holding the register lock, since
+ * they take the devlink lock, and we later take the locks in
+ * the reverse order when getting/setting parameters or
+ * resource occupancy.
+ */
+ err = mv88e6xxx_setup_devlink_resources(ds);
+ if (err)
+ goto out_mdios;
+
+ err = mv88e6xxx_setup_devlink_params(ds);
+ if (err)
+ goto out_resources;
+
+ err = mv88e6xxx_setup_devlink_regions_global(ds);
+ if (err)
+ goto out_params;
+
+ return 0;
+
+out_params:
+ mv88e6xxx_teardown_devlink_params(ds);
+out_resources:
+ dsa_devlink_resources_unregister(ds);
+out_mdios:
+ mv88e6xxx_mdios_unregister(chip);
+
+ return err;
+}
+
+static int mv88e6xxx_port_setup(struct dsa_switch *ds, int port)
+{
+ return mv88e6xxx_setup_devlink_regions_port(ds, port);
+}
+
+static void mv88e6xxx_port_teardown(struct dsa_switch *ds, int port)
+{
+ mv88e6xxx_teardown_devlink_regions_port(ds, port);
+}
+
static int mv88e6xxx_get_eeprom_len(struct dsa_switch *ds)
{
struct mv88e6xxx_chip *chip = ds->priv;
if (err)
goto out_g1_atu_prob_irq;
- err = mv88e6xxx_mdios_register(chip, np);
- if (err)
- goto out_g1_vtu_prob_irq;
-
err = mv88e6xxx_register_switch(chip);
if (err)
- goto out_mdio;
+ goto out_g1_vtu_prob_irq;
return 0;
-out_mdio:
- mv88e6xxx_mdios_unregister(chip);
out_g1_vtu_prob_irq:
mv88e6xxx_g1_vtu_prob_irq_free(chip);
out_g1_atu_prob_irq:
mv88e6xxx_phy_destroy(chip);
mv88e6xxx_unregister_switch(chip);
- mv88e6xxx_mdios_unregister(chip);
mv88e6xxx_g1_vtu_prob_irq_free(chip);
mv88e6xxx_g1_atu_prob_irq_free(chip);
int mv88e6xxx_g2_irq_mdio_setup(struct mv88e6xxx_chip *chip,
struct mii_bus *bus)
{
- int phy, irq, err, err_phy;
+ int phy, irq;
for (phy = 0; phy < chip->info->num_internal_phys; phy++) {
irq = irq_find_mapping(chip->g2_irq.domain, phy);
- if (irq < 0) {
- err = irq;
- goto out;
- }
+ if (irq < 0)
+ return irq;
+
bus->irq[chip->info->phy_base_addr + phy] = irq;
}
return 0;
-out:
- err_phy = phy;
-
- for (phy = 0; phy < err_phy; phy++)
- irq_dispose_mapping(bus->irq[phy]);
-
- return err;
}
void mv88e6xxx_g2_irq_mdio_free(struct mv88e6xxx_chip *chip,
struct mii_bus *bus)
{
- int phy;
-
- for (phy = 0; phy < chip->info->num_internal_phys; phy++)
- irq_dispose_mapping(bus->irq[phy]);
}
config->supported_interfaces);
}
+static void felix_phylink_mac_config(struct dsa_switch *ds, int port,
+ unsigned int mode,
+ const struct phylink_link_state *state)
+{
+ struct ocelot *ocelot = ds->priv;
+ struct felix *felix = ocelot_to_felix(ocelot);
+
+ if (felix->info->phylink_mac_config)
+ felix->info->phylink_mac_config(ocelot, port, mode, state);
+}
+
static struct phylink_pcs *felix_phylink_mac_select_pcs(struct dsa_switch *ds,
int port,
phy_interface_t iface)
if (err)
return err;
+ if (ocelot->targets[HSIO])
+ ocelot_pll5_init(ocelot);
+
err = ocelot_init(ocelot);
if (err)
goto out_mdiobus_free;
dsa_switch_for_each_available_port(dp, ds) {
ocelot_init_port(ocelot, dp->index);
+ if (felix->info->configure_serdes)
+ felix->info->configure_serdes(ocelot, dp->index,
+ dp->dn);
+
/* Set the default QoS Classification based on PCP and DEI
* bits of vlan tag.
*/
.get_sset_count = felix_get_sset_count,
.get_ts_info = felix_get_ts_info,
.phylink_get_caps = felix_phylink_get_caps,
+ .phylink_mac_config = felix_phylink_mac_config,
.phylink_mac_select_pcs = felix_phylink_mac_select_pcs,
.phylink_mac_link_down = felix_phylink_mac_link_down,
.phylink_mac_link_up = felix_phylink_mac_link_up,
#define OCELOT_PORT_MODE_USXGMII BIT(4)
#define OCELOT_PORT_MODE_1000BASEX BIT(5)
+struct device_node;
+
/* Platform-specific information */
struct felix_info {
/* Hardcoded resources provided by the hardware instantiation. */
void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
void (*port_sched_speed_set)(struct ocelot *ocelot, int port,
u32 speed);
+ void (*phylink_mac_config)(struct ocelot *ocelot, int port,
+ unsigned int mode,
+ const struct phylink_link_state *state);
+ int (*configure_serdes)(struct ocelot *ocelot, int port,
+ struct device_node *portnp);
};
/* Methods for initializing the hardware resources specific to a tagging
OCELOT_PORT_MODE_INTERNAL,
OCELOT_PORT_MODE_INTERNAL,
OCELOT_PORT_MODE_INTERNAL,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
- OCELOT_PORT_MODE_NONE,
+ OCELOT_PORT_MODE_SERDES,
+ OCELOT_PORT_MODE_SERDES,
+ OCELOT_PORT_MODE_SERDES,
+ OCELOT_PORT_MODE_SERDES,
+ OCELOT_PORT_MODE_SERDES,
+ OCELOT_PORT_MODE_SGMII,
+ OCELOT_PORT_MODE_SERDES,
};
static const struct ocelot_ops ocelot_ext_ops = {
.num_ports = VSC7514_NUM_PORTS,
.num_tx_queues = OCELOT_NUM_TC,
.port_modes = vsc7512_port_modes,
+ .phylink_mac_config = ocelot_phylink_mac_config,
+ .configure_serdes = ocelot_port_configure_serdes,
};
static int ocelot_ext_probe(struct platform_device *pdev)
static struct platform_driver ocelot_ext_switch_driver = {
.driver = {
.name = "ocelot-ext-switch",
- .of_match_table = of_match_ptr(ocelot_ext_switch_of_match),
+ .of_match_table = ocelot_ext_switch_of_match,
},
.probe = ocelot_ext_probe,
.remove = ocelot_ext_remove,
.shutdown = seville_shutdown,
.driver = {
.name = "mscc_seville",
- .of_match_table = of_match_ptr(seville_of_match),
+ .of_match_table = seville_of_match,
},
};
module_platform_driver(seville_vsc9953_driver);
state->link = !!(reg & QCA8K_PORT_STATUS_LINK_UP);
state->an_complete = state->link;
- state->an_enabled = !!(reg & QCA8K_PORT_STATUS_LINK_AUTO);
state->duplex = (reg & QCA8K_PORT_STATUS_DUPLEX) ? DUPLEX_FULL :
DUPLEX_HALF;
#include <linux/of_irq.h>
#include <linux/regmap.h>
#include <linux/if_bridge.h>
+#include <linux/if_vlan.h>
#include "realtek.h"
/* Maximum packet length register */
#define RTL8365MB_CFG0_MAX_LEN_REG 0x088C
#define RTL8365MB_CFG0_MAX_LEN_MASK 0x3FFF
+#define RTL8365MB_CFG0_MAX_LEN_MAX 0x3FFF
/* Port learning limit registers */
#define RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE 0x0A20
}
}
+static int rtl8365mb_port_change_mtu(struct dsa_switch *ds, int port,
+ int new_mtu)
+{
+ struct realtek_priv *priv = ds->priv;
+ int frame_size;
+
+ /* When a new MTU is set, DSA always sets the CPU port's MTU to the
+ * largest MTU of the slave ports. Because the switch only has a global
+	 * RX length register, handling only the CPU port here is enough.
+ */
+ if (!dsa_is_cpu_port(ds, port))
+ return 0;
+
+ frame_size = new_mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
+
+ dev_dbg(priv->dev, "changing mtu to %d (frame size: %d)\n",
+ new_mtu, frame_size);
+
+ return regmap_update_bits(priv->map, RTL8365MB_CFG0_MAX_LEN_REG,
+ RTL8365MB_CFG0_MAX_LEN_MASK,
+ FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK,
+ frame_size));
+}
+
+static int rtl8365mb_port_max_mtu(struct dsa_switch *ds, int port)
+{
+ return RTL8365MB_CFG0_MAX_LEN_MAX - VLAN_ETH_HLEN - ETH_FCS_LEN;
+}
+
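
As a side note, the MTU arithmetic above can be checked in isolation. A minimal standalone sketch, assuming the usual kernel values VLAN_ETH_HLEN = 18 and ETH_FCS_LEN = 4 (the CFG0 register holds the full frame length, so the largest usable MTU is 0x3FFF - 18 - 4 = 16361):

#include <stdio.h>

#define VLAN_ETH_HLEN		18	/* Ethernet header plus one VLAN tag */
#define ETH_FCS_LEN		4	/* frame check sequence */
#define CFG0_MAX_LEN_MAX	0x3FFF	/* 14-bit frame-length field */

int main(void)
{
	int mtu = 1500;	/* ETH_DATA_LEN */
	int frame_size = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
	int max_mtu = CFG0_MAX_LEN_MAX - VLAN_ETH_HLEN - ETH_FCS_LEN;

	printf("frame size for MTU %d: %d\n", mtu, frame_size);	/* 1522 */
	printf("largest supported MTU: %d\n", max_mtu);		/* 16361 */
	return 0;
}
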
static void rtl8365mb_port_stp_state_set(struct dsa_switch *ds, int port,
u8 state)
{
p->index = i;
}
- /* Set maximum packet length to 1536 bytes */
- ret = regmap_update_bits(priv->map, RTL8365MB_CFG0_MAX_LEN_REG,
- RTL8365MB_CFG0_MAX_LEN_MASK,
- FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK, 1536));
+ ret = rtl8365mb_port_change_mtu(ds, cpu->trap_port, ETH_DATA_LEN);
if (ret)
goto out_teardown_irq;
.get_eth_mac_stats = rtl8365mb_get_mac_stats,
.get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
.get_stats64 = rtl8365mb_get_stats64,
+ .port_change_mtu = rtl8365mb_port_change_mtu,
+ .port_max_mtu = rtl8365mb_port_max_mtu,
};
static const struct dsa_switch_ops rtl8365mb_switch_ops_mdio = {
.get_eth_mac_stats = rtl8365mb_get_mac_stats,
.get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
.get_stats64 = rtl8365mb_get_stats64,
+ .port_change_mtu = rtl8365mb_port_change_mtu,
+ .port_max_mtu = rtl8365mb_port_max_mtu,
};
static const struct realtek_ops rtl8365mb_ops = {
{
unsigned int nic_base = dev->base_addr;
struct ei_device *ei_local = netdev_priv(dev);
- int xfer_count = count;
char *buf = skb->data;
if ((netif_msg_rx_status(ei_local)) && (count != 4))
insw(nic_base + AXNET_DATAPORT,buf,count>>1);
if (count & 0x01) {
buf[count-1] = inb(nic_base + AXNET_DATAPORT);
- xfer_count++;
}
-
}
/*====================================================================*/
source "drivers/net/ethernet/i825xx/Kconfig"
source "drivers/net/ethernet/ibm/Kconfig"
source "drivers/net/ethernet/intel/Kconfig"
-source "drivers/net/ethernet/wangxun/Kconfig"
source "drivers/net/ethernet/xscale/Kconfig"
config JME
source "drivers/net/ethernet/tundra/Kconfig"
source "drivers/net/ethernet/vertexcom/Kconfig"
source "drivers/net/ethernet/via/Kconfig"
+source "drivers/net/ethernet/wangxun/Kconfig"
source "drivers/net/ethernet/wiznet/Kconfig"
source "drivers/net/ethernet/xilinx/Kconfig"
source "drivers/net/ethernet/xircom/Kconfig"
} else {
dma_addr_t mapping;
u32 vlan_tag = 0;
- int i, len = 0;
+ int i;
mapping = ace_map_tx_skb(ap, skb, NULL, idx);
flagsize = (skb_headlen(skb) << 16);
const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
struct tx_ring_info *info;
- len += skb_frag_size(frag);
info = ap->skb->tx_skbuff + idx;
desc = ap->tx_ring + idx;
/* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */
#define ENA_COMP_HEAD_THRESH 4
+/* we allow 2 DMA descriptors per LLQ entry */
+#define ENA_LLQ_ENTRY_DESC_CHUNK_SIZE (2 * sizeof(struct ena_eth_io_tx_desc))
+#define ENA_LLQ_HEADER (128UL - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
+#define ENA_LLQ_LARGE_HEADER (256UL - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
struct ena_com_tx_ctx {
struct ena_com_tx_meta ena_meta;
ring->tx_max_pending = adapter->max_tx_ring_size;
ring->rx_max_pending = adapter->max_rx_ring_size;
+ if (adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+ bool large_llq_supported = adapter->large_llq_header_supported;
+
+ kernel_ring->tx_push = true;
+ kernel_ring->tx_push_buf_len = adapter->ena_dev->tx_max_header_size;
+ if (large_llq_supported)
+ kernel_ring->tx_push_buf_max_len = ENA_LLQ_LARGE_HEADER;
+ else
+ kernel_ring->tx_push_buf_max_len = ENA_LLQ_HEADER;
+ } else {
+ kernel_ring->tx_push = false;
+ kernel_ring->tx_push_buf_max_len = 0;
+ kernel_ring->tx_push_buf_len = 0;
+ }
+
ring->tx_pending = adapter->tx_ring[0].ring_size;
ring->rx_pending = adapter->rx_ring[0].ring_size;
}
struct netlink_ext_ack *extack)
{
struct ena_adapter *adapter = netdev_priv(netdev);
- u32 new_tx_size, new_rx_size;
+ u32 new_tx_size, new_rx_size, new_tx_push_buf_len;
+ bool changed = false;
new_tx_size = ring->tx_pending < ENA_MIN_RING_SIZE ?
ENA_MIN_RING_SIZE : ring->tx_pending;
ENA_MIN_RING_SIZE : ring->rx_pending;
new_rx_size = rounddown_pow_of_two(new_rx_size);
- if (new_tx_size == adapter->requested_tx_ring_size &&
- new_rx_size == adapter->requested_rx_ring_size)
+ changed |= new_tx_size != adapter->requested_tx_ring_size ||
+ new_rx_size != adapter->requested_rx_ring_size;
+
+ /* This value is ignored if LLQ is not supported */
+ new_tx_push_buf_len = adapter->ena_dev->tx_max_header_size;
+
+ if ((adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) !=
+ kernel_ring->tx_push) {
+ NL_SET_ERR_MSG_MOD(extack, "Push mode state cannot be modified");
+ return -EINVAL;
+ }
+
+ /* Validate that the push buffer is supported on the underlying device */
+ if (kernel_ring->tx_push_buf_len) {
+ enum ena_admin_placement_policy_type placement;
+
+ new_tx_push_buf_len = kernel_ring->tx_push_buf_len;
+
+ placement = adapter->ena_dev->tx_mem_queue_type;
+ if (placement == ENA_ADMIN_PLACEMENT_POLICY_HOST)
+ return -EOPNOTSUPP;
+
+ if (new_tx_push_buf_len != ENA_LLQ_HEADER &&
+ new_tx_push_buf_len != ENA_LLQ_LARGE_HEADER) {
+ bool large_llq_sup = adapter->large_llq_header_supported;
+ char large_llq_size_str[40];
+
+ snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
+
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Supported tx push buff values: [%lu%s]",
+ ENA_LLQ_HEADER,
+ large_llq_sup ? large_llq_size_str : "");
+
+ return -EINVAL;
+ }
+
+ changed |= new_tx_push_buf_len != adapter->ena_dev->tx_max_header_size;
+ }
+
+ if (!changed)
return 0;
- return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
+ return ena_update_queue_params(adapter, new_tx_size, new_rx_size,
+ new_tx_push_buf_len);
}
static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
static const struct ethtool_ops ena_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
+ .supported_ring_params = ETHTOOL_RING_USE_TX_PUSH_BUF_LEN |
+ ETHTOOL_RING_USE_TX_PUSH,
.get_link_ksettings = ena_get_link_ksettings,
.get_drvinfo = ena_get_drvinfo,
.get_msglevel = ena_get_msglevel,
{
u32 total_done = 0;
u16 next_to_clean;
- u32 tx_bytes = 0;
int tx_pkts = 0;
u16 req_id;
int rc;
"tx_poll: q %d skb %p completed\n", xdp_ring->qid,
xdpf);
- tx_bytes += xdpf->len;
tx_pkts++;
total_done += tx_info->tx_descs;
return 0;
}
-int ena_update_queue_sizes(struct ena_adapter *adapter,
- u32 new_tx_size,
- u32 new_rx_size)
+int ena_update_queue_params(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size,
+ u32 new_llq_header_len)
{
- bool dev_was_up;
+ bool dev_was_up, large_llq_changed = false;
+ int rc = 0;
dev_was_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
ena_close(adapter->netdev);
0,
adapter->xdp_num_queues +
adapter->num_io_queues);
- return dev_was_up ? ena_up(adapter) : 0;
+
+ large_llq_changed = adapter->ena_dev->tx_mem_queue_type ==
+ ENA_ADMIN_PLACEMENT_POLICY_DEV;
+ large_llq_changed &=
+ new_llq_header_len != adapter->ena_dev->tx_max_header_size;
+
+	/* A check that the configuration is valid is done by the caller */
+ if (large_llq_changed) {
+ adapter->large_llq_header_enabled = !adapter->large_llq_header_enabled;
+
+ ena_destroy_device(adapter, false);
+ rc = ena_restore_device(adapter);
+ }
+
+ return dev_was_up && !rc ? ena_up(adapter) : rc;
}
int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak)
.ndo_xdp_xmit = ena_xdp_xmit,
};
+static void ena_calc_io_queue_size(struct ena_adapter *adapter,
+ struct ena_com_dev_get_features_ctx *get_feat_ctx)
+{
+ struct ena_admin_feature_llq_desc *llq = &get_feat_ctx->llq;
+ struct ena_com_dev *ena_dev = adapter->ena_dev;
+ u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
+ u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
+ u32 max_tx_queue_size;
+ u32 max_rx_queue_size;
+
+ /* If this function is called after driver load, the ring sizes have already
+ * been configured. Take it into account when recalculating ring size.
+ */
+ if (adapter->tx_ring->ring_size)
+ tx_queue_size = adapter->tx_ring->ring_size;
+
+ if (adapter->rx_ring->ring_size)
+ rx_queue_size = adapter->rx_ring->ring_size;
+
+ if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+ struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+ &get_feat_ctx->max_queue_ext.max_queue_ext;
+ max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
+ max_queue_ext->max_rx_sq_depth);
+ max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
+
+ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ llq->max_llq_depth);
+ else
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ max_queue_ext->max_tx_sq_depth);
+
+ adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queue_ext->max_per_packet_tx_descs);
+ adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queue_ext->max_per_packet_rx_descs);
+ } else {
+ struct ena_admin_queue_feature_desc *max_queues =
+ &get_feat_ctx->max_queues;
+ max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
+ max_queues->max_sq_depth);
+ max_tx_queue_size = max_queues->max_cq_depth;
+
+ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ llq->max_llq_depth);
+ else
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ max_queues->max_sq_depth);
+
+ adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queues->max_packet_tx_descs);
+ adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queues->max_packet_rx_descs);
+ }
+
+ max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
+ max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
+
+ /* When forcing large headers, we multiply the entry size by 2, and therefore divide
+ * the queue size by 2, leaving the amount of memory used by the queues unchanged.
+ */
+ if (adapter->large_llq_header_enabled) {
+ if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+ ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+ max_tx_queue_size /= 2;
+ dev_info(&adapter->pdev->dev,
+ "Forcing large headers and decreasing maximum TX queue size to %d\n",
+ max_tx_queue_size);
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "Forcing large headers failed: LLQ is disabled or device does not support large headers\n");
+
+ adapter->large_llq_header_enabled = false;
+ }
+ }
+
+ tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
+ max_tx_queue_size);
+ rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
+ max_rx_queue_size);
+
+ tx_queue_size = rounddown_pow_of_two(tx_queue_size);
+ rx_queue_size = rounddown_pow_of_two(rx_queue_size);
+
+ adapter->max_tx_ring_size = max_tx_queue_size;
+ adapter->max_rx_ring_size = max_rx_queue_size;
+ adapter->requested_tx_ring_size = tx_queue_size;
+ adapter->requested_rx_ring_size = rx_queue_size;
+}
+
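
The sizing pattern in ena_calc_io_queue_size() above (clamp the requested depth into the supported range, then round down to a power of two) is generic. A minimal sketch with stand-in helpers, not the driver's code:

#include <stdio.h>

/* round v down to the nearest power of two (v >= 1), like the kernel helper */
static unsigned int rounddown_pow_of_two(unsigned int v)
{
	unsigned int p = 1;

	while (p <= v / 2)
		p *= 2;
	return p;
}

static unsigned int pick_queue_size(unsigned int requested,
				    unsigned int min, unsigned int max)
{
	unsigned int size = requested;

	if (size < min)
		size = min;
	if (size > max)
		size = max;
	return rounddown_pow_of_two(size);
}

int main(void)
{
	/* e.g. a 1024-entry default clamped against a device limit of 700 */
	printf("%u\n", pick_queue_size(1024, 256, 700));	/* prints 512 */
	return 0;
}
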
static int ena_device_validate_params(struct ena_adapter *adapter,
struct ena_com_dev_get_features_ctx *get_feat_ctx)
{
return 0;
}
-static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
+static void set_default_llq_configurations(struct ena_adapter *adapter,
+ struct ena_llq_configurations *llq_config,
+ struct ena_admin_feature_llq_desc *llq)
{
+ struct ena_com_dev *ena_dev = adapter->ena_dev;
+
llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
- llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
- llq_config->llq_ring_entry_size_value = 128;
+
+ adapter->large_llq_header_supported =
+ !!(ena_dev->supported_features & BIT(ENA_ADMIN_LLQ));
+ adapter->large_llq_header_supported &=
+ !!(llq->entry_size_ctrl_supported &
+ ENA_ADMIN_LIST_ENTRY_SIZE_256B);
+
+ if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+ adapter->large_llq_header_enabled) {
+ llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_256B;
+ llq_config->llq_ring_entry_size_value = 256;
+ } else {
+ llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
+ llq_config->llq_ring_entry_size_value = 128;
+ }
}
static int ena_set_queues_placement_policy(struct pci_dev *pdev,
return 0;
}
+ if (!ena_dev->mem_bar) {
+ netdev_err(ena_dev->net_device,
+ "LLQ is advertised as supported but device doesn't expose mem bar\n");
+ ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
+ return 0;
+ }
+
rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations);
if (unlikely(rc)) {
dev_err(&pdev->dev,
{
bool has_mem_bar = !!(bars & BIT(ENA_MEM_BAR));
- if (!has_mem_bar) {
- if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
- dev_err(&pdev->dev,
- "ENA device does not expose LLQ bar. Fallback to host mode policy.\n");
- ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
- }
-
+ if (!has_mem_bar)
return 0;
- }
ena_dev->mem_bar = devm_ioremap_wc(&pdev->dev,
pci_resource_start(pdev, ENA_MEM_BAR),
return 0;
}
-static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
+static int ena_device_init(struct ena_adapter *adapter, struct pci_dev *pdev,
struct ena_com_dev_get_features_ctx *get_feat_ctx,
bool *wd_state)
{
+ struct ena_com_dev *ena_dev = adapter->ena_dev;
struct ena_llq_configurations llq_config;
struct device *dev = &pdev->dev;
bool readless_supported;
*wd_state = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE));
- set_default_llq_configurations(&llq_config);
+ set_default_llq_configurations(adapter, &llq_config, &get_feat_ctx->llq);
rc = ena_set_queues_placement_policy(pdev, ena_dev, &get_feat_ctx->llq,
&llq_config);
goto err_admin_init;
}
+ ena_calc_io_queue_size(adapter, get_feat_ctx);
+
return 0;
err_admin_init:
struct ena_com_dev_get_features_ctx get_feat_ctx;
struct ena_com_dev *ena_dev = adapter->ena_dev;
struct pci_dev *pdev = adapter->pdev;
+ struct ena_ring *txr;
+ int rc, count, i;
bool wd_state;
- int rc;
set_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
- rc = ena_device_init(ena_dev, adapter->pdev, &get_feat_ctx, &wd_state);
+ rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx, &wd_state);
if (rc) {
dev_err(&pdev->dev, "Can not initialize device\n");
goto err;
}
adapter->wd_state = wd_state;
+ count = adapter->xdp_num_queues + adapter->num_io_queues;
+ for (i = 0 ; i < count; i++) {
+ txr = &adapter->tx_ring[i];
+ txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type;
+ txr->tx_max_header_size = ena_dev->tx_max_header_size;
+ }
+
rc = ena_device_validate_params(adapter, &get_feat_ctx);
if (rc) {
dev_err(&pdev->dev, "Validation of device parameters failed\n");
pci_release_selected_regions(pdev, release_bars);
}
-
-static void ena_calc_io_queue_size(struct ena_adapter *adapter,
- struct ena_com_dev_get_features_ctx *get_feat_ctx)
-{
- struct ena_admin_feature_llq_desc *llq = &get_feat_ctx->llq;
- struct ena_com_dev *ena_dev = adapter->ena_dev;
- u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
- u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
- u32 max_tx_queue_size;
- u32 max_rx_queue_size;
-
- if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
- struct ena_admin_queue_ext_feature_fields *max_queue_ext =
- &get_feat_ctx->max_queue_ext.max_queue_ext;
- max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
- max_queue_ext->max_rx_sq_depth);
- max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
-
- if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
- max_tx_queue_size = min_t(u32, max_tx_queue_size,
- llq->max_llq_depth);
- else
- max_tx_queue_size = min_t(u32, max_tx_queue_size,
- max_queue_ext->max_tx_sq_depth);
-
- adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- max_queue_ext->max_per_packet_tx_descs);
- adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- max_queue_ext->max_per_packet_rx_descs);
- } else {
- struct ena_admin_queue_feature_desc *max_queues =
- &get_feat_ctx->max_queues;
- max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
- max_queues->max_sq_depth);
- max_tx_queue_size = max_queues->max_cq_depth;
-
- if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
- max_tx_queue_size = min_t(u32, max_tx_queue_size,
- llq->max_llq_depth);
- else
- max_tx_queue_size = min_t(u32, max_tx_queue_size,
- max_queues->max_sq_depth);
-
- adapter->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- max_queues->max_packet_tx_descs);
- adapter->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- max_queues->max_packet_rx_descs);
- }
-
- max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
- max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
-
- tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
- max_tx_queue_size);
- rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
- max_rx_queue_size);
-
- tx_queue_size = rounddown_pow_of_two(tx_queue_size);
- rx_queue_size = rounddown_pow_of_two(rx_queue_size);
-
- adapter->max_tx_ring_size = max_tx_queue_size;
- adapter->max_rx_ring_size = max_rx_queue_size;
- adapter->requested_tx_ring_size = tx_queue_size;
- adapter->requested_rx_ring_size = rx_queue_size;
-}
-
/* ena_probe - Device Initialization Routine
* @pdev: PCI device information struct
* @ent: entry in ena_pci_tbl
pci_set_drvdata(pdev, adapter);
- rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state);
+ rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
+ if (rc) {
+ dev_err(&pdev->dev, "ENA LLQ bar mapping failed\n");
+ goto err_netdev_destroy;
+ }
+
+ rc = ena_device_init(adapter, pdev, &get_feat_ctx, &wd_state);
if (rc) {
dev_err(&pdev->dev, "ENA device init failed\n");
if (rc == -ETIME)
goto err_netdev_destroy;
}
- rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
- if (rc) {
- dev_err(&pdev->dev, "ENA llq bar mapping failed\n");
- goto err_device_destroy;
- }
-
/* Initial TX and RX interrupt delay. Assumes 1 usec granularity.
* Updated during device initialization with the real granularity
*/
ena_dev->intr_moder_rx_interval = ENA_INTR_INITIAL_RX_INTERVAL_USECS;
ena_dev->intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx);
- ena_calc_io_queue_size(adapter, &get_feat_ctx);
if (unlikely(!max_num_io_queues)) {
rc = -EFAULT;
goto err_device_destroy;
"Failed to query interrupt moderation feature\n");
goto err_device_destroy;
}
+
ena_init_io_rings(adapter,
0,
adapter->xdp_num_queues +
rtnl_lock(); /* lock released inside the below if-else block */
adapter->reset_reason = ENA_REGS_RESET_SHUTDOWN;
ena_destroy_device(adapter, true);
+
if (shutdown) {
netif_device_detach(netdev);
dev_close(netdev);
u32 msg_enable;
+ /* large_llq_header_enabled is used for two purposes:
+ * 1. Indicates that large LLQ has been requested.
+ * 2. Indicates whether large LLQ is set or not after device
+ * initialization / configuration.
+ */
+ bool large_llq_header_enabled;
+ bool large_llq_header_supported;
+
u16 max_tx_sgl_size;
u16 max_rx_sgl_size;
int ena_update_hw_stats(struct ena_adapter *adapter);
-int ena_update_queue_sizes(struct ena_adapter *adapter,
- u32 new_tx_size,
- u32 new_rx_size);
+int ena_update_queue_params(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size,
+ u32 new_llq_header_len);
int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count);
#include <linux/ipv6.h>
#include <linux/if_vlan.h>
#include <linux/mdio.h>
-#include <linux/aer.h>
#include <linux/bitops.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
goto out_pci_disable;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
if (!pdev->pm_cap) {
free_netdev(netdev);
out_pci_release:
pci_release_mem_regions(pdev);
- pci_disable_pcie_error_reporting(pdev);
out_pci_disable:
pci_disable_device(pdev);
return err;
iounmap(hw->hw_addr);
pci_release_mem_regions(pdev);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
mutex_destroy(&alx->mtx);
synchronize_irq(adapter->pdev->irq);
}
-/**
- * atl1c_irq_reset - reset interrupt confiure on the NIC
- * @adapter: board private structure
- */
-static inline void atl1c_irq_reset(struct atl1c_adapter *adapter)
-{
- atomic_set(&adapter->irq_sem, 1);
- atl1c_irq_enable(adapter);
-}
-
/*
* atl1c_wait_until_idle - wait up to AT_HW_MAX_IDLE_DELAY reads
* of the idle status register until the device is actually idle
#include <linux/cache.h>
#include <linux/firmware.h>
#include <linux/log2.h>
-#include <linux/aer.h>
#include <linux/crash_dump.h>
#if IS_ENABLED(CONFIG_CNIC)
return 0;
}
-static int
+static void
load_cpu_fw(struct bnx2 *bp, const struct cpu_reg *cpu_reg,
const struct bnx2_mips_fw_file_entry *fw_entry)
{
val &= ~cpu_reg->mode_value_halt;
bnx2_reg_wr_ind(bp, cpu_reg->state, cpu_reg->state_value_clear);
bnx2_reg_wr_ind(bp, cpu_reg->mode, val);
-
- return 0;
}
-static int
+static void
bnx2_init_cpus(struct bnx2 *bp)
{
const struct bnx2_mips_fw_file *mips_fw =
(const struct bnx2_mips_fw_file *) bp->mips_firmware->data;
const struct bnx2_rv2p_fw_file *rv2p_fw =
(const struct bnx2_rv2p_fw_file *) bp->rv2p_firmware->data;
- int rc;
/* Initialize the RV2P processor. */
load_rv2p_fw(bp, RV2P_PROC1, &rv2p_fw->proc1);
load_rv2p_fw(bp, RV2P_PROC2, &rv2p_fw->proc2);
/* Initialize the RX Processor. */
- rc = load_cpu_fw(bp, &cpu_reg_rxp, &mips_fw->rxp);
- if (rc)
- goto init_cpu_err;
+ load_cpu_fw(bp, &cpu_reg_rxp, &mips_fw->rxp);
/* Initialize the TX Processor. */
- rc = load_cpu_fw(bp, &cpu_reg_txp, &mips_fw->txp);
- if (rc)
- goto init_cpu_err;
+ load_cpu_fw(bp, &cpu_reg_txp, &mips_fw->txp);
/* Initialize the TX Patch-up Processor. */
- rc = load_cpu_fw(bp, &cpu_reg_tpat, &mips_fw->tpat);
- if (rc)
- goto init_cpu_err;
+ load_cpu_fw(bp, &cpu_reg_tpat, &mips_fw->tpat);
/* Initialize the Completion Processor. */
- rc = load_cpu_fw(bp, &cpu_reg_com, &mips_fw->com);
- if (rc)
- goto init_cpu_err;
+ load_cpu_fw(bp, &cpu_reg_com, &mips_fw->com);
/* Initialize the Command Processor. */
- rc = load_cpu_fw(bp, &cpu_reg_cp, &mips_fw->cp);
-
-init_cpu_err:
- return rc;
+ load_cpu_fw(bp, &cpu_reg_cp, &mips_fw->cp);
}
static void
} else
bnx2_init_context(bp);
- if ((rc = bnx2_init_cpus(bp)) != 0)
- return rc;
+ bnx2_init_cpus(bp);
bnx2_init_nvram(bp);
int rc, i, j;
u32 reg;
u64 dma_mask, persist_dma_mask;
- int err;
SET_NETDEV_DEV(dev, &pdev->dev);
bp = netdev_priv(dev);
bp->flags |= BNX2_FLAG_PCIE;
if (BNX2_CHIP_REV(bp) == BNX2_CHIP_REV_Ax)
bp->flags |= BNX2_FLAG_JUMBO_BROKEN;
-
- /* AER (Advanced Error Reporting) hooks */
- err = pci_enable_pcie_error_reporting(pdev);
- if (!err)
- bp->flags |= BNX2_FLAG_AER_ENABLED;
-
} else {
bp->pcix_cap = pci_find_capability(pdev, PCI_CAP_ID_PCIX);
if (bp->pcix_cap == 0) {
return 0;
err_out_unmap:
- if (bp->flags & BNX2_FLAG_AER_ENABLED) {
- pci_disable_pcie_error_reporting(pdev);
- bp->flags &= ~BNX2_FLAG_AER_ENABLED;
- }
-
pci_iounmap(pdev, bp->regview);
bp->regview = NULL;
bnx2_free_stats_blk(dev);
kfree(bp->temp_stats_blk);
- if (bp->flags & BNX2_FLAG_AER_ENABLED) {
- pci_disable_pcie_error_reporting(pdev);
- bp->flags &= ~BNX2_FLAG_AER_ENABLED;
- }
-
bnx2_release_firmware(bp);
free_netdev(dev);
}
rtnl_unlock();
- if (!(bp->flags & BNX2_FLAG_AER_ENABLED))
- return result;
-
return result;
}
#define BNX2_FLAG_JUMBO_BROKEN 0x00000800
#define BNX2_FLAG_CAN_KEEP_VLAN 0x00001000
#define BNX2_FLAG_BROKEN_STATS 0x00002000
-#define BNX2_FLAG_AER_ENABLED 0x00004000
struct bnx2_napi bnx2_napi[BNX2_MAX_MSIX_VEC];
#define IS_VF_FLAG (1 << 22)
#define BC_SUPPORTS_RMMOD_CMD (1 << 23)
#define HAS_PHYS_PORT_ID (1 << 24)
-#define AER_ENABLED (1 << 25)
#define PTP_SUPPORTED (1 << 26)
#define TX_TIMESTAMPING_EN (1 << 27)
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
.ndo_features_check = bnx2x_features_check,
};
-static void bnx2x_disable_pcie_error_reporting(struct bnx2x *bp)
-{
- if (bp->flags & AER_ENABLED) {
- pci_disable_pcie_error_reporting(bp->pdev);
- bp->flags &= ~AER_ENABLED;
- }
-}
-
static int bnx2x_init_dev(struct bnx2x *bp, struct pci_dev *pdev,
struct net_device *dev, unsigned long board_type)
{
/* Set PCIe reset type to fundamental for EEH recovery */
pdev->needs_freset = 1;
- /* AER (Advanced Error reporting) configuration */
- rc = pci_enable_pcie_error_reporting(pdev);
- if (!rc)
- bp->flags |= AER_ENABLED;
- else
- BNX2X_DEV_INFO("Failed To configure PCIe AER [%d]\n", rc);
-
/*
* Clean the following indirect addresses for all functions since it
* is not used by the driver.
bnx2x_free_mem_bp(bp);
init_one_exit:
- bnx2x_disable_pcie_error_reporting(bp);
-
if (bp->regview)
iounmap(bp->regview);
pci_set_power_state(pdev, PCI_D3hot);
}
- bnx2x_disable_pcie_error_reporting(bp);
if (remove_netdev) {
if (bp->regview)
iounmap(bp->regview);
#include <linux/prefetch.h>
#include <linux/cache.h>
#include <linux/log2.h>
-#include <linux/aer.h>
#include <linux/bitmap.h>
#include <linux/cpu_rmap.h>
#include <linux/cpumask.h>
if (flags & FUNC_QCAPS_RESP_FLAGS_WOL_MAGICPKT_SUPPORTED)
bp->flags |= BNXT_FLAG_WOL_CAP;
if (flags & FUNC_QCAPS_RESP_FLAGS_PTP_SUPPORTED) {
- __bnxt_hwrm_ptp_qcfg(bp);
+ bp->fw_cap |= BNXT_FW_CAP_PTP;
} else {
bnxt_ptp_clear(bp);
kfree(bp->ptp_cfg);
bnxt_hwrm_vnic_qcaps(bp);
bnxt_hwrm_port_led_qcaps(bp);
bnxt_ethtool_init(bp);
+ if (bp->fw_cap & BNXT_FW_CAP_PTP)
+ __bnxt_hwrm_ptp_qcfg(bp);
bnxt_dcb_init(bp);
return 0;
}
goto init_err_release;
}
- pci_enable_pcie_error_reporting(pdev);
-
INIT_WORK(&bp->sp_task, bnxt_sp_task);
INIT_DELAYED_WORK(&bp->fw_reset_task, bnxt_fw_reset_task);
bnxt_rdma_aux_device_uninit(bp);
bnxt_ptp_clear(bp);
- pci_disable_pcie_error_reporting(pdev);
unregister_netdev(dev);
clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
/* Flush any pending tasks */
u32 msg_enable;
- u32 fw_cap;
- #define BNXT_FW_CAP_SHORT_CMD 0x00000001
- #define BNXT_FW_CAP_LLDP_AGENT 0x00000002
- #define BNXT_FW_CAP_DCBX_AGENT 0x00000004
- #define BNXT_FW_CAP_NEW_RM 0x00000008
- #define BNXT_FW_CAP_IF_CHANGE 0x00000010
- #define BNXT_FW_CAP_KONG_MB_CHNL 0x00000080
- #define BNXT_FW_CAP_OVS_64BIT_HANDLE 0x00000400
- #define BNXT_FW_CAP_TRUSTED_VF 0x00000800
- #define BNXT_FW_CAP_ERROR_RECOVERY 0x00002000
- #define BNXT_FW_CAP_PKG_VER 0x00004000
- #define BNXT_FW_CAP_CFA_ADV_FLOW 0x00008000
- #define BNXT_FW_CAP_CFA_RFS_RING_TBL_IDX_V2 0x00010000
- #define BNXT_FW_CAP_PCIE_STATS_SUPPORTED 0x00020000
- #define BNXT_FW_CAP_EXT_STATS_SUPPORTED 0x00040000
- #define BNXT_FW_CAP_RSS_HASH_TYPE_DELTA 0x00080000
- #define BNXT_FW_CAP_ERR_RECOVER_RELOAD 0x00100000
- #define BNXT_FW_CAP_HOT_RESET 0x00200000
- #define BNXT_FW_CAP_PTP_RTC 0x00400000
- #define BNXT_FW_CAP_RX_ALL_PKT_TS 0x00800000
- #define BNXT_FW_CAP_VLAN_RX_STRIP 0x01000000
- #define BNXT_FW_CAP_VLAN_TX_INSERT 0x02000000
- #define BNXT_FW_CAP_EXT_HW_STATS_SUPPORTED 0x04000000
- #define BNXT_FW_CAP_LIVEPATCH 0x08000000
- #define BNXT_FW_CAP_PTP_PPS 0x10000000
- #define BNXT_FW_CAP_HOT_RESET_IF 0x20000000
- #define BNXT_FW_CAP_RING_MONITOR 0x40000000
- #define BNXT_FW_CAP_DBG_QCAPS 0x80000000
+ u64 fw_cap;
+ #define BNXT_FW_CAP_SHORT_CMD BIT_ULL(0)
+ #define BNXT_FW_CAP_LLDP_AGENT BIT_ULL(1)
+ #define BNXT_FW_CAP_DCBX_AGENT BIT_ULL(2)
+ #define BNXT_FW_CAP_NEW_RM BIT_ULL(3)
+ #define BNXT_FW_CAP_IF_CHANGE BIT_ULL(4)
+ #define BNXT_FW_CAP_KONG_MB_CHNL BIT_ULL(7)
+ #define BNXT_FW_CAP_OVS_64BIT_HANDLE BIT_ULL(10)
+ #define BNXT_FW_CAP_TRUSTED_VF BIT_ULL(11)
+ #define BNXT_FW_CAP_ERROR_RECOVERY BIT_ULL(13)
+ #define BNXT_FW_CAP_PKG_VER BIT_ULL(14)
+ #define BNXT_FW_CAP_CFA_ADV_FLOW BIT_ULL(15)
+ #define BNXT_FW_CAP_CFA_RFS_RING_TBL_IDX_V2 BIT_ULL(16)
+ #define BNXT_FW_CAP_PCIE_STATS_SUPPORTED BIT_ULL(17)
+ #define BNXT_FW_CAP_EXT_STATS_SUPPORTED BIT_ULL(18)
+ #define BNXT_FW_CAP_RSS_HASH_TYPE_DELTA BIT_ULL(19)
+ #define BNXT_FW_CAP_ERR_RECOVER_RELOAD BIT_ULL(20)
+ #define BNXT_FW_CAP_HOT_RESET BIT_ULL(21)
+ #define BNXT_FW_CAP_PTP_RTC BIT_ULL(22)
+ #define BNXT_FW_CAP_RX_ALL_PKT_TS BIT_ULL(23)
+ #define BNXT_FW_CAP_VLAN_RX_STRIP BIT_ULL(24)
+ #define BNXT_FW_CAP_VLAN_TX_INSERT BIT_ULL(25)
+ #define BNXT_FW_CAP_EXT_HW_STATS_SUPPORTED BIT_ULL(26)
+ #define BNXT_FW_CAP_LIVEPATCH BIT_ULL(27)
+ #define BNXT_FW_CAP_PTP_PPS BIT_ULL(28)
+ #define BNXT_FW_CAP_HOT_RESET_IF BIT_ULL(29)
+ #define BNXT_FW_CAP_RING_MONITOR BIT_ULL(30)
+ #define BNXT_FW_CAP_DBG_QCAPS BIT_ULL(31)
+ #define BNXT_FW_CAP_PTP BIT_ULL(32)
u32 fw_dbg_cap;
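
The move from a u32 flags word to u64 with BIT_ULL() above is forced by the new BNXT_FW_CAP_PTP flag at bit 32: a 32-bit field has no room for a 33rd flag, and a plain 1 << 32 would overflow an int. A minimal sketch using a local BIT_ULL definition mirroring linux/bits.h:

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(nr)	(1ULL << (nr))

int main(void)
{
	uint64_t fw_cap = 0;

	/* 1 << 32 is undefined behaviour on a 32-bit int;
	 * BIT_ULL(32) is computed as unsigned long long.
	 */
	fw_cap |= BIT_ULL(32);	/* e.g. the new PTP capability bit */

	printf("fw_cap = 0x%016llx\n", (unsigned long long)fw_cap);
	return 0;
}
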
if (rc)
return rc;
- buflen = dir_entries * entry_length;
+ buflen = mul_u32_u32(dir_entries, entry_length);
buf = hwrm_req_dma_slice(bp, req, buflen, &dma_handle);
if (!buf) {
hwrm_req_drop(bp, req);
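
In the same vein, switching to mul_u32_u32() keeps the directory buffer size computed above in 64 bits; multiplying two u32 values directly wraps at 32 bits before the result can be stored in a wider variable. A sketch with a local stand-in for the helper and hypothetical operand values:

#include <stdio.h>
#include <stdint.h>

/* widen both operands before multiplying, like the kernel's mul_u32_u32() */
static uint64_t mul_u32_u32(uint32_t a, uint32_t b)
{
	return (uint64_t)a * b;
}

int main(void)
{
	uint32_t dir_entries = 0x20000;	/* hypothetical values */
	uint32_t entry_length = 0x10000;

	/* the 32-bit product wraps to 0; the widened product is 0x200000000 */
	printf("wrapped: 0x%08x\n", dir_entries * entry_length);
	printf("widened: 0x%llx\n",
	       (unsigned long long)mul_u32_u32(dir_entries, entry_length));
	return 0;
}
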
ptp_info);
struct bnxt *bp = ptp->bp;
- if (BNXT_PTP_USE_RTC(bp))
+ if (!BNXT_MH(bp))
return bnxt_ptp_adjfine_rtc(bp, scaled_ppm);
spin_lock_bh(&ptp->ptp_lock);
memset(&ptp->cc, 0, sizeof(ptp->cc));
ptp->cc.read = bnxt_cc_read;
ptp->cc.mask = CYCLECOUNTER_MASK(48);
- ptp->cc.shift = BNXT_CYCLES_SHIFT;
- ptp->cc.mult = clocksource_khz2mult(BNXT_DEVCLK_FREQ, ptp->cc.shift);
- ptp->cmult = ptp->cc.mult;
+ if (BNXT_MH(bp)) {
+ /* Use timecounter based non-real time mode */
+ ptp->cc.shift = BNXT_CYCLES_SHIFT;
+ ptp->cc.mult = clocksource_khz2mult(BNXT_DEVCLK_FREQ, ptp->cc.shift);
+ ptp->cmult = ptp->cc.mult;
+ } else {
+ ptp->cc.shift = 0;
+ ptp->cc.mult = 1;
+ }
ptp->next_overflow_check = jiffies + BNXT_PHC_OVERFLOW_PERIOD;
}
if (init_tc)
#include <asm/sibyte/board.h>
#include <asm/sibyte/sb1250.h>
-#if defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
+#if defined(CONFIG_SIBYTE_BCM1x80)
#include <asm/sibyte/bcm1480_regs.h>
#include <asm/sibyte/bcm1480_int.h>
#define R_MAC_DMA_OODPKTLOST_RX R_MAC_DMA_OODPKTLOST
#include <asm/sibyte/sb1250_mac.h>
#include <asm/sibyte/sb1250_dma.h>
-#if defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
+#if defined(CONFIG_SIBYTE_BCM1x80)
#define UNIT_INT(n) (K_BCM1480_INT_MAC_0 + ((n) * 2))
#elif defined(CONFIG_SIBYTE_SB1250) || defined(CONFIG_SIBYTE_BCM112X)
#define UNIT_INT(n) (K_INT_MAC_0 + (n))
* Turn on the rest of the bits in the enable register
*/
-#if defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
+#if defined(CONFIG_SIBYTE_BCM1x80)
__raw_writeq(M_MAC_RXDMA_EN0 |
M_MAC_TXDMA_EN0, s->sbm_macenable);
#elif defined(CONFIG_SIBYTE_SB1250) || defined(CONFIG_SIBYTE_BCM112X)
#define GEM_CLK_DIV48 3
#define GEM_CLK_DIV64 4
#define GEM_CLK_DIV96 5
+#define GEM_CLK_DIV128 6
+#define GEM_CLK_DIV224 7
/* Constants for MAN register */
#define MACB_MAN_C22_SOF 1
/* Graceful stop timeouts in us. We should allow up to
* 1 frame time (10 Mbits/s, full-duplex, ignoring collisions)
*/
-#define MACB_HALT_TIMEOUT 1230
-
+#define MACB_HALT_TIMEOUT 14000
#define MACB_PM_TIMEOUT 100 /* ms */
#define MACB_MDIO_TIMEOUT 1000000 /* in usecs */
{
struct macb_queue *queue = container_of(work, struct macb_queue,
tx_error_task);
+ bool halt_timeout = false;
struct macb *bp = queue->bp;
struct macb_tx_skb *tx_skb;
struct macb_dma_desc *desc;
* (in case we have just queued new packets)
* macb/gem must be halted to write TBQP register
*/
- if (macb_halt_tx(bp))
- /* Just complain for now, reinitializing TX path can be good */
+ if (macb_halt_tx(bp)) {
netdev_err(bp->dev, "BUG: halt tx timed out\n");
+ macb_writel(bp, NCR, macb_readl(bp, NCR) & (~MACB_BIT(TE)));
+ halt_timeout = true;
+ }
/* Treat frames in TX queue including the ones that caused the error.
* Free transmit buffers in upper layer.
macb_writel(bp, TSR, macb_readl(bp, TSR));
queue_writel(queue, IER, MACB_TX_INT_FLAGS);
+ if (halt_timeout)
+ macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TE));
+
/* Now we are ready to start transmission again */
netif_tx_start_all_queues(bp->dev);
macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
config = GEM_BF(CLK, GEM_CLK_DIV48);
else if (pclk_hz <= 160000000)
config = GEM_BF(CLK, GEM_CLK_DIV64);
- else
+ else if (pclk_hz <= 240000000)
config = GEM_BF(CLK, GEM_CLK_DIV96);
+ else if (pclk_hz <= 320000000)
+ config = GEM_BF(CLK, GEM_CLK_DIV128);
+ else
+ config = GEM_BF(CLK, GEM_CLK_DIV224);
return config;
}
static const struct macb_config sama7g5_gem_config = {
.caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_CLK_HW_CHG |
- MACB_CAPS_MIIONRGMII,
+ MACB_CAPS_MIIONRGMII | MACB_CAPS_GEM_HAS_PTP,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
static const struct macb_config sama7g5_emac_config = {
.caps = MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII |
- MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_MIIONRGMII,
+ MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_MIIONRGMII |
+ MACB_CAPS_GEM_HAS_PTP,
.dma_burst_length = 16,
.clk_init = macb_clk_init,
.init = macb_init,
fallthrough;
case OCT_DEV_PCI_ENABLE_DONE:
- pci_clear_master(oct->pci_dev);
/* Disable the device, releasing the PCI INT */
pci_disable_device(oct->pci_dev);
fallthrough;
case OCT_DEV_PCI_ENABLE_DONE:
- pci_clear_master(oct->pci_dev);
/* Disable the device, releasing the PCI INT */
pci_disable_device(oct->pci_dev);
static void (*reqtype_free_fn[MAX_OCTEON_DEVICES][REQTYPE_LAST + 1]) (void *);
-static inline int IQ_INSTR_MODE_64B(struct octeon_device *oct, int iq_no)
-{
- struct octeon_instr_queue *iq =
- (struct octeon_instr_queue *)oct->instr_queue[iq_no];
- return iq->iqcmd_64B;
-}
-
-#define IQ_INSTR_MODE_32B(oct, iq_no) (!IQ_INSTR_MODE_64B(oct, iq_no))
-
/* Define this to return the request status compatible to old code */
/*#define OCTEON_USE_OLD_REQ_STATUS*/
#endif
};
-static inline struct sge_qset *fl_to_qset(const struct sge_fl *q, int qidx)
-{
- return container_of(q, struct sge_qset, fl[qidx]);
-}
-
static inline struct sge_qset *rspq_to_qset(const struct sge_rspq *q)
{
return container_of(q, struct sge_qset, rspq);
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/rtnetlink.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
goto out_free_adapter;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
pci_save_state(pdev);
adap_idx++;
out_unmap_bar0:
iounmap(regs);
out_disable_device:
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
out_release_regions:
pci_release_regions(pdev);
}
#endif
iounmap(adapter->regs);
- pci_disable_pcie_error_reporting(pdev);
if ((adapter->flags & CXGB4_DEV_ENABLED)) {
pci_disable_device(pdev);
adapter->flags &= ~CXGB4_DEV_ENABLED;
err_release_regions:
pci_release_regions(pdev);
- pci_clear_master(pdev);
err_disable_device:
pci_disable_device(pdev);
* Disable the device and release its PCI resources.
*/
pci_disable_device(pdev);
- pci_clear_master(pdev);
pci_release_regions(pdev);
}
err_release_regions:
pci_release_regions(dev);
err_disable_dev:
- pci_clear_master(dev);
pci_disable_device(dev);
return err;
free_netdev(net_dev);
pci_release_regions(dev);
- pci_clear_master(dev);
pci_disable_device(dev);
}
/* To check if valid bit is set, check the entire word as we don't know
* the endianness of the data (old entry is host endian while a new entry is
- * little endian) */
+ * little endian)
+ */
static inline bool be_mcc_compl_is_new(struct be_mcc_compl *compl)
{
u32 flags;
u8 opcode = 0, subsystem = 0;
/* Just swap the status to host endian; mcc tag is opaquely copied
- * from mcc_wrb */
+ * from mcc_wrb
+ */
be_dws_le_to_cpu(compl, 4);
base_status = base_status(compl->status);
return 0;
}
-/*
- * Insert the mailbox address into the doorbell in two steps
+/* Insert the mailbox address into the doorbell in two steps
* Polls on the mbox doorbell till a command completion (or a timeout) occurs
*/
static int be_mbox_notify_wait(struct be_adapter *adapter)
req_hdr->subsystem = subsystem;
req_hdr->request_length = cpu_to_le32(cmd_len - sizeof(*req_hdr));
req_hdr->version = 0;
- fill_wrb_tags(wrb, (ulong) req_hdr);
+ fill_wrb_tags(wrb, (ulong)req_hdr);
wrb->payload_length = cmd_len;
if (mem) {
wrb->embedded |= (1 & MCC_WRB_SGE_CNT_MASK) <<
static inline struct be_mcc_wrb *wrb_from_mbox(struct be_adapter *adapter)
{
struct be_dma_mem *mbox_mem = &adapter->mbox_mem;
- struct be_mcc_wrb *wrb
- = &((struct be_mcc_mailbox *)(mbox_mem->va))->wrb;
+ struct be_mcc_wrb *wrb = &((struct be_mcc_mailbox *)(mbox_mem->va))->wrb;
+
memset(wrb, 0, sizeof(*wrb));
return wrb;
}
memcpy(dest_wrb, wrb, sizeof(*wrb));
if (wrb->embedded & cpu_to_le32(MCC_WRB_EMBEDDED_MASK))
- fill_wrb_tags(dest_wrb, (ulong) embedded_payload(wrb));
+ fill_wrb_tags(dest_wrb, (ulong)embedded_payload(wrb));
return dest_wrb;
}
err:
mutex_unlock(&adapter->mcc_lock);
- if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST)
+ if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST)
status = -EPERM;
return status;
total_size = buf_len;
- get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60*1024;
+ get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60 * 1024;
get_fat_cmd.va = dma_alloc_coherent(&adapter->pdev->dev,
get_fat_cmd.size,
&get_fat_cmd.dma, GFP_ATOMIC);
mutex_lock(&adapter->mcc_lock);
while (total_size) {
- buf_size = min(total_size, (u32)60*1024);
+ buf_size = min(total_size, (u32)60 * 1024);
total_size -= buf_size;
wrb = wrb_from_mccq(adapter);
req->pattern = cpu_to_le64(pattern);
req->byte_count = cpu_to_le32(byte_cnt);
for (i = 0; i < byte_cnt; i++) {
- req->snd_buff[i] = (u8)(pattern >> (j*8));
+ req->snd_buff[i] = (u8)(pattern >> (j * 8));
j++;
if (j > 7)
j = 0;
req->hdr.domain = domain;
req->mac_count = mac_count;
if (mac_count)
- memcpy(req->mac, mac_array, ETH_ALEN*mac_count);
+ memcpy(req->mac, mac_array, ETH_ALEN * mac_count);
status = be_mcc_notify_wait(adapter);
#include "be.h"
#include "be_cmds.h"
#include <asm/div64.h>
-#include <linux/aer.h>
#include <linux/if_bridge.h>
#include <net/busy_poll.h>
#include <net/vxlan.h>
be_unmap_pci_bars(adapter);
be_drv_cleanup(adapter);
- pci_disable_pcie_error_reporting(pdev);
-
pci_release_regions(pdev);
pci_disable_device(pdev);
goto free_netdev;
}
- status = pci_enable_pcie_error_reporting(pdev);
- if (!status)
- dev_info(&pdev->dev, "PCIe error reporting enabled\n");
-
status = be_map_pci_bars(adapter);
if (status)
goto free_netdev;
unmap_bars:
be_unmap_pci_bars(adapter);
free_netdev:
- pci_disable_pcie_error_reporting(pdev);
free_netdev(netdev);
rel_reg:
pci_release_regions(pdev);
{
phy_stop(adapter->netdev->phydev);
phy_disconnect(adapter->netdev->phydev);
- adapter->netdev->phydev = NULL;
}
static void tsnep_tx_ring_cleanup(struct tsnep_tx *tx)
struct dpmac_link_state *dpmac_state = &mac->state;
int err;
- if (state->an_enabled)
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising))
dpmac_state->options |= DPMAC_LINK_OPT_AUTONEG;
else
dpmac_state->options &= ~DPMAC_LINK_OPT_AUTONEG;
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
-#include <linux/aer.h>
#include <linux/bitmap.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
bitmap_free(fdev->irq_map);
pci_free_irq_vectors(pdev);
- pci_clear_master(pdev);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
fun_unmap_bars(fdev);
goto unmap;
}
- pci_enable_pcie_error_reporting(pdev);
-
rc = sanitize_dev(fdev);
if (rc)
goto disable_dev;
disable_admin:
fun_disable_admin_queue(fdev);
free_irq_mgr:
- pci_clear_master(pdev);
bitmap_free(fdev->irq_map);
free_irqs:
pci_free_irq_vectors(pdev);
disable_dev:
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
unmap:
fun_unmap_bars(fdev);
#define GVE_RX_BUFFER_SIZE_DQO 2048
+#define GVE_XDP_ACTIONS 5
+
+#define GVE_TX_MAX_HEADER_SIZE 182
+
#define GVE_GQ_TX_MIN_PKT_DESC_BYTES 182
/* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
u64 rx_frag_flip_cnt; /* free-running count of rx segments where page_flip was used */
u64 rx_frag_copy_cnt; /* free-running count of rx segments copied */
u64 rx_frag_alloc_cnt; /* free-running count of rx page allocations */
-
+ u64 xdp_tx_errors;
+ u64 xdp_redirect_errors;
+ u64 xdp_alloc_fails;
+ u64 xdp_actions[GVE_XDP_ACTIONS];
u32 q_num; /* queue index */
u32 ntfy_id; /* notification block index */
struct gve_queue_resources *q_resources; /* head and tail pointer idx */
struct u64_stats_sync statss; /* sync stats for 32bit archs */
struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
+
+ /* XDP stuff */
+ struct xdp_rxq_info xdp_rxq;
+ struct xdp_rxq_info xsk_rxq;
+ struct xsk_buff_pool *xsk_pool;
+ struct page_frag_cache page_cache; /* Page cache to allocate XDP frames */
};
/* A TX desc ring entry */
* ring entry but only used for a pkt_desc not a seg_desc
*/
struct gve_tx_buffer_state {
- struct sk_buff *skb; /* skb for this pkt */
+ union {
+ struct sk_buff *skb; /* skb for this pkt */
+ struct xdp_frame *xdp_frame; /* xdp_frame */
+ };
+ struct {
+ u16 size; /* size of xmitted xdp pkt */
+ u8 is_xsk; /* xsk buff */
+ } xdp;
union {
struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
struct {
struct {
/* Spinlock for when cleanup in progress */
spinlock_t clean_lock;
+ /* Spinlock for XDP tx traffic */
+ spinlock_t xdp_lock;
};
/* DQO fields. */
dma_addr_t q_resources_bus; /* dma address of the queue resources */
dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
struct u64_stats_sync statss; /* sync stats for 32bit archs */
+ struct xsk_buff_pool *xsk_pool;
+ u32 xdp_xsk_wakeup;
+ u32 xdp_xsk_done;
+ u64 xdp_xsk_sent;
+ u64 xdp_xmit;
+ u64 xdp_xmit_errors;
} ____cacheline_aligned;
/* Wraps the info for one irq including the napi struct and the queues
u16 rx_data_slot_cnt; /* rx buffer length */
u64 max_registered_pages;
u64 num_registered_pages; /* num pages registered with NIC */
+ struct bpf_prog *xdp_prog; /* XDP BPF program */
u32 rx_copybreak; /* copy packets smaller than this */
u16 default_num_queues; /* default num queues to set up */
+ u16 num_xdp_queues;
struct gve_queue_config tx_cfg;
struct gve_queue_config rx_cfg;
struct gve_qpl_config qpl_cfg; /* map used QPL ids */
if (priv->queue_format != GVE_GQI_QPL_FORMAT)
return 0;
- return priv->tx_cfg.num_queues;
+ return priv->tx_cfg.num_queues + priv->num_xdp_queues;
+}
+
+/* Returns the number of XDP tx queue page lists
+ */
+static inline u32 gve_num_xdp_qpls(struct gve_priv *priv)
+{
+ if (priv->queue_format != GVE_GQI_QPL_FORMAT)
+ return 0;
+
+ return priv->num_xdp_queues;
}
/* Returns the number of rx queue page lists
return priv->rx_cfg.num_queues;
}
+static inline u32 gve_tx_qpl_id(struct gve_priv *priv, int tx_qid)
+{
+ return tx_qid;
+}
+
+static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid)
+{
+ return priv->tx_cfg.max_queues + rx_qid;
+}
+
+static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv)
+{
+ return gve_tx_qpl_id(priv, 0);
+}
+
+static inline u32 gve_rx_start_qpl_id(struct gve_priv *priv)
+{
+ return gve_rx_qpl_id(priv, 0);
+}
+
/* Returns a pointer to the next available tx qpl in the list of qpls
*/
static inline
-struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
+struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv, int tx_qid)
{
- int id = find_first_zero_bit(priv->qpl_cfg.qpl_id_map,
- priv->qpl_cfg.qpl_map_size);
+ int id = gve_tx_qpl_id(priv, tx_qid);
- /* we are out of tx qpls */
- if (id >= gve_num_tx_qpls(priv))
+ /* QPL already in use */
+ if (test_bit(id, priv->qpl_cfg.qpl_id_map))
return NULL;
set_bit(id, priv->qpl_cfg.qpl_id_map);
/* Returns a pointer to the next available rx qpl in the list of qpls
*/
static inline
-struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
+struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv, int rx_qid)
{
- int id = find_next_zero_bit(priv->qpl_cfg.qpl_id_map,
- priv->qpl_cfg.qpl_map_size,
- gve_num_tx_qpls(priv));
+ int id = gve_rx_qpl_id(priv, rx_qid);
- /* we are out of rx qpls */
- if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
+ /* QPL already in use */
+ if (test_bit(id, priv->qpl_cfg.qpl_id_map))
return NULL;
set_bit(id, priv->qpl_cfg.qpl_id_map);
static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
int id)
{
- if (id < gve_num_tx_qpls(priv))
+ if (id < gve_rx_start_qpl_id(priv))
return DMA_TO_DEVICE;
else
return DMA_FROM_DEVICE;
priv->queue_format == GVE_GQI_QPL_FORMAT;
}
+static inline u32 gve_num_tx_queues(struct gve_priv *priv)
+{
+ return priv->tx_cfg.num_queues + priv->num_xdp_queues;
+}
+
+static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id)
+{
+ return priv->tx_cfg.num_queues + queue_id;
+}
+
+static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
+{
+ return gve_xdp_tx_queue_id(priv, 0);
+}
+
/* buffers */
int gve_alloc_page(struct gve_priv *priv, struct device *dev,
struct page **page, dma_addr_t *dma,
enum dma_data_direction);
/* tx handling */
netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
+int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ u32 flags);
+int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+ void *data, int len, void *frame_p);
+void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
bool gve_tx_poll(struct gve_notify_block *block, int budget);
-int gve_tx_alloc_rings(struct gve_priv *priv);
-void gve_tx_free_rings_gqi(struct gve_priv *priv);
+bool gve_xdp_poll(struct gve_notify_block *block, int budget);
+int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings);
+void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings);
u32 gve_tx_load_event_counter(struct gve_priv *priv,
struct gve_tx_ring *tx);
bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
return gve_adminq_issue_cmd(priv, &cmd);
}
-int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues)
+int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
{
int err;
int i;
- for (i = 0; i < num_queues; i++) {
+ for (i = start_id; i < start_id + num_queues; i++) {
err = gve_adminq_create_tx_queue(priv, i);
if (err)
return err;
return 0;
}
-int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 num_queues)
+int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues)
{
int err;
int i;
- for (i = 0; i < num_queues; i++) {
+ for (i = start_id; i < start_id + num_queues; i++) {
err = gve_adminq_destroy_tx_queue(priv, i);
if (err)
return err;
dma_addr_t db_array_bus_addr,
u32 num_ntfy_blks);
int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
-int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 num_queues);
-int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
+int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
int gve_adminq_register_page_list(struct gve_priv *priv,
return priv->msg_enable;
}
+/* For the following stats column string names, make sure the order
+ * matches how it is filled in the code. For xdp_aborted, xdp_drop,
+ * xdp_pass, xdp_tx, xdp_redirect, make sure it also matches the order
+ * as declared in enum xdp_action inside file uapi/linux/bpf.h .
+ */
static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
"rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
"rx_dropped", "tx_dropped", "tx_timeouts",
"rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
"rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
"rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]",
+ "rx_xdp_aborted[%u]", "rx_xdp_drop[%u]", "rx_xdp_pass[%u]",
+ "rx_xdp_tx[%u]", "rx_xdp_redirect[%u]",
+ "rx_xdp_tx_errors[%u]", "rx_xdp_redirect_errors[%u]", "rx_xdp_alloc_fails[%u]",
};
static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_consumed_desc[%u]", "tx_bytes[%u]",
"tx_wake[%u]", "tx_stop[%u]", "tx_event_counter[%u]",
- "tx_dma_mapping_error[%u]",
+ "tx_dma_mapping_error[%u]", "tx_xsk_wakeup[%u]",
+ "tx_xsk_done[%u]", "tx_xsk_sent[%u]", "tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
};
static const char gve_gstrings_adminq_stats[][ETH_GSTRING_LEN] = {
{
struct gve_priv *priv = netdev_priv(netdev);
char *s = (char *)data;
+ int num_tx_queues;
int i, j;
+ num_tx_queues = gve_num_tx_queues(priv);
switch (stringset) {
case ETH_SS_STATS:
memcpy(s, *gve_gstrings_main_stats,
}
}
- for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ for (i = 0; i < num_tx_queues; i++) {
for (j = 0; j < NUM_GVE_TX_CNTS; j++) {
snprintf(s, ETH_GSTRING_LEN,
gve_gstrings_tx_stats[j], i);
static int gve_get_sset_count(struct net_device *netdev, int sset)
{
struct gve_priv *priv = netdev_priv(netdev);
+ int num_tx_queues;
+ num_tx_queues = gve_num_tx_queues(priv);
switch (sset) {
case ETH_SS_STATS:
return GVE_MAIN_STATS_LEN + GVE_ADMINQ_STATS_LEN +
(priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS) +
- (priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS);
+ (num_tx_queues * NUM_GVE_TX_CNTS);
case ETH_SS_PRIV_FLAGS:
return GVE_PRIV_FLAGS_STR_LEN;
default:
struct gve_priv *priv;
bool skip_nic_stats;
unsigned int start;
+ int num_tx_queues;
int ring;
int i, j;
ASSERT_RTNL();
priv = netdev_priv(netdev);
+ num_tx_queues = gve_num_tx_queues(priv);
report_stats = priv->stats_report->stats;
rx_qid_to_stats_idx = kmalloc_array(priv->rx_cfg.num_queues,
sizeof(int), GFP_KERNEL);
if (!rx_qid_to_stats_idx)
return;
- tx_qid_to_stats_idx = kmalloc_array(priv->tx_cfg.num_queues,
+ tx_qid_to_stats_idx = kmalloc_array(num_tx_queues,
sizeof(int), GFP_KERNEL);
if (!tx_qid_to_stats_idx) {
kfree(rx_qid_to_stats_idx);
}
}
for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
- ring < priv->tx_cfg.num_queues; ring++) {
+ ring < num_tx_queues; ring++) {
if (priv->tx) {
do {
start =
i = GVE_MAIN_STATS_LEN;
/* For rx cross-reporting stats, start from nic rx stats in report */
- base_stats_idx = GVE_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
+ base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
base_stats_idx;
if (skip_nic_stats) {
/* skip NIC rx stats */
i += NIC_RX_STATS_REPORT_NUM;
- continue;
- }
- for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
- u64 value =
- be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value);
+ } else {
+ stats_idx = rx_qid_to_stats_idx[ring];
+ for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
+ u64 value =
+ be64_to_cpu(report_stats[stats_idx + j].value);
- data[i++] = value;
+ data[i++] = value;
+ }
}
+ /* XDP rx counters */
+ do {
+ start = u64_stats_fetch_begin(&priv->rx[ring].statss);
+ for (j = 0; j < GVE_XDP_ACTIONS; j++)
+ data[i + j] = rx->xdp_actions[j];
+ data[i + j++] = rx->xdp_tx_errors;
+ data[i + j++] = rx->xdp_redirect_errors;
+ data[i + j++] = rx->xdp_alloc_fails;
+ } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+ start));
+ i += GVE_XDP_ACTIONS + 3; /* XDP rx counters */
}
} else {
i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
/* For tx cross-reporting stats, start from nic tx stats in report */
base_stats_idx = max_stats_idx;
- max_stats_idx = NIC_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues +
+ max_stats_idx = NIC_TX_STATS_REPORT_NUM * num_tx_queues +
max_stats_idx;
/* Preprocess the stats report for tx, map queue id to start index */
skip_nic_stats = false;
}
/* walk TX rings */
if (priv->tx) {
- for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ for (ring = 0; ring < num_tx_queues; ring++) {
struct gve_tx_ring *tx = &priv->tx[ring];
if (gve_is_gqi(priv)) {
if (skip_nic_stats) {
/* skip NIC tx stats */
i += NIC_TX_STATS_REPORT_NUM;
- continue;
- }
- for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
- u64 value =
- be64_to_cpu(report_stats[tx_qid_to_stats_idx[ring] + j].value);
- data[i++] = value;
+ } else {
+ stats_idx = tx_qid_to_stats_idx[ring];
+ for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
+ u64 value =
+ be64_to_cpu(report_stats[stats_idx + j].value);
+ data[i++] = value;
+ }
}
+ /* XDP xsk counters */
+ data[i++] = tx->xdp_xsk_wakeup;
+ data[i++] = tx->xdp_xsk_done;
+ do {
+ start = u64_stats_fetch_begin(&priv->tx[ring].statss);
+ data[i] = tx->xdp_xsk_sent;
+ data[i + 1] = tx->xdp_xmit;
+ data[i + 2] = tx->xdp_xmit_errors;
+ } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+ start));
+ i += 3; /* XDP tx counters */
}
} else {
- i += priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS;
+ i += num_tx_queues * NUM_GVE_TX_CNTS;
}
kfree(rx_qid_to_stats_idx);
if (!new_rx || !new_tx)
return -EINVAL;
+ if (priv->num_xdp_queues &&
+ (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) {
+ dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues");
+ return -EINVAL;
+ }
+
if (!netif_carrier_ok(netdev)) {
priv->tx_cfg.num_queues = new_tx;
priv->rx_cfg.num_queues = new_rx;
{
struct gve_priv *priv = netdev_priv(netdev);
u64 ori_flags, new_flags;
+ int num_tx_queues;
+ num_tx_queues = gve_num_tx_queues(priv);
ori_flags = READ_ONCE(priv->ethtool_flags);
new_flags = ori_flags;
/* delete report stats timer. */
if (!(flags & BIT(0)) && (ori_flags & BIT(0))) {
int tx_stats_num = GVE_TX_STATS_REPORT_NUM *
- priv->tx_cfg.num_queues;
+ num_tx_queues;
int rx_stats_num = GVE_RX_STATS_REPORT_NUM *
priv->rx_cfg.num_queues;
* Copyright (C) 2015-2021 Google, Inc.
*/
+#include <linux/bpf.h>
#include <linux/cpumask.h>
#include <linux/etherdevice.h>
+#include <linux/filter.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/utsname.h>
#include <linux/version.h>
#include <net/sch_generic.h>
+#include <net/xdp_sock_drv.h>
#include "gve.h"
#include "gve_dqo.h"
#include "gve_adminq.h"
struct gve_priv *priv = netdev_priv(dev);
unsigned int start;
u64 packets, bytes;
+ int num_tx_queues;
int ring;
+ num_tx_queues = gve_num_tx_queues(priv);
if (priv->rx) {
for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
do {
}
}
if (priv->tx) {
- for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ for (ring = 0; ring < num_tx_queues; ring++) {
do {
start =
u64_stats_fetch_begin(&priv->tx[ring].statss);
int tx_stats_num, rx_stats_num;
tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) *
- priv->tx_cfg.num_queues;
+ gve_num_tx_queues(priv);
rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
priv->rx_cfg.num_queues;
priv->stats_report_len = struct_size(priv->stats_report, stats,
block = container_of(napi, struct gve_notify_block, napi);
priv = block->priv;
- if (block->tx)
- reschedule |= gve_tx_poll(block, budget);
+ if (block->tx) {
+ if (block->tx->q_num < priv->tx_cfg.num_queues)
+ reschedule |= gve_tx_poll(block, budget);
+ else
+ reschedule |= gve_xdp_poll(block, budget);
+ }
+
if (block->rx) {
work_done = gve_rx_poll(block, budget);
reschedule |= work_done == budget;
netif_napi_del(&block->napi);
}
+static int gve_register_xdp_qpls(struct gve_priv *priv)
+{
+ int start_id;
+ int err;
+ int i;
+
+ start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+ for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+ err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to register queue page list %d\n",
+ priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ }
+ return 0;
+}
+
static int gve_register_qpls(struct gve_priv *priv)
{
- int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int start_id;
int err;
int i;
- for (i = 0; i < num_qpls; i++) {
+ start_id = gve_tx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
if (err) {
netif_err(priv, drv, priv->dev,
return err;
}
}
+
+ start_id = gve_rx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
+ err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to register queue page list %d\n",
+ priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ }
+ return 0;
+}
+
+static int gve_unregister_xdp_qpls(struct gve_priv *priv)
+{
+ int start_id;
+ int err;
+ int i;
+
+ start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+ for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+ err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean up */
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "Failed to unregister queue page list %d\n",
+ priv->qpls[i].id);
+ return err;
+ }
+ }
return 0;
}
static int gve_unregister_qpls(struct gve_priv *priv)
{
- int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int start_id;
int err;
int i;
- for (i = 0; i < num_qpls; i++) {
+ start_id = gve_tx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
+ err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean up */
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "Failed to unregister queue page list %d\n",
+ priv->qpls[i].id);
+ return err;
+ }
+ }
+
+ start_id = gve_rx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
/* This failure will trigger a reset - no need to clean up */
if (err) {
return 0;
}
+static int gve_create_xdp_rings(struct gve_priv *priv)
+{
+ int err;
+
+ err = gve_adminq_create_tx_queues(priv,
+ gve_xdp_tx_start_queue_id(priv),
+ priv->num_xdp_queues);
+ if (err) {
+ netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n",
+ priv->num_xdp_queues);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n",
+ priv->num_xdp_queues);
+
+ return 0;
+}
+
static int gve_create_rings(struct gve_priv *priv)
{
+ int num_tx_queues = gve_num_tx_queues(priv);
int err;
int i;
- err = gve_adminq_create_tx_queues(priv, priv->tx_cfg.num_queues);
+ err = gve_adminq_create_tx_queues(priv, 0, num_tx_queues);
if (err) {
netif_err(priv, drv, priv->dev, "failed to create %d tx queues\n",
- priv->tx_cfg.num_queues);
+ num_tx_queues);
/* This failure will trigger a reset - no need to clean
* up
*/
return err;
}
netif_dbg(priv, drv, priv->dev, "created %d tx queues\n",
- priv->tx_cfg.num_queues);
+ num_tx_queues);
err = gve_adminq_create_rx_queues(priv, priv->rx_cfg.num_queues);
if (err) {
return 0;
}
+static void add_napi_init_xdp_sync_stats(struct gve_priv *priv,
+ int (*napi_poll)(struct napi_struct *napi,
+ int budget))
+{
+ int start_id = gve_xdp_tx_start_queue_id(priv);
+ int i;
+
+ /* Add xdp tx napi & init sync stats*/
+ for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
+ int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+
+ u64_stats_init(&priv->tx[i].statss);
+ priv->tx[i].ntfy_id = ntfy_idx;
+ gve_add_napi(priv, ntfy_idx, napi_poll);
+ }
+}
+
static void add_napi_init_sync_stats(struct gve_priv *priv,
int (*napi_poll)(struct napi_struct *napi,
int budget))
int i;
/* Add tx napi & init sync stats*/
- for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ for (i = 0; i < gve_num_tx_queues(priv); i++) {
int ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
u64_stats_init(&priv->tx[i].statss);
}
}
-static void gve_tx_free_rings(struct gve_priv *priv)
+static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings)
{
if (gve_is_gqi(priv)) {
- gve_tx_free_rings_gqi(priv);
+ gve_tx_free_rings_gqi(priv, start_id, num_rings);
} else {
gve_tx_free_rings_dqo(priv);
}
}
+static int gve_alloc_xdp_rings(struct gve_priv *priv)
+{
+ int start_id;
+ int err = 0;
+
+ if (!priv->num_xdp_queues)
+ return 0;
+
+ start_id = gve_xdp_tx_start_queue_id(priv);
+ err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues);
+ if (err)
+ return err;
+ add_napi_init_xdp_sync_stats(priv, gve_napi_poll);
+
+ return 0;
+}
+
static int gve_alloc_rings(struct gve_priv *priv)
{
int err;
/* Setup tx rings */
- priv->tx = kvcalloc(priv->tx_cfg.num_queues, sizeof(*priv->tx),
+ priv->tx = kvcalloc(priv->tx_cfg.max_queues, sizeof(*priv->tx),
GFP_KERNEL);
if (!priv->tx)
return -ENOMEM;
if (gve_is_gqi(priv))
- err = gve_tx_alloc_rings(priv);
+ err = gve_tx_alloc_rings(priv, 0, gve_num_tx_queues(priv));
else
err = gve_tx_alloc_rings_dqo(priv);
if (err)
goto free_tx;
/* Setup rx rings */
- priv->rx = kvcalloc(priv->rx_cfg.num_queues, sizeof(*priv->rx),
+ priv->rx = kvcalloc(priv->rx_cfg.max_queues, sizeof(*priv->rx),
GFP_KERNEL);
if (!priv->rx) {
err = -ENOMEM;
kvfree(priv->rx);
priv->rx = NULL;
free_tx_queue:
- gve_tx_free_rings(priv);
+ gve_tx_free_rings(priv, 0, gve_num_tx_queues(priv));
free_tx:
kvfree(priv->tx);
priv->tx = NULL;
return err;
}
+static int gve_destroy_xdp_rings(struct gve_priv *priv)
+{
+ int start_id;
+ int err;
+
+ start_id = gve_xdp_tx_start_queue_id(priv);
+ err = gve_adminq_destroy_tx_queues(priv,
+ start_id,
+ priv->num_xdp_queues);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to destroy XDP queues\n");
+ /* This failure will trigger a reset - no need to clean up */
+ return err;
+ }
+ netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n");
+
+ return 0;
+}
+
static int gve_destroy_rings(struct gve_priv *priv)
{
+ int num_tx_queues = gve_num_tx_queues(priv);
int err;
- err = gve_adminq_destroy_tx_queues(priv, priv->tx_cfg.num_queues);
+ err = gve_adminq_destroy_tx_queues(priv, 0, num_tx_queues);
if (err) {
netif_err(priv, drv, priv->dev,
"failed to destroy tx queues\n");
gve_rx_free_rings_dqo(priv);
}
+static void gve_free_xdp_rings(struct gve_priv *priv)
+{
+ int ntfy_idx, start_id;
+ int i;
+
+ start_id = gve_xdp_tx_start_queue_id(priv);
+ if (priv->tx) {
+ for (i = start_id; i < start_id + priv->num_xdp_queues; i++) {
+ ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+ gve_remove_napi(priv, ntfy_idx);
+ }
+ gve_tx_free_rings(priv, start_id, priv->num_xdp_queues);
+ }
+}
+
static void gve_free_rings(struct gve_priv *priv)
{
+ int num_tx_queues = gve_num_tx_queues(priv);
int ntfy_idx;
int i;
if (priv->tx) {
- for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ for (i = 0; i < num_tx_queues; i++) {
ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
gve_remove_napi(priv, ntfy_idx);
}
- gve_tx_free_rings(priv);
+ gve_tx_free_rings(priv, 0, num_tx_queues);
kvfree(priv->tx);
priv->tx = NULL;
}
qpl->page_buses[i], gve_qpl_dma_dir(priv, id));
kvfree(qpl->page_buses);
+ qpl->page_buses = NULL;
free_pages:
kvfree(qpl->pages);
+ qpl->pages = NULL;
priv->num_registered_pages -= qpl->num_entries;
}
+static int gve_alloc_xdp_qpls(struct gve_priv *priv)
+{
+ int start_id;
+ int i, j;
+ int err;
+
+ start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+ for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) {
+ err = gve_alloc_queue_page_list(priv, i,
+ priv->tx_pages_per_qpl);
+ if (err)
+ goto free_qpls;
+ }
+
+ return 0;
+
+free_qpls:
+ for (j = start_id; j <= i; j++)
+ gve_free_queue_page_list(priv, j);
+ return err;
+}
+
static int gve_alloc_qpls(struct gve_priv *priv)
{
- int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
+ int start_id;
int i, j;
int err;
- if (num_qpls == 0)
+ if (priv->queue_format != GVE_GQI_QPL_FORMAT)
return 0;
- priv->qpls = kvcalloc(num_qpls, sizeof(*priv->qpls), GFP_KERNEL);
+ priv->qpls = kvcalloc(max_queues, sizeof(*priv->qpls), GFP_KERNEL);
if (!priv->qpls)
return -ENOMEM;
- for (i = 0; i < gve_num_tx_qpls(priv); i++) {
+ start_id = gve_tx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) {
err = gve_alloc_queue_page_list(priv, i,
priv->tx_pages_per_qpl);
if (err)
goto free_qpls;
}
- for (; i < num_qpls; i++) {
+
+ start_id = gve_rx_start_qpl_id(priv);
+ for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) {
err = gve_alloc_queue_page_list(priv, i,
priv->rx_data_slot_cnt);
if (err)
goto free_qpls;
}
- priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(num_qpls) *
+ priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(max_queues) *
sizeof(unsigned long) * BITS_PER_BYTE;
- priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(num_qpls),
+ priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(max_queues),
sizeof(unsigned long), GFP_KERNEL);
if (!priv->qpl_cfg.qpl_id_map) {
err = -ENOMEM;
for (j = 0; j <= i; j++)
gve_free_queue_page_list(priv, j);
kvfree(priv->qpls);
+ priv->qpls = NULL;
return err;
}
+static void gve_free_xdp_qpls(struct gve_priv *priv)
+{
+ int start_id;
+ int i;
+
+ start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv));
+ for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++)
+ gve_free_queue_page_list(priv, i);
+}
+
static void gve_free_qpls(struct gve_priv *priv)
{
- int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues;
int i;
- if (num_qpls == 0)
+ if (!priv->qpls)
return;
kvfree(priv->qpl_cfg.qpl_id_map);
+ priv->qpl_cfg.qpl_id_map = NULL;
- for (i = 0; i < num_qpls; i++)
+ for (i = 0; i < max_queues; i++)
gve_free_queue_page_list(priv, i);
kvfree(priv->qpls);
+ priv->qpls = NULL;
}
/* Use this to schedule a reset when the device is capable of continuing
static void gve_turndown(struct gve_priv *priv);
static void gve_turnup(struct gve_priv *priv);
+static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev)
+{
+ struct napi_struct *napi;
+ struct gve_rx_ring *rx;
+ int err = 0;
+ int i, j;
+ u32 tx_qid;
+
+ if (!priv->num_xdp_queues)
+ return 0;
+
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ rx = &priv->rx[i];
+ napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
+
+ err = xdp_rxq_info_reg(&rx->xdp_rxq, dev, i,
+ napi->napi_id);
+ if (err)
+ goto err;
+ err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq,
+ MEM_TYPE_PAGE_SHARED, NULL);
+ if (err)
+ goto err;
+ rx->xsk_pool = xsk_get_pool_from_qid(dev, i);
+ if (rx->xsk_pool) {
+ err = xdp_rxq_info_reg(&rx->xsk_rxq, dev, i,
+ napi->napi_id);
+ if (err)
+ goto err;
+ err = xdp_rxq_info_reg_mem_model(&rx->xsk_rxq,
+ MEM_TYPE_XSK_BUFF_POOL, NULL);
+ if (err)
+ goto err;
+ xsk_pool_set_rxq_info(rx->xsk_pool,
+ &rx->xsk_rxq);
+ }
+ }
+
+ for (i = 0; i < priv->num_xdp_queues; i++) {
+ tx_qid = gve_xdp_tx_queue_id(priv, i);
+ priv->tx[tx_qid].xsk_pool = xsk_get_pool_from_qid(dev, i);
+ }
+ return 0;
+
+err:
+ for (j = i; j >= 0; j--) {
+ rx = &priv->rx[j];
+ if (xdp_rxq_info_is_reg(&rx->xdp_rxq))
+ xdp_rxq_info_unreg(&rx->xdp_rxq);
+ if (xdp_rxq_info_is_reg(&rx->xsk_rxq))
+ xdp_rxq_info_unreg(&rx->xsk_rxq);
+ }
+ return err;
+}
+
+static void gve_unreg_xdp_info(struct gve_priv *priv)
+{
+ int i, tx_qid;
+
+ if (!priv->num_xdp_queues)
+ return;
+
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ struct gve_rx_ring *rx = &priv->rx[i];
+
+ xdp_rxq_info_unreg(&rx->xdp_rxq);
+ if (rx->xsk_pool) {
+ xdp_rxq_info_unreg(&rx->xsk_rxq);
+ rx->xsk_pool = NULL;
+ }
+ }
+
+ for (i = 0; i < priv->num_xdp_queues; i++) {
+ tx_qid = gve_xdp_tx_queue_id(priv, i);
+ priv->tx[tx_qid].xsk_pool = NULL;
+ }
+}
+
+static void gve_drain_page_cache(struct gve_priv *priv)
+{
+ struct page_frag_cache *nc;
+ int i;
+
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ nc = &priv->rx[i].page_cache;
+ if (nc->va) {
+ __page_frag_cache_drain(virt_to_page(nc->va),
+ nc->pagecnt_bias);
+ nc->va = NULL;
+ }
+ }
+}
+
static int gve_open(struct net_device *dev)
{
struct gve_priv *priv = netdev_priv(dev);
int err;
+ if (priv->xdp_prog)
+ priv->num_xdp_queues = priv->rx_cfg.num_queues;
+ else
+ priv->num_xdp_queues = 0;
+
err = gve_alloc_qpls(priv);
if (err)
return err;
if (err)
goto free_rings;
+ err = gve_reg_xdp_info(priv, dev);
+ if (err)
+ goto free_rings;
+
err = gve_register_qpls(priv);
if (err)
goto reset;
netif_carrier_off(dev);
if (gve_get_device_rings_ok(priv)) {
gve_turndown(priv);
+ gve_drain_page_cache(priv);
err = gve_destroy_rings(priv);
if (err)
goto err;
}
del_timer_sync(&priv->stats_report_timer);
+ gve_unreg_xdp_info(priv);
gve_free_rings(priv);
gve_free_qpls(priv);
priv->interface_down_cnt++;
return gve_reset_recovery(priv, false);
}
+static int gve_remove_xdp_queues(struct gve_priv *priv)
+{
+ int err;
+
+ err = gve_destroy_xdp_rings(priv);
+ if (err)
+ return err;
+
+ err = gve_unregister_xdp_qpls(priv);
+ if (err)
+ return err;
+
+ gve_unreg_xdp_info(priv);
+ gve_free_xdp_rings(priv);
+ gve_free_xdp_qpls(priv);
+ priv->num_xdp_queues = 0;
+ return 0;
+}
+
+static int gve_add_xdp_queues(struct gve_priv *priv)
+{
+ int err;
+
+ priv->num_xdp_queues = priv->tx_cfg.num_queues;
+
+ err = gve_alloc_xdp_qpls(priv);
+ if (err)
+ goto err;
+
+ err = gve_alloc_xdp_rings(priv);
+ if (err)
+ goto free_xdp_qpls;
+
+ err = gve_reg_xdp_info(priv, priv->dev);
+ if (err)
+ goto free_xdp_rings;
+
+ err = gve_register_xdp_qpls(priv);
+ if (err)
+ goto free_xdp_rings;
+
+ err = gve_create_xdp_rings(priv);
+ if (err)
+ goto free_xdp_rings;
+
+ return 0;
+
+free_xdp_rings:
+ gve_free_xdp_rings(priv);
+free_xdp_qpls:
+ gve_free_xdp_qpls(priv);
+err:
+ priv->num_xdp_queues = 0;
+ return err;
+}
+
+static void gve_handle_link_status(struct gve_priv *priv, bool link_status)
+{
+ if (!gve_get_napi_enabled(priv))
+ return;
+
+ if (link_status == netif_carrier_ok(priv->dev))
+ return;
+
+ if (link_status) {
+ netdev_info(priv->dev, "Device link is up.\n");
+ netif_carrier_on(priv->dev);
+ } else {
+ netdev_info(priv->dev, "Device link is down.\n");
+ netif_carrier_off(priv->dev);
+ }
+}
+
+static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct bpf_prog *old_prog;
+ int err = 0;
+ u32 status;
+
+ old_prog = READ_ONCE(priv->xdp_prog);
+ if (!netif_carrier_ok(priv->dev)) {
+ WRITE_ONCE(priv->xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+ return 0;
+ }
+
+ gve_turndown(priv);
+ if (!old_prog && prog) {
+ // Allocate XDP TX queues if an XDP program is
+ // being installed
+ err = gve_add_xdp_queues(priv);
+ if (err)
+ goto out;
+ } else if (old_prog && !prog) {
+ // Remove XDP TX queues if an XDP program is
+ // being uninstalled
+ err = gve_remove_xdp_queues(priv);
+ if (err)
+ goto out;
+ }
+ WRITE_ONCE(priv->xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+out:
+ gve_turnup(priv);
+ status = ioread32be(&priv->reg_bar0->device_status);
+ gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+ return err;
+}
+
+static int gve_xsk_pool_enable(struct net_device *dev,
+ struct xsk_buff_pool *pool,
+ u16 qid)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ struct napi_struct *napi;
+ struct gve_rx_ring *rx;
+ int tx_qid;
+ int err;
+
+ if (qid >= priv->rx_cfg.num_queues) {
+ dev_err(&priv->pdev->dev, "xsk pool invalid qid %d", qid);
+ return -EINVAL;
+ }
+ if (xsk_pool_get_rx_frame_size(pool) <
+ priv->dev->max_mtu + sizeof(struct ethhdr)) {
+ dev_err(&priv->pdev->dev, "xsk pool frame_len too small");
+ return -EINVAL;
+ }
+
+ err = xsk_pool_dma_map(pool, &priv->pdev->dev,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ if (err)
+ return err;
+
+ /* If XDP prog is not installed, return */
+ if (!priv->xdp_prog)
+ return 0;
+
+ rx = &priv->rx[qid];
+ napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
+ err = xdp_rxq_info_reg(&rx->xsk_rxq, dev, qid, napi->napi_id);
+ if (err)
+ goto err;
+
+ err = xdp_rxq_info_reg_mem_model(&rx->xsk_rxq,
+ MEM_TYPE_XSK_BUFF_POOL, NULL);
+ if (err)
+ goto err;
+
+ xsk_pool_set_rxq_info(pool, &rx->xsk_rxq);
+ rx->xsk_pool = pool;
+
+ tx_qid = gve_xdp_tx_queue_id(priv, qid);
+ priv->tx[tx_qid].xsk_pool = pool;
+
+ return 0;
+err:
+ if (xdp_rxq_info_is_reg(&rx->xsk_rxq))
+ xdp_rxq_info_unreg(&rx->xsk_rxq);
+
+ xsk_pool_dma_unmap(pool,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ return err;
+}
+
+static int gve_xsk_pool_disable(struct net_device *dev,
+ u16 qid)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ struct napi_struct *napi_rx;
+ struct napi_struct *napi_tx;
+ struct xsk_buff_pool *pool;
+ int tx_qid;
+
+ pool = xsk_get_pool_from_qid(dev, qid);
+ if (!pool)
+ return -EINVAL;
+ if (qid >= priv->rx_cfg.num_queues)
+ return -EINVAL;
+
+ /* If XDP prog is not installed, unmap DMA and return */
+ if (!priv->xdp_prog)
+ goto done;
+
+ tx_qid = gve_xdp_tx_queue_id(priv, qid);
+ if (!netif_running(dev)) {
+ priv->rx[qid].xsk_pool = NULL;
+ xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+ priv->tx[tx_qid].xsk_pool = NULL;
+ goto done;
+ }
+
+ napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
+ napi_disable(napi_rx); /* make sure current rx poll is done */
+
+ napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
+ napi_disable(napi_tx); /* make sure current tx poll is done */
+
+ priv->rx[qid].xsk_pool = NULL;
+ xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+ priv->tx[tx_qid].xsk_pool = NULL;
+ smp_mb(); /* Make sure it is visible to the workers on datapath */
+
+ napi_enable(napi_rx);
+ if (gve_rx_work_pending(&priv->rx[qid]))
+ napi_schedule(napi_rx);
+
+ napi_enable(napi_tx);
+ if (gve_tx_clean_pending(priv, &priv->tx[tx_qid]))
+ napi_schedule(napi_tx);
+
+done:
+ xsk_pool_dma_unmap(pool,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ return 0;
+}
+
+static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
+
+ if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
+ return -EINVAL;
+
+ if (flags & XDP_WAKEUP_TX) {
+ struct gve_tx_ring *tx = &priv->tx[tx_queue_id];
+ struct napi_struct *napi =
+ &priv->ntfy_blocks[tx->ntfy_id].napi;
+
+ if (!napi_if_scheduled_mark_missed(napi)) {
+ /* Call local_bh_enable to trigger SoftIRQ processing */
+ local_bh_disable();
+ napi_schedule(napi);
+ local_bh_enable();
+ }
+
+ tx->xdp_xsk_wakeup++;
+ }
+
+ return 0;
+}
+
+static int verify_xdp_configuration(struct net_device *dev)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+
+ if (dev->features & NETIF_F_LRO) {
+ netdev_warn(dev, "XDP is not supported when LRO is on.\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
+ netdev_warn(dev, "XDP is not supported in mode %d.\n",
+ priv->queue_format);
+ return -EOPNOTSUPP;
+ }
+
+ if (dev->mtu > (PAGE_SIZE / 2) - sizeof(struct ethhdr) - GVE_RX_PAD) {
+ netdev_warn(dev, "XDP is not supported for mtu %d.\n",
+ dev->mtu);
+ return -EOPNOTSUPP;
+ }
+
+ if (priv->rx_cfg.num_queues != priv->tx_cfg.num_queues ||
+ (2 * priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)) {
+ netdev_warn(dev, "XDP load failed: The number of configured RX queues %d should be equal to the number of configured TX queues %d and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues %d",
+ priv->rx_cfg.num_queues,
+ priv->tx_cfg.num_queues,
+ priv->tx_cfg.max_queues);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ int err;
+
+ err = verify_xdp_configuration(dev);
+ if (err)
+ return err;
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return gve_set_xdp(priv, xdp->prog, xdp->extack);
+ case XDP_SETUP_XSK_POOL:
+ if (xdp->xsk.pool)
+ return gve_xsk_pool_enable(dev, xdp->xsk.pool, xdp->xsk.queue_id);
+ else
+ return gve_xsk_pool_disable(dev, xdp->xsk.queue_id);
+ default:
+ return -EINVAL;
+ }
+}
+
int gve_adjust_queues(struct gve_priv *priv,
struct gve_queue_config new_rx_config,
struct gve_queue_config new_tx_config)
return;
/* Disable napi to prevent more work from coming in */
- for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
netif_tx_start_all_queues(priv->dev);
/* Enable napi and unmask interrupts for all queues */
- for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
.ndo_get_stats64 = gve_get_stats,
.ndo_tx_timeout = gve_tx_timeout,
.ndo_set_features = gve_set_features,
+ .ndo_bpf = gve_xdp,
+ .ndo_xdp_xmit = gve_xdp_xmit,
+ .ndo_xsk_wakeup = gve_xsk_wakeup,
};
static void gve_handle_status(struct gve_priv *priv, u32 status)
be64_add_cpu(&priv->stats_report->written_count, 1);
/* tx stats */
if (priv->tx) {
- for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ for (idx = 0; idx < gve_num_tx_queues(priv); idx++) {
u32 last_completion = 0;
u32 tx_frames = 0;
}
}
-static void gve_handle_link_status(struct gve_priv *priv, bool link_status)
-{
- if (!gve_get_napi_enabled(priv))
- return;
-
- if (link_status == netif_carrier_ok(priv->dev))
- return;
-
- if (link_status) {
- netdev_info(priv->dev, "Device link is up.\n");
- netif_carrier_on(priv->dev);
- } else {
- netdev_info(priv->dev, "Device link is down.\n");
- netif_carrier_off(priv->dev);
- }
-}
-
/* Handle NIC status register changes, reset requests and report stats */
static void gve_service_task(struct work_struct *work)
{
gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
}
+static void gve_set_netdev_xdp_features(struct gve_priv *priv)
+{
+ if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+ priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC;
+ priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+ priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+ priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ } else {
+ priv->dev->xdp_features = 0;
+ }
+}
+
static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
{
int num_ntfy;
}
setup_device:
+ gve_set_netdev_xdp_features(priv);
err = gve_setup_device_resources(priv);
if (!err)
return 0;
#include "gve_adminq.h"
#include "gve_utils.h"
#include <linux/etherdevice.h>
+#include <linux/filter.h>
+#include <net/xdp.h>
+#include <net/xdp_sock_drv.h>
static void gve_rx_free_buffer(struct device *dev,
struct gve_rx_slot_page_info *page_info,
return -ENOMEM;
if (!rx->data.raw_addressing) {
- rx->data.qpl = gve_assign_rx_qpl(priv);
+ rx->data.qpl = gve_assign_rx_qpl(priv, rx->q_num);
if (!rx->data.qpl) {
kvfree(rx->data.page_info);
rx->data.page_info = NULL;
if (len <= priv->rx_copybreak && is_only_frag) {
/* Just copy small packets */
- skb = gve_rx_copy(netdev, napi, page_info, len, GVE_RX_PAD);
+ skb = gve_rx_copy(netdev, napi, page_info, len);
if (skb) {
u64_stats_update_begin(&rx->statss);
rx->rx_copied_pkt++;
return skb;
}
+static int gve_xsk_pool_redirect(struct net_device *dev,
+ struct gve_rx_ring *rx,
+ void *data, int len,
+ struct bpf_prog *xdp_prog)
+{
+ struct xdp_buff *xdp;
+ int err;
+
+ if (rx->xsk_pool->frame_len < len)
+ return -E2BIG;
+ xdp = xsk_buff_alloc(rx->xsk_pool);
+ if (!xdp) {
+ u64_stats_update_begin(&rx->statss);
+ rx->xdp_alloc_fails++;
+ u64_stats_update_end(&rx->statss);
+ return -ENOMEM;
+ }
+ xdp->data_end = xdp->data + len;
+ memcpy(xdp->data, data, len);
+ err = xdp_do_redirect(dev, xdp, xdp_prog);
+ if (err)
+ xsk_buff_free(xdp);
+ return err;
+}
+
+static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
+ struct xdp_buff *orig, struct bpf_prog *xdp_prog)
+{
+ int total_len, len = orig->data_end - orig->data;
+ int headroom = XDP_PACKET_HEADROOM;
+ struct xdp_buff new;
+ void *frame;
+ int err;
+
+ if (rx->xsk_pool)
+ return gve_xsk_pool_redirect(dev, rx, orig->data,
+ len, xdp_prog);
+
+ total_len = headroom + SKB_DATA_ALIGN(len) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
+ if (!frame) {
+ u64_stats_update_begin(&rx->statss);
+ rx->xdp_alloc_fails++;
+ u64_stats_update_end(&rx->statss);
+ return -ENOMEM;
+ }
+ xdp_init_buff(&new, total_len, &rx->xdp_rxq);
+ xdp_prepare_buff(&new, frame, headroom, len, false);
+ memcpy(new.data, orig->data, len);
+
+ err = xdp_do_redirect(dev, &new, xdp_prog);
+ if (err)
+ page_frag_free(frame);
+
+ return err;
+}
+
+static void gve_xdp_done(struct gve_priv *priv, struct gve_rx_ring *rx,
+ struct xdp_buff *xdp, struct bpf_prog *xprog,
+ int xdp_act)
+{
+ struct gve_tx_ring *tx;
+ int tx_qid;
+ int err;
+
+ switch (xdp_act) {
+ case XDP_ABORTED:
+ case XDP_DROP:
+ default:
+ break;
+ case XDP_TX:
+ tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
+ tx = &priv->tx[tx_qid];
+ spin_lock(&tx->xdp_lock);
+ err = gve_xdp_xmit_one(priv, tx, xdp->data,
+ xdp->data_end - xdp->data, NULL);
+ spin_unlock(&tx->xdp_lock);
+
+ if (unlikely(err)) {
+ u64_stats_update_begin(&rx->statss);
+ rx->xdp_tx_errors++;
+ u64_stats_update_end(&rx->statss);
+ }
+ break;
+ case XDP_REDIRECT:
+ err = gve_xdp_redirect(priv->dev, rx, xdp, xprog);
+
+ if (unlikely(err)) {
+ u64_stats_update_begin(&rx->statss);
+ rx->xdp_redirect_errors++;
+ u64_stats_update_end(&rx->statss);
+ }
+ break;
+ }
+ u64_stats_update_begin(&rx->statss);
+ if ((u32)xdp_act < GVE_XDP_ACTIONS)
+ rx->xdp_actions[xdp_act]++;
+ u64_stats_update_end(&rx->statss);
+}
+
#define GVE_PKTCONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
struct gve_rx_desc *desc, u32 idx,
union gve_rx_data_slot *data_slot;
struct gve_priv *priv = rx->gve;
struct sk_buff *skb = NULL;
+ struct bpf_prog *xprog;
+ struct xdp_buff xdp;
dma_addr_t page_bus;
void *va;
+ u16 len = frag_size;
struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
bool is_first_frag = ctx->frag_cnt == 0;
dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
PAGE_SIZE, DMA_FROM_DEVICE);
page_info->pad = is_first_frag ? GVE_RX_PAD : 0;
+ len -= page_info->pad;
frag_size -= page_info->pad;
- skb = gve_rx_skb(priv, rx, page_info, napi, frag_size,
+ xprog = READ_ONCE(priv->xdp_prog);
+ if (xprog && is_only_frag) {
+ void *old_data;
+ int xdp_act;
+
+ xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
+ xdp_prepare_buff(&xdp, page_info->page_address +
+ page_info->page_offset, GVE_RX_PAD,
+ len, false);
+ old_data = xdp.data;
+ xdp_act = bpf_prog_run_xdp(xprog, &xdp);
+ if (xdp_act != XDP_PASS) {
+ gve_xdp_done(priv, rx, &xdp, xprog, xdp_act);
+ ctx->total_size += frag_size;
+ goto finish_ok_pkt;
+ }
+
+ page_info->pad += xdp.data - old_data;
+ len = xdp.data_end - xdp.data;
+
+ u64_stats_update_begin(&rx->statss);
+ rx->xdp_actions[XDP_PASS]++;
+ u64_stats_update_end(&rx->statss);
+ }
+
+ skb = gve_rx_skb(priv, rx, page_info, napi, len,
data_slot, is_only_frag);
if (!skb) {
u64_stats_update_begin(&rx->statss);
static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
netdev_features_t feat)
{
+ u64 xdp_redirects = rx->xdp_actions[XDP_REDIRECT];
+ u64 xdp_txs = rx->xdp_actions[XDP_TX];
struct gve_rx_ctx *ctx = &rx->ctx;
struct gve_priv *priv = rx->gve;
struct gve_rx_cnts cnts = {0};
u64_stats_update_end(&rx->statss);
}
+ if (xdp_txs != rx->xdp_actions[XDP_TX])
+ gve_xdp_tx_flush(priv, rx->q_num);
+
+ if (xdp_redirects != rx->xdp_actions[XDP_REDIRECT])
+ xdp_do_flush();
+
/* restock ring slots */
if (!rx->data.raw_addressing) {
/* In QPL mode buffs are refilled as the desc are processed */
if (eop && buf_len <= priv->rx_copybreak) {
rx->ctx.skb_head = gve_rx_copy(priv->dev, napi,
- &buf_state->page_info, buf_len, 0);
+ &buf_state->page_info, buf_len);
if (unlikely(!rx->ctx.skb_head))
goto error;
rx->ctx.skb_tail = rx->ctx.skb_head;
#include <linux/tcp.h>
#include <linux/vmalloc.h>
#include <linux/skbuff.h>
+#include <net/xdp_sock_drv.h>
static inline void gve_tx_put_doorbell(struct gve_priv *priv,
struct gve_queue_resources *q_resources,
iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
}
+void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid)
+{
+ u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
+ struct gve_tx_ring *tx = &priv->tx[tx_qid];
+
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+}
+
/* gvnic can only transmit from a Registered Segment.
* We copy skb payloads into the registered segment before writing Tx
* descriptors and ringing the Tx doorbell.
atomic_add(bytes, &fifo->available);
}
+static size_t gve_tx_clear_buffer_state(struct gve_tx_buffer_state *info)
+{
+ size_t space_freed = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
+ space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
+ info->iov[i].iov_len = 0;
+ info->iov[i].iov_padding = 0;
+ }
+ return space_freed;
+}
+
+static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx,
+ u32 to_do)
+{
+ struct gve_tx_buffer_state *info;
+ u32 clean_end = tx->done + to_do;
+ u64 pkts = 0, bytes = 0;
+ size_t space_freed = 0;
+ u32 xsk_complete = 0;
+ u32 idx;
+
+ for (; tx->done < clean_end; tx->done++) {
+ idx = tx->done & tx->mask;
+ info = &tx->info[idx];
+
+ if (unlikely(!info->xdp.size))
+ continue;
+
+ bytes += info->xdp.size;
+ pkts++;
+ xsk_complete += info->xdp.is_xsk;
+
+ info->xdp.size = 0;
+ if (info->xdp_frame) {
+ xdp_return_frame(info->xdp_frame);
+ info->xdp_frame = NULL;
+ }
+ space_freed += gve_tx_clear_buffer_state(info);
+ }
+
+ gve_tx_free_fifo(&tx->tx_fifo, space_freed);
+ if (xsk_complete > 0 && tx->xsk_pool)
+ xsk_tx_completed(tx->xsk_pool, xsk_complete);
+ u64_stats_update_begin(&tx->statss);
+ tx->bytes_done += bytes;
+ tx->pkt_done += pkts;
+ u64_stats_update_end(&tx->statss);
+ return pkts;
+}
+
static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
u32 to_do, bool try_to_wake);
gve_tx_remove_from_block(priv, idx);
slots = tx->mask + 1;
- gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
- netdev_tx_reset_queue(tx->netdev_txq);
+ if (tx->q_num < priv->tx_cfg.num_queues) {
+ gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
+ netdev_tx_reset_queue(tx->netdev_txq);
+ } else {
+ gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
+ }
dma_free_coherent(hdev, sizeof(*tx->q_resources),
tx->q_resources, tx->q_resources_bus);
/* Make sure everything is zeroed to start */
memset(tx, 0, sizeof(*tx));
spin_lock_init(&tx->clean_lock);
+ spin_lock_init(&tx->xdp_lock);
tx->q_num = idx;
tx->mask = slots - 1;
tx->raw_addressing = priv->queue_format == GVE_GQI_RDA_FORMAT;
tx->dev = &priv->pdev->dev;
if (!tx->raw_addressing) {
- tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
+ tx->tx_fifo.qpl = gve_assign_tx_qpl(priv, idx);
if (!tx->tx_fifo.qpl)
goto abort_with_desc;
/* map Tx FIFO */
netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
(unsigned long)tx->bus);
- tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
+ if (idx < priv->tx_cfg.num_queues)
+ tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
gve_tx_add_to_block(priv, idx);
return 0;
return -ENOMEM;
}
-int gve_tx_alloc_rings(struct gve_priv *priv)
+int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings)
{
int err = 0;
int i;
- for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ for (i = start_id; i < start_id + num_rings; i++) {
err = gve_tx_alloc_ring(priv, i);
if (err) {
netif_err(priv, drv, priv->dev,
if (err) {
int j;
- for (j = 0; j < i; j++)
+ for (j = start_id; j < i; j++)
gve_tx_free_ring(priv, j);
}
return err;
}
-void gve_tx_free_rings_gqi(struct gve_priv *priv)
+void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings)
{
int i;
- for (i = 0; i < priv->tx_cfg.num_queues; i++)
+ for (i = start_id; i < start_id + num_rings; i++)
gve_tx_free_ring(priv, i);
}
}
static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
- struct sk_buff *skb, bool is_gso,
+ u16 csum_offset, u8 ip_summed, bool is_gso,
int l4_hdr_offset, u32 desc_cnt,
- u16 hlen, u64 addr)
+ u16 hlen, u64 addr, u16 pkt_len)
{
/* l4_hdr_offset and csum_offset are in units of 16-bit words */
if (is_gso) {
pkt_desc->pkt.type_flags = GVE_TXD_TSO | GVE_TXF_L4CSUM;
- pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+ pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
- } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ } else if (likely(ip_summed == CHECKSUM_PARTIAL)) {
pkt_desc->pkt.type_flags = GVE_TXD_STD | GVE_TXF_L4CSUM;
- pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+ pkt_desc->pkt.l4_csum_offset = csum_offset >> 1;
pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
} else {
pkt_desc->pkt.type_flags = GVE_TXD_STD;
pkt_desc->pkt.l4_hdr_offset = 0;
}
pkt_desc->pkt.desc_cnt = desc_cnt;
- pkt_desc->pkt.len = cpu_to_be16(skb->len);
+ pkt_desc->pkt.len = cpu_to_be16(pkt_len);
pkt_desc->pkt.seg_len = cpu_to_be16(hlen);
pkt_desc->pkt.seg_addr = cpu_to_be64(addr);
}
}
static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
- struct sk_buff *skb, bool is_gso,
+ u16 l3_offset, u16 gso_size,
+ bool is_gso_v6, bool is_gso,
u16 len, u64 addr)
{
seg_desc->seg.type_flags = GVE_TXD_SEG;
if (is_gso) {
- if (skb_is_gso_v6(skb))
+ if (is_gso_v6)
seg_desc->seg.type_flags |= GVE_TXSF_IPV6;
- seg_desc->seg.l3_offset = skb_network_offset(skb) >> 1;
- seg_desc->seg.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
+ seg_desc->seg.l3_offset = l3_offset >> 1;
+ seg_desc->seg.mss = cpu_to_be16(gso_size);
}
seg_desc->seg.seg_len = cpu_to_be16(len);
seg_desc->seg.seg_addr = cpu_to_be64(addr);
payload_nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, skb->len - hlen,
&info->iov[payload_iov]);
- gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
+ gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
+ is_gso, l4_hdr_offset,
1 + mtd_desc_nr + payload_nfrags, hlen,
- info->iov[hdr_nfrags - 1].iov_offset);
+ info->iov[hdr_nfrags - 1].iov_offset, skb->len);
skb_copy_bits(skb, 0,
tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
next_idx = (tx->req + 1 + mtd_desc_nr + i - payload_iov) & tx->mask;
seg_desc = &tx->desc[next_idx];
- gve_tx_fill_seg_desc(seg_desc, skb, is_gso,
+ gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+ skb_shinfo(skb)->gso_size,
+ skb_is_gso_v6(skb), is_gso,
info->iov[i].iov_len,
info->iov[i].iov_offset);
if (mtd_desc_nr)
num_descriptors++;
- gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
- num_descriptors, hlen, addr);
+ gve_tx_fill_pkt_desc(pkt_desc, skb->csum_offset, skb->ip_summed,
+ is_gso, l4_hdr_offset,
+ num_descriptors, hlen, addr, skb->len);
if (mtd_desc_nr) {
idx = (idx + 1) & tx->mask;
addr += hlen;
idx = (idx + 1) & tx->mask;
seg_desc = &tx->desc[idx];
- gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
+ gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+ skb_shinfo(skb)->gso_size,
+ skb_is_gso_v6(skb), is_gso, len, addr);
}
for (i = 0; i < shinfo->nr_frags; i++) {
dma_unmap_len_set(&tx->info[idx], len, len);
dma_unmap_addr_set(&tx->info[idx], dma, addr);
- gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
+ gve_tx_fill_seg_desc(seg_desc, skb_network_offset(skb),
+ skb_shinfo(skb)->gso_size,
+ skb_is_gso_v6(skb), is_gso, len, addr);
}
return num_descriptors;
return NETDEV_TX_OK;
}
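+/* Copy an XDP frame into the Tx FIFO and write the matching packet and
+ * segment descriptors. Returns the number of descriptors used.
+ */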
+static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
+ void *data, int len, void *frame_p, bool is_xsk)
+{
+ int pad, nfrags, ndescs, iovi, offset;
+ struct gve_tx_buffer_state *info;
+ u32 reqi = tx->req;
+
+ pad = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, len);
+ if (pad >= GVE_TX_MAX_HEADER_SIZE)
+ pad = 0;
+ info = &tx->info[reqi & tx->mask];
+ info->xdp_frame = frame_p;
+ info->xdp.size = len;
+ info->xdp.is_xsk = is_xsk;
+
+ nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, pad + len,
+ &info->iov[0]);
+ iovi = pad > 0;
+ ndescs = nfrags - iovi;
+ offset = 0;
+
+ while (iovi < nfrags) {
+ if (!offset)
+ gve_tx_fill_pkt_desc(&tx->desc[reqi & tx->mask], 0,
+ CHECKSUM_NONE, false, 0, ndescs,
+ info->iov[iovi].iov_len,
+ info->iov[iovi].iov_offset, len);
+ else
+ gve_tx_fill_seg_desc(&tx->desc[reqi & tx->mask],
+ 0, 0, false, false,
+ info->iov[iovi].iov_len,
+ info->iov[iovi].iov_offset);
+
+ memcpy(tx->tx_fifo.base + info->iov[iovi].iov_offset,
+ data + offset, info->iov[iovi].iov_len);
+ gve_dma_sync_for_device(&priv->pdev->dev,
+ tx->tx_fifo.qpl->page_buses,
+ info->iov[iovi].iov_offset,
+ info->iov[iovi].iov_len);
+ offset += info->iov[iovi].iov_len;
+ iovi++;
+ reqi++;
+ }
+
+ return ndescs;
+}
+
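+/* .ndo_xdp_xmit handler: map the calling CPU to an XDP Tx queue, transmit
+ * the frames under the queue's xdp_lock and ring the doorbell when
+ * XDP_XMIT_FLUSH is set.
+ */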
+int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ u32 flags)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ struct gve_tx_ring *tx;
+ int i, err = 0, qid;
+
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ return -EINVAL;
+
+ qid = gve_xdp_tx_queue_id(priv,
+ smp_processor_id() % priv->num_xdp_queues);
+
+ tx = &priv->tx[qid];
+
+ spin_lock(&tx->xdp_lock);
+ for (i = 0; i < n; i++) {
+ err = gve_xdp_xmit_one(priv, tx, frames[i]->data,
+ frames[i]->len, frames[i]);
+ if (err)
+ break;
+ }
+
+ if (flags & XDP_XMIT_FLUSH)
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+
+ spin_unlock(&tx->xdp_lock);
+
+ u64_stats_update_begin(&tx->statss);
+ tx->xdp_xmit += n;
+ tx->xdp_xmit_errors += n - i;
+ u64_stats_update_end(&tx->statss);
+
+ return i ? i : err;
+}
+
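+/* Queue a single XDP frame; returns -EBUSY when there is not enough
+ * descriptor or FIFO space.
+ */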
+int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+ void *data, int len, void *frame_p)
+{
+ int nsegs;
+
+ if (!gve_can_tx(tx, len + GVE_TX_MAX_HEADER_SIZE - 1))
+ return -EBUSY;
+
+ nsegs = gve_tx_fill_xdp(priv, tx, data, len, frame_p, false);
+ tx->req += nsegs;
+
+ return 0;
+}
+
#define GVE_TX_START_THRESH PAGE_SIZE
static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
u64 pkts = 0, bytes = 0;
size_t space_freed = 0;
struct sk_buff *skb;
- int i, j;
u32 idx;
+ int j;
for (j = 0; j < to_do; j++) {
idx = tx->done & tx->mask;
dev_consume_skb_any(skb);
if (tx->raw_addressing)
continue;
- /* FIFO free */
- for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
- space_freed += info->iov[i].iov_len + info->iov[i].iov_padding;
- info->iov[i].iov_len = 0;
- info->iov[i].iov_padding = 0;
- }
+ space_freed += gve_tx_clear_buffer_state(info);
}
}
return be32_to_cpu(counter);
}
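+/* Send up to @budget descriptors from the queue's AF_XDP Tx ring, ringing
+ * the doorbell once for the whole batch.
+ */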
+static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
+ int budget)
+{
+ struct xdp_desc desc;
+ int sent = 0, nsegs;
+ void *data;
+
+ spin_lock(&tx->xdp_lock);
+ while (sent < budget) {
+ if (!gve_can_tx(tx, GVE_TX_START_THRESH))
+ goto out;
+
+ if (!xsk_tx_peek_desc(tx->xsk_pool, &desc)) {
+ tx->xdp_xsk_done = tx->xdp_xsk_wakeup;
+ goto out;
+ }
+
+ data = xsk_buff_raw_get_data(tx->xsk_pool, desc.addr);
+ nsegs = gve_tx_fill_xdp(priv, tx, data, desc.len, NULL, true);
+ tx->req += nsegs;
+ sent++;
+ }
+out:
+ if (sent > 0) {
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+ xsk_tx_release(tx->xsk_pool);
+ }
+ spin_unlock(&tx->xdp_lock);
+ return sent;
+}
+
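+/* NAPI Tx handler for XDP queues: reclaim completed descriptors and, when an
+ * XSK pool is attached, transmit pending AF_XDP descriptors.
+ */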
+bool gve_xdp_poll(struct gve_notify_block *block, int budget)
+{
+ struct gve_priv *priv = block->priv;
+ struct gve_tx_ring *tx = block->tx;
+ u32 nic_done;
+ bool repoll;
+ u32 to_do;
+
+ /* If budget is 0, do all the work */
+ if (budget == 0)
+ budget = INT_MAX;
+
+ /* Find out how much work there is to be done */
+ nic_done = gve_tx_load_event_counter(priv, tx);
+ to_do = min_t(u32, (nic_done - tx->done), budget);
+ gve_clean_xdp_done(priv, tx, to_do);
+ repoll = nic_done != tx->done;
+
+ if (tx->xsk_pool) {
+ int sent = gve_xsk_tx(priv, tx, budget);
+
+ u64_stats_update_begin(&tx->statss);
+ tx->xdp_xsk_sent += sent;
+ u64_stats_update_end(&tx->statss);
+ repoll |= (sent == budget);
+ if (xsk_uses_need_wakeup(tx->xsk_pool))
+ xsk_set_tx_need_wakeup(tx->xsk_pool);
+ }
+
+ /* If we still have work we want to repoll */
+ return repoll;
+}
+
bool gve_tx_poll(struct gve_notify_block *block, int budget)
{
struct gve_priv *priv = block->priv;
}
struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
- struct gve_rx_slot_page_info *page_info, u16 len,
- u16 padding)
+ struct gve_rx_slot_page_info *page_info, u16 len)
{
- void *va = page_info->page_address + padding + page_info->page_offset;
+ void *va = page_info->page_address + page_info->page_offset +
+ page_info->pad;
struct sk_buff *skb;
skb = napi_alloc_skb(napi, len);
void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx);
struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
- struct gve_rx_slot_page_info *page_info, u16 len,
- u16 pad);
+ struct gve_rx_slot_page_info *page_info, u16 len);
/* Decrement pagecnt_bias. Set it back to INT_MAX if it reached zero. */
void gve_dec_pagecnt_bias(struct gve_rx_slot_page_info *page_info);
HNAE3_DEV_SUPPORT_CQ_B,
HNAE3_DEV_SUPPORT_FEC_STATS_B,
HNAE3_DEV_SUPPORT_LANE_NUM_B,
+ HNAE3_DEV_SUPPORT_WOL_B,
};
#define hnae3_ae_dev_fd_supported(ae_dev) \
#define hnae3_ae_dev_lane_num_supported(ae_dev) \
test_bit(HNAE3_DEV_SUPPORT_LANE_NUM_B, (ae_dev)->caps)
+#define hnae3_ae_dev_wol_supported(ae_dev) \
+ test_bit(HNAE3_DEV_SUPPORT_WOL_B, (ae_dev)->caps)
+
enum HNAE3_PF_CAP_BITS {
HNAE3_PF_SUPPORT_VLAN_FLTR_MDF_B = 0,
};
* Get phc info
* clean_vf_config
* Clean residual vf info after disable sriov
+ * get_wol
+ * Get wake on lan info
+ * set_wol
+ * Config wake on lan
*/
struct hnae3_ae_ops {
int (*init_ae_dev)(struct hnae3_ae_dev *ae_dev);
void (*clean_vf_config)(struct hnae3_ae_dev *ae_dev, int num_vfs);
int (*get_dscp_prio)(struct hnae3_handle *handle, u8 dscp,
u8 *tc_map_mode, u8 *priority);
+ void (*get_wol)(struct hnae3_handle *handle,
+ struct ethtool_wolinfo *wol);
+ int (*set_wol)(struct hnae3_handle *handle,
+ struct ethtool_wolinfo *wol);
};
struct hnae3_dcb_ops {
{HCLGE_COMM_CAP_FD_B, HNAE3_DEV_SUPPORT_FD_B},
{HCLGE_COMM_CAP_FEC_STATS_B, HNAE3_DEV_SUPPORT_FEC_STATS_B},
{HCLGE_COMM_CAP_LANE_NUM_B, HNAE3_DEV_SUPPORT_LANE_NUM_B},
+ {HCLGE_COMM_CAP_WOL_B, HNAE3_DEV_SUPPORT_WOL_B},
};
static const struct hclge_comm_caps_bit_map hclge_vf_cmd_caps[] = {
HCLGE_PPP_CMD0_INT_CMD = 0x2100,
HCLGE_PPP_CMD1_INT_CMD = 0x2101,
HCLGE_MAC_ETHERTYPE_IDX_RD = 0x2105,
+ HCLGE_OPC_WOL_GET_SUPPORTED_MODE = 0x2201,
+ HCLGE_OPC_WOL_CFG = 0x2202,
HCLGE_NCSI_INT_EN = 0x2401,
/* ROH MAC commands */
HCLGE_COMM_CAP_FD_B = 21,
HCLGE_COMM_CAP_FEC_STATS_B = 25,
HCLGE_COMM_CAP_LANE_NUM_B = 27,
+ HCLGE_COMM_CAP_WOL_B = 28,
};
enum HCLGE_COMM_API_CAP_BITS {
}, {
.name = "support lane num",
.cap_bit = HNAE3_DEV_SUPPORT_LANE_NUM_B,
+ }, {
+ .name = "support wake on lan",
+ .cap_bit = HNAE3_DEV_SUPPORT_WOL_B,
}
};
#include <linux/ipv6.h>
#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/skbuff.h>
#include <linux/sctp.h>
#include <net/gre.h>
#define hns3_get_handle(ndev) \
(((struct hns3_nic_priv *)netdev_priv(ndev))->ae_handle)
+#define hns3_get_ae_dev(handle) \
+ (pci_get_drvdata((handle)->pdev))
+
+#define hns3_get_ops(handle) \
+ ((handle)->ae_algo->ops)
+
#define hns3_gl_usec_to_reg(int_gl) ((int_gl) >> 1)
#define hns3_gl_round_down(int_gl) round_down(int_gl, 2)
return -ENODATA;
}
+static void hns3_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct hnae3_handle *handle = hns3_get_handle(netdev);
+ const struct hnae3_ae_ops *ops = hns3_get_ops(handle);
+ struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(handle);
+
+ if (!hnae3_ae_dev_wol_supported(ae_dev))
+ return;
+
+ ops->get_wol(handle, wol);
+}
+
+static int hns3_set_wol(struct net_device *netdev,
+ struct ethtool_wolinfo *wol)
+{
+ struct hnae3_handle *handle = hns3_get_handle(netdev);
+ const struct hnae3_ae_ops *ops = hns3_get_ops(handle);
+ struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(handle);
+
+ if (!hnae3_ae_dev_wol_supported(ae_dev))
+ return -EOPNOTSUPP;
+
+ return ops->set_wol(handle, wol);
+}
+
static const struct ethtool_ops hns3vf_ethtool_ops = {
.supported_coalesce_params = HNS3_ETHTOOL_COALESCE,
.supported_ring_params = HNS3_ETHTOOL_RING,
.set_tunable = hns3_set_tunable,
.reset = hns3_set_reset,
.get_link_ext_state = hns3_get_link_ext_state,
+ .get_wol = hns3_get_wol,
+ .set_wol = hns3_set_wol,
};
void hns3_ethtool_set_ops(struct net_device *netdev)
u8 rsv1[18];
};
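+/* Firmware command payloads used to configure and query wake-on-LAN */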
+struct hclge_wol_cfg_cmd {
+ __le32 wake_on_lan_mode;
+ u8 sopass[SOPASS_MAX];
+ u8 sopass_size;
+ u8 rsv[13];
+};
+
+struct hclge_query_wol_supported_cmd {
+ __le32 supported_wake_mode;
+ u8 rsv[20];
+};
+
struct hclge_hw;
int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num);
enum hclge_comm_cmd_status hclge_cmd_mdio_write(struct hclge_hw *hw,
if (!hw->hw.io_base) {
dev_err(&pdev->dev, "Can't map configuration register space\n");
ret = -ENOMEM;
- goto err_clr_master;
+ goto err_release_regions;
}
ret = hclge_dev_mem_map(hdev);
err_unmap_io_base:
pcim_iounmap(pdev, hdev->hw.hw.io_base);
-err_clr_master:
- pci_clear_master(pdev);
+err_release_regions:
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
pcim_iounmap(pdev, hdev->hw.hw.io_base);
pci_free_irq_vectors(pdev);
- pci_clear_master(pdev);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
}
hclge_write_dev(&hdev->hw, HCLGE_RXD_ADV_LAYOUT_EN_REG, 0);
}
+static struct hclge_wol_info *hclge_get_wol_info(struct hnae3_handle *handle)
+{
+ struct hclge_vport *vport = hclge_get_vport(handle);
+
+ return &vport->back->hw.mac.wol;
+}
+
+static int hclge_get_wol_supported_mode(struct hclge_dev *hdev,
+ u32 *wol_supported)
+{
+ struct hclge_query_wol_supported_cmd *wol_supported_cmd;
+ struct hclge_desc desc;
+ int ret;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_WOL_GET_SUPPORTED_MODE,
+ true);
+ wol_supported_cmd = (struct hclge_query_wol_supported_cmd *)desc.data;
+
+ ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to query wol supported, ret = %d\n", ret);
+ return ret;
+ }
+
+ *wol_supported = le32_to_cpu(wol_supported_cmd->supported_wake_mode);
+
+ return 0;
+}
+
+static int hclge_set_wol_cfg(struct hclge_dev *hdev,
+ struct hclge_wol_info *wol_info)
+{
+ struct hclge_wol_cfg_cmd *wol_cfg_cmd;
+ struct hclge_desc desc;
+ int ret;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_WOL_CFG, false);
+ wol_cfg_cmd = (struct hclge_wol_cfg_cmd *)desc.data;
+ wol_cfg_cmd->wake_on_lan_mode = cpu_to_le32(wol_info->wol_current_mode);
+ wol_cfg_cmd->sopass_size = wol_info->wol_sopass_size;
+ memcpy(wol_cfg_cmd->sopass, wol_info->wol_sopass, SOPASS_MAX);
+
+ ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (ret)
+ dev_err(&hdev->pdev->dev,
+ "failed to set wol config, ret = %d\n", ret);
+
+ return ret;
+}
+
+static int hclge_update_wol(struct hclge_dev *hdev)
+{
+ struct hclge_wol_info *wol_info = &hdev->hw.mac.wol;
+
+ if (!hnae3_ae_dev_wol_supported(hdev->ae_dev))
+ return 0;
+
+ return hclge_set_wol_cfg(hdev, wol_info);
+}
+
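+/* Query the supported wake modes from firmware at init time and apply the
+ * (initially empty) configuration; a no-op when the device does not
+ * advertise WoL support.
+ */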
+static int hclge_init_wol(struct hclge_dev *hdev)
+{
+ struct hclge_wol_info *wol_info = &hdev->hw.mac.wol;
+ int ret;
+
+ if (!hnae3_ae_dev_wol_supported(hdev->ae_dev))
+ return 0;
+
+ memset(wol_info, 0, sizeof(struct hclge_wol_info));
+ ret = hclge_get_wol_supported_mode(hdev,
+ &wol_info->wol_support_mode);
+ if (ret) {
+ wol_info->wol_support_mode = 0;
+ return ret;
+ }
+
+ return hclge_update_wol(hdev);
+}
+
+static void hclge_get_wol(struct hnae3_handle *handle,
+ struct ethtool_wolinfo *wol)
+{
+ struct hclge_wol_info *wol_info = hclge_get_wol_info(handle);
+
+ wol->supported = wol_info->wol_support_mode;
+ wol->wolopts = wol_info->wol_current_mode;
+ if (wol_info->wol_current_mode & WAKE_MAGICSECURE)
+ memcpy(wol->sopass, wol_info->wol_sopass, SOPASS_MAX);
+}
+
+static int hclge_set_wol(struct hnae3_handle *handle,
+ struct ethtool_wolinfo *wol)
+{
+ struct hclge_wol_info *wol_info = hclge_get_wol_info(handle);
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ u32 wol_mode;
+ int ret;
+
+ wol_mode = wol->wolopts;
+ if (wol_mode & ~wol_info->wol_support_mode)
+ return -EINVAL;
+
+ wol_info->wol_current_mode = wol_mode;
+ if (wol_mode & WAKE_MAGICSECURE) {
+ memcpy(wol_info->wol_sopass, wol->sopass, SOPASS_MAX);
+ wol_info->wol_sopass_size = SOPASS_MAX;
+ } else {
+ wol_info->wol_sopass_size = 0;
+ }
+
+ ret = hclge_set_wol_cfg(vport->back, wol_info);
+ if (ret)
+ wol_info->wol_current_mode = 0;
+
+ return ret;
+}
+
static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
{
struct pci_dev *pdev = ae_dev->pdev;
/* Enable MISC vector(vector0) */
hclge_enable_vector(&hdev->misc_vector, true);
+ ret = hclge_init_wol(hdev);
+ if (ret)
+ dev_warn(&pdev->dev,
+			 "failed to init wake on lan, ret = %d\n", ret);
+
hclge_state_init(hdev);
hdev->last_reset_time = jiffies;
hclge_devlink_uninit(hdev);
err_pci_uninit:
pcim_iounmap(pdev, hdev->hw.hw.io_base);
- pci_clear_master(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
out:
hclge_init_rxd_adv_layout(hdev);
+ ret = hclge_update_wol(hdev);
+ if (ret)
+ dev_warn(&pdev->dev,
+ "failed to update wol config, ret = %d\n", ret);
+
dev_info(&pdev->dev, "Reset done, %s driver initialization finished.\n",
HCLGE_DRIVER_NAME);
.get_link_diagnosis_info = hclge_get_link_diagnosis_info,
.clean_vf_config = hclge_clean_vport_config,
.get_dscp_prio = hclge_get_dscp_prio,
+ .get_wol = hclge_get_wol,
+ .set_wol = hclge_set_wol,
};
static struct hnae3_ae_algo ae_algo = {
#define QUERY_SFP_SPEED 0
#define QUERY_ACTIVE_SPEED 1
+struct hclge_wol_info {
+	u32 wol_support_mode; /* supported wake on lan modes */
+ u32 wol_current_mode;
+ u8 wol_sopass[SOPASS_MAX];
+ u8 wol_sopass_size;
+};
+
struct hclge_mac {
u8 mac_id;
u8 phy_addr;
u32 user_fec_mode;
u32 fec_ability;
int link; /* store the link status of mac & phy (if phy exists) */
+ struct hclge_wol_info wol;
struct phy_device *phydev;
struct mii_bus *mdio_bus;
phy_interface_t phy_if;
if (!hw->hw.io_base) {
dev_err(&pdev->dev, "can't map configuration register space\n");
ret = -ENOMEM;
- goto err_clr_master;
+ goto err_release_regions;
}
ret = hclgevf_dev_mem_map(hdev);
err_unmap_io_base:
pci_iounmap(pdev, hdev->hw.hw.io_base);
-err_clr_master:
- pci_clear_master(pdev);
+err_release_regions:
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
devm_iounmap(&pdev->dev, hdev->hw.hw.mem_base);
pci_iounmap(pdev, hdev->hw.hw.io_base);
- pci_clear_master(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
}
To compile this driver as a module, choose M here. The module
will be called igbvf.
-config IXGB
- tristate "Intel(R) PRO/10GbE support"
- depends on PCI
- help
- This driver supports Intel(R) PRO/10GbE family of adapters for
- PCI-X type cards. For PCI-E type cards, use the "ixgbe" driver
- instead. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide that can be located at:
-
- <http://support.intel.com>
-
- More specific information on configuring the driver is in
- <file:Documentation/networking/device_drivers/ethernet/intel/ixgb.rst>.
-
- To compile this driver as a module, choose M here. The module
- will be called ixgb.
-
config IXGBE
tristate "Intel(R) 10GbE PCI Express adapters support"
depends on PCI
obj-$(CONFIG_IXGBE) += ixgbe/
obj-$(CONFIG_IXGBEVF) += ixgbevf/
obj-$(CONFIG_I40E) += i40e/
-obj-$(CONFIG_IXGB) += ixgb/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
#include <linux/smp.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
-#include <linux/aer.h>
#include <linux/prefetch.h>
#include <linux/suspend.h>
#include <linux/module.h>
#include <linux/interrupt.h>
-#include <linux/aer.h>
#include "fm10k.h"
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/ioport.h>
#include <linux/iommu.h>
return -EOPNOTSUPP;
}
+ if ((changed_flags & I40E_FLAG_LEGACY_RX) &&
+ I40E_2K_TOO_SMALL_WITH_PADDING) {
+ dev_warn(&pf->pdev->dev,
+ "2k Rx buffer is too small to fit standard MTU and skb_shared_info\n");
+ return -EOPNOTSUPP;
+ }
+
if ((changed_flags & new_flags &
I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED) &&
(new_flags & I40E_FLAG_MFP_ENABLED))
}
/**
- * i40e_max_xdp_frame_size - returns the maximum allowed frame size for XDP
+ * i40e_calculate_vsi_rx_buf_len - Calculates buffer length
+ *
+ * @vsi: VSI to calculate rx_buf_len from
+ */
+static u16 i40e_calculate_vsi_rx_buf_len(struct i40e_vsi *vsi)
+{
+ if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX))
+ return SKB_WITH_OVERHEAD(I40E_RXBUFFER_2048);
+
+ return PAGE_SIZE < 8192 ? I40E_RXBUFFER_3072 : I40E_RXBUFFER_2048;
+}
+
+/**
+ * i40e_max_vsi_frame_size - returns the maximum allowed frame size for VSI
* @vsi: the vsi
+ * @xdp_prog: XDP program
**/
-static int i40e_max_xdp_frame_size(struct i40e_vsi *vsi)
+static int i40e_max_vsi_frame_size(struct i40e_vsi *vsi,
+ struct bpf_prog *xdp_prog)
{
- if (PAGE_SIZE >= 8192 || (vsi->back->flags & I40E_FLAG_LEGACY_RX))
- return I40E_RXBUFFER_2048;
+ u16 rx_buf_len = i40e_calculate_vsi_rx_buf_len(vsi);
+ u16 chain_len;
+
+ if (xdp_prog && !xdp_prog->aux->xdp_has_frags)
+ chain_len = 1;
else
- return I40E_RXBUFFER_3072;
+ chain_len = I40E_MAX_CHAINED_RX_BUFFERS;
+
+ return min_t(u16, rx_buf_len * chain_len, I40E_MAX_RXBUFFER);
}
/**
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
+ int frame_size;
- if (i40e_enabled_xdp_vsi(vsi)) {
- int frame_size = new_mtu + I40E_PACKET_HDR_PAD;
-
- if (frame_size > i40e_max_xdp_frame_size(vsi))
- return -EINVAL;
+ frame_size = i40e_max_vsi_frame_size(vsi, vsi->xdp_prog);
+ if (new_mtu > frame_size - I40E_PACKET_HDR_PAD) {
+ netdev_err(netdev, "Error changing mtu to %d, Max is %d\n",
+ new_mtu, frame_size - I40E_PACKET_HDR_PAD);
+ return -EINVAL;
}
netdev_dbg(netdev, "changing MTU from %d to %d\n",
}
}
+ xdp_init_buff(&ring->xdp, i40e_rx_pg_size(ring) / 2, &ring->xdp_rxq);
+
rx_ctx.dbuff = DIV_ROUND_UP(ring->rx_buf_len,
BIT_ULL(I40E_RXQ_CTX_DBUFF_SHIFT));
}
/* configure Rx buffer alignment */
- if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX))
+ if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX)) {
+ if (I40E_2K_TOO_SMALL_WITH_PADDING) {
+ dev_info(&vsi->back->pdev->dev,
+ "2k Rx buffer is too small to fit standard MTU and skb_shared_info\n");
+ return -EOPNOTSUPP;
+ }
clear_ring_build_skb_enabled(ring);
- else
+ } else {
set_ring_build_skb_enabled(ring);
+ }
ring->rx_offset = i40e_rx_offset(ring);
}
/**
- * i40e_calculate_vsi_rx_buf_len - Calculates buffer length
- *
- * @vsi: VSI to calculate rx_buf_len from
- */
-static u16 i40e_calculate_vsi_rx_buf_len(struct i40e_vsi *vsi)
-{
- if (!vsi->netdev || (vsi->back->flags & I40E_FLAG_LEGACY_RX))
- return I40E_RXBUFFER_2048;
-
-#if (PAGE_SIZE < 8192)
- if (!I40E_2K_TOO_SMALL_WITH_PADDING && vsi->netdev->mtu <= ETH_DATA_LEN)
- return I40E_RXBUFFER_1536 - NET_IP_ALIGN;
-#endif
-
- return PAGE_SIZE < 8192 ? I40E_RXBUFFER_3072 : I40E_RXBUFFER_2048;
-}
-
-/**
* i40e_vsi_configure_rx - Configure the VSI for Rx
* @vsi: the VSI being configured
*
int err = 0;
u16 i;
- vsi->max_frame = I40E_MAX_RXBUFFER;
+ vsi->max_frame = i40e_max_vsi_frame_size(vsi, vsi->xdp_prog);
vsi->rx_buf_len = i40e_calculate_vsi_rx_buf_len(vsi);
#if (PAGE_SIZE < 8192)
if (vsi->netdev && !I40E_2K_TOO_SMALL_WITH_PADDING &&
- vsi->netdev->mtu <= ETH_DATA_LEN)
- vsi->max_frame = I40E_RXBUFFER_1536 - NET_IP_ALIGN;
+ vsi->netdev->mtu <= ETH_DATA_LEN) {
+ vsi->rx_buf_len = I40E_RXBUFFER_1536 - NET_IP_ALIGN;
+ vsi->max_frame = vsi->rx_buf_len;
+ }
#endif
/* set up individual rings */
static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
- int frame_size = vsi->netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+ int frame_size = i40e_max_vsi_frame_size(vsi, prog);
struct i40e_pf *pf = vsi->back;
struct bpf_prog *old_prog;
bool need_reset;
int i;
/* Don't allow frames that span over multiple buffers */
- if (frame_size > i40e_calculate_vsi_rx_buf_len(vsi)) {
- NL_SET_ERR_MSG_MOD(extack, "MTU too large to enable XDP");
+ if (vsi->netdev->mtu > frame_size - I40E_PACKET_HDR_PAD) {
+ NL_SET_ERR_MSG_MOD(extack, "MTU too large for linear frames and XDP prog does not support frags");
return -EINVAL;
}
netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
- NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ NETDEV_XDP_ACT_XSK_ZEROCOPY |
+ NETDEV_XDP_ACT_RX_SG;
} else {
/* Relate the VSI_VMDQ name to the VSI_MAIN name. Note that we
* are still limited by IFNAMSIZ, but we're adding 'v%d\0' to
TP_PROTO(struct i40e_ring *ring,
union i40e_16byte_rx_desc *desc,
- struct sk_buff *skb),
+ struct xdp_buff *xdp),
- TP_ARGS(ring, desc, skb),
+ TP_ARGS(ring, desc, xdp),
TP_STRUCT__entry(
__field(void*, ring)
__field(void*, desc)
- __field(void*, skb)
+ __field(void*, xdp)
__string(devname, ring->netdev->name)
),
TP_fast_assign(
__entry->ring = ring;
__entry->desc = desc;
- __entry->skb = skb;
+ __entry->xdp = xdp;
__assign_str(devname, ring->netdev->name);
),
TP_printk(
- "netdev: %s ring: %p desc: %p skb %p",
+ "netdev: %s ring: %p desc: %p xdp %p",
__get_str(devname), __entry->ring,
- __entry->desc, __entry->skb)
+ __entry->desc, __entry->xdp)
);
DEFINE_EVENT(
i40e_rx_template, i40e_clean_rx_irq,
TP_PROTO(struct i40e_ring *ring,
union i40e_16byte_rx_desc *desc,
- struct sk_buff *skb),
+ struct xdp_buff *xdp),
- TP_ARGS(ring, desc, skb));
+ TP_ARGS(ring, desc, xdp));
DEFINE_EVENT(
i40e_rx_template, i40e_clean_rx_irq_rx,
TP_PROTO(struct i40e_ring *ring,
union i40e_16byte_rx_desc *desc,
- struct sk_buff *skb),
+ struct xdp_buff *xdp),
- TP_ARGS(ring, desc, skb));
+ TP_ARGS(ring, desc, xdp));
DECLARE_EVENT_CLASS(
i40e_xmit_template,
if (!rx_ring->rx_bi)
return;
- dev_kfree_skb(rx_ring->skb);
- rx_ring->skb = NULL;
-
if (rx_ring->xsk_pool) {
i40e_xsk_clean_rx_ring(rx_ring);
goto skip_free;
rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
+ rx_ring->next_to_process = 0;
rx_ring->next_to_use = 0;
}
rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
+ rx_ring->next_to_process = 0;
rx_ring->next_to_use = 0;
/* XDP RX-queue info only needed for RX rings exposed to XDP */
writel(val, rx_ring->tail);
}
+#if (PAGE_SIZE >= 8192)
static unsigned int i40e_rx_frame_truesize(struct i40e_ring *rx_ring,
unsigned int size)
{
unsigned int truesize;
-#if (PAGE_SIZE < 8192)
- truesize = i40e_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
-#else
truesize = rx_ring->rx_offset ?
SKB_DATA_ALIGN(size + rx_ring->rx_offset) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
SKB_DATA_ALIGN(size);
-#endif
return truesize;
}
+#endif
/**
* i40e_alloc_mapped_page - recycle or make a new page
* i40e_can_reuse_rx_page - Determine if page can be reused for another Rx
* @rx_buffer: buffer containing the page
* @rx_stats: rx stats structure for the rx ring
- * @rx_buffer_pgcnt: buffer page refcount pre xdp_do_redirect() call
*
* If page is reusable, we have a green light for calling i40e_reuse_rx_page,
* which will assign the current buffer to the buffer that next_to_alloc is
* or busy if it could not be reused.
*/
static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
- struct i40e_rx_queue_stats *rx_stats,
- int rx_buffer_pgcnt)
+ struct i40e_rx_queue_stats *rx_stats)
{
unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
struct page *page = rx_buffer->page;
#if (PAGE_SIZE < 8192)
/* if we are only owner of page we can reuse it */
- if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1)) {
+ if (unlikely((rx_buffer->page_count - pagecnt_bias) > 1)) {
rx_stats->page_busy_count++;
return false;
}
}
/**
- * i40e_add_rx_frag - Add contents of Rx buffer to sk_buff
- * @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: buffer containing page to add
- * @skb: sk_buff to place the data into
- * @size: packet length from rx_desc
- *
- * This function will add the data contained in rx_buffer->page to the skb.
- * It will just attach the page as a frag to the skb.
- *
- * The function will then update the page offset.
+ * i40e_rx_buffer_flip - adjusted rx_buffer to point to an unused region
+ * @rx_buffer: Rx buffer to adjust
+ * @truesize: Size of adjustment
**/
-static void i40e_add_rx_frag(struct i40e_ring *rx_ring,
- struct i40e_rx_buffer *rx_buffer,
- struct sk_buff *skb,
- unsigned int size)
+static void i40e_rx_buffer_flip(struct i40e_rx_buffer *rx_buffer,
+ unsigned int truesize)
{
#if (PAGE_SIZE < 8192)
- unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset);
-#endif
-
- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
- rx_buffer->page_offset, size, truesize);
-
- /* page is being used so we must update the page offset */
-#if (PAGE_SIZE < 8192)
rx_buffer->page_offset ^= truesize;
#else
rx_buffer->page_offset += truesize;
* i40e_get_rx_buffer - Fetch Rx buffer and synchronize data for use
* @rx_ring: rx descriptor ring to transact packets on
* @size: size of buffer to add to skb
- * @rx_buffer_pgcnt: buffer page refcount
*
* This function will pull an Rx buffer from the ring and synchronize it
* for use by the CPU.
*/
static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,
- const unsigned int size,
- int *rx_buffer_pgcnt)
+ const unsigned int size)
{
struct i40e_rx_buffer *rx_buffer;
- rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
- *rx_buffer_pgcnt =
+ rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_process);
+ rx_buffer->page_count =
#if (PAGE_SIZE < 8192)
page_count(rx_buffer->page);
#else
}
/**
- * i40e_construct_skb - Allocate skb and populate it
+ * i40e_put_rx_buffer - Clean up used buffer and either recycle or free
* @rx_ring: rx descriptor ring to transact packets on
* @rx_buffer: rx buffer to pull data from
+ *
+ * This function will clean up the contents of the rx_buffer. It will
+ * either recycle the buffer or unmap it and free the associated resources.
+ */
+static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *rx_buffer)
+{
+ if (i40e_can_reuse_rx_page(rx_buffer, &rx_ring->rx_stats)) {
+ /* hand second half of page back to the ring */
+ i40e_reuse_rx_page(rx_ring, rx_buffer);
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
+ i40e_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE, I40E_RX_DMA_ATTR);
+ __page_frag_cache_drain(rx_buffer->page,
+ rx_buffer->pagecnt_bias);
+ /* clear contents of buffer_info */
+ rx_buffer->page = NULL;
+ }
+}
+
+/**
+ * i40e_process_rx_buffs - Processing of buffers post XDP prog or on error
+ * @rx_ring: Rx descriptor ring to transact packets on
+ * @xdp_res: Result of the XDP program
+ * @xdp: xdp_buff pointing to the data
+ **/
+static void i40e_process_rx_buffs(struct i40e_ring *rx_ring, int xdp_res,
+ struct xdp_buff *xdp)
+{
+ u32 next = rx_ring->next_to_clean;
+ struct i40e_rx_buffer *rx_buffer;
+
+ xdp->flags = 0;
+
+ while (1) {
+ rx_buffer = i40e_rx_bi(rx_ring, next);
+ if (++next == rx_ring->count)
+ next = 0;
+
+ if (!rx_buffer->page)
+ continue;
+
+ if (xdp_res == I40E_XDP_CONSUMED)
+ rx_buffer->pagecnt_bias++;
+ else
+ i40e_rx_buffer_flip(rx_buffer, xdp->frame_sz);
+
+ /* EOP buffer will be put in i40e_clean_rx_irq() */
+ if (next == rx_ring->next_to_process)
+ return;
+
+ i40e_put_rx_buffer(rx_ring, rx_buffer);
+ }
+}
+
+/**
+ * i40e_construct_skb - Allocate skb and populate it
+ * @rx_ring: rx descriptor ring to transact packets on
* @xdp: xdp_buff pointing to the data
+ * @nr_frags: number of buffers for the packet
*
* This function allocates an skb. It then populates it with the page
* data from the current receive descriptor, taking care to set up the
* skb correctly.
*/
static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
- struct i40e_rx_buffer *rx_buffer,
- struct xdp_buff *xdp)
+ struct xdp_buff *xdp,
+ u32 nr_frags)
{
unsigned int size = xdp->data_end - xdp->data;
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(size);
-#endif
+ struct i40e_rx_buffer *rx_buffer;
unsigned int headlen;
struct sk_buff *skb;
memcpy(__skb_put(skb, headlen), xdp->data,
ALIGN(headlen, sizeof(long)));
+ rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
/* update all of the pointers */
size -= headlen;
if (size) {
+ if (unlikely(nr_frags >= MAX_SKB_FRAGS)) {
+ dev_kfree_skb(skb);
+ return NULL;
+ }
skb_add_rx_frag(skb, 0, rx_buffer->page,
rx_buffer->page_offset + headlen,
- size, truesize);
-
+ size, xdp->frame_sz);
/* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
+ i40e_rx_buffer_flip(rx_buffer, xdp->frame_sz);
} else {
/* buffer is unused, reset bias back to rx_buffer */
rx_buffer->pagecnt_bias++;
}
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ struct skb_shared_info *sinfo, *skinfo = skb_shinfo(skb);
+
+ sinfo = xdp_get_shared_info_from_buff(xdp);
+ memcpy(&skinfo->frags[skinfo->nr_frags], &sinfo->frags[0],
+ sizeof(skb_frag_t) * nr_frags);
+
+ xdp_update_skb_shared_info(skb, skinfo->nr_frags + nr_frags,
+ sinfo->xdp_frags_size,
+ nr_frags * xdp->frame_sz,
+ xdp_buff_is_frag_pfmemalloc(xdp));
+
+ /* First buffer has already been processed, so bump ntc */
+ if (++rx_ring->next_to_clean == rx_ring->count)
+ rx_ring->next_to_clean = 0;
+
+ i40e_process_rx_buffs(rx_ring, I40E_XDP_PASS, xdp);
+ }
+
return skb;
}
/**
* i40e_build_skb - Build skb around an existing buffer
* @rx_ring: Rx descriptor ring to transact packets on
- * @rx_buffer: Rx buffer to pull data from
* @xdp: xdp_buff pointing to the data
+ * @nr_frags: number of buffers for the packet
*
* This function builds an skb around an existing Rx buffer, taking care
* to set up the skb correctly and avoid any memcpy overhead.
*/
static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
- struct i40e_rx_buffer *rx_buffer,
- struct xdp_buff *xdp)
+ struct xdp_buff *xdp,
+ u32 nr_frags)
{
unsigned int metasize = xdp->data - xdp->data_meta;
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
- SKB_DATA_ALIGN(xdp->data_end -
- xdp->data_hard_start);
-#endif
struct sk_buff *skb;
/* Prefetch first cache line of first page. If xdp->data_meta
net_prefetch(xdp->data_meta);
/* build an skb around the page buffer */
- skb = napi_build_skb(xdp->data_hard_start, truesize);
+ skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz);
if (unlikely(!skb))
return NULL;
if (metasize)
skb_metadata_set(skb, metasize);
- /* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ struct skb_shared_info *sinfo;
- return skb;
-}
+ sinfo = xdp_get_shared_info_from_buff(xdp);
+ xdp_update_skb_shared_info(skb, nr_frags,
+ sinfo->xdp_frags_size,
+ nr_frags * xdp->frame_sz,
+ xdp_buff_is_frag_pfmemalloc(xdp));
-/**
- * i40e_put_rx_buffer - Clean up used buffer and either recycle or free
- * @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: rx buffer to pull data from
- * @rx_buffer_pgcnt: rx buffer page refcount pre xdp_do_redirect() call
- *
- * This function will clean up the contents of the rx_buffer. It will
- * either recycle the buffer or unmap it and free the associated resources.
- */
-static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
- struct i40e_rx_buffer *rx_buffer,
- int rx_buffer_pgcnt)
-{
- if (i40e_can_reuse_rx_page(rx_buffer, &rx_ring->rx_stats, rx_buffer_pgcnt)) {
- /* hand second half of page back to the ring */
- i40e_reuse_rx_page(rx_ring, rx_buffer);
+ i40e_process_rx_buffs(rx_ring, I40E_XDP_PASS, xdp);
} else {
- /* we are not reusing the buffer so unmap it */
- dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
- i40e_rx_pg_size(rx_ring),
- DMA_FROM_DEVICE, I40E_RX_DMA_ATTR);
- __page_frag_cache_drain(rx_buffer->page,
- rx_buffer->pagecnt_bias);
- /* clear contents of buffer_info */
- rx_buffer->page = NULL;
+ struct i40e_rx_buffer *rx_buffer;
+
+ rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
+ /* buffer is used by skb, update page_offset */
+ i40e_rx_buffer_flip(rx_buffer, xdp->frame_sz);
}
+
+ return skb;
}
/**
}
/**
- * i40e_rx_buffer_flip - adjusted rx_buffer to point to an unused region
- * @rx_ring: Rx ring
- * @rx_buffer: Rx buffer to adjust
- * @size: Size of adjustment
- **/
-static void i40e_rx_buffer_flip(struct i40e_ring *rx_ring,
- struct i40e_rx_buffer *rx_buffer,
- unsigned int size)
-{
- unsigned int truesize = i40e_rx_frame_truesize(rx_ring, size);
-
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
-}
-
-/**
* i40e_xdp_ring_update_tail - Updates the XDP Tx ring tail register
* @xdp_ring: XDP Tx ring
*
}
/**
- * i40e_inc_ntc: Advance the next_to_clean index
+ * i40e_inc_ntp: Advance the next_to_process index
* @rx_ring: Rx ring
**/
-static void i40e_inc_ntc(struct i40e_ring *rx_ring)
+static void i40e_inc_ntp(struct i40e_ring *rx_ring)
+{
+ u32 ntp = rx_ring->next_to_process + 1;
+
+ ntp = (ntp < rx_ring->count) ? ntp : 0;
+ rx_ring->next_to_process = ntp;
+ prefetch(I40E_RX_DESC(rx_ring, ntp));
+}
+
+/**
+ * i40e_add_xdp_frag: Add a frag to xdp_buff
+ * @xdp: xdp_buff pointing to the data
+ * @nr_frags: return number of buffers for the packet
+ * @rx_buffer: rx_buffer holding data of the current frag
+ * @size: size of data of current frag
+ */
+static int i40e_add_xdp_frag(struct xdp_buff *xdp, u32 *nr_frags,
+ struct i40e_rx_buffer *rx_buffer, u32 size)
{
- u32 ntc = rx_ring->next_to_clean + 1;
+ struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+
+ if (!xdp_buff_has_frags(xdp)) {
+ sinfo->nr_frags = 0;
+ sinfo->xdp_frags_size = 0;
+ xdp_buff_set_frags_flag(xdp);
+ } else if (unlikely(sinfo->nr_frags >= MAX_SKB_FRAGS)) {
+ /* Overflowing packet: All frags need to be dropped */
+ return -ENOMEM;
+ }
+
+ __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buffer->page,
+ rx_buffer->page_offset, size);
+
+ sinfo->xdp_frags_size += size;
- ntc = (ntc < rx_ring->count) ? ntc : 0;
- rx_ring->next_to_clean = ntc;
- prefetch(I40E_RX_DESC(rx_ring, ntc));
+ if (page_is_pfmemalloc(rx_buffer->page))
+ xdp_buff_set_frag_pfmemalloc(xdp);
+ *nr_frags = sinfo->nr_frags;
+
+ return 0;
+}
+
+/**
+ * i40e_consume_xdp_buff - Consume all the buffers of the packet and update ntc
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @xdp: xdp_buff pointing to the data
+ * @rx_buffer: rx_buffer of eop desc
+ */
+static void i40e_consume_xdp_buff(struct i40e_ring *rx_ring,
+ struct xdp_buff *xdp,
+ struct i40e_rx_buffer *rx_buffer)
+{
+ i40e_process_rx_buffs(rx_ring, I40E_XDP_CONSUMED, xdp);
+ i40e_put_rx_buffer(rx_ring, rx_buffer);
+ rx_ring->next_to_clean = rx_ring->next_to_process;
+ xdp->data = NULL;
}
/**
static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
unsigned int *rx_cleaned)
{
- unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+ u16 clean_threshold = rx_ring->count / 2;
unsigned int offset = rx_ring->rx_offset;
- struct sk_buff *skb = rx_ring->skb;
+ struct xdp_buff *xdp = &rx_ring->xdp;
unsigned int xdp_xmit = 0;
struct bpf_prog *xdp_prog;
bool failure = false;
- struct xdp_buff xdp;
int xdp_res = 0;
-#if (PAGE_SIZE < 8192)
- frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
-#endif
- xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
-
xdp_prog = READ_ONCE(rx_ring->xdp_prog);
while (likely(total_rx_packets < (unsigned int)budget)) {
+ u16 ntp = rx_ring->next_to_process;
struct i40e_rx_buffer *rx_buffer;
union i40e_rx_desc *rx_desc;
- int rx_buffer_pgcnt;
+ struct sk_buff *skb;
unsigned int size;
+ u32 nfrags = 0;
+ bool neop;
u64 qword;
/* return some buffers to hardware, one at a time is too slow */
- if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
+ if (cleaned_count >= clean_threshold) {
failure = failure ||
i40e_alloc_rx_buffers(rx_ring, cleaned_count);
cleaned_count = 0;
}
- rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
+ rx_desc = I40E_RX_DESC(rx_ring, ntp);
/* status_error_len will always be zero for unused descriptors
* because it's cleared in cleanup, and overlaps with hdr_addr
i40e_clean_programming_status(rx_ring,
rx_desc->raw.qword[0],
qword);
- rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
- i40e_inc_ntc(rx_ring);
+ rx_buffer = i40e_rx_bi(rx_ring, ntp);
+ i40e_inc_ntp(rx_ring);
i40e_reuse_rx_page(rx_ring, rx_buffer);
cleaned_count++;
continue;
if (!size)
break;
- i40e_trace(clean_rx_irq, rx_ring, rx_desc, skb);
- rx_buffer = i40e_get_rx_buffer(rx_ring, size, &rx_buffer_pgcnt);
-
+ i40e_trace(clean_rx_irq, rx_ring, rx_desc, xdp);
/* retrieve a buffer from the ring */
- if (!skb) {
+ rx_buffer = i40e_get_rx_buffer(rx_ring, size);
+
+ neop = i40e_is_non_eop(rx_ring, rx_desc);
+ i40e_inc_ntp(rx_ring);
+
+ if (!xdp->data) {
unsigned char *hard_start;
hard_start = page_address(rx_buffer->page) +
rx_buffer->page_offset - offset;
- xdp_prepare_buff(&xdp, hard_start, offset, size, true);
- xdp_buff_clear_frags_flag(&xdp);
+ xdp_prepare_buff(xdp, hard_start, offset, size, true);
#if (PAGE_SIZE > 4096)
/* At larger PAGE_SIZE, frame_sz depend on len size */
- xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
+ xdp->frame_sz = i40e_rx_frame_truesize(rx_ring, size);
#endif
- xdp_res = i40e_run_xdp(rx_ring, &xdp, xdp_prog);
+ } else if (i40e_add_xdp_frag(xdp, &nfrags, rx_buffer, size) &&
+ !neop) {
+ /* Overflowing packet: Drop all frags on EOP */
+ i40e_consume_xdp_buff(rx_ring, xdp, rx_buffer);
+ break;
}
+ if (neop)
+ continue;
+
+ xdp_res = i40e_run_xdp(rx_ring, xdp, xdp_prog);
+
if (xdp_res) {
- if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
- xdp_xmit |= xdp_res;
- i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
+ xdp_xmit |= xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR);
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+ i40e_process_rx_buffs(rx_ring, xdp_res, xdp);
+ size = xdp_get_buff_len(xdp);
+ } else if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
+ i40e_rx_buffer_flip(rx_buffer, xdp->frame_sz);
} else {
rx_buffer->pagecnt_bias++;
}
total_rx_bytes += size;
- total_rx_packets++;
- } else if (skb) {
- i40e_add_rx_frag(rx_ring, rx_buffer, skb, size);
- } else if (ring_uses_build_skb(rx_ring)) {
- skb = i40e_build_skb(rx_ring, rx_buffer, &xdp);
} else {
- skb = i40e_construct_skb(rx_ring, rx_buffer, &xdp);
- }
+ if (ring_uses_build_skb(rx_ring))
+ skb = i40e_build_skb(rx_ring, xdp, nfrags);
+ else
+ skb = i40e_construct_skb(rx_ring, xdp, nfrags);
+
+ /* drop if we failed to retrieve a buffer */
+ if (!skb) {
+ rx_ring->rx_stats.alloc_buff_failed++;
+ i40e_consume_xdp_buff(rx_ring, xdp, rx_buffer);
+ break;
+ }
- /* exit if we failed to retrieve a buffer */
- if (!xdp_res && !skb) {
- rx_ring->rx_stats.alloc_buff_failed++;
- rx_buffer->pagecnt_bias++;
- break;
- }
+ if (i40e_cleanup_headers(rx_ring, skb, rx_desc))
+ goto process_next;
- i40e_put_rx_buffer(rx_ring, rx_buffer, rx_buffer_pgcnt);
- cleaned_count++;
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
- i40e_inc_ntc(rx_ring);
- if (i40e_is_non_eop(rx_ring, rx_desc))
- continue;
+ /* populate checksum, VLAN, and protocol */
+ i40e_process_skb_fields(rx_ring, rx_desc, skb);
- if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
- skb = NULL;
- continue;
+ i40e_trace(clean_rx_irq_rx, rx_ring, rx_desc, xdp);
+ napi_gro_receive(&rx_ring->q_vector->napi, skb);
}
- /* probably a little skewed due to removing CRC */
- total_rx_bytes += skb->len;
-
- /* populate checksum, VLAN, and protocol */
- i40e_process_skb_fields(rx_ring, rx_desc, skb);
-
- i40e_trace(clean_rx_irq_rx, rx_ring, rx_desc, skb);
- napi_gro_receive(&rx_ring->q_vector->napi, skb);
- skb = NULL;
-
/* update budget accounting */
total_rx_packets++;
+process_next:
+ cleaned_count += nfrags + 1;
+ i40e_put_rx_buffer(rx_ring, rx_buffer);
+ rx_ring->next_to_clean = rx_ring->next_to_process;
+
+ xdp->data = NULL;
}
i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
- rx_ring->skb = skb;
i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
struct page *page;
__u32 page_offset;
__u16 pagecnt_bias;
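+	/* page refcount captured when the buffer is fetched (pre-XDP) */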
+ __u32 page_count;
};
struct i40e_queue_stats {
u8 dcb_tc; /* Traffic class of ring */
u8 __iomem *tail;
+	/* Storing the xdp_buff on the ring saves the state of a partially built
+	 * packet when i40e_clean_rx_ring_irq() must return before it sees EOP,
+	 * so that packet building can resume for this ring in the next call to
+	 * i40e_clean_rx_ring_irq().
+	 */
+ struct xdp_buff xdp;
+
+ /* Next descriptor to be processed; next_to_clean is updated only on
+ * processing EOP descriptor
+ */
+ u16 next_to_process;
/* high bit set means dynamic, use accessor routines to read/write.
* hardware only supports 2us resolution for the ITR registers.
* these values always store the USER setting, and must be converted
struct rcu_head rcu; /* to avoid race on free */
u16 next_to_alloc;
- struct sk_buff *skb; /* When i40e_clean_rx_ring_irq() must
- * return before it sees the EOP for
- * the current packet, we save that skb
- * here and resume receiving this
- * packet the next time
- * i40e_clean_rx_ring_irq() is called
- * for this ring.
- */
struct i40e_channel *ch;
u16 rx_offset;
}
/**
+ * i40e_vc_ether_addr_type - get type of virtchnl_ether_addr
+ * @vc_ether_addr: used to extract the type
+ **/
+static u8
+i40e_vc_ether_addr_type(struct virtchnl_ether_addr *vc_ether_addr)
+{
+ return vc_ether_addr->type & VIRTCHNL_ETHER_ADDR_TYPE_MASK;
+}
+
+/**
+ * i40e_is_vc_addr_legacy
+ * @vc_ether_addr: VIRTCHNL structure that contains MAC and type
+ *
+ * check if the MAC address is from an older VF
+ **/
+static bool
+i40e_is_vc_addr_legacy(struct virtchnl_ether_addr *vc_ether_addr)
+{
+ return i40e_vc_ether_addr_type(vc_ether_addr) ==
+ VIRTCHNL_ETHER_ADDR_LEGACY;
+}
+
+/**
+ * i40e_is_vc_addr_primary
+ * @vc_ether_addr: VIRTCHNL structure that contains MAC and type
+ *
+ * check if the MAC address is the VF's primary MAC
+ * This function should only be called when the MAC address in
+ * virtchnl_ether_addr is a valid unicast MAC
+ **/
+static bool
+i40e_is_vc_addr_primary(struct virtchnl_ether_addr *vc_ether_addr)
+{
+ return i40e_vc_ether_addr_type(vc_ether_addr) ==
+ VIRTCHNL_ETHER_ADDR_PRIMARY;
+}
+
+/**
+ * i40e_update_vf_mac_addr
+ * @vf: VF to update
+ * @vc_ether_addr: structure from VIRTCHNL with MAC to add
+ *
+ * update the VF's cached hardware MAC if allowed
+ **/
+static void
+i40e_update_vf_mac_addr(struct i40e_vf *vf,
+ struct virtchnl_ether_addr *vc_ether_addr)
+{
+ u8 *mac_addr = vc_ether_addr->addr;
+
+ if (!is_valid_ether_addr(mac_addr))
+ return;
+
+	/* If the request to add a MAC filter is a primary request, update the
+	 * VF's default MAC address with the requested one. If it is a legacy
+	 * request, only update the default MAC when it is currently empty.
+	 */
+ if (i40e_is_vc_addr_primary(vc_ether_addr)) {
+ ether_addr_copy(vf->default_lan_addr.addr, mac_addr);
+ } else if (i40e_is_vc_addr_legacy(vc_ether_addr)) {
+ if (is_zero_ether_addr(vf->default_lan_addr.addr))
+ ether_addr_copy(vf->default_lan_addr.addr, mac_addr);
+ }
+}
+
+/**
* i40e_vc_add_mac_addr_msg
* @vf: pointer to the VF info
* @msg: pointer to the msg buffer
spin_unlock_bh(&vsi->mac_filter_hash_lock);
goto error_param;
}
- if (is_valid_ether_addr(al->list[i].addr) &&
- is_zero_ether_addr(vf->default_lan_addr.addr))
- ether_addr_copy(vf->default_lan_addr.addr,
- al->list[i].addr);
}
+ i40e_update_vf_mac_addr(vf, &al->list[i]);
}
spin_unlock_bh(&vsi->mac_filter_hash_lock);
spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ if (was_unimac_deleted)
+ eth_zero_addr(vf->default_lan_addr.addr);
+
/* program the updated filter list */
ret = i40e_sync_vsi_filters(vsi);
if (ret)
#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/vmalloc.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/workqueue.h>
#include <linux/wait.h>
-#include <linux/aer.h>
#include <linux/interrupt.h>
#include <linux/ethtool.h>
#include <linux/timer.h>
.supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_OVERWRITE_MASK,
.reload_actions = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE),
- /* The ice driver currently does not support driver reinit */
.reload_down = ice_devlink_reload_down,
.reload_up = ice_devlink_reload_up,
.port_split = ice_devlink_port_split,
wake_up(&pf->aq_wait_queue);
}
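+/* Pending-ARQ message watermark handed to the VF mailbox overflow detection */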
+#define ICE_MBX_OVERFLOW_WATERMARK 64
+
/**
* __ice_clean_ctrlq - helper function to clean controlq rings
* @pf: ptr to struct ice_pf
return 0;
do {
+ struct ice_mbx_data data = {};
u16 opcode;
int ret;
ice_vf_lan_overflow_event(pf, &event);
break;
case ice_mbx_opc_send_msg_to_pf:
- if (!ice_is_malicious_vf(pf, &event, i, pending))
- ice_vc_process_vf_msg(pf, &event);
+ data.num_msg_proc = i;
+ data.num_pending_arq = pending;
+ data.max_num_msgs_mbx = hw->mailboxq.num_rq_entries;
+ data.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
+
+ ice_vc_process_vf_msg(pf, &event, &data);
break;
case ice_aqc_opc_fw_logging:
ice_output_fw_log(hw, &event.desc, event.msg_buf);
mutex_init(&pf->vfs.table_lock);
hash_init(pf->vfs.table);
+ ice_mbx_init_snapshot(&pf->hw);
return 0;
}
}
/* clear malicious info since the VF is getting released */
- if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->vfs.malvfs,
- ICE_MAX_SRIOV_VFS, vf->vf_id))
- dev_dbg(dev, "failed to clear malicious VF state for VF %u\n",
- vf->vf_id);
+ list_del(&vf->mbx_info.list_entry);
mutex_unlock(&vf->cfg_lock);
}
if (!num_vfs) {
if (!pci_vfs_assigned(pdev)) {
ice_free_vfs(pf);
- ice_mbx_deinit_snapshot(&pf->hw);
if (pf->lag)
ice_enable_lag(pf->lag);
return 0;
return -EBUSY;
}
- err = ice_mbx_init_snapshot(&pf->hw, num_vfs);
- if (err)
- return err;
-
err = ice_pci_sriov_ena(pf, num_vfs);
- if (err) {
- ice_mbx_deinit_snapshot(&pf->hw);
+ if (err)
return err;
- }
if (pf->lag)
ice_disable_lag(pf->lag);
}
}
}
-
-/**
- * ice_is_malicious_vf - helper function to detect a malicious VF
- * @pf: ptr to struct ice_pf
- * @event: pointer to the AQ event
- * @num_msg_proc: the number of messages processed so far
- * @num_msg_pending: the number of messages peinding in admin queue
- */
-bool
-ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
- u16 num_msg_proc, u16 num_msg_pending)
-{
- s16 vf_id = le16_to_cpu(event->desc.retval);
- struct device *dev = ice_pf_to_dev(pf);
- struct ice_mbx_data mbxdata;
- bool malvf = false;
- struct ice_vf *vf;
- int status;
-
- vf = ice_get_vf_by_id(pf, vf_id);
- if (!vf)
- return false;
-
- if (test_bit(ICE_VF_STATE_DIS, vf->vf_states))
- goto out_put_vf;
-
- mbxdata.num_msg_proc = num_msg_proc;
- mbxdata.num_pending_arq = num_msg_pending;
- mbxdata.max_num_msgs_mbx = pf->hw.mailboxq.num_rq_entries;
-#define ICE_MBX_OVERFLOW_WATERMARK 64
- mbxdata.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
-
- /* check to see if we have a malicious VF */
- status = ice_mbx_vf_state_handler(&pf->hw, &mbxdata, vf_id, &malvf);
- if (status)
- goto out_put_vf;
-
- if (malvf) {
- bool report_vf = false;
-
- /* if the VF is malicious and we haven't let the user
- * know about it, then let them know now
- */
- status = ice_mbx_report_malvf(&pf->hw, pf->vfs.malvfs,
- ICE_MAX_SRIOV_VFS, vf_id,
- &report_vf);
- if (status)
- dev_dbg(dev, "Error reporting malicious VF\n");
-
- if (report_vf) {
- struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
-
- if (pf_vsi)
- dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n",
- &vf->dev_lan_addr[0],
- pf_vsi->netdev->dev_addr);
- }
- }
-
-out_put_vf:
- ice_put_vf(vf);
- return malvf;
-}
ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi);
void ice_free_vfs(struct ice_pf *pf);
-void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event);
void ice_restore_all_vfs_msi_state(struct pci_dev *pdev);
-bool
-ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
- u16 num_msg_proc, u16 num_msg_pending);
int
ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
static inline void ice_process_vflr_event(struct ice_pf *pf) { }
static inline void ice_free_vfs(struct ice_pf *pf) { }
static inline
-void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) { }
-static inline
void ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event) { }
static inline void ice_print_vfs_mdd_events(struct ice_pf *pf) { }
static inline void ice_print_vf_rx_mdd_event(struct ice_vf *vf) { }
static inline void ice_restore_all_vfs_msi_state(struct pci_dev *pdev) { }
-static inline bool
-ice_is_malicious_vf(struct ice_pf __always_unused *pf,
- struct ice_rq_event_info __always_unused *event,
- u16 __always_unused num_msg_proc,
- u16 __always_unused num_msg_pending)
-{
- return false;
-}
-
static inline int
ice_sriov_configure(struct pci_dev __always_unused *pdev,
int __always_unused num_vfs)
u16 max_num_msgs_mbx;
};
-/* Structure to track messages sent by VFs on mailbox:
- * 1. vf_cntr: a counter array of VFs to track the number of
- * asynchronous messages sent by each VF
- * 2. vfcntr_len: number of entries in VF counter array
+/* Structure used to track a single VF's messages on the mailbox:
+ * 1. list_entry: linked list entry node
+ * 2. msg_count: the number of asynchronous messages sent by this VF
+ * 3. malicious: whether this VF has been detected as malicious before
*/
-struct ice_mbx_vf_counter {
- u32 *vf_cntr;
- u32 vfcntr_len;
+struct ice_mbx_vf_info {
+ struct list_head list_entry;
+ u32 msg_count;
+ u8 malicious : 1;
};
/* Structure to hold data relevant to the captured static snapshot
*/
struct ice_mbx_snapshot {
struct ice_mbx_snap_buffer_data mbx_buf;
- struct ice_mbx_vf_counter mbx_vf;
+ struct list_head mbx_vf;
};
/* Structure to hold data to be used for capturing or updating a
/* clear all malicious info if the VFs are getting reset */
ice_for_each_vf(pf, bkt, vf)
- if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->vfs.malvfs,
- ICE_MAX_SRIOV_VFS, vf->vf_id))
- dev_dbg(dev, "failed to clear malicious VF state for VF %u\n",
- vf->vf_id);
+ ice_mbx_clear_malvf(&vf->mbx_info);
/* If VFs have been disabled, there is no need to reset */
if (test_and_set_bit(ICE_VF_DIS, pf->state)) {
struct ice_pf *pf = vf->pf;
struct ice_vsi *vsi;
struct device *dev;
- struct ice_hw *hw;
int err = 0;
bool rsd;
dev = ice_pf_to_dev(pf);
- hw = &pf->hw;
if (flags & ICE_VF_RESET_NOTIFY)
ice_notify_vf_reset(vf);
ice_eswitch_replay_vf_mac_rule(vf);
/* if the VF has been reset allow it to come up again */
- if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->vfs.malvfs,
- ICE_MAX_SRIOV_VFS, vf->vf_id))
- dev_dbg(dev, "failed to clear malicious VF state for VF %u\n",
- vf->vf_id);
+ ice_mbx_clear_malvf(&vf->mbx_info);
out_unlock:
if (flags & ICE_VF_RESET_LOCK)
ice_vf_ctrl_invalidate_vsi(vf);
ice_vf_fdir_init(vf);
+ /* Initialize mailbox info for this VF */
+ ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
+
mutex_init(&vf->cfg_lock);
}
u16 num_qps_per; /* number of queue pairs per VF */
u16 num_msix_per; /* number of MSI-X vectors per VF */
unsigned long last_printed_mdd_jiffies; /* MDD message rate limit */
- DECLARE_BITMAP(malvfs, ICE_MAX_SRIOV_VFS); /* malicious VF indicator */
};
/* VF information structure */
DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF);
struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */
struct virtchnl_vlan_caps vlan_v2_caps;
+ struct ice_mbx_vf_info mbx_info;
u8 pf_set_mac:1; /* VF MAC address set by VMM admin */
u8 trusted:1;
u8 spoofchk:1;
*
* 2. When the caller starts processing its mailbox queue in response to an
* interrupt, the structure ice_mbx_snapshot is expected to be cleared before
- * the algorithm can be run for the first time for that interrupt. This can be
- * done via ice_mbx_reset_snapshot().
+ * the algorithm can be run for the first time for that interrupt. This
+ * requires calling ice_mbx_reset_snapshot() as well as calling
+ * ice_mbx_reset_vf_info() for each VF tracking structure.
*
* 3. For every message read by the caller from the MBX Queue, the caller must
* call the detection algorithm's entry function ice_mbx_vf_state_handler().
* Before every call to ice_mbx_vf_state_handler() the struct ice_mbx_data is
* filled as it is required to be passed to the algorithm.
*
- * 4. Every time a message is read from the MBX queue, a VFId is received which
- * is passed to the state handler. The boolean output is_malvf of the state
- * handler ice_mbx_vf_state_handler() serves as an indicator to the caller
- * whether this VF is malicious or not.
+ * 4. Every time a message is read from the MBX queue, a tracking structure
+ * for the VF must be passed to the state handler. The boolean output
+ * report_malvf from ice_mbx_vf_state_handler() serves as an indicator to the
+ * caller whether it must report this VF as malicious or not.
*
* 5. When a VF is identified to be malicious, the caller can send a message
- * to the system administrator. The caller can invoke ice_mbx_report_malvf()
- * to help determine if a malicious VF is to be reported or not. This function
- * requires the caller to maintain a global bitmap to track all malicious VFs
- * and pass that to ice_mbx_report_malvf() along with the VFID which was identified
- * to be malicious by ice_mbx_vf_state_handler().
+ * to the system administrator.
*
- * 6. The global bitmap maintained by PF can be cleared completely if PF is in
- * reset or the bit corresponding to a VF can be cleared if that VF is in reset.
- * When a VF is shut down and brought back up, we assume that the new VF
- * brought up is not malicious and hence report it if found malicious.
+ * 6. The PF is responsible for maintaining the struct ice_mbx_vf_info
+ * structure for each VF. The PF should clear the VF tracking structure if the
+ * VF is reset. When a VF is shut down and brought back up, we then assume
+ * that the new VF is not malicious and will report it again if it is
+ * detected as malicious again.
*
* 7. The function ice_mbx_reset_snapshot() is called to reset the information
* in ice_mbx_snapshot for every new mailbox interrupt handled.
- *
- * 8. The memory allocated for variables in ice_mbx_snapshot is de-allocated
- * when driver is unloaded.
*/
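
Taken together, steps 2-7 above describe a per-VF call sequence rather than the old bitmap bookkeeping. The sketch below shows one way a caller might wire the reworked API together; it uses only the declarations introduced in this series (ice_mbx_init_vf_info(), ice_mbx_vf_state_handler(), ice_mbx_clear_malvf()), assumes the usual ice driver context, and the example_* function names are hypothetical, not part of the series:

/* Hedged sketch only: illustrates the call order of the reworked per-VF
 * mailbox tracking API; compiles only in the ice driver context.
 */
static void example_vf_setup(struct ice_pf *pf, struct ice_vf *vf)
{
	/* Each VF gets its tracking info initialized and linked into the
	 * snapshot list; ice_mbx_init_snapshot() is assumed to have set up
	 * the list head once per PF.
	 */
	ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
}

static void example_handle_mbx_msg(struct ice_pf *pf, struct ice_vf *vf,
				   struct ice_mbx_data *mbxdata)
{
	bool report_malvf = false;

	/* Steps 3-4: run the state handler for every message read from the
	 * mailbox queue, passing this VF's tracking structure.
	 */
	if (ice_mbx_vf_state_handler(&pf->hw, mbxdata, &vf->mbx_info,
				     &report_malvf))
		return;

	/* Step 5: report_malvf is set only the first time a VF trips the
	 * threshold, so this warning fires at most once per VF reset.
	 */
	if (report_malvf)
		dev_warn(ice_pf_to_dev(pf),
			 "VF %u may be overflowing the PF mailbox\n",
			 vf->vf_id);
}

static void example_vf_reset(struct ice_vf *vf)
{
	/* Step 6: clearing the tracking info on VF reset lets a rebuilt VF
	 * be reported again if it misbehaves.
	 */
	ice_mbx_clear_malvf(&vf->mbx_info);
}
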
#define ICE_RQ_DATA_MASK(rq_data) ((rq_data) & PF_MBX_ARQH_ARQH_M)
/* Using the highest value for an unsigned 16-bit value 0xFFFF to indicate that
#define ICE_IGNORE_MAX_MSG_CNT 0xFFFF
/**
+ * ice_mbx_reset_snapshot - Reset mailbox snapshot structure
+ * @snap: pointer to the mailbox snapshot
+ */
+static void ice_mbx_reset_snapshot(struct ice_mbx_snapshot *snap)
+{
+ struct ice_mbx_vf_info *vf_info;
+
+	/* Clear mbx_buf in the mailbox snapshot structure and set the
+ * mailbox snapshot state to a new capture.
+ */
+ memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
+ snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
+
+ /* Reset message counts for all VFs to zero */
+ list_for_each_entry(vf_info, &snap->mbx_vf, list_entry)
+ vf_info->msg_count = 0;
+}
+
+/**
* ice_mbx_traverse - Pass through mailbox snapshot
* @hw: pointer to the HW struct
* @new_state: new algorithm state
/**
* ice_mbx_detect_malvf - Detect malicious VF in snapshot
* @hw: pointer to the HW struct
- * @vf_id: relative virtual function ID
+ * @vf_info: mailbox tracking structure for a VF
* @new_state: new algorithm state
* @is_malvf: boolean output to indicate if VF is malicious
*
* the permissible number of messages to send.
*/
static int
-ice_mbx_detect_malvf(struct ice_hw *hw, u16 vf_id,
+ice_mbx_detect_malvf(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info,
enum ice_mbx_snapshot_state *new_state,
bool *is_malvf)
{
- struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
-
- if (vf_id >= snap->mbx_vf.vfcntr_len)
- return -EIO;
-
- /* increment the message count in the VF array */
- snap->mbx_vf.vf_cntr[vf_id]++;
+ /* increment the message count for this VF */
+ vf_info->msg_count++;
- if (snap->mbx_vf.vf_cntr[vf_id] >= ICE_ASYNC_VF_MSG_THRESHOLD)
+ if (vf_info->msg_count >= ICE_ASYNC_VF_MSG_THRESHOLD)
*is_malvf = true;
/* continue to iterate through the mailbox snapshot */
}
/**
- * ice_mbx_reset_snapshot - Reset mailbox snapshot structure
- * @snap: pointer to mailbox snapshot structure in the ice_hw struct
- *
- * Reset the mailbox snapshot structure and clear VF counter array.
- */
-static void ice_mbx_reset_snapshot(struct ice_mbx_snapshot *snap)
-{
- u32 vfcntr_len;
-
- if (!snap || !snap->mbx_vf.vf_cntr)
- return;
-
- /* Clear VF counters. */
- vfcntr_len = snap->mbx_vf.vfcntr_len;
- if (vfcntr_len)
- memset(snap->mbx_vf.vf_cntr, 0,
- (vfcntr_len * sizeof(*snap->mbx_vf.vf_cntr)));
-
- /* Reset mailbox snapshot for a new capture. */
- memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
- snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
-}
-
-/**
* ice_mbx_vf_state_handler - Handle states of the overflow algorithm
* @hw: pointer to the HW struct
* @mbx_data: pointer to structure containing mailbox data
- * @vf_id: relative virtual function (VF) ID
- * @is_malvf: boolean output to indicate if VF is malicious
+ * @vf_info: mailbox tracking structure for the VF in question
+ * @report_malvf: boolean output to indicate whether VF should be reported
*
* The function serves as an entry point for the malicious VF
* detection algorithm by handling the different states and state
* the static snapshot and look for a malicious VF.
*/
int
-ice_mbx_vf_state_handler(struct ice_hw *hw,
- struct ice_mbx_data *mbx_data, u16 vf_id,
- bool *is_malvf)
+ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
+ struct ice_mbx_vf_info *vf_info, bool *report_malvf)
{
struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
struct ice_mbx_snap_buffer_data *snap_buf;
struct ice_ctl_q_info *cq = &hw->mailboxq;
enum ice_mbx_snapshot_state new_state;
+ bool is_malvf = false;
int status = 0;
- if (!is_malvf || !mbx_data)
+ if (!report_malvf || !mbx_data || !vf_info)
return -EINVAL;
+ *report_malvf = false;
+
/* When entering the mailbox state machine assume that the VF
* is not malicious until detected.
*/
- *is_malvf = false;
-
/* Checking if max messages allowed to be processed while servicing current
* interrupt is not less than the defined AVF message threshold.
*/
if (snap_buf->num_pending_arq >=
mbx_data->async_watermark_val) {
new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
- status = ice_mbx_detect_malvf(hw, vf_id, &new_state, is_malvf);
+ status = ice_mbx_detect_malvf(hw, vf_info, &new_state, &is_malvf);
} else {
new_state = ICE_MAL_VF_DETECT_STATE_TRAVERSE;
ice_mbx_traverse(hw, &new_state);
case ICE_MAL_VF_DETECT_STATE_DETECT:
new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
- status = ice_mbx_detect_malvf(hw, vf_id, &new_state, is_malvf);
+ status = ice_mbx_detect_malvf(hw, vf_info, &new_state, &is_malvf);
break;
default:
snap_buf->state = new_state;
- return status;
-}
-
-/**
- * ice_mbx_report_malvf - Track and note malicious VF
- * @hw: pointer to the HW struct
- * @all_malvfs: all malicious VFs tracked by PF
- * @bitmap_len: length of bitmap in bits
- * @vf_id: relative virtual function ID of the malicious VF
- * @report_malvf: boolean to indicate if malicious VF must be reported
- *
- * This function will update a bitmap that keeps track of the malicious
- * VFs attached to the PF. A malicious VF must be reported only once if
- * discovered between VF resets or loading so the function checks
- * the input vf_id against the bitmap to verify if the VF has been
- * detected in any previous mailbox iterations.
- */
-int
-ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
- u16 bitmap_len, u16 vf_id, bool *report_malvf)
-{
- if (!all_malvfs || !report_malvf)
- return -EINVAL;
-
- *report_malvf = false;
-
- if (bitmap_len < hw->mbx_snapshot.mbx_vf.vfcntr_len)
- return -EINVAL;
-
- if (vf_id >= bitmap_len)
- return -EIO;
-
- /* If the vf_id is found in the bitmap set bit and boolean to true */
- if (!test_and_set_bit(vf_id, all_malvfs))
+	/* Only report a VF as malicious the first time we detect it */
+ if (is_malvf && !vf_info->malicious) {
+ vf_info->malicious = 1;
*report_malvf = true;
+ }
- return 0;
+ return status;
}
/**
- * ice_mbx_clear_malvf - Clear VF bitmap and counter for VF ID
- * @snap: pointer to the mailbox snapshot structure
- * @all_malvfs: all malicious VFs tracked by PF
- * @bitmap_len: length of bitmap in bits
- * @vf_id: relative virtual function ID of the malicious VF
+ * ice_mbx_clear_malvf - Clear VF mailbox info
+ * @vf_info: the mailbox tracking structure for a VF
*
- * In case of a VF reset, this function can be called to clear
- * the bit corresponding to the VF ID in the bitmap tracking all
- * malicious VFs attached to the PF. The function also clears the
- * VF counter array at the index of the VF ID. This is to ensure
- * that the new VF loaded is not considered malicious before going
- * through the overflow detection algorithm.
+ * In case of a VF reset, this function shall be called to clear the VF's
+ * current mailbox tracking state.
*/
-int
-ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
- u16 bitmap_len, u16 vf_id)
+void ice_mbx_clear_malvf(struct ice_mbx_vf_info *vf_info)
{
- if (!snap || !all_malvfs)
- return -EINVAL;
-
- if (bitmap_len < snap->mbx_vf.vfcntr_len)
- return -EINVAL;
-
- /* Ensure VF ID value is not larger than bitmap or VF counter length */
- if (vf_id >= bitmap_len || vf_id >= snap->mbx_vf.vfcntr_len)
- return -EIO;
-
- /* Clear VF ID bit in the bitmap tracking malicious VFs attached to PF */
- clear_bit(vf_id, all_malvfs);
-
- /* Clear the VF counter in the mailbox snapshot structure for that VF ID.
- * This is to ensure that if a VF is unloaded and a new one brought back
- * up with the same VF ID for a snapshot currently in traversal or detect
- * state the counter for that VF ID does not increment on top of existing
- * values in the mailbox overflow detection algorithm.
- */
- snap->mbx_vf.vf_cntr[vf_id] = 0;
-
- return 0;
+ vf_info->malicious = 0;
+ vf_info->msg_count = 0;
}
/**
- * ice_mbx_init_snapshot - Initialize mailbox snapshot structure
+ * ice_mbx_init_vf_info - Initialize a new VF mailbox tracking info
* @hw: pointer to the hardware structure
- * @vf_count: number of VFs allocated on a PF
+ * @vf_info: the mailbox tracking info structure for a VF
*
- * Clear the mailbox snapshot structure and allocate memory
- * for the VF counter array based on the number of VFs allocated
- * on that PF.
+ * Initialize a VF mailbox tracking info structure and insert it into the
+ * snapshot list.
*
- * Assumption: This function will assume ice_get_caps() has already been
- * called to ensure that the vf_count can be compared against the number
- * of VFs supported as defined in the functional capabilities of the device.
+ * If you remove the VF, you must also delete the associated VF info structure
+ * from the linked list.
*/
-int ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count)
+void ice_mbx_init_vf_info(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info)
{
struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
- /* Ensure that the number of VFs allocated is non-zero and
- * is not greater than the number of supported VFs defined in
- * the functional capabilities of the PF.
- */
- if (!vf_count || vf_count > hw->func_caps.num_allocd_vfs)
- return -EINVAL;
-
- snap->mbx_vf.vf_cntr = devm_kcalloc(ice_hw_to_dev(hw), vf_count,
- sizeof(*snap->mbx_vf.vf_cntr),
- GFP_KERNEL);
- if (!snap->mbx_vf.vf_cntr)
- return -ENOMEM;
-
- /* Setting the VF counter length to the number of allocated
- * VFs for given PF's functional capabilities.
- */
- snap->mbx_vf.vfcntr_len = vf_count;
-
- /* Clear mbx_buf in the mailbox snaphot structure and setting the
- * mailbox snapshot state to a new capture.
- */
- memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
- snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
-
- return 0;
+ ice_mbx_clear_malvf(vf_info);
+ list_add(&vf_info->list_entry, &snap->mbx_vf);
}
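
The kernel-doc above notes that removing a VF must also remove its tracking info from the snapshot list. A minimal sketch of such a teardown helper follows; it is hypothetical (no such helper is shown in this series) and assumes the caller already serializes access to the snapshot list:

/* Hypothetical teardown counterpart to ice_mbx_init_vf_info(): detach the
 * VF's tracking info from the snapshot list before the VF is freed.
 */
static void example_mbx_remove_vf_info(struct ice_mbx_vf_info *vf_info)
{
	list_del(&vf_info->list_entry);
	ice_mbx_clear_malvf(vf_info);
}
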
/**
- * ice_mbx_deinit_snapshot - Free mailbox snapshot structure
+ * ice_mbx_init_snapshot - Initialize mailbox snapshot data
* @hw: pointer to the hardware structure
*
- * Clear the mailbox snapshot structure and free the VF counter array.
+ * Clear the mailbox snapshot structure and initialize the VF mailbox list.
*/
-void ice_mbx_deinit_snapshot(struct ice_hw *hw)
+void ice_mbx_init_snapshot(struct ice_hw *hw)
{
struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
- /* Free VF counter array and reset VF counter length */
- devm_kfree(ice_hw_to_dev(hw), snap->mbx_vf.vf_cntr);
- snap->mbx_vf.vfcntr_len = 0;
-
- /* Clear mbx_buf in the mailbox snaphot structure */
- memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
+ INIT_LIST_HEAD(&snap->mbx_vf);
+ ice_mbx_reset_snapshot(snap);
}
u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
int
ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
- u16 vf_id, bool *is_mal_vf);
-int
-ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
- u16 bitmap_len, u16 vf_id);
-int ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count);
-void ice_mbx_deinit_snapshot(struct ice_hw *hw);
-int
-ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
- u16 bitmap_len, u16 vf_id, bool *report_malvf);
+ struct ice_mbx_vf_info *vf_info, bool *report_malvf);
+void ice_mbx_clear_malvf(struct ice_mbx_vf_info *vf_info);
+void ice_mbx_init_vf_info(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info);
+void ice_mbx_init_snapshot(struct ice_hw *hw);
#else /* CONFIG_PCI_IOV */
static inline int
ice_aq_send_msg_to_vf(struct ice_hw __always_unused *hw,
return 0;
}
+static inline void ice_mbx_init_snapshot(struct ice_hw *hw)
+{
+}
+
#endif /* CONFIG_PCI_IOV */
#endif /* _ICE_VF_MBX_H_ */
}
/**
+ * ice_is_malicious_vf - check if this VF might be overflowing the mailbox
+ * @vf: the VF to check
+ * @mbxdata: data about the state of the mailbox
+ *
+ * Detect if a given VF might be malicious and attempting to overflow the PF
+ * mailbox. If so, log a warning message and ignore this event.
+ */
+static bool
+ice_is_malicious_vf(struct ice_vf *vf, struct ice_mbx_data *mbxdata)
+{
+ bool report_malvf = false;
+ struct device *dev;
+ struct ice_pf *pf;
+ int status;
+
+ pf = vf->pf;
+ dev = ice_pf_to_dev(pf);
+
+ if (test_bit(ICE_VF_STATE_DIS, vf->vf_states))
+ return vf->mbx_info.malicious;
+
+ /* check to see if we have a newly malicious VF */
+ status = ice_mbx_vf_state_handler(&pf->hw, mbxdata, &vf->mbx_info,
+ &report_malvf);
+ if (status)
+ dev_warn_ratelimited(dev, "Unable to check status of mailbox overflow for VF %u MAC %pM, status %d\n",
+ vf->vf_id, vf->dev_lan_addr, status);
+
+ if (report_malvf) {
+ struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
+ u8 zero_addr[ETH_ALEN] = {};
+
+ dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n",
+ vf->dev_lan_addr,
+ pf_vsi ? pf_vsi->netdev->dev_addr : zero_addr);
+ }
+
+ return vf->mbx_info.malicious;
+}
+
+/**
* ice_vc_process_vf_msg - Process request from VF
* @pf: pointer to the PF structure
* @event: pointer to the AQ event
+ * @mbxdata: information used to detect a VF attempting to overflow the mailbox
*
* called from the common asq/arq handler to
* process request from VF
*/
-void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
+void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ struct ice_mbx_data *mbxdata)
{
u32 v_opcode = le32_to_cpu(event->desc.cookie_high);
s16 vf_id = le16_to_cpu(event->desc.retval);
mutex_lock(&vf->cfg_lock);
+ /* Check if the VF is trying to overflow the mailbox */
+ if (ice_is_malicious_vf(vf, mbxdata))
+ goto finish;
+
/* Check if VF is disabled. */
if (test_bit(ICE_VF_STATE_DIS, vf->vf_states)) {
err = -EPERM;
ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode,
enum virtchnl_status_code v_retval, u8 *msg, u16 msglen);
bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id);
+void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ struct ice_mbx_data *mbxdata);
#else /* CONFIG_PCI_IOV */
static inline void ice_virtchnl_set_dflt_ops(struct ice_vf *vf) { }
static inline void ice_virtchnl_set_repr_ops(struct ice_vf *vf) { }
{
return false;
}
+
+static inline void
+ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ struct ice_mbx_data *mbxdata)
+{
+}
#endif /* !CONFIG_PCI_IOV */
#endif /* _ICE_VIRTCHNL_H_ */
#include <linux/tcp.h>
#include <linux/sctp.h>
#include <linux/if_ether.h>
-#include <linux/aer.h>
#include <linux/prefetch.h>
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#define INCVALUE_82576_MASK GENMASK(E1000_TIMINCA_16NS_SHIFT - 1, 0)
#define INCVALUE_82576 (16u << IGB_82576_TSYNC_SHIFT)
#define IGB_NBITS_82580 40
+#define IGB_82580_BASE_PERIOD 0x800000000
static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter);
static void igb_ptp_sdp_init(struct igb_adapter *adapter);
struct igb_adapter *igb = container_of(ptp, struct igb_adapter,
ptp_caps);
struct e1000_hw *hw = &igb->hw;
- int neg_adj = 0;
+ bool neg_adj;
u64 rate;
u32 inca;
- if (scaled_ppm < 0) {
- neg_adj = 1;
- scaled_ppm = -scaled_ppm;
- }
- rate = scaled_ppm;
- rate <<= 13;
- rate = div_u64(rate, 15625);
+ neg_adj = diff_by_scaled_ppm(IGB_82580_BASE_PERIOD, scaled_ppm, &rate);
inca = rate & INCVALUE_MASK;
if (neg_adj)
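
The diff_by_scaled_ppm() conversion in the 82580 adjfine path above is arithmetically the same as the open-coded version it replaces: with IGB_82580_BASE_PERIOD = 0x800000000 (2^35), base * |scaled_ppm| / (10^6 * 2^16) reduces to (|scaled_ppm| << 13) / 15625, i.e. the old shift-and-divide. The stand-alone user-space check below illustrates this; it assumes diff_by_scaled_ppm() computes base * |scaled_ppm| / (10^6 << 16) per its definition in <linux/ptp_clock_kernel.h>, and keeps the input small enough that a plain 64-bit multiply cannot overflow:

/* Toy equivalence check, user space only; not driver code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t base = 0x800000000ULL;	/* IGB_82580_BASE_PERIOD == 2^35 */
	const long scaled_ppm = 32768;		/* +0.5 ppm in 16.16 fixed point */

	/* Old igb arithmetic: rate = (scaled_ppm << 13) / 15625 */
	uint64_t old_rate = ((uint64_t)scaled_ppm << 13) / 15625;

	/* What diff_by_scaled_ppm() computes: base * scaled_ppm / (10^6 * 2^16).
	 * Kept small here so the 64-bit multiply cannot overflow; the kernel
	 * helper uses a wider intermediate for the general case.
	 */
	uint64_t new_rate = base * (uint64_t)scaled_ppm / (1000000ULL << 16);

	printf("old=%llu new=%llu\n",
	       (unsigned long long)old_rate, (unsigned long long)new_rate);
	assert(old_rate == new_rate);
	return 0;
}
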
netif_device_attach(netdev);
}
+/**
+ * igbvf_io_prepare - prepare device driver for PCI reset
+ * @pdev: PCI device information struct
+ */
+static void igbvf_io_prepare(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct igbvf_adapter *adapter = netdev_priv(netdev);
+
+ while (test_and_set_bit(__IGBVF_RESETTING, &adapter->state))
+ usleep_range(1000, 2000);
+ igbvf_down(adapter);
+}
+
+/**
+ * igbvf_io_reset_done - PCI reset done, device driver reset can begin
+ * @pdev: PCI device information struct
+ */
+static void igbvf_io_reset_done(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct igbvf_adapter *adapter = netdev_priv(netdev);
+
+ igbvf_up(adapter);
+ clear_bit(__IGBVF_RESETTING, &adapter->state);
+}
+
static void igbvf_print_device_info(struct igbvf_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
.error_detected = igbvf_io_error_detected,
.slot_reset = igbvf_io_slot_reset,
.resume = igbvf_io_resume,
+ .reset_prepare = igbvf_io_prepare,
+ .reset_done = igbvf_io_reset_done,
};
static const struct pci_device_id igbvf_pci_tbl[] = {
u32 start_time;
u32 end_time;
+ u32 max_sdu;
/* CBS parameters */
bool cbs_enable; /* indicates if CBS is enabled */
ktime_t base_time;
ktime_t cycle_time;
bool qbv_enable;
+ u32 qbv_config_change_errors;
/* OS defined structs */
struct pci_dev *pdev;
#define IGC_FLAG_PTP BIT(8)
#define IGC_FLAG_WOL_SUPPORTED BIT(8)
#define IGC_FLAG_NEED_LINK_UPDATE BIT(9)
-#define IGC_FLAG_MEDIA_RESET BIT(10)
-#define IGC_FLAG_MAS_ENABLE BIT(12)
#define IGC_FLAG_HAS_MSIX BIT(13)
#define IGC_FLAG_EEE BIT(14)
#define IGC_FLAG_VLAN_PROMISC BIT(15)
*/
#define IGC_TW_SYSTEM_100_MASK 0x0000FF00
#define IGC_TW_SYSTEM_100_SHIFT 8
-#define IGC_DMACR_DMAC_EN 0x80000000 /* Enable DMA Coalescing */
-#define IGC_DMACR_DMACTHR_MASK 0x00FF0000
-#define IGC_DMACR_DMACTHR_SHIFT 16
/* Reg val to set scale to 1024 nsec */
#define IGC_LTRMINV_SCALE_1024 2
/* Reg val to set scale to 32768 nsec */
IGC_STAT("rx_hwtstamp_cleared", rx_hwtstamp_cleared),
IGC_STAT("tx_lpi_counter", stats.tlpic),
IGC_STAT("rx_lpi_counter", stats.rlpic),
+ IGC_STAT("qbv_config_change_errors", qbv_config_change_errors),
};
#define IGC_NETDEV_STAT(_net_stat) { \
u64 o2bspc;
u64 b2ospc;
u64 b2ogprc;
+ u64 txdrop;
};
struct net_device *igc_get_hw_dev(struct igc_hw *hw);
size = rd32(IGC_RXPBS) &
IGC_RXPBS_SIZE_I225_MASK;
- /* Calculations vary based on DMAC settings. */
- if (rd32(IGC_DMACR) & IGC_DMACR_DMAC_EN) {
- size -= (rd32(IGC_DMACR) &
- IGC_DMACR_DMACTHR_MASK) >>
- IGC_DMACR_DMACTHR_SHIFT;
- /* Convert size to bits. */
- size *= 1024 * 8;
- } else {
- /* Convert size to bytes, subtract the MTU, and then
- * convert the size to bits.
- */
- size *= 1024;
- size *= 8;
- }
+ /* Convert size to bytes, subtract the MTU, and then
+ * convert the size to bits.
+ */
+ size *= 1024;
+ size *= 8;
if (size < 0) {
hw_dbg("Invalid effective Rx buffer size %d\n",
#include <linux/module.h>
#include <linux/types.h>
#include <linux/if_vlan.h>
-#include <linux/aer.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/ip.h>
static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
struct igc_ring *tx_ring)
{
+ struct igc_adapter *adapter = netdev_priv(tx_ring->netdev);
bool first_flag = false, insert_empty = false;
u16 count = TXD_USE_COUNT(skb_headlen(skb));
__be16 protocol = vlan_get_protocol(skb);
first->bytecount = skb->len;
first->gso_segs = 1;
- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
- struct igc_adapter *adapter = netdev_priv(tx_ring->netdev);
+ if (tx_ring->max_sdu > 0) {
+ u32 max_sdu = 0;
+
+ max_sdu = tx_ring->max_sdu +
+ (skb_vlan_tagged(first->skb) ? VLAN_HLEN : 0);
+
+ if (first->bytecount > max_sdu) {
+ adapter->stats.txdrop++;
+ goto out_drop;
+ }
+ }
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
/* FIXME: add support for retrieving timestamps from
* the other timer registers before skipping the
* timestamping request.
net_stats->tx_window_errors = adapter->stats.latecol;
net_stats->tx_carrier_errors = adapter->stats.tncrs;
- /* Tx Dropped needs to be maintained elsewhere */
+ /* Tx Dropped */
+ net_stats->tx_dropped = adapter->stats.txdrop;
/* Management Stats */
adapter->stats.mgptc += rd32(IGC_MGTPTC);
mod_timer(&adapter->phy_info_timer,
round_jiffies(jiffies + 2 * HZ));
- /* link is down, time to check for alternate media */
- if (adapter->flags & IGC_FLAG_MAS_ENABLE) {
- if (adapter->flags & IGC_FLAG_MEDIA_RESET) {
- schedule_work(&adapter->reset_task);
- /* return immediately */
- return;
- }
- }
pm_schedule_suspend(netdev->dev.parent,
MSEC_PER_SEC * 5);
-
- /* also check for alternate media here */
- } else if (!netif_carrier_ok(netdev) &&
- (adapter->flags & IGC_FLAG_MAS_ENABLE)) {
- if (adapter->flags & IGC_FLAG_MEDIA_RESET) {
- schedule_work(&adapter->reset_task);
- /* return immediately */
- return;
- }
}
}
adapter->base_time = 0;
adapter->cycle_time = NSEC_PER_SEC;
+ adapter->qbv_config_change_errors = 0;
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i];
ring->start_time = 0;
ring->end_time = NSEC_PER_SEC;
+ ring->max_sdu = 0;
}
return 0;
}
}
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct igc_ring *ring = adapter->tx_ring[i];
+ struct net_device *dev = adapter->netdev;
+
+ if (qopt->max_sdu[i])
+ ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len;
+ else
+ ring->max_sdu = 0;
+ }
+
return 0;
}
caps->broken_mqprio = true;
- if (hw->mac.type == igc_i225)
+ if (hw->mac.type == igc_i225) {
+ caps->supports_queue_max_sdu = true;
caps->gate_mask_per_txq = true;
+ }
return 0;
}
/* LTR registers */
#define IGC_LTRC 0x01A0 /* Latency Tolerance Reporting Control */
-#define IGC_DMACR 0x02508 /* DMA Coalescing Control Register */
#define IGC_LTRMINV 0x5BB0 /* LTR Minimum Value */
#define IGC_LTRMAXV 0x5BB4 /* LTR Maximum Value */
static int igc_tsn_enable_offload(struct igc_adapter *adapter)
{
struct igc_hw *hw = &adapter->hw;
+ bool tsn_mode_reconfig = false;
u32 tqavctrl, baset_l, baset_h;
u32 sec, nsec, cycle;
ktime_t base_time, systim;
}
tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
+
+ if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN)
+ tsn_mode_reconfig = true;
+
tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
cycle = adapter->cycle_time;
s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
base_time = ktime_add_ns(base_time, (n + 1) * cycle);
+
+ /* Increase the counter if scheduling into the past while
+ * Gate Control List (GCL) is running.
+ */
+ if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
+ tsn_mode_reconfig)
+ adapter->qbv_config_change_errors++;
} else {
/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
* has to be configured before the cycle time and base time.
+++ /dev/null
-# SPDX-License-Identifier: GPL-2.0
-# Copyright(c) 1999 - 2008 Intel Corporation.
-#
-# Makefile for the Intel(R) PRO/10GbE ethernet driver
-#
-
-obj-$(CONFIG_IXGB) += ixgb.o
-
-ixgb-objs := ixgb_main.o ixgb_hw.o ixgb_ee.o ixgb_ethtool.o ixgb_param.o
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#ifndef _IXGB_H_
-#define _IXGB_H_
-
-#include <linux/stddef.h>
-#include <linux/module.h>
-#include <linux/types.h>
-#include <asm/byteorder.h>
-#include <linux/mm.h>
-#include <linux/errno.h>
-#include <linux/ioport.h>
-#include <linux/pci.h>
-#include <linux/kernel.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/delay.h>
-#include <linux/timer.h>
-#include <linux/slab.h>
-#include <linux/vmalloc.h>
-#include <linux/interrupt.h>
-#include <linux/string.h>
-#include <linux/pagemap.h>
-#include <linux/dma-mapping.h>
-#include <linux/bitops.h>
-#include <asm/io.h>
-#include <asm/irq.h>
-#include <linux/capability.h>
-#include <linux/in.h>
-#include <linux/ip.h>
-#include <linux/tcp.h>
-#include <linux/udp.h>
-#include <net/pkt_sched.h>
-#include <linux/list.h>
-#include <linux/reboot.h>
-#include <net/checksum.h>
-
-#include <linux/ethtool.h>
-#include <linux/if_vlan.h>
-
-#define BAR_0 0
-#define BAR_1 1
-
-struct ixgb_adapter;
-#include "ixgb_hw.h"
-#include "ixgb_ee.h"
-#include "ixgb_ids.h"
-
-/* TX/RX descriptor defines */
-#define DEFAULT_TXD 256
-#define MAX_TXD 4096
-#define MIN_TXD 64
-
-/* hardware cannot reliably support more than 512 descriptors owned by
- * hardware descriptor cache otherwise an unreliable ring under heavy
- * receive load may result */
-#define DEFAULT_RXD 512
-#define MAX_RXD 512
-#define MIN_RXD 64
-
-/* Supported Rx Buffer Sizes */
-#define IXGB_RXBUFFER_2048 2048
-#define IXGB_RXBUFFER_4096 4096
-#define IXGB_RXBUFFER_8192 8192
-#define IXGB_RXBUFFER_16384 16384
-
-/* How many Rx Buffers do we bundle into one write to the hardware ? */
-#define IXGB_RX_BUFFER_WRITE 8 /* Must be power of 2 */
-
-/* wrapper around a pointer to a socket buffer,
- * so a DMA handle can be stored along with the buffer */
-struct ixgb_buffer {
- struct sk_buff *skb;
- dma_addr_t dma;
- unsigned long time_stamp;
- u16 length;
- u16 next_to_watch;
- u16 mapped_as_page;
-};
-
-struct ixgb_desc_ring {
- /* pointer to the descriptor ring memory */
- void *desc;
- /* physical address of the descriptor ring */
- dma_addr_t dma;
- /* length of descriptor ring in bytes */
- unsigned int size;
- /* number of descriptors in the ring */
- unsigned int count;
- /* next descriptor to associate a buffer with */
- unsigned int next_to_use;
- /* next descriptor to check for DD status bit */
- unsigned int next_to_clean;
- /* array of buffer information structs */
- struct ixgb_buffer *buffer_info;
-};
-
-#define IXGB_DESC_UNUSED(R) \
- ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
- (R)->next_to_clean - (R)->next_to_use - 1)
-
-#define IXGB_GET_DESC(R, i, type) (&(((struct type *)((R).desc))[i]))
-#define IXGB_RX_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_rx_desc)
-#define IXGB_TX_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_tx_desc)
-#define IXGB_CONTEXT_DESC(R, i) IXGB_GET_DESC(R, i, ixgb_context_desc)
-
-/* board specific private data structure */
-
-struct ixgb_adapter {
- struct timer_list watchdog_timer;
- unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
- u32 bd_number;
- u32 rx_buffer_len;
- u32 part_num;
- u16 link_speed;
- u16 link_duplex;
- struct work_struct tx_timeout_task;
-
- /* TX */
- struct ixgb_desc_ring tx_ring ____cacheline_aligned_in_smp;
- unsigned int restart_queue;
- unsigned long timeo_start;
- u32 tx_cmd_type;
- u64 hw_csum_tx_good;
- u64 hw_csum_tx_error;
- u32 tx_int_delay;
- u32 tx_timeout_count;
- bool tx_int_delay_enable;
- bool detect_tx_hung;
-
- /* RX */
- struct ixgb_desc_ring rx_ring;
- u64 hw_csum_rx_error;
- u64 hw_csum_rx_good;
- u32 rx_int_delay;
- bool rx_csum;
-
- /* OS defined structs */
- struct napi_struct napi;
- struct net_device *netdev;
- struct pci_dev *pdev;
-
- /* structs defined in ixgb_hw.h */
- struct ixgb_hw hw;
- u16 msg_enable;
- struct ixgb_hw_stats stats;
- u32 alloc_rx_buff_failed;
- bool have_msi;
- unsigned long flags;
-};
-
-enum ixgb_state_t {
- /* TBD
- __IXGB_TESTING,
- __IXGB_RESETTING,
- */
- __IXGB_DOWN
-};
-
-/* Exported from other modules */
-void ixgb_check_options(struct ixgb_adapter *adapter);
-void ixgb_set_ethtool_ops(struct net_device *netdev);
-extern char ixgb_driver_name[];
-
-void ixgb_set_speed_duplex(struct net_device *netdev);
-
-int ixgb_up(struct ixgb_adapter *adapter);
-void ixgb_down(struct ixgb_adapter *adapter, bool kill_watchdog);
-void ixgb_reset(struct ixgb_adapter *adapter);
-int ixgb_setup_rx_resources(struct ixgb_adapter *adapter);
-int ixgb_setup_tx_resources(struct ixgb_adapter *adapter);
-void ixgb_free_rx_resources(struct ixgb_adapter *adapter);
-void ixgb_free_tx_resources(struct ixgb_adapter *adapter);
-void ixgb_update_stats(struct ixgb_adapter *adapter);
-
-
-#endif /* _IXGB_H_ */
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include "ixgb_hw.h"
-#include "ixgb_ee.h"
-/* Local prototypes */
-static u16 ixgb_shift_in_bits(struct ixgb_hw *hw);
-
-static void ixgb_shift_out_bits(struct ixgb_hw *hw,
- u16 data,
- u16 count);
-static void ixgb_standby_eeprom(struct ixgb_hw *hw);
-
-static bool ixgb_wait_eeprom_command(struct ixgb_hw *hw);
-
-static void ixgb_cleanup_eeprom(struct ixgb_hw *hw);
-
-/******************************************************************************
- * Raises the EEPROM's clock input.
- *
- * hw - Struct containing variables accessed by shared code
- * eecd_reg - EECD's current value
- *****************************************************************************/
-static void
-ixgb_raise_clock(struct ixgb_hw *hw,
- u32 *eecd_reg)
-{
- /* Raise the clock input to the EEPROM (by setting the SK bit), and then
- * wait 50 microseconds.
- */
- *eecd_reg = *eecd_reg | IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, *eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-}
-
-/******************************************************************************
- * Lowers the EEPROM's clock input.
- *
- * hw - Struct containing variables accessed by shared code
- * eecd_reg - EECD's current value
- *****************************************************************************/
-static void
-ixgb_lower_clock(struct ixgb_hw *hw,
- u32 *eecd_reg)
-{
- /* Lower the clock input to the EEPROM (by clearing the SK bit), and then
- * wait 50 microseconds.
- */
- *eecd_reg = *eecd_reg & ~IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, *eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-}
-
-/******************************************************************************
- * Shift data bits out to the EEPROM.
- *
- * hw - Struct containing variables accessed by shared code
- * data - data to send to the EEPROM
- * count - number of bits to shift out
- *****************************************************************************/
-static void
-ixgb_shift_out_bits(struct ixgb_hw *hw,
- u16 data,
- u16 count)
-{
- u32 eecd_reg;
- u32 mask;
-
- /* We need to shift "count" bits out to the EEPROM. So, value in the
- * "data" parameter will be shifted out to the EEPROM one bit at a time.
- * In order to do this, "data" must be broken down into bits.
- */
- mask = 0x01 << (count - 1);
- eecd_reg = IXGB_READ_REG(hw, EECD);
- eecd_reg &= ~(IXGB_EECD_DO | IXGB_EECD_DI);
- do {
- /* A "1" is shifted out to the EEPROM by setting bit "DI" to a "1",
- * and then raising and then lowering the clock (the SK bit controls
- * the clock input to the EEPROM). A "0" is shifted out to the EEPROM
- * by setting "DI" to "0" and then raising and then lowering the clock.
- */
- eecd_reg &= ~IXGB_EECD_DI;
-
- if (data & mask)
- eecd_reg |= IXGB_EECD_DI;
-
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
-
- udelay(50);
-
- ixgb_raise_clock(hw, &eecd_reg);
- ixgb_lower_clock(hw, &eecd_reg);
-
- mask = mask >> 1;
-
- } while (mask);
-
- /* We leave the "DI" bit set to "0" when we leave this routine. */
- eecd_reg &= ~IXGB_EECD_DI;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
-}
-
-/******************************************************************************
- * Shift data bits in from the EEPROM
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static u16
-ixgb_shift_in_bits(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
- u32 i;
- u16 data;
-
- /* In order to read a register from the EEPROM, we need to shift 16 bits
- * in from the EEPROM. Bits are "shifted in" by raising the clock input to
- * the EEPROM (setting the SK bit), and then reading the value of the "DO"
- * bit. During this "shifting in" process the "DI" bit should always be
- * clear..
- */
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- eecd_reg &= ~(IXGB_EECD_DO | IXGB_EECD_DI);
- data = 0;
-
- for (i = 0; i < 16; i++) {
- data = data << 1;
- ixgb_raise_clock(hw, &eecd_reg);
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- eecd_reg &= ~(IXGB_EECD_DI);
- if (eecd_reg & IXGB_EECD_DO)
- data |= 1;
-
- ixgb_lower_clock(hw, &eecd_reg);
- }
-
- return data;
-}
-
-/******************************************************************************
- * Prepares EEPROM for access
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Lowers EEPROM clock. Clears input pin. Sets the chip select pin. This
- * function should be called before issuing a command to the EEPROM.
- *****************************************************************************/
-static void
-ixgb_setup_eeprom(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- /* Clear SK and DI */
- eecd_reg &= ~(IXGB_EECD_SK | IXGB_EECD_DI);
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
-
- /* Set CS */
- eecd_reg |= IXGB_EECD_CS;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
-}
-
-/******************************************************************************
- * Returns EEPROM to a "standby" state
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_standby_eeprom(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- /* Deselect EEPROM */
- eecd_reg &= ~(IXGB_EECD_CS | IXGB_EECD_SK);
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-
- /* Clock high */
- eecd_reg |= IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-
- /* Select EEPROM */
- eecd_reg |= IXGB_EECD_CS;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-
- /* Clock low */
- eecd_reg &= ~IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-}
-
-/******************************************************************************
- * Raises then lowers the EEPROM's clock pin
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_clock_eeprom(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- /* Rising edge of clock */
- eecd_reg |= IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-
- /* Falling edge of clock */
- eecd_reg &= ~IXGB_EECD_SK;
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
- IXGB_WRITE_FLUSH(hw);
- udelay(50);
-}
-
-/******************************************************************************
- * Terminates a command by lowering the EEPROM's chip select pin
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_cleanup_eeprom(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
-
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- eecd_reg &= ~(IXGB_EECD_CS | IXGB_EECD_DI);
-
- IXGB_WRITE_REG(hw, EECD, eecd_reg);
-
- ixgb_clock_eeprom(hw);
-}
-
-/******************************************************************************
- * Waits for the EEPROM to finish the current command.
- *
- * hw - Struct containing variables accessed by shared code
- *
- * The command is done when the EEPROM's data out pin goes high.
- *
- * Returns:
- * true: EEPROM data pin is high before timeout.
- * false: Time expired.
- *****************************************************************************/
-static bool
-ixgb_wait_eeprom_command(struct ixgb_hw *hw)
-{
- u32 eecd_reg;
- u32 i;
-
- /* Toggle the CS line. This in effect tells to EEPROM to actually execute
- * the command in question.
- */
- ixgb_standby_eeprom(hw);
-
- /* Now read DO repeatedly until is high (equal to '1'). The EEPROM will
- * signal that the command has been completed by raising the DO signal.
- * If DO does not go high in 10 milliseconds, then error out.
- */
- for (i = 0; i < 200; i++) {
- eecd_reg = IXGB_READ_REG(hw, EECD);
-
- if (eecd_reg & IXGB_EECD_DO)
- return true;
-
- udelay(50);
- }
- ASSERT(0);
- return false;
-}
-
-/******************************************************************************
- * Verifies that the EEPROM has a valid checksum
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Reads the first 64 16 bit words of the EEPROM and sums the values read.
- * If the sum of the 64 16 bit words is 0xBABA, the EEPROM's checksum is
- * valid.
- *
- * Returns:
- * true: Checksum is valid
- * false: Checksum is not valid.
- *****************************************************************************/
-bool
-ixgb_validate_eeprom_checksum(struct ixgb_hw *hw)
-{
- u16 checksum = 0;
- u16 i;
-
- for (i = 0; i < (EEPROM_CHECKSUM_REG + 1); i++)
- checksum += ixgb_read_eeprom(hw, i);
-
- if (checksum == (u16) EEPROM_SUM)
- return true;
- else
- return false;
-}
-
-/******************************************************************************
- * Calculates the EEPROM checksum and writes it to the EEPROM
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Sums the first 63 16 bit words of the EEPROM. Subtracts the sum from 0xBABA.
- * Writes the difference to word offset 63 of the EEPROM.
- *****************************************************************************/
-void
-ixgb_update_eeprom_checksum(struct ixgb_hw *hw)
-{
- u16 checksum = 0;
- u16 i;
-
- for (i = 0; i < EEPROM_CHECKSUM_REG; i++)
- checksum += ixgb_read_eeprom(hw, i);
-
- checksum = (u16) EEPROM_SUM - checksum;
-
- ixgb_write_eeprom(hw, EEPROM_CHECKSUM_REG, checksum);
-}
-
-/******************************************************************************
- * Writes a 16 bit word to a given offset in the EEPROM.
- *
- * hw - Struct containing variables accessed by shared code
- * reg - offset within the EEPROM to be written to
- * data - 16 bit word to be written to the EEPROM
- *
- * If ixgb_update_eeprom_checksum is not called after this function, the
- * EEPROM will most likely contain an invalid checksum.
- *
- *****************************************************************************/
-void
-ixgb_write_eeprom(struct ixgb_hw *hw, u16 offset, u16 data)
-{
- struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
-
- /* Prepare the EEPROM for writing */
- ixgb_setup_eeprom(hw);
-
- /* Send the 9-bit EWEN (write enable) command to the EEPROM (5-bit opcode
- * plus 4-bit dummy). This puts the EEPROM into write/erase mode.
- */
- ixgb_shift_out_bits(hw, EEPROM_EWEN_OPCODE, 5);
- ixgb_shift_out_bits(hw, 0, 4);
-
- /* Prepare the EEPROM */
- ixgb_standby_eeprom(hw);
-
- /* Send the Write command (3-bit opcode + 6-bit addr) */
- ixgb_shift_out_bits(hw, EEPROM_WRITE_OPCODE, 3);
- ixgb_shift_out_bits(hw, offset, 6);
-
- /* Send the data */
- ixgb_shift_out_bits(hw, data, 16);
-
- ixgb_wait_eeprom_command(hw);
-
- /* Recover from write */
- ixgb_standby_eeprom(hw);
-
- /* Send the 9-bit EWDS (write disable) command to the EEPROM (5-bit
- * opcode plus 4-bit dummy). This takes the EEPROM out of write/erase
- * mode.
- */
- ixgb_shift_out_bits(hw, EEPROM_EWDS_OPCODE, 5);
- ixgb_shift_out_bits(hw, 0, 4);
-
- /* Done with writing */
- ixgb_cleanup_eeprom(hw);
-
- /* clear the init_ctrl_reg_1 to signify that the cache is invalidated */
- ee_map->init_ctrl_reg_1 = cpu_to_le16(EEPROM_ICW1_SIGNATURE_CLEAR);
-}
-
-/******************************************************************************
- * Reads a 16 bit word from the EEPROM.
- *
- * hw - Struct containing variables accessed by shared code
- * offset - offset of 16 bit word in the EEPROM to read
- *
- * Returns:
- * The 16-bit value read from the eeprom
- *****************************************************************************/
-u16
-ixgb_read_eeprom(struct ixgb_hw *hw,
- u16 offset)
-{
- u16 data;
-
- /* Prepare the EEPROM for reading */
- ixgb_setup_eeprom(hw);
-
- /* Send the READ command (opcode + addr) */
- ixgb_shift_out_bits(hw, EEPROM_READ_OPCODE, 3);
- /*
- * We have a 64 word EEPROM, there are 6 address bits
- */
- ixgb_shift_out_bits(hw, offset, 6);
-
- /* Read the data */
- data = ixgb_shift_in_bits(hw);
-
- /* End this read operation */
- ixgb_standby_eeprom(hw);
-
- return data;
-}
-
-/******************************************************************************
- * Reads eeprom and stores data in shared structure.
- * Validates eeprom checksum and eeprom signature.
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Returns:
- * true: if eeprom read is successful
- * false: otherwise.
- *****************************************************************************/
-bool
-ixgb_get_eeprom_data(struct ixgb_hw *hw)
-{
- u16 i;
- u16 checksum = 0;
- struct ixgb_ee_map_type *ee_map;
-
- ENTER();
-
- ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
-
- pr_debug("Reading eeprom data\n");
- for (i = 0; i < IXGB_EEPROM_SIZE ; i++) {
- u16 ee_data;
- ee_data = ixgb_read_eeprom(hw, i);
- checksum += ee_data;
- hw->eeprom[i] = cpu_to_le16(ee_data);
- }
-
- if (checksum != (u16) EEPROM_SUM) {
- pr_debug("Checksum invalid\n");
- /* clear the init_ctrl_reg_1 to signify that the cache is
- * invalidated */
- ee_map->init_ctrl_reg_1 = cpu_to_le16(EEPROM_ICW1_SIGNATURE_CLEAR);
- return false;
- }
-
- if ((ee_map->init_ctrl_reg_1 & cpu_to_le16(EEPROM_ICW1_SIGNATURE_MASK))
- != cpu_to_le16(EEPROM_ICW1_SIGNATURE_VALID)) {
- pr_debug("Signature invalid\n");
- return false;
- }
-
- return true;
-}
-
-/******************************************************************************
- * Local function to check if the eeprom signature is good
- * If the eeprom signature is good, calls ixgb)get_eeprom_data.
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Returns:
- * true: eeprom signature was good and the eeprom read was successful
- * false: otherwise.
- ******************************************************************************/
-static bool
-ixgb_check_and_get_eeprom_data (struct ixgb_hw* hw)
-{
- struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
-
- if ((ee_map->init_ctrl_reg_1 & cpu_to_le16(EEPROM_ICW1_SIGNATURE_MASK))
- == cpu_to_le16(EEPROM_ICW1_SIGNATURE_VALID)) {
- return true;
- } else {
- return ixgb_get_eeprom_data(hw);
- }
-}
-
-/******************************************************************************
- * return a word from the eeprom
- *
- * hw - Struct containing variables accessed by shared code
- * index - Offset of eeprom word
- *
- * Returns:
- * Word at indexed offset in eeprom, if valid, 0 otherwise.
- ******************************************************************************/
-__le16
-ixgb_get_eeprom_word(struct ixgb_hw *hw, u16 index)
-{
-
- if (index < IXGB_EEPROM_SIZE && ixgb_check_and_get_eeprom_data(hw))
- return hw->eeprom[index];
-
- return 0;
-}
-
-/******************************************************************************
- * return the mac address from EEPROM
- *
- * hw - Struct containing variables accessed by shared code
- * mac_addr - Ethernet Address if EEPROM contents are valid, 0 otherwise
- *
- * Returns: None.
- ******************************************************************************/
-void
-ixgb_get_ee_mac_addr(struct ixgb_hw *hw,
- u8 *mac_addr)
-{
- int i;
- struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
-
- ENTER();
-
- if (ixgb_check_and_get_eeprom_data(hw)) {
- for (i = 0; i < ETH_ALEN; i++) {
- mac_addr[i] = ee_map->mac_addr[i];
- }
- pr_debug("eeprom mac address = %pM\n", mac_addr);
- }
-}
-
-
-/******************************************************************************
- * return the Printed Board Assembly number from EEPROM
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Returns:
- * PBA number if EEPROM contents are valid, 0 otherwise
- ******************************************************************************/
-u32
-ixgb_get_ee_pba_number(struct ixgb_hw *hw)
-{
- if (ixgb_check_and_get_eeprom_data(hw))
- return le16_to_cpu(hw->eeprom[EEPROM_PBA_1_2_REG])
- | (le16_to_cpu(hw->eeprom[EEPROM_PBA_3_4_REG])<<16);
-
- return 0;
-}
-
-
-/******************************************************************************
- * return the Device Id from EEPROM
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Returns:
- * Device Id if EEPROM contents are valid, 0 otherwise
- ******************************************************************************/
-u16
-ixgb_get_ee_device_id(struct ixgb_hw *hw)
-{
- struct ixgb_ee_map_type *ee_map = (struct ixgb_ee_map_type *)hw->eeprom;
-
- if (ixgb_check_and_get_eeprom_data(hw))
- return le16_to_cpu(ee_map->device_id);
-
- return 0;
-}
-
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#ifndef _IXGB_EE_H_
-#define _IXGB_EE_H_
-
-#define IXGB_EEPROM_SIZE 64 /* Size in words */
-
-/* EEPROM Commands */
-#define EEPROM_READ_OPCODE 0x6 /* EEPROM read opcode */
-#define EEPROM_WRITE_OPCODE 0x5 /* EEPROM write opcode */
-#define EEPROM_ERASE_OPCODE 0x7 /* EEPROM erase opcode */
-#define EEPROM_EWEN_OPCODE 0x13 /* EEPROM erase/write enable */
-#define EEPROM_EWDS_OPCODE 0x10 /* EEPROM erase/write disable */
-
-/* EEPROM MAP (Word Offsets) */
-#define EEPROM_IA_1_2_REG 0x0000
-#define EEPROM_IA_3_4_REG 0x0001
-#define EEPROM_IA_5_6_REG 0x0002
-#define EEPROM_COMPATIBILITY_REG 0x0003
-#define EEPROM_PBA_1_2_REG 0x0008
-#define EEPROM_PBA_3_4_REG 0x0009
-#define EEPROM_INIT_CONTROL1_REG 0x000A
-#define EEPROM_SUBSYS_ID_REG 0x000B
-#define EEPROM_SUBVEND_ID_REG 0x000C
-#define EEPROM_DEVICE_ID_REG 0x000D
-#define EEPROM_VENDOR_ID_REG 0x000E
-#define EEPROM_INIT_CONTROL2_REG 0x000F
-#define EEPROM_SWDPINS_REG 0x0020
-#define EEPROM_CIRCUIT_CTRL_REG 0x0021
-#define EEPROM_D0_D3_POWER_REG 0x0022
-#define EEPROM_FLASH_VERSION 0x0032
-#define EEPROM_CHECKSUM_REG 0x003F
-
-/* Mask bits for fields in Word 0x0a of the EEPROM */
-
-#define EEPROM_ICW1_SIGNATURE_MASK 0xC000
-#define EEPROM_ICW1_SIGNATURE_VALID 0x4000
-#define EEPROM_ICW1_SIGNATURE_CLEAR 0x0000
-
-/* For checksumming, the sum of all words in the EEPROM should equal 0xBABA. */
-#define EEPROM_SUM 0xBABA
-
-/* EEPROM Map Sizes (Byte Counts) */
-#define PBA_SIZE 4
-
-/* EEPROM Map defines (WORD OFFSETS)*/
-
-/* EEPROM structure */
-struct ixgb_ee_map_type {
- u8 mac_addr[ETH_ALEN];
- __le16 compatibility;
- __le16 reserved1[4];
- __le32 pba_number;
- __le16 init_ctrl_reg_1;
- __le16 subsystem_id;
- __le16 subvendor_id;
- __le16 device_id;
- __le16 vendor_id;
- __le16 init_ctrl_reg_2;
- __le16 oem_reserved[16];
- __le16 swdpins_reg;
- __le16 circuit_ctrl_reg;
- u8 d3_power;
- u8 d0_power;
- __le16 reserved2[28];
- __le16 checksum;
-};
-
-/* EEPROM Functions */
-u16 ixgb_read_eeprom(struct ixgb_hw *hw, u16 reg);
-
-bool ixgb_validate_eeprom_checksum(struct ixgb_hw *hw);
-
-void ixgb_update_eeprom_checksum(struct ixgb_hw *hw);
-
-void ixgb_write_eeprom(struct ixgb_hw *hw, u16 reg, u16 data);
-
-#endif /* IXGB_EE_H */
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-/* ethtool support for ixgb */
-
-#include "ixgb.h"
-
-#include <linux/uaccess.h>
-
-#define IXGB_ALL_RAR_ENTRIES 16
-
-enum {NETDEV_STATS, IXGB_STATS};
-
-struct ixgb_stats {
- char stat_string[ETH_GSTRING_LEN];
- int type;
- int sizeof_stat;
- int stat_offset;
-};
-
-#define IXGB_STAT(m) IXGB_STATS, \
- sizeof_field(struct ixgb_adapter, m), \
- offsetof(struct ixgb_adapter, m)
-#define IXGB_NETDEV_STAT(m) NETDEV_STATS, \
- sizeof_field(struct net_device, m), \
- offsetof(struct net_device, m)
-
-static struct ixgb_stats ixgb_gstrings_stats[] = {
- {"rx_packets", IXGB_NETDEV_STAT(stats.rx_packets)},
- {"tx_packets", IXGB_NETDEV_STAT(stats.tx_packets)},
- {"rx_bytes", IXGB_NETDEV_STAT(stats.rx_bytes)},
- {"tx_bytes", IXGB_NETDEV_STAT(stats.tx_bytes)},
- {"rx_errors", IXGB_NETDEV_STAT(stats.rx_errors)},
- {"tx_errors", IXGB_NETDEV_STAT(stats.tx_errors)},
- {"rx_dropped", IXGB_NETDEV_STAT(stats.rx_dropped)},
- {"tx_dropped", IXGB_NETDEV_STAT(stats.tx_dropped)},
- {"multicast", IXGB_NETDEV_STAT(stats.multicast)},
- {"collisions", IXGB_NETDEV_STAT(stats.collisions)},
-
-/* { "rx_length_errors", IXGB_NETDEV_STAT(stats.rx_length_errors) }, */
- {"rx_over_errors", IXGB_NETDEV_STAT(stats.rx_over_errors)},
- {"rx_crc_errors", IXGB_NETDEV_STAT(stats.rx_crc_errors)},
- {"rx_frame_errors", IXGB_NETDEV_STAT(stats.rx_frame_errors)},
- {"rx_no_buffer_count", IXGB_STAT(stats.rnbc)},
- {"rx_fifo_errors", IXGB_NETDEV_STAT(stats.rx_fifo_errors)},
- {"rx_missed_errors", IXGB_NETDEV_STAT(stats.rx_missed_errors)},
- {"tx_aborted_errors", IXGB_NETDEV_STAT(stats.tx_aborted_errors)},
- {"tx_carrier_errors", IXGB_NETDEV_STAT(stats.tx_carrier_errors)},
- {"tx_fifo_errors", IXGB_NETDEV_STAT(stats.tx_fifo_errors)},
- {"tx_heartbeat_errors", IXGB_NETDEV_STAT(stats.tx_heartbeat_errors)},
- {"tx_window_errors", IXGB_NETDEV_STAT(stats.tx_window_errors)},
- {"tx_deferred_ok", IXGB_STAT(stats.dc)},
- {"tx_timeout_count", IXGB_STAT(tx_timeout_count) },
- {"tx_restart_queue", IXGB_STAT(restart_queue) },
- {"rx_long_length_errors", IXGB_STAT(stats.roc)},
- {"rx_short_length_errors", IXGB_STAT(stats.ruc)},
- {"tx_tcp_seg_good", IXGB_STAT(stats.tsctc)},
- {"tx_tcp_seg_failed", IXGB_STAT(stats.tsctfc)},
- {"rx_flow_control_xon", IXGB_STAT(stats.xonrxc)},
- {"rx_flow_control_xoff", IXGB_STAT(stats.xoffrxc)},
- {"tx_flow_control_xon", IXGB_STAT(stats.xontxc)},
- {"tx_flow_control_xoff", IXGB_STAT(stats.xofftxc)},
- {"rx_csum_offload_good", IXGB_STAT(hw_csum_rx_good)},
- {"rx_csum_offload_errors", IXGB_STAT(hw_csum_rx_error)},
- {"tx_csum_offload_good", IXGB_STAT(hw_csum_tx_good)},
- {"tx_csum_offload_errors", IXGB_STAT(hw_csum_tx_error)}
-};
-
-#define IXGB_STATS_LEN ARRAY_SIZE(ixgb_gstrings_stats)
-
-static int
-ixgb_get_link_ksettings(struct net_device *netdev,
- struct ethtool_link_ksettings *cmd)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- ethtool_link_ksettings_zero_link_mode(cmd, supported);
- ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
- ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
-
- ethtool_link_ksettings_zero_link_mode(cmd, advertising);
- ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
- ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
-
- cmd->base.port = PORT_FIBRE;
-
- if (netif_carrier_ok(adapter->netdev)) {
- cmd->base.speed = SPEED_10000;
- cmd->base.duplex = DUPLEX_FULL;
- } else {
- cmd->base.speed = SPEED_UNKNOWN;
- cmd->base.duplex = DUPLEX_UNKNOWN;
- }
-
- cmd->base.autoneg = AUTONEG_DISABLE;
- return 0;
-}
-
-void ixgb_set_speed_duplex(struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- /* be optimistic about our link, since we were up before */
- adapter->link_speed = 10000;
- adapter->link_duplex = FULL_DUPLEX;
- netif_carrier_on(netdev);
- netif_wake_queue(netdev);
-}
-
-static int
-ixgb_set_link_ksettings(struct net_device *netdev,
- const struct ethtool_link_ksettings *cmd)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- u32 speed = cmd->base.speed;
-
- if (cmd->base.autoneg == AUTONEG_ENABLE ||
- (speed + cmd->base.duplex != SPEED_10000 + DUPLEX_FULL))
- return -EINVAL;
-
- if (netif_running(adapter->netdev)) {
- ixgb_down(adapter, true);
- ixgb_reset(adapter);
- ixgb_up(adapter);
- ixgb_set_speed_duplex(netdev);
- } else
- ixgb_reset(adapter);
-
- return 0;
-}
-
-static void
-ixgb_get_pauseparam(struct net_device *netdev,
- struct ethtool_pauseparam *pause)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
-
- pause->autoneg = AUTONEG_DISABLE;
-
- if (hw->fc.type == ixgb_fc_rx_pause)
- pause->rx_pause = 1;
- else if (hw->fc.type == ixgb_fc_tx_pause)
- pause->tx_pause = 1;
- else if (hw->fc.type == ixgb_fc_full) {
- pause->rx_pause = 1;
- pause->tx_pause = 1;
- }
-}
-
-static int
-ixgb_set_pauseparam(struct net_device *netdev,
- struct ethtool_pauseparam *pause)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
-
- if (pause->autoneg == AUTONEG_ENABLE)
- return -EINVAL;
-
- if (pause->rx_pause && pause->tx_pause)
- hw->fc.type = ixgb_fc_full;
- else if (pause->rx_pause && !pause->tx_pause)
- hw->fc.type = ixgb_fc_rx_pause;
- else if (!pause->rx_pause && pause->tx_pause)
- hw->fc.type = ixgb_fc_tx_pause;
- else if (!pause->rx_pause && !pause->tx_pause)
- hw->fc.type = ixgb_fc_none;
-
- if (netif_running(adapter->netdev)) {
- ixgb_down(adapter, true);
- ixgb_up(adapter);
- ixgb_set_speed_duplex(netdev);
- } else
- ixgb_reset(adapter);
-
- return 0;
-}
-
-static u32
-ixgb_get_msglevel(struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- return adapter->msg_enable;
-}
-
-static void
-ixgb_set_msglevel(struct net_device *netdev, u32 data)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- adapter->msg_enable = data;
-}
-#define IXGB_GET_STAT(_A_, _R_) _A_->stats._R_
-
-static int
-ixgb_get_regs_len(struct net_device *netdev)
-{
-#define IXGB_REG_DUMP_LEN 136*sizeof(u32)
- return IXGB_REG_DUMP_LEN;
-}
-
-static void
-ixgb_get_regs(struct net_device *netdev,
- struct ethtool_regs *regs, void *p)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
- u32 *reg = p;
- u32 *reg_start = reg;
- u8 i;
-
- /* the 1 (one) below indicates an attempt at versioning, if the
- * interface in ethtool or the driver changes, this 1 should be
- * incremented */
- regs->version = (1<<24) | hw->revision_id << 16 | hw->device_id;
-
- /* General Registers */
- *reg++ = IXGB_READ_REG(hw, CTRL0); /* 0 */
- *reg++ = IXGB_READ_REG(hw, CTRL1); /* 1 */
- *reg++ = IXGB_READ_REG(hw, STATUS); /* 2 */
- *reg++ = IXGB_READ_REG(hw, EECD); /* 3 */
- *reg++ = IXGB_READ_REG(hw, MFS); /* 4 */
-
- /* Interrupt */
- *reg++ = IXGB_READ_REG(hw, ICR); /* 5 */
- *reg++ = IXGB_READ_REG(hw, ICS); /* 6 */
- *reg++ = IXGB_READ_REG(hw, IMS); /* 7 */
- *reg++ = IXGB_READ_REG(hw, IMC); /* 8 */
-
- /* Receive */
- *reg++ = IXGB_READ_REG(hw, RCTL); /* 9 */
- *reg++ = IXGB_READ_REG(hw, FCRTL); /* 10 */
- *reg++ = IXGB_READ_REG(hw, FCRTH); /* 11 */
- *reg++ = IXGB_READ_REG(hw, RDBAL); /* 12 */
- *reg++ = IXGB_READ_REG(hw, RDBAH); /* 13 */
- *reg++ = IXGB_READ_REG(hw, RDLEN); /* 14 */
- *reg++ = IXGB_READ_REG(hw, RDH); /* 15 */
- *reg++ = IXGB_READ_REG(hw, RDT); /* 16 */
- *reg++ = IXGB_READ_REG(hw, RDTR); /* 17 */
- *reg++ = IXGB_READ_REG(hw, RXDCTL); /* 18 */
- *reg++ = IXGB_READ_REG(hw, RAIDC); /* 19 */
- *reg++ = IXGB_READ_REG(hw, RXCSUM); /* 20 */
-
- /* there are 16 RAR entries in hardware, we only use 3 */
- for (i = 0; i < IXGB_ALL_RAR_ENTRIES; i++) {
- *reg++ = IXGB_READ_REG_ARRAY(hw, RAL, (i << 1)); /*21,...,51 */
- *reg++ = IXGB_READ_REG_ARRAY(hw, RAH, (i << 1)); /*22,...,52 */
- }
-
- /* Transmit */
- *reg++ = IXGB_READ_REG(hw, TCTL); /* 53 */
- *reg++ = IXGB_READ_REG(hw, TDBAL); /* 54 */
- *reg++ = IXGB_READ_REG(hw, TDBAH); /* 55 */
- *reg++ = IXGB_READ_REG(hw, TDLEN); /* 56 */
- *reg++ = IXGB_READ_REG(hw, TDH); /* 57 */
- *reg++ = IXGB_READ_REG(hw, TDT); /* 58 */
- *reg++ = IXGB_READ_REG(hw, TIDV); /* 59 */
- *reg++ = IXGB_READ_REG(hw, TXDCTL); /* 60 */
- *reg++ = IXGB_READ_REG(hw, TSPMT); /* 61 */
- *reg++ = IXGB_READ_REG(hw, PAP); /* 62 */
-
- /* Physical */
- *reg++ = IXGB_READ_REG(hw, PCSC1); /* 63 */
- *reg++ = IXGB_READ_REG(hw, PCSC2); /* 64 */
- *reg++ = IXGB_READ_REG(hw, PCSS1); /* 65 */
- *reg++ = IXGB_READ_REG(hw, PCSS2); /* 66 */
- *reg++ = IXGB_READ_REG(hw, XPCSS); /* 67 */
- *reg++ = IXGB_READ_REG(hw, UCCR); /* 68 */
- *reg++ = IXGB_READ_REG(hw, XPCSTC); /* 69 */
- *reg++ = IXGB_READ_REG(hw, MACA); /* 70 */
- *reg++ = IXGB_READ_REG(hw, APAE); /* 71 */
- *reg++ = IXGB_READ_REG(hw, ARD); /* 72 */
- *reg++ = IXGB_READ_REG(hw, AIS); /* 73 */
- *reg++ = IXGB_READ_REG(hw, MSCA); /* 74 */
- *reg++ = IXGB_READ_REG(hw, MSRWD); /* 75 */
-
- /* Statistics */
- *reg++ = IXGB_GET_STAT(adapter, tprl); /* 76 */
- *reg++ = IXGB_GET_STAT(adapter, tprh); /* 77 */
- *reg++ = IXGB_GET_STAT(adapter, gprcl); /* 78 */
- *reg++ = IXGB_GET_STAT(adapter, gprch); /* 79 */
- *reg++ = IXGB_GET_STAT(adapter, bprcl); /* 80 */
- *reg++ = IXGB_GET_STAT(adapter, bprch); /* 81 */
- *reg++ = IXGB_GET_STAT(adapter, mprcl); /* 82 */
- *reg++ = IXGB_GET_STAT(adapter, mprch); /* 83 */
- *reg++ = IXGB_GET_STAT(adapter, uprcl); /* 84 */
- *reg++ = IXGB_GET_STAT(adapter, uprch); /* 85 */
- *reg++ = IXGB_GET_STAT(adapter, vprcl); /* 86 */
- *reg++ = IXGB_GET_STAT(adapter, vprch); /* 87 */
- *reg++ = IXGB_GET_STAT(adapter, jprcl); /* 88 */
- *reg++ = IXGB_GET_STAT(adapter, jprch); /* 89 */
- *reg++ = IXGB_GET_STAT(adapter, gorcl); /* 90 */
- *reg++ = IXGB_GET_STAT(adapter, gorch); /* 91 */
- *reg++ = IXGB_GET_STAT(adapter, torl); /* 92 */
- *reg++ = IXGB_GET_STAT(adapter, torh); /* 93 */
- *reg++ = IXGB_GET_STAT(adapter, rnbc); /* 94 */
- *reg++ = IXGB_GET_STAT(adapter, ruc); /* 95 */
- *reg++ = IXGB_GET_STAT(adapter, roc); /* 96 */
- *reg++ = IXGB_GET_STAT(adapter, rlec); /* 97 */
- *reg++ = IXGB_GET_STAT(adapter, crcerrs); /* 98 */
- *reg++ = IXGB_GET_STAT(adapter, icbc); /* 99 */
- *reg++ = IXGB_GET_STAT(adapter, ecbc); /* 100 */
- *reg++ = IXGB_GET_STAT(adapter, mpc); /* 101 */
- *reg++ = IXGB_GET_STAT(adapter, tptl); /* 102 */
- *reg++ = IXGB_GET_STAT(adapter, tpth); /* 103 */
- *reg++ = IXGB_GET_STAT(adapter, gptcl); /* 104 */
- *reg++ = IXGB_GET_STAT(adapter, gptch); /* 105 */
- *reg++ = IXGB_GET_STAT(adapter, bptcl); /* 106 */
- *reg++ = IXGB_GET_STAT(adapter, bptch); /* 107 */
- *reg++ = IXGB_GET_STAT(adapter, mptcl); /* 108 */
- *reg++ = IXGB_GET_STAT(adapter, mptch); /* 109 */
- *reg++ = IXGB_GET_STAT(adapter, uptcl); /* 110 */
- *reg++ = IXGB_GET_STAT(adapter, uptch); /* 111 */
- *reg++ = IXGB_GET_STAT(adapter, vptcl); /* 112 */
- *reg++ = IXGB_GET_STAT(adapter, vptch); /* 113 */
- *reg++ = IXGB_GET_STAT(adapter, jptcl); /* 114 */
- *reg++ = IXGB_GET_STAT(adapter, jptch); /* 115 */
- *reg++ = IXGB_GET_STAT(adapter, gotcl); /* 116 */
- *reg++ = IXGB_GET_STAT(adapter, gotch); /* 117 */
- *reg++ = IXGB_GET_STAT(adapter, totl); /* 118 */
- *reg++ = IXGB_GET_STAT(adapter, toth); /* 119 */
- *reg++ = IXGB_GET_STAT(adapter, dc); /* 120 */
- *reg++ = IXGB_GET_STAT(adapter, plt64c); /* 121 */
- *reg++ = IXGB_GET_STAT(adapter, tsctc); /* 122 */
- *reg++ = IXGB_GET_STAT(adapter, tsctfc); /* 123 */
- *reg++ = IXGB_GET_STAT(adapter, ibic); /* 124 */
- *reg++ = IXGB_GET_STAT(adapter, rfc); /* 125 */
- *reg++ = IXGB_GET_STAT(adapter, lfc); /* 126 */
- *reg++ = IXGB_GET_STAT(adapter, pfrc); /* 127 */
- *reg++ = IXGB_GET_STAT(adapter, pftc); /* 128 */
- *reg++ = IXGB_GET_STAT(adapter, mcfrc); /* 129 */
- *reg++ = IXGB_GET_STAT(adapter, mcftc); /* 130 */
- *reg++ = IXGB_GET_STAT(adapter, xonrxc); /* 131 */
- *reg++ = IXGB_GET_STAT(adapter, xontxc); /* 132 */
- *reg++ = IXGB_GET_STAT(adapter, xoffrxc); /* 133 */
- *reg++ = IXGB_GET_STAT(adapter, xofftxc); /* 134 */
- *reg++ = IXGB_GET_STAT(adapter, rjc); /* 135 */
-
- regs->len = (reg - reg_start) * sizeof(u32);
-}
-
-static int
-ixgb_get_eeprom_len(struct net_device *netdev)
-{
- /* return size in bytes */
- return IXGB_EEPROM_SIZE << 1;
-}
-
-static int
-ixgb_get_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
- __le16 *eeprom_buff;
- int i, max_len, first_word, last_word;
- int ret_val = 0;
-
- if (eeprom->len == 0) {
- ret_val = -EINVAL;
- goto geeprom_error;
- }
-
- eeprom->magic = hw->vendor_id | (hw->device_id << 16);
-
- max_len = ixgb_get_eeprom_len(netdev);
-
- if (eeprom->offset > eeprom->offset + eeprom->len) {
- ret_val = -EINVAL;
- goto geeprom_error;
- }
-
- if ((eeprom->offset + eeprom->len) > max_len)
- eeprom->len = (max_len - eeprom->offset);
-
- first_word = eeprom->offset >> 1;
- last_word = (eeprom->offset + eeprom->len - 1) >> 1;
-
- eeprom_buff = kmalloc_array(last_word - first_word + 1,
- sizeof(__le16),
- GFP_KERNEL);
- if (!eeprom_buff)
- return -ENOMEM;
-
- /* note the eeprom was good because the driver loaded */
- for (i = 0; i <= (last_word - first_word); i++)
- eeprom_buff[i] = ixgb_get_eeprom_word(hw, (first_word + i));
-
- memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len);
- kfree(eeprom_buff);
-
-geeprom_error:
- return ret_val;
-}
-
-static int
-ixgb_set_eeprom(struct net_device *netdev,
- struct ethtool_eeprom *eeprom, u8 *bytes)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
- u16 *eeprom_buff;
- void *ptr;
- int max_len, first_word, last_word;
- u16 i;
-
- if (eeprom->len == 0)
- return -EINVAL;
-
- if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16)))
- return -EFAULT;
-
- max_len = ixgb_get_eeprom_len(netdev);
-
- if (eeprom->offset > eeprom->offset + eeprom->len)
- return -EINVAL;
-
- if ((eeprom->offset + eeprom->len) > max_len)
- eeprom->len = (max_len - eeprom->offset);
-
- first_word = eeprom->offset >> 1;
- last_word = (eeprom->offset + eeprom->len - 1) >> 1;
- eeprom_buff = kmalloc(max_len, GFP_KERNEL);
- if (!eeprom_buff)
- return -ENOMEM;
-
- ptr = (void *)eeprom_buff;
-
- if (eeprom->offset & 1) {
- /* need read/modify/write of first changed EEPROM word */
- /* only the second byte of the word is being modified */
- eeprom_buff[0] = ixgb_read_eeprom(hw, first_word);
- ptr++;
- }
- if ((eeprom->offset + eeprom->len) & 1) {
- /* need read/modify/write of last changed EEPROM word */
- /* only the first byte of the word is being modified */
- eeprom_buff[last_word - first_word]
- = ixgb_read_eeprom(hw, last_word);
- }
-
- memcpy(ptr, bytes, eeprom->len);
- for (i = 0; i <= (last_word - first_word); i++)
- ixgb_write_eeprom(hw, first_word + i, eeprom_buff[i]);
-
- /* Update the checksum over the first part of the EEPROM if needed */
- if (first_word <= EEPROM_CHECKSUM_REG)
- ixgb_update_eeprom_checksum(hw);
-
- kfree(eeprom_buff);
- return 0;
-}
-
-static void
-ixgb_get_drvinfo(struct net_device *netdev,
- struct ethtool_drvinfo *drvinfo)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- strscpy(drvinfo->driver, ixgb_driver_name,
- sizeof(drvinfo->driver));
- strscpy(drvinfo->bus_info, pci_name(adapter->pdev),
- sizeof(drvinfo->bus_info));
-}
-
-static void
-ixgb_get_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring,
- struct kernel_ethtool_ringparam *kernel_ring,
- struct netlink_ext_ack *extack)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_desc_ring *txdr = &adapter->tx_ring;
- struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
-
- ring->rx_max_pending = MAX_RXD;
- ring->tx_max_pending = MAX_TXD;
- ring->rx_pending = rxdr->count;
- ring->tx_pending = txdr->count;
-}
-
-static int
-ixgb_set_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring,
- struct kernel_ethtool_ringparam *kernel_ring,
- struct netlink_ext_ack *extack)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_desc_ring *txdr = &adapter->tx_ring;
- struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
- struct ixgb_desc_ring tx_old, tx_new, rx_old, rx_new;
- int err;
-
- tx_old = adapter->tx_ring;
- rx_old = adapter->rx_ring;
-
- if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
- return -EINVAL;
-
- if (netif_running(adapter->netdev))
- ixgb_down(adapter, true);
-
- rxdr->count = max(ring->rx_pending,(u32)MIN_RXD);
- rxdr->count = min(rxdr->count,(u32)MAX_RXD);
- rxdr->count = ALIGN(rxdr->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
-
- txdr->count = max(ring->tx_pending,(u32)MIN_TXD);
- txdr->count = min(txdr->count,(u32)MAX_TXD);
- txdr->count = ALIGN(txdr->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
-
- if (netif_running(adapter->netdev)) {
- /* Try to get new resources before deleting old */
- if ((err = ixgb_setup_rx_resources(adapter)))
- goto err_setup_rx;
- if ((err = ixgb_setup_tx_resources(adapter)))
- goto err_setup_tx;
-
- /* save the new, restore the old in order to free it,
- * then restore the new back again */
-
- rx_new = adapter->rx_ring;
- tx_new = adapter->tx_ring;
- adapter->rx_ring = rx_old;
- adapter->tx_ring = tx_old;
- ixgb_free_rx_resources(adapter);
- ixgb_free_tx_resources(adapter);
- adapter->rx_ring = rx_new;
- adapter->tx_ring = tx_new;
- if ((err = ixgb_up(adapter)))
- return err;
- ixgb_set_speed_duplex(netdev);
- }
-
- return 0;
-err_setup_tx:
- ixgb_free_rx_resources(adapter);
-err_setup_rx:
- adapter->rx_ring = rx_old;
- adapter->tx_ring = tx_old;
- ixgb_up(adapter);
- return err;
-}
-
-static int
-ixgb_set_phys_id(struct net_device *netdev, enum ethtool_phys_id_state state)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- switch (state) {
- case ETHTOOL_ID_ACTIVE:
- return 2;
-
- case ETHTOOL_ID_ON:
- ixgb_led_on(&adapter->hw);
- break;
-
- case ETHTOOL_ID_OFF:
- case ETHTOOL_ID_INACTIVE:
- ixgb_led_off(&adapter->hw);
- }
-
- return 0;
-}
-
-static int
-ixgb_get_sset_count(struct net_device *netdev, int sset)
-{
- switch (sset) {
- case ETH_SS_STATS:
- return IXGB_STATS_LEN;
- default:
- return -EOPNOTSUPP;
- }
-}
-
-static void
-ixgb_get_ethtool_stats(struct net_device *netdev,
- struct ethtool_stats *stats, u64 *data)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- int i;
- char *p = NULL;
-
- ixgb_update_stats(adapter);
- for (i = 0; i < IXGB_STATS_LEN; i++) {
- switch (ixgb_gstrings_stats[i].type) {
- case NETDEV_STATS:
- p = (char *) netdev +
- ixgb_gstrings_stats[i].stat_offset;
- break;
- case IXGB_STATS:
- p = (char *) adapter +
- ixgb_gstrings_stats[i].stat_offset;
- break;
- }
-
- data[i] = (ixgb_gstrings_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
- }
-}
-
-static void
-ixgb_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
-{
- int i;
-
- switch(stringset) {
- case ETH_SS_STATS:
- for (i = 0; i < IXGB_STATS_LEN; i++) {
- memcpy(data + i * ETH_GSTRING_LEN,
- ixgb_gstrings_stats[i].stat_string,
- ETH_GSTRING_LEN);
- }
- break;
- }
-}
-
-static const struct ethtool_ops ixgb_ethtool_ops = {
- .get_drvinfo = ixgb_get_drvinfo,
- .get_regs_len = ixgb_get_regs_len,
- .get_regs = ixgb_get_regs,
- .get_link = ethtool_op_get_link,
- .get_eeprom_len = ixgb_get_eeprom_len,
- .get_eeprom = ixgb_get_eeprom,
- .set_eeprom = ixgb_set_eeprom,
- .get_ringparam = ixgb_get_ringparam,
- .set_ringparam = ixgb_set_ringparam,
- .get_pauseparam = ixgb_get_pauseparam,
- .set_pauseparam = ixgb_set_pauseparam,
- .get_msglevel = ixgb_get_msglevel,
- .set_msglevel = ixgb_set_msglevel,
- .get_strings = ixgb_get_strings,
- .set_phys_id = ixgb_set_phys_id,
- .get_sset_count = ixgb_get_sset_count,
- .get_ethtool_stats = ixgb_get_ethtool_stats,
- .get_link_ksettings = ixgb_get_link_ksettings,
- .set_link_ksettings = ixgb_set_link_ksettings,
-};
-
-void ixgb_set_ethtool_ops(struct net_device *netdev)
-{
- netdev->ethtool_ops = &ixgb_ethtool_ops;
-}
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-/* ixgb_hw.c
- * Shared functions for accessing and configuring the adapter
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/pci_ids.h>
-#include "ixgb_hw.h"
-#include "ixgb_ids.h"
-
-#include <linux/etherdevice.h>
-
-/* Local function prototypes */
-
-static u32 ixgb_hash_mc_addr(struct ixgb_hw *hw, u8 * mc_addr);
-
-static void ixgb_mta_set(struct ixgb_hw *hw, u32 hash_value);
-
-static void ixgb_get_bus_info(struct ixgb_hw *hw);
-
-static bool ixgb_link_reset(struct ixgb_hw *hw);
-
-static void ixgb_optics_reset(struct ixgb_hw *hw);
-
-static void ixgb_optics_reset_bcm(struct ixgb_hw *hw);
-
-static ixgb_phy_type ixgb_identify_phy(struct ixgb_hw *hw);
-
-static void ixgb_clear_hw_cntrs(struct ixgb_hw *hw);
-
-static void ixgb_clear_vfta(struct ixgb_hw *hw);
-
-static void ixgb_init_rx_addrs(struct ixgb_hw *hw);
-
-static u16 ixgb_read_phy_reg(struct ixgb_hw *hw,
- u32 reg_address,
- u32 phy_address,
- u32 device_type);
-
-static bool ixgb_setup_fc(struct ixgb_hw *hw);
-
-static bool mac_addr_valid(u8 *mac_addr);
-
-static u32 ixgb_mac_reset(struct ixgb_hw *hw)
-{
- u32 ctrl_reg;
-
- ctrl_reg = IXGB_CTRL0_RST |
- IXGB_CTRL0_SDP3_DIR | /* All pins are Output=1 */
- IXGB_CTRL0_SDP2_DIR |
- IXGB_CTRL0_SDP1_DIR |
- IXGB_CTRL0_SDP0_DIR |
- IXGB_CTRL0_SDP3 | /* Initial value 1101 */
- IXGB_CTRL0_SDP2 |
- IXGB_CTRL0_SDP0;
-
-#ifdef HP_ZX1
- /* Workaround for 82597EX reset errata */
- IXGB_WRITE_REG_IO(hw, CTRL0, ctrl_reg);
-#else
- IXGB_WRITE_REG(hw, CTRL0, ctrl_reg);
-#endif
-
- /* Delay a few ms just to allow the reset to complete */
- msleep(IXGB_DELAY_AFTER_RESET);
- ctrl_reg = IXGB_READ_REG(hw, CTRL0);
-#ifdef DBG
- /* Make sure the self-clearing global reset bit did self clear */
- ASSERT(!(ctrl_reg & IXGB_CTRL0_RST));
-#endif
-
- if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN) {
- ctrl_reg = /* Enable interrupt from XFP and SerDes */
- IXGB_CTRL1_GPI0_EN |
- IXGB_CTRL1_SDP6_DIR |
- IXGB_CTRL1_SDP7_DIR |
- IXGB_CTRL1_SDP6 |
- IXGB_CTRL1_SDP7;
- IXGB_WRITE_REG(hw, CTRL1, ctrl_reg);
- ixgb_optics_reset_bcm(hw);
- }
-
- if (hw->phy_type == ixgb_phy_type_txn17401)
- ixgb_optics_reset(hw);
-
- return ctrl_reg;
-}
-
-/******************************************************************************
- * Reset the transmit and receive units; mask and clear all interrupts.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-bool
-ixgb_adapter_stop(struct ixgb_hw *hw)
-{
- u32 ctrl_reg;
-
- ENTER();
-
- /* If we are stopped or resetting, exit gracefully and wait to be
- * started again before accessing the hardware.
- */
- if (hw->adapter_stopped) {
- pr_debug("Exiting because the adapter is already stopped!!!\n");
- return false;
- }
-
- /* Set the Adapter Stopped flag so other driver functions stop
- * touching the Hardware.
- */
- hw->adapter_stopped = true;
-
- /* Clear interrupt mask to stop board from generating interrupts */
- pr_debug("Masking off all interrupts\n");
- IXGB_WRITE_REG(hw, IMC, 0xFFFFFFFF);
-
- /* Disable the Transmit and Receive units. Then delay to allow
- * any pending transactions to complete before we hit the MAC with
- * the global reset.
- */
- IXGB_WRITE_REG(hw, RCTL, IXGB_READ_REG(hw, RCTL) & ~IXGB_RCTL_RXEN);
- IXGB_WRITE_REG(hw, TCTL, IXGB_READ_REG(hw, TCTL) & ~IXGB_TCTL_TXEN);
- IXGB_WRITE_FLUSH(hw);
- msleep(IXGB_DELAY_BEFORE_RESET);
-
- /* Issue a global reset to the MAC. This will reset the chip's
- * transmit, receive, DMA, and link units. It will not affect
- * the current PCI configuration. The global reset bit is self-
- * clearing, and should clear within a microsecond.
- */
- pr_debug("Issuing a global reset to MAC\n");
-
- ctrl_reg = ixgb_mac_reset(hw);
-
- /* Clear interrupt mask to stop board from generating interrupts */
- pr_debug("Masking off all interrupts\n");
- IXGB_WRITE_REG(hw, IMC, 0xffffffff);
-
- /* Clear any pending interrupt events. */
- IXGB_READ_REG(hw, ICR);
-
- return ctrl_reg & IXGB_CTRL0_RST;
-}
-
-
-/******************************************************************************
- * Identifies the vendor of the optics module on the adapter. The SR adapters
- * support two different types of XPAK optics, so it is necessary to determine
- * which optics are present before applying any optics-specific workarounds.
- *
- * hw - Struct containing variables accessed by shared code.
- *
- * Returns: the vendor of the XPAK optics module.
- *****************************************************************************/
-static ixgb_xpak_vendor
-ixgb_identify_xpak_vendor(struct ixgb_hw *hw)
-{
- u32 i;
- u16 vendor_name[5];
- ixgb_xpak_vendor xpak_vendor;
-
- ENTER();
-
- /* Read the first few bytes of the vendor string from the XPAK NVR
- * registers. These are standard XENPAK/XPAK registers, so all XPAK
- * devices should implement them. */
- for (i = 0; i < 5; i++) {
- vendor_name[i] = ixgb_read_phy_reg(hw,
- MDIO_PMA_PMD_XPAK_VENDOR_NAME
- + i, IXGB_PHY_ADDRESS,
- MDIO_MMD_PMAPMD);
- }
-
- /* Determine the actual vendor */
- if (vendor_name[0] == 'I' &&
- vendor_name[1] == 'N' &&
- vendor_name[2] == 'T' &&
- vendor_name[3] == 'E' && vendor_name[4] == 'L') {
- xpak_vendor = ixgb_xpak_vendor_intel;
- } else {
- xpak_vendor = ixgb_xpak_vendor_infineon;
- }
-
- return xpak_vendor;
-}
-
-/******************************************************************************
- * Determine the physical layer module on the adapter.
- *
- * hw - Struct containing variables accessed by shared code. The device_id
- * field must be (correctly) populated before calling this routine.
- *
- * Returns: the phy type of the adapter.
- *****************************************************************************/
-static ixgb_phy_type
-ixgb_identify_phy(struct ixgb_hw *hw)
-{
- ixgb_phy_type phy_type;
- ixgb_xpak_vendor xpak_vendor;
-
- ENTER();
-
- /* Infer the transceiver/phy type from the device id */
- switch (hw->device_id) {
- case IXGB_DEVICE_ID_82597EX:
- pr_debug("Identified TXN17401 optics\n");
- phy_type = ixgb_phy_type_txn17401;
- break;
-
- case IXGB_DEVICE_ID_82597EX_SR:
- /* The SR adapters carry two different types of XPAK optics
- * modules; read the vendor identifier to determine the exact
- * type of optics. */
- xpak_vendor = ixgb_identify_xpak_vendor(hw);
- if (xpak_vendor == ixgb_xpak_vendor_intel) {
- pr_debug("Identified TXN17201 optics\n");
- phy_type = ixgb_phy_type_txn17201;
- } else {
- pr_debug("Identified G6005 optics\n");
- phy_type = ixgb_phy_type_g6005;
- }
- break;
- case IXGB_DEVICE_ID_82597EX_LR:
- pr_debug("Identified G6104 optics\n");
- phy_type = ixgb_phy_type_g6104;
- break;
- case IXGB_DEVICE_ID_82597EX_CX4:
- pr_debug("Identified CX4\n");
- xpak_vendor = ixgb_identify_xpak_vendor(hw);
- if (xpak_vendor == ixgb_xpak_vendor_intel) {
- pr_debug("Identified TXN17201 optics\n");
- phy_type = ixgb_phy_type_txn17201;
- } else {
- pr_debug("Identified G6005 optics\n");
- phy_type = ixgb_phy_type_g6005;
- }
- break;
- default:
- pr_debug("Unknown physical layer module\n");
- phy_type = ixgb_phy_type_unknown;
- break;
- }
-
- /* update phy type for sun specific board */
- if (hw->subsystem_vendor_id == PCI_VENDOR_ID_SUN)
- phy_type = ixgb_phy_type_bcm;
-
- return phy_type;
-}
-
-/******************************************************************************
- * Performs basic configuration of the adapter.
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Resets the controller.
- * Reads and validates the EEPROM.
- * Initializes the receive address registers.
- * Initializes the multicast table.
- * Clears all on-chip counters.
- * Calls routine to setup flow control settings.
- * Leaves the transmit and receive units disabled and uninitialized.
- *
- * Returns:
- * true if successful,
- * false if unrecoverable problems were encountered.
- *****************************************************************************/
-bool
-ixgb_init_hw(struct ixgb_hw *hw)
-{
- u32 i;
- bool status;
-
- ENTER();
-
- /* Issue a global reset to the MAC. This will reset the chip's
- * transmit, receive, DMA, and link units. It will not affect
- * the current PCI configuration. The global reset bit is self-
- * clearing, and should clear within a microsecond.
- */
- pr_debug("Issuing a global reset to MAC\n");
-
- ixgb_mac_reset(hw);
-
- pr_debug("Issuing an EE reset to MAC\n");
-#ifdef HP_ZX1
- /* Workaround for 82597EX reset errata */
- IXGB_WRITE_REG_IO(hw, CTRL1, IXGB_CTRL1_EE_RST);
-#else
- IXGB_WRITE_REG(hw, CTRL1, IXGB_CTRL1_EE_RST);
-#endif
-
- /* Delay a few ms just to allow the reset to complete */
- msleep(IXGB_DELAY_AFTER_EE_RESET);
-
- if (!ixgb_get_eeprom_data(hw))
- return false;
-
- /* Use the device id to determine the type of phy/transceiver. */
- hw->device_id = ixgb_get_ee_device_id(hw);
- hw->phy_type = ixgb_identify_phy(hw);
-
- /* Setup the receive addresses.
- * Receive Address Registers (RARs 0 - 15).
- */
- ixgb_init_rx_addrs(hw);
-
- /*
- * Check that a valid MAC address has been set.
- * If it is not valid, we fail hardware init.
- */
- if (!mac_addr_valid(hw->curr_mac_addr)) {
- pr_debug("MAC address invalid after ixgb_init_rx_addrs\n");
- return(false);
- }
-
- /* tell the routines in this file they can access hardware again */
- hw->adapter_stopped = false;
-
- /* Fill in the bus_info structure */
- ixgb_get_bus_info(hw);
-
- /* Zero out the Multicast HASH table */
- pr_debug("Zeroing the MTA\n");
- for (i = 0; i < IXGB_MC_TBL_SIZE; i++)
- IXGB_WRITE_REG_ARRAY(hw, MTA, i, 0);
-
- /* Zero out the VLAN Filter Table Array */
- ixgb_clear_vfta(hw);
-
- /* Zero all of the hardware counters */
- ixgb_clear_hw_cntrs(hw);
-
- /* Call a subroutine to setup flow control. */
- status = ixgb_setup_fc(hw);
-
- /* 82597EX errata: Call check-for-link in case lane deskew is locked */
- ixgb_check_for_link(hw);
-
- return status;
-}
-
-/******************************************************************************
- * Initializes receive address filters.
- *
- * hw - Struct containing variables accessed by shared code
- *
- * Places the MAC address in receive address register 0 and clears the rest
- * of the receive address registers. Clears the multicast table. Assumes
- * the receiver is in reset when the routine is called.
- *****************************************************************************/
-static void
-ixgb_init_rx_addrs(struct ixgb_hw *hw)
-{
- u32 i;
-
- ENTER();
-
- /*
- * If the current mac address is valid, assume it is a software override
- * to the permanent address.
- * Otherwise, use the permanent address from the eeprom.
- */
- if (!mac_addr_valid(hw->curr_mac_addr)) {
-
- /* Get the MAC address from the eeprom for later reference */
- ixgb_get_ee_mac_addr(hw, hw->curr_mac_addr);
-
- pr_debug("Keeping Permanent MAC Addr = %pM\n",
- hw->curr_mac_addr);
- } else {
-
- /* Setup the receive address. */
- pr_debug("Overriding MAC Address in RAR[0]\n");
- pr_debug("New MAC Addr = %pM\n", hw->curr_mac_addr);
-
- ixgb_rar_set(hw, hw->curr_mac_addr, 0);
- }
-
- /* Zero out the other 15 receive addresses. */
- pr_debug("Clearing RAR[1-15]\n");
- for (i = 1; i < IXGB_RAR_ENTRIES; i++) {
- /* Write the high reg first to disable the AV bit */
- IXGB_WRITE_REG_ARRAY(hw, RA, ((i << 1) + 1), 0);
- IXGB_WRITE_REG_ARRAY(hw, RA, (i << 1), 0);
- }
-}
-
-/******************************************************************************
- * Updates the MAC's list of multicast addresses.
- *
- * hw - Struct containing variables accessed by shared code
- * mc_addr_list - the list of new multicast addresses
- * mc_addr_count - number of addresses
- * pad - number of bytes between addresses in the list
- *
- * The given list replaces any existing list. Clears the last 15 receive
- * address registers and the multicast table. Uses receive address registers
- * for the first 15 multicast addresses, and hashes the rest into the
- * multicast table.
- *****************************************************************************/
-void
-ixgb_mc_addr_list_update(struct ixgb_hw *hw,
- u8 *mc_addr_list,
- u32 mc_addr_count,
- u32 pad)
-{
- u32 hash_value;
- u32 i;
- u32 rar_used_count = 1; /* RAR[0] is used for our MAC address */
- u8 *mca;
-
- ENTER();
-
- /* Set the new number of MC addresses that we are being requested to use. */
- hw->num_mc_addrs = mc_addr_count;
-
- /* Clear RAR[1-15] */
- pr_debug("Clearing RAR[1-15]\n");
- for (i = rar_used_count; i < IXGB_RAR_ENTRIES; i++) {
- IXGB_WRITE_REG_ARRAY(hw, RA, (i << 1), 0);
- IXGB_WRITE_REG_ARRAY(hw, RA, ((i << 1) + 1), 0);
- }
-
- /* Clear the MTA */
- pr_debug("Clearing MTA\n");
- for (i = 0; i < IXGB_MC_TBL_SIZE; i++)
- IXGB_WRITE_REG_ARRAY(hw, MTA, i, 0);
-
- /* Add the new addresses */
- mca = mc_addr_list;
- for (i = 0; i < mc_addr_count; i++) {
- pr_debug("Adding the multicast addresses:\n");
- pr_debug("MC Addr #%d = %pM\n", i, mca);
-
- /* Place this multicast address in the RAR if there is room,
- * else put it in the MTA
- */
- if (rar_used_count < IXGB_RAR_ENTRIES) {
- ixgb_rar_set(hw, mca, rar_used_count);
- pr_debug("Added a multicast address to RAR[%d]\n", i);
- rar_used_count++;
- } else {
- hash_value = ixgb_hash_mc_addr(hw, mca);
-
- pr_debug("Hash value = 0x%03X\n", hash_value);
-
- ixgb_mta_set(hw, hash_value);
- }
-
- mca += ETH_ALEN + pad;
- }
-
- pr_debug("MC Update Complete\n");
-}
-
-/******************************************************************************
- * Hashes an address to determine its location in the multicast table
- *
- * hw - Struct containing variables accessed by shared code
- * mc_addr - the multicast address to hash
- *
- * Returns:
- * The hash value
- *****************************************************************************/
-static u32
-ixgb_hash_mc_addr(struct ixgb_hw *hw,
- u8 *mc_addr)
-{
- u32 hash_value = 0;
-
- ENTER();
-
- /* The portion of the address that is used for the hash table is
- * determined by the mc_filter_type setting.
- */
- switch (hw->mc_filter_type) {
- /* [0] [1] [2] [3] [4] [5]
- * 01 AA 00 12 34 56
- * LSB MSB - According to H/W docs */
- case 0:
- /* [47:36] i.e. 0x563 for above example address */
- hash_value =
- ((mc_addr[4] >> 4) | (((u16) mc_addr[5]) << 4));
- break;
- case 1: /* [46:35] i.e. 0xAC6 for above example address */
- hash_value =
- ((mc_addr[4] >> 3) | (((u16) mc_addr[5]) << 5));
- break;
- case 2: /* [45:34] i.e. 0x5D8 for above example address */
- hash_value =
- ((mc_addr[4] >> 2) | (((u16) mc_addr[5]) << 6));
- break;
- case 3: /* [43:32] i.e. 0x634 for above example address */
- hash_value = ((mc_addr[4]) | (((u16) mc_addr[5]) << 8));
- break;
- default:
- /* Invalid mc_filter_type, what should we do? */
- pr_debug("MC filter type param set incorrectly\n");
- ASSERT(0);
- break;
- }
-
- hash_value &= 0xFFF;
- return hash_value;
-}
-
-/******************************************************************************
- * Sets the bit in the multicast table corresponding to the hash value.
- *
- * hw - Struct containing variables accessed by shared code
- * hash_value - Multicast address hash value
- *****************************************************************************/
-static void
-ixgb_mta_set(struct ixgb_hw *hw,
- u32 hash_value)
-{
- u32 hash_bit, hash_reg;
- u32 mta_reg;
-
- /* The MTA is a register array of 128 32-bit registers.
- * It is treated like an array of 4096 bits. We want to set
- * bit BitArray[hash_value]. So we figure out what register
- * the bit is in, read it, OR in the new bit, then write
- * back the new value. The register is determined by the
- * upper 7 bits of the hash value and the bit within that
- * register is determined by the lower 5 bits of the value.
- */
- hash_reg = (hash_value >> 5) & 0x7F;
- hash_bit = hash_value & 0x1F;
-
- mta_reg = IXGB_READ_REG_ARRAY(hw, MTA, hash_reg);
-
- mta_reg |= (1 << hash_bit);
-
- IXGB_WRITE_REG_ARRAY(hw, MTA, hash_reg, mta_reg);
-}
-
-/******************************************************************************
- * Puts an ethernet address into a receive address register.
- *
- * hw - Struct containing variables accessed by shared code
- * addr - Address to put into receive address register
- * index - Receive address register to write
- *****************************************************************************/
-void
-ixgb_rar_set(struct ixgb_hw *hw,
- const u8 *addr,
- u32 index)
-{
- u32 rar_low, rar_high;
-
- ENTER();
-
- /* HW expects these in little endian so we reverse the byte order
- * from network order (big endian) to little endian
- */
- rar_low = ((u32) addr[0] |
- ((u32)addr[1] << 8) |
- ((u32)addr[2] << 16) |
- ((u32)addr[3] << 24));
-
- rar_high = ((u32) addr[4] |
- ((u32)addr[5] << 8) |
- IXGB_RAH_AV);
-
- IXGB_WRITE_REG_ARRAY(hw, RA, (index << 1), rar_low);
- IXGB_WRITE_REG_ARRAY(hw, RA, ((index << 1) + 1), rar_high);
-}
-
-/******************************************************************************
- * Writes a value to the specified offset in the VLAN filter table.
- *
- * hw - Struct containing variables accessed by shared code
- * offset - Offset in VLAN filter table to write
- * value - Value to write into VLAN filter table
- *****************************************************************************/
-void
-ixgb_write_vfta(struct ixgb_hw *hw,
- u32 offset,
- u32 value)
-{
- IXGB_WRITE_REG_ARRAY(hw, VFTA, offset, value);
-}
-
-/******************************************************************************
- * Clears the VLAN filter table
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_clear_vfta(struct ixgb_hw *hw)
-{
- u32 offset;
-
- for (offset = 0; offset < IXGB_VLAN_FILTER_TBL_SIZE; offset++)
- IXGB_WRITE_REG_ARRAY(hw, VFTA, offset, 0);
-}
-
-/******************************************************************************
- * Configures the flow control settings based on SW configuration.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-
-static bool
-ixgb_setup_fc(struct ixgb_hw *hw)
-{
- u32 ctrl_reg;
- u32 pap_reg = 0; /* by default, assume no pause time */
- bool status = true;
-
- ENTER();
-
- /* Get the current control reg 0 settings */
- ctrl_reg = IXGB_READ_REG(hw, CTRL0);
-
- /* Clear the Receive Pause Enable and Transmit Pause Enable bits */
- ctrl_reg &= ~(IXGB_CTRL0_RPE | IXGB_CTRL0_TPE);
-
- /* The possible values of the "flow_control" parameter are:
- * 0: Flow control is completely disabled
- * 1: Rx flow control is enabled (we can receive pause frames
- * but not send pause frames).
- * 2: Tx flow control is enabled (we can send pause frames
- * but we do not support receiving pause frames).
- * 3: Both Rx and TX flow control (symmetric) are enabled.
- * other: Invalid.
- */
- switch (hw->fc.type) {
- case ixgb_fc_none: /* 0 */
- /* Set CMDC bit to disable Rx Flow control */
- ctrl_reg |= (IXGB_CTRL0_CMDC);
- break;
- case ixgb_fc_rx_pause: /* 1 */
- /* RX Flow control is enabled, and TX Flow control is
- * disabled.
- */
- ctrl_reg |= (IXGB_CTRL0_RPE);
- break;
- case ixgb_fc_tx_pause: /* 2 */
- /* TX Flow control is enabled, and RX Flow control is
- * disabled, by a software over-ride.
- */
- ctrl_reg |= (IXGB_CTRL0_TPE);
- pap_reg = hw->fc.pause_time;
- break;
- case ixgb_fc_full: /* 3 */
- /* Flow control (both RX and TX) is enabled by a software
- * over-ride.
- */
- ctrl_reg |= (IXGB_CTRL0_RPE | IXGB_CTRL0_TPE);
- pap_reg = hw->fc.pause_time;
- break;
- default:
- /* We should never get here. The value should be 0-3. */
- pr_debug("Flow control param set incorrectly\n");
- ASSERT(0);
- break;
- }
-
- /* Write the new settings */
- IXGB_WRITE_REG(hw, CTRL0, ctrl_reg);
-
- if (pap_reg != 0)
- IXGB_WRITE_REG(hw, PAP, pap_reg);
-
- /* Set the flow control receive threshold registers. Normally,
- * these registers will be set to a default threshold that may be
- * adjusted later by the driver's runtime code. However, if the
- * ability to transmit pause frames is not enabled, then these
- * registers will be set to 0.
- */
- if (!(hw->fc.type & ixgb_fc_tx_pause)) {
- IXGB_WRITE_REG(hw, FCRTL, 0);
- IXGB_WRITE_REG(hw, FCRTH, 0);
- } else {
- /* We need to set up the Receive Threshold high and low water
- * marks as well as (optionally) enabling the transmission of XON
- * frames. */
- if (hw->fc.send_xon) {
- IXGB_WRITE_REG(hw, FCRTL,
- (hw->fc.low_water | IXGB_FCRTL_XONE));
- } else {
- IXGB_WRITE_REG(hw, FCRTL, hw->fc.low_water);
- }
- IXGB_WRITE_REG(hw, FCRTH, hw->fc.high_water);
- }
- return status;
-}
-
-/******************************************************************************
- * Reads a word from a device over the Management Data Interface (MDI) bus.
- * This interface is used to manage Physical layer devices.
- *
- * hw - Struct containing variables accessed by hw code
- * reg_address - Offset of device register being read.
- * phy_address - Address of device on MDI.
- *
- * Returns: Data word (16 bits) from MDI device.
- *
- * The 82597EX has support for several MDI access methods. This routine
- * uses the new protocol MDI Single Command and Address Operation.
- * This requires that first an address cycle command is sent, followed by a
- * read command.
- *****************************************************************************/
-static u16
-ixgb_read_phy_reg(struct ixgb_hw *hw,
- u32 reg_address,
- u32 phy_address,
- u32 device_type)
-{
- u32 i;
- u32 data;
- u32 command = 0;
-
- ASSERT(reg_address <= IXGB_MAX_PHY_REG_ADDRESS);
- ASSERT(phy_address <= IXGB_MAX_PHY_ADDRESS);
- ASSERT(device_type <= IXGB_MAX_PHY_DEV_TYPE);
-
- /* Setup and write the address cycle command */
- command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
- (device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
- (phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
- (IXGB_MSCA_ADDR_CYCLE | IXGB_MSCA_MDI_COMMAND));
-
- IXGB_WRITE_REG(hw, MSCA, command);
-
- /**************************************************************
- ** Check every 10 usec to see if the address cycle completed
- ** The COMMAND bit will clear when the operation is complete.
- ** This may take as long as 64 usecs (we'll wait 100 usecs max)
- ** from the CPU Write to the Ready bit assertion.
- **************************************************************/
-
- for (i = 0; i < 10; i++)
- {
- udelay(10);
-
- command = IXGB_READ_REG(hw, MSCA);
-
- if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
- break;
- }
-
- ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
-
- /* Address cycle complete, setup and write the read command */
- command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
- (device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
- (phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
- (IXGB_MSCA_READ | IXGB_MSCA_MDI_COMMAND));
-
- IXGB_WRITE_REG(hw, MSCA, command);
-
- /**************************************************************
- ** Check every 10 usec to see if the read command completed
- ** The COMMAND bit will clear when the operation is complete.
- ** The read may take as long as 64 usecs (we'll wait 100 usecs max)
- ** from the CPU Write to the Ready bit assertion.
- **************************************************************/
-
- for (i = 0; i < 10; i++)
- {
- udelay(10);
-
- command = IXGB_READ_REG(hw, MSCA);
-
- if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
- break;
- }
-
- ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
-
- /* Operation is complete, get the data from the MDIO Read/Write Data
- * register and return.
- */
- data = IXGB_READ_REG(hw, MSRWD);
- data >>= IXGB_MSRWD_READ_DATA_SHIFT;
- return((u16) data);
-}
-
-/******************************************************************************
- * Writes a word to a device over the Management Data Interface (MDI) bus.
- * This interface is used to manage Physical layer devices.
- *
- * hw - Struct containing variables accessed by hw code
- * reg_address - Offset of device register being read.
- * phy_address - Address of device on MDI.
- * device_type - Also known as the Device ID or DID.
- * data - 16-bit value to be written
- *
- * Returns: void.
- *
- * The 82597EX has support for several MDI access methods. This routine
- * uses the new protocol MDI Single Command and Address Operation.
- * This requires that first an address cycle command is sent, followed by a
- * write command.
- *****************************************************************************/
-static void
-ixgb_write_phy_reg(struct ixgb_hw *hw,
- u32 reg_address,
- u32 phy_address,
- u32 device_type,
- u16 data)
-{
- u32 i;
- u32 command = 0;
-
- ASSERT(reg_address <= IXGB_MAX_PHY_REG_ADDRESS);
- ASSERT(phy_address <= IXGB_MAX_PHY_ADDRESS);
- ASSERT(device_type <= IXGB_MAX_PHY_DEV_TYPE);
-
- /* Put the data in the MDIO Read/Write Data register */
- IXGB_WRITE_REG(hw, MSRWD, (u32)data);
-
- /* Setup and write the address cycle command */
- command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
- (device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
- (phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
- (IXGB_MSCA_ADDR_CYCLE | IXGB_MSCA_MDI_COMMAND));
-
- IXGB_WRITE_REG(hw, MSCA, command);
-
- /**************************************************************
- ** Check every 10 usec to see if the address cycle completed
- ** The COMMAND bit will clear when the operation is complete.
- ** This may take as long as 64 usecs (we'll wait 100 usecs max)
- ** from the CPU Write to the Ready bit assertion.
- **************************************************************/
-
- for (i = 0; i < 10; i++)
- {
- udelay(10);
-
- command = IXGB_READ_REG(hw, MSCA);
-
- if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
- break;
- }
-
- ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
-
- /* Address cycle complete, setup and write the write command */
- command = ((reg_address << IXGB_MSCA_NP_ADDR_SHIFT) |
- (device_type << IXGB_MSCA_DEV_TYPE_SHIFT) |
- (phy_address << IXGB_MSCA_PHY_ADDR_SHIFT) |
- (IXGB_MSCA_WRITE | IXGB_MSCA_MDI_COMMAND));
-
- IXGB_WRITE_REG(hw, MSCA, command);
-
- /**************************************************************
- ** Check every 10 usec to see if the write command completed
- ** The COMMAND bit will clear when the operation is complete.
- ** The write may take as long as 64 usecs (we'll wait 100 usecs max)
- ** from the CPU Write to the Ready bit assertion.
- **************************************************************/
-
- for (i = 0; i < 10; i++)
- {
- udelay(10);
-
- command = IXGB_READ_REG(hw, MSCA);
-
- if ((command & IXGB_MSCA_MDI_COMMAND) == 0)
- break;
- }
-
- ASSERT((command & IXGB_MSCA_MDI_COMMAND) == 0);
-
- /* Operation is complete, return. */
-}
-
-/******************************************************************************
- * Checks to see if the link status of the hardware has changed.
- *
- * hw - Struct containing variables accessed by hw code
- *
- * Called by any function that needs to check the link status of the adapter.
- *****************************************************************************/
-void
-ixgb_check_for_link(struct ixgb_hw *hw)
-{
- u32 status_reg;
- u32 xpcss_reg;
-
- ENTER();
-
- xpcss_reg = IXGB_READ_REG(hw, XPCSS);
- status_reg = IXGB_READ_REG(hw, STATUS);
-
- if ((xpcss_reg & IXGB_XPCSS_ALIGN_STATUS) &&
- (status_reg & IXGB_STATUS_LU)) {
- hw->link_up = true;
- } else if (!(xpcss_reg & IXGB_XPCSS_ALIGN_STATUS) &&
- (status_reg & IXGB_STATUS_LU)) {
- pr_debug("XPCSS Not Aligned while Status:LU is set\n");
- hw->link_up = ixgb_link_reset(hw);
- } else {
- /*
- * 82597EX errata. Since the lane deskew problem may prevent
- * link, reset the link before reporting link down.
- */
- hw->link_up = ixgb_link_reset(hw);
- }
- /* Anything else for 10 Gig?? */
-}
-
-/******************************************************************************
- * Check for a bad link condition that may have occurred.
- * The indication is that the RFC / LFC registers may be incrementing
- * continually. A full adapter reset is required to recover.
- *
- * hw - Struct containing variables accessed by hw code
- *
- * Called by any function that needs to check the link status of the adapter.
- *****************************************************************************/
-bool ixgb_check_for_bad_link(struct ixgb_hw *hw)
-{
- u32 newLFC, newRFC;
- bool bad_link_returncode = false;
-
- if (hw->phy_type == ixgb_phy_type_txn17401) {
- newLFC = IXGB_READ_REG(hw, LFC);
- newRFC = IXGB_READ_REG(hw, RFC);
- if ((hw->lastLFC + 250 < newLFC)
- || (hw->lastRFC + 250 < newRFC)) {
- pr_debug("BAD LINK! too many LFC/RFC since last check\n");
- bad_link_returncode = true;
- }
- hw->lastLFC = newLFC;
- hw->lastRFC = newRFC;
- }
-
- return bad_link_returncode;
-}
-
-/******************************************************************************
- * Clears all hardware statistics counters.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_clear_hw_cntrs(struct ixgb_hw *hw)
-{
- ENTER();
-
- /* if we are stopped or resetting, exit gracefully */
- if (hw->adapter_stopped) {
- pr_debug("Exiting because the adapter is stopped!!!\n");
- return;
- }
-
- IXGB_READ_REG(hw, TPRL);
- IXGB_READ_REG(hw, TPRH);
- IXGB_READ_REG(hw, GPRCL);
- IXGB_READ_REG(hw, GPRCH);
- IXGB_READ_REG(hw, BPRCL);
- IXGB_READ_REG(hw, BPRCH);
- IXGB_READ_REG(hw, MPRCL);
- IXGB_READ_REG(hw, MPRCH);
- IXGB_READ_REG(hw, UPRCL);
- IXGB_READ_REG(hw, UPRCH);
- IXGB_READ_REG(hw, VPRCL);
- IXGB_READ_REG(hw, VPRCH);
- IXGB_READ_REG(hw, JPRCL);
- IXGB_READ_REG(hw, JPRCH);
- IXGB_READ_REG(hw, GORCL);
- IXGB_READ_REG(hw, GORCH);
- IXGB_READ_REG(hw, TORL);
- IXGB_READ_REG(hw, TORH);
- IXGB_READ_REG(hw, RNBC);
- IXGB_READ_REG(hw, RUC);
- IXGB_READ_REG(hw, ROC);
- IXGB_READ_REG(hw, RLEC);
- IXGB_READ_REG(hw, CRCERRS);
- IXGB_READ_REG(hw, ICBC);
- IXGB_READ_REG(hw, ECBC);
- IXGB_READ_REG(hw, MPC);
- IXGB_READ_REG(hw, TPTL);
- IXGB_READ_REG(hw, TPTH);
- IXGB_READ_REG(hw, GPTCL);
- IXGB_READ_REG(hw, GPTCH);
- IXGB_READ_REG(hw, BPTCL);
- IXGB_READ_REG(hw, BPTCH);
- IXGB_READ_REG(hw, MPTCL);
- IXGB_READ_REG(hw, MPTCH);
- IXGB_READ_REG(hw, UPTCL);
- IXGB_READ_REG(hw, UPTCH);
- IXGB_READ_REG(hw, VPTCL);
- IXGB_READ_REG(hw, VPTCH);
- IXGB_READ_REG(hw, JPTCL);
- IXGB_READ_REG(hw, JPTCH);
- IXGB_READ_REG(hw, GOTCL);
- IXGB_READ_REG(hw, GOTCH);
- IXGB_READ_REG(hw, TOTL);
- IXGB_READ_REG(hw, TOTH);
- IXGB_READ_REG(hw, DC);
- IXGB_READ_REG(hw, PLT64C);
- IXGB_READ_REG(hw, TSCTC);
- IXGB_READ_REG(hw, TSCTFC);
- IXGB_READ_REG(hw, IBIC);
- IXGB_READ_REG(hw, RFC);
- IXGB_READ_REG(hw, LFC);
- IXGB_READ_REG(hw, PFRC);
- IXGB_READ_REG(hw, PFTC);
- IXGB_READ_REG(hw, MCFRC);
- IXGB_READ_REG(hw, MCFTC);
- IXGB_READ_REG(hw, XONRXC);
- IXGB_READ_REG(hw, XONTXC);
- IXGB_READ_REG(hw, XOFFRXC);
- IXGB_READ_REG(hw, XOFFTXC);
- IXGB_READ_REG(hw, RJC);
-}
-
-/******************************************************************************
- * Turns on the software controllable LED
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-void
-ixgb_led_on(struct ixgb_hw *hw)
-{
- u32 ctrl0_reg = IXGB_READ_REG(hw, CTRL0);
-
- /* To turn on the LED, clear software-definable pin 0 (SDP0). */
- ctrl0_reg &= ~IXGB_CTRL0_SDP0;
- IXGB_WRITE_REG(hw, CTRL0, ctrl0_reg);
-}
-
-/******************************************************************************
- * Turns off the software controllable LED
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-void
-ixgb_led_off(struct ixgb_hw *hw)
-{
- u32 ctrl0_reg = IXGB_READ_REG(hw, CTRL0);
-
- /* To turn off the LED, set software-definable pin 0 (SDP0). */
- ctrl0_reg |= IXGB_CTRL0_SDP0;
- IXGB_WRITE_REG(hw, CTRL0, ctrl0_reg);
-}
-
-/******************************************************************************
- * Gets the current PCI bus type, speed, and width of the hardware
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_get_bus_info(struct ixgb_hw *hw)
-{
- u32 status_reg;
-
- status_reg = IXGB_READ_REG(hw, STATUS);
-
- hw->bus.type = (status_reg & IXGB_STATUS_PCIX_MODE) ?
- ixgb_bus_type_pcix : ixgb_bus_type_pci;
-
- if (hw->bus.type == ixgb_bus_type_pci) {
- hw->bus.speed = (status_reg & IXGB_STATUS_PCI_SPD) ?
- ixgb_bus_speed_66 : ixgb_bus_speed_33;
- } else {
- switch (status_reg & IXGB_STATUS_PCIX_SPD_MASK) {
- case IXGB_STATUS_PCIX_SPD_66:
- hw->bus.speed = ixgb_bus_speed_66;
- break;
- case IXGB_STATUS_PCIX_SPD_100:
- hw->bus.speed = ixgb_bus_speed_100;
- break;
- case IXGB_STATUS_PCIX_SPD_133:
- hw->bus.speed = ixgb_bus_speed_133;
- break;
- default:
- hw->bus.speed = ixgb_bus_speed_reserved;
- break;
- }
- }
-
- hw->bus.width = (status_reg & IXGB_STATUS_BUS64) ?
- ixgb_bus_width_64 : ixgb_bus_width_32;
-}
-
-/******************************************************************************
- * Tests a MAC address to ensure it is a valid Individual Address
- *
- * mac_addr - pointer to MAC address.
- *
- *****************************************************************************/
-static bool
-mac_addr_valid(u8 *mac_addr)
-{
- bool is_valid = true;
- ENTER();
-
- /* Make sure it is not a multicast address */
- if (is_multicast_ether_addr(mac_addr)) {
- pr_debug("MAC address is multicast\n");
- is_valid = false;
- }
- /* Not a broadcast address */
- else if (is_broadcast_ether_addr(mac_addr)) {
- pr_debug("MAC address is broadcast\n");
- is_valid = false;
- }
- /* Reject the zero address */
- else if (is_zero_ether_addr(mac_addr)) {
- pr_debug("MAC address is all zeros\n");
- is_valid = false;
- }
- return is_valid;
-}
-
-/******************************************************************************
- * Resets the 10GbE link. Waits the settle time and returns the state of
- * the link.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static bool
-ixgb_link_reset(struct ixgb_hw *hw)
-{
- bool link_status = false;
- u8 wait_retries = MAX_RESET_ITERATIONS;
- u8 lrst_retries = MAX_RESET_ITERATIONS;
-
- do {
- /* Reset the link */
- IXGB_WRITE_REG(hw, CTRL0,
- IXGB_READ_REG(hw, CTRL0) | IXGB_CTRL0_LRST);
-
- /* Wait for link-up and lane re-alignment */
- do {
- udelay(IXGB_DELAY_USECS_AFTER_LINK_RESET);
- link_status =
- ((IXGB_READ_REG(hw, STATUS) & IXGB_STATUS_LU)
- && (IXGB_READ_REG(hw, XPCSS) &
- IXGB_XPCSS_ALIGN_STATUS)) ? true : false;
- } while (!link_status && --wait_retries);
-
- } while (!link_status && --lrst_retries);
-
- return link_status;
-}
-
-/******************************************************************************
- * Resets the 10GbE optics module.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-static void
-ixgb_optics_reset(struct ixgb_hw *hw)
-{
- if (hw->phy_type == ixgb_phy_type_txn17401) {
- ixgb_write_phy_reg(hw,
- MDIO_CTRL1,
- IXGB_PHY_ADDRESS,
- MDIO_MMD_PMAPMD,
- MDIO_CTRL1_RESET);
-
- ixgb_read_phy_reg(hw, MDIO_CTRL1, IXGB_PHY_ADDRESS, MDIO_MMD_PMAPMD);
- }
-}
-
-/******************************************************************************
- * Resets the 10GbE optics module for Sun variant NIC.
- *
- * hw - Struct containing variables accessed by shared code
- *****************************************************************************/
-
-#define IXGB_BCM8704_USER_PMD_TX_CTRL_REG 0xC803
-#define IXGB_BCM8704_USER_PMD_TX_CTRL_REG_VAL 0x0164
-#define IXGB_BCM8704_USER_CTRL_REG 0xC800
-#define IXGB_BCM8704_USER_CTRL_REG_VAL 0x7FBF
-#define IXGB_BCM8704_USER_DEV3_ADDR 0x0003
-#define IXGB_SUN_PHY_ADDRESS 0x0000
-#define IXGB_SUN_PHY_RESET_DELAY 305
-
-static void
-ixgb_optics_reset_bcm(struct ixgb_hw *hw)
-{
- u32 ctrl = IXGB_READ_REG(hw, CTRL0);
- ctrl &= ~IXGB_CTRL0_SDP2;
- ctrl |= IXGB_CTRL0_SDP3;
- IXGB_WRITE_REG(hw, CTRL0, ctrl);
- IXGB_WRITE_FLUSH(hw);
-
- /* SerDes needs extra delay */
- msleep(IXGB_SUN_PHY_RESET_DELAY);
-
- /* Broadcom 7408L configuration */
- /* Reference clock config */
- ixgb_write_phy_reg(hw,
- IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR,
- IXGB_BCM8704_USER_PMD_TX_CTRL_REG_VAL);
- /* we must read the registers twice */
- ixgb_read_phy_reg(hw,
- IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR);
- ixgb_read_phy_reg(hw,
- IXGB_BCM8704_USER_PMD_TX_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR);
-
- ixgb_write_phy_reg(hw,
- IXGB_BCM8704_USER_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR,
- IXGB_BCM8704_USER_CTRL_REG_VAL);
- ixgb_read_phy_reg(hw,
- IXGB_BCM8704_USER_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR);
- ixgb_read_phy_reg(hw,
- IXGB_BCM8704_USER_CTRL_REG,
- IXGB_SUN_PHY_ADDRESS,
- IXGB_BCM8704_USER_DEV3_ADDR);
-
- /* SerDes needs extra delay */
- msleep(IXGB_SUN_PHY_RESET_DELAY);
-}
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#ifndef _IXGB_HW_H_
-#define _IXGB_HW_H_
-
-#include <linux/mdio.h>
-
-#include "ixgb_osdep.h"
-
-/* Enums */
-typedef enum {
- ixgb_mac_unknown = 0,
- ixgb_82597,
- ixgb_num_macs
-} ixgb_mac_type;
-
-/* Types of physical layer modules */
-typedef enum {
- ixgb_phy_type_unknown = 0,
- ixgb_phy_type_g6005, /* 850nm, MM fiber, XPAK transceiver */
- ixgb_phy_type_g6104, /* 1310nm, SM fiber, XPAK transceiver */
- ixgb_phy_type_txn17201, /* 850nm, MM fiber, XPAK transceiver */
- ixgb_phy_type_txn17401, /* 1310nm, SM fiber, XENPAK transceiver */
- ixgb_phy_type_bcm /* SUN specific board */
-} ixgb_phy_type;
-
-/* XPAK transceiver vendors, for the SR adapters */
-typedef enum {
- ixgb_xpak_vendor_intel,
- ixgb_xpak_vendor_infineon
-} ixgb_xpak_vendor;
-
-/* Media Types */
-typedef enum {
- ixgb_media_type_unknown = 0,
- ixgb_media_type_fiber = 1,
- ixgb_media_type_copper = 2,
- ixgb_num_media_types
-} ixgb_media_type;
-
-/* Flow Control Settings */
-typedef enum {
- ixgb_fc_none = 0,
- ixgb_fc_rx_pause = 1,
- ixgb_fc_tx_pause = 2,
- ixgb_fc_full = 3,
- ixgb_fc_default = 0xFF
-} ixgb_fc_type;
-
-/* PCI bus types */
-typedef enum {
- ixgb_bus_type_unknown = 0,
- ixgb_bus_type_pci,
- ixgb_bus_type_pcix
-} ixgb_bus_type;
-
-/* PCI bus speeds */
-typedef enum {
- ixgb_bus_speed_unknown = 0,
- ixgb_bus_speed_33,
- ixgb_bus_speed_66,
- ixgb_bus_speed_100,
- ixgb_bus_speed_133,
- ixgb_bus_speed_reserved
-} ixgb_bus_speed;
-
-/* PCI bus widths */
-typedef enum {
- ixgb_bus_width_unknown = 0,
- ixgb_bus_width_32,
- ixgb_bus_width_64
-} ixgb_bus_width;
-
-#define IXGB_EEPROM_SIZE 64 /* Size in words */
-
-#define SPEED_10000 10000
-#define FULL_DUPLEX 2
-
-#define MIN_NUMBER_OF_DESCRIPTORS 8
-#define MAX_NUMBER_OF_DESCRIPTORS 0xFFF8 /* 13 bits in RDLEN/TDLEN, 128B aligned */
-
-#define IXGB_DELAY_BEFORE_RESET 10 /* allow 10ms after idling rx/tx units */
-#define IXGB_DELAY_AFTER_RESET 1 /* allow 1ms after the reset */
-#define IXGB_DELAY_AFTER_EE_RESET 10 /* allow 10ms after the EEPROM reset */
-
-#define IXGB_DELAY_USECS_AFTER_LINK_RESET 13 /* allow 13 microseconds after the reset */
- /* NOTE: this is MICROSECONDS */
-#define MAX_RESET_ITERATIONS 8 /* number of iterations to get things right */
-
-/* General Registers */
-#define IXGB_CTRL0 0x00000 /* Device Control Register 0 - RW */
-#define IXGB_CTRL1 0x00008 /* Device Control Register 1 - RW */
-#define IXGB_STATUS 0x00010 /* Device Status Register - RO */
-#define IXGB_EECD 0x00018 /* EEPROM/Flash Control/Data Register - RW */
-#define IXGB_MFS 0x00020 /* Maximum Frame Size - RW */
-
-/* Interrupt */
-#define IXGB_ICR 0x00080 /* Interrupt Cause Read - R/clr */
-#define IXGB_ICS 0x00088 /* Interrupt Cause Set - RW */
-#define IXGB_IMS 0x00090 /* Interrupt Mask Set/Read - RW */
-#define IXGB_IMC 0x00098 /* Interrupt Mask Clear - WO */
-
-/* Receive */
-#define IXGB_RCTL 0x00100 /* RX Control - RW */
-#define IXGB_FCRTL 0x00108 /* Flow Control Receive Threshold Low - RW */
-#define IXGB_FCRTH 0x00110 /* Flow Control Receive Threshold High - RW */
-#define IXGB_RDBAL 0x00118 /* RX Descriptor Base Low - RW */
-#define IXGB_RDBAH 0x0011C /* RX Descriptor Base High - RW */
-#define IXGB_RDLEN 0x00120 /* RX Descriptor Length - RW */
-#define IXGB_RDH 0x00128 /* RX Descriptor Head - RW */
-#define IXGB_RDT 0x00130 /* RX Descriptor Tail - RW */
-#define IXGB_RDTR 0x00138 /* RX Delay Timer Ring - RW */
-#define IXGB_RXDCTL 0x00140 /* Receive Descriptor Control - RW */
-#define IXGB_RAIDC 0x00148 /* Receive Adaptive Interrupt Delay Control - RW */
-#define IXGB_RXCSUM 0x00158 /* Receive Checksum Control - RW */
-#define IXGB_RA 0x00180 /* Receive Address Array Base - RW */
-#define IXGB_RAL 0x00180 /* Receive Address Low [0:15] - RW */
-#define IXGB_RAH 0x00184 /* Receive Address High [0:15] - RW */
-#define IXGB_MTA 0x00200 /* Multicast Table Array [0:127] - RW */
-#define IXGB_VFTA 0x00400 /* VLAN Filter Table Array [0:127] - RW */
-#define IXGB_REQ_RX_DESCRIPTOR_MULTIPLE 8
-
-/* Transmit */
-#define IXGB_TCTL 0x00600 /* TX Control - RW */
-#define IXGB_TDBAL 0x00608 /* TX Descriptor Base Low - RW */
-#define IXGB_TDBAH 0x0060C /* TX Descriptor Base High - RW */
-#define IXGB_TDLEN 0x00610 /* TX Descriptor Length - RW */
-#define IXGB_TDH 0x00618 /* TX Descriptor Head - RW */
-#define IXGB_TDT 0x00620 /* TX Descriptor Tail - RW */
-#define IXGB_TIDV 0x00628 /* TX Interrupt Delay Value - RW */
-#define IXGB_TXDCTL 0x00630 /* Transmit Descriptor Control - RW */
-#define IXGB_TSPMT 0x00638 /* TCP Segmentation PAD & Min Threshold - RW */
-#define IXGB_PAP 0x00640 /* Pause and Pace - RW */
-#define IXGB_REQ_TX_DESCRIPTOR_MULTIPLE 8
-
-/* Physical */
-#define IXGB_PCSC1 0x00700 /* PCS Control 1 - RW */
-#define IXGB_PCSC2 0x00708 /* PCS Control 2 - RW */
-#define IXGB_PCSS1 0x00710 /* PCS Status 1 - RO */
-#define IXGB_PCSS2 0x00718 /* PCS Status 2 - RO */
-#define IXGB_XPCSS 0x00720 /* 10GBASE-X PCS Status (or XGXS Lane Status) - RO */
-#define IXGB_UCCR 0x00728 /* Unilink Circuit Control Register */
-#define IXGB_XPCSTC 0x00730 /* 10GBASE-X PCS Test Control */
-#define IXGB_MACA 0x00738 /* MDI Autoscan Command and Address - RW */
-#define IXGB_APAE 0x00740 /* Autoscan PHY Address Enable - RW */
-#define IXGB_ARD 0x00748 /* Autoscan Read Data - RO */
-#define IXGB_AIS 0x00750 /* Autoscan Interrupt Status - RO */
-#define IXGB_MSCA 0x00758 /* MDI Single Command and Address - RW */
-#define IXGB_MSRWD 0x00760 /* MDI Single Read and Write Data - RW, RO */
-
-/* Wake-up */
-#define IXGB_WUFC 0x00808 /* Wake Up Filter Control - RW */
-#define IXGB_WUS 0x00810 /* Wake Up Status - RO */
-#define IXGB_FFLT 0x01000 /* Flexible Filter Length Table - RW */
-#define IXGB_FFMT 0x01020 /* Flexible Filter Mask Table - RW */
-#define IXGB_FTVT 0x01420 /* Flexible Filter Value Table - RW */
-
-/* Statistics */
-#define IXGB_TPRL 0x02000 /* Total Packets Received (Low) */
-#define IXGB_TPRH 0x02004 /* Total Packets Received (High) */
-#define IXGB_GPRCL 0x02008 /* Good Packets Received Count (Low) */
-#define IXGB_GPRCH 0x0200C /* Good Packets Received Count (High) */
-#define IXGB_BPRCL 0x02010 /* Broadcast Packets Received Count (Low) */
-#define IXGB_BPRCH 0x02014 /* Broadcast Packets Received Count (High) */
-#define IXGB_MPRCL 0x02018 /* Multicast Packets Received Count (Low) */
-#define IXGB_MPRCH 0x0201C /* Multicast Packets Received Count (High) */
-#define IXGB_UPRCL 0x02020 /* Unicast Packets Received Count (Low) */
-#define IXGB_UPRCH 0x02024 /* Unicast Packets Received Count (High) */
-#define IXGB_VPRCL 0x02028 /* VLAN Packets Received Count (Low) */
-#define IXGB_VPRCH 0x0202C /* VLAN Packets Received Count (High) */
-#define IXGB_JPRCL 0x02030 /* Jumbo Packets Received Count (Low) */
-#define IXGB_JPRCH 0x02034 /* Jumbo Packets Received Count (High) */
-#define IXGB_GORCL 0x02038 /* Good Octets Received Count (Low) */
-#define IXGB_GORCH 0x0203C /* Good Octets Received Count (High) */
-#define IXGB_TORL 0x02040 /* Total Octets Received (Low) */
-#define IXGB_TORH 0x02044 /* Total Octets Received (High) */
-#define IXGB_RNBC 0x02048 /* Receive No Buffers Count */
-#define IXGB_RUC 0x02050 /* Receive Undersize Count */
-#define IXGB_ROC 0x02058 /* Receive Oversize Count */
-#define IXGB_RLEC 0x02060 /* Receive Length Error Count */
-#define IXGB_CRCERRS 0x02068 /* CRC Error Count */
-#define IXGB_ICBC 0x02070 /* Illegal control byte in mid-packet Count */
-#define IXGB_ECBC 0x02078 /* Error Control byte in mid-packet Count */
-#define IXGB_MPC 0x02080 /* Missed Packets Count */
-#define IXGB_TPTL 0x02100 /* Total Packets Transmitted (Low) */
-#define IXGB_TPTH 0x02104 /* Total Packets Transmitted (High) */
-#define IXGB_GPTCL 0x02108 /* Good Packets Transmitted Count (Low) */
-#define IXGB_GPTCH 0x0210C /* Good Packets Transmitted Count (High) */
-#define IXGB_BPTCL 0x02110 /* Broadcast Packets Transmitted Count (Low) */
-#define IXGB_BPTCH 0x02114 /* Broadcast Packets Transmitted Count (High) */
-#define IXGB_MPTCL 0x02118 /* Multicast Packets Transmitted Count (Low) */
-#define IXGB_MPTCH 0x0211C /* Multicast Packets Transmitted Count (High) */
-#define IXGB_UPTCL 0x02120 /* Unicast Packets Transmitted Count (Low) */
-#define IXGB_UPTCH 0x02124 /* Unicast Packets Transmitted Count (High) */
-#define IXGB_VPTCL 0x02128 /* VLAN Packets Transmitted Count (Low) */
-#define IXGB_VPTCH 0x0212C /* VLAN Packets Transmitted Count (High) */
-#define IXGB_JPTCL 0x02130 /* Jumbo Packets Transmitted Count (Low) */
-#define IXGB_JPTCH 0x02134 /* Jumbo Packets Transmitted Count (High) */
-#define IXGB_GOTCL 0x02138 /* Good Octets Transmitted Count (Low) */
-#define IXGB_GOTCH 0x0213C /* Good Octets Transmitted Count (High) */
-#define IXGB_TOTL 0x02140 /* Total Octets Transmitted Count (Low) */
-#define IXGB_TOTH 0x02144 /* Total Octets Transmitted Count (High) */
-#define IXGB_DC 0x02148 /* Defer Count */
-#define IXGB_PLT64C 0x02150 /* Packet Transmitted was less than 64 bytes Count */
-#define IXGB_TSCTC 0x02170 /* TCP Segmentation Context Transmitted Count */
-#define IXGB_TSCTFC 0x02178 /* TCP Segmentation Context Tx Fail Count */
-#define IXGB_IBIC 0x02180 /* Illegal byte during Idle stream count */
-#define IXGB_RFC 0x02188 /* Remote Fault Count */
-#define IXGB_LFC 0x02190 /* Local Fault Count */
-#define IXGB_PFRC 0x02198 /* Pause Frame Receive Count */
-#define IXGB_PFTC 0x021A0 /* Pause Frame Transmit Count */
-#define IXGB_MCFRC 0x021A8 /* MAC Control Frames (non-Pause) Received Count */
-#define IXGB_MCFTC 0x021B0 /* MAC Control Frames (non-Pause) Transmitted Count */
-#define IXGB_XONRXC 0x021B8 /* XON Received Count */
-#define IXGB_XONTXC 0x021C0 /* XON Transmitted Count */
-#define IXGB_XOFFRXC 0x021C8 /* XOFF Received Count */
-#define IXGB_XOFFTXC 0x021D0 /* XOFF Transmitted Count */
-#define IXGB_RJC 0x021D8 /* Receive Jabber Count */
-
-/* CTRL0 Bit Masks */
-#define IXGB_CTRL0_LRST 0x00000008
-#define IXGB_CTRL0_JFE 0x00000010
-#define IXGB_CTRL0_XLE 0x00000020
-#define IXGB_CTRL0_MDCS 0x00000040
-#define IXGB_CTRL0_CMDC 0x00000080
-#define IXGB_CTRL0_SDP0 0x00040000
-#define IXGB_CTRL0_SDP1 0x00080000
-#define IXGB_CTRL0_SDP2 0x00100000
-#define IXGB_CTRL0_SDP3 0x00200000
-#define IXGB_CTRL0_SDP0_DIR 0x00400000
-#define IXGB_CTRL0_SDP1_DIR 0x00800000
-#define IXGB_CTRL0_SDP2_DIR 0x01000000
-#define IXGB_CTRL0_SDP3_DIR 0x02000000
-#define IXGB_CTRL0_RST 0x04000000
-#define IXGB_CTRL0_RPE 0x08000000
-#define IXGB_CTRL0_TPE 0x10000000
-#define IXGB_CTRL0_VME 0x40000000
-
-/* CTRL1 Bit Masks */
-#define IXGB_CTRL1_GPI0_EN 0x00000001
-#define IXGB_CTRL1_GPI1_EN 0x00000002
-#define IXGB_CTRL1_GPI2_EN 0x00000004
-#define IXGB_CTRL1_GPI3_EN 0x00000008
-#define IXGB_CTRL1_SDP4 0x00000010
-#define IXGB_CTRL1_SDP5 0x00000020
-#define IXGB_CTRL1_SDP6 0x00000040
-#define IXGB_CTRL1_SDP7 0x00000080
-#define IXGB_CTRL1_SDP4_DIR 0x00000100
-#define IXGB_CTRL1_SDP5_DIR 0x00000200
-#define IXGB_CTRL1_SDP6_DIR 0x00000400
-#define IXGB_CTRL1_SDP7_DIR 0x00000800
-#define IXGB_CTRL1_EE_RST 0x00002000
-#define IXGB_CTRL1_RO_DIS 0x00020000
-#define IXGB_CTRL1_PCIXHM_MASK 0x00C00000
-#define IXGB_CTRL1_PCIXHM_1_2 0x00000000
-#define IXGB_CTRL1_PCIXHM_5_8 0x00400000
-#define IXGB_CTRL1_PCIXHM_3_4 0x00800000
-#define IXGB_CTRL1_PCIXHM_7_8 0x00C00000
-
-/* STATUS Bit Masks */
-#define IXGB_STATUS_LU 0x00000002
-#define IXGB_STATUS_AIP 0x00000004
-#define IXGB_STATUS_TXOFF 0x00000010
-#define IXGB_STATUS_XAUIME 0x00000020
-#define IXGB_STATUS_RES 0x00000040
-#define IXGB_STATUS_RIS 0x00000080
-#define IXGB_STATUS_RIE 0x00000100
-#define IXGB_STATUS_RLF 0x00000200
-#define IXGB_STATUS_RRF 0x00000400
-#define IXGB_STATUS_PCI_SPD 0x00000800
-#define IXGB_STATUS_BUS64 0x00001000
-#define IXGB_STATUS_PCIX_MODE 0x00002000
-#define IXGB_STATUS_PCIX_SPD_MASK 0x0000C000
-#define IXGB_STATUS_PCIX_SPD_66 0x00000000
-#define IXGB_STATUS_PCIX_SPD_100 0x00004000
-#define IXGB_STATUS_PCIX_SPD_133 0x00008000
-#define IXGB_STATUS_REV_ID_MASK 0x000F0000
-#define IXGB_STATUS_REV_ID_SHIFT 16
-
-/* EECD Bit Masks */
-#define IXGB_EECD_SK 0x00000001
-#define IXGB_EECD_CS 0x00000002
-#define IXGB_EECD_DI 0x00000004
-#define IXGB_EECD_DO 0x00000008
-#define IXGB_EECD_FWE_MASK 0x00000030
-#define IXGB_EECD_FWE_DIS 0x00000010
-#define IXGB_EECD_FWE_EN 0x00000020
-
-/* MFS */
-#define IXGB_MFS_SHIFT 16
-
-/* Interrupt Register Bit Masks (used for ICR, ICS, IMS, and IMC) */
-#define IXGB_INT_TXDW 0x00000001
-#define IXGB_INT_TXQE 0x00000002
-#define IXGB_INT_LSC 0x00000004
-#define IXGB_INT_RXSEQ 0x00000008
-#define IXGB_INT_RXDMT0 0x00000010
-#define IXGB_INT_RXO 0x00000040
-#define IXGB_INT_RXT0 0x00000080
-#define IXGB_INT_AUTOSCAN 0x00000200
-#define IXGB_INT_GPI0 0x00000800
-#define IXGB_INT_GPI1 0x00001000
-#define IXGB_INT_GPI2 0x00002000
-#define IXGB_INT_GPI3 0x00004000
-
-/* RCTL Bit Masks */
-#define IXGB_RCTL_RXEN 0x00000002
-#define IXGB_RCTL_SBP 0x00000004
-#define IXGB_RCTL_UPE 0x00000008
-#define IXGB_RCTL_MPE 0x00000010
-#define IXGB_RCTL_RDMTS_MASK 0x00000300
-#define IXGB_RCTL_RDMTS_1_2 0x00000000
-#define IXGB_RCTL_RDMTS_1_4 0x00000100
-#define IXGB_RCTL_RDMTS_1_8 0x00000200
-#define IXGB_RCTL_MO_MASK 0x00003000
-#define IXGB_RCTL_MO_47_36 0x00000000
-#define IXGB_RCTL_MO_46_35 0x00001000
-#define IXGB_RCTL_MO_45_34 0x00002000
-#define IXGB_RCTL_MO_43_32 0x00003000
-#define IXGB_RCTL_MO_SHIFT 12
-#define IXGB_RCTL_BAM 0x00008000
-#define IXGB_RCTL_BSIZE_MASK 0x00030000
-#define IXGB_RCTL_BSIZE_2048 0x00000000
-#define IXGB_RCTL_BSIZE_4096 0x00010000
-#define IXGB_RCTL_BSIZE_8192 0x00020000
-#define IXGB_RCTL_BSIZE_16384 0x00030000
-#define IXGB_RCTL_VFE 0x00040000
-#define IXGB_RCTL_CFIEN 0x00080000
-#define IXGB_RCTL_CFI 0x00100000
-#define IXGB_RCTL_RPDA_MASK 0x00600000
-#define IXGB_RCTL_RPDA_MC_MAC 0x00000000
-#define IXGB_RCTL_MC_ONLY 0x00400000
-#define IXGB_RCTL_CFF 0x00800000
-#define IXGB_RCTL_SECRC 0x04000000
-#define IXGB_RDT_FPDB 0x80000000
-
-#define IXGB_RCTL_IDLE_RX_UNIT 0
-
-/* FCRTL Bit Masks */
-#define IXGB_FCRTL_XONE 0x80000000
-
-/* RXDCTL Bit Masks */
-#define IXGB_RXDCTL_PTHRESH_MASK 0x000001FF
-#define IXGB_RXDCTL_PTHRESH_SHIFT 0
-#define IXGB_RXDCTL_HTHRESH_MASK 0x0003FE00
-#define IXGB_RXDCTL_HTHRESH_SHIFT 9
-#define IXGB_RXDCTL_WTHRESH_MASK 0x07FC0000
-#define IXGB_RXDCTL_WTHRESH_SHIFT 18
-
-/* RAIDC Bit Masks */
-#define IXGB_RAIDC_HIGHTHRS_MASK 0x0000003F
-#define IXGB_RAIDC_DELAY_MASK 0x000FF800
-#define IXGB_RAIDC_DELAY_SHIFT 11
-#define IXGB_RAIDC_POLL_MASK 0x1FF00000
-#define IXGB_RAIDC_POLL_SHIFT 20
-#define IXGB_RAIDC_RXT_GATE 0x40000000
-#define IXGB_RAIDC_EN 0x80000000
-
-#define IXGB_RAIDC_POLL_1000_INTERRUPTS_PER_SECOND 1220
-#define IXGB_RAIDC_POLL_5000_INTERRUPTS_PER_SECOND 244
-#define IXGB_RAIDC_POLL_10000_INTERRUPTS_PER_SECOND 122
-#define IXGB_RAIDC_POLL_20000_INTERRUPTS_PER_SECOND 61
-
-/* RXCSUM Bit Masks */
-#define IXGB_RXCSUM_IPOFL 0x00000100
-#define IXGB_RXCSUM_TUOFL 0x00000200
-
-/* RAH Bit Masks */
-#define IXGB_RAH_ASEL_MASK 0x00030000
-#define IXGB_RAH_ASEL_DEST 0x00000000
-#define IXGB_RAH_ASEL_SRC 0x00010000
-#define IXGB_RAH_AV 0x80000000
-
-/* TCTL Bit Masks */
-#define IXGB_TCTL_TCE 0x00000001
-#define IXGB_TCTL_TXEN 0x00000002
-#define IXGB_TCTL_TPDE 0x00000004
-
-#define IXGB_TCTL_IDLE_TX_UNIT 0
-
-/* TXDCTL Bit Masks */
-#define IXGB_TXDCTL_PTHRESH_MASK 0x0000007F
-#define IXGB_TXDCTL_HTHRESH_MASK 0x00007F00
-#define IXGB_TXDCTL_HTHRESH_SHIFT 8
-#define IXGB_TXDCTL_WTHRESH_MASK 0x007F0000
-#define IXGB_TXDCTL_WTHRESH_SHIFT 16
-
-/* TSPMT Bit Masks */
-#define IXGB_TSPMT_TSMT_MASK 0x0000FFFF
-#define IXGB_TSPMT_TSPBP_MASK 0xFFFF0000
-#define IXGB_TSPMT_TSPBP_SHIFT 16
-
-/* PAP Bit Masks */
-#define IXGB_PAP_TXPC_MASK 0x0000FFFF
-#define IXGB_PAP_TXPV_MASK 0x000F0000
-#define IXGB_PAP_TXPV_10G 0x00000000
-#define IXGB_PAP_TXPV_1G 0x00010000
-#define IXGB_PAP_TXPV_2G 0x00020000
-#define IXGB_PAP_TXPV_3G 0x00030000
-#define IXGB_PAP_TXPV_4G 0x00040000
-#define IXGB_PAP_TXPV_5G 0x00050000
-#define IXGB_PAP_TXPV_6G 0x00060000
-#define IXGB_PAP_TXPV_7G 0x00070000
-#define IXGB_PAP_TXPV_8G 0x00080000
-#define IXGB_PAP_TXPV_9G 0x00090000
-#define IXGB_PAP_TXPV_WAN 0x000F0000
-
-/* PCSC1 Bit Masks */
-#define IXGB_PCSC1_LOOPBACK 0x00004000
-
-/* PCSC2 Bit Masks */
-#define IXGB_PCSC2_PCS_TYPE_MASK 0x00000003
-#define IXGB_PCSC2_PCS_TYPE_10GBX 0x00000001
-
-/* PCSS1 Bit Masks */
-#define IXGB_PCSS1_LOCAL_FAULT 0x00000080
-#define IXGB_PCSS1_RX_LINK_STATUS 0x00000004
-
-/* PCSS2 Bit Masks */
-#define IXGB_PCSS2_DEV_PRES_MASK 0x0000C000
-#define IXGB_PCSS2_DEV_PRES 0x00004000
-#define IXGB_PCSS2_TX_LF 0x00000800
-#define IXGB_PCSS2_RX_LF 0x00000400
-#define IXGB_PCSS2_10GBW 0x00000004
-#define IXGB_PCSS2_10GBX 0x00000002
-#define IXGB_PCSS2_10GBR 0x00000001
-
-/* XPCSS Bit Masks */
-#define IXGB_XPCSS_ALIGN_STATUS 0x00001000
-#define IXGB_XPCSS_PATTERN_TEST 0x00000800
-#define IXGB_XPCSS_LANE_3_SYNC 0x00000008
-#define IXGB_XPCSS_LANE_2_SYNC 0x00000004
-#define IXGB_XPCSS_LANE_1_SYNC 0x00000002
-#define IXGB_XPCSS_LANE_0_SYNC 0x00000001
-
-/* XPCSTC Bit Masks */
-#define IXGB_XPCSTC_BERT_TRIG 0x00200000
-#define IXGB_XPCSTC_BERT_SST 0x00100000
-#define IXGB_XPCSTC_BERT_PSZ_MASK 0x000C0000
-#define IXGB_XPCSTC_BERT_PSZ_SHIFT 17
-#define IXGB_XPCSTC_BERT_PSZ_INF 0x00000003
-#define IXGB_XPCSTC_BERT_PSZ_68 0x00000001
-#define IXGB_XPCSTC_BERT_PSZ_1028 0x00000000
-
-/* MSCA bit Masks */
-/* New Protocol Address */
-#define IXGB_MSCA_NP_ADDR_MASK 0x0000FFFF
-#define IXGB_MSCA_NP_ADDR_SHIFT 0
-/* Either Device Type or Register Address, depending on ST_CODE */
-#define IXGB_MSCA_DEV_TYPE_MASK 0x001F0000
-#define IXGB_MSCA_DEV_TYPE_SHIFT 16
-#define IXGB_MSCA_PHY_ADDR_MASK 0x03E00000
-#define IXGB_MSCA_PHY_ADDR_SHIFT 21
-#define IXGB_MSCA_OP_CODE_MASK 0x0C000000
-/* OP_CODE == 00, Address cycle, New Protocol */
-/* OP_CODE == 01, Write operation */
-/* OP_CODE == 10, Read operation */
-/* OP_CODE == 11, Read, auto increment, New Protocol */
-#define IXGB_MSCA_ADDR_CYCLE 0x00000000
-#define IXGB_MSCA_WRITE 0x04000000
-#define IXGB_MSCA_READ 0x08000000
-#define IXGB_MSCA_READ_AUTOINC 0x0C000000
-#define IXGB_MSCA_OP_CODE_SHIFT 26
-#define IXGB_MSCA_ST_CODE_MASK 0x30000000
-/* ST_CODE == 00, New Protocol */
-/* ST_CODE == 01, Old Protocol */
-#define IXGB_MSCA_NEW_PROTOCOL 0x00000000
-#define IXGB_MSCA_OLD_PROTOCOL 0x10000000
-#define IXGB_MSCA_ST_CODE_SHIFT 28
-/* Initiate command, self-clearing when command completes */
-#define IXGB_MSCA_MDI_COMMAND 0x40000000
-/* MDI In Progress Enable. */
-#define IXGB_MSCA_MDI_IN_PROG_EN 0x80000000
-
-/* MSRWD bit masks */
-#define IXGB_MSRWD_WRITE_DATA_MASK 0x0000FFFF
-#define IXGB_MSRWD_WRITE_DATA_SHIFT 0
-#define IXGB_MSRWD_READ_DATA_MASK 0xFFFF0000
-#define IXGB_MSRWD_READ_DATA_SHIFT 16
-
-/* Definitions for the optics devices on the MDIO bus. */
-#define IXGB_PHY_ADDRESS 0x0 /* Single PHY, multiple "Devices" */
-
-#define MDIO_PMA_PMD_XPAK_VENDOR_NAME 0x803A /* XPAK/XENPAK devices only */
-
-/* Vendor-specific MDIO registers */
-#define G6XXX_PMA_PMD_VS1 0xC001 /* Vendor-specific register */
-#define G6XXX_XGXS_XAUI_VS2 0x18 /* Vendor-specific register */
-
-#define G6XXX_PMA_PMD_VS1_PLL_RESET 0x80
-#define G6XXX_PMA_PMD_VS1_REMOVE_PLL_RESET 0x00
-#define G6XXX_XGXS_XAUI_VS2_INPUT_MASK 0x0F /* XAUI lanes synchronized */
-
-/* Layout of a single receive descriptor. The controller assumes that this
- * structure is packed into 16 bytes, which is a safe assumption with most
- * compilers. However, some compilers may insert padding between the fields,
- * in which case the structure must be packed in some compiler-specific
- * manner. */
-struct ixgb_rx_desc {
- __le64 buff_addr;
- __le16 length;
- __le16 reserved;
- u8 status;
- u8 errors;
- __le16 special;
-};
-
-#define IXGB_RX_DESC_STATUS_DD 0x01
-#define IXGB_RX_DESC_STATUS_EOP 0x02
-#define IXGB_RX_DESC_STATUS_IXSM 0x04
-#define IXGB_RX_DESC_STATUS_VP 0x08
-#define IXGB_RX_DESC_STATUS_TCPCS 0x20
-#define IXGB_RX_DESC_STATUS_IPCS 0x40
-#define IXGB_RX_DESC_STATUS_PIF 0x80
-
-#define IXGB_RX_DESC_ERRORS_CE 0x01
-#define IXGB_RX_DESC_ERRORS_SE 0x02
-#define IXGB_RX_DESC_ERRORS_P 0x08
-#define IXGB_RX_DESC_ERRORS_TCPE 0x20
-#define IXGB_RX_DESC_ERRORS_IPE 0x40
-#define IXGB_RX_DESC_ERRORS_RXE 0x80
-
-#define IXGB_RX_DESC_SPECIAL_VLAN_MASK 0x0FFF /* VLAN ID is in lower 12 bits */
-#define IXGB_RX_DESC_SPECIAL_PRI_MASK 0xE000 /* Priority is in upper 3 bits */
-#define IXGB_RX_DESC_SPECIAL_PRI_SHIFT 0x000D /* Priority is in upper 3 of 16 */
-
-/* Layout of a single transmit descriptor. The controller assumes that this
- * structure is packed into 16 bytes, which is a safe assumption with most
- * compilers. However, some compilers may insert padding between the fields,
- * in which case the structure must be packed in some compiler-specific
- * manner. */
-struct ixgb_tx_desc {
- __le64 buff_addr;
- __le32 cmd_type_len;
- u8 status;
- u8 popts;
- __le16 vlan;
-};
-
-#define IXGB_TX_DESC_LENGTH_MASK 0x000FFFFF
-#define IXGB_TX_DESC_TYPE_MASK 0x00F00000
-#define IXGB_TX_DESC_TYPE_SHIFT 20
-#define IXGB_TX_DESC_CMD_MASK 0xFF000000
-#define IXGB_TX_DESC_CMD_SHIFT 24
-#define IXGB_TX_DESC_CMD_EOP 0x01000000
-#define IXGB_TX_DESC_CMD_TSE 0x04000000
-#define IXGB_TX_DESC_CMD_RS 0x08000000
-#define IXGB_TX_DESC_CMD_VLE 0x40000000
-#define IXGB_TX_DESC_CMD_IDE 0x80000000
-
-#define IXGB_TX_DESC_TYPE 0x00100000
-
-#define IXGB_TX_DESC_STATUS_DD 0x01
-
-#define IXGB_TX_DESC_POPTS_IXSM 0x01
-#define IXGB_TX_DESC_POPTS_TXSM 0x02
-#define IXGB_TX_DESC_SPECIAL_PRI_SHIFT IXGB_RX_DESC_SPECIAL_PRI_SHIFT /* Priority is in upper 3 of 16 */
-
-struct ixgb_context_desc {
- u8 ipcss;
- u8 ipcso;
- __le16 ipcse;
- u8 tucss;
- u8 tucso;
- __le16 tucse;
- __le32 cmd_type_len;
- u8 status;
- u8 hdr_len;
- __le16 mss;
-};
-
-#define IXGB_CONTEXT_DESC_CMD_TCP 0x01000000
-#define IXGB_CONTEXT_DESC_CMD_IP 0x02000000
-#define IXGB_CONTEXT_DESC_CMD_TSE 0x04000000
-#define IXGB_CONTEXT_DESC_CMD_RS 0x08000000
-#define IXGB_CONTEXT_DESC_CMD_IDE 0x80000000
-
-#define IXGB_CONTEXT_DESC_TYPE 0x00000000
-
-#define IXGB_CONTEXT_DESC_STATUS_DD 0x01
-
-/* Filters */
-#define IXGB_MC_TBL_SIZE 128 /* Multicast Filter Table (4096 bits) */
-#define IXGB_VLAN_FILTER_TBL_SIZE 128 /* VLAN Filter Table (4096 bits) */
-#define IXGB_RAR_ENTRIES 3 /* Number of entries in Rx Address array */
-
-#define IXGB_MEMORY_REGISTER_BASE_ADDRESS 0
-#define ENET_HEADER_SIZE 14
-#define ENET_FCS_LENGTH 4
-#define IXGB_MAX_NUM_MULTICAST_ADDRESSES 128
-#define IXGB_MIN_ENET_FRAME_SIZE_WITHOUT_FCS 60
-#define IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS 1514
-#define IXGB_MAX_JUMBO_FRAME_SIZE 0x3F00
-
-/* Phy Addresses */
-#define IXGB_OPTICAL_PHY_ADDR 0x0 /* Optical Module phy address */
-#define IXGB_XAUII_PHY_ADDR 0x1 /* Xauii transceiver phy address */
-#define IXGB_DIAG_PHY_ADDR 0x1F /* Diagnostic Device phy address */
-
-/* This structure takes a 64k flash and maps it for identification commands */
-struct ixgb_flash_buffer {
- u8 manufacturer_id;
- u8 device_id;
- u8 filler1[0x2AA8];
- u8 cmd2;
- u8 filler2[0x2AAA];
- u8 cmd1;
- u8 filler3[0xAAAA];
-};
-
-/* Flow control parameters */
-struct ixgb_fc {
- u32 high_water; /* Flow Control High-water */
- u32 low_water; /* Flow Control Low-water */
- u16 pause_time; /* Flow Control Pause timer */
- bool send_xon; /* Flow control send XON */
- ixgb_fc_type type; /* Type of flow control */
-};
-
-/* The historical defaults for the flow control values are given below. */
-#define FC_DEFAULT_HI_THRESH (0x8000) /* 32KB */
-#define FC_DEFAULT_LO_THRESH (0x4000) /* 16KB */
-#define FC_DEFAULT_TX_TIMER (0x100) /* ~130 us */
-
-/* Phy definitions */
-#define IXGB_MAX_PHY_REG_ADDRESS 0xFFFF
-#define IXGB_MAX_PHY_ADDRESS 31
-#define IXGB_MAX_PHY_DEV_TYPE 31
-
-/* Bus parameters */
-struct ixgb_bus {
- ixgb_bus_speed speed;
- ixgb_bus_width width;
- ixgb_bus_type type;
-};
-
-struct ixgb_hw {
- u8 __iomem *hw_addr;/* Base Address of the hardware */
- void *back; /* Pointer to OS-dependent struct */
- struct ixgb_fc fc; /* Flow control parameters */
- struct ixgb_bus bus; /* Bus parameters */
- u32 phy_id; /* Phy Identifier */
- u32 phy_addr; /* XGMII address of Phy */
- ixgb_mac_type mac_type; /* Identifier for MAC controller */
- ixgb_phy_type phy_type; /* Transceiver/phy identifier */
- u32 max_frame_size; /* Maximum frame size supported */
- u32 mc_filter_type; /* Multicast filter hash type */
- u32 num_mc_addrs; /* Number of current Multicast addrs */
- u8 curr_mac_addr[ETH_ALEN]; /* Individual address currently programmed in MAC */
- u32 num_tx_desc; /* Number of Transmit descriptors */
- u32 num_rx_desc; /* Number of Receive descriptors */
- u32 rx_buffer_size; /* Size of Receive buffer */
- bool link_up; /* true if link is valid */
- bool adapter_stopped; /* State of adapter */
- u16 device_id; /* device id from PCI configuration space */
- u16 vendor_id; /* vendor id from PCI configuration space */
- u8 revision_id; /* revision id from PCI configuration space */
- u16 subsystem_vendor_id; /* subsystem vendor id from PCI configuration space */
- u16 subsystem_id; /* subsystem id from PCI configuration space */
- u32 bar0; /* Base Address registers */
- u32 bar1;
- u32 bar2;
- u32 bar3;
- u16 pci_cmd_word; /* PCI command register id from PCI configuration space */
- __le16 eeprom[IXGB_EEPROM_SIZE]; /* EEPROM contents read at init time */
- unsigned long io_base; /* Our I/O mapped location */
- u32 lastLFC;
- u32 lastRFC;
-};
-
-/* Statistics reported by the hardware */
-struct ixgb_hw_stats {
- u64 tprl;
- u64 tprh;
- u64 gprcl;
- u64 gprch;
- u64 bprcl;
- u64 bprch;
- u64 mprcl;
- u64 mprch;
- u64 uprcl;
- u64 uprch;
- u64 vprcl;
- u64 vprch;
- u64 jprcl;
- u64 jprch;
- u64 gorcl;
- u64 gorch;
- u64 torl;
- u64 torh;
- u64 rnbc;
- u64 ruc;
- u64 roc;
- u64 rlec;
- u64 crcerrs;
- u64 icbc;
- u64 ecbc;
- u64 mpc;
- u64 tptl;
- u64 tpth;
- u64 gptcl;
- u64 gptch;
- u64 bptcl;
- u64 bptch;
- u64 mptcl;
- u64 mptch;
- u64 uptcl;
- u64 uptch;
- u64 vptcl;
- u64 vptch;
- u64 jptcl;
- u64 jptch;
- u64 gotcl;
- u64 gotch;
- u64 totl;
- u64 toth;
- u64 dc;
- u64 plt64c;
- u64 tsctc;
- u64 tsctfc;
- u64 ibic;
- u64 rfc;
- u64 lfc;
- u64 pfrc;
- u64 pftc;
- u64 mcfrc;
- u64 mcftc;
- u64 xonrxc;
- u64 xontxc;
- u64 xoffrxc;
- u64 xofftxc;
- u64 rjc;
-};
-
-/* Function Prototypes */
-bool ixgb_adapter_stop(struct ixgb_hw *hw);
-bool ixgb_init_hw(struct ixgb_hw *hw);
-bool ixgb_adapter_start(struct ixgb_hw *hw);
-void ixgb_check_for_link(struct ixgb_hw *hw);
-bool ixgb_check_for_bad_link(struct ixgb_hw *hw);
-
-void ixgb_rar_set(struct ixgb_hw *hw, const u8 *addr, u32 index);
-
-/* Filters (multicast, vlan, receive) */
-void ixgb_mc_addr_list_update(struct ixgb_hw *hw, u8 *mc_addr_list,
- u32 mc_addr_count, u32 pad);
-
-/* Vfta functions */
-void ixgb_write_vfta(struct ixgb_hw *hw, u32 offset, u32 value);
-
-/* Access functions to eeprom data */
-void ixgb_get_ee_mac_addr(struct ixgb_hw *hw, u8 *mac_addr);
-u32 ixgb_get_ee_pba_number(struct ixgb_hw *hw);
-u16 ixgb_get_ee_device_id(struct ixgb_hw *hw);
-bool ixgb_get_eeprom_data(struct ixgb_hw *hw);
-__le16 ixgb_get_eeprom_word(struct ixgb_hw *hw, u16 index);
-
-/* Everything else */
-void ixgb_led_on(struct ixgb_hw *hw);
-void ixgb_led_off(struct ixgb_hw *hw);
-void ixgb_write_pci_cfg(struct ixgb_hw *hw,
- u32 reg,
- u16 * value);
-
-
-#endif /* _IXGB_HW_H_ */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#ifndef _IXGB_IDS_H_
-#define _IXGB_IDS_H_
-
-/**********************************************************************
-** The Device and Vendor IDs for 10 Gigabit MACs
-**********************************************************************/
-
-#define IXGB_DEVICE_ID_82597EX 0x1048
-#define IXGB_DEVICE_ID_82597EX_SR 0x1A48
-#define IXGB_DEVICE_ID_82597EX_LR 0x1B48
-#define IXGB_SUBDEVICE_ID_A11F 0xA11F
-#define IXGB_SUBDEVICE_ID_A01F 0xA01F
-
-#define IXGB_DEVICE_ID_82597EX_CX4 0x109E
-#define IXGB_SUBDEVICE_ID_A00C 0xA00C
-#define IXGB_SUBDEVICE_ID_A01C 0xA01C
-#define IXGB_SUBDEVICE_ID_7036 0x7036
-
-#endif /* #ifndef _IXGB_IDS_H_ */
-/* End of File */
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/prefetch.h>
-#include "ixgb.h"
-
-char ixgb_driver_name[] = "ixgb";
-static char ixgb_driver_string[] = "Intel(R) PRO/10GbE Network Driver";
-
-static const char ixgb_copyright[] = "Copyright (c) 1999-2008 Intel Corporation.";
-
-#define IXGB_CB_LENGTH 256
-static unsigned int copybreak __read_mostly = IXGB_CB_LENGTH;
-module_param(copybreak, uint, 0644);
-MODULE_PARM_DESC(copybreak,
- "Maximum size of packet that is copied to a new buffer on receive");
-
-/* ixgb_pci_tbl - PCI Device ID Table
- *
- * Wildcard entries (PCI_ANY_ID) should come last
- * Last entry must be all 0s
- *
- * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
- * Class, Class Mask, private data (not used) }
- */
-static const struct pci_device_id ixgb_pci_tbl[] = {
- {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
- {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_CX4,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
- {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_SR,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
- {PCI_VENDOR_ID_INTEL, IXGB_DEVICE_ID_82597EX_LR,
- PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
-
- /* required last entry */
- {0,}
-};
-
-MODULE_DEVICE_TABLE(pci, ixgb_pci_tbl);
-
-/* Local Function Prototypes */
-static int ixgb_init_module(void);
-static void ixgb_exit_module(void);
-static int ixgb_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
-static void ixgb_remove(struct pci_dev *pdev);
-static int ixgb_sw_init(struct ixgb_adapter *adapter);
-static int ixgb_open(struct net_device *netdev);
-static int ixgb_close(struct net_device *netdev);
-static void ixgb_configure_tx(struct ixgb_adapter *adapter);
-static void ixgb_configure_rx(struct ixgb_adapter *adapter);
-static void ixgb_setup_rctl(struct ixgb_adapter *adapter);
-static void ixgb_clean_tx_ring(struct ixgb_adapter *adapter);
-static void ixgb_clean_rx_ring(struct ixgb_adapter *adapter);
-static void ixgb_set_multi(struct net_device *netdev);
-static void ixgb_watchdog(struct timer_list *t);
-static netdev_tx_t ixgb_xmit_frame(struct sk_buff *skb,
- struct net_device *netdev);
-static int ixgb_change_mtu(struct net_device *netdev, int new_mtu);
-static int ixgb_set_mac(struct net_device *netdev, void *p);
-static irqreturn_t ixgb_intr(int irq, void *data);
-static bool ixgb_clean_tx_irq(struct ixgb_adapter *adapter);
-
-static int ixgb_clean(struct napi_struct *, int);
-static bool ixgb_clean_rx_irq(struct ixgb_adapter *, int *, int);
-static void ixgb_alloc_rx_buffers(struct ixgb_adapter *, int);
-
-static void ixgb_tx_timeout(struct net_device *dev, unsigned int txqueue);
-static void ixgb_tx_timeout_task(struct work_struct *work);
-
-static void ixgb_vlan_strip_enable(struct ixgb_adapter *adapter);
-static void ixgb_vlan_strip_disable(struct ixgb_adapter *adapter);
-static int ixgb_vlan_rx_add_vid(struct net_device *netdev,
- __be16 proto, u16 vid);
-static int ixgb_vlan_rx_kill_vid(struct net_device *netdev,
- __be16 proto, u16 vid);
-static void ixgb_restore_vlan(struct ixgb_adapter *adapter);
-
-static pci_ers_result_t ixgb_io_error_detected (struct pci_dev *pdev,
- pci_channel_state_t state);
-static pci_ers_result_t ixgb_io_slot_reset (struct pci_dev *pdev);
-static void ixgb_io_resume (struct pci_dev *pdev);
-
-static const struct pci_error_handlers ixgb_err_handler = {
- .error_detected = ixgb_io_error_detected,
- .slot_reset = ixgb_io_slot_reset,
- .resume = ixgb_io_resume,
-};
-
-static struct pci_driver ixgb_driver = {
- .name = ixgb_driver_name,
- .id_table = ixgb_pci_tbl,
- .probe = ixgb_probe,
- .remove = ixgb_remove,
- .err_handler = &ixgb_err_handler
-};
-
-MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
-MODULE_DESCRIPTION("Intel(R) PRO/10GbE Network Driver");
-MODULE_LICENSE("GPL v2");
-
-#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK)
-static int debug = -1;
-module_param(debug, int, 0);
-MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
-
-/**
- * ixgb_init_module - Driver Registration Routine
- *
- * ixgb_init_module is the first routine called when the driver is
- * loaded. All it does is register with the PCI subsystem.
- **/
-
-static int __init
-ixgb_init_module(void)
-{
- pr_info("%s\n", ixgb_driver_string);
- pr_info("%s\n", ixgb_copyright);
-
- return pci_register_driver(&ixgb_driver);
-}
-
-module_init(ixgb_init_module);
-
-/**
- * ixgb_exit_module - Driver Exit Cleanup Routine
- *
- * ixgb_exit_module is called just before the driver is removed
- * from memory.
- **/
-
-static void __exit
-ixgb_exit_module(void)
-{
- pci_unregister_driver(&ixgb_driver);
-}
-
-module_exit(ixgb_exit_module);
-
-/**
- * ixgb_irq_disable - Mask off interrupt generation on the NIC
- * @adapter: board private structure
- **/
-
-static void
-ixgb_irq_disable(struct ixgb_adapter *adapter)
-{
- IXGB_WRITE_REG(&adapter->hw, IMC, ~0);
- IXGB_WRITE_FLUSH(&adapter->hw);
- synchronize_irq(adapter->pdev->irq);
-}
-
-/**
- * ixgb_irq_enable - Enable default interrupt generation settings
- * @adapter: board private structure
- **/
-
-static void
-ixgb_irq_enable(struct ixgb_adapter *adapter)
-{
- u32 val = IXGB_INT_RXT0 | IXGB_INT_RXDMT0 |
- IXGB_INT_TXDW | IXGB_INT_LSC;
- if (adapter->hw.subsystem_vendor_id == PCI_VENDOR_ID_SUN)
- val |= IXGB_INT_GPI0;
- IXGB_WRITE_REG(&adapter->hw, IMS, val);
- IXGB_WRITE_FLUSH(&adapter->hw);
-}
-
-int
-ixgb_up(struct ixgb_adapter *adapter)
-{
- struct net_device *netdev = adapter->netdev;
- int err, irq_flags = IRQF_SHARED;
- int max_frame = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
- struct ixgb_hw *hw = &adapter->hw;
-
- /* hardware has been reset, we need to reload some things */
-
- ixgb_rar_set(hw, netdev->dev_addr, 0);
- ixgb_set_multi(netdev);
-
- ixgb_restore_vlan(adapter);
-
- ixgb_configure_tx(adapter);
- ixgb_setup_rctl(adapter);
- ixgb_configure_rx(adapter);
- ixgb_alloc_rx_buffers(adapter, IXGB_DESC_UNUSED(&adapter->rx_ring));
-
- /* disable interrupts and get the hardware into a known state */
- IXGB_WRITE_REG(&adapter->hw, IMC, 0xffffffff);
-
- /* only enable MSI if bus is in PCI-X mode */
- if (IXGB_READ_REG(&adapter->hw, STATUS) & IXGB_STATUS_PCIX_MODE) {
- err = pci_enable_msi(adapter->pdev);
- if (!err) {
- adapter->have_msi = true;
- irq_flags = 0;
- }
- /* proceed to try to request regular interrupt */
- }
-
- err = request_irq(adapter->pdev->irq, ixgb_intr, irq_flags,
- netdev->name, netdev);
- if (err) {
- if (adapter->have_msi)
- pci_disable_msi(adapter->pdev);
- netif_err(adapter, probe, adapter->netdev,
- "Unable to allocate interrupt Error: %d\n", err);
- return err;
- }
-
- if ((hw->max_frame_size != max_frame) ||
- (hw->max_frame_size !=
- (IXGB_READ_REG(hw, MFS) >> IXGB_MFS_SHIFT))) {
-
- hw->max_frame_size = max_frame;
-
- IXGB_WRITE_REG(hw, MFS, hw->max_frame_size << IXGB_MFS_SHIFT);
-
- if (hw->max_frame_size >
- IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS + ENET_FCS_LENGTH) {
- u32 ctrl0 = IXGB_READ_REG(hw, CTRL0);
-
- if (!(ctrl0 & IXGB_CTRL0_JFE)) {
- ctrl0 |= IXGB_CTRL0_JFE;
- IXGB_WRITE_REG(hw, CTRL0, ctrl0);
- }
- }
- }
-
- clear_bit(__IXGB_DOWN, &adapter->flags);
-
- napi_enable(&adapter->napi);
- ixgb_irq_enable(adapter);
-
- netif_wake_queue(netdev);
-
- mod_timer(&adapter->watchdog_timer, jiffies);
-
- return 0;
-}
-
-void
-ixgb_down(struct ixgb_adapter *adapter, bool kill_watchdog)
-{
- struct net_device *netdev = adapter->netdev;
-
- /* prevent the interrupt handler from restarting watchdog */
- set_bit(__IXGB_DOWN, &adapter->flags);
-
- netif_carrier_off(netdev);
-
- napi_disable(&adapter->napi);
- /* waiting for NAPI to complete can re-enable interrupts */
- ixgb_irq_disable(adapter);
- free_irq(adapter->pdev->irq, netdev);
-
- if (adapter->have_msi)
- pci_disable_msi(adapter->pdev);
-
- if (kill_watchdog)
- del_timer_sync(&adapter->watchdog_timer);
-
- adapter->link_speed = 0;
- adapter->link_duplex = 0;
- netif_stop_queue(netdev);
-
- ixgb_reset(adapter);
- ixgb_clean_tx_ring(adapter);
- ixgb_clean_rx_ring(adapter);
-}
-
-void
-ixgb_reset(struct ixgb_adapter *adapter)
-{
- struct ixgb_hw *hw = &adapter->hw;
-
- ixgb_adapter_stop(hw);
- if (!ixgb_init_hw(hw))
- netif_err(adapter, probe, adapter->netdev, "ixgb_init_hw failed\n");
-
- /* restore frame size information */
- IXGB_WRITE_REG(hw, MFS, hw->max_frame_size << IXGB_MFS_SHIFT);
- if (hw->max_frame_size >
- IXGB_MAX_ENET_FRAME_SIZE_WITHOUT_FCS + ENET_FCS_LENGTH) {
- u32 ctrl0 = IXGB_READ_REG(hw, CTRL0);
- if (!(ctrl0 & IXGB_CTRL0_JFE)) {
- ctrl0 |= IXGB_CTRL0_JFE;
- IXGB_WRITE_REG(hw, CTRL0, ctrl0);
- }
- }
-}
-
-static netdev_features_t
-ixgb_fix_features(struct net_device *netdev, netdev_features_t features)
-{
- /*
- * Tx VLAN insertion does not work per HW design when Rx stripping is
- * disabled.
- */
- if (!(features & NETIF_F_HW_VLAN_CTAG_RX))
- features &= ~NETIF_F_HW_VLAN_CTAG_TX;
-
- return features;
-}
-
-static int
-ixgb_set_features(struct net_device *netdev, netdev_features_t features)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- netdev_features_t changed = features ^ netdev->features;
-
- if (!(changed & (NETIF_F_RXCSUM|NETIF_F_HW_VLAN_CTAG_RX)))
- return 0;
-
- adapter->rx_csum = !!(features & NETIF_F_RXCSUM);
-
- if (netif_running(netdev)) {
- ixgb_down(adapter, true);
- ixgb_up(adapter);
- ixgb_set_speed_duplex(netdev);
- } else
- ixgb_reset(adapter);
-
- return 0;
-}
-
-
-static const struct net_device_ops ixgb_netdev_ops = {
- .ndo_open = ixgb_open,
- .ndo_stop = ixgb_close,
- .ndo_start_xmit = ixgb_xmit_frame,
- .ndo_set_rx_mode = ixgb_set_multi,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = ixgb_set_mac,
- .ndo_change_mtu = ixgb_change_mtu,
- .ndo_tx_timeout = ixgb_tx_timeout,
- .ndo_vlan_rx_add_vid = ixgb_vlan_rx_add_vid,
- .ndo_vlan_rx_kill_vid = ixgb_vlan_rx_kill_vid,
- .ndo_fix_features = ixgb_fix_features,
- .ndo_set_features = ixgb_set_features,
-};
-
-/**
- * ixgb_probe - Device Initialization Routine
- * @pdev: PCI device information struct
- * @ent: entry in ixgb_pci_tbl
- *
- * Returns 0 on success, negative on failure
- *
- * ixgb_probe initializes an adapter identified by a pci_dev structure.
- * The OS initialization, configuring of the adapter private structure,
- * and a hardware reset occur.
- **/
-
-static int
-ixgb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- struct net_device *netdev = NULL;
- struct ixgb_adapter *adapter;
- static int cards_found = 0;
- u8 addr[ETH_ALEN];
- int i;
- int err;
-
- err = pci_enable_device(pdev);
- if (err)
- return err;
-
- err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
- if (err) {
- pr_err("No usable DMA configuration, aborting\n");
- goto err_dma_mask;
- }
-
- err = pci_request_regions(pdev, ixgb_driver_name);
- if (err)
- goto err_request_regions;
-
- pci_set_master(pdev);
-
- netdev = alloc_etherdev(sizeof(struct ixgb_adapter));
- if (!netdev) {
- err = -ENOMEM;
- goto err_alloc_etherdev;
- }
-
- SET_NETDEV_DEV(netdev, &pdev->dev);
-
- pci_set_drvdata(pdev, netdev);
- adapter = netdev_priv(netdev);
- adapter->netdev = netdev;
- adapter->pdev = pdev;
- adapter->hw.back = adapter;
- adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
-
- adapter->hw.hw_addr = pci_ioremap_bar(pdev, BAR_0);
- if (!adapter->hw.hw_addr) {
- err = -EIO;
- goto err_ioremap;
- }
-
- for (i = BAR_1; i < PCI_STD_NUM_BARS; i++) {
- if (pci_resource_len(pdev, i) == 0)
- continue;
- if (pci_resource_flags(pdev, i) & IORESOURCE_IO) {
- adapter->hw.io_base = pci_resource_start(pdev, i);
- break;
- }
- }
-
- netdev->netdev_ops = &ixgb_netdev_ops;
- ixgb_set_ethtool_ops(netdev);
- netdev->watchdog_timeo = 5 * HZ;
- netif_napi_add(netdev, &adapter->napi, ixgb_clean);
-
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
-
- adapter->bd_number = cards_found;
- adapter->link_speed = 0;
- adapter->link_duplex = 0;
-
- /* setup the private structure */
-
- err = ixgb_sw_init(adapter);
- if (err)
- goto err_sw_init;
-
- netdev->hw_features = NETIF_F_SG |
- NETIF_F_TSO |
- NETIF_F_HW_CSUM |
- NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX;
- netdev->features = netdev->hw_features |
- NETIF_F_HW_VLAN_CTAG_FILTER;
- netdev->hw_features |= NETIF_F_RXCSUM;
-
- netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= NETIF_F_HIGHDMA;
-
- /* MTU range: 68 - 16114 */
- netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = IXGB_MAX_JUMBO_FRAME_SIZE - ETH_HLEN;
-
- /* make sure the EEPROM is good */
-
- if (!ixgb_validate_eeprom_checksum(&adapter->hw)) {
- netif_err(adapter, probe, adapter->netdev,
- "The EEPROM Checksum Is Not Valid\n");
- err = -EIO;
- goto err_eeprom;
- }
-
- ixgb_get_ee_mac_addr(&adapter->hw, addr);
- eth_hw_addr_set(netdev, addr);
-
- if (!is_valid_ether_addr(netdev->dev_addr)) {
- netif_err(adapter, probe, adapter->netdev, "Invalid MAC Address\n");
- err = -EIO;
- goto err_eeprom;
- }
-
- adapter->part_num = ixgb_get_ee_pba_number(&adapter->hw);
-
- timer_setup(&adapter->watchdog_timer, ixgb_watchdog, 0);
-
- INIT_WORK(&adapter->tx_timeout_task, ixgb_tx_timeout_task);
-
- strcpy(netdev->name, "eth%d");
- err = register_netdev(netdev);
- if (err)
- goto err_register;
-
- /* carrier off reporting is important to ethtool even BEFORE open */
- netif_carrier_off(netdev);
-
- netif_info(adapter, probe, adapter->netdev,
- "Intel(R) PRO/10GbE Network Connection\n");
- ixgb_check_options(adapter);
- /* reset the hardware with the new settings */
-
- ixgb_reset(adapter);
-
- cards_found++;
- return 0;
-
-err_register:
-err_sw_init:
-err_eeprom:
- iounmap(adapter->hw.hw_addr);
-err_ioremap:
- free_netdev(netdev);
-err_alloc_etherdev:
- pci_release_regions(pdev);
-err_request_regions:
-err_dma_mask:
- pci_disable_device(pdev);
- return err;
-}
-
-/**
- * ixgb_remove - Device Removal Routine
- * @pdev: PCI device information struct
- *
- * ixgb_remove is called by the PCI subsystem to alert the driver
- * that it should release a PCI device. This could be caused by a
- * Hot-Plug event, or because the driver is going to be removed from
- * memory.
- **/
-
-static void
-ixgb_remove(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- cancel_work_sync(&adapter->tx_timeout_task);
-
- unregister_netdev(netdev);
-
- iounmap(adapter->hw.hw_addr);
- pci_release_regions(pdev);
-
- free_netdev(netdev);
- pci_disable_device(pdev);
-}
-
-/**
- * ixgb_sw_init - Initialize general software structures (struct ixgb_adapter)
- * @adapter: board private structure to initialize
- *
- * ixgb_sw_init initializes the Adapter private data structure.
- * Fields are initialized based on PCI device information and
- * OS network device settings (MTU size).
- **/
-
-static int
-ixgb_sw_init(struct ixgb_adapter *adapter)
-{
- struct ixgb_hw *hw = &adapter->hw;
- struct net_device *netdev = adapter->netdev;
- struct pci_dev *pdev = adapter->pdev;
-
- /* PCI config space info */
-
- hw->vendor_id = pdev->vendor;
- hw->device_id = pdev->device;
- hw->subsystem_vendor_id = pdev->subsystem_vendor;
- hw->subsystem_id = pdev->subsystem_device;
-
- hw->max_frame_size = netdev->mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
- adapter->rx_buffer_len = hw->max_frame_size + 8; /* + 8 for errata */
-
- if ((hw->device_id == IXGB_DEVICE_ID_82597EX) ||
- (hw->device_id == IXGB_DEVICE_ID_82597EX_CX4) ||
- (hw->device_id == IXGB_DEVICE_ID_82597EX_LR) ||
- (hw->device_id == IXGB_DEVICE_ID_82597EX_SR))
- hw->mac_type = ixgb_82597;
- else {
- /* should never have loaded on this device */
- netif_err(adapter, probe, adapter->netdev, "unsupported device id\n");
- }
-
- /* enable flow control to be programmed */
- hw->fc.send_xon = 1;
-
- set_bit(__IXGB_DOWN, &adapter->flags);
- return 0;
-}
-
-/**
- * ixgb_open - Called when a network interface is made active
- * @netdev: network interface device structure
- *
- * Returns 0 on success, negative value on failure
- *
- * The open entry point is called when a network interface is made
- * active by the system (IFF_UP). At this point all resources needed
- * for transmit and receive operations are allocated, the interrupt
- * handler is registered with the OS, the watchdog timer is started,
- * and the stack is notified that the interface is ready.
- **/
-
-static int
-ixgb_open(struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- int err;
-
- /* allocate transmit descriptors */
- err = ixgb_setup_tx_resources(adapter);
- if (err)
- goto err_setup_tx;
-
- netif_carrier_off(netdev);
-
- /* allocate receive descriptors */
-
- err = ixgb_setup_rx_resources(adapter);
- if (err)
- goto err_setup_rx;
-
- err = ixgb_up(adapter);
- if (err)
- goto err_up;
-
- netif_start_queue(netdev);
-
- return 0;
-
-err_up:
- ixgb_free_rx_resources(adapter);
-err_setup_rx:
- ixgb_free_tx_resources(adapter);
-err_setup_tx:
- ixgb_reset(adapter);
-
- return err;
-}
-
-/**
- * ixgb_close - Disables a network interface
- * @netdev: network interface device structure
- *
- * Returns 0, this is not allowed to fail
- *
- * The close entry point is called when an interface is de-activated
- * by the OS. The hardware is still under the driver's control, but
- * needs to be disabled. A global MAC reset is issued to stop the
- * hardware, and all transmit and receive resources are freed.
- **/
-
-static int
-ixgb_close(struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- ixgb_down(adapter, true);
-
- ixgb_free_tx_resources(adapter);
- ixgb_free_rx_resources(adapter);
-
- return 0;
-}
-
-/**
- * ixgb_setup_tx_resources - allocate Tx resources (Descriptors)
- * @adapter: board private structure
- *
- * Return 0 on success, negative on failure
- **/
-
-int
-ixgb_setup_tx_resources(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *txdr = &adapter->tx_ring;
- struct pci_dev *pdev = adapter->pdev;
- int size;
-
- size = sizeof(struct ixgb_buffer) * txdr->count;
- txdr->buffer_info = vzalloc(size);
- if (!txdr->buffer_info)
- return -ENOMEM;
-
- /* round up to nearest 4K */
-
- txdr->size = txdr->count * sizeof(struct ixgb_tx_desc);
- txdr->size = ALIGN(txdr->size, 4096);
-
- txdr->desc = dma_alloc_coherent(&pdev->dev, txdr->size, &txdr->dma,
- GFP_KERNEL);
- if (!txdr->desc) {
- vfree(txdr->buffer_info);
- return -ENOMEM;
- }
-
- txdr->next_to_use = 0;
- txdr->next_to_clean = 0;
-
- return 0;
-}
-
-/**
- * ixgb_configure_tx - Configure 82597 Transmit Unit after Reset.
- * @adapter: board private structure
- *
- * Configure the Tx unit of the MAC after a reset.
- **/
-
-static void
-ixgb_configure_tx(struct ixgb_adapter *adapter)
-{
- u64 tdba = adapter->tx_ring.dma;
- u32 tdlen = adapter->tx_ring.count * sizeof(struct ixgb_tx_desc);
- u32 tctl;
- struct ixgb_hw *hw = &adapter->hw;
-
- /* Setup the Base and Length of the Tx Descriptor Ring
- * tx_ring.dma can be either a 32 or 64 bit value
- */
-
- IXGB_WRITE_REG(hw, TDBAL, (tdba & 0x00000000ffffffffULL));
- IXGB_WRITE_REG(hw, TDBAH, (tdba >> 32));
-
- IXGB_WRITE_REG(hw, TDLEN, tdlen);
-
- /* Setup the HW Tx Head and Tail descriptor pointers */
-
- IXGB_WRITE_REG(hw, TDH, 0);
- IXGB_WRITE_REG(hw, TDT, 0);
-
- /* don't set up txdctl, it induces performance problems if configured
- * incorrectly */
- /* Set the Tx Interrupt Delay register */
-
- IXGB_WRITE_REG(hw, TIDV, adapter->tx_int_delay);
-
- /* Program the Transmit Control Register */
-
- tctl = IXGB_TCTL_TCE | IXGB_TCTL_TXEN | IXGB_TCTL_TPDE;
- IXGB_WRITE_REG(hw, TCTL, tctl);
-
- /* Setup Transmit Descriptor Settings for this adapter */
- adapter->tx_cmd_type =
- IXGB_TX_DESC_TYPE |
- (adapter->tx_int_delay_enable ? IXGB_TX_DESC_CMD_IDE : 0);
-}
-
-/**
- * ixgb_setup_rx_resources - allocate Rx resources (Descriptors)
- * @adapter: board private structure
- *
- * Returns 0 on success, negative on failure
- **/
-
-int
-ixgb_setup_rx_resources(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *rxdr = &adapter->rx_ring;
- struct pci_dev *pdev = adapter->pdev;
- int size;
-
- size = sizeof(struct ixgb_buffer) * rxdr->count;
- rxdr->buffer_info = vzalloc(size);
- if (!rxdr->buffer_info)
- return -ENOMEM;
-
- /* Round up to nearest 4K */
-
- rxdr->size = rxdr->count * sizeof(struct ixgb_rx_desc);
- rxdr->size = ALIGN(rxdr->size, 4096);
-
- rxdr->desc = dma_alloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma,
- GFP_KERNEL);
-
- if (!rxdr->desc) {
- vfree(rxdr->buffer_info);
- return -ENOMEM;
- }
-
- rxdr->next_to_clean = 0;
- rxdr->next_to_use = 0;
-
- return 0;
-}
-
-/**
- * ixgb_setup_rctl - configure the receive control register
- * @adapter: Board private structure
- **/
-
-static void
-ixgb_setup_rctl(struct ixgb_adapter *adapter)
-{
- u32 rctl;
-
- rctl = IXGB_READ_REG(&adapter->hw, RCTL);
-
- rctl &= ~(3 << IXGB_RCTL_MO_SHIFT);
-
- rctl |=
- IXGB_RCTL_BAM | IXGB_RCTL_RDMTS_1_2 |
- IXGB_RCTL_RXEN | IXGB_RCTL_CFF |
- (adapter->hw.mc_filter_type << IXGB_RCTL_MO_SHIFT);
-
- rctl |= IXGB_RCTL_SECRC;
-
- if (adapter->rx_buffer_len <= IXGB_RXBUFFER_2048)
- rctl |= IXGB_RCTL_BSIZE_2048;
- else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_4096)
- rctl |= IXGB_RCTL_BSIZE_4096;
- else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_8192)
- rctl |= IXGB_RCTL_BSIZE_8192;
- else if (adapter->rx_buffer_len <= IXGB_RXBUFFER_16384)
- rctl |= IXGB_RCTL_BSIZE_16384;
-
- IXGB_WRITE_REG(&adapter->hw, RCTL, rctl);
-}
-
-/**
- * ixgb_configure_rx - Configure 82597 Receive Unit after Reset.
- * @adapter: board private structure
- *
- * Configure the Rx unit of the MAC after a reset.
- **/
-
-static void
-ixgb_configure_rx(struct ixgb_adapter *adapter)
-{
- u64 rdba = adapter->rx_ring.dma;
- u32 rdlen = adapter->rx_ring.count * sizeof(struct ixgb_rx_desc);
- struct ixgb_hw *hw = &adapter->hw;
- u32 rctl;
- u32 rxcsum;
-
- /* make sure receives are disabled while setting up the descriptors */
-
- rctl = IXGB_READ_REG(hw, RCTL);
- IXGB_WRITE_REG(hw, RCTL, rctl & ~IXGB_RCTL_RXEN);
-
- /* set the Receive Delay Timer Register */
-
- IXGB_WRITE_REG(hw, RDTR, adapter->rx_int_delay);
-
- /* Setup the Base and Length of the Rx Descriptor Ring */
-
- IXGB_WRITE_REG(hw, RDBAL, (rdba & 0x00000000ffffffffULL));
- IXGB_WRITE_REG(hw, RDBAH, (rdba >> 32));
-
- IXGB_WRITE_REG(hw, RDLEN, rdlen);
-
- /* Setup the HW Rx Head and Tail Descriptor Pointers */
- IXGB_WRITE_REG(hw, RDH, 0);
- IXGB_WRITE_REG(hw, RDT, 0);
-
- /* due to the hardware errata with RXDCTL, we are unable to use any of
- * the performance enhancing features of it without causing other
- * subtle bugs, some of the bugs could include receive length
- * corruption at high data rates (WTHRESH > 0) and/or receive
- * descriptor ring irregularities (particularly in hardware cache) */
- IXGB_WRITE_REG(hw, RXDCTL, 0);
-
- /* Enable Receive Checksum Offload for TCP and UDP */
- if (adapter->rx_csum) {
- rxcsum = IXGB_READ_REG(hw, RXCSUM);
- rxcsum |= IXGB_RXCSUM_TUOFL;
- IXGB_WRITE_REG(hw, RXCSUM, rxcsum);
- }
-
- /* Enable Receives */
-
- IXGB_WRITE_REG(hw, RCTL, rctl);
-}
-
-/**
- * ixgb_free_tx_resources - Free Tx Resources
- * @adapter: board private structure
- *
- * Free all transmit software resources
- **/
-
-void
-ixgb_free_tx_resources(struct ixgb_adapter *adapter)
-{
- struct pci_dev *pdev = adapter->pdev;
-
- ixgb_clean_tx_ring(adapter);
-
- vfree(adapter->tx_ring.buffer_info);
- adapter->tx_ring.buffer_info = NULL;
-
- dma_free_coherent(&pdev->dev, adapter->tx_ring.size,
- adapter->tx_ring.desc, adapter->tx_ring.dma);
-
- adapter->tx_ring.desc = NULL;
-}
-
-static void
-ixgb_unmap_and_free_tx_resource(struct ixgb_adapter *adapter,
- struct ixgb_buffer *buffer_info)
-{
- if (buffer_info->dma) {
- if (buffer_info->mapped_as_page)
- dma_unmap_page(&adapter->pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_TO_DEVICE);
- else
- dma_unmap_single(&adapter->pdev->dev, buffer_info->dma,
- buffer_info->length, DMA_TO_DEVICE);
- buffer_info->dma = 0;
- }
-
- if (buffer_info->skb) {
- dev_kfree_skb_any(buffer_info->skb);
- buffer_info->skb = NULL;
- }
- buffer_info->time_stamp = 0;
- /* these fields must always be initialized in tx
- * buffer_info->length = 0;
- * buffer_info->next_to_watch = 0; */
-}
-
-/**
- * ixgb_clean_tx_ring - Free Tx Buffers
- * @adapter: board private structure
- **/
-
-static void
-ixgb_clean_tx_ring(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
- struct ixgb_buffer *buffer_info;
- unsigned long size;
- unsigned int i;
-
- /* Free all the Tx ring sk_buffs */
-
- for (i = 0; i < tx_ring->count; i++) {
- buffer_info = &tx_ring->buffer_info[i];
- ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
- }
-
- size = sizeof(struct ixgb_buffer) * tx_ring->count;
- memset(tx_ring->buffer_info, 0, size);
-
- /* Zero out the descriptor ring */
-
- memset(tx_ring->desc, 0, tx_ring->size);
-
- tx_ring->next_to_use = 0;
- tx_ring->next_to_clean = 0;
-
- IXGB_WRITE_REG(&adapter->hw, TDH, 0);
- IXGB_WRITE_REG(&adapter->hw, TDT, 0);
-}
-
-/**
- * ixgb_free_rx_resources - Free Rx Resources
- * @adapter: board private structure
- *
- * Free all receive software resources
- **/
-
-void
-ixgb_free_rx_resources(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
- struct pci_dev *pdev = adapter->pdev;
-
- ixgb_clean_rx_ring(adapter);
-
- vfree(rx_ring->buffer_info);
- rx_ring->buffer_info = NULL;
-
- dma_free_coherent(&pdev->dev, rx_ring->size, rx_ring->desc,
- rx_ring->dma);
-
- rx_ring->desc = NULL;
-}
-
-/**
- * ixgb_clean_rx_ring - Free Rx Buffers
- * @adapter: board private structure
- **/
-
-static void
-ixgb_clean_rx_ring(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
- struct ixgb_buffer *buffer_info;
- struct pci_dev *pdev = adapter->pdev;
- unsigned long size;
- unsigned int i;
-
- /* Free all the Rx ring sk_buffs */
-
- for (i = 0; i < rx_ring->count; i++) {
- buffer_info = &rx_ring->buffer_info[i];
- if (buffer_info->dma) {
- dma_unmap_single(&pdev->dev,
- buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
- buffer_info->dma = 0;
- buffer_info->length = 0;
- }
-
- if (buffer_info->skb) {
- dev_kfree_skb(buffer_info->skb);
- buffer_info->skb = NULL;
- }
- }
-
- size = sizeof(struct ixgb_buffer) * rx_ring->count;
- memset(rx_ring->buffer_info, 0, size);
-
- /* Zero out the descriptor ring */
-
- memset(rx_ring->desc, 0, rx_ring->size);
-
- rx_ring->next_to_clean = 0;
- rx_ring->next_to_use = 0;
-
- IXGB_WRITE_REG(&adapter->hw, RDH, 0);
- IXGB_WRITE_REG(&adapter->hw, RDT, 0);
-}
-
-/**
- * ixgb_set_mac - Change the Ethernet Address of the NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-
-static int
-ixgb_set_mac(struct net_device *netdev, void *p)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct sockaddr *addr = p;
-
- if (!is_valid_ether_addr(addr->sa_data))
- return -EADDRNOTAVAIL;
-
- eth_hw_addr_set(netdev, addr->sa_data);
-
- ixgb_rar_set(&adapter->hw, addr->sa_data, 0);
-
- return 0;
-}
-
-/**
- * ixgb_set_multi - Multicast and Promiscuous mode set
- * @netdev: network interface device structure
- *
- * The set_multi entry point is called whenever the multicast address
- * list or the network interface flags are updated. This routine is
- * responsible for configuring the hardware for proper multicast,
- * promiscuous mode, and all-multi behavior.
- **/
-
-static void
-ixgb_set_multi(struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
- struct netdev_hw_addr *ha;
- u32 rctl;
-
- /* Check for Promiscuous and All Multicast modes */
-
- rctl = IXGB_READ_REG(hw, RCTL);
-
- if (netdev->flags & IFF_PROMISC) {
- rctl |= (IXGB_RCTL_UPE | IXGB_RCTL_MPE);
- /* disable VLAN filtering */
- rctl &= ~IXGB_RCTL_CFIEN;
- rctl &= ~IXGB_RCTL_VFE;
- } else {
- if (netdev->flags & IFF_ALLMULTI) {
- rctl |= IXGB_RCTL_MPE;
- rctl &= ~IXGB_RCTL_UPE;
- } else {
- rctl &= ~(IXGB_RCTL_UPE | IXGB_RCTL_MPE);
- }
- /* enable VLAN filtering */
- rctl |= IXGB_RCTL_VFE;
- rctl &= ~IXGB_RCTL_CFIEN;
- }
-
- if (netdev_mc_count(netdev) > IXGB_MAX_NUM_MULTICAST_ADDRESSES) {
- rctl |= IXGB_RCTL_MPE;
- IXGB_WRITE_REG(hw, RCTL, rctl);
- } else {
- u8 *mta = kmalloc_array(ETH_ALEN,
- IXGB_MAX_NUM_MULTICAST_ADDRESSES,
- GFP_ATOMIC);
- u8 *addr;
- if (!mta)
- goto alloc_failed;
-
- IXGB_WRITE_REG(hw, RCTL, rctl);
-
- addr = mta;
- netdev_for_each_mc_addr(ha, netdev) {
- memcpy(addr, ha->addr, ETH_ALEN);
- addr += ETH_ALEN;
- }
-
- ixgb_mc_addr_list_update(hw, mta, netdev_mc_count(netdev), 0);
- kfree(mta);
- }
-
-alloc_failed:
- if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
- ixgb_vlan_strip_enable(adapter);
- else
- ixgb_vlan_strip_disable(adapter);
-
-}
-
-/**
- * ixgb_watchdog - Timer Call-back
- * @t: pointer to timer_list containing our private info pointer
- **/
-
-static void
-ixgb_watchdog(struct timer_list *t)
-{
- struct ixgb_adapter *adapter = from_timer(adapter, t, watchdog_timer);
- struct net_device *netdev = adapter->netdev;
- struct ixgb_desc_ring *txdr = &adapter->tx_ring;
-
- ixgb_check_for_link(&adapter->hw);
-
- if (ixgb_check_for_bad_link(&adapter->hw)) {
- /* force the reset path */
- netif_stop_queue(netdev);
- }
-
- if (adapter->hw.link_up) {
- if (!netif_carrier_ok(netdev)) {
- netdev_info(netdev,
- "NIC Link is Up 10 Gbps Full Duplex, Flow Control: %s\n",
- (adapter->hw.fc.type == ixgb_fc_full) ?
- "RX/TX" :
- (adapter->hw.fc.type == ixgb_fc_rx_pause) ?
- "RX" :
- (adapter->hw.fc.type == ixgb_fc_tx_pause) ?
- "TX" : "None");
- adapter->link_speed = 10000;
- adapter->link_duplex = FULL_DUPLEX;
- netif_carrier_on(netdev);
- }
- } else {
- if (netif_carrier_ok(netdev)) {
- adapter->link_speed = 0;
- adapter->link_duplex = 0;
- netdev_info(netdev, "NIC Link is Down\n");
- netif_carrier_off(netdev);
- }
- }
-
- ixgb_update_stats(adapter);
-
- if (!netif_carrier_ok(netdev)) {
- if (IXGB_DESC_UNUSED(txdr) + 1 < txdr->count) {
- /* We've lost link, so the controller stops DMA,
- * but we've got queued Tx work that's never going
- * to get done, so reset controller to flush Tx.
- * (Do the reset outside of interrupt context). */
- schedule_work(&adapter->tx_timeout_task);
- /* return immediately since reset is imminent */
- return;
- }
- }
-
- /* Force detection of hung controller every watchdog period */
- adapter->detect_tx_hung = true;
-
- /* generate an interrupt to force clean up of any stragglers */
- IXGB_WRITE_REG(&adapter->hw, ICS, IXGB_INT_TXDW);
-
- /* Reset the timer */
- mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);
-}
-
-#define IXGB_TX_FLAGS_CSUM 0x00000001
-#define IXGB_TX_FLAGS_VLAN 0x00000002
-#define IXGB_TX_FLAGS_TSO 0x00000004
-
-static int
-ixgb_tso(struct ixgb_adapter *adapter, struct sk_buff *skb)
-{
- struct ixgb_context_desc *context_desc;
- unsigned int i;
- u8 ipcss, ipcso, tucss, tucso, hdr_len;
- u16 ipcse, tucse, mss;
-
- if (likely(skb_is_gso(skb))) {
- struct ixgb_buffer *buffer_info;
- struct iphdr *iph;
- int err;
-
- err = skb_cow_head(skb, 0);
- if (err < 0)
- return err;
-
- hdr_len = skb_tcp_all_headers(skb);
- mss = skb_shinfo(skb)->gso_size;
- iph = ip_hdr(skb);
- iph->tot_len = 0;
- iph->check = 0;
- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
- iph->daddr, 0,
- IPPROTO_TCP, 0);
- ipcss = skb_network_offset(skb);
- ipcso = (void *)&(iph->check) - (void *)skb->data;
- ipcse = skb_transport_offset(skb) - 1;
- tucss = skb_transport_offset(skb);
- tucso = (void *)&(tcp_hdr(skb)->check) - (void *)skb->data;
- tucse = 0;
-
- i = adapter->tx_ring.next_to_use;
- context_desc = IXGB_CONTEXT_DESC(adapter->tx_ring, i);
- buffer_info = &adapter->tx_ring.buffer_info[i];
- WARN_ON(buffer_info->dma != 0);
-
- context_desc->ipcss = ipcss;
- context_desc->ipcso = ipcso;
- context_desc->ipcse = cpu_to_le16(ipcse);
- context_desc->tucss = tucss;
- context_desc->tucso = tucso;
- context_desc->tucse = cpu_to_le16(tucse);
- context_desc->mss = cpu_to_le16(mss);
- context_desc->hdr_len = hdr_len;
- context_desc->status = 0;
- context_desc->cmd_type_len = cpu_to_le32(
- IXGB_CONTEXT_DESC_TYPE
- | IXGB_CONTEXT_DESC_CMD_TSE
- | IXGB_CONTEXT_DESC_CMD_IP
- | IXGB_CONTEXT_DESC_CMD_TCP
- | IXGB_CONTEXT_DESC_CMD_IDE
- | (skb->len - (hdr_len)));
-
-
- if (++i == adapter->tx_ring.count) i = 0;
- adapter->tx_ring.next_to_use = i;
-
- return 1;
- }
-
- return 0;
-}
-
-static bool
-ixgb_tx_csum(struct ixgb_adapter *adapter, struct sk_buff *skb)
-{
- struct ixgb_context_desc *context_desc;
- unsigned int i;
- u8 css, cso;
-
- if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
- struct ixgb_buffer *buffer_info;
- css = skb_checksum_start_offset(skb);
- cso = css + skb->csum_offset;
-
- i = adapter->tx_ring.next_to_use;
- context_desc = IXGB_CONTEXT_DESC(adapter->tx_ring, i);
- buffer_info = &adapter->tx_ring.buffer_info[i];
- WARN_ON(buffer_info->dma != 0);
-
- context_desc->tucss = css;
- context_desc->tucso = cso;
- context_desc->tucse = 0;
- /* zero out any previously existing data in one instruction */
- *(u32 *)&(context_desc->ipcss) = 0;
- context_desc->status = 0;
- context_desc->hdr_len = 0;
- context_desc->mss = 0;
- context_desc->cmd_type_len =
- cpu_to_le32(IXGB_CONTEXT_DESC_TYPE
- | IXGB_TX_DESC_CMD_IDE);
-
- if (++i == adapter->tx_ring.count) i = 0;
- adapter->tx_ring.next_to_use = i;
-
- return true;
- }
-
- return false;
-}
-
-#define IXGB_MAX_TXD_PWR 14
-#define IXGB_MAX_DATA_PER_TXD (1<<IXGB_MAX_TXD_PWR)
-
-static int
-ixgb_tx_map(struct ixgb_adapter *adapter, struct sk_buff *skb,
- unsigned int first)
-{
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
- struct pci_dev *pdev = adapter->pdev;
- struct ixgb_buffer *buffer_info;
- int len = skb_headlen(skb);
- unsigned int offset = 0, size, count = 0, i;
- unsigned int mss = skb_shinfo(skb)->gso_size;
- unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
- unsigned int f;
-
- i = tx_ring->next_to_use;
-
- while (len) {
- buffer_info = &tx_ring->buffer_info[i];
- size = min(len, IXGB_MAX_DATA_PER_TXD);
- /* Workaround for premature desc write-backs
- * in TSO mode. Append 4-byte sentinel desc */
- if (unlikely(mss && !nr_frags && size == len && size > 8))
- size -= 4;
-
- buffer_info->length = size;
- WARN_ON(buffer_info->dma != 0);
- buffer_info->time_stamp = jiffies;
- buffer_info->mapped_as_page = false;
- buffer_info->dma = dma_map_single(&pdev->dev,
- skb->data + offset,
- size, DMA_TO_DEVICE);
- if (dma_mapping_error(&pdev->dev, buffer_info->dma))
- goto dma_error;
- buffer_info->next_to_watch = 0;
-
- len -= size;
- offset += size;
- count++;
- if (len) {
- i++;
- if (i == tx_ring->count)
- i = 0;
- }
- }
-
- for (f = 0; f < nr_frags; f++) {
- const skb_frag_t *frag = &skb_shinfo(skb)->frags[f];
- len = skb_frag_size(frag);
- offset = 0;
-
- while (len) {
- i++;
- if (i == tx_ring->count)
- i = 0;
-
- buffer_info = &tx_ring->buffer_info[i];
- size = min(len, IXGB_MAX_DATA_PER_TXD);
-
- /* Workaround for premature desc write-backs
- * in TSO mode. Append 4-byte sentinel desc */
- if (unlikely(mss && (f == (nr_frags - 1))
- && size == len && size > 8))
- size -= 4;
-
- buffer_info->length = size;
- buffer_info->time_stamp = jiffies;
- buffer_info->mapped_as_page = true;
- buffer_info->dma =
- skb_frag_dma_map(&pdev->dev, frag, offset, size,
- DMA_TO_DEVICE);
- if (dma_mapping_error(&pdev->dev, buffer_info->dma))
- goto dma_error;
- buffer_info->next_to_watch = 0;
-
- len -= size;
- offset += size;
- count++;
- }
- }
- tx_ring->buffer_info[i].skb = skb;
- tx_ring->buffer_info[first].next_to_watch = i;
-
- return count;
-
-dma_error:
- dev_err(&pdev->dev, "TX DMA map failed\n");
- buffer_info->dma = 0;
- if (count)
- count--;
-
- while (count--) {
- if (i==0)
- i += tx_ring->count;
- i--;
- buffer_info = &tx_ring->buffer_info[i];
- ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
- }
-
- return 0;
-}
-
-static void
-ixgb_tx_queue(struct ixgb_adapter *adapter, int count, int vlan_id,int tx_flags)
-{
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
- struct ixgb_tx_desc *tx_desc = NULL;
- struct ixgb_buffer *buffer_info;
- u32 cmd_type_len = adapter->tx_cmd_type;
- u8 status = 0;
- u8 popts = 0;
- unsigned int i;
-
- if (tx_flags & IXGB_TX_FLAGS_TSO) {
- cmd_type_len |= IXGB_TX_DESC_CMD_TSE;
- popts |= (IXGB_TX_DESC_POPTS_IXSM | IXGB_TX_DESC_POPTS_TXSM);
- }
-
- if (tx_flags & IXGB_TX_FLAGS_CSUM)
- popts |= IXGB_TX_DESC_POPTS_TXSM;
-
- if (tx_flags & IXGB_TX_FLAGS_VLAN)
- cmd_type_len |= IXGB_TX_DESC_CMD_VLE;
-
- i = tx_ring->next_to_use;
-
- while (count--) {
- buffer_info = &tx_ring->buffer_info[i];
- tx_desc = IXGB_TX_DESC(*tx_ring, i);
- tx_desc->buff_addr = cpu_to_le64(buffer_info->dma);
- tx_desc->cmd_type_len =
- cpu_to_le32(cmd_type_len | buffer_info->length);
- tx_desc->status = status;
- tx_desc->popts = popts;
- tx_desc->vlan = cpu_to_le16(vlan_id);
-
- if (++i == tx_ring->count) i = 0;
- }
-
- tx_desc->cmd_type_len |=
- cpu_to_le32(IXGB_TX_DESC_CMD_EOP | IXGB_TX_DESC_CMD_RS);
-
- /* Force memory writes to complete before letting h/w
- * know there are new descriptors to fetch. (Only
- * applicable for weak-ordered memory model archs,
- * such as IA-64). */
- wmb();
-
- tx_ring->next_to_use = i;
- IXGB_WRITE_REG(&adapter->hw, TDT, i);
-}
-
-static int __ixgb_maybe_stop_tx(struct net_device *netdev, int size)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
-
- netif_stop_queue(netdev);
- /* Herbert's original patch had:
- * smp_mb__after_netif_stop_queue();
- * but since that doesn't exist yet, just open code it. */
- smp_mb();
-
- /* We need to check again in a case another CPU has just
- * made room available. */
- if (likely(IXGB_DESC_UNUSED(tx_ring) < size))
- return -EBUSY;
-
- /* A reprieve! */
- netif_start_queue(netdev);
- ++adapter->restart_queue;
- return 0;
-}
-
-static int ixgb_maybe_stop_tx(struct net_device *netdev,
- struct ixgb_desc_ring *tx_ring, int size)
-{
- if (likely(IXGB_DESC_UNUSED(tx_ring) >= size))
- return 0;
- return __ixgb_maybe_stop_tx(netdev, size);
-}
-
-
-/* Tx Descriptors needed, worst case */
-#define TXD_USE_COUNT(S) (((S) >> IXGB_MAX_TXD_PWR) + \
- (((S) & (IXGB_MAX_DATA_PER_TXD - 1)) ? 1 : 0))
-#define DESC_NEEDED TXD_USE_COUNT(IXGB_MAX_DATA_PER_TXD) /* skb->date */ + \
- MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE) + 1 /* for context */ \
- + 1 /* one more needed for sentinel TSO workaround */
-
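
The TXD_USE_COUNT()/DESC_NEEDED macros above compute the worst-case number of descriptors a single frame can consume. The sketch below checks the same arithmetic in user space; the per-descriptor limit, MAX_SKB_FRAGS and PAGE_SIZE values are assumptions for illustration, not taken from this patch.

/* Minimal user-space model of the worst-case Tx descriptor math above.
 * All constants here are assumed example values (16 KB per descriptor,
 * 17 fragments, 4 KB pages), not definitions from the driver.
 */
#include <stdio.h>

#define MAX_TXD_PWR		14
#define MAX_DATA_PER_TXD	(1u << MAX_TXD_PWR)
#define MAX_SKB_FRAGS		17
#define PAGE_SIZE		4096u

/* descriptors needed for a buffer of s bytes */
static unsigned int txd_use_count(unsigned int s)
{
	return (s >> MAX_TXD_PWR) + ((s & (MAX_DATA_PER_TXD - 1)) ? 1 : 0);
}

int main(void)
{
	/* worst case: full linear chunk + all frags + context + TSO sentinel */
	unsigned int desc_needed = txd_use_count(MAX_DATA_PER_TXD) +
				   MAX_SKB_FRAGS * txd_use_count(PAGE_SIZE) +
				   1 /* context */ + 1 /* sentinel */;

	printf("worst-case descriptors per frame: %u\n", desc_needed);
	return 0;
}
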
-static netdev_tx_t
-ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- unsigned int first;
- unsigned int tx_flags = 0;
- int vlan_id = 0;
- int count = 0;
- int tso;
-
- if (test_bit(__IXGB_DOWN, &adapter->flags)) {
- dev_kfree_skb_any(skb);
- return NETDEV_TX_OK;
- }
-
- if (skb->len <= 0) {
- dev_kfree_skb_any(skb);
- return NETDEV_TX_OK;
- }
-
- if (unlikely(ixgb_maybe_stop_tx(netdev, &adapter->tx_ring,
- DESC_NEEDED)))
- return NETDEV_TX_BUSY;
-
- if (skb_vlan_tag_present(skb)) {
- tx_flags |= IXGB_TX_FLAGS_VLAN;
- vlan_id = skb_vlan_tag_get(skb);
- }
-
- first = adapter->tx_ring.next_to_use;
-
- tso = ixgb_tso(adapter, skb);
- if (tso < 0) {
- dev_kfree_skb_any(skb);
- return NETDEV_TX_OK;
- }
-
- if (likely(tso))
- tx_flags |= IXGB_TX_FLAGS_TSO;
- else if (ixgb_tx_csum(adapter, skb))
- tx_flags |= IXGB_TX_FLAGS_CSUM;
-
- count = ixgb_tx_map(adapter, skb, first);
-
- if (count) {
- ixgb_tx_queue(adapter, count, vlan_id, tx_flags);
- /* Make sure there is space in the ring for the next send. */
- ixgb_maybe_stop_tx(netdev, &adapter->tx_ring, DESC_NEEDED);
-
- } else {
- dev_kfree_skb_any(skb);
- adapter->tx_ring.buffer_info[first].time_stamp = 0;
- adapter->tx_ring.next_to_use = first;
- }
-
- return NETDEV_TX_OK;
-}
-
-/**
- * ixgb_tx_timeout - Respond to a Tx Hang
- * @netdev: network interface device structure
- * @txqueue: queue hanging (unused)
- **/
-
-static void
-ixgb_tx_timeout(struct net_device *netdev, unsigned int __always_unused txqueue)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- /* Do the reset outside of interrupt context */
- schedule_work(&adapter->tx_timeout_task);
-}
-
-static void
-ixgb_tx_timeout_task(struct work_struct *work)
-{
- struct ixgb_adapter *adapter =
- container_of(work, struct ixgb_adapter, tx_timeout_task);
-
- adapter->tx_timeout_count++;
- ixgb_down(adapter, true);
- ixgb_up(adapter);
-}
-
-/**
- * ixgb_change_mtu - Change the Maximum Transfer Unit
- * @netdev: network interface device structure
- * @new_mtu: new value for maximum frame size
- *
- * Returns 0 on success, negative on failure
- **/
-
-static int
-ixgb_change_mtu(struct net_device *netdev, int new_mtu)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- int max_frame = new_mtu + ENET_HEADER_SIZE + ENET_FCS_LENGTH;
-
- if (netif_running(netdev))
- ixgb_down(adapter, true);
-
- adapter->rx_buffer_len = max_frame + 8; /* + 8 for errata */
-
- netdev->mtu = new_mtu;
-
- if (netif_running(netdev))
- ixgb_up(adapter);
-
- return 0;
-}
-
-/**
- * ixgb_update_stats - Update the board statistics counters.
- * @adapter: board private structure
- **/
-
-void
-ixgb_update_stats(struct ixgb_adapter *adapter)
-{
- struct net_device *netdev = adapter->netdev;
- struct pci_dev *pdev = adapter->pdev;
-
- /* Prevent stats update while adapter is being reset */
- if (pci_channel_offline(pdev))
- return;
-
- if ((netdev->flags & IFF_PROMISC) || (netdev->flags & IFF_ALLMULTI) ||
- (netdev_mc_count(netdev) > IXGB_MAX_NUM_MULTICAST_ADDRESSES)) {
- u64 multi = IXGB_READ_REG(&adapter->hw, MPRCL);
- u32 bcast_l = IXGB_READ_REG(&adapter->hw, BPRCL);
- u32 bcast_h = IXGB_READ_REG(&adapter->hw, BPRCH);
- u64 bcast = ((u64)bcast_h << 32) | bcast_l;
-
- multi |= ((u64)IXGB_READ_REG(&adapter->hw, MPRCH) << 32);
- /* fix up multicast stats by removing broadcasts */
- if (multi >= bcast)
- multi -= bcast;
-
- adapter->stats.mprcl += (multi & 0xFFFFFFFF);
- adapter->stats.mprch += (multi >> 32);
- adapter->stats.bprcl += bcast_l;
- adapter->stats.bprch += bcast_h;
- } else {
- adapter->stats.mprcl += IXGB_READ_REG(&adapter->hw, MPRCL);
- adapter->stats.mprch += IXGB_READ_REG(&adapter->hw, MPRCH);
- adapter->stats.bprcl += IXGB_READ_REG(&adapter->hw, BPRCL);
- adapter->stats.bprch += IXGB_READ_REG(&adapter->hw, BPRCH);
- }
- adapter->stats.tprl += IXGB_READ_REG(&adapter->hw, TPRL);
- adapter->stats.tprh += IXGB_READ_REG(&adapter->hw, TPRH);
- adapter->stats.gprcl += IXGB_READ_REG(&adapter->hw, GPRCL);
- adapter->stats.gprch += IXGB_READ_REG(&adapter->hw, GPRCH);
- adapter->stats.uprcl += IXGB_READ_REG(&adapter->hw, UPRCL);
- adapter->stats.uprch += IXGB_READ_REG(&adapter->hw, UPRCH);
- adapter->stats.vprcl += IXGB_READ_REG(&adapter->hw, VPRCL);
- adapter->stats.vprch += IXGB_READ_REG(&adapter->hw, VPRCH);
- adapter->stats.jprcl += IXGB_READ_REG(&adapter->hw, JPRCL);
- adapter->stats.jprch += IXGB_READ_REG(&adapter->hw, JPRCH);
- adapter->stats.gorcl += IXGB_READ_REG(&adapter->hw, GORCL);
- adapter->stats.gorch += IXGB_READ_REG(&adapter->hw, GORCH);
- adapter->stats.torl += IXGB_READ_REG(&adapter->hw, TORL);
- adapter->stats.torh += IXGB_READ_REG(&adapter->hw, TORH);
- adapter->stats.rnbc += IXGB_READ_REG(&adapter->hw, RNBC);
- adapter->stats.ruc += IXGB_READ_REG(&adapter->hw, RUC);
- adapter->stats.roc += IXGB_READ_REG(&adapter->hw, ROC);
- adapter->stats.rlec += IXGB_READ_REG(&adapter->hw, RLEC);
- adapter->stats.crcerrs += IXGB_READ_REG(&adapter->hw, CRCERRS);
- adapter->stats.icbc += IXGB_READ_REG(&adapter->hw, ICBC);
- adapter->stats.ecbc += IXGB_READ_REG(&adapter->hw, ECBC);
- adapter->stats.mpc += IXGB_READ_REG(&adapter->hw, MPC);
- adapter->stats.tptl += IXGB_READ_REG(&adapter->hw, TPTL);
- adapter->stats.tpth += IXGB_READ_REG(&adapter->hw, TPTH);
- adapter->stats.gptcl += IXGB_READ_REG(&adapter->hw, GPTCL);
- adapter->stats.gptch += IXGB_READ_REG(&adapter->hw, GPTCH);
- adapter->stats.bptcl += IXGB_READ_REG(&adapter->hw, BPTCL);
- adapter->stats.bptch += IXGB_READ_REG(&adapter->hw, BPTCH);
- adapter->stats.mptcl += IXGB_READ_REG(&adapter->hw, MPTCL);
- adapter->stats.mptch += IXGB_READ_REG(&adapter->hw, MPTCH);
- adapter->stats.uptcl += IXGB_READ_REG(&adapter->hw, UPTCL);
- adapter->stats.uptch += IXGB_READ_REG(&adapter->hw, UPTCH);
- adapter->stats.vptcl += IXGB_READ_REG(&adapter->hw, VPTCL);
- adapter->stats.vptch += IXGB_READ_REG(&adapter->hw, VPTCH);
- adapter->stats.jptcl += IXGB_READ_REG(&adapter->hw, JPTCL);
- adapter->stats.jptch += IXGB_READ_REG(&adapter->hw, JPTCH);
- adapter->stats.gotcl += IXGB_READ_REG(&adapter->hw, GOTCL);
- adapter->stats.gotch += IXGB_READ_REG(&adapter->hw, GOTCH);
- adapter->stats.totl += IXGB_READ_REG(&adapter->hw, TOTL);
- adapter->stats.toth += IXGB_READ_REG(&adapter->hw, TOTH);
- adapter->stats.dc += IXGB_READ_REG(&adapter->hw, DC);
- adapter->stats.plt64c += IXGB_READ_REG(&adapter->hw, PLT64C);
- adapter->stats.tsctc += IXGB_READ_REG(&adapter->hw, TSCTC);
- adapter->stats.tsctfc += IXGB_READ_REG(&adapter->hw, TSCTFC);
- adapter->stats.ibic += IXGB_READ_REG(&adapter->hw, IBIC);
- adapter->stats.rfc += IXGB_READ_REG(&adapter->hw, RFC);
- adapter->stats.lfc += IXGB_READ_REG(&adapter->hw, LFC);
- adapter->stats.pfrc += IXGB_READ_REG(&adapter->hw, PFRC);
- adapter->stats.pftc += IXGB_READ_REG(&adapter->hw, PFTC);
- adapter->stats.mcfrc += IXGB_READ_REG(&adapter->hw, MCFRC);
- adapter->stats.mcftc += IXGB_READ_REG(&adapter->hw, MCFTC);
- adapter->stats.xonrxc += IXGB_READ_REG(&adapter->hw, XONRXC);
- adapter->stats.xontxc += IXGB_READ_REG(&adapter->hw, XONTXC);
- adapter->stats.xoffrxc += IXGB_READ_REG(&adapter->hw, XOFFRXC);
- adapter->stats.xofftxc += IXGB_READ_REG(&adapter->hw, XOFFTXC);
- adapter->stats.rjc += IXGB_READ_REG(&adapter->hw, RJC);
-
- /* Fill out the OS statistics structure */
-
- netdev->stats.rx_packets = adapter->stats.gprcl;
- netdev->stats.tx_packets = adapter->stats.gptcl;
- netdev->stats.rx_bytes = adapter->stats.gorcl;
- netdev->stats.tx_bytes = adapter->stats.gotcl;
- netdev->stats.multicast = adapter->stats.mprcl;
- netdev->stats.collisions = 0;
-
- /* ignore RLEC as it reports errors for padded (<64bytes) frames
- * with a length in the type/len field */
- netdev->stats.rx_errors =
- /* adapter->stats.rnbc + */ adapter->stats.crcerrs +
- adapter->stats.ruc +
- adapter->stats.roc /*+ adapter->stats.rlec */ +
- adapter->stats.icbc +
- adapter->stats.ecbc + adapter->stats.mpc;
-
- /* see above
- * netdev->stats.rx_length_errors = adapter->stats.rlec;
- */
-
- netdev->stats.rx_crc_errors = adapter->stats.crcerrs;
- netdev->stats.rx_fifo_errors = adapter->stats.mpc;
- netdev->stats.rx_missed_errors = adapter->stats.mpc;
- netdev->stats.rx_over_errors = adapter->stats.mpc;
-
- netdev->stats.tx_errors = 0;
- netdev->stats.rx_frame_errors = 0;
- netdev->stats.tx_aborted_errors = 0;
- netdev->stats.tx_carrier_errors = 0;
- netdev->stats.tx_fifo_errors = 0;
- netdev->stats.tx_heartbeat_errors = 0;
- netdev->stats.tx_window_errors = 0;
-}
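
ixgb_update_stats() above stitches 64-bit counters from low/high 32-bit register halves and removes broadcasts from the multicast count (the hardware counts broadcasts as multicasts). A tiny stand-alone check of that arithmetic, with made-up register values:

/* Sketch of the 32+32 -> 64-bit counter handling used above; the register
 * contents are hypothetical constants purely for illustration.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t mprcl = 0x10, mprch = 0x0;	/* hypothetical MPRCL/MPRCH */
	uint32_t bprcl = 0x04, bprch = 0x0;	/* hypothetical BPRCL/BPRCH */

	uint64_t multi = ((uint64_t)mprch << 32) | mprcl;
	uint64_t bcast = ((uint64_t)bprch << 32) | bprcl;

	/* broadcasts are counted as multicasts too; remove them */
	if (multi >= bcast)
		multi -= bcast;

	printf("true multicast frames: %llu\n", (unsigned long long)multi);
	return 0;
}
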
-
-/**
- * ixgb_intr - Interrupt Handler
- * @irq: interrupt number
- * @data: pointer to a network interface device structure
- **/
-
-static irqreturn_t
-ixgb_intr(int irq, void *data)
-{
- struct net_device *netdev = data;
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- struct ixgb_hw *hw = &adapter->hw;
- u32 icr = IXGB_READ_REG(hw, ICR);
-
- if (unlikely(!icr))
- return IRQ_NONE; /* Not our interrupt */
-
- if (unlikely(icr & (IXGB_INT_RXSEQ | IXGB_INT_LSC)))
- if (!test_bit(__IXGB_DOWN, &adapter->flags))
- mod_timer(&adapter->watchdog_timer, jiffies);
-
- if (napi_schedule_prep(&adapter->napi)) {
-
- /* Disable interrupts and register for poll. The flush
- of the posted write is intentionally left out.
- */
-
- IXGB_WRITE_REG(&adapter->hw, IMC, ~0);
- __napi_schedule(&adapter->napi);
- }
- return IRQ_HANDLED;
-}
-
-/**
- * ixgb_clean - NAPI Rx polling callback
- * @napi: napi struct pointer
- * @budget: max number of receives to clean
- **/
-
-static int
-ixgb_clean(struct napi_struct *napi, int budget)
-{
- struct ixgb_adapter *adapter = container_of(napi, struct ixgb_adapter, napi);
- int work_done = 0;
-
- ixgb_clean_tx_irq(adapter);
- ixgb_clean_rx_irq(adapter, &work_done, budget);
-
- /* If budget not fully consumed, exit the polling mode */
- if (work_done < budget) {
- napi_complete_done(napi, work_done);
- if (!test_bit(__IXGB_DOWN, &adapter->flags))
- ixgb_irq_enable(adapter);
- }
-
- return work_done;
-}
-
-/**
- * ixgb_clean_tx_irq - Reclaim resources after transmit completes
- * @adapter: board private structure
- **/
-
-static bool
-ixgb_clean_tx_irq(struct ixgb_adapter *adapter)
-{
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
- struct net_device *netdev = adapter->netdev;
- struct ixgb_tx_desc *tx_desc, *eop_desc;
- struct ixgb_buffer *buffer_info;
- unsigned int i, eop;
- bool cleaned = false;
-
- i = tx_ring->next_to_clean;
- eop = tx_ring->buffer_info[i].next_to_watch;
- eop_desc = IXGB_TX_DESC(*tx_ring, eop);
-
- while (eop_desc->status & IXGB_TX_DESC_STATUS_DD) {
-
- rmb(); /* read buffer_info after eop_desc */
- for (cleaned = false; !cleaned; ) {
- tx_desc = IXGB_TX_DESC(*tx_ring, i);
- buffer_info = &tx_ring->buffer_info[i];
-
- if (tx_desc->popts &
- (IXGB_TX_DESC_POPTS_TXSM |
- IXGB_TX_DESC_POPTS_IXSM))
- adapter->hw_csum_tx_good++;
-
- ixgb_unmap_and_free_tx_resource(adapter, buffer_info);
-
- *(u32 *)&(tx_desc->status) = 0;
-
- cleaned = (i == eop);
- if (++i == tx_ring->count) i = 0;
- }
-
- eop = tx_ring->buffer_info[i].next_to_watch;
- eop_desc = IXGB_TX_DESC(*tx_ring, eop);
- }
-
- tx_ring->next_to_clean = i;
-
- if (unlikely(cleaned && netif_carrier_ok(netdev) &&
- IXGB_DESC_UNUSED(tx_ring) >= DESC_NEEDED)) {
- /* Make sure that anybody stopping the queue after this
- * sees the new next_to_clean. */
- smp_mb();
-
- if (netif_queue_stopped(netdev) &&
- !(test_bit(__IXGB_DOWN, &adapter->flags))) {
- netif_wake_queue(netdev);
- ++adapter->restart_queue;
- }
- }
-
- if (adapter->detect_tx_hung) {
- /* detect a transmit hang in hardware, this serializes the
- * check with the clearing of time_stamp and movement of i */
- adapter->detect_tx_hung = false;
- if (tx_ring->buffer_info[eop].time_stamp &&
- time_after(jiffies, tx_ring->buffer_info[eop].time_stamp + HZ)
- && !(IXGB_READ_REG(&adapter->hw, STATUS) &
- IXGB_STATUS_TXOFF)) {
- /* detected Tx unit hang */
- netif_err(adapter, drv, adapter->netdev,
- "Detected Tx Unit Hang\n"
- " TDH <%x>\n"
- " TDT <%x>\n"
- " next_to_use <%x>\n"
- " next_to_clean <%x>\n"
- "buffer_info[next_to_clean]\n"
- " time_stamp <%lx>\n"
- " next_to_watch <%x>\n"
- " jiffies <%lx>\n"
- " next_to_watch.status <%x>\n",
- IXGB_READ_REG(&adapter->hw, TDH),
- IXGB_READ_REG(&adapter->hw, TDT),
- tx_ring->next_to_use,
- tx_ring->next_to_clean,
- tx_ring->buffer_info[eop].time_stamp,
- eop,
- jiffies,
- eop_desc->status);
- netif_stop_queue(netdev);
- }
- }
-
- return cleaned;
-}
-
-/**
- * ixgb_rx_checksum - Receive Checksum Offload for 82597.
- * @adapter: board private structure
- * @rx_desc: receive descriptor
- * @skb: socket buffer with received data
- **/
-
-static void
-ixgb_rx_checksum(struct ixgb_adapter *adapter,
- struct ixgb_rx_desc *rx_desc,
- struct sk_buff *skb)
-{
- /* Ignore Checksum bit is set OR
- * TCP Checksum has not been calculated
- */
- if ((rx_desc->status & IXGB_RX_DESC_STATUS_IXSM) ||
- (!(rx_desc->status & IXGB_RX_DESC_STATUS_TCPCS))) {
- skb_checksum_none_assert(skb);
- return;
- }
-
- /* At this point we know the hardware did the TCP checksum */
- /* now look at the TCP checksum error bit */
- if (rx_desc->errors & IXGB_RX_DESC_ERRORS_TCPE) {
- /* let the stack verify checksum errors */
- skb_checksum_none_assert(skb);
- adapter->hw_csum_rx_error++;
- } else {
- /* TCP checksum is good */
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- adapter->hw_csum_rx_good++;
- }
-}
-
-/*
- * this should improve performance for small packets with large amounts
- * of reassembly being done in the stack
- */
-static void ixgb_check_copybreak(struct napi_struct *napi,
- struct ixgb_buffer *buffer_info,
- u32 length, struct sk_buff **skb)
-{
- struct sk_buff *new_skb;
-
- if (length > copybreak)
- return;
-
- new_skb = napi_alloc_skb(napi, length);
- if (!new_skb)
- return;
-
- skb_copy_to_linear_data_offset(new_skb, -NET_IP_ALIGN,
- (*skb)->data - NET_IP_ALIGN,
- length + NET_IP_ALIGN);
- /* save the skb in buffer_info as good */
- buffer_info->skb = *skb;
- *skb = new_skb;
-}
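
ixgb_check_copybreak() above implements the usual copybreak optimization: short frames are copied into a right-sized buffer so the large, DMA-mapped receive buffer stays with the ring. The following is a simplified user-space model of that decision; the 256-byte threshold is an assumption (the driver takes it from the copybreak module parameter), and the skb handling is replaced with plain buffers.

/* User-space model of the copybreak idea: copy short frames, recycle the
 * big buffer.  Threshold and buffer sizes are illustrative assumptions.
 */
#include <stdlib.h>
#include <string.h>

struct rx_buf {
	unsigned char *data;
	size_t size;
};

/* Returns the buffer handed to the stack; *recycled is set when the
 * original large buffer can be reused for a future receive. */
static struct rx_buf deliver_frame(struct rx_buf big, size_t len, int *recycled)
{
	const size_t copybreak = 256;
	struct rx_buf out = big;

	*recycled = 0;
	if (len <= copybreak) {
		unsigned char *small = malloc(len);

		if (small) {
			memcpy(small, big.data, len);
			out.data = small;
			out.size = len;
			*recycled = 1;	/* big.data stays with the ring */
		}
	}
	return out;
}

int main(void)
{
	unsigned char frame[2048] = { 0xde, 0xad };
	struct rx_buf big = { frame, sizeof(frame) };
	int recycled;
	struct rx_buf out = deliver_frame(big, 64, &recycled);

	if (recycled)
		free(out.data);
	return 0;
}
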
-
-/**
- * ixgb_clean_rx_irq - Send received data up the network stack,
- * @adapter: board private structure
- * @work_done: output pointer to amount of packets cleaned
- * @work_to_do: how much work we can complete
- **/
-
-static bool
-ixgb_clean_rx_irq(struct ixgb_adapter *adapter, int *work_done, int work_to_do)
-{
- struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
- struct net_device *netdev = adapter->netdev;
- struct pci_dev *pdev = adapter->pdev;
- struct ixgb_rx_desc *rx_desc, *next_rxd;
- struct ixgb_buffer *buffer_info, *next_buffer, *next2_buffer;
- u32 length;
- unsigned int i, j;
- int cleaned_count = 0;
- bool cleaned = false;
-
- i = rx_ring->next_to_clean;
- rx_desc = IXGB_RX_DESC(*rx_ring, i);
- buffer_info = &rx_ring->buffer_info[i];
-
- while (rx_desc->status & IXGB_RX_DESC_STATUS_DD) {
- struct sk_buff *skb;
- u8 status;
-
- if (*work_done >= work_to_do)
- break;
-
- (*work_done)++;
- rmb(); /* read descriptor and rx_buffer_info after status DD */
- status = rx_desc->status;
- skb = buffer_info->skb;
- buffer_info->skb = NULL;
-
- prefetch(skb->data - NET_IP_ALIGN);
-
- if (++i == rx_ring->count)
- i = 0;
- next_rxd = IXGB_RX_DESC(*rx_ring, i);
- prefetch(next_rxd);
-
- j = i + 1;
- if (j == rx_ring->count)
- j = 0;
- next2_buffer = &rx_ring->buffer_info[j];
- prefetch(next2_buffer);
-
- next_buffer = &rx_ring->buffer_info[i];
-
- cleaned = true;
- cleaned_count++;
-
- dma_unmap_single(&pdev->dev,
- buffer_info->dma,
- buffer_info->length,
- DMA_FROM_DEVICE);
- buffer_info->dma = 0;
-
- length = le16_to_cpu(rx_desc->length);
- rx_desc->length = 0;
-
- if (unlikely(!(status & IXGB_RX_DESC_STATUS_EOP))) {
-
- /* All receives must fit into a single buffer */
-
- pr_debug("Receive packet consumed multiple buffers length<%x>\n",
- length);
-
- dev_kfree_skb_irq(skb);
- goto rxdesc_done;
- }
-
- if (unlikely(rx_desc->errors &
- (IXGB_RX_DESC_ERRORS_CE | IXGB_RX_DESC_ERRORS_SE |
- IXGB_RX_DESC_ERRORS_P | IXGB_RX_DESC_ERRORS_RXE))) {
- dev_kfree_skb_irq(skb);
- goto rxdesc_done;
- }
-
- ixgb_check_copybreak(&adapter->napi, buffer_info, length, &skb);
-
- /* Good Receive */
- skb_put(skb, length);
-
- /* Receive Checksum Offload */
- ixgb_rx_checksum(adapter, rx_desc, skb);
-
- skb->protocol = eth_type_trans(skb, netdev);
- if (status & IXGB_RX_DESC_STATUS_VP)
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
- le16_to_cpu(rx_desc->special));
-
- netif_receive_skb(skb);
-
-rxdesc_done:
- /* clean up descriptor, might be written over by hw */
- rx_desc->status = 0;
-
- /* return some buffers to hardware, one at a time is too slow */
- if (unlikely(cleaned_count >= IXGB_RX_BUFFER_WRITE)) {
- ixgb_alloc_rx_buffers(adapter, cleaned_count);
- cleaned_count = 0;
- }
-
- /* use prefetched values */
- rx_desc = next_rxd;
- buffer_info = next_buffer;
- }
-
- rx_ring->next_to_clean = i;
-
- cleaned_count = IXGB_DESC_UNUSED(rx_ring);
- if (cleaned_count)
- ixgb_alloc_rx_buffers(adapter, cleaned_count);
-
- return cleaned;
-}
-
-/**
- * ixgb_alloc_rx_buffers - Replace used receive buffers
- * @adapter: address of board private structure
- * @cleaned_count: how many buffers to allocate
- **/
-
-static void
-ixgb_alloc_rx_buffers(struct ixgb_adapter *adapter, int cleaned_count)
-{
- struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
- struct net_device *netdev = adapter->netdev;
- struct pci_dev *pdev = adapter->pdev;
- struct ixgb_rx_desc *rx_desc;
- struct ixgb_buffer *buffer_info;
- struct sk_buff *skb;
- unsigned int i;
- long cleancount;
-
- i = rx_ring->next_to_use;
- buffer_info = &rx_ring->buffer_info[i];
- cleancount = IXGB_DESC_UNUSED(rx_ring);
-
-
- /* leave three descriptors unused */
- while (--cleancount > 2 && cleaned_count--) {
- /* recycle! its good for you */
- skb = buffer_info->skb;
- if (skb) {
- skb_trim(skb, 0);
- goto map_skb;
- }
-
- skb = netdev_alloc_skb_ip_align(netdev, adapter->rx_buffer_len);
- if (unlikely(!skb)) {
- /* Better luck next round */
- adapter->alloc_rx_buff_failed++;
- break;
- }
-
- buffer_info->skb = skb;
- buffer_info->length = adapter->rx_buffer_len;
-map_skb:
- buffer_info->dma = dma_map_single(&pdev->dev,
- skb->data,
- adapter->rx_buffer_len,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
- adapter->alloc_rx_buff_failed++;
- break;
- }
-
- rx_desc = IXGB_RX_DESC(*rx_ring, i);
- rx_desc->buff_addr = cpu_to_le64(buffer_info->dma);
- /* guarantee DD bit not set now before h/w gets descriptor
- * this is the rest of the workaround for h/w double
- * writeback. */
- rx_desc->status = 0;
-
-
- if (++i == rx_ring->count)
- i = 0;
- buffer_info = &rx_ring->buffer_info[i];
- }
-
- if (likely(rx_ring->next_to_use != i)) {
- rx_ring->next_to_use = i;
- if (unlikely(i-- == 0))
- i = (rx_ring->count - 1);
-
- /* Force memory writes to complete before letting h/w
- * know there are new descriptors to fetch. (Only
- * applicable for weak-ordered memory model archs, such
- * as IA-64). */
- wmb();
- IXGB_WRITE_REG(&adapter->hw, RDT, i);
- }
-}
-
-static void
-ixgb_vlan_strip_enable(struct ixgb_adapter *adapter)
-{
- u32 ctrl;
-
- /* enable VLAN tag insert/strip */
- ctrl = IXGB_READ_REG(&adapter->hw, CTRL0);
- ctrl |= IXGB_CTRL0_VME;
- IXGB_WRITE_REG(&adapter->hw, CTRL0, ctrl);
-}
-
-static void
-ixgb_vlan_strip_disable(struct ixgb_adapter *adapter)
-{
- u32 ctrl;
-
- /* disable VLAN tag insert/strip */
- ctrl = IXGB_READ_REG(&adapter->hw, CTRL0);
- ctrl &= ~IXGB_CTRL0_VME;
- IXGB_WRITE_REG(&adapter->hw, CTRL0, ctrl);
-}
-
-static int
-ixgb_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- u32 vfta, index;
-
- /* add VID to filter table */
-
- index = (vid >> 5) & 0x7F;
- vfta = IXGB_READ_REG_ARRAY(&adapter->hw, VFTA, index);
- vfta |= (1 << (vid & 0x1F));
- ixgb_write_vfta(&adapter->hw, index, vfta);
- set_bit(vid, adapter->active_vlans);
-
- return 0;
-}
-
-static int
-ixgb_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
-{
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- u32 vfta, index;
-
- /* remove VID from filter table */
-
- index = (vid >> 5) & 0x7F;
- vfta = IXGB_READ_REG_ARRAY(&adapter->hw, VFTA, index);
- vfta &= ~(1 << (vid & 0x1F));
- ixgb_write_vfta(&adapter->hw, index, vfta);
- clear_bit(vid, adapter->active_vlans);
-
- return 0;
-}
-
-static void
-ixgb_restore_vlan(struct ixgb_adapter *adapter)
-{
- u16 vid;
-
- for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)
- ixgb_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid);
-}
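
The VLAN filter table handling in ixgb_vlan_rx_add_vid()/kill_vid() above splits the 12-bit VID into a register index (upper bits) and a bit position (lower five bits). A quick stand-alone check of that indexing:

/* Stand-alone check of the VFTA indexing used above: a VID maps to one of
 * 128 32-bit registers (vid >> 5) and one bit within it (vid & 0x1F).
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t vid = 100;		/* example VLAN ID */
	uint32_t index = (vid >> 5) & 0x7F;
	uint32_t bit   = 1u << (vid & 0x1F);

	printf("VID %u -> VFTA[%u] |= 0x%08x\n", vid, index, bit);
	return 0;
}
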
-
-/**
- * ixgb_io_error_detected - called when PCI error is detected
- * @pdev: pointer to pci device with error
- * @state: pci channel state after error
- *
- * This callback is called by the PCI subsystem whenever
- * a PCI bus error is detected.
- */
-static pci_ers_result_t ixgb_io_error_detected(struct pci_dev *pdev,
- pci_channel_state_t state)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- netif_device_detach(netdev);
-
- if (state == pci_channel_io_perm_failure)
- return PCI_ERS_RESULT_DISCONNECT;
-
- if (netif_running(netdev))
- ixgb_down(adapter, true);
-
- pci_disable_device(pdev);
-
- /* Request a slot reset. */
- return PCI_ERS_RESULT_NEED_RESET;
-}
-
-/**
- * ixgb_io_slot_reset - called after the pci bus has been reset.
- * @pdev: pointer to pci device with error
- *
- * This callback is called after the PCI bus has been reset.
- * Basically, this tries to restart the card from scratch.
- * This is a shortened version of the device probe/discovery code,
- * it resembles the first-half of the ixgb_probe() routine.
- */
-static pci_ers_result_t ixgb_io_slot_reset(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct ixgb_adapter *adapter = netdev_priv(netdev);
- u8 addr[ETH_ALEN];
-
- if (pci_enable_device(pdev)) {
- netif_err(adapter, probe, adapter->netdev,
- "Cannot re-enable PCI device after reset\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- /* Perform card reset only on one instance of the card */
- if (0 != PCI_FUNC (pdev->devfn))
- return PCI_ERS_RESULT_RECOVERED;
-
- pci_set_master(pdev);
-
- netif_carrier_off(netdev);
- netif_stop_queue(netdev);
- ixgb_reset(adapter);
-
- /* Make sure the EEPROM is good */
- if (!ixgb_validate_eeprom_checksum(&adapter->hw)) {
- netif_err(adapter, probe, adapter->netdev,
- "After reset, the EEPROM checksum is not valid\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
- ixgb_get_ee_mac_addr(&adapter->hw, addr);
- eth_hw_addr_set(netdev, addr);
- memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len);
-
- if (!is_valid_ether_addr(netdev->perm_addr)) {
- netif_err(adapter, probe, adapter->netdev,
- "After reset, invalid MAC address\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-/**
- * ixgb_io_resume - called when its OK to resume normal operations
- * @pdev: pointer to pci device with error
- *
- * The error recovery driver tells us that its OK to resume
- * normal operation. Implementation resembles the second-half
- * of the ixgb_probe() routine.
- */
-static void ixgb_io_resume(struct pci_dev *pdev)
-{
- struct net_device *netdev = pci_get_drvdata(pdev);
- struct ixgb_adapter *adapter = netdev_priv(netdev);
-
- pci_set_master(pdev);
-
- if (netif_running(netdev)) {
- if (ixgb_up(adapter)) {
- pr_err("can't bring device back up after reset\n");
- return;
- }
- }
-
- netif_device_attach(netdev);
- mod_timer(&adapter->watchdog_timer, jiffies);
-}
-
-/* ixgb_main.c */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-/* glue for the OS independent part of ixgb
- * includes register access macros
- */
-
-#ifndef _IXGB_OSDEP_H_
-#define _IXGB_OSDEP_H_
-
-#include <linux/types.h>
-#include <linux/delay.h>
-#include <asm/io.h>
-#include <linux/interrupt.h>
-#include <linux/sched.h>
-#include <linux/if_ether.h>
-
-#undef ASSERT
-#define ASSERT(x) BUG_ON(!(x))
-
-#define ENTER() pr_debug("%s\n", __func__);
-
-#define IXGB_WRITE_REG(a, reg, value) ( \
- writel((value), ((a)->hw_addr + IXGB_##reg)))
-
-#define IXGB_READ_REG(a, reg) ( \
- readl((a)->hw_addr + IXGB_##reg))
-
-#define IXGB_WRITE_REG_ARRAY(a, reg, offset, value) ( \
- writel((value), ((a)->hw_addr + IXGB_##reg + ((offset) << 2))))
-
-#define IXGB_READ_REG_ARRAY(a, reg, offset) ( \
- readl((a)->hw_addr + IXGB_##reg + ((offset) << 2)))
-
-#define IXGB_WRITE_FLUSH(a) IXGB_READ_REG(a, STATUS)
-
-#define IXGB_MEMCPY memcpy
-
-#endif /* _IXGB_OSDEP_H_ */
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 1999 - 2008 Intel Corporation. */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include "ixgb.h"
-
-/* This is the only thing that needs to be changed to adjust the
- * maximum number of ports that the driver can manage.
- */
-
-#define IXGB_MAX_NIC 8
-
-#define OPTION_UNSET -1
-#define OPTION_DISABLED 0
-#define OPTION_ENABLED 1
-
-/* All parameters are treated the same, as an integer array of values.
- * This macro just reduces the need to repeat the same declaration code
- * over and over (plus this helps to avoid typo bugs).
- */
-
-#define IXGB_PARAM_INIT { [0 ... IXGB_MAX_NIC] = OPTION_UNSET }
-#define IXGB_PARAM(X, desc) \
- static int X[IXGB_MAX_NIC+1] \
- = IXGB_PARAM_INIT; \
- static unsigned int num_##X = 0; \
- module_param_array_named(X, X, int, &num_##X, 0); \
- MODULE_PARM_DESC(X, desc);
-
-/* Transmit Descriptor Count
- *
- * Valid Range: 64-4096
- *
- * Default Value: 256
- */
-
-IXGB_PARAM(TxDescriptors, "Number of transmit descriptors");
-
-/* Receive Descriptor Count
- *
- * Valid Range: 64-4096
- *
- * Default Value: 1024
- */
-
-IXGB_PARAM(RxDescriptors, "Number of receive descriptors");
-
-/* User Specified Flow Control Override
- *
- * Valid Range: 0-3
- * - 0 - No Flow Control
- * - 1 - Rx only, respond to PAUSE frames but do not generate them
- * - 2 - Tx only, generate PAUSE frames but ignore them on receive
- * - 3 - Full Flow Control Support
- *
- * Default Value: 2 - Tx only (silicon bug avoidance)
- */
-
-IXGB_PARAM(FlowControl, "Flow Control setting");
-
-/* XsumRX - Receive Checksum Offload Enable/Disable
- *
- * Valid Range: 0, 1
- * - 0 - disables all checksum offload
- * - 1 - enables receive IP/TCP/UDP checksum offload
- * on 82597 based NICs
- *
- * Default Value: 1
- */
-
-IXGB_PARAM(XsumRX, "Disable or enable Receive Checksum offload");
-
-/* Transmit Interrupt Delay in units of 0.8192 microseconds
- *
- * Valid Range: 0-65535
- *
- * Default Value: 32
- */
-
-IXGB_PARAM(TxIntDelay, "Transmit Interrupt Delay");
-
-/* Receive Interrupt Delay in units of 0.8192 microseconds
- *
- * Valid Range: 0-65535
- *
- * Default Value: 72
- */
-
-IXGB_PARAM(RxIntDelay, "Receive Interrupt Delay");
-
-/* Receive Flow control high threshold (when we send a pause frame)
- * (FCRTH)
- *
- * Valid Range: 1,536 - 262,136 (0x600 - 0x3FFF8, 8 byte granularity)
- *
- * Default Value: 196,608 (0x30000)
- */
-
-IXGB_PARAM(RxFCHighThresh, "Receive Flow Control High Threshold");
-
-/* Receive Flow control low threshold (when we send a resume frame)
- * (FCRTL)
- *
- * Valid Range: 64 - 262,136 (0x40 - 0x3FFF8, 8 byte granularity)
- * must be less than high threshold by at least 8 bytes
- *
- * Default Value: 163,840 (0x28000)
- */
-
-IXGB_PARAM(RxFCLowThresh, "Receive Flow Control Low Threshold");
-
-/* Flow control request timeout (how long to pause the link partner's tx)
- * (PAP 15:0)
- *
- * Valid Range: 1 - 65535
- *
- * Default Value: 65535 (0xffff) (we'll send an xon if we recover)
- */
-
-IXGB_PARAM(FCReqTimeout, "Flow Control Request Timeout");
-
-/* Interrupt Delay Enable
- *
- * Valid Range: 0, 1
- *
- * - 0 - disables transmit interrupt delay
- * - 1 - enables transmmit interrupt delay
- *
- * Default Value: 1
- */
-
-IXGB_PARAM(IntDelayEnable, "Transmit Interrupt Delay Enable");
-
-
-#define DEFAULT_TIDV 32
-#define MAX_TIDV 0xFFFF
-#define MIN_TIDV 0
-
-#define DEFAULT_RDTR 72
-#define MAX_RDTR 0xFFFF
-#define MIN_RDTR 0
-
-#define DEFAULT_FCRTL 0x28000
-#define DEFAULT_FCRTH 0x30000
-#define MIN_FCRTL 0
-#define MAX_FCRTL 0x3FFE8
-#define MIN_FCRTH 8
-#define MAX_FCRTH 0x3FFF0
-
-#define MIN_FCPAUSE 1
-#define MAX_FCPAUSE 0xffff
-#define DEFAULT_FCPAUSE 0xFFFF /* this may be too long */
-
-struct ixgb_option {
- enum { enable_option, range_option, list_option } type;
- const char *name;
- const char *err;
- int def;
- union {
- struct { /* range_option info */
- int min;
- int max;
- } r;
- struct { /* list_option info */
- int nr;
- const struct ixgb_opt_list {
- int i;
- const char *str;
- } *p;
- } l;
- } arg;
-};
-
-static int
-ixgb_validate_option(unsigned int *value, const struct ixgb_option *opt)
-{
- if (*value == OPTION_UNSET) {
- *value = opt->def;
- return 0;
- }
-
- switch (opt->type) {
- case enable_option:
- switch (*value) {
- case OPTION_ENABLED:
- pr_info("%s Enabled\n", opt->name);
- return 0;
- case OPTION_DISABLED:
- pr_info("%s Disabled\n", opt->name);
- return 0;
- }
- break;
- case range_option:
- if (*value >= opt->arg.r.min && *value <= opt->arg.r.max) {
- pr_info("%s set to %i\n", opt->name, *value);
- return 0;
- }
- break;
- case list_option: {
- int i;
- const struct ixgb_opt_list *ent;
-
- for (i = 0; i < opt->arg.l.nr; i++) {
- ent = &opt->arg.l.p[i];
- if (*value == ent->i) {
- if (ent->str[0] != '\0')
- pr_info("%s\n", ent->str);
- return 0;
- }
- }
- }
- break;
- default:
- BUG();
- }
-
- pr_info("Invalid %s specified (%i) %s\n", opt->name, *value, opt->err);
- *value = opt->def;
- return -1;
-}
-
-/**
- * ixgb_check_options - Range Checking for Command Line Parameters
- * @adapter: board private structure
- *
- * This routine checks all command line parameters for valid user
- * input. If an invalid value is given, or if no user specified
- * value exists, a default value is used. The final value is stored
- * in a variable in the adapter structure.
- **/
-
-void
-ixgb_check_options(struct ixgb_adapter *adapter)
-{
- int bd = adapter->bd_number;
- if (bd >= IXGB_MAX_NIC) {
- pr_notice("Warning: no configuration for board #%i\n", bd);
- pr_notice("Using defaults for all values\n");
- }
-
- { /* Transmit Descriptor Count */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Transmit Descriptors",
- .err = "using default of " __MODULE_STRING(DEFAULT_TXD),
- .def = DEFAULT_TXD,
- .arg = { .r = { .min = MIN_TXD,
- .max = MAX_TXD}}
- };
- struct ixgb_desc_ring *tx_ring = &adapter->tx_ring;
-
- if (num_TxDescriptors > bd) {
- tx_ring->count = TxDescriptors[bd];
- ixgb_validate_option(&tx_ring->count, &opt);
- } else {
- tx_ring->count = opt.def;
- }
- tx_ring->count = ALIGN(tx_ring->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
- }
- { /* Receive Descriptor Count */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Receive Descriptors",
- .err = "using default of " __MODULE_STRING(DEFAULT_RXD),
- .def = DEFAULT_RXD,
- .arg = { .r = { .min = MIN_RXD,
- .max = MAX_RXD}}
- };
- struct ixgb_desc_ring *rx_ring = &adapter->rx_ring;
-
- if (num_RxDescriptors > bd) {
- rx_ring->count = RxDescriptors[bd];
- ixgb_validate_option(&rx_ring->count, &opt);
- } else {
- rx_ring->count = opt.def;
- }
- rx_ring->count = ALIGN(rx_ring->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
- }
- { /* Receive Checksum Offload Enable */
- static const struct ixgb_option opt = {
- .type = enable_option,
- .name = "Receive Checksum Offload",
- .err = "defaulting to Enabled",
- .def = OPTION_ENABLED
- };
-
- if (num_XsumRX > bd) {
- unsigned int rx_csum = XsumRX[bd];
- ixgb_validate_option(&rx_csum, &opt);
- adapter->rx_csum = rx_csum;
- } else {
- adapter->rx_csum = opt.def;
- }
- }
- { /* Flow Control */
-
- static const struct ixgb_opt_list fc_list[] = {
- { ixgb_fc_none, "Flow Control Disabled" },
- { ixgb_fc_rx_pause, "Flow Control Receive Only" },
- { ixgb_fc_tx_pause, "Flow Control Transmit Only" },
- { ixgb_fc_full, "Flow Control Enabled" },
- { ixgb_fc_default, "Flow Control Hardware Default" }
- };
-
- static const struct ixgb_option opt = {
- .type = list_option,
- .name = "Flow Control",
- .err = "reading default settings from EEPROM",
- .def = ixgb_fc_tx_pause,
- .arg = { .l = { .nr = ARRAY_SIZE(fc_list),
- .p = fc_list }}
- };
-
- if (num_FlowControl > bd) {
- unsigned int fc = FlowControl[bd];
- ixgb_validate_option(&fc, &opt);
- adapter->hw.fc.type = fc;
- } else {
- adapter->hw.fc.type = opt.def;
- }
- }
- { /* Receive Flow Control High Threshold */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Rx Flow Control High Threshold",
- .err = "using default of " __MODULE_STRING(DEFAULT_FCRTH),
- .def = DEFAULT_FCRTH,
- .arg = { .r = { .min = MIN_FCRTH,
- .max = MAX_FCRTH}}
- };
-
- if (num_RxFCHighThresh > bd) {
- adapter->hw.fc.high_water = RxFCHighThresh[bd];
- ixgb_validate_option(&adapter->hw.fc.high_water, &opt);
- } else {
- adapter->hw.fc.high_water = opt.def;
- }
- if (!(adapter->hw.fc.type & ixgb_fc_tx_pause) )
- pr_info("Ignoring RxFCHighThresh when no RxFC\n");
- }
- { /* Receive Flow Control Low Threshold */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Rx Flow Control Low Threshold",
- .err = "using default of " __MODULE_STRING(DEFAULT_FCRTL),
- .def = DEFAULT_FCRTL,
- .arg = { .r = { .min = MIN_FCRTL,
- .max = MAX_FCRTL}}
- };
-
- if (num_RxFCLowThresh > bd) {
- adapter->hw.fc.low_water = RxFCLowThresh[bd];
- ixgb_validate_option(&adapter->hw.fc.low_water, &opt);
- } else {
- adapter->hw.fc.low_water = opt.def;
- }
- if (!(adapter->hw.fc.type & ixgb_fc_tx_pause) )
- pr_info("Ignoring RxFCLowThresh when no RxFC\n");
- }
- { /* Flow Control Pause Time Request*/
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Flow Control Pause Time Request",
- .err = "using default of "__MODULE_STRING(DEFAULT_FCPAUSE),
- .def = DEFAULT_FCPAUSE,
- .arg = { .r = { .min = MIN_FCPAUSE,
- .max = MAX_FCPAUSE}}
- };
-
- if (num_FCReqTimeout > bd) {
- unsigned int pause_time = FCReqTimeout[bd];
- ixgb_validate_option(&pause_time, &opt);
- adapter->hw.fc.pause_time = pause_time;
- } else {
- adapter->hw.fc.pause_time = opt.def;
- }
- if (!(adapter->hw.fc.type & ixgb_fc_tx_pause) )
- pr_info("Ignoring FCReqTimeout when no RxFC\n");
- }
- /* high low and spacing check for rx flow control thresholds */
- if (adapter->hw.fc.type & ixgb_fc_tx_pause) {
- /* high must be greater than low */
- if (adapter->hw.fc.high_water < (adapter->hw.fc.low_water + 8)) {
- /* set defaults */
- pr_info("RxFCHighThresh must be >= (RxFCLowThresh + 8), Using Defaults\n");
- adapter->hw.fc.high_water = DEFAULT_FCRTH;
- adapter->hw.fc.low_water = DEFAULT_FCRTL;
- }
- }
- { /* Receive Interrupt Delay */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Receive Interrupt Delay",
- .err = "using default of " __MODULE_STRING(DEFAULT_RDTR),
- .def = DEFAULT_RDTR,
- .arg = { .r = { .min = MIN_RDTR,
- .max = MAX_RDTR}}
- };
-
- if (num_RxIntDelay > bd) {
- adapter->rx_int_delay = RxIntDelay[bd];
- ixgb_validate_option(&adapter->rx_int_delay, &opt);
- } else {
- adapter->rx_int_delay = opt.def;
- }
- }
- { /* Transmit Interrupt Delay */
- static const struct ixgb_option opt = {
- .type = range_option,
- .name = "Transmit Interrupt Delay",
- .err = "using default of " __MODULE_STRING(DEFAULT_TIDV),
- .def = DEFAULT_TIDV,
- .arg = { .r = { .min = MIN_TIDV,
- .max = MAX_TIDV}}
- };
-
- if (num_TxIntDelay > bd) {
- adapter->tx_int_delay = TxIntDelay[bd];
- ixgb_validate_option(&adapter->tx_int_delay, &opt);
- } else {
- adapter->tx_int_delay = opt.def;
- }
- }
-
- { /* Transmit Interrupt Delay Enable */
- static const struct ixgb_option opt = {
- .type = enable_option,
- .name = "Tx Interrupt Delay Enable",
- .err = "defaulting to Enabled",
- .def = OPTION_ENABLED
- };
-
- if (num_IntDelayEnable > bd) {
- unsigned int ide = IntDelayEnable[bd];
- ixgb_validate_option(&ide, &opt);
- adapter->tx_int_delay_enable = ide;
- } else {
- adapter->tx_int_delay_enable = opt.def;
- }
- }
-}
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/cpumask.h>
-#include <linux/aer.h>
#include <linux/if_vlan.h>
#include <linux/jiffies.h>
#include <linux/phy.h>
return true;
}
-static void mvpp2_port_copy_mac_addr(struct net_device *dev, struct mvpp2 *priv,
- struct fwnode_handle *fwnode,
- char **mac_from)
+static int mvpp2_port_copy_mac_addr(struct net_device *dev, struct mvpp2 *priv,
+ struct fwnode_handle *fwnode,
+ char **mac_from)
{
struct mvpp2_port *port = netdev_priv(dev);
char hw_mac_addr[ETH_ALEN] = {0};
char fw_mac_addr[ETH_ALEN];
+ int ret;
if (!fwnode_get_mac_address(fwnode, fw_mac_addr)) {
*mac_from = "firmware node";
eth_hw_addr_set(dev, fw_mac_addr);
- return;
+ return 0;
}
if (priv->hw_version == MVPP21) {
if (is_valid_ether_addr(hw_mac_addr)) {
*mac_from = "hardware";
eth_hw_addr_set(dev, hw_mac_addr);
- return;
+ return 0;
}
}
/* Only valid on OF enabled platforms */
- if (!of_get_mac_address_nvmem(to_of_node(fwnode), fw_mac_addr)) {
+ ret = of_get_mac_address_nvmem(to_of_node(fwnode), fw_mac_addr);
+ if (ret == -EPROBE_DEFER)
+ return ret;
+ if (!ret) {
*mac_from = "nvmem cell";
eth_hw_addr_set(dev, fw_mac_addr);
- return;
+ return 0;
}
*mac_from = "random";
eth_hw_addr_random(dev);
+
+ return 0;
}
static struct mvpp2_port *mvpp2_phylink_to_port(struct phylink_config *config)
mutex_init(&port->gather_stats_lock);
INIT_DELAYED_WORK(&port->stats_work, mvpp2_gather_hw_statistics);
- mvpp2_port_copy_mac_addr(dev, priv, port_fwnode, &mac_from);
+ err = mvpp2_port_copy_mac_addr(dev, priv, port_fwnode, &mac_from);
+ if (err < 0)
+ goto err_free_stats;
port->tx_ring_size = MVPP2_MAX_TXD_DFLT;
port->rx_ring_size = MVPP2_MAX_RXD_DFLT;
#include "octep_main.h"
#include "octep_regs_cn9k_pf.h"
+#define CTRL_MBOX_MAX_PF 128
+#define CTRL_MBOX_SZ ((size_t)(0x400000 / CTRL_MBOX_MAX_PF))
+
+#define FW_HB_INTERVAL_IN_SECS 1
+#define FW_HB_MISS_COUNT 10
+
/* Names of Hardware non-queue generic interrupts */
static char *cn93_non_ioq_msix_names[] = {
"epf_ire_rint",
{
struct octep_config *conf = oct->conf;
struct pci_dev *pdev = oct->pdev;
+ u8 link = 0;
u64 val;
+ int pos;
/* Read ring configuration:
* PF ring count, number of VFs and rings per VF supported
conf->msix_cfg.ioq_msix = conf->pf_ring_cfg.active_io_rings;
conf->msix_cfg.non_ioq_msix_names = cn93_non_ioq_msix_names;
- conf->ctrl_mbox_cfg.barmem_addr = (void __iomem *)oct->mmio[2].hw_addr + (0x400000ull * 7);
+ pos = pci_find_ext_capability(oct->pdev, PCI_EXT_CAP_ID_SRIOV);
+ if (pos) {
+ pci_read_config_byte(oct->pdev,
+ pos + PCI_SRIOV_FUNC_LINK,
+ &link);
+ link = PCI_DEVFN(PCI_SLOT(oct->pdev->devfn), link);
+ }
+ conf->ctrl_mbox_cfg.barmem_addr = (void __iomem *)oct->mmio[2].hw_addr +
+ (0x400000ull * 7) +
+ (link * CTRL_MBOX_SZ);
+
+ conf->hb_interval = FW_HB_INTERVAL_IN_SECS;
+ conf->max_hb_miss_cnt = FW_HB_MISS_COUNT;
+
}
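
The hunk above carves the shared 4 MB control-mailbox window in BAR2 into 128 equal per-function slices and indexes into it with the value derived from the SR-IOV capability. A sketch of just the offset arithmetic, with the function index as a hypothetical example value:

/* Offset arithmetic behind the ctrl_mbox barmem_addr computation above:
 * the window at BAR offset (0x400000 * 7) is split into 128 slices of
 * CTRL_MBOX_SZ bytes.  The function index here is a made-up example of
 * the value derived from the SR-IOV capability.
 */
#include <stdio.h>
#include <stddef.h>

#define CTRL_MBOX_MAX_PF 128
#define CTRL_MBOX_SZ     ((size_t)(0x400000 / CTRL_MBOX_MAX_PF))	/* 32 KB */

int main(void)
{
	unsigned int fn_index = 2;	/* hypothetical SR-IOV func link value */
	size_t offset = (size_t)0x400000 * 7 + fn_index * CTRL_MBOX_SZ;

	printf("per-function mailbox slice: %zu bytes at BAR2 offset 0x%zx\n",
	       CTRL_MBOX_SZ, offset);
	return 0;
}
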
/* Setup registers for a hardware Tx Queue */
mbox->mbox_read_reg = oct->mmio[0].hw_addr + CN93_SDP_R_MBOX_VF_PF_DATA(q_no);
}
-/* Mailbox Interrupt handler */
-static void cn93_handle_pf_mbox_intr(struct octep_device *oct)
+/* Process non-ioq interrupts required to keep pf interface running.
+ * OEI_RINT is needed for control mailbox
+ */
+static bool octep_poll_non_ioq_interrupts_cn93_pf(struct octep_device *oct)
{
- u64 mbox_int_val = 0ULL, val = 0ULL, qno = 0ULL;
+ bool handled = false;
+ u64 reg0;
- mbox_int_val = readq(oct->mbox[0]->mbox_int_reg);
- for (qno = 0; qno < OCTEP_MAX_VF; qno++) {
- val = readq(oct->mbox[qno]->mbox_read_reg);
- dev_dbg(&oct->pdev->dev,
- "PF MBOX READ: val:%llx from VF:%llx\n", val, qno);
+ /* Check for OEI INTR */
+ reg0 = octep_read_csr64(oct, CN93_SDP_EPF_OEI_RINT);
+ if (reg0) {
+ dev_info(&oct->pdev->dev,
+ "Received OEI_RINT intr: 0x%llx\n",
+ reg0);
+ octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT, reg0);
+ if (reg0 & CN93_SDP_EPF_OEI_RINT_DATA_BIT_MBOX)
+ queue_work(octep_wq, &oct->ctrl_mbox_task);
+ else if (reg0 & CN93_SDP_EPF_OEI_RINT_DATA_BIT_HBEAT)
+ atomic_set(&oct->hb_miss_cnt, 0);
+
+ handled = true;
}
- writeq(mbox_int_val, oct->mbox[0]->mbox_int_reg);
+ return handled;
}
/* Interrupts handler for all non-queue generic interrupts. */
goto irq_handled;
}
- /* Check for MBOX INTR */
- reg_val = octep_read_csr64(oct, CN93_SDP_EPF_MBOX_RINT(0));
- if (reg_val) {
- dev_info(&pdev->dev,
- "Received MBOX_RINT intr: 0x%llx\n", reg_val);
- cn93_handle_pf_mbox_intr(oct);
- goto irq_handled;
- }
-
- /* Check for OEI INTR */
- reg_val = octep_read_csr64(oct, CN93_SDP_EPF_OEI_RINT);
- if (reg_val) {
- dev_info(&pdev->dev,
- "Received OEI_EINT intr: 0x%llx\n", reg_val);
- octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT, reg_val);
- queue_work(octep_wq, &oct->ctrl_mbox_task);
+ /* Check for MBOX INTR and OEI INTR */
+ if (octep_poll_non_ioq_interrupts_cn93_pf(oct))
goto irq_handled;
- }
/* Check for DMA INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_DMA_RINT);
oct->hw_ops.enable_interrupts = octep_enable_interrupts_cn93_pf;
oct->hw_ops.disable_interrupts = octep_disable_interrupts_cn93_pf;
+ oct->hw_ops.poll_non_ioq_interrupts = octep_poll_non_ioq_interrupts_cn93_pf;
oct->hw_ops.update_iq_read_idx = octep_update_iq_read_index_cn93_pf;
/* ctrl mbox config */
struct octep_ctrl_mbox_config ctrl_mbox_cfg;
+
+ /* Configured maximum heartbeat miss count */
+ u32 max_hb_miss_cnt;
+
+ /* Configured firmware heartbeat interval in secs */
+ u32 hb_interval;
};
#endif /* _OCTEP_CONFIG_H_ */
/* Time in msecs to wait for message response */
#define OCTEP_CTRL_MBOX_MSG_WAIT_MS 10
-#define OCTEP_CTRL_MBOX_INFO_MAGIC_NUM_OFFSET(m) (m)
-#define OCTEP_CTRL_MBOX_INFO_BARMEM_SZ_OFFSET(m) ((m) + 8)
-#define OCTEP_CTRL_MBOX_INFO_HOST_STATUS_OFFSET(m) ((m) + 24)
-#define OCTEP_CTRL_MBOX_INFO_FW_STATUS_OFFSET(m) ((m) + 144)
-
-#define OCTEP_CTRL_MBOX_H2FQ_INFO_OFFSET(m) ((m) + OCTEP_CTRL_MBOX_INFO_SZ)
-#define OCTEP_CTRL_MBOX_H2FQ_PROD_OFFSET(m) (OCTEP_CTRL_MBOX_H2FQ_INFO_OFFSET(m))
-#define OCTEP_CTRL_MBOX_H2FQ_CONS_OFFSET(m) ((OCTEP_CTRL_MBOX_H2FQ_INFO_OFFSET(m)) + 4)
-#define OCTEP_CTRL_MBOX_H2FQ_ELEM_SZ_OFFSET(m) ((OCTEP_CTRL_MBOX_H2FQ_INFO_OFFSET(m)) + 8)
-#define OCTEP_CTRL_MBOX_H2FQ_ELEM_CNT_OFFSET(m) ((OCTEP_CTRL_MBOX_H2FQ_INFO_OFFSET(m)) + 12)
-
-#define OCTEP_CTRL_MBOX_F2HQ_INFO_OFFSET(m) ((m) + \
- OCTEP_CTRL_MBOX_INFO_SZ + \
- OCTEP_CTRL_MBOX_H2FQ_INFO_SZ)
-#define OCTEP_CTRL_MBOX_F2HQ_PROD_OFFSET(m) (OCTEP_CTRL_MBOX_F2HQ_INFO_OFFSET(m))
-#define OCTEP_CTRL_MBOX_F2HQ_CONS_OFFSET(m) ((OCTEP_CTRL_MBOX_F2HQ_INFO_OFFSET(m)) + 4)
-#define OCTEP_CTRL_MBOX_F2HQ_ELEM_SZ_OFFSET(m) ((OCTEP_CTRL_MBOX_F2HQ_INFO_OFFSET(m)) + 8)
-#define OCTEP_CTRL_MBOX_F2HQ_ELEM_CNT_OFFSET(m) ((OCTEP_CTRL_MBOX_F2HQ_INFO_OFFSET(m)) + 12)
-
-#define OCTEP_CTRL_MBOX_Q_OFFSET(m, i) ((m) + \
- (sizeof(struct octep_ctrl_mbox_msg) * (i)))
-
-static u32 octep_ctrl_mbox_circq_inc(u32 index, u32 mask)
+/* Size of mbox info in bytes */
+#define OCTEP_CTRL_MBOX_INFO_SZ 256
+/* Size of mbox host to fw queue info in bytes */
+#define OCTEP_CTRL_MBOX_H2FQ_INFO_SZ 16
+/* Size of mbox fw to host queue info in bytes */
+#define OCTEP_CTRL_MBOX_F2HQ_INFO_SZ 16
+
+#define OCTEP_CTRL_MBOX_TOTAL_INFO_SZ (OCTEP_CTRL_MBOX_INFO_SZ + \
+ OCTEP_CTRL_MBOX_H2FQ_INFO_SZ + \
+ OCTEP_CTRL_MBOX_F2HQ_INFO_SZ)
+
+#define OCTEP_CTRL_MBOX_INFO_MAGIC_NUM(m) (m)
+#define OCTEP_CTRL_MBOX_INFO_BARMEM_SZ(m) ((m) + 8)
+#define OCTEP_CTRL_MBOX_INFO_HOST_STATUS(m) ((m) + 24)
+#define OCTEP_CTRL_MBOX_INFO_FW_STATUS(m) ((m) + 144)
+
+#define OCTEP_CTRL_MBOX_H2FQ_INFO(m) ((m) + OCTEP_CTRL_MBOX_INFO_SZ)
+#define OCTEP_CTRL_MBOX_H2FQ_PROD(m) (OCTEP_CTRL_MBOX_H2FQ_INFO(m))
+#define OCTEP_CTRL_MBOX_H2FQ_CONS(m) ((OCTEP_CTRL_MBOX_H2FQ_INFO(m)) + 4)
+#define OCTEP_CTRL_MBOX_H2FQ_SZ(m) ((OCTEP_CTRL_MBOX_H2FQ_INFO(m)) + 8)
+
+#define OCTEP_CTRL_MBOX_F2HQ_INFO(m) ((m) + \
+ OCTEP_CTRL_MBOX_INFO_SZ + \
+ OCTEP_CTRL_MBOX_H2FQ_INFO_SZ)
+#define OCTEP_CTRL_MBOX_F2HQ_PROD(m) (OCTEP_CTRL_MBOX_F2HQ_INFO(m))
+#define OCTEP_CTRL_MBOX_F2HQ_CONS(m) ((OCTEP_CTRL_MBOX_F2HQ_INFO(m)) + 4)
+#define OCTEP_CTRL_MBOX_F2HQ_SZ(m) ((OCTEP_CTRL_MBOX_F2HQ_INFO(m)) + 8)
+
+static const u32 mbox_hdr_sz = sizeof(union octep_ctrl_mbox_msg_hdr);
+
+static u32 octep_ctrl_mbox_circq_inc(u32 index, u32 inc, u32 sz)
{
- return (index + 1) & mask;
+ return (index + inc) % sz;
}
-static u32 octep_ctrl_mbox_circq_space(u32 pi, u32 ci, u32 mask)
+static u32 octep_ctrl_mbox_circq_space(u32 pi, u32 ci, u32 sz)
{
- return mask - ((pi - ci) & mask);
+ return sz - (abs(pi - ci) % sz);
}
-static u32 octep_ctrl_mbox_circq_depth(u32 pi, u32 ci, u32 mask)
+static u32 octep_ctrl_mbox_circq_depth(u32 pi, u32 ci, u32 sz)
{
- return ((pi - ci) & mask);
+ return (abs(pi - ci) % sz);
}
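
The rewritten helpers above move from a power-of-two element count with a mask to a byte-addressed queue of arbitrary size, so indices advance by a byte increment modulo the queue size. A quick user-space check of that arithmetic for the simple non-wrapped case; the queue size and byte counts are example values only:

/* User-space check of the byte-based circular queue arithmetic above. */
#include <stdio.h>
#include <stdlib.h>

static unsigned int circq_inc(unsigned int idx, unsigned int inc, unsigned int sz)
{
	return (idx + inc) % sz;
}

static unsigned int circq_depth(unsigned int pi, unsigned int ci, unsigned int sz)
{
	return (unsigned int)abs((int)pi - (int)ci) % sz;
}

static unsigned int circq_space(unsigned int pi, unsigned int ci, unsigned int sz)
{
	return sz - circq_depth(pi, ci, sz);
}

int main(void)
{
	unsigned int sz = 1024, pi = 0, ci = 0;

	pi = circq_inc(pi, 300, sz);	/* producer wrote 300 bytes */
	printf("depth=%u space=%u\n", circq_depth(pi, ci, sz),
	       circq_space(pi, ci, sz));	/* depth=300 space=724 */
	return 0;
}
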
int octep_ctrl_mbox_init(struct octep_ctrl_mbox *mbox)
return -EINVAL;
}
- magic_num = readq(OCTEP_CTRL_MBOX_INFO_MAGIC_NUM_OFFSET(mbox->barmem));
+ magic_num = readq(OCTEP_CTRL_MBOX_INFO_MAGIC_NUM(mbox->barmem));
if (magic_num != OCTEP_CTRL_MBOX_MAGIC_NUMBER) {
pr_info("octep_ctrl_mbox : Invalid magic number %llx\n", magic_num);
return -EINVAL;
}
- status = readq(OCTEP_CTRL_MBOX_INFO_FW_STATUS_OFFSET(mbox->barmem));
+ status = readq(OCTEP_CTRL_MBOX_INFO_FW_STATUS(mbox->barmem));
if (status != OCTEP_CTRL_MBOX_STATUS_READY) {
pr_info("octep_ctrl_mbox : Firmware is not ready.\n");
return -EINVAL;
}
- mbox->barmem_sz = readl(OCTEP_CTRL_MBOX_INFO_BARMEM_SZ_OFFSET(mbox->barmem));
+ mbox->barmem_sz = readl(OCTEP_CTRL_MBOX_INFO_BARMEM_SZ(mbox->barmem));
- writeq(OCTEP_CTRL_MBOX_STATUS_INIT, OCTEP_CTRL_MBOX_INFO_HOST_STATUS_OFFSET(mbox->barmem));
+ writeq(OCTEP_CTRL_MBOX_STATUS_INIT,
+ OCTEP_CTRL_MBOX_INFO_HOST_STATUS(mbox->barmem));
- mbox->h2fq.elem_cnt = readl(OCTEP_CTRL_MBOX_H2FQ_ELEM_CNT_OFFSET(mbox->barmem));
- mbox->h2fq.elem_sz = readl(OCTEP_CTRL_MBOX_H2FQ_ELEM_SZ_OFFSET(mbox->barmem));
- mbox->h2fq.mask = (mbox->h2fq.elem_cnt - 1);
- mutex_init(&mbox->h2fq_lock);
+ mbox->h2fq.sz = readl(OCTEP_CTRL_MBOX_H2FQ_SZ(mbox->barmem));
+ mbox->h2fq.hw_prod = OCTEP_CTRL_MBOX_H2FQ_PROD(mbox->barmem);
+ mbox->h2fq.hw_cons = OCTEP_CTRL_MBOX_H2FQ_CONS(mbox->barmem);
+ mbox->h2fq.hw_q = mbox->barmem + OCTEP_CTRL_MBOX_TOTAL_INFO_SZ;
- mbox->f2hq.elem_cnt = readl(OCTEP_CTRL_MBOX_F2HQ_ELEM_CNT_OFFSET(mbox->barmem));
- mbox->f2hq.elem_sz = readl(OCTEP_CTRL_MBOX_F2HQ_ELEM_SZ_OFFSET(mbox->barmem));
- mbox->f2hq.mask = (mbox->f2hq.elem_cnt - 1);
- mutex_init(&mbox->f2hq_lock);
-
- mbox->h2fq.hw_prod = OCTEP_CTRL_MBOX_H2FQ_PROD_OFFSET(mbox->barmem);
- mbox->h2fq.hw_cons = OCTEP_CTRL_MBOX_H2FQ_CONS_OFFSET(mbox->barmem);
- mbox->h2fq.hw_q = mbox->barmem +
- OCTEP_CTRL_MBOX_INFO_SZ +
- OCTEP_CTRL_MBOX_H2FQ_INFO_SZ +
- OCTEP_CTRL_MBOX_F2HQ_INFO_SZ;
-
- mbox->f2hq.hw_prod = OCTEP_CTRL_MBOX_F2HQ_PROD_OFFSET(mbox->barmem);
- mbox->f2hq.hw_cons = OCTEP_CTRL_MBOX_F2HQ_CONS_OFFSET(mbox->barmem);
- mbox->f2hq.hw_q = mbox->h2fq.hw_q +
- ((mbox->h2fq.elem_sz + sizeof(union octep_ctrl_mbox_msg_hdr)) *
- mbox->h2fq.elem_cnt);
+ mbox->f2hq.sz = readl(OCTEP_CTRL_MBOX_F2HQ_SZ(mbox->barmem));
+ mbox->f2hq.hw_prod = OCTEP_CTRL_MBOX_F2HQ_PROD(mbox->barmem);
+ mbox->f2hq.hw_cons = OCTEP_CTRL_MBOX_F2HQ_CONS(mbox->barmem);
+ mbox->f2hq.hw_q = mbox->barmem +
+ OCTEP_CTRL_MBOX_TOTAL_INFO_SZ +
+ mbox->h2fq.sz;
/* ensure ready state is seen after everything is initialized */
wmb();
- writeq(OCTEP_CTRL_MBOX_STATUS_READY, OCTEP_CTRL_MBOX_INFO_HOST_STATUS_OFFSET(mbox->barmem));
+ writeq(OCTEP_CTRL_MBOX_STATUS_READY,
+ OCTEP_CTRL_MBOX_INFO_HOST_STATUS(mbox->barmem));
pr_info("Octep ctrl mbox : Init successful.\n");
return 0;
}
+static void
+octep_write_mbox_data(struct octep_ctrl_mbox_q *q, u32 *pi, u32 ci, void *buf, u32 w_sz)
+{
+ u8 __iomem *qbuf;
+ u32 cp_sz;
+
+ /* Assumption: Caller has ensured enough write space */
+ qbuf = (q->hw_q + *pi);
+ if (*pi < ci) {
+ /* copy entire w_sz */
+ memcpy_toio(qbuf, buf, w_sz);
+ *pi = octep_ctrl_mbox_circq_inc(*pi, w_sz, q->sz);
+ } else {
+ /* copy up to end of queue */
+ cp_sz = min((q->sz - *pi), w_sz);
+ memcpy_toio(qbuf, buf, cp_sz);
+ w_sz -= cp_sz;
+ *pi = octep_ctrl_mbox_circq_inc(*pi, cp_sz, q->sz);
+ if (w_sz) {
+ /* roll over and copy remaining w_sz */
+ buf += cp_sz;
+ qbuf = (q->hw_q + *pi);
+ memcpy_toio(qbuf, buf, w_sz);
+ *pi = octep_ctrl_mbox_circq_inc(*pi, w_sz, q->sz);
+ }
+ }
+}
+
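
octep_write_mbox_data() above splits a write that would run past the end of the queue buffer into two copies: up to the end, then the remainder after wrapping to the start. The following is a simplified user-space model of that split (it ignores the pi/ci ordering check and uses plain memcpy instead of memcpy_toio); buffer names and sizes are illustrative assumptions.

/* Simplified model of the wrap-around copy above. */
#include <string.h>

struct ring {
	unsigned char *buf;
	unsigned int sz;
};

static unsigned int ring_write(struct ring *q, unsigned int pi,
			       const void *msg, unsigned int len)
{
	unsigned int first = q->sz - pi;

	if (len <= first) {
		memcpy(q->buf + pi, msg, len);
		return (pi + len) % q->sz;
	}

	memcpy(q->buf + pi, msg, first);			/* up to the end */
	memcpy(q->buf, (const unsigned char *)msg + first,	/* wrapped part */
	       len - first);
	return len - first;
}

int main(void)
{
	unsigned char storage[8] = { 0 };
	struct ring q = { storage, sizeof(storage) };
	unsigned char msg[5] = "abcd";
	unsigned int pi = ring_write(&q, 6, msg, 5);	/* 2 bytes at end, 3 after wrap */

	return pi == 3 ? 0 : 1;
}
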
int octep_ctrl_mbox_send(struct octep_ctrl_mbox *mbox, struct octep_ctrl_mbox_msg *msg)
{
- unsigned long timeout = msecs_to_jiffies(OCTEP_CTRL_MBOX_MSG_TIMEOUT_MS);
- unsigned long period = msecs_to_jiffies(OCTEP_CTRL_MBOX_MSG_WAIT_MS);
+ struct octep_ctrl_mbox_msg_buf *sg;
struct octep_ctrl_mbox_q *q;
- unsigned long expire;
- u64 *mbuf, *word0;
- u8 __iomem *qidx;
- u16 pi, ci;
- int i;
+ u32 pi, ci, buf_sz, w_sz;
+ int s;
if (!mbox || !msg)
return -EINVAL;
+ if (readq(OCTEP_CTRL_MBOX_INFO_FW_STATUS(mbox->barmem)) != OCTEP_CTRL_MBOX_STATUS_READY)
+ return -EIO;
+
+ mutex_lock(&mbox->h2fq_lock);
q = &mbox->h2fq;
pi = readl(q->hw_prod);
ci = readl(q->hw_cons);
- if (!octep_ctrl_mbox_circq_space(pi, ci, q->mask))
- return -ENOMEM;
-
- qidx = OCTEP_CTRL_MBOX_Q_OFFSET(q->hw_q, pi);
- mbuf = (u64 *)msg->msg;
- word0 = &msg->hdr.word0;
-
- mutex_lock(&mbox->h2fq_lock);
- for (i = 1; i <= msg->hdr.sizew; i++)
- writeq(*mbuf++, (qidx + (i * 8)));
-
- writeq(*word0, qidx);
+ if (octep_ctrl_mbox_circq_space(pi, ci, q->sz) < (msg->hdr.s.sz + mbox_hdr_sz)) {
+ mutex_unlock(&mbox->h2fq_lock);
+ return -EAGAIN;
+ }
- pi = octep_ctrl_mbox_circq_inc(pi, q->mask);
+ octep_write_mbox_data(q, &pi, ci, (void *)&msg->hdr, mbox_hdr_sz);
+ buf_sz = msg->hdr.s.sz;
+ for (s = 0; ((s < msg->sg_num) && (buf_sz > 0)); s++) {
+ sg = &msg->sg_list[s];
+ w_sz = (sg->sz <= buf_sz) ? sg->sz : buf_sz;
+ octep_write_mbox_data(q, &pi, ci, sg->msg, w_sz);
+ buf_sz -= w_sz;
+ }
writel(pi, q->hw_prod);
mutex_unlock(&mbox->h2fq_lock);
- /* don't check for notification response */
- if (msg->hdr.flags & OCTEP_CTRL_MBOX_MSG_HDR_FLAG_NOTIFY)
- return 0;
-
- expire = jiffies + timeout;
- while (true) {
- *word0 = readq(qidx);
- if (msg->hdr.flags == OCTEP_CTRL_MBOX_MSG_HDR_FLAG_RESP)
- break;
- schedule_timeout_interruptible(period);
- if (signal_pending(current) || time_after(jiffies, expire)) {
- pr_info("octep_ctrl_mbox: Timed out\n");
- return -EBUSY;
+ return 0;
+}
+
+static void
+octep_read_mbox_data(struct octep_ctrl_mbox_q *q, u32 pi, u32 *ci, void *buf, u32 r_sz)
+{
+ u8 __iomem *qbuf;
+ u32 cp_sz;
+
+ /* Assumption: Caller has ensured enough read space */
+ qbuf = (q->hw_q + *ci);
+ if (*ci < pi) {
+ /* copy entire r_sz */
+ memcpy_fromio(buf, qbuf, r_sz);
+ *ci = octep_ctrl_mbox_circq_inc(*ci, r_sz, q->sz);
+ } else {
+ /* copy up to end of queue */
+ cp_sz = min((q->sz - *ci), r_sz);
+ memcpy_fromio(buf, qbuf, cp_sz);
+ r_sz -= cp_sz;
+ *ci = octep_ctrl_mbox_circq_inc(*ci, cp_sz, q->sz);
+ if (r_sz) {
+ /* roll over and copy remaining r_sz */
+ buf += cp_sz;
+ qbuf = (q->hw_q + *ci);
+ memcpy_fromio(buf, qbuf, r_sz);
+ *ci = octep_ctrl_mbox_circq_inc(*ci, r_sz, q->sz);
}
}
- mbuf = (u64 *)msg->msg;
- for (i = 1; i <= msg->hdr.sizew; i++)
- *mbuf++ = readq(qidx + (i * 8));
-
- return 0;
}
int octep_ctrl_mbox_recv(struct octep_ctrl_mbox *mbox, struct octep_ctrl_mbox_msg *msg)
{
+ struct octep_ctrl_mbox_msg_buf *sg;
+ u32 pi, ci, r_sz, buf_sz, q_depth;
struct octep_ctrl_mbox_q *q;
- u32 count, pi, ci;
- u8 __iomem *qidx;
- u64 *mbuf;
- int i;
+ int s;
- if (!mbox || !msg)
- return -EINVAL;
+ if (readq(OCTEP_CTRL_MBOX_INFO_FW_STATUS(mbox->barmem)) != OCTEP_CTRL_MBOX_STATUS_READY)
+ return -EIO;
+ mutex_lock(&mbox->f2hq_lock);
q = &mbox->f2hq;
pi = readl(q->hw_prod);
ci = readl(q->hw_cons);
- count = octep_ctrl_mbox_circq_depth(pi, ci, q->mask);
- if (!count)
- return -EAGAIN;
-
- qidx = OCTEP_CTRL_MBOX_Q_OFFSET(q->hw_q, ci);
- mbuf = (u64 *)msg->msg;
- mutex_lock(&mbox->f2hq_lock);
-
- msg->hdr.word0 = readq(qidx);
- for (i = 1; i <= msg->hdr.sizew; i++)
- *mbuf++ = readq(qidx + (i * 8));
+ q_depth = octep_ctrl_mbox_circq_depth(pi, ci, q->sz);
+ if (q_depth < mbox_hdr_sz) {
+ mutex_unlock(&mbox->f2hq_lock);
+ return -EAGAIN;
+ }
- ci = octep_ctrl_mbox_circq_inc(ci, q->mask);
+ octep_read_mbox_data(q, pi, &ci, (void *)&msg->hdr, mbox_hdr_sz);
+ buf_sz = msg->hdr.s.sz;
+ for (s = 0; ((s < msg->sg_num) && (buf_sz > 0)); s++) {
+ sg = &msg->sg_list[s];
+ r_sz = (sg->sz <= buf_sz) ? sg->sz : buf_sz;
+ octep_read_mbox_data(q, pi, &ci, sg->msg, r_sz);
+ buf_sz -= r_sz;
+ }
writel(ci, q->hw_cons);
-
mutex_unlock(&mbox->f2hq_lock);
- if (msg->hdr.flags != OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ || !mbox->process_req)
- return 0;
-
- mbox->process_req(mbox->user_ctx, msg);
- mbuf = (u64 *)msg->msg;
- for (i = 1; i <= msg->hdr.sizew; i++)
- writeq(*mbuf++, (qidx + (i * 8)));
-
- writeq(msg->hdr.word0, qidx);
-
return 0;
}
{
if (!mbox)
return -EINVAL;
+ if (!mbox->barmem)
+ return -EINVAL;
- writeq(OCTEP_CTRL_MBOX_STATUS_UNINIT,
- OCTEP_CTRL_MBOX_INFO_HOST_STATUS_OFFSET(mbox->barmem));
+ writeq(OCTEP_CTRL_MBOX_STATUS_INVALID,
+ OCTEP_CTRL_MBOX_INFO_HOST_STATUS(mbox->barmem));
/* ensure uninit state is written before uninitialization */
wmb();
mutex_destroy(&mbox->h2fq_lock);
mutex_destroy(&mbox->f2hq_lock);
- writeq(OCTEP_CTRL_MBOX_STATUS_INVALID,
- OCTEP_CTRL_MBOX_INFO_HOST_STATUS_OFFSET(mbox->barmem));
-
pr_info("Octep ctrl mbox : Uninit successful.\n");
return 0;
* |-------------------------------------------|
* |producer index (4 bytes) |
* |consumer index (4 bytes) |
- * |element size (4 bytes) |
- * |element count (4 bytes) |
+ * |max element size (4 bytes) |
+ * |reserved (4 bytes) |
* |===========================================|
* |Fw to Host Queue info (16 bytes) |
* |-------------------------------------------|
* |producer index (4 bytes) |
* |consumer index (4 bytes) |
- * |element size (4 bytes) |
- * |element count (4 bytes) |
+ * |max element size (4 bytes) |
+ * |reserved (4 bytes) |
* |===========================================|
- * |Host to Fw Queue |
+ * |Host to Fw Queue ((total size-288/2) bytes)|
* |-------------------------------------------|
- * |((elem_sz + hdr(8 bytes)) * elem_cnt) bytes|
+ * | |
* |===========================================|
* |===========================================|
- * |Fw to Host Queue |
+ * |Fw to Host Queue ((total size-288/2) bytes)|
* |-------------------------------------------|
- * |((elem_sz + hdr(8 bytes)) * elem_cnt) bytes|
+ * | |
* |===========================================|
*/
#define OCTEP_CTRL_MBOX_MAGIC_NUMBER 0xdeaddeadbeefbeefull
-/* Size of mbox info in bytes */
-#define OCTEP_CTRL_MBOX_INFO_SZ 256
-/* Size of mbox host to target queue info in bytes */
-#define OCTEP_CTRL_MBOX_H2FQ_INFO_SZ 16
-/* Size of mbox target to host queue info in bytes */
-#define OCTEP_CTRL_MBOX_F2HQ_INFO_SZ 16
-/* Size of mbox queue in bytes */
-#define OCTEP_CTRL_MBOX_Q_SZ(sz, cnt) (((sz) + 8) * (cnt))
-/* Size of mbox in bytes */
-#define OCTEP_CTRL_MBOX_SZ(hsz, hcnt, fsz, fcnt) (OCTEP_CTRL_MBOX_INFO_SZ + \
- OCTEP_CTRL_MBOX_H2FQ_INFO_SZ + \
- OCTEP_CTRL_MBOX_F2HQ_INFO_SZ + \
- OCTEP_CTRL_MBOX_Q_SZ(hsz, hcnt) + \
- OCTEP_CTRL_MBOX_Q_SZ(fsz, fcnt))
-
/* Valid request message */
#define OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ BIT(0)
/* Valid response message */
#define OCTEP_CTRL_MBOX_MSG_HDR_FLAG_RESP BIT(1)
/* Valid notification, no response required */
#define OCTEP_CTRL_MBOX_MSG_HDR_FLAG_NOTIFY BIT(2)
+/* Valid custom message */
+#define OCTEP_CTRL_MBOX_MSG_HDR_FLAG_CUSTOM BIT(3)
+
+#define OCTEP_CTRL_MBOX_MSG_DESC_MAX 4
enum octep_ctrl_mbox_status {
OCTEP_CTRL_MBOX_STATUS_INVALID = 0,
/* mbox message */
union octep_ctrl_mbox_msg_hdr {
- u64 word0;
+ u64 words[2];
struct {
+ /* must be 0 */
+ u16 reserved1:15;
+ /* vf_idx is valid if 1 */
+ u16 is_vf:1;
+ /* sender vf index 0-(n-1), 0 if (is_vf==0) */
+ u16 vf_idx;
+ /* total size of message excluding header */
+ u32 sz;
/* OCTEP_CTRL_MBOX_MSG_HDR_FLAG_* */
u32 flags;
- /* size of message in words excluding header */
- u32 sizew;
- };
+ /* identifier to match responses */
+ u16 msg_id;
+ u16 reserved2;
+ } s;
+};
+
+/* mbox message buffer */
+struct octep_ctrl_mbox_msg_buf {
+ u32 reserved1;
+ u16 reserved2;
+ /* size of buffer */
+ u16 sz;
+ /* pointer to message buffer */
+ void *msg;
};
/* mbox message */
struct octep_ctrl_mbox_msg {
/* mbox transaction header */
union octep_ctrl_mbox_msg_hdr hdr;
- /* pointer to message buffer */
- void *msg;
+ /* number of sg buffers */
+ int sg_num;
+ /* message buffers */
+ struct octep_ctrl_mbox_msg_buf sg_list[OCTEP_CTRL_MBOX_MSG_DESC_MAX];
};
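For orientation, a two-fragment request described with the structures above might be filled like this; the payload buffers and their sizes are invented for illustration, and only fields defined above are used (assumes kernel types and the definitions above):

	struct octep_ctrl_mbox_msg msg = {0};
	u8 first[64], second[16];		/* hypothetical payload fragments */

	msg.hdr.s.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
	msg.hdr.s.sz = sizeof(first) + sizeof(second);	/* total payload, excluding header */
	msg.sg_num = 2;					/* <= OCTEP_CTRL_MBOX_MSG_DESC_MAX */
	msg.sg_list[0].sz = sizeof(first);
	msg.sg_list[0].msg = first;
	msg.sg_list[1].sz = sizeof(second);
	msg.sg_list[1].msg = second;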
/* Mbox queue */
struct octep_ctrl_mbox_q {
- /* q element size, should be aligned to unsigned long */
- u16 elem_sz;
- /* q element count, should be power of 2 */
- u16 elem_cnt;
- /* q mask */
- u16 mask;
+ /* size of queue buffer */
+ u32 sz;
/* producer address in bar mem */
u8 __iomem *hw_prod;
/* consumer address in bar mem */
};
struct octep_ctrl_mbox {
- /* host driver version */
- u64 version;
/* size of bar memory */
u32 barmem_sz;
/* pointer to BAR memory */
u8 __iomem *barmem;
- /* user context for callback, can be null */
- void *user_ctx;
- /* callback handler for processing request, called from octep_ctrl_mbox_recv */
- int (*process_req)(void *user_ctx, struct octep_ctrl_mbox_msg *msg);
/* host-to-fw queue */
struct octep_ctrl_mbox_q h2fq;
/* fw-to-host queue */
/* Send mbox message.
*
* @param mbox: non-null pointer to struct octep_ctrl_mbox.
+ * @param msg: non-null pointer to struct octep_ctrl_mbox_msg.
+ * Caller should fill msg.hdr.s.sz and msg.sg_list[i].sz for each buffer.
*
* return value: 0 on success, -errno on failure.
*/
/* Retrieve mbox message.
*
* @param mbox: non-null pointer to struct octep_ctrl_mbox.
+ * @param msg: non-null pointer to struct octep_ctrl_mbox_msg.
+ * Caller should fill msg.hdr.s.sz and msg.sg_list[i].sz for each buffer.
*
* return value: 0 on success, -errno on failure.
*/
/* Uninitialize control mbox.
*
- * @param ep: non-null pointer to struct octep_ctrl_mbox.
+ * @param mbox: non-null pointer to struct octep_ctrl_mbox.
*
* return value: 0 on success, -errno on failure.
*/
#include <linux/types.h>
#include <linux/etherdevice.h>
#include <linux/pci.h>
+#include <linux/wait.h>
#include "octep_config.h"
#include "octep_main.h"
#include "octep_ctrl_net.h"
-int octep_get_link_status(struct octep_device *oct)
+static const u32 req_hdr_sz = sizeof(union octep_ctrl_net_req_hdr);
+static const u32 mtu_sz = sizeof(struct octep_ctrl_net_h2f_req_cmd_mtu);
+static const u32 mac_sz = sizeof(struct octep_ctrl_net_h2f_req_cmd_mac);
+static const u32 state_sz = sizeof(struct octep_ctrl_net_h2f_req_cmd_state);
+static const u32 link_info_sz = sizeof(struct octep_ctrl_net_link_info);
+static atomic_t ctrl_net_msg_id;
+
+static void init_send_req(struct octep_ctrl_mbox_msg *msg, void *buf,
+ u16 sz, int vfid)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_net_h2f_resp *resp;
- struct octep_ctrl_mbox_msg msg = {};
- int err;
+ msg->hdr.s.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
+ msg->hdr.s.msg_id = atomic_inc_return(&ctrl_net_msg_id) &
+ GENMASK(sizeof(msg->hdr.s.msg_id) * BITS_PER_BYTE - 1, 0);
+ msg->hdr.s.sz = req_hdr_sz + sz;
+ msg->sg_num = 1;
+ msg->sg_list[0].msg = buf;
+ msg->sg_list[0].sz = msg->hdr.s.sz;
+ if (vfid != OCTEP_CTRL_NET_INVALID_VFID) {
+ msg->hdr.s.is_vf = 1;
+ msg->hdr.s.vf_idx = vfid;
+ }
+}
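The msg_id set above is a free-running counter folded into a 16-bit field, so it simply wraps around; matching responses by id stays unambiguous as long as far fewer than 65536 requests are outstanding at once. A standalone illustration of the wrap (names and values here are not from the driver):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t counter = 65534;	/* pretend the shared counter is near the wrap point */
	int i;

	for (i = 0; i < 4; i++) {
		uint16_t msg_id = (uint16_t)(++counter & 0xffff);

		printf("msg_id=%u\n", msg_id);	/* prints 65535, 0, 1, 2 */
	}
	return 0;
}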
+
+static int octep_send_mbox_req(struct octep_device *oct,
+ struct octep_ctrl_net_wait_data *d,
+ bool wait_for_response)
+{
+ int err, ret;
+
+ err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &d->msg);
+ if (err < 0)
+ return err;
+
+ if (!wait_for_response)
+ return 0;
+
+ d->done = 0;
+ INIT_LIST_HEAD(&d->list);
+ list_add_tail(&d->list, &oct->ctrl_req_wait_list);
+ ret = wait_event_interruptible_timeout(oct->ctrl_req_wait_q,
+ (d->done != 0),
+ msecs_to_jiffies(500));
+ list_del(&d->list);
+ /* wait_event_interruptible_timeout() returns:
+ * 0 : condition still false after the timeout elapsed
+ * 1 : condition became true right as the timeout elapsed
+ * >1 : condition became true with time remaining
+ * <0 : interrupted by a signal (-ERESTARTSYS)
+ */
+ if (ret == 0 || ret == 1)
+ return -EAGAIN;
+
+ if (d->data.resp.hdr.s.reply != OCTEP_CTRL_NET_REPLY_OK)
+ return -EAGAIN;
+
+ return 0;
+}
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_STATUS;
- req.link.cmd = OCTEP_CTRL_NET_CMD_GET;
+int octep_ctrl_net_init(struct octep_device *oct)
+{
+ struct octep_ctrl_mbox *ctrl_mbox;
+ struct pci_dev *pdev = oct->pdev;
+ int ret;
+
+ init_waitqueue_head(&oct->ctrl_req_wait_q);
+ INIT_LIST_HEAD(&oct->ctrl_req_wait_list);
+
+ /* Initialize control mbox */
+ ctrl_mbox = &oct->ctrl_mbox;
+ ctrl_mbox->barmem = CFG_GET_CTRL_MBOX_MEM_ADDR(oct->conf);
+ ret = octep_ctrl_mbox_init(ctrl_mbox);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to initialize control mbox\n");
+ return ret;
+ }
+ oct->ctrl_mbox_ifstats_offset = ctrl_mbox->barmem_sz;
+
+ return 0;
+}
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_STATE_REQ_SZW;
- msg.msg = &req;
- err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
- if (err)
+int octep_ctrl_net_get_link_status(struct octep_device *oct, int vfid)
+{
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
+ int err;
+
+ init_send_req(&d.msg, (void *)req, state_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_STATUS;
+ req->link.cmd = OCTEP_CTRL_NET_CMD_GET;
+ err = octep_send_mbox_req(oct, &d, true);
+ if (err < 0)
return err;
- resp = (struct octep_ctrl_net_h2f_resp *)&req;
- return resp->link.state;
+ return d.data.resp.link.state;
}
-void octep_set_link_status(struct octep_device *oct, bool up)
+int octep_ctrl_net_set_link_status(struct octep_device *oct, int vfid, bool up,
+ bool wait_for_response)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_STATUS;
- req.link.cmd = OCTEP_CTRL_NET_CMD_SET;
- req.link.state = (up) ? OCTEP_CTRL_NET_STATE_UP : OCTEP_CTRL_NET_STATE_DOWN;
+ init_send_req(&d.msg, req, state_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_STATUS;
+ req->link.cmd = OCTEP_CTRL_NET_CMD_SET;
+ req->link.state = (up) ? OCTEP_CTRL_NET_STATE_UP :
+ OCTEP_CTRL_NET_STATE_DOWN;
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_STATE_REQ_SZW;
- msg.msg = &req;
- octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+ return octep_send_mbox_req(oct, &d, wait_for_response);
}
-void octep_set_rx_state(struct octep_device *oct, bool up)
+int octep_ctrl_net_set_rx_state(struct octep_device *oct, int vfid, bool up,
+ bool wait_for_response)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_RX_STATE;
- req.link.cmd = OCTEP_CTRL_NET_CMD_SET;
- req.link.state = (up) ? OCTEP_CTRL_NET_STATE_UP : OCTEP_CTRL_NET_STATE_DOWN;
+ init_send_req(&d.msg, req, state_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_RX_STATE;
+ req->link.cmd = OCTEP_CTRL_NET_CMD_SET;
+ req->link.state = (up) ? OCTEP_CTRL_NET_STATE_UP :
+ OCTEP_CTRL_NET_STATE_DOWN;
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_STATE_REQ_SZW;
- msg.msg = &req;
- octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+ return octep_send_mbox_req(oct, &d, wait_for_response);
}
-int octep_get_mac_addr(struct octep_device *oct, u8 *addr)
+int octep_ctrl_net_get_mac_addr(struct octep_device *oct, int vfid, u8 *addr)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_net_h2f_resp *resp;
- struct octep_ctrl_mbox_msg msg = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
int err;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
- req.link.cmd = OCTEP_CTRL_NET_CMD_GET;
-
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MAC_REQ_SZW;
- msg.msg = &req;
- err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
- if (err)
+ init_send_req(&d.msg, req, mac_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
+ req->link.cmd = OCTEP_CTRL_NET_CMD_GET;
+ err = octep_send_mbox_req(oct, &d, true);
+ if (err < 0)
return err;
- resp = (struct octep_ctrl_net_h2f_resp *)&req;
- memcpy(addr, resp->mac.addr, ETH_ALEN);
+ memcpy(addr, d.data.resp.mac.addr, ETH_ALEN);
- return err;
+ return 0;
}
-int octep_set_mac_addr(struct octep_device *oct, u8 *addr)
+int octep_ctrl_net_set_mac_addr(struct octep_device *oct, int vfid, u8 *addr,
+ bool wait_for_response)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
-
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
- req.mac.cmd = OCTEP_CTRL_NET_CMD_SET;
- memcpy(&req.mac.addr, addr, ETH_ALEN);
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MAC_REQ_SZW;
- msg.msg = &req;
+ init_send_req(&d.msg, req, mac_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_MAC;
+ req->mac.cmd = OCTEP_CTRL_NET_CMD_SET;
+ memcpy(&req->mac.addr, addr, ETH_ALEN);
- return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+ return octep_send_mbox_req(oct, &d, wait_for_response);
}
-int octep_set_mtu(struct octep_device *oct, int mtu)
+int octep_ctrl_net_set_mtu(struct octep_device *oct, int vfid, int mtu,
+ bool wait_for_response)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_MTU;
- req.mtu.cmd = OCTEP_CTRL_NET_CMD_SET;
- req.mtu.val = mtu;
+ init_send_req(&d.msg, req, mtu_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_MTU;
+ req->mtu.cmd = OCTEP_CTRL_NET_CMD_SET;
+ req->mtu.val = mtu;
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_MTU_REQ_SZW;
- msg.msg = &req;
-
- return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+ return octep_send_mbox_req(oct, &d, wait_for_response);
}
-int octep_get_if_stats(struct octep_device *oct)
+int octep_ctrl_net_get_if_stats(struct octep_device *oct, int vfid,
+ struct octep_iface_rx_stats *rx_stats,
+ struct octep_iface_tx_stats *tx_stats)
{
- void __iomem *iface_rx_stats;
- void __iomem *iface_tx_stats;
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
+ struct octep_ctrl_net_h2f_resp *resp;
int err;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_GET_IF_STATS;
- req.mac.cmd = OCTEP_CTRL_NET_CMD_GET;
- req.get_stats.offset = oct->ctrl_mbox_ifstats_offset;
-
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_GET_STATS_REQ_SZW;
- msg.msg = &req;
- err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
- if (err)
+ init_send_req(&d.msg, req, 0, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_GET_IF_STATS;
+ err = octep_send_mbox_req(oct, &d, true);
+ if (err < 0)
return err;
- iface_rx_stats = oct->ctrl_mbox.barmem + oct->ctrl_mbox_ifstats_offset;
- iface_tx_stats = oct->ctrl_mbox.barmem + oct->ctrl_mbox_ifstats_offset +
- sizeof(struct octep_iface_rx_stats);
- memcpy_fromio(&oct->iface_rx_stats, iface_rx_stats, sizeof(struct octep_iface_rx_stats));
- memcpy_fromio(&oct->iface_tx_stats, iface_tx_stats, sizeof(struct octep_iface_tx_stats));
-
- return err;
+ resp = &d.data.resp;
+ memcpy(rx_stats, &resp->if_stats.rx_stats, sizeof(struct octep_iface_rx_stats));
+ memcpy(tx_stats, &resp->if_stats.tx_stats, sizeof(struct octep_iface_tx_stats));
+ return 0;
}
-int octep_get_link_info(struct octep_device *oct)
+int octep_ctrl_net_get_link_info(struct octep_device *oct, int vfid,
+ struct octep_iface_link_info *link_info)
{
- struct octep_ctrl_net_h2f_req req = {};
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
struct octep_ctrl_net_h2f_resp *resp;
- struct octep_ctrl_mbox_msg msg = {};
int err;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
- req.mac.cmd = OCTEP_CTRL_NET_CMD_GET;
-
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_LINK_INFO_REQ_SZW;
- msg.msg = &req;
- err = octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
- if (err)
+ init_send_req(&d.msg, req, link_info_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
+ req->link_info.cmd = OCTEP_CTRL_NET_CMD_GET;
+ err = octep_send_mbox_req(oct, &d, true);
+ if (err < 0)
return err;
- resp = (struct octep_ctrl_net_h2f_resp *)&req;
- oct->link_info.supported_modes = resp->link_info.supported_modes;
- oct->link_info.advertised_modes = resp->link_info.advertised_modes;
- oct->link_info.autoneg = resp->link_info.autoneg;
- oct->link_info.pause = resp->link_info.pause;
- oct->link_info.speed = resp->link_info.speed;
+ resp = &d.data.resp;
+ link_info->supported_modes = resp->link_info.supported_modes;
+ link_info->advertised_modes = resp->link_info.advertised_modes;
+ link_info->autoneg = resp->link_info.autoneg;
+ link_info->pause = resp->link_info.pause;
+ link_info->speed = resp->link_info.speed;
+
+ return 0;
+}
+
+int octep_ctrl_net_set_link_info(struct octep_device *oct, int vfid,
+ struct octep_iface_link_info *link_info,
+ bool wait_for_response)
+{
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_req *req = &d.data.req;
+
+ init_send_req(&d.msg, req, link_info_sz, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
+ req->link_info.cmd = OCTEP_CTRL_NET_CMD_SET;
+ req->link_info.info.advertised_modes = link_info->advertised_modes;
+ req->link_info.info.autoneg = link_info->autoneg;
+ req->link_info.info.pause = link_info->pause;
+ req->link_info.info.speed = link_info->speed;
+
+ return octep_send_mbox_req(oct, &d, wait_for_response);
+}
+
+static void process_mbox_resp(struct octep_device *oct,
+ struct octep_ctrl_mbox_msg *msg)
+{
+ struct octep_ctrl_net_wait_data *pos, *n;
+
+ list_for_each_entry_safe(pos, n, &oct->ctrl_req_wait_list, list) {
+ if (pos->msg.hdr.s.msg_id == msg->hdr.s.msg_id) {
+ memcpy(&pos->data.resp,
+ msg->sg_list[0].msg,
+ msg->hdr.s.sz);
+ pos->done = 1;
+ wake_up_interruptible_all(&oct->ctrl_req_wait_q);
+ break;
+ }
+ }
+}
- return err;
+static int process_mbox_notify(struct octep_device *oct,
+ struct octep_ctrl_mbox_msg *msg)
+{
+ struct net_device *netdev = oct->netdev;
+ struct octep_ctrl_net_f2h_req *req;
+
+ req = (struct octep_ctrl_net_f2h_req *)msg->sg_list[0].msg;
+ switch (req->hdr.s.cmd) {
+ case OCTEP_CTRL_NET_F2H_CMD_LINK_STATUS:
+ if (netif_running(netdev)) {
+ if (req->link.state) {
+ dev_info(&oct->pdev->dev, "netif_carrier_on\n");
+ netif_carrier_on(netdev);
+ } else {
+ dev_info(&oct->pdev->dev, "netif_carrier_off\n");
+ netif_carrier_off(netdev);
+ }
+ }
+ break;
+ default:
+ pr_info("Unknown mbox req : %u\n", req->hdr.s.cmd);
+ break;
+ }
+
+ return 0;
}
-int octep_set_link_info(struct octep_device *oct, struct octep_iface_link_info *link_info)
+void octep_ctrl_net_recv_fw_messages(struct octep_device *oct)
{
- struct octep_ctrl_net_h2f_req req = {};
- struct octep_ctrl_mbox_msg msg = {};
+ static u16 msg_sz = sizeof(union octep_ctrl_net_max_data);
+ union octep_ctrl_net_max_data data = {0};
+ struct octep_ctrl_mbox_msg msg = {0};
+ int ret;
+
+ msg.hdr.s.sz = msg_sz;
+ msg.sg_num = 1;
+ msg.sg_list[0].sz = msg_sz;
+ msg.sg_list[0].msg = &data;
+ while (true) {
+ /* mbox will overwrite msg.hdr.s.sz so initialize it */
+ msg.hdr.s.sz = msg_sz;
+ ret = octep_ctrl_mbox_recv(&oct->ctrl_mbox, (struct octep_ctrl_mbox_msg *)&msg);
+ if (ret < 0)
+ break;
+
+ if (msg.hdr.s.flags & OCTEP_CTRL_MBOX_MSG_HDR_FLAG_RESP)
+ process_mbox_resp(oct, &msg);
+ else if (msg.hdr.s.flags & OCTEP_CTRL_MBOX_MSG_HDR_FLAG_NOTIFY)
+ process_mbox_notify(oct, &msg);
+ }
+}
+
+int octep_ctrl_net_uninit(struct octep_device *oct)
+{
+ struct octep_ctrl_net_wait_data *pos, *n;
+
+ list_for_each_entry_safe(pos, n, &oct->ctrl_req_wait_list, list)
+ pos->done = 1;
- req.hdr.cmd = OCTEP_CTRL_NET_H2F_CMD_LINK_INFO;
- req.link_info.cmd = OCTEP_CTRL_NET_CMD_SET;
- req.link_info.info.advertised_modes = link_info->advertised_modes;
- req.link_info.info.autoneg = link_info->autoneg;
- req.link_info.info.pause = link_info->pause;
- req.link_info.info.speed = link_info->speed;
+ wake_up_interruptible_all(&oct->ctrl_req_wait_q);
- msg.hdr.flags = OCTEP_CTRL_MBOX_MSG_HDR_FLAG_REQ;
- msg.hdr.sizew = OCTEP_CTRL_NET_H2F_LINK_INFO_REQ_SZW;
- msg.msg = &req;
+ octep_ctrl_mbox_uninit(&oct->ctrl_mbox);
- return octep_ctrl_mbox_send(&oct->ctrl_mbox, &msg);
+ return 0;
}
#ifndef __OCTEP_CTRL_NET_H__
#define __OCTEP_CTRL_NET_H__
+#define OCTEP_CTRL_NET_INVALID_VFID (-1)
+
/* Supported commands */
enum octep_ctrl_net_cmd {
OCTEP_CTRL_NET_CMD_GET = 0,
OCTEP_CTRL_NET_F2H_CMD_LINK_STATUS,
};
-struct octep_ctrl_net_req_hdr {
- /* sender id */
- u16 sender;
- /* receiver id */
- u16 receiver;
- /* octep_ctrl_net_h2t_cmd */
- u16 cmd;
- /* reserved */
- u16 rsvd0;
+union octep_ctrl_net_req_hdr {
+ u64 words[1];
+ struct {
+ /* sender id */
+ u16 sender;
+ /* receiver id */
+ u16 receiver;
+ /* octep_ctrl_net_h2t_cmd */
+ u16 cmd;
+ /* reserved */
+ u16 rsvd0;
+ } s;
};
/* get/set mtu request */
u8 addr[ETH_ALEN];
};
-/* get if_stats, xstats, q_stats request */
-struct octep_ctrl_net_h2f_req_cmd_get_stats {
- /* offset into barmem where fw should copy over stats */
- u32 offset;
-};
-
/* get/set link state, rx state */
struct octep_ctrl_net_h2f_req_cmd_state {
/* enum octep_ctrl_net_cmd */
/* Host to fw request data */
struct octep_ctrl_net_h2f_req {
- struct octep_ctrl_net_req_hdr hdr;
+ union octep_ctrl_net_req_hdr hdr;
union {
struct octep_ctrl_net_h2f_req_cmd_mtu mtu;
struct octep_ctrl_net_h2f_req_cmd_mac mac;
- struct octep_ctrl_net_h2f_req_cmd_get_stats get_stats;
struct octep_ctrl_net_h2f_req_cmd_state link;
struct octep_ctrl_net_h2f_req_cmd_state rx;
struct octep_ctrl_net_h2f_req_cmd_link_info link_info;
};
} __packed;
-struct octep_ctrl_net_resp_hdr {
- /* sender id */
- u16 sender;
- /* receiver id */
- u16 receiver;
- /* octep_ctrl_net_h2t_cmd */
- u16 cmd;
- /* octep_ctrl_net_reply */
- u16 reply;
+union octep_ctrl_net_resp_hdr {
+ u64 words[1];
+ struct {
+ /* sender id */
+ u16 sender;
+ /* receiver id */
+ u16 receiver;
+ /* octep_ctrl_net_h2t_cmd */
+ u16 cmd;
+ /* octep_ctrl_net_reply */
+ u16 reply;
+ } s;
};
/* get mtu response */
u8 addr[ETH_ALEN];
};
+/* get if_stats, xstats, q_stats response */
+struct octep_ctrl_net_h2f_resp_cmd_get_stats {
+ struct octep_iface_rx_stats rx_stats;
+ struct octep_iface_tx_stats tx_stats;
+};
+
/* get link state, rx state response */
struct octep_ctrl_net_h2f_resp_cmd_state {
/* enum octep_ctrl_net_state */
/* Host to fw response data */
struct octep_ctrl_net_h2f_resp {
- struct octep_ctrl_net_resp_hdr hdr;
+ union octep_ctrl_net_resp_hdr hdr;
union {
struct octep_ctrl_net_h2f_resp_cmd_mtu mtu;
struct octep_ctrl_net_h2f_resp_cmd_mac mac;
+ struct octep_ctrl_net_h2f_resp_cmd_get_stats if_stats;
struct octep_ctrl_net_h2f_resp_cmd_state link;
struct octep_ctrl_net_h2f_resp_cmd_state rx;
struct octep_ctrl_net_link_info link_info;
/* Fw to host request data */
struct octep_ctrl_net_f2h_req {
- struct octep_ctrl_net_req_hdr hdr;
+ union octep_ctrl_net_req_hdr hdr;
union {
struct octep_ctrl_net_f2h_req_cmd_state link;
};
/* Fw to host response data */
struct octep_ctrl_net_f2h_resp {
- struct octep_ctrl_net_resp_hdr hdr;
+ union octep_ctrl_net_resp_hdr hdr;
};
-/* Size of host to fw octep_ctrl_mbox queue element */
-union octep_ctrl_net_h2f_data_sz {
+/* Max data size to be transferred over mbox */
+union octep_ctrl_net_max_data {
struct octep_ctrl_net_h2f_req h2f_req;
struct octep_ctrl_net_h2f_resp h2f_resp;
-};
-
-/* Size of fw to host octep_ctrl_mbox queue element */
-union octep_ctrl_net_f2h_data_sz {
struct octep_ctrl_net_f2h_req f2h_req;
struct octep_ctrl_net_f2h_resp f2h_resp;
};
-/* size of host to fw data in words */
-#define OCTEP_CTRL_NET_H2F_DATA_SZW ((sizeof(union octep_ctrl_net_h2f_data_sz)) / \
- (sizeof(unsigned long)))
-
-/* size of fw to host data in words */
-#define OCTEP_CTRL_NET_F2H_DATA_SZW ((sizeof(union octep_ctrl_net_f2h_data_sz)) / \
- (sizeof(unsigned long)))
-
-/* size in words of get/set mtu request */
-#define OCTEP_CTRL_NET_H2F_MTU_REQ_SZW 2
-/* size in words of get/set mac request */
-#define OCTEP_CTRL_NET_H2F_MAC_REQ_SZW 2
-/* size in words of get stats request */
-#define OCTEP_CTRL_NET_H2F_GET_STATS_REQ_SZW 2
-/* size in words of get/set state request */
-#define OCTEP_CTRL_NET_H2F_STATE_REQ_SZW 2
-/* size in words of get/set link info request */
-#define OCTEP_CTRL_NET_H2F_LINK_INFO_REQ_SZW 4
-
-/* size in words of get mtu response */
-#define OCTEP_CTRL_NET_H2F_GET_MTU_RESP_SZW 2
-/* size in words of set mtu response */
-#define OCTEP_CTRL_NET_H2F_SET_MTU_RESP_SZW 1
-/* size in words of get mac response */
-#define OCTEP_CTRL_NET_H2F_GET_MAC_RESP_SZW 2
-/* size in words of set mac response */
-#define OCTEP_CTRL_NET_H2F_SET_MAC_RESP_SZW 1
-/* size in words of get state request */
-#define OCTEP_CTRL_NET_H2F_GET_STATE_RESP_SZW 2
-/* size in words of set state request */
-#define OCTEP_CTRL_NET_H2F_SET_STATE_RESP_SZW 1
-/* size in words of get link info request */
-#define OCTEP_CTRL_NET_H2F_GET_LINK_INFO_RESP_SZW 4
-/* size in words of set link info request */
-#define OCTEP_CTRL_NET_H2F_SET_LINK_INFO_RESP_SZW 1
+struct octep_ctrl_net_wait_data {
+ struct list_head list;
+ int done;
+ struct octep_ctrl_mbox_msg msg;
+ union {
+ struct octep_ctrl_net_h2f_req req;
+ struct octep_ctrl_net_h2f_resp resp;
+ } data;
+};
+
+/** Initialize data for ctrl net.
+ *
+ * @param oct: non-null pointer to struct octep_device.
+ *
+ * return value: 0 on success, -errno on error.
+ */
+int octep_ctrl_net_init(struct octep_device *oct);
/** Get link status from firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
*
* return value: link status 0=down, 1=up.
*/
-int octep_get_link_status(struct octep_device *oct);
+int octep_ctrl_net_get_link_status(struct octep_device *oct, int vfid);
/** Set link status in firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
* @param up: boolean status.
+ * @param wait_for_response: poll for response.
+ *
+ * return value: 0 on success, -errno on failure
*/
-void octep_set_link_status(struct octep_device *oct, bool up);
+int octep_ctrl_net_set_link_status(struct octep_device *oct, int vfid, bool up,
+ bool wait_for_response);
/** Set rx state in firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
* @param up: boolean status.
+ * @param wait_for_response: poll for response.
+ *
+ * return value: 0 on success, -errno on failure.
*/
-void octep_set_rx_state(struct octep_device *oct, bool up);
+int octep_ctrl_net_set_rx_state(struct octep_device *oct, int vfid, bool up,
+ bool wait_for_response);
/** Get mac address from firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
* @param addr: non-null pointer to mac address.
*
* return value: 0 on success, -errno on failure.
*/
-int octep_get_mac_addr(struct octep_device *oct, u8 *addr);
+int octep_ctrl_net_get_mac_addr(struct octep_device *oct, int vfid, u8 *addr);
/** Set mac address in firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
* @param addr: non-null pointer to mac address.
+ * @param wait_for_response: poll for response.
+ *
+ * return value: 0 on success, -errno on failure.
*/
-int octep_set_mac_addr(struct octep_device *oct, u8 *addr);
+int octep_ctrl_net_set_mac_addr(struct octep_device *oct, int vfid, u8 *addr,
+ bool wait_for_response);
/** Set mtu in firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
* @param mtu: mtu.
+ * @param wait_for_response: poll for response.
+ *
+ * return value: 0 on success, -errno on failure.
*/
-int octep_set_mtu(struct octep_device *oct, int mtu);
+int octep_ctrl_net_set_mtu(struct octep_device *oct, int vfid, int mtu,
+ bool wait_for_response);
/** Get interface statistics from firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
+ * @param rx_stats: non-null pointer to struct octep_iface_rx_stats.
+ * @param tx_stats: non-null pointer to struct octep_iface_tx_stats.
*
* return value: 0 on success, -errno on failure.
*/
-int octep_get_if_stats(struct octep_device *oct);
+int octep_ctrl_net_get_if_stats(struct octep_device *oct, int vfid,
+ struct octep_iface_rx_stats *rx_stats,
+ struct octep_iface_tx_stats *tx_stats);
/** Get link info from firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
+ * @param link_info: non-null pointer to struct octep_iface_link_info.
*
* return value: 0 on success, -errno on failure.
*/
-int octep_get_link_info(struct octep_device *oct);
+int octep_ctrl_net_get_link_info(struct octep_device *oct, int vfid,
+ struct octep_iface_link_info *link_info);
/** Set link info in firmware.
*
* @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
+ * @param link_info: non-null pointer to struct octep_iface_link_info.
+ * @param wait_for_response: poll for response.
+ *
+ * return value: 0 on success, -errno on failure.
+ */
+int octep_ctrl_net_set_link_info(struct octep_device *oct,
+ int vfid,
+ struct octep_iface_link_info *link_info,
+ bool wait_for_response);
+
+/** Poll for firmware messages and process them.
+ *
+ * @param oct: non-null pointer to struct octep_device.
+ */
+void octep_ctrl_net_recv_fw_messages(struct octep_device *oct);
+
+/** Uninitialize data for ctrl net.
+ *
+ * @param oct: non-null pointer to struct octep_device.
+ *
+ * return value: 0 on success, -errno on error.
*/
-int octep_set_link_info(struct octep_device *oct, struct octep_iface_link_info *link_info);
+int octep_ctrl_net_uninit(struct octep_device *oct);
#endif /* __OCTEP_CTRL_NET_H__ */
rx_packets = 0;
rx_bytes = 0;
- octep_get_if_stats(oct);
iface_tx_stats = &oct->iface_tx_stats;
iface_rx_stats = &oct->iface_rx_stats;
+ octep_ctrl_net_get_if_stats(oct,
+ OCTEP_CTRL_NET_INVALID_VFID,
+ iface_rx_stats,
+ iface_tx_stats);
for (q = 0; q < oct->num_oqs; q++) {
struct octep_iq *iq = oct->iq[q];
ethtool_link_ksettings_zero_link_mode(cmd, supported);
ethtool_link_ksettings_zero_link_mode(cmd, advertising);
- octep_get_link_info(oct);
+ link_info = &oct->link_info;
+ octep_ctrl_net_get_link_info(oct, OCTEP_CTRL_NET_INVALID_VFID, link_info);
advertised_modes = oct->link_info.advertised_modes;
supported_modes = oct->link_info.supported_modes;
- link_info = &oct->link_info;
OCTEP_SET_ETHTOOL_LINK_MODES_BITMAP(supported_modes, cmd, supported);
OCTEP_SET_ETHTOOL_LINK_MODES_BITMAP(advertised_modes, cmd, advertising);
link_info_new.speed = cmd->base.speed;
link_info_new.autoneg = autoneg;
- err = octep_set_link_info(oct, &link_info_new);
+ err = octep_ctrl_net_set_link_info(oct, OCTEP_CTRL_NET_INVALID_VFID,
+ &link_info_new, true);
if (err)
return err;
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/aer.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/rtnetlink.h>
#include "octep_main.h"
#include "octep_ctrl_net.h"
+#define OCTEP_INTR_POLL_TIME_MSECS 100
struct workqueue_struct *octep_wq;
/* Supported Devices */
octep_napi_enable(oct);
oct->link_info.admin_up = 1;
- octep_set_rx_state(oct, true);
-
- ret = octep_get_link_status(oct);
- if (!ret)
- octep_set_link_status(oct, true);
+ octep_ctrl_net_set_rx_state(oct, OCTEP_CTRL_NET_INVALID_VFID, true,
+ false);
+ octep_ctrl_net_set_link_status(oct, OCTEP_CTRL_NET_INVALID_VFID, true,
+ false);
+ oct->poll_non_ioq_intr = false;
/* Enable the input and output queues for this Octeon device */
oct->hw_ops.enable_io_queues(oct);
octep_oq_dbell_init(oct);
- ret = octep_get_link_status(oct);
+ ret = octep_ctrl_net_get_link_status(oct, OCTEP_CTRL_NET_INVALID_VFID);
if (ret > 0)
octep_link_up(netdev);
netdev_info(netdev, "Stopping the device ...\n");
+ octep_ctrl_net_set_link_status(oct, OCTEP_CTRL_NET_INVALID_VFID, false,
+ false);
+ octep_ctrl_net_set_rx_state(oct, OCTEP_CTRL_NET_INVALID_VFID, false,
+ false);
+
/* Stop Tx from stack */
netif_tx_stop_all_queues(netdev);
netif_carrier_off(netdev);
netif_tx_disable(netdev);
- octep_set_link_status(oct, false);
- octep_set_rx_state(oct, false);
-
oct->link_info.admin_up = 0;
oct->link_info.oper_up = 0;
oct->hw_ops.reset_io_queues(oct);
octep_free_oqs(oct);
octep_free_iqs(oct);
+
+ oct->poll_non_ioq_intr = true;
+ queue_delayed_work(octep_wq, &oct->intr_poll_task,
+ msecs_to_jiffies(OCTEP_INTR_POLL_TIME_MSECS));
+
netdev_info(netdev, "Device stopped !!\n");
return 0;
}
struct octep_device *oct = netdev_priv(netdev);
int q;
- octep_get_if_stats(oct);
+ if (netif_running(netdev))
+ octep_ctrl_net_get_if_stats(oct,
+ OCTEP_CTRL_NET_INVALID_VFID,
+ &oct->iface_rx_stats,
+ &oct->iface_tx_stats);
+
tx_packets = 0;
tx_bytes = 0;
rx_packets = 0;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
- err = octep_set_mac_addr(oct, addr->sa_data);
+ err = octep_ctrl_net_set_mac_addr(oct, OCTEP_CTRL_NET_INVALID_VFID,
+ addr->sa_data, true);
if (err)
return err;
if (link_info->mtu == new_mtu)
return 0;
- err = octep_set_mtu(oct, new_mtu);
+ err = octep_ctrl_net_set_mtu(oct, OCTEP_CTRL_NET_INVALID_VFID, new_mtu,
+ true);
if (!err) {
oct->link_info.mtu = new_mtu;
netdev->mtu = new_mtu;
};
/**
+ * octep_intr_poll_task - work queue task to process non-ioq interrupts.
+ *
+ * @work: pointer to intr poll work_struct
+ *
+ * Process non-ioq interrupts to handle control mailbox, pfvf mailbox.
+ **/
+static void octep_intr_poll_task(struct work_struct *work)
+{
+ struct octep_device *oct = container_of(work, struct octep_device,
+ intr_poll_task.work);
+
+ if (!oct->poll_non_ioq_intr) {
+ dev_info(&oct->pdev->dev, "Interrupt poll task stopped.\n");
+ return;
+ }
+
+ oct->hw_ops.poll_non_ioq_interrupts(oct);
+ queue_delayed_work(octep_wq, &oct->intr_poll_task,
+ msecs_to_jiffies(OCTEP_INTR_POLL_TIME_MSECS));
+}
+
+/**
+ * octep_hb_timeout_task - work queue task to check firmware heartbeat.
+ *
+ * @work: pointer to hb work_struct
+ *
+ * Check for heartbeat miss count. Uninitialize oct device if miss count
+ * exceeds configured max heartbeat miss count.
+ *
+ **/
+static void octep_hb_timeout_task(struct work_struct *work)
+{
+ struct octep_device *oct = container_of(work, struct octep_device,
+ hb_task.work);
+ int miss_cnt;
+
+ miss_cnt = atomic_inc_return(&oct->hb_miss_cnt);
+ if (miss_cnt < oct->conf->max_hb_miss_cnt) {
+ queue_delayed_work(octep_wq, &oct->hb_task,
+ msecs_to_jiffies(oct->conf->hb_interval * 1000));
+ return;
+ }
+
+ dev_err(&oct->pdev->dev, "Missed %u heartbeats. Uninitializing\n",
+ miss_cnt);
+ rtnl_lock();
+ if (netif_running(oct->netdev))
+ octep_stop(oct->netdev);
+ rtnl_unlock();
+}
+
+/**
* octep_ctrl_mbox_task - work queue task to handle ctrl mbox messages.
*
* @work: pointer to ctrl mbox work_struct
{
struct octep_device *oct = container_of(work, struct octep_device,
ctrl_mbox_task);
- struct net_device *netdev = oct->netdev;
- struct octep_ctrl_net_f2h_req req = {};
- struct octep_ctrl_mbox_msg msg;
- int ret = 0;
-
- msg.msg = &req;
- while (true) {
- ret = octep_ctrl_mbox_recv(&oct->ctrl_mbox, &msg);
- if (ret)
- break;
-
- switch (req.hdr.cmd) {
- case OCTEP_CTRL_NET_F2H_CMD_LINK_STATUS:
- if (netif_running(netdev)) {
- if (req.link.state) {
- dev_info(&oct->pdev->dev, "netif_carrier_on\n");
- netif_carrier_on(netdev);
- } else {
- dev_info(&oct->pdev->dev, "netif_carrier_off\n");
- netif_carrier_off(netdev);
- }
- }
- break;
- default:
- pr_info("Unknown mbox req : %u\n", req.hdr.cmd);
- break;
- }
- }
+
+ octep_ctrl_net_recv_fw_messages(oct);
}
static const char *octep_devid_to_str(struct octep_device *oct)
*/
int octep_device_setup(struct octep_device *oct)
{
- struct octep_ctrl_mbox *ctrl_mbox;
struct pci_dev *pdev = oct->pdev;
int i, ret;
oct->pkind = CFG_GET_IQ_PKIND(oct->conf);
- /* Initialize control mbox */
- ctrl_mbox = &oct->ctrl_mbox;
- ctrl_mbox->barmem = CFG_GET_CTRL_MBOX_MEM_ADDR(oct->conf);
- ret = octep_ctrl_mbox_init(ctrl_mbox);
- if (ret) {
- dev_err(&pdev->dev, "Failed to initialize control mbox\n");
- goto unsupported_dev;
- }
- oct->ctrl_mbox_ifstats_offset = OCTEP_CTRL_MBOX_SZ(ctrl_mbox->h2fq.elem_sz,
- ctrl_mbox->h2fq.elem_cnt,
- ctrl_mbox->f2hq.elem_sz,
- ctrl_mbox->f2hq.elem_cnt);
+ ret = octep_ctrl_net_init(oct);
+ if (ret)
+ return ret;
+ atomic_set(&oct->hb_miss_cnt, 0);
+ INIT_DELAYED_WORK(&oct->hb_task, octep_hb_timeout_task);
+ queue_delayed_work(octep_wq, &oct->hb_task,
+ msecs_to_jiffies(oct->conf->hb_interval * 1000));
return 0;
unsupported_dev:
oct->mbox[i] = NULL;
}
- octep_ctrl_mbox_uninit(&oct->ctrl_mbox);
+ octep_ctrl_net_uninit(oct);
+ cancel_delayed_work_sync(&oct->hb_task);
oct->hw_ops.soft_reset(oct);
for (i = 0; i < OCTEP_MMIO_REGIONS; i++) {
oct->conf = NULL;
}
+static bool get_fw_ready_status(struct pci_dev *pdev)
+{
+ u32 pos = 0;
+ u16 vsec_id;
+ u8 status;
+
+ while ((pos = pci_find_next_ext_capability(pdev, pos,
+ PCI_EXT_CAP_ID_VNDR))) {
+ pci_read_config_word(pdev, pos + 4, &vsec_id);
+#define FW_STATUS_VSEC_ID 0xA3
+ if (vsec_id != FW_STATUS_VSEC_ID)
+ continue;
+
+ pci_read_config_byte(pdev, (pos + 8), &status);
+ dev_info(&pdev->dev, "Firmware ready status = %u\n", status);
+ return status;
+ }
+ return false;
+}
+
/**
* octep_probe() - Octeon PCI device probe handler.
*
goto err_pci_regions;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
+ if (!get_fw_ready_status(pdev)) {
+ dev_notice(&pdev->dev, "Firmware not ready; defer probe.\n");
+ err = -EPROBE_DEFER;
+ goto err_alloc_netdev;
+ }
+
netdev = alloc_etherdev_mq(sizeof(struct octep_device),
OCTEP_MAX_QUEUES);
if (!netdev) {
}
INIT_WORK(&octep_dev->tx_timeout_task, octep_tx_timeout_task);
INIT_WORK(&octep_dev->ctrl_mbox_task, octep_ctrl_mbox_task);
+ INIT_DELAYED_WORK(&octep_dev->intr_poll_task, octep_intr_poll_task);
+ octep_dev->poll_non_ioq_intr = true;
+ queue_delayed_work(octep_wq, &octep_dev->intr_poll_task,
+ msecs_to_jiffies(OCTEP_INTR_POLL_TIME_MSECS));
netdev->netdev_ops = &octep_netdev_ops;
octep_set_ethtool_ops(netdev);
netdev->max_mtu = OCTEP_MAX_MTU;
netdev->mtu = OCTEP_DEFAULT_MTU;
- err = octep_get_mac_addr(octep_dev, octep_dev->mac_addr);
+ err = octep_ctrl_net_get_mac_addr(octep_dev, OCTEP_CTRL_NET_INVALID_VFID,
+ octep_dev->mac_addr);
if (err) {
dev_err(&pdev->dev, "Failed to get mac address\n");
goto register_dev_err;
err_octep_config:
free_netdev(netdev);
err_alloc_netdev:
- pci_disable_pcie_error_reporting(pdev);
pci_release_mem_regions(pdev);
err_pci_regions:
err_dma_mask:
if (netdev->reg_state == NETREG_REGISTERED)
unregister_netdev(netdev);
+ oct->poll_non_ioq_intr = false;
+ cancel_delayed_work_sync(&oct->intr_poll_task);
octep_device_cleanup(oct);
pci_release_mem_regions(pdev);
free_netdev(netdev);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
}
void (*enable_interrupts)(struct octep_device *oct);
void (*disable_interrupts)(struct octep_device *oct);
+ bool (*poll_non_ioq_interrupts)(struct octep_device *oct);
void (*enable_io_queues)(struct octep_device *oct);
void (*disable_io_queues)(struct octep_device *oct);
/* Work entry to handle ctrl mbox interrupt */
struct work_struct ctrl_mbox_task;
-
+ /* Wait queue for host to firmware requests */
+ wait_queue_head_t ctrl_req_wait_q;
+ /* List of objects waiting for h2f response */
+ struct list_head ctrl_req_wait_list;
+
+ /* Enable non-ioq interrupt polling */
+ bool poll_non_ioq_intr;
+ /* Work entry to poll non-ioq interrupts */
+ struct delayed_work intr_poll_task;
+
+ /* Firmware heartbeat timer */
+ struct timer_list hb_timer;
+ /* Firmware heartbeat miss count tracked by timer */
+ atomic_t hb_miss_cnt;
+ /* Task to reset device on heartbeat miss */
+ struct delayed_work hb_task;
};
static inline u16 OCTEP_MAJOR_REV(struct octep_device *oct)
/* Number of non-queue interrupts in CN93xx */
#define CN93_NUM_NON_IOQ_INTR 16
+
+/* bit 0 for control mbox interrupt */
+#define CN93_SDP_EPF_OEI_RINT_DATA_BIT_MBOX BIT_ULL(0)
+/* bit 1 for firmware heartbeat interrupt */
+#define CN93_SDP_EPF_OEI_RINT_DATA_BIT_HBEAT BIT_ULL(1)
+
#endif /* _OCTEP_REGS_CN9K_PF_H_ */
struct nix_cq_ctx_s cq;
struct nix_rsse_s rss;
struct nix_rx_mce_s mce;
- u64 prof;
+ struct nix_bandprof_s prof;
};
union {
struct nix_rq_ctx_s rq_mask;
struct nix_cq_ctx_s cq_mask;
struct nix_rsse_s rss_mask;
struct nix_rx_mce_s mce_mask;
- u64 prof_mask;
+ struct nix_bandprof_s prof_mask;
};
};
.suspend = pxa168_eth_suspend,
.driver = {
.name = DRIVER_NAME,
- .of_match_table = of_match_ptr(pxa168_eth_of_match),
+ .of_match_table = pxa168_eth_of_match,
},
};
select DIMLIB
select PAGE_POOL
select PAGE_POOL_STATS
+ select PCS_MTK_LYNXI
+ select REGMAP_MMIO
help
This driver supports the gigabit ethernet MACs in the
MediaTek SoC family.
#
obj-$(CONFIG_NET_MEDIATEK_SOC) += mtk_eth.o
-mtk_eth-y := mtk_eth_soc.o mtk_sgmii.o mtk_eth_path.o mtk_ppe.o mtk_ppe_debugfs.o mtk_ppe_offload.o
+mtk_eth-y := mtk_eth_soc.o mtk_eth_path.o mtk_ppe.o mtk_ppe_debugfs.o mtk_ppe_offload.o
mtk_eth-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed.o mtk_wed_mcu.o mtk_wed_wo.o
ifdef CONFIG_DEBUG_FS
mtk_eth-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed_debugfs.o
static int set_mux_u3_gmac2_to_qphy(struct mtk_eth *eth, int path)
{
- unsigned int val = 0;
+ unsigned int val = 0, mask = 0, reg = 0;
bool updated = true;
switch (path) {
case MTK_ETH_PATH_GMAC2_SGMII:
- val = CO_QPHY_SEL;
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_U3_COPHY_V2)) {
+ reg = USB_PHY_SWITCH_REG;
+ val = SGMII_QPHY_SEL;
+ mask = QPHY_SEL_MASK;
+ } else {
+ reg = INFRA_MISC2;
+ val = CO_QPHY_SEL;
+ mask = val;
+ }
break;
default:
updated = false;
}
if (updated)
- regmap_update_bits(eth->infra, INFRA_MISC2, CO_QPHY_SEL, val);
+ regmap_update_bits(eth->infra, reg, mask, val);
dev_dbg(eth->dev, "path %s in %s updated = %d\n",
mtk_eth_path_name(path), __func__, updated);
#include <linux/interrupt.h>
#include <linux/pinctrl/devinfo.h>
#include <linux/phylink.h>
+#include <linux/pcs/pcs-mtk-lynxi.h>
#include <linux/jhash.h>
#include <linux/bitfield.h>
#include <net/dsa.h>
{
u32 val;
- /* Check DDR memory type.
- * Currently TRGMII mode with DDR2 memory is not supported.
- */
- regmap_read(eth->ethsys, ETHSYS_SYSCFG, &val);
- if (interface == PHY_INTERFACE_MODE_TRGMII &&
- val & SYSCFG_DRAM_TYPE_DDR2) {
- dev_err(eth->dev,
- "TRGMII mode with DDR2 memory is not supported!\n");
- return -EOPNOTSUPP;
- }
-
val = (interface == PHY_INTERFACE_MODE_TRGMII) ?
ETHSYS_TRGMII_MT7621_DDR_PLL : 0;
static void mtk_gmac0_rgmii_adjust(struct mtk_eth *eth,
phy_interface_t interface, int speed)
{
- u32 val;
+ unsigned long rate;
+ u32 tck, rck, intf;
int ret;
if (interface == PHY_INTERFACE_MODE_TRGMII) {
mtk_w32(eth, TRGMII_MODE, INTF_MODE);
- val = 500000000;
- ret = clk_set_rate(eth->clks[MTK_CLK_TRGPLL], val);
+ ret = clk_set_rate(eth->clks[MTK_CLK_TRGPLL], 500000000);
if (ret)
dev_err(eth->dev, "Failed to set trgmii pll: %d\n", ret);
return;
}
- val = (speed == SPEED_1000) ?
- INTF_MODE_RGMII_1000 : INTF_MODE_RGMII_10_100;
- mtk_w32(eth, val, INTF_MODE);
+ if (speed == SPEED_1000) {
+ intf = INTF_MODE_RGMII_1000;
+ rate = 250000000;
+ rck = RCK_CTRL_RGMII_1000;
+ tck = TCK_CTRL_RGMII_1000;
+ } else {
+ intf = INTF_MODE_RGMII_10_100;
+ rate = 500000000;
+ rck = RCK_CTRL_RGMII_10_100;
+ tck = TCK_CTRL_RGMII_10_100;
+ }
+
+ mtk_w32(eth, intf, INTF_MODE);
regmap_update_bits(eth->ethsys, ETHSYS_CLKCFG0,
ETHSYS_TRGMII_CLK_SEL362_5,
ETHSYS_TRGMII_CLK_SEL362_5);
- val = (speed == SPEED_1000) ? 250000000 : 500000000;
- ret = clk_set_rate(eth->clks[MTK_CLK_TRGPLL], val);
+ ret = clk_set_rate(eth->clks[MTK_CLK_TRGPLL], rate);
if (ret)
dev_err(eth->dev, "Failed to set trgmii pll: %d\n", ret);
- val = (speed == SPEED_1000) ?
- RCK_CTRL_RGMII_1000 : RCK_CTRL_RGMII_10_100;
- mtk_w32(eth, val, TRGMII_RCK_CTRL);
-
- val = (speed == SPEED_1000) ?
- TCK_CTRL_RGMII_1000 : TCK_CTRL_RGMII_10_100;
- mtk_w32(eth, val, TRGMII_TCK_CTRL);
+ mtk_w32(eth, rck, TRGMII_RCK_CTRL);
+ mtk_w32(eth, tck, TRGMII_TCK_CTRL);
}
static struct phylink_pcs *mtk_mac_select_pcs(struct phylink_config *config,
sid = (MTK_HAS_CAPS(eth->soc->caps, MTK_SHARED_SGMII)) ?
0 : mac->id;
- return mtk_sgmii_select_pcs(eth->sgmii, sid);
+ return eth->sgmii_pcs[sid];
}
return NULL;
/* Setup soc pin functions */
switch (state->interface) {
case PHY_INTERFACE_MODE_TRGMII:
- if (mac->id)
- goto err_phy;
- if (!MTK_HAS_CAPS(mac->hw->soc->caps,
- MTK_GMAC1_TRGMII))
- goto err_phy;
- fallthrough;
case PHY_INTERFACE_MODE_RGMII_TXID:
case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_ID:
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_MII:
- case PHY_INTERFACE_MODE_REVMII:
- case PHY_INTERFACE_MODE_RMII:
if (MTK_HAS_CAPS(eth->soc->caps, MTK_RGMII)) {
err = mtk_gmac_rgmii_path_setup(eth, mac->id);
if (err)
case PHY_INTERFACE_MODE_1000BASEX:
case PHY_INTERFACE_MODE_2500BASEX:
case PHY_INTERFACE_MODE_SGMII:
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII)) {
- err = mtk_gmac_sgmii_path_setup(eth, mac->id);
- if (err)
- goto init_err;
- }
+ err = mtk_gmac_sgmii_path_setup(eth, mac->id);
+ if (err)
+ goto init_err;
break;
case PHY_INTERFACE_MODE_GMII:
if (MTK_HAS_CAPS(eth->soc->caps, MTK_GEPHY)) {
}
}
- ge_mode = 0;
switch (state->interface) {
case PHY_INTERFACE_MODE_MII:
case PHY_INTERFACE_MODE_GMII:
ge_mode = 1;
break;
- case PHY_INTERFACE_MODE_REVMII:
- ge_mode = 2;
- break;
- case PHY_INTERFACE_MODE_RMII:
- if (mac->id)
- goto err_phy;
- ge_mode = 3;
- break;
default:
+ ge_mode = 0;
break;
}
static int mtk_mdio_init(struct mtk_eth *eth)
{
+ unsigned int max_clk = 2500000, divider;
struct device_node *mii_np;
int ret;
+ u32 val;
mii_np = of_get_child_by_name(eth->dev->of_node, "mdio-bus");
if (!mii_np) {
eth->mii_bus->parent = eth->dev;
snprintf(eth->mii_bus->id, MII_BUS_ID_SIZE, "%pOFn", mii_np);
+
+ if (!of_property_read_u32(mii_np, "clock-frequency", &val)) {
+ if (val > MDC_MAX_FREQ || val < MDC_MAX_FREQ / MDC_MAX_DIVIDER) {
+ dev_err(eth->dev, "MDIO clock frequency out of range");
+ ret = -EINVAL;
+ goto err_put_node;
+ }
+ max_clk = val;
+ }
+ divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63);
+
+ /* Configure MDC Divider */
+ val = mtk_r32(eth, MTK_PPSC);
+ val &= ~PPSC_MDC_CFG;
+ val |= FIELD_PREP(PPSC_MDC_CFG, divider) | PPSC_MDC_TURBO;
+ mtk_w32(eth, val, MTK_PPSC);
+
+ dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / divider);
+
ret = of_mdiobus_register(eth->mii_bus, mii_np);
err_put_node:
return 0;
}
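The divider programmed above is just the 25 MHz MDC reference divided by the requested bus clock, rounded up and clamped to 63, mirroring the MDC_MAX_FREQ / MDC_MAX_DIVIDER values defined in the header changes below. A standalone check of that arithmetic (sample frequencies chosen for illustration):

#include <stdio.h>

#define MDC_MAX_FREQ	25000000
#define MDC_MAX_DIVIDER	63

static unsigned int mdc_divider(unsigned int requested_hz)
{
	/* DIV_ROUND_UP(MDC_MAX_FREQ, requested_hz), clamped to 63 */
	unsigned int div = (MDC_MAX_FREQ + requested_hz - 1) / requested_hz;

	return div > MDC_MAX_DIVIDER ? MDC_MAX_DIVIDER : div;
}

int main(void)
{
	/* 2.5 MHz default -> divider 10 -> MDC runs at 2.5 MHz;
	 * 12 MHz request  -> divider 3  -> MDC runs at about 8.33 MHz
	 */
	printf("%u %u\n", mdc_divider(2500000), mdc_divider(12000000));
	return 0;
}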
+static void mtk_sgmii_destroy(struct mtk_eth *eth)
+{
+ int i;
+
+ for (i = 0; i < MTK_MAX_DEVS; i++)
+ mtk_pcs_lynxi_destroy(eth->sgmii_pcs[i]);
+}
+
static int mtk_cleanup(struct mtk_eth *eth)
{
+ mtk_sgmii_destroy(eth);
mtk_unreg_dev(eth);
mtk_free_dev(eth);
cancel_work_sync(ð->pending_work);
struct mtk_mac *mac;
int id, err;
int txqs = 1;
+ u32 val;
if (!_id) {
dev_err(eth->dev, "missing mac id\n");
__set_bit(PHY_INTERFACE_MODE_TRGMII,
mac->phylink_config.supported_interfaces);
+ /* TRGMII is not permitted on MT7621 if using DDR2 */
+ if (MTK_HAS_CAPS(mac->hw->soc->caps, MTK_GMAC1_TRGMII) &&
+ MTK_HAS_CAPS(mac->hw->soc->caps, MTK_TRGMII_MT7621_CLK)) {
+ regmap_read(eth->ethsys, ETHSYS_SYSCFG, &val);
+ if (val & SYSCFG_DRAM_TYPE_DDR2)
+ __clear_bit(PHY_INTERFACE_MODE_TRGMII,
+ mac->phylink_config.supported_interfaces);
+ }
+
if (MTK_HAS_CAPS(mac->hw->soc->caps, MTK_SGMII)) {
__set_bit(PHY_INTERFACE_MODE_SGMII,
mac->phylink_config.supported_interfaces);
rtnl_unlock();
}
+static int mtk_sgmii_init(struct mtk_eth *eth)
+{
+ struct device_node *np;
+ struct regmap *regmap;
+ u32 flags;
+ int i;
+
+ for (i = 0; i < MTK_MAX_DEVS; i++) {
+ np = of_parse_phandle(eth->dev->of_node, "mediatek,sgmiisys", i);
+ if (!np)
+ break;
+
+ regmap = syscon_node_to_regmap(np);
+ flags = 0;
+ if (of_property_read_bool(np, "mediatek,pnswap"))
+ flags |= MTK_SGMII_FLAG_PN_SWAP;
+
+ of_node_put(np);
+
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+ eth->sgmii_pcs[i] = mtk_pcs_lynxi_create(eth->dev, regmap,
+ eth->soc->ana_rgc3,
+ flags);
+ }
+
+ return 0;
+}
+
static int mtk_probe(struct platform_device *pdev)
{
struct resource *res = NULL;
}
if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII)) {
- eth->sgmii = devm_kzalloc(eth->dev, sizeof(*eth->sgmii),
- GFP_KERNEL);
- if (!eth->sgmii)
- return -ENOMEM;
-
- err = mtk_sgmii_init(eth->sgmii, pdev->dev.of_node,
- eth->soc->ana_rgc3);
+ err = mtk_sgmii_init(eth);
if (err)
return err;
"mediatek,pctl");
if (IS_ERR(eth->pctl)) {
dev_err(&pdev->dev, "no pctl regmap found\n");
- return PTR_ERR(eth->pctl);
+ err = PTR_ERR(eth->pctl);
+ goto err_destroy_sgmii;
}
}
if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res)
- return -EINVAL;
+ if (!res) {
+ err = -EINVAL;
+ goto err_destroy_sgmii;
+ }
}
if (eth->soc->offload_version) {
for (i = 0; i < num_ppe; i++) {
u32 ppe_addr = eth->soc->reg_map->ppe_base + i * 0x400;
- eth->ppe[i] = mtk_ppe_init(eth, eth->base + ppe_addr,
- eth->soc->offload_version, i);
+ eth->ppe[i] = mtk_ppe_init(eth, eth->base + ppe_addr, i);
+
if (!eth->ppe[i]) {
err = -ENOMEM;
goto err_deinit_ppe;
mtk_hw_deinit(eth);
err_wed_exit:
mtk_wed_exit();
+err_destroy_sgmii:
+ mtk_sgmii_destroy(eth);
return err;
}
.required_pctl = false,
.offload_version = 2,
.hash_offset = 2,
+ .has_accounting = true,
.foe_entry_size = sizeof(struct mtk_foe_entry) - 16,
.txrx = {
.txd_size = sizeof(struct mtk_tx_dma),
.hw_features = MTK_HW_FEATURES,
.required_clks = MT7629_CLKS_BITMAP,
.required_pctl = false,
+ .has_accounting = true,
.txrx = {
.txd_size = sizeof(struct mtk_tx_dma),
.rxd_size = sizeof(struct mtk_rx_dma),
},
};
+static const struct mtk_soc_data mt7981_data = {
+ .reg_map = &mt7986_reg_map,
+ .ana_rgc3 = 0x128,
+ .caps = MT7981_CAPS,
+ .hw_features = MTK_HW_FEATURES,
+ .required_clks = MT7981_CLKS_BITMAP,
+ .required_pctl = false,
+ .offload_version = 2,
+ .hash_offset = 4,
+ .foe_entry_size = sizeof(struct mtk_foe_entry),
+ .has_accounting = true,
+ .txrx = {
+ .txd_size = sizeof(struct mtk_tx_dma_v2),
+ .rxd_size = sizeof(struct mtk_rx_dma_v2),
+ .rx_irq_done_mask = MTK_RX_DONE_INT_V2,
+ .rx_dma_l4_valid = RX_DMA_L4_VALID_V2,
+ .dma_max_len = MTK_TX_DMA_BUF_LEN_V2,
+ .dma_len_offset = 8,
+ },
+};
+
static const struct mtk_soc_data mt7986_data = {
.reg_map = &mt7986_reg_map,
.ana_rgc3 = 0x128,
.offload_version = 2,
.hash_offset = 4,
.foe_entry_size = sizeof(struct mtk_foe_entry),
+ .has_accounting = true,
.txrx = {
.txd_size = sizeof(struct mtk_tx_dma_v2),
.rxd_size = sizeof(struct mtk_rx_dma_v2),
{ .compatible = "mediatek,mt7622-eth", .data = &mt7622_data},
{ .compatible = "mediatek,mt7623-eth", .data = &mt7623_data},
{ .compatible = "mediatek,mt7629-eth", .data = &mt7629_data},
+ { .compatible = "mediatek,mt7981-eth", .data = &mt7981_data},
{ .compatible = "mediatek,mt7986-eth", .data = &mt7986_data},
{ .compatible = "ralink,rt5350-eth", .data = &rt5350_data},
{},
#define RX_DMA_VTAG_V2 BIT(0)
#define RX_DMA_L4_VALID_V2 BIT(2)
+/* PHY Polling and SMI Master Control registers */
+#define MTK_PPSC 0x10000
+#define PPSC_MDC_CFG GENMASK(29, 24)
+#define PPSC_MDC_TURBO BIT(20)
+#define MDC_MAX_FREQ 25000000
+#define MDC_MAX_DIVIDER 63
+
/* PHY Indirect Access Control registers */
#define MTK_PHY_IAC 0x10004
#define PHY_IAC_ACCESS BIT(31)
#define ETHSYS_DMA_AG_MAP_QDMA BIT(1)
#define ETHSYS_DMA_AG_MAP_PPE BIT(2)
-/* SGMII subsystem config registers */
-/* BMCR (low 16) BMSR (high 16) */
-#define SGMSYS_PCS_CONTROL_1 0x0
-#define SGMII_BMCR GENMASK(15, 0)
-#define SGMII_BMSR GENMASK(31, 16)
-#define SGMII_AN_RESTART BIT(9)
-#define SGMII_ISOLATE BIT(10)
-#define SGMII_AN_ENABLE BIT(12)
-#define SGMII_LINK_STATYS BIT(18)
-#define SGMII_AN_ABILITY BIT(19)
-#define SGMII_AN_COMPLETE BIT(21)
-#define SGMII_PCS_FAULT BIT(23)
-#define SGMII_AN_EXPANSION_CLR BIT(30)
-
-#define SGMSYS_PCS_ADVERTISE 0x8
-#define SGMII_ADVERTISE GENMASK(15, 0)
-#define SGMII_LPA GENMASK(31, 16)
-
-/* Register to programmable link timer, the unit in 2 * 8ns */
-#define SGMSYS_PCS_LINK_TIMER 0x18
-#define SGMII_LINK_TIMER_MASK GENMASK(19, 0)
-#define SGMII_LINK_TIMER_DEFAULT (0x186a0 & SGMII_LINK_TIMER_MASK)
-
-/* Register to control remote fault */
-#define SGMSYS_SGMII_MODE 0x20
-#define SGMII_IF_MODE_SGMII BIT(0)
-#define SGMII_SPEED_DUPLEX_AN BIT(1)
-#define SGMII_SPEED_MASK GENMASK(3, 2)
-#define SGMII_SPEED_10 FIELD_PREP(SGMII_SPEED_MASK, 0)
-#define SGMII_SPEED_100 FIELD_PREP(SGMII_SPEED_MASK, 1)
-#define SGMII_SPEED_1000 FIELD_PREP(SGMII_SPEED_MASK, 2)
-#define SGMII_DUPLEX_HALF BIT(4)
-#define SGMII_IF_MODE_BIT5 BIT(5)
-#define SGMII_REMOTE_FAULT_DIS BIT(8)
-#define SGMII_CODE_SYNC_SET_VAL BIT(9)
-#define SGMII_CODE_SYNC_SET_EN BIT(10)
-#define SGMII_SEND_AN_ERROR_EN BIT(11)
-#define SGMII_IF_MODE_MASK GENMASK(5, 1)
-
-/* Register to reset SGMII design */
-#define SGMII_RESERVED_0 0x34
-#define SGMII_SW_RESET BIT(0)
-
-/* Register to set SGMII speed, ANA RG_ Control Signals III*/
-#define SGMSYS_ANA_RG_CS3 0x2028
-#define RG_PHY_SPEED_MASK (BIT(2) | BIT(3))
-#define RG_PHY_SPEED_1_25G 0x0
-#define RG_PHY_SPEED_3_125G BIT(2)
-
-/* Register to power up QPHY */
-#define SGMSYS_QPHY_PWR_STATE_CTRL 0xe8
-#define SGMII_PHYA_PWD BIT(4)
-
/* Infrasys subsystem config registers */
#define INFRA_MISC2 0x70c
#define CO_QPHY_SEL BIT(0)
#define GEPHY_MAC_SEL BIT(1)
+/* Top misc registers */
+#define USB_PHY_SWITCH_REG 0x218
+#define QPHY_SEL_MASK GENMASK(1, 0)
+#define SGMII_QPHY_SEL 0x2
+
/* MT7628/88 specific stuff */
#define MT7628_PDMA_OFFSET 0x0800
#define MT7628_SDM_OFFSET 0x0c00
BIT(MTK_CLK_SGMII2_CDR_FB) | \
BIT(MTK_CLK_SGMII_CK) | \
BIT(MTK_CLK_ETH2PLL) | BIT(MTK_CLK_SGMIITOP))
+#define MT7981_CLKS_BITMAP (BIT(MTK_CLK_FE) | BIT(MTK_CLK_GP2) | BIT(MTK_CLK_GP1) | \
+ BIT(MTK_CLK_WOCPU0) | \
+ BIT(MTK_CLK_SGMII_TX_250M) | \
+ BIT(MTK_CLK_SGMII_RX_250M) | \
+ BIT(MTK_CLK_SGMII_CDR_REF) | \
+ BIT(MTK_CLK_SGMII_CDR_FB) | \
+ BIT(MTK_CLK_SGMII2_TX_250M) | \
+ BIT(MTK_CLK_SGMII2_RX_250M) | \
+ BIT(MTK_CLK_SGMII2_CDR_REF) | \
+ BIT(MTK_CLK_SGMII2_CDR_FB) | \
+ BIT(MTK_CLK_SGMII_CK))
#define MT7986_CLKS_BITMAP (BIT(MTK_CLK_FE) | BIT(MTK_CLK_GP2) | BIT(MTK_CLK_GP1) | \
BIT(MTK_CLK_WOCPU1) | BIT(MTK_CLK_WOCPU0) | \
BIT(MTK_CLK_SGMII_TX_250M) | \
MTK_NETSYS_V2_BIT,
MTK_SOC_MT7628_BIT,
MTK_RSTCTRL_PPE1_BIT,
+ MTK_U3_COPHY_V2_BIT,
/* MUX BITS*/
MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT,
#define MTK_NETSYS_V2 BIT(MTK_NETSYS_V2_BIT)
#define MTK_SOC_MT7628 BIT(MTK_SOC_MT7628_BIT)
#define MTK_RSTCTRL_PPE1 BIT(MTK_RSTCTRL_PPE1_BIT)
+#define MTK_U3_COPHY_V2 BIT(MTK_U3_COPHY_V2_BIT)
#define MTK_ETH_MUX_GDM1_TO_GMAC1_ESW \
BIT(MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT)
MTK_MUX_U3_GMAC2_TO_QPHY | \
MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA)
+#define MT7981_CAPS (MTK_GMAC1_SGMII | MTK_GMAC2_SGMII | MTK_GMAC2_GEPHY | \
+ MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA | \
+ MTK_MUX_U3_GMAC2_TO_QPHY | MTK_U3_COPHY_V2 | \
+ MTK_NETSYS_V2 | MTK_RSTCTRL_PPE1)
+
#define MT7986_CAPS (MTK_GMAC1_SGMII | MTK_GMAC2_SGMII | \
MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA | \
MTK_NETSYS_V2 | MTK_RSTCTRL_PPE1)
* the extra setup for those pins used by GMAC.
* @hash_offset Flow table hash offset.
* @foe_entry_size Foe table entry size.
+ * @has_accounting Bool indicating support for accounting of
+ * offloaded flows.
* @txd_size Tx DMA descriptor size.
* @rxd_size Rx DMA descriptor size.
* @rx_irq_done_mask Rx irq done register mask.
u8 hash_offset;
u16 foe_entry_size;
netdev_features_t hw_features;
+ bool has_accounting;
struct {
u32 txd_size;
u32 rxd_size;
/* currently no SoC has more than 2 macs */
#define MTK_MAX_DEVS 2
-/* struct mtk_pcs - This structure holds each sgmii regmap and associated
- * data
- * @regmap: The register map pointing at the range used to setup
- * SGMII modes
- * @ana_rgc3: The offset refers to register ANA_RGC3 related to regmap
- * @interface: Currently configured interface mode
- * @pcs: Phylink PCS structure
- */
-struct mtk_pcs {
- struct regmap *regmap;
- u32 ana_rgc3;
- phy_interface_t interface;
- struct phylink_pcs pcs;
-};
-
-/* struct mtk_sgmii - This is the structure holding sgmii regmap and its
- * characteristics
- * @pcs Array of individual PCS structures
- */
-struct mtk_sgmii {
- struct mtk_pcs pcs[MTK_MAX_DEVS];
-};
-
/* struct mtk_eth - This is the main datasructure for holding the state
* of the driver
* @dev: The device pointer
* MII modes
* @infra: The register map pointing at the range used to setup
* SGMII and GePHY path
+ * @sgmii_pcs: Pointers to mtk-pcs-lynxi phylink_pcs instances
* @pctl: The register map pointing at the range used to setup
* GMAC port drive/slew values
* @dma_refcnt: track how many netdevs are using the DMA engine
u32 msg_enable;
unsigned long sysclk;
struct regmap *ethsys;
- struct regmap *infra;
- struct mtk_sgmii *sgmii;
+ struct regmap *infra;
+ struct phylink_pcs *sgmii_pcs[MTK_MAX_DEVS];
struct regmap *pctl;
bool hwlro;
refcount_t dma_refcnt;
void mtk_w32(struct mtk_eth *eth, u32 val, unsigned reg);
u32 mtk_r32(struct mtk_eth *eth, unsigned reg);
-struct phylink_pcs *mtk_sgmii_select_pcs(struct mtk_sgmii *ss, int id);
-int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *np,
- u32 ana_rgc3);
-
int mtk_gmac_sgmii_path_setup(struct mtk_eth *eth, int mac_id);
int mtk_gmac_gephy_path_setup(struct mtk_eth *eth, int mac_id);
int mtk_gmac_rgmii_path_setup(struct mtk_eth *eth, int mac_id);
return ret;
}
+static int mtk_ppe_mib_wait_busy(struct mtk_ppe *ppe)
+{
+ int ret;
+ u32 val;
+
+ ret = readl_poll_timeout(ppe->base + MTK_PPE_MIB_SER_CR, val,
+ !(val & MTK_PPE_MIB_SER_CR_ST),
+ 20, MTK_PPE_WAIT_TIMEOUT_US);
+
+ if (ret)
+ dev_err(ppe->dev, "MIB table busy");
+
+ return ret;
+}
+
+static int mtk_mib_entry_read(struct mtk_ppe *ppe, u16 index, u64 *bytes, u64 *packets)
+{
+ u32 byte_cnt_low, byte_cnt_high, pkt_cnt_low, pkt_cnt_high;
+ u32 val, cnt_r0, cnt_r1, cnt_r2;
+ int ret;
+
+ val = FIELD_PREP(MTK_PPE_MIB_SER_CR_ADDR, index) | MTK_PPE_MIB_SER_CR_ST;
+ ppe_w32(ppe, MTK_PPE_MIB_SER_CR, val);
+
+ ret = mtk_ppe_mib_wait_busy(ppe);
+ if (ret)
+ return ret;
+
+ cnt_r0 = readl(ppe->base + MTK_PPE_MIB_SER_R0);
+ cnt_r1 = readl(ppe->base + MTK_PPE_MIB_SER_R1);
+ cnt_r2 = readl(ppe->base + MTK_PPE_MIB_SER_R2);
+
+ byte_cnt_low = FIELD_GET(MTK_PPE_MIB_SER_R0_BYTE_CNT_LOW, cnt_r0);
+ byte_cnt_high = FIELD_GET(MTK_PPE_MIB_SER_R1_BYTE_CNT_HIGH, cnt_r1);
+ pkt_cnt_low = FIELD_GET(MTK_PPE_MIB_SER_R1_PKT_CNT_LOW, cnt_r1);
+ pkt_cnt_high = FIELD_GET(MTK_PPE_MIB_SER_R2_PKT_CNT_HIGH, cnt_r2);
+ *bytes = ((u64)byte_cnt_high << 32) | byte_cnt_low;
+ *packets = (pkt_cnt_high << 16) | pkt_cnt_low;
+
+ return 0;
+}
+
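mtk_mib_entry_read() above stitches the 64-bit byte counter together from a 32-bit low word and a high word, and the packet counter from 16-bit low and high parts; the (u64) cast matters because shifting a 32-bit value by 32 bits would discard the high half (it is undefined in C). A tiny standalone version of the same assembly, with register field values invented for illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t byte_cnt_low = 0x89abcdef, byte_cnt_high = 0x1234;
	uint32_t pkt_cnt_low = 0xbeef, pkt_cnt_high = 0x12;

	/* widen before shifting past bit 31 */
	uint64_t bytes = ((uint64_t)byte_cnt_high << 32) | byte_cnt_low;
	uint64_t packets = (pkt_cnt_high << 16) | pkt_cnt_low;

	/* prints bytes=0x123489abcdef packets=0x12beef */
	printf("bytes=0x%llx packets=0x%llx\n",
	       (unsigned long long)bytes, (unsigned long long)packets);
	return 0;
}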
static void mtk_ppe_cache_clear(struct mtk_ppe *ppe)
{
ppe_set(ppe, MTK_PPE_CACHE_CTL, MTK_PPE_CACHE_CTL_CLEAR);
hwe->ib1 |= FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_INVALID);
dma_wmb();
mtk_ppe_cache_clear(ppe);
+
+ if (ppe->accounting) {
+ struct mtk_foe_accounting *acct;
+
+ acct = ppe->acct_table + entry->hash * sizeof(*acct);
+ acct->packets = 0;
+ acct->bytes = 0;
+ }
}
entry->hash = 0xffff;
wmb();
hwe->ib1 = entry->ib1;
+ if (ppe->accounting)
+ *mtk_foe_entry_ib2(eth, hwe) |= MTK_FOE_IB2_MIB_CNT;
+
dma_wmb();
mtk_ppe_cache_clear(ppe);
return mtk_ppe_wait_busy(ppe);
}
-struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
- int version, int index)
+struct mtk_foe_accounting *mtk_foe_entry_get_mib(struct mtk_ppe *ppe, u32 index,
+ struct mtk_foe_accounting *diff)
{
+ struct mtk_foe_accounting *acct;
+ int size = sizeof(struct mtk_foe_accounting);
+ u64 bytes, packets;
+
+ if (!ppe->accounting)
+ return NULL;
+
+ if (mtk_mib_entry_read(ppe, index, &bytes, &packets))
+ return NULL;
+
+ acct = ppe->acct_table + index * size;
+
+ acct->bytes += bytes;
+ acct->packets += packets;
+
+ if (diff) {
+ diff->bytes = bytes;
+ diff->packets = packets;
+ }
+
+ return acct;
+}
+
+struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, int index)
+{
+ bool accounting = eth->soc->has_accounting;
const struct mtk_soc_data *soc = eth->soc;
+ struct mtk_foe_accounting *acct;
struct device *dev = eth->dev;
+ struct mtk_mib_entry *mib;
struct mtk_ppe *ppe;
u32 foe_flow_size;
void *foe;
ppe->base = base;
ppe->eth = eth;
ppe->dev = dev;
- ppe->version = version;
+ ppe->version = eth->soc->offload_version;
+ ppe->accounting = accounting;
foe = dmam_alloc_coherent(ppe->dev,
MTK_PPE_ENTRIES * soc->foe_entry_size,
if (!ppe->foe_flow)
goto err_free_l2_flows;
+ if (accounting) {
+ mib = dmam_alloc_coherent(ppe->dev, MTK_PPE_ENTRIES * sizeof(*mib),
+ &ppe->mib_phys, GFP_KERNEL);
+ if (!mib)
+ return NULL;
+
+ ppe->mib_table = mib;
+
+ acct = devm_kzalloc(dev, MTK_PPE_ENTRIES * sizeof(*acct),
+ GFP_KERNEL);
+
+ if (!acct)
+ return NULL;
+
+ ppe->acct_table = acct;
+ }
+
mtk_ppe_debugfs_init(ppe, index);
return ppe;
ppe_w32(ppe, MTK_PPE_DEFAULT_CPU_PORT1, 0xcb777);
ppe_w32(ppe, MTK_PPE_SBW_CTRL, 0x7f);
}
+
+ if (ppe->accounting && ppe->mib_phys) {
+ ppe_w32(ppe, MTK_PPE_MIB_TB_BASE, ppe->mib_phys);
+ ppe_m32(ppe, MTK_PPE_MIB_CFG, MTK_PPE_MIB_CFG_EN,
+ MTK_PPE_MIB_CFG_EN);
+ ppe_m32(ppe, MTK_PPE_MIB_CFG, MTK_PPE_MIB_CFG_RD_CLR,
+ MTK_PPE_MIB_CFG_RD_CLR);
+ ppe_m32(ppe, MTK_PPE_MIB_CACHE_CTL, MTK_PPE_MIB_CACHE_CTL_EN,
+			MTK_PPE_MIB_CACHE_CTL_EN);
+ }
}
int mtk_ppe_stop(struct mtk_ppe *ppe)
#define MTK_FOE_IB2_MULTICAST BIT(8)
#define MTK_FOE_IB2_WDMA_QID2 GENMASK(13, 12)
+#define MTK_FOE_IB2_MIB_CNT BIT(15)
#define MTK_FOE_IB2_WDMA_DEVIDX BIT(16)
#define MTK_FOE_IB2_WDMA_WINFO BIT(17)
unsigned long cookie;
};
+struct mtk_mib_entry {
+ u32 byt_cnt_l;
+ u16 byt_cnt_h;
+ u32 pkt_cnt_l;
+ u8 pkt_cnt_h;
+ u8 _rsv0;
+ u32 _rsv1;
+} __packed;
+
+struct mtk_foe_accounting {
+ u64 bytes;
+ u64 packets;
+};
+
struct mtk_ppe {
struct mtk_eth *eth;
struct device *dev;
void __iomem *base;
int version;
char dirname[5];
+ bool accounting;
void *foe_table;
dma_addr_t foe_phys;
+ struct mtk_mib_entry *mib_table;
+ dma_addr_t mib_phys;
+
u16 foe_check_time[MTK_PPE_ENTRIES];
struct hlist_head *foe_flow;
void *acct_table;
};
-struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base,
- int version, int index);
+struct mtk_ppe *mtk_ppe_init(struct mtk_eth *eth, void __iomem *base, int index);
+
void mtk_ppe_deinit(struct mtk_eth *eth);
void mtk_ppe_start(struct mtk_ppe *ppe);
int mtk_ppe_stop(struct mtk_ppe *ppe);
void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
int mtk_ppe_debugfs_init(struct mtk_ppe *ppe, int index);
+struct mtk_foe_accounting *mtk_foe_entry_get_mib(struct mtk_ppe *ppe, u32 index,
+ struct mtk_foe_accounting *diff);
#endif
struct mtk_foe_entry *entry = mtk_foe_get_entry(ppe, i);
struct mtk_foe_mac_info *l2;
struct mtk_flow_addr_info ai = {};
+ struct mtk_foe_accounting *acct;
unsigned char h_source[ETH_ALEN];
unsigned char h_dest[ETH_ALEN];
int type, state;
if (bind && state != MTK_FOE_STATE_BIND)
continue;
+ acct = mtk_foe_entry_get_mib(ppe, i, NULL);
+
type = FIELD_GET(MTK_FOE_IB1_PACKET_TYPE, entry->ib1);
seq_printf(m, "%05x %s %7s", i,
mtk_foe_entry_state_str(state),
*((__be16 *)&h_dest[4]) = htons(l2->dest_mac_lo);
seq_printf(m, " eth=%pM->%pM etype=%04x"
- " vlan=%d,%d ib1=%08x ib2=%08x\n",
+ " vlan=%d,%d ib1=%08x ib2=%08x"
+ " packets=%llu bytes=%llu\n",
h_source, h_dest, ntohs(l2->etype),
- l2->vlan1, l2->vlan2, entry->ib1, ib2);
+ l2->vlan1, l2->vlan2, entry->ib1, ib2,
+ acct ? acct->packets : 0, acct ? acct->bytes : 0);
}
return 0;
mtk_flow_offload_stats(struct mtk_eth *eth, struct flow_cls_offload *f)
{
struct mtk_flow_entry *entry;
+ struct mtk_foe_accounting diff;
u32 idle;
entry = rhashtable_lookup(ð->flow_table, &f->cookie,
idle = mtk_foe_entry_idle_time(eth->ppe[entry->ppe_index], entry);
f->stats.lastused = jiffies - idle * HZ;
+ if (entry->hash != 0xFFFF &&
+ mtk_foe_entry_get_mib(eth->ppe[entry->ppe_index], entry->hash,
+ &diff)) {
+ f->stats.pkts += diff.packets;
+ f->stats.bytes += diff.bytes;
+ }
+
return 0;
}
#define MTK_PPE_MIB_TB_BASE 0x338
+#define MTK_PPE_MIB_SER_CR 0x33C
+#define MTK_PPE_MIB_SER_CR_ST BIT(16)
+#define MTK_PPE_MIB_SER_CR_ADDR GENMASK(13, 0)
+
+#define MTK_PPE_MIB_SER_R0 0x340
+#define MTK_PPE_MIB_SER_R0_BYTE_CNT_LOW GENMASK(31, 0)
+
+#define MTK_PPE_MIB_SER_R1 0x344
+#define MTK_PPE_MIB_SER_R1_PKT_CNT_LOW GENMASK(31, 16)
+#define MTK_PPE_MIB_SER_R1_BYTE_CNT_HIGH GENMASK(15, 0)
+
+#define MTK_PPE_MIB_SER_R2 0x348
+#define MTK_PPE_MIB_SER_R2_PKT_CNT_HIGH GENMASK(23, 0)
+
#define MTK_PPE_MIB_CACHE_CTL 0x350
#define MTK_PPE_MIB_CACHE_CTL_EN BIT(0)
#define MTK_PPE_MIB_CACHE_CTL_FLUSH BIT(2)
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-// Copyright (c) 2018-2019 MediaTek Inc.
-
-/* A library for MediaTek SGMII circuit
- *
- * Author: Sean Wang <sean.wang@mediatek.com>
- *
- */
-
-#include <linux/mfd/syscon.h>
-#include <linux/of.h>
-#include <linux/phylink.h>
-#include <linux/regmap.h>
-
-#include "mtk_eth_soc.h"
-
-static struct mtk_pcs *pcs_to_mtk_pcs(struct phylink_pcs *pcs)
-{
- return container_of(pcs, struct mtk_pcs, pcs);
-}
-
-static void mtk_pcs_get_state(struct phylink_pcs *pcs,
- struct phylink_link_state *state)
-{
- struct mtk_pcs *mpcs = pcs_to_mtk_pcs(pcs);
- unsigned int bm, adv;
-
- /* Read the BMSR and LPA */
- regmap_read(mpcs->regmap, SGMSYS_PCS_CONTROL_1, &bm);
- regmap_read(mpcs->regmap, SGMSYS_PCS_ADVERTISE, &adv);
-
- phylink_mii_c22_pcs_decode_state(state, FIELD_GET(SGMII_BMSR, bm),
- FIELD_GET(SGMII_LPA, adv));
-}
-
-static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
- phy_interface_t interface,
- const unsigned long *advertising,
- bool permit_pause_to_mac)
-{
- bool mode_changed = false, changed, use_an;
- struct mtk_pcs *mpcs = pcs_to_mtk_pcs(pcs);
- unsigned int rgc3, sgm_mode, bmcr;
- int advertise, link_timer;
-
- advertise = phylink_mii_c22_pcs_encode_advertisement(interface,
- advertising);
- if (advertise < 0)
- return advertise;
-
- /* Clearing IF_MODE_BIT0 switches the PCS to BASE-X mode, and
- * we assume that fixes it's speed at bitrate = line rate (in
- * other words, 1000Mbps or 2500Mbps).
- */
- if (interface == PHY_INTERFACE_MODE_SGMII) {
- sgm_mode = SGMII_IF_MODE_SGMII;
- if (phylink_autoneg_inband(mode)) {
- sgm_mode |= SGMII_REMOTE_FAULT_DIS |
- SGMII_SPEED_DUPLEX_AN;
- use_an = true;
- } else {
- use_an = false;
- }
- } else if (phylink_autoneg_inband(mode)) {
- /* 1000base-X or 2500base-X autoneg */
- sgm_mode = SGMII_REMOTE_FAULT_DIS;
- use_an = linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- advertising);
- } else {
- /* 1000base-X or 2500base-X without autoneg */
- sgm_mode = 0;
- use_an = false;
- }
-
- if (use_an) {
- bmcr = SGMII_AN_ENABLE;
- } else {
- bmcr = 0;
- }
-
- if (mpcs->interface != interface) {
- link_timer = phylink_get_link_timer_ns(interface);
- if (link_timer < 0)
- return link_timer;
-
- /* PHYA power down */
- regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL,
- SGMII_PHYA_PWD, SGMII_PHYA_PWD);
-
- /* Reset SGMII PCS state */
- regmap_update_bits(mpcs->regmap, SGMII_RESERVED_0,
- SGMII_SW_RESET, SGMII_SW_RESET);
-
- if (interface == PHY_INTERFACE_MODE_2500BASEX)
- rgc3 = RG_PHY_SPEED_3_125G;
- else
- rgc3 = 0;
-
- /* Configure the underlying interface speed */
- regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3,
- RG_PHY_SPEED_3_125G, rgc3);
-
- /* Setup the link timer */
- regmap_write(mpcs->regmap, SGMSYS_PCS_LINK_TIMER, link_timer / 2 / 8);
-
- mpcs->interface = interface;
- mode_changed = true;
- }
-
- /* Update the advertisement, noting whether it has changed */
- regmap_update_bits_check(mpcs->regmap, SGMSYS_PCS_ADVERTISE,
- SGMII_ADVERTISE, advertise, &changed);
-
- /* Update the sgmsys mode register */
- regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,
- SGMII_REMOTE_FAULT_DIS | SGMII_SPEED_DUPLEX_AN |
- SGMII_IF_MODE_SGMII, sgm_mode);
-
- /* Update the BMCR */
- regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1,
- SGMII_AN_ENABLE, bmcr);
-
- /* Release PHYA power down state
- * Only removing bit SGMII_PHYA_PWD isn't enough.
- * There are cases when the SGMII_PHYA_PWD register contains 0x9 which
- * prevents SGMII from working. The SGMII still shows link but no traffic
- * can flow. Writing 0x0 to the PHYA_PWD register fix the issue. 0x0 was
- * taken from a good working state of the SGMII interface.
- * Unknown how much the QPHY needs but it is racy without a sleep.
- * Tested on mt7622 & mt7986.
- */
- usleep_range(50, 100);
- regmap_write(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 0);
-
- return changed || mode_changed;
-}
-
-static void mtk_pcs_restart_an(struct phylink_pcs *pcs)
-{
- struct mtk_pcs *mpcs = pcs_to_mtk_pcs(pcs);
-
- regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1,
- SGMII_AN_RESTART, SGMII_AN_RESTART);
-}
-
-static void mtk_pcs_link_up(struct phylink_pcs *pcs, unsigned int mode,
- phy_interface_t interface, int speed, int duplex)
-{
- struct mtk_pcs *mpcs = pcs_to_mtk_pcs(pcs);
- unsigned int sgm_mode;
-
- if (!phylink_autoneg_inband(mode)) {
- /* Force the speed and duplex setting */
- if (speed == SPEED_10)
- sgm_mode = SGMII_SPEED_10;
- else if (speed == SPEED_100)
- sgm_mode = SGMII_SPEED_100;
- else
- sgm_mode = SGMII_SPEED_1000;
-
- if (duplex != DUPLEX_FULL)
- sgm_mode |= SGMII_DUPLEX_HALF;
-
- regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,
- SGMII_DUPLEX_HALF | SGMII_SPEED_MASK,
- sgm_mode);
- }
-}
-
-static const struct phylink_pcs_ops mtk_pcs_ops = {
- .pcs_get_state = mtk_pcs_get_state,
- .pcs_config = mtk_pcs_config,
- .pcs_an_restart = mtk_pcs_restart_an,
- .pcs_link_up = mtk_pcs_link_up,
-};
-
-int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *r, u32 ana_rgc3)
-{
- struct device_node *np;
- int i;
-
- for (i = 0; i < MTK_MAX_DEVS; i++) {
- np = of_parse_phandle(r, "mediatek,sgmiisys", i);
- if (!np)
- break;
-
- ss->pcs[i].ana_rgc3 = ana_rgc3;
- ss->pcs[i].regmap = syscon_node_to_regmap(np);
- of_node_put(np);
- if (IS_ERR(ss->pcs[i].regmap))
- return PTR_ERR(ss->pcs[i].regmap);
-
- ss->pcs[i].pcs.ops = &mtk_pcs_ops;
- ss->pcs[i].pcs.poll = true;
- ss->pcs[i].interface = PHY_INTERFACE_MODE_NA;
- }
-
- return 0;
-}
-
-struct phylink_pcs *mtk_sgmii_select_pcs(struct mtk_sgmii *ss, int id)
-{
- if (!ss->pcs[id].regmap)
- return NULL;
-
- return &ss->pcs[id].pcs;
-}
struct mlx4_en_rx_desc {
/* actual number of entries depends on rx ring stride */
- struct mlx4_wqe_data_seg data[0];
+ DECLARE_FLEX_ARRAY(struct mlx4_wqe_data_seg, data);
};
struct mlx4_en_rx_ring {
mlx5_core-$(CONFIG_MLX5_BRIDGE) += esw/bridge.o en/rep/bridge.o
+mlx5_core-$(CONFIG_THERMAL) += thermal.o
mlx5_core-$(CONFIG_MLX5_MPFS) += lib/mpfs.o
mlx5_core-$(CONFIG_VXLAN) += lib/vxlan.o
mlx5_core-$(CONFIG_PTP_1588_CLOCK) += lib/clock.o
#include <linux/mlx5/mlx5_ifc_vdpa.h>
#include <linux/mlx5/vport.h>
#include "mlx5_core.h"
+#include "devlink.h"
/* intf dev list mutex */
static DEFINE_MUTEX(mlx5_intf_mutex);
return true;
}
-static bool is_eth_enabled(struct mlx5_core_dev *dev)
-{
- union devlink_param_value val;
- int err;
-
- err = devl_param_driverinit_value_get(priv_to_devlink(dev),
- DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
- &val);
- return err ? false : val.vbool;
-}
-
bool mlx5_vnet_supported(struct mlx5_core_dev *dev)
{
if (!IS_ENABLED(CONFIG_MLX5_VDPA_NET))
.is_enabled = &is_ib_enabled },
[MLX5_INTERFACE_PROTOCOL_ETH] = { .suffix = "eth",
.is_supported = &mlx5_eth_supported,
- .is_enabled = &is_eth_enabled },
+ .is_enabled = &mlx5_core_is_eth_enabled },
[MLX5_INTERFACE_PROTOCOL_ETH_REP] = { .suffix = "eth-rep",
.is_supported = &is_eth_rep_supported },
[MLX5_INTERFACE_PROTOCOL_IB_REP] = { .suffix = "rdma-rep",
return (val.vu32 >= 64 && val.vu32 <= 4096) ? 0 : -EINVAL;
}
+static int
+mlx5_devlink_hairpin_num_queues_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ return val.vu32 ? 0 : -EINVAL;
+}
+
+static int
+mlx5_devlink_hairpin_queue_size_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ u32 val32 = val.vu32;
+
+ if (!is_power_of_2(val32)) {
+ NL_SET_ERR_MSG_MOD(extack, "Value is not power of two");
+ return -EINVAL;
+ }
+
+ if (val32 > BIT(MLX5_CAP_GEN(dev, log_max_hairpin_num_packets))) {
+ NL_SET_ERR_MSG_FMT_MOD(
+ extack, "Maximum hairpin queue size is %lu",
+ BIT(MLX5_CAP_GEN(dev, log_max_hairpin_num_packets)));
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void mlx5_devlink_hairpin_params_init_values(struct devlink *devlink)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ union devlink_param_value value;
+ u32 link_speed = 0;
+ u64 link_speed64;
+
+	/* set one hairpin queue pair for each 50 Gbps share of the link speed */
+ mlx5_port_max_linkspeed(dev, &link_speed);
+ link_speed = max_t(u32, link_speed, 50000);
+ link_speed64 = link_speed;
+ do_div(link_speed64, 50000);
+
+ value.vu32 = link_speed64;
+ devl_param_driverinit_value_set(
+ devlink, MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES, value);
+
+ value.vu32 =
+ BIT(min_t(u32, 16 - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(dev),
+ MLX5_CAP_GEN(dev, log_max_hairpin_num_packets)));
+ devl_param_driverinit_value_set(
+ devlink, MLX5_DEVLINK_PARAM_ID_HAIRPIN_QUEUE_SIZE, value);
+}
+
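A rough worked example of the defaults computed above, with assumed device capability values rather than numbers from real hardware: a 200 Gbps port gets one hairpin queue pair per 50 Gbps share of the link, and the queue size is the largest power of two permitted by the (assumed) minimum stride and maximum packet-count caps.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t link_speed = 200000;		/* 200 Gbps, in Mbps units */
	uint32_t min_log_stride = 6;		/* hypothetical cap values */
	uint32_t log_max_hairpin_num_packets = 7;

	/* one queue pair per 50 Gbps, with a floor of one pair */
	uint32_t num_queues = (link_speed < 50000 ? 50000 : link_speed) / 50000;

	uint32_t log_size = 16 - min_log_stride;
	if (log_size > log_max_hairpin_num_packets)
		log_size = log_max_hairpin_num_packets;

	printf("hairpin_num_queues=%u hairpin_queue_size=%u\n",
	       num_queues, 1u << log_size);	/* 4 queues of 128 packets */
	return 0;
}
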
static const struct devlink_param mlx5_devlink_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_ROCE, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
NULL, NULL, mlx5_devlink_enable_roce_validate),
static const struct devlink_param mlx5_devlink_eth_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_ETH, BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
NULL, NULL, NULL),
+ DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES,
+ "hairpin_num_queues", DEVLINK_PARAM_TYPE_U32,
+ BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), NULL, NULL,
+ mlx5_devlink_hairpin_num_queues_validate),
+ DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_HAIRPIN_QUEUE_SIZE,
+ "hairpin_queue_size", DEVLINK_PARAM_TYPE_U32,
+ BIT(DEVLINK_PARAM_CMODE_DRIVERINIT), NULL, NULL,
+ mlx5_devlink_hairpin_queue_size_validate),
};
static int mlx5_devlink_eth_params_register(struct devlink *devlink)
devl_param_driverinit_value_set(devlink,
DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
value);
+
+ mlx5_devlink_hairpin_params_init_values(devlink);
+
return 0;
}
{
int err;
+ /* Here only the driver init params should be registered.
+	 * Runtime params should be registered by the code whose
+	 * behaviour they configure.
+ */
+
err = devl_params_register(devlink, mlx5_devlink_params,
ARRAY_SIZE(mlx5_devlink_params));
if (err)
MLX5_DEVLINK_PARAM_ID_ESW_LARGE_GROUP_NUM,
MLX5_DEVLINK_PARAM_ID_ESW_PORT_METADATA,
MLX5_DEVLINK_PARAM_ID_ESW_MULTIPORT,
+ MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES,
+ MLX5_DEVLINK_PARAM_ID_HAIRPIN_QUEUE_SIZE,
};
struct mlx5_trap_ctx {
int mlx5_devlink_params_register(struct devlink *devlink);
void mlx5_devlink_params_unregister(struct devlink *devlink);
+static inline bool mlx5_core_is_eth_enabled(struct mlx5_core_dev *dev)
+{
+ union devlink_param_value val;
+ int err;
+
+ err = devl_param_driverinit_value_get(priv_to_devlink(dev),
+ DEVLINK_PARAM_GENERIC_ID_ENABLE_ETH,
+ &val);
+ return err ? false : val.vbool;
+}
+
#endif /* __MLX5_DEVLINK_H__ */
params->mqprio.num_tc : 1;
}
+/* Keep this enum consistent with the corresponding strings array
+ * declared in en/reporter_rx.c
+ */
enum {
- MLX5E_RQ_STATE_ENABLED,
+ MLX5E_RQ_STATE_ENABLED = 0,
MLX5E_RQ_STATE_RECOVERING,
- MLX5E_RQ_STATE_AM,
+ MLX5E_RQ_STATE_DIM,
MLX5E_RQ_STATE_NO_CSUM_COMPLETE,
MLX5E_RQ_STATE_CSUM_FULL, /* cqe_csum_full hw bit is set */
MLX5E_RQ_STATE_MINI_CQE_HW_STRIDX, /* set when mini_cqe_resp_stride_index cap is used */
MLX5E_RQ_STATE_SHAMPO, /* set when SHAMPO cap is used */
MLX5E_RQ_STATE_MINI_CQE_ENHANCED, /* set when enhanced mini_cqe_cap is used */
+ MLX5E_RQ_STATE_XSK, /* set to indicate an xsk rq */
+ MLX5E_NUM_RQ_STATES, /* Must be kept last */
};
struct mlx5e_cq {
enum mlx5e_dma_map_type type;
};
+/* Keep this enum consistent with the corresponding strings array
+ * declared in en/reporter_tx.c
+ */
enum {
- MLX5E_SQ_STATE_ENABLED,
+ MLX5E_SQ_STATE_ENABLED = 0,
MLX5E_SQ_STATE_MPWQE,
MLX5E_SQ_STATE_RECOVERING,
MLX5E_SQ_STATE_IPSEC,
- MLX5E_SQ_STATE_AM,
+ MLX5E_SQ_STATE_DIM,
MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE,
MLX5E_SQ_STATE_PENDING_XSK_TX,
MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC,
MLX5E_SQ_STATE_XDP_MULTIBUF,
+ MLX5E_NUM_SQ_STATES, /* Must be kept last */
};
struct mlx5e_tx_mpwqe {
cqe_ts_to_ns ptp_cyc2time;
} ____cacheline_aligned_in_smp;
-union mlx5e_alloc_unit {
- struct page *page;
- struct xdp_buff *xsk;
-};
-
/* XDP packets can be transmitted in different ways. On completion, we need to
* distinguish between them to clean up things in a proper way.
*/
struct work_struct recover_work;
} ____cacheline_aligned_in_smp;
+struct mlx5e_frag_page {
+ struct page *page;
+ u16 frags;
+};
+
+enum mlx5e_wqe_frag_flag {
+ MLX5E_WQE_FRAG_LAST_IN_PAGE,
+ MLX5E_WQE_FRAG_SKIP_RELEASE,
+};
+
struct mlx5e_wqe_frag_info {
- union mlx5e_alloc_unit *au;
+ union {
+ struct mlx5e_frag_page *frag_page;
+ struct xdp_buff **xskp;
+ };
u32 offset;
- bool last_in_page;
+ u8 flags;
+};
+
+union mlx5e_alloc_units {
+ DECLARE_FLEX_ARRAY(struct mlx5e_frag_page, frag_pages);
+ DECLARE_FLEX_ARRAY(struct page *, pages);
+ DECLARE_FLEX_ARRAY(struct xdp_buff *, xsk_buffs);
};
struct mlx5e_mpw_info {
u16 consumed_strides;
- DECLARE_BITMAP(xdp_xmit_bitmap, MLX5_MPWRQ_MAX_PAGES_PER_WQE);
- union mlx5e_alloc_unit alloc_units[];
+ DECLARE_BITMAP(skip_release_bitmap, MLX5_MPWRQ_MAX_PAGES_PER_WQE);
+ union mlx5e_alloc_units alloc_units;
};
#define MLX5E_MAX_RX_FRAGS 4
#define MLX5E_CACHE_UNIT (MLX5_MPWRQ_MAX_PAGES_PER_WQE > NAPI_POLL_WEIGHT ? \
MLX5_MPWRQ_MAX_PAGES_PER_WQE : NAPI_POLL_WEIGHT)
#define MLX5E_CACHE_SIZE (4 * roundup_pow_of_two(MLX5E_CACHE_UNIT))
-struct mlx5e_page_cache {
- u32 head;
- u32 tail;
- struct page *page_cache[MLX5E_CACHE_SIZE];
-};
struct mlx5e_rq;
typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
struct mlx5e_rq_frag_info arr[MLX5E_MAX_RX_FRAGS];
u8 num_frags;
u8 log_num_frags;
- u8 wqe_bulk;
+ u16 wqe_bulk;
+ u16 refill_unit;
u8 wqe_index_mask;
};
struct mlx5e_dma_info {
dma_addr_t addr;
- struct page *page;
+ union {
+ struct mlx5e_frag_page *frag_page;
+ struct page *page;
+ };
};
struct mlx5e_shampo_hd {
u32 mkey;
struct mlx5e_dma_info *info;
- struct page *last_page;
+ struct mlx5e_frag_page *pages;
+ u16 curr_page_index;
u16 hd_per_wq;
u16 hd_per_wqe;
unsigned long *bitmap;
struct {
struct mlx5_wq_cyc wq;
struct mlx5e_wqe_frag_info *frags;
- union mlx5e_alloc_unit *alloc_units;
+ union mlx5e_alloc_units *alloc_units;
struct mlx5e_rq_frags_info info;
mlx5e_fp_skb_from_cqe skb_from_cqe;
} wqe;
struct mlx5e_rq_stats *stats;
struct mlx5e_cq cq;
struct mlx5e_cq_decomp cqd;
- struct mlx5e_page_cache page_cache;
struct hwtstamp_config *tstamp;
struct mlx5_clock *clock;
struct mlx5e_icosq *icosq;
u32 link_speed = 0;
u32 pci_bw = 0;
- mlx5e_port_max_linkspeed(mdev, &link_speed);
+ mlx5_port_max_linkspeed(mdev, &link_speed);
pci_bw = pcie_bandwidth_available(mdev->pdev, NULL, NULL, NULL);
mlx5_core_dbg_once(mdev, "Max link speed = %d, PCI BW = %d\n",
link_speed, pci_bw);
return first_frag_size + (MLX5E_MAX_RX_FRAGS - 2) * frag_size + PAGE_SIZE;
}
+static void mlx5e_rx_compute_wqe_bulk_params(struct mlx5e_params *params,
+ struct mlx5e_rq_frags_info *info)
+{
+ u16 bulk_bound_rq_size = (1 << params->log_rq_mtu_frames) / 4;
+ u32 bulk_bound_rq_size_in_bytes;
+ u32 sum_frag_strides = 0;
+ u32 wqe_bulk_in_bytes;
+ u16 split_factor;
+ u32 wqe_bulk;
+ int i;
+
+ for (i = 0; i < info->num_frags; i++)
+ sum_frag_strides += info->arr[i].frag_stride;
+
+ /* For MTUs larger than PAGE_SIZE, align to PAGE_SIZE to reflect
+ * amount of consumed pages per wqe in bytes.
+ */
+ if (sum_frag_strides > PAGE_SIZE)
+ sum_frag_strides = ALIGN(sum_frag_strides, PAGE_SIZE);
+
+ bulk_bound_rq_size_in_bytes = bulk_bound_rq_size * sum_frag_strides;
+
+#define MAX_WQE_BULK_BYTES(xdp) ((xdp ? 256 : 512) * 1024)
+
+ /* A WQE bulk should not exceed min(512KB, 1/4 of rq size). For XDP
+ * keep bulk size smaller to avoid filling the page_pool cache on
+ * every bulk refill.
+ */
+ wqe_bulk_in_bytes = min_t(u32, MAX_WQE_BULK_BYTES(params->xdp_prog),
+ bulk_bound_rq_size_in_bytes);
+ wqe_bulk = DIV_ROUND_UP(wqe_bulk_in_bytes, sum_frag_strides);
+
+ /* Make sure that allocations don't start when the page is still used
+ * by older WQEs.
+ */
+ info->wqe_bulk = max_t(u16, info->wqe_index_mask + 1, wqe_bulk);
+
+ split_factor = DIV_ROUND_UP(MAX_WQE_BULK_BYTES(params->xdp_prog),
+ PP_ALLOC_CACHE_REFILL * PAGE_SIZE);
+ info->refill_unit = DIV_ROUND_UP(info->wqe_bulk, split_factor);
+}
+
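A standalone sketch of the same WQE bulk sizing, with assumed parameters (a 1024-entry non-XDP RQ whose WQE spans two 2048-byte fragment strides); the names below are local stand-ins for the driver's fields and for PP_ALLOC_CACHE_REFILL, not the driver's own identifiers.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ		4096u
#define PP_REFILL	64u		/* assumed page_pool refill batch */

int main(void)
{
	uint32_t rq_size = 1024, wqe_index_mask = 0;
	uint32_t sum_frag_strides = 2 * 2048;	/* <= PAGE_SZ, no alignment */
	int xdp = 0;

	uint32_t max_bulk_bytes = (xdp ? 256u : 512u) * 1024u;
	uint32_t bound_bytes = (rq_size / 4) * sum_frag_strides;
	uint32_t bulk_bytes = bound_bytes < max_bulk_bytes ?
			      bound_bytes : max_bulk_bytes;
	uint32_t wqe_bulk = (bulk_bytes + sum_frag_strides - 1) /
			    sum_frag_strides;

	/* never bulk fewer WQEs than the page-sharing mask requires */
	if (wqe_bulk < wqe_index_mask + 1)
		wqe_bulk = wqe_index_mask + 1;

	uint32_t split = (max_bulk_bytes + PP_REFILL * PAGE_SZ - 1) /
			 (PP_REFILL * PAGE_SZ);
	uint32_t refill_unit = (wqe_bulk + split - 1) / split;

	printf("wqe_bulk=%u refill_unit=%u\n", wqe_bulk, refill_unit);
	return 0;	/* prints wqe_bulk=128 refill_unit=64 */
}
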
#define DEFAULT_FRAG_SIZE (2048)
static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
}
out:
- /* Bulking optimization to skip allocation until at least 8 WQEs can be
- * allocated in a row. At the same time, never start allocation when
- * the page is still used by older WQEs.
+ /* Bulking optimization to skip allocation until a large enough number
+ * of WQEs can be allocated in a row. Bulking also influences how well
+ * deferred page release works.
*/
- info->wqe_bulk = max_t(u8, info->wqe_index_mask + 1, 8);
+ mlx5e_rx_compute_wqe_bulk_params(params, info);
+
+ mlx5_core_dbg(mdev, "%s: wqe_bulk = %u, wqe_bulk_refill_unit = %u\n",
+ __func__, info->wqe_bulk, info->refill_unit);
info->log_num_frags = order_base_2(info->num_frags);
#include "port.h"
-/* speed in units of 1Mb */
-static const u32 mlx5e_link_speed[MLX5E_LINK_MODES_NUMBER] = {
- [MLX5E_1000BASE_CX_SGMII] = 1000,
- [MLX5E_1000BASE_KX] = 1000,
- [MLX5E_10GBASE_CX4] = 10000,
- [MLX5E_10GBASE_KX4] = 10000,
- [MLX5E_10GBASE_KR] = 10000,
- [MLX5E_20GBASE_KR2] = 20000,
- [MLX5E_40GBASE_CR4] = 40000,
- [MLX5E_40GBASE_KR4] = 40000,
- [MLX5E_56GBASE_R4] = 56000,
- [MLX5E_10GBASE_CR] = 10000,
- [MLX5E_10GBASE_SR] = 10000,
- [MLX5E_10GBASE_ER] = 10000,
- [MLX5E_40GBASE_SR4] = 40000,
- [MLX5E_40GBASE_LR4] = 40000,
- [MLX5E_50GBASE_SR2] = 50000,
- [MLX5E_100GBASE_CR4] = 100000,
- [MLX5E_100GBASE_SR4] = 100000,
- [MLX5E_100GBASE_KR4] = 100000,
- [MLX5E_100GBASE_LR4] = 100000,
- [MLX5E_100BASE_TX] = 100,
- [MLX5E_1000BASE_T] = 1000,
- [MLX5E_10GBASE_T] = 10000,
- [MLX5E_25GBASE_CR] = 25000,
- [MLX5E_25GBASE_KR] = 25000,
- [MLX5E_25GBASE_SR] = 25000,
- [MLX5E_50GBASE_CR2] = 50000,
- [MLX5E_50GBASE_KR2] = 50000,
-};
-
-static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
- [MLX5E_SGMII_100M] = 100,
- [MLX5E_1000BASE_X_SGMII] = 1000,
- [MLX5E_5GBASE_R] = 5000,
- [MLX5E_10GBASE_XFI_XAUI_1] = 10000,
- [MLX5E_40GBASE_XLAUI_4_XLPPI_4] = 40000,
- [MLX5E_25GAUI_1_25GBASE_CR_KR] = 25000,
- [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2] = 50000,
- [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR] = 50000,
- [MLX5E_CAUI_4_100GBASE_CR4_KR4] = 100000,
- [MLX5E_100GAUI_2_100GBASE_CR2_KR2] = 100000,
- [MLX5E_200GAUI_4_200GBASE_CR4_KR4] = 200000,
- [MLX5E_400GAUI_8] = 400000,
- [MLX5E_100GAUI_1_100GBASE_CR_KR] = 100000,
- [MLX5E_200GAUI_2_200GBASE_CR2_KR2] = 200000,
- [MLX5E_400GAUI_4_400GBASE_CR4_KR4] = 400000,
-};
-
-bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev)
-{
- struct mlx5e_port_eth_proto eproto;
- int err;
-
- if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet))
- return true;
-
- err = mlx5_port_query_eth_proto(mdev, 1, true, &eproto);
- if (err)
- return false;
-
- return !!eproto.cap;
-}
-
-static void mlx5e_port_get_speed_arr(struct mlx5_core_dev *mdev,
- const u32 **arr, u32 *size,
- bool force_legacy)
-{
- bool ext = force_legacy ? false : mlx5e_ptys_ext_supported(mdev);
-
- *size = ext ? ARRAY_SIZE(mlx5e_ext_link_speed) :
- ARRAY_SIZE(mlx5e_link_speed);
- *arr = ext ? mlx5e_ext_link_speed : mlx5e_link_speed;
-}
-
-int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
- struct mlx5e_port_eth_proto *eproto)
-{
- u32 out[MLX5_ST_SZ_DW(ptys_reg)];
- int err;
-
- if (!eproto)
- return -EINVAL;
-
- err = mlx5_query_port_ptys(dev, out, sizeof(out), MLX5_PTYS_EN, port);
- if (err)
- return err;
-
- eproto->cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
- eth_proto_capability);
- eproto->admin = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_admin);
- eproto->oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_oper);
- return 0;
-}
-
void mlx5_port_query_eth_autoneg(struct mlx5_core_dev *dev, u8 *an_status,
u8 *an_disable_cap, u8 *an_disable_admin)
{
sizeof(out), MLX5_REG_PTYS, 0, 1);
}
-u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
- bool force_legacy)
-{
- unsigned long temp = eth_proto_oper;
- const u32 *table;
- u32 speed = 0;
- u32 max_size;
- int i;
-
- mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
- i = find_first_bit(&temp, max_size);
- if (i < max_size)
- speed = table[i];
- return speed;
-}
-
int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
{
- struct mlx5e_port_eth_proto eproto;
+ struct mlx5_port_eth_proto eproto;
bool force_legacy = false;
bool ext;
int err;
- ext = mlx5e_ptys_ext_supported(mdev);
+ ext = mlx5_ptys_ext_supported(mdev);
err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
if (err)
goto out;
if (err)
goto out;
}
- *speed = mlx5e_port_ptys2speed(mdev, eproto.oper, force_legacy);
+ *speed = mlx5_port_ptys2speed(mdev, eproto.oper, force_legacy);
if (!(*speed))
err = -EINVAL;
return err;
}
-int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
-{
- struct mlx5e_port_eth_proto eproto;
- u32 max_speed = 0;
- const u32 *table;
- u32 max_size;
- bool ext;
- int err;
- int i;
-
- ext = mlx5e_ptys_ext_supported(mdev);
- err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
- if (err)
- return err;
-
- mlx5e_port_get_speed_arr(mdev, &table, &max_size, false);
- for (i = 0; i < max_size; ++i)
- if (eproto.cap & MLX5E_PROT_MASK(i))
- max_speed = max(max_speed, table[i]);
-
- *speed = max_speed;
- return 0;
-}
-
-u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
- bool force_legacy)
-{
- u32 link_modes = 0;
- const u32 *table;
- u32 max_size;
- int i;
-
- mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
- for (i = 0; i < max_size; ++i) {
- if (table[i] == speed)
- link_modes |= MLX5E_PROT_MASK(i);
- }
- return link_modes;
-}
-
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out)
{
int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
#include <linux/mlx5/driver.h>
#include "en.h"
-struct mlx5e_port_eth_proto {
- u32 cap;
- u32 admin;
- u32 oper;
-};
-
-int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
- struct mlx5e_port_eth_proto *eproto);
void mlx5_port_query_eth_autoneg(struct mlx5_core_dev *dev, u8 *an_status,
u8 *an_disable_cap, u8 *an_disable_admin);
int mlx5_port_set_eth_ptys(struct mlx5_core_dev *dev, bool an_disable,
u32 proto_admin, bool ext);
-u32 mlx5e_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
- bool force_legacy);
int mlx5e_port_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
-int mlx5e_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
-u32 mlx5e_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
- bool force_legacy);
-bool mlx5e_ptys_ext_supported(struct mlx5_core_dev *mdev);
int mlx5e_port_query_pbmc(struct mlx5_core_dev *mdev, void *out);
int mlx5e_port_set_pbmc(struct mlx5_core_dev *mdev, void *in);
int mlx5e_port_query_sbpr(struct mlx5_core_dev *mdev, u32 desc, u8 dir,
return macvlan->mode == MACVLAN_MODE_PASSTHRU;
}
-static int
-mlx5e_rep_indr_setup_block(struct net_device *netdev, struct Qdisc *sch,
- struct mlx5e_rep_priv *rpriv,
- struct flow_block_offload *f,
- flow_setup_cb_t *setup_cb,
- void *data,
- void (*cleanup)(struct flow_block_cb *block_cb))
+static bool
+mlx5e_rep_check_indr_block_supported(struct mlx5e_rep_priv *rpriv,
+ struct net_device *netdev,
+ struct flow_block_offload *f)
{
struct mlx5e_priv *priv = netdev_priv(rpriv->netdev);
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
- bool is_ovs_int_port = netif_is_ovs_master(netdev);
- struct mlx5e_rep_indr_block_priv *indr_priv;
- struct flow_block_cb *block_cb;
+ struct net_device *macvlan_real_dev;
- if (!mlx5e_tc_tun_device_to_offload(priv, netdev) &&
- !(is_vlan_dev(netdev) && vlan_dev_real_dev(netdev) == rpriv->netdev) &&
- !is_ovs_int_port) {
- if (!(netif_is_macvlan(netdev) && macvlan_dev_real_dev(netdev) == rpriv->netdev))
- return -EOPNOTSUPP;
+ if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
+ f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+ return false;
+
+ if (mlx5e_tc_tun_device_to_offload(priv, netdev))
+ return true;
+
+ if (is_vlan_dev(netdev) && vlan_dev_real_dev(netdev) == rpriv->netdev)
+ return true;
+
+ if (netif_is_macvlan(netdev)) {
if (!mlx5e_rep_macvlan_mode_supported(netdev)) {
netdev_warn(netdev, "Offloading ingress filter is supported only with macvlan passthru mode");
- return -EOPNOTSUPP;
+ return false;
}
+
+ macvlan_real_dev = macvlan_dev_real_dev(netdev);
+
+ if (macvlan_real_dev == rpriv->netdev)
+ return true;
+ if (netif_is_bond_master(macvlan_real_dev))
+ return true;
}
- if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
- f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
- return -EOPNOTSUPP;
+ if (netif_is_ovs_master(netdev) && f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS &&
+ mlx5e_tc_int_port_supported(esw))
+ return true;
- if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS && !is_ovs_int_port)
- return -EOPNOTSUPP;
+ return false;
+}
+
+static int
+mlx5e_rep_indr_setup_block(struct net_device *netdev, struct Qdisc *sch,
+ struct mlx5e_rep_priv *rpriv,
+ struct flow_block_offload *f,
+ flow_setup_cb_t *setup_cb,
+ void *data,
+ void (*cleanup)(struct flow_block_cb *block_cb))
+{
+ struct mlx5e_rep_indr_block_priv *indr_priv;
+ struct flow_block_cb *block_cb;
- if (is_ovs_int_port && !mlx5e_tc_int_port_supported(esw))
+ if (!mlx5e_rep_check_indr_block_supported(rpriv, netdev, f))
return -EOPNOTSUPP;
f->unlocked_driver_cb = true;
#include "ptp.h"
#include "lib/tout.h"
+/* Keep this string array consistent with the MLX5E_RQ_STATE_* enums in en.h */
+static const char * const rq_sw_state_type_name[] = {
+ [MLX5E_RQ_STATE_ENABLED] = "enabled",
+ [MLX5E_RQ_STATE_RECOVERING] = "recovering",
+ [MLX5E_RQ_STATE_DIM] = "dim",
+ [MLX5E_RQ_STATE_NO_CSUM_COMPLETE] = "no_csum_complete",
+ [MLX5E_RQ_STATE_CSUM_FULL] = "csum_full",
+ [MLX5E_RQ_STATE_MINI_CQE_HW_STRIDX] = "mini_cqe_hw_stridx",
+ [MLX5E_RQ_STATE_SHAMPO] = "shampo",
+ [MLX5E_RQ_STATE_MINI_CQE_ENHANCED] = "mini_cqe_enhanced",
+ [MLX5E_RQ_STATE_XSK] = "xsk",
+};
+
static int mlx5e_query_rq_state(struct mlx5_core_dev *dev, u32 rqn, u8 *state)
{
int outlen = MLX5_ST_SZ_BYTES(query_rq_out);
mlx5e_reset_icosq_cc_pc(icosq);
- mlx5e_free_rx_in_progress_descs(rq);
+ mlx5e_free_rx_missing_descs(rq);
if (xskrq)
- mlx5e_free_rx_in_progress_descs(xskrq);
+ mlx5e_free_rx_missing_descs(xskrq);
clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
mlx5e_activate_icosq(icosq);
return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
+static int mlx5e_health_rq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_rq *rq)
+{
+ int err;
+ int i;
+
+ BUILD_BUG_ON_MSG(ARRAY_SIZE(rq_sw_state_type_name) != MLX5E_NUM_RQ_STATES,
+ "rq_sw_state_type_name string array must be consistent with MLX5E_RQ_STATE_* enum in en.h");
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+
+ err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
+ if (err)
+ return err;
+
+ for (i = 0; i < ARRAY_SIZE(rq_sw_state_type_name); ++i) {
+ err = devlink_fmsg_u32_pair_put(fmsg, rq_sw_state_type_name[i],
+ test_bit(i, &rq->state));
+ if (err)
+ return err;
+ }
+
+ err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return devlink_fmsg_obj_nest_end(fmsg);
+}
+
static int
mlx5e_rx_reporter_build_diagnose_output_rq_common(struct mlx5e_rq *rq,
struct devlink_fmsg *fmsg)
if (err)
return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "SW state", rq->state);
- if (err)
- return err;
-
err = devlink_fmsg_u32_pair_put(fmsg, "WQE counter", wqe_counter);
if (err)
return err;
if (err)
return err;
+ err = mlx5e_health_rq_put_sw_state(fmsg, rq);
+ if (err)
+ return err;
+
err = mlx5e_health_cq_diag_fmsg(&rq->cq, fmsg);
if (err)
return err;
#include "en/devlink.h"
#include "lib/tout.h"
+/* Keep this string array consistent with the MLX5E_SQ_STATE_* enums in en.h */
+static const char * const sq_sw_state_type_name[] = {
+ [MLX5E_SQ_STATE_ENABLED] = "enabled",
+ [MLX5E_SQ_STATE_MPWQE] = "mpwqe",
+ [MLX5E_SQ_STATE_RECOVERING] = "recovering",
+ [MLX5E_SQ_STATE_IPSEC] = "ipsec",
+ [MLX5E_SQ_STATE_DIM] = "dim",
+ [MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE] = "vlan_need_l2_inline",
+ [MLX5E_SQ_STATE_PENDING_XSK_TX] = "pending_xsk_tx",
+ [MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC] = "pending_tls_rx_resync",
+ [MLX5E_SQ_STATE_XDP_MULTIBUF] = "xdp_multibuf",
+};
+
static int mlx5e_wait_for_sq_flush(struct mlx5e_txqsq *sq)
{
struct mlx5_core_dev *dev = sq->mdev;
sq->pc = 0;
}
+static int mlx5e_health_sq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_txqsq *sq)
+{
+ int err;
+ int i;
+
+ BUILD_BUG_ON_MSG(ARRAY_SIZE(sq_sw_state_type_name) != MLX5E_NUM_SQ_STATES,
+ "sq_sw_state_type_name string array must be consistent with MLX5E_SQ_STATE_* enum in en.h");
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+
+ err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
+ if (err)
+ return err;
+
+ for (i = 0; i < ARRAY_SIZE(sq_sw_state_type_name); ++i) {
+ err = devlink_fmsg_u32_pair_put(fmsg, sq_sw_state_type_name[i],
+ test_bit(i, &sq->state));
+ if (err)
+ return err;
+ }
+
+ err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ if (err)
+ return err;
+
+ return devlink_fmsg_obj_nest_end(fmsg);
+}
+
static int mlx5e_tx_reporter_err_cqe_recover(void *ctx)
{
struct mlx5_core_dev *mdev;
if (err)
return err;
+ err = mlx5e_health_sq_put_sw_state(fmsg, sq);
+ if (err)
+ return err;
+
err = mlx5e_health_cq_diag_fmsg(&sq->cq, fmsg);
if (err)
return err;
if (mlx5_lag_mpesw_do_mirred(priv->mdev, out_dev, extack))
return -EOPNOTSUPP;
+ if (netif_is_macvlan(out_dev))
+ out_dev = macvlan_dev_real_dev(out_dev);
+
out_dev = get_fdb_out_dev(uplink_dev, out_dev);
if (!out_dev)
return -ENODEV;
return err;
}
- if (netif_is_macvlan(out_dev))
- out_dev = macvlan_dev_real_dev(out_dev);
-
err = verify_uplink_forwarding(priv, attr, out_dev, extack);
if (err)
return err;
bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b);
+bool mlx5e_tc_tun_encap_info_equal_options(struct mlx5e_encap_key *a,
+ struct mlx5e_encap_key *b,
+ __be16 tun_flags);
#endif /* CONFIG_MLX5_ESWITCH */
#endif //__MLX5_EN_TC_TUNNEL_H__
#include <net/fib_notifier.h>
#include <net/nexthop.h>
+#include <net/ip_tunnels.h>
#include "tc_tun_encap.h"
#include "en_tc.h"
#include "tc_tun.h"
#if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6)
else if (ip_version == 6) {
int ipv6_size = MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6);
- struct in6_addr zerov6 = {};
daddr = MLX5_ADDR_OF(fte_match_param, spec->match_value,
outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6);
outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6);
memcpy(&tun_attr->dst_ip.v6, daddr, ipv6_size);
memcpy(&tun_attr->src_ip.v6, saddr, ipv6_size);
- if (!memcmp(&tun_attr->dst_ip.v6, &zerov6, sizeof(zerov6)) ||
- !memcmp(&tun_attr->src_ip.v6, &zerov6, sizeof(zerov6)))
+ if (ipv6_addr_any(&tun_attr->dst_ip.v6) ||
+ ipv6_addr_any(&tun_attr->src_ip.v6))
return 0;
}
#endif
a->tc_tunnel->tunnel_type == b->tc_tunnel->tunnel_type;
}
+bool mlx5e_tc_tun_encap_info_equal_options(struct mlx5e_encap_key *a,
+ struct mlx5e_encap_key *b,
+ __be16 tun_flags)
+{
+ struct ip_tunnel_info *a_info;
+ struct ip_tunnel_info *b_info;
+ bool a_has_opts, b_has_opts;
+
+ if (!mlx5e_tc_tun_encap_info_equal_generic(a, b))
+ return false;
+
+ a_has_opts = !!(a->ip_tun_key->tun_flags & tun_flags);
+ b_has_opts = !!(b->ip_tun_key->tun_flags & tun_flags);
+
+ /* keys are equal when both don't have any options attached */
+ if (!a_has_opts && !b_has_opts)
+ return true;
+
+ if (a_has_opts != b_has_opts)
+ return false;
+
+ /* options stored in memory next to ip_tunnel_info struct */
+ a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key);
+ b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key);
+
+ return a_info->options_len == b_info->options_len &&
+ !memcmp(ip_tunnel_info_opts(a_info),
+ ip_tunnel_info_opts(b_info),
+ a_info->options_len);
+}
+
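A minimal user-space model of the "options stored next to the struct" layout the helper relies on: a fixed header followed in the same allocation by a variable-length options blob, compared by length plus memcmp(), analogous to ip_tunnel_info_opts() on struct ip_tunnel_info. The struct and field names below are illustrative, not the kernel's.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct tun_info {
	uint32_t options_len;
	uint8_t options[];		/* blob follows the header */
};

static int opts_equal(const struct tun_info *a, const struct tun_info *b)
{
	return a->options_len == b->options_len &&
	       !memcmp(a->options, b->options, a->options_len);
}

int main(void)
{
	struct tun_info *a = calloc(1, sizeof(*a) + 4);
	struct tun_info *b = calloc(1, sizeof(*b) + 4);

	a->options_len = b->options_len = 4;
	memcpy(a->options, "\x00\x01\x02\x03", 4);
	memcpy(b->options, "\x00\x01\x02\x03", 4);
	printf("equal=%d\n", opts_equal(a, b));
	free(a);
	free(b);
	return 0;
}
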
static int cmp_decap_info(struct mlx5e_decap_key *a,
struct mlx5e_decap_key *b)
{
static bool mlx5e_tc_tun_encap_info_equal_geneve(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b)
{
- struct ip_tunnel_info *a_info;
- struct ip_tunnel_info *b_info;
- bool a_has_opts, b_has_opts;
-
- if (!mlx5e_tc_tun_encap_info_equal_generic(a, b))
- return false;
-
- a_has_opts = !!(a->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
- b_has_opts = !!(b->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
-
- /* keys are equal when both don't have any options attached */
- if (!a_has_opts && !b_has_opts)
- return true;
-
- if (a_has_opts != b_has_opts)
- return false;
-
- /* geneve options stored in memory next to ip_tunnel_info struct */
- a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key);
- b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key);
-
- return a_info->options_len == b_info->options_len &&
- memcmp(a_info + 1, b_info + 1, a_info->options_len) == 0;
+ return mlx5e_tc_tun_encap_info_equal_options(a, b, TUNNEL_GENEVE_OPT);
}
struct mlx5e_tc_tunnel geneve_tunnel = {
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
/* Copyright (c) 2018 Mellanox Technologies. */
+#include <net/ip_tunnels.h>
#include <net/vxlan.h>
#include "lib/vxlan.h"
#include "en/tc_tun.h"
const struct ip_tunnel_key *tun_key = &e->tun_info->key;
__be32 tun_id = tunnel_id_to_key32(tun_key->tun_id);
struct udphdr *udp = (struct udphdr *)(buf);
+ const struct vxlan_metadata *md;
struct vxlanhdr *vxh;
- if (tun_key->tun_flags & TUNNEL_VXLAN_OPT)
+ if ((tun_key->tun_flags & TUNNEL_VXLAN_OPT) &&
+ e->tun_info->options_len != sizeof(*md))
return -EOPNOTSUPP;
vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr));
*ip_proto = IPPROTO_UDP;
udp->dest = tun_key->tp_dst;
vxh->vx_flags = VXLAN_HF_VNI;
vxh->vx_vni = vxlan_vni_field(tun_id);
+ if (tun_key->tun_flags & TUNNEL_VXLAN_OPT) {
+ md = ip_tunnel_info_opts(e->tun_info);
+ vxlan_build_gbp_hdr(vxh, md);
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_vxlan_gbp_option(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct flow_match_enc_opts enc_opts;
+ void *misc5_c, *misc5_v;
+ u32 *gbp, *gbp_mask;
+
+ flow_rule_match_enc_opts(rule, &enc_opts);
+
+ if (memchr_inv(&enc_opts.mask->data, 0, sizeof(enc_opts.mask->data)) &&
+ !MLX5_CAP_ESW_FT_FIELD_SUPPORT_2(priv->mdev, tunnel_header_0_1)) {
+ NL_SET_ERR_MSG_MOD(extack, "Matching on VxLAN GBP is not supported");
+ return -EOPNOTSUPP;
+ }
+
+ if (enc_opts.key->dst_opt_type != TUNNEL_VXLAN_OPT) {
+ NL_SET_ERR_MSG_MOD(extack, "Wrong VxLAN option type: not GBP");
+ return -EOPNOTSUPP;
+ }
+
+ if (enc_opts.key->len != sizeof(*gbp) ||
+ enc_opts.mask->len != sizeof(*gbp_mask)) {
+ NL_SET_ERR_MSG_MOD(extack, "VxLAN GBP option/mask len is not 32 bits");
+ return -EINVAL;
+ }
+
+ gbp = (u32 *)&enc_opts.key->data[0];
+ gbp_mask = (u32 *)&enc_opts.mask->data[0];
+
+ if (*gbp_mask & ~VXLAN_GBP_MASK) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Wrong VxLAN GBP mask(0x%08X)\n", *gbp_mask);
+ return -EINVAL;
+ }
+
+ misc5_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters_5);
+ misc5_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters_5);
+ MLX5_SET(fte_match_set_misc5, misc5_c, tunnel_header_0, *gbp_mask);
+ MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, *gbp);
+
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_5;
return 0;
}
if (!enc_keyid.mask->keyid)
return 0;
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS)) {
+ int err;
+
+ err = mlx5e_tc_tun_parse_vxlan_gbp_option(priv, spec, f);
+ if (err)
+ return err;
+ }
+
/* match on VNI is required */
if (!MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
return 0;
}
+static bool mlx5e_tc_tun_encap_info_equal_vxlan(struct mlx5e_encap_key *a,
+ struct mlx5e_encap_key *b)
+{
+ return mlx5e_tc_tun_encap_info_equal_options(a, b, TUNNEL_VXLAN_OPT);
+}
+
static int mlx5e_tc_tun_get_remote_ifindex(struct net_device *mirred_dev)
{
const struct vxlan_dev *vxlan = netdev_priv(mirred_dev);
.generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan,
.parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan,
.parse_tunnel = mlx5e_tc_tun_parse_vxlan,
- .encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
+ .encap_info_equal = mlx5e_tc_tun_encap_info_equal_vxlan,
.get_remote_ifindex = mlx5e_tc_tun_get_remote_ifindex,
};
int mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
/* RX */
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page);
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle);
INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq));
INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq));
int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
void mlx5e_free_rx_descs(struct mlx5e_rq *rq);
-void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq);
+void mlx5e_free_rx_missing_descs(struct mlx5e_rq *rq);
static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
{
static inline bool
mlx5e_skb_fifo_has_room(struct mlx5e_skb_fifo *fifo)
{
- return (u16)(*fifo->pc - *fifo->cc) < fifo->mask;
+ return (u16)(*fifo->pc - *fifo->cc) <= fifo->mask;
}
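A tiny model of the occupancy arithmetic behind this change, assuming an 8-entry FIFO: pc and cc are free-running counters, (pc - cc) is the in-flight count, and room remains while that count is at most mask, so the earlier "< mask" test gave up one usable slot.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t mask = 7;		/* 8-entry FIFO (assumed size) */
	uint16_t pc = 7, cc = 0;	/* 7 entries in flight, 1 slot free */

	int old_has_room = (uint16_t)(pc - cc) < mask;   /* 0: reported full */
	int new_has_room = (uint16_t)(pc - cc) <= mask;  /* 1: one slot left */

	printf("old=%d new=%d\n", old_has_room, new_has_room);
	return 0;
}
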
static inline bool
static inline struct mlx5e_mpw_info *mlx5e_get_mpw_info(struct mlx5e_rq *rq, int i)
{
- size_t isz = struct_size(rq->mpwqe.info, alloc_units, rq->mpwqe.pages_per_wqe);
+ size_t isz = struct_size(rq->mpwqe.info, alloc_units.frag_pages, rq->mpwqe.pages_per_wqe);
return (struct mlx5e_mpw_info *)((char *)rq->mpwqe.info + array_size(i, isz));
}
goto xdp_abort;
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
- if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL)
- mlx5e_page_dma_unmap(rq, virt_to_page(xdp->data));
rq->stats->xdp_redirect++;
return true;
default:
static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
struct mlx5e_xdp_wqe_info *wi,
u32 *xsk_frames,
- bool recycle,
struct xdp_frame_bulk *bq)
{
struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo;
break;
case MLX5E_XDP_XMIT_MODE_PAGE:
/* XDP_TX from the regular RQ */
- mlx5e_page_release_dynamic(xdpi.page.rq, xdpi.page.page, recycle);
+ page_pool_put_defragged_page(xdpi.page.rq->page_pool,
+ xdpi.page.page, -1, true);
break;
case MLX5E_XDP_XMIT_MODE_XSK:
/* AF_XDP send */
sqcc += wi->num_wqebbs;
- mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, true, &bq);
+ mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, &bq);
} while (!last_wqe);
if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) {
sq->cc += wi->num_wqebbs;
- mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, false, &bq);
+ mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, &bq);
}
xdp_flush_frame_bulk(&bq);
struct mlx5e_icosq *icosq = rq->icosq;
struct mlx5_wq_cyc *wq = &icosq->wq;
struct mlx5e_umr_wqe *umr_wqe;
+ struct xdp_buff **xsk_buffs;
int batch, i;
u32 offset; /* 17-bit value with MTT. */
u16 pi;
if (unlikely(!xsk_buff_can_alloc(rq->xsk_pool, rq->mpwqe.pages_per_wqe)))
goto err;
- BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
- batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
+ xsk_buffs = (struct xdp_buff **)wi->alloc_units.xsk_buffs;
+ batch = xsk_buff_alloc_batch(rq->xsk_pool, xsk_buffs,
rq->mpwqe.pages_per_wqe);
/* If batch < pages_per_wqe, either:
* the first error, which will mean there are no more valid descriptors.
*/
for (; batch < rq->mpwqe.pages_per_wqe; batch++) {
- wi->alloc_units[batch].xsk = xsk_buff_alloc(rq->xsk_pool);
- if (unlikely(!wi->alloc_units[batch].xsk))
+ xsk_buffs[batch] = xsk_buff_alloc(rq->xsk_pool);
+ if (unlikely(!xsk_buffs[batch]))
goto err_reuse_batch;
}
if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_ALIGNED)) {
for (i = 0; i < batch; i++) {
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
- dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(xsk_buffs[i]);
+ dma_addr_t addr = xsk_buff_xdp_get_frame_dma(xsk_buffs[i]);
umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
.ptag = cpu_to_be64(addr | MLX5_EN_WR),
}
} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
for (i = 0; i < batch; i++) {
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
- dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(xsk_buffs[i]);
+ dma_addr_t addr = xsk_buff_xdp_get_frame_dma(xsk_buffs[i]);
umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
.key = rq->mkey_be,
u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
for (i = 0; i < batch; i++) {
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
- dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(xsk_buffs[i]);
+ dma_addr_t addr = xsk_buff_xdp_get_frame_dma(xsk_buffs[i]);
umr_wqe->inline_ksms[i << 2] = (struct mlx5_ksm) {
.key = rq->mkey_be,
__be32 frame_size = cpu_to_be32(rq->xsk_pool->chunk_size);
for (i = 0; i < batch; i++) {
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[i].xsk);
- dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(xsk_buffs[i]);
+ dma_addr_t addr = xsk_buff_xdp_get_frame_dma(xsk_buffs[i]);
umr_wqe->inline_klms[i << 1] = (struct mlx5_klm) {
.key = rq->mkey_be,
}
}
- bitmap_zero(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe);
+ bitmap_zero(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
wi->consumed_strides = 0;
umr_wqe->ctrl.opmod_idx_opcode =
err_reuse_batch:
while (--batch >= 0)
- xsk_buff_free(wi->alloc_units[batch].xsk);
+ xsk_buff_free(xsk_buffs[batch]);
err:
rq->stats->buff_alloc_err++;
u32 contig, alloc;
int i;
- /* mlx5e_init_frags_partition creates a 1:1 mapping between
- * rq->wqe.frags and rq->wqe.alloc_units, which allows us to
- * allocate XDP buffers straight into alloc_units.
+ /* Each rq->wqe.frags->xskp is 1:1 mapped to an element inside the
+ * rq->wqe.alloc_units->xsk_buffs array allocated here.
*/
- BUILD_BUG_ON(sizeof(rq->wqe.alloc_units[0]) !=
- sizeof(rq->wqe.alloc_units[0].xsk));
- buffs = (struct xdp_buff **)rq->wqe.alloc_units;
+ buffs = rq->wqe.alloc_units->xsk_buffs;
contig = mlx5_wq_cyc_get_size(wq) - ix;
if (wqe_bulk <= contig) {
alloc = xsk_buff_alloc_batch(rq->xsk_pool, buffs + ix, wqe_bulk);
/* Assumes log_num_frags == 0. */
frag = &rq->wqe.frags[j];
- addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+ addr = xsk_buff_xdp_get_frame_dma(*frag->xskp);
wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+ frag->flags &= ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
return alloc;
/* Assumes log_num_frags == 0. */
frag = &rq->wqe.frags[j];
- frag->au->xsk = xsk_buff_alloc(rq->xsk_pool);
- if (unlikely(!frag->au->xsk))
+ *frag->xskp = xsk_buff_alloc(rq->xsk_pool);
+ if (unlikely(!*frag->xskp))
return i;
- addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+ addr = xsk_buff_xdp_get_frame_dma(*frag->xskp);
wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+ frag->flags &= ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
return wqe_bulk;
u32 head_offset,
u32 page_idx)
{
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[page_idx].xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units.xsk_buffs[page_idx]);
struct bpf_prog *prog;
/* Check packet size. Note LRO doesn't use linear SKB */
prog = rcu_dereference(rq->xdp_prog);
if (likely(prog && mlx5e_xdp_handle(rq, prog, mxbuf))) {
if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
- __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
+ __set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
}
struct mlx5_cqe64 *cqe,
u32 cqe_bcnt)
{
- struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->au->xsk);
+ struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(*wi->xskp);
struct bpf_prog *prog;
/* wi->offset is not used in this function, because xdp->data and the
struct mlx5e_rq_param *rq_params, struct xsk_buff_pool *pool,
struct mlx5e_xsk_param *xsk)
{
+ struct mlx5e_rq *xskrq = &c->xskrq;
int err;
- err = mlx5e_init_xsk_rq(c, params, pool, xsk, &c->xskrq);
+ err = mlx5e_init_xsk_rq(c, params, pool, xsk, xskrq);
if (err)
return err;
- return mlx5e_open_rq(params, rq_params, xsk, cpu_to_node(c->cpu), &c->xskrq);
+ err = mlx5e_open_rq(params, rq_params, xsk, cpu_to_node(c->cpu), xskrq);
+ if (err)
+ return err;
+
+ __set_bit(MLX5E_RQ_STATE_XSK, &xskrq->state);
+ return 0;
}
int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
struct net_device *netdev = x->xso.real_dev;
struct mlx5e_ipsec *ipsec;
struct mlx5e_priv *priv;
+ gfp_t gfp;
int err;
priv = netdev_priv(netdev);
return -EOPNOTSUPP;
ipsec = priv->ipsec;
- err = mlx5e_xfrm_validate_state(priv->mdev, x, extack);
- if (err)
- return err;
-
- sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
+ gfp = (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ) ? GFP_ATOMIC : GFP_KERNEL;
+ sa_entry = kzalloc(sizeof(*sa_entry), gfp);
if (!sa_entry)
return -ENOMEM;
sa_entry->x = x;
sa_entry->ipsec = ipsec;
+	/* Check if this SA originated from an acquire flow temporary SA */
+ if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+ goto out;
+
+ err = mlx5e_xfrm_validate_state(priv->mdev, x, extack);
+ if (err)
+ goto err_xfrm;
/* check esn */
mlx5e_ipsec_update_esn_state(sa_entry);
mlx5e_ipsec_set_iv_esn : mlx5e_ipsec_set_iv;
INIT_WORK(&sa_entry->modify_work.work, _update_xfrm_state);
+out:
x->xso.offload_handle = (unsigned long)sa_entry;
return 0;
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
struct mlx5e_ipsec_sa_entry *old;
+ if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+ return;
+
old = xa_erase_bh(&ipsec->sadb, sa_entry->ipsec_obj_id);
WARN_ON(old != sa_entry);
}
{
struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+ if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+ goto sa_entry_free;
+
cancel_work_sync(&sa_entry->modify_work.work);
mlx5e_accel_ipsec_fs_del_rule(sa_entry);
mlx5_ipsec_free_sa_ctx(sa_entry);
+sa_entry_free:
kfree(sa_entry);
}
static void mlx5e_xfrm_update_curlft(struct xfrm_state *x)
{
struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
- int err;
+ struct mlx5e_ipsec_rule *ipsec_rule = &sa_entry->ipsec_rule;
+ u64 packets, bytes, lastuse;
- lockdep_assert_held(&x->lock);
+ lockdep_assert(lockdep_is_held(&x->lock) ||
+ lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_cfg_mutex));
- if (sa_entry->attrs.soft_packet_limit == XFRM_INF)
- /* Limits are not configured, as soft limit
- * must be lowever than hard limit.
- */
+ if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
return;
- err = mlx5e_ipsec_aso_query(sa_entry, NULL);
- if (err)
- return;
-
- mlx5e_ipsec_aso_update_curlft(sa_entry, &x->curlft.packets);
+ mlx5_fc_query_cached(ipsec_rule->fc, &bytes, &packets, &lastuse);
+ x->curlft.packets += packets;
+ x->curlft.bytes += bytes;
}
-static int mlx5e_xfrm_validate_policy(struct xfrm_policy *x,
+static int mlx5e_xfrm_validate_policy(struct mlx5_core_dev *mdev,
+ struct xfrm_policy *x,
struct netlink_ext_ack *extack)
{
+ struct xfrm_selector *sel = &x->selector;
+
if (x->type != XFRM_POLICY_TYPE_MAIN) {
NL_SET_ERR_MSG_MOD(extack, "Cannot offload non-main policy types");
return -EINVAL;
return -EINVAL;
}
- if (!x->xfrm_vec[0].reqid) {
- NL_SET_ERR_MSG_MOD(extack, "Cannot offload policy without reqid");
+ if (!x->xfrm_vec[0].reqid && sel->proto == IPPROTO_IP &&
+ addr6_all_zero(sel->saddr.a6) && addr6_all_zero(sel->daddr.a6)) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported policy with reqid 0 without at least one of upper protocol or ip addr(s) different than 0");
return -EINVAL;
}
return -EINVAL;
}
- if (x->selector.proto != IPPROTO_IP &&
- (x->selector.proto != IPPROTO_UDP || x->xdo.dir != XFRM_DEV_OFFLOAD_OUT)) {
+ if (sel->proto != IPPROTO_IP &&
+ (sel->proto != IPPROTO_UDP || x->xdo.dir != XFRM_DEV_OFFLOAD_OUT)) {
NL_SET_ERR_MSG_MOD(extack, "Device does not support upper protocol other than UDP, and only Tx direction");
return -EINVAL;
}
+ if (x->priority) {
+ if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PRIO)) {
+ NL_SET_ERR_MSG_MOD(extack, "Device does not support policy priority");
+ return -EINVAL;
+ }
+
+ if (x->priority == U32_MAX) {
+ NL_SET_ERR_MSG_MOD(extack, "Device does not support requested policy priority");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
attrs->upspec.sport = ntohs(sel->sport);
attrs->upspec.sport_mask = ntohs(sel->sport_mask);
attrs->upspec.proto = sel->proto;
+ attrs->prio = x->priority;
}
static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
return -EOPNOTSUPP;
}
- err = mlx5e_xfrm_validate_policy(x, extack);
+ err = mlx5e_xfrm_validate_policy(priv->mdev, x, extack);
if (err)
return err;
MLX5_IPSEC_CAP_ESN = 1 << 1,
MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2,
MLX5_IPSEC_CAP_ROCE = 1 << 3,
+ MLX5_IPSEC_CAP_PRIO = 1 << 4,
};
struct mlx5e_priv;
struct mlx5_flow_handle *rule;
struct mlx5_modify_hdr *modify_hdr;
struct mlx5_pkt_reformat *pkt_reformat;
+ struct mlx5_fc *fc;
};
struct mlx5e_ipsec_modify_state_work {
u8 type : 2;
u8 dir : 2;
u32 reqid;
+ u32 prio;
};
struct mlx5e_ipsec_pol_entry {
int mlx5e_ipsec_aso_query(struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_wqe_aso_ctrl_seg *data);
-void mlx5e_ipsec_aso_update_curlft(struct mlx5e_ipsec_sa_entry *sa_entry,
- u64 *packets);
-
void mlx5e_accel_ipsec_fs_read_stats(struct mlx5e_priv *priv,
void *ipsec_stats);
{
return pol_entry->ipsec->mdev;
}
+
+static inline bool addr6_all_zero(__be32 *addr6)
+{
+ static const __be32 zaddr6[4] = {};
+
+	return !memcmp(addr6, zaddr6, sizeof(zaddr6));
+}
#else
static inline void mlx5e_ipsec_init(struct mlx5e_priv *priv)
{
#include "ipsec.h"
#include "fs_core.h"
#include "lib/ipsec_fs_roce.h"
+#include "lib/fs_chains.h"
#define NUM_IPSEC_FTE BIT(15)
struct mlx5e_ipsec_miss sa;
struct mlx5e_ipsec_rule status;
struct mlx5e_ipsec_fc *fc;
+ struct mlx5_fs_chains *chains;
};
struct mlx5e_ipsec_tx {
struct mlx5e_ipsec_ft ft;
struct mlx5e_ipsec_miss pol;
+ struct mlx5e_ipsec_rule status;
struct mlx5_flow_namespace *ns;
struct mlx5e_ipsec_fc *fc;
+ struct mlx5_fs_chains *chains;
};
/* IPsec RX flow steering */
return MLX5_TT_IPV6_IPSEC_ESP;
}
+static struct mlx5e_ipsec_rx *ipsec_rx(struct mlx5e_ipsec *ipsec, u32 family)
+{
+ if (family == AF_INET)
+ return ipsec->rx_ipv4;
+
+ return ipsec->rx_ipv6;
+}
+
+static struct mlx5_fs_chains *
+ipsec_chains_create(struct mlx5_core_dev *mdev, struct mlx5_flow_table *miss_ft,
+ enum mlx5_flow_namespace_type ns, int base_prio,
+ int base_level, struct mlx5_flow_table **root_ft)
+{
+ struct mlx5_chains_attr attr = {};
+ struct mlx5_fs_chains *chains;
+ struct mlx5_flow_table *ft;
+ int err;
+
+ attr.flags = MLX5_CHAINS_AND_PRIOS_SUPPORTED |
+ MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED;
+ attr.max_grp_num = 2;
+ attr.default_ft = miss_ft;
+ attr.ns = ns;
+ attr.fs_base_prio = base_prio;
+ attr.fs_base_level = base_level;
+ chains = mlx5_chains_create(mdev, &attr);
+ if (IS_ERR(chains))
+ return chains;
+
+ /* Create chain 0, prio 1, level 0 to connect chains to prev in fs_core */
+ ft = mlx5_chains_get_table(chains, 0, 1, 0);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ goto err_chains_get;
+ }
+
+ *root_ft = ft;
+ return chains;
+
+err_chains_get:
+ mlx5_chains_destroy(chains);
+ return ERR_PTR(err);
+}
+
+static void ipsec_chains_destroy(struct mlx5_fs_chains *chains)
+{
+ mlx5_chains_put_table(chains, 0, 1, 0);
+ mlx5_chains_destroy(chains);
+}
+
+static struct mlx5_flow_table *
+ipsec_chains_get_table(struct mlx5_fs_chains *chains, u32 prio)
+{
+ return mlx5_chains_get_table(chains, 0, prio + 1, 0);
+}
+
+static void ipsec_chains_put_table(struct mlx5_fs_chains *chains, u32 prio)
+{
+ mlx5_chains_put_table(chains, 0, prio + 1, 0);
+}
+
static struct mlx5_flow_table *ipsec_ft_create(struct mlx5_flow_namespace *ns,
int level, int prio,
int max_num_groups)
static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
struct mlx5e_ipsec_rx *rx, u32 family)
{
- mlx5_del_flow_rules(rx->pol.rule);
- mlx5_destroy_flow_group(rx->pol.group);
- mlx5_destroy_flow_table(rx->ft.pol);
+ struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false);
+
+ /* disconnect */
+ mlx5_ttc_fwd_default_dest(ttc, family2tt(family));
+
+ if (rx->chains) {
+ ipsec_chains_destroy(rx->chains);
+ } else {
+ mlx5_del_flow_rules(rx->pol.rule);
+ mlx5_destroy_flow_group(rx->pol.group);
+ mlx5_destroy_flow_table(rx->ft.pol);
+ }
mlx5_del_flow_rules(rx->sa.rule);
mlx5_destroy_flow_group(rx->sa.group);
if (err)
goto err_fs;
+ if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PRIO) {
+ rx->chains = ipsec_chains_create(mdev, rx->ft.sa,
+ MLX5_FLOW_NAMESPACE_KERNEL,
+ MLX5E_NIC_PRIO,
+ MLX5E_ACCEL_FS_POL_FT_LEVEL,
+ &rx->ft.pol);
+ if (IS_ERR(rx->chains)) {
+ err = PTR_ERR(rx->chains);
+ goto err_pol_ft;
+ }
+
+ goto connect;
+ }
+
ft = ipsec_ft_create(ns, MLX5E_ACCEL_FS_POL_FT_LEVEL, MLX5E_NIC_PRIO,
2);
if (IS_ERR(ft)) {
if (err)
goto err_pol_miss;
+connect:
+ /* connect */
+ memset(dest, 0x00, sizeof(*dest));
+ dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest[0].ft = rx->ft.pol;
+ mlx5_ttc_fwd_dest(ttc, family2tt(family), &dest[0]);
return 0;
err_pol_miss:
return err;
}
-static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5_core_dev *mdev,
- struct mlx5e_ipsec *ipsec, u32 family)
+static int rx_get(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_rx *rx, u32 family)
{
- struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false);
- struct mlx5_flow_destination dest = {};
- struct mlx5e_ipsec_rx *rx;
- int err = 0;
-
- if (family == AF_INET)
- rx = ipsec->rx_ipv4;
- else
- rx = ipsec->rx_ipv6;
+ int err;
- mutex_lock(&rx->ft.mutex);
if (rx->ft.refcnt)
goto skip;
- /* create FT */
err = rx_create(mdev, ipsec, rx, family);
if (err)
- goto out;
-
- /* connect */
- dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
- dest.ft = rx->ft.pol;
- mlx5_ttc_fwd_dest(ttc, family2tt(family), &dest);
+ return err;
skip:
rx->ft.refcnt++;
-out:
+ return 0;
+}
+
+static void rx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_rx *rx,
+ u32 family)
+{
+ if (--rx->ft.refcnt)
+ return;
+
+ rx_destroy(ipsec->mdev, ipsec, rx, family);
+}
+
+static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5_core_dev *mdev,
+ struct mlx5e_ipsec *ipsec, u32 family)
+{
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family);
+ int err;
+
+ mutex_lock(&rx->ft.mutex);
+ err = rx_get(mdev, ipsec, rx, family);
mutex_unlock(&rx->ft.mutex);
if (err)
return ERR_PTR(err);
+
return rx;
}
-static void rx_ft_put(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
- u32 family)
+static struct mlx5_flow_table *rx_ft_get_policy(struct mlx5_core_dev *mdev,
+ struct mlx5e_ipsec *ipsec,
+ u32 family, u32 prio)
{
- struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false);
- struct mlx5e_ipsec_rx *rx;
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family);
+ struct mlx5_flow_table *ft;
+ int err;
- if (family == AF_INET)
- rx = ipsec->rx_ipv4;
- else
- rx = ipsec->rx_ipv6;
+ mutex_lock(&rx->ft.mutex);
+ err = rx_get(mdev, ipsec, rx, family);
+ if (err)
+ goto err_get;
+
+ ft = rx->chains ? ipsec_chains_get_table(rx->chains, prio) : rx->ft.pol;
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ goto err_get_ft;
+ }
+
+ mutex_unlock(&rx->ft.mutex);
+ return ft;
+
+err_get_ft:
+ rx_put(ipsec, rx, family);
+err_get:
+ mutex_unlock(&rx->ft.mutex);
+ return ERR_PTR(err);
+}
+
+static void rx_ft_put(struct mlx5e_ipsec *ipsec, u32 family)
+{
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family);
mutex_lock(&rx->ft.mutex);
- rx->ft.refcnt--;
- if (rx->ft.refcnt)
- goto out;
+ rx_put(ipsec, rx, family);
+ mutex_unlock(&rx->ft.mutex);
+}
- /* disconnect */
- mlx5_ttc_fwd_default_dest(ttc, family2tt(family));
+static void rx_ft_put_policy(struct mlx5e_ipsec *ipsec, u32 family, u32 prio)
+{
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family);
- /* remove FT */
- rx_destroy(mdev, ipsec, rx, family);
+ mutex_lock(&rx->ft.mutex);
+ if (rx->chains)
+ ipsec_chains_put_table(rx->chains, prio);
-out:
+ rx_put(ipsec, rx, family);
mutex_unlock(&rx->ft.mutex);
}
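+/* Catch-all rule on the TX status table: count and allow everything. */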
+static int ipsec_counter_rule_tx(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx)
+{
+ struct mlx5_flow_destination dest = {};
+ struct mlx5_flow_act flow_act = {};
+ struct mlx5_flow_handle *fte;
+ struct mlx5_flow_spec *spec;
+ int err;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
+ return -ENOMEM;
+
+ /* create fte */
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW |
+ MLX5_FLOW_CONTEXT_ACTION_COUNT;
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ dest.counter_id = mlx5_fc_id(tx->fc->cnt);
+ fte = mlx5_add_flow_rules(tx->ft.status, spec, &flow_act, &dest, 1);
+ if (IS_ERR(fte)) {
+ err = PTR_ERR(fte);
+ mlx5_core_err(mdev, "Fail to add ipsec tx counter rule err=%d\n", err);
+ goto err_rule;
+ }
+
+ kvfree(spec);
+ tx->status.rule = fte;
+ return 0;
+
+err_rule:
+ kvfree(spec);
+ return err;
+}
+
/* IPsec TX flow steering */
+static void tx_destroy(struct mlx5e_ipsec_tx *tx, struct mlx5_ipsec_fs *roce)
+{
+ mlx5_ipsec_fs_roce_tx_destroy(roce);
+ if (tx->chains) {
+ ipsec_chains_destroy(tx->chains);
+ } else {
+ mlx5_del_flow_rules(tx->pol.rule);
+ mlx5_destroy_flow_group(tx->pol.group);
+ mlx5_destroy_flow_table(tx->ft.pol);
+ }
+
+ mlx5_destroy_flow_table(tx->ft.sa);
+ mlx5_del_flow_rules(tx->status.rule);
+ mlx5_destroy_flow_table(tx->ft.status);
+}
+
static int tx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec_tx *tx,
struct mlx5_ipsec_fs *roce)
{
struct mlx5_flow_table *ft;
int err;
- ft = ipsec_ft_create(tx->ns, 1, 0, 4);
+ ft = ipsec_ft_create(tx->ns, 2, 0, 1);
if (IS_ERR(ft))
return PTR_ERR(ft);
+ tx->ft.status = ft;
+ err = ipsec_counter_rule_tx(mdev, tx);
+ if (err)
+ goto err_status_rule;
+
+ ft = ipsec_ft_create(tx->ns, 1, 0, 4);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ goto err_sa_ft;
+ }
tx->ft.sa = ft;
+ if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_PRIO) {
+ tx->chains = ipsec_chains_create(
+ mdev, tx->ft.sa, MLX5_FLOW_NAMESPACE_EGRESS_IPSEC, 0, 0,
+ &tx->ft.pol);
+ if (IS_ERR(tx->chains)) {
+ err = PTR_ERR(tx->chains);
+ goto err_pol_ft;
+ }
+
+ goto connect_roce;
+ }
+
ft = ipsec_ft_create(tx->ns, 0, 0, 2);
if (IS_ERR(ft)) {
err = PTR_ERR(ft);
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest.ft = tx->ft.sa;
err = ipsec_miss_create(mdev, tx->ft.pol, &tx->pol, &dest);
- if (err)
- goto err_pol_miss;
+ if (err) {
+ mlx5_destroy_flow_table(tx->ft.pol);
+ goto err_pol_ft;
+ }
+connect_roce:
err = mlx5_ipsec_fs_roce_tx_create(mdev, roce, tx->ft.pol);
if (err)
goto err_roce;
return 0;
err_roce:
- mlx5_del_flow_rules(tx->pol.rule);
- mlx5_destroy_flow_group(tx->pol.group);
-err_pol_miss:
- mlx5_destroy_flow_table(tx->ft.pol);
+ if (tx->chains) {
+ ipsec_chains_destroy(tx->chains);
+ } else {
+ mlx5_del_flow_rules(tx->pol.rule);
+ mlx5_destroy_flow_group(tx->pol.group);
+ mlx5_destroy_flow_table(tx->ft.pol);
+ }
err_pol_ft:
mlx5_destroy_flow_table(tx->ft.sa);
+err_sa_ft:
+ mlx5_del_flow_rules(tx->status.rule);
+err_status_rule:
+ mlx5_destroy_flow_table(tx->ft.status);
return err;
}
-static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5_core_dev *mdev,
- struct mlx5e_ipsec *ipsec)
+static int tx_get(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_tx *tx)
{
- struct mlx5e_ipsec_tx *tx = ipsec->tx;
- int err = 0;
+ int err;
- mutex_lock(&tx->ft.mutex);
if (tx->ft.refcnt)
goto skip;
err = tx_create(mdev, tx, ipsec->roce);
if (err)
- goto out;
+ return err;
skip:
tx->ft.refcnt++;
-out:
+ return 0;
+}
+
+static void tx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx)
+{
+ if (--tx->ft.refcnt)
+ return;
+
+ tx_destroy(tx, ipsec->roce);
+}
+
+static struct mlx5_flow_table *tx_ft_get_policy(struct mlx5_core_dev *mdev,
+ struct mlx5e_ipsec *ipsec,
+ u32 prio)
+{
+ struct mlx5e_ipsec_tx *tx = ipsec->tx;
+ struct mlx5_flow_table *ft;
+ int err;
+
+ mutex_lock(&tx->ft.mutex);
+ err = tx_get(mdev, ipsec, tx);
+ if (err)
+ goto err_get;
+
+ ft = tx->chains ? ipsec_chains_get_table(tx->chains, prio) : tx->ft.pol;
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ goto err_get_ft;
+ }
+
+ mutex_unlock(&tx->ft.mutex);
+ return ft;
+
+err_get_ft:
+ tx_put(ipsec, tx);
+err_get:
+ mutex_unlock(&tx->ft.mutex);
+ return ERR_PTR(err);
+}
+
+static struct mlx5e_ipsec_tx *tx_ft_get(struct mlx5_core_dev *mdev,
+ struct mlx5e_ipsec *ipsec)
+{
+ struct mlx5e_ipsec_tx *tx = ipsec->tx;
+ int err;
+
+ mutex_lock(&tx->ft.mutex);
+ err = tx_get(mdev, ipsec, tx);
mutex_unlock(&tx->ft.mutex);
if (err)
return ERR_PTR(err);
+
return tx;
}
struct mlx5e_ipsec_tx *tx = ipsec->tx;
mutex_lock(&tx->ft.mutex);
- tx->ft.refcnt--;
- if (tx->ft.refcnt)
- goto out;
+ tx_put(ipsec, tx);
+ mutex_unlock(&tx->ft.mutex);
+}
- mlx5_ipsec_fs_roce_tx_destroy(ipsec->roce);
- mlx5_del_flow_rules(tx->pol.rule);
- mlx5_destroy_flow_group(tx->pol.group);
- mlx5_destroy_flow_table(tx->ft.pol);
- mlx5_destroy_flow_table(tx->ft.sa);
-out:
+static void tx_ft_put_policy(struct mlx5e_ipsec *ipsec, u32 prio)
+{
+ struct mlx5e_ipsec_tx *tx = ipsec->tx;
+
+ mutex_lock(&tx->ft.mutex);
+ if (tx->chains)
+ ipsec_chains_put_table(tx->chains, prio);
+
+ tx_put(ipsec, tx);
mutex_unlock(&tx->ft.mutex);
}
static void setup_fte_addr4(struct mlx5_flow_spec *spec, __be32 *saddr,
__be32 *daddr)
{
+ if (!*saddr && !*daddr)
+ return;
+
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_version);
MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_version, 4);
- memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
- outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4), saddr, 4);
- memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
- outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4), daddr, 4);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
- outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4);
- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
- outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+ if (*saddr) {
+ memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4), saddr, 4);
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
+ outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4);
+ }
+
+ if (*daddr) {
+ memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4), daddr, 4);
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
+ outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+ }
}
static void setup_fte_addr6(struct mlx5_flow_spec *spec, __be32 *saddr,
__be32 *daddr)
{
+ if (addr6_all_zero(saddr) && addr6_all_zero(daddr))
+ return;
+
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_version);
MLX5_SET(fte_match_param, spec->match_value, outer_headers.ip_version, 6);
- memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
- outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), saddr, 16);
- memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
- outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), daddr, 16);
- memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
- outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), 0xff, 16);
- memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
- outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), 0xff, 16);
+ if (!addr6_all_zero(saddr)) {
+ memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), saddr, 16);
+ memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6), 0xff, 16);
+ }
+
+ if (!addr6_all_zero(daddr)) {
+ memcpy(MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), daddr, 16);
+ memset(MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6), 0xff, 16);
+ }
}
static void setup_fte_esp(struct mlx5_flow_spec *spec)
struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
- struct mlx5_flow_destination dest = {};
+ struct mlx5_flow_destination dest[2];
struct mlx5_flow_act flow_act = {};
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
struct mlx5e_ipsec_rx *rx;
+ struct mlx5_fc *counter;
int err;
rx = rx_ft_get(mdev, ipsec, attrs->family);
break;
}
+ counter = mlx5_fc_create(mdev, true);
+ if (IS_ERR(counter)) {
+ err = PTR_ERR(counter);
+ goto err_add_cnt;
+ }
flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC;
flow_act.crypto.obj_id = sa_entry->ipsec_obj_id;
flow_act.flags |= FLOW_ACT_NO_APPEND;
flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
- MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT;
- dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
- dest.ft = rx->ft.status;
- rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, &dest, 1);
+ MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT |
+ MLX5_FLOW_CONTEXT_ACTION_COUNT;
+ dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest[0].ft = rx->ft.status;
+ dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ dest[1].counter_id = mlx5_fc_id(counter);
+ rule = mlx5_add_flow_rules(rx->ft.sa, spec, &flow_act, dest, 2);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
mlx5_core_err(mdev, "fail to add RX ipsec rule err=%d\n", err);
sa_entry->ipsec_rule.rule = rule;
sa_entry->ipsec_rule.modify_hdr = flow_act.modify_hdr;
+ sa_entry->ipsec_rule.fc = counter;
sa_entry->ipsec_rule.pkt_reformat = flow_act.pkt_reformat;
return 0;
err_add_flow:
+ mlx5_fc_destroy(mdev, counter);
+err_add_cnt:
if (flow_act.pkt_reformat)
mlx5_packet_reformat_dealloc(mdev, flow_act.pkt_reformat);
err_pkt_reformat:
err_mod_header:
kvfree(spec);
err_alloc:
- rx_ft_put(mdev, ipsec, attrs->family);
+ rx_ft_put(ipsec, attrs->family);
return err;
}
struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
- struct mlx5_flow_destination dest = {};
+ struct mlx5_flow_destination dest[2];
struct mlx5_flow_act flow_act = {};
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
struct mlx5e_ipsec_tx *tx;
- int err = 0;
+ struct mlx5_fc *counter;
+ int err;
tx = tx_ft_get(mdev, ipsec);
if (IS_ERR(tx))
setup_fte_reg_a(spec);
break;
case XFRM_DEV_OFFLOAD_PACKET:
- setup_fte_reg_c0(spec, attrs->reqid);
+ if (attrs->reqid)
+ setup_fte_reg_c0(spec, attrs->reqid);
err = setup_pkt_reformat(mdev, attrs, &flow_act);
if (err)
goto err_pkt_reformat;
break;
}
+ counter = mlx5_fc_create(mdev, true);
+ if (IS_ERR(counter)) {
+ err = PTR_ERR(counter);
+ goto err_add_cnt;
+ }
+
flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_IPSEC;
flow_act.crypto.obj_id = sa_entry->ipsec_obj_id;
flow_act.flags |= FLOW_ACT_NO_APPEND;
- flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_ALLOW |
+ flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
MLX5_FLOW_CONTEXT_ACTION_CRYPTO_ENCRYPT |
MLX5_FLOW_CONTEXT_ACTION_COUNT;
- dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
- dest.counter_id = mlx5_fc_id(tx->fc->cnt);
- rule = mlx5_add_flow_rules(tx->ft.sa, spec, &flow_act, &dest, 1);
+ dest[0].ft = tx->ft.status;
+ dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ dest[1].counter_id = mlx5_fc_id(counter);
+ rule = mlx5_add_flow_rules(tx->ft.sa, spec, &flow_act, dest, 2);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
mlx5_core_err(mdev, "fail to add TX ipsec rule err=%d\n", err);
kvfree(spec);
sa_entry->ipsec_rule.rule = rule;
+ sa_entry->ipsec_rule.fc = counter;
sa_entry->ipsec_rule.pkt_reformat = flow_act.pkt_reformat;
return 0;
err_add_flow:
+ mlx5_fc_destroy(mdev, counter);
+err_add_cnt:
if (flow_act.pkt_reformat)
mlx5_packet_reformat_dealloc(mdev, flow_act.pkt_reformat);
err_pkt_reformat:
{
struct mlx5_accel_pol_xfrm_attrs *attrs = &pol_entry->attrs;
struct mlx5_core_dev *mdev = mlx5e_ipsec_pol2dev(pol_entry);
+ struct mlx5e_ipsec_tx *tx = pol_entry->ipsec->tx;
struct mlx5_flow_destination dest[2] = {};
struct mlx5_flow_act flow_act = {};
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
- struct mlx5e_ipsec_tx *tx;
+ struct mlx5_flow_table *ft;
int err, dstn = 0;
- tx = tx_ft_get(mdev, pol_entry->ipsec);
- if (IS_ERR(tx))
- return PTR_ERR(tx);
+ ft = tx_ft_get_policy(mdev, pol_entry->ipsec, attrs->prio);
+ if (IS_ERR(ft))
+ return PTR_ERR(ft);
spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
if (!spec) {
setup_fte_no_frags(spec);
setup_fte_upper_proto_match(spec, &attrs->upspec);
- err = setup_modify_header(mdev, attrs->reqid, XFRM_DEV_OFFLOAD_OUT,
- &flow_act);
- if (err)
- goto err_mod_header;
+ if (attrs->reqid) {
+ err = setup_modify_header(mdev, attrs->reqid,
+ XFRM_DEV_OFFLOAD_OUT, &flow_act);
+ if (err)
+ goto err_mod_header;
+ }
switch (attrs->action) {
case XFRM_POLICY_ALLOW:
dest[dstn].ft = tx->ft.sa;
dest[dstn].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dstn++;
- rule = mlx5_add_flow_rules(tx->ft.pol, spec, &flow_act, dest, dstn);
+ rule = mlx5_add_flow_rules(ft, spec, &flow_act, dest, dstn);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
mlx5_core_err(mdev, "fail to add TX ipsec rule err=%d\n", err);
return 0;
err_action:
- mlx5_modify_header_dealloc(mdev, flow_act.modify_hdr);
+ if (attrs->reqid)
+ mlx5_modify_header_dealloc(mdev, flow_act.modify_hdr);
err_mod_header:
kvfree(spec);
err_alloc:
- tx_ft_put(pol_entry->ipsec);
+ tx_ft_put_policy(pol_entry->ipsec, attrs->prio);
return err;
}
struct mlx5_flow_act flow_act = {};
struct mlx5_flow_handle *rule;
struct mlx5_flow_spec *spec;
+ struct mlx5_flow_table *ft;
struct mlx5e_ipsec_rx *rx;
int err, dstn = 0;
- rx = rx_ft_get(mdev, pol_entry->ipsec, attrs->family);
- if (IS_ERR(rx))
- return PTR_ERR(rx);
+ ft = rx_ft_get_policy(mdev, pol_entry->ipsec, attrs->family, attrs->prio);
+ if (IS_ERR(ft))
+ return PTR_ERR(ft);
+
+ rx = ipsec_rx(pol_entry->ipsec, attrs->family);
spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
if (!spec) {
dest[dstn].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest[dstn].ft = rx->ft.sa;
dstn++;
- rule = mlx5_add_flow_rules(rx->ft.pol, spec, &flow_act, dest, dstn);
+ rule = mlx5_add_flow_rules(ft, spec, &flow_act, dest, dstn);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
mlx5_core_err(mdev, "Fail to add RX IPsec policy rule err=%d\n", err);
err_action:
kvfree(spec);
err_alloc:
- rx_ft_put(mdev, pol_entry->ipsec, attrs->family);
+ rx_ft_put_policy(pol_entry->ipsec, attrs->family, attrs->prio);
return err;
}
struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
mlx5_del_flow_rules(ipsec_rule->rule);
-
+ mlx5_fc_destroy(mdev, ipsec_rule->fc);
if (ipsec_rule->pkt_reformat)
mlx5_packet_reformat_dealloc(mdev, ipsec_rule->pkt_reformat);
}
mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr);
- rx_ft_put(mdev, sa_entry->ipsec, sa_entry->attrs.family);
+ rx_ft_put(sa_entry->ipsec, sa_entry->attrs.family);
}
int mlx5e_accel_ipsec_fs_add_pol(struct mlx5e_ipsec_pol_entry *pol_entry)
mlx5_del_flow_rules(ipsec_rule->rule);
if (pol_entry->attrs.dir == XFRM_DEV_OFFLOAD_IN) {
- rx_ft_put(mdev, pol_entry->ipsec, pol_entry->attrs.family);
+ rx_ft_put_policy(pol_entry->ipsec, pol_entry->attrs.family,
+ pol_entry->attrs.prio);
return;
}
- mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr);
- tx_ft_put(pol_entry->ipsec);
+ if (ipsec_rule->modify_hdr)
+ mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr);
+
+ tx_ft_put_policy(pol_entry->ipsec, pol_entry->attrs.prio);
}
void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec)
MLX5_CAP_ETH(mdev, insert_trailer) && MLX5_CAP_ETH(mdev, swp))
caps |= MLX5_IPSEC_CAP_CRYPTO;
- if (MLX5_CAP_IPSEC(mdev, ipsec_full_offload) &&
- MLX5_CAP_FLOWTABLE_NIC_TX(mdev, reformat_add_esp_trasport) &&
- MLX5_CAP_FLOWTABLE_NIC_RX(mdev, reformat_del_esp_trasport) &&
- MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap))
- caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD;
+ if (MLX5_CAP_IPSEC(mdev, ipsec_full_offload)) {
+ if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev,
+ reformat_add_esp_trasport) &&
+ MLX5_CAP_FLOWTABLE_NIC_RX(mdev,
+ reformat_del_esp_trasport) &&
+ MLX5_CAP_FLOWTABLE_NIC_RX(mdev, decap))
+ caps |= MLX5_IPSEC_CAP_PACKET_OFFLOAD;
+
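+ /* Policy priorities are implemented with fs_chains and require
+ * ignore_flow_level support on both RX and TX.
+ */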
+ if (MLX5_CAP_FLOWTABLE_NIC_TX(mdev, ignore_flow_level) &&
+ MLX5_CAP_FLOWTABLE_NIC_RX(mdev, ignore_flow_level))
+ caps |= MLX5_IPSEC_CAP_PRIO;
+ }
if (mlx5_get_roce_state(mdev) &&
MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_RX_2_NIC_RX_RDMA &&
spin_unlock_bh(&aso->lock);
return ret;
}
-
-void mlx5e_ipsec_aso_update_curlft(struct mlx5e_ipsec_sa_entry *sa_entry,
- u64 *packets)
-{
- struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
- struct mlx5e_ipsec_aso *aso = ipsec->aso;
- u64 hard_cnt;
-
- hard_cnt = MLX5_GET(ipsec_aso, aso->ctx, remove_flow_pkt_cnt);
- /* HW decreases the limit till it reaches zero to fire an event.
- * We need to fix the calculations, so the returned count is a total
- * number of passed packets and not how much left.
- */
- *packets = sa_entry->attrs.hard_packet_limit - hard_cnt;
-}
struct ptys2ethtool_config **arr,
u32 *size)
{
- bool ext = mlx5e_ptys_ext_supported(mdev);
+ bool ext = mlx5_ptys_ext_supported(mdev);
*arr = ext ? ptys2ext_ethtool_table : ptys2legacy_ethtool_table;
*size = ext ? ARRAY_SIZE(ptys2ext_ethtool_table) :
if (!netif_carrier_ok(netdev))
goto out;
- speed = mlx5e_port_ptys2speed(priv->mdev, eth_proto_oper, force_legacy);
+ speed = mlx5_port_ptys2speed(priv->mdev, eth_proto_oper, force_legacy);
if (!speed) {
if (data_rate_oper)
speed = 100 * data_rate_oper;
struct ethtool_link_ksettings *link_ksettings)
{
unsigned long *lp_advertising = link_ksettings->link_modes.lp_advertising;
- bool ext = mlx5e_ptys_ext_supported(mdev);
+ bool ext = mlx5_ptys_ext_supported(mdev);
ptys2ethtool_adver_link(lp_advertising, eth_proto_lp, ext);
}
const struct ethtool_link_ksettings *link_ksettings)
{
struct mlx5_core_dev *mdev = priv->mdev;
- struct mlx5e_port_eth_proto eproto;
+ struct mlx5_port_eth_proto eproto;
const unsigned long *adver;
bool an_changes = false;
u8 an_disable_admin;
autoneg = link_ksettings->base.autoneg;
speed = link_ksettings->base.speed;
- ext_supported = mlx5e_ptys_ext_supported(mdev);
+ ext_supported = mlx5_ptys_ext_supported(mdev);
ext = ext_requested(autoneg, adver, ext_supported);
if (!ext_supported && ext)
return -EOPNOTSUPP;
goto out;
}
link_modes = autoneg == AUTONEG_ENABLE ? ethtool2ptys_adver_func(adver) :
- mlx5e_port_speed2linkmodes(mdev, speed, !ext);
+ mlx5_port_speed2linkmodes(mdev, speed, !ext);
err = mlx5e_speed_validate(priv->netdev, ext, link_modes, autoneg);
if (err)
shampo->bitmap = bitmap_zalloc_node(shampo->hd_per_wq, GFP_KERNEL,
node);
- if (!shampo->bitmap)
- return -ENOMEM;
-
shampo->info = kvzalloc_node(array_size(shampo->hd_per_wq,
sizeof(*shampo->info)),
GFP_KERNEL, node);
- if (!shampo->info) {
- kvfree(shampo->bitmap);
- return -ENOMEM;
- }
+ shampo->pages = kvzalloc_node(array_size(shampo->hd_per_wq,
+ sizeof(*shampo->pages)),
+ GFP_KERNEL, node);
+ if (!shampo->bitmap || !shampo->info || !shampo->pages)
+ goto err_nomem;
+
return 0;
+
+err_nomem:
+ kvfree(shampo->info);
+ kvfree(shampo->bitmap);
+ kvfree(shampo->pages);
+
+ return -ENOMEM;
}
static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
{
kvfree(rq->mpwqe.shampo->bitmap);
kvfree(rq->mpwqe.shampo->info);
+ kvfree(rq->mpwqe.shampo->pages);
}
static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq, int node)
int wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq);
size_t alloc_size;
- alloc_size = array_size(wq_sz, struct_size(rq->mpwqe.info, alloc_units,
+ alloc_size = array_size(wq_sz, struct_size(rq->mpwqe.info,
+ alloc_units.frag_pages,
rq->mpwqe.pages_per_wqe));
rq->mpwqe.info = kvzalloc_node(alloc_size, GFP_KERNEL, node);
if (!rq->mpwqe.info)
return -ENOMEM;
+ /* For deferred page release (release right before alloc), make sure
+ * that on first round release is not called.
+ */
+ for (int i = 0; i < wq_sz; i++) {
+ struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, i);
+
+ bitmap_fill(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
+ }
+
mlx5e_build_umr_wqe(rq, rq->icosq, &rq->mpwqe.umr_wqe);
return 0;
struct mlx5e_wqe_frag_info *prev = NULL;
int i;
- if (rq->xsk_pool) {
- /* Assumptions used by XSK batched allocator. */
- WARN_ON(rq->wqe.info.num_frags != 1);
- WARN_ON(rq->wqe.info.log_num_frags != 0);
- WARN_ON(rq->wqe.info.arr[0].frag_stride != PAGE_SIZE);
- }
+ WARN_ON(rq->xsk_pool);
+
+ next_frag.frag_page = &rq->wqe.alloc_units->frag_pages[0];
- next_frag.au = &rq->wqe.alloc_units[0];
+ /* Skip first release due to deferred release. */
+ next_frag.flags = BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++) {
struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
for (f = 0; f < rq->wqe.info.num_frags; f++, frag++) {
if (next_frag.offset + frag_info[f].frag_stride > PAGE_SIZE) {
- next_frag.au++;
+ /* Pages are assigned at runtime. */
+ next_frag.frag_page++;
next_frag.offset = 0;
if (prev)
- prev->last_in_page = true;
+ prev->flags |= BIT(MLX5E_WQE_FRAG_LAST_IN_PAGE);
}
*frag = next_frag;
}
if (prev)
- prev->last_in_page = true;
+ prev->flags |= BIT(MLX5E_WQE_FRAG_LAST_IN_PAGE);
}
-static int mlx5e_init_au_list(struct mlx5e_rq *rq, int wq_sz, int node)
+static void mlx5e_init_xsk_buffs(struct mlx5e_rq *rq)
{
+ int i;
+
+ /* Assumptions used by XSK batched allocator. */
+ WARN_ON(rq->wqe.info.num_frags != 1);
+ WARN_ON(rq->wqe.info.log_num_frags != 0);
+ WARN_ON(rq->wqe.info.arr[0].frag_stride != PAGE_SIZE);
+
+ /* Considering the above assumptions a fragment maps to a single
+ * xsk_buff.
+ */
+ for (i = 0; i < mlx5_wq_cyc_get_size(&rq->wqe.wq); i++) {
+ rq->wqe.frags[i].xskp = &rq->wqe.alloc_units->xsk_buffs[i];
+
+ /* Skip first release due to deferred release as WQES are
+ * not allocated yet.
+ */
+ rq->wqe.frags[i].flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+ }
+}
+
+static int mlx5e_init_wqe_alloc_info(struct mlx5e_rq *rq, int node)
+{
+ int wq_sz = mlx5_wq_cyc_get_size(&rq->wqe.wq);
int len = wq_sz << rq->wqe.info.log_num_frags;
+ struct mlx5e_wqe_frag_info *frags;
+ union mlx5e_alloc_units *aus;
+ int aus_sz;
+
+ if (rq->xsk_pool)
+ aus_sz = sizeof(*aus->xsk_buffs);
+ else
+ aus_sz = sizeof(*aus->frag_pages);
+
+ aus = kvzalloc_node(array_size(len, aus_sz), GFP_KERNEL, node);
+ if (!aus)
+ return -ENOMEM;
- rq->wqe.alloc_units = kvzalloc_node(array_size(len, sizeof(*rq->wqe.alloc_units)),
- GFP_KERNEL, node);
- if (!rq->wqe.alloc_units)
+ frags = kvzalloc_node(array_size(len, sizeof(*frags)), GFP_KERNEL, node);
+ if (!frags) {
+ kvfree(aus);
return -ENOMEM;
+ }
+
+ rq->wqe.alloc_units = aus;
+ rq->wqe.frags = frags;
- mlx5e_init_frags_partition(rq);
+ if (rq->xsk_pool)
+ mlx5e_init_xsk_buffs(rq);
+ else
+ mlx5e_init_frags_partition(rq);
return 0;
}
-static void mlx5e_free_au_list(struct mlx5e_rq *rq)
+static void mlx5e_free_wqe_alloc_info(struct mlx5e_rq *rq)
{
+ kvfree(rq->wqe.frags);
kvfree(rq->wqe.alloc_units);
}
struct mlx5e_rq_param *rqp,
int node, struct mlx5e_rq *rq)
{
- struct page_pool_params pp_params = { 0 };
struct mlx5_core_dev *mdev = rq->mdev;
void *rqc = rqp->rqc;
void *rqc_wq = MLX5_ADDR_OF(rqc, rqc, wq);
rq->wqe.info = rqp->frags_info;
rq->buff.frame0_sz = rq->wqe.info.arr[0].frag_stride;
- rq->wqe.frags =
- kvzalloc_node(array_size(sizeof(*rq->wqe.frags),
- (wq_sz << rq->wqe.info.log_num_frags)),
- GFP_KERNEL, node);
- if (!rq->wqe.frags) {
- err = -ENOMEM;
- goto err_rq_wq_destroy;
- }
-
- err = mlx5e_init_au_list(rq, wq_sz, node);
+ err = mlx5e_init_wqe_alloc_info(rq, node);
if (err)
- goto err_rq_frags;
+ goto err_rq_wq_destroy;
}
if (xsk) {
xsk_pool_set_rxq_info(rq->xsk_pool, &rq->xdp_rxq);
} else {
/* Create a page_pool and register it with rxq */
+ struct page_pool_params pp_params = { 0 };
+
pp_params.order = 0;
- pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
+ pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | PP_FLAG_PAGE_FRAG;
pp_params.pool_size = pool_size;
pp_params.nid = node;
pp_params.dev = rq->pdev;
pp_params.dma_dir = rq->buff.map_dir;
+ pp_params.max_len = PAGE_SIZE;
/* page_pool can be used even when there is no rq->xdp_prog,
* given page_pool does not handle DMA mapping there is no
rq->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}
- rq->page_cache.head = 0;
- rq->page_cache.tail = 0;
-
return 0;
err_destroy_page_pool:
mlx5e_free_mpwqe_rq_drop_page(rq);
break;
default: /* MLX5_WQ_TYPE_CYCLIC */
- mlx5e_free_au_list(rq);
-err_rq_frags:
- kvfree(rq->wqe.frags);
+ mlx5e_free_wqe_alloc_info(rq);
}
err_rq_wq_destroy:
mlx5_wq_destroy(&rq->wq_ctrl);
static void mlx5e_free_rq(struct mlx5e_rq *rq)
{
struct bpf_prog *old_prog;
- int i;
if (xdp_rxq_info_is_reg(&rq->xdp_rxq)) {
old_prog = rcu_dereference_protected(rq->xdp_prog,
mlx5e_rq_free_shampo(rq);
break;
default: /* MLX5_WQ_TYPE_CYCLIC */
- kvfree(rq->wqe.frags);
- mlx5e_free_au_list(rq);
- }
-
- for (i = rq->page_cache.head; i != rq->page_cache.tail;
- i = (i + 1) & (MLX5E_CACHE_SIZE - 1)) {
- /* With AF_XDP, page_cache is not used, so this loop is not
- * entered, and it's safe to call mlx5e_page_release_dynamic
- * directly.
- */
- mlx5e_page_release_dynamic(rq, rq->page_cache.page_cache[i], false);
+ mlx5e_free_wqe_alloc_info(rq);
}
xdp_rxq_info_unreg(&rq->xdp_rxq);
return -ETIMEDOUT;
}
-void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq)
+void mlx5e_free_rx_missing_descs(struct mlx5e_rq *rq)
{
struct mlx5_wq_ll *wq;
u16 head;
wq = &rq->mpwqe.wq;
head = wq->head;
- /* Outstanding UMR WQEs (in progress) start at wq->head */
- for (i = 0; i < rq->mpwqe.umr_in_progress; i++) {
+ /* Release WQEs that are in missing state: they have been
+ * popped from the list after completion but were not freed
+ * due to deferred release.
+ * Also free the linked-list reserved entry, hence the "+ 1".
+ */
+ for (i = 0; i < mlx5_wq_ll_missing(wq) + 1; i++) {
rq->dealloc_wqe(rq, head);
head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
}
if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
- mlx5e_free_rx_in_progress_descs(rq);
+ mlx5e_free_rx_missing_descs(rq);
while (!mlx5_wq_ll_is_empty(wq)) {
struct mlx5e_rx_wqe_ll *wqe;
0, true);
} else {
struct mlx5_wq_cyc *wq = &rq->wqe.wq;
+ u16 missing = mlx5_wq_cyc_missing(wq);
+ u16 head = mlx5_wq_cyc_get_head(wq);
while (!mlx5_wq_cyc_is_empty(wq)) {
wqe_ix = mlx5_wq_cyc_get_tail(wq);
rq->dealloc_wqe(rq, wqe_ix);
mlx5_wq_cyc_pop(wq);
}
+ /* Missing slots might also contain unreleased pages due to
+ * deferred release.
+ */
+ while (missing--) {
+ wqe_ix = mlx5_wq_cyc_ctr2ix(wq, head++);
+ rq->dealloc_wqe(rq, wqe_ix);
+ }
}
}
__set_bit(MLX5E_RQ_STATE_CSUM_FULL, &rq->state);
if (params->rx_dim_enabled)
- __set_bit(MLX5E_RQ_STATE_AM, &rq->state);
+ __set_bit(MLX5E_RQ_STATE_DIM, &rq->state);
/* We disable csum_complete when XDP is enabled since
* XDP programs might manipulate packets which will render
mlx5e_set_sq_maxrate(c->netdev, sq, tx_rate);
if (params->tx_dim_enabled)
- sq->state |= BIT(MLX5E_SQ_STATE_AM);
+ sq->state |= BIT(MLX5E_SQ_STATE_DIM);
return 0;
/* Validate the max_wqe_size_sq capability. */
if (WARN_ON_ONCE(mlx5e_get_max_sq_wqebbs(priv->mdev) < MLX5E_MAX_TX_WQEBBS)) {
- mlx5_core_warn(priv->mdev, "MLX5E: Max SQ WQEBBs firmware capability: %u, needed %lu\n",
- mlx5e_get_max_sq_wqebbs(priv->mdev), MLX5E_MAX_TX_WQEBBS);
+ mlx5_core_warn(priv->mdev, "MLX5E: Max SQ WQEBBs firmware capability: %u, needed %u\n",
+ mlx5e_get_max_sq_wqebbs(priv->mdev), (unsigned int)MLX5E_MAX_TX_WQEBBS);
return -EIO;
}
return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem);
}
-static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page)
-{
- struct mlx5e_page_cache *cache = &rq->page_cache;
- u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
- struct mlx5e_rq_stats *stats = rq->stats;
-
- if (tail_next == cache->head) {
- stats->cache_full++;
- return false;
- }
-
- if (!dev_page_is_reusable(page)) {
- stats->cache_waive++;
- return false;
- }
+#define MLX5E_PAGECNT_BIAS_MAX (PAGE_SIZE / 64)
- cache->page_cache[cache->tail] = page;
- cache->tail = tail_next;
- return true;
-}
-
-static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq, union mlx5e_alloc_unit *au)
+static int mlx5e_page_alloc_fragmented(struct mlx5e_rq *rq,
+ struct mlx5e_frag_page *frag_page)
{
- struct mlx5e_page_cache *cache = &rq->page_cache;
- struct mlx5e_rq_stats *stats = rq->stats;
- dma_addr_t addr;
-
- if (unlikely(cache->head == cache->tail)) {
- stats->cache_empty++;
- return false;
- }
+ struct page *page;
- if (page_ref_count(cache->page_cache[cache->head]) != 1) {
- stats->cache_busy++;
- return false;
- }
-
- au->page = cache->page_cache[cache->head];
- cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
- stats->cache_reuse++;
-
- addr = page_pool_get_dma_addr(au->page);
- /* Non-XSK always uses PAGE_SIZE. */
- dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir);
- return true;
-}
-
-static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_unit *au)
-{
- dma_addr_t addr;
-
- if (mlx5e_rx_cache_get(rq, au))
- return 0;
-
- au->page = page_pool_dev_alloc_pages(rq->page_pool);
- if (unlikely(!au->page))
+ page = page_pool_dev_alloc_pages(rq->page_pool);
+ if (unlikely(!page))
return -ENOMEM;
- /* Non-XSK always uses PAGE_SIZE. */
- addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir);
- if (unlikely(dma_mapping_error(rq->pdev, addr))) {
- page_pool_recycle_direct(rq->page_pool, au->page);
- au->page = NULL;
- return -ENOMEM;
- }
- page_pool_set_dma_addr(au->page, addr);
-
- return 0;
-}
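+ /* Pre-fragment the page with a large bias; frag_page->frags counts the
+ * fragments handed to the stack, and the unused bias is drained on
+ * release.
+ */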
+ page_pool_fragment_page(page, MLX5E_PAGECNT_BIAS_MAX);
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page)
-{
- dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+ *frag_page = (struct mlx5e_frag_page) {
+ .page = page,
+ .frags = 0,
+ };
- dma_unmap_page_attrs(rq->pdev, dma_addr, PAGE_SIZE, rq->buff.map_dir,
- DMA_ATTR_SKIP_CPU_SYNC);
- page_pool_set_dma_addr(page, 0);
+ return 0;
}
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle)
+static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
+ struct mlx5e_frag_page *frag_page)
{
- if (likely(recycle)) {
- if (mlx5e_rx_cache_put(rq, page))
- return;
+ u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
+ struct page *page = frag_page->page;
- mlx5e_page_dma_unmap(rq, page);
- page_pool_recycle_direct(rq->page_pool, page);
- } else {
- mlx5e_page_dma_unmap(rq, page);
- page_pool_release_page(rq->page_pool, page);
- put_page(page);
- }
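+ /* Drain the remaining bias; return the page to the pool once no
+ * fragments are left.
+ */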
+ if (page_pool_defrag_page(page, drain_count) == 0)
+ page_pool_put_defragged_page(rq->page_pool, page, -1, true);
}
static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
int err = 0;
if (!frag->offset)
- /* On first frag (offset == 0), replenish page (alloc_unit actually).
- * Other frags that point to the same alloc_unit (with a different
+ /* On first frag (offset == 0), replenish page.
+ * Other frags that point to the same page (with a different
* offset) should just use the new one without replenishing again
* by themselves.
*/
- err = mlx5e_page_alloc_pool(rq, frag->au);
+ err = mlx5e_page_alloc_fragmented(rq, frag->frag_page);
return err;
}
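+/* A fragment may release its page only when it is the last fragment in the
+ * page and deferred release is not being skipped.
+ */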
+static bool mlx5e_frag_can_release(struct mlx5e_wqe_frag_info *frag)
+{
+#define CAN_RELEASE_MASK \
+ (BIT(MLX5E_WQE_FRAG_LAST_IN_PAGE) | BIT(MLX5E_WQE_FRAG_SKIP_RELEASE))
+
+#define CAN_RELEASE_VALUE BIT(MLX5E_WQE_FRAG_LAST_IN_PAGE)
+
+ return (frag->flags & CAN_RELEASE_MASK) == CAN_RELEASE_VALUE;
+}
+
static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq,
- struct mlx5e_wqe_frag_info *frag,
- bool recycle)
+ struct mlx5e_wqe_frag_info *frag)
{
- if (frag->last_in_page)
- mlx5e_page_release_dynamic(rq, frag->au->page, recycle);
+ if (mlx5e_frag_can_release(frag))
+ mlx5e_page_release_fragmented(rq, frag->frag_page);
}
static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix)
if (unlikely(err))
goto free_frags;
+ frag->flags &= ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+
headroom = i == 0 ? rq->buff.headroom : 0;
- addr = page_pool_get_dma_addr(frag->au->page);
+ addr = page_pool_get_dma_addr(frag->frag_page->page);
wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom);
}
free_frags:
while (--i >= 0)
- mlx5e_put_rx_frag(rq, --frag, true);
+ mlx5e_put_rx_frag(rq, --frag);
return err;
}
static inline void mlx5e_free_rx_wqe(struct mlx5e_rq *rq,
- struct mlx5e_wqe_frag_info *wi,
- bool recycle)
+ struct mlx5e_wqe_frag_info *wi)
{
int i;
- if (rq->xsk_pool) {
- /* The `recycle` parameter is ignored, and the page is always
- * put into the Reuse Ring, because there is no way to return
- * the page to the userspace when the interface goes down.
- */
- xsk_buff_free(wi->au->xsk);
- return;
- }
-
for (i = 0; i < rq->wqe.info.num_frags; i++, wi++)
- mlx5e_put_rx_frag(rq, wi, recycle);
+ mlx5e_put_rx_frag(rq, wi);
+}
+
+static void mlx5e_xsk_free_rx_wqe(struct mlx5e_wqe_frag_info *wi)
+{
+ if (!(wi->flags & BIT(MLX5E_WQE_FRAG_SKIP_RELEASE)))
+ xsk_buff_free(*wi->xskp);
}
static void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
{
struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
- mlx5e_free_rx_wqe(rq, wi, false);
+ if (rq->xsk_pool)
+ mlx5e_xsk_free_rx_wqe(wi);
+ else
+ mlx5e_free_rx_wqe(rq, wi);
+}
+
+static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
+{
+ struct mlx5_wq_cyc *wq = &rq->wqe.wq;
+ int i;
+
+ for (i = 0; i < wqe_bulk; i++) {
+ int j = mlx5_wq_cyc_ctr2ix(wq, ix + i);
+ struct mlx5e_wqe_frag_info *wi;
+
+ wi = get_frag(rq, j);
+ /* The page is always put into the Reuse Ring, because there
+ * is no way to return the page to the userspace when the
+ * interface goes down.
+ */
+ mlx5e_xsk_free_rx_wqe(wi);
+ }
+}
+
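+/* Deferred release: free the pages of a WQE bulk right before the WQEs are
+ * re-posted.
+ */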
+static void mlx5e_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
+{
+ struct mlx5_wq_cyc *wq = &rq->wqe.wq;
+ int i;
+
+ for (i = 0; i < wqe_bulk; i++) {
+ int j = mlx5_wq_cyc_ctr2ix(wq, ix + i);
+ struct mlx5e_wqe_frag_info *wi;
+
+ wi = get_frag(rq, j);
+ mlx5e_free_rx_wqe(rq, wi);
+ }
}
static int mlx5e_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
return i;
}
+static int mlx5e_refill_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
+{
+ int remaining = wqe_bulk;
+ int i = 0;
+
+ /* The WQE bulk is split into smaller bulks that are sized
+ * according to the page pool cache refill size to avoid overflowing
+ * the page pool cache due to too many page releases at once.
+ */
+ do {
+ int refill = min_t(u16, rq->wqe.info.refill_unit, remaining);
+ int alloc_count;
+
+ mlx5e_free_rx_wqes(rq, ix + i, refill);
+ alloc_count = mlx5e_alloc_rx_wqes(rq, ix + i, refill);
+ i += alloc_count;
+ if (unlikely(alloc_count != refill))
+ break;
+
+ remaining -= refill;
+ } while (remaining);
+
+ return i;
+}
+
static inline void
mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb,
- union mlx5e_alloc_unit *au, u32 frag_offset, u32 len,
+ struct page *page, u32 frag_offset, u32 len,
unsigned int truesize)
{
- dma_addr_t addr = page_pool_get_dma_addr(au->page);
+ dma_addr_t addr = page_pool_get_dma_addr(page);
dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len,
rq->buff.map_dir);
- page_ref_inc(au->page);
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
- au->page, frag_offset, len, truesize);
+ page, frag_offset, len, truesize);
}
static inline void
}
static void
-mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle)
+mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi)
{
- union mlx5e_alloc_unit *alloc_units = wi->alloc_units;
bool no_xdp_xmit;
int i;
/* A common case for AF_XDP. */
- if (bitmap_full(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe))
+ if (bitmap_full(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe))
return;
- no_xdp_xmit = bitmap_empty(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe);
+ no_xdp_xmit = bitmap_empty(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
if (rq->xsk_pool) {
- /* The `recycle` parameter is ignored, and the page is always
- * put into the Reuse Ring, because there is no way to return
- * the page to the userspace when the interface goes down.
+ struct xdp_buff **xsk_buffs = wi->alloc_units.xsk_buffs;
+
+ /* The page is always put into the Reuse Ring, because there
+ * is no way to return the page to userspace when the interface
+ * goes down.
*/
for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
- if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
- xsk_buff_free(alloc_units[i].xsk);
+ if (no_xdp_xmit || !test_bit(i, wi->skip_release_bitmap))
+ xsk_buff_free(xsk_buffs[i]);
} else {
- for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
- if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
- mlx5e_page_release_dynamic(rq, alloc_units[i].page, recycle);
+ for (i = 0; i < rq->mpwqe.pages_per_wqe; i++) {
+ if (no_xdp_xmit || !test_bit(i, wi->skip_release_bitmap)) {
+ struct mlx5e_frag_page *frag_page;
+
+ frag_page = &wi->alloc_units.frag_pages[i];
+ mlx5e_page_release_fragmented(rq, frag_page);
+ }
+ }
}
}
struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
u16 entries, pi, header_offset, err, wqe_bbs, new_entries;
u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey;
- struct page *page = shampo->last_page;
+ u16 page_index = shampo->curr_page_index;
+ struct mlx5e_frag_page *frag_page;
u64 addr = shampo->last_addr;
struct mlx5e_dma_info *dma_info;
struct mlx5e_umr_wqe *umr_wqe;
umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi);
build_klm_umr(sq, umr_wqe, shampo->key, index, entries, wqe_bbs);
+ frag_page = &shampo->pages[page_index];
+
for (i = 0; i < entries; i++, index++) {
dma_info = &shampo->info[index];
if (i >= klm_entries || (index < shampo->pi && shampo->pi - index <
header_offset = (index & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) <<
MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE;
if (!(header_offset & (PAGE_SIZE - 1))) {
- union mlx5e_alloc_unit au;
+ page_index = (page_index + 1) & (shampo->hd_per_wq - 1);
+ frag_page = &shampo->pages[page_index];
- err = mlx5e_page_alloc_pool(rq, &au);
+ err = mlx5e_page_alloc_fragmented(rq, frag_page);
if (unlikely(err))
goto err_unmap;
- page = dma_info->page = au.page;
- addr = dma_info->addr = page_pool_get_dma_addr(au.page);
+
+ addr = page_pool_get_dma_addr(frag_page->page);
+
+ dma_info->addr = addr;
+ dma_info->frag_page = frag_page;
} else {
dma_info->addr = addr + header_offset;
- dma_info->page = page;
+ dma_info->frag_page = frag_page;
}
update_klm:
};
shampo->pi = (shampo->pi + new_entries) & (shampo->hd_per_wq - 1);
- shampo->last_page = page;
+ shampo->curr_page_index = page_index;
shampo->last_addr = addr;
sq->pc += wqe_bbs;
sq->doorbell_cseg = &umr_wqe->ctrl;
dma_info = &shampo->info[--index];
if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) {
dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE);
- mlx5e_page_release_dynamic(rq, dma_info->page, true);
+ mlx5e_page_release_fragmented(rq, dma_info->frag_page);
}
}
rq->stats->buff_alloc_err++;
static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
{
struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix);
- union mlx5e_alloc_unit *au = &wi->alloc_units[0];
struct mlx5e_icosq *sq = rq->icosq;
+ struct mlx5e_frag_page *frag_page;
struct mlx5_wq_cyc *wq = &sq->wq;
struct mlx5e_umr_wqe *umr_wqe;
u32 offset; /* 17-bit value with MTT. */
umr_wqe = mlx5_wq_cyc_get_wqe(wq, pi);
memcpy(umr_wqe, &rq->mpwqe.umr_wqe, sizeof(struct mlx5e_umr_wqe));
- for (i = 0; i < rq->mpwqe.pages_per_wqe; i++, au++) {
+ frag_page = &wi->alloc_units.frag_pages[0];
+
+ for (i = 0; i < rq->mpwqe.pages_per_wqe; i++, frag_page++) {
dma_addr_t addr;
- err = mlx5e_page_alloc_pool(rq, au);
+ err = mlx5e_page_alloc_fragmented(rq, frag_page);
if (unlikely(err))
goto err_unmap;
- addr = page_pool_get_dma_addr(au->page);
+ addr = page_pool_get_dma_addr(frag_page->page);
umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
.ptag = cpu_to_be64(addr | MLX5_EN_WR),
};
sizeof(*umr_wqe->inline_mtts) * pad);
}
- bitmap_zero(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe);
+ bitmap_zero(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
wi->consumed_strides = 0;
umr_wqe->ctrl.opmod_idx_opcode =
err_unmap:
while (--i >= 0) {
- au--;
- mlx5e_page_release_dynamic(rq, au->page, true);
+ frag_page--;
+ mlx5e_page_release_fragmented(rq, frag_page);
}
err:
void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close)
{
struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
+ struct mlx5e_frag_page *deleted_page = NULL;
int hd_per_wq = shampo->hd_per_wq;
- struct page *deleted_page = NULL;
struct mlx5e_dma_info *hd_info;
int i, index = start;
hd_info = &shampo->info[index];
hd_info->addr = ALIGN_DOWN(hd_info->addr, PAGE_SIZE);
- if (hd_info->page != deleted_page) {
- deleted_page = hd_info->page;
- mlx5e_page_release_dynamic(rq, hd_info->page, false);
+ if (hd_info->frag_page && hd_info->frag_page != deleted_page) {
+ deleted_page = hd_info->frag_page;
+ mlx5e_page_release_fragmented(rq, hd_info->frag_page);
}
+
+ hd_info->frag_page = NULL;
}
if (start + len > hd_per_wq) {
static void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
{
struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix);
- /* Don't recycle, this function is called on rq/netdev close */
- mlx5e_free_rx_mpwqe(rq, wi, false);
+ /* This function is called on rq/netdev close. */
+ mlx5e_free_rx_mpwqe(rq, wi);
}
INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
*/
wqe_bulk -= (head + wqe_bulk) & rq->wqe.info.wqe_index_mask;
- if (!rq->xsk_pool)
- count = mlx5e_alloc_rx_wqes(rq, head, wqe_bulk);
- else if (likely(!rq->xsk_pool->dma_need_sync))
+ if (!rq->xsk_pool) {
+ count = mlx5e_refill_rx_wqes(rq, head, wqe_bulk);
+ } else if (likely(!rq->xsk_pool->dma_need_sync)) {
+ mlx5e_xsk_free_rx_wqes(rq, head, wqe_bulk);
count = mlx5e_xsk_alloc_rx_wqes_batched(rq, head, wqe_bulk);
- else
+ } else {
+ mlx5e_xsk_free_rx_wqes(rq, head, wqe_bulk);
/* If dma_need_sync is true, it's more efficient to call
* xsk_buff_alloc in a loop, rather than xsk_buff_alloc_batch,
* because the latter does the same check and returns only one
* frame.
*/
count = mlx5e_xsk_alloc_rx_wqes(rq, head, wqe_bulk);
+ }
mlx5_wq_cyc_push_n(wq, count);
if (unlikely(count != wqe_bulk)) {
head = rq->mpwqe.actual_wq_head;
i = missing;
do {
+ struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, head);
+
+ /* Deferred free for better page pool cache usage. */
+ mlx5e_free_rx_mpwqe(rq, wi);
+
alloc_err = rq->xsk_pool ? mlx5e_xsk_alloc_rx_mpwqe(rq, head) :
mlx5e_alloc_rx_mpwqe(rq, head);
struct mlx5e_dma_info *last_head = &rq->mpwqe.shampo->info[header_index];
u16 head_offset = (last_head->addr & (PAGE_SIZE - 1)) + rq->buff.headroom;
- return page_address(last_head->page) + head_offset;
+ return page_address(last_head->frag_page->page) + head_offset;
}
static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4)
mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
{
- union mlx5e_alloc_unit *au = wi->au;
+ struct mlx5e_frag_page *frag_page = wi->frag_page;
u16 rx_headroom = rq->buff.headroom;
struct bpf_prog *prog;
struct sk_buff *skb;
dma_addr_t addr;
u32 frag_size;
- va = page_address(au->page) + wi->offset;
+ va = page_address(frag_page->page) + wi->offset;
data = va + rx_headroom;
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
- addr = page_pool_get_dma_addr(au->page);
+ addr = page_pool_get_dma_addr(frag_page->page);
dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
frag_size, rq->buff.map_dir);
net_prefetch(data);
return NULL;
/* queue up for recycling/reuse */
- page_ref_inc(au->page);
+ skb_mark_for_recycle(skb);
+ frag_page->frags++;
return skb;
}
{
struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
struct mlx5e_wqe_frag_info *head_wi = wi;
- union mlx5e_alloc_unit *au = wi->au;
u16 rx_headroom = rq->buff.headroom;
+ struct mlx5e_frag_page *frag_page;
struct skb_shared_info *sinfo;
struct mlx5e_xdp_buff mxbuf;
u32 frag_consumed_bytes;
u32 truesize;
void *va;
- va = page_address(au->page) + wi->offset;
+ frag_page = wi->frag_page;
+
+ va = page_address(frag_page->page) + wi->offset;
frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
- addr = page_pool_get_dma_addr(au->page);
+ addr = page_pool_get_dma_addr(frag_page->page);
dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
rq->buff.frame0_sz, rq->buff.map_dir);
net_prefetchw(va); /* xdp_frame data area */
while (cqe_bcnt) {
skb_frag_t *frag;
- au = wi->au;
+ frag_page = wi->frag_page;
frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
- addr = page_pool_get_dma_addr(au->page);
+ addr = page_pool_get_dma_addr(frag_page->page);
dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
frag_consumed_bytes, rq->buff.map_dir);
}
frag = &sinfo->frags[sinfo->nr_frags++];
- __skb_frag_set_page(frag, au->page);
+
+ __skb_frag_set_page(frag, frag_page->page);
skb_frag_off_set(frag, wi->offset);
skb_frag_size_set(frag, frag_consumed_bytes);
- if (page_is_pfmemalloc(au->page))
+ if (page_is_pfmemalloc(frag_page->page))
xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp);
sinfo->xdp_frags_size += frag_consumed_bytes;
int i;
for (i = wi - head_wi; i < rq->wqe.info.num_frags; i++)
- mlx5e_put_rx_frag(rq, &head_wi[i], true);
+ mlx5e_put_rx_frag(rq, &head_wi[i]);
}
return NULL; /* page/packet was consumed by XDP */
}
if (unlikely(!skb))
return NULL;
- page_ref_inc(head_wi->au->page);
+ skb_mark_for_recycle(skb);
+ head_wi->frag_page->frags++;
if (xdp_buff_has_frags(&mxbuf.xdp)) {
- int i;
-
/* sinfo->nr_frags is reset by build_skb, calculate again. */
xdp_update_skb_shared_info(skb, wi - head_wi - 1,
sinfo->xdp_frags_size, truesize,
xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
- for (i = 0; i < sinfo->nr_frags; i++) {
- skb_frag_t *frag = &sinfo->frags[i];
-
- page_ref_inc(skb_frag_page(frag));
- }
+ for (struct mlx5e_wqe_frag_info *pwi = head_wi + 1; pwi < wi; pwi++)
+ pwi->frag_page->frags++;
}
return skb;
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
mlx5e_handle_rx_err_cqe(rq, cqe);
- goto free_wqe;
+ goto wq_cyc_pop;
}
skb = INDIRECT_CALL_3(rq->wqe.skb_from_cqe,
/* do not return page to cache,
* it will be returned on XDP_TX completion.
*/
- goto wq_cyc_pop;
+ wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
- goto free_wqe;
+ goto wq_cyc_pop;
}
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
if (mlx5e_cqe_regb_chain(cqe))
if (!mlx5e_tc_update_skb_nic(cqe, skb)) {
dev_kfree_skb_any(skb);
- goto free_wqe;
+ goto wq_cyc_pop;
}
napi_gro_receive(rq->cq.napi, skb);
-free_wqe:
- mlx5e_free_rx_wqe(rq, wi, true);
wq_cyc_pop:
mlx5_wq_cyc_pop(wq);
}
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
mlx5e_handle_rx_err_cqe(rq, cqe);
- goto free_wqe;
+ goto wq_cyc_pop;
}
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
/* do not return page to cache,
* it will be returned on XDP_TX completion.
*/
- goto wq_cyc_pop;
+ wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
- goto free_wqe;
+ goto wq_cyc_pop;
}
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
mlx5e_rep_tc_receive(cqe, rq, skb);
-free_wqe:
- mlx5e_free_rx_wqe(rq, wi, true);
wq_cyc_pop:
mlx5_wq_cyc_pop(wq);
}
wq = &rq->mpwqe.wq;
wqe = mlx5_wq_ll_get_wqe(wq, wqe_id);
- mlx5e_free_rx_mpwqe(rq, wi, true);
mlx5_wq_ll_pop(wq, cqe->wqe_id, &wqe->next.next_wqe_index);
}
static void
mlx5e_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
- union mlx5e_alloc_unit *au, u32 data_bcnt, u32 data_offset)
+ struct mlx5e_frag_page *frag_page,
+ u32 data_bcnt, u32 data_offset)
{
net_prefetchw(skb->data);
else
truesize = ALIGN(pg_consumed_bytes, BIT(rq->mpwqe.log_stride_sz));
- mlx5e_add_skb_frag(rq, skb, au, data_offset,
+ frag_page->frags++;
+ mlx5e_add_skb_frag(rq, skb, frag_page->page, data_offset,
pg_consumed_bytes, truesize);
data_bcnt -= pg_consumed_bytes;
data_offset = 0;
- au++;
+ frag_page++;
}
}
struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
u32 page_idx)
{
- union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
+ struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
+ struct mlx5e_frag_page *head_page = frag_page;
u32 frag_offset = head_offset + headlen;
u32 byte_cnt = cqe_bcnt - headlen;
- union mlx5e_alloc_unit *head_au = au;
struct sk_buff *skb;
dma_addr_t addr;
/* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. */
if (unlikely(frag_offset >= PAGE_SIZE)) {
- au++;
+ frag_page++;
frag_offset -= PAGE_SIZE;
}
- mlx5e_fill_skb_data(skb, rq, au, byte_cnt, frag_offset);
+ skb_mark_for_recycle(skb);
+ mlx5e_fill_skb_data(skb, rq, frag_page, byte_cnt, frag_offset);
/* copy header */
- addr = page_pool_get_dma_addr(head_au->page);
- mlx5e_copy_skb_header(rq, skb, head_au->page, addr,
+ addr = page_pool_get_dma_addr(head_page->page);
+ mlx5e_copy_skb_header(rq, skb, head_page->page, addr,
head_offset, head_offset, headlen);
/* skb linear part was allocated with headlen and aligned to long */
skb->tail += headlen;
struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
u32 page_idx)
{
- union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
+ struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
u16 rx_headroom = rq->buff.headroom;
struct bpf_prog *prog;
struct sk_buff *skb;
return NULL;
}
- va = page_address(au->page) + head_offset;
+ va = page_address(frag_page->page) + head_offset;
data = va + rx_headroom;
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
- addr = page_pool_get_dma_addr(au->page);
+ addr = page_pool_get_dma_addr(frag_page->page);
dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset,
frag_size, rq->buff.map_dir);
net_prefetch(data);
mlx5e_fill_mxbuf(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
- __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
+ __set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
}
return NULL;
/* queue up for recycling/reuse */
- page_ref_inc(au->page);
+ skb_mark_for_recycle(skb);
+ frag_page->frags++;
return skb;
}
void *hdr, *data;
u32 frag_size;
- hdr = page_address(head->page) + head_offset;
+ hdr = page_address(head->frag_page->page) + head_offset;
data = hdr + rx_headroom;
frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size);
if (unlikely(!skb))
return NULL;
- /* queue up for recycling/reuse */
- page_ref_inc(head->page);
-
+ head->frag_page->frags++;
} else {
/* allocate SKB and copy header for large header */
rq->stats->gro_large_hds++;
}
prefetchw(skb->data);
- mlx5e_copy_skb_header(rq, skb, head->page, head->addr,
+ mlx5e_copy_skb_header(rq, skb, head->frag_page->page, head->addr,
head_offset + rx_headroom,
rx_headroom, head_size);
/* skb linear part was allocated with headlen and aligned to long */
skb->tail += head_size;
skb->len += head_size;
}
+
+ /* queue up for recycling/reuse */
+ skb_mark_for_recycle(skb);
+
return skb;
}
u64 addr = shampo->info[header_index].addr;
if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) {
- shampo->info[header_index].addr = ALIGN_DOWN(addr, PAGE_SIZE);
- mlx5e_page_release_dynamic(rq, shampo->info[header_index].page, true);
+ struct mlx5e_dma_info *dma_info = &shampo->info[header_index];
+
+ dma_info->addr = ALIGN_DOWN(addr, PAGE_SIZE);
+ mlx5e_page_release_fragmented(rq, dma_info->frag_page);
}
bitmap_clear(shampo->bitmap, header_index, 1);
}
bool match = cqe->shampo.match;
struct mlx5e_rq_stats *stats = rq->stats;
struct mlx5e_rx_wqe_ll *wqe;
- union mlx5e_alloc_unit *au;
struct mlx5e_mpw_info *wi;
struct mlx5_wq_ll *wq;
}
if (likely(head_size)) {
- au = &wi->alloc_units[page_idx];
- mlx5e_fill_skb_data(*skb, rq, au, data_bcnt, data_offset);
+ struct mlx5e_frag_page *frag_page;
+
+ frag_page = &wi->alloc_units.frag_pages[page_idx];
+ mlx5e_fill_skb_data(*skb, rq, frag_page, data_bcnt, data_offset);
}
mlx5e_shampo_complete_rx_cqe(rq, cqe, cqe_bcnt, *skb);
wq = &rq->mpwqe.wq;
wqe = mlx5_wq_ll_get_wqe(wq, wqe_id);
- mlx5e_free_rx_mpwqe(rq, wi, true);
mlx5_wq_ll_pop(wq, cqe->wqe_id, &wqe->next.next_wqe_index);
}
wq = &rq->mpwqe.wq;
wqe = mlx5_wq_ll_get_wqe(wq, wqe_id);
- mlx5e_free_rx_mpwqe(rq, wi, true);
mlx5_wq_ll_pop(wq, cqe->wqe_id, &wqe->next.next_wqe_index);
}
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
rq->stats->wqe_err++;
- goto wq_free_wqe;
+ goto wq_cyc_pop;
}
skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
mlx5e_skb_from_cqe_nonlinear,
rq, wi, cqe, cqe_bcnt);
if (!skb)
- goto wq_free_wqe;
+ goto wq_cyc_pop;
mlx5i_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
if (unlikely(!skb->dev)) {
dev_kfree_skb_any(skb);
- goto wq_free_wqe;
+ goto wq_cyc_pop;
}
napi_gro_receive(rq->cq.napi, skb);
-wq_free_wqe:
- mlx5e_free_rx_wqe(rq, wi, true);
+wq_cyc_pop:
mlx5_wq_cyc_pop(wq);
}
if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
rq->stats->wqe_err++;
- goto free_wqe;
+ goto wq_cyc_pop;
}
skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
if (!skb)
- goto free_wqe;
+ goto wq_cyc_pop;
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
skb_push(skb, ETH_HLEN);
rq->netdev->devlink_port);
dev_kfree_skb_any(skb);
-free_wqe:
- mlx5e_free_rx_wqe(rq, wi, false);
+wq_cyc_pop:
mlx5_wq_cyc_pop(wq);
}
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_reuse) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_full) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_empty) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_busy) },
- { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cache_waive) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_congst_umr) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
s->rx_buff_alloc_err += rq_stats->buff_alloc_err;
s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks;
s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts;
- s->rx_cache_reuse += rq_stats->cache_reuse;
- s->rx_cache_full += rq_stats->cache_full;
- s->rx_cache_empty += rq_stats->cache_empty;
- s->rx_cache_busy += rq_stats->cache_busy;
- s->rx_cache_waive += rq_stats->cache_waive;
s->rx_congst_umr += rq_stats->congst_umr;
s->rx_arfs_err += rq_stats->arfs_err;
s->rx_recover += rq_stats->recover;
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
- { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_reuse) },
- { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_full) },
- { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_empty) },
- { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_busy) },
- { MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cache_waive) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, congst_umr) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
- { MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cache_reuse) },
- { MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cache_full) },
- { MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cache_empty) },
- { MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cache_busy) },
- { MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cache_waive) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, congst_umr) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, arfs_err) },
{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, recover) },
u64 rx_buff_alloc_err;
u64 rx_cqe_compress_blks;
u64 rx_cqe_compress_pkts;
- u64 rx_cache_reuse;
- u64 rx_cache_full;
- u64 rx_cache_empty;
- u64 rx_cache_busy;
- u64 rx_cache_waive;
u64 rx_congst_umr;
u64 rx_arfs_err;
u64 rx_recover;
u64 buff_alloc_err;
u64 cqe_compress_blks;
u64 cqe_compress_pkts;
- u64 cache_reuse;
- u64 cache_full;
- u64 cache_empty;
- u64 cache_busy;
- u64 cache_waive;
u64 congst_umr;
u64 arfs_err;
u64 recover;
#include <net/bareudp.h>
#include <net/bonding.h>
#include <net/dst_metadata.h>
+#include "devlink.h"
#include "en.h"
#include "en/tc/post_act.h"
#include "en/tc/act_stats.h"
#define MLX5E_TC_TABLE_NUM_GROUPS 4
#define MLX5E_TC_TABLE_MAX_GROUP_SIZE BIT(18)
-struct mlx5e_hairpin_params {
- struct mlx5_core_dev *mdev;
- u32 num_queues;
- u32 queue_size;
-};
-
struct mlx5e_tc_table {
/* Protects the dynamic assignment of the t parameter
* which is the nic tc root table.
struct mlx5_tc_ct_priv *ct;
struct mapping_ctx *mapping;
- struct mlx5e_hairpin_params hairpin_params;
struct dentry *dfs_root;
/* tc action stats */
struct mlx5e_tir direct_tir;
int num_channels;
+ u8 log_num_packets;
struct mlx5e_rqt indir_rqt;
struct mlx5e_tir indir_tir[MLX5E_NUM_INDIR_TIRS];
struct mlx5_ttc_table *ttc;
hp->func_mdev = func_mdev;
hp->func_priv = priv;
hp->num_channels = params->num_channels;
+ hp->log_num_packets = params->log_num_packets;
err = mlx5e_hairpin_create_transport(hp);
if (err)
mutex_lock(&tc->hairpin_tbl_lock);
hash_for_each(tc->hairpin_tbl, bkt, hpe, hairpin_hlist)
- seq_printf(file, "Hairpin peer_vhca_id %u prio %u refcnt %u\n",
+ seq_printf(file,
+ "Hairpin peer_vhca_id %u prio %u refcnt %u num_channels %u num_packets %lu\n",
hpe->peer_vhca_id, hpe->prio,
- refcount_read(&hpe->refcnt));
+ refcount_read(&hpe->refcnt), hpe->hp->num_channels,
+ BIT(hpe->hp->log_num_packets));
mutex_unlock(&tc->hairpin_tbl_lock);
return 0;
&debugfs_hairpin_table_dump_fops);
}
-static void
-mlx5e_hairpin_params_init(struct mlx5e_hairpin_params *hairpin_params,
- struct mlx5_core_dev *mdev)
-{
- u32 link_speed = 0;
- u64 link_speed64;
-
- hairpin_params->mdev = mdev;
- /* set hairpin pair per each 50Gbs share of the link */
- mlx5e_port_max_linkspeed(mdev, &link_speed);
- link_speed = max_t(u32, link_speed, 50000);
- link_speed64 = link_speed;
- do_div(link_speed64, 50000);
- hairpin_params->num_queues = link_speed64;
-
- hairpin_params->queue_size =
- BIT(min_t(u32, 16 - MLX5_MPWRQ_MIN_LOG_STRIDE_SZ(mdev),
- MLX5_CAP_GEN(mdev, log_max_hairpin_num_packets)));
-}
-
static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5e_tc_flow_parse_attr *parse_attr,
struct netlink_ext_ack *extack)
{
struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs);
+ struct devlink *devlink = priv_to_devlink(priv->mdev);
int peer_ifindex = parse_attr->mirred_ifindex[0];
+ union devlink_param_value val = {};
struct mlx5_hairpin_params params;
struct mlx5_core_dev *peer_mdev;
struct mlx5e_hairpin_entry *hpe;
hash_hairpin_info(peer_id, match_prio));
mutex_unlock(&tc->hairpin_tbl_lock);
- params.log_num_packets = ilog2(tc->hairpin_params.queue_size);
+ err = devl_param_driverinit_value_get(
+ devlink, MLX5_DEVLINK_PARAM_ID_HAIRPIN_QUEUE_SIZE, &val);
+ if (err) {
+ err = -ENOMEM;
+ goto out_err;
+ }
+
+ params.log_num_packets = ilog2(val.vu32);
params.log_data_size =
clamp_t(u32,
params.log_num_packets +
MLX5_CAP_GEN(priv->mdev, log_max_hairpin_wq_data_sz));
params.q_counter = priv->q_counter;
- params.num_channels = tc->hairpin_params.num_queues;
+ err = devl_param_driverinit_value_get(
+ devlink, MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES, &val);
+ if (err) {
+ err = -ENOMEM;
+ goto out_err;
+ }
+
+ params.num_channels = val.vu32;
hp = mlx5e_hairpin_create(priv, ¶ms, peer_ifindex);
hpe->hp = hp;
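For context, the two driverinit values read above must be registered and seeded with defaults on the devlink side; that hunk is not part of this excerpt. Below is only a hedged sketch of the usual registration pattern: the param names, the my_* helpers and the default value are illustrative, while the param IDs are the ones referenced above.

static const struct devlink_param my_hairpin_params[] = {
	/* param names are illustrative; only the IDs appear in the hunk above */
	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES,
			     "hairpin_num_queues", DEVLINK_PARAM_TYPE_U32,
			     BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
			     NULL, NULL, NULL),
	DEVLINK_PARAM_DRIVER(MLX5_DEVLINK_PARAM_ID_HAIRPIN_QUEUE_SIZE,
			     "hairpin_queue_size", DEVLINK_PARAM_TYPE_U32,
			     BIT(DEVLINK_PARAM_CMODE_DRIVERINIT),
			     NULL, NULL, NULL),
};

static int my_hairpin_params_register(struct devlink *devlink)
{
	union devlink_param_value value;
	int err;

	err = devl_params_register(devlink, my_hairpin_params,
				   ARRAY_SIZE(my_hairpin_params));
	if (err)
		return err;

	/* seed driverinit defaults so devl_param_driverinit_value_get() succeeds */
	value.vu32 = 2;	/* illustrative default */
	devl_param_driverinit_value_set(devlink,
					MLX5_DEVLINK_PARAM_ID_HAIRPIN_NUM_QUEUES,
					value);
	return 0;
}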
return NOTIFY_DONE;
}
-static int mlx5e_tc_nic_get_ft_size(struct mlx5_core_dev *dev)
-{
- int tc_grp_size, tc_tbl_size;
- u32 max_flow_counter;
-
- max_flow_counter = (MLX5_CAP_GEN(dev, max_flow_counter_31_16) << 16) |
- MLX5_CAP_GEN(dev, max_flow_counter_15_0);
-
- tc_grp_size = min_t(int, max_flow_counter, MLX5E_TC_TABLE_MAX_GROUP_SIZE);
-
- tc_tbl_size = min_t(int, tc_grp_size * MLX5E_TC_TABLE_NUM_GROUPS,
- BIT(MLX5_CAP_FLOWTABLE_NIC_RX(dev, log_max_ft_size)));
-
- return tc_tbl_size;
-}
-
static int mlx5e_tc_nic_create_miss_table(struct mlx5e_priv *priv)
{
struct mlx5e_tc_table *tc = mlx5e_fs_get_tc(priv->fs);
attr.flags = MLX5_CHAINS_AND_PRIOS_SUPPORTED |
MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED;
attr.ns = MLX5_FLOW_NAMESPACE_KERNEL;
- attr.max_ft_sz = mlx5e_tc_nic_get_ft_size(dev);
attr.max_grp_num = MLX5E_TC_TABLE_NUM_GROUPS;
attr.default_ft = tc->miss_t;
attr.mapping = chains_mapping;
+ attr.fs_base_prio = MLX5E_TC_PRIO;
tc->chains = mlx5_chains_create(dev, &attr);
if (IS_ERR(tc->chains)) {
goto err_miss;
}
+ mlx5_chains_print_info(tc->chains);
+
tc->post_act = mlx5e_tc_post_act_init(priv, tc->chains, MLX5_FLOW_NAMESPACE_KERNEL);
tc->ct = mlx5_tc_ct_init(priv, tc->chains, &tc->mod_hdr,
MLX5_FLOW_NAMESPACE_KERNEL, tc->post_act);
- mlx5e_hairpin_params_init(&tc->hairpin_params, dev);
-
tc->netdevice_nb.notifier_call = mlx5e_tc_netdev_event;
err = register_netdevice_notifier_dev_net(priv->netdev,
&tc->netdevice_nb,
struct mlx5e_sq_stats *stats = sq->stats;
struct dim_sample dim_sample = {};
- if (unlikely(!test_bit(MLX5E_SQ_STATE_AM, &sq->state)))
+ if (unlikely(!test_bit(MLX5E_SQ_STATE_DIM, &sq->state)))
return;
dim_update_sample(sq->cq.event_ctr, stats->packets, stats->bytes, &dim_sample);
struct mlx5e_rq_stats *stats = rq->stats;
struct dim_sample dim_sample = {};
- if (unlikely(!test_bit(MLX5E_RQ_STATE_AM, &rq->state)))
+ if (unlikely(!test_bit(MLX5E_RQ_STATE_DIM, &rq->state)))
return;
dim_update_sample(rq->cq.event_ctr, stats->packets, stats->bytes, &dim_sample);
#include "lib/clock.h"
#include "diag/fw_tracer.h"
#include "mlx5_irq.h"
+#include "pci_irq.h"
#include "devlink.h"
#include "en_accel/ipsec.h"
struct mlx5_irq_table *irq_table;
struct mlx5_irq **comp_irqs;
struct mlx5_irq *ctrl_irq;
-#ifdef CONFIG_RFS_ACCEL
struct cpu_rmap *rmap;
-#endif
};
#define MLX5_ASYNC_EVENT_MASK ((1ull << MLX5_EVENT_TYPE_PATH_MIG) | \
mlx5_core_dbg(dev, "Failed to get param. using default. err = %d\n", err);
return MLX5_NUM_ASYNC_EQE;
}
+
static int create_async_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
}
EXPORT_SYMBOL(mlx5_eq_update_ci);
-static void comp_irqs_release(struct mlx5_core_dev *dev)
+static void comp_irqs_release_pci(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
- if (mlx5_core_is_sf(dev))
- mlx5_irq_affinity_irqs_release(dev, table->comp_irqs, table->num_comp_eqs);
- else
- mlx5_irqs_release_vectors(table->comp_irqs, table->num_comp_eqs);
- kfree(table->comp_irqs);
+ mlx5_irqs_release_vectors(table->comp_irqs, table->num_comp_eqs);
}
-static int comp_irqs_request(struct mlx5_core_dev *dev)
+static int comp_irqs_request_pci(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
const struct cpumask *prev = cpu_none_mask;
const struct cpumask *mask;
- int ncomp_eqs = table->num_comp_eqs;
+ int ncomp_eqs;
u16 *cpus;
int ret;
int cpu;
int i;
ncomp_eqs = table->num_comp_eqs;
- table->comp_irqs = kcalloc(ncomp_eqs, sizeof(*table->comp_irqs), GFP_KERNEL);
- if (!table->comp_irqs)
- return -ENOMEM;
- if (mlx5_core_is_sf(dev)) {
- ret = mlx5_irq_affinity_irqs_request_auto(dev, ncomp_eqs, table->comp_irqs);
- if (ret < 0)
- goto free_irqs;
- return ret;
- }
-
cpus = kcalloc(ncomp_eqs, sizeof(*cpus), GFP_KERNEL);
- if (!cpus) {
+ if (!cpus)
ret = -ENOMEM;
- goto free_irqs;
- }
i = 0;
rcu_read_lock();
}
spread_done:
rcu_read_unlock();
- ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
+ ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs, &table->rmap);
kfree(cpus);
- if (ret < 0)
- goto free_irqs;
return ret;
+}
+
+static void comp_irqs_release_sf(struct mlx5_core_dev *dev)
+{
+ struct mlx5_eq_table *table = dev->priv.eq_table;
+
+ mlx5_irq_affinity_irqs_release(dev, table->comp_irqs, table->num_comp_eqs);
+}
+
+static int comp_irqs_request_sf(struct mlx5_core_dev *dev)
+{
+ struct mlx5_eq_table *table = dev->priv.eq_table;
+ int ncomp_eqs = table->num_comp_eqs;
+
+ return mlx5_irq_affinity_irqs_request_auto(dev, ncomp_eqs, table->comp_irqs);
+}
+
+static void comp_irqs_release(struct mlx5_core_dev *dev)
+{
+ struct mlx5_eq_table *table = dev->priv.eq_table;
+
+ mlx5_core_is_sf(dev) ? comp_irqs_release_sf(dev) :
+ comp_irqs_release_pci(dev);
-free_irqs:
kfree(table->comp_irqs);
+}
+
+static int comp_irqs_request(struct mlx5_core_dev *dev)
+{
+ struct mlx5_eq_table *table = dev->priv.eq_table;
+ int ncomp_eqs;
+ int ret;
+
+ ncomp_eqs = table->num_comp_eqs;
+ table->comp_irqs = kcalloc(ncomp_eqs, sizeof(*table->comp_irqs), GFP_KERNEL);
+ if (!table->comp_irqs)
+ return -ENOMEM;
+
+ ret = mlx5_core_is_sf(dev) ? comp_irqs_request_sf(dev) :
+ comp_irqs_request_pci(dev);
+ if (ret < 0)
+ kfree(table->comp_irqs);
+
return ret;
}
+#ifdef CONFIG_RFS_ACCEL
+static int alloc_rmap(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_eq_table *eq_table = mdev->priv.eq_table;
+
+ /* rmap is a mapping between irq number and queue number.
+ * Each irq can be assigned only to a single rmap.
+ * Since SFs share IRQs, rmap mapping cannot function correctly
+ * for irqs that are shared between different core/netdev RX rings.
+ * Hence we don't allow netdev rmap for SFs.
+ */
+ if (mlx5_core_is_sf(mdev))
+ return 0;
+
+ eq_table->rmap = alloc_irq_cpu_rmap(eq_table->num_comp_eqs);
+ if (!eq_table->rmap)
+ return -ENOMEM;
+ return 0;
+}
+
+static void free_rmap(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_eq_table *eq_table = mdev->priv.eq_table;
+
+ if (eq_table->rmap) {
+ free_irq_cpu_rmap(eq_table->rmap);
+ eq_table->rmap = NULL;
+ }
+}
+#else
+static int alloc_rmap(struct mlx5_core_dev *mdev) { return 0; }
+static void free_rmap(struct mlx5_core_dev *mdev) {}
+#endif
+
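The rmap allocated in alloc_rmap() above only has an effect once the netdev side publishes it for accelerated RFS. A minimal, hedged sketch of that consumer side is shown here; my_setup_arfs() is a hypothetical helper, while mlx5_eq_table_get_rmap() is the accessor used later in this series.

#ifdef CONFIG_RFS_ACCEL
/* Hypothetical consumer: hand the EQ table's rmap to the stack so aRFS
 * can steer flows to the CPU that owns the completion interrupt.
 */
static void my_setup_arfs(struct net_device *netdev, struct mlx5_core_dev *mdev)
{
	netdev->rx_cpu_rmap = mlx5_eq_table_get_rmap(mdev);
}
#endif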
static void destroy_comp_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
kfree(eq);
}
comp_irqs_release(dev);
+ free_rmap(dev);
}
static u16 comp_eq_depth_devlink_param_get(struct mlx5_core_dev *dev)
int err;
int i;
+ err = alloc_rmap(dev);
+ if (err)
+ return err;
+
ncomp_eqs = comp_irqs_request(dev);
- if (ncomp_eqs < 0)
- return ncomp_eqs;
+ if (ncomp_eqs < 0) {
+ err = ncomp_eqs;
+ goto err_irqs_req;
+ }
+
INIT_LIST_HEAD(&table->comp_eqs_list);
nent = comp_eq_depth_devlink_param_get(dev);
kfree(eq);
clean:
destroy_comp_eqs(dev);
+err_irqs_req:
+ free_rmap(dev);
return err;
}
return ERR_PTR(-ENOENT);
}
-static void clear_rmap(struct mlx5_core_dev *dev)
-{
-#ifdef CONFIG_RFS_ACCEL
- struct mlx5_eq_table *eq_table = dev->priv.eq_table;
-
- free_irq_cpu_rmap(eq_table->rmap);
-#endif
-}
-
-static int set_rmap(struct mlx5_core_dev *mdev)
-{
- int err = 0;
-#ifdef CONFIG_RFS_ACCEL
- struct mlx5_eq_table *eq_table = mdev->priv.eq_table;
- int vecidx;
-
- eq_table->rmap = alloc_irq_cpu_rmap(eq_table->num_comp_eqs);
- if (!eq_table->rmap) {
- err = -ENOMEM;
- mlx5_core_err(mdev, "Failed to allocate cpu_rmap. err %d", err);
- goto err_out;
- }
-
- for (vecidx = 0; vecidx < eq_table->num_comp_eqs; vecidx++) {
- err = irq_cpu_rmap_add(eq_table->rmap,
- pci_irq_vector(mdev->pdev, vecidx));
- if (err) {
- mlx5_core_err(mdev, "irq_cpu_rmap_add failed. err %d",
- err);
- goto err_irq_cpu_rmap_add;
- }
- }
- return 0;
-
-err_irq_cpu_rmap_add:
- clear_rmap(mdev);
-err_out:
-#endif
- return err;
-}
-
/* This function should only be called after mlx5_cmd_force_teardown_hca */
void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
mutex_lock(&table->lock); /* sync with create/destroy_async_eq */
- if (!mlx5_core_is_sf(dev))
- clear_rmap(dev);
mlx5_irq_table_destroy(dev);
mutex_unlock(&table->lock);
}
#define MLX5_MAX_ASYNC_EQS 3
#endif
-int mlx5_eq_table_create(struct mlx5_core_dev *dev)
+static int get_num_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
- int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
+ int max_dev_eqs;
+ int max_eqs_sf;
+ int num_eqs;
+
+	/* If Ethernet is disabled, we use just a single completion vector so
+	 * the other vectors remain available for other drivers using
+	 * mlx5_core, for example mlx5_vdpa.
+	 */
+ if (!mlx5_core_is_eth_enabled(dev) && mlx5_eth_supported(dev))
+ return 1;
+
+ max_dev_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
MLX5_CAP_GEN(dev, max_num_eqs) :
1 << MLX5_CAP_GEN(dev, log_max_eq);
- int max_eqs_sf;
- int err;
- eq_table->num_comp_eqs =
- min_t(int,
- mlx5_irq_table_get_num_comp(eq_table->irq_table),
- num_eqs - MLX5_MAX_ASYNC_EQS);
+ num_eqs = min_t(int, mlx5_irq_table_get_num_comp(eq_table->irq_table),
+ max_dev_eqs - MLX5_MAX_ASYNC_EQS);
if (mlx5_core_is_sf(dev)) {
max_eqs_sf = min_t(int, MLX5_COMP_EQS_PER_SF,
mlx5_irq_table_get_sfs_vec(eq_table->irq_table));
- eq_table->num_comp_eqs = min_t(int, eq_table->num_comp_eqs,
- max_eqs_sf);
+ num_eqs = min_t(int, num_eqs, max_eqs_sf);
}
+ return num_eqs;
+}
+
+int mlx5_eq_table_create(struct mlx5_core_dev *dev)
+{
+ struct mlx5_eq_table *eq_table = dev->priv.eq_table;
+ int err;
+
+ eq_table->num_comp_eqs = get_num_eqs(dev);
err = create_async_eqs(dev);
if (err) {
mlx5_core_err(dev, "Failed to create async EQs\n");
goto err_async_eqs;
}
- if (!mlx5_core_is_sf(dev)) {
- /* rmap is a mapping between irq number and queue number.
- * each irq can be assign only to a single rmap.
- * since SFs share IRQs, rmap mapping cannot function correctly
- * for irqs that are shared for different core/netdev RX rings.
- * Hence we don't allow netdev rmap for SFs
- */
- err = set_rmap(dev);
- if (err)
- goto err_rmap;
- }
-
err = create_comp_eqs(dev);
if (err) {
mlx5_core_err(dev, "Failed to create completion EQs\n");
}
return 0;
+
err_comp_eqs:
- if (!mlx5_core_is_sf(dev))
- clear_rmap(dev);
-err_rmap:
destroy_async_eqs(dev);
err_async_eqs:
return err;
void mlx5_eq_table_destroy(struct mlx5_core_dev *dev)
{
- if (!mlx5_core_is_sf(dev))
- clear_rmap(dev);
destroy_comp_eqs(dev);
destroy_async_eqs(dev);
}
u64 value;
int err;
- err = mlx5e_port_max_linkspeed(mdev, &link_speed_max);
+ err = mlx5_port_max_linkspeed(mdev, &link_speed_max);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Failed to get link maximum speed");
return err;
struct mlx5_flow_table *nf_ft, *ft;
struct mlx5_chains_attr attr = {};
struct mlx5_fs_chains *chains;
- u32 fdb_max;
int err;
- fdb_max = 1 << MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size);
-
esw_init_chains_offload_flags(esw, &attr.flags);
attr.ns = MLX5_FLOW_NAMESPACE_FDB;
- attr.max_ft_sz = fdb_max;
+ attr.fs_base_prio = FDB_TC_OFFLOAD;
attr.max_grp_num = esw->params.large_group_num;
attr.default_ft = miss_fdb;
attr.mapping = esw->offloads.reg_c0_obj_pool;
esw_warn(dev, "Failed to create fdb chains err(%d)\n", err);
return err;
}
+ mlx5_chains_print_info(chains);
esw->fdb_table.offloads.esw_chains_priv = chains;
#define LAG_MIN_LEVEL (OFFLOADS_MIN_LEVEL + KERNEL_RX_MACSEC_MIN_LEVEL + 1)
#define KERNEL_TX_IPSEC_NUM_PRIOS 1
-#define KERNEL_TX_IPSEC_NUM_LEVELS 2
+#define KERNEL_TX_IPSEC_NUM_LEVELS 3
#define KERNEL_TX_IPSEC_MIN_LEVEL (KERNEL_TX_IPSEC_NUM_LEVELS)
#define KERNEL_TX_MACSEC_NUM_PRIOS 1
if (ignore_level) {
if (ft->type != FS_FT_FDB &&
- ft->type != FS_FT_NIC_RX)
+ ft->type != FS_FT_NIC_RX &&
+ ft->type != FS_FT_NIC_TX)
return false;
if (dest->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
while (sensor_pci_not_working(dev)) {
if (time_after(jiffies, end))
return -ETIMEDOUT;
+ if (test_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state)) {
+ mlx5_core_warn(dev, "device is being removed, stop waiting for PCI\n");
+ return -ENODEV;
+ }
msleep(100);
}
return 0;
/* Creating an IRQ from irq_pool */
static struct mlx5_irq *
-irq_pool_request_irq(struct mlx5_irq_pool *pool, const struct cpumask *req_mask)
+irq_pool_request_irq(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
{
- cpumask_var_t auto_mask;
- struct mlx5_irq *irq;
+ struct irq_affinity_desc auto_desc = {};
u32 irq_index;
int err;
- if (!zalloc_cpumask_var(&auto_mask, GFP_KERNEL))
- return ERR_PTR(-ENOMEM);
err = xa_alloc(&pool->irqs, &irq_index, NULL, pool->xa_num_irqs, GFP_KERNEL);
if (err)
return ERR_PTR(err);
if (pool->irqs_per_cpu) {
- if (cpumask_weight(req_mask) > 1)
+ if (cpumask_weight(&af_desc->mask) > 1)
			/* if af_desc->mask contains more than one CPU, set the least
			 * loaded CPU of the mask
			 */
- cpumask_set_cpu(cpu_get_least_loaded(pool, req_mask), auto_mask);
+ cpumask_set_cpu(cpu_get_least_loaded(pool, &af_desc->mask),
+ &auto_desc.mask);
else
- cpu_get(pool, cpumask_first(req_mask));
+ cpu_get(pool, cpumask_first(&af_desc->mask));
}
- irq = mlx5_irq_alloc(pool, irq_index, cpumask_empty(auto_mask) ? req_mask : auto_mask);
- free_cpumask_var(auto_mask);
- return irq;
+ return mlx5_irq_alloc(pool, irq_index,
+ cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
+ NULL);
}
/* Looking for the IRQ with the smallest refcount that fits req_mask.
/**
* mlx5_irq_affinity_request - request an IRQ according to the given mask.
* @pool: IRQ pool to request from.
- * @req_mask: cpumask requested for this IRQ.
+ * @af_desc: affinity descriptor for this IRQ.
*
* This function returns a pointer to IRQ, or ERR_PTR in case of error.
*/
struct mlx5_irq *
-mlx5_irq_affinity_request(struct mlx5_irq_pool *pool, const struct cpumask *req_mask)
+mlx5_irq_affinity_request(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
{
struct mlx5_irq *least_loaded_irq, *new_irq;
mutex_lock(&pool->lock);
- least_loaded_irq = irq_pool_find_least_loaded(pool, req_mask);
+ least_loaded_irq = irq_pool_find_least_loaded(pool, &af_desc->mask);
if (least_loaded_irq &&
mlx5_irq_read_locked(least_loaded_irq) < pool->min_threshold)
goto out;
/* We didn't find an IRQ with less than min_thres, try to allocate a new IRQ */
- new_irq = irq_pool_request_irq(pool, req_mask);
+ new_irq = irq_pool_request_irq(pool, af_desc);
if (IS_ERR(new_irq)) {
if (!least_loaded_irq) {
/* We failed to create an IRQ and we didn't find an IRQ */
struct mlx5_irq **irqs)
{
struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev);
- cpumask_var_t req_mask;
+ struct irq_affinity_desc af_desc = {};
struct mlx5_irq *irq;
int i = 0;
- if (!zalloc_cpumask_var(&req_mask, GFP_KERNEL))
- return -ENOMEM;
- cpumask_copy(req_mask, cpu_online_mask);
+ af_desc.is_managed = 1;
+ cpumask_copy(&af_desc.mask, cpu_online_mask);
for (i = 0; i < nirqs; i++) {
if (mlx5_irq_pool_is_sf_pool(pool))
- irq = mlx5_irq_affinity_request(pool, req_mask);
+ irq = mlx5_irq_affinity_request(pool, &af_desc);
else
			/* In case the SF pool doesn't exist, fall back to the PF IRQs.
			 * The PF IRQs are already allocated and bound to CPUs
* at this point. Hence, only an index is needed.
*/
- irq = mlx5_irq_request(dev, i, NULL);
+ irq = mlx5_irq_request(dev, i, NULL, NULL);
if (IS_ERR(irq))
break;
irqs[i] = irq;
- cpumask_clear_cpu(cpumask_first(mlx5_irq_get_affinity_mask(irq)), req_mask);
+ cpumask_clear_cpu(cpumask_first(mlx5_irq_get_affinity_mask(irq)), &af_desc.mask);
mlx5_core_dbg(pool->dev, "IRQ %u mapped to cpu %*pbl, %u EQs on this irq\n",
pci_irq_vector(dev->pdev, mlx5_irq_get_index(irq)),
cpumask_pr_args(mlx5_irq_get_affinity_mask(irq)),
mlx5_irq_read_locked(irq) / MLX5_EQ_REFS_PER_IRQ);
}
- free_cpumask_var(req_mask);
if (!i)
return PTR_ERR(irq);
return i;
#define chains_lock(chains) ((chains)->lock)
#define chains_ht(chains) ((chains)->chains_ht)
#define prios_ht(chains) ((chains)->prios_ht)
-#define tc_default_ft(chains) ((chains)->tc_default_ft)
-#define tc_end_ft(chains) ((chains)->tc_end_ft)
-#define ns_to_chains_fs_prio(ns) ((ns) == MLX5_FLOW_NAMESPACE_FDB ? \
- FDB_TC_OFFLOAD : MLX5E_TC_PRIO)
+#define chains_default_ft(chains) ((chains)->chains_default_ft)
+#define chains_end_ft(chains) ((chains)->chains_end_ft)
#define FT_TBL_SZ (64 * 1024)
struct mlx5_fs_chains {
/* Protects above chains_ht and prios_ht */
struct mutex lock;
- struct mlx5_flow_table *tc_default_ft;
- struct mlx5_flow_table *tc_end_ft;
+ struct mlx5_flow_table *chains_default_ft;
+ struct mlx5_flow_table *chains_end_ft;
struct mapping_ctx *chains_mapping;
enum mlx5_flow_namespace_type ns;
u32 group_num;
u32 flags;
+ int fs_base_prio;
+ int fs_base_level;
};
struct fs_chain {
mlx5_chains_set_end_ft(struct mlx5_fs_chains *chains,
struct mlx5_flow_table *ft)
{
- tc_end_ft(chains) = ft;
+ chains_end_ft(chains) = ft;
}
static struct mlx5_flow_table *
sz = (chain == mlx5_chains_get_nf_ft_chain(chains)) ? FT_TBL_SZ : POOL_NEXT_SIZE;
ft_attr.max_fte = sz;
- /* We use tc_default_ft(chains) as the table's next_ft till
+	/* We use chains_default_ft(chains) as the table's next_ft until
	 * ignore_flow_level is allowed on FT creation and not just for FTEs.
	 * Instead, the caller should add an explicit miss rule if needed.
*/
- ft_attr.next_ft = tc_default_ft(chains);
+ ft_attr.next_ft = chains_default_ft(chains);
	/* The root table (chain 0, prio 1, level 0) is required to be
* connected to the previous fs_core managed prio.
*/
if (!mlx5_chains_ignore_flow_level_supported(chains) ||
(chain == 0 && prio == 1 && level == 0)) {
- ft_attr.level = level;
- ft_attr.prio = prio - 1;
+ ft_attr.level = chains->fs_base_level;
+ ft_attr.prio = chains->fs_base_prio;
ns = (chains->ns == MLX5_FLOW_NAMESPACE_FDB) ?
mlx5_get_fdb_sub_ns(chains->dev, chain) :
mlx5_get_flow_namespace(chains->dev, chains->ns);
} else {
ft_attr.flags |= MLX5_FLOW_TABLE_UNMANAGED;
- ft_attr.prio = ns_to_chains_fs_prio(chains->ns);
+ ft_attr.prio = chains->fs_base_prio;
/* Firmware doesn't allow us to create another level 0 table,
- * so we create all unmanaged tables as level 1.
+ * so we create all unmanaged tables as level 1 (base + 1).
*
* To connect them, we use explicit miss rules with
* ignore_flow_level. Caller is responsible to create
* these rules (if needed).
*/
- ft_attr.level = 1;
+ ft_attr.level = chains->fs_base_level + 1;
ns = mlx5_get_flow_namespace(chains->dev, chains->ns);
}
int err;
if (chain->chain == mlx5_chains_get_nf_ft_chain(chains) ||
- !mlx5_chains_prios_supported(chains))
+ !mlx5_chains_prios_supported(chains) ||
+ !chains->chains_mapping)
return 0;
err = mlx5_chains_get_chain_mapping(chains, chain->chain, &index);
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest.ft = next_ft;
- if (next_ft == tc_end_ft(chains) &&
+ if (chains->chains_mapping && next_ft == chains_end_ft(chains) &&
chain->chain != mlx5_chains_get_nf_ft_chain(chains) &&
mlx5_chains_prios_supported(chains)) {
act.modify_hdr = chain->miss_modify_hdr;
/* Default miss for each chain: */
next_ft = (chain == mlx5_chains_get_nf_ft_chain(chains)) ?
- tc_default_ft(chains) :
- tc_end_ft(chains);
+ chains_default_ft(chains) :
+ chains_end_ft(chains);
list_for_each(pos, &chain_s->prios_list) {
struct prio *p = list_entry(pos, struct prio, list);
struct mlx5_flow_table *
mlx5_chains_get_tc_end_ft(struct mlx5_fs_chains *chains)
{
- return tc_end_ft(chains);
+ return chains_end_ft(chains);
}
struct mlx5_flow_table *
static struct mlx5_fs_chains *
mlx5_chains_init(struct mlx5_core_dev *dev, struct mlx5_chains_attr *attr)
{
- struct mlx5_fs_chains *chains_priv;
- u32 max_flow_counter;
+ struct mlx5_fs_chains *chains;
int err;
- chains_priv = kzalloc(sizeof(*chains_priv), GFP_KERNEL);
- if (!chains_priv)
+ chains = kzalloc(sizeof(*chains), GFP_KERNEL);
+ if (!chains)
return ERR_PTR(-ENOMEM);
- max_flow_counter = (MLX5_CAP_GEN(dev, max_flow_counter_31_16) << 16) |
- MLX5_CAP_GEN(dev, max_flow_counter_15_0);
-
- mlx5_core_dbg(dev,
- "Init flow table chains, max counters(%d), groups(%d), max flow table size(%d)\n",
- max_flow_counter, attr->max_grp_num, attr->max_ft_sz);
-
- chains_priv->dev = dev;
- chains_priv->flags = attr->flags;
- chains_priv->ns = attr->ns;
- chains_priv->group_num = attr->max_grp_num;
- chains_priv->chains_mapping = attr->mapping;
- tc_default_ft(chains_priv) = tc_end_ft(chains_priv) = attr->default_ft;
+ chains->dev = dev;
+ chains->flags = attr->flags;
+ chains->ns = attr->ns;
+ chains->group_num = attr->max_grp_num;
+ chains->chains_mapping = attr->mapping;
+ chains->fs_base_prio = attr->fs_base_prio;
+ chains->fs_base_level = attr->fs_base_level;
+ chains_default_ft(chains) = chains_end_ft(chains) = attr->default_ft;
- mlx5_core_info(dev, "Supported tc offload range - chains: %u, prios: %u\n",
- mlx5_chains_get_chain_range(chains_priv),
- mlx5_chains_get_prio_range(chains_priv));
-
- err = rhashtable_init(&chains_ht(chains_priv), &chain_params);
+ err = rhashtable_init(&chains_ht(chains), &chain_params);
if (err)
goto init_chains_ht_err;
- err = rhashtable_init(&prios_ht(chains_priv), &prio_params);
+ err = rhashtable_init(&prios_ht(chains), &prio_params);
if (err)
goto init_prios_ht_err;
- mutex_init(&chains_lock(chains_priv));
+ mutex_init(&chains_lock(chains));
- return chains_priv;
+ return chains;
init_prios_ht_err:
- rhashtable_destroy(&chains_ht(chains_priv));
+ rhashtable_destroy(&chains_ht(chains));
init_chains_ht_err:
- kfree(chains_priv);
+ kfree(chains);
return ERR_PTR(err);
}
return mapping_remove(ctx, chain_mapping);
}
+
+void
+mlx5_chains_print_info(struct mlx5_fs_chains *chains)
+{
+ mlx5_core_dbg(chains->dev, "Flow table chains groups(%d)\n", chains->group_num);
+}
struct mlx5_chains_attr {
enum mlx5_flow_namespace_type ns;
+ int fs_base_prio;
+ int fs_base_level;
u32 flags;
- u32 max_ft_sz;
u32 max_grp_num;
struct mlx5_flow_table *default_ft;
struct mapping_ctx *mapping;
void
mlx5_chains_set_end_ft(struct mlx5_fs_chains *chains,
struct mlx5_flow_table *ft);
+void
+mlx5_chains_print_info(struct mlx5_fs_chains *chains);
#else /* CONFIG_MLX5_CLS_ACT */
mlx5_chains_create(struct mlx5_core_dev *dev, struct mlx5_chains_attr *attr)
{ return NULL; }
static inline void
-mlx5_chains_destroy(struct mlx5_fs_chains *chains) {};
+mlx5_chains_destroy(struct mlx5_fs_chains *chains) {}
+static inline void
+mlx5_chains_print_info(struct mlx5_fs_chains *chains) {}
#endif /* CONFIG_MLX5_CLS_ACT */
#include <linux/kmod.h>
#include <linux/mlx5/mlx5_ifc.h>
#include <linux/mlx5/vport.h>
-#ifdef CONFIG_RFS_ACCEL
-#include <linux/cpu_rmap.h>
-#endif
#include <linux/version.h>
#include <net/devlink.h>
#include "mlx5_core.h"
+#include "thermal.h"
#include "lib/eq.h"
#include "fs_core.h"
#include "lib/mpfs.h"
if (!(fw_initializing >> 31))
break;
if (time_after(jiffies, end) ||
- test_and_clear_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state)) {
+ test_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state)) {
err = -EBUSY;
break;
}
return 0;
err_clr_master:
- pci_clear_master(dev->pdev);
release_bar(dev->pdev);
err_disable:
mlx5_pci_disable_device(dev);
*/
mlx5_drain_health_wq(dev);
iounmap(dev->iseg);
- pci_clear_master(dev->pdev);
release_bar(dev->pdev);
mlx5_pci_disable_device(dev);
}
goto function_teardown;
}
+ err = mlx5_devlink_params_register(priv_to_devlink(dev));
+ if (err)
+ goto err_devlink_params_reg;
+
err = mlx5_load(dev);
if (err)
goto err_load;
set_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
- err = mlx5_devlink_params_register(priv_to_devlink(dev));
- if (err)
- goto err_devlink_params_reg;
-
err = mlx5_register_device(dev);
if (err)
goto err_register;
return 0;
err_register:
- mlx5_devlink_params_unregister(priv_to_devlink(dev));
-err_devlink_params_reg:
clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
mlx5_unload(dev);
err_load:
+ mlx5_devlink_params_unregister(priv_to_devlink(dev));
+err_devlink_params_reg:
mlx5_cleanup_once(dev);
function_teardown:
mlx5_function_teardown(dev, true);
mutex_lock(&dev->intf_state_mutex);
mlx5_unregister_device(dev);
- mlx5_devlink_params_unregister(priv_to_devlink(dev));
if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
mlx5_core_warn(dev, "%s: interface is down, NOP\n",
clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
mlx5_unload(dev);
+ mlx5_devlink_params_unregister(priv_to_devlink(dev));
mlx5_cleanup_once(dev);
mlx5_function_teardown(dev, true);
out:
if (err)
dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
+ err = mlx5_thermal_init(dev);
+ if (err)
+ dev_err(&pdev->dev, "mlx5_thermal_init failed with error code %d\n", err);
+
pci_save_state(pdev);
devlink_register(devlink);
return 0;
mlx5_drain_fw_reset(dev);
devlink_unregister(devlink);
mlx5_sriov_disable(pdev);
+ mlx5_thermal_uninit(dev);
mlx5_crdump_disable(dev);
mlx5_drain_health_wq(dev);
mlx5_uninit_one(dev);
#define MLX5_COMP_EQS_PER_SF 8
struct mlx5_irq;
+struct cpu_rmap;
int mlx5_irq_table_init(struct mlx5_core_dev *dev);
void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
struct mlx5_irq *mlx5_ctrl_irq_request(struct mlx5_core_dev *dev);
void mlx5_ctrl_irq_release(struct mlx5_irq *ctrl_irq);
struct mlx5_irq *mlx5_irq_request(struct mlx5_core_dev *dev, u16 vecidx,
- struct cpumask *affinity);
+ struct irq_affinity_desc *af_desc,
+ struct cpu_rmap **rmap);
int mlx5_irqs_request_vectors(struct mlx5_core_dev *dev, u16 *cpus, int nirqs,
- struct mlx5_irq **irqs);
+ struct mlx5_irq **irqs, struct cpu_rmap **rmap);
void mlx5_irqs_release_vectors(struct mlx5_irq **irqs, int nirqs);
int mlx5_irq_attach_nb(struct mlx5_irq *irq, struct notifier_block *nb);
int mlx5_irq_detach_nb(struct mlx5_irq *irq, struct notifier_block *nb);
int mlx5_irq_affinity_irqs_request_auto(struct mlx5_core_dev *dev, int nirqs,
struct mlx5_irq **irqs);
struct mlx5_irq *mlx5_irq_affinity_request(struct mlx5_irq_pool *pool,
- const struct cpumask *req_mask);
+ struct irq_affinity_desc *af_desc);
void mlx5_irq_affinity_irqs_release(struct mlx5_core_dev *dev, struct mlx5_irq **irqs,
int num_irqs);
#else
}
static inline struct mlx5_irq *
-mlx5_irq_affinity_request(struct mlx5_irq_pool *pool, const struct cpumask *req_mask)
+mlx5_irq_affinity_request(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
{
return ERR_PTR(-EOPNOTSUPP);
}
#include "mlx5_irq.h"
#include "pci_irq.h"
#include "lib/sf.h"
+#include "lib/eq.h"
#ifdef CONFIG_RFS_ACCEL
#include <linux/cpu_rmap.h>
#endif
char name[MLX5_MAX_IRQ_NAME];
struct mlx5_irq_pool *pool;
int refcount;
- u32 index;
- int irqn;
+ struct msi_map map;
};
struct mlx5_irq_table {
- struct mlx5_irq_pool *pf_pool;
+ struct mlx5_irq_pool *pcif_pool;
struct mlx5_irq_pool *sf_ctrl_pool;
struct mlx5_irq_pool *sf_comp_pool;
};
static void irq_release(struct mlx5_irq *irq)
{
struct mlx5_irq_pool *pool = irq->pool;
+#ifdef CONFIG_RFS_ACCEL
+ struct cpu_rmap *rmap;
+#endif
- xa_erase(&pool->irqs, irq->index);
- /* free_irq requires that affinity_hint and rmap will be cleared
- * before calling it. This is why there is asymmetry with set_rmap
- * which should be called after alloc_irq but before request_irq.
+ xa_erase(&pool->irqs, irq->map.index);
+	/* free_irq requires that affinity_hint and rmap be cleared before
+	 * calling it. To satisfy this requirement, we call
+	 * irq_cpu_rmap_remove() here to remove the notifier.
*/
- irq_update_affinity_hint(irq->irqn, NULL);
+ irq_update_affinity_hint(irq->map.virq, NULL);
+#ifdef CONFIG_RFS_ACCEL
+ rmap = mlx5_eq_table_get_rmap(pool->dev);
+ if (rmap && irq->map.index)
+ irq_cpu_rmap_remove(rmap, irq->map.virq);
+#endif
+
free_cpumask_var(irq->mask);
- free_irq(irq->irqn, &irq->nh);
+ free_irq(irq->map.virq, &irq->nh);
+ if (irq->map.index && pci_msix_can_alloc_dyn(pool->dev->pdev))
+ pci_msix_free_irq(pool->dev->pdev, irq->map);
kfree(irq);
}
return;
}
- if (vecidx == pool->xa_num_irqs.max) {
+ if (!vecidx) {
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_async%d", vecidx);
return;
}
}
struct mlx5_irq *mlx5_irq_alloc(struct mlx5_irq_pool *pool, int i,
- const struct cpumask *affinity)
+ struct irq_affinity_desc *af_desc,
+ struct cpu_rmap **rmap)
{
struct mlx5_core_dev *dev = pool->dev;
char name[MLX5_MAX_IRQ_NAME];
irq = kzalloc(sizeof(*irq), GFP_KERNEL);
if (!irq)
return ERR_PTR(-ENOMEM);
- irq->irqn = pci_irq_vector(dev->pdev, i);
+ if (!i || !pci_msix_can_alloc_dyn(dev->pdev)) {
+		/* The vector at index 0 was already allocated.
+		 * Just get its irq number. If dynamic irq allocation is not
+		 * supported, all vectors have already been allocated as well.
+ */
+ irq->map.virq = pci_irq_vector(dev->pdev, i);
+ irq->map.index = 0;
+ } else {
+ irq->map = pci_msix_alloc_irq_at(dev->pdev, MSI_ANY_INDEX, af_desc);
+ if (!irq->map.virq) {
+ err = irq->map.index;
+ goto err_alloc_irq;
+ }
+ }
+
+ if (i && rmap && *rmap) {
+#ifdef CONFIG_RFS_ACCEL
+ err = irq_cpu_rmap_add(*rmap, irq->map.virq);
+ if (err)
+ goto err_irq_rmap;
+#endif
+ }
if (!mlx5_irq_pool_is_sf_pool(pool))
irq_set_name(pool, name, i);
else
ATOMIC_INIT_NOTIFIER_HEAD(&irq->nh);
snprintf(irq->name, MLX5_MAX_IRQ_NAME,
"%s@pci:%s", name, pci_name(dev->pdev));
- err = request_irq(irq->irqn, irq_int_handler, 0, irq->name,
+ err = request_irq(irq->map.virq, irq_int_handler, 0, irq->name,
&irq->nh);
if (err) {
mlx5_core_err(dev, "Failed to request irq. err = %d\n", err);
err = -ENOMEM;
goto err_cpumask;
}
- if (affinity) {
- cpumask_copy(irq->mask, affinity);
- irq_set_affinity_and_hint(irq->irqn, irq->mask);
+ if (af_desc) {
+ cpumask_copy(irq->mask, &af_desc->mask);
+ irq_set_affinity_and_hint(irq->map.virq, irq->mask);
}
irq->pool = pool;
irq->refcount = 1;
- irq->index = i;
- err = xa_err(xa_store(&pool->irqs, irq->index, irq, GFP_KERNEL));
+ irq->map.index = i;
+ err = xa_err(xa_store(&pool->irqs, irq->map.index, irq, GFP_KERNEL));
if (err) {
mlx5_core_err(dev, "Failed to alloc xa entry for irq(%u). err = %d\n",
- irq->index, err);
+ irq->map.index, err);
goto err_xa;
}
return irq;
err_xa:
- irq_update_affinity_hint(irq->irqn, NULL);
+ if (af_desc)
+ irq_update_affinity_hint(irq->map.virq, NULL);
free_cpumask_var(irq->mask);
err_cpumask:
- free_irq(irq->irqn, &irq->nh);
+ free_irq(irq->map.virq, &irq->nh);
err_req_irq:
+#ifdef CONFIG_RFS_ACCEL
+ if (i && rmap && *rmap) {
+ free_irq_cpu_rmap(*rmap);
+ *rmap = NULL;
+ }
+err_irq_rmap:
+#endif
+ if (i && pci_msix_can_alloc_dyn(dev->pdev))
+ pci_msix_free_irq(dev->pdev, irq->map);
+err_alloc_irq:
kfree(irq);
return ERR_PTR(err);
}
int mlx5_irq_get_index(struct mlx5_irq *irq)
{
- return irq->index;
+ return irq->map.index;
}
/* irq_pool API */
/* requesting an irq from a given pool according to given index */
static struct mlx5_irq *
irq_pool_request_vector(struct mlx5_irq_pool *pool, int vecidx,
- struct cpumask *affinity)
+ struct irq_affinity_desc *af_desc,
+ struct cpu_rmap **rmap)
{
struct mlx5_irq *irq;
mlx5_irq_get_locked(irq);
goto unlock;
}
- irq = mlx5_irq_alloc(pool, vecidx, affinity);
+ irq = mlx5_irq_alloc(pool, vecidx, af_desc, rmap);
unlock:
mutex_unlock(&pool->lock);
return irq;
/* In some configs, there won't be a pool of SFs IRQs. Hence, returning
* the PF IRQs pool in case the SF pool doesn't exist.
*/
- return pool ? pool : irq_table->pf_pool;
+ return pool ? pool : irq_table->pcif_pool;
}
static struct mlx5_irq_pool *ctrl_irq_pool_get(struct mlx5_core_dev *dev)
/* In some configs, there won't be a pool of SFs IRQs. Hence, returning
* the PF IRQs pool in case the SF pool doesn't exist.
*/
- return pool ? pool : irq_table->pf_pool;
+ return pool ? pool : irq_table->pcif_pool;
}
/**
int i;
for (i = 0; i < nirqs; i++) {
- synchronize_irq(irqs[i]->irqn);
+ synchronize_irq(irqs[i]->map.virq);
mlx5_irq_put(irqs[i]);
}
}
struct mlx5_irq *mlx5_ctrl_irq_request(struct mlx5_core_dev *dev)
{
struct mlx5_irq_pool *pool = ctrl_irq_pool_get(dev);
- cpumask_var_t req_mask;
+ struct irq_affinity_desc af_desc;
struct mlx5_irq *irq;
- if (!zalloc_cpumask_var(&req_mask, GFP_KERNEL))
- return ERR_PTR(-ENOMEM);
- cpumask_copy(req_mask, cpu_online_mask);
+ cpumask_copy(&af_desc.mask, cpu_online_mask);
+ af_desc.is_managed = false;
if (!mlx5_irq_pool_is_sf_pool(pool)) {
- /* In case we are allocating a control IRQ for PF/VF */
+		/* We are allocating a control IRQ from a PCI device's pool.
+		 * This can also happen for an SF if the SF pool is empty.
+ */
if (!pool->xa_num_irqs.max) {
- cpumask_clear(req_mask);
+ cpumask_clear(&af_desc.mask);
/* In case we only have a single IRQ for PF/VF */
- cpumask_set_cpu(cpumask_first(cpu_online_mask), req_mask);
+ cpumask_set_cpu(cpumask_first(cpu_online_mask), &af_desc.mask);
}
- /* Allocate the IRQ in the last index of the pool */
- irq = irq_pool_request_vector(pool, pool->xa_num_irqs.max, req_mask);
+ /* Allocate the IRQ in index 0. The vector was already allocated */
+ irq = irq_pool_request_vector(pool, 0, &af_desc, NULL);
} else {
- irq = mlx5_irq_affinity_request(pool, req_mask);
+ irq = mlx5_irq_affinity_request(pool, &af_desc);
}
- free_cpumask_var(req_mask);
return irq;
}
* @dev: mlx5 device that requesting the IRQ.
 * @vecidx: vector index of the IRQ. This argument is ignored if af_desc is
 * provided.
- * @affinity: cpumask requested for this IRQ.
+ * @af_desc: affinity descriptor for this IRQ.
+ * @rmap: pointer to reverse map pointer for completion interrupts
*
* This function returns a pointer to IRQ, or ERR_PTR in case of error.
*/
struct mlx5_irq *mlx5_irq_request(struct mlx5_core_dev *dev, u16 vecidx,
- struct cpumask *affinity)
+ struct irq_affinity_desc *af_desc,
+ struct cpu_rmap **rmap)
{
struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev);
struct mlx5_irq_pool *pool;
struct mlx5_irq *irq;
- pool = irq_table->pf_pool;
- irq = irq_pool_request_vector(pool, vecidx, affinity);
+ pool = irq_table->pcif_pool;
+ irq = irq_pool_request_vector(pool, vecidx, af_desc, rmap);
if (IS_ERR(irq))
return irq;
mlx5_core_dbg(dev, "irq %u mapped to cpu %*pbl, %u EQs on this irq\n",
- irq->irqn, cpumask_pr_args(affinity),
+ irq->map.virq, cpumask_pr_args(&af_desc->mask),
irq->refcount / MLX5_EQ_REFS_PER_IRQ);
return irq;
}
/**
+ * mlx5_msix_alloc - allocate msix interrupt
+ * @dev: mlx5 device from which to request
+ * @handler: interrupt handler
+ * @affdesc: affinity descriptor
+ * @name: interrupt name
+ *
+ * Returns: struct msi_map with result encoded.
+ * Note: the caller must make sure to release the irq by calling
+ * mlx5_msix_free() if shutdown was initiated.
+ */
+struct msi_map mlx5_msix_alloc(struct mlx5_core_dev *dev,
+ irqreturn_t (*handler)(int, void *),
+ const struct irq_affinity_desc *affdesc,
+ const char *name)
+{
+ struct msi_map map;
+ int err;
+
+ if (!dev->pdev) {
+ map.virq = 0;
+ map.index = -EINVAL;
+ return map;
+ }
+
+ map = pci_msix_alloc_irq_at(dev->pdev, MSI_ANY_INDEX, affdesc);
+ if (!map.virq)
+ return map;
+
+ err = request_irq(map.virq, handler, 0, name, NULL);
+ if (err) {
+ mlx5_core_warn(dev, "err %d\n", err);
+ pci_msix_free_irq(dev->pdev, map);
+ map.virq = 0;
+ map.index = -ENOMEM;
+ }
+ return map;
+}
+EXPORT_SYMBOL(mlx5_msix_alloc);
+
+/**
+ * mlx5_msix_free - free a previously allocated msix interrupt
+ * @dev: mlx5 device associated with interrupt
+ * @map: map previously returned by mlx5_msix_alloc()
+ */
+void mlx5_msix_free(struct mlx5_core_dev *dev, struct msi_map map)
+{
+ free_irq(map.virq, NULL);
+ pci_msix_free_irq(dev->pdev, map);
+}
+EXPORT_SYMBOL(mlx5_msix_free);
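A hedged usage sketch for the newly exported dynamic MSI-X API follows. The my_aux_* helpers and the handler are illustrative only; the error handling matches the implementation above, where failure is reported by map.virq == 0 with a negative errno encoded in map.index.

static irqreturn_t my_aux_irq_handler(int irq, void *data)
{
	/* acknowledge the event / schedule deferred work here */
	return IRQ_HANDLED;
}

static int my_aux_setup_irq(struct mlx5_core_dev *mdev, struct msi_map *map)
{
	struct irq_affinity_desc af_desc = {};

	cpumask_set_cpu(cpumask_first(cpu_online_mask), &af_desc.mask);
	*map = mlx5_msix_alloc(mdev, my_aux_irq_handler, &af_desc, "my_aux");
	if (!map->virq)
		return map->index;	/* negative errno encoded in index */
	return 0;
}

static void my_aux_teardown_irq(struct mlx5_core_dev *mdev, struct msi_map map)
{
	if (map.virq)
		mlx5_msix_free(mdev, map);
}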
+
+/**
* mlx5_irqs_release_vectors - release one or more IRQs back to the system.
* @irqs: IRQs to be released.
* @nirqs: number of IRQs to be released.
* @cpus: CPUs array for binding the IRQs
* @nirqs: number of IRQs to request.
* @irqs: an output array of IRQs pointers.
+ * @rmap: pointer to reverse map pointer for completion interrupts
*
* Each IRQ is bound to at most 1 CPU.
 * This function requests nirqs IRQs, starting from @vecidx.
* @nirqs), if successful, or a negative error code in case of an error.
*/
int mlx5_irqs_request_vectors(struct mlx5_core_dev *dev, u16 *cpus, int nirqs,
- struct mlx5_irq **irqs)
+ struct mlx5_irq **irqs, struct cpu_rmap **rmap)
{
- cpumask_var_t req_mask;
+ struct irq_affinity_desc af_desc;
struct mlx5_irq *irq;
int i;
- if (!zalloc_cpumask_var(&req_mask, GFP_KERNEL))
- return -ENOMEM;
+ af_desc.is_managed = 1;
for (i = 0; i < nirqs; i++) {
- cpumask_set_cpu(cpus[i], req_mask);
- irq = mlx5_irq_request(dev, i, req_mask);
+ cpumask_set_cpu(cpus[i], &af_desc.mask);
+ irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap);
if (IS_ERR(irq))
break;
- cpumask_clear(req_mask);
+ cpumask_clear(&af_desc.mask);
irqs[i] = irq;
}
- free_cpumask_var(req_mask);
return i ? i : PTR_ERR(irq);
}
kvfree(pool);
}
-static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pf_vec)
+static int irq_pools_init(struct mlx5_core_dev *dev, int sf_vec, int pcif_vec)
{
struct mlx5_irq_table *table = dev->priv.irq_table;
int num_sf_ctrl_by_msix;
int num_sf_ctrl;
int err;
- /* init pf_pool */
- table->pf_pool = irq_pool_alloc(dev, 0, pf_vec, NULL,
- MLX5_EQ_SHARE_IRQ_MIN_COMP,
- MLX5_EQ_SHARE_IRQ_MAX_COMP);
- if (IS_ERR(table->pf_pool))
- return PTR_ERR(table->pf_pool);
+ /* init pcif_pool */
+ table->pcif_pool = irq_pool_alloc(dev, 0, pcif_vec, NULL,
+ MLX5_EQ_SHARE_IRQ_MIN_COMP,
+ MLX5_EQ_SHARE_IRQ_MAX_COMP);
+ if (IS_ERR(table->pcif_pool))
+ return PTR_ERR(table->pcif_pool);
if (!mlx5_sf_max_functions(dev))
return 0;
if (sf_vec < MLX5_IRQ_VEC_COMP_BASE_SF) {
MLX5_SFS_PER_CTRL_IRQ);
num_sf_ctrl = min_t(int, num_sf_ctrl_by_msix, num_sf_ctrl_by_sfs);
num_sf_ctrl = min_t(int, MLX5_IRQ_CTRL_SF_MAX, num_sf_ctrl);
- table->sf_ctrl_pool = irq_pool_alloc(dev, pf_vec, num_sf_ctrl,
+ table->sf_ctrl_pool = irq_pool_alloc(dev, pcif_vec, num_sf_ctrl,
"mlx5_sf_ctrl",
MLX5_EQ_SHARE_IRQ_MIN_CTRL,
MLX5_EQ_SHARE_IRQ_MAX_CTRL);
goto err_pf;
}
/* init sf_comp_pool */
- table->sf_comp_pool = irq_pool_alloc(dev, pf_vec + num_sf_ctrl,
+ table->sf_comp_pool = irq_pool_alloc(dev, pcif_vec + num_sf_ctrl,
sf_vec - num_sf_ctrl, "mlx5_sf_comp",
MLX5_EQ_SHARE_IRQ_MIN_COMP,
MLX5_EQ_SHARE_IRQ_MAX_COMP);
err_sf_ctrl:
irq_pool_free(table->sf_ctrl_pool);
err_pf:
- irq_pool_free(table->pf_pool);
+ irq_pool_free(table->pcif_pool);
return err;
}
irq_pool_free(table->sf_comp_pool);
irq_pool_free(table->sf_ctrl_pool);
}
- irq_pool_free(table->pf_pool);
+ irq_pool_free(table->pcif_pool);
}
/* irq_table API */
int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table)
{
- if (!table->pf_pool->xa_num_irqs.max)
+ if (!table->pcif_pool->xa_num_irqs.max)
return 1;
- return table->pf_pool->xa_num_irqs.max - table->pf_pool->xa_num_irqs.min;
+ return table->pcif_pool->xa_num_irqs.max - table->pcif_pool->xa_num_irqs.min;
}
int mlx5_irq_table_create(struct mlx5_core_dev *dev)
MLX5_CAP_GEN(dev, max_num_eqs) :
1 << MLX5_CAP_GEN(dev, log_max_eq);
int total_vec;
- int pf_vec;
+ int pcif_vec;
+ int req_vec;
int err;
+ int n;
if (mlx5_core_is_sf(dev))
return 0;
- pf_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + 1;
- pf_vec = min_t(int, pf_vec, num_eqs);
+ pcif_vec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() + 1;
+ pcif_vec = min_t(int, pcif_vec, num_eqs);
- total_vec = pf_vec;
+ total_vec = pcif_vec;
if (mlx5_sf_max_functions(dev))
total_vec += MLX5_IRQ_CTRL_SF_MAX +
MLX5_COMP_EQS_PER_SF * mlx5_sf_max_functions(dev);
+ total_vec = min_t(int, total_vec, pci_msix_vec_count(dev->pdev));
+ pcif_vec = min_t(int, pcif_vec, pci_msix_vec_count(dev->pdev));
- total_vec = pci_alloc_irq_vectors(dev->pdev, 1, total_vec, PCI_IRQ_MSIX);
- if (total_vec < 0)
- return total_vec;
- pf_vec = min(pf_vec, total_vec);
+ req_vec = pci_msix_can_alloc_dyn(dev->pdev) ? 1 : total_vec;
+ n = pci_alloc_irq_vectors(dev->pdev, 1, req_vec, PCI_IRQ_MSIX);
+ if (n < 0)
+ return n;
- err = irq_pools_init(dev, total_vec - pf_vec, pf_vec);
+ err = irq_pools_init(dev, total_vec - pcif_vec, pcif_vec);
if (err)
pci_free_irq_vectors(dev->pdev);
#define MLX5_EQ_REFS_PER_IRQ (2)
struct mlx5_irq;
+struct cpu_rmap;
struct mlx5_irq_pool {
char name[MLX5_MAX_IRQ_NAME - MLX5_MAX_IRQ_IDX_CHARS];
}
struct mlx5_irq *mlx5_irq_alloc(struct mlx5_irq_pool *pool, int i,
- const struct cpumask *affinity);
+ struct irq_affinity_desc *af_desc,
+ struct cpu_rmap **rmap);
int mlx5_irq_get_locked(struct mlx5_irq *irq);
int mlx5_irq_read_locked(struct mlx5_irq *irq);
int mlx5_irq_put(struct mlx5_irq *irq);
kfree(out);
return err;
}
+
+/* speed in units of 1Mb */
+static const u32 mlx5e_link_speed[MLX5E_LINK_MODES_NUMBER] = {
+ [MLX5E_1000BASE_CX_SGMII] = 1000,
+ [MLX5E_1000BASE_KX] = 1000,
+ [MLX5E_10GBASE_CX4] = 10000,
+ [MLX5E_10GBASE_KX4] = 10000,
+ [MLX5E_10GBASE_KR] = 10000,
+ [MLX5E_20GBASE_KR2] = 20000,
+ [MLX5E_40GBASE_CR4] = 40000,
+ [MLX5E_40GBASE_KR4] = 40000,
+ [MLX5E_56GBASE_R4] = 56000,
+ [MLX5E_10GBASE_CR] = 10000,
+ [MLX5E_10GBASE_SR] = 10000,
+ [MLX5E_10GBASE_ER] = 10000,
+ [MLX5E_40GBASE_SR4] = 40000,
+ [MLX5E_40GBASE_LR4] = 40000,
+ [MLX5E_50GBASE_SR2] = 50000,
+ [MLX5E_100GBASE_CR4] = 100000,
+ [MLX5E_100GBASE_SR4] = 100000,
+ [MLX5E_100GBASE_KR4] = 100000,
+ [MLX5E_100GBASE_LR4] = 100000,
+ [MLX5E_100BASE_TX] = 100,
+ [MLX5E_1000BASE_T] = 1000,
+ [MLX5E_10GBASE_T] = 10000,
+ [MLX5E_25GBASE_CR] = 25000,
+ [MLX5E_25GBASE_KR] = 25000,
+ [MLX5E_25GBASE_SR] = 25000,
+ [MLX5E_50GBASE_CR2] = 50000,
+ [MLX5E_50GBASE_KR2] = 50000,
+};
+
+static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
+ [MLX5E_SGMII_100M] = 100,
+ [MLX5E_1000BASE_X_SGMII] = 1000,
+ [MLX5E_5GBASE_R] = 5000,
+ [MLX5E_10GBASE_XFI_XAUI_1] = 10000,
+ [MLX5E_40GBASE_XLAUI_4_XLPPI_4] = 40000,
+ [MLX5E_25GAUI_1_25GBASE_CR_KR] = 25000,
+ [MLX5E_50GAUI_2_LAUI_2_50GBASE_CR2_KR2] = 50000,
+ [MLX5E_50GAUI_1_LAUI_1_50GBASE_CR_KR] = 50000,
+ [MLX5E_CAUI_4_100GBASE_CR4_KR4] = 100000,
+ [MLX5E_100GAUI_2_100GBASE_CR2_KR2] = 100000,
+ [MLX5E_200GAUI_4_200GBASE_CR4_KR4] = 200000,
+ [MLX5E_400GAUI_8] = 400000,
+ [MLX5E_100GAUI_1_100GBASE_CR_KR] = 100000,
+ [MLX5E_200GAUI_2_200GBASE_CR2_KR2] = 200000,
+ [MLX5E_400GAUI_4_400GBASE_CR4_KR4] = 400000,
+};
+
+int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
+ struct mlx5_port_eth_proto *eproto)
+{
+ u32 out[MLX5_ST_SZ_DW(ptys_reg)];
+ int err;
+
+ if (!eproto)
+ return -EINVAL;
+
+ err = mlx5_query_port_ptys(dev, out, sizeof(out), MLX5_PTYS_EN, port);
+ if (err)
+ return err;
+
+ eproto->cap = MLX5_GET_ETH_PROTO(ptys_reg, out, ext,
+ eth_proto_capability);
+ eproto->admin = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_admin);
+ eproto->oper = MLX5_GET_ETH_PROTO(ptys_reg, out, ext, eth_proto_oper);
+ return 0;
+}
+
+bool mlx5_ptys_ext_supported(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_port_eth_proto eproto;
+ int err;
+
+ if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_extended_ethernet))
+ return true;
+
+ err = mlx5_port_query_eth_proto(mdev, 1, true, &eproto);
+ if (err)
+ return false;
+
+ return !!eproto.cap;
+}
+
+static void mlx5e_port_get_speed_arr(struct mlx5_core_dev *mdev,
+ const u32 **arr, u32 *size,
+ bool force_legacy)
+{
+ bool ext = force_legacy ? false : mlx5_ptys_ext_supported(mdev);
+
+ *size = ext ? ARRAY_SIZE(mlx5e_ext_link_speed) :
+ ARRAY_SIZE(mlx5e_link_speed);
+ *arr = ext ? mlx5e_ext_link_speed : mlx5e_link_speed;
+}
+
+u32 mlx5_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
+ bool force_legacy)
+{
+ unsigned long temp = eth_proto_oper;
+ const u32 *table;
+ u32 speed = 0;
+ u32 max_size;
+ int i;
+
+ mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
+ i = find_first_bit(&temp, max_size);
+ if (i < max_size)
+ speed = table[i];
+ return speed;
+}
+
+u32 mlx5_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
+ bool force_legacy)
+{
+ u32 link_modes = 0;
+ const u32 *table;
+ u32 max_size;
+ int i;
+
+ mlx5e_port_get_speed_arr(mdev, &table, &max_size, force_legacy);
+ for (i = 0; i < max_size; ++i) {
+ if (table[i] == speed)
+ link_modes |= MLX5E_PROT_MASK(i);
+ }
+ return link_modes;
+}
+
+int mlx5_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
+{
+ struct mlx5_port_eth_proto eproto;
+ u32 max_speed = 0;
+ const u32 *table;
+ u32 max_size;
+ bool ext;
+ int err;
+ int i;
+
+ ext = mlx5_ptys_ext_supported(mdev);
+ err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
+ if (err)
+ return err;
+
+ mlx5e_port_get_speed_arr(mdev, &table, &max_size, false);
+ for (i = 0; i < max_size; ++i)
+ if (eproto.cap & MLX5E_PROT_MASK(i))
+ max_speed = max(max_speed, table[i]);
+
+ *speed = max_speed;
+ return 0;
+}
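A short, hedged sketch of how the helpers above compose: query the PTYS register and translate the operational bitmap into Mb/s. my_query_oper_speed() is a hypothetical wrapper around the helpers added in this hunk.

static int my_query_oper_speed(struct mlx5_core_dev *mdev, u32 *speed)
{
	struct mlx5_port_eth_proto eproto;
	bool ext = mlx5_ptys_ext_supported(mdev);
	int err;

	err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
	if (err)
		return err;

	/* eth_proto_oper normally has a single bit set; map it to Mb/s,
	 * using the extended table when the device supports it.
	 */
	*speed = mlx5_port_ptys2speed(mdev, eproto.oper, false);
	return *speed ? 0 : -EINVAL;
}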
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/thermal.h>
+#include <linux/err.h>
+#include <linux/mlx5/driver.h>
+#include "mlx5_core.h"
+#include "thermal.h"
+
+#define MLX5_THERMAL_POLL_INT_MSEC 1000
+#define MLX5_THERMAL_NUM_TRIPS 0
+#define MLX5_THERMAL_ASIC_SENSOR_INDEX 0
+
+/* Bit string indicating the writability of trip points, if any */
+#define MLX5_THERMAL_TRIP_MASK (BIT(MLX5_THERMAL_NUM_TRIPS) - 1)
+
+struct mlx5_thermal {
+ struct mlx5_core_dev *mdev;
+ struct thermal_zone_device *tzdev;
+};
+
+static int mlx5_thermal_get_mtmp_temp(struct mlx5_core_dev *mdev, u32 id, int *p_temp)
+{
+ u32 mtmp_out[MLX5_ST_SZ_DW(mtmp_reg)] = {};
+ u32 mtmp_in[MLX5_ST_SZ_DW(mtmp_reg)] = {};
+ int err;
+
+ MLX5_SET(mtmp_reg, mtmp_in, sensor_index, id);
+
+ err = mlx5_core_access_reg(mdev, mtmp_in, sizeof(mtmp_in),
+ mtmp_out, sizeof(mtmp_out),
+ MLX5_REG_MTMP, 0, 0);
+
+ if (err)
+ return err;
+
+ *p_temp = MLX5_GET(mtmp_reg, mtmp_out, temperature);
+
+ return 0;
+}
+
+static int mlx5_thermal_get_temp(struct thermal_zone_device *tzdev,
+ int *p_temp)
+{
+ struct mlx5_thermal *thermal = tzdev->devdata;
+ struct mlx5_core_dev *mdev = thermal->mdev;
+ int err;
+
+ err = mlx5_thermal_get_mtmp_temp(mdev, MLX5_THERMAL_ASIC_SENSOR_INDEX, p_temp);
+
+ if (err)
+ return err;
+
+	/* The temperature is returned in units of 0.125 C. The thermal
+	 * framework expects the value in units of 0.001 C (millidegrees).
+ */
+ *p_temp *= 125;
+
+ return 0;
+}
+
+static struct thermal_zone_device_ops mlx5_thermal_ops = {
+ .get_temp = mlx5_thermal_get_temp,
+};
+
+int mlx5_thermal_init(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_thermal *thermal;
+ struct thermal_zone_device *tzd;
+ const char *data = "mlx5";
+
+ tzd = thermal_zone_get_zone_by_name(data);
+ if (!IS_ERR(tzd))
+ return 0;
+
+ thermal = kzalloc(sizeof(*thermal), GFP_KERNEL);
+ if (!thermal)
+ return -ENOMEM;
+
+ thermal->mdev = mdev;
+ thermal->tzdev = thermal_zone_device_register(data,
+ MLX5_THERMAL_NUM_TRIPS,
+ MLX5_THERMAL_TRIP_MASK,
+ thermal,
+ &mlx5_thermal_ops,
+ NULL, 0, MLX5_THERMAL_POLL_INT_MSEC);
+ if (IS_ERR(thermal->tzdev)) {
+ dev_err(mdev->device, "Failed to register thermal zone device (%s) %ld\n",
+ data, PTR_ERR(thermal->tzdev));
+ kfree(thermal);
+ return -EINVAL;
+ }
+
+ mdev->thermal = thermal;
+ return 0;
+}
+
+void mlx5_thermal_uninit(struct mlx5_core_dev *mdev)
+{
+ if (!mdev->thermal)
+ return;
+
+ thermal_zone_device_unregister(mdev->thermal->tzdev);
+ kfree(mdev->thermal);
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+ * Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.
+ */
+#ifndef __MLX5_THERMAL_DRIVER_H
+#define __MLX5_THERMAL_DRIVER_H
+
+#if IS_ENABLED(CONFIG_THERMAL)
+int mlx5_thermal_init(struct mlx5_core_dev *mdev);
+void mlx5_thermal_uninit(struct mlx5_core_dev *mdev);
+#else
+static inline int mlx5_thermal_init(struct mlx5_core_dev *mdev)
+{
+ mdev->thermal = NULL;
+ return 0;
+}
+
+static inline void mlx5_thermal_uninit(struct mlx5_core_dev *mdev) { }
+#endif
+
+#endif /* __MLX5_THERMAL_DRIVER_H */
#define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */
#define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */
#define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */
+#define MLXSW_THERMAL_MODULE_TEMP_NORM 55000 /* 55C */
+#define MLXSW_THERMAL_MODULE_TEMP_HIGH 65000 /* 65C */
+#define MLXSW_THERMAL_MODULE_TEMP_HOT 80000 /* 80C */
#define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */
#define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
#define MLXSW_THERMAL_MAX_STATE 10
"mlxreg_fan",
};
-enum mlxsw_thermal_trips {
- MLXSW_THERMAL_TEMP_TRIP_NORM,
- MLXSW_THERMAL_TEMP_TRIP_HIGH,
- MLXSW_THERMAL_TEMP_TRIP_HOT,
-};
-
struct mlxsw_cooling_states {
int min_state;
int max_state;
},
};
+static const struct thermal_trip default_thermal_module_trips[] = {
+ { /* In range - 0-40% PWM */
+ .type = THERMAL_TRIP_ACTIVE,
+ .temperature = MLXSW_THERMAL_MODULE_TEMP_NORM,
+ .hysteresis = MLXSW_THERMAL_HYSTERESIS_TEMP,
+ },
+ {
+ /* In range - 40-100% PWM */
+ .type = THERMAL_TRIP_ACTIVE,
+ .temperature = MLXSW_THERMAL_MODULE_TEMP_HIGH,
+ .hysteresis = MLXSW_THERMAL_HYSTERESIS_TEMP,
+ },
+ { /* Warning */
+ .type = THERMAL_TRIP_HOT,
+ .temperature = MLXSW_THERMAL_MODULE_TEMP_HOT,
+ },
+};
+
static const struct mlxsw_cooling_states default_cooling_states[] = {
{
.min_state = 0,
return -ENODEV;
}
-static void
-mlxsw_thermal_module_trips_reset(struct mlxsw_thermal_module *tz)
-{
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temperature = 0;
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temperature = 0;
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temperature = 0;
-}
-
-static int
-mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
- struct mlxsw_thermal_module *tz,
- int crit_temp, int emerg_temp)
-{
- int err;
-
- /* Do not try to query temperature thresholds directly from the module's
- * EEPROM if we got valid thresholds from MTMP.
- */
- if (!emerg_temp || !crit_temp) {
- err = mlxsw_env_module_temp_thresholds_get(core, tz->slot_index,
- tz->module,
- SFP_TEMP_HIGH_WARN,
- &crit_temp);
- if (err)
- return err;
-
- err = mlxsw_env_module_temp_thresholds_get(core, tz->slot_index,
- tz->module,
- SFP_TEMP_HIGH_ALARM,
- &emerg_temp);
- if (err)
- return err;
- }
-
- if (crit_temp > emerg_temp) {
- dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n",
- tz->tzdev->type, crit_temp, emerg_temp);
- return 0;
- }
-
- /* According to the system thermal requirements, the thermal zones are
- * defined with three trip points. The critical and emergency
- * temperature thresholds, provided by QSFP module are set as "active"
- * and "hot" trip points, "normal" trip point is derived from "active"
- * by subtracting double hysteresis value.
- */
- if (crit_temp >= MLXSW_THERMAL_MODULE_TEMP_SHIFT)
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temperature = crit_temp -
- MLXSW_THERMAL_MODULE_TEMP_SHIFT;
- else
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temperature = crit_temp;
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temperature = crit_temp;
- tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temperature = emerg_temp;
-
- return 0;
-}
-
static int mlxsw_thermal_bind(struct thermal_zone_device *tzdev,
struct thermal_cooling_device *cdev)
{
return err;
}
-static void
-mlxsw_thermal_module_temp_and_thresholds_get(struct mlxsw_core *core,
- u8 slot_index, u16 sensor_index,
- int *p_temp, int *p_crit_temp,
- int *p_emerg_temp)
-{
- char mtmp_pl[MLXSW_REG_MTMP_LEN];
- int err;
-
- /* Read module temperature and thresholds. */
- mlxsw_reg_mtmp_pack(mtmp_pl, slot_index, sensor_index,
- false, false);
- err = mlxsw_reg_query(core, MLXSW_REG(mtmp), mtmp_pl);
- if (err) {
- /* Set temperature and thresholds to zero to avoid passing
- * uninitialized data back to the caller.
- */
- *p_temp = 0;
- *p_crit_temp = 0;
- *p_emerg_temp = 0;
-
- return;
- }
- mlxsw_reg_mtmp_unpack(mtmp_pl, p_temp, NULL, p_crit_temp, p_emerg_temp,
- NULL);
-}
-
static int mlxsw_thermal_module_temp_get(struct thermal_zone_device *tzdev,
int *p_temp)
{
struct mlxsw_thermal_module *tz = tzdev->devdata;
struct mlxsw_thermal *thermal = tz->parent;
- int temp, crit_temp, emerg_temp;
- struct device *dev;
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
u16 sensor_index;
+ int err;
- dev = thermal->bus_info->dev;
sensor_index = MLXSW_REG_MTMP_MODULE_INDEX_MIN + tz->module;
-
- /* Read module temperature and thresholds. */
- mlxsw_thermal_module_temp_and_thresholds_get(thermal->core,
- tz->slot_index,
- sensor_index, &temp,
- &crit_temp, &emerg_temp);
- *p_temp = temp;
-
- if (!temp)
- return 0;
-
- /* Update trip points. */
- mlxsw_thermal_module_trips_update(dev, thermal->core, tz,
- crit_temp, emerg_temp);
-
+ mlxsw_reg_mtmp_pack(mtmp_pl, tz->slot_index, sensor_index,
+ false, false);
+ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtmp), mtmp_pl);
+ if (err)
+ return err;
+ mlxsw_reg_mtmp_unpack(mtmp_pl, p_temp, NULL, NULL, NULL, NULL);
return 0;
}
thermal_zone_device_unregister(tzdev);
}
-static int
+static void
mlxsw_thermal_module_init(struct device *dev, struct mlxsw_core *core,
struct mlxsw_thermal *thermal,
struct mlxsw_thermal_area *area, u8 module)
{
struct mlxsw_thermal_module *module_tz;
- int dummy_temp, crit_temp, emerg_temp;
- u16 sensor_index;
- sensor_index = MLXSW_REG_MTMP_MODULE_INDEX_MIN + module;
module_tz = &area->tz_module_arr[module];
/* Skip if parent is already set (case of port split). */
if (module_tz->parent)
- return 0;
+ return;
module_tz->module = module;
module_tz->slot_index = area->slot_index;
module_tz->parent = thermal;
- memcpy(module_tz->trips, default_thermal_trips,
+ BUILD_BUG_ON(ARRAY_SIZE(default_thermal_module_trips) !=
+ MLXSW_THERMAL_NUM_TRIPS);
+ memcpy(module_tz->trips, default_thermal_module_trips,
sizeof(thermal->trips));
memcpy(module_tz->cooling_states, default_cooling_states,
sizeof(thermal->cooling_states));
- /* Initialize all trip point. */
- mlxsw_thermal_module_trips_reset(module_tz);
- /* Read module temperature and thresholds. */
- mlxsw_thermal_module_temp_and_thresholds_get(core, area->slot_index,
- sensor_index, &dummy_temp,
- &crit_temp, &emerg_temp);
- /* Update trip point according to the module data. */
- return mlxsw_thermal_module_trips_update(dev, core, module_tz,
- crit_temp, emerg_temp);
}
static void mlxsw_thermal_module_fini(struct mlxsw_thermal_module *module_tz)
if (!area->tz_module_arr)
return -ENOMEM;
- for (i = 0; i < area->tz_module_num; i++) {
- err = mlxsw_thermal_module_init(dev, core, thermal, area, i);
- if (err)
- goto err_thermal_module_init;
- }
+ for (i = 0; i < area->tz_module_num; i++)
+ mlxsw_thermal_module_init(dev, core, thermal, area, i);
for (i = 0; i < area->tz_module_num; i++) {
module_tz = &area->tz_module_arr[i];
return 0;
err_thermal_module_tz_init:
-err_thermal_module_init:
for (i = area->tz_module_num - 1; i >= 0; i--)
mlxsw_thermal_module_fini(&area->tz_module_arr[i]);
kfree(area->tz_module_arr);
u8 remote;
int i;
int p;
- int change = 0;
interrupt = hw_block_intr(hw);
port_cfg_back_pressure(hw, p,
(1 == info->duplex));
}
- change |= 1 << i;
port_cfg_change(hw, port, info, status);
}
info->state = media_connected;
} else {
- if (media_disconnected != info->state) {
- change |= 1 << i;
-
- /* Indicate the link just goes down. */
+		/* Indicate that the link just went down. */
+ if (media_disconnected != info->state)
hw->port_mib[p].link_down = 1;
- }
+
info->state = media_disconnected;
}
hw->port_mib[p].state = (u8) info->state;
phy_stop(netdev->phydev);
phy_disconnect(netdev->phydev);
- netdev->phydev = NULL;
}
static void lan743x_phy_interface_select(struct lan743x_adapter *adapter)
depends on NET_SWITCHDEV
depends on BRIDGE || BRIDGE=n
select PHYLINK
- select PACKING
select PAGE_POOL
select VCAP
help
if (likely(!(skb->dev->features & NETIF_F_RXFCS)))
skb_trim(skb, skb->len - ETH_FCS_LEN);
- lan966x_ptp_rxtstamp(lan966x, skb, timestamp);
+ lan966x_ptp_rxtstamp(lan966x, skb, src_port, timestamp);
skb->protocol = eth_type_trans(skb, skb->dev);
if (lan966x->bridge_mask & BIT(src_port)) {
#include <linux/ip.h>
#include <linux/of_platform.h>
#include <linux/of_net.h>
-#include <linux/packing.h>
#include <linux/phy/phy.h>
#include <linux/reset.h>
#include <net/addrconf.h>
return NETDEV_TX_BUSY;
}
+static void lan966x_ifh_set(u8 *ifh, size_t val, size_t pos, size_t length)
+{
+ int i = 0;
+
+ do {
+ u8 p = IFH_LEN_BYTES - (pos + i) / 8 - 1;
+ u8 v = val >> i & 0xff;
+
+ /* There is no need to check for limits of the array, as these
+		 * positions will never be written
+ */
+ ifh[p] |= v << ((pos + i) % 8);
+ ifh[p - 1] |= v >> (8 - (pos + i) % 8);
+
+ i += 8;
+ } while (i < length);
+}
+
void lan966x_ifh_set_bypass(void *ifh, u64 bypass)
{
- packing(ifh, &bypass, IFH_POS_BYPASS + IFH_WID_BYPASS - 1,
- IFH_POS_BYPASS, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, bypass, IFH_POS_BYPASS, IFH_WID_BYPASS);
}
-void lan966x_ifh_set_port(void *ifh, u64 bypass)
+void lan966x_ifh_set_port(void *ifh, u64 port)
{
- packing(ifh, &bypass, IFH_POS_DSTS + IFH_WID_DSTS - 1,
- IFH_POS_DSTS, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, port, IFH_POS_DSTS, IFH_WID_DSTS);
}
-static void lan966x_ifh_set_qos_class(void *ifh, u64 bypass)
+static void lan966x_ifh_set_qos_class(void *ifh, u64 qos)
{
- packing(ifh, &bypass, IFH_POS_QOS_CLASS + IFH_WID_QOS_CLASS - 1,
- IFH_POS_QOS_CLASS, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, qos, IFH_POS_QOS_CLASS, IFH_WID_QOS_CLASS);
}
-static void lan966x_ifh_set_ipv(void *ifh, u64 bypass)
+static void lan966x_ifh_set_ipv(void *ifh, u64 ipv)
{
- packing(ifh, &bypass, IFH_POS_IPV + IFH_WID_IPV - 1,
- IFH_POS_IPV, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, ipv, IFH_POS_IPV, IFH_WID_IPV);
}
static void lan966x_ifh_set_vid(void *ifh, u64 vid)
{
- packing(ifh, &vid, IFH_POS_TCI + IFH_WID_TCI - 1,
- IFH_POS_TCI, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, vid, IFH_POS_TCI, IFH_WID_TCI);
}
static void lan966x_ifh_set_rew_op(void *ifh, u64 rew_op)
{
- packing(ifh, &rew_op, IFH_POS_REW_CMD + IFH_WID_REW_CMD - 1,
- IFH_POS_REW_CMD, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, rew_op, IFH_POS_REW_CMD, IFH_WID_REW_CMD);
}
static void lan966x_ifh_set_timestamp(void *ifh, u64 timestamp)
{
-	packing(ifh, &timestamp, IFH_POS_TIMESTAMP + IFH_WID_TIMESTAMP - 1,
- IFH_POS_TIMESTAMP, IFH_LEN * 4, PACK, 0);
+ lan966x_ifh_set(ifh, timestamp, IFH_POS_TIMESTAMP, IFH_WID_TIMESTAMP);
}
static netdev_tx_t lan966x_port_xmit(struct sk_buff *skb,
}
}
+static u64 lan966x_ifh_get(u8 *ifh, size_t pos, size_t length)
+{
+ u64 val = 0;
+ u8 v;
+
+	for (int i = 0; i < length; i++) {
+ int j = pos + i;
+ int k = j % 8;
+
+ if (i == 0 || k == 0)
+ v = ifh[IFH_LEN_BYTES - (j / 8) - 1];
+
+ if (v & (1 << k))
+			val |= (1ULL << i);
+ }
+
+ return val;
+}
+
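The two helpers above replace the packing() calls with open-coded, MSB-first packing and extraction of injection frame header (IFH) fields. A stand-alone user-space sketch that round-trips a value through the same logic; IFH_LEN_BYTES is assumed to be 28 here purely for illustration, and the field position and width are made up:

#include <stdio.h>
#include <stdint.h>

#define IFH_LEN_BYTES 28	/* assumed for this sketch; real value comes from the driver headers */

/* Same MSB-first packing as lan966x_ifh_set() above. */
static void ifh_set(uint8_t *ifh, uint64_t val, size_t pos, size_t length)
{
	size_t i = 0;

	do {
		size_t p = IFH_LEN_BYTES - (pos + i) / 8 - 1;
		uint8_t v = val >> i & 0xff;

		ifh[p] |= v << ((pos + i) % 8);
		ifh[p - 1] |= v >> (8 - (pos + i) % 8);

		i += 8;
	} while (i < length);
}

/* Same extraction as lan966x_ifh_get() above. */
static uint64_t ifh_get(const uint8_t *ifh, size_t pos, size_t length)
{
	uint64_t val = 0;
	uint8_t v = 0;

	for (size_t i = 0; i < length; i++) {
		size_t j = pos + i;
		size_t k = j % 8;

		if (i == 0 || k == 0)
			v = ifh[IFH_LEN_BYTES - (j / 8) - 1];

		if (v & (1u << k))
			val |= (1ull << i);
	}

	return val;
}

int main(void)
{
	uint8_t ifh[IFH_LEN_BYTES] = { 0 };

	ifh_set(ifh, 0xabc, 100, 12);	/* hypothetical 12-bit field at bit position 100 */
	printf("readback: 0x%llx\n", (unsigned long long)ifh_get(ifh, 100, 12));
	return 0;
}
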
void lan966x_ifh_get_src_port(void *ifh, u64 *src_port)
{
- packing(ifh, src_port, IFH_POS_SRCPORT + IFH_WID_SRCPORT - 1,
- IFH_POS_SRCPORT, IFH_LEN * 4, UNPACK, 0);
+ *src_port = lan966x_ifh_get(ifh, IFH_POS_SRCPORT, IFH_WID_SRCPORT);
}
static void lan966x_ifh_get_len(void *ifh, u64 *len)
{
- packing(ifh, len, IFH_POS_LEN + IFH_WID_LEN - 1,
- IFH_POS_LEN, IFH_LEN * 4, UNPACK, 0);
+ *len = lan966x_ifh_get(ifh, IFH_POS_LEN, IFH_WID_LEN);
}
void lan966x_ifh_get_timestamp(void *ifh, u64 *timestamp)
{
- packing(ifh, timestamp, IFH_POS_TIMESTAMP + IFH_WID_TIMESTAMP - 1,
- IFH_POS_TIMESTAMP, IFH_LEN * 4, UNPACK, 0);
+ *timestamp = lan966x_ifh_get(ifh, IFH_POS_TIMESTAMP, IFH_WID_TIMESTAMP);
}
static irqreturn_t lan966x_xtr_irq_handler(int irq, void *args)
*buf = val;
}
- lan966x_ptp_rxtstamp(lan966x, skb, timestamp);
+ lan966x_ptp_rxtstamp(lan966x, skb, src_port, timestamp);
skb->protocol = eth_type_trans(skb, dev);
if (lan966x->bridge_mask & BIT(src_port)) {
#define SE_IDX_QUEUE 0 /* 0-79 : Queue scheduler elements */
#define SE_IDX_PORT 80 /* 80-89 : Port scheduler elements */
+#define LAN966X_VCAP_CID_IS1_L0 VCAP_CID_INGRESS_L0 /* IS1 lookup 0 */
+#define LAN966X_VCAP_CID_IS1_L1 VCAP_CID_INGRESS_L1 /* IS1 lookup 1 */
+#define LAN966X_VCAP_CID_IS1_L2 VCAP_CID_INGRESS_L2 /* IS1 lookup 2 */
+#define LAN966X_VCAP_CID_IS1_MAX (VCAP_CID_INGRESS_L3 - 1) /* IS1 Max */
+
#define LAN966X_VCAP_CID_IS2_L0 VCAP_CID_INGRESS_STAGE2_L0 /* IS2 lookup 0 */
#define LAN966X_VCAP_CID_IS2_L1 VCAP_CID_INGRESS_STAGE2_L1 /* IS2 lookup 1 */
#define LAN966X_VCAP_CID_IS2_MAX (VCAP_CID_INGRESS_STAGE2_L2 - 1) /* IS2 Max */
VCAP_IS2_PS_IPV6_MAC_ETYPE,
};
+enum vcap_is1_port_sel_other {
+ VCAP_IS1_PS_OTHER_NORMAL,
+ VCAP_IS1_PS_OTHER_7TUPLE,
+ VCAP_IS1_PS_OTHER_DBL_VID,
+ VCAP_IS1_PS_OTHER_DMAC_VID,
+};
+
+enum vcap_is1_port_sel_ipv4 {
+ VCAP_IS1_PS_IPV4_NORMAL,
+ VCAP_IS1_PS_IPV4_7TUPLE,
+ VCAP_IS1_PS_IPV4_5TUPLE_IP4,
+ VCAP_IS1_PS_IPV4_DBL_VID,
+ VCAP_IS1_PS_IPV4_DMAC_VID,
+};
+
+enum vcap_is1_port_sel_ipv6 {
+ VCAP_IS1_PS_IPV6_NORMAL,
+ VCAP_IS1_PS_IPV6_7TUPLE,
+ VCAP_IS1_PS_IPV6_5TUPLE_IP4,
+ VCAP_IS1_PS_IPV6_NORMAL_IP6,
+ VCAP_IS1_PS_IPV6_5TUPLE_IP6,
+ VCAP_IS1_PS_IPV6_DBL_VID,
+ VCAP_IS1_PS_IPV6_DMAC_VID,
+};
+
+enum vcap_is1_port_sel_rt {
+ VCAP_IS1_PS_RT_NORMAL,
+ VCAP_IS1_PS_RT_7TUPLE,
+ VCAP_IS1_PS_RT_DBL_VID,
+ VCAP_IS1_PS_RT_DMAC_VID,
+ VCAP_IS1_PS_RT_FOLLOW_OTHER = 7,
+};
+
struct lan966x_port;
struct lan966x_db {
struct phy *serdes;
struct fwnode_handle *fwnode;
- u8 ptp_cmd;
+ u8 ptp_tx_cmd;
+ bool ptp_rx_cmd;
u16 ts_id;
struct sk_buff_head tx_skbs;
int lan966x_ptp_hwtstamp_set(struct lan966x_port *port, struct ifreq *ifr);
int lan966x_ptp_hwtstamp_get(struct lan966x_port *port, struct ifreq *ifr);
void lan966x_ptp_rxtstamp(struct lan966x *lan966x, struct sk_buff *skb,
- u64 timestamp);
+ u64 src_port, u64 timestamp);
int lan966x_ptp_txtstamp_request(struct lan966x_port *port,
struct sk_buff *skb);
void lan966x_ptp_txtstamp_release(struct lan966x_port *port,
return 0;
}
-static int lan966x_police_del(struct lan966x_port *port,
- u16 pol_idx)
+static void lan966x_police_del(struct lan966x_port *port, u16 pol_idx)
{
struct lan966x *lan966x = port->lan966x;
lan_wr(ANA_POL_PIR_CFG_PIR_RATE_SET(GENMASK(14, 0)) |
ANA_POL_PIR_CFG_PIR_BURST_SET(0),
lan966x, ANA_POL_PIR_CFG(pol_idx));
-
- return 0;
}
static int lan966x_police_validate(struct lan966x_port *port,
struct netlink_ext_ack *extack)
{
struct lan966x *lan966x = port->lan966x;
- int err;
if (port->tc.police_id != police_id) {
NL_SET_ERR_MSG_MOD(extack,
return -EINVAL;
}
- err = lan966x_police_del(port, POL_IDX_PORT + port->chip_port);
- if (err) {
- NL_SET_ERR_MSG_MOD(extack,
- "Failed to add policer to port");
- return err;
- }
+ lan966x_police_del(port, POL_IDX_PORT + port->chip_port);
lan_rmw(ANA_POL_CFG_PORT_POL_ENA_SET(0) |
ANA_POL_CFG_POL_ORDER_SET(POL_ORDER),
switch (cfg.tx_type) {
case HWTSTAMP_TX_ON:
- port->ptp_cmd = IFH_REW_OP_TWO_STEP_PTP;
+ port->ptp_tx_cmd = IFH_REW_OP_TWO_STEP_PTP;
break;
case HWTSTAMP_TX_ONESTEP_SYNC:
- port->ptp_cmd = IFH_REW_OP_ONE_STEP_PTP;
+ port->ptp_tx_cmd = IFH_REW_OP_ONE_STEP_PTP;
break;
case HWTSTAMP_TX_OFF:
- port->ptp_cmd = IFH_REW_OP_NOOP;
+ port->ptp_tx_cmd = IFH_REW_OP_NOOP;
break;
default:
return -ERANGE;
switch (cfg.rx_filter) {
case HWTSTAMP_FILTER_NONE:
+ port->ptp_rx_cmd = false;
break;
case HWTSTAMP_FILTER_ALL:
case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
case HWTSTAMP_FILTER_PTP_V2_SYNC:
case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
case HWTSTAMP_FILTER_NTP_ALL:
+ port->ptp_rx_cmd = true;
cfg.rx_filter = HWTSTAMP_FILTER_ALL;
break;
default:
u8 msgtype;
int type;
- if (port->ptp_cmd == IFH_REW_OP_NOOP)
+ if (port->ptp_tx_cmd == IFH_REW_OP_NOOP)
return IFH_REW_OP_NOOP;
type = ptp_classify_raw(skb);
if (!header)
return IFH_REW_OP_NOOP;
- if (port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
+ if (port->ptp_tx_cmd == IFH_REW_OP_TWO_STEP_PTP)
return IFH_REW_OP_TWO_STEP_PTP;
/* If it is sync and run 1 step then set the correct operation,
phc->index = index;
phc->lan966x = lan966x;
- /* PTP Rx stamping is always enabled. */
- phc->hwtstamp_config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
-
return 0;
}
}
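
Because RX timestamping is no longer enabled unconditionally at PHC init, user space now has to request it through the standard SIOCSHWTSTAMP ioctl served by the hwtstamp handlers above. A minimal sketch; "eth0" is a placeholder interface name, and the driver widens any accepted RX filter to HWTSTAMP_FILTER_ALL:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

int main(void)
{
	struct hwtstamp_config cfg = {
		.tx_type = HWTSTAMP_TX_ON,		/* two-step TX timestamps */
		.rx_filter = HWTSTAMP_FILTER_ALL,	/* enable RX timestamps */
	};
	struct ifreq ifr = { 0 };
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* placeholder name */
	ifr.ifr_data = (char *)&cfg;
	if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
		perror("SIOCSHWTSTAMP");

	close(fd);
	return 0;
}
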
void lan966x_ptp_rxtstamp(struct lan966x *lan966x, struct sk_buff *skb,
- u64 timestamp)
+ u64 src_port, u64 timestamp)
{
struct skb_shared_hwtstamps *shhwtstamps;
struct lan966x_phc *phc;
struct timespec64 ts;
u64 full_ts_in_ns;
- if (!lan966x->ptp)
+ if (!lan966x->ptp ||
+ !lan966x->ports[src_port]->ptp_rx_cmd)
return;
phc = &lan966x->phc[LAN966X_PHC_PORT];
#define ANA_DROP_CFG_DROP_MC_SMAC_ENA_GET(x)\
FIELD_GET(ANA_DROP_CFG_DROP_MC_SMAC_ENA, x)
+/* ANA:PORT:VCAP_CFG */
+#define ANA_VCAP_CFG(g) __REG(TARGET_ANA, 0, 1, 28672, g, 9, 128, 12, 0, 1, 4)
+
+#define ANA_VCAP_CFG_S1_ENA BIT(14)
+#define ANA_VCAP_CFG_S1_ENA_SET(x)\
+ FIELD_PREP(ANA_VCAP_CFG_S1_ENA, x)
+#define ANA_VCAP_CFG_S1_ENA_GET(x)\
+ FIELD_GET(ANA_VCAP_CFG_S1_ENA, x)
+
+/* ANA:PORT:VCAP_S1_KEY_CFG */
+#define ANA_VCAP_S1_CFG(g, r) __REG(TARGET_ANA, 0, 1, 28672, g, 9, 128, 16, r, 3, 4)
+
+#define ANA_VCAP_S1_CFG_KEY_RT_CFG GENMASK(11, 9)
+#define ANA_VCAP_S1_CFG_KEY_RT_CFG_SET(x)\
+ FIELD_PREP(ANA_VCAP_S1_CFG_KEY_RT_CFG, x)
+#define ANA_VCAP_S1_CFG_KEY_RT_CFG_GET(x)\
+ FIELD_GET(ANA_VCAP_S1_CFG_KEY_RT_CFG, x)
+
+#define ANA_VCAP_S1_CFG_KEY_IP6_CFG GENMASK(8, 6)
+#define ANA_VCAP_S1_CFG_KEY_IP6_CFG_SET(x)\
+ FIELD_PREP(ANA_VCAP_S1_CFG_KEY_IP6_CFG, x)
+#define ANA_VCAP_S1_CFG_KEY_IP6_CFG_GET(x)\
+ FIELD_GET(ANA_VCAP_S1_CFG_KEY_IP6_CFG, x)
+
+#define ANA_VCAP_S1_CFG_KEY_IP4_CFG GENMASK(5, 3)
+#define ANA_VCAP_S1_CFG_KEY_IP4_CFG_SET(x)\
+ FIELD_PREP(ANA_VCAP_S1_CFG_KEY_IP4_CFG, x)
+#define ANA_VCAP_S1_CFG_KEY_IP4_CFG_GET(x)\
+ FIELD_GET(ANA_VCAP_S1_CFG_KEY_IP4_CFG, x)
+
+#define ANA_VCAP_S1_CFG_KEY_OTHER_CFG GENMASK(2, 0)
+#define ANA_VCAP_S1_CFG_KEY_OTHER_CFG_SET(x)\
+ FIELD_PREP(ANA_VCAP_S1_CFG_KEY_OTHER_CFG, x)
+#define ANA_VCAP_S1_CFG_KEY_OTHER_CFG_GET(x)\
+ FIELD_GET(ANA_VCAP_S1_CFG_KEY_OTHER_CFG, x)
+
/* ANA:PORT:VCAP_S2_CFG */
#define ANA_VCAP_S2_CFG(g) __REG(TARGET_ANA, 0, 1, 28672, g, 9, 128, 28, 0, 1, 4)
#include "vcap_api_client.h"
#include "vcap_tc.h"
-static bool lan966x_tc_is_known_etype(u16 etype)
+static bool lan966x_tc_is_known_etype(struct vcap_tc_flower_parse_usage *st,
+ u16 etype)
{
- switch (etype) {
- case ETH_P_ALL:
- case ETH_P_ARP:
- case ETH_P_IP:
- case ETH_P_IPV6:
- return true;
+ switch (st->admin->vtype) {
+ case VCAP_TYPE_IS1:
+ switch (etype) {
+ case ETH_P_ALL:
+ case ETH_P_ARP:
+ case ETH_P_IP:
+ case ETH_P_IPV6:
+ return true;
+ }
+ break;
+ case VCAP_TYPE_IS2:
+ switch (etype) {
+ case ETH_P_ALL:
+ case ETH_P_ARP:
+ case ETH_P_IP:
+ case ETH_P_IPV6:
+ case ETH_P_SNAP:
+ case ETH_P_802_2:
+ return true;
+ }
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack,
+ "VCAP type not supported");
+ return false;
}
return false;
flow_rule_match_basic(st->frule, &match);
if (match.mask->n_proto) {
st->l3_proto = be16_to_cpu(match.key->n_proto);
- if (!lan966x_tc_is_known_etype(st->l3_proto)) {
+ if (!lan966x_tc_is_known_etype(st, st->l3_proto)) {
err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE,
st->l3_proto, ~0);
if (err)
VCAP_BIT_1);
if (err)
goto out;
+ } else if (st->l3_proto == ETH_P_IPV6 &&
+ st->admin->vtype == VCAP_TYPE_IS1) {
+ /* Don't set any keys in this case */
+ } else if (st->l3_proto == ETH_P_SNAP &&
+ st->admin->vtype == VCAP_TYPE_IS1) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_ETYPE_LEN_IS,
+ VCAP_BIT_0);
+ if (err)
+ goto out;
+
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_IP_SNAP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ } else if (st->admin->vtype == VCAP_TYPE_IS1) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_ETYPE_LEN_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+
+ err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE,
+ st->l3_proto, ~0);
+ if (err)
+ goto out;
}
}
if (match.mask->ip_proto) {
st->l4_proto = match.key->ip_proto;
if (st->l4_proto == IPPROTO_TCP) {
+ if (st->admin->vtype == VCAP_TYPE_IS1) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_TCP_UDP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ }
+
err = vcap_rule_add_key_bit(st->vrule,
VCAP_KF_TCP_IS,
VCAP_BIT_1);
if (err)
goto out;
} else if (st->l4_proto == IPPROTO_UDP) {
+ if (st->admin->vtype == VCAP_TYPE_IS1) {
+ err = vcap_rule_add_key_bit(st->vrule,
+ VCAP_KF_TCP_UDP_IS,
+ VCAP_BIT_1);
+ if (err)
+ goto out;
+ }
+
err = vcap_rule_add_key_bit(st->vrule,
VCAP_KF_TCP_IS,
VCAP_BIT_0);
}
static int
+lan966x_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st)
+{
+ if (st->admin->vtype != VCAP_TYPE_IS1) {
+ NL_SET_ERR_MSG_MOD(st->fco->common.extack,
+ "cvlan not supported in this VCAP");
+ return -EINVAL;
+ }
+
+ return vcap_tc_flower_handler_cvlan_usage(st);
+}
+
+static int
lan966x_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st)
{
- return vcap_tc_flower_handler_vlan_usage(st,
- VCAP_KF_8021Q_VID_CLS,
- VCAP_KF_8021Q_PCP_CLS);
+ enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS;
+ enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS;
+
+ if (st->admin->vtype == VCAP_TYPE_IS1) {
+ vid_key = VCAP_KF_8021Q_VID0;
+ pcp_key = VCAP_KF_8021Q_PCP0;
+ }
+
+ return vcap_tc_flower_handler_vlan_usage(st, vid_key, pcp_key);
}
static int
[FLOW_DISSECTOR_KEY_CONTROL] = lan966x_tc_flower_handler_control_usage,
[FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage,
[FLOW_DISSECTOR_KEY_BASIC] = lan966x_tc_flower_handler_basic_usage,
+ [FLOW_DISSECTOR_KEY_CVLAN] = lan966x_tc_flower_handler_cvlan_usage,
[FLOW_DISSECTOR_KEY_VLAN] = lan966x_tc_flower_handler_vlan_usage,
[FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage,
[FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage,
.fco = f,
.vrule = vrule,
.l3_proto = ETH_P_ALL,
+ .admin = admin,
};
int err = 0;
return 0;
}
+/* Add the actionset that is the default for the VCAP type */
+static int lan966x_tc_set_actionset(struct vcap_admin *admin,
+ struct vcap_rule *vrule)
+{
+ enum vcap_actionfield_set aset;
+ int err = 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS1:
+ aset = VCAP_AFS_S1;
+ break;
+ case VCAP_TYPE_IS2:
+ aset = VCAP_AFS_BASE_TYPE;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Do not overwrite any current actionset */
+ if (vrule->actionset == VCAP_AFS_NO_VALUE)
+ err = vcap_set_rule_set_actionset(vrule, aset);
+
+ return err;
+}
+
+static int lan966x_tc_add_rule_link_target(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ int target_cid)
+{
+ int link_val = target_cid % VCAP_CID_LOOKUP_SIZE;
+ int err;
+
+ if (!link_val)
+ return 0;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS1:
+ /* Choose IS1 specific NXT_IDX key (for chaining rules from IS1) */
+ err = vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX_SEL,
+ 1, ~0);
+ if (err)
+ return err;
+
+ return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_GEN_IDX,
+ link_val, ~0);
+ case VCAP_TYPE_IS2:
+ /* Add IS2 specific PAG key (for chaining rules from IS1) */
+ return vcap_rule_add_key_u32(vrule, VCAP_KF_LOOKUP_PAG,
+ link_val, ~0);
+ default:
+ break;
+ }
+ return 0;
+}
+
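The link value a chained rule has to match is simply the target chain index modulo the per-lookup chain spacing, added as a GEN_IDX key in IS1 or a PAG key in IS2. A small illustrative sketch; LOOKUP_SIZE stands in for VCAP_CID_LOOKUP_SIZE and the chain numbers are hypothetical:

#include <stdio.h>

#define LOOKUP_SIZE 100000	/* stand-in for VCAP_CID_LOOKUP_SIZE; assumed value */

/* Mirrors the arithmetic in lan966x_tc_add_rule_link_target(): a rule on a
 * non-base chain must also match the link value programmed by the preceding
 * "goto chain" rule.
 */
static int link_value(int target_cid)
{
	return target_cid % LOOKUP_SIZE;
}

int main(void)
{
	int base_chain = 8000000;		/* hypothetical lookup base chain */
	int user_chain = base_chain + 12;	/* hypothetical chained rule */

	printf("chain %d -> link value %d (0: base chain, no link key)\n",
	       base_chain, link_value(base_chain));
	printf("chain %d -> link value %d\n",
	       user_chain, link_value(user_chain));
	return 0;
}
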
+static int lan966x_tc_add_rule_link(struct vcap_control *vctrl,
+ struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ struct flow_cls_offload *f,
+ int to_cid)
+{
+ struct vcap_admin *to_admin = vcap_find_admin(vctrl, to_cid);
+ int diff, err = 0;
+
+ if (!to_admin) {
+ NL_SET_ERR_MSG_MOD(f->common.extack,
+ "Unknown destination chain");
+ return -EINVAL;
+ }
+
+ diff = vcap_chain_offset(vctrl, f->common.chain_index, to_cid);
+ if (!diff)
+ return 0;
+
+ /* Between IS1 and IS2 the PAG value is used */
+ if (admin->vtype == VCAP_TYPE_IS1 && to_admin->vtype == VCAP_TYPE_IS2) {
+ /* This works for IS1->IS2 */
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_PAG_VAL, diff);
+ if (err)
+ return err;
+
+ err = vcap_rule_add_action_u32(vrule, VCAP_AF_PAG_OVERRIDE_MASK,
+ 0xff);
+ if (err)
+ return err;
+ } else {
+ NL_SET_ERR_MSG_MOD(f->common.extack,
+ "Unsupported chain destination");
+ return -EOPNOTSUPP;
+ }
+
+ return err;
+}
+
static int lan966x_tc_flower_add(struct lan966x_port *port,
struct flow_cls_offload *f,
struct vcap_admin *admin,
if (err)
goto out;
+ err = lan966x_tc_add_rule_link_target(admin, vrule,
+ f->common.chain_index);
+ if (err)
+ goto out;
+
frule = flow_cls_offload_flow_rule(f);
flow_action_for_each(idx, act, &frule->action) {
switch (act->id) {
case FLOW_ACTION_TRAP:
+ if (admin->vtype != VCAP_TYPE_IS2) {
+ NL_SET_ERR_MSG_MOD(f->common.extack,
+ "Trap action not supported in this VCAP");
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
err = vcap_rule_add_action_bit(vrule,
VCAP_AF_CPU_COPY_ENA,
VCAP_BIT_1);
break;
case FLOW_ACTION_GOTO:
+ err = lan966x_tc_set_actionset(admin, vrule);
+ if (err)
+ goto out;
+
+ err = lan966x_tc_add_rule_link(port->lan966x->vcap_ctrl,
+ admin, vrule,
+ f, act->chain_index);
+ if (err)
+ goto out;
+
break;
default:
NL_SET_ERR_MSG_MOD(f->common.extack,
#include "lan966x_vcap_ag_api.h"
/* keyfields */
+static const struct vcap_field is1_normal_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 1,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 3,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 12,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 19,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 31,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 32,
+ .width = 3,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 35,
+ .width = 48,
+ },
+ [VCAP_KF_ETYPE_LEN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 83,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 84,
+ .width = 16,
+ },
+ [VCAP_KF_IP_SNAP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 100,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 101,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 102,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAG_OFS_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 103,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 104,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 105,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 111,
+ .width = 32,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 143,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 144,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 145,
+ .width = 16,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 161,
+ .width = 8,
+ },
+};
+
+static const struct vcap_field is1_5tuple_ip4_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 1,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 3,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 12,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 19,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 31,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 32,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 35,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 36,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 48,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 49,
+ .width = 3,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 52,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 53,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAG_OFS_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 54,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 55,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 56,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP4_DIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 62,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP4_SIP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 94,
+ .width = 32,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 126,
+ .width = 8,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 134,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 135,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 136,
+ .width = 8,
+ },
+ [VCAP_KF_IP_PAYLOAD_5TUPLE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 144,
+ .width = 32,
+ },
+};
+
+static const struct vcap_field is1_normal_ip6_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 20,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 32,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 33,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 36,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 37,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 49,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 50,
+ .width = 3,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 53,
+ .width = 48,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 101,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 107,
+ .width = 128,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 235,
+ .width = 8,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 243,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 244,
+ .width = 8,
+ },
+ [VCAP_KF_IP_PAYLOAD_S1_IP6] = {
+ .type = VCAP_FIELD_U112,
+ .offset = 252,
+ .width = 112,
+ },
+};
+
+static const struct vcap_field is1_7tuple_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 20,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 32,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 33,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 36,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 37,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 49,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 50,
+ .width = 3,
+ },
+ [VCAP_KF_L2_DMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 53,
+ .width = 48,
+ },
+ [VCAP_KF_L2_SMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 101,
+ .width = 48,
+ },
+ [VCAP_KF_ETYPE_LEN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 149,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 150,
+ .width = 16,
+ },
+ [VCAP_KF_IP_SNAP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 166,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 167,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 168,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAG_OFS_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 169,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 170,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 171,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP6_DIP_MSB] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 177,
+ .width = 16,
+ },
+ [VCAP_KF_L3_IP6_DIP] = {
+ .type = VCAP_FIELD_U64,
+ .offset = 193,
+ .width = 64,
+ },
+ [VCAP_KF_L3_IP6_SIP_MSB] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 257,
+ .width = 16,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U64,
+ .offset = 273,
+ .width = 64,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 337,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 338,
+ .width = 1,
+ },
+ [VCAP_KF_L4_SPORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 339,
+ .width = 16,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 355,
+ .width = 8,
+ },
+};
+
+static const struct vcap_field is1_5tuple_ip6_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 20,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 32,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 33,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 36,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 37,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 49,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 50,
+ .width = 3,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 53,
+ .width = 6,
+ },
+ [VCAP_KF_L3_IP6_DIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 59,
+ .width = 128,
+ },
+ [VCAP_KF_L3_IP6_SIP] = {
+ .type = VCAP_FIELD_U128,
+ .offset = 187,
+ .width = 128,
+ },
+ [VCAP_KF_L3_IP_PROTO] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 315,
+ .width = 8,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 323,
+ .width = 1,
+ },
+ [VCAP_KF_L4_RNG] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 324,
+ .width = 8,
+ },
+ [VCAP_KF_IP_PAYLOAD_5TUPLE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 332,
+ .width = 32,
+ },
+};
+
+static const struct vcap_field is1_dbl_vid_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 9,
+ },
+ [VCAP_KF_L2_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_L2_BC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_IP_MC_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 17,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 18,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 19,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 20,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 32,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 33,
+ .width = 3,
+ },
+ [VCAP_KF_8021Q_TPID1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 36,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 37,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI1] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 49,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP1] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 50,
+ .width = 3,
+ },
+ [VCAP_KF_ETYPE_LEN_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 53,
+ .width = 1,
+ },
+ [VCAP_KF_ETYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 54,
+ .width = 16,
+ },
+ [VCAP_KF_IP_SNAP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 70,
+ .width = 1,
+ },
+ [VCAP_KF_IP4_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 71,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAGMENT] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 72,
+ .width = 1,
+ },
+ [VCAP_KF_L3_FRAG_OFS_GT0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 73,
+ .width = 1,
+ },
+ [VCAP_KF_L3_OPTIONS_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 74,
+ .width = 1,
+ },
+ [VCAP_KF_L3_DSCP] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 75,
+ .width = 6,
+ },
+ [VCAP_KF_TCP_UDP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 81,
+ .width = 1,
+ },
+ [VCAP_KF_TCP_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 82,
+ .width = 1,
+ },
+};
+
+static const struct vcap_field is1_rt_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_FIRST_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 2,
+ .width = 1,
+ },
+ [VCAP_KF_IF_IGR_PORT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 3,
+ .width = 3,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 6,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 7,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 8,
+ .width = 1,
+ },
+ [VCAP_KF_L2_MAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 9,
+ .width = 48,
+ },
+ [VCAP_KF_RT_VLAN_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 57,
+ .width = 3,
+ },
+ [VCAP_KF_RT_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 60,
+ .width = 2,
+ },
+ [VCAP_KF_RT_FRMID] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 62,
+ .width = 32,
+ },
+};
+
+static const struct vcap_field is1_dmac_vid_keyfield[] = {
+ [VCAP_KF_TYPE] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 0,
+ .width = 2,
+ },
+ [VCAP_KF_LOOKUP_INDEX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 2,
+ },
+ [VCAP_KF_IF_IGR_PORT_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 4,
+ .width = 9,
+ },
+ [VCAP_KF_8021CB_R_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 14,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 15,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_TPID0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 16,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_VID0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 17,
+ .width = 12,
+ },
+ [VCAP_KF_8021Q_DEI0] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 29,
+ .width = 1,
+ },
+ [VCAP_KF_8021Q_PCP0] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 30,
+ .width = 3,
+ },
+ [VCAP_KF_L2_DMAC] = {
+ .type = VCAP_FIELD_U48,
+ .offset = 33,
+ .width = 48,
+ },
+};
+
static const struct vcap_field is2_mac_etype_keyfield[] = {
[VCAP_KF_TYPE] = {
.type = VCAP_FIELD_U32,
};
/* keyfield_set */
+static const struct vcap_set is1_keyfield_set[] = {
+ [VCAP_KFS_NORMAL] = {
+ .type_id = 0,
+ .sw_per_item = 2,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_5TUPLE_IP4] = {
+ .type_id = 1,
+ .sw_per_item = 2,
+ .sw_cnt = 2,
+ },
+ [VCAP_KFS_NORMAL_IP6] = {
+ .type_id = 0,
+ .sw_per_item = 4,
+ .sw_cnt = 1,
+ },
+ [VCAP_KFS_7TUPLE] = {
+ .type_id = 1,
+ .sw_per_item = 4,
+ .sw_cnt = 1,
+ },
+ [VCAP_KFS_5TUPLE_IP6] = {
+ .type_id = 2,
+ .sw_per_item = 4,
+ .sw_cnt = 1,
+ },
+ [VCAP_KFS_DBL_VID] = {
+ .type_id = 0,
+ .sw_per_item = 1,
+ .sw_cnt = 4,
+ },
+ [VCAP_KFS_RT] = {
+ .type_id = 1,
+ .sw_per_item = 1,
+ .sw_cnt = 4,
+ },
+ [VCAP_KFS_DMAC_VID] = {
+ .type_id = 2,
+ .sw_per_item = 1,
+ .sw_cnt = 4,
+ },
+};
+
static const struct vcap_set is2_keyfield_set[] = {
[VCAP_KFS_MAC_ETYPE] = {
.type_id = 0,
};
/* keyfield_set map */
+static const struct vcap_field *is1_keyfield_set_map[] = {
+ [VCAP_KFS_NORMAL] = is1_normal_keyfield,
+ [VCAP_KFS_5TUPLE_IP4] = is1_5tuple_ip4_keyfield,
+ [VCAP_KFS_NORMAL_IP6] = is1_normal_ip6_keyfield,
+ [VCAP_KFS_7TUPLE] = is1_7tuple_keyfield,
+ [VCAP_KFS_5TUPLE_IP6] = is1_5tuple_ip6_keyfield,
+ [VCAP_KFS_DBL_VID] = is1_dbl_vid_keyfield,
+ [VCAP_KFS_RT] = is1_rt_keyfield,
+ [VCAP_KFS_DMAC_VID] = is1_dmac_vid_keyfield,
+};
+
static const struct vcap_field *is2_keyfield_set_map[] = {
[VCAP_KFS_MAC_ETYPE] = is2_mac_etype_keyfield,
[VCAP_KFS_MAC_LLC] = is2_mac_llc_keyfield,
};
/* keyfield_set map sizes */
+static int is1_keyfield_set_map_size[] = {
+ [VCAP_KFS_NORMAL] = ARRAY_SIZE(is1_normal_keyfield),
+ [VCAP_KFS_5TUPLE_IP4] = ARRAY_SIZE(is1_5tuple_ip4_keyfield),
+ [VCAP_KFS_NORMAL_IP6] = ARRAY_SIZE(is1_normal_ip6_keyfield),
+ [VCAP_KFS_7TUPLE] = ARRAY_SIZE(is1_7tuple_keyfield),
+ [VCAP_KFS_5TUPLE_IP6] = ARRAY_SIZE(is1_5tuple_ip6_keyfield),
+ [VCAP_KFS_DBL_VID] = ARRAY_SIZE(is1_dbl_vid_keyfield),
+ [VCAP_KFS_RT] = ARRAY_SIZE(is1_rt_keyfield),
+ [VCAP_KFS_DMAC_VID] = ARRAY_SIZE(is1_dmac_vid_keyfield),
+};
+
static int is2_keyfield_set_map_size[] = {
[VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(is2_mac_etype_keyfield),
[VCAP_KFS_MAC_LLC] = ARRAY_SIZE(is2_mac_llc_keyfield),
};
/* actionfields */
+static const struct vcap_field is1_s1_actionfield[] = {
+ [VCAP_AF_TYPE] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 0,
+ .width = 1,
+ },
+ [VCAP_AF_DSCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 1,
+ .width = 1,
+ },
+ [VCAP_AF_DSCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 2,
+ .width = 6,
+ },
+ [VCAP_AF_QOS_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 8,
+ .width = 1,
+ },
+ [VCAP_AF_QOS_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 9,
+ .width = 3,
+ },
+ [VCAP_AF_DP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 12,
+ .width = 1,
+ },
+ [VCAP_AF_DP_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 13,
+ .width = 1,
+ },
+ [VCAP_AF_PAG_OVERRIDE_MASK] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 14,
+ .width = 8,
+ },
+ [VCAP_AF_PAG_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 22,
+ .width = 8,
+ },
+ [VCAP_AF_ISDX_REPLACE_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 30,
+ .width = 1,
+ },
+ [VCAP_AF_ISDX_ADD_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 31,
+ .width = 8,
+ },
+ [VCAP_AF_VID_REPLACE_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 39,
+ .width = 1,
+ },
+ [VCAP_AF_VID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 40,
+ .width = 12,
+ },
+ [VCAP_AF_PCP_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 67,
+ .width = 1,
+ },
+ [VCAP_AF_PCP_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 68,
+ .width = 3,
+ },
+ [VCAP_AF_DEI_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 71,
+ .width = 1,
+ },
+ [VCAP_AF_DEI_VAL] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 72,
+ .width = 1,
+ },
+ [VCAP_AF_VLAN_POP_CNT_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 73,
+ .width = 1,
+ },
+ [VCAP_AF_VLAN_POP_CNT] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 74,
+ .width = 2,
+ },
+ [VCAP_AF_CUSTOM_ACE_TYPE_ENA] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 76,
+ .width = 4,
+ },
+ [VCAP_AF_SFID_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 80,
+ .width = 1,
+ },
+ [VCAP_AF_SFID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 81,
+ .width = 8,
+ },
+ [VCAP_AF_SGID_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 89,
+ .width = 1,
+ },
+ [VCAP_AF_SGID_VAL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 90,
+ .width = 8,
+ },
+ [VCAP_AF_POLICE_ENA] = {
+ .type = VCAP_FIELD_BIT,
+ .offset = 98,
+ .width = 1,
+ },
+ [VCAP_AF_POLICE_IDX] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 99,
+ .width = 9,
+ },
+ [VCAP_AF_OAM_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 108,
+ .width = 3,
+ },
+ [VCAP_AF_MRP_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 111,
+ .width = 2,
+ },
+ [VCAP_AF_DLR_SEL] = {
+ .type = VCAP_FIELD_U32,
+ .offset = 113,
+ .width = 2,
+ },
+};
+
static const struct vcap_field is2_base_type_actionfield[] = {
[VCAP_AF_HIT_ME_ONCE] = {
.type = VCAP_FIELD_BIT,
};
/* actionfield_set */
+static const struct vcap_set is1_actionfield_set[] = {
+ [VCAP_AFS_S1] = {
+ .type_id = 0,
+ .sw_per_item = 1,
+ .sw_cnt = 4,
+ },
+};
+
static const struct vcap_set is2_actionfield_set[] = {
[VCAP_AFS_BASE_TYPE] = {
.type_id = -1,
};
/* actionfield_set map */
+static const struct vcap_field *is1_actionfield_set_map[] = {
+ [VCAP_AFS_S1] = is1_s1_actionfield,
+};
+
static const struct vcap_field *is2_actionfield_set_map[] = {
[VCAP_AFS_BASE_TYPE] = is2_base_type_actionfield,
[VCAP_AFS_SMAC_SIP] = is2_smac_sip_actionfield,
};
/* actionfield_set map size */
+static int is1_actionfield_set_map_size[] = {
+ [VCAP_AFS_S1] = ARRAY_SIZE(is1_s1_actionfield),
+};
+
static int is2_actionfield_set_map_size[] = {
[VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(is2_base_type_actionfield),
[VCAP_AFS_SMAC_SIP] = ARRAY_SIZE(is2_smac_sip_actionfield),
};
/* Type Groups */
+static const struct vcap_typegroup is1_x4_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 3,
+ .value = 4,
+ },
+ {
+ .offset = 96,
+ .width = 1,
+ .value = 0,
+ },
+ {
+ .offset = 192,
+ .width = 2,
+ .value = 0,
+ },
+ {
+ .offset = 288,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is1_x2_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 2,
+ .value = 2,
+ },
+ {
+ .offset = 96,
+ .width = 1,
+ .value = 0,
+ },
+ {}
+};
+
+static const struct vcap_typegroup is1_x1_keyfield_set_typegroups[] = {
+ {
+ .offset = 0,
+ .width = 1,
+ .value = 1,
+ },
+ {}
+};
+
static const struct vcap_typegroup is2_x4_keyfield_set_typegroups[] = {
{
.offset = 0,
{}
};
+static const struct vcap_typegroup *is1_keyfield_set_typegroups[] = {
+ [4] = is1_x4_keyfield_set_typegroups,
+ [2] = is1_x2_keyfield_set_typegroups,
+ [1] = is1_x1_keyfield_set_typegroups,
+ [5] = NULL,
+};
+
static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = {
[4] = is2_x4_keyfield_set_typegroups,
[2] = is2_x2_keyfield_set_typegroups,
[5] = NULL,
};
+static const struct vcap_typegroup is1_x1_actionfield_set_typegroups[] = {
+ {}
+};
+
static const struct vcap_typegroup is2_x2_actionfield_set_typegroups[] = {
{
.offset = 0,
{}
};
+static const struct vcap_typegroup *is1_actionfield_set_typegroups[] = {
+ [1] = is1_x1_actionfield_set_typegroups,
+ [5] = NULL,
+};
+
static const struct vcap_typegroup *is2_actionfield_set_typegroups[] = {
[2] = is2_x2_actionfield_set_typegroups,
[1] = is2_x1_actionfield_set_typegroups,
/* Keyfieldset names */
static const char * const vcap_keyfield_set_names[] = {
[VCAP_KFS_NO_VALUE] = "(None)",
+ [VCAP_KFS_5TUPLE_IP4] = "VCAP_KFS_5TUPLE_IP4",
+ [VCAP_KFS_5TUPLE_IP6] = "VCAP_KFS_5TUPLE_IP6",
+ [VCAP_KFS_7TUPLE] = "VCAP_KFS_7TUPLE",
[VCAP_KFS_ARP] = "VCAP_KFS_ARP",
+ [VCAP_KFS_DBL_VID] = "VCAP_KFS_DBL_VID",
+ [VCAP_KFS_DMAC_VID] = "VCAP_KFS_DMAC_VID",
+ [VCAP_KFS_ETAG] = "VCAP_KFS_ETAG",
[VCAP_KFS_IP4_OTHER] = "VCAP_KFS_IP4_OTHER",
[VCAP_KFS_IP4_TCP_UDP] = "VCAP_KFS_IP4_TCP_UDP",
+ [VCAP_KFS_IP4_VID] = "VCAP_KFS_IP4_VID",
[VCAP_KFS_IP6_OTHER] = "VCAP_KFS_IP6_OTHER",
[VCAP_KFS_IP6_STD] = "VCAP_KFS_IP6_STD",
[VCAP_KFS_IP6_TCP_UDP] = "VCAP_KFS_IP6_TCP_UDP",
+ [VCAP_KFS_IP6_VID] = "VCAP_KFS_IP6_VID",
+ [VCAP_KFS_IP_7TUPLE] = "VCAP_KFS_IP_7TUPLE",
+ [VCAP_KFS_ISDX] = "VCAP_KFS_ISDX",
+ [VCAP_KFS_LL_FULL] = "VCAP_KFS_LL_FULL",
[VCAP_KFS_MAC_ETYPE] = "VCAP_KFS_MAC_ETYPE",
[VCAP_KFS_MAC_LLC] = "VCAP_KFS_MAC_LLC",
[VCAP_KFS_MAC_SNAP] = "VCAP_KFS_MAC_SNAP",
+ [VCAP_KFS_NORMAL] = "VCAP_KFS_NORMAL",
+ [VCAP_KFS_NORMAL_5TUPLE_IP4] = "VCAP_KFS_NORMAL_5TUPLE_IP4",
+ [VCAP_KFS_NORMAL_7TUPLE] = "VCAP_KFS_NORMAL_7TUPLE",
+ [VCAP_KFS_NORMAL_IP6] = "VCAP_KFS_NORMAL_IP6",
[VCAP_KFS_OAM] = "VCAP_KFS_OAM",
+ [VCAP_KFS_PURE_5TUPLE_IP4] = "VCAP_KFS_PURE_5TUPLE_IP4",
+ [VCAP_KFS_RT] = "VCAP_KFS_RT",
[VCAP_KFS_SMAC_SIP4] = "VCAP_KFS_SMAC_SIP4",
[VCAP_KFS_SMAC_SIP6] = "VCAP_KFS_SMAC_SIP6",
};
static const char * const vcap_actionfield_set_names[] = {
[VCAP_AFS_NO_VALUE] = "(None)",
[VCAP_AFS_BASE_TYPE] = "VCAP_AFS_BASE_TYPE",
+ [VCAP_AFS_CLASSIFICATION] = "VCAP_AFS_CLASSIFICATION",
+ [VCAP_AFS_CLASS_REDUCED] = "VCAP_AFS_CLASS_REDUCED",
+ [VCAP_AFS_FULL] = "VCAP_AFS_FULL",
+ [VCAP_AFS_S1] = "VCAP_AFS_S1",
[VCAP_AFS_SMAC_SIP] = "VCAP_AFS_SMAC_SIP",
};
/* Keyfield names */
static const char * const vcap_keyfield_names[] = {
[VCAP_KF_NO_VALUE] = "(None)",
+ [VCAP_KF_8021BR_ECID_BASE] = "8021BR_ECID_BASE",
+ [VCAP_KF_8021BR_ECID_EXT] = "8021BR_ECID_EXT",
+ [VCAP_KF_8021BR_E_TAGGED] = "8021BR_E_TAGGED",
+ [VCAP_KF_8021BR_GRP] = "8021BR_GRP",
+ [VCAP_KF_8021BR_IGR_ECID_BASE] = "8021BR_IGR_ECID_BASE",
+ [VCAP_KF_8021BR_IGR_ECID_EXT] = "8021BR_IGR_ECID_EXT",
+ [VCAP_KF_8021CB_R_TAGGED_IS] = "8021CB_R_TAGGED_IS",
+ [VCAP_KF_8021Q_DEI0] = "8021Q_DEI0",
+ [VCAP_KF_8021Q_DEI1] = "8021Q_DEI1",
+ [VCAP_KF_8021Q_DEI2] = "8021Q_DEI2",
[VCAP_KF_8021Q_DEI_CLS] = "8021Q_DEI_CLS",
+ [VCAP_KF_8021Q_PCP0] = "8021Q_PCP0",
+ [VCAP_KF_8021Q_PCP1] = "8021Q_PCP1",
+ [VCAP_KF_8021Q_PCP2] = "8021Q_PCP2",
[VCAP_KF_8021Q_PCP_CLS] = "8021Q_PCP_CLS",
+ [VCAP_KF_8021Q_TPID0] = "8021Q_TPID0",
+ [VCAP_KF_8021Q_TPID1] = "8021Q_TPID1",
+ [VCAP_KF_8021Q_TPID2] = "8021Q_TPID2",
+ [VCAP_KF_8021Q_VID0] = "8021Q_VID0",
+ [VCAP_KF_8021Q_VID1] = "8021Q_VID1",
+ [VCAP_KF_8021Q_VID2] = "8021Q_VID2",
[VCAP_KF_8021Q_VID_CLS] = "8021Q_VID_CLS",
+ [VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS] = "8021Q_VLAN_DBL_TAGGED_IS",
[VCAP_KF_8021Q_VLAN_TAGGED_IS] = "8021Q_VLAN_TAGGED_IS",
+ [VCAP_KF_8021Q_VLAN_TAGS] = "8021Q_VLAN_TAGS",
+ [VCAP_KF_ACL_GRP_ID] = "ACL_GRP_ID",
[VCAP_KF_ARP_ADDR_SPACE_OK_IS] = "ARP_ADDR_SPACE_OK_IS",
[VCAP_KF_ARP_LEN_OK_IS] = "ARP_LEN_OK_IS",
[VCAP_KF_ARP_OPCODE] = "ARP_OPCODE",
[VCAP_KF_ARP_PROTO_SPACE_OK_IS] = "ARP_PROTO_SPACE_OK_IS",
[VCAP_KF_ARP_SENDER_MATCH_IS] = "ARP_SENDER_MATCH_IS",
[VCAP_KF_ARP_TGT_MATCH_IS] = "ARP_TGT_MATCH_IS",
+ [VCAP_KF_COSID_CLS] = "COSID_CLS",
+ [VCAP_KF_ES0_ISDX_KEY_ENA] = "ES0_ISDX_KEY_ENA",
[VCAP_KF_ETYPE] = "ETYPE",
+ [VCAP_KF_ETYPE_LEN_IS] = "ETYPE_LEN_IS",
[VCAP_KF_HOST_MATCH] = "HOST_MATCH",
+ [VCAP_KF_IF_EGR_PORT_MASK] = "IF_EGR_PORT_MASK",
+ [VCAP_KF_IF_EGR_PORT_MASK_RNG] = "IF_EGR_PORT_MASK_RNG",
[VCAP_KF_IF_IGR_PORT] = "IF_IGR_PORT",
[VCAP_KF_IF_IGR_PORT_MASK] = "IF_IGR_PORT_MASK",
+ [VCAP_KF_IF_IGR_PORT_MASK_L3] = "IF_IGR_PORT_MASK_L3",
+ [VCAP_KF_IF_IGR_PORT_MASK_RNG] = "IF_IGR_PORT_MASK_RNG",
+ [VCAP_KF_IF_IGR_PORT_MASK_SEL] = "IF_IGR_PORT_MASK_SEL",
+ [VCAP_KF_IF_IGR_PORT_SEL] = "IF_IGR_PORT_SEL",
[VCAP_KF_IP4_IS] = "IP4_IS",
+ [VCAP_KF_IP_MC_IS] = "IP_MC_IS",
+ [VCAP_KF_IP_PAYLOAD_5TUPLE] = "IP_PAYLOAD_5TUPLE",
+ [VCAP_KF_IP_PAYLOAD_S1_IP6] = "IP_PAYLOAD_S1_IP6",
+ [VCAP_KF_IP_SNAP_IS] = "IP_SNAP_IS",
+ [VCAP_KF_ISDX_CLS] = "ISDX_CLS",
[VCAP_KF_ISDX_GT0_IS] = "ISDX_GT0_IS",
[VCAP_KF_L2_BC_IS] = "L2_BC_IS",
[VCAP_KF_L2_DMAC] = "L2_DMAC",
[VCAP_KF_L2_FRM_TYPE] = "L2_FRM_TYPE",
+ [VCAP_KF_L2_FWD_IS] = "L2_FWD_IS",
[VCAP_KF_L2_LLC] = "L2_LLC",
+ [VCAP_KF_L2_MAC] = "L2_MAC",
[VCAP_KF_L2_MC_IS] = "L2_MC_IS",
[VCAP_KF_L2_PAYLOAD0] = "L2_PAYLOAD0",
[VCAP_KF_L2_PAYLOAD1] = "L2_PAYLOAD1",
[VCAP_KF_L2_PAYLOAD2] = "L2_PAYLOAD2",
+ [VCAP_KF_L2_PAYLOAD_ETYPE] = "L2_PAYLOAD_ETYPE",
[VCAP_KF_L2_SMAC] = "L2_SMAC",
[VCAP_KF_L2_SNAP] = "L2_SNAP",
[VCAP_KF_L3_DIP_EQ_SIP_IS] = "L3_DIP_EQ_SIP_IS",
+ [VCAP_KF_L3_DPL_CLS] = "L3_DPL_CLS",
+ [VCAP_KF_L3_DSCP] = "L3_DSCP",
+ [VCAP_KF_L3_DST_IS] = "L3_DST_IS",
[VCAP_KF_L3_FRAGMENT] = "L3_FRAGMENT",
+ [VCAP_KF_L3_FRAGMENT_TYPE] = "L3_FRAGMENT_TYPE",
+ [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = "L3_FRAG_INVLD_L4_LEN",
[VCAP_KF_L3_FRAG_OFS_GT0] = "L3_FRAG_OFS_GT0",
[VCAP_KF_L3_IP4_DIP] = "L3_IP4_DIP",
[VCAP_KF_L3_IP4_SIP] = "L3_IP4_SIP",
[VCAP_KF_L3_IP6_DIP] = "L3_IP6_DIP",
+ [VCAP_KF_L3_IP6_DIP_MSB] = "L3_IP6_DIP_MSB",
[VCAP_KF_L3_IP6_SIP] = "L3_IP6_SIP",
+ [VCAP_KF_L3_IP6_SIP_MSB] = "L3_IP6_SIP_MSB",
[VCAP_KF_L3_IP_PROTO] = "L3_IP_PROTO",
[VCAP_KF_L3_OPTIONS_IS] = "L3_OPTIONS_IS",
[VCAP_KF_L3_PAYLOAD] = "L3_PAYLOAD",
+ [VCAP_KF_L3_RT_IS] = "L3_RT_IS",
[VCAP_KF_L3_TOS] = "L3_TOS",
[VCAP_KF_L3_TTL_GT0] = "L3_TTL_GT0",
[VCAP_KF_L4_1588_DOM] = "L4_1588_DOM",
[VCAP_KF_L4_ACK] = "L4_ACK",
[VCAP_KF_L4_DPORT] = "L4_DPORT",
[VCAP_KF_L4_FIN] = "L4_FIN",
+ [VCAP_KF_L4_PAYLOAD] = "L4_PAYLOAD",
[VCAP_KF_L4_PSH] = "L4_PSH",
[VCAP_KF_L4_RNG] = "L4_RNG",
[VCAP_KF_L4_RST] = "L4_RST",
[VCAP_KF_L4_SYN] = "L4_SYN",
[VCAP_KF_L4_URG] = "L4_URG",
[VCAP_KF_LOOKUP_FIRST_IS] = "LOOKUP_FIRST_IS",
+ [VCAP_KF_LOOKUP_GEN_IDX] = "LOOKUP_GEN_IDX",
+ [VCAP_KF_LOOKUP_GEN_IDX_SEL] = "LOOKUP_GEN_IDX_SEL",
+ [VCAP_KF_LOOKUP_INDEX] = "LOOKUP_INDEX",
[VCAP_KF_LOOKUP_PAG] = "LOOKUP_PAG",
+ [VCAP_KF_MIRROR_PROBE] = "MIRROR_PROBE",
[VCAP_KF_OAM_CCM_CNTS_EQ0] = "OAM_CCM_CNTS_EQ0",
[VCAP_KF_OAM_DETECTED] = "OAM_DETECTED",
[VCAP_KF_OAM_FLAGS] = "OAM_FLAGS",
[VCAP_KF_OAM_OPCODE] = "OAM_OPCODE",
[VCAP_KF_OAM_VER] = "OAM_VER",
[VCAP_KF_OAM_Y1731_IS] = "OAM_Y1731_IS",
+ [VCAP_KF_PROT_ACTIVE] = "PROT_ACTIVE",
+ [VCAP_KF_RT_FRMID] = "RT_FRMID",
+ [VCAP_KF_RT_TYPE] = "RT_TYPE",
+ [VCAP_KF_RT_VLAN_IDX] = "RT_VLAN_IDX",
[VCAP_KF_TCP_IS] = "TCP_IS",
+ [VCAP_KF_TCP_UDP_IS] = "TCP_UDP_IS",
[VCAP_KF_TYPE] = "TYPE",
};
static const char * const vcap_actionfield_names[] = {
[VCAP_AF_NO_VALUE] = "(None)",
[VCAP_AF_ACL_ID] = "ACL_ID",
+ [VCAP_AF_CLS_VID_SEL] = "CLS_VID_SEL",
+ [VCAP_AF_CNT_ID] = "CNT_ID",
+ [VCAP_AF_COPY_PORT_NUM] = "COPY_PORT_NUM",
+ [VCAP_AF_COPY_QUEUE_NUM] = "COPY_QUEUE_NUM",
[VCAP_AF_CPU_COPY_ENA] = "CPU_COPY_ENA",
[VCAP_AF_CPU_QUEUE_NUM] = "CPU_QUEUE_NUM",
+ [VCAP_AF_CUSTOM_ACE_TYPE_ENA] = "CUSTOM_ACE_TYPE_ENA",
+ [VCAP_AF_DEI_ENA] = "DEI_ENA",
+ [VCAP_AF_DEI_VAL] = "DEI_VAL",
+ [VCAP_AF_DLR_SEL] = "DLR_SEL",
+ [VCAP_AF_DP_ENA] = "DP_ENA",
+ [VCAP_AF_DP_VAL] = "DP_VAL",
+ [VCAP_AF_DSCP_ENA] = "DSCP_ENA",
+ [VCAP_AF_DSCP_VAL] = "DSCP_VAL",
+ [VCAP_AF_ES2_REW_CMD] = "ES2_REW_CMD",
[VCAP_AF_FWD_KILL_ENA] = "FWD_KILL_ENA",
+ [VCAP_AF_FWD_MODE] = "FWD_MODE",
[VCAP_AF_HIT_ME_ONCE] = "HIT_ME_ONCE",
[VCAP_AF_HOST_MATCH] = "HOST_MATCH",
+ [VCAP_AF_IGNORE_PIPELINE_CTRL] = "IGNORE_PIPELINE_CTRL",
+ [VCAP_AF_INTR_ENA] = "INTR_ENA",
+ [VCAP_AF_ISDX_ADD_REPLACE_SEL] = "ISDX_ADD_REPLACE_SEL",
+ [VCAP_AF_ISDX_ADD_VAL] = "ISDX_ADD_VAL",
[VCAP_AF_ISDX_ENA] = "ISDX_ENA",
+ [VCAP_AF_ISDX_REPLACE_ENA] = "ISDX_REPLACE_ENA",
+ [VCAP_AF_ISDX_VAL] = "ISDX_VAL",
[VCAP_AF_LRN_DIS] = "LRN_DIS",
+ [VCAP_AF_MAP_IDX] = "MAP_IDX",
+ [VCAP_AF_MAP_KEY] = "MAP_KEY",
+ [VCAP_AF_MAP_LOOKUP_SEL] = "MAP_LOOKUP_SEL",
[VCAP_AF_MASK_MODE] = "MASK_MODE",
+ [VCAP_AF_MATCH_ID] = "MATCH_ID",
+ [VCAP_AF_MATCH_ID_MASK] = "MATCH_ID_MASK",
[VCAP_AF_MIRROR_ENA] = "MIRROR_ENA",
+ [VCAP_AF_MIRROR_PROBE] = "MIRROR_PROBE",
+ [VCAP_AF_MIRROR_PROBE_ID] = "MIRROR_PROBE_ID",
+ [VCAP_AF_MRP_SEL] = "MRP_SEL",
+ [VCAP_AF_NXT_IDX] = "NXT_IDX",
+ [VCAP_AF_NXT_IDX_CTRL] = "NXT_IDX_CTRL",
+ [VCAP_AF_OAM_SEL] = "OAM_SEL",
+ [VCAP_AF_PAG_OVERRIDE_MASK] = "PAG_OVERRIDE_MASK",
+ [VCAP_AF_PAG_VAL] = "PAG_VAL",
+ [VCAP_AF_PCP_ENA] = "PCP_ENA",
+ [VCAP_AF_PCP_VAL] = "PCP_VAL",
+ [VCAP_AF_PIPELINE_FORCE_ENA] = "PIPELINE_FORCE_ENA",
+ [VCAP_AF_PIPELINE_PT] = "PIPELINE_PT",
[VCAP_AF_POLICE_ENA] = "POLICE_ENA",
[VCAP_AF_POLICE_IDX] = "POLICE_IDX",
+ [VCAP_AF_POLICE_REMARK] = "POLICE_REMARK",
[VCAP_AF_POLICE_VCAP_ONLY] = "POLICE_VCAP_ONLY",
[VCAP_AF_PORT_MASK] = "PORT_MASK",
+ [VCAP_AF_QOS_ENA] = "QOS_ENA",
+ [VCAP_AF_QOS_VAL] = "QOS_VAL",
[VCAP_AF_REW_OP] = "REW_OP",
+ [VCAP_AF_RT_DIS] = "RT_DIS",
+ [VCAP_AF_SFID_ENA] = "SFID_ENA",
+ [VCAP_AF_SFID_VAL] = "SFID_VAL",
+ [VCAP_AF_SGID_ENA] = "SGID_ENA",
+ [VCAP_AF_SGID_VAL] = "SGID_VAL",
+ [VCAP_AF_TYPE] = "TYPE",
+ [VCAP_AF_VID_REPLACE_ENA] = "VID_REPLACE_ENA",
+ [VCAP_AF_VID_VAL] = "VID_VAL",
+ [VCAP_AF_VLAN_POP_CNT] = "VLAN_POP_CNT",
+ [VCAP_AF_VLAN_POP_CNT_ENA] = "VLAN_POP_CNT_ENA",
};
/* VCAPs */
const struct vcap_info lan966x_vcaps[] = {
+ [VCAP_TYPE_IS1] = {
+ .name = "is1",
+ .rows = 192,
+ .sw_count = 4,
+ .sw_width = 96,
+ .sticky_width = 32,
+ .act_width = 123,
+ .default_cnt = 0,
+ .require_cnt_dis = 1,
+ .version = 1,
+ .keyfield_set = is1_keyfield_set,
+ .keyfield_set_size = ARRAY_SIZE(is1_keyfield_set),
+ .actionfield_set = is1_actionfield_set,
+ .actionfield_set_size = ARRAY_SIZE(is1_actionfield_set),
+ .keyfield_set_map = is1_keyfield_set_map,
+ .keyfield_set_map_size = is1_keyfield_set_map_size,
+ .actionfield_set_map = is1_actionfield_set_map,
+ .actionfield_set_map_size = is1_actionfield_set_map_size,
+ .keyfield_set_typegroups = is1_keyfield_set_typegroups,
+ .actionfield_set_typegroups = is1_actionfield_set_typegroups,
+ },
[VCAP_TYPE_IS2] = {
.name = "is2",
.rows = 64,
const struct vcap_statistics lan966x_vcap_stats = {
.name = "lan966x",
- .count = 1,
+ .count = 2,
.keyfield_set_names = vcap_keyfield_set_names,
.actionfield_set_names = vcap_actionfield_set_names,
.keyfield_names = vcap_keyfield_names,
#include "vcap_api.h"
#include "vcap_api_client.h"
-static void lan966x_vcap_port_keys(struct lan966x_port *port,
- struct vcap_admin *admin,
- struct vcap_output_print *out)
+static void lan966x_vcap_is1_port_keys(struct lan966x_port *port,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
+{
+ struct lan966x *lan966x = port->lan966x;
+ u32 val;
+
+ out->prf(out->dst, " port[%d] (%s): ", port->chip_port,
+ netdev_name(port->dev));
+
+ val = lan_rd(lan966x, ANA_VCAP_CFG(port->chip_port));
+ out->prf(out->dst, "\n state: ");
+ if (ANA_VCAP_CFG_S1_ENA_GET(val))
+ out->prf(out->dst, "on");
+ else
+ out->prf(out->dst, "off");
+
+ for (int l = 0; l < admin->lookups; ++l) {
+ out->prf(out->dst, "\n Lookup %d: ", l);
+
+ out->prf(out->dst, "\n other: ");
+ switch (ANA_VCAP_S1_CFG_KEY_OTHER_CFG_GET(val)) {
+ case VCAP_IS1_PS_OTHER_NORMAL:
+ out->prf(out->dst, "normal");
+ break;
+ case VCAP_IS1_PS_OTHER_7TUPLE:
+ out->prf(out->dst, "7tuple");
+ break;
+ case VCAP_IS1_PS_OTHER_DBL_VID:
+ out->prf(out->dst, "dbl_vid");
+ break;
+ case VCAP_IS1_PS_OTHER_DMAC_VID:
+ out->prf(out->dst, "dmac_vid");
+ break;
+ default:
+ out->prf(out->dst, "-");
+ break;
+ }
+
+ out->prf(out->dst, "\n ipv4: ");
+ switch (ANA_VCAP_S1_CFG_KEY_IP4_CFG_GET(val)) {
+ case VCAP_IS1_PS_IPV4_NORMAL:
+ out->prf(out->dst, "normal");
+ break;
+ case VCAP_IS1_PS_IPV4_7TUPLE:
+ out->prf(out->dst, "7tuple");
+ break;
+ case VCAP_IS1_PS_IPV4_5TUPLE_IP4:
+ out->prf(out->dst, "5tuple_ipv4");
+ break;
+ case VCAP_IS1_PS_IPV4_DBL_VID:
+ out->prf(out->dst, "dbl_vid");
+ break;
+ case VCAP_IS1_PS_IPV4_DMAC_VID:
+ out->prf(out->dst, "dmac_vid");
+ break;
+ default:
+ out->prf(out->dst, "-");
+ break;
+ }
+
+ out->prf(out->dst, "\n ipv6: ");
+ switch (ANA_VCAP_S1_CFG_KEY_IP6_CFG_GET(val)) {
+ case VCAP_IS1_PS_IPV6_NORMAL:
+ out->prf(out->dst, "normal");
+ break;
+ case VCAP_IS1_PS_IPV6_7TUPLE:
+ out->prf(out->dst, "7tuple");
+ break;
+ case VCAP_IS1_PS_IPV6_5TUPLE_IP4:
+ out->prf(out->dst, "5tuple_ip4");
+ break;
+ case VCAP_IS1_PS_IPV6_NORMAL_IP6:
+ out->prf(out->dst, "normal_ip6");
+ break;
+ case VCAP_IS1_PS_IPV6_5TUPLE_IP6:
+ out->prf(out->dst, "5tuple_ip6");
+ break;
+ case VCAP_IS1_PS_IPV6_DBL_VID:
+ out->prf(out->dst, "dbl_vid");
+ break;
+ case VCAP_IS1_PS_IPV6_DMAC_VID:
+ out->prf(out->dst, "dmac_vid");
+ break;
+ default:
+ out->prf(out->dst, "-");
+ break;
+ }
+
+ out->prf(out->dst, "\n rt: ");
+ switch (ANA_VCAP_S1_CFG_KEY_RT_CFG_GET(val)) {
+ case VCAP_IS1_PS_RT_NORMAL:
+ out->prf(out->dst, "normal");
+ break;
+ case VCAP_IS1_PS_RT_7TUPLE:
+ out->prf(out->dst, "7tuple");
+ break;
+ case VCAP_IS1_PS_RT_DBL_VID:
+ out->prf(out->dst, "dbl_vid");
+ break;
+ case VCAP_IS1_PS_RT_DMAC_VID:
+ out->prf(out->dst, "dmac_vid");
+ break;
+ case VCAP_IS1_PS_RT_FOLLOW_OTHER:
+ out->prf(out->dst, "follow_other");
+ break;
+ default:
+ out->prf(out->dst, "-");
+ break;
+ }
+ }
+
+ out->prf(out->dst, "\n");
+}
+
+static void lan966x_vcap_is2_port_keys(struct lan966x_port *port,
+ struct vcap_admin *admin,
+ struct vcap_output_print *out)
{
struct lan966x *lan966x = port->lan966x;
u32 val;
vcap = &vctrl->vcaps[admin->vtype];
out->prf(out->dst, "%s:\n", vcap->name);
- lan966x_vcap_port_keys(port, admin, out);
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS2:
+ lan966x_vcap_is2_port_keys(port, admin, out);
+ break;
+ case VCAP_TYPE_IS1:
+ lan966x_vcap_is1_port_keys(port, admin, out);
+ break;
+ default:
+ out->prf(out->dst, " no info\n");
+ break;
+ }
return 0;
}
#define STREAMSIZE (64 * 4)
+#define LAN966X_IS1_LOOKUPS 3
#define LAN966X_IS2_LOOKUPS 2
static struct lan966x_vcap_inst {
bool ingress; /* is vcap in the ingress path */
} lan966x_vcap_inst_cfg[] = {
{
+ .vtype = VCAP_TYPE_IS1, /* IS1-0 */
+ .tgt_inst = 1,
+ .lookups = LAN966X_IS1_LOOKUPS,
+ .first_cid = LAN966X_VCAP_CID_IS1_L0,
+ .last_cid = LAN966X_VCAP_CID_IS1_MAX,
+ .count = 768,
+ .ingress = true,
+ },
+ {
.vtype = VCAP_TYPE_IS2, /* IS2-0 */
.tgt_inst = 2,
.lookups = LAN966X_IS2_LOOKUPS,
lan966x_vcap_wait_update(lan966x, admin->tgt_inst);
}
-static int lan966x_vcap_cid_to_lookup(int cid)
+static int lan966x_vcap_is1_cid_to_lookup(int cid)
+{
+ int lookup = 0;
+
+ if (cid >= LAN966X_VCAP_CID_IS1_L1 &&
+ cid < LAN966X_VCAP_CID_IS1_L2)
+ lookup = 1;
+ else if (cid >= LAN966X_VCAP_CID_IS1_L2 &&
+ cid < LAN966X_VCAP_CID_IS1_MAX)
+ lookup = 2;
+
+ return lookup;
+}
+
+static int lan966x_vcap_is2_cid_to_lookup(int cid)
{
if (cid >= LAN966X_VCAP_CID_IS2_L1 &&
cid < LAN966X_VCAP_CID_IS2_MAX)
return 0;
}
+/* Return the list of keysets for the vcap port configuration */
+static int
+lan966x_vcap_is1_get_port_keysets(struct net_device *ndev, int lookup,
+ struct vcap_keyset_list *keysetlist,
+ u16 l3_proto)
+{
+ struct lan966x_port *port = netdev_priv(ndev);
+ struct lan966x *lan966x = port->lan966x;
+ u32 val;
+
+ val = lan_rd(lan966x, ANA_VCAP_S1_CFG(port->chip_port, lookup));
+
+ /* Collect all keysets for the port in a list */
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IP) {
+ switch (ANA_VCAP_S1_CFG_KEY_IP4_CFG_GET(val)) {
+ case VCAP_IS1_PS_IPV4_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_7TUPLE);
+ break;
+ case VCAP_IS1_PS_IPV4_5TUPLE_IP4:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_5TUPLE_IP4);
+ break;
+ case VCAP_IS1_PS_IPV4_NORMAL:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL);
+ break;
+ }
+ }
+
+ if (l3_proto == ETH_P_ALL || l3_proto == ETH_P_IPV6) {
+ switch (ANA_VCAP_S1_CFG_KEY_IP6_CFG_GET(val)) {
+ case VCAP_IS1_PS_IPV6_NORMAL:
+ case VCAP_IS1_PS_IPV6_NORMAL_IP6:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL);
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL_IP6);
+ break;
+ case VCAP_IS1_PS_IPV6_5TUPLE_IP6:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_5TUPLE_IP6);
+ break;
+ case VCAP_IS1_PS_IPV6_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_7TUPLE);
+ break;
+ case VCAP_IS1_PS_IPV6_5TUPLE_IP4:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_5TUPLE_IP4);
+ break;
+ case VCAP_IS1_PS_IPV6_DMAC_VID:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_DMAC_VID);
+ break;
+ }
+ }
+
+ switch (ANA_VCAP_S1_CFG_KEY_OTHER_CFG_GET(val)) {
+ case VCAP_IS1_PS_OTHER_7TUPLE:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_7TUPLE);
+ break;
+ case VCAP_IS1_PS_OTHER_NORMAL:
+ vcap_keyset_list_add(keysetlist, VCAP_KFS_NORMAL);
+ break;
+ }
+
+ return 0;
+}
+
static int
lan966x_vcap_is2_get_port_keysets(struct net_device *dev, int lookup,
struct vcap_keyset_list *keysetlist,
if (!kslist || kslist->cnt == 0)
return VCAP_KFS_NO_VALUE;
- lookup = lan966x_vcap_cid_to_lookup(rule->vcap_chain_id);
keysetlist.max = ARRAY_SIZE(keysets);
keysetlist.keysets = keysets;
- err = lan966x_vcap_is2_get_port_keysets(dev, lookup, &keysetlist,
- l3_proto);
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS1:
+ lookup = lan966x_vcap_is1_cid_to_lookup(rule->vcap_chain_id);
+ err = lan966x_vcap_is1_get_port_keysets(dev, lookup, &keysetlist,
+ l3_proto);
+ break;
+ case VCAP_TYPE_IS2:
+ lookup = lan966x_vcap_is2_cid_to_lookup(rule->vcap_chain_id);
+ err = lan966x_vcap_is2_get_port_keysets(dev, lookup, &keysetlist,
+ l3_proto);
+ break;
+ default:
+ pr_err("vcap type: %s not supported\n",
+ lan966x_vcaps[admin->vtype].name);
+ return VCAP_KFS_NO_VALUE;
+ }
+
if (err)
return VCAP_KFS_NO_VALUE;
return VCAP_KFS_NO_VALUE;
}
-static bool lan966x_vcap_is_first_chain(struct vcap_rule *rule)
+static bool lan966x_vcap_is2_is_first_chain(struct vcap_rule *rule)
{
return (rule->vcap_chain_id >= LAN966X_VCAP_CID_IS2_L0 &&
rule->vcap_chain_id < LAN966X_VCAP_CID_IS2_L1);
}
-static void lan966x_vcap_add_default_fields(struct net_device *dev,
- struct vcap_admin *admin,
- struct vcap_rule *rule)
+static void lan966x_vcap_is1_add_default_fields(struct lan966x_port *port,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
+{
+ u32 value, mask;
+ u32 lookup;
+
+ if (vcap_rule_get_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK,
+ &value, &mask))
+ vcap_rule_add_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK, 0,
+ ~BIT(port->chip_port));
+
+ lookup = lan966x_vcap_is1_cid_to_lookup(rule->vcap_chain_id);
+ vcap_rule_add_key_u32(rule, VCAP_KF_LOOKUP_INDEX, lookup, 0x3);
+}
+
+static void lan966x_vcap_is2_add_default_fields(struct lan966x_port *port,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
{
- struct lan966x_port *port = netdev_priv(dev);
u32 value, mask;
if (vcap_rule_get_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK,
vcap_rule_add_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK, 0,
~BIT(port->chip_port));
- if (lan966x_vcap_is_first_chain(rule))
+ if (lan966x_vcap_is2_is_first_chain(rule))
vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
VCAP_BIT_1);
else
VCAP_BIT_0);
}
+static void lan966x_vcap_add_default_fields(struct net_device *dev,
+ struct vcap_admin *admin,
+ struct vcap_rule *rule)
+{
+ struct lan966x_port *port = netdev_priv(dev);
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS1:
+ lan966x_vcap_is1_add_default_fields(port, admin, rule);
+ break;
+ case VCAP_TYPE_IS2:
+ lan966x_vcap_is2_add_default_fields(port, admin, rule);
+ break;
+ default:
+ pr_err("vcap type: %s not supported\n",
+ lan966x_vcaps[admin->vtype].name);
+ break;
+ }
+}
+
static void lan966x_vcap_cache_erase(struct vcap_admin *admin)
{
memset(admin->cache.keystream, 0, STREAMSIZE);
static void lan966x_vcap_port_key_deselection(struct lan966x *lan966x,
struct vcap_admin *admin)
{
- for (int p = 0; p < lan966x->num_phys_ports; ++p)
- lan_wr(0, lan966x, ANA_VCAP_S2_CFG(p));
+ u32 val;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS1:
+ val = ANA_VCAP_S1_CFG_KEY_IP6_CFG_SET(VCAP_IS1_PS_IPV6_5TUPLE_IP6) |
+ ANA_VCAP_S1_CFG_KEY_IP4_CFG_SET(VCAP_IS1_PS_IPV4_5TUPLE_IP4) |
+ ANA_VCAP_S1_CFG_KEY_OTHER_CFG_SET(VCAP_IS1_PS_OTHER_NORMAL);
+
+ for (int p = 0; p < lan966x->num_phys_ports; ++p) {
+ if (!lan966x->ports[p])
+ continue;
+
+ for (int l = 0; l < LAN966X_IS1_LOOKUPS; ++l)
+ lan_wr(val, lan966x, ANA_VCAP_S1_CFG(p, l));
+
+ lan_rmw(ANA_VCAP_CFG_S1_ENA_SET(true),
+ ANA_VCAP_CFG_S1_ENA, lan966x,
+ ANA_VCAP_CFG(p));
+ }
+
+ break;
+ case VCAP_TYPE_IS2:
+ for (int p = 0; p < lan966x->num_phys_ports; ++p)
+ lan_wr(0, lan966x, ANA_VCAP_S2_CFG(p));
+
+ break;
+ default:
+ pr_err("vcap type: %s not supported\n",
+ lan966x_vcaps[admin->vtype].name);
+ break;
+ }
}
int lan966x_vcap_init(struct lan966x *lan966x)
lan_rmw(ANA_VCAP_S2_CFG_ENA_SET(true),
ANA_VCAP_S2_CFG_ENA, lan966x,
ANA_VCAP_S2_CFG(lan966x->ports[p]->chip_port));
+
+ lan_rmw(ANA_VCAP_CFG_S1_ENA_SET(true),
+ ANA_VCAP_CFG_S1_ENA, lan966x,
+ ANA_VCAP_CFG(lan966x->ports[p]->chip_port));
}
}
spx5_port->phylink_pcs.poll = true;
spx5_port->phylink_pcs.ops = &sparx5_phylink_pcs_ops;
spx5_port->is_mrouter = false;
+ INIT_LIST_HEAD(&spx5_port->tc_templates);
sparx5->ports[config->portno] = spx5_port;
err = sparx5_port_init(sparx5, spx5_port, &config->conf);
u16 ts_id;
struct sk_buff_head tx_skbs;
bool is_mrouter;
+ struct list_head tc_templates; /* list of TC templates on this port */
};
enum sparx5_core_clockfreq {
struct sparx5_wildcard_rule rule[SPX5_MAX_RULE_SIZE];
};
+struct sparx5_tc_flower_template {
+ struct list_head list; /* for insertion in the list of templates */
+ int cid; /* chain id */
+ enum vcap_keyfield_set orig; /* keyset used before the template */
+ enum vcap_keyfield_set keyset; /* new keyset used by template */
+ u16 l3_proto; /* protocol specified in the template */
+};
+
static int
sparx5_tc_flower_es0_tpid(struct vcap_tc_flower_parse_usage *st)
{
/* Find the keysets that the rule can use */
matches.keysets = keysets;
matches.max = ARRAY_SIZE(keysets);
- if (vcap_rule_find_keysets(vrule, &matches) == 0)
+ if (!vcap_rule_find_keysets(vrule, &matches))
return -EINVAL;
/* Find the keysets that the port configuration supports */
return err;
}
+/* Remove rule keys that may prevent templates from matching a keyset */
+static void sparx5_tc_flower_simplify_rule(struct vcap_admin *admin,
+ struct vcap_rule *vrule,
+ u16 l3_proto)
+{
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ vcap_rule_rem_key(vrule, VCAP_KF_ETYPE);
+ switch (l3_proto) {
+ case ETH_P_IP:
+ break;
+ case ETH_P_IPV6:
+ vcap_rule_rem_key(vrule, VCAP_KF_IP_SNAP_IS);
+ break;
+ default:
+ break;
+ }
+ break;
+ case VCAP_TYPE_ES2:
+ switch (l3_proto) {
+ case ETH_P_IP:
+ if (vrule->keyset == VCAP_KFS_IP4_OTHER)
+ vcap_rule_rem_key(vrule, VCAP_KF_TCP_IS);
+ break;
+ case ETH_P_IPV6:
+ if (vrule->keyset == VCAP_KFS_IP6_STD)
+ vcap_rule_rem_key(vrule, VCAP_KF_TCP_IS);
+ vcap_rule_rem_key(vrule, VCAP_KF_IP4_IS);
+ break;
+ default:
+ break;
+ }
+ break;
+ case VCAP_TYPE_IS2:
+ switch (l3_proto) {
+ case ETH_P_IP:
+ case ETH_P_IPV6:
+ vcap_rule_rem_key(vrule, VCAP_KF_IP4_IS);
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+static bool sparx5_tc_flower_use_template(struct net_device *ndev,
+ struct flow_cls_offload *fco,
+ struct vcap_admin *admin,
+ struct vcap_rule *vrule)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5_tc_flower_template *ftp;
+
+ list_for_each_entry(ftp, &port->tc_templates, list) {
+ if (ftp->cid != fco->common.chain_index)
+ continue;
+
+ vcap_set_rule_set_keyset(vrule, ftp->keyset);
+ sparx5_tc_flower_simplify_rule(admin, vrule, ftp->l3_proto);
+ return true;
+ }
+ return false;
+}
+
static int sparx5_tc_flower_replace(struct net_device *ndev,
struct flow_cls_offload *fco,
struct vcap_admin *admin,
goto out;
}
- err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin,
- state.l3_proto, &multi);
- if (err) {
- NL_SET_ERR_MSG_MOD(fco->common.extack,
- "No matching port keyset for filter protocol and keys");
- goto out;
+ if (!sparx5_tc_flower_use_template(ndev, fco, admin, vrule)) {
+ err = sparx5_tc_select_protocol_keyset(ndev, vrule, admin,
+ state.l3_proto, &multi);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(fco->common.extack,
+ "No matching port keyset for filter protocol and keys");
+ goto out;
+ }
}
/* provide the l3 protocol to guide the keyset selection */
return err;
}
+static int sparx5_tc_flower_template_create(struct net_device *ndev,
+ struct flow_cls_offload *fco,
+ struct vcap_admin *admin)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct vcap_tc_flower_parse_usage state = {
+ .fco = fco,
+ .l3_proto = ETH_P_ALL,
+ .admin = admin,
+ };
+ struct sparx5_tc_flower_template *ftp;
+ struct vcap_keyset_list kslist = {};
+ enum vcap_keyfield_set keysets[10];
+ struct vcap_control *vctrl;
+ struct vcap_rule *vrule;
+ int count, err;
+
+ if (admin->vtype == VCAP_TYPE_ES0) {
+ pr_err("%s:%d: %s\n", __func__, __LINE__,
+ "VCAP does not support templates");
+ return -EINVAL;
+ }
+
+ count = vcap_admin_rule_count(admin, fco->common.chain_index);
+ if (count > 0) {
+ pr_err("%s:%d: %s\n", __func__, __LINE__,
+ "Filters are already present");
+ return -EBUSY;
+ }
+
+ ftp = kzalloc(sizeof(*ftp), GFP_KERNEL);
+ if (!ftp)
+ return -ENOMEM;
+
+ ftp->cid = fco->common.chain_index;
+ ftp->orig = VCAP_KFS_NO_VALUE;
+ ftp->keyset = VCAP_KFS_NO_VALUE;
+
+ vctrl = port->sparx5->vcap_ctrl;
+ vrule = vcap_alloc_rule(vctrl, ndev, fco->common.chain_index,
+ VCAP_USER_TC, fco->common.prio, 0);
+ if (IS_ERR(vrule)) {
+ err = PTR_ERR(vrule);
+ goto err_rule;
+ }
+
+ state.vrule = vrule;
+ state.frule = flow_cls_offload_flow_rule(fco);
+ err = sparx5_tc_use_dissectors(&state, admin, vrule);
+ if (err) {
+ pr_err("%s:%d: key error: %d\n", __func__, __LINE__, err);
+ goto out;
+ }
+
+ ftp->l3_proto = state.l3_proto;
+
+ sparx5_tc_flower_simplify_rule(admin, vrule, state.l3_proto);
+
+ /* Find the keysets that the rule can use */
+ kslist.keysets = keysets;
+ kslist.max = ARRAY_SIZE(keysets);
+ if (!vcap_rule_find_keysets(vrule, &kslist)) {
+ pr_err("%s:%d: %s\n", __func__, __LINE__,
+ "Could not find a suitable keyset");
+ err = -ENOENT;
+ goto out;
+ }
+
+ ftp->keyset = vcap_select_min_rule_keyset(vctrl, admin->vtype, &kslist);
+ kslist.cnt = 0;
+ sparx5_vcap_set_port_keyset(ndev, admin, fco->common.chain_index,
+ state.l3_proto,
+ ftp->keyset,
+ &kslist);
+
+ if (kslist.cnt > 0)
+ ftp->orig = kslist.keysets[0];
+
+ /* Store new template */
+ list_add_tail(&ftp->list, &port->tc_templates);
+ vcap_free_rule(vrule);
+ return 0;
+
+out:
+ vcap_free_rule(vrule);
+err_rule:
+ kfree(ftp);
+ return err;
+}
+
+static int sparx5_tc_flower_template_destroy(struct net_device *ndev,
+ struct flow_cls_offload *fco,
+ struct vcap_admin *admin)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5_tc_flower_template *ftp, *tmp;
+ int err = -ENOENT;
+
+ /* Rules using the template are removed by the tc framework */
+ list_for_each_entry_safe(ftp, tmp, &port->tc_templates, list) {
+ if (ftp->cid != fco->common.chain_index)
+ continue;
+
+ sparx5_vcap_set_port_keyset(ndev, admin,
+ fco->common.chain_index,
+ ftp->l3_proto, ftp->orig,
+ NULL);
+ list_del(&ftp->list);
+ kfree(ftp);
+ break;
+ }
+ return err;
+}
+
int sparx5_tc_flower(struct net_device *ndev, struct flow_cls_offload *fco,
bool ingress)
{
return sparx5_tc_flower_destroy(ndev, fco, admin);
case FLOW_CLS_STATS:
return sparx5_tc_flower_stats(ndev, fco, admin);
+ case FLOW_CLS_TMPLT_CREATE:
+ return sparx5_tc_flower_template_create(ndev, fco, admin);
+ case FLOW_CLS_TMPLT_DESTROY:
+ return sparx5_tc_flower_template_destroy(ndev, fco, admin);
default:
return -EOPNOTSUPP;
}
out->prf(out->dst, "ip6_std");
break;
case VCAP_IS2_PS_IPV6_MC_IP4_TCP_UDP_OTHER:
- out->prf(out->dst, "ip4_tcp_udp ipv4_other");
+ out->prf(out->dst, "ip4_tcp_udp ip4_other");
break;
}
out->prf(out->dst, "\n ipv6_uc: ");
.port_info = sparx5_port_info,
};
+static u32 sparx5_vcap_is0_keyset_to_etype_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_NORMAL_7TUPLE:
+ return VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE;
+ case VCAP_KFS_NORMAL_5TUPLE_IP4:
+ return VCAP_IS0_PS_ETYPE_NORMAL_5TUPLE_IP4;
+ default:
+ return VCAP_IS0_PS_ETYPE_NORMAL_7TUPLE;
+ }
+}
+
+static void sparx5_vcap_is0_set_port_keyset(struct net_device *ndev, int lookup,
+ enum vcap_keyfield_set keyset,
+ int l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ switch (l3_proto) {
+ case ETH_P_IP:
+ value = sparx5_vcap_is0_keyset_to_etype_ps(keyset);
+ spx5_rmw(ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL_SET(value),
+ ANA_CL_ADV_CL_CFG_IP4_CLM_KEY_SEL,
+ sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ break;
+ case ETH_P_IPV6:
+ value = sparx5_vcap_is0_keyset_to_etype_ps(keyset);
+ spx5_rmw(ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL_SET(value),
+ ANA_CL_ADV_CL_CFG_IP6_CLM_KEY_SEL,
+ sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ break;
+ default:
+ value = sparx5_vcap_is0_keyset_to_etype_ps(keyset);
+ spx5_rmw(ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL_SET(value),
+ ANA_CL_ADV_CL_CFG_ETYPE_CLM_KEY_SEL,
+ sparx5,
+ ANA_CL_ADV_CL_CFG(portno, lookup));
+ break;
+ }
+}
+
+static u32 sparx5_vcap_is2_keyset_to_arp_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_ARP:
+ return VCAP_IS2_PS_ARP_ARP;
+ default:
+ return VCAP_IS2_PS_ARP_MAC_ETYPE;
+ }
+}
+
+static u32 sparx5_vcap_is2_keyset_to_ipv4_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_MAC_ETYPE:
+ return VCAP_IS2_PS_IPV4_UC_MAC_ETYPE;
+ case VCAP_KFS_IP4_OTHER:
+ case VCAP_KFS_IP4_TCP_UDP:
+ return VCAP_IS2_PS_IPV4_UC_IP4_TCP_UDP_OTHER;
+ case VCAP_KFS_IP_7TUPLE:
+ return VCAP_IS2_PS_IPV4_UC_IP_7TUPLE;
+ default:
+ return VCAP_KFS_NO_VALUE;
+ }
+}
+
+static u32 sparx5_vcap_is2_keyset_to_ipv6_uc_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_MAC_ETYPE:
+ return VCAP_IS2_PS_IPV6_UC_MAC_ETYPE;
+ case VCAP_KFS_IP4_OTHER:
+ case VCAP_KFS_IP4_TCP_UDP:
+ return VCAP_IS2_PS_IPV6_UC_IP4_TCP_UDP_OTHER;
+ case VCAP_KFS_IP_7TUPLE:
+ return VCAP_IS2_PS_IPV6_UC_IP_7TUPLE;
+ default:
+ return VCAP_KFS_NO_VALUE;
+ }
+}
+
+static u32 sparx5_vcap_is2_keyset_to_ipv6_mc_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_MAC_ETYPE:
+ return VCAP_IS2_PS_IPV6_MC_MAC_ETYPE;
+ case VCAP_KFS_IP4_OTHER:
+ case VCAP_KFS_IP4_TCP_UDP:
+ return VCAP_IS2_PS_IPV6_MC_IP4_TCP_UDP_OTHER;
+ case VCAP_KFS_IP_7TUPLE:
+ return VCAP_IS2_PS_IPV6_MC_IP_7TUPLE;
+ default:
+ return VCAP_KFS_NO_VALUE;
+ }
+}
+
+static void sparx5_vcap_is2_set_port_keyset(struct net_device *ndev, int lookup,
+ enum vcap_keyfield_set keyset,
+ int l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ switch (l3_proto) {
+ case ETH_P_ARP:
+ value = sparx5_vcap_is2_keyset_to_arp_ps(keyset);
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ break;
+ case ETH_P_IP:
+ value = sparx5_vcap_is2_keyset_to_ipv4_ps(keyset);
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ break;
+ case ETH_P_IPV6:
+ value = sparx5_vcap_is2_keyset_to_ipv6_uc_ps(keyset);
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ value = sparx5_vcap_is2_keyset_to_ipv6_mc_ps(keyset);
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ break;
+ default:
+ value = VCAP_IS2_PS_NONETH_MAC_ETYPE;
+ spx5_rmw(ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL_SET(value),
+ ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL,
+ sparx5,
+ ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+ break;
+ }
+}
+
+static u32 sparx5_vcap_es2_keyset_to_arp_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_ARP:
+ return VCAP_ES2_PS_ARP_ARP;
+ default:
+ return VCAP_ES2_PS_ARP_MAC_ETYPE;
+ }
+}
+
+static u32 sparx5_vcap_es2_keyset_to_ipv4_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_MAC_ETYPE:
+ return VCAP_ES2_PS_IPV4_MAC_ETYPE;
+ case VCAP_KFS_IP_7TUPLE:
+ return VCAP_ES2_PS_IPV4_IP_7TUPLE;
+ case VCAP_KFS_IP4_TCP_UDP:
+ return VCAP_ES2_PS_IPV4_IP4_TCP_UDP_OTHER;
+ case VCAP_KFS_IP4_OTHER:
+ return VCAP_ES2_PS_IPV4_IP4_OTHER;
+ default:
+ return VCAP_ES2_PS_IPV4_MAC_ETYPE;
+ }
+}
+
+static u32 sparx5_vcap_es2_keyset_to_ipv6_ps(enum vcap_keyfield_set keyset)
+{
+ switch (keyset) {
+ case VCAP_KFS_MAC_ETYPE:
+ return VCAP_ES2_PS_IPV6_MAC_ETYPE;
+ case VCAP_KFS_IP4_TCP_UDP:
+ case VCAP_KFS_IP4_OTHER:
+ return VCAP_ES2_PS_IPV6_IP4_DOWNGRADE;
+ case VCAP_KFS_IP_7TUPLE:
+ return VCAP_ES2_PS_IPV6_IP_7TUPLE;
+ case VCAP_KFS_IP6_STD:
+ return VCAP_ES2_PS_IPV6_IP6_STD;
+ default:
+ return VCAP_ES2_PS_IPV6_MAC_ETYPE;
+ }
+}
+
+static void sparx5_vcap_es2_set_port_keyset(struct net_device *ndev, int lookup,
+ enum vcap_keyfield_set keyset,
+ int l3_proto)
+{
+ struct sparx5_port *port = netdev_priv(ndev);
+ struct sparx5 *sparx5 = port->sparx5;
+ int portno = port->portno;
+ u32 value;
+
+ switch (l3_proto) {
+ case ETH_P_IP:
+ value = sparx5_vcap_es2_keyset_to_ipv4_ps(keyset);
+ spx5_rmw(EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL_SET(value),
+ EACL_VCAP_ES2_KEY_SEL_IP4_KEY_SEL,
+ sparx5,
+ EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+ break;
+ case ETH_P_IPV6:
+ value = sparx5_vcap_es2_keyset_to_ipv6_ps(keyset);
+ spx5_rmw(EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL_SET(value),
+ EACL_VCAP_ES2_KEY_SEL_IP6_KEY_SEL,
+ sparx5,
+ EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+ break;
+ case ETH_P_ARP:
+ value = sparx5_vcap_es2_keyset_to_arp_ps(keyset);
+ spx5_rmw(EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL_SET(value),
+ EACL_VCAP_ES2_KEY_SEL_ARP_KEY_SEL,
+ sparx5,
+ EACL_VCAP_ES2_KEY_SEL(portno, lookup));
+ break;
+ }
+}
+
+/* Change the port keyset for the lookup and protocol */
+void sparx5_vcap_set_port_keyset(struct net_device *ndev,
+ struct vcap_admin *admin,
+ int cid,
+ u16 l3_proto,
+ enum vcap_keyfield_set keyset,
+ struct vcap_keyset_list *orig)
+{
+ struct sparx5_port *port;
+ int lookup;
+
+ switch (admin->vtype) {
+ case VCAP_TYPE_IS0:
+ lookup = sparx5_vcap_is0_cid_to_lookup(cid);
+ if (orig)
+ sparx5_vcap_is0_get_port_keysets(ndev, lookup, orig,
+ l3_proto);
+ sparx5_vcap_is0_set_port_keyset(ndev, lookup, keyset, l3_proto);
+ break;
+ case VCAP_TYPE_IS2:
+ lookup = sparx5_vcap_is2_cid_to_lookup(cid);
+ if (orig)
+ sparx5_vcap_is2_get_port_keysets(ndev, lookup, orig,
+ l3_proto);
+ sparx5_vcap_is2_set_port_keyset(ndev, lookup, keyset, l3_proto);
+ break;
+ case VCAP_TYPE_ES0:
+ break;
+ case VCAP_TYPE_ES2:
+ lookup = sparx5_vcap_es2_cid_to_lookup(cid);
+ if (orig)
+ sparx5_vcap_es2_get_port_keysets(ndev, lookup, orig,
+ l3_proto);
+ sparx5_vcap_es2_set_port_keyset(ndev, lookup, keyset, l3_proto);
+ break;
+ default:
+ port = netdev_priv(ndev);
+ sparx5_vcap_type_err(port->sparx5, admin, __func__);
+ break;
+ }
+}
+
/* Enable IS0 lookups per port and set the keyset generation */
static void sparx5_vcap_is0_port_key_selection(struct sparx5 *sparx5,
struct vcap_admin *admin)
u16 l3_proto,
struct vcap_keyset_list *kslist);
+/* Change the port keyset for the lookup and protocol */
+void sparx5_vcap_set_port_keyset(struct net_device *ndev,
+ struct vcap_admin *admin, int cid,
+ u16 l3_proto, enum vcap_keyfield_set keyset,
+ struct vcap_keyset_list *orig);
+
/* Check if the ethertype is supported by the vcap port classification */
bool sparx5_vcap_is_known_etype(struct vcap_admin *admin, u16 etype);
* Microchip VCAP API
*/
-/* This file is autogenerated by cml-utils 2023-02-10 11:15:56 +0100.
- * Commit ID: c30fb4bf0281cd4a7133bdab6682f9e43c872ada
+/* This file is autogenerated by cml-utils 2023-02-16 11:41:14 +0100.
+ * Commit ID: be85f176b3a151fa748dcaf97c8824a5c2e065f3
*/
#ifndef __VCAP_AG_API__
VCAP_TYPE_ES0,
VCAP_TYPE_ES2,
VCAP_TYPE_IS0,
+ VCAP_TYPE_IS1,
VCAP_TYPE_IS2,
VCAP_TYPE_MAX
};
/* Keyfieldset names with origin information */
enum vcap_keyfield_set {
VCAP_KFS_NO_VALUE, /* initial value */
+ VCAP_KFS_5TUPLE_IP4, /* lan966x is1 X2 */
+ VCAP_KFS_5TUPLE_IP6, /* lan966x is1 X4 */
+ VCAP_KFS_7TUPLE, /* lan966x is1 X4 */
VCAP_KFS_ARP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
+ VCAP_KFS_DBL_VID, /* lan966x is1 X1 */
+ VCAP_KFS_DMAC_VID, /* lan966x is1 X1 */
VCAP_KFS_ETAG, /* sparx5 is0 X2 */
VCAP_KFS_IP4_OTHER, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_IP4_TCP_UDP, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_MAC_ETYPE, /* sparx5 is2 X6, sparx5 es2 X6, lan966x is2 X2 */
VCAP_KFS_MAC_LLC, /* lan966x is2 X2 */
VCAP_KFS_MAC_SNAP, /* lan966x is2 X2 */
+ VCAP_KFS_NORMAL, /* lan966x is1 X2 */
VCAP_KFS_NORMAL_5TUPLE_IP4, /* sparx5 is0 X6 */
VCAP_KFS_NORMAL_7TUPLE, /* sparx5 is0 X12 */
+ VCAP_KFS_NORMAL_IP6, /* lan966x is1 X4 */
VCAP_KFS_OAM, /* lan966x is2 X2 */
VCAP_KFS_PURE_5TUPLE_IP4, /* sparx5 is0 X3 */
+ VCAP_KFS_RT, /* lan966x is1 X1 */
VCAP_KFS_SMAC_SIP4, /* lan966x is2 X1 */
VCAP_KFS_SMAC_SIP6, /* lan966x is2 X2 */
};
* Used by 802.1BR Bridge Port Extension in an E-Tag
* VCAP_KF_8021BR_IGR_ECID_EXT: W8, sparx5: is0
* Used by 802.1BR Bridge Port Extension in an E-Tag
- * VCAP_KF_8021Q_DEI0: W1, sparx5: is0
+ * VCAP_KF_8021CB_R_TAGGED_IS: W1, lan966x: is1
+ * Set if frame contains an RTAG: IEEE 802.1CB (FRER Redundancy tag, Ethertype
+ * 0xf1c1)
+ * VCAP_KF_8021Q_DEI0: W1, sparx5: is0, lan966x: is1
* First DEI in multiple vlan tags (outer tag or default port tag)
- * VCAP_KF_8021Q_DEI1: W1, sparx5: is0
+ * VCAP_KF_8021Q_DEI1: W1, sparx5: is0, lan966x: is1
* Second DEI in multiple vlan tags (inner tag)
* VCAP_KF_8021Q_DEI2: W1, sparx5: is0
* Third DEI in multiple vlan tags (not always available)
* VCAP_KF_8021Q_DEI_CLS: W1, sparx5: is2/es2, lan966x: is2
* Classified DEI
- * VCAP_KF_8021Q_PCP0: W3, sparx5: is0
+ * VCAP_KF_8021Q_PCP0: W3, sparx5: is0, lan966x: is1
* First PCP in multiple vlan tags (outer tag or default port tag)
- * VCAP_KF_8021Q_PCP1: W3, sparx5: is0
+ * VCAP_KF_8021Q_PCP1: W3, sparx5: is0, lan966x: is1
* Second PCP in multiple vlan tags (inner tag)
* VCAP_KF_8021Q_PCP2: W3, sparx5: is0
* Third PCP in multiple vlan tags (not always available)
* Classified PCP
* VCAP_KF_8021Q_TPID: W3, sparx5: es0
* TPID for outer tag: 0: Customer TPID 1: Service TPID (88A8 or programmable)
- * VCAP_KF_8021Q_TPID0: W3, sparx5: is0
+ * VCAP_KF_8021Q_TPID0: sparx5 is0 W3, lan966x is1 W1
 * First TPID in multiple vlan tags (outer tag or default port tag)
- * VCAP_KF_8021Q_TPID1: W3, sparx5: is0
+ * VCAP_KF_8021Q_TPID1: sparx5 is0 W3, lan966x is1 W1
* Second TPID in multiple vlan tags (inner tag)
* VCAP_KF_8021Q_TPID2: W3, sparx5: is0
* Third TPID in multiple vlan tags (not always available)
- * VCAP_KF_8021Q_VID0: W12, sparx5: is0
+ * VCAP_KF_8021Q_VID0: W12, sparx5: is0, lan966x: is1
* First VID in multiple vlan tags (outer tag or default port tag)
- * VCAP_KF_8021Q_VID1: W12, sparx5: is0
+ * VCAP_KF_8021Q_VID1: W12, sparx5: is0, lan966x: is1
* Second VID in multiple vlan tags (inner tag)
* VCAP_KF_8021Q_VID2: W12, sparx5: is0
* Third VID in multiple vlan tags (not always available)
* VCAP_KF_8021Q_VID_CLS: sparx5 is2 W13, sparx5 es0 W13, sparx5 es2 W13,
* lan966x is2 W12
* Classified VID
- * VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2/es2, lan966x: is2
+ * VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS: W1, lan966x: is1
+ * Set if frame has two or more Q-tags. Independent of port VLAN awareness
+ * VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2/es2, lan966x: is1/is2
* Sparx5: Set if frame was received with a VLAN tag, LAN966x: Set if frame has
* one or more Q-tags. Independent of port VLAN awareness
* VCAP_KF_8021Q_VLAN_TAGS: W3, sparx5: is0
* Class of service
* VCAP_KF_ES0_ISDX_KEY_ENA: W1, sparx5: es2
* The value taken from the IFH .FWD.ES0_ISDX_KEY_ENA
- * VCAP_KF_ETYPE: W16, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_ETYPE: W16, sparx5: is0/is2/es2, lan966x: is1/is2
* Ethernet type
- * VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is0/is2/es2
+ * VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is0/is2/es2, lan966x: is1
* Set if frame has EtherType >= 0x600
* VCAP_KF_HOST_MATCH: W1, lan966x: is2
* The action from the SMAC_SIP4 or SMAC_SIP6 lookups. Used for IP source
* CPU queue)
* VCAP_KF_IF_EGR_PORT_NO: W7, sparx5: es0
* Egress port number
- * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9, lan966x is2 W4
+ * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9, lan966x is1 W3, lan966x
+ * is2 W4
* Sparx5: Logical ingress port number retrieved from
 * ANA_CL::PORT_ID_CFG.LPORT_NUM or ERLEG, LAN966x: ingress port number
* VCAP_KF_IF_IGR_PORT_MASK: sparx5 is0 W65, sparx5 is2 W32, sparx5 is2 W65,
- * lan966x is2 W9
+ * lan966x is1 W9, lan966x is2 W9
* Ingress port mask, one bit per port/erleg
* VCAP_KF_IF_IGR_PORT_MASK_L3: W1, sparx5: is2
* If set, IF_IGR_PORT_MASK, IF_IGR_PORT_MASK_RNG, and IF_IGR_PORT_MASK_SEL are
* Mapping: 0: DEFAULT 1: LOOPBACK 2: MASQUERADE 3: CPU_VD
* VCAP_KF_IF_IGR_PORT_SEL: W1, sparx5: es2
* Selector for IF_IGR_PORT: physical port number or ERLEG
- * VCAP_KF_IP4_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_IP4_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Set if frame has EtherType = 0x800 and IP version = 4
- * VCAP_KF_IP_MC_IS: W1, sparx5: is0
+ * VCAP_KF_IP_MC_IS: W1, sparx5: is0, lan966x: is1
* Set if frame is IPv4 frame and frame's destination MAC address is an IPv4
* multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame's
* destination MAC address is an IPv6 multicast address (0x3333/16).
- * VCAP_KF_IP_PAYLOAD_5TUPLE: W32, sparx5: is0
+ * VCAP_KF_IP_PAYLOAD_5TUPLE: W32, sparx5: is0, lan966x: is1
* Payload bytes after IP header
- * VCAP_KF_IP_SNAP_IS: W1, sparx5: is0
+ * VCAP_KF_IP_PAYLOAD_S1_IP6: W112, lan966x: is1
+ * Payload after IPv6 header
+ * VCAP_KF_IP_SNAP_IS: W1, sparx5: is0, lan966x: is1
* Set if frame is IPv4, IPv6, or SNAP frame
* VCAP_KF_ISDX_CLS: W12, sparx5: is2/es0/es2
* Classified ISDX
* VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es0/es2, lan966x: is2
* Set if classified ISDX > 0
- * VCAP_KF_L2_BC_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L2_BC_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Set if frame's destination MAC address is the broadcast address
* (FF-FF-FF-FF-FF-FF).
- * VCAP_KF_L2_DMAC: W48, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L2_DMAC: W48, sparx5: is0/is2/es2, lan966x: is1/is2
* Destination MAC address
* VCAP_KF_L2_FRM_TYPE: W4, lan966x: is2
* Frame subtype for specific EtherTypes (MRP, DLR)
* Set if the frame is allowed to be forwarded to front ports
* VCAP_KF_L2_LLC: W40, lan966x: is2
* LLC header and data after up to two VLAN tags and the type/length field
- * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L2_MAC: W48, lan966x: is1
+ * MAC address (FIRST=1: SMAC, FIRST=0: DMAC)
+ * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Set if frame's destination MAC address is a multicast address (bit 40 = 1).
* VCAP_KF_L2_PAYLOAD0: W16, lan966x: is2
* Payload bytes 0-1 after the frame's EtherType
* specifically for PTP frames.
* VCAP_KF_L2_PAYLOAD_ETYPE: W64, sparx5: is2/es2
* Byte 0-7 of L2 payload after Type/Len field and overloading for OAM
- * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2, lan966x: is1/is2
* Source MAC address
* VCAP_KF_L2_SNAP: W40, lan966x: is2
* SNAP header after LLC header (AA-AA-03)
* Set if Src IP matches Dst IP address
* VCAP_KF_L3_DPL_CLS: W1, sparx5: es0/es2
* The frames drop precedence level
- * VCAP_KF_L3_DSCP: W6, sparx5: is0
+ * VCAP_KF_L3_DSCP: W6, sparx5: is0, lan966x: is1
* Frame's DSCP value
* VCAP_KF_L3_DST_IS: W1, sparx5: is2
* Set if lookup is done for egress router leg
- * VCAP_KF_L3_FRAGMENT: W1, lan966x: is2
+ * VCAP_KF_L3_FRAGMENT: W1, lan966x: is1/is2
* Set if IPv4 frame is fragmented
* VCAP_KF_L3_FRAGMENT_TYPE: W2, sparx5: is0/is2/es2
* L3 Fragmentation type (none, initial, suspicious, valid follow up)
* VCAP_KF_L3_FRAG_INVLD_L4_LEN: W1, sparx5: is0/is2
* Set if frame's L4 length is less than ANA_CL:COMMON:CLM_FRAGMENT_CFG.L4_MIN_L
* EN
- * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is2
+ * VCAP_KF_L3_FRAG_OFS_GT0: W1, lan966x: is1/is2
* Set if IPv4 frame is fragmented and it is not the first fragment
- * VCAP_KF_L3_IP4_DIP: W32, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_IP4_DIP: W32, sparx5: is0/is2/es2, lan966x: is1/is2
* Destination IPv4 Address
- * VCAP_KF_L3_IP4_SIP: W32, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_IP4_SIP: W32, sparx5: is0/is2/es2, lan966x: is1/is2
* Source IPv4 Address
- * VCAP_KF_L3_IP6_DIP: W128, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_IP6_DIP: sparx5 is0 W128, sparx5 is2 W128, sparx5 es2 W128,
+ * lan966x is1 W64, lan966x is1 W128, lan966x is2 W128
* Sparx5: Full IPv6 DIP, LAN966x: Either Full IPv6 DIP or a subset depending on
* frame type
- * VCAP_KF_L3_IP6_SIP: W128, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_IP6_DIP_MSB: W16, lan966x: is1
+ * MS 16bits of IPv6 DIP
+ * VCAP_KF_L3_IP6_SIP: sparx5 is0 W128, sparx5 is2 W128, sparx5 es2 W128,
+ * lan966x is1 W128, lan966x is1 W64, lan966x is2 W128
* Sparx5: Full IPv6 SIP, LAN966x: Either Full IPv6 SIP or a subset depending on
* frame type
- * VCAP_KF_L3_IP_PROTO: W8, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_IP6_SIP_MSB: W16, lan966x: is1
+ * MS 16bits of IPv6 SIP
+ * VCAP_KF_L3_IP_PROTO: W8, sparx5: is0/is2/es2, lan966x: is1/is2
* IPv4 frames: IP protocol. IPv6 frames: Next header, same as for IPV4
- * VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Set if IPv4 frame contains options (IP len > 5)
* VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96, sparx5
* es2 W40, lan966x is2 W56
* VCAP_KF_L4_PSH: W1, sparx5: is2/es2, lan966x: is2
* Sparx5: TCP flag PSH, LAN966x: TCP: TCP flag PSH. PTP over UDP: flagField bit
* 1 (twoStepFlag)
- * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x is2 W8
+ * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16, lan966x is1
+ * W8, lan966x is2 W8
* Range checker bitmask (one for each range checker). Input into range checkers
* is taken from classified results (VID, DSCP) and frame (SPORT, DPORT, ETYPE,
* outer VID, inner VID)
* VCAP_KF_L4_SEQUENCE_EQ0_IS: W1, sparx5: is2/es2, lan966x: is2
* Set if TCP sequence number is 0, LAN966x: Overlayed with PTP over UDP:
* messageType bit 0
- * VCAP_KF_L4_SPORT: W16, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_L4_SPORT: W16, sparx5: is0/is2/es2, lan966x: is1/is2
* TCP/UDP source port
* VCAP_KF_L4_SPORT_EQ_DPORT_IS: W1, sparx5: is2/es2, lan966x: is2
* Set if UDP or TCP source port equals UDP or TCP destination port
* VCAP_KF_L4_URG: W1, sparx5: is2/es2, lan966x: is2
* Sparx5: TCP flag URG, LAN966x: TCP: TCP flag URG. PTP over UDP: flagField bit
* 7 (reserved)
- * VCAP_KF_LOOKUP_FIRST_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_LOOKUP_FIRST_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Selects between entries relevant for first and second lookup. Set for first
* lookup, cleared for second lookup.
* VCAP_KF_LOOKUP_GEN_IDX: W12, sparx5: is0
* Generic index - for chaining CLM instances
* VCAP_KF_LOOKUP_GEN_IDX_SEL: W2, sparx5: is0
* Select the mode of the Generic Index
+ * VCAP_KF_LOOKUP_INDEX: W2, lan966x: is1
+ * 0: First lookup, 1: Second lookup, 2: Third lookup. Similar to VCAP_KF_FIRST
+ * but with extra info
* VCAP_KF_LOOKUP_PAG: W8, sparx5: is2, lan966x: is2
* Classified Policy Association Group: chains rules from IS1/CLM to IS2
* VCAP_KF_MIRROR_PROBE: W2, sparx5: es2
* Set if frame's EtherType = 0x8902
* VCAP_KF_PROT_ACTIVE: W1, sparx5: es0/es2
* Protection is active
- * VCAP_KF_TCP_IS: W1, sparx5: is0/is2/es2, lan966x: is2
+ * VCAP_KF_RT_FRMID: W32, lan966x: is1
+ * Profinet or OPC-UA FrameId
+ * VCAP_KF_RT_TYPE: W2, lan966x: is1
+ * Encoding of frame's EtherType: 0: Other, 1: Profinet, 2: OPC-UA, 3: Custom
+ * (ANA::RT_CUSTOM)
+ * VCAP_KF_RT_VLAN_IDX: W3, lan966x: is1
+ * Real-time VLAN index from ANA::RT_VLAN_PCP
+ * VCAP_KF_TCP_IS: W1, sparx5: is0/is2/es2, lan966x: is1/is2
* Set if frame is IPv4 TCP frame (IP protocol = 6) or IPv6 TCP frames (Next
* header = 6)
- * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2
+ * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2, lan966x: is1
* Set if frame is IPv4/IPv6 TCP or UDP frame (IP protocol/next header equals 6
* or 17)
* VCAP_KF_TYPE: sparx5 is0 W2, sparx5 is0 W1, sparx5 is2 W4, sparx5 is2 W2,
- * sparx5 es0 W1, sparx5 es2 W3, lan966x is2 W4, lan966x is2 W2
+ * sparx5 es0 W1, sparx5 es2 W3, lan966x is1 W1, lan966x is1 W2, lan966x is2 W4,
+ * lan966x is2 W2
* Keyset type id - set by the API
*/
VCAP_KF_8021BR_GRP,
VCAP_KF_8021BR_IGR_ECID_BASE,
VCAP_KF_8021BR_IGR_ECID_EXT,
+ VCAP_KF_8021CB_R_TAGGED_IS,
VCAP_KF_8021Q_DEI0,
VCAP_KF_8021Q_DEI1,
VCAP_KF_8021Q_DEI2,
VCAP_KF_8021Q_VID1,
VCAP_KF_8021Q_VID2,
VCAP_KF_8021Q_VID_CLS,
+ VCAP_KF_8021Q_VLAN_DBL_TAGGED_IS,
VCAP_KF_8021Q_VLAN_TAGGED_IS,
VCAP_KF_8021Q_VLAN_TAGS,
VCAP_KF_ACL_GRP_ID,
VCAP_KF_IP4_IS,
VCAP_KF_IP_MC_IS,
VCAP_KF_IP_PAYLOAD_5TUPLE,
+ VCAP_KF_IP_PAYLOAD_S1_IP6,
VCAP_KF_IP_SNAP_IS,
VCAP_KF_ISDX_CLS,
VCAP_KF_ISDX_GT0_IS,
VCAP_KF_L2_FRM_TYPE,
VCAP_KF_L2_FWD_IS,
VCAP_KF_L2_LLC,
+ VCAP_KF_L2_MAC,
VCAP_KF_L2_MC_IS,
VCAP_KF_L2_PAYLOAD0,
VCAP_KF_L2_PAYLOAD1,
VCAP_KF_L3_IP4_DIP,
VCAP_KF_L3_IP4_SIP,
VCAP_KF_L3_IP6_DIP,
+ VCAP_KF_L3_IP6_DIP_MSB,
VCAP_KF_L3_IP6_SIP,
+ VCAP_KF_L3_IP6_SIP_MSB,
VCAP_KF_L3_IP_PROTO,
VCAP_KF_L3_OPTIONS_IS,
VCAP_KF_L3_PAYLOAD,
VCAP_KF_LOOKUP_FIRST_IS,
VCAP_KF_LOOKUP_GEN_IDX,
VCAP_KF_LOOKUP_GEN_IDX_SEL,
+ VCAP_KF_LOOKUP_INDEX,
VCAP_KF_LOOKUP_PAG,
VCAP_KF_MIRROR_PROBE,
VCAP_KF_OAM_CCM_CNTS_EQ0,
VCAP_KF_OAM_VER,
VCAP_KF_OAM_Y1731_IS,
VCAP_KF_PROT_ACTIVE,
+ VCAP_KF_RT_FRMID,
+ VCAP_KF_RT_TYPE,
+ VCAP_KF_RT_VLAN_IDX,
VCAP_KF_TCP_IS,
VCAP_KF_TCP_UDP_IS,
VCAP_KF_TYPE,
VCAP_AFS_CLASS_REDUCED, /* sparx5 is0 X1 */
VCAP_AFS_ES0, /* sparx5 es0 X1 */
VCAP_AFS_FULL, /* sparx5 is0 X3 */
+ VCAP_AFS_S1, /* lan966x is1 X1 */
VCAP_AFS_SMAC_SIP, /* lan966x is2 X1 */
};
* CPU extraction queue. Used when FWD_SEL >0 and PIPELINE_ACT = XTR.
* VCAP_AF_CPU_QUEUE_NUM: W3, sparx5: is2/es2, lan966x: is2
* CPU queue number. Used when CPU_COPY_ENA is set.
+ * VCAP_AF_CUSTOM_ACE_TYPE_ENA: W4, lan966x: is1
+ * Enables use of custom keys in IS2. Bits 3:2 control second lookup in IS2
+ * while bits 1:0 control first lookup. Encoding per lookup: 0: Disabled. 1:
+ * Extract 40 bytes after position corresponding to the location of the IPv4
+ * header and use as key. 2: Extract 40 bytes after SMAC and use as key
* VCAP_AF_DEI_A_VAL: W1, sparx5: es0
* DEI used in ES0 tag A. See TAG_A_DEI_SEL.
* VCAP_AF_DEI_B_VAL: W1, sparx5: es0
* DEI used in ES0 tag B. See TAG_B_DEI_SEL.
* VCAP_AF_DEI_C_VAL: W1, sparx5: es0
* DEI used in ES0 tag C. See TAG_C_DEI_SEL.
- * VCAP_AF_DEI_ENA: W1, sparx5: is0
+ * VCAP_AF_DEI_ENA: W1, sparx5: is0, lan966x: is1
* If set, use DEI_VAL as classified DEI value. Otherwise, DEI from basic
* classification is used
- * VCAP_AF_DEI_VAL: W1, sparx5: is0
+ * VCAP_AF_DEI_VAL: W1, sparx5: is0, lan966x: is1
* See DEI_ENA
- * VCAP_AF_DP_ENA: W1, sparx5: is0
+ * VCAP_AF_DLR_SEL: W2, lan966x: is1
+ * 0: No changes to port-based selection in ANA:PORT:OAM_CFG.DLR_ENA. 1: Enable
+ * DLR frame processing 2: Disable DLR processing
+ * VCAP_AF_DP_ENA: W1, sparx5: is0, lan966x: is1
* If set, use DP_VAL as classified drop precedence level. Otherwise, drop
* precedence level from basic classification is used.
- * VCAP_AF_DP_VAL: W2, sparx5: is0
+ * VCAP_AF_DP_VAL: sparx5 is0 W2, lan966x is1 W1
* See DP_ENA.
- * VCAP_AF_DSCP_ENA: W1, sparx5: is0
+ * VCAP_AF_DSCP_ENA: W1, sparx5: is0, lan966x: is1
* If set, use DSCP_VAL as classified DSCP value. Otherwise, DSCP value from
* basic classification is used.
* VCAP_AF_DSCP_SEL: W3, sparx5: es0
* table 0, otherwise use DSCP_VAL. 5: Mapped using mapping table 1, otherwise
* use mapping table 0. 6: Mapped using mapping table 2, otherwise use DSCP_VAL.
* 7: Mapped using mapping table 3, otherwise use mapping table 2
- * VCAP_AF_DSCP_VAL: W6, sparx5: is0/es0
+ * VCAP_AF_DSCP_VAL: W6, sparx5: is0/es0, lan966x: is1
* See DSCP_ENA.
* VCAP_AF_ES2_REW_CMD: W3, sparx5: es2
* Command forwarded to REW: 0: No action. 1: SWAP MAC addresses. 2: Do L2CP
* VCAP_AF_ISDX_ADD_REPLACE_SEL: W1, sparx5: is0
* Controls the classified ISDX. 0: New ISDX = old ISDX + ISDX_VAL. 1: New ISDX
* = ISDX_VAL.
+ * VCAP_AF_ISDX_ADD_VAL: W8, lan966x: is1
+ * If ISDX_REPLACE_ENA is set, ISDX_ADD_VAL is used directly as the new ISDX.
+ * Encoding: ISDX_REPLACE_ENA=0, ISDX_ADD_VAL=0: Disabled. ISDX_REPLACE_ENA=0,
+ * ISDX_ADD_VAL>0: Add value to classified ISDX. ISDX_REPLACE_ENA=1: Replace
+ * with ISDX_ADD_VAL value.
* VCAP_AF_ISDX_ENA: W1, lan966x: is2
* Setting this bit to 1 causes the classified ISDX to be set to the value of
* POLICE_IDX[8:0].
+ * VCAP_AF_ISDX_REPLACE_ENA: W1, lan966x: is1
+ * If set, classified ISDX is set to ISDX_ADD_VAL.
* VCAP_AF_ISDX_VAL: W12, sparx5: is0
* See isdx_add_replace_sel
* VCAP_AF_LOOP_ENA: W1, sparx5: es0
* VCAP_AF_MIRROR_PROBE_ID: W2, sparx5: es2
* Signals a mirror probe to be placed in the IFH. Only possible when FWD_MODE
* is copy. 0: No mirroring. 1-3: Use mirror probe 0-2.
+ * VCAP_AF_MRP_SEL: W2, lan966x: is1
+ * 0: No changes to port-based selection in ANA:PORT:OAM_CFG.MRP_ENA. 1: Enable
+ * MRP frame processing 2: Disable MRP processing
* VCAP_AF_NXT_IDX: W12, sparx5: is0
* Index used as part of key (field G_IDX) in the next lookup.
* VCAP_AF_NXT_IDX_CTRL: W3, sparx5: is0
* Controls the generation of the G_IDX used in the VCAP CLM next lookup
- * VCAP_AF_PAG_OVERRIDE_MASK: W8, sparx5: is0
+ * VCAP_AF_OAM_SEL: W3, lan966x: is1
+ * 0: No changes to port-based selection in ANA:PORT:OAM_CFG.OAM_CFG 1: Enable
+ * OAM frame processing for untagged frames 2: Enable OAM frame processing for
+ * single frames 3: Enable OAM frame processing for double frames 4: Disable OAM
+ * frame processing
+ * VCAP_AF_PAG_OVERRIDE_MASK: W8, sparx5: is0, lan966x: is1
* Bits set in this mask will override PAG_VAL from port profile. New PAG = (PAG
* (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK)
- * VCAP_AF_PAG_VAL: W8, sparx5: is0
+ * VCAP_AF_PAG_VAL: W8, sparx5: is0, lan966x: is1
* See PAG_OVERRIDE_MASK.
* VCAP_AF_PCP_A_VAL: W3, sparx5: es0
* PCP used in ES0 tag A. See TAG_A_PCP_SEL.
* PCP used in ES0 tag B. See TAG_B_PCP_SEL.
* VCAP_AF_PCP_C_VAL: W3, sparx5: es0
* PCP used in ES0 tag C. See TAG_C_PCP_SEL.
- * VCAP_AF_PCP_ENA: W1, sparx5: is0
+ * VCAP_AF_PCP_ENA: W1, sparx5: is0, lan966x: is1
* If set, use PCP_VAL as classified PCP value. Otherwise, PCP from basic
* classification is used.
- * VCAP_AF_PCP_VAL: W3, sparx5: is0
+ * VCAP_AF_PCP_VAL: W3, sparx5: is0, lan966x: is1
* See PCP_ENA.
* VCAP_AF_PIPELINE_ACT: W1, sparx5: es0
* Pipeline action when FWD_SEL > 0. 0: XTR. CPU_QU selects CPU extraction queue
* PIPELINE_PT == NONE. Overrules previous settings of pipeline point.
* VCAP_AF_PIPELINE_PT: sparx5 is2 W5, sparx5 es0 W2
* Pipeline point used if PIPELINE_FORCE_ENA is set
- * VCAP_AF_POLICE_ENA: W1, sparx5: is2/es2, lan966x: is2
- * Setting this bit to 1 causes frames that hit this action to be policed by the
- * ACL policer specified in POLICE_IDX. Only applies to the first lookup.
- * VCAP_AF_POLICE_IDX: sparx5 is2 W6, sparx5 es2 W6, lan966x is2 W9
- * Selects VCAP policer used when policing frames (POLICE_ENA)
+ * VCAP_AF_POLICE_ENA: W1, sparx5: is2/es2, lan966x: is1/is2
+ * If set, POLICE_IDX is used to lookup ANA::POL.
+ * VCAP_AF_POLICE_IDX: sparx5 is2 W6, sparx5 es2 W6, lan966x is1 W9, lan966x is2
+ * W9
+ * Policer index.
* VCAP_AF_POLICE_REMARK: W1, sparx5: es2
* If set, frames exceeding policer rates are marked as yellow but not
* discarded.
* port. 1: ES0 tag A: Push ES0 tag A. No port tag. 2: Force port tag: Always
* push port tag. No ES0 tag A. 3: Force untag: Never push port tag or ES0 tag
* A.
- * VCAP_AF_QOS_ENA: W1, sparx5: is0
+ * VCAP_AF_QOS_ENA: W1, sparx5: is0, lan966x: is1
* If set, use QOS_VAL as classified QoS class. Otherwise, QoS class from basic
* classification is used.
- * VCAP_AF_QOS_VAL: W3, sparx5: is0
+ * VCAP_AF_QOS_VAL: W3, sparx5: is0, lan966x: is1
* See QOS_ENA.
* VCAP_AF_REW_OP: W16, lan966x: is2
* Rewriter operation command.
* VCAP_AF_RT_DIS: W1, sparx5: is2
* If set, routing is disallowed. Only applies when IS_INNER_ACL is 0. See also
* IGR_ACL_ENA, EGR_ACL_ENA, and RLEG_STAT_IDX.
+ * VCAP_AF_SFID_ENA: W1, lan966x: is1
+ * If set, SFID_VAL is used to lookup ANA::SFID.
+ * VCAP_AF_SFID_VAL: W8, lan966x: is1
+ * Stream filter identifier.
+ * VCAP_AF_SGID_ENA: W1, lan966x: is1
+ * If set, SGID_VAL is used to lookup ANA::SGID.
+ * VCAP_AF_SGID_VAL: W8, lan966x: is1
+ * Stream gate identifier.
* VCAP_AF_SWAP_MACS_ENA: W1, sparx5: es0
* This setting is only active when FWD_SEL = 1 or FWD_SEL = 2 and PIPELINE_ACT
* = LBK_ASM. 0: No action. 1: Swap MACs and clear bit 40 in new SMAC.
* VCAP_AF_TAG_C_VID_SEL: W2, sparx5: es0
* Selects VID for ES0 tag C. The resulting VID is termed C-TAG.VID. 0:
* Classified VID. 1: VID_C_VAL. 2: IFH.ENCAP.GVID. 3: Reserved.
- * VCAP_AF_TYPE: W1, sparx5: is0
+ * VCAP_AF_TYPE: W1, sparx5: is0, lan966x: is1
* Actionset type id - Set by the API
* VCAP_AF_UNTAG_VID_ENA: W1, sparx5: es0
* Controls insertion of tag C. Untag or insert mode can be selected. See
* VID used in ES0 tag B. See TAG_B_VID_SEL.
* VCAP_AF_VID_C_VAL: W12, sparx5: es0
* VID used in ES0 tag C. See TAG_C_VID_SEL.
- * VCAP_AF_VID_VAL: W13, sparx5: is0
+ * VCAP_AF_VID_REPLACE_ENA: W1, lan966x: is1
+ * Controls the classified VID: VID_REPLACE_ENA=0: Add VID_ADD_VAL to basic
+ * classified VID and use result as new classified VID. VID_REPLACE_ENA = 1:
+ * Replace basic classified VID with VID_VAL value and use as new classified
+ * VID.
+ * VCAP_AF_VID_VAL: sparx5 is0 W13, lan966x is1 W12
* New VID Value
+ * VCAP_AF_VLAN_POP_CNT: W2, lan966x: is1
+ * See VLAN_POP_CNT_ENA
+ * VCAP_AF_VLAN_POP_CNT_ENA: W1, lan966x: is1
+ * If set, use VLAN_POP_CNT as the number of VLAN tags to pop from the incoming
+ * frame. This number is used by the Rewriter. Otherwise, VLAN_POP_CNT from
+ * ANA:PORT:VLAN_CFG.VLAN_POP_CNT is used
*/
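For reference, the PAG override and the ISDX add/replace encodings described
above can be illustrated with two hypothetical helpers (illustration only, not
part of the patch; names are invented):

/* New PAG = (PAG AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK) */
static inline u8 example_new_pag(u8 pag_in, u8 pag_override_mask, u8 pag_val)
{
	return (pag_in & ~pag_override_mask) | (pag_val & pag_override_mask);
}

/* ISDX_REPLACE_ENA=1: replace with ISDX_ADD_VAL, otherwise add it (0 = keep) */
static inline u16 example_new_isdx(u16 isdx_in, bool isdx_replace_ena,
				   u16 isdx_add_val)
{
	return isdx_replace_ena ? isdx_add_val : isdx_in + isdx_add_val;
}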
/* Actionfield names */
VCAP_AF_CPU_COPY_ENA,
VCAP_AF_CPU_QU,
VCAP_AF_CPU_QUEUE_NUM,
+ VCAP_AF_CUSTOM_ACE_TYPE_ENA,
VCAP_AF_DEI_A_VAL,
VCAP_AF_DEI_B_VAL,
VCAP_AF_DEI_C_VAL,
VCAP_AF_DEI_ENA,
VCAP_AF_DEI_VAL,
+ VCAP_AF_DLR_SEL,
VCAP_AF_DP_ENA,
VCAP_AF_DP_VAL,
VCAP_AF_DSCP_ENA,
VCAP_AF_IGNORE_PIPELINE_CTRL,
VCAP_AF_INTR_ENA,
VCAP_AF_ISDX_ADD_REPLACE_SEL,
+ VCAP_AF_ISDX_ADD_VAL,
VCAP_AF_ISDX_ENA,
+ VCAP_AF_ISDX_REPLACE_ENA,
VCAP_AF_ISDX_VAL,
VCAP_AF_LOOP_ENA,
VCAP_AF_LRN_DIS,
VCAP_AF_MIRROR_ENA,
VCAP_AF_MIRROR_PROBE,
VCAP_AF_MIRROR_PROBE_ID,
+ VCAP_AF_MRP_SEL,
VCAP_AF_NXT_IDX,
VCAP_AF_NXT_IDX_CTRL,
+ VCAP_AF_OAM_SEL,
VCAP_AF_PAG_OVERRIDE_MASK,
VCAP_AF_PAG_VAL,
VCAP_AF_PCP_A_VAL,
VCAP_AF_QOS_VAL,
VCAP_AF_REW_OP,
VCAP_AF_RT_DIS,
+ VCAP_AF_SFID_ENA,
+ VCAP_AF_SFID_VAL,
+ VCAP_AF_SGID_ENA,
+ VCAP_AF_SGID_VAL,
VCAP_AF_SWAP_MACS_ENA,
VCAP_AF_TAG_A_DEI_SEL,
VCAP_AF_TAG_A_PCP_SEL,
VCAP_AF_VID_A_VAL,
VCAP_AF_VID_B_VAL,
VCAP_AF_VID_C_VAL,
+ VCAP_AF_VID_REPLACE_ENA,
VCAP_AF_VID_VAL,
+ VCAP_AF_VLAN_POP_CNT,
+ VCAP_AF_VLAN_POP_CNT_ENA,
};
#endif /* __VCAP_AG_API__ */
}
EXPORT_SYMBOL_GPL(vcap_lookup_rule_by_cookie);
+/* Get number of rules in a vcap instance lookup chain id range */
+int vcap_admin_rule_count(struct vcap_admin *admin, int cid)
+{
+ int max_cid = roundup(cid + 1, VCAP_CID_LOOKUP_SIZE);
+ int min_cid = rounddown(cid, VCAP_CID_LOOKUP_SIZE);
+ struct vcap_rule_internal *elem;
+ int count = 0;
+
+ list_for_each_entry(elem, &admin->rules, list) {
+ mutex_lock(&admin->lock);
+ if (elem->data.vcap_chain_id >= min_cid &&
+ elem->data.vcap_chain_id < max_cid)
+ ++count;
+ mutex_unlock(&admin->lock);
+ }
+ return count;
+}
+EXPORT_SYMBOL_GPL(vcap_admin_rule_count);
+
/* Make a copy of the rule, shallow or full */
static struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri,
bool full)
}
EXPORT_SYMBOL_GPL(vcap_rule_mod_key_u32);
+/* Remove a key field with value and mask in the rule */
+int vcap_rule_rem_key(struct vcap_rule *rule, enum vcap_key_field key)
+{
+ struct vcap_rule_internal *ri = to_intrule(rule);
+ struct vcap_client_keyfield *field;
+
+ field = vcap_find_keyfield(rule, key);
+ if (!field) {
+ pr_err("%s:%d: key %s is not in the rule\n",
+ __func__, __LINE__, vcap_keyfield_name(ri->vctrl, key));
+ return -EINVAL;
+ }
+ /* Deallocate the key field */
+ list_del(&field->ctrl.list);
+ kfree(field);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(vcap_rule_rem_key);
+
static int vcap_rule_mod_action(struct vcap_rule *rule,
enum vcap_action_field action,
enum vcap_field_type ftype,
}
EXPORT_SYMBOL_GPL(vcap_filter_rule_keys);
+/* Select the keyset from the list that results in the smallest rule size */
+enum vcap_keyfield_set
+vcap_select_min_rule_keyset(struct vcap_control *vctrl,
+ enum vcap_type vtype,
+ struct vcap_keyset_list *kslist)
+{
+ enum vcap_keyfield_set ret = VCAP_KFS_NO_VALUE;
+ const struct vcap_set *kset;
+ int max = 100, idx;
+
+ for (idx = 0; idx < kslist->cnt; ++idx) {
+ kset = vcap_keyfieldset(vctrl, vtype, kslist->keysets[idx]);
+ if (!kset)
+ continue;
+ if (kset->sw_per_item >= max)
+ continue;
+ max = kset->sw_per_item;
+ ret = kslist->keysets[idx];
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(vcap_select_min_rule_keyset);
+
/* Make a full copy of an existing rule with a new rule id */
struct vcap_rule *vcap_copy_rule(struct vcap_rule *erule)
{
int vcap_rule_add_action_u32(struct vcap_rule *rule,
enum vcap_action_field action, u32 value);
+/* Get number of rules in a vcap instance lookup chain id range */
+int vcap_admin_rule_count(struct vcap_admin *admin, int cid);
+
/* VCAP rule counter operations */
int vcap_get_rule_count_by_cookie(struct vcap_control *vctrl,
struct vcap_counter *ctr, u64 cookie);
int vcap_rule_get_key_u32(struct vcap_rule *rule, enum vcap_key_field key,
u32 *value, u32 *mask);
+/* Remove a key field with value and mask in the rule */
+int vcap_rule_rem_key(struct vcap_rule *rule, enum vcap_key_field key);
+
+/* Select the keyset from the list that results in the smallest rule size */
+enum vcap_keyfield_set
+vcap_select_min_rule_keyset(struct vcap_control *vctrl, enum vcap_type vtype,
+ struct vcap_keyset_list *kslist);
+
struct vcap_client_actionfield *
vcap_find_actionfield(struct vcap_rule *rule, enum vcap_action_field act);
#endif /* __VCAP_API_CLIENT__ */
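A minimal client-side sketch (not part of the patch) of how the new helpers
are meant to be chained when narrowing a rule to the smallest usable keyset,
mirroring the sparx5 template path above; the removed key (VCAP_KF_ETYPE), the
list size and the function name are illustrative only:

/* Narrow a rule to the smallest keyset that still fits its keys */
static int example_pick_min_keyset(struct vcap_control *vctrl,
				   struct vcap_admin *admin,
				   struct vcap_rule *vrule)
{
	enum vcap_keyfield_set keysets[10];
	struct vcap_keyset_list kslist = {
		.keysets = keysets,
		.max = ARRAY_SIZE(keysets),
	};
	enum vcap_keyfield_set keyset;

	/* Drop a key that would otherwise force a wider keyset (example key) */
	vcap_rule_rem_key(vrule, VCAP_KF_ETYPE);

	/* Collect the keysets that can still hold the remaining keys */
	if (!vcap_rule_find_keysets(vrule, &kslist))
		return -ENOENT;

	/* Prefer the keyset using the fewest subwords and bind the rule to it */
	keyset = vcap_select_min_rule_keyset(vctrl, admin->vtype, &kslist);
	return vcap_set_rule_set_keyset(vrule, keyset);
}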
"default_cnt: 73\n",
"require_cnt_dis: 0\n",
"version: 1\n",
- "vtype: 3\n",
+ "vtype: 4\n",
"vinst: 0\n",
"ingress: 1\n",
"first_cid: 10000\n",
"default_cnt: 73\n",
"require_cnt_dis: 0\n",
"version: 1\n",
- "vtype: 3\n",
+ "vtype: 4\n",
"vinst: 0\n",
"ingress: 1\n",
"first_cid: 8000000\n",
release_region:
pci_release_regions(pdev);
disable_dev:
- pci_clear_master(pdev);
pci_disable_device(pdev);
dev_err(&pdev->dev, "gdma probe failed: err = %d\n", err);
return err;
vfree(gc);
pci_release_regions(pdev);
- pci_clear_master(pdev);
pci_disable_device(pdev);
}
struct mana_txq *txq;
struct mana_cq *cq;
int err, len;
+ u16 ihs;
if (unlikely(!apc->port_is_up))
goto tx_drop;
txq = &apc->tx_qp[txq_idx].txq;
gdma_sq = txq->gdma_sq;
cq = &apc->tx_qp[txq_idx].tx_cq;
+ tx_stats = &txq->stats;
pkg.tx_oob.s_oob.vcq_num = cq->gdma_id;
pkg.tx_oob.s_oob.vsq_frame = txq->vsq_frame;
pkg.tx_oob.s_oob.pkt_fmt = pkt_fmt;
- if (pkt_fmt == MANA_SHORT_PKT_FMT)
+ if (pkt_fmt == MANA_SHORT_PKT_FMT) {
pkg.wqe_req.inline_oob_size = sizeof(struct mana_tx_short_oob);
- else
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->short_pkt_fmt++;
+ u64_stats_update_end(&tx_stats->syncp);
+ } else {
pkg.wqe_req.inline_oob_size = sizeof(struct mana_tx_oob);
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->long_pkt_fmt++;
+ u64_stats_update_end(&tx_stats->syncp);
+ }
pkg.wqe_req.inline_oob_data = &pkg.tx_oob;
pkg.wqe_req.flags = 0;
&ipv6_hdr(skb)->daddr, 0,
IPPROTO_TCP, 0);
}
+
+ if (skb->encapsulation) {
+ ihs = skb_inner_tcp_all_headers(skb);
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->tso_inner_packets++;
+ tx_stats->tso_inner_bytes += skb->len - ihs;
+ u64_stats_update_end(&tx_stats->syncp);
+ } else {
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+ ihs = skb_transport_offset(skb) + sizeof(struct udphdr);
+ } else {
+ ihs = skb_tcp_all_headers(skb);
+ if (ipv6_has_hopopt_jumbo(skb))
+ ihs -= sizeof(struct hop_jumbo_hdr);
+ }
+
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->tso_packets++;
+ tx_stats->tso_bytes += skb->len - ihs;
+ u64_stats_update_end(&tx_stats->syncp);
+ }
+
} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
csum_type = mana_checksum_info(skb);
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->csum_partial++;
+ u64_stats_update_end(&tx_stats->syncp);
+
if (csum_type == IPPROTO_TCP) {
pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4;
pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6;
}
}
- if (mana_map_skb(skb, apc, &pkg))
+ if (mana_map_skb(skb, apc, &pkg)) {
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->mana_map_err++;
+ u64_stats_update_end(&tx_stats->syncp);
goto free_sgl_ptr;
+ }
skb_queue_tail(&txq->pending_skbs, skb);
if (comp_read < 1)
return;
+ apc->eth_stats.tx_cqes = comp_read;
+
for (i = 0; i < comp_read; i++) {
struct mana_tx_comp_oob *cqe_oob;
case CQE_TX_VLAN_TAGGING_VIOLATION:
WARN_ONCE(1, "TX: CQE error %d: ignored.\n",
cqe_oob->cqe_hdr.cqe_type);
+ apc->eth_stats.tx_cqe_err++;
break;
default:
*/
WARN_ONCE(1, "TX: Unexpected CQE type %d: HW BUG?\n",
cqe_oob->cqe_hdr.cqe_type);
+ apc->eth_stats.tx_cqe_unknown_type++;
return;
}
WARN_ON_ONCE(1);
cq->work_done = pkt_transmitted;
+
+ apc->eth_stats.tx_cqes -= pkt_transmitted;
}
static void mana_post_pkt_rxq(struct mana_rxq *rxq)
struct gdma_context *gc = rxq->gdma_rq->gdma_dev->gdma_context;
struct net_device *ndev = rxq->ndev;
struct mana_recv_buf_oob *rxbuf_oob;
+ struct mana_port_context *apc;
struct device *dev = gc->dev;
void *new_buf, *old_buf;
struct page *new_page;
u32 curr, pktlen;
dma_addr_t da;
+ apc = netdev_priv(ndev);
+
switch (oob->cqe_hdr.cqe_type) {
case CQE_RX_OKAY:
break;
case CQE_RX_COALESCED_4:
netdev_err(ndev, "RX coalescing is unsupported\n");
+ apc->eth_stats.rx_coalesced_err++;
return;
case CQE_RX_OBJECT_FENCE:
default:
netdev_err(ndev, "Unknown RX CQE type = %d\n",
oob->cqe_hdr.cqe_type);
+ apc->eth_stats.rx_cqe_unknown_type++;
return;
}
{
struct gdma_comp *comp = cq->gdma_comp_buf;
struct mana_rxq *rxq = cq->rxq;
+ struct mana_port_context *apc;
int comp_read, i;
+ apc = netdev_priv(rxq->ndev);
+
comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER);
WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER);
+ apc->eth_stats.rx_cqes = comp_read;
rxq->xdp_flush = false;
for (i = 0; i < comp_read; i++) {
return;
mana_process_rx_cqe(rxq, cq, &comp[i]);
+
+ apc->eth_stats.rx_cqes--;
}
if (rxq->xdp_flush)
} mana_eth_stats[] = {
{"stop_queue", offsetof(struct mana_ethtool_stats, stop_queue)},
{"wake_queue", offsetof(struct mana_ethtool_stats, wake_queue)},
+ {"tx_cqes", offsetof(struct mana_ethtool_stats, tx_cqes)},
+ {"tx_cq_err", offsetof(struct mana_ethtool_stats, tx_cqe_err)},
+ {"tx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
+ tx_cqe_unknown_type)},
+ {"rx_cqes", offsetof(struct mana_ethtool_stats, rx_cqes)},
+ {"rx_coalesced_err", offsetof(struct mana_ethtool_stats,
+ rx_coalesced_err)},
+ {"rx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
+ rx_cqe_unknown_type)},
};
static int mana_get_sset_count(struct net_device *ndev, int stringset)
if (stringset != ETH_SS_STATS)
return -EINVAL;
- return ARRAY_SIZE(mana_eth_stats) + num_queues * 8;
+ return ARRAY_SIZE(mana_eth_stats) + num_queues *
+ (MANA_STATS_RX_COUNT + MANA_STATS_TX_COUNT);
}
static void mana_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
p += ETH_GSTRING_LEN;
sprintf(p, "tx_%d_xdp_xmit", i);
p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_tso_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_tso_bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_tso_inner_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_tso_inner_bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_long_pkt_fmt", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_short_pkt_fmt", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_csum_partial", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_%d_mana_map_err", i);
+ p += ETH_GSTRING_LEN;
}
}
u64 xdp_xmit;
u64 xdp_drop;
u64 xdp_tx;
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tso_inner_packets;
+ u64 tso_inner_bytes;
+ u64 long_pkt_fmt;
+ u64 short_pkt_fmt;
+ u64 csum_partial;
+ u64 mana_map_err;
int q, i = 0;
if (!apc->port_is_up)
packets = tx_stats->packets;
bytes = tx_stats->bytes;
xdp_xmit = tx_stats->xdp_xmit;
+ tso_packets = tx_stats->tso_packets;
+ tso_bytes = tx_stats->tso_bytes;
+ tso_inner_packets = tx_stats->tso_inner_packets;
+ tso_inner_bytes = tx_stats->tso_inner_bytes;
+ long_pkt_fmt = tx_stats->long_pkt_fmt;
+ short_pkt_fmt = tx_stats->short_pkt_fmt;
+ csum_partial = tx_stats->csum_partial;
+ mana_map_err = tx_stats->mana_map_err;
} while (u64_stats_fetch_retry(&tx_stats->syncp, start));
data[i++] = packets;
data[i++] = bytes;
data[i++] = xdp_xmit;
+ data[i++] = tso_packets;
+ data[i++] = tso_bytes;
+ data[i++] = tso_inner_packets;
+ data[i++] = tso_inner_bytes;
+ data[i++] = long_pkt_fmt;
+ data[i++] = short_pkt_fmt;
+ data[i++] = csum_partial;
+ data[i++] = mana_map_err;
}
}
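
The per-queue TX counters added above are written under u64_stats_update_begin()/u64_stats_update_end() and read back by the ethtool path with a fetch/retry loop. A minimal sketch of that reader pattern, using an illustrative counter struct that is not part of the driver::

    #include <linux/u64_stats_sync.h>

    struct queue_counters {
            struct u64_stats_sync syncp;
            u64 tso_packets;
            u64 tso_bytes;
    };

    /* Snapshot both counters consistently: retry if a writer raced with us. */
    static void read_queue_counters(struct queue_counters *c,
                                    u64 *tso_packets, u64 *tso_bytes)
    {
            unsigned int start;

            do {
                    start = u64_stats_fetch_begin(&c->syncp);
                    *tso_packets = c->tso_packets;
                    *tso_bytes = c->tso_bytes;
            } while (u64_stats_fetch_retry(&c->syncp, start));
    }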
#include <linux/dsa/ocelot.h>
#include <linux/if_bridge.h>
#include <linux/iopoll.h>
+#include <linux/phy/phy.h>
+#include <soc/mscc/ocelot_hsio.h>
#include <soc/mscc/ocelot_vcap.h>
#include "ocelot.h"
#include "ocelot_vcap.h"
ocelot_write(ocelot, MACACCESS_CMD_INIT, ANA_TABLES_MACACCESS);
}
+void ocelot_pll5_init(struct ocelot *ocelot)
+{
+ /* Configure PLL5. This will need a proper CCF driver
+ * The values are coming from the VTSS API for Ocelot
+ */
+ regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG4,
+ HSIO_PLL5G_CFG4_IB_CTRL(0x7600) |
+ HSIO_PLL5G_CFG4_IB_BIAS_CTRL(0x8));
+ regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG0,
+ HSIO_PLL5G_CFG0_CORE_CLK_DIV(0x11) |
+ HSIO_PLL5G_CFG0_CPU_CLK_DIV(2) |
+ HSIO_PLL5G_CFG0_ENA_BIAS |
+ HSIO_PLL5G_CFG0_ENA_VCO_BUF |
+ HSIO_PLL5G_CFG0_ENA_CP1 |
+ HSIO_PLL5G_CFG0_SELCPI(2) |
+ HSIO_PLL5G_CFG0_LOOP_BW_RES(0xe) |
+ HSIO_PLL5G_CFG0_SELBGV820(4) |
+ HSIO_PLL5G_CFG0_DIV4 |
+ HSIO_PLL5G_CFG0_ENA_CLKTREE |
+ HSIO_PLL5G_CFG0_ENA_LANE);
+ regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG2,
+ HSIO_PLL5G_CFG2_EN_RESET_FRQ_DET |
+ HSIO_PLL5G_CFG2_EN_RESET_OVERRUN |
+ HSIO_PLL5G_CFG2_GAIN_TEST(0x8) |
+ HSIO_PLL5G_CFG2_ENA_AMPCTRL |
+ HSIO_PLL5G_CFG2_PWD_AMPCTRL_N |
+ HSIO_PLL5G_CFG2_AMPC_SEL(0x10));
+}
+EXPORT_SYMBOL(ocelot_pll5_init);
+
static void ocelot_vcap_enable(struct ocelot *ocelot, int port)
{
ocelot_write_gix(ocelot, ANA_PORT_VCAP_S2_CFG_S2_ENA |
return err;
}
+int ocelot_port_configure_serdes(struct ocelot *ocelot, int port,
+ struct device_node *portnp)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ struct device *dev = ocelot->dev;
+ int err;
+
+ /* Ensure clock signals and speed are set on all QSGMII links */
+ if (ocelot_port->phy_mode == PHY_INTERFACE_MODE_QSGMII)
+ ocelot_port_rmwl(ocelot_port, 0,
+ DEV_CLOCK_CFG_MAC_TX_RST |
+ DEV_CLOCK_CFG_MAC_RX_RST,
+ DEV_CLOCK_CFG);
+
+ if (ocelot_port->phy_mode != PHY_INTERFACE_MODE_INTERNAL) {
+ struct phy *serdes = of_phy_get(portnp, NULL);
+
+ if (IS_ERR(serdes)) {
+ err = PTR_ERR(serdes);
+ dev_err_probe(dev, err,
+ "missing SerDes phys for port %d\n",
+ port);
+ return err;
+ }
+
+ err = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET,
+ ocelot_port->phy_mode);
+ of_phy_put(serdes);
+ if (err) {
+ dev_err(dev, "Could not set SerDes mode on port %d: %pe\n",
+ port, ERR_PTR(err));
+ return err;
+ }
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ocelot_port_configure_serdes);
+
+void ocelot_phylink_mac_config(struct ocelot *ocelot, int port,
+ unsigned int link_an_mode,
+ const struct phylink_link_state *state)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+
+ /* Disable HDX fast control */
+ ocelot_port_writel(ocelot_port, DEV_PORT_MISC_HDX_FAST_DIS,
+ DEV_PORT_MISC);
+
+ /* SGMII only for now */
+ ocelot_port_writel(ocelot_port, PCS1G_MODE_CFG_SGMII_MODE_ENA,
+ PCS1G_MODE_CFG);
+ ocelot_port_writel(ocelot_port, PCS1G_SD_CFG_SD_SEL, PCS1G_SD_CFG);
+
+ /* Enable PCS */
+ ocelot_port_writel(ocelot_port, PCS1G_CFG_PCS_ENA, PCS1G_CFG);
+
+ /* No aneg on SGMII */
+ ocelot_port_writel(ocelot_port, 0, PCS1G_ANEG_CFG);
+
+ /* No loopback */
+ ocelot_port_writel(ocelot_port, 0, PCS1G_LB_CFG);
+}
+EXPORT_SYMBOL_GPL(ocelot_phylink_mac_config);
+
void ocelot_phylink_mac_link_down(struct ocelot *ocelot, int port,
unsigned int link_an_mode,
phy_interface_t interface,
{
struct net_device *ndev = to_net_dev(config->dev);
struct ocelot_port_private *priv = netdev_priv(ndev);
- struct ocelot_port *ocelot_port = &priv->port;
-
- /* Disable HDX fast control */
- ocelot_port_writel(ocelot_port, DEV_PORT_MISC_HDX_FAST_DIS,
- DEV_PORT_MISC);
-
- /* SGMII only for now */
- ocelot_port_writel(ocelot_port, PCS1G_MODE_CFG_SGMII_MODE_ENA,
- PCS1G_MODE_CFG);
- ocelot_port_writel(ocelot_port, PCS1G_SD_CFG_SD_SEL, PCS1G_SD_CFG);
-
- /* Enable PCS */
- ocelot_port_writel(ocelot_port, PCS1G_CFG_PCS_ENA, PCS1G_CFG);
-
- /* No aneg on SGMII */
- ocelot_port_writel(ocelot_port, 0, PCS1G_ANEG_CFG);
+ struct ocelot *ocelot = priv->port.ocelot;
+ int port = priv->port.index;
- /* No loopback */
- ocelot_port_writel(ocelot_port, 0, PCS1G_LB_CFG);
+ ocelot_phylink_mac_config(ocelot, port, link_an_mode, state);
}
static void vsc7514_phylink_mac_link_down(struct phylink_config *config,
return -EINVAL;
}
- /* Ensure clock signals and speed are set on all QSGMII links */
- if (phy_mode == PHY_INTERFACE_MODE_QSGMII)
- ocelot_port_rmwl(ocelot_port, 0,
- DEV_CLOCK_CFG_MAC_TX_RST |
- DEV_CLOCK_CFG_MAC_RX_RST,
- DEV_CLOCK_CFG);
-
ocelot_port->phy_mode = phy_mode;
- if (phy_mode != PHY_INTERFACE_MODE_INTERNAL) {
- struct phy *serdes = of_phy_get(portnp, NULL);
-
- if (IS_ERR(serdes)) {
- err = PTR_ERR(serdes);
- dev_err_probe(dev, err,
- "missing SerDes phys for port %d\n",
- port);
- return err;
- }
-
- err = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET, phy_mode);
- of_phy_put(serdes);
- if (err) {
- dev_err(dev, "Could not SerDes mode on port %d: %pe\n",
- port, ERR_PTR(err));
- return err;
- }
- }
+ err = ocelot_port_configure_serdes(ocelot, port, portnp);
+ if (err)
+ return err;
priv = container_of(ocelot_port, struct ocelot_port_private, port);
#include <soc/mscc/ocelot.h>
#include <soc/mscc/ocelot_vcap.h>
-#include <soc/mscc/ocelot_hsio.h>
#include <soc/mscc/vsc7514_regs.h>
#include "ocelot_fdma.h"
#include "ocelot.h"
#define VSC7514_VCAP_POLICER_BASE 128
#define VSC7514_VCAP_POLICER_MAX 191
-static void ocelot_pll5_init(struct ocelot *ocelot)
-{
- /* Configure PLL5. This will need a proper CCF driver
- * The values are coming from the VTSS API for Ocelot
- */
- regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG4,
- HSIO_PLL5G_CFG4_IB_CTRL(0x7600) |
- HSIO_PLL5G_CFG4_IB_BIAS_CTRL(0x8));
- regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG0,
- HSIO_PLL5G_CFG0_CORE_CLK_DIV(0x11) |
- HSIO_PLL5G_CFG0_CPU_CLK_DIV(2) |
- HSIO_PLL5G_CFG0_ENA_BIAS |
- HSIO_PLL5G_CFG0_ENA_VCO_BUF |
- HSIO_PLL5G_CFG0_ENA_CP1 |
- HSIO_PLL5G_CFG0_SELCPI(2) |
- HSIO_PLL5G_CFG0_LOOP_BW_RES(0xe) |
- HSIO_PLL5G_CFG0_SELBGV820(4) |
- HSIO_PLL5G_CFG0_DIV4 |
- HSIO_PLL5G_CFG0_ENA_CLKTREE |
- HSIO_PLL5G_CFG0_ENA_LANE);
- regmap_write(ocelot->targets[HSIO], HSIO_PLL5G_CFG2,
- HSIO_PLL5G_CFG2_EN_RESET_FRQ_DET |
- HSIO_PLL5G_CFG2_EN_RESET_OVERRUN |
- HSIO_PLL5G_CFG2_GAIN_TEST(0x8) |
- HSIO_PLL5G_CFG2_ENA_AMPCTRL |
- HSIO_PLL5G_CFG2_PWD_AMPCTRL_N |
- HSIO_PLL5G_CFG2_AMPC_SEL(0x10));
-}
-
static int ocelot_chip_init(struct ocelot *ocelot, const struct ocelot_ops *ops)
{
int ret;
bool is_pre_ct_flow(struct flow_cls_offload *flow)
{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
+ struct flow_dissector *dissector = rule->match.dissector;
struct flow_action_entry *act;
+ struct flow_match_ct ct;
int i;
+ if (dissector->used_keys & BIT(FLOW_DISSECTOR_KEY_CT)) {
+ flow_rule_match_ct(rule, &ct);
+ if (ct.key->ct_state)
+ return false;
+ }
+
+ if (flow->common.chain_index)
+ return false;
+
flow_action_for_each(i, act, &flow->rule->action) {
if (act->id == FLOW_ACTION_CT) {
/* The pre_ct rule only have the ct or ct nat action, cannot
struct flow_match_ct ct;
int i;
- /* post ct entry cannot contains any ct action except ct_clear. */
- flow_action_for_each(i, act, &flow->rule->action) {
- if (act->id == FLOW_ACTION_CT) {
- /* ignore ct clear action. */
- if (act->ct.action == TCA_CT_ACT_CLEAR) {
- exist_ct_clear = true;
- continue;
- }
-
- return false;
- }
- }
-
if (dissector->used_keys & BIT(FLOW_DISSECTOR_KEY_CT)) {
flow_rule_match_ct(rule, &ct);
if (ct.key->ct_state & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED)
return true;
} else {
+ /* post ct entry cannot contain any ct action except ct_clear. */
+ flow_action_for_each(i, act, &flow->rule->action) {
+ if (act->id == FLOW_ACTION_CT) {
+ /* ignore ct clear action. */
+ if (act->ct.action == TCA_CT_ACT_CLEAR) {
+ exist_ct_clear = true;
+ continue;
+ }
+
+ return false;
+ }
+ }
/* when do nat with ct, the post ct entry ignore the ct status,
* will match the nat field(sip/dip) instead. In this situation,
* the flow chain index is not zero and contains ct clear action.
return 0;
}
+/* Extra check for multiple ct-zone merges;
+ * currently supports merge checks of nft entries in different zones
+ */
+static int nfp_ct_merge_extra_check(struct nfp_fl_ct_flow_entry *nft_entry,
+ struct nfp_fl_ct_tc_merge *tc_m_entry)
+{
+ struct nfp_fl_nft_tc_merge *prev_nft_m_entry;
+ struct nfp_fl_ct_flow_entry *pre_ct_entry;
+
+ pre_ct_entry = tc_m_entry->pre_ct_parent;
+ prev_nft_m_entry = pre_ct_entry->prev_m_entries[pre_ct_entry->num_prev_m_entries - 1];
+
+ return nfp_ct_merge_check(prev_nft_m_entry->nft_parent, nft_entry);
+}
+
static int nfp_ct_merge_act_check(struct nfp_fl_ct_flow_entry *pre_ct_entry,
struct nfp_fl_ct_flow_entry *post_ct_entry,
struct nfp_fl_ct_flow_entry *nft_entry)
static int nfp_fl_merge_actions_offload(struct flow_rule **rules,
struct nfp_flower_priv *priv,
struct net_device *netdev,
- struct nfp_fl_payload *flow_pay)
+ struct nfp_fl_payload *flow_pay,
+ int num_rules)
{
enum flow_action_hw_stats tmp_stats = FLOW_ACTION_HW_STATS_DONT_CARE;
struct flow_action_entry *a_in;
- int i, j, num_actions, id;
+ int i, j, id, num_actions = 0;
struct flow_rule *a_rule;
int err = 0, offset = 0;
- num_actions = rules[CT_TYPE_PRE_CT]->action.num_entries +
- rules[CT_TYPE_NFT]->action.num_entries +
- rules[CT_TYPE_POST_CT]->action.num_entries;
+ for (i = 0; i < num_rules; i++)
+ num_actions += rules[i]->action.num_entries;
/* Add one action to make sure there is enough room to add an checksum action
* when do nat.
*/
- a_rule = flow_rule_alloc(num_actions + 1);
+ a_rule = flow_rule_alloc(num_actions + (num_rules / 2));
if (!a_rule)
return -ENOMEM;
- /* Actions need a BASIC dissector. */
- a_rule->match = rules[CT_TYPE_PRE_CT]->match;
/* post_ct entry have one action at least. */
- if (rules[CT_TYPE_POST_CT]->action.num_entries != 0) {
- tmp_stats = rules[CT_TYPE_POST_CT]->action.entries[0].hw_stats;
- }
+ if (rules[num_rules - 1]->action.num_entries != 0)
+ tmp_stats = rules[num_rules - 1]->action.entries[0].hw_stats;
+
+ /* Actions need a BASIC dissector. */
+ a_rule->match = rules[0]->match;
/* Copy actions */
- for (j = 0; j < _CT_TYPE_MAX; j++) {
+ for (j = 0; j < num_rules; j++) {
u32 csum_updated = 0;
u8 ip_proto = 0;
/* nft entry is generated by tc ct, which mangle action do not care
* the stats, inherit the post entry stats to meet the
* flow_action_hw_stats_check.
+ * nft entry flow rules are at odd array indices.
*/
- if (j == CT_TYPE_NFT) {
+ if (j & 0x01) {
if (a_in->hw_stats == FLOW_ACTION_HW_STATS_DONT_CARE)
a_in->hw_stats = tmp_stats;
nfp_fl_get_csum_flag(a_in, ip_proto, &csum_updated);
{
enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE;
struct nfp_fl_ct_zone_entry *zt = m_entry->zt;
+ struct flow_rule *rules[NFP_MAX_ENTRY_RULES];
+ struct nfp_fl_ct_flow_entry *pre_ct_entry;
struct nfp_fl_key_ls key_layer, tmp_layer;
struct nfp_flower_priv *priv = zt->priv;
u16 key_map[_FLOW_PAY_LAYERS_MAX];
struct nfp_fl_payload *flow_pay;
-
- struct flow_rule *rules[_CT_TYPE_MAX];
u8 *key, *msk, *kdata, *mdata;
struct nfp_port *port = NULL;
+ int num_rules, err, i, j = 0;
struct net_device *netdev;
bool qinq_sup;
u32 port_id;
u16 offset;
- int i, err;
netdev = m_entry->netdev;
qinq_sup = !!(priv->flower_ext_feats & NFP_FL_FEATS_VLAN_QINQ);
- rules[CT_TYPE_PRE_CT] = m_entry->tc_m_parent->pre_ct_parent->rule;
- rules[CT_TYPE_NFT] = m_entry->nft_parent->rule;
- rules[CT_TYPE_POST_CT] = m_entry->tc_m_parent->post_ct_parent->rule;
+ pre_ct_entry = m_entry->tc_m_parent->pre_ct_parent;
+ num_rules = pre_ct_entry->num_prev_m_entries * 2 + _CT_TYPE_MAX;
+
+ for (i = 0; i < pre_ct_entry->num_prev_m_entries; i++) {
+ rules[j++] = pre_ct_entry->prev_m_entries[i]->tc_m_parent->pre_ct_parent->rule;
+ rules[j++] = pre_ct_entry->prev_m_entries[i]->nft_parent->rule;
+ }
+
+ rules[j++] = m_entry->tc_m_parent->pre_ct_parent->rule;
+ rules[j++] = m_entry->nft_parent->rule;
+ rules[j++] = m_entry->tc_m_parent->post_ct_parent->rule;
memset(&key_layer, 0, sizeof(struct nfp_fl_key_ls));
memset(&key_map, 0, sizeof(key_map));
/* Calculate the resultant key layer and size for offload */
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
err = nfp_flower_calculate_key_layers(priv->app,
m_entry->netdev,
&tmp_layer, rules[i],
* that the layer is not present.
*/
if (!qinq_sup) {
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
offset = key_map[FLOW_PAY_META_TCI];
key = kdata + offset;
msk = mdata + offset;
offset = key_map[FLOW_PAY_MAC_MPLS];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_mac((struct nfp_flower_mac_mpls *)key,
(struct nfp_flower_mac_mpls *)msk,
rules[i]);
offset = key_map[FLOW_PAY_IPV4];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv4((struct nfp_flower_ipv4 *)key,
(struct nfp_flower_ipv4 *)msk,
rules[i]);
offset = key_map[FLOW_PAY_IPV6];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv6((struct nfp_flower_ipv6 *)key,
(struct nfp_flower_ipv6 *)msk,
rules[i]);
offset = key_map[FLOW_PAY_L4];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_tport((struct nfp_flower_tp_ports *)key,
(struct nfp_flower_tp_ports *)msk,
rules[i]);
offset = key_map[FLOW_PAY_QINQ];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_vlan((struct nfp_flower_vlan *)key,
(struct nfp_flower_vlan *)msk,
rules[i]);
struct nfp_ipv6_addr_entry *entry;
struct in6_addr *dst;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv6_gre_tun((void *)key,
(void *)msk, rules[i]);
}
} else {
__be32 dst;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv4_gre_tun((void *)key,
(void *)msk, rules[i]);
}
struct nfp_ipv6_addr_entry *entry;
struct in6_addr *dst;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv6_udp_tun((void *)key,
(void *)msk, rules[i]);
}
} else {
__be32 dst;
- for (i = 0; i < _CT_TYPE_MAX; i++) {
+ for (i = 0; i < num_rules; i++) {
nfp_flower_compile_ipv4_udp_tun((void *)key,
(void *)msk, rules[i]);
}
offset = key_map[FLOW_PAY_GENEVE_OPT];
key = kdata + offset;
msk = mdata + offset;
- for (i = 0; i < _CT_TYPE_MAX; i++)
+ for (i = 0; i < num_rules; i++)
nfp_flower_compile_geneve_opt(key, msk, rules[i]);
}
}
/* Merge actions into flow_pay */
- err = nfp_fl_merge_actions_offload(rules, priv, netdev, flow_pay);
+ err = nfp_fl_merge_actions_offload(rules, priv, netdev, flow_pay, num_rules);
if (err)
goto ct_offload_err;
if (err)
return err;
+ if (pre_ct_entry->num_prev_m_entries > 0) {
+ err = nfp_ct_merge_extra_check(nft_entry, tc_m_entry);
+ if (err)
+ return err;
+ }
+
/* Combine tc_merge and nft cookies for this cookie. */
new_cookie[0] = tc_m_entry->cookie[0];
new_cookie[1] = tc_m_entry->cookie[1];
list_add(&nft_m_entry->tc_merge_list, &tc_m_entry->children);
list_add(&nft_m_entry->nft_flow_list, &nft_entry->children);
- /* Generate offload structure and send to nfp */
- err = nfp_fl_ct_add_offload(nft_m_entry);
- if (err)
- goto err_nft_ct_offload;
-
err = rhashtable_insert_fast(&zt->nft_merge_tb, &nft_m_entry->hash_node,
nfp_nft_ct_merge_params);
if (err)
zt->nft_merge_count++;
+ if (post_ct_entry->goto_chain_index > 0)
+ return nfp_fl_create_new_pre_ct(nft_m_entry);
+
+ /* Generate offload structure and send to nfp */
+ err = nfp_fl_ct_add_offload(nft_m_entry);
+ if (err)
+ goto err_nft_ct_offload;
+
return err;
-err_nft_ct_merge_insert:
+err_nft_ct_offload:
nfp_fl_ct_del_offload(zt->priv->app, nft_m_entry->tc_flower_cookie,
nft_m_entry->netdev);
-err_nft_ct_offload:
+err_nft_ct_merge_insert:
list_del(&nft_m_entry->tc_merge_list);
list_del(&nft_m_entry->nft_flow_list);
kfree(nft_m_entry);
/* Checks that the chain_index of the filter matches the
* chain_index of the GOTO action.
*/
- if (post_ct_entry->chain_index != pre_ct_entry->chain_index)
+ if (post_ct_entry->chain_index != pre_ct_entry->goto_chain_index)
return -EINVAL;
err = nfp_ct_merge_check(pre_ct_entry, post_ct_entry);
entry->zt = zt;
entry->netdev = netdev;
- entry->cookie = flow->cookie;
+ entry->cookie = flow->cookie > 0 ? flow->cookie : (unsigned long)entry;
entry->chain_index = flow->common.chain_index;
entry->tun_offset = NFP_FL_CT_NO_TUN;
INIT_LIST_HEAD(&entry->children);
+ if (flow->cookie == 0)
+ return entry;
+
/* Now add a ct map entry to flower-priv */
map = get_hashentry(&zt->priv->ct_map_table, &flow->cookie,
nfp_ct_map_params, sizeof(*map));
list_del(&m_entry->tc_merge_list);
list_del(&m_entry->nft_flow_list);
+ if (m_entry->next_pre_ct_entry) {
+ struct nfp_fl_ct_map_entry pre_ct_map_ent;
+
+ pre_ct_map_ent.ct_entry = m_entry->next_pre_ct_entry;
+ pre_ct_map_ent.cookie = 0;
+ nfp_fl_ct_del_flow(&pre_ct_map_ent);
+ }
+
kfree(m_entry);
}
kfree(entry);
}
+static struct flow_action_entry *get_flow_act_ct(struct flow_rule *rule)
+{
+ struct flow_action_entry *act;
+ int i;
+
+ /* More than one ct action may be present in a flow rule.
+ * Return the first one that is not a CT clear action.
+ */
+ flow_action_for_each(i, act, &rule->action) {
+ if (act->id == FLOW_ACTION_CT && act->ct.action != TCA_CT_ACT_CLEAR)
+ return act;
+ }
+
+ return NULL;
+}
+
static struct flow_action_entry *get_flow_act(struct flow_rule *rule,
enum flow_action_id act_id)
{
int nfp_fl_ct_handle_pre_ct(struct nfp_flower_priv *priv,
struct net_device *netdev,
struct flow_cls_offload *flow,
- struct netlink_ext_ack *extack)
+ struct netlink_ext_ack *extack,
+ struct nfp_fl_nft_tc_merge *m_entry)
{
struct flow_action_entry *ct_act, *ct_goto;
struct nfp_fl_ct_flow_entry *ct_entry;
struct nfp_fl_ct_zone_entry *zt;
int err;
- ct_act = get_flow_act(flow->rule, FLOW_ACTION_CT);
+ ct_act = get_flow_act_ct(flow->rule);
if (!ct_act) {
NL_SET_ERR_MSG_MOD(extack,
"unsupported offload: Conntrack action empty in conntrack offload");
if (IS_ERR(ct_entry))
return PTR_ERR(ct_entry);
ct_entry->type = CT_TYPE_PRE_CT;
- ct_entry->chain_index = ct_goto->chain_index;
+ ct_entry->chain_index = flow->common.chain_index;
+ ct_entry->goto_chain_index = ct_goto->chain_index;
+
+ if (m_entry) {
+ struct nfp_fl_ct_flow_entry *pre_ct_entry;
+ int i;
+
+ pre_ct_entry = m_entry->tc_m_parent->pre_ct_parent;
+ for (i = 0; i < pre_ct_entry->num_prev_m_entries; i++)
+ ct_entry->prev_m_entries[i] = pre_ct_entry->prev_m_entries[i];
+ ct_entry->prev_m_entries[i++] = m_entry;
+ ct_entry->num_prev_m_entries = i;
+
+ m_entry->next_pre_ct_entry = ct_entry;
+ }
+
list_add(&ct_entry->list_node, &zt->pre_ct_list);
zt->pre_ct_count++;
struct nfp_fl_ct_zone_entry *zt;
bool wildcarded = false;
struct flow_match_ct ct;
+ struct flow_action_entry *ct_goto;
flow_rule_match_ct(rule, &ct);
if (!ct.mask->ct_zone) {
ct_entry->type = CT_TYPE_POST_CT;
ct_entry->chain_index = flow->common.chain_index;
+ ct_goto = get_flow_act(flow->rule, FLOW_ACTION_GOTO);
+ ct_entry->goto_chain_index = ct_goto ? ct_goto->chain_index : 0;
list_add(&ct_entry->list_node, &zt->post_ct_list);
zt->post_ct_count++;
return 0;
}
+int nfp_fl_create_new_pre_ct(struct nfp_fl_nft_tc_merge *m_entry)
+{
+ struct nfp_fl_ct_flow_entry *pre_ct_entry, *post_ct_entry;
+ struct flow_cls_offload new_pre_ct_flow;
+ int err;
+
+ pre_ct_entry = m_entry->tc_m_parent->pre_ct_parent;
+ if (pre_ct_entry->num_prev_m_entries >= NFP_MAX_RECIRC_CT_ZONES - 1)
+ return -1;
+
+ post_ct_entry = m_entry->tc_m_parent->post_ct_parent;
+ memset(&new_pre_ct_flow, 0, sizeof(struct flow_cls_offload));
+ new_pre_ct_flow.rule = post_ct_entry->rule;
+ new_pre_ct_flow.common.chain_index = post_ct_entry->chain_index;
+
+ err = nfp_fl_ct_handle_pre_ct(pre_ct_entry->zt->priv,
+ pre_ct_entry->netdev,
+ &new_pre_ct_flow, NULL,
+ m_entry);
+ return err;
+}
+
static void
nfp_fl_ct_sub_stats(struct nfp_fl_nft_tc_merge *nft_merge,
enum ct_entry_type type, u64 *m_pkts,
0, priv->stats[ctx_id].used,
FLOW_ACTION_HW_STATS_DELAYED);
}
+
+ /* Update previous pre_ct/post_ct/nft flow stats */
+ if (nft_merge->tc_m_parent->pre_ct_parent->num_prev_m_entries > 0) {
+ struct nfp_fl_nft_tc_merge *tmp_nft_merge;
+ int i;
+
+ for (i = 0; i < nft_merge->tc_m_parent->pre_ct_parent->num_prev_m_entries; i++) {
+ tmp_nft_merge = nft_merge->tc_m_parent->pre_ct_parent->prev_m_entries[i];
+ flow_stats_update(&tmp_nft_merge->tc_m_parent->pre_ct_parent->stats,
+ priv->stats[ctx_id].bytes,
+ priv->stats[ctx_id].pkts,
+ 0, priv->stats[ctx_id].used,
+ FLOW_ACTION_HW_STATS_DELAYED);
+ flow_stats_update(&tmp_nft_merge->tc_m_parent->post_ct_parent->stats,
+ priv->stats[ctx_id].bytes,
+ priv->stats[ctx_id].pkts,
+ 0, priv->stats[ctx_id].used,
+ FLOW_ACTION_HW_STATS_DELAYED);
+ flow_stats_update(&tmp_nft_merge->nft_parent->stats,
+ priv->stats[ctx_id].bytes,
+ priv->stats[ctx_id].pkts,
+ 0, priv->stats[ctx_id].used,
+ FLOW_ACTION_HW_STATS_DELAYED);
+ }
+ }
+
/* Reset stats from the nfp */
priv->stats[ctx_id].pkts = 0;
priv->stats[ctx_id].bytes = 0;
switch (ct_entry->type) {
case CT_TYPE_PRE_CT:
zt->pre_ct_count--;
- rhashtable_remove_fast(m_table, &ct_map_ent->hash_node,
- nfp_ct_map_params);
+ if (ct_map_ent->cookie > 0)
+ rhashtable_remove_fast(m_table, &ct_map_ent->hash_node,
+ nfp_ct_map_params);
nfp_fl_ct_clean_flow_entry(ct_entry);
- kfree(ct_map_ent);
+ if (ct_map_ent->cookie > 0)
+ kfree(ct_map_ent);
if (!zt->pre_ct_count) {
zt->nft = NULL;
_CT_TYPE_MAX,
};
+#define NFP_MAX_RECIRC_CT_ZONES 4
+#define NFP_MAX_ENTRY_RULES (NFP_MAX_RECIRC_CT_ZONES * 2 + 1)
+
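
With NFP_MAX_RECIRC_CT_ZONES at 4, NFP_MAX_ENTRY_RULES comes to 4 * 2 + 1 = 9: at most NFP_MAX_RECIRC_CT_ZONES - 1 = 3 previous pre_ct/nft rule pairs plus the final pre_ct/nft/post_ct triple. The offload path earlier in this series fills the rules[] array in exactly that order, which is why the nft rules sit at odd indices and the post_ct rule is always last. An illustrative layout for two previously merged zones (not literal driver code)::

    /*
     * num_rules = num_prev_m_entries * 2 + _CT_TYPE_MAX = 2 * 2 + 3 = 7
     *
     *   rules[0] = zone 0 pre_ct     rules[1] = zone 0 nft
     *   rules[2] = zone 1 pre_ct     rules[3] = zone 1 nft
     *   rules[4] = final pre_ct      rules[5] = final nft
     *   rules[6] = final post_ct
     */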
enum nfp_nfp_layer_name {
FLOW_PAY_META_TCI = 0,
FLOW_PAY_INPORT,
* @cookie: Flow cookie, same as original TC flow, used as key
* @list_node: Used by the list
* @chain_index: Chain index of the original flow
+ * @goto_chain_index: goto chain index of the flow
* @netdev: netdev structure.
- * @type: Type of pre-entry from enum ct_entry_type
* @zt: Reference to the zone table this belongs to
* @children: List of tc_merge flows this flow forms part of
* @rule: Reference to the original TC flow rule
* @stats: Used to cache stats for updating
+ * @prev_m_entries: Array of all previous nft_tc_merge entries
+ * @num_prev_m_entries: The number of all previous nft_tc_merge entries
* @tun_offset: Used to indicate tunnel action offset in action list
* @flags: Used to indicate flow flag like NAT which used by merge.
+ * @type: Type of ct-entry from enum ct_entry_type
*/
struct nfp_fl_ct_flow_entry {
unsigned long cookie;
struct list_head list_node;
u32 chain_index;
- enum ct_entry_type type;
+ u32 goto_chain_index;
struct net_device *netdev;
struct nfp_fl_ct_zone_entry *zt;
struct list_head children;
struct flow_rule *rule;
struct flow_stats stats;
+ struct nfp_fl_nft_tc_merge *prev_m_entries[NFP_MAX_RECIRC_CT_ZONES - 1];
+ u8 num_prev_m_entries;
u8 tun_offset; // Set to NFP_FL_CT_NO_TUN if no tun
u8 flags;
+ u8 type;
};
/**
* @nft_parent: The nft_entry parent
* @tc_flower_cookie: The cookie of the flow offloaded to the nfp
* @flow_pay: Reference to the offloaded flow struct
+ * @next_pre_ct_entry: Reference to the next ct zone pre ct entry
*/
struct nfp_fl_nft_tc_merge {
struct net_device *netdev;
struct nfp_fl_ct_flow_entry *nft_parent;
unsigned long tc_flower_cookie;
struct nfp_fl_payload *flow_pay;
+ struct nfp_fl_ct_flow_entry *next_pre_ct_entry;
};
/**
* @netdev: netdev structure.
* @flow: TC flower classifier offload structure.
* @extack: Extack pointer for errors
+ * @m_entry: previous nfp_fl_nft_tc_merge entry
*
* Adds a new entry to the relevant zone table and tries to
* merge with other +trk+est entries and offload if possible.
int nfp_fl_ct_handle_pre_ct(struct nfp_flower_priv *priv,
struct net_device *netdev,
struct flow_cls_offload *flow,
- struct netlink_ext_ack *extack);
+ struct netlink_ext_ack *extack,
+ struct nfp_fl_nft_tc_merge *m_entry);
/**
* nfp_fl_ct_handle_post_ct() - Handles +trk+est conntrack rules
* @priv: Pointer to app priv
struct netlink_ext_ack *extack);
/**
+ * nfp_fl_create_new_pre_ct() - create next ct_zone -trk conntrack rules
+ * @m_entry: previous nfp_fl_nft_tc_merge entry
+ *
+ * Create a new pre_ct entry from the previous nfp_fl_nft_tc_merge entry
+ * and add it to the next relevant zone table. Try to merge with other
+ * +trk+est entries and offload if possible. The newly created pre_ct
+ * entry is linked to the previous nfp_fl_nft_tc_merge entry.
+ *
+ * Return: negative value on error, 0 if configured successfully.
+ */
+int nfp_fl_create_new_pre_ct(struct nfp_fl_nft_tc_merge *m_entry);
+
+/**
* nfp_fl_ct_clean_flow_entry() - Free a nfp_fl_ct_flow_entry
* @entry: Flow entry to cleanup
*/
port = nfp_port_from_netdev(netdev);
if (is_pre_ct_flow(flow))
- return nfp_fl_ct_handle_pre_ct(priv, netdev, flow, extack);
+ return nfp_fl_ct_handle_pre_ct(priv, netdev, flow, extack, NULL);
if (is_post_ct_flow(flow))
return nfp_fl_ct_handle_post_ct(priv, netdev, flow, extack);
port->eth_port = &pf->eth_tbl->ports[id];
port->eth_id = pf->eth_tbl->ports[id].index;
+ port->netdev->dev_port = id;
if (pf->mac_stats_mem)
port->eth_stats =
pf->mac_stats_mem + port->eth_id * NFP_MAC_STATS_SIZE;
.remove = nixge_remove,
.driver = {
.name = "nixge",
- .of_match_table = of_match_ptr(nixge_dt_ids),
+ .of_match_table = nixge_dt_ids,
},
};
module_platform_driver(nixge_driver);
write_dma_reg(PAS_DMA_TXCHAN_INCR(txring->chan.chno), 2);
}
-static int pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t pasemi_mac_start_tx(struct sk_buff *skb, struct net_device *dev)
{
struct pasemi_mac * const mac = netdev_priv(dev);
struct pasemi_mac_txring * const txring = tx_ring(mac);
ionic_port_reset(ionic);
ionic_reset(ionic);
ionic_dev_teardown(ionic);
- pci_clear_master(pdev);
ionic_unmap_bars(ionic);
pci_release_regions(pdev);
pci_disable_device(pdev);
The following is packed:
- N cardrsp_rds_rings
- N cardrs_sds_rings */
- char data[0];
+ char data[];
} nx_cardrsp_rx_ctx_t;
#define SIZEOF_HOSTRQ_RX(HOSTRQ_RX, rds_rings, sds_rings) \
#include <linux/ipv6.h>
#include <linux/inetdevice.h>
#include <linux/sysfs.h>
-#include <linux/aer.h>
MODULE_DESCRIPTION("QLogic/NetXen (1/10) GbE Intelligent Ethernet Driver");
MODULE_LICENSE("GPL");
if ((err = pci_request_regions(pdev, netxen_nic_driver_name)))
goto err_out_disable_pdev;
- if (NX_IS_REVISION_P3(pdev->revision))
- pci_enable_pcie_error_reporting(pdev);
-
pci_set_master(pdev);
netdev = alloc_etherdev(sizeof(struct netxen_adapter));
free_netdev(netdev);
err_out_free_res:
- if (NX_IS_REVISION_P3(pdev->revision))
- pci_disable_pcie_error_reporting(pdev);
pci_release_regions(pdev);
err_out_disable_pdev:
netxen_release_firmware(adapter);
- if (NX_IS_REVISION_P3(pdev->revision)) {
+ if (NX_IS_REVISION_P3(pdev->revision))
netxen_cleanup_minidump(adapter);
- pci_disable_pcie_error_reporting(pdev);
- }
pci_release_regions(pdev);
pci_disable_device(pdev);
struct qed_ll2_rx_queue *p_rx = &p_ll2_conn->rx_queue;
u16 packet_length = 0, parse_flags = 0, vlan = 0;
struct qed_ll2_rx_packet *p_pkt = NULL;
- u32 num_ooo_add_to_peninsula = 0, cid;
union core_rx_cqe_union *cqe = NULL;
u16 cq_new_idx = 0, cq_old_idx = 0;
struct qed_ooo_buffer *p_buffer;
struct ooo_opaque *ooo_opq;
u8 placement_offset = 0;
u8 cqe_type;
+ u32 cid;
cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
cid, ooo_opq->ooo_isle);
break;
case TCP_EVENT_ADD_PEN:
- num_ooo_add_to_peninsula++;
qed_ooo_put_ready_buffer(p_hwfn,
p_hwfn->p_ooo_info,
p_buffer, true);
#include <linux/qed/qed_if.h>
#include <linux/qed/qed_ll2_if.h>
#include <net/devlink.h>
-#include <linux/aer.h>
#include <linux/phylink.h>
#include "qed.h"
{
struct pci_dev *pdev = cdev->pdev;
- pci_disable_pcie_error_reporting(pdev);
-
if (cdev->doorbells && cdev->db_size)
iounmap(cdev->doorbells);
if (cdev->regview)
return -ENOMEM;
}
- /* AER (Advanced Error reporting) configuration */
- rc = pci_enable_pcie_error_reporting(pdev);
- if (rc)
- DP_VERBOSE(cdev, NETIF_MSG_DRV,
- "Failed to configure PCIe AER [%d]\n", rc);
-
return 0;
err2:
#ifndef _QEDE_H_
#define _QEDE_H_
-#include <linux/compiler.h>
-#include <linux/version.h>
#include <linux/workqueue.h>
#include <linux/netdevice.h>
#include <linux/interrupt.h>
* Copyright (c) 2019-2020 Marvell International Ltd.
*/
-#include <linux/version.h>
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <net/ip6_checksum.h>
#include <linux/bitops.h>
#include <linux/vmalloc.h>
-#include <linux/aer.h>
#include "qede.h"
#include "qede_ptp.h"
#include <linux/ipv6.h>
#include <linux/ethtool.h>
#include <linux/interrupt.h>
-#include <linux/aer.h>
#include "qlcnic.h"
#include "qlcnic_sriov.h"
#include <net/ip.h>
#include <linux/ipv6.h>
#include <linux/inetdevice.h>
-#include <linux/aer.h>
#include <linux/log2.h>
#include <linux/pci.h>
#include <net/vxlan.h>
goto err_out_disable_pdev;
pci_set_master(pdev);
- pci_enable_pcie_error_reporting(pdev);
ahw = kzalloc(sizeof(struct qlcnic_hardware_context), GFP_KERNEL);
if (!ahw) {
kfree(ahw);
err_out_free_res:
- pci_disable_pcie_error_reporting(pdev);
pci_release_regions(pdev);
err_out_disable_pdev:
qlcnic_release_firmware(adapter);
- pci_disable_pcie_error_reporting(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
#include <linux/ipv6.h>
#include <linux/inetdevice.h>
#include <linux/sysfs.h>
-#include <linux/aer.h>
#include <linux/log2.h>
#ifdef CONFIG_QLCNIC_HWMON
#include <linux/hwmon.h>
struct work_struct work;
} wk;
+ spinlock_t config25_lock;
+ spinlock_t mac_ocp_lock;
+
+ spinlock_t cfg9346_usage_lock;
+ int cfg9346_usage_count;
+
unsigned supports_gmii:1;
- unsigned aspm_manageable:1;
dma_addr_t counters_phys_addr;
struct rtl8169_counters *counters;
struct rtl8169_tc_offsets tc_offset;
static void rtl_lock_config_regs(struct rtl8169_private *tp)
{
- RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->cfg9346_usage_lock, flags);
+ if (!--tp->cfg9346_usage_count)
+ RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ spin_unlock_irqrestore(&tp->cfg9346_usage_lock, flags);
}
static void rtl_unlock_config_regs(struct rtl8169_private *tp)
{
- RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->cfg9346_usage_lock, flags);
+ if (!tp->cfg9346_usage_count++)
+ RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ spin_unlock_irqrestore(&tp->cfg9346_usage_lock, flags);
}
static void rtl_pci_commit(struct rtl8169_private *tp)
RTL_R8(tp, ChipCmd);
}
+static void rtl_mod_config2(struct rtl8169_private *tp, u8 clear, u8 set)
+{
+ unsigned long flags;
+ u8 val;
+
+ spin_lock_irqsave(&tp->config25_lock, flags);
+ val = RTL_R8(tp, Config2);
+ RTL_W8(tp, Config2, (val & ~clear) | set);
+ spin_unlock_irqrestore(&tp->config25_lock, flags);
+}
+
+static void rtl_mod_config5(struct rtl8169_private *tp, u8 clear, u8 set)
+{
+ unsigned long flags;
+ u8 val;
+
+ spin_lock_irqsave(&tp->config25_lock, flags);
+ val = RTL_R8(tp, Config5);
+ RTL_W8(tp, Config5, (val & ~clear) | set);
+ spin_unlock_irqrestore(&tp->config25_lock, flags);
+}
+
static bool rtl_is_8125(struct rtl8169_private *tp)
{
return tp->mac_version >= RTL_GIGA_MAC_VER_61;
(RTL_R32(tp, GPHY_OCP) & 0xffff) : -ETIMEDOUT;
}
-static void r8168_mac_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data)
+static void __r8168_mac_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data)
{
if (rtl_ocp_reg_failure(reg))
return;
RTL_W32(tp, OCPDR, OCPAR_FLAG | (reg << 15) | data);
}
-static u16 r8168_mac_ocp_read(struct rtl8169_private *tp, u32 reg)
+static void r8168_mac_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&tp->mac_ocp_lock, flags);
+ __r8168_mac_ocp_write(tp, reg, data);
+ spin_unlock_irqrestore(&tp->mac_ocp_lock, flags);
+}
+
+static u16 __r8168_mac_ocp_read(struct rtl8169_private *tp, u32 reg)
{
if (rtl_ocp_reg_failure(reg))
return 0;
return RTL_R32(tp, OCPDR);
}
+static u16 r8168_mac_ocp_read(struct rtl8169_private *tp, u32 reg)
+{
+ unsigned long flags;
+ u16 val;
+
+ spin_lock_irqsave(&tp->mac_ocp_lock, flags);
+ val = __r8168_mac_ocp_read(tp, reg);
+ spin_unlock_irqrestore(&tp->mac_ocp_lock, flags);
+
+ return val;
+}
+
static void r8168_mac_ocp_modify(struct rtl8169_private *tp, u32 reg, u16 mask,
u16 set)
{
- u16 data = r8168_mac_ocp_read(tp, reg);
+ unsigned long flags;
+ u16 data;
- r8168_mac_ocp_write(tp, reg, (data & ~mask) | set);
+ spin_lock_irqsave(&tp->mac_ocp_lock, flags);
+ data = __r8168_mac_ocp_read(tp, reg);
+ __r8168_mac_ocp_write(tp, reg, (data & ~mask) | set);
+ spin_unlock_irqrestore(&tp->mac_ocp_lock, flags);
}
/* Work around a hw issue with RTL8168g PHY, the quirk disables
{ WAKE_MAGIC, Config3, MagicPacket }
};
unsigned int i, tmp = ARRAY_SIZE(cfg);
+ unsigned long flags;
u8 options;
rtl_unlock_config_regs(tp);
r8168_mac_ocp_modify(tp, 0xc0b6, BIT(0), 0);
}
+ spin_lock_irqsave(&tp->config25_lock, flags);
for (i = 0; i < tmp; i++) {
options = RTL_R8(tp, cfg[i].reg) & ~cfg[i].mask;
if (wolopts & cfg[i].opt)
options |= cfg[i].mask;
RTL_W8(tp, cfg[i].reg, options);
}
+ spin_unlock_irqrestore(&tp->config25_lock, flags);
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
case RTL_GIGA_MAC_VER_34:
case RTL_GIGA_MAC_VER_37:
case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_63:
- options = RTL_R8(tp, Config2) & ~PME_SIGNAL;
if (wolopts)
- options |= PME_SIGNAL;
- RTL_W8(tp, Config2, options);
+ rtl_mod_config2(tp, 0, PME_SIGNAL);
+ else
+ rtl_mod_config2(tp, PME_SIGNAL, 0);
break;
default:
break;
static void rtl_hw_aspm_clkreq_enable(struct rtl8169_private *tp, bool enable)
{
- /* Don't enable ASPM in the chip if OS can't control ASPM */
- if (enable && tp->aspm_manageable) {
- RTL_W8(tp, Config5, RTL_R8(tp, Config5) | ASPM_en);
- RTL_W8(tp, Config2, RTL_R8(tp, Config2) | ClkReqEn);
+ if (tp->mac_version < RTL_GIGA_MAC_VER_32)
+ return;
+
+ if (enable) {
+ rtl_mod_config5(tp, 0, ASPM_en);
+ rtl_mod_config2(tp, 0, ClkReqEn);
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_46 ... RTL_GIGA_MAC_VER_48:
break;
}
- RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~ClkReqEn);
- RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~ASPM_en);
+ rtl_mod_config2(tp, ClkReqEn, 0);
+ rtl_mod_config5(tp, ASPM_en, 0);
}
-
- udelay(10);
}
static void rtl_set_fifo_size(struct rtl8169_private *tp, u16 rx_stat,
RTL_W32(tp, MISC, RTL_R32(tp, MISC) | TXPLA_RST);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) & ~TXPLA_RST);
- RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~Spi_en);
+ rtl_mod_config5(tp, Spi_en, 0);
}
static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) | PFM_EN);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) | PWM_EN);
- RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~Spi_en);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
+ rtl_mod_config5(tp, Spi_en, 0);
}
static void rtl_hw_start_8168f(struct rtl8169_private *tp)
RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) | PFM_EN);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) | PWM_EN);
- RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~Spi_en);
+ rtl_mod_config5(tp, Spi_en, 0);
rtl8168_config_eee_mac(tp);
}
};
rtl_hw_start_8168g(tp);
-
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8168g_1);
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8168g_2(struct rtl8169_private *tp)
};
rtl_hw_start_8168g(tp);
-
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8168g_2);
}
rtl_hw_start_8168g(tp);
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8411_2);
/* The following Realtek-provided magic fixes an issue with the RX unit
r8168_mac_ocp_write(tp, 0xFC32, 0x0C25);
r8168_mac_ocp_write(tp, 0xFC34, 0x00A9);
r8168_mac_ocp_write(tp, 0xFC36, 0x012D);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
};
int rg_saw_cnt;
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8168h_1);
rtl_set_fifo_size(tp, 0x08, 0x10, 0x02, 0x06);
r8168_mac_ocp_write(tp, 0xe63e, 0x0000);
r8168_mac_ocp_write(tp, 0xc094, 0x0000);
r8168_mac_ocp_write(tp, 0xc09e, 0x0000);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
{ 0x1e, 0x0000, 0x2000 },
};
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8168ep_3);
rtl_hw_start_8168ep(tp);
r8168_mac_ocp_modify(tp, 0xd3e2, 0x0fff, 0x0271);
r8168_mac_ocp_modify(tp, 0xd3e4, 0x00ff, 0x0000);
r8168_mac_ocp_modify(tp, 0xe860, 0x0000, 0x0080);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8117(struct rtl8169_private *tp)
int rg_saw_cnt;
rtl8168ep_stop_cmac(tp);
-
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8117);
rtl_set_fifo_size(tp, 0x08, 0x10, 0x02, 0x06);
/* firmware is for MAC only */
r8169_apply_firmware(tp);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8102e_1(struct rtl8169_private *tp)
static void rtl_hw_start_8106(struct rtl8169_private *tp)
{
- rtl_hw_aspm_clkreq_enable(tp, false);
-
/* Force LAN exit from ASPM if Rx/Tx are not idle */
RTL_W32(tp, FuncEvent, RTL_R32(tp, FuncEvent) | 0x002800);
rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
rtl_pcie_state_l2l3_disable(tp);
- rtl_hw_aspm_clkreq_enable(tp, true);
}
DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
};
rtl_set_def_aspm_entry_latency(tp);
-
- /* disable aspm and clock request before access ephy */
- rtl_hw_aspm_clkreq_enable(tp, false);
rtl_ephy_init(tp, e_info_8125a_2);
-
rtl_hw_start_8125_common(tp);
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_start_8125b(struct rtl8169_private *tp)
};
rtl_set_def_aspm_entry_latency(tp);
- rtl_hw_aspm_clkreq_enable(tp, false);
-
rtl_ephy_init(tp, e_info_8125b);
rtl_hw_start_8125_common(tp);
-
- rtl_hw_aspm_clkreq_enable(tp, true);
}
static void rtl_hw_config(struct rtl8169_private *tp)
static void rtl_hw_start(struct rtl8169_private *tp)
{
rtl_unlock_config_regs(tp);
-
+ /* disable aspm and clock request before ephy access */
+ rtl_hw_aspm_clkreq_enable(tp, false);
RTL_W16(tp, CPlusCmd, tp->cp_cmd);
if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
rtl_hw_start_8168(tp);
rtl_enable_exit_l1(tp);
+ rtl_hw_aspm_clkreq_enable(tp, true);
rtl_set_rx_max_size(tp);
rtl_set_rx_tx_desc_registers(tp);
rtl_lock_config_regs(tp);
}
if (napi_schedule_prep(&tp->napi)) {
+ rtl_unlock_config_regs(tp);
+ rtl_hw_aspm_clkreq_enable(tp, false);
+ rtl_lock_config_regs(tp);
+
rtl_irq_disable(tp);
__napi_schedule(&tp->napi);
}
work_done = rtl_rx(dev, tp, budget);
- if (work_done < budget && napi_complete_done(napi, work_done))
+ if (work_done < budget && napi_complete_done(napi, work_done)) {
rtl_irq_enable(tp);
+ rtl_unlock_config_regs(tp);
+ rtl_hw_aspm_clkreq_enable(tp, true);
+ rtl_lock_config_regs(tp);
+ }
+
return work_done;
}
rtl_rar_set(tp, mac_addr);
}
-/* register is set if system vendor successfully tested ASPM 1.2 */
-static bool rtl_aspm_is_safe(struct rtl8169_private *tp)
-{
- if (tp->mac_version >= RTL_GIGA_MAC_VER_61 &&
- r8168_mac_ocp_read(tp, 0xc0b2) & 0xf)
- return true;
-
- return false;
-}
-
static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct rtl8169_private *tp;
tp->eee_adv = -1;
tp->ocp_base = OCP_STD_PHY_BASE;
+ spin_lock_init(&tp->cfg9346_usage_lock);
+ spin_lock_init(&tp->config25_lock);
+ spin_lock_init(&tp->mac_ocp_lock);
+
dev->tstats = devm_netdev_alloc_pcpu_stats(&pdev->dev,
struct pcpu_sw_netstats);
if (!dev->tstats)
tp->mac_version = chipset;
- /* Disable ASPM L1 as that cause random device stop working
- * problems as well as full system hangs for some PCIe devices users.
- * Chips from RTL8168h partially have issues with L1.2, but seem
- * to work fine with L1 and L1.1.
- */
- if (rtl_aspm_is_safe(tp))
- rc = 0;
- else if (tp->mac_version >= RTL_GIGA_MAC_VER_46)
- rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
- else
- rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);
- tp->aspm_manageable = !rc;
-
tp->dash_type = rtl_check_dash(tp);
tp->cp_cmd = RTL_R16(tp, CPlusCmd) & CPCMD_MASK;
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
-#include <linux/sys_soc.h>
#include <linux/reset.h>
#include <linux/math64.h>
phy_print_status(phydev);
}
-static const struct soc_device_attribute r8a7795es10[] = {
- { .soc_id = "r8a7795", .revision = "ES1.0", },
- { /* sentinel */ }
-};
-
/* PHY init function */
static int ravb_phy_init(struct net_device *ndev)
{
goto err_deregister_fixed_link;
}
- /* This driver only support 10/100Mbit speeds on R-Car H3 ES1.0
- * at this time.
- */
- if (soc_device_match(r8a7795es10)) {
- phy_set_max_speed(phydev, SPEED_100);
-
- netdev_info(ndev, "limited PHY to 100Mbit/s\n");
- }
-
if (!info->half_duplex) {
/* 10BASE, Pause and Asym Pause is not supported */
phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
static void rswitch_phy_device_deinit(struct rswitch_device *rdev)
{
- if (rdev->ndev->phydev) {
+ if (rdev->ndev->phydev)
phy_disconnect(rdev->ndev->phydev);
- rdev->ndev->phydev = NULL;
- }
}
static int rswitch_serdes_set_params(struct rswitch_device *rdev)
.driver = {
.name = SXGBE_RESOURCE_NAME,
.pm = &sxgbe_platform_pm_ops,
- .of_match_table = of_match_ptr(sxgbe_dt_ids),
+ .of_match_table = sxgbe_dt_ids,
},
};
#include "net_driver.h"
#include <linux/module.h>
-#include <linux/aer.h>
#include "efx_common.h"
#include "efx_channels.h"
#include "io.h"
pci_dbg(pci_dev, "shutdown successful\n");
- pci_disable_pcie_error_reporting(pci_dev);
-
pci_set_drvdata(pci_dev, NULL);
efx_fini_struct(efx);
kfree(probe_data);
#include <linux/ethtool.h>
#include <linux/topology.h>
#include <linux/gfp.h>
-#include <linux/aer.h>
#include <linux/interrupt.h>
#include "net_driver.h"
#include <net/gre.h>
free_netdev(efx->net_dev);
probe_data = container_of(efx, struct efx_probe_data, efx);
kfree(probe_data);
-
- pci_disable_pcie_error_reporting(pci_dev);
};
/* NIC VPD information
netif_warn(efx, probe, efx->net_dev,
"failed to create MTDs (%d)\n", rc);
- (void)pci_enable_pcie_error_reporting(pci_dev);
-
if (efx->type->udp_tnl_push_ports)
efx->type->udp_tnl_push_ports(efx);
#include <linux/ethtool.h>
#include <linux/topology.h>
#include <linux/gfp.h>
-#include <linux/aer.h>
#include <linux/interrupt.h>
#include "net_driver.h"
#include "efx.h"
ef4_fini_struct(efx);
free_netdev(efx->net_dev);
-
- pci_disable_pcie_error_reporting(pci_dev);
};
/* NIC VPD information
netif_warn(efx, probe, efx->net_dev,
"failed to create MTDs (%d)\n", rc);
- rc = pci_enable_pcie_error_reporting(pci_dev);
- if (rc && rc != -EINVAL)
- netif_notice(efx, probe, efx->net_dev,
- "PCIE error reporting unavailable (%d).\n",
- rc);
-
return 0;
fail4:
if (outlen < sizeof(outbuf))
return -EIO;
caps->match_field_count = MCDI_DWORD(outbuf, MAE_GET_CAPS_OUT_MATCH_FIELD_COUNT);
+ caps->encap_types = MCDI_DWORD(outbuf, MAE_GET_CAPS_OUT_ENCAP_TYPES_SUPPORTED);
caps->action_prios = MCDI_DWORD(outbuf, MAE_GET_CAPS_OUT_ACTION_PRIOS);
return 0;
}
size_t outlen;
int rc, i;
+ /* AR and OR caps MCDIs have identical layout, so we are using the
+ * same code for both.
+ */
+ BUILD_BUG_ON(MC_CMD_MAE_GET_AR_CAPS_OUT_LEN(MAE_NUM_FIELDS) <
+ MC_CMD_MAE_GET_OR_CAPS_OUT_LEN(MAE_NUM_FIELDS));
BUILD_BUG_ON(MC_CMD_MAE_GET_AR_CAPS_IN_LEN);
+ BUILD_BUG_ON(MC_CMD_MAE_GET_OR_CAPS_IN_LEN);
rc = efx_mcdi_rpc(efx, cmd, NULL, 0, outbuf, sizeof(outbuf), &outlen);
if (rc)
return rc;
+ BUILD_BUG_ON(MC_CMD_MAE_GET_AR_CAPS_OUT_COUNT_OFST !=
+ MC_CMD_MAE_GET_OR_CAPS_OUT_COUNT_OFST);
count = MCDI_DWORD(outbuf, MAE_GET_AR_CAPS_OUT_COUNT);
memset(field_support, MAE_FIELD_UNSUPPORTED, MAE_NUM_FIELDS);
+ BUILD_BUG_ON(MC_CMD_MAE_GET_AR_CAPS_OUT_FIELD_FLAGS_OFST !=
+ MC_CMD_MAE_GET_OR_CAPS_OUT_FIELD_FLAGS_OFST);
caps = _MCDI_DWORD(outbuf, MAE_GET_AR_CAPS_OUT_FIELD_FLAGS);
/* We're only interested in the support status enum, not any other
* flags, so just extract that from each entry.
rc = efx_mae_get_basic_caps(efx, caps);
if (rc)
return rc;
- return efx_mae_get_rule_fields(efx, MC_CMD_MAE_GET_AR_CAPS,
- caps->action_rule_fields);
+ rc = efx_mae_get_rule_fields(efx, MC_CMD_MAE_GET_AR_CAPS,
+ caps->action_rule_fields);
+ if (rc)
+ return rc;
+ return efx_mae_get_rule_fields(efx, MC_CMD_MAE_GET_OR_CAPS,
+ caps->outer_rule_fields);
}
/* Bit twiddling:
CHECK_BIT(IP_FIRST_FRAG, ip_firstfrag) ||
CHECK(RECIRC_ID, recirc_id))
return rc;
+ /* Matches on outer fields are done in a separate hardware table,
+ * the Outer Rule table. Thus the Action Rule merely does an
+ * exact match on Outer Rule ID if any outer field matches are
+ * present. The exception is the VNI/VSID (enc_keyid), which is
+ * available to the Action Rule match iff the Outer Rule matched
+ * (and thus identified the encap protocol to use to extract it).
+ */
+ if (efx_tc_match_is_encap(mask)) {
+ rc = efx_mae_match_check_cap_typ(
+ supported_fields[MAE_FIELD_OUTER_RULE_ID],
+ MASK_ONES);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "No support for encap rule ID matches");
+ return rc;
+ }
+ if (CHECK(ENC_VNET_ID, enc_keyid))
+ return rc;
+ } else if (mask->enc_keyid) {
+ NL_SET_ERR_MSG_MOD(extack, "Match on enc_keyid requires other encap fields");
+ return -EINVAL;
+ }
return 0;
}
#undef CHECK_BIT
#undef CHECK
+#define CHECK(_mcdi) ({ \
+ rc = efx_mae_match_check_cap_typ(supported_fields[MAE_FIELD_ ## _mcdi],\
+ MASK_ONES); \
+ if (rc) \
+ NL_SET_ERR_MSG_FMT_MOD(extack, \
+ "No support for field %s", #_mcdi); \
+ rc; \
+})
+/* Checks that the fields needed for encap-rule matches are supported by the
+ * MAE. All the fields are exact-match.
+ */
+int efx_mae_check_encap_match_caps(struct efx_nic *efx, bool ipv6,
+ struct netlink_ext_ack *extack)
+{
+ u8 *supported_fields = efx->tc->caps->outer_rule_fields;
+ int rc;
+
+ if (CHECK(ENC_ETHER_TYPE))
+ return rc;
+ if (ipv6) {
+ if (CHECK(ENC_SRC_IP6) ||
+ CHECK(ENC_DST_IP6))
+ return rc;
+ } else {
+ if (CHECK(ENC_SRC_IP4) ||
+ CHECK(ENC_DST_IP4))
+ return rc;
+ }
+ if (CHECK(ENC_L4_DPORT) ||
+ CHECK(ENC_IP_PROTO))
+ return rc;
+ return 0;
+}
+#undef CHECK
+
+int efx_mae_check_encap_type_supported(struct efx_nic *efx, enum efx_encap_type typ)
+{
+ unsigned int bit;
+
+ switch (typ & EFX_ENCAP_TYPES_MASK) {
+ case EFX_ENCAP_TYPE_VXLAN:
+ bit = MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_VXLAN_LBN;
+ break;
+ case EFX_ENCAP_TYPE_GENEVE:
+ bit = MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_GENEVE_LBN;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ if (efx->tc->caps->encap_types & BIT(bit))
+ return 0;
+ return -EOPNOTSUPP;
+}
+
int efx_mae_allocate_counter(struct efx_nic *efx, struct efx_tc_counter *cnt)
{
MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_COUNTER_ALLOC_OUT_LEN(1));
return 0;
}
+static int efx_mae_encap_type_to_mae_type(enum efx_encap_type type)
+{
+ switch (type & EFX_ENCAP_TYPES_MASK) {
+ case EFX_ENCAP_TYPE_NONE:
+ return MAE_MCDI_ENCAP_TYPE_NONE;
+ case EFX_ENCAP_TYPE_VXLAN:
+ return MAE_MCDI_ENCAP_TYPE_VXLAN;
+ case EFX_ENCAP_TYPE_GENEVE:
+ return MAE_MCDI_ENCAP_TYPE_GENEVE;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
int efx_mae_lookup_mport(struct efx_nic *efx, u32 vf_idx, u32 *id)
{
struct ef100_nic_data *nic_data = efx->nic_data;
size_t outlen;
int rc;
+ MCDI_POPULATE_DWORD_3(inbuf, MAE_ACTION_SET_ALLOC_IN_FLAGS,
+ MAE_ACTION_SET_ALLOC_IN_VLAN_PUSH, act->vlan_push,
+ MAE_ACTION_SET_ALLOC_IN_VLAN_POP, act->vlan_pop,
+ MAE_ACTION_SET_ALLOC_IN_DECAP, act->decap);
+
MCDI_SET_DWORD(inbuf, MAE_ACTION_SET_ALLOC_IN_SRC_MAC_ID,
MC_CMD_MAE_MAC_ADDR_ALLOC_OUT_MAC_ID_NULL);
MCDI_SET_DWORD(inbuf, MAE_ACTION_SET_ALLOC_IN_DST_MAC_ID,
MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
MCDI_SET_DWORD(inbuf, MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID,
MC_CMD_MAE_COUNTER_LIST_ALLOC_OUT_COUNTER_LIST_ID_NULL);
+ if (act->vlan_push) {
+ MCDI_SET_WORD_BE(inbuf, MAE_ACTION_SET_ALLOC_IN_VLAN0_TCI_BE,
+ act->vlan_tci[0]);
+ MCDI_SET_WORD_BE(inbuf, MAE_ACTION_SET_ALLOC_IN_VLAN0_PROTO_BE,
+ act->vlan_proto[0]);
+ }
+ if (act->vlan_push >= 2) {
+ MCDI_SET_WORD_BE(inbuf, MAE_ACTION_SET_ALLOC_IN_VLAN1_TCI_BE,
+ act->vlan_tci[1]);
+ MCDI_SET_WORD_BE(inbuf, MAE_ACTION_SET_ALLOC_IN_VLAN1_PROTO_BE,
+ act->vlan_proto[1]);
+ }
MCDI_SET_DWORD(inbuf, MAE_ACTION_SET_ALLOC_IN_ENCAP_HEADER_ID,
MC_CMD_MAE_ENCAP_HEADER_ALLOC_OUT_ENCAP_HEADER_ID_NULL);
if (act->deliver)
return 0;
}
+int efx_mae_register_encap_match(struct efx_nic *efx,
+ struct efx_tc_encap_match *encap)
+{
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_OUTER_RULE_INSERT_IN_LEN(MAE_ENC_FIELD_PAIRS_LEN));
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_OUTER_RULE_INSERT_OUT_LEN);
+ MCDI_DECLARE_STRUCT_PTR(match_crit);
+ size_t outlen;
+ int rc;
+
+ rc = efx_mae_encap_type_to_mae_type(encap->tun_type);
+ if (rc < 0)
+ return rc;
+ match_crit = _MCDI_DWORD(inbuf, MAE_OUTER_RULE_INSERT_IN_FIELD_MATCH_CRITERIA);
+ /* The struct contains IP src and dst, and udp dport.
+ * So we actually need to filter on IP src and dst, L4 dport, and
+ * ipproto == udp.
+ */
+ MCDI_SET_DWORD(inbuf, MAE_OUTER_RULE_INSERT_IN_ENCAP_TYPE, rc);
+#ifdef CONFIG_IPV6
+ if (encap->src_ip | encap->dst_ip) {
+#endif
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_SRC_IP4_BE,
+ encap->src_ip);
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_SRC_IP4_BE_MASK,
+ ~(__be32)0);
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_DST_IP4_BE,
+ encap->dst_ip);
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_DST_IP4_BE_MASK,
+ ~(__be32)0);
+ MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_ETHER_TYPE_BE,
+ htons(ETH_P_IP));
+#ifdef CONFIG_IPV6
+ } else {
+ memcpy(MCDI_STRUCT_PTR(match_crit, MAE_ENC_FIELD_PAIRS_ENC_SRC_IP6_BE),
+ &encap->src_ip6, sizeof(encap->src_ip6));
+ memset(MCDI_STRUCT_PTR(match_crit, MAE_ENC_FIELD_PAIRS_ENC_SRC_IP6_BE_MASK),
+ 0xff, sizeof(encap->src_ip6));
+ memcpy(MCDI_STRUCT_PTR(match_crit, MAE_ENC_FIELD_PAIRS_ENC_DST_IP6_BE),
+ &encap->dst_ip6, sizeof(encap->dst_ip6));
+ memset(MCDI_STRUCT_PTR(match_crit, MAE_ENC_FIELD_PAIRS_ENC_DST_IP6_BE_MASK),
+ 0xff, sizeof(encap->dst_ip6));
+ MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_ETHER_TYPE_BE,
+ htons(ETH_P_IPV6));
+ }
+#endif
+ MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_ETHER_TYPE_BE_MASK,
+ ~(__be16)0);
+ MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_L4_DPORT_BE,
+ encap->udp_dport);
+ MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_L4_DPORT_BE_MASK,
+ ~(__be16)0);
+ MCDI_STRUCT_SET_BYTE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_IP_PROTO, IPPROTO_UDP);
+ MCDI_STRUCT_SET_BYTE(match_crit, MAE_ENC_FIELD_PAIRS_ENC_IP_PROTO_MASK, ~0);
+ rc = efx_mcdi_rpc(efx, MC_CMD_MAE_OUTER_RULE_INSERT, inbuf,
+ sizeof(inbuf), outbuf, sizeof(outbuf), &outlen);
+ if (rc)
+ return rc;
+ if (outlen < sizeof(outbuf))
+ return -EIO;
+ encap->fw_id = MCDI_DWORD(outbuf, MAE_OUTER_RULE_INSERT_OUT_OR_ID);
+ return 0;
+}
+
+int efx_mae_unregister_encap_match(struct efx_nic *efx,
+ struct efx_tc_encap_match *encap)
+{
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_OUTER_RULE_REMOVE_OUT_LEN(1));
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_OUTER_RULE_REMOVE_IN_LEN(1));
+ size_t outlen;
+ int rc;
+
+ MCDI_SET_DWORD(inbuf, MAE_OUTER_RULE_REMOVE_IN_OR_ID, encap->fw_id);
+ rc = efx_mcdi_rpc(efx, MC_CMD_MAE_OUTER_RULE_REMOVE, inbuf,
+ sizeof(inbuf), outbuf, sizeof(outbuf), &outlen);
+ if (rc)
+ return rc;
+ if (outlen < sizeof(outbuf))
+ return -EIO;
+ /* FW freed a different ID than we asked for, should also never happen.
+ * Warn because it means we've now got a different idea to the FW of
+ * what encap_mds exist, which could cause mayhem later.
+ */
+ if (WARN_ON(MCDI_DWORD(outbuf, MAE_OUTER_RULE_REMOVE_OUT_REMOVED_OR_ID) != encap->fw_id))
+ return -EIO;
+ /* We're probably about to free @encap, but let's just make sure its
+ * fw_id is blatted so that it won't look valid if it leaks out.
+ */
+ encap->fw_id = MC_CMD_MAE_OUTER_RULE_INSERT_OUT_OUTER_RULE_ID_NULL;
+ return 0;
+}
+
static int efx_mae_populate_match_criteria(MCDI_DECLARE_STRUCT_PTR(match_crit),
const struct efx_tc_match *match)
{
match->value.tcp_flags);
MCDI_STRUCT_SET_WORD_BE(match_crit, MAE_FIELD_MASK_VALUE_PAIRS_V2_TCP_FLAGS_BE_MASK,
match->mask.tcp_flags);
+ /* enc-keys are handled indirectly, through encap_match ID */
+ if (match->encap) {
+ MCDI_STRUCT_SET_DWORD(match_crit, MAE_FIELD_MASK_VALUE_PAIRS_V2_OUTER_RULE_ID,
+ match->encap->fw_id);
+ MCDI_STRUCT_SET_DWORD(match_crit, MAE_FIELD_MASK_VALUE_PAIRS_V2_OUTER_RULE_ID_MASK,
+ U32_MAX);
+ /* enc_keyid (VNI/VSID) is not part of the encap_match */
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_FIELD_MASK_VALUE_PAIRS_V2_ENC_VNET_ID_BE,
+ match->value.enc_keyid);
+ MCDI_STRUCT_SET_DWORD_BE(match_crit, MAE_FIELD_MASK_VALUE_PAIRS_V2_ENC_VNET_ID_BE_MASK,
+ match->mask.enc_keyid);
+ } else if (WARN_ON_ONCE(match->mask.enc_src_ip) ||
+ WARN_ON_ONCE(match->mask.enc_dst_ip) ||
+ WARN_ON_ONCE(!ipv6_addr_any(&match->mask.enc_src_ip6)) ||
+ WARN_ON_ONCE(!ipv6_addr_any(&match->mask.enc_dst_ip6)) ||
+ WARN_ON_ONCE(match->mask.enc_ip_tos) ||
+ WARN_ON_ONCE(match->mask.enc_ip_ttl) ||
+ WARN_ON_ONCE(match->mask.enc_sport) ||
+ WARN_ON_ONCE(match->mask.enc_dport) ||
+ WARN_ON_ONCE(match->mask.enc_keyid)) {
+ /* No enc-keys should appear in a rule without an encap_match */
+ return -EOPNOTSUPP;
+ }
return 0;
}
struct mae_caps {
u32 match_field_count;
+ u32 encap_types;
u32 action_prios;
u8 action_rule_fields[MAE_NUM_FIELDS];
+ u8 outer_rule_fields[MAE_NUM_FIELDS];
};
int efx_mae_get_caps(struct efx_nic *efx, struct mae_caps *caps);
int efx_mae_match_check_caps(struct efx_nic *efx,
const struct efx_tc_match_fields *mask,
struct netlink_ext_ack *extack);
+int efx_mae_check_encap_match_caps(struct efx_nic *efx, bool ipv6,
+ struct netlink_ext_ack *extack);
+int efx_mae_check_encap_type_supported(struct efx_nic *efx,
+ enum efx_encap_type typ);
int efx_mae_allocate_counter(struct efx_nic *efx, struct efx_tc_counter *cnt);
int efx_mae_free_counter(struct efx_nic *efx, struct efx_tc_counter *cnt);
int efx_mae_free_action_set_list(struct efx_nic *efx,
struct efx_tc_action_set_list *acts);
+int efx_mae_register_encap_match(struct efx_nic *efx,
+ struct efx_tc_encap_match *encap);
+int efx_mae_unregister_encap_match(struct efx_nic *efx,
+ struct efx_tc_encap_match *encap);
+
int efx_mae_insert_rule(struct efx_nic *efx, const struct efx_tc_match *match,
u32 prio, u32 acts_id, u32 *id);
int efx_mae_delete_rule(struct efx_nic *efx, u32 id);
((void)BUILD_BUG_ON_ZERO(_field ## _LEN != 2), \
le16_to_cpu(*(__force const __le16 *)MCDI_STRUCT_PTR(_buf, _field)))
/* Write a 16-bit field defined in the protocol as being big-endian. */
+#define MCDI_SET_WORD_BE(_buf, _field, _value) do { \
+ BUILD_BUG_ON(MC_CMD_ ## _field ## _LEN != 2); \
+ BUILD_BUG_ON(MC_CMD_ ## _field ## _OFST & 1); \
+ *(__force __be16 *)MCDI_PTR(_buf, _field) = (_value); \
+ } while (0)
#define MCDI_STRUCT_SET_WORD_BE(_buf, _field, _value) do { \
BUILD_BUG_ON(_field ## _LEN != 2); \
BUILD_BUG_ON(_field ## _OFST & 1); \
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/time.h>
+#include <linux/errno.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/pps_kernel.h>
/* How long an unmatched event or packet can be held */
#define PKT_EVENT_LIFETIME_MS 10
+/* How long unused unicast filters can be held */
+#define UCAST_FILTER_EXPIRY_JIFFIES msecs_to_jiffies(30000)
+
/* Offsets into PTP packet for identification. These offsets are from the
* start of the IP header, not the MAC header. Note that neither PTP V1 nor
* PTP V2 permit the use of IPV4 options.
#define PTP_MIN_LENGTH 63
-#define PTP_RXFILTERS_LEN 5
-
#define PTP_ADDR_IPV4 0xe0000181 /* 224.0.1.129 */
#define PTP_ADDR_IPV6 {0xff, 0x0e, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0x01, 0x81} /* ff0e::181 */
};
/**
+ * struct efx_ptp_rxfilter - Filter for PTP packets
+ * @list: Node of the list where the filter is added
+ * @ether_type: Network protocol of the filter (ETH_P_IP / ETH_P_IPV6)
+ * @loc_port: UDP port of the filter (PTP_EVENT_PORT / PTP_GENERAL_PORT)
+ * @loc_host: IPv4/v6 address of the filter
+ * @expiry: Time when the filter expires, in jiffies
+ * @handle: Handle ID for the MCDI filters table
+ */
+struct efx_ptp_rxfilter {
+ struct list_head list;
+ __be16 ether_type;
+ __be16 loc_port;
+ __be32 loc_host[4];
+ unsigned long expiry;
+ int handle;
+};
+
+/**
* struct efx_ptp_data - Precision Time Protocol (PTP) state
* @efx: The NIC context
* @channel: The PTP channel (Siena only)
* @rx_evts: Instantiated events (on evt_list and evt_free_list)
* @workwq: Work queue for processing pending PTP operations
* @work: Work task
+ * @cleanup_work: Work task for periodic cleanup
* @reset_required: A serious error has occurred and the PTP task needs to be
* reset (disable, enable).
- * @rxfilters: Receive filters when operating
- * @rxfilters_count: Num of installed rxfilters, should be == PTP_RXFILTERS_LEN
+ * @rxfilters_mcast: Receive filters for multicast PTP packets
+ * @rxfilters_ucast: Receive filters for unicast PTP packets
* @config: Current timestamp configuration
* @enabled: PTP operation enabled
* @mode: Mode in which PTP operating (PTP version)
struct efx_ptp_event_rx rx_evts[MAX_RECEIVE_EVENTS];
struct workqueue_struct *workwq;
struct work_struct work;
+ struct delayed_work cleanup_work;
bool reset_required;
- u32 rxfilters[PTP_RXFILTERS_LEN];
- size_t rxfilters_count;
+ struct list_head rxfilters_mcast;
+ struct list_head rxfilters_ucast;
struct hwtstamp_config config;
bool enabled;
unsigned int mode;
const struct timespec64 *e_ts);
static int efx_phc_enable(struct ptp_clock_info *ptp,
struct ptp_clock_request *request, int on);
+static int efx_ptp_insert_unicast_filter(struct efx_nic *efx,
+ struct sk_buff *skb);
bool efx_ptp_use_mac_tx_timestamps(struct efx_nic *efx)
{
tx_queue = efx_channel_get_tx_queue(ptp_data->channel, type);
if (tx_queue && tx_queue->timestamping) {
+ skb_get(skb);
+
/* This code invokes normal driver TX code which is always
* protected from softirqs when called from generic TX code,
* which in turn disables preemption. Look at __dev_queue_xmit
local_bh_disable();
efx_enqueue_skb(tx_queue, skb);
local_bh_enable();
+
+ /* We need to add the filters after enqueuing the packet.
+ * Otherwise, there's high latency in sending back the
+ * timestamp, causing ptp4l timeouts.
+ */
+ efx_ptp_insert_unicast_filter(efx, skb);
+ dev_consume_skb_any(skb);
} else {
WARN_ONCE(1, "PTP channel has no timestamped tx queue\n");
dev_kfree_skb_any(skb);
/* Transmit a PTP packet, via the MCDI interface, to the wire. */
static void efx_ptp_xmit_skb_mc(struct efx_nic *efx, struct sk_buff *skb)
{
+ MCDI_DECLARE_BUF(txtime, MC_CMD_PTP_OUT_TRANSMIT_LEN);
struct efx_ptp_data *ptp_data = efx->ptp_data;
struct skb_shared_hwtstamps timestamps;
- int rc = -EIO;
- MCDI_DECLARE_BUF(txtime, MC_CMD_PTP_OUT_TRANSMIT_LEN);
size_t len;
+ int rc;
MCDI_SET_DWORD(ptp_data->txbuf, PTP_IN_OP, MC_CMD_PTP_OP_TRANSMIT);
MCDI_SET_DWORD(ptp_data->txbuf, PTP_IN_PERIPH_ID, 0);
skb_tstamp_tx(skb, ×tamps);
- rc = 0;
+ /* Add the filters after sending back the timestamp to avoid delaying it;
+ * otherwise ptp4l may time out.
+ */
+ efx_ptp_insert_unicast_filter(efx, skb);
fail:
dev_kfree_skb_any(skb);
local_bh_enable();
}
-static void efx_ptp_remove_multicast_filters(struct efx_nic *efx)
+static struct efx_ptp_rxfilter *
+efx_ptp_find_filter(struct list_head *filter_list, struct efx_filter_spec *spec)
{
- struct efx_ptp_data *ptp = efx->ptp_data;
+ struct efx_ptp_rxfilter *rxfilter;
- while (ptp->rxfilters_count) {
- ptp->rxfilters_count--;
- efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED,
- ptp->rxfilters[ptp->rxfilters_count]);
+ list_for_each_entry(rxfilter, filter_list, list) {
+ if (rxfilter->ether_type == spec->ether_type &&
+ rxfilter->loc_port == spec->loc_port &&
+ !memcmp(rxfilter->loc_host, spec->loc_host, sizeof(spec->loc_host)))
+ return rxfilter;
}
+
+ return NULL;
+}
+
+static void efx_ptp_remove_one_filter(struct efx_nic *efx,
+ struct efx_ptp_rxfilter *rxfilter)
+{
+ efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED,
+ rxfilter->handle);
+ list_del(&rxfilter->list);
+ kfree(rxfilter);
+}
+
+static void efx_ptp_remove_filters(struct efx_nic *efx,
+ struct list_head *filter_list)
+{
+ struct efx_ptp_rxfilter *rxfilter, *tmp;
+
+ list_for_each_entry_safe(rxfilter, tmp, filter_list, list)
+ efx_ptp_remove_one_filter(efx, rxfilter);
}
static void efx_ptp_init_filter(struct efx_nic *efx,
}
static int efx_ptp_insert_filter(struct efx_nic *efx,
- struct efx_filter_spec *rxfilter)
+ struct list_head *filter_list,
+ struct efx_filter_spec *spec,
+ unsigned long expiry)
{
struct efx_ptp_data *ptp = efx->ptp_data;
+ struct efx_ptp_rxfilter *rxfilter;
+ int rc;
+
+ rxfilter = efx_ptp_find_filter(filter_list, spec);
+ if (rxfilter) {
+ rxfilter->expiry = expiry;
+ return 0;
+ }
+
+ rxfilter = kzalloc(sizeof(*rxfilter), GFP_KERNEL);
+ if (!rxfilter)
+ return -ENOMEM;
- int rc = efx_filter_insert_filter(efx, rxfilter, true);
+ rc = efx_filter_insert_filter(efx, spec, true);
if (rc < 0)
- return rc;
- ptp->rxfilters[ptp->rxfilters_count] = rc;
- ptp->rxfilters_count++;
+ goto fail;
+
+ rxfilter->handle = rc;
+ rxfilter->ether_type = spec->ether_type;
+ rxfilter->loc_port = spec->loc_port;
+ memcpy(rxfilter->loc_host, spec->loc_host, sizeof(spec->loc_host));
+ rxfilter->expiry = expiry;
+ list_add(&rxfilter->list, filter_list);
+
+ queue_delayed_work(ptp->workwq, &ptp->cleanup_work,
+ UCAST_FILTER_EXPIRY_JIFFIES + 1);
+
return 0;
+
+fail:
+ kfree(rxfilter);
+ return rc;
}
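Usage note, inferred from the callers in this file and stated as a sketch rather than a rule: multicast filters are installed with expiry == 0 (the cleanup worker only scans the unicast list, so the value is never consulted and those filters live until teardown), whereas unicast filters pass a real deadline so they can be aged out:

efx_ptp_insert_filter(efx, &ptp->rxfilters_mcast, &spec, 0);
efx_ptp_insert_filter(efx, &ptp->rxfilters_ucast, &spec,
		      jiffies + UCAST_FILTER_EXPIRY_JIFFIES);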
-static int efx_ptp_insert_ipv4_filter(struct efx_nic *efx, u16 port)
+static int efx_ptp_insert_ipv4_filter(struct efx_nic *efx,
+ struct list_head *filter_list,
+ __be32 addr, u16 port,
+ unsigned long expiry)
{
- struct efx_filter_spec rxfilter;
+ struct efx_filter_spec spec;
- efx_ptp_init_filter(efx, &rxfilter);
- efx_filter_set_ipv4_local(&rxfilter, IPPROTO_UDP, htonl(PTP_ADDR_IPV4),
- htons(port));
- return efx_ptp_insert_filter(efx, &rxfilter);
+ efx_ptp_init_filter(efx, &spec);
+ efx_filter_set_ipv4_local(&spec, IPPROTO_UDP, addr, htons(port));
+ return efx_ptp_insert_filter(efx, filter_list, &spec, expiry);
}
-static int efx_ptp_insert_ipv6_filter(struct efx_nic *efx, u16 port)
+static int efx_ptp_insert_ipv6_filter(struct efx_nic *efx,
+ struct list_head *filter_list,
+ struct in6_addr *addr, u16 port,
+ unsigned long expiry)
{
- const struct in6_addr addr = {{PTP_ADDR_IPV6}};
- struct efx_filter_spec rxfilter;
+ struct efx_filter_spec spec;
- efx_ptp_init_filter(efx, &rxfilter);
- efx_filter_set_ipv6_local(&rxfilter, IPPROTO_UDP, &addr, htons(port));
- return efx_ptp_insert_filter(efx, &rxfilter);
+ efx_ptp_init_filter(efx, &spec);
+ efx_filter_set_ipv6_local(&spec, IPPROTO_UDP, addr, htons(port));
+ return efx_ptp_insert_filter(efx, filter_list, &spec, expiry);
}
-static int efx_ptp_insert_eth_filter(struct efx_nic *efx)
+static int efx_ptp_insert_eth_multicast_filter(struct efx_nic *efx)
{
+ struct efx_ptp_data *ptp = efx->ptp_data;
const u8 addr[ETH_ALEN] = PTP_ADDR_ETHER;
- struct efx_filter_spec rxfilter;
+ struct efx_filter_spec spec;
- efx_ptp_init_filter(efx, &rxfilter);
- efx_filter_set_eth_local(&rxfilter, EFX_FILTER_VID_UNSPEC, addr);
- rxfilter.match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
- rxfilter.ether_type = htons(ETH_P_1588);
- return efx_ptp_insert_filter(efx, &rxfilter);
+ efx_ptp_init_filter(efx, &spec);
+ efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC, addr);
+ spec.match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
+ spec.ether_type = htons(ETH_P_1588);
+ return efx_ptp_insert_filter(efx, &ptp->rxfilters_mcast, &spec, 0);
}
static int efx_ptp_insert_multicast_filters(struct efx_nic *efx)
struct efx_ptp_data *ptp = efx->ptp_data;
int rc;
- if (!ptp->channel || ptp->rxfilters_count)
+ if (!ptp->channel || !list_empty(&ptp->rxfilters_mcast))
return 0;
/* Must filter on both event and general ports to ensure
* that there is no packet re-ordering.
*/
- rc = efx_ptp_insert_ipv4_filter(efx, PTP_EVENT_PORT);
+ rc = efx_ptp_insert_ipv4_filter(efx, &ptp->rxfilters_mcast,
+ htonl(PTP_ADDR_IPV4), PTP_EVENT_PORT,
+ 0);
if (rc < 0)
goto fail;
- rc = efx_ptp_insert_ipv4_filter(efx, PTP_GENERAL_PORT);
+ rc = efx_ptp_insert_ipv4_filter(efx, &ptp->rxfilters_mcast,
+ htonl(PTP_ADDR_IPV4), PTP_GENERAL_PORT,
+ 0);
if (rc < 0)
goto fail;
* PTP over IPv6 and Ethernet
*/
if (efx_ptp_use_mac_tx_timestamps(efx)) {
- rc = efx_ptp_insert_ipv6_filter(efx, PTP_EVENT_PORT);
+ struct in6_addr ipv6_addr = {{PTP_ADDR_IPV6}};
+
+ rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_mcast,
+ &ipv6_addr, PTP_EVENT_PORT, 0);
if (rc < 0)
goto fail;
- rc = efx_ptp_insert_ipv6_filter(efx, PTP_GENERAL_PORT);
+ rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_mcast,
+ &ipv6_addr, PTP_GENERAL_PORT, 0);
if (rc < 0)
goto fail;
- rc = efx_ptp_insert_eth_filter(efx);
+ rc = efx_ptp_insert_eth_multicast_filter(efx);
if (rc < 0)
goto fail;
}
return 0;
fail:
- efx_ptp_remove_multicast_filters(efx);
+ efx_ptp_remove_filters(efx, &ptp->rxfilters_mcast);
+ return rc;
+}
+
+static bool efx_ptp_valid_unicast_event_pkt(struct sk_buff *skb)
+{
+ if (skb->protocol == htons(ETH_P_IP)) {
+ return ip_hdr(skb)->daddr != htonl(PTP_ADDR_IPV4) &&
+ ip_hdr(skb)->protocol == IPPROTO_UDP &&
+ udp_hdr(skb)->source == htons(PTP_EVENT_PORT);
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ struct in6_addr mcast_addr = {{PTP_ADDR_IPV6}};
+
+ return !ipv6_addr_equal(&ipv6_hdr(skb)->daddr, &mcast_addr) &&
+ ipv6_hdr(skb)->nexthdr == IPPROTO_UDP &&
+ udp_hdr(skb)->source == htons(PTP_EVENT_PORT);
+ }
+ return false;
+}
+
+static int efx_ptp_insert_unicast_filter(struct efx_nic *efx,
+ struct sk_buff *skb)
+{
+ struct efx_ptp_data *ptp = efx->ptp_data;
+ unsigned long expiry;
+ int rc;
+
+ if (!efx_ptp_valid_unicast_event_pkt(skb))
+ return -EINVAL;
+
+ expiry = jiffies + UCAST_FILTER_EXPIRY_JIFFIES;
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ __be32 addr = ip_hdr(skb)->saddr;
+
+ rc = efx_ptp_insert_ipv4_filter(efx, &ptp->rxfilters_ucast,
+ addr, PTP_EVENT_PORT, expiry);
+ if (rc < 0)
+ goto out;
+
+ rc = efx_ptp_insert_ipv4_filter(efx, &ptp->rxfilters_ucast,
+ addr, PTP_GENERAL_PORT, expiry);
+ } else if (efx_ptp_use_mac_tx_timestamps(efx)) {
+ /* IPv6 PTP only supported by devices with MAC hw timestamp */
+ struct in6_addr *addr = &ipv6_hdr(skb)->saddr;
+
+ rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_ucast,
+ addr, PTP_EVENT_PORT, expiry);
+ if (rc < 0)
+ goto out;
+
+ rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_ucast,
+ addr, PTP_GENERAL_PORT, expiry);
+ } else {
+ return -EOPNOTSUPP;
+ }
+
+out:
return rc;
}
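A hedged reading of the function above: it runs on packets the host transmits, so ip_hdr(skb)->saddr is our own address; when we are sending unicast PTP from the event port, receive filters on our own address and both PTP ports are installed (or refreshed) so the peer's unicast replies are steered to the PTP channel. Both TX paths in this file follow the same pattern after handling the timestamp:

/* Sketch of the call site; the return value is intentionally ignored. */
efx_ptp_insert_unicast_filter(efx, skb);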
return 0;
fail:
- efx_ptp_remove_multicast_filters(efx);
+ efx_ptp_remove_filters(efx, &ptp->rxfilters_mcast);
return rc;
}
rc = efx_ptp_disable(efx);
- efx_ptp_remove_multicast_filters(efx);
+ efx_ptp_remove_filters(efx, &ptp->rxfilters_mcast);
+ efx_ptp_remove_filters(efx, &ptp->rxfilters_ucast);
/* Make sure RX packets are really delivered */
efx_ptp_deliver_rx_queue(&efx->ptp_data->rxq);
efx_ptp_process_rx(efx, skb);
}
+static void efx_ptp_cleanup_worker(struct work_struct *work)
+{
+ struct efx_ptp_data *ptp =
+ container_of(work, struct efx_ptp_data, cleanup_work.work);
+ struct efx_ptp_rxfilter *rxfilter, *tmp;
+
+ list_for_each_entry_safe(rxfilter, tmp, &ptp->rxfilters_ucast, list) {
+ if (time_is_before_jiffies(rxfilter->expiry))
+ efx_ptp_remove_one_filter(ptp->efx, rxfilter);
+ }
+
+ if (!list_empty(&ptp->rxfilters_ucast)) {
+ queue_delayed_work(ptp->workwq, &ptp->cleanup_work,
+ UCAST_FILTER_EXPIRY_JIFFIES + 1);
+ }
+}
+
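A note on the expiry test above: time_is_before_jiffies(x), from <linux/jiffies.h>, is equivalent to time_after(jiffies, x), i.e. it becomes true once x lies in the past. A minimal sketch of how the deadline is set and later checked:

rxfilter->expiry = jiffies + UCAST_FILTER_EXPIRY_JIFFIES;	/* ~30 s ahead */
/* ...later, in the cleanup worker: */
if (time_is_before_jiffies(rxfilter->expiry))	/* deadline has passed */
	efx_ptp_remove_one_filter(efx, rxfilter);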
static const struct ptp_clock_info efx_phc_clock_info = {
.owner = THIS_MODULE,
.name = "sfc",
}
INIT_WORK(&ptp->work, efx_ptp_worker);
+ INIT_DELAYED_WORK(&ptp->cleanup_work, efx_ptp_cleanup_worker);
ptp->config.flags = 0;
ptp->config.tx_type = HWTSTAMP_TX_OFF;
ptp->config.rx_filter = HWTSTAMP_FILTER_NONE;
for (pos = 0; pos < MAX_RECEIVE_EVENTS; pos++)
list_add(&ptp->rx_evts[pos].link, &ptp->evt_free_list);
+ INIT_LIST_HEAD(&ptp->rxfilters_mcast);
+ INIT_LIST_HEAD(&ptp->rxfilters_ucast);
+
/* Get the NIC PTP attributes and set up time conversions */
rc = efx_ptp_get_attributes(efx);
if (rc < 0)
(void)efx_ptp_disable(efx);
cancel_work_sync(&efx->ptp_data->work);
+ cancel_delayed_work_sync(&efx->ptp_data->cleanup_work);
if (efx->ptp_data->pps_workwq)
cancel_work_sync(&efx->ptp_data->pps_work);
#include <linux/ethtool.h>
#include <linux/topology.h>
#include <linux/gfp.h>
-#include <linux/aer.h>
#include <linux/interrupt.h>
#include "net_driver.h"
#include <net/gre.h>
efx_siena_fini_struct(efx);
free_netdev(efx->net_dev);
-
- pci_disable_pcie_error_reporting(pci_dev);
};
/* NIC VPD information
netif_warn(efx, probe, efx->net_dev,
"failed to create MTDs (%d)\n", rc);
- (void)pci_enable_pcie_error_reporting(pci_dev);
-
if (efx->type->udp_tnl_push_ports)
efx->type->udp_tnl_push_ports(efx);
*/
#include <net/pkt_cls.h>
+#include <net/vxlan.h>
+#include <net/geneve.h>
#include "tc.h"
#include "tc_bindings.h"
#include "mae.h"
#include "ef100_rep.h"
#include "efx.h"
+static enum efx_encap_type efx_tc_indr_netdev_type(struct net_device *net_dev)
+{
+ if (netif_is_vxlan(net_dev))
+ return EFX_ENCAP_TYPE_VXLAN;
+ if (netif_is_geneve(net_dev))
+ return EFX_ENCAP_TYPE_GENEVE;
+
+ return EFX_ENCAP_TYPE_NONE;
+}
+
#define EFX_EFV_PF NULL
/* Look up the representor information (efv) for a device.
* May return NULL for the PF (us), or an error pointer for a device that
return efv;
}
+/* Convert a driver-internal vport ID into an internal device (PF or VF) */
+static s64 efx_tc_flower_internal_mport(struct efx_nic *efx, struct efx_rep *efv)
+{
+ u32 mport;
+
+ if (IS_ERR(efv))
+ return PTR_ERR(efv);
+ if (!efv) /* device is PF (us) */
+ efx_mae_mport_uplink(efx, &mport);
+ else /* device is repr */
+ efx_mae_mport_mport(efx, efv->mport, &mport);
+ return mport;
+}
+
/* Convert a driver-internal vport ID into an external device (wire or VF) */
static s64 efx_tc_flower_external_mport(struct efx_nic *efx, struct efx_rep *efv)
{
return mport;
}
+static const struct rhashtable_params efx_tc_encap_match_ht_params = {
+ .key_len = offsetof(struct efx_tc_encap_match, linkage),
+ .key_offset = 0,
+ .head_offset = offsetof(struct efx_tc_encap_match, linkage),
+};
+
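A note on the key definition above (an observation, not new driver code): with key_offset == 0 and key_len == offsetof(..., linkage), the hash key is the leading portion of struct efx_tc_encap_match, so the identifying fields must be declared before @linkage, as they are in tc.h later in this patch; because entries are kzalloc()ed, any padding inside that range is zero and compares equal.

/* Key bytes = everything declared before 'linkage' in tc.h:
 *	__be32 src_ip, dst_ip;
 *	struct in6_addr src_ip6, dst_ip6;
 *	__be16 udp_dport;
 * Two encap matches therefore deduplicate iff all of these fields match.
 */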
static const struct rhashtable_params efx_tc_match_action_ht_params = {
.key_len = sizeof(unsigned long),
.key_offset = offsetof(struct efx_tc_flow_rule, cookie),
static void efx_tc_free_action_set(struct efx_nic *efx,
struct efx_tc_action_set *act, bool in_hw)
{
- /* Failure paths calling this on the 'running action' set in_hw=false,
+ /* Failure paths calling this on the 'cursor' action set in_hw=false,
* because if the alloc had succeeded we'd've put it in acts.list and
* not still have it in act.
*/
/* Don't kfree, as acts is embedded inside a struct efx_tc_flow_rule */
}
-static void efx_tc_delete_rule(struct efx_nic *efx, struct efx_tc_flow_rule *rule)
-{
- efx_mae_delete_rule(efx, rule->fw_id);
-
- /* Release entries in subsidiary tables */
- efx_tc_free_action_set_list(efx, &rule->acts, true);
- rule->fw_id = MC_CMD_MAE_ACTION_RULE_INSERT_OUT_ACTION_RULE_ID_NULL;
-}
-
static void efx_tc_flow_free(void *ptr, void *arg)
{
struct efx_tc_flow_rule *rule = ptr;
BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
BIT(FLOW_DISSECTOR_KEY_PORTS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) |
BIT(FLOW_DISSECTOR_KEY_TCP) |
BIT(FLOW_DISSECTOR_KEY_IP))) {
NL_SET_ERR_MSG_FMT_MOD(extack, "Unsupported flower keys %#x",
MAP_KEY_AND_MASK(PORTS, ports, src, l4_sport);
MAP_KEY_AND_MASK(PORTS, ports, dst, l4_dport);
MAP_KEY_AND_MASK(TCP, tcp, flags, tcp_flags);
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
+ struct flow_match_control fm;
+ flow_rule_match_enc_control(rule, &fm);
+ if (fm.mask->flags) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Unsupported match on enc_control.flags %#x",
+ fm.mask->flags);
+ return -EOPNOTSUPP;
+ }
+ if (!IS_ALL_ONES(fm.mask->addr_type)) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Unsupported enc addr_type mask %u (key %u)",
+ fm.mask->addr_type,
+ fm.key->addr_type);
+ return -EOPNOTSUPP;
+ }
+ switch (fm.key->addr_type) {
+ case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
+ MAP_ENC_KEY_AND_MASK(IPV4_ADDRS, ipv4_addrs, enc_ipv4_addrs,
+ src, enc_src_ip);
+ MAP_ENC_KEY_AND_MASK(IPV4_ADDRS, ipv4_addrs, enc_ipv4_addrs,
+ dst, enc_dst_ip);
+ break;
+#ifdef CONFIG_IPV6
+ case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
+ MAP_ENC_KEY_AND_MASK(IPV6_ADDRS, ipv6_addrs, enc_ipv6_addrs,
+ src, enc_src_ip6);
+ MAP_ENC_KEY_AND_MASK(IPV6_ADDRS, ipv6_addrs, enc_ipv6_addrs,
+ dst, enc_dst_ip6);
+ break;
+#endif
+ default:
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Unsupported enc addr_type %u (supported are IPv4, IPv6)",
+ fm.key->addr_type);
+ return -EOPNOTSUPP;
+ }
+ MAP_ENC_KEY_AND_MASK(IP, ip, enc_ip, tos, enc_ip_tos);
+ MAP_ENC_KEY_AND_MASK(IP, ip, enc_ip, ttl, enc_ip_ttl);
+ MAP_ENC_KEY_AND_MASK(PORTS, ports, enc_ports, src, enc_sport);
+ MAP_ENC_KEY_AND_MASK(PORTS, ports, enc_ports, dst, enc_dport);
+ MAP_ENC_KEY_AND_MASK(KEYID, enc_keyid, enc_keyid, keyid, enc_keyid);
+ } else if (dissector->used_keys &
+ (BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_IP) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS))) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Flower enc keys require enc_control (keys: %#x)",
+ dissector->used_keys);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int efx_tc_flower_record_encap_match(struct efx_nic *efx,
+ struct efx_tc_match *match,
+ enum efx_encap_type type,
+ struct netlink_ext_ack *extack)
+{
+ struct efx_tc_encap_match *encap, *old;
+ bool ipv6 = false;
+ int rc;
+
+ /* We require that the socket-defining fields (IP addrs and UDP dest
+ * port) are present and exact-match. Other fields are currently not
+ * allowed. This meets what OVS will ask for, and means that we don't
+ * need to handle difficult checks for overlapping matches as could
+ * come up if we allowed masks or varying sets of match fields.
+ */
+ if (match->mask.enc_dst_ip | match->mask.enc_src_ip) {
+ if (!IS_ALL_ONES(match->mask.enc_dst_ip)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match is not exact on dst IP address");
+ return -EOPNOTSUPP;
+ }
+ if (!IS_ALL_ONES(match->mask.enc_src_ip)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match is not exact on src IP address");
+ return -EOPNOTSUPP;
+ }
+#ifdef CONFIG_IPV6
+ if (!ipv6_addr_any(&match->mask.enc_dst_ip6) ||
+ !ipv6_addr_any(&match->mask.enc_src_ip6)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match on both IPv4 and IPv6, don't understand");
+ return -EOPNOTSUPP;
+ }
+ } else {
+ ipv6 = true;
+ if (!efx_ipv6_addr_all_ones(&match->mask.enc_dst_ip6)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match is not exact on dst IP address");
+ return -EOPNOTSUPP;
+ }
+ if (!efx_ipv6_addr_all_ones(&match->mask.enc_src_ip6)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match is not exact on src IP address");
+ return -EOPNOTSUPP;
+ }
+#endif
+ }
+ if (!IS_ALL_ONES(match->mask.enc_dport)) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match is not exact on dst UDP port");
+ return -EOPNOTSUPP;
+ }
+ if (match->mask.enc_sport) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match on src UDP port not supported");
+ return -EOPNOTSUPP;
+ }
+ if (match->mask.enc_ip_tos) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match on IP ToS not supported");
+ return -EOPNOTSUPP;
+ }
+ if (match->mask.enc_ip_ttl) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match on IP TTL not supported");
+ return -EOPNOTSUPP;
+ }
+
+ rc = efx_mae_check_encap_match_caps(efx, ipv6, extack);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "MAE hw reports no support for IPv%d encap matches",
+ ipv6 ? 6 : 4);
+ return -EOPNOTSUPP;
+ }
+
+ encap = kzalloc(sizeof(*encap), GFP_USER);
+ if (!encap)
+ return -ENOMEM;
+ encap->src_ip = match->value.enc_src_ip;
+ encap->dst_ip = match->value.enc_dst_ip;
+#ifdef CONFIG_IPV6
+ encap->src_ip6 = match->value.enc_src_ip6;
+ encap->dst_ip6 = match->value.enc_dst_ip6;
+#endif
+ encap->udp_dport = match->value.enc_dport;
+ encap->tun_type = type;
+ old = rhashtable_lookup_get_insert_fast(&efx->tc->encap_match_ht,
+ &encap->linkage,
+ efx_tc_encap_match_ht_params);
+ if (old) {
+ /* don't need our new entry */
+ kfree(encap);
+ if (old->tun_type != type) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Egress encap match with conflicting tun_type %u != %u",
+ old->tun_type, type);
+ return -EEXIST;
+ }
+ if (!refcount_inc_not_zero(&old->ref))
+ return -EAGAIN;
+ /* existing entry found */
+ encap = old;
+ } else {
+ rc = efx_mae_register_encap_match(efx, encap);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to record egress encap match in HW");
+ goto fail;
+ }
+ refcount_set(&encap->ref, 1);
+ }
+ match->encap = encap;
return 0;
+fail:
+ rhashtable_remove_fast(&efx->tc->encap_match_ht, &encap->linkage,
+ efx_tc_encap_match_ht_params);
+ kfree(encap);
+ return rc;
+}
+
+static void efx_tc_flower_release_encap_match(struct efx_nic *efx,
+ struct efx_tc_encap_match *encap)
+{
+ int rc;
+
+ if (!refcount_dec_and_test(&encap->ref))
+ return; /* still in use */
+
+ rc = efx_mae_unregister_encap_match(efx, encap);
+ if (rc)
+ /* Display message but carry on and remove entry from our
+ * SW tables, because there's not much we can do about it.
+ */
+ netif_err(efx, drv, efx->net_dev,
+ "Failed to release encap match %#x, rc %d\n",
+ encap->fw_id, rc);
+ rhashtable_remove_fast(&efx->tc->encap_match_ht, &encap->linkage,
+ efx_tc_encap_match_ht_params);
+ kfree(encap);
+}
+
+static void efx_tc_delete_rule(struct efx_nic *efx, struct efx_tc_flow_rule *rule)
+{
+ efx_mae_delete_rule(efx, rule->fw_id);
+
+ /* Release entries in subsidiary tables */
+ efx_tc_free_action_set_list(efx, &rule->acts, true);
+ if (rule->match.encap)
+ efx_tc_flower_release_encap_match(efx, rule->match.encap);
+ rule->fw_id = MC_CMD_MAE_ACTION_RULE_INSERT_OUT_ACTION_RULE_ID_NULL;
+}
+
+static const char *efx_tc_encap_type_name(enum efx_encap_type typ)
+{
+ switch (typ) {
+ case EFX_ENCAP_TYPE_NONE:
+ return "none";
+ case EFX_ENCAP_TYPE_VXLAN:
+ return "vxlan";
+ case EFX_ENCAP_TYPE_GENEVE:
+ return "geneve";
+ default:
+ pr_warn_once("Unknown efx_encap_type %d encountered\n", typ);
+ return "unknown";
+ }
}
/* For details of action order constraints refer to SF-123102-TC-1§12.6.1 */
enum efx_tc_action_order {
+ EFX_TC_AO_DECAP,
+ EFX_TC_AO_VLAN_POP,
+ EFX_TC_AO_VLAN_PUSH,
EFX_TC_AO_COUNT,
EFX_TC_AO_DELIVER
};
enum efx_tc_action_order new)
{
switch (new) {
+ case EFX_TC_AO_DECAP:
+ if (act->decap)
+ return false;
+ fallthrough;
+ case EFX_TC_AO_VLAN_POP:
+ if (act->vlan_pop >= 2)
+ return false;
+ /* If we've already pushed a VLAN, we can't then pop it;
+ * the hardware would instead try to pop an existing VLAN
+ * before pushing the new one.
+ */
+ if (act->vlan_push)
+ return false;
+ fallthrough;
+ case EFX_TC_AO_VLAN_PUSH:
+ if (act->vlan_push >= 2)
+ return false;
+ fallthrough;
case EFX_TC_AO_COUNT:
if (act->count)
return false;
}
}
+static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
+ struct net_device *net_dev,
+ struct flow_cls_offload *tc)
+{
+ struct flow_rule *fr = flow_cls_offload_flow_rule(tc);
+ struct netlink_ext_ack *extack = tc->common.extack;
+ struct efx_tc_flow_rule *rule = NULL, *old = NULL;
+ struct efx_tc_action_set *act = NULL;
+ bool found = false, uplinked = false;
+ const struct flow_action_entry *fa;
+ struct efx_tc_match match;
+ struct efx_rep *to_efv;
+ s64 rc;
+ int i;
+
+ /* Parse match */
+ memset(&match, 0, sizeof(match));
+ rc = efx_tc_flower_parse_match(efx, fr, &match, NULL);
+ if (rc)
+ return rc;
+ /* The rule as given to us doesn't specify a source netdevice.
+ * But, determining whether packets from a VF should match it is
+ * complicated, so leave those to the software slowpath: qualify
+ * the filter with source m-port == wire.
+ */
+ rc = efx_tc_flower_external_mport(efx, EFX_EFV_PF);
+ if (rc < 0) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to identify ingress m-port for foreign filter");
+ return rc;
+ }
+ match.value.ingress_port = rc;
+ match.mask.ingress_port = ~0;
+
+ if (tc->common.chain_index) {
+ NL_SET_ERR_MSG_MOD(extack, "No support for nonzero chain_index");
+ return -EOPNOTSUPP;
+ }
+ match.mask.recirc_id = 0xff;
+
+ flow_action_for_each(i, fa, &fr->action) {
+ switch (fa->id) {
+ case FLOW_ACTION_REDIRECT:
+ case FLOW_ACTION_MIRRED: /* mirred means mirror here */
+ to_efv = efx_tc_flower_lookup_efv(efx, fa->dev);
+ if (IS_ERR(to_efv))
+ continue;
+ found = true;
+ break;
+ default:
+ break;
+ }
+ }
+ if (!found) { /* We don't care. */
+ netif_dbg(efx, drv, efx->net_dev,
+ "Ignoring foreign filter that doesn't egdev us\n");
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+
+ rc = efx_mae_match_check_caps(efx, &match.mask, NULL);
+ if (rc)
+ goto release;
+
+ if (efx_tc_match_is_encap(&match.mask)) {
+ enum efx_encap_type type;
+
+ type = efx_tc_indr_netdev_type(net_dev);
+ if (type == EFX_ENCAP_TYPE_NONE) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Egress encap match on unsupported tunnel device");
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+
+ rc = efx_mae_check_encap_type_supported(efx, type);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Firmware reports no support for %s encap match",
+ efx_tc_encap_type_name(type));
+ goto release;
+ }
+
+ rc = efx_tc_flower_record_encap_match(efx, &match, type,
+ extack);
+ if (rc)
+ goto release;
+ } else {
+ /* This is not a tunnel decap rule, ignore it */
+ netif_dbg(efx, drv, efx->net_dev,
+ "Ignoring foreign filter without encap match\n");
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+
+ rule = kzalloc(sizeof(*rule), GFP_USER);
+ if (!rule) {
+ rc = -ENOMEM;
+ goto release;
+ }
+ INIT_LIST_HEAD(&rule->acts.list);
+ rule->cookie = tc->cookie;
+ old = rhashtable_lookup_get_insert_fast(&efx->tc->match_action_ht,
+ &rule->linkage,
+ efx_tc_match_action_ht_params);
+ if (old) {
+ netif_dbg(efx, drv, efx->net_dev,
+ "Ignoring already-offloaded rule (cookie %lx)\n",
+ tc->cookie);
+ rc = -EEXIST;
+ goto release;
+ }
+
+ act = kzalloc(sizeof(*act), GFP_USER);
+ if (!act) {
+ rc = -ENOMEM;
+ goto release;
+ }
+
+ /* Parse actions. For foreign rules we only support decap & redirect.
+ * See corresponding code in efx_tc_flower_replace() for theory of
+ * operation & how 'act' cursor is used.
+ */
+ flow_action_for_each(i, fa, &fr->action) {
+ struct efx_tc_action_set save;
+
+ switch (fa->id) {
+ case FLOW_ACTION_REDIRECT:
+ case FLOW_ACTION_MIRRED:
+ /* See corresponding code in efx_tc_flower_replace() for
+ * long explanations of what's going on here.
+ */
+ save = *act;
+ if (fa->hw_stats) {
+ struct efx_tc_counter_index *ctr;
+
+ if (!(fa->hw_stats & FLOW_ACTION_HW_STATS_DELAYED)) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "hw_stats_type %u not supported (only 'delayed')",
+ fa->hw_stats);
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+ if (!efx_tc_flower_action_order_ok(act, EFX_TC_AO_COUNT)) {
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+
+ ctr = efx_tc_flower_get_counter_index(efx,
+ tc->cookie,
+ EFX_TC_COUNTER_TYPE_AR);
+ if (IS_ERR(ctr)) {
+ rc = PTR_ERR(ctr);
+ NL_SET_ERR_MSG_MOD(extack, "Failed to obtain a counter");
+ goto release;
+ }
+ act->count = ctr;
+ }
+
+ if (!efx_tc_flower_action_order_ok(act, EFX_TC_AO_DELIVER)) {
+ /* can't happen */
+ rc = -EOPNOTSUPP;
+ NL_SET_ERR_MSG_MOD(extack,
+ "Deliver action violates action order (can't happen)");
+ goto release;
+ }
+ to_efv = efx_tc_flower_lookup_efv(efx, fa->dev);
+ /* PF implies egdev is us, in which case we really
+ * want to deliver to the uplink (because this is an
+ * ingress filter). If we don't recognise the egdev
+ * at all, then we'd better trap so SW can handle it.
+ */
+ if (IS_ERR(to_efv))
+ to_efv = EFX_EFV_PF;
+ if (to_efv == EFX_EFV_PF) {
+ if (uplinked)
+ break;
+ uplinked = true;
+ }
+ rc = efx_tc_flower_internal_mport(efx, to_efv);
+ if (rc < 0) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to identify egress m-port");
+ goto release;
+ }
+ act->dest_mport = rc;
+ act->deliver = 1;
+ rc = efx_mae_alloc_action_set(efx, act);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Failed to write action set to hw (mirred)");
+ goto release;
+ }
+ list_add_tail(&act->list, &rule->acts.list);
+ act = NULL;
+ if (fa->id == FLOW_ACTION_REDIRECT)
+ break; /* end of the line */
+ /* Mirror, so continue on with saved act */
+ act = kzalloc(sizeof(*act), GFP_USER);
+ if (!act) {
+ rc = -ENOMEM;
+ goto release;
+ }
+ *act = save;
+ break;
+ case FLOW_ACTION_TUNNEL_DECAP:
+ if (!efx_tc_flower_action_order_ok(act, EFX_TC_AO_DECAP)) {
+ rc = -EINVAL;
+ NL_SET_ERR_MSG_MOD(extack, "Decap action violates action order");
+ goto release;
+ }
+ act->decap = 1;
+ /* If we previously delivered/trapped to uplink, now
+ * that we've decapped we'll want another copy if we
+ * try to deliver/trap to uplink again.
+ */
+ uplinked = false;
+ break;
+ default:
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Unhandled action %u",
+ fa->id);
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+ }
+
+ if (act) {
+ if (!uplinked) {
+ /* Not shot/redirected, so deliver to default dest (which is
+ * the uplink, as this is an ingress filter)
+ */
+ efx_mae_mport_uplink(efx, &act->dest_mport);
+ act->deliver = 1;
+ }
+ rc = efx_mae_alloc_action_set(efx, act);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to write action set to hw (deliver)");
+ goto release;
+ }
+ list_add_tail(&act->list, &rule->acts.list);
+ act = NULL; /* Prevent double-free in error path */
+ }
+
+ rule->match = match;
+
+ netif_dbg(efx, drv, efx->net_dev,
+ "Successfully parsed foreign filter (cookie %lx)\n",
+ tc->cookie);
+
+ rc = efx_mae_alloc_action_set_list(efx, &rule->acts);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to write action set list to hw");
+ goto release;
+ }
+ rc = efx_mae_insert_rule(efx, &rule->match, EFX_TC_PRIO_TC,
+ rule->acts.fw_id, &rule->fw_id);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to insert rule in hw");
+ goto release_acts;
+ }
+ return 0;
+
+release_acts:
+ efx_mae_free_action_set_list(efx, &rule->acts);
+release:
+ /* We failed to insert the rule, so free up any entries we created in
+ * subsidiary tables.
+ */
+ if (act)
+ efx_tc_free_action_set(efx, act, false);
+ if (rule) {
+ rhashtable_remove_fast(&efx->tc->match_action_ht,
+ &rule->linkage,
+ efx_tc_match_action_ht_params);
+ efx_tc_free_action_set_list(efx, &rule->acts, false);
+ }
+ kfree(rule);
+ if (match.encap)
+ efx_tc_flower_release_encap_match(efx, match.encap);
+ return rc;
+}
+
static int efx_tc_flower_replace(struct efx_nic *efx,
struct net_device *net_dev,
struct flow_cls_offload *tc,
from_efv = efx_tc_flower_lookup_efv(efx, net_dev);
if (IS_ERR(from_efv)) {
- /* Might be a tunnel decap rule from an indirect block.
- * Support for those not implemented yet.
- */
- return -EOPNOTSUPP;
+ /* Not from our PF or representors, so probably a tunnel dev */
+ return efx_tc_flower_replace_foreign(efx, net_dev, tc);
}
if (efv != from_efv) {
rc = efx_tc_flower_parse_match(efx, fr, &match, extack);
if (rc)
return rc;
+ if (efx_tc_match_is_encap(&match.mask)) {
+ NL_SET_ERR_MSG_MOD(extack, "Ingress enc_key matches not supported");
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
if (tc->common.chain_index) {
NL_SET_ERR_MSG_MOD(extack, "No support for nonzero chain_index");
goto release;
}
+ /**
+ * DOC: TC action translation
+ *
+ * Actions in TC are sequential and cumulative, with delivery actions
+ * potentially anywhere in the order. The EF100 MAE, however, takes
+ * an 'action set list' consisting of 'action sets', each of which is
+ * applied to the _original_ packet, and consists of a set of optional
+ * actions in a fixed order with delivery at the end.
+ * To translate between these two models, we maintain a 'cursor', @act,
+ * which describes the cumulative effect of all the packet-mutating
+ * actions encountered so far; on handling a delivery (mirred or drop)
+ * action, once the action-set has been inserted into hardware, we
+ * append @act to the action-set list (@rule->acts); if this is a pipe
+ * action (mirred mirror) we then allocate a new @act with a copy of
+ * the cursor state _before_ the delivery action, otherwise we set @act
+ * to %NULL.
+ * This ensures that every allocated action-set is either attached to
+ * @rule->acts or pointed to by @act (and never both), and that only
+ * those action-sets in @rule->acts exist in hardware. Consequently,
+ * in the failure path, @act only needs to be freed in memory, whereas
+ * for @rule->acts we remove each action-set from hardware before
+ * freeing it (efx_tc_free_action_set_list()), even if the action-set
+ * list itself is not in hardware.
+ */
flow_action_for_each(i, fa, &fr->action) {
struct efx_tc_action_set save;
+ u16 tci;
if (!act) {
/* more actions after a non-pipe action */
}
*act = save;
break;
+ case FLOW_ACTION_VLAN_POP:
+ if (act->vlan_push) {
+ act->vlan_push--;
+ } else if (efx_tc_flower_action_order_ok(act, EFX_TC_AO_VLAN_POP)) {
+ act->vlan_pop++;
+ } else {
+ NL_SET_ERR_MSG_MOD(extack,
+ "More than two VLAN pops, or action order violated");
+ rc = -EINVAL;
+ goto release;
+ }
+ break;
+ case FLOW_ACTION_VLAN_PUSH:
+ if (!efx_tc_flower_action_order_ok(act, EFX_TC_AO_VLAN_PUSH)) {
+ rc = -EINVAL;
+ NL_SET_ERR_MSG_MOD(extack,
+ "More than two VLAN pushes, or action order violated");
+ goto release;
+ }
+ tci = fa->vlan.vid & VLAN_VID_MASK;
+ tci |= fa->vlan.prio << VLAN_PRIO_SHIFT;
+ act->vlan_tci[act->vlan_push] = cpu_to_be16(tci);
+ act->vlan_proto[act->vlan_push] = fa->vlan.proto;
+ act->vlan_push++;
+ break;
default:
NL_SET_ERR_MSG_FMT_MOD(extack, "Unhandled action %u",
fa->id);
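As a worked illustration of the DOC: TC action translation comment above (illustrative only, not part of the patch): for the TC action list "vlan pop, mirred mirror repA, vlan push, mirred redirect repB", the cursor accumulates vlan_pop=1, is committed as action set 0 delivering to repA, and a copy of the pre-delivery cursor then picks up vlan_push=1 and is committed as action set 1 delivering to repB.

/* Resulting action sets, each applied to the original packet:
 *   set 0: vlan_pop=1,                deliver to repA
 *   set 1: vlan_pop=1, vlan_push=1,   deliver to repB
 */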
efx->tc->up = false;
}
+/* At teardown time, all TC filter rules (and thus all resources they created)
+ * should already have been removed. If we find any in our hashtables, make a
+ * cursory attempt to clean up the software side.
+ */
+static void efx_tc_encap_match_free(void *ptr, void *__unused)
+{
+ struct efx_tc_encap_match *encap = ptr;
+
+ WARN_ON(refcount_read(&encap->ref));
+ kfree(encap);
+}
+
int efx_init_struct_tc(struct efx_nic *efx)
{
int rc;
rc = efx_tc_init_counters(efx);
if (rc < 0)
goto fail_counters;
+ rc = rhashtable_init(&efx->tc->encap_match_ht, &efx_tc_encap_match_ht_params);
+ if (rc < 0)
+ goto fail_encap_match_ht;
rc = rhashtable_init(&efx->tc->match_action_ht, &efx_tc_match_action_ht_params);
if (rc < 0)
goto fail_match_action_ht;
efx->extra_channel_type[EFX_EXTRA_CHANNEL_TC] = &efx_tc_channel_type;
return 0;
fail_match_action_ht:
+ rhashtable_destroy(&efx->tc->encap_match_ht);
+fail_encap_match_ht:
efx_tc_destroy_counters(efx);
fail_counters:
mutex_destroy(&efx->tc->mutex);
MC_CMD_MAE_ACTION_RULE_INSERT_OUT_ACTION_RULE_ID_NULL);
rhashtable_free_and_destroy(&efx->tc->match_action_ht, efx_tc_flow_free,
efx);
+ rhashtable_free_and_destroy(&efx->tc->encap_match_ht,
+ efx_tc_encap_match_free, NULL);
efx_tc_fini_counters(efx);
mutex_unlock(&efx->tc->mutex);
mutex_destroy(&efx->tc->mutex);
#define IS_ALL_ONES(v) (!(typeof (v))~(v))
+#ifdef CONFIG_IPV6
+static inline bool efx_ipv6_addr_all_ones(struct in6_addr *addr)
+{
+ return !memchr_inv(addr, 0xff, sizeof(*addr));
+}
+#endif
+
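A short note on the helper above: memchr_inv(p, c, n) returns the first byte of p that differs from c, or NULL when all n bytes equal c, so the helper is true only for a full all-ones (/128) IPv6 mask. A minimal sketch:

struct in6_addr m = { .s6_addr = { [0 ... 15] = 0xff } };	/* all-ones mask */
bool exact = efx_ipv6_addr_all_ones(&m);			/* true */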
struct efx_tc_action_set {
+ u16 vlan_push:2;
+ u16 vlan_pop:2;
+ u16 decap:1;
u16 deliver:1;
+ __be16 vlan_tci[2]; /* TCIs for vlan_push */
+ __be16 vlan_proto[2]; /* Ethertypes for vlan_push */
struct efx_tc_counter_index *count;
u32 dest_mport;
u32 fw_id; /* index of this entry in firmware actions table */
/* L4 */
__be16 l4_sport, l4_dport; /* Ports (UDP, TCP) */
__be16 tcp_flags;
+ /* Encap. The following are *outer* fields. Note that there are no
+ * outer eth (L2) fields; this is because TC doesn't have them.
+ */
+ __be32 enc_src_ip, enc_dst_ip;
+ struct in6_addr enc_src_ip6, enc_dst_ip6;
+ u8 enc_ip_tos, enc_ip_ttl;
+ __be16 enc_sport, enc_dport;
+ __be32 enc_keyid; /* e.g. VNI, VSID */
+};
+
+static inline bool efx_tc_match_is_encap(const struct efx_tc_match_fields *mask)
+{
+ return mask->enc_src_ip || mask->enc_dst_ip ||
+ !ipv6_addr_any(&mask->enc_src_ip6) ||
+ !ipv6_addr_any(&mask->enc_dst_ip6) || mask->enc_ip_tos ||
+ mask->enc_ip_ttl || mask->enc_sport || mask->enc_dport;
+}
+
+struct efx_tc_encap_match {
+ __be32 src_ip, dst_ip;
+ struct in6_addr src_ip6, dst_ip6;
+ __be16 udp_dport;
+ struct rhash_head linkage;
+ enum efx_encap_type tun_type;
+ refcount_t ref;
+ u32 fw_id; /* index of this entry in firmware encap match table */
};
struct efx_tc_match {
struct efx_tc_match_fields value;
struct efx_tc_match_fields mask;
+ struct efx_tc_encap_match *encap;
};
struct efx_tc_action_set_list {
* @mutex: Used to serialise operations on TC hashtables
* @counter_ht: Hashtable of TC counters (FW IDs and counter values)
* @counter_id_ht: Hashtable mapping TC counter cookies to counters
+ * @encap_match_ht: Hashtable of TC encap matches
* @match_action_ht: Hashtable of TC match-action rules
* @reps_mport_id: MAE port allocated for representor RX
* @reps_filter_uc: VNIC filter for representor unicast RX (promisc)
struct mutex mutex;
struct rhashtable counter_ht;
struct rhashtable counter_id_ht;
+ struct rhashtable encap_match_ht;
struct rhashtable match_action_ht;
u32 reps_mport_id, reps_mport_vport_id;
s32 reps_filter_uc, reps_filter_mc;
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/errno.h>
#include <linux/workqueue.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_gpio.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
static int smsc911x_mii_probe(struct net_device *dev)
{
struct smsc911x_data *pdata = netdev_priv(dev);
- struct phy_device *phydev = NULL;
+ struct phy_device *phydev;
int ret;
/* find the first phy */
free_irq(dev->irq, dev);
mii_free_out:
phy_disconnect(dev->phydev);
- dev->phydev = NULL;
out:
pm_runtime_put(dev->dev.parent);
return retval;
if (dev->phydev) {
phy_stop(dev->phydev);
phy_disconnect(dev->phydev);
- dev->phydev = NULL;
}
netif_carrier_off(dev);
pm_runtime_put(dev->dev.parent);
.driver = {
.name = STMMAC_RESOURCE_NAME,
.pm = &stmmac_pltfr_pm_ops,
- .of_match_table = of_match_ptr(dwmac_generic_match),
+ .of_match_table = dwmac_generic_match,
},
};
module_platform_driver(dwmac_generic_driver);
#define MX93_GPR_ENET_QOS_INTF_SEL_RGMII (0x1 << 1)
#define MX93_GPR_ENET_QOS_CLK_GEN_EN (0x1 << 0)
+#define DMA_BUS_MODE 0x00001000
+#define DMA_BUS_MODE_SFT_RESET (0x1 << 0)
+#define RMII_RESET_SPEED (0x3 << 14)
+
struct imx_dwmac_ops {
u32 addr_width;
bool mac_rgmii_txclk_auto_adj;
+ int (*fix_soc_reset)(void *priv, void __iomem *ioaddr);
int (*set_intf_mode)(struct plat_stmmacenet_data *plat_dat);
};
dev_err(dwmac->dev, "failed to set tx rate %lu\n", rate);
}
+static int imx_dwmac_mx93_reset(void *priv, void __iomem *ioaddr)
+{
+ struct plat_stmmacenet_data *plat_dat = priv;
+ u32 value = readl(ioaddr + DMA_BUS_MODE);
+
+ /* DMA SW reset */
+ value |= DMA_BUS_MODE_SFT_RESET;
+ writel(value, ioaddr + DMA_BUS_MODE);
+
+ if (plat_dat->interface == PHY_INTERFACE_MODE_RMII) {
+ usleep_range(100, 200);
+ writel(RMII_RESET_SPEED, ioaddr + MAC_CTRL_REG);
+ }
+
+ return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value,
+ !(value & DMA_BUS_MODE_SFT_RESET),
+ 10000, 1000000);
+}
+
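For reference, readl_poll_timeout(addr, val, cond, sleep_us, timeout_us) re-reads the register roughly every sleep_us microseconds until cond is true or timeout_us elapses, returning 0 on success or -ETIMEDOUT; here it waits up to one second for the DMA soft-reset bit to self-clear. A roughly equivalent open-coded sketch:

unsigned long deadline = jiffies + msecs_to_jiffies(1000);

do {
	if (!(readl(ioaddr + DMA_BUS_MODE) & DMA_BUS_MODE_SFT_RESET))
		return 0;
	usleep_range(10000, 11000);		/* ~10 ms between polls */
} while (time_before(jiffies, deadline));
return -ETIMEDOUT;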
static int
imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev)
{
if (ret)
goto err_dwmac_init;
+ dwmac->plat_dat->fix_soc_reset = dwmac->ops->fix_soc_reset;
+
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
goto err_drv_probe;
.addr_width = 32,
.mac_rgmii_txclk_auto_adj = true,
.set_intf_mode = imx93_set_intf_mode,
+ .fix_soc_reset = imx_dwmac_mx93_reset,
};
static const struct of_device_id imx_dwmac_match[] = {
.driver = {
.name = "qcom-ethqos",
.pm = &stmmac_pltfr_pm_ops,
- .of_match_table = of_match_ptr(qcom_ethqos_match),
+ .of_match_table = qcom_ethqos_match,
},
};
module_platform_driver(qcom_ethqos_driver);
return 0;
}
+int stmmac_reset(struct stmmac_priv *priv, void __iomem *ioaddr)
+{
+ struct plat_stmmacenet_data *plat = priv ? priv->plat : NULL;
+
+ if (!priv)
+ return -EINVAL;
+
+ if (plat && plat->fix_soc_reset)
+ return plat->fix_soc_reset(plat, ioaddr);
+
+ return stmmac_do_callback(priv, dma, reset, ioaddr);
+}
+
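The wrapper above replaces the old stmmac_reset() macro so that platform glue can override the generic DMA soft reset; the imx93 glue opts in earlier in this series via imx_dwmac_mx93_reset(). A minimal sketch of how another platform might hook in (my_soc_dwmac_reset is a hypothetical callback, not an existing symbol):

/* In a glue driver's probe path; signature matches fix_soc_reset. */
plat_dat->fix_soc_reset = my_soc_dwmac_reset;	/* int (*)(void *priv, void __iomem *ioaddr) */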
static const struct stmmac_hwif_entry {
bool gmac;
bool gmac4;
int (*enable_tbs)(void __iomem *ioaddr, bool en, u32 chan);
};
-#define stmmac_reset(__priv, __args...) \
- stmmac_do_callback(__priv, dma, reset, __args)
#define stmmac_dma_init(__priv, __args...) \
stmmac_do_void_callback(__priv, dma, init, __args)
#define stmmac_init_chan(__priv, __args...) \
#define GMAC_VERSION 0x00000020 /* GMAC CORE Version */
#define GMAC4_VERSION 0x00000110 /* GMAC4+ CORE Version */
+int stmmac_reset(struct stmmac_priv *priv, void __iomem *ioaddr);
int stmmac_hwif_init(struct stmmac_priv *priv);
#endif /* __STMMAC_HWIF_H__ */
* argument : macaddr=0x00,0x10,0x20,0x30,0x40,0x50
*/
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <asm/dma.h>
+#include <asm/irq.h>
+#include <linux/bitops.h>
+#include <linux/crc32.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
#include <linux/fcntl.h>
-#include <linux/interrupt.h>
-#include <linux/ioport.h>
#include <linux/in.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/delay.h>
#include <linux/init.h>
-#include <linux/ethtool.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
#include <linux/mii.h>
-#include <linux/crc32.h>
-#include <linux/random.h>
-#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/module.h>
#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
+#include <linux/of_device.h>
+#include <linux/of.h>
+#include <linux/pci.h>
+#include <linux/random.h>
#include <linux/skbuff.h>
-#include <linux/mm.h>
-#include <linux/bitops.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/io.h>
-#include <asm/dma.h>
-#include <asm/byteorder.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
#ifdef CONFIG_SPARC
-#include <linux/of.h>
-#include <linux/of_device.h>
+#include <asm/auxio.h>
#include <asm/idprom.h>
#include <asm/openprom.h>
#include <asm/oplib.h>
#include <asm/prom.h>
-#include <asm/auxio.h>
-#endif
-#include <linux/uaccess.h>
-
-#include <asm/irq.h>
-
-#ifdef CONFIG_PCI
-#include <linux/pci.h>
#endif
#include "sunhme.h"
return 1;
}
-static int happy_meal_init(struct happy_meal *hp);
-
static int is_lucent_phy(struct happy_meal *hp)
{
void __iomem *tregs = hp->tcvregs;
return ret;
}
+/* hp->happy_lock must be held */
+static void
+happy_meal_begin_auto_negotiation(struct happy_meal *hp,
+ void __iomem *tregs,
+ const struct ethtool_link_ksettings *ep)
+{
+ int timeout;
+
+ /* Read all of the registers we are interested in now. */
+ hp->sw_bmsr = happy_meal_tcvr_read(hp, tregs, MII_BMSR);
+ hp->sw_bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
+ hp->sw_physid1 = happy_meal_tcvr_read(hp, tregs, MII_PHYSID1);
+ hp->sw_physid2 = happy_meal_tcvr_read(hp, tregs, MII_PHYSID2);
+
+ /* XXX Check BMSR_ANEGCAPABLE, should not be necessary though. */
+
+ hp->sw_advertise = happy_meal_tcvr_read(hp, tregs, MII_ADVERTISE);
+ if (!ep || ep->base.autoneg == AUTONEG_ENABLE) {
+ /* Advertise everything we can support. */
+ if (hp->sw_bmsr & BMSR_10HALF)
+ hp->sw_advertise |= (ADVERTISE_10HALF);
+ else
+ hp->sw_advertise &= ~(ADVERTISE_10HALF);
+
+ if (hp->sw_bmsr & BMSR_10FULL)
+ hp->sw_advertise |= (ADVERTISE_10FULL);
+ else
+ hp->sw_advertise &= ~(ADVERTISE_10FULL);
+ if (hp->sw_bmsr & BMSR_100HALF)
+ hp->sw_advertise |= (ADVERTISE_100HALF);
+ else
+ hp->sw_advertise &= ~(ADVERTISE_100HALF);
+ if (hp->sw_bmsr & BMSR_100FULL)
+ hp->sw_advertise |= (ADVERTISE_100FULL);
+ else
+ hp->sw_advertise &= ~(ADVERTISE_100FULL);
+ happy_meal_tcvr_write(hp, tregs, MII_ADVERTISE, hp->sw_advertise);
+
+ /* XXX Currently no Happy Meal cards I know of support 100BaseT4,
+ * XXX and this is because the DP83840 does not support it, changes
+ * XXX would need to be made to the tx/rx logic in the driver as well
+ * XXX so I completely skip checking for it in the BMSR for now.
+ */
+
+ ASD("Advertising [ %s%s%s%s]\n",
+ hp->sw_advertise & ADVERTISE_10HALF ? "10H " : "",
+ hp->sw_advertise & ADVERTISE_10FULL ? "10F " : "",
+ hp->sw_advertise & ADVERTISE_100HALF ? "100H " : "",
+ hp->sw_advertise & ADVERTISE_100FULL ? "100F " : "");
+
+ /* Enable Auto-Negotiation, this is usually on already... */
+ hp->sw_bmcr |= BMCR_ANENABLE;
+ happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
+
+ /* Restart it to make sure it is going. */
+ hp->sw_bmcr |= BMCR_ANRESTART;
+ happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
+
+ /* BMCR_ANRESTART self clears when the process has begun. */
+
+ timeout = 64; /* More than enough. */
+ while (--timeout) {
+ hp->sw_bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
+ if (!(hp->sw_bmcr & BMCR_ANRESTART))
+ break; /* got it. */
+ udelay(10);
+ }
+ if (!timeout) {
+ netdev_err(hp->dev,
+ "Happy Meal would not start auto negotiation BMCR=0x%04x\n",
+ hp->sw_bmcr);
+ netdev_notice(hp->dev,
+ "Performing force link detection.\n");
+ goto force_link;
+ } else {
+ hp->timer_state = arbwait;
+ }
+ } else {
+force_link:
+ /* Force the link up, trying first a particular mode.
+ * Either we are here at the request of ethtool or
+ * because the Happy Meal would not start to autoneg.
+ */
+
+ /* Disable auto-negotiation in BMCR, enable the duplex and
+ * speed setting, init the timer state machine, and fire it off.
+ */
+ if (!ep || ep->base.autoneg == AUTONEG_ENABLE) {
+ hp->sw_bmcr = BMCR_SPEED100;
+ } else {
+ if (ep->base.speed == SPEED_100)
+ hp->sw_bmcr = BMCR_SPEED100;
+ else
+ hp->sw_bmcr = 0;
+ if (ep->base.duplex == DUPLEX_FULL)
+ hp->sw_bmcr |= BMCR_FULLDPLX;
+ }
+ happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
+
+ if (!is_lucent_phy(hp)) {
+ /* OK, seems we need to disable the transceiver for the first
+ * tick to make sure we get an accurate link state at the
+ * second tick.
+ */
+ hp->sw_csconfig = happy_meal_tcvr_read(hp, tregs,
+ DP83840_CSCONFIG);
+ hp->sw_csconfig &= ~(CSCONFIG_TCVDISAB);
+ happy_meal_tcvr_write(hp, tregs, DP83840_CSCONFIG,
+ hp->sw_csconfig);
+ }
+ hp->timer_state = ltrywait;
+ }
+
+ hp->timer_ticks = 0;
+ hp->happy_timer.expires = jiffies + (12 * HZ)/10; /* 1.2 sec. */
+ add_timer(&hp->happy_timer);
+}
+
static void happy_meal_timer(struct timer_list *t)
{
struct happy_meal *hp = from_timer(hp, t, happy_timer);
netdev_notice(hp->dev,
"Link down, cable problem?\n");
- ret = happy_meal_init(hp);
- if (ret) {
- /* ho hum... */
- netdev_err(hp->dev,
- "Error, cannot re-init the Happy Meal.\n");
- }
+ happy_meal_begin_auto_negotiation(hp, tregs, NULL);
goto out;
}
if (!is_lucent_phy(hp)) {
hme_write32(hp, bregs + BMAC_LTCTR, 0);
}
-/* hp->happy_lock must be held */
-static void happy_meal_poll_stop(struct happy_meal *hp, void __iomem *tregs)
-{
- /* If polling disabled or not polling already, nothing to do. */
- if ((hp->happy_flags & (HFLAG_POLLENABLE | HFLAG_POLL)) !=
- (HFLAG_POLLENABLE | HFLAG_POLL)) {
- ASD("not polling, return\n");
- return;
- }
-
- /* Shut up the MIF. */
- ASD("were polling, mif ints off, polling off\n");
- hme_write32(hp, tregs + TCVR_IMASK, 0xffff);
-
- /* Turn off polling. */
- hme_write32(hp, tregs + TCVR_CFG,
- hme_read32(hp, tregs + TCVR_CFG) & ~(TCV_CFG_PENABLE));
-
- /* We are no longer polling. */
- hp->happy_flags &= ~(HFLAG_POLL);
-
- /* Let the bits set. */
- udelay(200);
- ASD("done\n");
-}
-
/* Only Sun can take such nice parts and fuck up the programming interface
* like this. Good job guys...
*/
static void happy_meal_transceiver_check(struct happy_meal *hp, void __iomem *tregs)
{
unsigned long tconfig = hme_read32(hp, tregs + TCVR_CFG);
+ u32 reread = hme_read32(hp, tregs + TCVR_CFG);
ASD("tcfg=%08lx\n", tconfig);
- if (hp->happy_flags & HFLAG_POLL) {
- /* If we are polling, we must stop to get the transceiver type. */
- if (hp->tcvr_type == internal) {
- if (tconfig & TCV_CFG_MDIO1) {
- happy_meal_poll_stop(hp, tregs);
- hp->paddr = TCV_PADDR_ETX;
- hp->tcvr_type = external;
- tconfig &= ~(TCV_CFG_PENABLE);
- tconfig |= TCV_CFG_PSELECT;
- hme_write32(hp, tregs + TCVR_CFG, tconfig);
- ASD("poll stop, internal->external\n");
- }
- } else {
- if (hp->tcvr_type == external) {
- if (!(hme_read32(hp, tregs + TCVR_STATUS) >> 16)) {
- happy_meal_poll_stop(hp, tregs);
- hp->paddr = TCV_PADDR_ITX;
- hp->tcvr_type = internal;
- hme_write32(hp, tregs + TCVR_CFG,
- hme_read32(hp, tregs + TCVR_CFG) &
- ~(TCV_CFG_PSELECT));
- ASD("poll stop, external->internal\n");
- }
- } else {
- ASD("polling, none\n");
- }
- }
+ if (reread & TCV_CFG_MDIO1) {
+ hme_write32(hp, tregs + TCVR_CFG, tconfig | TCV_CFG_PSELECT);
+ hp->paddr = TCV_PADDR_ETX;
+ hp->tcvr_type = external;
+ ASD("not polling, external\n");
} else {
- u32 reread = hme_read32(hp, tregs + TCVR_CFG);
-
- /* Else we can just work off of the MDIO bits. */
- if (reread & TCV_CFG_MDIO1) {
- hme_write32(hp, tregs + TCVR_CFG, tconfig | TCV_CFG_PSELECT);
- hp->paddr = TCV_PADDR_ETX;
- hp->tcvr_type = external;
- ASD("not polling, external\n");
+ if (reread & TCV_CFG_MDIO0) {
+ hme_write32(hp, tregs + TCVR_CFG,
+ tconfig & ~(TCV_CFG_PSELECT));
+ hp->paddr = TCV_PADDR_ITX;
+ hp->tcvr_type = internal;
+ ASD("not polling, internal\n");
} else {
- if (reread & TCV_CFG_MDIO0) {
- hme_write32(hp, tregs + TCVR_CFG,
- tconfig & ~(TCV_CFG_PSELECT));
- hp->paddr = TCV_PADDR_ITX;
- hp->tcvr_type = internal;
- ASD("not polling, internal\n");
- } else {
- netdev_err(hp->dev,
- "Transceiver and a coke please.");
- hp->tcvr_type = none; /* Grrr... */
- ASD("not polling, none\n");
- }
+ netdev_err(hp->dev,
+ "Transceiver and a coke please.");
+ hp->tcvr_type = none; /* Grrr... */
+ ASD("not polling, none\n");
}
}
}
}
/* hp->happy_lock must be held */
-static void
-happy_meal_begin_auto_negotiation(struct happy_meal *hp,
- void __iomem *tregs,
- const struct ethtool_link_ksettings *ep)
+static int happy_meal_init(struct happy_meal *hp)
{
- int timeout;
-
- /* Read all of the registers we are interested in now. */
- hp->sw_bmsr = happy_meal_tcvr_read(hp, tregs, MII_BMSR);
- hp->sw_bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
- hp->sw_physid1 = happy_meal_tcvr_read(hp, tregs, MII_PHYSID1);
- hp->sw_physid2 = happy_meal_tcvr_read(hp, tregs, MII_PHYSID2);
+ const unsigned char *e = &hp->dev->dev_addr[0];
+ void __iomem *gregs = hp->gregs;
+ void __iomem *etxregs = hp->etxregs;
+ void __iomem *erxregs = hp->erxregs;
+ void __iomem *bregs = hp->bigmacregs;
+ void __iomem *tregs = hp->tcvregs;
+ const char *bursts = "64";
+ u32 regtmp, rxcfg;
- /* XXX Check BMSR_ANEGCAPABLE, should not be necessary though. */
+ /* If auto-negotiation timer is running, kill it. */
+ del_timer(&hp->happy_timer);
- hp->sw_advertise = happy_meal_tcvr_read(hp, tregs, MII_ADVERTISE);
- if (!ep || ep->base.autoneg == AUTONEG_ENABLE) {
- /* Advertise everything we can support. */
- if (hp->sw_bmsr & BMSR_10HALF)
- hp->sw_advertise |= (ADVERTISE_10HALF);
- else
- hp->sw_advertise &= ~(ADVERTISE_10HALF);
+ HMD("happy_flags[%08x]\n", hp->happy_flags);
+ if (!(hp->happy_flags & HFLAG_INIT)) {
+ HMD("set HFLAG_INIT\n");
+ hp->happy_flags |= HFLAG_INIT;
+ happy_meal_get_counters(hp, bregs);
+ }
- if (hp->sw_bmsr & BMSR_10FULL)
- hp->sw_advertise |= (ADVERTISE_10FULL);
- else
- hp->sw_advertise &= ~(ADVERTISE_10FULL);
- if (hp->sw_bmsr & BMSR_100HALF)
- hp->sw_advertise |= (ADVERTISE_100HALF);
- else
- hp->sw_advertise &= ~(ADVERTISE_100HALF);
- if (hp->sw_bmsr & BMSR_100FULL)
- hp->sw_advertise |= (ADVERTISE_100FULL);
- else
- hp->sw_advertise &= ~(ADVERTISE_100FULL);
- happy_meal_tcvr_write(hp, tregs, MII_ADVERTISE, hp->sw_advertise);
+ /* Stop transmitter and receiver. */
+ HMD("to happy_meal_stop\n");
+ happy_meal_stop(hp, gregs);
- /* XXX Currently no Happy Meal cards I know off support 100BaseT4,
- * XXX and this is because the DP83840 does not support it, changes
- * XXX would need to be made to the tx/rx logic in the driver as well
- * XXX so I completely skip checking for it in the BMSR for now.
- */
-
- ASD("Advertising [ %s%s%s%s]\n",
- hp->sw_advertise & ADVERTISE_10HALF ? "10H " : "",
- hp->sw_advertise & ADVERTISE_10FULL ? "10F " : "",
- hp->sw_advertise & ADVERTISE_100HALF ? "100H " : "",
- hp->sw_advertise & ADVERTISE_100FULL ? "100F " : "");
-
- /* Enable Auto-Negotiation, this is usually on already... */
- hp->sw_bmcr |= BMCR_ANENABLE;
- happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
-
- /* Restart it to make sure it is going. */
- hp->sw_bmcr |= BMCR_ANRESTART;
- happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
-
- /* BMCR_ANRESTART self clears when the process has begun. */
-
- timeout = 64; /* More than enough. */
- while (--timeout) {
- hp->sw_bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
- if (!(hp->sw_bmcr & BMCR_ANRESTART))
- break; /* got it. */
- udelay(10);
- }
- if (!timeout) {
- netdev_err(hp->dev,
- "Happy Meal would not start auto negotiation BMCR=0x%04x\n",
- hp->sw_bmcr);
- netdev_notice(hp->dev,
- "Performing force link detection.\n");
- goto force_link;
- } else {
- hp->timer_state = arbwait;
- }
- } else {
-force_link:
- /* Force the link up, trying first a particular mode.
- * Either we are here at the request of ethtool or
- * because the Happy Meal would not start to autoneg.
- */
-
- /* Disable auto-negotiation in BMCR, enable the duplex and
- * speed setting, init the timer state machine, and fire it off.
- */
- if (!ep || ep->base.autoneg == AUTONEG_ENABLE) {
- hp->sw_bmcr = BMCR_SPEED100;
- } else {
- if (ep->base.speed == SPEED_100)
- hp->sw_bmcr = BMCR_SPEED100;
- else
- hp->sw_bmcr = 0;
- if (ep->base.duplex == DUPLEX_FULL)
- hp->sw_bmcr |= BMCR_FULLDPLX;
- }
- happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
-
- if (!is_lucent_phy(hp)) {
- /* OK, seems we need do disable the transceiver for the first
- * tick to make sure we get an accurate link state at the
- * second tick.
- */
- hp->sw_csconfig = happy_meal_tcvr_read(hp, tregs,
- DP83840_CSCONFIG);
- hp->sw_csconfig &= ~(CSCONFIG_TCVDISAB);
- happy_meal_tcvr_write(hp, tregs, DP83840_CSCONFIG,
- hp->sw_csconfig);
- }
- hp->timer_state = ltrywait;
- }
-
- hp->timer_ticks = 0;
- hp->happy_timer.expires = jiffies + (12 * HZ)/10; /* 1.2 sec. */
- add_timer(&hp->happy_timer);
-}
-
-/* hp->happy_lock must be held */
-static int happy_meal_init(struct happy_meal *hp)
-{
- const unsigned char *e = &hp->dev->dev_addr[0];
- void __iomem *gregs = hp->gregs;
- void __iomem *etxregs = hp->etxregs;
- void __iomem *erxregs = hp->erxregs;
- void __iomem *bregs = hp->bigmacregs;
- void __iomem *tregs = hp->tcvregs;
- const char *bursts = "64";
- u32 regtmp, rxcfg;
-
- /* If auto-negotiation timer is running, kill it. */
- del_timer(&hp->happy_timer);
-
- HMD("happy_flags[%08x]\n", hp->happy_flags);
- if (!(hp->happy_flags & HFLAG_INIT)) {
- HMD("set HFLAG_INIT\n");
- hp->happy_flags |= HFLAG_INIT;
- happy_meal_get_counters(hp, bregs);
- }
-
- /* Stop polling. */
- HMD("to happy_meal_poll_stop\n");
- happy_meal_poll_stop(hp, tregs);
-
- /* Stop transmitter and receiver. */
- HMD("to happy_meal_stop\n");
- happy_meal_stop(hp, gregs);
-
- /* Alloc and reset the tx/rx descriptor chains. */
- HMD("to happy_meal_init_rings\n");
- happy_meal_init_rings(hp);
-
- /* Shut up the MIF. */
- HMD("Disable all MIF irqs (old[%08x])\n",
- hme_read32(hp, tregs + TCVR_IMASK));
- hme_write32(hp, tregs + TCVR_IMASK, 0xffff);
+ /* Alloc and reset the tx/rx descriptor chains. */
+ HMD("to happy_meal_init_rings\n");
+ happy_meal_init_rings(hp);
/* See if we can enable the MIF frame on this card to speak to the DP83840. */
if (hp->happy_flags & HFLAG_FENABLE) {
void __iomem *gregs = hp->gregs;
happy_meal_stop(hp, gregs);
- hme_write32(hp, tregs + TCVR_IMASK, 0xffff);
if (hp->happy_flags & HFLAG_FENABLE)
hme_write32(hp, tregs + TCVR_CFG,
hme_read32(hp, tregs + TCVR_CFG) & ~(TCV_CFG_BENABLE));
}
/* hp->happy_lock must be held */
-static void happy_meal_mif_interrupt(struct happy_meal *hp)
-{
- void __iomem *tregs = hp->tcvregs;
-
- netdev_info(hp->dev, "Link status change.\n");
- hp->sw_bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
- hp->sw_lpa = happy_meal_tcvr_read(hp, tregs, MII_LPA);
-
- /* Use the fastest transmission protocol possible. */
- if (hp->sw_lpa & LPA_100FULL) {
- netdev_info(hp->dev, "Switching to 100Mbps at full duplex.\n");
- hp->sw_bmcr |= (BMCR_FULLDPLX | BMCR_SPEED100);
- } else if (hp->sw_lpa & LPA_100HALF) {
- netdev_info(hp->dev, "Switching to 100MBps at half duplex.\n");
- hp->sw_bmcr |= BMCR_SPEED100;
- } else if (hp->sw_lpa & LPA_10FULL) {
- netdev_info(hp->dev, "Switching to 10MBps at full duplex.\n");
- hp->sw_bmcr |= BMCR_FULLDPLX;
- } else {
- netdev_info(hp->dev, "Using 10Mbps at half duplex.\n");
- }
- happy_meal_tcvr_write(hp, tregs, MII_BMCR, hp->sw_bmcr);
-
- /* Finally stop polling and shut up the MIF. */
- happy_meal_poll_stop(hp, tregs);
-}
-
-/* hp->happy_lock must be held */
static void happy_meal_tx(struct happy_meal *hp)
{
struct happy_meal_txd *txbase = &hp->happy_block->happy_meal_txd[0];
u32 happy_status = hme_read32(hp, hp->gregs + GREG_STAT);
HMD("status=%08x\n", happy_status);
+ if (!happy_status)
+ return IRQ_NONE;
spin_lock(&hp->happy_lock);
goto out;
}
- if (happy_status & GREG_STAT_MIFIRQ)
- happy_meal_mif_interrupt(hp);
-
if (happy_status & GREG_STAT_TXALL)
happy_meal_tx(hp);
return IRQ_HANDLED;
}
-#ifdef CONFIG_SBUS
-static irqreturn_t quattro_sbus_interrupt(int irq, void *cookie)
-{
- struct quattro *qp = (struct quattro *) cookie;
- int i;
-
- for (i = 0; i < 4; i++) {
- struct net_device *dev = qp->happy_meals[i];
- struct happy_meal *hp = netdev_priv(dev);
- u32 happy_status = hme_read32(hp, hp->gregs + GREG_STAT);
-
- HMD("status=%08x\n", happy_status);
-
- if (!(happy_status & (GREG_STAT_ERRORS |
- GREG_STAT_MIFIRQ |
- GREG_STAT_TXALL |
- GREG_STAT_RXTOHOST)))
- continue;
-
- spin_lock(&hp->happy_lock);
-
- if (happy_status & GREG_STAT_ERRORS)
- if (happy_meal_is_not_so_happy(hp, happy_status))
- goto next;
-
- if (happy_status & GREG_STAT_MIFIRQ)
- happy_meal_mif_interrupt(hp);
-
- if (happy_status & GREG_STAT_TXALL)
- happy_meal_tx(hp);
-
- if (happy_status & GREG_STAT_RXTOHOST)
- happy_meal_rx(hp, dev);
-
- next:
- spin_unlock(&hp->happy_lock);
- }
- HMD("done\n");
-
- return IRQ_HANDLED;
-}
-#endif
-
static int happy_meal_open(struct net_device *dev)
{
struct happy_meal *hp = netdev_priv(dev);
int res;
- /* On SBUS Quattro QFE cards, all hme interrupts are concentrated
- * into a single source which we register handling at probe time.
- */
- if ((hp->happy_flags & (HFLAG_QUATTRO|HFLAG_PCI)) != HFLAG_QUATTRO) {
- res = request_irq(hp->irq, happy_meal_interrupt, IRQF_SHARED,
- dev->name, dev);
- if (res) {
- HMD("EAGAIN\n");
- netdev_err(dev, "Can't order irq %d to go.\n", hp->irq);
-
- return -EAGAIN;
- }
+ res = request_irq(hp->irq, happy_meal_interrupt, IRQF_SHARED,
+ dev->name, dev);
+ if (res) {
+ netdev_err(dev, "Can't order irq %d to go.\n", hp->irq);
+ return res;
}
HMD("to happy_meal_init\n");
res = happy_meal_init(hp);
spin_unlock_irq(&hp->happy_lock);
- if (res && ((hp->happy_flags & (HFLAG_QUATTRO|HFLAG_PCI)) != HFLAG_QUATTRO))
+ if (res)
free_irq(hp->irq, dev);
return res;
}
spin_unlock_irq(&hp->happy_lock);
- /* On Quattro QFE cards, all hme interrupts are concentrated
- * into a single source which we register handling at probe
- * time and never unregister.
- */
- if ((hp->happy_flags & (HFLAG_QUATTRO|HFLAG_PCI)) != HFLAG_QUATTRO)
- free_irq(hp->irq, dev);
+ free_irq(hp->irq, dev);
return 0;
}
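
With the shared Quattro SBUS interrupt handler (quattro_sbus_interrupt and its register/free helpers) removed elsewhere in this patch, every port now requests and frees its own IRQF_SHARED handler in open()/close(), so the HFLAG_QUATTRO special-casing disappears and a request_irq() failure propagates its real error code instead of a hard-coded -EAGAIN.
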
platform_set_drvdata(op, qp);
return qp;
}
-
-/* After all quattro cards have been probed, we call these functions
- * to register the IRQ handlers for the cards that have been
- * successfully probed and skip the cards that failed to initialize
- */
-static int __init quattro_sbus_register_irqs(void)
-{
- struct quattro *qp;
-
- for (qp = qfe_sbus_list; qp != NULL; qp = qp->next) {
- struct platform_device *op = qp->quattro_dev;
- int err, qfe_slot, skip = 0;
-
- for (qfe_slot = 0; qfe_slot < 4; qfe_slot++) {
- if (!qp->happy_meals[qfe_slot])
- skip = 1;
- }
- if (skip)
- continue;
-
- err = request_irq(op->archdata.irqs[0],
- quattro_sbus_interrupt,
- IRQF_SHARED, "Quattro",
- qp);
- if (err != 0) {
- dev_err(&op->dev,
- "Quattro HME: IRQ registration error %d.\n",
- err);
- return err;
- }
- }
-
- return 0;
-}
-
-static void quattro_sbus_free_irqs(void)
-{
- struct quattro *qp;
-
- for (qp = qfe_sbus_list; qp != NULL; qp = qp->next) {
- struct platform_device *op = qp->quattro_dev;
- int qfe_slot, skip = 0;
-
- for (qfe_slot = 0; qfe_slot < 4; qfe_slot++) {
- if (!qp->happy_meals[qfe_slot])
- skip = 1;
- }
- if (skip)
- continue;
-
- free_irq(op->archdata.irqs[0], qp);
- }
-}
#endif /* CONFIG_SBUS */
#ifdef CONFIG_PCI
.ndo_validate_addr = eth_validate_addr,
};
-#ifdef CONFIG_SBUS
-static int happy_meal_sbus_probe_one(struct platform_device *op, int is_qfe)
+#ifdef CONFIG_PCI
+static int is_quattro_p(struct pci_dev *pdev)
{
- struct device_node *dp = op->dev.of_node, *sbus_dp;
- struct quattro *qp = NULL;
- struct happy_meal *hp;
- struct net_device *dev;
- int i, qfe_slot = -1;
- u8 addr[ETH_ALEN];
- int err = -ENODEV;
-
- sbus_dp = op->dev.parent->of_node;
-
- /* We can match PCI devices too, do not accept those here. */
- if (!of_node_name_eq(sbus_dp, "sbus") && !of_node_name_eq(sbus_dp, "sbi"))
- return err;
-
- if (is_qfe) {
- qp = quattro_sbus_find(op);
- if (qp == NULL)
- goto err_out;
- for (qfe_slot = 0; qfe_slot < 4; qfe_slot++)
- if (qp->happy_meals[qfe_slot] == NULL)
- break;
- if (qfe_slot == 4)
- goto err_out;
- }
+ struct pci_dev *busdev = pdev->bus->self;
+ struct pci_dev *this_pdev;
+ int n_hmes;
- err = -ENOMEM;
- dev = alloc_etherdev(sizeof(struct happy_meal));
- if (!dev)
- goto err_out;
- SET_NETDEV_DEV(dev, &op->dev);
+ if (!busdev || busdev->vendor != PCI_VENDOR_ID_DEC ||
+ busdev->device != PCI_DEVICE_ID_DEC_21153)
+ return 0;
- /* If user did not specify a MAC address specifically, use
- * the Quattro local-mac-address property...
- */
- for (i = 0; i < 6; i++) {
- if (macaddr[i] != 0)
- break;
+ n_hmes = 0;
+ list_for_each_entry(this_pdev, &pdev->bus->devices, bus_list) {
+ if (this_pdev->vendor == PCI_VENDOR_ID_SUN &&
+ this_pdev->device == PCI_DEVICE_ID_SUN_HAPPYMEAL)
+ n_hmes++;
}
- if (i < 6) { /* a mac address was given */
- for (i = 0; i < 6; i++)
- addr[i] = macaddr[i];
- eth_hw_addr_set(dev, addr);
- macaddr[5]++;
- } else {
- const unsigned char *addr;
- int len;
- addr = of_get_property(dp, "local-mac-address", &len);
+ if (n_hmes != 4)
+ return 0;
- if (qfe_slot != -1 && addr && len == ETH_ALEN)
- eth_hw_addr_set(dev, addr);
- else
- eth_hw_addr_set(dev, idprom->id_ethaddr);
- }
+ return 1;
+}
- hp = netdev_priv(dev);
+/* Fetch MAC address from vital product data of PCI ROM. */
+static int find_eth_addr_in_vpd(void __iomem *rom_base, int len, int index, unsigned char *dev_addr)
+{
+ int this_offset;
- hp->happy_dev = op;
- hp->dma_dev = &op->dev;
+ for (this_offset = 0x20; this_offset < len; this_offset++) {
+ void __iomem *p = rom_base + this_offset;
- spin_lock_init(&hp->happy_lock);
+ if (readb(p + 0) != 0x90 ||
+ readb(p + 1) != 0x00 ||
+ readb(p + 2) != 0x09 ||
+ readb(p + 3) != 0x4e ||
+ readb(p + 4) != 0x41 ||
+ readb(p + 5) != 0x06)
+ continue;
- err = -ENODEV;
- if (qp != NULL) {
- hp->qfe_parent = qp;
- hp->qfe_ent = qfe_slot;
- qp->happy_meals[qfe_slot] = dev;
- }
+ this_offset += 6;
+ p += 6;
- hp->gregs = of_ioremap(&op->resource[0], 0,
- GREG_REG_SIZE, "HME Global Regs");
- if (!hp->gregs) {
- dev_err(&op->dev, "Cannot map global registers.\n");
- goto err_out_free_netdev;
+ if (index == 0) {
+ for (int i = 0; i < 6; i++)
+ dev_addr[i] = readb(p + i);
+ return 1;
+ }
+ index--;
}
+ return 0;
+}
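
For readers unfamiliar with the ROM layout, here is the six-byte signature the loop above matches, spelled out as a small standalone sketch (editorial illustration, not part of the patch; the reading of the first three bytes is an assumption): 0x90 looks like the PCI VPD read-only resource tag and 'N' 'A' 0x06 the network-address keyword carrying a six-byte MAC, with the 0x00 0x09 pair matched verbatim as found in these Sun ROMs.

	#include <stdbool.h>
	#include <stdint.h>
	#include <string.h>

	/* Mirrors the readb() comparisons in find_eth_addr_in_vpd(). */
	static bool hme_vpd_na_sig_match(const uint8_t *p)
	{
		static const uint8_t sig[6] = { 0x90, 0x00, 0x09, 'N', 'A', 0x06 };

		return memcmp(p, sig, sizeof(sig)) == 0; /* MAC bytes follow at p + 6 */
	}
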
- hp->etxregs = of_ioremap(&op->resource[1], 0,
- ETX_REG_SIZE, "HME TX Regs");
- if (!hp->etxregs) {
- dev_err(&op->dev, "Cannot map MAC TX registers.\n");
- goto err_out_iounmap;
- }
+static void __maybe_unused get_hme_mac_nonsparc(struct pci_dev *pdev,
+ unsigned char *dev_addr)
+{
+ void __iomem *p;
+ size_t size;
- hp->erxregs = of_ioremap(&op->resource[2], 0,
- ERX_REG_SIZE, "HME RX Regs");
- if (!hp->erxregs) {
- dev_err(&op->dev, "Cannot map MAC RX registers.\n");
- goto err_out_iounmap;
+ p = pci_map_rom(pdev, &size);
+ if (p) {
+ int index = 0;
+ int found;
+
+ if (is_quattro_p(pdev))
+ index = PCI_SLOT(pdev->devfn);
+
+ found = readb(p) == 0x55 &&
+ readb(p + 1) == 0xaa &&
+ find_eth_addr_in_vpd(p, (64 * 1024), index, dev_addr);
+ pci_unmap_rom(pdev, p);
+ if (found)
+ return;
}
- hp->bigmacregs = of_ioremap(&op->resource[3], 0,
- BMAC_REG_SIZE, "HME BIGMAC Regs");
- if (!hp->bigmacregs) {
- dev_err(&op->dev, "Cannot map BIGMAC registers.\n");
- goto err_out_iounmap;
+ /* Sun MAC prefix then 3 random bytes. */
+ dev_addr[0] = 0x08;
+ dev_addr[1] = 0x00;
+ dev_addr[2] = 0x20;
+ get_random_bytes(&dev_addr[3], 3);
+}
+#endif
+
+static void happy_meal_addr_init(struct happy_meal *hp,
+ struct device_node *dp, int qfe_slot)
+{
+ int i;
+
+ for (i = 0; i < 6; i++) {
+ if (macaddr[i] != 0)
+ break;
}
- hp->tcvregs = of_ioremap(&op->resource[4], 0,
- TCVR_REG_SIZE, "HME Tranceiver Regs");
- if (!hp->tcvregs) {
- dev_err(&op->dev, "Cannot map TCVR registers.\n");
- goto err_out_iounmap;
+ if (i < 6) { /* a mac address was given */
+ u8 addr[ETH_ALEN];
+
+ for (i = 0; i < 6; i++)
+ addr[i] = macaddr[i];
+ eth_hw_addr_set(hp->dev, addr);
+ macaddr[5]++;
+ } else {
+#ifdef CONFIG_SPARC
+ const unsigned char *addr;
+ int len;
+
+ /* If user did not specify a MAC address specifically, use
+ * the Quattro local-mac-address property...
+ */
+ if (qfe_slot != -1) {
+ addr = of_get_property(dp, "local-mac-address", &len);
+ if (addr && len == 6) {
+ eth_hw_addr_set(hp->dev, addr);
+ return;
+ }
+ }
+
+ eth_hw_addr_set(hp->dev, idprom->id_ethaddr);
+#else
+ u8 addr[ETH_ALEN];
+
+ get_hme_mac_nonsparc(hp->happy_dev, addr);
+ eth_hw_addr_set(hp->dev, addr);
+#endif
}
+}
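
Read top to bottom, the helper picks the address in this order: a non-zero macaddr= module parameter (incrementing its last byte per device), then on SPARC a Quattro slot's local-mac-address property or, failing that, the IDPROM address, and on non-SPARC hosts the MAC found in the PCI ROM VPD with a random 08:00:20-prefixed fallback.
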
+
+static int happy_meal_common_probe(struct happy_meal *hp,
+ struct device_node *dp)
+{
+ struct net_device *dev = hp->dev;
+ int err;
- hp->hm_revision = of_getintprop_default(dp, "hm-rev", 0xff);
- if (hp->hm_revision == 0xff)
- hp->hm_revision = 0xa0;
+#ifdef CONFIG_SPARC
+ hp->hm_revision = of_getintprop_default(dp, "hm-rev", hp->hm_revision);
+#endif
/* Now enable the feature flags we can. */
if (hp->hm_revision == 0x20 || hp->hm_revision == 0x21)
- hp->happy_flags = HFLAG_20_21;
+ hp->happy_flags |= HFLAG_20_21;
else if (hp->hm_revision != 0xa0)
- hp->happy_flags = HFLAG_NOT_A0;
+ hp->happy_flags |= HFLAG_NOT_A0;
- if (qp != NULL)
- hp->happy_flags |= HFLAG_QUATTRO;
-
- /* Get the supported DVMA burst sizes from our Happy SBUS. */
- hp->happy_bursts = of_getintprop_default(sbus_dp,
- "burst-sizes", 0x00);
-
- hp->happy_block = dma_alloc_coherent(hp->dma_dev,
- PAGE_SIZE,
- &hp->hblock_dvma,
- GFP_ATOMIC);
- err = -ENOMEM;
+ hp->happy_block = dmam_alloc_coherent(hp->dma_dev, PAGE_SIZE,
+ &hp->hblock_dvma, GFP_KERNEL);
if (!hp->happy_block)
- goto err_out_iounmap;
+ return -ENOMEM;
/* Force check of the link first time we are brought up. */
hp->linkcheck = 0;
timer_setup(&hp->happy_timer, happy_meal_timer, 0);
- hp->dev = dev;
dev->netdev_ops = &hme_netdev_ops;
- dev->watchdog_timeo = 5*HZ;
+ dev->watchdog_timeo = 5 * HZ;
dev->ethtool_ops = &hme_ethtool_ops;
/* Happy Meal can do it all... */
dev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM;
dev->features |= dev->hw_features | NETIF_F_RXCSUM;
- hp->irq = op->archdata.irqs[0];
-
-#if defined(CONFIG_SBUS) && defined(CONFIG_PCI)
- /* Hook up SBUS register/descriptor accessors. */
- hp->read_desc32 = sbus_hme_read_desc32;
- hp->write_txd = sbus_hme_write_txd;
- hp->write_rxd = sbus_hme_write_rxd;
- hp->read32 = sbus_hme_read32;
- hp->write32 = sbus_hme_write32;
-#endif
/* Grrr, Happy Meal comes up by default not advertising
* full duplex 100baseT capabilities, fix this.
happy_meal_set_initial_advertisement(hp);
spin_unlock_irq(&hp->happy_lock);
- err = register_netdev(hp->dev);
- if (err) {
- dev_err(&op->dev, "Cannot register net device, aborting.\n");
- goto err_out_free_coherent;
- }
+ err = devm_register_netdev(hp->dma_dev, dev);
+ if (err)
+ dev_err(hp->dma_dev, "Cannot register net device, aborting.\n");
+ return err;
+}
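
Both probe paths now lean on device-managed allocations (devm_alloc_etherdev, devm_platform_ioremap_resource, dmam_alloc_coherent) plus devm_register_netdev here, which is what lets the explicit SBUS .remove() callback with its manual of_iounmap()/dma_free_coherent() unwinding be deleted further down.
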
- platform_set_drvdata(op, hp);
+#ifdef CONFIG_SBUS
+static int happy_meal_sbus_probe_one(struct platform_device *op, int is_qfe)
+{
+ struct device_node *dp = op->dev.of_node, *sbus_dp;
+ struct quattro *qp = NULL;
+ struct happy_meal *hp;
+ struct net_device *dev;
+ int qfe_slot = -1;
+ int err;
- if (qfe_slot != -1)
- netdev_info(dev,
- "Quattro HME slot %d (SBUS) 10/100baseT Ethernet %pM\n",
- qfe_slot, dev->dev_addr);
- else
- netdev_info(dev, "HAPPY MEAL (SBUS) 10/100baseT Ethernet %pM\n",
- dev->dev_addr);
+ sbus_dp = op->dev.parent->of_node;
- return 0;
+ /* We can match PCI devices too, do not accept those here. */
+ if (!of_node_name_eq(sbus_dp, "sbus") && !of_node_name_eq(sbus_dp, "sbi"))
+ return -ENODEV;
-err_out_free_coherent:
- dma_free_coherent(hp->dma_dev,
- PAGE_SIZE,
- hp->happy_block,
- hp->hblock_dvma);
-
-err_out_iounmap:
- if (hp->gregs)
- of_iounmap(&op->resource[0], hp->gregs, GREG_REG_SIZE);
- if (hp->etxregs)
- of_iounmap(&op->resource[1], hp->etxregs, ETX_REG_SIZE);
- if (hp->erxregs)
- of_iounmap(&op->resource[2], hp->erxregs, ERX_REG_SIZE);
- if (hp->bigmacregs)
- of_iounmap(&op->resource[3], hp->bigmacregs, BMAC_REG_SIZE);
- if (hp->tcvregs)
- of_iounmap(&op->resource[4], hp->tcvregs, TCVR_REG_SIZE);
+ if (is_qfe) {
+ qp = quattro_sbus_find(op);
+ if (qp == NULL)
+ return -ENODEV;
+ for (qfe_slot = 0; qfe_slot < 4; qfe_slot++)
+ if (qp->happy_meals[qfe_slot] == NULL)
+ break;
+ if (qfe_slot == 4)
+ return -ENODEV;
+ }
- if (qp)
- qp->happy_meals[qfe_slot] = NULL;
+ dev = devm_alloc_etherdev(&op->dev, sizeof(struct happy_meal));
+ if (!dev)
+ return -ENOMEM;
+ SET_NETDEV_DEV(dev, &op->dev);
-err_out_free_netdev:
- free_netdev(dev);
+ hp = netdev_priv(dev);
+ hp->dev = dev;
+ hp->happy_dev = op;
+ hp->dma_dev = &op->dev;
+ happy_meal_addr_init(hp, dp, qfe_slot);
-err_out:
- return err;
-}
-#endif
+ spin_lock_init(&hp->happy_lock);
-#ifdef CONFIG_PCI
-#ifndef CONFIG_SPARC
-static int is_quattro_p(struct pci_dev *pdev)
-{
- struct pci_dev *busdev = pdev->bus->self;
- struct pci_dev *this_pdev;
- int n_hmes;
+ if (qp != NULL) {
+ hp->qfe_parent = qp;
+ hp->qfe_ent = qfe_slot;
+ qp->happy_meals[qfe_slot] = dev;
+ }
- if (busdev == NULL ||
- busdev->vendor != PCI_VENDOR_ID_DEC ||
- busdev->device != PCI_DEVICE_ID_DEC_21153)
- return 0;
+ hp->gregs = devm_platform_ioremap_resource(op, 0);
+ if (IS_ERR(hp->gregs)) {
+ dev_err(&op->dev, "Cannot map global registers.\n");
+ err = PTR_ERR(hp->gregs);
+ goto err_out_clear_quattro;
+ }
- n_hmes = 0;
- list_for_each_entry(this_pdev, &pdev->bus->devices, bus_list) {
- if (this_pdev->vendor == PCI_VENDOR_ID_SUN &&
- this_pdev->device == PCI_DEVICE_ID_SUN_HAPPYMEAL)
- n_hmes++;
+ hp->etxregs = devm_platform_ioremap_resource(op, 1);
+ if (IS_ERR(hp->etxregs)) {
+ dev_err(&op->dev, "Cannot map MAC TX registers.\n");
+ err = PTR_ERR(hp->etxregs);
+ goto err_out_clear_quattro;
}
- if (n_hmes != 4)
- return 0;
+ hp->erxregs = devm_platform_ioremap_resource(op, 2);
+ if (IS_ERR(hp->erxregs)) {
+ dev_err(&op->dev, "Cannot map MAC RX registers.\n");
+ err = PTR_ERR(hp->erxregs);
+ goto err_out_clear_quattro;
+ }
- return 1;
-}
+ hp->bigmacregs = devm_platform_ioremap_resource(op, 3);
+ if (IS_ERR(hp->bigmacregs)) {
+ dev_err(&op->dev, "Cannot map BIGMAC registers.\n");
+ err = PTR_ERR(hp->bigmacregs);
+ goto err_out_clear_quattro;
+ }
-/* Fetch MAC address from vital product data of PCI ROM. */
-static int find_eth_addr_in_vpd(void __iomem *rom_base, int len, int index, unsigned char *dev_addr)
-{
- int this_offset;
+ hp->tcvregs = devm_platform_ioremap_resource(op, 4);
+ if (IS_ERR(hp->tcvregs)) {
+ dev_err(&op->dev, "Cannot map TCVR registers.\n");
+ err = PTR_ERR(hp->tcvregs);
+ goto err_out_clear_quattro;
+ }
- for (this_offset = 0x20; this_offset < len; this_offset++) {
- void __iomem *p = rom_base + this_offset;
+ hp->hm_revision = 0xa0;
- if (readb(p + 0) != 0x90 ||
- readb(p + 1) != 0x00 ||
- readb(p + 2) != 0x09 ||
- readb(p + 3) != 0x4e ||
- readb(p + 4) != 0x41 ||
- readb(p + 5) != 0x06)
- continue;
+ if (qp != NULL)
+ hp->happy_flags |= HFLAG_QUATTRO;
- this_offset += 6;
- p += 6;
+ hp->irq = op->archdata.irqs[0];
- if (index == 0) {
- int i;
+ /* Get the supported DVMA burst sizes from our Happy SBUS. */
+ hp->happy_bursts = of_getintprop_default(sbus_dp,
+ "burst-sizes", 0x00);
- for (i = 0; i < 6; i++)
- dev_addr[i] = readb(p + i);
- return 1;
- }
- index--;
- }
- return 0;
-}
+#ifdef CONFIG_PCI
+ /* Hook up SBUS register/descriptor accessors. */
+ hp->read_desc32 = sbus_hme_read_desc32;
+ hp->write_txd = sbus_hme_write_txd;
+ hp->write_rxd = sbus_hme_write_rxd;
+ hp->read32 = sbus_hme_read32;
+ hp->write32 = sbus_hme_write32;
+#endif
-static void get_hme_mac_nonsparc(struct pci_dev *pdev, unsigned char *dev_addr)
-{
- size_t size;
- void __iomem *p = pci_map_rom(pdev, &size);
+ err = happy_meal_common_probe(hp, dp);
+ if (err)
+ goto err_out_clear_quattro;
- if (p) {
- int index = 0;
- int found;
+ platform_set_drvdata(op, hp);
- if (is_quattro_p(pdev))
- index = PCI_SLOT(pdev->devfn);
+ if (qfe_slot != -1)
+ netdev_info(dev,
+ "Quattro HME slot %d (SBUS) 10/100baseT Ethernet %pM\n",
+ qfe_slot, dev->dev_addr);
+ else
+ netdev_info(dev, "HAPPY MEAL (SBUS) 10/100baseT Ethernet %pM\n",
+ dev->dev_addr);
- found = readb(p) == 0x55 &&
- readb(p + 1) == 0xaa &&
- find_eth_addr_in_vpd(p, (64 * 1024), index, dev_addr);
- pci_unmap_rom(pdev, p);
- if (found)
- return;
- }
+ return 0;
- /* Sun MAC prefix then 3 random bytes. */
- dev_addr[0] = 0x08;
- dev_addr[1] = 0x00;
- dev_addr[2] = 0x20;
- get_random_bytes(&dev_addr[3], 3);
+err_out_clear_quattro:
+ if (qp)
+ qp->happy_meals[qfe_slot] = NULL;
+ return err;
}
-#endif /* !(CONFIG_SPARC) */
+#endif
+#ifdef CONFIG_PCI
static int happy_meal_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
+ struct device_node *dp = NULL;
struct quattro *qp = NULL;
-#ifdef CONFIG_SPARC
- struct device_node *dp;
-#endif
struct happy_meal *hp;
struct net_device *dev;
void __iomem *hpreg_base;
struct resource *hpreg_res;
- int i, qfe_slot = -1;
char prom_name[64];
- u8 addr[ETH_ALEN];
- int err;
+ int qfe_slot = -1;
+ int err = -ENODEV;
/* Now make sure pci_dev cookie is there. */
#ifdef CONFIG_SPARC
err = pcim_enable_device(pdev);
if (err)
- goto err_out;
+ return err;
pci_set_master(pdev);
if (!strcmp(prom_name, "SUNW,qfe") || !strcmp(prom_name, "qfe")) {
qp = quattro_pci_find(pdev);
- if (IS_ERR(qp)) {
- err = PTR_ERR(qp);
- goto err_out;
- }
+ if (IS_ERR(qp))
+ return PTR_ERR(qp);
for (qfe_slot = 0; qfe_slot < 4; qfe_slot++)
if (!qp->happy_meals[qfe_slot])
break;
if (qfe_slot == 4)
- goto err_out;
+ return -ENODEV;
}
dev = devm_alloc_etherdev(&pdev->dev, sizeof(struct happy_meal));
- if (!dev) {
- err = -ENOMEM;
- goto err_out;
- }
+ if (!dev)
+ return -ENOMEM;
SET_NETDEV_DEV(dev, &pdev->dev);
hp = netdev_priv(dev);
-
+ hp->dev = dev;
hp->happy_dev = pdev;
hp->dma_dev = &pdev->dev;
goto err_out_clear_quattro;
}
- for (i = 0; i < 6; i++) {
- if (macaddr[i] != 0)
- break;
- }
- if (i < 6) { /* a mac address was given */
- for (i = 0; i < 6; i++)
- addr[i] = macaddr[i];
- eth_hw_addr_set(dev, addr);
- macaddr[5]++;
- } else {
-#ifdef CONFIG_SPARC
- const unsigned char *addr;
- int len;
-
- if (qfe_slot != -1 &&
- (addr = of_get_property(dp, "local-mac-address", &len))
- != NULL &&
- len == 6) {
- eth_hw_addr_set(dev, addr);
- } else {
- eth_hw_addr_set(dev, idprom->id_ethaddr);
- }
-#else
- u8 addr[ETH_ALEN];
-
- get_hme_mac_nonsparc(pdev, addr);
- eth_hw_addr_set(dev, addr);
-#endif
- }
+ happy_meal_addr_init(hp, dp, qfe_slot);
/* Layout registers. */
hp->gregs = (hpreg_base + 0x0000UL);
hp->bigmacregs = (hpreg_base + 0x6000UL);
hp->tcvregs = (hpreg_base + 0x7000UL);
-#ifdef CONFIG_SPARC
- hp->hm_revision = of_getintprop_default(dp, "hm-rev", 0xff);
- if (hp->hm_revision == 0xff)
+ if (IS_ENABLED(CONFIG_SPARC))
hp->hm_revision = 0xc0 | (pdev->revision & 0x0f);
-#else
- /* works with this on non-sparc hosts */
- hp->hm_revision = 0x20;
-#endif
-
- /* Now enable the feature flags we can. */
- if (hp->hm_revision == 0x20 || hp->hm_revision == 0x21)
- hp->happy_flags = HFLAG_20_21;
- else if (hp->hm_revision != 0xa0 && hp->hm_revision != 0xc0)
- hp->happy_flags = HFLAG_NOT_A0;
+ else
+ hp->hm_revision = 0x20;
if (qp != NULL)
hp->happy_flags |= HFLAG_QUATTRO;
/* Assume PCI happy meals can handle all burst sizes. */
hp->happy_bursts = DMA_BURSTBITS;
#endif
-
- hp->happy_block = dmam_alloc_coherent(&pdev->dev, PAGE_SIZE,
- &hp->hblock_dvma, GFP_KERNEL);
- if (!hp->happy_block) {
- err = -ENOMEM;
- goto err_out_clear_quattro;
- }
-
- hp->linkcheck = 0;
- hp->timer_state = asleep;
- hp->timer_ticks = 0;
-
- timer_setup(&hp->happy_timer, happy_meal_timer, 0);
-
hp->irq = pdev->irq;
- hp->dev = dev;
- dev->netdev_ops = &hme_netdev_ops;
- dev->watchdog_timeo = 5*HZ;
- dev->ethtool_ops = &hme_ethtool_ops;
- /* Happy Meal can do it all... */
- dev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM;
- dev->features |= dev->hw_features | NETIF_F_RXCSUM;
-
-#if defined(CONFIG_SBUS) && defined(CONFIG_PCI)
+#ifdef CONFIG_SBUS
/* Hook up PCI register/descriptor accessors. */
hp->read_desc32 = pci_hme_read_desc32;
hp->write_txd = pci_hme_write_txd;
hp->write32 = pci_hme_write32;
#endif
- /* Grrr, Happy Meal comes up by default not advertising
- * full duplex 100baseT capabilities, fix this.
- */
- spin_lock_irq(&hp->happy_lock);
- happy_meal_set_initial_advertisement(hp);
- spin_unlock_irq(&hp->happy_lock);
-
- err = devm_register_netdev(&pdev->dev, dev);
- if (err) {
- dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
+ err = happy_meal_common_probe(hp, dp);
+ if (err)
goto err_out_clear_quattro;
- }
pci_set_drvdata(pdev, hp);
err_out_clear_quattro:
if (qp != NULL)
qp->happy_meals[qfe_slot] = NULL;
-
-err_out:
return err;
}
return happy_meal_sbus_probe_one(op, is_qfe);
}
-static int hme_sbus_remove(struct platform_device *op)
-{
- struct happy_meal *hp = platform_get_drvdata(op);
- struct net_device *net_dev = hp->dev;
-
- unregister_netdev(net_dev);
-
- /* XXX qfe parent interrupt... */
-
- of_iounmap(&op->resource[0], hp->gregs, GREG_REG_SIZE);
- of_iounmap(&op->resource[1], hp->etxregs, ETX_REG_SIZE);
- of_iounmap(&op->resource[2], hp->erxregs, ERX_REG_SIZE);
- of_iounmap(&op->resource[3], hp->bigmacregs, BMAC_REG_SIZE);
- of_iounmap(&op->resource[4], hp->tcvregs, TCVR_REG_SIZE);
- dma_free_coherent(hp->dma_dev,
- PAGE_SIZE,
- hp->happy_block,
- hp->hblock_dvma);
-
- free_netdev(net_dev);
-
- return 0;
-}
-
static const struct of_device_id hme_sbus_match[] = {
{
.name = "SUNW,hme",
.of_match_table = hme_sbus_match,
},
.probe = hme_sbus_probe,
- .remove = hme_sbus_remove,
};
static int __init happy_meal_sbus_init(void)
{
- int err;
-
- err = platform_driver_register(&hme_sbus_driver);
- if (!err)
- err = quattro_sbus_register_irqs();
-
- return err;
+ return platform_driver_register(&hme_sbus_driver);
}
static void happy_meal_sbus_exit(void)
{
platform_driver_unregister(&hme_sbus_driver);
- quattro_sbus_free_irqs();
while (qfe_sbus_list) {
struct quattro *qfe = qfe_sbus_list;
};
/* Here are the happy flags. */
-#define HFLAG_POLL 0x00000001 /* We are doing MIF polling */
#define HFLAG_FENABLE 0x00000002 /* The MII frame is enabled */
#define HFLAG_LANCE 0x00000004 /* We are using lance-mode */
#define HFLAG_RXENABLE 0x00000008 /* Receiver is enabled */
#define HFLAG_AUTO 0x00000010 /* Using auto-negotiation, 0 = force */
#define HFLAG_FULL 0x00000020 /* Full duplex enable */
#define HFLAG_MACFULL 0x00000040 /* Using full duplex in the MAC */
-#define HFLAG_POLLENABLE 0x00000080 /* Actually try MIF polling */
#define HFLAG_RXCV 0x00000100 /* XXX RXCV ENABLE */
#define HFLAG_INIT 0x00000200 /* Init called at least once */
#define HFLAG_LINKUP 0x00000400 /* 1 = Link is up */
#define HFLAG_PCI 0x00000800 /* PCI based Happy Meal */
#define HFLAG_QUATTRO 0x00001000 /* On QFE/Quattro card */
-#define HFLAG_20_21 (HFLAG_POLLENABLE | HFLAG_FENABLE)
-#define HFLAG_NOT_A0 (HFLAG_POLLENABLE | HFLAG_FENABLE | HFLAG_LANCE | HFLAG_RXCV)
+#define HFLAG_20_21 HFLAG_FENABLE
+#define HFLAG_NOT_A0 (HFLAG_FENABLE | HFLAG_LANCE | HFLAG_RXCV)
/* Support for QFE/Quattro cards. */
struct quattro {
for (i = 0; i < MAX_NETDEV_NUM; i++)
if (comm->ndev[i]) {
ndev = comm->ndev[i];
- if (ndev) {
+ if (ndev)
phy_disconnect(ndev->phydev);
- ndev->phydev = NULL;
- }
}
}
#define AM65_CPSW_PORTN_REG_TS_CTL_LTYPE2 0x31C
#define AM65_CPSW_SGMII_CONTROL_REG 0x010
+#define AM65_CPSW_SGMII_MR_ADV_ABILITY_REG 0x018
#define AM65_CPSW_SGMII_CONTROL_MR_AN_ENABLE BIT(0)
#define AM65_CPSW_CTL_VLAN_AWARE BIT(1)
/* AM65_CPSW_P0_REG_CTL */
#define AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN BIT(0)
+#define AM65_CPSW_P0_REG_CTL_RX_REMAP_VLAN BIT(16)
/* AM65_CPSW_PORT_REG_PRI_CTL */
#define AM65_CPSW_PORT_REG_PRI_CTL_RX_PTYPE_RROBIN BIT(8)
/* set base flow_id */
writel(common->rx_flow_id_base,
host_p->port_base + AM65_CPSW_PORT0_REG_FLOW_ID_OFFSET);
- /* en tx crc offload */
- writel(AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN, host_p->port_base + AM65_CPSW_P0_REG_CTL);
+ writel(AM65_CPSW_P0_REG_CTL_RX_CHECKSUM_EN | AM65_CPSW_P0_REG_CTL_RX_REMAP_VLAN,
+ host_p->port_base + AM65_CPSW_P0_REG_CTL);
am65_cpsw_nuss_set_p0_ptype(common);
else
am65_cpsw_init_host_port_switch(common);
+ am65_cpsw_qos_tx_p0_rate_init(common);
+
for (i = 0; i < common->rx_chns.descs_num; i++) {
skb = __netdev_alloc_skb_ip_align(NULL,
AM65_CPSW_MAX_PACKET_SIZE,
goto runtime_put;
}
- for (i = 0; i < common->tx_ch_num; i++)
- netdev_tx_reset_queue(netdev_get_tx_queue(ndev, i));
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct netdev_queue *txq = netdev_get_tx_queue(ndev, i);
+
+ netdev_tx_reset_queue(txq);
+ txq->tx_maxrate = common->tx_chns[i].rate_mbps;
+ }
ret = am65_cpsw_nuss_common_open(common);
if (ret)
.ndo_vlan_rx_kill_vid = am65_cpsw_nuss_ndo_slave_kill_vid,
.ndo_eth_ioctl = am65_cpsw_nuss_ndo_slave_ioctl,
.ndo_setup_tc = am65_cpsw_qos_ndo_setup_tc,
+ .ndo_set_tx_maxrate = am65_cpsw_qos_ndo_tx_p0_set_maxrate,
};
static void am65_cpsw_disable_phy(struct phy *phy)
static int am65_cpsw_init_serdes_phy(struct device *dev, struct device_node *port_np,
struct am65_cpsw_port *port)
{
- const char *name = "serdes-phy";
+ const char *name = "serdes";
struct phy *phy;
int ret;
- phy = devm_of_phy_get(dev, port_np, name);
- if (PTR_ERR(phy) == -ENODEV)
- return 0;
- if (IS_ERR(phy))
- return PTR_ERR(phy);
+ phy = devm_of_phy_optional_get(dev, port_np, name);
+ if (IS_ERR_OR_NULL(phy))
+ return PTR_ERR_OR_ZERO(phy);
/* Serdes PHY exists. Store it. */
port->slave.serdes_phy = phy;
struct am65_cpsw_port *port = container_of(slave, struct am65_cpsw_port, slave);
struct am65_cpsw_common *common = port->common;
- if (common->pdata.extra_modes & BIT(state->interface))
+ if (common->pdata.extra_modes & BIT(state->interface)) {
+ if (state->interface == PHY_INTERFACE_MODE_SGMII)
+ writel(ADVERTISE_SGMII,
+ port->sgmii_base + AM65_CPSW_SGMII_MR_ADV_ABILITY_REG);
+
writel(AM65_CPSW_SGMII_CONTROL_MR_AN_ENABLE,
port->sgmii_base + AM65_CPSW_SGMII_CONTROL_REG);
+ }
}
static void am65_cpsw_nuss_mac_link_down(struct phylink_config *config, unsigned int mode,
if (speed == SPEED_1000)
mac_control |= CPSW_SL_CTL_GIG;
+ if (interface == PHY_INTERFACE_MODE_SGMII)
+ mac_control |= CPSW_SL_CTL_EXT_EN;
if (speed == SPEED_10 && phy_interface_mode_is_rgmii(interface))
/* Can be used with in band mode only */
mac_control |= CPSW_SL_CTL_EXT_EN;
devm_remove_action(dev, am65_cpsw_nuss_free_tx_chns, common);
+ common->tx_ch_rate_msk = 0;
for (i = 0; i < common->tx_ch_num; i++) {
struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
port->slave.phylink_config.mac_capabilities = MAC_SYM_PAUSE | MAC_10 | MAC_100 | MAC_1000FD;
port->slave.phylink_config.mac_managed_pm = true; /* MAC does PM */
- if (phy_interface_mode_is_rgmii(port->slave.phy_if)) {
+ switch (port->slave.phy_if) {
+ case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ case PHY_INTERFACE_MODE_RGMII_TXID:
phy_interface_set_rgmii(port->slave.phylink_config.supported_interfaces);
- } else if (port->slave.phy_if == PHY_INTERFACE_MODE_RMII) {
+ break;
+
+ case PHY_INTERFACE_MODE_RMII:
__set_bit(PHY_INTERFACE_MODE_RMII,
port->slave.phylink_config.supported_interfaces);
- } else if (common->pdata.extra_modes & BIT(port->slave.phy_if)) {
- __set_bit(PHY_INTERFACE_MODE_QSGMII,
- port->slave.phylink_config.supported_interfaces);
- } else {
+ break;
+
+ case PHY_INTERFACE_MODE_QSGMII:
+ case PHY_INTERFACE_MODE_SGMII:
+ if (common->pdata.extra_modes & BIT(port->slave.phy_if)) {
+ __set_bit(port->slave.phy_if,
+ port->slave.phylink_config.supported_interfaces);
+ } else {
+ dev_err(dev, "selected phy-mode is not supported\n");
+ return -EOPNOTSUPP;
+ }
+ break;
+
+ default:
dev_err(dev, "selected phy-mode is not supported\n");
return -EOPNOTSUPP;
}
.quirks = 0,
.ale_dev_id = "am64-cpswxg",
.fdqring_mode = K3_RINGACC_RING_MODE_RING,
- .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII),
+ .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII) | BIT(PHY_INTERFACE_MODE_SGMII),
};
static const struct am65_cpsw_pdata j721e_cpswxg_pdata = {
.quirks = 0,
.ale_dev_id = "am64-cpswxg",
.fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE,
- .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII),
+ .extra_modes = BIT(PHY_INTERFACE_MODE_QSGMII) | BIT(PHY_INTERFACE_MODE_SGMII),
};
static const struct of_device_id am65_cpsw_nuss_of_mtable[] = {
u32 id;
u32 descs_num;
char tx_chn_name[128];
+ u32 rate_mbps;
};
struct am65_cpsw_rx_chn {
int usage_count; /* number of opened ports */
struct cpsw_ale *ale;
int tx_ch_num;
+ u32 tx_ch_rate_msk;
u32 rx_flow_id_base;
struct am65_cpsw_tx_chn tx_chns[AM65_CPSW_MAX_TX_QUEUES];
#define AM65_CPSW_PN_REG_CTL 0x004
#define AM65_CPSW_PN_REG_FIFO_STATUS 0x050
#define AM65_CPSW_PN_REG_EST_CTL 0x060
+#define AM65_CPSW_PN_REG_PRI_CIR(pri) (0x140 + 4 * (pri))
/* AM65_CPSW_REG_CTL register fields */
#define AM65_CPSW_CTL_EST_EN BIT(18)
port->qos.link_speed = SPEED_UNKNOWN;
}
+
+static u32
+am65_cpsw_qos_tx_rate_calc(u32 rate_mbps, unsigned long bus_freq)
+{
+ u32 ir;
+
+ bus_freq /= 1000000;
+ ir = DIV_ROUND_UP(((u64)rate_mbps * 32768), bus_freq);
+ return ir;
+}
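
As a quick sanity check of the fixed-point conversion above, here is a standalone editorial sketch with assumed example values (not part of the patch): the requested rate is scaled by 32768 and divided by the bus clock expressed in MHz.

	#include <stdint.h>
	#include <stdio.h>

	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		uint64_t rate_mbps = 100;	/* requested cap - assumed example */
		uint64_t bus_mhz = 500;		/* bus_freq / 1000000 - assumed example */

		/* Same arithmetic as am65_cpsw_qos_tx_rate_calc(). */
		printf("CIR = %llu\n",
		       (unsigned long long)DIV_ROUND_UP(rate_mbps * 32768, bus_mhz));
		/* prints "CIR = 6554" */
		return 0;
	}
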
+
+static void
+am65_cpsw_qos_tx_p0_rate_apply(struct am65_cpsw_common *common,
+ int tx_ch, u32 rate_mbps)
+{
+ struct am65_cpsw_host *host = am65_common_get_host(common);
+ u32 ch_cir;
+ int i;
+
+ ch_cir = am65_cpsw_qos_tx_rate_calc(rate_mbps, common->bus_freq);
+ writel(ch_cir, host->port_base + AM65_CPSW_PN_REG_PRI_CIR(tx_ch));
+
+	/* update the reported rate for this tx queue on every port */
+ for (i = 0; i < common->port_num; i++) {
+ struct net_device *ndev = common->ports[i].ndev;
+
+ if (!ndev)
+ continue;
+ netdev_get_tx_queue(ndev, tx_ch)->tx_maxrate = rate_mbps;
+ }
+}
+
+int am65_cpsw_qos_ndo_tx_p0_set_maxrate(struct net_device *ndev,
+ int queue, u32 rate_mbps)
+{
+ struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+ struct am65_cpsw_common *common = port->common;
+ struct am65_cpsw_tx_chn *tx_chn;
+ u32 ch_rate, tx_ch_rate_msk_new;
+ u32 ch_msk = 0;
+ int ret;
+
+ dev_dbg(common->dev, "apply TX%d rate limiting %uMbps tx_rate_msk%x\n",
+ queue, rate_mbps, common->tx_ch_rate_msk);
+
+ if (common->pf_p0_rx_ptype_rrobin) {
+ dev_err(common->dev, "TX Rate Limiting failed - rrobin mode\n");
+ return -EINVAL;
+ }
+
+ ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate;
+ if (ch_rate == rate_mbps)
+ return 0;
+
+ ret = pm_runtime_get_sync(common->dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(common->dev);
+ return ret;
+ }
+ ret = 0;
+
+ tx_ch_rate_msk_new = common->tx_ch_rate_msk;
+ if (rate_mbps && !(tx_ch_rate_msk_new & BIT(queue))) {
+ tx_ch_rate_msk_new |= BIT(queue);
+ ch_msk = GENMASK(common->tx_ch_num - 1, queue);
+ ch_msk = tx_ch_rate_msk_new ^ ch_msk;
+ } else if (!rate_mbps) {
+ tx_ch_rate_msk_new &= ~BIT(queue);
+ ch_msk = queue ? GENMASK(queue - 1, 0) : 0;
+ ch_msk = tx_ch_rate_msk_new & ch_msk;
+ }
+
+ if (ch_msk) {
+ dev_err(common->dev, "TX rate limiting has to be enabled sequentially hi->lo tx_rate_msk:%x tx_rate_msk_new:%x\n",
+ common->tx_ch_rate_msk, tx_ch_rate_msk_new);
+ ret = -EINVAL;
+ goto exit_put;
+ }
+
+ tx_chn = &common->tx_chns[queue];
+ tx_chn->rate_mbps = rate_mbps;
+ common->tx_ch_rate_msk = tx_ch_rate_msk_new;
+
+ if (!common->usage_count)
+ /* will be applied on next netif up */
+ goto exit_put;
+
+ am65_cpsw_qos_tx_p0_rate_apply(common, queue, rate_mbps);
+
+exit_put:
+ pm_runtime_put(common->dev);
+ return ret;
+}
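
The mask juggling above enforces that per-queue limits are enabled strictly from the highest-numbered TX queue downwards and cleared in the opposite order. A standalone sketch of just that check follows (editorial illustration, not part of the patch; queue counts and rates are made up):

	#include <stdbool.h>
	#include <stdio.h>

	#define BIT(n)		(1u << (n))
	#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))

	/* Returns true when enabling/disabling a limit on 'queue' keeps the
	 * required hi->lo ordering, mirroring the ch_msk logic above.
	 */
	static bool rate_msk_ok(unsigned int msk, int tx_ch_num, int queue, bool enable)
	{
		unsigned int ch_msk;

		if (enable) {
			msk |= BIT(queue);
			ch_msk = msk ^ GENMASK(tx_ch_num - 1, queue);
		} else {
			msk &= ~BIT(queue);
			ch_msk = msk & (queue ? GENMASK(queue - 1, 0) : 0);
		}
		return !ch_msk;
	}

	int main(void)
	{
		/* 8 queues, queue 7 already limited: limiting queue 6 is accepted,
		 * jumping straight to queue 3 is rejected.
		 */
		printf("%d %d\n", rate_msk_ok(BIT(7), 8, 6, true),
		       rate_msk_ok(BIT(7), 8, 3, true));	/* prints "1 0" */
		return 0;
	}

In practice the request reaches this handler through the generic per-queue tx_maxrate attribute (e.g. /sys/class/net/<dev>/queues/tx-7/tx_maxrate), which the core forwards to .ndo_set_tx_maxrate.
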
+
+void am65_cpsw_qos_tx_p0_rate_init(struct am65_cpsw_common *common)
+{
+ struct am65_cpsw_host *host = am65_common_get_host(common);
+ int tx_ch;
+
+ for (tx_ch = 0; tx_ch < common->tx_ch_num; tx_ch++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[tx_ch];
+ u32 ch_cir;
+
+ if (!tx_chn->rate_mbps)
+ continue;
+
+ ch_cir = am65_cpsw_qos_tx_rate_calc(tx_chn->rate_mbps,
+ common->bus_freq);
+ writel(ch_cir,
+ host->port_base + AM65_CPSW_PN_REG_PRI_CIR(tx_ch));
+ }
+}
#include <linux/netdevice.h>
#include <net/pkt_sched.h>
+struct am65_cpsw_common;
+
struct am65_cpsw_est {
int buf;
/* has to be the last one */
void *type_data);
void am65_cpsw_qos_link_up(struct net_device *ndev, int link_speed);
void am65_cpsw_qos_link_down(struct net_device *ndev);
+int am65_cpsw_qos_ndo_tx_p0_set_maxrate(struct net_device *ndev, int queue, u32 rate_mbps);
+void am65_cpsw_qos_tx_p0_rate_init(struct am65_cpsw_common *common);
#endif /* AM65_CPSW_QOS_H_ */
u64 timestamp;
u32 genf_enable;
u32 hw_ts_enable;
+ u32 estf_enable;
struct sk_buff_head txq;
bool pps_enabled;
bool pps_present;
static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
{
struct am65_cpts *cpts = container_of(ptp, struct am65_cpts, ptp_info);
- u32 pps_ctrl_val = 0, pps_ppm_hi = 0, pps_ppm_low = 0;
+ u32 estf_ctrl_val = 0, estf_ppm_hi = 0, estf_ppm_low = 0;
s32 ppb = scaled_ppm_to_ppb(scaled_ppm);
int pps_index = cpts->pps_genf_idx;
u64 adj_period, pps_adj_period;
u32 ctrl_val, ppm_hi, ppm_low;
unsigned long flags;
- int neg_adj = 0;
+ int neg_adj = 0, i;
if (ppb < 0) {
neg_adj = 1;
ppm_low = lower_32_bits(adj_period);
if (cpts->pps_enabled) {
- pps_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control);
+ estf_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control);
if (neg_adj)
- pps_ctrl_val &= ~BIT(1);
+ estf_ctrl_val &= ~BIT(1);
else
- pps_ctrl_val |= BIT(1);
+ estf_ctrl_val |= BIT(1);
/* GenF PPM will do correction using cpts refclk tick which is
* (cpts->ts_add_val + 1) ns, so GenF length PPM adj period
* need to be corrected.
*/
pps_adj_period = adj_period * (cpts->ts_add_val + 1);
- pps_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF;
- pps_ppm_low = lower_32_bits(pps_adj_period);
+ estf_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF;
+ estf_ppm_low = lower_32_bits(pps_adj_period);
}
spin_lock_irqsave(&cpts->lock, flags);
am65_cpts_write32(cpts, ppm_low, ts_ppm_low);
if (cpts->pps_enabled) {
- am65_cpts_write32(cpts, pps_ctrl_val, genf[pps_index].control);
- am65_cpts_write32(cpts, pps_ppm_hi, genf[pps_index].ppm_hi);
- am65_cpts_write32(cpts, pps_ppm_low, genf[pps_index].ppm_low);
+ am65_cpts_write32(cpts, estf_ctrl_val, genf[pps_index].control);
+ am65_cpts_write32(cpts, estf_ppm_hi, genf[pps_index].ppm_hi);
+ am65_cpts_write32(cpts, estf_ppm_low, genf[pps_index].ppm_low);
}
+ for (i = 0; i < AM65_CPTS_ESTF_MAX_NUM; i++) {
+ if (cpts->estf_enable & BIT(i)) {
+ am65_cpts_write32(cpts, estf_ctrl_val, estf[i].control);
+ am65_cpts_write32(cpts, estf_ppm_hi, estf[i].ppm_hi);
+ am65_cpts_write32(cpts, estf_ppm_low, estf[i].ppm_low);
+ }
+ }
/* All GenF/EstF can be updated here the same way */
spin_unlock_irqrestore(&cpts->lock, flags);
am65_cpts_write32(cpts, val, estf[idx].comp_lo);
val = lower_32_bits(cycles);
am65_cpts_write32(cpts, val, estf[idx].length);
+ am65_cpts_write32(cpts, 0, estf[idx].control);
+ am65_cpts_write32(cpts, 0, estf[idx].ppm_hi);
+ am65_cpts_write32(cpts, 0, estf[idx].ppm_low);
+
+ cpts->estf_enable |= BIT(idx);
dev_dbg(cpts->dev, "%s: ESTF:%u enabled\n", __func__, idx);
void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx)
{
am65_cpts_write32(cpts, 0, estf[idx].length);
+ cpts->estf_enable &= ~BIT(idx);
dev_dbg(cpts->dev, "%s: ESTF:%u disabled\n", __func__, idx);
}
netcp->tx_pool_region_id = temp[1];
if (netcp->tx_pool_size < MAX_SKB_FRAGS) {
- dev_err(dev, "tx-pool size too small, must be at least %ld\n",
- MAX_SKB_FRAGS);
+ dev_err(dev, "tx-pool size too small, must be at least %u\n",
+ (unsigned int)MAX_SKB_FRAGS);
ret = -ENODEV;
goto quit;
}
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
#include <linux/iopoll.h>
#include <linux/pci.h>
struct net_device *netdev = wx->netdev;
u32 mhadd, max_frame;
- max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+ max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
/* adjust max frame to be at least the size of a standard frame */
if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN))
max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN);
wr32(wx, WX_PSR_MAX_SZ, max_frame);
}
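
Worked out with the usual header sizes: a standard 1500-byte MTU now programs 1500 + 14 (ETH_HLEN) + 4 (ETH_FCS_LEN) + 4 (VLAN_HLEN) = 1522 into WX_PSR_MAX_SZ, and with WX_MAX_JUMBO_FRAME_SIZE at 9432 the max_mtu advertised by ngbe and txgbe further down works out to 9432 - 22 = 9410.
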
+/**
+ * wx_change_mtu - Change the Maximum Transfer Unit
+ * @netdev: network interface device structure
+ * @new_mtu: new value for maximum frame size
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int wx_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct wx *wx = netdev_priv(netdev);
+
+ netdev->mtu = new_mtu;
+ wx_set_rx_buffer_len(wx);
+
+ return 0;
+}
+EXPORT_SYMBOL(wx_change_mtu);
+
/* Disable the specified rx queue */
void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring)
{
int wx_set_mac(struct net_device *netdev, void *p);
void wx_disable_rx(struct wx *wx);
void wx_set_rx_mode(struct net_device *netdev);
+int wx_change_mtu(struct net_device *netdev, int new_mtu);
void wx_disable_rx_queue(struct wx *wx, struct wx_ring *ring);
void wx_configure(struct wx *wx);
int wx_disable_pcie_master(struct wx *wx);
#include <linux/bitfield.h>
#include <linux/netdevice.h>
-/* Vendor ID */
-#ifndef PCI_VENDOR_ID_WANGXUN
-#define PCI_VENDOR_ID_WANGXUN 0x8088
-#endif
-
#define WX_NCSI_SUP 0x8000
#define WX_NCSI_MASK 0x8000
#define WX_WOL_SUP 0x4000
#define WX_MAX_RXD 8192
#define WX_MAX_TXD 8192
+#define WX_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
+
/* Supported Rx Buffer Sizes */
#define WX_RXBUFFER_256 256 /* Used for skb receive header */
#define WX_RXBUFFER_2K 2048
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/string.h>
-#include <linux/aer.h>
#include <linux/etherdevice.h>
#include <net/ip.h>
#include <linux/phy.h>
+#include <linux/if_vlan.h>
#include "../libwx/wx_type.h"
#include "../libwx/wx_hw.h"
static const struct net_device_ops ngbe_netdev_ops = {
.ndo_open = ngbe_open,
.ndo_stop = ngbe_close,
+ .ndo_change_mtu = wx_change_mtu,
.ndo_start_xmit = wx_xmit_frame,
.ndo_set_rx_mode = wx_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
goto err_pci_disable_dev;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
netdev = devm_alloc_etherdev_mqs(&pdev->dev,
netdev->priv_flags |= IFF_SUPP_NOFCS;
netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = NGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
+ netdev->max_mtu = WX_MAX_JUMBO_FRAME_SIZE -
+ (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
wx->bd_number = func_nums;
/* setup the private structure */
err_free_mac_table:
kfree(wx->mac_table);
err_pci_release_regions:
- pci_disable_pcie_error_reporting(pdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
err_pci_disable_dev:
kfree(wx->mac_table);
wx_clear_interrupt_scheme(wx);
- pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
}
#define NGBE_RX_PB_SIZE 42
#define NGBE_MC_TBL_SIZE 128
#define NGBE_TDB_PB_SZ (20 * 1024) /* 160KB Packet Buffer */
-#define NGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
/* TX/RX descriptor defines */
#define NGBE_DEFAULT_TXD 512 /* default ring size */
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/string.h>
-#include <linux/aer.h>
#include <linux/etherdevice.h>
#include <net/ip.h>
+#include <linux/if_vlan.h>
#include "../libwx/wx_type.h"
#include "../libwx/wx_lib.h"
static const struct net_device_ops txgbe_netdev_ops = {
.ndo_open = txgbe_open,
.ndo_stop = txgbe_close,
+ .ndo_change_mtu = wx_change_mtu,
.ndo_start_xmit = wx_xmit_frame,
.ndo_set_rx_mode = wx_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,
goto err_pci_disable_dev;
}
- pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
netdev = devm_alloc_etherdev_mqs(&pdev->dev,
netdev->priv_flags |= IFF_SUPP_NOFCS;
netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
+ netdev->max_mtu = WX_MAX_JUMBO_FRAME_SIZE -
+ (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
/* make sure the EEPROM is good */
err = txgbe_validate_eeprom_checksum(wx, NULL);
err_free_mac_table:
kfree(wx->mac_table);
err_pci_release_regions:
- pci_disable_pcie_error_reporting(pdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
err_pci_disable_dev:
kfree(wx->mac_table);
wx_clear_interrupt_scheme(wx);
- pci_disable_pcie_error_reporting(pdev);
-
pci_disable_device(pdev);
}
#define TXGBE_SP_MC_TBL_SIZE 128
#define TXGBE_SP_RX_PB_SIZE 512
#define TXGBE_SP_TDB_PB_SZ (160 * 1024) /* 160KB Packet Buffer */
-#define TXGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
/* TX/RX descriptor defines */
#define TXGBE_DEFAULT_TXD 512
if (unlikely(geneveh->ver != GENEVE_VER))
goto drop;
- inner_proto = geneveh->proto_type;
-
- if (unlikely((inner_proto != htons(ETH_P_TEB) &&
- inner_proto != htons(ETH_P_IP) &&
- inner_proto != htons(ETH_P_IPV6))))
- goto drop;
-
gs = rcu_dereference_sk_user_data(sk);
if (!gs)
goto drop;
if (!geneve)
goto drop;
+ inner_proto = geneveh->proto_type;
+
if (unlikely((!geneve->cfg.inner_proto_inherit &&
inner_proto != htons(ETH_P_TEB)))) {
geneve->dev->stats.rx_dropped++;
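
The inner-protocol check moves below the socket and device lookups because it now depends on the per-device geneve->cfg.inner_proto_inherit flag: with inherit disabled only ETH_P_TEB payloads are accepted, and the old unconditional TEB/IPv4/IPv6 whitelist is dropped.
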
dev->type = ARPHRD_NONE;
dev->hard_header_len = 0;
dev->addr_len = 0;
- dev->flags = IFF_NOARP;
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP;
}
err = register_netdevice(dev);
static struct spi_driver adf7242_driver = {
.id_table = adf7242_device_id,
.driver = {
- .of_match_table = of_match_ptr(adf7242_of_match),
+ .of_match_table = adf7242_of_match,
.name = "adf7242",
- .owner = THIS_MODULE,
},
.probe = adf7242_probe,
.remove = adf7242_remove,
static struct spi_driver at86rf230_driver = {
.id_table = at86rf230_device_id,
.driver = {
- .of_match_table = of_match_ptr(at86rf230_of_match),
+ .of_match_table = at86rf230_of_match,
.name = "at86rf230",
},
.probe = at86rf230_probe,
static struct spi_driver ca8210_spi_driver = {
.driver = {
.name = DRIVER_NAME,
- .owner = THIS_MODULE,
- .of_match_table = of_match_ptr(ca8210_of_ids),
+ .of_match_table = ca8210_of_ids,
},
.probe = ca8210_probe,
.remove = ca8210_remove
static struct spi_driver mcr20a_driver = {
.id_table = mcr20a_device_id,
.driver = {
- .of_match_table = of_match_ptr(mcr20a_of_match),
+ .of_match_table = mcr20a_of_match,
.name = "mcr20a",
},
.probe = mcr20a_probe,
#
# Makefile for the Qualcomm IPA driver.
-IPA_VERSIONS := 3.1 3.5.1 4.2 4.5 4.7 4.9 4.11
+IPA_REG_VERSIONS := 3.1 3.5.1 4.2 4.5 4.7 4.9 4.11 5.0
# Some IPA versions can reuse another set of GSI register definitions.
-GSI_IPA_VERSIONS := 3.1 3.5.1 4.0 4.5 4.9 4.11
+GSI_REG_VERSIONS := 3.1 3.5.1 4.0 4.5 4.9 4.11 5.0
+
+IPA_DATA_VERSIONS := 3.1 3.5.1 4.2 4.5 4.7 4.9 4.11 5.0
obj-$(CONFIG_QCOM_IPA) += ipa.o
ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
ipa_sysfs.o
-ipa-y += $(GSI_IPA_VERSIONS:%=reg/gsi_reg-v%.o)
+ipa-y += $(IPA_REG_VERSIONS:%=reg/ipa_reg-v%.o)
-ipa-y += $(IPA_VERSIONS:%=reg/ipa_reg-v%.o)
+ipa-y += $(GSI_REG_VERSIONS:%=reg/gsi_reg-v%.o)
-ipa-y += $(IPA_VERSIONS:%=data/ipa_data-v%.o)
+ipa-y += $(IPA_DATA_VERSIONS:%=data/ipa_data-v%.o)
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/log2.h>
+
+#include "../gsi.h"
+#include "../ipa_data.h"
+#include "../ipa_endpoint.h"
+#include "../ipa_mem.h"
+
+/** enum ipa_resource_type - IPA resource types for an SoC having IPA v5.0 */
+enum ipa_resource_type {
+ /* Source resource types; first must have value 0 */
+ IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS = 0,
+ IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS,
+ IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF,
+ IPA_RESOURCE_TYPE_SRC_HPS_DMARS,
+ IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES,
+
+ /* Destination resource types; first must have value 0 */
+ IPA_RESOURCE_TYPE_DST_DATA_SECTORS = 0,
+ IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+ IPA_RESOURCE_TYPE_DST_ULSO_SEGMENTS,
+};
+
+/* Resource groups used for an SoC having IPA v5.0 */
+enum ipa_rsrc_group_id {
+ /* Source resource group identifiers */
+ IPA_RSRC_GROUP_SRC_UL = 0,
+ IPA_RSRC_GROUP_SRC_DL,
+ IPA_RSRC_GROUP_SRC_UNUSED_2,
+ IPA_RSRC_GROUP_SRC_UNUSED_3,
+ IPA_RSRC_GROUP_SRC_URLLC,
+ IPA_RSRC_GROUP_SRC_U_RX_QC,
+ IPA_RSRC_GROUP_SRC_COUNT, /* Last in set; not a source group */
+
+ /* Destination resource group identifiers */
+ IPA_RSRC_GROUP_DST_UL = 0,
+ IPA_RSRC_GROUP_DST_DL,
+ IPA_RSRC_GROUP_DST_DMA,
+ IPA_RSRC_GROUP_DST_QDSS,
+ IPA_RSRC_GROUP_DST_CV2X,
+ IPA_RSRC_GROUP_DST_UC,
+ IPA_RSRC_GROUP_DST_DRB_IP,
+ IPA_RSRC_GROUP_DST_COUNT, /* Last; not a destination group */
+};
+
+/* QSB configuration data for an SoC having IPA v5.0 */
+static const struct ipa_qsb_data ipa_qsb_data[] = {
+ [IPA_QSB_MASTER_DDR] = {
+ .max_writes = 0,
+ .max_reads = 0, /* no limit (hardware max) */
+ .max_reads_beats = 0,
+ },
+ [IPA_QSB_MASTER_PCIE] = {
+ .max_writes = 0,
+ .max_reads = 0, /* no limit (hardware max) */
+ .max_reads_beats = 0,
+ },
+};
+
+/* Endpoint configuration data for an SoC having IPA v5.0 */
+static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+ [IPA_ENDPOINT_AP_COMMAND_TX] = {
+ .ee_id = GSI_EE_AP,
+ .channel_id = 12,
+ .endpoint_id = 14,
+ .toward_ipa = true,
+ .channel = {
+ .tre_count = 256,
+ .event_count = 256,
+ .tlv_count = 20,
+ },
+ .endpoint = {
+ .config = {
+ .resource_group = IPA_RSRC_GROUP_SRC_UL,
+ .dma_mode = true,
+ .dma_endpoint = IPA_ENDPOINT_AP_LAN_RX,
+ .tx = {
+ .seq_type = IPA_SEQ_DMA,
+ },
+ },
+ },
+ },
+ [IPA_ENDPOINT_AP_LAN_RX] = {
+ .ee_id = GSI_EE_AP,
+ .channel_id = 13,
+ .endpoint_id = 16,
+ .toward_ipa = false,
+ .channel = {
+ .tre_count = 256,
+ .event_count = 256,
+ .tlv_count = 9,
+ },
+ .endpoint = {
+ .config = {
+ .resource_group = IPA_RSRC_GROUP_DST_UL,
+ .aggregation = true,
+ .status_enable = true,
+ .rx = {
+ .buffer_size = 8192,
+ .pad_align = ilog2(sizeof(u32)),
+ .aggr_time_limit = 500,
+ },
+ },
+ },
+ },
+ [IPA_ENDPOINT_AP_MODEM_TX] = {
+ .ee_id = GSI_EE_AP,
+ .channel_id = 11,
+ .endpoint_id = 2,
+ .toward_ipa = true,
+ .channel = {
+ .tre_count = 512,
+ .event_count = 512,
+ .tlv_count = 25,
+ },
+ .endpoint = {
+ .filter_support = true,
+ .config = {
+ .resource_group = IPA_RSRC_GROUP_SRC_UL,
+ .checksum = true,
+ .qmap = true,
+ .status_enable = true,
+ .tx = {
+ .seq_type = IPA_SEQ_2_PASS_SKIP_LAST_UC,
+ .status_endpoint =
+ IPA_ENDPOINT_MODEM_AP_RX,
+ },
+ },
+ },
+ },
+ [IPA_ENDPOINT_AP_MODEM_RX] = {
+ .ee_id = GSI_EE_AP,
+ .channel_id = 1,
+ .endpoint_id = 23,
+ .toward_ipa = false,
+ .channel = {
+ .tre_count = 256,
+ .event_count = 256,
+ .tlv_count = 9,
+ },
+ .endpoint = {
+ .config = {
+ .resource_group = IPA_RSRC_GROUP_DST_DL,
+ .checksum = true,
+ .qmap = true,
+ .aggregation = true,
+ .rx = {
+ .buffer_size = 8192,
+ .aggr_time_limit = 500,
+ .aggr_close_eof = true,
+ },
+ },
+ },
+ },
+ [IPA_ENDPOINT_MODEM_AP_TX] = {
+ .ee_id = GSI_EE_MODEM,
+ .channel_id = 0,
+ .endpoint_id = 12,
+ .toward_ipa = true,
+ .endpoint = {
+ .filter_support = true,
+ },
+ },
+ [IPA_ENDPOINT_MODEM_AP_RX] = {
+ .ee_id = GSI_EE_MODEM,
+ .channel_id = 7,
+ .endpoint_id = 21,
+ .toward_ipa = false,
+ },
+ [IPA_ENDPOINT_MODEM_DL_NLO_TX] = {
+ .ee_id = GSI_EE_MODEM,
+ .channel_id = 2,
+ .endpoint_id = 15,
+ .toward_ipa = true,
+ .endpoint = {
+ .filter_support = true,
+ },
+ },
+};
+
+/* Source resource configuration data for an SoC having IPA v5.0 */
+static const struct ipa_resource ipa_resource_src[] = {
+ [IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS] = {
+ .limits[IPA_RSRC_GROUP_SRC_UL] = {
+ .min = 3, .max = 9,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_DL] = {
+ .min = 4, .max = 10,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_URLLC] = {
+ .min = 1, .max = 63,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_U_RX_QC] = {
+ .min = 0, .max = 63,
+ },
+ },
+ [IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS] = {
+ .limits[IPA_RSRC_GROUP_SRC_UL] = {
+ .min = 9, .max = 9,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_DL] = {
+ .min = 12, .max = 12,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_URLLC] = {
+ .min = 10, .max = 10,
+ },
+ },
+ [IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF] = {
+ .limits[IPA_RSRC_GROUP_SRC_UL] = {
+ .min = 9, .max = 9,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_DL] = {
+ .min = 24, .max = 24,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_URLLC] = {
+ .min = 20, .max = 20,
+ },
+ },
+ [IPA_RESOURCE_TYPE_SRC_HPS_DMARS] = {
+ .limits[IPA_RSRC_GROUP_SRC_UL] = {
+ .min = 0, .max = 63,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_DL] = {
+ .min = 0, .max = 63,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_URLLC] = {
+ .min = 1, .max = 63,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_U_RX_QC] = {
+ .min = 0, .max = 63,
+ },
+ },
+ [IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES] = {
+ .limits[IPA_RSRC_GROUP_SRC_UL] = {
+ .min = 22, .max = 22,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_DL] = {
+ .min = 16, .max = 16,
+ },
+ .limits[IPA_RSRC_GROUP_SRC_URLLC] = {
+ .min = 16, .max = 16,
+ },
+ },
+};
+
+/* Destination resource configuration data for an SoC having IPA v5.0 */
+static const struct ipa_resource ipa_resource_dst[] = {
+ [IPA_RESOURCE_TYPE_DST_DATA_SECTORS] = {
+ .limits[IPA_RSRC_GROUP_DST_UL] = {
+ .min = 6, .max = 6,
+ },
+ .limits[IPA_RSRC_GROUP_DST_DL] = {
+ .min = 5, .max = 5,
+ },
+ .limits[IPA_RSRC_GROUP_DST_DRB_IP] = {
+ .min = 39, .max = 39,
+ },
+ },
+ [IPA_RESOURCE_TYPE_DST_DPS_DMARS] = {
+ .limits[IPA_RSRC_GROUP_DST_UL] = {
+ .min = 0, .max = 3,
+ },
+ .limits[IPA_RSRC_GROUP_DST_DL] = {
+ .min = 0, .max = 3,
+ },
+ },
+ [IPA_RESOURCE_TYPE_DST_ULSO_SEGMENTS] = {
+ .limits[IPA_RSRC_GROUP_DST_UL] = {
+ .min = 0, .max = 63,
+ },
+ .limits[IPA_RSRC_GROUP_DST_DL] = {
+ .min = 0, .max = 63,
+ },
+ },
+};
+
+/* Resource configuration data for an SoC having IPA v5.0 */
+static const struct ipa_resource_data ipa_resource_data = {
+ .rsrc_group_dst_count = IPA_RSRC_GROUP_DST_COUNT,
+ .rsrc_group_src_count = IPA_RSRC_GROUP_SRC_COUNT,
+ .resource_src_count = ARRAY_SIZE(ipa_resource_src),
+ .resource_src = ipa_resource_src,
+ .resource_dst_count = ARRAY_SIZE(ipa_resource_dst),
+ .resource_dst = ipa_resource_dst,
+};
+
+/* IPA-resident memory region data for an SoC having IPA v5.0 */
+static const struct ipa_mem ipa_mem_local_data[] = {
+ {
+ .id = IPA_MEM_UC_EVENT_RING,
+ .offset = 0x0000,
+ .size = 0x1000,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_UC_SHARED,
+ .offset = 0x1000,
+ .size = 0x0080,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_UC_INFO,
+ .offset = 0x1080,
+ .size = 0x0200,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_V4_FILTER_HASHED,
+ .offset = 0x1288,
+ .size = 0x0078,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V4_FILTER,
+ .offset = 0x1308,
+ .size = 0x0078,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V6_FILTER_HASHED,
+ .offset = 0x1388,
+ .size = 0x0078,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V6_FILTER,
+ .offset = 0x1408,
+ .size = 0x0078,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V4_ROUTE_HASHED,
+ .offset = 0x1488,
+ .size = 0x0098,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V4_ROUTE,
+ .offset = 0x1528,
+ .size = 0x0098,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V6_ROUTE_HASHED,
+ .offset = 0x15c8,
+ .size = 0x0098,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_V6_ROUTE,
+ .offset = 0x1668,
+ .size = 0x0098,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_MODEM_HEADER,
+ .offset = 0x1708,
+ .size = 0x0240,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_AP_HEADER,
+ .offset = 0x1948,
+ .size = 0x01e0,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_MODEM_PROC_CTX,
+ .offset = 0x1b40,
+ .size = 0x0b20,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_AP_PROC_CTX,
+ .offset = 0x2660,
+ .size = 0x0200,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_STATS_QUOTA_MODEM,
+ .offset = 0x2868,
+ .size = 0x0060,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_STATS_QUOTA_AP,
+ .offset = 0x28c8,
+ .size = 0x0048,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_AP_V4_FILTER,
+ .offset = 0x2918,
+ .size = 0x0118,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_AP_V6_FILTER,
+ .offset = 0x2aa0,
+ .size = 0x0228,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_STATS_FILTER_ROUTE,
+ .offset = 0x2cd0,
+ .size = 0x0ba0,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_STATS_DROP,
+ .offset = 0x3870,
+ .size = 0x0020,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_MODEM,
+ .offset = 0x3898,
+ .size = 0x0d48,
+ .canary_count = 2,
+ },
+ {
+ .id = IPA_MEM_NAT_TABLE,
+ .offset = 0x45e0,
+ .size = 0x0900,
+ .canary_count = 0,
+ },
+ {
+ .id = IPA_MEM_PDN_CONFIG,
+ .offset = 0x4ee8,
+ .size = 0x0100,
+ .canary_count = 2,
+ },
+};
+
+/* Memory configuration data for an SoC having IPA v5.0 */
+static const struct ipa_mem_data ipa_mem_data = {
+ .local_count = ARRAY_SIZE(ipa_mem_local_data),
+ .local = ipa_mem_local_data,
+ .imem_addr = 0x14688000,
+ .imem_size = 0x00003000,
+ .smem_id = 497,
+ .smem_size = 0x00009000,
+};
+
+/* Interconnect rates are in 1000 byte/second units */
+static const struct ipa_interconnect_data ipa_interconnect_data[] = {
+ {
+ .name = "memory",
+ .peak_bandwidth = 1900000, /* 1.9 GBps */
+ .average_bandwidth = 600000, /* 600 MBps */
+ },
+ /* Average rate is unused for the next interconnect */
+ {
+ .name = "config",
+ .peak_bandwidth = 76800, /* 76.8 MBps */
+ .average_bandwidth = 0, /* unused */
+ },
+};
+
+/* Clock and interconnect configuration data for an SoC having IPA v5.0 */
+static const struct ipa_power_data ipa_power_data = {
+ .core_clock_rate = 120 * 1000 * 1000, /* Hz */
+ .interconnect_count = ARRAY_SIZE(ipa_interconnect_data),
+ .interconnect_data = ipa_interconnect_data,
+};
+
+/* Configuration data for an SoC having IPA v5.0. */
+const struct ipa_data ipa_data_v5_0 = {
+ .version = IPA_VERSION_5_0,
+ .qsb_count = ARRAY_SIZE(ipa_qsb_data),
+ .qsb_data = ipa_qsb_data,
+ .modem_route_count = 11,
+ .endpoint_count = ARRAY_SIZE(ipa_gsi_endpoint_data),
+ .endpoint_data = ipa_gsi_endpoint_data,
+ .resource_data = &ipa_resource_data,
+ .mem_data = &ipa_mem_data,
+ .power_data = &ipa_power_data,
+};
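
The new v5.0 data table is hooked up through the IPA_DATA_VERSIONS/IPA_REG_VERSIONS/GSI_REG_VERSIONS additions in the Makefile above and selected at probe time by the "qcom,sdx65-ipa" compatible entry added below.
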
#include "ipa_version.h"
/* Maximum number of channels and event rings supported by the driver */
-#define GSI_CHANNEL_COUNT_MAX 23
-#define GSI_EVT_RING_COUNT_MAX 24
+#define GSI_CHANNEL_COUNT_MAX 28
+#define GSI_EVT_RING_COUNT_MAX 28
/* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
#define GSI_TLV_MAX 64
case IPA_VERSION_4_11:
return &gsi_regs_v4_11;
+ case IPA_VERSION_5_0:
+ return &gsi_regs_v5_0;
+
default:
return NULL;
}
extern const struct regs gsi_regs_v4_5;
extern const struct regs gsi_regs_v4_9;
extern const struct regs gsi_regs_v4_11;
+extern const struct regs gsi_regs_v5_0;
/**
* gsi_reg() - Return the structure describing a GSI register
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
- * Copyright (C) 2019-2022 Linaro Ltd.
+ * Copyright (C) 2019-2023 Linaro Ltd.
*/
#ifndef _IPA_DATA_H_
#define _IPA_DATA_H_
extern const struct ipa_data ipa_data_v4_7;
extern const struct ipa_data ipa_data_v4_9;
extern const struct ipa_data ipa_data_v4_11;
+extern const struct ipa_data ipa_data_v5_0;
#endif /* _IPA_DATA_H_ */
} else if (ipa->version < IPA_VERSION_4_5) {
val |= reg_bit(reg, GSI_MULTI_AXI_MASTERS_DIS);
} else {
- /* For IPA v4.5 FULL_FLUSH_WAIT_RS_CLOSURE_EN is 0 */
+ /* For IPA v4.5+ FULL_FLUSH_WAIT_RS_CLOSURE_EN is 0 */
}
val |= reg_bit(reg, GSI_MULTI_INORDER_RD_DIS);
.compatible = "qcom,sc7280-ipa",
.data = &ipa_data_v4_11,
},
+ {
+ .compatible = "qcom,sdx65-ipa",
+ .data = &ipa_data_v5_0,
+ },
{ },
};
MODULE_DEVICE_TABLE(of, ipa_match);
return &ipa_regs_v4_9;
case IPA_VERSION_4_11:
return &ipa_regs_v4_11;
+ case IPA_VERSION_5_0:
+ return &ipa_regs_v5_0;
default:
return NULL;
}
extern const struct regs ipa_regs_v4_7;
extern const struct regs ipa_regs_v4_9;
extern const struct regs ipa_regs_v4_11;
+extern const struct regs ipa_regs_v5_0;
const struct reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id);
return "4.9";
case IPA_VERSION_4_11:
return "4.11";
+ case IPA_VERSION_5_0:
+ return "5.0";
default:
return "0.0"; /* Won't happen (checked at probe time) */
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../gsi.h"
+#include "../reg.h"
+#include "../gsi_reg.h"
+
+REG(INTER_EE_SRC_CH_IRQ_MSK, inter_ee_src_ch_irq_msk,
+ 0x0000c01c + 0x1000 * GSI_EE_AP);
+
+REG(INTER_EE_SRC_EV_CH_IRQ_MSK, inter_ee_src_ev_ch_irq_msk,
+ 0x0000c028 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ch_c_cntxt_0_fmask[] = {
+ [CHTYPE_PROTOCOL] = GENMASK(6, 0),
+ [CHTYPE_DIR] = BIT(7),
+ [CH_EE] = GENMASK(11, 8),
+ [CHID] = GENMASK(19, 12),
+ [CHSTATE] = GENMASK(23, 20),
+ [ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
+ 0x00014000 + 0x12000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_cntxt_1_fmask[] = {
+ [CH_R_LENGTH] = GENMASK(23, 0),
+ [ERINDEX] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
+ 0x00014004 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x00014008 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001400c + 0x12000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ch_c_qos_fmask[] = {
+ [WRR_WEIGHT] = GENMASK(3, 0),
+ /* Bits 4-7 reserved */
+ [MAX_PREFETCH] = BIT(8),
+ [USE_DB_ENG] = BIT(9),
+ [PREFETCH_MODE] = GENMASK(13, 10),
+ /* Bits 14-15 reserved */
+ [EMPTY_LVL_THRSHOLD] = GENMASK(23, 16),
+ [DB_IN_BYTES] = BIT(24),
+ [LOW_LATENCY_EN] = BIT(25),
+ /* Bits 26-31 reserved */
+};
+
+REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x00014048 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
+ 0x0001404c + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
+ 0x00014050 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
+ 0x00014054 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
+ 0x00014058 + 0x12000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
+ [EV_CHTYPE] = GENMASK(6, 0),
+ [EV_INTYPE] = BIT(7),
+ [EV_EVCHID] = GENMASK(15, 8),
+ [EV_EE] = GENMASK(19, 16),
+ [EV_CHSTATE] = GENMASK(23, 20),
+ [EV_ELEMENT_SIZE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
+ 0x0001c000 + 0x12000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
+ [R_LENGTH] = GENMASK(19, 0),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+ 0x0001c004 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
+ 0x0001c008 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
+ 0x0001c00c + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
+ 0x0001c010 + 0x12000 * GSI_EE_AP, 0x80);
+
+static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
+ [EV_MODT] = GENMASK(15, 0),
+ [EV_MODC] = GENMASK(23, 16),
+ [EV_MOD_CNT] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
+ 0x0001c020 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
+ 0x0001c024 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
+ 0x0001c028 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
+ 0x0001c02c + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
+ 0x0001c030 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
+ 0x0001c034 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
+ 0x0001c048 + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
+ 0x0001c04c + 0x12000 * GSI_EE_AP, 0x80);
+
+REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
+ 0x00024000 + 0x12000 * GSI_EE_AP, 0x08);
+
+REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
+ 0x00024800 + 0x12000 * GSI_EE_AP, 0x08);
+
+static const u32 reg_gsi_status_fmask[] = {
+ [ENABLED] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(GSI_STATUS, gsi_status, 0x00025000 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_ch_cmd_fmask[] = {
+ [CH_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [CH_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(CH_CMD, ch_cmd, 0x00025008 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_ev_ch_cmd_fmask[] = {
+ [EV_CHID] = GENMASK(7, 0),
+ /* Bits 8-23 reserved */
+ [EV_OPCODE] = GENMASK(31, 24),
+};
+
+REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x00025010 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_generic_cmd_fmask[] = {
+ [GENERIC_OPCODE] = GENMASK(4, 0),
+ [GENERIC_CHID] = GENMASK(9, 5),
+ [GENERIC_EE] = GENMASK(13, 10),
+ /* Bits 14-31 reserved */
+};
+
+REG_FIELDS(GENERIC_CMD, generic_cmd, 0x00025018 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_2_fmask[] = {
+ [NUM_CH_PER_EE] = GENMASK(7, 0),
+ [IRAM_SIZE] = GENMASK(12, 8),
+ [GSI_CH_PEND_TRANSLATE] = BIT(13),
+ [GSI_CH_FULL_LOGIC] = BIT(14),
+ [GSI_USE_SDMA] = BIT(15),
+ [GSI_SDMA_N_INT] = GENMASK(18, 16),
+ [GSI_SDMA_MAX_BURST] = GENMASK(26, 19),
+ [GSI_SDMA_N_IOVEC] = GENMASK(29, 27),
+ [GSI_USE_RD_WR_ENG] = BIT(30),
+ [GSI_USE_INTER_EE] = BIT(31),
+};
+
+REG_FIELDS(HW_PARAM_2, hw_param_2, 0x00025040 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_hw_param_4_fmask[] = {
+ [EV_PER_EE] = GENMASK(7, 0),
+ [IRAM_PROTOCOL_COUNT] = GENMASK(15, 8),
+ /* Bits 16-31 reserved */
+};
+
+REG_FIELDS(HW_PARAM_4, hw_param_4, 0x00025050 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x00025080 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x00025088 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x00025090 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
+ 0x00025094 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
+ 0x00025098 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0002509c + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
+ 0x000250a0 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
+ 0x000250a4 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x000250a8 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
+ 0x000250ac + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
+ 0x000250b0 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x00025200 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x00025204 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x00025208 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0002520c + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x00025210 + 0x12000 * GSI_EE_AP);
+
+REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x00025214 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_intset_fmask[] = {
+ [INTYPE] = BIT(0)
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x00025220 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_error_log_fmask[] = {
+ [ERR_ARG3] = GENMASK(3, 0),
+ [ERR_ARG2] = GENMASK(7, 4),
+ [ERR_ARG1] = GENMASK(11, 8),
+ [ERR_CODE] = GENMASK(15, 12),
+ /* Bits 16-18 reserved */
+ [ERR_VIRT_IDX] = GENMASK(23, 19),
+ [ERR_TYPE] = GENMASK(27, 24),
+ [ERR_EE] = GENMASK(31, 28),
+};
+
+REG_FIELDS(ERROR_LOG, error_log, 0x00025240 + 0x12000 * GSI_EE_AP);
+
+REG(ERROR_LOG_CLR, error_log_clr, 0x00025244 + 0x12000 * GSI_EE_AP);
+
+static const u32 reg_cntxt_scratch_0_fmask[] = {
+ [INTER_EE_RESULT] = GENMASK(2, 0),
+ /* Bits 3-4 reserved */
+ [GENERIC_EE_RESULT] = GENMASK(7, 5),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x00025400 + 0x12000 * GSI_EE_AP);
+
+static const struct reg *reg_array[] = {
+ [INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,
+ [INTER_EE_SRC_EV_CH_IRQ_MSK] = &reg_inter_ee_src_ev_ch_irq_msk,
+ [CH_C_CNTXT_0] = &reg_ch_c_cntxt_0,
+ [CH_C_CNTXT_1] = &reg_ch_c_cntxt_1,
+ [CH_C_CNTXT_2] = &reg_ch_c_cntxt_2,
+ [CH_C_CNTXT_3] = &reg_ch_c_cntxt_3,
+ [CH_C_QOS] = &reg_ch_c_qos,
+ [CH_C_SCRATCH_0] = &reg_ch_c_scratch_0,
+ [CH_C_SCRATCH_1] = &reg_ch_c_scratch_1,
+ [CH_C_SCRATCH_2] = &reg_ch_c_scratch_2,
+ [CH_C_SCRATCH_3] = &reg_ch_c_scratch_3,
+ [EV_CH_E_CNTXT_0] = &reg_ev_ch_e_cntxt_0,
+ [EV_CH_E_CNTXT_1] = &reg_ev_ch_e_cntxt_1,
+ [EV_CH_E_CNTXT_2] = &reg_ev_ch_e_cntxt_2,
+ [EV_CH_E_CNTXT_3] = &reg_ev_ch_e_cntxt_3,
+ [EV_CH_E_CNTXT_4] = &reg_ev_ch_e_cntxt_4,
+ [EV_CH_E_CNTXT_8] = &reg_ev_ch_e_cntxt_8,
+ [EV_CH_E_CNTXT_9] = &reg_ev_ch_e_cntxt_9,
+ [EV_CH_E_CNTXT_10] = &reg_ev_ch_e_cntxt_10,
+ [EV_CH_E_CNTXT_11] = &reg_ev_ch_e_cntxt_11,
+ [EV_CH_E_CNTXT_12] = &reg_ev_ch_e_cntxt_12,
+ [EV_CH_E_CNTXT_13] = &reg_ev_ch_e_cntxt_13,
+ [EV_CH_E_SCRATCH_0] = &reg_ev_ch_e_scratch_0,
+ [EV_CH_E_SCRATCH_1] = &reg_ev_ch_e_scratch_1,
+ [CH_C_DOORBELL_0] = &reg_ch_c_doorbell_0,
+ [EV_CH_E_DOORBELL_0] = &reg_ev_ch_e_doorbell_0,
+ [GSI_STATUS] = &reg_gsi_status,
+ [CH_CMD] = &reg_ch_cmd,
+ [EV_CH_CMD] = &reg_ev_ch_cmd,
+ [GENERIC_CMD] = &reg_generic_cmd,
+ [HW_PARAM_2] = &reg_hw_param_2,
+ [HW_PARAM_4] = &reg_hw_param_4,
+ [CNTXT_TYPE_IRQ] = &reg_cntxt_type_irq,
+ [CNTXT_TYPE_IRQ_MSK] = &reg_cntxt_type_irq_msk,
+ [CNTXT_SRC_CH_IRQ] = &reg_cntxt_src_ch_irq,
+ [CNTXT_SRC_CH_IRQ_MSK] = &reg_cntxt_src_ch_irq_msk,
+ [CNTXT_SRC_CH_IRQ_CLR] = &reg_cntxt_src_ch_irq_clr,
+ [CNTXT_SRC_EV_CH_IRQ] = &reg_cntxt_src_ev_ch_irq,
+ [CNTXT_SRC_EV_CH_IRQ_MSK] = &reg_cntxt_src_ev_ch_irq_msk,
+ [CNTXT_SRC_EV_CH_IRQ_CLR] = &reg_cntxt_src_ev_ch_irq_clr,
+ [CNTXT_SRC_IEOB_IRQ] = &reg_cntxt_src_ieob_irq,
+ [CNTXT_SRC_IEOB_IRQ_MSK] = &reg_cntxt_src_ieob_irq_msk,
+ [CNTXT_SRC_IEOB_IRQ_CLR] = &reg_cntxt_src_ieob_irq_clr,
+ [CNTXT_GLOB_IRQ_STTS] = &reg_cntxt_glob_irq_stts,
+ [CNTXT_GLOB_IRQ_EN] = &reg_cntxt_glob_irq_en,
+ [CNTXT_GLOB_IRQ_CLR] = &reg_cntxt_glob_irq_clr,
+ [CNTXT_GSI_IRQ_STTS] = &reg_cntxt_gsi_irq_stts,
+ [CNTXT_GSI_IRQ_EN] = &reg_cntxt_gsi_irq_en,
+ [CNTXT_GSI_IRQ_CLR] = &reg_cntxt_gsi_irq_clr,
+ [CNTXT_INTSET] = &reg_cntxt_intset,
+ [ERROR_LOG] = &reg_error_log,
+ [ERROR_LOG_CLR] = &reg_error_log_clr,
+ [CNTXT_SCRATCH_0] = &reg_cntxt_scratch_0,
+};
+
+const struct regs gsi_regs_v5_0 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
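
Editor's note, a hedged sketch rather than part of the patch: a per-version table such as gsi_regs_v5_0 above is only reached through the version switch shown earlier and a small lookup by register ID. Roughly, and assuming the enum type is named gsi_reg_id as in the surrounding headers, the lookup pattern is:

/* Sketch only: look up one register descriptor in a per-version table */
static const struct reg *gsi_reg_lookup(const struct regs *regs,
					enum gsi_reg_id reg_id)
{
	if (reg_id >= regs->reg_count)
		return NULL;

	return regs->reg[reg_id];	/* e.g. &reg_ch_cmd for CH_CMD */
}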
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2023 Linaro Ltd. */
+
+#include <linux/types.h>
+
+#include "../ipa.h"
+#include "../ipa_reg.h"
+
+static const u32 reg_flavor_0_fmask[] = {
+ [MAX_PIPES] = GENMASK(7, 0),
+ [MAX_CONS_PIPES] = GENMASK(15, 8),
+ [MAX_PROD_PIPES] = GENMASK(23, 16),
+ [PROD_LOWEST] = GENMASK(31, 24),
+};
+
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000000);
+
+static const u32 reg_comp_cfg_fmask[] = {
+ [RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0),
+ [GSI_SNOC_BYPASS_DIS] = BIT(1),
+ [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
+ [GEN_QMB_1_SNOC_BYPASS_DIS] = BIT(3),
+ /* Bit 4 reserved */
+ [IPA_QMB_SELECT_CONS_EN] = BIT(5),
+ [IPA_QMB_SELECT_PROD_EN] = BIT(6),
+ [GSI_MULTI_INORDER_RD_DIS] = BIT(7),
+ [GSI_MULTI_INORDER_WR_DIS] = BIT(8),
+ [GEN_QMB_0_MULTI_INORDER_RD_DIS] = BIT(9),
+ [GEN_QMB_1_MULTI_INORDER_RD_DIS] = BIT(10),
+ [GEN_QMB_0_MULTI_INORDER_WR_DIS] = BIT(11),
+ [GEN_QMB_1_MULTI_INORDER_WR_DIS] = BIT(12),
+ [GEN_QMB_0_SNOC_CNOC_LOOP_PROT_DIS] = BIT(13),
+ [GSI_SNOC_CNOC_LOOP_PROT_DISABLE] = BIT(14),
+ [GSI_MULTI_AXI_MASTERS_DIS] = BIT(15),
+ [IPA_QMB_SELECT_GLOBAL_EN] = BIT(16),
+ [FULL_FLUSH_WAIT_RS_CLOSURE_EN] = BIT(17),
+ /* Bit 18 reserved */
+ [QMB_RAM_RD_CACHE_DISABLE] = BIT(19),
+ [GENQMB_AOOOWR] = BIT(20),
+ [IF_OUT_OF_BUF_STOP_RESET_MASK_EN] = BIT(21),
+ [ATOMIC_FETCHER_ARB_LOCK_DIS] = GENMASK(27, 22),
+ /* Bits 28-29 reserved */
+ [GEN_QMB_1_DYNAMIC_ASIZE] = BIT(30),
+ [GEN_QMB_0_DYNAMIC_ASIZE] = BIT(31),
+};
+
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000002c);
+
+static const u32 reg_clkon_cfg_fmask[] = {
+ [CLKON_RX] = BIT(0),
+ [CLKON_PROC] = BIT(1),
+ [TX_WRAPPER] = BIT(2),
+ [CLKON_MISC] = BIT(3),
+ [RAM_ARB] = BIT(4),
+ [FTCH_HPS] = BIT(5),
+ [FTCH_DPS] = BIT(6),
+ [CLKON_HPS] = BIT(7),
+ [CLKON_DPS] = BIT(8),
+ [RX_HPS_CMDQS] = BIT(9),
+ [HPS_DPS_CMDQS] = BIT(10),
+ [DPS_TX_CMDQS] = BIT(11),
+ [RSRC_MNGR] = BIT(12),
+ [CTX_HANDLER] = BIT(13),
+ [ACK_MNGR] = BIT(14),
+ [D_DCPH] = BIT(15),
+ [H_DCPH] = BIT(16),
+ /* Bit 17 reserved */
+ [NTF_TX_CMDQS] = BIT(18),
+ [CLKON_TX_0] = BIT(19),
+ [CLKON_TX_1] = BIT(20),
+ [CLKON_FNR] = BIT(21),
+ [QSB2AXI_CMDQ_L] = BIT(22),
+ [AGGR_WRAPPER] = BIT(23),
+ [RAM_SLAVEWAY] = BIT(24),
+ [CLKON_QMB] = BIT(25),
+ [WEIGHT_ARB] = BIT(26),
+ [GSI_IF] = BIT(27),
+ [CLKON_GLOBAL] = BIT(28),
+ [GLOBAL_2X_CLK] = BIT(29),
+ [DPL_FIFO] = BIT(30),
+ [DRBIP] = BIT(31),
+};
+
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000034);
+
+static const u32 reg_route_fmask[] = {
+ [ROUTE_DEF_PIPE] = GENMASK(7, 0),
+ [ROUTE_FRAG_DEF_PIPE] = GENMASK(15, 8),
+ [ROUTE_DEF_HDR_OFST] = GENMASK(25, 16),
+ [ROUTE_DEF_HDR_TABLE] = BIT(26),
+ [ROUTE_DEF_RETAIN_HDR] = BIT(27),
+ [ROUTE_DIS] = BIT(28),
+ /* Bits 29-31 reserved */
+};
+
+REG_FIELDS(ROUTE, route, 0x00000038);
+
+static const u32 reg_shared_mem_size_fmask[] = {
+ [MEM_SIZE] = GENMASK(15, 0),
+ [MEM_BADDR] = GENMASK(31, 16),
+};
+
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000040);
+
+static const u32 reg_qsb_max_writes_fmask[] = {
+ [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
+ [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
+ /* Bits 8-31 reserved */
+};
+
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000054);
+
+static const u32 reg_qsb_max_reads_fmask[] = {
+ [GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
+ [GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
+ /* Bits 8-15 reserved */
+ [GEN_QMB_0_MAX_READS_BEATS] = GENMASK(23, 16),
+ [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
+};
+
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000058);
+
+/* Valid bits defined by ipa->available */
+
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x00000100, 0x0004);
+
+static const u32 reg_filt_rout_cache_flush_fmask[] = {
+ [ROUTER_CACHE] = BIT(0),
+ /* Bits 1-3 reserved */
+ [FILTER_CACHE] = BIT(4),
+ /* Bits 5-31 reserved */
+};
+
+REG_FIELDS(FILT_ROUT_CACHE_FLUSH, filt_rout_cache_flush, 0x0000404);
+
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
+ [IPA_BASE_ADDR] = GENMASK(17, 0),
+ /* Bits 18-31 reserved */
+};
+
+/* Offset must be a multiple of 8 */
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x00000478);
+
+static const u32 reg_ipa_tx_cfg_fmask[] = {
+ /* Bits 0-1 reserved */
+ [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
+ [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
+ [DMAW_SCND_OUTSD_PRED_EN] = BIT(10),
+ [DMAW_MAX_BEATS_256_DIS] = BIT(11),
+ [PA_MASK_EN] = BIT(12),
+ [PREFETCH_ALMOST_EMPTY_SIZE_TX1] = GENMASK(16, 13),
+ [DUAL_TX_ENABLE] = BIT(17),
+ [SSPND_PA_NO_START_STATE] = BIT(18),
+ /* Bit 19 reserved */
+ [HOLB_STICKY_DROP_EN] = BIT(20),
+ /* Bits 21-31 reserved */
+};
+
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x00000488);
+
+static const u32 reg_idle_indication_cfg_fmask[] = {
+ [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
+ [CONST_NON_IDLE_ENABLE] = BIT(16),
+ /* Bits 17-31 reserved */
+};
+
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x000004a8);
+
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
+ [DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
+ /* Bits 5-6 reserved */
+ [DPL_TIMESTAMP_SEL] = BIT(7),
+ [TAG_TIMESTAMP_LSB] = GENMASK(12, 8),
+ /* Bits 13-15 reserved */
+ [NAT_TIMESTAMP_LSB] = GENMASK(20, 16),
+ /* Bits 21-31 reserved */
+};
+
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x000004ac);
+
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
+ [DIV_VALUE] = GENMASK(8, 0),
+ /* Bits 9-30 reserved */
+ [DIV_ENABLE] = BIT(31),
+};
+
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x000004b0);
+
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
+ [PULSE_GRAN_0] = GENMASK(2, 0),
+ [PULSE_GRAN_1] = GENMASK(5, 3),
+ [PULSE_GRAN_2] = GENMASK(8, 6),
+ [PULSE_GRAN_3] = GENMASK(11, 9),
+ /* Bits 12-31 reserved */
+};
+
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x000004b4);
+
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
+
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
+
+static const u32 reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
+ 0x00000508, 0x0020);
+
+static const u32 reg_src_rsrc_grp_67_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type,
+ 0x0000050c, 0x0020);
+
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000600, 0x0020);
+
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000604, 0x0020);
+
+static const u32 reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
+ 0x00000608, 0x0020);
+
+static const u32 reg_dst_rsrc_grp_67_rsrc_type_fmask[] = {
+ [X_MIN_LIM] = GENMASK(5, 0),
+ /* Bits 6-7 reserved */
+ [X_MAX_LIM] = GENMASK(13, 8),
+ /* Bits 14-15 reserved */
+ [Y_MIN_LIM] = GENMASK(21, 16),
+ /* Bits 22-23 reserved */
+ [Y_MAX_LIM] = GENMASK(29, 24),
+ /* Bits 30-31 reserved */
+};
+
+REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type,
+ 0x0000060c, 0x0020);
+
+/* Valid bits defined by ipa->available */
+
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000006b0, 0x0004);
+
+static const u32 reg_endp_init_cfg_fmask[] = {
+ [FRAG_OFFLOAD_EN] = BIT(0),
+ [CS_OFFLOAD_EN] = GENMASK(2, 1),
+ [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
+ /* Bit 7 reserved */
+ [CS_GEN_QMB_MASTER_SEL] = BIT(8),
+ /* Bits 9-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00001008, 0x0080);
+
+static const u32 reg_endp_init_nat_fmask[] = {
+ [NAT_EN] = GENMASK(1, 0),
+ /* Bits 2-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000100c, 0x0080);
+
+static const u32 reg_endp_init_hdr_fmask[] = {
+ [HDR_LEN] = GENMASK(5, 0),
+ [HDR_OFST_METADATA_VALID] = BIT(6),
+ [HDR_OFST_METADATA] = GENMASK(12, 7),
+ [HDR_ADDITIONAL_CONST_LEN] = GENMASK(18, 13),
+ [HDR_OFST_PKT_SIZE_VALID] = BIT(19),
+ [HDR_OFST_PKT_SIZE] = GENMASK(25, 20),
+ /* Bit 26 reserved */
+ [HDR_LEN_INC_DEAGG_HDR] = BIT(27),
+ [HDR_LEN_MSB] = GENMASK(29, 28),
+ [HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00001010, 0x0080);
+
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
+ [HDR_ENDIANNESS] = BIT(0),
+ [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
+ [HDR_TOTAL_LEN_OR_PAD] = BIT(2),
+ [HDR_PAYLOAD_LEN_INC_PADDING] = BIT(3),
+ [HDR_TOTAL_LEN_OR_PAD_OFFSET] = GENMASK(9, 4),
+ [HDR_PAD_TO_ALIGNMENT] = GENMASK(13, 10),
+ /* Bits 14-15 reserved */
+ [HDR_TOTAL_LEN_OR_PAD_OFFSET_MSB] = GENMASK(17, 16),
+ [HDR_OFST_PKT_SIZE_MSB] = GENMASK(19, 18),
+ [HDR_ADDITIONAL_CONST_LEN_MSB] = GENMASK(21, 20),
+ [HDR_BYTES_TO_REMOVE_VALID] = BIT(22),
+ /* Bit 23 reserved */
+ [HDR_BYTES_TO_REMOVE] = GENMASK(31, 24),
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00001014, 0x0080);
+
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00001018, 0x0080);
+
+static const u32 reg_endp_init_mode_fmask[] = {
+ [ENDP_MODE] = GENMASK(2, 0),
+ [DCPH_ENABLE] = BIT(3),
+ [DEST_PIPE_INDEX] = GENMASK(11, 4),
+ [BYTE_THRESHOLD] = GENMASK(27, 12),
+ [PIPE_REPLICATION_EN] = BIT(28),
+ [PAD_EN] = BIT(29),
+ [DRBIP_ACL_ENABLE] = BIT(30),
+ /* Bit 31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00001020, 0x0080);
+
+static const u32 reg_endp_init_aggr_fmask[] = {
+ [AGGR_EN] = GENMASK(1, 0),
+ [AGGR_TYPE] = GENMASK(4, 2),
+ [BYTE_LIMIT] = GENMASK(10, 5),
+ /* Bit 11 reserved */
+ [TIME_LIMIT] = GENMASK(16, 12),
+ [PKT_LIMIT] = GENMASK(22, 17),
+ [SW_EOF_ACTIVE] = BIT(23),
+ [FORCE_CLOSE] = BIT(24),
+ /* Bit 25 reserved */
+ [HARD_BYTE_LIMIT_EN] = BIT(26),
+ [AGGR_GRAN_SEL] = BIT(27),
+ /* Bits 28-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00001024, 0x0080);
+
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
+ [HOL_BLOCK_EN] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000102c, 0x0080);
+
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
+ [TIMER_LIMIT] = GENMASK(4, 0),
+ /* Bits 5-7 reserved */
+ [TIMER_GRAN_SEL] = GENMASK(9, 8),
+ /* Bits 10-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00001030, 0x0080);
+
+static const u32 reg_endp_init_deaggr_fmask[] = {
+ [DEAGGR_HDR_LEN] = GENMASK(5, 0),
+ [SYSPIPE_ERR_DETECTION] = BIT(6),
+ [PACKET_OFFSET_VALID] = BIT(7),
+ [PACKET_OFFSET_LOCATION] = GENMASK(13, 8),
+ [IGNORE_MIN_PKT_ERR] = BIT(14),
+ /* Bit 15 reserved */
+ [MAX_PACKET_LEN] = GENMASK(31, 16),
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00001034, 0x0080);
+
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
+ [ENDP_RSRC_GRP] = GENMASK(2, 0),
+ /* Bits 3-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00001038, 0x0080);
+
+static const u32 reg_endp_init_seq_fmask[] = {
+ [SEQ_TYPE] = GENMASK(7, 0),
+ /* Bits 8-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000103c, 0x0080);
+
+static const u32 reg_endp_status_fmask[] = {
+ [STATUS_EN] = BIT(0),
+ [STATUS_ENDP] = GENMASK(8, 1),
+ [STATUS_PKT_SUPPRESS] = BIT(9),
+ /* Bits 10-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00001040, 0x0080);
+
+static const u32 reg_endp_filter_cache_cfg_fmask[] = {
+ [CACHE_MSK_SRC_ID] = BIT(0),
+ [CACHE_MSK_SRC_IP] = BIT(1),
+ [CACHE_MSK_DST_IP] = BIT(2),
+ [CACHE_MSK_SRC_PORT] = BIT(3),
+ [CACHE_MSK_DST_PORT] = BIT(4),
+ [CACHE_MSK_PROTOCOL] = BIT(5),
+ [CACHE_MSK_METADATA] = BIT(6),
+ /* Bits 7-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_FILTER_CACHE_CFG, endp_filter_cache_cfg,
+ 0x0000105c, 0x0080);
+
+static const u32 reg_endp_router_cache_cfg_fmask[] = {
+ [CACHE_MSK_SRC_ID] = BIT(0),
+ [CACHE_MSK_SRC_IP] = BIT(1),
+ [CACHE_MSK_DST_IP] = BIT(2),
+ [CACHE_MSK_SRC_PORT] = BIT(3),
+ [CACHE_MSK_DST_PORT] = BIT(4),
+ [CACHE_MSK_PROTOCOL] = BIT(5),
+ [CACHE_MSK_METADATA] = BIT(6),
+ /* Bits 7-31 reserved */
+};
+
+REG_STRIDE_FIELDS(ENDP_ROUTER_CACHE_CFG, endp_router_cache_cfg,
+ 0x00001070, 0x0080);
+
+/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x0000c008 + 0x1000 * GSI_EE_AP);
+
+/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000c00c + 0x1000 * GSI_EE_AP);
+
+/* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x0000c010 + 0x1000 * GSI_EE_AP);
+
+static const u32 reg_ipa_irq_uc_fmask[] = {
+ [UC_INTR] = BIT(0),
+ /* Bits 1-31 reserved */
+};
+
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000c01c + 0x1000 * GSI_EE_AP);
+
+/* Valid bits defined by ipa->available */
+
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x0000c030 + 0x1000 * GSI_EE_AP, 0x0004);
+
+/* Valid bits defined by ipa->available */
+
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x0000c050 + 0x1000 * GSI_EE_AP, 0x0004);
+
+/* Valid bits defined by ipa->available */
+
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x0000c070 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_CACHE_FLUSH] = &reg_filt_rout_cache_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [SRC_RSRC_GRP_45_RSRC_TYPE] = &reg_src_rsrc_grp_45_rsrc_type,
+ [SRC_RSRC_GRP_67_RSRC_TYPE] = &reg_src_rsrc_grp_67_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_45_RSRC_TYPE] = &reg_dst_rsrc_grp_45_rsrc_type,
+ [DST_RSRC_GRP_67_RSRC_TYPE] = &reg_dst_rsrc_grp_67_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_CACHE_CFG] = &reg_endp_filter_cache_cfg,
+ [ENDP_ROUTER_CACHE_CFG] = &reg_endp_router_cache_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v5_0 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
+};
struct sk_buff_head bc_queue;
struct work_struct bc_work;
u32 bc_queue_len_used;
+ int bc_cutoff;
u32 flags;
int count;
struct hlist_head vlan_source_hash[MACVLAN_HASH_SIZE];
+ DECLARE_BITMAP(bc_filter, MACVLAN_MC_FILTER_SZ);
DECLARE_BITMAP(mc_filter, MACVLAN_MC_FILTER_SZ);
unsigned char perm_addr[ETH_ALEN];
};
}
}
+static void macvlan_multicast_rx(const struct macvlan_port *port,
+ const struct macvlan_dev *src,
+ struct sk_buff *skb)
+{
+ if (!src)
+ /* frame comes from an external address */
+ macvlan_broadcast(skb, port, NULL,
+ MACVLAN_MODE_PRIVATE |
+ MACVLAN_MODE_VEPA |
+ MACVLAN_MODE_PASSTHRU|
+ MACVLAN_MODE_BRIDGE);
+ else if (src->mode == MACVLAN_MODE_VEPA)
+ /* flood to everyone except source */
+ macvlan_broadcast(skb, port, src->dev,
+ MACVLAN_MODE_VEPA |
+ MACVLAN_MODE_BRIDGE);
+ else
+ /*
+ * flood only to VEPA ports, bridge ports
+ * already saw the frame on the way out.
+ */
+ macvlan_broadcast(skb, port, src->dev,
+ MACVLAN_MODE_VEPA);
+}
+
static void macvlan_process_broadcast(struct work_struct *w)
{
struct macvlan_port *port = container_of(w, struct macvlan_port,
const struct macvlan_dev *src = MACVLAN_SKB_CB(skb)->src;
rcu_read_lock();
-
- if (!src)
- /* frame comes from an external address */
- macvlan_broadcast(skb, port, NULL,
- MACVLAN_MODE_PRIVATE |
- MACVLAN_MODE_VEPA |
- MACVLAN_MODE_PASSTHRU|
- MACVLAN_MODE_BRIDGE);
- else if (src->mode == MACVLAN_MODE_VEPA)
- /* flood to everyone except source */
- macvlan_broadcast(skb, port, src->dev,
- MACVLAN_MODE_VEPA |
- MACVLAN_MODE_BRIDGE);
- else
- /*
- * flood only to VEPA ports, bridge ports
- * already saw the frame on the way out.
- */
- macvlan_broadcast(skb, port, src->dev,
- MACVLAN_MODE_VEPA);
-
+ macvlan_multicast_rx(port, src, skb);
rcu_read_unlock();
if (src)
}
hash = mc_hash(NULL, eth->h_dest);
- if (test_bit(hash, port->mc_filter))
+ if (test_bit(hash, port->bc_filter))
macvlan_broadcast_enqueue(port, src, skb);
+ else if (test_bit(hash, port->mc_filter))
+ macvlan_multicast_rx(port, src, skb);
return RX_HANDLER_PASS;
}
static void macvlan_compute_filter(unsigned long *mc_filter,
struct net_device *dev,
- struct macvlan_dev *vlan)
+ struct macvlan_dev *vlan, int cutoff)
{
if (dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) {
bitmap_fill(mc_filter, MACVLAN_MC_FILTER_SZ);
} else {
- struct netdev_hw_addr *ha;
DECLARE_BITMAP(filter, MACVLAN_MC_FILTER_SZ);
+ struct netdev_hw_addr *ha;
bitmap_zero(filter, MACVLAN_MC_FILTER_SZ);
netdev_for_each_mc_addr(ha, dev) {
+ if (!vlan && ha->synced <= cutoff)
+ continue;
+
__set_bit(mc_hash(vlan, ha->addr), filter);
}
}
}
+static void macvlan_recompute_bc_filter(struct macvlan_dev *vlan)
+{
+ if (vlan->port->bc_cutoff < 0) {
+ bitmap_zero(vlan->port->bc_filter, MACVLAN_MC_FILTER_SZ);
+ return;
+ }
+
+ macvlan_compute_filter(vlan->port->bc_filter, vlan->lowerdev, NULL,
+ vlan->port->bc_cutoff);
+}
+
static void macvlan_set_mac_lists(struct net_device *dev)
{
struct macvlan_dev *vlan = netdev_priv(dev);
- macvlan_compute_filter(vlan->mc_filter, dev, vlan);
+ macvlan_compute_filter(vlan->mc_filter, dev, vlan, 0);
dev_uc_sync(vlan->lowerdev, dev);
dev_mc_sync(vlan->lowerdev, dev);
* The solution is to maintain a list of broadcast addresses like
* we do for uc/mc, if you care.
*/
- macvlan_compute_filter(vlan->port->mc_filter, vlan->lowerdev, NULL);
+ macvlan_compute_filter(vlan->port->mc_filter, vlan->lowerdev, NULL,
+ 0);
+ macvlan_recompute_bc_filter(vlan);
+}
+
+static void update_port_bc_cutoff(struct macvlan_dev *vlan, int cutoff)
+{
+ if (vlan->port->bc_cutoff == cutoff)
+ return;
+
+ vlan->port->bc_cutoff = cutoff;
+ macvlan_recompute_bc_filter(vlan);
}
static int macvlan_change_mtu(struct net_device *dev, int new_mtu)
INIT_HLIST_HEAD(&port->vlan_source_hash[i]);
port->bc_queue_len_used = 0;
+ port->bc_cutoff = 1;
skb_queue_head_init(&port->bc_queue);
INIT_WORK(&port->bc_work, macvlan_process_broadcast);
if (data && data[IFLA_MACVLAN_BC_QUEUE_LEN])
vlan->bc_queue_len_req = nla_get_u32(data[IFLA_MACVLAN_BC_QUEUE_LEN]);
+ if (data && data[IFLA_MACVLAN_BC_CUTOFF])
+ update_port_bc_cutoff(
+ vlan, nla_get_s32(data[IFLA_MACVLAN_BC_CUTOFF]));
+
err = register_netdevice(dev);
if (err < 0)
goto destroy_macvlan_port;
update_port_bc_queue_len(vlan->port);
}
+ if (data && data[IFLA_MACVLAN_BC_CUTOFF])
+ update_port_bc_cutoff(
+ vlan, nla_get_s32(data[IFLA_MACVLAN_BC_CUTOFF]));
+
if (set_mode)
vlan->mode = mode;
if (data && data[IFLA_MACVLAN_MACADDR_MODE]) {
goto nla_put_failure;
if (nla_put_u32(skb, IFLA_MACVLAN_BC_QUEUE_LEN_USED, port->bc_queue_len_used))
goto nla_put_failure;
+ if (port->bc_cutoff != 1 &&
+ nla_put_s32(skb, IFLA_MACVLAN_BC_CUTOFF, port->bc_cutoff))
+ goto nla_put_failure;
return 0;
nla_put_failure:
return true;
}
- if (!of_find_property(child, "compatible", NULL))
+ if (!of_property_present(child, "compatible"))
return true;
return false;
/* auto scan for PHYs with empty reg property */
for_each_available_child_of_node(np, child) {
/* Skip PHYs with reg property set */
- if (of_find_property(child, "reg", NULL))
+ if (of_property_present(child, "reg"))
continue;
for (addr = 0; addr < PHY_MAX_ADDR; addr++) {
This module provides helpers to phylink for managing the Lynx PCS
which is part of the Layerscape and QorIQ Ethernet SERDES.
+config PCS_MTK_LYNXI
+ tristate
+ select REGMAP
+ help
+ This module provides helpers to phylink for managing the LynxI PCS
+ which is part of MediaTek's SoC and Ethernet switch ICs.
+
config PCS_RZN1_MIIC
tristate "Renesas RZ/N1 MII converter"
depends on OF && (ARCH_RZN1 || COMPILE_TEST)
obj-$(CONFIG_PCS_XPCS) += pcs_xpcs.o
obj-$(CONFIG_PCS_LYNX) += pcs-lynx.o
+obj-$(CONFIG_PCS_MTK_LYNXI) += pcs-mtk-lynxi.o
obj-$(CONFIG_PCS_RZN1_MIIC) += pcs-rzn1-miic.o
obj-$(CONFIG_PCS_ALTERA_TSE) += pcs-altera-tse.o
}
dev_dbg(&lynx->mdio->dev,
- "mode=%s/%s/%s link=%u an_enabled=%u an_complete=%u\n",
+ "mode=%s/%s/%s link=%u an_complete=%u\n",
phy_modes(state->interface),
phy_speed_to_str(state->speed),
phy_duplex_to_str(state->duplex),
- state->link, state->an_enabled, state->an_complete);
+ state->link, state->an_complete);
}
static int lynx_pcs_config_giga(struct mdio_device *pcs, unsigned int mode,
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018-2019 MediaTek Inc.
+/* A library for MediaTek SGMII circuit
+ *
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ * Author: Alexander Couzens <lynxis@fe80.eu>
+ * Author: Daniel Golle <daniel@makrotopia.org>
+ *
+ */
+
+#include <linux/mdio.h>
+#include <linux/of.h>
+#include <linux/pcs/pcs-mtk-lynxi.h>
+#include <linux/phylink.h>
+#include <linux/regmap.h>
+
+/* SGMII subsystem config registers */
+/* BMCR (low 16) BMSR (high 16) */
+#define SGMSYS_PCS_CONTROL_1 0x0
+#define SGMII_BMCR GENMASK(15, 0)
+#define SGMII_BMSR GENMASK(31, 16)
+
+#define SGMSYS_PCS_DEVICE_ID 0x4
+#define SGMII_LYNXI_DEV_ID 0x4d544950
+
+#define SGMSYS_PCS_ADVERTISE 0x8
+#define SGMII_ADVERTISE GENMASK(15, 0)
+#define SGMII_LPA GENMASK(31, 16)
+
+#define SGMSYS_PCS_SCRATCH 0x14
+#define SGMII_DEV_VERSION GENMASK(31, 16)
+
+/* Register for the programmable link timer; the unit is 2 * 8 ns */
+#define SGMSYS_PCS_LINK_TIMER 0x18
+#define SGMII_LINK_TIMER_MASK GENMASK(19, 0)
+#define SGMII_LINK_TIMER_VAL(ns) FIELD_PREP(SGMII_LINK_TIMER_MASK, \
+ ((ns) / 2 / 8))
+
+/* Register to control remote fault */
+#define SGMSYS_SGMII_MODE 0x20
+#define SGMII_IF_MODE_SGMII BIT(0)
+#define SGMII_SPEED_DUPLEX_AN BIT(1)
+#define SGMII_SPEED_MASK GENMASK(3, 2)
+#define SGMII_SPEED_10 FIELD_PREP(SGMII_SPEED_MASK, 0)
+#define SGMII_SPEED_100 FIELD_PREP(SGMII_SPEED_MASK, 1)
+#define SGMII_SPEED_1000 FIELD_PREP(SGMII_SPEED_MASK, 2)
+#define SGMII_DUPLEX_HALF BIT(4)
+#define SGMII_REMOTE_FAULT_DIS BIT(8)
+
+/* Register to reset SGMII design */
+#define SGMSYS_RESERVED_0 0x34
+#define SGMII_SW_RESET BIT(0)
+
+/* Register to set SGMII speed, ANA RG_ Control Signals III */
+#define SGMII_PHY_SPEED_MASK GENMASK(3, 2)
+#define SGMII_PHY_SPEED_1_25G FIELD_PREP(SGMII_PHY_SPEED_MASK, 0)
+#define SGMII_PHY_SPEED_3_125G FIELD_PREP(SGMII_PHY_SPEED_MASK, 1)
+
+/* Register to power up QPHY */
+#define SGMSYS_QPHY_PWR_STATE_CTRL 0xe8
+#define SGMII_PHYA_PWD BIT(4)
+
+/* Register for QPHY wrapper control */
+#define SGMSYS_QPHY_WRAP_CTRL 0xec
+#define SGMII_PN_SWAP_MASK GENMASK(1, 0)
+#define SGMII_PN_SWAP_TX_RX (BIT(0) | BIT(1))
+
+/* struct mtk_pcs_lynxi - This structure holds each SGMII regmap and
+ * associated data
+ * @regmap: The register map pointing at the range used to set up
+ * SGMII modes
+ * @ana_rgc3: The offset of register ANA_RGC3 relative to regmap
+ * @interface: Currently configured interface mode
+ * @pcs: Phylink PCS structure
+ * @flags: Flags indicating hardware properties
+ */
+struct mtk_pcs_lynxi {
+ struct regmap *regmap;
+ u32 ana_rgc3;
+ phy_interface_t interface;
+ struct phylink_pcs pcs;
+ u32 flags;
+};
+
+static struct mtk_pcs_lynxi *pcs_to_mtk_pcs_lynxi(struct phylink_pcs *pcs)
+{
+ return container_of(pcs, struct mtk_pcs_lynxi, pcs);
+}
+
+static void mtk_pcs_lynxi_get_state(struct phylink_pcs *pcs,
+ struct phylink_link_state *state)
+{
+ struct mtk_pcs_lynxi *mpcs = pcs_to_mtk_pcs_lynxi(pcs);
+ unsigned int bm, adv;
+
+ /* Read the BMSR and LPA */
+ regmap_read(mpcs->regmap, SGMSYS_PCS_CONTROL_1, &bm);
+ regmap_read(mpcs->regmap, SGMSYS_PCS_ADVERTISE, &adv);
+
+ phylink_mii_c22_pcs_decode_state(state, FIELD_GET(SGMII_BMSR, bm),
+ FIELD_GET(SGMII_LPA, adv));
+}
+
+static int mtk_pcs_lynxi_config(struct phylink_pcs *pcs, unsigned int mode,
+ phy_interface_t interface,
+ const unsigned long *advertising,
+ bool permit_pause_to_mac)
+{
+ struct mtk_pcs_lynxi *mpcs = pcs_to_mtk_pcs_lynxi(pcs);
+ bool mode_changed = false, changed, use_an;
+ unsigned int rgc3, sgm_mode, bmcr;
+ int advertise, link_timer;
+
+ advertise = phylink_mii_c22_pcs_encode_advertisement(interface,
+ advertising);
+ if (advertise < 0)
+ return advertise;
+
+ /* Clearing IF_MODE_BIT0 switches the PCS to BASE-X mode, which
+ * we assume fixes its speed at bitrate = line rate (in
+ * other words, 1000Mbps or 2500Mbps).
+ */
+ if (interface == PHY_INTERFACE_MODE_SGMII) {
+ sgm_mode = SGMII_IF_MODE_SGMII;
+ if (phylink_autoneg_inband(mode)) {
+ sgm_mode |= SGMII_REMOTE_FAULT_DIS |
+ SGMII_SPEED_DUPLEX_AN;
+ use_an = true;
+ } else {
+ use_an = false;
+ }
+ } else if (phylink_autoneg_inband(mode)) {
+ /* 1000base-X or 2500base-X autoneg */
+ sgm_mode = SGMII_REMOTE_FAULT_DIS;
+ use_an = linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ advertising);
+ } else {
+ /* 1000base-X or 2500base-X without autoneg */
+ sgm_mode = 0;
+ use_an = false;
+ }
+
+ if (use_an)
+ bmcr = BMCR_ANENABLE;
+ else
+ bmcr = 0;
+
+ if (mpcs->interface != interface) {
+ link_timer = phylink_get_link_timer_ns(interface);
+ if (link_timer < 0)
+ return link_timer;
+
+ /* PHYA power down */
+ regmap_set_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL,
+ SGMII_PHYA_PWD);
+
+ /* Reset SGMII PCS state */
+ regmap_set_bits(mpcs->regmap, SGMSYS_RESERVED_0,
+ SGMII_SW_RESET);
+
+ if (mpcs->flags & MTK_SGMII_FLAG_PN_SWAP)
+ regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_WRAP_CTRL,
+ SGMII_PN_SWAP_MASK,
+ SGMII_PN_SWAP_TX_RX);
+
+ if (interface == PHY_INTERFACE_MODE_2500BASEX)
+ rgc3 = SGMII_PHY_SPEED_3_125G;
+ else
+ rgc3 = SGMII_PHY_SPEED_1_25G;
+
+ /* Configure the underlying interface speed */
+ regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3,
+ SGMII_PHY_SPEED_MASK, rgc3);
+
+ /* Setup the link timer */
+ regmap_write(mpcs->regmap, SGMSYS_PCS_LINK_TIMER,
+ SGMII_LINK_TIMER_VAL(link_timer));
+
+ mpcs->interface = interface;
+ mode_changed = true;
+ }
+
+ /* Update the advertisement, noting whether it has changed */
+ regmap_update_bits_check(mpcs->regmap, SGMSYS_PCS_ADVERTISE,
+ SGMII_ADVERTISE, advertise, &changed);
+
+ /* Update the sgmsys mode register */
+ regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,
+ SGMII_REMOTE_FAULT_DIS | SGMII_SPEED_DUPLEX_AN |
+ SGMII_IF_MODE_SGMII, sgm_mode);
+
+ /* Update the BMCR */
+ regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1,
+ BMCR_ANENABLE, bmcr);
+
+ /* Release PHYA power down state
+ * Only removing bit SGMII_PHYA_PWD isn't enough.
+ * There are cases when the SGMII_PHYA_PWD register contains 0x9 which
+ * prevents SGMII from working. The SGMII still shows link but no traffic
+ * can flow. Writing 0x0 to the PHYA_PWD register fixes the issue. 0x0 was
+ * taken from a good working state of the SGMII interface.
+ * It is unknown how much settling time the QPHY needs, but the sequence is
+ * racy without a sleep.
+ * Tested on mt7622 & mt7986.
+ */
+ usleep_range(50, 100);
+ regmap_write(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 0);
+
+ return changed || mode_changed;
+}
+
+static void mtk_pcs_lynxi_restart_an(struct phylink_pcs *pcs)
+{
+ struct mtk_pcs_lynxi *mpcs = pcs_to_mtk_pcs_lynxi(pcs);
+
+ regmap_set_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1, BMCR_ANRESTART);
+}
+
+static void mtk_pcs_lynxi_link_up(struct phylink_pcs *pcs, unsigned int mode,
+ phy_interface_t interface, int speed,
+ int duplex)
+{
+ struct mtk_pcs_lynxi *mpcs = pcs_to_mtk_pcs_lynxi(pcs);
+ unsigned int sgm_mode;
+
+ if (!phylink_autoneg_inband(mode)) {
+ /* Force the speed and duplex setting */
+ if (speed == SPEED_10)
+ sgm_mode = SGMII_SPEED_10;
+ else if (speed == SPEED_100)
+ sgm_mode = SGMII_SPEED_100;
+ else
+ sgm_mode = SGMII_SPEED_1000;
+
+ if (duplex != DUPLEX_FULL)
+ sgm_mode |= SGMII_DUPLEX_HALF;
+
+ regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,
+ SGMII_DUPLEX_HALF | SGMII_SPEED_MASK,
+ sgm_mode);
+ }
+}
+
+static const struct phylink_pcs_ops mtk_pcs_lynxi_ops = {
+ .pcs_get_state = mtk_pcs_lynxi_get_state,
+ .pcs_config = mtk_pcs_lynxi_config,
+ .pcs_an_restart = mtk_pcs_lynxi_restart_an,
+ .pcs_link_up = mtk_pcs_lynxi_link_up,
+};
+
+struct phylink_pcs *mtk_pcs_lynxi_create(struct device *dev,
+ struct regmap *regmap, u32 ana_rgc3,
+ u32 flags)
+{
+ struct mtk_pcs_lynxi *mpcs;
+ u32 id, ver;
+ int ret;
+
+ ret = regmap_read(regmap, SGMSYS_PCS_DEVICE_ID, &id);
+ if (ret < 0)
+ return NULL;
+
+ if (id != SGMII_LYNXI_DEV_ID) {
+ dev_err(dev, "unknown PCS device id %08x\n", id);
+ return NULL;
+ }
+
+ ret = regmap_read(regmap, SGMSYS_PCS_SCRATCH, &ver);
+ if (ret < 0)
+ return NULL;
+
+ ver = FIELD_GET(SGMII_DEV_VERSION, ver);
+ if (ver != 0x1) {
+ dev_err(dev, "unknown PCS device version %04x\n", ver);
+ return NULL;
+ }
+
+ dev_dbg(dev, "MediaTek LynxI SGMII PCS (id 0x%08x, ver 0x%04x)\n", id,
+ ver);
+
+ mpcs = kzalloc(sizeof(*mpcs), GFP_KERNEL);
+ if (!mpcs)
+ return NULL;
+
+ mpcs->ana_rgc3 = ana_rgc3;
+ mpcs->regmap = regmap;
+ mpcs->flags = flags;
+ mpcs->pcs.ops = &mtk_pcs_lynxi_ops;
+ mpcs->pcs.poll = true;
+ mpcs->interface = PHY_INTERFACE_MODE_NA;
+
+ return &mpcs->pcs;
+}
+EXPORT_SYMBOL(mtk_pcs_lynxi_create);
+
+void mtk_pcs_lynxi_destroy(struct phylink_pcs *pcs)
+{
+ if (!pcs)
+ return;
+
+ kfree(pcs_to_mtk_pcs_lynxi(pcs));
+}
+EXPORT_SYMBOL(mtk_pcs_lynxi_destroy);
+
+MODULE_LICENSE("GPL");
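
As an aside (a hedged usage sketch, not code from this series): a MAC driver consuming this library would typically call mtk_pcs_lynxi_create(dev, regmap, ana_rgc3, flags) at probe time, hand the returned phylink_pcs to phylink from its mac_select_pcs() callback, and release it with mtk_pcs_lynxi_destroy() on remove. The example_priv structure and its field names below are placeholders, not the real consumer's fields:

#include <linux/pcs/pcs-mtk-lynxi.h>
#include <linux/phylink.h>

struct example_priv {
	struct phylink_config phylink_config;
	struct phylink_pcs *sgmii_pcs;	/* from mtk_pcs_lynxi_create() */
};

static struct phylink_pcs *
example_mac_select_pcs(struct phylink_config *config, phy_interface_t interface)
{
	struct example_priv *priv =
		container_of(config, struct example_priv, phylink_config);

	switch (interface) {
	case PHY_INTERFACE_MODE_SGMII:
	case PHY_INTERFACE_MODE_1000BASEX:
	case PHY_INTERFACE_MODE_2500BASEX:
		return priv->sgmii_pcs;
	default:
		return NULL;
	}
}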
return 0;
}
-static int xpcs_read_link_c73(struct dw_xpcs *xpcs, bool an)
+static int xpcs_read_link_c73(struct dw_xpcs *xpcs)
{
bool link = true;
int ret;
if (!(ret & MDIO_STAT1_LSTATUS))
link = false;
- if (an) {
- ret = xpcs_read(xpcs, MDIO_MMD_AN, MDIO_STAT1);
- if (ret < 0)
- return ret;
-
- if (!(ret & MDIO_STAT1_LSTATUS))
- link = false;
- }
-
return link;
}
struct phylink_link_state *state,
const struct xpcs_compat *compat)
{
+ bool an_enabled;
int ret;
/* Link needs to be read first ... */
- state->link = xpcs_read_link_c73(xpcs, state->an_enabled) > 0 ? 1 : 0;
+ state->link = xpcs_read_link_c73(xpcs) > 0 ? 1 : 0;
/* ... and then we check the faults. */
ret = xpcs_read_fault_c73(xpcs, state);
return xpcs_do_config(xpcs, state->interface, MLO_AN_INBAND, NULL);
}
- if (state->an_enabled && xpcs_aneg_done_c73(xpcs, state, compat)) {
+ an_enabled = linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising);
+ if (an_enabled && xpcs_aneg_done_c73(xpcs, state, compat)) {
state->an_complete = true;
xpcs_read_lpa_c73(xpcs, state);
xpcs_resolve_lpa_c73(xpcs, state);
- } else if (state->an_enabled) {
+ } else if (an_enabled) {
state->link = 0;
} else if (state->link) {
xpcs_resolve_pma(xpcs, state);
{
int lpa, bmsr;
- if (state->an_enabled) {
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising)) {
/* Reset link state */
state->link = false;
config MESON_GXL_PHY
tristate "Amlogic Meson GXL Internal PHY"
depends on ARCH_MESON || COMPILE_TEST
+ select SMSC_PHY
help
Currently has a driver for the Amlogic Meson GXL Internal PHY
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/ethtool_netlink.h>
-#include <linux/of_gpio.h>
#include <linux/bitfield.h>
-#include <linux/gpio/consumer.h>
#include <linux/regulator/of_regulator.h>
#include <linux/regulator/driver.h>
#include <linux/regulator/consumer.h>
+#include <linux/of.h>
#include <linux/phylink.h>
#include <linux/sfp.h>
#include <dt-bindings/net/qca-ar803x.h>
struct bcm7xxx_phy_priv {
u64 *stats;
- struct clk *clk;
};
static int bcm7xxx_28nm_d0_afe_config_init(struct phy_device *phydev)
static int bcm7xxx_28nm_probe(struct phy_device *phydev)
{
struct bcm7xxx_phy_priv *priv;
+ struct clk *clk;
int ret = 0;
priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
if (!priv->stats)
return -ENOMEM;
- priv->clk = devm_clk_get_optional(&phydev->mdio.dev, NULL);
- if (IS_ERR(priv->clk))
- return PTR_ERR(priv->clk);
-
- ret = clk_prepare_enable(priv->clk);
- if (ret)
- return ret;
+ clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
/* Dummy read to a register to workaround an issue upon reset where the
* internal inverter may not allow the first MDIO transaction to pass
return ret;
}
-static void bcm7xxx_28nm_remove(struct phy_device *phydev)
-{
- struct bcm7xxx_phy_priv *priv = phydev->priv;
-
- clk_disable_unprepare(priv->clk);
-}
-
#define BCM7XXX_28NM_GPHY(_oui, _name) \
{ \
.phy_id = (_oui), \
.get_strings = bcm_phy_get_strings, \
.get_stats = bcm7xxx_28nm_get_phy_stats, \
.probe = bcm7xxx_28nm_probe, \
- .remove = bcm7xxx_28nm_remove, \
}
#define BCM7XXX_28NM_EPHY(_oui, _name) \
.get_strings = bcm_phy_get_strings, \
.get_stats = bcm7xxx_28nm_get_phy_stats, \
.probe = bcm7xxx_28nm_probe, \
- .remove = bcm7xxx_28nm_remove, \
.read_mmd = bcm7xxx_28nm_ephy_read_mmd, \
.write_mmd = bcm7xxx_28nm_ephy_write_mmd, \
}
/* PHY_BASIC_FEATURES */ \
.flags = PHY_IS_INTERNAL, \
.probe = bcm7xxx_28nm_probe, \
- .remove = bcm7xxx_28nm_remove, \
.config_init = bcm7xxx_16nm_ephy_config_init, \
.config_aneg = genphy_config_aneg, \
.read_status = genphy_read_status, \
}
#endif /* CONFIG_OF_MDIO */
+static int dp83867_suspend(struct phy_device *phydev)
+{
+ /* Disable PHY Interrupts */
+ if (phy_interrupt_is_valid(phydev)) {
+ phydev->interrupts = PHY_INTERRUPT_DISABLED;
+ dp83867_config_intr(phydev);
+ }
+
+ return genphy_suspend(phydev);
+}
+
+static int dp83867_resume(struct phy_device *phydev)
+{
+ /* Enable PHY Interrupts */
+ if (phy_interrupt_is_valid(phydev)) {
+ phydev->interrupts = PHY_INTERRUPT_ENABLED;
+ dp83867_config_intr(phydev);
+ }
+
+ genphy_resume(phydev);
+
+ return 0;
+}
+
static int dp83867_probe(struct phy_device *phydev)
{
struct dp83867_private *dp83867;
.config_intr = dp83867_config_intr,
.handle_interrupt = dp83867_handle_interrupt,
- .suspend = genphy_suspend,
- .resume = genphy_resume,
+ .suspend = dp83867_suspend,
+ .resume = dp83867_resume,
.link_change_notify = dp83867_link_change_notify,
.set_loopback = dp83867_loopback,
#include <linux/phy.h>
#include <linux/netdevice.h>
#include <linux/bitfield.h>
+#include <linux/smscphy.h>
#define TSTCNTL 20
#define TSTCNTL_READ BIT(15)
#define TSTCNTL_WRITE_ADDRESS GENMASK(4, 0)
#define TSTREAD1 21
#define TSTWRITE 23
-#define INTSRC_FLAG 29
-#define INTSRC_ANEG_PR BIT(1)
-#define INTSRC_PARALLEL_FAULT BIT(2)
-#define INTSRC_ANEG_LP_ACK BIT(3)
-#define INTSRC_LINK_DOWN BIT(4)
-#define INTSRC_REMOTE_FAULT BIT(5)
-#define INTSRC_ANEG_COMPLETE BIT(6)
-#define INTSRC_ENERGY_DETECT BIT(7)
-#define INTSRC_MASK 30
-
-#define INT_SOURCES (INTSRC_LINK_DOWN | INTSRC_ANEG_COMPLETE | \
- INTSRC_ENERGY_DETECT)
#define BANK_ANALOG_DSP 0
#define BANK_WOL 1
return genphy_read_status(phydev);
}
-static int meson_gxl_ack_interrupt(struct phy_device *phydev)
-{
- int ret = phy_read(phydev, INTSRC_FLAG);
-
- return ret < 0 ? ret : 0;
-}
-
-static int meson_gxl_config_intr(struct phy_device *phydev)
-{
- int ret;
-
- if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
- /* Ack any pending IRQ */
- ret = meson_gxl_ack_interrupt(phydev);
- if (ret)
- return ret;
-
- ret = phy_write(phydev, INTSRC_MASK, INT_SOURCES);
- } else {
- ret = phy_write(phydev, INTSRC_MASK, 0);
-
- /* Ack any pending IRQ */
- ret = meson_gxl_ack_interrupt(phydev);
- }
-
- return ret;
-}
-
-static irqreturn_t meson_gxl_handle_interrupt(struct phy_device *phydev)
-{
- int irq_status;
-
- irq_status = phy_read(phydev, INTSRC_FLAG);
- if (irq_status < 0) {
- phy_error(phydev);
- return IRQ_NONE;
- }
-
- irq_status &= INT_SOURCES;
-
- if (irq_status == 0)
- return IRQ_NONE;
-
- /* Aneg-complete interrupt is used for link-up detection */
- if (phydev->autoneg == AUTONEG_ENABLE &&
- irq_status == INTSRC_ENERGY_DETECT)
- return IRQ_HANDLED;
-
- phy_trigger_machine(phydev);
-
- return IRQ_HANDLED;
-}
-
static struct phy_driver meson_gxl_phy[] = {
{
PHY_ID_MATCH_EXACT(0x01814400),
.soft_reset = genphy_soft_reset,
.config_init = meson_gxl_config_init,
.read_status = meson_gxl_read_status,
- .config_intr = meson_gxl_config_intr,
- .handle_interrupt = meson_gxl_handle_interrupt,
+ .config_intr = smsc_phy_config_intr,
+ .handle_interrupt = smsc_phy_handle_interrupt,
.suspend = genphy_suspend,
.resume = genphy_resume,
.read_mmd = genphy_read_mmd_unsupported,
.name = "Meson G12A Internal PHY",
/* PHY_BASIC_FEATURES */
.flags = PHY_IS_INTERNAL,
+ .probe = smsc_phy_probe,
+ .config_init = smsc_phy_config_init,
.soft_reset = genphy_soft_reset,
- .config_intr = meson_gxl_config_intr,
- .handle_interrupt = meson_gxl_handle_interrupt,
+ .read_status = lan87xx_read_status,
+ .config_intr = smsc_phy_config_intr,
+ .handle_interrupt = smsc_phy_handle_interrupt,
+
+ .get_tunable = smsc_phy_get_tunable,
+ .set_tunable = smsc_phy_set_tunable,
+
.suspend = genphy_suspend,
.resume = genphy_resume,
.read_mmd = genphy_read_mmd_unsupported,
struct ptp_clock_info ptp_clock_info;
/* Lock for ptp_clock */
struct mutex ptp_lock;
+ struct ptp_pin_desc *pin_config;
};
struct kszphy_priv {
if (err)
return err;
- temp = KSZPHY_INTCS_ALL;
- err = phy_write(phydev, MII_KSZPHY_INTCS, temp);
+ err = phy_write(phydev, MII_KSZPHY_INTCS, KSZPHY_INTCS_ALL);
} else {
- temp = 0;
- err = phy_write(phydev, MII_KSZPHY_INTCS, temp);
+ err = phy_write(phydev, MII_KSZPHY_INTCS, 0);
if (err)
return err;
#define LAN8841_PTP_INT_STS_PTP_TX_TS_INT BIT(12)
#define LAN8841_PTP_INT_STS_PTP_RX_TS_OVRFL_INT BIT(9)
#define LAN8841_PTP_INT_STS_PTP_RX_TS_INT BIT(8)
+#define LAN8841_PTP_INT_STS_PTP_GPIO_CAP_INT BIT(2)
static void lan8841_ptp_flush_fifo(struct kszphy_ptp_priv *ptp_priv, bool egress)
{
phy_read_mmd(phydev, 2, LAN8841_PTP_INT_STS);
}
+#define LAN8841_PTP_GPIO_CAP_STS 506
+#define LAN8841_PTP_GPIO_SEL 327
+#define LAN8841_PTP_GPIO_SEL_GPIO_SEL(gpio) ((gpio) << 8)
+#define LAN8841_PTP_GPIO_RE_LTC_SEC_HI_CAP 498
+#define LAN8841_PTP_GPIO_RE_LTC_SEC_LO_CAP 499
+#define LAN8841_PTP_GPIO_RE_LTC_NS_HI_CAP 500
+#define LAN8841_PTP_GPIO_RE_LTC_NS_LO_CAP 501
+#define LAN8841_PTP_GPIO_FE_LTC_SEC_HI_CAP 502
+#define LAN8841_PTP_GPIO_FE_LTC_SEC_LO_CAP 503
+#define LAN8841_PTP_GPIO_FE_LTC_NS_HI_CAP 504
+#define LAN8841_PTP_GPIO_FE_LTC_NS_LO_CAP 505
+
+static void lan8841_gpio_process_cap(struct kszphy_ptp_priv *ptp_priv)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ struct ptp_clock_event ptp_event = {0};
+ int pin, ret, tmp;
+ s32 sec, nsec;
+
+ pin = ptp_find_pin_unlocked(ptp_priv->ptp_clock, PTP_PF_EXTTS, 0);
+ if (pin == -1)
+ return;
+
+ tmp = phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_CAP_STS);
+ if (tmp < 0)
+ return;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_GPIO_SEL,
+ LAN8841_PTP_GPIO_SEL_GPIO_SEL(pin));
+ if (ret)
+ return;
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ if (tmp & BIT(pin)) {
+ sec = phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_RE_LTC_SEC_HI_CAP);
+ sec <<= 16;
+ sec |= phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_RE_LTC_SEC_LO_CAP);
+
+ nsec = phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_RE_LTC_NS_HI_CAP) & 0x3fff;
+ nsec <<= 16;
+ nsec |= phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_RE_LTC_NS_LO_CAP);
+ } else {
+ sec = phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_FE_LTC_SEC_HI_CAP);
+ sec <<= 16;
+ sec |= phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_FE_LTC_SEC_LO_CAP);
+
+ nsec = phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_FE_LTC_NS_HI_CAP) & 0x3fff;
+ nsec <<= 16;
+ nsec |= phy_read_mmd(phydev, 2, LAN8841_PTP_GPIO_FE_LTC_NS_LO_CAP);
+ }
+ mutex_unlock(&ptp_priv->ptp_lock);
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_GPIO_SEL, 0);
+ if (ret)
+ return;
+
+ ptp_event.index = 0;
+ ptp_event.timestamp = ktime_set(sec, nsec);
+ ptp_event.type = PTP_CLOCK_EXTTS;
+ ptp_clock_event(ptp_priv->ptp_clock, &ptp_event);
+}
+
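Editor's note, an illustrative sketch rather than part of this patch: the PTP_CLOCK_EXTTS events queued above surface through the PHY's PTP clock character device, so once the pin is configured for PTP_PF_EXTTS a user-space reader can consume the captures with the standard ioctl/read interface. The "/dev/ptp0" path is a placeholder for whichever /dev/ptpN node the LAN8841 clock registers as.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

int main(void)
{
	struct ptp_extts_request req = {
		.index = 0,
		.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE,
	};
	struct ptp_extts_event ev;
	int fd = open("/dev/ptp0", O_RDWR);	/* placeholder PHC node */

	if (fd < 0 || ioctl(fd, PTP_EXTTS_REQUEST, &req))
		return 1;

	/* Each captured edge is delivered as one ptp_extts_event */
	while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
		printf("extts[%u] %lld.%09u\n", ev.index,
		       (long long)ev.t.sec, ev.t.nsec);

	close(fd);
	return 0;
}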
static void lan8841_handle_ptp_interrupt(struct phy_device *phydev)
{
struct kszphy_priv *priv = phydev->priv;
do {
status = phy_read_mmd(phydev, 2, LAN8841_PTP_INT_STS);
+
if (status & LAN8841_PTP_INT_STS_PTP_TX_TS_INT)
lan8841_ptp_process_tx_ts(ptp_priv);
if (status & LAN8841_PTP_INT_STS_PTP_RX_TS_INT)
lan8841_ptp_process_rx_ts(ptp_priv);
+ if (status & LAN8841_PTP_INT_STS_PTP_GPIO_CAP_INT)
+ lan8841_gpio_process_cap(ptp_priv);
+
if (status & LAN8841_PTP_INT_STS_PTP_TX_TS_OVRFL_INT) {
lan8841_ptp_flush_fifo(ptp_priv, true);
skb_queue_purge(&ptp_priv->tx_queue);
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ? -EFAULT : 0;
}
+#define LAN8841_EVENT_A 0
+#define LAN8841_EVENT_B 1
+#define LAN8841_PTP_LTC_TARGET_SEC_HI(event) ((event) == LAN8841_EVENT_A ? 278 : 288)
+#define LAN8841_PTP_LTC_TARGET_SEC_LO(event) ((event) == LAN8841_EVENT_A ? 279 : 289)
+#define LAN8841_PTP_LTC_TARGET_NS_HI(event) ((event) == LAN8841_EVENT_A ? 280 : 290)
+#define LAN8841_PTP_LTC_TARGET_NS_LO(event) ((event) == LAN8841_EVENT_A ? 281 : 291)
+
+static int lan8841_ptp_set_target(struct kszphy_ptp_priv *ptp_priv, u8 event,
+ s64 sec, u32 nsec)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_SEC_HI(event),
+ upper_16_bits(sec));
+ if (ret)
+ return ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_SEC_LO(event),
+ lower_16_bits(sec));
+ if (ret)
+ return ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_NS_HI(event) & 0x3fff,
+ upper_16_bits(nsec));
+ if (ret)
+ return ret;
+
+ return phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_NS_LO(event),
+ lower_16_bits(nsec));
+}
+
+#define LAN8841_BUFFER_TIME 2
+
+static int lan8841_ptp_update_target(struct kszphy_ptp_priv *ptp_priv,
+ const struct timespec64 *ts)
+{
+ return lan8841_ptp_set_target(ptp_priv, LAN8841_EVENT_A,
+ ts->tv_sec + LAN8841_BUFFER_TIME, ts->tv_nsec);
+}
+
+#define LAN8841_PTP_LTC_TARGET_RELOAD_SEC_HI(event) ((event) == LAN8841_EVENT_A ? 282 : 292)
+#define LAN8841_PTP_LTC_TARGET_RELOAD_SEC_LO(event) ((event) == LAN8841_EVENT_A ? 283 : 293)
+#define LAN8841_PTP_LTC_TARGET_RELOAD_NS_HI(event) ((event) == LAN8841_EVENT_A ? 284 : 294)
+#define LAN8841_PTP_LTC_TARGET_RELOAD_NS_LO(event) ((event) == LAN8841_EVENT_A ? 285 : 295)
+
+static int lan8841_ptp_set_reload(struct kszphy_ptp_priv *ptp_priv, u8 event,
+ s64 sec, u32 nsec)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_RELOAD_SEC_HI(event),
+ upper_16_bits(sec));
+ if (ret)
+ return ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_RELOAD_SEC_LO(event),
+ lower_16_bits(sec));
+ if (ret)
+ return ret;
+
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_RELOAD_NS_HI(event) & 0x3fff,
+ upper_16_bits(nsec));
+ if (ret)
+ return ret;
+
+ return phy_write_mmd(phydev, 2, LAN8841_PTP_LTC_TARGET_RELOAD_NS_LO(event),
+ lower_16_bits(nsec));
+}
+
#define LAN8841_PTP_LTC_SET_SEC_HI 262
#define LAN8841_PTP_LTC_SET_SEC_MID 263
#define LAN8841_PTP_LTC_SET_SEC_LO 264
struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
ptp_clock_info);
struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
/* Set the value to be stored */
mutex_lock(&ptp_priv->ptp_lock);
/* Set the command to load the LTC */
phy_write_mmd(phydev, 2, LAN8841_PTP_CMD_CTL,
LAN8841_PTP_CMD_CTL_PTP_LTC_LOAD);
+ ret = lan8841_ptp_update_target(ptp_priv, ts);
mutex_unlock(&ptp_priv->ptp_lock);
- return 0;
+ return ret;
}
#define LAN8841_PTP_LTC_RD_SEC_HI 358
bool add = true;
u32 nsec;
s32 sec;
+ int ret;
/* The HW allows up to 15 sec to adjust the time, but here we limit to
* 10 sec the adjustment. The reason is, in case the adjustment is 14
}
mutex_unlock(&ptp_priv->ptp_lock);
- return 0;
+ /* Update the target clock */
+ ptp->gettime64(ptp, &ts);
+ mutex_lock(&ptp_priv->ptp_lock);
+ ret = lan8841_ptp_update_target(ptp_priv, &ts);
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ return ret;
}
#define LAN8841_PTP_LTC_RATE_ADJ_HI 269
return 0;
}
+static int lan8841_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+{
+ switch (func) {
+ case PTP_PF_NONE:
+ case PTP_PF_PEROUT:
+ case PTP_PF_EXTTS:
+ break;
+ default:
+ return -1;
+ }
+
+ return 0;
+}
+
+#define LAN8841_PTP_GPIO_NUM 10
+#define LAN8841_GPIO_EN 128
+#define LAN8841_GPIO_DIR 129
+#define LAN8841_GPIO_BUF 130
+
+static int lan8841_ptp_perout_off(struct kszphy_ptp_priv *ptp_priv, int pin)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
+
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_EN, BIT(pin));
+ if (ret)
+ return ret;
+
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_DIR, BIT(pin));
+ if (ret)
+ return ret;
+
+ return phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_BUF, BIT(pin));
+}
+
+static int lan8841_ptp_perout_on(struct kszphy_ptp_priv *ptp_priv, int pin)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
+
+ ret = phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_EN, BIT(pin));
+ if (ret)
+ return ret;
+
+ ret = phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_DIR, BIT(pin));
+ if (ret)
+ return ret;
+
+ return phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_BUF, BIT(pin));
+}
+
+#define LAN8841_GPIO_DATA_SEL1 131
+#define LAN8841_GPIO_DATA_SEL2 132
+#define LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_MASK GENMASK(2, 0)
+#define LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_A 1
+#define LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_B 2
+#define LAN8841_PTP_GENERAL_CONFIG 257
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_A BIT(1)
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_B BIT(3)
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_A_MASK GENMASK(7, 4)
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_B_MASK GENMASK(11, 8)
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_A 4
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_B 7
+
+static int lan8841_ptp_remove_event(struct kszphy_ptp_priv *ptp_priv, int pin,
+ u8 event)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ u16 tmp;
+ int ret;
+
+ /* Now disconnect the pin from the event. GPIO_DATA_SEL1 covers GPIO
+ * pins 0-4 while GPIO_DATA_SEL2 covers GPIO pins 5-9, so a different
+ * register needs to be updated depending on the pin.
+ */
+ if (pin < 5) {
+ tmp = LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_MASK << (3 * pin);
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_DATA_SEL1, tmp);
+ } else {
+ tmp = LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_MASK << (3 * (pin - 5));
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_DATA_SEL2, tmp);
+ }
+ if (ret)
+ return ret;
+
+ /* Disable the event */
+ if (event == LAN8841_EVENT_A)
+ tmp = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_A |
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_A_MASK;
+ else
+ tmp = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_B |
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_B_MASK;
+ return phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_EN, tmp);
+}
+
+static int lan8841_ptp_enable_event(struct kszphy_ptp_priv *ptp_priv, int pin,
+ u8 event, int pulse_width)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ u16 tmp;
+ int ret;
+
+ /* Enable the event */
+ if (event == LAN8841_EVENT_A)
+ ret = phy_modify_mmd(phydev, 2, LAN8841_PTP_GENERAL_CONFIG,
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_A |
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_A_MASK,
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_A |
+ pulse_width << LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_A);
+ else
+ ret = phy_modify_mmd(phydev, 2, LAN8841_PTP_GENERAL_CONFIG,
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_B |
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_B_MASK,
+ LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_POL_B |
+ pulse_width << LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_B);
+ if (ret)
+ return ret;
+
+ /* Now connect the pin to the event. GPIO_DATA_SEL1 covers GPIO
+ * pins 0-4 while GPIO_DATA_SEL2 covers GPIO pins 5-9, so a different
+ * register needs to be updated depending on the pin.
+ */
+ if (event == LAN8841_EVENT_A)
+ tmp = LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_A;
+ else
+ tmp = LAN8841_GPIO_DATA_SEL_GPIO_DATA_SEL_EVENT_B;
+
+ if (pin < 5)
+ ret = phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_DATA_SEL1,
+ tmp << (3 * pin));
+ else
+ ret = phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_DATA_SEL2,
+ tmp << (3 * (pin - 5)));
+
+ return ret;
+}
+
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_200MS 13
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100MS 12
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_50MS 11
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_10MS 10
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_5MS 9
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_1MS 8
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_500US 7
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100US 6
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_50US 5
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_10US 4
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_5US 3
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_1US 2
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_500NS 1
+#define LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100NS 0
+
+static int lan8841_ptp_perout(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ struct phy_device *phydev = ptp_priv->phydev;
+ struct timespec64 ts_on, ts_period;
+ s64 on_nsec, period_nsec;
+ int pulse_width;
+ int pin;
+ int ret;
+
+ if (rq->perout.flags & ~PTP_PEROUT_DUTY_CYCLE)
+ return -EOPNOTSUPP;
+
+ pin = ptp_find_pin(ptp_priv->ptp_clock, PTP_PF_PEROUT, rq->perout.index);
+ if (pin == -1 || pin >= LAN8841_PTP_GPIO_NUM)
+ return -EINVAL;
+
+ if (!on) {
+ ret = lan8841_ptp_perout_off(ptp_priv, pin);
+ if (ret)
+ return ret;
+
+ return lan8841_ptp_remove_event(ptp_priv, pin, LAN8841_EVENT_A);
+ }
+
+ ts_on.tv_sec = rq->perout.on.sec;
+ ts_on.tv_nsec = rq->perout.on.nsec;
+ on_nsec = timespec64_to_ns(&ts_on);
+
+ ts_period.tv_sec = rq->perout.period.sec;
+ ts_period.tv_nsec = rq->perout.period.nsec;
+ period_nsec = timespec64_to_ns(&ts_period);
+
+ if (period_nsec < 200) {
+ pr_warn_ratelimited("%s: perout period too small, minimum is 200 nsec\n",
+ phydev_name(phydev));
+ return -EOPNOTSUPP;
+ }
+
+ if (on_nsec >= period_nsec) {
+ pr_warn_ratelimited("%s: pulse width must be smaller than period\n",
+ phydev_name(phydev));
+ return -EINVAL;
+ }
+
+ switch (on_nsec) {
+ case 200000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_200MS;
+ break;
+ case 100000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100MS;
+ break;
+ case 50000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_50MS;
+ break;
+ case 10000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_10MS;
+ break;
+ case 5000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_5MS;
+ break;
+ case 1000000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_1MS;
+ break;
+ case 500000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_500US;
+ break;
+ case 100000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100US;
+ break;
+ case 50000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_50US;
+ break;
+ case 10000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_10US;
+ break;
+ case 5000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_5US;
+ break;
+ case 1000:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_1US;
+ break;
+ case 500:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_500NS;
+ break;
+ case 100:
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100NS;
+ break;
+ default:
+ pr_warn_ratelimited("%s: Use default duty cycle of 100ns\n",
+ phydev_name(phydev));
+ pulse_width = LAN8841_PTP_GENERAL_CONFIG_LTC_EVENT_100NS;
+ break;
+ }
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ ret = lan8841_ptp_set_target(ptp_priv, LAN8841_EVENT_A, rq->perout.start.sec,
+ rq->perout.start.nsec);
+ mutex_unlock(&ptp_priv->ptp_lock);
+ if (ret)
+ return ret;
+
+ ret = lan8841_ptp_set_reload(ptp_priv, LAN8841_EVENT_A, rq->perout.period.sec,
+ rq->perout.period.nsec);
+ if (ret)
+ return ret;
+
+ ret = lan8841_ptp_enable_event(ptp_priv, pin, LAN8841_EVENT_A,
+ pulse_width);
+ if (ret)
+ return ret;
+
+ ret = lan8841_ptp_perout_on(ptp_priv, pin);
+ if (ret)
+ lan8841_ptp_remove_event(ptp_priv, pin, LAN8841_EVENT_A);
+
+ return ret;
+}
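
For context, the periodic output path above is exercised from user space through the PTP character device. The following is a minimal sketch, not part of the patch: the device node, pin index and channel number are placeholders, and a real caller would normally program a start time slightly in the future rather than zero. The 100 ms pulse width matches one of the discrete widths handled by the switch statement above.

/* Hedged user-space sketch: route perout channel 0 to pin 0 and request a
 * 1 s period with a 100 ms pulse via PTP_PEROUT_DUTY_CYCLE.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

int main(void)
{
	struct ptp_pin_desc pin = { .index = 0, .func = PTP_PF_PEROUT, .chan = 0 };
	struct ptp_perout_request req;
	int fd = open("/dev/ptp0", O_RDWR);	/* placeholder device node */

	if (fd < 0)
		return 1;

	if (ioctl(fd, PTP_PIN_SETFUNC2, &pin))	/* assign the perout function to pin 0 */
		perror("PTP_PIN_SETFUNC2");

	memset(&req, 0, sizeof(req));
	req.index = 0;			/* perout channel */
	req.flags = PTP_PEROUT_DUTY_CYCLE;
	req.start.sec = 0;		/* a real caller would use PHC time + a few seconds */
	req.period.sec = 1;		/* 1 s period */
	req.on.nsec = 100000000;	/* 100 ms pulse, one of the supported widths */

	if (ioctl(fd, PTP_PEROUT_REQUEST2, &req))
		perror("PTP_PEROUT_REQUEST2");

	return 0;
}
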
+
+#define LAN8841_PTP_GPIO_CAP_EN 496
+#define LAN8841_PTP_GPIO_CAP_EN_GPIO_RE_CAPTURE_ENABLE(gpio) (BIT(gpio))
+#define LAN8841_PTP_GPIO_CAP_EN_GPIO_FE_CAPTURE_ENABLE(gpio) (BIT(gpio) << 8)
+#define LAN8841_PTP_INT_EN_PTP_GPIO_CAP_EN BIT(2)
+
+static int lan8841_ptp_extts_on(struct kszphy_ptp_priv *ptp_priv, int pin,
+ u32 flags)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ u16 tmp = 0;
+ int ret;
+
+ /* Set GPIO to be input */
+ ret = phy_set_bits_mmd(phydev, 2, LAN8841_GPIO_EN, BIT(pin));
+ if (ret)
+ return ret;
+
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_BUF, BIT(pin));
+ if (ret)
+ return ret;
+
+ /* Enable capture on the edges of the pin */
+ if (flags & PTP_RISING_EDGE)
+ tmp |= LAN8841_PTP_GPIO_CAP_EN_GPIO_RE_CAPTURE_ENABLE(pin);
+ if (flags & PTP_FALLING_EDGE)
+ tmp |= LAN8841_PTP_GPIO_CAP_EN_GPIO_FE_CAPTURE_ENABLE(pin);
+ ret = phy_write_mmd(phydev, 2, LAN8841_PTP_GPIO_CAP_EN, tmp);
+ if (ret)
+ return ret;
+
+ /* Enable interrupt */
+ return phy_modify_mmd(phydev, 2, LAN8841_PTP_INT_EN,
+ LAN8841_PTP_INT_EN_PTP_GPIO_CAP_EN,
+ LAN8841_PTP_INT_EN_PTP_GPIO_CAP_EN);
+}
+
+static int lan8841_ptp_extts_off(struct kszphy_ptp_priv *ptp_priv, int pin)
+{
+ struct phy_device *phydev = ptp_priv->phydev;
+ int ret;
+
+ /* Set GPIO to be output */
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_EN, BIT(pin));
+ if (ret)
+ return ret;
+
+ ret = phy_clear_bits_mmd(phydev, 2, LAN8841_GPIO_BUF, BIT(pin));
+ if (ret)
+ return ret;
+
+ /* Disable capture on both of the edges */
+ ret = phy_modify_mmd(phydev, 2, LAN8841_PTP_GPIO_CAP_EN,
+ LAN8841_PTP_GPIO_CAP_EN_GPIO_RE_CAPTURE_ENABLE(pin) |
+ LAN8841_PTP_GPIO_CAP_EN_GPIO_FE_CAPTURE_ENABLE(pin),
+ 0);
+ if (ret)
+ return ret;
+
+ /* Disable interrupt */
+ return phy_modify_mmd(phydev, 2, LAN8841_PTP_INT_EN,
+ LAN8841_PTP_INT_EN_PTP_GPIO_CAP_EN,
+ 0);
+}
+
+static int lan8841_ptp_extts(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct kszphy_ptp_priv *ptp_priv = container_of(ptp, struct kszphy_ptp_priv,
+ ptp_clock_info);
+ int pin;
+ int ret;
+
+ /* Reject requests with unsupported flags */
+ if (rq->extts.flags & ~(PTP_ENABLE_FEATURE |
+ PTP_EXTTS_EDGES |
+ PTP_STRICT_FLAGS))
+ return -EOPNOTSUPP;
+
+ pin = ptp_find_pin(ptp_priv->ptp_clock, PTP_PF_EXTTS, rq->extts.index);
+ if (pin == -1 || pin >= LAN8841_PTP_GPIO_NUM)
+ return -EINVAL;
+
+ mutex_lock(&ptp_priv->ptp_lock);
+ if (on)
+ ret = lan8841_ptp_extts_on(ptp_priv, pin, rq->extts.flags);
+ else
+ ret = lan8841_ptp_extts_off(ptp_priv, pin);
+ mutex_unlock(&ptp_priv->ptp_lock);
+
+ return ret;
+}
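
Similarly, external timestamping is armed from user space and the captured edges are read back as events. A minimal, hedged sketch follows; the device node and channel index are placeholders, and the pin is assumed to have been assigned PTP_PF_EXTTS beforehand (see the perout example).

/* Hedged user-space sketch: enable rising-edge capture on extts channel 0
 * and print the timestamps delivered by the driver.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

int main(void)
{
	struct ptp_extts_request req;
	struct ptp_extts_event ev;
	int fd = open("/dev/ptp0", O_RDWR);	/* placeholder device node */

	if (fd < 0)
		return 1;

	memset(&req, 0, sizeof(req));
	req.index = 0;
	req.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;

	if (ioctl(fd, PTP_EXTTS_REQUEST2, &req))
		perror("PTP_EXTTS_REQUEST2");

	/* Each captured edge arrives as a ptp_extts_event via read() */
	while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
		printf("extts[%u]: %lld.%09u\n", ev.index,
		       (long long)ev.t.sec, ev.t.nsec);

	return 0;
}
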
+
+static int lan8841_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ switch (rq->type) {
+ case PTP_CLK_REQ_EXTTS:
+ return lan8841_ptp_extts(ptp, rq, on);
+ case PTP_CLK_REQ_PEROUT:
+ return lan8841_ptp_perout(ptp, rq, on);
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
static struct ptp_clock_info lan8841_ptp_clock_info = {
.owner = THIS_MODULE,
.name = "lan8841 ptp",
.settime64 = lan8841_ptp_settime64,
.adjtime = lan8841_ptp_adjtime,
.adjfine = lan8841_ptp_adjfine,
+ .verify = lan8841_ptp_verify,
+ .enable = lan8841_ptp_enable,
+ .n_per_out = LAN8841_PTP_GPIO_NUM,
+ .n_ext_ts = LAN8841_PTP_GPIO_NUM,
+ .n_pins = LAN8841_PTP_GPIO_NUM,
};
#define LAN8841_OPERATION_MODE_STRAP_LOW_REGISTER 3
priv = phydev->priv;
ptp_priv = &priv->ptp_priv;
+ ptp_priv->pin_config = devm_kcalloc(&phydev->mdio.dev,
+ LAN8841_PTP_GPIO_NUM,
+ sizeof(*ptp_priv->pin_config),
+ GFP_KERNEL);
+ if (!ptp_priv->pin_config)
+ return -ENOMEM;
+
+ for (int i = 0; i < LAN8841_PTP_GPIO_NUM; ++i) {
+ struct ptp_pin_desc *p = &ptp_priv->pin_config[i];
+
+ snprintf(p->name, sizeof(p->name), "pin%d", i);
+ p->index = i;
+ p->func = PTP_PF_NONE;
+ }
+
ptp_priv->ptp_clock_info = lan8841_ptp_clock_info;
+ ptp_priv->ptp_clock_info.pin_config = ptp_priv->pin_config;
ptp_priv->ptp_clock = ptp_clock_register(&ptp_priv->ptp_clock_info,
&phydev->mdio.dev);
if (IS_ERR(ptp_priv->ptp_clock)) {
u8 fw_major;
u8 fw_minor;
+
+ /* The PHY needs 3 seconds to fully switch out of loopback mode
+ * before loopback can safely be re-entered. Record the time when
+ * loopback is disabled and, if necessary, wait before enabling it
+ * again.
+ */
+ u64 lb_dis_to;
};
static const struct {
static int gpy_loopback(struct phy_device *phydev, bool enable)
{
+ struct gpy_priv *priv = phydev->priv;
+ u16 set = 0;
int ret;
- ret = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK,
- enable ? BMCR_LOOPBACK : 0);
- if (!ret) {
- /* It takes some time for PHY device to switch
- * into/out-of loopback mode.
+ if (enable) {
+ u64 now = get_jiffies_64();
+
+ /* wait until 3 seconds have elapsed since the last disable */
+ if (time_before64(now, priv->lb_dis_to))
+ msleep(jiffies64_to_msecs(priv->lb_dis_to - now));
+
+ set = BMCR_LOOPBACK;
+ }
+
+ ret = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, set);
+ if (ret <= 0)
+ return ret;
+
+ if (enable) {
+ /* It takes some time for PHY device to switch into
+ * loopback mode.
*/
msleep(100);
+ } else {
+ priv->lb_dis_to = get_jiffies_64() + HZ * 3;
}
- return ret;
+ return 0;
}
static int gpy115_loopback(struct phy_device *phydev, bool enable)
mutex_unlock(&phydev->lock);
}
+static void phy_process_error(struct phy_device *phydev)
+{
+ mutex_lock(&phydev->lock);
+ phydev->state = PHY_HALTED;
+ mutex_unlock(&phydev->lock);
+
+ phy_trigger_machine(phydev);
+}
+
+static void phy_error_precise(struct phy_device *phydev,
+ const void *func, int err)
+{
+ WARN(1, "%pS: returned: %d\n", func, err);
+ phy_process_error(phydev);
+}
+
/**
* phy_error - enter HALTED state for this PHY device
* @phydev: target phy_device struct
void phy_error(struct phy_device *phydev)
{
WARN_ON(1);
-
- mutex_lock(&phydev->lock);
- phydev->state = PHY_HALTED;
- mutex_unlock(&phydev->lock);
-
- phy_trigger_machine(phydev);
+ phy_process_error(phydev);
}
EXPORT_SYMBOL(phy_error);
struct net_device *dev = phydev->attached_dev;
bool needs_aneg = false, do_suspend = false;
enum phy_state old_state;
+ const void *func = NULL;
bool finished = false;
int err = 0;
case PHY_NOLINK:
case PHY_RUNNING:
err = phy_check_link_status(phydev);
+ func = &phy_check_link_status;
break;
case PHY_CABLETEST:
err = phydev->drv->cable_test_get_status(phydev, &finished);
mutex_unlock(&phydev->lock);
- if (needs_aneg)
+ if (needs_aneg) {
err = phy_start_aneg(phydev);
- else if (do_suspend)
+ func = &phy_start_aneg;
+ } else if (do_suspend) {
phy_suspend(phydev);
+ }
if (err == -ENODEV)
return;
if (err < 0)
- phy_error(phydev);
+ phy_error_precise(phydev, func, err);
phy_process_state_change(phydev, old_state);
* phy_probe - probe and init a PHY device
* @dev: device to probe and init
*
- * Description: Take care of setting up the phy_device structure,
- * set the state to READY (the driver's init function should
- * set it to STARTING if needed).
+ * Take care of setting up the phy_device structure, set the state to READY.
*/
static int phy_probe(struct device *dev)
{
phylink_set(pl->supported, Autoneg);
phylink_set(pl->supported, Asym_Pause);
phylink_set(pl->supported, Pause);
- pl->link_config.an_enabled = true;
pl->cfg_link_an_mode = MLO_AN_INBAND;
switch (pl->link_config.interface) {
"failed to validate link configuration for in-band status\n");
return -EINVAL;
}
-
- /* Check if MAC/PCS also supports Autoneg. */
- pl->link_config.an_enabled = phylink_test(pl->supported, Autoneg);
}
return 0;
struct phylink_link_state *state)
{
/* If autoneg is disabled, pause AN is also disabled */
- if (!state->an_enabled)
+ if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising))
state->pause &= ~MLO_PAUSE_AN;
/* Manual configuration of pause modes */
const struct phylink_link_state *state)
{
phylink_dbg(pl,
- "%s: mode=%s/%s/%s/%s/%s adv=%*pb pause=%02x link=%u an=%u\n",
+ "%s: mode=%s/%s/%s/%s/%s adv=%*pb pause=%02x link=%u\n",
__func__, phylink_an_mode_str(pl->cur_link_an_mode),
phy_modes(state->interface),
phy_speed_to_str(state->speed),
phy_duplex_to_str(state->duplex),
phy_rate_matching_to_str(state->rate_matching),
__ETHTOOL_LINK_MODE_MASK_NBITS, state->advertising,
- state->pause, state->link, state->an_enabled);
+ state->pause, state->link);
pl->mac_ops->mac_config(pl->config, pl->cur_link_an_mode, state);
}
static void phylink_mac_pcs_an_restart(struct phylink *pl)
{
- if (pl->link_config.an_enabled &&
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ pl->link_config.advertising) &&
phy_interface_mode_is_8023z(pl->link_config.interface) &&
phylink_autoneg_inband(pl->cur_link_an_mode)) {
if (pl->pcs)
linkmode_copy(state->advertising, pl->link_config.advertising);
linkmode_zero(state->lp_advertising);
state->interface = pl->link_config.interface;
- state->an_enabled = pl->link_config.an_enabled;
state->rate_matching = pl->link_config.rate_matching;
- if (state->an_enabled) {
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising)) {
state->speed = SPEED_UNKNOWN;
state->duplex = DUPLEX_UNKNOWN;
state->pause = MLO_PAUSE_NONE;
pl->link_config.pause = MLO_PAUSE_AN;
pl->link_config.speed = SPEED_UNKNOWN;
pl->link_config.duplex = DUPLEX_UNKNOWN;
- pl->link_config.an_enabled = true;
pl->mac_ops = mac_ops;
__set_bit(PHYLINK_DISABLE_STOPPED, &pl->phylink_disable_state);
timer_setup(&pl->link_poll, phylink_fixed_poll, 0);
kset->base.speed = state->speed;
kset->base.duplex = state->duplex;
}
- kset->base.autoneg = state->an_enabled ? AUTONEG_ENABLE :
- AUTONEG_DISABLE;
+ kset->base.autoneg = linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising) ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
}
/**
/* We have ruled out the case with a PHY attached, and the
* fixed-link cases. All that is left are in-band links.
*/
- config.an_enabled = kset->base.autoneg == AUTONEG_ENABLE;
linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, config.advertising,
- config.an_enabled);
+ kset->base.autoneg == AUTONEG_ENABLE);
/* If this link is with an SFP, ensure that changes to advertised modes
* also cause the associated interface to be selected such that the
}
/* If autonegotiation is enabled, we must have an advertisement */
- if (config.an_enabled && phylink_is_empty_linkmode(config.advertising))
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ config.advertising) &&
+ phylink_is_empty_linkmode(config.advertising))
return -EINVAL;
mutex_lock(&pl->state_mutex);
pl->link_config.speed = config.speed;
pl->link_config.duplex = config.duplex;
- pl->link_config.an_enabled = config.an_enabled;
if (pl->link_config.interface != config.interface) {
/* The interface changed, e.g. 1000base-X <-> 2500base-X */
config.speed = SPEED_UNKNOWN;
config.duplex = DUPLEX_UNKNOWN;
config.pause = MLO_PAUSE_AN;
- config.an_enabled = pl->link_config.an_enabled;
/* Ignore errors if we're expecting a PHY to attach later */
ret = phylink_validate(pl, support, &config);
config.speed = SPEED_UNKNOWN;
config.duplex = DUPLEX_UNKNOWN;
config.pause = MLO_PAUSE_AN;
- config.an_enabled = true;
/* For all the interfaces that are supported, reduce the sfp_support
* mask to only those link modes that can be supported.
/* If there is no link or autonegotiation is disabled, the LP advertisement
* data is not meaningful, so don't go any further.
*/
- if (!state->link || !state->an_enabled)
+ if (!state->link || !linkmode_test_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ state->advertising))
return;
switch (state->interface) {
unsigned int br_min, br_nom, br_max;
__ETHTOOL_DECLARE_LINK_MODE_MASK(modes) = { 0, };
+ phylink_set(modes, Autoneg);
+ phylink_set(modes, Pause);
+ phylink_set(modes, Asym_Pause);
+
/* Decode the bitrate information to MBd */
br_min = br_nom = br_max = 0;
if (id->base.br_nominal) {
bus->sfp_quirk->modes(id, modes, interfaces);
linkmode_or(support, support, modes);
-
- phylink_set(support, Autoneg);
- phylink_set(support, Pause);
- phylink_set(support, Asym_Pause);
}
EXPORT_SYMBOL_GPL(sfp_parse_support);
unsigned int module_power_mW;
unsigned int module_t_start_up;
unsigned int module_t_wait;
+
+ bool have_a2;
bool tx_fault_ignore;
const struct sfp_quirk *quirk;
__set_bit(PHY_INTERFACE_MODE_2500BASEX, interfaces);
}
+static void sfp_quirk_disable_autoneg(const struct sfp_eeprom_id *id,
+ unsigned long *modes,
+ unsigned long *interfaces)
+{
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, modes);
+}
+
+static void sfp_quirk_oem_2_5g(const struct sfp_eeprom_id *id,
+ unsigned long *modes,
+ unsigned long *interfaces)
+{
+ /* Copper 2.5G SFP */
+ linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, modes);
+ __set_bit(PHY_INTERFACE_MODE_2500BASEX, interfaces);
+ sfp_quirk_disable_autoneg(id, modes, interfaces);
+}
+
static void sfp_quirk_ubnt_uf_instant(const struct sfp_eeprom_id *id,
unsigned long *modes,
unsigned long *interfaces)
SFP_QUIRK_M("UBNT", "UF-INSTANT", sfp_quirk_ubnt_uf_instant),
SFP_QUIRK_F("OEM", "SFP-10G-T", sfp_fixup_rollball_cc),
+ SFP_QUIRK_M("OEM", "SFP-2.5G-T", sfp_quirk_oem_2_5g),
SFP_QUIRK_F("OEM", "RTSFP-10", sfp_fixup_rollball_cc),
SFP_QUIRK_F("OEM", "RTSFP-10G", sfp_fixup_rollball_cc),
SFP_QUIRK_F("Turris", "RTSFP-10", sfp_fixup_rollball),
static int sfp_hwmon_insert(struct sfp *sfp)
{
- if (sfp->id.ext.sff8472_compliance == SFP_SFF8472_COMPLIANCE_NONE)
- return 0;
-
- if (!(sfp->id.ext.diagmon & SFP_DIAGMON_DDM))
- return 0;
-
- if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE)
- /* This driver in general does not support address
- * change.
- */
- return 0;
-
- mod_delayed_work(system_wq, &sfp->hwmon_probe, 1);
- sfp->hwmon_tries = R_PROBE_RETRY_SLOW;
+ if (sfp->have_a2 && sfp->id.ext.diagmon & SFP_DIAGMON_DDM) {
+ mod_delayed_work(system_wq, &sfp->hwmon_probe, 1);
+ sfp->hwmon_tries = R_PROBE_RETRY_SLOW;
+ }
return 0;
}
return 0;
}
+static int sfp_module_parse_sff8472(struct sfp *sfp)
+{
+ /* If the module requires address swap mode, warn about it */
+ if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE)
+ dev_warn(sfp->dev,
+ "module address swap to access page 0xA2 is not supported.\n");
+ else
+ sfp->have_a2 = true;
+
+ return 0;
+}
+
static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
{
/* SFP module inserted - read I2C data */
return -EINVAL;
}
- /* If the module requires address swap mode, warn about it */
- if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE)
- dev_warn(sfp->dev,
- "module address swap to access page 0xA2 is not supported.\n");
+ if (sfp->id.ext.sff8472_compliance != SFP_SFF8472_COMPLIANCE_NONE) {
+ ret = sfp_module_parse_sff8472(sfp);
+ if (ret < 0)
+ return ret;
+ }
/* Parse the module power requirement */
ret = sfp_module_parse_power(sfp);
memset(&sfp->id, 0, sizeof(sfp->id));
sfp->module_power_mW = 0;
+ sfp->have_a2 = false;
dev_info(sfp->dev, "module removed\n");
}
sfp->sm_dev_state != SFP_DEV_UP)
break;
- if (!(sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE))
+ /* Only use the soft state bits if we have access to the A2h
+ * memory, which implies that we have some level of SFF-8472
+ * compliance.
+ */
+ if (sfp->have_a2)
sfp_soft_start_poll(sfp);
sfp_module_tx_enable(sfp);
#define SPECIAL_CTRL_STS_AMDIX_ENABLE_ 0x4000
#define SPECIAL_CTRL_STS_AMDIX_STATE_ 0x2000
+#define EDPD_MAX_WAIT_DFLT_MS 640
+/* interval between phylib state machine runs in ms */
+#define PHY_STATE_MACH_MS 1000
+
struct smsc_hw_stat {
const char *string;
u8 reg;
};
struct smsc_phy_priv {
- bool energy_enable;
+ unsigned int edpd_enable:1;
+ unsigned int edpd_mode_set_by_user:1;
+ unsigned int edpd_max_wait_ms;
};
static int smsc_phy_ack_interrupt(struct phy_device *phydev)
return rc < 0 ? rc : 0;
}
-static int smsc_phy_config_intr(struct phy_device *phydev)
+int smsc_phy_config_intr(struct phy_device *phydev)
{
int rc;
return rc < 0 ? rc : 0;
}
+EXPORT_SYMBOL_GPL(smsc_phy_config_intr);
+
+static int smsc_phy_config_edpd(struct phy_device *phydev)
+{
+ struct smsc_phy_priv *priv = phydev->priv;
+
+ if (priv->edpd_enable)
+ return phy_set_bits(phydev, MII_LAN83C185_CTRL_STATUS,
+ MII_LAN83C185_EDPWRDOWN);
+ else
+ return phy_clear_bits(phydev, MII_LAN83C185_CTRL_STATUS,
+ MII_LAN83C185_EDPWRDOWN);
+}
-static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
+irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
{
int irq_status;
return IRQ_HANDLED;
}
+EXPORT_SYMBOL_GPL(smsc_phy_handle_interrupt);
-static int smsc_phy_config_init(struct phy_device *phydev)
+int smsc_phy_config_init(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
- int rc;
- if (!priv->energy_enable || phydev->irq != PHY_POLL)
+ if (!priv)
return 0;
- rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
-
- if (rc < 0)
- return rc;
+ /* don't use EDPD in irq mode unless overridden by the user */
+ if (!priv->edpd_mode_set_by_user && phydev->irq != PHY_POLL)
+ priv->edpd_enable = false;
- /* Enable energy detect mode for this SMSC Transceivers */
- rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
- rc | MII_LAN83C185_EDPWRDOWN);
- return rc;
+ return smsc_phy_config_edpd(phydev);
}
+EXPORT_SYMBOL_GPL(smsc_phy_config_init);
static int smsc_phy_reset(struct phy_device *phydev)
{
static int lan95xx_config_aneg_ext(struct phy_device *phydev)
{
- int rc;
-
- if (phydev->phy_id != 0x0007c0f0) /* not (LAN9500A or LAN9505A) */
- return lan87xx_config_aneg(phydev);
+ if (phydev->phy_id == 0x0007c0f0) { /* LAN9500A or LAN9505A */
+ /* Extend Manual AutoMDIX timer */
+ int rc = phy_set_bits(phydev, PHY_EDPD_CONFIG,
+ PHY_EDPD_CONFIG_EXT_CROSSOVER_);
- /* Extend Manual AutoMDIX timer */
- rc = phy_read(phydev, PHY_EDPD_CONFIG);
- if (rc < 0)
- return rc;
+ if (rc < 0)
+ return rc;
+ }
- rc |= PHY_EDPD_CONFIG_EXT_CROSSOVER_;
- phy_write(phydev, PHY_EDPD_CONFIG, rc);
return lan87xx_config_aneg(phydev);
}
* The workaround is only applicable to poll mode. Energy Detect Power-Down may
* not be used in interrupt mode lest link change detection becomes unreliable.
*/
-static int lan87xx_read_status(struct phy_device *phydev)
+int lan87xx_read_status(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
int err;
if (err)
return err;
- if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
+ if (!phydev->link && priv && priv->edpd_enable &&
+ priv->edpd_max_wait_ms) {
+ unsigned int max_wait = priv->edpd_max_wait_ms * 1000;
+ int rc;
+
/* Disable EDPD to wake up PHY */
- int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
+ rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
if (rc < 0)
return rc;
*/
read_poll_timeout(phy_read, rc,
rc & MII_LAN83C185_ENERGYON || rc < 0,
- 10000, 640000, true, phydev,
+ 10000, max_wait, true, phydev,
MII_LAN83C185_CTRL_STATUS);
if (rc < 0)
return rc;
return err;
}
+EXPORT_SYMBOL_GPL(lan87xx_read_status);
static int smsc_get_sset_count(struct phy_device *phydev)
{
data[i] = smsc_get_stat(phydev, i);
}
-static int smsc_phy_probe(struct phy_device *phydev)
+static int smsc_phy_get_edpd(struct phy_device *phydev, u16 *edpd)
+{
+ struct smsc_phy_priv *priv = phydev->priv;
+
+ if (!priv)
+ return -EOPNOTSUPP;
+
+ if (!priv->edpd_enable)
+ *edpd = ETHTOOL_PHY_EDPD_DISABLE;
+ else if (!priv->edpd_max_wait_ms)
+ *edpd = ETHTOOL_PHY_EDPD_NO_TX;
+ else
+ *edpd = PHY_STATE_MACH_MS + priv->edpd_max_wait_ms;
+
+ return 0;
+}
+
+static int smsc_phy_set_edpd(struct phy_device *phydev, u16 edpd)
+{
+ struct smsc_phy_priv *priv = phydev->priv;
+
+ if (!priv)
+ return -EOPNOTSUPP;
+
+ switch (edpd) {
+ case ETHTOOL_PHY_EDPD_DISABLE:
+ priv->edpd_enable = false;
+ break;
+ case ETHTOOL_PHY_EDPD_NO_TX:
+ priv->edpd_enable = true;
+ priv->edpd_max_wait_ms = 0;
+ break;
+ case ETHTOOL_PHY_EDPD_DFLT_TX_MSECS:
+ edpd = PHY_STATE_MACH_MS + EDPD_MAX_WAIT_DFLT_MS;
+ fallthrough;
+ default:
+ if (phydev->irq != PHY_POLL)
+ return -EOPNOTSUPP;
+ if (edpd < PHY_STATE_MACH_MS || edpd > PHY_STATE_MACH_MS + 1000)
+ return -EINVAL;
+ priv->edpd_enable = true;
+ priv->edpd_max_wait_ms = edpd - PHY_STATE_MACH_MS;
+ }
+
+ priv->edpd_mode_set_by_user = true;
+
+ return smsc_phy_config_edpd(phydev);
+}
+
+int smsc_phy_get_tunable(struct phy_device *phydev,
+ struct ethtool_tunable *tuna, void *data)
+{
+ switch (tuna->id) {
+ case ETHTOOL_PHY_EDPD:
+ return smsc_phy_get_edpd(phydev, data);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+EXPORT_SYMBOL_GPL(smsc_phy_get_tunable);
+
+int smsc_phy_set_tunable(struct phy_device *phydev,
+ struct ethtool_tunable *tuna, const void *data)
+{
+ switch (tuna->id) {
+ case ETHTOOL_PHY_EDPD:
+ return smsc_phy_set_edpd(phydev, *(u16 *)data);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+EXPORT_SYMBOL_GPL(smsc_phy_set_tunable);
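
User space reaches this tunable through the generic PHY tunable interface (for example ethtool's energy-detect-power-down PHY tunable). Below is a minimal, hedged sketch using the raw SIOCETHTOOL ioctl; the interface name is a placeholder, and the value 1640 follows the encoding above (PHY_STATE_MACH_MS plus a 640 ms maximum wake-up wait, i.e. the driver default). Per smsc_phy_set_edpd(), millisecond values are only accepted when the PHY is in polling mode.

/* Hedged user-space sketch: set the PHY EDPD tunable via the raw ethtool
 * ioctl. "eth0" and the value 1640 are placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	struct {
		struct ethtool_tunable hdr;
		__u16 val;
	} req;
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	memset(&req, 0, sizeof(req));
	req.hdr.cmd = ETHTOOL_PHY_STUNABLE;
	req.hdr.id = ETHTOOL_PHY_EDPD;
	req.hdr.type_id = ETHTOOL_TUNABLE_U16;
	req.hdr.len = sizeof(req.val);
	req.val = 1640;		/* or ETHTOOL_PHY_EDPD_DISABLE / ETHTOOL_PHY_EDPD_NO_TX */

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&req;

	if (ioctl(fd, SIOCETHTOOL, &ifr))
		perror("SIOCETHTOOL");

	return 0;
}
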
+
+int smsc_phy_probe(struct phy_device *phydev)
{
struct device *dev = &phydev->mdio.dev;
- struct device_node *of_node = dev->of_node;
struct smsc_phy_priv *priv;
struct clk *refclk;
if (!priv)
return -ENOMEM;
- priv->energy_enable = true;
+ priv->edpd_enable = true;
+ priv->edpd_max_wait_ms = EDPD_MAX_WAIT_DFLT_MS;
- if (of_property_read_bool(of_node, "smsc,disable-energy-detect"))
- priv->energy_enable = false;
+ if (device_property_present(dev, "smsc,disable-energy-detect"))
+ priv->edpd_enable = false;
phydev->priv = priv;
return clk_set_rate(refclk, 50 * 1000 * 1000);
}
+EXPORT_SYMBOL_GPL(smsc_phy_probe);
static struct phy_driver smsc_phy_driver[] = {
{
.get_strings = smsc_get_strings,
.get_stats = smsc_get_stats,
+ .get_tunable = smsc_phy_get_tunable,
+ .set_tunable = smsc_phy_set_tunable,
+
.suspend = genphy_suspend,
.resume = genphy_resume,
}, {
.get_strings = smsc_get_strings,
.get_stats = smsc_get_stats,
+ .get_tunable = smsc_phy_get_tunable,
+ .set_tunable = smsc_phy_set_tunable,
+
.suspend = genphy_suspend,
.resume = genphy_resume,
}, {
.get_strings = smsc_get_strings,
.get_stats = smsc_get_stats,
+ .get_tunable = smsc_phy_get_tunable,
+ .set_tunable = smsc_phy_set_tunable,
+
.suspend = genphy_suspend,
.resume = genphy_resume,
}, {
.get_strings = smsc_get_strings,
.get_stats = smsc_get_stats,
+ .get_tunable = smsc_phy_get_tunable,
+ .set_tunable = smsc_phy_set_tunable,
+
.suspend = genphy_suspend,
.resume = genphy_resume,
} };
static struct spi_driver ks8995_driver = {
.driver = {
.name = "spi-ks8995",
- .of_match_table = of_match_ptr(ks8895_spi_of_match),
+ .of_match_table = ks8895_spi_of_match,
},
.probe = ks8995_probe,
.remove = ks8995_remove,
goto err_put;
}
+ /* tap groks IOCB_NOWAIT just fine, mark it as such */
+ file->f_mode |= FMODE_NOWAIT;
+
dev_put(tap->dev);
rtnl_unlock();
{
struct file *file = iocb->ki_filp;
struct tap_queue *q = file->private_data;
+ int noblock = 0;
+
+ if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT))
+ noblock = 1;
- return tap_get_user(q, NULL, from, file->f_flags & O_NONBLOCK);
+ return tap_get_user(q, NULL, from, noblock);
}
/* Put packet to the user space buffer */
struct file *file = iocb->ki_filp;
struct tap_queue *q = file->private_data;
ssize_t len = iov_iter_count(to), ret;
+ int noblock = 0;
+
+ if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT))
+ noblock = 1;
- ret = tap_do_read(q, to, file->f_flags & O_NONBLOCK, NULL);
+ ret = tap_do_read(q, to, noblock, NULL);
ret = min_t(ssize_t, ret, len);
if (ret > 0)
iocb->ki_pos = ret;
sock_set_flag(&tfile->sk, SOCK_ZEROCOPY);
+ /* tun groks IOCB_NOWAIT just fine, mark it as such */
+ file->f_mode |= FMODE_NOWAIT;
return 0;
}
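
With FMODE_NOWAIT set, callers (io_uring being the obvious consumer) can request non-blocking behaviour per I/O rather than per open file. A minimal, hedged user-space sketch using preadv2() with RWF_NOWAIT on an already-configured tun/tap descriptor:

/* Hedged sketch, not part of the driver: issue a non-blocking read on a
 * tun/tap fd using per-call RWF_NOWAIT, which maps to IOCB_NOWAIT now that
 * the drivers set FMODE_NOWAIT.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <sys/uio.h>

static int read_one_packet(int tun_fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	ssize_t n = preadv2(tun_fd, &iov, 1, -1, RWF_NOWAIT);

	if (n < 0 && errno == EAGAIN)
		return 0;	/* no packet queued right now; do not block */

	return n < 0 ? -1 : (int)n;
}
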
VIRTIO_NET_F_GUEST_UFO,
VIRTIO_NET_F_GUEST_CSUM,
VIRTIO_NET_F_GUEST_USO4,
- VIRTIO_NET_F_GUEST_USO6
+ VIRTIO_NET_F_GUEST_USO6,
+ VIRTIO_NET_F_GUEST_HDRLEN
};
#define GUEST_OFFLOAD_GRO_HW_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
VIRTIO_NET_F_CTRL_MAC_ADDR, \
VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
- VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL
+ VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
+ VIRTIO_NET_F_GUEST_HDRLEN
static unsigned int features[] = {
VIRTNET_FEATURES,
obj-$(CONFIG_VXLAN) += vxlan.o
-vxlan-objs := vxlan_core.o vxlan_multicast.o vxlan_vnifilter.o
+vxlan-objs := vxlan_core.o vxlan_multicast.o vxlan_vnifilter.o vxlan_mdb.o
ip_tunnel_collect_metadata();
}
-#if IS_ENABLED(CONFIG_IPV6)
-static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
-{
- if (nla_len(nla) >= sizeof(struct in6_addr)) {
- ip->sin6.sin6_addr = nla_get_in6_addr(nla);
- ip->sa.sa_family = AF_INET6;
- return 0;
- } else if (nla_len(nla) >= sizeof(__be32)) {
- ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
- ip->sa.sa_family = AF_INET;
- return 0;
- } else {
- return -EAFNOSUPPORT;
- }
-}
-
-static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
- const union vxlan_addr *ip)
-{
- if (ip->sa.sa_family == AF_INET6)
- return nla_put_in6_addr(skb, attr, &ip->sin6.sin6_addr);
- else
- return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
-}
-
-#else /* !CONFIG_IPV6 */
-
-static int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla)
-{
- if (nla_len(nla) >= sizeof(struct in6_addr)) {
- return -EAFNOSUPPORT;
- } else if (nla_len(nla) >= sizeof(__be32)) {
- ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
- ip->sa.sa_family = AF_INET;
- return 0;
- } else {
- return -EAFNOSUPPORT;
- }
-}
-
-static int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
- const union vxlan_addr *ip)
-{
- return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
-}
-#endif
-
/* Find VXLAN socket based on network namespace, address family, UDP port,
* enabled unshareable flags and socket device binding (see l3mdev with
* non-default VRF).
struct vxlan_fdb *f;
struct sk_buff *reply;
- if (!(n->nud_state & NUD_CONNECTED)) {
+ if (!(READ_ONCE(n->nud_state) & NUD_CONNECTED)) {
neigh_release(n);
goto out;
}
struct vxlan_fdb *f;
struct sk_buff *reply;
- if (!(n->nud_state & NUD_CONNECTED)) {
+ if (!(READ_ONCE(n->nud_state) & NUD_CONNECTED)) {
neigh_release(n);
goto out;
}
return false;
}
-static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
- struct vxlan_metadata *md)
-{
- struct vxlanhdr_gbp *gbp;
-
- if (!md->gbp)
- return;
-
- gbp = (struct vxlanhdr_gbp *)vxh;
- vxh->vx_flags |= VXLAN_HF_GBP;
-
- if (md->gbp & VXLAN_GBP_DONT_LEARN)
- gbp->dont_learn = 1;
-
- if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
- gbp->policy_applied = 1;
-
- gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
-}
-
-static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, u32 vxflags,
- __be16 protocol)
+static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, __be16 protocol)
{
struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)vxh;
}
if (vxflags & VXLAN_F_GBP)
- vxlan_build_gbp_hdr(vxh, vxflags, md);
+ vxlan_build_gbp_hdr(vxh, md);
if (vxflags & VXLAN_F_GPE) {
- err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
+ err = vxlan_build_gpe_hdr(vxh, skb->protocol);
if (err < 0)
return err;
inner_protocol = skb->protocol;
return 0;
}
-static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
- __be32 default_vni, struct vxlan_rdst *rdst,
- bool did_rsc)
+void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ __be32 default_vni, struct vxlan_rdst *rdst, bool did_rsc)
{
struct dst_cache *dst_cache;
struct ip_tunnel_info *info;
#endif
}
+ if (vxlan->cfg.flags & VXLAN_F_MDB) {
+ struct vxlan_mdb_entry *mdb_entry;
+
+ rcu_read_lock();
+ mdb_entry = vxlan_mdb_entry_skb_get(vxlan, skb, vni);
+ if (mdb_entry) {
+ netdev_tx_t ret;
+
+ ret = vxlan_mdb_xmit(vxlan, mdb_entry, skb);
+ rcu_read_unlock();
+ return ret;
+ }
+ rcu_read_unlock();
+ }
+
eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest, vni);
did_rsc = false;
if (err)
goto err_free_percpu;
+ err = vxlan_mdb_init(vxlan);
+ if (err)
+ goto err_gro_cells_destroy;
+
return 0;
+err_gro_cells_destroy:
+ gro_cells_destroy(&vxlan->gro_cells);
err_free_percpu:
free_percpu(dev->tstats);
err_vnigroup_uninit:
{
struct vxlan_dev *vxlan = netdev_priv(dev);
+ vxlan_mdb_fini(vxlan);
+
if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
vxlan_vnigroup_uninit(vxlan);
.ndo_fdb_del = vxlan_fdb_delete,
.ndo_fdb_dump = vxlan_fdb_dump,
.ndo_fdb_get = vxlan_fdb_get,
+ .ndo_mdb_add = vxlan_mdb_add,
+ .ndo_mdb_del = vxlan_mdb_del,
+ .ndo_mdb_dump = vxlan_mdb_dump,
.ndo_fill_metadata_dst = vxlan_fill_metadata_dst,
};
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/if_bridge.h>
+#include <linux/in.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/netlink.h>
+#include <linux/rhashtable.h>
+#include <linux/rhashtable-types.h>
+#include <linux/rtnetlink.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+#include <net/netlink.h>
+#include <net/vxlan.h>
+
+#include "vxlan_private.h"
+
+struct vxlan_mdb_entry_key {
+ union vxlan_addr src;
+ union vxlan_addr dst;
+ __be32 vni;
+};
+
+struct vxlan_mdb_entry {
+ struct rhash_head rhnode;
+ struct list_head remotes;
+ struct vxlan_mdb_entry_key key;
+ struct hlist_node mdb_node;
+ struct rcu_head rcu;
+};
+
+#define VXLAN_MDB_REMOTE_F_BLOCKED BIT(0)
+
+struct vxlan_mdb_remote {
+ struct list_head list;
+ struct vxlan_rdst __rcu *rd;
+ u8 flags;
+ u8 filter_mode;
+ u8 rt_protocol;
+ struct hlist_head src_list;
+ struct rcu_head rcu;
+};
+
+#define VXLAN_SGRP_F_DELETE BIT(0)
+
+struct vxlan_mdb_src_entry {
+ struct hlist_node node;
+ union vxlan_addr addr;
+ u8 flags;
+};
+
+struct vxlan_mdb_dump_ctx {
+ long reserved;
+ long entry_idx;
+ long remote_idx;
+};
+
+struct vxlan_mdb_config_src_entry {
+ union vxlan_addr addr;
+ struct list_head node;
+};
+
+struct vxlan_mdb_config {
+ struct vxlan_dev *vxlan;
+ struct vxlan_mdb_entry_key group;
+ struct list_head src_list;
+ union vxlan_addr remote_ip;
+ u32 remote_ifindex;
+ __be32 remote_vni;
+ __be16 remote_port;
+ u16 nlflags;
+ u8 flags;
+ u8 filter_mode;
+ u8 rt_protocol;
+};
+
+static const struct rhashtable_params vxlan_mdb_rht_params = {
+ .head_offset = offsetof(struct vxlan_mdb_entry, rhnode),
+ .key_offset = offsetof(struct vxlan_mdb_entry, key),
+ .key_len = sizeof(struct vxlan_mdb_entry_key),
+ .automatic_shrinking = true,
+};
+
+static int __vxlan_mdb_add(const struct vxlan_mdb_config *cfg,
+ struct netlink_ext_ack *extack);
+static int __vxlan_mdb_del(const struct vxlan_mdb_config *cfg,
+ struct netlink_ext_ack *extack);
+
+static void vxlan_br_mdb_entry_fill(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote,
+ struct br_mdb_entry *e)
+{
+ const union vxlan_addr *dst = &mdb_entry->key.dst;
+
+ memset(e, 0, sizeof(*e));
+ e->ifindex = vxlan->dev->ifindex;
+ e->state = MDB_PERMANENT;
+
+ if (remote->flags & VXLAN_MDB_REMOTE_F_BLOCKED)
+ e->flags |= MDB_FLAGS_BLOCKED;
+
+ switch (dst->sa.sa_family) {
+ case AF_INET:
+ e->addr.u.ip4 = dst->sin.sin_addr.s_addr;
+ e->addr.proto = htons(ETH_P_IP);
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case AF_INET6:
+ e->addr.u.ip6 = dst->sin6.sin6_addr;
+ e->addr.proto = htons(ETH_P_IPV6);
+ break;
+#endif
+ }
+}
+
+static int vxlan_mdb_entry_info_fill_srcs(struct sk_buff *skb,
+ const struct vxlan_mdb_remote *remote)
+{
+ struct vxlan_mdb_src_entry *ent;
+ struct nlattr *nest;
+
+ if (hlist_empty(&remote->src_list))
+ return 0;
+
+ nest = nla_nest_start(skb, MDBA_MDB_EATTR_SRC_LIST);
+ if (!nest)
+ return -EMSGSIZE;
+
+ hlist_for_each_entry(ent, &remote->src_list, node) {
+ struct nlattr *nest_ent;
+
+ nest_ent = nla_nest_start(skb, MDBA_MDB_SRCLIST_ENTRY);
+ if (!nest_ent)
+ goto out_cancel_err;
+
+ if (vxlan_nla_put_addr(skb, MDBA_MDB_SRCATTR_ADDRESS,
+ &ent->addr) ||
+ nla_put_u32(skb, MDBA_MDB_SRCATTR_TIMER, 0))
+ goto out_cancel_err;
+
+ nla_nest_end(skb, nest_ent);
+ }
+
+ nla_nest_end(skb, nest);
+
+ return 0;
+
+out_cancel_err:
+ nla_nest_cancel(skb, nest);
+ return -EMSGSIZE;
+}
+
+static int vxlan_mdb_entry_info_fill(const struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
+{
+ struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
+ struct br_mdb_entry e;
+ struct nlattr *nest;
+
+ nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY_INFO);
+ if (!nest)
+ return -EMSGSIZE;
+
+ vxlan_br_mdb_entry_fill(vxlan, mdb_entry, remote, &e);
+
+ if (nla_put_nohdr(skb, sizeof(e), &e) ||
+ nla_put_u32(skb, MDBA_MDB_EATTR_TIMER, 0))
+ goto nest_err;
+
+ if (!vxlan_addr_any(&mdb_entry->key.src) &&
+ vxlan_nla_put_addr(skb, MDBA_MDB_EATTR_SOURCE, &mdb_entry->key.src))
+ goto nest_err;
+
+ if (nla_put_u8(skb, MDBA_MDB_EATTR_RTPROT, remote->rt_protocol) ||
+ nla_put_u8(skb, MDBA_MDB_EATTR_GROUP_MODE, remote->filter_mode) ||
+ vxlan_mdb_entry_info_fill_srcs(skb, remote) ||
+ vxlan_nla_put_addr(skb, MDBA_MDB_EATTR_DST, &rd->remote_ip))
+ goto nest_err;
+
+ if (rd->remote_port && rd->remote_port != vxlan->cfg.dst_port &&
+ nla_put_u16(skb, MDBA_MDB_EATTR_DST_PORT,
+ be16_to_cpu(rd->remote_port)))
+ goto nest_err;
+
+ if (rd->remote_vni != vxlan->default_dst.remote_vni &&
+ nla_put_u32(skb, MDBA_MDB_EATTR_VNI, be32_to_cpu(rd->remote_vni)))
+ goto nest_err;
+
+ if (rd->remote_ifindex &&
+ nla_put_u32(skb, MDBA_MDB_EATTR_IFINDEX, rd->remote_ifindex))
+ goto nest_err;
+
+ if ((vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) &&
+ mdb_entry->key.vni && nla_put_u32(skb, MDBA_MDB_EATTR_SRC_VNI,
+ be32_to_cpu(mdb_entry->key.vni)))
+ goto nest_err;
+
+ nla_nest_end(skb, nest);
+
+ return 0;
+
+nest_err:
+ nla_nest_cancel(skb, nest);
+ return -EMSGSIZE;
+}
+
+static int vxlan_mdb_entry_fill(const struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ struct vxlan_mdb_dump_ctx *ctx,
+ const struct vxlan_mdb_entry *mdb_entry)
+{
+ int remote_idx = 0, s_remote_idx = ctx->remote_idx;
+ struct vxlan_mdb_remote *remote;
+ struct nlattr *nest;
+ int err = 0;
+
+ nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!nest)
+ return -EMSGSIZE;
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list) {
+ if (remote_idx < s_remote_idx)
+ goto skip;
+
+ err = vxlan_mdb_entry_info_fill(vxlan, skb, mdb_entry, remote);
+ if (err)
+ break;
+skip:
+ remote_idx++;
+ }
+
+ ctx->remote_idx = err ? remote_idx : 0;
+ nla_nest_end(skb, nest);
+ return err;
+}
+
+static int vxlan_mdb_fill(const struct vxlan_dev *vxlan, struct sk_buff *skb,
+ struct vxlan_mdb_dump_ctx *ctx)
+{
+ int entry_idx = 0, s_entry_idx = ctx->entry_idx;
+ struct vxlan_mdb_entry *mdb_entry;
+ struct nlattr *nest;
+ int err = 0;
+
+ nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!nest)
+ return -EMSGSIZE;
+
+ hlist_for_each_entry(mdb_entry, &vxlan->mdb_list, mdb_node) {
+ if (entry_idx < s_entry_idx)
+ goto skip;
+
+ err = vxlan_mdb_entry_fill(vxlan, skb, ctx, mdb_entry);
+ if (err)
+ break;
+skip:
+ entry_idx++;
+ }
+
+ ctx->entry_idx = err ? entry_idx : 0;
+ nla_nest_end(skb, nest);
+ return err;
+}
+
+int vxlan_mdb_dump(struct net_device *dev, struct sk_buff *skb,
+ struct netlink_callback *cb)
+{
+ struct vxlan_mdb_dump_ctx *ctx = (void *)cb->ctx;
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
+
+ ASSERT_RTNL();
+
+ NL_ASSERT_DUMP_CTX_FITS(struct vxlan_mdb_dump_ctx);
+
+ nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, RTM_NEWMDB, sizeof(*bpm),
+ NLM_F_MULTI);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = dev->ifindex;
+
+ err = vxlan_mdb_fill(vxlan, skb, ctx);
+
+ nlmsg_end(skb, nlh);
+
+ cb->seq = vxlan->mdb_seq;
+ nl_dump_check_consistent(cb, nlh);
+
+ return err;
+}
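
This dump callback is reached through the rtnetlink RTM_GETMDB handler via the new ndo_mdb_dump hook (iproute2's "bridge mdb show" issues the same kind of dump). A rough, hedged sketch of such a request follows; attribute parsing and most error handling are omitted, and ifindex is left at zero since the dump walks all MDB-capable devices.

/* Hedged user-space sketch: request an MDB dump over rtnetlink and count
 * the RTM_NEWMDB messages that come back.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_bridge.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct br_port_msg bpm;
	} req;
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	char buf[8192];
	ssize_t len;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0)
		return 1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.bpm));
	req.nlh.nlmsg_type = RTM_GETMDB;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	req.bpm.family = AF_BRIDGE;
	req.bpm.ifindex = 0;	/* dump covers all devices with MDB support */

	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
			if (nlh->nlmsg_type == NLMSG_DONE)
				return 0;
			if (nlh->nlmsg_type == RTM_NEWMDB)
				printf("RTM_NEWMDB message, %u bytes\n",
				       nlh->nlmsg_len);
		}
	}
	return 0;
}
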
+
+static const struct nla_policy
+vxlan_mdbe_src_list_entry_pol[MDBE_SRCATTR_MAX + 1] = {
+ [MDBE_SRCATTR_ADDRESS] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+};
+
+static const struct nla_policy
+vxlan_mdbe_src_list_pol[MDBE_SRC_LIST_MAX + 1] = {
+ [MDBE_SRC_LIST_ENTRY] = NLA_POLICY_NESTED(vxlan_mdbe_src_list_entry_pol),
+};
+
+static struct netlink_range_validation vni_range = {
+ .max = VXLAN_N_VID - 1,
+};
+
+static const struct nla_policy vxlan_mdbe_attrs_pol[MDBE_ATTR_MAX + 1] = {
+ [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+ [MDBE_ATTR_GROUP_MODE] = NLA_POLICY_RANGE(NLA_U8, MCAST_EXCLUDE,
+ MCAST_INCLUDE),
+ [MDBE_ATTR_SRC_LIST] = NLA_POLICY_NESTED(vxlan_mdbe_src_list_pol),
+ [MDBE_ATTR_RTPROT] = NLA_POLICY_MIN(NLA_U8, RTPROT_STATIC),
+ [MDBE_ATTR_DST] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+ [MDBE_ATTR_DST_PORT] = { .type = NLA_U16 },
+ [MDBE_ATTR_VNI] = NLA_POLICY_FULL_RANGE(NLA_U32, &vni_range),
+ [MDBE_ATTR_IFINDEX] = NLA_POLICY_MIN(NLA_S32, 1),
+ [MDBE_ATTR_SRC_VNI] = NLA_POLICY_FULL_RANGE(NLA_U32, &vni_range),
+};
+
+static bool vxlan_mdb_is_valid_source(const struct nlattr *attr, __be16 proto,
+ struct netlink_ext_ack *extack)
+{
+ switch (proto) {
+ case htons(ETH_P_IP):
+ if (nla_len(attr) != sizeof(struct in_addr)) {
+ NL_SET_ERR_MSG_MOD(extack, "IPv4 invalid source address length");
+ return false;
+ }
+ if (ipv4_is_multicast(nla_get_in_addr(attr))) {
+ NL_SET_ERR_MSG_MOD(extack, "IPv4 multicast source address is not allowed");
+ return false;
+ }
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6): {
+ struct in6_addr src;
+
+ if (nla_len(attr) != sizeof(struct in6_addr)) {
+ NL_SET_ERR_MSG_MOD(extack, "IPv6 invalid source address length");
+ return false;
+ }
+ src = nla_get_in6_addr(attr);
+ if (ipv6_addr_is_multicast(&src)) {
+ NL_SET_ERR_MSG_MOD(extack, "IPv6 multicast source address is not allowed");
+ return false;
+ }
+ break;
+ }
+#endif
+ default:
+ NL_SET_ERR_MSG_MOD(extack, "Invalid protocol used with source address");
+ return false;
+ }
+
+ return true;
+}
+
+static void vxlan_mdb_config_group_set(struct vxlan_mdb_config *cfg,
+ const struct br_mdb_entry *entry,
+ const struct nlattr *source_attr)
+{
+ struct vxlan_mdb_entry_key *group = &cfg->group;
+
+ switch (entry->addr.proto) {
+ case htons(ETH_P_IP):
+ group->dst.sa.sa_family = AF_INET;
+ group->dst.sin.sin_addr.s_addr = entry->addr.u.ip4;
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6):
+ group->dst.sa.sa_family = AF_INET6;
+ group->dst.sin6.sin6_addr = entry->addr.u.ip6;
+ break;
+#endif
+ }
+
+ if (source_attr)
+ vxlan_nla_get_addr(&group->src, source_attr);
+}
+
+static bool vxlan_mdb_is_star_g(const struct vxlan_mdb_entry_key *group)
+{
+ return !vxlan_addr_any(&group->dst) && vxlan_addr_any(&group->src);
+}
+
+static bool vxlan_mdb_is_sg(const struct vxlan_mdb_entry_key *group)
+{
+ return !vxlan_addr_any(&group->dst) && !vxlan_addr_any(&group->src);
+}
+
+static int vxlan_mdb_config_src_entry_init(struct vxlan_mdb_config *cfg,
+ __be16 proto,
+ const struct nlattr *src_entry,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[MDBE_SRCATTR_MAX + 1];
+ struct vxlan_mdb_config_src_entry *src;
+ int err;
+
+ err = nla_parse_nested(tb, MDBE_SRCATTR_MAX, src_entry,
+ vxlan_mdbe_src_list_entry_pol, extack);
+ if (err)
+ return err;
+
+ if (NL_REQ_ATTR_CHECK(extack, src_entry, tb, MDBE_SRCATTR_ADDRESS))
+ return -EINVAL;
+
+ if (!vxlan_mdb_is_valid_source(tb[MDBE_SRCATTR_ADDRESS], proto,
+ extack))
+ return -EINVAL;
+
+ src = kzalloc(sizeof(*src), GFP_KERNEL);
+ if (!src)
+ return -ENOMEM;
+
+ err = vxlan_nla_get_addr(&src->addr, tb[MDBE_SRCATTR_ADDRESS]);
+ if (err)
+ goto err_free_src;
+
+ list_add_tail(&src->node, &cfg->src_list);
+
+ return 0;
+
+err_free_src:
+ kfree(src);
+ return err;
+}
+
+static void
+vxlan_mdb_config_src_entry_fini(struct vxlan_mdb_config_src_entry *src)
+{
+ list_del(&src->node);
+ kfree(src);
+}
+
+static int vxlan_mdb_config_src_list_init(struct vxlan_mdb_config *cfg,
+ __be16 proto,
+ const struct nlattr *src_list,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_config_src_entry *src, *tmp;
+ struct nlattr *src_entry;
+ int rem, err;
+
+ nla_for_each_nested(src_entry, src_list, rem) {
+ err = vxlan_mdb_config_src_entry_init(cfg, proto, src_entry,
+ extack);
+ if (err)
+ goto err_src_entry_init;
+ }
+
+ return 0;
+
+err_src_entry_init:
+ list_for_each_entry_safe_reverse(src, tmp, &cfg->src_list, node)
+ vxlan_mdb_config_src_entry_fini(src);
+ return err;
+}
+
+static void vxlan_mdb_config_src_list_fini(struct vxlan_mdb_config *cfg)
+{
+ struct vxlan_mdb_config_src_entry *src, *tmp;
+
+ list_for_each_entry_safe_reverse(src, tmp, &cfg->src_list, node)
+ vxlan_mdb_config_src_entry_fini(src);
+}
+
+static int vxlan_mdb_config_attrs_init(struct vxlan_mdb_config *cfg,
+ const struct br_mdb_entry *entry,
+ const struct nlattr *set_attrs,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *mdbe_attrs[MDBE_ATTR_MAX + 1];
+ int err;
+
+ err = nla_parse_nested(mdbe_attrs, MDBE_ATTR_MAX, set_attrs,
+ vxlan_mdbe_attrs_pol, extack);
+ if (err)
+ return err;
+
+ if (NL_REQ_ATTR_CHECK(extack, set_attrs, mdbe_attrs, MDBE_ATTR_DST)) {
+ NL_SET_ERR_MSG_MOD(extack, "Missing remote destination IP address");
+ return -EINVAL;
+ }
+
+ if (mdbe_attrs[MDBE_ATTR_SOURCE] &&
+ !vxlan_mdb_is_valid_source(mdbe_attrs[MDBE_ATTR_SOURCE],
+ entry->addr.proto, extack))
+ return -EINVAL;
+
+ vxlan_mdb_config_group_set(cfg, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
+
+ /* rtnetlink code only validates that IPv4 group address is
+ * multicast.
+ */
+ if (!vxlan_addr_is_multicast(&cfg->group.dst) &&
+ !vxlan_addr_any(&cfg->group.dst)) {
+ NL_SET_ERR_MSG_MOD(extack, "Group address is not multicast");
+ return -EINVAL;
+ }
+
+ if (vxlan_addr_any(&cfg->group.dst) &&
+ mdbe_attrs[MDBE_ATTR_SOURCE]) {
+ NL_SET_ERR_MSG_MOD(extack, "Source cannot be specified for the all-zeros entry");
+ return -EINVAL;
+ }
+
+ if (vxlan_mdb_is_sg(&cfg->group))
+ cfg->filter_mode = MCAST_INCLUDE;
+
+ if (mdbe_attrs[MDBE_ATTR_GROUP_MODE]) {
+ if (!vxlan_mdb_is_star_g(&cfg->group)) {
+ NL_SET_ERR_MSG_MOD(extack, "Filter mode can only be set for (*, G) entries");
+ return -EINVAL;
+ }
+ cfg->filter_mode = nla_get_u8(mdbe_attrs[MDBE_ATTR_GROUP_MODE]);
+ }
+
+ if (mdbe_attrs[MDBE_ATTR_SRC_LIST]) {
+ if (!vxlan_mdb_is_star_g(&cfg->group)) {
+ NL_SET_ERR_MSG_MOD(extack, "Source list can only be set for (*, G) entries");
+ return -EINVAL;
+ }
+ if (!mdbe_attrs[MDBE_ATTR_GROUP_MODE]) {
+ NL_SET_ERR_MSG_MOD(extack, "Source list cannot be set without filter mode");
+ return -EINVAL;
+ }
+ err = vxlan_mdb_config_src_list_init(cfg, entry->addr.proto,
+ mdbe_attrs[MDBE_ATTR_SRC_LIST],
+ extack);
+ if (err)
+ return err;
+ }
+
+ if (vxlan_mdb_is_star_g(&cfg->group) && list_empty(&cfg->src_list) &&
+ cfg->filter_mode == MCAST_INCLUDE) {
+ NL_SET_ERR_MSG_MOD(extack, "Cannot add (*, G) INCLUDE with an empty source list");
+ return -EINVAL;
+ }
+
+ if (mdbe_attrs[MDBE_ATTR_RTPROT])
+ cfg->rt_protocol = nla_get_u8(mdbe_attrs[MDBE_ATTR_RTPROT]);
+
+ err = vxlan_nla_get_addr(&cfg->remote_ip, mdbe_attrs[MDBE_ATTR_DST]);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid remote destination address");
+ goto err_src_list_fini;
+ }
+
+ if (mdbe_attrs[MDBE_ATTR_DST_PORT])
+ cfg->remote_port =
+ cpu_to_be16(nla_get_u16(mdbe_attrs[MDBE_ATTR_DST_PORT]));
+
+ if (mdbe_attrs[MDBE_ATTR_VNI])
+ cfg->remote_vni =
+ cpu_to_be32(nla_get_u32(mdbe_attrs[MDBE_ATTR_VNI]));
+
+ if (mdbe_attrs[MDBE_ATTR_IFINDEX]) {
+ cfg->remote_ifindex =
+ nla_get_s32(mdbe_attrs[MDBE_ATTR_IFINDEX]);
+ if (!__dev_get_by_index(cfg->vxlan->net, cfg->remote_ifindex)) {
+ NL_SET_ERR_MSG_MOD(extack, "Outgoing interface not found");
+ err = -EINVAL;
+ goto err_src_list_fini;
+ }
+ }
+
+ if (mdbe_attrs[MDBE_ATTR_SRC_VNI])
+ cfg->group.vni =
+ cpu_to_be32(nla_get_u32(mdbe_attrs[MDBE_ATTR_SRC_VNI]));
+
+ return 0;
+
+err_src_list_fini:
+ vxlan_mdb_config_src_list_fini(cfg);
+ return err;
+}
+
+static int vxlan_mdb_config_init(struct vxlan_mdb_config *cfg,
+ struct net_device *dev, struct nlattr *tb[],
+ u16 nlmsg_flags,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(tb[MDBA_SET_ENTRY]);
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+
+ memset(cfg, 0, sizeof(*cfg));
+ cfg->vxlan = vxlan;
+ cfg->group.vni = vxlan->default_dst.remote_vni;
+ INIT_LIST_HEAD(&cfg->src_list);
+ cfg->nlflags = nlmsg_flags;
+ cfg->filter_mode = MCAST_EXCLUDE;
+ cfg->rt_protocol = RTPROT_STATIC;
+ cfg->remote_vni = vxlan->default_dst.remote_vni;
+ cfg->remote_port = vxlan->cfg.dst_port;
+
+ if (entry->ifindex != dev->ifindex) {
+ NL_SET_ERR_MSG_MOD(extack, "Port net device must be the VXLAN net device");
+ return -EINVAL;
+ }
+
+ /* State is not part of the entry key and can be ignored on deletion
+ * requests.
+ */
+ if ((nlmsg_flags & (NLM_F_CREATE | NLM_F_REPLACE)) &&
+ entry->state != MDB_PERMANENT) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry must be permanent");
+ return -EINVAL;
+ }
+
+ if (entry->flags) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid MDB entry flags");
+ return -EINVAL;
+ }
+
+ if (entry->vid) {
+ NL_SET_ERR_MSG_MOD(extack, "VID must not be specified");
+ return -EINVAL;
+ }
+
+ if (entry->addr.proto != htons(ETH_P_IP) &&
+ entry->addr.proto != htons(ETH_P_IPV6)) {
+ NL_SET_ERR_MSG_MOD(extack, "Group address must be an IPv4 / IPv6 address");
+ return -EINVAL;
+ }
+
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_SET_ENTRY_ATTRS)) {
+ NL_SET_ERR_MSG_MOD(extack, "Missing MDBA_SET_ENTRY_ATTRS attribute");
+ return -EINVAL;
+ }
+
+ return vxlan_mdb_config_attrs_init(cfg, entry, tb[MDBA_SET_ENTRY_ATTRS],
+ extack);
+}
+
+static void vxlan_mdb_config_fini(struct vxlan_mdb_config *cfg)
+{
+ vxlan_mdb_config_src_list_fini(cfg);
+}
+
+static struct vxlan_mdb_entry *
+vxlan_mdb_entry_lookup(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry_key *group)
+{
+ return rhashtable_lookup_fast(&vxlan->mdb_tbl, group,
+ vxlan_mdb_rht_params);
+}
+
+static struct vxlan_mdb_remote *
+vxlan_mdb_remote_lookup(const struct vxlan_mdb_entry *mdb_entry,
+ const union vxlan_addr *addr)
+{
+ struct vxlan_mdb_remote *remote;
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list) {
+ struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
+
+ if (vxlan_addr_equal(addr, &rd->remote_ip))
+ return remote;
+ }
+
+ return NULL;
+}
+
+static void vxlan_mdb_rdst_free(struct rcu_head *head)
+{
+ struct vxlan_rdst *rd = container_of(head, struct vxlan_rdst, rcu);
+
+ dst_cache_destroy(&rd->dst_cache);
+ kfree(rd);
+}
+
+static int vxlan_mdb_remote_rdst_init(const struct vxlan_mdb_config *cfg,
+ struct vxlan_mdb_remote *remote)
+{
+ struct vxlan_rdst *rd;
+ int err;
+
+ rd = kzalloc(sizeof(*rd), GFP_KERNEL);
+ if (!rd)
+ return -ENOMEM;
+
+ err = dst_cache_init(&rd->dst_cache, GFP_KERNEL);
+ if (err)
+ goto err_free_rdst;
+
+ rd->remote_ip = cfg->remote_ip;
+ rd->remote_port = cfg->remote_port;
+ rd->remote_vni = cfg->remote_vni;
+ rd->remote_ifindex = cfg->remote_ifindex;
+ rcu_assign_pointer(remote->rd, rd);
+
+ return 0;
+
+err_free_rdst:
+ kfree(rd);
+ return err;
+}
+
+static void vxlan_mdb_remote_rdst_fini(struct vxlan_rdst *rd)
+{
+ call_rcu(&rd->rcu, vxlan_mdb_rdst_free);
+}
+
+static int vxlan_mdb_remote_init(const struct vxlan_mdb_config *cfg,
+ struct vxlan_mdb_remote *remote)
+{
+ int err;
+
+ err = vxlan_mdb_remote_rdst_init(cfg, remote);
+ if (err)
+ return err;
+
+ remote->flags = cfg->flags;
+ remote->filter_mode = cfg->filter_mode;
+ remote->rt_protocol = cfg->rt_protocol;
+ INIT_HLIST_HEAD(&remote->src_list);
+
+ return 0;
+}
+
+static void vxlan_mdb_remote_fini(struct vxlan_dev *vxlan,
+ struct vxlan_mdb_remote *remote)
+{
+ WARN_ON_ONCE(!hlist_empty(&remote->src_list));
+ vxlan_mdb_remote_rdst_fini(rtnl_dereference(remote->rd));
+}
+
+static struct vxlan_mdb_src_entry *
+vxlan_mdb_remote_src_entry_lookup(const struct vxlan_mdb_remote *remote,
+ const union vxlan_addr *addr)
+{
+ struct vxlan_mdb_src_entry *ent;
+
+ hlist_for_each_entry(ent, &remote->src_list, node) {
+ if (vxlan_addr_equal(&ent->addr, addr))
+ return ent;
+ }
+
+ return NULL;
+}
+
+static struct vxlan_mdb_src_entry *
+vxlan_mdb_remote_src_entry_add(struct vxlan_mdb_remote *remote,
+ const union vxlan_addr *addr)
+{
+ struct vxlan_mdb_src_entry *ent;
+
+ ent = kzalloc(sizeof(*ent), GFP_KERNEL);
+ if (!ent)
+ return NULL;
+
+ ent->addr = *addr;
+ hlist_add_head(&ent->node, &remote->src_list);
+
+ return ent;
+}
+
+static void
+vxlan_mdb_remote_src_entry_del(struct vxlan_mdb_src_entry *ent)
+{
+ hlist_del(&ent->node);
+ kfree(ent);
+}
+
+static int
+vxlan_mdb_remote_src_fwd_add(const struct vxlan_mdb_config *cfg,
+ const union vxlan_addr *addr,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_config sg_cfg;
+
+ memset(&sg_cfg, 0, sizeof(sg_cfg));
+ sg_cfg.vxlan = cfg->vxlan;
+ sg_cfg.group.src = *addr;
+ sg_cfg.group.dst = cfg->group.dst;
+ sg_cfg.group.vni = cfg->group.vni;
+ INIT_LIST_HEAD(&sg_cfg.src_list);
+ sg_cfg.remote_ip = cfg->remote_ip;
+ sg_cfg.remote_ifindex = cfg->remote_ifindex;
+ sg_cfg.remote_vni = cfg->remote_vni;
+ sg_cfg.remote_port = cfg->remote_port;
+ sg_cfg.nlflags = cfg->nlflags;
+ sg_cfg.filter_mode = MCAST_INCLUDE;
+ if (cfg->filter_mode == MCAST_EXCLUDE)
+ sg_cfg.flags = VXLAN_MDB_REMOTE_F_BLOCKED;
+ sg_cfg.rt_protocol = cfg->rt_protocol;
+
+ return __vxlan_mdb_add(&sg_cfg, extack);
+}
+
+static void
+vxlan_mdb_remote_src_fwd_del(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry_key *group,
+ const struct vxlan_mdb_remote *remote,
+ const union vxlan_addr *addr)
+{
+ struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
+ struct vxlan_mdb_config sg_cfg;
+
+ memset(&sg_cfg, 0, sizeof(sg_cfg));
+ sg_cfg.vxlan = vxlan;
+ sg_cfg.group.src = *addr;
+ sg_cfg.group.dst = group->dst;
+ sg_cfg.group.vni = group->vni;
+ INIT_LIST_HEAD(&sg_cfg.src_list);
+ sg_cfg.remote_ip = rd->remote_ip;
+
+ __vxlan_mdb_del(&sg_cfg, NULL);
+}
+
+static int
+vxlan_mdb_remote_src_add(const struct vxlan_mdb_config *cfg,
+ struct vxlan_mdb_remote *remote,
+ const struct vxlan_mdb_config_src_entry *src,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_src_entry *ent;
+ int err;
+
+ ent = vxlan_mdb_remote_src_entry_lookup(remote, &src->addr);
+ if (!ent) {
+ ent = vxlan_mdb_remote_src_entry_add(remote, &src->addr);
+ if (!ent)
+ return -ENOMEM;
+ } else if (!(cfg->nlflags & NLM_F_REPLACE)) {
+ NL_SET_ERR_MSG_MOD(extack, "Source entry already exists");
+ return -EEXIST;
+ }
+
+ err = vxlan_mdb_remote_src_fwd_add(cfg, &ent->addr, extack);
+ if (err)
+ goto err_src_del;
+
+ /* Clear flags in case source entry was marked for deletion as part of
+ * replace flow.
+ */
+ ent->flags = 0;
+
+ return 0;
+
+err_src_del:
+ vxlan_mdb_remote_src_entry_del(ent);
+ return err;
+}
+
+static void vxlan_mdb_remote_src_del(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry_key *group,
+ const struct vxlan_mdb_remote *remote,
+ struct vxlan_mdb_src_entry *ent)
+{
+ vxlan_mdb_remote_src_fwd_del(vxlan, group, remote, &ent->addr);
+ vxlan_mdb_remote_src_entry_del(ent);
+}
+
+static int vxlan_mdb_remote_srcs_add(const struct vxlan_mdb_config *cfg,
+ struct vxlan_mdb_remote *remote,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_config_src_entry *src;
+ struct vxlan_mdb_src_entry *ent;
+ struct hlist_node *tmp;
+ int err;
+
+ list_for_each_entry(src, &cfg->src_list, node) {
+ err = vxlan_mdb_remote_src_add(cfg, remote, src, extack);
+ if (err)
+ goto err_src_del;
+ }
+
+ return 0;
+
+err_src_del:
+ hlist_for_each_entry_safe(ent, tmp, &remote->src_list, node)
+ vxlan_mdb_remote_src_del(cfg->vxlan, &cfg->group, remote, ent);
+ return err;
+}
+
+static void vxlan_mdb_remote_srcs_del(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry_key *group,
+ struct vxlan_mdb_remote *remote)
+{
+ struct vxlan_mdb_src_entry *ent;
+ struct hlist_node *tmp;
+
+ hlist_for_each_entry_safe(ent, tmp, &remote->src_list, node)
+ vxlan_mdb_remote_src_del(vxlan, group, remote, ent);
+}
+
+static size_t
+vxlan_mdb_nlmsg_src_list_size(const struct vxlan_mdb_entry_key *group,
+ const struct vxlan_mdb_remote *remote)
+{
+ struct vxlan_mdb_src_entry *ent;
+ size_t nlmsg_size;
+
+ if (hlist_empty(&remote->src_list))
+ return 0;
+
+ /* MDBA_MDB_EATTR_SRC_LIST */
+ nlmsg_size = nla_total_size(0);
+
+ hlist_for_each_entry(ent, &remote->src_list, node) {
+ /* MDBA_MDB_SRCLIST_ENTRY */
+ nlmsg_size += nla_total_size(0) +
+ /* MDBA_MDB_SRCATTR_ADDRESS */
+ nla_total_size(vxlan_addr_size(&group->dst)) +
+ /* MDBA_MDB_SRCATTR_TIMER */
+ nla_total_size(sizeof(u8));
+ }
+
+ return nlmsg_size;
+}
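The sizing helper above, like vxlan_mdb_nlmsg_size() below it, pre-computes the skb allocation by summing nla_total_size() for every attribute it intends to emit. A minimal userspace model of that arithmetic, assuming the usual netlink layout (4-byte attribute header, payload padded to 4-byte alignment, as in include/net/netlink.h):

#include <stdio.h>

#define NLA_ALIGNTO	4U
#define NLA_ALIGN(x)	(((x) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
#define NLA_HDRLEN	NLA_ALIGN(4U)		/* struct nlattr is 4 bytes */

/* mirrors nla_total_size(): header plus payload, padded for the next attr */
static unsigned int nla_total_size(unsigned int payload)
{
	return NLA_ALIGN(NLA_HDRLEN + payload);
}

int main(void)
{
	/* e.g. one MDBA_MDB_EATTR_SRC_LIST nest holding two IPv4 sources */
	unsigned int sz = nla_total_size(0);		/* the nest itself */

	for (int i = 0; i < 2; i++)
		sz += nla_total_size(0) +	/* MDBA_MDB_SRCLIST_ENTRY nest */
		      nla_total_size(4) +	/* MDBA_MDB_SRCATTR_ADDRESS (IPv4) */
		      nla_total_size(1);	/* MDBA_MDB_SRCATTR_TIMER (u8) */

	printf("%u bytes reserved for the source list\n", sz);	/* 4 + 2 * 20 = 44 */
	return 0;
}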
+
+static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
+{
+ const struct vxlan_mdb_entry_key *group = &mdb_entry->key;
+ struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
+ size_t nlmsg_size;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY_INFO */
+ nla_total_size(sizeof(struct br_mdb_entry)) +
+ /* MDBA_MDB_EATTR_TIMER */
+ nla_total_size(sizeof(u32));
+ /* MDBA_MDB_EATTR_SOURCE */
+ if (vxlan_mdb_is_sg(group))
+ nlmsg_size += nla_total_size(vxlan_addr_size(&group->dst));
+ /* MDBA_MDB_EATTR_RTPROT */
+ nlmsg_size += nla_total_size(sizeof(u8));
+ /* MDBA_MDB_EATTR_SRC_LIST */
+ nlmsg_size += vxlan_mdb_nlmsg_src_list_size(group, remote);
+ /* MDBA_MDB_EATTR_GROUP_MODE */
+ nlmsg_size += nla_total_size(sizeof(u8));
+ /* MDBA_MDB_EATTR_DST */
+ nlmsg_size += nla_total_size(vxlan_addr_size(&rd->remote_ip));
+ /* MDBA_MDB_EATTR_DST_PORT */
+ if (rd->remote_port && rd->remote_port != vxlan->cfg.dst_port)
+ nlmsg_size += nla_total_size(sizeof(u16));
+ /* MDBA_MDB_EATTR_VNI */
+ if (rd->remote_vni != vxlan->default_dst.remote_vni)
+ nlmsg_size += nla_total_size(sizeof(u32));
+ /* MDBA_MDB_EATTR_IFINDEX */
+ if (rd->remote_ifindex)
+ nlmsg_size += nla_total_size(sizeof(u32));
+ /* MDBA_MDB_EATTR_SRC_VNI */
+ if ((vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA) && group->vni)
+ nlmsg_size += nla_total_size(sizeof(u32));
+
+ return nlmsg_size;
+}
+
+static int vxlan_mdb_nlmsg_fill(const struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote,
+ int type)
+{
+ struct nlattr *mdb_nest, *mdb_entry_nest;
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+
+ nlh = nlmsg_put(skb, 0, 0, type, sizeof(*bpm), 0);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = vxlan->dev->ifindex;
+
+ mdb_nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!mdb_nest)
+ goto cancel;
+ mdb_entry_nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!mdb_entry_nest)
+ goto cancel;
+
+ if (vxlan_mdb_entry_info_fill(vxlan, skb, mdb_entry, remote))
+ goto cancel;
+
+ nla_nest_end(skb, mdb_entry_nest);
+ nla_nest_end(skb, mdb_nest);
+ nlmsg_end(skb, nlh);
+
+ return 0;
+
+cancel:
+ nlmsg_cancel(skb, nlh);
+ return -EMSGSIZE;
+}
+
+static void vxlan_mdb_remote_notify(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote,
+ int type)
+{
+ struct net *net = dev_net(vxlan->dev);
+ struct sk_buff *skb;
+ int err = -ENOBUFS;
+
+ skb = nlmsg_new(vxlan_mdb_nlmsg_size(vxlan, mdb_entry, remote),
+ GFP_KERNEL);
+ if (!skb)
+ goto errout;
+
+ err = vxlan_mdb_nlmsg_fill(vxlan, skb, mdb_entry, remote, type);
+ if (err) {
+ kfree_skb(skb);
+ goto errout;
+ }
+
+ rtnl_notify(skb, net, 0, RTNLGRP_MDB, NULL, GFP_KERNEL);
+ return;
+errout:
+ rtnl_set_sk_err(net, RTNLGRP_MDB, err);
+}
+
+static int
+vxlan_mdb_remote_srcs_replace(const struct vxlan_mdb_config *cfg,
+ const struct vxlan_mdb_entry *mdb_entry,
+ struct vxlan_mdb_remote *remote,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = cfg->vxlan;
+ struct vxlan_mdb_src_entry *ent;
+ struct hlist_node *tmp;
+ int err;
+
+ hlist_for_each_entry(ent, &remote->src_list, node)
+ ent->flags |= VXLAN_SGRP_F_DELETE;
+
+ err = vxlan_mdb_remote_srcs_add(cfg, remote, extack);
+ if (err)
+ goto err_clear_delete;
+
+ hlist_for_each_entry_safe(ent, tmp, &remote->src_list, node) {
+ if (ent->flags & VXLAN_SGRP_F_DELETE)
+ vxlan_mdb_remote_src_del(vxlan, &mdb_entry->key, remote,
+ ent);
+ }
+
+ return 0;
+
+err_clear_delete:
+ hlist_for_each_entry(ent, &remote->src_list, node)
+ ent->flags &= ~VXLAN_SGRP_F_DELETE;
+ return err;
+}
+
+static int vxlan_mdb_remote_replace(const struct vxlan_mdb_config *cfg,
+ const struct vxlan_mdb_entry *mdb_entry,
+ struct vxlan_mdb_remote *remote,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_rdst *new_rd, *old_rd = rtnl_dereference(remote->rd);
+ struct vxlan_dev *vxlan = cfg->vxlan;
+ int err;
+
+ err = vxlan_mdb_remote_rdst_init(cfg, remote);
+ if (err)
+ return err;
+ new_rd = rtnl_dereference(remote->rd);
+
+ err = vxlan_mdb_remote_srcs_replace(cfg, mdb_entry, remote, extack);
+ if (err)
+ goto err_rdst_reset;
+
+ WRITE_ONCE(remote->flags, cfg->flags);
+ WRITE_ONCE(remote->filter_mode, cfg->filter_mode);
+ remote->rt_protocol = cfg->rt_protocol;
+ vxlan_mdb_remote_notify(vxlan, mdb_entry, remote, RTM_NEWMDB);
+
+ vxlan_mdb_remote_rdst_fini(old_rd);
+
+ return 0;
+
+err_rdst_reset:
+ rcu_assign_pointer(remote->rd, old_rd);
+ vxlan_mdb_remote_rdst_fini(new_rd);
+ return err;
+}
+
+static int vxlan_mdb_remote_add(const struct vxlan_mdb_config *cfg,
+ struct vxlan_mdb_entry *mdb_entry,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_remote *remote;
+ int err;
+
+ remote = vxlan_mdb_remote_lookup(mdb_entry, &cfg->remote_ip);
+ if (remote) {
+ if (!(cfg->nlflags & NLM_F_REPLACE)) {
+ NL_SET_ERR_MSG_MOD(extack, "Replace not specified and MDB remote entry already exists");
+ return -EEXIST;
+ }
+ return vxlan_mdb_remote_replace(cfg, mdb_entry, remote, extack);
+ }
+
+ if (!(cfg->nlflags & NLM_F_CREATE)) {
+ NL_SET_ERR_MSG_MOD(extack, "Create not specified and entry does not exist");
+ return -ENOENT;
+ }
+
+ remote = kzalloc(sizeof(*remote), GFP_KERNEL);
+ if (!remote)
+ return -ENOMEM;
+
+ err = vxlan_mdb_remote_init(cfg, remote);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to initialize remote MDB entry");
+ goto err_free_remote;
+ }
+
+ err = vxlan_mdb_remote_srcs_add(cfg, remote, extack);
+ if (err)
+ goto err_remote_fini;
+
+ list_add_rcu(&remote->list, &mdb_entry->remotes);
+ vxlan_mdb_remote_notify(cfg->vxlan, mdb_entry, remote, RTM_NEWMDB);
+
+ return 0;
+
+err_remote_fini:
+ vxlan_mdb_remote_fini(cfg->vxlan, remote);
+err_free_remote:
+ kfree(remote);
+ return err;
+}
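The branch structure above mirrors standard rtnetlink flag semantics: replacing an existing remote requires NLM_F_REPLACE, while creating a new one requires NLM_F_CREATE. A condensed userspace sketch of that decision table, reusing the same flag and errno values:

#include <errno.h>
#include <stdbool.h>
#include <linux/netlink.h>

/* 0 means proceed (replace or create); negative errno mirrors the code above */
static int mdb_remote_add_policy(bool remote_exists, unsigned int nlmsg_flags)
{
	if (remote_exists)
		return (nlmsg_flags & NLM_F_REPLACE) ? 0 : -EEXIST;

	return (nlmsg_flags & NLM_F_CREATE) ? 0 : -ENOENT;
}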
+
+static void vxlan_mdb_remote_del(struct vxlan_dev *vxlan,
+ struct vxlan_mdb_entry *mdb_entry,
+ struct vxlan_mdb_remote *remote)
+{
+ vxlan_mdb_remote_notify(vxlan, mdb_entry, remote, RTM_DELMDB);
+ list_del_rcu(&remote->list);
+ vxlan_mdb_remote_srcs_del(vxlan, &mdb_entry->key, remote);
+ vxlan_mdb_remote_fini(vxlan, remote);
+ kfree_rcu(remote, rcu);
+}
+
+static struct vxlan_mdb_entry *
+vxlan_mdb_entry_get(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry_key *group)
+{
+ struct vxlan_mdb_entry *mdb_entry;
+ int err;
+
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, group);
+ if (mdb_entry)
+ return mdb_entry;
+
+ mdb_entry = kzalloc(sizeof(*mdb_entry), GFP_KERNEL);
+ if (!mdb_entry)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&mdb_entry->remotes);
+ memcpy(&mdb_entry->key, group, sizeof(mdb_entry->key));
+ hlist_add_head(&mdb_entry->mdb_node, &vxlan->mdb_list);
+
+ err = rhashtable_lookup_insert_fast(&vxlan->mdb_tbl,
+ &mdb_entry->rhnode,
+ vxlan_mdb_rht_params);
+ if (err)
+ goto err_free_entry;
+
+ if (hlist_is_singular_node(&mdb_entry->mdb_node, &vxlan->mdb_list))
+ vxlan->cfg.flags |= VXLAN_F_MDB;
+
+ return mdb_entry;
+
+err_free_entry:
+ hlist_del(&mdb_entry->mdb_node);
+ kfree(mdb_entry);
+ return ERR_PTR(err);
+}
+
+static void vxlan_mdb_entry_put(struct vxlan_dev *vxlan,
+ struct vxlan_mdb_entry *mdb_entry)
+{
+ if (!list_empty(&mdb_entry->remotes))
+ return;
+
+ if (hlist_is_singular_node(&mdb_entry->mdb_node, &vxlan->mdb_list))
+ vxlan->cfg.flags &= ~VXLAN_F_MDB;
+
+ rhashtable_remove_fast(&vxlan->mdb_tbl, &mdb_entry->rhnode,
+ vxlan_mdb_rht_params);
+ hlist_del(&mdb_entry->mdb_node);
+ kfree_rcu(mdb_entry, rcu);
+}
+
+static int __vxlan_mdb_add(const struct vxlan_mdb_config *cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = cfg->vxlan;
+ struct vxlan_mdb_entry *mdb_entry;
+ int err;
+
+ mdb_entry = vxlan_mdb_entry_get(vxlan, &cfg->group);
+ if (IS_ERR(mdb_entry))
+ return PTR_ERR(mdb_entry);
+
+ err = vxlan_mdb_remote_add(cfg, mdb_entry, extack);
+ if (err)
+ goto err_entry_put;
+
+ vxlan->mdb_seq++;
+
+ return 0;
+
+err_entry_put:
+ vxlan_mdb_entry_put(vxlan, mdb_entry);
+ return err;
+}
+
+static int __vxlan_mdb_del(const struct vxlan_mdb_config *cfg,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = cfg->vxlan;
+ struct vxlan_mdb_entry *mdb_entry;
+ struct vxlan_mdb_remote *remote;
+
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, &cfg->group);
+ if (!mdb_entry) {
+ NL_SET_ERR_MSG_MOD(extack, "Did not find MDB entry");
+ return -ENOENT;
+ }
+
+ remote = vxlan_mdb_remote_lookup(mdb_entry, &cfg->remote_ip);
+ if (!remote) {
+ NL_SET_ERR_MSG_MOD(extack, "Did not find MDB remote entry");
+ return -ENOENT;
+ }
+
+ vxlan_mdb_remote_del(vxlan, mdb_entry, remote);
+ vxlan_mdb_entry_put(vxlan, mdb_entry);
+
+ vxlan->mdb_seq++;
+
+ return 0;
+}
+
+int vxlan_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_config cfg;
+ int err;
+
+ ASSERT_RTNL();
+
+ err = vxlan_mdb_config_init(&cfg, dev, tb, nlmsg_flags, extack);
+ if (err)
+ return err;
+
+ err = __vxlan_mdb_add(&cfg, extack);
+
+ vxlan_mdb_config_fini(&cfg);
+ return err;
+}
+
+int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_mdb_config cfg;
+ int err;
+
+ ASSERT_RTNL();
+
+ err = vxlan_mdb_config_init(&cfg, dev, tb, 0, extack);
+ if (err)
+ return err;
+
+ err = __vxlan_mdb_del(&cfg, extack);
+
+ vxlan_mdb_config_fini(&cfg);
+ return err;
+}
+
+struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ __be32 src_vni)
+{
+ struct vxlan_mdb_entry *mdb_entry;
+ struct vxlan_mdb_entry_key group;
+
+ if (!is_multicast_ether_addr(eth_hdr(skb)->h_dest) ||
+ is_broadcast_ether_addr(eth_hdr(skb)->h_dest))
+ return NULL;
+
+ /* When not in collect metadata mode, 'src_vni' is zero, but MDB
+ * entries are stored with the VNI of the VXLAN device.
+ */
+ if (!(vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA))
+ src_vni = vxlan->default_dst.remote_vni;
+
+ memset(&group, 0, sizeof(group));
+ group.vni = src_vni;
+
+ switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ if (!pskb_may_pull(skb, sizeof(struct iphdr)))
+ return NULL;
+ group.dst.sa.sa_family = AF_INET;
+ group.dst.sin.sin_addr.s_addr = ip_hdr(skb)->daddr;
+ group.src.sa.sa_family = AF_INET;
+ group.src.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6):
+ if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
+ return NULL;
+ group.dst.sa.sa_family = AF_INET6;
+ group.dst.sin6.sin6_addr = ipv6_hdr(skb)->daddr;
+ group.src.sa.sa_family = AF_INET6;
+ group.src.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
+ break;
+#endif
+ default:
+ return NULL;
+ }
+
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, &group);
+ if (mdb_entry)
+ return mdb_entry;
+
+ memset(&group.src, 0, sizeof(group.src));
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, &group);
+ if (mdb_entry)
+ return mdb_entry;
+
+ /* No (S, G) or (*, G) found. Look up the all-zeros entry, but only if
+ * the destination IP address is not link-local multicast since we want
+ * to transmit such traffic together with broadcast and unknown unicast
+ * traffic.
+ */
+ switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ if (ipv4_is_local_multicast(group.dst.sin.sin_addr.s_addr))
+ return NULL;
+ group.dst.sin.sin_addr.s_addr = 0;
+ break;
+#if IS_ENABLED(CONFIG_IPV6)
+ case htons(ETH_P_IPV6):
+ if (ipv6_addr_type(&group.dst.sin6.sin6_addr) &
+ IPV6_ADDR_LINKLOCAL)
+ return NULL;
+ memset(&group.dst.sin6.sin6_addr, 0,
+ sizeof(group.dst.sin6.sin6_addr));
+ break;
+#endif
+ default:
+ return NULL;
+ }
+
+ return vxlan_mdb_entry_lookup(vxlan, &group);
+}
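vxlan_mdb_entry_skb_get() above resolves a packet to an MDB entry from most to least specific: an exact (S, G) match first, then (*, G) with the source zeroed, and finally the all-zeros catch-all, which is skipped for link-local multicast so that traffic keeps following the broadcast/unknown-unicast path. A minimal userspace model of that fallback order; the key layout and table_lookup() are hypothetical stand-ins for the rhashtable lookup:

#include <stdbool.h>
#include <stddef.h>

struct key { unsigned int src, dst, vni; };	/* simplified lookup key */

/* hypothetical table holding a single (*, G) entry for illustration */
static const struct key table[] = { { .src = 0, .dst = 0xef000001, .vni = 100 } };

static const void *table_lookup(const struct key *k)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].src == k->src && table[i].dst == k->dst &&
		    table[i].vni == k->vni)
			return &table[i];
	return NULL;
}

static const void *mdb_lookup(struct key k, bool dst_is_link_local)
{
	const void *entry;

	entry = table_lookup(&k);		/* (S, G) */
	if (entry)
		return entry;

	k.src = 0;				/* (*, G) */
	entry = table_lookup(&k);
	if (entry)
		return entry;

	if (dst_is_link_local)			/* keep link-local with BUM traffic */
		return NULL;

	k.dst = 0;				/* all-zeros catch-all */
	return table_lookup(&k);
}

int main(void)
{
	struct key k = { .src = 0x0a000001, .dst = 0xef000001, .vni = 100 };

	return mdb_lookup(k, false) ? 0 : 1;	/* falls back to the (*, G) entry */
}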
+
+netdev_tx_t vxlan_mdb_xmit(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ struct sk_buff *skb)
+{
+ struct vxlan_mdb_remote *remote, *fremote = NULL;
+ __be32 src_vni = mdb_entry->key.vni;
+
+ list_for_each_entry_rcu(remote, &mdb_entry->remotes, list) {
+ struct sk_buff *skb1;
+
+ if ((vxlan_mdb_is_star_g(&mdb_entry->key) &&
+ READ_ONCE(remote->filter_mode) == MCAST_INCLUDE) ||
+ (READ_ONCE(remote->flags) & VXLAN_MDB_REMOTE_F_BLOCKED))
+ continue;
+
+ if (!fremote) {
+ fremote = remote;
+ continue;
+ }
+
+ skb1 = skb_clone(skb, GFP_ATOMIC);
+ if (skb1)
+ vxlan_xmit_one(skb1, vxlan->dev, src_vni,
+ rcu_dereference(remote->rd), false);
+ }
+
+ if (fremote)
+ vxlan_xmit_one(skb, vxlan->dev, src_vni,
+ rcu_dereference(fremote->rd), false);
+ else
+ kfree_skb(skb);
+
+ return NETDEV_TX_OK;
+}
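vxlan_mdb_xmit() above replicates the packet to every eligible remote but defers the first one (fremote): clones go to every later remote and the original skb is consumed by the deferred transmission at the end, which saves one copy and guarantees the skb is always either transmitted or freed. A minimal userspace analogue of that pattern, with send_one() and strdup() standing in for vxlan_xmit_one() and skb_clone():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void send_one(char *buf, int dest)	/* consumes buf, like vxlan_xmit_one() */
{
	printf("-> dest %d: %s\n", dest, buf);
	free(buf);
}

static void replicate(char *buf, const int *dests, int n)
{
	int first = -1;

	for (int i = 0; i < n; i++) {
		char *copy;

		if (first < 0) {		/* defer the first destination */
			first = i;
			continue;
		}
		copy = strdup(buf);		/* clone for every later destination */
		if (copy)
			send_one(copy, dests[i]);
	}

	if (first >= 0)
		send_one(buf, dests[first]);	/* original goes to the deferred one */
	else
		free(buf);			/* no destination wanted it */
}

int main(void)
{
	int dests[] = { 10, 11, 12 };
	char *buf = strdup("payload");

	if (buf)
		replicate(buf, dests, 3);
	return 0;
}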
+
+static void vxlan_mdb_check_empty(void *ptr, void *arg)
+{
+ WARN_ON_ONCE(1);
+}
+
+static void vxlan_mdb_remotes_flush(struct vxlan_dev *vxlan,
+ struct vxlan_mdb_entry *mdb_entry)
+{
+ struct vxlan_mdb_remote *remote, *tmp;
+
+ list_for_each_entry_safe(remote, tmp, &mdb_entry->remotes, list)
+ vxlan_mdb_remote_del(vxlan, mdb_entry, remote);
+}
+
+static void vxlan_mdb_entries_flush(struct vxlan_dev *vxlan)
+{
+ struct vxlan_mdb_entry *mdb_entry;
+ struct hlist_node *tmp;
+
+ /* The removal of an entry cannot trigger the removal of another entry
+ * since entries are always added to the head of the list.
+ */
+ hlist_for_each_entry_safe(mdb_entry, tmp, &vxlan->mdb_list, mdb_node) {
+ vxlan_mdb_remotes_flush(vxlan, mdb_entry);
+ vxlan_mdb_entry_put(vxlan, mdb_entry);
+ }
+}
+
+int vxlan_mdb_init(struct vxlan_dev *vxlan)
+{
+ int err;
+
+ err = rhashtable_init(&vxlan->mdb_tbl, &vxlan_mdb_rht_params);
+ if (err)
+ return err;
+
+ INIT_HLIST_HEAD(&vxlan->mdb_list);
+
+ return 0;
+}
+
+void vxlan_mdb_fini(struct vxlan_dev *vxlan)
+{
+ vxlan_mdb_entries_flush(vxlan);
+ WARN_ON_ONCE(vxlan->cfg.flags & VXLAN_F_MDB);
+ rhashtable_free_and_destroy(&vxlan->mdb_tbl, vxlan_mdb_check_empty,
+ NULL);
+}
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
+static inline int vxlan_nla_get_addr(union vxlan_addr *ip,
+ const struct nlattr *nla)
+{
+ if (nla_len(nla) >= sizeof(struct in6_addr)) {
+ ip->sin6.sin6_addr = nla_get_in6_addr(nla);
+ ip->sa.sa_family = AF_INET6;
+ return 0;
+ } else if (nla_len(nla) >= sizeof(__be32)) {
+ ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
+ ip->sa.sa_family = AF_INET;
+ return 0;
+ } else {
+ return -EAFNOSUPPORT;
+ }
+}
+
+static inline int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
+ const union vxlan_addr *ip)
+{
+ if (ip->sa.sa_family == AF_INET6)
+ return nla_put_in6_addr(skb, attr, &ip->sin6.sin6_addr);
+ else
+ return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
+}
+
+static inline bool vxlan_addr_is_multicast(const union vxlan_addr *ip)
+{
+ if (ip->sa.sa_family == AF_INET6)
+ return ipv6_addr_is_multicast(&ip->sin6.sin6_addr);
+ else
+ return ipv4_is_multicast(ip->sin.sin_addr.s_addr);
+}
+
#else /* !CONFIG_IPV6 */
static inline
return a->sin.sin_addr.s_addr == b->sin.sin_addr.s_addr;
}
+static inline int vxlan_nla_get_addr(union vxlan_addr *ip,
+ const struct nlattr *nla)
+{
+ if (nla_len(nla) >= sizeof(struct in6_addr)) {
+ return -EAFNOSUPPORT;
+ } else if (nla_len(nla) >= sizeof(__be32)) {
+ ip->sin.sin_addr.s_addr = nla_get_in_addr(nla);
+ ip->sa.sa_family = AF_INET;
+ return 0;
+ } else {
+ return -EAFNOSUPPORT;
+ }
+}
+
+static inline int vxlan_nla_put_addr(struct sk_buff *skb, int attr,
+ const union vxlan_addr *ip)
+{
+ return nla_put_in_addr(skb, attr, ip->sin.sin_addr.s_addr);
+}
+
+static inline bool vxlan_addr_is_multicast(const union vxlan_addr *ip)
+{
+ return ipv4_is_multicast(ip->sin.sin_addr.s_addr);
+}
+
#endif
+static inline size_t vxlan_addr_size(const union vxlan_addr *ip)
+{
+ if (ip->sa.sa_family == AF_INET6)
+ return sizeof(struct in6_addr);
+ else
+ return sizeof(__be32);
+}
+
static inline struct vxlan_vni_node *
vxlan_vnifilter_lookup(struct vxlan_dev *vxlan, __be32 vni)
{
__be16 port, __be32 src_vni, __be32 vni,
__u32 ifindex, __u16 ndm_flags, u32 nhid,
bool swdev_notify, struct netlink_ext_ack *extack);
+void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ __be32 default_vni, struct vxlan_rdst *rdst, bool did_rsc);
int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
struct vxlan_config *conf, __be32 vni);
int rifindex);
int vxlan_igmp_leave(struct vxlan_dev *vxlan, union vxlan_addr *rip,
int rifindex);
+
+/* vxlan_mdb.c */
+int vxlan_mdb_dump(struct net_device *dev, struct sk_buff *skb,
+ struct netlink_callback *cb);
+int vxlan_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
+ struct netlink_ext_ack *extack);
+int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack);
+struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ __be32 src_vni);
+netdev_tx_t vxlan_mdb_xmit(struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ struct sk_buff *skb);
+int vxlan_mdb_init(struct vxlan_dev *vxlan);
+void vxlan_mdb_fini(struct vxlan_dev *vxlan);
#endif
source "drivers/net/wireless/zydas/Kconfig"
source "drivers/net/wireless/quantenna/Kconfig"
-config PCMCIA_RAYCS
- tristate "Aviator/Raytheon 2.4GHz wireless support"
- depends on PCMCIA
- select WIRELESS_EXT
- select WEXT_SPY
- select WEXT_PRIV
- help
- Say Y here if you intend to attach an Aviator/Raytheon PCMCIA
- (PC-card) wireless Ethernet networking card to your computer.
- Please read the file
- <file:Documentation/networking/device_drivers/wifi/ray_cs.rst> for
- details.
-
- To compile this driver as a module, choose M here: the module will be
- called ray_cs. If unsure, say N.
-
-config PCMCIA_WL3501
- tristate "Planet WL3501 PCMCIA cards"
- depends on CFG80211 && PCMCIA
- select WIRELESS_EXT
- select WEXT_SPY
- help
- A driver for WL3501 PCMCIA 802.11 wireless cards made by Planet.
- It has basic support for Linux wireless extensions and initial
- micro support for ethtool.
-
-config MAC80211_HWSIM
- tristate "Simulated radio testing tool for mac80211"
- depends on MAC80211
- help
- This driver is a developer testing tool that can be used to test
- IEEE 802.11 networking stack (mac80211) functionality. This is not
- needed for normal wireless LAN usage and is only for testing. See
- Documentation/networking/mac80211_hwsim for more information on how
- to use this tool.
-
- To compile this driver as a module, choose M here: the module will be
- called mac80211_hwsim. If unsure, say N.
+source "drivers/net/wireless/legacy/Kconfig"
-config USB_NET_RNDIS_WLAN
- tristate "Wireless RNDIS USB support"
- depends on USB
- depends on CFG80211
- select USB_NET_DRIVERS
- select USB_USBNET
- select USB_NET_CDCETHER
- select USB_NET_RNDIS_HOST
- help
- This is a driver for wireless RNDIS devices.
- These are USB based adapters found in devices such as:
-
- Buffalo WLI-U2-KG125S
- U.S. Robotics USR5421
- Belkin F5D7051
- Linksys WUSB54GSv2
- Linksys WUSB54GSC
- Asus WL169gE
- Eminent EM4045
- BT Voyager 1055
- Linksys WUSB54GSv1
- U.S. Robotics USR5420
- BUFFALO WLI-USB-G54
-
- All of these devices are based on Broadcom 4320 chip which is the
- only wireless RNDIS chip known to date.
-
- If you choose to build a module, it'll be called rndis_wlan.
-
-config VIRT_WIFI
- tristate "Wifi wrapper for ethernet drivers"
- depends on CFG80211
- help
- This option adds support for ethernet connections to appear as if they
- are wifi connections through a special rtnetlink device.
+source "drivers/net/wireless/virtual/Kconfig"
endif # WLAN
obj-$(CONFIG_WLAN_VENDOR_TI) += ti/
obj-$(CONFIG_WLAN_VENDOR_ZYDAS) += zydas/
-# 16-bit wireless PCMCIA client drivers
-obj-$(CONFIG_PCMCIA_RAYCS) += ray_cs.o
-obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o
-
-obj-$(CONFIG_USB_NET_RNDIS_WLAN) += rndis_wlan.o
-
-obj-$(CONFIG_MAC80211_HWSIM) += mac80211_hwsim.o
-
-obj-$(CONFIG_VIRT_WIFI) += virt_wifi.o
+obj-$(CONFIG_WLAN) += legacy/
+obj-$(CONFIG_WLAN) += virtual/
u8 kv_type;
u8 kv_pad;
u16 kv_len;
- u8 kv_val[16]; /* TK */
- u8 kv_mic[8]; /* Michael MIC key */
- u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
- * supports both MIC keys in the same key cache entry;
- * in that case, kv_mic is the RX key) */
+ struct_group(kv_values,
+ u8 kv_val[16]; /* TK */
+ u8 kv_mic[8]; /* Michael MIC key */
+ u8 kv_txmic[8]; /* Michael MIC TX key (used only if the hardware
+ * supports both MIC keys in the same key cache entry;
+ * in that case, kv_mic is the RX key) */
+ );
};
enum ath_cipher {
return addr;
}
-static inline u32 shadow_dst_wr_ind_addr(struct ath10k *ar,
- struct ath10k_ce_pipe *ce_state)
-{
- u32 ce_id = ce_state->id;
- u32 addr = 0;
-
- switch (ce_id) {
- case 1:
- addr = 0x00032034;
- break;
- case 2:
- addr = 0x00032038;
- break;
- case 5:
- addr = 0x00032044;
- break;
- case 7:
- addr = 0x0003204C;
- break;
- case 8:
- addr = 0x00032050;
- break;
- case 9:
- addr = 0x00032054;
- break;
- case 10:
- addr = 0x00032058;
- break;
- case 11:
- addr = 0x0003205C;
- break;
- default:
- ath10k_warn(ar, "invalid CE id: %d", ce_id);
- break;
- }
-
- return addr;
-}
-
static inline unsigned int
ath10k_set_ring_byte(unsigned int offset,
struct ath10k_hw_ce_regs_addr_map *addr_map)
host_ie_addr & ~(wm_regs->wm_mask));
}
-static inline void ath10k_ce_error_intr_enable(struct ath10k *ar,
- u32 ce_ctrl_addr)
-{
- struct ath10k_hw_ce_misc_regs *misc_regs = ar->hw_ce_regs->misc_regs;
-
- u32 misc_ie_addr = ath10k_ce_read32(ar, ce_ctrl_addr +
- ar->hw_ce_regs->misc_ie_addr);
-
- ath10k_ce_write32(ar,
- ce_ctrl_addr + ar->hw_ce_regs->misc_ie_addr,
- misc_ie_addr | misc_regs->err_mask);
-}
-
static inline void ath10k_ce_error_intr_disable(struct ath10k *ar,
u32 ce_ctrl_addr)
{
mutex_lock(&ar->conf_mutex);
- changed_flags &= SUPPORTED_FILTERS;
*total_flags &= SUPPORTED_FILTERS;
ar->filter_flags = *total_flags;
bitmap_clear(ar_snoc->pending_ce_irqs, 0, CE_COUNT_MAX);
+ dev_set_threaded(&ar->napi_dev, true);
ath10k_core_napi_enable(ar);
ath10k_snoc_irq_enable(ar);
ath10k_snoc_rx_post(ar);
ab->pci.msi.ep_base_data = int_prop + 32;
for (i = 0; i < ab->pci.msi.config->total_vectors; i++) {
- res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
- if (!res)
- return -ENODEV;
+ ret = platform_get_irq(pdev, i);
+ if (ret < 0)
+ return ret;
- ab->pci.msi.irqs[i] = res->start;
+ ab->pci.msi.irqs[i] = ret;
}
set_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags);
* to a new space for accessing them.
*/
ab->mem_ce = ioremap(ce_remap->base, ce_remap->size);
- if (IS_ERR(ab->mem_ce)) {
+ if (!ab->mem_ce) {
dev_err(&pdev->dev, "ce ioremap error\n");
ret = -ENOMEM;
goto err_core_free;
config->twt_ap_pdev_count = ab->num_radios;
config->twt_ap_sta_count = 1000;
config->flag1 |= WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64;
+ config->flag1 |= WMI_RSRC_CFG_FLAG1_ACK_RSSI;
}
static int ath11k_hw_mac_id_to_pdev_id_ipq8074(struct ath11k_hw_params *hw,
ath11k_smps_map[smps]);
}
+static bool ath11k_mac_set_he_txbf_conf(struct ath11k_vif *arvif)
+{
+ struct ath11k *ar = arvif->ar;
+ u32 param, value;
+ int ret;
+
+ if (!arvif->vif->bss_conf.he_support)
+ return true;
+
+ param = WMI_VDEV_PARAM_SET_HEMU_MODE;
+ value = 0;
+ if (arvif->vif->bss_conf.he_su_beamformer) {
+ value |= FIELD_PREP(HE_MODE_SU_TX_BFER, HE_SU_BFER_ENABLE);
+ if (arvif->vif->bss_conf.he_mu_beamformer &&
+ arvif->vdev_type == WMI_VDEV_TYPE_AP)
+ value |= FIELD_PREP(HE_MODE_MU_TX_BFER, HE_MU_BFER_ENABLE);
+ }
+
+ if (arvif->vif->type != NL80211_IFTYPE_MESH_POINT) {
+ value |= FIELD_PREP(HE_MODE_DL_OFDMA, HE_DL_MUOFDMA_ENABLE) |
+ FIELD_PREP(HE_MODE_UL_OFDMA, HE_UL_MUOFDMA_ENABLE);
+
+ if (arvif->vif->bss_conf.he_full_ul_mumimo)
+ value |= FIELD_PREP(HE_MODE_UL_MUMIMO, HE_UL_MUMIMO_ENABLE);
+
+ if (arvif->vif->bss_conf.he_su_beamformee)
+ value |= FIELD_PREP(HE_MODE_SU_TX_BFEE, HE_SU_BFEE_ENABLE);
+ }
+
+ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param, value);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to set vdev %d HE MU mode: %d\n",
+ arvif->vdev_id, ret);
+ return false;
+ }
+
+ param = WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE;
+ value = FIELD_PREP(HE_VHT_SOUNDING_MODE, HE_VHT_SOUNDING_MODE_ENABLE) |
+ FIELD_PREP(HE_TRIG_NONTRIG_SOUNDING_MODE,
+ HE_TRIG_NONTRIG_SOUNDING_MODE_ENABLE);
+ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+ param, value);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to set vdev %d sounding mode: %d\n",
+ arvif->vdev_id, ret);
+ return false;
+ }
+ return true;
+}
+
+static bool ath11k_mac_vif_recalc_sta_he_txbf(struct ath11k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta_he_cap *he_cap)
+{
+ struct ath11k_vif *arvif = (void *)vif->drv_priv;
+ struct ieee80211_he_cap_elem he_cap_elem = {0};
+ struct ieee80211_sta_he_cap *cap_band = NULL;
+ struct cfg80211_chan_def def;
+ u32 param = WMI_VDEV_PARAM_SET_HEMU_MODE;
+ u32 hemode = 0;
+ int ret;
+
+ if (!vif->bss_conf.he_support)
+ return true;
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ return false;
+
+ if (WARN_ON(ath11k_mac_vif_chan(vif, &def)))
+ return false;
+
+ if (def.chan->band == NL80211_BAND_2GHZ)
+ cap_band = &ar->mac.iftype[NL80211_BAND_2GHZ][vif->type].he_cap;
+ else
+ cap_band = &ar->mac.iftype[NL80211_BAND_5GHZ][vif->type].he_cap;
+
+ memcpy(&he_cap_elem, &cap_band->he_cap_elem, sizeof(he_cap_elem));
+
+ if (HECAP_PHY_SUBFME_GET(he_cap_elem.phy_cap_info)) {
+ if (HECAP_PHY_SUBFMR_GET(he_cap->he_cap_elem.phy_cap_info))
+ hemode |= FIELD_PREP(HE_MODE_SU_TX_BFEE, HE_SU_BFEE_ENABLE);
+ if (HECAP_PHY_MUBFMR_GET(he_cap->he_cap_elem.phy_cap_info))
+ hemode |= FIELD_PREP(HE_MODE_MU_TX_BFEE, HE_MU_BFEE_ENABLE);
+ }
+
+ if (vif->type != NL80211_IFTYPE_MESH_POINT) {
+ hemode |= FIELD_PREP(HE_MODE_DL_OFDMA, HE_DL_MUOFDMA_ENABLE) |
+ FIELD_PREP(HE_MODE_UL_OFDMA, HE_UL_MUOFDMA_ENABLE);
+
+ if (HECAP_PHY_ULMUMIMO_GET(he_cap_elem.phy_cap_info))
+ if (HECAP_PHY_ULMUMIMO_GET(he_cap->he_cap_elem.phy_cap_info))
+ hemode |= FIELD_PREP(HE_MODE_UL_MUMIMO,
+ HE_UL_MUMIMO_ENABLE);
+
+ if (FIELD_GET(HE_MODE_MU_TX_BFEE, hemode))
+ hemode |= FIELD_PREP(HE_MODE_SU_TX_BFEE, HE_SU_BFEE_ENABLE);
+
+ if (FIELD_GET(HE_MODE_MU_TX_BFER, hemode))
+ hemode |= FIELD_PREP(HE_MODE_SU_TX_BFER, HE_SU_BFER_ENABLE);
+ }
+
+ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param, hemode);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to submit vdev param txbf 0x%x: %d\n",
+ hemode, ret);
+ return false;
+ }
+
+ return true;
+}
+
static void ath11k_bss_assoc(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *bss_conf)
struct ieee80211_sta *ap_sta;
struct ath11k_peer *peer;
bool is_auth = false;
+ struct ieee80211_sta_he_cap he_cap;
int ret;
lockdep_assert_held(&ar->conf_mutex);
return;
}
+ /* he_cap is updated here at assoc success, for station mode only */
+ he_cap = ap_sta->deflink.he_cap;
+
ath11k_peer_assoc_prepare(ar, vif, ap_sta, &peer_arg, false);
rcu_read_unlock();
return;
}
+ if (!ath11k_mac_vif_recalc_sta_he_txbf(ar, vif, &he_cap)) {
+ ath11k_warn(ar->ab, "failed to recalc he txbf for vdev %i on bss %pM\n",
+ arvif->vdev_id, bss_conf->bssid);
+ return;
+ }
+
WARN_ON(arvif->is_up);
arvif->aid = vif->cfg.aid;
ether_addr_copy(arvif->bssid, info->bssid);
if (changed & BSS_CHANGED_BEACON_ENABLED) {
+ if (info->enable_beacon)
+ ath11k_mac_set_he_txbf_conf(arvif);
ath11k_control_beaconing(arvif, info);
if (arvif->is_up && vif->bss_conf.he_support &&
return cpu_to_le16(bcap->he_6ghz_capa);
}
+static void ath11k_mac_set_hemcsmap(struct ath11k *ar,
+ struct ath11k_pdev_cap *cap,
+ struct ieee80211_sta_he_cap *he_cap,
+ int band)
+{
+ u16 txmcs_map, rxmcs_map;
+ u32 i;
+
+ rxmcs_map = 0;
+ txmcs_map = 0;
+ for (i = 0; i < 8; i++) {
+ if (i < ar->num_tx_chains &&
+ (ar->cfg_tx_chainmask >> cap->tx_chain_mask_shift) & BIT(i))
+ txmcs_map |= IEEE80211_HE_MCS_SUPPORT_0_11 << (i * 2);
+ else
+ txmcs_map |= IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2);
+
+ if (i < ar->num_rx_chains &&
+ (ar->cfg_rx_chainmask >> cap->tx_chain_mask_shift) & BIT(i))
+ rxmcs_map |= IEEE80211_HE_MCS_SUPPORT_0_11 << (i * 2);
+ else
+ rxmcs_map |= IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2);
+ }
+ he_cap->he_mcs_nss_supp.rx_mcs_80 =
+ cpu_to_le16(rxmcs_map & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_80 =
+ cpu_to_le16(txmcs_map & 0xffff);
+ he_cap->he_mcs_nss_supp.rx_mcs_160 =
+ cpu_to_le16(rxmcs_map & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_160 =
+ cpu_to_le16(txmcs_map & 0xffff);
+ he_cap->he_mcs_nss_supp.rx_mcs_80p80 =
+ cpu_to_le16(rxmcs_map & 0xffff);
+ he_cap->he_mcs_nss_supp.tx_mcs_80p80 =
+ cpu_to_le16(txmcs_map & 0xffff);
+}
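ath11k_mac_set_hemcsmap() above packs a 2-bit MCS capability per spatial stream into a 16-bit map and marks streams beyond the configured chain mask as unsupported. A hedged userspace sketch of that packing, assuming the ieee80211.h encoding where 2 means "MCS 0-11 supported" and 3 means "not supported":

#include <stdint.h>
#include <stdio.h>

#define HE_MCS_SUPPORT_0_11	2	/* assumed ieee80211.h values */
#define HE_MCS_NOT_SUPPORTED	3

static uint16_t build_he_mcs_map(unsigned int num_chains, uint8_t chainmask)
{
	uint16_t map = 0;

	for (unsigned int i = 0; i < 8; i++) {
		unsigned int mcs = (i < num_chains && (chainmask & (1u << i))) ?
				   HE_MCS_SUPPORT_0_11 : HE_MCS_NOT_SUPPORTED;

		map |= mcs << (i * 2);	/* two bits per spatial stream */
	}

	return map;
}

int main(void)
{
	/* two chains enabled -> 0xfffa: streams 0-1 carry MCS 0-11, rest disabled */
	printf("0x%04x\n", build_he_mcs_map(2, 0x3));
	return 0;
}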
+
static int ath11k_mac_copy_he_cap(struct ath11k *ar,
struct ath11k_pdev_cap *cap,
struct ieee80211_sband_iftype_data *data,
he_cap_elem->mac_cap_info[1] &=
IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_MASK;
+ he_cap_elem->phy_cap_info[0] &=
+ ~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
+ he_cap_elem->phy_cap_info[0] &=
+ ~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G;
he_cap_elem->phy_cap_info[5] &=
~IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK;
break;
}
- he_cap->he_mcs_nss_supp.rx_mcs_80 =
- cpu_to_le16(band_cap->he_mcs & 0xffff);
- he_cap->he_mcs_nss_supp.tx_mcs_80 =
- cpu_to_le16(band_cap->he_mcs & 0xffff);
- he_cap->he_mcs_nss_supp.rx_mcs_160 =
- cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
- he_cap->he_mcs_nss_supp.tx_mcs_160 =
- cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
- he_cap->he_mcs_nss_supp.rx_mcs_80p80 =
- cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
- he_cap->he_mcs_nss_supp.tx_mcs_80p80 =
- cpu_to_le16((band_cap->he_mcs >> 16) & 0xffff);
+ ath11k_mac_set_hemcsmap(ar, cap, he_cap, band);
memset(he_cap->ppe_thres, 0, sizeof(he_cap->ppe_thres));
if (he_cap_elem->phy_cap_info[6] &
}
}
-static u32
-ath11k_mac_prepare_he_mode(struct ath11k_pdev *pdev, u32 viftype)
-{
- struct ath11k_pdev_cap *pdev_cap = &pdev->cap;
- struct ath11k_band_cap *cap_band = NULL;
- u32 *hecap_phy_ptr = NULL;
- u32 hemode = 0;
-
- if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP)
- cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
- else
- cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
-
- hecap_phy_ptr = &cap_band->he_cap_phy_info[0];
-
- hemode = FIELD_PREP(HE_MODE_SU_TX_BFEE, HE_SU_BFEE_ENABLE) |
- FIELD_PREP(HE_MODE_SU_TX_BFER, HECAP_PHY_SUBFMR_GET(hecap_phy_ptr)) |
- FIELD_PREP(HE_MODE_UL_MUMIMO, HECAP_PHY_ULMUMIMO_GET(hecap_phy_ptr));
-
- /* TODO WDS and other modes */
- if (viftype == NL80211_IFTYPE_AP) {
- hemode |= FIELD_PREP(HE_MODE_MU_TX_BFER,
- HECAP_PHY_MUBFMR_GET(hecap_phy_ptr)) |
- FIELD_PREP(HE_MODE_DL_OFDMA, HE_DL_MUOFDMA_ENABLE) |
- FIELD_PREP(HE_MODE_UL_OFDMA, HE_UL_MUOFDMA_ENABLE);
- } else {
- hemode |= FIELD_PREP(HE_MODE_MU_TX_BFEE, HE_MU_BFEE_ENABLE);
- }
-
- return hemode;
-}
-
-static int ath11k_set_he_mu_sounding_mode(struct ath11k *ar,
- struct ath11k_vif *arvif)
-{
- u32 param_id, param_value;
- struct ath11k_base *ab = ar->ab;
- int ret = 0;
-
- param_id = WMI_VDEV_PARAM_SET_HEMU_MODE;
- param_value = ath11k_mac_prepare_he_mode(ar->pdev, arvif->vif->type);
- ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
- param_id, param_value);
- if (ret) {
- ath11k_warn(ab, "failed to set vdev %d HE MU mode: %d param_value %x\n",
- arvif->vdev_id, ret, param_value);
- return ret;
- }
- param_id = WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE;
- param_value =
- FIELD_PREP(HE_VHT_SOUNDING_MODE, HE_VHT_SOUNDING_MODE_ENABLE) |
- FIELD_PREP(HE_TRIG_NONTRIG_SOUNDING_MODE,
- HE_TRIG_NONTRIG_SOUNDING_MODE_ENABLE);
- ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
- param_id, param_value);
- if (ret) {
- ath11k_warn(ab, "failed to set vdev %d HE MU mode: %d\n",
- arvif->vdev_id, ret);
- return ret;
- }
- return ret;
-}
-
static void ath11k_mac_op_update_vif_offload(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
struct ath11k_base *ab = ar->ab;
struct wmi_vdev_start_req_arg arg = {};
const struct cfg80211_chan_def *chandef = &ctx->def;
- int he_support = arvif->vif->bss_conf.he_support;
int ret = 0;
lockdep_assert_held(&ar->conf_mutex);
spin_lock_bh(&ab->base_lock);
arg.regdomain = ar->ab->dfs_region;
spin_unlock_bh(&ab->base_lock);
-
- if (he_support) {
- ret = ath11k_set_he_mu_sounding_mode(ar, arvif);
- if (ret) {
- ath11k_warn(ar->ab, "failed to set he mode vdev %i\n",
- arg.vdev_id);
- return ret;
- }
- }
}
arg.channel.passive |= !!(chandef->chan->flags & IEEE80211_CHAN_NO_IR);
goto err_free_if_combs;
}
+ if (test_bit(WMI_TLV_SERVICE_TX_DATA_MGMT_ACK_RSSI,
+ ar->ab->wmi_ab.svc_map))
+ wiphy_ext_feature_set(ar->hw->wiphy,
+ NL80211_EXT_FEATURE_ACK_SIGNAL_SUPPORT);
+
ar->hw->queues = ATH11K_HW_MAX_QUEUES;
ar->hw->wiphy->tx_queue_len = ATH11K_QUEUE_LEN;
ar->hw->offchannel_tx_hw_queue = ATH11K_HW_MAX_QUEUES - 1;
return -ENOBUFS;
}
+ mutex_lock(&ar->ab->tbl_mtx_lock);
spin_lock_bh(&ar->ab->base_lock);
peer = ath11k_peer_find_by_addr(ar->ab, param->peer_addr);
if (peer) {
if (peer->vdev_id == param->vdev_id) {
spin_unlock_bh(&ar->ab->base_lock);
+ mutex_unlock(&ar->ab->tbl_mtx_lock);
return -EINVAL;
}
/* Assume sta is transitioning to another band.
* Remove here the peer from rhash.
*/
- mutex_lock(&ar->ab->tbl_mtx_lock);
ath11k_peer_rhash_delete(ar->ab, peer);
- mutex_unlock(&ar->ab->tbl_mtx_lock);
}
spin_unlock_bh(&ar->ab->base_lock);
+ mutex_unlock(&ar->ab->tbl_mtx_lock);
ret = ath11k_wmi_send_peer_create_cmd(ar, param);
if (ret) {
{
struct ieee80211_regdomain *tmp_regd, *default_regd, *new_regd = NULL;
struct cur_reg_rule *reg_rule;
- u8 i = 0, j = 0;
+ u8 i = 0, j = 0, k = 0;
u8 num_rules;
u16 max_bw;
u32 flags;
char alpha2[3];
- num_rules = reg_info->num_5g_reg_rules + reg_info->num_2g_reg_rules;
+ num_rules = reg_info->num_5ghz_reg_rules + reg_info->num_2ghz_reg_rules;
+
+ /* FIXME: 6 GHz reg rules are currently taken only from the Indoor AP mode
+ * list. This can be updated once complete 6 GHz regulatory support is added.
+ */
+ if (reg_info->is_ext_reg_event)
+ num_rules += reg_info->num_6ghz_rules_ap[WMI_REG_INDOOR_AP];
if (!num_rules)
goto ret;
tmp_regd->dfs_region = ath11k_map_fw_dfs_region(reg_info->dfs_region);
ath11k_dbg(ab, ATH11K_DBG_REG,
- "\r\nCountry %s, CFG Regdomain %s FW Regdomain %d, num_reg_rules %d\n",
+ "Country %s, CFG Regdomain %s FW Regdomain %d, num_reg_rules %d\n",
alpha2, ath11k_reg_get_regdom_str(tmp_regd->dfs_region),
reg_info->dfs_region, num_rules);
/* Update reg_rules[] below. Firmware is expected to
- * send these rules in order(2G rules first and then 5G)
+ * send these rules in order(2 GHz rules first and then 5 GHz)
*/
for (; i < num_rules; i++) {
- if (reg_info->num_2g_reg_rules &&
- (i < reg_info->num_2g_reg_rules)) {
- reg_rule = reg_info->reg_rules_2g_ptr + i;
+ if (reg_info->num_2ghz_reg_rules &&
+ (i < reg_info->num_2ghz_reg_rules)) {
+ reg_rule = reg_info->reg_rules_2ghz_ptr + i;
max_bw = min_t(u16, reg_rule->max_bw,
- reg_info->max_bw_2g);
+ reg_info->max_bw_2ghz);
flags = 0;
- } else if (reg_info->num_5g_reg_rules &&
- (j < reg_info->num_5g_reg_rules)) {
- reg_rule = reg_info->reg_rules_5g_ptr + j++;
+ } else if (reg_info->num_5ghz_reg_rules &&
+ (j < reg_info->num_5ghz_reg_rules)) {
+ reg_rule = reg_info->reg_rules_5ghz_ptr + j++;
max_bw = min_t(u16, reg_rule->max_bw,
- reg_info->max_bw_5g);
+ reg_info->max_bw_5ghz);
/* FW doesn't pass the NL80211_RRF_AUTO_BW flag for BW auto
* correction, so enable it by default along with the other
* BW rule flags we pass from here
*/
flags = NL80211_RRF_AUTO_BW;
+ } else if (reg_info->is_ext_reg_event &&
+ reg_info->num_6ghz_rules_ap[WMI_REG_INDOOR_AP] &&
+ (k < reg_info->num_6ghz_rules_ap[WMI_REG_INDOOR_AP])) {
+ reg_rule = reg_info->reg_rules_6ghz_ap_ptr[WMI_REG_INDOOR_AP] +
+ k++;
+ max_bw = min_t(u16, reg_rule->max_bw,
+ reg_info->max_bw_6ghz_ap[WMI_REG_INDOOR_AP]);
+ flags = NL80211_RRF_AUTO_BW;
} else {
break;
}
continue;
}
- ath11k_dbg(ab, ATH11K_DBG_REG,
- "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
- i + 1, reg_rule->start_freq, reg_rule->end_freq,
- max_bw, reg_rule->ant_gain, reg_rule->reg_power,
- tmp_regd->reg_rules[i].dfs_cac_ms,
- flags);
+ if (reg_info->is_ext_reg_event) {
+ ath11k_dbg(ab, ATH11K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d) (%d, %d)\n",
+ i + 1, reg_rule->start_freq, reg_rule->end_freq,
+ max_bw, reg_rule->ant_gain, reg_rule->reg_power,
+ tmp_regd->reg_rules[i].dfs_cac_ms, flags,
+ reg_rule->psd_flag, reg_rule->psd_eirp);
+ } else {
+ ath11k_dbg(ab, ATH11K_DBG_REG,
+ "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
+ i + 1, reg_rule->start_freq, reg_rule->end_freq,
+ max_bw, reg_rule->ant_gain, reg_rule->reg_power,
+ tmp_regd->reg_rules[i].dfs_cac_ms,
+ flags);
+ }
}
tmp_regd->n_reg_rules = i;
= { .min_len = sizeof(struct wmi_vdev_stopped_event) },
[WMI_TAG_REG_CHAN_LIST_CC_EVENT]
= { .min_len = sizeof(struct wmi_reg_chan_list_cc_event) },
+ [WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT]
+ = { .min_len = sizeof(struct wmi_reg_chan_list_cc_ext_event) },
[WMI_TAG_MGMT_RX_HDR]
= { .min_len = sizeof(struct wmi_mgmt_rx_hdr) },
[WMI_TAG_MGMT_TX_COMPL_EVENT]
WMI_SCAN_EVENT_FOREIGN_CHAN |
WMI_SCAN_EVENT_DEQUEUED;
arg->scan_flags |= WMI_SCAN_CHAN_STAT_EVENT;
+
+ if (test_bit(WMI_TLV_SERVICE_PASSIVE_SCAN_START_TIME_ENHANCE,
+ ar->ab->wmi_ab.svc_map))
+ arg->scan_ctrl_flags_ext |=
+ WMI_SCAN_FLAG_EXT_PASSIVE_SCAN_START_TIME_ENHANCE;
+
arg->num_bssid = 1;
/* fill bssid_list[0] with 0xff, otherwise bssid and RA will be
/* for adaptive scan mode using 3 bits (21 - 23 bits) */
WMI_SCAN_SET_DWELL_MODE(cmd->scan_ctrl_flags,
param->adaptive_dwell_time_mode);
+
+ cmd->scan_ctrl_flags_ext = param->scan_ctrl_flags_ext;
}
int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
wmi_cfg->sched_params = tg_cfg->sched_params;
wmi_cfg->twt_ap_pdev_count = tg_cfg->twt_ap_pdev_count;
wmi_cfg->twt_ap_sta_count = tg_cfg->twt_ap_sta_count;
+ wmi_cfg->host_service_flags &=
+ ~(1 << WMI_CFG_HOST_SERVICE_FLAG_REG_CC_EXT);
+ wmi_cfg->host_service_flags |= (tg_cfg->is_reg_cc_ext_event_supported <<
+ WMI_CFG_HOST_SERVICE_FLAG_REG_CC_EXT);
}
static int ath11k_init_cmd_send(struct ath11k_pdev_wmi *wmi,
ab->hw_params.hw_ops->wmi_init_config(ab, &config);
+ if (test_bit(WMI_TLV_SERVICE_REG_CC_EXT_EVENT_SUPPORT,
+ ab->wmi_ab.svc_map))
+ config.is_reg_cc_ext_event_supported = 1;
+
memcpy(&wmi_sc->wlan_resource_config, &config, sizeof(config));
init_param.res_cfg = &wmi_sc->wlan_resource_config;
return 0;
}
+static void ath11k_print_reg_rule(struct ath11k_base *ab, const char *band,
+ u32 num_reg_rules,
+ struct cur_reg_rule *reg_rule_ptr)
+{
+ struct cur_reg_rule *reg_rule = reg_rule_ptr;
+ u32 count;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI, "number of reg rules in %s band: %d\n",
+ band, num_reg_rules);
+
+ for (count = 0; count < num_reg_rules; count++) {
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "reg rule %d: (%d - %d @ %d) (%d, %d) (FLAGS %d)\n",
+ count + 1, reg_rule->start_freq, reg_rule->end_freq,
+ reg_rule->max_bw, reg_rule->ant_gain,
+ reg_rule->reg_power, reg_rule->flags);
+ reg_rule++;
+ }
+}
+
static struct cur_reg_rule
*create_reg_rules_from_wmi(u32 num_reg_rules,
struct wmi_regulatory_rule_struct *wmi_reg_rule)
const void **tb;
const struct wmi_reg_chan_list_cc_event *chan_list_event_hdr;
struct wmi_regulatory_rule_struct *wmi_reg_rule;
- u32 num_2g_reg_rules, num_5g_reg_rules;
+ u32 num_2ghz_reg_rules, num_5ghz_reg_rules;
int ret;
ath11k_dbg(ab, ATH11K_DBG_WMI, "processing regulatory channel list\n");
return -EPROTO;
}
- reg_info->num_2g_reg_rules = chan_list_event_hdr->num_2g_reg_rules;
- reg_info->num_5g_reg_rules = chan_list_event_hdr->num_5g_reg_rules;
+ reg_info->num_2ghz_reg_rules = chan_list_event_hdr->num_2ghz_reg_rules;
+ reg_info->num_5ghz_reg_rules = chan_list_event_hdr->num_5ghz_reg_rules;
- if (!(reg_info->num_2g_reg_rules + reg_info->num_5g_reg_rules)) {
+ if (!(reg_info->num_2ghz_reg_rules + reg_info->num_5ghz_reg_rules)) {
ath11k_warn(ab, "No regulatory rules available in the event info\n");
kfree(tb);
return -EINVAL;
reg_info->phy_id = chan_list_event_hdr->phy_id;
reg_info->ctry_code = chan_list_event_hdr->country_id;
reg_info->reg_dmn_pair = chan_list_event_hdr->domain_code;
- if (chan_list_event_hdr->status_code == WMI_REG_SET_CC_STATUS_PASS)
- reg_info->status_code = REG_SET_CC_STATUS_PASS;
- else if (chan_list_event_hdr->status_code == WMI_REG_CURRENT_ALPHA2_NOT_FOUND)
- reg_info->status_code = REG_CURRENT_ALPHA2_NOT_FOUND;
- else if (chan_list_event_hdr->status_code == WMI_REG_INIT_ALPHA2_NOT_FOUND)
- reg_info->status_code = REG_INIT_ALPHA2_NOT_FOUND;
- else if (chan_list_event_hdr->status_code == WMI_REG_SET_CC_CHANGE_NOT_ALLOWED)
- reg_info->status_code = REG_SET_CC_CHANGE_NOT_ALLOWED;
- else if (chan_list_event_hdr->status_code == WMI_REG_SET_CC_STATUS_NO_MEMORY)
- reg_info->status_code = REG_SET_CC_STATUS_NO_MEMORY;
- else if (chan_list_event_hdr->status_code == WMI_REG_SET_CC_STATUS_FAIL)
- reg_info->status_code = REG_SET_CC_STATUS_FAIL;
-
- reg_info->min_bw_2g = chan_list_event_hdr->min_bw_2g;
- reg_info->max_bw_2g = chan_list_event_hdr->max_bw_2g;
- reg_info->min_bw_5g = chan_list_event_hdr->min_bw_5g;
- reg_info->max_bw_5g = chan_list_event_hdr->max_bw_5g;
-
- num_2g_reg_rules = reg_info->num_2g_reg_rules;
- num_5g_reg_rules = reg_info->num_5g_reg_rules;
ath11k_dbg(ab, ATH11K_DBG_WMI,
- "%s:cc %s dsf %d BW: min_2g %d max_2g %d min_5g %d max_5g %d",
- __func__, reg_info->alpha2, reg_info->dfs_region,
- reg_info->min_bw_2g, reg_info->max_bw_2g,
- reg_info->min_bw_5g, reg_info->max_bw_5g);
+ "status_code %s",
+ ath11k_cc_status_to_str(reg_info->status_code));
+
+ reg_info->status_code =
+ ath11k_wmi_cc_setting_code_to_reg(chan_list_event_hdr->status_code);
+
+ reg_info->is_ext_reg_event = false;
+
+ reg_info->min_bw_2ghz = chan_list_event_hdr->min_bw_2ghz;
+ reg_info->max_bw_2ghz = chan_list_event_hdr->max_bw_2ghz;
+ reg_info->min_bw_5ghz = chan_list_event_hdr->min_bw_5ghz;
+ reg_info->max_bw_5ghz = chan_list_event_hdr->max_bw_5ghz;
+
+ num_2ghz_reg_rules = reg_info->num_2ghz_reg_rules;
+ num_5ghz_reg_rules = reg_info->num_5ghz_reg_rules;
ath11k_dbg(ab, ATH11K_DBG_WMI,
- "%s: num_2g_reg_rules %d num_5g_reg_rules %d", __func__,
- num_2g_reg_rules, num_5g_reg_rules);
+ "cc %s dsf %d BW: min_2ghz %d max_2ghz %d min_5ghz %d max_5ghz %d",
+ reg_info->alpha2, reg_info->dfs_region,
+ reg_info->min_bw_2ghz, reg_info->max_bw_2ghz,
+ reg_info->min_bw_5ghz, reg_info->max_bw_5ghz);
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "num_2ghz_reg_rules %d num_5ghz_reg_rules %d",
+ num_2ghz_reg_rules, num_5ghz_reg_rules);
wmi_reg_rule =
(struct wmi_regulatory_rule_struct *)((u8 *)chan_list_event_hdr
+ sizeof(*chan_list_event_hdr)
+ sizeof(struct wmi_tlv));
- if (num_2g_reg_rules) {
- reg_info->reg_rules_2g_ptr = create_reg_rules_from_wmi(num_2g_reg_rules,
- wmi_reg_rule);
- if (!reg_info->reg_rules_2g_ptr) {
+ if (num_2ghz_reg_rules) {
+ reg_info->reg_rules_2ghz_ptr =
+ create_reg_rules_from_wmi(num_2ghz_reg_rules,
+ wmi_reg_rule);
+ if (!reg_info->reg_rules_2ghz_ptr) {
kfree(tb);
- ath11k_warn(ab, "Unable to Allocate memory for 2g rules\n");
+ ath11k_warn(ab, "Unable to Allocate memory for 2 GHz rules\n");
return -ENOMEM;
}
+
+ ath11k_print_reg_rule(ab, "2 GHz",
+ num_2ghz_reg_rules,
+ reg_info->reg_rules_2ghz_ptr);
}
- if (num_5g_reg_rules) {
- wmi_reg_rule += num_2g_reg_rules;
- reg_info->reg_rules_5g_ptr = create_reg_rules_from_wmi(num_5g_reg_rules,
- wmi_reg_rule);
- if (!reg_info->reg_rules_5g_ptr) {
+ if (num_5ghz_reg_rules) {
+ wmi_reg_rule += num_2ghz_reg_rules;
+ reg_info->reg_rules_5ghz_ptr =
+ create_reg_rules_from_wmi(num_5ghz_reg_rules,
+ wmi_reg_rule);
+ if (!reg_info->reg_rules_5ghz_ptr) {
kfree(tb);
- ath11k_warn(ab, "Unable to Allocate memory for 5g rules\n");
+ ath11k_warn(ab, "Unable to Allocate memory for 5 GHz rules\n");
return -ENOMEM;
}
+
+ ath11k_print_reg_rule(ab, "5 GHz",
+ num_5ghz_reg_rules,
+ reg_info->reg_rules_5ghz_ptr);
}
ath11k_dbg(ab, ATH11K_DBG_WMI, "processed regulatory channel list\n");
return 0;
}
+static struct cur_reg_rule
+*create_ext_reg_rules_from_wmi(u32 num_reg_rules,
+ struct wmi_regulatory_ext_rule *wmi_reg_rule)
+{
+ struct cur_reg_rule *reg_rule_ptr;
+ u32 count;
+
+ reg_rule_ptr = kcalloc(num_reg_rules, sizeof(*reg_rule_ptr), GFP_ATOMIC);
+
+ if (!reg_rule_ptr)
+ return NULL;
+
+ for (count = 0; count < num_reg_rules; count++) {
+ reg_rule_ptr[count].start_freq =
+ u32_get_bits(wmi_reg_rule[count].freq_info,
+ REG_RULE_START_FREQ);
+ reg_rule_ptr[count].end_freq =
+ u32_get_bits(wmi_reg_rule[count].freq_info,
+ REG_RULE_END_FREQ);
+ reg_rule_ptr[count].max_bw =
+ u32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_MAX_BW);
+ reg_rule_ptr[count].reg_power =
+ u32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_REG_PWR);
+ reg_rule_ptr[count].ant_gain =
+ u32_get_bits(wmi_reg_rule[count].bw_pwr_info,
+ REG_RULE_ANT_GAIN);
+ reg_rule_ptr[count].flags =
+ u32_get_bits(wmi_reg_rule[count].flag_info,
+ REG_RULE_FLAGS);
+ reg_rule_ptr[count].psd_flag =
+ u32_get_bits(wmi_reg_rule[count].psd_power_info,
+ REG_RULE_PSD_INFO);
+ reg_rule_ptr[count].psd_eirp =
+ u32_get_bits(wmi_reg_rule[count].psd_power_info,
+ REG_RULE_PSD_EIRP);
+ }
+
+ return reg_rule_ptr;
+}
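create_ext_reg_rules_from_wmi() above unpacks each packed 32-bit word with u32_get_bits(), which extracts the field selected by a contiguous bit mask. A standalone equivalent of that helper; the actual REG_RULE_* masks live in the driver's wmi.h, so the layout used below is only illustrative:

#include <stdint.h>
#include <stdio.h>

/* same contract as u32_get_bits(): mask selects a contiguous bit field */
static uint32_t get_bits(uint32_t word, uint32_t mask)
{
	return (word & mask) >> __builtin_ctz(mask);
}

int main(void)
{
	/* hypothetical layout: low 16 bits start freq, high 16 bits end freq */
	uint32_t freq_info = (5955u << 16) | 5925u;

	printf("start %u MHz, end %u MHz\n",
	       get_bits(freq_info, 0x0000ffff),
	       get_bits(freq_info, 0xffff0000));
	return 0;
}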
+
+static u8
+ath11k_invalid_5ghz_reg_ext_rules_from_wmi(u32 num_reg_rules,
+ const struct wmi_regulatory_ext_rule *rule)
+{
+ u8 num_invalid_5ghz_rules = 0;
+ u32 count, start_freq;
+
+ for (count = 0; count < num_reg_rules; count++) {
+ start_freq = u32_get_bits(rule[count].freq_info,
+ REG_RULE_START_FREQ);
+
+ if (start_freq >= ATH11K_MIN_6G_FREQ)
+ num_invalid_5ghz_rules++;
+ }
+
+ return num_invalid_5ghz_rules;
+}
+
+static int ath11k_pull_reg_chan_list_ext_update_ev(struct ath11k_base *ab,
+ struct sk_buff *skb,
+ struct cur_regulatory_info *reg_info)
+{
+ const void **tb;
+ const struct wmi_reg_chan_list_cc_ext_event *ev;
+ struct wmi_regulatory_ext_rule *ext_wmi_reg_rule;
+ u32 num_2ghz_reg_rules, num_5ghz_reg_rules;
+ u32 num_6ghz_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6ghz_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 total_reg_rules = 0;
+ int ret, i, j, num_invalid_5ghz_ext_rules = 0;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI, "processing regulatory ext channel list\n");
+
+ tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return ret;
+ }
+
+ ev = tb[WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT];
+ if (!ev) {
+ ath11k_warn(ab, "failed to fetch reg chan list ext update ev\n");
+ kfree(tb);
+ return -EPROTO;
+ }
+
+ reg_info->num_2ghz_reg_rules = ev->num_2ghz_reg_rules;
+ reg_info->num_5ghz_reg_rules = ev->num_5ghz_reg_rules;
+ reg_info->num_6ghz_rules_ap[WMI_REG_INDOOR_AP] =
+ ev->num_6ghz_reg_rules_ap_lpi;
+ reg_info->num_6ghz_rules_ap[WMI_REG_STANDARD_POWER_AP] =
+ ev->num_6ghz_reg_rules_ap_sp;
+ reg_info->num_6ghz_rules_ap[WMI_REG_VERY_LOW_POWER_AP] =
+ ev->num_6ghz_reg_rules_ap_vlp;
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->num_6ghz_rules_client[WMI_REG_INDOOR_AP][i] =
+ ev->num_6ghz_reg_rules_client_lpi[i];
+ reg_info->num_6ghz_rules_client[WMI_REG_STANDARD_POWER_AP][i] =
+ ev->num_6ghz_reg_rules_client_sp[i];
+ reg_info->num_6ghz_rules_client[WMI_REG_VERY_LOW_POWER_AP][i] =
+ ev->num_6ghz_reg_rules_client_vlp[i];
+ }
+
+ num_2ghz_reg_rules = reg_info->num_2ghz_reg_rules;
+ num_5ghz_reg_rules = reg_info->num_5ghz_reg_rules;
+
+ total_reg_rules += num_2ghz_reg_rules;
+ total_reg_rules += num_5ghz_reg_rules;
+
+ if ((num_2ghz_reg_rules > MAX_REG_RULES) ||
+ (num_5ghz_reg_rules > MAX_REG_RULES)) {
+ ath11k_warn(ab, "Num reg rules for 2.4 GHz/5 GHz exceeds max limit (num_2ghz_reg_rules: %d num_5ghz_reg_rules: %d max_rules: %d)\n",
+ num_2ghz_reg_rules, num_5ghz_reg_rules, MAX_REG_RULES);
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ num_6ghz_reg_rules_ap[i] = reg_info->num_6ghz_rules_ap[i];
+
+ if (num_6ghz_reg_rules_ap[i] > MAX_6GHZ_REG_RULES) {
+ ath11k_warn(ab, "Num 6 GHz reg rules for AP mode(%d) exceeds max limit (num_6ghz_reg_rules_ap: %d, max_rules: %d)\n",
+ i, num_6ghz_reg_rules_ap[i], MAX_6GHZ_REG_RULES);
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ total_reg_rules += num_6ghz_reg_rules_ap[i];
+ }
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ num_6ghz_client[WMI_REG_INDOOR_AP][i] =
+ reg_info->num_6ghz_rules_client[WMI_REG_INDOOR_AP][i];
+ total_reg_rules += num_6ghz_client[WMI_REG_INDOOR_AP][i];
+
+ num_6ghz_client[WMI_REG_STANDARD_POWER_AP][i] =
+ reg_info->num_6ghz_rules_client[WMI_REG_STANDARD_POWER_AP][i];
+ total_reg_rules += num_6ghz_client[WMI_REG_STANDARD_POWER_AP][i];
+
+ num_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i] =
+ reg_info->num_6ghz_rules_client[WMI_REG_VERY_LOW_POWER_AP][i];
+ total_reg_rules += num_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i];
+
+ if ((num_6ghz_client[WMI_REG_INDOOR_AP][i] > MAX_6GHZ_REG_RULES) ||
+ (num_6ghz_client[WMI_REG_STANDARD_POWER_AP][i] >
+ MAX_6GHZ_REG_RULES) ||
+ (num_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i] >
+ MAX_6GHZ_REG_RULES)) {
+ ath11k_warn(ab,
+ "Num 6 GHz client reg rules exceeds max limit, for client(type: %d)\n",
+ i);
+ kfree(tb);
+ return -EINVAL;
+ }
+ }
+
+ if (!total_reg_rules) {
+ ath11k_warn(ab, "No reg rules available\n");
+ kfree(tb);
+ return -EINVAL;
+ }
+
+ memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);
+
+ reg_info->dfs_region = ev->dfs_region;
+ reg_info->phybitmap = ev->phybitmap;
+ reg_info->num_phy = ev->num_phy;
+ reg_info->phy_id = ev->phy_id;
+ reg_info->ctry_code = ev->country_id;
+ reg_info->reg_dmn_pair = ev->domain_code;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "status_code %s",
+ ath11k_cc_status_to_str(reg_info->status_code));
+
+ reg_info->status_code =
+ ath11k_wmi_cc_setting_code_to_reg(ev->status_code);
+
+ reg_info->is_ext_reg_event = true;
+
+ reg_info->min_bw_2ghz = ev->min_bw_2ghz;
+ reg_info->max_bw_2ghz = ev->max_bw_2ghz;
+ reg_info->min_bw_5ghz = ev->min_bw_5ghz;
+ reg_info->max_bw_5ghz = ev->max_bw_5ghz;
+
+ reg_info->min_bw_6ghz_ap[WMI_REG_INDOOR_AP] =
+ ev->min_bw_6ghz_ap_lpi;
+ reg_info->max_bw_6ghz_ap[WMI_REG_INDOOR_AP] =
+ ev->max_bw_6ghz_ap_lpi;
+ reg_info->min_bw_6ghz_ap[WMI_REG_STANDARD_POWER_AP] =
+ ev->min_bw_6ghz_ap_sp;
+ reg_info->max_bw_6ghz_ap[WMI_REG_STANDARD_POWER_AP] =
+ ev->max_bw_6ghz_ap_sp;
+ reg_info->min_bw_6ghz_ap[WMI_REG_VERY_LOW_POWER_AP] =
+ ev->min_bw_6ghz_ap_vlp;
+ reg_info->max_bw_6ghz_ap[WMI_REG_VERY_LOW_POWER_AP] =
+ ev->max_bw_6ghz_ap_vlp;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz AP BW: LPI (%d - %d), SP (%d - %d), VLP (%d - %d)\n",
+ reg_info->min_bw_6ghz_ap[WMI_REG_INDOOR_AP],
+ reg_info->max_bw_6ghz_ap[WMI_REG_INDOOR_AP],
+ reg_info->min_bw_6ghz_ap[WMI_REG_STANDARD_POWER_AP],
+ reg_info->max_bw_6ghz_ap[WMI_REG_STANDARD_POWER_AP],
+ reg_info->min_bw_6ghz_ap[WMI_REG_VERY_LOW_POWER_AP],
+ reg_info->max_bw_6ghz_ap[WMI_REG_VERY_LOW_POWER_AP]);
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->min_bw_6ghz_client[WMI_REG_INDOOR_AP][i] =
+ ev->min_bw_6ghz_client_lpi[i];
+ reg_info->max_bw_6ghz_client[WMI_REG_INDOOR_AP][i] =
+ ev->max_bw_6ghz_client_lpi[i];
+ reg_info->min_bw_6ghz_client[WMI_REG_STANDARD_POWER_AP][i] =
+ ev->min_bw_6ghz_client_sp[i];
+ reg_info->max_bw_6ghz_client[WMI_REG_STANDARD_POWER_AP][i] =
+ ev->max_bw_6ghz_client_sp[i];
+ reg_info->min_bw_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i] =
+ ev->min_bw_6ghz_client_vlp[i];
+ reg_info->max_bw_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i] =
+ ev->max_bw_6ghz_client_vlp[i];
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz %s BW: LPI (%d - %d), SP (%d - %d), VLP (%d - %d)\n",
+ ath11k_6ghz_client_type_to_str(i),
+ reg_info->min_bw_6ghz_client[WMI_REG_INDOOR_AP][i],
+ reg_info->max_bw_6ghz_client[WMI_REG_INDOOR_AP][i],
+ reg_info->min_bw_6ghz_client[WMI_REG_STANDARD_POWER_AP][i],
+ reg_info->max_bw_6ghz_client[WMI_REG_STANDARD_POWER_AP][i],
+ reg_info->min_bw_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i],
+ reg_info->max_bw_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i]);
+ }
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "cc_ext %s dsf %d BW: min_2ghz %d max_2ghz %d min_5ghz %d max_5ghz %d",
+ reg_info->alpha2, reg_info->dfs_region,
+ reg_info->min_bw_2ghz, reg_info->max_bw_2ghz,
+ reg_info->min_bw_5ghz, reg_info->max_bw_5ghz);
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "num_2ghz_reg_rules %d num_5ghz_reg_rules %d",
+ num_2ghz_reg_rules, num_5ghz_reg_rules);
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "num_6ghz_reg_rules_ap_lpi: %d num_6ghz_reg_rules_ap_sp: %d num_6ghz_reg_rules_ap_vlp: %d",
+ num_6ghz_reg_rules_ap[WMI_REG_INDOOR_AP],
+ num_6ghz_reg_rules_ap[WMI_REG_STANDARD_POWER_AP],
+ num_6ghz_reg_rules_ap[WMI_REG_VERY_LOW_POWER_AP]);
+
+ j = WMI_REG_DEFAULT_CLIENT;
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz Regular client: num_6ghz_reg_rules_lpi: %d num_6ghz_reg_rules_sp: %d num_6ghz_reg_rules_vlp: %d",
+ num_6ghz_client[WMI_REG_INDOOR_AP][j],
+ num_6ghz_client[WMI_REG_STANDARD_POWER_AP][j],
+ num_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][j]);
+
+ j = WMI_REG_SUBORDINATE_CLIENT;
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz Subordinate client: num_6ghz_reg_rules_lpi: %d num_6ghz_reg_rules_sp: %d num_6ghz_reg_rules_vlp: %d",
+ num_6ghz_client[WMI_REG_INDOOR_AP][j],
+ num_6ghz_client[WMI_REG_STANDARD_POWER_AP][j],
+ num_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][j]);
+
+ ext_wmi_reg_rule =
+ (struct wmi_regulatory_ext_rule *)((u8 *)ev + sizeof(*ev) +
+ sizeof(struct wmi_tlv));
+ if (num_2ghz_reg_rules) {
+ reg_info->reg_rules_2ghz_ptr =
+ create_ext_reg_rules_from_wmi(num_2ghz_reg_rules,
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_2ghz_ptr) {
+ kfree(tb);
+ ath11k_warn(ab, "Unable to Allocate memory for 2 GHz rules\n");
+ return -ENOMEM;
+ }
+
+ ath11k_print_reg_rule(ab, "2 GHz",
+ num_2ghz_reg_rules,
+ reg_info->reg_rules_2ghz_ptr);
+ }
+
+ ext_wmi_reg_rule += num_2ghz_reg_rules;
+
+ /* For a few countries, firmware includes 6 GHz reg rules in the
+ * 5 GHz rule list in addition to the separate 6 GHz list. Having
+ * the same 6 GHz rule in both lists makes the intersect check
+ * succeed, so the same rules are shown multiple times in the iw
+ * output.
+ * Hence, avoid parsing 6 GHz rules from the 5 GHz reg rule list.
+ */
+ num_invalid_5ghz_ext_rules =
+ ath11k_invalid_5ghz_reg_ext_rules_from_wmi(num_5ghz_reg_rules,
+ ext_wmi_reg_rule);
+
+ if (num_invalid_5ghz_ext_rules) {
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules",
+ reg_info->alpha2, reg_info->num_5ghz_reg_rules,
+ num_invalid_5ghz_ext_rules);
+
+ num_5ghz_reg_rules = num_5ghz_reg_rules - num_invalid_5ghz_ext_rules;
+ reg_info->num_5ghz_reg_rules = num_5ghz_reg_rules;
+ }
+
+ if (num_5ghz_reg_rules) {
+ reg_info->reg_rules_5ghz_ptr =
+ create_ext_reg_rules_from_wmi(num_5ghz_reg_rules,
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_5ghz_ptr) {
+ kfree(tb);
+ ath11k_warn(ab, "Unable to Allocate memory for 5 GHz rules\n");
+ return -ENOMEM;
+ }
+
+ ath11k_print_reg_rule(ab, "5 GHz",
+ num_5ghz_reg_rules,
+ reg_info->reg_rules_5ghz_ptr);
+ }
+
+	/* We have adjusted the number of 5 GHz reg rules above, but the
+	 * pointer into ext_wmi_reg_rule still needs to skip that many rules.
+	 *
+	 * NOTE: num_invalid_5ghz_ext_rules will be 0 in all other cases.
+	 */
+ ext_wmi_reg_rule += (num_5ghz_reg_rules + num_invalid_5ghz_ext_rules);
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ reg_info->reg_rules_6ghz_ap_ptr[i] =
+ create_ext_reg_rules_from_wmi(num_6ghz_reg_rules_ap[i],
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_6ghz_ap_ptr[i]) {
+ kfree(tb);
+ ath11k_warn(ab, "Unable to Allocate memory for 6 GHz AP rules\n");
+ return -ENOMEM;
+ }
+
+ ath11k_print_reg_rule(ab, ath11k_6ghz_ap_type_to_str(i),
+ num_6ghz_reg_rules_ap[i],
+ reg_info->reg_rules_6ghz_ap_ptr[i]);
+
+ ext_wmi_reg_rule += num_6ghz_reg_rules_ap[i];
+ }
+
+ for (j = 0; j < WMI_REG_CURRENT_MAX_AP_TYPE; j++) {
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz AP type %s", ath11k_6ghz_ap_type_to_str(j));
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->reg_rules_6ghz_client_ptr[j][i] =
+ create_ext_reg_rules_from_wmi(num_6ghz_client[j][i],
+ ext_wmi_reg_rule);
+
+ if (!reg_info->reg_rules_6ghz_client_ptr[j][i]) {
+ kfree(tb);
+ ath11k_warn(ab, "Unable to Allocate memory for 6 GHz client rules\n");
+ return -ENOMEM;
+ }
+
+ ath11k_print_reg_rule(ab,
+ ath11k_6ghz_client_type_to_str(i),
+ num_6ghz_client[j][i],
+ reg_info->reg_rules_6ghz_client_ptr[j][i]);
+
+ ext_wmi_reg_rule += num_6ghz_client[j][i];
+ }
+ }
+
+ reg_info->client_type = ev->client_type;
+ reg_info->rnr_tpe_usable = ev->rnr_tpe_usable;
+ reg_info->unspecified_ap_usable =
+ ev->unspecified_ap_usable;
+ reg_info->domain_code_6ghz_ap[WMI_REG_INDOOR_AP] =
+ ev->domain_code_6ghz_ap_lpi;
+ reg_info->domain_code_6ghz_ap[WMI_REG_STANDARD_POWER_AP] =
+ ev->domain_code_6ghz_ap_sp;
+ reg_info->domain_code_6ghz_ap[WMI_REG_VERY_LOW_POWER_AP] =
+ ev->domain_code_6ghz_ap_vlp;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz reg info client type %s rnr_tpe_usable %d unspecified_ap_usable %d AP sub domain: lpi %s, sp %s, vlp %s\n",
+ ath11k_6ghz_client_type_to_str(reg_info->client_type),
+ reg_info->rnr_tpe_usable,
+ reg_info->unspecified_ap_usable,
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_ap_lpi),
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_ap_sp),
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_ap_vlp));
+
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++) {
+ reg_info->domain_code_6ghz_client[WMI_REG_INDOOR_AP][i] =
+ ev->domain_code_6ghz_client_lpi[i];
+ reg_info->domain_code_6ghz_client[WMI_REG_STANDARD_POWER_AP][i] =
+ ev->domain_code_6ghz_client_sp[i];
+ reg_info->domain_code_6ghz_client[WMI_REG_VERY_LOW_POWER_AP][i] =
+ ev->domain_code_6ghz_client_vlp[i];
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz client type %s client sub domain: lpi %s, sp %s, vlp %s\n",
+ ath11k_6ghz_client_type_to_str(i),
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_client_lpi[i]),
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_client_sp[i]),
+ ath11k_sub_reg_6ghz_to_str(ev->domain_code_6ghz_client_vlp[i])
+ );
+ }
+
+ reg_info->domain_code_6ghz_super_id = ev->domain_code_6ghz_super_id;
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI,
+ "6 GHz client_type %s 6 GHz super domain %s",
+ ath11k_6ghz_client_type_to_str(reg_info->client_type),
+ ath11k_super_reg_6ghz_to_str(reg_info->domain_code_6ghz_super_id));
+
+ ath11k_dbg(ab, ATH11K_DBG_WMI, "processed regulatory ext channel list\n");
+
+ kfree(tb);
+ return 0;
+}
+
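The parser above depends on the firmware laying out every regulatory rule back to back: the 2 GHz rules first, then the (possibly trimmed) 5 GHz rules, then the 6 GHz AP groups and the per-AP-type client groups, with the host advancing one cursor by each group's count. A minimal sketch of that slicing pattern, using hypothetical rule and group types rather than the real WMI structures:

#include <stdio.h>

/* Hypothetical flat rule record; the real WMI TLV carries packed words. */
struct rule { int start_freq, end_freq; };

/* Hand out 'count' rules from the shared cursor and advance it past them. */
static const struct rule *slice(const struct rule **cursor, int count)
{
	const struct rule *group = count ? *cursor : NULL; /* empty group */

	*cursor += count;	/* next group starts right after this one */
	return group;
}

int main(void)
{
	/* One flat buffer: two 2 GHz rules, then one 5 GHz rule. */
	static const struct rule buf[] = {
		{ 2402, 2472 }, { 2457, 2482 }, { 5170, 5250 },
	};
	const struct rule *cursor = buf;
	const struct rule *r2g = slice(&cursor, 2);	/* 2 GHz group */
	const struct rule *r5g = slice(&cursor, 1);	/* 5 GHz group */

	printf("first 2 GHz rule: %d-%d MHz\n", r2g->start_freq, r2g->end_freq);
	printf("first 5 GHz rule: %d-%d MHz\n", r5g->start_freq, r5g->end_freq);
	return 0;
}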
static int ath11k_pull_peer_del_resp_ev(struct ath11k_base *ab, struct sk_buff *skb,
struct wmi_peer_delete_resp_event *peer_del_resp)
{
return 0;
}
-static int wmi_process_mgmt_tx_comp(struct ath11k *ar, u32 desc_id,
- u32 status)
+static int wmi_process_mgmt_tx_comp(struct ath11k *ar,
+ struct wmi_mgmt_tx_compl_event *tx_compl_param)
{
struct sk_buff *msdu;
struct ieee80211_tx_info *info;
int num_mgmt;
spin_lock_bh(&ar->txmgmt_idr_lock);
- msdu = idr_find(&ar->txmgmt_idr, desc_id);
+ msdu = idr_find(&ar->txmgmt_idr, tx_compl_param->desc_id);
if (!msdu) {
ath11k_warn(ar->ab, "received mgmt tx compl for invalid msdu_id: %d\n",
- desc_id);
+ tx_compl_param->desc_id);
spin_unlock_bh(&ar->txmgmt_idr_lock);
return -ENOENT;
}
- idr_remove(&ar->txmgmt_idr, desc_id);
+ idr_remove(&ar->txmgmt_idr, tx_compl_param->desc_id);
spin_unlock_bh(&ar->txmgmt_idr_lock);
skb_cb = ATH11K_SKB_CB(msdu);
dma_unmap_single(ar->ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
info = IEEE80211_SKB_CB(msdu);
- if ((!(info->flags & IEEE80211_TX_CTL_NO_ACK)) && !status)
+ if ((!(info->flags & IEEE80211_TX_CTL_NO_ACK)) &&
+ !tx_compl_param->status) {
info->flags |= IEEE80211_TX_STAT_ACK;
+ if (test_bit(WMI_TLV_SERVICE_TX_DATA_MGMT_ACK_RSSI,
+ ar->ab->wmi_ab.svc_map))
+ info->status.ack_signal = tx_compl_param->ack_rssi;
+ }
ieee80211_tx_status_irqsafe(ar->hw, msdu);
ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
"wmi mgmt tx comp pending %d desc id %d\n",
- num_mgmt, desc_id);
+ num_mgmt, tx_compl_param->desc_id);
if (!num_mgmt)
wake_up(&ar->txmgmt_empty_waitq);
param->pdev_id = ev->pdev_id;
param->desc_id = ev->desc_id;
param->status = ev->status;
+ param->ack_rssi = ev->ack_rssi;
kfree(tb);
return 0;
return false;
}
-static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *skb)
+static int ath11k_reg_chan_list_event(struct ath11k_base *ab,
+ struct sk_buff *skb,
+ enum wmi_reg_chan_list_cmd_type id)
{
struct cur_regulatory_info *reg_info = NULL;
struct ieee80211_regdomain *regd = NULL;
bool intersect = false;
- int ret = 0, pdev_idx;
+ int ret = 0, pdev_idx, i, j;
struct ath11k *ar;
reg_info = kzalloc(sizeof(*reg_info), GFP_ATOMIC);
goto fallback;
}
- ret = ath11k_pull_reg_chan_list_update_ev(ab, skb, reg_info);
+ if (id == WMI_REG_CHAN_LIST_CC_ID)
+ ret = ath11k_pull_reg_chan_list_update_ev(ab, skb, reg_info);
+ else
+ ret = ath11k_pull_reg_chan_list_ext_update_ev(ab, skb, reg_info);
+
if (ret) {
ath11k_warn(ab, "failed to extract regulatory info from received event\n");
goto fallback;
WARN_ON(1);
mem_free:
if (reg_info) {
- kfree(reg_info->reg_rules_2g_ptr);
- kfree(reg_info->reg_rules_5g_ptr);
+ kfree(reg_info->reg_rules_2ghz_ptr);
+ kfree(reg_info->reg_rules_5ghz_ptr);
+ if (reg_info->is_ext_reg_event) {
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++)
+ kfree(reg_info->reg_rules_6ghz_ap_ptr[i]);
+
+ for (j = 0; j < WMI_REG_CURRENT_MAX_AP_TYPE; j++)
+ for (i = 0; i < WMI_REG_MAX_CLIENT_TYPE; i++)
+ kfree(reg_info->reg_rules_6ghz_client_ptr[j][i]);
+ }
kfree(reg_info);
}
return ret;
goto exit;
}
- wmi_process_mgmt_tx_comp(ar, tx_compl_param.desc_id,
- tx_compl_param.status);
+ wmi_process_mgmt_tx_comp(ar, &tx_compl_param);
ath11k_dbg(ab, ATH11K_DBG_MGMT,
- "mgmt tx compl ev pdev_id %d, desc_id %d, status %d",
+ "mgmt tx compl ev pdev_id %d, desc_id %d, status %d ack_rssi %d",
tx_compl_param.pdev_id, tx_compl_param.desc_id,
- tx_compl_param.status);
+ tx_compl_param.status, tx_compl_param.ack_rssi);
exit:
rcu_read_unlock();
ath11k_service_ready_ext2_event(ab, skb);
break;
case WMI_REG_CHAN_LIST_CC_EVENTID:
- ath11k_reg_chan_list_event(ab, skb);
+ ath11k_reg_chan_list_event(ab, skb, WMI_REG_CHAN_LIST_CC_ID);
+ break;
+ case WMI_REG_CHAN_LIST_CC_EXT_EVENTID:
+ ath11k_reg_chan_list_event(ab, skb, WMI_REG_CHAN_LIST_CC_EXT_ID);
break;
case WMI_READY_EVENTID:
ath11k_ready_event(ab, skb);
WMI_RMC_NEW_LEADER_EVENTID = WMI_TLV_CMD(WMI_GRP_RMC),
WMI_REG_CHAN_LIST_CC_EVENTID = WMI_TLV_CMD(WMI_GRP_REGULATORY),
WMI_11D_NEW_COUNTRY_EVENTID,
+ WMI_REG_CHAN_LIST_CC_EXT_EVENTID,
WMI_NDI_CAP_RSP_EVENTID = WMI_TLV_CMD(WMI_GRP_PROTOTYPE),
WMI_NDP_INITIATOR_RSP_EVENTID,
WMI_NDP_RESPONDER_RSP_EVENTID,
WMI_TAG_PDEV_SRG_OBSS_BSSID_ENABLE_BITMAP_CMD,
WMI_TAG_PDEV_NON_SRG_OBSS_COLOR_ENABLE_BITMAP_CMD,
WMI_TAG_PDEV_NON_SRG_OBSS_BSSID_ENABLE_BITMAP_CMD,
+ WMI_TAG_REGULATORY_RULE_EXT_STRUCT = 0x3A9,
+ WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT,
WMI_TAG_PDEV_SET_BIOS_SAR_TABLE_CMD = 0x3D8,
WMI_TAG_PDEV_SET_BIOS_GEO_TABLE_CMD,
WMI_TAG_MAX
WMI_TLV_SERVICE_EXT2_MSG = 220,
WMI_TLV_SERVICE_PEER_POWER_SAVE_DURATION_SUPPORT = 246,
WMI_TLV_SERVICE_SRG_SRP_SPATIAL_REUSE_SUPPORT = 249,
+ WMI_TLV_SERVICE_PASSIVE_SCAN_START_TIME_ENHANCE = 263,
/* The second 128 bits */
WMI_MAX_EXT_SERVICE = 256,
+ WMI_TLV_SERVICE_REG_CC_EXT_EVENT_SUPPORT = 281,
WMI_TLV_SERVICE_BIOS_SAR_SUPPORT = 326,
/* The third 128 bits */
} __packed;
#define WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64 BIT(5)
+#define WMI_RSRC_CFG_FLAG1_ACK_RSSI BIT(18)
+
+#define WMI_CFG_HOST_SERVICE_FLAG_REG_CC_EXT 4
struct wmi_resource_config {
u32 tlv_header;
u32 sched_params;
u32 twt_ap_pdev_count;
u32 twt_ap_sta_count;
+ u32 max_nlo_ssids;
+ u32 num_pkt_filters;
+ u32 num_max_sta_vdevs;
+ u32 max_bssid_indicator;
+ u32 ul_resp_config;
+ u32 msdu_flow_override_config0;
+ u32 msdu_flow_override_config1;
+ u32 flags2;
+ u32 host_service_flags;
} __packed;
struct wmi_service_ready_event {
#define REG_RULE_MAX_BW 0x0000ffff
#define REG_RULE_REG_PWR 0x00ff0000
#define REG_RULE_ANT_GAIN 0xff000000
+#define REG_RULE_PSD_INFO BIT(0)
+#define REG_RULE_PSD_EIRP 0xff0000
#define WMI_VDEV_PARAM_TXBF_SU_TX_BFEE BIT(0)
#define WMI_VDEV_PARAM_TXBF_MU_TX_BFEE BIT(1)
#define WMI_VDEV_PARAM_TXBF_SU_TX_BFER BIT(2)
#define WMI_VDEV_PARAM_TXBF_MU_TX_BFER BIT(3)
-#define HECAP_PHYDWORD_0 0
-#define HECAP_PHYDWORD_1 1
-#define HECAP_PHYDWORD_2 2
+#define HE_PHYCAP_BYTE_0 0
+#define HE_PHYCAP_BYTE_1 1
+#define HE_PHYCAP_BYTE_2 2
+#define HE_PHYCAP_BYTE_3 3
+#define HE_PHYCAP_BYTE_4 4
-#define HECAP_PHY_SU_BFER BIT(31)
+#define HECAP_PHY_SU_BFER BIT(7)
#define HECAP_PHY_SU_BFEE BIT(0)
#define HECAP_PHY_MU_BFER BIT(1)
-#define HECAP_PHY_UL_MUMIMO BIT(22)
-#define HECAP_PHY_UL_MUOFDMA BIT(23)
+#define HECAP_PHY_UL_MUMIMO BIT(6)
+#define HECAP_PHY_UL_MUOFDMA BIT(7)
#define HECAP_PHY_SUBFMR_GET(hecap_phy) \
- FIELD_GET(HECAP_PHY_SU_BFER, hecap_phy[HECAP_PHYDWORD_0])
+ FIELD_GET(HECAP_PHY_SU_BFER, hecap_phy[HE_PHYCAP_BYTE_3])
#define HECAP_PHY_SUBFME_GET(hecap_phy) \
- FIELD_GET(HECAP_PHY_SU_BFEE, hecap_phy[HECAP_PHYDWORD_1])
+ FIELD_GET(HECAP_PHY_SU_BFEE, hecap_phy[HE_PHYCAP_BYTE_4])
#define HECAP_PHY_MUBFMR_GET(hecap_phy) \
- FIELD_GET(HECAP_PHY_MU_BFER, hecap_phy[HECAP_PHYDWORD_1])
+ FIELD_GET(HECAP_PHY_MU_BFER, hecap_phy[HE_PHYCAP_BYTE_4])
#define HECAP_PHY_ULMUMIMO_GET(hecap_phy) \
- FIELD_GET(HECAP_PHY_UL_MUMIMO, hecap_phy[HECAP_PHYDWORD_0])
+ FIELD_GET(HECAP_PHY_UL_MUMIMO, hecap_phy[HE_PHYCAP_BYTE_2])
#define HECAP_PHY_ULOFDMA_GET(hecap_phy) \
- FIELD_GET(HECAP_PHY_UL_MUOFDMA, hecap_phy[HECAP_PHYDWORD_0])
+ FIELD_GET(HECAP_PHY_UL_MUOFDMA, hecap_phy[HE_PHYCAP_BYTE_2])
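The renamed masks above switch from treating the HE PHY capability field as an array of 32-bit words to treating it as the octet array it really is: bit 31 of word 0 is bit 7 of byte 3, bits 22 and 23 of word 0 are bits 6 and 7 of byte 2, and bit 0 of word 1 is bit 0 of byte 4, assuming the octets were packed little-endian into the old words. A small stand-alone check of that equivalence, written in plain C and independent of the driver headers and of FIELD_GET():

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	/* A few HE PHY capability octets with an arbitrary test pattern. */
	uint8_t phy[8] = { 0x5a, 0x00, 0xc0, 0x80, 0x01 };
	uint32_t dword0, dword1;

	memcpy(&dword0, &phy[0], 4);	/* little-endian host assumed */
	memcpy(&dword1, &phy[4], 4);

	/* SU beamformer: old BIT(31) of word 0 == new BIT(7) of byte 3 */
	assert(((dword0 >> 31) & 1) == ((phy[3] >> 7) & 1));
	/* UL MU-MIMO: old BIT(22) of word 0 == new BIT(6) of byte 2 */
	assert(((dword0 >> 22) & 1) == ((phy[2] >> 6) & 1));
	/* SU beamformee: old BIT(0) of word 1 == new BIT(0) of byte 4 */
	assert((dword1 & 1) == (phy[4] & 1));
	return 0;
}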
#define HE_MODE_SU_TX_BFEE BIT(0)
#define HE_MODE_SU_TX_BFER BIT(1)
#define HE_DL_MUOFDMA_ENABLE 1
#define HE_UL_MUOFDMA_ENABLE 1
#define HE_DL_MUMIMO_ENABLE 1
+#define HE_UL_MUMIMO_ENABLE 1
#define HE_MU_BFEE_ENABLE 1
#define HE_SU_BFEE_ENABLE 1
+#define HE_MU_BFER_ENABLE 1
+#define HE_SU_BFER_ENABLE 1
#define HE_VHT_SOUNDING_MODE_ENABLE 1
#define HE_SU_MU_SOUNDING_MODE_ENABLE 1
#define WMI_SCAN_DWELL_MODE_MASK 0x00E00000
#define WMI_SCAN_DWELL_MODE_SHIFT 21
+#define WMI_SCAN_FLAG_EXT_PASSIVE_SCAN_START_TIME_ENHANCE 0x00000800
enum {
WMI_SCAN_DWELL_MODE_DEFAULT = 0,
};
u32 scan_events;
};
+ u32 scan_ctrl_flags_ext;
u32 dwell_time_active;
u32 dwell_time_active_2g;
u32 dwell_time_passive;
#define MAX_REG_RULES 10
#define REG_ALPHA2_LEN 2
+#define MAX_6GHZ_REG_RULES 5
enum wmi_start_event_param {
WMI_VDEV_START_RESP_EVENT = 0,
WMI_VDEV_START_RESPONSE_INVALID_REGDOMAIN = 4,
};
-;
-enum cc_setting_code {
- REG_SET_CC_STATUS_PASS = 0,
- REG_CURRENT_ALPHA2_NOT_FOUND = 1,
- REG_INIT_ALPHA2_NOT_FOUND = 2,
- REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
- REG_SET_CC_STATUS_NO_MEMORY = 4,
- REG_SET_CC_STATUS_FAIL = 5,
-};
-
/* Regulatory Rule Flags Passed by FW */
#define REGULATORY_CHAN_DISABLED BIT(0)
#define REGULATORY_CHAN_NO_IR BIT(1)
#define REGULATORY_CHAN_NO_20MHZ BIT(11)
#define REGULATORY_CHAN_NO_10MHZ BIT(12)
-enum {
+enum wmi_reg_chan_list_cmd_type {
+ WMI_REG_CHAN_LIST_CC_ID = 0,
+ WMI_REG_CHAN_LIST_CC_EXT_ID = 1,
+};
+
+enum wmi_reg_cc_setting_code {
WMI_REG_SET_CC_STATUS_PASS = 0,
WMI_REG_CURRENT_ALPHA2_NOT_FOUND = 1,
WMI_REG_INIT_ALPHA2_NOT_FOUND = 2,
WMI_REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
WMI_REG_SET_CC_STATUS_NO_MEMORY = 4,
WMI_REG_SET_CC_STATUS_FAIL = 5,
+
+ /* add new setting code above, update in
+ * @enum cc_setting_code as well.
+ * Also handle it in ath11k_wmi_cc_setting_code_to_reg()
+ */
+};
+
+enum cc_setting_code {
+ REG_SET_CC_STATUS_PASS = 0,
+ REG_CURRENT_ALPHA2_NOT_FOUND = 1,
+ REG_INIT_ALPHA2_NOT_FOUND = 2,
+ REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
+ REG_SET_CC_STATUS_NO_MEMORY = 4,
+ REG_SET_CC_STATUS_FAIL = 5,
+
+ /* add new setting code above, update in
+ * @enum wmi_reg_cc_setting_code as well.
+ * Also handle it in ath11k_cc_status_to_str()
+ */
+};
+
+static inline enum cc_setting_code
+ath11k_wmi_cc_setting_code_to_reg(enum wmi_reg_cc_setting_code status_code)
+{
+ switch (status_code) {
+ case WMI_REG_SET_CC_STATUS_PASS:
+ return REG_SET_CC_STATUS_PASS;
+ case WMI_REG_CURRENT_ALPHA2_NOT_FOUND:
+ return REG_CURRENT_ALPHA2_NOT_FOUND;
+ case WMI_REG_INIT_ALPHA2_NOT_FOUND:
+ return REG_INIT_ALPHA2_NOT_FOUND;
+ case WMI_REG_SET_CC_CHANGE_NOT_ALLOWED:
+ return REG_SET_CC_CHANGE_NOT_ALLOWED;
+ case WMI_REG_SET_CC_STATUS_NO_MEMORY:
+ return REG_SET_CC_STATUS_NO_MEMORY;
+ case WMI_REG_SET_CC_STATUS_FAIL:
+ return REG_SET_CC_STATUS_FAIL;
+ }
+
+ return REG_SET_CC_STATUS_FAIL;
+}
+
+static inline const char *ath11k_cc_status_to_str(enum cc_setting_code code)
+{
+ switch (code) {
+ case REG_SET_CC_STATUS_PASS:
+ return "REG_SET_CC_STATUS_PASS";
+ case REG_CURRENT_ALPHA2_NOT_FOUND:
+ return "REG_CURRENT_ALPHA2_NOT_FOUND";
+ case REG_INIT_ALPHA2_NOT_FOUND:
+ return "REG_INIT_ALPHA2_NOT_FOUND";
+ case REG_SET_CC_CHANGE_NOT_ALLOWED:
+ return "REG_SET_CC_CHANGE_NOT_ALLOWED";
+ case REG_SET_CC_STATUS_NO_MEMORY:
+ return "REG_SET_CC_STATUS_NO_MEMORY";
+ case REG_SET_CC_STATUS_FAIL:
+ return "REG_SET_CC_STATUS_FAIL";
+ }
+
+ return "Unknown CC status";
+}
+
+enum wmi_reg_6ghz_ap_type {
+ WMI_REG_INDOOR_AP = 0,
+ WMI_REG_STANDARD_POWER_AP = 1,
+ WMI_REG_VERY_LOW_POWER_AP = 2,
+
+ /* add AP type above, handle in ath11k_6ghz_ap_type_to_str()
+ */
+ WMI_REG_CURRENT_MAX_AP_TYPE,
+ WMI_REG_MAX_AP_TYPE = 7,
};
+static inline const char *
+ath11k_6ghz_ap_type_to_str(enum wmi_reg_6ghz_ap_type type)
+{
+ switch (type) {
+ case WMI_REG_INDOOR_AP:
+ return "INDOOR AP";
+ case WMI_REG_STANDARD_POWER_AP:
+ return "STANDARD POWER AP";
+ case WMI_REG_VERY_LOW_POWER_AP:
+ return "VERY LOW POWER AP";
+ case WMI_REG_CURRENT_MAX_AP_TYPE:
+ return "CURRENT_MAX_AP_TYPE";
+ case WMI_REG_MAX_AP_TYPE:
+ return "MAX_AP_TYPE";
+ }
+
+ return "unknown 6 GHz AP type";
+}
+
+enum wmi_reg_6ghz_client_type {
+ WMI_REG_DEFAULT_CLIENT = 0,
+ WMI_REG_SUBORDINATE_CLIENT = 1,
+ WMI_REG_MAX_CLIENT_TYPE = 2,
+
+ /* add client type above, handle it in
+ * ath11k_6ghz_client_type_to_str()
+ */
+};
+
+static inline const char *
+ath11k_6ghz_client_type_to_str(enum wmi_reg_6ghz_client_type type)
+{
+ switch (type) {
+ case WMI_REG_DEFAULT_CLIENT:
+ return "DEFAULT CLIENT";
+ case WMI_REG_SUBORDINATE_CLIENT:
+ return "SUBORDINATE CLIENT";
+ case WMI_REG_MAX_CLIENT_TYPE:
+ return "MAX_CLIENT_TYPE";
+ }
+
+ return "unknown 6 GHz client type";
+}
+
+enum reg_subdomains_6ghz {
+ EMPTY_6GHZ = 0x0,
+ FCC1_CLIENT_LPI_REGULAR_6GHZ = 0x01,
+ FCC1_CLIENT_SP_6GHZ = 0x02,
+ FCC1_AP_LPI_6GHZ = 0x03,
+ FCC1_CLIENT_LPI_SUBORDINATE = FCC1_AP_LPI_6GHZ,
+ FCC1_AP_SP_6GHZ = 0x04,
+ ETSI1_LPI_6GHZ = 0x10,
+ ETSI1_VLP_6GHZ = 0x11,
+ ETSI2_LPI_6GHZ = 0x12,
+ ETSI2_VLP_6GHZ = 0x13,
+ APL1_LPI_6GHZ = 0x20,
+ APL1_VLP_6GHZ = 0x21,
+
+ /* add sub-domain above, handle it in
+ * ath11k_sub_reg_6ghz_to_str()
+ */
+};
+
+static inline const char *
+ath11k_sub_reg_6ghz_to_str(enum reg_subdomains_6ghz sub_id)
+{
+ switch (sub_id) {
+ case EMPTY_6GHZ:
+ return "N/A";
+ case FCC1_CLIENT_LPI_REGULAR_6GHZ:
+ return "FCC1_CLIENT_LPI_REGULAR_6GHZ";
+ case FCC1_CLIENT_SP_6GHZ:
+ return "FCC1_CLIENT_SP_6GHZ";
+ case FCC1_AP_LPI_6GHZ:
+ return "FCC1_AP_LPI_6GHZ/FCC1_CLIENT_LPI_SUBORDINATE";
+ case FCC1_AP_SP_6GHZ:
+ return "FCC1_AP_SP_6GHZ";
+ case ETSI1_LPI_6GHZ:
+ return "ETSI1_LPI_6GHZ";
+ case ETSI1_VLP_6GHZ:
+ return "ETSI1_VLP_6GHZ";
+ case ETSI2_LPI_6GHZ:
+ return "ETSI2_LPI_6GHZ";
+ case ETSI2_VLP_6GHZ:
+ return "ETSI2_VLP_6GHZ";
+ case APL1_LPI_6GHZ:
+ return "APL1_LPI_6GHZ";
+ case APL1_VLP_6GHZ:
+ return "APL1_VLP_6GHZ";
+ }
+
+ return "unknown sub reg id";
+}
+
+enum reg_super_domain_6ghz {
+ FCC1_6GHZ = 0x01,
+ ETSI1_6GHZ = 0x02,
+ ETSI2_6GHZ = 0x03,
+ APL1_6GHZ = 0x04,
+ FCC1_6GHZ_CL = 0x05,
+
+ /* add super domain above, handle it in
+ * ath11k_super_reg_6ghz_to_str()
+ */
+};
+
+static inline const char *
+ath11k_super_reg_6ghz_to_str(enum reg_super_domain_6ghz domain_id)
+{
+ switch (domain_id) {
+ case FCC1_6GHZ:
+ return "FCC1_6GHZ";
+ case ETSI1_6GHZ:
+ return "ETSI1_6GHZ";
+ case ETSI2_6GHZ:
+ return "ETSI2_6GHZ";
+ case APL1_6GHZ:
+ return "APL1_6GHZ";
+ case FCC1_6GHZ_CL:
+ return "FCC1_6GHZ_CL";
+ }
+
+ return "unknown domain id";
+}
+
struct cur_reg_rule {
u16 start_freq;
u16 end_freq;
u8 reg_power;
u8 ant_gain;
u16 flags;
+ bool psd_flag;
+ s8 psd_eirp;
};
struct cur_regulatory_info {
u8 alpha2[REG_ALPHA2_LEN + 1];
u32 dfs_region;
u32 phybitmap;
- u32 min_bw_2g;
- u32 max_bw_2g;
- u32 min_bw_5g;
- u32 max_bw_5g;
- u32 num_2g_reg_rules;
- u32 num_5g_reg_rules;
- struct cur_reg_rule *reg_rules_2g_ptr;
- struct cur_reg_rule *reg_rules_5g_ptr;
+ u32 min_bw_2ghz;
+ u32 max_bw_2ghz;
+ u32 min_bw_5ghz;
+ u32 max_bw_5ghz;
+ u32 num_2ghz_reg_rules;
+ u32 num_5ghz_reg_rules;
+ struct cur_reg_rule *reg_rules_2ghz_ptr;
+ struct cur_reg_rule *reg_rules_5ghz_ptr;
+ bool is_ext_reg_event;
+ enum wmi_reg_6ghz_client_type client_type;
+ bool rnr_tpe_usable;
+ bool unspecified_ap_usable;
+ u8 domain_code_6ghz_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u8 domain_code_6ghz_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 domain_code_6ghz_super_id;
+ u32 min_bw_6ghz_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 max_bw_6ghz_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 min_bw_6ghz_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 max_bw_6ghz_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ u32 num_6ghz_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6ghz_rules_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
+ struct cur_reg_rule *reg_rules_6ghz_ap_ptr[WMI_REG_CURRENT_MAX_AP_TYPE];
+ struct cur_reg_rule *reg_rules_6ghz_client_ptr
+ [WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
};
struct wmi_reg_chan_list_cc_event {
u32 domain_code;
u32 dfs_region;
u32 phybitmap;
- u32 min_bw_2g;
- u32 max_bw_2g;
- u32 min_bw_5g;
- u32 max_bw_5g;
- u32 num_2g_reg_rules;
- u32 num_5g_reg_rules;
+ u32 min_bw_2ghz;
+ u32 max_bw_2ghz;
+ u32 min_bw_5ghz;
+ u32 max_bw_5ghz;
+ u32 num_2ghz_reg_rules;
+ u32 num_5ghz_reg_rules;
} __packed;
struct wmi_regulatory_rule_struct {
u32 flag_info;
};
+#define WMI_REG_CLIENT_MAX 4
+
+struct wmi_reg_chan_list_cc_ext_event {
+ u32 status_code;
+ u32 phy_id;
+ u32 alpha2;
+ u32 num_phy;
+ u32 country_id;
+ u32 domain_code;
+ u32 dfs_region;
+ u32 phybitmap;
+ u32 min_bw_2ghz;
+ u32 max_bw_2ghz;
+ u32 min_bw_5ghz;
+ u32 max_bw_5ghz;
+ u32 num_2ghz_reg_rules;
+ u32 num_5ghz_reg_rules;
+ u32 client_type;
+ u32 rnr_tpe_usable;
+ u32 unspecified_ap_usable;
+ u32 domain_code_6ghz_ap_lpi;
+ u32 domain_code_6ghz_ap_sp;
+ u32 domain_code_6ghz_ap_vlp;
+ u32 domain_code_6ghz_client_lpi[WMI_REG_CLIENT_MAX];
+ u32 domain_code_6ghz_client_sp[WMI_REG_CLIENT_MAX];
+ u32 domain_code_6ghz_client_vlp[WMI_REG_CLIENT_MAX];
+ u32 domain_code_6ghz_super_id;
+ u32 min_bw_6ghz_ap_sp;
+ u32 max_bw_6ghz_ap_sp;
+ u32 min_bw_6ghz_ap_lpi;
+ u32 max_bw_6ghz_ap_lpi;
+ u32 min_bw_6ghz_ap_vlp;
+ u32 max_bw_6ghz_ap_vlp;
+ u32 min_bw_6ghz_client_sp[WMI_REG_CLIENT_MAX];
+ u32 max_bw_6ghz_client_sp[WMI_REG_CLIENT_MAX];
+ u32 min_bw_6ghz_client_lpi[WMI_REG_CLIENT_MAX];
+ u32 max_bw_6ghz_client_lpi[WMI_REG_CLIENT_MAX];
+ u32 min_bw_6ghz_client_vlp[WMI_REG_CLIENT_MAX];
+ u32 max_bw_6ghz_client_vlp[WMI_REG_CLIENT_MAX];
+ u32 num_6ghz_reg_rules_ap_sp;
+ u32 num_6ghz_reg_rules_ap_lpi;
+ u32 num_6ghz_reg_rules_ap_vlp;
+ u32 num_6ghz_reg_rules_client_sp[WMI_REG_CLIENT_MAX];
+ u32 num_6ghz_reg_rules_client_lpi[WMI_REG_CLIENT_MAX];
+ u32 num_6ghz_reg_rules_client_vlp[WMI_REG_CLIENT_MAX];
+} __packed;
+
+struct wmi_regulatory_ext_rule {
+ u32 tlv_header;
+ u32 freq_info;
+ u32 bw_pwr_info;
+ u32 flag_info;
+ u32 psd_power_info;
+} __packed;
+
struct wmi_vdev_delete_resp_event {
u32 vdev_id;
} __packed;
u32 desc_id;
u32 status;
u32 pdev_id;
+ u32 ppdu_id;
+ u32 ack_rssi;
} __packed;
struct wmi_scan_event {
u32 sched_params;
u32 twt_ap_pdev_count;
u32 twt_ap_sta_count;
+ u8 is_reg_cc_ext_event_supported;
};
enum wmi_debug_log_param {
ret = ath12k_ce_alloc_pipe(ab, i);
if (ret) {
- /* Free any parial successful allocation */
+ /* Free any partial successful allocation */
ath12k_ce_free_pipes(ab);
return ret;
}
/* Below regd's are protected by ab->data_lock */
/* This is the regd set for every radio
- * by the firmware during initializatin
+ * by the firmware during initialization
*/
struct ieee80211_regdomain *default_regd[MAX_RADIOS];
/* This regd is set during dynamic country setting
}
if (dp->spt_info[i].paddr & ATH12K_SPT_4K_ALIGN_CHECK) {
- ath12k_warn(ab, "SPT allocated memoty is not 4K aligned");
+ ath12k_warn(ab, "SPT allocated memory is not 4K aligned");
ret = -EINVAL;
goto free;
}
dp->reoq_lut.vaddr = dma_alloc_coherent(ab->dev,
DP_REOQ_LUT_SIZE,
&dp->reoq_lut.paddr,
- GFP_KERNEL);
-
+ GFP_KERNEL | __GFP_ZERO);
if (!dp->reoq_lut.vaddr) {
ath12k_warn(ab, "failed to allocate memory for reoq table");
return -ENOMEM;
}
- memset(dp->reoq_lut.vaddr, 0, DP_REOQ_LUT_SIZE);
-
ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO1_QDESC_LUT_BASE0(ab),
dp->reoq_lut.paddr);
return 0;
#define HTT_TX_WBM_COMP_STATUS_OFFSET 8
-/* HTT tx completion is overlayed in wbm_release_ring */
+/* HTT tx completion is overlaid in wbm_release_ring */
#define HTT_TX_WBM_COMP_INFO0_STATUS GENMASK(16, 13)
#define HTT_TX_WBM_COMP_INFO1_REINJECT_REASON GENMASK(3, 0)
#define HTT_TX_WBM_COMP_INFO1_EXCEPTION_FRAME BIT(4)
* 3'b010: 4 usec
* 3'b011: 8 usec (default)
* 3'b100: 16 usec
- * Others: Reserverd
+ * Others: Reserved
* b'19 - response_required:
* Host needs HTT_T2H_MSG_TYPE_SRING_SETUP_DONE as response
* b'20:31 - reserved: reserved for future use
__le32 tlv_filter_mask_in1;
__le32 tlv_filter_mask_in2;
__le32 tlv_filter_mask_in3;
- __le32 reserverd[3];
+ __le32 reserved[3];
} __packed;
#define HTT_TX_RING_TLV_FILTER_MGMT_DMA_LEN GENMASK(3, 0)
case 3:
if (he_dcm && he_stbc) {
he_gi = HE_GI_0_8;
- he_ltf = HE_LTF_4_X;
+ he_ltf = HE_LTF_4_X;
} else {
he_gi = HE_GI_3_2;
he_ltf = HE_LTF_4_X;
- }
- break;
+ }
+ break;
}
ppdu_info->gi = he_gi;
value = he_gi << HE_GI_SHIFT;
spin_unlock_bh(&buf_ring->idr_lock);
if (unlikely(!msdu)) {
- ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
buf_id);
return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
}
/* PN for multicast packets is not validated in HW,
* so skip 802.3 rx path
- * Also, fast_rx expectes the STA to be authorized, hence
+ * Also, fast_rx expects the STA to be authorized, hence
* eapol packets are sent in slow path.
*/
if (decap == DP_RX_DECAP_TYPE_ETHERNET2_DIX && !is_eapol_tkip &&
struct sk_buff *skb;
struct hal_srng *srng;
dma_addr_t paddr;
- u32 cookie, buf_id;
+ u32 cookie;
+ int buf_id;
srng = &ab->hal.srng_list[buf_ring->refill_buf_ring.ring_id];
spin_lock_bh(&srng->lock);
spin_unlock_bh(&buf_ring->idr_lock);
if (unlikely(!msdu)) {
- ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
buf_id);
return DP_MON_TX_STATUS_PPDU_NOT_DONE;
}
spin_unlock_bh(&buf_ring->idr_lock);
if (unlikely(!skb)) {
- ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
buf_id);
goto move_next;
}
spin_unlock_bh(&buf_ring->idr_lock);
if (unlikely(!skb)) {
- ath12k_warn(ab, "montior destination with invalid buf_id %d\n",
+ ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
buf_id);
goto move_next;
}
/* PN for multicast packets is not validated in HW,
* so skip 802.3 rx path
- * Also, fast_rx expectes the STA to be authorized, hence
+ * Also, fast_rx expects the STA to be authorized, hence
* eapol packets are sent in slow path.
*/
if (decap == DP_RX_DECAP_TYPE_ETHERNET2_DIX && !is_eapol &&
if (!desc_info) {
desc_info = ath12k_dp_get_rx_desc(ab, cookie);
if (!desc_info) {
- ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ ath12k_warn(ab, "Invalid cookie in manual desc retrieval");
continue;
}
}
if (!desc_info) {
desc_info = ath12k_dp_get_rx_desc(ab, cookie);
if (!desc_info) {
- ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ ath12k_warn(ab, "Invalid cookie in manual desc retrieval");
return -EINVAL;
}
}
msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
peer_id = ath12k_dp_rx_h_peer_id(ab, desc);
+ spin_lock(&ab->base_lock);
if (!ath12k_peer_find_by_id(ab, peer_id)) {
+ spin_unlock(&ab->base_lock);
ath12k_dbg(ab, ATH12K_DBG_DATA, "invalid peer id received in wbm err pkt%d\n",
peer_id);
return -EINVAL;
}
+ spin_unlock(&ab->base_lock);
if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE)) {
/* First buffer will be freed by the caller, so deduct its length */
if (!desc_info) {
desc_info = ath12k_dp_get_rx_desc(ab, err_info.cookie);
if (!desc_info) {
- ath12k_warn(ab, "Invalid cookie in manual desc retrival");
+ ath12k_warn(ab, "Invalid cookie in manual desc retrieval");
continue;
}
}
skb_ext_desc->len, DMA_TO_DEVICE);
ret = dma_mapping_error(ab->dev, ti.paddr);
if (ret) {
- kfree(skb_ext_desc);
+ kfree_skb(skb_ext_desc);
goto fail_unmap_dma;
}
HAL_WBM0_RELEASE_RING_BASE_LSB(ab);
s->reg_size[1] = HAL_WBM1_RELEASE_RING_HP - HAL_WBM0_RELEASE_RING_HP;
- /* Some LMAC rings are not accesed from the host:
+ /* Some LMAC rings are not accessed from the host:
* RXDMA_BUF, RXDMA_DST, RXDMA_MONITOR_BUF, RXDMA_MONITOR_STATUS,
* RXDMA_MONITOR_DST, RXDMA_MONITOR_DESC, RXDMA_DIR_BUF_SRC,
* RXDMA_RX_MONITOR_BUF, TX_MONITOR_BUF, TX_MONITOR_DST, SW2RXDMA
#define HAL_WBM_SW_COOKIE_CONV_CFG_WBM2SW4_EN BIT(5)
#define HAL_WBM_SW_COOKIE_CONV_CFG_GLOBAL_EN BIT(8)
-/* TCL ring feild mask and offset */
+/* TCL ring field mask and offset */
#define HAL_TCL1_RING_BASE_MSB_RING_SIZE GENMASK(27, 8)
#define HAL_TCL1_RING_BASE_MSB_RING_BASE_ADDR_MSB GENMASK(7, 0)
#define HAL_TCL1_RING_ID_ENTRY_SIZE GENMASK(7, 0)
#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP6 GENMASK(20, 18)
#define HAL_TCL1_RING_FIELD_DSCP_TID_MAP7 GENMASK(23, 21)
-/* REO ring feild mask and offset */
+/* REO ring field mask and offset */
#define HAL_REO1_RING_BASE_MSB_RING_SIZE GENMASK(27, 8)
#define HAL_REO1_RING_BASE_MSB_RING_BASE_ADDR_MSB GENMASK(7, 0)
#define HAL_REO1_RING_ID_RING_ID GENMASK(15, 8)
} u;
};
-/* Interrupt mitigation - Batch threshold in terms of numer of frames */
+/* Interrupt mitigation - Batch threshold in terms of number of frames */
#define HAL_SRNG_INT_BATCH_THRESHOLD_TX 256
#define HAL_SRNG_INT_BATCH_THRESHOLD_RX 128
#define HAL_SRNG_INT_BATCH_THRESHOLD_OTHER 1
#define HAL_REO_CMD_FLG_UNBLK_RESOURCE BIT(7)
#define HAL_REO_CMD_FLG_UNBLK_CACHE BIT(8)
-/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO0_UPD_* feilds */
+/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO0_UPD_* fields */
#define HAL_REO_CMD_UPD0_RX_QUEUE_NUM BIT(8)
#define HAL_REO_CMD_UPD0_VLD BIT(9)
#define HAL_REO_CMD_UPD0_ALDC BIT(10)
#define HAL_REO_CMD_UPD0_PN_VALID BIT(29)
#define HAL_REO_CMD_UPD0_PN BIT(30)
-/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO1_* feilds */
+/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO1_* fields */
#define HAL_REO_CMD_UPD1_VLD BIT(16)
#define HAL_REO_CMD_UPD1_ALDC GENMASK(18, 17)
#define HAL_REO_CMD_UPD1_DIS_DUP_DETECTION BIT(19)
#define HAL_REO_CMD_UPD1_PN_HANDLE_ENABLE BIT(30)
#define HAL_REO_CMD_UPD1_IGNORE_AMPDU_FLG BIT(31)
-/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO2_* feilds */
+/* Should be matching with HAL_REO_UPD_RX_QUEUE_INFO2_* fields */
#define HAL_REO_CMD_UPD2_SVLD BIT(10)
#define HAL_REO_CMD_UPD2_SSN GENMASK(22, 11)
#define HAL_REO_CMD_UPD2_SEQ_2K_ERR BIT(23)
*
* msdu_continuation
* When set, this MSDU buffer was not able to hold the entire MSDU.
- * The next buffer will therefor contain additional information
+ * The next buffer will therefore contain additional information
* related to this MSDU.
*
* msdu_length
* link descriptor.
*
* tcl_cmd_type
- * used to select the type of TCL Command decriptor
+ * used to select the type of TCL Command descriptor
*
* desc_type
* Indicates the type of address provided in the buf_addr_info.
* index_loop_override
* When set, address search and packet routing is forced to use
* 'search_index' instead of following the register configuration
- * seleced by Bank_id.
+ * selected by Bank_id.
*
* ring_id
* The buffer pointer ring ID.
* Producer: SW/TQM/RXDMA/REO/SWITCH
* Consumer: WBM/SW/FW
*
- * HTT tx status is overlayed on wbm_release ring on 4-byte words 2, 3, 4 and 5
+ * HTT tx status is overlaid on wbm_release ring on 4-byte words 2, 3, 4 and 5
* for software based completions.
*
* buf_addr_info
* commands.
*
* execution_time (in us)
- * The amount of time REO took to excecute the command. Note that
+ * The amount of time REO took to execute the command. Note that
* this time does not include the duration of the command waiting
* in the command ring, before the execution started.
*
"tcl2host-status-ring",
};
+static int ath12k_pci_bus_wake_up(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+
+ return mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+}
+
+static void ath12k_pci_bus_release(struct ath12k_base *ab)
+{
+ struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
+
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+}
+
+static const struct ath12k_pci_ops ath12k_pci_ops_qcn9274 = {
+ .wakeup = NULL,
+ .release = NULL,
+};
+
+static const struct ath12k_pci_ops ath12k_pci_ops_wcn7850 = {
+ .wakeup = ath12k_pci_bus_wake_up,
+ .release = ath12k_pci_bus_release,
+};
+
static void ath12k_pci_select_window(struct ath12k_pci *ab_pci, u32 offset)
{
struct ath12k_base *ab = ab_pci->ab;
{
struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
u32 val, window_start;
+ int ret = 0;
/* for offset beyond BAR + 4K - 32, may
* need to wakeup MHI to access.
*/
if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
- offset >= ACCESS_ALWAYS_OFF)
- mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->wakeup)
+ ret = ab_pci->pci_ops->wakeup(ab);
if (offset < WINDOW_START) {
val = ioread32(ab->mem + offset);
}
if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
- offset >= ACCESS_ALWAYS_OFF)
- mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
-
+ offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->release &&
+ !ret)
+ ab_pci->pci_ops->release(ab);
return val;
}
{
struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
u32 window_start;
+ int ret = 0;
/* for offset beyond BAR + 4K - 32, may
* need to wakeup MHI to access.
*/
if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
- offset >= ACCESS_ALWAYS_OFF)
- mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->wakeup)
+ ret = ab_pci->pci_ops->wakeup(ab);
if (offset < WINDOW_START) {
iowrite32(value, ab->mem + offset);
}
if (test_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
- offset >= ACCESS_ALWAYS_OFF)
- mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ offset >= ACCESS_ALWAYS_OFF && ab_pci->pci_ops->release &&
+ !ret)
+ ab_pci->pci_ops->release(ab);
}
int ath12k_pci_power_up(struct ath12k_base *ab)
case QCN9274_DEVICE_ID:
ab_pci->msi_config = &ath12k_msi_config[0];
ab->static_window_map = true;
+ ab_pci->pci_ops = &ath12k_pci_ops_qcn9274;
ath12k_pci_read_hw_version(ab, &soc_hw_version_major,
&soc_hw_version_minor);
switch (soc_hw_version_major) {
ab_pci->msi_config = &ath12k_msi_config[0];
ab->static_window_map = false;
ab->hw_rev = ATH12K_HW_WCN7850_HW20;
+ ab_pci->pci_ops = &ath12k_pci_ops_wcn7850;
break;
default:
ATH12K_PCI_ASPM_RESTORE,
};
+struct ath12k_pci_ops {
+ int (*wakeup)(struct ath12k_base *ab);
+ void (*release)(struct ath12k_base *ab);
+};
+
struct ath12k_pci {
struct pci_dev *pdev;
struct ath12k_base *ab;
/* enum ath12k_pci_flags */
unsigned long flags;
u16 link_ctl;
+ const struct ath12k_pci_ops *pci_ops;
};
static inline struct ath12k_pci *ath12k_pci_priv(struct ath12k_base *ab)
*
* l4_offset
* Depending upon mode bit, this field either indicates the
- * L4 offset nin bytes from the start of RX_HEADER (only valid
+ * L4 offset in bytes from the start of RX_HEADER (only valid
* if either ipv4_proto or ipv6_proto is set to 1) or indicates
* the offset in bytes to the start of TCP or UDP header from
* the start of the IP header after decapsulation (Only valid if
/* tx/rx chainmask reported from fw depends on the actual hw chains used,
* For example, for 4x4 capable macphys, first 4 chains can be used for first
- * mac and the remaing 4 chains can be used for the second mac or vice-versa.
+ * mac and the remaining 4 chains can be used for the second mac or vice-versa.
* In this case, tx/rx chainmask 0xf will be advertised for first mac and 0xf0
* will be advertised for second mac or vice-versa. Compute the shift value
* for tx/rx chainmask which will be used to advertise supported ht/vht rates to
int ret, len, key_len_aligned;
/* WMI_TAG_ARRAY_BYTE needs to be aligned with 4, the actual key
- * length is specifed in cmd->key_len.
+ * length is specified in cmd->key_len.
*/
key_len_aligned = roundup(arg->key_len, 4);
}
/* TODO: Use wmi_service_segment_offset information to get the service
- * especially when more services are advertised in multiple sevice
+ * especially when more services are advertised in multiple service
* available events.
*/
for (i = 0, j = WMI_MAX_SERVICE;
} __packed;
struct wmi_pdev_temperature_event {
- /* temperature value in Celcius degree */
+ /* temperature value in Celsius degree */
a_sle32 temp;
__le32 pdev_id;
} __packed;
*/
enum wmi_sta_ps_param_pspoll_count {
WMI_STA_PS_PSPOLL_COUNT_NO_MAX = 0,
- /* Values greater than 0 indicate the maximum numer of PS-Poll frames
+ /* Values greater than 0 indicate the maximum number of PS-Poll frames
* FW will send before waking up.
*/
};
goto err_out;
}
- res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (res == NULL) {
- dev_err(&pdev->dev, "no IRQ resource found\n");
- ret = -ENXIO;
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(&pdev->dev, "no IRQ resource found: %d\n", irq);
+ ret = irq;
goto err_iounmap;
}
- irq = res->start;
-
hw = ieee80211_alloc_hw(sizeof(struct ath5k_hw), &ath5k_hw_ops);
if (hw == NULL) {
dev_err(&pdev->dev, "no memory for ieee80211_hw\n");
ee->ee_n_piers[mode]++;
freq2 = (val >> 8) & 0xff;
- if (!freq2)
+ if (!freq2 || i >= max)
break;
pc[i++].freq = ath5k_eeprom_bin2freq(ee,
return -EACCES;
}
- size = sizeof(cid) + sizeof(addr) + sizeof(param);
+ size = sizeof(cid) + sizeof(addr) + sizeof(*param);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
* Thus the possibility of ar->htc_target being NULL
* via ath6kl_recv_complete -> ath6kl_usb_io_comp_work.
*/
- if (WARN_ON_ONCE(!target)) {
- ath6kl_err("Target not yet initialized\n");
+ if (!target) {
+ ath6kl_dbg(ATH6KL_DBG_HTC, "Target not yet initialized\n");
status = -EINVAL;
goto free_skb;
}
.send = hif_usb_send,
};
+/* Free the remain_skb allocated in ath9k_hif_usb_rx_stream in case
+ * ath9k_hif_usb_rx_stream is not called again to process the buffer
+ * and free it.
+ */
+static void ath9k_hif_usb_free_rx_remain_skb(struct hif_device_usb *hif_dev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hif_dev->rx_lock, flags);
+ if (hif_dev->remain_skb) {
+ dev_kfree_skb_any(hif_dev->remain_skb);
+ hif_dev->remain_skb = NULL;
+ hif_dev->rx_remain_len = 0;
+ RX_STAT_INC(hif_dev, skb_dropped);
+ }
+ spin_unlock_irqrestore(&hif_dev->rx_lock, flags);
+}
+
static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
struct sk_buff *skb)
{
static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
{
usb_kill_anchored_urbs(&hif_dev->rx_submitted);
+ ath9k_hif_usb_free_rx_remain_skb(hif_dev);
}
static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
hk.kv_len = key->keylen;
if (key->keylen)
- memcpy(hk.kv_val, key->key, key->keylen);
+ memcpy(&hk.kv_values, key->key, key->keylen);
if (!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
switch (vif->type) {
#define PREPARE_HAL_BUF(send_buf, msg_body) \
do { \
- memset(send_buf, 0, msg_body.header.len); \
- memcpy(send_buf, &msg_body, sizeof(msg_body)); \
+ memcpy_and_pad(send_buf, msg_body.header.len, \
+ &msg_body, sizeof(msg_body), 0); \
} while (0) \
#define PREPARE_HAL_PTT_MSG_BUF(send_buf, p_msg_body) \
return slot + 1;
}
-static inline int prev_slot(struct b43legacy_dmaring *ring, int slot)
-{
- B43legacy_WARN_ON(!(slot >= 0 && slot <= ring->nr_slots - 1));
- if (slot == 0)
- return ring->nr_slots - 1;
- return slot - 1;
-}
-
#ifdef CONFIG_B43LEGACY_DEBUG
static void update_max_used_slots(struct b43legacy_dmaring *ring,
int current_used_slots)
return ret;
}
-static inline
-u16 freq_r3A_value(u16 frequency)
-{
- u16 value;
-
- if (frequency < 5091)
- value = 0x0040;
- else if (frequency < 5321)
- value = 0x0000;
- else if (frequency < 5806)
- value = 0x0080;
- else
- value = 0x0040;
-
- return value;
-}
-
int b43legacy_radio_selectchannel(struct b43legacy_wldev *dev,
u8 channel,
int synthetic_pu_workaround)
of.o
brcmfmac-$(CONFIG_DMI) += \
dmi.o
+brcmfmac-$(CONFIG_ACPI) += \
+ acpi.o
ifeq ($(CONFIG_BRCMFMAC),m)
obj-m += wcc/
--- /dev/null
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright The Asahi Linux Contributors
+ */
+
+#include <linux/acpi.h>
+#include "debug.h"
+#include "core.h"
+#include "common.h"
+
+void brcmf_acpi_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ struct brcmf_mp_device *settings)
+{
+ acpi_status status;
+ const union acpi_object *o;
+ struct acpi_buffer buf = {ACPI_ALLOCATE_BUFFER, NULL};
+ struct acpi_device *adev = ACPI_COMPANION(dev);
+
+ if (!adev)
+ return;
+
+ if (!ACPI_FAILURE(acpi_dev_get_property(adev, "module-instance",
+ ACPI_TYPE_STRING, &o))) {
+ brcmf_dbg(INFO, "ACPI module-instance=%s\n", o->string.pointer);
+ settings->board_type = devm_kasprintf(dev, GFP_KERNEL,
+ "apple,%s",
+ o->string.pointer);
+ } else {
+ brcmf_dbg(INFO, "No ACPI module-instance\n");
+ return;
+ }
+
+ status = acpi_evaluate_object(adev->handle, "RWCV", NULL, &buf);
+ o = buf.pointer;
+ if (!ACPI_FAILURE(status) && o && o->type == ACPI_TYPE_BUFFER &&
+ o->buffer.length >= 2) {
+ char *antenna_sku = devm_kzalloc(dev, 3, GFP_KERNEL);
+
+ if (antenna_sku) {
+ memcpy(antenna_sku, o->buffer.pointer, 2);
+ brcmf_dbg(INFO, "ACPI RWCV data=%*phN antenna-sku=%s\n",
+ (int)o->buffer.length, o->buffer.pointer,
+ antenna_sku);
+ settings->antenna_sku = antenna_sku;
+ }
+
+ kfree(buf.pointer);
+ } else {
+ brcmf_dbg(INFO, "No ACPI antenna-sku\n");
+ }
+}
/* Firmware blobs that may be available */
enum brcmf_blob_type {
BRCMF_BLOB_CLM,
+ BRCMF_BLOB_TXCAP,
};
struct brcmf_mp_device;
}
}
+static void brcmf_scan_params_v2_to_v1(struct brcmf_scan_params_v2_le *params_v2_le,
+ struct brcmf_scan_params_le *params_le)
+{
+ size_t params_size;
+ u32 ch;
+ int n_channels, n_ssids;
+
+	memcpy(&params_le->ssid_le, &params_v2_le->ssid_le,
+	       sizeof(params_le->ssid_le));
+	memcpy(&params_le->bssid, &params_v2_le->bssid,
+	       sizeof(params_le->bssid));
+
+ params_le->bss_type = params_v2_le->bss_type;
+ params_le->scan_type = le32_to_cpu(params_v2_le->scan_type);
+ params_le->nprobes = params_v2_le->nprobes;
+ params_le->active_time = params_v2_le->active_time;
+ params_le->passive_time = params_v2_le->passive_time;
+ params_le->home_time = params_v2_le->home_time;
+ params_le->channel_num = params_v2_le->channel_num;
+
+ ch = le32_to_cpu(params_v2_le->channel_num);
+ n_channels = ch & BRCMF_SCAN_PARAMS_COUNT_MASK;
+ n_ssids = ch >> BRCMF_SCAN_PARAMS_NSSID_SHIFT;
+
+ params_size = sizeof(u16) * n_channels;
+ if (n_ssids > 0) {
+ params_size = roundup(params_size, sizeof(u32));
+ params_size += sizeof(struct brcmf_ssid_le) * n_ssids;
+ }
+
+	memcpy(&params_le->channel_list[0],
+	       &params_v2_le->channel_list[0], params_size);
+}
+
+static void brcmf_escan_prep(struct brcmf_cfg80211_info *cfg,
+ struct brcmf_scan_params_v2_le *params_le,
+ struct cfg80211_scan_request *request)
+{
+ u32 n_ssids;
+ u32 n_channels;
+ s32 i;
+ s32 offset;
+ u16 chanspec;
+ char *ptr;
+ int length;
+ struct brcmf_ssid_le ssid_le;
+
+ eth_broadcast_addr(params_le->bssid);
+
+ length = BRCMF_SCAN_PARAMS_V2_FIXED_SIZE;
+
+ params_le->version = cpu_to_le16(BRCMF_SCAN_PARAMS_VERSION_V2);
+ params_le->bss_type = DOT11_BSSTYPE_ANY;
+ params_le->scan_type = cpu_to_le32(BRCMF_SCANTYPE_ACTIVE);
+ params_le->channel_num = 0;
+ params_le->nprobes = cpu_to_le32(-1);
+ params_le->active_time = cpu_to_le32(-1);
+ params_le->passive_time = cpu_to_le32(-1);
+ params_le->home_time = cpu_to_le32(-1);
+	memset(&params_le->ssid_le, 0, sizeof(params_le->ssid_le));
+
+ /* Scan abort */
+ if (!request) {
+ length += sizeof(u16);
+ params_le->channel_num = cpu_to_le32(1);
+ params_le->channel_list[0] = cpu_to_le16(-1);
+ params_le->length = cpu_to_le16(length);
+ return;
+ }
+
+ n_ssids = request->n_ssids;
+ n_channels = request->n_channels;
+
+ /* Copy channel array if applicable */
+ brcmf_dbg(SCAN, "### List of channelspecs to scan ### %d\n",
+ n_channels);
+ if (n_channels > 0) {
+ length += roundup(sizeof(u16) * n_channels, sizeof(u32));
+ for (i = 0; i < n_channels; i++) {
+ chanspec = channel_to_chanspec(&cfg->d11inf,
+ request->channels[i]);
+ brcmf_dbg(SCAN, "Chan : %d, Channel spec: %x\n",
+ request->channels[i]->hw_value, chanspec);
+ params_le->channel_list[i] = cpu_to_le16(chanspec);
+ }
+ } else {
+ brcmf_dbg(SCAN, "Scanning all channels\n");
+ }
+
+ /* Copy ssid array if applicable */
+ brcmf_dbg(SCAN, "### List of SSIDs to scan ### %d\n", n_ssids);
+ if (n_ssids > 0) {
+ offset = offsetof(struct brcmf_scan_params_v2_le, channel_list) +
+ n_channels * sizeof(u16);
+ offset = roundup(offset, sizeof(u32));
+		length += sizeof(ssid_le) * n_ssids;
+ ptr = (char *)params_le + offset;
+ for (i = 0; i < n_ssids; i++) {
+ memset(&ssid_le, 0, sizeof(ssid_le));
+ ssid_le.SSID_len =
+ cpu_to_le32(request->ssids[i].ssid_len);
+ memcpy(ssid_le.SSID, request->ssids[i].ssid,
+ request->ssids[i].ssid_len);
+ if (!ssid_le.SSID_len)
+ brcmf_dbg(SCAN, "%d: Broadcast scan\n", i);
+ else
+ brcmf_dbg(SCAN, "%d: scan for %.32s size=%d\n",
+ i, ssid_le.SSID, ssid_le.SSID_len);
+ memcpy(ptr, &ssid_le, sizeof(ssid_le));
+ ptr += sizeof(ssid_le);
+ }
+ } else {
+ brcmf_dbg(SCAN, "Performing passive scan\n");
+ params_le->scan_type = cpu_to_le32(BRCMF_SCANTYPE_PASSIVE);
+ }
+ params_le->length = cpu_to_le16(length);
+ /* Adding mask to channel numbers */
+ params_le->channel_num =
+ cpu_to_le32((n_ssids << BRCMF_SCAN_PARAMS_NSSID_SHIFT) |
+ (n_channels & BRCMF_SCAN_PARAMS_COUNT_MASK));
+}
+
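Both scan parameter layouts above fold the SSID and channel counts into the single channel_num word, and brcmf_scan_params_v2_to_v1() recovers them by reversing that packing. A minimal sketch of the pack/unpack round trip; the shift and mask are written here as local constants and are assumptions, not values taken from the driver headers:

#include <assert.h>
#include <stdint.h>

/* Assumed layout: low half carries the channel count, high half the SSID
 * count; the driver's own COUNT_MASK/NSSID_SHIFT may differ in width.
 */
#define COUNT_MASK	0x0000ffffu
#define NSSID_SHIFT	16

static uint32_t pack(uint32_t n_ssids, uint32_t n_channels)
{
	return (n_ssids << NSSID_SHIFT) | (n_channels & COUNT_MASK);
}

int main(void)
{
	uint32_t ch = pack(2, 13);		/* 2 SSIDs, 13 channels */
	uint32_t n_channels = ch & COUNT_MASK;	/* unpack the channel count */
	uint32_t n_ssids = ch >> NSSID_SHIFT;	/* unpack the SSID count */

	assert(n_channels == 13 && n_ssids == 2);
	return 0;
}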
s32 brcmf_notify_escan_complete(struct brcmf_cfg80211_info *cfg,
struct brcmf_if *ifp, bool aborted,
bool fw_abort)
{
struct brcmf_pub *drvr = cfg->pub;
- struct brcmf_scan_params_le params_le;
+ struct brcmf_scan_params_v2_le params_v2_le;
struct cfg80211_scan_request *scan_request;
u64 reqid;
u32 bucket;
if (fw_abort) {
/* Do a scan abort to stop the driver's scan engine */
brcmf_dbg(SCAN, "ABORT scan in firmware\n");
-		memset(&params_le, 0, sizeof(params_le));
- eth_broadcast_addr(params_le.bssid);
- params_le.bss_type = DOT11_BSSTYPE_ANY;
- params_le.scan_type = 0;
- params_le.channel_num = cpu_to_le32(1);
- params_le.nprobes = cpu_to_le32(1);
- params_le.active_time = cpu_to_le32(-1);
- params_le.passive_time = cpu_to_le32(-1);
- params_le.home_time = cpu_to_le32(-1);
- /* Scan is aborted by setting channel_list[0] to -1 */
- params_le.channel_list[0] = cpu_to_le16(-1);
+
+		brcmf_escan_prep(cfg, &params_v2_le, NULL);
+
+		/* E-Scan (or any other type) can be aborted by SCAN */
- err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SCAN,
-					     &params_le, sizeof(params_le));
+ if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_SCAN_V2)) {
+ err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SCAN,
+						     &params_v2_le,
+ sizeof(params_v2_le));
+ } else {
+ struct brcmf_scan_params_le params_le;
+
+			brcmf_scan_params_v2_to_v1(&params_v2_le, &params_le);
+			err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SCAN,
+						     &params_le,
+ sizeof(params_le));
+ }
+
if (err)
bphy_err(drvr, "Scan abort failed\n");
}
return err;
}
-static void brcmf_escan_prep(struct brcmf_cfg80211_info *cfg,
- struct brcmf_scan_params_le *params_le,
- struct cfg80211_scan_request *request)
-{
- u32 n_ssids;
- u32 n_channels;
- s32 i;
- s32 offset;
- u16 chanspec;
- char *ptr;
- struct brcmf_ssid_le ssid_le;
-
- eth_broadcast_addr(params_le->bssid);
- params_le->bss_type = DOT11_BSSTYPE_ANY;
- params_le->scan_type = BRCMF_SCANTYPE_ACTIVE;
- params_le->channel_num = 0;
- params_le->nprobes = cpu_to_le32(-1);
- params_le->active_time = cpu_to_le32(-1);
- params_le->passive_time = cpu_to_le32(-1);
- params_le->home_time = cpu_to_le32(-1);
-	memset(&params_le->ssid_le, 0, sizeof(params_le->ssid_le));
-
- n_ssids = request->n_ssids;
- n_channels = request->n_channels;
-
- /* Copy channel array if applicable */
- brcmf_dbg(SCAN, "### List of channelspecs to scan ### %d\n",
- n_channels);
- if (n_channels > 0) {
- for (i = 0; i < n_channels; i++) {
- chanspec = channel_to_chanspec(&cfg->d11inf,
- request->channels[i]);
- brcmf_dbg(SCAN, "Chan : %d, Channel spec: %x\n",
- request->channels[i]->hw_value, chanspec);
- params_le->channel_list[i] = cpu_to_le16(chanspec);
- }
- } else {
- brcmf_dbg(SCAN, "Scanning all channels\n");
- }
- /* Copy ssid array if applicable */
- brcmf_dbg(SCAN, "### List of SSIDs to scan ### %d\n", n_ssids);
- if (n_ssids > 0) {
- offset = offsetof(struct brcmf_scan_params_le, channel_list) +
- n_channels * sizeof(u16);
- offset = roundup(offset, sizeof(u32));
- ptr = (char *)params_le + offset;
- for (i = 0; i < n_ssids; i++) {
- memset(&ssid_le, 0, sizeof(ssid_le));
- ssid_le.SSID_len =
- cpu_to_le32(request->ssids[i].ssid_len);
- memcpy(ssid_le.SSID, request->ssids[i].ssid,
- request->ssids[i].ssid_len);
- if (!ssid_le.SSID_len)
- brcmf_dbg(SCAN, "%d: Broadcast scan\n", i);
- else
- brcmf_dbg(SCAN, "%d: scan for %.32s size=%d\n",
- i, ssid_le.SSID, ssid_le.SSID_len);
- memcpy(ptr, &ssid_le, sizeof(ssid_le));
- ptr += sizeof(ssid_le);
- }
- } else {
- brcmf_dbg(SCAN, "Performing passive scan\n");
- params_le->scan_type = BRCMF_SCANTYPE_PASSIVE;
- }
- /* Adding mask to channel numbers */
- params_le->channel_num =
- cpu_to_le32((n_ssids << BRCMF_SCAN_PARAMS_NSSID_SHIFT) |
- (n_channels & BRCMF_SCAN_PARAMS_COUNT_MASK));
-}
-
static s32
brcmf_run_escan(struct brcmf_cfg80211_info *cfg, struct brcmf_if *ifp,
struct cfg80211_scan_request *request)
{
struct brcmf_pub *drvr = cfg->pub;
- s32 params_size = BRCMF_SCAN_PARAMS_FIXED_SIZE +
- offsetof(struct brcmf_escan_params_le, params_le);
+ s32 params_size = BRCMF_SCAN_PARAMS_V2_FIXED_SIZE +
+ offsetof(struct brcmf_escan_params_le, params_v2_le);
struct brcmf_escan_params_le *params;
s32 err = 0;
goto exit;
}
BUG_ON(params_size + sizeof("escan") >= BRCMF_DCMD_MEDLEN);
-	brcmf_escan_prep(cfg, &params->params_le, request);
- params->version = cpu_to_le32(BRCMF_ESCAN_REQ_VERSION);
+	brcmf_escan_prep(cfg, &params->params_v2_le, request);
+
+ params->version = cpu_to_le32(BRCMF_ESCAN_REQ_VERSION_V2);
+
+ if (!brcmf_feat_is_enabled(ifp, BRCMF_FEAT_SCAN_V2)) {
+ struct brcmf_escan_params_le *params_v1;
+
+ params_size -= BRCMF_SCAN_PARAMS_V2_FIXED_SIZE;
+ params_size += BRCMF_SCAN_PARAMS_FIXED_SIZE;
+		params_v1 = kzalloc(params_size, GFP_KERNEL);
+		if (!params_v1) {
+			kfree(params);
+			err = -ENOMEM;
+			goto exit;
+		}
+		params_v1->version = cpu_to_le32(BRCMF_ESCAN_REQ_VERSION);
+		brcmf_scan_params_v2_to_v1(&params->params_v2_le, &params_v1->params_le);
+ kfree(params);
+ params = params_v1;
+ }
+
params->action = cpu_to_le16(WL_ESCAN_ACTION_START);
params->sync_id = cpu_to_le16(0x1234);
{
struct brcmf_pub *drvr = ifp->drvr;
struct brcmf_wsec_pmk_le pmk;
- int i, err;
+ int err;
+
+ memset(&pmk, 0, sizeof(pmk));
- /* convert to firmware key format */
- pmk.key_len = cpu_to_le16(pmk_len << 1);
- pmk.flags = cpu_to_le16(BRCMF_WSEC_PASSPHRASE);
- for (i = 0; i < pmk_len; i++)
- snprintf(&pmk.key[2 * i], 3, "%02x", pmk_data[i]);
+ /* pass pmk directly */
+ pmk.key_len = cpu_to_le16(pmk_len);
+ pmk.flags = cpu_to_le16(0);
+ memcpy(pmk.key, pmk_data, pmk_len);
/* store psk in firmware */
err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_WSEC_PMK,
return 0;
}
+static s32
+brcmf_pmksa_v3_op(struct brcmf_if *ifp, struct cfg80211_pmksa *pmksa,
+ bool alive)
+{
+ struct brcmf_pmk_op_v3_le *pmk_op;
+ int length = offsetof(struct brcmf_pmk_op_v3_le, pmk);
+ int ret;
+
+	pmk_op = kzalloc(sizeof(*pmk_op), GFP_KERNEL);
+	if (!pmk_op)
+		return -ENOMEM;
+
+	pmk_op->version = cpu_to_le16(BRCMF_PMKSA_VER_3);
+
+ if (!pmksa) {
+ /* Flush operation, operate on entire list */
+ pmk_op->count = cpu_to_le16(0);
+ } else {
+ /* Single PMK operation */
+ pmk_op->count = cpu_to_le16(1);
+ length += sizeof(struct brcmf_pmksa_v3);
+ memcpy(pmk_op->pmk[0].bssid, pmksa->bssid, ETH_ALEN);
+ memcpy(pmk_op->pmk[0].pmkid, pmksa->pmkid, WLAN_PMKID_LEN);
+ pmk_op->pmk[0].pmkid_len = WLAN_PMKID_LEN;
+ pmk_op->pmk[0].time_left = cpu_to_le32(alive ? BRCMF_PMKSA_NO_EXPIRY : 0);
+ }
+
+ pmk_op->length = cpu_to_le16(length);
+
+ ret = brcmf_fil_iovar_data_set(ifp, "pmkid_info", pmk_op, sizeof(*pmk_op));
+ kfree(pmk_op);
+ return ret;
+}
+
static __used s32
brcmf_update_pmklist(struct brcmf_cfg80211_info *cfg, struct brcmf_if *ifp)
{
if (!check_vif_up(ifp->vif))
return -EIO;
+ brcmf_dbg(CONN, "set_pmksa - PMK bssid: %pM =\n", pmksa->bssid);
+ brcmf_dbg(CONN, "%*ph\n", WLAN_PMKID_LEN, pmksa->pmkid);
+
+ if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_PMKID_V3))
+ return brcmf_pmksa_v3_op(ifp, pmksa, true);
+
+ /* TODO: implement PMKID_V2 */
+
npmk = le32_to_cpu(cfg->pmk_list.npmk);
for (i = 0; i < npmk; i++)
if (!memcmp(pmksa->bssid, pmk[i].bssid, ETH_ALEN))
return -EINVAL;
}
- brcmf_dbg(CONN, "set_pmksa - PMK bssid: %pM =\n", pmk[npmk].bssid);
- brcmf_dbg(CONN, "%*ph\n", WLAN_PMKID_LEN, pmk[npmk].pmkid);
-
err = brcmf_update_pmklist(cfg, ifp);
brcmf_dbg(TRACE, "Exit\n");
brcmf_dbg(CONN, "del_pmksa - PMK bssid = %pM\n", pmksa->bssid);
+ if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_PMKID_V3))
+ return brcmf_pmksa_v3_op(ifp, pmksa, false);
+
+ /* TODO: implement PMKID_V2 */
+
npmk = le32_to_cpu(cfg->pmk_list.npmk);
for (i = 0; i < npmk; i++)
if (!memcmp(pmksa->bssid, pmk[i].bssid, ETH_ALEN))
if (!check_vif_up(ifp->vif))
return -EIO;
+ if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_PMKID_V3))
+ return brcmf_pmksa_v3_op(ifp, NULL, false);
+
+ /* TODO: implement PMKID_V2 */
+
memset(&cfg->pmk_list, 0, sizeof(cfg->pmk_list));
err = brcmf_update_pmklist(cfg, ifp);
(struct brcmf_cfg80211_assoc_ielen_le *)cfg->extra_buf;
req_len = le32_to_cpu(assoc_info->req_len);
resp_len = le32_to_cpu(assoc_info->resp_len);
+ if (req_len > WL_EXTRA_BUF_MAX || resp_len > WL_EXTRA_BUF_MAX) {
+ bphy_err(drvr, "invalid lengths in assoc info: req %u resp %u\n",
+ req_len, resp_len);
+ return -EINVAL;
+ }
if (req_len) {
err = brcmf_fil_iovar_data_get(ifp, "assoc_req_ies",
cfg->extra_buf,
{
struct brcmf_cfg80211_vif *vif = ifp->vif;
struct brcmf_rssi_be *info = data;
- s32 rssi, snr, noise;
+ s32 rssi, snr = 0, noise = 0;
s32 low, high, last;
- if (e->datalen < sizeof(*info)) {
+ if (e->datalen >= sizeof(*info)) {
+ rssi = be32_to_cpu(info->rssi);
+ snr = be32_to_cpu(info->snr);
+ noise = be32_to_cpu(info->noise);
+ } else if (e->datalen >= sizeof(rssi)) {
+ rssi = be32_to_cpu(*(__be32 *)data);
+ } else {
brcmf_err("insufficient RSSI event data\n");
return 0;
}
- rssi = be32_to_cpu(info->rssi);
- snr = be32_to_cpu(info->snr);
- noise = be32_to_cpu(info->noise);
-
low = vif->cqm_rssi_low;
high = vif->cqm_rssi_high;
last = vif->cqm_rssi_last;
switch (drvr->bus_if->chip) {
case BRCM_CC_43430_CHIP_ID:
case BRCM_CC_4345_CHIP_ID:
+ case BRCM_CC_4356_CHIP_ID:
case BRCM_CC_43602_CHIP_ID:
return true;
default:
#define ARMCR4_TCBANB_MASK 0xf
#define ARMCR4_TCBANB_SHIFT 0
-#define ARMCR4_BSZ_MASK 0x3f
+#define ARMCR4_BSZ_MASK 0x7f
#define ARMCR4_BSZ_MULT 8192
+#define ARMCR4_BLK_1K_MASK 0x200
struct brcmf_core_priv {
struct brcmf_core pub;
u32 nbb;
u32 totb;
u32 bxinfo;
+ u32 blksize;
u32 idx;
corecap = brcmf_chip_core_read32(cr4, ARMCR4_CAP);
for (idx = 0; idx < totb; idx++) {
brcmf_chip_core_write32(cr4, ARMCR4_BANKIDX, idx);
bxinfo = brcmf_chip_core_read32(cr4, ARMCR4_BANKINFO);
- memsize += ((bxinfo & ARMCR4_BSZ_MASK) + 1) * ARMCR4_BSZ_MULT;
+ blksize = ARMCR4_BSZ_MULT;
+ if (bxinfo & ARMCR4_BLK_1K_MASK)
+ blksize >>= 3;
+
+ memsize += ((bxinfo & ARMCR4_BSZ_MASK) + 1) * blksize;
}
return memsize;
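The bank walk above multiplies (size field + 1) by an 8 KiB block, and the new ARMCR4_BLK_1K_MASK check drops the multiplier to 1 KiB for banks built from 1 KiB blocks. A short worked sketch of that arithmetic with made-up BANKINFO values; the mask constants mirror the defines changed above:

#include <assert.h>
#include <stdint.h>

#define BSZ_MASK	0x7f
#define BSZ_MULT	8192
#define BLK_1K_MASK	0x200

/* Bytes contributed by one bank, given its BANKINFO word. */
static uint32_t bank_bytes(uint32_t bxinfo)
{
	uint32_t blksize = BSZ_MULT;

	if (bxinfo & BLK_1K_MASK)	/* bank is built from 1 KiB blocks */
		blksize >>= 3;		/* 8192 -> 1024 */

	return ((bxinfo & BSZ_MASK) + 1) * blksize;
}

int main(void)
{
	/* Hypothetical BANKINFO values: 32 blocks of 8 KiB, 64 of 1 KiB. */
	assert(bank_bytes(0x1f) == 32 * 8192);
	assert(bank_bytes(0x200 | 0x3f) == 64 * 1024);
	return 0;
}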
return 0x170000;
case BRCM_CC_4378_CHIP_ID:
return 0x352000;
+ case BRCM_CC_4387_CHIP_ID:
+ return 0x740000;
default:
brcmf_err("unknown chip: %s\n", ci->pub.name);
break;
static inline void
brcmf_chip_cr4_set_passive(struct brcmf_chip_priv *chip)
{
+ int i;
struct brcmf_core *core;
brcmf_chip_disable_arm(chip, BCMA_CORE_ARM_CR4);
- core = brcmf_chip_get_core(&chip->pub, BCMA_CORE_80211);
- brcmf_chip_resetcore(core, D11_BCMA_IOCTL_PHYRESET |
- D11_BCMA_IOCTL_PHYCLOCKEN,
- D11_BCMA_IOCTL_PHYCLOCKEN,
- D11_BCMA_IOCTL_PHYCLOCKEN);
+ /* Disable the cores only and let the firmware enable them.
+ * Releasing reset ourselves breaks BCM4387 in weird ways.
+ */
+ for (i = 0; (core = brcmf_chip_get_d11core(&chip->pub, i)); i++)
+ brcmf_chip_coredisable(core, D11_BCMA_IOCTL_PHYRESET |
+ D11_BCMA_IOCTL_PHYCLOCKEN,
+ D11_BCMA_IOCTL_PHYCLOCKEN);
}
static bool brcmf_chip_cr4_set_active(struct brcmf_chip_priv *chip, u32 rstvec)
static int brcmf_c_download(struct brcmf_if *ifp, u16 flag,
struct brcmf_dload_data_le *dload_buf,
- u32 len)
+ u32 len, const char *var)
{
s32 err;
dload_buf->len = cpu_to_le32(len);
dload_buf->crc = cpu_to_le32(0);
- err = brcmf_fil_iovar_data_set(ifp, "clmload", dload_buf,
+ err = brcmf_fil_iovar_data_set(ifp, var, dload_buf,
struct_size(dload_buf, data, len));
return err;
}
-static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
+static int brcmf_c_download_blob(struct brcmf_if *ifp,
+ const void *data, size_t size,
+ const char *loadvar, const char *statvar)
{
struct brcmf_pub *drvr = ifp->drvr;
- struct brcmf_bus *bus = drvr->bus_if;
struct brcmf_dload_data_le *chunk_buf;
- const struct firmware *clm = NULL;
u32 chunk_len;
u32 datalen;
u32 cumulative_len;
brcmf_dbg(TRACE, "Enter\n");
- err = brcmf_bus_get_blob(bus, &clm, BRCMF_BLOB_CLM);
- if (err || !clm) {
- brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
- err);
- return 0;
- }
-
chunk_buf = kzalloc(struct_size(chunk_buf, data, MAX_CHUNK_LEN),
GFP_KERNEL);
if (!chunk_buf) {
err = -ENOMEM;
- goto done;
+ return -ENOMEM;
}
- datalen = clm->size;
+ datalen = size;
cumulative_len = 0;
do {
if (datalen > MAX_CHUNK_LEN) {
chunk_len = datalen;
dl_flag |= DL_END;
}
- memcpy(chunk_buf->data, clm->data + cumulative_len, chunk_len);
+ memcpy(chunk_buf->data, data + cumulative_len, chunk_len);
- err = brcmf_c_download(ifp, dl_flag, chunk_buf, chunk_len);
+ err = brcmf_c_download(ifp, dl_flag, chunk_buf, chunk_len,
+ loadvar);
dl_flag &= ~DL_BEGIN;
} while ((datalen > 0) && (err == 0));
if (err) {
- bphy_err(drvr, "clmload (%zu byte file) failed (%d)\n",
- clm->size, err);
- /* Retrieve clmload_status and print */
- err = brcmf_fil_iovar_int_get(ifp, "clmload_status", &status);
+ bphy_err(drvr, "%s (%zu byte file) failed (%d)\n",
+ loadvar, size, err);
+ /* Retrieve status and print */
+ err = brcmf_fil_iovar_int_get(ifp, statvar, &status);
if (err)
- bphy_err(drvr, "get clmload_status failed (%d)\n", err);
+ bphy_err(drvr, "get %s failed (%d)\n", statvar, err);
else
- brcmf_dbg(INFO, "clmload_status=%d\n", status);
+ brcmf_dbg(INFO, "%s=%d\n", statvar, status);
err = -EIO;
}
kfree(chunk_buf);
-done:
- release_firmware(clm);
+ return err;
+}
+
+static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
+{
+ struct brcmf_pub *drvr = ifp->drvr;
+ struct brcmf_bus *bus = drvr->bus_if;
+ const struct firmware *fw = NULL;
+ s32 err;
+
+ brcmf_dbg(TRACE, "Enter\n");
+
+ err = brcmf_bus_get_blob(bus, &fw, BRCMF_BLOB_CLM);
+ if (err || !fw) {
+ brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
+ err);
+ return 0;
+ }
+
+ err = brcmf_c_download_blob(ifp, fw->data, fw->size,
+ "clmload", "clmload_status");
+
+ release_firmware(fw);
+ return err;
+}
+
+static int brcmf_c_process_txcap_blob(struct brcmf_if *ifp)
+{
+ struct brcmf_pub *drvr = ifp->drvr;
+ struct brcmf_bus *bus = drvr->bus_if;
+ const struct firmware *fw = NULL;
+ s32 err;
+
+ brcmf_dbg(TRACE, "Enter\n");
+
+ err = brcmf_bus_get_blob(bus, &fw, BRCMF_BLOB_TXCAP);
+ if (err || !fw) {
+ brcmf_info("no txcap_blob available (err=%d)\n", err);
+ return 0;
+ }
+
+ brcmf_info("TxCap blob found, loading\n");
+ err = brcmf_c_download_blob(ifp, fw->data, fw->size,
+ "txcapload", "txcapload_status");
+
+ release_firmware(fw);
return err;
}
0x00, 0x90, 0x4c, 0xc5, 0x12, 0x38
};
+static int brcmf_c_process_cal_blob(struct brcmf_if *ifp)
+{
+ struct brcmf_pub *drvr = ifp->drvr;
+ struct brcmf_mp_device *settings = drvr->settings;
+ s32 err;
+
+ brcmf_dbg(TRACE, "Enter\n");
+
+ if (!settings->cal_blob || !settings->cal_size)
+ return 0;
+
+ brcmf_info("Calibration blob provided by platform, loading\n");
+ err = brcmf_c_download_blob(ifp, settings->cal_blob, settings->cal_size,
+ "calload", "calload_status");
+ return err;
+}
+
int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
{
struct brcmf_pub *drvr = ifp->drvr;
goto done;
}
+ /* Do TxCap downloading, if needed */
+ err = brcmf_c_process_txcap_blob(ifp);
+ if (err < 0) {
+ bphy_err(drvr, "download TxCap blob file failed, %d\n", err);
+ goto done;
+ }
+
+ /* Download external calibration blob, if available */
+ err = brcmf_c_process_cal_blob(ifp);
+ if (err < 0) {
+ bphy_err(drvr, "download calibration blob file failed, %d\n", err);
+ goto done;
+ }
+
/* query for 'ver' to get version info from firmware */
memset(buf, 0, sizeof(buf));
err = brcmf_fil_iovar_data_get(ifp, "ver", buf, sizeof(buf));
/* No platform data for this device, try OF and DMI data */
brcmf_dmi_probe(settings, chip, chiprev);
brcmf_of_probe(dev, bus_type, settings);
+ brcmf_acpi_probe(dev, bus_type, settings);
}
return settings;
}
const char *board_type;
unsigned char mac[ETH_ALEN];
const char *antenna_sku;
+ const void *cal_blob;
+ int cal_size;
union {
struct brcmfmac_sdio_pd sdio;
} bus;
brcmf_dmi_probe(struct brcmf_mp_device *settings, u32 chip, u32 chiprev) {}
#endif
+#ifdef CONFIG_ACPI
+void brcmf_acpi_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ struct brcmf_mp_device *settings);
+#else
+static inline void brcmf_acpi_probe(struct device *dev,
+ enum brcmf_bus_type bus_type,
+ struct brcmf_mp_device *settings) {}
+#endif
+
u8 brcmf_map_prio_to_prec(void *cfg, u8 prio);
u8 brcmf_map_prio_to_aci(void *cfg, u8 prio);
drv->feat_flags |= feat_flags;
}
+struct brcmf_feat_wlcfeat {
+ u16 min_ver_major;
+ u16 min_ver_minor;
+ u32 feat_flags;
+};
+
+static const struct brcmf_feat_wlcfeat brcmf_feat_wlcfeat_map[] = {
+ { 12, 0, BIT(BRCMF_FEAT_PMKID_V2) },
+ { 13, 0, BIT(BRCMF_FEAT_PMKID_V3) },
+};
+
+static void brcmf_feat_wlc_version_overrides(struct brcmf_pub *drv)
+{
+ struct brcmf_if *ifp = brcmf_get_ifp(drv, 0);
+ const struct brcmf_feat_wlcfeat *e;
+ struct brcmf_wlc_version_le ver;
+ u32 feat_flags = 0;
+ int i, err, major, minor;
+
+ err = brcmf_fil_iovar_data_get(ifp, "wlc_ver", &ver, sizeof(ver));
+ if (err)
+ return;
+
+ major = le16_to_cpu(ver.wlc_ver_major);
+ minor = le16_to_cpu(ver.wlc_ver_minor);
+
+ brcmf_dbg(INFO, "WLC version: %d.%d\n", major, minor);
+
+ for (i = 0; i < ARRAY_SIZE(brcmf_feat_wlcfeat_map); i++) {
+ e = &brcmf_feat_wlcfeat_map[i];
+ if (major > e->min_ver_major ||
+ (major == e->min_ver_major &&
+ minor >= e->min_ver_minor)) {
+ feat_flags |= e->feat_flags;
+ }
+ }
+
+ if (!feat_flags)
+ return;
+
+ for (i = 0; i < BRCMF_FEAT_LAST; i++)
+ if (feat_flags & BIT(i))
+ brcmf_dbg(INFO, "enabling firmware feature: %s\n",
+ brcmf_feat_names[i]);
+ drv->feat_flags |= feat_flags;
+}
+
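For illustration, how the override table above resolves for a hypothetical firmware version (values made up, not a real capture):

/* Example: firmware reports wlc_ver 13.0
 *   { 12, 0, BIT(BRCMF_FEAT_PMKID_V2) }: 13 > 12           -> matches
 *   { 13, 0, BIT(BRCMF_FEAT_PMKID_V3) }: 13 == 13, 0 >= 0  -> matches
 * so feat_flags becomes BIT(BRCMF_FEAT_PMKID_V2) | BIT(BRCMF_FEAT_PMKID_V3)
 */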
/**
* brcmf_feat_iovar_int_get() - determine feature through iovar query.
*
ifp->drvr->feat_flags |= BIT(BRCMF_FEAT_SCAN_RANDOM_MAC);
brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_FWSUP, "sup_wpa");
+ brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_SCAN_V2, "scan_ver");
if (drvr->settings->feature_disable) {
brcmf_dbg(INFO, "Features: 0x%02x, disable: 0x%02x\n",
ifp->drvr->feat_flags &= ~drvr->settings->feature_disable;
}
+ brcmf_feat_wlc_version_overrides(drvr);
brcmf_feat_firmware_overrides(drvr);
/* set chip related quirks */
* SAE: simultaneous authentication of equals
* FWAUTH: Firmware authenticator
 * DUMP_OBSS: Firmware is capable of dumping OBSS info to support ACS
+ * SCAN_V2: Version 2 scan params
*/
#define BRCMF_FEAT_LIST \
BRCMF_FEAT_DEF(MBSS) \
BRCMF_FEAT_DEF(DOT11H) \
BRCMF_FEAT_DEF(SAE) \
BRCMF_FEAT_DEF(FWAUTH) \
- BRCMF_FEAT_DEF(DUMP_OBSS)
+ BRCMF_FEAT_DEF(DUMP_OBSS) \
+ BRCMF_FEAT_DEF(SCAN_V2) \
+ BRCMF_FEAT_DEF(PMKID_V2) \
+ BRCMF_FEAT_DEF(PMKID_V3)
/*
* Quirks:
/* size of brcmf_scan_params not including variable length array */
#define BRCMF_SCAN_PARAMS_FIXED_SIZE 64
+#define BRCMF_SCAN_PARAMS_V2_FIXED_SIZE 72
+
+/* version of brcmf_scan_params structure */
+#define BRCMF_SCAN_PARAMS_VERSION_V2 2
/* masks for channel and ssid count */
#define BRCMF_SCAN_PARAMS_COUNT_MASK 0x0000ffff
#define BRCMF_PRIMARY_KEY (1 << 1)
#define DOT11_BSSTYPE_ANY 2
#define BRCMF_ESCAN_REQ_VERSION 1
+#define BRCMF_ESCAN_REQ_VERSION_V2 2
#define BRCMF_MAXRATES_IN_SET 16 /* max # of rates in rateset */
#define BRCMF_HE_CAP_MCS_MAP_NSS_MAX 8
+#define BRCMF_PMKSA_VER_2 2
+#define BRCMF_PMKSA_VER_3 3
+#define BRCMF_PMKSA_NO_EXPIRY 0xffffffff
+
/* MAX_CHUNK_LEN is the maximum length for data passing to firmware in each
 * ioctl. It is relatively small because the firmware has a small maximum
 * input payload size restriction for ioctls.
unsigned char SSID[IEEE80211_MAX_SSID_LEN];
};
+/* Alternate SSID structure used in some places... */
+struct brcmf_ssid8_le {
+ u8 SSID_len;
+ unsigned char SSID[IEEE80211_MAX_SSID_LEN];
+};
+
struct brcmf_scan_params_le {
struct brcmf_ssid_le ssid_le; /* default: {0, ""} */
u8 bssid[ETH_ALEN]; /* default: bcast */
__le16 channel_list[1]; /* list of chanspecs */
};
+struct brcmf_scan_params_v2_le {
+ __le16 version; /* structure version */
+ __le16 length; /* structure length */
+ struct brcmf_ssid_le ssid_le; /* default: {0, ""} */
+ u8 bssid[ETH_ALEN]; /* default: bcast */
+ s8 bss_type; /* default: any,
+ * DOT11_BSSTYPE_ANY/INFRASTRUCTURE/INDEPENDENT
+ */
+ u8 pad;
+ __le32 scan_type; /* flags, 0 use default */
+ __le32 nprobes; /* -1 use default, number of probes per channel */
+ __le32 active_time; /* -1 use default, dwell time per channel for
+ * active scanning
+ */
+ __le32 passive_time; /* -1 use default, dwell time per channel
+ * for passive scanning
+ */
+ __le32 home_time; /* -1 use default, dwell time for the
+ * home channel between channel scans
+ */
+ __le32 channel_num; /* count of channels and ssids that follow
+ *
+ * low half is count of channels in
+ * channel_list, 0 means default (use all
+ * available channels)
+ *
+ * high half is entries in struct brcmf_ssid
+ * array that follows channel_list, aligned for
+ * s32 (4 bytes) meaning an odd channel count
+ * implies a 2-byte pad between end of
+ * channel_list and first ssid
+ *
+ * if ssid count is zero, single ssid in the
+ * fixed parameter portion is assumed, otherwise
+ * ssid in the fixed portion is ignored
+ */
+ __le16 channel_list[1]; /* list of chanspecs */
+};
+
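A minimal sketch of how a caller might pack the two counts into channel_num as documented above; the helper name is hypothetical and the 16-bit split mirrors BRCMF_SCAN_PARAMS_COUNT_MASK:

static inline __le32 brcmf_scan_params_pack_counts(u32 n_channels, u32 n_ssids)
{
	/* low 16 bits: channel count, high 16 bits: SSID count */
	return cpu_to_le32((n_channels & BRCMF_SCAN_PARAMS_COUNT_MASK) |
			   (n_ssids << 16));
}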
struct brcmf_scan_results {
u32 buflen;
u32 version;
__le32 version;
__le16 action;
__le16 sync_id;
- struct brcmf_scan_params_le params_le;
+ union {
+ struct brcmf_scan_params_le params_le;
+ struct brcmf_scan_params_v2_le params_v2_le;
+ };
};
struct brcmf_escan_result_le {
};
/**
+ * struct brcmf_wlc_version_le - firmware revision info.
+ *
+ * @version: structure version.
+ * @length: structure length.
+ * @epi_ver_major: EPI major version
+ * @epi_ver_minor: EPI minor version
+ * @epi_ver_rc: EPI rc version
+ * @epi_ver_incr: EPI increment version
+ * @wlc_ver_major: WLC major version
+ * @wlc_ver_minor: WLC minor version
+ */
+struct brcmf_wlc_version_le {
+ __le16 version;
+ __le16 length;
+
+ __le16 epi_ver_major;
+ __le16 epi_ver_minor;
+ __le16 epi_ver_rc;
+ __le16 epi_ver_incr;
+
+ __le16 wlc_ver_major;
+ __le16 wlc_ver_minor;
+};
+
+/**
* struct brcmf_assoclist_le - request assoc list.
*
* @count: indicates number of stations.
};
/**
+ * struct brcmf_pmksa_v2 - PMK Security Association
+ *
+ * @length: Length of the structure.
+ * @bssid: The AP's BSSID.
+ * @pmkid: The PMK ID.
+ * @pmk: PMK material for FILS key derivation.
+ * @pmk_len: Length of PMK data.
+ * @ssid: The AP's SSID.
+ * @fils_cache_id: FILS cache identifier
+ */
+struct brcmf_pmksa_v2 {
+ __le16 length;
+ u8 bssid[ETH_ALEN];
+ u8 pmkid[WLAN_PMKID_LEN];
+ u8 pmk[WLAN_PMK_LEN_SUITE_B_192];
+ __le16 pmk_len;
+ struct brcmf_ssid8_le ssid;
+ u16 fils_cache_id;
+};
+
+/**
+ * struct brcmf_pmksa_v3 - PMK Security Association
+ *
+ * @bssid: The AP's BSSID.
+ * @pmkid: The PMK ID.
+ * @pmkid_len: The length of the PMK ID.
+ * @pmk: PMK material for FILS key derivation.
+ * @pmk_len: Length of PMK data.
+ * @fils_cache_id: FILS cache identifier
+ * @ssid: The AP's SSID.
+ * @time_left: Remaining time until expiry. 0 = expired, ~0 = no expiry.
+ */
+struct brcmf_pmksa_v3 {
+ u8 bssid[ETH_ALEN];
+ u8 pmkid[WLAN_PMKID_LEN];
+ u8 pmkid_len;
+ u8 pmk[WLAN_PMK_LEN_SUITE_B_192];
+ u8 pmk_len;
+ __le16 fils_cache_id;
+ u8 pad;
+ struct brcmf_ssid8_le ssid;
+ __le32 time_left;
+};
+
+/**
* struct brcmf_pmk_list_le - List of pmksa's.
*
* @npmk: Number of pmksa's.
};
/**
+ * struct brcmf_pmk_list_v2_le - List of pmksa's.
+ *
+ * @version: Request version.
+ * @length: Length of this structure.
+ * @pmk: PMK SA information.
+ */
+struct brcmf_pmk_list_v2_le {
+ __le16 version;
+ __le16 length;
+ struct brcmf_pmksa_v2 pmk[BRCMF_MAXPMKID];
+};
+
+/**
+ * struct brcmf_pmk_op_v3_le - Operation on PMKSA list.
+ *
+ * @version: Request version.
+ * @length: Length of this structure.
+ * @pmk: PMK SA information.
+ */
+struct brcmf_pmk_op_v3_le {
+ __le16 version;
+ __le16 length;
+ __le16 count;
+ __le16 pad;
+ struct brcmf_pmksa_v3 pmk[BRCMF_MAXPMKID];
+};
+
+/**
* struct brcmf_pno_param_le - PNO scan configuration parameters
*
* @version: PNO parameters version.
if (!of_property_read_string(np, "apple,antenna-sku", &prop))
settings->antenna_sku = prop;
+ /* The WLAN calibration blob is normally stored in SROM, but Apple
+ * ARM64 platforms pass it via the DT instead.
+ */
+ prop = of_get_property(np, "brcm,cal-blob", &settings->cal_size);
+ if (prop && settings->cal_size)
+ settings->cal_blob = prop;
+
/* Set board-type to the first string of the machine compatible prop */
root = of_find_node_by_path("/");
if (root && err) {
sdio->drive_strength = val;
/* make sure there are interrupts defined in the node */
- if (!of_find_property(np, "interrupts", NULL))
+ if (!of_property_present(np, "interrupts"))
return;
irq = irq_of_parse_and_map(np, 0);
#include <linux/sched/signal.h>
#include <linux/kthread.h>
#include <linux/io.h>
+#include <linux/random.h>
#include <asm/unaligned.h>
#include <soc.h>
BRCMF_FW_CLM_DEF(43570, "brcmfmac43570-pcie");
BRCMF_FW_DEF(4358, "brcmfmac4358-pcie");
BRCMF_FW_DEF(4359, "brcmfmac4359-pcie");
+BRCMF_FW_DEF(4359C, "brcmfmac4359c-pcie");
BRCMF_FW_CLM_DEF(4364B2, "brcmfmac4364b2-pcie");
BRCMF_FW_CLM_DEF(4364B3, "brcmfmac4364b3-pcie");
BRCMF_FW_DEF(4365B, "brcmfmac4365b-pcie");
BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
BRCMF_FW_CLM_DEF(4377B3, "brcmfmac4377b3-pcie");
BRCMF_FW_CLM_DEF(4378B1, "brcmfmac4378b1-pcie");
+BRCMF_FW_CLM_DEF(4378B3, "brcmfmac4378b3-pcie");
+BRCMF_FW_CLM_DEF(4387C2, "brcmfmac4387c2-pcie");
/* firmware config files */
MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
/* per-board firmware binaries */
MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.bin");
MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.clm_blob");
+MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txcap_blob");
static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
BRCMF_FW_ENTRY(BRCM_CC_43602_CHIP_ID, 0xFFFFFFFF, 43602),
BRCMF_FW_ENTRY(BRCM_CC_43569_CHIP_ID, 0xFFFFFFFF, 43570),
BRCMF_FW_ENTRY(BRCM_CC_43570_CHIP_ID, 0xFFFFFFFF, 43570),
BRCMF_FW_ENTRY(BRCM_CC_4358_CHIP_ID, 0xFFFFFFFF, 4358),
- BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFFFF, 4359),
+ BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0x000001FF, 4359),
+ BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFE00, 4359C),
BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0x0000000F, 4364B2), /* 3 */
BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0xFFFFFFF0, 4364B3), /* 4 */
BRCMF_FW_ENTRY(BRCM_CC_4365_CHIP_ID, 0x0000000F, 4365B),
BRCMF_FW_ENTRY(BRCM_CC_43666_CHIP_ID, 0xFFFFFFF0, 4366C),
BRCMF_FW_ENTRY(BRCM_CC_4371_CHIP_ID, 0xFFFFFFFF, 4371),
BRCMF_FW_ENTRY(BRCM_CC_4377_CHIP_ID, 0xFFFFFFFF, 4377B3), /* revision ID 4 */
- BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0xFFFFFFFF, 4378B1), /* revision ID 3 */
+ BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0x0000000F, 4378B1), /* revision ID 3 */
+ BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0xFFFFFFE0, 4378B3), /* revision ID 5 */
+ BRCMF_FW_ENTRY(BRCM_CC_4387_CHIP_ID, 0xFFFFFFFF, 4387C2), /* revision ID 7 */
};
#define BRCMF_PCIE_FW_UP_TIMEOUT 5000 /* msec */
char fw_name[BRCMF_FW_NAME_LEN];
char nvram_name[BRCMF_FW_NAME_LEN];
char clm_name[BRCMF_FW_NAME_LEN];
+ char txcap_name[BRCMF_FW_NAME_LEN];
const struct firmware *clm_fw;
+ const struct firmware *txcap_fw;
const struct brcmf_pcie_reginfo *reginfo;
void __iomem *regs;
void __iomem *tcm;
*fw = devinfo->clm_fw;
devinfo->clm_fw = NULL;
break;
+ case BRCMF_BLOB_TXCAP:
+ *fw = devinfo->txcap_fw;
+ devinfo->txcap_fw = NULL;
+ break;
default:
return -ENOENT;
}
return 0;
}
+struct brcmf_random_seed_footer {
+ __le32 length;
+ __le32 magic;
+};
+
+#define BRCMF_RANDOM_SEED_MAGIC 0xfeedc0de
+#define BRCMF_RANDOM_SEED_LENGTH 0x100
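A sketch of the TCM layout produced by the code below (how the firmware consumes the seed is firmware-defined; addresses are relative to where the NVRAM image is written):

/*   nvram_base .. nvram_base + nvram_len : NVRAM contents
 *   nvram_base - 8                       : struct brcmf_random_seed_footer
 *                                          { .length = 0x100, .magic = 0xfeedc0de }
 *   nvram_base - 8 - 0x100               : BRCMF_RANDOM_SEED_LENGTH bytes of
 *                                          random data from get_random_bytes()
 */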
static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
const struct firmware *fw, void *nvram,
nvram_len;
memcpy_toio(devinfo->tcm + address, nvram, nvram_len);
brcmf_fw_nvram_free(nvram);
+
+ if (devinfo->otp.valid) {
+ size_t rand_len = BRCMF_RANDOM_SEED_LENGTH;
+ struct brcmf_random_seed_footer footer = {
+ .length = cpu_to_le32(rand_len),
+ .magic = cpu_to_le32(BRCMF_RANDOM_SEED_MAGIC),
+ };
+ void *randbuf;
+
+ /* Some Apple chips/firmwares expect a buffer of random
+ * data to be present before NVRAM
+ */
+ brcmf_dbg(PCIE, "Download random seed\n");
+
+ address -= sizeof(footer);
+ memcpy_toio(devinfo->tcm + address, &footer,
+ sizeof(footer));
+
+ address -= rand_len;
+ randbuf = kzalloc(rand_len, GFP_KERNEL);
+ get_random_bytes(randbuf, rand_len);
+ memcpy_toio(devinfo->tcm + address, randbuf, rand_len);
+ kfree(randbuf);
+ }
} else {
brcmf_dbg(PCIE, "No matching NVRAM file found %s\n",
devinfo->nvram_name);
base = 0x1120;
words = 0x170;
break;
+ case BRCM_CC_4387_CHIP_ID:
+ coreid = BCMA_CORE_GCI;
+ base = 0x113c;
+ words = 0x170;
+ break;
default:
/* OTP not supported on this chip */
return 0;
#define BRCMF_PCIE_FW_CODE 0
#define BRCMF_PCIE_FW_NVRAM 1
#define BRCMF_PCIE_FW_CLM 2
+#define BRCMF_PCIE_FW_TXCAP 3
static void brcmf_pcie_setup(struct device *dev, int ret,
struct brcmf_fw_request *fwreq)
nvram = fwreq->items[BRCMF_PCIE_FW_NVRAM].nv_data.data;
nvram_len = fwreq->items[BRCMF_PCIE_FW_NVRAM].nv_data.len;
devinfo->clm_fw = fwreq->items[BRCMF_PCIE_FW_CLM].binary;
+ devinfo->txcap_fw = fwreq->items[BRCMF_PCIE_FW_TXCAP].binary;
kfree(fwreq);
ret = brcmf_chip_get_raminfo(devinfo->ci);
{ ".bin", devinfo->fw_name },
{ ".txt", devinfo->nvram_name },
{ ".clm_blob", devinfo->clm_name },
+ { ".txcap_blob", devinfo->txcap_name },
};
fwreq = brcmf_fw_alloc_request(devinfo->ci->chip, devinfo->ci->chiprev,
fwreq->items[BRCMF_PCIE_FW_NVRAM].flags = BRCMF_FW_REQF_OPTIONAL;
fwreq->items[BRCMF_PCIE_FW_CLM].type = BRCMF_FW_TYPE_BINARY;
fwreq->items[BRCMF_PCIE_FW_CLM].flags = BRCMF_FW_REQF_OPTIONAL;
+ fwreq->items[BRCMF_PCIE_FW_TXCAP].type = BRCMF_FW_TYPE_BINARY;
+ fwreq->items[BRCMF_PCIE_FW_TXCAP].flags = BRCMF_FW_REQF_OPTIONAL;
/* NVRAM reserves PCI domain 0 for Broadcom's SDK faked bus */
fwreq->domain_nr = pci_domain_nr(devinfo->pdev->bus) + 1;
fwreq->bus_nr = devinfo->pdev->bus->number;
brcmf_pcie_reset_device(devinfo);
brcmf_pcie_release_resource(devinfo);
release_firmware(devinfo->clm_fw);
+ release_firmware(devinfo->txcap_fw);
if (devinfo->ci)
brcmf_chip_detach(devinfo->ci);
BRCMF_PCIE_DEVICE(BRCM_PCIE_43596_DEVICE_ID, CYW),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4377_DEVICE_ID, WCC),
BRCMF_PCIE_DEVICE(BRCM_PCIE_4378_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4387_DEVICE_ID, WCC),
{ /* end: all zeroes */ }
};
u16 seq, start_seq = 0, bindex, index, mcl;
u8 mcs = 0;
bool ba_recd = false, ack_recd = false;
- u8 suc_mpdu = 0, tot_mpdu = 0;
+ u8 tot_mpdu = 0;
uint supr_status;
bool retry = true;
u16 mimoantsel = 0;
ieee80211_tx_status_irqsafe(wlc->pub->ieee_hw,
p);
ack_recd = true;
- suc_mpdu++;
}
}
/* either retransmit or send bar if ack not recd */
struct brcms_info *wl = hw->priv;
struct brcms_c_info *wlc = wl->wlc;
struct ieee80211_supported_band *band;
- int has_5g = 0;
u16 phy_type;
hw->wiphy->bands[NL80211_BAND_2GHZ] = NULL;
/* Assume all bands use the same phy. True for 11n devices. */
if (wl->pub->_nbands > 1) {
- has_5g++;
if (phy_type == PHY_TYPE_N || phy_type == PHY_TYPE_LCN) {
band = &wlc->bandstate[BAND_5G_INDEX]->band;
*band = brcms_band_5GHz_nphy_template;
#define BRCM_CC_4371_CHIP_ID 0x4371
#define BRCM_CC_4377_CHIP_ID 0x4377
#define BRCM_CC_4378_CHIP_ID 0x4378
+#define BRCM_CC_4387_CHIP_ID 0x4387
#define CY_CC_4373_CHIP_ID 0x4373
#define CY_CC_43012_CHIP_ID 43012
#define CY_CC_43439_CHIP_ID 43439
#define BRCM_PCIE_43596_DEVICE_ID 0x4415
#define BRCM_PCIE_4377_DEVICE_ID 0x4488
#define BRCM_PCIE_4378_DEVICE_ID 0x4425
+#define BRCM_PCIE_4387_DEVICE_ID 0x4433
/* brcmsmac IDs */
#define BCM4313_D11N2G_ID 0x4727 /* 4313 802.11n 2.4G device */
_ipw_read8(ipw, ofs); \
})
-/* 16-bit direct read (low 4K) */
-static inline u16 _ipw_read16(struct ipw_priv *ipw, unsigned long ofs)
-{
- return readw(ipw->hw_base + ofs);
-}
-
-/* alias to 16-bit direct read (low 4K of SRAM/regs), with debug wrapper */
-#define ipw_read16(ipw, ofs) ({ \
- IPW_DEBUG_IO("%s %d: read_direct16(0x%08X)\n", __FILE__, __LINE__, \
- (u32)(ofs)); \
- _ipw_read16(ipw, ofs); \
-})
-
/* 32-bit direct read (low 4K) */
static inline u32 _ipw_read32(struct ipw_priv *ipw, unsigned long ofs)
{
u32 base = ipw_read32(priv, IPW_ERROR_LOG);
u32 elem_len = ipw_read_reg32(priv, base);
- error = kmalloc(sizeof(*error) +
- sizeof(*error->elem) * elem_len +
- sizeof(*error->log) * log_len, GFP_ATOMIC);
+ error = kmalloc(size_add(struct_size(error, elem, elem_len),
+ array_size(sizeof(*error->log), log_len)),
+ GFP_ATOMIC);
if (!error) {
IPW_ERROR("Memory allocation for firmware error log "
"failed.\n");
error->config = priv->config;
error->elem_len = elem_len;
error->log_len = log_len;
- error->elem = (struct ipw_error_elem *)error->payload;
error->log = (struct ipw_event *)(error->elem + elem_len);
ipw_capture_event_log(priv, log_len, error->log);
u32 config;
u32 elem_len;
u32 log_len;
- struct ipw_error_elem *elem;
struct ipw_event *log;
- u8 payload[];
+ struct ipw_error_elem elem[];
} __packed;
#ifdef CONFIG_IPW2200_PROMISCUOUS
#include "fw/api/txq.h"
/* Highest firmware API version supported */
-#define IWL_22000_UCODE_API_MAX 74
+#define IWL_22000_UCODE_API_MAX 75
/* Lowest firmware API version supported */
#define IWL_22000_UCODE_API_MIN 39
#define IWL_BZ_A_MR_A_FW_PRE "iwlwifi-bz-a0-mr-a0-"
#define IWL_BZ_A_FM_A_FW_PRE "iwlwifi-bz-a0-fm-a0-"
#define IWL_BZ_A_FM4_A_FW_PRE "iwlwifi-bz-a0-fm4-a0-"
+#define IWL_BZ_A_FM_B_FW_PRE "iwlwifi-bz-a0-fm-b0-"
+#define IWL_BZ_A_FM4_B_FW_PRE "iwlwifi-bz-a0-fm4-b0-"
#define IWL_GL_A_FM_A_FW_PRE "iwlwifi-gl-a0-fm-a0-"
#define IWL_GL_B_FM_B_FW_PRE "iwlwifi-gl-b0-fm-b0-"
#define IWL_BZ_Z_GF_A_FW_PRE "iwlwifi-bz-z0-gf-a0-"
#define IWL_BNJ_A_FM_A_FW_PRE "iwlwifi-BzBnj-a0-fm-a0-"
#define IWL_BNJ_A_FM4_A_FW_PRE "iwlwifi-BzBnj-a0-fm4-a0-"
+#define IWL_BNJ_B_FM4_B_FW_PRE "iwlwifi-BzBnj-b0-fm4-b0-"
#define IWL_BNJ_A_GF_A_FW_PRE "iwlwifi-BzBnj-a0-gf-a0-"
+#define IWL_BNJ_B_GF_A_FW_PRE "iwlwifi-BzBnj-b0-gf-a0-"
#define IWL_BNJ_A_GF4_A_FW_PRE "iwlwifi-BzBnj-a0-gf4-a0-"
+#define IWL_BNJ_B_GF4_A_FW_PRE "iwlwifi-BzBnj-b0-gf4-a0-"
#define IWL_BNJ_A_HR_B_FW_PRE "iwlwifi-BzBnj-a0-hr-b0-"
+#define IWL_BNJ_B_HR_B_FW_PRE "iwlwifi-BzBnj-b0-hr-b0-"
#define IWL_BNJ_B_FM_B_FW_PRE "iwlwifi-BzBnj-b0-fm-b0-"
IWL_BZ_A_FM_A_FW_PRE __stringify(api) ".ucode"
#define IWL_BZ_A_FM4_A_MODULE_FIRMWARE(api) \
IWL_BZ_A_FM4_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_FM_B_MODULE_FIRMWARE(api) \
+ IWL_BZ_A_FM_B_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_FM4_B_MODULE_FIRMWARE(api) \
+ IWL_BZ_A_FM4_B_FW_PRE __stringify(api) ".ucode"
#define IWL_GL_A_FM_A_MODULE_FIRMWARE(api) \
IWL_GL_A_FM_A_FW_PRE __stringify(api) ".ucode"
#define IWL_GL_B_FM_B_MODULE_FIRMWARE(api) \
IWL_BNJ_A_FM_A_FW_PRE __stringify(api) ".ucode"
#define IWL_BNJ_A_FM4_A_MODULE_FIRMWARE(api) \
IWL_BNJ_A_FM4_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BNJ_B_FM4_B_MODULE_FIRMWARE(api) \
+ IWL_BNJ_B_FM4_B_FW_PRE __stringify(api) ".ucode"
#define IWL_BNJ_A_GF_A_MODULE_FIRMWARE(api) \
IWL_BNJ_A_GF_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BNJ_B_GF_A_MODULE_FIRMWARE(api) \
+ IWL_BNJ_B_GF_A_FW_PRE __stringify(api) ".ucode"
#define IWL_BNJ_A_GF4_A_MODULE_FIRMWARE(api) \
IWL_BNJ_A_GF4_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BNJ_B_GF4_A_MODULE_FIRMWARE(api) \
+ IWL_BNJ_B_GF4_A_FW_PRE __stringify(api) ".ucode"
#define IWL_BNJ_A_HR_B_MODULE_FIRMWARE(api) \
IWL_BNJ_A_HR_B_FW_PRE __stringify(api) ".ucode"
+#define IWL_BNJ_B_HR_B_MODULE_FIRMWARE(api) \
+ IWL_BNJ_B_HR_B_FW_PRE __stringify(api) ".ucode"
#define IWL_BNJ_B_FM_B_MODULE_FIRMWARE(api) \
IWL_BNJ_B_FM_B_FW_PRE __stringify(api) ".ucode"
.trans.gen2 = true, \
.nvm_type = IWL_NVM_EXT, \
.dbgc_supported = true, \
- .min_umac_error_event_table = 0x400000, \
+ .min_umac_error_event_table = 0xD0000, \
.d3_debug_data_base_addr = 0x401000, \
.d3_debug_data_length = 60 * 1024, \
.mon_smem_regs = { \
.num_rbds = IWL_NUM_RBDS_AX210_HE,
};
+const struct iwl_cfg iwl_cfg_bz_a0_fm_b0 = {
+ .fw_name_pre = IWL_BZ_A_FM_B_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS_BZ | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
+const struct iwl_cfg iwl_cfg_bz_a0_fm4_b0 = {
+ .fw_name_pre = IWL_BZ_A_FM4_B_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS_BZ | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
const struct iwl_cfg iwl_cfg_gl_a0_fm_a0 = {
.fw_name_pre = IWL_GL_A_FM_A_FW_PRE,
.uhb_supported = true,
.num_rbds = IWL_NUM_RBDS_AX210_HE,
};
+const struct iwl_cfg iwl_cfg_bnj_b0_fm4_b0 = {
+ .fw_name_pre = IWL_BNJ_B_FM4_B_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
const struct iwl_cfg iwl_cfg_bnj_a0_gf_a0 = {
.fw_name_pre = IWL_BNJ_A_GF_A_FW_PRE,
.uhb_supported = true,
.num_rbds = IWL_NUM_RBDS_AX210_HE,
};
+const struct iwl_cfg iwl_cfg_bnj_b0_gf_a0 = {
+ .fw_name_pre = IWL_BNJ_B_GF_A_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
const struct iwl_cfg iwl_cfg_bnj_a0_gf4_a0 = {
.fw_name_pre = IWL_BNJ_A_GF4_A_FW_PRE,
.uhb_supported = true,
.num_rbds = IWL_NUM_RBDS_AX210_HE,
};
+const struct iwl_cfg iwl_cfg_bnj_b0_gf4_a0 = {
+ .fw_name_pre = IWL_BNJ_B_GF4_A_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
const struct iwl_cfg iwl_cfg_bnj_a0_hr_b0 = {
.fw_name_pre = IWL_BNJ_A_HR_B_FW_PRE,
.uhb_supported = true,
.num_rbds = IWL_NUM_RBDS_AX210_HE,
};
+const struct iwl_cfg iwl_cfg_bnj_b0_hr_b0 = {
+ .fw_name_pre = IWL_BNJ_B_HR_B_FW_PRE,
+ .uhb_supported = true,
+ IWL_DEVICE_BZ,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
const struct iwl_cfg iwl_cfg_bnj_b0_fm_b0 = {
.fw_name_pre = IWL_BNJ_B_FM_B_FW_PRE,
.uhb_supported = true,
MODULE_FIRMWARE(IWL_BZ_A_GF4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BZ_A_MR_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BZ_A_FM_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_FM_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_GL_A_FM_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_A_FM_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_A_FM4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BNJ_B_FM4_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_A_GF_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BNJ_B_GF_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_A_GF4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BNJ_B_GF4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BNJ_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BZ_A_FM4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_FM4_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_GL_B_FM_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_BNJ_B_FM_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
HOT_SPOT_CMD = 0x53,
/**
+ * @WNM_80211V_TIMING_MEASUREMENT_NOTIFICATION: Time Sync
+ * measurement notification for TM/FTM. Sent on receipt of
+ * respective WNM action frame for TM protocol or public action
+ * frame for FTM protocol from peer device along with additional
+ * meta data specified in &struct iwl_time_msmt_notify
+ */
+ WNM_80211V_TIMING_MEASUREMENT_NOTIFICATION = 0x67,
+
+ /**
+ * @WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NOTIFICATION: Time Sync
+ * measurement confirmation notification for TM/FTM. Sent on
+ * receipt of Ack from peer for previously Tx'ed TM/FTM
+ * action frame along with additional meta data specified in
+ * &struct iwl_time_msmt_cfm_notify
+ */
+ WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NOTIFICATION = 0x68,
+
+ /**
* @SCAN_OFFLOAD_COMPLETE:
* notification, &struct iwl_periodic_scan_complete
*/
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
TRIGGER_RX_QUEUES_NOTIF_CMD = 0x2,
/**
+ * @WNM_PLATFORM_PTM_REQUEST_CMD: &struct iwl_time_sync_cfg_cmd
+ */
+ WNM_PLATFORM_PTM_REQUEST_CMD = 0x3,
+
+ /**
+ * @WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD:
+ * &struct iwl_time_sync_cfg_cmd
+ */
+ WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD = 0x4,
+
+ /**
* @STA_HE_CTXT_CMD: &struct iwl_he_sta_context_cmd
*/
STA_HE_CTXT_CMD = 0x7,
IWL_CHANNEL_ESTIMATION_COUNTER = BIT(2),
};
+enum iwl_time_sync_protocol_type {
+ IWL_TIME_SYNC_PROTOCOL_TM = BIT(0),
+ IWL_TIME_SYNC_PROTOCOL_FTM = BIT(1),
+}; /* WNM_TIMING_ENABLED_PROTOCOL_API_E_VER_1 */
+
+/**
+ * struct iwl_time_sync_cfg_cmd - TM/FTM time sync measurement configuration
+ *
+ * @protocols: The type of frames to raise notifications for. A bitmap
+ * of @iwl_time_sync_protocol_type
+ * @peer_addr: peer address with which TM/FTM measurements are required
+ * @reserved: for alignment
+ */
+struct iwl_time_sync_cfg_cmd {
+ __le32 protocols;
+ u8 peer_addr[ETH_ALEN];
+ u8 reserved[2];
+} __packed; /* WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD_API_S_VER_1 */
+
+/**
+ * enum iwl_synced_time_operation - PTM request options
+ *
+ * @IWL_SYNCED_TIME_OPERATION_READ_ARTB: read only the ARTB time
+ * @IWL_SYNCED_TIME_OPERATION_READ_GP2: read only the GP2 time
+ * @IWL_SYNCED_TIME_OPERATION_READ_BOTH: latch the ARTB and GP2 clocks and
+ * provide timestamps from both clocks for the same time point
+ */
+enum iwl_synced_time_operation {
+ IWL_SYNCED_TIME_OPERATION_READ_ARTB = 1,
+ IWL_SYNCED_TIME_OPERATION_READ_GP2,
+ IWL_SYNCED_TIME_OPERATION_READ_BOTH,
+};
+
+/**
+ * struct iwl_synced_time_cmd - request synced GP2/ARTB timestamps
+ *
+ * @operation: one of &enum iwl_synced_time_operation
+ */
+struct iwl_synced_time_cmd {
+ __le32 operation;
+} __packed; /* WNM_80211V_TIMING_CMD_API_S_VER_1 */
+
+/**
+ * struct iwl_synced_time_rsp - response to iwl_synced_time_cmd
+ *
+ * @operation: one of &enum iwl_synced_time_operation
+ * @platform_timestamp_hi: high DWORD of the ARTB clock timestamp in nanoseconds
+ * @platform_timestamp_lo: low DWORD of the ARTB clock timestamp in nanoseconds
+ * @gp2_timestamp_hi: high DWORD of the GP2 clock timestamp in 10's of
+ * nanoseconds
+ * @gp2_timestamp_lo: low DWORD of the GP2 clock timestamp in 10's of
+ * nanoseconds
+ */
+struct iwl_synced_time_rsp {
+ __le32 operation;
+ __le32 platform_timestamp_hi;
+ __le32 platform_timestamp_lo;
+ __le32 gp2_timestamp_hi;
+ __le32 gp2_timestamp_lo;
+} __packed; /* WNM_80211V_TIMING_RSP_API_S_VER_1 */
+
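A minimal consumer-side sketch (helper name hypothetical) of combining the split halves of the response into a single timestamp, using the units documented above:

static inline u64 iwl_synced_time_gp2_ns(const struct iwl_synced_time_rsp *rsp)
{
	u64 gp2 = ((u64)le32_to_cpu(rsp->gp2_timestamp_hi) << 32) |
		  le32_to_cpu(rsp->gp2_timestamp_lo);

	/* GP2 timestamps are reported in units of 10 ns */
	return gp2 * 10;
}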
+/* PTP_CTX_MAX_DATA_SIZE_IN_API_D_VER_1 */
+#define PTP_CTX_MAX_DATA_SIZE 128
+
+/**
+ * struct iwl_time_msmt_ptp_ctx - Vendor specific information element
+ * to allow some flexibility for the userspace application
+ *
+ * @element_id: element id of vendor specific ie
+ * @length: length of vendor specific ie
+ * @reserved: for alignment
+ * @data: vendor specific data blob
+ */
+struct iwl_time_msmt_ptp_ctx {
+ /* Differentiate between FTM and TM specific Vendor IEs */
+ union {
+ struct {
+ u8 element_id;
+ u8 length;
+ __le16 reserved;
+ u8 data[PTP_CTX_MAX_DATA_SIZE];
+ } ftm; /* FTM specific vendor IE */
+ struct {
+ u8 element_id;
+ u8 length;
+ u8 data[PTP_CTX_MAX_DATA_SIZE];
+ } tm; /* TM specific vendor IE */
+ };
+} __packed /* PTP_CTX_VER_1 */;
+
+/**
+ * struct iwl_time_msmt_notify - Time Sync measurement notification
+ * for TM/FTM, along with additional meta data.
+ *
+ * @peer_addr: peer address
+ * @reserved: for alignment
+ * @dialog_token: measurement flow dialog token number
+ * @followup_dialog_token: Measurement flow previous dialog token number
+ * @t1_hi: high dword of t1-time of the Tx'ed action frame departure on
+ * sender side in units of 10 nano seconds
+ * @t1_lo: low dword of t1-time of the Tx'ed action frame departure on
+ * sender side in units of 10 nano seconds
+ * @t1_max_err: maximum t1-time error in units of 10 nano seconds
+ * @t4_hi: high dword of t4-time of the Rx'ed action frame's Ack arrival on
+ * sender side in units of 10 nano seconds
+ * @t4_lo: low dword of t4-time of the Rx'ed action frame's Ack arrival on
+ * sender side in units of 10 nano seconds
+ * @t4_max_err: maximum t4-time error in units of 10 nano seconds
+ * @t2_hi: high dword of t2-time of the Rx'ed action frame arrival on
+ * receiver side in units of 10 nano seconds
+ * @t2_lo: low dword of t2-time of the Rx'ed action frame arrival on
+ * receiver side in units of 10 nano seconds
+ * @t2_max_err: maximum t2-time error in units of 10 nano seconds
+ * @t3_hi: high dword of t3-time of the Tx'ed action frame's Ack departure on
+ * receiver side in units of 10 nano seconds
+ * @t3_lo: low dword of t3-time of the Tx'ed action frame's Ack departure on
+ * receiver side in units of 10 nano seconds
+ * @t3_max_err: maximum t3-time error in units of 10 nano seconds
+ * @ptp: vendor specific information element
+ */
+struct iwl_time_msmt_notify {
+ u8 peer_addr[ETH_ALEN];
+ u8 reserved[2];
+ __le32 dialog_token;
+ __le32 followup_dialog_token;
+ __le32 t1_hi;
+ __le32 t1_lo;
+ __le32 t1_max_err;
+ __le32 t4_hi;
+ __le32 t4_lo;
+ __le32 t4_max_err;
+ __le32 t2_hi;
+ __le32 t2_lo;
+ __le32 t2_max_err;
+ __le32 t3_hi;
+ __le32 t3_lo;
+ __le32 t3_max_err;
+ struct iwl_time_msmt_ptp_ctx ptp;
+} __packed; /* WNM_80211V_TIMING_MEASUREMENT_NTFY_API_S_VER_1 */
+
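Illustrative only: the usual two-way time-transfer arithmetic a consumer of this notification could apply to estimate the peer's clock offset (helper name hypothetical; all timestamps are in units of 10 ns per the field documentation above):

static inline s64 iwl_time_msmt_peer_offset(const struct iwl_time_msmt_notify *ntf)
{
	s64 t1 = ((s64)le32_to_cpu(ntf->t1_hi) << 32) | le32_to_cpu(ntf->t1_lo);
	s64 t2 = ((s64)le32_to_cpu(ntf->t2_hi) << 32) | le32_to_cpu(ntf->t2_lo);
	s64 t3 = ((s64)le32_to_cpu(ntf->t3_hi) << 32) | le32_to_cpu(ntf->t3_lo);
	s64 t4 = ((s64)le32_to_cpu(ntf->t4_hi) << 32) | le32_to_cpu(ntf->t4_lo);

	/* ((t2 - t1) + (t3 - t4)) / 2: peer clock minus local clock */
	return ((t2 - t1) + (t3 - t4)) / 2;
}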
+/**
+ * struct iwl_time_msmt_cfm_notify - Time Sync measurement confirmation
+ * notification for TM/FTM. Sent on receipt of 802.11 Ack from peer for the
+ * Tx'ed TM/FTM measurement action frame.
+ *
+ * @peer_addr: peer address
+ * @reserved: for alignment
+ * @dialog_token: measurement flow dialog token number
+ * @t1_hi: high dword of t1-time of the Tx'ed action frame departure on
+ * sender side in units of 10 nano seconds
+ * @t1_lo: low dword of t1-time of the Tx'ed action frame departure on
+ * sender side in units of 10 nano seconds
+ * @t1_max_err: maximum t1-time error in units of 10 nano seconds
+ * @t4_hi: high dword of t4-time of the Rx'ed action frame's Ack arrival on
+ * sender side in units of 10 nano seconds
+ * @t4_lo: low dword of t4-time of the Rx'ed action frame's Ack arrival on
+ * sender side in units of 10 nano seconds
+ * @t4_max_err: maximum t4-time error in units of 10 nano seconds
+ */
+struct iwl_time_msmt_cfm_notify {
+ u8 peer_addr[ETH_ALEN];
+ u8 reserved[2];
+ __le32 dialog_token;
+ __le32 t1_hi;
+ __le32 t1_lo;
+ __le32 t1_max_err;
+ __le32 t4_hi;
+ __le32 t4_lo;
+ __le32 t4_max_err;
+} __packed; /* WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NTFY_API_S_VER_1 */
+
/**
* struct iwl_channel_estimation_cfg - channel estimation reporting config
*/
*/
BUFFER_ALLOCATION = 0x8,
/**
+ * @GET_TAS_STATUS:
+ * sends command to fw to get TAS status
+ * the response is &struct iwl_mvm_tas_status_resp
+ */
+ GET_TAS_STATUS = 0xA,
+ /**
* @FW_DUMP_COMPLETE_CMD:
* sends command to fw once dump collection completed
* &struct iwl_dbg_dump_complete_cmd
__le32 tp_data;
} __packed; /* FW_DUMP_COMPLETE_CMD_API_S_VER_1 */
+#define TAS_LMAC_BAND_HB 0
+#define TAS_LMAC_BAND_LB 1
+#define TAS_LMAC_BAND_UHB 2
+#define TAS_LMAC_BAND_INVALID 3
+
+/**
+ * struct iwl_mvm_tas_status_per_mac - tas status per lmac
+ * @static_status: tas statically enabled or disabled per lmac - TRUE/FALSE
+ * @static_dis_reason: TAS static disable reason, uses
+ * &enum iwl_mvm_tas_statically_disabled_reason
+ * @dynamic_status: Current TAS status. uses
+ * &enum iwl_mvm_tas_dyna_status
+ * @near_disconnection: is TAS currently near disconnection per lmac? - TRUE/FALSE
+ * @max_reg_pwr_limit: Regulatory power limits in dBm
+ * @sar_limit: SAR limits per lmac in dBm
+ * @band: Band per lmac
+ * @reserved: reserved
+ */
+struct iwl_mvm_tas_status_per_mac {
+ u8 static_status;
+ u8 static_dis_reason;
+ u8 dynamic_status;
+ u8 near_disconnection;
+ __le16 max_reg_pwr_limit;
+ __le16 sar_limit;
+ u8 band;
+ u8 reserved[3];
+} __packed; /*DEBUG_GET_TAS_STATUS_PER_MAC_S_VER_1*/
+
+/**
+ * struct iwl_mvm_tas_status_resp - Response to GET_TAS_STATUS
+ * @tas_fw_version: TAS FW version
+ * @is_uhb_for_usa_enable: is UHB enabled in USA? - TRUE/FALSE
+ * @curr_mcc: current mcc
+ * @block_list: country block list
+ * @tas_status_mac: TAS status per lmac, uses
+ * &struct iwl_mvm_tas_status_per_mac
+ * @in_dual_radio: is TAS in dual radio? - TRUE/FALSE
+ * @reserved: reserved
+ */
+struct iwl_mvm_tas_status_resp {
+ u8 tas_fw_version;
+ u8 is_uhb_for_usa_enable;
+ __le16 curr_mcc;
+ __le16 block_list[16];
+ struct iwl_mvm_tas_status_per_mac tas_status_mac[2];
+ u8 in_dual_radio;
+ u8 reserved[3];
+} __packed; /*DEBUG_GET_TAS_STATUS_RSP_API_S_VER_3*/
+
+/**
+ * enum iwl_mvm_tas_dyna_status - TAS current running status
+ * @TAS_DYNA_INACTIVE: TAS status is inactive
+ * @TAS_DYNA_INACTIVE_MVM_MODE: TAS is disabled because FW is in MVM mode
+ *	or in softap mode.
+ * @TAS_DYNA_INACTIVE_TRIGGER_MODE: TAS is disabled because FW is in
+ *	multi user trigger mode
+ * @TAS_DYNA_INACTIVE_BLOCK_LISTED: TAS is disabled because the current mcc
+ *	is a blocklisted mcc
+ * @TAS_DYNA_INACTIVE_UHB_NON_US: TAS is disabled because current band is UHB
+ * and current mcc is USA
+ * @TAS_DYNA_ACTIVE: TAS is currently active
+ * @TAS_DYNA_STATUS_MAX: TAS status max value
+ */
+enum iwl_mvm_tas_dyna_status {
+ TAS_DYNA_INACTIVE,
+ TAS_DYNA_INACTIVE_MVM_MODE,
+ TAS_DYNA_INACTIVE_TRIGGER_MODE,
+ TAS_DYNA_INACTIVE_BLOCK_LISTED,
+ TAS_DYNA_INACTIVE_UHB_NON_US,
+ TAS_DYNA_ACTIVE,
+
+ TAS_DYNA_STATUS_MAX,
+}; /*_TAS_DYNA_STATUS_E*/
+
+/**
+ * enum iwl_mvm_tas_statically_disabled_reason - TAS statically disabled reason
+ * @TAS_DISABLED_DUE_TO_BIOS: TAS is statically disabled in the BIOS
+ * @TAS_DISABLED_DUE_TO_SAR_6DBM: TAS is disabled because the SAR limit is less than 6 dBm
+ * @TAS_DISABLED_REASON_INVALID: TAS disable reason is invalid
+ * @TAS_DISABLED_REASON_MAX: TAS disable reason max value
+ */
+enum iwl_mvm_tas_statically_disabled_reason {
+ TAS_DISABLED_DUE_TO_BIOS,
+ TAS_DISABLED_DUE_TO_SAR_6DBM,
+ TAS_DISABLED_REASON_INVALID,
+
+ TAS_DISABLED_REASON_MAX,
+}; /*_TAS_STATICALLY_DISABLED_REASON_E*/
+
#endif /* __iwl_fw_api_debug_h__ */
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2019, 2021 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2019, 2021-2022 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_fw_api_mac_cfg_h__
#define __iwl_fw_api_mac_cfg_h__
+#include "mac.h"
+
/**
* enum iwl_mac_conf_subcmd_ids - mac configuration command IDs
*/
* @CANCEL_CHANNEL_SWITCH_CMD: &struct iwl_cancel_channel_switch_cmd
*/
CANCEL_CHANNEL_SWITCH_CMD = 0x6,
-
+ /**
+ * @MAC_CONFIG_CMD: &struct iwl_mac_config_cmd
+ */
+ MAC_CONFIG_CMD = 0x8,
+ /**
+ * @LINK_CONFIG_CMD: &struct iwl_link_config_cmd
+ */
+ LINK_CONFIG_CMD = 0x9,
+ /**
+ * @STA_CONFIG_CMD: &struct iwl_mvm_sta_cfg_cmd
+ */
+ STA_CONFIG_CMD = 0xA,
+ /**
+ * @AUX_STA_CMD: &struct iwl_mvm_aux_sta_cmd
+ */
+ AUX_STA_CMD = 0xB,
+ /**
+ * @STA_REMOVE_CMD: &struct iwl_mvm_remove_sta_cmd
+ */
+ STA_REMOVE_CMD = 0xC,
+ /**
+ * @STA_DISABLE_TX_CMD: &struct iwl_mvm_sta_disable_tx_cmd
+ */
+ STA_DISABLE_TX_CMD = 0xD,
/**
* @SESSION_PROTECTION_NOTIF: &struct iwl_mvm_session_prot_notif
*/
__le16 reserved;
} __packed; /* MAC_LOW_LATENCY_API_S_VER_1 */
+/**
+ * struct iwl_mac_client_data - configuration data for client MAC context
+ *
+ * @is_assoc: 1 for associated state, 0 otherwise
+ * @assoc_id: unique ID assigned by the AP during association
+ * @data_policy: see &enum iwl_mac_data_policy
+ * @ctwin: client traffic window in TU (period after TBTT when GO is present).
+ * 0 indicates that there is no CT window.
+ */
+struct iwl_mac_client_data {
+ __le32 is_assoc;
+ __le32 assoc_id;
+ __le32 data_policy;
+ __le32 ctwin;
+} __packed; /* MAC_CONTEXT_CONFIG_CLIENT_DATA_API_S_VER_1 */
+
+/**
+ * struct iwl_mac_go_ibss_data - configuration data for GO and IBSS MAC context
+ *
+ * @beacon_template: beacon template ID
+ */
+struct iwl_mac_go_ibss_data {
+ __le32 beacon_template;
+} __packed; /* MAC_CONTEXT_CONFIG_GO_IBSS_DATA_API_S_VER_1 */
+
+/**
+ * struct iwl_mac_p2p_dev_data - configuration data for P2P device MAC context
+ *
+ * @is_disc_extended: if set to true, P2P Device discoverability is enabled on
+ *	other channels as well. This should be set to true only when the
+ *	device is discoverable and there is an active GO. Note that setting this
+ *	field when not needed will increase the number of interrupts and
+ *	affect platform power, as this setting opens the Rx filters on
+ */
+struct iwl_mac_p2p_dev_data {
+ __le32 is_disc_extended;
+} __packed; /* MAC_CONTEXT_CONFIG_P2P_DEV_DATA_API_S_VER_1 */
+
+/**
+ * enum iwl_mac_config_filter_flags - MAC context configuration filter flags
+ *
+ * @MAC_CFG_FILTER_PROMISC: accept all data frames
+ * @MAC_CFG_FILTER_ACCEPT_CONTROL_AND_MGMT: pass all management and
+ * control frames to the host
+ * @MAC_CFG_FILTER_ACCEPT_GRP: accept multicast frames
+ * @MAC_CFG_FILTER_ACCEPT_BEACON: accept beacon frames
+ * @MAC_CFG_FILTER_ACCEPT_BCAST_PROBE_RESP: accept broadcast probe response
+ * @MAC_CFG_FILTER_ACCEPT_PROBE_REQ: accept probe requests
+ */
+enum iwl_mac_config_filter_flags {
+ MAC_CFG_FILTER_PROMISC = BIT(0),
+ MAC_CFG_FILTER_ACCEPT_CONTROL_AND_MGMT = BIT(1),
+ MAC_CFG_FILTER_ACCEPT_GRP = BIT(2),
+ MAC_CFG_FILTER_ACCEPT_BEACON = BIT(3),
+ MAC_CFG_FILTER_ACCEPT_BCAST_PROBE_RESP = BIT(4),
+ MAC_CFG_FILTER_ACCEPT_PROBE_REQ = BIT(5),
+}; /* MAC_FILTER_FLAGS_MASK_E_VER_1 */
+
+/**
+ * struct iwl_mac_config_cmd - command structure to configure MAC contexts in
+ * MLD API
+ * ( MAC_CONTEXT_CONFIG_CMD = 0x8 )
+ *
+ * @id_and_color: ID and color of the MAC
+ * @action: action to perform, one of FW_CTXT_ACTION_*
+ * @mac_type: one of &enum iwl_mac_types
+ * @local_mld_addr: mld address
+ * @reserved_for_local_mld_addr: reserved
+ * @filter_flags: combination of &enum iwl_mac_config_filter_flags
+ * @he_support: does this MAC support HE
+ * @eht_support: does this MAC support EHT. Requires he_support
+ * @nic_not_ack_enabled: mark that the NIC doesn't support receiving
+ *	ACK-enabled AGG (i.e. both BACK and non-BACK frames in a single AGG).
+ *	If the NIC is not ACK_ENABLED it may use the EOF bit in the first non-0
+ *	len delim to determine if AGG or single.
+ * @client: client mac data
+ * @go_ibss: mac data for go or ibss
+ * @p2p_dev: mac data for p2p device
+ */
+struct iwl_mac_config_cmd {
+ /* COMMON_INDEX_HDR_API_S_VER_1 */
+ __le32 id_and_color;
+ __le32 action;
+ /* MAC_CONTEXT_TYPE_API_E */
+ __le32 mac_type;
+ u8 local_mld_addr[6];
+ __le16 reserved_for_local_mld_addr;
+ __le32 filter_flags;
+ __le32 he_support;
+ __le32 eht_support;
+ __le32 nic_not_ack_enabled;
+ /* MAC_CONTEXT_CONFIG_SPECIFIC_DATA_API_U_VER_1 */
+ union {
+ struct iwl_mac_client_data client;
+ struct iwl_mac_go_ibss_data go_ibss;
+ struct iwl_mac_p2p_dev_data p2p_dev;
+ };
+} __packed; /* MAC_CONTEXT_CONFIG_CMD_API_S_VER_1 */
+
+/**
+ * enum iwl_link_ctx_modify_flags - indicate to the fw what fields are being
+ * modified in &iwl_link_ctx_cfg_cmd
+ *
+ * @LINK_CONTEXT_MODIFY_ACTIVE: covers iwl_link_ctx_cfg_cmd::active
+ * @LINK_CONTEXT_MODIFY_RATES_INFO: covers iwl_link_ctx_cfg_cmd::cck_rates,
+ * iwl_link_ctx_cfg_cmd::ofdm_rates,
+ * iwl_link_ctx_cfg_cmd::cck_short_preamble,
+ * iwl_link_ctx_cfg_cmd::short_slot
+ * @LINK_CONTEXT_MODIFY_PROTECT_FLAGS: covers
+ * iwl_link_ctx_cfg_cmd::protection_flags
+ * @LINK_CONTEXT_MODIFY_QOS_PARAMS: covers iwl_link_ctx_cfg_cmd::qos_flags,
+ * iwl_link_ctx_cfg_cmd::ac,
+ * @LINK_CONTEXT_MODIFY_BEACON_TIMING: covers iwl_link_ctx_cfg_cmd::bi,
+ * iwl_link_ctx_cfg_cmd::dtim_interval,
+ * iwl_link_ctx_cfg_cmd::dtim_time,
+ * iwl_link_ctx_cfg_cmd::dtim_tsf,
+ * iwl_link_ctx_cfg_cmd::assoc_beacon_arrive_time.
+ * This flag can be set only once after assoc.
+ * @LINK_CONTEXT_MODIFY_HE_PARAMS: covers
+ * iwl_link_ctx_cfg_cmd::htc_trig_based_pkt_ext
+ * iwl_link_ctx_cfg_cmd::rand_alloc_ecwmin,
+ * iwl_link_ctx_cfg_cmd::rand_alloc_ecwmax,
+ * iwl_link_ctx_cfg_cmd::trig_based_txf,
+ * iwl_link_ctx_cfg_cmd::bss_color,
+ * iwl_link_ctx_cfg_cmd::ndp_fdbk_buff_th_exp,
+ * iwl_link_ctx_cfg_cmd::ref_bssid_addr
+ * iwl_link_ctx_cfg_cmd::bssid_index,
+ * iwl_link_ctx_cfg_cmd::frame_time_rts_th.
+ * This flag can be set any time.
+ * @LINK_CONTEXT_MODIFY_BSS_COLOR_DISABLE: covers
+ * iwl_link_ctx_cfg_cmd::bss_color_disable
+ * @LINK_CONTEXT_MODIFY_EHT_PARAMS: covers iwl_link_ctx_cfg_cmd::puncture_mask.
+ * This flag can be set only if the MAC that this link relates to has
+ * eht_support set to true.
+ * @LINK_CONTEXT_MODIFY_ALL: set all above flags
+ */
+enum iwl_link_ctx_modify_flags {
+ LINK_CONTEXT_MODIFY_ACTIVE = BIT(0),
+ LINK_CONTEXT_MODIFY_RATES_INFO = BIT(1),
+ LINK_CONTEXT_MODIFY_PROTECT_FLAGS = BIT(2),
+ LINK_CONTEXT_MODIFY_QOS_PARAMS = BIT(3),
+ LINK_CONTEXT_MODIFY_BEACON_TIMING = BIT(4),
+ LINK_CONTEXT_MODIFY_HE_PARAMS = BIT(5),
+ LINK_CONTEXT_MODIFY_BSS_COLOR_DISABLE = BIT(6),
+ LINK_CONTEXT_MODIFY_EHT_PARAMS = BIT(7),
+ LINK_CONTEXT_MODIFY_ALL = 0xff,
+}; /* LINK_CONTEXT_MODIFY_MASK_E_VER_1 */
+
+/**
+ * enum iwl_link_ctx_protection_flags - link protection flags
+ * @LINK_PROT_FLG_TGG_PROTECT: 11g protection when transmitting OFDM frames,
+ * this will require CCK RTS/CTS2self.
+ * RTS/CTS will protect full burst time.
+ * @LINK_PROT_FLG_HT_PROT: enable HT protection
+ * @LINK_PROT_FLG_FAT_PROT: protect 40 MHz transmissions
+ * @LINK_PROT_FLG_SELF_CTS_EN: allow CTS2self
+ */
+enum iwl_link_ctx_protection_flags {
+ LINK_PROT_FLG_TGG_PROTECT = BIT(0),
+ LINK_PROT_FLG_HT_PROT = BIT(1),
+ LINK_PROT_FLG_FAT_PROT = BIT(2),
+ LINK_PROT_FLG_SELF_CTS_EN = BIT(3),
+}; /* LINK_PROTECT_FLAGS_E_VER_1 */
+
+/**
+ * enum iwl_link_ctx_flags - link context flags
+ *
+ * @LINK_FLG_BSS_COLOR_DIS: BSS color disable, don't use the BSS
+ * color for RX filter but use MAC header
+ * enabled AGG, i.e. both BACK and non-BACK frames in a single AGG
+ * @LINK_FLG_MU_EDCA_CW: indicates that there is an element of MU EDCA
+ * parameter set, i.e. the backoff counters for trig-based ACs
+ * @LINK_FLG_RU_2MHZ_BLOCK: indicates that 26-tone RU OFDMA transmission are
+ * not allowed (as there are OBSS that might classify such transmissions as
+ * radar pulses).
+ * @LINK_FLG_NDP_FEEDBACK_ENABLED: mark support for NDP feedback and change
+ * of threshold
+ */
+enum iwl_link_ctx_flags {
+ LINK_FLG_BSS_COLOR_DIS = BIT(0),
+ LINK_FLG_MU_EDCA_CW = BIT(1),
+ LINK_FLG_RU_2MHZ_BLOCK = BIT(2),
+ LINK_FLG_NDP_FEEDBACK_ENABLED = BIT(3),
+}; /* LINK_CONTEXT_FLAG_E_VER_1 */
+
+/**
+ * struct iwl_link_config_cmd - command structure to configure the LINK context
+ * in MLD API
+ * ( LINK_CONFIG_CMD =0x9 )
+ *
+ * @action: action to perform, one of FW_CTXT_ACTION_*
+ * @link_id: the id of the link that this cmd configures
+ * @mac_id: interface ID. Relevant only if action is FW_CTXT_ACTION_ADD
+ * @phy_id: PHY index. Can be changed only if the link was inactive
+ * (and stays inactive). If the link is active (or becomes active),
+ * this field is ignored.
+ * @local_link_addr: the link's MAC address. Can be changed only if the link was
+ * inactive (and stays inactive). If the link is active
+ * (or becomes active), this field is ignored.
+ * @reserved_for_local_link_addr: reserved
+ * @modify_mask: from &enum iwl_link_ctx_modify_flags, selects what to change.
+ * Relevant only if action is FW_CTXT_ACTION_MODIFY
+ * @active: indicates whether the link is active or not
+ * @listen_lmac: indicates whether the link should be allocated on the Listen
+ * Lmac or on the Main Lmac. Cannot be changed on an active Link.
+ * Relevant only for eSR.
+ * @cck_rates: basic rates available for CCK
+ * @ofdm_rates: basic rates available for OFDM
+ * @cck_short_preamble: 1 for enabling short preamble, 0 otherwise
+ * @short_slot: 1 for enabling short slots, 0 otherwise
+ * @protection_flags: combination of &enum iwl_link_ctx_protection_flags
+ * @qos_flags: from &enum iwl_mac_qos_flags
+ * @ac: one iwl_mac_qos configuration for each AC
+ * @htc_trig_based_pkt_ext: default PE in 4us units
+ * @rand_alloc_ecwmin: random CWmin = 2**ECWmin-1
+ * @rand_alloc_ecwmax: random CWmax = 2**ECWmax-1
+ * @ndp_fdbk_buff_th_exp: set exponent for the NDP feedback buffered threshold
+ * @trig_based_txf: MU EDCA Parameter set for the trigger based traffic queues
+ * @bi: beacon interval in TU, applicable only when associated
+ * @dtim_interval: DTIM interval in TU.
+ * Relevant only for GO, otherwise this is offloaded.
+ * @puncture_mask: puncture mask for EHT
+ * @frame_time_rts_th: HE duration RTS threshold, in units of 32us
+ * @flags: a combination from &enum iwl_link_ctx_flags
+ * @flags_mask: which of %flags have changed. Also &enum iwl_link_ctx_flags
+ * Below fields are for multi-bssid:
+ * @ref_bssid_addr: reference BSSID used by the AP
+ * @reserved_for_ref_bssid_addr: reserved
+ * @bssid_index: index of the associated VAP
+ * @bss_color: 11ax AP ID that is used in the HE SIG-A to mark inter BSS frame
+ * @reserved: alignment
+ * @ibss_bssid_addr: bssid for ibss
+ * @reserved_for_ibss_bssid_addr: reserved
+ * @reserved1: reserved for future use
+ */
+struct iwl_link_config_cmd {
+ __le32 action;
+ __le32 link_id;
+ __le32 mac_id;
+ __le32 phy_id;
+ u8 local_link_addr[6];
+ __le16 reserved_for_local_link_addr;
+ __le32 modify_mask;
+ __le32 active;
+ __le32 listen_lmac;
+ __le32 cck_rates;
+ __le32 ofdm_rates;
+ __le32 cck_short_preamble;
+ __le32 short_slot;
+ __le32 protection_flags;
+ /* MAC_QOS_PARAM_API_S_VER_1 */
+ __le32 qos_flags;
+ struct iwl_ac_qos ac[AC_NUM + 1];
+ u8 htc_trig_based_pkt_ext;
+ u8 rand_alloc_ecwmin;
+ u8 rand_alloc_ecwmax;
+ u8 ndp_fdbk_buff_th_exp;
+ struct iwl_he_backoff_conf trig_based_txf[AC_NUM];
+ __le32 bi;
+ __le32 dtim_interval;
+ __le16 puncture_mask;
+ __le16 frame_time_rts_th;
+ __le32 flags;
+ __le32 flags_mask;
+ /* The below fields are for multi-bssid */
+ u8 ref_bssid_addr[6];
+ __le16 reserved_for_ref_bssid_addr;
+ u8 bssid_index;
+ u8 bss_color;
+ u8 reserved[2];
+ u8 ibss_bssid_addr[6];
+ __le16 reserved_for_ibss_bssid_addr;
+ __le32 reserved1[8];
+} __packed; /* LINK_CONTEXT_CONFIG_CMD_API_S_VER_1 */
+
+/* Currently FW supports link ids in the range 0-3 and can have
+ * at most two active links for each vif.
+ */
+#define IWL_MVM_FW_MAX_ACTIVE_LINKS_NUM 2
+#define IWL_MVM_FW_MAX_LINK_ID 3
+
+/**
+ * enum iwl_fw_sta_type - FW station types
+ * @STATION_TYPE_PEER: represents a peer - AP in BSS, a TDLS sta, a client in
+ * P2P.
+ * @STATION_TYPE_BCAST_MGMT: The station used to send beacons and
+ * probe responses. Also used for traffic injection in sniffer mode
+ * @STATION_TYPE_MCAST: the station used for BCAST / MCAST in GO. Will be
+ * suspended / resumed at the right timing depending on the clients'
+ * power save state and the DTIM timing
+ * @STATION_TYPE_AUX: aux sta. In the FW there is no need for a special type
+ * for the aux sta, so this type is only for driver - internal use.
+ */
+enum iwl_fw_sta_type {
+ STATION_TYPE_PEER,
+ STATION_TYPE_BCAST_MGMT,
+ STATION_TYPE_MCAST,
+ STATION_TYPE_AUX,
+}; /* STATION_TYPE_E_VER_1 */
+
+/**
+ * struct iwl_mvm_sta_cfg_cmd - cmd structure to add a peer sta to the uCode's
+ * station table
+ * ( STA_CONFIG_CMD = 0xA )
+ *
+ * @sta_id: index of station in uCode's station table
+ * @link_id: the id of the link that is used to communicate with this sta
+ * @peer_mld_address: the peer's MLD address
+ * @reserved_for_peer_mld_address: reserved
+ * @peer_link_address: the address of the link that is used to communicate
+ * with this sta
+ * @reserved_for_peer_link_address: reserved
+ * @station_type: type of this station. See &enum iwl_fw_sta_type
+ * @assoc_id: for GO only
+ * @beamform_flags: beam forming controls
+ * @mfp: indicates whether the STA uses management frame protection or not.
+ * @mimo: indicates whether the sta uses mimo or not
+ * @mimo_protection: indicates whether the sta uses mimo protection or not
+ * @ack_enabled: indicates that the AP supports receiving ACK-
+ * enabled AGG, i.e. both BACK and non-BACK frames in a single AGG
+ * @trig_rnd_alloc: indicates that trigger based random allocation
+ * is enabled according to UORA element existence
+ * @tx_ampdu_spacing: minimum A-MPDU spacing:
+ * 4 - 2us density, 5 - 4us density, 6 - 8us density, 7 - 16us density
+ * @tx_ampdu_max_size: maximum A-MPDU length: 0 - 8K, 1 - 16K, 2 - 32K,
+ * 3 - 64K, 4 - 128K, 5 - 256K, 6 - 512K, 7 - 1024K.
+ * @sp_length: the size of the SP in actual number of frames
+ * @uapsd_acs: 4 LS bits are trigger enabled ACs, 4 MS bits are the deliver
+ * enabled ACs.
+ * @pkt_ext: optional, exists according to PPE-present bit in the HE/EHT-PHY
+ * capa
+ * @htc_flags: which features are supported in HTC
+ */
+struct iwl_mvm_sta_cfg_cmd {
+ __le32 sta_id;
+ __le32 link_id;
+ u8 peer_mld_address[ETH_ALEN];
+ __le16 reserved_for_peer_mld_address;
+ u8 peer_link_address[ETH_ALEN];
+ __le16 reserved_for_peer_link_address;
+ __le32 station_type;
+ __le32 assoc_id;
+ __le32 beamform_flags;
+ __le32 mfp;
+ __le32 mimo;
+ __le32 mimo_protection;
+ __le32 ack_enabled;
+ __le32 trig_rnd_alloc;
+ __le32 tx_ampdu_spacing;
+ __le32 tx_ampdu_max_size;
+ __le32 sp_length;
+ __le32 uapsd_acs;
+ struct iwl_he_pkt_ext_v2 pkt_ext;
+ __le32 htc_flags;
+} __packed; /* STA_CMD_API_S_VER_1 */
+
+/**
+ * struct iwl_mvm_aux_sta_cmd - command for AUX STA configuration
+ * ( AUX_STA_CMD = 0xB )
+ *
+ * @sta_id: index of aux sta to configure
+ * @lmac_id: ?
+ * @mac_addr: mac addr of the auxiliary sta
+ * @reserved_for_mac_addr: reserved
+ */
+struct iwl_mvm_aux_sta_cmd {
+ __le32 sta_id;
+ __le32 lmac_id;
+ u8 mac_addr[ETH_ALEN];
+ __le16 reserved_for_mac_addr;
+
+} __packed; /* AUX_STA_CMD_API_S_VER_1 */
+
+/**
+ * struct iwl_mvm_remove_sta_cmd - a cmd structure to remove a sta added by
+ * STA_CONFIG_CMD or AUX_STA_CONFIG_CMD
+ * ( STA_REMOVE_CMD = 0xC )
+ *
+ * @sta_id: index of station to remove
+ */
+struct iwl_mvm_remove_sta_cmd {
+ __le32 sta_id;
+} __packed; /* REMOVE_STA_API_S_VER_1 */
+
+/**
+ * struct iwl_mvm_sta_disable_tx_cmd - disable / re-enable tx to a sta
+ * ( STA_DISABLE_TX_CMD = 0xD )
+ *
+ * @sta_id: index of the station to disable tx to
+ * @disable: indicates if to disable or re-enable tx
+ */
+struct iwl_mvm_sta_disable_tx_cmd {
+ __le32 sta_id;
+ __le32 disable;
+} __packed; /* STA_DISABLE_TX_API_S_VER_1 */
+
#endif /* __iwl_fw_api_mac_cfg_h__ */
/* Bit 4-5: (0) SISO, (1) MIMO2 (2) MIMO3 */
#define RATE_VHT_MCS_RATE_CODE_MSK 0xf
-#define RATE_VHT_MCS_NSS_POS 4
-#define RATE_VHT_MCS_NSS_MSK (3 << RATE_VHT_MCS_NSS_POS)
-#define RATE_VHT_MCS_MIMO2_MSK BIT(RATE_VHT_MCS_NSS_POS)
/*
* Legacy OFDM rate format for bits 7:0
* 1 2xLTF+0.8us
* 2 2xLTF+1.6us
* 3 4xLTF+3.2us
- * HE TRIG:
+ * HE-EHT TRIG:
* 0 1xLTF+1.6us
* 1 2xLTF+1.6us
* 2 4xLTF+3.2us
* 3 (does not occur)
+ * EHT MU:
+ * 0 2xLTF+0.8us
+ * 1 2xLTF+1.6us
+ * 2 4xLTF+0.8us
+ * 3 4xLTF+3.2us
*/
#define RATE_MCS_HE_GI_LTF_POS 20
#define RATE_MCS_HE_GI_LTF_MSK_V1 (3 << RATE_MCS_HE_GI_LTF_POS)
/*
* Bits 13-11: (0) 20MHz, (1) 40MHz, (2) 80MHz, (3) 160MHz, (4) 320MHz
*/
-#define RATE_MCS_CHAN_WIDTH_MSK (0x7 << RATE_MCS_CHAN_WIDTH_POS)
-#define RATE_MCS_CHAN_WIDTH_20 (0 << RATE_MCS_CHAN_WIDTH_POS)
-#define RATE_MCS_CHAN_WIDTH_40 (1 << RATE_MCS_CHAN_WIDTH_POS)
-#define RATE_MCS_CHAN_WIDTH_80 (2 << RATE_MCS_CHAN_WIDTH_POS)
-#define RATE_MCS_CHAN_WIDTH_160 (3 << RATE_MCS_CHAN_WIDTH_POS)
-#define RATE_MCS_CHAN_WIDTH_320 (4 << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_MSK (0x7 << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_20_VAL 0
+#define RATE_MCS_CHAN_WIDTH_20 (RATE_MCS_CHAN_WIDTH_20_VAL << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_40_VAL 1
+#define RATE_MCS_CHAN_WIDTH_40 (RATE_MCS_CHAN_WIDTH_40_VAL << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_80_VAL 2
+#define RATE_MCS_CHAN_WIDTH_80 (RATE_MCS_CHAN_WIDTH_80_VAL << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_160_VAL 3
+#define RATE_MCS_CHAN_WIDTH_160 (RATE_MCS_CHAN_WIDTH_160_VAL << RATE_MCS_CHAN_WIDTH_POS)
+#define RATE_MCS_CHAN_WIDTH_320_VAL 4
+#define RATE_MCS_CHAN_WIDTH_320 (RATE_MCS_CHAN_WIDTH_320_VAL << RATE_MCS_CHAN_WIDTH_POS)
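A small sketch of why the *_VAL constants are split out: code can compare the shifted-down width field directly instead of the pre-shifted masks (helper name hypothetical):

static inline bool iwl_rate_uses_320mhz(u32 rate_n_flags)
{
	u32 bw = (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) >>
		 RATE_MCS_CHAN_WIDTH_POS;

	return bw == RATE_MCS_CHAN_WIDTH_320_VAL;
}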
/* Bit 15-14: Antenna selection:
* Bit 14: Ant A active
/* number of EHT-LTF symbols 0 - 1 EHT-LTF, 1 - 2 EHT-LTFs, 2 - 4 EHT-LTFs,
* 3 - 6 EHT-LTFs, 4 - 8 EHT-LTFs */
IWL_RX_PHY_DATA1_EHT_SIG_LTF_NUM = 0x000000e0,
- IWL_RX_PHY_DATA1_EHT_RU_ALLOC = 0x0000ff00,
+ IWL_RX_PHY_DATA1_EHT_B0 = 0x00000100,
+ IWL_RX_PHY_DATA1_EHT_RU_B1_B7_ALLOC = 0x0000fe00,
};
/* goes into Metadata DW 7 */
/* OFDM_RX_VECTOR_COMMON_RU_ALLOC_0_OUT */
IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A1 = 0x000001ff,
IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A2 = 0x0003fe00,
- IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A3 = 0x01fc0000,
+ IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_B1 = 0x07fc0000,
/* info type: EHT-TB-EXT */
IWL_RX_PHY_DATA2_EHT_TB_EXT_TRIG_SIGA1 = 0xffffffff,
enum iwl_rx_phy_eht_data3 {
/* info type: EHT-MU-EXT */
/* OFDM_RX_VECTOR_COMMON_RU_ALLOC_1_OUT */
- IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B1 = 0x000001ff,
- IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B2 = 0x0003fe00,
- IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B3 = 0x01fc0000,
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B2 = 0x000001ff,
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_C1 = 0x0003fe00,
+ IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_C2 = 0x07fc0000,
};
/* goes into Metadata DW 4 */
enum iwl_rx_phy_eht_data4 {
/* info type: EHT-MU-EXT */
/* OFDM_RX_VECTOR_COMMON_RU_ALLOC_2_OUT */
- IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C1 = 0x000001ff,
- IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C2 = 0x0003fe00,
- IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_C3 = 0x01fc0000,
- IWL_RX_PHY_DATA4_EHT_MU_EXT_SIGB_MCS = 0x18000000,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_D1 = 0x000001ff,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_D2 = 0x0003fe00,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_SIGB_MCS = 0x000c0000,
};
/* goes into Metadata DW 16 */
* @mac_phy_idx: MAC/PHY index
*/
u8 mac_phy_idx;
- /* DW4 - carries csum data only when rpa_en == 1 */
- /**
- * @raw_csum: raw checksum (alledgedly unreliable)
- */
- __le16 raw_csum;
-
+ /* DW4 */
union {
+ struct {
+ /* carries csum data only when rpa_en == 1 */
+ /**
+ * @raw_csum: raw checksum (allegedly unreliable)
+ */
+ __le16 raw_csum;
+
+ union {
+ /**
+ * @l3l4_flags: &enum iwl_rx_l3l4_flags
+ */
+ __le16 l3l4_flags;
+
+ /**
+ * @phy_data4: depends on info type, see phy_data1
+ */
+ __le16 phy_data4;
+ };
+ };
/**
- * @l3l4_flags: &enum iwl_rx_l3l4_flags
- */
- __le16 l3l4_flags;
-
- /**
- * @phy_data4: depends on info type, see phy_data1
+ * @phy_eht_data4: depends on info type, see phy_data1
*/
- __le16 phy_data4;
+ __le32 phy_eht_data4;
};
/* DW5 */
/**
#define RX_NO_DATA_INFO_TYPE_RX_ERR 1
#define RX_NO_DATA_INFO_TYPE_NDP 2
#define RX_NO_DATA_INFO_TYPE_MU_UNMATCHED 3
-#define RX_NO_DATA_INFO_TYPE_HE_TB_UNMATCHED 4
+#define RX_NO_DATA_INFO_TYPE_TB_UNMATCHED 4
#define RX_NO_DATA_INFO_ERR_POS 8
#define RX_NO_DATA_INFO_ERR_MSK (0xff << RX_NO_DATA_INFO_ERR_POS)
#define RX_NO_DATA_RX_VEC0_VHT_NSTS_MSK 0x38000000
#define RX_NO_DATA_RX_VEC2_EHT_NSTS_MSK 0x00f00000
+/* content of OFDM_RX_VECTOR_USIG_A1_OUT */
+enum iwl_rx_usig_a1 {
+ IWL_RX_USIG_A1_ENHANCED_WIFI_VER_ID = 0x00000007,
+ IWL_RX_USIG_A1_BANDWIDTH = 0x00000038,
+ IWL_RX_USIG_A1_UL_FLAG = 0x00000040,
+ IWL_RX_USIG_A1_BSS_COLOR = 0x00001f80,
+ IWL_RX_USIG_A1_TXOP_DURATION = 0x000fe000,
+ IWL_RX_USIG_A1_DISREGARD = 0x01f00000,
+ IWL_RX_USIG_A1_VALIDATE = 0x02000000,
+ IWL_RX_USIG_A1_EHT_BW320_SLOT = 0x04000000,
+ IWL_RX_USIG_A1_EHT_TYPE = 0x18000000,
+ IWL_RX_USIG_A1_RDY = 0x80000000,
+};
+
+/* content of OFDM_RX_VECTOR_USIG_A2_EHT_OUT */
+enum iwl_rx_usig_a2_eht {
+ IWL_RX_USIG_A2_EHT_PPDU_TYPE = 0x00000003,
+ IWL_RX_USIG_A2_EHT_USIG2_VALIDATE_B2 = 0x00000004,
+ IWL_RX_USIG_A2_EHT_PUNC_CHANNEL = 0x000000f8,
+ IWL_RX_USIG_A2_EHT_USIG2_VALIDATE_B8 = 0x00000100,
+ IWL_RX_USIG_A2_EHT_SIG_MCS = 0x00000600,
+ IWL_RX_USIG_A2_EHT_SIG_SYM_NUM = 0x0000f800,
+ IWL_RX_USIG_A2_EHT_TRIG_SPATIAL_REUSE_1 = 0x000f0000,
+ IWL_RX_USIG_A2_EHT_TRIG_SPATIAL_REUSE_2 = 0x00f00000,
+ IWL_RX_USIG_A2_EHT_TRIG_USIG2_DISREGARD = 0x1f000000,
+ IWL_RX_USIG_A2_EHT_CRC_OK = 0x40000000,
+ IWL_RX_USIG_A2_EHT_RDY = 0x80000000,
+};
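+
+/* Illustrative note (not part of the patch): these masks are suitable for
+ * FIELD_GET() from <linux/bitfield.h> once the raw RX vector word has been
+ * converted to CPU byte order, e.g.:
+ *
+ *	u32 ppdu_type = FIELD_GET(IWL_RX_USIG_A2_EHT_PPDU_TYPE, usig_a2);
+ *	u32 sig_mcs = FIELD_GET(IWL_RX_USIG_A2_EHT_SIG_MCS, usig_a2);
+ */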
+
/**
* struct iwl_rx_no_data - RX no data descriptor
* @info: 7:0 frame type, 15:8 RX error type
* @rx_vec: DW-12:9 raw RX vectors from DSP according to modulation type.
* for VHT: OFDM_RX_VECTOR_SIGA1_OUT, OFDM_RX_VECTOR_SIGA2_OUT
* for HE: OFDM_RX_VECTOR_HE_SIGA1_OUT, OFDM_RX_VECTOR_HE_SIGA2_OUT
- * for EHT: OFDM_RX_VECTOR_USIG_A1_OUT, OFDM_RX_VECTOR_USIG_A2_OUT,
+ * for EHT: OFDM_RX_VECTOR_USIG_A1_OUT, OFDM_RX_VECTOR_USIG_A2_EHT_OUT,
* OFDM_RX_VECTOR_EHT_OUT, OFDM_RX_VECTOR_EHT_USER_FIELD_OUT
*/
struct iwl_rx_no_data_ver_3 {
* is &enum iwl_mac_beacon_flags.
* @short_ssid: Short SSID
* @reserved: reserved
- * @template_id: currently equal to the mac context id of the coresponding mac.
+ * @link_id: the firmware id of the link that will use this beacon
* @tim_idx: the offset of the tim IE in the beacon
* @tim_size: the length of the tim IE
* @ecsa_offset: offset to the ECSA IE if present
__le16 flags;
__le32 short_ssid;
__le32 reserved;
- __le32 template_id;
+ __le32 link_id;
__le32 tim_idx;
__le32 tim_size;
__le32 ecsa_offset;
__le32 csa_offset;
struct ieee80211_hdr frame[];
} __packed; /* BEACON_TEMPLATE_CMD_API_S_VER_10,
- BEACON_TEMPLATE_CMD_API_S_VER_11,
- BEACON_TEMPLATE_CMD_API_S_VER_12 */
+ * BEACON_TEMPLATE_CMD_API_S_VER_11,
+ * BEACON_TEMPLATE_CMD_API_S_VER_12,
+ * BEACON_TEMPLATE_CMD_API_S_VER_13
+ */
struct iwl_beacon_notif {
struct iwl_mvm_tx_resp beacon_notify_hdr;
return entry->size;
}
+static u32 iwl_dump_ini_file_name_info(struct iwl_fw_runtime *fwrt,
+ struct list_head *list)
+{
+ struct iwl_fw_ini_dump_entry *entry;
+ struct iwl_dump_file_name_info *tlv;
+ u32 len = strnlen(fwrt->trans->dbg.dump_file_name_ext,
+ IWL_FW_INI_MAX_NAME);
+
+ if (!fwrt->trans->dbg.dump_file_name_ext_valid)
+ return 0;
+
+ entry = vzalloc(sizeof(*entry) + sizeof(*tlv) + len);
+ if (!entry)
+ return 0;
+
+ entry->size = sizeof(*tlv) + len;
+
+ tlv = (void *)entry->data;
+ tlv->type = cpu_to_le32(IWL_INI_DUMP_NAME_TYPE);
+ tlv->len = cpu_to_le32(len);
+ memcpy(tlv->data, fwrt->trans->dbg.dump_file_name_ext, len);
+
+ /* add the dump file name extension tlv to the list */
+ list_add_tail(&entry->list, list);
+
+ return entry->size;
+}
+
static const struct iwl_dump_ini_mem_ops iwl_dump_ini_region_ops[] = {
[IWL_FW_INI_REGION_INVALID] = {},
[IWL_FW_INI_REGION_INTERNAL_BUFFER] = {
size += iwl_dump_ini_mem(fwrt, list, ®_data,
&iwl_dump_ini_region_ops[IWL_FW_INI_REGION_DRAM_IMR]);
- if (size)
+ if (size) {
+ size += iwl_dump_ini_file_name_info(fwrt, list);
size += iwl_dump_ini_info(fwrt, trigger, list);
+ }
return size;
}
#include "iwl-csr.h"
#include "pnvm.h"
+#define FW_ASSERT_LMAC_FATAL 0x70
+#define FW_ASSERT_LMAC2_FATAL 0x72
+#define FW_ASSERT_UMAC_FATAL 0x71
+#define UMAC_RT_NMI_LMAC2_FATAL 0x72
+#define RT_NMI_INTERRUPT_OTHER_LMAC_FATAL 0x73
+#define FW_ASSERT_NMI_UNKNOWN 0x84
+
/*
* Note: This structure is read from the device with IO accesses,
* and the reading already does the endian conversion. As it is
#define ERROR_START_OFFSET (1 * sizeof(u32))
#define ERROR_ELEM_SIZE (7 * sizeof(u32))
+static bool iwl_fwrt_if_errorid_other_cpu(u32 err_id)
+{
+ err_id &= 0xFF;
+
+ if ((err_id >= FW_ASSERT_LMAC_FATAL &&
+ err_id <= RT_NMI_INTERRUPT_OTHER_LMAC_FATAL) ||
+ err_id == FW_ASSERT_NMI_UNKNOWN)
+ return true;
+ return false;
+}
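+
+/* e.g. (illustrative): an err_id of 0x20000072 masks down to 0x72, which is
+ * inside the 0x70-0x73 range defined above, so this returns true; 0x84
+ * (unknown NMI) is matched explicitly.
+ */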
+
static void iwl_fwrt_dump_umac_error_log(struct iwl_fw_runtime *fwrt)
{
struct iwl_trans *trans = fwrt->trans;
if (table.valid)
fwrt->dump.umac_err_id = table.error_id;
+ if (!iwl_fwrt_if_errorid_other_cpu(fwrt->dump.umac_err_id) &&
+ !fwrt->trans->dbg.dump_file_name_ext_valid) {
+ fwrt->trans->dbg.dump_file_name_ext_valid = true;
+ snprintf(fwrt->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+ "0x%x", fwrt->dump.umac_err_id);
+ }
+
if (ERROR_START_OFFSET <= table.valid * ERROR_ELEM_SIZE) {
IWL_ERR(trans, "Start IWL Error Log Dump:\n");
IWL_ERR(trans, "Transport status: 0x%08lX, valid: %d\n",
if (table.valid)
fwrt->dump.lmac_err_id[lmac_num] = table.error_id;
+ if (!iwl_fwrt_if_errorid_other_cpu(fwrt->dump.lmac_err_id[lmac_num]) &&
+ !fwrt->trans->dbg.dump_file_name_ext_valid) {
+ fwrt->trans->dbg.dump_file_name_ext_valid = true;
+ snprintf(fwrt->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+ "0x%x", fwrt->dump.lmac_err_id[lmac_num]);
+ }
+
if (ERROR_START_OFFSET <= table.valid * ERROR_ELEM_SIZE) {
IWL_ERR(trans, "Start IWL Error Log Dump:\n");
IWL_ERR(trans, "Transport status: 0x%08lX, valid: %d\n",
iwl_trans_read_mem_bytes(trans, base, &table, sizeof(table));
+ if (table.valid)
+ fwrt->dump.tcm_err_id[idx] = table.error_id;
+
+ if (!iwl_fwrt_if_errorid_other_cpu(fwrt->dump.tcm_err_id[idx]) &&
+ !fwrt->trans->dbg.dump_file_name_ext_valid) {
+ fwrt->trans->dbg.dump_file_name_ext_valid = true;
+ snprintf(fwrt->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+ "0x%x", fwrt->dump.tcm_err_id[idx]);
+ }
+
IWL_ERR(fwrt, "TCM%d status:\n", idx + 1);
IWL_ERR(fwrt, "0x%08X | error ID\n", table.error_id);
IWL_ERR(fwrt, "0x%08X | tcm branchlink2\n", table.blink2);
iwl_trans_read_mem_bytes(trans, base, &table, sizeof(table));
+ if (table.valid)
+ fwrt->dump.rcm_err_id[idx] = table.error_id;
+
+ if (!iwl_fwrt_if_errorid_other_cpu(fwrt->dump.rcm_err_id[idx]) &&
+ !fwrt->trans->dbg.dump_file_name_ext_valid) {
+ fwrt->trans->dbg.dump_file_name_ext_valid = true;
+ snprintf(fwrt->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+ "0x%x", fwrt->dump.rcm_err_id[idx]);
+ }
+
IWL_ERR(fwrt, "RCM%d status:\n", idx + 1);
IWL_ERR(fwrt, "0x%08X | error ID\n", table.error_id);
IWL_ERR(fwrt, "0x%08X | rcm branchlink2\n", table.blink2);
iwl_fwrt_dump_umac_error_log(fwrt);
iwl_fwrt_dump_tcm_error_log(fwrt, 0);
iwl_fwrt_dump_rcm_error_log(fwrt, 0);
- iwl_fwrt_dump_tcm_error_log(fwrt, 1);
- iwl_fwrt_dump_rcm_error_log(fwrt, 1);
+ if (fwrt->trans->dbg.tcm_error_event_table[1])
+ iwl_fwrt_dump_tcm_error_log(fwrt, 1);
+ if (fwrt->trans->dbg.rcm_error_event_table[1])
+ iwl_fwrt_dump_rcm_error_log(fwrt, 1);
iwl_fwrt_dump_iml_error_log(fwrt);
iwl_fwrt_dump_fseq_regs(fwrt);
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2014, 2018-2022 Intel Corporation
* Copyright (C) 2014-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
} __packed;
/**
+ * struct iwl_dump_file_name_info - data for dump file name addition
+ * @type: region type with reserved bits
+ * @len: the length of the file name string to be added to the dump file
+ * @data: the string to be added to the dump file
+ */
+struct iwl_dump_file_name_info {
+ __le32 type;
+ __le32 len;
+ __u8 data[];
+} __packed;
+
+/**
* struct iwl_fw_error_dump_file - the layout of the header of the file
* @barker: must be %IWL_FW_ERROR_DUMP_BARKER
* @file_len: the length of all the file starting from %barker
/* Use bit 31 as dump info type to avoid colliding with region types */
#define IWL_INI_DUMP_INFO_TYPE BIT(31)
+/* Use bit 31 and bit 24 as dump name type to avoid colliding with region types */
+#define IWL_INI_DUMP_NAME_TYPE (BIT(31) | BIT(24))
+
/**
* struct iwl_fw_error_dump_data - data for one type
* @type: &enum iwl_fw_ini_region_type
IWL_UCODE_TLV_CAPA_BIGTK_SUPPORT = (__force iwl_ucode_tlv_capa_t)100,
IWL_UCODE_TLV_CAPA_DRAM_FRAG_SUPPORT = (__force iwl_ucode_tlv_capa_t)104,
IWL_UCODE_TLV_CAPA_DUMP_COMPLETE_SUPPORT = (__force iwl_ucode_tlv_capa_t)105,
+ IWL_UCODE_TLV_CAPA_SYNCED_TIME = (__force iwl_ucode_tlv_capa_t)106,
+ IWL_UCODE_TLV_CAPA_TIME_SYNC_BOTH_FTM_TM = (__force iwl_ucode_tlv_capa_t)108,
+ IWL_UCODE_TLV_CAPA_BIGTK_TX_SUPPORT = (__force iwl_ucode_tlv_capa_t)109,
#ifdef __CHECKER__
/* sparse says it cannot increment the previous enum member */
* @enhance_sensitivity_table: device can do enhanced sensitivity.
* @init_evtlog_ptr: event log offset for init ucode.
* @init_evtlog_size: event log size for init ucode.
- * @init_errlog_ptr: error log offfset for init ucode.
+ * @init_errlog_ptr: error log offset for init ucode.
* @inst_evtlog_ptr: event log offset for runtime ucode.
* @inst_evtlog_size: event log size for runtime ucode.
- * @inst_errlog_ptr: error log offfset for runtime ucode.
+ * @inst_errlog_ptr: error log offset for runtime ucode.
* @type: firmware type (&enum iwl_fw_type)
* @human_readable: human readable version
* we get the ALIVE from the uCode
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright(c) 2020-2021 Intel Corporation
+ * Copyright(c) 2020-2022 Intel Corporation
*/
#include "iwl-drv.h"
kfree(data);
skip_parse:
- data = NULL;
/* now try to get the reduce power table, if not loaded yet */
if (!trans->reduce_power_loaded) {
data = iwl_uefi_get_reduced_power(trans, &len);
* trying again over and over.
*/
trans->reduce_power_loaded = true;
-
- goto skip_reduce_power;
+ } else {
+ ret = iwl_trans_set_reduce_power(trans, data, len);
+ if (ret)
+ IWL_DEBUG_FW(trans,
+ "Failed to set reduce power table %d\n",
+ ret);
+ kfree(data);
}
}
- ret = iwl_trans_set_reduce_power(trans, data, len);
- if (ret)
- IWL_DEBUG_FW(trans,
- "Failed to set reduce power table %d\n",
- ret);
- kfree(data);
-
-skip_reduce_power:
iwl_init_notification_wait(notif_wait, &pnvm_wait,
ntf_cmds, ARRAY_SIZE(ntf_cmds),
iwl_pnvm_complete_fn, trans);
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2021 Intel Corporation
+ * Copyright (C) 2021-2022 Intel Corporation
*/
#include <net/mac80211.h>
rate_v1 & RATE_MCS_HE_MSK_V1) {
rate_v2 |= rate_v1 & RATE_VHT_MCS_RATE_CODE_MSK;
- rate_v2 |= rate_v1 & RATE_VHT_MCS_MIMO2_MSK;
+ rate_v2 |= rate_v1 & RATE_MCS_NSS_MSK;
if (rate_v1 & RATE_MCS_HE_MSK_V1) {
u32 he_type_bits = rate_v1 & RATE_MCS_HE_TYPE_MSK_V1;
};
#define MAX_NUM_LMAC 2
+#define MAX_NUM_TCM 2
+#define MAX_NUM_RCM 2
struct iwl_fwrt_shared_mem_cfg {
int num_lmacs;
int num_txfifo_entries;
unsigned long non_collect_ts_start[IWL_FW_INI_TIME_POINT_NUM];
u32 *d3_debug_data;
u32 lmac_err_id[MAX_NUM_LMAC];
+ u32 tcm_err_id[MAX_NUM_TCM];
+ u32 rcm_err_id[MAX_NUM_RCM];
u32 umac_err_id;
struct iwl_txf_iter_data txf_iter_data;
extern const struct iwl_cfg iwl_cfg_bz_a0_mr_a0;
extern const struct iwl_cfg iwl_cfg_bz_a0_fm_a0;
extern const struct iwl_cfg iwl_cfg_bz_a0_fm4_a0;
+extern const struct iwl_cfg iwl_cfg_bz_a0_fm_b0;
+extern const struct iwl_cfg iwl_cfg_bz_a0_fm4_b0;
extern const struct iwl_cfg iwl_cfg_gl_a0_fm_a0;
extern const struct iwl_cfg iwl_cfg_gl_b0_fm_b0;
extern const struct iwl_cfg iwl_cfg_bz_z0_gf_a0;
extern const struct iwl_cfg iwl_cfg_bnj_a0_fm_a0;
extern const struct iwl_cfg iwl_cfg_bnj_a0_fm4_a0;
extern const struct iwl_cfg iwl_cfg_bnj_a0_gf_a0;
+extern const struct iwl_cfg iwl_cfg_bnj_b0_gf_a0;
extern const struct iwl_cfg iwl_cfg_bnj_a0_gf4_a0;
+extern const struct iwl_cfg iwl_cfg_bnj_b0_gf4_a0;
extern const struct iwl_cfg iwl_cfg_bnj_a0_hr_b0;
+extern const struct iwl_cfg iwl_cfg_bnj_b0_hr_b0;
extern const struct iwl_cfg iwl_cfg_bnj_b0_fm_b0;
+extern const struct iwl_cfg iwl_cfg_bnj_b0_fm4_b0;
#endif /* CONFIG_IWLMVM */
#endif /* __IWL_CONFIG_H__ */
#define CSR_HW_RF_ID_TYPE_HRCDB (0x00109F00)
#define CSR_HW_RF_ID_TYPE_GF (0x0010D000)
#define CSR_HW_RF_ID_TYPE_GF4 (0x0010E000)
+#define CSR_HW_RF_ID_TYPE_MS (0x00111000)
/* HW_RF CHIP STEP */
#define CSR_HW_RF_STEP(_val) (((_val) >> 8) & 0xF)
ret = dbg_tlv_alloc[tlv_idx](trans, tlv);
if (ret) {
- IWL_ERR(trans,
- "WRT: Failed to allocate TLV 0x%x, ret %d, (ext=%d)\n",
- type, ret, ext);
+ IWL_WARN(trans,
+ "WRT: Failed to allocate TLV 0x%x, ret %d, (ext=%d)\n",
+ type, ret, ext);
goto out_err;
}
}
fwrt->trans->dbg.restart_required = FALSE;
- IWL_DEBUG_INFO(fwrt, "WRT: tp %d, reset_fw %d\n",
- tp, dump_data.trig->reset_fw);
- IWL_DEBUG_INFO(fwrt, "WRT: restart_required %d, last_tp_resetfw %d\n",
- fwrt->trans->dbg.restart_required,
- fwrt->trans->dbg.last_tp_resetfw);
+ IWL_DEBUG_FW(fwrt, "WRT: tp %d, reset_fw %d\n",
+ tp, dump_data.trig->reset_fw);
+ IWL_DEBUG_FW(fwrt,
+ "WRT: restart_required %d, last_tp_resetfw %d\n",
+ fwrt->trans->dbg.restart_required,
+ fwrt->trans->dbg.last_tp_resetfw);
if (fwrt->trans->trans_cfg->device_family ==
IWL_DEVICE_FAMILY_9000) {
IWL_DEBUG_FW(fwrt, "WRT: FW_ASSERT due to reset_fw_mode-no restart\n");
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_STOP_AND_RELOAD_FW) {
- IWL_DEBUG_INFO(fwrt, "WRT: stop and reload firmware\n");
+ IWL_DEBUG_FW(fwrt, "WRT: stop and reload firmware\n");
fwrt->trans->dbg.restart_required = TRUE;
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) {
- IWL_DEBUG_INFO(fwrt, "WRT: stop only and no reload firmware\n");
+ IWL_DEBUG_FW(fwrt,
+ "WRT: stop only and no reload firmware\n");
fwrt->trans->dbg.restart_required = FALSE;
fwrt->trans->dbg.last_tp_resetfw =
le32_to_cpu(dump_data.trig->reset_fw);
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_NOTHING) {
- IWL_DEBUG_INFO(fwrt,
- "WRT: nothing need to be done after debug collection\n");
+ IWL_DEBUG_FW(fwrt,
+ "WRT: nothing need to be done after debug collection\n");
} else {
IWL_ERR(fwrt, "WRT: wrong resetfw %d\n",
le32_to_cpu(dump_data.trig->reset_fw));
#include "iwl-trans.h"
#define CREATE_TRACE_POINTS
+#ifdef CONFIG_CC_IS_GCC
+#pragma GCC diagnostic ignored "-Wsuggest-attribute=format"
+#endif
#include "iwl-devtrace.h"
EXPORT_TRACEPOINT_SYMBOL(iwlwifi_dev_ucode_event);
/* 2.4 GHz */
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
/* 5 GHz */
- 36, 40, 44 , 48, 52, 56, 60, 64,
+ 36, 40, 44, 48, 52, 56, 60, 64,
100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144,
149, 153, 157, 161, 165
};
* @periodic_trig_list: periodic triggers list
* @domains_bitmap: bitmap of active domains other than &IWL_FW_INI_DOMAIN_ALWAYS_ON
* @ucode_preset: preset based on ucode
+ * @dump_file_name_ext: dump file name extension
+ * @dump_file_name_ext_valid: whether the dump file name extension is valid
*/
struct iwl_trans_debug {
u8 n_dest_reg;
bool restart_required;
u32 last_tp_resetfw;
struct iwl_imr_data imr_data;
+ u8 dump_file_name_ext[IWL_FW_INI_MAX_NAME];
+ bool dump_file_name_ext_valid;
};
struct iwl_dma_ptr {
iwlmvm-y += tt.o offloading.o tdls.o
iwlmvm-y += ftm-responder.o ftm-initiator.o
iwlmvm-y += rfi.o
-iwlmvm-y += mld-key.o
+iwlmvm-y += mld-key.o mld-mac.o link.o mld-sta.o mld-mac80211.o
+iwlmvm-y += ptp.o
+iwlmvm-y += time-sync.o
iwlmvm-$(CONFIG_IWLWIFI_DEBUGFS) += debugfs.o debugfs-vif.o
iwlmvm-$(CONFIG_IWLWIFI_LEDS) += led.o
iwlmvm-$(CONFIG_PM) += d3.o
/*
* Copyright (C) 2012-2014, 2020 Intel Corporation
* Copyright (C) 2016 Intel Deutschland GmbH
+ * Copyright (C) 2022 Intel Corporation
*/
#include <net/mac80211.h>
#include "fw-api.h"
if (vif == data->ignore_vif)
return;
- if (mvmvif->phy_ctxt != data->phyctxt)
+ if (mvmvif->deflink.phy_ctxt != data->phyctxt)
return;
if (WARN_ON_ONCE(data->idx >= MAX_MACS_IN_BINDING))
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- if (WARN_ON_ONCE(!mvmvif->phy_ctxt))
+ if (WARN_ON_ONCE(!mvmvif->deflink.phy_ctxt))
return -EINVAL;
/*
if (iwl_mvm_sf_update(mvm, vif, false))
return -EINVAL;
- return iwl_mvm_binding_update(mvm, vif, mvmvif->phy_ctxt, true);
+ return iwl_mvm_binding_update(mvm, vif, mvmvif->deflink.phy_ctxt,
+ true);
}
int iwl_mvm_binding_remove_vif(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
int ret;
- if (WARN_ON_ONCE(!mvmvif->phy_ctxt))
+ if (WARN_ON_ONCE(!mvmvif->deflink.phy_ctxt))
return -EINVAL;
- ret = iwl_mvm_binding_update(mvm, vif, mvmvif->phy_ctxt, false);
+ ret = iwl_mvm_binding_update(mvm, vif, mvmvif->deflink.phy_ctxt,
+ false);
if (!ret)
if (iwl_mvm_sf_update(mvm, vif, true))
if (mvmsta->bt_reduced_txpower == enable)
return 0;
- value = mvmsta->sta_id;
+ value = mvmsta->deflink.sta_id;
if (enable)
value |= BT_REDUCED_TX_POWER_BIT;
swap(data->primary, data->secondary);
}
-/* must be called under rcu_read_lock */
-static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
- struct ieee80211_vif *vif)
+static void iwl_mvm_bt_notif_per_link(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct iwl_bt_iterator_data *data,
+ unsigned int link_id)
{
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct iwl_bt_iterator_data *data = _data;
- struct iwl_mvm *mvm = data->mvm;
- struct ieee80211_chanctx_conf *chanctx_conf;
/* default smps_mode is AUTOMATIC - only used for client modes */
enum ieee80211_smps_mode smps_mode = IEEE80211_SMPS_AUTOMATIC;
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u32 bt_activity_grading, min_ag_for_static_smps;
+ struct ieee80211_chanctx_conf *chanctx_conf;
+ struct iwl_mvm_vif_link_info *link_info;
+ struct ieee80211_bss_conf *link_conf;
int ave_rssi;
lockdep_assert_held(&mvm->mutex);
- switch (vif->type) {
- case NL80211_IFTYPE_STATION:
- break;
- case NL80211_IFTYPE_AP:
- if (!mvmvif->ap_ibss_active)
- return;
- break;
- default:
+ link_info = mvmvif->link[link_id];
+ if (!link_info)
return;
- }
- chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
+ link_conf = rcu_dereference(vif->link_conf[link_id]);
+	/* This can happen due to races: we may receive the notification
+	 * while holding the mutex, with mac80211 blocked on that mutex
+	 * in the middle of removing the link.
+	 */
+ if (!link_conf)
+ return;
+
+ chanctx_conf = rcu_dereference(link_conf->chanctx_conf);
/* If channel context is invalid or not on 2.4GHz .. */
if ((!chanctx_conf ||
if (vif->type == NL80211_IFTYPE_STATION) {
/* ... relax constraints and disable rssi events */
iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_BT_COEX,
- smps_mode);
- iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id,
+ smps_mode, link_id);
+ iwl_mvm_bt_coex_reduced_txp(mvm, link_info->ap_sta_id,
false);
+ /* FIXME: should this be per link? */
iwl_mvm_bt_coex_enable_rssi_event(mvm, vif, false, 0);
}
return;
if (!vif->cfg.assoc)
smps_mode = IEEE80211_SMPS_AUTOMATIC;
- if (mvmvif->phy_ctxt &&
- (mvm->last_bt_notif.rrc_status & BIT(mvmvif->phy_ctxt->id)))
+ if (link_info->phy_ctxt &&
+ (mvm->last_bt_notif.rrc_status & BIT(link_info->phy_ctxt->id)))
smps_mode = IEEE80211_SMPS_AUTOMATIC;
IWL_DEBUG_COEX(data->mvm,
- "mac %d: bt_activity_grading %d smps_req %d\n",
- mvmvif->id, bt_activity_grading, smps_mode);
+ "mac %d link %d: bt_activity_grading %d smps_req %d\n",
+ mvmvif->id, link_info->fw_link_id,
+ bt_activity_grading, smps_mode);
if (vif->type == NL80211_IFTYPE_STATION)
iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_BT_COEX,
- smps_mode);
+ smps_mode, link_id);
/* low latency is always primary */
if (iwl_mvm_vif_low_latency(mvmvif)) {
data->secondary = chanctx_conf;
}
+ /* FIXME: TCM load per interface? or need something per link? */
if (data->primary == chanctx_conf)
data->primary_load = mvm->tcm.result.load[mvmvif->id];
else if (data->secondary == chanctx_conf)
/* if secondary is not NULL, it might be a GO */
data->secondary = chanctx_conf;
+ /* FIXME: TCM load per interface? or need something per link? */
if (data->primary == chanctx_conf)
data->primary_load = mvm->tcm.result.load[mvmvif->id];
else if (data->secondary == chanctx_conf)
if (iwl_get_coex_type(mvm, vif) == BT_COEX_LOOSE_LUT ||
mvm->cfg->bt_shared_single_ant || !vif->cfg.assoc ||
le32_to_cpu(mvm->last_bt_notif.bt_activity_grading) == BT_OFF) {
- iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id, false);
+ iwl_mvm_bt_coex_reduced_txp(mvm, link_info->ap_sta_id, false);
+ /* FIXME: should this be per link? */
iwl_mvm_bt_coex_enable_rssi_event(mvm, vif, false, 0);
return;
}
if (!ave_rssi)
ave_rssi = -100;
if (ave_rssi > -IWL_MVM_BT_COEX_EN_RED_TXP_THRESH) {
- if (iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id, true))
+ if (iwl_mvm_bt_coex_reduced_txp(mvm, link_info->ap_sta_id,
+ true))
IWL_ERR(mvm, "Couldn't send BT_CONFIG cmd\n");
} else if (ave_rssi < -IWL_MVM_BT_COEX_DIS_RED_TXP_THRESH) {
- if (iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id, false))
+ if (iwl_mvm_bt_coex_reduced_txp(mvm, link_info->ap_sta_id,
+ false))
IWL_ERR(mvm, "Couldn't send BT_CONFIG cmd\n");
}
iwl_mvm_bt_coex_enable_rssi_event(mvm, vif, true, ave_rssi);
}
+/* must be called under rcu_read_lock */
+static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_bt_iterator_data *data = _data;
+ struct iwl_mvm *mvm = data->mvm;
+ unsigned int link_id;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_STATION:
+ break;
+ case NL80211_IFTYPE_AP:
+ if (!mvmvif->ap_ibss_active)
+ return;
+ break;
+ default:
+ return;
+ }
+
+ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
+ iwl_mvm_bt_notif_per_link(mvm, vif, data, link_id);
+}
+
static void iwl_mvm_bt_coex_notif_handle(struct iwl_mvm *mvm)
{
struct iwl_bt_iterator_data data = {
* Rssi update while not associated - can happen since the statistics
* are handled asynchronously
*/
- if (mvmvif->ap_sta_id == IWL_MVM_INVALID_STA)
+ if (mvmvif->deflink.ap_sta_id == IWL_MVM_INVALID_STA)
return;
/* No BT - reports should be disabled */
*/
if (rssi_event == RSSI_EVENT_LOW || mvm->cfg->bt_shared_single_ant ||
iwl_get_coex_type(mvm, vif) == BT_COEX_LOOSE_LUT)
- ret = iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id,
+ ret = iwl_mvm_bt_coex_reduced_txp(mvm,
+ mvmvif->deflink.ap_sta_id,
false);
else
- ret = iwl_mvm_bt_coex_reduced_txp(mvm, mvmvif->ap_sta_id, true);
+ ret = iwl_mvm_bt_coex_reduced_txp(mvm,
+ mvmvif->deflink.ap_sta_id,
+ true);
if (ret)
IWL_ERR(mvm, "couldn't send BT_CONFIG HCMD upon RSSI event\n");
{
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(mvmsta->vif);
- struct iwl_mvm_phy_ctxt *phy_ctxt = mvmvif->phy_ctxt;
+ struct iwl_mvm_phy_ctxt *phy_ctxt = mvmvif->deflink.phy_ctxt;
enum iwl_bt_coex_lut_type lut_type;
if (mvm->last_bt_notif.ttc_status & BIT(phy_ctxt->id))
{
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(mvmsta->vif);
- struct iwl_mvm_phy_ctxt *phy_ctxt = mvmvif->phy_ctxt;
+ struct iwl_mvm_phy_ctxt *phy_ctxt = mvmvif->deflink.phy_ctxt;
enum iwl_bt_coex_lut_type lut_type;
if (mvm->last_bt_notif.ttc_status & BIT(phy_ctxt->id))
for (i = 0; i < ARRAY_SIZE(data.rsc->mcast_key_id_map); i++)
data.rsc->mcast_key_id_map[i] =
IWL_MCAST_KEY_MAP_INVALID;
- data.rsc->sta_id = cpu_to_le32(mvmvif->ap_sta_id);
+ data.rsc->sta_id = cpu_to_le32(mvmvif->deflink.ap_sta_id);
ieee80211_iter_keys(mvm->hw, vif,
iwl_mvm_wowlan_get_rsc_v5_data,
if (ver == 4) {
size = sizeof(*data.rsc_tsc);
- data.rsc_tsc->sta_id = cpu_to_le32(mvmvif->ap_sta_id);
+ data.rsc_tsc->sta_id =
+ cpu_to_le32(mvmvif->deflink.ap_sta_id);
} else {
/* ver == 2 || ver == IWL_FW_CMD_VER_UNKNOWN */
size = sizeof(data.rsc_tsc->params);
pattern_cmd->n_patterns = wowlan->n_patterns;
if (ver >= 3)
- pattern_cmd->sta_id = mvmvif->ap_sta_id;
+ pattern_cmd->sta_id = mvmvif->deflink.ap_sta_id;
for (i = 0; i < wowlan->n_patterns; i++) {
int mask_len = DIV_ROUND_UP(wowlan->patterns[i].pattern_len, 8);
return -EINVAL;
/* add back the PHY */
- if (WARN_ON(!mvmvif->phy_ctxt))
+ if (WARN_ON(!mvmvif->deflink.phy_ctxt))
return -EINVAL;
rcu_read_lock();
chains_dynamic = ctx->rx_chains_dynamic;
rcu_read_unlock();
- ret = iwl_mvm_phy_ctxt_add(mvm, mvmvif->phy_ctxt, &chandef,
+ ret = iwl_mvm_phy_ctxt_add(mvm, mvmvif->deflink.phy_ctxt, &chandef,
chains_static, chains_dynamic);
if (ret)
return ret;
/* add back binding - XXX refactor? */
binding_cmd.id_and_color =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->phy_ctxt->id,
- mvmvif->phy_ctxt->color));
+ cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->deflink.phy_ctxt->id,
+ mvmvif->deflink.phy_ctxt->color));
binding_cmd.action = cpu_to_le32(FW_CTXT_ACTION_ADD);
binding_cmd.phy =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->phy_ctxt->id,
- mvmvif->phy_ctxt->color));
+ cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->deflink.phy_ctxt->id,
+ mvmvif->deflink.phy_ctxt->color));
binding_cmd.macs[0] = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
mvmvif->color));
for (i = 1; i < MAX_MACS_IN_BINDING; i++)
ret = iwl_mvm_sta_send_to_fw(mvm, ap_sta, false, 0);
if (ret)
return ret;
- rcu_assign_pointer(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id], ap_sta);
+ rcu_assign_pointer(mvm->fw_id_to_mac_id[mvmvif->deflink.ap_sta_id],
+ ap_sta);
ret = iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
if (ret)
/* and some quota */
quota = iwl_mvm_quota_cmd_get_quota(mvm, "a_cmd, 0);
quota->id_and_color =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->phy_ctxt->id,
- mvmvif->phy_ctxt->color));
+ cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->deflink.phy_ctxt->id,
+ mvmvif->deflink.phy_ctxt->color));
quota->quota = cpu_to_le32(IWL_MVM_MAX_QUOTA);
quota->max_duration = cpu_to_le32(IWL_MVM_MAX_QUOTA);
if (ver == 2) {
size = sizeof(tkip_data.tkip);
tkip_data.tkip.sta_id =
- cpu_to_le32(mvmvif->ap_sta_id);
+ cpu_to_le32(mvmvif->deflink.ap_sta_id);
} else if (ver == 1 || ver == IWL_FW_CMD_VER_UNKNOWN) {
size = sizeof(struct iwl_wowlan_tkip_params_cmd_ver_1);
} else {
kek_kck_cmd.kek_len = cpu_to_le16(mvmvif->rekey_data.kek_len);
kek_kck_cmd.replay_ctr = mvmvif->rekey_data.replay_ctr;
kek_kck_cmd.akm = cpu_to_le32(mvmvif->rekey_data.akm);
- kek_kck_cmd.sta_id = cpu_to_le32(mvmvif->ap_sta_id);
+ kek_kck_cmd.sta_id = cpu_to_le32(mvmvif->deflink.ap_sta_id);
if (cmd_ver == 4) {
cmd_size = sizeof(struct iwl_wowlan_kek_kck_material_cmd_v4);
mvmvif = iwl_mvm_vif_from_mac80211(vif);
- if (mvmvif->ap_sta_id == IWL_MVM_INVALID_STA) {
+ if (mvmvif->deflink.ap_sta_id == IWL_MVM_INVALID_STA) {
/* if we're not associated, this must be netdetect */
if (!wowlan->nd_config) {
ret = 1;
} else {
struct iwl_wowlan_config_cmd wowlan_config_cmd = {};
- wowlan_config_cmd.sta_id = mvmvif->ap_sta_id;
+ wowlan_config_cmd.sta_id = mvmvif->deflink.ap_sta_id;
ap_sta = rcu_dereference_protected(
- mvm->fw_id_to_mac_id[mvmvif->ap_sta_id],
+ mvm->fw_id_to_mac_id[mvmvif->deflink.ap_sta_id],
lockdep_is_held(&mvm->mutex));
if (IS_ERR_OR_NULL(ap_sta)) {
ret = -EINVAL;
/* if FW uses status notification, status shouldn't be NULL here */
if (!d3_data->status) {
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- u8 sta_id = mvm->net_detect ? IWL_MVM_INVALID_STA : mvmvif->ap_sta_id;
+ u8 sta_id = mvm->net_detect ? IWL_MVM_INVALID_STA :
+ mvmvif->deflink.ap_sta_id;
d3_data->status = iwl_mvm_send_wowlan_get_status(mvm, sta_id);
}
mutex_lock(&mvm->mutex);
- ap_sta_id = mvmvif->ap_sta_id;
+ ap_sta_id = mvmvif->deflink.ap_sta_id;
switch (ieee80211_vif_type_p2p(vif)) {
case NL80211_IFTYPE_ADHOC:
pos += scnprintf(buf+pos, bufsz-pos, "Load: %d\n",
mvm->tcm.result.load[mvmvif->id]);
pos += scnprintf(buf+pos, bufsz-pos, "QoS:\n");
- for (i = 0; i < ARRAY_SIZE(mvmvif->queue_params); i++)
+ for (i = 0; i < ARRAY_SIZE(mvmvif->deflink.queue_params); i++)
pos += scnprintf(buf+pos, bufsz-pos,
"\t%d: txop:%d - cw_min:%d - cw_max = %d - aifs = %d upasd = %d\n",
- i, mvmvif->queue_params[i].txop,
- mvmvif->queue_params[i].cw_min,
- mvmvif->queue_params[i].cw_max,
- mvmvif->queue_params[i].aifs,
- mvmvif->queue_params[i].uapsd);
+ i, mvmvif->deflink.queue_params[i].txop,
+ mvmvif->deflink.queue_params[i].cw_min,
+ mvmvif->deflink.queue_params[i].cw_max,
+ mvmvif->deflink.queue_params[i].aifs,
+ mvmvif->deflink.queue_params[i].uapsd);
if (vif->type == NL80211_IFTYPE_STATION &&
ap_sta_id != IWL_MVM_INVALID_STA) {
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#include <linux/err.h>
#include <linux/ieee80211.h>
#include <linux/netdevice.h>
+#include <linux/dmi.h>
#include "mvm.h"
#include "sta.h"
#include "debugfs.h"
#include "iwl-modparams.h"
#include "fw/error-dump.h"
+#include "fw/api/phy-ctxt.h"
static ssize_t iwl_dbgfs_ctdp_budget_read(struct file *file,
char __user *user_buf,
int pos;
if (!mvm->temperature_test)
- pos = scnprintf(buf , sizeof(buf), "disabled\n");
+ pos = scnprintf(buf, sizeof(buf), "disabled\n");
else
- pos = scnprintf(buf , sizeof(buf), "%d\n", mvm->temperature);
+ pos = scnprintf(buf, sizeof(buf), "%d\n", mvm->temperature);
return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
}
mvm->temperature = temperature;
}
IWL_DEBUG_TEMP(mvm, "%sabling debug set temperature (temp = %d)\n",
- mvm->temperature_test ? "En" : "Dis" ,
+ mvm->temperature_test ? "En" : "Dis",
mvm->temperature);
/* handle the temperature change */
iwl_mvm_tt_handler(mvm);
if (ret)
return -EIO;
- pos = scnprintf(buf , sizeof(buf), "%d\n", temp);
+ pos = scnprintf(buf, sizeof(buf), "%d\n", temp);
return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
}
{
struct ieee80211_sta *sta = file->private_data;
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw;
+ struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->deflink.lq_sta.rs_fw;
struct iwl_mvm *mvm = lq_sta->pers.drv;
static const size_t bufsz = 2048;
char *buff;
return ret;
}
+static ssize_t iwl_dbgfs_tas_get_status_read(struct file *file,
+ char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct iwl_mvm *mvm = file->private_data;
+ struct iwl_mvm_tas_status_resp tas_rsp;
+ struct iwl_mvm_tas_status_resp *rsp = &tas_rsp;
+ static const size_t bufsz = 1024;
+ char *buff, *pos, *endpos;
+ const char * const tas_dis_reason[TAS_DISABLED_REASON_MAX] = {
+ [TAS_DISABLED_DUE_TO_BIOS] =
+ "Due To BIOS",
+ [TAS_DISABLED_DUE_TO_SAR_6DBM] =
+ "Due To SAR Limit Less Than 6 dBm",
+ [TAS_DISABLED_REASON_INVALID] =
+ "N/A",
+ };
+ const char * const tas_current_status[TAS_DYNA_STATUS_MAX] = {
+ [TAS_DYNA_INACTIVE] = "INACTIVE",
+ [TAS_DYNA_INACTIVE_MVM_MODE] =
+ "inactive due to mvm mode",
+ [TAS_DYNA_INACTIVE_TRIGGER_MODE] =
+ "inactive due to trigger mode",
+ [TAS_DYNA_INACTIVE_BLOCK_LISTED] =
+ "inactive due to block listed",
+ [TAS_DYNA_INACTIVE_UHB_NON_US] =
+ "inactive due to uhb non US",
+ [TAS_DYNA_ACTIVE] = "ACTIVE",
+ };
+ struct iwl_host_cmd hcmd = {
+ .id = WIDE_ID(DEBUG_GROUP, GET_TAS_STATUS),
+ .flags = CMD_WANT_SKB,
+ .len = { 0, },
+ .data = { NULL, },
+ };
+ int ret, i, tmp;
+ bool tas_enabled = false;
+ unsigned long dyn_status;
+
+ if (!iwl_mvm_firmware_running(mvm))
+ return -ENODEV;
+
+ mutex_lock(&mvm->mutex);
+ ret = iwl_mvm_send_cmd(mvm, &hcmd);
+ mutex_unlock(&mvm->mutex);
+ if (ret < 0)
+ return ret;
+
+ buff = kzalloc(bufsz, GFP_KERNEL);
+ if (!buff)
+ return -ENOMEM;
+ pos = buff;
+ endpos = pos + bufsz;
+
+ rsp = (void *)hcmd.resp_pkt->data;
+
+ pos += scnprintf(pos, endpos - pos, "TAS Conclusion:\n");
+ for (i = 0; i < rsp->in_dual_radio + 1; i++) {
+ if (rsp->tas_status_mac[i].band != TAS_LMAC_BAND_INVALID &&
+ rsp->tas_status_mac[i].dynamic_status & BIT(TAS_DYNA_ACTIVE)) {
+ pos += scnprintf(pos, endpos - pos, "\tON for ");
+ switch (rsp->tas_status_mac[i].band) {
+ case TAS_LMAC_BAND_HB:
+ pos += scnprintf(pos, endpos - pos, "HB\n");
+ break;
+ case TAS_LMAC_BAND_LB:
+ pos += scnprintf(pos, endpos - pos, "LB\n");
+ break;
+ case TAS_LMAC_BAND_UHB:
+ pos += scnprintf(pos, endpos - pos, "UHB\n");
+ break;
+ case TAS_LMAC_BAND_INVALID:
+ pos += scnprintf(pos, endpos - pos,
+ "INVALID BAND\n");
+ break;
+ default:
+ pos += scnprintf(pos, endpos - pos,
+ "Unsupported band (%d)\n",
+ rsp->tas_status_mac[i].band);
+ goto out;
+ }
+ tas_enabled = true;
+ }
+ }
+ if (!tas_enabled)
+ pos += scnprintf(pos, endpos - pos, "\tOFF\n");
+
+ pos += scnprintf(pos, endpos - pos, "TAS Report\n");
+ pos += scnprintf(pos, endpos - pos, "TAS FW version: %d\n",
+ rsp->tas_fw_version);
+ pos += scnprintf(pos, endpos - pos, "Is UHB enabled for USA?: %s\n",
+ rsp->is_uhb_for_usa_enable ? "True" : "False");
+ pos += scnprintf(pos, endpos - pos, "Current MCC: 0x%x\n",
+ le16_to_cpu(rsp->curr_mcc));
+
+ pos += scnprintf(pos, endpos - pos, "Block list entries:");
+ for (i = 0; i < APCI_WTAS_BLACK_LIST_MAX; i++)
+ pos += scnprintf(pos, endpos - pos, " 0x%x",
+ le16_to_cpu(rsp->block_list[i]));
+
+ pos += scnprintf(pos, endpos - pos, "\nOEM name: %s\n",
+ dmi_get_system_info(DMI_SYS_VENDOR));
+ pos += scnprintf(pos, endpos - pos, "\tVendor In Approved List: %s\n",
+ iwl_mvm_is_vendor_in_approved_list() ? "YES" : "NO");
+ pos += scnprintf(pos, endpos - pos,
+ "\tDo TAS Support Dual Radio?: %s\n",
+ rsp->in_dual_radio ? "TRUE" : "FALSE");
+
+ for (i = 0; i < rsp->in_dual_radio + 1; i++) {
+ if (rsp->tas_status_mac[i].static_status == 0) {
+ pos += scnprintf(pos, endpos - pos,
+ "Static status: disabled\n");
+ pos += scnprintf(pos, endpos - pos,
+ "Static disabled reason: %s (0)\n",
+ tas_dis_reason[0]);
+ goto out;
+ }
+
+ pos += scnprintf(pos, endpos - pos, "TAS status for ");
+ switch (rsp->tas_status_mac[i].band) {
+ case TAS_LMAC_BAND_HB:
+ pos += scnprintf(pos, endpos - pos, "High band\n");
+ break;
+ case TAS_LMAC_BAND_LB:
+ pos += scnprintf(pos, endpos - pos, "Low band\n");
+ break;
+ case TAS_LMAC_BAND_UHB:
+ pos += scnprintf(pos, endpos - pos,
+ "Ultra high band\n");
+ break;
+ case TAS_LMAC_BAND_INVALID:
+ pos += scnprintf(pos, endpos - pos,
+ "INVALID band\n");
+ break;
+ default:
+ pos += scnprintf(pos, endpos - pos,
+ "Unsupported band (%d)\n",
+ rsp->tas_status_mac[i].band);
+ goto out;
+ }
+ pos += scnprintf(pos, endpos - pos, "Static status: %sabled\n",
+ rsp->tas_status_mac[i].static_status ?
+ "En" : "Dis");
+ pos += scnprintf(pos, endpos - pos,
+ "\tStatic Disabled Reason: ");
+ if (rsp->tas_status_mac[i].static_dis_reason < TAS_DISABLED_REASON_MAX)
+ pos += scnprintf(pos, endpos - pos, "%s (%d)\n",
+ tas_dis_reason[rsp->tas_status_mac[i].static_dis_reason],
+ rsp->tas_status_mac[i].static_dis_reason);
+ else
+ pos += scnprintf(pos, endpos - pos,
+ "unsupported value (%d)\n",
+ rsp->tas_status_mac[i].static_dis_reason);
+
+ pos += scnprintf(pos, endpos - pos, "Dynamic status:\n");
+ dyn_status = (rsp->tas_status_mac[i].dynamic_status);
+ for_each_set_bit(tmp, &dyn_status, sizeof(dyn_status)) {
+ if (tmp >= 0 && tmp < TAS_DYNA_STATUS_MAX)
+ pos += scnprintf(pos, endpos - pos,
+ "\t%s (%d)\n",
+ tas_current_status[tmp], tmp);
+ }
+
+ pos += scnprintf(pos, endpos - pos,
+ "Is near disconnection?: %s\n",
+ rsp->tas_status_mac[i].near_disconnection ?
+ "True" : "False");
+ tmp = le16_to_cpu(rsp->tas_status_mac[i].max_reg_pwr_limit);
+ pos += scnprintf(pos, endpos - pos,
+ "Max. regulatory pwr limit (dBm): %d.%03d\n",
+ tmp / 8, 125 * (tmp % 8));
+ tmp = le16_to_cpu(rsp->tas_status_mac[i].sar_limit);
+ pos += scnprintf(pos, endpos - pos,
+ "SAR limit (dBm): %d.%03d\n",
+ tmp / 8, 125 * (tmp % 8));
+ }
+
+out:
+ ret = simple_read_from_buffer(user_buf, count, ppos, buff, pos - buff);
+ kfree(buff);
+ iwl_free_resp(&hcmd);
+ return ret;
+}
+
static ssize_t iwl_dbgfs_phy_integration_ver_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
struct sk_buff *beacon;
struct ieee80211_tx_info *info;
struct iwl_mac_beacon_cmd beacon_cmd = {};
+ unsigned int link_id;
u8 rate;
int i;
info = IEEE80211_SKB_CB(beacon);
rate = iwl_mvm_mac_ctxt_get_beacon_rate(mvm, info, vif);
- beacon_cmd.flags =
- cpu_to_le16(iwl_mvm_mac_ctxt_get_beacon_flags(mvm->fw, rate));
- beacon_cmd.byte_cnt = cpu_to_le16((u16)beacon->len);
- beacon_cmd.template_id = cpu_to_le32((u32)mvmvif->id);
+ for_each_mvm_vif_valid_link(mvmvif, link_id) {
+ beacon_cmd.flags =
+ cpu_to_le16(iwl_mvm_mac_ctxt_get_beacon_flags(mvm->fw,
+ rate));
+ beacon_cmd.byte_cnt = cpu_to_le16((u16)beacon->len);
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, BEACON_TEMPLATE_CMD, 0) > 12)
+ beacon_cmd.link_id =
+ cpu_to_le32(mvmvif->link[link_id]->fw_link_id);
+ else
+ beacon_cmd.link_id = cpu_to_le32((u32)mvmvif->id);
- iwl_mvm_mac_ctxt_set_tim(mvm, &beacon_cmd.tim_idx,
- &beacon_cmd.tim_size,
- beacon->data, beacon->len);
+ iwl_mvm_mac_ctxt_set_tim(mvm, &beacon_cmd.tim_idx,
+ &beacon_cmd.tim_size,
+ beacon->data, beacon->len);
- iwl_mvm_mac_ctxt_send_beacon_cmd(mvm, beacon, &beacon_cmd,
- sizeof(beacon_cmd));
+ iwl_mvm_mac_ctxt_send_beacon_cmd(mvm, beacon, &beacon_cmd,
+ sizeof(beacon_cmd));
+ }
mutex_unlock(&mvm->mutex);
dev_kfree_skb(beacon);
MVM_DEBUGFS_READ_FILE_OPS(drv_rx_stats);
MVM_DEBUGFS_READ_FILE_OPS(fw_ver);
MVM_DEBUGFS_READ_FILE_OPS(phy_integration_ver);
+MVM_DEBUGFS_READ_FILE_OPS(tas_get_status);
MVM_DEBUGFS_WRITE_FILE_OPS(fw_restart, 10);
MVM_DEBUGFS_WRITE_FILE_OPS(fw_nmi, 10);
MVM_DEBUGFS_WRITE_FILE_OPS(bt_tx_prio, 10);
if (mvm->fw->phy_integration_ver)
MVM_DEBUGFS_ADD_FILE(phy_integration_ver, mvm->debugfs_dir, 0400);
+ MVM_DEBUGFS_ADD_FILE(tas_get_status, mvm->debugfs_dir, 0400);
#ifdef CONFIG_ACPI
MVM_DEBUGFS_ADD_FILE(sar_geo_profile, mvm->debugfs_dir, 0400);
#endif
struct ieee80211_sta *sta;
rcu_read_lock();
- sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id]);
+ sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->deflink.ap_sta_id]);
if (!IS_ERR_OR_NULL(sta) && sta->mfp)
expected_tk_len = 0;
rcu_read_unlock();
rcu_read_lock();
- sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id]);
+ sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->deflink.ap_sta_id]);
if (sta->mfp && (peer->ftm.trigger_based || peer->ftm.non_trigger_based))
FTM_PUT_FLAG(PMF);
rcu_read_unlock();
- target->sta_id = mvmvif->ap_sta_id;
+ target->sta_id = mvmvif->deflink.ap_sta_id;
} else {
target->sta_id = IWL_MVM_INVALID_STA;
}
cpu_to_le32(IWL_TOF_RESPONDER_CMD_VALID_CHAN_INFO |
IWL_TOF_RESPONDER_CMD_VALID_BSSID |
IWL_TOF_RESPONDER_CMD_VALID_STA_ID),
- .sta_id = mvmvif->bcast_sta.sta_id,
+ .sta_id = mvmvif->deflink.bcast_sta.sta_id,
};
u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 6);
int err;
#include "iwl-phy-db.h"
#include "iwl-modparams.h"
#include "iwl-nvm-parse.h"
+#include "time-sync.h"
#define MVM_UCODE_ALIVE_TIMEOUT (HZ)
#define MVM_UCODE_CALIB_TIMEOUT (2 * HZ)
u32 version = iwl_fw_lookup_notif_ver(mvm->fw, LEGACY_GROUP,
UCODE_ALIVE_NTFY, 0);
u32 i;
- struct iwl_trans *trans = mvm->trans;
- enum iwl_device_family device_family = trans->trans_cfg->device_family;
if (version == 6) {
if (umac_error_table) {
if (umac_error_table >=
- mvm->trans->cfg->min_umac_error_event_table ||
- device_family >= IWL_DEVICE_FAMILY_BZ) {
+ mvm->trans->cfg->min_umac_error_event_table) {
iwl_fw_umac_set_alive_err_table(mvm->trans,
umac_error_table);
} else {
ret = iwl_read_ppag_table(&mvm->fwrt, &cmd, &cmd_size);
/* Not supporting PPAG table is a valid scenario */
- if(ret < 0)
+ if (ret < 0)
return 0;
IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n");
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
},
},
+ { .ident = "MSFT",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+ },
+ },
/* keep last */
{}
};
+bool iwl_mvm_is_vendor_in_approved_list(void)
+{
+ return dmi_check_system(dmi_tas_approved_list);
+}
+
static bool iwl_mvm_add_to_tas_block_list(__le32 *list, __le32 *le_size, unsigned int mcc)
{
int i;
{
}
+bool iwl_mvm_is_vendor_in_approved_list(void)
+{
+ return false;
+}
+
static u8 iwl_mvm_eval_dsm_rfi(struct iwl_mvm *mvm)
{
return DSM_VALUE_RFI_DISABLE;
}
/* init the fw <-> mac80211 STA mapping */
- for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++)
+ for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++) {
RCU_INIT_POINTER(mvm->fw_id_to_mac_id[i], NULL);
+ RCU_INIT_POINTER(mvm->fw_id_to_link_sta[i], NULL);
+ }
+
+ memset(&mvm->fw_link_ids_map, 0, sizeof(mvm->fw_link_ids_map));
mvm->tdls_cs.peer.sta_id = IWL_MVM_INVALID_STA;
goto error;
}
- if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
iwl_mvm_send_recovery_cmd(mvm, ERROR_RECOVERY_UPDATE_DB);
+ iwl_mvm_time_sync_config(mvm, mvm->time_sync.peer_addr,
+ IWL_TIME_SYNC_PROTOCOL_TM |
+ IWL_TIME_SYNC_PROTOCOL_FTM);
+ }
+
+ if (!mvm->ptp_data.ptp_clock)
+ iwl_mvm_ptp_init(mvm);
if (iwl_acpi_get_eckv(mvm->dev, &mvm->ext_clock_valid))
IWL_DEBUG_INFO(mvm, "ECKV table doesn't exist in BIOS\n");
goto error;
/* init the fw <-> mac80211 STA mapping */
- for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++)
+ for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++) {
RCU_INIT_POINTER(mvm->fw_id_to_mac_id[i], NULL);
+ RCU_INIT_POINTER(mvm->fw_id_to_link_sta[i], NULL);
+ }
if (iwl_fw_lookup_cmd_ver(mvm->fw, ADD_STA, 0) < 12) {
/*
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2022 - 2023 Intel Corporation
+ */
+#include "mvm.h"
+#include "time-event.h"
+
+static u32 iwl_mvm_get_free_fw_link_id(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvm_vif)
+{
+ u32 link_id;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ link_id = ffz(mvm->fw_link_ids_map);
+
+	/* this case can happen if there are deactivated but not yet removed links */
+ if (link_id > IWL_MVM_FW_MAX_LINK_ID)
+ return IWL_MVM_FW_LINK_ID_INVALID;
+
+ mvm->fw_link_ids_map |= BIT(link_id);
+ return link_id;
+}
+
+static void iwl_mvm_release_fw_link_id(struct iwl_mvm *mvm, u32 link_id)
+{
+ lockdep_assert_held(&mvm->mutex);
+
+ if (!WARN_ON(link_id > IWL_MVM_FW_MAX_LINK_ID))
+ mvm->fw_link_ids_map &= ~BIT(link_id);
+}
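+
+/* Illustration (hypothetical values): with fw_link_ids_map == 0x7, ffz()
+ * returns 3, so FW link ID 3 is handed out and bit 3 gets set; releasing the
+ * ID simply clears the bit so it can be reused by a later link.
+ */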
+
+static int iwl_mvm_link_cmd_send(struct iwl_mvm *mvm,
+ struct iwl_link_config_cmd *cmd,
+ enum iwl_ctxt_action action)
+{
+ int ret;
+
+ cmd->action = cpu_to_le32(action);
+ ret = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(MAC_CONF_GROUP, LINK_CONFIG_CMD), 0,
+ sizeof(*cmd), cmd);
+ if (ret)
+ IWL_ERR(mvm, "Failed to send LINK_CONFIG_CMD (action:%d): %d\n",
+ action, ret);
+ return ret;
+}
+
+int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id = link_conf->link_id;
+ struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
+ struct iwl_link_config_cmd cmd = {};
+ struct iwl_mvm_phy_ctxt *phyctxt;
+
+ if (WARN_ON_ONCE(!link_info))
+ return -EINVAL;
+
+ if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID) {
+ link_info->fw_link_id = iwl_mvm_get_free_fw_link_id(mvm,
+ mvmvif);
+ if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID)
+ return -EINVAL;
+ }
+
+	/* Update SF - disable if needed. If this fails, SF might still be on
+	 * while many MACs are bound, which is forbidden - so fail the binding.
+	 */
+ if (iwl_mvm_sf_update(mvm, vif, false))
+ return -EINVAL;
+
+ cmd.link_id = cpu_to_le32(link_info->fw_link_id);
+ cmd.mac_id = cpu_to_le32(mvmvif->id);
+ /* P2P-Device already has a valid PHY context during add */
+ phyctxt = link_info->phy_ctxt;
+ if (phyctxt)
+ cmd.phy_id = cpu_to_le32(phyctxt->id);
+ else
+ cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
+
+ memcpy(cmd.local_link_addr, link_conf->addr, ETH_ALEN);
+
+ if (vif->type == NL80211_IFTYPE_ADHOC && link_conf->bssid)
+ memcpy(cmd.ibss_bssid_addr, link_conf->bssid, ETH_ALEN);
+
+ return iwl_mvm_link_cmd_send(mvm, &cmd, FW_CTXT_ACTION_ADD);
+}
+
+int iwl_mvm_link_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u32 changes, bool active)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id = link_conf->link_id;
+ struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
+ struct iwl_mvm_phy_ctxt *phyctxt;
+ struct iwl_link_config_cmd cmd = {};
+ u32 ht_flag, flags = 0, flags_mask = 0;
+ int ret;
+
+ if (WARN_ON_ONCE(!link_info ||
+ link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID))
+ return -EINVAL;
+
+ if (changes & LINK_CONTEXT_MODIFY_ACTIVE) {
+		/* When activating a link, the phy context should be valid;
+		 * when deactivating a link, it should also be valid since
+		 * the link was active before. However, a link is first added
+		 * with FW_CTXT_INVALID, so we can get here without a phy
+		 * context if the link is removed before it was ever
+		 * activated; in that case there is nothing to do.
+		 */
+ if (!link_info->phy_ctxt)
+ return 0;
+
+ /* check there aren't too many active links */
+ if (!link_info->active && active) {
+ int i, count = 0;
+
+ /* link with phy_ctxt is active in FW */
+ for_each_mvm_vif_valid_link(mvmvif, i)
+ if (mvmvif->link[i]->phy_ctxt)
+ count++;
+
+ /* FIXME: IWL_MVM_FW_MAX_ACTIVE_LINKS_NUM should be
+ * defined per HW
+ */
+ if (count >= IWL_MVM_FW_MAX_ACTIVE_LINKS_NUM)
+ return -EINVAL;
+ }
+
+ /* Catch early if driver tries to activate or deactivate a link
+ * twice.
+ */
+ WARN_ON_ONCE(active == link_info->active);
+
+		/* When deactivating a link, session protection should
+		 * be stopped
+		 */
+ if (!active && vif->type == NL80211_IFTYPE_STATION)
+ iwl_mvm_stop_session_protection(mvm, vif);
+ }
+
+ cmd.link_id = cpu_to_le32(link_info->fw_link_id);
+
+ /* The phy_id, link address and listen_lmac can be modified only until
+ * the link becomes active, otherwise they will be ignored.
+ */
+ phyctxt = link_info->phy_ctxt;
+ if (phyctxt)
+ cmd.phy_id = cpu_to_le32(phyctxt->id);
+ else
+ cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
+ cmd.mac_id = cpu_to_le32(mvmvif->id);
+
+ memcpy(cmd.local_link_addr, link_conf->addr, ETH_ALEN);
+
+ cmd.active = cpu_to_le32(active);
+
+ if (vif->type == NL80211_IFTYPE_ADHOC && link_conf->bssid)
+ memcpy(cmd.ibss_bssid_addr, link_conf->bssid, ETH_ALEN);
+
+	/* TODO: set a value to cmd.listen_lmac once the system requirements
+	 * define it
+	 */
+
+ iwl_mvm_set_fw_basic_rates(mvm, vif, link_conf,
+ &cmd.cck_rates, &cmd.ofdm_rates);
+
+ cmd.cck_short_preamble = cpu_to_le32(link_conf->use_short_preamble);
+ cmd.short_slot = cpu_to_le32(link_conf->use_short_slot);
+
+ /* The fw does not distinguish between ht and fat */
+ ht_flag = LINK_PROT_FLG_HT_PROT | LINK_PROT_FLG_FAT_PROT;
+ iwl_mvm_set_fw_protection_flags(mvm, vif, link_conf,
+ &cmd.protection_flags,
+ ht_flag, LINK_PROT_FLG_TGG_PROTECT);
+
+ iwl_mvm_set_fw_qos_params(mvm, vif, link_conf, &cmd.ac[0],
+ &cmd.qos_flags);
+
+ cmd.bi = cpu_to_le32(link_conf->beacon_int);
+ cmd.dtim_interval = cpu_to_le32(link_conf->beacon_int *
+ link_conf->dtim_period);
+
+ if (!link_conf->he_support || iwlwifi_mod_params.disable_11ax ||
+ (vif->type == NL80211_IFTYPE_STATION && !vif->cfg.assoc)) {
+ changes &= ~LINK_CONTEXT_MODIFY_HE_PARAMS;
+ goto send_cmd;
+ }
+
+ cmd.htc_trig_based_pkt_ext = link_conf->htc_trig_based_pkt_ext;
+
+ if (link_conf->uora_exists) {
+ cmd.rand_alloc_ecwmin =
+ link_conf->uora_ocw_range & 0x7;
+ cmd.rand_alloc_ecwmax =
+ (link_conf->uora_ocw_range >> 3) & 0x7;
+ }
+
+ /* TODO how to set ndp_fdbk_buff_th_exp? */
+
+ if (iwl_mvm_set_fw_mu_edca_params(mvm, mvmvif,
+ &cmd.trig_based_txf[0])) {
+ flags |= LINK_FLG_MU_EDCA_CW;
+ flags_mask |= LINK_FLG_MU_EDCA_CW;
+ }
+
+ if (link_conf->eht_puncturing && !iwlwifi_mod_params.disable_11be)
+ cmd.puncture_mask = cpu_to_le16(link_conf->eht_puncturing);
+ else
+ /* This flag can be set only if the MAC has eht support */
+ changes &= ~LINK_CONTEXT_MODIFY_EHT_PARAMS;
+
+ cmd.bss_color = link_conf->he_bss_color.color;
+
+ if (!link_conf->he_bss_color.enabled) {
+ flags |= LINK_FLG_BSS_COLOR_DIS;
+ flags_mask |= LINK_FLG_BSS_COLOR_DIS;
+ }
+
+ cmd.frame_time_rts_th = cpu_to_le16(link_conf->frame_time_rts_th);
+
+ /* Block 26-tone RU OFDMA transmissions */
+ if (link_info->he_ru_2mhz_block) {
+ flags |= LINK_FLG_RU_2MHZ_BLOCK;
+ flags_mask |= LINK_FLG_RU_2MHZ_BLOCK;
+ }
+
+ if (link_conf->nontransmitted) {
+ ether_addr_copy(cmd.ref_bssid_addr,
+ link_conf->transmitter_bssid);
+ cmd.bssid_index = link_conf->bssid_index;
+ }
+
+send_cmd:
+ cmd.modify_mask = cpu_to_le32(changes);
+ cmd.flags = cpu_to_le32(flags);
+ cmd.flags_mask = cpu_to_le32(flags_mask);
+
+ ret = iwl_mvm_link_cmd_send(mvm, &cmd, FW_CTXT_ACTION_MODIFY);
+ if (!ret && (changes & LINK_CONTEXT_MODIFY_ACTIVE))
+ link_info->active = active;
+
+ return ret;
+}
+
+int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id = link_conf->link_id;
+ struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
+ struct iwl_link_config_cmd cmd = {};
+ int ret;
+
+ if (WARN_ON(!link_info ||
+ link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID))
+ return -EINVAL;
+
+ cmd.link_id = cpu_to_le32(link_info->fw_link_id);
+ iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id);
+ link_info->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+
+ ret = iwl_mvm_link_cmd_send(mvm, &cmd, FW_CTXT_ACTION_REMOVE);
+
+ if (!ret)
+ if (iwl_mvm_sf_update(mvm, vif, true))
+ IWL_ERR(mvm, "Failed to update SF state\n");
+
+ return ret;
+}
+
+/* link should be deactivated before removal, so in most cases we need to
+ * perform these two operations together
+ */
+int iwl_mvm_disable_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ int ret;
+
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE, false);
+ if (ret)
+ return ret;
+
+ ret = iwl_mvm_remove_link(mvm, vif, link_conf);
+ if (ret)
+ return ret;
+
+ return ret;
+}
* For TVQM this will be overwritten later with the FW assigned
* queue value (when queue is enabled).
*/
- mvmvif->cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
+ mvmvif->deflink.cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
}
- mvmvif->bcast_sta.sta_id = IWL_MVM_INVALID_STA;
- mvmvif->mcast_sta.sta_id = IWL_MVM_INVALID_STA;
- mvmvif->ap_sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.bcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.mcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.ap_sta_id = IWL_MVM_INVALID_STA;
for (i = 0; i < NUM_IWL_MVM_SMPS_REQ; i++)
- mvmvif->smps_requests[i] = IEEE80211_SMPS_AUTOMATIC;
+ mvmvif->deflink.smps_requests[i] = IEEE80211_SMPS_AUTOMATIC;
return 0;
*ofdm_rates = ofdm;
}
-static void iwl_mvm_mac_ctxt_set_ht_flags(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct iwl_mac_ctx_cmd *cmd)
+void iwl_mvm_set_fw_basic_rates(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le32 *cck_rates, __le32 *ofdm_rates)
+{
+ struct ieee80211_chanctx_conf *chanctx;
+ u8 cck_ack_rates = 0, ofdm_ack_rates = 0;
+
+ rcu_read_lock();
+ chanctx = rcu_dereference(link_conf->chanctx_conf);
+ iwl_mvm_ack_rates(mvm, vif, chanctx ? chanctx->def.chan->band
+ : NL80211_BAND_2GHZ,
+ &cck_ack_rates, &ofdm_ack_rates);
+
+ rcu_read_unlock();
+
+ *cck_rates = cpu_to_le32((u32)cck_ack_rates);
+ *ofdm_rates = cpu_to_le32((u32)ofdm_ack_rates);
+}
+
+void iwl_mvm_set_fw_protection_flags(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le32 *protection_flags, u32 ht_flag,
+ u32 tgg_flag)
{
/* for both sta and ap, ht_operation_mode hold the protection_mode */
- u8 protection_mode = vif->bss_conf.ht_operation_mode &
+ u8 protection_mode = link_conf->ht_operation_mode &
IEEE80211_HT_OP_MODE_PROTECTION;
- /* The fw does not distinguish between ht and fat */
- u32 ht_flag = MAC_PROT_FLG_HT_PROT | MAC_PROT_FLG_FAT_PROT;
+ bool ht_enabled = !!(link_conf->ht_operation_mode &
+ IEEE80211_HT_OP_MODE_PROTECTION);
+
+ if (link_conf->use_cts_prot)
+ *protection_flags |= cpu_to_le32(tgg_flag);
+
+ IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n",
+ link_conf->use_cts_prot,
+ link_conf->ht_operation_mode);
+
+ if (!ht_enabled)
+ return;
IWL_DEBUG_RATE(mvm, "protection mode set to %d\n", protection_mode);
/*
break;
case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER:
case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED:
- cmd->protection_flags |= cpu_to_le32(ht_flag);
+ *protection_flags |= cpu_to_le32(ht_flag);
break;
case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ:
/* Protect when channel wider than 20MHz */
- if (vif->bss_conf.chandef.width > NL80211_CHAN_WIDTH_20)
- cmd->protection_flags |= cpu_to_le32(ht_flag);
+ if (link_conf->chandef.width > NL80211_CHAN_WIDTH_20)
+ *protection_flags |= cpu_to_le32(ht_flag);
break;
default:
IWL_ERR(mvm, "Illegal protection mode %d\n",
}
}
-static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct iwl_mac_ctx_cmd *cmd,
- const u8 *bssid_override,
- u32 action)
+void iwl_mvm_set_fw_qos_params(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct iwl_ac_qos *ac, __le32 *qos_flags)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct ieee80211_chanctx_conf *chanctx;
- bool ht_enabled = !!(vif->bss_conf.ht_operation_mode &
- IEEE80211_HT_OP_MODE_PROTECTION);
- u8 cck_ack_rates, ofdm_ack_rates;
- const u8 *bssid = bssid_override ?: vif->bss_conf.bssid;
int i;
- cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
- mvmvif->color));
- cmd->action = cpu_to_le32(action);
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ u8 txf = iwl_mvm_mac_ac_to_tx_fifo(mvm, i);
+ u8 ucode_ac = iwl_mvm_mac80211_ac_to_ucode_ac(i);
+
+ ac[ucode_ac].cw_min =
+ cpu_to_le16(mvmvif->deflink.queue_params[i].cw_min);
+ ac[ucode_ac].cw_max =
+ cpu_to_le16(mvmvif->deflink.queue_params[i].cw_max);
+ ac[ucode_ac].edca_txop =
+ cpu_to_le16(mvmvif->deflink.queue_params[i].txop * 32);
+ ac[ucode_ac].aifsn = mvmvif->deflink.queue_params[i].aifs;
+ ac[ucode_ac].fifos_mask = BIT(txf);
+ }
+
+ if (link_conf->qos)
+ *qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA);
+
+ if (link_conf->chandef.width != NL80211_CHAN_WIDTH_20_NOHT)
+ *qos_flags |= cpu_to_le32(MAC_QOS_FLG_TGN);
+}
+
+int iwl_mvm_get_mac_type(struct ieee80211_vif *vif)
+{
+ u32 mac_type = FW_MAC_TYPE_BSS_STA;
switch (vif->type) {
case NL80211_IFTYPE_STATION:
if (vif->p2p)
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_P2P_STA);
+ mac_type = FW_MAC_TYPE_P2P_STA;
else
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_BSS_STA);
+ mac_type = FW_MAC_TYPE_BSS_STA;
break;
case NL80211_IFTYPE_AP:
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_GO);
+ mac_type = FW_MAC_TYPE_GO;
break;
case NL80211_IFTYPE_MONITOR:
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_LISTENER);
+ mac_type = FW_MAC_TYPE_LISTENER;
break;
case NL80211_IFTYPE_P2P_DEVICE:
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_P2P_DEVICE);
+ mac_type = FW_MAC_TYPE_P2P_DEVICE;
break;
case NL80211_IFTYPE_ADHOC:
- cmd->mac_type = cpu_to_le32(FW_MAC_TYPE_IBSS);
+ mac_type = FW_MAC_TYPE_IBSS;
break;
default:
WARN_ON_ONCE(1);
}
+ return mac_type;
+}
+
+static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct iwl_mac_ctx_cmd *cmd,
+ const u8 *bssid_override,
+ u32 action)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ const u8 *bssid = bssid_override ?: vif->bss_conf.bssid;
+ u32 ht_flag;
+
+ cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
+ mvmvif->color));
+ cmd->action = cpu_to_le32(action);
+ cmd->mac_type = cpu_to_le32(iwl_mvm_get_mac_type(vif));
cmd->tsf_id = cpu_to_le32(mvmvif->tsf_id);
else
eth_broadcast_addr(cmd->bssid_addr);
- rcu_read_lock();
- chanctx = rcu_dereference(vif->bss_conf.chanctx_conf);
- iwl_mvm_ack_rates(mvm, vif, chanctx ? chanctx->def.chan->band
- : NL80211_BAND_2GHZ,
- &cck_ack_rates, &ofdm_ack_rates);
- rcu_read_unlock();
-
- cmd->cck_rates = cpu_to_le32((u32)cck_ack_rates);
- cmd->ofdm_rates = cpu_to_le32((u32)ofdm_ack_rates);
+ iwl_mvm_set_fw_basic_rates(mvm, vif, &vif->bss_conf, &cmd->cck_rates,
+ &cmd->ofdm_rates);
cmd->cck_short_preamble =
cpu_to_le32(vif->bss_conf.use_short_preamble ?
cmd->filter_flags = 0;
- for (i = 0; i < IEEE80211_NUM_ACS; i++) {
- u8 txf = iwl_mvm_mac_ac_to_tx_fifo(mvm, i);
- u8 ucode_ac = iwl_mvm_mac80211_ac_to_ucode_ac(i);
-
- cmd->ac[ucode_ac].cw_min =
- cpu_to_le16(mvmvif->queue_params[i].cw_min);
- cmd->ac[ucode_ac].cw_max =
- cpu_to_le16(mvmvif->queue_params[i].cw_max);
- cmd->ac[ucode_ac].edca_txop =
- cpu_to_le16(mvmvif->queue_params[i].txop * 32);
- cmd->ac[ucode_ac].aifsn = mvmvif->queue_params[i].aifs;
- cmd->ac[ucode_ac].fifos_mask = BIT(txf);
- }
-
- if (vif->bss_conf.qos)
- cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA);
+ iwl_mvm_set_fw_qos_params(mvm, vif, &vif->bss_conf, &cmd->ac[0],
+ &cmd->qos_flags);
- if (vif->bss_conf.use_cts_prot)
- cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT);
-
- IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n",
- vif->bss_conf.use_cts_prot,
- vif->bss_conf.ht_operation_mode);
- if (vif->bss_conf.chandef.width != NL80211_CHAN_WIDTH_20_NOHT)
- cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_TGN);
- if (ht_enabled)
- iwl_mvm_mac_ctxt_set_ht_flags(mvm, vif, cmd);
+ /* The fw does not distinguish between ht and fat */
+ ht_flag = MAC_PROT_FLG_HT_PROT | MAC_PROT_FLG_FAT_PROT;
+ iwl_mvm_set_fw_protection_flags(mvm, vif, &vif->bss_conf,
+ &cmd->protection_flags,
+ ht_flag, MAC_PROT_FLG_TGG_PROTECT);
}
static int iwl_mvm_mac_ctxt_send_cmd(struct iwl_mvm *mvm,
int ret = iwl_mvm_send_cmd_pdu(mvm, MAC_CONTEXT_CMD, 0,
sizeof(*cmd), cmd);
if (ret)
- IWL_ERR(mvm, "Failed to send MAC context (action:%d): %d\n",
+ IWL_ERR(mvm, "Failed to send MAC_CONTEXT_CMD (action:%d): %d\n",
le32_to_cpu(cmd->action), ret);
return ret;
}
+void iwl_mvm_set_fw_dtim_tbtt(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le64 *dtim_tsf, __le32 *dtim_time,
+ __le32 *assoc_beacon_arrive_time)
+{
+ u32 dtim_offs;
+
+ /*
+ * The DTIM count counts down, so when it is N that means N
+ * more beacon intervals happen until the DTIM TBTT. Therefore
+ * add this to the current time. If that ends up being in the
+ * future, the firmware will handle it.
+ *
+ * Also note that the system_timestamp (which we get here as
+ * "sync_device_ts") and TSF timestamp aren't at exactly the
+ * same offset in the frame -- the TSF is at the first symbol
+ * of the TSF, the system timestamp is at signal acquisition
+ * time. This means there's an offset between them of at most
+ * a few hundred microseconds (24 * 8 bits + PLCP time gives
+ * 384us in the longest case), this is currently not relevant
+ * as the firmware wakes up around 2ms before the TBTT.
+ */
+ dtim_offs = link_conf->sync_dtim_count *
+ link_conf->beacon_int;
+ /* convert TU to usecs */
+ dtim_offs *= 1024;
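+	/* e.g. a sync_dtim_count of 2 with a 100 TU beacon interval
+	 * gives an offset of 200 TU = 204800 usecs
+	 */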
+
+ *dtim_tsf =
+ cpu_to_le64(link_conf->sync_tsf + dtim_offs);
+ *dtim_time =
+ cpu_to_le32(link_conf->sync_device_ts + dtim_offs);
+ *assoc_beacon_arrive_time =
+ cpu_to_le32(link_conf->sync_device_ts);
+
+ IWL_DEBUG_INFO(mvm, "DTIM TBTT is 0x%llx/0x%x, offset %d\n",
+ le64_to_cpu(*dtim_tsf),
+ le32_to_cpu(*dtim_time),
+ dtim_offs);
+}
+
+__le32 iwl_mvm_mac_ctxt_cmd_p2p_sta_get_oppps_ctwin(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
+{
+ struct ieee80211_p2p_noa_attr *noa =
+ &vif->bss_conf.p2p_noa_attr;
+
+ return cpu_to_le32(noa->oppps_ctwindow &
+ IEEE80211_P2P_OPPPS_CTWINDOW_MASK);
+}
+
+__le32 iwl_mvm_mac_ctxt_cmd_sta_get_twt_policy(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
+{
+ __le32 twt_policy = cpu_to_le32(0);
+
+ if (vif->bss_conf.twt_requester && IWL_MVM_USE_TWT)
+ twt_policy |= cpu_to_le32(TWT_SUPPORTED);
+ if (vif->bss_conf.twt_protected)
+ twt_policy |= cpu_to_le32(PROTECTED_TWT_SUPPORTED);
+ if (vif->bss_conf.twt_broadcast)
+ twt_policy |= cpu_to_le32(BROADCAST_TWT_SUPPORTED);
+
+ return twt_policy;
+}
+
static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
u32 action, bool force_assoc_off,
cmd.filter_flags |= cpu_to_le32(MAC_FILTER_ACCEPT_GRP);
if (vif->p2p) {
- struct ieee80211_p2p_noa_attr *noa =
- &vif->bss_conf.p2p_noa_attr;
+ cmd.p2p_sta.ctwin =
+ iwl_mvm_mac_ctxt_cmd_p2p_sta_get_oppps_ctwin(mvm, vif);
- cmd.p2p_sta.ctwin = cpu_to_le32(noa->oppps_ctwindow &
- IEEE80211_P2P_OPPPS_CTWINDOW_MASK);
ctxt_sta = &cmd.p2p_sta.sta;
} else {
ctxt_sta = &cmd.sta;
if (vif->cfg.assoc && vif->bss_conf.dtim_period &&
!force_assoc_off) {
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- u32 dtim_offs;
- /*
- * The DTIM count counts down, so when it is N that means N
- * more beacon intervals happen until the DTIM TBTT. Therefore
- * add this to the current time. If that ends up being in the
- * future, the firmware will handle it.
- *
- * Also note that the system_timestamp (which we get here as
- * "sync_device_ts") and TSF timestamp aren't at exactly the
- * same offset in the frame -- the TSF is at the first symbol
- * of the TSF, the system timestamp is at signal acquisition
- * time. This means there's an offset between them of at most
- * a few hundred microseconds (24 * 8 bits + PLCP time gives
- * 384us in the longest case), this is currently not relevant
- * as the firmware wakes up around 2ms before the TBTT.
- */
- dtim_offs = vif->bss_conf.sync_dtim_count *
- vif->bss_conf.beacon_int;
- /* convert TU to usecs */
- dtim_offs *= 1024;
-
- ctxt_sta->dtim_tsf =
- cpu_to_le64(vif->bss_conf.sync_tsf + dtim_offs);
- ctxt_sta->dtim_time =
- cpu_to_le32(vif->bss_conf.sync_device_ts + dtim_offs);
- ctxt_sta->assoc_beacon_arrive_time =
- cpu_to_le32(vif->bss_conf.sync_device_ts);
-
- IWL_DEBUG_INFO(mvm, "DTIM TBTT is 0x%llx/0x%x, offset %d\n",
- le64_to_cpu(ctxt_sta->dtim_tsf),
- le32_to_cpu(ctxt_sta->dtim_time),
- dtim_offs);
+ iwl_mvm_set_fw_dtim_tbtt(mvm, vif, &vif->bss_conf,
+ &ctxt_sta->dtim_tsf,
+ &ctxt_sta->dtim_time,
+ &ctxt_sta->assoc_beacon_arrive_time);
ctxt_sta->is_assoc = cpu_to_le32(1);
if (vif->bss_conf.he_support && !iwlwifi_mod_params.disable_11ax) {
cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_11AX);
- if (vif->bss_conf.twt_requester && IWL_MVM_USE_TWT)
- ctxt_sta->data_policy |= cpu_to_le32(TWT_SUPPORTED);
- if (vif->bss_conf.twt_protected)
- ctxt_sta->data_policy |=
- cpu_to_le32(PROTECTED_TWT_SUPPORTED);
- if (vif->bss_conf.twt_broadcast)
- ctxt_sta->data_policy |=
- cpu_to_le32(BROADCAST_TWT_SUPPORTED);
+ ctxt_sta->data_policy |=
+ iwl_mvm_mac_ctxt_cmd_sta_get_twt_policy(mvm, vif);
}
u32 action)
{
struct iwl_mac_ctx_cmd cmd = {};
- u32 tfd_queue_msk = BIT(mvm->snif_queue);
+ u32 tfd_queue_msk = 0;
int ret;
WARN_ON(vif->type != NL80211_IFTYPE_MONITOR);
MAC_FILTER_ACCEPT_GRP);
ieee80211_hw_set(mvm->hw, RX_INCLUDES_FCS);
+ /*
+	 * The queue mask is only relevant for the old TX API, and
+	 * mvm->snif_queue isn't set here (it's still set to
+	 * IWL_MVM_INVALID_QUEUE, so taking BIT() of it would be UB)
+ */
+ if (!iwl_mvm_has_new_tx_api(mvm))
+ tfd_queue_msk = BIT(mvm->snif_queue);
+
/* Allocate sniffer station */
ret = iwl_mvm_allocate_int_sta(mvm, &mvm->snif_sta, tfd_queue_msk,
vif->type, IWL_STA_GENERAL_PURPOSE);
data->go_active = true;
}
-static int iwl_mvm_mac_ctxt_cmd_p2p_device(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- u32 action)
+__le32 iwl_mac_ctxt_p2p_dev_has_extended_disc(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
{
- struct iwl_mac_ctx_cmd cmd = {};
struct iwl_mvm_go_iterator_data data = {};
- WARN_ON(vif->type != NL80211_IFTYPE_P2P_DEVICE);
-
- iwl_mvm_mac_ctxt_cmd_common(mvm, vif, &cmd, NULL, action);
-
- /* Override the filter flags to accept only probe requests */
- cmd.filter_flags = cpu_to_le32(MAC_FILTER_IN_PROBE_REQUEST);
-
/*
* This flag should be set to true when the P2P Device is
* discoverable and there is at least another active P2P GO. Settings
mvm->hw, IEEE80211_IFACE_ITER_RESUME_ALL,
iwl_mvm_go_iterator, &data);
- cmd.p2p_dev.is_disc_extended = cpu_to_le32(data.go_active ? 1 : 0);
+ return cpu_to_le32(data.go_active ? 1 : 0);
+}
+
+static int iwl_mvm_mac_ctxt_cmd_p2p_device(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action)
+{
+ struct iwl_mac_ctx_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_P2P_DEVICE);
+
+ iwl_mvm_mac_ctxt_cmd_common(mvm, vif, &cmd, NULL, action);
+
+ cmd.p2p_dev.is_disc_extended =
+ iwl_mac_ctxt_p2p_dev_has_extended_disc(mvm, vif);
+
+ /* Override the filter flags to accept only probe requests */
+ cmd.filter_flags = cpu_to_le32(MAC_FILTER_IN_PROBE_REQUEST);
+
return iwl_mvm_mac_ctxt_send_cmd(mvm, &cmd);
}
/* Set up TX command fields */
tx->len = cpu_to_le16((u16)beacon->len);
- tx->sta_id = mvmvif->bcast_sta.sta_id;
+ tx->sta_id = mvmvif->deflink.bcast_sta.sta_id;
tx->life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE);
tx_flags = TX_CMD_FLG_SEQ_CTL | TX_CMD_FLG_TSF;
tx_flags |=
static int iwl_mvm_mac_ctxt_send_beacon_v9(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
- struct sk_buff *beacon)
+ struct sk_buff *beacon,
+ struct ieee80211_bss_conf *link_conf)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(beacon);
/* Enable FILS on PSC channels only */
rcu_read_lock();
- ctx = rcu_dereference(vif->bss_conf.chanctx_conf);
+ ctx = rcu_dereference(link_conf->chanctx_conf);
channel = ieee80211_frequency_to_channel(ctx->def.chan->center_freq);
WARN_ON(channel == 0);
if (cfg80211_channel_is_psc(ctx->def.chan) &&
beacon_cmd.flags = cpu_to_le16(flags);
beacon_cmd.byte_cnt = cpu_to_le16((u16)beacon->len);
- beacon_cmd.template_id = cpu_to_le32((u32)mvmvif->id);
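+	/* BEACON_TEMPLATE_CMD versions above 12 identify the beacon by
+	 * the FW link id, older versions still use the MAC (vif) id
+	 */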
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, BEACON_TEMPLATE_CMD, 0) > 12)
+ beacon_cmd.link_id =
+ cpu_to_le32(mvmvif->link[link_conf->link_id]->fw_link_id);
+ else
+ beacon_cmd.link_id = cpu_to_le32((u32)mvmvif->id);
if (vif->type == NL80211_IFTYPE_AP)
iwl_mvm_mac_ctxt_set_tim(mvm, &beacon_cmd.tim_idx,
int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
- struct sk_buff *beacon)
+ struct sk_buff *beacon,
+ struct ieee80211_bss_conf *link_conf)
{
if (WARN_ON(!beacon))
return -EINVAL;
if (fw_has_api(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_API_NEW_BEACON_TEMPLATE))
- return iwl_mvm_mac_ctxt_send_beacon_v9(mvm, vif, beacon);
+ return iwl_mvm_mac_ctxt_send_beacon_v9(mvm, vif, beacon,
+ link_conf);
return iwl_mvm_mac_ctxt_send_beacon_v7(mvm, vif, beacon);
}
/* The beacon template for the AP/GO/IBSS has changed and needs update */
int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif)
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
{
struct sk_buff *beacon;
int ret;
WARN_ON(vif->type != NL80211_IFTYPE_AP &&
vif->type != NL80211_IFTYPE_ADHOC);
- beacon = ieee80211_beacon_get_template(mvm->hw, vif, NULL, 0);
+ beacon = ieee80211_beacon_get_template(mvm->hw, vif, NULL,
+ link_conf->link_id);
if (!beacon)
return -ENOMEM;
}
#endif
- ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon);
+ ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon, link_conf);
dev_kfree_skb(beacon);
return ret;
}
}
/*
+ * Fill the filter flags for mac context of type AP or P2P GO.
+ */
+void iwl_mvm_mac_ctxt_cmd_ap_set_filter_flags(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ __le32 *filter_flags,
+ int accept_probe_req_flag,
+ int accept_beacon_flag)
+{
+ /*
+	 * In AP mode, pass probe requests and beacons from other APs
+	 * (needed for HT protection); when there are no associated
+	 * stations, don't ask the FW to pass beacons, to prevent
+	 * unnecessary wake-ups.
+ */
+ *filter_flags |= cpu_to_le32(accept_probe_req_flag);
+ if (mvmvif->ap_assoc_sta_count || !mvm->drop_bcn_ap_mode) {
+ *filter_flags |= cpu_to_le32(accept_beacon_flag);
+ IWL_DEBUG_HC(mvm, "Asking FW to pass beacons\n");
+ } else {
+ IWL_DEBUG_HC(mvm, "No need to receive beacons\n");
+ }
+}
+
+/*
 * Fill the specific data for mac context of type AP or P2P GO
*/
static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm,
/* in AP mode, the MCAST FIFO takes the EDCA params from VO */
cmd->ac[IWL_MVM_TX_FIFO_VO].fifos_mask |= BIT(IWL_MVM_TX_FIFO_MCAST);
- /*
- * in AP mode, pass probe requests and beacons from other APs
- * (needed for ht protection); when there're no any associated
- * station don't ask FW to pass beacons to prevent unnecessary
- * wake-ups.
- */
- cmd->filter_flags |= cpu_to_le32(MAC_FILTER_IN_PROBE_REQUEST);
- if (mvmvif->ap_assoc_sta_count || !mvm->drop_bcn_ap_mode) {
- cmd->filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON);
- IWL_DEBUG_HC(mvm, "Asking FW to pass beacons\n");
- } else {
- IWL_DEBUG_HC(mvm, "No need to receive beacons\n");
- }
+ iwl_mvm_mac_ctxt_cmd_ap_set_filter_flags(mvm, mvmvif,
+ &cmd->filter_flags,
+ MAC_FILTER_IN_PROBE_REQUEST,
+ MAC_FILTER_IN_BEACON);
ctxt_ap->bi = cpu_to_le32(vif->bss_conf.beacon_int);
ctxt_ap->dtim_interval = cpu_to_le32(vif->bss_conf.beacon_int *
if (!fw_has_api(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_API_STA_TYPE))
- ctxt_ap->mcast_qid = cpu_to_le32(mvmvif->cab_queue);
+ ctxt_ap->mcast_qid = cpu_to_le32(mvmvif->deflink.cab_queue);
/*
* Only set the beacon time when the MAC is being added, when we
mvmvif->color));
cmd.action = cpu_to_le32(FW_CTXT_ACTION_REMOVE);
- ret = iwl_mvm_send_cmd_pdu(mvm, MAC_CONTEXT_CMD, 0,
- sizeof(cmd), &cmd);
- if (ret) {
- IWL_ERR(mvm, "Failed to remove MAC context: %d\n", ret);
+ ret = iwl_mvm_mac_ctxt_send_cmd(mvm, &cmd);
+ if (ret)
return ret;
- }
mvmvif->uploaded = false;
if (!ieee80211_beacon_cntdwn_is_complete(csa_vif)) {
int c = ieee80211_beacon_update_cntdwn(csa_vif);
- iwl_mvm_mac_ctxt_beacon_changed(mvm, csa_vif);
+ iwl_mvm_mac_ctxt_beacon_changed(mvm, csa_vif,
+ &csa_vif->bss_conf);
if (csa_vif->p2p &&
!iwl_mvm_te_scheduled(&mvmvif->time_event_data) && gp2 &&
tx_success) {
struct ieee80211_vif *vif;
u32 id = le32_to_cpu(mb->mac_id);
union iwl_dbg_tlv_tp_data tp_data = { .fw_pkt = pkt };
+ u32 mac_type;
IWL_DEBUG_INFO(mvm,
"missed bcn mac_id=%u, consecutive=%u (%u, %u, %u)\n",
if (!vif)
goto out;
+ mac_type = iwl_mvm_get_mac_type(vif);
+
+	IWL_DEBUG_INFO(mvm, "missed beacon mac_type=%u\n", mac_type);
+
+ mvm->trans->dbg.dump_file_name_ext_valid = true;
+ snprintf(mvm->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+ "MacId_%d_MacType_%d", id, mac_type);
+
rx_missed_bcon = le32_to_cpu(mb->consec_missed_beacons);
rx_missed_bcon_since_rx =
le32_to_cpu(mb->consec_missed_beacons_since_last_rx);
sizeof(struct ieee80211_p2p_noa_desc) + 2)
new_data->noa_len -= sizeof(struct ieee80211_p2p_noa_desc);
- old_data = rcu_dereference_protected(mvmvif->probe_resp_data,
- lockdep_is_held(&mvmvif->mvm->mutex));
- rcu_assign_pointer(mvmvif->probe_resp_data, new_data);
+ old_data = rcu_dereference_protected(mvmvif->deflink.probe_resp_data,
+ lockdep_is_held(&mvmvif->mvm->mutex));
+ rcu_assign_pointer(mvmvif->deflink.probe_resp_data, new_data);
if (old_data)
kfree_rcu(old_data, rcu_head);
#include "fw/error-dump.h"
#include "iwl-prph.h"
#include "iwl-nvm-parse.h"
+#include "time-sync.h"
static const struct ieee80211_iface_limit iwl_mvm_limits[] = {
{
return ret;
}
+/* Each capability added here should also be added to tm_if_types_ext_capa_sta */
static const u8 he_if_types_ext_capa_sta[] = {
[0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
[2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT,
[7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
};
-static const struct wiphy_iftype_ext_capab he_iftypes_ext_capa[] = {
+static const u8 tm_if_types_ext_capa_sta[] = {
+ [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING,
+ [2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT |
+ WLAN_EXT_CAPA3_TIMING_MEASUREMENT_SUPPORT,
+ [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF,
+ [9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT,
+};
+
+/* Additional interface types for which extended capabilities are
+ * specified separately
+ */
+static const struct wiphy_iftype_ext_capab add_iftypes_ext_capa[] = {
{
.iftype = NL80211_IFTYPE_STATION,
.extended_capabilities = he_if_types_ext_capa_sta,
.extended_capabilities_mask = he_if_types_ext_capa_sta,
.extended_capabilities_len = sizeof(he_if_types_ext_capa_sta),
},
+ {
+ .iftype = NL80211_IFTYPE_STATION,
+ .extended_capabilities = tm_if_types_ext_capa_sta,
+ .extended_capabilities_mask = tm_if_types_ext_capa_sta,
+ .extended_capabilities_len = sizeof(tm_if_types_ext_capa_sta),
+ /* relevant only if EHT is supported */
+ .eml_capabilities = IEEE80211_EML_CAP_EMLSR_SUPP,
+ },
};
-static int
-iwl_mvm_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
+int iwl_mvm_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
*tx_ant = iwl_mvm_get_valid_tx_ant(mvm);
bool unified = fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
#endif
+ u32 sec_key_id = WIDE_ID(DATA_PATH_GROUP, SEC_KEY_CMD);
+ u8 sec_key_ver = iwl_fw_lookup_cmd_ver(mvm->fw, sec_key_id, 0);
/* Tell mac80211 our characteristics */
ieee80211_hw_set(hw, SIGNAL_DBM);
ieee80211_hw_set(hw, SUPPORTS_PS);
ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
ieee80211_hw_set(hw, AMPDU_AGGREGATION);
- ieee80211_hw_set(hw, TIMING_BEACON_ONLY);
ieee80211_hw_set(hw, CONNECTION_MONITOR);
ieee80211_hw_set(hw, CHANCTX_STA_CSA);
ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
ieee80211_hw_set(hw, SUPPORTS_CLONED_SKBS);
ieee80211_hw_set(hw, SUPPORTS_AMSDU_IN_AMPDU);
ieee80211_hw_set(hw, NEEDS_UNIQUE_STA_ADDR);
- ieee80211_hw_set(hw, DEAUTH_NEED_MGD_TX_PREP);
ieee80211_hw_set(hw, SUPPORTS_VHT_EXT_NSS_BW);
ieee80211_hw_set(hw, BUFF_MMPDU_TXQ);
ieee80211_hw_set(hw, STA_MMPDU_TXQ);
+
+	/* With the MLD FW API the FW tracks timing by itself, so no
+	 * timing input is needed from the host
+ */
+ if (!mvm->mld_api_is_used)
+ ieee80211_hw_set(hw, TIMING_BEACON_ONLY);
+
+ /* We should probably have this, but mac80211
+ * currently doesn't support it for MLO.
+ */
+ if (!(hw->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO))
+ ieee80211_hw_set(hw, DEAUTH_NEED_MGD_TX_PREP);
+
/*
* On older devices, enabling TX A-MSDU occasionally leads to
* something getting messed up, the command read from the FIFO
hw->wiphy->pmsr_capa = &iwl_mvm_pmsr_capa;
}
- if (fw_has_capa(&mvm->fw->ucode_capa,
- IWL_UCODE_TLV_CAPA_BIGTK_SUPPORT))
+ if (sec_key_ver &&
+ fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_BIGTK_TX_SUPPORT))
+ wiphy_ext_feature_set(hw->wiphy,
+ NL80211_EXT_FEATURE_BEACON_PROTECTION);
+ else if (fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_BIGTK_SUPPORT))
wiphy_ext_feature_set(hw->wiphy,
NL80211_EXT_FEATURE_BEACON_PROTECTION_CLIENT);
+ if (fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_TIME_SYNC_BOTH_FTM_TM))
+ hw->wiphy->hw_timestamp_max_peers = 1;
+
ieee80211_hw_set(hw, SINGLE_SCAN_ON_ALL_BANDS);
hw->wiphy->features |=
NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
NL80211_EXT_FEATURE_OCE_PROBE_REQ_DEFERRAL_SUPPRESSION);
}
+ hw->wiphy->iftype_ext_capab = NULL;
+ hw->wiphy->num_iftype_ext_capab = 0;
+
if (mvm->nvm_data->sku_cap_11ax_enable &&
!iwlwifi_mod_params.disable_11ax) {
- hw->wiphy->iftype_ext_capab = he_iftypes_ext_capa;
+ hw->wiphy->iftype_ext_capab = add_iftypes_ext_capa;
hw->wiphy->num_iftype_ext_capab =
- ARRAY_SIZE(he_iftypes_ext_capa);
+ ARRAY_SIZE(add_iftypes_ext_capa) - 1;
ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
ieee80211_hw_set(hw, SUPPORTS_ONLY_HE_MULTI_BSSID);
}
+ if (iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(DATA_PATH_GROUP,
+ WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD),
+ IWL_FW_CMD_VER_UNKNOWN) >= 1) {
+ IWL_DEBUG_INFO(mvm->trans, "Timing measurement supported\n");
+
+ if (!hw->wiphy->iftype_ext_capab) {
+ hw->wiphy->num_iftype_ext_capab = 1;
+ hw->wiphy->iftype_ext_capab = add_iftypes_ext_capa +
+ ARRAY_SIZE(add_iftypes_ext_capa) - 1;
+ } else {
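+			/* the TM entry is a superset of the HE entry,
+			 * so use it instead
+			 */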
+ hw->wiphy->iftype_ext_capab = add_iftypes_ext_capa + 1;
+ }
+ }
+
mvm->rts_threshold = IEEE80211_MAX_RTS_THRESHOLD;
#ifdef CONFIG_PM_SLEEP
ieee80211_free_txskb(mvm->hw, skb);
}
-static void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
- struct ieee80211_tx_control *control,
- struct sk_buff *skb)
+void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
+ struct ieee80211_tx_control *control, struct sk_buff *skb)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct ieee80211_sta *sta = control->sta;
struct ieee80211_hdr *hdr = (void *)skb->data;
bool offchannel = IEEE80211_SKB_CB(skb)->flags &
IEEE80211_TX_CTL_TX_OFFCHAN;
+ u32 link_id = u32_get_bits(info->control.flags,
+ IEEE80211_TX_CTRL_MLO_LINK);
+ struct ieee80211_sta *tmp_sta = sta;
if (iwl_mvm_is_radio_killed(mvm)) {
IWL_DEBUG_DROP(mvm, "Dropping - RF/CT KILL\n");
!offchannel) {
struct iwl_mvm_vif *mvmvif =
iwl_mvm_vif_from_mac80211(info->control.vif);
- u8 ap_sta_id = READ_ONCE(mvmvif->ap_sta_id);
+ u8 ap_sta_id = READ_ONCE(mvmvif->deflink.ap_sta_id);
if (ap_sta_id < mvm->fw->ucode_capa.num_stations) {
/* mac80211 holds rcu read lock */
}
}
+ if (tmp_sta && !sta && link_id != IEEE80211_LINK_UNSPECIFIED &&
+ !ieee80211_is_probe_resp(hdr->frame_control)) {
+ /* translate MLD addresses to LINK addresses */
+ struct ieee80211_link_sta *link_sta =
+ rcu_dereference(tmp_sta->link[link_id]);
+ struct ieee80211_bss_conf *link_conf =
+ rcu_dereference(info->control.vif->link_conf[link_id]);
+ struct ieee80211_mgmt *mgmt;
+
+ if (WARN_ON(!link_sta || !link_conf))
+ goto drop;
+
+ /* if sta is NULL, the frame is a management frame */
+ mgmt = (void *)hdr;
+ memcpy(mgmt->da, link_sta->addr, ETH_ALEN);
+ memcpy(mgmt->sa, link_conf->addr, ETH_ALEN);
+ memcpy(mgmt->bssid, link_conf->bssid, ETH_ALEN);
+ }
+
iwl_mvm_tx_skb(mvm, skb, sta);
return;
drop:
rcu_read_unlock();
}
-static void iwl_mvm_mac_wake_tx_queue(struct ieee80211_hw *hw,
- struct ieee80211_txq *txq)
+void iwl_mvm_mac_wake_tx_queue(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_txq *mvmtxq = iwl_mvm_txq_from_mac80211(txq);
}
}
-static int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_ampdu_params *params)
+int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
switch (action) {
case IEEE80211_AMPDU_RX_START:
- if (iwl_mvm_vif_from_mac80211(vif)->ap_sta_id ==
- iwl_mvm_sta_from_mac80211(sta)->sta_id) {
+ if (iwl_mvm_vif_from_mac80211(vif)->deflink.ap_sta_id ==
+ iwl_mvm_sta_from_mac80211(sta)->deflink.sta_id) {
struct iwl_mvm_vif *mvmvif;
u16 macid = iwl_mvm_vif_from_mac80211(vif)->id;
struct iwl_mvm_tcm_mac *mdata = &mvm->tcm.data[macid];
{
struct iwl_mvm *mvm = data;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_probe_resp_data *probe_data;
+ unsigned int link_id;
mvmvif->uploaded = false;
- mvmvif->ap_sta_id = IWL_MVM_INVALID_STA;
spin_lock_bh(&mvm->time_event_lock);
iwl_mvm_te_clear_data(mvm, &mvmvif->time_event_data);
spin_unlock_bh(&mvm->time_event_lock);
- mvmvif->phy_ctxt = NULL;
memset(&mvmvif->bf_data, 0, sizeof(mvmvif->bf_data));
- memset(&mvmvif->probe_resp_data, 0, sizeof(mvmvif->probe_resp_data));
+
+ for_each_mvm_vif_valid_link(mvmvif, link_id) {
+ mvmvif->link[link_id]->ap_sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->link[link_id]->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+ mvmvif->link[link_id]->phy_ctxt = NULL;
+ mvmvif->link[link_id]->active = 0;
+ }
+
+ probe_data = rcu_dereference_protected(mvmvif->deflink.probe_resp_data,
+ lockdep_is_held(&mvm->mutex));
+ if (probe_data)
+ kfree_rcu(probe_data, rcu_head);
+ RCU_INIT_POINTER(mvmvif->deflink.probe_resp_data, NULL);
}
static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm)
return ret;
}
-static int iwl_mvm_mac_start(struct ieee80211_hw *hw)
+int iwl_mvm_mac_start(struct ieee80211_hw *hw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
mutex_unlock(&mvm->mutex);
}
-static void
-iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw,
- enum ieee80211_reconfig_type reconfig_type)
+void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw,
+ enum ieee80211_reconfig_type reconfig_type)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
}
}
-static void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
+void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
cancel_work_sync(&mvm->async_handlers_wk);
}
-static struct iwl_mvm_phy_ctxt *iwl_mvm_get_free_phy_ctxt(struct iwl_mvm *mvm)
+struct iwl_mvm_phy_ctxt *iwl_mvm_get_free_phy_ctxt(struct iwl_mvm *mvm)
{
u16 i;
return NULL;
}
-static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
- s16 tx_power)
+int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ s16 tx_power)
{
u32 cmd_id = REDUCE_TX_POWER_CMD;
int len;
return iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0, len, &cmd);
}
-static int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mvmvif->csa_bcn_pending = false;
mvmsta = iwl_mvm_sta_from_staid_protected(mvm,
- mvmvif->ap_sta_id);
+ mvmvif->deflink.ap_sta_id);
if (WARN_ON(!mvmsta)) {
ret = -EIO;
}
iwl_mvm_sta_modify_disable_tx(mvm, mvmsta, false);
-
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
+ if (mvm->mld_api_is_used)
+ iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false);
+ else
+ iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
if (!fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD)) {
return ret;
}
-static void iwl_mvm_abort_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+void iwl_mvm_abort_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
iwl_mvm_post_channel_switch(hw, vif);
}
-static void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk)
+void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk)
{
struct iwl_mvm_vif *mvmvif;
struct ieee80211_vif *vif;
ieee80211_chswitch_done(vif, false);
}
-static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+static u8
+iwl_mvm_chandef_get_primary_80(struct cfg80211_chan_def *chandef)
+{
+ int data_start;
+ int control_start;
+ int bw;
+
+ if (chandef->width == NL80211_CHAN_WIDTH_320)
+ bw = 320;
+ else if (chandef->width == NL80211_CHAN_WIDTH_160)
+ bw = 160;
+ else
+ return 0;
+
+ /* data is bw wide so the start is half the width */
+ data_start = chandef->center_freq1 - bw / 2;
+ /* control is 20Mhz width */
+ control_start = chandef->chan->center_freq - 10;
+
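+	/* index of the 80 MHz chunk that holds the control channel:
+	 * 0..1 for 160 MHz, 0..3 for 320 MHz
+	 */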
+ return (control_start - data_start) / 80;
+}
+
+/*
+ * Returns true if adding the interface is done
+ * (either with success or failure)
+ *
+ * FIXME: remove this again and merge it in
+ */
+static bool iwl_mvm_mac_add_interface_common(struct iwl_mvm *mvm,
+ struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ int *ret)
{
- struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret;
+
+ lockdep_assert_held(&mvm->mutex);
mvmvif->mvm = mvm;
- RCU_INIT_POINTER(mvmvif->probe_resp_data, NULL);
+
+ /* the first link always points to the default one */
+ mvmvif->link[0] = &mvmvif->deflink;
/*
* Not much to do here. The stack will not allow interface
* don't really have to check the types.
*/
- mutex_lock(&mvm->mutex);
-
/* make sure that beacon statistics don't go backwards with FW reset */
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
- mvmvif->beacon_stats.accu_num_beacons +=
- mvmvif->beacon_stats.num_beacons;
+ mvmvif->deflink.beacon_stats.accu_num_beacons +=
+ mvmvif->deflink.beacon_stats.num_beacons;
/* Allocate resources for the MAC context, and add it to the fw */
- ret = iwl_mvm_mac_ctxt_init(mvm, vif);
- if (ret)
- goto out_unlock;
+ *ret = iwl_mvm_mac_ctxt_init(mvm, vif);
+ if (*ret)
+ return true;
rcu_assign_pointer(mvm->vif_id_to_mac[mvmvif->id], vif);
*/
if (vif->type == NL80211_IFTYPE_AP ||
vif->type == NL80211_IFTYPE_ADHOC) {
- ret = iwl_mvm_alloc_bcast_sta(mvm, vif);
- if (ret) {
- IWL_ERR(mvm, "Failed to allocate bcast sta\n");
- goto out_unlock;
- }
-
- /*
- * Only queue for this station is the mcast queue,
- * which shouldn't be in TFD mask anyway
- */
- ret = iwl_mvm_allocate_int_sta(mvm, &mvmvif->mcast_sta,
- 0, vif->type,
- IWL_STA_MULTICAST);
- if (ret)
- goto out_unlock;
-
iwl_mvm_vif_dbgfs_register(mvm, vif);
- goto out_unlock;
+ return true;
}
mvmvif->features |= hw->netdev_features;
+ return false;
+}
+
+static int iwl_mvm_alloc_bcast_mcast_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ ret = iwl_mvm_alloc_bcast_sta(mvm, vif);
+ if (ret) {
+ IWL_ERR(mvm, "Failed to allocate bcast sta\n");
+ return ret;
+ }
+
+ /*
+	 * The only queue for this station is the mcast queue,
+	 * which shouldn't be in the TFD mask anyway
+ */
+ return iwl_mvm_allocate_int_sta(mvm, &mvmvif->deflink.mcast_sta, 0,
+ vif->type,
+ IWL_STA_MULTICAST);
+}
+
+static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+
+ /* Common for MLD and non-MLD API */
+ if (iwl_mvm_mac_add_interface_common(mvm, hw, vif, &ret))
+ goto out;
+
ret = iwl_mvm_mac_ctxt_add(mvm, vif);
if (ret)
goto out_unlock;
*/
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
- mvmvif->phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
- if (!mvmvif->phy_ctxt) {
+ mvmvif->deflink.phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
+ if (!mvmvif->deflink.phy_ctxt) {
ret = -ENOSPC;
goto out_free_bf;
}
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->phy_ctxt);
+ iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
ret = iwl_mvm_binding_add_vif(mvm, vif);
if (ret)
goto out_unref_phy;
INIT_DELAYED_WORK(&mvmvif->csa_work,
iwl_mvm_channel_switch_disconnect_wk);
- if (vif->type == NL80211_IFTYPE_MONITOR)
+ if (vif->type == NL80211_IFTYPE_MONITOR) {
mvm->monitor_on = true;
+ mvm->monitor_p80 =
+ iwl_mvm_chandef_get_primary_80(&vif->bss_conf.chandef);
+ }
iwl_mvm_vif_dbgfs_register(mvm, vif);
mvm->csme_vif = vif;
}
+out:
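+	/* for AP and IBSS, the bcast/mcast stations are allocated last,
+	 * on both the early-return path and the regular one
+	 */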
+ if (!ret && (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC))
+ ret = iwl_mvm_alloc_bcast_mcast_sta(mvm, vif);
+
goto out_unlock;
out_unbind:
iwl_mvm_binding_remove_vif(mvm, vif);
out_unref_phy:
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->phy_ctxt);
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
out_free_bf:
if (mvm->bf_allowed_vif == mvmvif) {
mvm->bf_allowed_vif = NULL;
IEEE80211_VIF_SUPPORTS_CQM_RSSI);
}
out_remove_mac:
- mvmvif->phy_ctxt = NULL;
+ mvmvif->deflink.phy_ctxt = NULL;
iwl_mvm_mac_ctxt_remove(mvm, vif);
out_unlock:
mutex_unlock(&mvm->mutex);
return ret;
}
-static void iwl_mvm_prepare_mac_removal(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif)
+void iwl_mvm_prepare_mac_removal(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
{
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
/*
}
}
-static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+/* This function does the common part of removing the interface for
+ * both MLD and non-MLD modes. Returns true if removing the interface
+ * is done
+ */
+static bool iwl_mvm_mac_remove_interface_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
mvm->csme_vif = NULL;
}
- probe_data = rcu_dereference_protected(mvmvif->probe_resp_data,
+ probe_data = rcu_dereference_protected(mvmvif->deflink.probe_resp_data,
lockdep_is_held(&mvm->mutex));
- RCU_INIT_POINTER(mvmvif->probe_resp_data, NULL);
+ RCU_INIT_POINTER(mvmvif->deflink.probe_resp_data, NULL);
if (probe_data)
kfree_rcu(probe_data, rcu_head);
mvm->noa_duration = 0;
}
#endif
- iwl_mvm_dealloc_int_sta(mvm, &mvmvif->mcast_sta);
- iwl_mvm_dealloc_bcast_sta(mvm, vif);
- goto out_release;
+ return true;
}
+ iwl_mvm_power_update_mac(mvm);
+ return false;
+}
+
+static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
+ if (iwl_mvm_mac_remove_interface_common(hw, vif))
+ goto out;
+
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
mvm->p2p_device_vif = NULL;
iwl_mvm_rm_p2p_bcast_sta(mvm, vif);
iwl_mvm_binding_remove_vif(mvm, vif);
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->phy_ctxt);
- mvmvif->phy_ctxt = NULL;
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+ mvmvif->deflink.phy_ctxt = NULL;
}
- iwl_mvm_power_update_mac(mvm);
iwl_mvm_mac_ctxt_remove(mvm, vif);
RCU_INIT_POINTER(mvm->vif_id_to_mac[mvmvif->id], NULL);
if (vif->type == NL80211_IFTYPE_MONITOR)
mvm->monitor_on = false;
-out_release:
- mutex_unlock(&mvm->mutex);
-}
+out:
+ if (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC) {
+ iwl_mvm_dealloc_int_sta(mvm, &mvmvif->deflink.mcast_sta);
+ iwl_mvm_dealloc_bcast_sta(mvm, vif);
+ }
-static int iwl_mvm_mac_config(struct ieee80211_hw *hw, u32 changed)
-{
- return 0;
+ mutex_unlock(&mvm->mutex);
}
struct iwl_mvm_mc_iter_data {
IWL_ERR(mvm, "Failed to synchronize multicast groups update\n");
}
-static u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
- struct netdev_hw_addr_list *mc_list)
+u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
+ struct netdev_hw_addr_list *mc_list)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mcast_filter_cmd *cmd;
return (u64)(unsigned long)cmd;
}
-static void iwl_mvm_configure_filter(struct ieee80211_hw *hw,
- unsigned int changed_flags,
- unsigned int *total_flags,
- u64 multicast)
+void iwl_mvm_configure_filter(struct ieee80211_hw *hw,
+ unsigned int changed_flags,
+ unsigned int *total_flags, u64 multicast)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mcast_filter_cmd *cmd = (void *)(unsigned long)multicast;
mutex_unlock(&mvm->mutex);
}
-static int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif)
+int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mu_group_mgmt_cmd cmd = {};
}
static void iwl_mvm_set_pkt_ext_from_he_ppe(struct iwl_mvm *mvm,
- struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta,
struct iwl_he_pkt_ext_v2 *pkt_ext,
bool inheritance)
{
- u8 nss = (sta->deflink.he_cap.ppe_thres[0] & IEEE80211_PPE_THRES_NSS_MASK) + 1;
- u8 *ppe = &sta->deflink.he_cap.ppe_thres[0];
+ u8 nss = (link_sta->he_cap.ppe_thres[0] &
+ IEEE80211_PPE_THRES_NSS_MASK) + 1;
+ u8 *ppe = &link_sta->he_cap.ppe_thres[0];
u8 ru_index_bitmap =
u8_get_bits(*ppe,
IEEE80211_PPE_THRES_RU_INDEX_BITMASK_MASK);
inheritance);
}
-static void iwl_mvm_set_pkt_ext_from_nominal_padding(struct iwl_he_pkt_ext_v2 *pkt_ext,
- u8 nominal_padding,
- u32 *flags)
+static int
+iwl_mvm_set_pkt_ext_from_nominal_padding(struct iwl_he_pkt_ext_v2 *pkt_ext,
+ u8 nominal_padding)
{
int low_th = -1;
int high_th = -1;
break;
}
+ if (low_th < 0 || high_th < 0)
+ return -EINVAL;
+
/* Set the PPE thresholds accordingly */
- if (low_th >= 0 && high_th >= 0) {
- for (i = 0; i < MAX_HE_SUPP_NSS; i++) {
- u8 bw;
+ for (i = 0; i < MAX_HE_SUPP_NSS; i++) {
+ u8 bw;
- for (bw = 0;
- bw < ARRAY_SIZE(pkt_ext->pkt_ext_qam_th[i]);
- bw++) {
- pkt_ext->pkt_ext_qam_th[i][bw][0] = low_th;
- pkt_ext->pkt_ext_qam_th[i][bw][1] = high_th;
- }
+ for (bw = 0;
+ bw < ARRAY_SIZE(pkt_ext->pkt_ext_qam_th[i]);
+ bw++) {
+ pkt_ext->pkt_ext_qam_th[i][bw][0] = low_th;
+ pkt_ext->pkt_ext_qam_th[i][bw][1] = high_th;
}
-
- *flags |= STA_CTXT_HE_PACKET_EXT;
}
+
+ return 0;
}
static void iwl_mvm_get_optimal_ppe_info(struct iwl_he_pkt_ext_v2 *pkt_ext,
}
}
-static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif, u8 sta_id)
+/* Set the pkt_ext field according to PPE Thresholds element */
+int iwl_mvm_set_sta_pkt_ext(struct iwl_mvm *mvm,
+ struct ieee80211_link_sta *link_sta,
+ struct iwl_he_pkt_ext_v2 *pkt_ext)
{
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct iwl_he_sta_context_cmd_v3 sta_ctxt_cmd = {
- .sta_id = sta_id,
- .tid_limit = IWL_MAX_TID_COUNT,
- .bss_color = vif->bss_conf.he_bss_color.color,
- .htc_trig_based_pkt_ext = vif->bss_conf.htc_trig_based_pkt_ext,
- .frame_time_rts_th =
- cpu_to_le16(vif->bss_conf.frame_time_rts_th),
- };
- struct iwl_he_sta_context_cmd_v2 sta_ctxt_cmd_v2 = {};
- u32 cmd_id = WIDE_ID(DATA_PATH_GROUP, STA_HE_CTXT_CMD);
- u8 ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 2);
- int size;
- struct ieee80211_sta *sta;
- u32 flags;
- int i;
- const struct ieee80211_sta_he_cap *own_he_cap = NULL;
- struct ieee80211_chanctx_conf *chanctx_conf;
- const struct ieee80211_supported_band *sband;
- void *cmd;
u8 nominal_padding;
+ int i, ret = 0;
- if (!fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_MBSSID_HE))
- ver = 1;
-
- switch (ver) {
- case 1:
- /* same layout as v2 except some data at the end */
- cmd = &sta_ctxt_cmd_v2;
- size = sizeof(struct iwl_he_sta_context_cmd_v1);
- break;
- case 2:
- cmd = &sta_ctxt_cmd_v2;
- size = sizeof(struct iwl_he_sta_context_cmd_v2);
- break;
- case 3:
- cmd = &sta_ctxt_cmd;
- size = sizeof(struct iwl_he_sta_context_cmd_v3);
- break;
- default:
- IWL_ERR(mvm, "bad STA_HE_CTXT_CMD version %d\n", ver);
- return;
- }
-
- rcu_read_lock();
-
- chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
- if (WARN_ON(!chanctx_conf)) {
- rcu_read_unlock();
- return;
- }
-
- sband = mvm->hw->wiphy->bands[chanctx_conf->def.chan->band];
- own_he_cap = ieee80211_get_he_iftype_cap(sband,
- ieee80211_vif_type_p2p(vif));
-
- sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_ctxt_cmd.sta_id]);
- if (IS_ERR_OR_NULL(sta)) {
- rcu_read_unlock();
- WARN(1, "Can't find STA to configure HE\n");
- return;
- }
-
- if (!sta->deflink.he_cap.has_he) {
- rcu_read_unlock();
- return;
- }
-
- flags = 0;
-
- /* Block 26-tone RU OFDMA transmissions */
- if (mvmvif->he_ru_2mhz_block)
- flags |= STA_CTXT_HE_RU_2MHZ_BLOCK;
-
- /* HTC flags */
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[0] &
- IEEE80211_HE_MAC_CAP0_HTC_HE)
- sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_SUPPORT);
- if ((sta->deflink.he_cap.he_cap_elem.mac_cap_info[1] &
- IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION) ||
- (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
- IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION)) {
- u8 link_adap =
- ((sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
- IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION) << 1) +
- (sta->deflink.he_cap.he_cap_elem.mac_cap_info[1] &
- IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION);
-
- if (link_adap == 2)
- sta_ctxt_cmd.htc_flags |=
- cpu_to_le32(IWL_HE_HTC_LINK_ADAP_UNSOLICITED);
- else if (link_adap == 3)
- sta_ctxt_cmd.htc_flags |=
- cpu_to_le32(IWL_HE_HTC_LINK_ADAP_BOTH);
- }
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] & IEEE80211_HE_MAC_CAP2_BSR)
- sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_BSR_SUPP);
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[3] &
- IEEE80211_HE_MAC_CAP3_OMI_CONTROL)
- sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_OMI_SUPP);
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[4] & IEEE80211_HE_MAC_CAP4_BQR)
- sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_BQR_SUPP);
+ if (WARN_ON(!link_sta))
+ return -EINVAL;
- /*
- * Initialize the PPE thresholds to "None" (7), as described in Table
+ /* Initialize the PPE thresholds to "None" (7), as described in Table
* 9-262ac of 80211.ax/D3.0.
*/
- memset(&sta_ctxt_cmd.pkt_ext, IWL_HE_PKT_EXT_NONE,
- sizeof(sta_ctxt_cmd.pkt_ext));
+ memset(pkt_ext, IWL_HE_PKT_EXT_NONE,
+ sizeof(struct iwl_he_pkt_ext_v2));
- if (sta->deflink.eht_cap.has_eht) {
+ if (link_sta->eht_cap.has_eht) {
nominal_padding =
- u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5],
+ u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5],
IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK);
- /* If PPE Thresholds exists, parse them into a FW-familiar format. */
- if (sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5] &
+		/* If PPE Thresholds exist, parse them into a FW-familiar
+ * format.
+ */
+ if (link_sta->eht_cap.eht_cap_elem.phy_cap_info[5] &
IEEE80211_EHT_PHY_CAP5_PPE_THRESHOLD_PRESENT) {
- u8 nss = (sta->deflink.eht_cap.eht_ppe_thres[0] &
+ u8 nss = (link_sta->eht_cap.eht_ppe_thres[0] &
IEEE80211_EHT_PPE_THRES_NSS_MASK) + 1;
- u8 *ppe = &sta->deflink.eht_cap.eht_ppe_thres[0];
+ u8 *ppe = &link_sta->eht_cap.eht_ppe_thres[0];
u8 ru_index_bitmap =
u16_get_bits(*ppe,
IEEE80211_EHT_PPE_THRES_RU_INDEX_BITMASK_MASK);
/* Starting after PPE header */
u8 ppe_pos_bit = IEEE80211_EHT_PPE_THRES_INFO_HEADER_SIZE;
- iwl_mvm_parse_ppe(mvm,
- &sta_ctxt_cmd.pkt_ext,
- nss, ru_index_bitmap, ppe,
- ppe_pos_bit, true);
- flags |= STA_CTXT_HE_PACKET_EXT;
- /* EHT PPE Thresholds doesn't exist - set the API according to HE PPE Tresholds*/
- } else if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[6] &
+ iwl_mvm_parse_ppe(mvm, pkt_ext, nss, ru_index_bitmap,
+ ppe, ppe_pos_bit, true);
+		/* EHT PPE Thresholds don't exist - set the API according to
+		 * HE PPE Thresholds
+ */
+ } else if (link_sta->he_cap.he_cap_elem.phy_cap_info[6] &
IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
- struct iwl_he_pkt_ext_v2 *pkt_ext =
- &sta_ctxt_cmd.pkt_ext;
-
- /*
- * Even though HE Capabilities IE doesn't contain PPE
- * Thresholds for BW 320Mhz, thresholds for this BW will
- * be filled in with the same values as 160Mhz, due to
- * the inheritance, as required.
- */
- iwl_mvm_set_pkt_ext_from_he_ppe(mvm, sta, pkt_ext,
+ /* Even though HE Capabilities IE doesn't contain PPE
+ * Thresholds for BW 320Mhz, thresholds for this BW will
+ * be filled in with the same values as 160Mhz, due to
+ * the inheritance, as required.
+ */
+ iwl_mvm_set_pkt_ext_from_he_ppe(mvm, link_sta, pkt_ext,
true);
- /*
- * According to the requirements, for MCSs 12-13 the maximum value between
- * HE PPE Threshold and Common Nominal Packet Padding needs to be taken
- */
+ /* According to the requirements, for MCSs 12-13 the
+ * maximum value between HE PPE Threshold and Common
+ * Nominal Packet Padding needs to be taken
+ */
iwl_mvm_get_optimal_ppe_info(pkt_ext, nominal_padding);
- flags |= STA_CTXT_HE_PACKET_EXT;
-
- /*
- * if PPE Thresholds doesn't present in both EHT IE and HE IE -
- * take the Thresholds from Common Nominal Packet Padding field
- */
+		/* if PPE Thresholds aren't present in either the EHT IE or
+		 * the HE IE - take the Thresholds from the Common Nominal
+		 * Packet Padding field
+		 */
} else {
- iwl_mvm_set_pkt_ext_from_nominal_padding(&sta_ctxt_cmd.pkt_ext,
- nominal_padding,
- &flags);
+ ret = iwl_mvm_set_pkt_ext_from_nominal_padding(pkt_ext,
+ nominal_padding);
}
- } else if (sta->deflink.he_cap.has_he) {
+ } else if (link_sta->he_cap.has_he) {
/* If PPE Thresholds exist, parse them into a FW-familiar format. */
- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[6] &
+ if (link_sta->he_cap.he_cap_elem.phy_cap_info[6] &
IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
- iwl_mvm_set_pkt_ext_from_he_ppe(mvm, sta,
- &sta_ctxt_cmd.pkt_ext,
+ iwl_mvm_set_pkt_ext_from_he_ppe(mvm, link_sta, pkt_ext,
false);
- flags |= STA_CTXT_HE_PACKET_EXT;
- /*
- * PPE Thresholds doesn't exist - set the API PPE values
- * according to Common Nominal Packet Padding field.
- */
+		/* PPE Thresholds don't exist - set the API PPE values
+		 * according to the Common Nominal Packet Padding field.
+ */
} else {
nominal_padding =
- u8_get_bits(sta->deflink.he_cap.he_cap_elem.phy_cap_info[9],
+ u8_get_bits(link_sta->he_cap.he_cap_elem.phy_cap_info[9],
IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK);
if (nominal_padding != IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_RESERVED)
- iwl_mvm_set_pkt_ext_from_nominal_padding(&sta_ctxt_cmd.pkt_ext,
- nominal_padding,
- &flags);
+ ret = iwl_mvm_set_pkt_ext_from_nominal_padding(pkt_ext,
+ nominal_padding);
}
}
int bw;
for (bw = 0;
- bw < ARRAY_SIZE(sta_ctxt_cmd.pkt_ext.pkt_ext_qam_th[i]);
+ bw < ARRAY_SIZE(*pkt_ext->pkt_ext_qam_th[i]);
bw++) {
u8 *qam_th =
- &sta_ctxt_cmd.pkt_ext.pkt_ext_qam_th[i][bw][0];
+ &pkt_ext->pkt_ext_qam_th[i][bw][0];
IWL_DEBUG_HT(mvm,
"PPE table: nss[%d] bw[%d] PPET8 = %d, PPET16 = %d\n",
i, bw, qam_th[0], qam_th[1]);
}
}
+ return ret;
+}
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
- IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP)
- flags |= STA_CTXT_HE_32BIT_BA_BITMAP;
-
- if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
- IEEE80211_HE_MAC_CAP2_ACK_EN)
- flags |= STA_CTXT_HE_ACK_ENABLED;
-
- rcu_read_unlock();
-
+/*
+ * This function sets the MU EDCA parameters and returns whether MU EDCA
+ * is enabled or not
+ */
+bool iwl_mvm_set_fw_mu_edca_params(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ struct iwl_he_backoff_conf *trig_based_txf)
+{
+ int i;
/* Mark MU EDCA as enabled, unless none detected on some AC */
- flags |= STA_CTXT_HE_MU_EDCA_CW;
+ bool mu_edca_enabled = true;
+
for (i = 0; i < IEEE80211_NUM_ACS; i++) {
struct ieee80211_he_mu_edca_param_ac_rec *mu_edca =
- &mvmvif->queue_params[i].mu_edca_param_rec;
+ &mvmvif->deflink.queue_params[i].mu_edca_param_rec;
u8 ac = iwl_mvm_mac80211_ac_to_ucode_ac(i);
- if (!mvmvif->queue_params[i].mu_edca) {
- flags &= ~STA_CTXT_HE_MU_EDCA_CW;
+ if (!mvmvif->deflink.queue_params[i].mu_edca) {
+ mu_edca_enabled = false;
break;
}
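+		/* ECWmin sits in the low nibble of ecw_min_max and
+		 * ECWmax in the high nibble
+		 */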
- sta_ctxt_cmd.trig_based_txf[ac].cwmin =
+ trig_based_txf[ac].cwmin =
cpu_to_le16(mu_edca->ecw_min_max & 0xf);
- sta_ctxt_cmd.trig_based_txf[ac].cwmax =
+ trig_based_txf[ac].cwmax =
cpu_to_le16((mu_edca->ecw_min_max & 0xf0) >> 4);
- sta_ctxt_cmd.trig_based_txf[ac].aifsn =
- cpu_to_le16(mu_edca->aifsn);
- sta_ctxt_cmd.trig_based_txf[ac].mu_time =
+ trig_based_txf[ac].aifsn =
+ cpu_to_le16(mu_edca->aifsn & 0xf);
+ trig_based_txf[ac].mu_time =
cpu_to_le16(mu_edca->mu_edca_timer);
}
+ return mu_edca_enabled;
+}
+
+bool iwl_mvm_is_nic_ack_enabled(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ const struct ieee80211_supported_band *sband;
+ const struct ieee80211_sta_he_cap *own_he_cap = NULL;
+
+ /* This capability is the same for all bands,
+ * so take it from one of them.
+ */
+ sband = mvm->hw->wiphy->bands[NL80211_BAND_2GHZ];
+ own_he_cap = ieee80211_get_he_iftype_cap(sband,
+ ieee80211_vif_type_p2p(vif));
+
+ return (own_he_cap && (own_he_cap->he_cap_elem.mac_cap_info[2] &
+ IEEE80211_HE_MAC_CAP2_ACK_EN));
+}
+
+__le32 iwl_mvm_get_sta_htc_flags(struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta)
+{
+ u8 *mac_cap_info =
+ &link_sta->he_cap.he_cap_elem.mac_cap_info[0];
+ __le32 htc_flags = 0;
+
+ if (mac_cap_info[0] & IEEE80211_HE_MAC_CAP0_HTC_HE)
+ htc_flags |= cpu_to_le32(IWL_HE_HTC_SUPPORT);
+ if ((mac_cap_info[1] & IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION) ||
+ (mac_cap_info[2] & IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION)) {
+ u8 link_adap =
+ ((mac_cap_info[2] &
+ IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION) << 1) +
+ (mac_cap_info[1] &
+ IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION);
+
+ if (link_adap == 2)
+ htc_flags |=
+ cpu_to_le32(IWL_HE_HTC_LINK_ADAP_UNSOLICITED);
+ else if (link_adap == 3)
+ htc_flags |= cpu_to_le32(IWL_HE_HTC_LINK_ADAP_BOTH);
+ }
+ if (mac_cap_info[2] & IEEE80211_HE_MAC_CAP2_BSR)
+ htc_flags |= cpu_to_le32(IWL_HE_HTC_BSR_SUPP);
+ if (mac_cap_info[3] & IEEE80211_HE_MAC_CAP3_OMI_CONTROL)
+ htc_flags |= cpu_to_le32(IWL_HE_HTC_OMI_SUPP);
+ if (mac_cap_info[4] & IEEE80211_HE_MAC_CAP4_BQR)
+ htc_flags |= cpu_to_le32(IWL_HE_HTC_BQR_SUPP);
+
+ return htc_flags;
+}
+
+static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif, u8 sta_id)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_he_sta_context_cmd_v3 sta_ctxt_cmd = {
+ .sta_id = sta_id,
+ .tid_limit = IWL_MAX_TID_COUNT,
+ .bss_color = vif->bss_conf.he_bss_color.color,
+ .htc_trig_based_pkt_ext = vif->bss_conf.htc_trig_based_pkt_ext,
+ .frame_time_rts_th =
+ cpu_to_le16(vif->bss_conf.frame_time_rts_th),
+ };
+ struct iwl_he_sta_context_cmd_v2 sta_ctxt_cmd_v2 = {};
+ u32 cmd_id = WIDE_ID(DATA_PATH_GROUP, STA_HE_CTXT_CMD);
+ u8 ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 2);
+ int size;
+ struct ieee80211_sta *sta;
+ u32 flags;
+ int i;
+ void *cmd;
+
+ if (!fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_MBSSID_HE))
+ ver = 1;
+
+ switch (ver) {
+ case 1:
+ /* same layout as v2 except some data at the end */
+ cmd = &sta_ctxt_cmd_v2;
+ size = sizeof(struct iwl_he_sta_context_cmd_v1);
+ break;
+ case 2:
+ cmd = &sta_ctxt_cmd_v2;
+ size = sizeof(struct iwl_he_sta_context_cmd_v2);
+ break;
+ case 3:
+ cmd = &sta_ctxt_cmd;
+ size = sizeof(struct iwl_he_sta_context_cmd_v3);
+ break;
+ default:
+ IWL_ERR(mvm, "bad STA_HE_CTXT_CMD version %d\n", ver);
+ return;
+ }
+
+ rcu_read_lock();
+
+ sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_ctxt_cmd.sta_id]);
+ if (IS_ERR_OR_NULL(sta)) {
+ rcu_read_unlock();
+ WARN(1, "Can't find STA to configure HE\n");
+ return;
+ }
+
+ if (!sta->deflink.he_cap.has_he) {
+ rcu_read_unlock();
+ return;
+ }
+
+ flags = 0;
+
+ /* Block 26-tone RU OFDMA transmissions */
+ if (mvmvif->deflink.he_ru_2mhz_block)
+ flags |= STA_CTXT_HE_RU_2MHZ_BLOCK;
+
+ /* HTC flags */
+ sta_ctxt_cmd.htc_flags = iwl_mvm_get_sta_htc_flags(sta, &sta->deflink);
+
+ /* PPE Thresholds */
+ if (!iwl_mvm_set_sta_pkt_ext(mvm, &sta->deflink, &sta_ctxt_cmd.pkt_ext))
+ flags |= STA_CTXT_HE_PACKET_EXT;
+
+ if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
+ IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP)
+ flags |= STA_CTXT_HE_32BIT_BA_BITMAP;
+
+ if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
+ IEEE80211_HE_MAC_CAP2_ACK_EN)
+ flags |= STA_CTXT_HE_ACK_ENABLED;
+
+ rcu_read_unlock();
+
+ if (iwl_mvm_set_fw_mu_edca_params(mvm, mvmvif,
+ &sta_ctxt_cmd.trig_based_txf[0]))
+ flags |= STA_CTXT_HE_MU_EDCA_CW;
if (vif->bss_conf.uora_exists) {
flags |= STA_CTXT_HE_TRIG_RND_ALLOC;
(vif->bss_conf.uora_ocw_range >> 3) & 0x7;
}
- if (own_he_cap && !(own_he_cap->he_cap_elem.mac_cap_info[2] &
- IEEE80211_HE_MAC_CAP2_ACK_EN))
+ if (!iwl_mvm_is_nic_ack_enabled(mvm, vif))
flags |= STA_CTXT_HE_NIC_NOT_ACK_ENABLED;
if (vif->bss_conf.nontransmitted) {
IWL_ERR(mvm, "Failed to config FW to work HE!\n");
}
-static void iwl_mvm_protect_assoc(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- u32 duration_override)
+void iwl_mvm_protect_assoc(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ u32 duration_override)
{
u32 duration = IWL_MVM_TE_SESSION_PROTECTION_MAX_TIME_MS;
u32 min_duration = IWL_MVM_TE_SESSION_PROTECTION_MIN_TIME_MS;
min_duration, 500, false);
}
+/* Handle the association part that is common to MLD and non-MLD modes */
+void iwl_mvm_bss_info_changed_station_assoc(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u64 changes)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ /* The firmware tracks the MU-MIMO group on its own.
+ * However, on HW restart we should restore this data.
+ */
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+ (changes & BSS_CHANGED_MU_GROUPS) && vif->bss_conf.mu_mimo_owner) {
+ ret = iwl_mvm_update_mu_groups(mvm, vif);
+ if (ret)
+ IWL_ERR(mvm,
+ "failed to update VHT MU_MIMO groups\n");
+ }
+
+ iwl_mvm_recalc_multicast(mvm);
+
+ /* reset rssi values */
+ mvmvif->bf_data.ave_beacon_signal = 0;
+
+ iwl_mvm_bt_coex_vif_change(mvm);
+ iwl_mvm_update_smps_on_active_links(mvm, vif, IWL_MVM_SMPS_REQ_TT,
+ IEEE80211_SMPS_AUTOMATIC);
+ if (fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_UMAC_SCAN))
+ iwl_mvm_config_scan(mvm);
+}
+
+/* Execute the common part for MLD and non-MLD modes */
+void
+iwl_mvm_bss_info_changed_station_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ if (changes & BSS_CHANGED_BEACON_INFO) {
+ /* We received a beacon from the associated AP so
+ * remove the session protection.
+ */
+ iwl_mvm_stop_session_protection(mvm, vif);
+
+ iwl_mvm_sf_update(mvm, vif, false);
+ WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
+ }
+
+ if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | BSS_CHANGED_QOS |
+ /* Send power command on every beacon change,
+			* because we may not have enabled beacon abort yet.
+ */
+ BSS_CHANGED_BEACON_INFO)) {
+ ret = iwl_mvm_power_update_mac(mvm);
+ if (ret)
+ IWL_ERR(mvm, "failed to update power mode\n");
+ }
+
+ if (changes & BSS_CHANGED_CQM) {
+ IWL_DEBUG_MAC80211(mvm, "cqm info_changed\n");
+ /* reset cqm events tracking */
+ mvmvif->bf_data.last_cqm_event = 0;
+ if (mvmvif->bf_data.bf_enabled) {
+			/* FIXME: need to update per link once the FW API
+			 * supports it
+ */
+ ret = iwl_mvm_enable_beacon_filter(mvm, vif, 0);
+ if (ret)
+ IWL_ERR(mvm,
+ "failed to update CQM thresholds\n");
+ }
+ }
+
+ if (changes & BSS_CHANGED_BANDWIDTH)
+ iwl_mvm_update_link_smps(vif, link_conf);
+}
+
static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *bss_conf,
!iwlwifi_mod_params.disable_11ax) ||
(vif->bss_conf.eht_support &&
!iwlwifi_mod_params.disable_11be))
- iwl_mvm_cfg_he_sta(mvm, vif, mvmvif->ap_sta_id);
+ iwl_mvm_cfg_he_sta(mvm, vif, mvmvif->deflink.ap_sta_id);
iwl_mvm_mac_ctxt_recalc_tsf_id(mvm, vif);
}
!iwlwifi_mod_params.disable_11ax) ||
(vif->bss_conf.eht_support &&
!iwlwifi_mod_params.disable_11be)))
- iwl_mvm_cfg_he_sta(mvm, vif, mvmvif->ap_sta_id);
+ iwl_mvm_cfg_he_sta(mvm, vif, mvmvif->deflink.ap_sta_id);
/*
* If we're not associated yet, take the (new) BSSID before associating
* branch for disassociation below.
*/
if (changes & BSS_CHANGED_BSSID && !mvmvif->associated)
- memcpy(mvmvif->bssid, bss_conf->bssid, ETH_ALEN);
+ memcpy(mvmvif->deflink.bssid, bss_conf->bssid, ETH_ALEN);
- ret = iwl_mvm_mac_ctxt_changed(mvm, vif, false, mvmvif->bssid);
+ ret = iwl_mvm_mac_ctxt_changed(mvm, vif, false, mvmvif->deflink.bssid);
if (ret)
IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
/* after sending it once, adopt mac80211 data */
- memcpy(mvmvif->bssid, bss_conf->bssid, ETH_ALEN);
+ memcpy(mvmvif->deflink.bssid, bss_conf->bssid, ETH_ALEN);
mvmvif->associated = vif->cfg.assoc;
if (changes & BSS_CHANGED_ASSOC) {
if (vif->cfg.assoc) {
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
- memset(&mvmvif->beacon_stats, 0,
- sizeof(mvmvif->beacon_stats));
+ memset(&mvmvif->deflink.beacon_stats, 0,
+ sizeof(mvmvif->deflink.beacon_stats));
/* add quota for this interface */
ret = iwl_mvm_update_quotas(mvm, true, NULL);
if (vif->p2p) {
iwl_mvm_update_smps(mvm, vif,
IWL_MVM_SMPS_REQ_PROT,
- IEEE80211_SMPS_DYNAMIC);
+ IEEE80211_SMPS_DYNAMIC, 0);
}
- } else if (mvmvif->ap_sta_id != IWL_MVM_INVALID_STA) {
+ } else if (mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA) {
iwl_mvm_mei_host_disassociated(mvm);
/*
* If update fails - SF might be running in associated
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART,
&mvm->status)) {
/* first remove remaining keys */
- iwl_mvm_sec_key_remove_ap(mvm, vif);
+ iwl_mvm_sec_key_remove_ap(mvm, vif,
+ &mvmvif->deflink, 0);
/*
* Remove AP station now that
* the MAC is unassoc
*/
ret = iwl_mvm_rm_sta_id(mvm, vif,
- mvmvif->ap_sta_id);
+ mvmvif->deflink.ap_sta_id);
if (ret)
IWL_ERR(mvm,
"failed to remove AP station\n");
- mvmvif->ap_sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.ap_sta_id = IWL_MVM_INVALID_STA;
}
/* remove quota for this interface */
vif->addr);
}
- /*
- * The firmware tracks the MU-MIMO group on its own.
- * However, on HW restart we should restore this data.
- */
- if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
- (changes & BSS_CHANGED_MU_GROUPS) && vif->bss_conf.mu_mimo_owner) {
- ret = iwl_mvm_update_mu_groups(mvm, vif);
- if (ret)
- IWL_ERR(mvm,
- "failed to update VHT MU_MIMO groups\n");
- }
+ iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
+ }
- iwl_mvm_recalc_multicast(mvm);
+ iwl_mvm_bss_info_changed_station_common(mvm, vif, &vif->bss_conf,
+ changes);
+}
- /* reset rssi values */
- mvmvif->bf_data.ave_beacon_signal = 0;
+bool iwl_mvm_start_ap_ibss_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ int *ret)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int i;
- iwl_mvm_bt_coex_vif_change(mvm);
- iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_TT,
- IEEE80211_SMPS_AUTOMATIC);
- if (fw_has_capa(&mvm->fw->ucode_capa,
- IWL_UCODE_TLV_CAPA_UMAC_SCAN))
- iwl_mvm_config_scan(mvm);
- }
+ lockdep_assert_held(&mvm->mutex);
- if (changes & BSS_CHANGED_BEACON_INFO) {
- /*
- * We received a beacon from the associated AP so
- * remove the session protection.
- */
- iwl_mvm_stop_session_protection(mvm, vif);
+ mvmvif->ap_assoc_sta_count = 0;
- iwl_mvm_sf_update(mvm, vif, false);
- WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
- }
+ /* must be set before quota calculations */
+ mvmvif->ap_ibss_active = true;
- if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | BSS_CHANGED_QOS |
- /*
- * Send power command on every beacon change,
- * because we may have not enabled beacon abort yet.
- */
- BSS_CHANGED_BEACON_INFO)) {
- ret = iwl_mvm_power_update_mac(mvm);
- if (ret)
- IWL_ERR(mvm, "failed to update power mode\n");
+ /* send all the early keys to the device now */
+ for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
+ struct ieee80211_key_conf *key = mvmvif->ap_early_keys[i];
+
+ if (!key)
+ continue;
+
+ mvmvif->ap_early_keys[i] = NULL;
+
+ *ret = __iwl_mvm_mac_set_key(hw, SET_KEY, vif, NULL, key);
+ if (*ret)
+ return true;
}
- if (changes & BSS_CHANGED_CQM) {
- IWL_DEBUG_MAC80211(mvm, "cqm info_changed\n");
- /* reset cqm events tracking */
- mvmvif->bf_data.last_cqm_event = 0;
- if (mvmvif->bf_data.bf_enabled) {
- ret = iwl_mvm_enable_beacon_filter(mvm, vif, 0);
- if (ret)
- IWL_ERR(mvm,
- "failed to update CQM thresholds\n");
- }
+ if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) {
+ iwl_mvm_vif_set_low_latency(mvmvif, true,
+ LOW_LATENCY_VIF_TYPE);
+ iwl_mvm_send_low_latency_cmd(mvm, true, mvmvif->id);
}
- if (changes & BSS_CHANGED_BANDWIDTH)
- iwl_mvm_apply_fw_smps_request(vif);
+ /* power update needs to be done before quotas */
+ iwl_mvm_power_update_mac(mvm);
+
+ return false;
}
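
The recurring mvmvif-><field> to mvmvif->deflink.<field> substitutions in this patch assume that all state tied to a single link was moved out of struct iwl_mvm_vif into a per-link container, with a default link for the non-MLO case and an array of link pointers for MLO. A minimal sketch of the assumed layout follows; it is illustrative only, and any member not visible in the diff itself is an assumption:

    /* Illustrative sketch only -- not part of the patch. Members beyond
     * those visible in the diff (bssid, ap_sta_id, phy_ctxt, queue_params,
     * beacon_stats, mcast_sta) are assumptions.
     */
    struct iwl_mvm_vif_link_info {
            u8 bssid[ETH_ALEN];
            u8 ap_sta_id;                   /* FW station id of the AP on this link */
            struct iwl_mvm_phy_ctxt *phy_ctxt;
            struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
            /* ..., beacon_stats, bf_data, mcast_sta, ... */
    };

    struct iwl_mvm_vif {
            /* state that is not per-link stays at the top level */
            bool associated;
            u8 authorized:1;
            /* default (non-MLO) link data, always valid */
            struct iwl_mvm_vif_link_info deflink;
            /* per-link data for MLO, NULL while a link is unused */
            struct iwl_mvm_vif_link_info *link[IEEE80211_MLD_MAX_NUM_LINKS];
    };

struct iwl_mvm_sta is split the same way in this patch, which is why station code now reads mvm_sta->deflink.sta_id and iterates mvm_sta->link[].
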
static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret, i;
+ int ret;
mutex_lock(&mvm->mutex);
- /* Send the beacon template */
- ret = iwl_mvm_mac_ctxt_beacon_changed(mvm, vif);
- if (ret)
- goto out_unlock;
-
/*
* Re-calculate the tsf id, as the leader-follower relations depend on
* the beacon interval, which was not known when the AP interface
if (vif->type == NL80211_IFTYPE_AP)
iwl_mvm_mac_ctxt_recalc_tsf_id(mvm, vif);
- mvmvif->ap_assoc_sta_count = 0;
+ /* For older devices, the beacon template must be sent before adding the
+ * MAC context. For newer devices, the beacon is a resource that belongs
+ * to a MAC, so the template must be sent after adding the MAC.
+ */
+ if (mvm->trans->trans_cfg->device_family > IWL_DEVICE_FAMILY_22000) {
+ /* Add the mac context */
+ ret = iwl_mvm_mac_ctxt_add(mvm, vif);
+ if (ret)
+ goto out_unlock;
- /* Add the mac context */
- ret = iwl_mvm_mac_ctxt_add(mvm, vif);
- if (ret)
- goto out_unlock;
+ /* Send the beacon template */
+ ret = iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, link_conf);
+ if (ret)
+ goto out_unlock;
+ } else {
+ /* Send the beacon template */
+ ret = iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, link_conf);
+ if (ret)
+ goto out_unlock;
+
+ /* Add the mac context */
+ ret = iwl_mvm_mac_ctxt_add(mvm, vif);
+ if (ret)
+ goto out_unlock;
+ }
/* Perform the binding */
ret = iwl_mvm_binding_add_vif(mvm, vif);
}
}
- /* must be set before quota calculations */
- mvmvif->ap_ibss_active = true;
-
- /* send all the early keys to the device now */
- for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
- struct ieee80211_key_conf *key = mvmvif->ap_early_keys[i];
-
- if (!key)
- continue;
-
- mvmvif->ap_early_keys[i] = NULL;
-
- ret = __iwl_mvm_mac_set_key(hw, SET_KEY, vif, NULL, key);
- if (ret)
- goto out_quota_failed;
- }
-
- if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) {
- iwl_mvm_vif_set_low_latency(mvmvif, true,
- LOW_LATENCY_VIF_TYPE);
- iwl_mvm_send_low_latency_cmd(mvm, true, mvmvif->id);
- }
-
- /* power updated needs to be done before quotas */
- iwl_mvm_power_update_mac(mvm);
+ if (iwl_mvm_start_ap_ibss_common(hw, vif, &ret))
+ goto out_failed;
ret = iwl_mvm_update_quotas(mvm, false, NULL);
if (ret)
- goto out_quota_failed;
+ goto out_failed;
/* Need to update the P2P Device MAC (only GO, IBSS is single vif) */
if (vif->p2p && mvm->p2p_device_vif)
goto out_unlock;
-out_quota_failed:
+out_failed:
iwl_mvm_power_update_mac(mvm);
mvmvif->ap_ibss_active = false;
iwl_mvm_send_rm_bcast_sta(mvm, vif);
return iwl_mvm_start_ap_ibss(hw, vif, &vif->bss_conf);
}
-static void iwl_mvm_stop_ap_ibss(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *link_conf)
+/* Common part for MLD and non-MLD ops */
+void iwl_mvm_stop_ap_ibss_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
{
- struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- iwl_mvm_prepare_mac_removal(mvm, vif);
+ lockdep_assert_held(&mvm->mutex);
- mutex_lock(&mvm->mutex);
+ iwl_mvm_prepare_mac_removal(mvm, vif);
/* Handle AP stop while in CSA */
if (rcu_access_pointer(mvm->csa_vif) == vif) {
}
iwl_mvm_bt_coex_vif_change(mvm);
+}
+
+static void iwl_mvm_stop_ap_ibss(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ mutex_lock(&mvm->mutex);
+
+ iwl_mvm_stop_ap_ibss_common(mvm, vif);
/* Need to update the P2P Device MAC (only GO, IBSS is single vif) */
if (vif->p2p && mvm->p2p_device_vif)
/* Need to send a new beacon template to the FW */
if (changes & BSS_CHANGED_BEACON &&
- iwl_mvm_mac_ctxt_beacon_changed(mvm, vif))
+ iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, &vif->bss_conf))
IWL_WARN(mvm, "Failed updating beacon data\n");
if (changes & BSS_CHANGED_FTM_RESPONDER) {
struct ieee80211_bss_conf *bss_conf,
u64 changes)
{
+ struct iwl_mvm_bss_info_changed_ops callbacks = {
+ .bss_info_changed_sta = iwl_mvm_bss_info_changed_station,
+ .bss_info_changed_ap_ibss = iwl_mvm_bss_info_changed_ap_ibss,
+ };
+
+ iwl_mvm_bss_info_changed_common(hw, vif, bss_conf, &callbacks,
+ changes);
+}
+
+void
+iwl_mvm_bss_info_changed_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf,
+ struct iwl_mvm_bss_info_changed_ops *callbacks,
+ u64 changes)
+{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_lock(&mvm->mutex);
switch (vif->type) {
case NL80211_IFTYPE_STATION:
- iwl_mvm_bss_info_changed_station(mvm, vif, bss_conf, changes);
+ callbacks->bss_info_changed_sta(mvm, vif, bss_conf, changes);
break;
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_ADHOC:
- iwl_mvm_bss_info_changed_ap_ibss(mvm, vif, bss_conf, changes);
+ callbacks->bss_info_changed_ap_ibss(mvm, vif, bss_conf,
+ changes);
break;
case NL80211_IFTYPE_MONITOR:
if (changes & BSS_CHANGED_MU_GROUPS)
mutex_unlock(&mvm->mutex);
}
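
The pattern above - a thin mac80211 callback that only fills an ops structure and then calls a *_common() helper - is what lets the separate MLD code reuse the same flow with its own per-mode handlers. The sketch below shows how a hypothetical MLD-side wrapper could plug into the common helper; the iwl_mvm_mld_* handler names are placeholders, not functions defined by this patch:

    static void iwl_mvm_mld_bss_info_changed(struct ieee80211_hw *hw,
                                             struct ieee80211_vif *vif,
                                             struct ieee80211_bss_conf *bss_conf,
                                             u64 changes)
    {
            struct iwl_mvm_bss_info_changed_ops callbacks = {
                    /* placeholders for MLD-specific handlers */
                    .bss_info_changed_sta = iwl_mvm_mld_bss_info_changed_sta,
                    .bss_info_changed_ap_ibss = iwl_mvm_mld_bss_info_changed_ap_ibss,
            };

            iwl_mvm_bss_info_changed_common(hw, vif, bss_conf, &callbacks,
                                            changes);
    }

The same split is applied further down in this patch for the sta_state, remain-on-channel and chanctx-switch paths (iwl_mvm_sta_state_ops, iwl_mvm_roc_ops, iwl_mvm_switch_vif_chanctx_ops).
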
-static int iwl_mvm_mac_hw_scan(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_scan_request *hw_req)
+int iwl_mvm_mac_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *hw_req)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
return ret;
}
-static void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_unlock(&mvm->mutex);
}
-static void
+void
iwl_mvm_mac_allow_buffered_frames(struct ieee80211_hw *hw,
struct ieee80211_sta *sta, u16 tids,
int num_frames,
tids, more_data, false);
}
-static void
+void
iwl_mvm_mac_release_buffered_frames(struct ieee80211_hw *hw,
struct ieee80211_sta *sta, u16 tids,
int num_frames,
*/
break;
case STA_NOTIFY_AWAKE:
- if (WARN_ON(mvmsta->sta_id == IWL_MVM_INVALID_STA))
+ if (WARN_ON(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
break;
if (txqs)
spin_unlock_bh(&mvmsta->lock);
}
-static void iwl_mvm_mac_sta_notify(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- enum sta_notify_cmd cmd,
- struct ieee80211_sta *sta)
+void iwl_mvm_mac_sta_notify(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ enum sta_notify_cmd cmd, struct ieee80211_sta *sta)
{
__iwl_mvm_mac_sta_notify(hw, cmd, sta);
}
rcu_read_unlock();
}
-static void iwl_mvm_sta_pre_rcu_remove(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+void iwl_mvm_sta_pre_rcu_remove(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned int link_id;
/*
* This is called before mac80211 does RCU synchronisation,
* be able to find the station this way, and we don't rely
* on further RCU synchronisation after the sta_state()
* callback deleted the station.
+ * Since mvm->mutex is held here, there is no need to take the RCU
+ * lock for the mvm_sta->link access.
*/
mutex_lock(&mvm->mutex);
- if (sta == rcu_access_pointer(mvm->fw_id_to_mac_id[mvm_sta->sta_id]))
- rcu_assign_pointer(mvm->fw_id_to_mac_id[mvm_sta->sta_id],
- ERR_PTR(-ENOENT));
+ for (link_id = 0; link_id < ARRAY_SIZE(mvm_sta->link); link_id++) {
+ struct iwl_mvm_link_sta *link_sta;
+ u32 sta_id;
+
+ if (!mvm_sta->link[link_id])
+ continue;
+ link_sta = rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+ sta_id = link_sta->sta_id;
+ if (sta == rcu_access_pointer(mvm->fw_id_to_mac_id[sta_id])) {
+ rcu_assign_pointer(mvm->fw_id_to_mac_id[sta_id],
+ ERR_PTR(-ENOENT));
+ RCU_INIT_POINTER(mvm->fw_id_to_link_sta[sta_id], NULL);
+ }
+ }
mutex_unlock(&mvm->mutex);
}
rcu_read_unlock();
}
-static void iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+static void
+iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int link_id,
+ struct ieee80211_bss_conf *link_conf)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_he_obss_narrow_bw_ru_data iter_data = {
.tolerated = true,
};
- if (!(vif->bss_conf.chandef.chan->flags & IEEE80211_CHAN_RADAR)) {
- mvmvif->he_ru_2mhz_block = false;
+ if (WARN_ON_ONCE(!link_conf->chandef.chan ||
+ !mvmvif->link[link_id]))
+ return;
+
+ if (!(link_conf->chandef.chan->flags & IEEE80211_CHAN_RADAR)) {
+ mvmvif->link[link_id]->he_ru_2mhz_block = false;
return;
}
- cfg80211_bss_iter(hw->wiphy, &vif->bss_conf.chandef,
+ cfg80211_bss_iter(hw->wiphy, &link_conf->chandef,
iwl_mvm_check_he_obss_narrow_bw_ru_iter,
&iter_data);
* If there is at least one AP on radar channel that cannot
* tolerate 26-tone RU UL OFDMA transmissions using HE TB PPDU.
*/
- mvmvif->he_ru_2mhz_block = !iter_data.tolerated;
+ mvmvif->link[link_id]->he_ru_2mhz_block = !iter_data.tolerated;
}
static void iwl_mvm_reset_cca_40mhz_workaround(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mei_conn_info conn_info = {
.ssid_len = vif->cfg.ssid_len,
- .channel = vif->bss_conf.chandef.chan->hw_value,
};
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
if (!mvm->mei_registered)
return;
+ /* FIXME: MEI needs to be updated for MLO */
+ if (!vif->bss_conf.chandef.chan)
+ return;
+
+ conn_info.channel = vif->bss_conf.chandef.chan->hw_value;
+
switch (mvm_sta->pairwise_cipher) {
case WLAN_CIPHER_SUITE_TKIP:
conn_info.pairwise_cipher = IWL_MEI_CIPHER_TKIP;
return;
}
- memcpy(conn_info.ssid, vif->cfg.ssid, vif->cfg.ssid_len);
- memcpy(conn_info.bssid, vif->bss_conf.bssid, ETH_ALEN);
+ memcpy(conn_info.ssid, vif->cfg.ssid, vif->cfg.ssid_len);
+ memcpy(conn_info.bssid, vif->bss_conf.bssid, ETH_ALEN);
+
+ /* TODO: add support for collocated AP data */
+ iwl_mei_host_associated(&conn_info, NULL);
+#endif
+}
+
+static int iwl_mvm_mac_ctxt_changed_wrapper(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ bool force_assoc_off)
+{
+ return iwl_mvm_mac_ctxt_changed(mvm, vif, force_assoc_off, NULL);
+}
+
+static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state,
+ enum ieee80211_sta_state new_state)
+{
+ struct iwl_mvm_sta_state_ops callbacks = {
+ .add_sta = iwl_mvm_add_sta,
+ .update_sta = iwl_mvm_update_sta,
+ .rm_sta = iwl_mvm_rm_sta,
+ .mac_ctxt_changed = iwl_mvm_mac_ctxt_changed_wrapper,
+ };
+
+ return iwl_mvm_mac_sta_state_common(hw, vif, sta, old_state, new_state,
+ &callbacks);
+}
+
+/* FIXME: temporarily making two assumptions in all sta handling functions:
+ * (1) when setting the sta state, the link exists and is protected
+ * (2) if a link is valid in the sta then it's also valid in the vif
+ *     (the same index can be used in the link array)
+ */
+static void iwl_mvm_rs_rate_init_all_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ bool update)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id;
+
+ for_each_mvm_vif_valid_link(mvmvif, link_id) {
+ struct ieee80211_bss_conf *conf =
+ link_conf_dereference_protected(vif, link_id);
+ struct ieee80211_link_sta *link_sta =
+ link_sta_dereference_protected(sta, link_id);
+
+ if (!conf || !link_sta || !mvmvif->link[link_id]->phy_ctxt)
+ continue;
+
+ iwl_mvm_rs_rate_init(mvm, sta, conf, link_sta,
+ mvmvif->link[link_id]->phy_ctxt->channel->band,
+ update);
+ }
+}
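
iwl_mvm_rs_rate_init_all_links() above leans on the per-link accessors: link_conf_dereference_protected() and link_sta_dereference_protected() are mac80211 helpers that return the per-link conf/sta while the appropriate lock is held, and for_each_mvm_vif_valid_link() is a driver macro that is assumed (its definition is not part of this hunk) to walk only the populated entries of mvmvif->link[], roughly like the sketch below:

    /* Assumed shape of the iteration helper, for illustration only */
    #define for_each_mvm_vif_valid_link(mvm_vif, link_id)                  \
            for ((link_id) = 0;                                            \
                 (link_id) < ARRAY_SIZE((mvm_vif)->link);                  \
                 (link_id)++)                                              \
                    if (!(mvm_vif)->link[link_id])                         \
                            continue;                                      \
                    else

With this shape, the statement that follows the macro invocation becomes the else branch, so it only runs for links that actually have an iwl_mvm_vif_link_info allocated.
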
+
+#define IWL_MVM_MIN_BEACON_INTERVAL_TU 16
+
+static bool iwl_mvm_vif_conf_from_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ unsigned int i;
+
+ /* Beacon interval check - firmware will crash if the beacon
+ * interval is less than 16. We can't avoid connecting at all,
+ * so refuse the station state change; this will cause mac80211
+ * to abandon attempts to connect to this AP, and eventually
+ * wpa_s will blocklist the AP...
+ */
+
+ for_each_set_bit(i, (unsigned long *)&sta->valid_links,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_link_sta *link_sta =
+ link_sta_dereference_protected(sta, i);
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, i);
+
+ if (!link_conf || !link_sta)
+ continue;
+
+ if (link_conf->beacon_int < IWL_MVM_MIN_BEACON_INTERVAL_TU) {
+ IWL_ERR(mvm,
+ "Beacon interval %d for AP %pM is too small\n",
+ link_conf->beacon_int, link_sta->addr);
+ return false;
+ }
+
+ link_conf->he_support = link_sta->he_cap.has_he;
+ }
+
+ return true;
+}
+
+static void iwl_mvm_vif_set_he_support(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ bool is_sta)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int i;
+
+ for_each_set_bit(i, (unsigned long *)&sta->valid_links,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_link_sta *link_sta =
+ link_sta_dereference_protected(sta, i);
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, i);
+
+ if (!link_conf || !link_sta || !mvmvif->link[i])
+ continue;
+
+ link_conf->he_support = link_sta->he_cap.has_he;
+
+ if (is_sta) {
+ mvmvif->link[i]->he_ru_2mhz_block = false;
+ if (link_sta->he_cap.has_he)
+ iwl_mvm_check_he_obss_narrow_bw_ru(hw, vif, i,
+ link_conf);
+ }
+ }
+}
+
+static int
+iwl_mvm_sta_state_notexist_to_none(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct iwl_mvm_sta_state_ops *callbacks)
+{
+ unsigned int i;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ !iwl_mvm_vif_conf_from_sta(mvm, vif, sta))
+ return -EINVAL;
+
+ if (sta->tdls &&
+ (vif->p2p ||
+ iwl_mvm_tdls_sta_count(mvm, NULL) == IWL_MVM_TDLS_STA_COUNT ||
+ iwl_mvm_phy_ctx_count(mvm) > 1)) {
+ IWL_DEBUG_MAC80211(mvm, "refusing TDLS sta\n");
+ return -EBUSY;
+ }
+
+ ret = callbacks->add_sta(mvm, vif, sta);
+ if (sta->tdls && ret == 0) {
+ iwl_mvm_recalc_tdls_state(mvm, vif, true);
+ iwl_mvm_tdls_check_trigger(mvm, vif, sta->addr,
+ NL80211_TDLS_SETUP);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(sta->link); i++) {
+ struct ieee80211_link_sta *link_sta;
+
+ link_sta = link_sta_dereference_protected(sta, i);
+ if (!link_sta)
+ continue;
+
+ link_sta->agg.max_rc_amsdu_len = 1;
+ }
+ ieee80211_sta_recalc_aggregates(sta);
+
+ return 0;
+}
+
+static int
+iwl_mvm_sta_state_auth_to_assoc(struct ieee80211_hw *hw,
+ struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct iwl_mvm_sta_state_ops *callbacks)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned int i;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (vif->type == NL80211_IFTYPE_AP) {
+ iwl_mvm_vif_set_he_support(hw, vif, sta, false);
+ mvmvif->ap_assoc_sta_count++;
+ callbacks->mac_ctxt_changed(mvm, vif, false);
+
+ /* since the below is not for MLD API, it's ok to use
+ * the default bss_conf
+ */
+ if (!mvm->mld_api_is_used &&
+ ((vif->bss_conf.he_support &&
+ !iwlwifi_mod_params.disable_11ax) ||
+ (vif->bss_conf.eht_support &&
+ !iwlwifi_mod_params.disable_11be)))
+ iwl_mvm_cfg_he_sta(mvm, vif, mvm_sta->deflink.sta_id);
+ } else if (vif->type == NL80211_IFTYPE_STATION) {
+ iwl_mvm_vif_set_he_support(hw, vif, sta, true);
+
+ callbacks->mac_ctxt_changed(mvm, vif, false);
+
+ if (!mvm->mld_api_is_used)
+ goto out;
+
+ for_each_set_bit(i, (unsigned long *)&sta->valid_links,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, i);
+
+ if (WARN_ON(!link_conf))
+ return -EINVAL;
+ if (!mvmvif->link[i])
+ continue;
+
+ iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ALL &
+ ~LINK_CONTEXT_MODIFY_ACTIVE,
+ true);
+ }
+ }
+
+out:
+ iwl_mvm_rs_rate_init_all_links(mvm, vif, sta, false);
+
+ return callbacks->update_sta(mvm, vif, sta);
+}
+
+static int
+iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct iwl_mvm_sta_state_ops *callbacks)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+
+ lockdep_assert_held(&mvm->mutex);
+
+ /* we don't support TDLS during DCM */
+ if (iwl_mvm_phy_ctx_count(mvm) > 1)
+ iwl_mvm_teardown_tdls_peers(mvm);
+
+ if (sta->tdls) {
+ iwl_mvm_tdls_check_trigger(mvm, vif, sta->addr,
+ NL80211_TDLS_ENABLE_LINK);
+ } else {
+ /* enable beacon filtering */
+ WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
+
+ mvmvif->authorized = 1;
+
+ callbacks->mac_ctxt_changed(mvm, vif, false);
+ iwl_mvm_mei_host_associated(mvm, vif, mvm_sta);
+ }
+
+ iwl_mvm_rs_rate_init_all_links(mvm, vif, sta, true);
- /* TODO: add support for collocated AP data */
- iwl_mei_host_associated(&conn_info, NULL);
-#endif
+ return 0;
}
-static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+static int
+iwl_mvm_sta_state_authorized_to_assoc(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct iwl_mvm_sta_state_ops *callbacks)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ /* once we move into assoc state, need to update rate scale to
+ * disable using wide bandwidth
+ */
+ iwl_mvm_rs_rate_init_all_links(mvm, vif, sta, false);
+
+ if (!sta->tdls) {
+ /* Set this but don't call iwl_mvm_mac_ctxt_changed()
+ * yet to avoid sending high prio again for a little
+ * time.
+ */
+ mvmvif->authorized = 0;
+
+ /* disable beacon filtering */
+ ret = iwl_mvm_disable_beacon_filter(mvm, vif, 0);
+ WARN_ON(ret &&
+ !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+ &mvm->status));
+ }
+
+ return 0;
+}
+
+/* Common part for MLD and non-MLD modes */
+int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
enum ieee80211_sta_state old_state,
- enum ieee80211_sta_state new_state)
+ enum ieee80211_sta_state new_state,
+ struct iwl_mvm_sta_state_ops *callbacks)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned int link_id;
int ret;
IWL_DEBUG_MAC80211(mvm, "station %pM state change %d->%d\n",
sta->addr, old_state, new_state);
- /* this would be a mac80211 bug ... but don't crash */
- if (WARN_ON_ONCE(!mvmvif->phy_ctxt))
- return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) ? 0 : -EINVAL;
-
/*
* If we are in a STA removal flow and in DQA mode:
*
}
mutex_lock(&mvm->mutex);
+
+ /* this would be a mac80211 bug ... but don't crash */
+ for_each_mvm_vif_valid_link(mvmvif, link_id) {
+ if (WARN_ON_ONCE(!mvmvif->link[link_id]->phy_ctxt)) {
+ mutex_unlock(&mvm->mutex);
+ return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+ &mvm->status) ? 0 : -EINVAL;
+ }
+ }
+
/* track whether or not the station is associated */
mvm_sta->sta_state = new_state;
if (old_state == IEEE80211_STA_NOTEXIST &&
new_state == IEEE80211_STA_NONE) {
- /*
- * Firmware bug - it'll crash if the beacon interval is less
- * than 16. We can't avoid connecting at all, so refuse the
- * station state change, this will cause mac80211 to abandon
- * attempts to connect to this AP, and eventually wpa_s will
- * blocklist the AP...
- */
- if (vif->type == NL80211_IFTYPE_STATION &&
- vif->bss_conf.beacon_int < 16) {
- IWL_ERR(mvm,
- "AP %pM beacon interval is %d, refusing due to firmware bug!\n",
- sta->addr, vif->bss_conf.beacon_int);
- ret = -EINVAL;
+ ret = iwl_mvm_sta_state_notexist_to_none(mvm, vif, sta,
+ callbacks);
+ if (ret < 0)
goto out_unlock;
- }
-
- if (vif->type == NL80211_IFTYPE_STATION)
- vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
-
- if (sta->tdls &&
- (vif->p2p ||
- iwl_mvm_tdls_sta_count(mvm, NULL) ==
- IWL_MVM_TDLS_STA_COUNT ||
- iwl_mvm_phy_ctx_count(mvm) > 1)) {
- IWL_DEBUG_MAC80211(mvm, "refusing TDLS sta\n");
- ret = -EBUSY;
- goto out_unlock;
- }
-
- ret = iwl_mvm_add_sta(mvm, vif, sta);
- if (sta->tdls && ret == 0) {
- iwl_mvm_recalc_tdls_state(mvm, vif, true);
- iwl_mvm_tdls_check_trigger(mvm, vif, sta->addr,
- NL80211_TDLS_SETUP);
- }
-
- sta->deflink.agg.max_rc_amsdu_len = 1;
} else if (old_state == IEEE80211_STA_NONE &&
new_state == IEEE80211_STA_AUTH) {
/*
ret = 0;
} else if (old_state == IEEE80211_STA_AUTH &&
new_state == IEEE80211_STA_ASSOC) {
- if (vif->type == NL80211_IFTYPE_AP) {
- vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
- mvmvif->ap_assoc_sta_count++;
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
- if ((vif->bss_conf.he_support &&
- !iwlwifi_mod_params.disable_11ax) ||
- (vif->bss_conf.eht_support &&
- !iwlwifi_mod_params.disable_11be))
- iwl_mvm_cfg_he_sta(mvm, vif, mvm_sta->sta_id);
- } else if (vif->type == NL80211_IFTYPE_STATION) {
- vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
-
- mvmvif->he_ru_2mhz_block = false;
- if (sta->deflink.he_cap.has_he)
- iwl_mvm_check_he_obss_narrow_bw_ru(hw, vif);
-
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
- }
-
- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
- false);
- ret = iwl_mvm_update_sta(mvm, vif, sta);
+ ret = iwl_mvm_sta_state_auth_to_assoc(hw, mvm, vif, sta,
+ callbacks);
} else if (old_state == IEEE80211_STA_ASSOC &&
new_state == IEEE80211_STA_AUTHORIZED) {
- ret = 0;
-
- /* we don't support TDLS during DCM */
- if (iwl_mvm_phy_ctx_count(mvm) > 1)
- iwl_mvm_teardown_tdls_peers(mvm);
-
- if (sta->tdls) {
- iwl_mvm_tdls_check_trigger(mvm, vif, sta->addr,
- NL80211_TDLS_ENABLE_LINK);
- } else {
- /* enable beacon filtering */
- WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
-
- mvmvif->authorized = 1;
-
- /*
- * Now that the station is authorized, i.e., keys were already
- * installed, need to indicate to the FW that
- * multicast data frames can be forwarded to the driver
- */
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
- iwl_mvm_mei_host_associated(mvm, vif, mvm_sta);
- }
-
- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
- true);
+ ret = iwl_mvm_sta_state_assoc_to_authorized(mvm, vif, sta,
+ callbacks);
} else if (old_state == IEEE80211_STA_AUTHORIZED &&
new_state == IEEE80211_STA_ASSOC) {
- /* once we move into assoc state, need to update rate scale to
- * disable using wide bandwidth
- */
- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
- false);
- if (!sta->tdls) {
- /* Multicast data frames are no longer allowed */
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
-
- /*
- * Set this after the above iwl_mvm_mac_ctxt_changed()
- * to avoid sending high prio again for a little time.
- */
- mvmvif->authorized = 0;
-
- /* disable beacon filtering */
- ret = iwl_mvm_disable_beacon_filter(mvm, vif, 0);
- WARN_ON(ret &&
- !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
- &mvm->status));
- }
- ret = 0;
+ ret = iwl_mvm_sta_state_authorized_to_assoc(mvm, vif, sta,
+ callbacks);
} else if (old_state == IEEE80211_STA_ASSOC &&
new_state == IEEE80211_STA_AUTH) {
if (vif->type == NL80211_IFTYPE_AP) {
mvmvif->ap_assoc_sta_count--;
- iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
+ callbacks->mac_ctxt_changed(mvm, vif, false);
} else if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
iwl_mvm_stop_session_protection(mvm, vif);
ret = 0;
new_state == IEEE80211_STA_NOTEXIST) {
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
iwl_mvm_stop_session_protection(mvm, vif);
- ret = iwl_mvm_rm_sta(mvm, vif, sta);
+ ret = callbacks->rm_sta(mvm, vif, sta);
if (sta->tdls) {
iwl_mvm_recalc_tdls_state(mvm, vif, false);
iwl_mvm_tdls_check_trigger(mvm, vif, sta->addr,
return ret;
}
-static int iwl_mvm_mac_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
+int iwl_mvm_mac_set_rts_threshold(struct ieee80211_hw *hw, u32 value)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return 0;
}
-static void iwl_mvm_sta_rc_update(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta, u32 changed)
+void iwl_mvm_sta_rc_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, u32 changed)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
if (changed & (IEEE80211_RC_BW_CHANGED |
IEEE80211_RC_SUPP_RATES_CHANGED |
IEEE80211_RC_NSS_CHANGED))
- iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+ iwl_mvm_rs_rate_init(mvm, sta,
+ &vif->bss_conf, &sta->deflink,
+ mvmvif->deflink.phy_ctxt->channel->band,
true);
if (vif->type == NL80211_IFTYPE_STATION &&
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- mvmvif->queue_params[ac] = *params;
+ mvmvif->deflink.queue_params[ac] = *params;
/*
* No need to update right away, we'll get BSS_CHANGED_QOS
return 0;
}
-static void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_prep_tx_info *info)
+void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_unlock(&mvm->mutex);
}
-static void iwl_mvm_mac_mgd_complete_tx(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_prep_tx_info *info)
+void iwl_mvm_mac_mgd_complete_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_unlock(&mvm->mutex);
}
-static int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct cfg80211_sched_scan_request *req,
- struct ieee80211_scan_ies *ies)
+int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_sched_scan_request *req,
+ struct ieee80211_scan_ies *ies)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return ret;
}
-static int iwl_mvm_mac_sched_scan_stop(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+int iwl_mvm_mac_sched_scan_stop(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
switch (cmd) {
case SET_KEY:
- if (keyidx == 6 || keyidx == 7)
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ (keyidx == 6 || keyidx == 7))
rcu_assign_pointer(mvmvif->bcn_prot.keys[keyidx - 6],
key);
* on IBSS they're per-station and because we're lazy
* we don't support them for RX, so do the same.
* CMAC/GMAC in AP/IBSS modes must be done in software.
+ *
+ * Except, of course, beacon protection - it must be
+ * offloaded since we just set a beacon template.
*/
- if (key->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
- key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
- key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256) {
+ if (keyidx < 6 &&
+ (key->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
+ key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
+ key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256)) {
ret = -EOPNOTSUPP;
break;
}
if (mvmsta && key->flags & IEEE80211_KEY_FLAG_PAIRWISE)
mvmsta->pairwise_cipher = key->cipher;
- IWL_DEBUG_MAC80211(mvm, "set hwcrypto key\n");
+ IWL_DEBUG_MAC80211(mvm, "set hwcrypto key (sta:%pM, id:%d)\n",
+ sta ? sta->addr : NULL, key->keyidx);
if (sec_key_ver)
ret = iwl_mvm_sec_key_add(mvm, vif, sta, key);
break;
case DISABLE_KEY:
- if (keyidx == 6 || keyidx == 7)
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ (keyidx == 6 || keyidx == 7))
RCU_INIT_POINTER(mvmvif->bcn_prot.keys[keyidx - 6],
NULL);
return ret;
}
-static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
- enum set_key_cmd cmd,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta,
- struct ieee80211_key_conf *key)
+int iwl_mvm_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
return ret;
}
-static void iwl_mvm_mac_update_tkip_key(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_key_conf *keyconf,
- struct ieee80211_sta *sta,
- u32 iv32, u16 *phase1key)
+void iwl_mvm_mac_update_tkip_key(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_key_conf *keyconf,
+ struct ieee80211_sta *sta,
+ u32 iv32, u16 *phase1key)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return res;
}
+static int iwl_mvm_add_aux_sta_for_hs20(struct iwl_mvm *mvm, u32 lmac_id)
+{
+ int ret = 0;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (!fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_HOTSPOT_SUPPORT)) {
+ IWL_ERR(mvm, "hotspot not supported\n");
+ return -EINVAL;
+ }
+
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, ADD_STA, 0) >= 12) {
+ ret = iwl_mvm_add_aux_sta(mvm, lmac_id);
+ WARN(ret, "Failed to allocate aux station");
+ }
+
+ return ret;
+}
+
+static int iwl_mvm_roc_switch_binding(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct iwl_mvm_phy_ctxt *new_phy_ctxt)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret = 0;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ /* Unbind the P2P_DEVICE from the current PHY context,
+ * and if the PHY context is not used remove it.
+ */
+ ret = iwl_mvm_binding_remove_vif(mvm, vif);
+ if (WARN(ret, "Failed unbinding P2P_DEVICE\n"))
+ return ret;
+
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+
+ /* Bind the P2P_DEVICE to the current PHY Context */
+ mvmvif->deflink.phy_ctxt = new_phy_ctxt;
+
+ ret = iwl_mvm_binding_add_vif(mvm, vif);
+ WARN(ret, "Failed binding P2P_DEVICE\n");
+ return ret;
+}
+
static int iwl_mvm_roc(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_channel *channel,
int duration,
enum ieee80211_roc_type type)
{
+ struct iwl_mvm_roc_ops ops = {
+ .add_aux_sta_for_hs20 = iwl_mvm_add_aux_sta_for_hs20,
+ .switch_phy_ctxt = iwl_mvm_roc_switch_binding,
+ };
+
+ return iwl_mvm_roc_common(hw, vif, channel, duration, type, &ops);
+}
+
+/* Execute the common part for MLD and non-MLD modes */
+int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel *channel, int duration,
+ enum ieee80211_roc_type type,
+ struct iwl_mvm_roc_ops *ops)
+{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct cfg80211_chan_def chandef;
struct iwl_mvm_phy_ctxt *phy_ctxt;
bool band_change_removal;
int ret, i;
+ u32 lmac_id;
IWL_DEBUG_MAC80211(mvm, "enter (%d, %d, %d)\n", channel->hw_value,
duration, type);
switch (vif->type) {
case NL80211_IFTYPE_STATION:
- if (fw_has_capa(&mvm->fw->ucode_capa,
- IWL_UCODE_TLV_CAPA_HOTSPOT_SUPPORT)) {
- /* Use aux roc framework (HS20) */
- if (iwl_fw_lookup_cmd_ver(mvm->fw, ADD_STA, 0) >= 12) {
- u32 lmac_id;
-
- lmac_id = iwl_mvm_get_lmac_id(mvm->fw,
- channel->band);
- ret = iwl_mvm_add_aux_sta(mvm, lmac_id);
- if (WARN(ret,
- "Failed to allocate aux station"))
- goto out_unlock;
- }
+ lmac_id = iwl_mvm_get_lmac_id(mvm->fw, channel->band);
+
+ /* Use aux roc framework (HS20) */
+ ret = ops->add_aux_sta_for_hs20(mvm, lmac_id);
+ if (!ret)
ret = iwl_mvm_send_aux_roc_cmd(mvm, channel,
vif, duration);
- goto out_unlock;
- }
- IWL_ERR(mvm, "hotspot not supported\n");
- ret = -EINVAL;
goto out_unlock;
case NL80211_IFTYPE_P2P_DEVICE:
/* handle below */
for (i = 0; i < NUM_PHY_CTX; i++) {
phy_ctxt = &mvm->phy_ctxts[i];
- if (phy_ctxt->ref == 0 || mvmvif->phy_ctxt == phy_ctxt)
+ if (phy_ctxt->ref == 0 || mvmvif->deflink.phy_ctxt == phy_ctxt)
continue;
if (phy_ctxt->ref && channel == phy_ctxt->channel) {
- /*
- * Unbind the P2P_DEVICE from the current PHY context,
- * and if the PHY context is not used remove it.
- */
- ret = iwl_mvm_binding_remove_vif(mvm, vif);
- if (WARN(ret, "Failed unbinding P2P_DEVICE\n"))
- goto out_unlock;
-
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->phy_ctxt);
-
- /* Bind the P2P_DEVICE to the current PHY Context */
- mvmvif->phy_ctxt = phy_ctxt;
-
- ret = iwl_mvm_binding_add_vif(mvm, vif);
- if (WARN(ret, "Failed binding P2P_DEVICE\n"))
+ ret = ops->switch_phy_ctxt(mvm, vif, phy_ctxt);
+ if (ret)
goto out_unlock;
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->phy_ctxt);
+ iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
goto schedule_time_event;
}
}
/* Need to update the PHY context only if the ROC channel changed */
- if (channel == mvmvif->phy_ctxt->channel)
+ if (channel == mvmvif->deflink.phy_ctxt->channel)
goto schedule_time_event;
cfg80211_chandef_create(&chandef, channel, NL80211_CHAN_NO_HT);
band_change_removal =
fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_BINDING_CDB_SUPPORT) &&
- mvmvif->phy_ctxt->channel->band != chandef.chan->band;
+ mvmvif->deflink.phy_ctxt->channel->band != chandef.chan->band;
- if (mvmvif->phy_ctxt->ref == 1 && !band_change_removal) {
+ if (mvmvif->deflink.phy_ctxt->ref == 1 && !band_change_removal) {
/*
* Change the PHY context configuration as it is currently
* referenced only by the P2P Device MAC (and we can modify it)
*/
- ret = iwl_mvm_phy_ctxt_changed(mvm, mvmvif->phy_ctxt,
+ ret = iwl_mvm_phy_ctxt_changed(mvm, mvmvif->deflink.phy_ctxt,
&chandef, 1, 1);
if (ret)
goto out_unlock;
goto out_unlock;
}
- /* Unbind the P2P_DEVICE from the current PHY context */
- ret = iwl_mvm_binding_remove_vif(mvm, vif);
- if (WARN(ret, "Failed unbinding P2P_DEVICE\n"))
- goto out_unlock;
-
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->phy_ctxt);
-
- /* Bind the P2P_DEVICE to the new allocated PHY context */
- mvmvif->phy_ctxt = phy_ctxt;
-
- ret = iwl_mvm_binding_add_vif(mvm, vif);
- if (WARN(ret, "Failed binding P2P_DEVICE\n"))
+ ret = ops->switch_phy_ctxt(mvm, vif, phy_ctxt);
+ if (ret)
goto out_unlock;
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->phy_ctxt);
+ iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
}
schedule_time_event:
return ret;
}
-static int iwl_mvm_cancel_roc(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+int iwl_mvm_cancel_roc(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return ret;
}
-static int iwl_mvm_add_chanctx(struct ieee80211_hw *hw,
- struct ieee80211_chanctx_conf *ctx)
+int iwl_mvm_add_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
iwl_mvm_phy_ctxt_unref(mvm, phy_ctxt);
}
-static void iwl_mvm_remove_chanctx(struct ieee80211_hw *hw,
- struct ieee80211_chanctx_conf *ctx)
+void iwl_mvm_remove_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_unlock(&mvm->mutex);
}
-static void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
- struct ieee80211_chanctx_conf *ctx,
- u32 changed)
+void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx, u32 changed)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
u16 *phy_ctxt_id = (u16 *)ctx->drv_priv;
mutex_unlock(&mvm->mutex);
}
-static int __iwl_mvm_assign_vif_chanctx(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_chanctx_conf *ctx,
- bool switching_chanctx)
+/*
+ * This function executes the common part for MLD and non-MLD modes.
+ *
+ * Returns true if we're done assigning the chanctx
+ * (either on failure or success)
+ */
+static bool
+__iwl_mvm_assign_vif_chanctx_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx, int *ret)
{
u16 *phy_ctxt_id = (u16 *)ctx->drv_priv;
struct iwl_mvm_phy_ctxt *phy_ctxt = &mvm->phy_ctxts[*phy_ctxt_id];
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret;
lockdep_assert_held(&mvm->mutex);
- mvmvif->phy_ctxt = phy_ctxt;
+ mvmvif->deflink.phy_ctxt = phy_ctxt;
switch (vif->type) {
case NL80211_IFTYPE_AP:
* The AP binding flow is handled as part of the start_ap flow
* (in bss_info_changed), similarly for IBSS.
*/
- ret = 0;
- goto out;
+ *ret = 0;
+ return true;
case NL80211_IFTYPE_STATION:
- mvmvif->csa_bcn_pending = false;
break;
case NL80211_IFTYPE_MONITOR:
/* always disable PS when a monitor interface is active */
mvmvif->ps_disabled = true;
break;
default:
- ret = -EINVAL;
- goto out;
+ *ret = -EINVAL;
+ return true;
}
+ return false;
+}
+
+static int __iwl_mvm_assign_vif_chanctx(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ if (WARN_ON(!link_conf))
+ return -EINVAL;
+
+ if (__iwl_mvm_assign_vif_chanctx_common(mvm, vif, ctx,
+ switching_chanctx, &ret))
+ goto out;
ret = iwl_mvm_binding_add_vif(mvm, vif);
if (ret)
iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
}
- if (switching_chanctx && vif->type == NL80211_IFTYPE_STATION) {
+ if (vif->type == NL80211_IFTYPE_STATION) {
+ if (!switching_chanctx) {
+ mvmvif->csa_bcn_pending = false;
+ goto out;
+ }
+
mvmvif->csa_bcn_pending = true;
if (!fw_has_capa(&mvm->fw->ucode_capa,
iwl_mvm_power_update_mac(mvm);
out:
if (ret)
- mvmvif->phy_ctxt = NULL;
+ mvmvif->deflink.phy_ctxt = NULL;
return ret;
}
+
static int iwl_mvm_assign_vif_chanctx(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *link_conf,
int ret;
mutex_lock(&mvm->mutex);
- ret = __iwl_mvm_assign_vif_chanctx(mvm, vif, ctx, false);
+ ret = __iwl_mvm_assign_vif_chanctx(mvm, vif, link_conf, ctx, false);
mutex_unlock(&mvm->mutex);
return ret;
}
-static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_chanctx_conf *ctx,
- bool switching_chanctx)
+/*
+ * This function executes the common part for MLD and non-MLD modes.
+ *
+ * Returns true if unassigning the chanctx is done
+ * (either on failure or success).
+ */
+static bool __iwl_mvm_unassign_vif_chanctx_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ bool switching_chanctx)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct ieee80211_vif *disabled_vif = NULL;
lockdep_assert_held(&mvm->mutex);
- iwl_mvm_remove_time_event(mvm, mvmvif, &mvmvif->time_event_data);
+ iwl_mvm_remove_time_event(mvm, mvmvif,
+ &mvmvif->time_event_data);
switch (vif->type) {
case NL80211_IFTYPE_ADHOC:
- goto out;
+ return true;
case NL80211_IFTYPE_MONITOR:
mvmvif->monitor_active = false;
mvmvif->ps_disabled = false;
- iwl_mvm_rm_snif_sta(mvm, vif);
break;
case NL80211_IFTYPE_AP:
/* This part is triggered only during CSA */
if (!switching_chanctx || !mvmvif->ap_ibss_active)
- goto out;
+ return true;
mvmvif->csa_countdown = false;
mvmvif->ap_ibss_active = false;
break;
- case NL80211_IFTYPE_STATION:
- if (!switching_chanctx)
- break;
+ default:
+ break;
+ }
+ return false;
+}
- disabled_vif = vif;
+static void __iwl_mvm_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct ieee80211_vif *disabled_vif = NULL;
+
+ if (__iwl_mvm_unassign_vif_chanctx_common(mvm, vif, switching_chanctx))
+ goto out;
+
+ if (vif->type == NL80211_IFTYPE_MONITOR)
+ iwl_mvm_rm_snif_sta(mvm, vif);
+
+ if (vif->type == NL80211_IFTYPE_STATION && switching_chanctx) {
+ disabled_vif = vif;
if (!fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD))
iwl_mvm_mac_ctxt_changed(mvm, vif, true, NULL);
- break;
- default:
- break;
}
iwl_mvm_update_quotas(mvm, false, disabled_vif);
if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) &&
switching_chanctx)
return;
- mvmvif->phy_ctxt = NULL;
+ mvmvif->deflink.phy_ctxt = NULL;
iwl_mvm_power_update_mac(mvm);
}
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_lock(&mvm->mutex);
- __iwl_mvm_unassign_vif_chanctx(mvm, vif, ctx, false);
+ __iwl_mvm_unassign_vif_chanctx(mvm, vif, link_conf, ctx, false);
mutex_unlock(&mvm->mutex);
}
static int
iwl_mvm_switch_vif_chanctx_swap(struct iwl_mvm *mvm,
- struct ieee80211_vif_chanctx_switch *vifs)
+ struct ieee80211_vif_chanctx_switch *vifs,
+ struct iwl_mvm_switch_vif_chanctx_ops *ops)
{
int ret;
mutex_lock(&mvm->mutex);
- __iwl_mvm_unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].old_ctx, true);
+ ops->__unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].old_ctx, true);
__iwl_mvm_remove_chanctx(mvm, vifs[0].old_ctx);
ret = __iwl_mvm_add_chanctx(mvm, vifs[0].new_ctx);
goto out_reassign;
}
- ret = __iwl_mvm_assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].new_ctx,
- true);
+ ret = ops->__assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].new_ctx, true);
if (ret) {
IWL_ERR(mvm,
"failed to assign new_ctx during channel switch\n");
goto out_restart;
}
- if (__iwl_mvm_assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].old_ctx,
- true)) {
+ if (ops->__assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].old_ctx, true)) {
IWL_ERR(mvm, "failed to reassign old_ctx after failure.\n");
goto out_restart;
}
static int
iwl_mvm_switch_vif_chanctx_reassign(struct iwl_mvm *mvm,
- struct ieee80211_vif_chanctx_switch *vifs)
+ struct ieee80211_vif_chanctx_switch *vifs,
+ struct iwl_mvm_switch_vif_chanctx_ops *ops)
{
int ret;
mutex_lock(&mvm->mutex);
- __iwl_mvm_unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].old_ctx, true);
+ ops->__unassign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].old_ctx, true);
- ret = __iwl_mvm_assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].new_ctx,
- true);
+ ret = ops->__assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].new_ctx, true);
if (ret) {
IWL_ERR(mvm,
"failed to assign new_ctx during channel switch\n");
goto out;
out_reassign:
- if (__iwl_mvm_assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].old_ctx,
- true)) {
+ if (ops->__assign_vif_chanctx(mvm, vifs[0].vif, vifs[0].link_conf,
+ vifs[0].old_ctx, true)) {
IWL_ERR(mvm, "failed to reassign old_ctx after failure.\n");
goto out_restart;
}
return ret;
}
-static int iwl_mvm_switch_vif_chanctx(struct ieee80211_hw *hw,
- struct ieee80211_vif_chanctx_switch *vifs,
- int n_vifs,
- enum ieee80211_chanctx_switch_mode mode)
+/* Execute the common part for both MLD and non-MLD modes */
+int
+iwl_mvm_switch_vif_chanctx_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs,
+ enum ieee80211_chanctx_switch_mode mode,
+ struct iwl_mvm_switch_vif_chanctx_ops *ops)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
switch (mode) {
case CHANCTX_SWMODE_SWAP_CONTEXTS:
- ret = iwl_mvm_switch_vif_chanctx_swap(mvm, vifs);
+ ret = iwl_mvm_switch_vif_chanctx_swap(mvm, vifs, ops);
break;
case CHANCTX_SWMODE_REASSIGN_VIF:
- ret = iwl_mvm_switch_vif_chanctx_reassign(mvm, vifs);
+ ret = iwl_mvm_switch_vif_chanctx_reassign(mvm, vifs, ops);
break;
default:
ret = -EOPNOTSUPP;
return ret;
}
-static int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw)
+static int iwl_mvm_switch_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs,
+ enum ieee80211_chanctx_switch_mode mode)
+{
+ struct iwl_mvm_switch_vif_chanctx_ops ops = {
+ .__assign_vif_chanctx = __iwl_mvm_assign_vif_chanctx,
+ .__unassign_vif_chanctx = __iwl_mvm_unassign_vif_chanctx,
+ };
+
+ return iwl_mvm_switch_vif_chanctx_common(hw, vifs, n_vifs, mode, &ops);
+}
+
+int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return mvm->ibss_manager;
}
-static int iwl_mvm_set_tim(struct ieee80211_hw *hw,
- struct ieee80211_sta *sta,
- bool set)
+int iwl_mvm_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ bool set)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
return -EINVAL;
}
- return iwl_mvm_mac_ctxt_beacon_changed(mvm, mvm_sta->vif);
+ return iwl_mvm_mac_ctxt_beacon_changed(mvm, mvm_sta->vif,
+ &mvm_sta->vif->bss_conf);
}
#ifdef CONFIG_NL80211_TESTMODE
return -EOPNOTSUPP;
}
-static int iwl_mvm_mac_testmode_cmd(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- void *data, int len)
+int iwl_mvm_mac_testmode_cmd(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ void *data, int len)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int err;
}
#endif
-static void iwl_mvm_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_channel_switch *chsw)
+void iwl_mvm_channel_switch(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw)
{
/* By implementing this operation, we prevent mac80211 from
* starting its own channel switch timer, so that we can call
}
#define IWL_MAX_CSA_BLOCK_TX 1500
-static int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_channel_switch *chsw)
+int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct ieee80211_vif *csa_vif;
return ret;
}
-static void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_channel_switch *chsw)
+void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
mutex_unlock(&mvm->mutex);
}
-static void iwl_mvm_mac_flush(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif, u32 queues, bool drop)
+void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ u32 queues, bool drop)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif;
continue;
/* make sure only TDLS peers or the AP are flushed */
- WARN_ON(i != mvmvif->ap_sta_id && !sta->tdls);
+ WARN_ON_ONCE(i != mvmvif->deflink.ap_sta_id && !sta->tdls);
if (drop) {
if (iwl_mvm_flush_sta(mvm, mvmsta, false))
IWL_ERR(mvm, "flush request fail\n");
} else {
- msk |= mvmsta->tfd_queue_msk;
if (iwl_mvm_has_new_tx_api(mvm))
iwl_mvm_wait_sta_queues_empty(mvm, mvmsta);
+ else /* only used for !iwl_mvm_has_new_tx_api() below */
+ msk |= mvmsta->tfd_queue_msk;
}
}
iwl_trans_wait_tx_queues_empty(mvm->trans, msk);
}
-static int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
- struct survey_info *survey)
+int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
+ struct survey_info *survey)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
}
}
-static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta,
- struct station_info *sinfo)
+void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- if (mvmsta->avg_energy) {
- sinfo->signal_avg = -(s8)mvmsta->avg_energy;
+ if (mvmsta->deflink.avg_energy) {
+ sinfo->signal_avg = -(s8)mvmsta->deflink.avg_energy;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
}
if (iwl_mvm_has_tlc_offload(mvm)) {
- struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw;
+ struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->deflink.lq_sta.rs_fw;
iwl_mvm_set_sta_rate(lq_sta->last_rate_n_flags, &sinfo->txrate);
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
mutex_lock(&mvm->mutex);
- if (mvmvif->ap_sta_id != mvmsta->sta_id)
+ if (mvmvif->deflink.ap_sta_id != mvmsta->deflink.sta_id)
goto unlock;
if (iwl_mvm_request_statistics(mvm, false))
goto unlock;
- sinfo->rx_beacon = mvmvif->beacon_stats.num_beacons +
- mvmvif->beacon_stats.accu_num_beacons;
+ sinfo->rx_beacon = mvmvif->deflink.beacon_stats.num_beacons +
+ mvmvif->deflink.beacon_stats.accu_num_beacons;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_BEACON_RX);
- if (mvmvif->beacon_stats.avg_signal) {
+ if (mvmvif->deflink.beacon_stats.avg_signal) {
/* firmware only reports a value after RXing a few beacons */
- sinfo->rx_beacon_signal_avg = mvmvif->beacon_stats.avg_signal;
+ sinfo->rx_beacon_signal_avg =
+ mvmvif->deflink.beacon_stats.avg_signal;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_BEACON_SIGNAL_AVG);
}
unlock:
event->u.ba.ssn);
}
-static void iwl_mvm_mac_event_callback(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- const struct ieee80211_event *event)
+void iwl_mvm_mac_event_callback(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ const struct ieee80211_event *event)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
}
}
-static void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw)
+void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_unlock(&mvm->mutex);
}
-static int
+int
iwl_mvm_mac_get_ftm_responder_stats(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_ftm_responder_stats *stats)
return 0;
}
-static int iwl_mvm_start_pmsr(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct cfg80211_pmsr_request *request)
+int iwl_mvm_start_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
return ret;
}
-static void iwl_mvm_abort_pmsr(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct cfg80211_pmsr_request *request)
+void iwl_mvm_abort_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
return iwl_mvm_can_hw_csum(skb) == iwl_mvm_can_hw_csum(head);
}
+int iwl_mvm_set_hw_timestamp(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_set_hw_timestamp *hwts)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ u32 protocols = 0;
+ int ret;
+
+ /* HW timestamping is only supported for a specific station */
+ if (!hwts->macaddr)
+ return -EOPNOTSUPP;
+
+ if (hwts->enable)
+ protocols =
+ IWL_TIME_SYNC_PROTOCOL_TM | IWL_TIME_SYNC_PROTOCOL_FTM;
+
+ mutex_lock(&mvm->mutex);
+ ret = iwl_mvm_time_sync_config(mvm, hwts->macaddr, protocols);
+ mutex_unlock(&mvm->mutex);
+
+ return ret;
+}
+
const struct ieee80211_ops iwl_mvm_hw_ops = {
.tx = iwl_mvm_mac_tx,
.wake_tx_queue = iwl_mvm_mac_wake_tx_queue,
#ifdef CONFIG_IWLWIFI_DEBUGFS
.sta_add_debugfs = iwl_mvm_sta_add_debugfs,
#endif
+ .set_hw_timestamp = iwl_mvm_set_hw_timestamp,
};
if (vif->type == NL80211_IFTYPE_AP &&
!(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE))
- return BIT(mvmvif->mcast_sta.sta_id);
+ return BIT(mvmvif->deflink.mcast_sta.sta_id);
if (sta) {
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- return BIT(mvmsta->sta_id);
+ return BIT(mvmsta->deflink.sta_id);
}
if (vif->type == NL80211_IFTYPE_STATION &&
- mvmvif->ap_sta_id != IWL_MVM_INVALID_STA)
- return BIT(mvmvif->ap_sta_id);
+ mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA)
+ return BIT(mvmvif->deflink.ap_sta_id);
/* invalid */
return 0;
rcu_read_lock();
if (!sta && vif->type == NL80211_IFTYPE_STATION &&
- mvmvif->ap_sta_id != IWL_MVM_INVALID_STA) {
- u8 sta_id = mvmvif->ap_sta_id;
+ mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA) {
+ u8 sta_id = mvmvif->deflink.ap_sta_id;
sta = rcu_dereference_check(mvm->fw_id_to_mac_id[sta_id],
lockdep_is_held(&mvm->mutex));
void *data)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ unsigned int link_id = (uintptr_t)data;
if (key->hw_key_idx == STA_KEY_IDX_INVALID)
return;
if (sta)
return;
+ if (key->link_id >= 0 && key->link_id != link_id)
+ return;
+
_iwl_mvm_sec_key_del(mvm, vif, NULL, key, CMD_ASYNC);
key->hw_key_idx = STA_KEY_IDX_INVALID;
}
void iwl_mvm_sec_key_remove_ap(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif)
+ struct ieee80211_vif *vif,
+ struct iwl_mvm_vif_link_info *link,
+ unsigned int link_id)
{
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u32 sec_key_id = WIDE_ID(DATA_PATH_GROUP, SEC_KEY_CMD);
u8 sec_key_ver = iwl_fw_lookup_cmd_ver(mvm->fw, sec_key_id, 0);
- if (WARN_ON(vif->type != NL80211_IFTYPE_STATION ||
- mvmvif->ap_sta_id == IWL_MVM_INVALID_STA))
+ if (WARN_ON_ONCE(vif->type != NL80211_IFTYPE_STATION ||
+ link->ap_sta_id == IWL_MVM_INVALID_STA))
return;
if (!sec_key_ver)
ieee80211_iter_keys_rcu(mvm->hw, vif,
iwl_mvm_sec_key_remove_ap_iter,
- NULL);
+ (void *)(uintptr_t)link_id);
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2022 Intel Corporation
+ */
+#include "mvm.h"
+
+static void iwl_mvm_mld_mac_ctxt_cmd_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct iwl_mac_config_cmd *cmd,
+ u32 action)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct ieee80211_bss_conf *link_conf;
+ unsigned int link_id;
+
+ cmd->id_and_color = cpu_to_le32(mvmvif->id);
+ cmd->action = cpu_to_le32(action);
+
+ cmd->mac_type = cpu_to_le32(iwl_mvm_get_mac_type(vif));
+
+ memcpy(cmd->local_mld_addr, vif->addr, ETH_ALEN);
+
+ cmd->he_support = 0;
+ cmd->eht_support = 0;
+
+ /* should be set by specific context type handler */
+ cmd->filter_flags = 0;
+
+ cmd->nic_not_ack_enabled =
+ cpu_to_le32(!iwl_mvm_is_nic_ack_enabled(mvm, vif));
+
+ if (iwlwifi_mod_params.disable_11ax)
+ return;
+
+ /* If we have MLO enabled, then the firmware needs to enable
+ * address translation for the station(s) we add. That depends
+ * on having EHT enabled in firmware, which in turn depends on
+ * mac80211 in the code below.
+ * However, mac80211 doesn't enable HE/EHT until it has parsed
+ * the association response successfully, so just skip all that
+ * and enable both when we have MLO.
+ */
+ if (vif->valid_links) {
+ cmd->he_support = cpu_to_le32(1);
+ cmd->eht_support = cpu_to_le32(1);
+ return;
+ }
+
+ rcu_read_lock();
+ for (link_id = 0; link_id < ARRAY_SIZE((vif)->link_conf); link_id++) {
+ link_conf = rcu_dereference(vif->link_conf[link_id]);
+ if (!link_conf)
+ continue;
+
+ if (link_conf->he_support)
+ cmd->he_support = cpu_to_le32(1);
+
+ /* It's not reasonable to have EHT without HE, and the FW API
+ * doesn't support it. Ignore EHT in this case.
+ */
+ if (!link_conf->he_support && link_conf->eht_support)
+ continue;
+
+ if (link_conf->eht_support) {
+ cmd->eht_support = cpu_to_le32(1);
+ break;
+ }
+ }
+ rcu_read_unlock();
+}
+
+static int iwl_mvm_mld_mac_ctxt_send_cmd(struct iwl_mvm *mvm,
+ struct iwl_mac_config_cmd *cmd)
+{
+ int ret = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(MAC_CONF_GROUP, MAC_CONFIG_CMD),
+ 0, sizeof(*cmd), cmd);
+ if (ret)
+ IWL_ERR(mvm, "Failed to send MAC_CONFIG_CMD (action:%d): %d\n",
+ le32_to_cpu(cmd->action), ret);
+ return ret;
+}
+
+static int iwl_mvm_mld_mac_ctxt_cmd_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action, bool force_assoc_off)
+{
+ struct iwl_mac_config_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_STATION);
+
+ /* Fill the common data for all mac context types */
+ iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+
+ /*
+ * We always want to hear MCAST frames; if we're not authorized yet,
+ * we'll drop them.
+ */
+ cmd.filter_flags |= cpu_to_le32(MAC_CFG_FILTER_ACCEPT_GRP);
+
+ if (vif->p2p)
+ cmd.client.ctwin =
+ iwl_mvm_mac_ctxt_cmd_p2p_sta_get_oppps_ctwin(mvm, vif);
+
+ if (vif->cfg.assoc && !force_assoc_off) {
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
+ cmd.client.is_assoc = cpu_to_le32(1);
+
+ if (!mvmvif->authorized &&
+ fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_COEX_HIGH_PRIO))
+ cmd.client.data_policy |=
+ cpu_to_le32(COEX_HIGH_PRIORITY_ENABLE);
+
+ } else {
+ cmd.client.is_assoc = cpu_to_le32(0);
+
+ /* Allow beacons to pass through as long as we are not
+ * associated, or we do not have dtim period information.
+ */
+ cmd.filter_flags |= cpu_to_le32(MAC_CFG_FILTER_ACCEPT_BEACON);
+ }
+
+ cmd.client.assoc_id = cpu_to_le32(vif->cfg.aid);
+
+ if (vif->probe_req_reg && vif->cfg.assoc && vif->p2p)
+ cmd.filter_flags |= cpu_to_le32(MAC_CFG_FILTER_ACCEPT_PROBE_REQ);
+
+ if (vif->bss_conf.he_support && !iwlwifi_mod_params.disable_11ax)
+ cmd.client.data_policy |=
+ iwl_mvm_mac_ctxt_cmd_sta_get_twt_policy(mvm, vif);
+
+ return iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+}
+
+static int iwl_mvm_mld_mac_ctxt_cmd_listener(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action)
+{
+ struct iwl_mac_config_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_MONITOR);
+
+ iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+
+ cmd.filter_flags = cpu_to_le32(MAC_CFG_FILTER_PROMISC |
+ MAC_FILTER_IN_CONTROL_AND_MGMT |
+ MAC_CFG_FILTER_ACCEPT_BEACON |
+ MAC_CFG_FILTER_ACCEPT_PROBE_REQ |
+ MAC_CFG_FILTER_ACCEPT_GRP);
+
+ return iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+}
+
+static int iwl_mvm_mld_mac_ctxt_cmd_ibss(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mac_config_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_ADHOC);
+
+ iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+
+ cmd.filter_flags = cpu_to_le32(MAC_CFG_FILTER_ACCEPT_BEACON |
+ MAC_CFG_FILTER_ACCEPT_PROBE_REQ |
+ MAC_CFG_FILTER_ACCEPT_GRP);
+
+ /* TODO: Assumes that the beacon id == mac context id */
+ cmd.go_ibss.beacon_template = cpu_to_le32(mvmvif->id);
+
+ return iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+}
+
+static int iwl_mvm_mld_mac_ctxt_cmd_p2p_device(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action)
+{
+ struct iwl_mac_config_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_P2P_DEVICE);
+
+ iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+
+ cmd.p2p_dev.is_disc_extended =
+ iwl_mac_ctxt_p2p_dev_has_extended_disc(mvm, vif);
+
+ /* Override the filter flags to accept only probe requests */
+ cmd.filter_flags = cpu_to_le32(MAC_CFG_FILTER_ACCEPT_PROBE_REQ);
+
+ return iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+}
+
+static int iwl_mvm_mld_mac_ctxt_cmd_ap_go(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mac_config_cmd cmd = {};
+
+ WARN_ON(vif->type != NL80211_IFTYPE_AP);
+
+ /* Fill the common data for all mac context types */
+ iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+
+ iwl_mvm_mac_ctxt_cmd_ap_set_filter_flags(mvm, mvmvif,
+ &cmd.filter_flags,
+ MAC_CFG_FILTER_ACCEPT_PROBE_REQ,
+ MAC_CFG_FILTER_ACCEPT_BEACON);
+
+ /* TODO: Assume that the beacon id == mac context id */
+ cmd.go_ibss.beacon_template = cpu_to_le32(mvmvif->id);
+
+ return iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+}
+
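+/* Build and send the MAC config command matching the interface type */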
+static int iwl_mvm_mld_mac_ctx_send(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 action, bool force_assoc_off)
+{
+ switch (vif->type) {
+ case NL80211_IFTYPE_STATION:
+ return iwl_mvm_mld_mac_ctxt_cmd_sta(mvm, vif, action,
+ force_assoc_off);
+ case NL80211_IFTYPE_AP:
+ return iwl_mvm_mld_mac_ctxt_cmd_ap_go(mvm, vif, action);
+ case NL80211_IFTYPE_MONITOR:
+ return iwl_mvm_mld_mac_ctxt_cmd_listener(mvm, vif, action);
+ case NL80211_IFTYPE_P2P_DEVICE:
+ return iwl_mvm_mld_mac_ctxt_cmd_p2p_device(mvm, vif, action);
+ case NL80211_IFTYPE_ADHOC:
+ return iwl_mvm_mld_mac_ctxt_cmd_ibss(mvm, vif, action);
+ default:
+ break;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+int iwl_mvm_mld_mac_ctxt_add(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ if (WARN_ON_ONCE(vif->type == NL80211_IFTYPE_NAN))
+ return -EOPNOTSUPP;
+
+ if (WARN_ONCE(mvmvif->uploaded, "Adding active MAC %pM/%d\n",
+ vif->addr, ieee80211_vif_type_p2p(vif)))
+ return -EIO;
+
+ ret = iwl_mvm_mld_mac_ctx_send(mvm, vif, FW_CTXT_ACTION_ADD,
+ true);
+ if (ret)
+ return ret;
+
+ /* will only do anything at resume from D3 time */
+ iwl_mvm_set_last_nonqos_seq(mvm, vif);
+
+ mvmvif->uploaded = true;
+ return 0;
+}
+
+int iwl_mvm_mld_mac_ctxt_changed(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ bool force_assoc_off)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
+ if (WARN_ON_ONCE(vif->type == NL80211_IFTYPE_NAN))
+ return -EOPNOTSUPP;
+
+ if (WARN_ONCE(!mvmvif->uploaded, "Changing inactive MAC %pM/%d\n",
+ vif->addr, ieee80211_vif_type_p2p(vif)))
+ return -EIO;
+
+ return iwl_mvm_mld_mac_ctx_send(mvm, vif, FW_CTXT_ACTION_MODIFY,
+ force_assoc_off);
+}
+
+int iwl_mvm_mld_mac_ctxt_remove(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mac_config_cmd cmd = {
+ .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),
+ .id_and_color = cpu_to_le32(mvmvif->id),
+ };
+ int ret;
+
+ if (WARN_ON_ONCE(vif->type == NL80211_IFTYPE_NAN))
+ return -EOPNOTSUPP;
+
+ if (WARN_ONCE(!mvmvif->uploaded, "Removing inactive MAC %pM/%d\n",
+ vif->addr, ieee80211_vif_type_p2p(vif)))
+ return -EIO;
+
+ ret = iwl_mvm_mld_mac_ctxt_send_cmd(mvm, &cmd);
+ if (ret)
+ return ret;
+
+ mvmvif->uploaded = false;
+
+ return 0;
+}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2022 Intel Corporation
+ */
+#include "mvm.h"
+
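+/* mac80211 add_interface callback for the MLD API: add the MAC context
+ * to the firmware, set up the default link and per-interface resources
+ * (beacon filtering, power, and a dedicated PHY context for P2P device
+ * interfaces).
+ */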
+static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+
+ mvmvif->mvm = mvm;
+
+ /* Not much to do here. The stack will not allow interface
+ * types or combinations that we didn't advertise, so we
+ * don't really have to check the types.
+ */
+
+ /* make sure that beacon statistics don't go backwards with FW reset */
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ mvmvif->deflink.beacon_stats.accu_num_beacons +=
+ mvmvif->deflink.beacon_stats.num_beacons;
+
+ /* Allocate resources for the MAC context, and add it to the fw */
+ ret = iwl_mvm_mac_ctxt_init(mvm, vif);
+ if (ret)
+ goto out_unlock;
+
+ rcu_assign_pointer(mvm->vif_id_to_mac[mvmvif->id], vif);
+
+ mvmvif->features |= hw->netdev_features;
+
+ /* reset deflink MLO parameters */
+ mvmvif->deflink.fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+ mvmvif->deflink.active = 0;
+ /* the first link always points to the default one */
+ mvmvif->link[0] = &mvmvif->deflink;
+
+ ret = iwl_mvm_mld_mac_ctxt_add(mvm, vif);
+ if (ret)
+ goto out_unlock;
+
+ /* beacon filtering */
+ ret = iwl_mvm_disable_beacon_filter(mvm, vif, 0);
+ if (ret)
+ goto out_remove_mac;
+
+ if (!mvm->bf_allowed_vif &&
+ vif->type == NL80211_IFTYPE_STATION && !vif->p2p) {
+ mvm->bf_allowed_vif = mvmvif;
+ vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER |
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI;
+ }
+
+ /*
+ * P2P_DEVICE interface does not have a channel context assigned to it,
+ * so a dedicated PHY context is allocated to it and the corresponding
+ * MAC context is bound to it at this stage.
+ */
+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ mvmvif->deflink.phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
+ if (!mvmvif->deflink.phy_ctxt) {
+ ret = -ENOSPC;
+ goto out_free_bf;
+ }
+
+ iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
+ ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
+ if (ret)
+ goto out_unref_phy;
+
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE |
+ LINK_CONTEXT_MODIFY_RATES_INFO,
+ true);
+ if (ret)
+ goto out_remove_link;
+
+ ret = iwl_mvm_mld_add_bcast_sta(mvm, vif, &vif->bss_conf);
+ if (ret)
+ goto out_remove_link;
+
+ /* Save a pointer to p2p device vif, so it can later be used to
+ * update the p2p device MAC when a GO is started/stopped
+ */
+ mvm->p2p_device_vif = vif;
+ } else {
+ ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
+ if (ret)
+ goto out_free_bf;
+ }
+
+ ret = iwl_mvm_power_update_mac(mvm);
+ if (ret)
+ goto out_free_bf;
+
+ iwl_mvm_tcm_add_vif(mvm, vif);
+ INIT_DELAYED_WORK(&mvmvif->csa_work,
+ iwl_mvm_channel_switch_disconnect_wk);
+
+ if (vif->type == NL80211_IFTYPE_MONITOR) {
+ mvm->monitor_on = true;
+ ieee80211_hw_set(mvm->hw, RX_INCLUDES_FCS);
+ }
+
+ iwl_mvm_vif_dbgfs_register(mvm, vif);
+
+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+ vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
+ !mvm->csme_vif && mvm->mei_registered) {
+ iwl_mei_set_nic_info(vif->addr, mvm->nvm_data->hw_addr);
+ iwl_mei_set_netdev(ieee80211_vif_to_wdev(vif)->netdev);
+ mvm->csme_vif = vif;
+ }
+
+ goto out_unlock;
+
+ out_remove_link:
+ iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
+ out_unref_phy:
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+ out_free_bf:
+ if (mvm->bf_allowed_vif == mvmvif) {
+ mvm->bf_allowed_vif = NULL;
+ vif->driver_flags &= ~(IEEE80211_VIF_BEACON_FILTER |
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI);
+ }
+ out_remove_mac:
+ mvmvif->deflink.phy_ctxt = NULL;
+ mvmvif->link[0] = NULL;
+ iwl_mvm_mld_mac_ctxt_remove(mvm, vif);
+ out_unlock:
+ mutex_unlock(&mvm->mutex);
+
+ return ret;
+}
+
+static void iwl_mvm_mld_mac_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_probe_resp_data *probe_data;
+
+ iwl_mvm_prepare_mac_removal(mvm, vif);
+
+ if (!(vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC))
+ iwl_mvm_tcm_rm_vif(mvm, vif);
+
+ mutex_lock(&mvm->mutex);
+
+ if (vif == mvm->csme_vif) {
+ iwl_mei_set_netdev(NULL);
+ mvm->csme_vif = NULL;
+ }
+
+ if (mvm->bf_allowed_vif == mvmvif) {
+ mvm->bf_allowed_vif = NULL;
+ vif->driver_flags &= ~(IEEE80211_VIF_BEACON_FILTER |
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI);
+ }
+
+ if (vif->bss_conf.ftm_responder)
+ memset(&mvm->ftm_resp_stats, 0, sizeof(mvm->ftm_resp_stats));
+
+ iwl_mvm_vif_dbgfs_clean(mvm, vif);
+
+	/* For AP/GO interfaces, the tear down of the resources allocated to
+	 * the interface is handled as part of the stop_ap flow.
+ */
+ if (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC) {
+#ifdef CONFIG_NL80211_TESTMODE
+ if (vif == mvm->noa_vif) {
+ mvm->noa_vif = NULL;
+ mvm->noa_duration = 0;
+ }
+#endif
+ }
+
+ iwl_mvm_power_update_mac(mvm);
+
+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ mvm->p2p_device_vif = NULL;
+
+ /* P2P device uses only one link */
+ iwl_mvm_mld_rm_bcast_sta(mvm, vif, &vif->bss_conf);
+ iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+ mvmvif->deflink.phy_ctxt = NULL;
+ } else {
+ iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
+ }
+
+ iwl_mvm_mld_mac_ctxt_remove(mvm, vif);
+
+ RCU_INIT_POINTER(mvm->vif_id_to_mac[mvmvif->id], NULL);
+
+ probe_data = rcu_dereference_protected(mvmvif->deflink.probe_resp_data,
+ lockdep_is_held(&mvm->mutex));
+ RCU_INIT_POINTER(mvmvif->deflink.probe_resp_data, NULL);
+ if (probe_data)
+ kfree_rcu(probe_data, rcu_head);
+
+ if (vif->type == NL80211_IFTYPE_MONITOR) {
+ mvm->monitor_on = false;
+ __clear_bit(IEEE80211_HW_RX_INCLUDES_FCS, mvm->hw->flags);
+ }
+
+ mutex_unlock(&mvm->mutex);
+}
+
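+/* Bind the given link to the PHY context of the channel context: point
+ * the link at the PHY context, then configure and activate it in the
+ * firmware; for monitor interfaces also add the sniffer station.
+ */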
+static int
+__iwl_mvm_mld_assign_vif_chanctx(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx)
+{
+ u16 *phy_ctxt_id = (u16 *)ctx->drv_priv;
+ struct iwl_mvm_phy_ctxt *phy_ctxt = &mvm->phy_ctxts[*phy_ctxt_id];
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id = link_conf->link_id;
+ int ret;
+
+ if (WARN_ON_ONCE(!mvmvif->link[link_id]))
+ return -EINVAL;
+
+	/* MAC parameters such as HE support can change at this stage.
+	 * For station interfaces, the correct state must first be configured
+	 * via drv_sta_state() and only after that can the MAC config be
+	 * updated.
+ */
+ if (vif->type == NL80211_IFTYPE_AP) {
+ ret = iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false);
+ if (ret) {
+ IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
+ return -EINVAL;
+ }
+ }
+
+ mvmvif->link[link_id]->phy_ctxt = phy_ctxt;
+
+ if (switching_chanctx) {
+ /* reactivate if we turned this off during channel switch */
+ if (vif->type == NL80211_IFTYPE_AP)
+ mvmvif->ap_ibss_active = true;
+ }
+
+ /* send it first with phy context ID */
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf, 0, false);
+ if (ret)
+ goto out;
+
+ /* then activate */
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE |
+ LINK_CONTEXT_MODIFY_RATES_INFO,
+ true);
+ if (ret)
+ goto out;
+
+ /*
+ * Power state must be updated before quotas,
+ * otherwise fw will complain.
+ */
+ iwl_mvm_power_update_mac(mvm);
+
+ if (vif->type == NL80211_IFTYPE_MONITOR) {
+ ret = iwl_mvm_mld_add_snif_sta(mvm, vif, link_conf);
+ if (ret)
+ goto deactivate;
+ }
+
+ return 0;
+
+deactivate:
+ iwl_mvm_link_changed(mvm, vif, link_conf, LINK_CONTEXT_MODIFY_ACTIVE,
+ false);
+out:
+ mvmvif->link[link_id]->phy_ctxt = NULL;
+ iwl_mvm_power_update_mac(mvm);
+ return ret;
+}
+
+static int iwl_mvm_mld_assign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+ ret = __iwl_mvm_mld_assign_vif_chanctx(mvm, vif, link_conf, ctx, false);
+ mutex_unlock(&mvm->mutex);
+
+ return ret;
+}
+
+static void
+__iwl_mvm_mld_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ unsigned int link_id = link_conf->link_id;
+
+ /* shouldn't happen, but verify link_id is valid before accessing */
+ if (WARN_ON_ONCE(!mvmvif->link[link_id]))
+ return;
+
+ if (vif->type == NL80211_IFTYPE_AP && switching_chanctx) {
+ mvmvif->csa_countdown = false;
+
+ /* Set CS bit on all the stations */
+ iwl_mvm_modify_all_sta_disable_tx(mvm, mvmvif, true);
+
+ /* Save blocked iface, the timeout is set on the next beacon */
+ rcu_assign_pointer(mvm->csa_tx_blocked_vif, vif);
+
+ mvmvif->ap_ibss_active = false;
+ }
+
+ if (vif->type == NL80211_IFTYPE_MONITOR)
+ iwl_mvm_mld_rm_snif_sta(mvm, vif);
+
+ iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE, false);
+
+ if (switching_chanctx)
+ return;
+ mvmvif->link[link_id]->phy_ctxt = NULL;
+ iwl_mvm_power_update_mac(mvm);
+}
+
+static void iwl_mvm_mld_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ mutex_lock(&mvm->mutex);
+ __iwl_mvm_mld_unassign_vif_chanctx(mvm, vif, link_conf, ctx, false);
+ mutex_unlock(&mvm->mutex);
+}
+
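+/* Start AP/IBSS operation: send the beacon template, update the link
+ * parameters and add the multicast and broadcast stations.
+ */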
+static int iwl_mvm_mld_start_ap_ibss(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+ /* Send the beacon template */
+ ret = iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, link_conf);
+ if (ret)
+ goto out_unlock;
+
+	/* the link should already be activated when assigning the chan context */
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf,
+ LINK_CONTEXT_MODIFY_ALL &
+ ~LINK_CONTEXT_MODIFY_ACTIVE,
+ true);
+ if (ret)
+ goto out_unlock;
+
+ ret = iwl_mvm_mld_add_mcast_sta(mvm, vif, link_conf);
+ if (ret)
+ goto out_unlock;
+
+ /* Send the bcast station. At this stage the TBTT and DTIM time
+ * events are added and applied to the scheduler
+ */
+ ret = iwl_mvm_mld_add_bcast_sta(mvm, vif, link_conf);
+ if (ret)
+ goto out_rm_mcast;
+
+ if (iwl_mvm_start_ap_ibss_common(hw, vif, &ret))
+ goto out_failed;
+
+ /* Need to update the P2P Device MAC (only GO, IBSS is single vif) */
+ if (vif->p2p && mvm->p2p_device_vif)
+ iwl_mvm_mld_mac_ctxt_changed(mvm, mvm->p2p_device_vif, false);
+
+ iwl_mvm_bt_coex_vif_change(mvm);
+
+ /* we don't support TDLS during DCM */
+ if (iwl_mvm_phy_ctx_count(mvm) > 1)
+ iwl_mvm_teardown_tdls_peers(mvm);
+
+ iwl_mvm_ftm_restart_responder(mvm, vif);
+
+ goto out_unlock;
+
+out_failed:
+ iwl_mvm_power_update_mac(mvm);
+ mvmvif->ap_ibss_active = false;
+ iwl_mvm_mld_rm_bcast_sta(mvm, vif, link_conf);
+out_rm_mcast:
+ iwl_mvm_mld_rm_mcast_sta(mvm, vif, link_conf);
+out_unlock:
+ mutex_unlock(&mvm->mutex);
+ return ret;
+}
+
+static int iwl_mvm_mld_start_ap(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ return iwl_mvm_mld_start_ap_ibss(hw, vif, link_conf);
+}
+
+static int iwl_mvm_mld_start_ibss(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ return iwl_mvm_mld_start_ap_ibss(hw, vif, &vif->bss_conf);
+}
+
+static void iwl_mvm_mld_stop_ap_ibss(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ mutex_lock(&mvm->mutex);
+
+ iwl_mvm_stop_ap_ibss_common(mvm, vif);
+
+ /* Need to update the P2P Device MAC (only GO, IBSS is single vif) */
+ if (vif->p2p && mvm->p2p_device_vif)
+ iwl_mvm_mld_mac_ctxt_changed(mvm, mvm->p2p_device_vif, false);
+
+ iwl_mvm_ftm_responder_clear(mvm, vif);
+
+ iwl_mvm_mld_rm_bcast_sta(mvm, vif, link_conf);
+ iwl_mvm_mld_rm_mcast_sta(mvm, vif, link_conf);
+
+ iwl_mvm_power_update_mac(mvm);
+ mutex_unlock(&mvm->mutex);
+}
+
+static void iwl_mvm_mld_stop_ap(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ iwl_mvm_mld_stop_ap_ibss(hw, vif, link_conf);
+}
+
+static void iwl_mvm_mld_stop_ibss(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ iwl_mvm_mld_stop_ap_ibss(hw, vif, &vif->bss_conf);
+}
+
+static int iwl_mvm_mld_mac_sta_state(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state,
+ enum ieee80211_sta_state new_state)
+{
+ struct iwl_mvm_sta_state_ops callbacks = {
+ .add_sta = iwl_mvm_mld_add_sta,
+ .update_sta = iwl_mvm_mld_update_sta,
+ .rm_sta = iwl_mvm_mld_rm_sta,
+ .mac_ctxt_changed = iwl_mvm_mld_mac_ctxt_changed,
+ };
+
+ return iwl_mvm_mac_sta_state_common(hw, vif, sta, old_state, new_state,
+ &callbacks);
+}
+
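+/* Handle per-link BSS info changes for a station interface: update the
+ * link's QoS, rate, HE and EHT puncturing parameters and refresh the MAC
+ * context.
+ */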
+static void
+iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ bool has_he, has_eht;
+ u32 link_changes = 0;
+ int ret;
+
+ if (WARN_ON_ONCE(!mvmvif->link[link_conf->link_id]))
+ return;
+
+ has_he = link_conf->he_support && !iwlwifi_mod_params.disable_11ax;
+ has_eht = link_conf->eht_support && !iwlwifi_mod_params.disable_11be;
+
+ /* Update EDCA params */
+ if (changes & BSS_CHANGED_QOS && vif->cfg.assoc && link_conf->qos)
+ link_changes |= LINK_CONTEXT_MODIFY_QOS_PARAMS;
+
+ if (changes & BSS_CHANGED_ERP_SLOT)
+ link_changes |= LINK_CONTEXT_MODIFY_RATES_INFO;
+
+ if (vif->cfg.assoc && (has_he || has_eht)) {
+ IWL_DEBUG_MAC80211(mvm, "Associated in HE mode\n");
+ link_changes |= LINK_CONTEXT_MODIFY_HE_PARAMS;
+ }
+
+ /* Update EHT Puncturing info */
+ if (changes & BSS_CHANGED_EHT_PUNCTURING && vif->cfg.assoc && has_eht)
+ link_changes |= LINK_CONTEXT_MODIFY_EHT_PARAMS;
+
+ if (link_changes) {
+ ret = iwl_mvm_link_changed(mvm, vif, link_conf, link_changes,
+ true);
+ if (ret)
+ IWL_ERR(mvm, "failed to update link\n");
+ }
+
+ ret = iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false);
+ if (ret)
+ IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
+
+ memcpy(mvmvif->link[link_conf->link_id]->bssid, link_conf->bssid,
+ ETH_ALEN);
+
+ iwl_mvm_bss_info_changed_station_common(mvm, vif, link_conf, changes);
+}
+
+static bool iwl_mvm_mld_vif_have_valid_ap_sta(struct iwl_mvm_vif *mvmvif)
+{
+ int i;
+
+ for_each_mvm_vif_valid_link(mvmvif, i) {
+ if (mvmvif->link[i]->ap_sta_id != IWL_MVM_INVALID_STA)
+ return true;
+ }
+
+ return false;
+}
+
+static void iwl_mvm_mld_vif_delete_all_stas(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int i, ret;
+
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ return;
+
+ for_each_mvm_vif_valid_link(mvmvif, i) {
+ struct iwl_mvm_vif_link_info *link = mvmvif->link[i];
+
+ if (!link)
+ continue;
+
+ iwl_mvm_sec_key_remove_ap(mvm, vif, link, i);
+ ret = iwl_mvm_mld_rm_sta_id(mvm, link->ap_sta_id);
+ if (ret)
+ IWL_ERR(mvm, "failed to remove AP station\n");
+
+ link->ap_sta_id = IWL_MVM_INVALID_STA;
+ }
+}
+
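+/* Handle interface-wide configuration changes for a station interface,
+ * mainly transitions in and out of the associated state.
+ */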
+static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u64 changes)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct ieee80211_bss_conf *link_conf;
+ bool protect = false;
+ unsigned int i;
+ int ret;
+
+ /* This might get called without active links during the
+ * chanctx switch, but we don't care about it anyway.
+ */
+ if (changes == BSS_CHANGED_IDLE)
+ return;
+
+ ret = iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false);
+ if (ret)
+ IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
+
+ mvmvif->associated = vif->cfg.assoc;
+
+ if (!(changes & BSS_CHANGED_ASSOC))
+ return;
+
+ if (vif->cfg.assoc) {
+ /* clear statistics to get clean beacon counter */
+ iwl_mvm_request_statistics(mvm, true);
+ iwl_mvm_sf_update(mvm, vif, false);
+ iwl_mvm_power_vif_assoc(mvm, vif);
+
+ for_each_mvm_vif_valid_link(mvmvif, i) {
+ memset(&mvmvif->link[i]->beacon_stats, 0,
+ sizeof(mvmvif->link[i]->beacon_stats));
+
+ if (vif->p2p) {
+ iwl_mvm_update_smps(mvm, vif,
+ IWL_MVM_SMPS_REQ_PROT,
+ IEEE80211_SMPS_DYNAMIC, i);
+ }
+
+ rcu_read_lock();
+ link_conf = rcu_dereference(vif->link_conf[i]);
+ if (link_conf && !link_conf->dtim_period)
+ protect = true;
+ rcu_read_unlock();
+ }
+
+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+ protect) {
+ /* If we're not restarting and still haven't
+ * heard a beacon (dtim period unknown) then
+ * make sure we still have enough minimum time
+ * remaining in the time event, since the auth
+ * might actually have taken quite a while
+ * (especially for SAE) and so the remaining
+ * time could be small without us having heard
+ * a beacon yet.
+ */
+ iwl_mvm_protect_assoc(mvm, vif, 0);
+ }
+
+ iwl_mvm_sf_update(mvm, vif, false);
+
+ /* FIXME: need to decide about misbehaving AP handling */
+ iwl_mvm_power_vif_assoc(mvm, vif);
+ } else if (iwl_mvm_mld_vif_have_valid_ap_sta(mvmvif)) {
+ iwl_mvm_mei_host_disassociated(mvm);
+
+ /* If update fails - SF might be running in associated
+ * mode while disassociated - which is forbidden.
+ */
+ ret = iwl_mvm_sf_update(mvm, vif, false);
+ WARN_ONCE(ret &&
+ !test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+ &mvm->status),
+ "Failed to update SF upon disassociation\n");
+
+ /* If we get an assert during the connection (after the
+ * station has been added, but before the vif is set
+ * to associated), mac80211 will re-add the station and
+ * then configure the vif. Since the vif is not
+ * associated, we would remove the station here and
+ * this would fail the recovery.
+ */
+ iwl_mvm_mld_vif_delete_all_stas(mvm, vif);
+ }
+
+ iwl_mvm_bss_info_changed_station_assoc(mvm, vif, changes);
+}
+
+static void
+iwl_mvm_mld_link_info_changed_ap_ibss(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ u32 link_changes = LINK_CONTEXT_MODIFY_PROTECT_FLAGS |
+ LINK_CONTEXT_MODIFY_QOS_PARAMS;
+
+ /* Changes will be applied when the AP/IBSS is started */
+ if (!mvmvif->ap_ibss_active)
+ return;
+
+ if (link_conf->he_support)
+ link_changes |= LINK_CONTEXT_MODIFY_HE_PARAMS;
+
+ if (changes & (BSS_CHANGED_ERP_CTS_PROT | BSS_CHANGED_HT |
+ BSS_CHANGED_BANDWIDTH | BSS_CHANGED_QOS |
+ BSS_CHANGED_HE_BSS_COLOR) &&
+ iwl_mvm_link_changed(mvm, vif, link_conf,
+ link_changes, true))
+ IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
+
+ /* Need to send a new beacon template to the FW */
+ if (changes & BSS_CHANGED_BEACON &&
+ iwl_mvm_mac_ctxt_beacon_changed(mvm, vif, link_conf))
+ IWL_WARN(mvm, "Failed updating beacon data\n");
+
+ /* FIXME: need to decide if we need FTM responder per link */
+ if (changes & BSS_CHANGED_FTM_RESPONDER) {
+ int ret = iwl_mvm_ftm_start_responder(mvm, vif);
+
+ if (ret)
+ IWL_WARN(mvm, "Failed to enable FTM responder (%d)\n",
+ ret);
+ }
+}
+
+static void iwl_mvm_mld_link_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ mutex_lock(&mvm->mutex);
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_STATION:
+ iwl_mvm_mld_link_info_changed_station(mvm, vif, link_conf,
+ changes);
+ break;
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_ADHOC:
+ iwl_mvm_mld_link_info_changed_ap_ibss(mvm, vif, link_conf,
+ changes);
+ break;
+ case NL80211_IFTYPE_MONITOR:
+ if (changes & BSS_CHANGED_MU_GROUPS)
+ iwl_mvm_update_mu_groups(mvm, vif);
+ break;
+ default:
+ /* shouldn't happen */
+ WARN_ON_ONCE(1);
+ }
+
+ if (changes & BSS_CHANGED_TXPOWER) {
+ IWL_DEBUG_CALIB(mvm, "Changing TX Power to %d dBm\n",
+ link_conf->txpower);
+ iwl_mvm_set_tx_power(mvm, vif, link_conf->txpower);
+ }
+
+ mutex_unlock(&mvm->mutex);
+}
+
+static void iwl_mvm_mld_vif_cfg_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ u64 changes)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ mutex_lock(&mvm->mutex);
+
+ if (changes & BSS_CHANGED_IDLE && !vif->cfg.idle)
+ iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED, true);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ iwl_mvm_mld_vif_cfg_changed_station(mvm, vif, changes);
+
+ mutex_unlock(&mvm->mutex);
+}
+
+static int
+iwl_mvm_mld_switch_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs,
+ enum ieee80211_chanctx_switch_mode mode)
+{
+ struct iwl_mvm_switch_vif_chanctx_ops ops = {
+ .__assign_vif_chanctx = __iwl_mvm_mld_assign_vif_chanctx,
+ .__unassign_vif_chanctx = __iwl_mvm_mld_unassign_vif_chanctx,
+ };
+
+ return iwl_mvm_switch_vif_chanctx_common(hw, vifs, n_vifs, mode, &ops);
+}
+
+static void iwl_mvm_mld_config_iface_filter(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int filter_flags,
+ unsigned int changed_flags)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+	/* We only support filtering of probe requests */
+ if (!(changed_flags & FIF_PROBE_REQ))
+ return;
+
+ /* Supported only for p2p client interfaces */
+ if (vif->type != NL80211_IFTYPE_STATION || !vif->cfg.assoc ||
+ !vif->p2p)
+ return;
+
+ mutex_lock(&mvm->mutex);
+ iwl_mvm_mld_mac_ctxt_changed(mvm, vif, false);
+ mutex_unlock(&mvm->mutex);
+}
+
+static int
+iwl_mvm_mld_mac_conf_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int link_id, u16 ac,
+ const struct ieee80211_tx_queue_params *params)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+
+ mvmvif->deflink.queue_params[ac] = *params;
+
+	/* No need to update right away, we'll get BSS_CHANGED_QOS.
+	 * The exception is the P2P_DEVICE interface, which needs an
+	 * immediate update.
+ */
+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_QOS_PARAMS,
+ true);
+ mutex_unlock(&mvm->mutex);
+ return ret;
+ }
+ return 0;
+}
+
+static int iwl_mvm_link_switch_phy_ctx(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct iwl_mvm_phy_ctxt *new_phy_ctxt)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ret = 0;
+
+ lockdep_assert_held(&mvm->mutex);
+
+	/* In order to change the phy_ctxt of a link, the link needs to be
+	 * inactive. Therefore, first deactivate the link, then change its
+	 * phy_ctxt, and then activate it again.
+ */
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE, false);
+ if (WARN(ret, "Failed to deactivate link\n"))
+ return ret;
+
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+
+ mvmvif->deflink.phy_ctxt = new_phy_ctxt;
+
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf, 0, false);
+	if (WARN(ret, "Failed to update link\n"))
+ return ret;
+
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE, true);
+ WARN(ret, "Failed binding P2P_DEVICE\n");
+ return ret;
+}
+
+static int iwl_mvm_mld_roc(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel *channel, int duration,
+ enum ieee80211_roc_type type)
+{
+ struct iwl_mvm_roc_ops ops = {
+ .add_aux_sta_for_hs20 = iwl_mvm_mld_add_aux_sta,
+ .switch_phy_ctxt = iwl_mvm_link_switch_phy_ctx,
+ };
+
+ return iwl_mvm_roc_common(hw, vif, channel, duration, type, &ops);
+}
+
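+/* Update the set of links of an interface: allocate driver data for added
+ * links, add/remove the corresponding FW link contexts, and fall back to
+ * the default link when no links remain.
+ */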
+static int
+iwl_mvm_mld_change_vif_links(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ u16 old_links, u16 new_links,
+ struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS])
+{
+ struct iwl_mvm_vif_link_info *new_link[IEEE80211_MLD_MAX_NUM_LINKS] = {};
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ u16 removed = old_links & ~new_links;
+ u16 added = new_links & ~old_links;
+ int err, i;
+
+ if (hweight16(new_links) > 2) {
+ return -EOPNOTSUPP;
+ } else if (hweight16(new_links) > 1) {
+ unsigned int n_active = 0;
+
+ for (i = 0; i < IEEE80211_MLD_MAX_NUM_LINKS; i++) {
+ struct ieee80211_bss_conf *link_conf;
+
+ link_conf = link_conf_dereference_protected(vif, i);
+ if (link_conf &&
+ rcu_access_pointer(link_conf->chanctx_conf))
+ n_active++;
+ }
+
+ if (n_active > 1)
+ return -EOPNOTSUPP;
+ }
+
+ for (i = 0; i < IEEE80211_MLD_MAX_NUM_LINKS; i++) {
+ int r;
+
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ break;
+
+ if (!(added & BIT(i)))
+ continue;
+ new_link[i] = kzalloc(sizeof(*new_link[i]), GFP_KERNEL);
+ if (!new_link[i]) {
+ err = -ENOMEM;
+ goto free;
+ }
+
+ new_link[i]->bcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ new_link[i]->mcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ new_link[i]->ap_sta_id = IWL_MVM_INVALID_STA;
+ new_link[i]->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
+
+ for (r = 0; r < NUM_IWL_MVM_SMPS_REQ; r++)
+ new_link[i]->smps_requests[r] =
+ IEEE80211_SMPS_AUTOMATIC;
+ }
+
+ mutex_lock(&mvm->mutex);
+
+ if (old_links == 0) {
+ err = iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
+ if (err)
+ goto out_err;
+ mvmvif->link[0] = NULL;
+ }
+
+ for (i = 0; i < IEEE80211_MLD_MAX_NUM_LINKS; i++) {
+ if (removed & BIT(i)) {
+ struct ieee80211_bss_conf *link_conf = old[i];
+
+ err = iwl_mvm_disable_link(mvm, vif, link_conf);
+ if (err)
+ goto out_err;
+ kfree(mvmvif->link[i]);
+ mvmvif->link[i] = NULL;
+ }
+
+ if (added & BIT(i)) {
+ struct ieee80211_bss_conf *link_conf;
+
+ link_conf = link_conf_dereference_protected(vif, i);
+ if (WARN_ON(!link_conf))
+ continue;
+
+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART,
+ &mvm->status))
+ mvmvif->link[i] = new_link[i];
+ new_link[i] = NULL;
+ err = iwl_mvm_add_link(mvm, vif, link_conf);
+ if (err)
+ goto out_err;
+ }
+ }
+
+ err = 0;
+ if (new_links == 0) {
+ mvmvif->link[0] = &mvmvif->deflink;
+ err = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
+ }
+
+out_err:
+ /* we really don't have a good way to roll back here ... */
+ mutex_unlock(&mvm->mutex);
+
+free:
+ for (i = 0; i < IEEE80211_MLD_MAX_NUM_LINKS; i++)
+ kfree(new_link[i]);
+ return err;
+}
+
+static int
+iwl_mvm_mld_change_sta_links(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u16 old_links, u16 new_links)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ int ret;
+
+ mutex_lock(&mvm->mutex);
+ ret = iwl_mvm_mld_update_sta_links(mvm, vif, sta, old_links, new_links);
+ mutex_unlock(&mvm->mutex);
+
+ return ret;
+}
+
+const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
+ .tx = iwl_mvm_mac_tx,
+ .wake_tx_queue = iwl_mvm_mac_wake_tx_queue,
+ .ampdu_action = iwl_mvm_mac_ampdu_action,
+ .get_antenna = iwl_mvm_op_get_antenna,
+ .start = iwl_mvm_mac_start,
+ .reconfig_complete = iwl_mvm_mac_reconfig_complete,
+ .stop = iwl_mvm_mac_stop,
+ .add_interface = iwl_mvm_mld_mac_add_interface,
+ .remove_interface = iwl_mvm_mld_mac_remove_interface,
+ .config = iwl_mvm_mac_config,
+ .prepare_multicast = iwl_mvm_prepare_multicast,
+ .configure_filter = iwl_mvm_configure_filter,
+ .config_iface_filter = iwl_mvm_mld_config_iface_filter,
+ .link_info_changed = iwl_mvm_mld_link_info_changed,
+ .vif_cfg_changed = iwl_mvm_mld_vif_cfg_changed,
+ .hw_scan = iwl_mvm_mac_hw_scan,
+ .cancel_hw_scan = iwl_mvm_mac_cancel_hw_scan,
+ .sta_pre_rcu_remove = iwl_mvm_sta_pre_rcu_remove,
+ .sta_state = iwl_mvm_mld_mac_sta_state,
+ .sta_notify = iwl_mvm_mac_sta_notify,
+ .allow_buffered_frames = iwl_mvm_mac_allow_buffered_frames,
+ .release_buffered_frames = iwl_mvm_mac_release_buffered_frames,
+ .set_rts_threshold = iwl_mvm_mac_set_rts_threshold,
+ .sta_rc_update = iwl_mvm_sta_rc_update,
+ .conf_tx = iwl_mvm_mld_mac_conf_tx,
+ .mgd_prepare_tx = iwl_mvm_mac_mgd_prepare_tx,
+ .mgd_complete_tx = iwl_mvm_mac_mgd_complete_tx,
+ .mgd_protect_tdls_discover = iwl_mvm_mac_mgd_protect_tdls_discover,
+ .flush = iwl_mvm_mac_flush,
+ .sched_scan_start = iwl_mvm_mac_sched_scan_start,
+ .sched_scan_stop = iwl_mvm_mac_sched_scan_stop,
+ .set_key = iwl_mvm_mac_set_key,
+ .update_tkip_key = iwl_mvm_mac_update_tkip_key,
+ .remain_on_channel = iwl_mvm_mld_roc,
+ .cancel_remain_on_channel = iwl_mvm_cancel_roc,
+ .add_chanctx = iwl_mvm_add_chanctx,
+ .remove_chanctx = iwl_mvm_remove_chanctx,
+ .change_chanctx = iwl_mvm_change_chanctx,
+ .assign_vif_chanctx = iwl_mvm_mld_assign_vif_chanctx,
+ .unassign_vif_chanctx = iwl_mvm_mld_unassign_vif_chanctx,
+ .switch_vif_chanctx = iwl_mvm_mld_switch_vif_chanctx,
+
+ .start_ap = iwl_mvm_mld_start_ap,
+ .stop_ap = iwl_mvm_mld_stop_ap,
+ .join_ibss = iwl_mvm_mld_start_ibss,
+ .leave_ibss = iwl_mvm_mld_stop_ibss,
+
+ .tx_last_beacon = iwl_mvm_tx_last_beacon,
+
+ .set_tim = iwl_mvm_set_tim,
+
+ .channel_switch = iwl_mvm_channel_switch,
+ .pre_channel_switch = iwl_mvm_pre_channel_switch,
+ .post_channel_switch = iwl_mvm_post_channel_switch,
+ .abort_channel_switch = iwl_mvm_abort_channel_switch,
+ .channel_switch_rx_beacon = iwl_mvm_channel_switch_rx_beacon,
+
+ .tdls_channel_switch = iwl_mvm_tdls_channel_switch,
+ .tdls_cancel_channel_switch = iwl_mvm_tdls_cancel_channel_switch,
+ .tdls_recv_channel_switch = iwl_mvm_tdls_recv_channel_switch,
+
+ .event_callback = iwl_mvm_mac_event_callback,
+
+ .sync_rx_queues = iwl_mvm_sync_rx_queues,
+
+ CFG80211_TESTMODE_CMD(iwl_mvm_mac_testmode_cmd)
+
+#ifdef CONFIG_PM_SLEEP
+ /* look at d3.c */
+ .suspend = iwl_mvm_suspend,
+ .resume = iwl_mvm_resume,
+ .set_wakeup = iwl_mvm_set_wakeup,
+ .set_rekey_data = iwl_mvm_set_rekey_data,
+#if IS_ENABLED(CONFIG_IPV6)
+ .ipv6_addr_change = iwl_mvm_ipv6_addr_change,
+#endif
+ .set_default_unicast_key = iwl_mvm_set_default_unicast_key,
+#endif
+ .get_survey = iwl_mvm_mac_get_survey,
+ .sta_statistics = iwl_mvm_mac_sta_statistics,
+ .get_ftm_responder_stats = iwl_mvm_mac_get_ftm_responder_stats,
+ .start_pmsr = iwl_mvm_start_pmsr,
+ .abort_pmsr = iwl_mvm_abort_pmsr,
+
+#ifdef CONFIG_IWLWIFI_DEBUGFS
+ .sta_add_debugfs = iwl_mvm_sta_add_debugfs,
+#endif
+ .set_hw_timestamp = iwl_mvm_set_hw_timestamp,
+
+ .change_vif_links = iwl_mvm_mld_change_vif_links,
+ .change_sta_links = iwl_mvm_mld_change_sta_links,
+};
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2022 Intel Corporation
+ */
+#include "mvm.h"
+#include "time-sync.h"
+
+static int iwl_mvm_mld_send_sta_cmd(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta_cfg_cmd *cmd)
+{
+ int ret = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(MAC_CONF_GROUP, STA_CONFIG_CMD),
+ 0, sizeof(*cmd), cmd);
+ if (ret)
+ IWL_ERR(mvm, "STA_CONFIG_CMD send failed, ret=0x%x\n", ret);
+ return ret;
+}
+
+/*
+ * Add an internal station to the FW table
+ */
+static int iwl_mvm_mld_add_int_sta_to_fw(struct iwl_mvm *mvm,
+ struct iwl_mvm_int_sta *sta,
+ const u8 *addr, int link_id)
+{
+ struct iwl_mvm_sta_cfg_cmd cmd;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.sta_id = cpu_to_le32((u8)sta->sta_id);
+
+ cmd.link_id = cpu_to_le32(link_id);
+
+ cmd.station_type = cpu_to_le32(sta->type);
+
+ if (addr) {
+ memcpy(cmd.peer_mld_address, addr, ETH_ALEN);
+ memcpy(cmd.peer_link_address, addr, ETH_ALEN);
+ }
+
+ return iwl_mvm_mld_send_sta_cmd(mvm, &cmd);
+}
+
+/*
+ * Remove a station from the FW table. Before sending the command to remove
+ * the station, validate that the station is indeed known to the driver (sanity
+ * only).
+ */
+static int iwl_mvm_mld_rm_sta_from_fw(struct iwl_mvm *mvm, u32 sta_id)
+{
+ struct iwl_mvm_remove_sta_cmd rm_sta_cmd = {
+ .sta_id = cpu_to_le32(sta_id),
+ };
+ int ret;
+
+ /* Note: internal stations are marked as error values */
+ if (!rcu_access_pointer(mvm->fw_id_to_mac_id[sta_id])) {
+ IWL_ERR(mvm, "Invalid station id %d\n", sta_id);
+ return -EINVAL;
+ }
+
+ ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, STA_REMOVE_CMD),
+ 0, sizeof(rm_sta_cmd), &rm_sta_cmd);
+ if (ret) {
+ IWL_ERR(mvm, "Failed to remove station. Id=%d\n", sta_id);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int iwl_mvm_add_aux_sta_to_fw(struct iwl_mvm *mvm,
+ struct iwl_mvm_int_sta *sta,
+ u32 lmac_id)
+{
+ int ret;
+
+ struct iwl_mvm_aux_sta_cmd cmd = {
+ .sta_id = cpu_to_le32(sta->sta_id),
+ .lmac_id = cpu_to_le32(lmac_id),
+ };
+
+ ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, AUX_STA_CMD),
+ 0, sizeof(cmd), &cmd);
+ if (ret)
+ IWL_ERR(mvm, "Failed to send AUX_STA_CMD\n");
+ return ret;
+}
+
+/*
+ * Adds an internal sta to the FW table with its queues
+ */
+static int iwl_mvm_mld_add_int_sta_with_queue(struct iwl_mvm *mvm,
+ struct iwl_mvm_int_sta *sta,
+ const u8 *addr, int link_id,
+ u16 *queue, u8 tid,
+ unsigned int *_wdg_timeout)
+{
+ int ret, txq;
+ unsigned int wdg_timeout = _wdg_timeout ? *_wdg_timeout :
+ mvm->trans->trans_cfg->base_params->wd_timeout;
+
+ if (WARN_ON_ONCE(sta->sta_id == IWL_MVM_INVALID_STA))
+ return -ENOSPC;
+
+ if (sta->type == STATION_TYPE_AUX)
+ ret = iwl_mvm_add_aux_sta_to_fw(mvm, sta, link_id);
+ else
+ ret = iwl_mvm_mld_add_int_sta_to_fw(mvm, sta, addr, link_id);
+ if (ret)
+ return ret;
+
+ /*
+	 * For 22000 firmware and onward we cannot add a queue to a station
+	 * unknown to the firmware, so enable the queue here, after the
+	 * station was added
+ */
+ txq = iwl_mvm_tvqm_enable_txq(mvm, NULL, sta->sta_id, tid,
+ wdg_timeout);
+ if (txq < 0) {
+ iwl_mvm_mld_rm_sta_from_fw(mvm, sta->sta_id);
+ return txq;
+ }
+ *queue = txq;
+
+ return 0;
+}
+
+/*
+ * Adds a new int sta: allocate it in the driver, add it to the FW table,
+ * and add its queues.
+ */
+static int iwl_mvm_mld_add_int_sta(struct iwl_mvm *mvm,
+ struct iwl_mvm_int_sta *int_sta, u16 *queue,
+ enum nl80211_iftype iftype,
+ enum iwl_fw_sta_type sta_type,
+ int link_id, const u8 *addr, u8 tid,
+ unsigned int *wdg_timeout)
+{
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ /* qmask argument is not used in the new tx api, send a don't care */
+ ret = iwl_mvm_allocate_int_sta(mvm, int_sta, 0, iftype,
+ sta_type);
+ if (ret)
+ return ret;
+
+ ret = iwl_mvm_mld_add_int_sta_with_queue(mvm, int_sta, addr, link_id,
+ queue, tid, wdg_timeout);
+ if (ret) {
+ iwl_mvm_dealloc_int_sta(mvm, int_sta);
+ return ret;
+ }
+
+ return 0;
+}
+
+/* Allocate a new station entry for the broadcast station to the given vif,
+ * and send it to the FW.
+ * Note that each P2P mac should have its own broadcast station.
+ */
+int iwl_mvm_mld_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *mvm_link =
+ mvmvif->link[link_conf->link_id];
+ struct iwl_mvm_int_sta *bsta = &mvm_link->bcast_sta;
+ static const u8 _baddr[] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
+ const u8 *baddr = _baddr;
+ unsigned int wdg_timeout =
+ iwl_mvm_get_wd_timeout(mvm, vif, false, false);
+ u16 *queue;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (vif->type == NL80211_IFTYPE_ADHOC)
+ baddr = link_conf->bssid;
+
+ if (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC) {
+ queue = &mvm_link->mgmt_queue;
+ } else if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ queue = &mvm->p2p_dev_queue;
+ } else {
+ WARN(1, "Missing required TXQ for adding bcast STA\n");
+ return -EINVAL;
+ }
+
+ return iwl_mvm_mld_add_int_sta(mvm, bsta, queue,
+ ieee80211_vif_type_p2p(vif),
+ STATION_TYPE_BCAST_MGMT,
+ mvm_link->fw_link_id, baddr,
+ IWL_MAX_TID_COUNT, &wdg_timeout);
+}
+
+/* Allocate a new station entry for the multicast station to the given vif,
+ * and send it to the FW.
+ * Note that each AP/GO mac should have its own multicast station.
+ */
+int iwl_mvm_mld_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *mvm_link =
+ mvmvif->link[link_conf->link_id];
+ struct iwl_mvm_int_sta *msta = &mvm_link->mcast_sta;
+ static const u8 _maddr[] = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00};
+ const u8 *maddr = _maddr;
+ unsigned int timeout = iwl_mvm_get_wd_timeout(mvm, vif, false, false);
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (WARN_ON(vif->type != NL80211_IFTYPE_AP &&
+ vif->type != NL80211_IFTYPE_ADHOC))
+ return -EOPNOTSUPP;
+
+ /* In IBSS, ieee80211_check_queues() sets the cab_queue to be
+ * invalid, so make sure we use the queue we want.
+ * Note that this is done here as we want to avoid making DQA
+ * changes in mac80211 layer.
+ */
+ if (vif->type == NL80211_IFTYPE_ADHOC)
+ mvm_link->cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
+
+ return iwl_mvm_mld_add_int_sta(mvm, msta, &mvm_link->cab_queue,
+ vif->type, STATION_TYPE_MCAST,
+ mvm_link->fw_link_id, maddr, 0,
+ &timeout);
+}
+
+/* Allocate a new station entry for the sniffer station to the given vif,
+ * and send it to the FW.
+ */
+int iwl_mvm_mld_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *mvm_link =
+ mvmvif->link[link_conf->link_id];
+
+ lockdep_assert_held(&mvm->mutex);
+
+ return iwl_mvm_mld_add_int_sta(mvm, &mvm->snif_sta, &mvm->snif_queue,
+ vif->type, STATION_TYPE_BCAST_MGMT,
+ mvm_link->fw_link_id, NULL,
+ IWL_MAX_TID_COUNT, NULL);
+}
+
+int iwl_mvm_mld_add_aux_sta(struct iwl_mvm *mvm, u32 lmac_id)
+{
+ lockdep_assert_held(&mvm->mutex);
+
+ /* In CDB NICs we need to specify which lmac to use for aux activity;
+	 * the link_id argument is reused to pass the lmac_id to this function.
+ */
+ return iwl_mvm_mld_add_int_sta(mvm, &mvm->aux_sta, &mvm->aux_queue,
+ NL80211_IFTYPE_UNSPECIFIED,
+ STATION_TYPE_AUX, lmac_id, NULL,
+ IWL_MAX_TID_COUNT, NULL);
+}
+
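+/* Free a TX queue, first removing it from the firmware scheduler if the
+ * firmware requires an explicit queue removal.
+ */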
+static int iwl_mvm_mld_disable_txq(struct iwl_mvm *mvm, int sta_id,
+ u16 *queueptr, u8 tid)
+{
+ int queue = *queueptr;
+ int ret = 0;
+
+ if (mvm->sta_remove_requires_queue_remove) {
+ u32 cmd_id = WIDE_ID(DATA_PATH_GROUP,
+ SCD_QUEUE_CONFIG_CMD);
+ struct iwl_scd_queue_cfg_cmd remove_cmd = {
+ .operation = cpu_to_le32(IWL_SCD_QUEUE_REMOVE),
+ .u.remove.tid = cpu_to_le32(tid),
+ .u.remove.sta_mask = cpu_to_le32(BIT(sta_id)),
+ };
+
+ ret = iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0,
+ sizeof(remove_cmd),
+ &remove_cmd);
+ }
+
+ iwl_trans_txq_free(mvm->trans, queue);
+ *queueptr = IWL_MVM_INVALID_QUEUE;
+
+ return ret;
+}
+
+/* Remove a station from the FW table, disable its queues and deallocate it
+ */
+static int iwl_mvm_mld_rm_int_sta(struct iwl_mvm *mvm,
+ struct iwl_mvm_int_sta *int_sta,
+				  bool flush, u8 tid, u16 *queueptr)
+{
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (WARN_ON_ONCE(int_sta->sta_id == IWL_MVM_INVALID_STA))
+ return -EINVAL;
+
+ if (flush)
+ iwl_mvm_flush_sta(mvm, int_sta, true);
+
+	iwl_mvm_mld_disable_txq(mvm, int_sta->sta_id, queueptr, tid);
+
+ ret = iwl_mvm_mld_rm_sta_from_fw(mvm, int_sta->sta_id);
+ if (ret)
+ IWL_WARN(mvm, "Failed sending remove station\n");
+
+ iwl_mvm_dealloc_int_sta(mvm, int_sta);
+
+ return ret;
+}
+
+int iwl_mvm_mld_rm_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *link = mvmvif->link[link_conf->link_id];
+ u16 *queueptr;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_ADHOC:
+ queueptr = &link->mgmt_queue;
+ break;
+ case NL80211_IFTYPE_P2P_DEVICE:
+ queueptr = &mvm->p2p_dev_queue;
+ break;
+ default:
+ WARN(1, "Can't free bcast queue on vif type %d\n",
+ vif->type);
+ return -EINVAL;
+ }
+
+ return iwl_mvm_mld_rm_int_sta(mvm, &link->bcast_sta,
+ true, IWL_MAX_TID_COUNT, queueptr);
+}
+
+/* Send the FW a request to remove the station from its internal data
+ * structures, and in addition remove it from the local data structure.
+ */
+int iwl_mvm_mld_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *link = mvmvif->link[link_conf->link_id];
+
+ lockdep_assert_held(&mvm->mutex);
+
+ return iwl_mvm_mld_rm_int_sta(mvm, &link->mcast_sta, true, 0,
+ &link->cab_queue);
+}
+
+int iwl_mvm_mld_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ lockdep_assert_held(&mvm->mutex);
+
+ return iwl_mvm_mld_rm_int_sta(mvm, &mvm->snif_sta, false,
+ IWL_MAX_TID_COUNT, &mvm->snif_queue);
+}
+
+int iwl_mvm_mld_rm_aux_sta(struct iwl_mvm *mvm)
+{
+ lockdep_assert_held(&mvm->mutex);
+
+ return iwl_mvm_mld_rm_int_sta(mvm, &mvm->aux_sta, false,
+ IWL_MAX_TID_COUNT, &mvm->aux_queue);
+}
+
+/* send a cfg sta command to add/update a sta in firmware */
+static int iwl_mvm_mld_cfg_sta(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif,
+ struct ieee80211_link_sta *link_sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct iwl_mvm_link_sta *mvm_link_sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct iwl_mvm_vif *mvm_vif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *link_info =
+ mvm_vif->link[link_conf->link_id];
+ struct iwl_mvm_sta_cfg_cmd cmd = {
+ .sta_id = cpu_to_le32(mvm_link_sta->sta_id),
+ .station_type = cpu_to_le32(mvm_sta->sta_type),
+ .mfp = cpu_to_le32(sta->mfp),
+ };
+ u32 agg_size = 0, mpdu_dens = 0;
+
+ /* when adding sta, link should exist in FW */
+ if (WARN_ON(link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID))
+ return -EINVAL;
+
+ cmd.link_id = cpu_to_le32(link_info->fw_link_id);
+
+ memcpy(&cmd.peer_mld_address, sta->addr, ETH_ALEN);
+ memcpy(&cmd.peer_link_address, link_sta->addr, ETH_ALEN);
+
+ if (mvm_sta->sta_state >= IEEE80211_STA_ASSOC)
+ cmd.assoc_id = cpu_to_le32(sta->aid);
+
+ switch (link_sta->rx_nss) {
+ case 1:
+ cmd.mimo = cpu_to_le32(0);
+ break;
+ case 2 ... 8:
+ cmd.mimo = cpu_to_le32(1);
+ break;
+ }
+
+ switch (sta->deflink.smps_mode) {
+ case IEEE80211_SMPS_AUTOMATIC:
+ case IEEE80211_SMPS_NUM_MODES:
+ WARN_ON(1);
+ break;
+ case IEEE80211_SMPS_STATIC:
+ /* override NSS */
+ cmd.mimo = cpu_to_le32(0);
+ break;
+ case IEEE80211_SMPS_DYNAMIC:
+ cmd.mimo_protection = cpu_to_le32(1);
+ break;
+ case IEEE80211_SMPS_OFF:
+ /* nothing */
+ break;
+ }
+
+ mpdu_dens = iwl_mvm_get_sta_ampdu_dens(link_sta, link_conf, &agg_size);
+ cmd.tx_ampdu_spacing = cpu_to_le32(mpdu_dens);
+ cmd.tx_ampdu_max_size = cpu_to_le32(agg_size);
+
+ if (sta->wme) {
+ cmd.sp_length =
+ cpu_to_le32(sta->max_sp ? sta->max_sp * 2 : 128);
+ cmd.uapsd_acs = cpu_to_le32(iwl_mvm_get_sta_uapsd_acs(sta));
+ }
+
+ if (link_sta->he_cap.has_he) {
+ cmd.trig_rnd_alloc =
+ cpu_to_le32(link_conf->uora_exists ? 1 : 0);
+
+ /* PPE Thresholds */
+ iwl_mvm_set_sta_pkt_ext(mvm, link_sta, &cmd.pkt_ext);
+
+ /* HTC flags */
+ cmd.htc_flags = iwl_mvm_get_sta_htc_flags(sta, link_sta);
+
+ if (link_sta->he_cap.he_cap_elem.mac_cap_info[2] &
+ IEEE80211_HE_MAC_CAP2_ACK_EN)
+ cmd.ack_enabled = cpu_to_le32(1);
+ }
+
+ return iwl_mvm_mld_send_sta_cmd(mvm, &cmd);
+}
+
+static void iwl_mvm_mld_free_sta_link(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta *mvm_sta,
+ struct iwl_mvm_link_sta *mvm_sta_link,
+ unsigned int link_id,
+ bool is_in_fw)
+{
+ RCU_INIT_POINTER(mvm->fw_id_to_mac_id[mvm_sta_link->sta_id],
+ is_in_fw ? ERR_PTR(-EINVAL) : NULL);
+ RCU_INIT_POINTER(mvm->fw_id_to_link_sta[mvm_sta_link->sta_id], NULL);
+ RCU_INIT_POINTER(mvm_sta->link[link_id], NULL);
+
+ if (mvm_sta_link != &mvm_sta->deflink)
+ kfree_rcu(mvm_sta_link, rcu_head);
+}
+
+static void iwl_mvm_mld_sta_rm_all_sta_links(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta *mvm_sta)
+{
+ unsigned int link_id;
+
+ for (link_id = 0; link_id < ARRAY_SIZE(mvm_sta->link); link_id++) {
+ struct iwl_mvm_link_sta *link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (!link)
+ continue;
+
+ iwl_mvm_mld_free_sta_link(mvm, mvm_sta, link, link_id, false);
+ }
+}
+
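+/* Allocate the driver data for a single link of a station and register
+ * its FW station ID in the lookup tables.
+ */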
+static int iwl_mvm_mld_alloc_sta_link(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ unsigned int link_id)
+{
+ struct ieee80211_link_sta *link_sta =
+ link_sta_dereference_protected(sta, link_id);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct iwl_mvm_link_sta *link;
+ u32 sta_id = iwl_mvm_find_free_sta_id(mvm,
+ ieee80211_vif_type_p2p(vif));
+
+ if (sta_id == IWL_MVM_INVALID_STA)
+ return -ENOSPC;
+
+ if (rcu_access_pointer(sta->link[link_id]) == &sta->deflink) {
+ link = &mvm_sta->deflink;
+ } else {
+ link = kzalloc(sizeof(*link), GFP_KERNEL);
+ if (!link)
+ return -ENOMEM;
+ }
+
+ link->sta_id = sta_id;
+ rcu_assign_pointer(mvm_sta->link[link_id], link);
+ rcu_assign_pointer(mvm->fw_id_to_mac_id[link->sta_id], sta);
+ rcu_assign_pointer(mvm->fw_id_to_link_sta[link->sta_id],
+ link_sta);
+
+ return 0;
+}
+
+/* allocate all the links of a sta, called when the station is first added */
+static int iwl_mvm_mld_alloc_sta_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned int link_id;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for (link_id = 0; link_id < ARRAY_SIZE(sta->link); link_id++) {
+ if (!rcu_access_pointer(sta->link[link_id]) ||
+ mvm_sta->link[link_id])
+ continue;
+
+ ret = iwl_mvm_mld_alloc_sta_link(mvm, vif, sta, link_id);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+
+err:
+ iwl_mvm_mld_sta_rm_all_sta_links(mvm, mvm_sta);
+ return ret;
+}
+
+static void iwl_mvm_mld_set_ap_sta_id(struct ieee80211_sta *sta,
+ struct iwl_mvm_vif_link_info *vif_link,
+ struct iwl_mvm_link_sta *sta_link)
+{
+ if (!sta->tdls) {
+ WARN_ON(vif_link->ap_sta_id != IWL_MVM_INVALID_STA);
+ vif_link->ap_sta_id = sta_link->sta_id;
+ } else {
+ WARN_ON(vif_link->ap_sta_id == IWL_MVM_INVALID_STA);
+ }
+}
+
+/* FIXME: consider waiting for mac80211 to add the STA instead of allocating
+ * queues here
+ */
+static int iwl_mvm_alloc_sta_after_restart(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct ieee80211_link_sta *link_sta;
+ unsigned int link_id;
+ /* no active link found */
+ int ret = -EINVAL;
+ int sta_id;
+
+ /* First add an empty station since allocating a queue requires
+ * a valid station. Since we need a link_id to allocate a station,
+ * pick up the first valid one.
+ */
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ struct iwl_mvm_vif_link_info *mvm_link;
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, link_id);
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (!link_conf)
+ continue;
+
+ mvm_link = mvmvif->link[link_conf->link_id];
+
+ if (!mvm_link || !mvm_link_sta)
+ continue;
+
+ sta_id = mvm_link_sta->sta_id;
+ ret = iwl_mvm_mld_cfg_sta(mvm, sta, vif, link_sta,
+ link_conf, mvm_link_sta);
+ if (ret)
+ return ret;
+
+ rcu_assign_pointer(mvm->fw_id_to_mac_id[sta_id], sta);
+ rcu_assign_pointer(mvm->fw_id_to_link_sta[sta_id], link_sta);
+ ret = 0;
+ }
+
+ iwl_mvm_realloc_queues_after_restart(mvm, sta);
+
+ return ret;
+}
+
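+/* Add a new station: allocate its per-link data, initialize the common
+ * station state and configure each active link station in the firmware.
+ */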
+int iwl_mvm_mld_add_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_vif *mvm_vif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned long link_sta_added_to_fw = 0;
+ struct ieee80211_link_sta *link_sta;
+ int ret = 0;
+ unsigned int link_id;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ ret = iwl_mvm_mld_alloc_sta_links(mvm, vif, sta);
+ if (ret)
+ return ret;
+ }
+
+ spin_lock_init(&mvm_sta->lock);
+
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ ret = iwl_mvm_alloc_sta_after_restart(mvm, vif, sta);
+ else
+ ret = iwl_mvm_sta_init(mvm, vif, sta, IWL_MVM_INVALID_STA,
+ STATION_TYPE_PEER);
+ if (ret)
+ goto err;
+
+ /* at this stage sta link pointers are already allocated */
+ ret = iwl_mvm_mld_update_sta(mvm, vif, sta);
+
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, link_id);
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (WARN_ON(!link_conf || !mvm_link_sta))
+ goto err;
+
+ ret = iwl_mvm_mld_cfg_sta(mvm, sta, vif, link_sta, link_conf,
+ mvm_link_sta);
+ if (ret)
+ goto err;
+
+ link_sta_added_to_fw |= BIT(link_id);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ iwl_mvm_mld_set_ap_sta_id(sta, mvm_vif->link[link_id],
+ mvm_link_sta);
+ }
+
+ return 0;
+
+err:
+ /* remove all already allocated stations in FW */
+ for_each_set_bit(link_id, &link_sta_added_to_fw,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ iwl_mvm_mld_rm_sta_from_fw(mvm, mvm_link_sta->sta_id);
+ }
+
+ /* free all sta resources in the driver */
+ iwl_mvm_mld_sta_rm_all_sta_links(mvm, mvm_sta);
+ return ret;
+}
+
+int iwl_mvm_mld_update_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct ieee80211_link_sta *link_sta;
+ unsigned int link_id;
+ int ret = 0;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, link_id);
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (WARN_ON(!link_conf || !mvm_link_sta))
+ return -EINVAL;
+
+ ret = iwl_mvm_mld_cfg_sta(mvm, sta, vif, link_sta, link_conf,
+ mvm_link_sta);
+
+ if (ret) {
+ IWL_ERR(mvm, "Failed to update sta link %d\n", link_id);
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static void iwl_mvm_mld_disable_sta_queues(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ int i;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for (i = 0; i < ARRAY_SIZE(mvm_sta->tid_data); i++) {
+ if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE)
+ continue;
+
+ iwl_mvm_mld_disable_txq(mvm, mvm_sta->deflink.sta_id,
+ &mvm_sta->tid_data[i].txq_id, i);
+ mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(sta->txq); i++) {
+ struct iwl_mvm_txq *mvmtxq =
+ iwl_mvm_txq_from_mac80211(sta->txq[i]);
+
+ mvmtxq->txq_id = IWL_MVM_INVALID_QUEUE;
+ }
+}
+
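+/* Remove a station: flush and disable its queues, then remove each of its
+ * link stations from the firmware and free the driver data.
+ */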
+int iwl_mvm_mld_rm_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct ieee80211_link_sta *link_sta;
+ unsigned int link_id;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ kfree(mvm_sta->dup_data);
+
+ /* flush its queues here since we are freeing mvm_sta */
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (WARN_ON(!mvm_link_sta))
+ return -EINVAL;
+
+ ret = iwl_mvm_flush_sta_tids(mvm, mvm_link_sta->sta_id,
+ 0xffff);
+ if (ret)
+ return ret;
+ }
+
+ ret = iwl_mvm_wait_sta_queues_empty(mvm, mvm_sta);
+ if (ret)
+ return ret;
+
+ iwl_mvm_mld_disable_sta_queues(mvm, vif, sta);
+
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ struct iwl_mvm_link_sta *mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+ bool stay_in_fw;
+
+ stay_in_fw = iwl_mvm_sta_del(mvm, vif, sta, link_sta, &ret);
+ if (ret)
+ break;
+
+ if (!stay_in_fw)
+ ret = iwl_mvm_mld_rm_sta_from_fw(mvm,
+ mvm_link_sta->sta_id);
+
+ iwl_mvm_mld_free_sta_link(mvm, mvm_sta, mvm_link_sta,
+ link_id, stay_in_fw);
+ }
+
+ return ret;
+}
+
+int iwl_mvm_mld_rm_sta_id(struct iwl_mvm *mvm, u8 sta_id)
+{
+ int ret = iwl_mvm_mld_rm_sta_from_fw(mvm, sta_id);
+
+ lockdep_assert_held(&mvm->mutex);
+
+ RCU_INIT_POINTER(mvm->fw_id_to_mac_id[sta_id], NULL);
+ RCU_INIT_POINTER(mvm->fw_id_to_link_sta[sta_id], NULL);
+ return ret;
+}
+
+void iwl_mvm_mld_sta_modify_disable_tx(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta *mvmsta,
+ bool disable)
+{
+ struct iwl_mvm_sta_disable_tx_cmd cmd;
+ int ret;
+
+ cmd.sta_id = cpu_to_le32(mvmsta->deflink.sta_id);
+ cmd.disable = cpu_to_le32(disable);
+
+ ret = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(MAC_CONF_GROUP, STA_DISABLE_TX_CMD),
+ CMD_ASYNC, sizeof(cmd), &cmd);
+ if (ret)
+ IWL_ERR(mvm,
+ "Failed to send STA_DISABLE_TX_CMD command (%d)\n",
+ ret);
+}
+
+void iwl_mvm_mld_sta_modify_disable_tx_ap(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ bool disable)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+
+ spin_lock_bh(&mvm_sta->lock);
+
+ if (mvm_sta->disable_tx == disable) {
+ spin_unlock_bh(&mvm_sta->lock);
+ return;
+ }
+
+ iwl_mvm_mld_sta_modify_disable_tx(mvm, mvm_sta, disable);
+
+ spin_unlock_bh(&mvm_sta->lock);
+}
+
+void iwl_mvm_mld_modify_all_sta_disable_tx(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ bool disable)
+{
+ struct ieee80211_sta *sta;
+ struct iwl_mvm_sta *mvm_sta;
+ int i;
+
+ rcu_read_lock();
+
+ /* Block/unblock all the stations of the given mvmvif */
+ for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++) {
+ sta = rcu_dereference(mvm->fw_id_to_mac_id[i]);
+ if (IS_ERR_OR_NULL(sta))
+ continue;
+
+ mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ if (mvm_sta->mac_id_n_color !=
+ FW_CMD_ID_AND_COLOR(mvmvif->id, mvmvif->color))
+ continue;
+
+ iwl_mvm_mld_sta_modify_disable_tx(mvm, mvm_sta, disable);
+ }
+
+ rcu_read_unlock();
+}
+
+static int iwl_mvm_mld_update_sta_queue(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta *mvm_sta,
+ u32 old_sta_mask,
+ u32 new_sta_mask)
+{
+ struct iwl_scd_queue_cfg_cmd cmd = {
+ .operation = cpu_to_le32(IWL_SCD_QUEUE_MODIFY),
+ .u.modify.old_sta_mask = cpu_to_le32(old_sta_mask),
+ .u.modify.new_sta_mask = cpu_to_le32(new_sta_mask),
+ };
+ struct iwl_host_cmd hcmd = {
+ .id = WIDE_ID(DATA_PATH_GROUP, SCD_QUEUE_CONFIG_CMD),
+ .len[0] = sizeof(cmd),
+ .data[0] = &cmd
+ };
+ int tid;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for (tid = 0; tid <= IWL_MAX_TID_COUNT; tid++) {
+ struct iwl_mvm_tid_data *tid_data = &mvm_sta->tid_data[tid];
+ int txq_id = tid_data->txq_id;
+
+ if (txq_id == IWL_MVM_INVALID_QUEUE)
+ continue;
+
+ if (tid == IWL_MAX_TID_COUNT)
+ cmd.u.modify.tid = cpu_to_le32(IWL_MGMT_TID);
+ else
+ cmd.u.modify.tid = cpu_to_le32(tid);
+
+ ret = iwl_mvm_send_cmd(mvm, &hcmd);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
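+/*
+ * Update the set of link stations used by @sta: first narrow the TX queue
+ * station mask and remove the link stations that are no longer in
+ * @new_links, then allocate and configure the newly added link stations and
+ * widen the queue mask again. On any failure, the link stations added so
+ * far are removed from FW and freed in the driver.
+ */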
+int iwl_mvm_mld_update_sta_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u16 old_links, u16 new_links)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ struct iwl_mvm_vif *mvm_vif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_link_sta *mvm_sta_link;
+ struct iwl_mvm_vif_link_info *mvm_vif_link;
+ unsigned long links_to_add = ~old_links & new_links;
+ unsigned long links_to_rem = old_links & ~new_links;
+ unsigned long old_links_long = old_links;
+ u32 current_sta_mask = 0, sta_mask_added = 0, sta_mask_to_rem = 0;
+ unsigned long link_sta_added_to_fw = 0, link_sta_allocated = 0;
+ unsigned int link_id;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ for_each_set_bit(link_id, &old_links_long,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ mvm_sta_link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (WARN_ON(!mvm_sta_link)) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ current_sta_mask |= BIT(mvm_sta_link->sta_id);
+ if (links_to_rem & BIT(link_id))
+ sta_mask_to_rem |= BIT(mvm_sta_link->sta_id);
+ }
+
+ if (sta_mask_to_rem) {
+ ret = iwl_mvm_mld_update_sta_queue(mvm, mvm_sta,
+ current_sta_mask,
+ current_sta_mask & ~sta_mask_to_rem);
+ if (WARN_ON(ret))
+ goto err;
+
+ current_sta_mask &= ~sta_mask_to_rem;
+ }
+
+ for_each_set_bit(link_id, &links_to_rem, IEEE80211_MLD_MAX_NUM_LINKS) {
+ mvm_sta_link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+ mvm_vif_link = mvm_vif->link[link_id];
+
+ if (WARN_ON(!mvm_sta_link || !mvm_vif_link)) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ ret = iwl_mvm_mld_rm_sta_from_fw(mvm, mvm_sta_link->sta_id);
+ if (WARN_ON(ret))
+ goto err;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ mvm_vif_link->ap_sta_id = IWL_MVM_INVALID_STA;
+
+ iwl_mvm_mld_free_sta_link(mvm, mvm_sta, mvm_sta_link, link_id,
+ false);
+ }
+
+ for_each_set_bit(link_id, &links_to_add, IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_bss_conf *link_conf =
+ link_conf_dereference_protected(vif, link_id);
+ struct ieee80211_link_sta *link_sta =
+ link_sta_dereference_protected(sta, link_id);
+ mvm_vif_link = mvm_vif->link[link_id];
+
+ if (WARN_ON(!mvm_vif_link || !link_conf || !link_sta ||
+ mvm_sta->link[link_id])) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ ret = iwl_mvm_mld_alloc_sta_link(mvm, vif, sta, link_id);
+ if (WARN_ON(ret))
+ goto err;
+
+ link_sta->agg.max_rc_amsdu_len = 1;
+ ieee80211_sta_recalc_aggregates(sta);
+
+ mvm_sta_link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (WARN_ON(!mvm_sta_link)) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ iwl_mvm_mld_set_ap_sta_id(sta, mvm_vif_link,
+ mvm_sta_link);
+
+ link_sta_allocated |= BIT(link_id);
+
+ sta_mask_added |= BIT(mvm_sta_link->sta_id);
+
+ ret = iwl_mvm_mld_cfg_sta(mvm, sta, vif, link_sta, link_conf,
+ mvm_sta_link);
+ if (WARN_ON(ret))
+ goto err;
+
+ link_sta_added_to_fw |= BIT(link_id);
+ }
+
+ if (sta_mask_added) {
+ ret = iwl_mvm_mld_update_sta_queue(mvm, mvm_sta,
+ current_sta_mask,
+ current_sta_mask | sta_mask_added);
+ if (WARN_ON(ret))
+ goto err;
+ }
+
+ return 0;
+
+err:
+ /* remove all already allocated stations in FW */
+ for_each_set_bit(link_id, &link_sta_added_to_fw,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ mvm_sta_link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ iwl_mvm_mld_rm_sta_from_fw(mvm, mvm_sta_link->sta_id);
+ }
+
+ /* remove all already allocated station links in driver */
+ for_each_set_bit(link_id, &link_sta_allocated,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ mvm_sta_link =
+ rcu_dereference_protected(mvm_sta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ iwl_mvm_mld_free_sta_link(mvm, mvm_sta, mvm_sta_link, link_id,
+ false);
+ }
+
+ return ret;
+}
#include <linux/thermal.h>
#endif
+#include <linux/ptp_clock_kernel.h>
+
#include <linux/ktime.h>
#include "iwl-op-mode.h"
/* offchannel queue towards mac80211 */
#define IWL_MVM_OFFCHANNEL_QUEUE 0
+/* invalid value for FW link id */
+#define IWL_MVM_FW_LINK_ID_INVALID 0xff
+
extern const struct ieee80211_ops iwl_mvm_hw_ops;
+extern const struct ieee80211_ops iwl_mvm_mld_hw_ops;
/**
* struct iwl_mvm_mod_params - module parameters for iwlmvm
};
/**
+ * struct iwl_mvm_vif_link_info - per link data in Virtual Interface
+ * @ap_sta_id: the sta_id of the AP - valid only if VIF type is STA
+ * @fw_link_id: the id of the link according to the FW API
+ * @bssid: BSSID for this (client) interface
+ * @bcast_sta: station used for broadcast packets. Used by the following
+ * vifs: P2P_DEVICE, GO and AP.
+ * @beacon_stats: beacon statistics, containing the # of received beacons,
+ * # of received beacons accumulated over FW restart, and the current
+ * average signal of beacons retrieved from the firmware
+ * @smps_requests: the SMPS requests of different parts of the driver,
+ * combined on update to yield the overall request to mac80211.
+ * @probe_resp_data: data from FW notification to store NOA and CSA related
+ * data to be inserted into probe response.
+ * @he_ru_2mhz_block: 26-tone RU OFDMA transmissions should be blocked
+ * @queue_params: QoS params for this MAC
+ * @mgmt_queue: queue number for unbufferable management frames
+ */
+struct iwl_mvm_vif_link_info {
+ u8 bssid[ETH_ALEN];
+ u8 ap_sta_id;
+ u8 fw_link_id;
+
+ struct iwl_mvm_int_sta bcast_sta;
+ struct iwl_mvm_int_sta mcast_sta;
+
+ struct {
+ u32 num_beacons, accu_num_beacons;
+ u8 avg_signal;
+ } beacon_stats;
+
+ enum ieee80211_smps_mode smps_requests[NUM_IWL_MVM_SMPS_REQ];
+ struct iwl_probe_resp_data __rcu *probe_resp_data;
+
+ bool he_ru_2mhz_block;
+ bool active;
+
+ u16 cab_queue;
+ /* Assigned while mac80211 has the link in a channel context,
+ * or, for P2P Device, while it exists.
+ */
+ struct iwl_mvm_phy_ctxt *phy_ctxt;
+ /* QoS data from mac80211, need to store this here
+ * as mac80211 has a separate callback but we need
+ * to have the data for the MAC context
+ */
+ struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
+
+ u16 mgmt_queue;
+};
+
+/**
* struct iwl_mvm_vif - data per Virtual Interface, it is a MAC context
* @id: between 0 and 3
* @color: to solve races upon MAC addition and removal
- * @ap_sta_id: the sta_id of the AP - valid only if VIF type is STA
- * @bssid: BSSID for this (client) interface
* @associated: indicates that we're currently associated, used only for
* managing the firmware state in iwl_mvm_bss_info_changed_station()
* @ap_assoc_sta_count: count of stations associated to us - valid only
* @uploaded: indicates the MAC context has been added to the device
* @ap_ibss_active: indicates that AP/IBSS is configured and that the interface
* should get quota etc.
- * @pm_enabled - Indicate if MAC power management is allowed
+ * @pm_enabled - indicate if MAC power management is allowed
* @monitor_active: indicates that monitor context is configured, and that the
* interface should get quota etc.
* @low_latency: bit flags for low latency
* as a result from low_latency bit flags and takes force into account.
* @authorized: indicates the AP station was set to authorized
* @ps_disabled: indicates that this interface requires PS to be disabled
- * @queue_params: QoS params for this MAC
- * @bcast_sta: station used for broadcast packets. Used by the following
- * vifs: P2P_DEVICE, GO and AP.
- * @beacon_skb: the skb used to hold the AP/GO beacon template
- * @smps_requests: the SMPS requests of different parts of the driver,
- * combined on update to yield the overall request to mac80211.
- * @beacon_stats: beacon statistics, containing the # of received beacons,
- * # of received beacons accumulated over FW restart, and the current
- * average signal of beacons retrieved from the firmware
+ * @csa_countdown: indicates that CSA countdown may be started
* @csa_failed: CSA failed to schedule time event, report an error later
+ * @csa_bcn_pending: indicates that we are waiting for a beacon on a new channel
* @features: hw features active for this vif
- * @probe_resp_data: data from FW notification to store NOA and CSA related
- * data to be inserted into probe response.
*/
struct iwl_mvm_vif {
struct iwl_mvm *mvm;
u16 id;
u16 color;
- u8 ap_sta_id;
- u8 bssid[ETH_ALEN];
bool associated;
u8 ap_assoc_sta_count;
-
- u16 cab_queue;
-
bool uploaded;
bool ap_ibss_active;
bool pm_enabled;
bool monitor_active;
+
u8 low_latency: 6;
u8 low_latency_actual: 1;
+
u8 authorized:1;
bool ps_disabled;
- struct iwl_mvm_vif_bf_data bf_data;
-
- struct {
- u32 num_beacons, accu_num_beacons;
- u8 avg_signal;
- } beacon_stats;
u32 ap_beacon_time;
-
- enum iwl_tsf_id tsf_id;
-
- /*
- * QoS data from mac80211, need to store this here
- * as mac80211 has a separate callback but we need
- * to have the data for the MAC context
- */
- struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
- struct iwl_mvm_time_event_data time_event_data;
- struct iwl_mvm_time_event_data hs_time_event_data;
-
- struct iwl_mvm_int_sta bcast_sta;
- struct iwl_mvm_int_sta mcast_sta;
-
- /*
- * Assigned while mac80211 has the interface in a channel context,
- * or, for P2P Device, while it exists.
- */
- struct iwl_mvm_phy_ctxt *phy_ctxt;
+ struct iwl_mvm_vif_bf_data bf_data;
#ifdef CONFIG_PM
/* WoWLAN GTK rekey data */
int dbgfs_quota_min;
#endif
- enum ieee80211_smps_mode smps_requests[NUM_IWL_MVM_SMPS_REQ];
-
/* FW identified misbehaving AP */
u8 uapsd_misbehaving_bssid[ETH_ALEN];
-
struct delayed_work uapsd_nonagg_detected_wk;
- /* Indicates that CSA countdown may be started */
bool csa_countdown;
bool csa_failed;
+ bool csa_bcn_pending;
u16 csa_target_freq;
u16 csa_count;
u16 csa_misbehave;
struct delayed_work csa_work;
- /* Indicates that we are waiting for a beacon on a new channel */
- bool csa_bcn_pending;
+ enum iwl_tsf_id tsf_id;
+
+ struct iwl_mvm_time_event_data time_event_data;
+ struct iwl_mvm_time_event_data hs_time_event_data;
/* TCP Checksum Offload */
netdev_features_t features;
- struct iwl_probe_resp_data __rcu *probe_resp_data;
-
/* we can only have 2 GTK + 2 IGTK active at a time */
struct ieee80211_key_conf *ap_early_keys[4];
- /* 26-tone RU OFDMA transmissions should be blocked */
- bool he_ru_2mhz_block;
-
struct {
struct ieee80211_key_conf __rcu *keys[2];
} bcn_prot;
+
+ struct iwl_mvm_vif_link_info deflink;
+ struct iwl_mvm_vif_link_info *link[IEEE80211_MLD_MAX_NUM_LINKS];
};
+#define for_each_mvm_vif_valid_link(mvm_vif, link_id) \
+ for (link_id = 0; \
+ link_id < ARRAY_SIZE((mvm_vif)->link); \
+ link_id++) \
+ if ((mvm_vif)->link[link_id])
+
static inline struct iwl_mvm_vif *
iwl_mvm_vif_from_mac80211(struct ieee80211_vif *vif)
{
enum iwl_mvm_queue_status status;
};
+struct ptp_data {
+ struct ptp_clock *ptp_clock;
+ struct ptp_clock_info ptp_clock_info;
+
+ struct delayed_work dwork;
+
+ /* The last GP2 reading from the hw */
+ u32 last_gp2;
+
+ /* number of wraparounds since scale_update_adj_time_ns */
+ u32 wrap_counter;
+
+ /* GP2 time when the scale was last updated */
+ u32 scale_update_gp2;
+
+ /* Adjusted time when the scale was last updated in nanoseconds */
+ u64 scale_update_adj_time_ns;
+
+ /* Clock frequency ratio, scaled such that 65536000000 means no adjustment */
+ u64 scaled_freq;
+
+ /* Delta between hardware clock and ptp clock in nanoseconds */
+ s64 delta;
+};
+
+struct iwl_time_sync_data {
+ struct sk_buff_head frame_list;
+ u8 peer_addr[ETH_ALEN];
+ bool active;
+};
+
struct iwl_mvm {
/* for logger access */
struct device *dev;
/* data related to data path */
struct iwl_rx_phy_info last_phy_info;
struct ieee80211_sta __rcu *fw_id_to_mac_id[IWL_MVM_STATION_COUNT_MAX];
+ struct ieee80211_link_sta __rcu *fw_id_to_link_sta[IWL_MVM_STATION_COUNT_MAX];
+ unsigned long fw_link_ids_map;
u8 rx_ba_sessions;
/* configured by mac80211 */
struct list_head resp_pasn_list;
+ struct ptp_data ptp_data;
+
struct {
u8 range_resp;
} cmd_ver;
/* does a monitor vif exist (only one can exist hence bool) */
bool monitor_on;
+ /*
+ * primary channel position relative to the whole bandwidth,
+ * in steps of 80 MHz
+ */
+ u8 monitor_p80;
/* sniffer data to include in radiotap */
__le16 cur_aid;
unsigned long last_reset_or_resume_time_jiffies;
bool sta_remove_requires_queue_remove;
+ bool mld_api_is_used;
bool pldr_sync;
+
+ struct iwl_time_sync_data time_sync;
};
/* Extract MVM priv from op_mode and _hw */
{
return fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CSUM_SUPPORT) &&
- !IWL_MVM_HW_CSUM_DISABLE;
+ !IWL_MVM_HW_CSUM_DISABLE;
}
static inline bool iwl_mvm_is_mplut_supported(struct iwl_mvm *mvm)
IWL_UCODE_TLV_CAPA_MULTI_QUEUE_RX_SUPPORT);
}
+static inline bool iwl_mvm_has_mld_api(const struct iwl_fw *fw)
+{
+ return (iwl_fw_lookup_cmd_ver(fw, LINK_CONFIG_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN) &&
+ (iwl_fw_lookup_cmd_ver(fw, MAC_CONFIG_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN) &&
+ (iwl_fw_lookup_cmd_ver(fw, STA_CONFIG_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN) &&
+ (iwl_fw_lookup_cmd_ver(fw, AUX_STA_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN) &&
+ (iwl_fw_lookup_cmd_ver(fw, STA_REMOVE_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN) &&
+ (iwl_fw_lookup_cmd_ver(fw, STA_DISABLE_TX_CMD,
+ IWL_FW_CMD_VER_UNKNOWN) !=
+ IWL_FW_CMD_VER_UNKNOWN);
+}
+
static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm)
{
/* TODO - replace with TLV once defined */
struct ieee80211_tx_rate *r);
u8 iwl_mvm_mac80211_idx_to_hwrate(const struct iwl_fw *fw, int rate_idx);
u8 iwl_mvm_mac80211_ac_to_ucode_ac(enum ieee80211_ac_numbers ac);
+bool iwl_mvm_is_nic_ack_enabled(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
static inline void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm)
{
int iwl_mvm_flush_sta(struct iwl_mvm *mvm, void *sta, bool internal);
int iwl_mvm_flush_sta_tids(struct iwl_mvm *mvm, u32 sta_id, u16 tids);
+/* Utils to extract sta related data */
+__le32 iwl_mvm_get_sta_htc_flags(struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta);
+u8 iwl_mvm_get_sta_uapsd_acs(struct ieee80211_sta *sta);
+u32 iwl_mvm_get_sta_ampdu_dens(struct ieee80211_link_sta *link_sta,
+ struct ieee80211_bss_conf *link_conf,
+ u32 *_agg_size);
+int iwl_mvm_set_sta_pkt_ext(struct iwl_mvm *mvm,
+ struct ieee80211_link_sta *link_sta,
+ struct iwl_he_pkt_ext_v2 *pkt_ext);
+
void iwl_mvm_async_handlers_purge(struct iwl_mvm *mvm);
static inline void iwl_mvm_set_tx_cmd_ccmp(struct ieee80211_tx_info *info,
struct iwl_rx_cmd_buffer *rxb);
/* MVM PHY */
+struct iwl_mvm_phy_ctxt *iwl_mvm_get_free_phy_ctxt(struct iwl_mvm *mvm);
int iwl_mvm_phy_ctxt_add(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
struct cfg80211_chan_def *chandef,
u8 chains_static, u8 chains_dynamic);
u8 iwl_mvm_get_ctrl_pos(struct cfg80211_chan_def *chandef);
/* MAC (virtual interface) programming */
+
+void iwl_mvm_prepare_mac_removal(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
+void iwl_mvm_set_fw_basic_rates(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le32 *cck_rates, __le32 *ofdm_rates);
+void iwl_mvm_set_fw_protection_flags(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le32 *protection_flags, u32 ht_flag,
+ u32 tgg_flag);
+void iwl_mvm_set_fw_qos_params(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct iwl_ac_qos *ac, __le32 *qos_flags);
+bool iwl_mvm_set_fw_mu_edca_params(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ struct iwl_he_backoff_conf *trig_based_txf);
+void iwl_mvm_set_fw_dtim_tbtt(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ __le64 *dtim_tsf, __le32 *dtim_time,
+ __le32 *assoc_beacon_arrive_time);
+__le32 iwl_mac_ctxt_p2p_dev_has_extended_disc(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
+void iwl_mvm_mac_ctxt_cmd_ap_set_filter_flags(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ __le32 *filter_flags,
+ int accept_probe_req_flag,
+ int accept_beacon_flag);
+int iwl_mvm_get_mac_type(struct ieee80211_vif *vif);
+__le32 iwl_mvm_mac_ctxt_cmd_p2p_sta_get_oppps_ctwin(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
+__le32 iwl_mvm_mac_ctxt_cmd_sta_get_twt_policy(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
+int iwl_mvm_mld_mac_ctxt_add(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+int iwl_mvm_mld_mac_ctxt_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool force_assoc_off);
+int iwl_mvm_mld_mac_ctxt_remove(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_mac_ctxt_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_mac_ctxt_add(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_mac_ctxt_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
bool force_assoc_off, const u8 *bssid_override);
int iwl_mvm_mac_ctxt_remove(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
- struct sk_buff *beacon);
+ struct sk_buff *beacon,
+ struct ieee80211_bss_conf *link_conf);
int iwl_mvm_mac_ctxt_send_beacon_cmd(struct iwl_mvm *mvm,
struct sk_buff *beacon,
void *data, int len);
int iwl_mvm_binding_add_vif(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_binding_remove_vif(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+/* Links */
+int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_link_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u32 changes, bool active);
+int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_disable_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+
+/* AP and IBSS */
+bool iwl_mvm_start_ap_ibss_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif, int *ret);
+void iwl_mvm_stop_ap_ibss_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
+
+/* BSS Info */
+/**
+ * struct iwl_mvm_bss_info_changed_ops - callbacks for bss_info_changed()
+ *
+ * Since the only difference between the MLD and
+ * non-MLD versions of bss_info_changed() is these function calls,
+ * each version passes its specific callbacks to
+ * %iwl_mvm_bss_info_changed_common().
+ *
+ * @bss_info_changed_sta: pointer to the function that handles changes
+ * in bss_info in sta mode
+ * @bss_info_changed_ap_ibss: pointer to the function that handles changes
+ * in bss_info in ap and ibss modes
+ */
+struct iwl_mvm_bss_info_changed_ops {
+ void (*bss_info_changed_sta)(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf,
+ u64 changes);
+ void (*bss_info_changed_ap_ibss)(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf,
+ u64 changes);
+};
+
+void
+iwl_mvm_bss_info_changed_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf,
+ struct iwl_mvm_bss_info_changed_ops *callbacks,
+ u64 changes);
+void
+iwl_mvm_bss_info_changed_station_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes);
+void iwl_mvm_bss_info_changed_station_assoc(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u64 changes);
+
+/* ROC */
+/**
+ * struct iwl_mvm_roc_ops - callbacks for remain_on_channel()
+ *
+ * Since the only difference between the MLD and
+ * non-MLD versions of remain_on_channel() is these function calls,
+ * each version passes its specific callbacks to
+ * %iwl_mvm_roc_common().
+ *
+ * @add_aux_sta_for_hs20: pointer to the function that adds an aux sta
+ * for Hotspot 2.0
+ * @switch_phy_ctxt: pointer to the function that switches a vif from one
+ * phy_ctx to another
+ */
+struct iwl_mvm_roc_ops {
+ int (*add_aux_sta_for_hs20)(struct iwl_mvm *mvm, u32 lmac_id);
+ int (*switch_phy_ctxt)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct iwl_mvm_phy_ctxt *new_phy_ctxt);
+};
+
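+/*
+ * Usage sketch (illustrative; the my_* helpers are placeholders, not
+ * functions introduced here): each ieee80211_ops flavour fills in its own
+ * helpers and delegates to the common implementation:
+ *
+ *	static struct iwl_mvm_roc_ops my_roc_ops = {
+ *		.add_aux_sta_for_hs20 = my_add_aux_sta_for_hs20,
+ *		.switch_phy_ctxt = my_switch_phy_ctxt,
+ *	};
+ *
+ *	return iwl_mvm_roc_common(hw, vif, channel, duration, type,
+ *				  &my_roc_ops);
+ */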
+int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel *channel, int duration,
+ enum ieee80211_roc_type type,
+ struct iwl_mvm_roc_ops *ops);
+int iwl_mvm_cancel_roc(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+/* Session Protection */
+void iwl_mvm_protect_assoc(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ u32 duration_override);
+
/* Quota management */
static inline size_t iwl_mvm_quota_cmd_size(struct iwl_mvm *mvm)
{
/* SMPS */
void iwl_mvm_update_smps(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_smps_type_request req_type,
- enum ieee80211_smps_mode smps_request);
+ enum ieee80211_smps_mode smps_request,
+ unsigned int link_id);
+void
+iwl_mvm_update_smps_on_active_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ enum iwl_mvm_smps_type_request req_type,
+ enum ieee80211_smps_mode smps_request);
bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm,
struct iwl_mvm_phy_ctxt *ctxt);
-void iwl_mvm_apply_fw_smps_request(struct ieee80211_vif *vif);
+void iwl_mvm_update_link_smps(struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
/* Low latency */
int iwl_mvm_update_low_latency(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
const struct ieee80211_sta *sta,
u16 tid);
+void iwl_mvm_ptp_init(struct iwl_mvm *mvm);
+void iwl_mvm_ptp_remove(struct iwl_mvm *mvm);
+u64 iwl_mvm_ptp_get_adj_time(struct iwl_mvm *mvm, u64 base_time);
int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b);
int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm);
int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm);
struct ieee80211_sta *sta,
struct ieee80211_key_conf *keyconf);
void iwl_mvm_sec_key_remove_ap(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ struct iwl_mvm_vif_link_info *link,
+ unsigned int link_id);
int iwl_rfi_send_config_cmd(struct iwl_mvm *mvm,
struct iwl_rfi_lut_entry *rfi_table);
}
}
+/* Channel Switch */
+void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk);
+int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+
+/* Channel Context */
+/**
+ * struct iwl_mvm_switch_vif_chanctx_ops - callbacks for switch_vif_chanctx()
+ *
+ * Since the only difference between the MLD and
+ * non-MLD versions of switch_vif_chanctx() is these function calls,
+ * each version passes its specific callbacks to
+ * %iwl_mvm_switch_vif_chanctx_common().
+ *
+ * @__assign_vif_chanctx: pointer to the function that assigns a chanctx to
+ * a given vif
+ * @__unassign_vif_chanctx: pointer to the function that unassigns a chanctx from
+ * a given vif
+ */
+struct iwl_mvm_switch_vif_chanctx_ops {
+ int (*__assign_vif_chanctx)(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx);
+ void (*__unassign_vif_chanctx)(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_chanctx_conf *ctx,
+ bool switching_chanctx);
+};
+
+int
+iwl_mvm_switch_vif_chanctx_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif_chanctx_switch *vifs,
+ int n_vifs,
+ enum ieee80211_chanctx_switch_mode mode,
+ struct iwl_mvm_switch_vif_chanctx_ops *ops);
+
/* Channel info utils */
static inline bool iwl_mvm_has_ultra_hb_channel(struct iwl_mvm *mvm)
{
void iwl_mvm_send_roaming_forbidden_event(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
bool forbidden);
+bool iwl_mvm_is_vendor_in_approved_list(void);
+/* Callbacks for ieee80211_ops */
+void iwl_mvm_mac_tx(struct ieee80211_hw *hw,
+ struct ieee80211_tx_control *control, struct sk_buff *skb);
+void iwl_mvm_mac_wake_tx_queue(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq);
+
+int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params);
+int iwl_mvm_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant);
+int iwl_mvm_mac_start(struct ieee80211_hw *hw);
+void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw,
+ enum ieee80211_reconfig_type reconfig_type);
+void iwl_mvm_mac_stop(struct ieee80211_hw *hw);
+static inline int iwl_mvm_mac_config(struct ieee80211_hw *hw, u32 changed)
+{
+ return 0;
+}
+
+u64 iwl_mvm_prepare_multicast(struct ieee80211_hw *hw,
+ struct netdev_hw_addr_list *mc_list);
+
+void iwl_mvm_configure_filter(struct ieee80211_hw *hw,
+ unsigned int changed_flags,
+ unsigned int *total_flags, u64 multicast);
+int iwl_mvm_mac_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *hw_req);
+void iwl_mvm_mac_cancel_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+void iwl_mvm_sta_pre_rcu_remove(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+void iwl_mvm_mac_sta_notify(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ enum sta_notify_cmd cmd,
+ struct ieee80211_sta *sta);
+void
+iwl_mvm_mac_allow_buffered_frames(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta, u16 tids,
+ int num_frames,
+ enum ieee80211_frame_release_type reason,
+ bool more_data);
+void
+iwl_mvm_mac_release_buffered_frames(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta, u16 tids,
+ int num_frames,
+ enum ieee80211_frame_release_type reason,
+ bool more_data);
+int iwl_mvm_mac_set_rts_threshold(struct ieee80211_hw *hw, u32 value);
+void iwl_mvm_sta_rc_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, u32 changed);
+void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info);
+void iwl_mvm_mac_mgd_complete_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info);
+void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ u32 queues, bool drop);
+int iwl_mvm_mac_sched_scan_start(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_sched_scan_request *req,
+ struct ieee80211_scan_ies *ies);
+int iwl_mvm_mac_sched_scan_stop(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+int iwl_mvm_mac_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key);
+void iwl_mvm_mac_update_tkip_key(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_key_conf *keyconf,
+ struct ieee80211_sta *sta,
+ u32 iv32, u16 *phase1key);
+int iwl_mvm_add_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx);
+void iwl_mvm_remove_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx);
+void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx, u32 changed);
+int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw);
+int iwl_mvm_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ bool set);
+void iwl_mvm_channel_switch(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw);
+int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw);
+void iwl_mvm_abort_channel_switch(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
+void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel_switch *chsw);
+void iwl_mvm_mac_event_callback(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ const struct ieee80211_event *event);
+void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw);
+int iwl_mvm_mac_testmode_cmd(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ void *data, int len);
+int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
+ struct survey_info *survey);
+void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo);
+int
+iwl_mvm_mac_get_ftm_responder_stats(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_ftm_responder_stats *stats);
+int iwl_mvm_start_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request);
+void iwl_mvm_abort_pmsr(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request);
+
+bool iwl_mvm_have_links_same_channel(struct iwl_mvm_vif *vif1,
+ struct iwl_mvm_vif *vif2);
+bool iwl_mvm_vif_is_active(struct iwl_mvm_vif *mvmvif);
+int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ s16 tx_power);
+int iwl_mvm_set_hw_timestamp(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_set_hw_timestamp *hwts);
+int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
#endif /* __IWL_MVM_H__ */
#include "fw-api.h"
#include "fw/acpi.h"
#include "fw/uefi.h"
+#include "time-sync.h"
#define DRV_DESCRIPTION "The new Intel(R) wireless AGN driver for Linux"
MODULE_DESCRIPTION(DRV_DESCRIPTION);
ieee80211_disconnect(vif, true);
}
-void iwl_mvm_apply_fw_smps_request(struct ieee80211_vif *vif)
+void iwl_mvm_update_link_smps(struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = mvmvif->mvm;
enum ieee80211_smps_mode mode = IEEE80211_SMPS_AUTOMATIC;
+ if (!link_conf)
+ return;
+
if (mvm->fw_static_smps_request &&
- vif->bss_conf.chandef.width == NL80211_CHAN_WIDTH_160 &&
- vif->bss_conf.he_support)
+ link_conf->chandef.width == NL80211_CHAN_WIDTH_160 &&
+ link_conf->he_support)
mode = IEEE80211_SMPS_STATIC;
- iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_FW, mode);
+ iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_FW, mode,
+ link_conf->link_id);
}
static void iwl_mvm_intf_dual_chain_req(void *data, u8 *mac,
struct ieee80211_vif *vif)
{
- iwl_mvm_apply_fw_smps_request(vif);
+ struct ieee80211_bss_conf *link_conf;
+ unsigned int link_id;
+
+ rcu_read_lock();
+
+ for_each_vif_active_link(vif, link_conf, link_id)
+ iwl_mvm_update_link_smps(vif, link_conf);
+
+ rcu_read_unlock();
}
static void iwl_mvm_rx_thermal_dual_chain_req(struct iwl_mvm *mvm,
RX_HANDLER_GRP(SYSTEM_GROUP, RFI_DEACTIVATE_NOTIF,
iwl_rfi_deactivate_notif_handler, RX_HANDLER_ASYNC_UNLOCKED,
struct iwl_rfi_deactivate_notif),
+
+ RX_HANDLER_GRP(LEGACY_GROUP,
+ WNM_80211V_TIMING_MEASUREMENT_NOTIFICATION,
+ iwl_mvm_time_sync_msmt_event, RX_HANDLER_SYNC,
+ struct iwl_time_msmt_notify),
+ RX_HANDLER_GRP(LEGACY_GROUP,
+ WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NOTIFICATION,
+ iwl_mvm_time_sync_msmt_confirm_event, RX_HANDLER_SYNC,
+ struct iwl_time_msmt_cfm_notify),
};
#undef RX_HANDLER
#undef RX_HANDLER_GRP
HCMD_NAME(SCAN_OFFLOAD_PROFILES_QUERY_CMD),
HCMD_NAME(BT_COEX_UPDATE_REDUCED_TXP),
HCMD_NAME(BT_COEX_CI),
+ HCMD_NAME(WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NOTIFICATION),
+ HCMD_NAME(WNM_80211V_TIMING_MEASUREMENT_NOTIFICATION),
HCMD_NAME(PHY_CONFIGURATION_CMD),
HCMD_NAME(CALIB_RES_NOTIF_PHY_DB),
HCMD_NAME(PHY_DB_CMD),
static const struct iwl_hcmd_names iwl_mvm_mac_conf_names[] = {
HCMD_NAME(CHANNEL_SWITCH_TIME_EVENT_CMD),
HCMD_NAME(SESSION_PROTECTION_CMD),
+ HCMD_NAME(MAC_CONFIG_CMD),
+ HCMD_NAME(LINK_CONFIG_CMD),
+ HCMD_NAME(STA_CONFIG_CMD),
+ HCMD_NAME(AUX_STA_CMD),
+ HCMD_NAME(STA_REMOVE_CMD),
+ HCMD_NAME(STA_DISABLE_TX_CMD),
HCMD_NAME(SESSION_PROTECTION_NOTIF),
HCMD_NAME(CHANNEL_SWITCH_START_NOTIF),
};
********************************/
hw = ieee80211_alloc_hw(sizeof(struct iwl_op_mode) +
sizeof(struct iwl_mvm),
+ iwl_mvm_has_mld_api(fw) ? &iwl_mvm_mld_hw_ops :
&iwl_mvm_hw_ops);
if (!hw)
return NULL;
mvm->sta_remove_requires_queue_remove =
trans_cfg.queue_alloc_cmd_ver > 0;
+ mvm->mld_api_is_used = iwl_mvm_has_mld_api(mvm->fw);
+
/* Configure transport layer */
iwl_trans_configure(mvm->trans, &trans_cfg);
else
memset(&mvm->rx_stats, 0, sizeof(struct mvm_statistics_rx));
+ iwl_mvm_init_time_sync(&mvm->time_sync);
+
mvm->debugfs_dir = dbgfs_dir;
mvm->mei_registered = !iwl_mei_register(mvm, &mei_ops);
kfree(mvm->error_recovery_buf);
mvm->error_recovery_buf = NULL;
+ iwl_mvm_ptp_remove(mvm);
+
iwl_trans_op_mode_leave(mvm->trans);
iwl_phy_db_free(mvm->phy_db);
unsigned long *data = _data;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- if (!mvmvif->phy_ctxt)
+ if (!mvmvif->deflink.phy_ctxt)
return;
if (vif->type == NL80211_IFTYPE_STATION ||
vif->type == NL80211_IFTYPE_AP)
- __set_bit(mvmvif->phy_ctxt->id, data);
+ __set_bit(mvmvif->deflink.phy_ctxt->id, data);
}
int iwl_mvm_phy_ctx_count(struct iwl_mvm *mvm)
#endif
for (ac = IEEE80211_AC_VO; ac <= IEEE80211_AC_BK; ac++) {
- if (!mvmvif->queue_params[ac].uapsd)
+ if (!mvmvif->deflink.queue_params[ac].uapsd)
continue;
if (!test_bit(IWL_MVM_STATUS_IN_D3, &mvm->status))
cmd->uapsd_ac_flags |= BIT(ac);
/* QNDP TID - the highest TID with no admission control */
- if (!tid_found && !mvmvif->queue_params[ac].acm) {
+ if (!tid_found && !mvmvif->deflink.queue_params[ac].acm) {
tid_found = true;
switch (ac) {
case IEEE80211_AC_VO:
static bool iwl_mvm_power_is_radar(struct ieee80211_vif *vif)
{
struct ieee80211_chanctx_conf *chanctx_conf;
- struct ieee80211_channel *chan;
+ struct ieee80211_bss_conf *link_conf;
bool radar_detect = false;
+ unsigned int link_id;
rcu_read_lock();
- chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
- WARN_ON(!chanctx_conf);
- if (chanctx_conf) {
- chan = chanctx_conf->def.chan;
- radar_detect = chan->flags & IEEE80211_CHAN_RADAR;
+ for_each_vif_active_link(vif, link_conf, link_id) {
+ chanctx_conf = rcu_dereference(link_conf->chanctx_conf);
+ /* this happens on link switching, just ignore inactive ones */
+ if (!chanctx_conf)
+ continue;
+
+ radar_detect = !!(chanctx_conf->def.chan->flags &
+ IEEE80211_CHAN_RADAR);
+ if (radar_detect)
+ goto out;
}
- rcu_read_unlock();
+out:
+ rcu_read_unlock();
return radar_detect;
}
/* The ap_sta_id is not expected to change during current association
* so no explicit protection is needed
*/
- if (mvmvif->ap_sta_id == *ap_sta_id)
- memcpy(mvmvif->uapsd_misbehaving_bssid, vif->bss_conf.bssid,
+ if (mvmvif->deflink.ap_sta_id == *ap_sta_id)
+ memcpy(mvmvif->uapsd_misbehaving_bssid,
+ vif->bss_conf.bssid,
ETH_ALEN);
}
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
bool *disable_ps = _data;
- if (mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX)
+ if (iwl_mvm_vif_is_active(mvmvif))
*disable_ps |= mvmvif->ps_disabled;
}
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_power_vifs *power_iterator = _data;
- bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX;
+ bool active;
if (!mvmvif->uploaded)
return;
+ active = iwl_mvm_vif_is_active(mvmvif);
+
switch (ieee80211_vif_type_p2p(vif)) {
case NL80211_IFTYPE_P2P_DEVICE:
break;
}
if (vifs->bss_active && vifs->p2p_active)
- client_same_channel = (bss_mvmvif->phy_ctxt->id ==
- p2p_mvmvif->phy_ctxt->id);
+ client_same_channel =
+ iwl_mvm_have_links_same_channel(bss_mvmvif, p2p_mvmvif);
+
if (vifs->bss_active && vifs->ap_active)
- ap_same_channel = (bss_mvmvif->phy_ctxt->id ==
- ap_mvmvif->phy_ctxt->id);
+ ap_same_channel =
+ iwl_mvm_have_links_same_channel(bss_mvmvif, ap_mvmvif);
/* clients are not stand alone: enable PM if DCM */
if (!(client_same_channel || ap_same_channel)) {
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2021 - 2023 Intel Corporation
+ */
+
+#include "mvm.h"
+#include "iwl-debug.h"
+#include <linux/timekeeping.h>
+#include <linux/math64.h>
+
+#define IWL_PTP_GP2_WRAP 0x100000000ULL
+#define IWL_PTP_WRAP_TIME (3600 * HZ)
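+/*
+ * GP2 is a 32-bit microsecond counter, so it wraps every 2^32 us, roughly
+ * 4295 seconds (~71.6 minutes). Re-reading it at least once per hour
+ * (IWL_PTP_WRAP_TIME) ensures that no more than one wraparound can happen
+ * between two consecutive reads, which is what the wrap_counter handling
+ * below relies on.
+ */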
+
+/* The scaled_ppm parameter is ppm (parts per million) with a 16-bit fractional
+ * part, which means that a scaled_ppm value of 1 actually means
+ * 2^-16 ppm, and 2^16=65536 is 1 ppm.
+ */
+#define SCALE_FACTOR 65536000000ULL
+#define IWL_PTP_WRAP_THRESHOLD_USEC (5000)
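+/*
+ * Worked example (illustrative only): adjfine() with scaled_ppm = 65536
+ * requests +1 ppm, so scaled_freq becomes SCALE_FACTOR + 65536 =
+ * 65536065536. iwl_mvm_ptp_get_adj_time() then stretches a raw 1 s GP2
+ * interval to
+ *
+ *	mul_u64_u64_div_u64(NSEC_PER_SEC, 65536065536, SCALE_FACTOR)
+ *		= 1000001000 ns
+ *
+ * i.e. exactly one extra microsecond per second.
+ */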
+
+#define IWL_PTP_GET_CROSS_TS_NUM 5
+
+static void iwl_mvm_ptp_update_new_read(struct iwl_mvm *mvm, u32 gp2)
+{
+ /* If the difference is above the threshold, assume it's a wraparound.
+ * Otherwise assume it's an old read and ignore it.
+ */
+ if (gp2 < mvm->ptp_data.last_gp2 &&
+ mvm->ptp_data.last_gp2 - gp2 < IWL_PTP_WRAP_THRESHOLD_USEC) {
+ IWL_DEBUG_INFO(mvm,
+ "PTP: ignore old read (gp2=%u, last_gp2=%u)\n",
+ gp2, mvm->ptp_data.last_gp2);
+ return;
+ }
+
+ if (gp2 < mvm->ptp_data.last_gp2) {
+ mvm->ptp_data.wrap_counter++;
+ IWL_DEBUG_INFO(mvm,
+ "PTP: wraparound detected (new counter=%u)\n",
+ mvm->ptp_data.wrap_counter);
+ }
+
+ mvm->ptp_data.last_gp2 = gp2;
+ schedule_delayed_work(&mvm->ptp_data.dwork, IWL_PTP_WRAP_TIME);
+}
+
+u64 iwl_mvm_ptp_get_adj_time(struct iwl_mvm *mvm, u64 base_time_ns)
+{
+ struct ptp_data *data = &mvm->ptp_data;
+ u64 last_gp2_ns = mvm->ptp_data.scale_update_gp2 * NSEC_PER_USEC;
+ u64 res;
+ u64 diff;
+
+ iwl_mvm_ptp_update_new_read(mvm,
+ div64_u64(base_time_ns, NSEC_PER_USEC));
+
+ IWL_DEBUG_INFO(mvm, "base_time_ns=%llu, wrap_counter=%u\n",
+ (unsigned long long)base_time_ns, data->wrap_counter);
+
+ base_time_ns = base_time_ns +
+ (data->wrap_counter * IWL_PTP_GP2_WRAP * NSEC_PER_USEC);
+
+ /* It is possible that a GP2 timestamp was received from fw before the
+ * last scale update. Since we don't know how to scale - ignore it.
+ */
+ if (base_time_ns < last_gp2_ns) {
+ IWL_DEBUG_INFO(mvm, "Time before scale update - ignore\n");
+ return 0;
+ }
+
+ diff = base_time_ns - last_gp2_ns;
+ IWL_DEBUG_INFO(mvm, "diff ns=%llu\n", (unsigned long long)diff);
+
+ diff = mul_u64_u64_div_u64(diff, data->scaled_freq,
+ SCALE_FACTOR);
+ IWL_DEBUG_INFO(mvm, "scaled diff ns=%llu\n", (unsigned long long)diff);
+
+ res = data->scale_update_adj_time_ns + data->delta + diff;
+
+ IWL_DEBUG_INFO(mvm, "base=%llu delta=%lld adj=%llu\n",
+ (unsigned long long)base_time_ns, (long long)data->delta,
+ (unsigned long long)res);
+ return res;
+}
+
+static int
+iwl_mvm_get_crosstimestamp_fw(struct iwl_mvm *mvm, u32 *gp2, u64 *sys_time)
+{
+ struct iwl_synced_time_cmd synced_time_cmd = {
+ .operation = cpu_to_le32(IWL_SYNCED_TIME_OPERATION_READ_BOTH)
+ };
+ struct iwl_host_cmd cmd = {
+ .id = WIDE_ID(DATA_PATH_GROUP, WNM_PLATFORM_PTM_REQUEST_CMD),
+ .flags = CMD_WANT_SKB,
+ .data[0] = &synced_time_cmd,
+ .len[0] = sizeof(synced_time_cmd),
+ };
+ struct iwl_synced_time_rsp *resp;
+ struct iwl_rx_packet *pkt;
+ int ret;
+ u64 gp2_10ns;
+
+ ret = iwl_mvm_send_cmd(mvm, &cmd);
+ if (ret)
+ return ret;
+
+ pkt = cmd.resp_pkt;
+
+ if (iwl_rx_packet_payload_len(pkt) != sizeof(*resp)) {
+ IWL_ERR(mvm, "PTP: Invalid command response\n");
+ iwl_free_resp(&cmd);
+ return -EIO;
+ }
+
+ resp = (void *)pkt->data;
+
+ gp2_10ns = (u64)le32_to_cpu(resp->gp2_timestamp_hi) << 32 |
+ le32_to_cpu(resp->gp2_timestamp_lo);
+ *gp2 = div_u64(gp2_10ns, 100);
+
+ *sys_time = (u64)le32_to_cpu(resp->platform_timestamp_hi) << 32 |
+ le32_to_cpu(resp->platform_timestamp_lo);
+
+ return ret;
+}
+
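+/*
+ * Take several paired GP2/system-time readings and keep the pair with the
+ * smallest (sys_time - GP2) gap, on the assumption that this pair was
+ * captured with the least delay between the two reads. Used when the
+ * firmware cannot return both timestamps atomically.
+ */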
+static void iwl_mvm_phc_get_crosstimestamp_loop(struct iwl_mvm *mvm,
+ ktime_t *sys_time, u32 *gp2)
+{
+ u64 diff = 0, new_diff;
+ u64 tmp_sys_time;
+ u32 tmp_gp2;
+ int i;
+
+ for (i = 0; i < IWL_PTP_GET_CROSS_TS_NUM; i++) {
+ iwl_mvm_get_sync_time(mvm, CLOCK_REALTIME, &tmp_gp2, NULL,
+ &tmp_sys_time);
+ new_diff = tmp_sys_time - ((u64)tmp_gp2 * NSEC_PER_USEC);
+ if (!diff || new_diff < diff) {
+ *sys_time = tmp_sys_time;
+ *gp2 = tmp_gp2;
+ diff = new_diff;
+ IWL_DEBUG_INFO(mvm, "PTP: new times: gp2=%u sys=%lld\n",
+ *gp2, *sys_time);
+ }
+ }
+}
+
+static int
+iwl_mvm_phc_get_crosstimestamp(struct ptp_clock_info *ptp,
+ struct system_device_crosststamp *xtstamp)
+{
+ struct iwl_mvm *mvm = container_of(ptp, struct iwl_mvm,
+ ptp_data.ptp_clock_info);
+ int ret = 0;
+ /* Raw value read from GP2 register in usec */
+ u32 gp2;
+ /* GP2 value in ns */
+ s64 gp2_ns;
+ /* System (wall) time */
+ ktime_t sys_time;
+
+ memset(xtstamp, 0, sizeof(struct system_device_crosststamp));
+
+ if (!mvm->ptp_data.ptp_clock) {
+ IWL_ERR(mvm, "No PHC clock registered\n");
+ return -ENODEV;
+ }
+
+ mutex_lock(&mvm->mutex);
+ if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SYNCED_TIME)) {
+ ret = iwl_mvm_get_crosstimestamp_fw(mvm, &gp2, &sys_time);
+
+ if (ret)
+ goto out;
+ } else {
+ iwl_mvm_phc_get_crosstimestamp_loop(mvm, &sys_time, &gp2);
+ }
+
+ gp2_ns = iwl_mvm_ptp_get_adj_time(mvm, (u64)gp2 * NSEC_PER_USEC);
+
+ IWL_INFO(mvm, "Got Sync Time: GP2:%u, last_GP2: %u, GP2_ns: %lld, sys_time: %lld\n",
+ gp2, mvm->ptp_data.last_gp2, gp2_ns, (s64)sys_time);
+
+ /* System monotonic raw time is not used */
+ xtstamp->device = (ktime_t)gp2_ns;
+ xtstamp->sys_realtime = sys_time;
+
+out:
+ mutex_unlock(&mvm->mutex);
+ return ret;
+}
+
+static void iwl_mvm_ptp_work(struct work_struct *wk)
+{
+ struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm,
+ ptp_data.dwork.work);
+ u32 gp2;
+
+ mutex_lock(&mvm->mutex);
+ gp2 = iwl_mvm_get_systime(mvm);
+ iwl_mvm_ptp_update_new_read(mvm, gp2);
+ mutex_unlock(&mvm->mutex);
+}
+
+static int iwl_mvm_ptp_gettime(struct ptp_clock_info *ptp,
+ struct timespec64 *ts)
+{
+ struct iwl_mvm *mvm = container_of(ptp, struct iwl_mvm,
+ ptp_data.ptp_clock_info);
+ u64 gp2;
+ u64 ns;
+
+ mutex_lock(&mvm->mutex);
+ gp2 = iwl_mvm_get_systime(mvm);
+ ns = iwl_mvm_ptp_get_adj_time(mvm, gp2 * NSEC_PER_USEC);
+ mutex_unlock(&mvm->mutex);
+
+ *ts = ns_to_timespec64(ns);
+ return 0;
+}
+
+static int iwl_mvm_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct iwl_mvm *mvm = container_of(ptp, struct iwl_mvm,
+ ptp_data.ptp_clock_info);
+ struct ptp_data *data = container_of(ptp, struct ptp_data,
+ ptp_clock_info);
+
+ mutex_lock(&mvm->mutex);
+ data->delta += delta;
+ IWL_DEBUG_INFO(mvm, "delta=%lld, new delta=%lld\n", (long long)delta,
+ (long long)data->delta);
+ mutex_unlock(&mvm->mutex);
+ return 0;
+}
+
+static int iwl_mvm_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct iwl_mvm *mvm = container_of(ptp, struct iwl_mvm,
+ ptp_data.ptp_clock_info);
+ struct ptp_data *data = &mvm->ptp_data;
+ u32 gp2;
+
+ mutex_lock(&mvm->mutex);
+
+ /* Must call iwl_mvm_ptp_get_adj_time() before updating
+ * data->scale_update_gp2 or data->scaled_freq since
+ * scale_update_adj_time_ns should reflect the previous scaled_freq.
+ */
+ gp2 = iwl_mvm_get_systime(mvm);
+ data->scale_update_adj_time_ns =
+ iwl_mvm_ptp_get_adj_time(mvm, gp2 * NSEC_PER_USEC);
+ data->scale_update_gp2 = gp2;
+ data->wrap_counter = 0;
+ data->delta = 0;
+
+ data->scaled_freq = SCALE_FACTOR + scaled_ppm;
+ IWL_DEBUG_INFO(mvm, "adjfine: scaled_ppm=%ld new=%llu\n",
+ scaled_ppm, (unsigned long long)data->scaled_freq);
+
+ mutex_unlock(&mvm->mutex);
+ return 0;
+}
+
+/* iwl_mvm_ptp_init - initialize PTP for devices which support it.
+ * @mvm: internal mvm structure, see &struct iwl_mvm.
+ *
+ * Performs the required steps for enabling PTP support.
+ */
+void iwl_mvm_ptp_init(struct iwl_mvm *mvm)
+{
+ /* Warn if the interface already has a ptp_clock defined */
+ if (WARN_ON(mvm->ptp_data.ptp_clock))
+ return;
+
+ mvm->ptp_data.ptp_clock_info.owner = THIS_MODULE;
+ mvm->ptp_data.ptp_clock_info.max_adj = 0x7fffffff;
+ mvm->ptp_data.ptp_clock_info.getcrosststamp =
+ iwl_mvm_phc_get_crosstimestamp;
+ mvm->ptp_data.ptp_clock_info.adjfine = iwl_mvm_ptp_adjfine;
+ mvm->ptp_data.ptp_clock_info.adjtime = iwl_mvm_ptp_adjtime;
+ mvm->ptp_data.ptp_clock_info.gettime64 = iwl_mvm_ptp_gettime;
+ mvm->ptp_data.scaled_freq = SCALE_FACTOR;
+
+ /* Give a short 'friendly name' to identify the PHC clock */
+ snprintf(mvm->ptp_data.ptp_clock_info.name,
+ sizeof(mvm->ptp_data.ptp_clock_info.name),
+ "%s", "iwlwifi-PTP");
+
+ INIT_DELAYED_WORK(&mvm->ptp_data.dwork, iwl_mvm_ptp_work);
+
+ mvm->ptp_data.ptp_clock =
+ ptp_clock_register(&mvm->ptp_data.ptp_clock_info, mvm->dev);
+
+ if (IS_ERR(mvm->ptp_data.ptp_clock)) {
+ IWL_ERR(mvm, "Failed to register PHC clock (%ld)\n",
+ PTR_ERR(mvm->ptp_data.ptp_clock));
+ mvm->ptp_data.ptp_clock = NULL;
+ } else if (mvm->ptp_data.ptp_clock) {
+ IWL_INFO(mvm, "Registered PHC clock: %s, with index: %d\n",
+ mvm->ptp_data.ptp_clock_info.name,
+ ptp_clock_index(mvm->ptp_data.ptp_clock));
+ }
+}
+
+/* iwl_mvm_ptp_remove - disable PTP device.
+ * @mvm: internal mvm structure, see &struct iwl_mvm.
+ *
+ * Disable PTP support.
+ */
+void iwl_mvm_ptp_remove(struct iwl_mvm *mvm)
+{
+ if (mvm->ptp_data.ptp_clock) {
+ IWL_INFO(mvm, "Unregistering PHC clock: %s, with index: %d\n",
+ mvm->ptp_data.ptp_clock_info.name,
+ ptp_clock_index(mvm->ptp_data.ptp_clock));
+
+ ptp_clock_unregister(mvm->ptp_data.ptp_clock);
+ mvm->ptp_data.ptp_clock = NULL;
+ memset(&mvm->ptp_data.ptp_clock_info, 0,
+ sizeof(mvm->ptp_data.ptp_clock_info));
+ mvm->ptp_data.last_gp2 = 0;
+ cancel_delayed_work_sync(&mvm->ptp_data.dwork);
+ }
+}
if (vif == data->disabled_vif)
return;
- if (!mvmvif->phy_ctxt)
+ if (!mvmvif->deflink.phy_ctxt)
return;
/* currently, PHY ID == binding ID */
- id = mvmvif->phy_ctxt->id;
+ id = mvmvif->deflink.phy_ctxt->id;
/* need at least one binding per PHY */
BUILD_BUG_ON(NUM_PHY_CTX > MAX_BINDINGS);
}
if (data->colors[id] < 0)
- data->colors[id] = mvmvif->phy_ctxt->color;
+ data->colors[id] = mvmvif->deflink.phy_ctxt->color;
else
- WARN_ON_ONCE(data->colors[id] != mvmvif->phy_ctxt->color);
+ WARN_ON_ONCE(data->colors[id] !=
+ mvmvif->deflink.phy_ctxt->color);
data->n_interfaces[id]++;
if (!mvmvif->ap_ibss_active)
return;
- phy_id = mvmvif->phy_ctxt->id;
+ phy_id = mvmvif->deflink.phy_ctxt->id;
beacon_int = mvm->noa_vif->bss_conf.beacon_int;
for (i = 0; i < MAX_BINDINGS; i++) {
#include "iwl-op-mode.h"
#include "mvm.h"
-static u8 rs_fw_bw_from_sta_bw(const struct ieee80211_sta *sta)
+static u8 rs_fw_bw_from_sta_bw(const struct ieee80211_link_sta *link_sta)
{
- switch (sta->deflink.bandwidth) {
+ switch (link_sta->bandwidth) {
case IEEE80211_STA_RX_BW_320:
return IWL_TLC_MNG_CH_WIDTH_320MHZ;
case IEEE80211_STA_RX_BW_160:
return fw_chains;
}
-static u8 rs_fw_sgi_cw_support(struct ieee80211_sta *sta)
+static u8 rs_fw_sgi_cw_support(struct ieee80211_link_sta *link_sta)
{
- struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
- struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
- struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ struct ieee80211_sta_ht_cap *ht_cap = &link_sta->ht_cap;
+ struct ieee80211_sta_vht_cap *vht_cap = &link_sta->vht_cap;
+ struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
u8 supp = 0;
if (he_cap->has_he)
}
static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
- struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta,
struct ieee80211_supported_band *sband)
{
- struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
- struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
- struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ struct ieee80211_sta_ht_cap *ht_cap = &link_sta->ht_cap;
+ struct ieee80211_sta_vht_cap *vht_cap = &link_sta->vht_cap;
+ struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
bool vht_ena = vht_cap->vht_supported;
u16 flags = 0;
}
static void
-rs_fw_vht_set_enabled_rates(const struct ieee80211_sta *sta,
+rs_fw_vht_set_enabled_rates(const struct ieee80211_link_sta *link_sta,
const struct ieee80211_sta_vht_cap *vht_cap,
struct iwl_tlc_config_cmd_v4 *cmd)
{
u16 supp;
int i, highest_mcs;
- u8 max_nss = sta->deflink.rx_nss;
+ u8 max_nss = link_sta->rx_nss;
struct ieee80211_vht_cap ieee_vht_cap = {
.vht_cap_info = cpu_to_le32(vht_cap->cap),
.supp_mcs = vht_cap->vht_mcs,
};
/* the station support only a single receive chain */
- if (sta->deflink.smps_mode == IEEE80211_SMPS_STATIC)
+ if (link_sta->smps_mode == IEEE80211_SMPS_STATIC)
max_nss = 1;
for (i = 0; i < max_nss && i < IWL_TLC_NSS_MAX; i++) {
continue;
supp = BIT(highest_mcs + 1) - 1;
- if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+ if (link_sta->bandwidth == IEEE80211_STA_RX_BW_20)
supp &= ~BIT(IWL_TLC_MNG_HT_RATE_MCS9);
cmd->ht_rates[i][IWL_TLC_MCS_PER_BW_80] = cpu_to_le16(supp);
* configuration is supported - only for MCS 0 since we already
* decoded the MCS bits anyway ourselves.
*/
- if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160 &&
+ if (link_sta->bandwidth == IEEE80211_STA_RX_BW_160 &&
ieee80211_get_vht_max_nss(&ieee_vht_cap,
IEEE80211_VHT_CHANWIDTH_160MHZ,
0, true, nss) >= nss)
}
static void
-rs_fw_he_set_enabled_rates(const struct ieee80211_sta *sta,
+rs_fw_he_set_enabled_rates(const struct ieee80211_link_sta *link_sta,
struct ieee80211_supported_band *sband,
struct iwl_tlc_config_cmd_v4 *cmd)
{
- const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ const struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
u16 mcs_160 = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_160);
u16 mcs_80 = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80);
u16 tx_mcs_80 =
u16 tx_mcs_160 =
le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_160);
int i;
- u8 nss = sta->deflink.rx_nss;
+ u8 nss = link_sta->rx_nss;
/* the station support only a single receive chain */
- if (sta->deflink.smps_mode == IEEE80211_SMPS_STATIC)
+ if (link_sta->smps_mode == IEEE80211_SMPS_STATIC)
nss = 1;
for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) {
}
}
-static void rs_fw_eht_set_enabled_rates(const struct ieee80211_sta *sta,
- struct ieee80211_supported_band *sband,
- struct iwl_tlc_config_cmd_v4 *cmd)
+static void
+rs_fw_eht_set_enabled_rates(const struct ieee80211_link_sta *link_sta,
+ struct ieee80211_supported_band *sband,
+ struct iwl_tlc_config_cmd_v4 *cmd)
{
/* peer RX mcs capa */
const struct ieee80211_eht_mcs_nss_supp *eht_rx_mcs =
- &sta->deflink.eht_cap.eht_mcs_nss_supp;
+ &link_sta->eht_cap.eht_mcs_nss_supp;
/* our TX mcs capa */
const struct ieee80211_eht_mcs_nss_supp *eht_tx_mcs =
&sband->iftype_data->eht_cap.eht_mcs_nss_supp;
struct ieee80211_eht_mcs_nss_supp_20mhz_only mcs_tx_20;
/* peer is 20Mhz only */
- if (!(sta->deflink.he_cap.he_cap_elem.phy_cap_info[0] &
+ if (!(link_sta->he_cap.he_cap_elem.phy_cap_info[0] &
IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
mcs_rx_20 = eht_rx_mcs->only_20mhz;
} else {
const struct ieee80211_eht_mcs_nss_supp_bw *mcs_tx =
rs_fw_rs_mcs2eht_mcs(bw, eht_tx_mcs);
- /* got unsuppored index for bw */
+ /* got unsupported index for bw */
if (!mcs_rx || !mcs_tx)
continue;
+ /* break out if we don't support the bandwidth */
+ if (cmd->max_ch_width < (bw + IWL_TLC_MNG_CH_WIDTH_80MHZ))
+ break;
+
rs_fw_set_eht_mcs_nss(cmd->ht_rates, bw,
MAX_NSS_MCS(9, mcs_rx, mcs_tx), GENMASK(9, 0));
rs_fw_set_eht_mcs_nss(cmd->ht_rates, bw,
}
/* the station support only a single receive chain */
- if (sta->deflink.smps_mode == IEEE80211_SMPS_STATIC ||
- sta->deflink.rx_nss < 2)
+ if (link_sta->smps_mode == IEEE80211_SMPS_STATIC ||
+ link_sta->rx_nss < 2)
memset(cmd->ht_rates[IWL_TLC_NSS_2], 0,
sizeof(cmd->ht_rates[IWL_TLC_NSS_2]));
}
-static void rs_fw_set_supp_rates(struct ieee80211_sta *sta,
+static void rs_fw_set_supp_rates(struct ieee80211_link_sta *link_sta,
struct ieee80211_supported_band *sband,
struct iwl_tlc_config_cmd_v4 *cmd)
{
int i;
u16 supp = 0;
unsigned long tmp; /* must be unsigned long for for_each_set_bit */
- const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
- const struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
- const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
+ const struct ieee80211_sta_ht_cap *ht_cap = &link_sta->ht_cap;
+ const struct ieee80211_sta_vht_cap *vht_cap = &link_sta->vht_cap;
+ const struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
/* non HT rates */
- tmp = sta->deflink.supp_rates[sband->band];
+ tmp = link_sta->supp_rates[sband->band];
for_each_set_bit(i, &tmp, BITS_PER_LONG)
supp |= BIT(sband->bitrates[i].hw_value);
cmd->mode = IWL_TLC_MNG_MODE_NON_HT;
/* HT/VHT rates */
- if (sta->deflink.eht_cap.has_eht) {
+ if (link_sta->eht_cap.has_eht) {
cmd->mode = IWL_TLC_MNG_MODE_EHT;
- rs_fw_eht_set_enabled_rates(sta, sband, cmd);
+ rs_fw_eht_set_enabled_rates(link_sta, sband, cmd);
} else if (he_cap->has_he) {
cmd->mode = IWL_TLC_MNG_MODE_HE;
- rs_fw_he_set_enabled_rates(sta, sband, cmd);
+ rs_fw_he_set_enabled_rates(link_sta, sband, cmd);
} else if (vht_cap->vht_supported) {
cmd->mode = IWL_TLC_MNG_MODE_VHT;
- rs_fw_vht_set_enabled_rates(sta, vht_cap, cmd);
+ rs_fw_vht_set_enabled_rates(link_sta, vht_cap, cmd);
} else if (ht_cap->ht_supported) {
cmd->mode = IWL_TLC_MNG_MODE_HT;
cmd->ht_rates[IWL_TLC_NSS_1][IWL_TLC_MCS_PER_BW_80] =
cpu_to_le16(ht_cap->mcs.rx_mask[0]);
/* the station support only a single receive chain */
- if (sta->deflink.smps_mode == IEEE80211_SMPS_STATIC)
+ if (link_sta->smps_mode == IEEE80211_SMPS_STATIC)
cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_MCS_PER_BW_80] =
0;
else
struct iwl_rx_packet *pkt = rxb_addr(rxb);
struct iwl_tlc_update_notif *notif;
struct ieee80211_sta *sta;
+ struct ieee80211_link_sta *link_sta;
struct iwl_mvm_sta *mvmsta;
+ struct iwl_mvm_link_sta *mvm_link_sta;
struct iwl_lq_sta_rs_fw *lq_sta;
u32 flags;
rcu_read_lock();
notif = (void *)pkt->data;
+ link_sta = rcu_dereference(mvm->fw_id_to_link_sta[notif->sta_id]);
sta = rcu_dereference(mvm->fw_id_to_mac_id[notif->sta_id]);
- if (IS_ERR_OR_NULL(sta)) {
+ if (IS_ERR_OR_NULL(sta) || !link_sta) {
/* can happen in the remove-station flow, where mvm removed the
* station internally before removing it from the FW
*/
flags = le32_to_cpu(notif->flags);
- lq_sta = &mvmsta->lq_sta.rs_fw;
+ mvm_link_sta = rcu_dereference(mvmsta->link[link_sta->link_id]);
+ if (!mvm_link_sta) {
+ IWL_DEBUG_RATE(mvm,
+ "Invalid mvmsta RCU pointer for link (%d) of sta id (%d) in TLC notification\n",
+ link_sta->link_id, notif->sta_id);
+ goto out;
+ }
+ lq_sta = &mvm_link_sta->lq_sta.rs_fw;
if (flags & IWL_TLC_NOTIF_FLAG_RATE) {
char pretty_rate[100];
u16 size = le32_to_cpu(notif->amsdu_size);
int i;
- if (sta->deflink.agg.max_amsdu_len < size) {
+ if (link_sta->agg.max_amsdu_len < size) {
/*
- * In debug sta->deflink.agg.max_amsdu_len < size
+ * In debug link_sta->agg.max_amsdu_len < size
* so also check with orig_amsdu_len which holds the
* original data before debugfs changed the value
*/
mvmsta->amsdu_enabled = le32_to_cpu(notif->amsdu_enabled);
mvmsta->max_amsdu_len = size;
- sta->deflink.agg.max_rc_amsdu_len = mvmsta->max_amsdu_len;
+ link_sta->agg.max_rc_amsdu_len = mvmsta->max_amsdu_len;
for (i = 0; i < IWL_MAX_TID_COUNT; i++) {
if (mvmsta->amsdu_enabled & BIT(i))
- sta->deflink.agg.max_tid_amsdu_len[i] =
+ link_sta->agg.max_tid_amsdu_len[i] =
iwl_mvm_max_amsdu_size(mvm, sta, i);
else
/*
* Not so elegant, but this will effectively
* prevent AMSDU on this TID
*/
- sta->deflink.agg.max_tid_amsdu_len[i] = 1;
+ link_sta->agg.max_tid_amsdu_len[i] = 1;
}
IWL_DEBUG_RATE(mvm,
rcu_read_unlock();
}
-u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta)
+u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta)
{
- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- const struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
- const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
+ const struct ieee80211_sta_vht_cap *vht_cap = &link_sta->vht_cap;
+ const struct ieee80211_sta_ht_cap *ht_cap = &link_sta->ht_cap;
+
+ if (WARN_ON_ONCE(!link_conf->chandef.chan))
+ return IEEE80211_MAX_MPDU_LEN_VHT_3895;
- if (mvmsta->vif->bss_conf.chandef.chan->band == NL80211_BAND_6GHZ) {
- switch (le16_get_bits(sta->deflink.he_6ghz_capa.capa,
+ if (link_conf->chandef.chan->band == NL80211_BAND_6GHZ) {
+ switch (le16_get_bits(link_sta->he_6ghz_capa.capa,
IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN)) {
case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454:
return IEEE80211_MAX_MPDU_LEN_VHT_11454;
}
void rs_fw_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta,
enum nl80211_band band, bool update)
{
struct ieee80211_hw *hw = mvm->hw;
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw;
u32 cmd_id = WIDE_ID(DATA_PATH_GROUP, TLC_MNG_CONFIG_CMD);
struct ieee80211_supported_band *sband = hw->wiphy->bands[band];
- u16 max_amsdu_len = rs_fw_get_max_amsdu_len(sta);
+ u16 max_amsdu_len = rs_fw_get_max_amsdu_len(sta, link_conf, link_sta);
+ struct iwl_mvm_link_sta *mvm_link_sta;
+ struct iwl_lq_sta_rs_fw *lq_sta;
struct iwl_tlc_config_cmd_v4 cfg_cmd = {
- .sta_id = mvmsta->sta_id,
.max_ch_width = update ?
- rs_fw_bw_from_sta_bw(sta) : RATE_MCS_CHAN_WIDTH_20,
- .flags = cpu_to_le16(rs_fw_get_config_flags(mvm, sta, sband)),
+ rs_fw_bw_from_sta_bw(link_sta) :
+ IWL_TLC_MNG_CH_WIDTH_20MHZ,
+ .flags = cpu_to_le16(rs_fw_get_config_flags(mvm, link_sta,
+ sband)),
.chains = rs_fw_set_active_chains(iwl_mvm_get_valid_tx_ant(mvm)),
- .sgi_ch_width_supp = rs_fw_sgi_cw_support(sta),
+ .sgi_ch_width_supp = rs_fw_sgi_cw_support(link_sta),
.max_mpdu_len = iwl_mvm_is_csum_supported(mvm) ?
cpu_to_le16(max_amsdu_len) : 0,
};
- int ret;
+ unsigned int link_id = link_conf->link_id;
int cmd_ver;
+ int ret;
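+ /* look up the per-link station data under RCU to get the FW station
+ * id and the per-link rate-scaling state
+ */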
+ rcu_read_lock();
+ mvm_link_sta = rcu_dereference(mvmsta->link[link_id]);
+ if (WARN_ON_ONCE(!mvm_link_sta)) {
+ rcu_read_unlock();
+ return;
+ }
+
+ cfg_cmd.sta_id = mvm_link_sta->sta_id;
+
+ lq_sta = &mvm_link_sta->lq_sta.rs_fw;
memset(lq_sta, 0, offsetof(typeof(*lq_sta), pers));
+ rcu_read_unlock();
+
#ifdef CONFIG_IWLWIFI_DEBUGFS
iwl_mvm_reset_frame_stats(mvm);
#endif
- rs_fw_set_supp_rates(sta, sband, &cfg_cmd);
+ rs_fw_set_supp_rates(link_sta, sband, &cfg_cmd);
/*
* since TLC offload works with one mode we can assume
void iwl_mvm_rs_add_sta(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta)
{
- struct iwl_lq_sta_rs_fw *lq_sta = &mvmsta->lq_sta.rs_fw;
+ unsigned int link_id;
IWL_DEBUG_RATE(mvm, "create station rate scale window\n");
- lq_sta->pers.drv = mvm;
- lq_sta->pers.sta_id = mvmsta->sta_id;
- lq_sta->pers.chains = 0;
- memset(lq_sta->pers.chain_signal, 0, sizeof(lq_sta->pers.chain_signal));
- lq_sta->pers.last_rssi = S8_MIN;
- lq_sta->last_rate_n_flags = 0;
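+ /* set up the per-link rate-scaling state for every allocated link */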
+ for (link_id = 0; link_id < ARRAY_SIZE(mvmsta->link); link_id++) {
+ struct iwl_lq_sta_rs_fw *lq_sta;
+ struct iwl_mvm_link_sta *link =
+ rcu_dereference_protected(mvmsta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+ if (!link)
+ continue;
+
+ lq_sta = &link->lq_sta.rs_fw;
+
+ lq_sta->pers.drv = mvm;
+ lq_sta->pers.sta_id = link->sta_id;
+ lq_sta->pers.chains = 0;
+ memset(lq_sta->pers.chain_signal, 0,
+ sizeof(lq_sta->pers.chain_signal));
+ lq_sta->pers.last_rssi = S8_MIN;
+ lq_sta->last_rate_n_flags = 0;
#ifdef CONFIG_MAC80211_DEBUGFS
- lq_sta->pers.dbg_fixed_rate = 0;
+ lq_sta->pers.dbg_fixed_rate = 0;
#endif
+ }
}
// SPDX-License-Identifier: GPL-2.0-only
/******************************************************************************
*
- * Copyright(c) 2005 - 2014, 2018 - 2021 Intel Corporation. All rights reserved.
+ * Copyright(c) 2005 - 2014, 2018 - 2022 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
*****************************************************************************/
(rate->index <= IWL_RATE_MCS_9_INDEX))
rate_str = ht_vht_rates[rate->index];
else
- rate_str = "BAD_RATE";
+ rate_str = NULL;
sprintf(buf, "(%s|%s|%s)", rs_pretty_lq_type(rate->type),
- iwl_rs_pretty_ant(rate->ant), rate_str);
+ iwl_rs_pretty_ant(rate->ant), rate_str ?: "BAD_RATE");
return buf;
}
return -EINVAL;
if (tbl->column != RS_COLUMN_INVALID) {
- struct lq_sta_pers *pers = &mvmsta->lq_sta.rs_drv.pers;
+ struct lq_sta_pers *pers = &mvmsta->deflink.lq_sta.rs_drv.pers;
pers->tx_stats[tbl->column][scale_index].total += attempts;
pers->tx_stats[tbl->column][scale_index].success += successes;
WARN_ON_ONCE(1);
}
} else if (ucode_rate & RATE_MCS_VHT_MSK_V1) {
- nss = ((ucode_rate & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS) + 1;
+ nss = FIELD_GET(RATE_MCS_NSS_MSK, ucode_rate) + 1;
if (nss == 1) {
rate->type = LQ_VHT_SISO;
WARN_ON_ONCE(1);
}
} else if (ucode_rate & RATE_MCS_HE_MSK_V1) {
- nss = ((ucode_rate & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS) + 1;
+ nss = FIELD_GET(RATE_MCS_NSS_MSK, ucode_rate) + 1;
if (nss == 1) {
rate->type = LQ_HE_SISO;
enum rs_action scale_action)
{
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ struct ieee80211_bss_conf *bss_conf = &mvmsta->vif->bss_conf;
int i;
- sta->deflink.agg.max_amsdu_len = rs_fw_get_max_amsdu_len(sta);
+ sta->deflink.agg.max_amsdu_len =
+ rs_fw_get_max_amsdu_len(sta, bss_conf, &sta->deflink);
/*
* In case TLC offload is not active amsdu_enabled is either 0xFFFF
else
mvmsta->amsdu_enabled = 0xFFFF;
- if (mvmsta->vif->bss_conf.he_support &&
+ if (bss_conf->he_support &&
!iwlwifi_mod_params.disable_11ax)
mvmsta->max_amsdu_len = sta->deflink.agg.max_amsdu_len;
else
struct iwl_mvm_sta *mvmsta,
struct ieee80211_rx_status *rx_status)
{
- struct iwl_lq_sta *lq_sta = &mvmsta->lq_sta.rs_drv;
+ struct iwl_lq_sta *lq_sta = &mvmsta->deflink.lq_sta.rs_drv;
int i;
lq_sta->pers.chains = rx_status->chains;
/* if vif isn't initialized mvm doesn't know about
* this station, so don't do anything with it
*/
- sta = NULL;
mvm_sta = NULL;
}
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_op_mode *op_mode = (struct iwl_op_mode *)mvm_rate;
struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
- struct iwl_lq_sta *lq_sta = &mvmsta->lq_sta.rs_drv;
+ struct iwl_lq_sta *lq_sta = &mvmsta->deflink.lq_sta.rs_drv;
IWL_DEBUG_RATE(mvm, "create station rate scale window\n");
nss = ((rate & RATE_HT_MCS_NSS_MSK_V1) >> RATE_HT_MCS_NSS_POS_V1) + 1;
} else if (rate & RATE_MCS_VHT_MSK_V1) {
mvm->drv_rx_stats.vht_frames++;
- nss = ((rate & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS) + 1;
+ nss = FIELD_GET(RATE_MCS_NSS_MSK, rate) + 1;
} else {
mvm->drv_rx_stats.legacy_frames++;
}
struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- struct iwl_lq_sta *lq_sta = &mvmsta->lq_sta.rs_drv;
+ struct iwl_lq_sta *lq_sta = &mvmsta->deflink.lq_sta.rs_drv;
struct ieee80211_supported_band *sband;
unsigned long supp; /* must be unsigned long for for_each_set_bit */
- lockdep_assert_held(&mvmsta->lq_sta.rs_drv.pers.lock);
+ lockdep_assert_held(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
/* clear all non-persistent lq data */
memset(lq_sta, 0, offsetof(typeof(*lq_sta), pers));
sband = hw->wiphy->bands[band];
- lq_sta->lq.sta_id = mvmsta->sta_id;
+ lq_sta->lq.sta_id = mvmsta->deflink.sta_id;
mvmsta->amsdu_enabled = 0;
mvmsta->max_amsdu_len = sta->cur->max_amsdu_len;
IWL_DEBUG_RATE(mvm,
"LQ: *** rate scale station global init for station %d ***\n",
- mvmsta->sta_id);
+ mvmsta->deflink.sta_id);
/* TODO: what is a good starting rate for STA? About middle? Maybe not
* the lowest or the highest rate.. Could consider using RSSI from
* previous packets? Need to have IEEE 802.1X auth succeed immediately
void *priv_sta, u32 changed)
{
struct iwl_op_mode *op_mode = mvm_r;
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm *mvm __maybe_unused = IWL_OP_MODE_GET_MVM(op_mode);
u8 tid;
- if (!iwl_mvm_sta_from_mac80211(sta)->vif)
+ if (!mvmsta->vif)
return;
/* Stop any ongoing aggregations as rs starts off assuming no agg */
for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++)
ieee80211_stop_tx_ba_session(sta, tid);
- iwl_mvm_rs_rate_init(mvm, sta, sband->band, true);
+ iwl_mvm_rs_rate_init(mvm, sta,
+ &mvmsta->vif->bss_conf, &sta->deflink,
+ sband->band, true);
}
static void __iwl_mvm_rs_tx_status(struct iwl_mvm *mvm,
u8 lq_color = RS_DRV_DATA_LQ_COLOR_GET(tlc_info);
u32 tx_resp_hwrate = (uintptr_t)info->status.status_driver_data[1];
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- struct iwl_lq_sta *lq_sta = &mvmsta->lq_sta.rs_drv;
+ struct iwl_lq_sta *lq_sta = &mvmsta->deflink.lq_sta.rs_drv;
if (!lq_sta->pers.drv) {
IWL_DEBUG_RATE(mvm, "Rate scaling not initialized yet.\n");
/* If it's locked we are in middle of init flow
* just wait for next tx status to update the lq_sta data
*/
- if (!spin_trylock(&mvmsta->lq_sta.rs_drv.pers.lock))
+ if (!spin_trylock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock))
return;
__iwl_mvm_rs_tx_status(mvm, sta, tid, info, ndp);
- spin_unlock(&mvmsta->lq_sta.rs_drv.pers.lock);
+ spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
}
#ifdef CONFIG_MAC80211_DEBUGFS
{
struct rs_bfer_active_iter_data *data = _data;
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- struct iwl_lq_cmd *lq_cmd = &mvmsta->lq_sta.rs_drv.lq;
+ struct iwl_lq_cmd *lq_cmd = &mvmsta->deflink.lq_sta.rs_drv.lq;
u32 ss_params = le32_to_cpu(lq_cmd->ss_params);
if (sta == data->exclude_sta)
prio = 1;
break;
default:
- WARN_ONCE(true, "viftype %d sta_id %d", viftype, sta->sta_id);
+ WARN_ONCE(true, "viftype %d sta_id %d", viftype,
+ sta->deflink.sta_id);
prio = -1;
}
}
IWL_DEBUG_RATE(mvm, "Found existing sta %d with BFER activated\n",
- bfer_mvmsta->sta_id);
+ bfer_mvmsta->deflink.sta_id);
/* Disallow BFER on another STA if active and we're a higher priority */
if (rs_bfer_priority_cmp(mvmsta, bfer_mvmsta) > 0) {
struct iwl_lq_cmd *bfersta_lq_cmd =
- &bfer_mvmsta->lq_sta.rs_drv.lq;
+ &bfer_mvmsta->deflink.lq_sta.rs_drv.lq;
u32 bfersta_ss_params = le32_to_cpu(bfersta_lq_cmd->ss_params);
bfersta_ss_params &= ~LQ_SS_BFER_ALLOWED;
ss_params |= LQ_SS_BFER_ALLOWED;
IWL_DEBUG_RATE(mvm,
"Lower priority BFER sta found (%d). Switch BFER\n",
- bfer_mvmsta->sta_id);
+ bfer_mvmsta->deflink.sta_id);
}
out:
lq_cmd->ss_params = cpu_to_le32(ss_params);
num_of_ant(initial_rate->ant) == 1)
lq_cmd->single_stream_ant_msk = initial_rate->ant;
- lq_cmd->agg_frame_cnt_limit = mvmsta->max_agg_bufsize;
+ lq_cmd->agg_frame_cnt_limit = lq_sta->pers.max_agg_bufsize;
/*
* In case of low latency, tell the firmware to leave a frame in the
if (rate & RATE_MCS_VHT_MSK_V1) {
type = "VHT";
mcs = rate & RATE_VHT_MCS_RATE_CODE_MSK;
- nss = ((rate & RATE_VHT_MCS_NSS_MSK)
- >> RATE_VHT_MCS_NSS_POS) + 1;
+ nss = FIELD_GET(RATE_MCS_NSS_MSK, rate) + 1;
} else if (rate & RATE_MCS_HT_MSK_V1) {
type = "HT";
mcs = rate & RATE_HT_MCS_INDEX_MSK_V1;
} else if (rate & RATE_MCS_HE_MSK_V1) {
type = "HE";
mcs = rate & RATE_VHT_MCS_RATE_CODE_MSK;
- nss = ((rate & RATE_VHT_MCS_NSS_MSK)
- >> RATE_VHT_MCS_NSS_POS) + 1;
+ nss = FIELD_GET(RATE_MCS_NSS_MSK, rate) + 1;
} else {
type = "Unknown"; /* shouldn't happen */
}
struct iwl_lq_sta *lq_sta = file->private_data;
struct iwl_mvm_sta *mvmsta =
- container_of(lq_sta, struct iwl_mvm_sta, lq_sta.rs_drv);
+ container_of(lq_sta, struct iwl_mvm_sta, deflink.lq_sta.rs_drv);
struct iwl_mvm *mvm;
struct iwl_scale_tbl_info *tbl = &(lq_sta->lq_info[lq_sta->active_tbl]);
struct rs_rate *rate = &tbl->rate;
struct iwl_lq_sta *lq_sta = priv_sta;
struct iwl_mvm_sta *mvmsta;
- mvmsta = container_of(lq_sta, struct iwl_mvm_sta, lq_sta.rs_drv);
+ mvmsta = container_of(lq_sta, struct iwl_mvm_sta,
+ deflink.lq_sta.rs_drv);
if (!mvmsta->vif)
return;
};
void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta,
enum nl80211_band band, bool update)
{
if (iwl_mvm_has_tlc_offload(mvm)) {
- rs_fw_rate_init(mvm, sta, band, update);
+ rs_fw_rate_init(mvm, sta, link_conf, link_sta, band, update);
} else {
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
- spin_lock(&mvmsta->lq_sta.rs_drv.pers.lock);
+ spin_lock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
rs_drv_rate_init(mvm, sta, band);
- spin_unlock(&mvmsta->lq_sta.rs_drv.pers.lock);
+ spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock);
}
}
static int rs_drv_tx_protection(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta,
bool enable)
{
- struct iwl_lq_cmd *lq = &mvmsta->lq_sta.rs_drv.lq;
+ struct iwl_lq_cmd *lq = &mvmsta->deflink.lq_sta.rs_drv.lq;
lockdep_assert_held(&mvm->mutex);
/* SPDX-License-Identifier: GPL-2.0-only */
/******************************************************************************
*
- * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2015 Intel Mobile Communications GmbH
* Copyright(c) 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 - 2019 Intel Corporation
+ * Copyright (C) 2003 - 2014, 2018 - 2022 Intel Corporation
*****************************************************************************/
#ifndef __rs_h__
/**
* struct iwl_lq_sta_rs_fw - rate and related statistics for RS in FW
* @last_rate_n_flags: last rate reported by FW
+ * @max_agg_bufsize: the maximal size of the AGG buffer for this station
* @sta_id: the id of the station
#ifdef CONFIG_MAC80211_DEBUGFS
* @dbg_fixed_rate: for debug, use fixed rate if not 0
/* last tx rate_n_flags */
u32 last_rate_n_flags;
+
/* packets destined for this STA are aggregated */
u8 is_agg;
u8 chains;
s8 chain_signal[IEEE80211_MAX_CHAINS];
s8 last_rssi;
+ u16 max_agg_bufsize;
struct rs_rate_stats tx_stats[RS_COLUMN_COUNT][IWL_RATE_COUNT];
struct iwl_mvm *drv;
spinlock_t lock; /* for races in reinit/update table */
/* Initialize station's rate scaling information after adding station */
void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
- enum nl80211_band band, bool init);
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta,
+ enum nl80211_band band, bool update);
/* Notify RS about Tx status */
void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
void iwl_mvm_rs_add_sta(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta);
void rs_fw_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta,
enum nl80211_band band, bool update);
int rs_fw_tx_protection(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta,
bool enable);
void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb);
-u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta);
+u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta,
+ struct ieee80211_bss_conf *link_conf,
+ struct ieee80211_link_sta *link_sta);
#endif /* __rs__ */
default:
/* Expected in monitor (not having the keys) */
if (!mvm->monitor_on)
- IWL_ERR(mvm, "Unhandled alg: 0x%x\n", rx_pkt_status);
+ IWL_WARN(mvm, "Unhandled alg: 0x%x\n", rx_pkt_status);
}
return 0;
if (mdata->opened_rx_ba_sessions ||
mdata->uapsd_nonagg_detect.detected ||
- (!mvmvif->queue_params[IEEE80211_AC_VO].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_VI].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_BE].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_BK].uapsd) ||
- mvmsta->sta_id != mvmvif->ap_sta_id)
+ (!mvmvif->deflink.queue_params[IEEE80211_AC_VO].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_VI].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_BE].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_BK].uapsd) ||
+ mvmsta->deflink.sta_id != mvmvif->deflink.ap_sta_id)
return;
if (rate_n_flags & RATE_MCS_HT_MSK_V1) {
ARRAY_SIZE(thresh_tpt)))
return;
thr = thresh_tpt[rate_n_flags & RATE_VHT_MCS_RATE_CODE_MSK];
- thr *= 1 + ((rate_n_flags & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS);
+ thr *= 1 + FIELD_GET(RATE_MCS_NSS_MSK, rate_n_flags);
}
thr <<= ((rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK_V1) >>
u8 stbc = (rate_n_flags & RATE_MCS_STBC_MSK) >>
RATE_MCS_STBC_POS;
rx_status->nss =
- ((rate_n_flags & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS) + 1;
+ FIELD_GET(RATE_MCS_NSS_MSK, rate_n_flags) + 1;
rx_status->rate_idx = rate_n_flags & RATE_VHT_MCS_RATE_CODE_MSK;
rx_status->encoding = RX_ENC_VHT;
rx_status->enc_flags |= stbc << RX_ENC_FLAG_STBC_SHIFT;
* data copied into the "data" struct, but rather the data from
* the notification directly.
*/
- mvmvif->beacon_stats.num_beacons =
+ mvmvif->deflink.beacon_stats.num_beacons =
le32_to_cpu(data->beacon_counter[vif_id]);
- mvmvif->beacon_stats.avg_signal =
+ mvmvif->deflink.beacon_stats.avg_signal =
-data->beacon_average_energy[vif_id];
if (mvmvif->id != id)
* request to clear statistics
*/
if (le32_to_cpu(data->flags) & IWL_STATISTICS_REPLY_FLG_CLEAR)
- mvmvif->beacon_stats.accu_num_beacons +=
- mvmvif->beacon_stats.num_beacons;
+ mvmvif->deflink.beacon_stats.accu_num_beacons +=
+ mvmvif->deflink.beacon_stats.num_beacons;
iwl_mvm_update_vif_sig(vif, sig);
}
mac_stats = &data->per_mac_stats[vif_id];
- mvmvif->beacon_stats.num_beacons =
+ mvmvif->deflink.beacon_stats.num_beacons =
le32_to_cpu(mac_stats->beacon_counter);
- mvmvif->beacon_stats.avg_signal =
+ mvmvif->deflink.beacon_stats.avg_signal =
-le32_to_cpu(mac_stats->beacon_average_energy);
/* make sure that beacon statistics don't go backwards with TCM
* request to clear statistics
*/
if (le32_to_cpu(data->flags) & IWL_STATISTICS_REPLY_FLG_CLEAR)
- mvmvif->beacon_stats.accu_num_beacons +=
- mvmvif->beacon_stats.num_beacons;
+ mvmvif->deflink.beacon_stats.accu_num_beacons +=
+ mvmvif->deflink.beacon_stats.num_beacons;
sig = -le32_to_cpu(mac_stats->beacon_filter_average_energy);
iwl_mvm_update_vif_sig(vif, sig);
{
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
u8 *energy = _data;
- u32 sta_id = mvmsta->sta_id;
+ u32 sta_id = mvmsta->deflink.sta_id;
if (WARN_ONCE(sta_id >= IWL_MVM_STATION_COUNT_MAX, "sta_id %d >= %d",
sta_id, IWL_MVM_STATION_COUNT_MAX))
return;
if (energy[sta_id])
- mvmsta->avg_energy = energy[sta_id];
+ mvmsta->deflink.avg_energy = energy[sta_id];
}
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
#include "iwl-trans.h"
#include "mvm.h"
#include "fw-api.h"
+#include "time-sync.h"
static inline int iwl_mvm_check_pn(struct iwl_mvm *mvm, struct sk_buff *skb,
int queue, struct ieee80211_sta *sta)
return 0;
}
+/* put a TLV on the skb and return the data pointer
+ *
+ * Also pad the length to a multiple of 4 and zero out the data part
+ */
+static void *
+iwl_mvm_radiotap_put_tlv(struct sk_buff *skb, u16 type, u16 len)
+{
+ struct ieee80211_radiotap_tlv *tlv;
+
+ tlv = skb_put(skb, sizeof(*tlv));
+ tlv->type = cpu_to_le16(type);
+ tlv->len = cpu_to_le16(len);
+ return skb_put_zero(skb, ALIGN(len, 4));
+}
+
static void iwl_mvm_add_rtap_sniffer_config(struct iwl_mvm *mvm,
struct sk_buff *skb)
{
struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
- struct ieee80211_vendor_radiotap *radiotap;
- const int size = sizeof(*radiotap) + sizeof(__le16);
+ struct ieee80211_radiotap_vendor_content *radiotap;
+ const u16 vendor_data_len = sizeof(mvm->cur_aid);
if (!mvm->cur_aid)
return;
- /* ensure alignment */
- BUILD_BUG_ON((size + 2) % 4);
+ radiotap = iwl_mvm_radiotap_put_tlv(skb,
+ IEEE80211_RADIOTAP_VENDOR_NAMESPACE,
+ sizeof(*radiotap) + vendor_data_len);
- radiotap = skb_put(skb, size + 2);
- radiotap->align = 1;
/* Intel OUI */
radiotap->oui[0] = 0xf6;
radiotap->oui[1] = 0x54;
radiotap->oui[2] = 0x25;
/* radiotap sniffer config sub-namespace */
- radiotap->subns = 1;
- radiotap->present = 0x1;
- radiotap->len = size - sizeof(*radiotap);
- radiotap->pad = 2;
+ radiotap->oui_subtype = 1;
+ radiotap->vendor_type = 0;
/* fill the data now */
memcpy(radiotap->data, &mvm->cur_aid, sizeof(mvm->cur_aid));
- /* and clear the padding */
- memset(radiotap->data + sizeof(__le16), 0, radiotap->pad);
- rx_status->flag |= RX_FLAG_RADIOTAP_VENDOR_DATA;
+ rx_status->flag |= RX_FLAG_RADIOTAP_TLV_AT_END;
}
/* iwl_mvm_pass_packet_to_mac80211 - passes the packet to mac80211 */
static void iwl_mvm_pass_packet_to_mac80211(struct iwl_mvm *mvm,
struct napi_struct *napi,
struct sk_buff *skb, int queue,
- struct ieee80211_sta *sta)
+ struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta)
{
- if (iwl_mvm_check_pn(mvm, skb, queue, sta))
+ if (unlikely(iwl_mvm_check_pn(mvm, skb, queue, sta))) {
kfree_skb(skb);
- else
- ieee80211_rx_napi(mvm->hw, sta, skb, napi);
+ return;
+ }
+
+ if (sta && sta->valid_links && link_sta) {
+ struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
+
+ rx_status->link_valid = 1;
+ rx_status->link_id = link_sta->link_id;
+ }
+
+ ieee80211_rx_napi(mvm->hw, sta, skb, napi);
}
static void iwl_mvm_get_signal_strength(struct iwl_mvm *mvm,
*/
if (!is_multicast_ether_addr(hdr->addr1) &&
!mvm->monitor_on && net_ratelimit())
- IWL_ERR(mvm, "Unhandled alg: 0x%x\n", status);
+ IWL_WARN(mvm, "Unhandled alg: 0x%x\n", status);
}
return 0;
while ((skb = __skb_dequeue(skb_list))) {
iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb,
reorder_buf->queue,
- sta);
+ sta, NULL /* FIXME */);
reorder_buf->num_stored--;
}
}
return false;
}
- if (WARN(tid != baid_data->tid || mvm_sta->sta_id != baid_data->sta_id,
+ if (WARN(tid != baid_data->tid ||
+ mvm_sta->deflink.sta_id != baid_data->sta_id,
"baid 0x%x is mapped to sta:%d tid:%d, but was received for sta:%d tid:%d\n",
- baid, baid_data->sta_id, baid_data->tid, mvm_sta->sta_id,
+ baid, baid_data->sta_id, baid_data->tid, mvm_sta->deflink.sta_id,
tid))
return false;
struct iwl_mvm_rx_phy_data {
enum iwl_rx_phy_info_type info_type;
- __le32 d0, d1, d2, d3;
+ __le32 d0, d1, d2, d3, eht_d4, d5;
__le16 d4;
+ bool with_data;
+ bool first_subframe;
+ __le32 rx_vec[4];
u32 rate_n_flags;
u32 gp2_on_air_rise;
}
}
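+/* helpers for filling radiotap EHT/U-SIG fields from firmware data:
+ * LE32_DEC_ENC() extracts dec_bits from a __le32 value and re-encodes the
+ * result into enc_bits of a __le32; IWL_MVM_ENC_USIG_VALUE_MASK() marks
+ * enc_bits as known in the U-SIG element and fills in the decoded value
+ */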
+#define LE32_DEC_ENC(value, dec_bits, enc_bits) \
+ le32_encode_bits(le32_get_bits(value, dec_bits), enc_bits)
+
+#define IWL_MVM_ENC_USIG_VALUE_MASK(usig, in_value, dec_bits, enc_bits) do { \
+ typeof(enc_bits) _enc_bits = enc_bits; \
+ typeof(usig) _usig = usig; \
+ (_usig)->mask |= cpu_to_le32(_enc_bits); \
+ (_usig)->value |= LE32_DEC_ENC(in_value, dec_bits, _enc_bits); \
+} while (0)
+
+#define __IWL_MVM_ENC_EHT_RU(rt_data, rt_ru, fw_data, fw_ru) \
+ eht->data[(rt_data)] |= \
+ (cpu_to_le32 \
+ (IEEE80211_RADIOTAP_EHT_DATA ## rt_data ## _RU_ALLOC_CC_ ## rt_ru ## _KNOWN) | \
+ LE32_DEC_ENC(data ## fw_data, \
+ IWL_RX_PHY_DATA ## fw_data ## _EHT_MU_EXT_RU_ALLOC_ ## fw_ru, \
+ IEEE80211_RADIOTAP_EHT_DATA ## rt_data ## _RU_ALLOC_CC_ ## rt_ru))
+
+#define _IWL_MVM_ENC_EHT_RU(rt_data, rt_ru, fw_data, fw_ru) \
+ __IWL_MVM_ENC_EHT_RU(rt_data, rt_ru, fw_data, fw_ru)
+
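+/* The IEEE80211_RADIOTAP_RU_DATA_* values give the eht->data[] word that
+ * carries each content-channel RU allocation field, and IWL_RX_RU_DATA_*
+ * gives the phy data word (data2..data4) holding the firmware A1..D2
+ * values; IWL_MVM_ENC_EHT_RU() below combines the two mappings.
+ */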
+#define IEEE80211_RADIOTAP_RU_DATA_1_1_1 1
+#define IEEE80211_RADIOTAP_RU_DATA_2_1_1 2
+#define IEEE80211_RADIOTAP_RU_DATA_1_1_2 2
+#define IEEE80211_RADIOTAP_RU_DATA_2_1_2 2
+#define IEEE80211_RADIOTAP_RU_DATA_1_2_1 3
+#define IEEE80211_RADIOTAP_RU_DATA_2_2_1 3
+#define IEEE80211_RADIOTAP_RU_DATA_1_2_2 3
+#define IEEE80211_RADIOTAP_RU_DATA_2_2_2 4
+
+#define IWL_RX_RU_DATA_A1 2
+#define IWL_RX_RU_DATA_A2 2
+#define IWL_RX_RU_DATA_B1 2
+#define IWL_RX_RU_DATA_B2 3
+#define IWL_RX_RU_DATA_C1 3
+#define IWL_RX_RU_DATA_C2 3
+#define IWL_RX_RU_DATA_D1 4
+#define IWL_RX_RU_DATA_D2 4
+
+#define IWL_MVM_ENC_EHT_RU(rt_ru, fw_ru) \
+ _IWL_MVM_ENC_EHT_RU(IEEE80211_RADIOTAP_RU_DATA_ ## rt_ru, \
+ rt_ru, \
+ IWL_RX_RU_DATA_ ## fw_ru, \
+ fw_ru)
+
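+/* Fill the MU PPDU specific U-SIG and RU allocation radiotap fields,
+ * either from the RX descriptor phy data (with_data) or from the USIG_A
+ * words of a no-data notification
+ */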
+static void iwl_mvm_decode_eht_ext_mu(struct iwl_mvm *mvm,
+ struct iwl_mvm_rx_phy_data *phy_data,
+ struct ieee80211_rx_status *rx_status,
+ struct ieee80211_radiotap_eht *eht,
+ struct ieee80211_radiotap_eht_usig *usig)
+{
+ if (phy_data->with_data) {
+ __le32 data1 = phy_data->d1;
+ __le32 data2 = phy_data->d2;
+ __le32 data3 = phy_data->d3;
+ __le32 data4 = phy_data->eht_d4;
+ __le32 data5 = phy_data->d5;
+ u32 phy_bw = phy_data->rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK;
+
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data5,
+ IWL_RX_PHY_DATA5_EHT_TYPE_AND_COMP,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B0_B1_PPDU_TYPE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data5,
+ IWL_RX_PHY_DATA5_EHT_MU_PUNC_CH_CODE,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B3_B7_PUNCTURED_INFO);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data4,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_SIGB_MCS,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B9_B10_SIG_MCS);
+ IWL_MVM_ENC_USIG_VALUE_MASK
+ (usig, data1, IWL_RX_PHY_DATA1_EHT_MU_NUM_SIG_SYM_USIGA2,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B11_B15_EHT_SIG_SYMBOLS);
+
+ eht->user_info[0] |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID_KNOWN) |
+ LE32_DEC_ENC(data5, IWL_RX_PHY_DATA5_EHT_MU_STA_ID_USR,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID);
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_NR_NON_OFDMA_USERS_M);
+ eht->data[7] |= LE32_DEC_ENC
+ (data5, IWL_RX_PHY_DATA5_EHT_MU_NUM_USR_NON_OFDMA,
+ IEEE80211_RADIOTAP_EHT_DATA7_NUM_OF_NON_OFDMA_USERS);
+
+ /*
+ * Hardware labels the content channels/RU allocation values
+ * as follows:
+ * Content Channel 1 Content Channel 2
+ * 20 MHz: A1
+ * 40 MHz: A1 B1
+ * 80 MHz: A1 C1 B1 D1
+ * 160 MHz: A1 C1 A2 C2 B1 D1 B2 D2
+ * 320 MHz: A1 C1 A2 C2 A3 C3 A4 C4 B1 D1 B2 D2 B3 D3 B4 D4
+ *
+ * However firmware can only give us A1-D2, so the higher
+ * frequencies are missing.
+ */
+
+ switch (phy_bw) {
+ case RATE_MCS_CHAN_WIDTH_320:
+ /* additional values are missing in RX metadata */
+ case RATE_MCS_CHAN_WIDTH_160:
+ /* content channel 1 */
+ IWL_MVM_ENC_EHT_RU(1_2_1, A2);
+ IWL_MVM_ENC_EHT_RU(1_2_2, C2);
+ /* content channel 2 */
+ IWL_MVM_ENC_EHT_RU(2_2_1, B2);
+ IWL_MVM_ENC_EHT_RU(2_2_2, D2);
+ fallthrough;
+ case RATE_MCS_CHAN_WIDTH_80:
+ /* content channel 1 */
+ IWL_MVM_ENC_EHT_RU(1_1_2, C1);
+ /* content channel 2 */
+ IWL_MVM_ENC_EHT_RU(2_1_2, D1);
+ fallthrough;
+ case RATE_MCS_CHAN_WIDTH_40:
+ /* content channel 2 */
+ IWL_MVM_ENC_EHT_RU(2_1_1, B1);
+ fallthrough;
+ case RATE_MCS_CHAN_WIDTH_20:
+ IWL_MVM_ENC_EHT_RU(1_1_1, A1);
+ break;
+ }
+ } else {
+ __le32 usig_a1 = phy_data->rx_vec[0];
+ __le32 usig_a2 = phy_data->rx_vec[1];
+
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a1,
+ IWL_RX_USIG_A1_DISREGARD,
+ IEEE80211_RADIOTAP_EHT_USIG1_MU_B20_B24_DISREGARD);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a1,
+ IWL_RX_USIG_A1_VALIDATE,
+ IEEE80211_RADIOTAP_EHT_USIG1_MU_B25_VALIDATE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_PPDU_TYPE,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B0_B1_PPDU_TYPE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_USIG2_VALIDATE_B2,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B2_VALIDATE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_PUNC_CHANNEL,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B3_B7_PUNCTURED_INFO);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_USIG2_VALIDATE_B8,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B8_VALIDATE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_SIG_MCS,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B9_B10_SIG_MCS);
+ IWL_MVM_ENC_USIG_VALUE_MASK
+ (usig, usig_a2, IWL_RX_USIG_A2_EHT_SIG_SYM_NUM,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B11_B15_EHT_SIG_SYMBOLS);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_CRC_OK,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B16_B19_CRC);
+ }
+}
+
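+/* Same for trigger-based (TB) PPDUs: PPDU type and spatial reuse fields */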
+static void iwl_mvm_decode_eht_ext_tb(struct iwl_mvm *mvm,
+ struct iwl_mvm_rx_phy_data *phy_data,
+ struct ieee80211_rx_status *rx_status,
+ struct ieee80211_radiotap_eht *eht,
+ struct ieee80211_radiotap_eht_usig *usig)
+{
+ if (phy_data->with_data) {
+ __le32 data5 = phy_data->d5;
+
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data5,
+ IWL_RX_PHY_DATA5_EHT_TYPE_AND_COMP,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B0_B1_PPDU_TYPE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data5,
+ IWL_RX_PHY_DATA5_EHT_TB_SPATIAL_REUSE1,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B3_B6_SPATIAL_REUSE_1);
+
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, data5,
+ IWL_RX_PHY_DATA5_EHT_TB_SPATIAL_REUSE2,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B7_B10_SPATIAL_REUSE_2);
+ } else {
+ __le32 usig_a1 = phy_data->rx_vec[0];
+ __le32 usig_a2 = phy_data->rx_vec[1];
+
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a1,
+ IWL_RX_USIG_A1_DISREGARD,
+ IEEE80211_RADIOTAP_EHT_USIG1_TB_B20_B25_DISREGARD);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_PPDU_TYPE,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B0_B1_PPDU_TYPE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_USIG2_VALIDATE_B2,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B2_VALIDATE);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_TRIG_SPATIAL_REUSE_1,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B3_B6_SPATIAL_REUSE_1);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_TRIG_SPATIAL_REUSE_2,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B7_B10_SPATIAL_REUSE_2);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_TRIG_USIG2_DISREGARD,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B11_B15_DISREGARD);
+ IWL_MVM_ENC_USIG_VALUE_MASK(usig, usig_a2,
+ IWL_RX_USIG_A2_EHT_CRC_OK,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B16_B19_CRC);
+ }
+}
+
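+/* Translate the TB-format RU allocation index (already stored in
+ * eht->data[8]) into the nl80211 RU allocation enum and mark the
+ * bandwidth as RU-based
+ */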
+static void iwl_mvm_decode_eht_ru(struct iwl_mvm *mvm,
+ struct ieee80211_rx_status *rx_status,
+ struct ieee80211_radiotap_eht *eht)
+{
+ u32 ru = le32_get_bits(eht->data[8],
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B7_B1);
+ enum nl80211_eht_ru_alloc nl_ru;
+
+ /* Using D1.5 Table 9-53a - Encoding of PS160 and RU Allocation subfields
+ * in an EHT variant User Info field
+ */
+
+ switch (ru) {
+ case 0 ... 36:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_26;
+ break;
+ case 37 ... 52:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_52;
+ break;
+ case 53 ... 60:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_106;
+ break;
+ case 61 ... 64:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_242;
+ break;
+ case 65 ... 66:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_484;
+ break;
+ case 67:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_996;
+ break;
+ case 68:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_2x996;
+ break;
+ case 69:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_4x996;
+ break;
+ case 70 ... 81:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_52P26;
+ break;
+ case 82 ... 89:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_106P26;
+ break;
+ case 90 ... 93:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_484P242;
+ break;
+ case 94 ... 95:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_996P484;
+ break;
+ case 96 ... 99:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_996P484P242;
+ break;
+ case 100 ... 103:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_2x996P484;
+ break;
+ case 104:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_3x996;
+ break;
+ case 105 ... 106:
+ nl_ru = NL80211_RATE_INFO_EHT_RU_ALLOC_3x996P484;
+ break;
+ default:
+ return;
+ }
+
+ rx_status->bw = RATE_INFO_BW_EHT_RU;
+ rx_status->eht.ru = nl_ru;
+}
+
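+/* Fill the common radiotap EHT and U-SIG fields from the EHT PHY data,
+ * taking them either from the RX descriptor (with_data) or from the
+ * rx_vec of a no-data notification
+ */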
+static void iwl_mvm_decode_eht_phy_data(struct iwl_mvm *mvm,
+ struct iwl_mvm_rx_phy_data *phy_data,
+ struct ieee80211_rx_status *rx_status,
+ struct ieee80211_radiotap_eht *eht,
+ struct ieee80211_radiotap_eht_usig *usig)
+
+{
+ __le32 data0 = phy_data->d0;
+ __le32 data1 = phy_data->d1;
+ __le32 usig_a1 = phy_data->rx_vec[0];
+ u8 info_type = phy_data->info_type;
+
+ /* Not in EHT range */
+ if (info_type < IWL_RX_PHY_INFO_TYPE_EHT_MU ||
+ info_type > IWL_RX_PHY_INFO_TYPE_EHT_TB_EXT)
+ return;
+
+ usig->common |= cpu_to_le32
+ (IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL_KNOWN |
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR_KNOWN);
+ if (phy_data->with_data) {
+ usig->common |= LE32_DEC_ENC(data0,
+ IWL_RX_PHY_DATA0_EHT_UPLINK,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL);
+ usig->common |= LE32_DEC_ENC(data0,
+ IWL_RX_PHY_DATA0_EHT_BSS_COLOR_MASK,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR);
+ } else {
+ usig->common |= LE32_DEC_ENC(usig_a1,
+ IWL_RX_USIG_A1_UL_FLAG,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL);
+ usig->common |= LE32_DEC_ENC(usig_a1,
+ IWL_RX_USIG_A1_BSS_COLOR,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR);
+ }
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_SPATIAL_REUSE);
+ eht->data[0] |= LE32_DEC_ENC(data0,
+ IWL_RX_PHY_DATA0_ETH_SPATIAL_REUSE_MASK,
+ IEEE80211_RADIOTAP_EHT_DATA0_SPATIAL_REUSE);
+
+ /* All RU allocation size/index values are in TB format */
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_RU_ALLOC_TB_FMT);
+ eht->data[8] |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_PS160,
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_PS_160);
+ eht->data[8] |= LE32_DEC_ENC(data1, IWL_RX_PHY_DATA1_EHT_B0,
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B0);
+ eht->data[8] |= LE32_DEC_ENC(data1, IWL_RX_PHY_DATA1_EHT_RU_B1_B7_ALLOC,
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B7_B1);
+
+ iwl_mvm_decode_eht_ru(mvm, rx_status, eht);
+
+ /* We only get here if IWL_RX_MPDU_PHY_TSF_OVERLOAD is set, which is
+ * only the case in monitor mode, so there is no need to check for
+ * monitor mode
+ */
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_PRIMARY_80);
+ eht->data[1] |=
+ le32_encode_bits(mvm->monitor_p80,
+ IEEE80211_RADIOTAP_EHT_DATA1_PRIMARY_80);
+
+ usig->common |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP_KNOWN);
+ if (phy_data->with_data)
+ usig->common |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_TXOP_DUR_MASK,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP);
+ else
+ usig->common |= LE32_DEC_ENC(usig_a1, IWL_RX_USIG_A1_TXOP_DURATION,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP);
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_LDPC_EXTRA_SYM_OM);
+ eht->data[0] |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_LDPC_EXT_SYM,
+ IEEE80211_RADIOTAP_EHT_DATA0_LDPC_EXTRA_SYM_OM);
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_PRE_PADD_FACOR_OM);
+ eht->data[0] |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_PRE_FEC_PAD_MASK,
+ IEEE80211_RADIOTAP_EHT_DATA0_PRE_PADD_FACOR_OM);
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_PE_DISAMBIGUITY_OM);
+ eht->data[0] |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_PE_DISAMBIG,
+ IEEE80211_RADIOTAP_EHT_DATA0_PE_DISAMBIGUITY_OM);
+
+ /* TODO: what about IWL_RX_PHY_DATA0_EHT_BW320_SLOT */
+
+ if (!le32_get_bits(data0, IWL_RX_PHY_DATA0_EHT_SIGA_CRC_OK))
+ usig->common |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_USIG_COMMON_BAD_USIG_CRC);
+
+ usig->common |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER_KNOWN);
+ usig->common |= LE32_DEC_ENC(data0, IWL_RX_PHY_DATA0_EHT_PHY_VER,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER);
+
+ /*
+ * TODO: what about TB - IWL_RX_PHY_DATA1_EHT_TB_PILOT_TYPE,
+ * IWL_RX_PHY_DATA1_EHT_TB_LOW_SS
+ */
+
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_EHT_LTF);
+ eht->data[0] |= LE32_DEC_ENC(data1, IWL_RX_PHY_DATA1_EHT_SIG_LTF_NUM,
+ IEEE80211_RADIOTAP_EHT_DATA0_EHT_LTF);
+
+ if (info_type == IWL_RX_PHY_INFO_TYPE_EHT_TB_EXT ||
+ info_type == IWL_RX_PHY_INFO_TYPE_EHT_TB)
+ iwl_mvm_decode_eht_ext_tb(mvm, phy_data, rx_status, eht, usig);
+
+ if (info_type == IWL_RX_PHY_INFO_TYPE_EHT_MU_EXT ||
+ info_type == IWL_RX_PHY_INFO_TYPE_EHT_MU)
+ iwl_mvm_decode_eht_ext_mu(mvm, phy_data, rx_status, eht, usig);
+}
+
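+/* Add the radiotap EHT and U-SIG TLVs to the skb and fill them, along
+ * with the rx_status GI/LTF and (for data frames) per-user MCS/NSS info,
+ * from rate_n_flags and the EHT PHY data
+ */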
+static void iwl_mvm_rx_eht(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct iwl_mvm_rx_phy_data *phy_data,
+ int queue)
+{
+ struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
+
+ struct ieee80211_radiotap_eht *eht;
+ struct ieee80211_radiotap_eht_usig *usig;
+ size_t eht_len = sizeof(*eht);
+
+ u32 rate_n_flags = phy_data->rate_n_flags;
+ u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK;
+ /* EHT and HE have the same values for LTF */
+ u8 ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_UNKNOWN;
+ u16 phy_info = phy_data->phy_info;
+ u32 bw;
+
+ /* u32 for 1 user_info */
+ if (phy_data->with_data)
+ eht_len += sizeof(u32);
+
+ eht = iwl_mvm_radiotap_put_tlv(skb, IEEE80211_RADIOTAP_EHT, eht_len);
+
+ usig = iwl_mvm_radiotap_put_tlv(skb, IEEE80211_RADIOTAP_EHT_USIG,
+ sizeof(*usig));
+ rx_status->flag |= RX_FLAG_RADIOTAP_TLV_AT_END;
+ usig->common |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_KNOWN);
+
+ /* specific handling for 320MHz */
+ bw = FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK, rate_n_flags);
+ if (bw == RATE_MCS_CHAN_WIDTH_320_VAL)
+ bw += FIELD_GET(IWL_RX_PHY_DATA0_EHT_BW320_SLOT,
+ le32_to_cpu(phy_data->d0));
+
+ usig->common |= cpu_to_le32
+ (FIELD_PREP(IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW, bw));
+
+ /* report the AMPDU-EOF bit on single frames */
+ if (!queue && !(phy_info & IWL_RX_MPDU_PHY_AMPDU)) {
+ rx_status->flag |= RX_FLAG_AMPDU_DETAILS;
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
+ if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_EHT_DELIM_EOF))
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
+ }
+
+ /* update aggregation data for the monitor's sake on the default queue */
+ if (!queue && (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) &&
+ (phy_info & IWL_RX_MPDU_PHY_AMPDU) && phy_data->first_subframe) {
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
+ if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_EHT_DELIM_EOF))
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
+ }
+
+ if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD)
+ iwl_mvm_decode_eht_phy_data(mvm, phy_data, rx_status, eht, usig);
+
+#define CHECK_TYPE(F) \
+ BUILD_BUG_ON(IEEE80211_RADIOTAP_HE_DATA1_FORMAT_ ## F != \
+ (RATE_MCS_HE_TYPE_ ## F >> RATE_MCS_HE_TYPE_POS))
+
+ CHECK_TYPE(SU);
+ CHECK_TYPE(EXT_SU);
+ CHECK_TYPE(MU);
+ CHECK_TYPE(TRIG);
+
+ switch (FIELD_GET(RATE_MCS_HE_GI_LTF_MSK, rate_n_flags)) {
+ case 0:
+ if (he_type == RATE_MCS_HE_TYPE_TRIG) {
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_1_6;
+ ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_1X;
+ } else {
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_0_8;
+ ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_2X;
+ }
+ break;
+ case 1:
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_1_6;
+ ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_2X;
+ break;
+ case 2:
+ ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_4X;
+ if (he_type == RATE_MCS_HE_TYPE_TRIG)
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_3_2;
+ else
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_0_8;
+ break;
+ case 3:
+ if (he_type != RATE_MCS_HE_TYPE_TRIG) {
+ ltf = IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_4X;
+ rx_status->eht.gi = NL80211_RATE_INFO_EHT_GI_3_2;
+ }
+ break;
+ default:
+ /* nothing here */
+ break;
+ }
+
+ if (ltf != IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE_UNKNOWN) {
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_GI);
+ eht->data[0] |= cpu_to_le32
+ (FIELD_PREP(IEEE80211_RADIOTAP_EHT_DATA0_LTF,
+ ltf) |
+ FIELD_PREP(IEEE80211_RADIOTAP_EHT_DATA0_GI,
+ rx_status->eht.gi));
+ }
+
+
+ if (!phy_data->with_data) {
+ eht->known |= cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_NSS_S |
+ IEEE80211_RADIOTAP_EHT_KNOWN_BEAMFORMED_S);
+ eht->data[7] |=
+ le32_encode_bits(le32_get_bits(phy_data->rx_vec[2],
+ RX_NO_DATA_RX_VEC2_EHT_NSTS_MSK),
+ IEEE80211_RADIOTAP_EHT_DATA7_NSS_S);
+ if (rate_n_flags & RATE_MCS_BF_MSK)
+ eht->data[7] |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_DATA7_BEAMFORMED_S);
+ } else {
+ eht->user_info[0] |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USER_INFO_MCS_KNOWN |
+ IEEE80211_RADIOTAP_EHT_USER_INFO_CODING_KNOWN |
+ IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_KNOWN_O |
+ IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_KNOWN_O |
+ IEEE80211_RADIOTAP_EHT_USER_INFO_DATA_FOR_USER);
+
+ if (rate_n_flags & RATE_MCS_BF_MSK)
+ eht->user_info[0] |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_O);
+
+ if (rate_n_flags & RATE_MCS_LDPC_MSK)
+ eht->user_info[0] |=
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USER_INFO_CODING);
+
+ eht->user_info[0] |= cpu_to_le32
+ (FIELD_PREP(IEEE80211_RADIOTAP_EHT_USER_INFO_MCS,
+ FIELD_GET(RATE_VHT_MCS_RATE_CODE_MSK,
+ rate_n_flags)) |
+ FIELD_PREP(IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_O,
+ FIELD_GET(RATE_MCS_NSS_MSK, rate_n_flags)));
+ }
+}
+
static void iwl_mvm_rx_he(struct iwl_mvm *mvm, struct sk_buff *skb,
struct iwl_mvm_rx_phy_data *phy_data,
int queue)
/* update aggregation data for the monitor's sake on the default queue */
if (!queue && (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) &&
- (phy_info & IWL_RX_MPDU_PHY_AMPDU)) {
- bool toggle_bit = phy_info & IWL_RX_MPDU_PHY_AMPDU_TOGGLE;
-
- /* toggle is switched whenever new aggregation starts */
- if (toggle_bit != mvm->ampdu_toggle) {
- rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
- if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_HE_DELIM_EOF))
- rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
- }
+ (phy_info & IWL_RX_MPDU_PHY_AMPDU) && phy_data->first_subframe) {
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
+ if (phy_data->d0 & cpu_to_le32(IWL_RX_PHY_DATA0_EHT_DELIM_EOF))
+ rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
}
if (he_type == RATE_MCS_HE_TYPE_EXT_SU &&
case IWL_RX_PHY_INFO_TYPE_HE_MU:
case IWL_RX_PHY_INFO_TYPE_HE_MU_EXT:
case IWL_RX_PHY_INFO_TYPE_HE_TB:
+ case IWL_RX_PHY_INFO_TYPE_EHT_MU:
+ case IWL_RX_PHY_INFO_TYPE_EHT_TB:
+ case IWL_RX_PHY_INFO_TYPE_EHT_MU_EXT:
+ case IWL_RX_PHY_INFO_TYPE_EHT_TB_EXT:
lsig = skb_put(skb, sizeof(*lsig));
lsig->data1 = cpu_to_le16(IEEE80211_RADIOTAP_LSIG_DATA1_LENGTH_KNOWN);
lsig->data2 = le16_encode_bits(le32_get_bits(phy_data->d1,
iwl_mvm_get_signal_strength(mvm, rx_status, rate_n_flags,
phy_data->energy_a, phy_data->energy_b);
+ /* the EHT data uses the TLV format and must come after all fixed-length fields */
+ if (format == RATE_MCS_EHT_MSK)
+ iwl_mvm_rx_eht(mvm, skb, phy_data, queue);
+
if (unlikely(mvm->monitor_on))
iwl_mvm_add_rtap_sniffer_config(mvm, skb);
u32 len;
u32 pkt_len = iwl_rx_packet_payload_len(pkt);
struct ieee80211_sta *sta = NULL;
+ struct ieee80211_link_sta *link_sta = NULL;
struct sk_buff *skb;
u8 crypt_len = 0;
size_t desc_size;
phy_data.d1 = desc->v3.phy_data1;
phy_data.d2 = desc->v3.phy_data2;
phy_data.d3 = desc->v3.phy_data3;
+ phy_data.eht_d4 = desc->phy_eht_data4;
+ phy_data.d5 = desc->v3.phy_data5;
} else {
phy_data.rate_n_flags = le32_to_cpu(desc->v1.rate_n_flags);
phy_data.channel = desc->v1.channel;
return;
}
+ phy_data.with_data = true;
phy_data.phy_info = le16_to_cpu(desc->phy_info);
phy_data.d4 = desc->phy_data4;
if (mvm->ampdu_ref == 0)
mvm->ampdu_ref++;
mvm->ampdu_toggle = toggle_bit;
+ phy_data.first_subframe = true;
}
rx_status->ampdu_reference = mvm->ampdu_ref;
}
sta = rcu_dereference(mvm->fw_id_to_mac_id[id]);
if (IS_ERR(sta))
sta = NULL;
+ link_sta = rcu_dereference(mvm->fw_id_to_link_sta[id]);
}
} else if (!is_multicast_ether_addr(hdr->addr2)) {
/*
goto out;
}
- if (!iwl_mvm_reorder(mvm, napi, queue, sta, skb, desc))
- iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue,
- sta);
+ if (!iwl_mvm_reorder(mvm, napi, queue, sta, skb, desc) &&
+ likely(!iwl_mvm_time_sync_frame(mvm, skb, hdr->addr2)))
+ iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta,
+ link_sta);
+
out:
rcu_read_unlock();
}
phy_data.energy_a = u32_get_bits(rssi, RX_NO_DATA_CHAIN_A_MSK);
phy_data.energy_b = u32_get_bits(rssi, RX_NO_DATA_CHAIN_B_MSK);
phy_data.channel = u32_get_bits(rssi, RX_NO_DATA_CHANNEL_MSK);
+ phy_data.with_data = false;
if (iwl_fw_lookup_notif_ver(mvm->fw, DATA_PATH_GROUP,
RX_NO_DATA_NOTIF, 0) < 2) {
sizeof(struct iwl_rx_no_data_ver_3)))
/* invalid len for ver 3 */
return;
+ memcpy(phy_data.rx_vec, desc->rx_vec, sizeof(phy_data.rx_vec));
} else {
if (format == RATE_MCS_EHT_MSK)
/* no support for EHT before version 3 API */
IEEE80211_RADIOTAP_ZERO_LEN_PSDU_SOUNDING;
break;
case RX_NO_DATA_INFO_TYPE_MU_UNMATCHED:
- case RX_NO_DATA_INFO_TYPE_HE_TB_UNMATCHED:
+ case RX_NO_DATA_INFO_TYPE_TB_UNMATCHED:
rx_status->zero_length_psdu_type =
IEEE80211_RADIOTAP_ZERO_LEN_PSDU_NOT_CAPTURED;
break;
*
* We mark it as the mac header, for upper layers to know where
* the whole radiotap header ends.
- *
- * Since data doesn't move data while putting data on skb and that is
- * the only way we use, data + len is the next place that hdr would be put
*/
- skb_set_mac_header(skb, skb->len);
+ skb_reset_mac_header(skb);
/*
* Override the nss from the rx_vec since the rate_n_flags has
struct iwl_mvm_scan_iter_data *data = _data;
struct iwl_mvm_vif *curr_mvmvif;
- if (vif->type != NL80211_IFTYPE_P2P_DEVICE && mvmvif->phy_ctxt &&
- mvmvif->phy_ctxt->id < NUM_PHY_CTX)
+ if (vif->type != NL80211_IFTYPE_P2P_DEVICE &&
+ mvmvif->deflink.phy_ctxt &&
+ mvmvif->deflink.phy_ctxt->id < NUM_PHY_CTX)
data->global_cnt += 1;
if (!data->current_vif || vif == data->current_vif)
curr_mvmvif = iwl_mvm_vif_from_mac80211(data->current_vif);
if (vif->type == NL80211_IFTYPE_AP && vif->p2p &&
- mvmvif->phy_ctxt && curr_mvmvif->phy_ctxt &&
- mvmvif->phy_ctxt->id != curr_mvmvif->phy_ctxt->id)
+ mvmvif->deflink.phy_ctxt && curr_mvmvif->deflink.phy_ctxt &&
+ mvmvif->deflink.phy_ctxt->id != curr_mvmvif->deflink.phy_ctxt->id)
data->is_dcm_with_p2p_go = true;
}
if (vif == data->current_vif)
return;
- if (vif->type == NL80211_IFTYPE_AP && vif->p2p &&
- mvmvif->phy_ctxt->id < NUM_PHY_CTX &&
- (data->band == NUM_NL80211_BANDS ||
- mvmvif->phy_ctxt->channel->band == data->band))
- data->p2p_go = true;
+ if (vif->type == NL80211_IFTYPE_AP && vif->p2p) {
+ u32 link_id;
+
+ for (link_id = 0;
+ link_id < ARRAY_SIZE(mvmvif->link);
+ link_id++) {
+ struct iwl_mvm_vif_link_info *link =
+ mvmvif->link[link_id];
+
+ if (link && link->phy_ctxt->id < NUM_PHY_CTX &&
+ (data->band == NUM_NL80211_BANDS ||
+ link->phy_ctxt->channel->band == data->band)) {
+ data->p2p_go = true;
+ break;
+ }
+ }
+ }
}
static bool _iwl_mvm_get_respect_p2p_go(struct iwl_mvm *mvm,
.scan_start_tsf = mvm->scan_start,
};
- memcpy(info.tsf_bssid, mvm->scan_vif->bssid, ETH_ALEN);
+ memcpy(info.tsf_bssid, mvm->scan_vif->deflink.bssid, ETH_ALEN);
ieee80211_scan_completed(mvm->hw, &info);
mvm->scan_vif = NULL;
cancel_delayed_work(&mvm->scan_timeout_dwork);
struct iwl_mvm_active_iface_iterator_data *data = _data;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- if (vif == data->ignore_vif || !mvmvif->phy_ctxt ||
+ if (vif == data->ignore_vif || !mvmvif->deflink.phy_ctxt ||
vif->type == NL80211_IFTYPE_P2P_DEVICE)
return;
data->num_active_macs++;
if (vif->type == NL80211_IFTYPE_STATION) {
- data->sta_vif_ap_sta_id = mvmvif->ap_sta_id;
+ data->sta_vif_ap_sta_id = mvmvif->deflink.ap_sta_id;
if (vif->cfg.assoc)
data->sta_vif_state = SF_FULL_ON;
else
struct ieee80211_sta *sta)
{
int i, j, watermark;
+ u8 max_rx_nss = 0;
+ bool is_legacy = true;
+ struct ieee80211_link_sta *link_sta;
+ unsigned int link_id;
sf_cmd->watermark[SF_LONG_DELAY_ON] = cpu_to_le32(SF_W_MARK_SCAN);
* capabilities of the AP station, and choose the watermark accordingly.
*/
if (sta) {
- if (sta->deflink.ht_cap.ht_supported ||
- sta->deflink.vht_cap.vht_supported ||
- sta->deflink.he_cap.has_he) {
- switch (sta->deflink.rx_nss) {
+ /* find the maximal NSS number among all links (if relevant) */
+ rcu_read_lock();
+ for (link_id = 0; link_id < ARRAY_SIZE(sta->link); link_id++) {
+ link_sta = rcu_dereference(sta->link[link_id]);
+ if (!link_sta)
+ continue;
+
+ if (link_sta->ht_cap.ht_supported ||
+ link_sta->vht_cap.vht_supported ||
+ link_sta->eht_cap.has_eht ||
+ link_sta->he_cap.has_he) {
+ is_legacy = false;
+ max_rx_nss = max(max_rx_nss, link_sta->rx_nss);
+ }
+ }
+ rcu_read_unlock();
+
+ if (!is_legacy) {
+ switch (max_rx_nss) {
case 1:
watermark = SF_W_MARK_SISO;
break;
memcpy(sf_cmd->full_on_timeouts, sf_full_timeout_def,
sizeof(sf_full_timeout_def));
}
-
}
static int iwl_mvm_sf_config(struct iwl_mvm *mvm, u8 sta_id,
} else if (changed_vif->cfg.assoc &&
changed_vif->bss_conf.dtim_period) {
mvmvif = iwl_mvm_vif_from_mac80211(changed_vif);
- sta_id = mvmvif->ap_sta_id;
+ sta_id = mvmvif->deflink.ap_sta_id;
new_state = SF_FULL_ON;
} else {
new_state = SF_INIT_OFF;
/* If there are multiple active macs - change to SF_UNINIT */
new_state = SF_UNINIT;
}
+
+ /* For MLO it's ok to use deflink->sta_id as it's needed only to get
+ * a pointer to mac80211 sta
+ */
return iwl_mvm_sf_config(mvm, sta_id, new_state);
}
return sizeof(struct iwl_mvm_add_sta_cmd_v7);
}
-static int iwl_mvm_find_free_sta_id(struct iwl_mvm *mvm,
- enum nl80211_iftype iftype)
+int iwl_mvm_find_free_sta_id(struct iwl_mvm *mvm, enum nl80211_iftype iftype)
{
int sta_id;
u32 reserved_ids = 0;
return IWL_MVM_INVALID_STA;
}
+/* Calculate the A-MPDU density and maximum aggregation size */
+u32 iwl_mvm_get_sta_ampdu_dens(struct ieee80211_link_sta *link_sta,
+ struct ieee80211_bss_conf *link_conf,
+ u32 *_agg_size)
+{
+ u32 agg_size = 0, mpdu_dens = 0;
+
+ if (WARN_ON(!link_sta))
+ return 0;
+
+ if (link_sta->ht_cap.ht_supported)
+ mpdu_dens = link_sta->ht_cap.ampdu_density;
+
+ if (link_conf->chandef.chan->band ==
+ NL80211_BAND_6GHZ) {
+ mpdu_dens = le16_get_bits(link_sta->he_6ghz_capa.capa,
+ IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START);
+ agg_size = le16_get_bits(link_sta->he_6ghz_capa.capa,
+ IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP);
+ } else if (link_sta->vht_cap.vht_supported) {
+ agg_size = link_sta->vht_cap.cap &
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
+ agg_size >>=
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT;
+ } else if (link_sta->ht_cap.ht_supported) {
+ agg_size = link_sta->ht_cap.ampdu_factor;
+ }
+
+ /* D6.0 10.12.2 A-MPDU length limit rules
+ * A STA indicates the maximum length of the A-MPDU preEOF padding
+ * that it can receive in an HE PPDU in the Maximum A-MPDU Length
+ * Exponent field in its HT Capabilities, VHT Capabilities,
+ * and HE 6 GHz Band Capabilities elements (if present) and the
+ * Maximum AMPDU Length Exponent Extension field in its HE
+ * Capabilities element
+ */
+ if (link_sta->he_cap.has_he)
+ agg_size +=
+ u8_get_bits(link_sta->he_cap.he_cap_elem.mac_cap_info[3],
+ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
+
+ /* Limit to max A-MPDU supported by FW */
+ if (agg_size > (STA_FLG_MAX_AGG_SIZE_4M >> STA_FLG_MAX_AGG_SIZE_SHIFT))
+ agg_size = (STA_FLG_MAX_AGG_SIZE_4M >>
+ STA_FLG_MAX_AGG_SIZE_SHIFT);
+
+ *_agg_size = agg_size;
+ return mpdu_dens;
+}
+
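+/* Build the U-APSD AC bitmap for the ADD_STA command from the station's
+ * enabled ACs, duplicated into the upper nibble (uapsd_acs | uapsd_acs << 4)
+ */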
+u8 iwl_mvm_get_sta_uapsd_acs(struct ieee80211_sta *sta)
+{
+ u8 uapsd_acs = 0;
+
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BK)
+ uapsd_acs |= BIT(AC_BK);
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BE)
+ uapsd_acs |= BIT(AC_BE);
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VI)
+ uapsd_acs |= BIT(AC_VI);
+ if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VO)
+ uapsd_acs |= BIT(AC_VO);
+
+ return uapsd_acs | uapsd_acs << 4;
+}
+
/* send station add/update command to firmware */
int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
bool update, unsigned int flags)
{
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_add_sta_cmd add_sta_cmd = {
- .sta_id = mvm_sta->sta_id,
+ .sta_id = mvm_sta->deflink.sta_id,
.mac_id_n_color = cpu_to_le32(mvm_sta->mac_id_n_color),
.add_modify = update ? 1 : 0,
.station_flags_msk = cpu_to_le32(STA_FLG_FAT_EN_MSK |
break;
}
- if (sta->deflink.ht_cap.ht_supported) {
+ if (sta->deflink.ht_cap.ht_supported ||
+ mvm_sta->vif->bss_conf.chandef.chan->band == NL80211_BAND_6GHZ)
add_sta_cmd.station_flags_msk |=
cpu_to_le32(STA_FLG_MAX_AGG_SIZE_MSK |
STA_FLG_AGG_MPDU_DENS_MSK);
- mpdu_dens = sta->deflink.ht_cap.ampdu_density;
- }
-
- if (mvm_sta->vif->bss_conf.chandef.chan->band == NL80211_BAND_6GHZ) {
- add_sta_cmd.station_flags_msk |=
- cpu_to_le32(STA_FLG_MAX_AGG_SIZE_MSK |
- STA_FLG_AGG_MPDU_DENS_MSK);
-
- mpdu_dens = le16_get_bits(sta->deflink.he_6ghz_capa.capa,
- IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START);
- agg_size = le16_get_bits(sta->deflink.he_6ghz_capa.capa,
- IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP);
- } else if (sta->deflink.vht_cap.vht_supported) {
- agg_size = sta->deflink.vht_cap.cap &
- IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
- agg_size >>=
- IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT;
- } else if (sta->deflink.ht_cap.ht_supported) {
- agg_size = sta->deflink.ht_cap.ampdu_factor;
- }
-
- /* D6.0 10.12.2 A-MPDU length limit rules
- * A STA indicates the maximum length of the A-MPDU preEOF padding
- * that it can receive in an HE PPDU in the Maximum A-MPDU Length
- * Exponent field in its HT Capabilities, VHT Capabilities,
- * and HE 6 GHz Band Capabilities elements (if present) and the
- * Maximum AMPDU Length Exponent Extension field in its HE
- * Capabilities element
- */
- if (sta->deflink.he_cap.has_he)
- agg_size += u8_get_bits(sta->deflink.he_cap.he_cap_elem.mac_cap_info[3],
- IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);
-
- /* Limit to max A-MPDU supported by FW */
- if (agg_size > (STA_FLG_MAX_AGG_SIZE_4M >> STA_FLG_MAX_AGG_SIZE_SHIFT))
- agg_size = (STA_FLG_MAX_AGG_SIZE_4M >>
- STA_FLG_MAX_AGG_SIZE_SHIFT);
-
+ mpdu_dens = iwl_mvm_get_sta_ampdu_dens(&sta->deflink,
+ &mvm_sta->vif->bss_conf,
+ &agg_size);
add_sta_cmd.station_flags |=
cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
add_sta_cmd.station_flags |=
cpu_to_le32(mpdu_dens << STA_FLG_AGG_MPDU_DENS_SHIFT);
+
if (mvm_sta->sta_state >= IEEE80211_STA_ASSOC)
add_sta_cmd.assoc_id = cpu_to_le16(sta->aid);
if (sta->wme) {
add_sta_cmd.modify_mask |= STA_MODIFY_UAPSD_ACS;
-
- if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BK)
- add_sta_cmd.uapsd_acs |= BIT(AC_BK);
- if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_BE)
- add_sta_cmd.uapsd_acs |= BIT(AC_BE);
- if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VI)
- add_sta_cmd.uapsd_acs |= BIT(AC_VI);
- if (sta->uapsd_queues & IEEE80211_WMM_IE_STA_QOSINFO_AC_VO)
- add_sta_cmd.uapsd_acs |= BIT(AC_VO);
- add_sta_cmd.uapsd_acs |= add_sta_cmd.uapsd_acs << 4;
+ add_sta_cmd.uapsd_acs = iwl_mvm_get_sta_uapsd_acs(sta);
add_sta_cmd.sp_length = sta->max_sp ? sta->max_sp * 2 : 128;
}
mvmsta->tid_disable_agg |= disable_agg_tids;
cmd.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color);
- cmd.sta_id = mvmsta->sta_id;
+ cmd.sta_id = mvmsta->deflink.sta_id;
cmd.add_modify = STA_MODE_MODIFY;
cmd.modify_mask = STA_MODIFY_QUEUES;
if (disable_agg_tids)
return -ENOSPC;
}
-static int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm,
- u8 sta_id, u8 tid, unsigned int timeout)
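+/* Pick the TX queue size from the largest BA session any of the
+ * station's links can use: 1K entries for EHT, 256 for HE, default
+ * otherwise
+ */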
+static int iwl_mvm_get_queue_size(struct ieee80211_sta *sta)
+{
+ int max_size = IWL_DEFAULT_QUEUE_SIZE;
+ unsigned int link_id;
+
+ /* this queue isn't used for traffic (cab_queue) */
+ if (!sta)
+ return IWL_MGMT_QUEUE_SIZE;
+
+ rcu_read_lock();
+
+ for (link_id = 0; link_id < ARRAY_SIZE(sta->link); link_id++) {
+ struct ieee80211_link_sta *link =
+ rcu_dereference(sta->link[link_id]);
+
+ if (!link)
+ continue;
+
+ /* support for 1k ba size */
+ if (link->eht_cap.has_eht &&
+ max_size < IWL_DEFAULT_QUEUE_SIZE_EHT)
+ max_size = IWL_DEFAULT_QUEUE_SIZE_EHT;
+
+ /* support for 256 ba size */
+ if (link->he_cap.has_he &&
+ max_size < IWL_DEFAULT_QUEUE_SIZE_HE)
+ max_size = IWL_DEFAULT_QUEUE_SIZE_HE;
+ }
+
+ rcu_read_unlock();
+ return max_size;
+}
+
+int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ u8 sta_id, u8 tid, unsigned int timeout)
{
int queue, size;
+ u32 sta_mask = 0;
if (tid == IWL_MAX_TID_COUNT) {
tid = IWL_MGMT_TID;
size = max_t(u32, IWL_MGMT_QUEUE_SIZE,
mvm->trans->cfg->min_txq_size);
} else {
- struct ieee80211_sta *sta;
-
- rcu_read_lock();
- sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]);
-
- /* this queue isn't used for traffic (cab_queue) */
- if (IS_ERR_OR_NULL(sta)) {
- size = IWL_MGMT_QUEUE_SIZE;
- } else if (sta->deflink.he_cap.has_he) {
- /* support for 256 ba size */
- size = IWL_DEFAULT_QUEUE_SIZE_HE;
- } else {
- size = IWL_DEFAULT_QUEUE_SIZE;
- }
-
- rcu_read_unlock();
+ size = iwl_mvm_get_queue_size(sta);
}
/* take the min with bc tbl entries allowed */
/* size needs to be power of 2 values for calculating read/write pointers */
size = rounddown_pow_of_two(size);
+ if (sta) {
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ unsigned int link_id;
+
+ for (link_id = 0;
+ link_id < ARRAY_SIZE(mvmsta->link);
+ link_id++) {
+ struct iwl_mvm_link_sta *link =
+ rcu_dereference_protected(mvmsta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+
+ if (!link)
+ continue;
+
+ sta_mask |= BIT(link->sta_id);
+ }
+ } else {
+ sta_mask |= BIT(sta_id);
+ }
+
+ if (!sta_mask)
+ return -EINVAL;
+
do {
- queue = iwl_trans_txq_alloc(mvm->trans, 0, BIT(sta_id),
+ queue = iwl_trans_txq_alloc(mvm->trans, 0, sta_mask,
tid, size, timeout);
if (queue < 0)
IWL_DEBUG_TX_QUEUES(mvm,
- "Failed allocating TXQ of size %d for sta %d tid %d, ret: %d\n",
- size, sta_id, tid, queue);
+ "Failed allocating TXQ of size %d for sta mask %x tid %d, ret: %d\n",
+ size, sta_mask, tid, queue);
size /= 2;
} while (queue < 0 && size >= 16);
if (queue < 0)
return queue;
- IWL_DEBUG_TX_QUEUES(mvm, "Enabling TXQ #%d for sta %d tid %d\n",
- queue, sta_id, tid);
+ IWL_DEBUG_TX_QUEUES(mvm, "Enabling TXQ #%d for sta mask 0x%x tid %d\n",
+ queue, sta_mask, tid);
return queue;
}
IWL_DEBUG_TX_QUEUES(mvm,
"Allocating queue for sta %d on tid %d\n",
- mvmsta->sta_id, tid);
- queue = iwl_mvm_tvqm_enable_txq(mvm, mvmsta->sta_id, tid, wdg_timeout);
+ mvmsta->deflink.sta_id, tid);
+ queue = iwl_mvm_tvqm_enable_txq(mvm, sta, mvmsta->deflink.sta_id,
+ tid, wdg_timeout);
if (queue < 0)
return queue;
mvmtxq->txq_id = queue;
mvm->tvqm_info[queue].txq_tid = tid;
- mvm->tvqm_info[queue].sta_id = mvmsta->sta_id;
+ mvm->tvqm_info[queue].sta_id = mvmsta->deflink.sta_id;
IWL_DEBUG_TX_QUEUES(mvm, "Allocated queue is %d\n", queue);
mvmsta->tid_disable_agg &= ~BIT(tid);
cmd.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color);
- cmd.sta_id = mvmsta->sta_id;
+ cmd.sta_id = mvmsta->deflink.sta_id;
cmd.add_modify = STA_MODE_MODIFY;
cmd.modify_mask = STA_MODIFY_TID_DISABLE_TX;
cmd.tfd_queue_msk = cpu_to_le32(mvmsta->tfd_queue_msk);
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_trans_txq_scd_cfg cfg = {
.fifo = iwl_mvm_mac_ac_to_tx_fifo(mvm, ac),
- .sta_id = mvmsta->sta_id,
+ .sta_id = mvmsta->deflink.sta_id,
.tid = tid,
.frame_limit = IWL_FRAME_LIMIT,
};
spin_unlock_bh(&mvmsta->lock);
if (tid == IWL_MAX_TID_COUNT) {
- queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+ queue = iwl_mvm_find_free_queue(mvm, mvmsta->deflink.sta_id,
IWL_MVM_DQA_MIN_MGMT_QUEUE,
IWL_MVM_DQA_MAX_MGMT_QUEUE);
if (queue >= IWL_MVM_DQA_MIN_MGMT_QUEUE)
}
if (queue < 0)
- queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+ queue = iwl_mvm_find_free_queue(mvm, mvmsta->deflink.sta_id,
IWL_MVM_DQA_MIN_DATA_QUEUE,
IWL_MVM_DQA_MAX_DATA_QUEUE);
if (queue < 0) {
/* try harder - perhaps kill an inactive queue */
- queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id);
+ queue = iwl_mvm_inactivity_check(mvm, mvmsta->deflink.sta_id);
}
/* No free queue - we'll have to share */
IWL_DEBUG_TX_QUEUES(mvm,
"Allocating %squeue #%d to sta %d on tid %d\n",
shared_queue ? "shared " : "", queue,
- mvmsta->sta_id, tid);
+ mvmsta->deflink.sta_id, tid);
if (shared_queue) {
/* Disable any open aggs on this queue */
out_err:
queue_tmp = queue;
- iwl_mvm_disable_txq(mvm, sta, mvmsta->sta_id, &queue_tmp, tid);
+ iwl_mvm_disable_txq(mvm, sta, mvmsta->deflink.sta_id, &queue_tmp, tid);
return ret;
}
IWL_MVM_QUEUE_FREE))
queue = IWL_MVM_DQA_BSS_CLIENT_QUEUE;
else
- queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+ queue = iwl_mvm_find_free_queue(mvm, mvmsta->deflink.sta_id,
IWL_MVM_DQA_MIN_DATA_QUEUE,
IWL_MVM_DQA_MAX_DATA_QUEUE);
if (queue < 0) {
/* try again - this time kick out a queue if needed */
- queue = iwl_mvm_inactivity_check(mvm, mvmsta->sta_id);
+ queue = iwl_mvm_inactivity_check(mvm, mvmsta->deflink.sta_id);
if (queue < 0) {
IWL_ERR(mvm, "No available queues for new station\n");
return -ENOSPC;
mvmsta->reserved_queue = queue;
IWL_DEBUG_TX_QUEUES(mvm, "Reserving data queue #%d for sta_id %d\n",
- queue, mvmsta->sta_id);
+ queue, mvmsta->deflink.sta_id);
return 0;
}
*
* Note that re-enabling aggregations isn't done in this function.
*/
-static void iwl_mvm_realloc_queues_after_restart(struct iwl_mvm *mvm,
- struct ieee80211_sta *sta)
+void iwl_mvm_realloc_queues_after_restart(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta)
{
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
unsigned int wdg =
iwl_mvm_get_wd_timeout(mvm, mvm_sta->vif, false, false);
int i;
struct iwl_trans_txq_scd_cfg cfg = {
- .sta_id = mvm_sta->sta_id,
+ .sta_id = mvm_sta->deflink.sta_id,
.frame_limit = IWL_FRAME_LIMIT,
};
if (iwl_mvm_has_new_tx_api(mvm)) {
IWL_DEBUG_TX_QUEUES(mvm,
"Re-mapping sta %d tid %d\n",
- mvm_sta->sta_id, i);
- txq_id = iwl_mvm_tvqm_enable_txq(mvm, mvm_sta->sta_id,
+ mvm_sta->deflink.sta_id, i);
+ txq_id = iwl_mvm_tvqm_enable_txq(mvm, sta,
+ mvm_sta->deflink.sta_id,
i, wdg);
/*
* on failures, just set it to IWL_MVM_INVALID_QUEUE
IWL_DEBUG_TX_QUEUES(mvm,
"Re-mapping sta %d tid %d to queue %d\n",
- mvm_sta->sta_id, i, txq_id);
+ mvm_sta->deflink.sta_id, i,
+ txq_id);
iwl_mvm_enable_txq(mvm, sta, txq_id, seq, &cfg, wdg);
mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_READY;
return ret;
}
-int iwl_mvm_add_sta(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+/* Initialize driver data of a new sta */
+int iwl_mvm_sta_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, int sta_id, u8 sta_type)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_rxq_dup_data *dup_data;
- int i, ret, sta_id;
- bool sta_update = false;
- unsigned int sta_flags = 0;
+ int i, ret = 0;
lockdep_assert_held(&mvm->mutex);
- if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
- sta_id = iwl_mvm_find_free_sta_id(mvm,
- ieee80211_vif_type_p2p(vif));
- else
- sta_id = mvm_sta->sta_id;
-
- if (sta_id == IWL_MVM_INVALID_STA)
- return -ENOSPC;
-
- spin_lock_init(&mvm_sta->lock);
+ mvm_sta->mac_id_n_color = FW_CMD_ID_AND_COLOR(mvmvif->id,
+ mvmvif->color);
+ mvm_sta->vif = vif;
- /* if this is a HW restart re-alloc existing queues */
- if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
- struct iwl_mvm_int_sta tmp_sta = {
- .sta_id = sta_id,
- .type = mvm_sta->sta_type,
- };
+	/* For MLD, sta_id(s) should be allocated for each link before
+	 * calling this function
+	 */
+ if (!mvm->mld_api_is_used) {
+ if (WARN_ON(sta_id == IWL_MVM_INVALID_STA))
+ return -EINVAL;
- /*
- * First add an empty station since allocating
- * a queue requires a valid station
- */
- ret = iwl_mvm_add_int_sta_common(mvm, &tmp_sta, sta->addr,
- mvmvif->id, mvmvif->color);
- if (ret)
- goto err;
+ mvm_sta->deflink.sta_id = sta_id;
+ rcu_assign_pointer(mvm_sta->link[0], &mvm_sta->deflink);
- iwl_mvm_realloc_queues_after_restart(mvm, sta);
- sta_update = true;
- sta_flags = iwl_mvm_has_new_tx_api(mvm) ? 0 : STA_MODIFY_QUEUES;
- goto update_fw;
+ if (!mvm->trans->trans_cfg->gen2)
+ mvm_sta->deflink.lq_sta.rs_drv.pers.max_agg_bufsize =
+ LINK_QUAL_AGG_FRAME_LIMIT_DEF;
+ else
+ mvm_sta->deflink.lq_sta.rs_drv.pers.max_agg_bufsize =
+ LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF;
}
- mvm_sta->sta_id = sta_id;
- mvm_sta->mac_id_n_color = FW_CMD_ID_AND_COLOR(mvmvif->id,
- mvmvif->color);
- mvm_sta->vif = vif;
- if (!mvm->trans->trans_cfg->gen2)
- mvm_sta->max_agg_bufsize = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
- else
- mvm_sta->max_agg_bufsize = LINK_QUAL_AGG_FRAME_LIMIT_GEN2_DEF;
- mvm_sta->tx_protection = 0;
mvm_sta->tt_tx_protection = false;
- mvm_sta->sta_type = sta->tdls ? IWL_STA_TDLS_LINK : IWL_STA_LINK;
+ mvm_sta->sta_type = sta_type;
- /* HW restart, don't assume the memory has been zeroed */
mvm_sta->tid_disable_agg = 0xffff; /* No aggs at first */
- mvm_sta->tfd_queue_msk = 0;
- /* for HW restart - reset everything but the sequence number */
for (i = 0; i <= IWL_MAX_TID_COUNT; i++) {
- u16 seq = mvm_sta->tid_data[i].seq_number;
- memset(&mvm_sta->tid_data[i], 0, sizeof(mvm_sta->tid_data[i]));
- mvm_sta->tid_data[i].seq_number = seq;
-
/*
* Mark all queues for this STA as unallocated and defer TX
* frames until the queue is allocated
atomic_set(&mvmtxq->tx_request, 0);
}
- mvm_sta->agg_tids = 0;
-
- if (iwl_mvm_has_new_rx_api(mvm) &&
- !test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ if (iwl_mvm_has_new_rx_api(mvm)) {
int q;
dup_data = kcalloc(mvm->trans->num_rx_queues,
ret = iwl_mvm_reserve_sta_stream(mvm, sta,
ieee80211_vif_type_p2p(vif));
if (ret)
- goto err;
+ return ret;
}
/*
if (iwl_mvm_has_tlc_offload(mvm))
iwl_mvm_rs_add_sta(mvm, mvm_sta);
else
- spin_lock_init(&mvm_sta->lq_sta.rs_drv.pers.lock);
+ spin_lock_init(&mvm_sta->deflink.lq_sta.rs_drv.pers.lock);
iwl_mvm_toggle_tx_ant(mvm, &mvm_sta->tx_ant);
+ return 0;
+}
+
+int iwl_mvm_add_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ int ret, sta_id;
+ bool sta_update = false;
+ unsigned int sta_flags = 0;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ sta_id = iwl_mvm_find_free_sta_id(mvm,
+ ieee80211_vif_type_p2p(vif));
+ else
+ sta_id = mvm_sta->deflink.sta_id;
+
+ if (sta_id == IWL_MVM_INVALID_STA)
+ return -ENOSPC;
+
+ spin_lock_init(&mvm_sta->lock);
+
+ /* if this is a HW restart re-alloc existing queues */
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ struct iwl_mvm_int_sta tmp_sta = {
+ .sta_id = sta_id,
+ .type = mvm_sta->sta_type,
+ };
+
+ /* First add an empty station since allocating
+ * a queue requires a valid station
+ */
+ ret = iwl_mvm_add_int_sta_common(mvm, &tmp_sta, sta->addr,
+ mvmvif->id, mvmvif->color);
+ if (ret)
+ goto err;
+
+ iwl_mvm_realloc_queues_after_restart(mvm, sta);
+ sta_update = true;
+ sta_flags = iwl_mvm_has_new_tx_api(mvm) ? 0 : STA_MODIFY_QUEUES;
+ goto update_fw;
+ }
+
+ ret = iwl_mvm_sta_init(mvm, vif, sta, sta_id,
+ sta->tdls ? IWL_STA_TDLS_LINK : IWL_STA_LINK);
+
update_fw:
ret = iwl_mvm_sta_send_to_fw(mvm, sta, sta_update, sta_flags);
if (ret)
if (vif->type == NL80211_IFTYPE_STATION) {
if (!sta->tdls) {
- WARN_ON(mvmvif->ap_sta_id != IWL_MVM_INVALID_STA);
- mvmvif->ap_sta_id = sta_id;
+ WARN_ON(mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA);
+ mvmvif->deflink.ap_sta_id = sta_id;
} else {
- WARN_ON(mvmvif->ap_sta_id == IWL_MVM_INVALID_STA);
+ WARN_ON(mvmvif->deflink.ap_sta_id == IWL_MVM_INVALID_STA);
}
}
lockdep_assert_held(&mvm->mutex);
cmd.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color);
- cmd.sta_id = mvmsta->sta_id;
+ cmd.sta_id = mvmsta->deflink.sta_id;
cmd.add_modify = STA_MODE_MODIFY;
cmd.station_flags = drain ? cpu_to_le32(STA_FLG_DRAIN_FLOW) : 0;
cmd.station_flags_msk = cpu_to_le32(STA_FLG_DRAIN_FLOW);
switch (status & IWL_ADD_STA_STATUS_MASK) {
case ADD_STA_SUCCESS:
IWL_DEBUG_INFO(mvm, "Frames for staid %d will drained in fw\n",
- mvmsta->sta_id);
+ mvmsta->deflink.sta_id);
break;
default:
ret = -EIO;
IWL_ERR(mvm, "Couldn't drain frames for staid %d\n",
- mvmsta->sta_id);
+ mvmsta->deflink.sta_id);
break;
}
if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE)
continue;
- iwl_mvm_disable_txq(mvm, sta, mvm_sta->sta_id,
+ iwl_mvm_disable_txq(mvm, sta, mvm_sta->deflink.sta_id,
&mvm_sta->tid_data[i].txq_id, i);
mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE;
}
return 0;
}
-int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+/* Execute the common part for both MLD and non-MLD modes.
+ * Returns true if we're done with removing the station, either
+ * with error or success.
+ */
+bool iwl_mvm_sta_del(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta, int *ret)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_vif_link_info *mvm_link =
+ mvmvif->link[link_sta->link_id];
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
- u8 sta_id = mvm_sta->sta_id;
- int ret;
+ struct iwl_mvm_link_sta *mvm_link_sta;
+ u8 sta_id;
lockdep_assert_held(&mvm->mutex);
- if (iwl_mvm_has_new_rx_api(mvm))
- kfree(mvm_sta->dup_data);
-
- ret = iwl_mvm_drain_sta(mvm, mvm_sta, true);
- if (ret)
- return ret;
-
- /* flush its queues here since we are freeing mvm_sta */
- ret = iwl_mvm_flush_sta(mvm, mvm_sta, false);
- if (ret)
- return ret;
- if (iwl_mvm_has_new_tx_api(mvm)) {
- ret = iwl_mvm_wait_sta_queues_empty(mvm, mvm_sta);
- } else {
- u32 q_mask = mvm_sta->tfd_queue_msk;
-
- ret = iwl_trans_wait_tx_queues_empty(mvm->trans,
- q_mask);
- }
- if (ret)
- return ret;
-
- ret = iwl_mvm_drain_sta(mvm, mvm_sta, false);
-
- iwl_mvm_disable_sta_queues(mvm, vif, sta);
+ mvm_link_sta =
+ rcu_dereference_protected(mvm_sta->link[link_sta->link_id],
+ lockdep_is_held(&mvm->mutex));
+ sta_id = mvm_link_sta->sta_id;
/* If there is a TXQ still marked as reserved - free it */
if (mvm_sta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) {
if (WARN((*status != IWL_MVM_QUEUE_RESERVED) &&
(*status != IWL_MVM_QUEUE_FREE),
"sta_id %d reserved txq %d status %d",
- sta_id, reserved_txq, *status))
- return -EINVAL;
+ sta_id, reserved_txq, *status)) {
+ *ret = -EINVAL;
+ return true;
+ }
*status = IWL_MVM_QUEUE_FREE;
}
- if (vif->type == NL80211_IFTYPE_STATION &&
- mvmvif->ap_sta_id == sta_id) {
+ if (vif->type == NL80211_IFTYPE_STATION) {
/* if associated - we can't remove the AP STA now */
if (vif->cfg.assoc)
- return ret;
+ return true;
/* first remove remaining keys */
- iwl_mvm_sec_key_remove_ap(mvm, vif);
+ iwl_mvm_sec_key_remove_ap(mvm, vif, mvm_link, 0);
/* unassoc - go ahead - remove the AP STA now */
- mvmvif->ap_sta_id = IWL_MVM_INVALID_STA;
+ mvm_link->ap_sta_id = IWL_MVM_INVALID_STA;
}
/*
spin_lock_bh(&mvm_sta->lock);
spin_unlock_bh(&mvm_sta->lock);
- ret = iwl_mvm_rm_sta_common(mvm, mvm_sta->sta_id);
- RCU_INIT_POINTER(mvm->fw_id_to_mac_id[mvm_sta->sta_id], NULL);
+ return false;
+}
+
+int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (iwl_mvm_has_new_rx_api(mvm))
+ kfree(mvm_sta->dup_data);
+
+ ret = iwl_mvm_drain_sta(mvm, mvm_sta, true);
+ if (ret)
+ return ret;
+
+ /* flush its queues here since we are freeing mvm_sta */
+ ret = iwl_mvm_flush_sta(mvm, mvm_sta, false);
+ if (ret)
+ return ret;
+ if (iwl_mvm_has_new_tx_api(mvm)) {
+ ret = iwl_mvm_wait_sta_queues_empty(mvm, mvm_sta);
+ } else {
+ u32 q_mask = mvm_sta->tfd_queue_msk;
+
+ ret = iwl_trans_wait_tx_queues_empty(mvm->trans,
+ q_mask);
+ }
+ if (ret)
+ return ret;
+
+ ret = iwl_mvm_drain_sta(mvm, mvm_sta, false);
+
+ iwl_mvm_disable_sta_queues(mvm, vif, sta);
+
+ if (iwl_mvm_sta_del(mvm, vif, sta, &sta->deflink, &ret))
+ return ret;
+
+ ret = iwl_mvm_rm_sta_common(mvm, mvm_sta->deflink.sta_id);
+ RCU_INIT_POINTER(mvm->fw_id_to_mac_id[mvm_sta->deflink.sta_id], NULL);
return ret;
}
int iwl_mvm_allocate_int_sta(struct iwl_mvm *mvm,
struct iwl_mvm_int_sta *sta,
u32 qmask, enum nl80211_iftype iftype,
- enum iwl_sta_type type)
+ u8 type)
{
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
sta->sta_id == IWL_MVM_INVALID_STA) {
WARN_ON(!iwl_mvm_has_new_tx_api(mvm));
- return iwl_mvm_tvqm_enable_txq(mvm, sta_id, IWL_MAX_TID_COUNT,
+ return iwl_mvm_tvqm_enable_txq(mvm, NULL, sta_id, IWL_MAX_TID_COUNT,
wdg_timeout);
}
int iwl_mvm_send_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct iwl_mvm_int_sta *bsta = &mvmvif->bcast_sta;
+ struct iwl_mvm_int_sta *bsta = &mvmvif->deflink.bcast_sta;
static const u8 _baddr[] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
const u8 *baddr = _baddr;
int queue;
iwl_mvm_get_wd_timeout(mvm, vif, false, false);
struct iwl_trans_txq_scd_cfg cfg = {
.fifo = IWL_MVM_TX_FIFO_VO,
- .sta_id = mvmvif->bcast_sta.sta_id,
+ .sta_id = mvmvif->deflink.bcast_sta.sta_id,
.tid = IWL_MAX_TID_COUNT,
.aggregate = false,
.frame_limit = IWL_FRAME_LIMIT,
* to firmware so enable queue here - after the station was added
*/
if (iwl_mvm_has_new_tx_api(mvm)) {
- queue = iwl_mvm_tvqm_enable_txq(mvm, bsta->sta_id,
+ queue = iwl_mvm_tvqm_enable_txq(mvm, NULL, bsta->sta_id,
IWL_MAX_TID_COUNT,
wdg_timeout);
if (queue < 0) {
}
if (vif->type == NL80211_IFTYPE_AP ||
- vif->type == NL80211_IFTYPE_ADHOC)
+ vif->type == NL80211_IFTYPE_ADHOC) {
+ /* for queue management */
mvm->probe_queue = queue;
- else if (vif->type == NL80211_IFTYPE_P2P_DEVICE)
+ /* for use in TX */
+ mvmvif->deflink.mgmt_queue = queue;
+ } else if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
mvm->p2p_dev_queue = queue;
+ }
+ } else if (vif->type == NL80211_IFTYPE_AP ||
+ vif->type == NL80211_IFTYPE_ADHOC) {
+ /* set it for use in TX */
+ mvmvif->deflink.mgmt_queue = mvm->probe_queue;
}
return 0;
}
-static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif)
+void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u16 *queueptr, queue;
lockdep_assert_held(&mvm->mutex);
- iwl_mvm_flush_sta(mvm, &mvmvif->bcast_sta, true);
+ iwl_mvm_flush_sta(mvm, &mvmvif->deflink.bcast_sta, true);
switch (vif->type) {
case NL80211_IFTYPE_AP:
}
queue = *queueptr;
- iwl_mvm_disable_txq(mvm, NULL, mvmvif->bcast_sta.sta_id,
+ iwl_mvm_disable_txq(mvm, NULL, mvmvif->deflink.bcast_sta.sta_id,
queueptr, IWL_MAX_TID_COUNT);
+
+ if (vif->type == NL80211_IFTYPE_AP || vif->type == NL80211_IFTYPE_ADHOC)
+ mvmvif->deflink.mgmt_queue = mvm->probe_queue;
+
if (iwl_mvm_has_new_tx_api(mvm))
return;
- WARN_ON(!(mvmvif->bcast_sta.tfd_queue_msk & BIT(queue)));
- mvmvif->bcast_sta.tfd_queue_msk &= ~BIT(queue);
+ WARN_ON(!(mvmvif->deflink.bcast_sta.tfd_queue_msk & BIT(queue)));
+ mvmvif->deflink.bcast_sta.tfd_queue_msk &= ~BIT(queue);
}
/* Send the FW a request to remove the station from its internal data
iwl_mvm_free_bcast_sta_queues(mvm, vif);
- ret = iwl_mvm_rm_sta_common(mvm, mvmvif->bcast_sta.sta_id);
+ ret = iwl_mvm_rm_sta_common(mvm, mvmvif->deflink.bcast_sta.sta_id);
if (ret)
IWL_WARN(mvm, "Failed sending remove station\n");
return ret;
lockdep_assert_held(&mvm->mutex);
- return iwl_mvm_allocate_int_sta(mvm, &mvmvif->bcast_sta, 0,
+ return iwl_mvm_allocate_int_sta(mvm, &mvmvif->deflink.bcast_sta, 0,
ieee80211_vif_type_p2p(vif),
IWL_STA_GENERAL_PURPOSE);
}
int iwl_mvm_add_p2p_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct iwl_mvm_int_sta *bsta = &mvmvif->bcast_sta;
+ struct iwl_mvm_int_sta *bsta = &mvmvif->deflink.bcast_sta;
int ret;
lockdep_assert_held(&mvm->mutex);
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- iwl_mvm_dealloc_int_sta(mvm, &mvmvif->bcast_sta);
+ iwl_mvm_dealloc_int_sta(mvm, &mvmvif->deflink.bcast_sta);
}
/*
int iwl_mvm_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct iwl_mvm_int_sta *msta = &mvmvif->mcast_sta;
+ struct iwl_mvm_int_sta *msta = &mvmvif->deflink.mcast_sta;
static const u8 _maddr[] = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00};
const u8 *maddr = _maddr;
struct iwl_trans_txq_scd_cfg cfg = {
* changes in mac80211 layer.
*/
if (vif->type == NL80211_IFTYPE_ADHOC)
- mvmvif->cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
+ mvmvif->deflink.cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
/*
* While in previous FWs we had to exclude cab queue from TFD queue
*/
if (!iwl_mvm_has_new_tx_api(mvm) &&
fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_STA_TYPE)) {
- iwl_mvm_enable_txq(mvm, NULL, mvmvif->cab_queue, 0, &cfg,
+ iwl_mvm_enable_txq(mvm, NULL, mvmvif->deflink.cab_queue, 0,
+ &cfg,
timeout);
- msta->tfd_queue_msk |= BIT(mvmvif->cab_queue);
+ msta->tfd_queue_msk |= BIT(mvmvif->deflink.cab_queue);
}
ret = iwl_mvm_add_int_sta_common(mvm, msta, maddr,
mvmvif->id, mvmvif->color);
* tfd_queue_mask.
*/
if (iwl_mvm_has_new_tx_api(mvm)) {
- int queue = iwl_mvm_tvqm_enable_txq(mvm, msta->sta_id,
- 0,
- timeout);
+ int queue = iwl_mvm_tvqm_enable_txq(mvm, NULL, msta->sta_id,
+ 0, timeout);
if (queue < 0) {
ret = queue;
goto err;
}
- mvmvif->cab_queue = queue;
+ mvmvif->deflink.cab_queue = queue;
} else if (!fw_has_api(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_API_STA_TYPE))
- iwl_mvm_enable_txq(mvm, NULL, mvmvif->cab_queue, 0, &cfg,
+ iwl_mvm_enable_txq(mvm, NULL, mvmvif->deflink.cab_queue, 0,
+ &cfg,
timeout);
return 0;
lockdep_assert_held(&mvm->mutex);
- iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true);
+ iwl_mvm_flush_sta(mvm, &mvmvif->deflink.mcast_sta, true);
- iwl_mvm_disable_txq(mvm, NULL, mvmvif->mcast_sta.sta_id,
- &mvmvif->cab_queue, 0);
+ iwl_mvm_disable_txq(mvm, NULL, mvmvif->deflink.mcast_sta.sta_id,
+ &mvmvif->deflink.cab_queue, 0);
- ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id);
+ ret = iwl_mvm_rm_sta_common(mvm, mvmvif->deflink.mcast_sta.sta_id);
if (ret)
IWL_WARN(mvm, "Failed sending remove station\n");
{
struct iwl_mvm_add_sta_cmd cmd = {
.mac_id_n_color = cpu_to_le32(mvm_sta->mac_id_n_color),
- .sta_id = mvm_sta->sta_id,
+ .sta_id = mvm_sta->deflink.sta_id,
.add_modify = STA_MODE_MODIFY,
};
u32 status;
BUILD_BUG_ON(sizeof(struct iwl_rx_baid_cfg_resp) != sizeof(baid));
if (start) {
- cmd.alloc.sta_id_mask = cpu_to_le32(BIT(mvm_sta->sta_id));
+ cmd.alloc.sta_id_mask =
+ cpu_to_le32(BIT(mvm_sta->deflink.sta_id));
cmd.alloc.tid = tid;
cmd.alloc.ssn = cpu_to_le16(ssn);
cmd.alloc.win_size = cpu_to_le16(buf_size);
cmd.remove_v1.baid = cpu_to_le32(baid);
BUILD_BUG_ON(sizeof(cmd.remove_v1) > sizeof(cmd.remove));
} else {
- cmd.remove.sta_id_mask = cpu_to_le32(BIT(mvm_sta->sta_id));
+ cmd.remove.sta_id_mask =
+ cpu_to_le32(BIT(mvm_sta->deflink.sta_id));
cmd.remove.tid = cpu_to_le32(tid);
}
iwl_mvm_rx_agg_session_expired, 0);
baid_data->mvm = mvm;
baid_data->tid = tid;
- baid_data->sta_id = mvm_sta->sta_id;
+ baid_data->sta_id = mvm_sta->deflink.sta_id;
mvm_sta->tid_to_baid[tid] = baid;
if (timeout)
* RX is being processed in parallel
*/
IWL_DEBUG_HT(mvm, "Sta %d(%d) is assigned to BAID %d\n",
- mvm_sta->sta_id, tid, baid);
+ mvm_sta->deflink.sta_id, tid, baid);
WARN_ON(rcu_access_pointer(mvm->baid_map[baid]));
rcu_assign_pointer(mvm->baid_map[baid], baid_data);
} else {
}
cmd.mac_id_n_color = cpu_to_le32(mvm_sta->mac_id_n_color);
- cmd.sta_id = mvm_sta->sta_id;
+ cmd.sta_id = mvm_sta->deflink.sta_id;
cmd.add_modify = STA_MODE_MODIFY;
if (!iwl_mvm_has_new_tx_api(mvm))
cmd.modify_mask = STA_MODIFY_QUEUES;
*/
txq_id = mvmsta->tid_data[tid].txq_id;
if (txq_id == IWL_MVM_INVALID_QUEUE) {
- ret = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+ ret = iwl_mvm_find_free_queue(mvm, mvmsta->deflink.sta_id,
IWL_MVM_DQA_MIN_DATA_QUEUE,
IWL_MVM_DQA_MAX_DATA_QUEUE);
if (ret < 0) {
IWL_DEBUG_TX_QUEUES(mvm,
"Start AGG: sta %d tid %d queue %d - ssn = %d, next_recl = %d\n",
- mvmsta->sta_id, tid, txq_id, tid_data->ssn,
+ mvmsta->deflink.sta_id, tid, txq_id,
+ tid_data->ssn,
tid_data->next_reclaimed);
/*
u16 ssn;
struct iwl_trans_txq_scd_cfg cfg = {
- .sta_id = mvmsta->sta_id,
+ .sta_id = mvmsta->deflink.sta_id,
.tid = tid,
.frame_limit = buf_size,
.aggregate = true,
}
ret = iwl_mvm_reconfig_scd(mvm, queue, cfg.fifo,
- mvmsta->sta_id, tid,
+ mvmsta->deflink.sta_id, tid,
buf_size, ssn);
if (ret) {
IWL_ERR(mvm,
* for each station. Therefore, use the minimum of all the
* aggregation sessions and our default value.
*/
- mvmsta->max_agg_bufsize =
- min(mvmsta->max_agg_bufsize, buf_size);
- mvmsta->lq_sta.rs_drv.lq.agg_frame_cnt_limit = mvmsta->max_agg_bufsize;
+ mvmsta->deflink.lq_sta.rs_drv.pers.max_agg_bufsize =
+ min(mvmsta->deflink.lq_sta.rs_drv.pers.max_agg_bufsize,
+ buf_size);
+ mvmsta->deflink.lq_sta.rs_drv.lq.agg_frame_cnt_limit =
+ mvmsta->deflink.lq_sta.rs_drv.pers.max_agg_bufsize;
IWL_DEBUG_HT(mvm, "Tx aggregation enabled on ra = %pM tid = %d\n",
sta->addr, tid);
- return iwl_mvm_send_lq_cmd(mvm, &mvmsta->lq_sta.rs_drv.lq);
+ return iwl_mvm_send_lq_cmd(mvm, &mvmsta->deflink.lq_sta.rs_drv.lq);
}
static void iwl_mvm_unreserve_agg_queue(struct iwl_mvm *mvm,
txq_id = tid_data->txq_id;
IWL_DEBUG_TX_QUEUES(mvm, "Stop AGG: sta %d tid %d q %d state %d\n",
- mvmsta->sta_id, tid, txq_id, tid_data->state);
+ mvmsta->deflink.sta_id, tid, txq_id,
+ tid_data->state);
mvmsta->agg_tids &= ~BIT(tid);
default:
IWL_ERR(mvm,
"Stopping AGG while state not ON or starting for %d on %d (%d)\n",
- mvmsta->sta_id, tid, tid_data->state);
+ mvmsta->deflink.sta_id, tid, tid_data->state);
IWL_ERR(mvm,
"\ttid_data->txq_id = %d\n", tid_data->txq_id);
err = -EINVAL;
spin_lock_bh(&mvmsta->lock);
txq_id = tid_data->txq_id;
IWL_DEBUG_TX_QUEUES(mvm, "Flush AGG: sta %d tid %d q %d state %d\n",
- mvmsta->sta_id, tid, txq_id, tid_data->state);
+ mvmsta->deflink.sta_id, tid, txq_id,
+ tid_data->state);
old_state = tid_data->state;
tid_data->state = IWL_AGG_OFF;
mvmsta->agg_tids &= ~BIT(tid);
iwl_mvm_drain_sta(mvm, mvmsta, true);
if (iwl_mvm_has_new_tx_api(mvm)) {
- if (iwl_mvm_flush_sta_tids(mvm, mvmsta->sta_id,
+ if (iwl_mvm_flush_sta_tids(mvm, mvmsta->deflink.sta_id,
BIT(tid)))
IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
iwl_trans_wait_txq_empty(mvm->trans, txq_id);
* station ID, then use AP's station ID.
*/
if (vif->type == NL80211_IFTYPE_STATION &&
- mvmvif->ap_sta_id != IWL_MVM_INVALID_STA) {
- u8 sta_id = mvmvif->ap_sta_id;
+ mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA) {
+ u8 sta_id = mvmvif->deflink.ap_sta_id;
sta = rcu_dereference_check(mvm->fw_id_to_mac_id[sta_id],
lockdep_is_held(&mvm->mutex));
return sta->addr;
if (vif->type == NL80211_IFTYPE_STATION &&
- mvmvif->ap_sta_id != IWL_MVM_INVALID_STA) {
- u8 sta_id = mvmvif->ap_sta_id;
+ mvmvif->deflink.ap_sta_id != IWL_MVM_INVALID_STA) {
+ u8 sta_id = mvmvif->deflink.ap_sta_id;
sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id],
lockdep_is_held(&mvm->mutex));
return sta->addr;
if (sta) {
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
- sta_id = mvm_sta->sta_id;
+ sta_id = mvm_sta->deflink.sta_id;
mfp = sta->mfp;
} else if (vif->type == NL80211_IFTYPE_AP &&
!(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- sta_id = mvmvif->mcast_sta.sta_id;
+ sta_id = mvmvif->deflink.mcast_sta.sta_id;
} else {
IWL_ERR(mvm, "Failed to find station id\n");
return -EINVAL;
IWL_ERR(mvm, "Failed to find station\n");
return -EINVAL;
}
- sta_id = mvm_sta->sta_id;
+ sta_id = mvm_sta->deflink.sta_id;
/*
* It is possible that the 'sta' parameter is NULL, and thus
} else {
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- sta_id = mvmvif->mcast_sta.sta_id;
+ sta_id = mvmvif->deflink.mcast_sta.sta_id;
}
if (keyconf->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
/* Get the station from the mvm local station table */
mvm_sta = iwl_mvm_get_key_sta(mvm, vif, sta);
if (mvm_sta)
- sta_id = mvm_sta->sta_id;
+ sta_id = mvm_sta->deflink.sta_id;
else if (!sta && vif->type == NL80211_IFTYPE_AP && mcast)
- sta_id = iwl_mvm_vif_from_mac80211(vif)->mcast_sta.sta_id;
+ sta_id = iwl_mvm_vif_from_mac80211(vif)->deflink.mcast_sta.sta_id;
IWL_DEBUG_WEP(mvm, "mvm remove dynamic key: idx=%d sta=%d\n",
mvm_sta = iwl_mvm_get_key_sta(mvm, vif, sta);
if (WARN_ON_ONCE(!mvm_sta))
goto unlock;
- iwl_mvm_send_sta_key(mvm, mvm_sta->sta_id, keyconf, mcast,
+ iwl_mvm_send_sta_key(mvm, mvm_sta->deflink.sta_id, keyconf, mcast,
iv32, phase1key, CMD_ASYNC, keyconf->hw_key_idx,
mfp);
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_add_sta_cmd cmd = {
.add_modify = STA_MODE_MODIFY,
- .sta_id = mvmsta->sta_id,
+ .sta_id = mvmsta->deflink.sta_id,
.station_flags_msk = cpu_to_le32(STA_FLG_PS),
.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color),
};
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm_add_sta_cmd cmd = {
.add_modify = STA_MODE_MODIFY,
- .sta_id = mvmsta->sta_id,
+ .sta_id = mvmsta->deflink.sta_id,
.modify_mask = STA_MODIFY_SLEEPING_STA_TX_COUNT,
.sleep_tx_count = cpu_to_le16(cnt),
.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color),
}
void iwl_mvm_sta_modify_disable_tx(struct iwl_mvm *mvm,
- struct iwl_mvm_sta *mvmsta, bool disable)
+ struct iwl_mvm_sta *mvmsta,
+ bool disable)
{
struct iwl_mvm_add_sta_cmd cmd = {
.add_modify = STA_MODE_MODIFY,
- .sta_id = mvmsta->sta_id,
+ .sta_id = mvmsta->deflink.sta_id,
.station_flags = disable ? cpu_to_le32(STA_FLG_DISABLE_TX) : 0,
.station_flags_msk = cpu_to_le32(STA_FLG_DISABLE_TX),
.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color),
};
int ret;
+ if (mvm->mld_api_is_used) {
+ iwl_mvm_mld_sta_modify_disable_tx(mvm, mvmsta, disable);
+ return;
+ }
+
ret = iwl_mvm_send_cmd_pdu(mvm, ADD_STA, CMD_ASYNC,
iwl_mvm_add_sta_cmd_size(mvm), &cmd);
if (ret)
{
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
+ if (mvm->mld_api_is_used) {
+ iwl_mvm_mld_sta_modify_disable_tx_ap(mvm, sta, disable);
+ return;
+ }
+
spin_lock_bh(&mvm_sta->lock);
if (mvm_sta->disable_tx == disable) {
struct iwl_mvm_sta *mvm_sta;
int i;
+ if (mvm->mld_api_is_used) {
+ iwl_mvm_mld_modify_all_sta_disable_tx(mvm, mvmvif, disable);
+ return;
+ }
+
rcu_read_lock();
/* Block/unblock all the stations of the given mvmvif */
return;
/* Need to block/unblock also multicast station */
- if (mvmvif->mcast_sta.sta_id != IWL_MVM_INVALID_STA)
+ if (mvmvif->deflink.mcast_sta.sta_id != IWL_MVM_INVALID_STA)
iwl_mvm_int_sta_modify_disable_tx(mvm, mvmvif,
- &mvmvif->mcast_sta, disable);
+ &mvmvif->deflink.mcast_sta,
+ disable);
/*
* Only unblock the broadcast station (FW blocks it for immediate
* quiet, not the driver)
*/
- if (!disable && mvmvif->bcast_sta.sta_id != IWL_MVM_INVALID_STA)
+ if (!disable && mvmvif->deflink.bcast_sta.sta_id != IWL_MVM_INVALID_STA)
iwl_mvm_int_sta_modify_disable_tx(mvm, mvmvif,
- &mvmvif->bcast_sta, disable);
+ &mvmvif->deflink.bcast_sta,
+ disable);
}
void iwl_mvm_csa_client_absent(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
rcu_read_lock();
- mvmsta = iwl_mvm_sta_from_staid_rcu(mvm, mvmvif->ap_sta_id);
+ mvmsta = iwl_mvm_sta_from_staid_rcu(mvm, mvmvif->deflink.ap_sta_id);
if (mvmsta)
iwl_mvm_sta_modify_disable_tx(mvm, mvmsta, true);
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2015-2016 Intel Deutschland GmbH
*/
} ____cacheline_aligned_in_smp;
/**
+ * struct iwl_mvm_link_sta - link specific parameters of a station
+ * @rcu_head: used for freeing the data
+ * @sta_id: the index of the station in the fw
+ * @lq_sta: holds rate scaling data, either for the case when RS is done in
+ * the driver - %rs_drv or in the FW - %rs_fw.
+ * @avg_energy: energy as reported by FW statistics notification
+ */
+struct iwl_mvm_link_sta {
+ struct rcu_head rcu_head;
+ u32 sta_id;
+ union {
+ struct iwl_lq_sta_rs_fw rs_fw;
+ struct iwl_lq_sta rs_drv;
+ } lq_sta;
+
+ u8 avg_energy;
+};
+
+/**
* struct iwl_mvm_sta - representation of a station in the driver
- * @sta_id: the index of the station in the fw (will be replaced by id_n_color)
* @tfd_queue_msk: the tfd queues used by the station
* @mac_id_n_color: the MAC context this station is linked to
* @tid_disable_agg: bitmap: if bit(tid) is set, the fw won't send ampdus for
* tid.
- * @max_agg_bufsize: the maximal size of the AGG buffer for this station
* @sta_type: station type
* @sta_state: station state according to enum %ieee80211_sta_state
* @bt_reduced_txpower: is reduced tx power enabled for this station
* and from Tx response flow, it needs a spinlock.
* @tid_data: per tid data + mgmt. Look at %iwl_mvm_tid_data.
* @tid_to_baid: a simple map of TID to baid
- * @lq_sta: holds rate scaling data, either for the case when RS is done in
- * the driver - %rs_drv or in the FW - %rs_fw.
* @reserved_queue: the queue reserved for this STA for DQA purposes
 * Every STA is given one reserved queue to allow it to operate. If no
* such queue can be guaranteed, the STA addition will fail.
* used during connection establishment (e.g. for the 4 way handshake
* exchange).
* @pairwise_cipher: used to feed iwlmei upon authorization
+ * @deflink: the default link station. For a non-MLO STA, all link-specific
+ *	data is accessed via deflink (or link[0]). For MLO, it holds the data
+ *	of the first added link STA.
+ * @link: per-link sta entries. For non-MLO, only link[0] holds data. For MLO,
+ *	link[0] points to deflink and link[link_id] is allocated when a new
+ *	link sta is added.
*
* When mac80211 creates a station it reserves some space (hw->sta_data_size)
* in the structure for use by driver. This structure is placed in that
*
*/
struct iwl_mvm_sta {
- u32 sta_id;
u32 tfd_queue_msk;
u32 mac_id_n_color;
u16 tid_disable_agg;
- u16 max_agg_bufsize;
- enum iwl_sta_type sta_type;
+ u8 sta_type;
enum ieee80211_sta_state sta_state;
bool bt_reduced_txpower;
bool next_status_eosp;
spinlock_t lock;
struct iwl_mvm_tid_data tid_data[IWL_MAX_TID_COUNT + 1];
u8 tid_to_baid[IWL_MAX_TID_COUNT];
- union {
- struct iwl_lq_sta_rs_fw rs_fw;
- struct iwl_lq_sta rs_drv;
- } lq_sta;
struct ieee80211_vif *vif;
struct iwl_mvm_key_pn __rcu *ptk_pn[4];
struct iwl_mvm_rxq_dup_data *dup_data;
bool sleeping;
u8 agg_tids;
u8 sleep_tx_count;
- u8 avg_energy;
u8 tx_ant;
u32 pairwise_cipher;
+
+ struct iwl_mvm_link_sta deflink;
+ struct iwl_mvm_link_sta __rcu *link[IEEE80211_MLD_MAX_NUM_LINKS];
};
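As a minimal illustration of the split introduced above (an editorial sketch, not part of the change): deflink is always valid, while the remaining link[] slots are RCU-managed and may be NULL, so code holding mvm->mutex typically walks the array and skips empty entries, just as the queue-allocation hunk earlier in this series does. The helper name below is hypothetical.

static u32 example_link_sta_mask(struct iwl_mvm *mvm,
				 struct iwl_mvm_sta *mvm_sta)
{
	u32 mask = 0;
	unsigned int link_id;

	/* link[0] aliases deflink for non-MLO; other entries exist only
	 * for MLO links that have been added
	 */
	for (link_id = 0; link_id < ARRAY_SIZE(mvm_sta->link); link_id++) {
		struct iwl_mvm_link_sta *link =
			rcu_dereference_protected(mvm_sta->link[link_id],
						  lockdep_is_held(&mvm->mutex));

		if (!link)
			continue;

		mask |= BIT(link->sta_id);
	}

	return mask;
}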
u16 iwl_mvm_tid_queued(struct iwl_mvm *mvm, struct iwl_mvm_tid_data *tid_data);
*/
struct iwl_mvm_int_sta {
u32 sta_id;
- enum iwl_sta_type type;
+ u8 type;
u32 tfd_queue_msk;
};
*/
int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
bool update, unsigned int flags);
+int iwl_mvm_find_free_sta_id(struct iwl_mvm *mvm, enum nl80211_iftype iftype);
+int iwl_mvm_sta_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, int sta_id, u8 sta_type);
int iwl_mvm_add_sta(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
return iwl_mvm_sta_send_to_fw(mvm, sta, true, 0);
}
+void iwl_mvm_realloc_queues_after_restart(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta);
int iwl_mvm_wait_sta_queues_empty(struct iwl_mvm *mvm,
struct iwl_mvm_sta *mvm_sta);
+bool iwl_mvm_sta_del(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_link_sta *link_sta, int *ret);
int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm);
int iwl_mvm_alloc_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif);
int iwl_mvm_send_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_add_p2p_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_send_rm_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
int iwl_mvm_allocate_int_sta(struct iwl_mvm *mvm,
struct iwl_mvm_int_sta *sta,
u32 qmask, enum nl80211_iftype iftype,
- enum iwl_sta_type type);
+ u8 type);
void iwl_mvm_dealloc_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
void iwl_mvm_dealloc_int_sta(struct iwl_mvm *mvm, struct iwl_mvm_int_sta *sta);
int iwl_mvm_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
void iwl_mvm_modify_all_sta_disable_tx(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif,
bool disable);
+
void iwl_mvm_csa_client_absent(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk);
int iwl_mvm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
void iwl_mvm_cancel_channel_switch(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
u32 mac_id);
+/* Queues */
+int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ u8 sta_id, u8 tid, unsigned int timeout);
+
+/* Sta state */
+/**
+ * struct iwl_mvm_sta_state_ops - callbacks for the sta_state() ops
+ *
+ * Since the only difference between the MLD and
+ * non-MLD versions of sta_state() is these function calls,
+ * each version passes its specific callbacks to
+ * %iwl_mvm_mac_sta_state_common().
+ *
+ * @add_sta: pointer to the function that adds a new sta
+ * @update_sta: pointer to the function that updates a sta
+ * @rm_sta: pointer to the function that removes a sta
+ * @mac_ctxt_changed: pointer to the function that handles a change in mac ctxt
+ */
+struct iwl_mvm_sta_state_ops {
+ int (*add_sta)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+ int (*update_sta)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+ int (*rm_sta)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+ int (*mac_ctxt_changed)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool force_assoc_off);
+};
+
+int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ enum ieee80211_sta_state old_state,
+ enum ieee80211_sta_state new_state,
+ struct iwl_mvm_sta_state_ops *callbacks);
+
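A hedged sketch of how a non-MLD sta_state() op might wire these callbacks up and delegate to the common helper; the function names here are illustrative only, and example_mac_ctxt_changed is a hypothetical adapter (the existing MAC-context handler also takes a bssid_override argument).

static int example_mac_ctxt_changed(struct iwl_mvm *mvm,
				    struct ieee80211_vif *vif,
				    bool force_assoc_off)
{
	/* hypothetical adapter to the 3-argument ops signature */
	return iwl_mvm_mac_ctxt_changed(mvm, vif, force_assoc_off, NULL);
}

static int example_mac_sta_state(struct ieee80211_hw *hw,
				 struct ieee80211_vif *vif,
				 struct ieee80211_sta *sta,
				 enum ieee80211_sta_state old_state,
				 enum ieee80211_sta_state new_state)
{
	struct iwl_mvm_sta_state_ops callbacks = {
		.add_sta = iwl_mvm_add_sta,
		.update_sta = iwl_mvm_update_sta,
		.rm_sta = iwl_mvm_rm_sta,
		.mac_ctxt_changed = example_mac_ctxt_changed,
	};

	return iwl_mvm_mac_sta_state_common(hw, vif, sta, old_state,
					    new_state, &callbacks);
}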
+/* New MLD STA related APIs */
+/* STA */
+int iwl_mvm_mld_add_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_mld_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_mld_add_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_mld_add_aux_sta(struct iwl_mvm *mvm, u32 lmac_id);
+int iwl_mvm_mld_rm_bcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_mld_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+int iwl_mvm_mld_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
+int iwl_mvm_mld_rm_aux_sta(struct iwl_mvm *mvm);
+int iwl_mvm_mld_add_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+int iwl_mvm_mld_update_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+int iwl_mvm_mld_rm_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+int iwl_mvm_mld_rm_sta_id(struct iwl_mvm *mvm, u8 sta_id);
+int iwl_mvm_mld_update_sta_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u16 old_links, u16 new_links);
+
+/* Queues */
+void iwl_mvm_mld_modify_all_sta_disable_tx(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif,
+ bool disable);
+void iwl_mvm_mld_sta_modify_disable_tx(struct iwl_mvm *mvm,
+ struct iwl_mvm_sta *mvm_sta,
+ bool disable);
+void iwl_mvm_mld_sta_modify_disable_tx_ap(struct iwl_mvm *mvm,
+ struct ieee80211_sta *sta,
+ bool disable);
#endif /* __sta_h__ */
goto out;
}
mvmsta = iwl_mvm_sta_from_mac80211(sta);
- cmd.peer_sta_id = cpu_to_le32(mvmsta->sta_id);
+ cmd.peer_sta_id = cpu_to_le32(mvmsta->deflink.sta_id);
if (!chandef) {
if (mvm->tdls_cs.state == IWL_MVM_TDLS_SW_REQ_SENT &&
}
iwl_mvm_set_tx_cmd(mvm, skb, &tail->frame.tx_cmd, info,
- mvmsta->sta_id);
+ mvmsta->deflink.sta_id);
iwl_mvm_set_tx_cmd_rate(mvm, &tail->frame.tx_cmd, info, sta,
hdr->frame_control);
/* channel switch has started, update state */
if (type != TDLS_MOVE_CH) {
- mvm->tdls_cs.cur_sta_id = mvmsta->sta_id;
+ mvm->tdls_cs.cur_sta_id = mvmsta->deflink.sta_id;
iwl_mvm_tdls_update_cs_state(mvm,
type == TDLS_SEND_CHAN_SW_REQ ?
IWL_MVM_TDLS_SW_REQ_SENT :
}
mvmsta = iwl_mvm_sta_from_mac80211(sta);
- mvm->tdls_cs.peer.sta_id = mvmsta->sta_id;
+ mvm->tdls_cs.peer.sta_id = mvmsta->deflink.sta_id;
mvm->tdls_cs.peer.chandef = *chandef;
mvm->tdls_cs.peer.initiator = sta->tdls_initiator;
mvm->tdls_cs.peer.op_class = oper_class;
if (!WARN_ON(!mvm->p2p_device_vif)) {
mvmvif = iwl_mvm_vif_from_mac80211(mvm->p2p_device_vif);
- iwl_mvm_flush_sta(mvm, &mvmvif->bcast_sta, true);
+ iwl_mvm_flush_sta(mvm, &mvmvif->deflink.bcast_sta,
+ true);
}
}
/* do the same in case of hot spot 2.0 */
iwl_mvm_flush_sta(mvm, &mvm->aux_sta, true);
+ if (mvm->mld_api_is_used) {
+ iwl_mvm_mld_rm_aux_sta(mvm);
+ goto out_unlock;
+ }
+
	/* In newer versions of this command an aux station is added only
	 * when a dedicated tx queue is used, and it needs to be removed
	 * at the end of use */
iwl_mvm_rm_aux_sta(mvm);
}
+out_unlock:
mutex_unlock(&mvm->mutex);
}
struct iwl_mvm_sta *mvmsta;
rcu_read_lock();
- mvmsta = iwl_mvm_sta_from_staid_rcu(mvm, mvmvif->ap_sta_id);
+ mvmsta = iwl_mvm_sta_from_staid_rcu(mvm,
+ mvmvif->deflink.ap_sta_id);
if (!WARN_ON(!mvmsta))
iwl_mvm_sta_modify_disable_tx(mvm, mvmsta, false);
rcu_read_unlock();
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+/*
+ * Copyright (C) 2022 Intel Corporation
+ */
+
+#include "mvm.h"
+#include "time-sync.h"
+#include <linux/ieee80211.h>
+
+void iwl_mvm_init_time_sync(struct iwl_time_sync_data *data)
+{
+ skb_queue_head_init(&data->frame_list);
+}
+
+static bool iwl_mvm_is_skb_match(struct sk_buff *skb, u8 *addr, u8 dialog_token)
+{
+ struct ieee80211_mgmt *mgmt = (void *)skb->data;
+ u8 skb_dialog_token;
+
+ if (ieee80211_is_timing_measurement(skb))
+ skb_dialog_token = mgmt->u.action.u.wnm_timing_msr.dialog_token;
+ else
+ skb_dialog_token = mgmt->u.action.u.ftm.dialog_token;
+
+ if ((ether_addr_equal(mgmt->sa, addr) ||
+ ether_addr_equal(mgmt->da, addr)) &&
+ skb_dialog_token == dialog_token)
+ return true;
+
+ return false;
+}
+
+static struct sk_buff *iwl_mvm_time_sync_find_skb(struct iwl_mvm *mvm, u8 *addr,
+ u8 dialog_token)
+{
+ struct sk_buff *skb;
+
+ /* The queue is expected to have only one SKB. If there are other SKBs
+ * in the queue, they did not get a time sync notification and are
+ * probably obsolete by now, so drop them.
+ */
+ while ((skb = skb_dequeue(&mvm->time_sync.frame_list))) {
+ if (iwl_mvm_is_skb_match(skb, addr, dialog_token))
+ break;
+
+ kfree_skb(skb);
+ skb = NULL;
+ }
+
+ return skb;
+}
+
+static u64 iwl_mvm_get_64_bit(__le32 high, __le32 low)
+{
+ return ((u64)le32_to_cpu(high) << 32) | le32_to_cpu(low);
+}
+
+void iwl_mvm_time_sync_msmt_event(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+ struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ struct iwl_time_msmt_notify *notif = (void *)pkt->data;
+ struct ieee80211_rx_status *rx_status;
+ struct skb_shared_hwtstamps *shwt;
+ u64 ts_10ns;
+ struct sk_buff *skb =
+ iwl_mvm_time_sync_find_skb(mvm, notif->peer_addr,
+ le32_to_cpu(notif->dialog_token));
+ u64 adj_time;
+
+ if (!skb) {
+ IWL_DEBUG_INFO(mvm, "Time sync event but no pending skb\n");
+ return;
+ }
+
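+	/* The t2/t3 timestamps appear to be in 10 ns units (per the ts_10ns
+	 * naming): scale to ns and apply the PTP adjustment before stamping
+	 * the skb.
+	 */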
+ ts_10ns = iwl_mvm_get_64_bit(notif->t2_hi, notif->t2_lo);
+ adj_time = iwl_mvm_ptp_get_adj_time(mvm, ts_10ns * 10);
+ shwt = skb_hwtstamps(skb);
+ shwt->hwtstamp = ktime_set(0, adj_time);
+
+ ts_10ns = iwl_mvm_get_64_bit(notif->t3_hi, notif->t3_lo);
+ adj_time = iwl_mvm_ptp_get_adj_time(mvm, ts_10ns * 10);
+ rx_status = IEEE80211_SKB_RXCB(skb);
+ rx_status->ack_tx_hwtstamp = ktime_set(0, adj_time);
+
+ IWL_DEBUG_INFO(mvm,
+ "Time sync: RX event - report frame t2=%llu t3=%llu\n",
+ ktime_to_ns(shwt->hwtstamp),
+ ktime_to_ns(rx_status->ack_tx_hwtstamp));
+ ieee80211_rx_napi(mvm->hw, NULL, skb, NULL);
+}
+
+void iwl_mvm_time_sync_msmt_confirm_event(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+ struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ struct iwl_time_msmt_cfm_notify *notif = (void *)pkt->data;
+ struct ieee80211_tx_status status = {};
+ struct skb_shared_hwtstamps *shwt;
+ u64 ts_10ns, adj_time;
+
+ status.skb =
+ iwl_mvm_time_sync_find_skb(mvm, notif->peer_addr,
+ le32_to_cpu(notif->dialog_token));
+
+ if (!status.skb) {
+ IWL_DEBUG_INFO(mvm, "Time sync confirm but no pending skb\n");
+ return;
+ }
+
+ ts_10ns = iwl_mvm_get_64_bit(notif->t1_hi, notif->t1_lo);
+ adj_time = iwl_mvm_ptp_get_adj_time(mvm, ts_10ns * 10);
+ shwt = skb_hwtstamps(status.skb);
+ shwt->hwtstamp = ktime_set(0, adj_time);
+
+ ts_10ns = iwl_mvm_get_64_bit(notif->t4_hi, notif->t4_lo);
+ adj_time = iwl_mvm_ptp_get_adj_time(mvm, ts_10ns * 10);
+ status.info = IEEE80211_SKB_CB(status.skb);
+ status.ack_hwtstamp = ktime_set(0, adj_time);
+
+ IWL_DEBUG_INFO(mvm,
+ "Time sync: TX event - report frame t1=%llu t4=%llu\n",
+ ktime_to_ns(shwt->hwtstamp),
+ ktime_to_ns(status.ack_hwtstamp));
+ ieee80211_tx_status_ext(mvm->hw, &status);
+}
+
+int iwl_mvm_time_sync_config(struct iwl_mvm *mvm, const u8 *addr, u32 protocols)
+{
+ struct iwl_time_sync_cfg_cmd cmd = {};
+ int err;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (!fw_has_capa(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_TIME_SYNC_BOTH_FTM_TM))
+ return -EINVAL;
+
+ /* The fw only supports one peer. We do allow reconfiguration of the
+ * same peer for cases of fw reset etc.
+ */
+ if (mvm->time_sync.active &&
+ !ether_addr_equal(addr, mvm->time_sync.peer_addr)) {
+ IWL_DEBUG_INFO(mvm, "Time sync: reject config for peer: %pM\n",
+ addr);
+ return -ENOBUFS;
+ }
+
+ if (protocols & ~(IWL_TIME_SYNC_PROTOCOL_TM |
+ IWL_TIME_SYNC_PROTOCOL_FTM))
+ return -EINVAL;
+
+ cmd.protocols = cpu_to_le32(protocols);
+
+ ether_addr_copy(cmd.peer_addr, addr);
+
+ err = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(DATA_PATH_GROUP,
+ WNM_80211V_TIMING_MEASUREMENT_CONFIG_CMD),
+ 0, sizeof(cmd), &cmd);
+ if (err) {
+ IWL_ERR(mvm, "Failed to send time sync cfg cmd: %d\n", err);
+ } else {
+ mvm->time_sync.active = protocols != 0;
+ ether_addr_copy(mvm->time_sync.peer_addr, addr);
+ IWL_DEBUG_INFO(mvm, "Time sync: set peer addr=%pM\n", addr);
+ }
+
+ if (!mvm->time_sync.active)
+ skb_queue_purge(&mvm->time_sync.frame_list);
+
+ return err;
+}
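For reference, a hedged usage sketch (not part of this change; the wrapper name is hypothetical): enabling both TM and FTM time sync for a peer, with mvm->mutex held as the function asserts.

static int example_enable_time_sync(struct iwl_mvm *mvm, const u8 *peer_addr)
{
	int ret;

	mutex_lock(&mvm->mutex);
	/* request both protocols the config command understands */
	ret = iwl_mvm_time_sync_config(mvm, peer_addr,
				       IWL_TIME_SYNC_PROTOCOL_TM |
				       IWL_TIME_SYNC_PROTOCOL_FTM);
	mutex_unlock(&mvm->mutex);

	return ret;
}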
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+/*
+ * Copyright (C) 2022 Intel Corporation
+ */
+#ifndef __TIME_SYNC_H__
+#define __TIME_SYNC_H__
+
+#include "mvm.h"
+#include <linux/ieee80211.h>
+
+void iwl_mvm_init_time_sync(struct iwl_time_sync_data *data);
+void iwl_mvm_time_sync_msmt_event(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb);
+void iwl_mvm_time_sync_msmt_confirm_event(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb);
+int iwl_mvm_time_sync_config(struct iwl_mvm *mvm, const u8 *addr,
+ u32 protocols);
+
+static inline
+bool iwl_mvm_time_sync_frame(struct iwl_mvm *mvm, struct sk_buff *skb, u8 *addr)
+{
+ if (ether_addr_equal(mvm->time_sync.peer_addr, addr) &&
+ (ieee80211_is_timing_measurement(skb) || ieee80211_is_ftm(skb))) {
+ skb_queue_tail(&mvm->time_sync.frame_list, skb);
+ return true;
+ }
+
+ return false;
+}
+#endif
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2012-2014, 2019-2021 Intel Corporation
+ * Copyright (C) 2012-2014, 2019-2022 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2015-2016 Intel Deutschland GmbH
*/
if (vif->type != NL80211_IFTYPE_STATION)
return;
- iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_TT, smps_mode);
+ iwl_mvm_update_smps(mvm, vif, IWL_MVM_SMPS_REQ_TT, smps_mode, 0);
}
static void iwl_mvm_tt_tx_protection(struct iwl_mvm *mvm, bool enable)
#include "iwl-eeprom-parse.h"
#include "mvm.h"
#include "sta.h"
+#include "time-sync.h"
static void
iwl_mvm_bar_check_trigger(struct iwl_mvm *mvm, const u8 *addr,
}
static int iwl_mvm_get_ctrl_vif_queue(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif_link_info *link,
struct ieee80211_tx_info *info,
struct ieee80211_hdr *hdr)
{
- struct iwl_mvm_vif *mvmvif =
- iwl_mvm_vif_from_mac80211(info->control.vif);
__le16 fc = hdr->frame_control;
switch (info->control.vif->type) {
if (ieee80211_is_mgmt(fc) &&
(!ieee80211_is_bufferable_mmpdu(fc) ||
ieee80211_is_deauth(fc) || ieee80211_is_disassoc(fc)))
- return mvm->probe_queue;
+ return link->mgmt_queue;
if (!ieee80211_has_order(fc) && !ieee80211_is_probe_req(fc) &&
is_multicast_ether_addr(hdr->addr1))
- return mvmvif->cab_queue;
+ return link->cab_queue;
WARN_ONCE(info->control.vif->type != NL80211_IFTYPE_ADHOC,
"fc=0x%02x", le16_to_cpu(fc));
- return mvm->probe_queue;
+ return link->mgmt_queue;
case NL80211_IFTYPE_P2P_DEVICE:
if (ieee80211_is_mgmt(fc))
return mvm->p2p_dev_queue;
rcu_read_lock();
- resp_data = rcu_dereference(mvmvif->probe_resp_data);
+ resp_data = rcu_dereference(mvmvif->deflink.probe_resp_data);
if (!resp_data)
goto out;
if (info.control.vif->type == NL80211_IFTYPE_P2P_DEVICE ||
info.control.vif->type == NL80211_IFTYPE_AP ||
info.control.vif->type == NL80211_IFTYPE_ADHOC) {
+ u32 link_id = u32_get_bits(info.control.flags,
+ IEEE80211_TX_CTRL_MLO_LINK);
+ struct iwl_mvm_vif_link_info *link;
+
+ if (link_id == IEEE80211_LINK_UNSPECIFIED) {
+ if (info.control.vif->active_links)
+ link_id = ffs(info.control.vif->active_links) - 1;
+ else
+ link_id = 0;
+ }
+
+ link = mvmvif->link[link_id];
+
if (!ieee80211_is_data(hdr->frame_control))
- sta_id = mvmvif->bcast_sta.sta_id;
+ sta_id = link->bcast_sta.sta_id;
else
- sta_id = mvmvif->mcast_sta.sta_id;
+ sta_id = link->mcast_sta.sta_id;
- queue = iwl_mvm_get_ctrl_vif_queue(mvm, &info, hdr);
+ queue = iwl_mvm_get_ctrl_vif_queue(mvm, link, &info,
+ hdr);
} else if (info.control.vif->type == NL80211_IFTYPE_MONITOR) {
queue = mvm->snif_queue;
sta_id = mvm->snif_sta.sta_id;
if (WARN_ON_ONCE(!mvmsta))
return -1;
- if (WARN_ON_ONCE(mvmsta->sta_id == IWL_MVM_INVALID_STA))
+ if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
return -1;
if (unlikely(ieee80211_is_any_nullfunc(fc)) && sta->deflink.he_cap.has_he)
iwl_mvm_probe_resp_set_noa(mvm, skb);
dev_cmd = iwl_mvm_set_tx_params(mvm, skb, info, hdrlen,
- sta, mvmsta->sta_id);
+ sta, mvmsta->deflink.sta_id);
if (!dev_cmd)
goto drop;
}
IWL_DEBUG_TX(mvm, "TX to [%d|%d] Q:%d - seq: 0x%x len %d\n",
- mvmsta->sta_id, tid, txq_id,
+ mvmsta->deflink.sta_id, tid, txq_id,
IEEE80211_SEQ_TO_SN(seq_number), skb->len);
/* From now on, we cannot access info->control */
iwl_trans_free_tx_cmd(mvm->trans, dev_cmd);
spin_unlock(&mvmsta->lock);
drop:
- IWL_DEBUG_TX(mvm, "TX to [%d|%d] dropped\n", mvmsta->sta_id, tid);
+ IWL_DEBUG_TX(mvm, "TX to [%d|%d] dropped\n", mvmsta->deflink.sta_id,
+ tid);
return -1;
}
if (WARN_ON_ONCE(!mvmsta))
return -1;
- if (WARN_ON_ONCE(mvmsta->sta_id == IWL_MVM_INVALID_STA))
+ if (WARN_ON_ONCE(mvmsta->deflink.sta_id == IWL_MVM_INVALID_STA))
return -1;
memcpy(&info, skb->cb, sizeof(info));
r->idx = rate;
} else if (format == RATE_MCS_VHT_MSK) {
ieee80211_rate_set_vht(r, rate,
- ((rate_n_flags & RATE_MCS_NSS_MSK) >>
- RATE_MCS_NSS_POS) + 1);
+ FIELD_GET(RATE_MCS_NSS_MSK,
+ rate_n_flags) + 1);
r->flags |= IEEE80211_TX_RC_VHT_MCS;
} else if (format == RATE_MCS_HE_MSK) {
/* mac80211 cannot do this without ieee80211_tx_status_ext()
} else if (rate_n_flags & RATE_MCS_VHT_MSK_V1) {
ieee80211_rate_set_vht(
r, rate_n_flags & RATE_VHT_MCS_RATE_CODE_MSK,
- ((rate_n_flags & RATE_VHT_MCS_NSS_MSK) >>
- RATE_VHT_MCS_NSS_POS) + 1);
+ FIELD_GET(RATE_MCS_NSS_MSK, rate_n_flags) + 1);
r->flags |= IEEE80211_TX_RC_VHT_MCS;
} else {
r->idx = iwl_mvm_legacy_rate_to_mac80211_idx(rate_n_flags,
info->status.status_driver_data[0] =
RS_DRV_DATA_PACK(lq_color, tx_resp->reduced_tpc);
- ieee80211_tx_status(mvm->hw, skb);
+ if (likely(!iwl_mvm_time_sync_frame(mvm, skb, hdr->addr1)))
+ ieee80211_tx_status(mvm->hw, skb);
}
/* This is an aggregation queue or might become one, so we use
* possible (i.e. first MPDU in the aggregation wasn't acked)
* Still it's important to update RS about sent vs. acked.
*/
- if (!is_flush && skb_queue_empty(&reclaimed_skbs)) {
+ if (!is_flush && skb_queue_empty(&reclaimed_skbs) &&
+ !iwl_mvm_has_tlc_offload(mvm)) {
struct ieee80211_chanctx_conf *chanctx_conf = NULL;
+ /* no TLC offload, so non-MLD mode */
if (mvmsta->vif)
chanctx_conf =
rcu_dereference(mvmsta->vif->bss_conf.chanctx_conf);
tx_info->band = chanctx_conf->def.chan->band;
iwl_mvm_hwrate_to_tx_status(mvm->fw, rate, tx_info);
- if (!iwl_mvm_has_tlc_offload(mvm)) {
- IWL_DEBUG_TX_REPLY(mvm,
- "No reclaim. Update rs directly\n");
- iwl_mvm_rs_tx_status(mvm, sta, tid, tx_info, false);
- }
+ IWL_DEBUG_TX_REPLY(mvm, "No reclaim. Update rs directly\n");
+ iwl_mvm_rs_tx_status(mvm, sta, tid, tx_info, false);
}
out:
int iwl_mvm_flush_sta(struct iwl_mvm *mvm, void *sta, bool internal)
{
- struct iwl_mvm_int_sta *int_sta = sta;
- struct iwl_mvm_sta *mvm_sta = sta;
+ u32 sta_id, tfd_queue_msk;
- BUILD_BUG_ON(offsetof(struct iwl_mvm_int_sta, sta_id) !=
- offsetof(struct iwl_mvm_sta, sta_id));
+ if (internal) {
+ struct iwl_mvm_int_sta *int_sta = sta;
- if (iwl_mvm_has_new_tx_api(mvm))
- return iwl_mvm_flush_sta_tids(mvm, mvm_sta->sta_id, 0xffff);
+ sta_id = int_sta->sta_id;
+ tfd_queue_msk = int_sta->tfd_queue_msk;
+ } else {
+ struct iwl_mvm_sta *mvm_sta = sta;
- if (internal)
- return iwl_mvm_flush_tx_path(mvm, int_sta->tfd_queue_msk);
+ sta_id = mvm_sta->deflink.sta_id;
+ tfd_queue_msk = mvm_sta->tfd_queue_msk;
+ }
+
+ if (iwl_mvm_has_new_tx_api(mvm))
+ return iwl_mvm_flush_sta_tids(mvm, sta_id, 0xffff);
- return iwl_mvm_flush_tx_path(mvm, mvm_sta->tfd_queue_msk);
+ return iwl_mvm_flush_tx_path(mvm, tfd_queue_msk);
}
* @vif: Pointer to the ieee80211_vif structure
 * @req_type: The part of the driver that calls for a change.
 * @smps_request: The request to change the SMPS mode.
+ * @link_id: for MLO, the link_id; otherwise 0 (deflink)
 *
 * Get a request to change the SMPS mode,
* and change it according to all other requests in the driver.
*/
void iwl_mvm_update_smps(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_smps_type_request req_type,
- enum ieee80211_smps_mode smps_request)
+ enum ieee80211_smps_mode smps_request,
+ unsigned int link_id)
{
struct iwl_mvm_vif *mvmvif;
enum ieee80211_smps_mode smps_mode = IEEE80211_SMPS_AUTOMATIC;
return;
mvmvif = iwl_mvm_vif_from_mac80211(vif);
- mvmvif->smps_requests[req_type] = smps_request;
+
+ if (WARN_ON_ONCE(!mvmvif->link[link_id]))
+ return;
+
+ mvmvif->link[link_id]->smps_requests[req_type] = smps_request;
for (i = 0; i < NUM_IWL_MVM_SMPS_REQ; i++) {
- if (mvmvif->smps_requests[i] == IEEE80211_SMPS_STATIC) {
+ if (mvmvif->link[link_id]->smps_requests[i] ==
+ IEEE80211_SMPS_STATIC) {
smps_mode = IEEE80211_SMPS_STATIC;
break;
}
- if (mvmvif->smps_requests[i] == IEEE80211_SMPS_DYNAMIC)
+ if (mvmvif->link[link_id]->smps_requests[i] ==
+ IEEE80211_SMPS_DYNAMIC)
smps_mode = IEEE80211_SMPS_DYNAMIC;
}
- ieee80211_request_smps(vif, 0, smps_mode);
+ ieee80211_request_smps(vif, link_id, smps_mode);
+}
+
+void iwl_mvm_update_smps_on_active_links(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ enum iwl_mvm_smps_type_request req_type,
+ enum ieee80211_smps_mode smps_request)
+{
+ struct ieee80211_bss_conf *link_conf;
+ unsigned int link_id;
+
+ rcu_read_lock();
+ for_each_vif_active_link(vif, link_conf, link_id)
+ iwl_mvm_update_smps(mvm, vif, req_type, smps_request,
+ link_id);
+ rcu_read_unlock();
}
static bool iwl_wait_stats_complete(struct iwl_notif_wait_data *notif_wait,
struct iwl_mvm_diversity_iter_data *data = _data;
int i;
- if (mvmvif->phy_ctxt != data->ctxt)
+ if (mvmvif->deflink.phy_ctxt != data->ctxt)
return;
for (i = 0; i < NUM_IWL_MVM_SMPS_REQ; i++) {
- if (mvmvif->smps_requests[i] == IEEE80211_SMPS_STATIC ||
- mvmvif->smps_requests[i] == IEEE80211_SMPS_DYNAMIC) {
+ if (mvmvif->deflink.smps_requests[i] == IEEE80211_SMPS_STATIC ||
+ mvmvif->deflink.smps_requests[i] == IEEE80211_SMPS_DYNAMIC) {
data->result = false;
break;
}
if (iwl_mvm_vif_low_latency(mvmvif)) {
result->result = true;
- if (!mvmvif->phy_ctxt)
+ if (!mvmvif->deflink.phy_ctxt)
return;
- band = mvmvif->phy_ctxt->channel->band;
+ band = mvmvif->deflink.phy_ctxt->channel->band;
result->result_per_band[band] = true;
}
}
if (!vif->cfg.assoc)
return;
- if (!mvmvif->queue_params[IEEE80211_AC_VO].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_VI].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_BE].uapsd &&
- !mvmvif->queue_params[IEEE80211_AC_BK].uapsd)
+ if (!mvmvif->deflink.queue_params[IEEE80211_AC_VO].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_VI].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_BE].uapsd &&
+ !mvmvif->deflink.queue_params[IEEE80211_AC_BK].uapsd)
return;
if (mvm->tcm.data[mvmvif->id].uapsd_nonagg_detect.detected)
mvm->tcm.data[mvmvif->id].uapsd_nonagg_detect.detected = true;
IWL_INFO(mvm,
"detected AP should do aggregation but isn't, likely due to U-APSD\n");
- schedule_delayed_work(&mvmvif->uapsd_nonagg_detected_wk, 15 * HZ);
+ schedule_delayed_work(&mvmvif->uapsd_nonagg_detected_wk,
+ 15 * HZ);
}
static void iwl_mvm_check_uapsd_agg_expected_tpt(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u32 *band = _data;
- if (!mvmvif->phy_ctxt)
+ if (!mvmvif->deflink.phy_ctxt)
return;
- band[mvmvif->id] = mvmvif->phy_ctxt->channel->band;
+ band[mvmvif->id] = mvmvif->deflink.phy_ctxt->channel->band;
}
static unsigned long iwl_mvm_calc_tcm_stats(struct iwl_mvm *mvm,
iwl_mvm_power_update_device(mvm);
}
}
+
+/* Find if at least two links from different vifs use same channel
+ * FIXME: consider having a refcount array in struct iwl_mvm_vif for
+ * used phy_ctxt ids.
+ */
+bool iwl_mvm_have_links_same_channel(struct iwl_mvm_vif *vif1,
+ struct iwl_mvm_vif *vif2)
+{
+ unsigned int i, j;
+
+ for_each_mvm_vif_valid_link(vif1, i) {
+ for_each_mvm_vif_valid_link(vif2, j) {
+ if (vif1->link[i]->phy_ctxt == vif2->link[j]->phy_ctxt)
+ return true;
+ }
+ }
+
+ return false;
+}
+
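+/* A vif is considered active if any of its links has a valid PHY context assigned. */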
+bool iwl_mvm_vif_is_active(struct iwl_mvm_vif *mvmvif)
+{
+ unsigned int i;
+
+ /* FIXME: can it fail when phy_ctxt is assigned? */
+ for_each_mvm_vif_valid_link(mvmvif, i) {
+ if (mvmvif->link[i]->phy_ctxt &&
+ mvmvif->link[i]->phy_ctxt->id < NUM_PHY_CTX)
+ return true;
+ }
+
+ return false;
+}
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_NO_JACKET,
iwl_cfg_bz_a0_fm4_a0, iwl_bz_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_IS_JACKET,
+ iwl_cfg_bz_a0_fm_b0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_IS_JACKET,
+ iwl_cfg_bz_a0_fm4_b0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_GL, SILICON_A_STEP,
IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY,
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_NO_JACKET,
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_IS_JACKET,
iwl_cfg_bnj_a0_fm4_a0, iwl_bz_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_B_STEP,
+ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_IS_JACKET,
+ iwl_cfg_bnj_b0_fm4_b0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_IS_JACKET,
iwl_cfg_bnj_a0_gf_a0, iwl_bz_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
- IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_B_STEP,
+ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_IS_JACKET,
+ iwl_cfg_bnj_b0_gf_a0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_A_STEP,
IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_IS_JACKET,
iwl_cfg_bnj_a0_gf4_a0, iwl_bz_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
- IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_B_STEP,
+ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB, IWL_CFG_IS_JACKET,
+ iwl_cfg_bnj_b0_gf4_a0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_A_STEP,
IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_IS_JACKET,
iwl_cfg_bnj_a0_hr_b0, iwl_bz_name),
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, SILICON_B_STEP,
+ IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB, IWL_CFG_NO_JACKET,
+ iwl_cfg_bnj_b0_hr_b0, iwl_bz_name),
/* SoF with JF2 */
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
*/
#include "iwl-trans.h"
#include "iwl-prph.h"
case CSR_HW_RFID_TYPE(CSR_HW_RF_ID_TYPE_HRCDB):
pos = scnprintf(buf, buflen, "HRCDB");
break;
+ case CSR_HW_RFID_TYPE(CSR_HW_RF_ID_TYPE_MS):
+ pos = scnprintf(buf, buflen, "MS");
+ break;
default:
return;
}
--- /dev/null
+config PCMCIA_RAYCS
+ tristate "Aviator/Raytheon 2.4GHz wireless support"
+ depends on PCMCIA
+ select WIRELESS_EXT
+ select WEXT_SPY
+ select WEXT_PRIV
+ help
+ Say Y here if you intend to attach an Aviator/Raytheon PCMCIA
+ (PC-card) wireless Ethernet networking card to your computer.
+ Please read the file
+ <file:Documentation/networking/device_drivers/wifi/ray_cs.rst> for
+ details.
+
+ To compile this driver as a module, choose M here: the module will be
+ called ray_cs. If unsure, say N.
+
+config PCMCIA_WL3501
+ tristate "Planet WL3501 PCMCIA cards"
+ depends on CFG80211 && PCMCIA
+ select WIRELESS_EXT
+ select WEXT_SPY
+ help
+ A driver for WL3501 PCMCIA 802.11 wireless cards made by Planet.
+ It has basic support for Linux wireless extensions and initial
+ micro support for ethtool.
+
+config USB_NET_RNDIS_WLAN
+ tristate "Wireless RNDIS USB support"
+ depends on USB
+ depends on CFG80211
+ select USB_NET_DRIVERS
+ select USB_USBNET
+ select USB_NET_CDCETHER
+ select USB_NET_RNDIS_HOST
+ help
+ This is a driver for wireless RNDIS devices.
+ These are USB based adapters found in devices such as:
+
+ Buffalo WLI-U2-KG125S
+ U.S. Robotics USR5421
+ Belkin F5D7051
+ Linksys WUSB54GSv2
+ Linksys WUSB54GSC
+ Asus WL169gE
+ Eminent EM4045
+ BT Voyager 1055
+ Linksys WUSB54GSv1
+ U.S. Robotics USR5420
+ BUFFALO WLI-USB-G54
+
+ All of these devices are based on Broadcom 4320 chip which is the
+ only wireless RNDIS chip known to date.
+
+ If you choose to build a module, it'll be called rndis_wlan.
+
--- /dev/null
+# 16-bit wireless PCMCIA client drivers
+obj-$(CONFIG_PCMCIA_RAYCS) += ray_cs.o
+obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o
+
+obj-$(CONFIG_USB_NET_RNDIS_WLAN) += rndis_wlan.o
+
union {
__le32 media_stream_mode;
__le32 radio_status;
- struct ndis_80211_auth_request auth_request[0];
+ DECLARE_FLEX_ARRAY(struct ndis_80211_auth_request, auth_request);
struct ndis_80211_pmkid_cand_list cand_list;
} u;
} __packed;
if (bssid_len < sizeof(struct ndis_80211_bssid_ex) +
sizeof(struct ndis_80211_fixed_ies))
- return NULL;
+ return false;
fixed = (struct ndis_80211_fixed_ies *)bssid->ies;
(int)le32_to_cpu(bssid->ie_length));
ie_len -= sizeof(struct ndis_80211_fixed_ies);
if (ie_len < 0)
- return NULL;
+ return false;
/* extract data for cfg80211_inform_bss */
channel = ieee80211_get_channel(priv->wdev.wiphy,
KHZ_TO_MHZ(le32_to_cpu(bssid->config.ds_config)));
if (!channel)
- return NULL;
+ return false;
signal = level_to_qual(le32_to_cpu(bssid->rssi));
timestamp = le64_to_cpu(*(__le64 *)fixed->timestamp);
{
struct host_cmd_ds_chan_rpt_event *rpt_event;
struct mwifiex_ie_types_chan_rpt_data *rpt;
- u8 *evt_buf;
u16 event_len, tlv_len;
rpt_event = (void *)(skb->data + sizeof(u32));
return -1;
}
- evt_buf = (void *)&rpt_event->tlvbuf;
-
while (event_len >= sizeof(struct mwifiex_ie_types_header)) {
rpt = (void *)&rpt_event->tlvbuf;
tlv_len = le16_to_cpu(rpt->header.len);
break;
}
- evt_buf += (tlv_len + sizeof(rpt->header));
event_len -= (tlv_len + sizeof(rpt->header));
}
struct ieee80211_sband_iftype_data *iftype_data;
const struct qlink_tlv_iftype_data *tlv =
(const struct qlink_tlv_iftype_data *)data;
- size_t payload_len = tlv->n_iftype_data * sizeof(*tlv->iftype_data) +
- sizeof(*tlv) -
- sizeof(struct qlink_tlv_hdr);
+ size_t payload_len;
+
+ payload_len = struct_size(tlv, iftype_data, tlv->n_iftype_data);
+ payload_len = size_sub(payload_len, sizeof(struct qlink_tlv_hdr));
if (tlv->hdr.len != cpu_to_le16(payload_len)) {
pr_err("bad IFTYPE_DATA TLV len %u\n", tlv->hdr.len);
}
kfree(rt2x00dev->spec.channels_info);
+ kfree(rt2x00dev->chan_survey);
}
static const struct ieee80211_tpt_blink rt2x00_tpt_blink[] = {
parts written to utilize the Linux mac80211 stack.
The driver is known to work with a number of RTL8723AU,
RL8188CU, RTL8188RU, RTL8191CU, RTL8192CU, RTL8723BU, RTL8192EU,
- RTL8188FU, and RTL8188EU devices.
+ RTL8188FU, RTL8188EU, and RTL8710BU (aka RTL8188GU) devices.
This driver is under development and has a limited feature
set. In particular it does not yet support 40MHz channels
rtl8xxxu-y := rtl8xxxu_core.o rtl8xxxu_8192e.o rtl8xxxu_8723b.o \
rtl8xxxu_8723a.o rtl8xxxu_8192c.o rtl8xxxu_8188f.o \
- rtl8xxxu_8188e.o
+ rtl8xxxu_8188e.o rtl8xxxu_8710b.o
RTL8822B = 0x8822b,
RTL8703B = 0x8703b,
RTL8195A = 0x8195a,
- RTL8188F = 0x8188f
+ RTL8188F = 0x8188f,
+ RTL8710B = 0x8710b,
};
enum rtl8xxxu_rx_type {
#endif
};
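+/* RX PHY status report layouts (types 0, 1 and 2) consumed by jaguar2_rx_parse_phystats(). */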
+struct jaguar2_phy_stats_type0 {
+ /* DW0 */
+ u8 page_num;
+ u8 pwdb;
+#ifdef __LITTLE_ENDIAN
+ u8 gain: 6;
+ u8 rsvd_0: 1;
+ u8 trsw: 1;
+#else
+ u8 trsw: 1;
+ u8 rsvd_0: 1;
+ u8 gain: 6;
+#endif
+ u8 rsvd_1;
+
+ /* DW1 */
+ u8 rsvd_2;
+#ifdef __LITTLE_ENDIAN
+ u8 rxsc: 4;
+ u8 agc_table: 4;
+#else
+ u8 agc_table: 4;
+ u8 rxsc: 4;
+#endif
+ u8 channel;
+ u8 band;
+
+ /* DW2 */
+ u16 length;
+#ifdef __LITTLE_ENDIAN
+ u8 antidx_a: 3;
+ u8 antidx_b: 3;
+ u8 rsvd_3: 2;
+ u8 antidx_c: 3;
+ u8 antidx_d: 3;
+ u8 rsvd_4:2;
+#else
+ u8 rsvd_3: 2;
+ u8 antidx_b: 3;
+ u8 antidx_a: 3;
+ u8 rsvd_4:2;
+ u8 antidx_d: 3;
+ u8 antidx_c: 3;
+#endif
+
+ /* DW3 */
+ u8 signal_quality;
+#ifdef __LITTLE_ENDIAN
+ u8 vga:5;
+ u8 lna_l:3;
+ u8 bb_power:6;
+ u8 rsvd_9:1;
+ u8 lna_h:1;
+#else
+ u8 lna_l:3;
+ u8 vga:5;
+ u8 lna_h:1;
+ u8 rsvd_9:1;
+ u8 bb_power:6;
+#endif
+ u8 rsvd_5;
+
+ /* DW4 */
+ u32 rsvd_6;
+
+ /* DW5 */
+ u32 rsvd_7;
+
+ /* DW6 */
+ u32 rsvd_8;
+} __packed;
+
+struct jaguar2_phy_stats_type1 {
+ /* DW0 and DW1 */
+ u8 page_num;
+ u8 pwdb[4];
+#ifdef __LITTLE_ENDIAN
+ u8 l_rxsc: 4;
+ u8 ht_rxsc: 4;
+#else
+ u8 ht_rxsc: 4;
+ u8 l_rxsc: 4;
+#endif
+ u8 channel;
+#ifdef __LITTLE_ENDIAN
+ u8 band: 2;
+ u8 rsvd_0: 1;
+ u8 hw_antsw_occu: 1;
+ u8 gnt_bt: 1;
+ u8 ldpc: 1;
+ u8 stbc: 1;
+ u8 beamformed: 1;
+#else
+ u8 beamformed: 1;
+ u8 stbc: 1;
+ u8 ldpc: 1;
+ u8 gnt_bt: 1;
+ u8 hw_antsw_occu: 1;
+ u8 rsvd_0: 1;
+ u8 band: 2;
+#endif
+
+ /* DW2 */
+ u16 lsig_length;
+#ifdef __LITTLE_ENDIAN
+ u8 antidx_a: 3;
+ u8 antidx_b: 3;
+ u8 rsvd_1: 2;
+ u8 antidx_c: 3;
+ u8 antidx_d: 3;
+ u8 rsvd_2: 2;
+#else
+ u8 rsvd_1: 2;
+ u8 antidx_b: 3;
+ u8 antidx_a: 3;
+ u8 rsvd_2: 2;
+ u8 antidx_d: 3;
+ u8 antidx_c: 3;
+#endif
+
+ /* DW3 */
+ u8 paid;
+#ifdef __LITTLE_ENDIAN
+ u8 paid_msb: 1;
+ u8 gid: 6;
+ u8 rsvd_3: 1;
+#else
+ u8 rsvd_3: 1;
+ u8 gid: 6;
+ u8 paid_msb: 1;
+#endif
+ u8 intf_pos;
+#ifdef __LITTLE_ENDIAN
+ u8 intf_pos_msb: 1;
+ u8 rsvd_4: 2;
+ u8 nb_intf_flag: 1;
+ u8 rf_mode: 2;
+ u8 rsvd_5: 2;
+#else
+ u8 rsvd_5: 2;
+ u8 rf_mode: 2;
+ u8 nb_intf_flag: 1;
+ u8 rsvd_4: 2;
+ u8 intf_pos_msb: 1;
+#endif
+
+ /* DW4 */
+ s8 rxevm[4]; /* s(8,1) */
+
+ /* DW5 */
+ s8 cfo_tail[4]; /* s(8,7) */
+
+ /* DW6 */
+ s8 rxsnr[4]; /* s(8,1) */
+} __packed;
+
+struct jaguar2_phy_stats_type2 {
+	/* DW0 and DW1 */
+ u8 page_num;
+ u8 pwdb[4];
+#ifdef __LITTLE_ENDIAN
+ u8 l_rxsc: 4;
+ u8 ht_rxsc: 4;
+#else
+ u8 ht_rxsc: 4;
+ u8 l_rxsc: 4;
+#endif
+ u8 channel;
+#ifdef __LITTLE_ENDIAN
+ u8 band: 2;
+ u8 rsvd_0: 1;
+ u8 hw_antsw_occu: 1;
+ u8 gnt_bt: 1;
+ u8 ldpc: 1;
+ u8 stbc: 1;
+ u8 beamformed: 1;
+#else
+ u8 beamformed: 1;
+ u8 stbc: 1;
+ u8 ldpc: 1;
+ u8 gnt_bt: 1;
+ u8 hw_antsw_occu: 1;
+ u8 rsvd_0: 1;
+ u8 band: 2;
+#endif
+
+ /* DW2 */
+#ifdef __LITTLE_ENDIAN
+ u8 shift_l_map: 6;
+ u8 rsvd_1: 2;
+#else
+ u8 rsvd_1: 2;
+ u8 shift_l_map: 6;
+#endif
+ u8 cnt_pw2cca;
+#ifdef __LITTLE_ENDIAN
+ u8 agc_table_a: 4;
+ u8 agc_table_b: 4;
+ u8 agc_table_c: 4;
+ u8 agc_table_d: 4;
+#else
+ u8 agc_table_b: 4;
+ u8 agc_table_a: 4;
+ u8 agc_table_d: 4;
+ u8 agc_table_c: 4;
+#endif
+
+ /* DW3 ~ DW6*/
+ u8 cnt_cca2agc_rdy;
+#ifdef __LITTLE_ENDIAN
+ u8 gain_a: 6;
+ u8 rsvd_2: 1;
+ u8 trsw_a: 1;
+ u8 gain_b: 6;
+ u8 rsvd_3: 1;
+ u8 trsw_b: 1;
+ u8 gain_c: 6;
+ u8 rsvd_4: 1;
+ u8 trsw_c: 1;
+ u8 gain_d: 6;
+ u8 rsvd_5: 1;
+ u8 trsw_d: 1;
+ u8 aagc_step_a: 2;
+ u8 aagc_step_b: 2;
+ u8 aagc_step_c: 2;
+ u8 aagc_step_d: 2;
+#else
+ u8 trsw_a: 1;
+ u8 rsvd_2: 1;
+ u8 gain_a: 6;
+ u8 trsw_b: 1;
+ u8 rsvd_3: 1;
+ u8 gain_b: 6;
+ u8 trsw_c: 1;
+ u8 rsvd_4: 1;
+ u8 gain_c: 6;
+ u8 trsw_d: 1;
+ u8 rsvd_5: 1;
+ u8 gain_d: 6;
+ u8 aagc_step_d: 2;
+ u8 aagc_step_c: 2;
+ u8 aagc_step_b: 2;
+ u8 aagc_step_a: 2;
+#endif
+ u8 ht_aagc_gain[4];
+ u8 dagc_gain[4];
+#ifdef __LITTLE_ENDIAN
+ u8 counter: 6;
+ u8 rsvd_6: 2;
+ u8 syn_count: 5;
+ u8 rsvd_7:3;
+#else
+ u8 rsvd_6: 2;
+ u8 counter: 6;
+ u8 rsvd_7:3;
+ u8 syn_count: 5;
+#endif
+} __packed;
+
/*
* Regs to backup
*/
u8 res12[0xc3];
} __packed;
+struct rtl8710bu_efuse {
+ __le16 rtl_id;
+ u8 res0[0x1e];
+ struct rtl8188fu_efuse_tx_power tx_power_index_A; /* 0x20 */
+ u8 res1[0x9c]; /* 0x2c */
+ u8 channel_plan; /* 0xc8 */
+ u8 xtal_k; /* 0xc9 */
+ u8 thermal_meter; /* 0xca */
+ u8 res2[0x4f];
+ u8 mac_addr[ETH_ALEN]; /* 0x11a */
+ u8 res3[0x11];
+ u8 rf_board_option; /* 0x131 */
+ u8 res4[2];
+ u8 eeprom_version; /* 0x134 */
+ u8 eeprom_customer_id; /* 0x135 */
+ u8 res5[5];
+ u8 country_code; /* 0x13b */
+ u8 res6[0x84];
+ u8 vid[2]; /* 0x1c0 */
+ u8 pid[2]; /* 0x1c2 */
+ u8 res7[0x3c];
+} __packed;
+
struct rtl8xxxu_reg8val {
u16 reg;
u8 val;
struct rtl8723au_idx ht20_tx_power_diff[RTL8723B_TX_COUNT];
struct rtl8723au_idx ht40_tx_power_diff[RTL8723B_TX_COUNT];
struct rtl8xxxu_power_base *power_base;
+ u8 package_type;
u32 chip_cut:4;
u32 rom_rev:4;
u32 is_multi_func:1;
u32 ep_tx_low_queue:1;
u32 rx_buf_aggregation:1;
u32 cck_agc_report_type:1;
+ u32 cck_new_agc:1;
u8 default_crystal_cap;
unsigned int pipe_interrupt;
unsigned int pipe_in;
int nr_out_eps;
struct mutex h2c_mutex;
+ /* Protect the indirect register accesses of RTL8710BU. */
+ struct mutex syson_indirect_access_mutex;
struct usb_anchor rx_anchor;
struct usb_anchor tx_anchor;
struct rtl8192eu_efuse efuse8192eu;
struct rtl8188fu_efuse efuse8188fu;
struct rtl8188eu_efuse efuse8188eu;
+ struct rtl8710bu_efuse efuse8710bu;
} efuse_wifi;
u32 adda_backup[RTL8XXXU_ADDA_REGS];
u32 mac_backup[RTL8XXXU_MAC_REGS];
struct rtl8xxxu_fileops {
int (*identify_chip) (struct rtl8xxxu_priv *priv);
+ int (*read_efuse) (struct rtl8xxxu_priv *priv);
int (*parse_efuse) (struct rtl8xxxu_priv *priv);
int (*load_firmware) (struct rtl8xxxu_priv *priv);
int (*power_on) (struct rtl8xxxu_priv *priv);
void (*phy_iq_calibrate) (struct rtl8xxxu_priv *priv);
void (*config_channel) (struct ieee80211_hw *hw);
int (*parse_rx_desc) (struct rtl8xxxu_priv *priv, struct sk_buff *skb);
+ void (*parse_phystats) (struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct rtl8723au_phy_stats *phy_stats,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err);
void (*init_aggregation) (struct rtl8xxxu_priv *priv);
void (*init_statistics) (struct rtl8xxxu_priv *priv);
void (*init_burst) (struct rtl8xxxu_priv *priv);
bool short_preamble, bool ampdu_enable,
u32 rts_rate);
void (*set_crystal_cap) (struct rtl8xxxu_priv *priv, u8 crystal_cap);
- s8 (*cck_rssi) (struct rtl8xxxu_priv *priv, u8 cck_agc_rpt);
+ s8 (*cck_rssi) (struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats);
int (*led_classdev_brightness_set) (struct led_classdev *led_cdev,
enum led_brightness brightness);
int writeN_block_size;
void rtl8xxxu_config_endpoints_sie(struct rtl8xxxu_priv *priv);
int rtl8xxxu_config_endpoints_no_sie(struct rtl8xxxu_priv *priv);
int rtl8xxxu_read_efuse8(struct rtl8xxxu_priv *priv, u16 offset, u8 *data);
+int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv);
void rtl8xxxu_reset_8051(struct rtl8xxxu_priv *priv);
int rtl8xxxu_auto_llt_table(struct rtl8xxxu_priv *priv);
void rtl8xxxu_gen2_prepare_calibrate(struct rtl8xxxu_priv *priv, u8 start);
void rtl8723a_phy_lc_calibrate(struct rtl8xxxu_priv *priv);
+void rtl8188f_phy_lc_calibrate(struct rtl8xxxu_priv *priv);
int rtl8xxxu_flush_fifo(struct rtl8xxxu_priv *priv);
int rtl8xxxu_gen2_h2c_cmd(struct rtl8xxxu_priv *priv,
struct h2c_cmd *h2c, int len);
void rtl8xxxu_init_burst(struct rtl8xxxu_priv *priv);
int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
+void rtl8723au_rx_parse_phystats(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct rtl8723au_phy_stats *phy_stats,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err);
+void jaguar2_rx_parse_phystats(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct rtl8723au_phy_stats *phy_stats,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err);
int rtl8xxxu_gen2_channel_to_group(int channel);
bool rtl8xxxu_simularity_compare(struct rtl8xxxu_priv *priv,
int result[][8], int c1, int c2);
void rtl8723bu_phy_init_antenna_selection(struct rtl8xxxu_priv *priv);
void rtl8723a_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap);
void rtl8188f_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap);
-s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt);
+s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats);
void rtl8xxxu_update_ra_report(struct rtl8xxxu_ra_report *rarpt,
u8 rate, u8 sgi, u8 bw);
void rtl8188e_ra_info_init_all(struct rtl8xxxu_ra_info *ra);
void rtl8188e_handle_ra_tx_report2(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
+extern struct rtl8xxxu_fileops rtl8710bu_fops;
extern struct rtl8xxxu_fileops rtl8188fu_fops;
extern struct rtl8xxxu_fileops rtl8188eu_fops;
extern struct rtl8xxxu_fileops rtl8192cu_fops;
rtl8xxxu_write8(priv, REG_EARLY_MODE_CONTROL_8188E + 3, 0x01);
}
-static s8 rtl8188e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+static s8 rtl8188e_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
{
/* only use lna 0/1/2/3/7 */
static const s8 lna_gain_table_0[8] = {17, -1, -13, -29, -32, -35, -38, -41};
/* only use lna 3/7 */
static const s8 lna_gain_table_1[8] = {29, 20, 12, 3, -6, -15, -24, -33};
+ u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
s8 lna_gain = 0;
dev_dbg(dev, "%s: len: %d items: %d\n", __func__, tx_rpt_len, items);
+ /* We only use macid 0, so only the first item is relevant.
+ * AP mode will use more of them if it's ever implemented.
+ */
+ if (!priv->vif || priv->vif->type == NL80211_IFTYPE_STATION)
+ items = 1;
+
for (macid = 0; macid < items; macid++) {
valid = false;
min_rpt_time = ra->rpt_time;
rpt += TX_RPT2_ITEM_SIZE;
-
- /*
- * We only use macid 0, so only the first item is relevant.
- * AP mode will use more of them if it's ever implemented.
- */
- break;
}
if (min_rpt_time != ra->pre_min_rpt_time) {
.load_firmware = rtl8188eu_load_firmware,
.power_on = rtl8188eu_power_on,
.power_off = rtl8188eu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8188eu_reset_8051,
.llt_init = rtl8xxxu_init_llt_table,
.init_phy_bb = rtl8188eu_init_phy_bb,
.phy_iq_calibrate = rtl8188eu_phy_iq_calibrate,
.config_channel = rtl8188eu_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.init_aggregation = rtl8188eu_init_aggregation,
.enable_rf = rtl8188e_enable_rf,
.disable_rf = rtl8188e_disable_rf,
return ret;
}
-static void rtl8188f_phy_lc_calibrate(struct rtl8xxxu_priv *priv)
+void rtl8188f_phy_lc_calibrate(struct rtl8xxxu_priv *priv)
{
u32 val32;
u32 rf_amode, lstf;
cfo->crystal_cap = crystal_cap;
}
-static s8 rtl8188f_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+static s8 rtl8188f_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
{
+ u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
.load_firmware = rtl8188fu_load_firmware,
.power_on = rtl8188fu_power_on,
.power_off = rtl8188fu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8xxxu_reset_8051,
.llt_init = rtl8xxxu_auto_llt_table,
.init_phy_bb = rtl8188fu_init_phy_bb,
.phy_iq_calibrate = rtl8188fu_phy_iq_calibrate,
.config_channel = rtl8188fu_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc24,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.init_aggregation = rtl8188fu_init_aggregation,
.init_statistics = rtl8188fu_init_statistics,
.init_burst = rtl8xxxu_init_burst,
.load_firmware = rtl8192cu_load_firmware,
.power_on = rtl8192cu_power_on,
.power_off = rtl8xxxu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8xxxu_reset_8051,
.llt_init = rtl8xxxu_init_llt_table,
.init_phy_bb = rtl8xxxu_gen1_init_phy_bb,
.phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
.config_channel = rtl8xxxu_gen1_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.init_aggregation = rtl8xxxu_gen1_init_aggregation,
.enable_rf = rtl8xxxu_gen1_enable_rf,
.disable_rf = rtl8xxxu_gen1_disable_rf,
rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
}
-static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
{
static const s8 lna_gain_table_0[8] = {15, 9, -10, -21, -23, -27, -43, -44};
static const s8 lna_gain_table_1[8] = {24, 18, 13, -4, -11, -18, -31, -36};
+ u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
s8 lna_gain = 0;
.load_firmware = rtl8192eu_load_firmware,
.power_on = rtl8192eu_power_on,
.power_off = rtl8192eu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8xxxu_reset_8051,
.llt_init = rtl8xxxu_auto_llt_table,
.init_phy_bb = rtl8192eu_init_phy_bb,
.phy_iq_calibrate = rtl8192eu_phy_iq_calibrate,
.config_channel = rtl8xxxu_gen2_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc24,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.enable_rf = rtl8192e_enable_rf,
.disable_rf = rtl8xxxu_gen2_disable_rf,
.usb_quirks = rtl8xxxu_gen2_usb_quirks,
.rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
.has_s0s1 = 0,
.gen2_thermal_meter = 1,
+ .needs_full_init = 1,
.adda_1t_init = 0x0fc01616,
.adda_1t_path_on = 0x0fc01616,
.adda_2t_path_on_a = 0x0fc01616,
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * RTL8XXXU mac80211 USB driver - 8710bu aka 8188gu specific subdriver
+ *
+ * Copyright (c) 2023 Bitterblue Smith <rtl8821cerfe2@gmail.com>
+ *
+ * Portions copied from existing rtl8xxxu code:
+ * Copyright (c) 2014 - 2017 Jes Sorensen <Jes.Sorensen@gmail.com>
+ *
+ * Portions, notably calibration code:
+ * Copyright(c) 2007 - 2011 Realtek Corporation. All rights reserved.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/usb.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/wireless.h>
+#include <linux/firmware.h>
+#include <linux/moduleparam.h>
+#include <net/mac80211.h>
+#include "rtl8xxxu.h"
+#include "rtl8xxxu_regs.h"
+
+static const struct rtl8xxxu_reg8val rtl8710b_mac_init_table[] = {
+ {0x421, 0x0F}, {0x428, 0x0A}, {0x429, 0x10}, {0x430, 0x00},
+ {0x431, 0x00}, {0x432, 0x00}, {0x433, 0x01}, {0x434, 0x04},
+ {0x435, 0x05}, {0x436, 0x07}, {0x437, 0x08}, {0x43C, 0x04},
+ {0x43D, 0x05}, {0x43E, 0x07}, {0x43F, 0x08}, {0x440, 0x5D},
+ {0x441, 0x01}, {0x442, 0x00}, {0x444, 0x10}, {0x445, 0x00},
+ {0x446, 0x00}, {0x447, 0x00}, {0x448, 0x00}, {0x449, 0xF0},
+ {0x44A, 0x0F}, {0x44B, 0x3E}, {0x44C, 0x10}, {0x44D, 0x00},
+ {0x44E, 0x00}, {0x44F, 0x00}, {0x450, 0x00}, {0x451, 0xF0},
+ {0x452, 0x0F}, {0x453, 0x00}, {0x456, 0x5E}, {0x460, 0x66},
+ {0x461, 0x66}, {0x4C8, 0xFF}, {0x4C9, 0x08}, {0x4CC, 0xFF},
+ {0x4CD, 0xFF}, {0x4CE, 0x01}, {0x500, 0x26}, {0x501, 0xA2},
+ {0x502, 0x2F}, {0x503, 0x00}, {0x504, 0x28}, {0x505, 0xA3},
+ {0x506, 0x5E}, {0x507, 0x00}, {0x508, 0x2B}, {0x509, 0xA4},
+ {0x50A, 0x5E}, {0x50B, 0x00}, {0x50C, 0x4F}, {0x50D, 0xA4},
+ {0x50E, 0x00}, {0x50F, 0x00}, {0x512, 0x1C}, {0x514, 0x0A},
+ {0x516, 0x0A}, {0x525, 0x4F}, {0x550, 0x10}, {0x551, 0x10},
+ {0x559, 0x02}, {0x55C, 0x28}, {0x55D, 0xFF}, {0x605, 0x30},
+ {0x608, 0x0E}, {0x609, 0x2A}, {0x620, 0xFF}, {0x621, 0xFF},
+ {0x622, 0xFF}, {0x623, 0xFF}, {0x624, 0xFF}, {0x625, 0xFF},
+ {0x626, 0xFF}, {0x627, 0xFF}, {0x638, 0x28}, {0x63C, 0x0A},
+ {0x63D, 0x0A}, {0x63E, 0x0C}, {0x63F, 0x0C}, {0x640, 0x40},
+ {0x642, 0x40}, {0x643, 0x00}, {0x652, 0xC8}, {0x66A, 0xB0},
+ {0x66E, 0x05}, {0x700, 0x21}, {0x701, 0x43}, {0x702, 0x65},
+ {0x703, 0x87}, {0x708, 0x21}, {0x709, 0x43}, {0x70A, 0x65},
+ {0x70B, 0x87},
+ {0xffff, 0xff},
+};
+
+/* If updating the phy init tables, also update rtl8710b_revise_cck_tx_psf(). */
+static const struct rtl8xxxu_reg32val rtl8710bu_qfn48m_u_phy_init_table[] = {
+ {0x800, 0x80045700}, {0x804, 0x00000001},
+ {0x808, 0x00FC8000}, {0x80C, 0x0000000A},
+ {0x810, 0x10001331}, {0x814, 0x020C3D10},
+ {0x818, 0x00200385}, {0x81C, 0x00000000},
+ {0x820, 0x01000100}, {0x824, 0x00390204},
+ {0x828, 0x00000000}, {0x82C, 0x00000000},
+ {0x830, 0x00000000}, {0x834, 0x00000000},
+ {0x838, 0x00000000}, {0x83C, 0x00000000},
+ {0x840, 0x00010000}, {0x844, 0x00000000},
+ {0x848, 0x00000000}, {0x84C, 0x00000000},
+ {0x850, 0x00030000}, {0x854, 0x00000000},
+ {0x858, 0x7E1A569A}, {0x85C, 0x569A569A},
+ {0x860, 0x00000130}, {0x864, 0x20000000},
+ {0x868, 0x00000000}, {0x86C, 0x27272700},
+ {0x870, 0x00050000}, {0x874, 0x25005000},
+ {0x878, 0x00000808}, {0x87C, 0x004F0201},
+ {0x880, 0xB0000B1E}, {0x884, 0x00000007},
+ {0x888, 0x00000000}, {0x88C, 0xCCC400C0},
+ {0x890, 0x00000800}, {0x894, 0xFFFFFFFE},
+ {0x898, 0x40302010}, {0x89C, 0x00706050},
+ {0x900, 0x00000000}, {0x904, 0x00000023},
+ {0x908, 0x00000000}, {0x90C, 0x81121111},
+ {0x910, 0x00000402}, {0x914, 0x00000201},
+ {0x920, 0x18C6318C}, {0x924, 0x0000018C},
+ {0x948, 0x99000000}, {0x94C, 0x00000010},
+ {0x950, 0x00003000}, {0x954, 0x5A880000},
+ {0x958, 0x4BC6D87A}, {0x95C, 0x04EB9B79},
+ {0x96C, 0x00000003}, {0x970, 0x00000000},
+ {0x974, 0x00000000}, {0x978, 0x00000000},
+ {0x97C, 0x13000000}, {0x980, 0x00000000},
+ {0xA00, 0x00D046C8}, {0xA04, 0x80FF800C},
+ {0xA08, 0x84838300}, {0xA0C, 0x2E20100F},
+ {0xA10, 0x9500BB78}, {0xA14, 0x1114D028},
+ {0xA18, 0x00881117}, {0xA1C, 0x89140F00},
+ {0xA20, 0xE82C0001}, {0xA24, 0x64B80C1C},
+ {0xA28, 0x00008810}, {0xA2C, 0x00D30000},
+ {0xA70, 0x101FBF00}, {0xA74, 0x00000007},
+ {0xA78, 0x00000900}, {0xA7C, 0x225B0606},
+ {0xA80, 0x218075B1}, {0xA84, 0x00200000},
+ {0xA88, 0x040C0000}, {0xA8C, 0x12345678},
+ {0xA90, 0xABCDEF00}, {0xA94, 0x001B1B89},
+ {0xA98, 0x00000000}, {0xA9C, 0x80020000},
+ {0xAA0, 0x00000000}, {0xAA4, 0x0000000C},
+ {0xAA8, 0xCA110058}, {0xAAC, 0x01235667},
+ {0xAB0, 0x00000000}, {0xAB4, 0x20201402},
+ {0xB2C, 0x00000000}, {0xC00, 0x48071D40},
+ {0xC04, 0x03A05611}, {0xC08, 0x000000E4},
+ {0xC0C, 0x6C6C6C6C}, {0xC10, 0x18800000},
+ {0xC14, 0x40000100}, {0xC18, 0x08800000},
+ {0xC1C, 0x40000100}, {0xC20, 0x00000000},
+ {0xC24, 0x00000000}, {0xC28, 0x00000000},
+ {0xC2C, 0x00000000}, {0xC30, 0x69E9AC4A},
+ {0xC34, 0x31000040}, {0xC38, 0x21688080},
+ {0xC3C, 0x0000170C}, {0xC40, 0x1F78403F},
+ {0xC44, 0x00010036}, {0xC48, 0xEC020107},
+ {0xC4C, 0x007F037F}, {0xC50, 0x69553420},
+ {0xC54, 0x43BC0094}, {0xC58, 0x00013169},
+ {0xC5C, 0x00250492}, {0xC60, 0x00280A00},
+ {0xC64, 0x7112848B}, {0xC68, 0x47C074FF},
+ {0xC6C, 0x00000036}, {0xC70, 0x2C7F000D},
+ {0xC74, 0x020600DB}, {0xC78, 0x0000001F},
+ {0xC7C, 0x00B91612}, {0xC80, 0x390000E4},
+ {0xC84, 0x11F60000}, {0xC88, 0x1051B75F},
+ {0xC8C, 0x20200109}, {0xC90, 0x00091521},
+ {0xC94, 0x00000000}, {0xC98, 0x00121820},
+ {0xC9C, 0x00007F7F}, {0xCA0, 0x00011000},
+ {0xCA4, 0x800000A0}, {0xCA8, 0x84E6C606},
+ {0xCAC, 0x00000060}, {0xCB0, 0x00000000},
+ {0xCB4, 0x00000000}, {0xCB8, 0x00000000},
+ {0xCBC, 0x28000000}, {0xCC0, 0x1051B75F},
+ {0xCC4, 0x00000109}, {0xCC8, 0x000442D6},
+ {0xCCC, 0x00000000}, {0xCD0, 0x000001C8},
+ {0xCD4, 0x001C8000}, {0xCD8, 0x00000100},
+ {0xCDC, 0x40100000}, {0xCE0, 0x00222220},
+ {0xCE4, 0x10000000}, {0xCE8, 0x37644302},
+ {0xCEC, 0x2F97D40C}, {0xD00, 0x04030740},
+ {0xD04, 0x40020401}, {0xD08, 0x0000907F},
+ {0xD0C, 0x20010201}, {0xD10, 0xA0633333},
+ {0xD14, 0x3333BC53}, {0xD18, 0x7A8F5B6F},
+ {0xD2C, 0xCB979975}, {0xD30, 0x00000000},
+ {0xD34, 0x40608000}, {0xD38, 0x88000000},
+ {0xD3C, 0xC0127353}, {0xD40, 0x00000000},
+ {0xD44, 0x00000000}, {0xD48, 0x00000000},
+ {0xD4C, 0x00000000}, {0xD50, 0x00006528},
+ {0xD54, 0x00000000}, {0xD58, 0x00000282},
+ {0xD5C, 0x30032064}, {0xD60, 0x4653DE68},
+ {0xD64, 0x04518A3C}, {0xD68, 0x00002101},
+ {0xE00, 0x2D2D2D2D}, {0xE04, 0x2D2D2D2D},
+ {0xE08, 0x0390272D}, {0xE10, 0x2D2D2D2D},
+ {0xE14, 0x2D2D2D2D}, {0xE18, 0x2D2D2D2D},
+ {0xE1C, 0x2D2D2D2D}, {0xE28, 0x00000000},
+ {0xE30, 0x1000DC1F}, {0xE34, 0x10008C1F},
+ {0xE38, 0x02140102}, {0xE3C, 0x681604C2},
+ {0xE40, 0x01007C00}, {0xE44, 0x01004800},
+ {0xE48, 0xFB000000}, {0xE4C, 0x000028D1},
+ {0xE50, 0x1000DC1F}, {0xE54, 0x10008C1F},
+ {0xE58, 0x02140102}, {0xE5C, 0x28160D05},
+ {0xE60, 0x0000C008}, {0xE68, 0x001B25A4},
+ {0xE64, 0x281600A0}, {0xE6C, 0x01C00010},
+ {0xE70, 0x01C00010}, {0xE74, 0x02000010},
+ {0xE78, 0x02000010}, {0xE7C, 0x02000010},
+ {0xE80, 0x02000010}, {0xE84, 0x01C00010},
+ {0xE88, 0x02000010}, {0xE8C, 0x01C00010},
+ {0xED0, 0x01C00010}, {0xED4, 0x01C00010},
+ {0xED8, 0x01C00010}, {0xEDC, 0x00000010},
+ {0xEE0, 0x00000010}, {0xEEC, 0x03C00010},
+ {0xF14, 0x00000003}, {0xF00, 0x00100300},
+ {0xF08, 0x0000800B}, {0xF0C, 0x0000F007},
+ {0xF10, 0x0000A487}, {0xF1C, 0x80000064},
+ {0xF38, 0x00030155}, {0xF3C, 0x0000003A},
+ {0xF4C, 0x13000000}, {0xF50, 0x00000000},
+ {0xF18, 0x00000000},
+ {0xffff, 0xffffffff},
+};
+
+/* If updating the phy init tables, also update rtl8710b_revise_cck_tx_psf(). */
+static const struct rtl8xxxu_reg32val rtl8710bu_qfn48m_s_phy_init_table[] = {
+ {0x800, 0x80045700}, {0x804, 0x00000001},
+ {0x808, 0x00FC8000}, {0x80C, 0x0000000A},
+ {0x810, 0x10001331}, {0x814, 0x020C3D10},
+ {0x818, 0x00200385}, {0x81C, 0x00000000},
+ {0x820, 0x01000100}, {0x824, 0x00390204},
+ {0x828, 0x00000000}, {0x82C, 0x00000000},
+ {0x830, 0x00000000}, {0x834, 0x00000000},
+ {0x838, 0x00000000}, {0x83C, 0x00000000},
+ {0x840, 0x00010000}, {0x844, 0x00000000},
+ {0x848, 0x00000000}, {0x84C, 0x00000000},
+ {0x850, 0x00030000}, {0x854, 0x00000000},
+ {0x858, 0x7E1A569A}, {0x85C, 0x569A569A},
+ {0x860, 0x00000130}, {0x864, 0x20000000},
+ {0x868, 0x00000000}, {0x86C, 0x27272700},
+ {0x870, 0x00050000}, {0x874, 0x25005000},
+ {0x878, 0x00000808}, {0x87C, 0x004F0201},
+ {0x880, 0xB0000B1E}, {0x884, 0x00000007},
+ {0x888, 0x00000000}, {0x88C, 0xCCC400C0},
+ {0x890, 0x00000800}, {0x894, 0xFFFFFFFE},
+ {0x898, 0x40302010}, {0x89C, 0x00706050},
+ {0x900, 0x00000000}, {0x904, 0x00000023},
+ {0x908, 0x00000000}, {0x90C, 0x81121111},
+ {0x910, 0x00000402}, {0x914, 0x00000201},
+ {0x920, 0x18C6318C}, {0x924, 0x0000018C},
+ {0x948, 0x99000000}, {0x94C, 0x00000010},
+ {0x950, 0x00003000}, {0x954, 0x5A880000},
+ {0x958, 0x4BC6D87A}, {0x95C, 0x04EB9B79},
+ {0x96C, 0x00000003}, {0x970, 0x00000000},
+ {0x974, 0x00000000}, {0x978, 0x00000000},
+ {0x97C, 0x13000000}, {0x980, 0x00000000},
+ {0xA00, 0x00D046C8}, {0xA04, 0x80FF800C},
+ {0xA08, 0x84838300}, {0xA0C, 0x2A20100F},
+ {0xA10, 0x9500BB78}, {0xA14, 0x1114D028},
+ {0xA18, 0x00881117}, {0xA1C, 0x89140F00},
+ {0xA20, 0xE82C0001}, {0xA24, 0x64B80C1C},
+ {0xA28, 0x00008810}, {0xA2C, 0x00D30000},
+ {0xA70, 0x101FBF00}, {0xA74, 0x00000007},
+ {0xA78, 0x00000900}, {0xA7C, 0x225B0606},
+ {0xA80, 0x218075B1}, {0xA84, 0x00200000},
+ {0xA88, 0x040C0000}, {0xA8C, 0x12345678},
+ {0xA90, 0xABCDEF00}, {0xA94, 0x001B1B89},
+ {0xA98, 0x00000000}, {0xA9C, 0x80020000},
+ {0xAA0, 0x00000000}, {0xAA4, 0x0000000C},
+ {0xAA8, 0xCA110058}, {0xAAC, 0x01235667},
+ {0xAB0, 0x00000000}, {0xAB4, 0x20201402},
+ {0xB2C, 0x00000000}, {0xC00, 0x48071D40},
+ {0xC04, 0x03A05611}, {0xC08, 0x000000E4},
+ {0xC0C, 0x6C6C6C6C}, {0xC10, 0x18800000},
+ {0xC14, 0x40000100}, {0xC18, 0x08800000},
+ {0xC1C, 0x40000100}, {0xC20, 0x00000000},
+ {0xC24, 0x00000000}, {0xC28, 0x00000000},
+ {0xC2C, 0x00000000}, {0xC30, 0x69E9AC4A},
+ {0xC34, 0x31000040}, {0xC38, 0x21688080},
+ {0xC3C, 0x0000170C}, {0xC40, 0x1F78403F},
+ {0xC44, 0x00010036}, {0xC48, 0xEC020107},
+ {0xC4C, 0x007F037F}, {0xC50, 0x69553420},
+ {0xC54, 0x43BC0094}, {0xC58, 0x00013169},
+ {0xC5C, 0x00250492}, {0xC60, 0x00280A00},
+ {0xC64, 0x7112848B}, {0xC68, 0x47C074FF},
+ {0xC6C, 0x00000036}, {0xC70, 0x2C7F000D},
+ {0xC74, 0x020600DB}, {0xC78, 0x0000001F},
+ {0xC7C, 0x00B91612}, {0xC80, 0x390000E4},
+ {0xC84, 0x11F60000}, {0xC88, 0x1051B75F},
+ {0xC8C, 0x20200109}, {0xC90, 0x00091521},
+ {0xC94, 0x00000000}, {0xC98, 0x00121820},
+ {0xC9C, 0x00007F7F}, {0xCA0, 0x00011000},
+ {0xCA4, 0x800000A0}, {0xCA8, 0x84E6C606},
+ {0xCAC, 0x00000060}, {0xCB0, 0x00000000},
+ {0xCB4, 0x00000000}, {0xCB8, 0x00000000},
+ {0xCBC, 0x28000000}, {0xCC0, 0x1051B75F},
+ {0xCC4, 0x00000109}, {0xCC8, 0x000442D6},
+ {0xCCC, 0x00000000}, {0xCD0, 0x000001C8},
+ {0xCD4, 0x001C8000}, {0xCD8, 0x00000100},
+ {0xCDC, 0x40100000}, {0xCE0, 0x00222220},
+ {0xCE4, 0x10000000}, {0xCE8, 0x37644302},
+ {0xCEC, 0x2F97D40C}, {0xD00, 0x04030740},
+ {0xD04, 0x40020401}, {0xD08, 0x0000907F},
+ {0xD0C, 0x20010201}, {0xD10, 0xA0633333},
+ {0xD14, 0x3333BC53}, {0xD18, 0x7A8F5B6F},
+ {0xD2C, 0xCB979975}, {0xD30, 0x00000000},
+ {0xD34, 0x40608000}, {0xD38, 0x88000000},
+ {0xD3C, 0xC0127353}, {0xD40, 0x00000000},
+ {0xD44, 0x00000000}, {0xD48, 0x00000000},
+ {0xD4C, 0x00000000}, {0xD50, 0x00006528},
+ {0xD54, 0x00000000}, {0xD58, 0x00000282},
+ {0xD5C, 0x30032064}, {0xD60, 0x4653DE68},
+ {0xD64, 0x04518A3C}, {0xD68, 0x00002101},
+ {0xE00, 0x2D2D2D2D}, {0xE04, 0x2D2D2D2D},
+ {0xE08, 0x0390272D}, {0xE10, 0x2D2D2D2D},
+ {0xE14, 0x2D2D2D2D}, {0xE18, 0x2D2D2D2D},
+ {0xE1C, 0x2D2D2D2D}, {0xE28, 0x00000000},
+ {0xE30, 0x1000DC1F}, {0xE34, 0x10008C1F},
+ {0xE38, 0x02140102}, {0xE3C, 0x681604C2},
+ {0xE40, 0x01007C00}, {0xE44, 0x01004800},
+ {0xE48, 0xFB000000}, {0xE4C, 0x000028D1},
+ {0xE50, 0x1000DC1F}, {0xE54, 0x10008C1F},
+ {0xE58, 0x02140102}, {0xE5C, 0x28160D05},
+ {0xE60, 0x0000C008}, {0xE68, 0x001B25A4},
+ {0xE64, 0x281600A0}, {0xE6C, 0x01C00010},
+ {0xE70, 0x01C00010}, {0xE74, 0x02000010},
+ {0xE78, 0x02000010}, {0xE7C, 0x02000010},
+ {0xE80, 0x02000010}, {0xE84, 0x01C00010},
+ {0xE88, 0x02000010}, {0xE8C, 0x01C00010},
+ {0xED0, 0x01C00010}, {0xED4, 0x01C00010},
+ {0xED8, 0x01C00010}, {0xEDC, 0x00000010},
+ {0xEE0, 0x00000010}, {0xEEC, 0x03C00010},
+ {0xF14, 0x00000003}, {0xF00, 0x00100300},
+ {0xF08, 0x0000800B}, {0xF0C, 0x0000F007},
+ {0xF10, 0x0000A487}, {0xF1C, 0x80000064},
+ {0xF38, 0x00030155}, {0xF3C, 0x0000003A},
+ {0xF4C, 0x13000000}, {0xF50, 0x00000000},
+ {0xF18, 0x00000000},
+ {0xffff, 0xffffffff},
+};
+
+static const struct rtl8xxxu_reg32val rtl8710b_agc_table[] = {
+ {0xC78, 0xFC000001}, {0xC78, 0xFB010001},
+ {0xC78, 0xFA020001}, {0xC78, 0xF9030001},
+ {0xC78, 0xF8040001}, {0xC78, 0xF7050001},
+ {0xC78, 0xF6060001}, {0xC78, 0xF5070001},
+ {0xC78, 0xF4080001}, {0xC78, 0xF3090001},
+ {0xC78, 0xF20A0001}, {0xC78, 0xF10B0001},
+ {0xC78, 0xF00C0001}, {0xC78, 0xEF0D0001},
+ {0xC78, 0xEE0E0001}, {0xC78, 0xED0F0001},
+ {0xC78, 0xEC100001}, {0xC78, 0xEB110001},
+ {0xC78, 0xEA120001}, {0xC78, 0xE9130001},
+ {0xC78, 0xE8140001}, {0xC78, 0xE7150001},
+ {0xC78, 0xE6160001}, {0xC78, 0xE5170001},
+ {0xC78, 0xE4180001}, {0xC78, 0xE3190001},
+ {0xC78, 0xE21A0001}, {0xC78, 0xE11B0001},
+ {0xC78, 0xE01C0001}, {0xC78, 0xC31D0001},
+ {0xC78, 0xC21E0001}, {0xC78, 0xC11F0001},
+ {0xC78, 0xC0200001}, {0xC78, 0xA3210001},
+ {0xC78, 0xA2220001}, {0xC78, 0xA1230001},
+ {0xC78, 0xA0240001}, {0xC78, 0x86250001},
+ {0xC78, 0x85260001}, {0xC78, 0x84270001},
+ {0xC78, 0x83280001}, {0xC78, 0x82290001},
+ {0xC78, 0x812A0001}, {0xC78, 0x802B0001},
+ {0xC78, 0x632C0001}, {0xC78, 0x622D0001},
+ {0xC78, 0x612E0001}, {0xC78, 0x602F0001},
+ {0xC78, 0x42300001}, {0xC78, 0x41310001},
+ {0xC78, 0x40320001}, {0xC78, 0x23330001},
+ {0xC78, 0x22340001}, {0xC78, 0x21350001},
+ {0xC78, 0x20360001}, {0xC78, 0x02370001},
+ {0xC78, 0x01380001}, {0xC78, 0x00390001},
+ {0xC78, 0x003A0001}, {0xC78, 0x003B0001},
+ {0xC78, 0x003C0001}, {0xC78, 0x003D0001},
+ {0xC78, 0x003E0001}, {0xC78, 0x003F0001},
+ {0xC78, 0xF7400001}, {0xC78, 0xF7410001},
+ {0xC78, 0xF7420001}, {0xC78, 0xF7430001},
+ {0xC78, 0xF7440001}, {0xC78, 0xF7450001},
+ {0xC78, 0xF7460001}, {0xC78, 0xF7470001},
+ {0xC78, 0xF7480001}, {0xC78, 0xF6490001},
+ {0xC78, 0xF34A0001}, {0xC78, 0xF24B0001},
+ {0xC78, 0xF14C0001}, {0xC78, 0xF04D0001},
+ {0xC78, 0xD14E0001}, {0xC78, 0xD04F0001},
+ {0xC78, 0xB5500001}, {0xC78, 0xB4510001},
+ {0xC78, 0xB3520001}, {0xC78, 0xB2530001},
+ {0xC78, 0xB1540001}, {0xC78, 0xB0550001},
+ {0xC78, 0xAF560001}, {0xC78, 0xAE570001},
+ {0xC78, 0xAD580001}, {0xC78, 0xAC590001},
+ {0xC78, 0xAB5A0001}, {0xC78, 0xAA5B0001},
+ {0xC78, 0xA95C0001}, {0xC78, 0xA85D0001},
+ {0xC78, 0xA75E0001}, {0xC78, 0xA65F0001},
+ {0xC78, 0xA5600001}, {0xC78, 0xA4610001},
+ {0xC78, 0xA3620001}, {0xC78, 0xA2630001},
+ {0xC78, 0xA1640001}, {0xC78, 0xA0650001},
+ {0xC78, 0x87660001}, {0xC78, 0x86670001},
+ {0xC78, 0x85680001}, {0xC78, 0x84690001},
+ {0xC78, 0x836A0001}, {0xC78, 0x826B0001},
+ {0xC78, 0x816C0001}, {0xC78, 0x806D0001},
+ {0xC78, 0x636E0001}, {0xC78, 0x626F0001},
+ {0xC78, 0x61700001}, {0xC78, 0x60710001},
+ {0xC78, 0x42720001}, {0xC78, 0x41730001},
+ {0xC78, 0x40740001}, {0xC78, 0x23750001},
+ {0xC78, 0x22760001}, {0xC78, 0x21770001},
+ {0xC78, 0x20780001}, {0xC78, 0x03790001},
+ {0xC78, 0x027A0001}, {0xC78, 0x017B0001},
+ {0xC78, 0x007C0001}, {0xC78, 0x007D0001},
+ {0xC78, 0x007E0001}, {0xC78, 0x007F0001},
+ {0xC50, 0x69553422}, {0xC50, 0x69553420},
+ {0xffff, 0xffffffff}
+};
+
+static const struct rtl8xxxu_rfregval rtl8710bu_qfn48m_u_radioa_init_table[] = {
+ {0x00, 0x00030000}, {0x08, 0x00008400},
+ {0x17, 0x00000000}, {0x18, 0x00000C01},
+ {0x19, 0x000739D2}, {0x1C, 0x00000C4C},
+ {0x1B, 0x00000C6C}, {0x1E, 0x00080009},
+ {0x1F, 0x00000880}, {0x2F, 0x0001A060},
+ {0x3F, 0x00015000}, {0x42, 0x000060C0},
+ {0x57, 0x000D0000}, {0x58, 0x000C0160},
+ {0x67, 0x00001552}, {0x83, 0x00000000},
+ {0xB0, 0x000FF9F0}, {0xB1, 0x00010018},
+ {0xB2, 0x00054C00}, {0xB4, 0x0004486B},
+ {0xB5, 0x0000112A}, {0xB6, 0x0000053E},
+ {0xB7, 0x00014408}, {0xB8, 0x00010200},
+ {0xB9, 0x00080801}, {0xBA, 0x00040001},
+ {0xBB, 0x00000400}, {0xBF, 0x000C0000},
+ {0xC2, 0x00002400}, {0xC3, 0x00000009},
+ {0xC4, 0x00040C91}, {0xC5, 0x00099999},
+ {0xC6, 0x000000A3}, {0xC7, 0x00088820},
+ {0xC8, 0x00076C06}, {0xC9, 0x00000000},
+ {0xCA, 0x00080000}, {0xDF, 0x00000180},
+ {0xEF, 0x000001A8}, {0x3D, 0x00000003},
+ {0x3D, 0x00080003}, {0x51, 0x000F1E69},
+ {0x52, 0x000FBF6C}, {0x53, 0x0000032F},
+ {0x54, 0x00055007}, {0x56, 0x000517F0},
+ {0x35, 0x000000F4}, {0x35, 0x00000179},
+ {0x35, 0x000002F4}, {0x36, 0x00000BF8},
+ {0x36, 0x00008BF8}, {0x36, 0x00010BF8},
+ {0x36, 0x00018BF8}, {0x18, 0x00000C01},
+ {0x5A, 0x00048000}, {0x5A, 0x00048000},
+ {0x34, 0x0000ADF5}, {0x34, 0x00009DF2},
+ {0x34, 0x00008DEF}, {0x34, 0x00007DEC},
+ {0x34, 0x00006DE9}, {0x34, 0x00005CEC},
+ {0x34, 0x00004CE9}, {0x34, 0x00003C6C},
+ {0x34, 0x00002C69}, {0x34, 0x0000106E},
+ {0x34, 0x0000006B}, {0x84, 0x00048000},
+ {0x87, 0x00000065}, {0x8E, 0x00065540},
+ {0xDF, 0x00000110}, {0x86, 0x0000002A},
+ {0x8F, 0x00088000}, {0x81, 0x0003FD80},
+ {0xEF, 0x00082000}, {0x3B, 0x000F0F00},
+ {0x3B, 0x000E0E00}, {0x3B, 0x000DFE00},
+ {0x3B, 0x000C0D00}, {0x3B, 0x000B0C00},
+ {0x3B, 0x000A0500}, {0x3B, 0x00090400},
+ {0x3B, 0x00080000}, {0x3B, 0x00070F00},
+ {0x3B, 0x00060E00}, {0x3B, 0x00050A00},
+ {0x3B, 0x00040D00}, {0x3B, 0x00030C00},
+ {0x3B, 0x00020500}, {0x3B, 0x00010400},
+ {0x3B, 0x00000000}, {0xEF, 0x00080000},
+ {0xEF, 0x00088000}, {0x3B, 0x00000170},
+ {0x3B, 0x000C0030}, {0xEF, 0x00080000},
+ {0xEF, 0x00080000}, {0x30, 0x00010000},
+ {0x31, 0x0000000F}, {0x32, 0x00047EFE},
+ {0xEF, 0x00000000}, {0x00, 0x00010159},
+ {0x18, 0x0000FC01}, {0xFE, 0x00000000},
+ {0x00, 0x00033D95},
+ {0xff, 0xffffffff}
+};
+
+static const struct rtl8xxxu_rfregval rtl8710bu_qfn48m_s_radioa_init_table[] = {
+ {0x00, 0x00030000}, {0x08, 0x00008400},
+ {0x17, 0x00000000}, {0x18, 0x00000C01},
+ {0x19, 0x000739D2}, {0x1C, 0x00000C4C},
+ {0x1B, 0x00000C6C}, {0x1E, 0x00080009},
+ {0x1F, 0x00000880}, {0x2F, 0x0001A060},
+ {0x3F, 0x00015000}, {0x42, 0x000060C0},
+ {0x57, 0x000D0000}, {0x58, 0x000C0160},
+ {0x67, 0x00001552}, {0x83, 0x00000000},
+ {0xB0, 0x000FF9F0}, {0xB1, 0x00010018},
+ {0xB2, 0x00054C00}, {0xB4, 0x0004486B},
+ {0xB5, 0x0000112A}, {0xB6, 0x0000053E},
+ {0xB7, 0x00014408}, {0xB8, 0x00010200},
+ {0xB9, 0x00080801}, {0xBA, 0x00040001},
+ {0xBB, 0x00000400}, {0xBF, 0x000C0000},
+ {0xC2, 0x00002400}, {0xC3, 0x00000009},
+ {0xC4, 0x00040C91}, {0xC5, 0x00099999},
+ {0xC6, 0x000000A3}, {0xC7, 0x00088820},
+ {0xC8, 0x00076C06}, {0xC9, 0x00000000},
+ {0xCA, 0x00080000}, {0xDF, 0x00000180},
+ {0xEF, 0x000001A8}, {0x3D, 0x00000003},
+ {0x3D, 0x00080003}, {0x51, 0x000F1E69},
+ {0x52, 0x000FBF6C}, {0x53, 0x0000032F},
+ {0x54, 0x00055007}, {0x56, 0x000517F0},
+ {0x35, 0x000000F4}, {0x35, 0x00000179},
+ {0x35, 0x000002F4}, {0x36, 0x00000BF8},
+ {0x36, 0x00008BF8}, {0x36, 0x00010BF8},
+ {0x36, 0x00018BF8}, {0x18, 0x00000C01},
+ {0x5A, 0x00048000}, {0x5A, 0x00048000},
+ {0x34, 0x0000ADF5}, {0x34, 0x00009DF2},
+ {0x34, 0x00008DEF}, {0x34, 0x00007DEC},
+ {0x34, 0x00006DE9}, {0x34, 0x00005CEC},
+ {0x34, 0x00004CE9}, {0x34, 0x00003C6C},
+ {0x34, 0x00002C69}, {0x34, 0x0000106E},
+ {0x34, 0x0000006B}, {0x84, 0x00048000},
+ {0x87, 0x00000065}, {0x8E, 0x00065540},
+ {0xDF, 0x00000110}, {0x86, 0x0000002A},
+ {0x8F, 0x00088000}, {0x81, 0x0003FD80},
+ {0xEF, 0x00082000}, {0x3B, 0x000F0F00},
+ {0x3B, 0x000E0E00}, {0x3B, 0x000DFE00},
+ {0x3B, 0x000C0D00}, {0x3B, 0x000B0C00},
+ {0x3B, 0x000A0500}, {0x3B, 0x00090400},
+ {0x3B, 0x00080000}, {0x3B, 0x00070F00},
+ {0x3B, 0x00060E00}, {0x3B, 0x00050A00},
+ {0x3B, 0x00040D00}, {0x3B, 0x00030C00},
+ {0x3B, 0x00020500}, {0x3B, 0x00010400},
+ {0x3B, 0x00000000}, {0xEF, 0x00080000},
+ {0xEF, 0x00088000}, {0x3B, 0x000000B0},
+ {0x3B, 0x000C0030}, {0xEF, 0x00080000},
+ {0xEF, 0x00080000}, {0x30, 0x00010000},
+ {0x31, 0x0000000F}, {0x32, 0x00047EFE},
+ {0xEF, 0x00000000}, {0x00, 0x00010159},
+ {0x18, 0x0000FC01}, {0xFE, 0x00000000},
+ {0x00, 0x00033D95},
+ {0xff, 0xffffffff}
+};
+
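+/*
+ * SYSON registers on the 8710B are accessed through an indirect
+ * address/data window. These accesses are serialized with
+ * syson_indirect_access_mutex.
+ */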
+static u32 rtl8710b_indirect_read32(struct rtl8xxxu_priv *priv, u32 addr)
+{
+ struct device *dev = &priv->udev->dev;
+ u32 val32, value = 0xffffffff;
+ u8 polling_count = 0xff;
+
+ if (!IS_ALIGNED(addr, 4)) {
+ dev_warn(dev, "%s: Aborting because 0x%x is not a multiple of 4.\n",
+ __func__, addr);
+ return value;
+ }
+
+ mutex_lock(&priv->syson_indirect_access_mutex);
+
+ rtl8xxxu_write32(priv, REG_USB_HOST_INDIRECT_ADDR_8710B, addr);
+ rtl8xxxu_write32(priv, REG_EFUSE_INDIRECT_CTRL_8710B, NORMAL_REG_READ_OFFSET);
+
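+	/* Poll until BIT(31) clears (read complete) or the retry budget runs out. */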
+ do
+ val32 = rtl8xxxu_read32(priv, REG_EFUSE_INDIRECT_CTRL_8710B);
+ while ((val32 & BIT(31)) && (--polling_count > 0));
+
+ if (polling_count == 0)
+ dev_warn(dev, "%s: Failed to read from 0x%x, 0x806c = 0x%x\n",
+ __func__, addr, val32);
+ else
+ value = rtl8xxxu_read32(priv, REG_USB_HOST_INDIRECT_DATA_8710B);
+
+ mutex_unlock(&priv->syson_indirect_access_mutex);
+
+ if (rtl8xxxu_debug & RTL8XXXU_DEBUG_REG_READ)
+ dev_info(dev, "%s(%04x) = 0x%08x\n", __func__, addr, value);
+
+ return value;
+}
+
+static void rtl8710b_indirect_write32(struct rtl8xxxu_priv *priv, u32 addr, u32 val)
+{
+ struct device *dev = &priv->udev->dev;
+ u8 polling_count = 0xff;
+ u32 val32;
+
+ if (!IS_ALIGNED(addr, 4)) {
+ dev_warn(dev, "%s: Aborting because 0x%x is not a multiple of 4.\n",
+ __func__, addr);
+ return;
+ }
+
+ mutex_lock(&priv->syson_indirect_access_mutex);
+
+ rtl8xxxu_write32(priv, REG_USB_HOST_INDIRECT_ADDR_8710B, addr);
+ rtl8xxxu_write32(priv, REG_USB_HOST_INDIRECT_DATA_8710B, val);
+ rtl8xxxu_write32(priv, REG_EFUSE_INDIRECT_CTRL_8710B, NORMAL_REG_WRITE_OFFSET);
+
+ do
+ val32 = rtl8xxxu_read32(priv, REG_EFUSE_INDIRECT_CTRL_8710B);
+ while ((val32 & BIT(31)) && (--polling_count > 0));
+
+ if (polling_count == 0)
+ dev_warn(dev, "%s: Failed to write 0x%x to 0x%x, 0x806c = 0x%x\n",
+ __func__, val, addr, val32);
+
+ mutex_unlock(&priv->syson_indirect_access_mutex);
+
+ if (rtl8xxxu_debug & RTL8XXXU_DEBUG_REG_WRITE)
+ dev_info(dev, "%s(%04x) = 0x%08x\n", __func__, addr, val);
+}
+
+static u32 rtl8710b_read_syson_reg(struct rtl8xxxu_priv *priv, u32 addr)
+{
+ return rtl8710b_indirect_read32(priv, addr | SYSON_REG_BASE_ADDR_8710B);
+}
+
+static void rtl8710b_write_syson_reg(struct rtl8xxxu_priv *priv, u32 addr, u32 val)
+{
+ rtl8710b_indirect_write32(priv, addr | SYSON_REG_BASE_ADDR_8710B, val);
+}
+
+static int rtl8710b_read_efuse8(struct rtl8xxxu_priv *priv, u16 offset, u8 *data)
+{
+ u32 val32;
+ int i;
+
+ /* Write Address */
+ rtl8xxxu_write32(priv, REG_USB_HOST_INDIRECT_ADDR_8710B, offset);
+
+ rtl8xxxu_write32(priv, REG_EFUSE_INDIRECT_CTRL_8710B, EFUSE_READ_OFFSET);
+
+ /* Poll for data read */
+ val32 = rtl8xxxu_read32(priv, REG_EFUSE_INDIRECT_CTRL_8710B);
+ for (i = 0; i < RTL8XXXU_MAX_REG_POLL; i++) {
+ val32 = rtl8xxxu_read32(priv, REG_EFUSE_INDIRECT_CTRL_8710B);
+ if (!(val32 & BIT(31)))
+ break;
+ }
+
+ if (i == RTL8XXXU_MAX_REG_POLL)
+ return -EIO;
+
+ val32 = rtl8xxxu_read32(priv, REG_USB_HOST_INDIRECT_DATA_8710B);
+
+ *data = val32 & 0xff;
+ return 0;
+}
+
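+/*
+ * Package type byte in the efuse. Unrecognized values fall back to
+ * QFN48M_U in rtl8710bu_identify_chip().
+ */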
+#define EEPROM_PACKAGE_TYPE_8710B 0xF8
+#define PACKAGE_QFN48M_U 0xee
+#define PACKAGE_QFN48M_S 0xfe
+
+static int rtl8710bu_identify_chip(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ u32 cfg0, cfg2, vendor;
+ u8 package_type = 0x7; /* a nonsense value */
+
+ sprintf(priv->chip_name, "8710BU");
+ priv->rtl_chip = RTL8710B;
+ priv->rf_paths = 1;
+ priv->rx_paths = 1;
+ priv->tx_paths = 1;
+ priv->has_wifi = 1;
+
+ cfg0 = rtl8710b_read_syson_reg(priv, REG_SYS_SYSTEM_CFG0_8710B);
+ priv->chip_cut = cfg0 & 0xf;
+
+ if (cfg0 & BIT(16)) {
+ dev_info(dev, "%s: Unsupported test chip\n", __func__);
+ return -EOPNOTSUPP;
+ }
+
+ vendor = u32_get_bits(cfg0, 0xc0);
+
+ /* SMIC and TSMC are swapped compared to rtl8xxxu_identify_vendor_2bits */
+ switch (vendor) {
+ case 0:
+ sprintf(priv->chip_vendor, "SMIC");
+ priv->vendor_smic = 1;
+ break;
+ case 1:
+ sprintf(priv->chip_vendor, "TSMC");
+ break;
+ case 2:
+ sprintf(priv->chip_vendor, "UMC");
+ priv->vendor_umc = 1;
+ break;
+ default:
+ sprintf(priv->chip_vendor, "unknown");
+ break;
+ }
+
+ rtl8710b_read_efuse8(priv, EEPROM_PACKAGE_TYPE_8710B, &package_type);
+
+ if (package_type == 0xff) {
+ dev_warn(dev, "Package type is undefined. Assuming it based on the vendor.\n");
+
+ if (priv->vendor_umc) {
+ package_type = PACKAGE_QFN48M_U;
+ } else if (priv->vendor_smic) {
+ package_type = PACKAGE_QFN48M_S;
+ } else {
+ dev_warn(dev, "The vendor is neither UMC nor SMIC. Assuming the package type is QFN48M_U.\n");
+
+ /*
+ * In this case the vendor driver doesn't set
+ * the package type to anything, which is the
+ * same as setting it to PACKAGE_DEFAULT (0).
+ */
+ package_type = PACKAGE_QFN48M_U;
+ }
+ } else if (package_type != PACKAGE_QFN48M_S &&
+ package_type != PACKAGE_QFN48M_U) {
+ dev_warn(dev, "Failed to read the package type. Assuming it's the default QFN48M_U.\n");
+
+ /*
+ * In this case the vendor driver actually sets it to
+ * PACKAGE_DEFAULT, but that selects the same values
+ * from the init tables as PACKAGE_QFN48M_U.
+ */
+ package_type = PACKAGE_QFN48M_U;
+ }
+
+ priv->package_type = package_type;
+
+ dev_dbg(dev, "Package type: 0x%x\n", package_type);
+
+ cfg2 = rtl8710b_read_syson_reg(priv, REG_SYS_SYSTEM_CFG2_8710B);
+ priv->rom_rev = cfg2 & 0xf;
+
+ return rtl8xxxu_config_endpoints_no_sie(priv);
+}
+
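+/*
+ * Channels 13 and 14 need tweaked CCK TX pulse-shaping filter values;
+ * all other channels restore the defaults from the phy init tables.
+ */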
+static void rtl8710b_revise_cck_tx_psf(struct rtl8xxxu_priv *priv, u8 channel)
+{
+ if (channel == 13) {
+ /* Normal values */
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER2, 0x64B80C1C);
+ rtl8xxxu_write32(priv, REG_CCK0_DEBUG_PORT, 0x00008810);
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER3, 0x01235667);
+ /* Special value for channel 13 */
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER1, 0xd1d80001);
+ } else if (channel == 14) {
+ /* Special values for channel 14 */
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER2, 0x0000B81C);
+ rtl8xxxu_write32(priv, REG_CCK0_DEBUG_PORT, 0x00000000);
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER3, 0x00003667);
+ /* Normal value */
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER1, 0xE82C0001);
+ } else {
+ /* Restore normal values from the phy init table */
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER2, 0x64B80C1C);
+ rtl8xxxu_write32(priv, REG_CCK0_DEBUG_PORT, 0x00008810);
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER3, 0x01235667);
+ rtl8xxxu_write32(priv, REG_CCK0_TX_FILTER1, 0xE82C0001);
+ }
+}
+
+static void rtl8710bu_config_channel(struct ieee80211_hw *hw)
+{
+ struct rtl8xxxu_priv *priv = hw->priv;
+ bool ht40 = conf_is_ht40(&hw->conf);
+ u8 channel, subchannel = 0;
+ bool sec_ch_above = 0;
+ u32 val32;
+ u16 val16;
+
+ channel = (u8)hw->conf.chandef.chan->hw_value;
+
+ if (conf_is_ht40_plus(&hw->conf)) {
+ sec_ch_above = 1;
+ channel += 2;
+ subchannel = 2;
+ } else if (conf_is_ht40_minus(&hw->conf)) {
+ sec_ch_above = 0;
+ channel -= 2;
+ subchannel = 1;
+ }
+
+ /* Set channel */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_MODE_AG);
+ u32p_replace_bits(&val32, channel, MODE_AG_CHANNEL_MASK);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_MODE_AG, val32);
+
+ rtl8710b_revise_cck_tx_psf(priv, channel);
+
+ /* Set bandwidth mode */
+ val16 = rtl8xxxu_read16(priv, REG_WMAC_TRXPTCL_CTL);
+ val16 &= ~WMAC_TRXPTCL_CTL_BW_MASK;
+ if (ht40)
+ val16 |= WMAC_TRXPTCL_CTL_BW_40;
+ rtl8xxxu_write16(priv, REG_WMAC_TRXPTCL_CTL, val16);
+
+ rtl8xxxu_write8(priv, REG_DATA_SUBCHANNEL, subchannel);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+ u32p_replace_bits(&val32, ht40, FPGA_RF_MODE);
+ rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA1_RF_MODE);
+ u32p_replace_bits(&val32, ht40, FPGA_RF_MODE);
+ rtl8xxxu_write32(priv, REG_FPGA1_RF_MODE, val32);
+
+ if (ht40) {
+ /* Set Control channel to upper or lower. */
+ val32 = rtl8xxxu_read32(priv, REG_CCK0_SYSTEM);
+ u32p_replace_bits(&val32, !sec_ch_above, CCK0_SIDEBAND);
+ rtl8xxxu_write32(priv, REG_CCK0_SYSTEM, val32);
+ }
+
+ /* RXADC CLK */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+ val32 |= GENMASK(10, 8);
+ rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+ /* TXDAC CLK */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+ val32 |= BIT(14) | BIT(12);
+ val32 &= ~BIT(13);
+ rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+ /* small BW */
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TX_PSDO_NOISE_WEIGHT);
+ val32 &= ~GENMASK(31, 30);
+ rtl8xxxu_write32(priv, REG_OFDM0_TX_PSDO_NOISE_WEIGHT, val32);
+
+ /* adc buffer clk */
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TX_PSDO_NOISE_WEIGHT);
+ val32 &= ~BIT(29);
+ val32 |= BIT(28);
+ rtl8xxxu_write32(priv, REG_OFDM0_TX_PSDO_NOISE_WEIGHT, val32);
+
+ /* adc buffer clk */
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_XA_RX_AFE);
+ val32 &= ~BIT(29);
+ val32 |= BIT(28);
+ rtl8xxxu_write32(priv, REG_OFDM0_XA_RX_AFE, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_RF_INT_OE);
+ val32 &= ~BIT(30);
+ val32 |= BIT(29);
+ rtl8xxxu_write32(priv, REG_FPGA0_XB_RF_INT_OE, val32);
+
+ if (ht40) {
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 &= ~BIT(19);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 &= ~GENMASK(23, 20);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 &= ~GENMASK(27, 24);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ /* RF TRX_BW */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_MODE_AG);
+ val32 &= ~MODE_AG_BW_MASK;
+ val32 |= MODE_AG_BW_40MHZ_8723B;
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_MODE_AG, val32);
+ } else {
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 |= BIT(19);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 &= ~GENMASK(23, 20);
+ val32 |= BIT(23);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM_RX_DFIR);
+ val32 &= ~GENMASK(27, 24);
+ val32 |= BIT(27) | BIT(25);
+ rtl8xxxu_write32(priv, REG_OFDM_RX_DFIR, val32);
+
+ /* RF TRX_BW */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_MODE_AG);
+ val32 &= ~MODE_AG_BW_MASK;
+ val32 |= MODE_AG_BW_20MHZ_8723B;
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_MODE_AG, val32);
+ }
+}
+
+static void rtl8710bu_init_aggregation(struct rtl8xxxu_priv *priv)
+{
+ u32 agg_rx;
+ u8 agg_ctrl;
+
+ /* RX aggregation */
+ agg_ctrl = rtl8xxxu_read8(priv, REG_TRXDMA_CTRL);
+ agg_ctrl &= ~TRXDMA_CTRL_RXDMA_AGG_EN;
+
+ agg_rx = rtl8xxxu_read32(priv, REG_RXDMA_AGG_PG_TH);
+ agg_rx &= ~RXDMA_USB_AGG_ENABLE;
+ agg_rx &= ~0xFF0F; /* reset agg size and timeout */
+
+ rtl8xxxu_write8(priv, REG_TRXDMA_CTRL, agg_ctrl);
+ rtl8xxxu_write32(priv, REG_RXDMA_AGG_PG_TH, agg_rx);
+}
+
+static void rtl8710bu_init_statistics(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+
+ /* Time duration for NHM unit: 4us, 0xc350=200ms */
+ rtl8xxxu_write16(priv, REG_NHM_TIMER_8723B + 2, 0xc350);
+ rtl8xxxu_write16(priv, REG_NHM_TH9_TH10_8723B + 2, 0xffff);
+ rtl8xxxu_write32(priv, REG_NHM_TH3_TO_TH0_8723B, 0xffffff50);
+ rtl8xxxu_write32(priv, REG_NHM_TH7_TO_TH4_8723B, 0xffffffff);
+
+ /* TH8 */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ val32 |= 0xff;
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* Enable CCK */
+ val32 = rtl8xxxu_read32(priv, REG_NHM_TH9_TH10_8723B);
+ val32 &= ~(BIT(8) | BIT(9) | BIT(10));
+ val32 |= BIT(8);
+ rtl8xxxu_write32(priv, REG_NHM_TH9_TH10_8723B, val32);
+
+ /* Max power amongst all RX antennas */
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_FA_RSTC);
+ val32 |= BIT(7);
+ rtl8xxxu_write32(priv, REG_OFDM0_FA_RSTC, val32);
+}
+
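+/*
+ * Read the physical efuse through the indirect window, expanding the
+ * PG headers (including extended headers) into the logical efuse map.
+ */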
+static int rtl8710b_read_efuse(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ u8 val8, word_mask, header, extheader;
+ u16 efuse_addr, offset;
+ int i, ret = 0;
+ u32 val32;
+
+ val32 = rtl8710b_read_syson_reg(priv, REG_SYS_EEPROM_CTRL0_8710B);
+ priv->boot_eeprom = u32_get_bits(val32, EEPROM_BOOT);
+ priv->has_eeprom = u32_get_bits(val32, EEPROM_ENABLE);
+
+ /* Default value is 0xff */
+ memset(priv->efuse_wifi.raw, 0xff, EFUSE_MAP_LEN);
+
+ efuse_addr = 0;
+ while (efuse_addr < EFUSE_REAL_CONTENT_LEN_8723A) {
+ u16 map_addr;
+
+ ret = rtl8710b_read_efuse8(priv, efuse_addr++, &header);
+ if (ret || header == 0xff)
+ goto exit;
+
+ if ((header & 0x1f) == 0x0f) { /* extended header */
+ offset = (header & 0xe0) >> 5;
+
+ ret = rtl8710b_read_efuse8(priv, efuse_addr++, &extheader);
+ if (ret)
+ goto exit;
+
+ /* All words disabled */
+ if ((extheader & 0x0f) == 0x0f)
+ continue;
+
+ offset |= ((extheader & 0xf0) >> 1);
+ word_mask = extheader & 0x0f;
+ } else {
+ offset = (header >> 4) & 0x0f;
+ word_mask = header & 0x0f;
+ }
+
+ /* Get word enable value from PG header */
+
+ /* We have 8 bits to indicate validity */
+ map_addr = offset * 8;
+ for (i = 0; i < EFUSE_MAX_WORD_UNIT; i++) {
+ /* Check word enable condition in the section */
+ if (word_mask & BIT(i)) {
+ map_addr += 2;
+ continue;
+ }
+
+ ret = rtl8710b_read_efuse8(priv, efuse_addr++, &val8);
+ if (ret)
+ goto exit;
+ if (map_addr >= EFUSE_MAP_LEN - 1) {
+ dev_warn(dev, "%s: Illegal map_addr (%04x), efuse corrupt!\n",
+ __func__, map_addr);
+ ret = -EINVAL;
+ goto exit;
+ }
+ priv->efuse_wifi.raw[map_addr++] = val8;
+
+ ret = rtl8710b_read_efuse8(priv, efuse_addr++, &val8);
+ if (ret)
+ goto exit;
+ priv->efuse_wifi.raw[map_addr++] = val8;
+ }
+ }
+
+exit:
+
+ return ret;
+}
+
+static int rtl8710bu_parse_efuse(struct rtl8xxxu_priv *priv)
+{
+ struct rtl8710bu_efuse *efuse = &priv->efuse_wifi.efuse8710bu;
+
+ if (efuse->rtl_id != cpu_to_le16(0x8195))
+ return -EINVAL;
+
+ ether_addr_copy(priv->mac_addr, efuse->mac_addr);
+
+ memcpy(priv->cck_tx_power_index_A, efuse->tx_power_index_A.cck_base,
+ sizeof(efuse->tx_power_index_A.cck_base));
+
+ memcpy(priv->ht40_1s_tx_power_index_A,
+ efuse->tx_power_index_A.ht40_base,
+ sizeof(efuse->tx_power_index_A.ht40_base));
+
+ priv->ofdm_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.a;
+ priv->ht20_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.b;
+
+ priv->default_crystal_cap = efuse->xtal_k & 0x3f;
+
+ return 0;
+}
+
+static int rtl8710bu_load_firmware(struct rtl8xxxu_priv *priv)
+{
+ if (priv->vendor_smic) {
+ return rtl8xxxu_load_firmware(priv, "rtlwifi/rtl8710bufw_SMIC.bin");
+ } else if (priv->vendor_umc) {
+ return rtl8xxxu_load_firmware(priv, "rtlwifi/rtl8710bufw_UMC.bin");
+ } else {
+ dev_err(&priv->udev->dev, "We have no suitable firmware for this chip.\n");
+ return -1;
+ }
+}
+
+static void rtl8710bu_init_phy_bb(struct rtl8xxxu_priv *priv)
+{
+ const struct rtl8xxxu_reg32val *phy_init_table;
+ u32 val32;
+
+ /* Enable BB and RF */
+ val32 = rtl8xxxu_read32(priv, REG_SYS_FUNC_8710B);
+ val32 |= GENMASK(17, 16) | GENMASK(26, 24);
+ rtl8xxxu_write32(priv, REG_SYS_FUNC_8710B, val32);
+
+ if (priv->package_type == PACKAGE_QFN48M_U)
+ phy_init_table = rtl8710bu_qfn48m_u_phy_init_table;
+ else
+ phy_init_table = rtl8710bu_qfn48m_s_phy_init_table;
+
+ rtl8xxxu_init_phy_regs(priv, phy_init_table);
+
+ rtl8xxxu_init_phy_regs(priv, rtl8710b_agc_table);
+}
+
+static int rtl8710bu_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+ const struct rtl8xxxu_rfregval *radioa_init_table;
+
+ if (priv->package_type == PACKAGE_QFN48M_U)
+ radioa_init_table = rtl8710bu_qfn48m_u_radioa_init_table;
+ else
+ radioa_init_table = rtl8710bu_qfn48m_s_radioa_init_table;
+
+ return rtl8xxxu_init_phy_rf(priv, radioa_init_table, RF_A);
+}
+
+static int rtl8710bu_iqk_path_a(struct rtl8xxxu_priv *priv, u32 *lok_result)
+{
+ u32 reg_eac, reg_e94, reg_e9c, val32, path_sel_bb;
+ int result = 0;
+
+ path_sel_bb = rtl8xxxu_read32(priv, REG_S0S1_PATH_SWITCH);
+
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, 0x99000000);
+
+ /*
+ * Leave IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /*
+ * Enable path A PA in TX IQK mode
+ */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_WE_LUT);
+ val32 |= 0x80000;
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, val32);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x20000);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0x07ff7);
+
+ /* PA,PAD gain adjust */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 |= BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56);
+ u32p_replace_bits(&val32, 0x1ed, 0x00fff);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56, val32);
+
+ /* enter IQK mode */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* path-A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x18008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+
+ rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x821403ff);
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160c06);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x02002911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, path_sel_bb);
+
+ /*
+ * Leave IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 &= ~BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+
+ /* save LOK result */
+ *lok_result = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_TXM_IDAC);
+
+ /* Check failed */
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+ reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
+
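+	/*
+	 * TX IQK is treated as successful only when BIT(28) (apparently a
+	 * failure flag) is clear and the results differ from the values
+	 * that indicate an invalid calibration.
+	 */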
+ if (!(reg_eac & BIT(28)) &&
+ ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+ ((reg_e9c & 0x03ff0000) != 0x00420000))
+ result |= 0x01;
+
+ return result;
+}
+
+static int rtl8710bu_rx_iqk_path_a(struct rtl8xxxu_priv *priv, u32 lok_result)
+{
+ u32 reg_ea4, reg_eac, reg_e94, reg_e9c, val32, path_sel_bb, tmp;
+ int result = 0;
+
+ path_sel_bb = rtl8xxxu_read32(priv, REG_S0S1_PATH_SWITCH);
+
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, 0x99000000);
+
+ /*
+ * Leave IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* modify RXIQK mode table */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_WE_LUT);
+ val32 |= 0x80000;
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, val32);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf1173);
+
+ /* PA,PAD gain adjust */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 |= BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56);
+ u32p_replace_bits(&val32, 0xf, 0x003e0);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56, val32);
+
+ /*
+ * Enter IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /* path-A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x18008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+
+ rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x8216129f);
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160c00);
+
+ /*
+ * Tx IQK setting
+ */
+ rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ /* Check failed */
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+ reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
+
+ if (!(reg_eac & BIT(28)) &&
+ ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+ ((reg_e9c & 0x03ff0000) != 0x00420000)) {
+ result |= 0x01;
+ } else { /* If TX not OK, ignore RX */
+
+ /* reload RF path */
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, path_sel_bb);
+
+ /*
+ * Leave IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 &= ~BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+
+ return result;
+ }
+
+ val32 = 0x80007c00 | (reg_e94 & 0x3ff0000) | ((reg_e9c & 0x3ff0000) >> 16);
+ rtl8xxxu_write32(priv, REG_TX_IQK, val32);
+
+ /*
+ * Modify RX IQK mode table
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_WE_LUT);
+ val32 |= 0x80000;
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, val32);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf7ff2);
+
+ /*
+ * PA, PAD setting
+ */
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 |= BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56);
+ u32p_replace_bits(&val32, 0x2a, 0x00fff);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56, val32);
+
+ /*
+ * Enter IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ /*
+ * RX IQK setting
+ */
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+ /* path-A IQK setting */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x38008c1c);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x18008c1c);
+
+ rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x2816169f);
+
+ /* LO calibration setting */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
+
+ /* One shot, path A LOK & IQK */
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+ rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+ mdelay(10);
+
+ /* reload RF path */
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, path_sel_bb);
+
+ /*
+ * Leave IQK mode
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF);
+ val32 &= ~BIT(11);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, val32);
+
+ /* reload LOK value */
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXM_IDAC, lok_result);
+
+ /* Check failed */
+ reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ reg_ea4 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_A_2);
+
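+	/* Take the magnitude of the 10-bit two's-complement value in 0xeac */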
+ tmp = (reg_eac & 0x03ff0000) >> 16;
+ if ((tmp & 0x200) > 0)
+ tmp = 0x400 - tmp;
+
+ if (!(reg_eac & BIT(27)) &&
+ ((reg_ea4 & 0x03ff0000) != 0x01320000) &&
+ ((reg_eac & 0x03ff0000) != 0x00360000) &&
+ (((reg_ea4 & 0x03ff0000) >> 16) < 0x11a) &&
+ (((reg_ea4 & 0x03ff0000) >> 16) > 0xe6) &&
+ (tmp < 0x1a))
+ result |= 0x02;
+
+ return result;
+}
+
+static void rtl8710bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
+ int result[][8], int t)
+{
+ struct device *dev = &priv->udev->dev;
+ u32 i, val32, rx_initial_gain, lok_result;
+ u32 path_sel_bb, path_sel_rf;
+ int path_a_ok;
+ int retry = 2;
+ static const u32 adda_regs[RTL8XXXU_ADDA_REGS] = {
+ REG_FPGA0_XCD_SWITCH_CTRL, REG_BLUETOOTH,
+ REG_RX_WAIT_CCA, REG_TX_CCK_RFON,
+ REG_TX_CCK_BBON, REG_TX_OFDM_RFON,
+ REG_TX_OFDM_BBON, REG_TX_TO_RX,
+ REG_TX_TO_TX, REG_RX_CCK,
+ REG_RX_OFDM, REG_RX_WAIT_RIFS,
+ REG_RX_TO_RX, REG_STANDBY,
+ REG_SLEEP, REG_PMPD_ANAEN
+ };
+ static const u32 iqk_mac_regs[RTL8XXXU_MAC_REGS] = {
+ REG_TXPAUSE, REG_BEACON_CTRL,
+ REG_BEACON_CTRL_1, REG_GPIO_MUXCFG
+ };
+ static const u32 iqk_bb_regs[RTL8XXXU_BB_REGS] = {
+ REG_OFDM0_TRX_PATH_ENABLE, REG_OFDM0_TR_MUX_PAR,
+ REG_FPGA0_XCD_RF_SW_CTRL, REG_CONFIG_ANT_A, REG_CONFIG_ANT_B,
+ REG_FPGA0_XAB_RF_SW_CTRL, REG_FPGA0_XA_RF_INT_OE,
+ REG_FPGA0_XB_RF_INT_OE, REG_CCK0_AFE_SETTING
+ };
+
+ /*
+ * Note: IQ calibration must be performed after loading
+	 * PHY_REG.txt, radio_a.txt and radio_b.txt
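placeholder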
+ */
+
+ rx_initial_gain = rtl8xxxu_read32(priv, REG_OFDM0_XA_AGC_CORE1);
+
+ if (t == 0) {
+ /* Save ADDA parameters, turn Path A ADDA on */
+ rtl8xxxu_save_regs(priv, adda_regs, priv->adda_backup,
+ RTL8XXXU_ADDA_REGS);
+ rtl8xxxu_save_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+ rtl8xxxu_save_regs(priv, iqk_bb_regs,
+ priv->bb_backup, RTL8XXXU_BB_REGS);
+ }
+
+ rtl8xxxu_path_adda_on(priv, adda_regs, true);
+
+ if (t == 0) {
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
+ priv->pi_enabled = u32_get_bits(val32, FPGA0_HSSI_PARM1_PI);
+ }
+
+ if (!priv->pi_enabled) {
+ /* Switch BB to PI mode to do IQ Calibration */
+ rtl8xxxu_write32(priv, REG_FPGA0_XA_HSSI_PARM1, 0x01000100);
+ rtl8xxxu_write32(priv, REG_FPGA0_XB_HSSI_PARM1, 0x01000100);
+ }
+
+ /* MAC settings */
+ val32 = rtl8xxxu_read32(priv, REG_TX_PTCL_CTRL);
+ val32 |= 0x00ff0000;
+ rtl8xxxu_write32(priv, REG_TX_PTCL_CTRL, val32);
+
+ /* save RF path */
+ path_sel_bb = rtl8xxxu_read32(priv, REG_S0S1_PATH_SWITCH);
+ path_sel_rf = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_S0S1);
+
+ /* BB setting */
+ val32 = rtl8xxxu_read32(priv, REG_CCK0_AFE_SETTING);
+ val32 |= 0x0f000000;
+ rtl8xxxu_write32(priv, REG_CCK0_AFE_SETTING, val32);
+ rtl8xxxu_write32(priv, REG_RX_WAIT_CCA, 0x03c00010);
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, 0x03a05601);
+ rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
+ rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x25204000);
+
+ /* IQ calibration setting */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0x808000, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+ rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
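+	/* Run TX IQK up to 'retry' times and keep the first passing result */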
+ for (i = 0; i < retry; i++) {
+ path_a_ok = rtl8710bu_iqk_path_a(priv, &lok_result);
+
+ if (path_a_ok == 0x01) {
+ val32 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+ result[t][0] = (val32 >> 16) & 0x3ff;
+
+ val32 = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
+ result[t][1] = (val32 >> 16) & 0x3ff;
+ break;
+ } else {
+ result[t][0] = 0x100;
+ result[t][1] = 0x0;
+ }
+ }
+
+ for (i = 0; i < retry; i++) {
+ path_a_ok = rtl8710bu_rx_iqk_path_a(priv, lok_result);
+
+ if (path_a_ok == 0x03) {
+ val32 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_A_2);
+ result[t][2] = (val32 >> 16) & 0x3ff;
+
+ val32 = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+ result[t][3] = (val32 >> 16) & 0x3ff;
+ break;
+ } else {
+ result[t][2] = 0x100;
+ result[t][3] = 0x0;
+ }
+ }
+
+ if (!path_a_ok)
+ dev_warn(dev, "%s: Path A IQK failed!\n", __func__);
+
+ /* Back to BB mode, load original value */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+ u32p_replace_bits(&val32, 0, 0xffffff00);
+ rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+ if (t == 0)
+ return;
+
+ /* Reload ADDA power saving parameters */
+ rtl8xxxu_restore_regs(priv, adda_regs, priv->adda_backup, RTL8XXXU_ADDA_REGS);
+
+ /* Reload MAC parameters */
+ rtl8xxxu_restore_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+
+ /* Reload BB parameters */
+ rtl8xxxu_restore_regs(priv, iqk_bb_regs, priv->bb_backup, RTL8XXXU_BB_REGS);
+
+ /* Reload RF path */
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, path_sel_bb);
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_S0S1, path_sel_rf);
+
+ /* Restore RX initial gain */
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_XA_AGC_CORE1);
+ u32p_replace_bits(&val32, 0x50, 0x000000ff);
+ rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32);
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_XA_AGC_CORE1);
+ u32p_replace_bits(&val32, rx_initial_gain & 0xff, 0x000000ff);
+ rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32);
+
+ /* Load 0xe30 IQC default value */
+ rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x01008c00);
+ rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x01008c00);
+}
+
+static void rtl8710bu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ int result[4][8]; /* last is final result */
+ int i, candidate;
+ bool path_a_ok;
+ s32 reg_e94, reg_e9c, reg_ea4, reg_eac;
+ s32 reg_tmp = 0;
+ bool simu;
+ u32 path_sel_bb;
+
+ /* Save RF path */
+ path_sel_bb = rtl8xxxu_read32(priv, REG_S0S1_PATH_SWITCH);
+
+ memset(result, 0, sizeof(result));
+ candidate = -1;
+
+ path_a_ok = false;
+
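+	/*
+	 * Calibrate up to three times and pick the first pair of runs whose
+	 * results agree; otherwise fall back to the partial results the
+	 * similarity check may have collected in slot 3.
+	 */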
+ for (i = 0; i < 3; i++) {
+ rtl8710bu_phy_iqcalibrate(priv, result, i);
+
+ if (i == 1) {
+ simu = rtl8xxxu_gen2_simularity_compare(priv, result, 0, 1);
+ if (simu) {
+ candidate = 0;
+ break;
+ }
+ }
+
+ if (i == 2) {
+ simu = rtl8xxxu_gen2_simularity_compare(priv, result, 0, 2);
+ if (simu) {
+ candidate = 0;
+ break;
+ }
+
+ simu = rtl8xxxu_gen2_simularity_compare(priv, result, 1, 2);
+ if (simu) {
+ candidate = 1;
+ } else {
+ for (i = 0; i < 8; i++)
+ reg_tmp += result[3][i];
+
+ if (reg_tmp)
+ candidate = 3;
+ else
+ candidate = -1;
+ }
+ }
+ }
+
+ if (candidate >= 0) {
+ reg_e94 = result[candidate][0];
+ reg_e9c = result[candidate][1];
+ reg_ea4 = result[candidate][2];
+ reg_eac = result[candidate][3];
+
+ dev_dbg(dev, "%s: candidate is %x\n", __func__, candidate);
+ dev_dbg(dev, "%s: e94=%x e9c=%x ea4=%x eac=%x\n",
+ __func__, reg_e94, reg_e9c, reg_ea4, reg_eac);
+
+ path_a_ok = true;
+
+ if (reg_e94)
+ rtl8xxxu_fill_iqk_matrix_a(priv, path_a_ok, result,
+ candidate, (reg_ea4 == 0));
+ }
+
+ rtl8xxxu_save_regs(priv, rtl8xxxu_iqk_phy_iq_bb_reg,
+ priv->bb_recovery_backup, RTL8XXXU_BB_REGS);
+
+ rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, path_sel_bb);
+}
+
+static int rtl8710b_emu_to_active(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+ int count, ret = 0;
+
+ /* AFE power mode selection: 1: LDO mode, 0: Power-cut mode */
+ val8 = rtl8xxxu_read8(priv, 0x5d);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, 0x5d, val8);
+
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC_8710B);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, REG_SYS_FUNC_8710B, val8);
+
+ rtl8xxxu_write8(priv, 0x56, 0x0e);
+
+ val8 = rtl8xxxu_read8(priv, 0x20);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, 0x20, val8);
+
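+	/* Wait for the hardware to clear the power-on request bit */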
+ for (count = RTL8XXXU_MAX_REG_POLL; count; count--) {
+ val8 = rtl8xxxu_read8(priv, 0x20);
+ if (!(val8 & BIT(0)))
+ break;
+
+ udelay(10);
+ }
+
+ if (!count)
+ ret = -EBUSY;
+
+ return ret;
+}
+
+static int rtl8710bu_active_to_emu(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+ u32 val32;
+ int count, ret = 0;
+
+ /* Turn off RF */
+ val32 = rtl8xxxu_read32(priv, REG_SYS_FUNC_8710B);
+ val32 &= ~GENMASK(26, 24);
+ rtl8xxxu_write32(priv, REG_SYS_FUNC_8710B, val32);
+
+ /* BB reset */
+ val32 = rtl8xxxu_read32(priv, REG_SYS_FUNC_8710B);
+ val32 &= ~GENMASK(17, 16);
+ rtl8xxxu_write32(priv, REG_SYS_FUNC_8710B, val32);
+
+ /* Turn off MAC by HW state machine */
+ val8 = rtl8xxxu_read8(priv, 0x20);
+ val8 |= BIT(1);
+ rtl8xxxu_write8(priv, 0x20, val8);
+
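+	/* Wait for the hardware to clear the MAC-off request bit */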
+ for (count = RTL8XXXU_MAX_REG_POLL; count; count--) {
+ val8 = rtl8xxxu_read8(priv, 0x20);
+ if ((val8 & BIT(1)) == 0) {
+ ret = 0;
+ break;
+ }
+ udelay(10);
+ }
+
+ if (!count)
+ ret = -EBUSY;
+
+ return ret;
+}
+
+static int rtl8710bu_active_to_lps(struct rtl8xxxu_priv *priv)
+{
+ struct device *dev = &priv->udev->dev;
+ u8 val8;
+ u16 val16;
+ u32 val32;
+ int retry, retval;
+
+ /* Tx Pause */
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0xff);
+
+ retry = 100;
+ retval = -EBUSY;
+ /*
+	 * Poll the 32-bit REG_SCH_TX_CMD until it reads 0x00000000 to ensure no TX is pending.
+ */
+ do {
+ val32 = rtl8xxxu_read32(priv, REG_SCH_TX_CMD);
+ if (!val32) {
+ retval = 0;
+ break;
+ }
+ udelay(10);
+ } while (retry--);
+
+ if (!retry) {
+ dev_warn(dev, "Failed to flush TX queue\n");
+ retval = -EBUSY;
+ return retval;
+ }
+
+ /* Disable CCK and OFDM, clock gated */
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC);
+ val8 &= ~SYS_FUNC_BBRSTB;
+ rtl8xxxu_write8(priv, REG_SYS_FUNC, val8);
+
+ udelay(2);
+
+ /* Whole BB is reset */
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC);
+ val8 &= ~SYS_FUNC_BB_GLB_RSTN;
+ rtl8xxxu_write8(priv, REG_SYS_FUNC, val8);
+
+ /* Reset MAC TRX */
+ val16 = rtl8xxxu_read16(priv, REG_CR);
+ val16 &= 0xff00;
+ val16 |= CR_HCI_RXDMA_ENABLE | CR_HCI_TXDMA_ENABLE;
+ val16 &= ~CR_SECURITY_ENABLE;
+ rtl8xxxu_write16(priv, REG_CR, val16);
+
+ /* Respond TxOK to scheduler */
+ val8 = rtl8xxxu_read8(priv, REG_DUAL_TSF_RST);
+ val8 |= DUAL_TSF_TX_OK;
+ rtl8xxxu_write8(priv, REG_DUAL_TSF_RST, val8);
+
+ return retval;
+}
+
+static int rtl8710bu_power_on(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+ u16 val16;
+ u8 val8;
+ int ret;
+
+ rtl8xxxu_write8(priv, REG_USB_ACCESS_TIMEOUT, 0x80);
+
+ val8 = rtl8xxxu_read8(priv, REG_SYS_ISO_CTRL);
+ val8 &= ~BIT(5);
+ rtl8xxxu_write8(priv, REG_SYS_ISO_CTRL, val8);
+
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC_8710B);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, REG_SYS_FUNC_8710B, val8);
+
+ val8 = rtl8xxxu_read8(priv, 0x20);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, 0x20, val8);
+
+ rtl8xxxu_write8(priv, REG_AFE_CTRL_8710B, 0);
+
+ val8 = rtl8xxxu_read8(priv, REG_WL_STATUS_8710B);
+ val8 |= BIT(1);
+ rtl8xxxu_write8(priv, REG_WL_STATUS_8710B, val8);
+
+ ret = rtl8710b_emu_to_active(priv);
+ if (ret)
+ return ret;
+
+ rtl8xxxu_write16(priv, REG_CR, 0);
+
+ val16 = rtl8xxxu_read16(priv, REG_CR);
+
+ val16 |= CR_HCI_TXDMA_ENABLE | CR_HCI_RXDMA_ENABLE |
+ CR_TXDMA_ENABLE | CR_RXDMA_ENABLE |
+ CR_PROTOCOL_ENABLE | CR_SCHEDULE_ENABLE |
+ CR_SECURITY_ENABLE | CR_CALTIMER_ENABLE;
+ rtl8xxxu_write16(priv, REG_CR, val16);
+
+ /* Enable hardware sequence number. */
+ val8 = rtl8xxxu_read8(priv, REG_HWSEQ_CTRL);
+ val8 |= 0x7f;
+ rtl8xxxu_write8(priv, REG_HWSEQ_CTRL, val8);
+
+ udelay(2);
+
+ /*
+ * Technically the rest was in the rtl8710bu_hal_init function,
+ * not the power_on function, but it's fine because we only
+ * call power_on from init_device.
+ */
+
+ val8 = rtl8xxxu_read8(priv, 0xfef9);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, 0xfef9, val8);
+
+	/* Clear 0x40000138[5] to prevent CM4 suspend */
+ val32 = rtl8710b_read_syson_reg(priv, 0x138);
+ val32 &= ~BIT(5);
+ rtl8710b_write_syson_reg(priv, 0x138, val32);
+
+ return ret;
+}
+
+static void rtl8710bu_power_off(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+ u8 val8;
+
+ rtl8xxxu_flush_fifo(priv);
+
+ rtl8xxxu_write32(priv, REG_HISR0_8710B, 0xffffffff);
+ rtl8xxxu_write32(priv, REG_HIMR0_8710B, 0x0);
+
+	/* Set 0x40000138[5] to allow CM4 suspend */
+ val32 = rtl8710b_read_syson_reg(priv, 0x138);
+ val32 |= BIT(5);
+ rtl8710b_write_syson_reg(priv, 0x138, val32);
+
+ /* Stop rx */
+ rtl8xxxu_write8(priv, REG_CR, 0x00);
+
+ rtl8710bu_active_to_lps(priv);
+
+ /* Reset MCU ? */
+ val8 = rtl8xxxu_read8(priv, REG_8051FW_CTRL_V1_8710B + 3);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, REG_8051FW_CTRL_V1_8710B + 3, val8);
+
+ /* Reset MCU ready status */
+ rtl8xxxu_write8(priv, REG_8051FW_CTRL_V1_8710B, 0x00);
+
+ rtl8710bu_active_to_emu(priv);
+}
+
+static void rtl8710b_reset_8051(struct rtl8xxxu_priv *priv)
+{
+ u8 val8;
+
+ val8 = rtl8xxxu_read8(priv, REG_8051FW_CTRL_V1_8710B + 3);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, REG_8051FW_CTRL_V1_8710B + 3, val8);
+
+ udelay(50);
+
+ val8 = rtl8xxxu_read8(priv, REG_8051FW_CTRL_V1_8710B + 3);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, REG_8051FW_CTRL_V1_8710B + 3, val8);
+}
+
+static void rtl8710b_enable_rf(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+
+ rtl8xxxu_write8(priv, REG_RF_CTRL, RF_ENABLE | RF_RSTB | RF_SDMRSTB);
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
+ val32 &= ~(OFDM_RF_PATH_RX_MASK | OFDM_RF_PATH_TX_MASK);
+ val32 |= OFDM_RF_PATH_RX_A | OFDM_RF_PATH_TX_A;
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, val32);
+
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
+}
+
+static void rtl8710b_disable_rf(struct rtl8xxxu_priv *priv)
+{
+ u32 val32;
+
+ val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
+ val32 &= ~OFDM_RF_PATH_TX_MASK;
+ rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, val32);
+
+ /* Power down RF module */
+ rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_AC, 0);
+}
+
+static void rtl8710b_usb_quirks(struct rtl8xxxu_priv *priv)
+{
+ u16 val16;
+
+ rtl8xxxu_gen2_usb_quirks(priv);
+
+ val16 = rtl8xxxu_read16(priv, REG_CR);
+ val16 |= (CR_MAC_TX_ENABLE | CR_MAC_RX_ENABLE);
+ rtl8xxxu_write16(priv, REG_CR, val16);
+}
+
+#define XTAL1 GENMASK(29, 24)
+#define XTAL0 GENMASK(23, 18)
+
+static void rtl8710b_set_crystal_cap(struct rtl8xxxu_priv *priv, u8 crystal_cap)
+{
+ struct rtl8xxxu_cfo_tracking *cfo = &priv->cfo_tracking;
+ u32 val32;
+
+ if (crystal_cap == cfo->crystal_cap)
+ return;
+
+ val32 = rtl8710b_read_syson_reg(priv, REG_SYS_XTAL_CTRL0_8710B);
+
+ dev_dbg(&priv->udev->dev,
+ "%s: Adjusting crystal cap from 0x%x (actually 0x%x 0x%x) to 0x%x\n",
+ __func__,
+ cfo->crystal_cap,
+ u32_get_bits(val32, XTAL1),
+ u32_get_bits(val32, XTAL0),
+ crystal_cap);
+
+ u32p_replace_bits(&val32, crystal_cap, XTAL1);
+ u32p_replace_bits(&val32, crystal_cap, XTAL0);
+ rtl8710b_write_syson_reg(priv, REG_SYS_XTAL_CTRL0_8710B, val32);
+
+ cfo->crystal_cap = crystal_cap;
+}
+
+static s8 rtl8710b_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
+{
+ struct jaguar2_phy_stats_type0 *phy_stats0 = (struct jaguar2_phy_stats_type0 *)phy_stats;
+ u8 lna_idx = (phy_stats0->lna_h << 3) | phy_stats0->lna_l;
+ u8 vga_idx = phy_stats0->vga;
+ s8 rx_pwr_all = 0x00;
+
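+	/*
+	 * Map the reported LNA/VGA gain indexes to an approximate RX power
+	 * in dBm; the constants presumably come from the vendor driver.
+	 */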
+ switch (lna_idx) {
+ case 7:
+ rx_pwr_all = -52 - (2 * vga_idx);
+ break;
+ case 6:
+ rx_pwr_all = -42 - (2 * vga_idx);
+ break;
+ case 5:
+ rx_pwr_all = -36 - (2 * vga_idx);
+ break;
+ case 3:
+ rx_pwr_all = -12 - (2 * vga_idx);
+ break;
+ case 2:
+ rx_pwr_all = 0 - (2 * vga_idx);
+ break;
+ default:
+ rx_pwr_all = 0;
+ break;
+ }
+
+ return rx_pwr_all;
+}
+
+struct rtl8xxxu_fileops rtl8710bu_fops = {
+ .identify_chip = rtl8710bu_identify_chip,
+ .parse_efuse = rtl8710bu_parse_efuse,
+ .load_firmware = rtl8710bu_load_firmware,
+ .power_on = rtl8710bu_power_on,
+ .power_off = rtl8710bu_power_off,
+ .read_efuse = rtl8710b_read_efuse,
+ .reset_8051 = rtl8710b_reset_8051,
+ .llt_init = rtl8xxxu_auto_llt_table,
+ .init_phy_bb = rtl8710bu_init_phy_bb,
+ .init_phy_rf = rtl8710bu_init_phy_rf,
+ .phy_lc_calibrate = rtl8188f_phy_lc_calibrate,
+ .phy_iq_calibrate = rtl8710bu_phy_iq_calibrate,
+ .config_channel = rtl8710bu_config_channel,
+ .parse_rx_desc = rtl8xxxu_parse_rxdesc24,
+ .parse_phystats = jaguar2_rx_parse_phystats,
+ .init_aggregation = rtl8710bu_init_aggregation,
+ .init_statistics = rtl8710bu_init_statistics,
+ .init_burst = rtl8xxxu_init_burst,
+ .enable_rf = rtl8710b_enable_rf,
+ .disable_rf = rtl8710b_disable_rf,
+ .usb_quirks = rtl8710b_usb_quirks,
+ .set_tx_power = rtl8188f_set_tx_power,
+ .update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
+ .report_connect = rtl8xxxu_gen2_report_connect,
+ .report_rssi = rtl8xxxu_gen2_report_rssi,
+ .fill_txdesc = rtl8xxxu_fill_txdesc_v2,
+ .set_crystal_cap = rtl8710b_set_crystal_cap,
+ .cck_rssi = rtl8710b_cck_rssi,
+ .writeN_block_size = 4,
+ .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
+ .tx_desc_size = sizeof(struct rtl8xxxu_txdesc40),
+ .has_tx_report = 1,
+ .gen2_thermal_meter = 1,
+ .needs_full_init = 1,
+ .adda_1t_init = 0x03c00016,
+ .adda_1t_path_on = 0x03c00016,
+ .trxff_boundary = 0x3f7f,
+ .pbp_rx = PBP_PAGE_SIZE_256,
+ .pbp_tx = PBP_PAGE_SIZE_256,
+ .mactable = rtl8710b_mac_init_table,
+ .total_page_num = TX_TOTAL_PAGE_NUM_8723B,
+ .page_num_hi = TX_PAGE_NUM_HI_PQ_8723B,
+ .page_num_lo = TX_PAGE_NUM_LO_PQ_8723B,
+ .page_num_norm = TX_PAGE_NUM_NORM_PQ_8723B,
+};
cfo->crystal_cap = crystal_cap;
}
-s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+s8 rtl8723a_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
{
+ u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
s8 rx_pwr_all = 0x00;
switch (cck_agc_rpt & 0xc0) {
.load_firmware = rtl8723au_load_firmware,
.power_on = rtl8723au_power_on,
.power_off = rtl8xxxu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8xxxu_reset_8051,
.llt_init = rtl8xxxu_init_llt_table,
.init_phy_bb = rtl8xxxu_gen1_init_phy_bb,
.phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
.config_channel = rtl8xxxu_gen1_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.init_aggregation = rtl8xxxu_gen1_init_aggregation,
.enable_rf = rtl8xxxu_gen1_enable_rf,
.disable_rf = rtl8xxxu_gen1_disable_rf,
rtl8xxxu_write32(priv, REG_OFDM0_FA_RSTC, val32);
}
-static s8 rtl8723b_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+static s8 rtl8723b_cck_rssi(struct rtl8xxxu_priv *priv, struct rtl8723au_phy_stats *phy_stats)
{
+ u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
s8 rx_pwr_all = 0x00;
u8 vga_idx, lna_idx;
.load_firmware = rtl8723bu_load_firmware,
.power_on = rtl8723bu_power_on,
.power_off = rtl8723bu_power_off,
+ .read_efuse = rtl8xxxu_read_efuse,
.reset_8051 = rtl8723bu_reset_8051,
.llt_init = rtl8xxxu_auto_llt_table,
.init_phy_bb = rtl8723bu_init_phy_bb,
.phy_iq_calibrate = rtl8723bu_phy_iq_calibrate,
.config_channel = rtl8xxxu_gen2_config_channel,
.parse_rx_desc = rtl8xxxu_parse_rxdesc24,
+ .parse_phystats = rtl8723au_rx_parse_phystats,
.init_aggregation = rtl8723bu_init_aggregation,
.init_statistics = rtl8723bu_init_statistics,
.init_burst = rtl8xxxu_init_burst,
MODULE_FIRMWARE("rtlwifi/rtl8723bu_nic.bin");
MODULE_FIRMWARE("rtlwifi/rtl8723bu_bt.bin");
MODULE_FIRMWARE("rtlwifi/rtl8188fufw.bin");
+MODULE_FIRMWARE("rtlwifi/rtl8710bufw_SMIC.bin");
+MODULE_FIRMWARE("rtlwifi/rtl8710bufw_UMC.bin");
module_param_named(debug, rtl8xxxu_debug, int, 0600);
MODULE_PARM_DESC(debug, "Set debug mask");
int len;
u8 data;
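+	/* 8710B maps MAC registers 0x00-0xff at 0x8000 (see the register defines) */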
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
len = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
REALTEK_USB_CMD_REQ, REALTEK_USB_READ,
int len;
u16 data;
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
len = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
REALTEK_USB_CMD_REQ, REALTEK_USB_READ,
int len;
u32 data;
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
len = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
REALTEK_USB_CMD_REQ, REALTEK_USB_READ,
struct usb_device *udev = priv->udev;
int ret;
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
priv->usb_buf.val8 = val;
ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
struct usb_device *udev = priv->udev;
int ret;
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
priv->usb_buf.val16 = cpu_to_le16(val);
ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
struct usb_device *udev = priv->udev;
int ret;
+ if (priv->rtl_chip == RTL8710B && addr <= 0xff)
+ addr |= 0x8000;
+
mutex_lock(&priv->usb_buf_mutex);
priv->usb_buf.val32 = cpu_to_le32(val);
ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
static void rtl8xxxu_print_chipinfo(struct rtl8xxxu_priv *priv)
{
struct device *dev = &priv->udev->dev;
- char cut = '?';
-
- /* Currently always true: chip_cut is 4 bits. */
- if (priv->chip_cut <= 15)
- cut = 'A' + priv->chip_cut;
+ char cut = 'A' + priv->chip_cut;
dev_info(dev,
"RTL%s rev %c (%s) romver %d, %iT%iR, TX queues %i, WiFi=%i, BT=%i, GPS=%i, HI PA=%i\n",
return 0;
}
-static int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
+int rtl8xxxu_read_efuse(struct rtl8xxxu_priv *priv)
{
struct device *dev = &priv->udev->dev;
int i, ret = 0;
static int rtl8xxxu_start_firmware(struct rtl8xxxu_priv *priv)
{
struct device *dev = &priv->udev->dev;
+ u16 reg_mcu_fw_dl;
int ret = 0, i;
u32 val32;
+ if (priv->rtl_chip == RTL8710B)
+ reg_mcu_fw_dl = REG_8051FW_CTRL_V1_8710B;
+ else
+ reg_mcu_fw_dl = REG_MCU_FW_DL;
+
/* Poll checksum report */
for (i = 0; i < RTL8XXXU_FIRMWARE_POLL_MAX; i++) {
- val32 = rtl8xxxu_read32(priv, REG_MCU_FW_DL);
+ val32 = rtl8xxxu_read32(priv, reg_mcu_fw_dl);
if (val32 & MCU_FW_DL_CSUM_REPORT)
break;
}
goto exit;
}
- val32 = rtl8xxxu_read32(priv, REG_MCU_FW_DL);
+ val32 = rtl8xxxu_read32(priv, reg_mcu_fw_dl);
val32 |= MCU_FW_DL_READY;
val32 &= ~MCU_WINT_INIT_READY;
- rtl8xxxu_write32(priv, REG_MCU_FW_DL, val32);
+ rtl8xxxu_write32(priv, reg_mcu_fw_dl, val32);
/*
* Reset the 8051 in order for the firmware to start running,
/* Wait for firmware to become ready */
for (i = 0; i < RTL8XXXU_FIRMWARE_POLL_MAX; i++) {
- val32 = rtl8xxxu_read32(priv, REG_MCU_FW_DL);
+ val32 = rtl8xxxu_read32(priv, reg_mcu_fw_dl);
if (val32 & MCU_WINT_INIT_READY)
break;
/*
* Init H2C command
*/
- if (priv->rtl_chip == RTL8723B || priv->rtl_chip == RTL8188F)
+ if (priv->rtl_chip == RTL8723B || priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8710B)
rtl8xxxu_write8(priv, REG_HMTFR, 0x0f);
exit:
return ret;
static int rtl8xxxu_download_firmware(struct rtl8xxxu_priv *priv)
{
int pages, remainder, i, ret;
+ u16 reg_mcu_fw_dl;
u8 val8;
u16 val16;
u32 val32;
u8 *fwptr;
- val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC + 1);
- val8 |= 4;
- rtl8xxxu_write8(priv, REG_SYS_FUNC + 1, val8);
+ if (priv->rtl_chip == RTL8710B) {
+ reg_mcu_fw_dl = REG_8051FW_CTRL_V1_8710B;
+ } else {
+ reg_mcu_fw_dl = REG_MCU_FW_DL;
- /* 8051 enable */
- val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
- val16 |= SYS_FUNC_CPU_ENABLE;
- rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+ val8 = rtl8xxxu_read8(priv, REG_SYS_FUNC + 1);
+ val8 |= 4;
+ rtl8xxxu_write8(priv, REG_SYS_FUNC + 1, val8);
- val8 = rtl8xxxu_read8(priv, REG_MCU_FW_DL);
+ /* 8051 enable */
+ val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+ val16 |= SYS_FUNC_CPU_ENABLE;
+ rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+ }
+
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl);
if (val8 & MCU_FW_RAM_SEL) {
dev_info(&priv->udev->dev,
"Firmware is already running, resetting the MCU.\n");
- rtl8xxxu_write8(priv, REG_MCU_FW_DL, 0x00);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl, 0x00);
priv->fops->reset_8051(priv);
}
/* MCU firmware download enable */
- val8 = rtl8xxxu_read8(priv, REG_MCU_FW_DL);
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl);
val8 |= MCU_FW_DL_ENABLE;
- rtl8xxxu_write8(priv, REG_MCU_FW_DL, val8);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl, val8);
/* 8051 reset */
- val32 = rtl8xxxu_read32(priv, REG_MCU_FW_DL);
+ val32 = rtl8xxxu_read32(priv, reg_mcu_fw_dl);
val32 &= ~BIT(19);
- rtl8xxxu_write32(priv, REG_MCU_FW_DL, val32);
+ rtl8xxxu_write32(priv, reg_mcu_fw_dl, val32);
+
+ if (priv->rtl_chip == RTL8710B) {
+		/* We must set 0x8090[8]=1 before downloading the firmware. */
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl + 1);
+ val8 |= BIT(0);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl + 1, val8);
+ }
/* Reset firmware download checksum */
- val8 = rtl8xxxu_read8(priv, REG_MCU_FW_DL);
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl);
val8 |= MCU_FW_DL_CSUM_REPORT;
- rtl8xxxu_write8(priv, REG_MCU_FW_DL, val8);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl, val8);
pages = priv->fw_size / RTL_FW_PAGE_SIZE;
remainder = priv->fw_size % RTL_FW_PAGE_SIZE;
fwptr = priv->fw_data->data;
for (i = 0; i < pages; i++) {
- val8 = rtl8xxxu_read8(priv, REG_MCU_FW_DL + 2) & 0xF8;
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl + 2) & 0xF8;
val8 |= i;
- rtl8xxxu_write8(priv, REG_MCU_FW_DL + 2, val8);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl + 2, val8);
ret = rtl8xxxu_writeN(priv, REG_FW_START_ADDRESS,
fwptr, RTL_FW_PAGE_SIZE);
}
if (remainder) {
- val8 = rtl8xxxu_read8(priv, REG_MCU_FW_DL + 2) & 0xF8;
+ val8 = rtl8xxxu_read8(priv, reg_mcu_fw_dl + 2) & 0xF8;
val8 |= i;
- rtl8xxxu_write8(priv, REG_MCU_FW_DL + 2, val8);
+ rtl8xxxu_write8(priv, reg_mcu_fw_dl + 2, val8);
ret = rtl8xxxu_writeN(priv, REG_FW_START_ADDRESS,
fwptr, remainder);
if (ret != remainder) {
ret = 0;
fw_abort:
/* MCU firmware download disable */
- val16 = rtl8xxxu_read16(priv, REG_MCU_FW_DL);
+ val16 = rtl8xxxu_read16(priv, reg_mcu_fw_dl);
val16 &= ~MCU_FW_DL_ENABLE;
- rtl8xxxu_write16(priv, REG_MCU_FW_DL, val16);
+ rtl8xxxu_write16(priv, reg_mcu_fw_dl, val16);
return ret;
}
case 0x5300:
case 0x2300:
case 0x88f0:
+ case 0x10b0:
break;
default:
ret = -EINVAL;
rtl8xxxu_write8(priv, REG_HT_SINGLE_AMPDU_8723B, val8);
rtl8xxxu_write16(priv, REG_MAX_AGGR_NUM, 0x0c14);
- if (priv->rtl_chip == RTL8723B)
+ if (priv->rtl_chip == RTL8723B || priv->rtl_chip == RTL8710B)
val8 = 0x5e;
else if (priv->rtl_chip == RTL8188F)
val8 = 0x70; /* 0x5e would make it very slow */
rtl8xxxu_write32(priv, REG_AGGLEN_LMT, 0xffffffff);
rtl8xxxu_write8(priv, REG_RX_PKT_LIMIT, 0x18);
rtl8xxxu_write8(priv, REG_PIFS, 0x00);
- if (priv->rtl_chip == RTL8188F) {
+ if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8710B) {
rtl8xxxu_write8(priv, REG_FWHW_TXQ_CTRL, FWHW_TXQ_CTRL_AMPDU_RETRY);
rtl8xxxu_write32(priv, REG_FAST_EDCA_CTRL, 0x03086666);
}
+ /*
+ * The RTL8710BU vendor driver uses 0x50 here and it works fine,
+ * but in rtl8xxxu 0x50 causes slow upload and random packet loss. Why?
+ */
if (priv->rtl_chip == RTL8723B)
val8 = 0x50;
- else if (priv->rtl_chip == RTL8188F)
+ else if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8710B)
val8 = 0x28; /* 0x50 would make the upload slow */
rtl8xxxu_write8(priv, REG_USTIME_TSF_8723B, val8);
rtl8xxxu_write8(priv, REG_USTIME_EDCA, val8);
/* RFSW Control - clear bit 14 ?? */
if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E &&
- priv->rtl_chip != RTL8188E)
+ priv->rtl_chip != RTL8188E && priv->rtl_chip != RTL8710B)
rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, 0x00000003);
val32 = FPGA0_RF_TRSW | FPGA0_RF_TRSWB | FPGA0_RF_ANTSW |
rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
/* 0x860[6:5]= 00 - why? - this sets antenna B */
- if (priv->rtl_chip != RTL8192E && priv->rtl_chip != RTL8188E)
+ if (priv->rtl_chip != RTL8192E && priv->rtl_chip != RTL8188E &&
+ priv->rtl_chip != RTL8710B)
rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, 0x66f60210);
if (!macpower) {
val8 &= 0xf8;
rtl8xxxu_write8(priv, 0xa3, val8);
}
+
+ if (priv->rtl_chip == RTL8710B)
+ rtl8xxxu_write8(priv, REG_EARLY_MODE_CONTROL_8710B, 0);
}
/*
- * Unit in 8 bytes, not obvious what it is used for
+ * Unit in 8 bytes.
+	 * Get Rx PHY status in order to report RSSI and other information.
*/
rtl8xxxu_write8(priv, REG_RX_DRVINFO_SZ, 4);
val8 = rtl8xxxu_read8(priv, REG_USB_SPECIAL_OPTION);
val8 |= USB_SPEC_INT_BULK_SELECT;
rtl8xxxu_write8(priv, REG_USB_SPECIAL_OPTION, val8);
+ } else if (priv->rtl_chip == RTL8710B) {
+ rtl8xxxu_write32(priv, REG_HIMR0_8710B, 0);
} else {
/*
* Enable all interrupts - not obvious USB needs to do this
RCR_APPEND_PHYSTAT | RCR_APPEND_ICV | RCR_APPEND_MIC;
rtl8xxxu_write32(priv, REG_RCR, val32);
- if (priv->rtl_chip == RTL8188F) {
+ if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8710B) {
/* Accept all data frames */
rtl8xxxu_write16(priv, REG_RXFLTMAP2, 0xffff);
val16 = BEACON_DISABLE_TSF_UPDATE | (BEACON_DISABLE_TSF_UPDATE << 8);
rtl8xxxu_write16(priv, REG_BEACON_CTRL, val16);
rtl8xxxu_write16(priv, REG_TBTT_PROHIBIT, 0x6404);
- if (priv->rtl_chip != RTL8188F)
+ if (priv->rtl_chip != RTL8188F && priv->rtl_chip != RTL8710B)
/* Firmware will control REG_DRVERLYINT when power saving is enable, */
/* so don't set this register on STA mode. */
rtl8xxxu_write8(priv, REG_DRIVER_EARLY_INT, DRIVER_EARLY_INT_TIME);
/*
* Initialize burst parameters
*/
-
if (priv->fops->init_burst)
priv->fops->init_burst(priv);
if (fops->init_aggregation)
fops->init_aggregation(priv);
- if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8188E) {
+ if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8188E ||
+ priv->rtl_chip == RTL8710B) {
rtl8xxxu_write16(priv, REG_PKT_VO_VI_LIFE_TIME, 0x0400); /* unit: 256us. 256ms */
rtl8xxxu_write16(priv, REG_PKT_BE_BK_LIFE_TIME, 0x0400); /* unit: 256us. 256ms */
}
fops->set_tx_power(priv, 1, false);
/* Let the 8051 take control of antenna setting */
- if (priv->rtl_chip != RTL8192E && priv->rtl_chip != RTL8188F) {
+ if (priv->rtl_chip != RTL8192E && priv->rtl_chip != RTL8188F &&
+ priv->rtl_chip != RTL8710B) {
val8 = rtl8xxxu_read8(priv, REG_LEDCFG2);
val8 |= LEDCFG2_DPDT_SELECT;
rtl8xxxu_write8(priv, REG_LEDCFG2, val8);
/* Disable BAR - not sure if this has any effect on USB */
rtl8xxxu_write32(priv, REG_BAR_MODE_CTRL, 0x0201ffff);
- if (priv->rtl_chip != RTL8188F && priv->rtl_chip != RTL8188E)
+ if (priv->rtl_chip != RTL8188F && priv->rtl_chip != RTL8188E && priv->rtl_chip != RTL8710B)
rtl8xxxu_write16(priv, REG_FAST_EDCA_CTRL, 0);
if (fops->init_statistics)
* This should enable thermal meter
*/
if (fops->gen2_thermal_meter) {
- if (priv->rtl_chip == RTL8188F) {
+ if (priv->rtl_chip == RTL8188F || priv->rtl_chip == RTL8710B) {
val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_T_METER_8723B);
val32 |= 0x30000;
rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_T_METER_8723B, val32);
rtl8xxxu_write32(priv, REG_AGC_RPT, val32);
}
+ if (priv->rtl_chip == RTL8710B) {
+ /*
+		 * 0x76D[5:4] are the Port0/Port1 enable bits.
+		 * 8710B only: 2b'00 for the MP driver, 2b'11 for the normal driver.
+ */
+ val8 = rtl8xxxu_read8(priv, REG_PORT_CONTROL_8710B);
+ val8 |= BIT(5) | BIT(4);
+ rtl8xxxu_write8(priv, REG_PORT_CONTROL_8710B, val8);
+
+ /* Set 0x5c[8] and [2:0] = 1, LDO mode */
+ val32 = rtl8xxxu_read32(priv, REG_WL_RF_PSS_8710B);
+ val32 |= 0x107;
+ rtl8xxxu_write32(priv, REG_WL_RF_PSS_8710B, val32);
+ }
+
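+	/*
+	 * Bit 17 of 0xa9c apparently tells whether the chip uses the new CCK
+	 * AGC; the jaguar2 phy stats parsing uses it to decide how to derive
+	 * the CCK RSSI.
+	 */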
+ val32 = rtl8xxxu_read32(priv, 0xa9c);
+ priv->cck_new_agc = u32_get_bits(val32, BIT(17));
+
/* Initialise the center frequency offset tracking */
if (priv->fops->set_crystal_cap) {
val32 = rtl8xxxu_read32(priv, REG_OFDM1_CFO_TRACKING);
rtl8xxxu_calc_tx_desc_csum(tx_desc);
+	/* Avoid a zero checksum, which makes TX hang */
+ if (priv->rtl_chip == RTL8710B)
+ tx_desc->csum = ~tx_desc->csum;
+
usb_fill_bulk_urb(&tx_urb->urb, priv->udev, priv->pipe_out[queue],
skb->data, skb->len, rtl8xxxu_tx_complete, skb);
dev_kfree_skb(skb);
}
-static void rtl8xxxu_rx_parse_phystats(struct rtl8xxxu_priv *priv,
- struct ieee80211_rx_status *rx_status,
- struct rtl8723au_phy_stats *phy_stats,
- u32 rxmcs, struct ieee80211_hdr *hdr,
- bool crc_icv_err)
+void rtl8723au_rx_parse_phystats(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct rtl8723au_phy_stats *phy_stats,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err)
{
if (phy_stats->sgi_en)
rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
/*
* Handle PHY stats for CCK rates
*/
- u8 cck_agc_rpt = phy_stats->cck_agc_rpt_ofdm_cfosho_a;
-
- rx_status->signal = priv->fops->cck_rssi(priv, cck_agc_rpt);
+ rx_status->signal = priv->fops->cck_rssi(priv, phy_stats);
} else {
bool parse_cfo = priv->fops->set_crystal_cap &&
priv->vif &&
}
}
+static void jaguar2_rx_parse_phystats_type0(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct jaguar2_phy_stats_type0 *phy_stats0,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err)
+{
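+	/* pwdb seems to be reported with a 110 dB offset from dBm */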
+ s8 rx_power = phy_stats0->pwdb - 110;
+
+ if (!priv->cck_new_agc)
+ rx_power = priv->fops->cck_rssi(priv, (struct rtl8723au_phy_stats *)phy_stats0);
+
+ rx_status->signal = rx_power;
+}
+
+static void jaguar2_rx_parse_phystats_type1(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct jaguar2_phy_stats_type1 *phy_stats1,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err)
+{
+ bool parse_cfo = priv->fops->set_crystal_cap &&
+ priv->vif &&
+ priv->vif->type == NL80211_IFTYPE_STATION &&
+ priv->vif->cfg.assoc &&
+ !crc_icv_err &&
+ !ieee80211_is_ctl(hdr->frame_control) &&
+ ether_addr_equal(priv->vif->bss_conf.bssid, hdr->addr2);
+ u8 pwdb_max = 0;
+ int rx_path;
+
+ if (parse_cfo) {
+ /* Only path-A and path-B have CFO tail and short CFO */
+ priv->cfo_tracking.cfo_tail[RF_A] = phy_stats1->cfo_tail[RF_A];
+ priv->cfo_tracking.cfo_tail[RF_B] = phy_stats1->cfo_tail[RF_B];
+
+ priv->cfo_tracking.packet_count++;
+ }
+
+ for (rx_path = 0; rx_path < priv->rx_paths; rx_path++)
+ pwdb_max = max(pwdb_max, phy_stats1->pwdb[rx_path]);
+
+ rx_status->signal = pwdb_max - 110;
+}
+
+static void jaguar2_rx_parse_phystats_type2(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct jaguar2_phy_stats_type2 *phy_stats2,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err)
+{
+ u8 pwdb_max = 0;
+ int rx_path;
+
+ for (rx_path = 0; rx_path < priv->rx_paths; rx_path++)
+ pwdb_max = max(pwdb_max, phy_stats2->pwdb[rx_path]);
+
+ rx_status->signal = pwdb_max - 110;
+}
+
+void jaguar2_rx_parse_phystats(struct rtl8xxxu_priv *priv,
+ struct ieee80211_rx_status *rx_status,
+ struct rtl8723au_phy_stats *phy_stats,
+ u32 rxmcs, struct ieee80211_hdr *hdr,
+ bool crc_icv_err)
+{
+ struct jaguar2_phy_stats_type0 *phy_stats0 = (struct jaguar2_phy_stats_type0 *)phy_stats;
+ struct jaguar2_phy_stats_type1 *phy_stats1 = (struct jaguar2_phy_stats_type1 *)phy_stats;
+ struct jaguar2_phy_stats_type2 *phy_stats2 = (struct jaguar2_phy_stats_type2 *)phy_stats;
+
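+	/* The page number reported in the phy stats selects the report layout */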
+ switch (phy_stats0->page_num) {
+ case 0:
+ /* CCK */
+ jaguar2_rx_parse_phystats_type0(priv, rx_status, phy_stats0,
+ rxmcs, hdr, crc_icv_err);
+ break;
+ case 1:
+ /* OFDM */
+ jaguar2_rx_parse_phystats_type1(priv, rx_status, phy_stats1,
+ rxmcs, hdr, crc_icv_err);
+ break;
+ case 2:
+ /* Also OFDM but different (how?) */
+ jaguar2_rx_parse_phystats_type2(priv, rx_status, phy_stats2,
+ rxmcs, hdr, crc_icv_err);
+ break;
+ default:
+ return;
+ }
+}
+
static void rtl8xxxu_free_rx_resources(struct rtl8xxxu_priv *priv)
{
struct rtl8xxxu_rx_urb *rx_urb, *tmp;
skb_trim(skb, pkt_len);
if (rx_desc->phy_stats)
- rtl8xxxu_rx_parse_phystats(
+ priv->fops->parse_phystats(
priv, rx_status, phy_stats,
rx_desc->rxmcs,
(struct ieee80211_hdr *)skb->data,
}
if (rx_desc->phy_stats)
- rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
+ priv->fops->parse_phystats(priv, rx_status, phy_stats,
rx_desc->rxmcs, (struct ieee80211_hdr *)skb->data,
rx_desc->crc32 || rx_desc->icverr);
case 0x818b:
case 0xf179:
case 0x8179:
+ case 0xb711:
untested = 0;
break;
}
break;
case 0x7392:
- if (id->idProduct == 0x7811 || id->idProduct == 0xa611)
+ if (id->idProduct == 0x7811 || id->idProduct == 0xa611 || id->idProduct == 0xb811)
untested = 0;
break;
case 0x050d:
priv->udev = udev;
priv->fops = (struct rtl8xxxu_fileops *)id->driver_info;
mutex_init(&priv->usb_buf_mutex);
+ mutex_init(&priv->syson_indirect_access_mutex);
mutex_init(&priv->h2c_mutex);
INIT_LIST_HEAD(&priv->tx_urb_free_list);
spin_lock_init(&priv->tx_urb_lock);
else
INIT_WORK(&priv->c2hcmd_work, rtl8xxxu_c2hcmd_callback);
- ret = rtl8xxxu_read_efuse(priv);
+ ret = priv->fops->read_efuse(priv);
if (ret) {
dev_err(&udev->dev, "Fatal - failed to read EFuse\n");
goto err_set_intfdata;
kfree(priv->fw_data);
mutex_destroy(&priv->usb_buf_mutex);
+ mutex_destroy(&priv->syson_indirect_access_mutex);
mutex_destroy(&priv->h2c_mutex);
ieee80211_free_hw(hw);
kfree(priv->fw_data);
mutex_destroy(&priv->usb_buf_mutex);
+ mutex_destroy(&priv->syson_indirect_access_mutex);
mutex_destroy(&priv->h2c_mutex);
if (priv->udev->state != USB_STATE_NOTATTACHED) {
/* Rosewill USB-N150 Nano */
{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0xffef, 0xff, 0xff, 0xff),
.driver_info = (unsigned long)&rtl8188eu_fops},
+/* RTL8710BU aka RTL8188GU (not to be confused with RTL8188GTVU) */
+{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0xb711, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8710bu_fops},
+/* TOTOLINK N150UA V5 / N150UA-B */
+{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x2005, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8710bu_fops},
#ifdef CONFIG_RTL8XXXU_UNTESTED
/* Still supported by rtlwifi */
{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8176, 0xff, 0xff, 0xff),
.disable_hub_initiated_lpm = 1,
};
-static int __init rtl8xxxu_module_init(void)
-{
- int res;
-
- res = usb_register(&rtl8xxxu_driver);
- if (res < 0)
- pr_err(DRIVER_NAME ": usb_register() failed (%i)\n", res);
-
- return res;
-}
-
-static void __exit rtl8xxxu_module_exit(void)
-{
- usb_deregister(&rtl8xxxu_driver);
-}
-
-
MODULE_DEVICE_TABLE(usb, dev_table);
-module_init(rtl8xxxu_module_init);
-module_exit(rtl8xxxu_module_exit);
+module_usb_driver(rtl8xxxu_driver);
#define RXDMA_PRO_DMA_BURST_CNT GENMASK(3, 2) /* Set to 0x3. */
#define RXDMA_PRO_DMA_BURST_SIZE GENMASK(5, 4) /* Set to 0x1. */
+#define REG_EARLY_MODE_CONTROL_8710B 0x02bc
+
#define REG_RF_BB_CMD_ADDR 0x02c0
#define REG_RF_BB_CMD_DATA 0x02c4
#define REG_BT_CONTROL_8723BU 0x0764
#define BT_CONTROL_BT_GRANT BIT(12)
+#define REG_PORT_CONTROL_8710B 0x076d
#define REG_WLAN_ACT_CONTROL_8723B 0x076e
#define REG_FPGA0_RF_MODE 0x0800
#define CCK_PD_TYPE1_LV3_TH 0xdd
#define CCK_PD_TYPE1_LV4_TH 0xed
+#define REG_CCK0_TX_FILTER1 0x0a20
+#define REG_CCK0_TX_FILTER2 0x0a24
+#define REG_CCK0_DEBUG_PORT 0x0a28 /* debug port and Tx filter3 */
#define REG_AGC_RPT 0xa80
#define AGC_RPT_CCK BIT(7)
+#define REG_CCK0_TX_FILTER3 0x0aac
#define REG_CONFIG_ANT_A 0x0b68
#define REG_CONFIG_ANT_B 0x0b6c
Unavailable */
#define USB_HIMR_ROK BIT(0) /* Receive DMA OK Interrupt */
+#define REG_USB_ACCESS_TIMEOUT 0xfe4c
+
#define REG_USB_SPECIAL_OPTION 0xfe55
#define USB_SPEC_USB_AGG_ENABLE BIT(3) /* Enable USB aggregation */
#define USB_SPEC_INT_BULK_SELECT BIT(4) /* Use interrupt endpoint to
#define REG_NORMAL_SIE_MAC_ADDR 0xfe70 /* 0xfe70 - 0xfe75 */
#define REG_NORMAL_SIE_STRING 0xfe80 /* 0xfe80 - 0xfedf */
+/*
+ * 8710B register addresses between 0x00 and 0xff must have 0x8000
+ * added to them. We take care of that in the rtl8xxxu_read{8,16,32}
+ * and rtl8xxxu_write{8,16,32} functions.
+ */
+#define REG_SYS_FUNC_8710B 0x0004
+#define REG_AFE_CTRL_8710B 0x0050
+#define REG_WL_RF_PSS_8710B 0x005c
+#define REG_EFUSE_INDIRECT_CTRL_8710B 0x006c
+#define NORMAL_REG_READ_OFFSET 0x83000000
+#define NORMAL_REG_WRITE_OFFSET 0x84000000
+#define EFUSE_READ_OFFSET 0x85000000
+#define EFUSE_WRITE_OFFSET 0x86000000
+#define REG_HIMR0_8710B 0x0080
+#define REG_HISR0_8710B 0x0084
+/*
+ * 8710B uses this instead of REG_MCU_FW_DL, but at least bits
+ * 0-7 have the same meaning.
+ */
+#define REG_8051FW_CTRL_V1_8710B 0x0090
+#define REG_USB_HOST_INDIRECT_DATA_8710B 0x009c
+#define REG_WL_STATUS_8710B 0x00f0
+#define REG_USB_HOST_INDIRECT_ADDR_8710B 0x00f8
+
+/*
+ * 8710B registers which must be accessed through rtl8710b_read_syson_reg
+ * and rtl8710b_write_syson_reg.
+ */
+#define SYSON_REG_BASE_ADDR_8710B 0x40000000
+#define REG_SYS_XTAL_CTRL0_8710B 0x060
+#define REG_SYS_EEPROM_CTRL0_8710B 0x0e0
+#define REG_SYS_SYSTEM_CFG0_8710B 0x1f0
+#define REG_SYS_SYSTEM_CFG1_8710B 0x1f4
+#define REG_SYS_SYSTEM_CFG2_8710B 0x1f8
+
/* RF6052 registers */
#define RF6052_REG_AC 0x00
#define RF6052_REG_IQADJ_G1 0x01
tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
- if (!buffer || copy_from_user(tmp, buffer, tmp_len))
- return count;
+ if (copy_from_user(tmp, buffer, tmp_len))
+ return -EFAULT;
tmp[tmp_len] = '\0';
num = sscanf(tmp, "%x %x %x", &addr, &val, &len);
if (num != 3)
- return count;
+ return -EINVAL;
switch (len) {
case 1:
tmp_len = (count > sizeof(tmp) - 1 ? sizeof(tmp) - 1 : count);
- if (!buffer || copy_from_user(tmp, buffer, tmp_len))
- return count;
+ if (copy_from_user(tmp, buffer, tmp_len))
+ return -EFAULT;
tmp[tmp_len] = '\0';
if (num != 4) {
rtl_dbg(rtlpriv, COMP_ERR, DBG_DMESG,
"Format is <path> <addr> <mask> <data>\n");
- return count;
+ return -EINVAL;
}
rtl_set_rfreg(hw, path, addr, bitmask, data);
for (rf_path = 0; rf_path < 2; rf_path++) {
for (i = 0; i < 3; i++) {
- if (!autoload_fail) {
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TXPOWERCCK + rf_path * 3 + i] != 0xff &&
+ hwinfo[EEPROM_TXPOWERHT40_1S + rf_path * 3 + i] != 0xff) {
rtlefuse->
eeprom_chnlarea_txpwr_cck[rf_path][i] =
hwinfo[EEPROM_TXPOWERCCK + rf_path * 3 + i];
}
for (i = 0; i < 3; i++) {
- if (!autoload_fail)
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TXPOWERHT40_2SDIFF + i] != 0xff)
tempval = hwinfo[EEPROM_TXPOWERHT40_2SDIFF + i];
else
tempval = EEPROM_DEFAULT_HT40_2SDIFF;
}
for (i = 0; i < 3; i++) {
- if (!autoload_fail) {
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TXPWR_GROUP + i] != 0xff &&
+ hwinfo[EEPROM_TXPWR_GROUP + 3 + i] != 0xff) {
rtlefuse->eeprom_pwrlimit_ht40[i] =
hwinfo[EEPROM_TXPWR_GROUP + i];
rtlefuse->eeprom_pwrlimit_ht20[i] =
for (i = 0; i < 14; i++) {
index = rtl92c_get_chnl_group((u8)i);
- if (!autoload_fail)
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TXPOWERHT20DIFF + index] != 0xff)
tempval = hwinfo[EEPROM_TXPOWERHT20DIFF + index];
else
tempval = EEPROM_DEFAULT_HT20_DIFF;
index = rtl92c_get_chnl_group((u8)i);
- if (!autoload_fail)
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TXPOWER_OFDMDIFF + index] != 0xff)
tempval = hwinfo[EEPROM_TXPOWER_OFDMDIFF + index];
else
tempval = EEPROM_DEFAULT_LEGACYHTTXPOWERDIFF;
"RF-B Legacy to HT40 Diff[%d] = 0x%x\n",
i, rtlefuse->txpwr_legacyhtdiff[RF90_PATH_B][i]);
- if (!autoload_fail)
+ if (!autoload_fail && hwinfo[RF_OPTION1] != 0xff)
rtlefuse->eeprom_regulatory = (hwinfo[RF_OPTION1] & 0x7);
else
rtlefuse->eeprom_regulatory = 0;
RTPRINT(rtlpriv, FINIT, INIT_TXPOWER,
"eeprom_regulatory = 0x%x\n", rtlefuse->eeprom_regulatory);
- if (!autoload_fail) {
+ if (!autoload_fail &&
+ hwinfo[EEPROM_TSSI_A] != 0xff &&
+ hwinfo[EEPROM_TSSI_B] != 0xff) {
rtlefuse->eeprom_tssi[RF90_PATH_A] = hwinfo[EEPROM_TSSI_A];
rtlefuse->eeprom_tssi[RF90_PATH_B] = hwinfo[EEPROM_TSSI_B];
} else {
rtlefuse->eeprom_tssi[RF90_PATH_A],
rtlefuse->eeprom_tssi[RF90_PATH_B]);
- if (!autoload_fail)
+ if (!autoload_fail && hwinfo[EEPROM_THERMAL_METER] != 0xff)
tempval = hwinfo[EEPROM_THERMAL_METER];
else
tempval = EEPROM_DEFAULT_THERMALMETER;
struct rtl_priv *rtlpriv = rtl_priv(hw);
u8 bt_msr = rtl_read_byte(rtlpriv, MSR);
enum led_ctl_mode ledaction = LED_CTL_NO_LINK;
- u8 bcnfunc_enable;
bt_msr &= 0xfc;
"Set HW_VAR_MEDIA_STATUS: No such media status(%x)\n",
type);
}
- bcnfunc_enable = rtl_read_byte(rtlpriv, REG_BCN_CTRL);
switch (type) {
case NL80211_IFTYPE_UNSPECIFIED:
bt_msr |= MSR_NOLINK;
ledaction = LED_CTL_LINK;
- bcnfunc_enable &= 0xF7;
rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE,
"Set Network type to NO LINK!\n");
break;
case NL80211_IFTYPE_ADHOC:
bt_msr |= MSR_ADHOC;
- bcnfunc_enable |= 0x08;
rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE,
"Set Network type to Ad Hoc!\n");
break;
case NL80211_IFTYPE_STATION:
bt_msr |= MSR_INFRA;
ledaction = LED_CTL_LINK;
- bcnfunc_enable &= 0xF7;
rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE,
"Set Network type to STA!\n");
break;
case NL80211_IFTYPE_AP:
bt_msr |= MSR_AP;
- bcnfunc_enable |= 0x08;
rtl_dbg(rtlpriv, COMP_INIT, DBG_TRACE,
"Set Network type to AP!\n");
break;
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
- u16 bcntime_cfg = 0;
- u16 bcn_cw = 6, bcn_ifs = 0xf;
u16 atim_window = 2;
/* ATIM Window (in unit of TU). */
* other ad hoc STA */
rtl_write_byte(rtlpriv, BCN_ERR_THRESH, 100);
- /* Beacon Time Configuration */
- if (mac->opmode == NL80211_IFTYPE_ADHOC)
- bcntime_cfg |= (bcn_cw << BCN_TCFG_CW_SHIFT);
-
- /* TODO: bcn_ifs may required to be changed on ASIC */
- bcntime_cfg |= bcn_ifs << BCN_TCFG_IFS;
-
/*for beacon changed */
rtl92s_phy_set_beacon_hwreg(hw, mac->beacon_interval);
}
* beyond this structure like:
* rtl_pci_priv or rtl_usb_priv
*/
- u8 priv[0] __aligned(sizeof(void *));
+ u8 priv[] __aligned(sizeof(void *));
};
#define rtl_priv(hw) (((struct rtl_priv *)(hw)->priv))
case RTW_HCI_TYPE_USB:
intf_mask = RTW_PWR_INTF_USB_MSK;
break;
+ case RTW_HCI_TYPE_SDIO:
+ intf_mask = RTW_PWR_INTF_SDIO_MSK;
+ break;
default:
return -EINVAL;
}
ret = rtw_sub_pwr_seq_parser(rtwdev, intf_mask, cut_mask, cmd);
if (ret)
- return -EBUSY;
+ return ret;
idx++;
} while (1);
const struct rtw_pwr_seq_cmd **pwr_seq;
u8 rpwm;
bool cur_pwr;
+ int ret;
if (rtw_chip_wcpu_11ac(rtwdev)) {
rpwm = rtw_read8(rtwdev, rtwdev->hci.rpwm_addr);
return -EALREADY;
pwr_seq = pwr_on ? chip->pwr_on_seq : chip->pwr_off_seq;
- if (rtw_pwr_seq_parser(rtwdev, pwr_seq))
- return -EINVAL;
+ ret = rtw_pwr_seq_parser(rtwdev, pwr_seq);
+ if (ret)
+ return ret;
if (pwr_on)
set_bit(RTW_FLAG_POWERON, rtwdev->flags);
else
return -EINVAL;
break;
+ case RTW_HCI_TYPE_SDIO:
+ rqpn = &chip->rqpn_table[0];
+ break;
default:
return -EINVAL;
}
else
return -EINVAL;
break;
+ case RTW_HCI_TYPE_SDIO:
+ pg_tbl = &chip->page_table[0];
+ break;
default:
return -EINVAL;
}
writel(val, rtwpci->mmap + addr);
}
-static inline void *rtw_pci_get_tx_desc(struct rtw_pci_tx_ring *tx_ring, u8 idx)
-{
- int offset = tx_ring->r.desc_size * idx;
-
- return tx_ring->r.head + offset;
-}
-
static void rtw_pci_free_tx_ring_skbs(struct rtw_dev *rtwdev,
struct rtw_pci_tx_ring *tx_ring)
{
static void rtw_pci_declaim(struct rtw_dev *rtwdev, struct pci_dev *pdev)
{
- pci_clear_master(pdev);
pci_disable_device(pdev);
}
ether_addr_copy(efuse->addr, map->u.mac_addr);
}
+static void rtw8821cs_efuse_parsing(struct rtw_efuse *efuse,
+ struct rtw8821c_efuse *map)
+{
+ ether_addr_copy(efuse->addr, map->s.mac_addr);
+}
+
enum rtw8821ce_rf_set {
SWITCH_TO_BTG,
SWITCH_TO_WLG,
case RTW_HCI_TYPE_USB:
rtw8821cu_efuse_parsing(efuse, map);
break;
+ case RTW_HCI_TYPE_SDIO:
+ rtw8821cs_efuse_parsing(efuse, map);
+ break;
default:
/* unsupported now */
return -ENOTSUPP;
u8 res7;
};
+struct rtw8821cs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+ u8 mac_addr[ETH_ALEN]; /* 0x11a */
+} __packed;
+
struct rtw8821c_efuse {
__le16 rtl_id;
u8 res0[0x0e];
union {
struct rtw8821ce_efuse e;
struct rtw8821cu_efuse u;
+ struct rtw8821cs_efuse s;
};
};
ether_addr_copy(efuse->addr, map->u.mac_addr);
}
+static void rtw8822bs_efuse_parsing(struct rtw_efuse *efuse,
+ struct rtw8822b_efuse *map)
+{
+ ether_addr_copy(efuse->addr, map->s.mac_addr);
+}
+
static int rtw8822b_read_efuse(struct rtw_dev *rtwdev, u8 *log_map)
{
struct rtw_efuse *efuse = &rtwdev->efuse;
case RTW_HCI_TYPE_USB:
rtw8822bu_efuse_parsing(efuse, map);
break;
+ case RTW_HCI_TYPE_SDIO:
+ rtw8822bs_efuse_parsing(efuse, map);
+ break;
default:
/* unsupported now */
return -ENOTSUPP;
u8 res7;
};
+struct rtw8822bs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+ u8 mac_addr[ETH_ALEN]; /* 0x11a */
+} __packed;
+
struct rtw8822b_efuse {
__le16 rtl_id;
u8 res0[0x0e];
u8 country_code[2];
u8 res[3];
union {
- struct rtw8822bu_efuse u;
struct rtw8822be_efuse e;
+ struct rtw8822bu_efuse u;
+ struct rtw8822bs_efuse s;
};
};
ether_addr_copy(efuse->addr, map->u.mac_addr);
}
+static void rtw8822cs_efuse_parsing(struct rtw_efuse *efuse,
+ struct rtw8822c_efuse *map)
+{
+ ether_addr_copy(efuse->addr, map->s.mac_addr);
+}
+
static int rtw8822c_read_efuse(struct rtw_dev *rtwdev, u8 *log_map)
{
struct rtw_efuse *efuse = &rtwdev->efuse;
case RTW_HCI_TYPE_USB:
rtw8822cu_efuse_parsing(efuse, map);
break;
+ case RTW_HCI_TYPE_SDIO:
+ rtw8822cs_efuse_parsing(efuse, map);
+ break;
default:
/* unsupported now */
return -ENOTSUPP;
u8 res2[0x3d];
};
+struct rtw8822cs_efuse {
+ u8 res0[0x4a]; /* 0x120 */
+ u8 mac_addr[ETH_ALEN]; /* 0x16a */
+} __packed;
+
struct rtw8822ce_efuse {
u8 mac_addr[ETH_ALEN]; /* 0x120 */
u8 vender_id[2];
u8 res9;
u8 res10[0x42];
union {
- struct rtw8822cu_efuse u;
struct rtw8822ce_efuse e;
+ struct rtw8822cu_efuse u;
+ struct rtw8822cs_efuse s;
};
};
ret = rtw_usb_alloc_rx_bufs(rtwusb);
if (ret)
- return ret;
+ goto err_release_hw;
ret = rtw_core_init(rtwdev);
if (ret)
#include "ps.h"
#include "reg.h"
-#define RTW89_COEX_VERSION 0x07000013
+#define RTW89_COEX_VERSION 0x07000113
#define FCXDEF_STEP 50 /* MUST <= FCXMAX_STEP and match with wl fw*/
enum btc_fbtc_tdma_template {
.fwlrole = 1, .frptmap = 2, .fcxctrl = 1,
.info_buf = 1280, .max_role_num = 5,
},
+ {RTL8852B, RTW89_FW_VER_CODE(0, 29, 29, 0),
+ .fcxbtcrpt = 105, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 5,
+ .fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 2, .fcxgpiodbg = 1,
+ .fcxbtver = 1, .fcxbtscan = 2, .fcxbtafh = 2, .fcxbtdevinfo = 1,
+ .fwlrole = 1, .frptmap = 3, .fcxctrl = 1,
+ .info_buf = 1800, .max_role_num = 6,
+ },
{RTL8852B, RTW89_FW_VER_CODE(0, 29, 14, 0),
.fcxbtcrpt = 5, .fcxtdma = 3, .fcxslots = 1, .fcxcysta = 4,
.fcxstep = 3, .fcxnullsta = 2, .fcxmreg = 1, .fcxgpiodbg = 1,
u8 buf[];
} __packed;
-#define BTF_SET_MON_REG_VER 1
struct rtw89_btc_btf_set_mon_reg {
u8 fver;
u8 reg_num;
#define BTC_RPT_HDR_SIZE 3
#define BTC_CHK_WLSLOT_DRIFT_MAX 15
+#define BTC_CHK_BTSLOT_DRIFT_MAX 15
#define BTC_CHK_HANG_MAX 3
static void _chk_btc_err(struct rtw89_dev *rtwdev, u8 type, u32 cnt)
__func__, type, cnt);
switch (type) {
- case BTC_DCNT_RPT_FREEZE:
+ case BTC_DCNT_RPT_HANG:
if (dm->cnt_dm[BTC_DCNT_RPT] == cnt && btc->fwinfo.rpt_en_map)
- dm->cnt_dm[BTC_DCNT_RPT_FREEZE]++;
+ dm->cnt_dm[BTC_DCNT_RPT_HANG]++;
else
- dm->cnt_dm[BTC_DCNT_RPT_FREEZE] = 0;
+ dm->cnt_dm[BTC_DCNT_RPT_HANG] = 0;
- if (dm->cnt_dm[BTC_DCNT_RPT_FREEZE] >= BTC_CHK_HANG_MAX)
+ if (dm->cnt_dm[BTC_DCNT_RPT_HANG] >= BTC_CHK_HANG_MAX)
dm->error.map.wl_fw_hang = true;
else
dm->error.map.wl_fw_hang = false;
dm->cnt_dm[BTC_DCNT_RPT] = cnt;
break;
- case BTC_DCNT_CYCLE_FREEZE:
+ case BTC_DCNT_CYCLE_HANG:
if (dm->cnt_dm[BTC_DCNT_CYCLE] == cnt &&
(dm->tdma_now.type != CXTDMA_OFF ||
dm->tdma_now.ext_ctrl == CXECTL_EXT))
- dm->cnt_dm[BTC_DCNT_CYCLE_FREEZE]++;
+ dm->cnt_dm[BTC_DCNT_CYCLE_HANG]++;
else
- dm->cnt_dm[BTC_DCNT_CYCLE_FREEZE] = 0;
+ dm->cnt_dm[BTC_DCNT_CYCLE_HANG] = 0;
- if (dm->cnt_dm[BTC_DCNT_CYCLE_FREEZE] >= BTC_CHK_HANG_MAX)
+ if (dm->cnt_dm[BTC_DCNT_CYCLE_HANG] >= BTC_CHK_HANG_MAX)
dm->error.map.cycle_hang = true;
else
dm->error.map.cycle_hang = false;
dm->cnt_dm[BTC_DCNT_CYCLE] = cnt;
break;
- case BTC_DCNT_W1_FREEZE:
+ case BTC_DCNT_W1_HANG:
if (dm->cnt_dm[BTC_DCNT_W1] == cnt &&
dm->tdma_now.type != CXTDMA_OFF)
- dm->cnt_dm[BTC_DCNT_W1_FREEZE]++;
+ dm->cnt_dm[BTC_DCNT_W1_HANG]++;
else
- dm->cnt_dm[BTC_DCNT_W1_FREEZE] = 0;
+ dm->cnt_dm[BTC_DCNT_W1_HANG] = 0;
- if (dm->cnt_dm[BTC_DCNT_W1_FREEZE] >= BTC_CHK_HANG_MAX)
+ if (dm->cnt_dm[BTC_DCNT_W1_HANG] >= BTC_CHK_HANG_MAX)
dm->error.map.w1_hang = true;
else
dm->error.map.w1_hang = false;
dm->cnt_dm[BTC_DCNT_W1] = cnt;
break;
- case BTC_DCNT_B1_FREEZE:
+ case BTC_DCNT_B1_HANG:
if (dm->cnt_dm[BTC_DCNT_B1] == cnt &&
dm->tdma_now.type != CXTDMA_OFF)
- dm->cnt_dm[BTC_DCNT_B1_FREEZE]++;
+ dm->cnt_dm[BTC_DCNT_B1_HANG]++;
else
- dm->cnt_dm[BTC_DCNT_B1_FREEZE] = 0;
+ dm->cnt_dm[BTC_DCNT_B1_HANG] = 0;
- if (dm->cnt_dm[BTC_DCNT_B1_FREEZE] >= BTC_CHK_HANG_MAX)
+ if (dm->cnt_dm[BTC_DCNT_B1_HANG] >= BTC_CHK_HANG_MAX)
dm->error.map.b1_hang = true;
else
dm->error.map.b1_hang = false;
dm->cnt_dm[BTC_DCNT_B1] = cnt;
break;
+ case BTC_DCNT_E2G_HANG:
+ if (dm->cnt_dm[BTC_DCNT_E2G] == cnt &&
+ dm->tdma_now.ext_ctrl == CXECTL_EXT)
+ dm->cnt_dm[BTC_DCNT_E2G_HANG]++;
+ else
+ dm->cnt_dm[BTC_DCNT_E2G_HANG] = 0;
+
+ if (dm->cnt_dm[BTC_DCNT_E2G_HANG] >= BTC_CHK_HANG_MAX)
+ dm->error.map.wl_e2g_hang = true;
+ else
+ dm->error.map.wl_e2g_hang = false;
+
+ dm->cnt_dm[BTC_DCNT_E2G] = cnt;
+ break;
case BTC_DCNT_TDMA_NONSYNC:
if (cnt != 0) /* if tdma not sync between drv/fw */
dm->cnt_dm[BTC_DCNT_TDMA_NONSYNC]++;
dm->cnt_dm[BTC_DCNT_SLOT_NONSYNC] = 0;
if (dm->cnt_dm[BTC_DCNT_SLOT_NONSYNC] >= BTC_CHK_HANG_MAX)
- dm->error.map.tdma_no_sync = true;
+ dm->error.map.slot_no_sync = true;
else
- dm->error.map.tdma_no_sync = false;
+ dm->error.map.slot_no_sync = false;
break;
- case BTC_DCNT_BTCNT_FREEZE:
+ case BTC_DCNT_BTCNT_HANG:
cnt = cx->cnt_bt[BTC_BCNT_HIPRI_RX] +
cx->cnt_bt[BTC_BCNT_HIPRI_TX] +
cx->cnt_bt[BTC_BCNT_LOPRI_RX] +
cx->cnt_bt[BTC_BCNT_LOPRI_TX];
if (cnt == 0)
- dm->cnt_dm[BTC_DCNT_BTCNT_FREEZE]++;
+ dm->cnt_dm[BTC_DCNT_BTCNT_HANG]++;
else
- dm->cnt_dm[BTC_DCNT_BTCNT_FREEZE] = 0;
+ dm->cnt_dm[BTC_DCNT_BTCNT_HANG] = 0;
- if ((dm->cnt_dm[BTC_DCNT_BTCNT_FREEZE] >= BTC_CHK_HANG_MAX &&
- bt->enable.now) || (!dm->cnt_dm[BTC_DCNT_BTCNT_FREEZE] &&
+ if ((dm->cnt_dm[BTC_DCNT_BTCNT_HANG] >= BTC_CHK_HANG_MAX &&
+ bt->enable.now) || (!dm->cnt_dm[BTC_DCNT_BTCNT_HANG] &&
!bt->enable.now))
_update_bt_scbd(rtwdev, false);
break;
else
dm->error.map.wl_slot_drift = false;
break;
+ case BTC_DCNT_BT_SLOT_DRIFT:
+ if (cnt >= BTC_CHK_BTSLOT_DRIFT_MAX)
+ dm->cnt_dm[BTC_DCNT_BT_SLOT_DRIFT]++;
+ else
+ dm->cnt_dm[BTC_DCNT_BT_SLOT_DRIFT] = 0;
+
+ if (dm->cnt_dm[BTC_DCNT_BT_SLOT_DRIFT] >= BTC_CHK_HANG_MAX)
+ dm->error.map.bt_slot_drift = true;
+ else
+ dm->error.map.bt_slot_drift = false;
+
+ break;
}
}
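The renamed BTC_DCNT_*_HANG cases above all share one shape: if a firmware counter stops advancing for BTC_CHK_HANG_MAX consecutive checks, a hang flag is raised, otherwise it is cleared. A self-contained sketch of that detector, with illustrative names only:

#include <stdbool.h>
#include <stdint.h>

#define EX_CHK_HANG_MAX 3	/* plays the role of BTC_CHK_HANG_MAX */

struct ex_hang_det {
	uint32_t last;		/* counter value seen at the previous check */
	uint32_t stuck;		/* consecutive checks with no change */
	bool hang;
};

/* Call once per report: flag a hang when the counter has not moved for
 * EX_CHK_HANG_MAX consecutive checks, clear the flag as soon as it moves. */
static void ex_check_hang(struct ex_hang_det *d, uint32_t cnt)
{
	if (cnt == d->last)
		d->stuck++;
	else
		d->stuck = 0;

	d->hang = d->stuck >= EX_CHK_HANG_MAX;
	d->last = cnt;
}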
struct rtw89_btc_bt_link_info *bt_linfo = &bt->link_info;
struct rtw89_btc_bt_a2dp_desc *a2dp = &bt_linfo->a2dp_desc;
struct rtw89_btc_fbtc_btver *pver = NULL;
- struct rtw89_btc_fbtc_btscan *pscan = NULL;
+ struct rtw89_btc_fbtc_btscan_v1 *pscan_v1;
+ struct rtw89_btc_fbtc_btscan_v2 *pscan_v2;
struct rtw89_btc_fbtc_btafh *pafh_v1 = NULL;
struct rtw89_btc_fbtc_btafh_v2 *pafh_v2 = NULL;
struct rtw89_btc_fbtc_btdevinfo *pdev = NULL;
+ bool scan_update = true;
+ int i;
pver = (struct rtw89_btc_fbtc_btver *)pfinfo;
- pscan = (struct rtw89_btc_fbtc_btscan *)pfinfo;
pdev = (struct rtw89_btc_fbtc_btdevinfo *)pfinfo;
rtw89_debug(rtwdev, RTW89_DBG_BTC,
bt->feature = le32_to_cpu(pver->feature);
break;
case BTC_RPT_TYPE_BT_SCAN:
- memcpy(bt->scan_info, pscan->scan, BTC_SCAN_MAX1);
+ if (ver->fcxbtscan == 1) {
+ pscan_v1 = (struct rtw89_btc_fbtc_btscan_v1 *)pfinfo;
+ for (i = 0; i < BTC_SCAN_MAX1; i++) {
+ bt->scan_info_v1[i] = pscan_v1->scan[i];
+ if (bt->scan_info_v1[i].win == 0 &&
+ bt->scan_info_v1[i].intvl == 0)
+ scan_update = false;
+ }
+ } else if (ver->fcxbtscan == 2) {
+ pscan_v2 = (struct rtw89_btc_fbtc_btscan_v2 *)pfinfo;
+ for (i = 0; i < CXSCAN_MAX; i++) {
+ bt->scan_info_v2[i] = pscan_v2->para[i];
+ if ((pscan_v2->type & BIT(i)) &&
+ pscan_v2->para[i].win == 0 &&
+ pscan_v2->para[i].intvl == 0)
+ scan_update = false;
+ }
+ }
+ if (scan_update)
+ bt->scan_info_update = 1;
break;
case BTC_RPT_TYPE_BT_AFH:
if (ver->fcxbtafh == 2) {
void *rpt_content = NULL, *pfinfo = NULL;
u8 rpt_type = 0;
u16 wl_slot_set = 0, wl_slot_real = 0;
- u32 trace_step = btc->ctrl.trace_step, rpt_len = 0, diff_t;
- u32 cnt_leak_slot = 0, bt_slot_real = 0, cnt_rx_imr = 0;
+ u32 trace_step = btc->ctrl.trace_step, rpt_len = 0, diff_t = 0;
+ u32 cnt_leak_slot, bt_slot_real, bt_slot_set, cnt_rx_imr;
u8 i;
rtw89_debug(rtwdev, RTW89_DBG_BTC,
} else if (ver->fcxbtcrpt == 5) {
pfinfo = &pfwinfo->rpt_ctrl.finfo.v5;
pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v5);
+ } else if (ver->fcxbtcrpt == 105) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v105;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v105);
+ pcinfo->req_fver = 5;
+ break;
} else {
goto err;
}
pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo.v4;
pcysta->v4 = pfwinfo->rpt_fbtc_cysta.finfo.v4;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo.v4);
+ } else if (ver->fcxcysta == 5) {
+ pfinfo = &pfwinfo->rpt_fbtc_cysta.finfo.v5;
+ pcysta->v5 = pfwinfo->rpt_fbtc_cysta.finfo.v5;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_cysta.finfo.v5);
} else {
goto err;
}
case BTC_RPT_TYPE_NULLSTA:
pcinfo = &pfwinfo->rpt_fbtc_nullsta.cinfo;
if (ver->fcxnullsta == 1) {
- pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo;
+ pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo.v1;
pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_nullsta.finfo.v1);
} else if (ver->fcxnullsta == 2) {
pfinfo = &pfwinfo->rpt_fbtc_nullsta.finfo.v2;
break;
case BTC_RPT_TYPE_MREG:
pcinfo = &pfwinfo->rpt_fbtc_mregval.cinfo;
- pfinfo = &pfwinfo->rpt_fbtc_mregval.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_mregval.finfo);
+ if (ver->fcxmreg == 1) {
+ pfinfo = &pfwinfo->rpt_fbtc_mregval.finfo.v1;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_mregval.finfo.v1);
+ } else if (ver->fcxmreg == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_mregval.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_mregval.finfo.v2);
+ } else {
+ goto err;
+ }
pcinfo->req_fver = ver->fcxmreg;
break;
case BTC_RPT_TYPE_GPIO_DBG:
break;
case BTC_RPT_TYPE_BT_SCAN:
pcinfo = &pfwinfo->rpt_fbtc_btscan.cinfo;
- pfinfo = &pfwinfo->rpt_fbtc_btscan.finfo;
- pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btscan.finfo);
+ if (ver->fcxbtscan == 1) {
+ pfinfo = &pfwinfo->rpt_fbtc_btscan.finfo.v1;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btscan.finfo.v1);
+ } else if (ver->fcxbtscan == 2) {
+ pfinfo = &pfwinfo->rpt_fbtc_btscan.finfo.v2;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_fbtc_btscan.finfo.v2);
+ }
pcinfo->req_fver = ver->fcxbtscan;
break;
case BTC_RPT_TYPE_BT_AFH:
wl->ver_info.fw = prpt->v1.wl_fw_ver;
dm->wl_fw_cx_offload = !!prpt->v1.wl_fw_cx_offload;
- _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_HANG,
pfwinfo->event[BTF_EVNT_RPT]);
/* To avoid I/O if WL LPS or power-off */
if (wl->status.map.lps != BTC_LPS_RF_OFF &&
!wl->status.map.rf_off) {
rtwdev->chip->ops->btc_update_bt_cnt(rtwdev);
- _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_HANG, 0);
btc->cx.cnt_bt[BTC_BCNT_POLUT] =
rtw89_mac_get_plt_cnt(rtwdev,
btc->cx.cnt_bt[BTC_BCNT_POLUT] =
le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_POLLUTED]);
- _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
- _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_HANG, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_HANG,
pfwinfo->event[BTF_EVNT_RPT]);
if (le32_to_cpu(prpt->v4.bt_cnt[BTC_BCNT_RFK_TIMEOUT]) > 0)
btc->cx.cnt_bt[BTC_BCNT_POLUT] =
le16_to_cpu(prpt->v5.bt_cnt[BTC_BCNT_POLLUTED]);
- _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_FREEZE, 0);
- _chk_btc_err(rtwdev, BTC_DCNT_RPT_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_HANG, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_HANG,
+ pfwinfo->event[BTF_EVNT_RPT]);
+
+ dm->error.map.bt_rfk_timeout = bt->rfk_info.map.timeout;
+ } else if (ver->fcxbtcrpt == 105) {
+ prpt->v105 = pfwinfo->rpt_ctrl.finfo.v105;
+ pfwinfo->rpt_en_map = le32_to_cpu(prpt->v105.rpt_info.en);
+ wl->ver_info.fw_coex = le32_to_cpu(prpt->v105.rpt_info.cx_ver);
+ wl->ver_info.fw = le32_to_cpu(prpt->v105.rpt_info.fw_ver);
+ dm->wl_fw_cx_offload = 0;
+
+ for (i = RTW89_PHY_0; i < RTW89_PHY_MAX; i++)
+ memcpy(&dm->gnt.band[i], &prpt->v105.gnt_val[i][0],
+ sizeof(dm->gnt.band[i]));
+
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_TX] =
+ le16_to_cpu(prpt->v105.bt_cnt[BTC_BCNT_HI_TX_V105]);
+ btc->cx.cnt_bt[BTC_BCNT_HIPRI_RX] =
+ le16_to_cpu(prpt->v105.bt_cnt[BTC_BCNT_HI_RX_V105]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_TX] =
+ le16_to_cpu(prpt->v105.bt_cnt[BTC_BCNT_LO_TX_V105]);
+ btc->cx.cnt_bt[BTC_BCNT_LOPRI_RX] =
+ le16_to_cpu(prpt->v105.bt_cnt[BTC_BCNT_LO_RX_V105]);
+ btc->cx.cnt_bt[BTC_BCNT_POLUT] =
+ le16_to_cpu(prpt->v105.bt_cnt[BTC_BCNT_POLLUTED_V105]);
+
+ _chk_btc_err(rtwdev, BTC_DCNT_BTCNT_HANG, 0);
+ _chk_btc_err(rtwdev, BTC_DCNT_RPT_HANG,
pfwinfo->event[BTF_EVNT_RPT]);
dm->error.map.bt_rfk_timeout = bt->rfk_info.map.timeout;
BTC_DCNT_WL_SLOT_DRIFT, diff_t);
}
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_HANG,
le32_to_cpu(pcysta->v2.slot_cnt[CXST_W1]));
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_HANG,
le32_to_cpu(pcysta->v2.slot_cnt[CXST_B1]));
- _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_HANG,
le16_to_cpu(pcysta->v2.cycles));
} else if (ver->fcxcysta == 3) {
if (le16_to_cpu(pcysta->v3.cycles) < BTC_CYSTA_CHK_PERIOD)
}
}
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_HANG,
le32_to_cpu(pcysta->v3.slot_cnt[CXST_W1]));
- _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_HANG,
le32_to_cpu(pcysta->v3.slot_cnt[CXST_B1]));
- _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_HANG,
le16_to_cpu(pcysta->v3.cycles));
} else if (ver->fcxcysta == 4) {
if (le16_to_cpu(pcysta->v4.cycles) < BTC_CYSTA_CHK_PERIOD)
}
}
- _chk_btc_err(rtwdev, BTC_DCNT_W1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_HANG,
le16_to_cpu(pcysta->v4.slot_cnt[CXST_W1]));
- _chk_btc_err(rtwdev, BTC_DCNT_B1_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_HANG,
le16_to_cpu(pcysta->v4.slot_cnt[CXST_B1]));
- _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_FREEZE,
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_HANG,
le16_to_cpu(pcysta->v4.cycles));
+ } else if (ver->fcxcysta == 5) {
+ if (dm->fddt_train == BTC_FDDT_ENABLE)
+ break;
+ cnt_leak_slot = le16_to_cpu(pcysta->v5.slot_cnt[CXST_LK]);
+ cnt_rx_imr = le32_to_cpu(pcysta->v5.leak_slot.cnt_rximr);
+
+ /* Check Leak-AP */
+ if (cnt_leak_slot != 0 && cnt_rx_imr != 0 &&
+ dm->tdma_now.rxflctrl) {
+ if (le16_to_cpu(pcysta->v5.cycles) >= BTC_CYSTA_CHK_PERIOD &&
+ cnt_leak_slot < BTC_LEAK_AP_TH * cnt_rx_imr)
+ dm->leak_ap = 1;
+ }
+
+ /* Check diff time between real WL slot and W1 slot */
+ if (dm->tdma_now.type == CXTDMA_OFF) {
+ wl_slot_set = le16_to_cpu(dm->slot_now[CXST_W1].dur);
+ wl_slot_real = le16_to_cpu(pcysta->v5.cycle_time.tavg[CXT_WL]);
+
+ if (wl_slot_real > wl_slot_set)
+ diff_t = wl_slot_real - wl_slot_set;
+ else
+ diff_t = wl_slot_set - wl_slot_real;
+ }
+ _chk_btc_err(rtwdev, BTC_DCNT_WL_SLOT_DRIFT, diff_t);
+
+ /* Check diff time between real BT slot and EBT/E5G slot */
+ bt_slot_set = btc->bt_req_len;
+ bt_slot_real = le16_to_cpu(pcysta->v5.cycle_time.tavg[CXT_BT]);
+ diff_t = 0;
+ if (dm->tdma_now.type == CXTDMA_OFF &&
+ dm->tdma_now.ext_ctrl == CXECTL_EXT &&
+ bt_slot_set != 0) {
+ if (bt_slot_set > bt_slot_real)
+ diff_t = bt_slot_set - bt_slot_real;
+ else
+ diff_t = bt_slot_real - bt_slot_set;
+ }
+
+ _chk_btc_err(rtwdev, BTC_DCNT_BT_SLOT_DRIFT, diff_t);
+ _chk_btc_err(rtwdev, BTC_DCNT_E2G_HANG,
+ le16_to_cpu(pcysta->v5.slot_cnt[CXST_E2G]));
+ _chk_btc_err(rtwdev, BTC_DCNT_W1_HANG,
+ le16_to_cpu(pcysta->v5.slot_cnt[CXST_W1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_B1_HANG,
+ le16_to_cpu(pcysta->v5.slot_cnt[CXST_B1]));
+ _chk_btc_err(rtwdev, BTC_DCNT_CYCLE_HANG,
+ le16_to_cpu(pcysta->v5.cycles));
} else {
goto err;
}
u32 rpt_map, bool rpt_state)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_wl_smap *wl_smap = &btc->cx.wl.status.map;
struct rtw89_btc_btf_fwinfo *fwinfo = &btc->fwinfo;
struct rtw89_btc_btf_set_report r = {0};
u32 val, bit_map;
+ if ((wl_smap->rf_off || wl_smap->lps != BTC_LPS_OFF) && rpt_state != 0)
+ return;
+
bit_map = rtw89_btc_fw_rpt_ver(rtwdev, rpt_map);
rtw89_debug(rtwdev, RTW89_DBG_BTC,
static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_btc_ver *ver = rtwdev->btc.ver;
struct rtw89_btc_btf_set_mon_reg *monreg = NULL;
- u8 n, *ptr = NULL, ulen;
+ u8 n, *ptr = NULL, ulen, cxmreg_max;
u16 sz = 0;
n = chip->mon_reg_num;
-
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): mon_reg_num=%d\n", __func__, n);
- if (n > CXMREG_MAX) {
+
+ if (ver->fcxmreg == 1)
+ cxmreg_max = CXMREG_MAX;
+ else if (ver->fcxmreg == 2)
+ cxmreg_max = CXMREG_MAX_V2;
+ else
+ return;
+
+ if (n > cxmreg_max) {
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): mon reg count %d > %d\n",
- __func__, n, CXMREG_MAX);
+ __func__, n, cxmreg_max);
return;
}
if (!monreg)
return;
- monreg->fver = BTF_SET_MON_REG_VER;
+ monreg->fver = ver->fcxmreg;
monreg->reg_num = n;
ptr = &monreg->buf[0];
memcpy(ptr, chip->mon_reg, n * ulen);
{
struct rtw89_btc *btc = &rtwdev->btc;
const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_rf_trx_para rf_para = dm->rf_trx_para;
switch (type) {
case CXDRVINFO_INIT:
rtw89_fw_h2c_cxdrv_role(rtwdev);
else if (ver->fwlrole == 1)
rtw89_fw_h2c_cxdrv_role_v1(rtwdev);
+ else if (ver->fwlrole == 2)
+ rtw89_fw_h2c_cxdrv_role_v2(rtwdev);
break;
case CXDRVINFO_CTRL:
rtw89_fw_h2c_cxdrv_ctrl(rtwdev);
break;
+ case CXDRVINFO_TRX:
+ dm->trx_info.tx_power = u32_get_bits(rf_para.wl_tx_power,
+ RTW89_BTC_WL_DEF_TX_PWR);
+ dm->trx_info.rx_gain = u32_get_bits(rf_para.wl_rx_gain,
+ RTW89_BTC_WL_DEF_TX_PWR);
+ dm->trx_info.bt_tx_power = u32_get_bits(rf_para.bt_tx_power,
+ RTW89_BTC_WL_DEF_TX_PWR);
+ dm->trx_info.bt_rx_gain = u32_get_bits(rf_para.bt_rx_gain,
+ RTW89_BTC_WL_DEF_TX_PWR);
+ dm->trx_info.cn = wl->cn_report;
+ dm->trx_info.nhm = wl->nhm.pwr;
+ rtw89_fw_h2c_cxdrv_trx(rtwdev);
+ break;
case CXDRVINFO_RFK:
rtw89_fw_h2c_cxdrv_rfk(rtwdev);
break;
struct rtw89_btc_bt_link_info *b = &bt->link_info;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
struct rtw89_btc_wl_active_role *r;
struct rtw89_btc_wl_active_role_v1 *r1;
+ struct rtw89_btc_wl_active_role_v2 *r2;
u8 en = 0, i, ch = 0, bw = 0;
u8 mode, connect_cnt;
if (ver->fwlrole == 0) {
mode = wl_rinfo->link_mode;
connect_cnt = wl_rinfo->connect_cnt;
- } else {
+ } else if (ver->fwlrole == 1) {
mode = wl_rinfo_v1->link_mode;
connect_cnt = wl_rinfo_v1->connect_cnt;
+ } else if (ver->fwlrole == 2) {
+ mode = wl_rinfo_v2->link_mode;
+ connect_cnt = wl_rinfo_v2->connect_cnt;
+ } else {
+ return;
}
if (wl->status.map.rf_off || bt->whql_test ||
for (i = 0; i < RTW89_PORT_NUM; i++) {
r = &wl_rinfo->active_role[i];
r1 = &wl_rinfo_v1->active_role_v1[i];
+ r2 = &wl_rinfo_v2->active_role_v2[i];
if (ver->fwlrole == 0 &&
(r->role == RTW89_WIFI_ROLE_P2P_GO ||
ch = r1->ch;
bw = r1->bw;
break;
+ } else if (ver->fwlrole == 2 &&
+ (r2->role == RTW89_WIFI_ROLE_P2P_GO ||
+ r2->role == RTW89_WIFI_ROLE_P2P_CLIENT)) {
+ ch = r2->ch;
+ bw = r2->bw;
+ break;
}
}
} else {
for (i = 0; i < RTW89_PORT_NUM; i++) {
r = &wl_rinfo->active_role[i];
r1 = &wl_rinfo_v1->active_role_v1[i];
+ r2 = &wl_rinfo_v2->active_role_v2[i];
if (ver->fwlrole == 0 &&
r->connected && r->band == RTW89_BAND_2G) {
ch = r1->ch;
bw = r1->bw;
break;
+ } else if (ver->fwlrole == 2 &&
+ r2->connected && r2->band == RTW89_BAND_2G) {
+ ch = r2->ch;
+ bw = r2->bw;
+ break;
}
}
}
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
struct rtw89_btc_wl_dbcc_info *wl_dinfo = &wl->dbcc_info;
bool is_btg;
u8 mode;
if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
- else
+ else if (ver->fwlrole == 1)
mode = wl_rinfo_v1->link_mode;
+ else if (ver->fwlrole == 2)
+ mode = wl_rinfo_v2->link_mode;
+ else
+ return;
/* notify halbb ignore GNT_BT or not for WL BB Rx-AGC control */
if (mode == BTC_WLINK_5G) /* always 0 if 5G */
struct rtw89_btc_bt_hid_desc *hid = &b->hid_desc;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
struct rtw89_txtime_data data = {.rtwdev = rtwdev};
u8 mode;
u8 tx_retry;
if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
- else
+ else if (ver->fwlrole == 1)
mode = wl_rinfo_v1->link_mode;
+ else if (ver->fwlrole == 2)
+ mode = wl_rinfo_v2->link_mode;
+ else
+ return;
if (btc->dm.freerun || btc->ctrl.igno_bt || b->profile_cnt.now == 0 ||
mode == BTC_WLINK_5G || mode == BTC_WLINK_NOLINK) {
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
struct rtw89_btc_bt_info *bt = &btc->cx.bt;
bool bt_hi_lna_rx = false;
u8 mode;
if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
- else
+ else if (ver->fwlrole == 1)
mode = wl_rinfo_v1->link_mode;
+ else if (ver->fwlrole == 2)
+ mode = wl_rinfo_v2->link_mode;
+ else
+ return;
if (mode != BTC_WLINK_NOLINK && btc->dm.wl_btg_rx)
bt_hi_lna_rx = true;
_set_policy(rtwdev, policy_type, BTC_ACT_WL_2G_SCC);
}
+static void _action_wl_2g_scc_v2(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_bt_info *bt = &btc->cx.bt;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo = &wl->role_info_v2;
+ u16 policy_type = BTC_CXP_OFF_BT;
+ u32 dur;
+
+ if (btc->mdinfo.ant.type == BTC_ANT_DEDICATED) {
+ policy_type = BTC_CXP_OFF_EQ0;
+ } else {
+ /* shared-antenna */
+ switch (wl_rinfo->mrole_type) {
+ case BTC_WLMROLE_STA_GC:
+ dm->wl_scc.null_role1 = RTW89_WIFI_ROLE_STATION;
+ dm->wl_scc.null_role2 = RTW89_WIFI_ROLE_P2P_CLIENT;
+ dm->wl_scc.ebt_null = 0; /* no ext-slot-control */
+ _action_by_bt(rtwdev);
+ return;
+ case BTC_WLMROLE_STA_STA:
+ dm->wl_scc.null_role1 = RTW89_WIFI_ROLE_STATION;
+ dm->wl_scc.null_role2 = RTW89_WIFI_ROLE_STATION;
+ dm->wl_scc.ebt_null = 0; /* no ext-slot-control */
+ _action_by_bt(rtwdev);
+ return;
+ case BTC_WLMROLE_STA_GC_NOA:
+ case BTC_WLMROLE_STA_GO:
+ case BTC_WLMROLE_STA_GO_NOA:
+ dm->wl_scc.null_role1 = RTW89_WIFI_ROLE_STATION;
+ dm->wl_scc.null_role2 = RTW89_WIFI_ROLE_NONE;
+ dur = wl_rinfo->mrole_noa_duration;
+
+ if (wl->status.map._4way) {
+ dm->wl_scc.ebt_null = 0;
+ policy_type = BTC_CXP_OFFE_WL;
+ } else if (bt->link_info.status.map.connect == 0) {
+ dm->wl_scc.ebt_null = 0;
+ policy_type = BTC_CXP_OFFE_2GISOB;
+ } else if (bt->link_info.a2dp_desc.exist &&
+ dur < btc->bt_req_len) {
+ dm->wl_scc.ebt_null = 1; /* tx null at EBT */
+ policy_type = BTC_CXP_OFFE_2GBWMIXB2;
+ } else if (bt->link_info.a2dp_desc.exist ||
+ bt->link_info.pan_desc.exist) {
+ dm->wl_scc.ebt_null = 1; /* tx null at EBT */
+ policy_type = BTC_CXP_OFFE_2GBWISOB;
+ } else {
+ dm->wl_scc.ebt_null = 0;
+ policy_type = BTC_CXP_OFFE_2GBWISOB;
+ }
+ break;
+ default:
+ break;
+ }
+ }
+
+ _set_ant(rtwdev, NM_EXEC, BTC_PHY_ALL, BTC_ANT_W2G);
+ _set_policy(rtwdev, policy_type, BTC_ACT_WL_2G_SCC);
+}
+
static void _action_wl_2g_ap(struct rtw89_dev *rtwdev)
{
struct rtw89_btc *btc = &rtwdev->btc;
_fw_set_drv_info(rtwdev, CXDRVINFO_ROLE);
}
-#define BTC_CHK_HANG_MAX 3
-#define BTC_SCB_INV_VALUE GENMASK(31, 0)
-
-void rtw89_coex_act1_work(struct work_struct *work)
+static void _update_wl_info_v2(struct rtw89_dev *rtwdev)
{
- struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
- coex_act1_work.work);
struct rtw89_btc *btc = &rtwdev->btc;
- struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
- struct rtw89_btc_cx *cx = &btc->cx;
- struct rtw89_btc_wl_info *wl = &cx->wl;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_wl_link_info *wl_linfo = wl->link_info;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo = &wl->role_info_v2;
+ struct rtw89_btc_wl_dbcc_info *wl_dinfo = &wl->dbcc_info;
+ u8 cnt_connect = 0, cnt_connecting = 0, cnt_active = 0;
+ u8 cnt_2g = 0, cnt_5g = 0, phy;
+ u32 wl_2g_ch[2] = {}, wl_5g_ch[2] = {};
+ bool b2g = false, b5g = false, client_joined = false;
+ u8 i;
- mutex_lock(&rtwdev->mutex);
- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
- dm->cnt_notify[BTC_NCNT_TIMER]++;
- if (wl->status.map._4way)
- wl->status.map._4way = false;
- if (wl->status.map.connecting)
- wl->status.map.connecting = false;
+ memset(wl_rinfo, 0, sizeof(*wl_rinfo));
- _run_coex(rtwdev, BTC_RSN_ACT1_WORK);
- mutex_unlock(&rtwdev->mutex);
-}
+ for (i = 0; i < RTW89_PORT_NUM; i++) {
+ if (!wl_linfo[i].active)
+ continue;
-void rtw89_coex_bt_devinfo_work(struct work_struct *work)
-{
- struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
- coex_bt_devinfo_work.work);
- struct rtw89_btc *btc = &rtwdev->btc;
- struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
- struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
+ cnt_active++;
+ wl_rinfo->active_role_v2[cnt_active - 1].role = wl_linfo[i].role;
+ wl_rinfo->active_role_v2[cnt_active - 1].pid = wl_linfo[i].pid;
+ wl_rinfo->active_role_v2[cnt_active - 1].phy = wl_linfo[i].phy;
+ wl_rinfo->active_role_v2[cnt_active - 1].band = wl_linfo[i].band;
+ wl_rinfo->active_role_v2[cnt_active - 1].noa = (u8)wl_linfo[i].noa;
+ wl_rinfo->active_role_v2[cnt_active - 1].connected = 0;
- mutex_lock(&rtwdev->mutex);
- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
- dm->cnt_notify[BTC_NCNT_TIMER]++;
- a2dp->play_latency = 0;
- _run_coex(rtwdev, BTC_RSN_BT_DEVINFO_WORK);
- mutex_unlock(&rtwdev->mutex);
-}
+ wl->port_id[wl_linfo[i].role] = wl_linfo[i].pid;
-void rtw89_coex_rfk_chk_work(struct work_struct *work)
-{
- struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
- coex_rfk_chk_work.work);
- struct rtw89_btc *btc = &rtwdev->btc;
- struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
- struct rtw89_btc_cx *cx = &btc->cx;
- struct rtw89_btc_wl_info *wl = &cx->wl;
+ phy = wl_linfo[i].phy;
- mutex_lock(&rtwdev->mutex);
- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
- dm->cnt_notify[BTC_NCNT_TIMER]++;
- if (wl->rfk_info.state != BTC_WRFK_STOP) {
- rtw89_debug(rtwdev, RTW89_DBG_BTC,
- "[BTC], %s(): RFK timeout\n", __func__);
- cx->cnt_wl[BTC_WCNT_RFK_TIMEOUT]++;
- dm->error.map.wl_rfk_timeout = true;
- wl->rfk_info.state = BTC_WRFK_STOP;
- _write_scbd(rtwdev, BTC_WSCB_WLRFK, false);
- _run_coex(rtwdev, BTC_RSN_RFK_CHK_WORK);
- }
- mutex_unlock(&rtwdev->mutex);
-}
+ if (rtwdev->dbcc_en && phy < RTW89_PHY_MAX) {
+ wl_dinfo->role[phy] = wl_linfo[i].role;
+ wl_dinfo->op_band[phy] = wl_linfo[i].band;
+ _update_dbcc_band(rtwdev, phy);
+ _fw_set_drv_info(rtwdev, CXDRVINFO_DBCC);
+ }
-static void _update_bt_scbd(struct rtw89_dev *rtwdev, bool only_update)
-{
- const struct rtw89_chip_info *chip = rtwdev->chip;
- struct rtw89_btc *btc = &rtwdev->btc;
- struct rtw89_btc_cx *cx = &btc->cx;
- struct rtw89_btc_bt_info *bt = &btc->cx.bt;
- u32 val;
- bool status_change = false;
+ if (wl_linfo[i].connected == MLME_NO_LINK) {
+ continue;
+ } else if (wl_linfo[i].connected == MLME_LINKING) {
+ cnt_connecting++;
+ } else {
+ cnt_connect++;
+ if ((wl_linfo[i].role == RTW89_WIFI_ROLE_P2P_GO ||
+ wl_linfo[i].role == RTW89_WIFI_ROLE_AP) &&
+ wl_linfo[i].client_cnt > 1)
+ client_joined = true;
+ }
- if (!chip->scbd)
- return;
+ wl_rinfo->role_map.val |= BIT(wl_linfo[i].role);
+ wl_rinfo->active_role_v2[cnt_active - 1].ch = wl_linfo[i].ch;
+ wl_rinfo->active_role_v2[cnt_active - 1].bw = wl_linfo[i].bw;
+ wl_rinfo->active_role_v2[cnt_active - 1].connected = 1;
- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s\n", __func__);
+ /* only care 2 roles + BT coex */
+ if (wl_linfo[i].band != RTW89_BAND_2G) {
+ if (cnt_5g <= ARRAY_SIZE(wl_5g_ch) - 1)
+ wl_5g_ch[cnt_5g] = wl_linfo[i].ch;
+ cnt_5g++;
+ b5g = true;
+ } else {
+ if (cnt_2g <= ARRAY_SIZE(wl_2g_ch) - 1)
+ wl_2g_ch[cnt_2g] = wl_linfo[i].ch;
+ cnt_2g++;
+ b2g = true;
+ }
+ }
- val = _read_scbd(rtwdev);
+ wl_rinfo->connect_cnt = cnt_connect;
+
+ /* Be careful to change the following sequence!! */
+ if (cnt_connect == 0) {
+ wl_rinfo->link_mode = BTC_WLINK_NOLINK;
+ wl_rinfo->role_map.role.none = 1;
+ } else if (!b2g && b5g) {
+ wl_rinfo->link_mode = BTC_WLINK_5G;
+ } else if (wl_rinfo->role_map.role.nan) {
+ wl_rinfo->link_mode = BTC_WLINK_2G_NAN;
+ } else if (cnt_connect > BTC_TDMA_WLROLE_MAX) {
+ wl_rinfo->link_mode = BTC_WLINK_OTHER;
+ } else if (b2g && b5g && cnt_connect == 2) {
+ if (rtwdev->dbcc_en) {
+ switch (wl_dinfo->role[RTW89_PHY_0]) {
+ case RTW89_WIFI_ROLE_STATION:
+ wl_rinfo->link_mode = BTC_WLINK_2G_STA;
+ break;
+ case RTW89_WIFI_ROLE_P2P_GO:
+ wl_rinfo->link_mode = BTC_WLINK_2G_GO;
+ break;
+ case RTW89_WIFI_ROLE_P2P_CLIENT:
+ wl_rinfo->link_mode = BTC_WLINK_2G_GC;
+ break;
+ case RTW89_WIFI_ROLE_AP:
+ wl_rinfo->link_mode = BTC_WLINK_2G_AP;
+ break;
+ default:
+ wl_rinfo->link_mode = BTC_WLINK_OTHER;
+ break;
+ }
+ } else {
+ wl_rinfo->link_mode = BTC_WLINK_25G_MCC;
+ }
+ } else if (!b5g && cnt_connect == 2) {
+ if (wl_rinfo->role_map.role.station &&
+ (wl_rinfo->role_map.role.p2p_go ||
+ wl_rinfo->role_map.role.p2p_gc ||
+ wl_rinfo->role_map.role.ap)) {
+ if (wl_2g_ch[0] == wl_2g_ch[1])
+ wl_rinfo->link_mode = BTC_WLINK_2G_SCC;
+ else
+ wl_rinfo->link_mode = BTC_WLINK_2G_MCC;
+ } else {
+ wl_rinfo->link_mode = BTC_WLINK_2G_MCC;
+ }
+ } else if (!b5g && cnt_connect == 1) {
+ if (wl_rinfo->role_map.role.station)
+ wl_rinfo->link_mode = BTC_WLINK_2G_STA;
+ else if (wl_rinfo->role_map.role.ap)
+ wl_rinfo->link_mode = BTC_WLINK_2G_AP;
+ else if (wl_rinfo->role_map.role.p2p_go)
+ wl_rinfo->link_mode = BTC_WLINK_2G_GO;
+ else if (wl_rinfo->role_map.role.p2p_gc)
+ wl_rinfo->link_mode = BTC_WLINK_2G_GC;
+ else
+ wl_rinfo->link_mode = BTC_WLINK_OTHER;
+ }
+
+ /* if no client_joined, don't care P2P-GO/AP role */
+ if (wl_rinfo->role_map.role.p2p_go || wl_rinfo->role_map.role.ap) {
+ if (!client_joined) {
+ if (wl_rinfo->link_mode == BTC_WLINK_2G_SCC ||
+ wl_rinfo->link_mode == BTC_WLINK_2G_MCC) {
+ wl_rinfo->link_mode = BTC_WLINK_2G_STA;
+ wl_rinfo->connect_cnt = 1;
+ } else if (wl_rinfo->link_mode == BTC_WLINK_2G_GO ||
+ wl_rinfo->link_mode == BTC_WLINK_2G_AP) {
+ wl_rinfo->link_mode = BTC_WLINK_NOLINK;
+ wl_rinfo->connect_cnt = 0;
+ }
+ }
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], cnt_connect = %d, connecting = %d, link_mode = %d\n",
+ cnt_connect, cnt_connecting, wl_rinfo->link_mode);
+
+ _fw_set_drv_info(rtwdev, CXDRVINFO_ROLE);
+}
+
+#define BTC_CHK_HANG_MAX 3
+#define BTC_SCB_INV_VALUE GENMASK(31, 0)
+
+void rtw89_coex_act1_work(struct work_struct *work)
+{
+ struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
+ coex_act1_work.work);
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
+ dm->cnt_notify[BTC_NCNT_TIMER]++;
+ if (wl->status.map._4way)
+ wl->status.map._4way = false;
+ if (wl->status.map.connecting)
+ wl->status.map.connecting = false;
+
+ _run_coex(rtwdev, BTC_RSN_ACT1_WORK);
+ mutex_unlock(&rtwdev->mutex);
+}
+
+void rtw89_coex_bt_devinfo_work(struct work_struct *work)
+{
+ struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
+ coex_bt_devinfo_work.work);
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
+ struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
+ dm->cnt_notify[BTC_NCNT_TIMER]++;
+ a2dp->play_latency = 0;
+ _run_coex(rtwdev, BTC_RSN_BT_DEVINFO_WORK);
+ mutex_unlock(&rtwdev->mutex);
+}
+
+void rtw89_coex_rfk_chk_work(struct work_struct *work)
+{
+ struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
+ coex_rfk_chk_work.work);
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &rtwdev->btc.dm;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): enter\n", __func__);
+ dm->cnt_notify[BTC_NCNT_TIMER]++;
+ if (wl->rfk_info.state != BTC_WRFK_STOP) {
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): RFK timeout\n", __func__);
+ cx->cnt_wl[BTC_WCNT_RFK_TIMEOUT]++;
+ dm->error.map.wl_rfk_timeout = true;
+ wl->rfk_info.state = BTC_WRFK_STOP;
+ _write_scbd(rtwdev, BTC_WSCB_WLRFK, false);
+ _run_coex(rtwdev, BTC_RSN_RFK_CHK_WORK);
+ }
+ mutex_unlock(&rtwdev->mutex);
+}
+
+static void _update_bt_scbd(struct rtw89_dev *rtwdev, bool only_update)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_bt_info *bt = &btc->cx.bt;
+ u32 val;
+ bool status_change = false;
+
+ if (!chip->scbd)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s\n", __func__);
+
+ val = _read_scbd(rtwdev);
if (val == BTC_SCB_INV_VALUE) {
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): return by invalid scbd value\n",
}
if (!(val & BTC_BSCB_ON) ||
- btc->dm.cnt_dm[BTC_DCNT_BTCNT_FREEZE] >= BTC_CHK_HANG_MAX)
+ btc->dm.cnt_dm[BTC_DCNT_BTCNT_HANG] >= BTC_CHK_HANG_MAX)
bt->enable.now = 0;
else
bt->enable.now = 1;
struct rtw89_btc_bt_info *bt = &btc->cx.bt;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
u8 mode;
lockdep_assert_held(&rtwdev->mutex);
if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
- else
+ else if (ver->fwlrole == 1)
mode = wl_rinfo_v1->link_mode;
+ else if (ver->fwlrole == 2)
+ mode = wl_rinfo_v2->link_mode;
+ else
+ return;
rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): reason=%d, mode=%d\n",
__func__, reason, mode);
}
dm->cnt_dm[BTC_DCNT_RUN]++;
+ dm->fddt_train = BTC_FDDT_DISABLE;
if (btc->ctrl.always_freerun) {
_action_freerun(rtwdev);
_action_wl_2g_scc(rtwdev);
else if (ver->fwlrole == 1)
_action_wl_2g_scc_v1(rtwdev);
+ else if (ver->fwlrole == 2)
+ _action_wl_2g_scc_v2(rtwdev);
break;
case BTC_WLINK_2G_MCC:
bt->scan_rx_low_pri = true;
mutex_unlock(&rtwdev->mutex);
}
+#define BT_PROFILE_PROTOCOL_MASK GENMASK(7, 4)
+
static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
a2dp->exist = btinfo.lb2.a2dp;
b->profile_cnt.now += (u8)a2dp->exist;
pan->active = btinfo.lb2.pan;
+ btc->dm.trx_info.bt_profile = u32_get_bits(btinfo.val, BT_PROFILE_PROTOCOL_MASK);
/* parse raw info low-Byte3 */
btinfo.val = bt->raw_info[BTC_BTINFO_L3];
btinfo.val = bt->raw_info[BTC_BTINFO_H0];
/* raw val is dBm unit, translate from -100~ 0dBm to 0~100%*/
b->rssi = chip->ops->btc_get_bt_rssi(rtwdev, btinfo.hb0.rssi);
+ btc->dm.trx_info.bt_rssi = b->rssi;
/* parse raw info high-Byte1 */
btinfo.val = bt->raw_info[BTC_BTINFO_H1];
memcpy(wlinfo, &r, sizeof(*wlinfo));
if (ver->fwlrole == 0)
_update_wl_info(rtwdev);
- else
+ else if (ver->fwlrole == 1)
_update_wl_info_v1(rtwdev);
+ else if (ver->fwlrole == 2)
+ _update_wl_info_v2(rtwdev);
if (wlinfo->role == RTW89_WIFI_ROLE_STATION &&
wlinfo->connected == MLME_NO_LINK)
wl->status.map.lps = BTC_LPS_RF_OFF;
wl->status.map.busy = 0;
break;
+ case BTC_RFCTRL_LPS_WL_ON: /* LPS-Protocol (RFon) */
+ wl->status.map.rf_off = 0;
+ wl->status.map.lps = BTC_LPS_RF_ON;
+ wl->status.map.busy = 0;
+ break;
case BTC_RFCTRL_WL_ON:
default:
wl->status.map.rf_off = 0;
rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_ALL, false);
if (rf_state == BTC_RFCTRL_WL_OFF)
_write_scbd(rtwdev, BTC_WSCB_ALL, false);
+ else if (rf_state == BTC_RFCTRL_LPS_WL_ON &&
+ wl->status.map.lps_pre != BTC_LPS_OFF)
+ _update_bt_scbd(rtwdev, true);
}
- btc->dm.cnt_dm[BTC_DCNT_BTCNT_FREEZE] = 0;
+ btc->dm.cnt_dm[BTC_DCNT_BTCNT_HANG] = 0;
if (wl->status.map.lps_pre == BTC_LPS_OFF &&
wl->status.map.lps_pre != wl->status.map.lps)
btc->dm.tdma_instant_excute = 1;
btc->dm.tdma_instant_excute = 0;
_run_coex(rtwdev, BTC_RSN_NTFY_RADIO_STATE);
-
+ btc->dm.tdma_instant_excute = 0;
wl->status.map.rf_off_pre = wl->status.map.rf_off;
wl->status.map.lps_pre = wl->status.map.lps;
}
(struct rtw89_btc_wl_sta_iter_data *)data;
struct rtw89_dev *rtwdev = iter_data->rtwdev;
struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ const struct rtw89_btc_ver *ver = btc->ver;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_link_info *link_info = NULL;
struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
struct rtw89_vif *rtwvif = rtwsta->rtwvif;
struct rtw89_traffic_stats *stats = &rtwvif->stats;
const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc_wl_role_info *r;
+ struct rtw89_btc_wl_role_info_v1 *r1;
u32 last_tx_rate, last_rx_rate;
u16 last_tx_lvl, last_rx_lvl;
u8 port = rtwvif->port;
link_info_t->tx_rate = rtwsta->ra_report.hw_rate;
link_info_t->rx_rate = rtwsta->rx_hw_rate;
- wl->role_info.active_role[port].tx_lvl = (u16)stats->tx_tfc_lv;
- wl->role_info.active_role[port].rx_lvl = (u16)stats->rx_tfc_lv;
- wl->role_info.active_role[port].tx_rate = rtwsta->ra_report.hw_rate;
- wl->role_info.active_role[port].rx_rate = rtwsta->rx_hw_rate;
+ if (link_info->role == RTW89_WIFI_ROLE_STATION ||
+ link_info->role == RTW89_WIFI_ROLE_P2P_CLIENT) {
+ dm->trx_info.tx_rate = link_info_t->tx_rate;
+ dm->trx_info.rx_rate = link_info_t->rx_rate;
+ }
+
+ if (ver->fwlrole == 0) {
+ r = &wl->role_info;
+ r->active_role[port].tx_lvl = stats->tx_tfc_lv;
+ r->active_role[port].rx_lvl = stats->rx_tfc_lv;
+ r->active_role[port].tx_rate = rtwsta->ra_report.hw_rate;
+ r->active_role[port].rx_rate = rtwsta->rx_hw_rate;
+ } else if (ver->fwlrole == 1) {
+ r1 = &wl->role_info_v1;
+ r1->active_role_v1[port].tx_lvl = stats->tx_tfc_lv;
+ r1->active_role_v1[port].rx_lvl = stats->rx_tfc_lv;
+ r1->active_role_v1[port].tx_rate = rtwsta->ra_report.hw_rate;
+ r1->active_role_v1[port].rx_rate = rtwsta->rx_hw_rate;
+ } else if (ver->fwlrole == 2) {
+ dm->trx_info.tx_lvl = stats->tx_tfc_lv;
+ dm->trx_info.rx_lvl = stats->rx_tfc_lv;
+ dm->trx_info.tx_rate = rtwsta->ra_report.hw_rate;
+ dm->trx_info.rx_rate = rtwsta->rx_hw_rate;
+ }
+
+ dm->trx_info.tx_tp = link_info_t->tx_throughput;
+ dm->trx_info.rx_tp = link_info_t->rx_throughput;
if (is_sta_change)
iter_data->is_sta_change = true;
void rtw89_btc_ntfy_wl_sta(struct rtw89_dev *rtwdev)
{
struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &btc->dm;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
struct rtw89_btc_wl_sta_iter_data data = {.rtwdev = rtwdev};
u8 i;
}
}
+ if (dm->trx_info.wl_rssi != wl->rssi_level)
+ dm->trx_info.wl_rssi = wl->rssi_level;
+
rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], %s(): busy=%d\n",
__func__, !!wl->status.map.busy);
seq_printf(m, " %-15s : Coex:%d.%d.%d(branch:%d), ",
"[coex_version]", ver_main, ver_sub, ver_hotfix, id_branch);
- if (dm->wl_fw_cx_offload != BTC_CX_FW_OFFLOAD)
- dm->error.map.offload_mismatch = true;
- else
- dm->error.map.offload_mismatch = false;
-
ver_main = FIELD_GET(GENMASK(31, 24), wl->ver_info.fw_coex);
ver_sub = FIELD_GET(GENMASK(23, 16), wl->ver_info.fw_coex);
ver_hotfix = FIELD_GET(GENMASK(15, 8), wl->ver_info.fw_coex);
struct rtw89_btc_wl_info *wl = &cx->wl;
struct rtw89_btc_wl_role_info *wl_rinfo = &wl->role_info;
struct rtw89_btc_wl_role_info_v1 *wl_rinfo_v1 = &wl->role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 *wl_rinfo_v2 = &wl->role_info_v2;
u8 mode;
if (!(btc->dm.coex_info_map & BTC_COEX_INFO_WL))
if (ver->fwlrole == 0)
mode = wl_rinfo->link_mode;
- else
+ else if (ver->fwlrole == 1)
mode = wl_rinfo_v1->link_mode;
+ else if (ver->fwlrole == 2)
+ mode = wl_rinfo_v2->link_mode;
+ else
+ return;
seq_printf(m, " %-15s : link_mode:%d, ", "[status]", mode);
cx->cnt_bt[BTC_BCNT_INFOSAME]);
seq_printf(m,
- " %-15s : Hi-rx = %d, Hi-tx = %d, Lo-rx = %d, Lo-tx = %d (bt_polut_wl_tx = %d)\n",
+ " %-15s : Hi-rx = %d, Hi-tx = %d, Lo-rx = %d, Lo-tx = %d (bt_polut_wl_tx = %d)",
"[trx_req_cnt]", cx->cnt_bt[BTC_BCNT_HIPRI_RX],
cx->cnt_bt[BTC_BCNT_HIPRI_TX], cx->cnt_bt[BTC_BCNT_LOPRI_RX],
cx->cnt_bt[BTC_BCNT_LOPRI_TX], cx->cnt_bt[BTC_BCNT_POLUT]);
+ if (!bt->scan_info_update) {
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_SCAN_INFO, true);
+ seq_puts(m, "\n");
+ } else {
+ rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_SCAN_INFO, false);
+ if (ver->fcxbtscan == 1) {
+ seq_printf(m,
+ "(INQ:%d-%d/PAGE:%d-%d/LE:%d-%d/INIT:%d-%d)",
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_INQ].win),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_INQ].intvl),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_PAGE].win),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_PAGE].intvl),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_BLE].win),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_BLE].intvl),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_INIT].win),
+ le16_to_cpu(bt->scan_info_v1[BTC_SCAN_INIT].intvl));
+ } else if (ver->fcxbtscan == 2) {
+ seq_printf(m,
+ "(BG:%d-%d/INIT:%d-%d/LE:%d-%d)",
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_BG].win),
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_BG].intvl),
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_INIT].win),
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_INIT].intvl),
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_LE].win),
+ le16_to_cpu(bt->scan_info_v2[CXSCAN_LE].intvl));
+ }
+ seq_puts(m, "\n");
+ }
+
if (bt->enable.now && bt->ver_info.fw == 0)
rtw89_btc_fw_en_rpt(rtwdev, RPT_EN_BT_VER_INFO, true);
else
pcysta->v4 = pfwinfo->rpt_fbtc_cysta.finfo.v4;
except_cnt = pcysta->v4.except_cnt;
exception_map = le32_to_cpu(pcysta->v4.except_map);
+ } else if (ver->fcxcysta == 5) {
+ pcysta->v5 = pfwinfo->rpt_fbtc_cysta.finfo.v5;
+ except_cnt = pcysta->v5.except_cnt;
+ exception_map = le32_to_cpu(pcysta->v5.except_map);
} else {
return;
}
}
}
-static void _show_fbtc_nullsta(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_fbtc_cysta_v5(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
- const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_bt_a2dp_desc *a2dp = &btc->cx.bt.link_info.a2dp_desc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_fbtc_a2dp_trx_stat_v4 *a2dp_trx;
+ struct rtw89_btc_fbtc_cysta_v5 *pcysta;
struct rtw89_btc_rpt_cmn_info *pcinfo;
- union rtw89_btc_fbtc_cynullsta_info *ns;
- u8 i = 0;
-
- if (!btc->dm.tdma_now.rxflctrl)
- return;
+ u8 i, cnt = 0, slot_pair, divide_cnt;
+ u16 cycle, c_begin, c_end, store_index;
- pcinfo = &pfwinfo->rpt_fbtc_nullsta.cinfo;
+ pcinfo = &pfwinfo->rpt_fbtc_cysta.cinfo;
if (!pcinfo->valid)
return;
- ns = &pfwinfo->rpt_fbtc_nullsta.finfo;
- if (ver->fcxnullsta == 1) {
- for (i = 0; i < 2; i++) {
- seq_printf(m, " %-15s : ", "[NULL-STA]");
- seq_printf(m, "null-%d", i);
- seq_printf(m, "[ok:%d/",
- le32_to_cpu(ns->v1.result[i][1]));
- seq_printf(m, "fail:%d/",
- le32_to_cpu(ns->v1.result[i][0]));
- seq_printf(m, "on_time:%d/",
- le32_to_cpu(ns->v1.result[i][2]));
- seq_printf(m, "retry:%d/",
- le32_to_cpu(ns->v1.result[i][3]));
- seq_printf(m, "avg_t:%d.%03d/",
- le32_to_cpu(ns->v1.avg_t[i]) / 1000,
- le32_to_cpu(ns->v1.avg_t[i]) % 1000);
- seq_printf(m, "max_t:%d.%03d]\n",
- le32_to_cpu(ns->v1.max_t[i]) / 1000,
- le32_to_cpu(ns->v1.max_t[i]) % 1000);
- }
- } else {
- for (i = 0; i < 2; i++) {
- seq_printf(m, " %-15s : ", "[NULL-STA]");
- seq_printf(m, "null-%d", i);
- seq_printf(m, "[Tx:%d/",
- le32_to_cpu(ns->v2.result[i][4]));
- seq_printf(m, "[ok:%d/",
- le32_to_cpu(ns->v2.result[i][1]));
- seq_printf(m, "fail:%d/",
- le32_to_cpu(ns->v2.result[i][0]));
- seq_printf(m, "on_time:%d/",
- le32_to_cpu(ns->v2.result[i][2]));
- seq_printf(m, "retry:%d/",
- le32_to_cpu(ns->v2.result[i][3]));
- seq_printf(m, "avg_t:%d.%03d/",
- le32_to_cpu(ns->v2.avg_t[i]) / 1000,
- le32_to_cpu(ns->v2.avg_t[i]) % 1000);
- seq_printf(m, "max_t:%d.%03d]\n",
- le32_to_cpu(ns->v2.max_t[i]) / 1000,
- le32_to_cpu(ns->v2.max_t[i]) % 1000);
- }
- }
-}
+ pcysta = &pfwinfo->rpt_fbtc_cysta.finfo.v5;
+ seq_printf(m,
+ " %-15s : cycle:%d, bcn[all:%d/all_ok:%d/bt:%d/bt_ok:%d]",
+ "[cycle_cnt]",
+ le16_to_cpu(pcysta->cycles),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_ALL]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_ALL_OK]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_BT_SLOT]),
+ le16_to_cpu(pcysta->bcn_cnt[CXBCN_BT_OK]));
-static void _show_fbtc_step_v2(struct rtw89_dev *rtwdev, struct seq_file *m)
+ for (i = 0; i < CXST_MAX; i++) {
+ if (!le16_to_cpu(pcysta->slot_cnt[i]))
+ continue;
+
+ seq_printf(m, ", %s:%d", id_to_slot(i),
+ le16_to_cpu(pcysta->slot_cnt[i]));
+ }
+
+ if (dm->tdma_now.rxflctrl)
+ seq_printf(m, ", leak_rx:%d",
+ le32_to_cpu(pcysta->leak_slot.cnt_rximr));
+
+ if (pcysta->collision_cnt)
+ seq_printf(m, ", collision:%d", pcysta->collision_cnt);
+
+ if (le16_to_cpu(pcysta->skip_cnt))
+ seq_printf(m, ", skip:%d",
+ le16_to_cpu(pcysta->skip_cnt));
+
+ seq_puts(m, "\n");
+
+ seq_printf(m, " %-15s : avg_t[wl:%d/bt:%d/lk:%d.%03d]",
+ "[cycle_time]",
+ le16_to_cpu(pcysta->cycle_time.tavg[CXT_WL]),
+ le16_to_cpu(pcysta->cycle_time.tavg[CXT_BT]),
+ le16_to_cpu(pcysta->leak_slot.tavg) / 1000,
+ le16_to_cpu(pcysta->leak_slot.tavg) % 1000);
+ seq_printf(m,
+ ", max_t[wl:%d/bt:%d/lk:%d.%03d]\n",
+ le16_to_cpu(pcysta->cycle_time.tmax[CXT_WL]),
+ le16_to_cpu(pcysta->cycle_time.tmax[CXT_BT]),
+ le16_to_cpu(pcysta->leak_slot.tmax) / 1000,
+ le16_to_cpu(pcysta->leak_slot.tmax) % 1000);
+
+ cycle = le16_to_cpu(pcysta->cycles);
+ if (cycle <= 1)
+ return;
+
+ /* 1 cycle record 1 wl-slot and 1 bt-slot */
+ slot_pair = BTC_CYCLE_SLOT_MAX / 2;
+
+ if (cycle <= slot_pair)
+ c_begin = 1;
+ else
+ c_begin = cycle - slot_pair + 1;
+
+ c_end = cycle;
+
+ if (a2dp->exist)
+ divide_cnt = 3;
+ else
+ divide_cnt = BTC_CYCLE_SLOT_MAX / 4;
+
+ if (c_begin > c_end)
+ return;
+
+ for (cycle = c_begin; cycle <= c_end; cycle++) {
+ cnt++;
+ store_index = ((cycle - 1) % slot_pair) * 2;
+
+ if (cnt % divide_cnt == 1)
+ seq_printf(m, " %-15s : ", "[cycle_step]");
+
+ seq_printf(m, "->b%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
+ }
+ seq_printf(m, "->w%02d",
+ le16_to_cpu(pcysta->slot_step_time[store_index + 1]));
+ if (a2dp->exist) {
+ a2dp_trx = &pcysta->a2dp_trx[store_index + 1];
+ seq_printf(m, "(%d/%d/%dM/%d/%d/%d)",
+ a2dp_trx->empty_cnt,
+ a2dp_trx->retry_cnt,
+ a2dp_trx->tx_rate ? 3 : 2,
+ a2dp_trx->tx_cnt,
+ a2dp_trx->ack_cnt,
+ a2dp_trx->nack_cnt);
+ }
+ if (cnt % divide_cnt == 0 || cnt == c_end)
+ seq_puts(m, "\n");
+ }
+
+ if (a2dp->exist) {
+ seq_printf(m, " %-15s : a2dp_ept:%d, a2dp_late:%d",
+ "[a2dp_t_sta]",
+ le16_to_cpu(pcysta->a2dp_ept.cnt),
+ le16_to_cpu(pcysta->a2dp_ept.cnt_timeout));
+
+ seq_printf(m, ", avg_t:%d, max_t:%d",
+ le16_to_cpu(pcysta->a2dp_ept.tavg),
+ le16_to_cpu(pcysta->a2dp_ept.tmax));
+
+ seq_puts(m, "\n");
+ }
+}
+
+static void _show_fbtc_nullsta(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_rpt_cmn_info *pcinfo;
+ union rtw89_btc_fbtc_cynullsta_info *ns;
+ u8 i = 0;
+
+ if (!btc->dm.tdma_now.rxflctrl)
+ return;
+
+ pcinfo = &pfwinfo->rpt_fbtc_nullsta.cinfo;
+ if (!pcinfo->valid)
+ return;
+
+ ns = &pfwinfo->rpt_fbtc_nullsta.finfo;
+ if (ver->fcxnullsta == 1) {
+ for (i = 0; i < 2; i++) {
+ seq_printf(m, " %-15s : ", "[NULL-STA]");
+ seq_printf(m, "null-%d", i);
+ seq_printf(m, "[ok:%d/",
+ le32_to_cpu(ns->v1.result[i][1]));
+ seq_printf(m, "fail:%d/",
+ le32_to_cpu(ns->v1.result[i][0]));
+ seq_printf(m, "on_time:%d/",
+ le32_to_cpu(ns->v1.result[i][2]));
+ seq_printf(m, "retry:%d/",
+ le32_to_cpu(ns->v1.result[i][3]));
+ seq_printf(m, "avg_t:%d.%03d/",
+ le32_to_cpu(ns->v1.avg_t[i]) / 1000,
+ le32_to_cpu(ns->v1.avg_t[i]) % 1000);
+ seq_printf(m, "max_t:%d.%03d]\n",
+ le32_to_cpu(ns->v1.max_t[i]) / 1000,
+ le32_to_cpu(ns->v1.max_t[i]) % 1000);
+ }
+ } else {
+ for (i = 0; i < 2; i++) {
+ seq_printf(m, " %-15s : ", "[NULL-STA]");
+ seq_printf(m, "null-%d", i);
+ seq_printf(m, "[Tx:%d/",
+ le32_to_cpu(ns->v2.result[i][4]));
+ seq_printf(m, "[ok:%d/",
+ le32_to_cpu(ns->v2.result[i][1]));
+ seq_printf(m, "fail:%d/",
+ le32_to_cpu(ns->v2.result[i][0]));
+ seq_printf(m, "on_time:%d/",
+ le32_to_cpu(ns->v2.result[i][2]));
+ seq_printf(m, "retry:%d/",
+ le32_to_cpu(ns->v2.result[i][3]));
+ seq_printf(m, "avg_t:%d.%03d/",
+ le32_to_cpu(ns->v2.avg_t[i]) / 1000,
+ le32_to_cpu(ns->v2.avg_t[i]) % 1000);
+ seq_printf(m, "max_t:%d.%03d]\n",
+ le32_to_cpu(ns->v2.max_t[i]) / 1000,
+ le32_to_cpu(ns->v2.max_t[i]) % 1000);
+ }
+ }
+}
+
+static void _show_fbtc_step_v2(struct rtw89_dev *rtwdev, struct seq_file *m)
{
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
_show_fbtc_cysta_v3(rtwdev, m);
else if (ver->fcxcysta == 4)
_show_fbtc_cysta_v4(rtwdev, m);
+ else if (ver->fcxcysta == 5)
+ _show_fbtc_cysta_v5(rtwdev, m);
_show_fbtc_nullsta(rtwdev, m);
}
}
-static void _show_mreg(struct rtw89_dev *rtwdev, struct seq_file *m)
+static void _show_mreg_v1(struct rtw89_dev *rtwdev, struct seq_file *m)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_btc *btc = &rtwdev->btc;
struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
- struct rtw89_btc_fbtc_mreg_val *pmreg = NULL;
+ struct rtw89_btc_fbtc_mreg_val_v1 *pmreg = NULL;
struct rtw89_btc_fbtc_gpio_dbg *gdbg = NULL;
struct rtw89_btc_cx *cx = &btc->cx;
struct rtw89_btc_wl_info *wl = &btc->cx.wl;
return;
}
- pmreg = &pfwinfo->rpt_fbtc_mregval.finfo;
+ pmreg = &pfwinfo->rpt_fbtc_mregval.finfo.v1;
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): rpt_fbtc_mregval reg_num = %d\n",
__func__, pmreg->reg_num);
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): stop due rpt_fbtc_gpio_dbg.cinfo\n",
__func__);
+ seq_puts(m, "\n");
+ return;
+ }
+
+ gdbg = &pfwinfo->rpt_fbtc_gpio_dbg.finfo;
+ if (!gdbg->en_map)
+ return;
+
+ seq_printf(m, " %-15s : enable_map:0x%08x",
+ "[gpio_dbg]", gdbg->en_map);
+
+ for (i = 0; i < BTC_DBG_MAX1; i++) {
+ if (!(gdbg->en_map & BIT(i)))
+ continue;
+ seq_printf(m, ", %d->GPIO%d", (u32)i, gdbg->gpio_map[i]);
+ }
+ seq_puts(m, "\n");
+}
+
+static void _show_mreg_v2(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_rpt_cmn_info *pcinfo = NULL;
+ struct rtw89_btc_fbtc_mreg_val_v2 *pmreg = NULL;
+ struct rtw89_btc_fbtc_gpio_dbg *gdbg = NULL;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_bt_info *bt = &btc->cx.bt;
+ struct rtw89_mac_ax_coex_gnt gnt_cfg = {};
+ struct rtw89_mac_ax_gnt gnt;
+ u8 i = 0, type = 0, cnt = 0;
+ u32 val, offset;
+
+ if (!(btc->dm.coex_info_map & BTC_COEX_INFO_MREG))
+ return;
+
+ seq_puts(m, "========== [HW Status] ==========\n");
+
+ seq_printf(m,
+ " %-15s : WL->BT:0x%08x(cnt:%d), BT->WL:0x%08x(total:%d, bt_update:%d)\n",
+ "[scoreboard]", wl->scbd, cx->cnt_wl[BTC_WCNT_SCBDUPDATE],
+ bt->scbd, cx->cnt_bt[BTC_BCNT_SCBDREAD],
+ cx->cnt_bt[BTC_BCNT_SCBDUPDATE]);
+
+ /* To avoid I/O if WL LPS or power-off */
+ if (!wl->status.map.lps && !wl->status.map.rf_off) {
+ btc->dm.pta_owner = rtw89_mac_get_ctrl_path(rtwdev);
+
+ _get_gnt(rtwdev, &gnt_cfg);
+ gnt = gnt_cfg.band[0];
+ seq_printf(m,
+ " %-15s : pta_owner:%s, phy-0[gnt_wl:%s-%d/gnt_bt:%s-%d], ",
+ "[gnt_status]",
+ chip->chip_id == RTL8852C ? "HW" :
+ btc->dm.pta_owner == BTC_CTRL_BY_WL ? "WL" : "BT",
+ gnt.gnt_wl_sw_en ? "SW" : "HW", gnt.gnt_wl,
+ gnt.gnt_bt_sw_en ? "SW" : "HW", gnt.gnt_bt);
+
+ gnt = gnt_cfg.band[1];
+ seq_printf(m, "phy-1[gnt_wl:%s-%d/gnt_bt:%s-%d]\n",
+ gnt.gnt_wl_sw_en ? "SW" : "HW",
+ gnt.gnt_wl,
+ gnt.gnt_bt_sw_en ? "SW" : "HW",
+ gnt.gnt_bt);
+ }
+ pcinfo = &pfwinfo->rpt_fbtc_mregval.cinfo;
+ if (!pcinfo->valid) {
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): stop due rpt_fbtc_mregval.cinfo\n",
+ __func__);
+ return;
+ }
+
+ pmreg = &pfwinfo->rpt_fbtc_mregval.finfo.v2;
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): rpt_fbtc_mregval reg_num = %d\n",
+ __func__, pmreg->reg_num);
+
+ for (i = 0; i < pmreg->reg_num; i++) {
+ type = (u8)le16_to_cpu(chip->mon_reg[i].type);
+ offset = le32_to_cpu(chip->mon_reg[i].offset);
+ val = le32_to_cpu(pmreg->mreg_val[i]);
+
+ if (cnt % 6 == 0)
+ seq_printf(m, " %-15s : %d_0x%04x=0x%08x",
+ "[reg]", (u32)type, offset, val);
+ else
+ seq_printf(m, ", %d_0x%04x=0x%08x", (u32)type,
+ offset, val);
+ if (cnt % 6 == 5)
+ seq_puts(m, "\n");
+ cnt++;
+
+ if (i >= pmreg->reg_num)
+ seq_puts(m, "\n");
+ }
+
+ pcinfo = &pfwinfo->rpt_fbtc_gpio_dbg.cinfo;
+ if (!pcinfo->valid) {
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): stop due rpt_fbtc_gpio_dbg.cinfo\n",
+ __func__);
+ seq_puts(m, "\n");
return;
}
le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_on),
le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_off));
} else {
+ seq_printf(m,
+ " %-15s : h2c_cnt=%d(fail:%d), c2h_cnt=%d",
+ "[summary]", pfwinfo->cnt_h2c,
+ pfwinfo->cnt_h2c_fail, pfwinfo->cnt_c2h);
+ }
+
+ if (!pcinfo->valid || pfwinfo->len_mismch || pfwinfo->fver_mismch ||
+ pfwinfo->err[BTFRE_EXCEPTION]) {
seq_puts(m, "\n");
seq_printf(m,
+ " %-15s : WL FW rpt error!![rpt_ctrl_valid:%d/len:"
+ "0x%x/ver:0x%x/ex:%d/lps=%d/rf_off=%d]",
+ "[ERROR]", pcinfo->valid, pfwinfo->len_mismch,
+ pfwinfo->fver_mismch, pfwinfo->err[BTFRE_EXCEPTION],
+ wl->status.map.lps, wl->status.map.rf_off);
+ }
+
+ for (i = 0; i < BTC_NCNT_NUM; i++)
+ cnt_sum += dm->cnt_notify[i];
+
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : total=%d, show_coex_info=%d, power_on=%d, init_coex=%d, ",
+ "[notify_cnt]",
+ cnt_sum, cnt[BTC_NCNT_SHOW_COEX_INFO],
+ cnt[BTC_NCNT_POWER_ON], cnt[BTC_NCNT_INIT_COEX]);
+
+ seq_printf(m,
+ "power_off=%d, radio_state=%d, role_info=%d, wl_rfk=%d, wl_sta=%d",
+ cnt[BTC_NCNT_POWER_OFF], cnt[BTC_NCNT_RADIO_STATE],
+ cnt[BTC_NCNT_ROLE_INFO], cnt[BTC_NCNT_WL_RFK],
+ cnt[BTC_NCNT_WL_STA]);
+
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : scan_start=%d, scan_finish=%d, switch_band=%d, special_pkt=%d, ",
+ "[notify_cnt]",
+ cnt[BTC_NCNT_SCAN_START], cnt[BTC_NCNT_SCAN_FINISH],
+ cnt[BTC_NCNT_SWITCH_BAND], cnt[BTC_NCNT_SPECIAL_PACKET]);
+
+ seq_printf(m,
+ "timer=%d, control=%d, customerize=%d",
+ cnt[BTC_NCNT_TIMER], cnt[BTC_NCNT_CONTROL],
+ cnt[BTC_NCNT_CUSTOMERIZE]);
+}
+
+static void _show_summary_v105(struct rtw89_dev *rtwdev, struct seq_file *m)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_btf_fwinfo *pfwinfo = &btc->fwinfo;
+ struct rtw89_btc_fbtc_rpt_ctrl_v105 *prptctrl;
+ struct rtw89_btc_rpt_cmn_info *pcinfo;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+ u32 cnt_sum = 0, *cnt = btc->dm.cnt_notify;
+ u8 i;
+
+ if (!(dm->coex_info_map & BTC_COEX_INFO_SUMMARY))
+ return;
+
+ seq_puts(m, "========== [Statistics] ==========\n");
+
+ pcinfo = &pfwinfo->rpt_ctrl.cinfo;
+ if (pcinfo->valid && !wl->status.map.lps && !wl->status.map.rf_off) {
+ prptctrl = &pfwinfo->rpt_ctrl.finfo.v105;
+
+ seq_printf(m,
+ " %-15s : h2c_cnt=%d(fail:%d, fw_recv:%d), c2h_cnt=%d(fw_send:%d, len:%d), ",
+ "[summary]", pfwinfo->cnt_h2c, pfwinfo->cnt_h2c_fail,
+ le16_to_cpu(prptctrl->rpt_info.cnt_h2c),
+ pfwinfo->cnt_c2h,
+ le16_to_cpu(prptctrl->rpt_info.cnt_c2h),
+ le16_to_cpu(prptctrl->rpt_info.len_c2h));
+
+ seq_printf(m,
+ "rpt_cnt=%d(fw_send:%d), rpt_map=0x%x",
+ pfwinfo->event[BTF_EVNT_RPT],
+ le16_to_cpu(prptctrl->rpt_info.cnt),
+ le32_to_cpu(prptctrl->rpt_info.en));
+
+ if (dm->error.map.wl_fw_hang)
+ seq_puts(m, " (WL FW Hang!!)");
+ seq_puts(m, "\n");
+ seq_printf(m,
+ " %-15s : send_ok:%d, send_fail:%d, recv:%d, ",
+ "[mailbox]",
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_send_ok),
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_send_fail),
+ le32_to_cpu(prptctrl->bt_mbx_info.cnt_recv));
+
+ seq_printf(m,
+ "A2DP_empty:%d(stop:%d, tx:%d, ack:%d, nack:%d)\n",
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_empty),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_flowctrl),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_tx),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_ack),
+ le32_to_cpu(prptctrl->bt_mbx_info.a2dp.cnt_nack));
+
+ seq_printf(m,
+ " %-15s : wl_rfk[req:%d/go:%d/reject:%d/tout:%d]",
+ "[RFK/LPS]", cx->cnt_wl[BTC_WCNT_RFK_REQ],
+ cx->cnt_wl[BTC_WCNT_RFK_GO],
+ cx->cnt_wl[BTC_WCNT_RFK_REJECT],
+ cx->cnt_wl[BTC_WCNT_RFK_TIMEOUT]);
+
+ seq_printf(m,
+ ", bt_rfk[req:%d]",
+ le16_to_cpu(prptctrl->bt_cnt[BTC_BCNT_RFK_REQ]));
+
+ seq_printf(m,
+ ", AOAC[RF_on:%d/RF_off:%d]",
+ le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_on),
+ le16_to_cpu(prptctrl->rpt_info.cnt_aoac_rf_off));
+ } else {
+ seq_printf(m,
" %-15s : h2c_cnt=%d(fail:%d), c2h_cnt=%d",
"[summary]", pfwinfo->cnt_h2c,
pfwinfo->cnt_h2c_fail, pfwinfo->cnt_c2h);
_show_bt_info(rtwdev, m);
_show_dm_info(rtwdev, m);
_show_fw_dm_msg(rtwdev, m);
- _show_mreg(rtwdev, m);
+
+ if (ver->fcxmreg == 1)
+ _show_mreg_v1(rtwdev, m);
+ else if (ver->fcxmreg == 2)
+ _show_mreg_v2(rtwdev, m);
+
if (ver->fcxbtcrpt == 1)
_show_summary_v1(rtwdev, m);
else if (ver->fcxbtcrpt == 4)
_show_summary_v4(rtwdev, m);
else if (ver->fcxbtcrpt == 5)
_show_summary_v5(rtwdev, m);
+ else if (ver->fcxbtcrpt == 105)
+ _show_summary_v105(rtwdev, m);
}
void rtw89_coex_recognize_ver(struct rtw89_dev *rtwdev)
BTC_RSSI_ST_MAX
};
+enum btc_fddt_en {
+ BTC_FDDT_DISABLE,
+ BTC_FDDT_ENABLE,
+};
+
#define BTC_RSSI_HIGH(_rssi_) \
({typeof(_rssi_) __rssi = (_rssi_); \
((__rssi == BTC_RSSI_ST_HIGH || \
enum btc_rfctrl {
BTC_RFCTRL_WL_OFF,
BTC_RFCTRL_WL_ON,
+ BTC_RFCTRL_LPS_WL_ON,
BTC_RFCTRL_FW_CTRL,
BTC_RFCTRL_MAX
};
desc_info->bk = true;
}
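+/* A minimal descriptive note on the helper below: choose the lowest data-retry
+ * rate by honoring a fixed rate pattern if one is set, otherwise start from
+ * CCK1 (2 GHz) or OFDM6 (5/6 GHz and P2P) and add the station's lowest
+ * supported rate offset when its rate bitmap is known.
+ */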
+static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+{
+ struct ieee80211_vif *vif = tx_req->vif;
+ struct ieee80211_sta *sta = tx_req->sta;
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
+ enum rtw89_sub_entity_idx idx = rtwvif->sub_entity_idx;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, idx);
+ u16 lowest_rate;
+
+ if (rate_pattern->enable)
+ return rate_pattern->rate;
+
+ if (vif->p2p)
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+ else if (chan->band_type == RTW89_BAND_2G)
+ lowest_rate = RTW89_HW_RATE_CCK1;
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+ if (!sta->deflink.supp_rates[chan->band_type])
+ return lowest_rate;
+
+ return __ffs(sta->deflink.supp_rates[chan->band_type]) + lowest_rate;
+}
+
static void
rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
struct rtw89_core_tx_request *tx_req)
struct ieee80211_sta *sta = tx_req->sta;
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
- struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
struct sk_buff *skb = tx_req->skb;
u8 tid, tid_indicate;
if (IEEE80211_SKB_CB(skb)->control.hw_key)
rtw89_core_tx_update_sec_key(rtwdev, tx_req);
- if (vif->p2p)
- desc_info->data_retry_lowest_rate = RTW89_HW_RATE_OFDM6;
- else if (rate_pattern->enable)
- desc_info->data_retry_lowest_rate = rate_pattern->rate;
- else if (chan->band_type == RTW89_BAND_2G)
- desc_info->data_retry_lowest_rate = RTW89_HW_RATE_CCK1;
- else
- desc_info->data_retry_lowest_rate = RTW89_HW_RATE_OFDM6;
+ desc_info->data_retry_lowest_rate = rtw89_core_get_data_rate(rtwdev, tx_req);
}
static enum btc_pkt_type
phy_ppdu->chan_idx = RTW89_GET_PHY_STS_IE01_CH_IDX(addr);
if (phy_ppdu->rate < RTW89_HW_RATE_OFDM6)
return;
+
+ if (!phy_ppdu->to_self)
+ return;
+
/* sign conversion for S(12,2) */
if (rtwdev->chip->cfo_src_fd)
cfo = sign_extend32(RTW89_GET_PHY_STS_IE01_FD_CFO(addr), 11);
if (phy_ppdu->ie < RTW89_CCK_PKT)
return -EINVAL;
- if (!phy_ppdu->to_self)
- return 0;
-
pos = (u8 *)phy_ppdu->buf + PHY_STS_HDR_LEN;
end = (u8 *)phy_ppdu->buf + phy_ppdu->len;
while (pos < end) {
}
}
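+/* When a beacon or probe response is heard from a 6 GHz BSS we are probing
+ * for, drop the matching probe-request packet offloads (matched by BSSID or
+ * SSID) so the scan stops transmitting them.
+ */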
+static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb)
+{
+ struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
+ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+ struct list_head *pkt_list = rtwdev->scan_info.pkt_list;
+ struct rtw89_pktofld_info *info;
+ const u8 *ies = mgmt->u.beacon.variable, *ssid_ie;
+
+ if (rx_status->band != NL80211_BAND_6GHZ)
+ return;
+
+ ssid_ie = cfg80211_find_ie(WLAN_EID_SSID, ies, skb->len);
+
+ list_for_each_entry(info, &pkt_list[NL80211_BAND_6GHZ], list) {
+ if (ether_addr_equal(info->bssid, mgmt->bssid)) {
+ rtw89_fw_h2c_del_pkt_offload(rtwdev, info->id);
+ continue;
+ }
+
+ if (!ssid_ie || ssid_ie[1] != info->ssid_len || info->ssid_len == 0)
+ continue;
+
+ if (memcmp(&ssid_ie[2], info->ssid, info->ssid_len) == 0)
+ rtw89_fw_h2c_del_pkt_offload(rtwdev, info->id);
+ }
+}
+
static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
struct ieee80211_vif *vif)
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
const u8 *bssid = iter_data->bssid;
+ if (rtwdev->scanning &&
+ (ieee80211_is_beacon(hdr->frame_control) ||
+ ieee80211_is_probe_resp(hdr->frame_control)))
+ rtw89_core_cancel_6ghz_probe_tx(rtwdev, skb);
+
if (!vif->bss_conf.bssid)
return;
rtw89_btc_ntfy_scan_start(rtwdev, RTW89_PHY_0, chan->band_type);
rtw89_chip_rfk_scan(rtwdev, true);
rtw89_hci_recalc_int_mit(rtwdev);
+ rtw89_phy_config_edcca(rtwdev, true);
rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, mac_addr);
}
rtw89_chip_rfk_scan(rtwdev, false);
rtw89_btc_ntfy_scan_finish(rtwdev, RTW89_PHY_0);
+ rtw89_phy_config_edcca(rtwdev, false);
rtwdev->scanning = false;
rtwdev->dig.bypass_dig = true;
hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS |
WIPHY_FLAG_TDLS_EXTERNAL_SETUP |
- WIPHY_FLAG_AP_UAPSD;
+ WIPHY_FLAG_AP_UAPSD | WIPHY_FLAG_SPLIT_SCAN_6GHZ;
hw->wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR;
hw->wiphy->max_scan_ssids = RTW89_SCANOFLD_MAX_SSID;
ret = ieee80211_register_hw(hw);
if (ret) {
rtw89_err(rtwdev, "failed to register hw\n");
- goto err;
+ goto err_free_supported_band;
}
ret = rtw89_regd_init(rtwdev, rtw89_regd_notifier);
if (ret) {
rtw89_err(rtwdev, "failed to init regd\n");
- goto err;
+ goto err_unregister_hw;
}
return 0;
-err:
+err_unregister_hw:
+ ieee80211_unregister_hw(hw);
+err_free_supported_band:
+ rtw89_core_clr_supported_band(rtwdev);
+
return ret;
}
BTC_DCNT_RUN = 0x0,
BTC_DCNT_CX_RUNINFO,
BTC_DCNT_RPT,
- BTC_DCNT_RPT_FREEZE,
+ BTC_DCNT_RPT_HANG,
BTC_DCNT_CYCLE,
- BTC_DCNT_CYCLE_FREEZE,
+ BTC_DCNT_CYCLE_HANG,
BTC_DCNT_W1,
- BTC_DCNT_W1_FREEZE,
+ BTC_DCNT_W1_HANG,
BTC_DCNT_B1,
- BTC_DCNT_B1_FREEZE,
+ BTC_DCNT_B1_HANG,
BTC_DCNT_TDMA_NONSYNC,
BTC_DCNT_SLOT_NONSYNC,
- BTC_DCNT_BTCNT_FREEZE,
+ BTC_DCNT_BTCNT_HANG,
BTC_DCNT_WL_SLOT_DRIFT,
- BTC_DCNT_BT_SLOT_DRIFT,
BTC_DCNT_WL_STA_LAST,
- BTC_DCNT_NUM,
+ BTC_DCNT_BT_SLOT_DRIFT,
+ BTC_DCNT_BT_SLOT_FLOOD,
+ BTC_DCNT_FDDT_TRIG,
+ BTC_DCNT_E2G,
+ BTC_DCNT_E2G_HANG,
+ BTC_DCNT_NUM
};
enum rtw89_btc_wl_state_cnt {
u32 noa_duration; /* ms */
};
+struct rtw89_btc_wl_active_role_v2 {
+ u8 connected: 1;
+ u8 pid: 3;
+ u8 phy: 1;
+ u8 noa: 1;
+ u8 band: 2;
+
+ u8 client_ps: 1;
+ u8 bw: 7;
+
+ u8 role;
+ u8 ch;
+
+ u32 noa_duration; /* ms */
+};
+
struct rtw89_btc_wl_role_info_bpos {
u16 none: 1;
u16 station: 1;
u32 rsvd: 27;
};
+struct rtw89_btc_wl_role_info_v2 { /* struct size must be n*4 bytes */
+ u8 connect_cnt;
+ u8 link_mode;
+ union rtw89_btc_wl_role_info_map role_map;
+ struct rtw89_btc_wl_active_role_v2 active_role_v2[RTW89_PORT_NUM];
+ u32 mrole_type; /* btc_wl_mrole_type */
+ u32 mrole_noa_duration; /* ms */
+
+ u32 dbcc_en: 1;
+ u32 dbcc_chg: 1;
+	u32 dbcc_2g_phy: 2; /* which phy operates in 2G, HW_PHY_0 or HW_PHY_1 */
+ u32 link_mode_chg: 1;
+ u32 rsvd: 27;
+};
+
struct rtw89_btc_wl_ver_info {
u32 fw_coex; /* match with which coex_ver */
u32 fw;
u32 pta_owner: 1;
u32 wl_rfk_timeout: 1;
u32 bt_rfk_timeout: 1;
-
u32 wl_fw_hang: 1;
- u32 offload_mismatch: 1;
u32 cycle_hang: 1;
u32 w1_hang: 1;
-
u32 b1_hang: 1;
u32 tdma_no_sync: 1;
+ u32 slot_no_sync: 1;
u32 wl_slot_drift: 1;
+ u32 bt_slot_drift: 1;
+ u32 role_num_mismatch: 1;
+ u32 null1_tx_late: 1;
+ u32 bt_afh_conflict: 1;
+ u32 bt_leafh_conflict: 1;
+ u32 bt_slot_flood: 1;
+ u32 wl_e2g_hang: 1;
+ u32 wl_ver_mismatch: 1;
+ u32 bt_ver_mismatch: 1;
};
union rtw89_btc_dm_error_map {
u32 rx_gain_perpkt;
};
+struct rtw89_btc_wl_nhm {
+ u8 instant_wl_nhm_dbm;
+ u8 instant_wl_nhm_per_mhz;
+ u16 valid_record_times;
+ s8 record_pwr[16];
+ u8 record_ratio[16];
+ s8 pwr; /* dbm_per_MHz */
+ u8 ratio;
+ u8 current_status;
+ u8 refresh;
+ bool start_flag;
+ u8 last_ccx_rpt_stamp;
+ s8 pwr_max;
+ s8 pwr_min;
+};
+
struct rtw89_btc_wl_info {
struct rtw89_btc_wl_link_info link_info[RTW89_PORT_NUM];
struct rtw89_btc_wl_rfk_info rfk_info;
struct rtw89_btc_wl_afh_info afh_info;
struct rtw89_btc_wl_role_info role_info;
struct rtw89_btc_wl_role_info_v1 role_info_v1;
+ struct rtw89_btc_wl_role_info_v2 role_info_v2;
struct rtw89_btc_wl_scan_info scan_info;
struct rtw89_btc_wl_dbcc_info dbcc_info;
struct rtw89_btc_rf_para rf_para;
+ struct rtw89_btc_wl_nhm nhm;
union rtw89_btc_wl_state_map status;
u8 port_id[RTW89_WIFI_ROLE_MLME_MAX];
u8 rssi_level;
+ u8 cn_report;
bool scbd_change;
u32 scbd;
u16 tx_retry;
};
-struct rtw89_btc_bt_scan_info {
- u16 win;
- u16 intvl;
- u32 enable: 1;
- u32 interlace: 1;
- u32 rsvd: 30;
-};
-
enum rtw89_btc_bt_scan_type {
BTC_SCAN_INQ = 0,
BTC_SCAN_PAGE,
BTC_SCAN_MAX1,
};
+enum rtw89_btc_ble_scan_type {
+ CXSCAN_BG = 0,
+ CXSCAN_INIT,
+ CXSCAN_LE,
+ CXSCAN_MAX
+};
+
+#define RTW89_BTC_BTC_SCAN_V1_FLAG_ENABLE BIT(0)
+#define RTW89_BTC_BTC_SCAN_V1_FLAG_INTERLACE BIT(1)
+
+struct rtw89_btc_bt_scan_info_v1 {
+ __le16 win;
+ __le16 intvl;
+ __le32 flags;
+} __packed;
+
+struct rtw89_btc_bt_scan_info_v2 {
+ __le16 win;
+ __le16 intvl;
+} __packed;
+
+struct rtw89_btc_fbtc_btscan_v1 {
+ u8 fver; /* btc_ver::fcxbtscan */
+ u8 rsvd;
+ __le16 rsvd2;
+ struct rtw89_btc_bt_scan_info_v1 scan[BTC_SCAN_MAX1];
+} __packed;
+
+struct rtw89_btc_fbtc_btscan_v2 {
+ u8 fver; /* btc_ver::fcxbtscan */
+ u8 type;
+ __le16 rsvd2;
+ struct rtw89_btc_bt_scan_info_v2 para[CXSCAN_MAX];
+} __packed;
+
+union rtw89_btc_fbtc_btscan {
+ struct rtw89_btc_fbtc_btscan_v1 v1;
+ struct rtw89_btc_fbtc_btscan_v2 v2;
+};
+
struct rtw89_btc_bt_info {
struct rtw89_btc_bt_link_info link_info;
- struct rtw89_btc_bt_scan_info scan_info[BTC_SCAN_MAX1];
+ struct rtw89_btc_bt_scan_info_v1 scan_info_v1[BTC_SCAN_MAX1];
+ struct rtw89_btc_bt_scan_info_v2 scan_info_v2[CXSCAN_MAX];
struct rtw89_btc_bt_ver_info ver_info;
struct rtw89_btc_bool_sta_chg enable;
struct rtw89_btc_bool_sta_chg inq_pag;
u32 run_patch_code: 1;
u32 hi_lna_rx: 1;
u32 scan_rx_low_pri: 1;
- u32 rsvd: 21;
+ u32 scan_info_update: 1;
+ u32 rsvd: 20;
};
struct rtw89_btc_cx {
};
#define CXMREG_MAX 30
+#define CXMREG_MAX_V2 20
#define FCXMAX_STEP 255 /*STEP trace record cnt, Max:65535, default:255*/
#define BTC_CYCLE_SLOT_MAX 48 /* must be even number, non-zero */
BTC_BCNT_STA_MAX
};
+enum rtw89_btc_bt_sta_counter_v105 {
+ BTC_BCNT_RFK_REQ_V105 = 0,
+ BTC_BCNT_HI_TX_V105 = 1,
+ BTC_BCNT_HI_RX_V105 = 2,
+ BTC_BCNT_LO_TX_V105 = 3,
+ BTC_BCNT_LO_RX_V105 = 4,
+ BTC_BCNT_POLLUTED_V105 = 5,
+ BTC_BCNT_STA_MAX_V105
+};
+
struct rtw89_btc_fbtc_rpt_ctrl_v1 {
u16 fver; /* btc_ver::fcxbtcrpt */
u16 rpt_cnt; /* tmr counters */
struct rtw89_btc_fbtc_rpt_ctrl_bt_mailbox bt_mbx_info;
} __packed;
+struct rtw89_btc_fbtc_rpt_ctrl_v105 {
+ u8 fver;
+ u8 rsvd;
+ __le16 rsvd1;
+
+ u8 gnt_val[RTW89_PHY_MAX][4];
+ __le16 bt_cnt[BTC_BCNT_STA_MAX_V105];
+
+ struct rtw89_btc_fbtc_rpt_ctrl_info_v5 rpt_info;
+ struct rtw89_btc_fbtc_rpt_ctrl_bt_mailbox bt_mbx_info;
+} __packed;
+
union rtw89_btc_fbtc_rpt_ctrl_ver_info {
struct rtw89_btc_fbtc_rpt_ctrl_v1 v1;
struct rtw89_btc_fbtc_rpt_ctrl_v4 v4;
struct rtw89_btc_fbtc_rpt_ctrl_v5 v5;
+ struct rtw89_btc_fbtc_rpt_ctrl_v105 v105;
};
enum rtw89_fbtc_ext_ctrl_type {
u8 gpio_map[BTC_DBG_MAX1]; /*the debug signals to GPIO-Position */
} __packed;
-struct rtw89_btc_fbtc_mreg_val {
+struct rtw89_btc_fbtc_mreg_val_v1 {
u8 fver; /* btc_ver::fcxmreg */
u8 reg_num;
__le16 rsvd;
__le32 mreg_val[CXMREG_MAX];
} __packed;
+struct rtw89_btc_fbtc_mreg_val_v2 {
+ u8 fver; /* btc_ver::fcxmreg */
+ u8 reg_num;
+ __le16 rsvd;
+ __le32 mreg_val[CXMREG_MAX_V2];
+} __packed;
+
+union rtw89_btc_fbtc_mreg_val {
+ struct rtw89_btc_fbtc_mreg_val_v1 v1;
+ struct rtw89_btc_fbtc_mreg_val_v2 v2;
+};
+
#define RTW89_DEF_FBTC_MREG(__type, __bytes, __offset) \
{ .type = cpu_to_le16(__type), .bytes = cpu_to_le16(__bytes), \
.offset = cpu_to_le32(__offset), }
__le16 tmaxdiff[CXT_MAX]; /* max wl-wl bt-bt cycle diff time */
} __packed;
+struct rtw89_btc_fbtc_cycle_time_info_v5 {
+ __le16 tavg[CXT_MAX]; /* avg wl/bt cycle time */
+ __le16 tmax[CXT_MAX]; /* max wl/bt cycle time */
+} __packed;
+
struct rtw89_btc_fbtc_a2dp_trx_stat {
u8 empty_cnt;
u8 retry_cnt;
#define RTW89_BTC_FDDT_CELL_TRAIN_STATE GENMASK(3, 0)
#define RTW89_BTC_FDDT_CELL_TRAIN_PHASE GENMASK(7, 4)
+struct rtw89_btc_fbtc_cycle_fddt_info_v5 {
+ __le16 train_cycle;
+ __le16 tp;
+
+ s8 tx_power; /* absolute Tx power (dBm), 0xff-> no BTC control */
+ s8 bt_tx_power; /* decrease Tx power (dB) */
+ s8 bt_rx_gain; /* LNA constrain level */
+ u8 no_empty_cnt;
+
+ u8 rssi; /* [7:4] -> bt_rssi_level, [3:0]-> wl_rssi_level */
+ u8 cn; /* condition_num */
+ u8 train_status; /* [7:4]-> train-state, [3:0]-> train-phase */
+ u8 train_result; /* refer to enum btc_fddt_check_map */
+} __packed;
+
struct rtw89_btc_fbtc_fddt_cell_status {
s8 wl_tx_pwr;
s8 bt_tx_pwr;
u8 state_phase; /* [0:3] train state, [4:7] train phase */
} __packed;
+struct rtw89_btc_fbtc_fddt_cell_status_v5 {
+ s8 wl_tx_pwr;
+ s8 bt_tx_pwr;
+ s8 bt_rx_gain;
+} __packed;
+
struct rtw89_btc_fbtc_cysta_v3 { /* statistics for cycles */
u8 fver;
u8 rsvd;
__le32 except_map;
} __packed;
+struct rtw89_btc_fbtc_cysta_v5 { /* statistics for cycles */
+ u8 fver;
+ u8 rsvd;
+	u8 collision_cnt; /* counter for event/timer occurring at the same time */
+ u8 except_cnt;
+ u8 wl_rx_err_ratio[BTC_CYCLE_SLOT_MAX];
+
+ __le16 skip_cnt;
+ __le16 cycles; /* total cycle number */
+
+ __le16 slot_step_time[BTC_CYCLE_SLOT_MAX]; /* record the wl/bt slot time */
+ __le16 slot_cnt[CXST_MAX]; /* slot count */
+ __le16 bcn_cnt[CXBCN_MAX];
+ struct rtw89_btc_fbtc_cycle_time_info_v5 cycle_time;
+ struct rtw89_btc_fbtc_cycle_leak_info leak_slot;
+ struct rtw89_btc_fbtc_cycle_a2dp_empty_info a2dp_ept;
+ struct rtw89_btc_fbtc_a2dp_trx_stat_v4 a2dp_trx[BTC_CYCLE_SLOT_MAX];
+ struct rtw89_btc_fbtc_cycle_fddt_info_v5 fddt_trx[BTC_CYCLE_SLOT_MAX];
+ struct rtw89_btc_fbtc_fddt_cell_status_v5 fddt_cells[FDD_TRAIN_WL_DIRECTION]
+ [FDD_TRAIN_WL_RSSI_LEVEL]
+ [FDD_TRAIN_BT_RSSI_LEVEL];
+ __le32 except_map;
+} __packed;
+
union rtw89_btc_fbtc_cysta_info {
struct rtw89_btc_fbtc_cysta_v2 v2;
struct rtw89_btc_fbtc_cysta_v3 v3;
struct rtw89_btc_fbtc_cysta_v4 v4;
+ struct rtw89_btc_fbtc_cysta_v5 v5;
};
struct rtw89_btc_fbtc_cynullsta_v1 { /* cycle null statistics */
__le32 feature;
} __packed;
-struct rtw89_btc_fbtc_btscan {
- u8 fver; /* btc_ver::fcxbtscan */
- u8 rsvd;
- __le16 rsvd2;
- u8 scan[6];
-} __packed;
-
struct rtw89_btc_fbtc_btafh {
u8 fver; /* btc_ver::fcxbtafh */
u8 rsvd;
u8 bt_rx_gain; /* LNA constrain level */
};
+struct rtw89_btc_trx_info {
+ u8 tx_lvl;
+ u8 rx_lvl;
+ u8 wl_rssi;
+ u8 bt_rssi;
+
+ s8 tx_power; /* absolute Tx power (dBm), 0xff-> no BTC control */
+ s8 rx_gain; /* rx gain table index (TBD.) */
+ s8 bt_tx_power; /* decrease Tx power (dB) */
+ s8 bt_rx_gain; /* LNA constrain level */
+
+ u8 cn; /* condition_num */
+ s8 nhm;
+ u8 bt_profile;
+ u8 rsvd2;
+
+ u16 tx_rate;
+ u16 rx_rate;
+
+ u32 tx_tp;
+ u32 rx_tp;
+ u32 rx_err_ratio;
+};
+
struct rtw89_btc_dm {
struct rtw89_btc_fbtc_slot slot[CXST_MAX];
struct rtw89_btc_fbtc_slot slot_now[CXST_MAX];
struct rtw89_btc_wl_tx_limit_para wl_tx_limit;
struct rtw89_btc_dm_step dm_step;
struct rtw89_btc_wl_scc_ctrl wl_scc;
+ struct rtw89_btc_trx_info trx_info;
union rtw89_btc_dm_error_map error;
u32 cnt_dm[BTC_DCNT_NUM];
u32 cnt_notify[BTC_NCNT_NUM];
u32 wl_only: 1;
u32 wl_fw_cx_offload: 1;
u32 freerun: 1;
+ u32 fddt_train: 1;
u32 wl_ps_ctrl: 2;
u32 wl_mimo_ps: 1;
u32 leak_ap: 1;
u32 wl_stb_chg: 1;
u32 pta_owner: 1;
u32 tdma_instant_excute: 1;
- u32 rsvd: 1;
u16 slot_dur[CXST_MAX];
u8 run_reason;
u8 run_action;
+
+ u8 wl_lna2: 1;
};
struct rtw89_btc_ctrl {
struct rtw89_btc_rpt_fbtc_mreg {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- struct rtw89_btc_fbtc_mreg_val finfo; /* info from fw */
+ union rtw89_btc_fbtc_mreg_val finfo; /* info from fw */
};
struct rtw89_btc_rpt_fbtc_gpio_dbg {
struct rtw89_btc_rpt_fbtc_btscan {
struct rtw89_btc_rpt_cmn_info cinfo; /* common info, by driver */
- struct rtw89_btc_fbtc_btscan finfo; /* info from fw */
+ union rtw89_btc_fbtc_btscan finfo; /* info from fw */
};
struct rtw89_btc_rpt_fbtc_btafh {
u32 txwd_body_size;
u32 h2c_ctrl_reg;
const u32 *h2c_regs;
+ struct rtw89_reg_def h2c_counter_reg;
u32 c2h_ctrl_reg;
const u32 *c2h_regs;
+ struct rtw89_reg_def c2h_counter_reg;
const struct rtw89_page_regs *page_regs;
bool cfo_src_fd;
const struct rtw89_reg_def *dcfo_comp;
const struct rtw89_rrsr_cfgs *rrsr_cfgs;
u32 bss_clr_map_reg;
u32 dma_ch_mask;
+ u32 edcca_lvl_reg;
const struct wiphy_wowlan_support *wowlan_stub;
};
RTW89_FW_FEATURE_SCAN_OFFLOAD,
RTW89_FW_FEATURE_TX_WAKE,
RTW89_FW_FEATURE_CRASH_TRIGGER,
- RTW89_FW_FEATURE_PACKET_DROP,
+ RTW89_FW_FEATURE_NO_PACKET_DROP,
RTW89_FW_FEATURE_NO_DEEP_PS,
RTW89_FW_FEATURE_NO_LPS_PG,
};
struct completion completion;
u8 h2c_seq;
u8 rec_seq;
+ u8 h2c_counter;
+ u8 c2h_counter;
struct rtw89_fw_suit normal;
struct rtw89_fw_suit wowlan;
bool fw_log_enable;
bool entity_active;
enum rtw89_entity_mode entity_mode;
+
+ u32 edcca_bak;
};
#define RTW89_MAX_MAC_ID_NUM 128
RTW89_FLAG_RUNNING,
RTW89_FLAG_BFEE_MON,
RTW89_FLAG_BFEE_EN,
+ RTW89_FLAG_BFEE_TIMER_KEEP,
RTW89_FLAG_NAPI_RUNNING,
RTW89_FLAG_LEISURE_PS,
RTW89_FLAG_LOW_POWER_MODE,
__DEF_FW_FEAT_COND(ge, >=); /* greater or equal */
__DEF_FW_FEAT_COND(le, <=); /* less or equal */
+__DEF_FW_FEAT_COND(lt, <); /* less than */
struct __fw_feat_cfg {
enum rtw89_core_chip_id chip_id;
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 35, 0, SCAN_OFFLOAD),
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 35, 0, TX_WAKE),
__CFG_FW_FEAT(RTL8852A, ge, 0, 13, 36, 0, CRASH_TRIGGER),
- __CFG_FW_FEAT(RTL8852A, ge, 0, 13, 38, 0, PACKET_DROP),
+ __CFG_FW_FEAT(RTL8852A, lt, 0, 13, 38, 0, NO_PACKET_DROP),
__CFG_FW_FEAT(RTL8852B, ge, 0, 29, 26, 0, NO_LPS_PG),
- __CFG_FW_FEAT(RTL8852C, ge, 0, 27, 20, 0, PACKET_DROP),
+ __CFG_FW_FEAT(RTL8852B, ge, 0, 29, 26, 0, TX_WAKE),
+ __CFG_FW_FEAT(RTL8852B, ge, 0, 29, 29, 0, CRASH_TRIGGER),
+ __CFG_FW_FEAT(RTL8852B, ge, 0, 29, 29, 0, SCAN_OFFLOAD),
__CFG_FW_FEAT(RTL8852C, le, 0, 27, 33, 0, NO_DEEP_PS),
__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 34, 0, TX_WAKE),
__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 36, 0, SCAN_OFFLOAD),
fw_info->h2c_seq = 0;
fw_info->rec_seq = 0;
+ fw_info->h2c_counter = 0;
+ fw_info->c2h_counter = 0;
rtwdev->mac.rpwm_seq_num = RPWM_SEQ_NUM_MAX;
rtwdev->mac.cpwm_seq_num = CPWM_SEQ_NUM_MAX;
return ret;
}
+#define H2C_LEN_CXDRVINFO_ROLE_SIZE_V2(max_role_num) \
+ (4 + 8 * (max_role_num) + H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN + H2C_LEN_CXDRVHDR)
+
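+/* Report the v2 WiFi role info (role bitmap, per-port active roles, multi-role
+ * and DBCC state) to the coexistence firmware via a CXDRVINFO_ROLE H2C.
+ */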
+int rtw89_fw_h2c_cxdrv_role_v2(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_wl_role_info_v2 *role_info = &wl->role_info_v2;
+ struct rtw89_btc_wl_role_info_bpos *bpos = &role_info->role_map.role;
+ struct rtw89_btc_wl_active_role_v2 *active = role_info->active_role_v2;
+ struct sk_buff *skb;
+ u32 len;
+ u8 *cmd, offset;
+ int ret;
+ int i;
+
+ len = H2C_LEN_CXDRVINFO_ROLE_SIZE_V2(ver->max_role_num);
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+ if (!skb) {
+ rtw89_err(rtwdev, "failed to alloc skb for h2c cxdrv_role\n");
+ return -ENOMEM;
+ }
+ skb_put(skb, len);
+ cmd = skb->data;
+
+ RTW89_SET_FWCMD_CXHDR_TYPE(cmd, CXDRVINFO_ROLE);
+ RTW89_SET_FWCMD_CXHDR_LEN(cmd, len - H2C_LEN_CXDRVHDR);
+
+ RTW89_SET_FWCMD_CXROLE_CONNECT_CNT(cmd, role_info->connect_cnt);
+ RTW89_SET_FWCMD_CXROLE_LINK_MODE(cmd, role_info->link_mode);
+
+ RTW89_SET_FWCMD_CXROLE_ROLE_NONE(cmd, bpos->none);
+ RTW89_SET_FWCMD_CXROLE_ROLE_STA(cmd, bpos->station);
+ RTW89_SET_FWCMD_CXROLE_ROLE_AP(cmd, bpos->ap);
+ RTW89_SET_FWCMD_CXROLE_ROLE_VAP(cmd, bpos->vap);
+ RTW89_SET_FWCMD_CXROLE_ROLE_ADHOC(cmd, bpos->adhoc);
+ RTW89_SET_FWCMD_CXROLE_ROLE_ADHOC_MASTER(cmd, bpos->adhoc_master);
+ RTW89_SET_FWCMD_CXROLE_ROLE_MESH(cmd, bpos->mesh);
+ RTW89_SET_FWCMD_CXROLE_ROLE_MONITOR(cmd, bpos->moniter);
+ RTW89_SET_FWCMD_CXROLE_ROLE_P2P_DEV(cmd, bpos->p2p_device);
+ RTW89_SET_FWCMD_CXROLE_ROLE_P2P_GC(cmd, bpos->p2p_gc);
+ RTW89_SET_FWCMD_CXROLE_ROLE_P2P_GO(cmd, bpos->p2p_go);
+ RTW89_SET_FWCMD_CXROLE_ROLE_NAN(cmd, bpos->nan);
+
+ offset = PORT_DATA_OFFSET;
+ for (i = 0; i < RTW89_PORT_NUM; i++, active++) {
+ RTW89_SET_FWCMD_CXROLE_ACT_CONNECTED_V2(cmd, active->connected, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_PID_V2(cmd, active->pid, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_PHY_V2(cmd, active->phy, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_NOA_V2(cmd, active->noa, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_BAND_V2(cmd, active->band, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_CLIENT_PS_V2(cmd, active->client_ps, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_BW_V2(cmd, active->bw, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_ROLE_V2(cmd, active->role, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_CH_V2(cmd, active->ch, i, offset);
+ RTW89_SET_FWCMD_CXROLE_ACT_NOA_DUR_V2(cmd, active->noa_duration, i, offset);
+ }
+
+ offset = len - H2C_LEN_CXDRVINFO_ROLE_DBCC_LEN;
+ RTW89_SET_FWCMD_CXROLE_MROLE_TYPE(cmd, role_info->mrole_type, offset);
+ RTW89_SET_FWCMD_CXROLE_MROLE_NOA(cmd, role_info->mrole_noa_duration, offset);
+ RTW89_SET_FWCMD_CXROLE_DBCC_EN(cmd, role_info->dbcc_en, offset);
+ RTW89_SET_FWCMD_CXROLE_DBCC_CHG(cmd, role_info->dbcc_chg, offset);
+ RTW89_SET_FWCMD_CXROLE_DBCC_2G_PHY(cmd, role_info->dbcc_2g_phy, offset);
+ RTW89_SET_FWCMD_CXROLE_LINK_MODE_CHG(cmd, role_info->link_mode_chg, offset);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_OUTSRC, BTFC_SET,
+ SET_DRV_INFO, 0, 0,
+ len);
+
+ ret = rtw89_h2c_tx(rtwdev, skb, false);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to send h2c\n");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
#define H2C_LEN_CXDRVINFO_CTRL (4 + H2C_LEN_CXDRVHDR)
int rtw89_fw_h2c_cxdrv_ctrl(struct rtw89_dev *rtwdev)
{
return ret;
}
+#define H2C_LEN_CXDRVINFO_TRX (28 + H2C_LEN_CXDRVHDR)
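+/* Push the driver's WiFi TRX snapshot (traffic levels, RSSI, Tx power/Rx gain
+ * requests, rates, throughput and Rx error ratio) to the coexistence firmware
+ * via a CXDRVINFO_TRX H2C.
+ */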
+int rtw89_fw_h2c_cxdrv_trx(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_trx_info *trx = &btc->dm.trx_info;
+ struct sk_buff *skb;
+ u8 *cmd;
+ int ret;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_LEN_CXDRVINFO_TRX);
+ if (!skb) {
+ rtw89_err(rtwdev, "failed to alloc skb for h2c cxdrv_trx\n");
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_LEN_CXDRVINFO_TRX);
+ cmd = skb->data;
+
+ RTW89_SET_FWCMD_CXHDR_TYPE(cmd, CXDRVINFO_TRX);
+ RTW89_SET_FWCMD_CXHDR_LEN(cmd, H2C_LEN_CXDRVINFO_TRX - H2C_LEN_CXDRVHDR);
+
+ RTW89_SET_FWCMD_CXTRX_TXLV(cmd, trx->tx_lvl);
+ RTW89_SET_FWCMD_CXTRX_RXLV(cmd, trx->rx_lvl);
+ RTW89_SET_FWCMD_CXTRX_WLRSSI(cmd, trx->wl_rssi);
+ RTW89_SET_FWCMD_CXTRX_BTRSSI(cmd, trx->bt_rssi);
+ RTW89_SET_FWCMD_CXTRX_TXPWR(cmd, trx->tx_power);
+ RTW89_SET_FWCMD_CXTRX_RXGAIN(cmd, trx->rx_gain);
+ RTW89_SET_FWCMD_CXTRX_BTTXPWR(cmd, trx->bt_tx_power);
+ RTW89_SET_FWCMD_CXTRX_BTRXGAIN(cmd, trx->bt_rx_gain);
+ RTW89_SET_FWCMD_CXTRX_CN(cmd, trx->cn);
+ RTW89_SET_FWCMD_CXTRX_NHM(cmd, trx->nhm);
+ RTW89_SET_FWCMD_CXTRX_BTPROFILE(cmd, trx->bt_profile);
+ RTW89_SET_FWCMD_CXTRX_RSVD2(cmd, trx->rsvd2);
+ RTW89_SET_FWCMD_CXTRX_TXRATE(cmd, trx->tx_rate);
+ RTW89_SET_FWCMD_CXTRX_RXRATE(cmd, trx->rx_rate);
+ RTW89_SET_FWCMD_CXTRX_TXTP(cmd, trx->tx_tp);
+ RTW89_SET_FWCMD_CXTRX_RXTP(cmd, trx->rx_tp);
+ RTW89_SET_FWCMD_CXTRX_RXERRRA(cmd, trx->rx_err_ratio);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_OUTSRC, BTFC_SET,
+ SET_DRV_INFO, 0, 0,
+ H2C_LEN_CXDRVINFO_TRX);
+
+ ret = rtw89_h2c_tx(rtwdev, skb, false);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to send h2c\n");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
#define H2C_LEN_CXDRVINFO_RFK (4 + H2C_LEN_CXDRVHDR)
int rtw89_fw_h2c_cxdrv_rfk(struct rtw89_dev *rtwdev)
{
struct rtw89_mac_h2c_info *info)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_fw_info *fw_info = &rtwdev->fw;
const u32 *h2c_reg = chip->h2c_regs;
u8 i, val, len;
int ret;
for (i = 0; i < RTW89_H2CREG_MAX; i++)
rtw89_write32(rtwdev, h2c_reg[i], info->h2creg[i]);
+ fw_info->h2c_counter++;
+ rtw89_write8_mask(rtwdev, chip->h2c_counter_reg.addr,
+ chip->h2c_counter_reg.mask, fw_info->h2c_counter);
rtw89_write8(rtwdev, chip->h2c_ctrl_reg, B_AX_H2CREG_TRIGGER);
return 0;
struct rtw89_mac_c2h_info *info)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_fw_info *fw_info = &rtwdev->fw;
const u32 *c2h_reg = chip->c2h_regs;
u32 ret;
u8 i, val;
info->content_len = (RTW89_GET_C2H_HDR_LEN(*info->c2hreg) << 2) -
RTW89_C2HREG_HDR_LEN;
+ fw_info->c2h_counter++;
+ rtw89_write8_mask(rtwdev, chip->c2h_counter_reg.addr,
+ chip->c2h_counter_reg.mask, fw_info->c2h_counter);
+
return 0;
}
}
}
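+/* On 6 GHz, record the requested SSID in the packet-offload info and report
+ * whether this probe request would be a wildcard (empty SSID) one, which the
+ * caller then skips.
+ */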
+static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif,
+ struct rtw89_pktofld_info *info,
+ enum nl80211_band band, u8 ssid_idx)
+{
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+
+ if (band != NL80211_BAND_6GHZ)
+ return false;
+
+ if (req->ssids[ssid_idx].ssid_len) {
+ memcpy(info->ssid, req->ssids[ssid_idx].ssid,
+ req->ssids[ssid_idx].ssid_len);
+ info->ssid_len = req->ssids[ssid_idx].ssid_len;
+ return false;
+ } else {
+ return true;
+ }
+}
+
static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif,
- struct sk_buff *skb)
+ struct sk_buff *skb, u8 ssid_idx)
{
struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
goto out;
}
+ if (rtw89_is_6ghz_wildcard_probe_req(rtwdev, rtwvif, info, band,
+ ssid_idx)) {
+ kfree_skb(new);
+ kfree(info);
+ goto out;
+ }
+
ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new);
if (ret) {
kfree_skb(new);
if (!skb)
return -ENOMEM;
- ret = rtw89_append_probe_req_ie(rtwdev, rtwvif, skb);
+ ret = rtw89_append_probe_req_ie(rtwdev, rtwvif, skb, i);
kfree_skb(skb);
if (ret)
return 0;
}
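+/* For 6 GHz channels reported via RNR, build directed probe requests for the
+ * advertised BSSIDs (unless one is already offloaded), register them as packet
+ * offloads and extend the dwell time on that channel.
+ */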
+static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
+ struct cfg80211_scan_request *req,
+ struct rtw89_mac_chinfo *ch_info)
+{
+ struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+ struct list_head *pkt_list = rtwdev->scan_info.pkt_list;
+ struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct cfg80211_scan_6ghz_params *params;
+ struct rtw89_pktofld_info *info, *tmp;
+ struct ieee80211_hdr *hdr;
+ struct sk_buff *skb;
+ bool found;
+ int ret = 0;
+ u8 i;
+
+ if (!req->n_6ghz_params)
+ return 0;
+
+ for (i = 0; i < req->n_6ghz_params; i++) {
+ params = &req->scan_6ghz_params[i];
+
+ if (req->channels[params->channel_idx]->hw_value !=
+ ch_info->pri_ch)
+ continue;
+
+ found = false;
+ list_for_each_entry(tmp, &pkt_list[NL80211_BAND_6GHZ], list) {
+ if (ether_addr_equal(tmp->bssid, params->bssid)) {
+ found = true;
+ break;
+ }
+ }
+ if (found)
+ continue;
+
+ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
+ NULL, 0, req->ie_len);
+ skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]);
+ skb_put_data(skb, ies->common_ies, ies->common_ie_len);
+ hdr = (struct ieee80211_hdr *)skb->data;
+ ether_addr_copy(hdr->addr3, params->bssid);
+
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (!info) {
+ ret = -ENOMEM;
+ kfree_skb(skb);
+ goto out;
+ }
+
+ ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, skb);
+ if (ret) {
+ kfree_skb(skb);
+ kfree(info);
+ goto out;
+ }
+
+ ether_addr_copy(info->bssid, params->bssid);
+ info->channel_6ghz = req->channels[params->channel_idx]->hw_value;
+ list_add_tail(&info->list, &rtwdev->scan_info.pkt_list[NL80211_BAND_6GHZ]);
+
+ ch_info->tx_pkt = true;
+ ch_info->period = RTW89_CHANNEL_TIME_6G + RTW89_DWELL_TIME_6G;
+
+ kfree_skb(skb);
+ }
+
+out:
+ return ret;
+}
+
static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
int ssid_num,
struct rtw89_mac_chinfo *ch_info)
struct cfg80211_scan_request *req = rtwvif->scan_req;
struct rtw89_pktofld_info *info;
u8 band, probe_count = 0;
+ int ret;
ch_info->notify_action = RTW89_SCANOFLD_DEBUG_MASK;
ch_info->dfs_ch = chan_type == RTW89_CHAN_DFS;
ch_info->pause_data = false;
ch_info->probe_id = RTW89_SCANOFLD_PKT_NONE;
+ if (ch_info->ch_band == RTW89_BAND_6G) {
+ if ((ssid_num == 1 && req->ssids[0].ssid_len == 0) ||
+ !ch_info->is_psc) {
+ ch_info->tx_pkt = false;
+ if (!req->duration_mandatory)
+ ch_info->period -= RTW89_DWELL_TIME_6G;
+ }
+ }
+
+ ret = rtw89_update_6ghz_rnr_chan(rtwdev, req, ch_info);
+ if (ret)
+ rtw89_warn(rtwdev, "RNR fails: %d\n", ret);
+
if (ssid_num) {
- ch_info->num_pkt = ssid_num;
band = rtw89_hw_to_nl80211_band(ch_info->ch_band);
list_for_each_entry(info, &scan_info->pkt_list[band], list) {
- ch_info->pkt_id[probe_count] = info->id;
- if (++probe_count >= ssid_num)
+ if (info->channel_6ghz &&
+ ch_info->pri_ch != info->channel_6ghz)
+ continue;
+ ch_info->pkt_id[probe_count++] = info->id;
+ if (probe_count >= RTW89_SCANOFLD_MAX_SSID)
break;
}
- if (probe_count != ssid_num)
- rtw89_err(rtwdev, "SSID num differs from list len\n");
- }
-
- if (ch_info->ch_band == RTW89_BAND_6G) {
- if (ssid_num == 1 && req->ssids[0].ssid_len == 0) {
- ch_info->tx_pkt = false;
- if (!req->duration_mandatory)
- ch_info->period -= RTW89_DWELL_TIME_6G;
- }
+ ch_info->num_pkt = probe_count;
}
switch (chan_type) {
ch_info->central_ch = channel->hw_value;
ch_info->pri_ch = channel->hw_value;
ch_info->rand_seq_num = random_seq;
+ ch_info->is_psc = cfg80211_channel_is_psc(channel);
if (channel->flags &
(IEEE80211_CHAN_RADAR | IEEE80211_CHAN_NO_IR))
u16 tx_pwr_idx;
u8 rsvd1;
struct list_head list;
+ bool is_psc;
};
struct rtw89_scan_option {
struct rtw89_pktofld_info {
struct list_head list;
u8 id;
+
+ /* Below fields are for 6 GHz RNR use only */
+ u8 ssid[IEEE80211_MAX_SSID_LEN];
+ u8 ssid_len;
+ u8 bssid[ETH_ALEN];
+ u16 channel_6ghz;
};
static inline void RTW89_SET_FWCMD_RA_IS_DIS(void *cmd, u32 val)
CXDRVINFO_RUN,
CXDRVINFO_CTRL,
CXDRVINFO_SCAN,
+ CXDRVINFO_TRX, /* WL traffic to WL fw */
CXDRVINFO_MAX,
};
le32p_replace_bits((__le32 *)((u8 *)cmd + (20 + (12 + offset) * n)), val, GENMASK(31, 0));
}
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_CONNECTED_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (6 + (12 + offset) * n), val, BIT(0));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_PID_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (6 + (12 + offset) * n), val, GENMASK(3, 1));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_PHY_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (6 + (12 + offset) * n), val, BIT(4));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_NOA_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (6 + (12 + offset) * n), val, BIT(5));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_BAND_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (6 + (12 + offset) * n), val, GENMASK(7, 6));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_CLIENT_PS_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (7 + (12 + offset) * n), val, BIT(0));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_BW_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (7 + (12 + offset) * n), val, GENMASK(7, 1));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_ROLE_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (8 + (12 + offset) * n), val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_CH_V2(void *cmd, u8 val, int n, u8 offset)
+{
+ u8p_replace_bits((u8 *)cmd + (9 + (12 + offset) * n), val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXROLE_ACT_NOA_DUR_V2(void *cmd, u32 val, int n, u8 offset)
+{
+ le32p_replace_bits((__le32 *)((u8 *)cmd + (10 + (12 + offset) * n)), val, GENMASK(31, 0));
+}
+
static inline void RTW89_SET_FWCMD_CXROLE_MROLE_TYPE(void *cmd, u32 val, u8 offset)
{
le32p_replace_bits((__le32 *)((u8 *)cmd + offset), val, GENMASK(31, 0));
le32p_replace_bits((__le32 *)((u8 *)(cmd) + 2), val, GENMASK(18, 3));
}
+static inline void RTW89_SET_FWCMD_CXTRX_TXLV(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 2, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RXLV(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 3, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_WLRSSI(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 4, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_BTRSSI(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 5, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_TXPWR(void *cmd, s8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 6, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RXGAIN(void *cmd, s8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 7, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_BTTXPWR(void *cmd, s8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 8, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_BTRXGAIN(void *cmd, s8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 9, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_CN(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 10, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_NHM(void *cmd, s8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 11, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_BTPROFILE(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 12, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RSVD2(void *cmd, u8 val)
+{
+ u8p_replace_bits((u8 *)cmd + 13, val, GENMASK(7, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_TXRATE(void *cmd, u16 val)
+{
+ le16p_replace_bits((__le16 *)((u8 *)cmd + 14), val, GENMASK(15, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RXRATE(void *cmd, u16 val)
+{
+ le16p_replace_bits((__le16 *)((u8 *)cmd + 16), val, GENMASK(15, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_TXTP(void *cmd, u32 val)
+{
+ le32p_replace_bits((__le32 *)((u8 *)cmd + 18), val, GENMASK(31, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RXTP(void *cmd, u32 val)
+{
+ le32p_replace_bits((__le32 *)((u8 *)cmd + 22), val, GENMASK(31, 0));
+}
+
+static inline void RTW89_SET_FWCMD_CXTRX_RXERRRA(void *cmd, u32 val)
+{
+ le32p_replace_bits((__le32 *)((u8 *)cmd + 26), val, GENMASK(31, 0));
+}
+
static inline void RTW89_SET_FWCMD_CXRFK_STATE(void *cmd, u32 val)
{
le32p_replace_bits((__le32 *)((u8 *)(cmd) + 2), val, GENMASK(1, 0));
int rtw89_fw_h2c_cxdrv_init(struct rtw89_dev *rtwdev);
int rtw89_fw_h2c_cxdrv_role(struct rtw89_dev *rtwdev);
int rtw89_fw_h2c_cxdrv_role_v1(struct rtw89_dev *rtwdev);
+int rtw89_fw_h2c_cxdrv_role_v2(struct rtw89_dev *rtwdev);
int rtw89_fw_h2c_cxdrv_ctrl(struct rtw89_dev *rtwdev);
+int rtw89_fw_h2c_cxdrv_trx(struct rtw89_dev *rtwdev);
int rtw89_fw_h2c_cxdrv_rfk(struct rtw89_dev *rtwdev);
int rtw89_fw_h2c_del_pkt_offload(struct rtw89_dev *rtwdev, u8 id);
int rtw89_fw_h2c_add_pkt_offload(struct rtw89_dev *rtwdev, u8 *id,
if (rtw89_read32(rtwdev, R_AX_PLATFORM_ENABLE) & B_AX_WCPU_EN)
return -EFAULT;
+ rtw89_write32(rtwdev, R_AX_UDM1, 0);
+ rtw89_write32(rtwdev, R_AX_UDM2, 0);
rtw89_write32(rtwdev, R_AX_HALT_H2C_CTRL, 0);
rtw89_write32(rtwdev, R_AX_HALT_C2H_CTRL, 0);
rtw89_write32(rtwdev, R_AX_HALT_H2C, 0);
return cnt;
}
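+/* Keep or release the beamformee BFRP Rx standby timer, mirroring the choice
+ * in RTW89_FLAG_BFEE_TIMER_KEEP so the traffic tracker only reprograms it when
+ * the state actually changes.
+ */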
+static void rtw89_mac_bfee_standby_timer(struct rtw89_dev *rtwdev, u8 mac_idx,
+ bool keep)
+{
+ u32 reg;
+
+ rtw89_debug(rtwdev, RTW89_DBG_BF, "set bfee standby_timer to %d\n", keep);
+ reg = rtw89_mac_reg_by_idx(R_AX_BFMEE_RESP_OPTION, mac_idx);
+ if (keep) {
+ set_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags);
+ rtw89_write32_mask(rtwdev, reg, B_AX_BFMEE_BFRP_RX_STANDBY_TIMER_MASK,
+ BFRP_RX_STANDBY_TIMER_KEEP);
+ } else {
+ clear_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags);
+ rtw89_write32_mask(rtwdev, reg, B_AX_BFMEE_BFRP_RX_STANDBY_TIMER_MASK,
+ BFRP_RX_STANDBY_TIMER_RELEASE);
+ }
+}
+
static void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en)
{
u32 reg;
rtw89_write32(rtwdev, reg, CSI_RRSC_BMAP);
reg = rtw89_mac_reg_by_idx(R_AX_BFMEE_RESP_OPTION, mac_idx);
- val32 = FIELD_PREP(B_AX_BFMEE_BFRP_RX_STANDBY_TIMER_MASK, BFRP_RX_STANDBY_TIMER);
- val32 |= FIELD_PREP(B_AX_BFMEE_NDP_RX_STANDBY_TIMER_MASK, NDP_RX_STANDBY_TIMER);
+ val32 = FIELD_PREP(B_AX_BFMEE_NDP_RX_STANDBY_TIMER_MASK, NDP_RX_STANDBY_TIMER);
rtw89_write32(rtwdev, reg, val32);
+ rtw89_mac_bfee_standby_timer(rtwdev, mac_idx, true);
rtw89_mac_bfee_ctrl(rtwdev, mac_idx, true);
reg = rtw89_mac_reg_by_idx(R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
struct rtw89_vif *rtwvif;
bool en = stats->tx_tfc_lv <= stats->rx_tfc_lv;
bool old = test_bit(RTW89_FLAG_BFEE_EN, rtwdev->flags);
+ bool keep_timer = true;
+ bool old_keep_timer;
+
+ old_keep_timer = test_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags);
+
+ if (stats->tx_tfc_lv <= RTW89_TFC_LOW && stats->rx_tfc_lv <= RTW89_TFC_LOW)
+ keep_timer = false;
+
+ if (keep_timer != old_keep_timer) {
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+ rtw89_mac_bfee_standby_timer(rtwdev, rtwvif->mac_idx,
+ keep_timer);
+ }
if (en == old)
return;
for (i = 0; i < try_cnt; i++) {
ret = read_poll_timeout(mac_is_txq_empty, empty, empty, 50,
50000, false, rtwdev);
- if (ret)
+ if (ret && !RTW89_CHK_FW_FEATURE(NO_PACKET_DROP, &rtwdev->fw))
rtw89_fw_h2c_pkt_drop(rtwdev, ¶ms);
else
return 0;
rtw89_leave_lps(rtwdev);
rtw89_hci_flush_queues(rtwdev, queues, drop);
- if (drop && RTW89_CHK_FW_FEATURE(PACKET_DROP, &rtwdev->fw))
+ if (drop && !RTW89_CHK_FW_FEATURE(NO_PACKET_DROP, &rtwdev->fw))
__rtw89_drop_packets(rtwdev, vif);
else
rtw89_mac_flush_txq(rtwdev, queues, drop);
static void rtw89_pci_declaim_device(struct rtw89_dev *rtwdev,
struct pci_dev *pdev)
{
- pci_clear_master(pdev);
pci_disable_device(pdev);
}
rtw89_pci_link_cfg(rtwdev);
rtw89_pci_l1ss_cfg(rtwdev);
- ret = rtw89_core_register(rtwdev);
- if (ret) {
- rtw89_err(rtwdev, "failed to register core\n");
- goto err_clear_resource;
- }
-
rtw89_core_napi_init(rtwdev);
ret = rtw89_pci_request_irq(rtwdev, pdev);
if (ret) {
rtw89_err(rtwdev, "failed to request pci irq\n");
- goto err_unregister;
+ goto err_deinit_napi;
+ }
+
+ ret = rtw89_core_register(rtwdev);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to register core\n");
+ goto err_free_irq;
}
return 0;
-err_unregister:
+err_free_irq:
+ rtw89_pci_free_irq(rtwdev, pdev);
+err_deinit_napi:
rtw89_core_napi_deinit(rtwdev);
- rtw89_core_unregister(rtwdev);
err_clear_resource:
rtw89_pci_clear_resource(rtwdev, pdev);
err_declaim_pci:
data[RTW89_TSSI_SBW20]);
}
EXPORT_SYMBOL(rtw89_phy_tssi_ctrl_set_bandedge_cfg);
+
+static
+const u8 rtw89_ch_base_table[16] = {1, 0xff,
+ 36, 100, 132, 149, 0xff,
+ 1, 33, 65, 97, 129, 161, 193, 225, 0xff};
+#define RTW89_CH_BASE_IDX_2G 0
+#define RTW89_CH_BASE_IDX_5G_FIRST 2
+#define RTW89_CH_BASE_IDX_5G_LAST 5
+#define RTW89_CH_BASE_IDX_6G_FIRST 7
+#define RTW89_CH_BASE_IDX_6G_LAST 14
+
+#define RTW89_CH_BASE_IDX_MASK GENMASK(7, 4)
+#define RTW89_CH_OFFSET_MASK GENMASK(3, 0)
+
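+/* Encode a channel as one byte: the high nibble indexes rtw89_ch_base_table
+ * (selecting the band segment) and the low nibble holds the offset from that
+ * base, halved for 5/6 GHz; 2 GHz channels store the channel number directly.
+ */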
+u8 rtw89_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band)
+{
+ u8 chan_idx;
+ u8 last, first;
+ u8 idx;
+
+ switch (band) {
+ case RTW89_BAND_2G:
+ chan_idx = FIELD_PREP(RTW89_CH_BASE_IDX_MASK, RTW89_CH_BASE_IDX_2G) |
+ FIELD_PREP(RTW89_CH_OFFSET_MASK, central_ch);
+ return chan_idx;
+ case RTW89_BAND_5G:
+ first = RTW89_CH_BASE_IDX_5G_FIRST;
+ last = RTW89_CH_BASE_IDX_5G_LAST;
+ break;
+ case RTW89_BAND_6G:
+ first = RTW89_CH_BASE_IDX_6G_FIRST;
+ last = RTW89_CH_BASE_IDX_6G_LAST;
+ break;
+ default:
+ rtw89_warn(rtwdev, "Unsupported band %d\n", band);
+ return 0;
+ }
+
+ for (idx = last; idx >= first; idx--)
+ if (central_ch >= rtw89_ch_base_table[idx])
+ break;
+
+ if (idx < first) {
+ rtw89_warn(rtwdev, "Unknown band %d channel %d\n", band, central_ch);
+ return 0;
+ }
+
+ chan_idx = FIELD_PREP(RTW89_CH_BASE_IDX_MASK, idx) |
+ FIELD_PREP(RTW89_CH_OFFSET_MASK,
+ (central_ch - rtw89_ch_base_table[idx]) >> 1);
+ return chan_idx;
+}
+EXPORT_SYMBOL(rtw89_encode_chan_idx);
+
+void rtw89_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx,
+ u8 *ch, enum nl80211_band *band)
+{
+ u8 idx, offset;
+
+ idx = FIELD_GET(RTW89_CH_BASE_IDX_MASK, chan_idx);
+ offset = FIELD_GET(RTW89_CH_OFFSET_MASK, chan_idx);
+
+ if (idx == RTW89_CH_BASE_IDX_2G) {
+ *band = NL80211_BAND_2GHZ;
+ *ch = offset;
+ return;
+ }
+
+ *band = idx <= RTW89_CH_BASE_IDX_5G_LAST ? NL80211_BAND_5GHZ : NL80211_BAND_6GHZ;
+ *ch = rtw89_ch_base_table[idx] + (offset << 1);
+}
+EXPORT_SYMBOL(rtw89_decode_chan_idx);
+
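+/* While scanning, back up the EDCCA/PPDU level register and program the
+ * relaxed default level; restore the backed-up value once the scan ends.
+ */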
+#define EDCCA_DEFAULT 249
+void rtw89_phy_config_edcca(struct rtw89_dev *rtwdev, bool scan)
+{
+ u32 reg = rtwdev->chip->edcca_lvl_reg;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ u32 val;
+
+ if (scan) {
+ hal->edcca_bak = rtw89_phy_read32(rtwdev, reg);
+ val = hal->edcca_bak;
+ u32p_replace_bits(&val, EDCCA_DEFAULT, B_SEG0R_EDCCA_LVL_A_MSK);
+ u32p_replace_bits(&val, EDCCA_DEFAULT, B_SEG0R_EDCCA_LVL_P_MSK);
+ u32p_replace_bits(&val, EDCCA_DEFAULT, B_SEG0R_PPDU_LVL_MSK);
+ rtw89_phy_write32(rtwdev, reg, val);
+ } else {
+ rtw89_phy_write32(rtwdev, reg, hal->edcca_bak);
+ }
+}
enum rtw89_tssi_bandedge_cfg bandedge_cfg);
void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev);
+u8 rtw89_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band);
+void rtw89_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx,
+ u8 *ch, enum nl80211_band *band);
+void rtw89_phy_config_edcca(struct rtw89_dev *rtwdev, bool scan);
#endif
#define R_AX_UDM0 0x01F0
#define R_AX_UDM1 0x01F4
+#define B_AX_UDM1_MASK GENMASK(31, 16)
+#define B_AX_UDM1_HALMAC_C2H_ENQ_CNT_MASK GENMASK(15, 12)
+#define B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK GENMASK(11, 8)
+#define B_AX_UDM1_WCPU_C2H_ENQ_CNT_MASK GENMASK(7, 4)
+#define B_AX_UDM1_WCPU_H2C_DEQ_CNT_MASK GENMASK(3, 0)
#define R_AX_UDM2 0x01F8
#define R_AX_UDM3 0x01FC
#define R_AX_BFMEE_RESP_OPTION_C1 0xED80
#define B_AX_BFMEE_NDP_RX_STANDBY_TIMER_MASK GENMASK(31, 24)
#define B_AX_BFMEE_BFRP_RX_STANDBY_TIMER_MASK GENMASK(23, 20)
+#define BFRP_RX_STANDBY_TIMER_KEEP 0x0
+#define BFRP_RX_STANDBY_TIMER_RELEASE 0x1
#define B_AX_MU_BFRPTSEG_SEL_MASK GENMASK(18, 17)
#define B_AX_BFMEE_NDP_RXSTDBY_SEL BIT(16)
#define BFRP_RX_STANDBY_TIMER 0x0
#define B_PKT_POP_EN BIT(8)
#define R_SEG0R_PD 0x481C
#define R_SEG0R_PD_V1 0x4860
+#define R_SEG0R_EDCCA_LVL 0x4840
+#define R_SEG0R_EDCCA_LVL_V1 0x4884
+#define B_SEG0R_PPDU_LVL_MSK GENMASK(31, 24)
+#define B_SEG0R_EDCCA_LVL_P_MSK GENMASK(15, 8)
+#define B_SEG0R_EDCCA_LVL_A_MSK GENMASK(7, 0)
#define B_SEG0R_PD_SPATIAL_REUSE_EN_MSK_V1 BIT(30)
#define B_SEG0R_PD_SPATIAL_REUSE_EN_MSK BIT(29)
#define B_SEG0R_PD_LOWER_BOUND_MSK GENMASK(10, 6)
static void rtw8852a_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
{
+ struct rtw89_btc *btc = &rtwdev->btc;
+
switch (level) {
case 0: /* original */
+ default:
rtw8852a_bb_ctrl_btc_preagc(rtwdev, false);
- rtw8852a_set_wl_lna2(rtwdev, 0);
+ btc->dm.wl_lna2 = 0;
break;
case 1: /* for FDD free-run */
rtw8852a_bb_ctrl_btc_preagc(rtwdev, true);
- rtw8852a_set_wl_lna2(rtwdev, 0);
+ btc->dm.wl_lna2 = 0;
break;
case 2: /* for BTG Co-Rx*/
rtw8852a_bb_ctrl_btc_preagc(rtwdev, false);
- rtw8852a_set_wl_lna2(rtwdev, 1);
+ btc->dm.wl_lna2 = 1;
break;
}
+
+ rtw8852a_set_wl_lna2(rtwdev, btc->dm.wl_lna2);
}
static void rtw8852a_fill_freq_with_ppdu(struct rtw89_dev *rtwdev,
.h2c_desc_size = sizeof(struct rtw89_txwd_body),
.txwd_body_size = sizeof(struct rtw89_txwd_body),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL,
+ .h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852a_h2c_regs,
.c2h_ctrl_reg = R_AX_C2HREG_CTRL,
.c2h_regs = rtw8852a_c2h_regs,
+ .c2h_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_C2H_ENQ_CNT_MASK >> 8},
.page_regs = &rtw8852a_page_regs,
.cfo_src_fd = false,
.dcfo_comp = &rtw8852a_dcfo_comp,
.rrsr_cfgs = &rtw8852a_rrsr_cfgs,
.bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
+ .edcca_lvl_reg = R_SEG0R_EDCCA_LVL,
#ifdef CONFIG_PM
.wowlan_stub = &rtw_wowlan_stub_8852a,
#endif
{
bool cck_en = chan->channel <= 14;
u8 pri_ch_idx = chan->pri_ch_idx;
+ u8 band = chan->band_type, chan_idx;
if (cck_en)
rtw8852b_ctrl_sco_cck(rtwdev, chan->primary_channel);
B_BT_DYN_DC_EST_EN_MSK, 0x0);
rtw89_phy_write32_mask(rtwdev, R_GNT_BT_WGT_EN, B_GNT_BT_WGT_EN, 0x0);
}
- rtw89_phy_write32_mask(rtwdev, R_MAC_PIN_SEL, B_CH_IDX_SEG0,
- chan->primary_channel);
+ chan_idx = rtw89_encode_chan_idx(rtwdev, chan->primary_channel, band);
+ rtw89_phy_write32_mask(rtwdev, R_MAC_PIN_SEL, B_CH_IDX_SEG0, chan_idx);
rtw8852b_5m_mask(rtwdev, chan, phy_idx);
rtw8852b_bb_set_pop(rtwdev);
rtw8852b_bb_reset_all(rtwdev, phy_idx);
/* set WL standby = Rx for GNT_BT_Tx = 1->0 settle issue */
if (state)
- rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x579);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x179);
else
rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x20);
rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWE, RFREG_MASK, 0x0);
}
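+/* Rewrite the path-B LNA2 gain codes through the RF LUT registers: level 0
+ * restores the defaults, level 1 fixes LNA2 at gain index 5 (used for BTG
+ * co-rx).
+ */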
+static void rtw8852b_btc_set_wl_lna2(struct rtw89_dev *rtwdev, u8 level)
+{
+ switch (level) {
+ case 0: /* default */
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWE, RFREG_MASK, 0x1000);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x0);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x15);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x1);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x17);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x2);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x15);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x3);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x17);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWE, RFREG_MASK, 0x0);
+ break;
+ case 1: /* Fix LNA2=5 */
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWE, RFREG_MASK, 0x1000);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x0);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x15);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x1);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x5);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x2);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x15);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWA, RFREG_MASK, 0x3);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWD0, RFREG_MASK, 0x5);
+ rtw89_write_rf(rtwdev, RF_PATH_B, RR_LUTWE, RFREG_MASK, 0x0);
+ break;
+ }
+}
+
static void rtw8852b_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
{
+ struct rtw89_btc *btc = &rtwdev->btc;
+
+ switch (level) {
+ case 0: /* original */
+ default:
+ rtw8852b_bb_ctrl_btc_preagc(rtwdev, false);
+ btc->dm.wl_lna2 = 0;
+ break;
+ case 1: /* for FDD free-run */
+ rtw8852b_bb_ctrl_btc_preagc(rtwdev, true);
+ btc->dm.wl_lna2 = 0;
+ break;
+	case 2: /* for BTG Co-Rx */
+ rtw8852b_bb_ctrl_btc_preagc(rtwdev, false);
+ btc->dm.wl_lna2 = 1;
+ break;
+ }
+
+ rtw8852b_btc_set_wl_lna2(rtwdev, btc->dm.wl_lna2);
}
static void rtw8852b_fill_freq_with_ppdu(struct rtw89_dev *rtwdev,
struct ieee80211_rx_status *status)
{
u16 chan = phy_ppdu->chan_idx;
- u8 band;
+ enum nl80211_band band;
+ u8 ch;
if (chan == 0)
return;
- band = chan <= 14 ? NL80211_BAND_2GHZ : NL80211_BAND_5GHZ;
- status->freq = ieee80211_channel_to_frequency(chan, band);
+ rtw89_decode_chan_idx(rtwdev, chan, &ch, &band);
+ status->freq = ieee80211_channel_to_frequency(ch, band);
status->band = band;
}
.h2c_desc_size = sizeof(struct rtw89_txwd_body),
.txwd_body_size = sizeof(struct rtw89_txwd_body),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL,
+ .h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852b_h2c_regs,
.c2h_ctrl_reg = R_AX_C2HREG_CTRL,
+ .c2h_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_C2H_ENQ_CNT_MASK >> 8},
.c2h_regs = rtw8852b_c2h_regs,
.page_regs = &rtw8852b_page_regs,
.cfo_src_fd = true,
.dma_ch_mask = BIT(RTW89_DMA_ACH4) | BIT(RTW89_DMA_ACH5) |
BIT(RTW89_DMA_ACH6) | BIT(RTW89_DMA_ACH7) |
BIT(RTW89_DMA_B1MG) | BIT(RTW89_DMA_B1HI),
+ .edcca_lvl_reg = R_SEG0R_EDCCA_LVL_V1,
};
EXPORT_SYMBOL(rtw8852b_chip_info);
}
}
-static
-const u8 rtw8852c_ch_base_table[16] = {1, 0xff,
- 36, 100, 132, 149, 0xff,
- 1, 33, 65, 97, 129, 161, 193, 225, 0xff};
-#define RTW8852C_CH_BASE_IDX_2G 0
-#define RTW8852C_CH_BASE_IDX_5G_FIRST 2
-#define RTW8852C_CH_BASE_IDX_5G_LAST 5
-#define RTW8852C_CH_BASE_IDX_6G_FIRST 7
-#define RTW8852C_CH_BASE_IDX_6G_LAST 14
-
-#define RTW8852C_CH_BASE_IDX_MASK GENMASK(7, 4)
-#define RTW8852C_CH_OFFSET_MASK GENMASK(3, 0)
-
-static u8 rtw8852c_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band)
-{
- u8 chan_idx;
- u8 last, first;
- u8 idx;
-
- switch (band) {
- case RTW89_BAND_2G:
- chan_idx = FIELD_PREP(RTW8852C_CH_BASE_IDX_MASK, RTW8852C_CH_BASE_IDX_2G) |
- FIELD_PREP(RTW8852C_CH_OFFSET_MASK, central_ch);
- return chan_idx;
- case RTW89_BAND_5G:
- first = RTW8852C_CH_BASE_IDX_5G_FIRST;
- last = RTW8852C_CH_BASE_IDX_5G_LAST;
- break;
- case RTW89_BAND_6G:
- first = RTW8852C_CH_BASE_IDX_6G_FIRST;
- last = RTW8852C_CH_BASE_IDX_6G_LAST;
- break;
- default:
- rtw89_warn(rtwdev, "Unsupported band %d\n", band);
- return 0;
- }
-
- for (idx = last; idx >= first; idx--)
- if (central_ch >= rtw8852c_ch_base_table[idx])
- break;
-
- if (idx < first) {
- rtw89_warn(rtwdev, "Unknown band %d channel %d\n", band, central_ch);
- return 0;
- }
-
- chan_idx = FIELD_PREP(RTW8852C_CH_BASE_IDX_MASK, idx) |
- FIELD_PREP(RTW8852C_CH_OFFSET_MASK,
- (central_ch - rtw8852c_ch_base_table[idx]) >> 1);
- return chan_idx;
-}
-
-static void rtw8852c_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx,
- u8 *ch, enum nl80211_band *band)
-{
- u8 idx, offset;
-
- idx = FIELD_GET(RTW8852C_CH_BASE_IDX_MASK, chan_idx);
- offset = FIELD_GET(RTW8852C_CH_OFFSET_MASK, chan_idx);
-
- if (idx == RTW8852C_CH_BASE_IDX_2G) {
- *band = NL80211_BAND_2GHZ;
- *ch = offset;
- return;
- }
-
- *band = idx <= RTW8852C_CH_BASE_IDX_5G_LAST ? NL80211_BAND_5GHZ : NL80211_BAND_6GHZ;
- *ch = rtw8852c_ch_base_table[idx] + (offset << 1);
-}
-
static void rtw8852c_set_gain_offset(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx,
}
}
- chan_idx = rtw8852c_encode_chan_idx(rtwdev, chan->primary_channel, band);
+ chan_idx = rtw89_encode_chan_idx(rtwdev, chan->primary_channel, band);
rtw89_phy_write32_idx(rtwdev, R_MAC_PIN_SEL, B_CH_IDX_SEG0, chan_idx, phy_idx);
}
static void rtw8852c_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
{
+ struct rtw89_btc *btc = &rtwdev->btc;
+
switch (level) {
case 0: /* original */
+ default:
rtw8852c_bb_ctrl_btc_preagc(rtwdev, false);
- rtw8852c_set_wl_lna2(rtwdev, 0);
+ btc->dm.wl_lna2 = 0;
break;
case 1: /* for FDD free-run */
rtw8852c_bb_ctrl_btc_preagc(rtwdev, true);
- rtw8852c_set_wl_lna2(rtwdev, 0);
+ btc->dm.wl_lna2 = 0;
break;
case 2: /* for BTG Co-Rx*/
rtw8852c_bb_ctrl_btc_preagc(rtwdev, false);
- rtw8852c_set_wl_lna2(rtwdev, 1);
+ btc->dm.wl_lna2 = 1;
break;
}
+
+ rtw8852c_set_wl_lna2(rtwdev, btc->dm.wl_lna2);
}
static void rtw8852c_fill_freq_with_ppdu(struct rtw89_dev *rtwdev,
if (chan_idx == 0)
return;
- rtw8852c_decode_chan_idx(rtwdev, chan_idx, &ch, &band);
+ rtw89_decode_chan_idx(rtwdev, chan_idx, &ch, &band);
status->freq = ieee80211_channel_to_frequency(ch, band);
status->band = band;
}
.h2c_desc_size = sizeof(struct rtw89_rxdesc_short),
.txwd_body_size = sizeof(struct rtw89_txwd_body_v1),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL_V1,
+ .h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852c_h2c_regs,
.c2h_ctrl_reg = R_AX_C2HREG_CTRL_V1,
+ .c2h_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_C2H_ENQ_CNT_MASK >> 8},
.c2h_regs = rtw8852c_c2h_regs,
.page_regs = &rtw8852c_page_regs,
.cfo_src_fd = false,
.rrsr_cfgs = &rtw8852c_rrsr_cfgs,
.bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
+ .edcca_lvl_reg = R_SEG0R_EDCCA_LVL,
#ifdef CONFIG_PM
.wowlan_stub = &rtw_wowlan_stub_8852c,
#endif
static void ser_reset_trx_st_hdl(struct rtw89_ser *ser, u8 evt)
{
+ struct rtw89_dev *rtwdev = container_of(ser, struct rtw89_dev, ser);
+
switch (evt) {
case SER_EV_STATE_IN:
+ cancel_delayed_work_sync(&rtwdev->track_work);
drv_stop_tx(ser);
if (hal_stop_dma(ser)) {
hal_enable_dma(ser);
drv_resume_rx(ser);
drv_resume_tx(ser);
+ ieee80211_queue_delayed_work(rtwdev->hw, &rtwdev->track_work,
+ RTW89_TRACK_WORK_PERIOD);
break;
default:
struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
struct ieee80211_sta *wow_sta;
struct rtw89_sta *rtwsta = NULL;
- bool is_conn = true;
int ret;
wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
if (wow_sta)
rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
- else
- is_conn = false;
if (wow) {
if (rtw_wow->pattern_cnt)
}
}
- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn);
- if (ret) {
- rtw89_warn(rtwdev, "failed to send h2c join info\n");
- return ret;
- }
-
ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
if (ret) {
rtw89_warn(rtwdev, "failed to send h2c cam\n");
rsi_dbg(MGMT_TX_ZONE,
"%s: Sending scan req frame\n", __func__);
+ if (!channel)
+ return 0;
+
skb = dev_alloc_skb(frame_len);
if (!skb) {
rsi_dbg(ERR_ZONE, "%s: Failed in allocation of skb\n",
return -ENOMEM;
}
- if (!channel) {
- dev_kfree_skb(skb);
- return 0;
- }
memset(skb->data, 0, frame_len);
chan_cfg = (struct rsi_chan_config *)skb->data;
wfx_bh_poll_irq(wdev);
err = wait_for_completion_timeout(&wdev->firmware_ready, 1 * HZ);
- if (err <= 0) {
- if (err == 0) {
- dev_err(wdev->dev, "timeout while waiting for startup indication\n");
- err = -ETIMEDOUT;
- } else if (err == -ERESTARTSYS) {
- dev_info(wdev->dev, "probe interrupted by user\n");
- }
+ if (err == 0) {
+ dev_err(wdev->dev, "timeout while waiting for startup indication\n");
+ err = -ETIMEDOUT;
goto bh_unregister;
}
--- /dev/null
+config MAC80211_HWSIM
+ tristate "Simulated radio testing tool for mac80211"
+ depends on MAC80211
+ help
+	  This driver is a developer testing tool that can be used to test
+	  the IEEE 802.11 networking stack (mac80211) functionality. This is not
+ needed for normal wireless LAN usage and is only for testing. See
+ Documentation/networking/mac80211_hwsim for more information on how
+ to use this tool.
+
+ To compile this driver as a module, choose M here: the module will be
+ called mac80211_hwsim. If unsure, say N.
+
+config VIRT_WIFI
+ tristate "Wifi wrapper for ethernet drivers"
+ depends on CFG80211
+ help
+	  This option adds support for making Ethernet connections appear as
+	  Wi-Fi connections through a special rtnetlink device.
+
--- /dev/null
+obj-$(CONFIG_MAC80211_HWSIM) += mac80211_hwsim.o
+
+obj-$(CONFIG_VIRT_WIFI) += virt_wifi.o
/* RSSI in rx status of the receiver */
int rx_rssi;
+ /* only used when pmsr capability is supplied */
+ struct cfg80211_pmsr_capabilities pmsr_capa;
+ struct cfg80211_pmsr_request *pmsr_request;
+ struct wireless_dev *pmsr_request_wdev;
+
struct mac80211_hwsim_link_data link_data[IEEE80211_MLD_MAX_NUM_LINKS];
};
__le16 rt_chbitmask;
} __packed;
+static struct mac80211_hwsim_data *get_hwsim_data_ref_from_addr(const u8 *addr)
+{
+ return rhashtable_lookup_fast(&hwsim_radios_rht, addr, hwsim_rht_params);
+}
+
/* MAC80211_HWSIM netlink family */
static struct genl_family hwsim_genl_family;
/* MAC80211_HWSIM netlink policy */
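+/* Nested netlink policies used to validate peer measurement (FTM) capabilities
+ * advertised at radio creation and PMSR results reported back from userspace.
+ */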
+static const struct nla_policy
+hwsim_rate_info_policy[HWSIM_RATE_INFO_ATTR_MAX + 1] = {
+ [HWSIM_RATE_INFO_ATTR_FLAGS] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_MCS] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_LEGACY] = { .type = NLA_U16 },
+ [HWSIM_RATE_INFO_ATTR_NSS] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_BW] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_HE_GI] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_HE_DCM] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_HE_RU_ALLOC] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_N_BOUNDED_CH] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_EHT_GI] = { .type = NLA_U8 },
+ [HWSIM_RATE_INFO_ATTR_EHT_RU_ALLOC] = { .type = NLA_U8 },
+};
+
+static const struct nla_policy
+hwsim_ftm_result_policy[NL80211_PMSR_FTM_RESP_ATTR_MAX + 1] = {
+ [NL80211_PMSR_FTM_RESP_ATTR_FAIL_REASON] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_RESP_ATTR_BURST_INDEX] = { .type = NLA_U16 },
+ [NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_ATTEMPTS] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_SUCCESSES] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_RESP_ATTR_BUSY_RETRY_TIME] = { .type = NLA_U8 },
+ [NL80211_PMSR_FTM_RESP_ATTR_NUM_BURSTS_EXP] = { .type = NLA_U8 },
+ [NL80211_PMSR_FTM_RESP_ATTR_BURST_DURATION] = { .type = NLA_U8 },
+ [NL80211_PMSR_FTM_RESP_ATTR_FTMS_PER_BURST] = { .type = NLA_U8 },
+ [NL80211_PMSR_FTM_RESP_ATTR_RSSI_AVG] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_RESP_ATTR_RSSI_SPREAD] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_RESP_ATTR_TX_RATE] = NLA_POLICY_NESTED(hwsim_rate_info_policy),
+ [NL80211_PMSR_FTM_RESP_ATTR_RX_RATE] = NLA_POLICY_NESTED(hwsim_rate_info_policy),
+ [NL80211_PMSR_FTM_RESP_ATTR_RTT_AVG] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_RTT_VARIANCE] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_RTT_SPREAD] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_DIST_AVG] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_DIST_VARIANCE] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_DIST_SPREAD] = { .type = NLA_U64 },
+ [NL80211_PMSR_FTM_RESP_ATTR_LCI] = { .type = NLA_STRING },
+ [NL80211_PMSR_FTM_RESP_ATTR_CIVICLOC] = { .type = NLA_STRING },
+};
+
+static const struct nla_policy
+hwsim_pmsr_resp_type_policy[NL80211_PMSR_TYPE_MAX + 1] = {
+ [NL80211_PMSR_TYPE_FTM] = NLA_POLICY_NESTED(hwsim_ftm_result_policy),
+};
+
+static const struct nla_policy
+hwsim_pmsr_resp_policy[NL80211_PMSR_RESP_ATTR_MAX + 1] = {
+ [NL80211_PMSR_RESP_ATTR_STATUS] = { .type = NLA_U32 },
+ [NL80211_PMSR_RESP_ATTR_HOST_TIME] = { .type = NLA_U64 },
+ [NL80211_PMSR_RESP_ATTR_AP_TSF] = { .type = NLA_U64 },
+ [NL80211_PMSR_RESP_ATTR_FINAL] = { .type = NLA_FLAG },
+ [NL80211_PMSR_RESP_ATTR_DATA] = NLA_POLICY_NESTED(hwsim_pmsr_resp_type_policy),
+};
+
+static const struct nla_policy
+hwsim_pmsr_peer_result_policy[NL80211_PMSR_PEER_ATTR_MAX + 1] = {
+ [NL80211_PMSR_PEER_ATTR_ADDR] = NLA_POLICY_ETH_ADDR_COMPAT,
+ [NL80211_PMSR_PEER_ATTR_CHAN] = { .type = NLA_REJECT },
+ [NL80211_PMSR_PEER_ATTR_REQ] = { .type = NLA_REJECT },
+ [NL80211_PMSR_PEER_ATTR_RESP] = NLA_POLICY_NESTED(hwsim_pmsr_resp_policy),
+};
+
+static const struct nla_policy
+hwsim_pmsr_peers_result_policy[NL80211_PMSR_ATTR_MAX + 1] = {
+ [NL80211_PMSR_ATTR_MAX_PEERS] = { .type = NLA_REJECT },
+ [NL80211_PMSR_ATTR_REPORT_AP_TSF] = { .type = NLA_REJECT },
+ [NL80211_PMSR_ATTR_RANDOMIZE_MAC_ADDR] = { .type = NLA_REJECT },
+ [NL80211_PMSR_ATTR_TYPE_CAPA] = { .type = NLA_REJECT },
+ [NL80211_PMSR_ATTR_PEERS] = NLA_POLICY_NESTED_ARRAY(hwsim_pmsr_peer_result_policy),
+};
+
+static const struct nla_policy
+hwsim_ftm_capa_policy[NL80211_PMSR_FTM_CAPA_ATTR_MAX + 1] = {
+ [NL80211_PMSR_FTM_CAPA_ATTR_ASAP] = { .type = NLA_FLAG },
+ [NL80211_PMSR_FTM_CAPA_ATTR_NON_ASAP] = { .type = NLA_FLAG },
+ [NL80211_PMSR_FTM_CAPA_ATTR_REQ_LCI] = { .type = NLA_FLAG },
+ [NL80211_PMSR_FTM_CAPA_ATTR_REQ_CIVICLOC] = { .type = NLA_FLAG },
+ [NL80211_PMSR_FTM_CAPA_ATTR_PREAMBLES] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_CAPA_ATTR_BANDWIDTHS] = { .type = NLA_U32 },
+ [NL80211_PMSR_FTM_CAPA_ATTR_MAX_BURSTS_EXPONENT] = NLA_POLICY_MAX(NLA_U8, 15),
+ [NL80211_PMSR_FTM_CAPA_ATTR_MAX_FTMS_PER_BURST] = NLA_POLICY_MAX(NLA_U8, 31),
+ [NL80211_PMSR_FTM_CAPA_ATTR_TRIGGER_BASED] = { .type = NLA_FLAG },
+ [NL80211_PMSR_FTM_CAPA_ATTR_NON_TRIGGER_BASED] = { .type = NLA_FLAG },
+};
+
+static const struct nla_policy
+hwsim_pmsr_capa_type_policy[NL80211_PMSR_TYPE_MAX + 1] = {
+ [NL80211_PMSR_TYPE_FTM] = NLA_POLICY_NESTED(hwsim_ftm_capa_policy),
+};
+
+static const struct nla_policy
+hwsim_pmsr_capa_policy[NL80211_PMSR_ATTR_MAX + 1] = {
+ [NL80211_PMSR_ATTR_MAX_PEERS] = { .type = NLA_U32 },
+ [NL80211_PMSR_ATTR_REPORT_AP_TSF] = { .type = NLA_FLAG },
+ [NL80211_PMSR_ATTR_RANDOMIZE_MAC_ADDR] = { .type = NLA_FLAG },
+ [NL80211_PMSR_ATTR_TYPE_CAPA] = NLA_POLICY_NESTED(hwsim_pmsr_capa_type_policy),
+ [NL80211_PMSR_ATTR_PEERS] = { .type = NLA_REJECT }, // only for request.
+};
+
static const struct nla_policy hwsim_genl_policy[HWSIM_ATTR_MAX + 1] = {
[HWSIM_ATTR_ADDR_RECEIVER] = NLA_POLICY_ETH_ADDR_COMPAT,
[HWSIM_ATTR_ADDR_TRANSMITTER] = NLA_POLICY_ETH_ADDR_COMPAT,
[HWSIM_ATTR_IFTYPE_SUPPORT] = { .type = NLA_U32 },
[HWSIM_ATTR_CIPHER_SUPPORT] = { .type = NLA_BINARY },
[HWSIM_ATTR_MLO_SUPPORT] = { .type = NLA_FLAG },
+ [HWSIM_ATTR_PMSR_SUPPORT] = NLA_POLICY_NESTED(hwsim_pmsr_capa_policy),
+ [HWSIM_ATTR_PMSR_RESULT] = NLA_POLICY_NESTED(hwsim_pmsr_peers_result_policy),
};
#if IS_REACHABLE(CONFIG_VIRTIO)
* the values accordingly.
*/
#ifdef HWSIM_RADIOTAP_OUI
- struct ieee80211_vendor_radiotap *rtap;
+ struct ieee80211_radiotap_vendor_tlv *rtap;
+ static const char vendor_data[8] = "ABCDEFGH";
+
+ // Make sure no padding is needed
+ BUILD_BUG_ON(sizeof(vendor_data) % 4);
+ /* this is the last radiotap info before the mac header, so
+ * skb_reset_mac_header for mac80211 to know the end of
+ * the radiotap TLV/beginning of the 802.11 header
+ */
+ skb_reset_mac_header(skb);
/*
* Note that this code requires the headroom in the SKB
* that was allocated earlier.
*/
- rtap = skb_push(skb, sizeof(*rtap) + 8 + 4);
- rtap->oui[0] = HWSIM_RADIOTAP_OUI[0];
- rtap->oui[1] = HWSIM_RADIOTAP_OUI[1];
- rtap->oui[2] = HWSIM_RADIOTAP_OUI[2];
- rtap->subns = 127;
-
- /*
- * Radiotap vendor namespaces can (and should) also be
- * split into fields by using the standard radiotap
- * presence bitmap mechanism. Use just BIT(0) here for
- * the presence bitmap.
- */
- rtap->present = BIT(0);
- /* We have 8 bytes of (dummy) data */
- rtap->len = 8;
- /* For testing, also require it to be aligned */
- rtap->align = 8;
- /* And also test that padding works, 4 bytes */
- rtap->pad = 4;
- /* push the data */
- memcpy(rtap->data, "ABCDEFGH", 8);
- /* make sure to clear padding, mac80211 doesn't */
- memset(rtap->data + 8, 0, 4);
-
- IEEE80211_SKB_RXCB(skb)->flag |= RX_FLAG_RADIOTAP_VENDOR_DATA;
+ rtap = skb_push(skb, sizeof(*rtap) + sizeof(vendor_data));
+
+ rtap->len = cpu_to_le16(sizeof(*rtap) -
+ sizeof(struct ieee80211_radiotap_tlv) +
+ sizeof(vendor_data));
+ rtap->type = cpu_to_le16(IEEE80211_RADIOTAP_VENDOR_NAMESPACE);
+
+ rtap->content.oui[0] = HWSIM_RADIOTAP_OUI[0];
+ rtap->content.oui[1] = HWSIM_RADIOTAP_OUI[1];
+ rtap->content.oui[2] = HWSIM_RADIOTAP_OUI[2];
+ rtap->content.oui_subtype = 127;
+ /* clear reserved field */
+ rtap->content.reserved = 0;
+ rtap->content.vendor_type = 0;
+ memcpy(rtap->content.data, vendor_data, sizeof(vendor_data));
+
+ IEEE80211_SKB_RXCB(skb)->flag |= RX_FLAG_RADIOTAP_TLV_AT_END;
#endif
}
dev_kfree_skb(skb);
}
-static void mac80211_hwsim_beacon_tx(void *arg, u8 *mac,
- struct ieee80211_vif *vif)
+static void __mac80211_hwsim_beacon_tx(struct ieee80211_bss_conf *link_conf,
+ struct mac80211_hwsim_data *data,
+ struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct sk_buff *skb)
{
- struct mac80211_hwsim_link_data *link_data = arg;
- u32 link_id = link_data->link_id;
- struct ieee80211_bss_conf *link_conf;
- struct mac80211_hwsim_data *data =
- container_of(link_data, struct mac80211_hwsim_data,
- link_data[link_id]);
- struct ieee80211_hw *hw = data->hw;
struct ieee80211_tx_info *info;
struct ieee80211_rate *txrate;
struct ieee80211_mgmt *mgmt;
- struct sk_buff *skb;
/* TODO: get MCS */
int bitrate = 100;
- hwsim_check_magic(vif);
-
- link_conf = rcu_dereference(vif->link_conf[link_id]);
- if (!link_conf)
- return;
-
- if (vif->type != NL80211_IFTYPE_AP &&
- vif->type != NL80211_IFTYPE_MESH_POINT &&
- vif->type != NL80211_IFTYPE_ADHOC &&
- vif->type != NL80211_IFTYPE_OCB)
- return;
-
- skb = ieee80211_beacon_get(hw, vif, link_data->link_id);
- if (skb == NULL)
- return;
info = IEEE80211_SKB_CB(skb);
if (ieee80211_hw_check(hw, SUPPORTS_RC_TABLE))
ieee80211_get_tx_rates(vif, NULL, skb,
mac80211_hwsim_tx_frame(hw, skb,
rcu_dereference(link_conf->chanctx_conf)->def.chan);
+}
+
+static void mac80211_hwsim_beacon_tx(void *arg, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mac80211_hwsim_link_data *link_data = arg;
+ u32 link_id = link_data->link_id;
+ struct ieee80211_bss_conf *link_conf;
+ struct mac80211_hwsim_data *data =
+ container_of(link_data, struct mac80211_hwsim_data,
+ link_data[link_id]);
+ struct ieee80211_hw *hw = data->hw;
+ struct sk_buff *skb;
+
+ hwsim_check_magic(vif);
+
+ link_conf = rcu_dereference(vif->link_conf[link_id]);
+ if (!link_conf)
+ return;
+
+ if (vif->type != NL80211_IFTYPE_AP &&
+ vif->type != NL80211_IFTYPE_MESH_POINT &&
+ vif->type != NL80211_IFTYPE_ADHOC &&
+ vif->type != NL80211_IFTYPE_OCB)
+ return;
+
+ if (vif->mbssid_tx_vif && vif->mbssid_tx_vif != vif)
+ return;
+
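+ /* An EMA AP transmits every beacon in its EMA template list;
+ * otherwise a single beacon template is fetched for this link.
+ */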
+ if (vif->bss_conf.ema_ap) {
+ struct ieee80211_ema_beacons *ema;
+ u8 i = 0;
+
+ ema = ieee80211_beacon_get_template_ema_list(hw, vif, link_id);
+ if (!ema || !ema->cnt)
+ return;
+
+ for (i = 0; i < ema->cnt; i++) {
+ __mac80211_hwsim_beacon_tx(link_conf, data, hw, vif,
+ ema->bcn[i].skb);
+ ema->bcn[i].skb = NULL; /* Already freed */
+ }
+ ieee80211_beacon_free_ema_list(ema);
+ } else {
+ skb = ieee80211_beacon_get(hw, vif, link_id);
+ if (!skb)
+ return;
+
+ __mac80211_hwsim_beacon_tx(link_conf, data, hw, vif, skb);
+ }
while ((skb = ieee80211_get_buffered_bc(hw, vif)) != NULL) {
mac80211_hwsim_tx_frame(hw, skb,
return 0;
}
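+/* Translate the cfg80211 FTM request for one peer into nested
+ * NL80211_PMSR_FTM_REQ_ATTR_* attributes of the outgoing hwsim message.
+ */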
+static int mac80211_hwsim_send_pmsr_ftm_request_peer(struct sk_buff *msg,
+ struct cfg80211_pmsr_ftm_request_peer *request)
+{
+ struct nlattr *ftm;
+
+ if (!request->requested)
+ return -EINVAL;
+
+ ftm = nla_nest_start(msg, NL80211_PMSR_TYPE_FTM);
+ if (!ftm)
+ return -ENOBUFS;
+
+ if (nla_put_u32(msg, NL80211_PMSR_FTM_REQ_ATTR_PREAMBLE, request->preamble))
+ return -ENOBUFS;
+
+ if (nla_put_u16(msg, NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD, request->burst_period))
+ return -ENOBUFS;
+
+ if (request->asap && nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_ASAP))
+ return -ENOBUFS;
+
+ if (request->request_lci && nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_REQUEST_LCI))
+ return -ENOBUFS;
+
+ if (request->request_civicloc &&
+ nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_REQUEST_CIVICLOC))
+ return -ENOBUFS;
+
+ if (request->trigger_based && nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_TRIGGER_BASED))
+ return -ENOBUFS;
+
+ if (request->non_trigger_based &&
+ nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_NON_TRIGGER_BASED))
+ return -ENOBUFS;
+
+ if (request->lmr_feedback && nla_put_flag(msg, NL80211_PMSR_FTM_REQ_ATTR_LMR_FEEDBACK))
+ return -ENOBUFS;
+
+ if (nla_put_u8(msg, NL80211_PMSR_FTM_REQ_ATTR_NUM_BURSTS_EXP, request->num_bursts_exp))
+ return -ENOBUFS;
+
+ if (nla_put_u8(msg, NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION, request->burst_duration))
+ return -ENOBUFS;
+
+ if (nla_put_u8(msg, NL80211_PMSR_FTM_REQ_ATTR_FTMS_PER_BURST, request->ftms_per_burst))
+ return -ENOBUFS;
+
+ if (nla_put_u8(msg, NL80211_PMSR_FTM_REQ_ATTR_NUM_FTMR_RETRIES, request->ftmr_retries))
+ return -ENOBUFS;
+
+ if (nla_put_u8(msg, NL80211_PMSR_FTM_REQ_ATTR_BSS_COLOR, request->bss_color))
+ return -ENOBUFS;
+
+ nla_nest_end(msg, ftm);
+
+ return 0;
+}
+
+static int mac80211_hwsim_send_pmsr_request_peer(struct sk_buff *msg,
+ struct cfg80211_pmsr_request_peer *request)
+{
+ struct nlattr *peer, *chandef, *req, *data;
+ int err;
+
+ peer = nla_nest_start(msg, NL80211_PMSR_ATTR_PEERS);
+ if (!peer)
+ return -ENOBUFS;
+
+ if (nla_put(msg, NL80211_PMSR_PEER_ATTR_ADDR, ETH_ALEN,
+ request->addr))
+ return -ENOBUFS;
+
+ chandef = nla_nest_start(msg, NL80211_PMSR_PEER_ATTR_CHAN);
+ if (!chandef)
+ return -ENOBUFS;
+
+ err = nl80211_send_chandef(msg, &request->chandef);
+ if (err)
+ return err;
+
+ nla_nest_end(msg, chandef);
+
+ req = nla_nest_start(msg, NL80211_PMSR_PEER_ATTR_REQ);
+ if (!req)
+ return -ENOBUFS;
+
+ if (request->report_ap_tsf && nla_put_flag(msg, NL80211_PMSR_REQ_ATTR_GET_AP_TSF))
+ return -ENOBUFS;
+
+ data = nla_nest_start(msg, NL80211_PMSR_REQ_ATTR_DATA);
+ if (!data)
+ return -ENOBUFS;
+
+ err = mac80211_hwsim_send_pmsr_ftm_request_peer(msg, &request->ftm);
+ if (err)
+ return err;
+
+ nla_nest_end(msg, data);
+ nla_nest_end(msg, req);
+ nla_nest_end(msg, peer);
+
+ return 0;
+}
+
+static int mac80211_hwsim_send_pmsr_request(struct sk_buff *msg,
+ struct cfg80211_pmsr_request *request)
+{
+ struct nlattr *pmsr;
+ int err;
+
+ pmsr = nla_nest_start(msg, NL80211_ATTR_PEER_MEASUREMENTS);
+ if (!pmsr)
+ return -ENOBUFS;
+
+ if (nla_put_u32(msg, NL80211_ATTR_TIMEOUT, request->timeout))
+ return -ENOBUFS;
+
+ if (!is_zero_ether_addr(request->mac_addr)) {
+ if (nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, request->mac_addr))
+ return -ENOBUFS;
+ if (nla_put(msg, NL80211_ATTR_MAC_MASK, ETH_ALEN, request->mac_addr_mask))
+ return -ENOBUFS;
+ }
+
+ for (int i = 0; i < request->n_peers; i++) {
+ err = mac80211_hwsim_send_pmsr_request_peer(msg, &request->peers[i]);
+ if (err)
+ return err;
+ }
+
+ nla_nest_end(msg, pmsr);
+
+ return 0;
+}
+
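+/* Forward a peer measurement request to the wmediumd/virtio user space
+ * process as an HWSIM_CMD_START_PMSR message; results come back later
+ * via HWSIM_CMD_REPORT_PMSR.
+ */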
+static int mac80211_hwsim_start_pmsr(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request)
+{
+ struct mac80211_hwsim_data *data;
+ struct sk_buff *skb = NULL;
+ struct nlattr *pmsr;
+ void *msg_head;
+ u32 _portid;
+ int err = 0;
+
+ data = hw->priv;
+ _portid = READ_ONCE(data->wmediumd);
+ if (!_portid && !hwsim_virtio_enabled)
+ return -EOPNOTSUPP;
+
+ mutex_lock(&data->mutex);
+
+ if (data->pmsr_request) {
+ err = -EBUSY;
+ goto out_free;
+ }
+
+ skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+
+ if (!skb) {
+ err = -ENOMEM;
+ goto out_free;
+ }
+
+ msg_head = genlmsg_put(skb, 0, 0, &hwsim_genl_family, 0, HWSIM_CMD_START_PMSR);
+
+ if (nla_put(skb, HWSIM_ATTR_ADDR_TRANSMITTER,
+ ETH_ALEN, data->addresses[1].addr)) {
+ err = -ENOMEM;
+ goto out_free;
+ }
+
+ pmsr = nla_nest_start(skb, HWSIM_ATTR_PMSR_REQUEST);
+ if (!pmsr) {
+ err = -ENOMEM;
+ goto out_free;
+ }
+
+ err = mac80211_hwsim_send_pmsr_request(skb, request);
+ if (err)
+ goto out_free;
+
+ nla_nest_end(skb, pmsr);
+
+ genlmsg_end(skb, msg_head);
+ if (hwsim_virtio_enabled)
+ hwsim_tx_virtio(data, skb);
+ else
+ hwsim_unicast_netgroup(data, skb, _portid);
+
+ data->pmsr_request = request;
+ data->pmsr_request_wdev = ieee80211_vif_to_wdev(vif);
+
+out_free:
+ if (err && skb)
+ nlmsg_free(skb);
+
+ mutex_unlock(&data->mutex);
+ return err;
+}
+
+static void mac80211_hwsim_abort_pmsr(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_pmsr_request *request)
+{
+ struct mac80211_hwsim_data *data;
+ struct sk_buff *skb = NULL;
+ struct nlattr *pmsr;
+ void *msg_head;
+ u32 _portid;
+ int err = 0;
+
+ data = hw->priv;
+ _portid = READ_ONCE(data->wmediumd);
+ if (!_portid && !hwsim_virtio_enabled)
+ return;
+
+ mutex_lock(&data->mutex);
+
+ if (data->pmsr_request != request) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!skb) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ msg_head = genlmsg_put(skb, 0, 0, &hwsim_genl_family, 0, HWSIM_CMD_ABORT_PMSR);
+
+ if (nla_put(skb, HWSIM_ATTR_ADDR_TRANSMITTER, ETH_ALEN, data->addresses[1].addr)) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ pmsr = nla_nest_start(skb, HWSIM_ATTR_PMSR_REQUEST);
+ if (!pmsr) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = mac80211_hwsim_send_pmsr_request(skb, request);
+ if (err)
+ goto out;
+
+ /* nla_nest_end() returns the nest length, not an error code */
+ nla_nest_end(skb, pmsr);
+
+ genlmsg_end(skb, msg_head);
+ if (hwsim_virtio_enabled)
+ hwsim_tx_virtio(data, skb);
+ else
+ hwsim_unicast_netgroup(data, skb, _portid);
+
+out:
+ if (err && skb)
+ nlmsg_free(skb);
+
+ mutex_unlock(&data->mutex);
+}
+
+static int mac80211_hwsim_parse_rate_info(struct nlattr *rateattr,
+ struct rate_info *rate_info,
+ struct genl_info *info)
+{
+ struct nlattr *tb[HWSIM_RATE_INFO_ATTR_MAX + 1];
+ int ret;
+
+ ret = nla_parse_nested(tb, HWSIM_RATE_INFO_ATTR_MAX,
+ rateattr, hwsim_rate_info_policy, info->extack);
+ if (ret)
+ return ret;
+
+ if (tb[HWSIM_RATE_INFO_ATTR_FLAGS])
+ rate_info->flags = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_FLAGS]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_MCS])
+ rate_info->mcs = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_MCS]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_LEGACY])
+ rate_info->legacy = nla_get_u16(tb[HWSIM_RATE_INFO_ATTR_LEGACY]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_NSS])
+ rate_info->nss = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_NSS]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_BW])
+ rate_info->bw = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_BW]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_HE_GI])
+ rate_info->he_gi = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_HE_GI]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_HE_DCM])
+ rate_info->he_dcm = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_HE_DCM]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_HE_RU_ALLOC])
+ rate_info->he_ru_alloc =
+ nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_HE_RU_ALLOC]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_N_BOUNDED_CH])
+ rate_info->n_bonded_ch = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_N_BOUNDED_CH]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_EHT_GI])
+ rate_info->eht_gi = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_EHT_GI]);
+
+ if (tb[HWSIM_RATE_INFO_ATTR_EHT_RU_ALLOC])
+ rate_info->eht_ru_alloc = nla_get_u8(tb[HWSIM_RATE_INFO_ATTR_EHT_RU_ALLOC]);
+
+ return 0;
+}
+
+static int mac80211_hwsim_parse_ftm_result(struct nlattr *ftm,
+ struct cfg80211_pmsr_ftm_result *result,
+ struct genl_info *info)
+{
+ struct nlattr *tb[NL80211_PMSR_FTM_RESP_ATTR_MAX + 1];
+ int ret;
+
+ ret = nla_parse_nested(tb, NL80211_PMSR_FTM_RESP_ATTR_MAX,
+ ftm, hwsim_ftm_result_policy, info->extack);
+ if (ret)
+ return ret;
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_FAIL_REASON])
+ result->failure_reason = nla_get_u32(tb[NL80211_PMSR_FTM_RESP_ATTR_FAIL_REASON]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_BURST_INDEX])
+ result->burst_index = nla_get_u16(tb[NL80211_PMSR_FTM_RESP_ATTR_BURST_INDEX]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_ATTEMPTS]) {
+ result->num_ftmr_attempts_valid = 1;
+ result->num_ftmr_attempts =
+ nla_get_u32(tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_ATTEMPTS]);
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_SUCCESSES]) {
+ result->num_ftmr_successes_valid = 1;
+ result->num_ftmr_successes =
+ nla_get_u32(tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_FTMR_SUCCESSES]);
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_BUSY_RETRY_TIME])
+ result->busy_retry_time =
+ nla_get_u8(tb[NL80211_PMSR_FTM_RESP_ATTR_BUSY_RETRY_TIME]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_BURSTS_EXP])
+ result->num_bursts_exp = nla_get_u8(tb[NL80211_PMSR_FTM_RESP_ATTR_NUM_BURSTS_EXP]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_BURST_DURATION])
+ result->burst_duration = nla_get_u8(tb[NL80211_PMSR_FTM_RESP_ATTR_BURST_DURATION]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_FTMS_PER_BURST])
+ result->ftms_per_burst = nla_get_u8(tb[NL80211_PMSR_FTM_RESP_ATTR_FTMS_PER_BURST]);
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RSSI_AVG]) {
+ result->rssi_avg_valid = 1;
+ result->rssi_avg = nla_get_s32(tb[NL80211_PMSR_FTM_RESP_ATTR_RSSI_AVG]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RSSI_SPREAD]) {
+ result->rssi_spread_valid = 1;
+ result->rssi_spread =
+ nla_get_s32(tb[NL80211_PMSR_FTM_RESP_ATTR_RSSI_SPREAD]);
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_TX_RATE]) {
+ result->tx_rate_valid = 1;
+ ret = mac80211_hwsim_parse_rate_info(tb[NL80211_PMSR_FTM_RESP_ATTR_TX_RATE],
+ &result->tx_rate, info);
+ if (ret)
+ return ret;
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RX_RATE]) {
+ result->rx_rate_valid = 1;
+ ret = mac80211_hwsim_parse_rate_info(tb[NL80211_PMSR_FTM_RESP_ATTR_RX_RATE],
+ &result->rx_rate, info);
+ if (ret)
+ return ret;
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_AVG]) {
+ result->rtt_avg_valid = 1;
+ result->rtt_avg =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_AVG]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_VARIANCE]) {
+ result->rtt_variance_valid = 1;
+ result->rtt_variance =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_VARIANCE]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_SPREAD]) {
+ result->rtt_spread_valid = 1;
+ result->rtt_spread =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_RTT_SPREAD]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_AVG]) {
+ result->dist_avg_valid = 1;
+ result->dist_avg =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_AVG]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_VARIANCE]) {
+ result->dist_variance_valid = 1;
+ result->dist_variance =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_VARIANCE]);
+ }
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_SPREAD]) {
+ result->dist_spread_valid = 1;
+ result->dist_spread =
+ nla_get_u64(tb[NL80211_PMSR_FTM_RESP_ATTR_DIST_SPREAD]);
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_LCI]) {
+ result->lci = nla_data(tb[NL80211_PMSR_FTM_RESP_ATTR_LCI]);
+ result->lci_len = nla_len(tb[NL80211_PMSR_FTM_RESP_ATTR_LCI]);
+ }
+
+ if (tb[NL80211_PMSR_FTM_RESP_ATTR_CIVICLOC]) {
+ result->civicloc = nla_data(tb[NL80211_PMSR_FTM_RESP_ATTR_CIVICLOC]);
+ result->civicloc_len = nla_len(tb[NL80211_PMSR_FTM_RESP_ATTR_CIVICLOC]);
+ }
+
+ return 0;
+}
+
+static int mac80211_hwsim_parse_pmsr_resp(struct nlattr *resp,
+ struct cfg80211_pmsr_result *result,
+ struct genl_info *info)
+{
+ struct nlattr *tb[NL80211_PMSR_RESP_ATTR_MAX + 1];
+ struct nlattr *pmsr;
+ int rem;
+ int ret;
+
+ ret = nla_parse_nested(tb, NL80211_PMSR_RESP_ATTR_MAX, resp, hwsim_pmsr_resp_policy,
+ info->extack);
+ if (ret)
+ return ret;
+
+ if (tb[NL80211_PMSR_RESP_ATTR_STATUS])
+ result->status = nla_get_u32(tb[NL80211_PMSR_RESP_ATTR_STATUS]);
+
+ if (tb[NL80211_PMSR_RESP_ATTR_HOST_TIME])
+ result->host_time = nla_get_u64(tb[NL80211_PMSR_RESP_ATTR_HOST_TIME]);
+
+ if (tb[NL80211_PMSR_RESP_ATTR_AP_TSF]) {
+ result->ap_tsf_valid = 1;
+ result->ap_tsf = nla_get_u64(tb[NL80211_PMSR_RESP_ATTR_AP_TSF]);
+ }
+
+ result->final = !!tb[NL80211_PMSR_RESP_ATTR_FINAL];
+
+ if (!tb[NL80211_PMSR_RESP_ATTR_DATA])
+ return 0;
+
+ nla_for_each_nested(pmsr, tb[NL80211_PMSR_RESP_ATTR_DATA], rem) {
+ switch (nla_type(pmsr)) {
+ case NL80211_PMSR_TYPE_FTM:
+ result->type = NL80211_PMSR_TYPE_FTM;
+ ret = mac80211_hwsim_parse_ftm_result(pmsr, &result->ftm, info);
+ if (ret)
+ return ret;
+ break;
+ default:
+ NL_SET_ERR_MSG_ATTR(info->extack, pmsr, "Unknown pmsr resp type");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static int mac80211_hwsim_parse_pmsr_result(struct nlattr *peer,
+ struct cfg80211_pmsr_result *result,
+ struct genl_info *info)
+{
+ struct nlattr *tb[NL80211_PMSR_PEER_ATTR_MAX + 1];
+ int ret;
+
+ if (!peer)
+ return -EINVAL;
+
+ ret = nla_parse_nested(tb, NL80211_PMSR_PEER_ATTR_MAX, peer,
+ hwsim_pmsr_peer_result_policy, info->extack);
+ if (ret)
+ return ret;
+
+ if (tb[NL80211_PMSR_PEER_ATTR_ADDR])
+ memcpy(result->addr, nla_data(tb[NL80211_PMSR_PEER_ATTR_ADDR]),
+ ETH_ALEN);
+
+ if (tb[NL80211_PMSR_PEER_ATTR_RESP]) {
+ ret = mac80211_hwsim_parse_pmsr_resp(tb[NL80211_PMSR_PEER_ATTR_RESP], result, info);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
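+/* Handle HWSIM_CMD_REPORT_PMSR from user space: parse the reported peer
+ * results, forward them to cfg80211 and complete the pending request.
+ */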
+static int hwsim_pmsr_report_nl(struct sk_buff *msg, struct genl_info *info)
+{
+ struct mac80211_hwsim_data *data;
+ struct nlattr *peers, *peer;
+ struct nlattr *reqattr;
+ const u8 *src;
+ int err;
+ int rem;
+
+ if (!info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER])
+ return -EINVAL;
+
+ src = nla_data(info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER]);
+ data = get_hwsim_data_ref_from_addr(src);
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ if (!data->pmsr_request) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ reqattr = info->attrs[HWSIM_ATTR_PMSR_RESULT];
+ if (!reqattr) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ peers = nla_find_nested(reqattr, NL80211_PMSR_ATTR_PEERS);
+ if (!peers) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ nla_for_each_nested(peer, peers, rem) {
+ struct cfg80211_pmsr_result result = {};
+
+ err = mac80211_hwsim_parse_pmsr_result(peer, &result, info);
+ if (err)
+ goto out;
+
+ cfg80211_pmsr_report(data->pmsr_request_wdev,
+ data->pmsr_request, &result, GFP_KERNEL);
+ }
+
+ cfg80211_pmsr_complete(data->pmsr_request_wdev, data->pmsr_request, GFP_KERNEL);
+
+ err = 0;
+out:
+ data->pmsr_request = NULL;
+ data->pmsr_request_wdev = NULL;
+
+ mutex_unlock(&data->mutex);
+ return err;
+}
+
#define HWSIM_COMMON_OPS \
.tx = mac80211_hwsim_tx, \
.wake_tx_queue = ieee80211_handle_wake_tx_queue, \
.flush = mac80211_hwsim_flush, \
.get_et_sset_count = mac80211_hwsim_get_et_sset_count, \
.get_et_stats = mac80211_hwsim_get_et_stats, \
- .get_et_strings = mac80211_hwsim_get_et_strings,
+ .get_et_strings = mac80211_hwsim_get_et_strings, \
+ .start_pmsr = mac80211_hwsim_start_pmsr, \
+ .abort_pmsr = mac80211_hwsim_abort_pmsr,
#define HWSIM_NON_MLO_OPS \
.sta_add = mac80211_hwsim_sta_add, \
u32 *ciphers;
u8 n_ciphers;
bool mlo;
+ const struct cfg80211_pmsr_capabilities *pmsr_capa;
};
static void hwsim_mcast_config_msg(struct sk_buff *mcast_skb,
hw->wiphy->n_cipher_suites = param->n_ciphers;
}
+ hw->wiphy->mbssid_max_interfaces = 8;
+ hw->wiphy->ema_max_profile_periodicity = 3;
+
data->rx_rssi = DEFAULT_RX_RSSI;
INIT_DELAYED_WORK(&data->roc_start, hw_roc_start);
NL80211_EXT_FEATURE_MULTICAST_REGISTRATIONS);
wiphy_ext_feature_set(hw->wiphy,
NL80211_EXT_FEATURE_BEACON_RATE_LEGACY);
+ wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_ENABLE_FTM_RESPONDER);
+
+ wiphy_ext_feature_set(hw->wiphy,
+ NL80211_EXT_FEATURE_SCAN_MIN_PREQ_CONTENT);
hw->wiphy->interface_modes = param->iftypes;
data->debugfs,
data, &hwsim_simulate_radar);
+ if (param->pmsr_capa) {
+ data->pmsr_capa = *param->pmsr_capa;
+ hw->wiphy->pmsr_capa = &data->pmsr_capa;
+ }
+
spin_lock_bh(&hwsim_radio_lock);
err = rhashtable_insert_fast(&hwsim_radios_rht, &data->rht,
hwsim_rht_params);
param.regd = data->regd;
param.channels = data->channels;
param.hwname = wiphy_name(data->hw->wiphy);
+ param.pmsr_capa = &data->pmsr_capa;
res = append_radio_msg(skb, data->idx, ¶m);
if (res < 0)
eth_hw_addr_set(dev, addr);
}
-static struct mac80211_hwsim_data *get_hwsim_data_ref_from_addr(const u8 *addr)
-{
- return rhashtable_lookup_fast(&hwsim_radios_rht,
- addr,
- hwsim_rht_params);
-}
-
static void hwsim_register_wmediumd(struct net *net, u32 portid)
{
struct mac80211_hwsim_data *data;
return true;
}
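+/* Parse the user-supplied FTM capability attributes into the
+ * cfg80211_pmsr_capabilities advertised for the new radio.
+ */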
+static int parse_ftm_capa(const struct nlattr *ftm_capa, struct cfg80211_pmsr_capabilities *out,
+ struct genl_info *info)
+{
+ struct nlattr *tb[NL80211_PMSR_FTM_CAPA_ATTR_MAX + 1];
+ int ret;
+
+ ret = nla_parse_nested(tb, NL80211_PMSR_FTM_CAPA_ATTR_MAX, ftm_capa, hwsim_ftm_capa_policy,
+ NULL);
+ if (ret) {
+ NL_SET_ERR_MSG_ATTR(info->extack, ftm_capa, "malformed FTM capability");
+ return -EINVAL;
+ }
+
+ out->ftm.supported = 1;
+ if (tb[NL80211_PMSR_FTM_CAPA_ATTR_PREAMBLES])
+ out->ftm.preambles = nla_get_u32(tb[NL80211_PMSR_FTM_CAPA_ATTR_PREAMBLES]);
+ if (tb[NL80211_PMSR_FTM_CAPA_ATTR_BANDWIDTHS])
+ out->ftm.bandwidths = nla_get_u32(tb[NL80211_PMSR_FTM_CAPA_ATTR_BANDWIDTHS]);
+ if (tb[NL80211_PMSR_FTM_CAPA_ATTR_MAX_BURSTS_EXPONENT])
+ out->ftm.max_bursts_exponent =
+ nla_get_u8(tb[NL80211_PMSR_FTM_CAPA_ATTR_MAX_BURSTS_EXPONENT]);
+ if (tb[NL80211_PMSR_FTM_CAPA_ATTR_MAX_FTMS_PER_BURST])
+ out->ftm.max_ftms_per_burst =
+ nla_get_u8(tb[NL80211_PMSR_FTM_CAPA_ATTR_MAX_FTMS_PER_BURST]);
+ out->ftm.asap = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_ASAP];
+ out->ftm.non_asap = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_NON_ASAP];
+ out->ftm.request_lci = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_REQ_LCI];
+ out->ftm.request_civicloc = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_REQ_CIVICLOC];
+ out->ftm.trigger_based = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_TRIGGER_BASED];
+ out->ftm.non_trigger_based = !!tb[NL80211_PMSR_FTM_CAPA_ATTR_NON_TRIGGER_BASED];
+
+ return 0;
+}
+
+static int parse_pmsr_capa(const struct nlattr *pmsr_capa, struct cfg80211_pmsr_capabilities *out,
+ struct genl_info *info)
+{
+ struct nlattr *tb[NL80211_PMSR_ATTR_MAX + 1];
+ struct nlattr *nla;
+ int size;
+ int ret;
+
+ ret = nla_parse_nested(tb, NL80211_PMSR_ATTR_MAX, pmsr_capa, hwsim_pmsr_capa_policy, NULL);
+ if (ret) {
+ NL_SET_ERR_MSG_ATTR(info->extack, pmsr_capa, "malformed PMSR capability");
+ return -EINVAL;
+ }
+
+ if (tb[NL80211_PMSR_ATTR_MAX_PEERS])
+ out->max_peers = nla_get_u32(tb[NL80211_PMSR_ATTR_MAX_PEERS]);
+ out->report_ap_tsf = !!tb[NL80211_PMSR_ATTR_REPORT_AP_TSF];
+ out->randomize_mac_addr = !!tb[NL80211_PMSR_ATTR_RANDOMIZE_MAC_ADDR];
+
+ if (!tb[NL80211_PMSR_ATTR_TYPE_CAPA]) {
+ NL_SET_ERR_MSG_ATTR(info->extack, tb[NL80211_PMSR_ATTR_TYPE_CAPA],
+ "malformed PMSR type");
+ return -EINVAL;
+ }
+
+ nla_for_each_nested(nla, tb[NL80211_PMSR_ATTR_TYPE_CAPA], size) {
+ switch (nla_type(nla)) {
+ case NL80211_PMSR_TYPE_FTM:
+ parse_ftm_capa(nla, out, info);
+ break;
+ default:
+ NL_SET_ERR_MSG_ATTR(info->extack, nla, "unsupported measurement type");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
{
struct hwsim_new_radio_params param = { 0 };
param.hwname = hwname;
}
+ if (info->attrs[HWSIM_ATTR_PMSR_SUPPORT]) {
+ struct cfg80211_pmsr_capabilities *pmsr_capa;
+
+ pmsr_capa = kzalloc(sizeof(*pmsr_capa), GFP_KERNEL);
+ if (!pmsr_capa) {
+ ret = -ENOMEM;
+ goto out_free;
+ }
+ ret = parse_pmsr_capa(info->attrs[HWSIM_ATTR_PMSR_SUPPORT], pmsr_capa, info);
+ if (ret)
+ goto out_free;
+ param.pmsr_capa = pmsr_capa;
+ }
+
ret = mac80211_hwsim_new_radio(info, ¶m);
+
+out_free:
kfree(hwname);
+ kfree(param.pmsr_capa);
return ret;
}
.doit = hwsim_get_radio_nl,
.dumpit = hwsim_dump_radio_nl,
},
+ {
+ .cmd = HWSIM_CMD_REPORT_PMSR,
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = hwsim_pmsr_report_nl,
+ },
};
static struct genl_family hwsim_genl_family __ro_after_init = {
.module = THIS_MODULE,
.small_ops = hwsim_ops,
.n_small_ops = ARRAY_SIZE(hwsim_ops),
- .resv_start_op = HWSIM_CMD_DEL_MAC_ADDR + 1,
+ .resv_start_op = HWSIM_CMD_REPORT_PMSR + 1, // match with __HWSIM_CMD_MAX
.mcgrps = hwsim_mcgrps,
.n_mcgrps = ARRAY_SIZE(hwsim_mcgrps),
};
case HWSIM_CMD_TX_INFO_FRAME:
hwsim_tx_info_frame_received_nl(skb, &info);
break;
+ case HWSIM_CMD_REPORT_PMSR:
+ hwsim_pmsr_report_nl(skb, &info);
+ break;
default:
pr_err_ratelimited("hwsim: invalid cmd: %d\n", gnlh->cmd);
return -EPROTO;
* to this receiver address for a given station.
* @HWSIM_CMD_DEL_MAC_ADDR: remove the MAC address again, the attributes
* are the same as to @HWSIM_CMD_ADD_MAC_ADDR.
+ * @HWSIM_CMD_START_PMSR: request to start peer measurement with the
+ * %HWSIM_ATTR_PMSR_REQUEST. Result will be sent back asynchronously
+ * with %HWSIM_CMD_REPORT_PMSR.
+ * @HWSIM_CMD_ABORT_PMSR: abort a previously started peer measurement
+ * @HWSIM_CMD_REPORT_PMSR: report peer measurement data requested with
+ * %HWSIM_CMD_START_PMSR
* @__HWSIM_CMD_MAX: enum limit
*/
enum {
HWSIM_CMD_GET_RADIO,
HWSIM_CMD_ADD_MAC_ADDR,
HWSIM_CMD_DEL_MAC_ADDR,
+ HWSIM_CMD_START_PMSR,
+ HWSIM_CMD_ABORT_PMSR,
+ HWSIM_CMD_REPORT_PMSR,
__HWSIM_CMD_MAX,
};
#define HWSIM_CMD_MAX (_HWSIM_CMD_MAX - 1)
* @HWSIM_ATTR_CIPHER_SUPPORT: u32 array of supported cipher types
* @HWSIM_ATTR_MLO_SUPPORT: claim MLO support (exact parameters TBD) for
* the new radio
+ * @HWSIM_ATTR_PMSR_SUPPORT: nested attribute used with %HWSIM_CMD_CREATE_RADIO
+ * to provide peer measurement capabilities. (nl80211_peer_measurement_attrs)
+ * @HWSIM_ATTR_PMSR_REQUEST: nested attribute used with %HWSIM_CMD_START_PMSR
+ * to provide details about peer measurement request (nl80211_peer_measurement_attrs)
+ * @HWSIM_ATTR_PMSR_RESULT: nested attribute used with %HWSIM_CMD_REPORT_PMSR
+ * to provide peer measurement result (nl80211_peer_measurement_attrs)
* @__HWSIM_ATTR_MAX: enum limit
*/
HWSIM_ATTR_IFTYPE_SUPPORT,
HWSIM_ATTR_CIPHER_SUPPORT,
HWSIM_ATTR_MLO_SUPPORT,
+ HWSIM_ATTR_PMSR_SUPPORT,
+ HWSIM_ATTR_PMSR_REQUEST,
+ HWSIM_ATTR_PMSR_RESULT,
__HWSIM_ATTR_MAX,
};
#define HWSIM_ATTR_MAX (__HWSIM_ATTR_MAX - 1)
HWSIM_VQ_RX,
HWSIM_NUM_VQS,
};
+
+/**
+ * enum hwsim_rate_info_attributes - bitrate information.
+ *
+ * Information about a receiving or transmitting bitrate
+ * that can be mapped to struct rate_info
+ *
+ * @HWSIM_RATE_INFO_ATTR_FLAGS: bitflag of flags from &enum rate_info_flags
+ * @HWSIM_RATE_INFO_ATTR_MCS: mcs index if struct describes an HT/VHT/HE rate
+ * @HWSIM_RATE_INFO_ATTR_LEGACY: bitrate in 100kbit/s for 802.11abg
+ * @HWSIM_RATE_INFO_ATTR_NSS: number of streams (VHT & HE only)
+ * @HWSIM_RATE_INFO_ATTR_BW: bandwidth (from &enum rate_info_bw)
+ * @HWSIM_RATE_INFO_ATTR_HE_GI: HE guard interval (from &enum nl80211_he_gi)
+ * @HWSIM_RATE_INFO_ATTR_HE_DCM: HE DCM value
+ * @HWSIM_RATE_INFO_ATTR_HE_RU_ALLOC: HE RU allocation (from &enum nl80211_he_ru_alloc,
+ * only valid if bw is %RATE_INFO_BW_HE_RU)
+ * @HWSIM_RATE_INFO_ATTR_N_BOUNDED_CH: In case of EDMG the number of bonded channels (1-4)
+ * @HWSIM_RATE_INFO_ATTR_EHT_GI: EHT guard interval (from &enum nl80211_eht_gi)
+ * @HWSIM_RATE_INFO_ATTR_EHT_RU_ALLOC: EHT RU allocation (from &enum nl80211_eht_ru_alloc,
+ * only valid if bw is %RATE_INFO_BW_EHT_RU)
+ * @NUM_HWSIM_RATE_INFO_ATTRS: internal
+ * @HWSIM_RATE_INFO_ATTR_MAX: highest attribute number
+ */
+enum hwsim_rate_info_attributes {
+ __HWSIM_RATE_INFO_ATTR_INVALID,
+
+ HWSIM_RATE_INFO_ATTR_FLAGS,
+ HWSIM_RATE_INFO_ATTR_MCS,
+ HWSIM_RATE_INFO_ATTR_LEGACY,
+ HWSIM_RATE_INFO_ATTR_NSS,
+ HWSIM_RATE_INFO_ATTR_BW,
+ HWSIM_RATE_INFO_ATTR_HE_GI,
+ HWSIM_RATE_INFO_ATTR_HE_DCM,
+ HWSIM_RATE_INFO_ATTR_HE_RU_ALLOC,
+ HWSIM_RATE_INFO_ATTR_N_BOUNDED_CH,
+ HWSIM_RATE_INFO_ATTR_EHT_GI,
+ HWSIM_RATE_INFO_ATTR_EHT_RU_ALLOC,
+
+ /* keep last */
+ NUM_HWSIM_RATE_INFO_ATTRS,
+ HWSIM_RATE_INFO_ATTR_MAX = NUM_HWSIM_RATE_INFO_ATTRS - 1
+};
+
#endif /* __MAC80211_HWSIM_H */
ipc_port->ipc_imem = ipc_imem;
ipc_port->iosm_port = wwan_create_port(ipc_port->dev, port_type,
- &ipc_wwan_ctrl_ops, ipc_port);
+ &ipc_wwan_ctrl_ops, NULL,
+ ipc_port);
return ipc_port;
}
/* Register as a wwan port, id->driver_data contains wwan port type */
port = wwan_create_port(&cntrl->mhi_dev->dev, id->driver_data,
- &wwan_pops, mhiwwan);
+ &wwan_pops, NULL, mhiwwan);
if (IS_ERR(port)) {
kfree(mhiwwan);
return PTR_ERR(port);
/* Register as a wwan port, id.driver_data contains wwan port type */
port = wwan_create_port(parent, rpdev->id.driver_data,
- &rpmsg_wwan_pops, rpwwan);
+ &rpmsg_wwan_pops, NULL, rpwwan);
if (IS_ERR(port))
return PTR_ERR(port);
static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
{
struct t7xx_port *port_private = wwan_port_get_drvdata(port);
- size_t len, offset, chunk_len = 0, txq_mtu = CLDMA_MTU;
const struct t7xx_port_conf *port_conf;
+ struct sk_buff *cur = skb, *cloned;
struct t7xx_fsm_ctl *ctl;
enum md_state md_state;
+ int cnt = 0, ret;
- len = skb->len;
- if (!len || !port_private->chan_enable)
+ if (!port_private->chan_enable)
return -EINVAL;
port_conf = port_private->port_conf;
return -ENODEV;
}
- for (offset = 0; offset < len; offset += chunk_len) {
- struct sk_buff *skb_ccci;
- int ret;
-
- chunk_len = min(len - offset, txq_mtu - sizeof(struct ccci_header));
- skb_ccci = t7xx_port_alloc_skb(chunk_len);
- if (!skb_ccci)
- return -ENOMEM;
-
- skb_put_data(skb_ccci, skb->data + offset, chunk_len);
- ret = t7xx_port_send_skb(port_private, skb_ccci, 0, 0);
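+ /* Send the linear part of the head skb first, then each fragment
+ * queued on its frag_list, as individual CLDMA packets.
+ */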
+ while (cur) {
+ cloned = skb_clone(cur, GFP_KERNEL);
+ if (!cloned)
+ return cnt ? cnt : -ENOMEM;
+
+ cloned->len = skb_headlen(cur);
+ ret = t7xx_port_send_skb(port_private, cloned, 0, 0);
if (ret) {
- dev_kfree_skb_any(skb_ccci);
+ dev_kfree_skb(cloned);
dev_err(port_private->dev, "Write error on %s port, %d\n",
port_conf->name, ret);
- return ret;
+ return cnt ? cnt + ret : ret;
}
+ cnt += cur->len;
+ if (cur == skb)
+ cur = skb_shinfo(skb)->frag_list;
+ else
+ cur = cur->next;
}
dev_kfree_skb(skb);
static void t7xx_port_wwan_md_state_notify(struct t7xx_port *port, unsigned int state)
{
const struct t7xx_port_conf *port_conf = port->port_conf;
+ unsigned int header_len = sizeof(struct ccci_header);
+ struct wwan_port_caps caps;
if (state != MD_STATE_READY)
return;
if (!port->wwan.wwan_port) {
+ caps.frag_len = CLDMA_MTU - header_len;
+ caps.headroom_len = header_len;
port->wwan.wwan_port = wwan_create_port(port->dev, port_conf->port_type,
- &wwan_ops, port);
+ &wwan_ops, &caps, port);
if (IS_ERR(port->wwan.wwan_port))
dev_err(port->dev, "Unable to create WWWAN port %s", port_conf->name);
}
* @rxq: Buffer inbound queue
* @waitqueue: The waitqueue for port fops (read/write/poll)
* @data_lock: Port specific data access serialization
+ * @headroom_len: SKB reserved headroom size
+ * @frag_len: Length to fragment packet
* @at_data: AT port specific data
*/
struct wwan_port {
struct sk_buff_head rxq;
wait_queue_head_t waitqueue;
struct mutex data_lock; /* Port specific data access serialization */
+ size_t headroom_len;
+ size_t frag_len;
union {
struct {
struct ktermios termios;
struct wwan_port *wwan_create_port(struct device *parent,
enum wwan_port_type type,
const struct wwan_port_ops *ops,
+ struct wwan_port_caps *caps,
void *drvdata)
{
struct wwan_device *wwandev;
port->type = type;
port->ops = ops;
+ port->frag_len = caps ? caps->frag_len : SIZE_MAX;
+ port->headroom_len = caps ? caps->headroom_len : 0;
mutex_init(&port->ops_lock);
skb_queue_head_init(&port->rxq);
init_waitqueue_head(&port->waitqueue);
static ssize_t wwan_port_fops_write(struct file *filp, const char __user *buf,
size_t count, loff_t *offp)
{
+ struct sk_buff *skb, *head = NULL, *tail = NULL;
struct wwan_port *port = filp->private_data;
- struct sk_buff *skb;
+ size_t frag_len, remain = count;
int ret;
ret = wwan_wait_tx(port, !!(filp->f_flags & O_NONBLOCK));
if (ret)
return ret;
- skb = alloc_skb(count, GFP_KERNEL);
- if (!skb)
- return -ENOMEM;
+ do {
+ frag_len = min(remain, port->frag_len);
+ skb = alloc_skb(frag_len + port->headroom_len, GFP_KERNEL);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto freeskb;
+ }
+ skb_reserve(skb, port->headroom_len);
+
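+ /* Chain the fragments: the first skb becomes the head, later
+ * fragments are linked on its frag_list.
+ */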
+ if (!head) {
+ head = skb;
+ } else if (!tail) {
+ skb_shinfo(head)->frag_list = skb;
+ tail = skb;
+ } else {
+ tail->next = skb;
+ tail = skb;
+ }
- if (copy_from_user(skb_put(skb, count), buf, count)) {
- kfree_skb(skb);
- return -EFAULT;
- }
+ if (copy_from_user(skb_put(skb, frag_len), buf + count - remain, frag_len)) {
+ ret = -EFAULT;
+ goto freeskb;
+ }
- ret = wwan_port_op_tx(port, skb, !!(filp->f_flags & O_NONBLOCK));
- if (ret) {
- kfree_skb(skb);
- return ret;
- }
+ if (skb != head) {
+ head->data_len += skb->len;
+ head->len += skb->len;
+ head->truesize += skb->truesize;
+ }
+ } while (remain -= frag_len);
+
+ ret = wwan_port_op_tx(port, head, !!(filp->f_flags & O_NONBLOCK));
+ if (!ret)
+ return count;
- return count;
+freeskb:
+ kfree_skb(head);
+ return ret;
}
static __poll_t wwan_port_fops_poll(struct file *filp, poll_table *wait)
port->wwan = wwan_create_port(&dev->dev, WWAN_PORT_AT,
&wwan_hwsim_port_ops,
- port);
+ NULL, port);
if (IS_ERR(port->wwan)) {
err = PTR_ERR(port->wwan);
goto err_free_port;
return ret;
}
- if (of_find_property(node, "i2c-int-falling", NULL))
+ if (of_property_read_bool(node, "i2c-int-falling"))
pdata->irq_polarity = IRQF_TRIGGER_FALLING;
else
pdata->irq_polarity = IRQF_TRIGGER_RISING;
return reset_n_io;
}
pdata->reset_n_io = reset_n_io;
-
- if (of_find_property(node, "hci-muxed", NULL))
- pdata->hci_muxed = 1;
- else
- pdata->hci_muxed = 0;
+ pdata->hci_muxed = of_property_read_bool(node, "hci-muxed");
return 0;
}
#ifndef _NFCMRVL_H_
#define _NFCMRVL_H_
-#include <linux/platform_data/nfcmrvl.h>
-
#include "fw_dnld.h"
/* Define private flags: */
NFCMRVL_PHY_SPI = 3,
};
+struct nfcmrvl_platform_data {
+ /*
+ * Generic
+ */
+
+ /* GPIO that is wired to RESET_N signal */
+ int reset_n_io;
+ /* Tell if transport is muxed in HCI one */
+ bool hci_muxed;
+
+ /*
+ * UART specific
+ */
+
+ /* Tell if UART needs flow control at init */
+ bool flow_control;
+ /* Tell if firmware supports break control for power management */
+ bool break_control;
+
+
+ /*
+ * I2C specific
+ */
+
+ unsigned int irq;
+ unsigned int irq_polarity;
+};
+
struct nfcmrvl_private {
unsigned long flags;
return ret;
}
- if (of_find_property(matched_node, "flow-control", NULL))
- pdata->flow_control = 1;
- else
- pdata->flow_control = 0;
-
- if (of_find_property(matched_node, "break-control", NULL))
- pdata->break_control = 1;
- else
- pdata->break_control = 0;
+ pdata->flow_control = of_property_read_bool(matched_node, "flow-control");
+ pdata->break_control = of_property_read_bool(matched_node, "break-control");
of_node_put(matched_node);
trf7970a_pm_runtime_resume, NULL)
};
-static const struct of_device_id trf7970a_of_match[] = {
+static const struct of_device_id trf7970a_of_match[] __maybe_unused = {
{.compatible = "ti,trf7970a",},
{},
};
{
struct phy_provider *provider;
struct serdes_ctrl *ctrl;
+ struct resource *res;
unsigned int i;
int ret;
ctrl->dev = &pdev->dev;
ctrl->regs = syscon_node_to_regmap(pdev->dev.parent->of_node);
+ if (IS_ERR(ctrl->regs)) {
+ /* Fall back to using IORESOURCE_REG, if possible */
+ res = platform_get_resource(pdev, IORESOURCE_REG, 0);
+ if (res)
+ ctrl->regs = dev_get_regmap(ctrl->dev->parent,
+ res->name);
+ }
+
if (IS_ERR(ctrl->regs))
return PTR_ERR(ctrl->regs);
More information is available at http://www.timingcard.com/
+config PTP_DFL_TOD
+ tristate "FPGA DFL ToD Driver"
+ depends on FPGA_DFL
+ depends on PTP_1588_CLOCK
+ help
+ The DFL (Device Feature List) device driver for the Intel ToD
+ (Time-of-Day) device on FPGA cards. The ToD IP within the FPGA
+ is exposed as a PTP Hardware Clock (PHC) device to the Linux PTP
+ stack, so the system clock can be synchronized to its ToD
+ information using the phc2sys utility of the Linux PTP stack.
+
+ To compile this driver as a module, choose M here: the module
+ will be called ptp_dfl_tod.
+
endmenu
obj-$(CONFIG_PTP_1588_CLOCK_IDT82P33) += ptp_idt82p33.o
obj-$(CONFIG_PTP_1588_CLOCK_VMW) += ptp_vmw.o
obj-$(CONFIG_PTP_1588_CLOCK_OCP) += ptp_ocp.o
+obj-$(CONFIG_PTP_DFL_TOD) += ptp_dfl_tod.o
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * DFL device driver for Time-of-Day (ToD) private feature
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dfl.h>
+#include <linux/gcd.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/spinlock.h>
+#include <linux/units.h>
+
+#define FME_FEATURE_ID_TOD 0x22
+
+/* ToD clock register space. */
+#define TOD_CLK_FREQ 0x038
+
+/*
+ * The ToD timestamp registers are read in the order TOD_NANOSEC,
+ * TOD_SECONDSL and TOD_SECONDSH, because hardware takes a snapshot of all
+ * three whenever the TOD_NANOSEC register is read.
+ *
+ * The ToD IP requires the registers to be written in the reverse order of
+ * the read sequence. The timestamp takes effect when the TOD_NANOSEC
+ * register is written, so the write order is: TOD_SECONDSH, TOD_SECONDSL
+ * and TOD_NANOSEC.
+ */
+#define TOD_SECONDSH 0x100
+#define TOD_SECONDSL 0x104
+#define TOD_NANOSEC 0x108
+#define TOD_PERIOD 0x110
+#define TOD_ADJUST_PERIOD 0x114
+#define TOD_ADJUST_COUNT 0x118
+#define TOD_DRIFT_ADJUST 0x11c
+#define TOD_DRIFT_ADJUST_RATE 0x120
+#define PERIOD_FRAC_OFFSET 16
+#define SECONDS_MSB GENMASK_ULL(47, 32)
+#define SECONDS_LSB GENMASK_ULL(31, 0)
+#define TOD_SECONDSH_SEC_MSB GENMASK_ULL(15, 0)
+
+#define CAL_SECONDS(m, l) ((FIELD_GET(TOD_SECONDSH_SEC_MSB, (m)) << 32) | (l))
+
+#define TOD_PERIOD_MASK GENMASK_ULL(19, 0)
+#define TOD_PERIOD_MAX FIELD_MAX(TOD_PERIOD_MASK)
+#define TOD_PERIOD_MIN 0
+#define TOD_DRIFT_ADJUST_MASK GENMASK_ULL(15, 0)
+#define TOD_DRIFT_ADJUST_FNS_MAX FIELD_MAX(TOD_DRIFT_ADJUST_MASK)
+#define TOD_DRIFT_ADJUST_RATE_MAX TOD_DRIFT_ADJUST_FNS_MAX
+#define TOD_ADJUST_COUNT_MASK GENMASK_ULL(19, 0)
+#define TOD_ADJUST_COUNT_MAX FIELD_MAX(TOD_ADJUST_COUNT_MASK)
+#define TOD_ADJUST_INTERVAL_US 10
+#define TOD_ADJUST_MS \
+ (((TOD_PERIOD_MAX >> 16) + 1) * (TOD_ADJUST_COUNT_MAX + 1))
+#define TOD_ADJUST_MS_MAX (TOD_ADJUST_MS / MICRO)
+#define TOD_ADJUST_MAX_US (TOD_ADJUST_MS_MAX * USEC_PER_MSEC)
+#define TOD_MAX_ADJ (500 * MEGA)
+
+struct dfl_tod {
+ struct ptp_clock_info ptp_clock_ops;
+ struct device *dev;
+ struct ptp_clock *ptp_clock;
+
+ /* ToD Clock address space */
+ void __iomem *tod_ctrl;
+
+ /* ToD clock registers protection */
+ spinlock_t tod_lock;
+};
+
+/*
+ * A fine ToD HW clock offset adjustment. To perform the fine offset adjustment,
+ * the adjust_period and adjust_count arguments are used to update the
+ * TOD_ADJUST_PERIOD and TOD_ADJUST_COUNT registers in hardware. The dt->tod_lock
+ * spinlock must be held when calling this function.
+ */
+static int fine_adjust_tod_clock(struct dfl_tod *dt, u32 adjust_period,
+ u32 adjust_count)
+{
+ void __iomem *base = dt->tod_ctrl;
+ u32 val;
+
+ writel(adjust_period, base + TOD_ADJUST_PERIOD);
+ writel(adjust_count, base + TOD_ADJUST_COUNT);
+
+ /* Wait for present offset adjustment update to complete */
+ return readl_poll_timeout_atomic(base + TOD_ADJUST_COUNT, val, !val, TOD_ADJUST_INTERVAL_US,
+ TOD_ADJUST_MAX_US);
+}
+
+/*
+ * A coarse ToD HW clock offset adjustment, performed by adding or subtracting
+ * the delta value to or from the current ToD HW clock time.
+ */
+static int coarse_adjust_tod_clock(struct dfl_tod *dt, s64 delta)
+{
+ u32 seconds_msb, seconds_lsb, nanosec;
+ void __iomem *base = dt->tod_ctrl;
+ u64 seconds, now;
+
+ if (delta == 0)
+ return 0;
+
+ nanosec = readl(base + TOD_NANOSEC);
+ seconds_lsb = readl(base + TOD_SECONDSL);
+ seconds_msb = readl(base + TOD_SECONDSH);
+
+ /* Calculate new time */
+ seconds = CAL_SECONDS(seconds_msb, seconds_lsb);
+ now = seconds * NSEC_PER_SEC + nanosec + delta;
+
+ seconds = div_u64_rem(now, NSEC_PER_SEC, &nanosec);
+ seconds_msb = FIELD_GET(SECONDS_MSB, seconds);
+ seconds_lsb = FIELD_GET(SECONDS_LSB, seconds);
+
+ writel(seconds_msb, base + TOD_SECONDSH);
+ writel(seconds_lsb, base + TOD_SECONDSL);
+ writel(nanosec, base + TOD_NANOSEC);
+
+ return 0;
+}
+
+static int dfl_tod_adjust_fine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct dfl_tod *dt = container_of(ptp, struct dfl_tod, ptp_clock_ops);
+ u32 tod_period, tod_rem, tod_drift_adjust_fns, tod_drift_adjust_rate;
+ void __iomem *base = dt->tod_ctrl;
+ unsigned long flags, rate;
+ u64 ppb;
+
+ /* Get the clock rate from clock frequency register offset */
+ rate = readl(base + TOD_CLK_FREQ);
+
+ /* add GIGA as nominal ppb */
+ ppb = scaled_ppm_to_ppb(scaled_ppm) + GIGA;
+
+ tod_period = div_u64_rem(ppb << PERIOD_FRAC_OFFSET, rate, &tod_rem);
+ if (tod_period > TOD_PERIOD_MAX)
+ return -ERANGE;
+
+ /*
+ * The drift of the ToD clock is adjusted periodically by adding a
+ * drift_adjust_fns correction value every drift_adjust_rate clock cycles.
+ */
+ tod_drift_adjust_fns = tod_rem / gcd(tod_rem, rate);
+ tod_drift_adjust_rate = rate / gcd(tod_rem, rate);
+
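+ /* Scale both values down together until they fit the drift
+ * adjustment registers, preserving their ratio.
+ */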
+ while ((tod_drift_adjust_fns > TOD_DRIFT_ADJUST_FNS_MAX) ||
+ (tod_drift_adjust_rate > TOD_DRIFT_ADJUST_RATE_MAX)) {
+ tod_drift_adjust_fns >>= 1;
+ tod_drift_adjust_rate >>= 1;
+ }
+
+ if (tod_drift_adjust_fns == 0)
+ tod_drift_adjust_rate = 0;
+
+ spin_lock_irqsave(&dt->tod_lock, flags);
+ writel(tod_period, base + TOD_PERIOD);
+ writel(0, base + TOD_ADJUST_PERIOD);
+ writel(0, base + TOD_ADJUST_COUNT);
+ writel(tod_drift_adjust_fns, base + TOD_DRIFT_ADJUST);
+ writel(tod_drift_adjust_rate, base + TOD_DRIFT_ADJUST_RATE);
+ spin_unlock_irqrestore(&dt->tod_lock, flags);
+
+ return 0;
+}
+
+static int dfl_tod_adjust_time(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct dfl_tod *dt = container_of(ptp, struct dfl_tod, ptp_clock_ops);
+ u32 period, diff, rem, rem_period, adj_period;
+ void __iomem *base = dt->tod_ctrl;
+ unsigned long flags;
+ bool neg_adj;
+ u64 count;
+ int ret;
+
+ neg_adj = delta < 0;
+ if (neg_adj)
+ delta = -delta;
+
+ spin_lock_irqsave(&dt->tod_lock, flags);
+
+ /*
+ * Get the maximum possible value of the Period register offset
+ * adjustment in nanoseconds scale. This depends on the current
+ * Period register setting and the maximum and minimum possible
+ * values of the Period register.
+ */
+ period = readl(base + TOD_PERIOD);
+
+ if (neg_adj) {
+ diff = (period - TOD_PERIOD_MIN) >> PERIOD_FRAC_OFFSET;
+ adj_period = period - (diff << PERIOD_FRAC_OFFSET);
+ count = div_u64_rem(delta, diff, &rem);
+ rem_period = period - (rem << PERIOD_FRAC_OFFSET);
+ } else {
+ diff = (TOD_PERIOD_MAX - period) >> PERIOD_FRAC_OFFSET;
+ adj_period = period + (diff << PERIOD_FRAC_OFFSET);
+ count = div_u64_rem(delta, diff, &rem);
+ rem_period = period + (rem << PERIOD_FRAC_OFFSET);
+ }
+
+ ret = 0;
+
+ if (count > TOD_ADJUST_COUNT_MAX) {
+ ret = coarse_adjust_tod_clock(dt, delta);
+ } else {
+ /* Adjust the period by count cycles to adjust the time */
+ if (count)
+ ret = fine_adjust_tod_clock(dt, adj_period, count);
+
+ /* If there is a remainder, adjust the period for an additional cycle */
+ if (rem)
+ ret = fine_adjust_tod_clock(dt, rem_period, 1);
+ }
+
+ spin_unlock_irqrestore(&dt->tod_lock, flags);
+
+ return ret;
+}
+
+static int dfl_tod_get_timex(struct ptp_clock_info *ptp, struct timespec64 *ts,
+ struct ptp_system_timestamp *sts)
+{
+ struct dfl_tod *dt = container_of(ptp, struct dfl_tod, ptp_clock_ops);
+ u32 seconds_msb, seconds_lsb, nanosec;
+ void __iomem *base = dt->tod_ctrl;
+ unsigned long flags;
+ u64 seconds;
+
+ spin_lock_irqsave(&dt->tod_lock, flags);
+ ptp_read_system_prets(sts);
+ nanosec = readl(base + TOD_NANOSEC);
+ seconds_lsb = readl(base + TOD_SECONDSL);
+ seconds_msb = readl(base + TOD_SECONDSH);
+ ptp_read_system_postts(sts);
+ spin_unlock_irqrestore(&dt->tod_lock, flags);
+
+ seconds = CAL_SECONDS(seconds_msb, seconds_lsb);
+
+ ts->tv_nsec = nanosec;
+ ts->tv_sec = seconds;
+
+ return 0;
+}
+
+static int dfl_tod_set_time(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct dfl_tod *dt = container_of(ptp, struct dfl_tod, ptp_clock_ops);
+ u32 seconds_msb = FIELD_GET(SECONDS_MSB, ts->tv_sec);
+ u32 seconds_lsb = FIELD_GET(SECONDS_LSB, ts->tv_sec);
+ u32 nanosec = FIELD_GET(SECONDS_LSB, ts->tv_nsec);
+ void __iomem *base = dt->tod_ctrl;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dt->tod_lock, flags);
+ writel(seconds_msb, base + TOD_SECONDSH);
+ writel(seconds_lsb, base + TOD_SECONDSL);
+ writel(nanosec, base + TOD_NANOSEC);
+ spin_unlock_irqrestore(&dt->tod_lock, flags);
+
+ return 0;
+}
+
+static struct ptp_clock_info dfl_tod_clock_ops = {
+ .owner = THIS_MODULE,
+ .name = "dfl_tod",
+ .max_adj = TOD_MAX_ADJ,
+ .adjfine = dfl_tod_adjust_fine,
+ .adjtime = dfl_tod_adjust_time,
+ .gettimex64 = dfl_tod_get_timex,
+ .settime64 = dfl_tod_set_time,
+};
+
+static int dfl_tod_probe(struct dfl_device *ddev)
+{
+ struct device *dev = &ddev->dev;
+ struct dfl_tod *dt;
+
+ dt = devm_kzalloc(dev, sizeof(*dt), GFP_KERNEL);
+ if (!dt)
+ return -ENOMEM;
+
+ dt->tod_ctrl = devm_ioremap_resource(dev, &ddev->mmio_res);
+ if (IS_ERR(dt->tod_ctrl))
+ return PTR_ERR(dt->tod_ctrl);
+
+ dt->dev = dev;
+ spin_lock_init(&dt->tod_lock);
+ dev_set_drvdata(dev, dt);
+
+ dt->ptp_clock_ops = dfl_tod_clock_ops;
+
+ dt->ptp_clock = ptp_clock_register(&dt->ptp_clock_ops, dev);
+ if (IS_ERR(dt->ptp_clock))
+ return dev_err_probe(dt->dev, PTR_ERR(dt->ptp_clock),
+ "Unable to register PTP clock\n");
+
+ return 0;
+}
+
+static void dfl_tod_remove(struct dfl_device *ddev)
+{
+ struct dfl_tod *dt = dev_get_drvdata(&ddev->dev);
+
+ ptp_clock_unregister(dt->ptp_clock);
+}
+
+static const struct dfl_device_id dfl_tod_ids[] = {
+ { FME_ID, FME_FEATURE_ID_TOD },
+ { }
+};
+MODULE_DEVICE_TABLE(dfl, dfl_tod_ids);
+
+static struct dfl_driver dfl_tod_driver = {
+ .drv = {
+ .name = "dfl-tod",
+ },
+ .id_table = dfl_tod_ids,
+ .probe = dfl_tod_probe,
+ .remove = dfl_tod_remove,
+};
+module_dfl_driver(dfl_tod_driver);
+
+MODULE_DESCRIPTION("FPGA DFL ToD driver");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
.remove = ines_ptp_ctrl_remove,
.driver = {
.name = "ines_ptp_ctrl",
- .of_match_table = of_match_ptr(ines_ptp_ctrl_of_match),
+ .of_match_table = ines_ptp_ctrl_of_match,
},
};
module_platform_driver(ines_ptp_ctrl_driver);
return 0;
}
+void kvm_arch_ptp_exit(void)
+{
+}
+
int kvm_arch_ptp_get_clock(struct timespec64 *ts)
{
return kvm_arch_ptp_get_crosststamp(NULL, ts, NULL);
static void __exit ptp_kvm_exit(void)
{
ptp_clock_unregister(kvm_ptp_clock.ptp_clock);
+ kvm_arch_ptp_exit();
}
static int __init ptp_kvm_init(void)
#include <uapi/linux/kvm_para.h>
#include <linux/ptp_clock_kernel.h>
#include <linux/ptp_kvm.h>
+#include <linux/set_memory.h>
static phys_addr_t clock_pair_gpa;
-static struct kvm_clock_pairing clock_pair;
+static struct kvm_clock_pairing clock_pair_glbl;
+static struct kvm_clock_pairing *clock_pair;
int kvm_arch_ptp_init(void)
{
+ struct page *p;
long ret;
if (!kvm_para_available())
return -ENODEV;
- clock_pair_gpa = slow_virt_to_phys(&clock_pair);
- if (!pvclock_get_pvti_cpu0_va())
- return -ENODEV;
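+ /*
+ * When guest memory is encrypted (e.g. SEV), the hypervisor cannot
+ * write the clock pairing data into an ordinary static variable, so
+ * place it in a dedicated page that is marked decrypted/shared.
+ */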
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ p = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!p)
+ return -ENOMEM;
+
+ clock_pair = page_address(p);
+ ret = set_memory_decrypted((unsigned long)clock_pair, 1);
+ if (ret) {
+ __free_page(p);
+ clock_pair = NULL;
+ goto nofree;
+ }
+ } else {
+ clock_pair = &clock_pair_glbl;
+ }
+
+ clock_pair_gpa = slow_virt_to_phys(clock_pair);
+ if (!pvclock_get_pvti_cpu0_va()) {
+ ret = -ENODEV;
+ goto err;
+ }
ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
KVM_CLOCK_PAIRING_WALLCLOCK);
- if (ret == -KVM_ENOSYS)
- return -ENODEV;
+ if (ret == -KVM_ENOSYS) {
+ ret = -ENODEV;
+ goto err;
+ }
return ret;
+
+err:
+ kvm_arch_ptp_exit();
+nofree:
+ return ret;
+}
+
+void kvm_arch_ptp_exit(void)
+{
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ WARN_ON(set_memory_encrypted((unsigned long)clock_pair, 1));
+ free_page((unsigned long)clock_pair);
+ clock_pair = NULL;
+ }
}
int kvm_arch_ptp_get_clock(struct timespec64 *ts)
return -EOPNOTSUPP;
}
- ts->tv_sec = clock_pair.sec;
- ts->tv_nsec = clock_pair.nsec;
+ ts->tv_sec = clock_pair->sec;
+ ts->tv_nsec = clock_pair->nsec;
return 0;
}
pr_err_ratelimited("clock pairing hypercall ret %lu\n", ret);
return -EOPNOTSUPP;
}
- tspec->tv_sec = clock_pair.sec;
- tspec->tv_nsec = clock_pair.nsec;
- *cycle = __pvclock_read_cycles(src, clock_pair.tsc);
+ tspec->tv_sec = clock_pair->sec;
+ tspec->tv_nsec = clock_pair->nsec;
+ *cycle = __pvclock_read_cycles(src, clock_pair->tsc);
} while (pvclock_read_retry(src, version));
*cs = &kvm_clock;
.num_chipselect = 1,
.bits_per_word = 8,
.num_devices = 1,
+ .force_irq = true,
.devices = &(struct spi_board_info) {
.modalias = "spi-nor",
},
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/device.h>
-#include <linux/pci.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/processor.h>
return 0;
err_resource:
- pci_clear_master(pdev);
pci_release_mem_regions(pdev);
err_disable:
pci_disable_device(pdev);
ism_dev_exit(ism);
mutex_unlock(&ism_dev_list.mutex);
- pci_clear_master(pdev);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
device_del(&ism->dev);
return ism_move(smcd->priv, dmb_tok, idx, sf, offset, data, size);
}
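+/* Claim SMC-Dv2 support when the system EID carries a serial number or
+ * type other than the '0' placeholder.
+ */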
+static int smcd_supports_v2(void)
+{
+ return SYSTEM_EID.serial_number[0] != '0' ||
+ SYSTEM_EID.type[0] != '0';
+}
+
static u64 smcd_get_local_gid(struct smcd_dev *smcd)
{
return ism_get_local_gid(smcd->priv);
.reset_vlan_required = smcd_reset_vlan_required,
.signal_event = smcd_signal_ieq,
.move_data = smcd_move,
+ .supports_v2 = smcd_supports_v2,
.get_system_eid = ism_get_seid,
.get_local_gid = smcd_get_local_gid,
.get_chid = smcd_get_chid,
frags++;
if (frags >= SKB_WR_LIST_SIZE) {
- pr_err("csk 0x%p, frags %u, %u,%u >%lu.\n",
+ pr_err("csk 0x%p, frags %u, %u,%u >%u.\n",
csk, skb_shinfo(skb)->nr_frags, skb->len,
- skb->data_len, SKB_WR_LIST_SIZE);
+ skb->data_len, (unsigned int)SKB_WR_LIST_SIZE);
return -EINVAL;
}
return;
}
- port = wwan_create_port(&intf->dev, desc->wwanp_type, &wdm_wwan_port_ops, desc);
+ port = wwan_create_port(&intf->dev, desc->wwanp_type, &wdm_wwan_port_ops,
+ NULL, desc);
if (IS_ERR(port)) {
dev_err(&intf->dev, "%s: Unable to create WWAN port\n",
dev_name(intf->usb_dev));
.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
.notify_buffer_size = virtio_transport_notify_buffer_size,
+ .read_skb = virtio_transport_read_skb,
},
.send_pkt = vhost_transport_send_pkt,
"sk_err=%d/%d\n", dlm_our_nodeid(),
con->nodeid, &inet->inet_daddr,
ntohs(inet->inet_dport), sk->sk_err,
- sk->sk_err_soft);
+ READ_ONCE(sk->sk_err_soft));
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
"dport %d, sk_err=%d/%d\n", dlm_our_nodeid(),
con->nodeid, &sk->sk_v6_daddr,
ntohs(inet->inet_dport), sk->sk_err,
- sk->sk_err_soft);
+ READ_ONCE(sk->sk_err_soft));
break;
#endif
default:
printk_ratelimited(KERN_ERR "dlm: node %d: socket error "
"invalid socket family %d set, "
"sk_err=%d/%d\n", dlm_our_nodeid(),
- sk->sk_family, sk->sk_err, sk->sk_err_soft);
+ sk->sk_family, sk->sk_err,
+ READ_ONCE(sk->sk_err_soft));
break;
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test
#endif
+#ifndef arch_atomic_add_negative_relaxed
+#ifdef arch_atomic_add_negative
+#define arch_atomic_add_negative_acquire arch_atomic_add_negative
+#define arch_atomic_add_negative_release arch_atomic_add_negative
+#define arch_atomic_add_negative_relaxed arch_atomic_add_negative
+#endif /* arch_atomic_add_negative */
+
#ifndef arch_atomic_add_negative
/**
- * arch_atomic_add_negative - add and test if negative
+ * arch_atomic_add_negative - Add and test if negative
* @i: integer value to add
* @v: pointer of type atomic_t
*
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic_add_negative(int i, atomic_t *v)
#define arch_atomic_add_negative arch_atomic_add_negative
#endif
+#ifndef arch_atomic_add_negative_acquire
+/**
+ * arch_atomic_add_negative_acquire - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_acquire(i, v) < 0;
+}
+#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
+#endif
+
+#ifndef arch_atomic_add_negative_release
+/**
+ * arch_atomic_add_negative_release - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic_add_negative_release(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_release(i, v) < 0;
+}
+#define arch_atomic_add_negative_release arch_atomic_add_negative_release
+#endif
+
+#ifndef arch_atomic_add_negative_relaxed
+/**
+ * arch_atomic_add_negative_relaxed - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_relaxed(i, v) < 0;
+}
+#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
+#endif
+
+#else /* arch_atomic_add_negative_relaxed */
+
+#ifndef arch_atomic_add_negative_acquire
+static __always_inline bool
+arch_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ bool ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
+#endif
+
+#ifndef arch_atomic_add_negative_release
+static __always_inline bool
+arch_atomic_add_negative_release(int i, atomic_t *v)
+{
+ __atomic_release_fence();
+ return arch_atomic_add_negative_relaxed(i, v);
+}
+#define arch_atomic_add_negative_release arch_atomic_add_negative_release
+#endif
+
+#ifndef arch_atomic_add_negative
+static __always_inline bool
+arch_atomic_add_negative(int i, atomic_t *v)
+{
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#define arch_atomic_add_negative arch_atomic_add_negative
+#endif
+
+#endif /* arch_atomic_add_negative_relaxed */
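The block above derives the missing orderings from whichever arch_atomic_add_negative*() variant an architecture provides. As a hedged sketch of how a caller chooses between them, the reference-counting pattern these variants were added for (and which rcuref, introduced later in this section, uses) looks roughly like this; the helper names are illustrative only:

/* Illustrative only: acquiring a reference needs no ordering, while
 * dropping one needs release semantics so that prior stores are visible
 * before the object can be freed.  A negative counter value signals
 * "no longer usable".
 */
static bool example_ref_get(atomic_t *cnt)
{
	return !atomic_add_negative_relaxed(1, cnt);
}

static bool example_ref_put(atomic_t *cnt)
{
	return atomic_add_negative_release(-1, cnt);	/* true: last reference */
}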
+
#ifndef arch_atomic_fetch_add_unless
/**
* arch_atomic_fetch_add_unless - add unless the number is already a given value
#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
#endif
+#ifndef arch_atomic64_add_negative_relaxed
+#ifdef arch_atomic64_add_negative
+#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative
+#define arch_atomic64_add_negative_release arch_atomic64_add_negative
+#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative
+#endif /* arch_atomic64_add_negative */
+
#ifndef arch_atomic64_add_negative
/**
- * arch_atomic64_add_negative - add and test if negative
+ * arch_atomic64_add_negative - Add and test if negative
* @i: integer value to add
* @v: pointer of type atomic64_t
*
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic64_add_negative(s64 i, atomic64_t *v)
#define arch_atomic64_add_negative arch_atomic64_add_negative
#endif
+#ifndef arch_atomic64_add_negative_acquire
+/**
+ * arch_atomic64_add_negative_acquire - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_acquire(i, v) < 0;
+}
+#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
+#endif
+
+#ifndef arch_atomic64_add_negative_release
+/**
+ * arch_atomic64_add_negative_release - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_release(i, v) < 0;
+}
+#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
+#endif
+
+#ifndef arch_atomic64_add_negative_relaxed
+/**
+ * arch_atomic64_add_negative_relaxed - Add and test if negative
+ * @i: integer value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
+ */
+static __always_inline bool
+arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_relaxed(i, v) < 0;
+}
+#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
+#endif
+
+#else /* arch_atomic64_add_negative_relaxed */
+
+#ifndef arch_atomic64_add_negative_acquire
+static __always_inline bool
+arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ bool ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
+#endif
+
+#ifndef arch_atomic64_add_negative_release
+static __always_inline bool
+arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ __atomic_release_fence();
+ return arch_atomic64_add_negative_relaxed(i, v);
+}
+#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
+#endif
+
+#ifndef arch_atomic64_add_negative
+static __always_inline bool
+arch_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#define arch_atomic64_add_negative arch_atomic64_add_negative
+#endif
+
+#endif /* arch_atomic64_add_negative_relaxed */
+
#ifndef arch_atomic64_fetch_add_unless
/**
* arch_atomic64_fetch_add_unless - add unless the number is already a given value
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// b5e87bdd5ede61470c29f7a7e4de781af3770f09
+// 00071fffa021cec66f6290d706d69c91df87bade
return arch_atomic_add_negative(i, v);
}
+static __always_inline bool
+atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+atomic_add_negative_release(int i, atomic_t *v)
+{
+ kcsan_release();
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_add_negative_release(i, v);
+}
+
+static __always_inline bool
+atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_add_negative_relaxed(i, v);
+}
+
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
return arch_atomic64_add_negative(i, v);
}
+static __always_inline bool
+atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic64_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ kcsan_release();
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic64_add_negative_release(i, v);
+}
+
+static __always_inline bool
+atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic64_add_negative_relaxed(i, v);
+}
+
static __always_inline s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
return arch_atomic_long_add_negative(i, v);
}
+static __always_inline bool
+atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_long_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+atomic_long_add_negative_release(long i, atomic_long_t *v)
+{
+ kcsan_release();
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_long_add_negative_release(i, v);
+}
+
+static __always_inline bool
+atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+{
+ instrument_atomic_read_write(v, sizeof(*v));
+ return arch_atomic_long_add_negative_relaxed(i, v);
+}
+
static __always_inline long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
})
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// 764f741eb77a7ad565dc8d99ce2837d5542e8aee
+// 1b485de9cbaa4900de59e14ee2084357eaeb1c3a
return arch_atomic64_add_negative(i, v);
}
+static __always_inline bool
+arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic64_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+{
+ return arch_atomic64_add_negative_release(i, v);
+}
+
+static __always_inline bool
+arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic64_add_negative_relaxed(i, v);
+}
+
static __always_inline long
arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
return arch_atomic_add_negative(i, v);
}
+static __always_inline bool
+arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_add_negative_release(i, v);
+}
+
+static __always_inline bool
+arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_add_negative_relaxed(i, v);
+}
+
static __always_inline long
arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
-// e8f0e08ff072b74d180eabe2ad001282b38c2c88
+// a194c07d7d2f4b0e178d3c118c919775d5d65f50
bpf_callback_t callback_fn,
void *callback_ctx, u64 flags);
+ u64 (*map_mem_usage)(const struct bpf_map *map);
+
/* BTF id of struct allocated by map_alloc */
int *map_btf_id;
*/
NON_OWN_REF = BIT(14 + BPF_BASE_TYPE_BITS),
+ /* DYNPTR points to sk_buff */
+ DYNPTR_TYPE_SKB = BIT(15 + BPF_BASE_TYPE_BITS),
+
+ /* DYNPTR points to xdp_buff */
+ DYNPTR_TYPE_XDP = BIT(16 + BPF_BASE_TYPE_BITS),
+
__BPF_TYPE_FLAG_MAX,
__BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
};
-#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF)
+#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF | DYNPTR_TYPE_SKB \
+ | DYNPTR_TYPE_XDP)
/* Max number of base types. */
#define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS)
return bpf_func(ctx, insnsi);
}
+/* the implementation of the opaque uapi struct bpf_dynptr */
+struct bpf_dynptr_kern {
+ void *data;
+ /* Size represents the number of usable bytes of dynptr data.
+ * If for example the offset is at 4 for a local dynptr whose data is
+ * of type u64, the number of usable bytes is 4.
+ *
+ * The upper 8 bits are reserved. It is as follows:
+ * Bits 0 - 23 = size
+ * Bits 24 - 30 = dynptr type
+ * Bit 31 = whether dynptr is read-only
+ */
+ u32 size;
+ u32 offset;
+} __aligned(8);
+
+enum bpf_dynptr_type {
+ BPF_DYNPTR_TYPE_INVALID,
+ /* Points to memory that is local to the bpf program */
+ BPF_DYNPTR_TYPE_LOCAL,
+ /* Underlying data is a ringbuf record */
+ BPF_DYNPTR_TYPE_RINGBUF,
+ /* Underlying data is a sk_buff */
+ BPF_DYNPTR_TYPE_SKB,
+ /* Underlying data is a xdp_buff */
+ BPF_DYNPTR_TYPE_XDP,
+};
+
+int bpf_dynptr_check_size(u32 size);
+u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr);
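A hedged sketch of the bit layout described in the comment above; the masks and accessors are illustrative, not the kernel's internal helpers:

/* Assumed layout per the comment: bits 0-23 size, 24-30 type, 31 rdonly. */
#define EX_DYNPTR_SIZE_MASK	0x00ffffffU
#define EX_DYNPTR_TYPE_SHIFT	24
#define EX_DYNPTR_RDONLY_BIT	0x80000000U

static inline u32 ex_dynptr_size(const struct bpf_dynptr_kern *ptr)
{
	return ptr->size & EX_DYNPTR_SIZE_MASK;
}

static inline enum bpf_dynptr_type ex_dynptr_type(const struct bpf_dynptr_kern *ptr)
{
	return (ptr->size & ~EX_DYNPTR_RDONLY_BIT) >> EX_DYNPTR_TYPE_SHIFT;
}

static inline bool ex_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
{
	return ptr->size & EX_DYNPTR_RDONLY_BIT;
}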
+
#ifdef CONFIG_BPF_JIT
int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
const struct bpf_reg_state *reg,
- int off);
+ int off, const char *suffix);
bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
const struct btf *reg_btf, u32 reg_id,
}
void notrace bpf_prog_inc_misses_counter(struct bpf_prog *prog);
+
+void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
+ enum bpf_dynptr_type type, u32 offset, u32 size);
+void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
+void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr);
#else /* !CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get(u32 ufd)
{
static inline void bpf_cgrp_storage_free(struct cgroup *cgroup)
{
}
+
+static inline void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
+ enum bpf_dynptr_type type, u32 offset, u32 size)
+{
+}
+
+static inline void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
+{
+}
+
+static inline void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr)
+{
+}
#endif /* CONFIG_BPF_SYSCALL */
void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr);
void bpf_map_offload_map_free(struct bpf_map *map);
+u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map);
int bpf_prog_test_run_syscall(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr);
{
}
+static inline u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map)
+{
+ return 0;
+}
+
static inline int bpf_prog_test_run_syscall(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
struct bpf_insn *insn_buf,
struct bpf_prog *prog,
u32 *target_size);
+int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+ struct bpf_dynptr_kern *ptr);
#else
static inline bool bpf_sock_common_is_valid_access(int off, int size,
enum bpf_access_type type,
{
return 0;
}
+static inline int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+ struct bpf_dynptr_kern *ptr)
+{
+ return -EOPNOTSUPP;
+}
#endif
#ifdef CONFIG_INET
u32 num_args, struct bpf_bprintf_data *data);
void bpf_bprintf_cleanup(struct bpf_bprintf_data *data);
-/* the implementation of the opaque uapi struct bpf_dynptr */
-struct bpf_dynptr_kern {
- void *data;
- /* Size represents the number of usable bytes of dynptr data.
- * If for example the offset is at 4 for a local dynptr whose data is
- * of type u64, the number of usable bytes is 4.
- *
- * The upper 8 bits are reserved. It is as follows:
- * Bits 0 - 23 = size
- * Bits 24 - 30 = dynptr type
- * Bit 31 = whether dynptr is read-only
- */
- u32 size;
- u32 offset;
-} __aligned(8);
-
-enum bpf_dynptr_type {
- BPF_DYNPTR_TYPE_INVALID,
- /* Points to memory that is local to the bpf program */
- BPF_DYNPTR_TYPE_LOCAL,
- /* Underlying data is a kernel-produced ringbuf record */
- BPF_DYNPTR_TYPE_RINGBUF,
-};
-
-void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
- enum bpf_dynptr_type type, u32 offset, u32 size);
-void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
-int bpf_dynptr_check_size(u32 size);
-u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr);
-
#ifdef CONFIG_BPF_LSM
void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype);
void bpf_cgroup_atype_put(int cgroup_atype);
void *value, u64 map_flags, gfp_t gfp_flags);
void bpf_local_storage_free_rcu(struct rcu_head *rcu);
+u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map);
#endif /* _BPF_LOCAL_STORAGE_H */
struct work_struct work;
};
+/* 'size != 0' is for bpf_mem_alloc which manages fixed-size objects.
+ * Alloc and free are done with bpf_mem_cache_{alloc,free}().
+ *
+ * 'size = 0' is for bpf_mem_alloc which manages many fixed-size objects.
+ * Alloc and free are done with bpf_mem_{alloc,free}() and the size of
+ * the returned object is given by the size argument of bpf_mem_alloc().
+ */
int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu);
void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);
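A hedged usage sketch of the two modes that the comment above distinguishes; error handling is omitted and the sizes are arbitrary:

static int example_mem_alloc_modes(void)
{
	struct bpf_mem_alloc cache, any;
	void *obj;

	/* Fixed-size mode: objects come from bpf_mem_cache_{alloc,free}(). */
	bpf_mem_alloc_init(&cache, 64, false);
	obj = bpf_mem_cache_alloc(&cache);
	bpf_mem_cache_free(&cache, obj);
	bpf_mem_alloc_destroy(&cache);

	/* size == 0 mode: the size is supplied per allocation. */
	bpf_mem_alloc_init(&any, 0, false);
	obj = bpf_mem_alloc(&any, 128);
	bpf_mem_free(&any, obj);
	bpf_mem_alloc_destroy(&any);

	return 0;
}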
bool bypass_spec_v1;
bool bypass_spec_v4;
bool seen_direct_write;
- bool rcu_tag_supported;
struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
const struct bpf_line_info *prev_linfo;
struct bpf_verifier_log log;
enum bpf_arg_type arg_type);
int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
u32 regno, u32 mem_size);
-struct bpf_call_arg_meta;
-int process_dynptr_func(struct bpf_verifier_env *env, int regno,
- enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta);
/* this lives here instead of in bpf.h because it needs to dereference tgt_prog */
static inline u64 bpf_trampoline_compute_key(const struct bpf_prog *tgt_prog,
#define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */
#define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */
#define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */
-#define KF_RCU (1 << 7) /* kfunc only takes rcu pointer arguments */
+#define KF_RCU (1 << 7) /* kfunc takes either rcu or trusted pointer arguments */
/*
* Tag marking a kernel function as a kfunc. This is meant to minimize the
#else
-#define BTF_ID_LIST(name) static u32 __maybe_unused name[16];
+#define BTF_ID_LIST(name) static u32 __maybe_unused name[64];
#define BTF_ID(prefix, name)
#define BTF_ID_FLAGS(prefix, name, ...)
#define BTF_ID_UNUSED
* struct cpu_rmap - CPU affinity reverse-map
* @refcount: kref for object
* @size: Number of objects to be reverse-mapped
- * @used: Number of objects added
* @obj: Pointer to array of object pointers
* @near: For each CPU, the index and distance to the nearest object,
* based on affinity masks
*/
struct cpu_rmap {
struct kref refcount;
- u16 size, used;
+ u16 size;
void **obj;
struct {
u16 index;
}
extern void free_irq_cpu_rmap(struct cpu_rmap *rmap);
+int irq_cpu_rmap_remove(struct cpu_rmap *rmap, int irq);
extern int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq);
#endif /* __LINUX_CPU_RMAP_H */
struct timer_list dccps_xmit_timer;
};
-static inline struct dccp_sock *dccp_sk(const struct sock *sk)
-{
- return (struct dccp_sock *)sk;
-}
+#define dccp_sk(ptr) container_of_const(ptr, struct dccp_sock, \
+ dccps_inet_connection.icsk_inet.sk)
static inline const char *dccp_role(const struct sock *sk)
{
* @tx_push: The flag of tx push mode
* @rx_push: The flag of rx push mode
* @cqe_size: Size of TX/RX completion queue event
+ * @tx_push_buf_len: Size of TX push buffer
+ * @tx_push_buf_max_len: Maximum allowed size of TX push buffer
*/
struct kernel_ethtool_ringparam {
u32 rx_buf_len;
u8 tx_push;
u8 rx_push;
u32 cqe_size;
+ u32 tx_push_buf_len;
+ u32 tx_push_buf_max_len;
};
/**
* @ETHTOOL_RING_USE_CQE_SIZE: capture for setting cqe_size
* @ETHTOOL_RING_USE_TX_PUSH: capture for setting tx_push
* @ETHTOOL_RING_USE_RX_PUSH: capture for setting rx_push
+ * @ETHTOOL_RING_USE_TX_PUSH_BUF_LEN: capture for setting tx_push_buf_len
*/
enum ethtool_supported_ring_param {
- ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0),
- ETHTOOL_RING_USE_CQE_SIZE = BIT(1),
- ETHTOOL_RING_USE_TX_PUSH = BIT(2),
- ETHTOOL_RING_USE_RX_PUSH = BIT(3),
+ ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0),
+ ETHTOOL_RING_USE_CQE_SIZE = BIT(1),
+ ETHTOOL_RING_USE_TX_PUSH = BIT(2),
+ ETHTOOL_RING_USE_RX_PUSH = BIT(3),
+ ETHTOOL_RING_USE_TX_PUSH_BUF_LEN = BIT(4),
};
#define __ETH_RSS_HASH_BIT(bit) ((u32)1 << (bit))
return XDP_REDIRECT;
}
+#ifdef CONFIG_NET
+int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len);
+int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from,
+ u32 len, u64 flags);
+int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len);
+int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len);
+void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len);
+void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
+ void *buf, unsigned long len, bool flush);
+#else /* CONFIG_NET */
+static inline int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset,
+ void *to, u32 len)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset,
+ const void *from, u32 len, u64 flags)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset,
+ void *buf, u32 len)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset,
+ void *buf, u32 len)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
+{
+ return NULL;
+}
+
+static inline void *bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, void *buf,
+ unsigned long len, bool flush)
+{
+ return NULL;
+}
+#endif /* CONFIG_NET */
+
#endif /* __LINUX_FILTER_H__ */
sockptr_t optval, sockptr_t optlen);
extern int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
sockptr_t optval, size_t offset);
-extern int ip_mc_sf_allow(struct sock *sk, __be32 local, __be32 rmt,
+extern int ip_mc_sf_allow(const struct sock *sk, __be32 local, __be32 rmt,
int dif, int sdif);
extern void ip_mc_init_dev(struct in_device *);
extern void ip_mc_destroy_dev(struct in_device *);
return sk_fullsock(__sk) ? inet_sk(__sk)->pinet6 : NULL;
}
-static inline struct raw6_sock *raw6_sk(const struct sock *sk)
-{
- return (struct raw6_sock *)sk;
-}
+#define raw6_sk(ptr) container_of_const(ptr, struct raw6_sock, inet.sk)
#define ipv6_only_sock(sk) (sk->sk_ipv6only)
#define ipv6_sk_rxinfo(sk) ((sk)->sk_family == PF_INET6 && \
#define MLX5_CAP_ESW_INGRESS_ACL_MAX(mdev, cap) \
MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_ingress.cap)
+#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2(mdev, cap) \
+ MLX5_CAP_ESW_FLOWTABLE(mdev, ft_field_support_2_esw_fdb.cap)
+
+#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2_MAX(mdev, cap) \
+ MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, ft_field_support_2_esw_fdb.cap)
+
#define MLX5_CAP_ESW(mdev, cap) \
MLX5_GET(e_switch_cap, \
mdev->caps.hca[MLX5_CAP_ESWITCH]->cur, cap)
MLX5_REG_PCAM = 0x507f,
MLX5_REG_NODE_DESC = 0x6001,
MLX5_REG_HOST_ENDIANNESS = 0x7004,
+ MLX5_REG_MTMP = 0x900A,
MLX5_REG_MCIA = 0x9014,
MLX5_REG_MFRL = 0x9028,
MLX5_REG_MLCR = 0x902b,
struct mlx5_vxlan;
struct mlx5_geneve;
struct mlx5_hv_vhca;
+struct mlx5_thermal;
#define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) (MLX5_CAP_DEV_MEM(dev, log_sw_icm_alloc_granularity))
#define MLX5_SW_ICM_BLOCK_SIZE(dev) (1 << MLX5_LOG_SW_ICM_BLOCK_SIZE(dev))
struct mlx5_rsc_dump *rsc_dump;
u32 vsc_addr;
struct mlx5_hv_vhca *hv_vhca;
+ struct mlx5_thermal *thermal;
};
struct mlx5_db {
MLX5_OCTWORD = 16,
};
+struct msi_map mlx5_msix_alloc(struct mlx5_core_dev *dev,
+ irqreturn_t (*handler)(int, void *),
+ const struct irq_affinity_desc *affdesc,
+ const char *name);
+void mlx5_msix_free(struct mlx5_core_dev *dev, struct msi_map map);
+
#endif /* MLX5_DRIVER_H */
u8 metadata_reg_c_0[0x1];
};
+/* Table 2170 - Flow Table Fields Supported 2 Format */
struct mlx5_ifc_flow_table_fields_supported_2_bits {
u8 reserved_at_0[0xe];
u8 bth_opcode[0x1];
- u8 reserved_at_f[0x11];
+ u8 reserved_at_f[0x1];
+ u8 tunnel_header_0_1[0x1];
+ u8 reserved_at_11[0xf];
u8 reserved_at_20[0x60];
};
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_esw_acl_egress;
- u8 reserved_at_800[0x1000];
+ u8 reserved_at_800[0xC00];
+
+ struct mlx5_ifc_flow_table_fields_supported_2_bits ft_field_support_2_esw_fdb;
+
+ struct mlx5_ifc_flow_table_fields_supported_2_bits ft_field_bitmask_support_2_esw_fdb;
+
+ u8 reserved_at_1500[0x300];
u8 sw_steering_fdb_action_drop_icm_address_rx[0x40];
u8 time_l[0x20];
};
+struct mlx5_ifc_mtmp_reg_bits {
+ u8 reserved_at_0[0x14];
+ u8 sensor_index[0xc];
+
+ u8 reserved_at_20[0x10];
+ u8 temperature[0x10];
+
+ u8 mte[0x1];
+ u8 mtr[0x1];
+ u8 reserved_at_42[0xe];
+ u8 max_temperature[0x10];
+
+ u8 tee[0x2];
+ u8 reserved_at_62[0xe];
+ u8 temp_threshold_hi[0x10];
+
+ u8 reserved_at_80[0x10];
+ u8 temp_threshold_lo[0x10];
+
+ u8 reserved_at_a0[0x20];
+
+ u8 sensor_name_hi[0x20];
+ u8 sensor_name_lo[0x20];
+};
+
union mlx5_ifc_ports_control_registers_document_bits {
struct mlx5_ifc_bufferx_reg_bits bufferx_reg;
struct mlx5_ifc_eth_2819_cntrs_grp_data_layout_bits eth_2819_cntrs_grp_data_layout;
struct mlx5_ifc_mfrl_reg_bits mfrl_reg;
struct mlx5_ifc_mtutc_reg_bits mtutc_reg;
struct mlx5_ifc_mrtc_reg_bits mrtc_reg;
+ struct mlx5_ifc_mtmp_reg_bits mtmp_reg;
u8 reserved_at_0[0x60e0];
};
MLX5_PTYS_WIDTH_12X = 1 << 4,
};
+struct mlx5_port_eth_proto {
+ u32 cap;
+ u32 admin;
+ u32 oper;
+};
+
#define MLX5E_PROT_MASK(link_mode) (1U << link_mode)
#define MLX5_GET_ETH_PROTO(reg, out, ext, field) \
(ext ? MLX5_GET(reg, out, ext_##field) : \
int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state);
int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio);
int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio);
+
+int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
+ struct mlx5_port_eth_proto *eproto);
+bool mlx5_ptys_ext_supported(struct mlx5_core_dev *mdev);
+u32 mlx5_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper,
+ bool force_legacy);
+u32 mlx5_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed,
+ bool force_legacy);
+int mlx5_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
+
#endif /* __MLX5_PORT_H__ */
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_NET_TIMESTAMPING_H_
+#define _LINUX_NET_TIMESTAMPING_H_
+
+#include <uapi/linux/net_tstamp.h>
+
+/**
+ * struct kernel_hwtstamp_config - Kernel copy of struct hwtstamp_config
+ *
+ * @flags: see struct hwtstamp_config
+ * @tx_type: see struct hwtstamp_config
+ * @rx_filter: see struct hwtstamp_config
+ *
+ * Prefer using this structure for in-kernel processing of hardware
+ * timestamping configuration, over the inextensible struct hwtstamp_config
+ * exposed to the %SIOCGHWTSTAMP and %SIOCSHWTSTAMP ioctl UAPI.
+ */
+struct kernel_hwtstamp_config {
+ int flags;
+ int tx_type;
+ int rx_filter;
+};
+
+static inline void hwtstamp_config_to_kernel(struct kernel_hwtstamp_config *kernel_cfg,
+ const struct hwtstamp_config *cfg)
+{
+ kernel_cfg->flags = cfg->flags;
+ kernel_cfg->tx_type = cfg->tx_type;
+ kernel_cfg->rx_filter = cfg->rx_filter;
+}
+
+#endif /* _LINUX_NET_TIMESTAMPING_H_ */
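A hedged sketch of the conversion helper in use, e.g. inside a driver's SIOCSHWTSTAMP handler; the function name and field handling are illustrative:

static int example_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
{
	struct kernel_hwtstamp_config kernel_cfg;
	struct hwtstamp_config cfg;

	if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
		return -EFAULT;

	/* Work on the extensible in-kernel copy from here on. */
	hwtstamp_config_to_kernel(&kernel_cfg, &cfg);

	/* ... validate kernel_cfg.tx_type / kernel_cfg.rx_filter and
	 * program the hardware (driver specific) ...
	 */
	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
}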
#include <linux/rbtree.h>
#include <net/net_trackers.h>
#include <net/net_debug.h>
+#include <net/dropreason.h>
struct netpoll_info;
struct device;
struct sk_buff *skb;
struct list_head rx_list; /* Pending GRO_NORMAL skbs */
int rx_count; /* length of rx_list */
+ unsigned int napi_id;
struct hrtimer timer;
+ struct task_struct *thread;
+ /* control-path-only fields follow */
struct list_head dev_list;
struct hlist_node napi_hash_node;
- unsigned int napi_id;
- struct task_struct *thread;
};
enum {
return false;
}
-bool napi_complete_done(struct napi_struct *n, int work_done);
/**
- * napi_complete - NAPI processing complete
- * @n: NAPI context
+ * napi_complete_done - NAPI processing complete
+ * @n: NAPI context
+ * @work_done: number of packets processed
*
- * Mark NAPI processing as complete.
- * Consider using napi_complete_done() instead.
+ * Mark NAPI processing as complete. Should only be called if poll budget
+ * has not been completely consumed.
+ * Prefer over napi_complete().
* Return false if device should avoid rearming interrupts.
*/
+bool napi_complete_done(struct napi_struct *n, int work_done);
+
static inline bool napi_complete(struct napi_struct *n)
{
return napi_complete_done(n, 0);
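For reference, a hedged sketch of the poll callback shape that the rewritten comment above describes; the RX-clean and IRQ helpers are hypothetical:

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = example_clean_rx_ring(napi, budget);	/* hypothetical */

	/* Only report completion when the budget was not exhausted, and only
	 * re-arm interrupts if the core confirms NAPI is really done.
	 */
	if (work_done < budget && napi_complete_done(napi, work_done))
		example_enable_rx_irq(napi);			/* hypothetical */

	return work_done;
}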
* Used to add FDB entries to dump requests. Implementers should add
* entries to skb and update idx with the number of entries.
*
+ * int (*ndo_mdb_add)(struct net_device *dev, struct nlattr *tb[],
+ * u16 nlmsg_flags, struct netlink_ext_ack *extack);
+ * Adds an MDB entry to dev.
+ * int (*ndo_mdb_del)(struct net_device *dev, struct nlattr *tb[],
+ * struct netlink_ext_ack *extack);
+ * Deletes the MDB entry from dev.
+ * int (*ndo_mdb_dump)(struct net_device *dev, struct sk_buff *skb,
+ * struct netlink_callback *cb);
+ * Dumps MDB entries from dev. The first argument (marker) in the netlink
+ * callback is used by core rtnetlink code.
+ *
* int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh,
* u16 flags, struct netlink_ext_ack *extack)
* int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq,
const unsigned char *addr,
u16 vid, u32 portid, u32 seq,
struct netlink_ext_ack *extack);
+ int (*ndo_mdb_add)(struct net_device *dev,
+ struct nlattr *tb[],
+ u16 nlmsg_flags,
+ struct netlink_ext_ack *extack);
+ int (*ndo_mdb_del)(struct net_device *dev,
+ struct nlattr *tb[],
+ struct netlink_ext_ack *extack);
+ int (*ndo_mdb_dump)(struct net_device *dev,
+ struct sk_buff *skb,
+ struct netlink_callback *cb);
int (*ndo_bridge_setlink)(struct net_device *dev,
struct nlmsghdr *nlh,
u16 flags,
struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev,
unsigned int index)
{
+ DEBUG_NET_WARN_ON_ONCE(index >= dev->num_tx_queues);
return &dev->_tx[index];
}
NETDEV_OFFLOAD_XSTATS_REPORT_USED,
NETDEV_OFFLOAD_XSTATS_REPORT_DELTA,
NETDEV_XDP_FEAT_CHANGE,
+ NETDEV_PRE_CHANGE_HWTSTAMP,
};
const char *netdev_cmd_to_name(enum netdev_cmd cmd);
const unsigned char *dev_addr;
};
+struct netdev_notifier_hwtstamp_info {
+ struct netdev_notifier_info info; /* must be first */
+ struct kernel_hwtstamp_config *config;
+};
+
enum netdev_offload_xstats_type {
NETDEV_OFFLOAD_XSTATS_TYPE_L3 = 1,
};
}
int call_netdevice_notifiers(unsigned long val, struct net_device *dev);
-
+int call_netdevice_notifiers_info(unsigned long val,
+ struct netdev_notifier_info *info);
extern rwlock_t dev_base_lock; /* Device list lock */
#ifdef CONFIG_RPS
struct softnet_data *rps_ipi_list;
#endif
+ bool in_net_rx_action;
#ifdef CONFIG_NET_FLOW_LIMIT
struct sd_flow_limit __rcu *flow_limit;
#endif
int netif_get_num_default_rss_queues(void);
-enum skb_free_reason {
- SKB_REASON_CONSUMED,
- SKB_REASON_DROPPED,
-};
-
-void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason);
-void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason);
+void dev_kfree_skb_irq_reason(struct sk_buff *skb, enum skb_drop_reason reason);
+void dev_kfree_skb_any_reason(struct sk_buff *skb, enum skb_drop_reason reason);
/*
* It is not allowed to call kfree_skb() or consume_skb() from hardware
*/
static inline void dev_kfree_skb_irq(struct sk_buff *skb)
{
- __dev_kfree_skb_irq(skb, SKB_REASON_DROPPED);
+ dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}
static inline void dev_consume_skb_irq(struct sk_buff *skb)
{
- __dev_kfree_skb_irq(skb, SKB_REASON_CONSUMED);
+ dev_kfree_skb_irq_reason(skb, SKB_CONSUMED);
}
static inline void dev_kfree_skb_any(struct sk_buff *skb)
{
- __dev_kfree_skb_any(skb, SKB_REASON_DROPPED);
+ dev_kfree_skb_any_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}
static inline void dev_consume_skb_any(struct sk_buff *skb)
{
- __dev_kfree_skb_any(skb, SKB_REASON_CONSUMED);
+ dev_kfree_skb_any_reason(skb, SKB_CONSUMED);
}
u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
__sum16 nf_ip6_checksum(struct sk_buff *skb, unsigned int hook,
unsigned int dataoff, u_int8_t protocol);
+int nf_ip6_check_hbh_len(struct sk_buff *skb, u32 *plen);
+
int ipv6_netfilter_init(void);
void ipv6_netfilter_fini(void);
struct mutex *cb_mutex;
int (*bind)(struct net *net, int group);
void (*unbind)(struct net *net, int group);
- bool (*compare)(struct net *net, struct sock *sk);
};
struct sock *__netlink_kernel_create(struct net *net, int unit,
} \
} while (0)
+#define NL_SET_ERR_MSG_ATTR_POL_FMT(extack, attr, pol, fmt, args...) do { \
+ struct netlink_ext_ack *__extack = (extack); \
+ \
+ if (!__extack) \
+ break; \
+ \
+ if (snprintf(__extack->_msg_buf, NETLINK_MAX_FMTMSG_LEN, \
+ "%s" fmt "%s", "", ##args, "") >= \
+ NETLINK_MAX_FMTMSG_LEN) \
+ net_warn_ratelimited("%s" fmt "%s", "truncated extack: ", \
+ ##args, "\n"); \
+ \
+ do_trace_netlink_extack(__extack->_msg_buf); \
+ \
+ __extack->_msg = __extack->_msg_buf; \
+ __extack->bad_attr = (attr); \
+ __extack->policy = (pol); \
+} while (0)
+
#define NL_SET_ERR_MSG_ATTR(extack, attr, msg) \
NL_SET_ERR_MSG_ATTR_POL(extack, attr, NULL, msg)
+#define NL_SET_ERR_MSG_ATTR_FMT(extack, attr, msg, args...) \
+ NL_SET_ERR_MSG_ATTR_POL_FMT(extack, attr, NULL, msg, ##args)
+
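A hedged example of the new formatted extack macro in a validation path; the attribute, variable names and limit are made up:

/* Illustrative only: point the error at the offending attribute and
 * include the rejected value in the message.
 */
if (val > max_val) {
	NL_SET_ERR_MSG_ATTR_FMT(extack, tb[EXAMPLE_ATTR_VALUE],
				"value %u exceeds maximum %u", val, max_val);
	return -EINVAL;
}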
#define NL_SET_ERR_ATTR_MISS(extack, nest, type) do { \
struct netlink_ext_ack *__extack = (extack); \
\
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_PCS_MTK_LYNXI_H
+#define __LINUX_PCS_MTK_LYNXI_H
+
+#include <linux/phylink.h>
+#include <linux/regmap.h>
+
+#define MTK_SGMII_FLAG_PN_SWAP BIT(0)
+struct phylink_pcs *mtk_pcs_lynxi_create(struct device *dev,
+ struct regmap *regmap,
+ u32 ana_rgc3, u32 flags);
+void mtk_pcs_lynxi_destroy(struct phylink_pcs *pcs);
+#endif
#define phy_read_poll_timeout(phydev, regnum, val, cond, sleep_us, \
timeout_us, sleep_before_read) \
({ \
- int __ret = read_poll_timeout(phy_read, val, (cond) || val < 0, \
+ int __ret = read_poll_timeout(phy_read, val, val < 0 || (cond), \
sleep_us, timeout_us, sleep_before_read, phydev, regnum); \
- if (val < 0) \
+ if (val < 0) \
__ret = val; \
if (__ret) \
phydev_err(phydev, "%s failed: %d\n", __func__, __ret); \
__ret; \
})
-
/**
* __phy_read - convenience function for reading a given PHY register
* @phydev: the phy_device struct
* the medium link mode (@speed and @duplex) and the speed/duplex of the phy
* interface mode (@interface) are different.
* @link: true if the link is up.
- * @an_enabled: true if autonegotiation is enabled/desired.
* @an_complete: true if autonegotiation has completed.
*/
struct phylink_link_state {
int pause;
int rate_matching;
unsigned int link:1;
- unsigned int an_enabled:1;
unsigned int an_complete:1;
};
+++ /dev/null
-/*
- * Copyright (C) 2015, Marvell International Ltd.
- *
- * This software file (the "File") is distributed by Marvell International
- * Ltd. under the terms of the GNU General Public License Version 2, June 1991
- * (the "License"). You may use, redistribute and/or modify this File in
- * accordance with the terms and conditions of the License, a copy of which
- * is available on the worldwide web at
- * http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt.
- *
- * THE FILE IS DISTRIBUTED AS-IS, WITHOUT WARRANTY OF ANY KIND, AND THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE
- * ARE EXPRESSLY DISCLAIMED. The License provides additional details about
- * this warranty disclaimer.
- */
-
-#ifndef _NFCMRVL_PTF_H_
-#define _NFCMRVL_PTF_H_
-
-struct nfcmrvl_platform_data {
- /*
- * Generic
- */
-
- /* GPIO that is wired to RESET_N signal */
- int reset_n_io;
- /* Tell if transport is muxed in HCI one */
- unsigned int hci_muxed;
-
- /*
- * UART specific
- */
-
- /* Tell if UART needs flow control at init */
- unsigned int flow_control;
- /* Tell if firmware supports break control for power management */
- unsigned int break_control;
-
-
- /*
- * I2C specific
- */
-
- unsigned int irq;
- unsigned int irq_polarity;
-};
-
-#endif /* _NFCMRVL_PTF_H_ */
struct clocksource;
int kvm_arch_ptp_init(void);
+void kvm_arch_ptp_exit(void);
int kvm_arch_ptp_get_clock(struct timespec64 *ts);
int kvm_arch_ptp_get_crosststamp(u64 *cycle,
struct timespec64 *tspec, struct clocksource **cs);
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _LINUX_RCUREF_H
+#define _LINUX_RCUREF_H
+
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/limits.h>
+#include <linux/lockdep.h>
+#include <linux/preempt.h>
+#include <linux/rcupdate.h>
+
+#define RCUREF_ONEREF 0x00000000U
+#define RCUREF_MAXREF 0x7FFFFFFFU
+#define RCUREF_SATURATED 0xA0000000U
+#define RCUREF_RELEASED 0xC0000000U
+#define RCUREF_DEAD 0xE0000000U
+#define RCUREF_NOREF 0xFFFFFFFFU
+
+/**
+ * rcuref_init - Initialize a rcuref reference count with the given initial count
+ * @ref: Pointer to the reference count
+ * @cnt: The initial reference count, typically '1'
+ */
+static inline void rcuref_init(rcuref_t *ref, unsigned int cnt)
+{
+ atomic_set(&ref->refcnt, cnt - 1);
+}
+
+/**
+ * rcuref_read - Read the number of held reference counts of a rcuref
+ * @ref: Pointer to the reference count
+ *
+ * Return: The number of held references (0 ... N)
+ */
+static inline unsigned int rcuref_read(rcuref_t *ref)
+{
+ unsigned int c = atomic_read(&ref->refcnt);
+
+ /* Return 0 if within the DEAD zone. */
+ return c >= RCUREF_RELEASED ? 0 : c + 1;
+}
+
+extern __must_check bool rcuref_get_slowpath(rcuref_t *ref);
+
+/**
+ * rcuref_get - Acquire one reference on a rcuref reference count
+ * @ref: Pointer to the reference count
+ *
+ * Similar to atomic_inc_not_zero() but saturates at RCUREF_MAXREF.
+ *
+ * Provides no memory ordering; it is assumed the caller has guaranteed the
+ * object memory to be stable (RCU, etc.). It does provide a control dependency
+ * and thereby orders future stores. See documentation in lib/rcuref.c.
+ *
+ * Return:
+ * False if the attempt to acquire a reference failed. This happens
+ * when the last reference has been put already
+ *
+ * True if a reference was successfully acquired
+ */
+static inline __must_check bool rcuref_get(rcuref_t *ref)
+{
+ /*
+ * Unconditionally increase the reference count. The saturation and
+ * dead zones provide enough tolerance for this.
+ */
+ if (likely(!atomic_add_negative_relaxed(1, &ref->refcnt)))
+ return true;
+
+ /* Handle the cases inside the saturation and dead zones */
+ return rcuref_get_slowpath(ref);
+}
+
+extern __must_check bool rcuref_put_slowpath(rcuref_t *ref);
+
+/*
+ * Internal helper. Do not invoke directly.
+ */
+static __always_inline __must_check bool __rcuref_put(rcuref_t *ref)
+{
+ RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(),
+ "suspicious rcuref_put_rcusafe() usage");
+ /*
+ * Unconditionally decrease the reference count. The saturation and
+ * dead zones provide enough tolerance for this.
+ */
+ if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
+ return false;
+
+ /*
+ * Handle the last reference drop and cases inside the saturation
+ * and dead zones.
+ */
+ return rcuref_put_slowpath(ref);
+}
+
+/**
+ * rcuref_put_rcusafe -- Release one reference for a rcuref reference count RCU safe
+ * @ref: Pointer to the reference count
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Can be invoked from contexts which guarantee that no grace period can
+ * happen which would free the object concurrently if the decrement drops
+ * the last reference and the slowpath races against a concurrent get() and
+ * put() pair. rcu_read_lock()'ed and atomic contexts qualify.
+ *
+ * Return:
+ * True if this was the last reference with no future references
+ * possible. This signals the caller that it can safely release the
+ * object which is protected by the reference counter.
+ *
+ * False if there are still active references or the put() raced
+ * with a concurrent get()/put() pair. Caller is not allowed to
+ * release the protected object.
+ */
+static inline __must_check bool rcuref_put_rcusafe(rcuref_t *ref)
+{
+ return __rcuref_put(ref);
+}
+
+/**
+ * rcuref_put -- Release one reference for a rcuref reference count
+ * @ref: Pointer to the reference count
+ *
+ * Can be invoked from any context.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides an acquire ordering on success such that free()
+ * must come after.
+ *
+ * Return:
+ *
+ * True if this was the last reference with no future references
+ * possible. This signals the caller that it can safely schedule the
+ * object, which is protected by the reference counter, for
+ * deconstruction.
+ *
+ * False if there are still active references or the put() raced
+ * with a concurrent get()/put() pair. Caller is not allowed to
+ * deconstruct the protected object.
+ */
+static inline __must_check bool rcuref_put(rcuref_t *ref)
+{
+ bool released;
+
+ preempt_disable();
+ released = __rcuref_put(ref);
+ preempt_enable();
+ return released;
+}
+
+#endif
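A hedged sketch of the usage pattern this header is aimed at: lookups acquire a reference under RCU, and the last put schedules the object for freeing. The object type and names are illustrative:

struct example_obj {
	rcuref_t	ref;
	struct rcu_head	rcu;
};

/* Caller holds rcu_read_lock(), so the object memory is stable while we
 * try to obtain a reference.
 */
static struct example_obj *example_get(struct example_obj *obj)
{
	if (obj && rcuref_get(&obj->ref))
		return obj;
	return NULL;
}

static void example_put(struct example_obj *obj)
{
	/* True means this was the final reference. */
	if (rcuref_put(&obj->ref))
		kfree_rcu(obj, rcu);
}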
rcu_dereference_check(p, lockdep_rtnl_is_held())
/**
- * rcu_dereference_bh_rtnl - rcu_dereference_bh with debug checking
- * @p: The pointer to read, prior to dereference
- *
- * Do an rcu_dereference_bh(p), but check caller either holds rcu_read_lock_bh()
- * or RTNL. Note : Please prefer rtnl_dereference() or rcu_dereference_bh()
- */
-#define rcu_dereference_bh_rtnl(p) \
- rcu_dereference_bh_check(p, lockdep_rtnl_is_held())
-
-/**
* rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL
* @p: The pointer to read, prior to dereferencing
*
struct sk_buff;
-/* To allow 64K frame to be packed as single skb without frag_list we
- * require 64K/PAGE_SIZE pages plus 1 additional page to allow for
- * buffers which do not start on a page boundary.
- *
- * Since GRO uses frags we allocate at least 16 regardless of page
- * size.
- */
-#if (65536/PAGE_SIZE + 1) < 16
-#define MAX_SKB_FRAGS 16UL
-#else
-#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)
+#ifndef CONFIG_MAX_SKB_FRAGS
+# define CONFIG_MAX_SKB_FRAGS 17
#endif
+
+#define MAX_SKB_FRAGS CONFIG_MAX_SKB_FRAGS
+
extern int sysctl_max_skb_frags;
/* Set skb_shinfo(skb)->gso_size to this in case you want skb_segment to
* @csum_level: indicates the number of consecutive checksums found in
* the packet minus one that have been verified as
* CHECKSUM_UNNECESSARY (max 3)
- * @scm_io_uring: SKB holds io_uring registered files
* @dst_pending_confirm: need to confirm neighbour
* @decrypted: Decrypted SKB
* @slow_gro: state present at GRO time, slower prepare step required
#endif
__u8 slow_gro:1;
__u8 csum_not_inet:1;
- __u8 scm_io_uring:1;
#ifdef CONFIG_NET_SCHED
__u16 tc_index; /* traffic control index */
#define MII_LAN83C185_MODE_POWERDOWN 0xC0 /* Power Down mode */
#define MII_LAN83C185_MODE_ALL 0xE0 /* All capable mode */
+int smsc_phy_config_intr(struct phy_device *phydev);
+irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev);
+int smsc_phy_config_init(struct phy_device *phydev);
+int lan87xx_read_status(struct phy_device *phydev);
+int smsc_phy_get_tunable(struct phy_device *phydev,
+ struct ethtool_tunable *tuna, void *data);
+int smsc_phy_set_tunable(struct phy_device *phydev,
+ struct ethtool_tunable *tuna, const void *data);
+int smsc_phy_probe(struct phy_device *phydev);
+
#endif /* __LINUX_SMSCPHY_H__ */
struct stmmac_rxq_cfg rx_queues_cfg[MTL_MAX_RX_QUEUES];
struct stmmac_txq_cfg tx_queues_cfg[MTL_MAX_TX_QUEUES];
void (*fix_mac_speed)(void *priv, unsigned int speed);
+ int (*fix_soc_reset)(void *priv, void __iomem *ioaddr);
int (*serdes_powerup)(struct net_device *ndev, void *priv);
void (*serdes_powerdown)(struct net_device *ndev, void *priv);
void (*speed_mode_2500)(struct net_device *ndev, void *priv);
TCPF_MTU_REDUCED_DEFERRED = (1UL << TCP_MTU_REDUCED_DEFERRED),
};
-static inline struct tcp_sock *tcp_sk(const struct sock *sk)
-{
- return (struct tcp_sock *)sk;
-}
+#define tcp_sk(ptr) container_of_const(ptr, struct tcp_sock, inet_conn.icsk_inet.sk)
+
+/* Variant of tcp_sk() upgrading a const sock to a read/write tcp socket.
+ * Used in context of (lockless) tcp listeners.
+ */
+#define tcp_sk_rw(ptr) container_of(ptr, struct tcp_sock, inet_conn.icsk_inet.sk)
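The container_of_const() form above (and the matching udp_sk(), unix_sk(), ax25_sk() and dccp_sk() conversions elsewhere in this section) preserves the constness of the passed socket. A hedged illustration of the effect:

/* Illustrative only: a const sock now yields a const tcp_sock, so
 * read-only paths cannot accidentally write through it; tcp_sk_rw()
 * is the explicit escape hatch for lockless listener code.
 */
static u32 example_peek_rcv_nxt(const struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);		/* const preserved */

	return tp->rcv_nxt;
}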
struct tcp_timewait_sock {
struct inet_timewait_sock tw_sk;
} atomic64_t;
#endif
+typedef struct {
+ atomic_t refcnt;
+} rcuref_t;
+
+#define RCUREF_INIT(i) { .refcnt = ATOMIC_INIT(i - 1) }
+
struct list_head {
struct list_head *next, *prev;
};
#define UDP_MAX_SEGMENTS (1 << 6UL)
-static inline struct udp_sock *udp_sk(const struct sock *sk)
-{
- return (struct udp_sock *)sk;
-}
+#define udp_sk(ptr) container_of_const(ptr, struct udp_sock, inet.sk)
static inline void udp_set_no_check6_tx(struct sock *sk, bool val)
{
void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
#endif /* _LINUX_VIRTIO_VSOCK_H */
poll_table *wait);
};
+/**
+ * struct wwan_port_caps - The WWAN port capabilities
+ * @frag_len: WWAN port TX fragments length
+ * @headroom_len: WWAN port TX fragments reserved headroom length
+ */
+struct wwan_port_caps {
+ size_t frag_len;
+ unsigned int headroom_len;
+};
+
/**
* wwan_create_port - Add a new WWAN port
* @parent: Device to use as parent and shared by all WWAN ports
* @type: WWAN port type
* @ops: WWAN port operations
+ * @caps: WWAN port capabilities
* @drvdata: Pointer to caller driver data
*
* Allocate and register a new WWAN port. The port will be automatically exposed
struct wwan_port *wwan_create_port(struct device *parent,
enum wwan_port_type type,
const struct wwan_port_ops *ops,
+ struct wwan_port_caps *caps,
void *drvdata);
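A hedged sketch of a caller filling in the new capabilities argument; the port type, ops and sizes are illustrative, and drivers without TX fragmentation limits may still pass NULL as the cdc-wdm hunk earlier in this section does:

/* Inside a driver probe path (illustrative): */
static struct wwan_port_caps example_caps = {
	.frag_len	= 2048,	/* assumed TX fragment size */
	.headroom_len	= 16,	/* assumed reserved headroom */
};

port = wwan_create_port(parent, WWAN_PORT_AT, &example_port_ops,
			&example_caps, drvdata);
if (IS_ERR(port))
	return PTR_ERR(port);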
/**
const struct in6_addr *addr);
void __ipv6_sock_mc_close(struct sock *sk);
void ipv6_sock_mc_close(struct sock *sk);
-bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr,
+bool inet6_mc_check(const struct sock *sk, const struct in6_addr *mc_addr,
const struct in6_addr *src_addr);
int ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr);
void unix_inflight(struct user_struct *user, struct file *fp);
void unix_notinflight(struct user_struct *user, struct file *fp);
void unix_destruct_scm(struct sk_buff *skb);
+void io_uring_destruct_scm(struct sk_buff *skb);
void unix_gc(void);
void wait_for_unix_gc(void);
struct sock *unix_get_socket(struct file *filp);
#endif
};
-static inline struct unix_sock *unix_sk(const struct sock *sk)
-{
- return (struct unix_sock *)sk;
-}
+#define unix_sk(ptr) container_of_const(ptr, struct unix_sock, sk)
#define peer_wait peer_wq.wait
void *trans;
};
+s64 vsock_connectible_has_data(struct vsock_sock *vsk);
s64 vsock_stream_has_data(struct vsock_sock *vsk);
s64 vsock_stream_has_space(struct vsock_sock *vsk);
struct sock *vsock_create_connected(struct sock *parent);
/* Addressing. */
u32 (*get_local_cid)(void);
+
+ /* Read a single skb */
+ int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
};
/**** CORE ****/
int vsock_add_tap(struct vsock_tap *vt);
int vsock_remove_tap(struct vsock_tap *vt);
void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
+int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ int flags);
+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ size_t len, int flags);
+
+#ifdef CONFIG_BPF_SYSCALL
+extern struct proto vsock_proto;
+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+void __init vsock_bpf_build_proto(void);
+#else
+static inline void __init vsock_bpf_build_proto(void)
+{}
+#endif
#endif /* __AF_VSOCK_H__ */
{
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv4_neigh_lookup_noref(dev, key);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return n;
}
{
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv4_neigh_lookup_noref(dev, key);
neigh_confirm(n);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
void arp_init(void);
struct ax25_cb *cb;
};
-static inline struct ax25_sock *ax25_sk(const struct sock *sk)
-{
- return (struct ax25_sock *) sk;
-}
+#define ax25_sk(ptr) container_of_const(ptr, struct ax25_sock, sk)
static inline struct ax25_cb *sk_to_ax25(const struct sock *sk)
{
};
/**
+ * struct cfg80211_set_hw_timestamp - enable/disable HW timestamping
+ * @macaddr: peer MAC address. NULL to enable/disable HW timestamping for all
+ * addresses.
+ * @enable: if set, enable HW timestamping for the specified MAC address.
+ * Otherwise disable HW timestamping for the specified MAC address.
+ */
+struct cfg80211_set_hw_timestamp {
+ const u8 *macaddr;
+ bool enable;
+};
+
+/**
* cfg80211_get_chandef_type - return old channel type from chandef
* @chandef: the channel definition
*
enum nl80211_iftype iftype);
/**
+ * nl80211_send_chandef - sends the channel definition.
+ * @msg: the msg in which to send the channel definition
+ * @chandef: the channel definition to send
+ *
+ * Returns: 0 if the channel definition was added to @msg, < 0 on error
+ **/
+int nl80211_send_chandef(struct sk_buff *msg, const struct cfg80211_chan_def *chandef);
+
+/**
* ieee80211_chanwidth_rate_flags - return rate flags for channel width
* @width: the channel width of the channel
*
};
/**
+ * struct cfg80211_rnr_elems - Reduced neighbor report (RNR) elements
+ *
+ * @cnt: Number of elements in array %elem.
+ *
+ * @elem: Array of RNR element(s) to be added into Beacon frames.
+ * @elem.data: Data for RNR elements.
+ * @elem.len: Length of data.
+ */
+struct cfg80211_rnr_elems {
+ u8 cnt;
+ struct {
+ const u8 *data;
+ size_t len;
+ } elem[];
+};
+
+/**
* struct cfg80211_beacon_data - beacon data
* @link_id: the link ID for the AP MLD link sending this beacon
* @head: head portion of beacon (before TIM IE)
* @probe_resp_len: length of probe response template (@probe_resp)
* @probe_resp: probe response template (AP mode only)
* @mbssid_ies: multiple BSSID elements
+ * @rnr_ies: reduced neighbor report elements
* @ftm_responder: enable FTM responder functionality; -1 for no change
* (which also implies no change in LCI/civic location data)
* @lci: Measurement Report element content, starting with Measurement Token
const u8 *lci;
const u8 *civicloc;
struct cfg80211_mbssid_elems *mbssid_ies;
+ struct cfg80211_rnr_elems *rnr_ies;
s8 ftm_responder;
size_t head_len, tail_len;
* @add_link_station: Add a link to a station.
* @mod_link_station: Modify a link of a station.
* @del_link_station: Remove a link of a station.
+ *
+ * @set_hw_timestamp: Enable/disable HW timestamping of TM/FTM frames.
*/
struct cfg80211_ops {
int (*suspend)(struct wiphy *wiphy, struct cfg80211_wowlan *wow);
struct link_station_parameters *params);
int (*del_link_station)(struct wiphy *wiphy, struct net_device *dev,
struct link_station_del_parameters *params);
+ int (*set_hw_timestamp)(struct wiphy *wiphy, struct net_device *dev,
+ struct cfg80211_set_hw_timestamp *hwts);
};
/*
int n_akm_suites;
};
+#define CFG80211_HW_TIMESTAMP_ALL_PEERS 0xffff
+
/**
* struct wiphy - wireless hardware description
* @mtx: mutex for the data (structures) of this device
* NL80211_MAX_NR_AKM_SUITES in order to avoid compatibility issues with
* legacy userspace and maximum allowed value is
* CFG80211_MAX_NUM_AKM_SUITES.
+ *
+ * @hw_timestamp_max_peers: maximum number of peers that the driver supports
+ * enabling HW timestamping for concurrently. Setting this field to a
+ * non-zero value indicates that the driver supports HW timestamping.
+ * A value of %CFG80211_HW_TIMESTAMP_ALL_PEERS indicates the driver
+ * supports enabling HW timestamping for all peers (i.e. no need to
+ * specify a mac address).
*/
struct wiphy {
struct mutex mtx;
u8 ema_max_profile_periodicity;
u16 max_num_akm_suites;
+ u16 hw_timestamp_max_peers;
+
char priv[] __aligned(NETDEV_ALIGN);
};
* mesh control field.
*
* @skb: The input A-MSDU frame without any headers.
- * @mesh_hdr: use standard compliant mesh A-MSDU subframe header
+ * @mesh_hdr: the type of mesh header to test
+ * 0: non-mesh A-MSDU length field
+ * 1: big-endian mesh A-MSDU length field
+ * 2: little-endian mesh A-MSDU length field
* Returns: true if subframe header lengths are valid for the @mesh_hdr mode
*/
-bool ieee80211_is_valid_amsdu(struct sk_buff *skb, bool mesh_hdr);
+bool ieee80211_is_valid_amsdu(struct sk_buff *skb, u8 mesh_hdr);
/**
* ieee80211_amsdu_to_8023s - decode an IEEE 802.11n A-MSDU frame
* @extra_headroom: The hardware extra headroom for SKBs in the @list.
* @check_da: DA to check in the inner ethernet header, or NULL
* @check_sa: SA to check in the inner ethernet header, or NULL
- * @mesh_control: A-MSDU subframe header includes the mesh control field
+ * @mesh_control: see mesh_hdr in ieee80211_is_valid_amsdu
*/
void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list,
const u8 *addr, enum nl80211_iftype iftype,
const unsigned int extra_headroom,
const u8 *check_da, const u8 *check_sa,
- bool mesh_control);
+ u8 mesh_control);
/**
* ieee80211_get_8023_tunnel_proto - get RFC1042 or bridge tunnel encap protocol
* @ie: IEs
* @ielen: length of IEs
* @band: enum nl80211_band of the channel
- * @ftype: frame type
*
* Returns the channel number, or -1 if none could be determined.
*/
int cfg80211_get_ies_channel_number(const u8 *ie, size_t ielen,
- enum nl80211_band band,
- enum cfg80211_bss_frame_type ftype);
+ enum nl80211_band band);
/**
* cfg80211_inform_bss_data - inform cfg80211 of a new BSS
* responsible for any cleanup. The caller must also ensure that
* skb->protocol is set appropriately.
* @unencrypted: Whether the frame was received unencrypted
+ * @link_id: the link the frame was received on, -1 if not applicable or unknown
*
* This function is used to inform userspace about a received control port
* frame. It should only be used if userspace indicated it wants to receive
*
* Return: %true if the frame was passed to userspace
*/
-bool cfg80211_rx_control_port(struct net_device *dev,
- struct sk_buff *skb, bool unencrypted);
+bool cfg80211_rx_control_port(struct net_device *dev, struct sk_buff *skb,
+ bool unencrypted, int link_id);
/**
* cfg80211_cqm_rssi_notify - connection quality monitoring rssi event
bool promisc_on_master;
};
-/* This structure defines the control interfaces that are overlayed by the
- * DSA layer on top of the DSA CPU/management net_device instance. This is
- * used by the core net_device layer while calling various net_device_ops
- * function pointers.
- */
-struct dsa_netdevice_ops {
- int (*ndo_eth_ioctl)(struct net_device *dev, struct ifreq *ifr,
- int cmd);
-};
-
struct dsa_lag {
struct net_device *dev;
unsigned int id;
*/
const struct ethtool_ops *orig_ethtool_ops;
- /*
- * Original copy of the master netdev net_device_ops
- */
- const struct dsa_netdevice_ops *netdev_ops;
-
/* List of MAC addresses that must be forwarded on this port.
* These are only valid on CPU ports and DSA links.
*/
#endif
}
-#if IS_ENABLED(CONFIG_NET_DSA)
-static inline int __dsa_netdevice_ops_check(struct net_device *dev)
-{
- int err = -EOPNOTSUPP;
-
- if (!dev->dsa_ptr)
- return err;
-
- if (!dev->dsa_ptr->netdev_ops)
- return err;
-
- return 0;
-}
-
-static inline int dsa_ndo_eth_ioctl(struct net_device *dev, struct ifreq *ifr,
- int cmd)
-{
- const struct dsa_netdevice_ops *ops;
- int err;
-
- err = __dsa_netdevice_ops_check(dev);
- if (err)
- return err;
-
- ops = dev->dsa_ptr->netdev_ops;
-
- return ops->ndo_eth_ioctl(dev, ifr, cmd);
-}
-#else
-static inline int dsa_ndo_eth_ioctl(struct net_device *dev, struct ifreq *ifr,
- int cmd)
-{
- return -EOPNOTSUPP;
-}
-#endif
-
void dsa_unregister_switch(struct dsa_switch *ds);
int dsa_register_switch(struct dsa_switch *ds);
void dsa_switch_shutdown(struct dsa_switch *ds);
#include <linux/bug.h>
#include <linux/jiffies.h>
#include <linux/refcount.h>
+#include <linux/rcuref.h>
#include <net/neighbour.h>
#include <asm/processor.h>
#include <linux/indirect_call_wrapper.h>
unsigned short trailer_len; /* space to reserve at tail */
/*
- * __refcnt wants to be on a different cache line from
+ * __rcuref wants to be on a different cache line from
* input/output/ops or performance tanks badly
*/
#ifdef CONFIG_64BIT
- atomic_t __refcnt; /* 64-bit offset 64 */
+ rcuref_t __rcuref; /* 64-bit offset 64 */
#endif
int __use;
unsigned long lastuse;
- struct lwtunnel_state *lwtstate;
struct rcu_head rcu_head;
short error;
short __pad;
__u32 tclassid;
#ifndef CONFIG_64BIT
- atomic_t __refcnt; /* 32-bit offset 64 */
+ struct lwtunnel_state *lwtstate;
+ rcuref_t __rcuref; /* 32-bit offset 64 */
#endif
netdevice_tracker dev_tracker;
+
+ /*
+ * Used by rtable and rt6_info. Moves lwtstate into the next cache
+ * line on 64bit so that lwtstate does not cause false sharing with
+ * __rcuref under contention of __rcuref. This also puts the
+ * frequently accessed members of rtable and rt6_info out of the
+ * __rcuref cache line.
+ */
+ struct list_head rt_uncached;
+ struct uncached_list *rt_uncached_list;
+#ifdef CONFIG_64BIT
+ struct lwtunnel_state *lwtstate;
+#endif
};
struct dst_metrics {
{
/*
* If your kernel compilation stops here, please check
- * the placement of __refcnt in struct dst_entry
+ * the placement of __rcuref in struct dst_entry
*/
- BUILD_BUG_ON(offsetof(struct dst_entry, __refcnt) & 63);
- WARN_ON(atomic_inc_not_zero(&dst->__refcnt) == 0);
+ BUILD_BUG_ON(offsetof(struct dst_entry, __rcuref) & 63);
+ WARN_ON(!rcuref_get(&dst->__rcuref));
}
static inline void dst_use_noref(struct dst_entry *dst, unsigned long time)
*/
static inline bool dst_hold_safe(struct dst_entry *dst)
{
- return atomic_inc_not_zero(&dst->__refcnt);
+ return rcuref_get(&dst->__rcuref);
}
/**
/*
* Copyright (c) 2017 Intel Deutschland GmbH
- * Copyright (c) 2018-2019, 2021 Intel Corporation
+ * Copyright (c) 2018-2019, 2021-2022 Intel Corporation
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
IEEE80211_RADIOTAP_HE_MU = 24,
IEEE80211_RADIOTAP_ZERO_LEN_PSDU = 26,
IEEE80211_RADIOTAP_LSIG = 27,
+ IEEE80211_RADIOTAP_TLV = 28,
/* valid in every it_present bitmap, even vendor namespaces */
IEEE80211_RADIOTAP_RADIOTAP_NAMESPACE = 29,
IEEE80211_RADIOTAP_VENDOR_NAMESPACE = 30,
- IEEE80211_RADIOTAP_EXT = 31
+ IEEE80211_RADIOTAP_EXT = 31,
+ IEEE80211_RADIOTAP_EHT_USIG = 33,
+ IEEE80211_RADIOTAP_EHT = 34,
};
/* for IEEE80211_RADIOTAP_FLAGS */
IEEE80211_RADIOTAP_ZERO_LEN_PSDU_VENDOR = 0xff,
};
+struct ieee80211_radiotap_tlv {
+ __le16 type;
+ __le16 len;
+ u8 data[];
+} __packed;
+
+/**
+ * struct ieee80211_radiotap_vendor_content - radiotap vendor data content
+ * @oui: radiotap vendor namespace OUI
+ * @oui_subtype: radiotap vendor sub namespace
+ * @vendor_type: radiotap vendor type
+ * @reserved: should always be set to zero (to avoid leaking memory)
+ * @data: the actual vendor namespace data
+ */
+struct ieee80211_radiotap_vendor_content {
+ u8 oui[3];
+ u8 oui_subtype;
+ __le16 vendor_type;
+ __le16 reserved;
+ u8 data[];
+} __packed;
+
+/**
+ * struct ieee80211_radiotap_vendor_tlv - vendor radiotap data information
+ * @type: should always be set to IEEE80211_RADIOTAP_VENDOR_NAMESPACE
+ * @len: length of data
+ * @content: vendor content, see &struct ieee80211_radiotap_vendor_content
+ */
+struct ieee80211_radiotap_vendor_tlv {
+ __le16 type; /* IEEE80211_RADIOTAP_VENDOR_NAMESPACE */
+ __le16 len;
+ struct ieee80211_radiotap_vendor_content content;
+};
+
+/* ieee80211_radiotap_eht_usig - content of U-SIG tlv (type 33)
+ * see www.radiotap.org/fields/U-SIG.html for details
+ */
+struct ieee80211_radiotap_eht_usig {
+ __le32 common;
+ __le32 value;
+ __le32 mask;
+} __packed;
+
+/* ieee80211_radiotap_eht - content of EHT tlv (type 34)
+ * see www.radiotap.org/fields/EHT.html for details
+ */
+struct ieee80211_radiotap_eht {
+ __le32 known;
+ __le32 data[9];
+ __le32 user_info[];
+} __packed;
+
+/* Known field for EHT TLV
+ * The suffix indicates what the field applies to, as follows:
+ * O - OFDMA (including TB), M - MU-MIMO, S - EHT sounding.
+ */
+enum ieee80211_radiotap_eht_known {
+ IEEE80211_RADIOTAP_EHT_KNOWN_SPATIAL_REUSE = 0x00000002,
+ IEEE80211_RADIOTAP_EHT_KNOWN_GI = 0x00000004,
+ IEEE80211_RADIOTAP_EHT_KNOWN_EHT_LTF = 0x00000010,
+ IEEE80211_RADIOTAP_EHT_KNOWN_LDPC_EXTRA_SYM_OM = 0x00000020,
+ IEEE80211_RADIOTAP_EHT_KNOWN_PRE_PADD_FACOR_OM = 0x00000040,
+ IEEE80211_RADIOTAP_EHT_KNOWN_PE_DISAMBIGUITY_OM = 0x00000080,
+ IEEE80211_RADIOTAP_EHT_KNOWN_DISREGARD_O = 0x00000100,
+ IEEE80211_RADIOTAP_EHT_KNOWN_DISREGARD_S = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_KNOWN_CRC1 = 0x00002000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_TAIL1 = 0x00004000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_CRC2_O = 0x00008000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_TAIL2_O = 0x00010000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_NSS_S = 0x00020000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_BEAMFORMED_S = 0x00040000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_NR_NON_OFDMA_USERS_M = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_ENCODING_BLOCK_CRC_M = 0x00100000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_ENCODING_BLOCK_TAIL_M = 0x00200000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_RU_MRU_SIZE_OM = 0x00400000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_RU_MRU_INDEX_OM = 0x00800000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_RU_ALLOC_TB_FMT = 0x01000000,
+ IEEE80211_RADIOTAP_EHT_KNOWN_PRIMARY_80 = 0x02000000,
+};
+
+enum ieee80211_radiotap_eht_data {
+ /* Data 0 */
+ IEEE80211_RADIOTAP_EHT_DATA0_SPATIAL_REUSE = 0x00000078,
+ IEEE80211_RADIOTAP_EHT_DATA0_GI = 0x00000180,
+ IEEE80211_RADIOTAP_EHT_DATA0_LTF = 0x00000600,
+ IEEE80211_RADIOTAP_EHT_DATA0_EHT_LTF = 0x00003800,
+ IEEE80211_RADIOTAP_EHT_DATA0_LDPC_EXTRA_SYM_OM = 0x00004000,
+ IEEE80211_RADIOTAP_EHT_DATA0_PRE_PADD_FACOR_OM = 0x00018000,
+ IEEE80211_RADIOTAP_EHT_DATA0_PE_DISAMBIGUITY_OM = 0x00020000,
+ IEEE80211_RADIOTAP_EHT_DATA0_DISREGARD_S = 0x000c0000,
+ IEEE80211_RADIOTAP_EHT_DATA0_DISREGARD_O = 0x003c0000,
+ IEEE80211_RADIOTAP_EHT_DATA0_CRC1_O = 0x03c00000,
+ IEEE80211_RADIOTAP_EHT_DATA0_TAIL1_O = 0xfc000000,
+ /* Data 1 */
+ IEEE80211_RADIOTAP_EHT_DATA1_RU_SIZE = 0x0000001f,
+ IEEE80211_RADIOTAP_EHT_DATA1_RU_INDEX = 0x00001fe0,
+ IEEE80211_RADIOTAP_EHT_DATA1_RU_ALLOC_CC_1_1_1 = 0x003fe000,
+ IEEE80211_RADIOTAP_EHT_DATA1_RU_ALLOC_CC_1_1_1_KNOWN = 0x00400000,
+ IEEE80211_RADIOTAP_EHT_DATA1_PRIMARY_80 = 0xc0000000,
+ /* Data 2 */
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_1 = 0x000001ff,
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_1_KNOWN = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_1_1_2 = 0x0007fc00,
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_1_1_2_KNOWN = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_2 = 0x1ff00000,
+ IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_2_KNOWN = 0x20000000,
+ /* Data 3 */
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_1 = 0x000001ff,
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_1_KNOWN = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_2_2_1 = 0x0007fc00,
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_2_2_1_KNOWN = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_2 = 0x1ff00000,
+ IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_2_KNOWN = 0x20000000,
+ /* Data 4 */
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_2 = 0x000001ff,
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_2_KNOWN = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_1_2_3 = 0x0007fc00,
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_1_2_3_KNOWN = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_3 = 0x1ff00000,
+ IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_3_KNOWN = 0x20000000,
+ /* Data 5 */
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_4 = 0x000001ff,
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_4_KNOWN = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_2_2_4 = 0x0007fc00,
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_2_2_4_KNOWN = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_5 = 0x1ff00000,
+ IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_5_KNOWN = 0x20000000,
+ /* Data 6 */
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_5 = 0x000001ff,
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_5_KNOWN = 0x00000200,
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_1_2_6 = 0x0007fc00,
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_1_2_6_KNOWN = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_6 = 0x1ff00000,
+ IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_6_KNOWN = 0x20000000,
+ /* Data 7 */
+ IEEE80211_RADIOTAP_EHT_DATA7_CRC2_O = 0x0000000f,
+ IEEE80211_RADIOTAP_EHT_DATA7_TAIL_2_O = 0x000003f0,
+ IEEE80211_RADIOTAP_EHT_DATA7_NSS_S = 0x0000f000,
+ IEEE80211_RADIOTAP_EHT_DATA7_BEAMFORMED_S = 0x00010000,
+ IEEE80211_RADIOTAP_EHT_DATA7_NUM_OF_NON_OFDMA_USERS = 0x000e0000,
+ IEEE80211_RADIOTAP_EHT_DATA7_USER_ENCODING_BLOCK_CRC = 0x00f00000,
+ IEEE80211_RADIOTAP_EHT_DATA7_USER_ENCODING_BLOCK_TAIL = 0x3f000000,
+ /* Data 8 */
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_PS_160 = 0x00000001,
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B0 = 0x00000002,
+ IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B7_B1 = 0x000001fc,
+};
+
+enum ieee80211_radiotap_eht_user_info {
+ IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID_KNOWN = 0x00000001,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_MCS_KNOWN = 0x00000002,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_CODING_KNOWN = 0x00000004,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_KNOWN_O = 0x00000010,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_KNOWN_O = 0x00000020,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_SPATIAL_CONFIG_KNOWN_M = 0x00000040,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_DATA_FOR_USER = 0x00000080,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID = 0x0007ff00,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_CODING = 0x00080000,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_MCS = 0x00f00000,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_O = 0x0f000000,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_O = 0x20000000,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_SPATIAL_CONFIG_M = 0x3f000000,
+ IEEE80211_RADIOTAP_EHT_USER_INFO_RESEVED_c0000000 = 0xc0000000,
+};
+
+enum ieee80211_radiotap_eht_usig_common {
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER_KNOWN = 0x00000001,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_KNOWN = 0x00000002,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL_KNOWN = 0x00000004,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR_KNOWN = 0x00000008,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP_KNOWN = 0x00000010,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BAD_USIG_CRC = 0x00000020,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER = 0x00007000,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW = 0x00038000,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL = 0x00040000,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR = 0x01f80000,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP = 0xfe000000,
+};
+
+enum ieee80211_radiotap_eht_usig_mu {
+ /* MU-USIG-1 */
+ IEEE80211_RADIOTAP_EHT_USIG1_MU_B20_B24_DISREGARD = 0x0000001f,
+ IEEE80211_RADIOTAP_EHT_USIG1_MU_B25_VALIDATE = 0x00000020,
+ /* MU-USIG-2 */
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B0_B1_PPDU_TYPE = 0x000000c0,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B2_VALIDATE = 0x00000100,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B3_B7_PUNCTURED_INFO = 0x00003e00,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B8_VALIDATE = 0x00004000,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B9_B10_SIG_MCS = 0x00018000,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B11_B15_EHT_SIG_SYMBOLS = 0x003e0000,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B16_B19_CRC = 0x03c00000,
+ IEEE80211_RADIOTAP_EHT_USIG2_MU_B20_B25_TAIL = 0xfc000000,
+};
+
+enum ieee80211_radiotap_eht_usig_tb {
+ /* TB-USIG-1 */
+ IEEE80211_RADIOTAP_EHT_USIG1_TB_B20_B25_DISREGARD = 0x0000001f,
+
+ /* TB-USIG-2 */
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B0_B1_PPDU_TYPE = 0x000000c0,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B2_VALIDATE = 0x00000100,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B3_B6_SPATIAL_REUSE_1 = 0x00001e00,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B7_B10_SPATIAL_REUSE_2 = 0x0001e000,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B11_B15_DISREGARD = 0x003e0000,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B16_B19_CRC = 0x03c00000,
+ IEEE80211_RADIOTAP_EHT_USIG2_TB_B20_B25_TAIL = 0xfc000000,
+};
+
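
These TLV definitions replace the old fixed vendor-namespace layout. As a hedged illustration only (hypothetical helper name, and it assumes the caller reserved enough skb headroom), a driver could prepend a U-SIG TLV before the 802.11 header roughly like this::

	/* Sketch: sizeof(*tlv) + sizeof(*usig) is 16 bytes, already a
	 * multiple of 4, so no extra zeroed padding is needed here.
	 */
	static void rx_prepend_usig_tlv(struct sk_buff *skb, __le32 common,
					__le32 value, __le32 mask)
	{
		struct ieee80211_radiotap_eht_usig *usig;
		struct ieee80211_radiotap_tlv *tlv;

		tlv = skb_push(skb, sizeof(*tlv) + sizeof(*usig));
		tlv->type = cpu_to_le16(IEEE80211_RADIOTAP_EHT_USIG);
		tlv->len = cpu_to_le16(sizeof(*usig));

		usig = (void *)tlv->data;
		usig->common = common;
		usig->value = value;
		usig->mask = mask;

		/* mac_header must point at the 802.11 header after the TLVs */
		skb_set_mac_header(skb, sizeof(*tlv) + sizeof(*usig));
		IEEE80211_SKB_RXCB(skb)->flag |= RX_FLAG_RADIOTAP_TLV_AT_END;
	}
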
/**
* ieee80211_get_radiotap_len - get radiotap header length
*/
return sk_to_full_sk(skb->sk);
}
-static inline struct inet_sock *inet_sk(const struct sock *sk)
-{
- return (struct inet_sock *)sk;
-}
+#define inet_sk(ptr) container_of_const(ptr, struct inet_sock, sk)
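
With inet_sk() now built on container_of_const(), a const socket pointer yields a const inet_sock pointer; a minimal hypothetical sketch::

	static u16 example_local_port(const struct sock *sk)
	{
		/* const in, const out: no cast or const laundering needed */
		const struct inet_sock *inet = inet_sk(sk);

		return ntohs(inet->inet_sport);
	}
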
static inline void __inet_sk_copy_descendant(struct sock *sk_to,
const struct sock *sk_from,
struct inet6_dev *rt6i_idev;
u32 rt6i_flags;
- struct list_head rt6i_uncached;
- struct uncached_list *rt6i_uncached_list;
-
/* more non-fragment space at head required */
unsigned short rt6i_nfheader_len;
};
rcu_read_lock();
from = rcu_dereference(rt->from);
- if (from) {
+ if (from)
*addr = from->fib6_prefsrc.addr;
- } else {
- struct in6_addr in6_zero = {};
-
- *addr = in6_zero;
- }
+ else
+ *addr = in6addr_any;
rcu_read_unlock();
}
static inline void ip6_rt_put_flags(struct rt6_info *rt, int flags)
{
if (!(flags & RT6_LOOKUP_F_DST_NOREF) ||
- !list_empty(&rt->rt6i_uncached))
+ !list_empty(&rt->dst.rt_uncached))
ip6_rt_put(rt);
}
GENMASK((sizeof_field(struct ip_tunnel_info, \
options_len) * BITS_PER_BYTE) - 1, 0)
+#define ip_tunnel_info_opts(info) \
+ _Generic(info, \
+ const struct ip_tunnel_info * : ((const void *)((info) + 1)),\
+ struct ip_tunnel_info * : ((void *)((info) + 1))\
+ )
+
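
The accessor is likewise turned from an inline function into a _Generic() macro so that constness of the ip_tunnel_info pointer carries over to the returned options pointer; a hedged usage sketch::

	static void example_copy_opts(const struct ip_tunnel_info *info, void *dst)
	{
		/* the const argument selects the const branch of _Generic() */
		const void *opts = ip_tunnel_info_opts(info);

		memcpy(dst, opts, info->options_len);
	}
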
struct ip_tunnel_info {
struct ip_tunnel_key key;
#ifdef CONFIG_DST_CACHE
}
}
-static inline void *ip_tunnel_info_opts(struct ip_tunnel_info *info)
-{
- return info + 1;
-}
-
static inline void ip_tunnel_info_opts_get(void *to,
const struct ip_tunnel_info *info)
{
* This structure keeps information about a BSS (and an association
* to that BSS) that can change during the lifetime of the BSS.
*
+ * @vif: reference to owning VIF
* @addr: (link) address used locally
* @link_id: link ID, or 0 for non-MLO
* @htc_trig_based_pkt_ext: default PE in 4us units, if BSS supports HE
* write-protected by sdata_lock and local->mtx so holding either is fine
* for read access.
* @color_change_color: the bss color that will be used after the change.
+ * @ht_ldpc: in AP mode, indicates interface has HT LDPC capability.
+ * @vht_ldpc: in AP mode, indicates interface has VHT LDPC capability.
+ * @he_ldpc: in AP mode, indicates interface has HE LDPC capability.
 * @vht_su_beamformer: in AP mode, does this BSS support operation as a VHT SU
* beamformer
 * @vht_su_beamformee: in AP mode, does this BSS support operation as a VHT SU
* @he_full_ul_mumimo: does this BSS support the reception (AP) or transmission
* (non-AP STA) of an HE TB PPDU on an RU that spans the entire PPDU
* bandwidth
+ * @eht_su_beamformer: in AP-mode, does this BSS enable operation as an EHT SU
+ * beamformer
+ * @eht_su_beamformee: in AP-mode, does this BSS enable operation as an EHT SU
+ * beamformee
+ * @eht_mu_beamformer: in AP-mode, does this BSS enable operation as an EHT MU
+ * beamformer
*/
struct ieee80211_bss_conf {
+ struct ieee80211_vif *vif;
+
const u8 *bssid;
unsigned int link_id;
u8 addr[ETH_ALEN] __aligned(2);
bool color_change_active;
u8 color_change_color;
+ bool ht_ldpc;
+ bool vht_ldpc;
+ bool he_ldpc;
bool vht_su_beamformer;
bool vht_su_beamformee;
bool vht_mu_beamformer;
bool he_su_beamformee;
bool he_mu_beamformer;
bool he_full_ul_mumimo;
+ bool eht_su_beamformer;
+ bool eht_su_beamformee;
+ bool eht_mu_beamformer;
};
/**
* subframes share the same sequence number. Reported subframes can be
* either regular MSDU or singly A-MSDUs. Subframes must not be
* interleaved with other frames.
- * @RX_FLAG_RADIOTAP_VENDOR_DATA: This frame contains vendor-specific
- * radiotap data in the skb->data (before the frame) as described by
- * the &struct ieee80211_vendor_radiotap.
+ * @RX_FLAG_RADIOTAP_TLV_AT_END: This frame contains radiotap TLVs in the
+ * skb->data (before the 802.11 header).
+ * If used, the SKB's mac_header pointer must be set to point
+ * to the 802.11 header after the TLVs, and any padding added after TLV
+ * data to align to 4 must be cleared by the driver putting the TLVs
+ * in the skb.
* @RX_FLAG_ALLOW_SAME_PN: Allow the same PN as same packet before.
* This is used for AMSDU subframes which can have the same PN as
* the first subframe.
RX_FLAG_ONLY_MONITOR = BIT(17),
RX_FLAG_SKIP_MONITOR = BIT(18),
RX_FLAG_AMSDU_MORE = BIT(19),
- RX_FLAG_RADIOTAP_VENDOR_DATA = BIT(20),
+ RX_FLAG_RADIOTAP_TLV_AT_END = BIT(20),
RX_FLAG_MIC_STRIPPED = BIT(21),
RX_FLAG_ALLOW_SAME_PN = BIT(22),
RX_FLAG_ICV_STRIPPED = BIT(23),
}
/**
- * struct ieee80211_vendor_radiotap - vendor radiotap data information
- * @present: presence bitmap for this vendor namespace
- * (this could be extended in the future if any vendor needs more
- * bits, the radiotap spec does allow for that)
- * @align: radiotap vendor namespace alignment. This defines the needed
- * alignment for the @data field below, not for the vendor namespace
- * description itself (which has a fixed 2-byte alignment)
- * Must be a power of two, and be set to at least 1!
- * @oui: radiotap vendor namespace OUI
- * @subns: radiotap vendor sub namespace
- * @len: radiotap vendor sub namespace skip length, if alignment is done
- * then that's added to this, i.e. this is only the length of the
- * @data field.
- * @pad: number of bytes of padding after the @data, this exists so that
- * the skb data alignment can be preserved even if the data has odd
- * length
- * @data: the actual vendor namespace data
- *
- * This struct, including the vendor data, goes into the skb->data before
- * the 802.11 header. It's split up in mac80211 using the align/oui/subns
- * data.
- */
-struct ieee80211_vendor_radiotap {
- u32 present;
- u8 align;
- u8 oui[3];
- u8 subns;
- u8 pad;
- u16 len;
- u8 data[];
-} __packed;
-
-/**
* enum ieee80211_conf_flags - configuration flags
*
* Flags to define PHY configuration options
* the station. See @sta_pre_rcu_remove if needed.
* This callback can sleep.
*
+ * @link_add_debugfs: Drivers can use this callback to add debugfs files
+ * when a link is added to a mac80211 vif. This callback should be within
+ * a CONFIG_MAC80211_DEBUGFS conditional. This callback can sleep.
+ * For non-MLO the callback will be called once for the default bss_conf
+ * with the vif's directory rather than a separate subdirectory.
+ *
* @sta_add_debugfs: Drivers can use this callback to add debugfs files
* when a station is added to mac80211's station list. This callback
* should be within a CONFIG_MAC80211_DEBUGFS conditional. This
* Note that a sta can also be inserted or removed with valid links,
* i.e. passed to @sta_add/@sta_state with sta->valid_links not zero.
 * In fact, a sta cannot change between having valid_links and not having them.
+ * @set_hw_timestamp: Enable/disable HW timestamping of TM/FTM frames. This is
+ * not restored at HW reset by mac80211 so drivers need to take care of
+ * that.
+ * @net_setup_tc: Called from .ndo_setup_tc in order to prepare hardware
+ * flow offloading for flows originating from the vif.
+ * Note that the driver must not assume that the vif driver_data is valid
+ * at this point, since the callback can be called during netdev teardown.
*/
struct ieee80211_ops {
void (*tx)(struct ieee80211_hw *hw,
int (*sta_remove)(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
#ifdef CONFIG_MAC80211_DEBUGFS
+ void (*link_add_debugfs)(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct dentry *dir);
void (*sta_add_debugfs)(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
u16 old_links, u16 new_links);
+ int (*set_hw_timestamp)(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_set_hw_timestamp *hwts);
+ int (*net_setup_tc)(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct net_device *dev,
+ enum tc_setup_type type,
+ void *type_data);
};
/**
unsigned int link_id);
/**
+ * ieee80211_beacon_get_template_ema_index - EMA beacon template generation
+ * @hw: pointer obtained from ieee80211_alloc_hw().
+ * @vif: &struct ieee80211_vif pointer from the add_interface callback.
+ * @offs: &struct ieee80211_mutable_offsets pointer to struct that will
+ * receive the offsets that may be updated by the driver.
+ * @link_id: the link id to which the beacon belongs (or 0 for a non-MLD AP).
+ * @ema_index: index of the beacon in the EMA set.
+ *
+ * This function follows the same rules as ieee80211_beacon_get_template()
+ * but returns a beacon template which includes the multiple BSSID element at
+ * the requested index.
+ *
+ * Return: The beacon template. %NULL indicates the end of EMA templates.
+ */
+struct sk_buff *
+ieee80211_beacon_get_template_ema_index(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_mutable_offsets *offs,
+ unsigned int link_id, u8 ema_index);
+
+/**
+ * struct ieee80211_ema_beacons - List of EMA beacons
+ * @cnt: count of EMA beacons.
+ *
+ * @bcn: array of EMA beacons.
+ * @bcn.skb: the skb containing this specific beacon
+ * @bcn.offs: &struct ieee80211_mutable_offsets pointer to struct that will
+ * receive the offsets that may be updated by the driver.
+ */
+struct ieee80211_ema_beacons {
+ u8 cnt;
+ struct {
+ struct sk_buff *skb;
+ struct ieee80211_mutable_offsets offs;
+ } bcn[];
+};
+
+/**
+ * ieee80211_beacon_get_template_ema_list - EMA beacon template generation
+ * @hw: pointer obtained from ieee80211_alloc_hw().
+ * @vif: &struct ieee80211_vif pointer from the add_interface callback.
+ * @link_id: the link id to which the beacon belongs (or 0 for a non-MLD AP)
+ *
+ * This function follows the same rules as ieee80211_beacon_get_template()
+ * but allocates and returns a pointer to a list of all beacon templates required
+ * to cover all profiles in the multiple BSSID set. Each template includes only
+ * one multiple BSSID element.
+ *
+ * Driver must call ieee80211_beacon_free_ema_list() to free the memory.
+ *
+ * Return: EMA beacon templates of type struct ieee80211_ema_beacons *.
+ * %NULL on error.
+ */
+struct ieee80211_ema_beacons *
+ieee80211_beacon_get_template_ema_list(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int link_id);
+
+/**
+ * ieee80211_beacon_free_ema_list - free an EMA beacon template list
+ * @ema_beacons: list of EMA beacons, as a &struct ieee80211_ema_beacons pointer.
+ *
+ * This function will free a list previously acquired by calling
+ * ieee80211_beacon_get_template_ema_list()
+ */
+void ieee80211_beacon_free_ema_list(struct ieee80211_ema_beacons *ema_beacons);
+
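
A hypothetical driver-side sketch of this EMA template API (drv_write_beacon_to_hw() is a placeholder; note that ieee80211_beacon_free_ema_list() also frees the per-template skbs, so the driver must copy what it needs before freeing)::

	static void drv_update_ema_beacons(struct ieee80211_hw *hw,
					   struct ieee80211_vif *vif,
					   unsigned int link_id)
	{
		struct ieee80211_ema_beacons *ema;
		u8 i;

		ema = ieee80211_beacon_get_template_ema_list(hw, vif, link_id);
		if (!ema)
			return;

		for (i = 0; i < ema->cnt; i++)
			drv_write_beacon_to_hw(hw, ema->bcn[i].skb,
					       &ema->bcn[i].offs);

		ieee80211_beacon_free_ema_list(ema);
	}
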
+/**
* ieee80211_beacon_get_tim - beacon generation function
* @hw: pointer obtained from ieee80211_alloc_hw().
* @vif: &struct ieee80211_vif pointer from the add_interface callback.
unsigned long delay);
/**
+ * ieee80211_refresh_tx_agg_session_timer - Refresh a tx agg session timer.
+ * @sta: the station for which to start a BA session
+ * @tid: the TID to BA on.
+ *
+ * This function allows the low level driver to refresh the tx agg session
+ * timer to maintain the BA session; the session level will still be managed
+ * by mac80211.
+ *
+ * Note: must be called in an RCU critical section.
+ */
+void ieee80211_refresh_tx_agg_session_timer(struct ieee80211_sta *sta,
+ u16 tid);
+
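
For instance, a driver could keep an established BA session alive from its TX completion path with a sketch along these lines (hypothetical function; the RCU read lock satisfies the documented requirement)::

	static void drv_tx_done_refresh_ba(struct ieee80211_vif *vif,
					   const u8 *ra, u16 tid)
	{
		struct ieee80211_sta *sta;

		rcu_read_lock();
		sta = ieee80211_find_sta(vif, ra);
		if (sta)
			ieee80211_refresh_tx_agg_session_timer(sta, tid);
		rcu_read_unlock();
	}
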
+/**
* ieee80211_start_tx_ba_session - Start a tx Block Ack session.
* @sta: the station for which to start a BA session
* @tid: the TID to BA on.
#define MAX_PORTS_IN_MANA_DEV 256
+/* Update this count whenever the respective structures are changed */
+#define MANA_STATS_RX_COUNT 5
+#define MANA_STATS_TX_COUNT 11
+
struct mana_stats_rx {
u64 packets;
u64 bytes;
u64 packets;
u64 bytes;
u64 xdp_xmit;
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tso_inner_packets;
+ u64 tso_inner_bytes;
+ u64 short_pkt_fmt;
+ u64 long_pkt_fmt;
+ u64 csum_partial;
+ u64 mana_map_err;
struct u64_stats_sync syncp;
};
struct mana_ethtool_stats {
u64 stop_queue;
u64 wake_queue;
+ u64 tx_cqes;
+ u64 tx_cqe_err;
+ u64 tx_cqe_unknown_type;
+ u64 rx_cqes;
+ u64 rx_coalesced_err;
+ u64 rx_cqe_unknown_type;
};
struct mana_context {
{
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv6_neigh_lookup_noref(dev, pkey);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return n;
}
{
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv6_neigh_lookup_noref(dev, pkey);
neigh_confirm(n);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
static inline void __ipv6_confirm_neigh_stub(struct net_device *dev,
{
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv6_neigh_lookup_noref_stub(dev, pkey);
neigh_confirm(n);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
/* uses ipv6_stub and is meant for use outside of IPv6 core */
const void *pkey,
struct net_device *dev)
{
- struct neigh_hash_table *nht = rcu_dereference_bh(tbl->nht);
+ struct neigh_hash_table *nht = rcu_dereference(tbl->nht);
struct neighbour *n;
u32 hash_val;
hash_val = hash(pkey, dev, nht->hash_rnd) >> (32 - nht->hash_shift);
- for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]);
+ for (n = rcu_dereference(nht->hash_buckets[hash_val]);
n != NULL;
- n = rcu_dereference_bh(n->next)) {
+ n = rcu_dereference(n->next)) {
if (n->dev == dev && key_eq(n, pkey))
return n;
}
int neigh_table_clear(int index, struct neigh_table *tbl);
struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
struct net_device *dev);
-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
- const void *pkey);
struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
struct net_device *dev, bool want_ref);
static inline struct neighbour *neigh_create(struct neigh_table *tbl,
if (READ_ONCE(neigh->used) != now)
WRITE_ONCE(neigh->used, now);
- if (!(neigh->nud_state & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE)))
+ if (!(READ_ONCE(neigh->nud_state) & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE)))
return __neigh_event_send(neigh, skb, immediate_ok);
return 0;
}
#include <uapi/linux/netfilter/nf_nat.h>
unsigned int
-nf_nat_redirect_ipv4(struct sk_buff *skb,
- const struct nf_nat_ipv4_multi_range_compat *mr,
+nf_nat_redirect_ipv4(struct sk_buff *skb, const struct nf_nat_range2 *range,
unsigned int hooknum);
unsigned int
nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
}
/* Variant of nexthop_fib6_nh().
- * Caller should either hold rcu_read_lock_bh(), or RTNL.
+ * Caller should either hold rcu_read_lock(), or RTNL.
*/
static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh)
{
if (nh->is_group) {
struct nh_group *nh_grp;
- nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp);
+ nh_grp = rcu_dereference_rtnl(nh->nh_grp);
nh = nexthop_mpath_select(nh_grp, 0);
if (!nh)
return NULL;
}
- nhi = rcu_dereference_bh_rtnl(nh->nh_info);
+ nhi = rcu_dereference_rtnl(nh->nh_info);
if (nhi->family == AF_INET6)
return &nhi->fib6_nh;
}
struct qdisc_watchdog {
- u64 last_expires;
struct hrtimer timer;
struct Qdisc *qdisc;
};
extern struct proto raw_prot;
extern struct raw_hashinfo raw_v4_hashinfo;
-bool raw_v4_match(struct net *net, struct sock *sk, unsigned short num,
+bool raw_v4_match(struct net *net, const struct sock *sk, unsigned short num,
__be32 raddr, __be32 laddr, int dif, int sdif);
int raw_abort(struct sock *sk, int err);
u32 ipmr_table;
};
-static inline struct raw_sock *raw_sk(const struct sock *sk)
-{
- return (struct raw_sock *)sk;
-}
+#define raw_sk(ptr) container_of_const(ptr, struct raw_sock, inet.sk)
static inline bool raw_sk_bound_dev_eq(struct net *net, int bound_dev_if,
int dif, int sdif)
#include <net/raw.h>
extern struct raw_hashinfo raw_v6_hashinfo;
-bool raw_v6_match(struct net *net, struct sock *sk, unsigned short num,
+bool raw_v6_match(struct net *net, const struct sock *sk, unsigned short num,
const struct in6_addr *loc_addr,
const struct in6_addr *rmt_addr, int dif, int sdif);
/* Miscellaneous cached information */
u32 rt_mtu_locked:1,
rt_pmtu:31;
-
- struct list_head rt_uncached;
- struct uncached_list *rt_uncached_list;
};
static inline bool rt_is_input_route(const struct rtable *rt)
}
}
}
+
+static inline bool scm_has_secdata(struct socket *sock)
+{
+ return test_bit(SOCK_PASSSEC, &sock->flags);
+}
#else
static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm)
{ }
+
+static inline bool scm_has_secdata(struct socket *sock)
+{
+ return false;
+}
#endif /* CONFIG_SECURITY_NETWORK */
static __inline__ void scm_recv(struct socket *sock, struct msghdr *msg,
struct scm_cookie *scm, int flags)
{
if (!msg->msg_control) {
- if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp)
+ if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp ||
+ scm_has_secdata(sock))
msg->msg_flags |= MSG_CTRUNC;
scm_destroy(scm);
return;
struct sctp_sched_ops *sched_ops);
void sctp_sched_ops_prio_init(void);
void sctp_sched_ops_rr_init(void);
+void sctp_sched_ops_fc_init(void);
+void sctp_sched_ops_wfq_init(void);
#endif /* __sctp_stream_sched_h__ */
struct {
struct list_head rr_list;
};
+ struct {
+ struct list_head fc_list;
+ __u32 fc_length;
+ __u16 fc_weight;
+ };
};
};
/* The next stream in line */
struct sctp_stream_out_ext *rr_next;
};
+ struct {
+ struct list_head fc_list;
+ };
};
struct sctp_stream_interleave *si;
};
int (*move_data)(struct smcd_dev *dev, u64 dmb_tok, unsigned int idx,
bool sf, unsigned int offset, void *data,
unsigned int size);
+ int (*supports_v2)(void);
u8* (*get_system_eid)(void);
u64 (*get_local_gid)(struct smcd_dev *dev);
u16 (*get_chid)(struct smcd_dev *dev);
rcu_read_lock();
dst = rcu_dereference(sk->sk_dst_cache);
- if (dst && !atomic_inc_not_zero(&dst->__refcnt))
+ if (dst && !rcuref_get(&dst->__rcuref))
dst = NULL;
rcu_read_unlock();
return dst;
last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp);
if (!time_between32(now, last_overflow, last_overflow + HZ))
- WRITE_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp, now);
+ WRITE_ONCE(tcp_sk_rw(sk)->rx_opt.ts_recent_stamp, now);
}
/* syncookies: no recent synqueue overflow on this listening socket? */
#define __NET_VXLAN_H 1
#include <linux/if_vlan.h>
+#include <linux/rhashtable-types.h>
#include <net/udp_tunnel.h>
#include <net/dst_metadata.h>
#include <net/rtnetlink.h>
struct vxlan_vni_group __rcu *vnigrp;
struct hlist_head fdb_head[FDB_HASH_SIZE];
+
+ struct rhashtable mdb_tbl;
+ struct hlist_head mdb_list;
+ unsigned int mdb_seq;
};
#define VXLAN_F_LEARN 0x01
#define VXLAN_F_IPV6_LINKLOCAL 0x8000
#define VXLAN_F_TTL_INHERIT 0x10000
#define VXLAN_F_VNIFILTER 0x20000
+#define VXLAN_F_MDB 0x40000
/* Flags that are used in the receive path. These flags must match in
* order for a socket to be shareable
return true;
}
+static inline void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, const struct vxlan_metadata *md)
+{
+ struct vxlanhdr_gbp *gbp;
+
+ if (!md->gbp)
+ return;
+
+ gbp = (struct vxlanhdr_gbp *)vxh;
+ vxh->vx_flags |= VXLAN_HF_GBP;
+
+ if (md->gbp & VXLAN_GBP_DONT_LEARN)
+ gbp->dont_learn = 1;
+
+ if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
+ gbp->policy_applied = 1;
+
+ gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
+}
+
#endif
atomic_t refcnt;
};
-static inline struct x25_sock *x25_sk(const struct sock *sk)
-{
- return (struct x25_sock *)sk;
-}
+#define x25_sk(ptr) container_of_const(ptr, struct x25_sock, sk)
/* af_x25.c */
extern int sysctl_x25_restart_request_timeout;
struct xsk_map {
struct bpf_map map;
spinlock_t lock; /* Synchronize map updates */
+ atomic_t count;
struct xdp_sock __rcu *xsk_map[];
};
XFRM_DEV_OFFLOAD_PACKET,
};
+enum {
+ XFRM_DEV_OFFLOAD_FLAG_ACQ = 1,
+};
+
struct xfrm_dev_offload {
struct net_device *dev;
netdevice_tracker dev_tracker;
unsigned long offload_handle;
u8 dir : 2;
u8 type : 2;
+ u8 flags : 2;
};
struct xfrm_mode {
};
struct ocelot;
+struct device_node;
struct ocelot_ops {
struct net_device *(*port_to_netdev)(struct ocelot *ocelot, int port);
enum devlink_sb_pool_type pool_type,
u32 *p_cur, u32 *p_max);
+int ocelot_port_configure_serdes(struct ocelot *ocelot, int port,
+ struct device_node *portnp);
+
+void ocelot_phylink_mac_config(struct ocelot *ocelot, int port,
+ unsigned int link_an_mode,
+ const struct phylink_link_state *state);
void ocelot_phylink_mac_link_down(struct ocelot *ocelot, int port,
unsigned int link_an_mode,
phy_interface_t interface,
}
#endif
+void ocelot_pll5_init(struct ocelot *ocelot);
+
#endif
),
TP_fast_assign(
- struct in6_addr in6_zero = {};
struct net_device *dev;
struct in6_addr *in6;
__be32 *p32;
*p32 = nhc->nhc_gw.ipv4;
in6 = (struct in6_addr *)__entry->gw6;
- *in6 = in6_zero;
+ *in6 = in6addr_any;
} else if (nhc->nhc_gw_family == AF_INET6) {
p32 = (__be32 *) __entry->gw4;
*p32 = 0;
*p32 = 0;
in6 = (struct in6_addr *)__entry->gw6;
- *in6 = in6_zero;
+ *in6 = in6addr_any;
}
),
strcpy(__entry->name, "-");
}
if (res->f6i == net->ipv6.fib6_null_entry) {
- struct in6_addr in6_zero = {};
-
in6 = (struct in6_addr *)__entry->gw;
- *in6 = in6_zero;
-
+ *in6 = in6addr_any;
} else if (res->nh) {
in6 = (struct in6_addr *)__entry->gw;
*in6 = res->nh->fib_nh_gw6;
TRACE_EVENT(qrtr_ns_service_announce_new,
- TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
+ TP_PROTO(unsigned int service, unsigned int instance,
+ unsigned int node, unsigned int port),
TP_ARGS(service, instance, node, port),
TP_STRUCT__entry(
- __field(__le32, service)
- __field(__le32, instance)
- __field(__le32, node)
- __field(__le32, port)
+ __field(unsigned int, service)
+ __field(unsigned int, instance)
+ __field(unsigned int, node)
+ __field(unsigned int, port)
),
TP_fast_assign(
TRACE_EVENT(qrtr_ns_service_announce_del,
- TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
+ TP_PROTO(unsigned int service, unsigned int instance,
+ unsigned int node, unsigned int port),
TP_ARGS(service, instance, node, port),
TP_STRUCT__entry(
- __field(__le32, service)
- __field(__le32, instance)
- __field(__le32, node)
- __field(__le32, port)
+ __field(unsigned int, service)
+ __field(unsigned int, instance)
+ __field(unsigned int, node)
+ __field(unsigned int, port)
),
TP_fast_assign(
TRACE_EVENT(qrtr_ns_server_add,
- TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port),
+ TP_PROTO(unsigned int service, unsigned int instance,
+ unsigned int node, unsigned int port),
TP_ARGS(service, instance, node, port),
TP_STRUCT__entry(
- __field(__le32, service)
- __field(__le32, instance)
- __field(__le32, node)
- __field(__le32, port)
+ __field(unsigned int, service)
+ __field(unsigned int, instance)
+ __field(unsigned int, node)
+ __field(unsigned int, port)
),
TP_fast_assign(
),
TP_fast_assign(
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
struct in6_addr *pin6;
__be32 *p32;
),
TP_fast_assign(
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
struct in6_addr *pin6;
__be32 *p32;
),
TP_fast_assign(
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
__be32 *p32;
__entry->skbaddr = skb;
* different maps if key/value layout matches across maps.
* Every bpf_timer_set_callback() can have different callback_fn.
*
+ * *flags* can be one of:
+ *
+ * **BPF_F_TIMER_ABS**
+ *			Start the timer at an absolute expiry time instead of the
+ * default relative one.
+ *
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
* Description
* Write *len* bytes from *src* into *dst*, starting from *offset*
* into *dst*.
- * *flags* is currently unused.
+ *
+ * *flags* must be 0 except for skb-type dynptrs.
+ *
+ * For skb-type dynptrs:
+ * * All data slices of the dynptr are automatically
+ * invalidated after **bpf_dynptr_write**\ (). This is
+ * because writing may pull the skb and change the
+ * underlying packet buffer.
+ *
+ * * For *flags*, please see the flags accepted by
+ * **bpf_skb_store_bytes**\ ().
* Return
* 0 on success, -E2BIG if *offset* + *len* exceeds the length
* of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
- * is a read-only dynptr or if *flags* is not 0.
+ * is a read-only dynptr or if *flags* is not correct. For skb-type dynptrs,
+ * other errors correspond to errors returned by **bpf_skb_store_bytes**\ ().
*
* void *bpf_dynptr_data(const struct bpf_dynptr *ptr, u32 offset, u32 len)
* Description
*
* *len* must be a statically known value. The returned data slice
* is invalidated whenever the dynptr is invalidated.
+ *
+ * skb and xdp type dynptrs may not use bpf_dynptr_data. They should
+ * instead use bpf_dynptr_slice and bpf_dynptr_slice_rdwr.
* Return
* Pointer to the underlying dynptr data, NULL if the dynptr is
* read-only, if the dynptr is invalid, or if the offset and length
enum bpf_core_relo_kind kind;
};
+/*
+ * Flags to control bpf_timer_start() behaviour.
+ * - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
+ * relative to current time.
+ */
+enum {
+ BPF_F_TIMER_ABS = (1ULL << 0),
+};
+
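
A minimal BPF-program-side sketch of the new flag, assuming a map value type with an embedded struct bpf_timer that was already set up with bpf_timer_init() and bpf_timer_set_callback() (the map, key and element names are hypothetical; bpf_ktime_get_ns() matches the default CLOCK_MONOTONIC base)::

	struct map_elem *elem = bpf_map_lookup_elem(&timer_map, &key);

	if (elem) {
		/* fire at an absolute time: now + 1 second */
		__u64 expires = bpf_ktime_get_ns() + 1000000000ULL;

		bpf_timer_start(&elem->timer, expires, BPF_F_TIMER_ABS);
	}
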
#endif /* _UAPI__LINUX_BPF_H__ */
ETHTOOL_A_RINGS_CQE_SIZE, /* u32 */
ETHTOOL_A_RINGS_TX_PUSH, /* u8 */
ETHTOOL_A_RINGS_RX_PUSH, /* u8 */
+ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN, /* u32 */
+ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX, /* u32 */
/* add new constants above here */
__ETHTOOL_A_RINGS_CNT,
MDBA_MDB_EATTR_GROUP_MODE,
MDBA_MDB_EATTR_SOURCE,
MDBA_MDB_EATTR_RTPROT,
+ MDBA_MDB_EATTR_DST,
+ MDBA_MDB_EATTR_DST_PORT,
+ MDBA_MDB_EATTR_VNI,
+ MDBA_MDB_EATTR_IFINDEX,
+ MDBA_MDB_EATTR_SRC_VNI,
__MDBA_MDB_EATTR_MAX
};
#define MDBA_MDB_EATTR_MAX (__MDBA_MDB_EATTR_MAX - 1)
MDBE_ATTR_SRC_LIST,
MDBE_ATTR_GROUP_MODE,
MDBE_ATTR_RTPROT,
+ MDBE_ATTR_DST,
+ MDBE_ATTR_DST_PORT,
+ MDBE_ATTR_VNI,
+ MDBE_ATTR_IFINDEX,
+ MDBE_ATTR_SRC_VNI,
__MDBE_ATTR_MAX,
};
#define MDBE_ATTR_MAX (__MDBE_ATTR_MAX - 1)
IFLA_MACVLAN_MACADDR_COUNT,
IFLA_MACVLAN_BC_QUEUE_LEN,
IFLA_MACVLAN_BC_QUEUE_LEN_USED,
+ IFLA_MACVLAN_BC_CUTOFF,
__IFLA_MACVLAN_MAX,
};
* enum nft_range_attributes - nf_tables range expression netlink attributes
*
* @NFTA_RANGE_SREG: source register of data to compare (NLA_U32: nft_registers)
- * @NFTA_RANGE_OP: cmp operation (NLA_U32: nft_cmp_ops)
+ * @NFTA_RANGE_OP: cmp operation (NLA_U32: nft_range_ops)
* @NFTA_RANGE_FROM_DATA: data range from (NLA_NESTED: nft_data_attributes)
* @NFTA_RANGE_TO_DATA: data range to (NLA_NESTED: nft_data_attributes)
*/
* @NFTA_EXTHDR_LEN: extension header length (NLA_U32)
* @NFTA_EXTHDR_FLAGS: extension header flags (NLA_U32)
* @NFTA_EXTHDR_OP: option match type (NLA_U32)
- * @NFTA_EXTHDR_SREG: option match type (NLA_U32)
+ * @NFTA_EXTHDR_SREG: source register (NLA_U32: nft_registers)
*/
enum nft_exthdr_attributes {
NFTA_EXTHDR_UNSPEC,
* @NFT_META_TIME_HOUR: hour of day (in seconds)
* @NFT_META_SDIF: slave device interface index
* @NFT_META_SDIFNAME: slave device interface name
+ * @NFT_META_BRI_BROUTE: packet br_netfilter_broute bit
*/
enum nft_meta_keys {
NFT_META_LEN,
NFT_META_TIME_HOUR,
NFT_META_SDIF,
NFT_META_SDIFNAME,
+ NFT_META_BRI_BROUTE,
__NFT_META_IIFTYPE,
};
/**
* enum nft_log_attributes - nf_tables log expression netlink attributes
*
- * @NFTA_LOG_GROUP: netlink group to send messages to (NLA_U32)
+ * @NFTA_LOG_GROUP: netlink group to send messages to (NLA_U16)
* @NFTA_LOG_PREFIX: prefix to prepend to log messages (NLA_STRING)
* @NFTA_LOG_SNAPLEN: length of payload to include in netlink message (NLA_U32)
- * @NFTA_LOG_QTHRESHOLD: queue threshold (NLA_U32)
+ * @NFTA_LOG_QTHRESHOLD: queue threshold (NLA_U16)
* @NFTA_LOG_LEVEL: log level (NLA_U32)
* @NFTA_LOG_FLAGS: logging flags (NLA_U32)
*/
NFQA_VLAN, /* nested attribute: packet vlan info */
NFQA_L2HDR, /* full L2 header */
NFQA_PRIORITY, /* skb->priority */
+ NFQA_CGROUP_CLASSID, /* __u32 cgroup classid */
__NFQA_MAX
};
* @NL80211_CMD_MODIFY_LINK_STA: Modify a link of an MLD station
* @NL80211_CMD_REMOVE_LINK_STA: Remove a link of an MLD station
*
+ * @NL80211_CMD_SET_HW_TIMESTAMP: Enable/disable HW timestamping of Timing
+ * measurement and Fine timing measurement frames. If %NL80211_ATTR_MAC
+ * is included, enable/disable HW timestamping only for frames to/from the
+ * specified MAC address. Otherwise enable/disable HW timestamping for
+ * all TM/FTM frames (including ones that were enabled with specific MAC
+ * address). If %NL80211_ATTR_HW_TIMESTAMP_ENABLED is not included, disable
+ * HW timestamping.
+ * The number of peers that HW timestamping can be enabled for concurrently
+ * is indicated by %NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS.
+ *
* @NL80211_CMD_MAX: highest used command number
* @__NL80211_CMD_AFTER_LAST: internal use
*/
NL80211_CMD_MODIFY_LINK_STA,
NL80211_CMD_REMOVE_LINK_STA,
+ NL80211_CMD_SET_HW_TIMESTAMP,
+
/* add new commands above here */
/* used to define NL80211_CMD_MAX below */
* indicates that the sub-channel is punctured. Higher 16 bits are
* reserved.
*
+ * @NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS: Maximum number of peers that HW
+ * timestamping can be enabled for concurrently (u16), a wiphy attribute.
+ *	A value of 0xffff indicates that setting it for all peers (i.e. not
+ *	specifying an address with %NL80211_CMD_SET_HW_TIMESTAMP) is supported.
+ * @NL80211_ATTR_HW_TIMESTAMP_ENABLED: Indicates whether HW timestamping should
+ * be enabled or not (flag attribute).
+ *
+ * @NL80211_ATTR_EMA_RNR_ELEMS: Optional nested attribute for
+ * reduced neighbor report (RNR) elements. This attribute can be used
+ * only when NL80211_MBSSID_CONFIG_ATTR_EMA is enabled.
+ * Userspace is responsible for splitting the RNR into multiple
+ * elements such that each element excludes the non-transmitting
+ * profiles already included in the MBSSID element
+ * (%NL80211_ATTR_MBSSID_ELEMS) at the same index. Each EMA beacon
+ * will be generated by adding MBSSID and RNR elements at the same
+ *	index. If the userspace includes more RNR elements than the number of
+ *	MBSSID elements then these will be added in every EMA beacon.
+ *
* @NUM_NL80211_ATTR: total number of nl80211_attrs available
* @NL80211_ATTR_MAX: highest attribute number currently defined
* @__NL80211_ATTR_AFTER_LAST: internal use
NL80211_ATTR_PUNCT_BITMAP,
+ NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS,
+ NL80211_ATTR_HW_TIMESTAMP_ENABLED,
+
+ NL80211_ATTR_EMA_RNR_ELEMS,
+
/* add attributes here, update the policy in nl80211.c */
__NL80211_ATTR_AFTER_LAST,
* @NL80211_BAND_ATTR_EDMG_BW_CONFIG: Channel BW Configuration subfield encodes
* the allowed channel bandwidth configurations.
* Defined by IEEE P802.11ay/D4.0 section 9.4.2.251, Table 13.
+ * @NL80211_BAND_ATTR_S1G_MCS_NSS_SET: S1G capabilities, supported S1G-MCS and NSS
+ * set subfield, as in the S1G information IE, 5 bytes
+ * @NL80211_BAND_ATTR_S1G_CAPA: S1G capabilities information subfield as in the
+ * S1G information IE, 10 bytes
* @NL80211_BAND_ATTR_MAX: highest band attribute currently defined
* @__NL80211_BAND_ATTR_AFTER_LAST: internal use
*/
NL80211_BAND_ATTR_EDMG_CHANNELS,
NL80211_BAND_ATTR_EDMG_BW_CONFIG,
+ NL80211_BAND_ATTR_S1G_MCS_NSS_SET,
+ NL80211_BAND_ATTR_S1G_CAPA,
+
/* keep last */
__NL80211_BAND_ATTR_AFTER_LAST,
NL80211_BAND_ATTR_MAX = __NL80211_BAND_ATTR_AFTER_LAST - 1
* @NL80211_EXT_FEATURE_SECURE_NAN: Device supports NAN Pairing which enables
* authentication, data encryption and message integrity.
*
+ * @NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA: Device supports randomized TA
+ * in authentication and deauthentication frames sent to unassociated peer
+ * using @NL80211_CMD_FRAME.
+ *
* @NUM_NL80211_EXT_FEATURES: number of extended features.
* @MAX_NL80211_EXT_FEATURES: highest extended feature index.
*/
NL80211_EXT_FEATURE_POWERED_ADDR_CHANGE,
NL80211_EXT_FEATURE_PUNCT,
NL80211_EXT_FEATURE_SECURE_NAN,
+ NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA,
/* add new features before the definition below */
NUM_NL80211_EXT_FEATURES,
* @NL80211_SCAN_FLAG_FREQ_KHZ: report scan results with
* %NL80211_ATTR_SCAN_FREQ_KHZ. This also means
* %NL80211_ATTR_SCAN_FREQUENCIES will not be included.
- * @NL80211_SCAN_FLAG_COLOCATED_6GHZ: scan for colocated APs reported by
- * 2.4/5 GHz APs
+ * @NL80211_SCAN_FLAG_COLOCATED_6GHZ: scan for collocated APs reported by
+ * 2.4/5 GHz APs. When the flag is set, the scan logic will use the
+ * information from the RNR element found in beacons/probe responses
+ * received on the 2.4/5 GHz channels to actively scan only the 6GHz
+ * channels on which APs are expected to be found. Note that when not set,
+ * the scan logic would scan all 6GHz channels, but since transmission of
+ *	probe requests on non-PSC channels is limited, it is highly likely that
+ *	these channels would be scanned passively. Also note that when the flag
+ *	is set, in addition to the collocated APs, PSC channels would also be
+ * scanned if the user space has asked for it.
*/
enum nl80211_scan_flags {
NL80211_SCAN_FLAG_LOW_PRIORITY = 1<<0,
SCTP_SS_DEFAULT = SCTP_SS_FCFS,
SCTP_SS_PRIO,
SCTP_SS_RR,
- SCTP_SS_MAX = SCTP_SS_RR
+ SCTP_SS_FC,
+ SCTP_SS_WFQ,
+ SCTP_SS_MAX = SCTP_SS_WFQ
};
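
Userspace selects these schedulers through the existing SCTP_STREAM_SCHEDULER socket option; a hedged sketch, assuming headers that already expose the new SCTP_SS_WFQ value::

	#include <sys/socket.h>
	#include <netinet/sctp.h>

	/* Select weighted fair queueing for future associations. */
	static int use_wfq_scheduler(int fd)
	{
		struct sctp_assoc_value av = {
			.assoc_id = SCTP_FUTURE_ASSOC,
			.assoc_value = SCTP_SS_WFQ,
		};

		return setsockopt(fd, IPPROTO_SCTP, SCTP_STREAM_SCHEDULER,
				  &av, sizeof(av));
	}
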
/* Probe Interval socket option */
*/
TCA_TUNNEL_KEY_ENC_TOS, /* u8 */
TCA_TUNNEL_KEY_ENC_TTL, /* u8 */
+ TCA_TUNNEL_KEY_NO_FRAG, /* flag */
__TCA_TUNNEL_KEY_MAX,
};
#define VIRTIO_NET_F_GUEST_USO6 55 /* Guest can handle USOv6 in. */
#define VIRTIO_NET_F_HOST_USO 56 /* Host can handle USO in. */
#define VIRTIO_NET_F_HASH_REPORT 57 /* Supports hash report */
+#define VIRTIO_NET_F_GUEST_HDRLEN 59 /* Guest provides the exact hdr_len value. */
#define VIRTIO_NET_F_RSS 60 /* Supports RSS RX steering */
#define VIRTIO_NET_F_RSC_EXT 61 /* extended coalescing info */
#define VIRTIO_NET_F_STANDBY 62 /* Act as standby for another device
UNIXCB(skb).fp = fpl;
skb->sk = sk;
- skb->scm_io_uring = 1;
- skb->destructor = unix_destruct_scm;
+ skb->destructor = io_uring_destruct_scm;
refcount_add(skb->truesize, &sk->sk_wmem_alloc);
}
return num_elems;
}
+static u64 array_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_array *array = container_of(map, struct bpf_array, map);
+ bool percpu = map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
+ u32 elem_size = array->elem_size;
+ u64 entries = map->max_entries;
+ u64 usage = sizeof(*array);
+
+ if (percpu) {
+ usage += entries * sizeof(void *);
+ usage += entries * elem_size * num_possible_cpus();
+ } else {
+ if (map->map_flags & BPF_F_MMAPABLE) {
+ usage = PAGE_ALIGN(usage);
+ usage += PAGE_ALIGN(entries * elem_size);
+ } else {
+ usage += entries * elem_size;
+ }
+ }
+ return usage;
+}
+
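
For instance, under this accounting a plain (non-mmapable, non-percpu) BPF_MAP_TYPE_ARRAY with 1024 entries and a 64-byte rounded element size reports sizeof(struct bpf_array) + 1024 * 64 bytes, whereas BPF_MAP_TYPE_PERCPU_ARRAY reports sizeof(struct bpf_array) plus 1024 pointers plus 1024 * 64 bytes for every possible CPU.
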
BTF_ID_LIST_SINGLE(array_map_btf_ids, struct, bpf_array)
const struct bpf_map_ops array_map_ops = {
.map_meta_equal = array_map_meta_equal,
.map_update_batch = generic_map_update_batch,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_array_elem,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
};
.map_update_batch = generic_map_update_batch,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_array_elem,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
};
.map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem,
.map_release_uref = prog_array_map_clear,
.map_seq_show_elem = prog_array_map_seq_show_elem,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
};
.map_fd_put_ptr = perf_event_fd_array_put_ptr,
.map_release = perf_event_fd_array_release,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
};
.map_fd_get_ptr = cgroup_fd_array_get_ptr,
.map_fd_put_ptr = cgroup_fd_array_put_ptr,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
};
#endif
.map_lookup_batch = generic_map_lookup_batch,
.map_update_batch = generic_map_update_batch,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
};
return btf_type_is_void(key_type) ? 0 : -EINVAL;
}
+static u64 bloom_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_bloom_filter *bloom;
+ u64 bitset_bytes;
+
+ bloom = container_of(map, struct bpf_bloom_filter, map);
+ bitset_bytes = BITS_TO_BYTES((u64)bloom->bitset_mask + 1);
+ bitset_bytes = roundup(bitset_bytes, sizeof(unsigned long));
+ return sizeof(*bloom) + bitset_bytes;
+}
+
BTF_ID_LIST_SINGLE(bpf_bloom_map_btf_ids, struct, bpf_bloom_filter)
const struct bpf_map_ops bloom_filter_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_update_elem = bloom_map_update_elem,
.map_delete_elem = bloom_map_delete_elem,
.map_check_btf = bloom_map_check_btf,
+ .map_mem_usage = bloom_map_mem_usage,
.map_btf_id = &bpf_bloom_map_btf_ids[0],
};
.map_update_elem = bpf_cgrp_storage_update_elem,
.map_delete_elem = bpf_cgrp_storage_delete_elem,
.map_check_btf = bpf_local_storage_map_check_btf,
+ .map_mem_usage = bpf_local_storage_map_mem_usage,
.map_btf_id = &bpf_local_storage_map_btf_id[0],
.map_owner_storage_ptr = cgroup_storage_ptr,
};
.map_update_elem = bpf_fd_inode_storage_update_elem,
.map_delete_elem = bpf_fd_inode_storage_delete_elem,
.map_check_btf = bpf_local_storage_map_check_btf,
+ .map_mem_usage = bpf_local_storage_map_mem_usage,
.map_btf_id = &bpf_local_storage_map_btf_id[0],
.map_owner_storage_ptr = inode_storage_ptr,
};
return map->ops->map_owner_storage_ptr(owner);
}
+static bool selem_linked_to_storage_lockless(const struct bpf_local_storage_elem *selem)
+{
+ return !hlist_unhashed_lockless(&selem->snode);
+}
+
static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem)
{
return !hlist_unhashed(&selem->snode);
}
+static bool selem_linked_to_map_lockless(const struct bpf_local_storage_elem *selem)
+{
+ return !hlist_unhashed_lockless(&selem->map_node);
+}
+
static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem)
{
return !hlist_unhashed(&selem->map_node);
if (selem) {
if (value)
copy_map_value(&smap->map, SDATA(selem)->data, value);
+ /* No need to call check_and_init_map_value as memory is zero init */
return selem;
}
kfree_rcu(local_storage, rcu);
}
-static void bpf_selem_free_rcu(struct rcu_head *rcu)
+static void bpf_selem_free_fields_rcu(struct rcu_head *rcu)
+{
+ struct bpf_local_storage_elem *selem;
+ struct bpf_local_storage_map *smap;
+
+ selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
+ /* protected by the rcu_barrier*() */
+ smap = rcu_dereference_protected(SDATA(selem)->smap, true);
+ bpf_obj_free_fields(smap->map.record, SDATA(selem)->data);
+ kfree(selem);
+}
+
+static void bpf_selem_free_fields_trace_rcu(struct rcu_head *rcu)
+{
+ /* Free directly if Tasks Trace RCU GP also implies RCU GP */
+ if (rcu_trace_implies_rcu_gp())
+ bpf_selem_free_fields_rcu(rcu);
+ else
+ call_rcu(rcu, bpf_selem_free_fields_rcu);
+}
+
+static void bpf_selem_free_trace_rcu(struct rcu_head *rcu)
{
struct bpf_local_storage_elem *selem;
{
struct bpf_local_storage_map *smap;
bool free_local_storage;
+ struct btf_record *rec;
void *owner;
smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held());
SDATA(selem))
RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL);
- if (use_trace_rcu)
- call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
- else
- kfree_rcu(selem, rcu);
+ /* A different RCU callback is chosen whenever we need to free
+ * additional fields in selem data before freeing selem.
+ * bpf_local_storage_map_free only executes rcu_barrier to wait for RCU
+ * callbacks when it has special fields, hence we can only conditionally
+ * dereference smap, as by this time the map might have already been
+ * freed without waiting for our call_rcu callback if it did not have
+ * any special fields.
+ */
+ rec = smap->map.record;
+ if (use_trace_rcu) {
+ if (!IS_ERR_OR_NULL(rec))
+ call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_fields_trace_rcu);
+ else
+ call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_trace_rcu);
+ } else {
+ if (!IS_ERR_OR_NULL(rec))
+ call_rcu(&selem->rcu, bpf_selem_free_fields_rcu);
+ else
+ kfree_rcu(selem, rcu);
+ }
return free_local_storage;
}
bool free_local_storage = false;
unsigned long flags;
- if (unlikely(!selem_linked_to_storage(selem)))
+ if (unlikely(!selem_linked_to_storage_lockless(selem)))
/* selem has already been unlinked from sk */
return;
struct bpf_local_storage_map_bucket *b;
unsigned long flags;
- if (unlikely(!selem_linked_to_map(selem)))
+ if (unlikely(!selem_linked_to_map_lockless(selem)))
/* selem has already be unlinked from smap */
return;
err = check_flags(old_sdata, map_flags);
if (err)
return ERR_PTR(err);
- if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) {
+ if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
copy_map_value_locked(&smap->map, old_sdata->data,
value, false);
return old_sdata;
return free_storage;
}
+u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_local_storage_map *smap = (struct bpf_local_storage_map *)map;
+ u64 usage = sizeof(*smap);
+
+ /* The dynamically callocated selems are not counted currently. */
+ usage += sizeof(*smap->buckets) * (1ULL << smap->bucket_log);
+ return usage;
+}
+
struct bpf_map *
bpf_local_storage_map_alloc(union bpf_attr *attr,
struct bpf_local_storage_cache *cache)
*/
synchronize_rcu();
+ /* Only delay freeing of smap, buckets are not needed anymore */
kvfree(smap->buckets);
+
+ /* When local storage has special fields, callbacks for
+ * bpf_selem_free_fields_rcu and bpf_selem_free_fields_trace_rcu will
+ * keep using the map BTF record, we need to execute an RCU barrier to
+ * wait for them as the record will be freed right after our map_free
+ * callback.
+ */
+ if (!IS_ERR_OR_NULL(smap->map.record)) {
+ rcu_barrier_tasks_trace();
+ /* We cannot skip rcu_barrier() when rcu_trace_implies_rcu_gp()
+ * is true, because while call_rcu invocation is skipped in that
+ * case in bpf_selem_free_fields_trace_rcu (and all local
+ * storage maps pass use_trace_rcu = true), there can be
+ * call_rcu callbacks based on use_trace_rcu = false in the
+ * while ((selem = ...)) loop above or when owner's free path
+ * calls bpf_local_storage_unlink_nolock.
+ */
+ rcu_barrier();
+ }
bpf_map_area_free(smap);
}
return map;
}
+static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
+ const struct bpf_struct_ops *st_ops = st_map->st_ops;
+ const struct btf_type *vt = st_ops->value_type;
+ u64 usage;
+
+ usage = sizeof(*st_map) +
+ vt->size - sizeof(struct bpf_struct_ops_value);
+ usage += vt->size;
+ usage += btf_type_vlen(vt) * sizeof(struct bpf_links *);
+ usage += PAGE_SIZE;
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(bpf_struct_ops_map_btf_ids, struct, bpf_struct_ops_map)
const struct bpf_map_ops bpf_struct_ops_map_ops = {
.map_alloc_check = bpf_struct_ops_map_alloc_check,
.map_delete_elem = bpf_struct_ops_map_delete_elem,
.map_update_elem = bpf_struct_ops_map_update_elem,
.map_seq_show_elem = bpf_struct_ops_map_seq_show_elem,
+ .map_mem_usage = bpf_struct_ops_map_mem_usage,
.map_btf_id = &bpf_struct_ops_map_btf_ids[0],
};
.map_update_elem = bpf_pid_task_storage_update_elem,
.map_delete_elem = bpf_pid_task_storage_delete_elem,
.map_check_btf = bpf_local_storage_map_check_btf,
+ .map_mem_usage = bpf_local_storage_map_mem_usage,
.map_btf_id = &bpf_local_storage_map_btf_id[0],
.map_owner_storage_ptr = task_storage_ptr,
};
BTF_KFUNC_HOOK_TRACING,
BTF_KFUNC_HOOK_SYSCALL,
BTF_KFUNC_HOOK_FMODRET,
+ BTF_KFUNC_HOOK_CGROUP_SKB,
+ BTF_KFUNC_HOOK_SCHED_ACT,
+ BTF_KFUNC_HOOK_SK_SKB,
+ BTF_KFUNC_HOOK_SOCKET_FILTER,
+ BTF_KFUNC_HOOK_LWT,
BTF_KFUNC_HOOK_MAX,
};
/* Reject extra tags */
if (btf_type_is_type_tag(btf_type_by_id(btf, t->type)))
return -EINVAL;
- if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off)))
+ if (!strcmp("kptr_untrusted", __btf_name_by_offset(btf, t->name_off)))
type = BPF_KPTR_UNREF;
- else if (!strcmp("kptr_ref", __btf_name_by_offset(btf, t->name_off)))
+ else if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off)))
type = BPF_KPTR_REF;
else
return -EINVAL;
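
The tag strings checked here are what BPF C emits for kptr fields; a hypothetical map-value declaration, assuming the __kptr/__kptr_untrusted convenience macros (btf_type_tag attribute wrappers as shipped in recent libbpf bpf_helpers.h)::

	struct map_value {
		/* "kptr_untrusted" tag -> BPF_KPTR_UNREF */
		struct task_struct __kptr_untrusted *any_task;
		/* "kptr" tag -> BPF_KPTR_REF */
		struct task_struct __kptr *owned_task;
	};
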
* int socket_filter_bpf_prog(struct __sk_buff *skb)
* { // no fields of skb are ever used }
*/
+ if (strcmp(ctx_tname, "__sk_buff") == 0 && strcmp(tname, "sk_buff") == 0)
+ return ctx_type;
+ if (strcmp(ctx_tname, "xdp_md") == 0 && strcmp(tname, "xdp_buff") == 0)
+ return ctx_type;
if (strcmp(ctx_tname, tname)) {
/* bpf_user_pt_regs_t is a typedef, so resolve it to
* underlying struct and check name again
const char *tname, *mname, *tag_value;
u32 vlen, elem_id, mid;
+ *flag = 0;
again:
tname = __btf_name_by_offset(btf, t->name_off);
if (!btf_type_is_struct(t)) {
* of this field or inside of this struct
*/
if (btf_type_is_struct(mtype)) {
+ if (BTF_INFO_KIND(mtype->info) == BTF_KIND_UNION &&
+ btf_type_vlen(mtype) != 1)
+ /*
+ * walking unions yields untrusted pointers
+				 * with the exception of __bpf_md_ptr and other
+ * unions with a single member
+ */
+ *flag |= PTR_UNTRUSTED;
+
/* our field must be inside that union or struct */
t = mtype;
stype = btf_type_skip_modifiers(btf, mtype->type, &id);
if (btf_type_is_struct(stype)) {
*next_btf_id = id;
- *flag = tmp_flag;
+ *flag |= tmp_flag;
return WALK_PTR;
}
}
return BTF_KFUNC_HOOK_TRACING;
case BPF_PROG_TYPE_SYSCALL:
return BTF_KFUNC_HOOK_SYSCALL;
+ case BPF_PROG_TYPE_CGROUP_SKB:
+ return BTF_KFUNC_HOOK_CGROUP_SKB;
+ case BPF_PROG_TYPE_SCHED_ACT:
+ return BTF_KFUNC_HOOK_SCHED_ACT;
+ case BPF_PROG_TYPE_SK_SKB:
+ return BTF_KFUNC_HOOK_SK_SKB;
+ case BPF_PROG_TYPE_SOCKET_FILTER:
+ return BTF_KFUNC_HOOK_SOCKET_FILTER;
+ case BPF_PROG_TYPE_LWT_OUT:
+ case BPF_PROG_TYPE_LWT_IN:
+ case BPF_PROG_TYPE_LWT_XMIT:
+ case BPF_PROG_TYPE_LWT_SEG6LOCAL:
+ return BTF_KFUNC_HOOK_LWT;
default:
return BTF_KFUNC_HOOK_MAX;
}
bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
const struct bpf_reg_state *reg,
- int off)
+ int off, const char *suffix)
{
struct btf *btf = reg->btf;
const struct btf_type *walk_type, *safe_type;
tname = btf_name_by_offset(btf, walk_type->name_off);
- ret = snprintf(safe_tname, sizeof(safe_tname), "%s__safe_fields", tname);
+ ret = snprintf(safe_tname, sizeof(safe_tname), "%s%s", tname, suffix);
if (ret < 0)
return false;
BPF_FIELD_SIZEOF(struct bpf_sysctl_kern, ppos),
treg, si->dst_reg,
offsetof(struct bpf_sysctl_kern, ppos));
- *insn++ = BPF_STX_MEM(
- BPF_SIZEOF(u32), treg, si->src_reg,
+ *insn++ = BPF_RAW_INSN(
+ BPF_CLASS(si->code) | BPF_MEM | BPF_SIZEOF(u32),
+ treg, si->src_reg,
bpf_ctx_narrow_access_offset(
- 0, sizeof(u32), sizeof(loff_t)));
+ 0, sizeof(u32), sizeof(loff_t)),
+ si->imm);
*insn++ = BPF_LDX_MEM(
BPF_DW, treg, si->dst_reg,
offsetof(struct bpf_sysctl_kern, tmp_reg));
return true;
}
-#define CG_SOCKOPT_ACCESS_FIELD(T, F) \
- T(BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F), \
- si->dst_reg, si->src_reg, \
- offsetof(struct bpf_sockopt_kern, F))
+#define CG_SOCKOPT_READ_FIELD(F) \
+ BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F), \
+ si->dst_reg, si->src_reg, \
+ offsetof(struct bpf_sockopt_kern, F))
+
+#define CG_SOCKOPT_WRITE_FIELD(F) \
+ BPF_RAW_INSN((BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F) | \
+ BPF_MEM | BPF_CLASS(si->code)), \
+ si->dst_reg, si->src_reg, \
+ offsetof(struct bpf_sockopt_kern, F), \
+ si->imm)
static u32 cg_sockopt_convert_ctx_access(enum bpf_access_type type,
const struct bpf_insn *si,
switch (si->off) {
case offsetof(struct bpf_sockopt, sk):
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, sk);
+ *insn++ = CG_SOCKOPT_READ_FIELD(sk);
break;
case offsetof(struct bpf_sockopt, level):
if (type == BPF_WRITE)
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, level);
+ *insn++ = CG_SOCKOPT_WRITE_FIELD(level);
else
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, level);
+ *insn++ = CG_SOCKOPT_READ_FIELD(level);
break;
case offsetof(struct bpf_sockopt, optname):
if (type == BPF_WRITE)
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, optname);
+ *insn++ = CG_SOCKOPT_WRITE_FIELD(optname);
else
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optname);
+ *insn++ = CG_SOCKOPT_READ_FIELD(optname);
break;
case offsetof(struct bpf_sockopt, optlen):
if (type == BPF_WRITE)
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, optlen);
+ *insn++ = CG_SOCKOPT_WRITE_FIELD(optlen);
else
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optlen);
+ *insn++ = CG_SOCKOPT_READ_FIELD(optlen);
break;
case offsetof(struct bpf_sockopt, retval):
BUILD_BUG_ON(offsetof(struct bpf_cg_run_ctx, run_ctx) != 0);
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct task_struct, bpf_ctx),
treg, treg,
offsetof(struct task_struct, bpf_ctx));
- *insn++ = BPF_STX_MEM(BPF_FIELD_SIZEOF(struct bpf_cg_run_ctx, retval),
- treg, si->src_reg,
- offsetof(struct bpf_cg_run_ctx, retval));
+ *insn++ = BPF_RAW_INSN(BPF_CLASS(si->code) | BPF_MEM |
+ BPF_FIELD_SIZEOF(struct bpf_cg_run_ctx, retval),
+ treg, si->src_reg,
+ offsetof(struct bpf_cg_run_ctx, retval),
+ si->imm);
*insn++ = BPF_LDX_MEM(BPF_DW, treg, si->dst_reg,
offsetof(struct bpf_sockopt_kern, tmp_reg));
} else {
}
break;
case offsetof(struct bpf_sockopt, optval):
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optval);
+ *insn++ = CG_SOCKOPT_READ_FIELD(optval);
break;
case offsetof(struct bpf_sockopt, optval_end):
- *insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optval_end);
+ *insn++ = CG_SOCKOPT_READ_FIELD(optval_end);
break;
}
return &bpf_get_current_pid_tgid_proto;
case BPF_FUNC_get_current_comm:
return &bpf_get_current_comm_proto;
- case BPF_FUNC_get_current_cgroup_id:
- return &bpf_get_current_cgroup_id_proto;
- case BPF_FUNC_get_current_ancestor_cgroup_id:
- return &bpf_get_current_ancestor_cgroup_id_proto;
#ifdef CONFIG_CGROUP_NET_CLASSID
case BPF_FUNC_get_cgroup_classid:
return &bpf_get_cgroup_classid_curr_proto;
__cpu_map_lookup_elem);
}
+static u64 cpu_map_mem_usage(const struct bpf_map *map)
+{
+ u64 usage = sizeof(struct bpf_cpu_map);
+
+ /* Currently the dynamically allocated elements are not counted */
+ usage += (u64)map->max_entries * sizeof(struct bpf_cpu_map_entry *);
+ return usage;
+}
+
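For instance, a CPUMAP created with max_entries = 64 reports roughly
sizeof(struct bpf_cpu_map) + 64 * sizeof(struct bpf_cpu_map_entry *) bytes here,
i.e. the fixed struct plus a 512-byte pointer array on a 64-bit kernel; the
bpf_cpu_map_entry objects allocated at update time are, per the comment above,
not included.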
BTF_ID_LIST_SINGLE(cpu_map_btf_ids, struct, bpf_cpu_map)
const struct bpf_map_ops cpu_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_lookup_elem = cpu_map_lookup_elem,
.map_get_next_key = cpu_map_get_next_key,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = cpu_map_mem_usage,
.map_btf_id = &cpu_map_btf_ids[0],
.map_redirect = cpu_map_redirect,
};
/* cpumask must be the first element so struct bpf_cpumask be cast to struct cpumask. */
BUILD_BUG_ON(offsetof(struct bpf_cpumask, cpumask) != 0);
- cpumask = bpf_mem_alloc(&bpf_cpumask_ma, sizeof(*cpumask));
+ cpumask = bpf_mem_cache_alloc(&bpf_cpumask_ma);
if (!cpumask)
return NULL;
if (refcount_dec_and_test(&cpumask->usage)) {
migrate_disable();
- bpf_mem_free(&bpf_cpumask_ma, cpumask);
+ bpf_mem_cache_free(&bpf_cpumask_ma, cpumask);
migrate_enable();
}
}
BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_cpumask_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_cpumask_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_cpumask_first, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_and_set_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_and_clear_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_setall, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_clear, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_and, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_or, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_xor, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_equal, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_intersects, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_full, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_any, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_first, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_set_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_clear_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_setall, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_clear, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_and, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_or, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_xor, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_equal, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_intersects, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_full, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_any, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_RCU)
BTF_SET8_END(cpumask_kfunc_btf_ids)
static const struct btf_kfunc_id_set cpumask_kfunc_set = {
},
};
- ret = bpf_mem_alloc_init(&bpf_cpumask_ma, 0, false);
+ ret = bpf_mem_alloc_init(&bpf_cpumask_ma, sizeof(struct bpf_cpumask), false);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &cpumask_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &cpumask_kfunc_set);
return ret ?: register_btf_id_dtor_kfuncs(cpumask_dtors,
return -EINVAL;
old_dev = unrcu_pointer(xchg(&dtab->netdev_map[k], NULL));
- if (old_dev)
+ if (old_dev) {
call_rcu(&old_dev->rcu, __dev_map_entry_free);
+ atomic_dec((atomic_t *)&dtab->items);
+ }
return 0;
}
old_dev = unrcu_pointer(xchg(&dtab->netdev_map[i], RCU_INITIALIZER(dev)));
if (old_dev)
call_rcu(&old_dev->rcu, __dev_map_entry_free);
+ else
+ atomic_inc((atomic_t *)&dtab->items);
return 0;
}
__dev_map_hash_lookup_elem);
}
+static u64 dev_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ u64 usage = sizeof(struct bpf_dtab);
+
+ if (map->map_type == BPF_MAP_TYPE_DEVMAP_HASH)
+ usage += (u64)dtab->n_buckets * sizeof(struct hlist_head);
+ else
+ usage += (u64)map->max_entries * sizeof(struct bpf_dtab_netdev *);
+ usage += atomic_read((atomic_t *)&dtab->items) *
+ (u64)sizeof(struct bpf_dtab_netdev);
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(dev_map_btf_ids, struct, bpf_dtab)
const struct bpf_map_ops dev_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_update_elem = dev_map_update_elem,
.map_delete_elem = dev_map_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = dev_map_mem_usage,
.map_btf_id = &dev_map_btf_ids[0],
.map_redirect = dev_map_redirect,
};
.map_update_elem = dev_map_hash_update_elem,
.map_delete_elem = dev_map_hash_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = dev_map_mem_usage,
.map_btf_id = &dev_map_btf_ids[0],
.map_redirect = dev_hash_map_redirect,
};
if (!dev || netdev != dev->dev)
continue;
odev = unrcu_pointer(cmpxchg(&dtab->netdev_map[i], RCU_INITIALIZER(dev), NULL));
- if (dev == odev)
+ if (dev == odev) {
call_rcu(&dev->rcu,
__dev_map_entry_free);
+ atomic_dec((atomic_t *)&dtab->items);
+ }
}
}
rcu_read_unlock();
struct htab_elem *elem;
elem = get_htab_elem(htab, i);
- bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8));
+ if (htab_is_percpu(htab)) {
+ void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size);
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
+ cond_resched();
+ }
+ } else {
+ bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8));
+ cond_resched();
+ }
cond_resched();
}
}
static void check_and_free_fields(struct bpf_htab *htab,
struct htab_elem *elem)
{
- void *map_value = elem->key + round_up(htab->map.key_size, 8);
+ if (htab_is_percpu(htab)) {
+ void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size);
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
+ } else {
+ void *map_value = elem->key + round_up(htab->map.key_size, 8);
- bpf_obj_free_fields(htab->map.record, map_value);
+ bpf_obj_free_fields(htab->map.record, map_value);
+ }
}
/* It is called from the bpf_lru_list when the LRU needs to delete
static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
{
+ check_and_free_fields(htab, l);
if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
bpf_mem_cache_free(&htab->pcpu_ma, l->ptr_to_pptr);
- check_and_free_fields(htab, l);
bpf_mem_cache_free(&htab->ma, l);
}
{
if (!onallcpus) {
/* copy true value_size bytes */
- memcpy(this_cpu_ptr(pptr), value, htab->map.value_size);
+ copy_map_value(&htab->map, this_cpu_ptr(pptr), value);
} else {
u32 size = round_up(htab->map.value_size, 8);
int off = 0, cpu;
for_each_possible_cpu(cpu) {
- bpf_long_memcpy(per_cpu_ptr(pptr, cpu),
- value + off, size);
+ copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off);
off += size;
}
}
* (onallcpus=false always when coming from bpf prog).
*/
if (!onallcpus) {
- u32 size = round_up(htab->map.value_size, 8);
int current_cpu = raw_smp_processor_id();
int cpu;
for_each_possible_cpu(cpu) {
if (cpu == current_cpu)
- bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value,
- size);
- else
- memset(per_cpu_ptr(pptr, cpu), 0, size);
+ copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value);
+ else /* Since elem is preallocated, we cannot touch special fields */
+ zero_map_value(&htab->map, per_cpu_ptr(pptr, cpu));
}
} else {
pcpu_copy_value(htab, pptr, value, onallcpus);
pptr = htab_elem_get_ptr(l, key_size);
for_each_possible_cpu(cpu) {
- bpf_long_memcpy(value + off,
- per_cpu_ptr(pptr, cpu),
- roundup_value_size);
+ copy_map_value_long(&htab->map, value + off, per_cpu_ptr(pptr, cpu));
+ check_and_init_map_value(&htab->map, value + off);
off += roundup_value_size;
}
} else {
pptr = htab_elem_get_ptr(l, map->key_size);
for_each_possible_cpu(cpu) {
- bpf_long_memcpy(dst_val + off,
- per_cpu_ptr(pptr, cpu), size);
+ copy_map_value_long(&htab->map, dst_val + off, per_cpu_ptr(pptr, cpu));
+ check_and_init_map_value(&htab->map, dst_val + off);
off += size;
}
} else {
roundup_value_size = round_up(map->value_size, 8);
pptr = htab_elem_get_ptr(elem, map->key_size);
for_each_possible_cpu(cpu) {
- bpf_long_memcpy(info->percpu_value_buf + off,
- per_cpu_ptr(pptr, cpu),
- roundup_value_size);
+ copy_map_value_long(map, info->percpu_value_buf + off,
+ per_cpu_ptr(pptr, cpu));
+ check_and_init_map_value(map, info->percpu_value_buf + off);
off += roundup_value_size;
}
ctx.value = info->percpu_value_buf;
return num_elems;
}
+static u64 htab_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ u32 value_size = round_up(htab->map.value_size, 8);
+ bool prealloc = htab_is_prealloc(htab);
+ bool percpu = htab_is_percpu(htab);
+ bool lru = htab_is_lru(htab);
+ u64 num_entries;
+ u64 usage = sizeof(struct bpf_htab);
+
+ usage += sizeof(struct bucket) * htab->n_buckets;
+ usage += sizeof(int) * num_possible_cpus() * HASHTAB_MAP_LOCK_COUNT;
+ if (prealloc) {
+ num_entries = map->max_entries;
+ if (htab_has_extra_elems(htab))
+ num_entries += num_possible_cpus();
+
+ usage += htab->elem_size * num_entries;
+
+ if (percpu)
+ usage += value_size * num_possible_cpus() * num_entries;
+ else if (!lru)
+ usage += sizeof(struct htab_elem *) * num_possible_cpus();
+ } else {
+#define LLIST_NODE_SZ sizeof(struct llist_node)
+
+ num_entries = htab->use_percpu_counter ?
+ percpu_counter_sum(&htab->pcount) :
+ atomic_read(&htab->count);
+ usage += (htab->elem_size + LLIST_NODE_SZ) * num_entries;
+ if (percpu) {
+ usage += (LLIST_NODE_SZ + sizeof(void *)) * num_entries;
+ usage += value_size * num_possible_cpus() * num_entries;
+ }
+ }
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(htab_map_btf_ids, struct, bpf_htab)
const struct bpf_map_ops htab_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_seq_show_elem = htab_map_seq_show_elem,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_hash_elem,
+ .map_mem_usage = htab_map_mem_usage,
BATCH_OPS(htab),
.map_btf_id = &htab_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
.map_seq_show_elem = htab_map_seq_show_elem,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_hash_elem,
+ .map_mem_usage = htab_map_mem_usage,
BATCH_OPS(htab_lru),
.map_btf_id = &htab_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
*/
pptr = htab_elem_get_ptr(l, map->key_size);
for_each_possible_cpu(cpu) {
- bpf_long_memcpy(value + off,
- per_cpu_ptr(pptr, cpu), size);
+ copy_map_value_long(map, value + off, per_cpu_ptr(pptr, cpu));
+ check_and_init_map_value(map, value + off);
off += size;
}
ret = 0;
.map_seq_show_elem = htab_percpu_map_seq_show_elem,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_hash_elem,
+ .map_mem_usage = htab_map_mem_usage,
BATCH_OPS(htab_percpu),
.map_btf_id = &htab_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
.map_seq_show_elem = htab_percpu_map_seq_show_elem,
.map_set_for_each_callback_args = map_set_for_each_callback_args,
.map_for_each_callback = bpf_for_each_hash_elem,
+ .map_mem_usage = htab_map_mem_usage,
BATCH_OPS(htab_lru_percpu),
.map_btf_id = &htab_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
.map_fd_sys_lookup_elem = bpf_map_fd_sys_lookup_elem,
.map_gen_lookup = htab_of_map_gen_lookup,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = htab_map_mem_usage,
BATCH_OPS(htab),
.map_btf_id = &htab_map_btf_ids[0],
};
{
struct bpf_hrtimer *t;
int ret = 0;
+ enum hrtimer_mode mode;
if (in_nmi())
return -EOPNOTSUPP;
- if (flags)
+ if (flags > BPF_F_TIMER_ABS)
return -EINVAL;
__bpf_spin_lock_irqsave(&timer->lock);
t = timer->timer;
ret = -EINVAL;
goto out;
}
- hrtimer_start(&t->timer, ns_to_ktime(nsecs), HRTIMER_MODE_REL_SOFT);
+
+ if (flags & BPF_F_TIMER_ABS)
+ mode = HRTIMER_MODE_ABS_SOFT;
+ else
+ mode = HRTIMER_MODE_REL_SOFT;
+
+ hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
out:
__bpf_spin_unlock_irqrestore(&timer->lock);
return ret;
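A minimal sketch of a program using the new flag (assuming the usual vmlinux.h
and bpf_helpers.h includes; BPF_F_TIMER_ABS makes nsecs an absolute time on the
clock that was chosen at bpf_timer_init() time):

    struct elem {
            struct bpf_timer t;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, int);
            __type(value, struct elem);
    } timers SEC(".maps");

    static int timer_cb(void *map, int *key, struct elem *val)
    {
            return 0;
    }

    SEC("fentry/inet_listen")
    int start_abs_timer(void *ctx)
    {
            int key = 0;
            struct elem *e = bpf_map_lookup_elem(&timers, &key);

            if (!e)
                    return 0;
            bpf_timer_init(&e->t, &timers, 1 /* CLOCK_MONOTONIC */);
            bpf_timer_set_callback(&e->t, timer_cb);
            /* expire 1 ms from now, expressed as an absolute CLOCK_MONOTONIC time */
            bpf_timer_start(&e->t, bpf_ktime_get_ns() + 1000000, BPF_F_TIMER_ABS);
            return 0;
    }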
return ptr->size & DYNPTR_RDONLY_BIT;
}
+void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr)
+{
+ ptr->size |= DYNPTR_RDONLY_BIT;
+}
+
static void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_type type)
{
ptr->size |= type << DYNPTR_TYPE_SHIFT;
}
+static enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *ptr)
+{
+ return (ptr->size & ~(DYNPTR_RDONLY_BIT)) >> DYNPTR_TYPE_SHIFT;
+}
+
u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
{
return ptr->size & DYNPTR_SIZE_MASK;
BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, const struct bpf_dynptr_kern *, src,
u32, offset, u64, flags)
{
+ enum bpf_dynptr_type type;
int err;
if (!src->data || flags)
if (err)
return err;
- /* Source and destination may possibly overlap, hence use memmove to
- * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
- * pointing to overlapping PTR_TO_MAP_VALUE regions.
- */
- memmove(dst, src->data + src->offset + offset, len);
+ type = bpf_dynptr_get_type(src);
- return 0;
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ /* Source and destination may possibly overlap, hence use memmove to
+ * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
+ * pointing to overlapping PTR_TO_MAP_VALUE regions.
+ */
+ memmove(dst, src->data + src->offset + offset, len);
+ return 0;
+ case BPF_DYNPTR_TYPE_SKB:
+ return __bpf_skb_load_bytes(src->data, src->offset + offset, dst, len);
+ case BPF_DYNPTR_TYPE_XDP:
+ return __bpf_xdp_load_bytes(src->data, src->offset + offset, dst, len);
+ default:
+ WARN_ONCE(true, "bpf_dynptr_read: unknown dynptr type %d\n", type);
+ return -EFAULT;
+ }
}
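The memmove requirement above is easiest to see with a local dynptr whose read
destination overlaps its backing store. A rough sketch (assuming v points into
a map value large enough for the buffer):

    struct val {
            char data[64];
    };

    static __always_inline int shift_bytes(struct val *v)
    {
            struct bpf_dynptr p;

            bpf_dynptr_from_mem(v->data, sizeof(v->data), 0, &p);
            /* dst overlaps the dynptr's backing memory, so a plain memcpy
             * would be undefined; the LOCAL/RINGBUF case uses memmove instead.
             */
            return bpf_dynptr_read(v->data + 8, 32, &p, 0, 0);
    }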
static const struct bpf_func_proto bpf_dynptr_read_proto = {
BPF_CALL_5(bpf_dynptr_write, const struct bpf_dynptr_kern *, dst, u32, offset, void *, src,
u32, len, u64, flags)
{
+ enum bpf_dynptr_type type;
int err;
- if (!dst->data || flags || bpf_dynptr_is_rdonly(dst))
+ if (!dst->data || bpf_dynptr_is_rdonly(dst))
return -EINVAL;
err = bpf_dynptr_check_off_len(dst, offset, len);
if (err)
return err;
- /* Source and destination may possibly overlap, hence use memmove to
- * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
- * pointing to overlapping PTR_TO_MAP_VALUE regions.
- */
- memmove(dst->data + dst->offset + offset, src, len);
+ type = bpf_dynptr_get_type(dst);
- return 0;
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ if (flags)
+ return -EINVAL;
+ /* Source and destination may possibly overlap, hence use memmove to
+ * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
+ * pointing to overlapping PTR_TO_MAP_VALUE regions.
+ */
+ memmove(dst->data + dst->offset + offset, src, len);
+ return 0;
+ case BPF_DYNPTR_TYPE_SKB:
+ return __bpf_skb_store_bytes(dst->data, dst->offset + offset, src, len,
+ flags);
+ case BPF_DYNPTR_TYPE_XDP:
+ if (flags)
+ return -EINVAL;
+ return __bpf_xdp_store_bytes(dst->data, dst->offset + offset, src, len);
+ default:
+ WARN_ONCE(true, "bpf_dynptr_write: unknown dynptr type %d\n", type);
+ return -EFAULT;
+ }
}
static const struct bpf_func_proto bpf_dynptr_write_proto = {
BPF_CALL_3(bpf_dynptr_data, const struct bpf_dynptr_kern *, ptr, u32, offset, u32, len)
{
+ enum bpf_dynptr_type type;
int err;
if (!ptr->data)
if (bpf_dynptr_is_rdonly(ptr))
return 0;
- return (unsigned long)(ptr->data + ptr->offset + offset);
+ type = bpf_dynptr_get_type(ptr);
+
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ return (unsigned long)(ptr->data + ptr->offset + offset);
+ case BPF_DYNPTR_TYPE_SKB:
+ case BPF_DYNPTR_TYPE_XDP:
+ /* skb and xdp dynptrs should use bpf_dynptr_slice / bpf_dynptr_slice_rdwr */
+ return 0;
+ default:
+ WARN_ONCE(true, "bpf_dynptr_data: unknown dynptr type %d\n", type);
+ return 0;
+ }
}
static const struct bpf_func_proto bpf_dynptr_data_proto = {
return &bpf_cgrp_storage_get_proto;
case BPF_FUNC_cgrp_storage_delete:
return &bpf_cgrp_storage_delete_proto;
+ case BPF_FUNC_get_current_cgroup_id:
+ return &bpf_get_current_cgroup_id_proto;
+ case BPF_FUNC_get_current_ancestor_cgroup_id:
+ return &bpf_get_current_ancestor_cgroup_id_proto;
#endif
default:
break;
if (level > cgrp->level || level < 0)
return NULL;
+ /* cgrp's refcnt could be 0 here, but ancestors can still be accessed */
ancestor = cgrp->ancestors[level];
- cgroup_get(ancestor);
+ if (!cgroup_tryget(ancestor))
+ return NULL;
return ancestor;
}
+
+/**
+ * bpf_cgroup_from_id - Find a cgroup from its ID. A cgroup returned by this
+ * kfunc that is not subsequently stored in a map must be released by calling
+ * bpf_cgroup_release().
+ * @cgid: cgroup id.
+ */
+__bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
+{
+ struct cgroup *cgrp;
+
+ cgrp = cgroup_get_from_id(cgid);
+ if (IS_ERR(cgrp))
+ return NULL;
+ return cgrp;
+}
#endif /* CONFIG_CGROUPS */
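A sketch of the intended acquire/use/release pattern (the __ksym declarations
and the tp_btf attach point are illustrative assumptions; any program type the
kfunc is registered for works the same way):

    extern struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
    extern void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

    SEC("tp_btf/sched_switch")
    int probe(void *ctx)
    {
            struct cgroup *cgrp = bpf_cgroup_from_id(bpf_get_current_cgroup_id());

            if (!cgrp)
                    return 0;
            bpf_printk("cgroup level %d", cgrp->level);
            /* required unless the pointer is moved into a map kptr instead */
            bpf_cgroup_release(cgrp);
            return 0;
    }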
/**
return p;
}
+/**
+ * bpf_dynptr_slice() - Obtain a read-only pointer to the dynptr data.
+ * @ptr: The dynptr whose data slice to retrieve
+ * @offset: Offset into the dynptr
+ * @buffer: User-provided buffer to copy contents into
+ * @buffer__szk: Size (in bytes) of the buffer. This is the length of the
+ * requested slice. This must be a constant.
+ *
+ * For non-skb and non-xdp type dynptrs, there is no difference between
+ * bpf_dynptr_slice and bpf_dynptr_data.
+ *
+ * If the intention is to write to the data slice, please use
+ * bpf_dynptr_slice_rdwr.
+ *
+ * The user must check that the returned pointer is not null before using it.
+ *
+ * Please note that in the case of skb and xdp dynptrs, bpf_dynptr_slice
+ * does not change the underlying packet data pointers, so a call to
+ * bpf_dynptr_slice will not invalidate any ctx->data/data_end pointers in
+ * the bpf program.
+ *
+ * Return: NULL if the call failed (e.g. invalid dynptr), otherwise a pointer
+ * to a read-only data slice (either a direct pointer to the data, or a pointer
+ * to the user-provided buffer containing a copy of the data if a direct
+ * pointer could not be obtained)
+ */
+__bpf_kfunc void *bpf_dynptr_slice(const struct bpf_dynptr_kern *ptr, u32 offset,
+ void *buffer, u32 buffer__szk)
+{
+ enum bpf_dynptr_type type;
+ u32 len = buffer__szk;
+ int err;
+
+ if (!ptr->data)
+ return NULL;
+
+ err = bpf_dynptr_check_off_len(ptr, offset, len);
+ if (err)
+ return NULL;
+
+ type = bpf_dynptr_get_type(ptr);
+
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ return ptr->data + ptr->offset + offset;
+ case BPF_DYNPTR_TYPE_SKB:
+ return skb_header_pointer(ptr->data, ptr->offset + offset, len, buffer);
+ case BPF_DYNPTR_TYPE_XDP:
+ {
+ void *xdp_ptr = bpf_xdp_pointer(ptr->data, ptr->offset + offset, len);
+ if (xdp_ptr)
+ return xdp_ptr;
+
+ bpf_xdp_copy_buf(ptr->data, ptr->offset + offset, buffer, len, false);
+ return buffer;
+ }
+ default:
+ WARN_ONCE(true, "unknown dynptr type %d\n", type);
+ return NULL;
+ }
+}
+
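A hedged sketch of the read-only slice pattern described above (bpf_dynptr_from_skb
is assumed to be available as a kfunc from the same series; the usual vmlinux.h,
bpf_helpers.h and bpf_endian.h includes are omitted):

    extern int bpf_dynptr_from_skb(struct __sk_buff *skb, u64 flags,
                                   struct bpf_dynptr *ptr) __ksym;
    extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, u32 offset,
                                  void *buffer, u32 buffer__szk) __ksym;

    SEC("tc")
    int parse(struct __sk_buff *skb)
    {
            struct ethhdr hdr_buf, *eth;
            struct bpf_dynptr ptr;

            if (bpf_dynptr_from_skb(skb, 0, &ptr))
                    return 2; /* TC_ACT_SHOT */
            eth = bpf_dynptr_slice(&ptr, 0, &hdr_buf, sizeof(hdr_buf));
            if (!eth)         /* slice can fail, e.g. offset/len out of bounds */
                    return 2; /* TC_ACT_SHOT */
            /* eth points either into the skb head or into hdr_buf */
            bpf_printk("h_proto %x", bpf_ntohs(eth->h_proto));
            return 0;         /* TC_ACT_OK */
    }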
+/**
+ * bpf_dynptr_slice_rdwr() - Obtain a writable pointer to the dynptr data.
+ * @ptr: The dynptr whose data slice to retrieve
+ * @offset: Offset into the dynptr
+ * @buffer: User-provided buffer to copy contents into
+ * @buffer__szk: Size (in bytes) of the buffer. This is the length of the
+ * requested slice. This must be a constant.
+ *
+ * For non-skb and non-xdp type dynptrs, there is no difference between
+ * bpf_dynptr_slice and bpf_dynptr_data.
+ *
+ * The returned pointer is writable and may point either directly to the dynptr
+ * data at the requested offset or to the buffer if a direct data pointer could
+ * not be obtained (for example, when the requested slice is in the paged area of
+ * an skb packet). In the case where the returned pointer is to the buffer, the user
+ * is responsible for persisting writes through calling bpf_dynptr_write(). This
+ * usually looks something like this pattern:
+ *
+ * struct eth_hdr *eth = bpf_dynptr_slice_rdwr(&dynptr, 0, buffer, sizeof(buffer));
+ * if (!eth)
+ * return TC_ACT_SHOT;
+ *
+ * // mutate eth header //
+ *
+ * if (eth == buffer)
+ * bpf_dynptr_write(&ptr, 0, buffer, sizeof(buffer), 0);
+ *
+ * Please note that, as in the example above, the user must check that the
+ * returned pointer is not null before using it.
+ *
+ * Please also note that in the case of skb and xdp dynptrs, bpf_dynptr_slice_rdwr
+ * does not change the underlying packet data pointers, so a call to
+ * bpf_dynptr_slice_rdwr will not invalidate any ctx->data/data_end pointers in
+ * the bpf program.
+ *
+ * Return: NULL if the call failed (e.g. invalid dynptr), otherwise a pointer
+ * to a writable data slice (either a direct pointer to the data, or a pointer
+ * to the user-provided buffer containing a copy of the data if a direct
+ * pointer could not be obtained)
+ */
+__bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 offset,
+ void *buffer, u32 buffer__szk)
+{
+ if (!ptr->data || bpf_dynptr_is_rdonly(ptr))
+ return NULL;
+
+ /* bpf_dynptr_slice_rdwr is the same logic as bpf_dynptr_slice.
+ *
+ * For skb-type dynptrs, it is safe to write into the returned pointer
+ * if the bpf program allows skb data writes. There are two possibilities
+ * that may occur when calling bpf_dynptr_slice_rdwr:
+ *
+ * 1) The requested slice is in the head of the skb. In this case, the
+ * returned pointer is directly to skb data, and if the skb is cloned, the
+ * verifier will have uncloned it (see bpf_unclone_prologue()) already.
+ * The pointer can be directly written into.
+ *
+ * 2) Some portion of the requested slice is in the paged buffer area.
+ * In this case, the requested data will be copied out into the buffer
+ * and the returned pointer will be a pointer to the buffer. The skb
+ * will not be pulled. To persist the write, the user will need to call
+ * bpf_dynptr_write(), which will pull the skb and commit the write.
+ *
+ * Similarly for xdp programs, if the requested slice is not across xdp
+ * fragments, then a direct pointer will be returned, otherwise the data
+ * will be copied out into the buffer and the user will need to call
+ * bpf_dynptr_write() to commit changes.
+ */
+ return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
+}
+
__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
{
return obj;
BTF_ID_FLAGS(func, bpf_cgroup_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_cgroup_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE)
-BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL)
#endif
BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL)
BTF_SET8_END(generic_btf_ids)
BTF_ID_FLAGS(func, bpf_rdonly_cast)
BTF_ID_FLAGS(func, bpf_rcu_read_lock)
BTF_ID_FLAGS(func, bpf_rcu_read_unlock)
+BTF_ID_FLAGS(func, bpf_dynptr_slice, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
BTF_SET8_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
rcu_read_unlock();
}
+static u64 cgroup_storage_map_usage(const struct bpf_map *map)
+{
+ /* Currently the dynamically allocated elements are not counted. */
+ return sizeof(struct bpf_cgroup_storage_map);
+}
+
BTF_ID_LIST_SINGLE(cgroup_storage_map_btf_ids, struct,
bpf_cgroup_storage_map)
const struct bpf_map_ops cgroup_storage_map_ops = {
.map_delete_elem = cgroup_storage_delete_elem,
.map_check_btf = cgroup_storage_check_btf,
.map_seq_show_elem = cgroup_storage_seq_show_elem,
+ .map_mem_usage = cgroup_storage_map_usage,
.map_btf_id = &cgroup_storage_map_btf_ids[0],
};
-EINVAL : 0;
}
+static u64 trie_mem_usage(const struct bpf_map *map)
+{
+ struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
+ u64 elem_size;
+
+ elem_size = sizeof(struct lpm_trie_node) + trie->data_size +
+ trie->map.value_size;
+ return elem_size * READ_ONCE(trie->n_entries);
+}
+
BTF_ID_LIST_SINGLE(trie_map_btf_ids, struct, lpm_trie)
const struct bpf_map_ops trie_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_update_batch = generic_map_update_batch,
.map_delete_batch = generic_map_delete_batch,
.map_check_btf = trie_check_btf,
+ .map_mem_usage = trie_mem_usage,
.map_btf_id = &trie_map_btf_ids[0],
};
bpf_map_area_free(offmap);
}
+u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map)
+{
+ /* The memory dynamically allocated in netdev dev_ops is not counted */
+ return sizeof(struct bpf_offloaded_map);
+}
+
int bpf_map_offload_lookup_elem(struct bpf_map *map, void *key, void *value)
{
struct bpf_offloaded_map *offmap = map_to_offmap(map);
return -EINVAL;
}
+static u64 queue_stack_map_mem_usage(const struct bpf_map *map)
+{
+ u64 usage = sizeof(struct bpf_queue_stack);
+
+ usage += ((u64)map->max_entries + 1) * map->value_size;
+ return usage;
+}
+
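For example, a BPF_MAP_TYPE_QUEUE with max_entries = 1024 and value_size = 8
reports about sizeof(struct bpf_queue_stack) + (1024 + 1) * 8 bytes; the "+ 1"
mirrors the spare value slot the queue/stack implementation allocates (so that
head == tail can unambiguously mean empty).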
BTF_ID_LIST_SINGLE(queue_map_btf_ids, struct, bpf_queue_stack)
const struct bpf_map_ops queue_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_pop_elem = queue_map_pop_elem,
.map_peek_elem = queue_map_peek_elem,
.map_get_next_key = queue_stack_map_get_next_key,
+ .map_mem_usage = queue_stack_map_mem_usage,
.map_btf_id = &queue_map_btf_ids[0],
};
.map_pop_elem = stack_map_pop_elem,
.map_peek_elem = stack_map_peek_elem,
.map_get_next_key = queue_stack_map_get_next_key,
+ .map_mem_usage = queue_stack_map_mem_usage,
.map_btf_id = &queue_map_btf_ids[0],
};
return 0;
}
+static u64 reuseport_array_mem_usage(const struct bpf_map *map)
+{
+ struct reuseport_array *array;
+
+ return struct_size(array, ptrs, map->max_entries);
+}
+
BTF_ID_LIST_SINGLE(reuseport_array_map_btf_ids, struct, reuseport_array)
const struct bpf_map_ops reuseport_array_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_lookup_elem = reuseport_array_lookup_elem,
.map_get_next_key = reuseport_array_get_next_key,
.map_delete_elem = reuseport_array_delete_elem,
+ .map_mem_usage = reuseport_array_mem_usage,
.map_btf_id = &reuseport_array_map_btf_ids[0],
};
(offsetof(struct bpf_ringbuf, consumer_pos) >> PAGE_SHIFT)
/* consumer page and producer page */
#define RINGBUF_POS_PAGES 2
+#define RINGBUF_NR_META_PAGES (RINGBUF_PGOFF + RINGBUF_POS_PAGES)
#define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
{
const gfp_t flags = GFP_KERNEL_ACCOUNT | __GFP_RETRY_MAYFAIL |
__GFP_NOWARN | __GFP_ZERO;
- int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
+ int nr_meta_pages = RINGBUF_NR_META_PAGES;
int nr_data_pages = data_sz >> PAGE_SHIFT;
int nr_pages = nr_meta_pages + nr_data_pages;
struct page **pages, *page;
return 0;
}
+static u64 ringbuf_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_ringbuf *rb;
+ int nr_data_pages;
+ int nr_meta_pages;
+ u64 usage = sizeof(struct bpf_ringbuf_map);
+
+ rb = container_of(map, struct bpf_ringbuf_map, map)->rb;
+ usage += (u64)rb->nr_pages << PAGE_SHIFT;
+ nr_meta_pages = RINGBUF_NR_META_PAGES;
+ nr_data_pages = map->max_entries >> PAGE_SHIFT;
+ usage += (nr_meta_pages + 2 * nr_data_pages) * sizeof(struct page *);
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(ringbuf_map_btf_ids, struct, bpf_ringbuf_map)
const struct bpf_map_ops ringbuf_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_update_elem = ringbuf_map_update_elem,
.map_delete_elem = ringbuf_map_delete_elem,
.map_get_next_key = ringbuf_map_get_next_key,
+ .map_mem_usage = ringbuf_map_mem_usage,
.map_btf_id = &ringbuf_map_btf_ids[0],
};
.map_update_elem = ringbuf_map_update_elem,
.map_delete_elem = ringbuf_map_delete_elem,
.map_get_next_key = ringbuf_map_get_next_key,
+ .map_mem_usage = ringbuf_map_mem_usage,
.map_btf_id = &user_ringbuf_map_btf_ids[0],
};
put_callchain_buffers();
}
+static u64 stack_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
+ u64 value_size = map->value_size;
+ u64 n_buckets = smap->n_buckets;
+ u64 entries = map->max_entries;
+ u64 usage = sizeof(*smap);
+
+ usage += n_buckets * sizeof(struct stack_map_bucket *);
+ usage += entries * (sizeof(struct stack_map_bucket) + value_size);
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(stack_trace_map_btf_ids, struct, bpf_stack_map)
const struct bpf_map_ops stack_trace_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
.map_update_elem = stack_map_update_elem,
.map_delete_elem = stack_map_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = stack_map_mem_usage,
.map_btf_id = &stack_trace_map_btf_ids[0],
};
.map_alloc = bpf_map_offload_map_alloc,
.map_free = bpf_map_offload_map_free,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = bpf_map_offload_map_mem_usage,
};
static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
}
if (attr->map_ifindex)
ops = &bpf_map_offload_ops;
+ if (!ops->map_mem_usage)
+ return ERR_PTR(-EINVAL);
map = ops->map_alloc(attr);
if (IS_ERR(map))
return map;
}
#ifdef CONFIG_PROC_FS
-/* Provides an approximation of the map's memory footprint.
- * Used only to provide a backward compatibility and display
- * a reasonable "memlock" info.
- */
-static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
+/* Show the memory usage of a bpf map */
+static u64 bpf_map_memory_usage(const struct bpf_map *map)
{
- unsigned long size;
-
- size = round_up(map->key_size + bpf_map_value_size(map), 8);
-
- return round_up(map->max_entries * size, PAGE_SIZE);
+ return map->ops->map_mem_usage(map);
}
static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
"max_entries:\t%u\n"
"map_flags:\t%#x\n"
"map_extra:\t%#llx\n"
- "memlock:\t%lu\n"
+ "memlock:\t%llu\n"
"map_id:\t%u\n"
"frozen:\t%u\n",
map->map_type,
map->max_entries,
map->map_flags,
(unsigned long long)map->map_extra,
- bpf_map_memory_footprint(map),
+ bpf_map_memory_usage(map),
map->id,
READ_ONCE(map->frozen));
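From user space nothing changes procedurally; the "memlock" line in fdinfo is
simply backed by the new callback. A small sketch (assuming libbpf >= 0.7 for
bpf_map_create()):

    #include <stdio.h>
    #include <string.h>
    #include <bpf/bpf.h>

    int main(void)
    {
            char path[64], line[256];
            int fd = bpf_map_create(BPF_MAP_TYPE_HASH, "demo", 4, 8, 1024, NULL);
            FILE *f;

            if (fd < 0)
                    return 1;
            snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
            f = fopen(path, "r");
            while (f && fgets(line, sizeof(line), f))
                    if (!strncmp(line, "memlock:", 8))
                            printf("%s", line);  /* value now comes from ->map_mem_usage() */
            return 0;
    }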
if (type) {
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
if (map->map_type != BPF_MAP_TYPE_HASH &&
+ map->map_type != BPF_MAP_TYPE_PERCPU_HASH &&
map->map_type != BPF_MAP_TYPE_LRU_HASH &&
+ map->map_type != BPF_MAP_TYPE_LRU_PERCPU_HASH &&
map->map_type != BPF_MAP_TYPE_ARRAY &&
- map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) {
+ map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY &&
+ map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
+ map->map_type != BPF_MAP_TYPE_INODE_STORAGE &&
+ map->map_type != BPF_MAP_TYPE_TASK_STORAGE &&
+ map->map_type != BPF_MAP_TYPE_CGRP_STORAGE) {
ret = -EOPNOTSUPP;
goto free_map_tab;
}
u32 ret_btf_id;
u32 subprogno;
struct btf_field *kptr_field;
- u8 uninit_dynptr_regno;
+};
+
+struct bpf_kfunc_call_arg_meta {
+ /* In parameters */
+ struct btf *btf;
+ u32 func_id;
+ u32 kfunc_flags;
+ const struct btf_type *func_proto;
+ const char *func_name;
+ /* Out parameters */
+ u32 ref_obj_id;
+ u8 release_regno;
+ bool r0_rdonly;
+ u32 ret_btf_id;
+ u64 r0_size;
+ u32 subprogno;
+ struct {
+ u64 value;
+ bool found;
+ } arg_constant;
+ struct {
+ struct btf *btf;
+ u32 btf_id;
+ } arg_obj_drop;
+ struct {
+ struct btf_field *field;
+ } arg_list_head;
+ struct {
+ struct btf_field *field;
+ } arg_rbtree_root;
+ struct {
+ enum bpf_dynptr_type type;
+ u32 id;
+ } initialized_dynptr;
+ u64 mem_size;
};
struct btf *btf_vmlinux;
type == PTR_TO_TCP_SOCK ||
type == PTR_TO_MAP_VALUE ||
type == PTR_TO_MAP_KEY ||
- type == PTR_TO_SOCK_COMMON;
+ type == PTR_TO_SOCK_COMMON ||
+ type == PTR_TO_MEM;
}
static bool type_is_ptr_alloc_obj(u32 type)
return spi - nr_slots + 1 >= 0 && spi < allocated_slots;
}
-static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+static int stack_slot_obj_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+ const char *obj_kind, int nr_slots)
{
int off, spi;
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "dynptr has to be at a constant offset\n");
+ verbose(env, "%s has to be at a constant offset\n", obj_kind);
return -EINVAL;
}
off = reg->off + reg->var_off.value;
if (off % BPF_REG_SIZE) {
- verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
+ verbose(env, "cannot pass in %s at an offset=%d\n", obj_kind, off);
return -EINVAL;
}
spi = __get_spi(off);
- if (spi < 1) {
- verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
+ if (spi + 1 < nr_slots) {
+ verbose(env, "cannot pass in %s at an offset=%d\n", obj_kind, off);
return -EINVAL;
}
- if (!is_spi_bounds_valid(func(env, reg), spi, BPF_DYNPTR_NR_SLOTS))
+ if (!is_spi_bounds_valid(func(env, reg), spi, nr_slots))
return -ERANGE;
return spi;
}
+static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+ return stack_slot_obj_get_spi(env, reg, "dynptr", BPF_DYNPTR_NR_SLOTS);
+}
+
static const char *kernel_type_name(const struct btf* btf, u32 id)
{
return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off);
}
+static const char *dynptr_type_str(enum bpf_dynptr_type type)
+{
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ return "local";
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ return "ringbuf";
+ case BPF_DYNPTR_TYPE_SKB:
+ return "skb";
+ case BPF_DYNPTR_TYPE_XDP:
+ return "xdp";
+ case BPF_DYNPTR_TYPE_INVALID:
+ return "<invalid>";
+ default:
+ WARN_ONCE(1, "unknown dynptr type %d\n", type);
+ return "<unknown>";
+ }
+}
+
static void mark_reg_scratched(struct bpf_verifier_env *env, u32 regno)
{
env->scratched_regs |= 1U << regno;
return BPF_DYNPTR_TYPE_LOCAL;
case DYNPTR_TYPE_RINGBUF:
return BPF_DYNPTR_TYPE_RINGBUF;
+ case DYNPTR_TYPE_SKB:
+ return BPF_DYNPTR_TYPE_SKB;
+ case DYNPTR_TYPE_XDP:
+ return BPF_DYNPTR_TYPE_XDP;
default:
return BPF_DYNPTR_TYPE_INVALID;
}
}
+static enum bpf_type_flag get_dynptr_type_flag(enum bpf_dynptr_type type)
+{
+ switch (type) {
+ case BPF_DYNPTR_TYPE_LOCAL:
+ return DYNPTR_TYPE_LOCAL;
+ case BPF_DYNPTR_TYPE_RINGBUF:
+ return DYNPTR_TYPE_RINGBUF;
+ case BPF_DYNPTR_TYPE_SKB:
+ return DYNPTR_TYPE_SKB;
+ case BPF_DYNPTR_TYPE_XDP:
+ return DYNPTR_TYPE_XDP;
+ default:
+ return 0;
+ }
+}
+
static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
{
return type == BPF_DYNPTR_TYPE_RINGBUF;
static void __mark_reg_unknown(const struct bpf_verifier_env *env,
struct bpf_reg_state *reg);
+static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+ if (!env->allow_ptr_leaks)
+ __mark_reg_not_init(env, reg);
+ else
+ __mark_reg_unknown(env, reg);
+}
+
static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
struct bpf_func_state *state, int spi)
{
/* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
continue;
- if (dreg->dynptr_id == dynptr_id) {
- if (!env->allow_ptr_leaks)
- __mark_reg_not_init(env, dreg);
- else
- __mark_reg_unknown(env, dreg);
- }
+ if (dreg->dynptr_id == dynptr_id)
+ mark_reg_invalid(env, dreg);
}));
/* Do not release reference state, we are destroying dynptr on stack,
return 0;
}
-static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int spi)
+static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
{
+ int spi;
+
if (reg->type == CONST_PTR_TO_DYNPTR)
return false;
- /* For -ERANGE (i.e. spi not falling into allocated stack slots), we
- * will do check_mem_access to check and update stack bounds later, so
- * return true for that case.
+ spi = dynptr_get_spi(env, reg);
+
+ /* -ERANGE (i.e. spi not falling into allocated stack slots) isn't an
+ * error because this just means the stack state hasn't been updated yet.
+ * We will do check_mem_access to check and update stack bounds later.
*/
- if (spi < 0)
- return spi == -ERANGE;
- /* We allow overwriting existing unreferenced STACK_DYNPTR slots, see
- * mark_stack_slots_dynptr which calls destroy_if_dynptr_stack_slot to
- * ensure dynptr objects at the slots we are touching are completely
- * destructed before we reinitialize them for a new one. For referenced
- * ones, destroy_if_dynptr_stack_slot returns an error early instead of
- * delaying it until the end where the user will get "Unreleased
+ if (spi < 0 && spi != -ERANGE)
+ return false;
+
+ /* We don't need to check if the stack slots are marked by previous
+ * dynptr initializations because we allow overwriting existing unreferenced
+ * STACK_DYNPTR slots, see mark_stack_slots_dynptr which calls
+ * destroy_if_dynptr_stack_slot to ensure dynptr objects at the slots we are
+ * touching are completely destructed before we reinitialize them for a new
+ * one. For referenced ones, destroy_if_dynptr_stack_slot returns an error early
+ * instead of delaying it until the end where the user will get "Unreleased
* reference" error.
*/
return true;
}
-static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int spi)
+static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
{
struct bpf_func_state *state = func(env, reg);
- int i;
+ int i, spi;
- /* This already represents first slot of initialized bpf_dynptr */
+ /* This already represents first slot of initialized bpf_dynptr.
+ *
+ * CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
+ * check_func_arg_reg_off's logic, so we don't need to check its
+ * offset and alignment.
+ */
if (reg->type == CONST_PTR_TO_DYNPTR)
return true;
+ spi = dynptr_get_spi(env, reg);
if (spi < 0)
return false;
if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
for (j = 0; j < BPF_REG_SIZE; j++) {
if (state->stack[i].slot_type[j] != STACK_INVALID)
valid = true;
- types_buf[j] = slot_type_char[
- state->stack[i].slot_type[j]];
+ types_buf[j] = slot_type_char[state->stack[i].slot_type[j]];
}
types_buf[BPF_REG_SIZE] = 0;
if (!valid)
continue;
if (!print_all && !stack_slot_scratched(env, i))
continue;
- verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
- print_liveness(env, state->stack[i].spilled_ptr.live);
- if (is_spilled_reg(&state->stack[i])) {
+ switch (state->stack[i].slot_type[BPF_REG_SIZE - 1]) {
+ case STACK_SPILL:
reg = &state->stack[i].spilled_ptr;
t = reg->type;
+
+ verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
+ print_liveness(env, reg->live);
verbose(env, "=%s", t == SCALAR_VALUE ? "" : reg_type_str(env, t));
if (t == SCALAR_VALUE && reg->precise)
verbose(env, "P");
if (t == SCALAR_VALUE && tnum_is_const(reg->var_off))
verbose(env, "%lld", reg->var_off.value + reg->off);
- } else {
+ break;
+ case STACK_DYNPTR:
+ i += BPF_DYNPTR_NR_SLOTS - 1;
+ reg = &state->stack[i].spilled_ptr;
+
+ verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
+ print_liveness(env, reg->live);
+ verbose(env, "=dynptr_%s", dynptr_type_str(reg->dynptr.type));
+ if (reg->ref_obj_id)
+ verbose(env, "(ref_id=%d)", reg->ref_obj_id);
+ break;
+ case STACK_MISC:
+ case STACK_ZERO:
+ default:
+ reg = &state->stack[i].spilled_ptr;
+
+ for (j = 0; j < BPF_REG_SIZE; j++)
+ types_buf[j] = slot_type_char[state->stack[i].slot_type[j]];
+ types_buf[BPF_REG_SIZE] = 0;
+
+ verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
+ print_liveness(env, reg->live);
verbose(env, "=%s", types_buf);
+ break;
}
}
if (state->acquired_refs && state->refs[0].id) {
reg->type == PTR_TO_PACKET_END;
}
+static bool reg_is_dynptr_slice_pkt(const struct bpf_reg_state *reg)
+{
+ return base_type(reg->type) == PTR_TO_MEM &&
+ (reg->type & DYNPTR_TYPE_SKB || reg->type & DYNPTR_TYPE_XDP);
+}
+
/* Unmodified PTR_TO_PACKET[_META,_END] register from ctx access. */
static bool reg_is_init_pkt_pointer(const struct bpf_reg_state *reg,
enum bpf_reg_type which)
u8 code = insn[i].code;
if (code == (BPF_JMP | BPF_CALL) &&
- insn[i].imm == BPF_FUNC_tail_call &&
- insn[i].src_reg != BPF_PSEUDO_CALL)
+ insn[i].src_reg == 0 &&
+ insn[i].imm == BPF_FUNC_tail_call)
subprog[cur_subprog].has_tail_call = true;
if (BPF_CLASS(code) == BPF_LD &&
(BPF_MODE(code) == BPF_ABS || BPF_MODE(code) == BPF_IND))
struct bpf_reg_state *reg, u32 regno)
{
const char *targ_name = kernel_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id);
- int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED;
+ int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU;
const char *reg_name = "";
/* Only unreferenced case accepts untrusted pointers */
return -EINVAL;
}
+/* Both non-sleepable programs and sleepable programs with an explicit bpf_rcu_read_lock()
+ * can dereference RCU-protected pointers, and the result is PTR_TRUSTED.
+ */
+static bool in_rcu_cs(struct bpf_verifier_env *env)
+{
+ return env->cur_state->active_rcu_lock || !env->prog->aux->sleepable;
+}
+
+/* Once GCC supports btf_type_tag the following mechanism will be replaced with tag check */
+BTF_SET_START(rcu_protected_types)
+BTF_ID(struct, prog_test_ref_kfunc)
+BTF_ID(struct, cgroup)
+BTF_SET_END(rcu_protected_types)
+
+static bool rcu_protected_object(const struct btf *btf, u32 btf_id)
+{
+ if (!btf_is_kernel(btf))
+ return false;
+ return btf_id_set_contains(&rcu_protected_types, btf_id);
+}
+
+static bool rcu_safe_kptr(const struct btf_field *field)
+{
+ const struct btf_field_kptr *kptr = &field->kptr;
+
+ return field->type == BPF_KPTR_REF && rcu_protected_object(kptr->btf, kptr->btf_id);
+}
+
static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
int value_regno, int insn_idx,
struct btf_field *kptr_field)
* value from map as PTR_TO_BTF_ID, with the correct type.
*/
mark_btf_ld_reg(env, cur_regs(env), value_regno, PTR_TO_BTF_ID, kptr_field->kptr.btf,
- kptr_field->kptr.btf_id, PTR_MAYBE_NULL | PTR_UNTRUSTED);
+ kptr_field->kptr.btf_id,
+ rcu_safe_kptr(kptr_field) && in_rcu_cs(env) ?
+ PTR_MAYBE_NULL | MEM_RCU :
+ PTR_MAYBE_NULL | PTR_UNTRUSTED);
/* For mark_ptr_or_null_reg */
val_reg->id = ++env->id_gen;
} else if (class == BPF_STX) {
return 0;
}
-#define BTF_TYPE_SAFE_NESTED(__type) __PASTE(__type, __safe_fields)
+#define BTF_TYPE_SAFE_RCU(__type) __PASTE(__type, __safe_rcu)
+#define BTF_TYPE_SAFE_TRUSTED(__type) __PASTE(__type, __safe_trusted)
+
+/*
+ * Allow-list a few fields as RCU trusted or fully trusted.
+ * This logic doesn't allow mixed tagging and will be removed once GCC supports
+ * btf_type_tag.
+ */
-BTF_TYPE_SAFE_NESTED(struct task_struct) {
+/* RCU trusted: these fields are trusted in RCU CS and never NULL */
+BTF_TYPE_SAFE_RCU(struct task_struct) {
const cpumask_t *cpus_ptr;
+ struct css_set __rcu *cgroups;
+ struct task_struct __rcu *real_parent;
+ struct task_struct *group_leader;
};
-static bool nested_ptr_is_trusted(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg,
- int off)
+BTF_TYPE_SAFE_RCU(struct css_set) {
+ struct cgroup *dfl_cgrp;
+};
+
+/* fully trusted: these fields are trusted even outside of an RCU CS and never NULL */
+BTF_TYPE_SAFE_TRUSTED(struct bpf_iter_meta) {
+ __bpf_md_ptr(struct seq_file *, seq);
+};
+
+BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task) {
+ __bpf_md_ptr(struct bpf_iter_meta *, meta);
+ __bpf_md_ptr(struct task_struct *, task);
+};
+
+BTF_TYPE_SAFE_TRUSTED(struct linux_binprm) {
+ struct file *file;
+};
+
+BTF_TYPE_SAFE_TRUSTED(struct file) {
+ struct inode *f_inode;
+};
+
+BTF_TYPE_SAFE_TRUSTED(struct dentry) {
+ /* no negative dentries in places where bpf can see them */
+ struct inode *d_inode;
+};
+
+BTF_TYPE_SAFE_TRUSTED(struct socket) {
+ struct sock *sk;
+};
+
+static bool type_is_rcu(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg,
+ int off)
{
- /* If its parent is not trusted, it can't regain its trusted status. */
- if (!is_trusted_reg(reg))
- return false;
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU(struct task_struct));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU(struct css_set));
- BTF_TYPE_EMIT(BTF_TYPE_SAFE_NESTED(struct task_struct));
+ return btf_nested_type_is_trusted(&env->log, reg, off, "__safe_rcu");
+}
- return btf_nested_type_is_trusted(&env->log, reg, off);
+static bool type_is_trusted(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg,
+ int off)
+{
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter_meta));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct linux_binprm));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct file));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct dentry));
+ BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct socket));
+
+ return btf_nested_type_is_trusted(&env->log, reg, off, "__safe_trusted");
}
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
if (ret < 0)
return ret;
- /* If this is an untrusted pointer, all pointers formed by walking it
- * also inherit the untrusted flag.
- */
- if (type_flag(reg->type) & PTR_UNTRUSTED)
- flag |= PTR_UNTRUSTED;
-
- /* By default any pointer obtained from walking a trusted pointer is no
- * longer trusted, unless the field being accessed has explicitly been
- * marked as inheriting its parent's state of trust.
- *
- * An RCU-protected pointer can also be deemed trusted if we are in an
- * RCU read region. This case is handled below.
- */
- if (nested_ptr_is_trusted(env, reg, off))
- flag |= PTR_TRUSTED;
- else
- flag &= ~PTR_TRUSTED;
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
- if (flag & MEM_RCU) {
- /* Mark value register as MEM_RCU only if it is protected by
- * bpf_rcu_read_lock() and the ptr reg is rcu or trusted. MEM_RCU
- * itself can already indicate trustedness inside the rcu
- * read lock region. Also mark rcu pointer as PTR_MAYBE_NULL since
- * it could be null in some cases.
+ } else if (type_flag(reg->type) & PTR_UNTRUSTED) {
+ /* If this is an untrusted pointer, all pointers formed by walking it
+ * also inherit the untrusted flag.
*/
- if (!env->cur_state->active_rcu_lock ||
- !(is_trusted_reg(reg) || is_rcu_reg(reg)))
- flag &= ~MEM_RCU;
- else
- flag |= PTR_MAYBE_NULL;
- } else if (reg->type & MEM_RCU) {
- /* ptr (reg) is marked as MEM_RCU, but the struct field is not tagged
- * with __rcu. Mark the flag as PTR_UNTRUSTED conservatively.
+ flag = PTR_UNTRUSTED;
+
+ } else if (is_trusted_reg(reg) || is_rcu_reg(reg)) {
+ /* By default any pointer obtained from walking a trusted pointer is no
+ * longer trusted, unless the field being accessed has explicitly been
+ * marked as inheriting its parent's state of trust (either full or RCU).
+ * For example:
+ * 'cgroups' pointer is untrusted if task->cgroups dereference
+ * happened in a sleepable program outside of bpf_rcu_read_lock()
+ * section. In a non-sleepable program it's trusted while in RCU CS (aka MEM_RCU).
+ * Note bpf_rcu_read_unlock() converts MEM_RCU pointers to PTR_UNTRUSTED.
+ *
+ * A regular RCU-protected pointer with __rcu tag can also be deemed
+ * trusted if we are in an RCU CS. Such pointer can be NULL.
*/
- flag |= PTR_UNTRUSTED;
+ if (type_is_trusted(env, reg, off)) {
+ flag |= PTR_TRUSTED;
+ } else if (in_rcu_cs(env) && !type_may_be_null(reg->type)) {
+ if (type_is_rcu(env, reg, off)) {
+ /* ignore __rcu tag and mark it MEM_RCU */
+ flag |= MEM_RCU;
+ } else if (flag & MEM_RCU) {
+ /* __rcu tagged pointers can be NULL */
+ flag |= PTR_MAYBE_NULL;
+ } else if (flag & (MEM_PERCPU | MEM_USER)) {
+ /* keep as-is */
+ } else {
+ /* walking unknown pointers yields untrusted pointer */
+ flag = PTR_UNTRUSTED;
+ }
+ } else {
+ /*
+ * If not in RCU CS or MEM_RCU pointer can be NULL then
+ * aggressively mark as untrusted otherwise such
+ * pointers will be plain PTR_TO_BTF_ID without flags
+ * and will be allowed to be passed into helpers for
+ * compat reasons.
+ */
+ flag = PTR_UNTRUSTED;
+ }
+ } else {
+ /* Old compat. Deprecated */
+ flag &= ~PTR_TRUSTED;
}
if (atype == BPF_READ && value_regno >= 0)
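In program terms, the allow-lists above enable patterns like the following
sketch (a sleepable tracing program is assumed; the bpf_rcu_read_lock() and
bpf_rcu_read_unlock() kfuncs are declared via __ksym):

    extern void bpf_rcu_read_lock(void) __ksym;
    extern void bpf_rcu_read_unlock(void) __ksym;

    SEC("fentry.s/inet_listen")
    int on_listen(void *ctx)
    {
            struct task_struct *task = bpf_get_current_task_btf();
            struct cgroup *dfl;

            bpf_rcu_read_lock();
            /* task->cgroups is __rcu: inside the RCU section the walk yields
             * MEM_RCU pointers, and css_set->dfl_cgrp is on the allow-list.
             */
            dfl = task->cgroups->dfl_cgrp;
            bpf_printk("default cgroup level %d", dfl->level);
            bpf_rcu_read_unlock();
            return 0;
    }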
* Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument
* type, and declare it as 'const struct bpf_dynptr *' in their prototype.
*/
-int process_dynptr_func(struct bpf_verifier_env *env, int regno,
- enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta)
+static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
+ enum bpf_arg_type arg_type)
{
struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno];
- int spi = 0;
+ int err;
/* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an
* ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*):
verbose(env, "verifier internal error: misconfigured dynptr helper type flags\n");
return -EFAULT;
}
- /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
- * check_func_arg_reg_off's logic. We only need to check offset
- * and its alignment for PTR_TO_STACK.
- */
- if (reg->type == PTR_TO_STACK) {
- spi = dynptr_get_spi(env, reg);
- if (spi < 0 && spi != -ERANGE)
- return spi;
- }
/* MEM_UNINIT - Points to memory that is an appropriate candidate for
* constructing a mutable bpf_dynptr object.
* to.
*/
if (arg_type & MEM_UNINIT) {
- if (!is_dynptr_reg_valid_uninit(env, reg, spi)) {
+ int i;
+
+ if (!is_dynptr_reg_valid_uninit(env, reg)) {
verbose(env, "Dynptr has to be an uninitialized dynptr\n");
return -EINVAL;
}
- /* We only support one dynptr being uninitialized at the moment,
- * which is sufficient for the helper functions we have right now.
- */
- if (meta->uninit_dynptr_regno) {
- verbose(env, "verifier internal error: multiple uninitialized dynptr args\n");
- return -EFAULT;
+ /* we write BPF_DW bits (8 bytes) at a time */
+ for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
+ err = check_mem_access(env, insn_idx, regno,
+ i, BPF_DW, BPF_WRITE, -1, false);
+ if (err)
+ return err;
}
- meta->uninit_dynptr_regno = regno;
+ err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx);
} else /* MEM_RDONLY and None case from above */ {
- int err;
-
/* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
verbose(env, "cannot pass pointer to const bpf_dynptr, the helper mutates it\n");
return -EINVAL;
}
- if (!is_dynptr_reg_valid_init(env, reg, spi)) {
+ if (!is_dynptr_reg_valid_init(env, reg)) {
verbose(env,
"Expected an initialized dynptr as arg #%d\n",
regno);
/* Fold modifiers (in this case, MEM_RDONLY) when checking expected type */
if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
- const char *err_extra = "";
-
- switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
- case DYNPTR_TYPE_LOCAL:
- err_extra = "local";
- break;
- case DYNPTR_TYPE_RINGBUF:
- err_extra = "ringbuf";
- break;
- default:
- err_extra = "<unknown>";
- break;
- }
verbose(env,
"Expected a dynptr of type %s as arg #%d\n",
- err_extra, regno);
+ dynptr_type_str(arg_to_dynptr_type(arg_type)), regno);
return -EINVAL;
}
err = mark_dynptr_read(env, reg);
- if (err)
- return err;
}
- return 0;
+ return err;
}
static bool arg_type_is_mem_size(enum bpf_arg_type type)
return -EACCES;
found:
- if (reg->type == PTR_TO_BTF_ID || reg->type & PTR_TRUSTED) {
+ if (base_type(reg->type) != PTR_TO_BTF_ID)
+ return 0;
+
+ switch ((int)reg->type) {
+ case PTR_TO_BTF_ID:
+ case PTR_TO_BTF_ID | PTR_TRUSTED:
+ case PTR_TO_BTF_ID | MEM_RCU:
+ {
/* For bpf_sk_release, it needs to match against first member
* 'struct sock_common', hence make an exception for it. This
* allows bpf_sk_release to work for multiple socket types.
return -EACCES;
}
}
- } else if (type_is_alloc(reg->type)) {
+ break;
+ }
+ case PTR_TO_BTF_ID | MEM_ALLOC:
if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock) {
verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
return -EFAULT;
}
+ /* Handled by helper specific checks */
+ break;
+ case PTR_TO_BTF_ID | MEM_PERCPU:
+ case PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED:
+ /* Handled by helper specific checks */
+ break;
+ default:
+ verbose(env, "verifier internal error: invalid PTR_TO_BTF_ID register for type match\n");
+ return -EFAULT;
}
-
return 0;
}
case PTR_TO_BTF_ID | MEM_ALLOC:
case PTR_TO_BTF_ID | PTR_TRUSTED:
case PTR_TO_BTF_ID | MEM_RCU:
- case PTR_TO_BTF_ID | MEM_ALLOC | PTR_TRUSTED:
case PTR_TO_BTF_ID | MEM_ALLOC | NON_OWN_REF:
/* When referenced PTR_TO_BTF_ID is passed to release function,
* its fixed offset must be 0. In the other cases, fixed offset
}
}
+static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
+ const struct bpf_func_proto *fn,
+ struct bpf_reg_state *regs)
+{
+ struct bpf_reg_state *state = NULL;
+ int i;
+
+ for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
+ if (arg_type_is_dynptr(fn->arg_type[i])) {
+ if (state) {
+ verbose(env, "verifier internal error: multiple dynptr args\n");
+ return NULL;
+ }
+ state = &regs[BPF_REG_1 + i];
+ }
+
+ if (!state)
+ verbose(env, "verifier internal error: no dynptr arg found\n");
+
+ return state;
+}
+
static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
{
struct bpf_func_state *state = func(env, reg);
return state->stack[spi].spilled_ptr.ref_obj_id;
}
+static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg)
+{
+ struct bpf_func_state *state = func(env, reg);
+ int spi;
+
+ if (reg->type == CONST_PTR_TO_DYNPTR)
+ return reg->dynptr.type;
+
+ spi = __get_spi(reg->off);
+ if (spi < 0) {
+ verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
+ return BPF_DYNPTR_TYPE_INVALID;
+ }
+
+ return state->stack[spi].spilled_ptr.dynptr.type;
+}
+
static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
struct bpf_call_arg_meta *meta,
- const struct bpf_func_proto *fn)
+ const struct bpf_func_proto *fn,
+ int insn_idx)
{
u32 regno = BPF_REG_1 + arg;
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
err = check_mem_size_reg(env, reg, regno, true, meta);
break;
case ARG_PTR_TO_DYNPTR:
- err = process_dynptr_func(env, regno, arg_type, meta);
+ err = process_dynptr_func(env, regno, insn_idx, arg_type);
if (err)
return err;
break;
break;
case BPF_MAP_TYPE_SK_STORAGE:
if (func_id != BPF_FUNC_sk_storage_get &&
- func_id != BPF_FUNC_sk_storage_delete)
+ func_id != BPF_FUNC_sk_storage_delete &&
+ func_id != BPF_FUNC_kptr_xchg)
goto error;
break;
case BPF_MAP_TYPE_INODE_STORAGE:
if (func_id != BPF_FUNC_inode_storage_get &&
- func_id != BPF_FUNC_inode_storage_delete)
+ func_id != BPF_FUNC_inode_storage_delete &&
+ func_id != BPF_FUNC_kptr_xchg)
goto error;
break;
case BPF_MAP_TYPE_TASK_STORAGE:
if (func_id != BPF_FUNC_task_storage_get &&
- func_id != BPF_FUNC_task_storage_delete)
+ func_id != BPF_FUNC_task_storage_delete &&
+ func_id != BPF_FUNC_kptr_xchg)
goto error;
break;
case BPF_MAP_TYPE_CGRP_STORAGE:
if (func_id != BPF_FUNC_cgrp_storage_get &&
- func_id != BPF_FUNC_cgrp_storage_delete)
+ func_id != BPF_FUNC_cgrp_storage_delete &&
+ func_id != BPF_FUNC_kptr_xchg)
goto error;
break;
case BPF_MAP_TYPE_BLOOM_FILTER:
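The storage-map cases above now accept BPF_FUNC_kptr_xchg alongside the get/delete helpers. A sketch of what that permits from the program side (the map value layout, attach point and the bpf_task_release() kfunc are illustrative assumptions, not prescribed by this hunk):

/* Sketch: a referenced kptr stashed in task-local storage and swapped
 * out with bpf_kptr_xchg(), which the verifier now allows for
 * BPF_MAP_TYPE_TASK_STORAGE (and sk/inode/cgrp storage).
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct map_val {
	struct task_struct __kptr *stashed;
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct map_val);
} task_store SEC(".maps");

void bpf_task_release(struct task_struct *p) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(on_newtask, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *old;
	struct map_val *v;

	v = bpf_task_storage_get(&task_store, task, 0,
				 BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (!v)
		return 0;

	/* Drop whatever was stashed before; previously this helper was
	 * rejected for local storage map types.
	 */
	old = bpf_kptr_xchg(&v->stashed, NULL);
	if (old)
		bpf_task_release(old);
	return 0;
}

char _license[] SEC("license") = "GPL";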
/* Packet data might have moved, any old PTR_TO_PACKET[_META,_END]
* are now invalid, so turn them into unknown SCALAR_VALUE.
+ *
+ * This also applies to dynptr slices belonging to skb and xdp dynptrs,
+ * since these slices point to packet data.
*/
static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
{
struct bpf_reg_state *reg;
bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
- if (reg_is_pkt_pointer_any(reg))
- __mark_reg_unknown(env, reg);
+ if (reg_is_pkt_pointer_any(reg) || reg_is_dynptr_slice_pkt(reg))
+ mark_reg_invalid(env, reg);
}));
}
return err;
bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
- if (reg->ref_obj_id == ref_obj_id) {
- if (!env->allow_ptr_leaks)
- __mark_reg_not_init(env, reg);
- else
- __mark_reg_unknown(env, reg);
- }
+ if (reg->ref_obj_id == ref_obj_id)
+ mark_reg_invalid(env, reg);
}));
return 0;
bpf_for_each_reg_in_vstate(env->cur_state, unused, reg, ({
if (type_is_non_owning_ref(reg->type))
- __mark_reg_unknown(env, reg);
+ mark_reg_invalid(env, reg);
}));
}
meta.func_id = func_id;
/* check args */
for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
- err = check_func_arg(env, i, &meta, fn);
+ err = check_func_arg(env, i, &meta, fn, insn_idx);
if (err)
return err;
}
regs = cur_regs(env);
- /* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
- * be reinitialized by any dynptr helper. Hence, mark_stack_slots_dynptr
- * is safe to do directly.
- */
- if (meta.uninit_dynptr_regno) {
- if (regs[meta.uninit_dynptr_regno].type == CONST_PTR_TO_DYNPTR) {
- verbose(env, "verifier internal error: CONST_PTR_TO_DYNPTR cannot be initialized\n");
- return -EFAULT;
- }
- /* we write BPF_DW bits (8 bytes) at a time */
- for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
- err = check_mem_access(env, insn_idx, meta.uninit_dynptr_regno,
- i, BPF_DW, BPF_WRITE, -1, false);
- if (err)
- return err;
- }
-
- err = mark_stack_slots_dynptr(env, &regs[meta.uninit_dynptr_regno],
- fn->arg_type[meta.uninit_dynptr_regno - BPF_REG_1],
- insn_idx);
- if (err)
- return err;
- }
-
if (meta.release_regno) {
err = -EINVAL;
/* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
}
break;
case BPF_FUNC_dynptr_data:
- for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
- if (arg_type_is_dynptr(fn->arg_type[i])) {
- struct bpf_reg_state *reg = &regs[BPF_REG_1 + i];
- int id, ref_obj_id;
+ {
+ struct bpf_reg_state *reg;
+ int id, ref_obj_id;
- if (meta.dynptr_id) {
- verbose(env, "verifier internal error: meta.dynptr_id already set\n");
- return -EFAULT;
- }
+ reg = get_dynptr_arg_reg(env, fn, regs);
+ if (!reg)
+ return -EFAULT;
- if (meta.ref_obj_id) {
- verbose(env, "verifier internal error: meta.ref_obj_id already set\n");
- return -EFAULT;
- }
- id = dynptr_id(env, reg);
- if (id < 0) {
- verbose(env, "verifier internal error: failed to obtain dynptr id\n");
- return id;
- }
+ if (meta.dynptr_id) {
+ verbose(env, "verifier internal error: meta.dynptr_id already set\n");
+ return -EFAULT;
+ }
+ if (meta.ref_obj_id) {
+ verbose(env, "verifier internal error: meta.ref_obj_id already set\n");
+ return -EFAULT;
+ }
- ref_obj_id = dynptr_ref_obj_id(env, reg);
- if (ref_obj_id < 0) {
- verbose(env, "verifier internal error: failed to obtain dynptr ref_obj_id\n");
- return ref_obj_id;
- }
+ id = dynptr_id(env, reg);
+ if (id < 0) {
+ verbose(env, "verifier internal error: failed to obtain dynptr id\n");
+ return id;
+ }
- meta.dynptr_id = id;
- meta.ref_obj_id = ref_obj_id;
- break;
- }
+ ref_obj_id = dynptr_ref_obj_id(env, reg);
+ if (ref_obj_id < 0) {
+ verbose(env, "verifier internal error: failed to obtain dynptr ref_obj_id\n");
+ return ref_obj_id;
}
- if (i == MAX_BPF_FUNC_REG_ARGS) {
- verbose(env, "verifier internal error: no dynptr in bpf_dynptr_data()\n");
+
+ meta.dynptr_id = id;
+ meta.ref_obj_id = ref_obj_id;
+
+ break;
+ }
+ case BPF_FUNC_dynptr_write:
+ {
+ enum bpf_dynptr_type dynptr_type;
+ struct bpf_reg_state *reg;
+
+ reg = get_dynptr_arg_reg(env, fn, regs);
+ if (!reg)
return -EFAULT;
- }
+
+ dynptr_type = dynptr_get_type(env, reg);
+ if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
+ return -EFAULT;
+
+ if (dynptr_type == BPF_DYNPTR_TYPE_SKB)
+ /* this will trigger clear_all_pkt_pointers(), which will
+ * invalidate all dynptr slices associated with the skb
+ */
+ changes_data = true;
+
break;
+ }
case BPF_FUNC_user_ringbuf_drain:
err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
set_user_ringbuf_callback_state);
}
}
-struct bpf_kfunc_call_arg_meta {
- /* In parameters */
- struct btf *btf;
- u32 func_id;
- u32 kfunc_flags;
- const struct btf_type *func_proto;
- const char *func_name;
- /* Out parameters */
- u32 ref_obj_id;
- u8 release_regno;
- bool r0_rdonly;
- u32 ret_btf_id;
- u64 r0_size;
- u32 subprogno;
- struct {
- u64 value;
- bool found;
- } arg_constant;
- struct {
- struct btf *btf;
- u32 btf_id;
- } arg_obj_drop;
- struct {
- struct btf_field *field;
- } arg_list_head;
- struct {
- struct btf_field *field;
- } arg_rbtree_root;
-};
-
static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta)
{
return meta->kfunc_flags & KF_ACQUIRE;
return __kfunc_param_match_suffix(btf, arg, "__sz");
}
+static bool is_kfunc_arg_const_mem_size(const struct btf *btf,
+ const struct btf_param *arg,
+ const struct bpf_reg_state *reg)
+{
+ const struct btf_type *t;
+
+ t = btf_type_skip_modifiers(btf, arg->type, NULL);
+ if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE)
+ return false;
+
+ return __kfunc_param_match_suffix(btf, arg, "__szk");
+}
+
static bool is_kfunc_arg_constant(const struct btf *btf, const struct btf_param *arg)
{
return __kfunc_param_match_suffix(btf, arg, "__k");
return __kfunc_param_match_suffix(btf, arg, "__alloc");
}
+static bool is_kfunc_arg_uninit(const struct btf *btf, const struct btf_param *arg)
+{
+ return __kfunc_param_match_suffix(btf, arg, "__uninit");
+}
+
static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
const struct btf_param *arg,
const char *name)
KF_bpf_rbtree_remove,
KF_bpf_rbtree_add,
KF_bpf_rbtree_first,
+ KF_bpf_dynptr_from_skb,
+ KF_bpf_dynptr_from_xdp,
+ KF_bpf_dynptr_slice,
+ KF_bpf_dynptr_slice_rdwr,
};
BTF_SET_START(special_kfunc_set)
BTF_ID(func, bpf_rbtree_remove)
BTF_ID(func, bpf_rbtree_add)
BTF_ID(func, bpf_rbtree_first)
+BTF_ID(func, bpf_dynptr_from_skb)
+BTF_ID(func, bpf_dynptr_from_xdp)
+BTF_ID(func, bpf_dynptr_slice)
+BTF_ID(func, bpf_dynptr_slice_rdwr)
BTF_SET_END(special_kfunc_set)
BTF_ID_LIST(special_kfunc_list)
BTF_ID(func, bpf_rbtree_remove)
BTF_ID(func, bpf_rbtree_add)
BTF_ID(func, bpf_rbtree_first)
+BTF_ID(func, bpf_dynptr_from_skb)
+BTF_ID(func, bpf_dynptr_from_xdp)
+BTF_ID(func, bpf_dynptr_slice)
+BTF_ID(func, bpf_dynptr_slice_rdwr)
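The registrations above wire the new skb/xdp dynptr kfuncs and the slice helpers into the verifier's special-kfunc handling. A rough sketch of how a BPF program would consume them (the kfunc prototypes, section name and fallback behaviour shown here are assumptions drawn from this series, not a definitive interface):

/* Sketch: read the Ethernet header via a read-only dynptr slice.
 * buffer__szk must be a known constant, hence the sizeof(buf).
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym;
extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
			      void *buffer, __u32 buffer__szk) __ksym;

SEC("tc")
int read_eth(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	struct ethhdr buf, *eth;

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return 0;

	/* Falls back to copying into 'buf' when the bytes are not linear */
	eth = bpf_dynptr_slice(&ptr, 0, &buf, sizeof(buf));
	if (!eth)
		return 0;

	bpf_printk("h_proto 0x%x", eth->h_proto);
	return 0;
}

char _license[] SEC("license") = "GPL";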
static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta)
{
if (is_kfunc_arg_callback(env, meta->btf, &args[argno]))
return KF_ARG_PTR_TO_CALLBACK;
- if (argno + 1 < nargs && is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]))
+
+ if (argno + 1 < nargs &&
+ (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
+ is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1])))
arg_mem_size = true;
/* This is the catch all argument type of register types supported by
ptr = reg->map_ptr;
break;
case PTR_TO_BTF_ID | MEM_ALLOC:
- case PTR_TO_BTF_ID | MEM_ALLOC | PTR_TRUSTED:
ptr = reg->btf;
break;
default:
&meta->arg_rbtree_root.field);
}
-static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta)
+static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta,
+ int insn_idx)
{
const char *func_name = meta->func_name, *ref_tname;
const struct btf *btf = meta->btf;
return -EINVAL;
}
- if (is_kfunc_trusted_args(meta) &&
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
(register_is_null(reg) || type_may_be_null(reg->type))) {
verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
return -EACCES;
return ret;
break;
case KF_ARG_PTR_TO_DYNPTR:
+ {
+ enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
+
if (reg->type != PTR_TO_STACK &&
reg->type != CONST_PTR_TO_DYNPTR) {
verbose(env, "arg#%d expected pointer to stack or dynptr_ptr\n", i);
return -EINVAL;
}
- ret = process_dynptr_func(env, regno, ARG_PTR_TO_DYNPTR | MEM_RDONLY, NULL);
+ if (reg->type == CONST_PTR_TO_DYNPTR)
+ dynptr_arg_type |= MEM_RDONLY;
+
+ if (is_kfunc_arg_uninit(btf, &args[i]))
+ dynptr_arg_type |= MEM_UNINIT;
+
+ if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb])
+ dynptr_arg_type |= DYNPTR_TYPE_SKB;
+ else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_xdp])
+ dynptr_arg_type |= DYNPTR_TYPE_XDP;
+
+ ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type);
if (ret < 0)
return ret;
+
+ if (!(dynptr_arg_type & MEM_UNINIT)) {
+ int id = dynptr_id(env, reg);
+
+ if (id < 0) {
+ verbose(env, "verifier internal error: failed to obtain dynptr id\n");
+ return id;
+ }
+ meta->initialized_dynptr.id = id;
+ meta->initialized_dynptr.type = dynptr_get_type(env, reg);
+ }
+
break;
+ }
case KF_ARG_PTR_TO_LIST_HEAD:
if (reg->type != PTR_TO_MAP_VALUE &&
reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
return ret;
break;
case KF_ARG_PTR_TO_MEM_SIZE:
- ret = check_kfunc_mem_size_reg(env, &regs[regno + 1], regno + 1);
+ {
+ struct bpf_reg_state *size_reg = &regs[regno + 1];
+ const struct btf_param *size_arg = &args[i + 1];
+
+ ret = check_kfunc_mem_size_reg(env, size_reg, regno + 1);
if (ret < 0) {
verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
return ret;
}
- /* Skip next '__sz' argument */
+
+ if (is_kfunc_arg_const_mem_size(meta->btf, size_arg, size_reg)) {
+ if (meta->arg_constant.found) {
+ verbose(env, "verifier internal error: only one constant argument permitted\n");
+ return -EFAULT;
+ }
+ if (!tnum_is_const(size_reg->var_off)) {
+ verbose(env, "R%d must be a known constant\n", regno + 1);
+ return -EINVAL;
+ }
+ meta->arg_constant.found = true;
+ meta->arg_constant.value = size_reg->var_off.value;
+ }
+
+ /* Skip next '__sz' or '__szk' argument */
i++;
break;
+ }
case KF_ARG_PTR_TO_CALLBACK:
meta->subprogno = reg->subprogno;
break;
rcu_lock = is_kfunc_bpf_rcu_read_lock(&meta);
rcu_unlock = is_kfunc_bpf_rcu_read_unlock(&meta);
- if ((rcu_lock || rcu_unlock) && !env->rcu_tag_supported) {
- verbose(env, "no vmlinux btf rcu tag support for kfunc %s\n", func_name);
- return -EACCES;
- }
if (env->cur_state->active_rcu_lock) {
struct bpf_func_state *state;
}
/* Check the arguments */
- err = check_kfunc_args(env, &meta);
+ err = check_kfunc_args(env, &meta, insn_idx);
if (err < 0)
return err;
/* In case of release function, we get register number of refcounted
regs[BPF_REG_0].type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
regs[BPF_REG_0].btf = desc_btf;
regs[BPF_REG_0].btf_id = meta.arg_constant.value;
+ } else if (meta.func_id == special_kfunc_list[KF_bpf_dynptr_slice] ||
+ meta.func_id == special_kfunc_list[KF_bpf_dynptr_slice_rdwr]) {
+ enum bpf_type_flag type_flag = get_dynptr_type_flag(meta.initialized_dynptr.type);
+
+ mark_reg_known_zero(env, regs, BPF_REG_0);
+
+ if (!meta.arg_constant.found) {
+ verbose(env, "verifier internal error: bpf_dynptr_slice(_rdwr) no constant size\n");
+ return -EFAULT;
+ }
+
+ regs[BPF_REG_0].mem_size = meta.arg_constant.value;
+
+ /* PTR_MAYBE_NULL will be added when is_kfunc_ret_null is checked */
+ regs[BPF_REG_0].type = PTR_TO_MEM | type_flag;
+
+ if (meta.func_id == special_kfunc_list[KF_bpf_dynptr_slice]) {
+ regs[BPF_REG_0].type |= MEM_RDONLY;
+ } else {
+ /* this will set env->seen_direct_write to true */
+ if (!may_access_direct_pkt_data(env, NULL, BPF_WRITE)) {
+ verbose(env, "the prog does not allow writes to packet data\n");
+ return -EINVAL;
+ }
+ }
+
+ if (!meta.initialized_dynptr.id) {
+ verbose(env, "verifier internal error: no dynptr id\n");
+ return -EFAULT;
+ }
+ regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
+
+ /* we don't need to set BPF_REG_0's ref obj id
+ * because packet slices are not refcounted (see
+ * dynptr_type_refcounted)
+ */
} else {
verbose(env, "kernel function %s unhandled dynamic return type\n",
meta.func_name);
}
} else if (!__btf_type_is_struct(ptr_type)) {
if (!meta.r0_size) {
+ __u32 sz;
+
+ if (!IS_ERR(btf_resolve_size(desc_btf, ptr_type, &sz))) {
+ meta.r0_size = sz;
+ meta.r0_rdonly = true;
+ }
+ }
+ if (!meta.r0_size) {
ptr_type_name = btf_name_by_offset(desc_btf,
ptr_type->name_off);
verbose(env,
*/
static int visit_insn(int t, struct bpf_verifier_env *env)
{
- struct bpf_insn *insns = env->prog->insnsi;
+ struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
int ret;
- if (bpf_pseudo_func(insns + t))
+ if (bpf_pseudo_func(insn))
return visit_func_call_insn(t, insns, env, true);
/* All non-branch instructions have a single fall-through edge. */
- if (BPF_CLASS(insns[t].code) != BPF_JMP &&
- BPF_CLASS(insns[t].code) != BPF_JMP32)
+ if (BPF_CLASS(insn->code) != BPF_JMP &&
+ BPF_CLASS(insn->code) != BPF_JMP32)
return push_insn(t, t + 1, FALLTHROUGH, env, false);
- switch (BPF_OP(insns[t].code)) {
+ switch (BPF_OP(insn->code)) {
case BPF_EXIT:
return DONE_EXPLORING;
case BPF_CALL:
- if (insns[t].imm == BPF_FUNC_timer_set_callback)
+ if (insn->src_reg == 0 && insn->imm == BPF_FUNC_timer_set_callback)
/* Mark this call insn as a prune point to trigger
* is_state_visited() check before call itself is
* processed by __check_func_call(). Otherwise new
* async state will be pushed for further exploration.
*/
mark_prune_point(env, t);
- return visit_func_call_insn(t, insns, env,
- insns[t].src_reg == BPF_PSEUDO_CALL);
+ return visit_func_call_insn(t, insns, env, insn->src_reg == BPF_PSEUDO_CALL);
case BPF_JA:
- if (BPF_SRC(insns[t].code) != BPF_K)
+ if (BPF_SRC(insn->code) != BPF_K)
return -EINVAL;
/* unconditional jump with single edge */
- ret = push_insn(t, t + insns[t].off + 1, FALLTHROUGH, env,
+ ret = push_insn(t, t + insn->off + 1, FALLTHROUGH, env,
true);
if (ret)
return ret;
- mark_prune_point(env, t + insns[t].off + 1);
- mark_jmp_point(env, t + insns[t].off + 1);
+ mark_prune_point(env, t + insn->off + 1);
+ mark_jmp_point(env, t + insn->off + 1);
return ret;
if (ret)
return ret;
- return push_insn(t, t + insns[t].off + 1, BRANCH, env, true);
+ return push_insn(t, t + insn->off + 1, BRANCH, env, true);
}
}
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_MAP_KEY:
case PTR_TO_MAP_VALUE:
+ case PTR_TO_MEM:
+ case PTR_TO_BUF:
+ case PTR_TO_TP_BUFFER:
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, var_off)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off) &&
- check_ids(rold->id, rcur->id, idmap);
+ check_ids(rold->id, rcur->id, idmap) &&
+ check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
/* We must have at least as much range as the old ptr
* This threshold shouldn't be too high either, since states
* at the end of the loop are likely to be useful in pruning.
*/
- if (env->jmps_processed - env->prev_jmps_processed < 20 &&
+ if (!env->test_state_freq &&
+ env->jmps_processed - env->prev_jmps_processed < 20 &&
env->insn_processed - env->prev_insn_processed < 100)
add_new_state = false;
goto miss;
!reg_type_mismatch_ok(prev));
}
+static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
+ bool allow_trust_mismatch)
+{
+ enum bpf_reg_type *prev_type = &env->insn_aux_data[env->insn_idx].ptr_type;
+
+ if (*prev_type == NOT_INIT) {
+ /* Saw a valid insn
+ * dst_reg = *(u32 *)(src_reg + off)
+ * save type to validate intersecting paths
+ */
+ *prev_type = type;
+ } else if (reg_type_mismatch(type, *prev_type)) {
+ /* Abuser program is trying to use the same insn
+ * dst_reg = *(u32*) (src_reg + off)
+ * with different pointer types:
+ * src_reg == ctx in one branch and
+ * src_reg == stack|map in some other branch.
+ * Reject it.
+ */
+ if (allow_trust_mismatch &&
+ base_type(type) == PTR_TO_BTF_ID &&
+ base_type(*prev_type) == PTR_TO_BTF_ID) {
+ /*
+ * Have to support a use case when one path through
+ * the program yields a TRUSTED pointer while another
+ * is UNTRUSTED. Fall back to UNTRUSTED to generate
+ * BPF_PROBE_MEM.
+ */
+ *prev_type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
+ } else {
+ verbose(env, "same insn cannot be used with different pointers\n");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
static int do_check(struct bpf_verifier_env *env)
{
bool pop_log = !(env->log.level & BPF_LOG_LEVEL2);
return err;
} else if (class == BPF_LDX) {
- enum bpf_reg_type *prev_src_type, src_reg_type;
+ enum bpf_reg_type src_reg_type;
/* check for reserved fields is already done */
if (err)
return err;
- prev_src_type = &env->insn_aux_data[env->insn_idx].ptr_type;
-
- if (*prev_src_type == NOT_INIT) {
- /* saw a valid insn
- * dst_reg = *(u32 *)(src_reg + off)
- * save type to validate intersecting paths
- */
- *prev_src_type = src_reg_type;
-
- } else if (reg_type_mismatch(src_reg_type, *prev_src_type)) {
- /* ABuser program is trying to use the same insn
- * dst_reg = *(u32*) (src_reg + off)
- * with different pointer types:
- * src_reg == ctx in one branch and
- * src_reg == stack|map in some other branch.
- * Reject it.
- */
- verbose(env, "same insn cannot be used with different pointers\n");
- return -EINVAL;
- }
-
+ err = save_aux_ptr_type(env, src_reg_type, true);
+ if (err)
+ return err;
} else if (class == BPF_STX) {
- enum bpf_reg_type *prev_dst_type, dst_reg_type;
+ enum bpf_reg_type dst_reg_type;
if (BPF_MODE(insn->code) == BPF_ATOMIC) {
err = check_atomic(env, env->insn_idx, insn);
if (err)
return err;
- prev_dst_type = &env->insn_aux_data[env->insn_idx].ptr_type;
-
- if (*prev_dst_type == NOT_INIT) {
- *prev_dst_type = dst_reg_type;
- } else if (reg_type_mismatch(dst_reg_type, *prev_dst_type)) {
- verbose(env, "same insn cannot be used with different pointers\n");
- return -EINVAL;
- }
-
+ err = save_aux_ptr_type(env, dst_reg_type, false);
+ if (err)
+ return err;
} else if (class == BPF_ST) {
+ enum bpf_reg_type dst_reg_type;
+
if (BPF_MODE(insn->code) != BPF_MEM ||
insn->src_reg != BPF_REG_0) {
verbose(env, "BPF_ST uses reserved fields\n");
if (err)
return err;
- if (is_ctx_reg(env, insn->dst_reg)) {
- verbose(env, "BPF_ST stores into R%d %s is not allowed\n",
- insn->dst_reg,
- reg_type_str(env, reg_state(env, insn->dst_reg)->type));
- return -EACCES;
- }
+ dst_reg_type = regs[insn->dst_reg].type;
/* check that memory (dst_reg + off) is writeable */
err = check_mem_access(env, env->insn_idx, insn->dst_reg,
if (err)
return err;
+ err = save_aux_ptr_type(env, dst_reg_type, false);
+ if (err)
+ return err;
} else if (class == BPF_JMP || class == BPF_JMP32) {
u8 opcode = BPF_OP(insn->code);
err = check_helper_call(env, insn, &env->insn_idx);
if (err)
return err;
+
+ mark_reg_scratched(env, BPF_REG_0);
} else if (opcode == BPF_JA) {
if (BPF_SRC(insn->code) != BPF_K ||
insn->imm != 0 ||
for (i = 0; i < insn_cnt; i++, insn++) {
bpf_convert_ctx_access_t convert_ctx_access;
- bool ctx_access;
if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
insn->code == (BPF_LDX | BPF_MEM | BPF_DW)) {
type = BPF_READ;
- ctx_access = true;
} else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
insn->code == (BPF_STX | BPF_MEM | BPF_W) ||
insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
type = BPF_WRITE;
- ctx_access = BPF_CLASS(insn->code) == BPF_STX;
} else {
continue;
}
continue;
}
- if (!ctx_access)
- continue;
-
switch ((int)env->insn_aux_data[i + delta].ptr_type) {
case PTR_TO_CTX:
if (!ops->convert_ctx_access)
desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
*cnt = 1;
+ } else if (desc->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb]) {
+ bool seen_direct_write = env->seen_direct_write;
+ bool is_rdonly = !may_access_direct_pkt_data(env, NULL, BPF_WRITE);
+
+ if (is_rdonly)
+ insn->imm = BPF_CALL_IMM(bpf_dynptr_from_skb_rdonly);
+
+ /* restore env->seen_direct_write to its original value, since
+ * may_access_direct_pkt_data mutates it
+ */
+ env->seen_direct_write = seen_direct_write;
}
return 0;
}
env->bypass_spec_v1 = bpf_bypass_spec_v1();
env->bypass_spec_v4 = bpf_bypass_spec_v4();
env->bpf_capable = bpf_capable();
- env->rcu_tag_supported = btf_vmlinux &&
- btf_find_by_name_kind(btf_vmlinux, "rcu", BTF_KIND_TYPE_TAG) > 0;
if (is_priv)
env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ;
NULL : &bpf_probe_read_compat_str_proto;
#endif
#ifdef CONFIG_CGROUPS
- case BPF_FUNC_get_current_cgroup_id:
- return &bpf_get_current_cgroup_id_proto;
- case BPF_FUNC_get_current_ancestor_cgroup_id:
- return &bpf_get_current_ancestor_cgroup_id_proto;
case BPF_FUNC_cgrp_storage_get:
return &bpf_cgrp_storage_get_proto;
case BPF_FUNC_cgrp_storage_delete:
list_sort.o uuid.o iov_iter.o clz_ctz.o \
bsearch.o find_bit.o llist.o memweight.o kfifo.o \
percpu-refcount.o rhashtable.o base64.o \
- once.o refcount.o usercopy.o errseq.o bucket_locks.o \
+ once.o refcount.o rcuref.o usercopy.o errseq.o bucket_locks.o \
generic-radix-tree.o
obj-$(CONFIG_STRING_SELFTEST) += test_string.o
obj-y += string_helpers.o
}
#endif
+static int get_free_index(struct cpu_rmap *rmap)
+{
+ int i;
+
+ for (i = 0; i < rmap->size; i++)
+ if (!rmap->obj[i])
+ return i;
+
+ return -ENOSPC;
+}
+
/**
* cpu_rmap_add - add object to a rmap
* @rmap: CPU rmap allocated with alloc_cpu_rmap()
* @obj: Object to add to rmap
*
- * Return index of object.
+ * Return index of object or -ENOSPC if no free entry was found
*/
int cpu_rmap_add(struct cpu_rmap *rmap, void *obj)
{
- u16 index;
+ int index = get_free_index(rmap);
+
+ if (index < 0)
+ return index;
- BUG_ON(rmap->used >= rmap->size);
- index = rmap->used++;
rmap->obj[index] = obj;
return index;
}
if (!rmap)
return;
- for (index = 0; index < rmap->used; index++) {
+ for (index = 0; index < rmap->size; index++) {
glue = rmap->obj[index];
- irq_set_affinity_notifier(glue->notify.irq, NULL);
+ if (glue)
+ irq_set_affinity_notifier(glue->notify.irq, NULL);
}
cpu_rmap_put(rmap);
container_of(ref, struct irq_glue, notify.kref);
cpu_rmap_put(glue->rmap);
+ glue->rmap->obj[glue->index] = NULL;
kfree(glue);
}
/**
+ * irq_cpu_rmap_remove - remove an IRQ from a CPU affinity reverse-map
+ * @rmap: The reverse-map
+ * @irq: The IRQ number
+ */
+int irq_cpu_rmap_remove(struct cpu_rmap *rmap, int irq)
+{
+ return irq_set_affinity_notifier(irq, NULL);
+}
+EXPORT_SYMBOL(irq_cpu_rmap_remove);
+
+/**
* irq_cpu_rmap_add - add an IRQ to a CPU affinity reverse-map
* @rmap: The reverse-map
* @irq: The IRQ number
glue->notify.release = irq_cpu_rmap_release;
glue->rmap = rmap;
cpu_rmap_get(rmap);
- glue->index = cpu_rmap_add(rmap, glue);
+ rc = cpu_rmap_add(rmap, glue);
+ if (rc < 0)
+ goto err_add;
+
+ glue->index = rc;
rc = irq_set_affinity_notifier(irq, &glue->notify);
- if (rc) {
- cpu_rmap_put(glue->rmap);
- kfree(glue);
- }
+ if (rc)
+ goto err_set;
+
+ return rc;
+
+err_set:
+ rmap->obj[glue->index] = NULL;
+err_add:
+ cpu_rmap_put(glue->rmap);
+ kfree(glue);
return rc;
}
EXPORT_SYMBOL(irq_cpu_rmap_add);
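Since cpu_rmap_add() can now return -ENOSPC and irq_cpu_rmap_remove() detaches a single notifier, callers are expected to handle failures instead of relying on the old BUG_ON(). A hypothetical driver-side fragment (the foo_* structure is invented for illustration; alloc_irq_cpu_rmap()/free_irq_cpu_rmap() are the existing allocation helpers):

#include <linux/cpu_rmap.h>
#include <linux/interrupt.h>

struct foo_priv {			/* hypothetical driver state */
	struct cpu_rmap	*rmap;
	int		num_vectors;
	int		irq[8];
};

static int foo_setup_rmap(struct foo_priv *priv)
{
	int i, err;

	priv->rmap = alloc_irq_cpu_rmap(priv->num_vectors);
	if (!priv->rmap)
		return -ENOMEM;

	for (i = 0; i < priv->num_vectors; i++) {
		/* Can now fail with -ENOSPC instead of BUG()ing */
		err = irq_cpu_rmap_add(priv->rmap, priv->irq[i]);
		if (err) {
			free_irq_cpu_rmap(priv->rmap);
			priv->rmap = NULL;
			return err;
		}
	}
	return 0;
}

static void foo_detach_one(struct foo_priv *priv, int i)
{
	/* New helper: drop the notifier for one IRQ without freeing the map */
	irq_cpu_rmap_remove(priv->rmap, priv->irq[i]);
}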
}
EXPORT_SYMBOL(packing);
-MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Generic bitfield packing and unpacking");
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * rcuref - A scalable reference count implementation for RCU managed objects
+ *
+ * rcuref is provided to replace open coded reference count implementations
+ * based on atomic_t. It protects explicitly RCU managed objects which can
+ * be visible even after the last reference has been dropped and the object
+ * is heading towards destruction.
+ *
+ * A common usage pattern is:
+ *
+ * get()
+ * rcu_read_lock();
+ * p = get_ptr();
+ * if (p && !atomic_inc_not_zero(&p->refcnt))
+ * p = NULL;
+ * rcu_read_unlock();
+ * return p;
+ *
+ * put()
+ * if (!atomic_dec_return(&p->refcnt)) {
+ * remove_ptr(p);
+ * kfree_rcu(p, rcu);
+ * }
+ *
+ * atomic_inc_not_zero() is implemented with a try_cmpxchg() loop which has
+ * O(N^2) behaviour under contention with N concurrent operations.
+ *
+ * rcuref uses atomic_add_negative_relaxed() for the fast path, which scales
+ * better under contention.
+ *
+ * Why not refcount?
+ * =================
+ *
+ * In principle it should be possible to make refcount use the rcuref
+ * scheme, but the destruction race described below cannot be prevented
+ * unless the protected object is RCU managed.
+ *
+ * Theory of operation
+ * ===================
+ *
+ * rcuref uses an unsigned integer reference counter. As long as the
+ * counter value is greater than or equal to RCUREF_ONEREF and not larger
+ * than RCUREF_MAXREF the reference is alive:
+ *
+ * ONEREF MAXREF SATURATED RELEASED DEAD NOREF
+ * 0 0x7FFFFFFF 0x80000000 0xA0000000 0xBFFFFFFF 0xC0000000 0xE0000000 0xFFFFFFFF
+ * <---valid --------> <-------saturation zone-------> <-----dead zone----->
+ *
+ * The get() and put() operations do unconditional increments and
+ * decrements. The result is checked after the operation. This optimizes
+ * for the fast path.
+ *
+ * If the reference count is saturated or dead, then the increments and
+ * decrements are not harmful as the reference count still stays in the
+ * respective zones and is always set back to SATURATED resp. DEAD. The
+ * zones have room for 2^28 racing operations in each direction, which
+ * makes it practically impossible to escape the zones.
+ *
+ * Once the last reference is dropped the reference count becomes
+ * RCUREF_NOREF which forces rcuref_put() into the slowpath operation. The
+ * slowpath then tries to set the reference count from RCUREF_NOREF to
+ * RCUREF_DEAD via a cmpxchg(). This opens a small window where a
+ * concurrent rcuref_get() can acquire the reference count and bring it
+ * back to RCUREF_ONEREF or even drop the reference again and mark it DEAD.
+ *
+ * If the cmpxchg() succeeds then a concurrent rcuref_get() will result in
+ * DEAD + 1, which is inside the dead zone. If that happens the reference
+ * count is put back to DEAD.
+ *
+ * The actual race is possible due to the unconditional increment and
+ * decrements in rcuref_get() and rcuref_put():
+ *
+ * T1 T2
+ * get() put()
+ * if (atomic_add_negative(-1, &ref->refcnt))
+ * succeeds-> atomic_cmpxchg(&ref->refcnt, NOREF, DEAD);
+ *
+ * atomic_add_negative(1, &ref->refcnt); <- Elevates refcount to DEAD + 1
+ *
+ * As the result of T1's add is negative, the get() goes into the slow path
+ * and observes refcnt being in the dead zone which makes the operation fail.
+ *
+ * Possible critical states:
+ *
+ * Context Counter References Operation
+ * T1 0 1 init()
+ * T2 1 2 get()
+ * T1 0 1 put()
+ * T2 -1 0 put() tries to mark dead
+ * T1 0 1 get()
+ * T2 0 1 put() mark dead fails
+ * T1 -1 0 put() tries to mark dead
+ * T1 DEAD 0 put() mark dead succeeds
+ * T2 DEAD+1 0 get() fails and puts it back to DEAD
+ *
+ * Of course there are more complex scenarios, but the above illustrates
+ * the working principle. The rest is left to the imagination of the
+ * reader.
+ *
+ * Deconstruction race
+ * ===================
+ *
+ * The release operation must be protected by prohibiting a grace period in
+ * order to prevent a possible use after free:
+ *
+ * T1 T2
+ * put() get()
+ * // ref->refcnt = ONEREF
+ * if (!atomic_add_negative(-1, &ref->refcnt))
+ * return false; <- Not taken
+ *
+ * // ref->refcnt == NOREF
+ * --> preemption
+ * // Elevates ref->refcnt to ONEREF
+ * if (!atomic_add_negative(1, &ref->refcnt))
+ * return true; <- taken
+ *
+ * if (put(&p->ref)) { <-- Succeeds
+ * remove_pointer(p);
+ * kfree_rcu(p, rcu);
+ * }
+ *
+ * RCU grace period ends, object is freed
+ *
+ * atomic_cmpxchg(&ref->refcnt, NOREF, DEAD); <- UAF
+ *
+ * This is prevented by disabling preemption around the put() operation as
+ * that's in most kernel configurations cheaper than a rcu_read_lock() /
+ * rcu_read_unlock() pair and in many cases even a NOOP. In any case it
+ * prevents the grace period which keeps the object alive until all put()
+ * operations complete.
+ *
+ * Saturation protection
+ * =====================
+ *
+ * The reference count has a saturation limit RCUREF_MAXREF (INT_MAX).
+ * Once this is exceeded the reference count becomes stale by setting it
+ * to RCUREF_SATURATED, which will cause a memory leak, but it prevents
+ * wrap arounds which obviously cause worse problems than a memory
+ * leak. When saturation is reached a warning is emitted.
+ *
+ * Race conditions
+ * ===============
+ *
+ * All reference count increment/decrement operations are unconditional and
+ * only verified after the fact. This optimizes for the good case and takes
+ * the occasional race vs. a dead or already saturated refcount into
+ * account. The saturation and dead zones are large enough to accommodate
+ * for that.
+ *
+ * Memory ordering
+ * ===============
+ *
+ * Memory ordering rules are slightly relaxed wrt regular atomic_t functions
+ * and provide only what is strictly required for refcounts.
+ *
+ * The increments are fully relaxed; these will not provide ordering. The
+ * rationale is that whatever is used to obtain the object to increase the
+ * reference count on will provide the ordering. For locked data
+ * structures, it's the lock acquire; for RCU/lockless data structures it's
+ * the dependent load.
+ *
+ * rcuref_get() provides a control dependency ordering future stores which
+ * ensures that the object is not modified when acquiring a reference
+ * fails.
+ *
+ * rcuref_put() provides release order, i.e. all prior loads and stores
+ * will be issued before. It also provides a control dependency ordering
+ * against the subsequent destruction of the object.
+ *
+ * If rcuref_put() successfully dropped the last reference and marked the
+ * object DEAD it also provides acquire ordering.
+ */
+
+#include <linux/export.h>
+#include <linux/rcuref.h>
+
+/**
+ * rcuref_get_slowpath - Slowpath of rcuref_get()
+ * @ref: Pointer to the reference count
+ *
+ * Invoked when the reference count is outside of the valid zone.
+ *
+ * Return:
+ * False if the reference count was already marked dead
+ *
+ * True if the reference count is saturated, which prevents the
+ * object from being deconstructed ever.
+ */
+bool rcuref_get_slowpath(rcuref_t *ref)
+{
+ unsigned int cnt = atomic_read(&ref->refcnt);
+
+ /*
+ * If the reference count was already marked dead, undo the
+ * increment so it stays in the middle of the dead zone and return
+ * fail.
+ */
+ if (cnt >= RCUREF_RELEASED) {
+ atomic_set(&ref->refcnt, RCUREF_DEAD);
+ return false;
+ }
+
+ /*
+ * If it was saturated, warn and mark it so. In case the increment
+ * was already on a saturated value restore the saturation
+ * marker. This keeps it in the middle of the saturation zone and
+ * prevents the reference count from overflowing. This leaks the
+ * object memory, but prevents the obvious reference count overflow
+ * damage.
+ */
+ if (WARN_ONCE(cnt > RCUREF_MAXREF, "rcuref saturated - leaking memory"))
+ atomic_set(&ref->refcnt, RCUREF_SATURATED);
+ return true;
+}
+EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
+
+/**
+ * rcuref_put_slowpath - Slowpath of __rcuref_put()
+ * @ref: Pointer to the reference count
+ *
+ * Invoked when the reference count is outside of the valid zone.
+ *
+ * Return:
+ * True if this was the last reference with no future references
+ * possible. This signals the caller that it can safely schedule the
+ * object, which is protected by the reference counter, for
+ * deconstruction.
+ *
+ * False if there are still active references or the put() raced
+ * with a concurrent get()/put() pair. Caller is not allowed to
+ * deconstruct the protected object.
+ */
+bool rcuref_put_slowpath(rcuref_t *ref)
+{
+ unsigned int cnt = atomic_read(&ref->refcnt);
+
+ /* Did this drop the last reference? */
+ if (likely(cnt == RCUREF_NOREF)) {
+ /*
+ * Carefully try to set the reference count to RCUREF_DEAD.
+ *
+ * This can fail if a concurrent get() operation has
+ * elevated it again or the corresponding put() even marked
+ * it dead already. Both are valid situations and do not
+ * require a retry. If this fails the caller is not
+ * allowed to deconstruct the object.
+ */
+ if (atomic_cmpxchg_release(&ref->refcnt, RCUREF_NOREF, RCUREF_DEAD) != RCUREF_NOREF)
+ return false;
+
+ /*
+ * The caller can safely schedule the object for
+ * deconstruction. Provide acquire ordering.
+ */
+ smp_acquire__after_ctrl_dep();
+ return true;
+ }
+
+ /*
+ * If the reference count was already in the dead zone, then this
+ * put() operation is imbalanced. Warn, put the reference count back to
+ * DEAD and tell the caller to not deconstruct the object.
+ */
+ if (WARN_ONCE(cnt >= RCUREF_RELEASED, "rcuref - imbalanced put()")) {
+ atomic_set(&ref->refcnt, RCUREF_DEAD);
+ return false;
+ }
+
+ /*
+ * This is a put() operation on a saturated refcount. Restore the
+ * mean saturation value and tell the caller to not deconstruct the
+ * object.
+ */
+ if (cnt > RCUREF_MAXREF)
+ atomic_set(&ref->refcnt, RCUREF_SATURATED);
+ return false;
+}
+EXPORT_SYMBOL_GPL(rcuref_put_slowpath);
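Taken together, the new API replaces the open-coded atomic_t pattern quoted in the header comment above. A minimal sketch of the intended usage, assuming an RCU-managed object and the rcuref_init()/rcuref_get()/rcuref_put() wrappers from the new <linux/rcuref.h> header (the gadget structure and global pointer are made up for illustration):

#include <linux/rcuref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct gadget {				/* hypothetical RCU-managed object */
	rcuref_t	ref;
	struct rcu_head	rcu;
};

static struct gadget __rcu *the_gadget;

static int gadget_install(void)
{
	struct gadget *g = kzalloc(sizeof(*g), GFP_KERNEL);

	if (!g)
		return -ENOMEM;
	rcuref_init(&g->ref, 1);		/* start with one reference */
	rcu_assign_pointer(the_gadget, g);
	return 0;
}

static struct gadget *gadget_get(void)
{
	struct gadget *g;

	rcu_read_lock();
	g = rcu_dereference(the_gadget);
	/* Fails only once the object is already marked dead, so a racing
	 * release cannot resurrect it.
	 */
	if (g && !rcuref_get(&g->ref))
		g = NULL;
	rcu_read_unlock();
	return g;
}

static void gadget_put(struct gadget *g)
{
	/* rcuref_put() returns true only for the final reference; the
	 * object must then be unpublished and freed after a grace period.
	 */
	if (rcuref_put(&g->ref)) {
		rcu_assign_pointer(the_gadget, NULL);
		kfree_rcu(g, rcu);
	}
}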
const struct lowpan_iphc_ctx *ctx,
const unsigned char *lladdr, bool sam)
{
- struct in6_addr tmp = {};
+ struct in6_addr tmp;
u8 dam;
switch (lowpan_dev(dev)->lltype) {
switch (cmd) {
case SIOCSHWTSTAMP:
- if (!net_eq(dev_net(dev), &init_net))
+ if (!net_eq(dev_net(dev), dev_net(real_dev)))
break;
fallthrough;
case SIOCGMIIPHY:
network device refcount are using per cpu variables if this option is set.
This can be forced to N to detect underflows (with a performance drop).
+config MAX_SKB_FRAGS
+ int "Maximum number of fragments per skb_shared_info"
+ range 17 45
+ default 17
+ help
+ Having more fragments per skb_shared_info can help GRO efficiency.
+ This helps BIG TCP workloads, but might expose bugs in some
+ legacy drivers.
+ This also increases memory overhead of small packets,
+ and in drivers using build_skb().
+ If unsure, say 17.
+
config RPS
bool
depends on SMP && SYSFS
break;
case as_addparty:
case as_dropparty:
- sk->sk_err_soft = -msg->reply;
+ WRITE_ONCE(sk->sk_err_soft, -msg->reply);
/* < 0 failure, otherwise ep_ref */
clear_bit(ATM_VF_WAITING, &vcc->flags);
break;
__bpf_kfunc void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p)
{
+ /* p != NULL, but p->cnt could be 0 */
}
__bpf_kfunc void bpf_kfunc_call_test_destructive(void)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
-BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
BTF_SET8_END(test_sk_check_kfunc_ids)
if (n) {
struct net_bridge_fdb_entry *f;
- if (!(n->nud_state & NUD_VALID)) {
+ if (!(READ_ONCE(n->nud_state) & NUD_VALID)) {
neigh_release(n);
return;
}
if (n) {
struct net_bridge_fdb_entry *f;
- if (!(n->nud_state & NUD_VALID)) {
+ if (!(READ_ONCE(n->nud_state) & NUD_VALID)) {
neigh_release(n);
return;
}
.ndo_fdb_del_bulk = br_fdb_delete_bulk,
.ndo_fdb_dump = br_fdb_dump,
.ndo_fdb_get = br_fdb_get,
+ .ndo_mdb_add = br_mdb_add,
+ .ndo_mdb_del = br_mdb_del,
+ .ndo_mdb_dump = br_mdb_dump,
.ndo_bridge_getlink = br_getlink,
.ndo_bridge_setlink = br_setlink,
.ndo_bridge_dellink = br_dellink,
return err;
}
-static int br_mdb_valid_dump_req(const struct nlmsghdr *nlh,
- struct netlink_ext_ack *extack)
+int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
+ struct netlink_callback *cb)
{
+ struct net_bridge *br = netdev_priv(dev);
struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
- if (nlh->nlmsg_len < nlmsg_msg_size(sizeof(*bpm))) {
- NL_SET_ERR_MSG_MOD(extack, "Invalid header for mdb dump request");
- return -EINVAL;
- }
+ nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, RTM_GETMDB, sizeof(*bpm),
+ NLM_F_MULTI);
+ if (!nlh)
+ return -EMSGSIZE;
bpm = nlmsg_data(nlh);
- if (bpm->ifindex) {
- NL_SET_ERR_MSG_MOD(extack, "Filtering by device index is not supported for mdb dump request");
- return -EINVAL;
- }
- if (nlmsg_attrlen(nlh, sizeof(*bpm))) {
- NL_SET_ERR_MSG(extack, "Invalid data after header in mdb dump request");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int br_mdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
-{
- struct net_device *dev;
- struct net *net = sock_net(skb->sk);
- struct nlmsghdr *nlh = NULL;
- int idx = 0, s_idx;
-
- if (cb->strict_check) {
- int err = br_mdb_valid_dump_req(cb->nlh, cb->extack);
-
- if (err < 0)
- return err;
- }
-
- s_idx = cb->args[0];
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->ifindex = dev->ifindex;
rcu_read_lock();
- for_each_netdev_rcu(net, dev) {
- if (netif_is_bridge_master(dev)) {
- struct net_bridge *br = netdev_priv(dev);
- struct br_port_msg *bpm;
-
- if (idx < s_idx)
- goto skip;
-
- nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid,
- cb->nlh->nlmsg_seq, RTM_GETMDB,
- sizeof(*bpm), NLM_F_MULTI);
- if (nlh == NULL)
- break;
-
- bpm = nlmsg_data(nlh);
- memset(bpm, 0, sizeof(*bpm));
- bpm->ifindex = dev->ifindex;
- if (br_mdb_fill_info(skb, cb, dev) < 0)
- goto out;
- if (br_rports_fill_info(skb, &br->multicast_ctx) < 0)
- goto out;
-
- cb->args[1] = 0;
- nlmsg_end(skb, nlh);
- skip:
- idx++;
- }
- }
+ err = br_mdb_fill_info(skb, cb, dev);
+ if (err)
+ goto out;
+ err = br_rports_fill_info(skb, &br->multicast_ctx);
+ if (err)
+ goto out;
out:
- if (nlh)
- nlmsg_end(skb, nlh);
rcu_read_unlock();
- cb->args[0] = idx;
- return skb->len;
+ nlmsg_end(skb, nlh);
+ return err;
}
static int nlmsg_populate_mdb_fill(struct sk_buff *skb,
[MDBE_ATTR_RTPROT] = NLA_POLICY_MIN(NLA_U8, RTPROT_STATIC),
};
-static int validate_mdb_entry(const struct nlattr *attr,
- struct netlink_ext_ack *extack)
-{
- struct br_mdb_entry *entry = nla_data(attr);
-
- if (nla_len(attr) != sizeof(struct br_mdb_entry)) {
- NL_SET_ERR_MSG_MOD(extack, "Invalid MDBA_SET_ENTRY attribute length");
- return -EINVAL;
- }
-
- if (entry->ifindex == 0) {
- NL_SET_ERR_MSG_MOD(extack, "Zero entry ifindex is not allowed");
- return -EINVAL;
- }
-
- if (entry->addr.proto == htons(ETH_P_IP)) {
- if (!ipv4_is_multicast(entry->addr.u.ip4)) {
- NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is not multicast");
- return -EINVAL;
- }
- if (ipv4_is_local_multicast(entry->addr.u.ip4)) {
- NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is local multicast");
- return -EINVAL;
- }
-#if IS_ENABLED(CONFIG_IPV6)
- } else if (entry->addr.proto == htons(ETH_P_IPV6)) {
- if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) {
- NL_SET_ERR_MSG_MOD(extack, "IPv6 entry group address is link-local all nodes");
- return -EINVAL;
- }
-#endif
- } else if (entry->addr.proto == 0) {
- /* L2 mdb */
- if (!is_multicast_ether_addr(entry->addr.u.mac_addr)) {
- NL_SET_ERR_MSG_MOD(extack, "L2 entry group is not multicast");
- return -EINVAL;
- }
- } else {
- NL_SET_ERR_MSG_MOD(extack, "Unknown entry protocol");
- return -EINVAL;
- }
-
- if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) {
- NL_SET_ERR_MSG_MOD(extack, "Unknown entry state");
- return -EINVAL;
- }
- if (entry->vid >= VLAN_VID_MASK) {
- NL_SET_ERR_MSG_MOD(extack, "Invalid entry VLAN id");
- return -EINVAL;
- }
-
- return 0;
-}
-
static bool is_valid_mdb_source(struct nlattr *attr, __be16 proto,
struct netlink_ext_ack *extack)
{
return 0;
}
-static const struct nla_policy mdba_policy[MDBA_SET_ENTRY_MAX + 1] = {
- [MDBA_SET_ENTRY_UNSPEC] = { .strict_start_type = MDBA_SET_ENTRY_ATTRS + 1 },
- [MDBA_SET_ENTRY] = NLA_POLICY_VALIDATE_FN(NLA_BINARY,
- validate_mdb_entry,
- sizeof(struct br_mdb_entry)),
- [MDBA_SET_ENTRY_ATTRS] = { .type = NLA_NESTED },
-};
-
-static int br_mdb_config_init(struct net *net, const struct nlmsghdr *nlh,
- struct br_mdb_config *cfg,
+static int br_mdb_config_init(struct br_mdb_config *cfg, struct net_device *dev,
+ struct nlattr *tb[], u16 nlmsg_flags,
struct netlink_ext_ack *extack)
{
- struct nlattr *tb[MDBA_SET_ENTRY_MAX + 1];
- struct br_port_msg *bpm;
- struct net_device *dev;
- int err;
-
- err = nlmsg_parse_deprecated(nlh, sizeof(*bpm), tb,
- MDBA_SET_ENTRY_MAX, mdba_policy, extack);
- if (err)
- return err;
+ struct net *net = dev_net(dev);
memset(cfg, 0, sizeof(*cfg));
cfg->filter_mode = MCAST_EXCLUDE;
cfg->rt_protocol = RTPROT_STATIC;
- cfg->nlflags = nlh->nlmsg_flags;
-
- bpm = nlmsg_data(nlh);
- if (!bpm->ifindex) {
- NL_SET_ERR_MSG_MOD(extack, "Invalid bridge ifindex");
- return -EINVAL;
- }
-
- dev = __dev_get_by_index(net, bpm->ifindex);
- if (!dev) {
- NL_SET_ERR_MSG_MOD(extack, "Bridge device doesn't exist");
- return -ENODEV;
- }
-
- if (!netif_is_bridge_master(dev)) {
- NL_SET_ERR_MSG_MOD(extack, "Device is not a bridge");
- return -EOPNOTSUPP;
- }
+ cfg->nlflags = nlmsg_flags;
cfg->br = netdev_priv(dev);
return -EINVAL;
}
- if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_SET_ENTRY)) {
- NL_SET_ERR_MSG_MOD(extack, "Missing MDBA_SET_ENTRY attribute");
- return -EINVAL;
- }
-
cfg->entry = nla_data(tb[MDBA_SET_ENTRY]);
if (cfg->entry->ifindex != cfg->br->dev->ifindex) {
}
}
+ if (cfg->entry->addr.proto == htons(ETH_P_IP) &&
+ ipv4_is_zeronet(cfg->entry->addr.u.ip4)) {
+ NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address 0.0.0.0 is not allowed");
+ return -EINVAL;
+ }
+
if (tb[MDBA_SET_ENTRY_ATTRS])
return br_mdb_config_attrs_init(tb[MDBA_SET_ENTRY_ATTRS], cfg,
extack);
br_mdb_config_src_list_fini(cfg);
}
-static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
- struct netlink_ext_ack *extack)
+int br_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
+ struct netlink_ext_ack *extack)
{
- struct net *net = sock_net(skb->sk);
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *v;
struct br_mdb_config cfg;
int err;
- err = br_mdb_config_init(net, nlh, &cfg, extack);
+ err = br_mdb_config_init(&cfg, dev, tb, nlmsg_flags, extack);
if (err)
return err;
return err;
}
-static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
- struct netlink_ext_ack *extack)
+int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack)
{
- struct net *net = sock_net(skb->sk);
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *v;
struct br_mdb_config cfg;
int err;
- err = br_mdb_config_init(net, nlh, &cfg, extack);
+ err = br_mdb_config_init(&cfg, dev, tb, 0, extack);
if (err)
return err;
br_mdb_config_fini(&cfg);
return err;
}
-
-void br_mdb_init(void)
-{
- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETMDB, NULL, br_mdb_dump, 0);
- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWMDB, br_mdb_add, NULL, 0);
- rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELMDB, br_mdb_del, NULL, 0);
-}
-
-void br_mdb_uninit(void)
-{
- rtnl_unregister(PF_BRIDGE, RTM_GETMDB);
- rtnl_unregister(PF_BRIDGE, RTM_NEWMDB);
- rtnl_unregister(PF_BRIDGE, RTM_DELMDB);
-}
struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
int ret;
- if ((neigh->nud_state & NUD_CONNECTED) && neigh->hh.hh_len) {
+ if ((READ_ONCE(neigh->nud_state) & NUD_CONNECTED) &&
+ READ_ONCE(neigh->hh.hh_len)) {
neigh_hh_bridge(&neigh->hh, skb);
skb->dev = nf_bridge->physindev;
ret = br_handle_frame_finish(net, sk, skb);
#include <linux/sysctl.h>
#endif
-/* We only check the length. A bridge shouldn't do any hop-by-hop stuff
- * anyway
- */
-static int br_nf_check_hbh_len(struct sk_buff *skb)
-{
- unsigned char *raw = (u8 *)(ipv6_hdr(skb) + 1);
- u32 pkt_len;
- const unsigned char *nh = skb_network_header(skb);
- int off = raw - nh;
- int len = (raw[1] + 1) << 3;
-
- if ((raw + len) - skb->data > skb_headlen(skb))
- goto bad;
-
- off += 2;
- len -= 2;
-
- while (len > 0) {
- int optlen = nh[off + 1] + 2;
-
- switch (nh[off]) {
- case IPV6_TLV_PAD1:
- optlen = 1;
- break;
-
- case IPV6_TLV_PADN:
- break;
-
- case IPV6_TLV_JUMBO:
- if (nh[off + 1] != 4 || (off & 3) != 2)
- goto bad;
- pkt_len = ntohl(*(__be32 *)(nh + off + 2));
- if (pkt_len <= IPV6_MAXPLEN ||
- ipv6_hdr(skb)->payload_len)
- goto bad;
- if (pkt_len > skb->len - sizeof(struct ipv6hdr))
- goto bad;
- if (pskb_trim_rcsum(skb,
- pkt_len + sizeof(struct ipv6hdr)))
- goto bad;
- nh = skb_network_header(skb);
- break;
- default:
- if (optlen > len)
- goto bad;
- break;
- }
- off += optlen;
- len -= optlen;
- }
- if (len == 0)
- return 0;
-bad:
- return -1;
-}
-
int br_validate_ipv6(struct net *net, struct sk_buff *skb)
{
const struct ipv6hdr *hdr;
goto inhdr_error;
pkt_len = ntohs(hdr->payload_len);
+ if (hdr->nexthdr == NEXTHDR_HOP && nf_ip6_check_hbh_len(skb, &pkt_len))
+ goto drop;
- if (pkt_len || hdr->nexthdr != NEXTHDR_HOP) {
- if (pkt_len + ip6h_len > skb->len) {
- __IP6_INC_STATS(net, idev,
- IPSTATS_MIB_INTRUNCATEDPKTS);
- goto drop;
- }
- if (pskb_trim_rcsum(skb, pkt_len + ip6h_len)) {
- __IP6_INC_STATS(net, idev,
- IPSTATS_MIB_INDISCARDS);
- goto drop;
- }
- hdr = ipv6_hdr(skb);
+ if (pkt_len + ip6h_len > skb->len) {
+ __IP6_INC_STATS(net, idev,
+ IPSTATS_MIB_INTRUNCATEDPKTS);
+ goto drop;
}
- if (hdr->nexthdr == NEXTHDR_HOP && br_nf_check_hbh_len(skb))
+ if (pskb_trim_rcsum(skb, pkt_len + ip6h_len)) {
+ __IP6_INC_STATS(net, idev,
+ IPSTATS_MIB_INDISCARDS);
goto drop;
+ }
memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm));
/* No IP options in IPv6 header; however it should be
{
int err;
- br_mdb_init();
br_vlan_rtnl_init();
rtnl_af_register(&br_af_ops);
out_af:
rtnl_af_unregister(&br_af_ops);
- br_mdb_uninit();
return err;
}
void br_netlink_fini(void)
{
- br_mdb_uninit();
br_vlan_rtnl_uninit();
rtnl_af_unregister(&br_af_ops);
rtnl_link_unregister(&br_link_ops);
{
struct rtable *rt = &br->fake_rtable;
- atomic_set(&rt->dst.__refcnt, 1);
+ rcuref_init(&rt->dst.__rcuref, 1);
rt->dst.dev = br->dev;
dst_init_metrics(&rt->dst, br_dst_default_metrics, true);
rt->dst.flags = DST_NOXFRM | DST_FAKE_RTABLE;
u32 br_multicast_ngroups_get(const struct net_bridge_mcast_port *pmctx);
void br_multicast_ngroups_set_max(struct net_bridge_mcast_port *pmctx, u32 max);
u32 br_multicast_ngroups_get_max(const struct net_bridge_mcast_port *pmctx);
-void br_mdb_init(void);
-void br_mdb_uninit(void);
+int br_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
+ struct netlink_ext_ack *extack);
+int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack);
+int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
+ struct netlink_callback *cb);
void br_multicast_host_join(const struct net_bridge_mcast *brmctx,
struct net_bridge_mdb_entry *mp, bool notify);
void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify);
return false;
}
-static inline void br_mdb_init(void)
+static inline int br_mdb_add(struct net_device *dev, struct nlattr *tb[],
+ u16 nlmsg_flags, struct netlink_ext_ack *extack)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack)
{
+ return -EOPNOTSUPP;
}
-static inline void br_mdb_uninit(void)
+static inline int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
+ struct netlink_callback *cb)
{
+ return 0;
}
static inline int br_mdb_hash_init(struct net_bridge *br)
#include <net/netfilter/nf_tables.h>
#include <net/netfilter/nft_meta.h>
#include <linux/if_bridge.h>
+#include <uapi/linux/netfilter_bridge.h> /* NF_BR_PRE_ROUTING */
+
+#include "../br_private.h"
static const struct net_device *
nft_meta_get_bridge(const struct net_device *dev)
.reduce = nft_meta_get_reduce,
};
+static void nft_meta_bridge_set_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+{
+ const struct nft_meta *meta = nft_expr_priv(expr);
+ u32 *sreg = &regs->data[meta->sreg];
+ struct sk_buff *skb = pkt->skb;
+ u8 value8;
+
+ switch (meta->key) {
+ case NFT_META_BRI_BROUTE:
+ value8 = nft_reg_load8(sreg);
+ BR_INPUT_SKB_CB(skb)->br_netfilter_broute = !!value8;
+ break;
+ default:
+ nft_meta_set_eval(expr, regs, pkt);
+ }
+}
+
+static int nft_meta_bridge_set_init(const struct nft_ctx *ctx,
+ const struct nft_expr *expr,
+ const struct nlattr * const tb[])
+{
+ struct nft_meta *priv = nft_expr_priv(expr);
+ unsigned int len;
+ int err;
+
+ priv->key = ntohl(nla_get_be32(tb[NFTA_META_KEY]));
+ switch (priv->key) {
+ case NFT_META_BRI_BROUTE:
+ len = sizeof(u8);
+ break;
+ default:
+ return nft_meta_set_init(ctx, expr, tb);
+ }
+
+ priv->len = len;
+ err = nft_parse_register_load(tb[NFTA_META_SREG], &priv->sreg, len);
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+
static bool nft_meta_bridge_set_reduce(struct nft_regs_track *track,
const struct nft_expr *expr)
{
return false;
}
+static int nft_meta_bridge_set_validate(const struct nft_ctx *ctx,
+ const struct nft_expr *expr,
+ const struct nft_data **data)
+{
+ struct nft_meta *priv = nft_expr_priv(expr);
+ unsigned int hooks;
+
+ switch (priv->key) {
+ case NFT_META_BRI_BROUTE:
+ hooks = 1 << NF_BR_PRE_ROUTING;
+ break;
+ default:
+ return nft_meta_set_validate(ctx, expr, data);
+ }
+
+ return nft_chain_validate_hooks(ctx->chain, hooks);
+}
+
static const struct nft_expr_ops nft_meta_bridge_set_ops = {
.type = &nft_meta_bridge_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_meta)),
- .eval = nft_meta_set_eval,
- .init = nft_meta_set_init,
+ .eval = nft_meta_bridge_set_eval,
+ .init = nft_meta_bridge_set_init,
.destroy = nft_meta_set_destroy,
.dump = nft_meta_set_dump,
.reduce = nft_meta_bridge_set_reduce,
- .validate = nft_meta_set_validate,
+ .validate = nft_meta_bridge_set_validate,
};
static const struct nft_expr_ops *
/* ISO 15765-2:2016 supports more than 4095 byte per ISO PDU as the FF_DL can
* take full 32 bit values (4 Gbyte). We would need some good concept to handle
- * this between user space and kernel space. For now increase the static buffer
- * to something about 64 kbyte to be able to test this new functionality.
+ * this between user space and kernel space. For now set the static buffer to
+ * something about 8 kbyte to be able to test this new functionality.
*/
-#define MAX_MSG_LENGTH 66000
+#define DEFAULT_MAX_PDU_SIZE 8300
+
+/* maximum PDU size before ISO 15765-2:2016 extension was 4095 */
+#define MAX_12BIT_PDU_SIZE 4095
+
+/* limit the isotp pdu size from the optional module parameter to 1MByte */
+#define MAX_PDU_SIZE (1025 * 1024U)
+
+static unsigned int max_pdu_size __read_mostly = DEFAULT_MAX_PDU_SIZE;
+module_param(max_pdu_size, uint, 0444);
+MODULE_PARM_DESC(max_pdu_size, "maximum isotp pdu size (default "
+ __stringify(DEFAULT_MAX_PDU_SIZE) ")");
/* N_PCI type values in bits 7-4 of N_PCI bytes */
#define N_PCI_SF 0x00 /* single frame */
};
struct tpcon {
- unsigned int idx;
+ u8 *buf;
+ unsigned int buflen;
unsigned int len;
+ unsigned int idx;
u32 state;
u8 bs;
u8 sn;
u8 ll_dl;
- u8 buf[MAX_MSG_LENGTH + 1];
+ u8 sbuf[DEFAULT_MAX_PDU_SIZE];
};
struct isotp_sock {
if (so->rx.len + ae + off + ff_pci_sz < so->rx.ll_dl)
return 1;
- if (so->rx.len > MAX_MSG_LENGTH) {
+ /* PDU size > default => try max_pdu_size */
+ if (so->rx.len > so->rx.buflen && so->rx.buflen < max_pdu_size) {
+ u8 *newbuf = kmalloc(max_pdu_size, GFP_ATOMIC);
+
+ if (newbuf) {
+ so->rx.buf = newbuf;
+ so->rx.buflen = max_pdu_size;
+ }
+ }
+
+ if (so->rx.len > so->rx.buflen) {
/* send FC frame with overflow status */
isotp_send_fc(sk, ae, ISOTP_FC_OVFLW);
return 1;
cf->data[0] = so->opt.ext_address;
/* create N_PCI bytes with 12/32 bit FF_DL data length */
- if (so->tx.len > 4095) {
+ if (so->tx.len > MAX_12BIT_PDU_SIZE) {
/* use 32 bit FF_DL notation */
cf->data[ae] = N_PCI_FF;
cf->data[ae + 1] = 0;
goto wait_free_buffer;
}
- if (!size || size > MAX_MSG_LENGTH) {
+ /* PDU size > default => try max_pdu_size */
+ if (size > so->tx.buflen && so->tx.buflen < max_pdu_size) {
+ u8 *newbuf = kmalloc(max_pdu_size, GFP_KERNEL);
+
+ if (newbuf) {
+ so->tx.buf = newbuf;
+ so->tx.buflen = max_pdu_size;
+ }
+ }
+
+ if (!size || size > so->tx.buflen) {
err = -EINVAL;
goto err_out_drop;
}
so->ifindex = 0;
so->bound = 0;
+ if (so->rx.buf != so->rx.sbuf)
+ kfree(so->rx.buf);
+
+ if (so->tx.buf != so->tx.sbuf)
+ kfree(so->tx.buf);
+
sock_orphan(sk);
sock->sk = NULL;
so->rx.state = ISOTP_IDLE;
so->tx.state = ISOTP_IDLE;
+ so->rx.buf = so->rx.sbuf;
+ so->tx.buf = so->tx.sbuf;
+ so->rx.buflen = ARRAY_SIZE(so->rx.sbuf);
+ so->tx.buflen = ARRAY_SIZE(so->tx.sbuf);
+
hrtimer_init(&so->rxtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
so->rxtimer.function = isotp_rx_timer_handler;
hrtimer_init(&so->txtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
{
int err;
- pr_info("can: isotp protocol\n");
+ max_pdu_size = max_t(unsigned int, max_pdu_size, MAX_12BIT_PDU_SIZE);
+ max_pdu_size = min_t(unsigned int, max_pdu_size, MAX_PDU_SIZE);
+
+ pr_info("can: isotp protocol (max_pdu_size %d)\n", max_pdu_size);
err = can_proto_register(&isotp_can_proto);
if (err < 0)
.map_local_storage_charge = bpf_sk_storage_charge,
.map_local_storage_uncharge = bpf_sk_storage_uncharge,
.map_owner_storage_ptr = bpf_sk_storage_ptr,
+ .map_mem_usage = bpf_local_storage_map_mem_usage,
};
const struct bpf_func_proto bpf_sk_storage_get_proto = {
frag = skb_shinfo(skb)->nr_frags;
while (length && iov_iter_count(from)) {
+ struct page *head, *last_head = NULL;
struct page *pages[MAX_SKB_FRAGS];
- struct page *last_head = NULL;
+ int refs, order, n = 0;
size_t start;
ssize_t copied;
unsigned long truesize;
- int refs, n = 0;
if (frag == MAX_SKB_FRAGS)
return -EMSGSIZE;
} else {
refcount_add(truesize, &skb->sk->sk_wmem_alloc);
}
+
+ head = compound_head(pages[n]);
+ order = compound_order(head);
+
for (refs = 0; copied != 0; start = 0) {
int size = min_t(int, copied, PAGE_SIZE - start);
- struct page *head = compound_head(pages[n]);
+
+ if (pages[n] - head > (1UL << order) - 1) {
+ head = compound_head(pages[n]);
+ order = compound_order(head);
+ }
start += (pages[n] - head) << PAGE_SHIFT;
copied -= size;
struct list_head ptype_all __read_mostly; /* Taps */
static int netif_rx_internal(struct sk_buff *skb);
-static int call_netdevice_notifiers_info(unsigned long val,
- struct netdev_notifier_info *info);
static int call_netdevice_notifiers_extack(unsigned long val,
struct net_device *dev,
struct netlink_ext_ack *extack);
N(SVLAN_FILTER_PUSH_INFO) N(SVLAN_FILTER_DROP_INFO)
N(PRE_CHANGEADDR) N(OFFLOAD_XSTATS_ENABLE) N(OFFLOAD_XSTATS_DISABLE)
N(OFFLOAD_XSTATS_REPORT_USED) N(OFFLOAD_XSTATS_REPORT_DELTA)
- N(XDP_FEAT_CHANGE)
+ N(XDP_FEAT_CHANGE) N(PRE_CHANGE_HWTSTAMP)
}
#undef N
return "UNKNOWN_NETDEV_EVENT";
* are as for raw_notifier_call_chain().
*/
-static int call_netdevice_notifiers_info(unsigned long val,
- struct netdev_notifier_info *info)
+int call_netdevice_notifiers_info(unsigned long val,
+ struct netdev_notifier_info *info)
{
struct net *net = dev_net(info->dev);
int ret;
struct xps_map *map, *new_map;
unsigned int nr_ids;
+ WARN_ON_ONCE(index >= dev->num_tx_queues);
+
if (dev->num_tc) {
/* Do not allow XPS on subordinate device directly */
num_tc = dev->num_tc;
EXPORT_SYMBOL(__netif_schedule);
struct dev_kfree_skb_cb {
- enum skb_free_reason reason;
+ enum skb_drop_reason reason;
};
static struct dev_kfree_skb_cb *get_kfree_skb_cb(const struct sk_buff *skb)
}
EXPORT_SYMBOL(netif_tx_wake_queue);
-void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason)
+void dev_kfree_skb_irq_reason(struct sk_buff *skb, enum skb_drop_reason reason)
{
unsigned long flags;
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_restore(flags);
}
-EXPORT_SYMBOL(__dev_kfree_skb_irq);
+EXPORT_SYMBOL(dev_kfree_skb_irq_reason);
-void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason)
+void dev_kfree_skb_any_reason(struct sk_buff *skb, enum skb_drop_reason reason)
{
if (in_hardirq() || irqs_disabled())
- __dev_kfree_skb_irq(skb, reason);
- else if (unlikely(reason == SKB_REASON_DROPPED))
- kfree_skb(skb);
+ dev_kfree_skb_irq_reason(skb, reason);
else
- consume_skb(skb);
+ kfree_skb_reason(skb, reason);
}
-EXPORT_SYMBOL(__dev_kfree_skb_any);
+EXPORT_SYMBOL(dev_kfree_skb_any_reason);
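A minimal sketch of how a driver completion path could use the renamed helper; the function below is hypothetical and only assumes that dev_consume_skb_any() keeps its existing meaning and that SKB_DROP_REASON_NOT_SPECIFIED remains the generic fallback reason.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical TX-completion fragment (illustration, not from this patch). */
static void example_tx_complete(struct sk_buff *skb, bool tx_ok)
{
	if (tx_ok)
		/* Successful transmit: accounted as consumed, no drop trace. */
		dev_consume_skb_any(skb);
	else
		/* Failed transmit: now surfaces in the kfree_skb tracepoint with a reason. */
		dev_kfree_skb_any_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}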
/**
* we add to pkt_len the headers size of all segments
*/
if (shinfo->gso_size && skb_transport_header_was_set(skb)) {
- unsigned int hdr_len;
u16 gso_segs = shinfo->gso_segs;
+ unsigned int hdr_len;
/* mac layer + network layer */
- hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
+ hdr_len = skb_transport_offset(skb);
/* + transport layer */
if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) {
const struct tcphdr *th;
struct tcphdr _tcphdr;
- th = skb_header_pointer(skb, skb_transport_offset(skb),
+ th = skb_header_pointer(skb, hdr_len,
sizeof(_tcphdr), &_tcphdr);
if (likely(th))
hdr_len += __tcp_hdrlen(th);
} else {
struct udphdr _udphdr;
- if (skb_header_pointer(skb, skb_transport_offset(skb),
+ if (skb_header_pointer(skb, hdr_len,
sizeof(_udphdr), &_udphdr))
hdr_len += sizeof(struct udphdr);
}
}
list_add_tail(&napi->poll_list, &sd->poll_list);
- __raise_softirq_irqoff(NET_RX_SOFTIRQ);
+ /* If not called from net_rx_action()
+ * we have to raise NET_RX_SOFTIRQ.
+ */
+ if (!sd->in_net_rx_action)
+ __raise_softirq_irqoff(NET_RX_SOFTIRQ);
}
#ifdef CONFIG_RPS
}
/*
- * Check if this softnet_data structure is another cpu one
- * If yes, queue it to our IPI list and return 1
- * If no, return 0
+ * After we queued a packet into sd->input_pkt_queue,
+ * we need to make sure this queue is serviced soon.
+ *
+ * - If this is another cpu queue, link it to our rps_ipi_list,
+ * and make sure we will process rps_ipi_list from net_rx_action().
+ *
+ * - If this is our own queue, NAPI schedule our backlog.
+ * Note that this also raises NET_RX_SOFTIRQ.
*/
-static int napi_schedule_rps(struct softnet_data *sd)
+static void napi_schedule_rps(struct softnet_data *sd)
{
struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
sd->rps_ipi_next = mysd->rps_ipi_list;
mysd->rps_ipi_list = sd;
- __raise_softirq_irqoff(NET_RX_SOFTIRQ);
- return 1;
+ /* If not called from net_rx_action()
+ * we have to raise NET_RX_SOFTIRQ.
+ */
+ if (!mysd->in_net_rx_action)
+ __raise_softirq_irqoff(NET_RX_SOFTIRQ);
+ return;
}
#endif /* CONFIG_RPS */
__napi_schedule_irqoff(&mysd->backlog);
- return 0;
}
#ifdef CONFIG_NET_FLOW_LIMIT
clist = clist->next;
WARN_ON(refcount_read(&skb->users));
- if (likely(get_kfree_skb_cb(skb)->reason == SKB_REASON_CONSUMED))
+ if (likely(get_kfree_skb_cb(skb)->reason == SKB_CONSUMED))
trace_consume_skb(skb, net_tx_action);
else
trace_kfree_skb(skb, net_tx_action,
- SKB_DROP_REASON_NOT_SPECIFIED);
+ get_kfree_skb_cb(skb)->reason);
if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
__kfree_skb(skb);
LIST_HEAD(list);
LIST_HEAD(repoll);
+start:
+ sd->in_net_rx_action = true;
local_irq_disable();
list_splice_init(&sd->poll_list, &list);
local_irq_enable();
skb_defer_free_flush(sd);
if (list_empty(&list)) {
- if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
- goto end;
+ if (list_empty(&repoll)) {
+ sd->in_net_rx_action = false;
+ barrier();
+ /* We need to check if ____napi_schedule()
+ * had refilled poll_list while
+ * sd->in_net_rx_action was true.
+ */
+ if (!list_empty(&sd->poll_list))
+ goto start;
+ if (!sd_has_rps_ipi_waiting(sd))
+ goto end;
+ }
break;
}
list_splice(&list, &sd->poll_list);
if (!list_empty(&sd->poll_list))
__raise_softirq_irqoff(NET_RX_SOFTIRQ);
+ else
+ sd->in_net_rx_action = false;
net_rps_action_and_irq_enable(sd);
end:;
return err;
}
-static int net_hwtstamp_validate(struct ifreq *ifr)
+static int net_hwtstamp_validate(const struct kernel_hwtstamp_config *cfg)
{
- struct hwtstamp_config cfg;
enum hwtstamp_tx_types tx_type;
enum hwtstamp_rx_filters rx_filter;
int tx_type_valid = 0;
int rx_filter_valid = 0;
- if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
- return -EFAULT;
-
- if (cfg.flags & ~HWTSTAMP_FLAG_MASK)
+ if (cfg->flags & ~HWTSTAMP_FLAG_MASK)
return -EINVAL;
- tx_type = cfg.tx_type;
- rx_filter = cfg.rx_filter;
+ tx_type = cfg->tx_type;
+ rx_filter = cfg->rx_filter;
switch (tx_type) {
case HWTSTAMP_TX_OFF:
struct ifreq *ifr, unsigned int cmd)
{
const struct net_device_ops *ops = dev->netdev_ops;
+
+ if (!ops->ndo_eth_ioctl)
+ return -EOPNOTSUPP;
+
+ if (!netif_device_present(dev))
+ return -ENODEV;
+
+ return ops->ndo_eth_ioctl(dev, ifr, cmd);
+}
+
+static int dev_get_hwtstamp(struct net_device *dev, struct ifreq *ifr)
+{
+ return dev_eth_ioctl(dev, ifr, SIOCGHWTSTAMP);
+}
+
+static int dev_set_hwtstamp(struct net_device *dev, struct ifreq *ifr)
+{
+ struct netdev_notifier_hwtstamp_info info = {
+ .info.dev = dev,
+ };
+ struct kernel_hwtstamp_config kernel_cfg;
+ struct netlink_ext_ack extack = {};
+ struct hwtstamp_config cfg;
int err;
- err = dsa_ndo_eth_ioctl(dev, ifr, cmd);
- if (err == 0 || err != -EOPNOTSUPP)
+ if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
+ return -EFAULT;
+
+ hwtstamp_config_to_kernel(&kernel_cfg, &cfg);
+
+ err = net_hwtstamp_validate(&kernel_cfg);
+ if (err)
return err;
- if (ops->ndo_eth_ioctl) {
- if (netif_device_present(dev))
- err = ops->ndo_eth_ioctl(dev, ifr, cmd);
- else
- err = -ENODEV;
+ info.info.extack = &extack;
+ info.config = &kernel_cfg;
+
+ err = call_netdevice_notifiers_info(NETDEV_PRE_CHANGE_HWTSTAMP,
+ &info.info);
+ err = notifier_to_errno(err);
+ if (err) {
+ if (extack._msg)
+ netdev_err(dev, "%s\n", extack._msg);
+ return err;
}
- return err;
+ return dev_eth_ioctl(dev, ifr, SIOCSHWTSTAMP);
}
static int dev_siocbond(struct net_device *dev,
rtnl_lock();
return err;
+ case SIOCDEVPRIVATE ... SIOCDEVPRIVATE + 15:
+ return dev_siocdevprivate(dev, ifr, data, cmd);
+
case SIOCSHWTSTAMP:
- err = net_hwtstamp_validate(ifr);
- if (err)
- return err;
- fallthrough;
+ return dev_set_hwtstamp(dev, ifr);
- /*
- * Unknown or private ioctl
- */
- default:
- if (cmd >= SIOCDEVPRIVATE &&
- cmd <= SIOCDEVPRIVATE + 15)
- return dev_siocdevprivate(dev, ifr, data, cmd);
-
- if (cmd == SIOCGMIIPHY ||
- cmd == SIOCGMIIREG ||
- cmd == SIOCSMIIREG ||
- cmd == SIOCSHWTSTAMP ||
- cmd == SIOCGHWTSTAMP) {
- err = dev_eth_ioctl(dev, ifr, cmd);
- } else if (cmd == SIOCBONDENSLAVE ||
- cmd == SIOCBONDRELEASE ||
- cmd == SIOCBONDSETHWADDR ||
- cmd == SIOCBONDSLAVEINFOQUERY ||
- cmd == SIOCBONDINFOQUERY ||
- cmd == SIOCBONDCHANGEACTIVE) {
- err = dev_siocbond(dev, ifr, cmd);
- } else
- err = -EINVAL;
+ case SIOCGHWTSTAMP:
+ return dev_get_hwtstamp(dev, ifr);
+ case SIOCGMIIPHY:
+ case SIOCGMIIREG:
+ case SIOCSMIIREG:
+ return dev_eth_ioctl(dev, ifr, cmd);
+
+ case SIOCBONDENSLAVE:
+ case SIOCBONDRELEASE:
+ case SIOCBONDSETHWADDR:
+ case SIOCBONDSLAVEINFOQUERY:
+ case SIOCBONDINFOQUERY:
+ case SIOCBONDCHANGEACTIVE:
+ return dev_siocbond(dev, ifr, cmd);
+
+ /* Unknown ioctl */
+ default:
+ err = -EINVAL;
}
return err;
}
* @net: the applicable net namespace
* @cmd: command to issue
* @ifr: pointer to a struct ifreq in user space
+ * @data: data exchanged with userspace
* @need_copyout: whether or not copy_to_user() should be called
*
* Issue ioctl functions to devices. This is normally called by the
dst->tclassid = 0;
#endif
dst->lwtstate = NULL;
- atomic_set(&dst->__refcnt, initial_ref);
+ rcuref_init(&dst->__rcuref, initial_ref);
dst->__use = 0;
dst->lastuse = jiffies;
dst->flags = flags;
void dst_release(struct dst_entry *dst)
{
- if (dst) {
- int newrefcnt;
-
- newrefcnt = atomic_dec_return(&dst->__refcnt);
- if (WARN_ONCE(newrefcnt < 0, "dst_release underflow"))
- net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
- __func__, dst, newrefcnt);
- if (!newrefcnt)
- call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
- }
+ if (dst && rcuref_put(&dst->__rcuref))
+ call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
}
EXPORT_SYMBOL(dst_release);
void dst_release_immediate(struct dst_entry *dst)
{
- if (dst) {
- int newrefcnt;
-
- newrefcnt = atomic_dec_return(&dst->__refcnt);
- if (WARN_ONCE(newrefcnt < 0, "dst_release_immediate underflow"))
- net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
- __func__, dst, newrefcnt);
- if (!newrefcnt)
- dst_destroy(dst);
- }
+ if (dst && rcuref_put(&dst->__rcuref))
+ dst_destroy(dst);
}
EXPORT_SYMBOL(dst_release_immediate);
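For context on the conversion above, here is a condensed, hypothetical sketch of the rcuref_t take/put pattern from <linux/rcuref.h> that dst entries now follow; the example_obj type and its release path are purely illustrative.

#include <linux/rcuref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_obj {
	rcuref_t ref;
	struct rcu_head rcu;
};

static struct example_obj *example_obj_alloc(void)
{
	struct example_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		rcuref_init(&obj->ref, 1);	/* start with one reference */
	return obj;
}

static void example_obj_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct example_obj, rcu));
}

static void example_obj_put(struct example_obj *obj)
{
	/* rcuref_put() returns true only for the final reference. */
	if (obj && rcuref_put(&obj->ref))
		call_rcu(&obj->rcu, example_obj_free_rcu);
}

rcuref_read(), used for the reporting path further below, returns the current count without taking a reference.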
.arg5_type = ARG_ANYTHING,
};
+int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from,
+ u32 len, u64 flags)
+{
+ return ____bpf_skb_store_bytes(skb, offset, from, len, flags);
+}
+
BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset,
void *, to, u32, len)
{
.arg4_type = ARG_CONST_SIZE,
};
+int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len)
+{
+ return ____bpf_skb_load_bytes(skb, offset, to, len);
+}
+
BPF_CALL_4(bpf_flow_dissector_load_bytes,
const struct bpf_flow_dissector *, ctx, u32, offset,
void *, to, u32, len)
return -ENOMEM;
}
- rcu_read_lock_bh();
+ rcu_read_lock();
if (!nh) {
dst = skb_dst(skb);
nexthop = rt6_nexthop(container_of(dst, struct rt6_info, dst),
int ret;
sock_confirm_neigh(skb, neigh);
+ local_bh_disable();
dev_xmit_recursion_inc();
ret = neigh_output(neigh, skb, false);
dev_xmit_recursion_dec();
- rcu_read_unlock_bh();
+ local_bh_enable();
+ rcu_read_unlock();
return ret;
}
rcu_read_unlock_bh();
return -ENOMEM;
}
- rcu_read_lock_bh();
+ rcu_read_lock();
if (!nh) {
struct dst_entry *dst = skb_dst(skb);
struct rtable *rt = container_of(dst, struct rtable, dst);
} else if (nh->nh_family == AF_INET) {
neigh = ip_neigh_gw4(dev, nh->ipv4_nh);
} else {
- rcu_read_unlock_bh();
+ rcu_read_unlock();
goto out_drop;
}
int ret;
sock_confirm_neigh(skb, neigh);
+ local_bh_disable();
dev_xmit_recursion_inc();
ret = neigh_output(neigh, skb, is_v6gw);
dev_xmit_recursion_dec();
- rcu_read_unlock_bh();
+ local_bh_enable();
+ rcu_read_unlock();
return ret;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
out_drop:
kfree_skb(skb);
return -ENETDOWN;
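The hunks above reflect the wider shift in this series from rcu_read_lock_bh() to plain RCU plus explicit BH disabling around neigh_output(); the function below is a hypothetical, condensed restatement of that pattern (the helper name and the use of arp_tbl are illustrative only).

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/neighbour.h>
#include <net/arp.h>

/* Hypothetical transmit-via-neighbour path under the new locking rules. */
static int example_xmit_via_neigh(struct net_device *dev, struct sk_buff *skb,
				  const void *daddr)
{
	struct neighbour *neigh;
	int ret = -ENETDOWN;

	rcu_read_lock();			/* plain RCU protects the lookup now */
	neigh = __neigh_lookup_noref(&arp_tbl, daddr, dev);
	if (neigh && (READ_ONCE(neigh->nud_state) & NUD_VALID)) {
		local_bh_disable();		/* BH is no longer implied, disable it explicitly */
		ret = neigh_output(neigh, skb, false);
		local_bh_enable();
	} else {
		kfree_skb(skb);
	}
	rcu_read_unlock();
	return ret;
}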
.arg3_type = ARG_ANYTHING,
};
-BPF_CALL_1(bpf_xdp_get_buff_len, struct xdp_buff*, xdp)
+BPF_CALL_1(bpf_xdp_get_buff_len, struct xdp_buff*, xdp)
{
return xdp_get_buff_len(xdp);
}
.arg2_type = ARG_ANYTHING,
};
-static void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
- void *buf, unsigned long len, bool flush)
+void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
+ void *buf, unsigned long len, bool flush)
{
unsigned long ptr_len, ptr_off = 0;
skb_frag_t *next_frag, *end_frag;
}
}
-static void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
+void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
{
struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
u32 size = xdp->data_end - xdp->data;
.arg4_type = ARG_CONST_SIZE,
};
+int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len)
+{
+ return ____bpf_xdp_load_bytes(xdp, offset, buf, len);
+}
+
BPF_CALL_4(bpf_xdp_store_bytes, struct xdp_buff *, xdp, u32, offset,
void *, buf, u32, len)
{
.arg4_type = ARG_CONST_SIZE,
};
+int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len)
+{
+ return ____bpf_xdp_store_bytes(xdp, offset, buf, len);
+}
+
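/* Editorial note (not part of the original patch): the non-inlined
 * __bpf_skb_{load,store}_bytes() and __bpf_xdp_{load,store}_bytes() wrappers
 * above appear to exist so that callers outside this file - e.g. the dynptr
 * read/write paths - can reuse the helper bodies directly instead of going
 * through a BPF helper call.
 */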
static int bpf_xdp_frags_increase_tail(struct xdp_buff *xdp, int offset)
{
struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
else
neigh = __ipv6_neigh_lookup_noref_stub(dev, params->ipv6_dst);
- if (!neigh || !(neigh->nud_state & NUD_VALID))
+ if (!neigh || !(READ_ONCE(neigh->nud_state) & NUD_VALID))
return BPF_FIB_LKUP_RET_NO_NEIGH;
memcpy(params->dmac, neigh->ha, ETH_ALEN);
memcpy(params->smac, dev->dev_addr, ETH_ALEN);
* not needed here.
*/
neigh = __ipv6_neigh_lookup_noref_stub(dev, dst);
- if (!neigh || !(neigh->nud_state & NUD_VALID))
+ if (!neigh || !(READ_ONCE(neigh->nud_state) & NUD_VALID))
return BPF_FIB_LKUP_RET_NO_NEIGH;
memcpy(params->dmac, neigh->ha, ETH_ALEN);
memcpy(params->smac, dev->dev_addr, ETH_ALEN);
return &bpf_sk_storage_delete_proto;
case BPF_FUNC_get_netns_cookie:
return &bpf_get_netns_cookie_sk_msg_proto;
-#ifdef CONFIG_CGROUPS
- case BPF_FUNC_get_current_cgroup_id:
- return &bpf_get_current_cgroup_id_proto;
- case BPF_FUNC_get_current_ancestor_cgroup_id:
- return &bpf_get_current_ancestor_cgroup_id_proto;
-#endif
#ifdef CONFIG_CGROUP_NET_CLASSID
case BPF_FUNC_get_cgroup_classid:
return &bpf_get_cgroup_classid_curr_proto;
#endif
/* <store>: skb->tstamp = tstamp */
- *insn++ = BPF_STX_MEM(BPF_DW, skb_reg, value_reg,
- offsetof(struct sk_buff, tstamp));
+ *insn++ = BPF_RAW_INSN(BPF_CLASS(si->code) | BPF_DW | BPF_MEM,
+ skb_reg, value_reg, offsetof(struct sk_buff, tstamp), si->imm);
return insn;
}
+#define BPF_EMIT_STORE(size, si, off) \
+ BPF_RAW_INSN(BPF_CLASS((si)->code) | (size) | BPF_MEM, \
+ (si)->dst_reg, (si)->src_reg, (off), (si)->imm)
+
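/* Editorial note (not part of the original patch): BPF_EMIT_STORE() preserves
 * both the class and the immediate of the original instruction, so the two
 * store flavours a program may use on ctx fields are rewritten faithfully:
 *
 *   BPF_STX:  *(size *)(dst_reg + off) = src_reg    (register store)
 *   BPF_ST:   *(size *)(dst_reg + off) = imm32      (immediate store, si->imm)
 */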
static u32 bpf_convert_ctx_access(enum bpf_access_type type,
const struct bpf_insn *si,
struct bpf_insn *insn_buf,
case offsetof(struct __sk_buff, priority):
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- bpf_target_off(struct sk_buff, priority, 4,
- target_size));
+ *insn++ = BPF_EMIT_STORE(BPF_W, si,
+ bpf_target_off(struct sk_buff, priority, 4,
+ target_size));
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
bpf_target_off(struct sk_buff, priority, 4,
case offsetof(struct __sk_buff, mark):
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- bpf_target_off(struct sk_buff, mark, 4,
- target_size));
+ *insn++ = BPF_EMIT_STORE(BPF_W, si,
+ bpf_target_off(struct sk_buff, mark, 4,
+ target_size));
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
bpf_target_off(struct sk_buff, mark, 4,
case offsetof(struct __sk_buff, queue_mapping):
if (type == BPF_WRITE) {
- *insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
- *insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
- bpf_target_off(struct sk_buff,
- queue_mapping,
- 2, target_size));
+ u32 off = bpf_target_off(struct sk_buff, queue_mapping, 2, target_size);
+
+ if (BPF_CLASS(si->code) == BPF_ST && si->imm >= NO_QUEUE_MAPPING) {
+ *insn++ = BPF_JMP_A(0); /* noop */
+ break;
+ }
+
+ if (BPF_CLASS(si->code) == BPF_STX)
+ *insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
+ *insn++ = BPF_EMIT_STORE(BPF_H, si, off);
} else {
*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
bpf_target_off(struct sk_buff,
off += offsetof(struct sk_buff, cb);
off += offsetof(struct qdisc_skb_cb, data);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
- si->src_reg, off);
+ *insn++ = BPF_EMIT_STORE(BPF_SIZE(si->code), si, off);
else
*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
si->src_reg, off);
off += offsetof(struct qdisc_skb_cb, tc_classid);
*target_size = 2;
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_H, si->dst_reg,
- si->src_reg, off);
+ *insn++ = BPF_EMIT_STORE(BPF_H, si, off);
else
*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg,
si->src_reg, off);
case offsetof(struct __sk_buff, tc_index):
#ifdef CONFIG_NET_SCHED
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
- bpf_target_off(struct sk_buff, tc_index, 2,
- target_size));
+ *insn++ = BPF_EMIT_STORE(BPF_H, si,
+ bpf_target_off(struct sk_buff, tc_index, 2,
+ target_size));
else
*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
bpf_target_off(struct sk_buff, tc_index, 2,
BUILD_BUG_ON(sizeof_field(struct sock, sk_bound_dev_if) != 4);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- offsetof(struct sock, sk_bound_dev_if));
+ *insn++ = BPF_EMIT_STORE(BPF_W, si,
+ offsetof(struct sock, sk_bound_dev_if));
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
offsetof(struct sock, sk_bound_dev_if));
BUILD_BUG_ON(sizeof_field(struct sock, sk_mark) != 4);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- offsetof(struct sock, sk_mark));
+ *insn++ = BPF_EMIT_STORE(BPF_W, si,
+ offsetof(struct sock, sk_mark));
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
offsetof(struct sock, sk_mark));
BUILD_BUG_ON(sizeof_field(struct sock, sk_priority) != 4);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- offsetof(struct sock, sk_priority));
+ *insn++ = BPF_EMIT_STORE(BPF_W, si,
+ offsetof(struct sock, sk_priority));
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
offsetof(struct sock, sk_priority));
offsetof(S, TF)); \
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(S, F), tmp_reg, \
si->dst_reg, offsetof(S, F)); \
- *insn++ = BPF_STX_MEM(SIZE, tmp_reg, si->src_reg, \
+ *insn++ = BPF_RAW_INSN(SIZE | BPF_MEM | BPF_CLASS(si->code), \
+ tmp_reg, si->src_reg, \
bpf_target_off(NS, NF, sizeof_field(NS, NF), \
target_size) \
- + OFF); \
+ + OFF, \
+ si->imm); \
*insn++ = BPF_LDX_MEM(BPF_DW, tmp_reg, si->dst_reg, \
offsetof(S, TF)); \
} while (0)
struct bpf_sock_ops_kern, sk),\
reg, si->dst_reg, \
offsetof(struct bpf_sock_ops_kern, sk));\
- *insn++ = BPF_STX_MEM(BPF_FIELD_SIZEOF(OBJ, OBJ_FIELD), \
- reg, si->src_reg, \
- offsetof(OBJ, OBJ_FIELD)); \
+ *insn++ = BPF_RAW_INSN(BPF_FIELD_SIZEOF(OBJ, OBJ_FIELD) | \
+ BPF_MEM | BPF_CLASS(si->code), \
+ reg, si->src_reg, \
+ offsetof(OBJ, OBJ_FIELD), \
+ si->imm); \
*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->dst_reg, \
offsetof(struct bpf_sock_ops_kern, \
temp)); \
off -= offsetof(struct bpf_sock_ops, replylong[0]);
off += offsetof(struct bpf_sock_ops_kern, replylong[0]);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
- off);
+ *insn++ = BPF_EMIT_STORE(BPF_W, si, off);
else
*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
off);
off += offsetof(struct sk_buff, cb);
off += offsetof(struct sk_skb_cb, data);
if (type == BPF_WRITE)
- *insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
- si->src_reg, off);
+ *insn++ = BPF_EMIT_STORE(BPF_SIZE(si->code), si, off);
else
*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
si->src_reg, off);
return func;
}
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc int bpf_dynptr_from_skb(struct sk_buff *skb, u64 flags,
+ struct bpf_dynptr_kern *ptr__uninit)
+{
+ if (flags) {
+ bpf_dynptr_set_null(ptr__uninit);
+ return -EINVAL;
+ }
+
+ bpf_dynptr_init(ptr__uninit, skb, BPF_DYNPTR_TYPE_SKB, 0, skb->len);
+
+ return 0;
+}
+
+__bpf_kfunc int bpf_dynptr_from_xdp(struct xdp_buff *xdp, u64 flags,
+ struct bpf_dynptr_kern *ptr__uninit)
+{
+ if (flags) {
+ bpf_dynptr_set_null(ptr__uninit);
+ return -EINVAL;
+ }
+
+ bpf_dynptr_init(ptr__uninit, xdp, BPF_DYNPTR_TYPE_XDP, 0, xdp_get_buff_len(xdp));
+
+ return 0;
+}
+__diag_pop();
+
+int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+ struct bpf_dynptr_kern *ptr__uninit)
+{
+ int err;
+
+ err = bpf_dynptr_from_skb(skb, flags, ptr__uninit);
+ if (err)
+ return err;
+
+ bpf_dynptr_set_rdonly(ptr__uninit);
+
+ return 0;
+}
+
+BTF_SET8_START(bpf_kfunc_check_set_skb)
+BTF_ID_FLAGS(func, bpf_dynptr_from_skb)
+BTF_SET8_END(bpf_kfunc_check_set_skb)
+
+BTF_SET8_START(bpf_kfunc_check_set_xdp)
+BTF_ID_FLAGS(func, bpf_dynptr_from_xdp)
+BTF_SET8_END(bpf_kfunc_check_set_xdp)
+
+static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
+ .owner = THIS_MODULE,
+ .set = &bpf_kfunc_check_set_skb,
+};
+
+static const struct btf_kfunc_id_set bpf_kfunc_set_xdp = {
+ .owner = THIS_MODULE,
+ .set = &bpf_kfunc_check_set_xdp,
+};
+
+static int __init bpf_kfunc_init(void)
+{
+ int ret;
+
+ ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SK_SKB, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SOCKET_FILTER, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SKB, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_OUT, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_IN, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
+ return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
+}
+late_initcall(bpf_kfunc_init);
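To make the registration above concrete, a hypothetical BPF-side sketch follows; it assumes the usual libbpf conventions (__ksym declaration of the kfunc, SEC("tc")) and uses the pre-existing bpf_dynptr_read() helper, so the program name, section and logic are illustrative only.

/* Hypothetical tc classifier using the new bpf_dynptr_from_skb() kfunc. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym;

SEC("tc")
int read_eth_via_dynptr(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	__u8 eth[14];

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return TC_ACT_SHOT;

	/* Pull the Ethernet header through the dynptr abstraction. */
	if (bpf_dynptr_read(eth, sizeof(eth), &ptr, 0, 0))
		return TC_ACT_SHOT;

	/* Keep only IPv4 (0x0800) frames in this toy example. */
	if (eth[12] != 0x08 || eth[13] != 0x00)
		return TC_ACT_SHOT;

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";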
NEIGH_CACHE_STAT_INC(tbl, lookups);
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __neigh_lookup_noref(tbl, pkey, dev);
if (n) {
if (!refcount_inc_not_zero(&n->refcnt))
NEIGH_CACHE_STAT_INC(tbl, hits);
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return n;
}
EXPORT_SYMBOL(neigh_lookup);
-struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
- const void *pkey)
-{
- struct neighbour *n;
- unsigned int key_len = tbl->key_len;
- u32 hash_val;
- struct neigh_hash_table *nht;
-
- NEIGH_CACHE_STAT_INC(tbl, lookups);
-
- rcu_read_lock_bh();
- nht = rcu_dereference_bh(tbl->nht);
- hash_val = tbl->hash(pkey, NULL, nht->hash_rnd) >> (32 - nht->hash_shift);
-
- for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]);
- n != NULL;
- n = rcu_dereference_bh(n->next)) {
- if (!memcmp(n->primary_key, pkey, key_len) &&
- net_eq(dev_net(n->dev), net)) {
- if (!refcount_inc_not_zero(&n->refcnt))
- n = NULL;
- NEIGH_CACHE_STAT_INC(tbl, hits);
- break;
- }
- }
-
- rcu_read_unlock_bh();
- return n;
-}
-EXPORT_SYMBOL(neigh_lookup_nodev);
-
static struct neighbour *
___neigh_create(struct neigh_table *tbl, const void *pkey,
struct net_device *dev, u32 flags,
neigh->used +
NEIGH_VAR(neigh->parms, DELAY_PROBE_TIME))) {
neigh_dbg(2, "neigh %p is delayed\n", neigh);
- neigh->nud_state = NUD_DELAY;
+ WRITE_ONCE(neigh->nud_state, NUD_DELAY);
neigh->updated = jiffies;
neigh_suspect(neigh);
next = now + NEIGH_VAR(neigh->parms, DELAY_PROBE_TIME);
} else {
neigh_dbg(2, "neigh %p is suspected\n", neigh);
- neigh->nud_state = NUD_STALE;
+ WRITE_ONCE(neigh->nud_state, NUD_STALE);
neigh->updated = jiffies;
neigh_suspect(neigh);
notify = 1;
neigh->confirmed +
NEIGH_VAR(neigh->parms, DELAY_PROBE_TIME))) {
neigh_dbg(2, "neigh %p is now reachable\n", neigh);
- neigh->nud_state = NUD_REACHABLE;
+ WRITE_ONCE(neigh->nud_state, NUD_REACHABLE);
neigh->updated = jiffies;
neigh_connect(neigh);
notify = 1;
next = neigh->confirmed + neigh->parms->reachable_time;
} else {
neigh_dbg(2, "neigh %p is probed\n", neigh);
- neigh->nud_state = NUD_PROBE;
+ WRITE_ONCE(neigh->nud_state, NUD_PROBE);
neigh->updated = jiffies;
atomic_set(&neigh->probes, 0);
notify = 1;
if ((neigh->nud_state & (NUD_INCOMPLETE | NUD_PROBE)) &&
atomic_read(&neigh->probes) >= neigh_max_probes(neigh)) {
- neigh->nud_state = NUD_FAILED;
+ WRITE_ONCE(neigh->nud_state, NUD_FAILED);
notify = 1;
neigh_invalidate(neigh);
goto out;
atomic_set(&neigh->probes,
NEIGH_VAR(neigh->parms, UCAST_PROBES));
neigh_del_timer(neigh);
- neigh->nud_state = NUD_INCOMPLETE;
+ WRITE_ONCE(neigh->nud_state, NUD_INCOMPLETE);
neigh->updated = now;
if (!immediate_ok) {
next = now + 1;
}
neigh_add_timer(neigh, next);
} else {
- neigh->nud_state = NUD_FAILED;
+ WRITE_ONCE(neigh->nud_state, NUD_FAILED);
neigh->updated = jiffies;
write_unlock_bh(&neigh->lock);
} else if (neigh->nud_state & NUD_STALE) {
neigh_dbg(2, "neigh %p is delayed\n", neigh);
neigh_del_timer(neigh);
- neigh->nud_state = NUD_DELAY;
+ WRITE_ONCE(neigh->nud_state, NUD_DELAY);
neigh->updated = jiffies;
neigh_add_timer(neigh, jiffies +
NEIGH_VAR(neigh->parms, DELAY_PROBE_TIME));
neigh_update_flags(neigh, flags, ¬ify, &gc_update, &managed_update);
if (flags & (NEIGH_UPDATE_F_USE | NEIGH_UPDATE_F_MANAGED)) {
new = old & ~NUD_PERMANENT;
- neigh->nud_state = new;
+ WRITE_ONCE(neigh->nud_state, new);
err = 0;
goto out;
}
neigh_del_timer(neigh);
if (old & NUD_CONNECTED)
neigh_suspect(neigh);
- neigh->nud_state = new;
+ WRITE_ONCE(neigh->nud_state, new);
err = 0;
notify = old & NUD_VALID;
if ((old & (NUD_INCOMPLETE | NUD_PROBE)) &&
((new & NUD_REACHABLE) ?
neigh->parms->reachable_time :
0)));
- neigh->nud_state = new;
+ WRITE_ONCE(neigh->nud_state, new);
notify = 1;
}
neigh->updated = jiffies;
if (!(neigh->nud_state & NUD_FAILED))
return;
- neigh->nud_state = NUD_INCOMPLETE;
+ WRITE_ONCE(neigh->nud_state, NUD_INCOMPLETE);
atomic_set(&neigh->probes, neigh_max_probes(neigh));
neigh_add_timer(neigh,
jiffies + max(NEIGH_VAR(neigh->parms, RETRANS_TIME),
.ndtc_proxy_qlen = tbl->proxy_queue.qlen,
};
- rcu_read_lock_bh();
- nht = rcu_dereference_bh(tbl->nht);
+ rcu_read_lock();
+ nht = rcu_dereference(tbl->nht);
ndc.ndtc_hash_rnd = nht->hash_rnd[0];
ndc.ndtc_hash_mask = ((1 << nht->hash_shift) - 1);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
if (nla_put(skb, NDTA_CONFIG, sizeof(ndc), &ndc))
goto nla_put_failure;
if (filter->dev_idx || filter->master_idx)
flags |= NLM_F_DUMP_FILTERED;
- rcu_read_lock_bh();
- nht = rcu_dereference_bh(tbl->nht);
+ rcu_read_lock();
+ nht = rcu_dereference(tbl->nht);
for (h = s_h; h < (1 << nht->hash_shift); h++) {
if (h > s_h)
s_idx = 0;
- for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
+ for (n = rcu_dereference(nht->hash_buckets[h]), idx = 0;
n != NULL;
- n = rcu_dereference_bh(n->next)) {
+ n = rcu_dereference(n->next)) {
if (idx < s_idx || !net_eq(dev_net(n->dev), net))
goto next;
if (neigh_ifindex_filtered(n->dev, filter->dev_idx) ||
}
rc = skb->len;
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock();
cb->args[1] = h;
cb->args[2] = idx;
return rc;
int chain;
struct neigh_hash_table *nht;
- rcu_read_lock_bh();
- nht = rcu_dereference_bh(tbl->nht);
+ rcu_read_lock();
+ nht = rcu_dereference(tbl->nht);
- read_lock(&tbl->lock); /* avoid resizes */
+ read_lock_bh(&tbl->lock); /* avoid resizes */
for (chain = 0; chain < (1 << nht->hash_shift); chain++) {
struct neighbour *n;
- for (n = rcu_dereference_bh(nht->hash_buckets[chain]);
+ for (n = rcu_dereference(nht->hash_buckets[chain]);
n != NULL;
- n = rcu_dereference_bh(n->next))
+ n = rcu_dereference(n->next))
cb(n, cookie);
}
- read_unlock(&tbl->lock);
- rcu_read_unlock_bh();
+ read_unlock_bh(&tbl->lock);
+ rcu_read_unlock();
}
EXPORT_SYMBOL(neigh_for_each);
tbl = neigh_tables[index];
if (!tbl)
goto out;
- rcu_read_lock_bh();
+ rcu_read_lock();
if (index == NEIGH_ARP_TABLE) {
u32 key = *((u32 *)addr);
neigh = __neigh_create(tbl, addr, dev, false);
err = PTR_ERR(neigh);
if (IS_ERR(neigh)) {
- rcu_read_unlock_bh();
+ rcu_read_unlock();
goto out_kfree_skb;
}
err = neigh->output(neigh, skb);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
else if (index == NEIGH_LINK_TABLE) {
err = dev_hard_header(skb, dev, ntohs(skb->protocol),
state->flags &= ~NEIGH_SEQ_IS_PNEIGH;
for (bucket = 0; bucket < (1 << nht->hash_shift); bucket++) {
- n = rcu_dereference_bh(nht->hash_buckets[bucket]);
+ n = rcu_dereference(nht->hash_buckets[bucket]);
while (n) {
if (!net_eq(dev_net(n->dev), net))
}
if (!(state->flags & NEIGH_SEQ_SKIP_NOARP))
break;
- if (n->nud_state & ~NUD_NOARP)
+ if (READ_ONCE(n->nud_state) & ~NUD_NOARP)
break;
next:
- n = rcu_dereference_bh(n->next);
+ n = rcu_dereference(n->next);
}
if (n)
if (v)
return n;
}
- n = rcu_dereference_bh(n->next);
+ n = rcu_dereference(n->next);
while (1) {
while (n) {
if (!(state->flags & NEIGH_SEQ_SKIP_NOARP))
break;
- if (n->nud_state & ~NUD_NOARP)
+ if (READ_ONCE(n->nud_state) & ~NUD_NOARP)
break;
next:
- n = rcu_dereference_bh(n->next);
+ n = rcu_dereference(n->next);
}
if (n)
if (++state->bucket >= (1 << nht->hash_shift))
break;
- n = rcu_dereference_bh(nht->hash_buckets[state->bucket]);
+ n = rcu_dereference(nht->hash_buckets[state->bucket]);
}
if (n && pos)
void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl, unsigned int neigh_seq_flags)
__acquires(tbl->lock)
- __acquires(rcu_bh)
+ __acquires(rcu)
{
struct neigh_seq_state *state = seq->private;
state->bucket = 0;
state->flags = (neigh_seq_flags & ~NEIGH_SEQ_IS_PNEIGH);
- rcu_read_lock_bh();
- state->nht = rcu_dereference_bh(tbl->nht);
- read_lock(&tbl->lock);
+ rcu_read_lock();
+ state->nht = rcu_dereference(tbl->nht);
+ read_lock_bh(&tbl->lock);
return *pos ? neigh_get_idx_any(seq, pos) : SEQ_START_TOKEN;
}
void neigh_seq_stop(struct seq_file *seq, void *v)
__releases(tbl->lock)
- __releases(rcu_bh)
+ __releases(rcu)
{
struct neigh_seq_state *state = seq->private;
struct neigh_table *tbl = state->tbl;
- read_unlock(&tbl->lock);
- rcu_read_unlock_bh();
+ read_unlock_bh(&tbl->lock);
+ rcu_read_unlock();
}
EXPORT_SYMBOL(neigh_seq_stop);
return 0;
}
-static u32 softnet_backlog_len(struct softnet_data *sd)
+static u32 softnet_input_pkt_queue_len(struct softnet_data *sd)
{
- return skb_queue_len_lockless(&sd->input_pkt_queue) +
- skb_queue_len_lockless(&sd->process_queue);
+ return skb_queue_len_lockless(&sd->input_pkt_queue);
+}
+
+static u32 softnet_process_queue_len(struct softnet_data *sd)
+{
+ return skb_queue_len_lockless(&sd->process_queue);
}
static struct softnet_data *softnet_get_online(loff_t *pos)
static int softnet_seq_show(struct seq_file *seq, void *v)
{
struct softnet_data *sd = v;
+ u32 input_qlen = softnet_input_pkt_queue_len(sd);
+ u32 process_qlen = softnet_process_queue_len(sd);
unsigned int flow_limit_count = 0;
#ifdef CONFIG_NET_FLOW_LIMIT
* mapping the data a specific CPU
*/
seq_printf(seq,
- "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x "
+ "%08x %08x\n",
sd->processed, sd->dropped, sd->time_squeeze, 0,
0, 0, 0, 0, /* was fastroute */
0, /* was cpu_collision */
sd->received_rps, flow_limit_count,
- softnet_backlog_len(sd), (int)seq->index);
+ input_qlen + process_qlen, (int)seq->index,
+ input_qlen, process_qlen);
return 0;
}
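For readers of /proc/net/softnet_stat, the resulting per-CPU line layout, reconstructed from the format string and arguments above (the comment is editorial, not part of the patch):

/* Per-CPU softnet_stat line after this change (all fields printed as %08x):
 *   processed dropped time_squeeze 0  0 0 0 0  0  received_rps
 *   flow_limit_count backlog_len(input+process) index
 *   input_pkt_queue_len process_queue_len
 */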
};
/* Ops table for netdev */
-static const struct genl_split_ops netdev_nl_ops[2] = {
+static const struct genl_split_ops netdev_nl_ops[] = {
{
.cmd = NETDEV_CMD_DEV_GET,
.doit = netdev_nl_dev_get_doit,
#include <net/rtnetlink.h>
#include <net/net_namespace.h>
#include <net/devlink.h>
+#if IS_ENABLED(CONFIG_IPV6)
+#include <net/addrconf.h>
+#endif
#include "dev.h"
if (dst) {
ci.rta_lastuse = jiffies_delta_to_clock_t(jiffies - dst->lastuse);
ci.rta_used = dst->__use;
- ci.rta_clntref = atomic_read(&dst->__refcnt);
+ ci.rta_clntref = rcuref_read(&dst->__rcuref);
}
if (expires) {
unsigned long clock;
return 0;
}
+static int rtnl_mdb_valid_dump_req(const struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct br_port_msg *bpm;
+
+ if (nlh->nlmsg_len < nlmsg_msg_size(sizeof(*bpm))) {
+ NL_SET_ERR_MSG(extack, "Invalid header for mdb dump request");
+ return -EINVAL;
+ }
+
+ bpm = nlmsg_data(nlh);
+ if (bpm->ifindex) {
+ NL_SET_ERR_MSG(extack, "Filtering by device index is not supported for mdb dump request");
+ return -EINVAL;
+ }
+ if (nlmsg_attrlen(nlh, sizeof(*bpm))) {
+ NL_SET_ERR_MSG(extack, "Invalid data after header in mdb dump request");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+struct rtnl_mdb_dump_ctx {
+ long idx;
+};
+
+static int rtnl_mdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct rtnl_mdb_dump_ctx *ctx = (void *)cb->ctx;
+ struct net *net = sock_net(skb->sk);
+ struct net_device *dev;
+ int idx, s_idx;
+ int err;
+
+ NL_ASSERT_DUMP_CTX_FITS(struct rtnl_mdb_dump_ctx);
+
+ if (cb->strict_check) {
+ err = rtnl_mdb_valid_dump_req(cb->nlh, cb->extack);
+ if (err)
+ return err;
+ }
+
+ s_idx = ctx->idx;
+ idx = 0;
+
+ for_each_netdev(net, dev) {
+ if (idx < s_idx)
+ goto skip;
+ if (!dev->netdev_ops->ndo_mdb_dump)
+ goto skip;
+
+ err = dev->netdev_ops->ndo_mdb_dump(dev, skb, cb);
+ if (err == -EMSGSIZE)
+ goto out;
+ /* Moving on to next device, reset markers and sequence
+ * counters since they are all maintained per-device.
+ */
+ memset(cb->ctx, 0, sizeof(cb->ctx));
+ cb->prev_seq = 0;
+ cb->seq = 0;
+skip:
+ idx++;
+ }
+
+out:
+ ctx->idx = idx;
+ return skb->len;
+}
+
+static int rtnl_validate_mdb_entry(const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(attr);
+
+ if (nla_len(attr) != sizeof(struct br_mdb_entry)) {
+ NL_SET_ERR_MSG_ATTR(extack, attr, "Invalid attribute length");
+ return -EINVAL;
+ }
+
+ if (entry->ifindex == 0) {
+ NL_SET_ERR_MSG(extack, "Zero entry ifindex is not allowed");
+ return -EINVAL;
+ }
+
+ if (entry->addr.proto == htons(ETH_P_IP)) {
+ if (!ipv4_is_multicast(entry->addr.u.ip4) &&
+ !ipv4_is_zeronet(entry->addr.u.ip4)) {
+ NL_SET_ERR_MSG(extack, "IPv4 entry group address is not multicast or 0.0.0.0");
+ return -EINVAL;
+ }
+ if (ipv4_is_local_multicast(entry->addr.u.ip4)) {
+ NL_SET_ERR_MSG(extack, "IPv4 entry group address is local multicast");
+ return -EINVAL;
+ }
+#if IS_ENABLED(CONFIG_IPV6)
+ } else if (entry->addr.proto == htons(ETH_P_IPV6)) {
+ if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) {
+ NL_SET_ERR_MSG(extack, "IPv6 entry group address is link-local all nodes");
+ return -EINVAL;
+ }
+#endif
+ } else if (entry->addr.proto == 0) {
+ /* L2 mdb */
+ if (!is_multicast_ether_addr(entry->addr.u.mac_addr)) {
+ NL_SET_ERR_MSG(extack, "L2 entry group is not multicast");
+ return -EINVAL;
+ }
+ } else {
+ NL_SET_ERR_MSG(extack, "Unknown entry protocol");
+ return -EINVAL;
+ }
+
+ if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) {
+ NL_SET_ERR_MSG(extack, "Unknown entry state");
+ return -EINVAL;
+ }
+ if (entry->vid >= VLAN_VID_MASK) {
+ NL_SET_ERR_MSG(extack, "Invalid entry VLAN id");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct nla_policy mdba_policy[MDBA_SET_ENTRY_MAX + 1] = {
+ [MDBA_SET_ENTRY_UNSPEC] = { .strict_start_type = MDBA_SET_ENTRY_ATTRS + 1 },
+ [MDBA_SET_ENTRY] = NLA_POLICY_VALIDATE_FN(NLA_BINARY,
+ rtnl_validate_mdb_entry,
+ sizeof(struct br_mdb_entry)),
+ [MDBA_SET_ENTRY_ATTRS] = { .type = NLA_NESTED },
+};
+
+static int rtnl_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[MDBA_SET_ENTRY_MAX + 1];
+ struct net *net = sock_net(skb->sk);
+ struct br_port_msg *bpm;
+ struct net_device *dev;
+ int err;
+
+ err = nlmsg_parse_deprecated(nlh, sizeof(*bpm), tb,
+ MDBA_SET_ENTRY_MAX, mdba_policy, extack);
+ if (err)
+ return err;
+
+ bpm = nlmsg_data(nlh);
+ if (!bpm->ifindex) {
+ NL_SET_ERR_MSG(extack, "Invalid ifindex");
+ return -EINVAL;
+ }
+
+ dev = __dev_get_by_index(net, bpm->ifindex);
+ if (!dev) {
+ NL_SET_ERR_MSG(extack, "Device doesn't exist");
+ return -ENODEV;
+ }
+
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_SET_ENTRY)) {
+ NL_SET_ERR_MSG(extack, "Missing MDBA_SET_ENTRY attribute");
+ return -EINVAL;
+ }
+
+ if (!dev->netdev_ops->ndo_mdb_add) {
+ NL_SET_ERR_MSG(extack, "Device does not support MDB operations");
+ return -EOPNOTSUPP;
+ }
+
+ return dev->netdev_ops->ndo_mdb_add(dev, tb, nlh->nlmsg_flags, extack);
+}
+
+static int rtnl_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[MDBA_SET_ENTRY_MAX + 1];
+ struct net *net = sock_net(skb->sk);
+ struct br_port_msg *bpm;
+ struct net_device *dev;
+ int err;
+
+ err = nlmsg_parse_deprecated(nlh, sizeof(*bpm), tb,
+ MDBA_SET_ENTRY_MAX, mdba_policy, extack);
+ if (err)
+ return err;
+
+ bpm = nlmsg_data(nlh);
+ if (!bpm->ifindex) {
+ NL_SET_ERR_MSG(extack, "Invalid ifindex");
+ return -EINVAL;
+ }
+
+ dev = __dev_get_by_index(net, bpm->ifindex);
+ if (!dev) {
+ NL_SET_ERR_MSG(extack, "Device doesn't exist");
+ return -ENODEV;
+ }
+
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_SET_ENTRY)) {
+ NL_SET_ERR_MSG(extack, "Missing MDBA_SET_ENTRY attribute");
+ return -EINVAL;
+ }
+
+ if (!dev->netdev_ops->ndo_mdb_del) {
+ NL_SET_ERR_MSG(extack, "Device does not support MDB operations");
+ return -EOPNOTSUPP;
+ }
+
+ return dev->netdev_ops->ndo_mdb_del(dev, tb, extack);
+}
+
/* Process one rtnetlink message. */
static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
rtnl_register(PF_UNSPEC, RTM_GETSTATS, rtnl_stats_get, rtnl_stats_dump,
0);
rtnl_register(PF_UNSPEC, RTM_SETSTATS, rtnl_stats_set, NULL, 0);
+
+ rtnl_register(PF_BRIDGE, RTM_GETMDB, NULL, rtnl_mdb_dump, 0);
+ rtnl_register(PF_BRIDGE, RTM_NEWMDB, rtnl_mdb_add, NULL, 0);
+ rtnl_register(PF_BRIDGE, RTM_DELMDB, rtnl_mdb_del, NULL, 0);
}
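A hypothetical sketch of the driver-side hooks these new RTM_NEWMDB/RTM_DELMDB/RTM_GETMDB handlers dispatch to; the ops signatures mirror the calls above, while the device-specific helper and everything inside it are made up for illustration.

#include <linux/netdevice.h>
#include <linux/if_bridge.h>
#include <net/netlink.h>

/* Hypothetical programming helper - stands in for real device plumbing. */
static int example_program_mdb(struct net_device *dev,
			       const struct br_mdb_entry *entry,
			       struct netlink_ext_ack *extack)
{
	return 0;
}

static int example_mdb_add(struct net_device *dev, struct nlattr *tb[],
			   u16 nlmsg_flags, struct netlink_ext_ack *extack)
{
	/* rtnl_mdb_add() has already run mdba_policy, so the entry is sane. */
	const struct br_mdb_entry *entry = nla_data(tb[MDBA_SET_ENTRY]);

	return example_program_mdb(dev, entry, extack);
}

static const struct net_device_ops example_netdev_ops = {
	/* ... existing ops ... */
	.ndo_mdb_add	= example_mdb_add,
	/* .ndo_mdb_del and .ndo_mdb_dump would be wired up the same way */
};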
{
struct sk_buff *skb = __build_skb(data, frag_size);
- if (skb && frag_size) {
+ if (likely(skb && frag_size)) {
skb->head_frag = 1;
- if (page_is_pfmemalloc(virt_to_head_page(data)))
- skb->pfmemalloc = 1;
+ skb_propagate_pfmemalloc(virt_to_head_page(data), skb);
}
return skb;
}
if (frag_size) {
skb->head_frag = 1;
- if (page_is_pfmemalloc(virt_to_head_page(data)))
- skb->pfmemalloc = 1;
+ skb_propagate_pfmemalloc(virt_to_head_page(data), skb);
}
return skb;
}
bpf_map_put_with_uref(info->map);
}
+static u64 sock_map_mem_usage(const struct bpf_map *map)
+{
+ u64 usage = sizeof(struct bpf_stab);
+
+ usage += (u64)map->max_entries * sizeof(struct sock *);
+ return usage;
+}
+
static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
.seq_ops = &sock_map_seq_ops,
.init_seq_private = sock_map_init_seq_private,
.map_lookup_elem = sock_map_lookup,
.map_release_uref = sock_map_release_progs,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = sock_map_mem_usage,
.map_btf_id = &sock_map_btf_ids[0],
.iter_seq_info = &sock_map_iter_seq_info,
};
bpf_map_put_with_uref(info->map);
}
+static u64 sock_hash_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
+ u64 usage = sizeof(*htab);
+
+ usage += htab->buckets_num * sizeof(struct bpf_shtab_bucket);
+ usage += atomic_read(&htab->count) * (u64)htab->elem_size;
+ return usage;
+}
+
static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
.seq_ops = &sock_hash_seq_ops,
.init_seq_private = sock_hash_init_seq_private,
.map_lookup_elem_sys_only = sock_hash_lookup_sys,
.map_release_uref = sock_hash_release_progs,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = sock_hash_mem_usage,
.map_btf_id = &sock_hash_map_btf_ids[0],
.iter_seq_info = &sock_hash_iter_seq_info,
};
* for the case, if this connection will not able to recover.
*/
if (mtu < dst_mtu(dst) && ip_dont_fragment(sk, dst))
- sk->sk_err_soft = EMSGSIZE;
+ WRITE_ONCE(sk->sk_err_soft, EMSGSIZE);
mtu = dst_mtu(dst);
sk_error_report(sk);
dccp_done(sk);
- } else
- sk->sk_err_soft = err;
+ } else {
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
goto out;
}
if (!sock_owned_by_user(sk) && inet->recverr) {
sk->sk_err = err;
sk_error_report(sk);
- } else /* Only an error on timeout */
- sk->sk_err_soft = err;
+ } else { /* Only an error on timeout */
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
out:
bh_unlock_sock(sk);
sock_put(sk);
*/
sk_error_report(sk);
dccp_done(sk);
- } else
- sk->sk_err_soft = err;
+ } else {
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
goto out;
}
if (!sock_owned_by_user(sk) && np->recverr) {
sk->sk_err = err;
sk_error_report(sk);
- } else
- sk->sk_err_soft = err;
-
+ } else {
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
out:
bh_unlock_sock(sk);
sock_put(sk);
if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
goto discard_and_relse;
+ nf_reset_ct(skb);
return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4,
refcounted) ? -1 : 0;
static void dccp_write_err(struct sock *sk)
{
- sk->sk_err = sk->sk_err_soft ? : ETIMEDOUT;
+ sk->sk_err = READ_ONCE(sk->sk_err_soft) ? : ETIMEDOUT;
sk_error_report(sk);
dccp_send_reset(sk, DCCP_RESET_CODE_ABORTED);
}
}
-static int dsa_master_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+/* Deny PTP operations on master if there is at least one switch in the tree
+ * that is PTP capable.
+ */
+int dsa_master_pre_change_hwtstamp(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch *ds = cpu_dp->ds;
struct dsa_switch_tree *dst;
- int err = -EOPNOTSUPP;
struct dsa_port *dp;
dst = ds->dst;
- switch (cmd) {
- case SIOCGHWTSTAMP:
- case SIOCSHWTSTAMP:
- /* Deny PTP operations on master if there is at least one
- * switch in the tree that is PTP capable.
- */
- list_for_each_entry(dp, &dst->ports, list)
- if (dsa_port_supports_hwtstamp(dp, ifr))
- return -EBUSY;
- break;
+ list_for_each_entry(dp, &dst->ports, list) {
+ if (dsa_port_supports_hwtstamp(dp)) {
+ NL_SET_ERR_MSG(extack,
+ "HW timestamping not allowed on DSA master when switch supports the operation");
+ return -EBUSY;
+ }
}
- if (dev->netdev_ops->ndo_eth_ioctl)
- err = dev->netdev_ops->ndo_eth_ioctl(dev, ifr, cmd);
-
- return err;
+ return 0;
}
-static const struct dsa_netdevice_ops dsa_netdev_ops = {
- .ndo_eth_ioctl = dsa_master_ioctl,
-};
-
static int dsa_master_ethtool_setup(struct net_device *dev)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
cpu_dp->orig_ethtool_ops = NULL;
}
-static void dsa_netdev_ops_set(struct net_device *dev,
- const struct dsa_netdevice_ops *ops)
-{
- if (netif_is_lag_master(dev))
- return;
-
- dev->dsa_ptr->netdev_ops = ops;
-}
-
/* Keep the master always promiscuous if the tagging protocol requires that
* (garbles MAC DA) or if it doesn't support unicast filtering, case in which
* it would revert to promiscuous mode as soon as we call dev_uc_add() on it
if (ret)
goto out_err_reset_promisc;
- dsa_netdev_ops_set(dev, &dsa_netdev_ops);
-
ret = sysfs_create_group(&dev->dev.kobj, &dsa_group);
if (ret)
- goto out_err_ndo_teardown;
+ goto out_err_ethtool_teardown;
return ret;
-out_err_ndo_teardown:
- dsa_netdev_ops_set(dev, NULL);
+out_err_ethtool_teardown:
dsa_master_ethtool_teardown(dev);
out_err_reset_promisc:
dsa_master_set_promiscuity(dev, -1);
void dsa_master_teardown(struct net_device *dev)
{
sysfs_remove_group(&dev->dev.kobj, &dsa_group);
- dsa_netdev_ops_set(dev, NULL);
dsa_master_ethtool_teardown(dev);
dsa_master_reset_mtu(dev);
dsa_master_set_promiscuity(dev, -1);
struct netlink_ext_ack *extack);
void dsa_master_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp);
+int dsa_master_pre_change_hwtstamp(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack);
#endif
return !err;
}
-bool dsa_port_supports_hwtstamp(struct dsa_port *dp, struct ifreq *ifr)
+bool dsa_port_supports_hwtstamp(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
+ struct ifreq ifr = {};
int err;
if (!ds->ops->port_hwtstamp_get || !ds->ops->port_hwtstamp_set)
return false;
/* "See through" shim implementations of the "get" method.
- * This will clobber the ifreq structure, but we will either return an
- * error, or the master will overwrite it with proper values.
+ * Since we can't cook up a complete ioctl request structure, this will
+ * fail in copy_to_user() with -EFAULT, which hopefully is enough to
+ * detect a valid implementation.
*/
- err = ds->ops->port_hwtstamp_get(ds, dp->index, ifr);
+ err = ds->ops->port_hwtstamp_get(ds, dp->index, &ifr);
return err != -EOPNOTSUPP;
}
.db = db,
};
- if (!dp->ds->fdb_isolation)
- info.db.bridge.num = 0;
-
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_ADD, &info);
}
};
int err;
+ if (!dp->ds->fdb_isolation)
+ db.bridge.num = 0;
+
/* Avoid a call to __dev_set_promiscuity() on the master, which
* requires rtnl_lock(), since we can't guarantee that is held here,
* and we can't take it either.
.db = db,
};
- if (!dp->ds->fdb_isolation)
- info.db.bridge.num = 0;
-
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_DEL, &info);
}
};
int err;
+ if (!dp->ds->fdb_isolation)
+ db.bridge.num = 0;
+
if (master->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_del(master, addr);
if (err)
.db = db,
};
- if (!dp->ds->fdb_isolation)
- info.db.bridge.num = 0;
-
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_MDB_ADD, &info);
}
};
int err;
+ if (!dp->ds->fdb_isolation)
+ db.bridge.num = 0;
+
err = dev_mc_add(master, mdb->addr);
if (err)
return err;
.db = db,
};
- if (!dp->ds->fdb_isolation)
- info.db.bridge.num = 0;
-
return dsa_port_notify(dp, DSA_NOTIFIER_HOST_MDB_DEL, &info);
}
};
int err;
+ if (!dp->ds->fdb_isolation)
+ db.bridge.num = 0;
+
err = dev_mc_del(master, mdb->addr);
if (err)
return err;
struct switchdev_vlan_msti;
struct phy_device;
-bool dsa_port_supports_hwtstamp(struct dsa_port *dp, struct ifreq *ifr);
+bool dsa_port_supports_hwtstamp(struct dsa_port *dp);
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age);
static int dsa_slave_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
+ struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(ptr);
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
switch (event) {
return NOTIFY_OK;
}
+ case NETDEV_PRE_CHANGE_HWTSTAMP: {
+ struct netdev_notifier_hwtstamp_info *info = ptr;
+ int err;
+
+ if (!netdev_uses_dsa(dev))
+ return NOTIFY_DONE;
+
+ err = dsa_master_pre_change_hwtstamp(dev, info->config, extack);
+ return notifier_from_errno(err);
+ }
default:
break;
}
#include <linux/net.h>
#include <linux/pm_runtime.h>
#include <net/devlink.h>
+#include <net/ipv6.h>
#include <net/xdp_sock_drv.h>
#include <net/flow_offload.h>
#include <linux/ethtool_netlink.h>
ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
{
const struct ethtool_rx_flow_spec *fs = input->fs;
- static struct in6_addr zero_addr = {};
struct ethtool_rx_flow_match *match;
struct ethtool_rx_flow_rule *flow;
struct flow_action_entry *act;
v6_spec = &fs->h_u.tcp_ip6_spec;
v6_m_spec = &fs->m_u.tcp_ip6_spec;
- if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
+ if (!ipv6_addr_any((struct in6_addr *)v6_m_spec->ip6src)) {
memcpy(&match->key.ipv6.src, v6_spec->ip6src,
sizeof(match->key.ipv6.src));
memcpy(&match->mask.ipv6.src, v6_m_spec->ip6src,
sizeof(match->mask.ipv6.src));
}
- if (memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
+ if (!ipv6_addr_any((struct in6_addr *)v6_m_spec->ip6dst)) {
memcpy(&match->key.ipv6.dst, v6_spec->ip6dst,
sizeof(match->key.ipv6.dst));
memcpy(&match->mask.ipv6.dst, v6_m_spec->ip6dst,
sizeof(match->mask.ipv6.dst));
}
- if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
- memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
+ if (!ipv6_addr_any((struct in6_addr *)v6_m_spec->ip6src) ||
+ !ipv6_addr_any((struct in6_addr *)v6_m_spec->ip6dst)) {
match->dissector.used_keys |=
BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS);
match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
extern const struct nla_policy ethnl_privflags_get_policy[ETHTOOL_A_PRIVFLAGS_HEADER + 1];
extern const struct nla_policy ethnl_privflags_set_policy[ETHTOOL_A_PRIVFLAGS_FLAGS + 1];
extern const struct nla_policy ethnl_rings_get_policy[ETHTOOL_A_RINGS_HEADER + 1];
-extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_RX_PUSH + 1];
+extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX + 1];
extern const struct nla_policy ethnl_channels_get_policy[ETHTOOL_A_CHANNELS_HEADER + 1];
extern const struct nla_policy ethnl_channels_set_policy[ETHTOOL_A_CHANNELS_COMBINED_COUNT + 1];
extern const struct nla_policy ethnl_coalesce_get_policy[ETHTOOL_A_COALESCE_HEADER + 1];
struct ethnl_reply_data base;
struct ethtool_ringparam ringparam;
struct kernel_ethtool_ringparam kernel_ringparam;
+ u32 supported_ring_params;
};
#define RINGS_REPDATA(__reply_base) \
if (!dev->ethtool_ops->get_ringparam)
return -EOPNOTSUPP;
+
+ data->supported_ring_params = dev->ethtool_ops->supported_ring_params;
ret = ethnl_ops_begin(dev);
if (ret < 0)
return ret;
nla_total_size(sizeof(u8)) + /* _RINGS_TCP_DATA_SPLIT */
nla_total_size(sizeof(u32) + /* _RINGS_CQE_SIZE */
nla_total_size(sizeof(u8)) + /* _RINGS_TX_PUSH */
- nla_total_size(sizeof(u8))); /* _RINGS_RX_PUSH */
+ nla_total_size(sizeof(u8))) + /* _RINGS_RX_PUSH */
+ nla_total_size(sizeof(u32)) + /* _RINGS_TX_PUSH_BUF_LEN */
+ nla_total_size(sizeof(u32)); /* _RINGS_TX_PUSH_BUF_LEN_MAX */
}
static int rings_fill_reply(struct sk_buff *skb,
const struct rings_reply_data *data = RINGS_REPDATA(reply_base);
const struct kernel_ethtool_ringparam *kr = &data->kernel_ringparam;
const struct ethtool_ringparam *ringparam = &data->ringparam;
+ u32 supported_ring_params = data->supported_ring_params;
WARN_ON(kr->tcp_data_split > ETHTOOL_TCP_DATA_SPLIT_ENABLED);
(kr->cqe_size &&
(nla_put_u32(skb, ETHTOOL_A_RINGS_CQE_SIZE, kr->cqe_size))) ||
nla_put_u8(skb, ETHTOOL_A_RINGS_TX_PUSH, !!kr->tx_push) ||
- nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push))
+ nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push) ||
+ ((supported_ring_params & ETHTOOL_RING_USE_TX_PUSH_BUF_LEN) &&
+ (nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,
+ kr->tx_push_buf_max_len) ||
+ nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,
+ kr->tx_push_buf_len))))
return -EMSGSIZE;
return 0;
[ETHTOOL_A_RINGS_CQE_SIZE] = NLA_POLICY_MIN(NLA_U32, 1),
[ETHTOOL_A_RINGS_TX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1),
[ETHTOOL_A_RINGS_RX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1),
+ [ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] = { .type = NLA_U32 },
};
static int
return -EOPNOTSUPP;
}
+ if (tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] &&
+ !(ops->supported_ring_params & ETHTOOL_RING_USE_TX_PUSH_BUF_LEN)) {
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+ "setting tx push buf len is not supported");
+ return -EOPNOTSUPP;
+ }
+
return ops->get_ringparam && ops->set_ringparam ? 1 : -EOPNOTSUPP;
}
tb[ETHTOOL_A_RINGS_TX_PUSH], &mod);
ethnl_update_u8(&kernel_ringparam.rx_push,
tb[ETHTOOL_A_RINGS_RX_PUSH], &mod);
+ ethnl_update_u32(&kernel_ringparam.tx_push_buf_len,
+ tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN], &mod);
if (!mod)
return 0;
return -EINVAL;
}
+ if (kernel_ringparam.tx_push_buf_len > kernel_ringparam.tx_push_buf_max_len) {
+ NL_SET_ERR_MSG_ATTR_FMT(info->extack, tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+ "Requested TX push buffer exceeds the maximum of %u",
+ kernel_ringparam.tx_push_buf_max_len);
+
+ return -EINVAL;
+ }
+
ret = dev->ethtool_ops->set_ringparam(dev, &ringparam,
&kernel_ringparam, info->extack);
return ret < 0 ? ret : 1;
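Sketch of the driver side of the new attribute, inferred from the fields used above; the limits, the ops layout, and the get-only wiring are hypothetical, assuming kernel_ethtool_ringparam carries tx_push_buf_len and tx_push_buf_max_len as the netlink code expects.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Hypothetical driver advertising ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN support. */
static void example_get_ringparam(struct net_device *dev,
				  struct ethtool_ringparam *ring,
				  struct kernel_ethtool_ringparam *kernel_ring,
				  struct netlink_ext_ack *extack)
{
	kernel_ring->tx_push_buf_max_len = 512;	/* made-up hardware limit */
	kernel_ring->tx_push_buf_len = 128;	/* made-up current setting */
}

static const struct ethtool_ops example_ethtool_ops = {
	.supported_ring_params	= ETHTOOL_RING_USE_TX_PUSH_BUF_LEN,
	.get_ringparam		= example_get_ringparam,
	/* .set_ringparam would apply kernel_ring->tx_push_buf_len to the HW */
};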
sk->sk_state != TCP_SYN_SENT ||
(sk->sk_userlocks & SOCK_BINDADDR_LOCK) ||
(err = inet_sk_reselect_saddr(sk)) != 0)
- sk->sk_err_soft = -err;
+ WRITE_ONCE(sk->sk_err_soft, -err);
}
return err;
probes -= NEIGH_VAR(neigh->parms, UCAST_PROBES);
if (probes < 0) {
- if (!(neigh->nud_state & NUD_VALID))
+ if (!(READ_ONCE(neigh->nud_state) & NUD_VALID))
pr_debug("trying to ucast probe in NUD_INVALID\n");
neigh_ha_snapshot(dst_ha, neigh, dev);
dst_hw = dst_ha;
neigh = neigh_lookup(&arp_tbl, &ip, dev);
if (neigh) {
- if (!(neigh->nud_state & NUD_NOARP)) {
+ if (!(READ_ONCE(neigh->nud_state) & NUD_NOARP)) {
read_lock_bh(&neigh->lock);
memcpy(r->arp_ha.sa_data, neigh->ha, dev->addr_len);
r->arp_flags = arp_state_to_flags(neigh);
struct neigh_table *tbl = &arp_tbl;
if (neigh) {
- if ((neigh->nud_state & NUD_VALID) && !force) {
+ if ((READ_ONCE(neigh->nud_state) & NUD_VALID) && !force) {
neigh_release(neigh);
return 0;
}
- if (neigh->nud_state & ~NUD_NOARP)
+ if (READ_ONCE(neigh->nud_state) & ~NUD_NOARP)
err = neigh_update(neigh, NULL, NUD_FAILED,
NEIGH_UPDATE_F_OVERRIDE|
NEIGH_UPDATE_F_ADMIN, 0);
extack);
} else {
u32 new_metric = ifa->ifa_rt_priority;
+ u8 new_proto = ifa->ifa_proto;
inet_free_ifa(ifa);
ifa->ifa_rt_priority = new_metric;
}
+ ifa->ifa_proto = new_proto;
+
set_ifa_lifetime(ifa, valid_lft, prefered_lft);
cancel_delayed_work(&check_lifetime_work);
queue_delayed_work(system_power_efficient_wq,
n = NULL;
if (n) {
- state = n->nud_state;
+ state = READ_ONCE(n->nud_state);
neigh_release(n);
} else {
return 0;
if (nh->fib_nh_scope == RT_SCOPE_LINK) {
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
if (likely(nh->fib_nh_gw_family == AF_INET))
n = __ipv4_neigh_lookup_noref(nh->fib_nh_dev,
else
n = NULL;
if (n)
- state = n->nud_state;
+ state = READ_ONCE(n->nud_state);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
return !!(state & NUD_VALID);
/*
* check if a multicast source filter allows delivery for a given <src,dst,intf>
*/
-int ip_mc_sf_allow(struct sock *sk, __be32 loc_addr, __be32 rmt_addr,
+int ip_mc_sf_allow(const struct sock *sk, __be32 loc_addr, __be32 rmt_addr,
int dif, int sdif)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
struct ip_mc_socklist *pmc;
struct ip_sf_socklist *psl;
int i;
unsigned short port, int l3mdev, const struct sock *sk)
{
#if IS_ENABLED(CONFIG_IPV6)
- struct in6_addr addr_any = {};
-
if (sk->sk_family != tb->family) {
if (sk->sk_family == AF_INET)
return net_eq(ib2_net(tb), net) && tb->port == port &&
tb->l3mdev == l3mdev &&
- ipv6_addr_equal(&tb->v6_rcv_saddr, &addr_any);
+ ipv6_addr_any(&tb->v6_rcv_saddr);
return false;
}
if (sk->sk_family == AF_INET6)
return net_eq(ib2_net(tb), net) && tb->port == port &&
tb->l3mdev == l3mdev &&
- ipv6_addr_equal(&tb->v6_rcv_saddr, &addr_any);
+ ipv6_addr_any(&tb->v6_rcv_saddr);
else
#endif
return net_eq(ib2_net(tb), net) && tb->port == port &&
{
struct inet_hashinfo *hinfo = tcp_or_dccp_get_hashinfo(sk);
u32 hash;
-#if IS_ENABLED(CONFIG_IPV6)
- struct in6_addr addr_any = {};
+#if IS_ENABLED(CONFIG_IPV6)
if (sk->sk_family == AF_INET6)
- hash = ipv6_portaddr_hash(net, &addr_any, port);
+ hash = ipv6_portaddr_hash(net, &in6addr_any, port);
else
#endif
hash = ipv4_portaddr_hash(net, 0, port);
}
EXPORT_SYMBOL_GPL(ip_local_out);
-static inline int ip_select_ttl(struct inet_sock *inet, struct dst_entry *dst)
+static inline int ip_select_ttl(const struct inet_sock *inet,
+ const struct dst_entry *dst)
{
int ttl = inet->uc_ttl;
__be32 saddr, __be32 daddr, struct ip_options_rcu *opt,
u8 tos)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
struct rtable *rt = skb_rtable(skb);
struct net *net = sock_net(sk);
struct iphdr *iph;
return res;
}
- rcu_read_lock_bh();
+ rcu_read_lock();
neigh = ip_neigh_for_gw(rt, skb, &is_v6gw);
if (!IS_ERR(neigh)) {
int res;
sock_confirm_neigh(skb, neigh);
/* if crossing protocols, can not use the cached header */
res = neigh_output(neigh, skb, is_v6gw);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return res;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
net_dbg_ratelimited("%s: No header cache and no neighbour!\n",
__func__);
mtu = cork->gso_size ? IP_MAX_MTU : cork->fragsize;
paged = !!cork->gso_size;
- if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
+ if (cork->tx_flags & SKBTX_ANY_TSTAMP &&
sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
tskey = atomic_inc_return(&sk->sk_tskey) - 1;
#include <linux/vmalloc.h>
#include <linux/netdevice.h>
#include <linux/module.h>
-#include <linux/icmp.h>
#include <net/ip.h>
#include <net/compat.h>
#include <linux/uaccess.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Netfilter Core Team <coreteam@netfilter.org>");
MODULE_DESCRIPTION("IPv4 packet filter");
-MODULE_ALIAS("ipt_icmp");
void *ipt_alloc_initial_table(const struct xt_table *info)
{
__ipt_unregister_table(net, table);
}
-/* Returns 1 if the type and code is matched by the range, 0 otherwise */
-static inline bool
-icmp_type_code_match(u_int8_t test_type, u_int8_t min_code, u_int8_t max_code,
- u_int8_t type, u_int8_t code,
- bool invert)
-{
- return ((test_type == 0xFF) ||
- (type == test_type && code >= min_code && code <= max_code))
- ^ invert;
-}
-
-static bool
-icmp_match(const struct sk_buff *skb, struct xt_action_param *par)
-{
- const struct icmphdr *ic;
- struct icmphdr _icmph;
- const struct ipt_icmp *icmpinfo = par->matchinfo;
-
- /* Must not be a fragment. */
- if (par->fragoff != 0)
- return false;
-
- ic = skb_header_pointer(skb, par->thoff, sizeof(_icmph), &_icmph);
- if (ic == NULL) {
- /* We've been asked to examine this packet, and we
- * can't. Hence, no choice but to drop.
- */
- par->hotdrop = true;
- return false;
- }
-
- return icmp_type_code_match(icmpinfo->type,
- icmpinfo->code[0],
- icmpinfo->code[1],
- ic->type, ic->code,
- !!(icmpinfo->invflags&IPT_ICMP_INV));
-}
-
-static int icmp_checkentry(const struct xt_mtchk_param *par)
-{
- const struct ipt_icmp *icmpinfo = par->matchinfo;
-
- /* Must specify no unknown invflags */
- return (icmpinfo->invflags & ~IPT_ICMP_INV) ? -EINVAL : 0;
-}
-
static struct xt_target ipt_builtin_tg[] __read_mostly = {
{
.name = XT_STANDARD_TARGET,
.owner = THIS_MODULE,
};
-static struct xt_match ipt_builtin_mt[] __read_mostly = {
- {
- .name = "icmp",
- .match = icmp_match,
- .matchsize = sizeof(struct ipt_icmp),
- .checkentry = icmp_checkentry,
- .proto = IPPROTO_ICMP,
- .family = NFPROTO_IPV4,
- .me = THIS_MODULE,
- },
-};
-
static int __net_init ip_tables_net_init(struct net *net)
{
return xt_proto_init(net, NFPROTO_IPV4);
ret = xt_register_targets(ipt_builtin_tg, ARRAY_SIZE(ipt_builtin_tg));
if (ret < 0)
goto err2;
- ret = xt_register_matches(ipt_builtin_mt, ARRAY_SIZE(ipt_builtin_mt));
- if (ret < 0)
- goto err4;
/* Register setsockopt */
ret = nf_register_sockopt(&ipt_sockopts);
if (ret < 0)
- goto err5;
+ goto err4;
return 0;
-err5:
- xt_unregister_matches(ipt_builtin_mt, ARRAY_SIZE(ipt_builtin_mt));
err4:
xt_unregister_targets(ipt_builtin_tg, ARRAY_SIZE(ipt_builtin_tg));
err2:
{
nf_unregister_sockopt(&ipt_sockopts);
- xt_unregister_matches(ipt_builtin_mt, ARRAY_SIZE(ipt_builtin_mt));
xt_unregister_targets(ipt_builtin_tg, ARRAY_SIZE(ipt_builtin_tg));
unregister_pernet_subsys(&ip_tables_net_ops);
}
int state = NUD_REACHABLE;
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv6_neigh_lookup_noref_stub(nh->fib_nh_dev, &nh->fib_nh_gw6);
if (n)
- state = n->nud_state;
+ state = READ_ONCE(n->nud_state);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return !!(state & NUD_VALID);
}
int state = NUD_REACHABLE;
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv4_neigh_lookup_noref(nh->fib_nh_dev,
(__force u32)nh->fib_nh_gw4);
if (n)
- state = n->nud_state;
+ state = READ_ONCE(n->nud_state);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return !!(state & NUD_VALID);
}
}
EXPORT_SYMBOL_GPL(raw_unhash_sk);
-bool raw_v4_match(struct net *net, struct sock *sk, unsigned short num,
+bool raw_v4_match(struct net *net, const struct sock *sk, unsigned short num,
__be32 raddr, __be32 laddr, int dif, int sdif)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
if (net_eq(sock_net(sk), net) && inet->inet_num == num &&
!(inet->inet_daddr && inet->inet_daddr != raddr) &&
* use helper to figure it out.
*/
-static bool raw_lookup(struct net *net, struct sock *sk,
+static bool raw_lookup(struct net *net, const struct sock *sk,
const struct inet_diag_req_v2 *req)
{
struct inet_diag_req_raw *r = (void *)req;
struct net_device *dev = dst->dev;
struct neighbour *n;
- rcu_read_lock_bh();
+ rcu_read_lock();
if (likely(rt->rt_gw_family == AF_INET)) {
n = ip_neigh_gw4(dev, rt->rt_gw4);
if (!IS_ERR(n) && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return n;
}
if (!n)
n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev);
if (!IS_ERR(n)) {
- if (!(n->nud_state & NUD_VALID)) {
+ if (!(READ_ONCE(n->nud_state) & NUD_VALID)) {
neigh_event_send(n, NULL);
} else {
if (fib_lookup(net, fl4, &res, 0) == 0) {
{
struct uncached_list *ul = raw_cpu_ptr(&rt_uncached_list);
- rt->rt_uncached_list = ul;
+ rt->dst.rt_uncached_list = ul;
spin_lock_bh(&ul->lock);
- list_add_tail(&rt->rt_uncached, &ul->head);
+ list_add_tail(&rt->dst.rt_uncached, &ul->head);
spin_unlock_bh(&ul->lock);
}
void rt_del_uncached_list(struct rtable *rt)
{
- if (!list_empty(&rt->rt_uncached)) {
- struct uncached_list *ul = rt->rt_uncached_list;
+ if (!list_empty(&rt->dst.rt_uncached)) {
+ struct uncached_list *ul = rt->dst.rt_uncached_list;
spin_lock_bh(&ul->lock);
- list_del_init(&rt->rt_uncached);
+ list_del_init(&rt->dst.rt_uncached);
spin_unlock_bh(&ul->lock);
}
}
continue;
spin_lock_bh(&ul->lock);
- list_for_each_entry_safe(rt, safe, &ul->head, rt_uncached) {
+ list_for_each_entry_safe(rt, safe, &ul->head, dst.rt_uncached) {
if (rt->dst.dev != dev)
continue;
rt->dst.dev = blackhole_netdev;
netdev_ref_replace(dev, blackhole_netdev,
&rt->dst.dev_tracker, GFP_ATOMIC);
- list_move(&rt->rt_uncached, &ul->quarantine);
+ list_move(&rt->dst.rt_uncached, &ul->quarantine);
}
spin_unlock_bh(&ul->lock);
}
rt->rt_uses_gateway = 0;
rt->rt_gw_family = 0;
rt->rt_gw4 = 0;
- INIT_LIST_HEAD(&rt->rt_uncached);
+ INIT_LIST_HEAD(&rt->dst.rt_uncached);
rt->dst.output = ip_output;
if (flags & RTCF_LOCAL)
new_rt->rt_gw4 = rt->rt_gw4;
else if (rt->rt_gw_family == AF_INET6)
new_rt->rt_gw6 = rt->rt_gw6;
- INIT_LIST_HEAD(&new_rt->rt_uncached);
+ INIT_LIST_HEAD(&new_rt->dst.rt_uncached);
new_rt->dst.input = rt->dst.input;
new_rt->dst.output = rt->dst.output;
else if (rt->rt_gw_family == AF_INET6)
rt->rt_gw6 = ort->rt_gw6;
- INIT_LIST_HEAD(&rt->rt_uncached);
+ INIT_LIST_HEAD(&rt->dst.rt_uncached);
}
dst_release(dst_orig);
}
/* This barrier is coupled with smp_wmb() in tcp_reset() */
smp_rmb();
- if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ if (READ_ONCE(sk->sk_err) ||
+ !skb_queue_empty_lockless(&sk->sk_error_queue))
mask |= EPOLLERR;
return mask;
if (old_state == TCP_LISTEN) {
inet_csk_listen_stop(sk);
} else if (unlikely(tp->repair)) {
- sk->sk_err = ECONNABORTED;
+ WRITE_ONCE(sk->sk_err, ECONNABORTED);
} else if (tcp_need_reset(old_state) ||
(tp->snd_nxt != tp->write_seq &&
(1 << old_state) & (TCPF_CLOSING | TCPF_LAST_ACK))) {
* states
*/
tcp_send_active_reset(sk, gfp_any());
- sk->sk_err = ECONNRESET;
+ WRITE_ONCE(sk->sk_err, ECONNRESET);
} else if (old_state == TCP_SYN_SENT)
- sk->sk_err = ECONNRESET;
+ WRITE_ONCE(sk->sk_err, ECONNRESET);
tcp_clear_xmit_timers(sk);
__skb_queue_purge(&sk->sk_receive_queue);
const __u8 *hash_location = NULL;
struct tcp_md5sig_key *hash_expected;
const struct tcphdr *th = tcp_hdr(skb);
- struct tcp_sock *tp = tcp_sk(sk);
+ const struct tcp_sock *tp = tcp_sk(sk);
int genhash, l3index;
u8 newhash[16];
bh_lock_sock(sk);
if (!sock_flag(sk, SOCK_DEAD)) {
- sk->sk_err = err;
+ WRITE_ONCE(sk->sk_err, err);
/* This barrier is coupled with smp_rmb() in tcp_poll() */
smp_wmb();
sk_error_report(sk);
static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb,
unsigned int skbtruesize)
{
- struct tcp_sock *tp = tcp_sk(sk);
+ const struct tcp_sock *tp = tcp_sk(sk);
/* Optimize this! */
int truesize = tcp_win_from_space(sk, skbtruesize) >> 1;
int window = tcp_win_from_space(sk, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2])) >> 1;
/* We passed data and got it acked, remove any soft error
* log. Something worked...
*/
- sk->sk_err_soft = 0;
+ WRITE_ONCE(sk->sk_err_soft, 0);
icsk->icsk_probes_out = 0;
tp->rcv_tstamp = tcp_jiffies32;
if (!prior_packets)
/* We want the right error as BSD sees it (and indeed as we do). */
switch (sk->sk_state) {
case TCP_SYN_SENT:
- sk->sk_err = ECONNREFUSED;
+ WRITE_ONCE(sk->sk_err, ECONNREFUSED);
break;
case TCP_CLOSE_WAIT:
- sk->sk_err = EPIPE;
+ WRITE_ONCE(sk->sk_err, EPIPE);
break;
case TCP_CLOSE:
return;
default:
- sk->sk_err = ECONNRESET;
+ WRITE_ONCE(sk->sk_err, ECONNRESET);
}
/* This barrier is coupled with smp_rmb() in tcp_poll() */
smp_wmb();
*/
static bool tcp_reset_check(const struct sock *sk, const struct sk_buff *skb)
{
- struct tcp_sock *tp = tcp_sk(sk);
+ const struct tcp_sock *tp = tcp_sk(sk);
return unlikely(TCP_SKB_CB(skb)->seq == (tp->rcv_nxt - 1) &&
(1 << sk->sk_state) & (TCPF_CLOSE_WAIT | TCPF_LAST_ACK |
tp->rx_opt.saw_tstamp &&
tcp_paws_discard(sk, skb)) {
if (!th->rst) {
+ if (unlikely(th->syn))
+ goto syn_challenge;
NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
if (!tcp_oow_rate_limited(sock_net(sk), skb,
LINUX_MIB_TCPACKSKIPPEDPAWS,
 * for the case, if this connection will not be able to recover.
*/
if (mtu < dst_mtu(dst) && ip_dont_fragment(sk, dst))
- sk->sk_err_soft = EMSGSIZE;
+ WRITE_ONCE(sk->sk_err_soft, EMSGSIZE);
mtu = dst_mtu(dst);
ip_icmp_error(sk, skb, err, th->dest, info, (u8 *)th);
if (!sock_owned_by_user(sk)) {
- sk->sk_err = err;
+ WRITE_ONCE(sk->sk_err, err);
sk_error_report(sk);
tcp_done(sk);
} else {
- sk->sk_err_soft = err;
+ WRITE_ONCE(sk->sk_err_soft, err);
}
goto out;
}
inet = inet_sk(sk);
if (!sock_owned_by_user(sk) && inet->recverr) {
- sk->sk_err = err;
+ WRITE_ONCE(sk->sk_err, err);
sk_error_report(sk);
} else { /* Only an error on timeout */
- sk->sk_err_soft = err;
+ WRITE_ONCE(sk->sk_err_soft, err);
}
out:
}
EXPORT_SYMBOL_GPL(tcp_ca_openreq_child);
-static void smc_check_reset_syn_req(struct tcp_sock *oldtp,
+static void smc_check_reset_syn_req(const struct tcp_sock *oldtp,
struct request_sock *req,
struct tcp_sock *newtp)
{
const struct inet_request_sock *ireq = inet_rsk(req);
struct tcp_request_sock *treq = tcp_rsk(req);
struct inet_connection_sock *newicsk;
- struct tcp_sock *oldtp, *newtp;
+ const struct tcp_sock *oldtp;
+ struct tcp_sock *newtp;
u32 seq;
if (!newsk)
tp->rx_opt.rcv_wscale = rcv_wscale;
tp->rcv_ssthresh = tp->rcv_wnd;
- sk->sk_err = 0;
+ WRITE_ONCE(sk->sk_err, 0);
sock_reset_flag(sk, SOCK_DONE);
tp->snd_wnd = 0;
tcp_init_wl(tp, 0);
if (!res) {
TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
- if (unlikely(tcp_passive_fastopen(sk)))
- tcp_sk(sk)->total_retrans++;
+ if (unlikely(tcp_passive_fastopen(sk))) {
+ /* sk has const attribute because listeners are lockless.
+ * However, in this case we are dealing with a passive fastopen
+ * socket, thus we can change total_retrans value.
+ */
+ tcp_sk_rw(sk)->total_retrans++;
+ }
trace_tcp_retransmit_synack(sk, req);
}
return res;
static u32 tcp_rack_reo_wnd(const struct sock *sk)
{
- struct tcp_sock *tp = tcp_sk(sk);
+ const struct tcp_sock *tp = tcp_sk(sk);
if (!tp->reord_seen) {
/* If reordering has not been observed, be aggressive during
static void tcp_write_err(struct sock *sk)
{
- sk->sk_err = sk->sk_err_soft ? : ETIMEDOUT;
+ WRITE_ONCE(sk->sk_err, READ_ONCE(sk->sk_err_soft) ? : ETIMEDOUT);
sk_error_report(sk);
tcp_write_queue_purge(sk);
shift++;
/* If some dubious ICMP arrived, penalize even more. */
- if (sk->sk_err_soft)
+ if (READ_ONCE(sk->sk_err_soft))
shift++;
if (tcp_check_oom(sk, shift)) {
int retries = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_orphan_retries); /* May be zero. */
/* We know from an ICMP that something is wrong. */
- if (sk->sk_err_soft && !alive)
+ if (READ_ONCE(sk->sk_err_soft) && !alive)
retries = 0;
/* However, if socket sent something recently, select some safe
EXPORT_SYMBOL_GPL(udp4_lib_lookup);
#endif
-static inline bool __udp_is_mcast_sock(struct net *net, struct sock *sk,
+static inline bool __udp_is_mcast_sock(struct net *net, const struct sock *sk,
__be16 loc_port, __be32 loc_addr,
__be16 rmt_port, __be32 rmt_addr,
int dif, int sdif, unsigned short hnum)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
if (!net_eq(sock_net(sk), net) ||
udp_sk(sk)->udp_port_hash != hnum ||
spin_unlock(busy);
}
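+/* Ensure sk_forward_alloc covers @size bytes of receive memory; returns 0 on
+ * success or -ENOBUFS if the additional memory cannot be scheduled.
+ */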
+static int udp_rmem_schedule(struct sock *sk, int size)
+{
+ int delta;
+
+ delta = size - sk->sk_forward_alloc;
+ if (delta > 0 && !__sk_mem_schedule(sk, delta, SK_MEM_RECV))
+ return -ENOBUFS;
+
+ return 0;
+}
+
int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
{
struct sk_buff_head *list = &sk->sk_receive_queue;
- int rmem, delta, amt, err = -ENOMEM;
+ int rmem, err = -ENOMEM;
spinlock_t *busy = NULL;
int size;
goto uncharge_drop;
spin_lock(&list->lock);
- if (size >= sk->sk_forward_alloc) {
- amt = sk_mem_pages(size);
- delta = amt << PAGE_SHIFT;
- if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
- err = -ENOBUFS;
- spin_unlock(&list->lock);
- goto uncharge_drop;
- }
-
- sk->sk_forward_alloc += delta;
+ err = udp_rmem_schedule(sk, size);
+ if (err) {
+ spin_unlock(&list->lock);
+ goto uncharge_drop;
}
sk->sk_forward_alloc -= size;
xdst->u.rt.rt_gw6 = rt->rt_gw6;
xdst->u.rt.rt_pmtu = rt->rt_pmtu;
xdst->u.rt.rt_mtu_locked = rt->rt_mtu_locked;
- INIT_LIST_HEAD(&xdst->u.rt.rt_uncached);
+ INIT_LIST_HEAD(&xdst->u.rt.dst.rt_uncached);
rt_add_uncached_list(&xdst->u.rt);
return 0;
struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
dst_destroy_metrics_generic(dst);
- if (xdst->u.rt.rt_uncached_list)
+ if (xdst->u.rt.dst.rt_uncached_list)
rt_del_uncached_list(&xdst->u.rt);
xfrm_dst_destroy(xdst);
}
unsigned int hash = inet6_addr_hash(net, &ifa->addr);
int err = 0;
- spin_lock(&net->ipv6.addrconf_hash_lock);
+ spin_lock_bh(&net->ipv6.addrconf_hash_lock);
/* Ignore adding duplicate addresses on an interface */
if (ipv6_chk_same_addr(net, &ifa->addr, dev, hash)) {
hlist_add_head_rcu(&ifa->addr_lst, &net->ipv6.inet6_addr_lst[hash]);
}
- spin_unlock(&net->ipv6.addrconf_hash_lock);
+ spin_unlock_bh(&net->ipv6.addrconf_hash_lock);
return err;
}
/* For caller */
refcount_set(&ifa->refcnt, 1);
- rcu_read_lock_bh();
+ rcu_read_lock();
err = ipv6_add_addr_hash(idev->dev, ifa);
if (err < 0) {
- rcu_read_unlock_bh();
+ rcu_read_unlock();
goto out;
}
- write_lock(&idev->lock);
+ write_lock_bh(&idev->lock);
/* Add to inet6_dev unicast addr list. */
ipv6_link_dev_addr(idev, ifa);
}
in6_ifa_hold(ifa);
- write_unlock(&idev->lock);
+ write_unlock_bh(&idev->lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
inet6addr_notifier_call_chain(NETDEV_UP, ifa);
out:
ipv6_accept_ra(ifp->idev) &&
ifp->idev->cnf.rtr_solicits != 0 &&
(dev->flags & IFF_LOOPBACK) == 0 &&
- (dev->type != ARPHRD_TUNNEL);
+ (dev->type != ARPHRD_TUNNEL) &&
+ !netif_is_team_port(dev);
read_unlock_bh(&ifp->idev->lock);
/* While dad is in progress mld report's source address is in6_addrany.
dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
if (IS_ERR(dst)) {
sk->sk_route_caps = 0;
- sk->sk_err_soft = -PTR_ERR(dst);
+ WRITE_ONCE(sk->sk_err_soft, -PTR_ERR(dst));
return PTR_ERR(dst);
}
dst = inet6_csk_route_socket(sk, &fl6);
if (IS_ERR(dst)) {
- sk->sk_err_soft = -PTR_ERR(dst);
+ WRITE_ONCE(sk->sk_err_soft, -PTR_ERR(dst));
sk->sk_route_caps = 0;
kfree_skb(skb);
return PTR_ERR(dst);
EXPORT_SYMBOL(ipv6_flowlabel_exclusive);
#define for_each_fl_rcu(hash, fl) \
- for (fl = rcu_dereference_bh(fl_ht[(hash)]); \
+ for (fl = rcu_dereference(fl_ht[(hash)]); \
fl != NULL; \
- fl = rcu_dereference_bh(fl->next))
+ fl = rcu_dereference(fl->next))
#define for_each_fl_continue_rcu(fl) \
- for (fl = rcu_dereference_bh(fl->next); \
+ for (fl = rcu_dereference(fl->next); \
fl != NULL; \
- fl = rcu_dereference_bh(fl->next))
+ fl = rcu_dereference(fl->next))
#define for_each_sk_fl_rcu(np, sfl) \
- for (sfl = rcu_dereference_bh(np->ipv6_fl_list); \
+ for (sfl = rcu_dereference(np->ipv6_fl_list); \
sfl != NULL; \
- sfl = rcu_dereference_bh(sfl->next))
+ sfl = rcu_dereference(sfl->next))
static inline struct ip6_flowlabel *__fl_lookup(struct net *net, __be32 label)
{
{
struct ip6_flowlabel *fl;
- rcu_read_lock_bh();
+ rcu_read_lock();
fl = __fl_lookup(net, label);
if (fl && !atomic_inc_not_zero(&fl->users))
fl = NULL;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return fl;
}
fl->label = label & IPV6_FLOWLABEL_MASK;
+ rcu_read_lock();
spin_lock_bh(&ip6_fl_lock);
if (label == 0) {
for (;;) {
if (lfl) {
atomic_inc(&lfl->users);
spin_unlock_bh(&ip6_fl_lock);
+ rcu_read_unlock();
return lfl;
}
}
rcu_assign_pointer(fl_ht[FL_HASH(fl->label)], fl);
atomic_inc(&fl_size);
spin_unlock_bh(&ip6_fl_lock);
+ rcu_read_unlock();
return NULL;
}
label &= IPV6_FLOWLABEL_MASK;
- rcu_read_lock_bh();
+ rcu_read_lock();
for_each_sk_fl_rcu(np, sfl) {
struct ip6_flowlabel *fl = sfl->fl;
if (fl->label == label && atomic_inc_not_zero(&fl->users)) {
fl->lastuse = jiffies;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return fl;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return NULL;
}
EXPORT_SYMBOL_GPL(__fl6_sock_lookup);
if (room > FL_MAX_SIZE - FL_MAX_PER_SOCK)
return 0;
- rcu_read_lock_bh();
+ rcu_read_lock();
for_each_sk_fl_rcu(np, sfl)
count++;
- rcu_read_unlock_bh();
+ rcu_read_unlock();
if (room <= 0 ||
((count >= FL_MAX_PER_SOCK ||
return 0;
}
- rcu_read_lock_bh();
+ rcu_read_lock();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == (np->flow_label & IPV6_FLOWLABEL_MASK)) {
freq->flr_linger = sfl->fl->linger / HZ;
spin_unlock_bh(&ip6_fl_lock);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return 0;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return -ENOENT;
}
struct ipv6_fl_socklist *sfl;
int err;
- rcu_read_lock_bh();
+ rcu_read_lock();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == freq->flr_label) {
err = fl6_renew(sfl->fl, freq->flr_linger,
freq->flr_expires);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return err;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
if (freq->flr_share == IPV6_FL_S_NONE &&
ns_capable(net->user_ns, CAP_NET_ADMIN)) {
if (freq->flr_label) {
err = -EEXIST;
- rcu_read_lock_bh();
+ rcu_read_lock();
for_each_sk_fl_rcu(np, sfl) {
if (sfl->fl->label == freq->flr_label) {
if (freq->flr_flags & IPV6_FL_F_EXCL) {
- rcu_read_unlock_bh();
+ rcu_read_unlock();
goto done;
}
fl1 = sfl->fl;
break;
}
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
if (!fl1)
fl1 = fl_lookup(net, freq->flr_label);
state->pid_ns = proc_pid_ns(file_inode(seq->file)->i_sb);
- rcu_read_lock_bh();
+ rcu_read_lock();
return *pos ? ip6fl_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
}
static void ip6fl_seq_stop(struct seq_file *seq, void *v)
__releases(RCU)
{
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
static int ip6fl_seq_show(struct seq_file *seq, void *v)
/* Only do this once for first final protocol */
have_final = true;
- /* Free reference early: we don't need it any more,
- and it may hold ip_conntrack module loaded
- indefinitely. */
- nf_reset_ct(skb);
skb_postpull_rcsum(skb, skb_network_header(skb),
skb_network_header_len(skb));
goto discard;
}
}
- if (!(ipprot->flags & INET6_PROTO_NOPOLICY) &&
- !xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
- SKB_DR_SET(reason, XFRM_POLICY);
- goto discard;
+ if (!(ipprot->flags & INET6_PROTO_NOPOLICY)) {
+ if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ SKB_DR_SET(reason, XFRM_POLICY);
+ goto discard;
+ }
+ nf_reset_ct(skb);
}
ret = INDIRECT_CALL_2(ipprot->handler, tcp_v6_rcv, udpv6_rcv,
return res;
}
- rcu_read_lock_bh();
+ rcu_read_lock();
nexthop = rt6_nexthop((struct rt6_info *)dst, daddr);
neigh = __ipv6_neigh_lookup_noref(dev, nexthop);
if (unlikely(!neigh))
neigh = __neigh_create(&nd_tbl, nexthop, dev, false);
if (IS_ERR(neigh)) {
- rcu_read_unlock_bh();
+ rcu_read_unlock();
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTNOROUTES);
kfree_skb_reason(skb, SKB_DROP_REASON_NEIGH_CREATEFAIL);
return -EINVAL;
}
sock_confirm_neigh(skb, neigh);
ret = neigh_output(neigh, skb, false);
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return ret;
}
* dst entry of the nexthop router
*/
rt = (struct rt6_info *) *dst;
- rcu_read_lock_bh();
+ rcu_read_lock();
n = __ipv6_neigh_lookup_noref(rt->dst.dev,
rt6_nexthop(rt, &fl6->daddr));
- err = n && !(n->nud_state & NUD_VALID) ? -EINVAL : 0;
- rcu_read_unlock_bh();
+ err = n && !(READ_ONCE(n->nud_state) & NUD_VALID) ? -EINVAL : 0;
+ rcu_read_unlock();
if (err) {
struct inet6_ifaddr *ifp;
mtu = cork->gso_size ? IP6_MAX_MTU : cork->fragsize;
orig_mtu = mtu;
- if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
+ if (cork->tx_flags & SKBTX_ANY_TSTAMP &&
sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
tskey = atomic_inc_return(&sk->sk_tskey) - 1;
return 0;
}
-bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr,
+bool inet6_mc_check(const struct sock *sk, const struct in6_addr *mc_addr,
const struct in6_addr *src_addr)
{
- struct ipv6_pinfo *np = inet6_sk(sk);
- struct ipv6_mc_socklist *mc;
- struct ip6_sf_socklist *psl;
+ const struct ipv6_pinfo *np = inet6_sk(sk);
+ const struct ipv6_mc_socklist *mc;
+ const struct ip6_sf_socklist *psl;
bool rv = true;
rcu_read_lock();
saddr = &ipv6_hdr(skb)->saddr;
probes -= NEIGH_VAR(neigh->parms, UCAST_PROBES);
if (probes < 0) {
- if (!(neigh->nud_state & NUD_VALID)) {
+ if (!(READ_ONCE(neigh->nud_state) & NUD_VALID)) {
ND_PRINTK(1, dbg,
"%s: trying to ucast probe in NUD_INVALID: %pI6\n",
__func__, target);
u8 old_flags = neigh->flags;
struct net *net = dev_net(dev);
- if (neigh->nud_state & NUD_FAILED)
+ if (READ_ONCE(neigh->nud_state) & NUD_FAILED)
goto out;
/*
#include <linux/netdevice.h>
#include <linux/module.h>
#include <linux/poison.h>
-#include <linux/icmpv6.h>
#include <net/ipv6.h>
#include <net/compat.h>
#include <linux/uaccess.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Netfilter Core Team <coreteam@netfilter.org>");
MODULE_DESCRIPTION("IPv6 packet filter");
-MODULE_ALIAS("ip6t_icmp6");
void *ip6t_alloc_initial_table(const struct xt_table *info)
{
__ip6t_unregister_table(net, table);
}
-/* Returns 1 if the type and code is matched by the range, 0 otherwise */
-static inline bool
-icmp6_type_code_match(u_int8_t test_type, u_int8_t min_code, u_int8_t max_code,
- u_int8_t type, u_int8_t code,
- bool invert)
-{
- return (type == test_type && code >= min_code && code <= max_code)
- ^ invert;
-}
-
-static bool
-icmp6_match(const struct sk_buff *skb, struct xt_action_param *par)
-{
- const struct icmp6hdr *ic;
- struct icmp6hdr _icmph;
- const struct ip6t_icmp *icmpinfo = par->matchinfo;
-
- /* Must not be a fragment. */
- if (par->fragoff != 0)
- return false;
-
- ic = skb_header_pointer(skb, par->thoff, sizeof(_icmph), &_icmph);
- if (ic == NULL) {
- /* We've been asked to examine this packet, and we
- * can't. Hence, no choice but to drop.
- */
- par->hotdrop = true;
- return false;
- }
-
- return icmp6_type_code_match(icmpinfo->type,
- icmpinfo->code[0],
- icmpinfo->code[1],
- ic->icmp6_type, ic->icmp6_code,
- !!(icmpinfo->invflags&IP6T_ICMP_INV));
-}
-
-/* Called when user tries to insert an entry of this type. */
-static int icmp6_checkentry(const struct xt_mtchk_param *par)
-{
- const struct ip6t_icmp *icmpinfo = par->matchinfo;
-
- /* Must specify no unknown invflags */
- return (icmpinfo->invflags & ~IP6T_ICMP_INV) ? -EINVAL : 0;
-}
-
/* The built-in targets: standard (NULL) and error. */
static struct xt_target ip6t_builtin_tg[] __read_mostly = {
{
.owner = THIS_MODULE,
};
-static struct xt_match ip6t_builtin_mt[] __read_mostly = {
- {
- .name = "icmp6",
- .match = icmp6_match,
- .matchsize = sizeof(struct ip6t_icmp),
- .checkentry = icmp6_checkentry,
- .proto = IPPROTO_ICMPV6,
- .family = NFPROTO_IPV6,
- .me = THIS_MODULE,
- },
-};
-
static int __net_init ip6_tables_net_init(struct net *net)
{
return xt_proto_init(net, NFPROTO_IPV6);
ret = xt_register_targets(ip6t_builtin_tg, ARRAY_SIZE(ip6t_builtin_tg));
if (ret < 0)
goto err2;
- ret = xt_register_matches(ip6t_builtin_mt, ARRAY_SIZE(ip6t_builtin_mt));
- if (ret < 0)
- goto err4;
/* Register setsockopt */
ret = nf_register_sockopt(&ip6t_sockopts);
if (ret < 0)
- goto err5;
+ goto err4;
return 0;
-err5:
- xt_unregister_matches(ip6t_builtin_mt, ARRAY_SIZE(ip6t_builtin_mt));
err4:
xt_unregister_targets(ip6t_builtin_tg, ARRAY_SIZE(ip6t_builtin_tg));
err2:
{
nf_unregister_sockopt(&ip6t_sockopts);
- xt_unregister_matches(ip6t_builtin_mt, ARRAY_SIZE(ip6t_builtin_mt));
xt_unregister_targets(ip6t_builtin_tg, ARRAY_SIZE(ip6t_builtin_tg));
unregister_pernet_subsys(&ip6_tables_net_ops);
}
seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
} else {
int bucket = ((struct ping_iter_state *) seq->private)->bucket;
- struct inet_sock *inet = inet_sk(v);
+ struct inet_sock *inet = inet_sk((struct sock *)v);
__u16 srcp = ntohs(inet->inet_sport);
__u16 destp = ntohs(inet->inet_dport);
ip6_dgram_sock_seq_show(seq, v, srcp, destp, bucket);
struct raw_hashinfo raw_v6_hashinfo;
EXPORT_SYMBOL_GPL(raw_v6_hashinfo);
-bool raw_v6_match(struct net *net, struct sock *sk, unsigned short num,
+bool raw_v6_match(struct net *net, const struct sock *sk, unsigned short num,
const struct in6_addr *loc_addr,
const struct in6_addr *rmt_addr, int dif, int sdif)
{
struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
/* Not releasing hash table! */
- if (clone) {
- nf_reset_ct(clone);
+ if (clone)
rawv6_rcv(sk, clone);
- }
}
}
rcu_read_unlock();
kfree_skb_reason(skb, SKB_DROP_REASON_XFRM_POLICY);
return NET_RX_DROP;
}
+ nf_reset_ct(skb);
if (!rp->checksum)
skb->ip_summed = CHECKSUM_UNNECESSARY;
{
struct uncached_list *ul = raw_cpu_ptr(&rt6_uncached_list);
- rt->rt6i_uncached_list = ul;
+ rt->dst.rt_uncached_list = ul;
spin_lock_bh(&ul->lock);
- list_add_tail(&rt->rt6i_uncached, &ul->head);
+ list_add_tail(&rt->dst.rt_uncached, &ul->head);
spin_unlock_bh(&ul->lock);
}
void rt6_uncached_list_del(struct rt6_info *rt)
{
- if (!list_empty(&rt->rt6i_uncached)) {
- struct uncached_list *ul = rt->rt6i_uncached_list;
+ if (!list_empty(&rt->dst.rt_uncached)) {
+ struct uncached_list *ul = rt->dst.rt_uncached_list;
spin_lock_bh(&ul->lock);
- list_del_init(&rt->rt6i_uncached);
+ list_del_init(&rt->dst.rt_uncached);
spin_unlock_bh(&ul->lock);
}
}
continue;
spin_lock_bh(&ul->lock);
- list_for_each_entry_safe(rt, safe, &ul->head, rt6i_uncached) {
+ list_for_each_entry_safe(rt, safe, &ul->head, dst.rt_uncached) {
struct inet6_dev *rt_idev = rt->rt6i_idev;
struct net_device *rt_dev = rt->dst.dev;
bool handled = false;
handled = true;
}
if (handled)
- list_move(&rt->rt6i_uncached,
+ list_move(&rt->dst.rt_uncached,
&ul->quarantine);
}
spin_unlock_bh(&ul->lock);
static const struct rt6_info ip6_null_entry_template = {
.dst = {
- .__refcnt = ATOMIC_INIT(1),
+ .__rcuref = RCUREF_INIT(1),
.__use = 1,
.obsolete = DST_OBSOLETE_FORCE_CHK,
.error = -ENETUNREACH,
static const struct rt6_info ip6_prohibit_entry_template = {
.dst = {
- .__refcnt = ATOMIC_INIT(1),
+ .__rcuref = RCUREF_INIT(1),
.__use = 1,
.obsolete = DST_OBSOLETE_FORCE_CHK,
.error = -EACCES,
static const struct rt6_info ip6_blk_hole_entry_template = {
.dst = {
- .__refcnt = ATOMIC_INIT(1),
+ .__rcuref = RCUREF_INIT(1),
.__use = 1,
.obsolete = DST_OBSOLETE_FORCE_CHK,
.error = -EINVAL,
static void rt6_info_init(struct rt6_info *rt)
{
memset_after(rt, 0, dst);
- INIT_LIST_HEAD(&rt->rt6i_uncached);
+ INIT_LIST_HEAD(&rt->dst.rt_uncached);
}
/* allocate dst with ip6_dst_ops */
nh_gw = &fib6_nh->fib_nh_gw6;
dev = fib6_nh->fib_nh_dev;
- rcu_read_lock_bh();
+ rcu_read_lock();
last_probe = READ_ONCE(fib6_nh->last_probe);
idev = __in6_dev_get(dev);
neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
if (neigh) {
- if (neigh->nud_state & NUD_VALID)
+ if (READ_ONCE(neigh->nud_state) & NUD_VALID)
goto out;
- write_lock(&neigh->lock);
+ write_lock_bh(&neigh->lock);
if (!(neigh->nud_state & NUD_VALID) &&
time_after(jiffies,
neigh->updated + idev->cnf.rtr_probe_interval)) {
if (work)
__neigh_set_probe_once(neigh);
}
- write_unlock(&neigh->lock);
+ write_unlock_bh(&neigh->lock);
} else if (time_after(jiffies, last_probe +
idev->cnf.rtr_probe_interval)) {
work = kmalloc(sizeof(*work), GFP_ATOMIC);
}
out:
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
#else
static inline void rt6_probe(struct fib6_nh *fib6_nh)
enum rt6_nud_state ret = RT6_NUD_FAIL_HARD;
struct neighbour *neigh;
- rcu_read_lock_bh();
+ rcu_read_lock();
neigh = __ipv6_neigh_lookup_noref(fib6_nh->fib_nh_dev,
&fib6_nh->fib_nh_gw6);
if (neigh) {
- read_lock(&neigh->lock);
- if (neigh->nud_state & NUD_VALID)
+ u8 nud_state = READ_ONCE(neigh->nud_state);
+
+ if (nud_state & NUD_VALID)
ret = RT6_NUD_SUCCEED;
#ifdef CONFIG_IPV6_ROUTER_PREF
- else if (!(neigh->nud_state & NUD_FAILED))
+ else if (!(nud_state & NUD_FAILED))
ret = RT6_NUD_SUCCEED;
else
ret = RT6_NUD_FAIL_PROBE;
#endif
- read_unlock(&neigh->lock);
} else {
ret = IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) ?
RT6_NUD_SUCCEED : RT6_NUD_FAIL_DO_RR;
}
- rcu_read_unlock_bh();
+ rcu_read_unlock();
return ret;
}
dst = ip6_route_output_flags_noref(net, sk, fl6, flags);
rt6 = (struct rt6_info *)dst;
/* For dst cached in uncached_list, refcnt is already taken. */
- if (list_empty(&rt6->rt6i_uncached) && !dst_hold_safe(dst)) {
+ if (list_empty(&rt6->dst.rt_uncached) && !dst_hold_safe(dst)) {
dst = &net->ipv6.ip6_null_entry->dst;
dst_hold(dst);
}
from = rcu_dereference(rt->from);
if (from && (rt->rt6i_flags & RTF_PCPU ||
- unlikely(!list_empty(&rt->rt6i_uncached))))
+ unlikely(!list_empty(&rt->dst.rt_uncached))))
dst_ret = rt6_dst_from_check(rt, from, cookie);
else
dst_ret = rt6_check(rt, from, cookie);
net->ipv6.ip6_null_entry->dst.ops = &net->ipv6.ip6_dst_ops;
dst_init_metrics(&net->ipv6.ip6_null_entry->dst,
ip6_template_metrics, true);
- INIT_LIST_HEAD(&net->ipv6.ip6_null_entry->rt6i_uncached);
+ INIT_LIST_HEAD(&net->ipv6.ip6_null_entry->dst.rt_uncached);
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
net->ipv6.fib6_has_custom_rules = false;
net->ipv6.ip6_prohibit_entry->dst.ops = &net->ipv6.ip6_dst_ops;
dst_init_metrics(&net->ipv6.ip6_prohibit_entry->dst,
ip6_template_metrics, true);
- INIT_LIST_HEAD(&net->ipv6.ip6_prohibit_entry->rt6i_uncached);
+ INIT_LIST_HEAD(&net->ipv6.ip6_prohibit_entry->dst.rt_uncached);
net->ipv6.ip6_blk_hole_entry = kmemdup(&ip6_blk_hole_entry_template,
sizeof(*net->ipv6.ip6_blk_hole_entry),
net->ipv6.ip6_blk_hole_entry->dst.ops = &net->ipv6.ip6_dst_ops;
dst_init_metrics(&net->ipv6.ip6_blk_hole_entry->dst,
ip6_template_metrics, true);
- INIT_LIST_HEAD(&net->ipv6.ip6_blk_hole_entry->rt6i_uncached);
+ INIT_LIST_HEAD(&net->ipv6.ip6_blk_hole_entry->dst.rt_uncached);
#ifdef CONFIG_IPV6_SUBTREES
net->ipv6.fib6_routes_require_src = 0;
#endif
ipv6_icmp_error(sk, skb, err, th->dest, ntohl(info), (u8 *)th);
if (!sock_owned_by_user(sk)) {
- sk->sk_err = err;
+ WRITE_ONCE(sk->sk_err, err);
sk_error_report(sk); /* Wake people up to see the error (see connect in sock.c) */
tcp_done(sk);
- } else
- sk->sk_err_soft = err;
+ } else {
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
goto out;
case TCP_LISTEN:
break;
}
if (!sock_owned_by_user(sk) && np->recverr) {
- sk->sk_err = err;
+ WRITE_ONCE(sk->sk_err, err);
sk_error_report(sk);
- } else
- sk->sk_err_soft = err;
-
+ } else {
+ WRITE_ONCE(sk->sk_err_soft, err);
+ }
out:
bh_unlock_sock(sk);
sock_put(sk);
if (drop_reason)
goto discard_and_relse;
+ nf_reset_ct(skb);
+
if (tcp_filter(sk, skb)) {
drop_reason = SKB_DROP_REASON_SOCKET_FILTER;
goto discard_and_relse;
drop_reason = SKB_DROP_REASON_XFRM_POLICY;
goto drop;
}
+ nf_reset_ct(skb);
if (static_branch_unlikely(&udpv6_encap_needed_key) && up->encap_type) {
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
return 0;
}
-static bool __udp_v6_is_mcast_sock(struct net *net, struct sock *sk,
+static bool __udp_v6_is_mcast_sock(struct net *net, const struct sock *sk,
__be16 loc_port, const struct in6_addr *loc_addr,
__be16 rmt_port, const struct in6_addr *rmt_addr,
int dif, int sdif, unsigned short hnum)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
if (!net_eq(sock_net(sk), net))
return false;
if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
goto discard;
+ nf_reset_ct(skb);
if (udp_lib_checksum_complete(skb))
goto csum_error;
seq_puts(seq, IPV6_SEQ_DGRAM_HEADER);
} else {
int bucket = ((struct udp_iter_state *)seq->private)->bucket;
- struct inet_sock *inet = inet_sk(v);
+ const struct inet_sock *inet = inet_sk((const struct sock *)v);
__u16 srcp = ntohs(inet->inet_sport);
__u16 destp = ntohs(inet->inet_dport);
__ip6_dgram_sock_seq_show(seq, v, srcp, destp,
xdst->u.rt6.rt6i_gateway = rt->rt6i_gateway;
xdst->u.rt6.rt6i_dst = rt->rt6i_dst;
xdst->u.rt6.rt6i_src = rt->rt6i_src;
- INIT_LIST_HEAD(&xdst->u.rt6.rt6i_uncached);
+ INIT_LIST_HEAD(&xdst->u.rt6.dst.rt_uncached);
rt6_uncached_list_add(&xdst->u.rt6);
return 0;
if (likely(xdst->u.rt6.rt6i_idev))
in6_dev_put(xdst->u.rt6.rt6i_idev);
dst_destroy_metrics_generic(dst);
- if (xdst->u.rt6.rt6i_uncached_list)
+ if (xdst->u.rt6.dst.rt_uncached_list)
rt6_uncached_list_del(&xdst->u.rt6);
xfrm_dst_destroy(xdst);
}
ieee80211_send_addba_with_timeout(sta, tid_tx);
}
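+/* Record the current jiffies as the last transmission time for this TID's
+ * active TX aggregation session, refreshing the session inactivity timer.
+ */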
+void ieee80211_refresh_tx_agg_session_timer(struct ieee80211_sta *pubsta,
+ u16 tid)
+{
+ struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
+ struct tid_ampdu_tx *tid_tx;
+
+ if (WARN_ON_ONCE(tid >= IEEE80211_NUM_TIDS))
+ return;
+
+ tid_tx = rcu_dereference(sta->ampdu_mlme.tid_tx[tid]);
+ if (!tid_tx)
+ return;
+
+ tid_tx->last_tx = jiffies;
+}
+EXPORT_SYMBOL(ieee80211_refresh_tx_agg_session_timer);
+
/*
* After accepting the AddBA Response we activated a timer,
* resetting it after each frame that we send.
return offset;
}
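+/* Copy the reduced neighbor report (RNR) elements of @src into the beacon
+ * buffer at @pos, pointing @dst at the copies; returns the bytes consumed.
+ * Used alongside ieee80211_copy_mbssid_beacon() when building a new beacon.
+ */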
+static int
+ieee80211_copy_rnr_beacon(u8 *pos, struct cfg80211_rnr_elems *dst,
+ struct cfg80211_rnr_elems *src)
+{
+ int i, offset = 0;
+
+ for (i = 0; i < src->cnt; i++) {
+ memcpy(pos + offset, src->elem[i].data, src->elem[i].len);
+ dst->elem[i].len = src->elem[i].len;
+ dst->elem[i].data = pos + offset;
+ offset += dst->elem[i].len;
+ }
+ dst->cnt = src->cnt;
+
+ return offset;
+}
+
static int ieee80211_assign_beacon(struct ieee80211_sub_if_data *sdata,
struct ieee80211_link_data *link,
struct cfg80211_beacon_data *params,
const struct ieee80211_color_change_settings *cca)
{
struct cfg80211_mbssid_elems *mbssid = NULL;
+ struct cfg80211_rnr_elems *rnr = NULL;
struct beacon_data *new, *old;
int new_head_len, new_tail_len;
int size, err;
if (params->mbssid_ies) {
mbssid = params->mbssid_ies;
size += struct_size(new->mbssid_ies, elem, mbssid->cnt);
- size += ieee80211_get_mbssid_beacon_len(mbssid);
+ if (params->rnr_ies) {
+ rnr = params->rnr_ies;
+ size += struct_size(new->rnr_ies, elem, rnr->cnt);
+ }
+ size += ieee80211_get_mbssid_beacon_len(mbssid, rnr,
+ mbssid->cnt);
} else if (old && old->mbssid_ies) {
mbssid = old->mbssid_ies;
size += struct_size(new->mbssid_ies, elem, mbssid->cnt);
- size += ieee80211_get_mbssid_beacon_len(mbssid);
+ if (old && old->rnr_ies) {
+ rnr = old->rnr_ies;
+ size += struct_size(new->rnr_ies, elem, rnr->cnt);
+ }
+ size += ieee80211_get_mbssid_beacon_len(mbssid, rnr,
+ mbssid->cnt);
}
new = kzalloc(size, GFP_KERNEL);
/*
* pointers go into the block we allocated,
- * memory is | beacon_data | head | tail | mbssid_ies
+ * memory is | beacon_data | head | tail | mbssid_ies | rnr_ies
*/
new->head = ((u8 *) new) + sizeof(*new);
new->tail = new->head + new_head_len;
new->mbssid_ies = (void *)pos;
pos += struct_size(new->mbssid_ies, elem, mbssid->cnt);
- ieee80211_copy_mbssid_beacon(pos, new->mbssid_ies, mbssid);
+ pos += ieee80211_copy_mbssid_beacon(pos, new->mbssid_ies,
+ mbssid);
+ if (rnr) {
+ new->rnr_ies = (void *)pos;
+ pos += struct_size(new->rnr_ies, elem, rnr->cnt);
+ ieee80211_copy_rnr_beacon(pos, new->rnr_ies, rnr);
+ }
/* update bssid_indicator */
link_conf->bssid_indicator =
ilog2(__roundup_pow_of_two(mbssid->cnt + 1));
prev_beacon_int = link_conf->beacon_int;
link_conf->beacon_int = params->beacon_interval;
+ if (params->ht_cap)
+ link_conf->ht_ldpc =
+ params->ht_cap->cap_info &
+ cpu_to_le16(IEEE80211_HT_CAP_LDPC_CODING);
+
if (params->vht_cap) {
+ link_conf->vht_ldpc =
+ params->vht_cap->vht_cap_info &
+ cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC);
link_conf->vht_su_beamformer =
params->vht_cap->vht_cap_info &
cpu_to_le32(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE);
}
if (params->he_cap) {
+ link_conf->he_ldpc =
+ params->he_cap->phy_cap_info[1] &
+ IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
link_conf->he_su_beamformer =
params->he_cap->phy_cap_info[3] &
IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
if (params->eht_cap) {
link_conf->eht_puncturing = params->punct_bitmap;
changed |= BSS_CHANGED_EHT_PUNCTURING;
+
+ link_conf->eht_su_beamformer =
+ params->eht_cap->fixed.phy_cap_info[0] &
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER;
+ link_conf->eht_su_beamformee =
+ params->eht_cap->fixed.phy_cap_info[0] &
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+ link_conf->eht_mu_beamformer =
+ params->eht_cap->fixed.phy_cap_info[7] &
+ (IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ);
+ } else {
+ link_conf->eht_su_beamformer = false;
+ link_conf->eht_su_beamformee = false;
+ link_conf->eht_mu_beamformer = false;
}
if (sdata->vif.type == NL80211_IFTYPE_AP &&
return;
kfree(link->u.ap.next_beacon->mbssid_ies);
+ kfree(link->u.ap.next_beacon->rnr_ies);
kfree(link->u.ap.next_beacon);
link->u.ap.next_beacon = NULL;
}
(void *)params->he_6ghz_capa,
link_sta);
- if (params->eht_capa)
+ if (params->he_capa && params->eht_capa)
ieee80211_eht_cap_ie_to_sta_eht_cap(sdata, sband,
(u8 *)params->he_capa,
params->he_capa_len,
len = beacon->head_len + beacon->tail_len + beacon->beacon_ies_len +
beacon->proberesp_ies_len + beacon->assocresp_ies_len +
- beacon->probe_resp_len + beacon->lci_len + beacon->civicloc_len +
- ieee80211_get_mbssid_beacon_len(beacon->mbssid_ies);
+ beacon->probe_resp_len + beacon->lci_len + beacon->civicloc_len;
+
+ if (beacon->mbssid_ies)
+ len += ieee80211_get_mbssid_beacon_len(beacon->mbssid_ies,
+ beacon->rnr_ies,
+ beacon->mbssid_ies->cnt);
new_beacon = kzalloc(sizeof(*new_beacon) + len, GFP_KERNEL);
if (!new_beacon)
kfree(new_beacon);
return NULL;
}
+
+ if (beacon->rnr_ies && beacon->rnr_ies->cnt) {
+ new_beacon->rnr_ies =
+ kzalloc(struct_size(new_beacon->rnr_ies,
+ elem, beacon->rnr_ies->cnt),
+ GFP_KERNEL);
+ if (!new_beacon->rnr_ies) {
+ kfree(new_beacon->mbssid_ies);
+ kfree(new_beacon);
+ return NULL;
+ }
+ }
}
pos = (u8 *)(new_beacon + 1);
memcpy(pos, beacon->probe_resp, beacon->probe_resp_len);
pos += beacon->probe_resp_len;
}
- if (beacon->mbssid_ies && beacon->mbssid_ies->cnt)
+ if (beacon->mbssid_ies && beacon->mbssid_ies->cnt) {
pos += ieee80211_copy_mbssid_beacon(pos,
new_beacon->mbssid_ies,
beacon->mbssid_ies);
+ if (beacon->rnr_ies && beacon->rnr_ies->cnt)
+ pos += ieee80211_copy_rnr_beacon(pos,
+ new_beacon->rnr_ies,
+ beacon->rnr_ies);
+ }
/* might copy -1, meaning no changes requested */
new_beacon->ftm_responder = beacon->ftm_responder;
return ret;
}
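+/* cfg80211 set_hw_timestamp handler: hand the request to the driver's
+ * set_hw_timestamp op, or return -EOPNOTSUPP if the driver lacks one.
+ */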
+static int ieee80211_set_hw_timestamp(struct wiphy *wiphy,
+ struct net_device *dev,
+ struct cfg80211_set_hw_timestamp *hwts)
+{
+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+ struct ieee80211_local *local = sdata->local;
+
+ if (!local->ops->set_hw_timestamp)
+ return -EOPNOTSUPP;
+
+ if (!check_sdata_in_driver(sdata))
+ return -EIO;
+
+ return local->ops->set_hw_timestamp(&local->hw, &sdata->vif, hwts);
+}
+
const struct cfg80211_ops mac80211_config_ops = {
.add_virtual_intf = ieee80211_add_iface,
.del_virtual_intf = ieee80211_del_iface,
.add_link_station = ieee80211_add_link_station,
.mod_link_station = ieee80211_mod_link_station,
.del_link_station = ieee80211_del_link_station,
+ .set_hw_timestamp = ieee80211_set_hw_timestamp,
};
#include "driver-ops.h"
static ssize_t ieee80211_if_read(
- struct ieee80211_sub_if_data *sdata,
+ void *data,
char __user *userbuf,
size_t count, loff_t *ppos,
- ssize_t (*format)(const struct ieee80211_sub_if_data *, char *, int))
+ ssize_t (*format)(const void *, char *, int))
{
char buf[200];
ssize_t ret = -EINVAL;
read_lock(&dev_base_lock);
- ret = (*format)(sdata, buf, sizeof(buf));
+ ret = (*format)(data, buf, sizeof(buf));
read_unlock(&dev_base_lock);
if (ret >= 0)
}
static ssize_t ieee80211_if_write(
- struct ieee80211_sub_if_data *sdata,
+ void *data,
const char __user *userbuf,
size_t count, loff_t *ppos,
- ssize_t (*write)(struct ieee80211_sub_if_data *, const char *, int))
+ ssize_t (*write)(void *, const char *, int))
{
char buf[64];
ssize_t ret;
buf[count] = '\0';
rtnl_lock();
- ret = (*write)(sdata, buf, count);
+ ret = (*write)(data, buf, count);
rtnl_unlock();
return ret;
}
-#define IEEE80211_IF_FMT(name, field, format_string) \
+#define IEEE80211_IF_FMT(name, type, field, format_string) \
static ssize_t ieee80211_if_fmt_##name( \
- const struct ieee80211_sub_if_data *sdata, char *buf, \
+ const type *data, char *buf, \
int buflen) \
{ \
- return scnprintf(buf, buflen, format_string, sdata->field); \
+ return scnprintf(buf, buflen, format_string, data->field); \
}
-#define IEEE80211_IF_FMT_DEC(name, field) \
- IEEE80211_IF_FMT(name, field, "%d\n")
-#define IEEE80211_IF_FMT_HEX(name, field) \
- IEEE80211_IF_FMT(name, field, "%#x\n")
-#define IEEE80211_IF_FMT_LHEX(name, field) \
- IEEE80211_IF_FMT(name, field, "%#lx\n")
+#define IEEE80211_IF_FMT_DEC(name, type, field) \
+ IEEE80211_IF_FMT(name, type, field, "%d\n")
+#define IEEE80211_IF_FMT_HEX(name, type, field) \
+ IEEE80211_IF_FMT(name, type, field, "%#x\n")
+#define IEEE80211_IF_FMT_LHEX(name, type, field) \
+ IEEE80211_IF_FMT(name, type, field, "%#lx\n")
-#define IEEE80211_IF_FMT_HEXARRAY(name, field) \
+#define IEEE80211_IF_FMT_HEXARRAY(name, type, field) \
static ssize_t ieee80211_if_fmt_##name( \
- const struct ieee80211_sub_if_data *sdata, \
+ const type *data, \
char *buf, int buflen) \
{ \
char *p = buf; \
int i; \
- for (i = 0; i < sizeof(sdata->field); i++) { \
+ for (i = 0; i < sizeof(data->field); i++) { \
p += scnprintf(p, buflen + buf - p, "%.2x ", \
- sdata->field[i]); \
+ data->field[i]); \
} \
p += scnprintf(p, buflen + buf - p, "\n"); \
return p - buf; \
}
-#define IEEE80211_IF_FMT_ATOMIC(name, field) \
+#define IEEE80211_IF_FMT_ATOMIC(name, type, field) \
static ssize_t ieee80211_if_fmt_##name( \
- const struct ieee80211_sub_if_data *sdata, \
+ const type *data, \
char *buf, int buflen) \
{ \
- return scnprintf(buf, buflen, "%d\n", atomic_read(&sdata->field));\
+ return scnprintf(buf, buflen, "%d\n", atomic_read(&data->field));\
}
-#define IEEE80211_IF_FMT_MAC(name, field) \
+#define IEEE80211_IF_FMT_MAC(name, type, field) \
static ssize_t ieee80211_if_fmt_##name( \
- const struct ieee80211_sub_if_data *sdata, char *buf, \
+ const type *data, char *buf, \
int buflen) \
{ \
- return scnprintf(buf, buflen, "%pM\n", sdata->field); \
+ return scnprintf(buf, buflen, "%pM\n", data->field); \
}
-#define IEEE80211_IF_FMT_JIFFIES_TO_MS(name, field) \
+#define IEEE80211_IF_FMT_JIFFIES_TO_MS(name, type, field) \
static ssize_t ieee80211_if_fmt_##name( \
- const struct ieee80211_sub_if_data *sdata, \
+ const type *data, \
char *buf, int buflen) \
{ \
return scnprintf(buf, buflen, "%d\n", \
- jiffies_to_msecs(sdata->field)); \
+ jiffies_to_msecs(data->field)); \
}
#define _IEEE80211_IF_FILE_OPS(name, _read, _write) \
.llseek = generic_file_llseek, \
}
-#define _IEEE80211_IF_FILE_R_FN(name) \
+#define _IEEE80211_IF_FILE_R_FN(name, type) \
static ssize_t ieee80211_if_read_##name(struct file *file, \
char __user *userbuf, \
size_t count, loff_t *ppos) \
{ \
+ ssize_t (*fn)(const void *, char *, int) = (void *) \
+ ((ssize_t (*)(const type, char *, int)) \
+ ieee80211_if_fmt_##name); \
return ieee80211_if_read(file->private_data, \
- userbuf, count, ppos, \
- ieee80211_if_fmt_##name); \
+ userbuf, count, ppos, fn); \
}
-#define _IEEE80211_IF_FILE_W_FN(name) \
+#define _IEEE80211_IF_FILE_W_FN(name, type) \
static ssize_t ieee80211_if_write_##name(struct file *file, \
const char __user *userbuf, \
size_t count, loff_t *ppos) \
{ \
+ ssize_t (*fn)(void *, const char *, int) = (void *) \
+ ((ssize_t (*)(type, const char *, int)) \
+ ieee80211_if_parse_##name); \
return ieee80211_if_write(file->private_data, userbuf, count, \
- ppos, ieee80211_if_parse_##name); \
+ ppos, fn); \
}
#define IEEE80211_IF_FILE_R(name) \
- _IEEE80211_IF_FILE_R_FN(name) \
+ _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_sub_if_data *) \
_IEEE80211_IF_FILE_OPS(name, ieee80211_if_read_##name, NULL)
#define IEEE80211_IF_FILE_W(name) \
- _IEEE80211_IF_FILE_W_FN(name) \
+ _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_sub_if_data *) \
_IEEE80211_IF_FILE_OPS(name, NULL, ieee80211_if_write_##name)
#define IEEE80211_IF_FILE_RW(name) \
- _IEEE80211_IF_FILE_R_FN(name) \
- _IEEE80211_IF_FILE_W_FN(name) \
+ _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_sub_if_data *) \
+ _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_sub_if_data *) \
_IEEE80211_IF_FILE_OPS(name, ieee80211_if_read_##name, \
ieee80211_if_write_##name)
#define IEEE80211_IF_FILE(name, field, format) \
- IEEE80211_IF_FMT_##format(name, field) \
+ IEEE80211_IF_FMT_##format(name, struct ieee80211_sub_if_data, field) \
IEEE80211_IF_FILE_R(name)
+/* Same but with a link_ prefix in the ops variable name and different type */
+#define IEEE80211_IF_LINK_FILE_R(name) \
+ _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_link_data *) \
+ _IEEE80211_IF_FILE_OPS(link_##name, ieee80211_if_read_##name, NULL)
+
+#define IEEE80211_IF_LINK_FILE_W(name) \
+ _IEEE80211_IF_FILE_W_FN(name) \
+ _IEEE80211_IF_FILE_OPS(link_##name, NULL, ieee80211_if_write_##name)
+
+#define IEEE80211_IF_LINK_FILE_RW(name) \
+ _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_link_data *) \
+ _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_link_data *) \
+ _IEEE80211_IF_FILE_OPS(link_##name, ieee80211_if_read_##name, \
+ ieee80211_if_write_##name)
+
+#define IEEE80211_IF_LINK_FILE(name, field, format) \
+ IEEE80211_IF_FMT_##format(name, struct ieee80211_link_data, field) \
+ IEEE80211_IF_LINK_FILE_R(name)
+
/* common attributes */
IEEE80211_IF_FILE(rc_rateidx_mask_2ghz, rc_rateidx_mask[NL80211_BAND_2GHZ],
HEX);
IEEE80211_IF_FILE(flags, flags, HEX);
IEEE80211_IF_FILE(state, state, LHEX);
-IEEE80211_IF_FILE(txpower, vif.bss_conf.txpower, DEC);
-IEEE80211_IF_FILE(ap_power_level, deflink.ap_power_level, DEC);
-IEEE80211_IF_FILE(user_power_level, deflink.user_power_level, DEC);
+IEEE80211_IF_LINK_FILE(txpower, conf->txpower, DEC);
+IEEE80211_IF_LINK_FILE(ap_power_level, ap_power_level, DEC);
+IEEE80211_IF_LINK_FILE(user_power_level, user_power_level, DEC);
static ssize_t
ieee80211_if_fmt_hw_queues(const struct ieee80211_sub_if_data *sdata,
IEEE80211_IF_FILE(aid, vif.cfg.aid, DEC);
IEEE80211_IF_FILE(beacon_timeout, u.mgd.beacon_timeout, JIFFIES_TO_MS);
-static int ieee80211_set_smps(struct ieee80211_sub_if_data *sdata,
+static int ieee80211_set_smps(struct ieee80211_link_data *link,
enum ieee80211_smps_mode smps_mode)
{
+ struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_local *local = sdata->local;
int err;
return -EOPNOTSUPP;
sdata_lock(sdata);
- err = __ieee80211_request_smps_mgd(sdata, &sdata->deflink, smps_mode);
+ err = __ieee80211_request_smps_mgd(link->sdata, link, smps_mode);
sdata_unlock(sdata);
return err;
[IEEE80211_SMPS_DYNAMIC] = "dynamic",
};
-static ssize_t ieee80211_if_fmt_smps(const struct ieee80211_sub_if_data *sdata,
+static ssize_t ieee80211_if_fmt_smps(const struct ieee80211_link_data *link,
char *buf, int buflen)
{
- if (sdata->vif.type == NL80211_IFTYPE_STATION)
+ if (link->sdata->vif.type == NL80211_IFTYPE_STATION)
return snprintf(buf, buflen, "request: %s\nused: %s\n",
- smps_modes[sdata->deflink.u.mgd.req_smps],
- smps_modes[sdata->deflink.smps_mode]);
+ smps_modes[link->u.mgd.req_smps],
+ smps_modes[link->smps_mode]);
return -EINVAL;
}
-static ssize_t ieee80211_if_parse_smps(struct ieee80211_sub_if_data *sdata,
+static ssize_t ieee80211_if_parse_smps(struct ieee80211_link_data *link,
const char *buf, int buflen)
{
enum ieee80211_smps_mode mode;
for (mode = 0; mode < IEEE80211_SMPS_NUM_MODES; mode++) {
if (strncmp(buf, smps_modes[mode], buflen) == 0) {
- int err = ieee80211_set_smps(sdata, mode);
+ int err = ieee80211_set_smps(link, mode);
if (!err)
return buflen;
return err;
return -EINVAL;
}
-IEEE80211_IF_FILE_RW(smps);
+IEEE80211_IF_LINK_FILE_RW(smps);
static ssize_t ieee80211_if_parse_tkip_mic_test(
struct ieee80211_sub_if_data *sdata, const char *buf, int buflen)
}
IEEE80211_IF_FILE_RW(active_links);
+IEEE80211_IF_LINK_FILE(addr, conf->addr, MAC);
+
#ifdef CONFIG_MAC80211_MESH
IEEE80211_IF_FILE(estab_plinks, u.mesh.estab_plinks, ATOMIC);
DEBUGFS_ADD(bssid);
DEBUGFS_ADD(aid);
DEBUGFS_ADD(beacon_timeout);
- DEBUGFS_ADD_MODE(smps, 0600);
DEBUGFS_ADD_MODE(tkip_mic_test, 0200);
DEBUGFS_ADD_MODE(beacon_loss, 0200);
DEBUGFS_ADD_MODE(uapsd_queues, 0600);
static void add_ap_files(struct ieee80211_sub_if_data *sdata)
{
DEBUGFS_ADD(num_mcast_sta);
- DEBUGFS_ADD_MODE(smps, 0600);
DEBUGFS_ADD(num_sta_ps);
DEBUGFS_ADD(dtim_count);
DEBUGFS_ADD(num_buffered_multicast);
DEBUGFS_ADD(flags);
DEBUGFS_ADD(state);
- DEBUGFS_ADD(txpower);
- DEBUGFS_ADD(user_power_level);
- DEBUGFS_ADD(ap_power_level);
if (sdata->vif.type != NL80211_IFTYPE_MONITOR)
add_common_files(sdata);
}
}
+#undef DEBUGFS_ADD_MODE
+#undef DEBUGFS_ADD
+
+#define DEBUGFS_ADD_MODE(dentry, name, mode) \
+ debugfs_create_file(#name, mode, dentry, \
+ link, &link_##name##_ops)
+
+#define DEBUGFS_ADD(dentry, name) DEBUGFS_ADD_MODE(dentry, name, 0400)
+
+static void add_link_files(struct ieee80211_link_data *link,
+ struct dentry *dentry)
+{
+ DEBUGFS_ADD(dentry, txpower);
+ DEBUGFS_ADD(dentry, user_power_level);
+ DEBUGFS_ADD(dentry, ap_power_level);
+
+ switch (link->sdata->vif.type) {
+ case NL80211_IFTYPE_STATION:
+ DEBUGFS_ADD_MODE(dentry, smps, 0600);
+ break;
+ default:
+ break;
+ }
+}
+
void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata)
{
char buf[10+IFNAMSIZ];
sdata->debugfs.subdir_stations = debugfs_create_dir("stations",
sdata->vif.debugfs_dir);
add_files(sdata);
+
+ if (!(sdata->local->hw.wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO))
+ add_link_files(&sdata->deflink, sdata->vif.debugfs_dir);
}
void ieee80211_debugfs_remove_netdev(struct ieee80211_sub_if_data *sdata)
sprintf(buf, "netdev:%s", sdata->name);
debugfs_rename(dir->d_parent, dir, dir->d_parent, buf);
}
+
+void ieee80211_link_debugfs_add(struct ieee80211_link_data *link)
+{
+ char link_dir_name[10];
+
+ if (WARN_ON(!link->sdata->vif.debugfs_dir))
+ return;
+
+ /* For now, this should not be called for non-MLO capable drivers */
+ if (WARN_ON(!(link->sdata->local->hw.wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO)))
+ return;
+
+ snprintf(link_dir_name, sizeof(link_dir_name),
+ "link-%d", link->link_id);
+
+ link->debugfs_dir =
+ debugfs_create_dir(link_dir_name,
+ link->sdata->vif.debugfs_dir);
+
+ DEBUGFS_ADD(link->debugfs_dir, addr);
+ add_link_files(link, link->debugfs_dir);
+}
+
+void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link)
+{
+ if (!link->sdata->vif.debugfs_dir || !link->debugfs_dir) {
+ link->debugfs_dir = NULL;
+ return;
+ }
+
+ if (link->debugfs_dir == link->sdata->vif.debugfs_dir) {
+ WARN_ON(link != &link->sdata->deflink);
+ link->debugfs_dir = NULL;
+ return;
+ }
+
+ debugfs_remove_recursive(link->debugfs_dir);
+ link->debugfs_dir = NULL;
+}
+
+void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link)
+{
+ if (WARN_ON(!link->debugfs_dir))
+ return;
+
+ drv_link_add_debugfs(link->sdata->local, link->sdata,
+ link->conf, link->debugfs_dir);
+}
+
+void ieee80211_link_debugfs_drv_remove(struct ieee80211_link_data *link)
+{
+ if (!link || !link->debugfs_dir)
+ return;
+
+ if (WARN_ON(link->debugfs_dir == link->sdata->vif.debugfs_dir))
+ return;
+
+ /* Recreate the directory excluding the driver data */
+ debugfs_remove_recursive(link->debugfs_dir);
+ link->debugfs_dir = NULL;
+
+ ieee80211_link_debugfs_add(link);
+}
void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata);
void ieee80211_debugfs_remove_netdev(struct ieee80211_sub_if_data *sdata);
void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata);
+
+void ieee80211_link_debugfs_add(struct ieee80211_link_data *link);
+void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link);
+
+void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link);
+void ieee80211_link_debugfs_drv_remove(struct ieee80211_link_data *link);
#else
static inline void ieee80211_debugfs_add_netdev(
struct ieee80211_sub_if_data *sdata)
static inline void ieee80211_debugfs_rename_netdev(
struct ieee80211_sub_if_data *sdata)
{}
+
+static inline void ieee80211_link_debugfs_add(struct ieee80211_link_data *link)
+{}
+static inline void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link)
+{}
+
+static inline void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link)
+{}
+static inline void ieee80211_link_debugfs_drv_remove(struct ieee80211_link_data *link)
+{}
#endif
#endif /* __IEEE80211_DEBUGFS_NETDEV_H */
#include "trace.h"
#include "driver-ops.h"
#include "debugfs_sta.h"
+#include "debugfs_netdev.h"
int drv_start(struct ieee80211_local *local)
{
u16 old_links, u16 new_links,
struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS])
{
+ struct ieee80211_link_data *link;
+ unsigned long links_to_add;
+ unsigned long links_to_rem;
+ unsigned int link_id;
int ret = -EOPNOTSUPP;
might_sleep();
if (old_links == new_links)
return 0;
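+ /* Remove per-link driver debugfs entries for links that go away before
+ * notifying the driver; add entries for new links only once the driver
+ * has accepted the link change.
+ */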
+ links_to_add = ~old_links & new_links;
+ links_to_rem = old_links & ~new_links;
+
+ for_each_set_bit(link_id, &links_to_rem, IEEE80211_MLD_MAX_NUM_LINKS) {
+ link = rcu_access_pointer(sdata->link[link_id]);
+
+ ieee80211_link_debugfs_drv_remove(link);
+ }
+
trace_drv_change_vif_links(local, sdata, old_links, new_links);
if (local->ops->change_vif_links)
ret = local->ops->change_vif_links(&local->hw, &sdata->vif,
old_links, new_links, old);
trace_drv_return_int(local, ret);
- return ret;
+ if (ret)
+ return ret;
+
+ for_each_set_bit(link_id, &links_to_add, IEEE80211_MLD_MAX_NUM_LINKS) {
+ link = rcu_access_pointer(sdata->link[link_id]);
+
+ ieee80211_link_debugfs_drv_add(link);
+ }
+
+ return 0;
}
int drv_change_sta_links(struct ieee80211_local *local,
}
#ifdef CONFIG_MAC80211_DEBUGFS
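+/* Hand the per-link debugfs directory created by mac80211 to the driver so it
+ * can populate it via its (optional) link_add_debugfs op.
+ */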
+static inline void drv_link_add_debugfs(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_bss_conf *link_conf,
+ struct dentry *dir)
+{
+ might_sleep();
+
+ sdata = get_bss_sdata(sdata);
+ if (!check_sdata_in_driver(sdata))
+ return;
+
+ if (local->ops->link_add_debugfs)
+ local->ops->link_add_debugfs(&local->hw, &sdata->vif,
+ link_conf, dir);
+}
+
static inline void drv_sta_add_debugfs(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct ieee80211_sta *sta,
return ret;
}
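+/* Forward .ndo_setup_tc requests from the 802.11 netdev to the driver's
+ * net_setup_tc op; returns -EOPNOTSUPP when the driver does not provide one.
+ */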
+static inline int drv_net_setup_tc(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata,
+ struct net_device *dev,
+ enum tc_setup_type type, void *type_data)
+{
+ int ret = -EOPNOTSUPP;
+
+ sdata = get_bss_sdata(sdata);
+ trace_drv_net_setup_tc(local, sdata, type);
+ if (local->ops->net_setup_tc)
+ ret = local->ops->net_setup_tc(&local->hw, &sdata->vif, dev,
+ type, type_data);
+ trace_drv_return_int(local, ret);
+
+ return ret;
+}
+
int drv_change_vif_links(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
u16 old_links, u16 new_links,
extern const struct cfg80211_ops mac80211_config_ops;
struct ieee80211_local;
+struct ieee80211_mesh_fast_tx;
/* Maximum number of broadcast/multicast frames to buffer when some of the
* associated stations are using power saving. */
u16 cntdwn_counter_offsets[IEEE80211_MAX_CNTDWN_COUNTERS_NUM];
u8 cntdwn_current_counter;
struct cfg80211_mbssid_elems *mbssid_ies;
+ struct cfg80211_rnr_elems *rnr_ies;
struct rcu_head rcu_head;
};
atomic_t entries; /* Up to MAX_MESH_NEIGHBOURS */
};
+/**
+ * struct mesh_tx_cache - mesh fast xmit header cache
+ *
+ * @rht: hash table containing struct ieee80211_mesh_fast_tx, using skb DA as key
+ * @walk_head: linked list containing all ieee80211_mesh_fast_tx objects
+ * @walk_lock: lock protecting walk_head and rht
+ */
+struct mesh_tx_cache {
+ struct rhashtable rht;
+ struct hlist_head walk_head;
+ spinlock_t walk_lock;
+};
+
struct ieee80211_if_mesh {
struct timer_list housekeeping_timer;
struct timer_list mesh_path_timer;
struct mesh_stats mshstats;
struct mesh_config mshcfg;
atomic_t estab_plinks;
- u32 mesh_seqnum;
+ atomic_t mesh_seqnum;
bool accepting_plinks;
int num_gates;
struct beacon_data __rcu *beacon;
struct mesh_table mpp_paths; /* Store paths for MPP&MAP */
int mesh_paths_generation;
int mpp_paths_generation;
+ struct mesh_tx_cache tx_cache;
};
#ifdef CONFIG_MAC80211_MESH
struct ieee80211_tx_queue_params tx_conf[IEEE80211_NUM_ACS];
struct ieee80211_bss_conf *conf;
+
+#ifdef CONFIG_MAC80211_DEBUGFS
+ struct dentry *debugfs_dir;
+#endif
};
struct ieee80211_sub_if_data {
}
static inline int
-ieee80211_get_mbssid_beacon_len(struct cfg80211_mbssid_elems *elems)
+ieee80211_get_mbssid_beacon_len(struct cfg80211_mbssid_elems *elems,
+ struct cfg80211_rnr_elems *rnr_elems,
+ u8 i)
{
- int i, len = 0;
+ int len = 0;
- if (!elems)
+ if (!elems || !elems->cnt || i > elems->cnt)
return 0;
+ if (i < elems->cnt) {
+ len = elems->elem[i].len;
+ if (rnr_elems) {
+ len += rnr_elems->elem[i].len;
+ for (i = elems->cnt; i < rnr_elems->cnt; i++)
+ len += rnr_elems->elem[i].len;
+ }
+ return len;
+ }
+
+ /* i == elems->cnt, calculate total length of all MBSSID elements */
for (i = 0; i < elems->cnt; i++)
len += elems->elem[i].len;
+ if (rnr_elems) {
+ for (i = 0; i < rnr_elems->cnt; i++)
+ len += rnr_elems->elem[i].len;
+ }
+
return len;
}
/* interface handling */
#define MAC80211_SUPPORTED_FEATURES_TX (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
NETIF_F_HW_CSUM | NETIF_F_SG | \
- NETIF_F_HIGHDMA | NETIF_F_GSO_SOFTWARE)
+ NETIF_F_HIGHDMA | NETIF_F_GSO_SOFTWARE | \
+ NETIF_F_HW_TC)
#define MAC80211_SUPPORTED_FEATURES_RX (NETIF_F_RXCSUM)
#define MAC80211_SUPPORTED_FEATURES (MAC80211_SUPPORTED_FEATURES_TX | \
MAC80211_SUPPORTED_FEATURES_RX)
int link_id, u64 *cookie);
int ieee80211_probe_mesh_link(struct wiphy *wiphy, struct net_device *dev,
const u8 *buf, size_t len);
+void __ieee80211_xmit_fast(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta,
+ struct ieee80211_fast_tx *fast_tx,
+ struct sk_buff *skb, bool ampdu,
+ const u8 *da, const u8 *sa);
+void ieee80211_aggr_check(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta, struct sk_buff *skb);
/* HT */
void ieee80211_apply_htcap_overrides(struct ieee80211_sub_if_data *sdata,
enum ieee80211_smps_mode smps_mode,
struct sk_buff *skb);
u8 *ieee80211_ie_build_he_oper(u8 *pos, struct cfg80211_chan_def *chandef);
+u8 *ieee80211_ie_build_eht_oper(u8 *pos, struct cfg80211_chan_def *chandef,
+ const struct ieee80211_sta_eht_cap *eht_cap);
int ieee80211_parse_bitrates(enum nl80211_chan_width width,
const struct ieee80211_supported_band *sband,
const u8 *srates, int srates_len, u32 *rates);
struct sk_buff *skb);
void ieee80211_add_aid_request_ie(struct ieee80211_sub_if_data *sdata,
struct sk_buff *skb);
+u8 *ieee80211_ie_build_s1g_cap(u8 *pos, struct ieee80211_sta_s1g_cap *s1g_cap);
/* channel management */
bool ieee80211_chandef_ht_oper(const struct ieee80211_ht_operation *ht_oper,
dev_fetch_sw_netstats(stats, dev->tstats);
}
+static int ieee80211_netdev_setup_tc(struct net_device *dev,
+ enum tc_setup_type type, void *type_data)
+{
+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+ struct ieee80211_local *local = sdata->local;
+
+ return drv_net_setup_tc(local, sdata, dev, type, type_data);
+}
+
static const struct net_device_ops ieee80211_dataif_ops = {
.ndo_open = ieee80211_open,
.ndo_stop = ieee80211_stop,
.ndo_set_rx_mode = ieee80211_set_multicast_list,
.ndo_set_mac_address = ieee80211_change_mac,
.ndo_get_stats64 = ieee80211_get_stats64,
+ .ndo_setup_tc = ieee80211_netdev_setup_tc,
};
static u16 ieee80211_monitor_select_queue(struct net_device *dev,
.ndo_set_mac_address = ieee80211_change_mac,
.ndo_get_stats64 = ieee80211_get_stats64,
.ndo_fill_forward_path = ieee80211_netdev_fill_forward_path,
+ .ndo_setup_tc = ieee80211_netdev_setup_tc,
};
static bool ieee80211_iftype_supports_hdr_offload(enum nl80211_iftype iftype)
#include "ieee80211_i.h"
#include "driver-ops.h"
#include "key.h"
+#include "debugfs_netdev.h"
void ieee80211_link_setup(struct ieee80211_link_data *link)
{
link->link_id = link_id;
link->conf = link_conf;
link_conf->link_id = link_id;
+ link_conf->vif = &sdata->vif;
INIT_WORK(&link->csa_finalize_work,
ieee80211_csa_finalize_work);
default:
WARN_ON(1);
}
+
+ ieee80211_link_debugfs_add(link);
}
}
if (WARN_ON(!link))
continue;
ieee80211_remove_link_keys(link, &keys);
+ ieee80211_link_debugfs_remove(link);
ieee80211_link_stop(link);
}
#include <asm/unaligned.h>
#include "ieee80211_i.h"
#include "mesh.h"
+#include "wme.h"
#include "driver-ops.h"
static int mesh_allocated;
ieee80211_chandef_vht_oper(&sdata->local->hw, vht_cap_info,
ie->vht_operation, ie->ht_operation,
&sta_chan_def);
- ieee80211_chandef_he_6ghz_oper(sdata, ie->he_operation, NULL,
+ ieee80211_chandef_he_6ghz_oper(sdata, ie->he_operation, ie->eht_operation,
&sta_chan_def);
if (!cfg80211_chandef_compatible(&sdata->vif.bss_conf.chandef,
return 0;
}
+int mesh_add_eht_cap_ie(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u8 ie_len)
+{
+ const struct ieee80211_sta_he_cap *he_cap;
+ const struct ieee80211_sta_eht_cap *eht_cap;
+ struct ieee80211_supported_band *sband;
+ u8 *pos;
+
+ sband = ieee80211_get_sband(sdata);
+ if (!sband)
+ return -EINVAL;
+
+ he_cap = ieee80211_get_he_iftype_cap(sband, NL80211_IFTYPE_MESH_POINT);
+ eht_cap = ieee80211_get_eht_iftype_cap(sband, NL80211_IFTYPE_MESH_POINT);
+ if (!he_cap || !eht_cap ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_10)
+ return 0;
+
+ if (skb_tailroom(skb) < ie_len)
+ return -ENOMEM;
+
+ pos = skb_put(skb, ie_len);
+ ieee80211_ie_build_eht_cap(pos, he_cap, eht_cap, pos + ie_len, false);
+
+ return 0;
+}
+
+int mesh_add_eht_oper_ie(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb)
+{
+ const struct ieee80211_sta_eht_cap *eht_cap;
+ struct ieee80211_supported_band *sband;
+ u32 len;
+ u8 *pos;
+
+ sband = ieee80211_get_sband(sdata);
+ if (!sband)
+ return -EINVAL;
+
+ eht_cap = ieee80211_get_eht_iftype_cap(sband, NL80211_IFTYPE_MESH_POINT);
+ if (!eht_cap ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_20_NOHT ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_5 ||
+ sdata->vif.bss_conf.chandef.width == NL80211_CHAN_WIDTH_10)
+ return 0;
+
+ len = 2 + 1 + offsetof(struct ieee80211_eht_operation, optional) +
+ offsetof(struct ieee80211_eht_operation_info, optional);
+
+ if (skb_tailroom(skb) < len)
+ return -ENOMEM;
+
+ pos = skb_put(skb, len);
+ ieee80211_ie_build_eht_oper(pos, &sdata->vif.bss_conf.chandef, eht_cap);
+
+ return 0;
+}
+
static void ieee80211_mesh_path_timer(struct timer_list *t)
{
struct ieee80211_sub_if_data *sdata =
if (he_oper)
sdata->vif.bss_conf.he_oper.params =
__le32_to_cpu(he_oper->he_oper_params);
+
+ sdata->vif.bss_conf.eht_support =
+ !!ieee80211_get_eht_iftype_cap(sband, NL80211_IFTYPE_MESH_POINT);
+}
+
+bool ieee80211_mesh_xmit_fast(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u32 ctrl_flags)
+{
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+ struct ieee80211_mesh_fast_tx *entry;
+ struct ieee80211s_hdr *meshhdr;
+ u8 sa[ETH_ALEN] __aligned(2);
+ struct tid_ampdu_tx *tid_tx;
+ struct sta_info *sta;
+ bool copy_sa = false;
+ u16 ethertype;
+ u8 tid;
+
+ if (ctrl_flags & IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP)
+ return false;
+
+ if (ifmsh->mshcfg.dot11MeshNolearn)
+ return false;
+
+ /* Add support for these cases later */
+ if (ifmsh->ps_peers_light_sleep || ifmsh->ps_peers_deep_sleep)
+ return false;
+
+ if (is_multicast_ether_addr(skb->data))
+ return false;
+
+ ethertype = (skb->data[12] << 8) | skb->data[13];
+ if (ethertype < ETH_P_802_3_MIN)
+ return false;
+
+ if (skb->sk && skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)
+ return false;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ skb_set_transport_header(skb, skb_checksum_start_offset(skb));
+ if (skb_checksum_help(skb))
+ return false;
+ }
+
+ entry = mesh_fast_tx_get(sdata, skb->data);
+ if (!entry)
+ return false;
+
+ if (skb_headroom(skb) < entry->hdrlen + entry->fast_tx.hdr_len)
+ return false;
+
+ sta = rcu_dereference(entry->mpath->next_hop);
+ if (!sta)
+ return false;
+
+ tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ tid_tx = rcu_dereference(sta->ampdu_mlme.tid_tx[tid]);
+ if (tid_tx) {
+ if (!test_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state))
+ return false;
+ if (tid_tx->timeout)
+ tid_tx->last_tx = jiffies;
+ }
+
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ return true;
+
+ skb_set_queue_mapping(skb, ieee80211_select_queue(sdata, sta, skb));
+
+ meshhdr = (struct ieee80211s_hdr *)entry->hdr;
+ if ((meshhdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6) {
+ /* preserve SA from eth header for 6-addr frames */
+ ether_addr_copy(sa, skb->data + ETH_ALEN);
+ copy_sa = true;
+ }
+
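+	/* overwrite the ethernet DA/SA with the cached mesh + SNAP headers;
+	 * the original ethertype stays in place right behind them
+	 */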
+ memcpy(skb_push(skb, entry->hdrlen - 2 * ETH_ALEN), entry->hdr,
+ entry->hdrlen);
+
+ meshhdr = (struct ieee80211s_hdr *)skb->data;
+ put_unaligned_le32(atomic_inc_return(&sdata->u.mesh.mesh_seqnum),
+ &meshhdr->seqnum);
+ meshhdr->ttl = sdata->u.mesh.mshcfg.dot11MeshTTL;
+ if (copy_sa)
+ ether_addr_copy(meshhdr->eaddr2, sa);
+
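+	/* __ieee80211_xmit_fast() overwrites ETH_HLEN - 2 bytes at the head
+	 * with the cached 802.11 header, so make room for them here
+	 */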
+ skb_push(skb, 2 * ETH_ALEN);
+ __ieee80211_xmit_fast(sdata, sta, &entry->fast_tx, skb, tid_tx,
+ entry->mpath->dst, sdata->vif.addr);
+
+ return true;
}
/**
meshhdr->ttl = sdata->u.mesh.mshcfg.dot11MeshTTL;
- /* FIXME: racy -- TX on multiple queues can be concurrent */
- put_unaligned(cpu_to_le32(sdata->u.mesh.mesh_seqnum), &meshhdr->seqnum);
- sdata->u.mesh.mesh_seqnum++;
-
+ put_unaligned_le32(atomic_inc_return(&sdata->u.mesh.mesh_seqnum),
+ &meshhdr->seqnum);
if (addr4or5 && !addr6) {
meshhdr->flags |= MESH_FLAGS_AE_A4;
memcpy(meshhdr->eaddr1, addr4or5, ETH_ALEN);
changed = mesh_accept_plinks_update(sdata);
ieee80211_mbss_info_change_notify(sdata, changed);
+ mesh_fast_tx_gc(sdata);
+
mod_timer(&ifmsh->housekeeping_timer,
round_jiffies(jiffies +
IEEE80211_MESH_HOUSEKEEPING_INTERVAL));
struct ieee80211_chanctx_conf *chanctx_conf;
struct mesh_csa_settings *csa;
enum nl80211_band band;
- u8 ie_len_he_cap;
+ u8 ie_len_he_cap, ie_len_eht_cap;
u8 *pos;
struct ieee80211_sub_if_data *sdata;
int hdr_len = offsetofend(struct ieee80211_mgmt, u.beacon);
ie_len_he_cap = ieee80211_ie_len_he_cap(sdata,
NL80211_IFTYPE_MESH_POINT);
+ ie_len_eht_cap = ieee80211_ie_len_eht_cap(sdata,
+ NL80211_IFTYPE_MESH_POINT);
head_len = hdr_len +
2 + /* NULL SSID */
/* Channel Switch Announcement */
2 + 1 + sizeof(struct ieee80211_he_operation) +
sizeof(struct ieee80211_he_6ghz_oper) +
2 + 1 + sizeof(struct ieee80211_he_6ghz_capa) +
+ ie_len_eht_cap +
+ 2 + 1 + offsetof(struct ieee80211_eht_operation, optional) +
+ offsetof(struct ieee80211_eht_operation_info, optional) +
ifmsh->ie_len;
bcn = kzalloc(sizeof(*bcn) + head_len + tail_len, GFP_KERNEL);
mesh_add_he_cap_ie(sdata, skb, ie_len_he_cap) ||
mesh_add_he_oper_ie(sdata, skb) ||
mesh_add_he_6ghz_cap_ie(sdata, skb) ||
+ mesh_add_eht_cap_ie(sdata, skb, ie_len_eht_cap) ||
+ mesh_add_eht_oper_ie(sdata, skb) ||
mesh_add_vendor_ies(sdata, skb))
goto out_free;
u8 rann_snd_addr[ETH_ALEN];
u32 rann_metric;
unsigned long last_preq_to_root;
+ unsigned long fast_tx_check;
bool is_root;
bool is_gate;
u32 path_change_count;
};
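+/*
+ * Fast xmit cache sizing: mesh_fast_tx_cache() stops adding entries once the
+ * table holds MESH_FAST_TX_CACHE_MAX_SIZE entries, and the housekeeping GC in
+ * mesh_fast_tx_gc() only runs once it grows past the threshold, freeing
+ * entries that have not been used for MESH_FAST_TX_CACHE_TIMEOUT msecs.
+ */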
+#define MESH_FAST_TX_CACHE_MAX_SIZE 512
+#define MESH_FAST_TX_CACHE_THRESHOLD_SIZE 384
+#define MESH_FAST_TX_CACHE_TIMEOUT 8000 /* msecs */
+
+/**
+ * struct ieee80211_mesh_fast_tx - cached mesh fast tx entry
+ * @rhash: rhashtable pointer
+ * @addr_key: The Ethernet DA which is the key for this entry
+ * @fast_tx: base fast_tx data
+ * @hdr: cached mesh and rfc1042 headers
+ * @hdrlen: length of mesh + rfc1042
+ * @walk_list: list containing all the fast tx entries
+ * @mpath: mesh path corresponding to the Mesh DA
+ * @mppath: MPP entry corresponding to this DA
+ * @timestamp: Last used time of this entry
+ */
+struct ieee80211_mesh_fast_tx {
+ struct rhash_head rhash;
+ u8 addr_key[ETH_ALEN] __aligned(2);
+
+ struct ieee80211_fast_tx fast_tx;
+ u8 hdr[sizeof(struct ieee80211s_hdr) + sizeof(rfc1042_header)];
+ u16 hdrlen;
+
+ struct mesh_path *mpath, *mppath;
+ struct hlist_node walk_list;
+ unsigned long timestamp;
+};
+
/* Recent multicast cache */
/* RMC_BUCKETS must be a power of 2, maximum 256 */
#define RMC_BUCKETS 256
struct sk_buff *skb);
int mesh_add_he_6ghz_cap_ie(struct ieee80211_sub_if_data *sdata,
struct sk_buff *skb);
+int mesh_add_eht_cap_ie(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u8 ie_len);
+int mesh_add_eht_oper_ie(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb);
void mesh_rmc_free(struct ieee80211_sub_if_data *sdata);
int mesh_rmc_init(struct ieee80211_sub_if_data *sdata);
void ieee80211s_init(void);
void mesh_path_tx_root_frame(struct ieee80211_sub_if_data *sdata);
bool mesh_action_is_path_sel(struct ieee80211_mgmt *mgmt);
+struct ieee80211_mesh_fast_tx *
+mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, const u8 *addr);
+bool ieee80211_mesh_xmit_fast(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u32 ctrl_flags);
+void mesh_fast_tx_cache(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, struct mesh_path *mpath);
+void mesh_fast_tx_gc(struct ieee80211_sub_if_data *sdata);
+void mesh_fast_tx_flush_addr(struct ieee80211_sub_if_data *sdata,
+ const u8 *addr);
+void mesh_fast_tx_flush_mpath(struct mesh_path *mpath);
+void mesh_fast_tx_flush_sta(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta);
+void mesh_path_refresh(struct ieee80211_sub_if_data *sdata,
+ struct mesh_path *mpath, const u8 *addr);
#ifdef CONFIG_MAC80211_MESH
static inline
u32 orig_sn, orig_metric;
unsigned long orig_lifetime, exp_time;
u32 last_hop_metric, new_metric;
+ bool flush_mpath = false;
bool process = true;
u8 hopcount;
}
if (fresh_info) {
- if (rcu_access_pointer(mpath->next_hop) != sta)
+ if (rcu_access_pointer(mpath->next_hop) != sta) {
mpath->path_change_count++;
+ flush_mpath = true;
+ }
mesh_path_assign_nexthop(mpath, sta);
mpath->flags |= MESH_PATH_SN_VALID;
mpath->metric = new_metric;
mpath->hop_count = hopcount;
mesh_path_activate(mpath);
spin_unlock_bh(&mpath->state_lock);
+ if (flush_mpath)
+ mesh_fast_tx_flush_mpath(mpath);
ewma_mesh_fail_avg_init(&sta->mesh->fail_avg);
/* init it at a low value - 0 start is tricky */
ewma_mesh_fail_avg_add(&sta->mesh->fail_avg, 1);
}
if (fresh_info) {
- if (rcu_access_pointer(mpath->next_hop) != sta)
+ if (rcu_access_pointer(mpath->next_hop) != sta) {
mpath->path_change_count++;
+ flush_mpath = true;
+ }
mesh_path_assign_nexthop(mpath, sta);
mpath->metric = last_hop_metric;
mpath->exp_time = time_after(mpath->exp_time, exp_time)
mpath->hop_count = 1;
mesh_path_activate(mpath);
spin_unlock_bh(&mpath->state_lock);
+ if (flush_mpath)
+ mesh_fast_tx_flush_mpath(mpath);
ewma_mesh_fail_avg_init(&sta->mesh->fail_avg);
/* init it at a low value - 0 start is tricky */
ewma_mesh_fail_avg_add(&sta->mesh->fail_avg, 1);
return 0;
}
+void mesh_path_refresh(struct ieee80211_sub_if_data *sdata,
+ struct mesh_path *mpath, const u8 *addr)
+{
+ if (mpath->flags & (MESH_PATH_REQ_QUEUED | MESH_PATH_FIXED |
+ MESH_PATH_RESOLVING))
+ return;
+
+ if (time_after(jiffies,
+ mpath->exp_time -
+ msecs_to_jiffies(sdata->u.mesh.mshcfg.path_refresh_time)) &&
+ (!addr || ether_addr_equal(sdata->vif.addr, addr)))
+ mesh_queue_preq(mpath, PREQ_Q_F_START | PREQ_Q_F_REFRESH);
+}
+
/**
* mesh_nexthop_lookup - put the appropriate next hop on a mesh frame. Calling
* this function is considered "using" the associated mpath, so preempt a path
if (!mpath || !(mpath->flags & MESH_PATH_ACTIVE))
return -ENOENT;
- if (time_after(jiffies,
- mpath->exp_time -
- msecs_to_jiffies(sdata->u.mesh.mshcfg.path_refresh_time)) &&
- ether_addr_equal(sdata->vif.addr, hdr->addr4) &&
- !(mpath->flags & MESH_PATH_RESOLVING) &&
- !(mpath->flags & MESH_PATH_FIXED))
- mesh_queue_preq(mpath, PREQ_Q_F_START | PREQ_Q_F_REFRESH);
+ mesh_path_refresh(sdata, mpath, hdr->addr4);
next_hop = rcu_dereference(mpath->next_hop);
if (next_hop) {
memcpy(hdr->addr1, next_hop->sta.addr, ETH_ALEN);
memcpy(hdr->addr2, sdata->vif.addr, ETH_ALEN);
ieee80211_mps_set_frame_flags(sdata, next_hop, hdr);
+ if (ieee80211_hw_check(&sdata->local->hw, SUPPORT_FAST_XMIT))
+ mesh_fast_tx_cache(sdata, skb, mpath);
return 0;
}
#include "wme.h"
#include "ieee80211_i.h"
#include "mesh.h"
+#include <linux/rhashtable.h>
static void mesh_path_free_rcu(struct mesh_table *tbl, struct mesh_path *mpath);
.hashfn = mesh_table_hash,
};
+static const struct rhashtable_params fast_tx_rht_params = {
+ .nelem_hint = 10,
+ .automatic_shrinking = true,
+ .key_len = ETH_ALEN,
+ .key_offset = offsetof(struct ieee80211_mesh_fast_tx, addr_key),
+ .head_offset = offsetof(struct ieee80211_mesh_fast_tx, rhash),
+ .hashfn = mesh_table_hash,
+};
+
+static void __mesh_fast_tx_entry_free(void *ptr, void *tblptr)
+{
+ struct ieee80211_mesh_fast_tx *entry = ptr;
+
+ kfree_rcu(entry, fast_tx.rcu_head);
+}
+
+static void mesh_fast_tx_deinit(struct ieee80211_sub_if_data *sdata)
+{
+ struct mesh_tx_cache *cache;
+
+ cache = &sdata->u.mesh.tx_cache;
+ rhashtable_free_and_destroy(&cache->rht,
+ __mesh_fast_tx_entry_free, NULL);
+}
+
+static void mesh_fast_tx_init(struct ieee80211_sub_if_data *sdata)
+{
+ struct mesh_tx_cache *cache;
+
+ cache = &sdata->u.mesh.tx_cache;
+ rhashtable_init(&cache->rht, &fast_tx_rht_params);
+ INIT_HLIST_HEAD(&cache->walk_head);
+ spin_lock_init(&cache->walk_lock);
+}
+
static inline bool mpath_expired(struct mesh_path *mpath)
{
return (mpath->flags & MESH_PATH_ACTIVE) &&
return new_mpath;
}
+static void mesh_fast_tx_entry_free(struct mesh_tx_cache *cache,
+ struct ieee80211_mesh_fast_tx *entry)
+{
+ hlist_del_rcu(&entry->walk_list);
+ rhashtable_remove_fast(&cache->rht, &entry->rhash, fast_tx_rht_params);
+ kfree_rcu(entry, fast_tx.rcu_head);
+}
+
+struct ieee80211_mesh_fast_tx *
+mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, const u8 *addr)
+{
+ struct ieee80211_mesh_fast_tx *entry;
+ struct mesh_tx_cache *cache;
+
+ cache = &sdata->u.mesh.tx_cache;
+ entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params);
+ if (!entry)
+ return NULL;
+
+ if (!(entry->mpath->flags & MESH_PATH_ACTIVE) ||
+ mpath_expired(entry->mpath)) {
+ spin_lock_bh(&cache->walk_lock);
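+		/* stale path: re-check under walk_lock so the free cannot race
+		 * with a concurrent removal of the same entry
+		 */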
+ entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params);
+ if (entry)
+ mesh_fast_tx_entry_free(cache, entry);
+ spin_unlock_bh(&cache->walk_lock);
+ return NULL;
+ }
+
+ mesh_path_refresh(sdata, entry->mpath, NULL);
+ if (entry->mppath)
+ entry->mppath->exp_time = jiffies;
+ entry->timestamp = jiffies;
+
+ return entry;
+}
+
+void mesh_fast_tx_cache(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, struct mesh_path *mpath)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_mesh_fast_tx *entry, *prev;
+ struct ieee80211_mesh_fast_tx build = {};
+ struct ieee80211s_hdr *meshhdr;
+ struct mesh_tx_cache *cache;
+ struct ieee80211_key *key;
+ struct mesh_path *mppath;
+ struct sta_info *sta;
+ u8 *qc;
+
+ if (sdata->noack_map ||
+ !ieee80211_is_data_qos(hdr->frame_control))
+ return;
+
+ build.fast_tx.hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ meshhdr = (struct ieee80211s_hdr *)(skb->data + build.fast_tx.hdr_len);
+ build.hdrlen = ieee80211_get_mesh_hdrlen(meshhdr);
+
+ cache = &sdata->u.mesh.tx_cache;
+ if (atomic_read(&cache->rht.nelems) >= MESH_FAST_TX_CACHE_MAX_SIZE)
+ return;
+
+ sta = rcu_dereference(mpath->next_hop);
+ if (!sta)
+ return;
+
+ if ((meshhdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6) {
+ /* This is required to keep the mppath alive */
+ mppath = mpp_path_lookup(sdata, meshhdr->eaddr1);
+ if (!mppath)
+ return;
+ build.mppath = mppath;
+ } else if (ieee80211_has_a4(hdr->frame_control)) {
+ mppath = mpath;
+ } else {
+ return;
+ }
+
+ /* rate limit, in case fast xmit can't be enabled */
+ if (mppath->fast_tx_check == jiffies)
+ return;
+
+ mppath->fast_tx_check = jiffies;
+
+ /*
+ * Same use of the sta lock as in ieee80211_check_fast_xmit, in order
+ * to protect against concurrent sta key updates.
+ */
+ spin_lock_bh(&sta->lock);
+ key = rcu_access_pointer(sta->ptk[sta->ptk_idx]);
+ if (!key)
+ key = rcu_access_pointer(sdata->default_unicast_key);
+ build.fast_tx.key = key;
+
+ if (key) {
+ bool gen_iv, iv_spc;
+
+ gen_iv = key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV;
+ iv_spc = key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE;
+
+ if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) ||
+ (key->flags & KEY_FLAG_TAINTED))
+ goto unlock_sta;
+
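+		/* only CCMP/GCMP are handled on the fast path; reserve room for
+		 * the PN/IV header when it has to be generated or left as space
+		 */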
+ switch (key->conf.cipher) {
+ case WLAN_CIPHER_SUITE_CCMP:
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ if (gen_iv)
+ build.fast_tx.pn_offs = build.fast_tx.hdr_len;
+ if (gen_iv || iv_spc)
+ build.fast_tx.hdr_len += IEEE80211_CCMP_HDR_LEN;
+ break;
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ if (gen_iv)
+ build.fast_tx.pn_offs = build.fast_tx.hdr_len;
+ if (gen_iv || iv_spc)
+ build.fast_tx.hdr_len += IEEE80211_GCMP_HDR_LEN;
+ break;
+ default:
+ goto unlock_sta;
+ }
+ }
+
+ memcpy(build.addr_key, mppath->dst, ETH_ALEN);
+ build.timestamp = jiffies;
+ build.fast_tx.band = info->band;
+ build.fast_tx.da_offs = offsetof(struct ieee80211_hdr, addr3);
+ build.fast_tx.sa_offs = offsetof(struct ieee80211_hdr, addr4);
+ build.mpath = mpath;
+ memcpy(build.hdr, meshhdr, build.hdrlen);
+ memcpy(build.hdr + build.hdrlen, rfc1042_header, sizeof(rfc1042_header));
+ build.hdrlen += sizeof(rfc1042_header);
+ memcpy(build.fast_tx.hdr, hdr, build.fast_tx.hdr_len);
+
+ hdr = (struct ieee80211_hdr *)build.fast_tx.hdr;
+ if (build.fast_tx.key)
+ hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+
+ qc = ieee80211_get_qos_ctl(hdr);
+ qc[1] |= IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT >> 8;
+
+ entry = kmemdup(&build, sizeof(build), GFP_ATOMIC);
+ if (!entry)
+ goto unlock_sta;
+
+ spin_lock(&cache->walk_lock);
+ prev = rhashtable_lookup_get_insert_fast(&cache->rht,
+ &entry->rhash,
+ fast_tx_rht_params);
+ if (unlikely(IS_ERR(prev))) {
+ kfree(entry);
+ goto unlock_cache;
+ }
+
+ /*
+ * replace any previous entry in the hash table, in case we're
+ * replacing it with a different type (e.g. mpath -> mpp)
+ */
+ if (unlikely(prev)) {
+ rhashtable_replace_fast(&cache->rht, &prev->rhash,
+ &entry->rhash, fast_tx_rht_params);
+ hlist_del_rcu(&prev->walk_list);
+ kfree_rcu(prev, fast_tx.rcu_head);
+ }
+
+ hlist_add_head(&entry->walk_list, &cache->walk_head);
+
+unlock_cache:
+ spin_unlock(&cache->walk_lock);
+unlock_sta:
+ spin_unlock_bh(&sta->lock);
+}
+
+void mesh_fast_tx_gc(struct ieee80211_sub_if_data *sdata)
+{
+ unsigned long timeout = msecs_to_jiffies(MESH_FAST_TX_CACHE_TIMEOUT);
+ struct mesh_tx_cache *cache;
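+		/* ccfs1 is the center of the full 320 MHz channel, ccfs0 the
+		 * center of the primary 160 MHz, 16 channel numbers (80 MHz)
+		 * below or above it
+		 */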
+ struct ieee80211_mesh_fast_tx *entry;
+ struct hlist_node *n;
+
+ cache = &sdata->u.mesh.tx_cache;
+ if (atomic_read(&cache->rht.nelems) < MESH_FAST_TX_CACHE_THRESHOLD_SIZE)
+ return;
+
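+		/* ccfs1 holds the 160 MHz center, ccfs0 the primary 80 MHz
+		 * center, 8 channel numbers (40 MHz) away from it
+		 */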
+ spin_lock_bh(&cache->walk_lock);
+ hlist_for_each_entry_safe(entry, n, &cache->walk_head, walk_list)
+ if (!time_is_after_jiffies(entry->timestamp + timeout))
+ mesh_fast_tx_entry_free(cache, entry);
+ spin_unlock_bh(&cache->walk_lock);
+}
+
+void mesh_fast_tx_flush_mpath(struct mesh_path *mpath)
+{
+ struct ieee80211_sub_if_data *sdata = mpath->sdata;
+ struct mesh_tx_cache *cache = &sdata->u.mesh.tx_cache;
+ struct ieee80211_mesh_fast_tx *entry;
+ struct hlist_node *n;
+
+ spin_lock_bh(&cache->walk_lock);
+ hlist_for_each_entry_safe(entry, n, &cache->walk_head, walk_list)
+ if (entry->mpath == mpath)
+ mesh_fast_tx_entry_free(cache, entry);
+ spin_unlock_bh(&cache->walk_lock);
+}
+
+void mesh_fast_tx_flush_sta(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta)
+{
+ struct mesh_tx_cache *cache = &sdata->u.mesh.tx_cache;
+ struct ieee80211_mesh_fast_tx *entry;
+ struct hlist_node *n;
+
+ spin_lock_bh(&cache->walk_lock);
+ hlist_for_each_entry_safe(entry, n, &cache->walk_head, walk_list)
+ if (rcu_access_pointer(entry->mpath->next_hop) == sta)
+ mesh_fast_tx_entry_free(cache, entry);
+ spin_unlock_bh(&cache->walk_lock);
+}
+
+void mesh_fast_tx_flush_addr(struct ieee80211_sub_if_data *sdata,
+ const u8 *addr)
+{
+ struct mesh_tx_cache *cache = &sdata->u.mesh.tx_cache;
+ struct ieee80211_mesh_fast_tx *entry;
+
+ spin_lock_bh(&cache->walk_lock);
+ entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params);
+ if (entry)
+ mesh_fast_tx_entry_free(cache, entry);
+ spin_unlock_bh(&cache->walk_lock);
+}
+
/**
* mesh_path_add - allocate and add a new path to the mesh path table
* @dst: destination address of the path (ETH_ALEN length)
if (ret)
kfree(new_mpath);
+ else
+ mesh_fast_tx_flush_addr(sdata, dst);
sdata->u.mesh.mpp_paths_generation++;
return ret;
{
hlist_del_rcu(&mpath->walk_list);
rhashtable_remove_fast(&tbl->rhead, &mpath->rhash, mesh_rht_params);
+ if (tbl == &mpath->sdata->u.mesh.mpp_paths)
+ mesh_fast_tx_flush_addr(mpath->sdata, mpath->dst);
+ else
+ mesh_fast_tx_flush_mpath(mpath);
mesh_path_free_rcu(tbl, mpath);
}
mpath->exp_time = 0;
mpath->flags = MESH_PATH_FIXED | MESH_PATH_SN_VALID;
mesh_path_activate(mpath);
+ mesh_fast_tx_flush_mpath(mpath);
spin_unlock_bh(&mpath->state_lock);
ewma_mesh_fail_avg_init(&next_hop->mesh->fail_avg);
/* init it at a low value - 0 start is tricky */
{
mesh_table_init(&sdata->u.mesh.mesh_paths);
mesh_table_init(&sdata->u.mesh.mpp_paths);
+ mesh_fast_tx_init(sdata);
}
static
void mesh_pathtbl_unregister(struct ieee80211_sub_if_data *sdata)
{
+ mesh_fast_tx_deinit(sdata);
mesh_table_free(&sdata->u.mesh.mesh_paths);
mesh_table_free(&sdata->u.mesh.mpp_paths);
}
bool include_plid = false;
u16 peering_proto = 0;
u8 *pos, ie_len = 4;
- u8 ie_len_he_cap;
+ u8 ie_len_he_cap, ie_len_eht_cap;
int hdr_len = offsetofend(struct ieee80211_mgmt, u.action.u.self_prot);
int err = -ENOMEM;
ie_len_he_cap = ieee80211_ie_len_he_cap(sdata,
NL80211_IFTYPE_MESH_POINT);
+ ie_len_eht_cap = ieee80211_ie_len_eht_cap(sdata,
+ NL80211_IFTYPE_MESH_POINT);
skb = dev_alloc_skb(local->tx_headroom +
hdr_len +
2 + /* capability info */
2 + 1 + sizeof(struct ieee80211_he_operation) +
sizeof(struct ieee80211_he_6ghz_oper) +
2 + 1 + sizeof(struct ieee80211_he_6ghz_capa) +
+ ie_len_eht_cap +
+ 2 + 1 + offsetof(struct ieee80211_eht_operation, optional) +
+ offsetof(struct ieee80211_eht_operation_info, optional) +
2 + 8 + /* peering IE */
sdata->u.mesh.ie_len);
if (!skb)
mesh_add_vht_oper_ie(sdata, skb) ||
mesh_add_he_cap_ie(sdata, skb, ie_len_he_cap) ||
mesh_add_he_oper_ie(sdata, skb) ||
- mesh_add_he_6ghz_cap_ie(sdata, skb))
+ mesh_add_he_6ghz_cap_ie(sdata, skb) ||
+ mesh_add_eht_cap_ie(sdata, skb, ie_len_eht_cap) ||
+ mesh_add_eht_oper_ie(sdata, skb))
goto free;
}
elems->he_6ghz_capa,
&sta->deflink);
+ ieee80211_eht_cap_ie_to_sta_eht_cap(sdata, sband, elems->he_cap,
+ elems->he_cap_len,
+ elems->eht_cap, elems->eht_cap_len,
+ &sta->deflink);
+
if (bw != sta->sta.deflink.bandwidth)
changed |= IEEE80211_RC_BW_CHANGED;
return changed;
}
-static u32 ieee80211_link_set_associated(struct ieee80211_link_data *link,
+static u64 ieee80211_link_set_associated(struct ieee80211_link_data *link,
struct cfg80211_bss *cbss)
{
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
bool already = false;
- if (WARN_ON(sdata->vif.valid_links))
+ if (WARN_ON_ONCE(sdata->vif.valid_links))
return;
if (!ieee80211_sdata_running(sdata))
goto free;
}
- if (sta && elems->opmode_notif)
+ if (elems->opmode_notif)
ieee80211_vht_handle_opmode(sdata, link_sta,
*elems->opmode_notif,
rx_status->band);
struct sta_info *sta_info;
bool ldpc, erp;
int use_vht;
- int n_supported = 0;
int ack_dur;
int stbc;
int i;
continue;
mi->supported[i] = mcs->rx_mask[nss - 1];
- if (mi->supported[i])
- n_supported++;
continue;
}
mi->supported[i] = minstrel_get_valid_vht_rates(bw, nss,
vht_cap->vht_mcs.tx_mcs_map);
-
- if (mi->supported[i])
- n_supported++;
}
sta_info = container_of(sta, struct sta_info, sta);
unsigned int present_fcs_len,
unsigned int rtap_space)
{
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
struct ieee80211_hdr *hdr;
unsigned int hdrlen;
__le16 fc;
__pskb_trim(skb, skb->len - present_fcs_len);
pskb_pull(skb, rtap_space);
+	/* After pulling the radiotap header, clear all flags that indicate
+	 * radiotap info is still present in skb->data.
+	 */
+ status->flag &= ~(RX_FLAG_RADIOTAP_TLV_AT_END |
+ RX_FLAG_RADIOTAP_LSIG |
+ RX_FLAG_RADIOTAP_HE_MU |
+ RX_FLAG_RADIOTAP_HE);
+
hdr = (void *)skb->data;
fc = hdr->frame_control;
/* allocate extra bitmaps */
if (status->chains)
len += 4 * hweight8(status->chains);
- /* vendor presence bitmap */
- if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA)
- len += 4;
if (ieee80211_have_rx_timestamp(status)) {
len = ALIGN(len, 8);
len += 2 * hweight8(status->chains);
}
- if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) {
- struct ieee80211_vendor_radiotap *rtap;
- int vendor_data_offset = 0;
+ if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END) {
+ int tlv_offset = 0;
/*
* The position to look at depends on the existence (or non-
* existence) of other elements, so take that into account...
*/
if (status->flag & RX_FLAG_RADIOTAP_HE)
- vendor_data_offset +=
+ tlv_offset +=
sizeof(struct ieee80211_radiotap_he);
if (status->flag & RX_FLAG_RADIOTAP_HE_MU)
- vendor_data_offset +=
+ tlv_offset +=
sizeof(struct ieee80211_radiotap_he_mu);
if (status->flag & RX_FLAG_RADIOTAP_LSIG)
- vendor_data_offset +=
+ tlv_offset +=
sizeof(struct ieee80211_radiotap_lsig);
- rtap = (void *)&skb->data[vendor_data_offset];
+ /* ensure 4 byte alignment for TLV */
+ len = ALIGN(len, 4);
- /* alignment for fixed 6-byte vendor data header */
- len = ALIGN(len, 2);
- /* vendor data header */
- len += 6;
- if (WARN_ON(rtap->align == 0))
- rtap->align = 1;
- len = ALIGN(len, rtap->align);
- len += rtap->len + rtap->pad;
+ /* TLVs until the mac header */
+ len += skb_mac_header(skb) - &skb->data[tlv_offset];
}
return len;
u32 it_present_val;
u16 rx_flags = 0;
u16 channel_flags = 0;
+ u32 tlvs_len = 0;
int mpdulen, chain;
unsigned long chains = status->chains;
- struct ieee80211_vendor_radiotap rtap = {};
struct ieee80211_radiotap_he he = {};
struct ieee80211_radiotap_he_mu he_mu = {};
struct ieee80211_radiotap_lsig lsig = {};
skb_pull(skb, sizeof(lsig));
}
- if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) {
- rtap = *(struct ieee80211_vendor_radiotap *)skb->data;
- /* rtap.len and rtap.pad are undone immediately */
- skb_pull(skb, sizeof(rtap) + rtap.len + rtap.pad);
+ if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END) {
+		/* skb->data now points at the TLVs; all other info was already pulled off */
+ tlvs_len = skb_mac_header(skb) - skb->data;
}
mpdulen = skb->len;
if (!(has_fcs && ieee80211_hw_check(&local->hw, RX_INCLUDES_FCS)))
mpdulen += FCS_LEN;
- rthdr = skb_push(skb, rtap_len);
- memset(rthdr, 0, rtap_len - rtap.len - rtap.pad);
+ rthdr = skb_push(skb, rtap_len - tlvs_len);
+ memset(rthdr, 0, rtap_len - tlvs_len);
it_present = &rthdr->it_present;
/* radiotap header, set always present flags */
BIT(IEEE80211_RADIOTAP_DBM_ANTSIGNAL);
}
- if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) {
- it_present_val |= BIT(IEEE80211_RADIOTAP_VENDOR_NAMESPACE) |
- BIT(IEEE80211_RADIOTAP_EXT);
- put_unaligned_le32(it_present_val, it_present);
- it_present++;
- it_present_val = rtap.present;
- }
+ if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END)
+ it_present_val |= BIT(IEEE80211_RADIOTAP_TLV);
put_unaligned_le32(it_present_val, it_present);
*pos++ = status->chain_signal[chain];
*pos++ = chain;
}
-
- if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) {
- /* ensure 2 byte alignment for the vendor field as required */
- if ((pos - (u8 *)rthdr) & 1)
- *pos++ = 0;
- *pos++ = rtap.oui[0];
- *pos++ = rtap.oui[1];
- *pos++ = rtap.oui[2];
- *pos++ = rtap.subns;
- put_unaligned_le16(rtap.len, pos);
- pos += 2;
- /* align the actual payload as requested */
- while ((pos - (u8 *)rthdr) & (rtap.align - 1))
- *pos++ = 0;
- /* data (and possible padding) already follows */
- }
}
static struct sk_buff *
bool only_monitor = false;
unsigned int min_head_len;
+ if (WARN_ON_ONCE(status->flag & RX_FLAG_RADIOTAP_TLV_AT_END &&
+ !skb_mac_header_was_set(origskb))) {
+		/* with this skb there is no way to know where the frame payload starts */
+ dev_kfree_skb(origskb);
+ return NULL;
+ }
+
if (status->flag & RX_FLAG_RADIOTAP_HE)
rtap_space += sizeof(struct ieee80211_radiotap_he);
if (status->flag & RX_FLAG_RADIOTAP_LSIG)
rtap_space += sizeof(struct ieee80211_radiotap_lsig);
- if (unlikely(status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA)) {
- struct ieee80211_vendor_radiotap *rtap =
- (void *)(origskb->data + rtap_space);
-
- rtap_space += sizeof(*rtap) + rtap->len + rtap->pad;
- }
+ if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END)
+ rtap_space += skb_mac_header(origskb) - &origskb->data[rtap_space];
min_head_len = rtap_space;
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
bool noencrypt = !(status->flag & RX_FLAG_DECRYPTED);
- cfg80211_rx_control_port(dev, skb, noencrypt);
+ cfg80211_rx_control_port(dev, skb, noencrypt, rx->link_id);
dev_kfree_skb(skb);
} else {
struct ethhdr *ehdr = (void *)skb_mac_header(skb);
}
}
+#ifdef CONFIG_MAC80211_MESH
+static bool
+ieee80211_rx_mesh_fast_forward(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, int hdrlen)
+{
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+ struct ieee80211_mesh_fast_tx *entry = NULL;
+ struct ieee80211s_hdr *mesh_hdr;
+ struct tid_ampdu_tx *tid_tx;
+ struct sta_info *sta;
+ struct ethhdr eth;
+ u8 tid;
+
+ mesh_hdr = (struct ieee80211s_hdr *)(skb->data + sizeof(eth));
+ if ((mesh_hdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6)
+ entry = mesh_fast_tx_get(sdata, mesh_hdr->eaddr1);
+ else if (!(mesh_hdr->flags & MESH_FLAGS_AE))
+ entry = mesh_fast_tx_get(sdata, skb->data);
+ if (!entry)
+ return false;
+
+ sta = rcu_dereference(entry->mpath->next_hop);
+ if (!sta)
+ return false;
+
+ if (skb_linearize(skb))
+ return false;
+
+ tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ tid_tx = rcu_dereference(sta->ampdu_mlme.tid_tx[tid]);
+ if (tid_tx) {
+ if (!test_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state))
+ return false;
+
+ if (tid_tx->timeout)
+ tid_tx->last_tx = jiffies;
+ }
+
+ ieee80211_aggr_check(sdata, sta, skb);
+
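+	/* extract the ethertype from the RFC 1042 SNAP header if present,
+	 * otherwise treat the frame as raw 802.3 and use its payload length
+	 */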
+ if (ieee80211_get_8023_tunnel_proto(skb->data + hdrlen,
+ &skb->protocol))
+ hdrlen += ETH_ALEN;
+ else
+ skb->protocol = htons(skb->len - hdrlen);
+ skb_set_network_header(skb, hdrlen + 2);
+
+ skb->dev = sdata->dev;
+ memcpy(ð, skb->data, ETH_HLEN - 2);
+ skb_pull(skb, 2);
+ __ieee80211_xmit_fast(sdata, sta, &entry->fast_tx, skb, tid_tx,
+ eth.h_dest, eth.h_source);
+ IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_unicast);
+ IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);
+
+ return true;
+}
+#endif
+
static ieee80211_rx_result
ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta,
struct sk_buff *skb)
if (mesh_hdr->flags & MESH_FLAGS_AE) {
struct mesh_path *mppath;
char *proxied_addr;
+ bool update = false;
if (multicast)
proxied_addr = mesh_hdr->eaddr1;
mpp_path_add(sdata, proxied_addr, eth->h_source);
} else {
spin_lock_bh(&mppath->state_lock);
- if (!ether_addr_equal(mppath->mpp, eth->h_source))
+ if (!ether_addr_equal(mppath->mpp, eth->h_source)) {
memcpy(mppath->mpp, eth->h_source, ETH_ALEN);
+ update = true;
+ }
mppath->exp_time = jiffies;
spin_unlock_bh(&mppath->state_lock);
}
+
+ /* flush fast xmit cache if the address path changed */
+ if (update)
+ mesh_fast_tx_flush_addr(sdata, proxied_addr);
+
rcu_read_unlock();
}
skb_set_queue_mapping(skb, ieee802_1d_to_ac[skb->priority]);
+ if (!multicast &&
+ ieee80211_rx_mesh_fast_forward(sdata, skb, mesh_hdrlen))
+ return RX_QUEUED;
+
ieee80211_fill_mesh_addresses(&hdr, &hdr.frame_control,
eth->h_dest, eth->h_source);
hdrlen = ieee80211_hdrlen(hdr.frame_control);
info->control.flags |= IEEE80211_TX_INTCFL_NEED_TXPROCESSING;
info->control.vif = &sdata->vif;
info->control.jiffies = jiffies;
+ fwd_skb->dev = sdata->dev;
if (multicast) {
IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_mcast);
memcpy(fwd_hdr->addr2, sdata->vif.addr, ETH_ALEN);
}
IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);
- fwd_skb->dev = sdata->dev;
ieee80211_add_pending_skb(local, fwd_skb);
rx_accept:
return RX_DROP_UNUSABLE;
if (rx->sta->amsdu_mesh_control < 0) {
- bool valid_std = ieee80211_is_valid_amsdu(skb, true);
- bool valid_nonstd = ieee80211_is_valid_amsdu(skb, false);
+ s8 valid = -1;
+ int i;
+
+ for (i = 0; i <= 2; i++) {
+ if (!ieee80211_is_valid_amsdu(skb, i))
+ continue;
+
+ if (valid >= 0) {
+ /* ambiguous */
+ valid = -1;
+ break;
+ }
- if (valid_std && !valid_nonstd)
- rx->sta->amsdu_mesh_control = 1;
- else if (valid_nonstd && !valid_std)
- rx->sta->amsdu_mesh_control = 0;
+ valid = i;
+ }
+
+ rx->sta->amsdu_mesh_control = valid;
}
ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr,
if (!local->cooked_mntrs)
goto out_free_skb;
- /* vendor data is long removed here */
- status->flag &= ~RX_FLAG_RADIOTAP_VENDOR_DATA;
/* room for the radiotap header based on driver features */
needed_headroom = ieee80211_rx_radiotap_hdrlen(local, status, skb);
}
break;
+ case NL80211_IFTYPE_MESH_POINT:
+ fastrx.expected_ds_bits = cpu_to_le16(IEEE80211_FCTL_FROMDS |
+ IEEE80211_FCTL_TODS);
+ fastrx.da_offs = offsetof(struct ieee80211_hdr, addr3);
+ fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr4);
+ break;
default:
goto clear;
}
struct sk_buff *skb = rx->skb;
struct ieee80211_hdr *hdr = (void *)skb->data;
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+	ieee80211_rx_result res;
int orig_len = skb->len;
int hdrlen = ieee80211_hdrlen(hdr->frame_control);
int snap_offs = hdrlen;
snap_offs += IEEE80211_CCMP_HDR_LEN;
}
- if (!(status->rx_flags & IEEE80211_RX_AMSDU)) {
+ if (!ieee80211_vif_is_mesh(&rx->sdata->vif) &&
+ !(status->rx_flags & IEEE80211_RX_AMSDU)) {
if (!pskb_may_pull(skb, snap_offs + sizeof(*payload)))
return false;
/* do the header conversion - first grab the addresses */
ether_addr_copy(addrs.da, skb->data + fast_rx->da_offs);
ether_addr_copy(addrs.sa, skb->data + fast_rx->sa_offs);
- skb_postpull_rcsum(skb, skb->data + snap_offs,
- sizeof(rfc1042_header) + 2);
- /* remove the SNAP but leave the ethertype */
- skb_pull(skb, snap_offs + sizeof(rfc1042_header));
+ if (ieee80211_vif_is_mesh(&rx->sdata->vif)) {
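+		/* keep two bytes in front of the payload as an 802.3 length
+		 * field; ieee80211_rx_mesh_data() expects an ethernet style
+		 * header once the addresses are pushed back on below
+		 */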
+ skb_pull(skb, snap_offs - 2);
+ put_unaligned_be16(skb->len - 2, skb->data);
+ } else {
+ skb_postpull_rcsum(skb, skb->data + snap_offs,
+ sizeof(rfc1042_header) + 2);
+
+ /* remove the SNAP but leave the ethertype */
+ skb_pull(skb, snap_offs + sizeof(rfc1042_header));
+ }
/* push the addresses in front */
memcpy(skb_push(skb, sizeof(addrs)), &addrs, sizeof(addrs));
+ res = ieee80211_rx_mesh_data(rx->sdata, rx->sta, rx->skb);
+ switch (res) {
+ case RX_QUEUED:
+ return true;
+ case RX_CONTINUE:
+ break;
+ default:
+ goto drop;
+ }
+
ieee80211_rx_8023(rx, fast_rx, orig_len);
return true;
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2013-2015 Intel Mobile Communications GmbH
* Copyright 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2021 Intel Corporation
+ * Copyright (C) 2018-2022 Intel Corporation
*/
#include <linux/if_arp.h>
return ret;
}
-/*
- * Only call this function when a scan can't be queued -- under RTNL.
- */
void ieee80211_scan_cancel(struct ieee80211_local *local)
{
+ /* ensure a new scan cannot be queued */
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/*
* We are canceling software scan, or deferred scan that was not
* yet really started (see __ieee80211_start_scan ).
* taken from HT/VHT capabilities or VHT operating mode notification
* @cparams: CoDel parameters for this station.
* @reserved_tid: reserved TID (if any, otherwise IEEE80211_TID_UNRESERVED)
- * @amsdu_mesh_control: track the mesh A-MSDU format used by the peer
- * (-1: not yet known, 0: non-standard [without mesh header], 1: standard)
+ * @amsdu_mesh_control: track the mesh A-MSDU format used by the peer:
+ *
+ * * -1: not yet known
+ * * 0: non-mesh A-MSDU length field
+ * * 1: big-endian mesh A-MSDU length field
+ * * 2: little-endian mesh A-MSDU length field
+ *
* @fast_tx: TX fastpath information
* @fast_rx: RX fastpath information
* @tdls_chandef: a TDLS peer can have a wider chandef that is compatible to
TP_ARGS(local, sdata, sta)
);
+TRACE_EVENT(drv_net_setup_tc,
+ TP_PROTO(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata,
+ u8 type),
+
+ TP_ARGS(local, sdata, type),
+
+ TP_STRUCT__entry(
+ LOCAL_ENTRY
+ VIF_ENTRY
+ __field(u8, type)
+ ),
+
+ TP_fast_assign(
+ LOCAL_ASSIGN;
+ VIF_ASSIGN;
+ __entry->type = type;
+ ),
+
+ TP_printk(
+ LOCAL_PR_FMT VIF_PR_FMT " type:%d\n",
+ LOCAL_PR_ARG, VIF_PR_ARG, __entry->type
+ )
+);
+
TRACE_EVENT(drv_change_vif_links,
TP_PROTO(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
return queued;
}
-static void
-ieee80211_aggr_check(struct ieee80211_sub_if_data *sdata,
- struct sta_info *sta,
- struct sk_buff *skb)
+void ieee80211_aggr_check(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta, struct sk_buff *skb)
{
struct rate_control_ref *ref = sdata->local->rate_ctrl;
u16 tid;
if (!ieee80211_hw_check(&local->hw, SUPPORT_FAST_XMIT))
return;
+ if (ieee80211_vif_is_mesh(&sdata->vif))
+ mesh_fast_tx_flush_sta(sdata, sta);
+
/* Locking here protects both the pointer itself, and against concurrent
* invocations winning data access races to, e.g., the key pointer that
* is used.
static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta,
struct ieee80211_fast_tx *fast_tx,
- struct sk_buff *skb)
+ struct sk_buff *skb,
+ const u8 *da, const u8 *sa)
{
struct ieee80211_local *local = sdata->local;
struct fq *fq = &local->fq;
if (sdata->vif.offload_flags & IEEE80211_OFFLOAD_ENCAP_ENABLED)
return false;
+ if (ieee80211_vif_is_mesh(&sdata->vif))
+ return false;
+
if (skb_is_gso(skb))
return false;
ret = true;
data = skb_push(skb, ETH_ALEN + 2);
- memmove(data, data + ETH_ALEN + 2, 2 * ETH_ALEN);
+ ether_addr_copy(data, da);
+ ether_addr_copy(data + ETH_ALEN, sa);
data += 2 * ETH_ALEN;
len = cpu_to_be16(subframe_len);
return NULL;
}
-static void __ieee80211_xmit_fast(struct ieee80211_sub_if_data *sdata,
- struct sta_info *sta,
- struct ieee80211_fast_tx *fast_tx,
- struct sk_buff *skb, u8 tid, bool ampdu)
+void __ieee80211_xmit_fast(struct ieee80211_sub_if_data *sdata,
+ struct sta_info *sta,
+ struct ieee80211_fast_tx *fast_tx,
+ struct sk_buff *skb, bool ampdu,
+ const u8 *da, const u8 *sa)
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_hdr *hdr = (void *)fast_tx->hdr;
ieee80211_tx_result r;
int hw_headroom = sdata->local->hw.extra_tx_headroom;
int extra_head = fast_tx->hdr_len - (ETH_HLEN - 2);
- struct ethhdr eth;
skb = skb_share_check(skb, GFP_ATOMIC);
if (unlikely(!skb))
return;
if ((hdr->frame_control & cpu_to_le16(IEEE80211_STYPE_QOS_DATA)) &&
- ieee80211_amsdu_aggregate(sdata, sta, fast_tx, skb))
+ ieee80211_amsdu_aggregate(sdata, sta, fast_tx, skb, da, sa))
return;
/* will not be crypto-handled beyond what we do here, so use false
ENCRYPT_NO)))
goto free;
- memcpy(ð, skb->data, ETH_HLEN - 2);
hdr = skb_push(skb, extra_head);
memcpy(skb->data, fast_tx->hdr, fast_tx->hdr_len);
- memcpy(skb->data + fast_tx->da_offs, eth.h_dest, ETH_ALEN);
- memcpy(skb->data + fast_tx->sa_offs, eth.h_source, ETH_ALEN);
+ memcpy(skb->data + fast_tx->da_offs, da, ETH_ALEN);
+ memcpy(skb->data + fast_tx->sa_offs, sa, ETH_ALEN);
info = IEEE80211_SKB_CB(skb);
memset(info, 0, sizeof(*info));
#endif
if (hdr->frame_control & cpu_to_le16(IEEE80211_STYPE_QOS_DATA)) {
- tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+
*ieee80211_get_qos_ctl(hdr) = tid;
}
struct ieee80211_hdr *hdr = (void *)fast_tx->hdr;
struct tid_ampdu_tx *tid_tx = NULL;
struct sk_buff *next;
+ struct ethhdr eth;
u8 tid = IEEE80211_NUM_TIDS;
/* control port protocol needs a lot of special handling */
}
}
+ memcpy(ð, skb->data, ETH_HLEN - 2);
+
/* after this point (skb is modified) we cannot return false */
skb = ieee80211_tx_skb_fixup(skb, ieee80211_sdata_netdev_features(sdata));
if (!skb)
skb_list_walk_safe(skb, skb, next) {
skb_mark_not_on_list(skb);
- __ieee80211_xmit_fast(sdata, sta, fast_tx, skb, tid, tid_tx);
+ __ieee80211_xmit_fast(sdata, sta, fast_tx, skb, tid_tx,
+ eth.h_dest, eth.h_source);
}
return true;
return;
}
+ sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);
+
rcu_read_lock();
+ if (ieee80211_vif_is_mesh(&sdata->vif) &&
+ ieee80211_hw_check(&local->hw, SUPPORT_FAST_XMIT) &&
+ ieee80211_mesh_xmit_fast(sdata, skb, ctrl_flags))
+ goto out;
+
if (ieee80211_lookup_ra_sta(sdata, skb, &sta))
goto out_free;
skb_set_queue_mapping(skb, ieee80211_select_queue(sdata, sta, skb));
ieee80211_aggr_check(sdata, sta, skb);
- sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);
-
if (sta) {
struct ieee80211_fast_tx *fast_tx;
tx.key = rcu_dereference(link->default_beacon_key);
if (!tx.key)
return 0;
+
+ if (unlikely(tx.key->flags & KEY_FLAG_TAINTED)) {
+ tx.key = NULL;
+ return -EINVAL;
+ }
+
+ if (!(tx.key->conf.flags & IEEE80211_KEY_FLAG_SW_MGMT_TX) &&
+ tx.key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
+ IEEE80211_SKB_CB(skb)->control.hw_key = &tx.key->conf;
+
tx.local = local;
tx.sdata = sdata;
__skb_queue_head_init(&tx.skbs);
}
static void
-ieee80211_beacon_add_mbssid(struct sk_buff *skb, struct beacon_data *beacon)
+ieee80211_beacon_add_mbssid(struct sk_buff *skb, struct beacon_data *beacon,
+ u8 i)
{
- int i;
+ if (!beacon->mbssid_ies || !beacon->mbssid_ies->cnt ||
+ i > beacon->mbssid_ies->cnt)
+ return;
+
+ if (i < beacon->mbssid_ies->cnt) {
+ skb_put_data(skb, beacon->mbssid_ies->elem[i].data,
+ beacon->mbssid_ies->elem[i].len);
- if (!beacon->mbssid_ies)
+ if (beacon->rnr_ies && beacon->rnr_ies->cnt) {
+ skb_put_data(skb, beacon->rnr_ies->elem[i].data,
+ beacon->rnr_ies->elem[i].len);
+
+ for (i = beacon->mbssid_ies->cnt; i < beacon->rnr_ies->cnt; i++)
+ skb_put_data(skb, beacon->rnr_ies->elem[i].data,
+ beacon->rnr_ies->elem[i].len);
+ }
return;
+ }
+ /* i == beacon->mbssid_ies->cnt, include all MBSSID elements */
for (i = 0; i < beacon->mbssid_ies->cnt; i++)
skb_put_data(skb, beacon->mbssid_ies->elem[i].data,
beacon->mbssid_ies->elem[i].len);
struct ieee80211_mutable_offsets *offs,
bool is_template,
struct beacon_data *beacon,
- struct ieee80211_chanctx_conf *chanctx_conf)
+ struct ieee80211_chanctx_conf *chanctx_conf,
+ u8 ema_index)
{
struct ieee80211_local *local = hw_to_local(hw);
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
/* headroom, head length,
* tail length, maximum TIM length and multiple BSSID length
*/
- mbssid_len = ieee80211_get_mbssid_beacon_len(beacon->mbssid_ies);
+ mbssid_len = ieee80211_get_mbssid_beacon_len(beacon->mbssid_ies,
+ beacon->rnr_ies,
+ ema_index);
+
skb = dev_alloc_skb(local->tx_headroom + beacon->head_len +
beacon->tail_len + 256 +
local->hw.extra_beacon_tailroom + mbssid_len);
offs->cntdwn_counter_offs[0] = beacon->cntdwn_counter_offsets[0];
if (mbssid_len) {
- ieee80211_beacon_add_mbssid(skb, beacon);
+ ieee80211_beacon_add_mbssid(skb, beacon, ema_index);
offs->mbssid_off = skb->len - mbssid_len;
}
return skb;
}
+static struct ieee80211_ema_beacons *
+ieee80211_beacon_get_ap_ema_list(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_link_data *link,
+ struct ieee80211_mutable_offsets *offs,
+ bool is_template, struct beacon_data *beacon,
+ struct ieee80211_chanctx_conf *chanctx_conf)
+{
+ struct ieee80211_ema_beacons *ema = NULL;
+
+ if (!beacon->mbssid_ies || !beacon->mbssid_ies->cnt)
+ return NULL;
+
+ ema = kzalloc(struct_size(ema, bcn, beacon->mbssid_ies->cnt),
+ GFP_ATOMIC);
+ if (!ema)
+ return NULL;
+
+ for (ema->cnt = 0; ema->cnt < beacon->mbssid_ies->cnt; ema->cnt++) {
+ ema->bcn[ema->cnt].skb =
+ ieee80211_beacon_get_ap(hw, vif, link,
+ &ema->bcn[ema->cnt].offs,
+ is_template, beacon,
+ chanctx_conf, ema->cnt);
+ if (!ema->bcn[ema->cnt].skb)
+ break;
+ }
+
+ if (ema->cnt == beacon->mbssid_ies->cnt)
+ return ema;
+
+ ieee80211_beacon_free_ema_list(ema);
+ return NULL;
+}
+
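+/* A negative EMA index selects a single beacon carrying all MBSSID elements,
+ * a non-negative index selects the profile for that one MBSSID element.
+ */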
+#define IEEE80211_INCLUDE_ALL_MBSSID_ELEMS -1
+
static struct sk_buff *
__ieee80211_beacon_get(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_mutable_offsets *offs,
bool is_template,
- unsigned int link_id)
+ unsigned int link_id,
+ int ema_index,
+ struct ieee80211_ema_beacons **ema_beacons)
{
struct ieee80211_local *local = hw_to_local(hw);
struct beacon_data *beacon = NULL;
if (!beacon)
goto out;
- skb = ieee80211_beacon_get_ap(hw, vif, link, offs, is_template,
- beacon, chanctx_conf);
+ if (ema_beacons) {
+ *ema_beacons =
+ ieee80211_beacon_get_ap_ema_list(hw, vif, link,
+ offs,
+ is_template,
+ beacon,
+ chanctx_conf);
+ } else {
+ if (beacon->mbssid_ies && beacon->mbssid_ies->cnt) {
+ if (ema_index >= beacon->mbssid_ies->cnt)
+ goto out; /* End of MBSSID elements */
+
+ if (ema_index <= IEEE80211_INCLUDE_ALL_MBSSID_ELEMS)
+ ema_index = beacon->mbssid_ies->cnt;
+ } else {
+ ema_index = 0;
+ }
+
+ skb = ieee80211_beacon_get_ap(hw, vif, link, offs,
+ is_template, beacon,
+ chanctx_conf,
+ ema_index);
+ }
} else if (sdata->vif.type == NL80211_IFTYPE_ADHOC) {
struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
struct ieee80211_hdr *hdr;
struct ieee80211_mutable_offsets *offs,
unsigned int link_id)
{
- return __ieee80211_beacon_get(hw, vif, offs, true, link_id);
+ return __ieee80211_beacon_get(hw, vif, offs, true, link_id,
+ IEEE80211_INCLUDE_ALL_MBSSID_ELEMS, NULL);
}
EXPORT_SYMBOL(ieee80211_beacon_get_template);
+struct sk_buff *
+ieee80211_beacon_get_template_ema_index(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_mutable_offsets *offs,
+ unsigned int link_id, u8 ema_index)
+{
+ return __ieee80211_beacon_get(hw, vif, offs, true, link_id, ema_index,
+ NULL);
+}
+EXPORT_SYMBOL(ieee80211_beacon_get_template_ema_index);
+
+void ieee80211_beacon_free_ema_list(struct ieee80211_ema_beacons *ema_beacons)
+{
+ u8 i;
+
+ if (!ema_beacons)
+ return;
+
+ for (i = 0; i < ema_beacons->cnt; i++)
+ kfree_skb(ema_beacons->bcn[i].skb);
+
+ kfree(ema_beacons);
+}
+EXPORT_SYMBOL(ieee80211_beacon_free_ema_list);
+
+struct ieee80211_ema_beacons *
+ieee80211_beacon_get_template_ema_list(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ unsigned int link_id)
+{
+ struct ieee80211_ema_beacons *ema_beacons = NULL;
+
+ WARN_ON(__ieee80211_beacon_get(hw, vif, NULL, false, link_id, 0,
+ &ema_beacons));
+
+ return ema_beacons;
+}
+EXPORT_SYMBOL(ieee80211_beacon_get_template_ema_list);
+
struct sk_buff *ieee80211_beacon_get_tim(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
u16 *tim_offset, u16 *tim_length,
{
struct ieee80211_mutable_offsets offs = {};
struct sk_buff *bcn = __ieee80211_beacon_get(hw, vif, &offs, false,
- link_id);
+ link_id,
+ IEEE80211_INCLUDE_ALL_MBSSID_ELEMS,
+ NULL);
struct sk_buff *copy;
int shift;
rate_flags = ieee80211_chandef_rate_flags(chandef);
shift = ieee80211_chandef_get_shift(chandef);
+ /* For direct scan add S1G IE and consider its override bits */
+ if (band == NL80211_BAND_S1GHZ) {
+ if (end - pos < 2 + sizeof(struct ieee80211_s1g_cap))
+ goto out_err;
+ pos = ieee80211_ie_build_s1g_cap(pos, &sband->s1g_cap);
+ goto done;
+ }
+
num_rates = 0;
for (i = 0; i < sband->n_bitrates; i++) {
if ((BIT(i) & rate_mask) == 0)
return pos;
}
+u8 *ieee80211_ie_build_s1g_cap(u8 *pos, struct ieee80211_sta_s1g_cap *s1g_cap)
+{
+ *pos++ = WLAN_EID_S1G_CAPABILITIES;
+ *pos++ = sizeof(struct ieee80211_s1g_cap);
+ memset(pos, 0, sizeof(struct ieee80211_s1g_cap));
+
+ memcpy(pos, &s1g_cap->cap, sizeof(s1g_cap->cap));
+ pos += sizeof(s1g_cap->cap);
+
+ memcpy(pos, &s1g_cap->nss_mcs, sizeof(s1g_cap->nss_mcs));
+ pos += sizeof(s1g_cap->nss_mcs);
+
+ return pos;
+}
+
u8 *ieee80211_ie_build_ht_cap(u8 *pos, struct ieee80211_sta_ht_cap *ht_cap,
u16 cap)
{
return pos;
}
+u8 *ieee80211_ie_build_eht_oper(u8 *pos, struct cfg80211_chan_def *chandef,
+ const struct ieee80211_sta_eht_cap *eht_cap)
+
+{
+ const struct ieee80211_eht_mcs_nss_supp_20mhz_only *eht_mcs_nss =
+ &eht_cap->eht_mcs_nss_supp.only_20mhz;
+ struct ieee80211_eht_operation *eht_oper;
+ struct ieee80211_eht_operation_info *eht_oper_info;
+ u8 eht_oper_len = offsetof(struct ieee80211_eht_operation, optional);
+ u8 eht_oper_info_len =
+ offsetof(struct ieee80211_eht_operation_info, optional);
+ u8 chan_width = 0;
+
+ *pos++ = WLAN_EID_EXTENSION;
+ *pos++ = 1 + eht_oper_len + eht_oper_info_len;
+ *pos++ = WLAN_EID_EXT_EHT_OPERATION;
+
+ eht_oper = (struct ieee80211_eht_operation *)pos;
+
+ memcpy(&eht_oper->basic_mcs_nss, eht_mcs_nss, sizeof(*eht_mcs_nss));
+ eht_oper->params |= IEEE80211_EHT_OPER_INFO_PRESENT;
+ pos += eht_oper_len;
+
+ eht_oper_info =
+ (struct ieee80211_eht_operation_info *)eht_oper->optional;
+
+ eht_oper_info->ccfs0 =
+ ieee80211_frequency_to_channel(chandef->center_freq1);
+ if (chandef->center_freq2)
+ eht_oper_info->ccfs1 =
+ ieee80211_frequency_to_channel(chandef->center_freq2);
+ else
+ eht_oper_info->ccfs1 = 0;
+
+ switch (chandef->width) {
+ case NL80211_CHAN_WIDTH_320:
+ chan_width = IEEE80211_EHT_OPER_CHAN_WIDTH_320MHZ;
+ eht_oper_info->ccfs1 = eht_oper_info->ccfs0;
+ if (chandef->chan->center_freq < chandef->center_freq1)
+ eht_oper_info->ccfs0 -= 16;
+ else
+ eht_oper_info->ccfs0 += 16;
+ break;
+ case NL80211_CHAN_WIDTH_160:
+ eht_oper_info->ccfs1 = eht_oper_info->ccfs0;
+ if (chandef->chan->center_freq < chandef->center_freq1)
+ eht_oper_info->ccfs0 -= 8;
+ else
+ eht_oper_info->ccfs0 += 8;
+ fallthrough;
+ case NL80211_CHAN_WIDTH_80P80:
+ chan_width = IEEE80211_EHT_OPER_CHAN_WIDTH_160MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_80:
+ chan_width = IEEE80211_EHT_OPER_CHAN_WIDTH_80MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_40:
+ chan_width = IEEE80211_EHT_OPER_CHAN_WIDTH_40MHZ;
+ break;
+ default:
+ chan_width = IEEE80211_EHT_OPER_CHAN_WIDTH_20MHZ;
+ break;
+ }
+ eht_oper_info->control = chan_width;
+ pos += eht_oper_info_len;
+
+ /* TODO: eht_oper_info->optional */
+
+ return pos;
+}
+
bool ieee80211_chandef_ht_oper(const struct ieee80211_ht_operation *ht_oper,
struct cfg80211_chan_def *chandef)
{
module_exit(mctp_exit);
MODULE_DESCRIPTION("MCTP core");
-MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Jeremy Kerr <jk@codeconstruct.com.au>");
MODULE_ALIAS_NETPROTO(PF_MCTP);
nla_put_s32(skb, MPTCP_ATTR_IF_IDX, ssk->sk_bound_dev_if))
return -EMSGSIZE;
- sk_err = ssk->sk_err;
+ sk_err = READ_ONCE(ssk->sk_err);
if (sk_err && sk->sk_state == TCP_ESTABLISHED &&
nla_put_u8(skb, MPTCP_ATTR_ERROR, sk_err))
return -EMSGSIZE;
return false;
}
-static void mptcp_set_datafin_timeout(const struct sock *sk)
+static void mptcp_set_datafin_timeout(struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
u32 retransmits;
/* Mirror the tcp_reset() error propagation */
switch (sk->sk_state) {
case TCP_SYN_SENT:
- sk->sk_err = ECONNREFUSED;
+ WRITE_ONCE(sk->sk_err, ECONNREFUSED);
break;
case TCP_CLOSE_WAIT:
- sk->sk_err = EPIPE;
+ WRITE_ONCE(sk->sk_err, EPIPE);
break;
case TCP_CLOSE:
return;
default:
- sk->sk_err = ECONNRESET;
+ WRITE_ONCE(sk->sk_err, ECONNRESET);
}
inet_sk_state_store(sk, TCP_CLOSE);
/* This barrier is coupled with smp_wmb() in __mptcp_error_report() */
smp_rmb();
- if (sk->sk_err)
+ if (READ_ONCE(sk->sk_err))
mask |= EPOLLERR;
return mask;
sock_owned_by_me((const struct sock *)msk);
}
-static inline struct mptcp_sock *mptcp_sk(const struct sock *sk)
-{
- return (struct mptcp_sock *)sk;
-}
+#define mptcp_sk(ptr) container_of_const(ptr, struct mptcp_sock, sk.icsk_inet.sk)
/* the msk socket don't use the backlog, also account for the bulk
* free memory
static inline struct mptcp_data_frag *mptcp_pending_tail(const struct sock *sk)
{
- struct mptcp_sock *msk = mptcp_sk(sk);
+ const struct mptcp_sock *msk = mptcp_sk(sk);
if (!msk->first_pending)
return NULL;
return list_last_entry(&msk->rtx_queue, struct mptcp_data_frag, list);
}
-static inline struct mptcp_data_frag *mptcp_rtx_head(const struct sock *sk)
+static inline struct mptcp_data_frag *mptcp_rtx_head(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
{
u32 flags = 0;
- u8 val;
memset(info, 0, sizeof(*info));
info->mptcpi_add_addr_signal = READ_ONCE(msk->pm.add_addr_signaled);
info->mptcpi_add_addr_accepted = READ_ONCE(msk->pm.add_addr_accepted);
info->mptcpi_local_addr_used = READ_ONCE(msk->pm.local_addr_used);
- info->mptcpi_subflows_max = mptcp_pm_get_subflows_max(msk);
- val = mptcp_pm_get_add_addr_signal_max(msk);
- info->mptcpi_add_addr_signal_max = val;
- val = mptcp_pm_get_add_addr_accept_max(msk);
- info->mptcpi_add_addr_accepted_max = val;
- info->mptcpi_local_addr_max = mptcp_pm_get_local_addr_max(msk);
+
+ /* The following limits only make sense for the in-kernel PM */
+ if (mptcp_pm_is_kernel(msk)) {
+ info->mptcpi_subflows_max =
+ mptcp_pm_get_subflows_max(msk);
+ info->mptcpi_add_addr_signal_max =
+ mptcp_pm_get_add_addr_signal_max(msk);
+ info->mptcpi_add_addr_accepted_max =
+ mptcp_pm_get_add_addr_accept_max(msk);
+ info->mptcpi_local_addr_max =
+ mptcp_pm_get_local_addr_max(msk);
+ }
+
if (test_bit(MPTCP_FALLBACK_DONE, &msk->flags))
flags |= MPTCP_INFO_FLAG_FALLBACK;
if (READ_ONCE(msk->can_ack))
static void mptcp_get_sub_addrs(const struct sock *sk, struct mptcp_subflow_addrs *a)
{
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
memset(a, 0, sizeof(*a));
return !crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN);
}
-static void mptcp_force_close(struct sock *sk)
-{
- /* the msk is not yet exposed to user-space, and refcount is 2 */
- inet_sk_state_store(sk, TCP_CLOSE);
- sk_common_release(sk);
- sock_put(sk);
-}
-
static void subflow_ulp_fallback(struct sock *sk,
struct mptcp_subflow_context *old_ctx)
{
struct mptcp_subflow_request_sock *subflow_req;
struct mptcp_options_received mp_opt;
bool fallback, fallback_is_fatal;
- struct sock *new_msk = NULL;
struct mptcp_sock *owner;
struct sock *child;
* options.
*/
mptcp_get_options(skb, &mp_opt);
- if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC)) {
+ if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC))
fallback = true;
- goto create_child;
- }
- new_msk = mptcp_sk_clone(listener->conn, &mp_opt, req);
- if (!new_msk)
- fallback = true;
} else if (subflow_req->mp_join) {
mptcp_get_options(skb, &mp_opt);
if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ) ||
subflow_add_reset_reason(skb, MPTCP_RST_EMPTCP);
goto dispose_child;
}
-
- if (new_msk)
- mptcp_copy_inaddrs(new_msk, child);
- mptcp_subflow_drop_ctx(child);
- goto out;
+ goto fallback;
}
/* ssk inherits options of listener sk */
ctx->setsockopt_seq = listener->setsockopt_seq;
if (ctx->mp_capable) {
- owner = mptcp_sk(new_msk);
+ ctx->conn = mptcp_sk_clone(listener->conn, &mp_opt, req);
+ if (!ctx->conn)
+ goto fallback;
+
+ owner = mptcp_sk(ctx->conn);
/* this can't race with mptcp_close(), as the msk is
 		 * not yet exposed to user-space
*/
- inet_sk_state_store((void *)new_msk, TCP_ESTABLISHED);
+ inet_sk_state_store(ctx->conn, TCP_ESTABLISHED);
/* record the newly created socket as the first msk
* subflow, but don't link it yet into conn_list
/* new mpc subflow takes ownership of the newly
* created mptcp socket
*/
- mptcp_sk(new_msk)->setsockopt_seq = ctx->setsockopt_seq;
+ owner->setsockopt_seq = ctx->setsockopt_seq;
mptcp_pm_new_connection(owner, child, 1);
mptcp_token_accept(subflow_req, owner);
- ctx->conn = new_msk;
- new_msk = NULL;
/* set msk addresses early to ensure mptcp_pm_get_local_id()
* uses the correct data
}
}
-out:
- /* dispose of the left over mptcp master, if any */
- if (unlikely(new_msk))
- mptcp_force_close(new_msk);
-
/* check for expected invariant - should never trigger, just help
 	 * catching earlier subtle bugs
*/
/* The last child reference will be released by the caller */
return child;
+
+fallback:
+ mptcp_subflow_drop_ctx(child);
+ return child;
}
static struct inet_connection_sock_af_ops subflow_specific __ro_after_init;
subflow->reset_reason = MPTCP_RST_EMPTCP;
reset:
- ssk->sk_err = EBADMSG;
+ WRITE_ONCE(ssk->sk_err, EBADMSG);
tcp_set_state(ssk, TCP_CLOSE);
while ((skb = skb_peek(&ssk->sk_receive_queue)))
sk_eat_skb(ssk, skb);
ssk_state = inet_sk_state_load(ssk);
if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
inet_sk_state_store(sk, ssk_state);
- sk->sk_err = -err;
+ WRITE_ONCE(sk->sk_err, -err);
/* This barrier is coupled with smp_rmb() in mptcp_poll() */
smp_wmb();
config NETFILTER_XTABLES_COMPAT
bool "Netfilter Xtables 32bit support"
depends on COMPAT
- default y
help
This option provides a translation layer to run 32bit arp,ip(6),ebtables
binaries on 64bit kernels.
spin_unlock_bh(&dest->dst_lock);
IP_VS_DBG(10, "new dst %pI4, src %pI4, refcnt=%d\n",
&dest->addr.ip, &dest_dst->dst_saddr.ip,
- atomic_read(&rt->dst.__refcnt));
+ rcuref_read(&rt->dst.__rcuref));
}
if (ret_saddr)
*ret_saddr = dest_dst->dst_saddr.ip;
spin_unlock_bh(&dest->dst_lock);
IP_VS_DBG(10, "new dst %pI6, src %pI6, refcnt=%d\n",
&dest->addr.in6, &dest_dst->dst_saddr.in6,
- atomic_read(&rt->dst.__refcnt));
+ rcuref_read(&rt->dst.__rcuref));
}
if (ret_saddr)
*ret_saddr = dest_dst->dst_saddr.in6;
}
EXPORT_SYMBOL_GPL(__nf_conntrack_confirm);
-/* Returns true if a connection correspondings to the tuple (required
+/* Returns true if a connection corresponds to the tuple (required
for NAT). */
int
nf_conntrack_tuple_taken(const struct nf_conntrack_tuple *tuple,
static int ctnetlink_flush_iterate(struct nf_conn *ct, void *data)
{
- if (test_bit(IPS_OFFLOAD_BIT, &ct->status))
- return 0;
-
return ctnetlink_filter_match(ct, data);
}
ct = nf_ct_tuplehash_to_ctrack(h);
- if (test_bit(IPS_OFFLOAD_BIT, &ct->status)) {
- nf_ct_put(ct);
- return -EBUSY;
- }
-
if (cda[CTA_ID]) {
__be32 id = nla_get_be32(cda[CTA_ID]);
#include <net/netfilter/ipv6/nf_defrag_ipv6.h>
#include <net/ipv6_frag.h>
#include <net/ip.h>
+#include <linux/netfilter_ipv6.h>
/* 'skb' should already be pulled to nh_ofs. */
int nf_ct_helper(struct sk_buff *skb, struct nf_conn *ct,
len = skb_ip_totlen(skb);
break;
case NFPROTO_IPV6:
- len = sizeof(struct ipv6hdr)
- + ntohs(ipv6_hdr(skb)->payload_len);
+ len = ntohs(ipv6_hdr(skb)->payload_len);
+ if (ipv6_hdr(skb)->nexthdr == NEXTHDR_HOP) {
+ int err = nf_ip6_check_hbh_len(skb, &len);
+
+ if (err)
+ return err;
+ }
+ len += sizeof(struct ipv6hdr);
break;
default:
len = skb->len;
if (range->flags & NF_NAT_RANGE_PROTO_SPECIFIED) {
if (!(range->flags & NF_NAT_RANGE_PROTO_OFFSET) &&
l4proto_in_range(tuple, maniptype,
- &range->min_proto,
- &range->max_proto) &&
+ &range->min_proto,
+ &range->max_proto) &&
(range->min_proto.all == range->max_proto.all ||
!nf_nat_used_tuple(tuple, ct)))
return;
#include <linux/if.h>
#include <linux/inetdevice.h>
+#include <linux/in.h>
#include <linux/ip.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <net/netfilter/nf_nat.h>
#include <net/netfilter/nf_nat_redirect.h>
+static unsigned int
+nf_nat_redirect(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ const union nf_inet_addr *newdst)
+{
+ struct nf_nat_range2 newrange;
+ enum ip_conntrack_info ctinfo;
+ struct nf_conn *ct;
+
+ ct = nf_ct_get(skb, &ctinfo);
+
+ memset(&newrange, 0, sizeof(newrange));
+
+ newrange.flags = range->flags | NF_NAT_RANGE_MAP_IPS;
+ newrange.min_addr = *newdst;
+ newrange.max_addr = *newdst;
+ newrange.min_proto = range->min_proto;
+ newrange.max_proto = range->max_proto;
+
+ return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
+}
+
unsigned int
-nf_nat_redirect_ipv4(struct sk_buff *skb,
- const struct nf_nat_ipv4_multi_range_compat *mr,
+nf_nat_redirect_ipv4(struct sk_buff *skb, const struct nf_nat_range2 *range,
unsigned int hooknum)
{
- struct nf_conn *ct;
- enum ip_conntrack_info ctinfo;
- __be32 newdst;
- struct nf_nat_range2 newrange;
+ union nf_inet_addr newdst = {};
WARN_ON(hooknum != NF_INET_PRE_ROUTING &&
hooknum != NF_INET_LOCAL_OUT);
- ct = nf_ct_get(skb, &ctinfo);
- WARN_ON(!(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED)));
-
/* Local packets: make them go to loopback */
if (hooknum == NF_INET_LOCAL_OUT) {
- newdst = htonl(0x7F000001);
+ newdst.ip = htonl(INADDR_LOOPBACK);
} else {
const struct in_device *indev;
- newdst = 0;
-
indev = __in_dev_get_rcu(skb->dev);
if (indev) {
const struct in_ifaddr *ifa;
ifa = rcu_dereference(indev->ifa_list);
if (ifa)
- newdst = ifa->ifa_local;
+ newdst.ip = ifa->ifa_local;
}
- if (!newdst)
+ if (!newdst.ip)
return NF_DROP;
}
- /* Transfer from original range. */
- memset(&newrange.min_addr, 0, sizeof(newrange.min_addr));
- memset(&newrange.max_addr, 0, sizeof(newrange.max_addr));
- newrange.flags = mr->range[0].flags | NF_NAT_RANGE_MAP_IPS;
- newrange.min_addr.ip = newdst;
- newrange.max_addr.ip = newdst;
- newrange.min_proto = mr->range[0].min;
- newrange.max_proto = mr->range[0].max;
-
- /* Hand modified range to generic setup. */
- return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
+ return nf_nat_redirect(skb, range, &newdst);
}
EXPORT_SYMBOL_GPL(nf_nat_redirect_ipv4);
nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
unsigned int hooknum)
{
- struct nf_nat_range2 newrange;
- struct in6_addr newdst;
- enum ip_conntrack_info ctinfo;
- struct nf_conn *ct;
+ union nf_inet_addr newdst = {};
- ct = nf_ct_get(skb, &ctinfo);
if (hooknum == NF_INET_LOCAL_OUT) {
- newdst = loopback_addr;
+ newdst.in6 = loopback_addr;
} else {
struct inet6_dev *idev;
struct inet6_ifaddr *ifa;
if (idev != NULL) {
read_lock_bh(&idev->lock);
list_for_each_entry(ifa, &idev->addr_list, if_list) {
- newdst = ifa->addr;
+ newdst.in6 = ifa->addr;
addr = true;
break;
}
return NF_DROP;
}
- newrange.flags = range->flags | NF_NAT_RANGE_MAP_IPS;
- newrange.min_addr.in6 = newdst;
- newrange.max_addr.in6 = newdst;
- newrange.min_proto = range->min_proto;
- newrange.max_proto = range->max_proto;
-
- return nf_nat_setup_info(ct, &newrange, NF_NAT_MANIP_DST);
+ return nf_nat_redirect(skb, range, &newdst);
}
EXPORT_SYMBOL_GPL(nf_nat_redirect_ipv6);
}
static struct nfulnl_instance *
-__instance_lookup(struct nfnl_log_net *log, u_int16_t group_num)
+__instance_lookup(const struct nfnl_log_net *log, u16 group_num)
{
- struct hlist_head *head;
+ const struct hlist_head *head;
struct nfulnl_instance *inst;
head = &log->instance_table[instance_hashfn(group_num)];
}
static struct nfulnl_instance *
-instance_lookup_get(struct nfnl_log_net *log, u_int16_t group_num)
+instance_lookup_get_rcu(const struct nfnl_log_net *log, u16 group_num)
{
struct nfulnl_instance *inst;
- rcu_read_lock_bh();
inst = __instance_lookup(log, group_num);
if (inst && !refcount_inc_not_zero(&inst->use))
inst = NULL;
- rcu_read_unlock_bh();
+
+ return inst;
+}
+
+static struct nfulnl_instance *
+instance_lookup_get(const struct nfnl_log_net *log, u16 group_num)
+{
+ struct nfulnl_instance *inst;
+
+ rcu_read_lock();
+ inst = instance_lookup_get_rcu(log, group_num);
+ rcu_read_unlock();
return inst;
}
else
li = &default_loginfo;
- inst = instance_lookup_get(log, li->u.ulog.group);
+ inst = instance_lookup_get_rcu(log, li->u.ulog.group);
if (!inst)
return;
struct hlist_head *head = &log->instance_table[st->bucket];
if (!hlist_empty(head))
- return rcu_dereference_bh(hlist_first_rcu(head));
+ return rcu_dereference(hlist_first_rcu(head));
}
return NULL;
}
static struct hlist_node *get_next(struct net *net, struct iter_state *st,
struct hlist_node *h)
{
- h = rcu_dereference_bh(hlist_next_rcu(h));
+ h = rcu_dereference(hlist_next_rcu(h));
while (!h) {
struct nfnl_log_net *log;
struct hlist_head *head;
log = nfnl_log_pernet(net);
head = &log->instance_table[st->bucket];
- h = rcu_dereference_bh(hlist_first_rcu(head));
+ h = rcu_dereference(hlist_first_rcu(head));
}
return h;
}
}
static void *seq_start(struct seq_file *s, loff_t *pos)
- __acquires(rcu_bh)
+ __acquires(rcu)
{
- rcu_read_lock_bh();
+ rcu_read_lock();
return get_idx(seq_file_net(s), s->private, *pos);
}
}
static void seq_stop(struct seq_file *s, void *v)
- __releases(rcu_bh)
+ __releases(rcu)
{
- rcu_read_unlock_bh();
+ rcu_read_unlock();
}
static int seq_show(struct seq_file *s, void *v)
#include <linux/netfilter/nfnetlink_queue.h>
#include <linux/netfilter/nf_conntrack_common.h>
#include <linux/list.h>
+#include <linux/cgroup-defs.h>
#include <net/sock.h>
#include <net/tcp_states.h>
#include <net/netfilter/nf_queue.h>
return -1;
}
+static int nfqnl_put_sk_classid(struct sk_buff *skb, struct sock *sk)
+{
+#if IS_ENABLED(CONFIG_CGROUP_NET_CLASSID)
+ if (sk && sk_fullsock(sk)) {
+ u32 classid = sock_cgroup_classid(&sk->sk_cgrp_data);
+
+ if (classid && nla_put_be32(skb, NFQA_CGROUP_CLASSID, htonl(classid)))
+ return -1;
+ }
+#endif
+ return 0;
+}
+
static u32 nfqnl_get_sk_secctx(struct sk_buff *skb, char **secdata)
{
u32 seclen = 0;
+ nla_total_size(sizeof(u_int32_t)) /* priority */
+ nla_total_size(sizeof(struct nfqnl_msg_packet_hw))
+ nla_total_size(sizeof(u_int32_t)) /* skbinfo */
+#if IS_ENABLED(CONFIG_CGROUP_NET_CLASSID)
+ + nla_total_size(sizeof(u_int32_t)) /* classid */
+#endif
+ nla_total_size(sizeof(u_int32_t)); /* cap_len */
tstamp = skb_tstamp_cond(entskb, false);
nfqnl_put_sk_uidgid(skb, entskb->sk) < 0)
goto nla_put_failure;
+ if (nfqnl_put_sk_classid(skb, entskb->sk) < 0)
+ goto nla_put_failure;
+
if (seclen && nla_put(skb, NFQA_SECCTX, seclen, secdata))
goto nla_put_failure;
return -1;
}
-static void nft_masq_ipv4_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
+static void nft_masq_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
{
- struct nft_masq *priv = nft_expr_priv(expr);
+ const struct nft_masq *priv = nft_expr_priv(expr);
struct nf_nat_range2 range;
memset(&range, 0, sizeof(range));
range.flags = priv->flags;
if (priv->sreg_proto_min) {
- range.min_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_min]);
- range.max_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_max]);
+ range.min_proto.all = (__force __be16)
+ nft_reg_load16(&regs->data[priv->sreg_proto_min]);
+ range.max_proto.all = (__force __be16)
+ nft_reg_load16(&regs->data[priv->sreg_proto_max]);
+ }
+
+ switch (nft_pf(pkt)) {
+ case NFPROTO_IPV4:
+ regs->verdict.code = nf_nat_masquerade_ipv4(pkt->skb,
+ nft_hook(pkt),
+ &range,
+ nft_out(pkt));
+ break;
+#ifdef CONFIG_NF_TABLES_IPV6
+ case NFPROTO_IPV6:
+ regs->verdict.code = nf_nat_masquerade_ipv6(pkt->skb, &range,
+ nft_out(pkt));
+ break;
+#endif
+ default:
+ WARN_ON_ONCE(1);
+ break;
}
- regs->verdict.code = nf_nat_masquerade_ipv4(pkt->skb, nft_hook(pkt),
- &range, nft_out(pkt));
}
static void
static const struct nft_expr_ops nft_masq_ipv4_ops = {
.type = &nft_masq_ipv4_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_masq)),
- .eval = nft_masq_ipv4_eval,
+ .eval = nft_masq_eval,
.init = nft_masq_init,
.destroy = nft_masq_ipv4_destroy,
.dump = nft_masq_dump,
};
#ifdef CONFIG_NF_TABLES_IPV6
-static void nft_masq_ipv6_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
-{
- struct nft_masq *priv = nft_expr_priv(expr);
- struct nf_nat_range2 range;
-
- memset(&range, 0, sizeof(range));
- range.flags = priv->flags;
- if (priv->sreg_proto_min) {
- range.min_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_min]);
- range.max_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_max]);
- }
- regs->verdict.code = nf_nat_masquerade_ipv6(pkt->skb, &range,
- nft_out(pkt));
-}
-
static void
nft_masq_ipv6_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
static const struct nft_expr_ops nft_masq_ipv6_ops = {
.type = &nft_masq_ipv6_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_masq)),
- .eval = nft_masq_ipv6_eval,
+ .eval = nft_masq_eval,
.init = nft_masq_init,
.destroy = nft_masq_ipv6_destroy,
.dump = nft_masq_dump,
#endif
#ifdef CONFIG_NF_TABLES_INET
-static void nft_masq_inet_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
-{
- switch (nft_pf(pkt)) {
- case NFPROTO_IPV4:
- return nft_masq_ipv4_eval(expr, regs, pkt);
- case NFPROTO_IPV6:
- return nft_masq_ipv6_eval(expr, regs, pkt);
- }
-
- WARN_ON_ONCE(1);
-}
-
static void
nft_masq_inet_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
static const struct nft_expr_ops nft_masq_inet_ops = {
.type = &nft_masq_inet_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_masq)),
- .eval = nft_masq_inet_eval,
+ .eval = nft_masq_eval,
.init = nft_masq_init,
.destroy = nft_masq_inet_destroy,
.dump = nft_masq_dump,
} else {
priv->sreg_proto_max = priv->sreg_proto_min;
}
+
+ priv->flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
}
if (tb[NFTA_REDIR_FLAGS]) {
return -1;
}
-static void nft_redir_ipv4_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
+static void nft_redir_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
{
- struct nft_redir *priv = nft_expr_priv(expr);
- struct nf_nat_ipv4_multi_range_compat mr;
+ const struct nft_redir *priv = nft_expr_priv(expr);
+ struct nf_nat_range2 range;
- memset(&mr, 0, sizeof(mr));
+ memset(&range, 0, sizeof(range));
+ range.flags = priv->flags;
if (priv->sreg_proto_min) {
- mr.range[0].min.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_min]);
- mr.range[0].max.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_max]);
- mr.range[0].flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
+ range.min_proto.all = (__force __be16)
+ nft_reg_load16(&regs->data[priv->sreg_proto_min]);
+ range.max_proto.all = (__force __be16)
+ nft_reg_load16(&regs->data[priv->sreg_proto_max]);
}
- mr.range[0].flags |= priv->flags;
-
- regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr, nft_hook(pkt));
+ switch (nft_pf(pkt)) {
+ case NFPROTO_IPV4:
+ regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &range,
+ nft_hook(pkt));
+ break;
+#ifdef CONFIG_NF_TABLES_IPV6
+ case NFPROTO_IPV6:
+ regs->verdict.code = nf_nat_redirect_ipv6(pkt->skb, &range,
+ nft_hook(pkt));
+ break;
+#endif
+ default:
+ WARN_ON_ONCE(1);
+ break;
+ }
}
static void
static const struct nft_expr_ops nft_redir_ipv4_ops = {
.type = &nft_redir_ipv4_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_redir)),
- .eval = nft_redir_ipv4_eval,
+ .eval = nft_redir_eval,
.init = nft_redir_init,
.destroy = nft_redir_ipv4_destroy,
.dump = nft_redir_dump,
};
#ifdef CONFIG_NF_TABLES_IPV6
-static void nft_redir_ipv6_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
-{
- struct nft_redir *priv = nft_expr_priv(expr);
- struct nf_nat_range2 range;
-
- memset(&range, 0, sizeof(range));
- if (priv->sreg_proto_min) {
- range.min_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_min]);
- range.max_proto.all = (__force __be16)nft_reg_load16(
- &regs->data[priv->sreg_proto_max]);
- range.flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
- }
-
- range.flags |= priv->flags;
-
- regs->verdict.code =
- nf_nat_redirect_ipv6(pkt->skb, &range, nft_hook(pkt));
-}
-
static void
nft_redir_ipv6_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
static const struct nft_expr_ops nft_redir_ipv6_ops = {
.type = &nft_redir_ipv6_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_redir)),
- .eval = nft_redir_ipv6_eval,
+ .eval = nft_redir_eval,
.init = nft_redir_init,
.destroy = nft_redir_ipv6_destroy,
.dump = nft_redir_dump,
#endif
#ifdef CONFIG_NF_TABLES_INET
-static void nft_redir_inet_eval(const struct nft_expr *expr,
- struct nft_regs *regs,
- const struct nft_pktinfo *pkt)
-{
- switch (nft_pf(pkt)) {
- case NFPROTO_IPV4:
- return nft_redir_ipv4_eval(expr, regs, pkt);
- case NFPROTO_IPV6:
- return nft_redir_ipv6_eval(expr, regs, pkt);
- }
-
- WARN_ON_ONCE(1);
-}
-
static void
nft_redir_inet_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
static const struct nft_expr_ops nft_redir_inet_ops = {
.type = &nft_redir_inet_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_redir)),
- .eval = nft_redir_inet_eval,
+ .eval = nft_redir_eval,
.init = nft_redir_init,
.destroy = nft_redir_inet_destroy,
.dump = nft_redir_dump,
}
return ret;
}
+
+/* Only get and check the lengths, not do any hop-by-hop stuff. */
+int nf_ip6_check_hbh_len(struct sk_buff *skb, u32 *plen)
+{
+ int len, off = sizeof(struct ipv6hdr);
+ unsigned char *nh;
+
+ if (!pskb_may_pull(skb, off + 8))
+ return -ENOMEM;
+ nh = (unsigned char *)(ipv6_hdr(skb) + 1);
+ len = (nh[1] + 1) << 3;
+
+ if (!pskb_may_pull(skb, off + len))
+ return -ENOMEM;
+ nh = skb_network_header(skb);
+
+ off += 2;
+ len -= 2;
+ while (len > 0) {
+ int optlen;
+
+ if (nh[off] == IPV6_TLV_PAD1) {
+ off++;
+ len--;
+ continue;
+ }
+ if (len < 2)
+ return -EBADMSG;
+ optlen = nh[off + 1] + 2;
+ if (optlen > len)
+ return -EBADMSG;
+
+ if (nh[off] == IPV6_TLV_JUMBO) {
+ u32 pkt_len;
+
+ if (nh[off + 1] != 4 || (off & 3) != 2)
+ return -EBADMSG;
+ pkt_len = ntohl(*(__be32 *)(nh + off + 2));
+ if (pkt_len <= IPV6_MAXPLEN ||
+ ipv6_hdr(skb)->payload_len)
+ return -EBADMSG;
+ if (pkt_len > skb->len - sizeof(struct ipv6hdr))
+ return -EBADMSG;
+ *plen = pkt_len;
+ }
+ off += optlen;
+ len -= optlen;
+ }
+
+ return len ? -EBADMSG : 0;
+}
+EXPORT_SYMBOL_GPL(nf_ip6_check_hbh_len);
nf_ct_netns_put(par->net, par->family);
}
-/* FIXME: Take multiple ranges --RR */
static int redirect_tg4_check(const struct xt_tgchk_param *par)
{
const struct nf_nat_ipv4_multi_range_compat *mr = par->targinfo;
static unsigned int
redirect_tg4(struct sk_buff *skb, const struct xt_action_param *par)
{
- return nf_nat_redirect_ipv4(skb, par->targinfo, xt_hooknum(par));
+ const struct nf_nat_ipv4_multi_range_compat *mr = par->targinfo;
+ struct nf_nat_range2 range = {
+ .flags = mr->range[0].flags,
+ .min_proto = mr->range[0].min,
+ .max_proto = mr->range[0].max,
+ };
+
+ return nf_nat_redirect_ipv4(skb, &range, xt_hooknum(par));
}
static struct xt_target redirect_tg_reg[] __read_mostly = {
#include <linux/module.h>
#include <net/ip.h>
#include <linux/ipv6.h>
+#include <linux/icmp.h>
#include <net/ipv6.h>
#include <net/tcp.h>
#include <net/udp.h>
MODULE_ALIAS("ipt_tcp");
MODULE_ALIAS("ip6t_udp");
MODULE_ALIAS("ip6t_tcp");
+MODULE_ALIAS("ipt_icmp");
+MODULE_ALIAS("ip6t_icmp6");
/* Returns 1 if the port is matched by the range, 0 otherwise */
static inline bool
return (udpinfo->invflags & ~XT_UDP_INV_MASK) ? -EINVAL : 0;
}
+/* Returns 1 if the type and code is matched by the range, 0 otherwise */
+static bool type_code_in_range(u8 test_type, u8 min_code, u8 max_code,
+ u8 type, u8 code)
+{
+ return type == test_type && code >= min_code && code <= max_code;
+}
+
+static bool icmp_type_code_match(u8 test_type, u8 min_code, u8 max_code,
+ u8 type, u8 code, bool invert)
+{
+ return (test_type == 0xFF ||
+ type_code_in_range(test_type, min_code, max_code, type, code))
+ ^ invert;
+}
+
+static bool icmp6_type_code_match(u8 test_type, u8 min_code, u8 max_code,
+ u8 type, u8 code, bool invert)
+{
+ return type_code_in_range(test_type, min_code, max_code, type, code) ^ invert;
+}
+
+static bool
+icmp_match(const struct sk_buff *skb, struct xt_action_param *par)
+{
+ const struct icmphdr *ic;
+ struct icmphdr _icmph;
+ const struct ipt_icmp *icmpinfo = par->matchinfo;
+
+ /* Must not be a fragment. */
+ if (par->fragoff != 0)
+ return false;
+
+ ic = skb_header_pointer(skb, par->thoff, sizeof(_icmph), &_icmph);
+ if (!ic) {
+ /* We've been asked to examine this packet, and we
+ * can't. Hence, no choice but to drop.
+ */
+ par->hotdrop = true;
+ return false;
+ }
+
+ return icmp_type_code_match(icmpinfo->type,
+ icmpinfo->code[0],
+ icmpinfo->code[1],
+ ic->type, ic->code,
+ !!(icmpinfo->invflags & IPT_ICMP_INV));
+}
+
+static bool
+icmp6_match(const struct sk_buff *skb, struct xt_action_param *par)
+{
+ const struct icmp6hdr *ic;
+ struct icmp6hdr _icmph;
+ const struct ip6t_icmp *icmpinfo = par->matchinfo;
+
+ /* Must not be a fragment. */
+ if (par->fragoff != 0)
+ return false;
+
+ ic = skb_header_pointer(skb, par->thoff, sizeof(_icmph), &_icmph);
+ if (!ic) {
+ /* We've been asked to examine this packet, and we
+ * can't. Hence, no choice but to drop.
+ */
+ par->hotdrop = true;
+ return false;
+ }
+
+ return icmp6_type_code_match(icmpinfo->type,
+ icmpinfo->code[0],
+ icmpinfo->code[1],
+ ic->icmp6_type, ic->icmp6_code,
+ !!(icmpinfo->invflags & IP6T_ICMP_INV));
+}
+
+static int icmp_checkentry(const struct xt_mtchk_param *par)
+{
+ const struct ipt_icmp *icmpinfo = par->matchinfo;
+
+ return (icmpinfo->invflags & ~IPT_ICMP_INV) ? -EINVAL : 0;
+}
+
+static int icmp6_checkentry(const struct xt_mtchk_param *par)
+{
+ const struct ip6t_icmp *icmpinfo = par->matchinfo;
+
+ return (icmpinfo->invflags & ~IP6T_ICMP_INV) ? -EINVAL : 0;
+}
+
static struct xt_match tcpudp_mt_reg[] __read_mostly = {
{
.name = "tcp",
.proto = IPPROTO_UDPLITE,
.me = THIS_MODULE,
},
+ {
+ .name = "icmp",
+ .match = icmp_match,
+ .matchsize = sizeof(struct ipt_icmp),
+ .checkentry = icmp_checkentry,
+ .proto = IPPROTO_ICMP,
+ .family = NFPROTO_IPV4,
+ .me = THIS_MODULE,
+ },
+ {
+ .name = "icmp6",
+ .match = icmp6_match,
+ .matchsize = sizeof(struct ip6t_icmp),
+ .checkentry = icmp6_checkentry,
+ .proto = IPPROTO_ICMPV6,
+ .family = NFPROTO_IPV6,
+ .me = THIS_MODULE,
+ },
};
static int __init tcpudp_mt_init(void)
nl_table[unit].bind = cfg->bind;
nl_table[unit].unbind = cfg->unbind;
nl_table[unit].flags = cfg->flags;
- if (cfg->compare)
- nl_table[unit].compare = cfg->compare;
}
nl_table[unit].registered = 1;
} else {
struct module *module;
int (*bind)(struct net *net, int group);
void (*unbind)(struct net *net, int group);
- bool (*compare)(struct net *net, struct sock *sock);
int registered;
};
}
#endif
-static int packet_direct_xmit(struct sk_buff *skb)
+static int packet_xmit(const struct packet_sock *po, struct sk_buff *skb)
{
+ if (!packet_sock_flag(po, PACKET_SOCK_QDISC_BYPASS))
+ return dev_queue_xmit(skb);
+
#ifdef CONFIG_NETFILTER_EGRESS
if (nf_hook_egress_active()) {
skb = nf_hook_direct_egress(skb);
RCU_INIT_POINTER(po->cached_dev, NULL);
}
-static bool packet_use_direct_xmit(const struct packet_sock *po)
-{
- return po->xmit == packet_direct_xmit;
-}
-
static u16 packet_pick_tx_queue(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
{
struct packet_sock *po = pkt_sk(sk);
- if (!po->running) {
+ if (!packet_sock_flag(po, PACKET_SOCK_RUNNING)) {
if (po->fanout)
__fanout_link(sk, po);
else
dev_add_pack(&po->prot_hook);
sock_hold(sk);
- po->running = 1;
+ packet_sock_flag_set(po, PACKET_SOCK_RUNNING, 1);
}
}
lockdep_assert_held_once(&po->bind_lock);
- po->running = 0;
+ packet_sock_flag_set(po, PACKET_SOCK_RUNNING, 0);
if (po->fanout)
__fanout_unlink(sk, po);
{
struct packet_sock *po = pkt_sk(sk);
- if (po->running)
+ if (packet_sock_flag(po, PACKET_SOCK_RUNNING))
__unregister_prot_hook(sk, sync);
}
struct timespec64 ts;
__u32 ts_status;
- if (!(ts_status = tpacket_get_timestamp(skb, &ts, po->tp_tstamp)))
+ if (!(ts_status = tpacket_get_timestamp(skb, &ts, READ_ONCE(po->tp_tstamp))))
return 0;
h.raw = frame;
static int packet_rcv_has_room(struct packet_sock *po, struct sk_buff *skb)
{
- int pressure, ret;
+ bool pressure;
+ int ret;
ret = __packet_rcv_has_room(po, skb);
pressure = ret != ROOM_NORMAL;
- if (READ_ONCE(po->pressure) != pressure)
- WRITE_ONCE(po->pressure, pressure);
+ if (packet_sock_flag(po, PACKET_SOCK_PRESSURE) != pressure)
+ packet_sock_flag_set(po, PACKET_SOCK_PRESSURE, pressure);
return ret;
}
static void packet_rcv_try_clear_pressure(struct packet_sock *po)
{
- if (READ_ONCE(po->pressure) &&
+ if (packet_sock_flag(po, PACKET_SOCK_PRESSURE) &&
__packet_rcv_has_room(po, NULL) == ROOM_NORMAL)
- WRITE_ONCE(po->pressure, 0);
+ packet_sock_flag_set(po, PACKET_SOCK_PRESSURE, false);
}
static void packet_sock_destruct(struct sock *sk)
i = j = min_t(int, po->rollover->sock, num - 1);
do {
po_next = pkt_sk(rcu_dereference(f->arr[i]));
- if (po_next != po_skip && !READ_ONCE(po_next->pressure) &&
+ if (po_next != po_skip &&
+ !packet_sock_flag(po_next, PACKET_SOCK_PRESSURE) &&
packet_rcv_has_room(po_next, skb) == ROOM_NORMAL) {
if (i != j)
po->rollover->sock = i;
err = -EINVAL;
spin_lock(&po->bind_lock);
- if (po->running &&
+ if (packet_sock_flag(po, PACKET_SOCK_RUNNING) &&
match->type == type &&
match->prot_hook.type == po->prot_hook.type &&
match->prot_hook.dev == po->prot_hook.dev) {
sll = &PACKET_SKB_CB(skb)->sa.ll;
sll->sll_hatype = dev->type;
sll->sll_pkttype = skb->pkt_type;
- if (unlikely(po->origdev))
+ if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
sll->sll_ifindex = orig_dev->ifindex;
else
sll->sll_ifindex = dev->ifindex;
netoff = TPACKET_ALIGN(po->tp_hdrlen +
(maclen < 16 ? 16 : maclen)) +
po->tp_reserve;
- if (po->has_vnet_hdr) {
+ if (packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR)) {
netoff += sizeof(struct virtio_net_hdr);
do_vnet = true;
}
* closer to the time of capture.
*/
ts_status = tpacket_get_timestamp(skb, &ts,
- po->tp_tstamp | SOF_TIMESTAMPING_SOFTWARE);
+ READ_ONCE(po->tp_tstamp) |
+ SOF_TIMESTAMPING_SOFTWARE);
if (!ts_status)
ktime_get_real_ts64(&ts);
sll->sll_hatype = dev->type;
sll->sll_protocol = skb->protocol;
sll->sll_pkttype = skb->pkt_type;
- if (unlikely(po->origdev))
+ if (unlikely(packet_sock_flag(po, PACKET_SOCK_ORIGDEV)))
sll->sll_ifindex = orig_dev->ifindex;
else
sll->sll_ifindex = dev->ifindex;
nr_frags = skb_shinfo(skb)->nr_frags;
if (unlikely(nr_frags >= MAX_SKB_FRAGS)) {
- pr_err("Packet exceed the number of skb frags(%lu)\n",
- MAX_SKB_FRAGS);
+ pr_err("Packet exceed the number of skb frags(%u)\n",
+ (unsigned int)MAX_SKB_FRAGS);
return -EFAULT;
}
return -EMSGSIZE;
}
- if (unlikely(po->tp_tx_has_off)) {
+ if (unlikely(packet_sock_flag(po, PACKET_SOCK_TX_HAS_OFF))) {
int off_min, off_max;
off_min = po->tp_hdrlen - sizeof(struct sockaddr_ll);
size_max = po->tx_ring.frame_size
- (po->tp_hdrlen - sizeof(struct sockaddr_ll));
- if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !po->has_vnet_hdr)
+ if ((size_max > dev->mtu + reserve + VLAN_HLEN) &&
+ !packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR))
size_max = dev->mtu + reserve + VLAN_HLEN;
reinit_completion(&po->skb_completion);
status = TP_STATUS_SEND_REQUEST;
hlen = LL_RESERVED_SPACE(dev);
tlen = dev->needed_tailroom;
- if (po->has_vnet_hdr) {
+ if (packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR)) {
vnet_hdr = data;
data += sizeof(*vnet_hdr);
tp_len -= sizeof(*vnet_hdr);
addr, hlen, copylen, &sockc);
if (likely(tp_len >= 0) &&
tp_len > dev->mtu + reserve &&
- !po->has_vnet_hdr &&
+ !packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR) &&
!packet_extra_vlan_len_allowed(dev, skb))
tp_len = -EMSGSIZE;
if (unlikely(tp_len < 0)) {
tpacket_error:
- if (po->tp_loss) {
+ if (packet_sock_flag(po, PACKET_SOCK_TP_LOSS)) {
__packet_set_status(po, ph,
TP_STATUS_AVAILABLE);
packet_increment_head(&po->tx_ring);
}
}
- if (po->has_vnet_hdr) {
+ if (packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR)) {
if (virtio_net_hdr_to_skb(skb, vnet_hdr, vio_le())) {
tp_len = -EINVAL;
goto tpacket_error;
packet_inc_pending(&po->tx_ring);
status = TP_STATUS_SEND_REQUEST;
- err = po->xmit(skb);
+ err = packet_xmit(po, skb);
if (unlikely(err != 0)) {
if (err > 0)
err = net_xmit_errno(err);
if (sock->type == SOCK_RAW)
reserve = dev->hard_header_len;
- if (po->has_vnet_hdr) {
+ if (packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR)) {
err = packet_snd_vnet_parse(msg, &len, &vnet_hdr);
if (err)
goto out_unlock;
virtio_net_hdr_set_proto(skb, &vnet_hdr);
}
- err = po->xmit(skb);
+ err = packet_xmit(po, skb);
+
if (unlikely(err != 0)) {
if (err > 0)
err = net_xmit_errno(err);
if (need_rehook) {
dev_hold(dev);
- if (po->running) {
+ if (packet_sock_flag(po, PACKET_SOCK_RUNNING)) {
rcu_read_unlock();
/* prevents packet_notifier() from calling
* register_prot_hook()
dev->ifindex);
}
- BUG_ON(po->running);
+ BUG_ON(packet_sock_flag(po, PACKET_SOCK_RUNNING));
WRITE_ONCE(po->num, proto);
po->prot_hook.type = proto;
init_completion(&po->skb_completion);
sk->sk_family = PF_PACKET;
po->num = proto;
- po->xmit = dev_queue_xmit;
err = packet_alloc_pending(po);
if (err)
packet_rcv_try_clear_pressure(pkt_sk(sk));
- if (pkt_sk(sk)->has_vnet_hdr) {
+ if (packet_sock_flag(pkt_sk(sk), PACKET_SOCK_HAS_VNET_HDR)) {
err = packet_rcv_vnet(msg, skb, &len);
if (err)
goto out_free;
memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
}
- if (pkt_sk(sk)->auxdata) {
+ if (packet_sock_flag(pkt_sk(sk), PACKET_SOCK_AUXDATA)) {
struct tpacket_auxdata aux;
aux.tp_status = TP_STATUS_USER;
if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
ret = -EBUSY;
} else {
- po->tp_loss = !!val;
+ packet_sock_flag_set(po, PACKET_SOCK_TP_LOSS, val);
ret = 0;
}
release_sock(sk);
if (copy_from_sockptr(&val, optval, sizeof(val)))
return -EFAULT;
- lock_sock(sk);
- po->auxdata = !!val;
- release_sock(sk);
+ packet_sock_flag_set(po, PACKET_SOCK_AUXDATA, val);
return 0;
}
case PACKET_ORIGDEV:
if (copy_from_sockptr(&val, optval, sizeof(val)))
return -EFAULT;
- lock_sock(sk);
- po->origdev = !!val;
- release_sock(sk);
+ packet_sock_flag_set(po, PACKET_SOCK_ORIGDEV, val);
return 0;
}
case PACKET_VNET_HDR:
if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
ret = -EBUSY;
} else {
- po->has_vnet_hdr = !!val;
+ packet_sock_flag_set(po, PACKET_SOCK_HAS_VNET_HDR, val);
ret = 0;
}
release_sock(sk);
if (copy_from_sockptr(&val, optval, sizeof(val)))
return -EFAULT;
- po->tp_tstamp = val;
+ WRITE_ONCE(po->tp_tstamp, val);
return 0;
}
case PACKET_FANOUT:
lock_sock(sk);
if (!po->rx_ring.pg_vec && !po->tx_ring.pg_vec)
- po->tp_tx_has_off = !!val;
+ packet_sock_flag_set(po, PACKET_SOCK_TX_HAS_OFF, val);
release_sock(sk);
return 0;
if (copy_from_sockptr(&val, optval, sizeof(val)))
return -EFAULT;
- po->xmit = val ? packet_direct_xmit : dev_queue_xmit;
+ packet_sock_flag_set(po, PACKET_SOCK_QDISC_BYPASS, val);
return 0;
}
default:
break;
case PACKET_AUXDATA:
- val = po->auxdata;
+ val = packet_sock_flag(po, PACKET_SOCK_AUXDATA);
break;
case PACKET_ORIGDEV:
- val = po->origdev;
+ val = packet_sock_flag(po, PACKET_SOCK_ORIGDEV);
break;
case PACKET_VNET_HDR:
- val = po->has_vnet_hdr;
+ val = packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR);
break;
case PACKET_VERSION:
val = po->tp_version;
val = po->tp_reserve;
break;
case PACKET_LOSS:
- val = po->tp_loss;
+ val = packet_sock_flag(po, PACKET_SOCK_TP_LOSS);
break;
case PACKET_TIMESTAMP:
- val = po->tp_tstamp;
+ val = READ_ONCE(po->tp_tstamp);
break;
case PACKET_FANOUT:
val = (po->fanout ?
lv = sizeof(rstats);
break;
case PACKET_TX_HAS_OFF:
- val = po->tp_tx_has_off;
+ val = packet_sock_flag(po, PACKET_SOCK_TX_HAS_OFF);
break;
case PACKET_QDISC_BYPASS:
- val = packet_use_direct_xmit(po);
+ val = packet_sock_flag(po, PACKET_SOCK_QDISC_BYPASS);
break;
default:
return -ENOPROTOOPT;
case NETDEV_DOWN:
if (dev->ifindex == po->ifindex) {
spin_lock(&po->bind_lock);
- if (po->running) {
+ if (packet_sock_flag(po, PACKET_SOCK_RUNNING)) {
__unregister_prot_hook(sk, false);
sk->sk_err = ENETDOWN;
if (!sock_flag(sk, SOCK_DEAD))
/* Detach socket from network */
spin_lock(&po->bind_lock);
- was_running = po->running;
+ was_running = packet_sock_flag(po, PACKET_SOCK_RUNNING);
num = po->num;
if (was_running) {
WRITE_ONCE(po->num, 0);
s->sk_type,
ntohs(READ_ONCE(po->num)),
READ_ONCE(po->ifindex),
- po->running,
+ packet_sock_flag(po, PACKET_SOCK_RUNNING),
atomic_read(&s->sk_rmem_alloc),
from_kuid_munged(seq_user_ns(seq), sock_i_uid(s)),
sock_i_ino(s));
pinfo.pdi_version = po->tp_version;
pinfo.pdi_reserve = po->tp_reserve;
pinfo.pdi_copy_thresh = po->copy_thresh;
- pinfo.pdi_tstamp = po->tp_tstamp;
+ pinfo.pdi_tstamp = READ_ONCE(po->tp_tstamp);
pinfo.pdi_flags = 0;
- if (po->running)
+ if (packet_sock_flag(po, PACKET_SOCK_RUNNING))
pinfo.pdi_flags |= PDI_RUNNING;
- if (po->auxdata)
+ if (packet_sock_flag(po, PACKET_SOCK_AUXDATA))
pinfo.pdi_flags |= PDI_AUXDATA;
- if (po->origdev)
+ if (packet_sock_flag(po, PACKET_SOCK_ORIGDEV))
pinfo.pdi_flags |= PDI_ORIGDEV;
- if (po->has_vnet_hdr)
+ if (packet_sock_flag(po, PACKET_SOCK_HAS_VNET_HDR))
pinfo.pdi_flags |= PDI_VNETHDR;
- if (po->tp_loss)
+ if (packet_sock_flag(po, PACKET_SOCK_TP_LOSS))
pinfo.pdi_flags |= PDI_LOSS;
return nla_put(nlskb, PACKET_DIAG_INFO, sizeof(pinfo), &pinfo);
int copy_thresh;
spinlock_t bind_lock;
struct mutex pg_vec_lock;
- unsigned int running; /* bind_lock must be held */
- unsigned int auxdata:1, /* writer must hold sock lock */
- origdev:1,
- has_vnet_hdr:1,
- tp_loss:1,
- tp_tx_has_off:1;
- int pressure;
+ unsigned long flags;
int ifindex; /* bound device */
__be16 num;
struct packet_rollover *rollover;
unsigned int tp_tstamp;
struct completion skb_completion;
struct net_device __rcu *cached_dev;
- int (*xmit)(struct sk_buff *skb);
struct packet_type prot_hook ____cacheline_aligned_in_smp;
atomic_t tp_drops ____cacheline_aligned_in_smp;
};
-static inline struct packet_sock *pkt_sk(struct sock *sk)
+#define pkt_sk(ptr) container_of_const(ptr, struct packet_sock, sk)
+
+enum packet_sock_flags {
+ PACKET_SOCK_ORIGDEV,
+ PACKET_SOCK_AUXDATA,
+ PACKET_SOCK_TX_HAS_OFF,
+ PACKET_SOCK_TP_LOSS,
+ PACKET_SOCK_HAS_VNET_HDR,
+ PACKET_SOCK_RUNNING,
+ PACKET_SOCK_PRESSURE,
+ PACKET_SOCK_QDISC_BYPASS,
+};
+
+static inline void packet_sock_flag_set(struct packet_sock *po,
+ enum packet_sock_flags flag,
+ bool val)
+{
+ if (val)
+ set_bit(flag, &po->flags);
+ else
+ clear_bit(flag, &po->flags);
+}
+
+static inline bool packet_sock_flag(const struct packet_sock *po,
+ enum packet_sock_flags flag)
{
- return (struct packet_sock *)sk;
+ return test_bit(flag, &po->flags);
}
#endif
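
Several of the bits collected in the new packet_sock_flags word back existing SOL_PACKET socket options (others, such as RUNNING and PRESSURE, are purely internal state), so nothing changes for userspace. As a hypothetical orientation sketch only (needs CAP_NET_RAW, error checking on setsockopt() omitted), the options whose storage the refactor converts can be exercised like this:

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>

int main(void)
{
	int one = 1;
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0) {
		perror("socket(AF_PACKET)");
		return 1;
	}
	/* Now stored as the PACKET_SOCK_AUXDATA / PACKET_SOCK_ORIGDEV flag
	 * bits, so the setter no longer needs lock_sock(). */
	setsockopt(fd, SOL_PACKET, PACKET_AUXDATA, &one, sizeof(one));
	setsockopt(fd, SOL_PACKET, PACKET_ORIGDEV, &one, sizeof(one));
	/* Previously switched po->xmit between dev_queue_xmit() and
	 * packet_direct_xmit(); now sets PACKET_SOCK_QDISC_BYPASS, which
	 * packet_xmit() tests on every transmit. */
	setsockopt(fd, SOL_PACKET, PACKET_QDISC_BYPASS, &one, sizeof(one));
	return 0;
}
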
+ nla_total_size_64bit(sizeof(u64))
/* TCA_STATS_QUEUE */
+ nla_total_size_64bit(sizeof(struct gnet_stats_queue))
- + nla_total_size(0) /* TCA_OPTIONS nested */
+ + nla_total_size(0) /* TCA_ACT_OPTIONS nested */
+ nla_total_size(sizeof(struct tcf_t)); /* TCA_GACT_TM */
}
unsigned char *b = skb_tail_pointer(skb);
struct tc_cookie *cookie;
- if (nla_put_string(skb, TCA_KIND, a->ops->kind))
+ if (nla_put_string(skb, TCA_ACT_KIND, a->ops->kind))
goto nla_put_failure;
if (tcf_action_copy_stats(skb, a, 0))
goto nla_put_failure;
nest = nla_nest_start_noflag(skb, 0);
if (nest == NULL)
goto nla_put_failure;
- if (nla_put_string(skb, TCA_KIND, ops->kind))
+ if (nla_put_string(skb, TCA_ACT_KIND, ops->kind))
goto nla_put_failure;
ret = 0;
if (nla_put_u32(skb, TCA_ACT_IN_HW_COUNT, a->in_hw_count))
goto nla_put_failure;
- nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+ nest = nla_nest_start_noflag(skb, TCA_ACT_OPTIONS);
if (nest == NULL)
goto nla_put_failure;
err = tcf_action_dump_old(skb, a, bind, ref);
at_nh = skb->data == skb_network_header(skb);
if (at_nh != expects_nh) {
mac_len = skb_at_tc_ingress(skb) ? skb->mac_len :
- skb_network_header(skb) - skb_mac_header(skb);
+ skb_network_offset(skb);
if (expects_nh) {
/* target device/action expect data at nh */
skb_pull_rcsum(skb2, mac_len);
skb_push_rcsum(skb, skb->mac_len);
mac_len = skb->mac_len;
} else {
- mac_len = skb_network_header(skb) - skb_mac_header(skb);
+ mac_len = skb_network_offset(skb);
}
ret = READ_ONCE(m->tcf_action);
nla_get_u8(tb[TCA_TUNNEL_KEY_NO_CSUM]))
flags &= ~TUNNEL_CSUM;
+ if (nla_get_flag(tb[TCA_TUNNEL_KEY_NO_FRAG]))
+ flags |= TUNNEL_DONT_FRAGMENT;
+
if (tb[TCA_TUNNEL_KEY_ENC_DST_PORT])
dst_port = nla_get_be16(tb[TCA_TUNNEL_KEY_ENC_DST_PORT]);
key->tp_dst)) ||
nla_put_u8(skb, TCA_TUNNEL_KEY_NO_CSUM,
!(key->tun_flags & TUNNEL_CSUM)) ||
+ ((key->tun_flags & TUNNEL_DONT_FRAGMENT) &&
+ nla_put_flag(skb, TCA_TUNNEL_KEY_NO_FRAG)) ||
tunnel_key_opts_dump(skb, info))
goto nla_put_failure;
* because ETH_P_PPP_SES was stored in basic.n_proto
* which might get overwritten by ppp_proto
* or might be set to 0, the role of key_val::type
- * is simmilar to vlan_key::tpid
+ * is similar to vlan_key::tpid
*/
key_val->type = htons(ETH_P_PPP_SES);
key_mask->type = cpu_to_be16(~0);
* be provided for non-numeric types.
*
* Additionally, type dependent modifiers such as shift operators
- * or mask may be applied to extend the functionaliy. As of now,
+ * or mask may be applied to extend the functionality. As of now,
* the variable length type supports shifting the byte string to
* the right, eating up any number of octets and thus supporting
* wildcard interface name comparisons such as "ppp%" matching
return;
if (hrtimer_is_queued(&wd->timer)) {
+ u64 softexpires;
+
+ softexpires = ktime_to_ns(hrtimer_get_softexpires(&wd->timer));
/* If timer is already set in [expires, expires + delta_ns],
* do not reprogram it.
*/
- if (wd->last_expires - expires <= delta_ns)
+ if (softexpires - expires <= delta_ns)
return;
}
- wd->last_expires = expires;
hrtimer_start_range_ns(&wd->timer,
ns_to_ktime(expires),
delta_ns,
return cake_calc_overhead(q, len, off);
/* borrowed from qdisc_pkt_len_init() */
- hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
+ hdr_len = skb_transport_offset(skb);
/* + transport layer */
if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 |
const struct tcphdr *th;
struct tcphdr _tcphdr;
- th = skb_header_pointer(skb, skb_transport_offset(skb),
+ th = skb_header_pointer(skb, hdr_len,
sizeof(_tcphdr), &_tcphdr);
if (likely(th))
hdr_len += __tcp_hdrlen(th);
} else {
struct udphdr _udphdr;
- if (skb_header_pointer(skb, skb_transport_offset(skb),
+ if (skb_header_pointer(skb, hdr_len,
sizeof(_udphdr), &_udphdr))
hdr_len += sizeof(struct udphdr);
}
if (tb[TCA_MQPRIO_MODE]) {
priv->flags |= TC_MQPRIO_F_MODE;
- priv->mode = *(u16 *)nla_data(tb[TCA_MQPRIO_MODE]);
+ priv->mode = nla_get_u16(tb[TCA_MQPRIO_MODE]);
}
if (tb[TCA_MQPRIO_SHAPER]) {
priv->flags |= TC_MQPRIO_F_SHAPER;
- priv->shaper = *(u16 *)nla_data(tb[TCA_MQPRIO_SHAPER]);
+ priv->shaper = nla_get_u16(tb[TCA_MQPRIO_SHAPER]);
}
if (tb[TCA_MQPRIO_MIN_RATE64]) {
return -EINVAL;
if (i >= qopt->num_tc)
break;
- priv->min_rate[i] = *(u64 *)nla_data(attr);
+ priv->min_rate[i] = nla_get_u64(attr);
i++;
}
priv->flags |= TC_MQPRIO_F_MIN_RATE;
return -EINVAL;
if (i >= qopt->num_tc)
break;
- priv->max_rate[i] = *(u64 *)nla_data(attr);
+ priv->max_rate[i] = nla_get_u64(attr);
i++;
}
priv->flags |= TC_MQPRIO_F_MAX_RATE;
}
/* If qdelay is zero and backlog is not, it means backlog is very small,
- * so we do not update probabilty in this round.
+ * so we do not update probability in this round.
*/
if (qdelay == 0 && backlog != 0)
update_prob = false;
tsnmap.o bind_addr.o socket.o primitive.o \
output.o input.o debug.o stream.o auth.o \
offload.o stream_sched.o stream_sched_prio.o \
- stream_sched_rr.o stream_interleave.o
+ stream_sched_rr.o stream_sched_fc.o \
+ stream_interleave.o
sctp_diag-y := diag.o
sk->sk_err = err;
sk_error_report(sk);
} else { /* Only an error on timeout */
- sk->sk_err_soft = err;
+ WRITE_ONCE(sk->sk_err_soft, err);
}
}
sk->sk_err = err;
sk_error_report(sk);
} else {
- sk->sk_err_soft = err;
+ WRITE_ONCE(sk->sk_err_soft, err);
}
}
sctp_sched_ops_fcfs_init();
sctp_sched_ops_prio_init();
sctp_sched_ops_rr_init();
+ sctp_sched_ops_fc_init();
+ sctp_sched_ops_wfq_init();
}
static void sctp_sched_free_sched(struct sctp_stream *stream)
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* SCTP kernel implementation
+ * (C) Copyright Red Hat Inc. 2022
+ *
+ * This file is part of the SCTP kernel implementation
+ *
+ * These functions manipulate sctp stream queue/scheduling.
+ *
+ * Please send any bug reports or fixes you make to the
+ * email address(es):
+ * lksctp developers <linux-sctp@vger.kernel.org>
+ *
+ * Written or modified by:
+ * Xin Long <lucien.xin@gmail.com>
+ */
+
+#include <linux/list.h>
+#include <net/sctp/sctp.h>
+#include <net/sctp/sm.h>
+#include <net/sctp/stream_sched.h>
+
+/* Fair Capacity and Weighted Fair Queueing handling
+ * RFC 8260 section 3.5 and 3.6
+ */
+static void sctp_sched_fc_unsched_all(struct sctp_stream *stream);
+
+static int sctp_sched_wfq_set(struct sctp_stream *stream, __u16 sid,
+ __u16 weight, gfp_t gfp)
+{
+ struct sctp_stream_out_ext *soute = SCTP_SO(stream, sid)->ext;
+
+ if (!weight)
+ return -EINVAL;
+
+ soute->fc_weight = weight;
+ return 0;
+}
+
+static int sctp_sched_wfq_get(struct sctp_stream *stream, __u16 sid,
+ __u16 *value)
+{
+ struct sctp_stream_out_ext *soute = SCTP_SO(stream, sid)->ext;
+
+ *value = soute->fc_weight;
+ return 0;
+}
+
+static int sctp_sched_fc_set(struct sctp_stream *stream, __u16 sid,
+ __u16 weight, gfp_t gfp)
+{
+ return 0;
+}
+
+static int sctp_sched_fc_get(struct sctp_stream *stream, __u16 sid,
+ __u16 *value)
+{
+ return 0;
+}
+
+static int sctp_sched_fc_init(struct sctp_stream *stream)
+{
+ INIT_LIST_HEAD(&stream->fc_list);
+
+ return 0;
+}
+
+static int sctp_sched_fc_init_sid(struct sctp_stream *stream, __u16 sid,
+ gfp_t gfp)
+{
+ struct sctp_stream_out_ext *soute = SCTP_SO(stream, sid)->ext;
+
+ INIT_LIST_HEAD(&soute->fc_list);
+ soute->fc_length = 0;
+ soute->fc_weight = 1;
+
+ return 0;
+}
+
+static void sctp_sched_fc_free_sid(struct sctp_stream *stream, __u16 sid)
+{
+}
+
+static void sctp_sched_fc_sched(struct sctp_stream *stream,
+ struct sctp_stream_out_ext *soute)
+{
+ struct sctp_stream_out_ext *pos;
+
+ if (!list_empty(&soute->fc_list))
+ return;
+
+ list_for_each_entry(pos, &stream->fc_list, fc_list)
+ if ((__u64)pos->fc_length * soute->fc_weight >=
+ (__u64)soute->fc_length * pos->fc_weight)
+ break;
+ list_add_tail(&soute->fc_list, &pos->fc_list);
+}
+
+static void sctp_sched_fc_enqueue(struct sctp_outq *q,
+ struct sctp_datamsg *msg)
+{
+ struct sctp_stream *stream;
+ struct sctp_chunk *ch;
+ __u16 sid;
+
+ ch = list_first_entry(&msg->chunks, struct sctp_chunk, frag_list);
+ sid = sctp_chunk_stream_no(ch);
+ stream = &q->asoc->stream;
+ sctp_sched_fc_sched(stream, SCTP_SO(stream, sid)->ext);
+}
+
+static struct sctp_chunk *sctp_sched_fc_dequeue(struct sctp_outq *q)
+{
+ struct sctp_stream *stream = &q->asoc->stream;
+ struct sctp_stream_out_ext *soute;
+ struct sctp_chunk *ch;
+
+ /* Bail out quickly if queue is empty */
+ if (list_empty(&q->out_chunk_list))
+ return NULL;
+
+ /* Find which chunk is next */
+ if (stream->out_curr)
+ soute = stream->out_curr->ext;
+ else
+ soute = list_entry(stream->fc_list.next, struct sctp_stream_out_ext, fc_list);
+ ch = list_entry(soute->outq.next, struct sctp_chunk, stream_list);
+
+ sctp_sched_dequeue_common(q, ch);
+ return ch;
+}
+
+static void sctp_sched_fc_dequeue_done(struct sctp_outq *q,
+ struct sctp_chunk *ch)
+{
+ struct sctp_stream *stream = &q->asoc->stream;
+ struct sctp_stream_out_ext *soute, *pos;
+ __u16 sid, i;
+
+ sid = sctp_chunk_stream_no(ch);
+ soute = SCTP_SO(stream, sid)->ext;
+ /* reduce all fc_lengths by U32_MAX / 4 if the current fc_length overflows. */
+ if (soute->fc_length > U32_MAX - ch->skb->len) {
+ for (i = 0; i < stream->outcnt; i++) {
+ pos = SCTP_SO(stream, i)->ext;
+ if (!pos)
+ continue;
+ if (pos->fc_length <= (U32_MAX >> 2)) {
+ pos->fc_length = 0;
+ continue;
+ }
+ pos->fc_length -= (U32_MAX >> 2);
+ }
+ }
+ soute->fc_length += ch->skb->len;
+
+ if (list_empty(&soute->outq)) {
+ list_del_init(&soute->fc_list);
+ return;
+ }
+
+ pos = soute;
+ list_for_each_entry_continue(pos, &stream->fc_list, fc_list)
+ if ((__u64)pos->fc_length * soute->fc_weight >=
+ (__u64)soute->fc_length * pos->fc_weight)
+ break;
+ list_move_tail(&soute->fc_list, &pos->fc_list);
+}
+
+static void sctp_sched_fc_sched_all(struct sctp_stream *stream)
+{
+ struct sctp_association *asoc;
+ struct sctp_chunk *ch;
+
+ asoc = container_of(stream, struct sctp_association, stream);
+ list_for_each_entry(ch, &asoc->outqueue.out_chunk_list, list) {
+ __u16 sid = sctp_chunk_stream_no(ch);
+
+ if (SCTP_SO(stream, sid)->ext)
+ sctp_sched_fc_sched(stream, SCTP_SO(stream, sid)->ext);
+ }
+}
+
+static void sctp_sched_fc_unsched_all(struct sctp_stream *stream)
+{
+ struct sctp_stream_out_ext *soute, *tmp;
+
+ list_for_each_entry_safe(soute, tmp, &stream->fc_list, fc_list)
+ list_del_init(&soute->fc_list);
+}
+
+static struct sctp_sched_ops sctp_sched_fc = {
+ .set = sctp_sched_fc_set,
+ .get = sctp_sched_fc_get,
+ .init = sctp_sched_fc_init,
+ .init_sid = sctp_sched_fc_init_sid,
+ .free_sid = sctp_sched_fc_free_sid,
+ .enqueue = sctp_sched_fc_enqueue,
+ .dequeue = sctp_sched_fc_dequeue,
+ .dequeue_done = sctp_sched_fc_dequeue_done,
+ .sched_all = sctp_sched_fc_sched_all,
+ .unsched_all = sctp_sched_fc_unsched_all,
+};
+
+void sctp_sched_ops_fc_init(void)
+{
+ sctp_sched_ops_register(SCTP_SS_FC, &sctp_sched_fc);
+}
+
+static struct sctp_sched_ops sctp_sched_wfq = {
+ .set = sctp_sched_wfq_set,
+ .get = sctp_sched_wfq_get,
+ .init = sctp_sched_fc_init,
+ .init_sid = sctp_sched_fc_init_sid,
+ .free_sid = sctp_sched_fc_free_sid,
+ .enqueue = sctp_sched_fc_enqueue,
+ .dequeue = sctp_sched_fc_dequeue,
+ .dequeue_done = sctp_sched_fc_dequeue_done,
+ .sched_all = sctp_sched_fc_sched_all,
+ .unsched_all = sctp_sched_fc_unsched_all,
+};
+
+void sctp_sched_ops_wfq_init(void)
+{
+ sctp_sched_ops_register(SCTP_SS_WFQ, &sctp_sched_wfq);
+}
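
The fair-capacity (SCTP_SS_FC) and weighted-fair-queueing (SCTP_SS_WFQ) schedulers registered above are selected through the existing SCTP_STREAM_SCHEDULER and SCTP_STREAM_SCHEDULER_VALUE socket options. A minimal, hypothetical userspace sketch, assuming headers that already export the new SCTP_SS_WFQ constant and a socket fd with an established association (per-stream weights apply to an existing association):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

static int use_wfq(int fd)
{
	struct sctp_assoc_value sched = {
		.assoc_id = 0,			/* one-to-one style socket */
		.assoc_value = SCTP_SS_WFQ,	/* registered by sctp_sched_ops_wfq_init() */
	};
	struct sctp_stream_value weight = {
		.assoc_id = 0,
		.stream_id = 1,
		.stream_value = 2,		/* stored in soute->fc_weight by sctp_sched_wfq_set() */
	};

	if (setsockopt(fd, IPPROTO_SCTP, SCTP_STREAM_SCHEDULER,
		       &sched, sizeof(sched)) < 0) {
		perror("SCTP_STREAM_SCHEDULER");
		return -1;
	}
	if (setsockopt(fd, IPPROTO_SCTP, SCTP_STREAM_SCHEDULER_VALUE,
		       &weight, sizeof(weight)) < 0) {
		perror("SCTP_STREAM_SCHEDULER_VALUE");
		return -1;
	}
	return 0;
}

With these set, sctp_sched_fc_sched() keeps the per-stream fc_list ordered by fc_length scaled by fc_weight, so a weight-2 stream receives roughly twice the share of transmitted bytes of a weight-1 stream.
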
* */
};
-static inline struct smc_sock *smc_sk(const struct sock *sk)
-{
- return (struct smc_sock *)sk;
-}
+#define smc_sk(ptr) container_of_const(ptr, struct smc_sock, sk)
static inline void smc_init_saved_callbacks(struct smc_sock *smc)
{
unsigned long *wr_tx_mask; /* bit mask of used indexes */
u32 wr_tx_cnt; /* number of WR send buffers */
wait_queue_head_t wr_tx_wait; /* wait for free WR send buf */
- atomic_t wr_tx_refcnt; /* tx refs to link */
+ struct {
+ struct percpu_ref wr_tx_refs;
+ } ____cacheline_aligned_in_smp;
+ struct completion tx_ref_comp;
struct smc_wr_buf *wr_rx_bufs; /* WR recv payload buffers */
struct ib_recv_wr *wr_rx_ibs; /* WR recv meta data */
struct ib_reg_wr wr_reg; /* WR register memory region */
wait_queue_head_t wr_reg_wait; /* wait for wr_reg result */
- atomic_t wr_reg_refcnt; /* reg refs to link */
+ struct {
+ struct percpu_ref wr_reg_refs;
+ } ____cacheline_aligned_in_smp;
+ struct completion reg_ref_comp;
enum smc_wr_reg_state wr_reg_state; /* state of wr_reg request */
u8 gid[SMC_GID_SIZE];/* gid matching used vlan id*/
u8 *system_eid = NULL;
system_eid = smcd->ops->get_system_eid();
- if (system_eid[24] != '0' || system_eid[28] != '0') {
+ if (smcd->ops->supports_v2()) {
smc_ism_v2_capable = true;
memcpy(smc_ism_v2_system_eid, system_eid,
SMC_MAX_EID_LEN);
if (rc)
return rc;
- atomic_inc(&link->wr_reg_refcnt);
+ percpu_ref_get(&link->wr_reg_refs);
rc = wait_event_interruptible_timeout(link->wr_reg_wait,
(link->wr_reg_state != POSTED),
SMC_WR_REG_MR_WAIT_TIME);
- if (atomic_dec_and_test(&link->wr_reg_refcnt))
- wake_up_all(&link->wr_reg_wait);
+ percpu_ref_put(&link->wr_reg_refs);
if (!rc) {
/* timeout - terminate link */
smcr_link_down_cond_sched(link);
smc_wr_wakeup_tx_wait(lnk);
smc_wr_tx_wait_no_pending_sends(lnk);
- wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));
- wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));
+ percpu_ref_kill(&lnk->wr_reg_refs);
+ wait_for_completion(&lnk->reg_ref_comp);
+ percpu_ref_kill(&lnk->wr_tx_refs);
+ wait_for_completion(&lnk->tx_ref_comp);
if (lnk->wr_rx_dma_addr) {
ib_dma_unmap_single(ibdev, lnk->wr_rx_dma_addr,
tasklet_setup(&smcibdev->send_tasklet, smc_wr_tx_tasklet_fn);
}
+static void smcr_wr_tx_refs_free(struct percpu_ref *ref)
+{
+ struct smc_link *lnk = container_of(ref, struct smc_link, wr_tx_refs);
+
+ complete(&lnk->tx_ref_comp);
+}
+
+static void smcr_wr_reg_refs_free(struct percpu_ref *ref)
+{
+ struct smc_link *lnk = container_of(ref, struct smc_link, wr_reg_refs);
+
+ complete(&lnk->reg_ref_comp);
+}
+
int smc_wr_create_link(struct smc_link *lnk)
{
struct ib_device *ibdev = lnk->smcibdev->ibdev;
smc_wr_init_sge(lnk);
bitmap_zero(lnk->wr_tx_mask, SMC_WR_BUF_CNT);
init_waitqueue_head(&lnk->wr_tx_wait);
- atomic_set(&lnk->wr_tx_refcnt, 0);
+ rc = percpu_ref_init(&lnk->wr_tx_refs, smcr_wr_tx_refs_free, 0, GFP_KERNEL);
+ if (rc)
+ goto dma_unmap;
+ init_completion(&lnk->tx_ref_comp);
init_waitqueue_head(&lnk->wr_reg_wait);
- atomic_set(&lnk->wr_reg_refcnt, 0);
+ rc = percpu_ref_init(&lnk->wr_reg_refs, smcr_wr_reg_refs_free, 0, GFP_KERNEL);
+ if (rc)
+ goto dma_unmap;
+ init_completion(&lnk->reg_ref_comp);
init_waitqueue_head(&lnk->wr_rx_empty_wait);
return rc;
{
if (!smc_link_sendable(link))
return false;
- atomic_inc(&link->wr_tx_refcnt);
+ percpu_ref_get(&link->wr_tx_refs);
return true;
}
static inline void smc_wr_tx_link_put(struct smc_link *link)
{
- if (atomic_dec_and_test(&link->wr_tx_refcnt))
- wake_up_all(&link->wr_tx_wait);
+ percpu_ref_put(&link->wr_tx_refs);
}
static inline void smc_wr_drain_cq(struct smc_link *lnk)
int __sys_getsockopt(int fd, int level, int optname, char __user *optval,
int __user *optlen)
{
+ int max_optlen __maybe_unused;
int err, fput_needed;
struct socket *sock;
- int max_optlen;
sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
* when peer was not connected to us.
*/
if (!sock_flag(other, SOCK_DEAD) && unix_peer(other) == sk) {
- other->sk_err = ECONNRESET;
+ WRITE_ONCE(other->sk_err, ECONNRESET);
sk_error_report(other);
}
}
/* No more writes */
skpair->sk_shutdown = SHUTDOWN_MASK;
if (!skb_queue_empty(&sk->sk_receive_queue) || embrion)
- skpair->sk_err = ECONNRESET;
+ WRITE_ONCE(skpair->sk_err, ECONNRESET);
unix_state_unlock(skpair);
skpair->sk_state_change(skpair);
sk_wake_async(skpair, SOCK_WAKE_WAITD, POLL_HUP);
mask = 0;
/* exceptional events? */
- if (sk->sk_err)
+ if (READ_ONCE(sk->sk_err))
mask |= EPOLLERR;
if (sk->sk_shutdown == SHUTDOWN_MASK)
mask |= EPOLLHUP;
mask = 0;
/* exceptional events? */
- if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
+ if (READ_ONCE(sk->sk_err) ||
+ !skb_queue_empty_lockless(&sk->sk_error_queue))
mask |= EPOLLERR |
(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
* release.path eventually putting registered files.
*/
skb_queue_walk_safe(&hitlist, skb, next_skb) {
- if (skb->scm_io_uring) {
+ if (skb->destructor == io_uring_destruct_scm) {
__skb_unlink(skb, &hitlist);
skb_queue_tail(&skb->sk->sk_receive_queue, skb);
}
sock_wfree(skb);
}
EXPORT_SYMBOL(unix_destruct_scm);
+
+void io_uring_destruct_scm(struct sk_buff *skb)
+{
+ unix_destruct_scm(skb);
+}
+EXPORT_SYMBOL(io_uring_destruct_scm);
obj-$(CONFIG_VSOCKETS_LOOPBACK) += vsock_loopback.o
vsock-y += af_vsock.o af_vsock_tap.o vsock_addr.o
+vsock-$(CONFIG_BPF_SYSCALL) += vsock_bpf.o
vsock_diag-y += diag.o
static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
/* Protocol family. */
-static struct proto vsock_proto = {
+struct proto vsock_proto = {
.name = "AF_VSOCK",
.owner = THIS_MODULE,
.obj_size = sizeof(struct vsock_sock),
+#ifdef CONFIG_BPF_SYSCALL
+ .psock_update_sk_prot = vsock_bpf_update_proto,
+#endif
};
/* The default peer timeout indicates how long we will wait for a peer response
}
EXPORT_SYMBOL_GPL(vsock_stream_has_data);
-static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
+s64 vsock_connectible_has_data(struct vsock_sock *vsk)
{
struct sock *sk = sk_vsock(vsk);
else
return vsock_stream_has_data(vsk);
}
+EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
s64 vsock_stream_has_space(struct vsock_sock *vsk)
{
return mask;
}
+static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
+
+ return vsk->transport->read_skb(vsk, read_actor);
+}
+
static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
size_t len)
{
memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
sock->state = SS_CONNECTED;
+ /* sock map disallows redirection of non-TCP sockets with sk_state !=
+ * TCP_ESTABLISHED (see sock_map_redirect_allowed()), so we set
+ * TCP_ESTABLISHED here to allow redirection of connected vsock dgrams.
+ *
+ * This doesn't seem to be abnormal state for datagram sockets, as the
+ * same approach can be seen in other datagram socket types as well
+ * (such as unix sockets).
+ */
+ sk->sk_state = TCP_ESTABLISHED;
+
out:
release_sock(sk);
return err;
}
-static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
- size_t len, int flags)
+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ size_t len, int flags)
{
- struct vsock_sock *vsk = vsock_sk(sock->sk);
+#ifdef CONFIG_BPF_SYSCALL
+ const struct proto *prot;
+#endif
+ struct vsock_sock *vsk;
+ struct sock *sk;
+
+ sk = sock->sk;
+ vsk = vsock_sk(sk);
+
+#ifdef CONFIG_BPF_SYSCALL
+ prot = READ_ONCE(sk->sk_prot);
+ if (prot != &vsock_proto)
+ return prot->recvmsg(sk, msg, len, flags, NULL);
+#endif
return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
}
+EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
static const struct proto_ops vsock_dgram_ops = {
.family = PF_VSOCK,
.recvmsg = vsock_dgram_recvmsg,
.mmap = sock_no_mmap,
.sendpage = sock_no_sendpage,
+ .read_skb = vsock_read_skb,
};
static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
read = transport->stream_dequeue(vsk, msg, len - copied, flags);
if (read < 0) {
- err = -ENOMEM;
+ err = read;
break;
}
msg_len = transport->seqpacket_dequeue(vsk, msg, flags);
if (msg_len < 0) {
- err = -ENOMEM;
+ err = msg_len;
goto out;
}
return err;
}
-static int
+int
vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sock *sk;
struct vsock_sock *vsk;
const struct vsock_transport *transport;
+#ifdef CONFIG_BPF_SYSCALL
+ const struct proto *prot;
+#endif
int err;
sk = sock->sk;
goto out;
}
+#ifdef CONFIG_BPF_SYSCALL
+ prot = READ_ONCE(sk->sk_prot);
+ if (prot != &vsock_proto) {
+ release_sock(sk);
+ return prot->recvmsg(sk, msg, len, flags, NULL);
+ }
+#endif
+
if (sk->sk_type == SOCK_STREAM)
err = __vsock_stream_recvmsg(sk, msg, len, flags);
else
release_sock(sk);
return err;
}
+EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
static int vsock_set_rcvlowat(struct sock *sk, int val)
{
.mmap = sock_no_mmap,
.sendpage = sock_no_sendpage,
.set_rcvlowat = vsock_set_rcvlowat,
+ .read_skb = vsock_read_skb,
};
static const struct proto_ops vsock_seqpacket_ops = {
.recvmsg = vsock_connectible_recvmsg,
.mmap = sock_no_mmap,
.sendpage = sock_no_sendpage,
+ .read_skb = vsock_read_skb,
};
static int vsock_create(struct net *net, struct socket *sock,
goto err_unregister_proto;
}
+ vsock_bpf_build_proto();
+
return 0;
err_unregister_proto:
.notify_send_pre_enqueue = virtio_transport_notify_send_pre_enqueue,
.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
.notify_buffer_size = virtio_transport_notify_buffer_size,
+
+ .read_skb = virtio_transport_read_skb,
},
.send_pkt = virtio_transport_send_pkt,
const struct virtio_transport *t_ops;
struct virtio_vsock_sock *vvs;
u32 pkt_len = info->pkt_len;
- struct sk_buff *skb;
+ u32 rest_len;
+ int ret;
info->type = virtio_transport_get_type(sk_vsock(vsk));
vvs = vsk->trans;
- /* we can send less than pkt_len bytes */
- if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
- pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
-
/* virtio_transport_get_credit might return less than pkt_len credit */
pkt_len = virtio_transport_get_credit(vvs, pkt_len);
if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
return pkt_len;
- skb = virtio_transport_alloc_skb(info, pkt_len,
- src_cid, src_port,
- dst_cid, dst_port);
- if (!skb) {
- virtio_transport_put_credit(vvs, pkt_len);
- return -ENOMEM;
- }
+ rest_len = pkt_len;
+
+ do {
+ struct sk_buff *skb;
+ size_t skb_len;
+
+ skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE, rest_len);
+
+ skb = virtio_transport_alloc_skb(info, skb_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!skb) {
+ ret = -ENOMEM;
+ break;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, skb);
- virtio_transport_inc_tx_pkt(vvs, skb);
+ ret = t_ops->send_pkt(skb);
+ if (ret < 0)
+ break;
+
+ /* Both virtio and vhost 'send_pkt()' returns 'skb_len',
+ * but for reliability use 'ret' instead of 'skb_len'.
+ * Also if partial send happens (e.g. 'ret' != 'skb_len')
+ * somehow, we break this loop, but account such returned
+ * value in 'virtio_transport_put_credit()'.
+ */
+ rest_len -= ret;
+
+ if (WARN_ONCE(ret != skb_len,
+ "'send_pkt()' returns %i, but %zu expected\n",
+ ret, skb_len))
+ break;
+ } while (rest_len);
+
+ virtio_transport_put_credit(vvs, rest_len);
- return t_ops->send_pkt(skb);
+ /* Return number of bytes, if any data has been sent. */
+ if (rest_len != pkt_len)
+ ret = pkt_len - rest_len;
+
+ return ret;
}
static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
{
u32 ret;
+ if (!credit)
+ return 0;
+
spin_lock_bh(&vvs->tx_lock);
ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
if (ret > credit)
void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit)
{
+ if (!credit)
+ return;
+
spin_lock_bh(&vvs->tx_lock);
vvs->tx_cnt -= credit;
spin_unlock_bh(&vvs->tx_lock);
if (le16_to_cpu(hdr->op) == VIRTIO_VSOCK_OP_RST)
return 0;
+ if (!t)
+ return -ENOTCONN;
+
reply = virtio_transport_alloc_skb(&info, 0,
le64_to_cpu(hdr->dst_cid),
le32_to_cpu(hdr->dst_port),
if (!reply)
return -ENOMEM;
- if (!t) {
- kfree_skb(reply);
- return -ENOTCONN;
- }
-
return t->send_pkt(reply);
}
}
EXPORT_SYMBOL_GPL(virtio_transport_purge_skbs);
+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_actor)
+{
+ struct virtio_vsock_sock *vvs = vsk->trans;
+ struct sock *sk = sk_vsock(vsk);
+ struct sk_buff *skb;
+ int off = 0;
+ int copied;
+ int err;
+
+ spin_lock_bh(&vvs->rx_lock);
+ /* Use __skb_recv_datagram() for race-free handling of the receive. It
+ * works for types other than dgrams.
+ */
+ skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
+ spin_unlock_bh(&vvs->rx_lock);
+
+ if (!skb)
+ return err;
+
+ copied = recv_actor(sk, skb);
+ kfree_skb(skb);
+ return copied;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
+
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Asias He");
MODULE_DESCRIPTION("common code for virtio vsock");
size_t len,
int flags)
{
+ ssize_t err;
+
if (flags & MSG_PEEK)
- return vmci_qpair_peekv(vmci_trans(vsk)->qpair, msg, len, 0);
+ err = vmci_qpair_peekv(vmci_trans(vsk)->qpair, msg, len, 0);
else
- return vmci_qpair_dequev(vmci_trans(vsk)->qpair, msg, len, 0);
+ err = vmci_qpair_dequev(vmci_trans(vsk)->qpair, msg, len, 0);
+
+ if (err < 0)
+ err = -ENOMEM;
+
+ return err;
}
static ssize_t vmci_transport_stream_enqueue(
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Bobby Eshleman <bobby.eshleman@bytedance.com>
+ *
+ * Based off of net/unix/unix_bpf.c
+ */
+
+#include <linux/bpf.h>
+#include <linux/module.h>
+#include <linux/skmsg.h>
+#include <linux/socket.h>
+#include <linux/wait.h>
+#include <net/af_vsock.h>
+#include <net/sock.h>
+
+#define vsock_sk_has_data(__sk, __psock) \
+ ({ !skb_queue_empty(&(__sk)->sk_receive_queue) || \
+ !skb_queue_empty(&(__psock)->ingress_skb) || \
+ !list_empty(&(__psock)->ingress_msg); \
+ })
+
+static struct proto *vsock_prot_saved __read_mostly;
+static DEFINE_SPINLOCK(vsock_prot_lock);
+static struct proto vsock_bpf_prot;
+
+static bool vsock_has_data(struct sock *sk, struct sk_psock *psock)
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
+ s64 ret;
+
+ ret = vsock_connectible_has_data(vsk);
+ if (ret > 0)
+ return true;
+
+ return vsock_sk_has_data(sk, psock);
+}
+
+static bool vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock, long timeo)
+{
+ bool ret;
+
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+ return true;
+
+ if (!timeo)
+ return false;
+
+ add_wait_queue(sk_sleep(sk), &wait);
+ sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ ret = vsock_has_data(sk, psock);
+ if (!ret) {
+ wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+ ret = vsock_has_data(sk, psock);
+ }
+ sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ remove_wait_queue(sk_sleep(sk), &wait);
+ return ret;
+}
+
+static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags)
+{
+ struct socket *sock = sk->sk_socket;
+ int err;
+
+ if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
+ err = vsock_connectible_recvmsg(sock, msg, len, flags);
+ else if (sk->sk_type == SOCK_DGRAM)
+ err = vsock_dgram_recvmsg(sock, msg, len, flags);
+ else
+ err = -EPROTOTYPE;
+
+ return err;
+}
+
+static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ size_t len, int flags, int *addr_len)
+{
+ struct sk_psock *psock;
+ int copied;
+
+ psock = sk_psock_get(sk);
+ if (unlikely(!psock))
+ return __vsock_recvmsg(sk, msg, len, flags);
+
+ lock_sock(sk);
+ if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+ return __vsock_recvmsg(sk, msg, len, flags);
+ }
+
+ copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+ while (copied == 0) {
+ long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+
+ if (!vsock_msg_wait_data(sk, psock, timeo)) {
+ copied = -EAGAIN;
+ break;
+ }
+
+ if (sk_psock_queue_empty(psock)) {
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+ return __vsock_recvmsg(sk, msg, len, flags);
+ }
+
+ copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+ }
+
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+
+ return copied;
+}
+
+/* Copy of original proto with updated sock_map methods */
+static struct proto vsock_bpf_prot = {
+ .close = sock_map_close,
+ .recvmsg = vsock_bpf_recvmsg,
+ .sock_is_readable = sk_msg_is_readable,
+ .unhash = sock_map_unhash,
+};
+
+static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
+{
+ *prot = *base;
+ prot->close = sock_map_close;
+ prot->recvmsg = vsock_bpf_recvmsg;
+ prot->sock_is_readable = sk_msg_is_readable;
+}
+
+static void vsock_bpf_check_needs_rebuild(struct proto *ops)
+{
+ /* Paired with the smp_store_release() below. */
+ if (unlikely(ops != smp_load_acquire(&vsock_prot_saved))) {
+ spin_lock_bh(&vsock_prot_lock);
+ if (likely(ops != vsock_prot_saved)) {
+ vsock_bpf_rebuild_protos(&vsock_bpf_prot, ops);
+ /* Make sure proto function pointers are updated before publishing the
+ * pointer to the struct.
+ */
+ smp_store_release(&vsock_prot_saved, ops);
+ }
+ spin_unlock_bh(&vsock_prot_lock);
+ }
+}
+
+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+{
+ struct vsock_sock *vsk;
+
+ if (restore) {
+ sk->sk_write_space = psock->saved_write_space;
+ sock_replace_proto(sk, psock->sk_proto);
+ return 0;
+ }
+
+ vsk = vsock_sk(sk);
+ if (!vsk->transport)
+ return -ENODEV;
+
+ if (!vsk->transport->read_skb)
+ return -EOPNOTSUPP;
+
+ vsock_bpf_check_needs_rebuild(psock->sk_proto);
+ sock_replace_proto(sk, &vsock_bpf_prot);
+ return 0;
+}
+
+void __init vsock_bpf_build_proto(void)
+{
+ vsock_bpf_rebuild_protos(&vsock_bpf_prot, &vsock_proto);
+}
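For context, a minimal userspace sketch (not part of the patch) of how the new vsock sockmap support is expected to be exercised: a connected AF_VSOCK socket is inserted into an existing BPF_MAP_TYPE_SOCKMAP, which goes through vsock_bpf_update_proto() above. The map fd, CID and port are illustrative assumptions:

    #include <errno.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>
    #include <bpf/bpf.h>

    /* Insert a connected vsock socket into a sockmap (value_size == 4).
     * Fails with -EOPNOTSUPP if the active transport lacks ->read_skb().
     */
    static int vsock_add_to_sockmap(int map_fd, unsigned int cid, unsigned int port)
    {
    	struct sockaddr_vm addr = {
    		.svm_family = AF_VSOCK,
    		.svm_cid = cid,
    		.svm_port = port,
    	};
    	int key = 0, fd;

    	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    	if (fd < 0)
    		return -errno;

    	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
    		return -errno;

    	if (bpf_map_update_elem(map_fd, &key, &fd, BPF_ANY) < 0)
    		return -errno;

    	return fd;
    }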
.notify_send_pre_enqueue = virtio_transport_notify_send_pre_enqueue,
.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
.notify_buffer_size = virtio_transport_notify_buffer_size,
+
+ .read_skb = virtio_transport_read_skb,
},
.send_pkt = vsock_loopback_send_pkt,
return ether_addr_equal(addr, wdev_address(wdev));
}
+static bool cfg80211_allowed_random_address(struct wireless_dev *wdev,
+ const struct ieee80211_mgmt *mgmt)
+{
+ if (ieee80211_is_auth(mgmt->frame_control) ||
+ ieee80211_is_deauth(mgmt->frame_control)) {
+ /* Allow random TA to be used with authentication and
+ * deauthentication frames if the driver has indicated support.
+ */
+ if (wiphy_ext_feature_isset(
+ wdev->wiphy,
+ NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA))
+ return true;
+ } else if (ieee80211_is_action(mgmt->frame_control) &&
+ mgmt->u.action.category == WLAN_CATEGORY_PUBLIC) {
+ /* Allow random TA to be used with Public Action frames if the
+ * driver has indicated support.
+ */
+ if (!wdev->connected &&
+ wiphy_ext_feature_isset(
+ wdev->wiphy,
+ NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA))
+ return true;
+
+ if (wdev->connected &&
+ wiphy_ext_feature_isset(
+ wdev->wiphy,
+ NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA_CONNECTED))
+ return true;
+ }
+
+ return false;
+}
+
int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev,
struct cfg80211_mgmt_tx_params *params, u64 *cookie)
return err;
}
- if (!cfg80211_allowed_address(wdev, mgmt->sa)) {
- /* Allow random TA to be used with Public Action frames if the
- * driver has indicated support for this. Otherwise, only allow
- * the local address to be used.
- */
- if (!ieee80211_is_action(mgmt->frame_control) ||
- mgmt->u.action.category != WLAN_CATEGORY_PUBLIC)
- return -EINVAL;
- if (!wdev->connected &&
- !wiphy_ext_feature_isset(
- &rdev->wiphy,
- NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA))
- return -EINVAL;
- if (wdev->connected &&
- !wiphy_ext_feature_isset(
- &rdev->wiphy,
- NL80211_EXT_FEATURE_MGMT_TX_RANDOM_TA_CONNECTED))
- return -EINVAL;
- }
+ if (!cfg80211_allowed_address(wdev, mgmt->sa) &&
+ !cfg80211_allowed_random_address(wdev, mgmt))
+ return -EINVAL;
/* Transmit the management frame as requested by user space */
return rdev_mgmt_tx(rdev, wdev, params, cookie);
[NL80211_ATTR_MAX_NUM_AKM_SUITES] = { .type = NLA_REJECT },
[NL80211_ATTR_PUNCT_BITMAP] =
NLA_POLICY_FULL_RANGE(NLA_U32, &nl80211_punct_bitmap_range),
+
+ [NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS] = { .type = NLA_U16 },
+ [NL80211_ATTR_HW_TIMESTAMP_ENABLED] = { .type = NLA_FLAG },
+ [NL80211_ATTR_EMA_RNR_ELEMS] = { .type = NLA_NESTED },
};
/* policy for the key attributes */
nla_nest_end(msg, nl_rates);
+ /* S1G capabilities */
+ if (sband->band == NL80211_BAND_S1GHZ && sband->s1g_cap.s1g &&
+ (nla_put(msg, NL80211_BAND_ATTR_S1G_CAPA,
+ sizeof(sband->s1g_cap.cap),
+ sband->s1g_cap.cap) ||
+ nla_put(msg, NL80211_BAND_ATTR_S1G_MCS_NSS_SET,
+ sizeof(sband->s1g_cap.nss_mcs),
+ sband->s1g_cap.nss_mcs)))
+ return -ENOBUFS;
+
return 0;
}
if (rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_MLO)
nla_put_flag(msg, NL80211_ATTR_MLO_SUPPORT);
+ if (rdev->wiphy.hw_timestamp_max_peers &&
+ nla_put_u16(msg, NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS,
+ rdev->wiphy.hw_timestamp_max_peers))
+ goto nla_put_failure;
+
/* done */
state->split_start = 0;
break;
return result;
}
-static int nl80211_send_chandef(struct sk_buff *msg,
- const struct cfg80211_chan_def *chandef)
+int nl80211_send_chandef(struct sk_buff *msg, const struct cfg80211_chan_def *chandef)
{
if (WARN_ON(!cfg80211_chandef_valid(chandef)))
return -EINVAL;
return -ENOBUFS;
return 0;
}
+EXPORT_SYMBOL(nl80211_send_chandef);
static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flags,
struct cfg80211_registered_device *rdev,
return elems;
}
+static struct cfg80211_rnr_elems *
+nl80211_parse_rnr_elems(struct wiphy *wiphy, struct nlattr *attrs,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *nl_elems;
+ struct cfg80211_rnr_elems *elems;
+ int rem_elems;
+ u8 i = 0, num_elems = 0;
+
+ nla_for_each_nested(nl_elems, attrs, rem_elems) {
+ int ret;
+
+ ret = validate_ie_attr(nl_elems, extack);
+ if (ret)
+ return ERR_PTR(ret);
+
+ num_elems++;
+ }
+
+ elems = kzalloc(struct_size(elems, elem, num_elems), GFP_KERNEL);
+ if (!elems)
+ return ERR_PTR(-ENOMEM);
+
+ nla_for_each_nested(nl_elems, attrs, rem_elems) {
+ elems->elem[i].data = nla_data(nl_elems);
+ elems->elem[i].len = nla_len(nl_elems);
+ i++;
+ }
+ elems->cnt = num_elems;
+ return elems;
+}
+
static int nl80211_parse_he_bss_color(struct nlattr *attrs,
struct cfg80211_he_bss_color *he_bss_color)
{
static int nl80211_parse_beacon(struct cfg80211_registered_device *rdev,
struct nlattr *attrs[],
- struct cfg80211_beacon_data *bcn)
+ struct cfg80211_beacon_data *bcn,
+ struct netlink_ext_ack *extack)
{
bool haveinfo = false;
int err;
return PTR_ERR(mbssid);
bcn->mbssid_ies = mbssid;
+
+ if (bcn->mbssid_ies && attrs[NL80211_ATTR_EMA_RNR_ELEMS]) {
+ struct cfg80211_rnr_elems *rnr =
+ nl80211_parse_rnr_elems(&rdev->wiphy,
+ attrs[NL80211_ATTR_EMA_RNR_ELEMS],
+ extack);
+
+ if (IS_ERR(rnr))
+ return PTR_ERR(rnr);
+
+ if (rnr && rnr->cnt < bcn->mbssid_ies->cnt)
+ return -EINVAL;
+
+ bcn->rnr_ies = rnr;
+ }
}
return 0;
if (!params)
return -ENOMEM;
- err = nl80211_parse_beacon(rdev, info->attrs, &params->beacon);
+ err = nl80211_parse_beacon(rdev, info->attrs, &params->beacon,
+ info->extack);
if (err)
goto out;
goto out_unlock;
}
+ if (!params->mbssid_config.ema && params->beacon.rnr_ies) {
+ err = -EINVAL;
+ goto out_unlock;
+ }
+
err = nl80211_calculate_ap_params(params);
if (err)
goto out_unlock;
params->mbssid_config.tx_wdev->netdev &&
params->mbssid_config.tx_wdev->netdev != dev)
dev_put(params->mbssid_config.tx_wdev->netdev);
+ kfree(params->beacon.rnr_ies);
kfree(params);
return err;
if (!wdev->links[link_id].ap.beacon_interval)
return -EINVAL;
- err = nl80211_parse_beacon(rdev, info->attrs, &params);
+ err = nl80211_parse_beacon(rdev, info->attrs, &params, info->extack);
if (err)
goto out;
out:
kfree(params.mbssid_ies);
+ kfree(params.rnr_ies);
return err;
}
struct nlattr *attr;
struct wiphy *wiphy;
int err, tmp, n_ssids = 0, n_channels, i;
- size_t ie_len;
+ size_t ie_len, size;
wiphy = &rdev->wiphy;
if (ie_len > wiphy->max_scan_ie_len)
return -EINVAL;
- request = kzalloc(sizeof(*request)
- + sizeof(*request->ssids) * n_ssids
- + sizeof(*request->channels) * n_channels
- + ie_len, GFP_KERNEL);
+ size = struct_size(request, channels, n_channels);
+ size = size_add(size, array_size(sizeof(*request->ssids), n_ssids));
+ size = size_add(size, ie_len);
+ request = kzalloc(size, GFP_KERNEL);
if (!request)
return -ENOMEM;
struct nlattr *attr;
int err, tmp, n_ssids = 0, n_match_sets = 0, n_channels, i, n_plans = 0;
enum nl80211_band band;
- size_t ie_len;
+ size_t ie_len, size;
struct nlattr *tb[NL80211_SCHED_SCAN_MATCH_ATTR_MAX + 1];
s32 default_match_rssi = NL80211_SCAN_RSSI_THOLD_OFF;
attrs[NL80211_ATTR_SCHED_SCAN_RSSI_ADJUST]))
return ERR_PTR(-EINVAL);
- request = kzalloc(sizeof(*request)
- + sizeof(*request->ssids) * n_ssids
- + sizeof(*request->match_sets) * n_match_sets
- + sizeof(*request->scan_plans) * n_plans
- + sizeof(*request->channels) * n_channels
- + ie_len, GFP_KERNEL);
+ size = struct_size(request, channels, n_channels);
+ size = size_add(size, array_size(sizeof(*request->ssids), n_ssids));
+ size = size_add(size, array_size(sizeof(*request->match_sets),
+ n_match_sets));
+ size = size_add(size, array_size(sizeof(*request->scan_plans),
+ n_plans));
+ size = size_add(size, ie_len);
+ request = kzalloc(size, GFP_KERNEL);
if (!request)
return ERR_PTR(-ENOMEM);
if (!need_new_beacon)
goto skip_beacons;
- err = nl80211_parse_beacon(rdev, info->attrs, &params.beacon_after);
+ err = nl80211_parse_beacon(rdev, info->attrs, &params.beacon_after,
+ info->extack);
if (err)
goto free;
if (err)
goto free;
- err = nl80211_parse_beacon(rdev, csa_attrs, &params.beacon_csa);
+ err = nl80211_parse_beacon(rdev, csa_attrs, &params.beacon_csa,
+ info->extack);
if (err)
goto free;
free:
kfree(params.beacon_after.mbssid_ies);
kfree(params.beacon_csa.mbssid_ies);
+ kfree(params.beacon_after.rnr_ies);
+ kfree(params.beacon_csa.rnr_ies);
kfree(csa_attrs);
return err;
}
params.count = nla_get_u8(info->attrs[NL80211_ATTR_COLOR_CHANGE_COUNT]);
params.color = nla_get_u8(info->attrs[NL80211_ATTR_COLOR_CHANGE_COLOR]);
- err = nl80211_parse_beacon(rdev, info->attrs, &params.beacon_next);
+ err = nl80211_parse_beacon(rdev, info->attrs, &params.beacon_next,
+ info->extack);
if (err)
return err;
if (err)
goto out;
- err = nl80211_parse_beacon(rdev, tb, &params.beacon_color_change);
+ err = nl80211_parse_beacon(rdev, tb, &params.beacon_color_change,
+ info->extack);
if (err)
goto out;
out:
kfree(params.beacon_next.mbssid_ies);
kfree(params.beacon_color_change.mbssid_ies);
+ kfree(params.beacon_next.rnr_ies);
+ kfree(params.beacon_color_change.rnr_ies);
kfree(tb);
return err;
}
return ret;
}
+static int nl80211_set_hw_timestamp(struct sk_buff *skb,
+ struct genl_info *info)
+{
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct net_device *dev = info->user_ptr[1];
+ struct cfg80211_set_hw_timestamp hwts = {};
+
+ if (!rdev->wiphy.hw_timestamp_max_peers)
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL80211_ATTR_MAC] &&
+ rdev->wiphy.hw_timestamp_max_peers != CFG80211_HW_TIMESTAMP_ALL_PEERS)
+ return -EOPNOTSUPP;
+
+ if (info->attrs[NL80211_ATTR_MAC])
+ hwts.macaddr = nla_data(info->attrs[NL80211_ATTR_MAC]);
+
+ hwts.enable =
+ nla_get_flag(info->attrs[NL80211_ATTR_HW_TIMESTAMP_ENABLED]);
+
+ return rdev_set_hw_timestamp(rdev, dev, &hwts);
+}
+
#define NL80211_FLAG_NEED_WIPHY 0x01
#define NL80211_FLAG_NEED_NETDEV 0x02
#define NL80211_FLAG_NEED_RTNL 0x04
.internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP |
NL80211_FLAG_MLO_VALID_LINK_ID),
},
+ {
+ .cmd = NL80211_CMD_SET_HW_TIMESTAMP,
+ .doit = nl80211_set_hw_timestamp,
+ .flags = GENL_UNS_ADMIN_PERM,
+ .internal_flags = IFLAGS(NL80211_FLAG_NEED_NETDEV_UP),
+ },
};
static struct genl_family nl80211_fam __ro_after_init = {
static int __nl80211_rx_control_port(struct net_device *dev,
struct sk_buff *skb,
- bool unencrypted, gfp_t gfp)
+ bool unencrypted,
+ int link_id,
+ gfp_t gfp)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
NL80211_ATTR_PAD) ||
nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, addr) ||
nla_put_u16(msg, NL80211_ATTR_CONTROL_PORT_ETHERTYPE, proto) ||
+ (link_id >= 0 &&
+ nla_put_u8(msg, NL80211_ATTR_MLO_LINK_ID, link_id)) ||
(unencrypted && nla_put_flag(msg,
NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT)))
goto nla_put_failure;
return -ENOBUFS;
}
-bool cfg80211_rx_control_port(struct net_device *dev,
- struct sk_buff *skb, bool unencrypted)
+bool cfg80211_rx_control_port(struct net_device *dev, struct sk_buff *skb,
+ bool unencrypted, int link_id)
{
int ret;
- trace_cfg80211_rx_control_port(dev, skb, unencrypted);
- ret = __nl80211_rx_control_port(dev, skb, unencrypted, GFP_ATOMIC);
+ trace_cfg80211_rx_control_port(dev, skb, unencrypted, link_id);
+ ret = __nl80211_rx_control_port(dev, skb, unencrypted, link_id,
+ GFP_ATOMIC);
trace_cfg80211_return_bool(ret == 0);
return ret == 0;
}
return ret;
}
+static inline int
+rdev_set_hw_timestamp(struct cfg80211_registered_device *rdev,
+ struct net_device *dev,
+ struct cfg80211_set_hw_timestamp *hwts)
+{
+ struct wiphy *wiphy = &rdev->wiphy;
+ int ret;
+
+ if (!rdev->ops->set_hw_timestamp)
+ return -EOPNOTSUPP;
+
+ trace_rdev_set_hw_timestamp(wiphy, dev, hwts);
+ ret = rdev->ops->set_hw_timestamp(wiphy, dev, hwts);
+ trace_rdev_return_int(wiphy, ret);
+
+ return ret;
+}
#endif /* __CFG80211_RDEV_OPS */
}
int cfg80211_get_ies_channel_number(const u8 *ie, size_t ielen,
- enum nl80211_band band,
- enum cfg80211_bss_frame_type ftype)
+ enum nl80211_band band)
{
const struct element *tmp;
if (!he_6ghz_oper)
return -1;
- if (ftype != CFG80211_BSS_FTYPE_BEACON ||
- he_6ghz_oper->control & IEEE80211_HE_6GHZ_OPER_CTRL_DUP_BEACON)
- return he_6ghz_oper->primary;
+ return he_6ghz_oper->primary;
}
} else if (band == NL80211_BAND_S1GHZ) {
tmp = cfg80211_find_elem(WLAN_EID_S1G_OPERATION, ie, ielen);
static struct ieee80211_channel *
cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
struct ieee80211_channel *channel,
- enum nl80211_bss_scan_width scan_width,
- enum cfg80211_bss_frame_type ftype)
+ enum nl80211_bss_scan_width scan_width)
{
u32 freq;
int channel_number;
struct ieee80211_channel *alt_channel;
channel_number = cfg80211_get_ies_channel_number(ie, ielen,
- channel->band, ftype);
+ channel->band);
if (channel_number < 0) {
/* No channel information in frame payload */
freq = ieee80211_channel_to_freq_khz(channel_number, channel->band);
/*
- * In 6GHz, duplicated beacon indication is relevant for
- * beacons only.
+ * Frame info (beacon/probe resp) is the same as the received
+ * channel, no need for further processing.
*/
- if (channel->band == NL80211_BAND_6GHZ &&
- (freq == channel->center_freq ||
- abs(freq - channel->center_freq) > 80))
+ if (freq == ieee80211_channel_to_khz(channel))
return channel;
alt_channel = ieee80211_get_channel_khz(wiphy, freq);
if (!alt_channel) {
- if (channel->band == NL80211_BAND_2GHZ) {
+ if (channel->band == NL80211_BAND_2GHZ ||
+ channel->band == NL80211_BAND_6GHZ) {
/*
* Better not allow unexpected channels when that could
* be going beyond the 1-11 range (e.g., discovering
* BSS on channel 12 when radio is configured for
- * channel 11.
+ * channel 11) or beyond the 6 GHz channel range.
*/
return NULL;
}
return NULL;
channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan,
- data->scan_width, ftype);
+ data->scan_width);
if (!channel)
return NULL;
size_t ielen, min_hdr_len = offsetof(struct ieee80211_mgmt,
u.probe_resp.variable);
int bss_type;
- enum cfg80211_bss_frame_type ftype;
BUILD_BUG_ON(offsetof(struct ieee80211_mgmt, u.probe_resp.variable) !=
offsetof(struct ieee80211_mgmt, u.beacon.variable));
variable = ext->u.s1g_beacon.variable;
}
- if (ieee80211_is_beacon(mgmt->frame_control))
- ftype = CFG80211_BSS_FTYPE_BEACON;
- else if (ieee80211_is_probe_resp(mgmt->frame_control))
- ftype = CFG80211_BSS_FTYPE_PRESP;
- else
- ftype = CFG80211_BSS_FTYPE_UNKNOWN;
-
channel = cfg80211_get_bss_channel(wiphy, variable,
- ielen, data->chan, data->scan_width,
- ftype);
+ ielen, data->chan, data->scan_width);
if (!channel)
return NULL;
TRACE_EVENT(cfg80211_rx_control_port,
TP_PROTO(struct net_device *netdev, struct sk_buff *skb,
- bool unencrypted),
- TP_ARGS(netdev, skb, unencrypted),
+ bool unencrypted, int link_id),
+ TP_ARGS(netdev, skb, unencrypted, link_id),
TP_STRUCT__entry(
NETDEV_ENTRY
__field(int, len)
MAC_ENTRY(from)
__field(u16, proto)
__field(bool, unencrypted)
+ __field(int, link_id)
),
TP_fast_assign(
NETDEV_ASSIGN;
MAC_ASSIGN(from, eth_hdr(skb)->h_source);
__entry->proto = be16_to_cpu(skb->protocol);
__entry->unencrypted = unencrypted;
+ __entry->link_id = link_id;
),
- TP_printk(NETDEV_PR_FMT ", len=%d, %pM, proto: 0x%x, unencrypted: %s",
+ TP_printk(NETDEV_PR_FMT ", len=%d, %pM, proto: 0x%x, unencrypted: %s, link: %d",
NETDEV_PR_ARG, __entry->len, __entry->from,
- __entry->proto, BOOL_TO_STR(__entry->unencrypted))
+ __entry->proto, BOOL_TO_STR(__entry->unencrypted),
+ __entry->link_id)
);
TRACE_EVENT(cfg80211_cqm_rssi_notify,
__entry->link_id)
);
+TRACE_EVENT(rdev_set_hw_timestamp,
+ TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
+ struct cfg80211_set_hw_timestamp *hwts),
+
+ TP_ARGS(wiphy, netdev, hwts),
+
+ TP_STRUCT__entry(
+ WIPHY_ENTRY
+ NETDEV_ENTRY
+ MAC_ENTRY(macaddr)
+ __field(bool, enable)
+ ),
+
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ MAC_ASSIGN(macaddr, hwts->macaddr);
+ __entry->enable = hwts->enable;
+ ),
+
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", mac %pM, enable: %u",
+ WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->macaddr,
+ __entry->enable)
+);
+
#endif /* !__RDEV_OPS_TRACE || TRACE_HEADER_MULTI_READ */
#undef TRACE_INCLUDE_PATH
return frame;
}
-bool ieee80211_is_valid_amsdu(struct sk_buff *skb, bool mesh_hdr)
+static u16
+ieee80211_amsdu_subframe_length(void *field, u8 mesh_flags, u8 hdr_type)
+{
+ __le16 *field_le = field;
+ __be16 *field_be = field;
+ u16 len;
+
+ if (hdr_type >= 2)
+ len = le16_to_cpu(*field_le);
+ else
+ len = be16_to_cpu(*field_be);
+ if (hdr_type)
+ len += __ieee80211_get_mesh_hdrlen(mesh_flags);
+
+ return len;
+}
+
+bool ieee80211_is_valid_amsdu(struct sk_buff *skb, u8 mesh_hdr)
{
int offset = 0, remaining, subframe_len, padding;
if (skb_copy_bits(skb, offset + 2 * ETH_ALEN, &hdr, sizeof(hdr)) < 0)
return false;
- if (mesh_hdr)
- len = le16_to_cpu(*(__le16 *)&hdr.len) +
- __ieee80211_get_mesh_hdrlen(hdr.mesh_flags);
- else
- len = ntohs(hdr.len);
-
+ len = ieee80211_amsdu_subframe_length(&hdr.len, hdr.mesh_flags,
+ mesh_hdr);
subframe_len = sizeof(struct ethhdr) + len;
padding = (4 - subframe_len) & 0x3;
remaining = skb->len - offset;
const u8 *addr, enum nl80211_iftype iftype,
const unsigned int extra_headroom,
const u8 *check_da, const u8 *check_sa,
- bool mesh_control)
+ u8 mesh_control)
{
unsigned int hlen = ALIGN(extra_headroom, 4);
struct sk_buff *frame = NULL;
skb_copy_bits(skb, offset, &hdr, copy_len);
if (iftype == NL80211_IFTYPE_MESH_POINT)
mesh_len = __ieee80211_get_mesh_hdrlen(hdr.flags);
- if (mesh_control)
- len = le16_to_cpu(*(__le16 *)&hdr.eth.h_proto) + mesh_len;
- else
- len = ntohs(hdr.eth.h_proto);
-
+ len = ieee80211_amsdu_subframe_length(&hdr.eth.h_proto, hdr.flags,
+ mesh_control);
subframe_len = sizeof(struct ethhdr) + len;
padding = (4 - subframe_len) & 0x3;
return ERR_PTR(-ENOMEM);
bpf_map_inc(&map->map);
+ atomic_inc(&map->count);
node->map = map;
node->map_entry = map_entry;
static void xsk_map_node_free(struct xsk_map_node *node)
{
+ struct xsk_map *map = node->map;
+
bpf_map_put(&node->map->map);
kfree(node);
+ atomic_dec(&map->count);
}
static void xsk_map_sock_add(struct xdp_sock *xs, struct xsk_map_node *node)
return &m->map;
}
+static u64 xsk_map_mem_usage(const struct bpf_map *map)
+{
+ struct xsk_map *m = container_of(map, struct xsk_map, map);
+
+ return struct_size(m, xsk_map, map->max_entries) +
+ (u64)atomic_read(&m->count) * sizeof(struct xsk_map_node);
+}
+
static void xsk_map_free(struct bpf_map *map)
{
struct xsk_map *m = container_of(map, struct xsk_map, map);
.map_update_elem = xsk_map_update_elem,
.map_delete_elem = xsk_map_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = xsk_map_mem_usage,
.map_btf_id = &xsk_map_btf_ids[0],
.map_redirect = xsk_map_redirect,
};
xso->dir = xdo->dir;
xso->dev = xdo->dev;
xso->real_dev = xdo->real_dev;
+ xso->flags = XFRM_DEV_OFFLOAD_FLAG_ACQ;
netdev_tracker_alloc(xso->dev, &xso->dev_tracker,
GFP_ATOMIC);
error = xso->dev->xfrmdev_ops->xdo_dev_state_add(x, NULL);
memcpy(&p->id, &x->id, sizeof(p->id));
memcpy(&p->sel, &x->sel, sizeof(p->sel));
memcpy(&p->lft, &x->lft, sizeof(p->lft));
+ if (x->xso.dev)
+ xfrm_dev_state_update_curlft(x);
memcpy(&p->curlft, &x->curlft, sizeof(p->curlft));
put_unaligned(x->stats.replay_window, &p->stats.replay_window);
put_unaligned(x->stats.replay, &p->stats.replay);
sub_and_test b i v
dec_and_test b v
inc_and_test b v
-add_negative b i v
+add_negative B i v
add_unless fb v i:a i:u
inc_not_zero b v
inc_unless_negative b v
cat <<EOF
/**
- * arch_${atomic}_add_negative - add and test if negative
+ * arch_${atomic}_add_negative${order} - Add and test if negative
* @i: integer value to add
* @v: pointer of type ${atomic}_t
*
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns true if the result is negative,
+ * or false when the result is greater than or equal to zero.
*/
static __always_inline bool
-arch_${atomic}_add_negative(${int} i, ${atomic}_t *v)
+arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
- return arch_${atomic}_add_return(i, v) < 0;
+ return arch_${atomic}_add_return${order}(i, v) < 0;
}
EOF
case LSM_AUDIT_DATA_NET:
if (a->u.net->sk) {
const struct sock *sk = a->u.net->sk;
- struct unix_sock *u;
+ const struct unix_sock *u;
struct unix_address *addr;
int len = 0;
char *p = NULL;
switch (sk->sk_family) {
case AF_INET: {
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
print_ipv4_addr(ab, inet->inet_rcv_saddr,
inet->inet_sport,
}
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6: {
- struct inet_sock *inet = inet_sk(sk);
+ const struct inet_sock *inet = inet_sk(sk);
print_ipv6_addr(ab, &sk->sk_v6_rcv_saddr,
inet->inet_sport,
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
-#define _UAPI__ASM_BPF_PERF_EVENT_H__
-
-#include <asm/ptrace.h>
-
-typedef struct user_pt_regs bpf_user_pt_regs_t;
-
-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
-#define _UAPI__ASM_BPF_PERF_EVENT_H__
-
-#include "ptrace.h"
-
-typedef user_pt_regs bpf_user_pt_regs_t;
-
-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * S390 version
- * Copyright IBM Corp. 1999, 2000
- * Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
- */
-
-#ifndef _UAPI_S390_PTRACE_H
-#define _UAPI_S390_PTRACE_H
-
-/*
- * Offsets in the user_regs_struct. They are used for the ptrace
- * system call and in entry.S
- */
-#ifndef __s390x__
-
-#define PT_PSWMASK 0x00
-#define PT_PSWADDR 0x04
-#define PT_GPR0 0x08
-#define PT_GPR1 0x0C
-#define PT_GPR2 0x10
-#define PT_GPR3 0x14
-#define PT_GPR4 0x18
-#define PT_GPR5 0x1C
-#define PT_GPR6 0x20
-#define PT_GPR7 0x24
-#define PT_GPR8 0x28
-#define PT_GPR9 0x2C
-#define PT_GPR10 0x30
-#define PT_GPR11 0x34
-#define PT_GPR12 0x38
-#define PT_GPR13 0x3C
-#define PT_GPR14 0x40
-#define PT_GPR15 0x44
-#define PT_ACR0 0x48
-#define PT_ACR1 0x4C
-#define PT_ACR2 0x50
-#define PT_ACR3 0x54
-#define PT_ACR4 0x58
-#define PT_ACR5 0x5C
-#define PT_ACR6 0x60
-#define PT_ACR7 0x64
-#define PT_ACR8 0x68
-#define PT_ACR9 0x6C
-#define PT_ACR10 0x70
-#define PT_ACR11 0x74
-#define PT_ACR12 0x78
-#define PT_ACR13 0x7C
-#define PT_ACR14 0x80
-#define PT_ACR15 0x84
-#define PT_ORIGGPR2 0x88
-#define PT_FPC 0x90
-/*
- * A nasty fact of life that the ptrace api
- * only supports passing of longs.
- */
-#define PT_FPR0_HI 0x98
-#define PT_FPR0_LO 0x9C
-#define PT_FPR1_HI 0xA0
-#define PT_FPR1_LO 0xA4
-#define PT_FPR2_HI 0xA8
-#define PT_FPR2_LO 0xAC
-#define PT_FPR3_HI 0xB0
-#define PT_FPR3_LO 0xB4
-#define PT_FPR4_HI 0xB8
-#define PT_FPR4_LO 0xBC
-#define PT_FPR5_HI 0xC0
-#define PT_FPR5_LO 0xC4
-#define PT_FPR6_HI 0xC8
-#define PT_FPR6_LO 0xCC
-#define PT_FPR7_HI 0xD0
-#define PT_FPR7_LO 0xD4
-#define PT_FPR8_HI 0xD8
-#define PT_FPR8_LO 0XDC
-#define PT_FPR9_HI 0xE0
-#define PT_FPR9_LO 0xE4
-#define PT_FPR10_HI 0xE8
-#define PT_FPR10_LO 0xEC
-#define PT_FPR11_HI 0xF0
-#define PT_FPR11_LO 0xF4
-#define PT_FPR12_HI 0xF8
-#define PT_FPR12_LO 0xFC
-#define PT_FPR13_HI 0x100
-#define PT_FPR13_LO 0x104
-#define PT_FPR14_HI 0x108
-#define PT_FPR14_LO 0x10C
-#define PT_FPR15_HI 0x110
-#define PT_FPR15_LO 0x114
-#define PT_CR_9 0x118
-#define PT_CR_10 0x11C
-#define PT_CR_11 0x120
-#define PT_IEEE_IP 0x13C
-#define PT_LASTOFF PT_IEEE_IP
-#define PT_ENDREGS 0x140-1
-
-#define GPR_SIZE 4
-#define CR_SIZE 4
-
-#define STACK_FRAME_OVERHEAD 96 /* size of minimum stack frame */
-
-#else /* __s390x__ */
-
-#define PT_PSWMASK 0x00
-#define PT_PSWADDR 0x08
-#define PT_GPR0 0x10
-#define PT_GPR1 0x18
-#define PT_GPR2 0x20
-#define PT_GPR3 0x28
-#define PT_GPR4 0x30
-#define PT_GPR5 0x38
-#define PT_GPR6 0x40
-#define PT_GPR7 0x48
-#define PT_GPR8 0x50
-#define PT_GPR9 0x58
-#define PT_GPR10 0x60
-#define PT_GPR11 0x68
-#define PT_GPR12 0x70
-#define PT_GPR13 0x78
-#define PT_GPR14 0x80
-#define PT_GPR15 0x88
-#define PT_ACR0 0x90
-#define PT_ACR1 0x94
-#define PT_ACR2 0x98
-#define PT_ACR3 0x9C
-#define PT_ACR4 0xA0
-#define PT_ACR5 0xA4
-#define PT_ACR6 0xA8
-#define PT_ACR7 0xAC
-#define PT_ACR8 0xB0
-#define PT_ACR9 0xB4
-#define PT_ACR10 0xB8
-#define PT_ACR11 0xBC
-#define PT_ACR12 0xC0
-#define PT_ACR13 0xC4
-#define PT_ACR14 0xC8
-#define PT_ACR15 0xCC
-#define PT_ORIGGPR2 0xD0
-#define PT_FPC 0xD8
-#define PT_FPR0 0xE0
-#define PT_FPR1 0xE8
-#define PT_FPR2 0xF0
-#define PT_FPR3 0xF8
-#define PT_FPR4 0x100
-#define PT_FPR5 0x108
-#define PT_FPR6 0x110
-#define PT_FPR7 0x118
-#define PT_FPR8 0x120
-#define PT_FPR9 0x128
-#define PT_FPR10 0x130
-#define PT_FPR11 0x138
-#define PT_FPR12 0x140
-#define PT_FPR13 0x148
-#define PT_FPR14 0x150
-#define PT_FPR15 0x158
-#define PT_CR_9 0x160
-#define PT_CR_10 0x168
-#define PT_CR_11 0x170
-#define PT_IEEE_IP 0x1A8
-#define PT_LASTOFF PT_IEEE_IP
-#define PT_ENDREGS 0x1B0-1
-
-#define GPR_SIZE 8
-#define CR_SIZE 8
-
-#define STACK_FRAME_OVERHEAD 160 /* size of minimum stack frame */
-
-#endif /* __s390x__ */
-
-#define NUM_GPRS 16
-#define NUM_FPRS 16
-#define NUM_CRS 16
-#define NUM_ACRS 16
-
-#define NUM_CR_WORDS 3
-
-#define FPR_SIZE 8
-#define FPC_SIZE 4
-#define FPC_PAD_SIZE 4 /* gcc insists on aligning the fpregs */
-#define ACR_SIZE 4
-
-
-#define PTRACE_OLDSETOPTIONS 21
-#define PTRACE_SYSEMU 31
-#define PTRACE_SYSEMU_SINGLESTEP 32
-#ifndef __ASSEMBLY__
-#include <linux/stddef.h>
-#include <linux/types.h>
-
-typedef union {
- float f;
- double d;
- __u64 ui;
- struct
- {
- __u32 hi;
- __u32 lo;
- } fp;
-} freg_t;
-
-typedef struct {
- __u32 fpc;
- __u32 pad;
- freg_t fprs[NUM_FPRS];
-} s390_fp_regs;
-
-#define FPC_EXCEPTION_MASK 0xF8000000
-#define FPC_FLAGS_MASK 0x00F80000
-#define FPC_DXC_MASK 0x0000FF00
-#define FPC_RM_MASK 0x00000003
-
-/* this typedef defines how a Program Status Word looks like */
-typedef struct {
- unsigned long mask;
- unsigned long addr;
-} __attribute__ ((aligned(8))) psw_t;
-
-#ifndef __s390x__
-
-#define PSW_MASK_PER 0x40000000UL
-#define PSW_MASK_DAT 0x04000000UL
-#define PSW_MASK_IO 0x02000000UL
-#define PSW_MASK_EXT 0x01000000UL
-#define PSW_MASK_KEY 0x00F00000UL
-#define PSW_MASK_BASE 0x00080000UL /* always one */
-#define PSW_MASK_MCHECK 0x00040000UL
-#define PSW_MASK_WAIT 0x00020000UL
-#define PSW_MASK_PSTATE 0x00010000UL
-#define PSW_MASK_ASC 0x0000C000UL
-#define PSW_MASK_CC 0x00003000UL
-#define PSW_MASK_PM 0x00000F00UL
-#define PSW_MASK_RI 0x00000000UL
-#define PSW_MASK_EA 0x00000000UL
-#define PSW_MASK_BA 0x00000000UL
-
-#define PSW_MASK_USER 0x0000FF00UL
-
-#define PSW_ADDR_AMODE 0x80000000UL
-#define PSW_ADDR_INSN 0x7FFFFFFFUL
-
-#define PSW_DEFAULT_KEY (((unsigned long) PAGE_DEFAULT_ACC) << 20)
-
-#define PSW_ASC_PRIMARY 0x00000000UL
-#define PSW_ASC_ACCREG 0x00004000UL
-#define PSW_ASC_SECONDARY 0x00008000UL
-#define PSW_ASC_HOME 0x0000C000UL
-
-#else /* __s390x__ */
-
-#define PSW_MASK_PER 0x4000000000000000UL
-#define PSW_MASK_DAT 0x0400000000000000UL
-#define PSW_MASK_IO 0x0200000000000000UL
-#define PSW_MASK_EXT 0x0100000000000000UL
-#define PSW_MASK_BASE 0x0000000000000000UL
-#define PSW_MASK_KEY 0x00F0000000000000UL
-#define PSW_MASK_MCHECK 0x0004000000000000UL
-#define PSW_MASK_WAIT 0x0002000000000000UL
-#define PSW_MASK_PSTATE 0x0001000000000000UL
-#define PSW_MASK_ASC 0x0000C00000000000UL
-#define PSW_MASK_CC 0x0000300000000000UL
-#define PSW_MASK_PM 0x00000F0000000000UL
-#define PSW_MASK_RI 0x0000008000000000UL
-#define PSW_MASK_EA 0x0000000100000000UL
-#define PSW_MASK_BA 0x0000000080000000UL
-
-#define PSW_MASK_USER 0x0000FF0180000000UL
-
-#define PSW_ADDR_AMODE 0x0000000000000000UL
-#define PSW_ADDR_INSN 0xFFFFFFFFFFFFFFFFUL
-
-#define PSW_DEFAULT_KEY (((unsigned long) PAGE_DEFAULT_ACC) << 52)
-
-#define PSW_ASC_PRIMARY 0x0000000000000000UL
-#define PSW_ASC_ACCREG 0x0000400000000000UL
-#define PSW_ASC_SECONDARY 0x0000800000000000UL
-#define PSW_ASC_HOME 0x0000C00000000000UL
-
-#endif /* __s390x__ */
-
-
-/*
- * The s390_regs structure is used to define the elf_gregset_t.
- */
-typedef struct {
- psw_t psw;
- unsigned long gprs[NUM_GPRS];
- unsigned int acrs[NUM_ACRS];
- unsigned long orig_gpr2;
-} s390_regs;
-
-/*
- * The user_pt_regs structure exports the beginning of
- * the in-kernel pt_regs structure to user space.
- */
-typedef struct {
- unsigned long args[1];
- psw_t psw;
- unsigned long gprs[NUM_GPRS];
-} user_pt_regs;
-
-/*
- * Now for the user space program event recording (trace) definitions.
- * The following structures are used only for the ptrace interface, don't
- * touch or even look at it if you don't want to modify the user-space
- * ptrace interface. In particular stay away from it for in-kernel PER.
- */
-typedef struct {
- unsigned long cr[NUM_CR_WORDS];
-} per_cr_words;
-
-#define PER_EM_MASK 0xE8000000UL
-
-typedef struct {
-#ifdef __s390x__
- unsigned : 32;
-#endif /* __s390x__ */
- unsigned em_branching : 1;
- unsigned em_instruction_fetch : 1;
- /*
- * Switching on storage alteration automatically fixes
- * the storage alteration event bit in the users std.
- */
- unsigned em_storage_alteration : 1;
- unsigned em_gpr_alt_unused : 1;
- unsigned em_store_real_address : 1;
- unsigned : 3;
- unsigned branch_addr_ctl : 1;
- unsigned : 1;
- unsigned storage_alt_space_ctl : 1;
- unsigned : 21;
- unsigned long starting_addr;
- unsigned long ending_addr;
-} per_cr_bits;
-
-typedef struct {
- unsigned short perc_atmid;
- unsigned long address;
- unsigned char access_id;
-} per_lowcore_words;
-
-typedef struct {
- unsigned perc_branching : 1;
- unsigned perc_instruction_fetch : 1;
- unsigned perc_storage_alteration : 1;
- unsigned perc_gpr_alt_unused : 1;
- unsigned perc_store_real_address : 1;
- unsigned : 3;
- unsigned atmid_psw_bit_31 : 1;
- unsigned atmid_validity_bit : 1;
- unsigned atmid_psw_bit_32 : 1;
- unsigned atmid_psw_bit_5 : 1;
- unsigned atmid_psw_bit_16 : 1;
- unsigned atmid_psw_bit_17 : 1;
- unsigned si : 2;
- unsigned long address;
- unsigned : 4;
- unsigned access_id : 4;
-} per_lowcore_bits;
-
-typedef struct {
- union {
- per_cr_words words;
- per_cr_bits bits;
- } control_regs;
- /*
- * The single_step and instruction_fetch bits are obsolete,
- * the kernel always sets them to zero. To enable single
- * stepping use ptrace(PTRACE_SINGLESTEP) instead.
- */
- unsigned single_step : 1;
- unsigned instruction_fetch : 1;
- unsigned : 30;
- /*
- * These addresses are copied into cr10 & cr11 if single
- * stepping is switched off
- */
- unsigned long starting_addr;
- unsigned long ending_addr;
- union {
- per_lowcore_words words;
- per_lowcore_bits bits;
- } lowcore;
-} per_struct;
-
-typedef struct {
- unsigned int len;
- unsigned long kernel_addr;
- unsigned long process_addr;
-} ptrace_area;
-
-/*
- * S/390 specific non posix ptrace requests. I chose unusual values so
- * they are unlikely to clash with future ptrace definitions.
- */
-#define PTRACE_PEEKUSR_AREA 0x5000
-#define PTRACE_POKEUSR_AREA 0x5001
-#define PTRACE_PEEKTEXT_AREA 0x5002
-#define PTRACE_PEEKDATA_AREA 0x5003
-#define PTRACE_POKETEXT_AREA 0x5004
-#define PTRACE_POKEDATA_AREA 0x5005
-#define PTRACE_GET_LAST_BREAK 0x5006
-#define PTRACE_PEEK_SYSTEM_CALL 0x5007
-#define PTRACE_POKE_SYSTEM_CALL 0x5008
-#define PTRACE_ENABLE_TE 0x5009
-#define PTRACE_DISABLE_TE 0x5010
-#define PTRACE_TE_ABORT_RAND 0x5011
-
-/*
- * The numbers chosen here are somewhat arbitrary but absolutely MUST
- * not overlap with any of the number assigned in <linux/ptrace.h>.
- */
-#define PTRACE_SINGLEBLOCK 12 /* resume execution until next branch */
-
-/*
- * PT_PROT definition is loosely based on hppa bsd definition in
- * gdb/hppab-nat.c
- */
-#define PTRACE_PROT 21
-
-typedef enum {
- ptprot_set_access_watchpoint,
- ptprot_set_write_watchpoint,
- ptprot_disable_watchpoint
-} ptprot_flags;
-
-typedef struct {
- unsigned long lowaddr;
- unsigned long hiaddr;
- ptprot_flags prot;
-} ptprot_area;
-
-/* Sequence of bytes for breakpoint illegal instruction. */
-#define S390_BREAKPOINT {0x0,0x1}
-#define S390_BREAKPOINT_U16 ((__u16)0x0001)
-#define S390_SYSCALL_OPCODE ((__u16)0x0a00)
-#define S390_SYSCALL_SIZE 2
-
-/*
- * The user_regs_struct defines the way the user registers are
- * store on the stack for signal handling.
- */
-struct user_regs_struct {
- psw_t psw;
- unsigned long gprs[NUM_GPRS];
- unsigned int acrs[NUM_ACRS];
- unsigned long orig_gpr2;
- s390_fp_regs fp_regs;
- /*
- * These per registers are in here so that gdb can modify them
- * itself as there is no "official" ptrace interface for hardware
- * watchpoints. This is the way intel does it.
- */
- per_struct per_info;
- unsigned long ieee_instruction_pointer; /* obsolete, always 0 */
-};
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _UAPI_S390_PTRACE_H */
case '"':
fputs("\\\"", self->out);
break;
- case '\'':
- fputs("\\\'", self->out);
- break;
default:
putc(*str, self->out);
}
/fixdep
/resolve_btfids
/libbpf/
+/libsubcmd/
* different maps if key/value layout matches across maps.
* Every bpf_timer_set_callback() can have different callback_fn.
*
+ * *flags* can be one of:
+ *
+ * **BPF_F_TIMER_ABS**
+ * Start the timer in absolute expire value instead of the
+ * default relative one.
+ *
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
* Description
* Write *len* bytes from *src* into *dst*, starting from *offset*
* into *dst*.
- * *flags* is currently unused.
+ *
+ * *flags* must be 0 except for skb-type dynptrs.
+ *
+ * For skb-type dynptrs:
+ * * All data slices of the dynptr are automatically
+ * invalidated after **bpf_dynptr_write**\ (). This is
+ * because writing may pull the skb and change the
+ * underlying packet buffer.
+ *
+ * * For *flags*, please see the flags accepted by
+ * **bpf_skb_store_bytes**\ ().
* Return
* 0 on success, -E2BIG if *offset* + *len* exceeds the length
* of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
- * is a read-only dynptr or if *flags* is not 0.
+ * is a read-only dynptr or if *flags* is not correct. For skb-type dynptrs,
+ * other errors correspond to errors returned by **bpf_skb_store_bytes**\ ().
*
* void *bpf_dynptr_data(const struct bpf_dynptr *ptr, u32 offset, u32 len)
* Description
*
* *len* must be a statically known value. The returned data slice
* is invalidated whenever the dynptr is invalidated.
+ *
+ * skb and xdp type dynptrs may not use bpf_dynptr_data. They should
+ * instead use bpf_dynptr_slice and bpf_dynptr_slice_rdwr.
* Return
* Pointer to the underlying dynptr data, NULL if the dynptr is
* read-only, if the dynptr is invalid, or if the offset and length
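To make the skb-type dynptr write semantics described above concrete, here is a hedged BPF-side sketch (not part of the patch); it assumes the bpf_dynptr_from_skb() kfunc is available in the running kernel, and the offset written (one byte past the Ethernet header) is purely illustrative:

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    /* kfunc provided by the kernel for skb-backed dynptrs (assumed available) */
    extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
    			       struct bpf_dynptr *ptr__uninit) __ksym;

    SEC("tc")
    int rewrite_first_payload_byte(struct __sk_buff *skb)
    {
    	struct bpf_dynptr ptr;
    	__u8 byte = 0xff;

    	if (bpf_dynptr_from_skb(skb, 0, &ptr))
    		return TC_ACT_SHOT;

    	/* flags are forwarded to bpf_skb_store_bytes(); any previously
    	 * obtained data slices are invalidated by this write
    	 */
    	if (bpf_dynptr_write(&ptr, sizeof(struct ethhdr), &byte, sizeof(byte),
    			     BPF_F_RECOMPUTE_CSUM))
    		return TC_ACT_SHOT;

    	return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";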
enum bpf_core_relo_kind kind;
};
+/*
+ * Flags to control bpf_timer_start() behaviour.
+ * - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
+ * relative to current time.
+ */
+enum {
+ BPF_F_TIMER_ABS = (1ULL << 0),
+};
+
#endif /* _UAPI__LINUX_BPF_H__ */
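A short BPF-side sketch (illustrative, not part of the patch) of how BPF_F_TIMER_ABS changes bpf_timer_start() usage: the expiry is passed as an absolute time on the clock selected at bpf_timer_init() time rather than as a relative delta. The map layout and attach point are assumptions made for the example:

    #include <linux/bpf.h>
    #include <time.h>
    #include <bpf/bpf_helpers.h>

    struct elem {
    	struct bpf_timer timer;
    };

    struct {
    	__uint(type, BPF_MAP_TYPE_ARRAY);
    	__uint(max_entries, 1);
    	__type(key, int);
    	__type(value, struct elem);
    } timer_map SEC(".maps");

    static int timer_cb(void *map, int *key, struct bpf_timer *timer)
    {
    	return 0;
    }

    SEC("tracepoint/syscalls/sys_enter_getpgid")
    int start_abs_timer(void *ctx)
    {
    	int key = 0;
    	struct elem *val = bpf_map_lookup_elem(&timer_map, &key);

    	if (!val)
    		return 0;

    	bpf_timer_init(&val->timer, &timer_map, CLOCK_MONOTONIC);
    	bpf_timer_set_callback(&val->timer, timer_cb);
    	/* absolute expiry ~100ms from now on the clock chosen at init time */
    	bpf_timer_start(&val->timer, bpf_ktime_get_ns() + 100000000ULL,
    			BPF_F_TIMER_ABS);
    	return 0;
    }

    char _license[] SEC("license") = "GPL";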
IFLA_MACVLAN_MACADDR_COUNT,
IFLA_MACVLAN_BC_QUEUE_LEN,
IFLA_MACVLAN_BC_QUEUE_LEN_USED,
+ IFLA_MACVLAN_BC_CUTOFF,
__IFLA_MACVLAN_MAX,
};
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
- usdt.o
+ usdt.o zip.o
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
/*
- * common eBPF ELF operations.
+ * Common BPF ELF operations.
*
* Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
* Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
LIBBPF_API int bpf_link_get_fd_by_id_opts(__u32 id,
const struct bpf_get_fd_by_id_opts *opts);
LIBBPF_API int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len);
-/* Type-safe variants of bpf_obj_get_info_by_fd(). The callers still needs to
- * pass info_len, which should normally be
- * sizeof(struct bpf_{prog,map,btf,link}_info), in order to be compatible with
- * different libbpf and kernel versions.
+
+/**
+ * @brief **bpf_prog_get_info_by_fd()** obtains information about the BPF
+ * program corresponding to *prog_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param prog_fd BPF program file descriptor
+ * @param info pointer to **struct bpf_prog_info** that will be populated with
+ * BPF program information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
*/
LIBBPF_API int bpf_prog_get_info_by_fd(int prog_fd, struct bpf_prog_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_map_get_info_by_fd()** obtains information about the BPF
+ * map corresponding to *map_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param map_fd BPF map file descriptor
+ * @param info pointer to **struct bpf_map_info** that will be populated with
+ * BPF map information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
LIBBPF_API int bpf_map_get_info_by_fd(int map_fd, struct bpf_map_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_btf_get_info_by_fd()** obtains information about the
+ * BTF object corresponding to *btf_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param btf_fd BTF object file descriptor
+ * @param info pointer to **struct bpf_btf_info** that will be populated with
+ * BTF object information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
LIBBPF_API int bpf_btf_get_info_by_fd(int btf_fd, struct bpf_btf_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_link_get_info_by_fd()** obtains information about the BPF
+ * link corresponding to *link_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param link_fd BPF link file descriptor
+ * @param info pointer to **struct bpf_link_info** that will be populated with
+ * BPF link information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
LIBBPF_API int bpf_link_get_info_by_fd(int link_fd, struct bpf_link_info *info, __u32 *info_len);
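As a usage illustration for the doc comments above (a sketch, not part of the patch), querying a loaded program through its fd looks like this; the caller passes the size of its bpf_prog_info and gets back how many bytes the kernel filled in:

    #include <stdio.h>
    #include <string.h>
    #include <bpf/bpf.h>

    static int print_prog_identity(int prog_fd)
    {
    	struct bpf_prog_info info;
    	__u32 info_len = sizeof(info);
    	int err;

    	memset(&info, 0, sizeof(info));
    	err = bpf_prog_get_info_by_fd(prog_fd, &info, &info_len);
    	if (err)
    		return err; /* negative error code, errno is set too */

    	printf("id=%u name=%s (%u info bytes)\n", info.id, info.name, info_len);
    	return 0;
    }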
struct bpf_prog_query_opts {
#define __kconfig __attribute__((section(".kconfig")))
#define __ksym __attribute__((section(".ksyms")))
+#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
#define __kptr __attribute__((btf_type_tag("kptr")))
-#define __kptr_ref __attribute__((btf_type_tag("kptr_ref")))
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
#define __PT_PARM2_SYSCALL_REG __PT_PARM2_REG
#define __PT_PARM3_SYSCALL_REG __PT_PARM3_REG
#define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
+#define __PT_PARM5_SYSCALL_REG uregs[4]
#define __PT_PARM6_SYSCALL_REG uregs[5]
#define __PT_PARM7_SYSCALL_REG uregs[6]
* https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
*/
+/* loongarch provides struct user_pt_regs instead of struct pt_regs to userspace */
+#define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
#define __PT_PARM1_REG regs[4]
#define __PT_PARM2_REG regs[5]
#define __PT_PARM3_REG regs[6]
}
}
- err = 0;
-
if (!btf_data) {
pr_warn("failed to find '%s' ELF section in %s\n", BTF_ELF_SEC, path);
err = -ENODATA;
#include "libbpf_internal.h"
#include "hashmap.h"
#include "bpf_gen_internal.h"
+#include "zip.h"
#ifndef BPF_FS_MAGIC
#define BPF_FS_MAGIC 0xcafe4a11
progs = obj->programs;
nr_progs = obj->nr_programs;
nr_syms = symbols->d_size / sizeof(Elf64_Sym);
- sec_off = 0;
for (i = 0; i < nr_syms; i++) {
sym = elf_sym_by_idx(obj, i);
strict = !OPTS_GET(opts, relaxed_maps, false);
pin_root_path = OPTS_GET(opts, pin_root_path, NULL);
- err = err ?: bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
+ err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
err = err ?: bpf_object__init_global_data_maps(obj);
err = err ?: bpf_object__init_kconfig_map(obj);
err = err ?: bpf_object__init_struct_ops_maps(obj);
char errmsg[STRERR_BUFSIZE];
struct bpf_link_perf *link;
int prog_fd, link_fd = -1, err;
+ bool force_ioctl_attach;
if (!OPTS_VALID(opts, bpf_perf_event_opts))
return libbpf_err_ptr(-EINVAL);
link->link.dealloc = &bpf_link_perf_dealloc;
link->perf_event_fd = pfd;
- if (kernel_supports(prog->obj, FEAT_PERF_LINK)) {
+ force_ioctl_attach = OPTS_GET(opts, force_ioctl_attach, false);
+ if (kernel_supports(prog->obj, FEAT_PERF_LINK) && !force_ioctl_attach) {
DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_opts,
.perf_event.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0));
const struct bpf_kprobe_opts *opts)
{
DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
+ enum probe_attach_mode attach_mode;
char errmsg[STRERR_BUFSIZE];
char *legacy_probe = NULL;
struct bpf_link *link;
if (!OPTS_VALID(opts, bpf_kprobe_opts))
return libbpf_err_ptr(-EINVAL);
+ attach_mode = OPTS_GET(opts, attach_mode, PROBE_ATTACH_MODE_DEFAULT);
retprobe = OPTS_GET(opts, retprobe, false);
offset = OPTS_GET(opts, offset, 0);
pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);
legacy = determine_kprobe_perf_type() < 0;
+ switch (attach_mode) {
+ case PROBE_ATTACH_MODE_LEGACY:
+ legacy = true;
+ pe_opts.force_ioctl_attach = true;
+ break;
+ case PROBE_ATTACH_MODE_PERF:
+ if (legacy)
+ return libbpf_err_ptr(-ENOTSUP);
+ pe_opts.force_ioctl_attach = true;
+ break;
+ case PROBE_ATTACH_MODE_LINK:
+ if (legacy || !kernel_supports(prog->obj, FEAT_PERF_LINK))
+ return libbpf_err_ptr(-ENOTSUP);
+ break;
+ case PROBE_ATTACH_MODE_DEFAULT:
+ break;
+ default:
+ return libbpf_err_ptr(-EINVAL);
+ }
+
if (!legacy) {
pfd = perf_event_open_probe(false /* uprobe */, retprobe,
func_name, offset,
return NULL;
}
-/* Find offset of function name in object specified by path. "name" matches
- * symbol name or name@@LIB for library functions.
+/* Find offset of function name in the provided ELF object. "binary_path" is
+ * the path to the ELF binary represented by "elf", and is only used for
+ * error reporting. "name" matches symbol name or name@@LIB for library
+ * functions.
*/
-static long elf_find_func_offset(const char *binary_path, const char *name)
+static long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
{
- int fd, i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB };
+ int i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB };
bool is_shared_lib, is_name_qualified;
- char errmsg[STRERR_BUFSIZE];
long ret = -ENOENT;
size_t name_len;
GElf_Ehdr ehdr;
- Elf *elf;
- fd = open(binary_path, O_RDONLY | O_CLOEXEC);
- if (fd < 0) {
- ret = -errno;
- pr_warn("failed to open %s: %s\n", binary_path,
- libbpf_strerror_r(ret, errmsg, sizeof(errmsg)));
- return ret;
- }
- elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
- if (!elf) {
- pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1));
- close(fd);
- return -LIBBPF_ERRNO__FORMAT;
- }
if (!gelf_getehdr(elf, &ehdr)) {
pr_warn("elf: failed to get ehdr from %s: %s\n", binary_path, elf_errmsg(-1));
ret = -LIBBPF_ERRNO__FORMAT;
/* Does name specify "@@LIB"? */
is_name_qualified = strstr(name, "@@") != NULL;
- /* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if
+ /* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if
* a binary is stripped, it may only have SHT_DYNSYM, and a fully-statically
* linked binary may not have SHT_DYMSYM, so absence of a section should not be
* reported as a warning/error.
}
}
out:
+ return ret;
+}
+
+/* Find offset of function name in ELF object specified by path. "name" matches
+ * symbol name or name@@LIB for library functions.
+ */
+static long elf_find_func_offset_from_file(const char *binary_path, const char *name)
+{
+ char errmsg[STRERR_BUFSIZE];
+ long ret = -ENOENT;
+ Elf *elf;
+ int fd;
+
+ fd = open(binary_path, O_RDONLY | O_CLOEXEC);
+ if (fd < 0) {
+ ret = -errno;
+ pr_warn("failed to open %s: %s\n", binary_path,
+ libbpf_strerror_r(ret, errmsg, sizeof(errmsg)));
+ return ret;
+ }
+ elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
+ if (!elf) {
+ pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1));
+ close(fd);
+ return -LIBBPF_ERRNO__FORMAT;
+ }
+
+ ret = elf_find_func_offset(elf, binary_path, name);
elf_end(elf);
close(fd);
return ret;
}
+/* Find offset of function name in archive specified by path. Currently
+ * supported are .zip files that do not compress their contents, as used on
+ * Android in the form of APKs, for example. "file_name" is the name of the ELF
+ * file inside the archive. "func_name" matches symbol name or name@@LIB for
+ * library functions.
+ *
+ * An overview of the APK format is provided here:
+ * https://en.wikipedia.org/w/index.php?title=Apk_(file_format)&oldid=1139099120#Package_contents
+ */
+static long elf_find_func_offset_from_archive(const char *archive_path, const char *file_name,
+ const char *func_name)
+{
+ struct zip_archive *archive;
+ struct zip_entry entry;
+ long ret;
+ Elf *elf;
+
+ archive = zip_archive_open(archive_path);
+ if (IS_ERR(archive)) {
+ ret = PTR_ERR(archive);
+ pr_warn("zip: failed to open %s: %ld\n", archive_path, ret);
+ return ret;
+ }
+
+ ret = zip_archive_find_entry(archive, file_name, &entry);
+ if (ret) {
+ pr_warn("zip: could not find archive member %s in %s: %ld\n", file_name,
+ archive_path, ret);
+ goto out;
+ }
+ pr_debug("zip: found entry for %s in %s at 0x%lx\n", file_name, archive_path,
+ (unsigned long)entry.data_offset);
+
+ if (entry.compression) {
+ pr_warn("zip: entry %s of %s is compressed and cannot be handled\n", file_name,
+ archive_path);
+ ret = -LIBBPF_ERRNO__FORMAT;
+ goto out;
+ }
+
+ elf = elf_memory((void *)entry.data, entry.data_length);
+ if (!elf) {
+ pr_warn("elf: could not read elf file %s from %s: %s\n", file_name, archive_path,
+ elf_errmsg(-1));
+ ret = -LIBBPF_ERRNO__LIBELF;
+ goto out;
+ }
+
+ ret = elf_find_func_offset(elf, file_name, func_name);
+ if (ret > 0) {
+ pr_debug("elf: symbol address match for %s of %s in %s: 0x%x + 0x%lx = 0x%lx\n",
+ func_name, file_name, archive_path, entry.data_offset, ret,
+ ret + entry.data_offset);
+ ret += entry.data_offset;
+ }
+ elf_end(elf);
+
+out:
+ zip_archive_close(archive);
+ return ret;
+}
+
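A hedged usage sketch (not part of the patch) of the "archive!/member" convention handled above, attaching a uprobe to a function inside an ELF shipped in an APK. "prog" is assumed to be an already-loaded uprobe program, and the paths and symbol name are made up for illustration:

    #include <errno.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *attach_uprobe_in_apk(struct bpf_program *prog)
    {
    	LIBBPF_OPTS(bpf_uprobe_opts, opts,
    		.func_name = "Java_com_example_Native_doWork", /* hypothetical symbol */
    	);

    	/* "!/" separates the zip archive path from the ELF member inside it */
    	return bpf_program__attach_uprobe_opts(prog, -1 /* any process */,
    					       "/data/app/com.example/base.apk!/lib/arm64-v8a/libnative.so",
    					       0 /* func_offset */, &opts);
    }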
static const char *arch_specific_lib_paths(void)
{
/*
const char *binary_path, size_t func_offset,
const struct bpf_uprobe_opts *opts)
{
- DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
+ const char *archive_path = NULL, *archive_sep = NULL;
char errmsg[STRERR_BUFSIZE], *legacy_probe = NULL;
- char full_binary_path[PATH_MAX];
+ DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
+ enum probe_attach_mode attach_mode;
+ char full_path[PATH_MAX];
struct bpf_link *link;
size_t ref_ctr_off;
int pfd, err;
if (!OPTS_VALID(opts, bpf_uprobe_opts))
return libbpf_err_ptr(-EINVAL);
+ attach_mode = OPTS_GET(opts, attach_mode, PROBE_ATTACH_MODE_DEFAULT);
retprobe = OPTS_GET(opts, retprobe, false);
ref_ctr_off = OPTS_GET(opts, ref_ctr_offset, 0);
pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);
if (!binary_path)
return libbpf_err_ptr(-EINVAL);
- if (!strchr(binary_path, '/')) {
- err = resolve_full_path(binary_path, full_binary_path,
- sizeof(full_binary_path));
+ /* Check if "binary_path" refers to an archive. */
+ archive_sep = strstr(binary_path, "!/");
+ if (archive_sep) {
+ full_path[0] = '\0';
+ libbpf_strlcpy(full_path, binary_path,
+ min(sizeof(full_path), (size_t)(archive_sep - binary_path + 1)));
+ archive_path = full_path;
+ binary_path = archive_sep + 2;
+ } else if (!strchr(binary_path, '/')) {
+ err = resolve_full_path(binary_path, full_path, sizeof(full_path));
if (err) {
pr_warn("prog '%s': failed to resolve full path for '%s': %d\n",
prog->name, binary_path, err);
return libbpf_err_ptr(err);
}
- binary_path = full_binary_path;
+ binary_path = full_path;
}
func_name = OPTS_GET(opts, func_name, NULL);
if (func_name) {
long sym_off;
- sym_off = elf_find_func_offset(binary_path, func_name);
+ if (archive_path) {
+ sym_off = elf_find_func_offset_from_archive(archive_path, binary_path,
+ func_name);
+ binary_path = archive_path;
+ } else {
+ sym_off = elf_find_func_offset_from_file(binary_path, func_name);
+ }
if (sym_off < 0)
return libbpf_err_ptr(sym_off);
func_offset += sym_off;
}
legacy = determine_uprobe_perf_type() < 0;
+ switch (attach_mode) {
+ case PROBE_ATTACH_MODE_LEGACY:
+ legacy = true;
+ pe_opts.force_ioctl_attach = true;
+ break;
+ case PROBE_ATTACH_MODE_PERF:
+ if (legacy)
+ return libbpf_err_ptr(-ENOTSUP);
+ pe_opts.force_ioctl_attach = true;
+ break;
+ case PROBE_ATTACH_MODE_LINK:
+ if (legacy || !kernel_supports(prog->obj, FEAT_PERF_LINK))
+ return libbpf_err_ptr(-ENOTSUP);
+ break;
+ case PROBE_ATTACH_MODE_DEFAULT:
+ break;
+ default:
+ return libbpf_err_ptr(-EINVAL);
+ }
+
if (!legacy) {
pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
func_offset, pid, ref_ctr_off);
bpf_program__attach(const struct bpf_program *prog);
struct bpf_perf_event_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* custom user-provided value fetchable through bpf_get_attach_cookie() */
__u64 bpf_cookie;
+ /* don't use a BPF link when attaching the BPF program */
+ bool force_ioctl_attach;
+ size_t :0;
};
-#define bpf_perf_event_opts__last_field bpf_cookie
+#define bpf_perf_event_opts__last_field force_ioctl_attach
LIBBPF_API struct bpf_link *
bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);
bpf_program__attach_perf_event_opts(const struct bpf_program *prog, int pfd,
const struct bpf_perf_event_opts *opts);
+/**
+ * enum probe_attach_mode - the mode to attach kprobe/uprobe
+ *
+ * force libbpf to attach a kprobe/uprobe in a specific mode; -ENOTSUP will
+ * be returned if the requested mode is not supported by the kernel.
+ */
+enum probe_attach_mode {
+ /* attach probe in latest supported mode by kernel */
+ PROBE_ATTACH_MODE_DEFAULT = 0,
+ /* attach probe in legacy mode, using debugfs/tracefs */
+ PROBE_ATTACH_MODE_LEGACY,
+ /* create perf event with perf_event_open() syscall */
+ PROBE_ATTACH_MODE_PERF,
+ /* attach probe with BPF link */
+ PROBE_ATTACH_MODE_LINK,
+};
+
struct bpf_kprobe_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* custom user-provided value fetchable through bpf_get_attach_cookie() */
__u64 bpf_cookie;
size_t offset;
/* kprobe is return probe */
bool retprobe;
+ /* kprobe attach mode */
+ enum probe_attach_mode attach_mode;
size_t :0;
};
-#define bpf_kprobe_opts__last_field retprobe
+#define bpf_kprobe_opts__last_field attach_mode
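For illustration (not part of the patch), forcing a particular kprobe attach mode through the new field; if the kernel cannot satisfy the requested mode, the attach fails with -ENOTSUP as documented above. "prog" is assumed to be an already-loaded kprobe program and the target symbol is illustrative:

    #include <errno.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *attach_kprobe_legacy(struct bpf_program *prog)
    {
    	LIBBPF_OPTS(bpf_kprobe_opts, opts,
    		.attach_mode = PROBE_ATTACH_MODE_LEGACY, /* force debugfs/tracefs path */
    	);

    	/* returns NULL and sets errno (e.g. ENOTSUP) if the mode is unavailable */
    	return bpf_program__attach_kprobe_opts(prog, "do_sys_openat2", &opts);
    }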
LIBBPF_API struct bpf_link *
bpf_program__attach_kprobe(const struct bpf_program *prog, bool retprobe,
const struct bpf_kprobe_multi_opts *opts);
struct bpf_ksyscall_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* custom user-provided value fetchable through bpf_get_attach_cookie() */
__u64 bpf_cookie;
const struct bpf_ksyscall_opts *opts);
struct bpf_uprobe_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* offset of kernel reference counted USDT semaphore, added in
* a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
* binary_path.
*/
const char *func_name;
+ /* uprobe attach mode */
+ enum probe_attach_mode attach_mode;
size_t :0;
};
-#define bpf_uprobe_opts__last_field func_name
+#define bpf_uprobe_opts__last_field attach_mode
/**
* @brief **bpf_program__attach_uprobe()** attaches a BPF program
const struct bpf_usdt_opts *opts);
struct bpf_tracepoint_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* custom user-provided value fetchable through bpf_get_attach_cookie() */
__u64 bpf_cookie;
typedef int (*ring_buffer_sample_fn)(void *ctx, void *data, size_t size);
struct ring_buffer_opts {
- size_t sz; /* size of this struct, for forward/backward compatiblity */
+ size_t sz; /* size of this struct, for forward/backward compatibility */
};
#define ring_buffer_opts__last_field sz
bpf_object__destroy_subskeleton(struct bpf_object_subskeleton *s);
struct gen_loader_opts {
- size_t sz; /* size of this struct, for forward/backward compatiblity */
+ size_t sz; /* size of this struct, for forward/backward compatibility */
const char *data;
const char *insns;
__u32 data_sz;
};
struct bpf_linker_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
};
#define bpf_linker_opts__last_field sz
struct bpf_linker_file_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
};
#define bpf_linker_file_opts__last_field sz
struct bpf_link **link);
struct libbpf_prog_handler_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* User-provided value that is passed to prog_setup_fn,
* prog_prepare_load_fn, and prog_attach_fn callbacks. Allows user to
static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *obj)
{
struct src_sec *src_symtab = &obj->secs[obj->symtab_sec_idx];
- struct dst_sec *dst_symtab;
int i, err;
for (i = 1; i < obj->sec_cnt; i++) {
return -1;
}
- /* add_dst_sec() above could have invalidated linker->secs */
- dst_symtab = &linker->secs[linker->symtab_sec_idx];
-
/* shdr->sh_link points to SYMTAB */
dst_sec->shdr->sh_link = linker->symtab_sec_idx;
dst_rel = dst_sec->raw_data + src_sec->dst_off;
n = src_sec->shdr->sh_size / src_sec->shdr->sh_entsize;
for (j = 0; j < n; j++, src_rel++, dst_rel++) {
- size_t src_sym_idx = ELF64_R_SYM(src_rel->r_info);
- size_t sym_type = ELF64_R_TYPE(src_rel->r_info);
- Elf64_Sym *src_sym, *dst_sym;
- size_t dst_sym_idx;
+ size_t src_sym_idx, dst_sym_idx, sym_type;
+ Elf64_Sym *src_sym;
src_sym_idx = ELF64_R_SYM(src_rel->r_info);
src_sym = src_symtab->data->d_buf + sizeof(*src_sym) * src_sym_idx;
dst_sym_idx = obj->sym_map[src_sym_idx];
- dst_sym = dst_symtab->raw_data + sizeof(*dst_sym) * dst_sym_idx;
dst_rel->r_offset += src_linked_sec->dst_off;
sym_type = ELF64_R_TYPE(src_rel->r_info);
dst_rel->r_info = ELF64_R_INFO(dst_sym_idx, sym_type);
return 0;
err = libbpf_netlink_resolve_genl_family_id("netdev", sizeof("netdev"), &id);
- if (err < 0)
+ if (err < 0) {
+ if (err == -ENOENT) {
+ opts->feature_flags = 0;
+ goto skip_feature_flags;
+ }
return libbpf_err(err);
+ }
memset(&req, 0, sizeof(req));
req.nh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);
opts->feature_flags = md.flags;
+skip_feature_flags:
return 0;
}
if (level <= 0)
return -EINVAL;
- local_t = btf_type_by_id(local_btf, local_id);
- targ_t = btf_type_by_id(targ_btf, targ_id);
-
recur:
depth--;
if (depth < 0)
return 0;
}
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg);
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz);
static int parse_usdt_spec(struct usdt_spec *spec, const struct usdt_note *note, __u64 usdt_cookie)
{
+ struct usdt_arg_spec *arg;
const char *s;
- int len;
+ int arg_sz, len;
spec->usdt_cookie = usdt_cookie;
spec->arg_cnt = 0;
return -E2BIG;
}
- len = parse_usdt_arg(s, spec->arg_cnt, &spec->args[spec->arg_cnt]);
+ arg = &spec->args[spec->arg_cnt];
+ len = parse_usdt_arg(s, spec->arg_cnt, arg, &arg_sz);
if (len < 0)
return len;
+ arg->arg_signed = arg_sz < 0;
+ if (arg_sz < 0)
+ arg_sz = -arg_sz;
+
+ switch (arg_sz) {
+ case 1: case 2: case 4: case 8:
+ arg->arg_bitshift = 64 - arg_sz * 8;
+ break;
+ default:
+ pr_warn("usdt: unsupported arg #%d (spec '%s') size: %d\n",
+ spec->arg_cnt, s, arg_sz);
+ return -EINVAL;
+ }
+
s += len;
spec->arg_cnt++;
}
return -ENOENT;
}
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
char reg_name[16];
- int arg_sz, len, reg_off;
+ int len, reg_off;
long off;
- if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n", &arg_sz, &off, reg_name, &len) == 3) {
+ if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n", arg_sz, &off, reg_name, &len) == 3) {
/* Memory dereference case, e.g., -4@-20(%rbp) */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = off;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ ( %%%15[^)] ) %n", &arg_sz, reg_name, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ ( %%%15[^)] ) %n", arg_sz, reg_name, &len) == 2) {
/* Memory dereference case without offset, e.g., 8@(%rsp) */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = 0;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ %%%15s %n", &arg_sz, reg_name, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %%%15s %n", arg_sz, reg_name, &len) == 2) {
/* Register read case, e.g., -4@%eax */
arg->arg_type = USDT_ARG_REG;
arg->val_off = 0;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ $%ld %n", &arg_sz, &off, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ $%ld %n", arg_sz, &off, &len) == 2) {
/* Constant value case, e.g., 4@$71 */
arg->arg_type = USDT_ARG_CONST;
arg->val_off = off;
return -EINVAL;
}
- arg->arg_signed = arg_sz < 0;
- if (arg_sz < 0)
- arg_sz = -arg_sz;
-
- switch (arg_sz) {
- case 1: case 2: case 4: case 8:
- arg->arg_bitshift = 64 - arg_sz * 8;
- break;
- default:
- pr_warn("usdt: unsupported arg #%d (spec '%s') size: %d\n",
- arg_num, arg_str, arg_sz);
- return -EINVAL;
- }
-
return len;
}
/* Do not support __s390__ for now, since user_pt_regs is broken with -m31. */
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
unsigned int reg;
- int arg_sz, len;
+ int len;
long off;
- if (sscanf(arg_str, " %d @ %ld ( %%r%u ) %n", &arg_sz, &off, ®, &len) == 3) {
+ if (sscanf(arg_str, " %d @ %ld ( %%r%u ) %n", arg_sz, &off, ®, &len) == 3) {
/* Memory dereference case, e.g., -2@-28(%r15) */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = off;
return -EINVAL;
}
arg->reg_off = offsetof(user_pt_regs, gprs[reg]);
- } else if (sscanf(arg_str, " %d @ %%r%u %n", &arg_sz, ®, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %%r%u %n", arg_sz, ®, &len) == 2) {
/* Register read case, e.g., -8@%r0 */
arg->arg_type = USDT_ARG_REG;
arg->val_off = 0;
return -EINVAL;
}
arg->reg_off = offsetof(user_pt_regs, gprs[reg]);
- } else if (sscanf(arg_str, " %d @ %ld %n", &arg_sz, &off, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %ld %n", arg_sz, &off, &len) == 2) {
/* Constant value case, e.g., 4@71 */
arg->arg_type = USDT_ARG_CONST;
arg->val_off = off;
return -EINVAL;
}
- arg->arg_signed = arg_sz < 0;
- if (arg_sz < 0)
- arg_sz = -arg_sz;
-
- switch (arg_sz) {
- case 1: case 2: case 4: case 8:
- arg->arg_bitshift = 64 - arg_sz * 8;
- break;
- default:
- pr_warn("usdt: unsupported arg #%d (spec '%s') size: %d\n",
- arg_num, arg_str, arg_sz);
- return -EINVAL;
- }
-
return len;
}
return -ENOENT;
}
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
char reg_name[16];
- int arg_sz, len, reg_off;
+ int len, reg_off;
long off;
- if (sscanf(arg_str, " %d @ \[ %15[a-z0-9], %ld ] %n", &arg_sz, reg_name, &off, &len) == 3) {
+ if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] , %ld ] %n", arg_sz, reg_name, &off, &len) == 3) {
/* Memory dereference case, e.g., -4@[sp, 96] */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = off;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] ] %n", &arg_sz, reg_name, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] ] %n", arg_sz, reg_name, &len) == 2) {
/* Memory dereference case, e.g., -4@[sp] */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = 0;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ %ld %n", &arg_sz, &off, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %ld %n", arg_sz, &off, &len) == 2) {
/* Constant value case, e.g., 4@5 */
arg->arg_type = USDT_ARG_CONST;
arg->val_off = off;
arg->reg_off = 0;
- } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", &arg_sz, reg_name, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", arg_sz, reg_name, &len) == 2) {
/* Register read case, e.g., -8@x4 */
arg->arg_type = USDT_ARG_REG;
arg->val_off = 0;
return -EINVAL;
}
- arg->arg_signed = arg_sz < 0;
- if (arg_sz < 0)
- arg_sz = -arg_sz;
-
- switch (arg_sz) {
- case 1: case 2: case 4: case 8:
- arg->arg_bitshift = 64 - arg_sz * 8;
- break;
- default:
- pr_warn("usdt: unsupported arg #%d (spec '%s') size: %d\n",
- arg_num, arg_str, arg_sz);
- return -EINVAL;
- }
-
return len;
}
return -ENOENT;
}
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
char reg_name[16];
- int arg_sz, len, reg_off;
+ int len, reg_off;
long off;
- if (sscanf(arg_str, " %d @ %ld ( %15[a-z0-9] ) %n", &arg_sz, &off, reg_name, &len) == 3) {
+ if (sscanf(arg_str, " %d @ %ld ( %15[a-z0-9] ) %n", arg_sz, &off, reg_name, &len) == 3) {
/* Memory dereference case, e.g., -8@-88(s0) */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = off;
if (reg_off < 0)
return reg_off;
arg->reg_off = reg_off;
- } else if (sscanf(arg_str, " %d @ %ld %n", &arg_sz, &off, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %ld %n", arg_sz, &off, &len) == 2) {
/* Constant value case, e.g., 4@5 */
arg->arg_type = USDT_ARG_CONST;
arg->val_off = off;
arg->reg_off = 0;
- } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", &arg_sz, reg_name, &len) == 2) {
+ } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", arg_sz, reg_name, &len) == 2) {
/* Register read case, e.g., -8@a1 */
arg->arg_type = USDT_ARG_REG;
arg->val_off = 0;
return -EINVAL;
}
- arg->arg_signed = arg_sz < 0;
- if (arg_sz < 0)
- arg_sz = -arg_sz;
+ return len;
+}
- switch (arg_sz) {
- case 1: case 2: case 4: case 8:
- arg->arg_bitshift = 64 - arg_sz * 8;
- break;
- default:
- pr_warn("usdt: unsupported arg #%d (spec '%s') size: %d\n",
- arg_num, arg_str, arg_sz);
+#elif defined(__arm__)
+
+static int calc_pt_regs_off(const char *reg_name)
+{
+ static struct {
+ const char *name;
+ size_t pt_regs_off;
+ } reg_map[] = {
+ { "r0", offsetof(struct pt_regs, uregs[0]) },
+ { "r1", offsetof(struct pt_regs, uregs[1]) },
+ { "r2", offsetof(struct pt_regs, uregs[2]) },
+ { "r3", offsetof(struct pt_regs, uregs[3]) },
+ { "r4", offsetof(struct pt_regs, uregs[4]) },
+ { "r5", offsetof(struct pt_regs, uregs[5]) },
+ { "r6", offsetof(struct pt_regs, uregs[6]) },
+ { "r7", offsetof(struct pt_regs, uregs[7]) },
+ { "r8", offsetof(struct pt_regs, uregs[8]) },
+ { "r9", offsetof(struct pt_regs, uregs[9]) },
+ { "r10", offsetof(struct pt_regs, uregs[10]) },
+ { "fp", offsetof(struct pt_regs, uregs[11]) },
+ { "ip", offsetof(struct pt_regs, uregs[12]) },
+ { "sp", offsetof(struct pt_regs, uregs[13]) },
+ { "lr", offsetof(struct pt_regs, uregs[14]) },
+ { "pc", offsetof(struct pt_regs, uregs[15]) },
+ };
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(reg_map); i++) {
+ if (strcmp(reg_name, reg_map[i].name) == 0)
+ return reg_map[i].pt_regs_off;
+ }
+
+ pr_warn("usdt: unrecognized register '%s'\n", reg_name);
+ return -ENOENT;
+}
+
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
+{
+ char reg_name[16];
+ int len, reg_off;
+ long off;
+
+ if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] , #%ld ] %n",
+ arg_sz, reg_name, &off, &len) == 3) {
+ /* Memory dereference case, e.g., -4@[fp, #96] */
+ arg->arg_type = USDT_ARG_REG_DEREF;
+ arg->val_off = off;
+ reg_off = calc_pt_regs_off(reg_name);
+ if (reg_off < 0)
+ return reg_off;
+ arg->reg_off = reg_off;
+ } else if (sscanf(arg_str, " %d @ \[ %15[a-z0-9] ] %n", arg_sz, reg_name, &len) == 2) {
+ /* Memory dereference case, e.g., -4@[sp] */
+ arg->arg_type = USDT_ARG_REG_DEREF;
+ arg->val_off = 0;
+ reg_off = calc_pt_regs_off(reg_name);
+ if (reg_off < 0)
+ return reg_off;
+ arg->reg_off = reg_off;
+ } else if (sscanf(arg_str, " %d @ #%ld %n", arg_sz, &off, &len) == 2) {
+ /* Constant value case, e.g., 4@#5 */
+ arg->arg_type = USDT_ARG_CONST;
+ arg->val_off = off;
+ arg->reg_off = 0;
+ } else if (sscanf(arg_str, " %d @ %15[a-z0-9] %n", arg_sz, reg_name, &len) == 2) {
+ /* Register read case, e.g., -8@r4 */
+ arg->arg_type = USDT_ARG_REG;
+ arg->val_off = 0;
+ reg_off = calc_pt_regs_off(reg_name);
+ if (reg_off < 0)
+ return reg_off;
+ arg->reg_off = reg_off;
+ } else {
+ pr_warn("usdt: unrecognized arg #%d spec '%s'\n", arg_num, arg_str);
return -EINVAL;
}
#else
-static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg)
+static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
pr_warn("usdt: libbpf doesn't support USDTs on current architecture\n");
return -ENOTSUP;
--- /dev/null
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/*
+ * Routines for dealing with .zip archives.
+ *
+ * Copyright (c) Meta Platforms, Inc. and affiliates.
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "libbpf_internal.h"
+#include "zip.h"
+
+/* Specification of ZIP file format can be found here:
+ * https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
+ * For a high level overview of the structure of a ZIP file see
+ * sections 4.3.1 - 4.3.6.
+ *
+ * Data structures appearing in ZIP files do not contain any
+ * padding and they might be misaligned. To allow us to safely
+ * operate on pointers to such structures and their members, we
+ * declare the types as packed.
+ */
+
+#define END_OF_CD_RECORD_MAGIC 0x06054b50
+
+/* See section 4.3.16 of the spec. */
+struct end_of_cd_record {
+ /* Magic value equal to END_OF_CD_RECORD_MAGIC */
+ __u32 magic;
+
+ /* Number of the file containing this structure or 0xFFFF if ZIP64 archive.
+ * A zip archive might span multiple files (disks).
+ */
+ __u16 this_disk;
+
+ /* Number of the file containing the beginning of the central directory or
+ * 0xFFFF if ZIP64 archive.
+ */
+ __u16 cd_disk;
+
+ /* Number of central directory records on this disk or 0xFFFF if ZIP64
+ * archive.
+ */
+ __u16 cd_records;
+
+ /* Number of central directory records on all disks or 0xFFFF if ZIP64
+ * archive.
+ */
+ __u16 cd_records_total;
+
+ /* Size of the central directory record or 0xFFFFFFFF if ZIP64 archive. */
+ __u32 cd_size;
+
+ /* Offset of the central directory from the beginning of the archive or
+ * 0xFFFFFFFF if ZIP64 archive.
+ */
+ __u32 cd_offset;
+
+ /* Length of comment data following end of central directory record. */
+ __u16 comment_length;
+
+ /* Up to 64k of arbitrary bytes. */
+ /* uint8_t comment[comment_length] */
+} __attribute__((packed));
+
+#define CD_FILE_HEADER_MAGIC 0x02014b50
+#define FLAG_ENCRYPTED (1 << 0)
+#define FLAG_HAS_DATA_DESCRIPTOR (1 << 3)
+
+/* See section 4.3.12 of the spec. */
+struct cd_file_header {
+ /* Magic value equal to CD_FILE_HEADER_MAGIC. */
+ __u32 magic;
+ __u16 version;
+ /* Minimum zip version needed to extract the file. */
+ __u16 min_version;
+ __u16 flags;
+ __u16 compression;
+ __u16 last_modified_time;
+ __u16 last_modified_date;
+ __u32 crc;
+ __u32 compressed_size;
+ __u32 uncompressed_size;
+ __u16 file_name_length;
+ __u16 extra_field_length;
+ __u16 file_comment_length;
+ /* Number of the disk where the file starts or 0xFFFF if ZIP64 archive. */
+ __u16 disk;
+ __u16 internal_attributes;
+ __u32 external_attributes;
+ /* Offset from the start of the disk containing the local file header to the
+ * start of the local file header.
+ */
+ __u32 offset;
+} __attribute__((packed));
+
+#define LOCAL_FILE_HEADER_MAGIC 0x04034b50
+
+/* See section 4.3.7 of the spec. */
+struct local_file_header {
+ /* Magic value equal to LOCAL_FILE_HEADER_MAGIC. */
+ __u32 magic;
+ /* Minimum zip version needed to extract the file. */
+ __u16 min_version;
+ __u16 flags;
+ __u16 compression;
+ __u16 last_modified_time;
+ __u16 last_modified_date;
+ __u32 crc;
+ __u32 compressed_size;
+ __u32 uncompressed_size;
+ __u16 file_name_length;
+ __u16 extra_field_length;
+} __attribute__((packed));
+
+struct zip_archive {
+ void *data;
+ __u32 size;
+ __u32 cd_offset;
+ __u32 cd_records;
+};
+
+static void *check_access(struct zip_archive *archive, __u32 offset, __u32 size)
+{
+ if (offset + size > archive->size || offset > offset + size)
+ return NULL;
+
+ return archive->data + offset;
+}
+
+/* Returns 0 on success, -EINVAL on error and -ENOTSUP if the eocd indicates the
+ * archive uses features which are not supported.
+ */
+static int try_parse_end_of_cd(struct zip_archive *archive, __u32 offset)
+{
+ __u16 comment_length, cd_records;
+ struct end_of_cd_record *eocd;
+ __u32 cd_offset, cd_size;
+
+ eocd = check_access(archive, offset, sizeof(*eocd));
+ if (!eocd || eocd->magic != END_OF_CD_RECORD_MAGIC)
+ return -EINVAL;
+
+ comment_length = eocd->comment_length;
+ if (offset + sizeof(*eocd) + comment_length != archive->size)
+ return -EINVAL;
+
+ cd_records = eocd->cd_records;
+ if (eocd->this_disk != 0 || eocd->cd_disk != 0 || eocd->cd_records_total != cd_records)
+ /* This is a valid eocd, but we only support single-file non-ZIP64 archives. */
+ return -ENOTSUP;
+
+ cd_offset = eocd->cd_offset;
+ cd_size = eocd->cd_size;
+ if (!check_access(archive, cd_offset, cd_size))
+ return -EINVAL;
+
+ archive->cd_offset = cd_offset;
+ archive->cd_records = cd_records;
+ return 0;
+}
+
+static int find_cd(struct zip_archive *archive)
+{
+ int64_t limit, offset;
+ int rc = -EINVAL;
+
+ if (archive->size <= sizeof(struct end_of_cd_record))
+ return -EINVAL;
+
+ /* Because the end of central directory ends with a variable length array of
+ * up to 0xFFFF bytes we can't know exactly where it starts and need to
+ * search for it at the end of the file, scanning the (limit, offset] range.
+ */
+ offset = archive->size - sizeof(struct end_of_cd_record);
+ limit = (int64_t)offset - (1 << 16);
+
+ for (; offset >= 0 && offset > limit && rc != 0; offset--) {
+ rc = try_parse_end_of_cd(archive, offset);
+ if (rc == -ENOTSUP)
+ break;
+ }
+ return rc;
+}
+
+struct zip_archive *zip_archive_open(const char *path)
+{
+ struct zip_archive *archive;
+ int err, fd;
+ off_t size;
+ void *data;
+
+ fd = open(path, O_RDONLY | O_CLOEXEC);
+ if (fd < 0)
+ return ERR_PTR(-errno);
+
+ size = lseek(fd, 0, SEEK_END);
+ if (size == (off_t)-1 || size > UINT32_MAX) {
+ close(fd);
+ return ERR_PTR(-EINVAL);
+ }
+
+ data = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
+ err = -errno;
+ close(fd);
+
+ if (data == MAP_FAILED)
+ return ERR_PTR(err);
+
+ archive = malloc(sizeof(*archive));
+ if (!archive) {
+ munmap(data, size);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ archive->data = data;
+ archive->size = size;
+
+ err = find_cd(archive);
+ if (err) {
+ munmap(data, size);
+ free(archive);
+ return ERR_PTR(err);
+ }
+
+ return archive;
+}
+
+void zip_archive_close(struct zip_archive *archive)
+{
+ munmap(archive->data, archive->size);
+ free(archive);
+}
+
+static struct local_file_header *local_file_header_at_offset(struct zip_archive *archive,
+ __u32 offset)
+{
+ struct local_file_header *lfh;
+
+ lfh = check_access(archive, offset, sizeof(*lfh));
+ if (!lfh || lfh->magic != LOCAL_FILE_HEADER_MAGIC)
+ return NULL;
+
+ return lfh;
+}
+
+static int get_entry_at_offset(struct zip_archive *archive, __u32 offset, struct zip_entry *out)
+{
+ struct local_file_header *lfh;
+ __u32 compressed_size;
+ const char *name;
+ void *data;
+
+ lfh = local_file_header_at_offset(archive, offset);
+ if (!lfh)
+ return -EINVAL;
+
+ offset += sizeof(*lfh);
+ if ((lfh->flags & FLAG_ENCRYPTED) || (lfh->flags & FLAG_HAS_DATA_DESCRIPTOR))
+ return -EINVAL;
+
+ name = check_access(archive, offset, lfh->file_name_length);
+ if (!name)
+ return -EINVAL;
+
+ offset += lfh->file_name_length;
+ if (!check_access(archive, offset, lfh->extra_field_length))
+ return -EINVAL;
+
+ offset += lfh->extra_field_length;
+ compressed_size = lfh->compressed_size;
+ data = check_access(archive, offset, compressed_size);
+ if (!data)
+ return -EINVAL;
+
+ out->compression = lfh->compression;
+ out->name_length = lfh->file_name_length;
+ out->name = name;
+ out->data = data;
+ out->data_length = compressed_size;
+ out->data_offset = offset;
+
+ return 0;
+}
+
+int zip_archive_find_entry(struct zip_archive *archive, const char *file_name,
+ struct zip_entry *out)
+{
+ size_t file_name_length = strlen(file_name);
+ __u32 i, offset = archive->cd_offset;
+
+ for (i = 0; i < archive->cd_records; ++i) {
+ __u16 cdfh_name_length, cdfh_flags;
+ struct cd_file_header *cdfh;
+ const char *cdfh_name;
+
+ cdfh = check_access(archive, offset, sizeof(*cdfh));
+ if (!cdfh || cdfh->magic != CD_FILE_HEADER_MAGIC)
+ return -EINVAL;
+
+ offset += sizeof(*cdfh);
+ cdfh_name_length = cdfh->file_name_length;
+ cdfh_name = check_access(archive, offset, cdfh_name_length);
+ if (!cdfh_name)
+ return -EINVAL;
+
+ cdfh_flags = cdfh->flags;
+ if ((cdfh_flags & FLAG_ENCRYPTED) == 0 &&
+ (cdfh_flags & FLAG_HAS_DATA_DESCRIPTOR) == 0 &&
+ file_name_length == cdfh_name_length &&
+ memcmp(file_name, archive->data + offset, file_name_length) == 0) {
+ return get_entry_at_offset(archive, cdfh->offset, out);
+ }
+
+ offset += cdfh_name_length;
+ offset += cdfh->extra_field_length;
+ offset += cdfh->file_comment_length;
+ }
+
+ return -ENOENT;
+}
--- /dev/null
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+
+#ifndef __LIBBPF_ZIP_H
+#define __LIBBPF_ZIP_H
+
+#include <linux/types.h>
+
+/* Represents an open zip archive.
+ * Only basic ZIP files are supported, in particular the following are not
+ * supported:
+ * - encryption
+ * - streaming
+ * - multi-part ZIP files
+ * - ZIP64
+ */
+struct zip_archive;
+
+/* Carries information on name, compression method, and data corresponding to a
+ * file in a zip archive.
+ */
+struct zip_entry {
+ /* Compression method as defined in pkzip spec. 0 means data is uncompressed. */
+ __u16 compression;
+
+ /* Non-null-terminated name of the file. */
+ const char *name;
+ /* Length of the file name. */
+ __u16 name_length;
+
+ /* Pointer to the file data. */
+ const void *data;
+ /* Length of the file data. */
+ __u32 data_length;
+ /* Offset of the file data within the archive. */
+ __u32 data_offset;
+};
+
+/* Open a zip archive. On error, returns a pointer encoding the error (check with IS_ERR()). */
+struct zip_archive *zip_archive_open(const char *path);
+
+/* Close a zip archive and release resources. */
+void zip_archive_close(struct zip_archive *archive);
+
+/* Look up an entry corresponding to a file in given zip archive. */
+int zip_archive_find_entry(struct zip_archive *archive, const char *name, struct zip_entry *out);
+
+#endif
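A rough sketch of how libbpf-internal code might consume this API; the function
and file names are hypothetical, and IS_ERR()/PTR_ERR()/pr_debug() are assumed
to come from libbpf_internal.h, as in zip.c:

	static int show_entry(const char *archive_path, const char *entry_name)
	{
		struct zip_archive *archive;
		struct zip_entry entry;
		int err;

		archive = zip_archive_open(archive_path);
		if (IS_ERR(archive))
			return PTR_ERR(archive);

		err = zip_archive_find_entry(archive, entry_name, &entry);
		if (!err && entry.compression == 0)
			/* only stored (uncompressed) entries are directly usable */
			pr_debug("entry '%.*s': %u bytes at offset %u\n",
				 entry.name_length, entry.name,
				 entry.data_length, entry.data_offset);

		zip_archive_close(archive);
		return err;
	}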
--- /dev/null
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+import argparse
+import json
+import pprint
+import sys
+import re
+
+from lib import YnlFamily
+
+def args_to_req(ynl, op_name, args, req):
+ """
+ Verify and convert command-line arguments to the ynl-compatible request.
+ """
+ valid_attrs = ynl.operation_do_attributes(op_name)
+ valid_attrs.remove('header') # not user-provided
+
+ if len(args) == 0:
+ print(f'no attributes, expected: {valid_attrs}')
+ sys.exit(1)
+
+ i = 0
+ while i < len(args):
+ attr = args[i]
+ if i + 1 >= len(args):
+ print(f'expected value for \'{attr}\'')
+ sys.exit(1)
+
+ if attr not in valid_attrs:
+ print(f'invalid attribute \'{attr}\', expected: {valid_attrs}')
+ sys.exit(1)
+
+ val = args[i+1]
+ i += 2
+
+ req[attr] = val
+
+def print_field(reply, *desc):
+ """
+ Pretty-print a set of fields from the reply. desc specifies the
+ fields and the optional type (bool/yn).
+ """
+ if len(desc) == 0:
+ return print_field(reply, *zip(reply.keys(), reply.keys()))
+
+ for spec in desc:
+ try:
+ field, name, tp = spec
+ except:
+ field, name = spec
+ tp = 'int'
+
+ value = reply.get(field, None)
+ if tp == 'yn':
+ value = 'yes' if value else 'no'
+ elif tp == 'bool' or isinstance(value, bool):
+ value = 'on' if value else 'off'
+ else:
+ value = 'n/a' if value is None else value
+
+ print(f'{name}: {value}')
+
+def print_speed(name, value):
+ """
+ Print out the speed-like strings from the value dict.
+ """
+ speed_re = re.compile(r'[0-9]+base[^/]+/.+')
+ speed = [ k for k, v in value.items() if v and speed_re.match(k) ]
+ print(f'{name}: {" ".join(speed)}')
+
+def doit(ynl, args, op_name):
+ """
+ Prepare request header, parse arguments and doit.
+ """
+ req = {
+ 'header': {
+ 'dev-name': args.device,
+ },
+ }
+
+ args_to_req(ynl, op_name, args.args, req)
+ ynl.do(op_name, req)
+
+def dumpit(ynl, args, op_name, extra = {}):
+ """
+ Prepare request header, parse arguments and dumpit (filtering out the
+ devices we're not interested in).
+ """
+ reply = ynl.dump(op_name, { 'header': {} } | extra)
+ if not reply:
+ return {}
+
+ for msg in reply:
+ if msg['header']['dev-name'] == args.device:
+ if args.json:
+ pprint.PrettyPrinter().pprint(msg)
+ sys.exit(0)
+ msg.pop('header', None)
+ return msg
+
+ print(f"Not supported for device {args.device}")
+ sys.exit(1)
+
+def bits_to_dict(attr):
+ """
+ Convert ynl-formatted bitmask to a dict of bit=value.
+ """
+ ret = {}
+ if 'bits' not in attr:
+ return dict()
+ if 'bit' not in attr['bits']:
+ return dict()
+ for bit in attr['bits']['bit']:
+ if bit['name'] == '':
+ continue
+ name = bit['name']
+ value = bit.get('value', False)
+ ret[name] = value
+ return ret
+
+def main():
+ parser = argparse.ArgumentParser(description='ethtool wannabe')
+ parser.add_argument('--json', action=argparse.BooleanOptionalAction)
+ parser.add_argument('--show-priv-flags', action=argparse.BooleanOptionalAction)
+ parser.add_argument('--set-priv-flags', action=argparse.BooleanOptionalAction)
+ parser.add_argument('--show-eee', action=argparse.BooleanOptionalAction)
+ parser.add_argument('--set-eee', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-a', '--show-pause', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-A', '--set-pause', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-c', '--show-coalesce', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-C', '--set-coalesce', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-g', '--show-ring', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-G', '--set-ring', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-k', '--show-features', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-K', '--set-features', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-l', '--show-channels', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-L', '--set-channels', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-T', '--show-time-stamping', action=argparse.BooleanOptionalAction)
+ parser.add_argument('-S', '--statistics', action=argparse.BooleanOptionalAction)
+ # TODO: --show-tunnels tunnel-info-get
+ # TODO: --show-module module-get
+ # TODO: --get-plca-cfg plca-get
+ # TODO: --get-plca-status plca-get-status
+ # TODO: --show-mm mm-get
+ # TODO: --show-fec fec-get
+ # TODO: --dump-module-eeprom module-eeprom-get
+ # TODO: pse-get
+ # TODO: rss-get
+ parser.add_argument('device', metavar='device', type=str)
+ parser.add_argument('args', metavar='args', type=str, nargs='*')
+ global args
+ args = parser.parse_args()
+
+ spec = '/usr/local/google/home/sdf/src/linux/Documentation/netlink/specs/ethtool.yaml'
+ schema = '/usr/local/google/home/sdf/src/linux/Documentation/netlink/genetlink-legacy.yaml'
+
+ ynl = YnlFamily(spec, schema)
+
+ if args.set_priv_flags:
+ # TODO: parse the bitmask
+ print("not implemented")
+ return
+
+ if args.set_eee:
+ return doit(ynl, args, 'eee-set')
+
+ if args.set_pause:
+ return doit(ynl, args, 'pause-set')
+
+ if args.set_coalesce:
+ return doit(ynl, args, 'coalesce-set')
+
+ if args.set_features:
+ # TODO: parse the bitmask
+ print("not implemented")
+ return
+
+ if args.set_channels:
+ return doit(ynl, args, 'channels-set')
+
+ if args.set_ring:
+ return doit(ynl, args, 'rings-set')
+
+ if args.show_priv_flags:
+ flags = bits_to_dict(dumpit(ynl, args, 'privflags-get')['flags'])
+ print_field(flags)
+ return
+
+ if args.show_eee:
+ eee = dumpit(ynl, args, 'eee-get')
+ ours = bits_to_dict(eee['modes-ours'])
+ peer = bits_to_dict(eee['modes-peer'])
+
+ if 'enabled' in eee:
+ status = 'enabled' if eee['enabled'] else 'disabled'
+ if 'active' in eee and eee['active']:
+ status = status + ' - active'
+ else:
+ status = status + ' - inactive'
+ else:
+ status = 'not supported'
+
+ print(f'EEE status: {status}')
+ print_field(eee, ('tx-lpi-timer', 'Tx LPI'))
+ print_speed('Advertised EEE link modes', ours)
+ print_speed('Link partner advertised EEE link modes', peer)
+
+ return
+
+ if args.show_pause:
+ print_field(dumpit(ynl, args, 'pause-get'),
+ ('autoneg', 'Autonegotiate', 'bool'),
+ ('rx', 'RX', 'bool'),
+ ('tx', 'TX', 'bool'))
+ return
+
+ if args.show_coalesce:
+ print_field(dumpit(ynl, args, 'coalesce-get'))
+ return
+
+ if args.show_features:
+ reply = dumpit(ynl, args, 'features-get')
+ available = bits_to_dict(reply['hw'])
+ requested = bits_to_dict(reply['wanted']).keys()
+ active = bits_to_dict(reply['active']).keys()
+ never_changed = bits_to_dict(reply['nochange']).keys()
+
+ for f in sorted(available):
+ value = "off"
+ if f in active:
+ value = "on"
+
+ fixed = ""
+ if f not in available or f in never_changed:
+ fixed = " [fixed]"
+
+ req = ""
+ if f in requested:
+ if f in active:
+ req = " [requested on]"
+ else:
+ req = " [requested off]"
+
+ print(f'{f}: {value}{fixed}{req}')
+
+ return
+
+ if args.show_channels:
+ reply = dumpit(ynl, args, 'channels-get')
+ print(f'Channel parameters for {args.device}:')
+
+ print(f'Pre-set maximums:')
+ print_field(reply,
+ ('rx-max', 'RX'),
+ ('tx-max', 'TX'),
+ ('other-max', 'Other'),
+ ('combined-max', 'Combined'))
+
+ print(f'Current hardware settings:')
+ print_field(reply,
+ ('rx-count', 'RX'),
+ ('tx-count', 'TX'),
+ ('other-count', 'Other'),
+ ('combined-count', 'Combined'))
+
+ return
+
+ if args.show_ring:
+ reply = dumpit(ynl, args, 'channels-get')
+
+ print(f'Ring parameters for {args.device}:')
+
+ print(f'Pre-set maximums:')
+ print_field(reply,
+ ('rx-max', 'RX'),
+ ('rx-mini-max', 'RX Mini'),
+ ('rx-jumbo-max', 'RX Jumbo'),
+ ('tx-max', 'TX'))
+
+ print(f'Current hardware settings:')
+ print_field(reply,
+ ('rx', 'RX'),
+ ('rx-mini', 'RX Mini'),
+ ('rx-jumbo', 'RX Jumbo'),
+ ('tx', 'TX'))
+
+ print_field(reply,
+ ('rx-buf-len', 'RX Buf Len'),
+ ('cqe-size', 'CQE Size'),
+ ('tx-push', 'TX Push', 'bool'))
+
+ return
+
+ if args.statistics:
+ print(f'NIC statistics:')
+
+ # TODO: pass id?
+ strset = dumpit(ynl, args, 'strset-get')
+ pprint.PrettyPrinter().pprint(strset)
+
+ req = {
+ 'groups': {
+ 'size': 1,
+ 'bits': {
+ 'bit':
+ # TODO: support passing the bitmask
+ #[
+ #{ 'name': 'eth-phy', 'value': True },
+ { 'name': 'eth-mac', 'value': True },
+ #{ 'name': 'eth-ctrl', 'value': True },
+ #{ 'name': 'rmon', 'value': True },
+ #],
+ },
+ },
+ }
+
+ rsp = dumpit(ynl, args, 'stats-get', req)
+ pprint.PrettyPrinter().pprint(rsp)
+ return
+
+ if args.show_time_stamping:
+ tsinfo = dumpit(ynl, args, 'tsinfo-get')
+
+ print(f'Time stamping parameters for {args.device}:')
+
+ print('Capabilities:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['timestamping'])]
+
+ print(f'PTP Hardware Clock: {tsinfo["phc-index"]}')
+
+ print('Hardware Transmit Timestamp Modes:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
+
+ print('Hardware Receive Filter Modes:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
+ return
+
+ print(f'Settings for {args.device}:')
+ linkmodes = dumpit(ynl, args, 'linkmodes-get')
+ ours = bits_to_dict(linkmodes['ours'])
+
+ supported_ports = ('TP', 'AUI', 'BNC', 'MII', 'FIBRE', 'Backplane')
+ ports = [ p for p in supported_ports if ours.get(p, False)]
+ print(f'Supported ports: [ {" ".join(ports)} ]')
+
+ print_speed('Supported link modes', ours)
+
+ print_field(ours, ('Pause', 'Supported pause frame use', 'yn'))
+ print_field(ours, ('Autoneg', 'Supports auto-negotiation', 'yn'))
+
+ supported_fec = ('None', 'PS', 'BASER', 'LLRS')
+ fec = [ p for p in supported_fec if ours.get(p, False)]
+ fec_str = " ".join(fec)
+ if len(fec) == 0:
+ fec_str = "Not reported"
+
+ print(f'Supported FEC modes: {fec_str}')
+
+ speed = 'Unknown!'
+ if linkmodes['speed'] > 0 and linkmodes['speed'] < 0xffffffff:
+ speed = f'{linkmodes["speed"]}Mb/s'
+ print(f'Speed: {speed}')
+
+ duplex_modes = {
+ 0: 'Half',
+ 1: 'Full',
+ }
+ duplex = duplex_modes.get(linkmodes["duplex"], None)
+ if not duplex:
+ duplex = f'Unknown! ({linkmodes["duplex"]})'
+ print(f'Duplex: {duplex}')
+
+ autoneg = "off"
+ if linkmodes.get("autoneg", 0) != 0:
+ autoneg = "on"
+ print(f'Auto-negotiation: {autoneg}')
+
+ ports = {
+ 0: 'Twisted Pair',
+ 1: 'AUI',
+ 2: 'MII',
+ 3: 'FIBRE',
+ 4: 'BNC',
+ 5: 'Directly Attached Copper',
+ 0xef: 'None',
+ }
+ linkinfo = dumpit(ynl, args, 'linkinfo-get')
+ print(f'Port: {ports.get(linkinfo["port"], "Other")}')
+
+ print_field(linkinfo, ('phyaddr', 'PHYAD'))
+
+ transceiver = {
+ 0: 'Internal',
+ 1: 'External',
+ }
+ print(f'Transceiver: {transceiver.get(linkinfo["transceiver"], "Unknown")}')
+
+ mdix_ctrl = {
+ 1: 'off',
+ 2: 'on',
+ }
+ mdix = mdix_ctrl.get(linkinfo['tp-mdix-ctrl'], None)
+ if mdix:
+ mdix = mdix + ' (forced)'
+ else:
+ mdix = mdix_ctrl.get(linkinfo['tp-mdix'], 'Unknown (auto)')
+ print(f'MDI-X: {mdix}')
+
+ debug = dumpit(ynl, args, 'debug-get')
+ msgmask = bits_to_dict(debug.get("msgmask", [])).keys()
+ print(f'Current message level: {" ".join(msgmask)}')
+
+ linkstate = dumpit(ynl, args, 'linkstate-get')
+ detected_states = {
+ 0: 'no',
+ 1: 'yes',
+ }
+ # TODO: wol-get
+ detected = detected_states.get(linkstate['link'], 'unknown')
+ print(f'Link detected: {detected}')
+
+if __name__ == '__main__':
+ main()
def raw_value(self):
return self.value
- def user_value(self):
- if self.enum_set['type'] == 'flags':
+ def user_value(self, as_flags=None):
+ if self.enum_set['type'] == 'flags' or as_flags:
return 1 << self.value
else:
return self.value
return True
return False
- def get_mask(self):
+ def get_mask(self, as_flags=None):
mask = 0
for e in self.entries.values():
- mask += e.user_value()
+ mask += e.user_value(as_flags)
return mask
Represents a single attribute type within an attr space.
Attributes:
- value numerical ID when serialized
- attr_set Attribute Set containing this attr
+ value numerical ID when serialized
+ attr_set Attribute Set containing this attr
+ is_multi bool, attr may repeat multiple times
+ struct_name string, name of struct definition
+ sub_type string, name of sub type
"""
def __init__(self, family, attr_set, yaml, value):
super().__init__(family, yaml)
self.value = value
self.attr_set = attr_set
self.is_multi = yaml.get('multi-attr', False)
+ self.struct_name = yaml.get('struct')
+ self.sub_type = yaml.get('sub-type')
+ self.byte_order = yaml.get('byte-order')
class SpecAttrSet(SpecElement):
return self.attrs.items()
+class SpecStructMember(SpecElement):
+ """Struct member attribute
+
+ Represents a single struct member attribute.
+
+ Attributes:
+ type string, type of the member attribute
+ """
+ def __init__(self, family, yaml):
+ super().__init__(family, yaml)
+ self.type = yaml['type']
+
+
+class SpecStruct(SpecElement):
+ """Netlink struct type
+
+ Represents a C struct definition.
+
+ Attributes:
+ members ordered list of struct members
+ """
+ def __init__(self, family, yaml):
+ super().__init__(family, yaml)
+
+ self.members = []
+ for member in yaml.get('members', []):
+ self.members.append(self.new_member(family, member))
+
+ def new_member(self, family, elem):
+ return SpecStructMember(family, elem)
+
+ def __iter__(self):
+ yield from self.members
+
+ def items(self):
+ return self.members.items()
+
+
class SpecOperation(SpecElement):
"""Netlink Operation
Information about a single Netlink operation.
Attributes:
- value numerical ID when serialized, None if req/rsp values differ
+ value numerical ID when serialized, None if req/rsp values differ
- req_value numerical ID when serialized, user -> kernel
- rsp_value numerical ID when serialized, user <- kernel
- is_call bool, whether the operation is a call
- is_async bool, whether the operation is a notification
- is_resv bool, whether the operation does not exist (it's just a reserved ID)
- attr_set attribute set name
+ req_value numerical ID when serialized, user -> kernel
+ rsp_value numerical ID when serialized, user <- kernel
+ is_call bool, whether the operation is a call
+ is_async bool, whether the operation is a notification
+ is_resv bool, whether the operation does not exist (it's just a reserved ID)
+ attr_set attribute set name
+ fixed_header string, optional name of fixed header struct
- yaml raw spec as loaded from the spec file
+ yaml raw spec as loaded from the spec file
"""
def __init__(self, family, yaml, req_value, rsp_value):
super().__init__(family, yaml)
self.is_call = 'do' in yaml or 'dump' in yaml
self.is_async = 'notify' in yaml or 'event' in yaml
self.is_resv = not self.is_async and not self.is_call
+ self.fixed_header = self.yaml.get('fixed-header', family.fixed_header)
# Added by resolve:
self.attr_set = None
msgs_by_value dict of all messages (indexed by name)
ops dict of all valid requests / responses
consts dict of all constants/enums
+ fixed_header string, optional name of family default fixed header struct
"""
def __init__(self, spec_path, schema_path=None):
with open(spec_path, "r") as stream:
def new_attr_set(self, elem):
return SpecAttrSet(self, elem)
+ def new_struct(self, elem):
+ return SpecStruct(self, elem)
+
def new_operation(self, elem, req_val, rsp_val):
return SpecOperation(self, elem, req_val, rsp_val)
self._resolution_list.append(elem)
def _dictify_ops_unified(self):
+ self.fixed_header = self.yaml['operations'].get('fixed-header')
val = 1
for elem in self.yaml['operations']['list']:
if 'value' in elem:
self.msgs[op.name] = op
def _dictify_ops_directional(self):
+ self.fixed_header = self.yaml['operations'].get('fixed-header')
req_val = rsp_val = 1
for elem in self.yaml['operations']['list']:
if 'notify' in elem:
self.msgs[op.name] = op
+ def find_operation(self, name):
+ """
+ For a given operation name, find and return operation spec.
+ """
+ for op in self.yaml['operations']['list']:
+ if name == op['name']:
+ return op
+ return None
+
def resolve(self):
self.resolve_up(super())
for elem in definitions:
if elem['type'] == 'enum' or elem['type'] == 'flags':
self.consts[elem['name']] = self.new_enum(elem)
+ elif elem['type'] == 'struct':
+ self.consts[elem['name']] = self.new_struct(elem)
else:
self.consts[elem['name']] = elem
NLMSGERR_ATTR_MISS_NEST = 6
+class NlError(Exception):
+ def __init__(self, nl_msg):
+ self.nl_msg = nl_msg
+
+ def __str__(self):
+ return f"Netlink error: {os.strerror(-self.nl_msg.error)}\n{self.nl_msg}"
+
+
class NlAttr:
+ type_formats = { 'u8' : ('B', 1), 's8' : ('b', 1),
+ 'u16': ('H', 2), 's16': ('h', 2),
+ 'u32': ('I', 4), 's32': ('i', 4),
+ 'u64': ('Q', 8), 's64': ('q', 8) }
+
def __init__(self, raw, offset):
self._len, self._type = struct.unpack("HH", raw[offset:offset + 4])
self.type = self._type & ~Netlink.NLA_TYPE_MASK
self.full_len = (self.payload_len + 3) & ~3
self.raw = raw[offset + 4:offset + self.payload_len]
+ def format_byte_order(byte_order):
+ if byte_order:
+ return ">" if byte_order == "big-endian" else "<"
+ return ""
+
def as_u8(self):
return struct.unpack("B", self.raw)[0]
- def as_u16(self):
- return struct.unpack("H", self.raw)[0]
+ def as_u16(self, byte_order=None):
+ endian = NlAttr.format_byte_order(byte_order)
+ return struct.unpack(f"{endian}H", self.raw)[0]
- def as_u32(self):
- return struct.unpack("I", self.raw)[0]
+ def as_u32(self, byte_order=None):
+ endian = NlAttr.format_byte_order(byte_order)
+ return struct.unpack(f"{endian}I", self.raw)[0]
- def as_u64(self):
- return struct.unpack("Q", self.raw)[0]
+ def as_u64(self, byte_order=None):
+ endian = NlAttr.format_byte_order(byte_order)
+ return struct.unpack(f"{endian}Q", self.raw)[0]
def as_strz(self):
return self.raw.decode('ascii')[:-1]
def as_bin(self):
return self.raw
+ def as_c_array(self, type):
+ format, _ = self.type_formats[type]
+ return [ x[0] for x in struct.iter_unpack(format, self.raw) ]
+
+ def as_struct(self, members):
+ value = dict()
+ offset = 0
+ for m in members:
+ # TODO: handle non-scalar members
+ format, size = self.type_formats[m.type]
+ decoded = struct.unpack_from(format, self.raw, offset)
+ offset += size
+ value[m.name] = decoded[0]
+ return value
+
def __repr__(self):
return f"[type:{self.type} len:{self._len}] {self.raw}"
class GenlMsg:
- def __init__(self, nl_msg):
+ def __init__(self, nl_msg, fixed_header_members=[]):
self.nl = nl_msg
self.hdr = nl_msg.raw[0:4]
- self.raw = nl_msg.raw[4:]
+ offset = 4
self.genl_cmd, self.genl_version, _ = struct.unpack("BBH", self.hdr)
+ self.fixed_header_attrs = dict()
+ for m in fixed_header_members:
+ format, size = NlAttr.type_formats[m.type]
+ decoded = struct.unpack_from(format, nl_msg.raw, offset)
+ offset += size
+ self.fixed_header_attrs[m.name] = decoded[0]
+
+ self.raw = nl_msg.raw[offset:]
self.raw_attrs = NlAttrs(self.raw)
def __repr__(self):
attr_payload += self._add_attr(attr['nested-attributes'], subname, subvalue)
elif attr["type"] == 'flag':
attr_payload = b''
+ elif attr["type"] == 'u8':
+ attr_payload = struct.pack("B", int(value))
+ elif attr["type"] == 'u16':
+ endian = NlAttr.format_byte_order(attr.byte_order)
+ attr_payload = struct.pack(f"{endian}H", int(value))
elif attr["type"] == 'u32':
- attr_payload = struct.pack("I", int(value))
+ endian = NlAttr.format_byte_order(attr.byte_order)
+ attr_payload = struct.pack(f"{endian}I", int(value))
+ elif attr["type"] == 'u64':
+ endian = NlAttr.format_byte_order(attr.byte_order)
+ attr_payload = struct.pack(f"{endian}Q", int(value))
elif attr["type"] == 'string':
attr_payload = str(value).encode('ascii') + b'\x00'
elif attr["type"] == 'binary':
value = enum.entries_by_val[raw - i].name
rsp[attr_spec['name']] = value
+ def _decode_binary(self, attr, attr_spec):
+ if attr_spec.struct_name:
+ decoded = attr.as_struct(self.consts[attr_spec.struct_name])
+ elif attr_spec.sub_type:
+ decoded = attr.as_c_array(attr_spec.sub_type)
+ else:
+ decoded = attr.as_bin()
+ return decoded
+
def _decode(self, attrs, space):
attr_space = self.attr_sets[space]
rsp = dict()
decoded = subdict
elif attr_spec['type'] == 'u8':
decoded = attr.as_u8()
+ elif attr_spec['type'] == 'u16':
+ decoded = attr.as_u16(attr_spec.byte_order)
elif attr_spec['type'] == 'u32':
- decoded = attr.as_u32()
+ decoded = attr.as_u32(attr_spec.byte_order)
elif attr_spec['type'] == 'u64':
- decoded = attr.as_u64()
+ decoded = attr.as_u64(attr_spec.byte_order)
elif attr_spec["type"] == 'string':
decoded = attr.as_strz()
elif attr_spec["type"] == 'binary':
- decoded = attr.as_bin()
+ decoded = self._decode_binary(attr, attr_spec)
elif attr_spec["type"] == 'flag':
decoded = True
else:
self.handle_ntf(nl_msg, gm)
+ def operation_do_attributes(self, name):
+ """
+ For a given operation name, find and return the supported
+ set of request attributes (as a list).
+ """
+ op = self.find_operation(name)
+ if not op:
+ return None
+
+ return op['do']['request']['attributes'].copy()
+
def _op(self, method, vals, dump=False):
op = self.ops[method]
req_seq = random.randint(1024, 65535)
msg = _genl_msg(self.family.family_id, nl_flags, op.req_value, 1, req_seq)
+ fixed_header_members = []
+ if op.fixed_header:
+ fixed_header_members = self.consts[op.fixed_header].members
+ for m in fixed_header_members:
+ value = vals.pop(m.name)
+ format, _ = NlAttr.type_formats[m.type]
+ msg += struct.pack(format, value)
for name, value in vals.items():
msg += self._add_attr(op.attr_set.name, name, value)
msg = _genl_msg_finalize(msg)
self._decode_extack(msg, op.attr_set, nl_msg.extack)
if nl_msg.error:
- print("Netlink error:", os.strerror(-nl_msg.error))
- print(nl_msg)
- return
+ raise NlError(nl_msg)
if nl_msg.done:
if nl_msg.extack:
print("Netlink warning:")
done = True
break
- gm = GenlMsg(nl_msg)
+ gm = GenlMsg(nl_msg, fixed_header_members)
# Check if this is a reply to our request
if nl_msg.nl_seq != req_seq or gm.genl_cmd != op.rsp_value:
if gm.genl_cmd in self.async_msg_ids:
print('Unexpected message: ' + repr(gm))
continue
- rsp.append(self._decode(gm.raw_attrs, op.attr_set.name))
+ rsp.append(self._decode(gm.raw_attrs, op.attr_set.name)
+ | gm.fixed_header_attrs)
if not rsp:
return None
--- /dev/null
+jsonschema==4.*
+PyYAML==6.*
def _attr_policy(self, policy):
if 'flags-mask' in self.checks or self.is_bitfield:
if self.is_bitfield:
- mask = self.family.consts[self.attr['enum']].get_mask()
+ enum = self.family.consts[self.attr['enum']]
+ mask = enum.get_mask(as_flags=True)
else:
flags = self.family.consts[self.checks['flags-mask']]
flag_cnt = len(flags['entries'])
'split': 'genl_split_ops'}
struct_type = pol_to_struct[family.kernel_policy]
- if family.kernel_policy == 'split':
+ if not exported:
+ cnt = ""
+ elif family.kernel_policy == 'split':
cnt = 0
for op in family.ops.values():
if 'do' in op:
endif # CLANG_CROSS_FLAGS
CFLAGS += $(CLANG_CROSS_FLAGS)
AFLAGS += $(CLANG_CROSS_FLAGS)
+else
+CLANG_CROSS_FLAGS :=
endif # CROSS_COMPILE
# Hack to avoid type-punned warnings on old systems such as RHEL5:
bpf_cookie # failed to open_and_load program: -524 (trampoline)
bpf_loop # attaches to __x64_sys_nanosleep
cgrp_local_storage # prog_attach unexpected error: -524 (trampoline)
+dynptr/test_dynptr_skb_data
+dynptr/test_skb_readonly
fexit_sleep # fexit_skel_load fexit skeleton failed (trampoline)
get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace)
kprobe_multi_bench_attach # bpf_program__attach_kprobe_multi_opts unexpected error: -95
define get_sys_includes
$(shell $(1) $(2) -v -E - </dev/null 2>&1 \
| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }') \
-$(shell $(1) $(2) -dM -E - </dev/null | grep '__riscv_xlen ' | awk '{printf("-D__riscv_xlen=%d -D__BITS_PER_LONG=%d", $$3, $$3)}')
+$(shell $(1) $(2) -dM -E - </dev/null | grep '__riscv_xlen ' | awk '{printf("-D__riscv_xlen=%d -D__BITS_PER_LONG=%d", $$3, $$3)}') \
+$(shell $(1) $(2) -dM -E - </dev/null | grep '__loongarch_grlen ' | awk '{printf("-D__BITS_PER_LONG=%d", $$3)}')
endef
# Determine target endianness.
-I$(abspath $(OUTPUT)/../usr/include)
CLANG_CFLAGS = $(CLANG_SYS_INCLUDES) \
- -Wno-compare-distinct-pointer-types
+ -Wno-compare-distinct-pointer-types -Wuninitialized
$(OUTPUT)/test_l4lb_noinline.o: BPF_CFLAGS += -fno-inline
$(OUTPUT)/test_xdp_noinline.o: BPF_CFLAGS += -fno-inline
TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
network_helpers.c testing_helpers.c \
btf_helpers.c flow_dissector_load.h \
- cap_helpers.c test_loader.c xsk.c
+ cap_helpers.c test_loader.c xsk.c disasm.c
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
$(OUTPUT)/liburandom_read.so \
$(OUTPUT)/xdp_synproxy \
--- /dev/null
+#ifndef __BPF_KFUNCS__
+#define __BPF_KFUNCS__
+
+/* Description
+ * Initializes an skb-type dynptr
+ * Returns
+ * Error code
+ */
+extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
+ struct bpf_dynptr *ptr__uninit) __ksym;
+
+/* Description
+ * Initializes an xdp-type dynptr
+ * Returns
+ * Error code
+ */
+extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
+ struct bpf_dynptr *ptr__uninit) __ksym;
+
+/* Description
+ * Obtain a read-only pointer to the dynptr's data
+ * Returns
+ * Either a direct pointer to the dynptr data or a pointer to the user-provided
+ * buffer if unable to obtain a direct pointer
+ */
+extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
+ void *buffer, __u32 buffer__szk) __ksym;
+
+/* Description
+ * Obtain a read-write pointer to the dynptr's data
+ * Returns
+ * Either a direct pointer to the dynptr data or a pointer to the user-provided
+ * buffer if unable to obtain a direct pointer
+ */
+extern void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr *ptr, __u32 offset,
+ void *buffer, __u32 buffer__szk) __ksym;
+
+#endif
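A minimal BPF-side sketch of how these kfuncs compose; it assumes vmlinux.h and
bpf_helpers.h are included and that the program type allows skb dynptrs (e.g. a
TC program). The section name and logic are illustrative only:

	SEC("tc")
	int read_first_byte(struct __sk_buff *skb)
	{
		struct bpf_dynptr ptr;
		__u8 buffer[2];
		__u8 *data;

		if (bpf_dynptr_from_skb(skb, 0, &ptr))
			return 0;

		/* either a direct pointer into the packet or 'buffer' */
		data = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
		if (!data)
			return 0;

		return data[0] & 1;
	}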
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_VLAN_8021Q=y
CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
CONFIG_XFRM_USER=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_VLAN_8021Q=y
+CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
CONFIG_XFRM_USER=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_VLAN_8021Q=y
+CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_CPUID=y
CONFIG_X86_MSR=y
--- /dev/null
+../../../../kernel/bpf/disasm.c
\ No newline at end of file
--- /dev/null
+../../../../kernel/bpf/disasm.h
\ No newline at end of file
* func#0 @0
* 0: R1=ctx(off=0,imm=0) R10=fp0
* 0: (b7) r3 = 2 ; R3_w=2
+ *
+ * Sometimes it's actually two lines below, e.g. when
+ * searching for "6: R3_w=scalar(umax=255,var_off=(0x0; 0xff))":
+ * from 4 to 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
+ * 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
+ * 6: (71) r3 = *(u8 *)(r2 +0) ; R2_w=pkt(off=0,r=8,imm=0) R3_w=scalar(umax=255,var_off=(0x0; 0xff))
*/
- if (!strstr(line_ptr, m.match)) {
+ while (!strstr(line_ptr, m.match)) {
cur_line = -1;
line_ptr = strtok(NULL, "\n");
- sscanf(line_ptr, "%u: ", &cur_line);
+ sscanf(line_ptr ?: "", "%u: ", &cur_line);
+ if (!line_ptr || cur_line != m.line)
+ break;
}
- if (cur_line != m.line || !line_ptr ||
- !strstr(line_ptr, m.match)) {
- printf("Failed to find match %u: %s\n",
- m.line, m.match);
+ if (cur_line != m.line || !line_ptr || !strstr(line_ptr, m.match)) {
+ printf("Failed to find match %u: %s\n", m.line, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
+#include "test_attach_kprobe_sleepable.skel.h"
+#include "test_attach_probe_manual.skel.h"
#include "test_attach_probe.skel.h"
/* this is how USDT semaphore is actually defined, except volatile modifier */
asm volatile ("");
}
+/* attach point for ref_ctr */
+static noinline void trigger_func4(void)
+{
+ asm volatile ("");
+}
+
static char test_data[] = "test_data";
-void test_attach_probe(void)
+/* manual kprobe/kretprobe/uprobe/uretprobe attach tests */
+static void test_attach_probe_manual(enum probe_attach_mode attach_mode)
{
DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, kprobe_opts);
struct bpf_link *kprobe_link, *kretprobe_link;
struct bpf_link *uprobe_link, *uretprobe_link;
- struct test_attach_probe* skel;
- ssize_t uprobe_offset, ref_ctr_offset;
- struct bpf_link *uprobe_err_link;
- FILE *devnull;
- bool legacy;
-
- /* Check if new-style kprobe/uprobe API is supported.
- * Kernels that support new FD-based kprobe and uprobe BPF attachment
- * through perf_event_open() syscall expose
- * /sys/bus/event_source/devices/kprobe/type and
- * /sys/bus/event_source/devices/uprobe/type files, respectively. They
- * contain magic numbers that are passed as "type" field of
- * perf_event_attr. Lack of such file in the system indicates legacy
- * kernel with old-style kprobe/uprobe attach interface through
- * creating per-probe event through tracefs. For such cases
- * ref_ctr_offset feature is not supported, so we don't test it.
- */
- legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0;
+ struct test_attach_probe_manual *skel;
+ ssize_t uprobe_offset;
- uprobe_offset = get_uprobe_offset(&trigger_func);
- if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
+ skel = test_attach_probe_manual__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_kprobe_manual_open_and_load"))
return;
- ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr);
- if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset"))
- return;
-
- skel = test_attach_probe__open();
- if (!ASSERT_OK_PTR(skel, "skel_open"))
- return;
-
- /* sleepable kprobe test case needs flags set before loading */
- if (!ASSERT_OK(bpf_program__set_flags(skel->progs.handle_kprobe_sleepable,
- BPF_F_SLEEPABLE), "kprobe_sleepable_flags"))
- goto cleanup;
-
- if (!ASSERT_OK(test_attach_probe__load(skel), "skel_load"))
- goto cleanup;
- if (!ASSERT_OK_PTR(skel->bss, "check_bss"))
+ uprobe_offset = get_uprobe_offset(&trigger_func);
+ if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
goto cleanup;
/* manual-attach kprobe/kretprobe */
- kprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kprobe,
- false /* retprobe */,
- SYS_NANOSLEEP_KPROBE_NAME);
+ kprobe_opts.attach_mode = attach_mode;
+ kprobe_opts.retprobe = false;
+ kprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kprobe,
+ SYS_NANOSLEEP_KPROBE_NAME,
+ &kprobe_opts);
if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe"))
goto cleanup;
skel->links.handle_kprobe = kprobe_link;
- kretprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kretprobe,
- true /* retprobe */,
- SYS_NANOSLEEP_KPROBE_NAME);
+ kprobe_opts.retprobe = true;
+ kretprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kretprobe,
+ SYS_NANOSLEEP_KPROBE_NAME,
+ &kprobe_opts);
if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe"))
goto cleanup;
skel->links.handle_kretprobe = kretprobe_link;
- /* auto-attachable kprobe and kretprobe */
- skel->links.handle_kprobe_auto = bpf_program__attach(skel->progs.handle_kprobe_auto);
- ASSERT_OK_PTR(skel->links.handle_kprobe_auto, "attach_kprobe_auto");
-
- skel->links.handle_kretprobe_auto = bpf_program__attach(skel->progs.handle_kretprobe_auto);
- ASSERT_OK_PTR(skel->links.handle_kretprobe_auto, "attach_kretprobe_auto");
-
- if (!legacy)
- ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");
-
+ /* manual-attach uprobe/uretprobe */
+ uprobe_opts.attach_mode = attach_mode;
+ uprobe_opts.ref_ctr_offset = 0;
uprobe_opts.retprobe = false;
- uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe,
0 /* self pid */,
"/proc/self/exe",
goto cleanup;
skel->links.handle_uprobe = uprobe_link;
- if (!legacy)
- ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");
-
- /* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */
uprobe_opts.retprobe = true;
- uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe,
-1 /* any pid */,
"/proc/self/exe",
goto cleanup;
skel->links.handle_uretprobe = uretprobe_link;
- /* verify auto-attach fails for old-style uprobe definition */
- uprobe_err_link = bpf_program__attach(skel->progs.handle_uprobe_byname);
- if (!ASSERT_EQ(libbpf_get_error(uprobe_err_link), -EOPNOTSUPP,
- "auto-attach should fail for old-style name"))
- goto cleanup;
-
+ /* attach uprobe by function name manually */
uprobe_opts.func_name = "trigger_func2";
uprobe_opts.retprobe = false;
uprobe_opts.ref_ctr_offset = 0;
if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname, "attach_uprobe_byname"))
goto cleanup;
+ /* trigger & validate kprobe & kretprobe */
+ usleep(1);
+
+ /* trigger & validate uprobe & uretprobe */
+ trigger_func();
+
+ /* trigger & validate uprobe attached by name */
+ trigger_func2();
+
+ ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
+ ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
+ ASSERT_EQ(skel->bss->uprobe_res, 3, "check_uprobe_res");
+ ASSERT_EQ(skel->bss->uretprobe_res, 4, "check_uretprobe_res");
+ ASSERT_EQ(skel->bss->uprobe_byname_res, 5, "check_uprobe_byname_res");
+
+cleanup:
+ test_attach_probe_manual__destroy(skel);
+}
+
+static void test_attach_probe_auto(struct test_attach_probe *skel)
+{
+ struct bpf_link *uprobe_err_link;
+
+ /* auto-attachable kprobe and kretprobe */
+ skel->links.handle_kprobe_auto = bpf_program__attach(skel->progs.handle_kprobe_auto);
+ ASSERT_OK_PTR(skel->links.handle_kprobe_auto, "attach_kprobe_auto");
+
+ skel->links.handle_kretprobe_auto = bpf_program__attach(skel->progs.handle_kretprobe_auto);
+ ASSERT_OK_PTR(skel->links.handle_kretprobe_auto, "attach_kretprobe_auto");
+
+ /* verify auto-attach fails for old-style uprobe definition */
+ uprobe_err_link = bpf_program__attach(skel->progs.handle_uprobe_byname);
+ if (!ASSERT_EQ(libbpf_get_error(uprobe_err_link), -EOPNOTSUPP,
+ "auto-attach should fail for old-style name"))
+ return;
+
/* verify auto-attach works */
skel->links.handle_uretprobe_byname =
bpf_program__attach(skel->progs.handle_uretprobe_byname);
if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname, "attach_uretprobe_byname"))
- goto cleanup;
+ return;
+
+ /* trigger & validate kprobe & kretprobe */
+ usleep(1);
+
+ /* trigger & validate uprobe attached by name */
+ trigger_func2();
+
+ ASSERT_EQ(skel->bss->kprobe2_res, 11, "check_kprobe_auto_res");
+ ASSERT_EQ(skel->bss->kretprobe2_res, 22, "check_kretprobe_auto_res");
+ ASSERT_EQ(skel->bss->uretprobe_byname_res, 6, "check_uretprobe_byname_res");
+}
+
+static void test_uprobe_lib(struct test_attach_probe *skel)
+{
+ DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ FILE *devnull;
/* test attach by name for a library function, using the library
* as the binary argument. libc.so.6 will be resolved via dlopen()/dlinfo().
"libc.so.6",
0, &uprobe_opts);
if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname2, "attach_uprobe_byname2"))
- goto cleanup;
+ return;
uprobe_opts.func_name = "fclose";
uprobe_opts.retprobe = true;
"libc.so.6",
0, &uprobe_opts);
if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname2, "attach_uretprobe_byname2"))
+ return;
+
+ /* trigger & validate shared library u[ret]probes attached by name */
+ devnull = fopen("/dev/null", "r");
+ fclose(devnull);
+
+ ASSERT_EQ(skel->bss->uprobe_byname2_res, 7, "check_uprobe_byname2_res");
+ ASSERT_EQ(skel->bss->uretprobe_byname2_res, 8, "check_uretprobe_byname2_res");
+}
+
+static void test_uprobe_ref_ctr(struct test_attach_probe *skel)
+{
+ DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ struct bpf_link *uprobe_link, *uretprobe_link;
+ ssize_t uprobe_offset, ref_ctr_offset;
+
+ uprobe_offset = get_uprobe_offset(&trigger_func4);
+ if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset_ref_ctr"))
+ return;
+
+ ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr);
+ if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset"))
+ return;
+
+ ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");
+
+ uprobe_opts.retprobe = false;
+ uprobe_opts.ref_ctr_offset = ref_ctr_offset;
+ uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe_ref_ctr,
+ 0 /* self pid */,
+ "/proc/self/exe",
+ uprobe_offset,
+ &uprobe_opts);
+ if (!ASSERT_OK_PTR(uprobe_link, "attach_uprobe_ref_ctr"))
+ return;
+ skel->links.handle_uprobe_ref_ctr = uprobe_link;
+
+ ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");
+
+ /* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */
+ uprobe_opts.retprobe = true;
+ uprobe_opts.ref_ctr_offset = ref_ctr_offset;
+ uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe_ref_ctr,
+ -1 /* any pid */,
+ "/proc/self/exe",
+ uprobe_offset, &uprobe_opts);
+ if (!ASSERT_OK_PTR(uretprobe_link, "attach_uretprobe_ref_ctr"))
+ return;
+ skel->links.handle_uretprobe_ref_ctr = uretprobe_link;
+}
+
+static void test_kprobe_sleepable(void)
+{
+ struct test_attach_kprobe_sleepable *skel;
+
+ skel = test_attach_kprobe_sleepable__open();
+ if (!ASSERT_OK_PTR(skel, "skel_kprobe_sleepable_open"))
+ return;
+
+ /* sleepable kprobe test case needs flags set before loading */
+ if (!ASSERT_OK(bpf_program__set_flags(skel->progs.handle_kprobe_sleepable,
+ BPF_F_SLEEPABLE), "kprobe_sleepable_flags"))
+ goto cleanup;
+
+ if (!ASSERT_OK(test_attach_kprobe_sleepable__load(skel),
+ "skel_kprobe_sleepable_load"))
goto cleanup;
/* sleepable kprobes should not attach successfully */
skel->links.handle_kprobe_sleepable = bpf_program__attach(skel->progs.handle_kprobe_sleepable);
- if (!ASSERT_ERR_PTR(skel->links.handle_kprobe_sleepable, "attach_kprobe_sleepable"))
- goto cleanup;
+ ASSERT_ERR_PTR(skel->links.handle_kprobe_sleepable, "attach_kprobe_sleepable");
+cleanup:
+ test_attach_kprobe_sleepable__destroy(skel);
+}
+
+static void test_uprobe_sleepable(struct test_attach_probe *skel)
+{
/* test sleepable uprobe and uretprobe variants */
skel->links.handle_uprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uprobe_byname3_sleepable);
if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3_sleepable, "attach_uprobe_byname3_sleepable"))
- goto cleanup;
+ return;
skel->links.handle_uprobe_byname3 = bpf_program__attach(skel->progs.handle_uprobe_byname3);
if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3, "attach_uprobe_byname3"))
- goto cleanup;
+ return;
skel->links.handle_uretprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uretprobe_byname3_sleepable);
if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3_sleepable, "attach_uretprobe_byname3_sleepable"))
- goto cleanup;
+ return;
skel->links.handle_uretprobe_byname3 = bpf_program__attach(skel->progs.handle_uretprobe_byname3);
if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3, "attach_uretprobe_byname3"))
- goto cleanup;
+ return;
skel->bss->user_ptr = test_data;
- /* trigger & validate kprobe && kretprobe */
- usleep(1);
-
- /* trigger & validate shared library u[ret]probes attached by name */
- devnull = fopen("/dev/null", "r");
- fclose(devnull);
-
- /* trigger & validate uprobe & uretprobe */
- trigger_func();
-
- /* trigger & validate uprobe attached by name */
- trigger_func2();
-
/* trigger & validate sleepable uprobe attached by name */
trigger_func3();
- ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
- ASSERT_EQ(skel->bss->kprobe2_res, 11, "check_kprobe_auto_res");
- ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
- ASSERT_EQ(skel->bss->kretprobe2_res, 22, "check_kretprobe_auto_res");
- ASSERT_EQ(skel->bss->uprobe_res, 3, "check_uprobe_res");
- ASSERT_EQ(skel->bss->uretprobe_res, 4, "check_uretprobe_res");
- ASSERT_EQ(skel->bss->uprobe_byname_res, 5, "check_uprobe_byname_res");
- ASSERT_EQ(skel->bss->uretprobe_byname_res, 6, "check_uretprobe_byname_res");
- ASSERT_EQ(skel->bss->uprobe_byname2_res, 7, "check_uprobe_byname2_res");
- ASSERT_EQ(skel->bss->uretprobe_byname2_res, 8, "check_uretprobe_byname2_res");
ASSERT_EQ(skel->bss->uprobe_byname3_sleepable_res, 9, "check_uprobe_byname3_sleepable_res");
ASSERT_EQ(skel->bss->uprobe_byname3_res, 10, "check_uprobe_byname3_res");
ASSERT_EQ(skel->bss->uretprobe_byname3_sleepable_res, 11, "check_uretprobe_byname3_sleepable_res");
ASSERT_EQ(skel->bss->uretprobe_byname3_res, 12, "check_uretprobe_byname3_res");
+}
+
+void test_attach_probe(void)
+{
+ struct test_attach_probe *skel;
+
+ skel = test_attach_probe__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ if (!ASSERT_OK(test_attach_probe__load(skel), "skel_load"))
+ goto cleanup;
+ if (!ASSERT_OK_PTR(skel->bss, "check_bss"))
+ goto cleanup;
+
+ if (test__start_subtest("manual-default"))
+ test_attach_probe_manual(PROBE_ATTACH_MODE_DEFAULT);
+ if (test__start_subtest("manual-legacy"))
+ test_attach_probe_manual(PROBE_ATTACH_MODE_LEGACY);
+ if (test__start_subtest("manual-perf"))
+ test_attach_probe_manual(PROBE_ATTACH_MODE_PERF);
+ if (test__start_subtest("manual-link"))
+ test_attach_probe_manual(PROBE_ATTACH_MODE_LINK);
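+ /* The four "manual-*" subtests above differ only in the attach mode
+ * passed down to libbpf. As an illustrative sketch (not taken from this
+ * file; "some_func" is a placeholder), forcing a particular mode for a
+ * kprobe looks roughly like:
+ *
+ *	LIBBPF_OPTS(bpf_kprobe_opts, opts,
+ *		    .attach_mode = PROBE_ATTACH_MODE_LEGACY);
+ *	link = bpf_program__attach_kprobe_opts(prog, "some_func", &opts);
+ *
+ * PROBE_ATTACH_MODE_DEFAULT lets libbpf pick the newest supported
+ * mechanism, LEGACY forces the tracefs-based path, PERF goes through
+ * perf_event_open() without a BPF link, and LINK uses a BPF link.
+ */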
+
+ if (test__start_subtest("auto"))
+ test_attach_probe_auto(skel);
+ if (test__start_subtest("kprobe-sleepable"))
+ test_kprobe_sleepable();
+ if (test__start_subtest("uprobe-lib"))
+ test_uprobe_lib(skel);
+ if (test__start_subtest("uprobe-sleepable"))
+ test_uprobe_sleepable(skel);
+ if (test__start_subtest("uprobe-ref_ctr"))
+ test_uprobe_ref_ctr(skel);
cleanup:
test_attach_probe__destroy(skel);
"test_cgrp_xchg_release",
"test_cgrp_get_release",
"test_cgrp_get_ancestors",
+ "test_cgrp_from_id",
};
void test_cgrp_kfunc(void)
cgrp_ls_sleepable__destroy(skel);
}
-static void test_no_rcu_lock(__u64 cgroup_id)
+static void test_yes_rcu_lock(__u64 cgroup_id)
{
struct cgrp_ls_sleepable *skel;
int err;
skel->bss->target_pid = syscall(SYS_gettid);
- bpf_program__set_autoload(skel->progs.no_rcu_lock, true);
+ bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
err = cgrp_ls_sleepable__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto out;
cgrp_ls_sleepable__destroy(skel);
}
-static void test_rcu_lock(void)
+static void test_no_rcu_lock(void)
{
struct cgrp_ls_sleepable *skel;
int err;
if (!ASSERT_OK_PTR(skel, "skel_open"))
return;
- bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
+ bpf_program__set_autoload(skel->progs.no_rcu_lock, true);
err = cgrp_ls_sleepable__load(skel);
ASSERT_ERR(err, "skel_load");
test_negative();
if (test__start_subtest("cgroup_iter_sleepable"))
test_cgroup_iter_sleepable(cgroup_fd, cgroup_id);
+ if (test__start_subtest("yes_rcu_lock"))
+ test_yes_rcu_lock(cgroup_id);
if (test__start_subtest("no_rcu_lock"))
- test_no_rcu_lock(cgroup_id);
- if (test__start_subtest("rcu_lock"))
- test_rcu_lock();
+ test_no_rcu_lock();
close(cgroup_fd);
}
#include "progs/test_cls_redirect.h"
#include "test_cls_redirect.skel.h"
+#include "test_cls_redirect_dynptr.skel.h"
#include "test_cls_redirect_subprogs.skel.h"
#define ENCAP_IP INADDR_LOOPBACK
close_fds((int *)conns, sizeof(conns) / sizeof(conns[0][0]));
}
+static void test_cls_redirect_dynptr(void)
+{
+ struct test_cls_redirect_dynptr *skel;
+ int err;
+
+ skel = test_cls_redirect_dynptr__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->rodata->ENCAPSULATION_IP = htonl(ENCAP_IP);
+ skel->rodata->ENCAPSULATION_PORT = htons(ENCAP_PORT);
+
+ err = test_cls_redirect_dynptr__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ test_cls_redirect_common(skel->progs.cls_redirect);
+
+cleanup:
+ test_cls_redirect_dynptr__destroy(skel);
+}
+
static void test_cls_redirect_inlined(void)
{
struct test_cls_redirect *skel;
test_cls_redirect_inlined();
if (test__start_subtest("cls_redirect_subprogs"))
test_cls_redirect_subprogs();
+ if (test__start_subtest("cls_redirect_dynptr"))
+ test_cls_redirect_dynptr();
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+#include <limits.h>
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+#include <regex.h>
+#include <test_progs.h>
+
+#include "bpf/btf.h"
+#include "bpf_util.h"
+#include "linux/filter.h"
+#include "disasm.h"
+
+#define MAX_PROG_TEXT_SZ (32 * 1024)
+
+/* The code in this file serves the sole purpose of executing test cases
+ * specified in the test_cases array. Each test case specifies a program
+ * type, context field offset, and disassembly patterns that correspond
+ * to read and write instructions generated by
+ * verifier.c:convert_ctx_access() for accessing that field.
+ *
+ * For each test case, up to three programs are created:
+ * - One that uses BPF_LDX_MEM to read the context field.
+ * - One that uses BPF_STX_MEM to write to the context field.
+ * - One that uses BPF_ST_MEM to write to the context field.
+ *
+ * The disassembly of each program is then compared with the pattern
+ * specified in the test case.
+ */
+struct test_case {
+ char *name;
+ enum bpf_prog_type prog_type;
+ enum bpf_attach_type expected_attach_type;
+ int field_offset;
+ int field_sz;
+ /* Program generated for BPF_ST_MEM uses value 42 by default,
+ * this field allows specifying a custom value.
+ */
+ struct {
+ bool use;
+ int value;
+ } st_value;
+ /* Pattern for BPF_LDX_MEM(field_sz, dst, ctx, field_offset) */
+ char *read;
+ /* Pattern for BPF_STX_MEM(field_sz, ctx, src, field_offset) and
+ * BPF_ST_MEM (field_sz, ctx, src, field_offset)
+ */
+ char *write;
+ /* Pattern for BPF_ST_MEM(field_sz, ctx, src, field_offset),
+ * takes priority over `write`.
+ */
+ char *write_st;
+ /* Pattern for BPF_STX_MEM (field_sz, ctx, src, field_offset),
+ * takes priority over `write`.
+ */
+ char *write_stx;
+};
+
+#define N(_prog_type, type, field, name_extra...) \
+ .name = #_prog_type "." #field name_extra, \
+ .prog_type = BPF_PROG_TYPE_##_prog_type, \
+ .field_offset = offsetof(type, field), \
+ .field_sz = sizeof(typeof(((type *)NULL)->field))
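+
+/* For illustration, an entry written as N(SCHED_CLS, struct __sk_buff, mark)
+ * expands (roughly) to:
+ *
+ *	.name = "SCHED_CLS.mark",
+ *	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ *	.field_offset = offsetof(struct __sk_buff, mark),
+ *	.field_sz = sizeof(((struct __sk_buff *)NULL)->mark)
+ */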
+
+static struct test_case test_cases[] = {
+/* Sign extension on s390 changes the pattern */
+#if defined(__x86_64__) || defined(__aarch64__)
+ {
+ N(SCHED_CLS, struct __sk_buff, tstamp),
+ .read = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);"
+ "w11 &= 160;"
+ "if w11 != 0xa0 goto pc+2;"
+ "$dst = 0;"
+ "goto pc+1;"
+ "$dst = *(u64 *)($ctx + sk_buff::tstamp);",
+ .write = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);"
+ "if w11 & 0x80 goto pc+1;"
+ "goto pc+2;"
+ "w11 &= -33;"
+ "*(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset) = r11;"
+ "*(u64 *)($ctx + sk_buff::tstamp) = $src;",
+ },
+#endif
+ {
+ N(SCHED_CLS, struct __sk_buff, priority),
+ .read = "$dst = *(u32 *)($ctx + sk_buff::priority);",
+ .write = "*(u32 *)($ctx + sk_buff::priority) = $src;",
+ },
+ {
+ N(SCHED_CLS, struct __sk_buff, mark),
+ .read = "$dst = *(u32 *)($ctx + sk_buff::mark);",
+ .write = "*(u32 *)($ctx + sk_buff::mark) = $src;",
+ },
+ {
+ N(SCHED_CLS, struct __sk_buff, cb[0]),
+ .read = "$dst = *(u32 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::data));",
+ .write = "*(u32 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::data)) = $src;",
+ },
+ {
+ N(SCHED_CLS, struct __sk_buff, tc_classid),
+ .read = "$dst = *(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid));",
+ .write = "*(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid)) = $src;",
+ },
+ {
+ N(SCHED_CLS, struct __sk_buff, tc_index),
+ .read = "$dst = *(u16 *)($ctx + sk_buff::tc_index);",
+ .write = "*(u16 *)($ctx + sk_buff::tc_index) = $src;",
+ },
+ {
+ N(SCHED_CLS, struct __sk_buff, queue_mapping),
+ .read = "$dst = *(u16 *)($ctx + sk_buff::queue_mapping);",
+ .write_stx = "if $src >= 0xffff goto pc+1;"
+ "*(u16 *)($ctx + sk_buff::queue_mapping) = $src;",
+ .write_st = "*(u16 *)($ctx + sk_buff::queue_mapping) = $src;",
+ },
+ {
+ /* This is a corner case in filter.c:bpf_convert_ctx_access() */
+ N(SCHED_CLS, struct __sk_buff, queue_mapping, ".ushrt_max"),
+ .st_value = { true, USHRT_MAX },
+ .write_st = "goto pc+0;",
+ },
+ {
+ N(CGROUP_SOCK, struct bpf_sock, bound_dev_if),
+ .read = "$dst = *(u32 *)($ctx + sock_common::skc_bound_dev_if);",
+ .write = "*(u32 *)($ctx + sock_common::skc_bound_dev_if) = $src;",
+ },
+ {
+ N(CGROUP_SOCK, struct bpf_sock, mark),
+ .read = "$dst = *(u32 *)($ctx + sock::sk_mark);",
+ .write = "*(u32 *)($ctx + sock::sk_mark) = $src;",
+ },
+ {
+ N(CGROUP_SOCK, struct bpf_sock, priority),
+ .read = "$dst = *(u32 *)($ctx + sock::sk_priority);",
+ .write = "*(u32 *)($ctx + sock::sk_priority) = $src;",
+ },
+ {
+ N(SOCK_OPS, struct bpf_sock_ops, replylong[0]),
+ .read = "$dst = *(u32 *)($ctx + bpf_sock_ops_kern::replylong);",
+ .write = "*(u32 *)($ctx + bpf_sock_ops_kern::replylong) = $src;",
+ },
+ {
+ N(CGROUP_SYSCTL, struct bpf_sysctl, file_pos),
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+ .read = "$dst = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+ "$dst = *(u32 *)($dst +0);",
+ .write = "*(u64 *)($ctx + bpf_sysctl_kern::tmp_reg) = r9;"
+ "r9 = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+ "*(u32 *)(r9 +0) = $src;"
+ "r9 = *(u64 *)($ctx + bpf_sysctl_kern::tmp_reg);",
+#else
+ .read = "$dst = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+ "$dst = *(u32 *)($dst +4);",
+ .write = "*(u64 *)($ctx + bpf_sysctl_kern::tmp_reg) = r9;"
+ "r9 = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+ "*(u32 *)(r9 +4) = $src;"
+ "r9 = *(u64 *)($ctx + bpf_sysctl_kern::tmp_reg);",
+#endif
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, sk),
+ .read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::sk);",
+ .expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, level),
+ .read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::level);",
+ .write = "*(u32 *)($ctx + bpf_sockopt_kern::level) = $src;",
+ .expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, optname),
+ .read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::optname);",
+ .write = "*(u32 *)($ctx + bpf_sockopt_kern::optname) = $src;",
+ .expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, optlen),
+ .read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::optlen);",
+ .write = "*(u32 *)($ctx + bpf_sockopt_kern::optlen) = $src;",
+ .expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, retval),
+ .read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
+ "$dst = *(u64 *)($dst + task_struct::bpf_ctx);"
+ "$dst = *(u32 *)($dst + bpf_cg_run_ctx::retval);",
+ .write = "*(u64 *)($ctx + bpf_sockopt_kern::tmp_reg) = r9;"
+ "r9 = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
+ "r9 = *(u64 *)(r9 + task_struct::bpf_ctx);"
+ "*(u32 *)(r9 + bpf_cg_run_ctx::retval) = $src;"
+ "r9 = *(u64 *)($ctx + bpf_sockopt_kern::tmp_reg);",
+ .expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, optval),
+ .read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::optval);",
+ .expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+ },
+ {
+ N(CGROUP_SOCKOPT, struct bpf_sockopt, optval_end),
+ .read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::optval_end);",
+ .expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+ },
+};
+
+#undef N
+
+static regex_t *ident_regex;
+static regex_t *field_regex;
+
+static char *skip_space(char *str)
+{
+ while (*str && isspace(*str))
+ ++str;
+ return str;
+}
+
+static char *skip_space_and_semi(char *str)
+{
+ while (*str && (isspace(*str) || *str == ';'))
+ ++str;
+ return str;
+}
+
+static char *match_str(char *str, char *prefix)
+{
+ while (*str && *prefix && *str == *prefix) {
+ ++str;
+ ++prefix;
+ }
+ if (*prefix)
+ return NULL;
+ return str;
+}
+
+static char *match_number(char *str, int num)
+{
+ char *next;
+ int snum = strtol(str, &next, 10);
+
+ if (next - str == 0 || num != snum)
+ return NULL;
+
+ return next;
+}
+
+static int find_field_offset_aux(struct btf *btf, int btf_id, char *field_name, int off)
+{
+ const struct btf_type *type = btf__type_by_id(btf, btf_id);
+ const struct btf_member *m;
+ __u16 mnum;
+ int i;
+
+ if (!type) {
+ PRINT_FAIL("Can't find btf_type for id %d\n", btf_id);
+ return -1;
+ }
+
+ if (!btf_is_struct(type) && !btf_is_union(type)) {
+ PRINT_FAIL("BTF id %d is not struct or union\n", btf_id);
+ return -1;
+ }
+
+ m = btf_members(type);
+ mnum = btf_vlen(type);
+
+ for (i = 0; i < mnum; ++i, ++m) {
+ const char *mname = btf__name_by_offset(btf, m->name_off);
+
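+ /* An empty member name denotes an anonymous struct/union: recurse
+ * into it, carrying the accumulated bit offset.
+ */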
+ if (strcmp(mname, "") == 0) {
+ int msize = find_field_offset_aux(btf, m->type, field_name,
+ off + m->offset);
+ if (msize >= 0)
+ return msize;
+ }
+
+ if (strcmp(mname, field_name))
+ continue;
+
+ return (off + m->offset) / 8;
+ }
+
+ return -1;
+}
+
+static int find_field_offset(struct btf *btf, char *pattern, regmatch_t *matches)
+{
+ int type_sz = matches[1].rm_eo - matches[1].rm_so;
+ int field_sz = matches[2].rm_eo - matches[2].rm_so;
+ char *type = pattern + matches[1].rm_so;
+ char *field = pattern + matches[2].rm_so;
+ char field_str[128] = {};
+ char type_str[128] = {};
+ int btf_id, field_offset;
+
+ if (type_sz >= sizeof(type_str)) {
+ PRINT_FAIL("Malformed pattern: type ident is too long: %d\n", type_sz);
+ return -1;
+ }
+
+ if (field_sz >= sizeof(field_str)) {
+ PRINT_FAIL("Malformed pattern: field ident is too long: %d\n", field_sz);
+ return -1;
+ }
+
+ strncpy(type_str, type, type_sz);
+ strncpy(field_str, field, field_sz);
+ btf_id = btf__find_by_name(btf, type_str);
+ if (btf_id < 0) {
+ PRINT_FAIL("No BTF info for type %s\n", type_str);
+ return -1;
+ }
+
+ field_offset = find_field_offset_aux(btf, btf_id, field_str, 0);
+ if (field_offset < 0) {
+ PRINT_FAIL("No BTF info for field %s::%s\n", type_str, field_str);
+ return -1;
+ }
+
+ return field_offset;
+}
+
+static regex_t *compile_regex(char *pat)
+{
+ regex_t *re;
+ int err;
+
+ re = malloc(sizeof(regex_t));
+ if (!re) {
+ PRINT_FAIL("Can't alloc regex\n");
+ return NULL;
+ }
+
+ err = regcomp(re, pat, REG_EXTENDED);
+ if (err) {
+ char errbuf[512];
+
+ regerror(err, re, errbuf, sizeof(errbuf));
+ PRINT_FAIL("Can't compile regex: %s\n", errbuf);
+ free(re);
+ return NULL;
+ }
+
+ return re;
+}
+
+static void free_regex(regex_t *re)
+{
+ if (!re)
+ return;
+
+ regfree(re);
+ free(re);
+}
+
+static u32 max_line_len(char *str)
+{
+ u32 max_line = 0;
+ char *next = str;
+
+ while (next) {
+ next = strchr(str, '\n');
+ if (next) {
+ max_line = max_t(u32, max_line, (next - str));
+ str = next + 1;
+ } else {
+ max_line = max_t(u32, max_line, strlen(str));
+ }
+ }
+
+ return min(max_line, 60u);
+}
+
+/* Print strings `pattern_origin` and `text_origin` side by side,
+ * assume `pattern_pos` and `text_pos` designate the location within the
+ * corresponding origin string where the match diverges.
+ * The output should look like:
+ *
+ * Can't match disassembly(left) with pattern(right):
+ * r2 = *(u64 *)(r1 +0) ; $dst = *(u64 *)($ctx + bpf_sockopt_kern::sk1)
+ * ^ ^
+ * r0 = 0 ;
+ * exit ;
+ */
+static void print_match_error(FILE *out,
+ char *pattern_origin, char *text_origin,
+ char *pattern_pos, char *text_pos)
+{
+ char *pattern = pattern_origin;
+ char *text = text_origin;
+ int middle = max_line_len(text) + 2;
+
+ fprintf(out, "Can't match disassembly(left) with pattern(right):\n");
+ while (*pattern || *text) {
+ int column = 0;
+ int mark1 = -1;
+ int mark2 = -1;
+
+ /* Print one line from text */
+ while (*text && *text != '\n') {
+ if (text == text_pos)
+ mark1 = column;
+ fputc(*text, out);
+ ++text;
+ ++column;
+ }
+ if (text == text_pos)
+ mark1 = column;
+
+ /* Pad to the middle */
+ while (column < middle) {
+ fputc(' ', out);
+ ++column;
+ }
+ fputs("; ", out);
+ column += 3;
+
+ /* Print one line from pattern, pattern lines are terminated by ';' */
+ while (*pattern && *pattern != ';') {
+ if (pattern == pattern_pos)
+ mark2 = column;
+ fputc(*pattern, out);
+ ++pattern;
+ ++column;
+ }
+ if (pattern == pattern_pos)
+ mark2 = column;
+
+ fputc('\n', out);
+ if (*pattern)
+ ++pattern;
+ if (*text)
+ ++text;
+
+ /* If pattern and text diverge at this line, print an
+ * additional line with '^' marks, highlighting
+ * positions where match fails.
+ */
+ if (mark1 > 0 || mark2 > 0) {
+ for (column = 0; column <= max(mark1, mark2); ++column) {
+ if (column == mark1 || column == mark2)
+ fputc('^', out);
+ else
+ fputc(' ', out);
+ }
+ fputc('\n', out);
+ }
+ }
+}
+
+/* Test if `text` matches `pattern`. Pattern consists of the following elements:
+ *
+ * - Field offset references:
+ *
+ * <type>::<field>
+ *
+ * When such a reference is encountered, BTF is used to compute the numerical
+ * value of the offset of <field> in <type>. The `text` is expected to
+ * contain the matching numerical value.
+ *
+ * - Field groups:
+ *
+ * $(<type>::<field> [+ <type>::<field>]*)
+ *
+ * Allows specifying an offset that is a sum of multiple field offsets.
+ * The `text` is expected to contain the matching numerical value.
+ *
+ * - Variable references, e.g. `$src`, `$dst`, `$ctx`.
+ * These are substitutions specified in the `reg_map` array.
+ * If a substring of the pattern is equal to `reg_map[i][0]`, the `text` is
+ * expected to contain `reg_map[i][1]` in the matching position.
+ *
+ * - Whitespace is ignored; ';' counts as whitespace in `pattern`.
+ *
+ * - For any other characters, `pattern` and `text` must match one-to-one.
+ *
+ * Example of a pattern:
+ *
+ * __________ fields group ________________
+ * ' '
+ * *(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid)) = $src;
+ * ^^^^ '______________________'
+ * variable reference field offset reference
+ */
+static bool match_pattern(struct btf *btf, char *pattern, char *text, char *reg_map[][2])
+{
+ char *pattern_origin = pattern;
+ char *text_origin = text;
+ regmatch_t matches[3];
+
+_continue:
+ while (*pattern) {
+ if (!*text)
+ goto err;
+
+ /* Skip whitespace */
+ if (isspace(*pattern) || *pattern == ';') {
+ if (!isspace(*text) && text != text_origin && isalnum(text[-1]))
+ goto err;
+ pattern = skip_space_and_semi(pattern);
+ text = skip_space(text);
+ continue;
+ }
+
+ /* Check for variable references */
+ for (int i = 0; reg_map[i][0]; ++i) {
+ char *pattern_next, *text_next;
+
+ pattern_next = match_str(pattern, reg_map[i][0]);
+ if (!pattern_next)
+ continue;
+
+ text_next = match_str(text, reg_map[i][1]);
+ if (!text_next)
+ goto err;
+
+ pattern = pattern_next;
+ text = text_next;
+ goto _continue;
+ }
+
+ /* Match field group:
+ * $(sk_buff::cb + qdisc_skb_cb::tc_classid)
+ */
+ if (strncmp(pattern, "$(", 2) == 0) {
+ char *group_start = pattern, *text_next;
+ int acc_offset = 0;
+
+ pattern += 2;
+
+ for (;;) {
+ int field_offset;
+
+ pattern = skip_space(pattern);
+ if (!*pattern) {
+ PRINT_FAIL("Unexpected end of pattern\n");
+ goto err;
+ }
+
+ if (*pattern == ')') {
+ ++pattern;
+ break;
+ }
+
+ if (*pattern == '+') {
+ ++pattern;
+ continue;
+ }
+
+ printf("pattern: %s\n", pattern);
+ if (regexec(field_regex, pattern, 3, matches, 0) != 0) {
+ PRINT_FAIL("Field reference expected\n");
+ goto err;
+ }
+
+ field_offset = find_field_offset(btf, pattern, matches);
+ if (field_offset < 0)
+ goto err;
+
+ pattern += matches[0].rm_eo;
+ acc_offset += field_offset;
+ }
+
+ text_next = match_number(text, acc_offset);
+ if (!text_next) {
+ PRINT_FAIL("No match for group offset %.*s (%d)\n",
+ (int)(pattern - group_start),
+ group_start,
+ acc_offset);
+ goto err;
+ }
+ text = text_next;
+ }
+
+ /* Match field reference:
+ * sk_buff::cb
+ */
+ if (regexec(field_regex, pattern, 3, matches, 0) == 0) {
+ int field_offset;
+ char *text_next;
+
+ field_offset = find_field_offset(btf, pattern, matches);
+ if (field_offset < 0)
+ goto err;
+
+ text_next = match_number(text, field_offset);
+ if (!text_next) {
+ PRINT_FAIL("No match for field offset %.*s (%d)\n",
+ (int)matches[0].rm_eo, pattern, field_offset);
+ goto err;
+ }
+
+ pattern += matches[0].rm_eo;
+ text = text_next;
+ continue;
+ }
+
+ /* If pattern points to identifier not followed by '::'
+ * skip the identifier to avoid n^2 application of the
+ * field reference rule.
+ */
+ if (regexec(ident_regex, pattern, 1, matches, 0) == 0) {
+ if (strncmp(pattern, text, matches[0].rm_eo) != 0)
+ goto err;
+
+ pattern += matches[0].rm_eo;
+ text += matches[0].rm_eo;
+ continue;
+ }
+
+ /* Match literally */
+ if (*pattern != *text)
+ goto err;
+
+ ++pattern;
+ ++text;
+ }
+
+ return true;
+
+err:
+ test__fail();
+ print_match_error(stdout, pattern_origin, text_origin, pattern, text);
+ return false;
+}
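+
+/* Illustrative sketch of how match_pattern() is used (the register names and
+ * the numeric offset below are made up for the example; real offsets come
+ * from vmlinux BTF):
+ *
+ *	char *reg_map[][2] = { { "$ctx", "r1" }, { "$dst", "r2" }, {} };
+ *
+ *	match_pattern(btf, "$dst = *(u32 *)($ctx + sk_buff::mark);",
+ *		      "r2 = *(u32 *)(r1 +104)\n", reg_map);
+ *
+ * returns true only if 104 is indeed the offset of 'mark' in 'struct sk_buff'.
+ */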
+
+/* Request BPF program instructions after all rewrites are applied,
+ * e.g. verifier.c:convert_ctx_access() is done.
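+ * The instruction count is queried with a first bpf_prog_get_info_by_fd()
+ * call; a second call then copies the xlated instructions into a buffer
+ * allocated here, which the caller must free.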
+ */
+static int get_xlated_program(int fd_prog, struct bpf_insn **buf, __u32 *cnt)
+{
+ struct bpf_prog_info info = {};
+ __u32 info_len = sizeof(info);
+ __u32 xlated_prog_len;
+ __u32 buf_element_size = sizeof(struct bpf_insn);
+
+ if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ perror("bpf_prog_get_info_by_fd failed");
+ return -1;
+ }
+
+ xlated_prog_len = info.xlated_prog_len;
+ if (xlated_prog_len % buf_element_size) {
+ printf("Program length %d is not multiple of %d\n",
+ xlated_prog_len, buf_element_size);
+ return -1;
+ }
+
+ *cnt = xlated_prog_len / buf_element_size;
+ *buf = calloc(*cnt, buf_element_size);
+ if (!*buf) {
+ perror("can't allocate xlated program buffer");
+ return -ENOMEM;
+ }
+
+ bzero(&info, sizeof(info));
+ info.xlated_prog_len = xlated_prog_len;
+ info.xlated_prog_insns = (__u64)(unsigned long)*buf;
+ if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ perror("second bpf_prog_get_info_by_fd failed");
+ goto out_free_buf;
+ }
+
+ return 0;
+
+out_free_buf:
+ free(*buf);
+ return -1;
+}
+
+static void print_insn(void *private_data, const char *fmt, ...)
+{
+ va_list args;
+
+ va_start(args, fmt);
+ vfprintf((FILE *)private_data, fmt, args);
+ va_end(args);
+}
+
+/* Disassemble instructions to a stream */
+static void print_xlated(FILE *out, struct bpf_insn *insn, __u32 len)
+{
+ const struct bpf_insn_cbs cbs = {
+ .cb_print = print_insn,
+ .cb_call = NULL,
+ .cb_imm = NULL,
+ .private_data = out,
+ };
+ bool double_insn = false;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (double_insn) {
+ double_insn = false;
+ continue;
+ }
+
+ double_insn = insn[i].code == (BPF_LD | BPF_IMM | BPF_DW);
+ print_bpf_insn(&cbs, insn + i, true);
+ }
+}
+
+/* We share code with the kernel BPF disassembler, which adds a '(FF) ' prefix
+ * to each instruction (FF stands for the instruction's `code` byte).
+ * This function removes the prefix in place from each line in `str`.
+ */
+static void remove_insn_prefix(char *str, int size)
+{
+ const int prefix_size = 5;
+
+ int write_pos = 0, read_pos = prefix_size;
+ int len = strlen(str);
+ char c;
+
+ size = min(size, len);
+
+ while (read_pos < size) {
+ c = str[read_pos++];
+ if (c == 0)
+ break;
+ str[write_pos++] = c;
+ if (c == '\n')
+ read_pos += prefix_size;
+ }
+ str[write_pos] = 0;
+}
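+
+/* For example, a disassembled line such as
+ *
+ *	"(61) r2 = *(u32 *)(r1 +104)\n"
+ *
+ * becomes
+ *
+ *	"r2 = *(u32 *)(r1 +104)\n"
+ *
+ * once the 5-character "(61) " prefix is stripped from every line.
+ */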
+
+struct prog_info {
+ char *prog_kind;
+ enum bpf_prog_type prog_type;
+ enum bpf_attach_type expected_attach_type;
+ struct bpf_insn *prog;
+ u32 prog_len;
+};
+
+static void match_program(struct btf *btf,
+ struct prog_info *pinfo,
+ char *pattern,
+ char *reg_map[][2],
+ bool skip_first_insn)
+{
+ struct bpf_insn *buf = NULL;
+ int err = 0, prog_fd = 0;
+ FILE *prog_out = NULL;
+ char *text = NULL;
+ __u32 cnt = 0;
+
+ text = calloc(MAX_PROG_TEXT_SZ, 1);
+ if (!text) {
+ PRINT_FAIL("Can't allocate %d bytes\n", MAX_PROG_TEXT_SZ);
+ goto out;
+ }
+
+ /* TODO: make the verifier log level configurable */
+ LIBBPF_OPTS(bpf_prog_load_opts, opts);
+ opts.log_buf = text;
+ opts.log_size = MAX_PROG_TEXT_SZ;
+ opts.log_level = 1 | 2 | 4;
+ opts.expected_attach_type = pinfo->expected_attach_type;
+
+ prog_fd = bpf_prog_load(pinfo->prog_type, NULL, "GPL",
+ pinfo->prog, pinfo->prog_len, &opts);
+ if (prog_fd < 0) {
+ PRINT_FAIL("Can't load program, errno %d (%s), verifier log:\n%s\n",
+ errno, strerror(errno), text);
+ goto out;
+ }
+
+ memset(text, 0, MAX_PROG_TEXT_SZ);
+
+ err = get_xlated_program(prog_fd, &buf, &cnt);
+ if (err) {
+ PRINT_FAIL("Can't load back BPF program\n");
+ goto out;
+ }
+
+ prog_out = fmemopen(text, MAX_PROG_TEXT_SZ - 1, "w");
+ if (!prog_out) {
+ PRINT_FAIL("Can't open memory stream\n");
+ goto out;
+ }
+ if (skip_first_insn)
+ print_xlated(prog_out, buf + 1, cnt - 1);
+ else
+ print_xlated(prog_out, buf, cnt);
+ fclose(prog_out);
+ remove_insn_prefix(text, MAX_PROG_TEXT_SZ);
+
+ ASSERT_TRUE(match_pattern(btf, pattern, text, reg_map),
+ pinfo->prog_kind);
+
+out:
+ if (prog_fd)
+ close(prog_fd);
+ free(buf);
+ free(text);
+}
+
+static void run_one_testcase(struct btf *btf, struct test_case *test)
+{
+ struct prog_info pinfo = {};
+ int bpf_sz;
+
+ if (!test__start_subtest(test->name))
+ return;
+
+ switch (test->field_sz) {
+ case 8:
+ bpf_sz = BPF_DW;
+ break;
+ case 4:
+ bpf_sz = BPF_W;
+ break;
+ case 2:
+ bpf_sz = BPF_H;
+ break;
+ case 1:
+ bpf_sz = BPF_B;
+ break;
+ default:
+ PRINT_FAIL("Unexpected field size: %d, want 8,4,2 or 1\n", test->field_sz);
+ return;
+ }
+
+ pinfo.prog_type = test->prog_type;
+ pinfo.expected_attach_type = test->expected_attach_type;
+
+ if (test->read) {
+ struct bpf_insn ldx_prog[] = {
+ BPF_LDX_MEM(bpf_sz, BPF_REG_2, BPF_REG_1, test->field_offset),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ char *reg_map[][2] = {
+ { "$ctx", "r1" },
+ { "$dst", "r2" },
+ {}
+ };
+
+ pinfo.prog_kind = "LDX";
+ pinfo.prog = ldx_prog;
+ pinfo.prog_len = ARRAY_SIZE(ldx_prog);
+ match_program(btf, &pinfo, test->read, reg_map, false);
+ }
+
+ if (test->write || test->write_st || test->write_stx) {
+ struct bpf_insn stx_prog[] = {
+ BPF_MOV64_IMM(BPF_REG_2, 0),
+ BPF_STX_MEM(bpf_sz, BPF_REG_1, BPF_REG_2, test->field_offset),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ char *stx_reg_map[][2] = {
+ { "$ctx", "r1" },
+ { "$src", "r2" },
+ {}
+ };
+ struct bpf_insn st_prog[] = {
+ BPF_ST_MEM(bpf_sz, BPF_REG_1, test->field_offset,
+ test->st_value.use ? test->st_value.value : 42),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ char *st_reg_map[][2] = {
+ { "$ctx", "r1" },
+ { "$src", "42" },
+ {}
+ };
+
+ if (test->write || test->write_stx) {
+ char *pattern = test->write_stx ? test->write_stx : test->write;
+
+ pinfo.prog_kind = "STX";
+ pinfo.prog = stx_prog;
+ pinfo.prog_len = ARRAY_SIZE(stx_prog);
+ match_program(btf, &pinfo, pattern, stx_reg_map, true);
+ }
+
+ if (test->write || test->write_st) {
+ char *pattern = test->write_st ? test->write_st : test->write;
+
+ pinfo.prog_kind = "ST";
+ pinfo.prog = st_prog;
+ pinfo.prog_len = ARRAY_SIZE(st_prog);
+ match_program(btf, &pinfo, pattern, st_reg_map, false);
+ }
+ }
+
+ test__end_subtest();
+}
+
+void test_ctx_rewrite(void)
+{
+ struct btf *btf;
+ int i;
+
+ field_regex = compile_regex("^([[:alpha:]_][[:alnum:]_]+)::([[:alpha:]_][[:alnum:]_]+)");
+ ident_regex = compile_regex("^[[:alpha:]_][[:alnum:]_]+");
+ if (!field_regex || !ident_regex)
+ return;
+
+ btf = btf__load_vmlinux_btf();
+ if (!btf) {
+ PRINT_FAIL("Can't load vmlinux BTF, errno %d (%s)\n", errno, strerror(errno));
+ goto out;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(test_cases); ++i)
+ run_one_testcase(btf, &test_cases[i]);
+
+out:
+ btf__free(btf);
+ free_regex(field_regex);
+ free_regex(ident_regex);
+}
#include "network_helpers.h"
#include "decap_sanity.skel.h"
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto fail; \
- })
-
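+/* The per-file SYS() wrappers removed above are replaced by shared helpers
+ * (their definitions are not shown in this diff). Judging from the call
+ * sites, the shared macros presumably look roughly like:
+ *
+ *	#define SYS(goto_label, fmt, ...)				\
+ *		({							\
+ *			char cmd[1024];					\
+ *			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
+ *			if (!ASSERT_OK(system(cmd), cmd))		\
+ *				goto goto_label;			\
+ *		})
+ *
+ * i.e. the old macro gained an explicit goto label as its first argument,
+ * while SYS_NOFAIL() runs the command and ignores its exit status, which is
+ * why it replaces the bare system() calls on cleanup paths below.
+ */
+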
#define NS_TEST "decap_sanity_ns"
#define IPV6_IFACE_ADDR "face::1"
#define UDP_TEST_PORT 7777
if (!ASSERT_OK_PTR(skel, "skel open_and_load"))
return;
- SYS("ip netns add %s", NS_TEST);
- SYS("ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
- SYS("ip -net %s link set dev lo up", NS_TEST);
+ SYS(fail, "ip netns add %s", NS_TEST);
+ SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
+ SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
nstoken = open_netns(NS_TEST);
if (!ASSERT_OK_PTR(nstoken, "open_netns"))
bpf_tc_hook_destroy(&qdisc_hook);
close_netns(nstoken);
}
- system("ip netns del " NS_TEST " &> /dev/null");
+ SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
decap_sanity__destroy(skel);
}
/* Copyright (c) 2022 Facebook */
#include <test_progs.h>
+#include <network_helpers.h>
#include "dynptr_fail.skel.h"
#include "dynptr_success.skel.h"
-static const char * const success_tests[] = {
- "test_read_write",
- "test_data_slice",
- "test_ringbuf",
+enum test_setup_type {
+ SETUP_SYSCALL_SLEEP,
+ SETUP_SKB_PROG,
};
-static void verify_success(const char *prog_name)
+static struct {
+ const char *prog_name;
+ enum test_setup_type type;
+} success_tests[] = {
+ {"test_read_write", SETUP_SYSCALL_SLEEP},
+ {"test_dynptr_data", SETUP_SYSCALL_SLEEP},
+ {"test_ringbuf", SETUP_SYSCALL_SLEEP},
+ {"test_skb_readonly", SETUP_SKB_PROG},
+ {"test_dynptr_skb_data", SETUP_SKB_PROG},
+};
+
+static void verify_success(const char *prog_name, enum test_setup_type setup_type)
{
struct dynptr_success *skel;
struct bpf_program *prog;
struct bpf_link *link;
+ int err;
skel = dynptr_success__open();
if (!ASSERT_OK_PTR(skel, "dynptr_success__open"))
skel->bss->pid = getpid();
- dynptr_success__load(skel);
- if (!ASSERT_OK_PTR(skel, "dynptr_success__load"))
- goto cleanup;
-
prog = bpf_object__find_program_by_name(skel->obj, prog_name);
if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
goto cleanup;
- link = bpf_program__attach(prog);
- if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+ bpf_program__set_autoload(prog, true);
+
+ err = dynptr_success__load(skel);
+ if (!ASSERT_OK(err, "dynptr_success__load"))
goto cleanup;
- usleep(1);
+ switch (setup_type) {
+ case SETUP_SYSCALL_SLEEP:
+ link = bpf_program__attach(prog);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+ goto cleanup;
- ASSERT_EQ(skel->bss->err, 0, "err");
+ usleep(1);
+
+ bpf_link__destroy(link);
+ break;
+ case SETUP_SKB_PROG:
+ {
+ int prog_fd;
+ char buf[64];
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 1,
+ );
- bpf_link__destroy(link);
+ prog_fd = bpf_program__fd(prog);
+ if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+ goto cleanup;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+ if (!ASSERT_OK(err, "test_run"))
+ goto cleanup;
+
+ break;
+ }
+ }
+
+ ASSERT_EQ(skel->bss->err, 0, "err");
cleanup:
dynptr_success__destroy(skel);
int i;
for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
- if (!test__start_subtest(success_tests[i]))
+ if (!test__start_subtest(success_tests[i].prog_name))
continue;
- verify_success(success_tests[i]);
+ verify_success(success_tests[i].prog_name, success_tests[i].type);
}
RUN_TESTS(dynptr_fail);
#include <net/if.h>
#include "empty_skb.skel.h"
-#define SYS(cmd) ({ \
- if (!ASSERT_OK(system(cmd), (cmd))) \
- goto out; \
-})
-
void test_empty_skb(void)
{
LIBBPF_OPTS(bpf_test_run_opts, tattr);
},
};
- SYS("ip netns add empty_skb");
+ SYS(out, "ip netns add empty_skb");
tok = open_netns("empty_skb");
- SYS("ip link add veth0 type veth peer veth1");
- SYS("ip link set dev veth0 up");
- SYS("ip link set dev veth1 up");
- SYS("ip addr add 10.0.0.1/8 dev veth0");
- SYS("ip addr add 10.0.0.2/8 dev veth1");
+ SYS(out, "ip link add veth0 type veth peer veth1");
+ SYS(out, "ip link set dev veth0 up");
+ SYS(out, "ip link set dev veth1 up");
+ SYS(out, "ip addr add 10.0.0.1/8 dev veth0");
+ SYS(out, "ip addr add 10.0.0.2/8 dev veth1");
veth_ifindex = if_nametoindex("veth0");
- SYS("ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
- SYS("ip link set ipip0 up");
- SYS("ip addr add 192.168.1.1/16 dev ipip0");
+ SYS(out, "ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
+ SYS(out, "ip link set ipip0 up");
+ SYS(out, "ip addr add 192.168.1.1/16 dev ipip0");
ipip_ifindex = if_nametoindex("ipip0");
bpf_obj = empty_skb__open_and_load();
empty_skb__destroy(bpf_obj);
if (tok)
close_netns(tok);
- system("ip netns del empty_skb");
+ SYS_NOFAIL("ip netns del empty_skb");
}
#include "network_helpers.h"
#include "fib_lookup.skel.h"
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto fail; \
- })
-
#define NS_TEST "fib_lookup_ns"
#define IPV6_IFACE_ADDR "face::face"
#define IPV6_NUD_FAILED_ADDR "face::1"
{
int err;
- SYS("ip link add veth1 type veth peer name veth2");
- SYS("ip link set dev veth1 up");
+ SYS(fail, "ip link add veth1 type veth peer name veth2");
+ SYS(fail, "ip link set dev veth1 up");
- SYS("ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
- SYS("ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
- SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
+ SYS(fail, "ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
+ SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
+ SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
- SYS("ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR);
- SYS("ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
- SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
+ SYS(fail, "ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR);
+ SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
+ SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
err = write_sysctl("/proc/sys/net/ipv4/conf/veth1/forwarding", "1");
if (!ASSERT_OK(err, "write_sysctl(net.ipv4.conf.veth1.forwarding)"))
return;
prog_fd = bpf_program__fd(skel->progs.fib_lookup);
- SYS("ip netns add %s", NS_TEST);
+ SYS(fail, "ip netns add %s", NS_TEST);
nstoken = open_netns(NS_TEST);
if (!ASSERT_OK_PTR(nstoken, "open_netns"))
fail:
if (nstoken)
close_netns(nstoken);
- system("ip netns del " NS_TEST " &> /dev/null");
+ SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
fib_lookup__destroy(skel);
}
.retval = BPF_OK,
},
{
+ .name = "ipv6-empty-flow-label",
+ .pkt.ipv6 = {
+ .eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
+ .iph.nexthdr = IPPROTO_TCP,
+ .iph.payload_len = __bpf_constant_htons(MAGIC_BYTES),
+ .iph.flow_lbl = { 0x00, 0x00, 0x00 },
+ .tcp.doff = 5,
+ .tcp.source = 80,
+ .tcp.dest = 8080,
+ },
+ .keys = {
+ .flags = BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL,
+ .nhoff = ETH_HLEN,
+ .thoff = ETH_HLEN + sizeof(struct ipv6hdr),
+ .addr_proto = ETH_P_IPV6,
+ .ip_proto = IPPROTO_TCP,
+ .n_proto = __bpf_constant_htons(ETH_P_IPV6),
+ .sport = 80,
+ .dport = 8080,
+ },
+ .flags = BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL,
+ .retval = BPF_OK,
+ },
+ {
.name = "ipip-encap",
.pkt.ipip = {
.eth.h_proto = __bpf_constant_htons(ETH_P_IP),
test_l4lb("test_l4lb.bpf.o");
if (test__start_subtest("l4lb_noinline"))
test_l4lb("test_l4lb_noinline.bpf.o");
+ if (test__start_subtest("l4lb_noinline_dynptr"))
+ test_l4lb("test_l4lb_noinline_dynptr.bpf.o");
}
if (test__start_subtest("bad_core_relo_trunc_partial"))
bad_core_relo(300, TRUNC_PARTIAL /* truncate original log a bit */);
if (test__start_subtest("bad_core_relo_trunc_full"))
- bad_core_relo(250, TRUNC_FULL /* truncate also libbpf's message patch */);
+ bad_core_relo(210, TRUNC_FULL /* truncate also libbpf's message patch */);
if (test__start_subtest("bad_core_relo_subprog"))
bad_core_relo_subprog();
if (test__start_subtest("missing_map"))
#include "map_kptr.skel.h"
#include "map_kptr_fail.skel.h"
+#include "rcu_tasks_trace_gp.skel.h"
static void test_map_kptr_success(bool test_run)
{
+ LIBBPF_OPTS(bpf_test_run_opts, lopts);
LIBBPF_OPTS(bpf_test_run_opts, opts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.repeat = 1,
);
+ int key = 0, ret, cpu;
struct map_kptr *skel;
- int key = 0, ret;
- char buf[16];
+ char buf[16], *pbuf;
skel = map_kptr__open_and_load();
if (!ASSERT_OK_PTR(skel, "map_kptr__open_and_load"))
return;
- ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref), &opts);
- ASSERT_OK(ret, "test_map_kptr_ref refcount");
- ASSERT_OK(opts.retval, "test_map_kptr_ref retval");
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref1), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref1 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref1 retval");
ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref2), &opts);
ASSERT_OK(ret, "test_map_kptr_ref2 refcount");
ASSERT_OK(opts.retval, "test_map_kptr_ref2 retval");
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref1), &lopts);
+ ASSERT_OK(ret, "test_ls_map_kptr_ref1 refcount");
+ ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref1 retval");
+
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref2), &lopts);
+ ASSERT_OK(ret, "test_ls_map_kptr_ref2 refcount");
+ ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref2 retval");
+
if (test_run)
goto exit;
+ cpu = libbpf_num_possible_cpus();
+ if (!ASSERT_GT(cpu, 0, "libbpf_num_possible_cpus"))
+ goto exit;
+
+ pbuf = calloc(cpu, sizeof(buf));
+ if (!ASSERT_OK_PTR(pbuf, "calloc(pbuf)"))
+ goto exit;
+
ret = bpf_map__update_elem(skel->maps.array_map,
&key, sizeof(key), buf, sizeof(buf), 0);
ASSERT_OK(ret, "array_map update");
- ret = bpf_map__update_elem(skel->maps.array_map,
- &key, sizeof(key), buf, sizeof(buf), 0);
- ASSERT_OK(ret, "array_map update2");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+ ret = bpf_map__update_elem(skel->maps.pcpu_array_map,
+ &key, sizeof(key), pbuf, cpu * sizeof(buf), 0);
+ ASSERT_OK(ret, "pcpu_array_map update");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
- ret = bpf_map__update_elem(skel->maps.hash_map,
- &key, sizeof(key), buf, sizeof(buf), 0);
- ASSERT_OK(ret, "hash_map update");
ret = bpf_map__delete_elem(skel->maps.hash_map, &key, sizeof(key), 0);
ASSERT_OK(ret, "hash_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+ ret = bpf_map__delete_elem(skel->maps.pcpu_hash_map, &key, sizeof(key), 0);
+ ASSERT_OK(ret, "pcpu_hash_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
- ret = bpf_map__update_elem(skel->maps.hash_malloc_map,
- &key, sizeof(key), buf, sizeof(buf), 0);
- ASSERT_OK(ret, "hash_malloc_map update");
ret = bpf_map__delete_elem(skel->maps.hash_malloc_map, &key, sizeof(key), 0);
ASSERT_OK(ret, "hash_malloc_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+ ret = bpf_map__delete_elem(skel->maps.pcpu_hash_malloc_map, &key, sizeof(key), 0);
+ ASSERT_OK(ret, "pcpu_hash_malloc_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
- ret = bpf_map__update_elem(skel->maps.lru_hash_map,
- &key, sizeof(key), buf, sizeof(buf), 0);
- ASSERT_OK(ret, "lru_hash_map update");
ret = bpf_map__delete_elem(skel->maps.lru_hash_map, &key, sizeof(key), 0);
ASSERT_OK(ret, "lru_hash_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+ ret = bpf_map__delete_elem(skel->maps.lru_pcpu_hash_map, &key, sizeof(key), 0);
+ ASSERT_OK(ret, "lru_pcpu_hash_map delete");
+ skel->data->ref--;
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+ ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+ ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref_del), &lopts);
+ ASSERT_OK(ret, "test_ls_map_kptr_ref_del delete");
+ skel->data->ref--;
+ ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref_del retval");
+
+ free(pbuf);
exit:
map_kptr__destroy(skel);
}
-void test_map_kptr(void)
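+/* Wait for one RCU-tasks-trace grace period: the do_call_rcu_tasks_trace
+ * program (from rcu_tasks_trace_gp.c, not shown here) is expected to kick
+ * off a grace period whose callback bumps gp_seq; we then spin with
+ * sched_yield() until the bump is observed.
+ */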
+static int kern_sync_rcu_tasks_trace(struct rcu_tasks_trace_gp *rcu)
{
- if (test__start_subtest("success")) {
+ long gp_seq = READ_ONCE(rcu->bss->gp_seq);
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ if (!ASSERT_OK(bpf_prog_test_run_opts(bpf_program__fd(rcu->progs.do_call_rcu_tasks_trace),
+ &opts), "do_call_rcu_tasks_trace"))
+ return -EFAULT;
+ if (!ASSERT_OK(opts.retval, "opts.retval == 0"))
+ return -EFAULT;
+ while (gp_seq == READ_ONCE(rcu->bss->gp_seq))
+ sched_yield();
+ return 0;
+}
+
+void serial_test_map_kptr(void)
+{
+ struct rcu_tasks_trace_gp *skel;
+
+ RUN_TESTS(map_kptr_fail);
+
+ skel = rcu_tasks_trace_gp__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "rcu_tasks_trace_gp__open_and_load"))
+ return;
+ if (!ASSERT_OK(rcu_tasks_trace_gp__attach(skel), "rcu_tasks_trace_gp__attach"))
+ goto end;
+
+ if (test__start_subtest("success-map")) {
+ test_map_kptr_success(true);
+
+ ASSERT_OK(kern_sync_rcu_tasks_trace(skel), "sync rcu_tasks_trace");
+ ASSERT_OK(kern_sync_rcu(), "sync rcu");
+ /* Observe refcount dropping to 1 on bpf_map_free_deferred */
test_map_kptr_success(false);
- /* Do test_run twice, so that we see refcount going back to 1
- * after we leave it in map from first iteration.
- */
+
+ ASSERT_OK(kern_sync_rcu_tasks_trace(skel), "sync rcu_tasks_trace");
+ ASSERT_OK(kern_sync_rcu(), "sync rcu");
+ /* Observe refcount dropping to 1 on synchronous delete elem */
test_map_kptr_success(true);
}
- RUN_TESTS(map_kptr_fail);
+end:
+ rcu_tasks_trace_gp__destroy(skel);
+ return;
}
#include "network_helpers.h"
#include "mptcp_sock.skel.h"
+#define NS_TEST "mptcp_ns"
+
#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16
#endif
static void test_base(void)
{
+ struct nstoken *nstoken = NULL;
int server_fd, cgroup_fd;
cgroup_fd = test__join_cgroup("/mptcp");
if (!ASSERT_GE(cgroup_fd, 0, "test__join_cgroup"))
return;
+ SYS(fail, "ip netns add %s", NS_TEST);
+ SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
+
+ nstoken = open_netns(NS_TEST);
+ if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+ goto fail;
+
/* without MPTCP */
server_fd = start_server(AF_INET, SOCK_STREAM, NULL, 0, 0);
if (!ASSERT_GE(server_fd, 0, "start_server"))
/* with MPTCP */
server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
if (!ASSERT_GE(server_fd, 0, "start_mptcp_server"))
- goto close_cgroup_fd;
+ goto fail;
ASSERT_OK(run_test(cgroup_fd, server_fd, true), "run_test mptcp");
close(server_fd);
-close_cgroup_fd:
+fail:
+ if (nstoken)
+ close_netns(nstoken);
+
+ SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+
close(cgroup_fd);
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "test_parse_tcp_hdr_opt.skel.h"
+#include "test_parse_tcp_hdr_opt_dynptr.skel.h"
+#include "test_tcp_hdr_options.h"
+
+struct test_pkt {
+ struct ipv6_packet pk6_v6;
+ u8 options[16];
+} __packed;
+
+struct test_pkt pkt = {
+ .pk6_v6.eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
+ .pk6_v6.iph.nexthdr = IPPROTO_TCP,
+ .pk6_v6.iph.payload_len = __bpf_constant_htons(MAGIC_BYTES),
+ .pk6_v6.tcp.urg_ptr = 123,
+ .pk6_v6.tcp.doff = 9, /* 16 bytes of options */
+
+ .options = {
+ TCPOPT_MSS, 4, 0x05, 0xB4, TCPOPT_NOP, TCPOPT_NOP,
+ 0, 6, 0xBB, 0xBB, 0xBB, 0xBB, TCPOPT_EOL
+ },
+};
+
+static void test_parse_opt(void)
+{
+ struct test_parse_tcp_hdr_opt *skel;
+ struct bpf_program *prog;
+ char buf[128];
+ int err;
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt,
+ .data_size_in = sizeof(pkt),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 3,
+ );
+
+ skel = test_parse_tcp_hdr_opt__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ pkt.options[6] = skel->rodata->tcp_hdr_opt_kind_tpr;
+ prog = skel->progs.xdp_ingress_v6;
+
+ err = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
+ ASSERT_OK(err, "ipv6 test_run");
+ ASSERT_EQ(topts.retval, XDP_PASS, "ipv6 test_run retval");
+ ASSERT_EQ(skel->bss->server_id, 0xBBBBBBBB, "server id");
+
+ test_parse_tcp_hdr_opt__destroy(skel);
+}
+
+static void test_parse_opt_dynptr(void)
+{
+ struct test_parse_tcp_hdr_opt_dynptr *skel;
+ struct bpf_program *prog;
+ char buf[128];
+ int err;
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt,
+ .data_size_in = sizeof(pkt),
+ .data_out = buf,
+ .data_size_out = sizeof(buf),
+ .repeat = 3,
+ );
+
+ skel = test_parse_tcp_hdr_opt_dynptr__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ pkt.options[6] = skel->rodata->tcp_hdr_opt_kind_tpr;
+ prog = skel->progs.xdp_ingress_v6;
+
+ err = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
+ ASSERT_OK(err, "ipv6 test_run");
+ ASSERT_EQ(topts.retval, XDP_PASS, "ipv6 test_run retval");
+ ASSERT_EQ(skel->bss->server_id, 0xBBBBBBBB, "server id");
+
+ test_parse_tcp_hdr_opt_dynptr__destroy(skel);
+}
+
+void test_parse_tcp_hdr_opt(void)
+{
+ if (test__start_subtest("parse_tcp_hdr_opt"))
+ test_parse_opt();
+ if (test__start_subtest("parse_tcp_hdr_opt_dynptr"))
+ test_parse_opt_dynptr();
+}
bpf_program__set_autoload(skel->progs.get_cgroup_id, true);
bpf_program__set_autoload(skel->progs.task_succ, true);
- bpf_program__set_autoload(skel->progs.no_lock, true);
bpf_program__set_autoload(skel->progs.two_regions, true);
bpf_program__set_autoload(skel->progs.non_sleepable_1, true);
bpf_program__set_autoload(skel->progs.non_sleepable_2, true);
+ bpf_program__set_autoload(skel->progs.task_trusted_non_rcuptr, true);
err = rcu_read_lock__load(skel);
if (!ASSERT_OK(err, "skel_load"))
goto out;
static const char * const inproper_region_tests[] = {
"miss_lock",
+ "no_lock",
"miss_unlock",
"non_sleepable_rcu_mismatch",
"inproper_sleepable_helper",
}
static const char * const rcuptr_misuse_tests[] = {
- "task_untrusted_non_rcuptr",
"task_untrusted_rcuptr",
"cross_rcu_region",
};
void test_rcu_read_lock(void)
{
- struct btf *vmlinux_btf;
int cgroup_fd;
- vmlinux_btf = btf__load_vmlinux_btf();
- if (!ASSERT_OK_PTR(vmlinux_btf, "could not load vmlinux BTF"))
- return;
- if (btf__find_by_name_kind(vmlinux_btf, "rcu", BTF_KIND_TYPE_TAG) < 0) {
- test__skip();
- goto out;
- }
-
cgroup_fd = test__join_cgroup("/rcu_read_lock");
if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /rcu_read_lock"))
goto out;
if (test__start_subtest("negative_tests_rcuptr_misuse"))
test_rcuptr_misuse();
close(cgroup_fd);
-out:
- btf__free(vmlinux_btf);
+out:;
}
#include <string.h>
#include <sys/select.h>
#include <unistd.h>
+#include <linux/vm_sockets.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
*len = sizeof(*addr6);
}
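+
+/* VMADDR_CID_LOCAL targets host-local (loopback) vsock communication; the
+ * vsock loopback transport presumably has to be available for these tests
+ * to connect.
+ */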
+static void init_addr_loopback_vsock(struct sockaddr_storage *ss, socklen_t *len)
+{
+ struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss));
+
+ addr->svm_family = AF_VSOCK;
+ addr->svm_port = VMADDR_PORT_ANY;
+ addr->svm_cid = VMADDR_CID_LOCAL;
+ *len = sizeof(*addr);
+}
+
static void init_addr_loopback(int family, struct sockaddr_storage *ss,
socklen_t *len)
{
case AF_INET6:
init_addr_loopback6(ss, len);
return;
+ case AF_VSOCK:
+ init_addr_loopback_vsock(ss, len);
+ return;
default:
FAIL("unsupported address family %d", family);
}
return "IPv6";
case AF_UNIX:
return "Unix";
+ case AF_VSOCK:
+ return "VSOCK";
default:
return "unknown";
}
unix_skb_redir_to_connected(skel, map, sotype);
}
+/* Returns two connected loopback vsock sockets */
+static int vsock_socketpair_connectible(int sotype, int *v0, int *v1)
+{
+ struct sockaddr_storage addr;
+ socklen_t len = sizeof(addr);
+ int s, p, c;
+
+ s = socket_loopback(AF_VSOCK, sotype);
+ if (s < 0)
+ return -1;
+
+ c = xsocket(AF_VSOCK, sotype | SOCK_NONBLOCK, 0);
+ if (c == -1)
+ goto close_srv;
+
+ if (getsockname(s, sockaddr(&addr), &len) < 0)
+ goto close_cli;
+
+ if (connect(c, sockaddr(&addr), len) < 0 && errno != EINPROGRESS) {
+ FAIL_ERRNO("connect");
+ goto close_cli;
+ }
+
+ len = sizeof(addr);
+ p = accept_timeout(s, sockaddr(&addr), &len, IO_TIMEOUT_SEC);
+ if (p < 0)
+ goto close_cli;
+
+ *v0 = p;
+ *v1 = c;
+
+ return 0;
+
+close_cli:
+ close(c);
+close_srv:
+ close(s);
+
+ return -1;
+}
+
+static void vsock_unix_redir_connectible(int sock_mapfd, int verd_mapfd,
+ enum redir_mode mode, int sotype)
+{
+ const char *log_prefix = redir_mode_str(mode);
+ char a = 'a', b = 'b';
+ int u0, u1, v0, v1;
+ int sfd[2];
+ unsigned int pass;
+ int err, n;
+ u32 key;
+
+ zero_verdict_count(verd_mapfd);
+
+ if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0, sfd))
+ return;
+
+ u0 = sfd[0];
+ u1 = sfd[1];
+
+ err = vsock_socketpair_connectible(sotype, &v0, &v1);
+ if (err) {
+ FAIL("vsock_socketpair_connectible() failed");
+ goto close_uds;
+ }
+
+ err = add_to_sockmap(sock_mapfd, u0, v0);
+ if (err) {
+ FAIL("add_to_sockmap failed");
+ goto close_vsock;
+ }
+
+ n = write(v1, &a, sizeof(a));
+ if (n < 0)
+ FAIL_ERRNO("%s: write", log_prefix);
+ if (n == 0)
+ FAIL("%s: incomplete write", log_prefix);
+ if (n < 1)
+ goto out;
+
+ n = recv(mode == REDIR_INGRESS ? u0 : u1, &b, sizeof(b), MSG_DONTWAIT);
+ if (n < 0)
+ FAIL("%s: recv() err, errno=%d", log_prefix, errno);
+ if (n == 0)
+ FAIL("%s: incomplete recv", log_prefix);
+ if (b != a)
+ FAIL("%s: vsock socket map failed, %c != %c", log_prefix, a, b);
+
+ key = SK_PASS;
+ err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
+ if (err)
+ goto out;
+ if (pass != 1)
+ FAIL("%s: want pass count 1, have %d", log_prefix, pass);
+out:
+ key = 0;
+ bpf_map_delete_elem(sock_mapfd, &key);
+ key = 1;
+ bpf_map_delete_elem(sock_mapfd, &key);
+
+close_vsock:
+ close(v0);
+ close(v1);
+
+close_uds:
+ close(u0);
+ close(u1);
+}
+
+static void vsock_unix_skb_redir_connectible(struct test_sockmap_listen *skel,
+ struct bpf_map *inner_map,
+ int sotype)
+{
+ int verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+ int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+ int sock_map = bpf_map__fd(inner_map);
+ int err;
+
+ err = xbpf_prog_attach(verdict, sock_map, BPF_SK_SKB_VERDICT, 0);
+ if (err)
+ return;
+
+ skel->bss->test_ingress = false;
+ vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_EGRESS, sotype);
+ skel->bss->test_ingress = true;
+ vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_INGRESS, sotype);
+
+ xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
+}
+
+static void test_vsock_redir(struct test_sockmap_listen *skel, struct bpf_map *map)
+{
+ const char *family_name, *map_name;
+ char s[MAX_TEST_NAME];
+
+ family_name = family_str(AF_VSOCK);
+ map_name = map_type_str(map);
+ snprintf(s, sizeof(s), "%s %s %s", map_name, family_name, __func__);
+ if (!test__start_subtest(s))
+ return;
+
+ vsock_unix_skb_redir_connectible(skel, map, SOCK_STREAM);
+ vsock_unix_skb_redir_connectible(skel, map, SOCK_SEQPACKET);
+}
+
static void test_reuseport(struct test_sockmap_listen *skel,
struct bpf_map *map, int family, int sotype)
{
run_tests(skel, skel->maps.sock_map, AF_INET6);
test_unix_redir(skel, skel->maps.sock_map, SOCK_DGRAM);
test_unix_redir(skel, skel->maps.sock_map, SOCK_STREAM);
+ test_vsock_redir(skel, skel->maps.sock_map);
skel->bss->test_sockmap = false;
run_tests(skel, skel->maps.sock_hash, AF_INET);
run_tests(skel, skel->maps.sock_hash, AF_INET6);
test_unix_redir(skel, skel->maps.sock_hash, SOCK_DGRAM);
test_unix_redir(skel, skel->maps.sock_hash, SOCK_STREAM);
+ test_vsock_redir(skel, skel->maps.sock_hash);
test_sockmap_listen__destroy(skel);
}
return 0;
}
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto fail; \
- })
-
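/* The open-coded SYS() copies removed above are superseded by a shared helper
 * that takes the failure label as its first argument. A minimal sketch of the
 * assumed shared definitions (e.g. in the selftests' common test_progs.h),
 * shown only for reference:
 *
 *	#define SYS(goto_label, fmt, ...)				\
 *		({							\
 *			char cmd[1024];					\
 *			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
 *			if (!ASSERT_OK(system(cmd), cmd))		\
 *				goto goto_label;			\
 *		})
 *
 *	#define SYS_NOFAIL(fmt, ...)					\
 *		({							\
 *			char cmd[1024];					\
 *			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
 *			system(cmd);					\
 *		})
 */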
static int netns_setup_links_and_routes(struct netns_setup_result *result)
{
struct nstoken *nstoken = NULL;
char veth_src_fwd_addr[IFADDR_STR_LEN+1] = {};
- SYS("ip link add veth_src type veth peer name veth_src_fwd");
- SYS("ip link add veth_dst type veth peer name veth_dst_fwd");
+ SYS(fail, "ip link add veth_src type veth peer name veth_src_fwd");
+ SYS(fail, "ip link add veth_dst type veth peer name veth_dst_fwd");
- SYS("ip link set veth_dst_fwd address " MAC_DST_FWD);
- SYS("ip link set veth_dst address " MAC_DST);
+ SYS(fail, "ip link set veth_dst_fwd address " MAC_DST_FWD);
+ SYS(fail, "ip link set veth_dst address " MAC_DST);
if (get_ifaddr("veth_src_fwd", veth_src_fwd_addr))
goto fail;
if (!ASSERT_GT(result->ifindex_veth_dst_fwd, 0, "ifindex_veth_dst_fwd"))
goto fail;
- SYS("ip link set veth_src netns " NS_SRC);
- SYS("ip link set veth_src_fwd netns " NS_FWD);
- SYS("ip link set veth_dst_fwd netns " NS_FWD);
- SYS("ip link set veth_dst netns " NS_DST);
+ SYS(fail, "ip link set veth_src netns " NS_SRC);
+ SYS(fail, "ip link set veth_src_fwd netns " NS_FWD);
+ SYS(fail, "ip link set veth_dst_fwd netns " NS_FWD);
+ SYS(fail, "ip link set veth_dst netns " NS_DST);
/** setup in 'src' namespace */
nstoken = open_netns(NS_SRC);
if (!ASSERT_OK_PTR(nstoken, "setns src"))
goto fail;
- SYS("ip addr add " IP4_SRC "/32 dev veth_src");
- SYS("ip addr add " IP6_SRC "/128 dev veth_src nodad");
- SYS("ip link set dev veth_src up");
+ SYS(fail, "ip addr add " IP4_SRC "/32 dev veth_src");
+ SYS(fail, "ip addr add " IP6_SRC "/128 dev veth_src nodad");
+ SYS(fail, "ip link set dev veth_src up");
- SYS("ip route add " IP4_DST "/32 dev veth_src scope global");
- SYS("ip route add " IP4_NET "/16 dev veth_src scope global");
- SYS("ip route add " IP6_DST "/128 dev veth_src scope global");
+ SYS(fail, "ip route add " IP4_DST "/32 dev veth_src scope global");
+ SYS(fail, "ip route add " IP4_NET "/16 dev veth_src scope global");
+ SYS(fail, "ip route add " IP6_DST "/128 dev veth_src scope global");
- SYS("ip neigh add " IP4_DST " dev veth_src lladdr %s",
+ SYS(fail, "ip neigh add " IP4_DST " dev veth_src lladdr %s",
veth_src_fwd_addr);
- SYS("ip neigh add " IP6_DST " dev veth_src lladdr %s",
+ SYS(fail, "ip neigh add " IP6_DST " dev veth_src lladdr %s",
veth_src_fwd_addr);
close_netns(nstoken);
* needs v4 one in order to start ARP probing. IP4_NET route is added
* to the endpoints so that the ARP processing will reply.
*/
- SYS("ip addr add " IP4_SLL "/32 dev veth_src_fwd");
- SYS("ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
- SYS("ip link set dev veth_src_fwd up");
- SYS("ip link set dev veth_dst_fwd up");
+ SYS(fail, "ip addr add " IP4_SLL "/32 dev veth_src_fwd");
+ SYS(fail, "ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
+ SYS(fail, "ip link set dev veth_src_fwd up");
+ SYS(fail, "ip link set dev veth_dst_fwd up");
- SYS("ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
- SYS("ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
- SYS("ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
- SYS("ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
+ SYS(fail, "ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
+ SYS(fail, "ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
+ SYS(fail, "ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
+ SYS(fail, "ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
close_netns(nstoken);
if (!ASSERT_OK_PTR(nstoken, "setns dst"))
goto fail;
- SYS("ip addr add " IP4_DST "/32 dev veth_dst");
- SYS("ip addr add " IP6_DST "/128 dev veth_dst nodad");
- SYS("ip link set dev veth_dst up");
+ SYS(fail, "ip addr add " IP4_DST "/32 dev veth_dst");
+ SYS(fail, "ip addr add " IP6_DST "/128 dev veth_dst nodad");
+ SYS(fail, "ip link set dev veth_dst up");
- SYS("ip route add " IP4_SRC "/32 dev veth_dst scope global");
- SYS("ip route add " IP4_NET "/16 dev veth_dst scope global");
- SYS("ip route add " IP6_SRC "/128 dev veth_dst scope global");
+ SYS(fail, "ip route add " IP4_SRC "/32 dev veth_dst scope global");
+ SYS(fail, "ip route add " IP4_NET "/16 dev veth_dst scope global");
+ SYS(fail, "ip route add " IP6_SRC "/128 dev veth_dst scope global");
- SYS("ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
- SYS("ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ SYS(fail, "ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ SYS(fail, "ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);
close_netns(nstoken);
static int test_ping(int family, const char *addr)
{
- SYS("ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
+ SYS(fail, "ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
return 0;
fail:
return -1;
if (!ASSERT_OK(err, "ioctl TUNSETIFF"))
goto fail;
- SYS("ip link set dev %s up", name);
+ SYS(fail, "ip link set dev %s up", name);
return fd;
fail:
XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, skel->progs.tc_chk, 0);
/* Setup route and neigh tables */
- SYS("ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
- SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");
+ SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
+ SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");
- SYS("ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
- SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");
+ SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
+ SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");
- SYS("ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
- SYS("ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
+ SYS(fail, "ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
+ SYS(fail, "ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
" dev tun_src scope global");
- SYS("ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
- SYS("ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
- SYS("ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
+ SYS(fail, "ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
+ SYS(fail, "ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
+ SYS(fail, "ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
" dev tun_src scope global");
- SYS("ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");
+ SYS(fail, "ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");
- SYS("ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
- SYS("ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ SYS(fail, "ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+ SYS(fail, "ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
if (!ASSERT_OK(set_forwarding(false), "disable forwarding"))
goto fail;
u64 bin_true_sample;
char cmd[256];
- int err, duration = 0;
+ int err, duration = 0, fresh_digest_idx = 0;
struct ima *skel = NULL;
skel = ima__open_and_load();
/*
* Test #3
* - Goal: confirm that bpf_ima_inode_hash() returns a non-fresh digest
- * - Expected result: 2 samples (/bin/true: non-fresh, fresh)
+ * - Expected result:
+ * 1 sample (/bin/true: fresh) if commit 62622dab0a28 applied
+ * 2 samples (/bin/true: non-fresh, fresh) if commit 62622dab0a28 is
+ * not applied
+ *
+ * If commit 62622dab0a28 ("ima: return IMA digest value only when
+ * IMA_COLLECTED flag is set") is applied, bpf_ima_inode_hash() refuses
+ * to give a non-fresh digest, hence the correct result is 1 instead of
+ * 2.
*/
test_init(skel->bss);
goto close_clean;
err = ring_buffer__consume(ringbuf);
- ASSERT_EQ(err, 2, "num_samples_or_err");
- ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
- ASSERT_NEQ(ima_hash_from_bpf[1], 0, "ima_hash");
- ASSERT_EQ(ima_hash_from_bpf[0], bin_true_sample, "sample_equal_or_err");
+ ASSERT_GE(err, 1, "num_samples_or_err");
+ if (err == 2) {
+ ASSERT_NEQ(ima_hash_from_bpf[0], 0, "ima_hash");
+ ASSERT_EQ(ima_hash_from_bpf[0], bin_true_sample,
+ "sample_equal_or_err");
+ fresh_digest_idx = 1;
+ }
+
+ ASSERT_NEQ(ima_hash_from_bpf[fresh_digest_idx], 0, "ima_hash");
/* IMA refreshed the digest. */
- ASSERT_NEQ(ima_hash_from_bpf[1], bin_true_sample,
- "sample_different_or_err");
+	ASSERT_NEQ(ima_hash_from_bpf[fresh_digest_idx], bin_true_sample,
+		   "sample_different_or_err");
/*
* Test #4
#define PING_ARGS "-i 0.01 -c 3 -w 10 -q"
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto fail; \
- })
-
-#define SYS_NOFAIL(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- system(cmd); \
- })
-
static int config_device(void)
{
- SYS("ip netns add at_ns0");
- SYS("ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
- SYS("ip link set veth0 netns at_ns0");
- SYS("ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
- SYS("ip link set dev veth1 up mtu 1500");
- SYS("ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
- SYS("ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");
+ SYS(fail, "ip netns add at_ns0");
+ SYS(fail, "ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
+ SYS(fail, "ip link set veth0 netns at_ns0");
+ SYS(fail, "ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
+ SYS(fail, "ip link set dev veth1 up mtu 1500");
+ SYS(fail, "ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
+ SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");
return 0;
fail:
static int add_vxlan_tunnel(void)
{
/* at_ns0 namespace */
- SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
+ SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
VXLAN_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
+ SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
+ SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
+ SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
IP4_ADDR_TUNL_DEV1, MAC_TUNL_DEV1, VXLAN_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
+ SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
IP4_ADDR2_VETH1, MAC_VETH1);
/* root namespace */
- SYS("ip link add dev %s type vxlan external gbp dstport 4789",
+ SYS(fail, "ip link add dev %s type vxlan external gbp dstport 4789",
VXLAN_TUNL_DEV1);
- SYS("ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
- SYS("ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
- SYS("ip neigh add %s lladdr %s dev %s",
+ SYS(fail, "ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
+ SYS(fail, "ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+ SYS(fail, "ip neigh add %s lladdr %s dev %s",
IP4_ADDR_TUNL_DEV0, MAC_TUNL_DEV0, VXLAN_TUNL_DEV1);
return 0;
static int add_ip6vxlan_tunnel(void)
{
- SYS("ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
+ SYS(fail, "ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
IP6_ADDR_VETH0);
- SYS("ip netns exec at_ns0 ip link set dev veth0 up");
- SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
- SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
- SYS("ip link set dev veth1 up");
+ SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up");
+ SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
+ SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
+ SYS(fail, "ip link set dev veth1 up");
/* at_ns0 namespace */
- SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
+ SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
IP6VXLAN_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
+ SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
IP6VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
- SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
+ SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
IP6VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);
/* root namespace */
- SYS("ip link add dev %s type vxlan external dstport 4789",
+ SYS(fail, "ip link add dev %s type vxlan external dstport 4789",
IP6VXLAN_TUNL_DEV1);
- SYS("ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
- SYS("ip link set dev %s address %s up",
+ SYS(fail, "ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+ SYS(fail, "ip link set dev %s address %s up",
IP6VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
return 0;
static int test_ping(int family, const char *addr)
{
- SYS("%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
+ SYS(fail, "%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
return 0;
fail:
return -1;
/* check that timer_cb2() was executed twice */
ASSERT_EQ(timer_skel->bss->bss_data, 10, "bss_data");
+ /* check that timer_cb3() was executed twice */
+ ASSERT_EQ(timer_skel->bss->abs_data, 12, "abs_data");
+
/* check that there were no errors in timer execution */
ASSERT_EQ(timer_skel->bss->err, 0, "err");
/* Kick the kernel, causing it to drain the ring buffer and then wake
* up the test thread waiting on epoll.
*/
- syscall(__NR_getrlimit);
+ syscall(__NR_prlimit64);
return NULL;
}
#define IFINDEX_LO 1
#define XDP_FLAGS_REPLACE (1U << 4)
-void serial_test_xdp_attach(void)
+static void test_xdp_attach(const char *file)
{
__u32 duration = 0, id1, id2, id0 = 0, len;
struct bpf_object *obj1, *obj2, *obj3;
- const char *file = "./test_xdp.bpf.o";
struct bpf_prog_info info = {};
int err, fd1, fd2, fd3;
LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
out_1:
bpf_object__close(obj1);
}
+
+void serial_test_xdp_attach(void)
+{
+ if (test__start_subtest("xdp_attach"))
+ test_xdp_attach("./test_xdp.bpf.o");
+ if (test__start_subtest("xdp_attach_dynptr"))
+ test_xdp_attach("./test_xdp_dynptr.bpf.o");
+}
static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
int bond_both_attach)
{
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- return -1; \
- })
-
- SYS("ip netns add ns_dst");
- SYS("ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
- SYS("ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");
-
- SYS("ip link add bond1 type bond mode %s xmit_hash_policy %s",
+ SYS(fail, "ip netns add ns_dst");
+ SYS(fail, "ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
+ SYS(fail, "ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");
+
+ SYS(fail, "ip link add bond1 type bond mode %s xmit_hash_policy %s",
mode_names[mode], xmit_policy_names[xmit_policy]);
- SYS("ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
- SYS("ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
+ SYS(fail, "ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
+ SYS(fail, "ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
mode_names[mode], xmit_policy_names[xmit_policy]);
- SYS("ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");
+ SYS(fail, "ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");
- SYS("ip link set veth1_1 master bond1");
+ SYS(fail, "ip link set veth1_1 master bond1");
if (bond_both_attach == BOND_BOTH_AND_ATTACH) {
- SYS("ip link set veth1_2 master bond1");
+ SYS(fail, "ip link set veth1_2 master bond1");
} else {
- SYS("ip link set veth1_2 up addrgenmode none");
+ SYS(fail, "ip link set veth1_2 up addrgenmode none");
if (xdp_attach(skeletons, skeletons->xdp_dummy->progs.xdp_dummy_prog, "veth1_2"))
return -1;
}
- SYS("ip -netns ns_dst link set veth2_1 master bond2");
+ SYS(fail, "ip -netns ns_dst link set veth2_1 master bond2");
if (bond_both_attach == BOND_BOTH_AND_ATTACH)
- SYS("ip -netns ns_dst link set veth2_2 master bond2");
+ SYS(fail, "ip -netns ns_dst link set veth2_2 master bond2");
else
- SYS("ip -netns ns_dst link set veth2_2 up addrgenmode none");
+ SYS(fail, "ip -netns ns_dst link set veth2_2 up addrgenmode none");
	/* Load a dummy program on the sending side as well, since a veth peer
	 * needs to have an XDP program loaded too.
}
return 0;
-
-#undef SYS
+fail:
+ return -1;
}
static void bonding_cleanup(struct skeletons *skeletons)
#include <uapi/linux/netdev.h>
#include "test_xdp_do_redirect.skel.h"
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto out; \
- })
-
struct udp_packet {
struct ethhdr eth;
struct ipv6hdr iph;
* iface and NUM_PKTS-2 in the TC hook. We match the packets on the UDP
* payload.
*/
- SYS("ip netns add testns");
+ SYS(out, "ip netns add testns");
nstoken = open_netns("testns");
if (!ASSERT_OK_PTR(nstoken, "setns"))
goto out;
- SYS("ip link add veth_src type veth peer name veth_dst");
- SYS("ip link set dev veth_src address 00:11:22:33:44:55");
- SYS("ip link set dev veth_dst address 66:77:88:99:aa:bb");
- SYS("ip link set dev veth_src up");
- SYS("ip link set dev veth_dst up");
- SYS("ip addr add dev veth_src fc00::1/64");
- SYS("ip addr add dev veth_dst fc00::2/64");
- SYS("ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
+ SYS(out, "ip link add veth_src type veth peer name veth_dst");
+ SYS(out, "ip link set dev veth_src address 00:11:22:33:44:55");
+ SYS(out, "ip link set dev veth_dst address 66:77:88:99:aa:bb");
+ SYS(out, "ip link set dev veth_src up");
+ SYS(out, "ip link set dev veth_dst up");
+ SYS(out, "ip addr add dev veth_src fc00::1/64");
+ SYS(out, "ip addr add dev veth_dst fc00::2/64");
+ SYS(out, "ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
/* We enable forwarding in the test namespace because that will cause
* the packets that go through the kernel stack (with XDP_PASS) to be
* code didn't have this, so we keep the test behaviour to make sure the
* bug doesn't resurface.
*/
- SYS("sysctl -qw net.ipv6.conf.all.forwarding=1");
+ SYS(out, "sysctl -qw net.ipv6.conf.all.forwarding=1");
ifindex_src = if_nametoindex("veth_src");
ifindex_dst = if_nametoindex("veth_dst");
out:
if (nstoken)
close_netns(nstoken);
- system("ip netns del testns");
+ SYS_NOFAIL("ip netns del testns");
test_xdp_do_redirect__destroy(skel);
}
#define PREFIX_LEN "8"
#define FAMILY AF_INET
-#define SYS(cmd) ({ \
- if (!ASSERT_OK(system(cmd), (cmd))) \
- goto out; \
-})
-
struct xsk {
void *umem_area;
struct xsk_umem *umem;
/* Setup new networking namespace, with a veth pair. */
- SYS("ip netns add xdp_metadata");
+ SYS(out, "ip netns add xdp_metadata");
tok = open_netns("xdp_metadata");
- SYS("ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
+ SYS(out, "ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
" type veth peer " RX_NAME " numtxqueues 1 numrxqueues 1");
- SYS("ip link set dev " TX_NAME " address 00:00:00:00:00:01");
- SYS("ip link set dev " RX_NAME " address 00:00:00:00:00:02");
- SYS("ip link set dev " TX_NAME " up");
- SYS("ip link set dev " RX_NAME " up");
- SYS("ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
- SYS("ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);
+ SYS(out, "ip link set dev " TX_NAME " address 00:00:00:00:00:01");
+ SYS(out, "ip link set dev " RX_NAME " address 00:00:00:00:00:02");
+ SYS(out, "ip link set dev " TX_NAME " up");
+ SYS(out, "ip link set dev " RX_NAME " up");
+ SYS(out, "ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
+ SYS(out, "ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);
rx_ifindex = if_nametoindex(RX_NAME);
tx_ifindex = if_nametoindex(TX_NAME);
xdp_metadata__destroy(bpf_obj);
if (tok)
close_netns(tok);
- system("ip netns del xdp_metadata");
+ SYS_NOFAIL("ip netns del xdp_metadata");
}
#define CMD_OUT_BUF_SIZE 1023
-#define SYS(cmd) ({ \
- if (!ASSERT_OK(system(cmd), (cmd))) \
- goto out; \
-})
-
#define SYS_OUT(cmd, ...) ({ \
char buf[1024]; \
snprintf(buf, sizeof(buf), (cmd), ##__VA_ARGS__); \
char buf[CMD_OUT_BUF_SIZE];
size_t size;
- SYS("ip netns add synproxy");
+ SYS(out, "ip netns add synproxy");
- SYS("ip link add tmp0 type veth peer name tmp1");
- SYS("ip link set tmp1 netns synproxy");
- SYS("ip link set tmp0 up");
- SYS("ip addr replace 198.18.0.1/24 dev tmp0");
+ SYS(out, "ip link add tmp0 type veth peer name tmp1");
+ SYS(out, "ip link set tmp1 netns synproxy");
+ SYS(out, "ip link set tmp0 up");
+ SYS(out, "ip addr replace 198.18.0.1/24 dev tmp0");
/* When checksum offload is enabled, the XDP program sees wrong
* checksums and drops packets.
*/
- SYS("ethtool -K tmp0 tx off");
+ SYS(out, "ethtool -K tmp0 tx off");
if (xdp)
/* Workaround required for veth. */
- SYS("ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");
+ SYS(out, "ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");
ns = open_netns("synproxy");
if (!ASSERT_OK_PTR(ns, "setns"))
goto out;
- SYS("ip link set lo up");
- SYS("ip link set tmp1 up");
- SYS("ip addr replace 198.18.0.2/24 dev tmp1");
- SYS("sysctl -w net.ipv4.tcp_syncookies=2");
- SYS("sysctl -w net.ipv4.tcp_timestamps=1");
- SYS("sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
- SYS("iptables-legacy -t raw -I PREROUTING \
+ SYS(out, "ip link set lo up");
+ SYS(out, "ip link set tmp1 up");
+ SYS(out, "ip addr replace 198.18.0.2/24 dev tmp1");
+ SYS(out, "sysctl -w net.ipv4.tcp_syncookies=2");
+ SYS(out, "sysctl -w net.ipv4.tcp_timestamps=1");
+ SYS(out, "sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
+ SYS(out, "iptables-legacy -t raw -I PREROUTING \
-i tmp1 -p tcp -m tcp --syn --dport 8080 -j CT --notrack");
- SYS("iptables-legacy -t filter -A INPUT \
+ SYS(out, "iptables-legacy -t filter -A INPUT \
-i tmp1 -p tcp -m tcp --dport 8080 -m state --state INVALID,UNTRACKED \
-j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460");
- SYS("iptables-legacy -t filter -A INPUT \
+ SYS(out, "iptables-legacy -t filter -A INPUT \
-i tmp1 -m state --state INVALID -j DROP");
ctrl_file = SYS_OUT("./xdp_synproxy --iface tmp1 --ports 8080 \
if (ns)
close_netns(ns);
- system("ip link del tmp0");
- system("ip netns del synproxy");
+ SYS_NOFAIL("ip link del tmp0");
+ SYS_NOFAIL("ip netns del synproxy");
}
void test_xdp_synproxy(void)
"proto esp aead 'rfc4106(gcm(aes))' " \
"0xe4d8f4b4da1df18a3510b3781496daa82488b713 128 mode tunnel "
-#define SYS(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- if (!ASSERT_OK(system(cmd), cmd)) \
- goto fail; \
- })
-
-#define SYS_NOFAIL(fmt, ...) \
- ({ \
- char cmd[1024]; \
- snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
- system(cmd); \
- })
-
static int attach_tc_prog(struct bpf_tc_hook *hook, int igr_fd, int egr_fd)
{
LIBBPF_OPTS(bpf_tc_opts, opts1, .handle = 1, .priority = 1,
static int config_underlay(void)
{
- SYS("ip netns add " NS0);
- SYS("ip netns add " NS1);
- SYS("ip netns add " NS2);
+ SYS(fail, "ip netns add " NS0);
+ SYS(fail, "ip netns add " NS1);
+ SYS(fail, "ip netns add " NS2);
/* NS0 <-> NS1 [veth01 <-> veth10] */
- SYS("ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
- SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
- SYS("ip -net " NS0 " link set dev veth01 up");
- SYS("ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
- SYS("ip -net " NS1 " link set dev veth10 up");
+ SYS(fail, "ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
+ SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
+ SYS(fail, "ip -net " NS0 " link set dev veth01 up");
+ SYS(fail, "ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
+ SYS(fail, "ip -net " NS1 " link set dev veth10 up");
/* NS0 <-> NS2 [veth02 <-> veth20] */
- SYS("ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
- SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
- SYS("ip -net " NS0 " link set dev veth02 up");
- SYS("ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
- SYS("ip -net " NS2 " link set dev veth20 up");
+ SYS(fail, "ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
+ SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
+ SYS(fail, "ip -net " NS0 " link set dev veth02 up");
+ SYS(fail, "ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
+ SYS(fail, "ip -net " NS2 " link set dev veth20 up");
return 0;
fail:
const char *ipv4_remote, int if_id)
{
/* State: local -> remote */
- SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
+ SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_local, ipv4_remote, if_id);
/* State: local <- remote */
- SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
+ SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_remote, ipv4_local, if_id);
/* Policy: local -> remote */
- SYS("ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
+ SYS(fail, "ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
"if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
if_id, ipv4_local, ipv4_remote, if_id);
/* Policy: local <- remote */
- SYS("ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
+ SYS(fail, "ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
"if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
if_id, ipv4_remote, ipv4_local, if_id);
if (!ASSERT_OK(setup_xfrmi_external_dev(NS0), "xfrmi"))
goto fail;
- SYS("ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
- SYS("ip -net " NS0 " link set dev ipsec0 up");
+ SYS(fail, "ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
+ SYS(fail, "ip -net " NS0 " link set dev ipsec0 up");
- SYS("ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
- SYS("ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
- SYS("ip -net " NS1 " link set dev ipsec0 up");
+ SYS(fail, "ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
+ SYS(fail, "ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
+ SYS(fail, "ip -net " NS1 " link set dev ipsec0 up");
- SYS("ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
- SYS("ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
- SYS("ip -net " NS2 " link set dev ipsec0 up");
+ SYS(fail, "ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
+ SYS(fail, "ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
+ SYS(fail, "ip -net " NS2 " link set dev ipsec0 up");
return 0;
fail:
{
skel->bss->req_if_id = if_id;
- SYS("ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");
+ SYS(fail, "ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");
if (!ASSERT_EQ(skel->bss->resp_if_id, if_id, "if_id"))
goto fail;
keys->ip_proto = ip6h->nexthdr;
keys->flow_label = ip6_flowlabel(ip6h);
- if (keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
+ if (keys->flow_label && keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
return export_flow_keys(keys, BPF_OK);
return parse_ipv6_proto(skb, ip6h->nexthdr);
#ifndef __BPF_MISC_H__
#define __BPF_MISC_H__
+/* This set of attributes controls the behavior of
+ * test_loader.c:test_loader__run_subtests().
+ *
+ * __msg Message expected to be found in the verifier log.
+ * Multiple __msg attributes can be specified.
+ *
+ * __success Expect program load success in privileged mode.
+ *
+ * __failure Expect program load failure in privileged mode.
+ *
+ * __log_level Log level to use for the program, numeric value expected.
+ *
+ * __flag Adds one flag to use for the program; the following values are valid:
+ * - BPF_F_STRICT_ALIGNMENT;
+ * - BPF_F_TEST_RND_HI32;
+ * - BPF_F_TEST_STATE_FREQ;
+ * - BPF_F_SLEEPABLE;
+ * - BPF_F_XDP_HAS_FRAGS;
+ * - A numeric value.
+ * Multiple __flag attributes can be specified; the final flags
+ * value is the bitwise OR of all specified values.
+ */
#define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg)))
#define __failure __attribute__((btf_decl_tag("comment:test_expect_failure")))
#define __success __attribute__((btf_decl_tag("comment:test_expect_success")))
#define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
+#define __flag(flag) __attribute__((btf_decl_tag("comment:test_prog_flags="#flag)))
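/* A hypothetical subtest showing how the annotations above combine; the
 * verifier is expected to reject the NULL load, and the __msg text here is
 * illustrative only, not taken from an actual verifier log:
 *
 *	SEC("?tc")
 *	__failure __msg("invalid mem access")
 *	__log_level(2) __flag(BPF_F_TEST_STATE_FREQ)
 *	int subtest_rejects_null_deref(void *ctx)
 *	{
 *		int *p = NULL;
 *
 *		return *p;
 *	}
 */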
/* Convenience macro for use with 'asm volatile' blocks */
#define __naked __attribute__((naked))
#include <bpf/bpf_helpers.h>
struct map_value {
- struct prog_test_ref_kfunc __kptr_ref *ptr;
+ struct prog_test_ref_kfunc __kptr *ptr;
};
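/* Both kptr tags used in these map values are BTF type tags. A sketch of how
 * they are assumed to be defined in bpf_helpers.h after the rename (the exact
 * tag strings are an assumption here):
 *
 *	#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
 *	#define __kptr           __attribute__((btf_type_tag("kptr")))
 */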
struct {
#include <bpf/bpf_tracing.h>
struct __cgrps_kfunc_map_value {
- struct cgroup __kptr_ref * cgrp;
+ struct cgroup __kptr * cgrp;
};
struct hash_map {
struct cgroup *bpf_cgroup_kptr_get(struct cgroup **pp) __ksym;
void bpf_cgroup_release(struct cgroup *p) __ksym;
struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym;
+struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
static inline struct __cgrps_kfunc_map_value *cgrps_kfunc_map_value_lookup(struct cgroup *cgrp)
{
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 is untrusted_ptr_or_null_ expected ptr_ or socket")
+__failure __msg("expects refcounted")
int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path)
{
struct __cgrps_kfunc_map_value *v;
SEC("tp_btf/cgroup_mkdir")
int BPF_PROG(test_cgrp_xchg_release, struct cgroup *cgrp, const char *path)
{
- struct cgroup *kptr;
+ struct cgroup *kptr, *cg;
struct __cgrps_kfunc_map_value *v;
long status;
return 0;
}
+ kptr = v->cgrp;
+ if (!kptr) {
+ err = 4;
+ return 0;
+ }
+
+ cg = bpf_cgroup_ancestor(kptr, 1);
+ if (cg) /* verifier only check */
+ bpf_cgroup_release(cg);
+
kptr = bpf_kptr_xchg(&v->cgrp, NULL);
if (!kptr) {
err = 3;
return 0;
}
+
+SEC("tp_btf/cgroup_mkdir")
+int BPF_PROG(test_cgrp_from_id, struct cgroup *cgrp, const char *path)
+{
+ struct cgroup *parent, *res;
+ u64 parent_cgid;
+
+ if (!is_test_kfunc_task())
+ return 0;
+
+ /* @cgrp's ID is not visible yet, let's test with the parent */
+ parent = bpf_cgroup_ancestor(cgrp, cgrp->level - 1);
+ if (!parent) {
+ err = 1;
+ return 0;
+ }
+
+ parent_cgid = parent->kn->id;
+ bpf_cgroup_release(parent);
+
+ res = bpf_cgroup_from_id(parent_cgid);
+ if (!res) {
+ err = 2;
+ return 0;
+ }
+
+ bpf_cgroup_release(res);
+
+ if (res != parent) {
+ err = 3;
+ return 0;
+ }
+
+ res = bpf_cgroup_from_id((u64)-1);
+ if (res) {
+ bpf_cgroup_release(res);
+ err = 4;
+ return 0;
+ }
+
+ return 0;
+}
if (task->pid != target_pid)
return 0;
- /* ptr_to_btf_id semantics. should work. */
+ /* task->cgroups is untrusted in sleepable prog outside of RCU CS */
cgrp = task->cgroups->dfl_cgrp;
ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
BPF_LOCAL_STORAGE_GET_F_CREATE);
bpf_rcu_read_lock();
cgrp = task->cgroups->dfl_cgrp;
- /* cgrp is untrusted and cannot pass to bpf_cgrp_storage_get() helper. */
+ /* cgrp is trusted under RCU CS */
ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
if (ptr)
cgroup_id = cgrp->kn->id;
int err;
struct __cpumask_map_value {
- struct bpf_cpumask __kptr_ref * cpumask;
+ struct bpf_cpumask __kptr * cpumask;
};
struct array_map {
}
SEC("tp_btf/task_newtask")
-__failure __msg("bpf_cpumask_acquire args#0 expected pointer to STRUCT bpf_cpumask")
+__failure __msg("must be referenced")
int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *cpumask;
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
+#include <linux/if_ether.h>
#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
char _license[] SEC("license") = "GPL";
return 0;
}
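/* The tests below exercise the new skb/xdp dynptr kfuncs pulled in via
 * bpf_kfuncs.h above. A sketch of the declarations assumed there, inferred
 * from the call sites (argument names are illustrative):
 *
 *	extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
 *				       struct bpf_dynptr *ptr__uninit) __ksym;
 *	extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
 *				       struct bpf_dynptr *ptr__uninit) __ksym;
 *	extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
 *				      void *buffer, __u32 buffer__szk) __ksym;
 *	extern void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr *ptr, __u32 offset,
 *					   void *buffer, __u32 buffer__szk) __ksym;
 */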
+/* A data slice can't be accessed out of bounds */
+SEC("?tc")
+__failure __msg("value is outside of the allowed memory range")
+int data_slice_out_of_bounds_skb(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ /* this should fail */
+ *(__u8*)(hdr + 1) = 1;
+
+ return SK_PASS;
+}
+
SEC("?raw_tp")
__failure __msg("value is outside of the allowed memory range")
int data_slice_out_of_bounds_map_value(void *ctx)
/* this should fail */
bpf_dynptr_read(read_data, sizeof(read_data), (void *)&ptr + 8, 0, 0);
-
return 0;
}
return 0;
}
+/* bpf_dynptr_slice()s are read-only and cannot be written to */
+SEC("?tc")
+__failure __msg("R0 cannot write into rdonly_mem")
+int skb_invalid_slice_write(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ /* this should fail */
+ hdr->h_proto = 1;
+
+ return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever a helper changes packet data */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice1(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ val = hdr->h_proto;
+
+ if (bpf_skb_pull_data(skb, skb->len))
+ return SK_DROP;
+
+ /* this should fail */
+ val = hdr->h_proto;
+
+ return SK_PASS;
+}
+
+/* The read-write data slice is invalidated whenever a helper changes packet data */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice2(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ hdr->h_proto = 123;
+
+ if (bpf_skb_pull_data(skb, skb->len))
+ return SK_DROP;
+
+ /* this should fail */
+ hdr->h_proto = 1;
+
+ return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever bpf_dynptr_write() is called */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice3(struct __sk_buff *skb)
+{
+ char write_data[64] = "hello there, world!!";
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ val = hdr->h_proto;
+
+ bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+
+ /* this should fail */
+ val = hdr->h_proto;
+
+ return SK_PASS;
+}
+
+/* The read-write data slice is invalidated whenever bpf_dynptr_write() is called */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice4(struct __sk_buff *skb)
+{
+ char write_data[64] = "hello there, world!!";
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ hdr->h_proto = 123;
+
+ bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+
+ /* this should fail */
+ hdr->h_proto = 1;
+
+ return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever a helper changes packet data */
+SEC("?xdp")
+__failure __msg("invalid mem access 'scalar'")
+int xdp_invalid_data_slice1(struct xdp_md *xdp)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr);
+ hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ val = hdr->h_proto;
+
+ if (bpf_xdp_adjust_head(xdp, 0 - (int)sizeof(*hdr)))
+ return XDP_DROP;
+
+ /* this should fail */
+ val = hdr->h_proto;
+
+ return XDP_PASS;
+}
+
+/* The read-write data slice is invalidated whenever a helper changes packet data */
+SEC("?xdp")
+__failure __msg("invalid mem access 'scalar'")
+int xdp_invalid_data_slice2(struct xdp_md *xdp)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr);
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ hdr->h_proto = 9;
+
+ if (bpf_xdp_adjust_head(xdp, 0 - (int)sizeof(*hdr)))
+ return XDP_DROP;
+
+ /* this should fail */
+ hdr->h_proto = 1;
+
+ return XDP_PASS;
+}
+
+/* Only supported prog type can create skb-type dynptrs */
+SEC("?raw_tp")
+__failure __msg("calling kernel function bpf_dynptr_from_skb is not allowed")
+int skb_invalid_ctx(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_from_skb(ctx, 0, &ptr);
+
+ return 0;
+}
+
/* Reject writes to dynptr slot for uninit arg */
SEC("?raw_tp")
__failure __msg("potential write to dynptr at off=-16")
return 0;
}
+/* Only supported prog type can create xdp-type dynptrs */
+SEC("?raw_tp")
+__failure __msg("calling kernel function bpf_dynptr_from_xdp is not allowed")
+int xdp_invalid_ctx(void *ctx)
+{
+ struct bpf_dynptr ptr;
+
+ /* this should fail */
+ bpf_dynptr_from_xdp(ctx, 0, &ptr);
+
+ return 0;
+}
+
+__u32 hdr_size = sizeof(struct ethhdr);
+/* Can't pass in variable-sized len to bpf_dynptr_slice */
+SEC("?tc")
+__failure __msg("unbounded memory access")
+int dynptr_slice_var_len1(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ /* this should fail */
+ hdr = bpf_dynptr_slice(&ptr, 0, buffer, hdr_size);
+ if (!hdr)
+ return SK_DROP;
+
+ return SK_PASS;
+}
+
+/* Can't pass in variable-sized len to bpf_dynptr_slice */
+SEC("?tc")
+__failure __msg("must be a known constant")
+int dynptr_slice_var_len2(struct __sk_buff *skb)
+{
+ char buffer[sizeof(struct ethhdr)] = {};
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ if (hdr_size <= sizeof(buffer)) {
+ /* this should fail */
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, hdr_size);
+ if (!hdr)
+ return SK_DROP;
+ hdr->h_proto = 12;
+ }
+
+ return SK_PASS;
+}
+
static int callback(__u32 index, void *data)
{
*(__u32 *)data = 123;
return 0;
}
+
+/* Program types that don't allow writes to packet data should fail if
+ * bpf_dynptr_slice_rdwr is called
+ */
+SEC("cgroup_skb/ingress")
+__failure __msg("the prog does not allow writes to packet data")
+int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
+{
+ char buffer[sizeof(struct ethhdr)] = {};
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ /* this should fail since cgroup_skb doesn't allow
+ * changing packet data
+ */
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+
+ return 0;
+}
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
#include "errno.h"
char _license[] SEC("license") = "GPL";
__type(value, __u32);
} array_map SEC(".maps");
-SEC("tp/syscalls/sys_enter_nanosleep")
+SEC("?tp/syscalls/sys_enter_nanosleep")
int test_read_write(void *ctx)
{
char write_data[64] = "hello there, world!!";
return 0;
}
-SEC("tp/syscalls/sys_enter_nanosleep")
-int test_data_slice(void *ctx)
+SEC("?tp/syscalls/sys_enter_nanosleep")
+int test_dynptr_data(void *ctx)
{
__u32 key = 0, val = 235, *map_val;
struct bpf_dynptr ptr;
return 0;
}
-SEC("tp/syscalls/sys_enter_nanosleep")
+SEC("?tp/syscalls/sys_enter_nanosleep")
int test_ringbuf(void *ctx)
{
struct bpf_dynptr ptr;
bpf_ringbuf_discard_dynptr(&ptr, 0);
return 0;
}
+
+SEC("?cgroup_skb/egress")
+int test_skb_readonly(struct __sk_buff *skb)
+{
+ __u8 write_data[2] = {1, 2};
+ struct bpf_dynptr ptr;
+ __u64 *data;
+ int ret;
+
+ if (bpf_dynptr_from_skb(skb, 0, &ptr)) {
+ err = 1;
+ return 1;
+ }
+
+ /* since cgroup skbs are read only, writes should fail */
+ ret = bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+ if (ret != -EINVAL) {
+ err = 2;
+ return 1;
+ }
+
+ return 1;
+}
+
+SEC("?cgroup_skb/egress")
+int test_dynptr_skb_data(struct __sk_buff *skb)
+{
+ __u8 write_data[2] = {1, 2};
+ struct bpf_dynptr ptr;
+ __u64 *data;
+ int ret;
+
+ if (bpf_dynptr_from_skb(skb, 0, &ptr)) {
+ err = 1;
+ return 1;
+ }
+
+ /* This should return NULL. Must use bpf_dynptr_slice API */
+ data = bpf_dynptr_data(&ptr, 0, 1);
+ if (data) {
+ err = 2;
+ return 1;
+ }
+
+ return 1;
+}
struct callback_ctx *data)
{
/* writing to vma, which is illegal */
- vma->vm_flags |= 0x55;
+ vma->vm_start = 0xffffffffff600000;
return 0;
}
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_helpers.h>
-static struct prog_test_ref_kfunc __kptr_ref *v;
+static struct prog_test_ref_kfunc __kptr *v;
long total_sum = -1;
extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
#include <bpf/bpf_helpers.h>
struct map_value {
- struct task_struct __kptr *ptr;
+ struct task_struct __kptr_untrusted *ptr;
};
struct {
#include <bpf/bpf_helpers.h>
struct map_value {
- struct prog_test_ref_kfunc __kptr *unref_ptr;
- struct prog_test_ref_kfunc __kptr_ref *ref_ptr;
+ struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr;
+ struct prog_test_ref_kfunc __kptr *ref_ptr;
};
struct array_map {
__uint(max_entries, 1);
} array_map SEC(".maps");
+struct pcpu_array_map {
+ __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 1);
+} pcpu_array_map SEC(".maps");
+
struct hash_map {
__uint(type, BPF_MAP_TYPE_HASH);
__type(key, int);
__uint(max_entries, 1);
} hash_map SEC(".maps");
+struct pcpu_hash_map {
+ __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 1);
+} pcpu_hash_map SEC(".maps");
+
struct hash_malloc_map {
__uint(type, BPF_MAP_TYPE_HASH);
__type(key, int);
__uint(map_flags, BPF_F_NO_PREALLOC);
} hash_malloc_map SEC(".maps");
+struct pcpu_hash_malloc_map {
+ __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 1);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+} pcpu_hash_malloc_map SEC(".maps");
+
struct lru_hash_map {
__uint(type, BPF_MAP_TYPE_LRU_HASH);
__type(key, int);
__uint(max_entries, 1);
} lru_hash_map SEC(".maps");
+struct lru_pcpu_hash_map {
+ __uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 1);
+} lru_pcpu_hash_map SEC(".maps");
+
+struct cgrp_ls_map {
+ __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct map_value);
+} cgrp_ls_map SEC(".maps");
+
+struct task_ls_map {
+ __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct map_value);
+} task_ls_map SEC(".maps");
+
+struct inode_ls_map {
+ __uint(type, BPF_MAP_TYPE_INODE_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct map_value);
+} inode_ls_map SEC(".maps");
+
+struct sk_ls_map {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct map_value);
+} sk_ls_map SEC(".maps");
+
#define DEFINE_MAP_OF_MAP(map_type, inner_map_type, name) \
struct { \
__uint(type, map_type); \
extern struct prog_test_ref_kfunc *
bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
+void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) __ksym;
#define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val))
WRITE_ONCE(v->unref_ptr, p);
if (!p)
return;
+ /*
+	 * p is rcu_ptr_prog_test_ref_kfunc because the BPF prog is
+	 * non-sleepable and runs in an RCU critical section.
+	 * p can be passed to a kfunc that requires KF_RCU.
+	 */
+ bpf_kfunc_call_test_ref(p);
if (p->a + p->b > 100)
return;
/* store NULL */
p = bpf_kptr_xchg(&v->ref_ptr, NULL);
if (!p)
return;
+ /*
+ * p is trusted_ptr_prog_test_ref_kfunc.
+	 * p can be passed to a kfunc that requires KF_RCU.
+ */
+ bpf_kfunc_call_test_ref(p);
if (p->a + p->b > 100) {
bpf_kfunc_call_test_release(p);
return;
return 0;
}
+SEC("tp_btf/cgroup_mkdir")
+int BPF_PROG(test_cgrp_map_kptr, struct cgroup *cgrp, const char *path)
+{
+ struct map_value *v;
+
+ v = bpf_cgrp_storage_get(&cgrp_ls_map, cgrp, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (v)
+ test_kptr(v);
+ return 0;
+}
+
+SEC("lsm/inode_unlink")
+int BPF_PROG(test_task_map_kptr, struct inode *inode, struct dentry *victim)
+{
+ struct task_struct *task;
+ struct map_value *v;
+
+ task = bpf_get_current_task_btf();
+ if (!task)
+ return 0;
+ v = bpf_task_storage_get(&task_ls_map, task, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (v)
+ test_kptr(v);
+ return 0;
+}
+
+SEC("lsm/inode_unlink")
+int BPF_PROG(test_inode_map_kptr, struct inode *inode, struct dentry *victim)
+{
+ struct map_value *v;
+
+ v = bpf_inode_storage_get(&inode_ls_map, inode, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (v)
+ test_kptr(v);
+ return 0;
+}
+
+SEC("tc")
+int test_sk_map_kptr(struct __sk_buff *ctx)
+{
+ struct map_value *v;
+ struct bpf_sock *sk;
+
+ sk = ctx->sk;
+ if (!sk)
+ return 0;
+ v = bpf_sk_storage_get(&sk_ls_map, sk, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (v)
+ test_kptr(v);
+ return 0;
+}
+
SEC("tc")
int test_map_in_map_kptr(struct __sk_buff *ctx)
{
return 0;
}
-SEC("tc")
-int test_map_kptr_ref(struct __sk_buff *ctx)
+int ref = 1;
+
+static __always_inline
+int test_map_kptr_ref_pre(struct map_value *v)
{
struct prog_test_ref_kfunc *p, *p_st;
unsigned long arg = 0;
- struct map_value *v;
- int key = 0, ret;
+ int ret;
p = bpf_kfunc_call_test_acquire(&arg);
if (!p)
return 1;
+ ref++;
p_st = p->next;
- if (p_st->cnt.refs.counter != 2) {
+ if (p_st->cnt.refs.counter != ref) {
ret = 2;
goto end;
}
- v = bpf_map_lookup_elem(&array_map, &key);
- if (!v) {
- ret = 3;
- goto end;
- }
-
p = bpf_kptr_xchg(&v->ref_ptr, p);
if (p) {
- ret = 4;
+ ret = 3;
goto end;
}
- if (p_st->cnt.refs.counter != 2)
- return 5;
+ if (p_st->cnt.refs.counter != ref)
+ return 4;
p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
if (!p)
- return 6;
- if (p_st->cnt.refs.counter != 3) {
- ret = 7;
+ return 5;
+ ref++;
+ if (p_st->cnt.refs.counter != ref) {
+ ret = 6;
goto end;
}
bpf_kfunc_call_test_release(p);
- if (p_st->cnt.refs.counter != 2)
- return 8;
+ ref--;
+ if (p_st->cnt.refs.counter != ref)
+ return 7;
p = bpf_kptr_xchg(&v->ref_ptr, NULL);
if (!p)
- return 9;
+ return 8;
bpf_kfunc_call_test_release(p);
- if (p_st->cnt.refs.counter != 1)
- return 10;
+ ref--;
+ if (p_st->cnt.refs.counter != ref)
+ return 9;
p = bpf_kfunc_call_test_acquire(&arg);
if (!p)
- return 11;
+ return 10;
+ ref++;
p = bpf_kptr_xchg(&v->ref_ptr, p);
if (p) {
- ret = 12;
+ ret = 11;
goto end;
}
- if (p_st->cnt.refs.counter != 2)
- return 13;
+ if (p_st->cnt.refs.counter != ref)
+ return 12;
/* Leave in map */
return 0;
end:
+ ref--;
bpf_kfunc_call_test_release(p);
return ret;
}
-SEC("tc")
-int test_map_kptr_ref2(struct __sk_buff *ctx)
+static __always_inline
+int test_map_kptr_ref_post(struct map_value *v)
{
struct prog_test_ref_kfunc *p, *p_st;
- struct map_value *v;
- int key = 0;
-
- v = bpf_map_lookup_elem(&array_map, &key);
- if (!v)
- return 1;
p_st = v->ref_ptr;
- if (!p_st || p_st->cnt.refs.counter != 2)
- return 2;
+ if (!p_st || p_st->cnt.refs.counter != ref)
+ return 1;
p = bpf_kptr_xchg(&v->ref_ptr, NULL);
if (!p)
- return 3;
- if (p_st->cnt.refs.counter != 2) {
+ return 2;
+ if (p_st->cnt.refs.counter != ref) {
bpf_kfunc_call_test_release(p);
- return 4;
+ return 3;
}
p = bpf_kptr_xchg(&v->ref_ptr, p);
if (p) {
bpf_kfunc_call_test_release(p);
- return 5;
+ return 4;
}
- if (p_st->cnt.refs.counter != 2)
- return 6;
+ if (p_st->cnt.refs.counter != ref)
+ return 5;
+
+ return 0;
+}
+
+#define TEST(map) \
+ v = bpf_map_lookup_elem(&map, &key); \
+ if (!v) \
+ return -1; \
+ ret = test_map_kptr_ref_pre(v); \
+ if (ret) \
+ return ret;
+
+#define TEST_PCPU(map) \
+ v = bpf_map_lookup_percpu_elem(&map, &key, 0); \
+ if (!v) \
+ return -1; \
+ ret = test_map_kptr_ref_pre(v); \
+ if (ret) \
+ return ret;
+
+SEC("tc")
+int test_map_kptr_ref1(struct __sk_buff *ctx)
+{
+ struct map_value *v, val = {};
+ int key = 0, ret;
+
+ bpf_map_update_elem(&hash_map, &key, &val, 0);
+ bpf_map_update_elem(&hash_malloc_map, &key, &val, 0);
+ bpf_map_update_elem(&lru_hash_map, &key, &val, 0);
+
+ bpf_map_update_elem(&pcpu_hash_map, &key, &val, 0);
+ bpf_map_update_elem(&pcpu_hash_malloc_map, &key, &val, 0);
+ bpf_map_update_elem(&lru_pcpu_hash_map, &key, &val, 0);
+
+ TEST(array_map);
+ TEST(hash_map);
+ TEST(hash_malloc_map);
+ TEST(lru_hash_map);
+
+ TEST_PCPU(pcpu_array_map);
+ TEST_PCPU(pcpu_hash_map);
+ TEST_PCPU(pcpu_hash_malloc_map);
+ TEST_PCPU(lru_pcpu_hash_map);
+
+ return 0;
+}
+
+#undef TEST
+#undef TEST_PCPU
+
+#define TEST(map) \
+ v = bpf_map_lookup_elem(&map, &key); \
+ if (!v) \
+ return -1; \
+ ret = test_map_kptr_ref_post(v); \
+ if (ret) \
+ return ret;
+
+#define TEST_PCPU(map) \
+ v = bpf_map_lookup_percpu_elem(&map, &key, 0); \
+ if (!v) \
+ return -1; \
+ ret = test_map_kptr_ref_post(v); \
+ if (ret) \
+ return ret;
+
+SEC("tc")
+int test_map_kptr_ref2(struct __sk_buff *ctx)
+{
+ struct map_value *v;
+ int key = 0, ret;
+
+ TEST(array_map);
+ TEST(hash_map);
+ TEST(hash_malloc_map);
+ TEST(lru_hash_map);
+
+ TEST_PCPU(pcpu_array_map);
+ TEST_PCPU(pcpu_hash_map);
+ TEST_PCPU(pcpu_hash_malloc_map);
+ TEST_PCPU(lru_pcpu_hash_map);
return 0;
}
+#undef TEST
+#undef TEST_PCPU
+
+SEC("tc")
+int test_map_kptr_ref3(struct __sk_buff *ctx)
+{
+ struct prog_test_ref_kfunc *p;
+ unsigned long sp = 0;
+
+ p = bpf_kfunc_call_test_acquire(&sp);
+ if (!p)
+ return 1;
+ ref++;
+ if (p->cnt.refs.counter != ref) {
+ bpf_kfunc_call_test_release(p);
+ return 2;
+ }
+ bpf_kfunc_call_test_release(p);
+ ref--;
+ return 0;
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref1(void *ctx)
+{
+ struct task_struct *current;
+ struct map_value *v;
+ int ret;
+
+ current = bpf_get_current_task_btf();
+ if (!current)
+ return 100;
+ v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+ if (v)
+ return 150;
+ v = bpf_task_storage_get(&task_ls_map, current, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!v)
+ return 200;
+ return test_map_kptr_ref_pre(v);
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref2(void *ctx)
+{
+ struct task_struct *current;
+ struct map_value *v;
+ int ret;
+
+ current = bpf_get_current_task_btf();
+ if (!current)
+ return 100;
+ v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+ if (!v)
+ return 200;
+ return test_map_kptr_ref_post(v);
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref_del(void *ctx)
+{
+ struct task_struct *current;
+ struct map_value *v;
+ int ret;
+
+ current = bpf_get_current_task_btf();
+ if (!current)
+ return 100;
+ v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+ if (!v)
+ return 200;
+ if (!v->ref_ptr)
+ return 300;
+ return bpf_task_storage_delete(&task_ls_map, current);
+}
+
char _license[] SEC("license") = "GPL";
struct map_value {
char buf[8];
- struct prog_test_ref_kfunc __kptr *unref_ptr;
- struct prog_test_ref_kfunc __kptr_ref *ref_ptr;
- struct prog_test_member __kptr_ref *ref_memb_ptr;
+ struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr;
+ struct prog_test_ref_kfunc __kptr *ref_ptr;
+ struct prog_test_member __kptr *ref_memb_ptr;
};
struct array_map {
}
SEC("?tc")
-__failure __msg("R1 type=untrusted_ptr_or_null_ expected=percpu_ptr_")
+__failure __msg("R1 type=rcu_ptr_or_null_ expected=percpu_ptr_")
int mark_ref_as_untrusted_or_null(struct __sk_buff *ctx)
{
struct map_value *v;
}
SEC("?tc")
-__failure __msg("R2 type=untrusted_ptr_ expected=ptr_")
+__failure __msg("R2 must be referenced")
int reject_untrusted_xchg(struct __sk_buff *ctx)
{
struct prog_test_ref_kfunc *p;
*/
SEC("tp_btf/task_newtask")
-__failure __msg("R2 must be referenced or trusted")
+__failure __msg("R2 must be")
int BPF_PROG(test_invalid_nested_user_cpus, struct task_struct *task, u64 clone_flags)
{
bpf_cpumask_test_cpu(0, task->user_cpus_ptr);
long rbtree_add_and_remove(void *ctx)
{
struct bpf_rb_node *res = NULL;
- struct node_data *n, *m;
+ struct node_data *n, *m = NULL;
n = bpf_obj_new(typeof(*n));
if (!n)
bpf_spin_lock(&glock);
res = bpf_rbtree_first(&groot);
- if (res)
- n = container_of(res, struct node_data, node);
+ if (!res) {
+ bpf_spin_unlock(&glock);
+ return 1;
+ }
+ n = container_of(res, struct node_data, node);
bpf_spin_unlock(&glock);
bpf_spin_lock(&glock);
{
struct task_struct *task, *real_parent;
- /* no bpf_rcu_read_lock(), old code still works */
+ /* old style ptr_to_btf_id is not allowed in sleepable */
task = bpf_get_current_task_btf();
real_parent = task->real_parent;
(void)bpf_task_storage_get(&map_a, real_parent, 0, 0);
}
SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
-int task_untrusted_non_rcuptr(void *ctx)
+int task_trusted_non_rcuptr(void *ctx)
{
struct task_struct *task, *group_leader;
task = bpf_get_current_task_btf();
bpf_rcu_read_lock();
- /* the pointer group_leader marked as untrusted */
+ /* the pointer group_leader is explicitly marked as trusted */
group_leader = task->real_parent->group_leader;
(void)bpf_task_storage_get(&map_a, group_leader, 0, 0);
bpf_rcu_read_unlock();
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct task_ls_map {
+ __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, int);
+} task_ls_map SEC(".maps");
+
+long gp_seq;
+
+SEC("syscall")
+int do_call_rcu_tasks_trace(void *ctx)
+{
+ struct task_struct *current;
+ int *v;
+
+ current = bpf_get_current_task_btf();
+ v = bpf_task_storage_get(&task_ls_map, current, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!v)
+ return 1;
+ /* Invoke call_rcu_tasks_trace */
+ return bpf_task_storage_delete(&task_ls_map, current);
+}
+
+SEC("kprobe/rcu_tasks_trace_postgp")
+int rcu_tasks_trace_postgp(void *ctx)
+{
+ __sync_add_and_fetch(&gp_seq, 1);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
#include <bpf/bpf_tracing.h>
struct __tasks_kfunc_map_value {
- struct task_struct __kptr_ref * task;
+ struct task_struct __kptr * task;
};
struct hash_map {
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+
+int kprobe_res = 0;
+
+/**
+ * This program will be manually made sleepable on the userspace side
+ * and should thus be unattachable.
+ */
+SEC("kprobe/" SYS_PREFIX "sys_nanosleep")
+int handle_kprobe_sleepable(struct pt_regs *ctx)
+{
+ kprobe_res = 1;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
#include <bpf/bpf_core_read.h>
#include "bpf_misc.h"
-int kprobe_res = 0;
int kprobe2_res = 0;
-int kretprobe_res = 0;
int kretprobe2_res = 0;
-int uprobe_res = 0;
-int uretprobe_res = 0;
int uprobe_byname_res = 0;
int uretprobe_byname_res = 0;
int uprobe_byname2_res = 0;
int uretprobe_byname3_res = 0;
void *user_ptr = 0;
-SEC("kprobe")
-int handle_kprobe(struct pt_regs *ctx)
-{
- kprobe_res = 1;
- return 0;
-}
-
SEC("ksyscall/nanosleep")
int BPF_KSYSCALL(handle_kprobe_auto, struct __kernel_timespec *req, struct __kernel_timespec *rem)
{
return 0;
}
-/**
- * This program will be manually made sleepable on the userspace side
- * and should thus be unattachable.
- */
-SEC("kprobe/" SYS_PREFIX "sys_nanosleep")
-int handle_kprobe_sleepable(struct pt_regs *ctx)
-{
- kprobe_res = 2;
- return 0;
-}
-
-SEC("kretprobe")
-int handle_kretprobe(struct pt_regs *ctx)
-{
- kretprobe_res = 2;
- return 0;
-}
-
SEC("kretsyscall/nanosleep")
int BPF_KRETPROBE(handle_kretprobe_auto, int ret)
{
}
SEC("uprobe")
-int handle_uprobe(struct pt_regs *ctx)
+int handle_uprobe_ref_ctr(struct pt_regs *ctx)
{
- uprobe_res = 3;
return 0;
}
SEC("uretprobe")
-int handle_uretprobe(struct pt_regs *ctx)
+int handle_uretprobe_ref_ctr(struct pt_regs *ctx)
{
- uretprobe_res = 4;
return 0;
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+
+int kprobe_res = 0;
+int kretprobe_res = 0;
+int uprobe_res = 0;
+int uretprobe_res = 0;
+int uprobe_byname_res = 0;
+void *user_ptr = 0;
+
+SEC("kprobe")
+int handle_kprobe(struct pt_regs *ctx)
+{
+ kprobe_res = 1;
+ return 0;
+}
+
+SEC("kretprobe")
+int handle_kretprobe(struct pt_regs *ctx)
+{
+ kretprobe_res = 2;
+ return 0;
+}
+
+SEC("uprobe")
+int handle_uprobe(struct pt_regs *ctx)
+{
+ uprobe_res = 3;
+ return 0;
+}
+
+SEC("uretprobe")
+int handle_uretprobe(struct pt_regs *ctx)
+{
+ uretprobe_res = 4;
+ return 0;
+}
+
+SEC("uprobe")
+int handle_uprobe_byname(struct pt_regs *ctx)
+{
+ uprobe_byname_res = 5;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+// Copyright (c) 2019, 2020 Cloudflare
+
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <linux/bpf.h>
+#include <linux/icmp.h>
+#include <linux/icmpv6.h>
+#include <linux/if_ether.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/pkt_cls.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+
+#include "test_cls_redirect.h"
+#include "bpf_kfuncs.h"
+
+#define offsetofend(TYPE, MEMBER) \
+ (offsetof(TYPE, MEMBER) + sizeof((((TYPE *)0)->MEMBER)))
+
+#define IP_OFFSET_MASK (0x1FFF)
+#define IP_MF (0x2000)
+
+char _license[] SEC("license") = "Dual BSD/GPL";
+
+/**
+ * Destination port and IP used for UDP encapsulation.
+ */
+volatile const __be16 ENCAPSULATION_PORT;
+volatile const __be32 ENCAPSULATION_IP;
+
+typedef struct {
+ uint64_t processed_packets_total;
+ uint64_t l3_protocol_packets_total_ipv4;
+ uint64_t l3_protocol_packets_total_ipv6;
+ uint64_t l4_protocol_packets_total_tcp;
+ uint64_t l4_protocol_packets_total_udp;
+ uint64_t accepted_packets_total_syn;
+ uint64_t accepted_packets_total_syn_cookies;
+ uint64_t accepted_packets_total_last_hop;
+ uint64_t accepted_packets_total_icmp_echo_request;
+ uint64_t accepted_packets_total_established;
+ uint64_t forwarded_packets_total_gue;
+ uint64_t forwarded_packets_total_gre;
+
+ uint64_t errors_total_unknown_l3_proto;
+ uint64_t errors_total_unknown_l4_proto;
+ uint64_t errors_total_malformed_ip;
+ uint64_t errors_total_fragmented_ip;
+ uint64_t errors_total_malformed_icmp;
+ uint64_t errors_total_unwanted_icmp;
+ uint64_t errors_total_malformed_icmp_pkt_too_big;
+ uint64_t errors_total_malformed_tcp;
+ uint64_t errors_total_malformed_udp;
+ uint64_t errors_total_icmp_echo_replies;
+ uint64_t errors_total_malformed_encapsulation;
+ uint64_t errors_total_encap_adjust_failed;
+ uint64_t errors_total_encap_buffer_too_small;
+ uint64_t errors_total_redirect_loop;
+ uint64_t errors_total_encap_mtu_violate;
+} metrics_t;
+
+typedef enum {
+ INVALID = 0,
+ UNKNOWN,
+ ECHO_REQUEST,
+ SYN,
+ SYN_COOKIE,
+ ESTABLISHED,
+} verdict_t;
+
+typedef struct {
+ uint16_t src, dst;
+} flow_ports_t;
+
+_Static_assert(
+ sizeof(flow_ports_t) !=
+ offsetofend(struct bpf_sock_tuple, ipv4.dport) -
+ offsetof(struct bpf_sock_tuple, ipv4.sport) - 1,
+ "flow_ports_t must match sport and dport in struct bpf_sock_tuple");
+_Static_assert(
+ sizeof(flow_ports_t) !=
+ offsetofend(struct bpf_sock_tuple, ipv6.dport) -
+ offsetof(struct bpf_sock_tuple, ipv6.sport) - 1,
+ "flow_ports_t must match sport and dport in struct bpf_sock_tuple");
+
+struct iphdr_info {
+ void *hdr;
+ __u64 len;
+};
+
+typedef int ret_t;
+
+/* This is a bit of a hack. We need a return value which allows us to
+ * indicate that the regular flow of the program should continue,
+ * while allowing functions to use XDP_PASS and XDP_DROP, etc.
+ */
+static const ret_t CONTINUE_PROCESSING = -1;
+
+/* Convenience macro to call functions which return ret_t.
+ */
+#define MAYBE_RETURN(x) \
+ do { \
+ ret_t __ret = x; \
+ if (__ret != CONTINUE_PROCESSING) \
+ return __ret; \
+ } while (0)
+
+static bool ipv4_is_fragment(const struct iphdr *ip)
+{
+ uint16_t frag_off = ip->frag_off & bpf_htons(IP_OFFSET_MASK);
+ return (ip->frag_off & bpf_htons(IP_MF)) != 0 || frag_off > 0;
+}
+
+static int pkt_parse_ipv4(struct bpf_dynptr *dynptr, __u64 *offset, struct iphdr *iphdr)
+{
+ if (bpf_dynptr_read(iphdr, sizeof(*iphdr), dynptr, *offset, 0))
+ return -1;
+
+ *offset += sizeof(*iphdr);
+
+ if (iphdr->ihl < 5)
+ return -1;
+
+ /* skip ipv4 options */
+ *offset += (iphdr->ihl - 5) * 4;
+
+ return 0;
+}
+
+/* Parse the L4 ports from a packet, assuming a layout like TCP or UDP. */
+static bool pkt_parse_icmp_l4_ports(struct bpf_dynptr *dynptr, __u64 *offset, flow_ports_t *ports)
+{
+ if (bpf_dynptr_read(ports, sizeof(*ports), dynptr, *offset, 0))
+ return false;
+
+ *offset += sizeof(*ports);
+
+ /* Ports in the L4 headers are reversed, since we are parsing an ICMP
+ * payload which is going towards the eyeball.
+ */
+ uint16_t dst = ports->src;
+ ports->src = ports->dst;
+ ports->dst = dst;
+ return true;
+}
+
+static uint16_t pkt_checksum_fold(uint32_t csum)
+{
+ /* The highest reasonable value for an IPv4 header
+ * checksum requires two folds, so we just do that always.
+ */
+ csum = (csum & 0xffff) + (csum >> 16);
+ csum = (csum & 0xffff) + (csum >> 16);
+ return (uint16_t)~csum;
+}
+
+static void pkt_ipv4_checksum(struct iphdr *iph)
+{
+ iph->check = 0;
+
+ /* An IP header without options is 20 bytes. Two of those
+ * are the checksum, which we always set to zero. Hence,
+ * the maximum accumulated value is 18 / 2 * 0xffff = 0x8fff7,
+ * which fits in 32 bit.
+ */
+ _Static_assert(sizeof(struct iphdr) == 20, "iphdr must be 20 bytes");
+ uint32_t acc = 0;
+ uint16_t *ipw = (uint16_t *)iph;
+
+ for (size_t i = 0; i < sizeof(struct iphdr) / 2; i++)
+ acc += ipw[i];
+
+ iph->check = pkt_checksum_fold(acc);
+}
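For reference, the bound quoted in the comment can be checked by hand: 18 bytes of non-checksum header data give at most 9 * 0xffff = 0x8fff7, and two folds always bring such a value back into 16 bits. A minimal host-side sketch (not part of the patch) of the same arithmetic:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t csum = 0x8fff7;	/* worst-case accumulator from the comment */

		csum = (csum & 0xffff) + (csum >> 16);	/* 0xfff7 + 0x8 = 0xffff */
		csum = (csum & 0xffff) + (csum >> 16);	/* stays 0xffff */
		assert(csum <= 0xffff);
		return 0;
	}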
+
+static bool pkt_skip_ipv6_extension_headers(struct bpf_dynptr *dynptr, __u64 *offset,
+ const struct ipv6hdr *ipv6, uint8_t *upper_proto,
+ bool *is_fragment)
+{
+ /* We understand five extension headers.
+ * https://tools.ietf.org/html/rfc8200#section-4.1 states that all
+ * headers should occur once, except Destination Options, which may
+ * occur twice. Hence we give up after 6 headers.
+ */
+ struct {
+ uint8_t next;
+ uint8_t len;
+ } exthdr = {
+ .next = ipv6->nexthdr,
+ };
+ *is_fragment = false;
+
+ for (int i = 0; i < 6; i++) {
+ switch (exthdr.next) {
+ case IPPROTO_FRAGMENT:
+ *is_fragment = true;
+ /* NB: We don't check that hdrlen == 0 as per spec. */
+ /* fallthrough; */
+
+ case IPPROTO_HOPOPTS:
+ case IPPROTO_ROUTING:
+ case IPPROTO_DSTOPTS:
+ case IPPROTO_MH:
+ if (bpf_dynptr_read(&exthdr, sizeof(exthdr), dynptr, *offset, 0))
+ return false;
+
+ /* hdrlen is in 8-octet units, and excludes the first 8 octets. */
+ *offset += (exthdr.len + 1) * 8;
+
+ /* Decode next header */
+ break;
+
+ default:
+ /* The next header is not one of the known extension
+ * headers, treat it as the upper layer header.
+ *
+ * This handles IPPROTO_NONE.
+ *
+ * Encapsulating Security Payload (50) and Authentication
+ * Header (51) also end up here (and will trigger an
+ * unknown proto error later). They have a custom header
+ * format and seem too esoteric to care about.
+ */
+ *upper_proto = exthdr.next;
+ return true;
+ }
+ }
+
+ /* We never found an upper layer header. */
+ return false;
+}
+
+static int pkt_parse_ipv6(struct bpf_dynptr *dynptr, __u64 *offset, struct ipv6hdr *ipv6,
+ uint8_t *proto, bool *is_fragment)
+{
+ if (bpf_dynptr_read(ipv6, sizeof(*ipv6), dynptr, *offset, 0))
+ return -1;
+
+ *offset += sizeof(*ipv6);
+
+ if (!pkt_skip_ipv6_extension_headers(dynptr, offset, ipv6, proto, is_fragment))
+ return -1;
+
+ return 0;
+}
+
+/* Global metrics, per CPU
+ */
+struct {
+ __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, unsigned int);
+ __type(value, metrics_t);
+} metrics_map SEC(".maps");
+
+static metrics_t *get_global_metrics(void)
+{
+ uint64_t key = 0;
+ return bpf_map_lookup_elem(&metrics_map, &key);
+}
+
+static ret_t accept_locally(struct __sk_buff *skb, encap_headers_t *encap)
+{
+ const int payload_off =
+ sizeof(*encap) +
+ sizeof(struct in_addr) * encap->unigue.hop_count;
+ int32_t encap_overhead = payload_off - sizeof(struct ethhdr);
+
+ /* Changing the ethertype if the encapsulated packet is ipv6 */
+ if (encap->gue.proto_ctype == IPPROTO_IPV6)
+ encap->eth.h_proto = bpf_htons(ETH_P_IPV6);
+
+ if (bpf_skb_adjust_room(skb, -encap_overhead, BPF_ADJ_ROOM_MAC,
+ BPF_F_ADJ_ROOM_FIXED_GSO |
+ BPF_F_ADJ_ROOM_NO_CSUM_RESET) ||
+ bpf_csum_level(skb, BPF_CSUM_LEVEL_DEC))
+ return TC_ACT_SHOT;
+
+ return bpf_redirect(skb->ifindex, BPF_F_INGRESS);
+}
+
+static ret_t forward_with_gre(struct __sk_buff *skb, struct bpf_dynptr *dynptr,
+ encap_headers_t *encap, struct in_addr *next_hop,
+ metrics_t *metrics)
+{
+ const int payload_off =
+ sizeof(*encap) +
+ sizeof(struct in_addr) * encap->unigue.hop_count;
+ int32_t encap_overhead =
+ payload_off - sizeof(struct ethhdr) - sizeof(struct iphdr);
+ int32_t delta = sizeof(struct gre_base_hdr) - encap_overhead;
+ __u8 encap_buffer[sizeof(encap_gre_t)] = {};
+ uint16_t proto = ETH_P_IP;
+ uint32_t mtu_len = 0;
+ encap_gre_t *encap_gre;
+
+ metrics->forwarded_packets_total_gre++;
+
+ /* Loop protection: the inner packet's TTL is decremented as a safeguard
+ * against any forwarding loop. As the only interesting field is the TTL
+ * hop limit for IPv6, it is easier to use bpf_skb_load_bytes/bpf_skb_store_bytes
+ * as they handle the split packets if needed (no need for the data to be
+ * in the linear section).
+ */
+ if (encap->gue.proto_ctype == IPPROTO_IPV6) {
+ proto = ETH_P_IPV6;
+ uint8_t ttl;
+ int rc;
+
+ rc = bpf_skb_load_bytes(
+ skb, payload_off + offsetof(struct ipv6hdr, hop_limit),
+ &ttl, 1);
+ if (rc != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (ttl == 0) {
+ metrics->errors_total_redirect_loop++;
+ return TC_ACT_SHOT;
+ }
+
+ ttl--;
+ rc = bpf_skb_store_bytes(
+ skb, payload_off + offsetof(struct ipv6hdr, hop_limit),
+ &ttl, 1, 0);
+ if (rc != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+ } else {
+ uint8_t ttl;
+ int rc;
+
+ rc = bpf_skb_load_bytes(
+ skb, payload_off + offsetof(struct iphdr, ttl), &ttl,
+ 1);
+ if (rc != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (ttl == 0) {
+ metrics->errors_total_redirect_loop++;
+ return TC_ACT_SHOT;
+ }
+
+ /* IPv4 also has a checksum to patch. While the TTL is only one byte,
+ * this function only works for 2 and 4 bytes arguments (the result is
+ * the same).
+ */
+ rc = bpf_l3_csum_replace(
+ skb, payload_off + offsetof(struct iphdr, check), ttl,
+ ttl - 1, 2);
+ if (rc != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ ttl--;
+ rc = bpf_skb_store_bytes(
+ skb, payload_off + offsetof(struct iphdr, ttl), &ttl, 1,
+ 0);
+ if (rc != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+ }
+
+ if (bpf_check_mtu(skb, skb->ifindex, &mtu_len, delta, 0)) {
+ metrics->errors_total_encap_mtu_violate++;
+ return TC_ACT_SHOT;
+ }
+
+ if (bpf_skb_adjust_room(skb, delta, BPF_ADJ_ROOM_NET,
+ BPF_F_ADJ_ROOM_FIXED_GSO |
+ BPF_F_ADJ_ROOM_NO_CSUM_RESET) ||
+ bpf_csum_level(skb, BPF_CSUM_LEVEL_INC)) {
+ metrics->errors_total_encap_adjust_failed++;
+ return TC_ACT_SHOT;
+ }
+
+ if (bpf_skb_pull_data(skb, sizeof(encap_gre_t))) {
+ metrics->errors_total_encap_buffer_too_small++;
+ return TC_ACT_SHOT;
+ }
+
+ encap_gre = bpf_dynptr_slice_rdwr(dynptr, 0, encap_buffer, sizeof(encap_buffer));
+ if (!encap_gre) {
+ metrics->errors_total_encap_buffer_too_small++;
+ return TC_ACT_SHOT;
+ }
+
+ encap_gre->ip.protocol = IPPROTO_GRE;
+ encap_gre->ip.daddr = next_hop->s_addr;
+ encap_gre->ip.saddr = ENCAPSULATION_IP;
+ encap_gre->ip.tot_len =
+ bpf_htons(bpf_ntohs(encap_gre->ip.tot_len) + delta);
+ encap_gre->gre.flags = 0;
+ encap_gre->gre.protocol = bpf_htons(proto);
+ pkt_ipv4_checksum((void *)&encap_gre->ip);
+
+ if (encap_gre == encap_buffer)
+ bpf_dynptr_write(dynptr, 0, encap_buffer, sizeof(encap_buffer), 0);
+
+ return bpf_redirect(skb->ifindex, 0);
+}
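The slice/write-back idiom above recurs throughout this series: bpf_dynptr_slice_rdwr() returns a pointer directly into the packet when the requested bytes are contiguous in the linear area, and otherwise copies them into the caller-supplied buffer and returns that buffer, in which case any modifications must be flushed back with bpf_dynptr_write(). A stripped-down sketch of the idiom (program and variable names are illustrative, not from the patch):

	SEC("tc")
	int slice_rdwr_idiom(struct __sk_buff *skb)
	{
		__u8 buf[sizeof(struct ethhdr)] = {};
		struct bpf_dynptr dynptr;
		struct ethhdr *eth;

		bpf_dynptr_from_skb(skb, 0, &dynptr);
		eth = bpf_dynptr_slice_rdwr(&dynptr, 0, buf, sizeof(buf));
		if (!eth)
			return TC_ACT_SHOT;

		eth->h_proto = bpf_htons(ETH_P_IP);	/* edit through the slice */

		/* flush only if the kernel handed back our bounce buffer */
		if (eth == (struct ethhdr *)buf)
			bpf_dynptr_write(&dynptr, 0, buf, sizeof(buf), 0);

		return TC_ACT_OK;
	}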
+
+static ret_t forward_to_next_hop(struct __sk_buff *skb, struct bpf_dynptr *dynptr,
+ encap_headers_t *encap, struct in_addr *next_hop,
+ metrics_t *metrics)
+{
+ /* swap L2 addresses */
+ /* This assumes that packets are received from a router.
+ * So just swapping the MAC addresses here will make the packet go back to
+ * the router, which will send it to the appropriate machine.
+ */
+ unsigned char temp[ETH_ALEN];
+ memcpy(temp, encap->eth.h_dest, sizeof(temp));
+ memcpy(encap->eth.h_dest, encap->eth.h_source,
+ sizeof(encap->eth.h_dest));
+ memcpy(encap->eth.h_source, temp, sizeof(encap->eth.h_source));
+
+ if (encap->unigue.next_hop == encap->unigue.hop_count - 1 &&
+ encap->unigue.last_hop_gre) {
+ return forward_with_gre(skb, dynptr, encap, next_hop, metrics);
+ }
+
+ metrics->forwarded_packets_total_gue++;
+ uint32_t old_saddr = encap->ip.saddr;
+ encap->ip.saddr = encap->ip.daddr;
+ encap->ip.daddr = next_hop->s_addr;
+ if (encap->unigue.next_hop < encap->unigue.hop_count) {
+ encap->unigue.next_hop++;
+ }
+
+ /* Remove ip->saddr, add next_hop->s_addr */
+ const uint64_t off = offsetof(typeof(*encap), ip.check);
+ int ret = bpf_l3_csum_replace(skb, off, old_saddr, next_hop->s_addr, 4);
+ if (ret < 0) {
+ return TC_ACT_SHOT;
+ }
+
+ return bpf_redirect(skb->ifindex, 0);
+}
+
+static ret_t skip_next_hops(__u64 *offset, int n)
+{
+ __u32 res;
+ switch (n) {
+ case 1:
+ *offset += sizeof(struct in_addr);
+ case 0:
+ return CONTINUE_PROCESSING;
+
+ default:
+ return TC_ACT_SHOT;
+ }
+}
+
+/* Get the next hop from the GLB header.
+ *
+ * Sets next_hop->s_addr to 0 if there are no more hops left.
+ * *offset points just past the variable length GLB header
+ * iff the call is successful.
+ */
+static ret_t get_next_hop(struct bpf_dynptr *dynptr, __u64 *offset, encap_headers_t *encap,
+ struct in_addr *next_hop)
+{
+ if (encap->unigue.next_hop > encap->unigue.hop_count)
+ return TC_ACT_SHOT;
+
+ /* Skip "used" next hops. */
+ MAYBE_RETURN(skip_next_hops(offset, encap->unigue.next_hop));
+
+ if (encap->unigue.next_hop == encap->unigue.hop_count) {
+ /* No more next hops, we are at the end of the GLB header. */
+ next_hop->s_addr = 0;
+ return CONTINUE_PROCESSING;
+ }
+
+ if (bpf_dynptr_read(next_hop, sizeof(*next_hop), dynptr, *offset, 0))
+ return TC_ACT_SHOT;
+
+ *offset += sizeof(*next_hop);
+
+	/* Skip the remaining next hops (may be zero). */
+ return skip_next_hops(offset, encap->unigue.hop_count - encap->unigue.next_hop - 1);
+}
+
+/* Fill a bpf_sock_tuple to be used with the socket lookup functions.
+ * This is a kludge that let's us work around verifier limitations:
+ *
+ * fill_tuple(&t, foo, sizeof(struct iphdr), 123, 321)
+ *
+ * clang will substitute a constant for sizeof, which allows the verifier
+ * to track its value. Based on this, it can figure out the constant
+ * return value, and calling code works while still being "generic" to
+ * IPv4 and IPv6.
+ */
+static uint64_t fill_tuple(struct bpf_sock_tuple *tuple, void *iph,
+ uint64_t iphlen, uint16_t sport, uint16_t dport)
+{
+ switch (iphlen) {
+ case sizeof(struct iphdr): {
+ struct iphdr *ipv4 = (struct iphdr *)iph;
+ tuple->ipv4.daddr = ipv4->daddr;
+ tuple->ipv4.saddr = ipv4->saddr;
+ tuple->ipv4.sport = sport;
+ tuple->ipv4.dport = dport;
+ return sizeof(tuple->ipv4);
+ }
+
+ case sizeof(struct ipv6hdr): {
+ struct ipv6hdr *ipv6 = (struct ipv6hdr *)iph;
+ memcpy(&tuple->ipv6.daddr, &ipv6->daddr,
+ sizeof(tuple->ipv6.daddr));
+ memcpy(&tuple->ipv6.saddr, &ipv6->saddr,
+ sizeof(tuple->ipv6.saddr));
+ tuple->ipv6.sport = sport;
+ tuple->ipv6.dport = dport;
+ return sizeof(tuple->ipv6);
+ }
+
+ default:
+ return 0;
+ }
+}
+
+static verdict_t classify_tcp(struct __sk_buff *skb, struct bpf_sock_tuple *tuple,
+ uint64_t tuplen, void *iph, struct tcphdr *tcp)
+{
+ struct bpf_sock *sk =
+ bpf_skc_lookup_tcp(skb, tuple, tuplen, BPF_F_CURRENT_NETNS, 0);
+
+ if (sk == NULL)
+ return UNKNOWN;
+
+ if (sk->state != BPF_TCP_LISTEN) {
+ bpf_sk_release(sk);
+ return ESTABLISHED;
+ }
+
+ if (iph != NULL && tcp != NULL) {
+ /* Kludge: we've run out of arguments, but need the length of the ip header. */
+ uint64_t iphlen = sizeof(struct iphdr);
+
+ if (tuplen == sizeof(tuple->ipv6))
+ iphlen = sizeof(struct ipv6hdr);
+
+ if (bpf_tcp_check_syncookie(sk, iph, iphlen, tcp,
+ sizeof(*tcp)) == 0) {
+ bpf_sk_release(sk);
+ return SYN_COOKIE;
+ }
+ }
+
+ bpf_sk_release(sk);
+ return UNKNOWN;
+}
+
+static verdict_t classify_udp(struct __sk_buff *skb, struct bpf_sock_tuple *tuple, uint64_t tuplen)
+{
+ struct bpf_sock *sk =
+ bpf_sk_lookup_udp(skb, tuple, tuplen, BPF_F_CURRENT_NETNS, 0);
+
+ if (sk == NULL)
+ return UNKNOWN;
+
+ if (sk->state == BPF_TCP_ESTABLISHED) {
+ bpf_sk_release(sk);
+ return ESTABLISHED;
+ }
+
+ bpf_sk_release(sk);
+ return UNKNOWN;
+}
+
+static verdict_t classify_icmp(struct __sk_buff *skb, uint8_t proto, struct bpf_sock_tuple *tuple,
+ uint64_t tuplen, metrics_t *metrics)
+{
+ switch (proto) {
+ case IPPROTO_TCP:
+ return classify_tcp(skb, tuple, tuplen, NULL, NULL);
+
+ case IPPROTO_UDP:
+ return classify_udp(skb, tuple, tuplen);
+
+ default:
+ metrics->errors_total_malformed_icmp++;
+ return INVALID;
+ }
+}
+
+static verdict_t process_icmpv4(struct __sk_buff *skb, struct bpf_dynptr *dynptr, __u64 *offset,
+ metrics_t *metrics)
+{
+ struct icmphdr icmp;
+ struct iphdr ipv4;
+
+ if (bpf_dynptr_read(&icmp, sizeof(icmp), dynptr, *offset, 0)) {
+ metrics->errors_total_malformed_icmp++;
+ return INVALID;
+ }
+
+ *offset += sizeof(icmp);
+
+ /* We should never receive encapsulated echo replies. */
+ if (icmp.type == ICMP_ECHOREPLY) {
+ metrics->errors_total_icmp_echo_replies++;
+ return INVALID;
+ }
+
+ if (icmp.type == ICMP_ECHO)
+ return ECHO_REQUEST;
+
+ if (icmp.type != ICMP_DEST_UNREACH || icmp.code != ICMP_FRAG_NEEDED) {
+ metrics->errors_total_unwanted_icmp++;
+ return INVALID;
+ }
+
+ if (pkt_parse_ipv4(dynptr, offset, &ipv4)) {
+ metrics->errors_total_malformed_icmp_pkt_too_big++;
+ return INVALID;
+ }
+
+ /* The source address in the outer IP header is from the entity that
+ * originated the ICMP message. Use the original IP header to restore
+ * the correct flow tuple.
+ */
+ struct bpf_sock_tuple tuple;
+ tuple.ipv4.saddr = ipv4.daddr;
+ tuple.ipv4.daddr = ipv4.saddr;
+
+ if (!pkt_parse_icmp_l4_ports(dynptr, offset, (flow_ports_t *)&tuple.ipv4.sport)) {
+ metrics->errors_total_malformed_icmp_pkt_too_big++;
+ return INVALID;
+ }
+
+ return classify_icmp(skb, ipv4.protocol, &tuple,
+ sizeof(tuple.ipv4), metrics);
+}
+
+static verdict_t process_icmpv6(struct bpf_dynptr *dynptr, __u64 *offset, struct __sk_buff *skb,
+ metrics_t *metrics)
+{
+ struct bpf_sock_tuple tuple;
+ struct ipv6hdr ipv6;
+ struct icmp6hdr icmp6;
+ bool is_fragment;
+ uint8_t l4_proto;
+
+ if (bpf_dynptr_read(&icmp6, sizeof(icmp6), dynptr, *offset, 0)) {
+ metrics->errors_total_malformed_icmp++;
+ return INVALID;
+ }
+
+ /* We should never receive encapsulated echo replies. */
+ if (icmp6.icmp6_type == ICMPV6_ECHO_REPLY) {
+ metrics->errors_total_icmp_echo_replies++;
+ return INVALID;
+ }
+
+ if (icmp6.icmp6_type == ICMPV6_ECHO_REQUEST) {
+ return ECHO_REQUEST;
+ }
+
+ if (icmp6.icmp6_type != ICMPV6_PKT_TOOBIG) {
+ metrics->errors_total_unwanted_icmp++;
+ return INVALID;
+ }
+
+ if (pkt_parse_ipv6(dynptr, offset, &ipv6, &l4_proto, &is_fragment)) {
+ metrics->errors_total_malformed_icmp_pkt_too_big++;
+ return INVALID;
+ }
+
+ if (is_fragment) {
+ metrics->errors_total_fragmented_ip++;
+ return INVALID;
+ }
+
+ /* Swap source and dest addresses. */
+ memcpy(&tuple.ipv6.saddr, &ipv6.daddr, sizeof(tuple.ipv6.saddr));
+ memcpy(&tuple.ipv6.daddr, &ipv6.saddr, sizeof(tuple.ipv6.daddr));
+
+ if (!pkt_parse_icmp_l4_ports(dynptr, offset, (flow_ports_t *)&tuple.ipv6.sport)) {
+ metrics->errors_total_malformed_icmp_pkt_too_big++;
+ return INVALID;
+ }
+
+ return classify_icmp(skb, l4_proto, &tuple, sizeof(tuple.ipv6),
+ metrics);
+}
+
+static verdict_t process_tcp(struct bpf_dynptr *dynptr, __u64 *offset, struct __sk_buff *skb,
+ struct iphdr_info *info, metrics_t *metrics)
+{
+ struct bpf_sock_tuple tuple;
+ struct tcphdr tcp;
+ uint64_t tuplen;
+
+ metrics->l4_protocol_packets_total_tcp++;
+
+ if (bpf_dynptr_read(&tcp, sizeof(tcp), dynptr, *offset, 0)) {
+ metrics->errors_total_malformed_tcp++;
+ return INVALID;
+ }
+
+ *offset += sizeof(tcp);
+
+ if (tcp.syn)
+ return SYN;
+
+ tuplen = fill_tuple(&tuple, info->hdr, info->len, tcp.source, tcp.dest);
+ return classify_tcp(skb, &tuple, tuplen, info->hdr, &tcp);
+}
+
+static verdict_t process_udp(struct bpf_dynptr *dynptr, __u64 *offset, struct __sk_buff *skb,
+ struct iphdr_info *info, metrics_t *metrics)
+{
+ struct bpf_sock_tuple tuple;
+ struct udphdr udph;
+ uint64_t tuplen;
+
+ metrics->l4_protocol_packets_total_udp++;
+
+ if (bpf_dynptr_read(&udph, sizeof(udph), dynptr, *offset, 0)) {
+ metrics->errors_total_malformed_udp++;
+ return INVALID;
+ }
+ *offset += sizeof(udph);
+
+ tuplen = fill_tuple(&tuple, info->hdr, info->len, udph.source, udph.dest);
+ return classify_udp(skb, &tuple, tuplen);
+}
+
+static verdict_t process_ipv4(struct __sk_buff *skb, struct bpf_dynptr *dynptr,
+ __u64 *offset, metrics_t *metrics)
+{
+ struct iphdr ipv4;
+ struct iphdr_info info = {
+ .hdr = &ipv4,
+ .len = sizeof(ipv4),
+ };
+
+ metrics->l3_protocol_packets_total_ipv4++;
+
+ if (pkt_parse_ipv4(dynptr, offset, &ipv4)) {
+ metrics->errors_total_malformed_ip++;
+ return INVALID;
+ }
+
+ if (ipv4.version != 4) {
+ metrics->errors_total_malformed_ip++;
+ return INVALID;
+ }
+
+ if (ipv4_is_fragment(&ipv4)) {
+ metrics->errors_total_fragmented_ip++;
+ return INVALID;
+ }
+
+ switch (ipv4.protocol) {
+ case IPPROTO_ICMP:
+ return process_icmpv4(skb, dynptr, offset, metrics);
+
+ case IPPROTO_TCP:
+ return process_tcp(dynptr, offset, skb, &info, metrics);
+
+ case IPPROTO_UDP:
+ return process_udp(dynptr, offset, skb, &info, metrics);
+
+ default:
+ metrics->errors_total_unknown_l4_proto++;
+ return INVALID;
+ }
+}
+
+static verdict_t process_ipv6(struct __sk_buff *skb, struct bpf_dynptr *dynptr,
+ __u64 *offset, metrics_t *metrics)
+{
+ struct ipv6hdr ipv6;
+ struct iphdr_info info = {
+ .hdr = &ipv6,
+ .len = sizeof(ipv6),
+ };
+ uint8_t l4_proto;
+ bool is_fragment;
+
+ metrics->l3_protocol_packets_total_ipv6++;
+
+ if (pkt_parse_ipv6(dynptr, offset, &ipv6, &l4_proto, &is_fragment)) {
+ metrics->errors_total_malformed_ip++;
+ return INVALID;
+ }
+
+ if (ipv6.version != 6) {
+ metrics->errors_total_malformed_ip++;
+ return INVALID;
+ }
+
+ if (is_fragment) {
+ metrics->errors_total_fragmented_ip++;
+ return INVALID;
+ }
+
+ switch (l4_proto) {
+ case IPPROTO_ICMPV6:
+ return process_icmpv6(dynptr, offset, skb, metrics);
+
+ case IPPROTO_TCP:
+ return process_tcp(dynptr, offset, skb, &info, metrics);
+
+ case IPPROTO_UDP:
+ return process_udp(dynptr, offset, skb, &info, metrics);
+
+ default:
+ metrics->errors_total_unknown_l4_proto++;
+ return INVALID;
+ }
+}
+
+SEC("tc")
+int cls_redirect(struct __sk_buff *skb)
+{
+ __u8 encap_buffer[sizeof(encap_headers_t)] = {};
+ struct bpf_dynptr dynptr;
+ struct in_addr next_hop;
+ /* Tracks offset of the dynptr. This will be unnecessary once
+ * bpf_dynptr_advance() is available.
+ */
+ __u64 off = 0;
+ ret_t ret;
+
+ bpf_dynptr_from_skb(skb, 0, &dynptr);
+
+ metrics_t *metrics = get_global_metrics();
+ if (metrics == NULL)
+ return TC_ACT_SHOT;
+
+ metrics->processed_packets_total++;
+
+ /* Pass bogus packets as long as we're not sure they're
+ * destined for us.
+ */
+ if (skb->protocol != bpf_htons(ETH_P_IP))
+ return TC_ACT_OK;
+
+ encap_headers_t *encap;
+
+ /* Make sure that all encapsulation headers are available in
+ * the linear portion of the skb. This makes it easy to manipulate them.
+ */
+ if (bpf_skb_pull_data(skb, sizeof(*encap)))
+ return TC_ACT_OK;
+
+ encap = bpf_dynptr_slice_rdwr(&dynptr, 0, encap_buffer, sizeof(encap_buffer));
+ if (!encap)
+ return TC_ACT_OK;
+
+ off += sizeof(*encap);
+
+ if (encap->ip.ihl != 5)
+ /* We never have any options. */
+ return TC_ACT_OK;
+
+ if (encap->ip.daddr != ENCAPSULATION_IP ||
+ encap->ip.protocol != IPPROTO_UDP)
+ return TC_ACT_OK;
+
+ /* TODO Check UDP length? */
+ if (encap->udp.dest != ENCAPSULATION_PORT)
+ return TC_ACT_OK;
+
+ /* We now know that the packet is destined to us, we can
+ * drop bogus ones.
+ */
+ if (ipv4_is_fragment((void *)&encap->ip)) {
+ metrics->errors_total_fragmented_ip++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->gue.variant != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->gue.control != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->gue.flags != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->gue.hlen !=
+ sizeof(encap->unigue) / 4 + encap->unigue.hop_count) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->unigue.version != 0) {
+ metrics->errors_total_malformed_encapsulation++;
+ return TC_ACT_SHOT;
+ }
+
+ if (encap->unigue.reserved != 0)
+ return TC_ACT_SHOT;
+
+ MAYBE_RETURN(get_next_hop(&dynptr, &off, encap, &next_hop));
+
+ if (next_hop.s_addr == 0) {
+ metrics->accepted_packets_total_last_hop++;
+ return accept_locally(skb, encap);
+ }
+
+ verdict_t verdict;
+ switch (encap->gue.proto_ctype) {
+ case IPPROTO_IPIP:
+ verdict = process_ipv4(skb, &dynptr, &off, metrics);
+ break;
+
+ case IPPROTO_IPV6:
+ verdict = process_ipv6(skb, &dynptr, &off, metrics);
+ break;
+
+ default:
+ metrics->errors_total_unknown_l3_proto++;
+ return TC_ACT_SHOT;
+ }
+
+ switch (verdict) {
+ case INVALID:
+ /* metrics have already been bumped */
+ return TC_ACT_SHOT;
+
+ case UNKNOWN:
+ return forward_to_next_hop(skb, &dynptr, encap, &next_hop, metrics);
+
+ case ECHO_REQUEST:
+ metrics->accepted_packets_total_icmp_echo_request++;
+ break;
+
+ case SYN:
+ if (encap->unigue.forward_syn) {
+ return forward_to_next_hop(skb, &dynptr, encap, &next_hop,
+ metrics);
+ }
+
+ metrics->accepted_packets_total_syn++;
+ break;
+
+ case SYN_COOKIE:
+ metrics->accepted_packets_total_syn_cookies++;
+ break;
+
+ case ESTABLISHED:
+ metrics->accepted_packets_total_established++;
+ break;
+ }
+
+ ret = accept_locally(skb, encap);
+
+ if (encap == encap_buffer)
+ bpf_dynptr_write(&dynptr, 0, encap_buffer, sizeof(encap_buffer), 0);
+
+ return ret;
+}
__failure __msg("arg#0 expected pointer to stack or dynptr_ptr")
int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size)
{
- unsigned long val;
+ unsigned long val = 0;
return bpf_verify_pkcs7_signature((struct bpf_dynptr *)val,
(struct bpf_dynptr *)val, NULL);
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+#include <stddef.h>
+#include <stdbool.h>
+#include <string.h>
+#include <linux/pkt_cls.h>
+#include <linux/bpf.h>
+#include <linux/in.h>
+#include <linux/if_ether.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/icmp.h>
+#include <linux/icmpv6.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <bpf/bpf_helpers.h>
+#include "test_iptunnel_common.h"
+#include <bpf/bpf_endian.h>
+
+#include "bpf_kfuncs.h"
+
+static __always_inline __u32 rol32(__u32 word, unsigned int shift)
+{
+ return (word << shift) | (word >> ((-shift) & 31));
+}
+
+/* copy paste of jhash from kernel sources to make sure llvm
+ * can compile it into valid sequence of bpf instructions
+ */
+#define __jhash_mix(a, b, c) \
+{ \
+ a -= c; a ^= rol32(c, 4); c += b; \
+ b -= a; b ^= rol32(a, 6); a += c; \
+ c -= b; c ^= rol32(b, 8); b += a; \
+ a -= c; a ^= rol32(c, 16); c += b; \
+ b -= a; b ^= rol32(a, 19); a += c; \
+ c -= b; c ^= rol32(b, 4); b += a; \
+}
+
+#define __jhash_final(a, b, c) \
+{ \
+ c ^= b; c -= rol32(b, 14); \
+ a ^= c; a -= rol32(c, 11); \
+ b ^= a; b -= rol32(a, 25); \
+ c ^= b; c -= rol32(b, 16); \
+ a ^= c; a -= rol32(c, 4); \
+ b ^= a; b -= rol32(a, 14); \
+ c ^= b; c -= rol32(b, 24); \
+}
+
+#define JHASH_INITVAL 0xdeadbeef
+
+typedef unsigned int u32;
+
+static __noinline u32 jhash(const void *key, u32 length, u32 initval)
+{
+ u32 a, b, c;
+ const unsigned char *k = key;
+
+ a = b = c = JHASH_INITVAL + length + initval;
+
+ while (length > 12) {
+ a += *(u32 *)(k);
+ b += *(u32 *)(k + 4);
+ c += *(u32 *)(k + 8);
+ __jhash_mix(a, b, c);
+ length -= 12;
+ k += 12;
+ }
+ switch (length) {
+ case 12: c += (u32)k[11]<<24;
+ case 11: c += (u32)k[10]<<16;
+ case 10: c += (u32)k[9]<<8;
+ case 9: c += k[8];
+ case 8: b += (u32)k[7]<<24;
+ case 7: b += (u32)k[6]<<16;
+ case 6: b += (u32)k[5]<<8;
+ case 5: b += k[4];
+ case 4: a += (u32)k[3]<<24;
+ case 3: a += (u32)k[2]<<16;
+ case 2: a += (u32)k[1]<<8;
+ case 1: a += k[0];
+ __jhash_final(a, b, c);
+ case 0: /* Nothing left to add */
+ break;
+ }
+
+ return c;
+}
+
+static __noinline u32 __jhash_nwords(u32 a, u32 b, u32 c, u32 initval)
+{
+ a += initval;
+ b += initval;
+ c += initval;
+ __jhash_final(a, b, c);
+ return c;
+}
+
+static __noinline u32 jhash_2words(u32 a, u32 b, u32 initval)
+{
+ return __jhash_nwords(a, b, 0, initval + JHASH_INITVAL + (2 << 2));
+}
+
+#define PCKT_FRAGMENTED 65343
+#define IPV4_HDR_LEN_NO_OPT 20
+#define IPV4_PLUS_ICMP_HDR 28
+#define IPV6_PLUS_ICMP_HDR 48
+#define RING_SIZE 2
+#define MAX_VIPS 12
+#define MAX_REALS 5
+#define CTL_MAP_SIZE 16
+#define CH_RINGS_SIZE (MAX_VIPS * RING_SIZE)
+#define F_IPV6 (1 << 0)
+#define F_HASH_NO_SRC_PORT (1 << 0)
+#define F_ICMP (1 << 0)
+#define F_SYN_SET (1 << 1)
+
+struct packet_description {
+ union {
+ __be32 src;
+ __be32 srcv6[4];
+ };
+ union {
+ __be32 dst;
+ __be32 dstv6[4];
+ };
+ union {
+ __u32 ports;
+ __u16 port16[2];
+ };
+ __u8 proto;
+ __u8 flags;
+};
+
+struct ctl_value {
+ union {
+ __u64 value;
+ __u32 ifindex;
+ __u8 mac[6];
+ };
+};
+
+struct vip_meta {
+ __u32 flags;
+ __u32 vip_num;
+};
+
+struct real_definition {
+ union {
+ __be32 dst;
+ __be32 dstv6[4];
+ };
+ __u8 flags;
+};
+
+struct vip_stats {
+ __u64 bytes;
+ __u64 pkts;
+};
+
+struct eth_hdr {
+ unsigned char eth_dest[ETH_ALEN];
+ unsigned char eth_source[ETH_ALEN];
+ unsigned short eth_proto;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, MAX_VIPS);
+ __type(key, struct vip);
+ __type(value, struct vip_meta);
+} vip_map SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, CH_RINGS_SIZE);
+ __type(key, __u32);
+ __type(value, __u32);
+} ch_rings SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, MAX_REALS);
+ __type(key, __u32);
+ __type(value, struct real_definition);
+} reals SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ __uint(max_entries, MAX_VIPS);
+ __type(key, __u32);
+ __type(value, struct vip_stats);
+} stats SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, CTL_MAP_SIZE);
+ __type(key, __u32);
+ __type(value, struct ctl_value);
+} ctl_array SEC(".maps");
+
+static __noinline __u32 get_packet_hash(struct packet_description *pckt, bool ipv6)
+{
+ if (ipv6)
+ return jhash_2words(jhash(pckt->srcv6, 16, MAX_VIPS),
+ pckt->ports, CH_RINGS_SIZE);
+ else
+ return jhash_2words(pckt->src, pckt->ports, CH_RINGS_SIZE);
+}
+
+static __noinline bool get_packet_dst(struct real_definition **real,
+ struct packet_description *pckt,
+ struct vip_meta *vip_info,
+ bool is_ipv6)
+{
+ __u32 hash = get_packet_hash(pckt, is_ipv6);
+ __u32 key = RING_SIZE * vip_info->vip_num + hash % RING_SIZE;
+ __u32 *real_pos;
+
+ if (hash != 0x358459b7 /* jhash of ipv4 packet */ &&
+ hash != 0x2f4bc6bb /* jhash of ipv6 packet */)
+ return false;
+
+ real_pos = bpf_map_lookup_elem(&ch_rings, &key);
+ if (!real_pos)
+ return false;
+ key = *real_pos;
+ *real = bpf_map_lookup_elem(&reals, &key);
+ if (!(*real))
+ return false;
+ return true;
+}
+
+static __noinline int parse_icmpv6(struct bpf_dynptr *skb_ptr, __u64 off,
+ struct packet_description *pckt)
+{
+ __u8 buffer[sizeof(struct ipv6hdr)] = {};
+ struct icmp6hdr *icmp_hdr;
+ struct ipv6hdr *ip6h;
+
+ icmp_hdr = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!icmp_hdr)
+ return TC_ACT_SHOT;
+
+ if (icmp_hdr->icmp6_type != ICMPV6_PKT_TOOBIG)
+ return TC_ACT_OK;
+ off += sizeof(struct icmp6hdr);
+ ip6h = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!ip6h)
+ return TC_ACT_SHOT;
+ pckt->proto = ip6h->nexthdr;
+ pckt->flags |= F_ICMP;
+ memcpy(pckt->srcv6, ip6h->daddr.s6_addr32, 16);
+ memcpy(pckt->dstv6, ip6h->saddr.s6_addr32, 16);
+ return TC_ACT_UNSPEC;
+}
+
+static __noinline int parse_icmp(struct bpf_dynptr *skb_ptr, __u64 off,
+ struct packet_description *pckt)
+{
+ __u8 buffer_icmp[sizeof(struct iphdr)] = {};
+ __u8 buffer_ip[sizeof(struct iphdr)] = {};
+ struct icmphdr *icmp_hdr;
+ struct iphdr *iph;
+
+ icmp_hdr = bpf_dynptr_slice(skb_ptr, off, buffer_icmp, sizeof(buffer_icmp));
+ if (!icmp_hdr)
+ return TC_ACT_SHOT;
+ if (icmp_hdr->type != ICMP_DEST_UNREACH ||
+ icmp_hdr->code != ICMP_FRAG_NEEDED)
+ return TC_ACT_OK;
+ off += sizeof(struct icmphdr);
+ iph = bpf_dynptr_slice(skb_ptr, off, buffer_ip, sizeof(buffer_ip));
+ if (!iph || iph->ihl != 5)
+ return TC_ACT_SHOT;
+ pckt->proto = iph->protocol;
+ pckt->flags |= F_ICMP;
+ pckt->src = iph->daddr;
+ pckt->dst = iph->saddr;
+ return TC_ACT_UNSPEC;
+}
+
+static __noinline bool parse_udp(struct bpf_dynptr *skb_ptr, __u64 off,
+ struct packet_description *pckt)
+{
+ __u8 buffer[sizeof(struct udphdr)] = {};
+ struct udphdr *udp;
+
+ udp = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!udp)
+ return false;
+
+ if (!(pckt->flags & F_ICMP)) {
+ pckt->port16[0] = udp->source;
+ pckt->port16[1] = udp->dest;
+ } else {
+ pckt->port16[0] = udp->dest;
+ pckt->port16[1] = udp->source;
+ }
+ return true;
+}
+
+static __noinline bool parse_tcp(struct bpf_dynptr *skb_ptr, __u64 off,
+ struct packet_description *pckt)
+{
+ __u8 buffer[sizeof(struct tcphdr)] = {};
+ struct tcphdr *tcp;
+
+ tcp = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!tcp)
+ return false;
+
+ if (tcp->syn)
+ pckt->flags |= F_SYN_SET;
+
+ if (!(pckt->flags & F_ICMP)) {
+ pckt->port16[0] = tcp->source;
+ pckt->port16[1] = tcp->dest;
+ } else {
+ pckt->port16[0] = tcp->dest;
+ pckt->port16[1] = tcp->source;
+ }
+ return true;
+}
+
+static __noinline int process_packet(struct bpf_dynptr *skb_ptr,
+ struct eth_hdr *eth, __u64 off,
+ bool is_ipv6, struct __sk_buff *skb)
+{
+ struct packet_description pckt = {};
+ struct bpf_tunnel_key tkey = {};
+ struct vip_stats *data_stats;
+ struct real_definition *dst;
+ struct vip_meta *vip_info;
+ struct ctl_value *cval;
+ __u32 v4_intf_pos = 1;
+ __u32 v6_intf_pos = 2;
+ struct ipv6hdr *ip6h;
+ struct vip vip = {};
+ struct iphdr *iph;
+ int tun_flag = 0;
+ __u16 pkt_bytes;
+ __u64 iph_len;
+ __u32 ifindex;
+ __u8 protocol;
+ __u32 vip_num;
+ int action;
+
+ tkey.tunnel_ttl = 64;
+ if (is_ipv6) {
+ __u8 buffer[sizeof(struct ipv6hdr)] = {};
+
+ ip6h = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!ip6h)
+ return TC_ACT_SHOT;
+
+ iph_len = sizeof(struct ipv6hdr);
+ protocol = ip6h->nexthdr;
+ pckt.proto = protocol;
+ pkt_bytes = bpf_ntohs(ip6h->payload_len);
+ off += iph_len;
+ if (protocol == IPPROTO_FRAGMENT) {
+ return TC_ACT_SHOT;
+ } else if (protocol == IPPROTO_ICMPV6) {
+ action = parse_icmpv6(skb_ptr, off, &pckt);
+ if (action >= 0)
+ return action;
+ off += IPV6_PLUS_ICMP_HDR;
+ } else {
+ memcpy(pckt.srcv6, ip6h->saddr.s6_addr32, 16);
+ memcpy(pckt.dstv6, ip6h->daddr.s6_addr32, 16);
+ }
+ } else {
+ __u8 buffer[sizeof(struct iphdr)] = {};
+
+ iph = bpf_dynptr_slice(skb_ptr, off, buffer, sizeof(buffer));
+ if (!iph || iph->ihl != 5)
+ return TC_ACT_SHOT;
+
+ protocol = iph->protocol;
+ pckt.proto = protocol;
+ pkt_bytes = bpf_ntohs(iph->tot_len);
+ off += IPV4_HDR_LEN_NO_OPT;
+
+ if (iph->frag_off & PCKT_FRAGMENTED)
+ return TC_ACT_SHOT;
+ if (protocol == IPPROTO_ICMP) {
+ action = parse_icmp(skb_ptr, off, &pckt);
+ if (action >= 0)
+ return action;
+ off += IPV4_PLUS_ICMP_HDR;
+ } else {
+ pckt.src = iph->saddr;
+ pckt.dst = iph->daddr;
+ }
+ }
+ protocol = pckt.proto;
+
+ if (protocol == IPPROTO_TCP) {
+ if (!parse_tcp(skb_ptr, off, &pckt))
+ return TC_ACT_SHOT;
+ } else if (protocol == IPPROTO_UDP) {
+ if (!parse_udp(skb_ptr, off, &pckt))
+ return TC_ACT_SHOT;
+ } else {
+ return TC_ACT_SHOT;
+ }
+
+ if (is_ipv6)
+ memcpy(vip.daddr.v6, pckt.dstv6, 16);
+ else
+ vip.daddr.v4 = pckt.dst;
+
+ vip.dport = pckt.port16[1];
+ vip.protocol = pckt.proto;
+ vip_info = bpf_map_lookup_elem(&vip_map, &vip);
+ if (!vip_info) {
+ vip.dport = 0;
+ vip_info = bpf_map_lookup_elem(&vip_map, &vip);
+ if (!vip_info)
+ return TC_ACT_SHOT;
+ pckt.port16[1] = 0;
+ }
+
+ if (vip_info->flags & F_HASH_NO_SRC_PORT)
+ pckt.port16[0] = 0;
+
+ if (!get_packet_dst(&dst, &pckt, vip_info, is_ipv6))
+ return TC_ACT_SHOT;
+
+ if (dst->flags & F_IPV6) {
+ cval = bpf_map_lookup_elem(&ctl_array, &v6_intf_pos);
+ if (!cval)
+ return TC_ACT_SHOT;
+ ifindex = cval->ifindex;
+ memcpy(tkey.remote_ipv6, dst->dstv6, 16);
+ tun_flag = BPF_F_TUNINFO_IPV6;
+ } else {
+ cval = bpf_map_lookup_elem(&ctl_array, &v4_intf_pos);
+ if (!cval)
+ return TC_ACT_SHOT;
+ ifindex = cval->ifindex;
+ tkey.remote_ipv4 = dst->dst;
+ }
+ vip_num = vip_info->vip_num;
+ data_stats = bpf_map_lookup_elem(&stats, &vip_num);
+ if (!data_stats)
+ return TC_ACT_SHOT;
+ data_stats->pkts++;
+ data_stats->bytes += pkt_bytes;
+ bpf_skb_set_tunnel_key(skb, &tkey, sizeof(tkey), tun_flag);
+ *(u32 *)eth->eth_dest = tkey.remote_ipv4;
+ return bpf_redirect(ifindex, 0);
+}
+
+SEC("tc")
+int balancer_ingress(struct __sk_buff *ctx)
+{
+ __u8 buffer[sizeof(struct eth_hdr)] = {};
+ struct bpf_dynptr ptr;
+ struct eth_hdr *eth;
+ __u32 eth_proto;
+ __u32 nh_off;
+ int err;
+
+ nh_off = sizeof(struct eth_hdr);
+
+ bpf_dynptr_from_skb(ctx, 0, &ptr);
+ eth = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!eth)
+ return TC_ACT_SHOT;
+ eth_proto = eth->eth_proto;
+ if (eth_proto == bpf_htons(ETH_P_IP))
+ err = process_packet(&ptr, eth, nh_off, false, ctx);
+ else if (eth_proto == bpf_htons(ETH_P_IPV6))
+ err = process_packet(&ptr, eth, nh_off, true, ctx);
+ else
+ return TC_ACT_SHOT;
+
+ if (eth == buffer)
+ bpf_dynptr_write(&ptr, 0, buffer, sizeof(buffer), 0);
+
+ return err;
+}
+
+char _license[] SEC("license") = "GPL";
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+/* This parsing logic is taken from the open source library katran, a layer 4
+ * load balancer.
+ *
+ * This code logic using dynptrs can be found in test_parse_tcp_hdr_opt_dynptr.c
+ *
+ * https://github.com/facebookincubator/katran/blob/main/katran/lib/bpf/pckt_parsing.h
+ */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <linux/tcp.h>
+#include <stdbool.h>
+#include <linux/ipv6.h>
+#include <linux/if_ether.h>
+#include "test_tcp_hdr_options.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Kind number used for experiments */
+const __u32 tcp_hdr_opt_kind_tpr = 0xFD;
+/* Length of the tcp header option */
+const __u32 tcp_hdr_opt_len_tpr = 6;
+/* maximum number of header options to check to lookup server_id */
+const __u32 tcp_hdr_opt_max_opt_checks = 15;
+
+__u32 server_id;
+
+struct hdr_opt_state {
+ __u32 server_id;
+ __u8 byte_offset;
+ __u8 hdr_bytes_remaining;
+};
+
+static int parse_hdr_opt(const struct xdp_md *xdp, struct hdr_opt_state *state)
+{
+ const void *data = (void *)(long)xdp->data;
+ const void *data_end = (void *)(long)xdp->data_end;
+ __u8 *tcp_opt, kind, hdr_len;
+
+ tcp_opt = (__u8 *)(data + state->byte_offset);
+ if (tcp_opt + 1 > data_end)
+ return -1;
+
+ kind = tcp_opt[0];
+
+ if (kind == TCPOPT_EOL)
+ return -1;
+
+ if (kind == TCPOPT_NOP) {
+ state->hdr_bytes_remaining--;
+ state->byte_offset++;
+ return 0;
+ }
+
+ if (state->hdr_bytes_remaining < 2 ||
+ tcp_opt + sizeof(__u8) + sizeof(__u8) > data_end)
+ return -1;
+
+ hdr_len = tcp_opt[1];
+ if (hdr_len > state->hdr_bytes_remaining)
+ return -1;
+
+ if (kind == tcp_hdr_opt_kind_tpr) {
+ if (hdr_len != tcp_hdr_opt_len_tpr)
+ return -1;
+
+ if (tcp_opt + tcp_hdr_opt_len_tpr > data_end)
+ return -1;
+
+ state->server_id = *(__u32 *)&tcp_opt[2];
+ return 1;
+ }
+
+ state->hdr_bytes_remaining -= hdr_len;
+ state->byte_offset += hdr_len;
+ return 0;
+}
+
+SEC("xdp")
+int xdp_ingress_v6(struct xdp_md *xdp)
+{
+ const void *data = (void *)(long)xdp->data;
+ const void *data_end = (void *)(long)xdp->data_end;
+ struct hdr_opt_state opt_state = {};
+ __u8 tcp_hdr_opt_len = 0;
+ struct tcphdr *tcp_hdr;
+ __u64 tcp_offset = 0;
+ __u32 off;
+ int err;
+
+ tcp_offset = sizeof(struct ethhdr) + sizeof(struct ipv6hdr);
+ tcp_hdr = (struct tcphdr *)(data + tcp_offset);
+ if (tcp_hdr + 1 > data_end)
+ return XDP_DROP;
+
+ tcp_hdr_opt_len = (tcp_hdr->doff * 4) - sizeof(struct tcphdr);
+ if (tcp_hdr_opt_len < tcp_hdr_opt_len_tpr)
+ return XDP_DROP;
+
+ opt_state.hdr_bytes_remaining = tcp_hdr_opt_len;
+ opt_state.byte_offset = sizeof(struct tcphdr) + tcp_offset;
+
+ /* max number of bytes of options in tcp header is 40 bytes */
+ for (int i = 0; i < tcp_hdr_opt_max_opt_checks; i++) {
+ err = parse_hdr_opt(xdp, &opt_state);
+
+ if (err || !opt_state.hdr_bytes_remaining)
+ break;
+ }
+
+ if (!opt_state.server_id)
+ return XDP_DROP;
+
+ server_id = opt_state.server_id;
+
+ return XDP_PASS;
+}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+
+/* This logic is lifted from a real-world use case of packet parsing, used in
+ * the open source library katran, a layer 4 load balancer.
+ *
+ * This test demonstrates how to parse packet contents using dynptrs. The
+ * original code (parsing without dynptrs) can be found in test_parse_tcp_hdr_opt.c
+ */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <linux/tcp.h>
+#include <stdbool.h>
+#include <linux/ipv6.h>
+#include <linux/if_ether.h>
+#include "test_tcp_hdr_options.h"
+#include "bpf_kfuncs.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Kind number used for experiments */
+const __u32 tcp_hdr_opt_kind_tpr = 0xFD;
+/* Length of the tcp header option */
+const __u32 tcp_hdr_opt_len_tpr = 6;
+/* maximum number of header options to check to lookup server_id */
+const __u32 tcp_hdr_opt_max_opt_checks = 15;
+
+__u32 server_id;
+
+static int parse_hdr_opt(struct bpf_dynptr *ptr, __u32 *off, __u8 *hdr_bytes_remaining,
+ __u32 *server_id)
+{
+ __u8 *tcp_opt, kind, hdr_len;
+ __u8 buffer[sizeof(kind) + sizeof(hdr_len) + sizeof(*server_id)];
+ __u8 *data;
+
+ __builtin_memset(buffer, 0, sizeof(buffer));
+
+ data = bpf_dynptr_slice(ptr, *off, buffer, sizeof(buffer));
+ if (!data)
+ return -1;
+
+ kind = data[0];
+
+ if (kind == TCPOPT_EOL)
+ return -1;
+
+ if (kind == TCPOPT_NOP) {
+ *off += 1;
+ *hdr_bytes_remaining -= 1;
+ return 0;
+ }
+
+ if (*hdr_bytes_remaining < 2)
+ return -1;
+
+ hdr_len = data[1];
+ if (hdr_len > *hdr_bytes_remaining)
+ return -1;
+
+ if (kind == tcp_hdr_opt_kind_tpr) {
+ if (hdr_len != tcp_hdr_opt_len_tpr)
+ return -1;
+
+ __builtin_memcpy(server_id, (__u32 *)(data + 2), sizeof(*server_id));
+ return 1;
+ }
+
+ *off += hdr_len;
+ *hdr_bytes_remaining -= hdr_len;
+ return 0;
+}
+
+SEC("xdp")
+int xdp_ingress_v6(struct xdp_md *xdp)
+{
+ __u8 buffer[sizeof(struct tcphdr)] = {};
+ __u8 hdr_bytes_remaining;
+ struct tcphdr *tcp_hdr;
+ __u8 tcp_hdr_opt_len;
+ int err = 0;
+ __u32 off;
+
+ struct bpf_dynptr ptr;
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr);
+
+ off = sizeof(struct ethhdr) + sizeof(struct ipv6hdr);
+
+ tcp_hdr = bpf_dynptr_slice(&ptr, off, buffer, sizeof(buffer));
+ if (!tcp_hdr)
+ return XDP_DROP;
+
+ tcp_hdr_opt_len = (tcp_hdr->doff * 4) - sizeof(struct tcphdr);
+ if (tcp_hdr_opt_len < tcp_hdr_opt_len_tpr)
+ return XDP_DROP;
+
+ hdr_bytes_remaining = tcp_hdr_opt_len;
+
+ off += sizeof(struct tcphdr);
+
+ /* max number of bytes of options in tcp header is 40 bytes */
+ for (int i = 0; i < tcp_hdr_opt_max_opt_checks; i++) {
+ err = parse_hdr_opt(&ptr, &off, &hdr_bytes_remaining, &server_id);
+
+ if (err || !hdr_bytes_remaining)
+ break;
+ }
+
+ if (!server_id)
+ return XDP_DROP;
+
+ return XDP_PASS;
+}
bool *ipv4)
{
struct bpf_sock_tuple *result;
+ __u64 ihl_len = 0;
__u8 proto = 0;
- __u64 ihl_len;
if (eth_proto == bpf_htons(ETH_P_IP)) {
struct iphdr *iph = (struct iphdr *)(data + nh_off);
SEC("tc")
int vxlan_set_tunnel_dst(struct __sk_buff *skb)
{
- int ret;
struct bpf_tunnel_key key;
struct vxlan_metadata md;
__u32 index = 0;
__u32 *local_ip = NULL;
+ int ret = 0;
local_ip = bpf_map_lookup_elem(&local_ip_map, &index);
if (!local_ip) {
SEC("tc")
int vxlan_set_tunnel_src(struct __sk_buff *skb)
{
- int ret;
struct bpf_tunnel_key key;
struct vxlan_metadata md;
__u32 index = 0;
__u32 *local_ip = NULL;
+ int ret = 0;
local_ip = bpf_map_lookup_elem(&local_ip_map, &index);
if (!local_ip) {
int ip6vxlan_set_tunnel_dst(struct __sk_buff *skb)
{
struct bpf_tunnel_key key;
- int ret;
__u32 index = 0;
__u32 *local_ip;
+ int ret = 0;
local_ip = bpf_map_lookup_elem(&local_ip_map, &index);
if (!local_ip) {
int ip6vxlan_set_tunnel_src(struct __sk_buff *skb)
{
struct bpf_tunnel_key key;
- int ret;
__u32 index = 0;
__u32 *local_ip;
+ int ret = 0;
local_ip = bpf_map_lookup_elem(&local_ip_map, &index);
if (!local_ip) {
int ip6vxlan_get_tunnel_src(struct __sk_buff *skb)
{
struct bpf_tunnel_key key;
- int ret;
__u32 index = 0;
__u32 *local_ip;
+ int ret = 0;
local_ip = bpf_map_lookup_elem(&local_ip_map, &index);
if (!local_ip) {
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta */
+#include <stddef.h>
+#include <string.h>
+#include <linux/bpf.h>
+#include <linux/if_ether.h>
+#include <linux/if_packet.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/in.h>
+#include <linux/udp.h>
+#include <linux/tcp.h>
+#include <linux/pkt_cls.h>
+#include <sys/socket.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include "test_iptunnel_common.h"
+#include "bpf_kfuncs.h"
+
+const size_t tcphdr_sz = sizeof(struct tcphdr);
+const size_t udphdr_sz = sizeof(struct udphdr);
+const size_t ethhdr_sz = sizeof(struct ethhdr);
+const size_t iphdr_sz = sizeof(struct iphdr);
+const size_t ipv6hdr_sz = sizeof(struct ipv6hdr);
+
+struct {
+ __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ __uint(max_entries, 256);
+ __type(key, __u32);
+ __type(value, __u64);
+} rxcnt SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, MAX_IPTNL_ENTRIES);
+ __type(key, struct vip);
+ __type(value, struct iptnl_info);
+} vip2tnl SEC(".maps");
+
+static __always_inline void count_tx(__u32 protocol)
+{
+ __u64 *rxcnt_count;
+
+ rxcnt_count = bpf_map_lookup_elem(&rxcnt, &protocol);
+ if (rxcnt_count)
+ *rxcnt_count += 1;
+}
+
+static __always_inline int get_dport(void *trans_data, __u8 protocol)
+{
+ struct tcphdr *th;
+ struct udphdr *uh;
+
+ switch (protocol) {
+ case IPPROTO_TCP:
+ th = (struct tcphdr *)trans_data;
+ return th->dest;
+ case IPPROTO_UDP:
+ uh = (struct udphdr *)trans_data;
+ return uh->dest;
+ default:
+ return 0;
+ }
+}
+
+static __always_inline void set_ethhdr(struct ethhdr *new_eth,
+ const struct ethhdr *old_eth,
+ const struct iptnl_info *tnl,
+ __be16 h_proto)
+{
+ memcpy(new_eth->h_source, old_eth->h_dest, sizeof(new_eth->h_source));
+ memcpy(new_eth->h_dest, tnl->dmac, sizeof(new_eth->h_dest));
+ new_eth->h_proto = h_proto;
+}
+
+static __always_inline int handle_ipv4(struct xdp_md *xdp, struct bpf_dynptr *xdp_ptr)
+{
+ __u8 eth_buffer[ethhdr_sz + iphdr_sz + ethhdr_sz];
+ __u8 iph_buffer_tcp[iphdr_sz + tcphdr_sz];
+ __u8 iph_buffer_udp[iphdr_sz + udphdr_sz];
+ struct bpf_dynptr new_xdp_ptr;
+ struct iptnl_info *tnl;
+ struct ethhdr *new_eth;
+ struct ethhdr *old_eth;
+ __u32 transport_hdr_sz;
+ struct iphdr *iph;
+ __u16 *next_iph;
+ __u16 payload_len;
+ struct vip vip = {};
+ int dport;
+ __u32 csum = 0;
+ int i;
+
+ __builtin_memset(eth_buffer, 0, sizeof(eth_buffer));
+ __builtin_memset(iph_buffer_tcp, 0, sizeof(iph_buffer_tcp));
+ __builtin_memset(iph_buffer_udp, 0, sizeof(iph_buffer_udp));
+
+ if (ethhdr_sz + iphdr_sz + tcphdr_sz > xdp->data_end - xdp->data)
+ iph = bpf_dynptr_slice(xdp_ptr, ethhdr_sz, iph_buffer_udp, sizeof(iph_buffer_udp));
+ else
+ iph = bpf_dynptr_slice(xdp_ptr, ethhdr_sz, iph_buffer_tcp, sizeof(iph_buffer_tcp));
+
+ if (!iph)
+ return XDP_DROP;
+
+ dport = get_dport(iph + 1, iph->protocol);
+ if (dport == -1)
+ return XDP_DROP;
+
+ vip.protocol = iph->protocol;
+ vip.family = AF_INET;
+ vip.daddr.v4 = iph->daddr;
+ vip.dport = dport;
+ payload_len = bpf_ntohs(iph->tot_len);
+
+ tnl = bpf_map_lookup_elem(&vip2tnl, &vip);
+ /* It only does v4-in-v4 */
+ if (!tnl || tnl->family != AF_INET)
+ return XDP_PASS;
+
+ if (bpf_xdp_adjust_head(xdp, 0 - (int)iphdr_sz))
+ return XDP_DROP;
+
+ bpf_dynptr_from_xdp(xdp, 0, &new_xdp_ptr);
+ new_eth = bpf_dynptr_slice_rdwr(&new_xdp_ptr, 0, eth_buffer, sizeof(eth_buffer));
+ if (!new_eth)
+ return XDP_DROP;
+
+ iph = (struct iphdr *)(new_eth + 1);
+ old_eth = (struct ethhdr *)(iph + 1);
+
+ set_ethhdr(new_eth, old_eth, tnl, bpf_htons(ETH_P_IP));
+
+ if (new_eth == eth_buffer)
+ bpf_dynptr_write(&new_xdp_ptr, 0, eth_buffer, sizeof(eth_buffer), 0);
+
+ iph->version = 4;
+ iph->ihl = iphdr_sz >> 2;
+ iph->frag_off = 0;
+ iph->protocol = IPPROTO_IPIP;
+ iph->check = 0;
+ iph->tos = 0;
+ iph->tot_len = bpf_htons(payload_len + iphdr_sz);
+ iph->daddr = tnl->daddr.v4;
+ iph->saddr = tnl->saddr.v4;
+ iph->ttl = 8;
+
+ next_iph = (__u16 *)iph;
+ for (i = 0; i < iphdr_sz >> 1; i++)
+ csum += *next_iph++;
+
+ iph->check = ~((csum & 0xffff) + (csum >> 16));
+
+ count_tx(vip.protocol);
+
+ return XDP_TX;
+}
+
+static __always_inline int handle_ipv6(struct xdp_md *xdp, struct bpf_dynptr *xdp_ptr)
+{
+ __u8 eth_buffer[ethhdr_sz + ipv6hdr_sz + ethhdr_sz];
+ __u8 ip6h_buffer_tcp[ipv6hdr_sz + tcphdr_sz];
+ __u8 ip6h_buffer_udp[ipv6hdr_sz + udphdr_sz];
+ struct bpf_dynptr new_xdp_ptr;
+ struct iptnl_info *tnl;
+ struct ethhdr *new_eth;
+ struct ethhdr *old_eth;
+ __u32 transport_hdr_sz;
+ struct ipv6hdr *ip6h;
+ __u16 payload_len;
+ struct vip vip = {};
+ int dport;
+
+ __builtin_memset(eth_buffer, 0, sizeof(eth_buffer));
+ __builtin_memset(ip6h_buffer_tcp, 0, sizeof(ip6h_buffer_tcp));
+ __builtin_memset(ip6h_buffer_udp, 0, sizeof(ip6h_buffer_udp));
+
+ if (ethhdr_sz + iphdr_sz + tcphdr_sz > xdp->data_end - xdp->data)
+ ip6h = bpf_dynptr_slice(xdp_ptr, ethhdr_sz, ip6h_buffer_udp, sizeof(ip6h_buffer_udp));
+ else
+ ip6h = bpf_dynptr_slice(xdp_ptr, ethhdr_sz, ip6h_buffer_tcp, sizeof(ip6h_buffer_tcp));
+
+ if (!ip6h)
+ return XDP_DROP;
+
+ dport = get_dport(ip6h + 1, ip6h->nexthdr);
+ if (dport == -1)
+ return XDP_DROP;
+
+ vip.protocol = ip6h->nexthdr;
+ vip.family = AF_INET6;
+ memcpy(vip.daddr.v6, ip6h->daddr.s6_addr32, sizeof(vip.daddr));
+ vip.dport = dport;
+ payload_len = ip6h->payload_len;
+
+ tnl = bpf_map_lookup_elem(&vip2tnl, &vip);
+ /* It only does v6-in-v6 */
+ if (!tnl || tnl->family != AF_INET6)
+ return XDP_PASS;
+
+ if (bpf_xdp_adjust_head(xdp, 0 - (int)ipv6hdr_sz))
+ return XDP_DROP;
+
+ bpf_dynptr_from_xdp(xdp, 0, &new_xdp_ptr);
+ new_eth = bpf_dynptr_slice_rdwr(&new_xdp_ptr, 0, eth_buffer, sizeof(eth_buffer));
+ if (!new_eth)
+ return XDP_DROP;
+
+ ip6h = (struct ipv6hdr *)(new_eth + 1);
+ old_eth = (struct ethhdr *)(ip6h + 1);
+
+ set_ethhdr(new_eth, old_eth, tnl, bpf_htons(ETH_P_IPV6));
+
+ if (new_eth == eth_buffer)
+ bpf_dynptr_write(&new_xdp_ptr, 0, eth_buffer, sizeof(eth_buffer), 0);
+
+ ip6h->version = 6;
+ ip6h->priority = 0;
+ memset(ip6h->flow_lbl, 0, sizeof(ip6h->flow_lbl));
+ ip6h->payload_len = bpf_htons(bpf_ntohs(payload_len) + ipv6hdr_sz);
+ ip6h->nexthdr = IPPROTO_IPV6;
+ ip6h->hop_limit = 8;
+ memcpy(ip6h->saddr.s6_addr32, tnl->saddr.v6, sizeof(tnl->saddr.v6));
+ memcpy(ip6h->daddr.s6_addr32, tnl->daddr.v6, sizeof(tnl->daddr.v6));
+
+ count_tx(vip.protocol);
+
+ return XDP_TX;
+}
+
+SEC("xdp")
+int _xdp_tx_iptunnel(struct xdp_md *xdp)
+{
+ __u8 buffer[ethhdr_sz];
+ struct bpf_dynptr ptr;
+ struct ethhdr *eth;
+ __u16 h_proto;
+
+ __builtin_memset(buffer, 0, sizeof(buffer));
+
+ bpf_dynptr_from_xdp(xdp, 0, &ptr);
+ eth = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+ if (!eth)
+ return XDP_DROP;
+
+ h_proto = eth->h_proto;
+
+ if (h_proto == bpf_htons(ETH_P_IP))
+ return handle_ipv4(xdp, &ptr);
+	else if (h_proto == bpf_htons(ETH_P_IPV6))
+		return handle_ipv6(xdp, &ptr);
+ else
+ return XDP_DROP;
+}
+
+char _license[] SEC("license") = "GPL";
__type(value, struct elem);
} lru SEC(".maps");
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct elem);
+} abs_timer SEC(".maps");
+
__u64 bss_data;
+__u64 abs_data;
__u64 err;
__u64 ok;
__u64 callback_check = 52;
return bpf_timer_test();
}
+
+/* callback for absolute timer */
+static int timer_cb3(void *map, int *key, struct bpf_timer *timer)
+{
+ abs_data += 6;
+
+ if (abs_data < 12) {
+ bpf_timer_start(timer, bpf_ktime_get_boot_ns() + 1000,
+ BPF_F_TIMER_ABS);
+ } else {
+ /* Re-arm timer ~35 seconds in future */
+ bpf_timer_start(timer, bpf_ktime_get_boot_ns() + (1ull << 35),
+ BPF_F_TIMER_ABS);
+ }
+
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test3")
+int BPF_PROG2(test3, int, a)
+{
+ int key = 0;
+ struct bpf_timer *timer;
+
+ bpf_printk("test3");
+
+ timer = bpf_map_lookup_elem(&abs_timer, &key);
+ if (timer) {
+ if (bpf_timer_init(timer, &abs_timer, CLOCK_BOOTTIME) != 0)
+ err |= 2048;
+ bpf_timer_set_callback(timer, timer_cb3);
+ bpf_timer_start(timer, bpf_ktime_get_boot_ns() + 1000,
+ BPF_F_TIMER_ABS);
+ }
+
+ return 0;
+}
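For context: with BPF_F_TIMER_ABS the second argument to bpf_timer_start() is interpreted as an absolute expiry time on the clock the timer was initialized with (CLOCK_BOOTTIME here), rather than a relative delay, which is why the callback adds to bpf_ktime_get_boot_ns(). Side by side (illustrative only):

	/* default: relative, fires roughly 1000 ns from now */
	bpf_timer_start(timer, 1000, 0);

	/* BPF_F_TIMER_ABS: the same deadline, expressed as an absolute
	 * CLOCK_BOOTTIME timestamp
	 */
	bpf_timer_start(timer, bpf_ktime_get_boot_ns() + 1000, BPF_F_TIMER_ABS);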
return 0;
}
-SEC("fentry/" SYS_PREFIX "sys_getrlimit")
+SEC("fentry/" SYS_PREFIX "sys_prlimit64")
int test_user_ringbuf_epoll(void *ctx)
{
long num_samples;
#define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
#define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
#define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
+#define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags="
struct test_spec {
const char *name;
bool expect_failure;
- const char *expect_msg;
+ const char **expect_msgs;
+ size_t expect_msg_cnt;
int log_level;
+ int prog_flags;
};
static int tester_init(struct test_loader *tester)
for (i = 1; i < btf__type_cnt(btf); i++) {
const struct btf_type *t;
- const char *s;
+ const char *s, *val;
+ char *e;
t = btf__type_by_id(btf, i);
if (!btf_is_decl_tag(t))
} else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
spec->expect_failure = false;
} else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
- spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
+ void *tmp;
+ const char **msg;
+
+ tmp = realloc(spec->expect_msgs,
+ (1 + spec->expect_msg_cnt) * sizeof(void *));
+ if (!tmp) {
+ ASSERT_FAIL("failed to realloc memory for messages\n");
+ return -ENOMEM;
+ }
+ spec->expect_msgs = tmp;
+ msg = &spec->expect_msgs[spec->expect_msg_cnt++];
+ *msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
} else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
+ val = s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1;
errno = 0;
- spec->log_level = strtol(s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1, NULL, 0);
- if (errno) {
+ spec->log_level = strtol(val, &e, 0);
+ if (errno || e[0] != '\0') {
ASSERT_FAIL("failed to parse test log level from '%s'", s);
return -EINVAL;
}
+ } else if (str_has_pfx(s, TEST_TAG_PROG_FLAGS_PFX)) {
+ val = s + sizeof(TEST_TAG_PROG_FLAGS_PFX) - 1;
+ if (strcmp(val, "BPF_F_STRICT_ALIGNMENT") == 0) {
+ spec->prog_flags |= BPF_F_STRICT_ALIGNMENT;
+ } else if (strcmp(val, "BPF_F_ANY_ALIGNMENT") == 0) {
+ spec->prog_flags |= BPF_F_ANY_ALIGNMENT;
+ } else if (strcmp(val, "BPF_F_TEST_RND_HI32") == 0) {
+ spec->prog_flags |= BPF_F_TEST_RND_HI32;
+ } else if (strcmp(val, "BPF_F_TEST_STATE_FREQ") == 0) {
+ spec->prog_flags |= BPF_F_TEST_STATE_FREQ;
+ } else if (strcmp(val, "BPF_F_SLEEPABLE") == 0) {
+ spec->prog_flags |= BPF_F_SLEEPABLE;
+ } else if (strcmp(val, "BPF_F_XDP_HAS_FRAGS") == 0) {
+ spec->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+ } else /* assume numeric value */ {
+ errno = 0;
+ spec->prog_flags |= strtol(val, &e, 0);
+ if (errno || e[0] != '\0') {
+ ASSERT_FAIL("failed to parse test prog flags from '%s'", s);
+ return -EINVAL;
+ }
+ }
}
}
struct bpf_object *obj,
struct bpf_program *prog)
{
- int min_log_level = 0;
+ int min_log_level = 0, prog_flags;
if (env.verbosity > VERBOSE_NONE)
min_log_level = 1;
else
bpf_program__set_log_level(prog, spec->log_level);
+ prog_flags = bpf_program__flags(prog);
+ bpf_program__set_flags(prog, prog_flags | spec->prog_flags);
+
tester->log_buf[0] = '\0';
+ tester->next_match_pos = 0;
}
static void emit_verifier_log(const char *log_buf, bool force)
struct bpf_program *prog,
int load_err)
{
- if (spec->expect_msg) {
+ int i, j;
+
+ for (i = 0; i < spec->expect_msg_cnt; i++) {
char *match;
+ const char *expect_msg;
+
+ expect_msg = spec->expect_msgs[i];
- match = strstr(tester->log_buf, spec->expect_msg);
+ match = strstr(tester->log_buf + tester->next_match_pos, expect_msg);
if (!ASSERT_OK_PTR(match, "expect_msg")) {
/* if we are in verbose mode, we've already emitted log */
if (env.verbosity == VERBOSE_NONE)
emit_verifier_log(tester->log_buf, true /*force*/);
- fprintf(stderr, "EXPECTED MSG: '%s'\n", spec->expect_msg);
+ for (j = 0; j < i; j++)
+ fprintf(stderr, "MATCHED MSG: '%s'\n", spec->expect_msgs[j]);
+ fprintf(stderr, "EXPECTED MSG: '%s'\n", expect_msg);
return;
}
+
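+		/* Start the next search right after this match so that the
+		 * expected messages must appear in the log in the given order.
+		 */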
+ tester->next_match_pos = match - tester->log_buf + strlen(expect_msg);
}
}
___ok; \
})
+#define SYS(goto_label, fmt, ...) \
+ ({ \
+ char cmd[1024]; \
+ snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
+ if (!ASSERT_OK(system(cmd), cmd)) \
+ goto goto_label; \
+ })
+
+#define SYS_NOFAIL(fmt, ...) \
+ ({ \
+ char cmd[1024]; \
+ snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
+ system(cmd); \
+ })
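+/* Usage sketch (hypothetical interface name): SYS(fail, "ip link set dev %s up",
+ * veth_name) formats and runs the command and jumps to the caller's "fail"
+ * label on a non-zero exit status, while SYS_NOFAIL() ignores the result.
+ */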
+
static inline __u64 ptr_to_u64(const void *ptr)
{
return (__u64) (unsigned long) ptr;
struct test_loader {
char *log_buf;
size_t log_buf_sz;
+ size_t next_match_pos;
struct bpf_object *obj;
};
#define TCPOPT_EOL 0
#define TCPOPT_NOP 1
+#define TCPOPT_MSS 2
#define TCPOPT_WINDOW 3
#define TCPOPT_EXP 254
* struct bpf_timer t;
* };
* struct btf_ptr {
+ * struct prog_test_ref_kfunc __kptr_untrusted *ptr;
* struct prog_test_ref_kfunc __kptr *ptr;
- * struct prog_test_ref_kfunc __kptr_ref *ptr;
- * struct prog_test_member __kptr_ref *ptr;
+ * struct prog_test_member __kptr *ptr;
* }
*/
static const char btf_str_sec[] = "\0bpf_spin_lock\0val\0cnt\0l\0bpf_timer\0timer\0t"
- "\0btf_ptr\0prog_test_ref_kfunc\0ptr\0kptr\0kptr_ref"
+ "\0btf_ptr\0prog_test_ref_kfunc\0ptr\0kptr\0kptr_untrusted"
"\0prog_test_member";
static __u32 btf_raw_types[] = {
/* int */
BTF_MEMBER_ENC(41, 4, 0), /* struct bpf_timer t; */
/* struct prog_test_ref_kfunc */ /* [6] */
BTF_STRUCT_ENC(51, 0, 0),
- BTF_STRUCT_ENC(89, 0, 0), /* [7] */
+ BTF_STRUCT_ENC(95, 0, 0), /* [7] */
+ /* type tag "kptr_untrusted" */
+ BTF_TYPE_TAG_ENC(80, 6), /* [8] */
/* type tag "kptr" */
- BTF_TYPE_TAG_ENC(75, 6), /* [8] */
- /* type tag "kptr_ref" */
- BTF_TYPE_TAG_ENC(80, 6), /* [9] */
- BTF_TYPE_TAG_ENC(80, 7), /* [10] */
+ BTF_TYPE_TAG_ENC(75, 6), /* [9] */
+ BTF_TYPE_TAG_ENC(75, 7), /* [10] */
BTF_PTR_ENC(8), /* [11] */
BTF_PTR_ENC(9), /* [12] */
BTF_PTR_ENC(10), /* [13] */
/* struct btf_ptr */ /* [14] */
BTF_STRUCT_ENC(43, 3, 24),
- BTF_MEMBER_ENC(71, 11, 0), /* struct prog_test_ref_kfunc __kptr *ptr; */
- BTF_MEMBER_ENC(71, 12, 64), /* struct prog_test_ref_kfunc __kptr_ref *ptr; */
- BTF_MEMBER_ENC(71, 13, 128), /* struct prog_test_member __kptr_ref *ptr; */
+ BTF_MEMBER_ENC(71, 11, 0), /* struct prog_test_ref_kfunc __kptr_untrusted *ptr; */
+ BTF_MEMBER_ENC(71, 12, 64), /* struct prog_test_ref_kfunc __kptr *ptr; */
+ BTF_MEMBER_ENC(71, 13, 128), /* struct prog_test_member __kptr *ptr; */
};
static char bpf_vlog[UINT_MAX >> 8];
},
.result_unpriv = REJECT,
.result = REJECT,
- .errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
+ .errstr = "ptr R1 off=-4 disallowed",
},
{
"calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
},
.result_unpriv = REJECT,
.result = REJECT,
- .errstr = "R1 must be referenced",
+ .errstr = "R1 must be",
},
{
"calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
{
- "context stores via ST",
- .insns = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_ST_MEM(BPF_DW, BPF_REG_1, offsetof(struct __sk_buff, mark), 0),
- BPF_EXIT_INSN(),
- },
- .errstr = "BPF_ST stores into R1 ctx is not allowed",
- .result = REJECT,
- .prog_type = BPF_PROG_TYPE_SCHED_CLS,
-},
-{
"context stores via BPF_ATOMIC",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_kptr = { 1 },
.result = REJECT,
- .errstr = "R1 type=untrusted_ptr_or_null_ expected=percpu_ptr_",
+ .errstr = "R1 type=rcu_ptr_or_null_ expected=percpu_ptr_",
},
{
"map_kptr: ref: reject off != 0",
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
+ /* Same as above, but use BPF_ST_MEM to save 42
+ * instead of BPF_STX_MEM.
+ */
+ "unpriv: spill/fill of different pointers st",
+ .insns = {
+ BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
+ BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
+ BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+ BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
+ BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+ .errstr = "same insn cannot be used with different pointers",
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
"unpriv: spill/fill of different pointers stx - ctx and sock",
.insns = {
BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
TEST_PROGS += bind_bhash.sh
TEST_PROGS += ip_local_port_range.sh
TEST_PROGS += rps_default_mask.sh
+TEST_PROGS += big_tcp.sh
TEST_PROGS_EXTENDED := in_netns.sh setup_loopback.sh setup_veth.sh
TEST_PROGS_EXTENDED += toeplitz_client.sh toeplitz.sh
TEST_GEN_FILES = socket nettest
TEST_GEN_FILES += nat6to4.o
TEST_GEN_FILES += ip_local_port_range
TEST_GEN_FILES += bind_wildcard
+TEST_PROGS += test_vxlan_mdb.sh
TEST_FILES := settings
include ../lib.mk
$(OUTPUT)/reuseport_bpf_numa: LDLIBS += -lnuma
-$(OUTPUT)/tcp_mmap: LDLIBS += -lpthread
+$(OUTPUT)/tcp_mmap: LDLIBS += -lpthread -lcrypto
$(OUTPUT)/tcp_inq: LDLIBS += -lpthread
$(OUTPUT)/bind_bhash: LDLIBS += -lpthread
--- /dev/null
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Testing for IPv4 and IPv6 BIG TCP.
+# TOPO: CLIENT_NS (link0)<--->(link1) ROUTER_NS (link2)<--->(link3) SERVER_NS
+
+CLIENT_NS=$(mktemp -u client-XXXXXXXX)
+CLIENT_IP4="198.51.100.1"
+CLIENT_IP6="2001:db8:1::1"
+
+SERVER_NS=$(mktemp -u server-XXXXXXXX)
+SERVER_IP4="203.0.113.1"
+SERVER_IP6="2001:db8:2::1"
+
+ROUTER_NS=$(mktemp -u router-XXXXXXXX)
+SERVER_GW4="203.0.113.2"
+CLIENT_GW4="198.51.100.2"
+SERVER_GW6="2001:db8:2::2"
+CLIENT_GW6="2001:db8:1::2"
+
+MAX_SIZE=128000
+CHK_SIZE=65535
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+setup() {
+ ip netns add $CLIENT_NS
+ ip netns add $SERVER_NS
+ ip netns add $ROUTER_NS
+ ip -net $ROUTER_NS link add link1 type veth peer name link0 netns $CLIENT_NS
+ ip -net $ROUTER_NS link add link2 type veth peer name link3 netns $SERVER_NS
+
+ ip -net $CLIENT_NS link set link0 up
+ ip -net $CLIENT_NS link set link0 mtu 1442
+ ip -net $CLIENT_NS addr add $CLIENT_IP4/24 dev link0
+ ip -net $CLIENT_NS addr add $CLIENT_IP6/64 dev link0 nodad
+ ip -net $CLIENT_NS route add $SERVER_IP4 dev link0 via $CLIENT_GW4
+ ip -net $CLIENT_NS route add $SERVER_IP6 dev link0 via $CLIENT_GW6
+ ip -net $CLIENT_NS link set dev link0 \
+ gro_ipv4_max_size $MAX_SIZE gso_ipv4_max_size $MAX_SIZE
+ ip -net $CLIENT_NS link set dev link0 \
+ gro_max_size $MAX_SIZE gso_max_size $MAX_SIZE
+ ip net exec $CLIENT_NS sysctl -wq net.ipv4.tcp_window_scaling=10
+
+ ip -net $ROUTER_NS link set link1 up
+ ip -net $ROUTER_NS link set link2 up
+ ip -net $ROUTER_NS addr add $CLIENT_GW4/24 dev link1
+ ip -net $ROUTER_NS addr add $CLIENT_GW6/64 dev link1 nodad
+ ip -net $ROUTER_NS addr add $SERVER_GW4/24 dev link2
+ ip -net $ROUTER_NS addr add $SERVER_GW6/64 dev link2 nodad
+ ip -net $ROUTER_NS link set dev link1 \
+ gro_ipv4_max_size $MAX_SIZE gso_ipv4_max_size $MAX_SIZE
+ ip -net $ROUTER_NS link set dev link2 \
+ gro_ipv4_max_size $MAX_SIZE gso_ipv4_max_size $MAX_SIZE
+ ip -net $ROUTER_NS link set dev link1 \
+ gro_max_size $MAX_SIZE gso_max_size $MAX_SIZE
+ ip -net $ROUTER_NS link set dev link2 \
+ gro_max_size $MAX_SIZE gso_max_size $MAX_SIZE
+ # test for nf_ct_skb_network_trim in nf_conntrack_ovs used by TC ct action.
+ ip net exec $ROUTER_NS tc qdisc add dev link1 ingress
+ ip net exec $ROUTER_NS tc filter add dev link1 ingress \
+ proto ip flower ip_proto tcp action ct
+ ip net exec $ROUTER_NS tc filter add dev link1 ingress \
+ proto ipv6 flower ip_proto tcp action ct
+ ip net exec $ROUTER_NS sysctl -wq net.ipv4.ip_forward=1
+ ip net exec $ROUTER_NS sysctl -wq net.ipv6.conf.all.forwarding=1
+
+ ip -net $SERVER_NS link set link3 up
+ ip -net $SERVER_NS addr add $SERVER_IP4/24 dev link3
+ ip -net $SERVER_NS addr add $SERVER_IP6/64 dev link3 nodad
+ ip -net $SERVER_NS route add $CLIENT_IP4 dev link3 via $SERVER_GW4
+ ip -net $SERVER_NS route add $CLIENT_IP6 dev link3 via $SERVER_GW6
+ ip -net $SERVER_NS link set dev link3 \
+ gro_ipv4_max_size $MAX_SIZE gso_ipv4_max_size $MAX_SIZE
+ ip -net $SERVER_NS link set dev link3 \
+ gro_max_size $MAX_SIZE gso_max_size $MAX_SIZE
+ ip net exec $SERVER_NS sysctl -wq net.ipv4.tcp_window_scaling=10
+ ip net exec $SERVER_NS netserver 2>&1 >/dev/null
+}
+
+cleanup() {
+ ip net exec $SERVER_NS pkill netserver
+ ip -net $ROUTER_NS link del link1
+ ip -net $ROUTER_NS link del link2
+ ip netns del "$CLIENT_NS"
+ ip netns del "$SERVER_NS"
+ ip netns del "$ROUTER_NS"
+}
+
+start_counter() {
+ local ipt="iptables"
+ local iface=$1
+ local netns=$2
+
+ [ "$NF" = "6" ] && ipt="ip6tables"
+ ip net exec $netns $ipt -t raw -A PREROUTING -i $iface \
+ -m length ! --length 0:$CHK_SIZE -j ACCEPT
+}
+
+check_counter() {
+ local ipt="iptables"
+ local iface=$1
+ local netns=$2
+
+ [ "$NF" = "6" ] && ipt="ip6tables"
+	test `ip net exec $netns $ipt -t raw -L -v | grep $iface | awk '{print $1}'` != "0"
+}
+
+stop_counter() {
+ local ipt="iptables"
+ local iface=$1
+ local netns=$2
+
+ [ "$NF" = "6" ] && ipt="ip6tables"
+ ip net exec $netns $ipt -t raw -D PREROUTING -i $iface \
+ -m length ! --length 0:$CHK_SIZE -j ACCEPT
+}
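+
+# The counter rules only match packets longer than CHK_SIZE (64KB), so a
+# non-zero count means BIG TCP packets actually crossed that interface.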
+
+do_netperf() {
+ local serip=$SERVER_IP4
+ local netns=$1
+
+ [ "$NF" = "6" ] && serip=$SERVER_IP6
+ ip net exec $netns netperf -$NF -t TCP_STREAM -H $serip 2>&1 >/dev/null
+}
+
+do_test() {
+ local cli_tso=$1
+ local gw_gro=$2
+ local gw_tso=$3
+ local ser_gro=$4
+ local ret="PASS"
+
+ ip net exec $CLIENT_NS ethtool -K link0 tso $cli_tso
+ ip net exec $ROUTER_NS ethtool -K link1 gro $gw_gro
+ ip net exec $ROUTER_NS ethtool -K link2 tso $gw_tso
+ ip net exec $SERVER_NS ethtool -K link3 gro $ser_gro
+
+ start_counter link1 $ROUTER_NS
+ start_counter link3 $SERVER_NS
+ do_netperf $CLIENT_NS
+
+ if check_counter link1 $ROUTER_NS; then
+ check_counter link3 $SERVER_NS || ret="FAIL_on_link3"
+ else
+ ret="FAIL_on_link1"
+ fi
+
+ stop_counter link1 $ROUTER_NS
+ stop_counter link3 $SERVER_NS
+ printf "%-9s %-8s %-8s %-8s: [%s]\n" \
+ $cli_tso $gw_gro $gw_tso $ser_gro $ret
+ test $ret = "PASS"
+}
+
+testup() {
+ echo "CLI GSO | GW GRO | GW GSO | SER GRO" && \
+ do_test "on" "on" "on" "on" && \
+ do_test "on" "off" "on" "off" && \
+ do_test "off" "on" "on" "on" && \
+ do_test "on" "on" "off" "on" && \
+ do_test "off" "on" "off" "on"
+}
+
+if ! netperf -V &> /dev/null; then
+ echo "SKIP: Could not run test without netperf tool"
+ exit $ksft_skip
+fi
+
+if ! ip link help 2>&1 | grep gso_ipv4_max_size &> /dev/null; then
+ echo "SKIP: Could not run test without gso/gro_ipv4_max_size supported in ip-link"
+ exit $ksft_skip
+fi
+
+trap cleanup EXIT
+setup && echo "Testing for BIG TCP:" && \
+NF=4 testup && echo "***v4 Tests Done***" && \
+NF=6 testup && echo "***v6 Tests Done***"
+exit $?
CONFIG_IPV6_IOAM6_LWTUNNEL=y
CONFIG_CRYPTO_SM4_GENERIC=y
CONFIG_AMT=m
+CONFIG_VXLAN=m
tc_mpls_l2vpn.sh \
tc_police.sh \
tc_shblocks.sh \
+ tc_tunnel_key.sh \
tc_vlan_modify.sh \
vxlan_asymmetric_ipv6.sh \
vxlan_asymmetric.sh \
--- /dev/null
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+ALL_TESTS="tunnel_key_nofrag_test"
+
+NUM_NETIFS=4
+source tc_common.sh
+source lib.sh
+
+tcflags="skip_hw"
+
+h1_create()
+{
+ simple_if_init $h1 192.0.2.1/24
+ forwarding_enable
+ mtu_set $h1 1500
+ tunnel_create h1-et vxlan 192.0.2.1 192.0.2.2 dev $h1 dstport 0 external
+ tc qdisc add dev h1-et clsact
+ mtu_set h1-et 1230
+ mtu_restore $h1
+ mtu_set $h1 1000
+}
+
+h1_destroy()
+{
+ tc qdisc del dev h1-et clsact
+ tunnel_destroy h1-et
+ forwarding_restore
+ mtu_restore $h1
+ simple_if_fini $h1 192.0.2.1/24
+}
+
+h2_create()
+{
+ simple_if_init $h2 192.0.2.2/24
+}
+
+h2_destroy()
+{
+ simple_if_fini $h2 192.0.2.2/24
+}
+
+switch_create()
+{
+ simple_if_init $swp1 192.0.2.2/24
+ tc qdisc add dev $swp1 clsact
+ simple_if_init $swp2 192.0.2.1/24
+}
+
+switch_destroy()
+{
+ simple_if_fini $swp2 192.0.2.1/24
+ tc qdisc del dev $swp1 clsact
+ simple_if_fini $swp1 192.0.2.2/24
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ swp1=${NETIFS[p2]}
+
+ swp2=${NETIFS[p3]}
+ h2=${NETIFS[p4]}
+
+ h1mac=$(mac_get $h1)
+ h2mac=$(mac_get $h2)
+
+ swp1origmac=$(mac_get $swp1)
+ swp2origmac=$(mac_get $swp2)
+ ip link set $swp1 address $h2mac
+ ip link set $swp2 address $h1mac
+
+ vrf_prepare
+
+ h1_create
+ h2_create
+ switch_create
+
+ if ! tc action add action tunnel_key help 2>&1 | grep -q nofrag; then
+ log_test "SKIP: iproute doesn't support nofrag"
+ exit $ksft_skip
+ fi
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ switch_destroy
+ h2_destroy
+ h1_destroy
+
+ vrf_cleanup
+
+ ip link set $swp2 address $swp2origmac
+ ip link set $swp1 address $swp1origmac
+}
+
+tunnel_key_nofrag_test()
+{
+ RET=0
+ local i
+
+ tc filter add dev $swp1 ingress protocol ip pref 100 handle 100 \
+ flower ip_flags nofrag action drop
+ tc filter add dev $swp1 ingress protocol ip pref 101 handle 101 \
+ flower ip_flags firstfrag action drop
+ tc filter add dev $swp1 ingress protocol ip pref 102 handle 102 \
+ flower ip_flags nofirstfrag action drop
+
+ # test 'nofrag' set
+ tc filter add dev h1-et egress protocol all pref 1 handle 1 matchall $tcflags \
+ action tunnel_key set src_ip 192.0.2.1 dst_ip 192.0.2.2 id 42 nofrag index 10
+ $MZ h1-et -c 1 -p 930 -a 00:aa:bb:cc:dd:ee -b 00:ee:dd:cc:bb:aa -t ip -q
+ tc_check_packets "dev $swp1 ingress" 100 1
+ check_err $? "packet smaller than MTU was not tunneled"
+
+ $MZ h1-et -c 1 -p 931 -a 00:aa:bb:cc:dd:ee -b 00:ee:dd:cc:bb:aa -t ip -q
+ tc_check_packets "dev $swp1 ingress" 100 1
+ check_err $? "packet bigger than MTU matched nofrag (nofrag was set)"
+ tc_check_packets "dev $swp1 ingress" 101 0
+ check_err $? "packet bigger than MTU matched firstfrag (nofrag was set)"
+ tc_check_packets "dev $swp1 ingress" 102 0
+ check_err $? "packet bigger than MTU matched nofirstfrag (nofrag was set)"
+
+ # test 'nofrag' cleared
+ tc actions change action tunnel_key set src_ip 192.0.2.1 dst_ip 192.0.2.2 id 42 index 10
+ $MZ h1-et -c 1 -p 931 -a 00:aa:bb:cc:dd:ee -b 00:ee:dd:cc:bb:aa -t ip -q
+ tc_check_packets "dev $swp1 ingress" 100 1
+ check_err $? "packet bigger than MTU matched nofrag (nofrag was unset)"
+ tc_check_packets "dev $swp1 ingress" 101 1
+	check_err $? "packet bigger than MTU didn't match firstfrag (nofrag was unset)"
+	tc_check_packets "dev $swp1 ingress" 102 1
+	check_err $? "packet bigger than MTU didn't match nofirstfrag (nofrag was unset)"
+
+ for i in 100 101 102; do
+ tc filter del dev $swp1 ingress protocol ip pref $i handle $i flower
+ done
+ tc filter del dev h1-et egress pref 1 handle 1 matchall
+
+ log_test "tunnel_key nofrag ($tcflags)"
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+
+tests_run
+
+tc_offload_check
+if [[ $? -ne 0 ]]; then
+ log_info "Could not test offloaded functionality"
+else
+ tcflags="skip_sw"
+ tests_run
+fi
+
+exit $EXIT_STATUS
fi
}
+chk_mptcp_info()
+{
+ local nr_info=$1
+ local info
+ local cnt1
+ local cnt2
+ local dump_stats
+
+ if [[ $nr_info = "subflows_"* ]]; then
+ info="subflows"
+ nr_info=${nr_info:9}
+ else
+ echo "[fail] unsupported argument: $nr_info"
+ fail_test
+ return 1
+ fi
+
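+	# Compare the "subflows" counter from the MPTCP info reported by
+	# "ss -M" in both namespaces against the expected value.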
+ printf "%-${nr_blank}s %-30s" " " "mptcp_info $info=$nr_info"
+
+ cnt1=$(ss -N $ns1 -inmHM | grep "$info:" |
+ sed -n 's/.*\('"$info"':\)\([[:digit:]]*\).*$/\2/p;q')
+ [ -z "$cnt1" ] && cnt1=0
+ cnt2=$(ss -N $ns2 -inmHM | grep "$info:" |
+ sed -n 's/.*\('"$info"':\)\([[:digit:]]*\).*$/\2/p;q')
+ [ -z "$cnt2" ] && cnt2=0
+ if [ "$cnt1" != "$nr_info" ] || [ "$cnt2" != "$nr_info" ]; then
+ echo "[fail] got $cnt1:$cnt2 $info expected $nr_info"
+ fail_test
+ dump_stats=1
+ else
+ echo "[ ok ]"
+ fi
+
+ if [ "$dump_stats" = 1 ]; then
+ ss -N $ns1 -inmHM
+ ss -N $ns2 -inmHM
+ dump_stats
+ fi
+}
+
chk_link_usage()
{
local ns=$1
run_tests $ns1 $ns2 10.0.1.1 4 0 0 speed_20 2>/dev/null &
wait_mpj $ns2
+ chk_subflow_nr needtitle "before delete" 2
+ chk_mptcp_info subflows_1
+
pm_nl_del_endpoint $ns2 2 10.0.2.2
sleep 0.5
- chk_subflow_nr needtitle "after delete" 1
+ chk_subflow_nr "" "after delete" 1
+ chk_mptcp_info subflows_0
pm_nl_add_endpoint $ns2 10.0.2.2 dev ns2eth2 flags subflow
wait_mpj $ns2
chk_subflow_nr "" "after re-add" 2
+ chk_mptcp_info subflows_1
kill_tests_wait
fi
}
#
# set -e
+ALL_TESTS="
+ kci_test_polrouting
+ kci_test_route_get
+ kci_test_addrlft
+ kci_test_promote_secondaries
+ kci_test_tc
+ kci_test_gre
+ kci_test_gretap
+ kci_test_ip6gretap
+ kci_test_erspan
+ kci_test_ip6erspan
+ kci_test_bridge
+ kci_test_addrlabel
+ kci_test_ifalias
+ kci_test_vrf
+ kci_test_encap
+ kci_test_macsec
+ kci_test_ipsec
+ kci_test_ipsec_offload
+ kci_test_fdb_get
+ kci_test_neigh_get
+ kci_test_bridge_parent_id
+ kci_test_address_proto
+"
+
devdummy="test-dummy0"
# Kselftest framework requirement - SKIP code is 4.
echo "PASS: bridge_parent_id"
}
-kci_test_rtnl()
+address_get_proto()
+{
+ local addr=$1; shift
+
+ ip -N -j address show dev "$devdummy" |
+ jq -e -r --arg addr "${addr%/*}" \
+ '.[].addr_info[] | select(.local == $addr) | .protocol'
+}
+
+address_count()
{
+ ip -N -j address show dev "$devdummy" "$@" |
+ jq -e -r '[.[].addr_info[] | .local | select(. != null)] | length'
+}
+
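+# Add three addresses, one without an explicit protocol and two with distinct
+# "proto" values, then verify that "ip address show" reports the protocol and
+# that filtering by protocol returns the expected subsets.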
+do_test_address_proto()
+{
+ local what=$1; shift
+ local addr=$1; shift
+ local addr2=${addr%/*}2/${addr#*/}
+ local addr3=${addr%/*}3/${addr#*/}
+ local proto
+ local count
local ret=0
- kci_add_dummy
- if [ $ret -ne 0 ];then
- echo "FAIL: cannot add dummy interface"
- return 1
- fi
+ local err
- kci_test_polrouting
+ ip address add dev "$devdummy" "$addr3"
check_err $?
- kci_test_route_get
+ proto=$(address_get_proto "$addr3")
+ [[ "$proto" == null ]]
check_err $?
- kci_test_addrlft
- check_err $?
- kci_test_promote_secondaries
- check_err $?
- kci_test_tc
- check_err $?
- kci_test_gre
+
+ ip address add dev "$devdummy" "$addr2" proto 0x99
check_err $?
- kci_test_gretap
+ proto=$(address_get_proto "$addr2")
+ [[ "$proto" == 0x99 ]]
check_err $?
- kci_test_ip6gretap
+
+ ip address add dev "$devdummy" "$addr" proto 0xab
check_err $?
- kci_test_erspan
+ proto=$(address_get_proto "$addr")
+ [[ "$proto" == 0xab ]]
check_err $?
- kci_test_ip6erspan
+
+ ip address replace dev "$devdummy" "$addr" proto 0x11
+ proto=$(address_get_proto "$addr")
check_err $?
- kci_test_bridge
+ [[ "$proto" == 0x11 ]]
check_err $?
- kci_test_addrlabel
+
+ count=$(address_count)
check_err $?
- kci_test_ifalias
+ (( count >= 3 )) # $addr, $addr2 and $addr3 plus any kernel addresses
check_err $?
- kci_test_vrf
+
+ count=$(address_count proto 0)
check_err $?
- kci_test_encap
+ (( count == 1 )) # just $addr3
check_err $?
- kci_test_macsec
+
+ count=$(address_count proto 0x11)
check_err $?
- kci_test_ipsec
+ (( count == 2 )) # $addr and $addr3
check_err $?
- kci_test_ipsec_offload
+
+ count=$(address_count proto 0xab)
check_err $?
- kci_test_fdb_get
+ (( count == 1 )) # just $addr3
check_err $?
- kci_test_neigh_get
+
+ ip address del dev "$devdummy" "$addr"
+ ip address del dev "$devdummy" "$addr2"
+ ip address del dev "$devdummy" "$addr3"
+
+ if [ $ret -ne 0 ]; then
+ echo "FAIL: address proto $what"
+ return 1
+ fi
+ echo "PASS: address proto $what"
+}
+
+kci_test_address_proto()
+{
+ local ret=0
+
+ do_test_address_proto IPv4 192.0.2.1/28
check_err $?
- kci_test_bridge_parent_id
+
+ do_test_address_proto IPv6 2001:db8:1::1/64
check_err $?
+ return $ret
+}
+
+kci_test_rtnl()
+{
+ local current_test
+ local ret=0
+
+ kci_add_dummy
+ if [ $ret -ne 0 ];then
+ echo "FAIL: cannot add dummy interface"
+ return 1
+ fi
+
+ for current_test in ${TESTS:-$ALL_TESTS}; do
+ $current_test
+ check_err $?
+ done
+
kci_del_dummy
return $ret
}
+usage()
+{
+ cat <<EOF
+usage: ${0##*/} OPTS
+
+ -t <test> Test(s) to run (default: all)
+ (options: $(echo $ALL_TESTS))
+EOF
+}
+
#check for needed privileges
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
fi
done
+while getopts t:h o; do
+ case $o in
+ t) TESTS=$OPTARG;;
+ h) usage; exit 0;;
+ *) usage; exit 1;;
+ esac
+done
+
kci_test_rtnl
exit $?
#include <poll.h>
#include <linux/tcp.h>
#include <assert.h>
+#include <openssl/pem.h>
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif
+#ifndef min
+#define min(a, b) ((a) < (b) ? (a) : (b))
+#endif
+
#define FILE_SZ (1ULL << 35)
static int cfg_family = AF_INET6;
static socklen_t cfg_alen = sizeof(struct sockaddr_in6);
static int zflg; /* zero copy option. (MSG_ZEROCOPY for sender, mmap() for receiver */
static int xflg; /* hash received data (simple xor) (-h option) */
static int keepflag; /* -k option: receiver shall keep all received file in memory (no munmap() calls) */
+static int integrity; /* -i option: sender and receiver compute sha256 over the data. */
static size_t chunk_size = 512*1024;
static size_t map_align;
unsigned long htotal;
+unsigned int digest_len;
static inline void prefetch(const void *x)
{
void *child_thread(void *arg)
{
+ unsigned char digest[SHA256_DIGEST_LENGTH];
unsigned long total_mmap = 0, total = 0;
struct tcp_zerocopy_receive zc;
+ unsigned char *buffer = NULL;
unsigned long delta_usec;
+ EVP_MD_CTX *ctx = NULL;
int flags = MAP_SHARED;
struct timeval t0, t1;
- char *buffer = NULL;
void *raddr = NULL;
void *addr = NULL;
double throughput;
addr = ALIGN_PTR_UP(raddr, map_align);
}
}
+ if (integrity) {
+ ctx = EVP_MD_CTX_new();
+ if (!ctx) {
+ perror("cannot enable SHA computing");
+ goto error;
+ }
+ EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
+ }
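+	/* With -i every received byte below (zerocopy or regular reads) is fed
+	 * into the SHA256 context; reads are clamped to FILE_SZ so the digest
+	 * the sender appends after the payload can be read and verified
+	 * separately once the transfer completes.
+	 */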
while (1) {
struct pollfd pfd = { .fd = fd, .events = POLLIN, };
int sub;
memset(&zc, 0, sizeof(zc));
zc.address = (__u64)((unsigned long)addr);
- zc.length = chunk_size;
+ zc.length = min(chunk_size, FILE_SZ - total);
res = getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
&zc, &zc_len);
if (zc.length) {
assert(zc.length <= chunk_size);
+ if (integrity)
+ EVP_DigestUpdate(ctx, addr, zc.length);
total_mmap += zc.length;
if (xflg)
hash_zone(addr, zc.length);
}
if (zc.recv_skip_hint) {
assert(zc.recv_skip_hint <= chunk_size);
- lu = read(fd, buffer, zc.recv_skip_hint);
+ lu = read(fd, buffer, min(zc.recv_skip_hint,
+ FILE_SZ - total));
if (lu > 0) {
+ if (integrity)
+ EVP_DigestUpdate(ctx, buffer, lu);
if (xflg)
hash_zone(buffer, lu);
total += lu;
}
+ if (lu == 0)
+ goto end;
}
continue;
}
sub = 0;
while (sub < chunk_size) {
- lu = read(fd, buffer + sub, chunk_size - sub);
+ lu = read(fd, buffer + sub, min(chunk_size - sub,
+ FILE_SZ - total));
if (lu == 0)
goto end;
if (lu < 0)
break;
+ if (integrity)
+ EVP_DigestUpdate(ctx, buffer + sub, lu);
if (xflg)
hash_zone(buffer + sub, lu);
total += lu;
gettimeofday(&t1, NULL);
delta_usec = (t1.tv_sec - t0.tv_sec) * 1000000 + t1.tv_usec - t0.tv_usec;
+ if (integrity) {
+ fcntl(fd, F_SETFL, 0);
+ EVP_DigestFinal_ex(ctx, digest, &digest_len);
+ lu = read(fd, buffer, SHA256_DIGEST_LENGTH);
+		if (lu != SHA256_DIGEST_LENGTH)
+			perror("Error: Cannot read SHA256");
+
+		if (memcmp(digest, buffer, SHA256_DIGEST_LENGTH))
+			fprintf(stderr, "Error: SHA256 of the data does not match\n");
+ else
+ printf("\nSHA256 is correct\n");
+ }
+
throughput = 0;
if (delta_usec)
throughput = total * 8.0 / (double)delta_usec / 1000.0;
return hps;
}
+static void randomize(void *target, size_t count)
+{
+ static int urandom = -1;
+ ssize_t got;
+
+ urandom = open("/dev/urandom", O_RDONLY);
+ if (urandom < 0) {
+ perror("open /dev/urandom");
+ exit(1);
+ }
+ got = read(urandom, target, count);
+ if (got != count) {
+ perror("read /dev/urandom");
+ exit(1);
+ }
+}
+
int main(int argc, char *argv[])
{
+ unsigned char digest[SHA256_DIGEST_LENGTH];
struct sockaddr_storage listenaddr, addr;
unsigned int max_pacing_rate = 0;
+ EVP_MD_CTX *ctx = NULL;
+ unsigned char *buffer;
uint64_t total = 0;
char *host = NULL;
int fd, c, on = 1;
size_t buffer_sz;
- char *buffer;
int sflg = 0;
int mss = 0;
- while ((c = getopt(argc, argv, "46p:svr:w:H:zxkP:M:C:a:")) != -1) {
+ while ((c = getopt(argc, argv, "46p:svr:w:H:zxkP:M:C:a:i")) != -1) {
switch (c) {
case '4':
cfg_family = PF_INET;
case 'a':
map_align = atol(optarg);
break;
+ case 'i':
+ integrity = 1;
+ break;
default:
exit(1);
}
}
buffer = mmap_large_buffer(chunk_size, &buffer_sz);
- if (buffer == (char *)-1) {
+ if (buffer == (unsigned char *)-1) {
perror("mmap");
exit(1);
}
perror("setsockopt SO_ZEROCOPY, (-z option disabled)");
zflg = 0;
}
+ if (integrity) {
+ randomize(buffer, buffer_sz);
+ ctx = EVP_MD_CTX_new();
+ if (!ctx) {
+ perror("cannot enable SHA computing");
+ exit(1);
+ }
+ EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
+ }
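+	/* With -i the buffer holds random data; the loop below walks it with a
+	 * rotating offset, hashes exactly the bytes that were sent, and the
+	 * final digest is transmitted after the FILE_SZ payload bytes.
+	 */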
while (total < FILE_SZ) {
+ size_t offset = total % chunk_size;
int64_t wr = FILE_SZ - total;
- if (wr > chunk_size)
- wr = chunk_size;
- /* Note : we just want to fill the pipe with 0 bytes */
- wr = send(fd, buffer, (size_t)wr, zflg ? MSG_ZEROCOPY : 0);
+ if (wr > chunk_size - offset)
+ wr = chunk_size - offset;
+ /* Note : we just want to fill the pipe with random bytes */
+ wr = send(fd, buffer + offset,
+ (size_t)wr, zflg ? MSG_ZEROCOPY : 0);
if (wr <= 0)
break;
+ if (integrity)
+ EVP_DigestUpdate(ctx, buffer + offset, wr);
total += wr;
}
+ if (integrity && total == FILE_SZ) {
+ EVP_DigestFinal_ex(ctx, digest, &digest_len);
+ send(fd, digest, (size_t)SHA256_DIGEST_LENGTH, 0);
+ }
close(fd);
munmap(buffer, buffer_sz);
return 0;
--- /dev/null
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# This test is for checking VXLAN MDB functionality. The topology consists of
+# two sets of namespaces: One for the testing of IPv4 underlay and another for
+# IPv6. In both cases, both IPv4 and IPv6 overlay traffic are tested.
+#
+# Data path functionality is tested by sending traffic from one of the upper
+# namespaces and checking using ingress tc filters that the expected traffic
+# was received by one of the lower namespaces.
+#
+# +------------------------------------+ +------------------------------------+
+# | ns1_v4 | | ns1_v6 |
+# | | | |
+# | br0.10 br0.4000 br0.20 | | br0.10 br0.4000 br0.20 |
+# | + + + | | + + + |
+# | | | | | | | | | |
+# | | | | | | | | | |
+# | +---------+---------+ | | +---------+---------+ |
+# | | | | | |
+# | | | | | |
+# | + | | + |
+# | br0 | | br0 |
+# | + | | + |
+# | | | | | |
+# | | | | | |
+# | + | | + |
+# | vx0 | | vx0 |
+# | | | |
+# | | | |
+# | veth0 | | veth0 |
+# | + | | + |
+# +-----------------|------------------+ +-----------------|------------------+
+# | |
+# +-----------------|------------------+ +-----------------|------------------+
+# | + | | + |
+# | veth0 | | veth0 |
+# | | | |
+# | | | |
+# | vx0 | | vx0 |
+# | + | | + |
+# | | | | | |
+# | | | | | |
+# | + | | + |
+# | br0 | | br0 |
+# | + | | + |
+# | | | | | |
+# | | | | | |
+# | +---------+---------+ | | +---------+---------+ |
+# | | | | | | | | | |
+# | | | | | | | | | |
+# | + + + | | + + + |
+# | br0.10 br0.4000 br0.10 | | br0.10 br0.4000 br0.20 |
+# | | | |
+# | ns2_v4 | | ns2_v6 |
+# +------------------------------------+ +------------------------------------+
+
+ret=0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+CONTROL_PATH_TESTS="
+ basic_star_g_ipv4_ipv4
+ basic_star_g_ipv6_ipv4
+ basic_star_g_ipv4_ipv6
+ basic_star_g_ipv6_ipv6
+ basic_sg_ipv4_ipv4
+ basic_sg_ipv6_ipv4
+ basic_sg_ipv4_ipv6
+ basic_sg_ipv6_ipv6
+ star_g_ipv4_ipv4
+ star_g_ipv6_ipv4
+ star_g_ipv4_ipv6
+ star_g_ipv6_ipv6
+ sg_ipv4_ipv4
+ sg_ipv6_ipv4
+ sg_ipv4_ipv6
+ sg_ipv6_ipv6
+ dump_ipv4_ipv4
+ dump_ipv6_ipv4
+ dump_ipv4_ipv6
+ dump_ipv6_ipv6
+"
+
+DATA_PATH_TESTS="
+ encap_params_ipv4_ipv4
+ encap_params_ipv6_ipv4
+ encap_params_ipv4_ipv6
+ encap_params_ipv6_ipv6
+ starg_exclude_ir_ipv4_ipv4
+ starg_exclude_ir_ipv6_ipv4
+ starg_exclude_ir_ipv4_ipv6
+ starg_exclude_ir_ipv6_ipv6
+ starg_include_ir_ipv4_ipv4
+ starg_include_ir_ipv6_ipv4
+ starg_include_ir_ipv4_ipv6
+ starg_include_ir_ipv6_ipv6
+ starg_exclude_p2mp_ipv4_ipv4
+ starg_exclude_p2mp_ipv6_ipv4
+ starg_exclude_p2mp_ipv4_ipv6
+ starg_exclude_p2mp_ipv6_ipv6
+ starg_include_p2mp_ipv4_ipv4
+ starg_include_p2mp_ipv6_ipv4
+ starg_include_p2mp_ipv4_ipv6
+ starg_include_p2mp_ipv6_ipv6
+ egress_vni_translation_ipv4_ipv4
+ egress_vni_translation_ipv6_ipv4
+ egress_vni_translation_ipv4_ipv6
+ egress_vni_translation_ipv6_ipv6
+ all_zeros_mdb_ipv4
+ all_zeros_mdb_ipv6
+ mdb_fdb_ipv4_ipv4
+ mdb_fdb_ipv6_ipv4
+ mdb_fdb_ipv4_ipv6
+ mdb_fdb_ipv6_ipv6
+ mdb_torture_ipv4_ipv4
+ mdb_torture_ipv6_ipv4
+ mdb_torture_ipv4_ipv6
+ mdb_torture_ipv6_ipv6
+"
+
+# All tests in this script. Can be overridden with -t option.
+TESTS="
+ $CONTROL_PATH_TESTS
+ $DATA_PATH_TESTS
+"
+VERBOSE=0
+PAUSE_ON_FAIL=no
+PAUSE=no
+
+################################################################################
+# Utilities
+
+log_test()
+{
+ local rc=$1
+ local expected=$2
+ local msg="$3"
+
+ if [ ${rc} -eq ${expected} ]; then
+ printf "TEST: %-60s [ OK ]\n" "${msg}"
+ nsuccess=$((nsuccess+1))
+ else
+ ret=1
+ nfail=$((nfail+1))
+ printf "TEST: %-60s [FAIL]\n" "${msg}"
+ if [ "$VERBOSE" = "1" ]; then
+ echo " rc=$rc, expected $expected"
+ fi
+
+ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+ fi
+
+ if [ "${PAUSE}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+
+ [ "$VERBOSE" = "1" ] && echo
+}
+
+run_cmd()
+{
+ local cmd="$1"
+ local out
+ local stderr="2>/dev/null"
+
+ if [ "$VERBOSE" = "1" ]; then
+ printf "COMMAND: $cmd\n"
+ stderr=
+ fi
+
+ out=$(eval $cmd $stderr)
+ rc=$?
+ if [ "$VERBOSE" = "1" -a -n "$out" ]; then
+ echo " $out"
+ fi
+
+ return $rc
+}
+
+tc_check_packets()
+{
+ local ns=$1; shift
+ local id=$1; shift
+ local handle=$1; shift
+ local count=$1; shift
+ local pkts
+
+ sleep 0.1
+ pkts=$(tc -n $ns -j -s filter show $id \
+ | jq ".[] | select(.options.handle == $handle) | \
+ .options.actions[0].stats.packets")
+ [[ $pkts == $count ]]
+}
+
+################################################################################
+# Setup
+
+setup_common_ns()
+{
+ local ns=$1; shift
+ local local_addr=$1; shift
+
+ ip netns exec $ns sysctl -qw net.ipv4.ip_forward=1
+ ip netns exec $ns sysctl -qw net.ipv4.fib_multipath_use_neigh=1
+ ip netns exec $ns sysctl -qw net.ipv4.conf.default.ignore_routes_with_linkdown=1
+ ip netns exec $ns sysctl -qw net.ipv6.conf.all.keep_addr_on_down=1
+ ip netns exec $ns sysctl -qw net.ipv6.conf.all.forwarding=1
+ ip netns exec $ns sysctl -qw net.ipv6.conf.default.forwarding=1
+ ip netns exec $ns sysctl -qw net.ipv6.conf.default.ignore_routes_with_linkdown=1
+ ip netns exec $ns sysctl -qw net.ipv6.conf.all.accept_dad=0
+ ip netns exec $ns sysctl -qw net.ipv6.conf.default.accept_dad=0
+
+ ip -n $ns link set dev lo up
+ ip -n $ns address add $local_addr dev lo
+
+ ip -n $ns link set dev veth0 up
+
+ ip -n $ns link add name br0 up type bridge vlan_filtering 1 \
+ vlan_default_pvid 0 mcast_snooping 0
+
+ ip -n $ns link add link br0 name br0.10 up type vlan id 10
+ bridge -n $ns vlan add vid 10 dev br0 self
+
+ ip -n $ns link add link br0 name br0.20 up type vlan id 20
+ bridge -n $ns vlan add vid 20 dev br0 self
+
+ ip -n $ns link add link br0 name br0.4000 up type vlan id 4000
+ bridge -n $ns vlan add vid 4000 dev br0 self
+
+ ip -n $ns link add name vx0 up master br0 type vxlan \
+ local $local_addr dstport 4789 external vnifilter
+ bridge -n $ns link set dev vx0 vlan_tunnel on
+
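+	# Map VLANs 10, 20 and 4000 to VNIs 10010, 10020 and 14000 so that each
+	# VLAN is carried in its own VNI over the single external VXLAN device.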
+ bridge -n $ns vlan add vid 10 dev vx0
+ bridge -n $ns vlan add vid 10 dev vx0 tunnel_info id 10010
+ bridge -n $ns vni add vni 10010 dev vx0
+
+ bridge -n $ns vlan add vid 20 dev vx0
+ bridge -n $ns vlan add vid 20 dev vx0 tunnel_info id 10020
+ bridge -n $ns vni add vni 10020 dev vx0
+
+ bridge -n $ns vlan add vid 4000 dev vx0 pvid
+ bridge -n $ns vlan add vid 4000 dev vx0 tunnel_info id 14000
+ bridge -n $ns vni add vni 14000 dev vx0
+}
+
+setup_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local local_addr1=$1; shift
+ local local_addr2=$1; shift
+
+ ip netns add $ns1
+ ip netns add $ns2
+
+ ip link add name veth0 type veth peer name veth1
+ ip link set dev veth0 netns $ns1 name veth0
+ ip link set dev veth1 netns $ns2 name veth0
+
+ setup_common_ns $ns1 $local_addr1
+ setup_common_ns $ns2 $local_addr2
+}
+
+setup_v4()
+{
+ setup_common ns1_v4 ns2_v4 192.0.2.1 192.0.2.2
+
+ ip -n ns1_v4 address add 192.0.2.17/28 dev veth0
+ ip -n ns2_v4 address add 192.0.2.18/28 dev veth0
+
+ ip -n ns1_v4 route add default via 192.0.2.18
+ ip -n ns2_v4 route add default via 192.0.2.17
+}
+
+cleanup_v4()
+{
+ ip netns del ns2_v4
+ ip netns del ns1_v4
+}
+
+setup_v6()
+{
+ setup_common ns1_v6 ns2_v6 2001:db8:1::1 2001:db8:1::2
+
+ ip -n ns1_v6 address add 2001:db8:2::1/64 dev veth0 nodad
+ ip -n ns2_v6 address add 2001:db8:2::2/64 dev veth0 nodad
+
+ ip -n ns1_v6 route add default via 2001:db8:2::2
+ ip -n ns2_v6 route add default via 2001:db8:2::1
+}
+
+cleanup_v6()
+{
+ ip netns del ns2_v6
+ ip netns del ns1_v6
+}
+
+setup()
+{
+ set -e
+
+ setup_v4
+ setup_v6
+
+ sleep 5
+
+ set +e
+}
+
+cleanup()
+{
+ cleanup_v6 &> /dev/null
+ cleanup_v4 &> /dev/null
+}
+
+################################################################################
+# Tests - Control path
+
+basic_common()
+{
+ local ns1=$1; shift
+ local grp_key=$1; shift
+ local vtep_ip=$1; shift
+
+ # Test basic control path operations common to all MDB entry types.
+
+ # Basic add, replace and delete behavior.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ log_test $? 0 "MDB entry addition"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ log_test $? 0 "MDB entry presence after addition"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ log_test $? 0 "MDB entry replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ log_test $? 0 "MDB entry presence after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+ log_test $? 0 "MDB entry deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ log_test $? 1 "MDB entry presence after deletion"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+ log_test $? 255 "Non-existent MDB entry deletion"
+
+ # Default protocol and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto static\""
+ log_test $? 0 "MDB entry default protocol"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent proto 123 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto 123\""
+ log_test $? 0 "MDB entry protocol replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+
+ # Default destination port and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" dst_port \""
+ log_test $? 1 "MDB entry default destination port"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip dst_port 1234 src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"dst_port 1234\""
+ log_test $? 0 "MDB entry destination port replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+
+ # Default destination VNI and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" vni \""
+ log_test $? 1 "MDB entry default destination VNI"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip vni 1234 src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"vni 1234\""
+ log_test $? 0 "MDB entry destination VNI replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+
+ # Default outgoing interface and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" via \""
+ log_test $? 1 "MDB entry default outgoing interface"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010 via veth0"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"via veth0\""
+ log_test $? 0 "MDB entry outgoing interface replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+
+ # Common error cases.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port veth0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ log_test $? 255 "MDB entry with mismatch between device and port"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key temp dst $vtep_ip src_vni 10010"
+ log_test $? 255 "MDB entry with temp state"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent vid 10 dst $vtep_ip src_vni 10010"
+ log_test $? 255 "MDB entry with VLAN"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp 01:02:03:04:05:06 permanent dst $vtep_ip src_vni 10010"
+	log_test $? 255 "MDB entry with a MAC address group"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent"
+ log_test $? 255 "MDB entry without extended parameters"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent proto 3 dst $vtep_ip src_vni 10010"
+ log_test $? 255 "MDB entry with an invalid protocol"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip vni $((2 ** 24)) src_vni 10010"
+ log_test $? 255 "MDB entry with an invalid destination VNI"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni $((2 ** 24))"
+ log_test $? 255 "MDB entry with an invalid source VNI"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent src_vni 10010"
+ log_test $? 255 "MDB entry without a remote destination IP"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
+ log_test $? 255 "Duplicate MDB entries"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
+}
+
+basic_star_g_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local grp_key="grp 239.1.1.1"
+ local vtep_ip=198.51.100.100
+
+ echo
+ echo "Control path: Basic (*, G) operations - IPv4 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_star_g_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local grp_key="grp ff0e::1"
+ local vtep_ip=198.51.100.100
+
+ echo
+ echo "Control path: Basic (*, G) operations - IPv6 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_star_g_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local grp_key="grp 239.1.1.1"
+ local vtep_ip=2001:db8:1000::1
+
+ echo
+ echo "Control path: Basic (*, G) operations - IPv4 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_star_g_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local grp_key="grp ff0e::1"
+ local vtep_ip=2001:db8:1000::1
+
+ echo
+ echo "Control path: Basic (*, G) operations - IPv6 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_sg_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local grp_key="grp 239.1.1.1 src 192.0.2.129"
+ local vtep_ip=198.51.100.100
+
+ echo
+ echo "Control path: Basic (S, G) operations - IPv4 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_sg_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local grp_key="grp ff0e::1 src 2001:db8:100::1"
+ local vtep_ip=198.51.100.100
+
+ echo
+ echo "Control path: Basic (S, G) operations - IPv6 overlay / IPv4 underlay"
+ echo "---------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_sg_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local grp_key="grp 239.1.1.1 src 192.0.2.129"
+ local vtep_ip=2001:db8:1000::1
+
+ echo
+ echo "Control path: Basic (S, G) operations - IPv4 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+basic_sg_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local grp_key="grp ff0e::1 src 2001:db8:100::1"
+ local vtep_ip=2001:db8:1000::1
+
+ echo
+ echo "Control path: Basic (S, G) operations - IPv6 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------------"
+
+ basic_common $ns1 "$grp_key" $vtep_ip
+}
+
+star_g_common()
+{
+ local ns1=$1; shift
+ local grp=$1; shift
+ local src1=$1; shift
+ local src2=$1; shift
+ local src3=$1; shift
+ local vtep_ip=$1; shift
+ local all_zeros_grp=$1; shift
+
+ # Test control path operations specific to (*, G) entries.
+
+ # Basic add, replace and delete behavior.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ log_test $? 0 "(*, G) MDB entry addition with source list"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ log_test $? 0 "(*, G) MDB entry presence after addition"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry presence after addition"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ log_test $? 0 "(*, G) MDB entry replacement with source list"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ log_test $? 0 "(*, G) MDB entry presence after replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry presence after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+ log_test $? 0 "(*, G) MDB entry deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ log_test $? 1 "(*, G) MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 1 "(S, G) MDB entry presence after deletion"
+
+ # Default filter mode and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ log_test $? 0 "(*, G) MDB entry default filter mode"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"include\""
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"include\""
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ log_test $? 1 "\"blocked\" flag after replacing filter mode to \"include\""
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"exclude\""
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"exclude\""
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ log_test $? 0 "\"blocked\" flag after replacing filter mode to \"exclude\""
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Default source list and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep source_list"
+ log_test $? 1 "(*, G) MDB entry default source list"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src2,$src3 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry of 1st source after replacing source list"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
+ log_test $? 0 "(S, G) MDB entry of 2nd source after replacing source list"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ log_test $? 0 "(S, G) MDB entry of 3rd source after replacing source list"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src3 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ log_test $? 0 "(S, G) MDB entry of 1st source after removing source"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
+ log_test $? 1 "(S, G) MDB entry of 2nd source after removing source"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ log_test $? 0 "(S, G) MDB entry of 3rd source after removing source"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Default protocol and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto static\""
+ log_test $? 0 "(*, G) MDB entry default protocol"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto static\""
+ log_test $? 0 "(S, G) MDB entry default protocol"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 proto bgp dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto bgp\""
+ log_test $? 0 "(*, G) MDB entry protocol after replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto bgp\""
+ log_test $? 0 "(S, G) MDB entry protocol after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Default destination port and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port \""
+ log_test $? 1 "(*, G) MDB entry default destination port"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port \""
+ log_test $? 1 "(S, G) MDB entry default destination port"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip dst_port 1234 src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port 1234 \""
+ log_test $? 0 "(*, G) MDB entry destination port after replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port 1234 \""
+ log_test $? 0 "(S, G) MDB entry destination port after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Default destination VNI and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni \""
+ log_test $? 1 "(*, G) MDB entry default destination VNI"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni \""
+ log_test $? 1 "(S, G) MDB entry default destination VNI"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip vni 1234 src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni 1234 \""
+ log_test $? 0 "(*, G) MDB entry destination VNI after replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni 1234 \""
+ log_test $? 0 "(S, G) MDB entry destination VNI after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Default outgoing interface and replacement.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via \""
+ log_test $? 1 "(*, G) MDB entry default outgoing interface"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via \""
+ log_test $? 1 "(S, G) MDB entry default outgoing interface"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010 via veth0"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via veth0 \""
+ log_test $? 0 "(*, G) MDB entry outgoing interface after replacement"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via veth0 \""
+ log_test $? 0 "(S, G) MDB entry outgoing interface after replacement"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
+
+ # Error cases.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $all_zeros_grp permanent filter_mode exclude dst $vtep_ip src_vni 10010"
+ log_test $? 255 "All-zeros group with filter mode"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $all_zeros_grp permanent source_list $src1 dst $vtep_ip src_vni 10010"
+ log_test $? 255 "All-zeros group with source list"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode include dst $vtep_ip src_vni 10010"
+ log_test $? 255 "(*, G) INCLUDE with an empty source list"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $grp dst $vtep_ip src_vni 10010"
+ log_test $? 255 "Invalid source in source list"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent source_list $src1 dst $vtep_ip src_vni 10010"
+ log_test $? 255 "Source list without filter mode"
+}
+
+star_g_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local grp=239.1.1.1
+ local src1=192.0.2.129
+ local src2=192.0.2.130
+ local src3=192.0.2.131
+ local vtep_ip=198.51.100.100
+ local all_zeros_grp=0.0.0.0
+
+ echo
+ echo "Control path: (*, G) operations - IPv4 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------"
+
+ star_g_common $ns1 $grp $src1 $src2 $src3 $vtep_ip $all_zeros_grp
+}
+
+star_g_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local grp=ff0e::1
+ local src1=2001:db8:100::1
+ local src2=2001:db8:100::2
+ local src3=2001:db8:100::3
+ local vtep_ip=198.51.100.100
+ local all_zeros_grp=::
+
+ echo
+ echo "Control path: (*, G) operations - IPv6 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------"
+
+ star_g_common $ns1 $grp $src1 $src2 $src3 $vtep_ip $all_zeros_grp
+}
+
+star_g_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local grp=239.1.1.1
+ local src1=192.0.2.129
+ local src2=192.0.2.130
+ local src3=192.0.2.131
+ local vtep_ip=2001:db8:1000::1
+ local all_zeros_grp=0.0.0.0
+
+ echo
+ echo "Control path: (*, G) operations - IPv4 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------"
+
+ star_g_common $ns1 $grp $src1 $src2 $src3 $vtep_ip $all_zeros_grp
+}
+
+star_g_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local grp=ff0e::1
+ local src1=2001:db8:100::1
+ local src2=2001:db8:100::2
+ local src3=2001:db8:100::3
+ local vtep_ip=2001:db8:1000::1
+ local all_zeros_grp=::
+
+ echo
+ echo "Control path: (*, G) operations - IPv6 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------"
+
+ star_g_common $ns1 $grp $src1 $src2 $src3 $vtep_ip $all_zeros_grp
+}
+
+sg_common()
+{
+ local ns1=$1; shift
+ local grp=$1; shift
+ local src=$1; shift
+ local vtep_ip=$1; shift
+ local all_zeros_grp=$1; shift
+
+ # Test control path operations specific to (S, G) entries.
+
+ # Default filter mode.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ log_test $? 0 "(S, G) MDB entry default filter mode"
+
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
+
+ # Error cases.
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $src permanent filter_mode include dst $vtep_ip src_vni 10010"
+ log_test $? 255 "(S, G) with filter mode"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $src permanent source_list $src dst $vtep_ip src_vni 10010"
+ log_test $? 255 "(S, G) with source list"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $grp permanent dst $vtep_ip src_vni 10010"
+ log_test $? 255 "(S, G) with an invalid source"
+
+ run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $all_zeros_grp src $src permanent dst $vtep_ip src_vni 10010"
+ log_test $? 255 "All-zeros group with source"
+}
+
+sg_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local grp=239.1.1.1
+ local src=192.0.2.129
+ local vtep_ip=198.51.100.100
+ local all_zeros_grp=0.0.0.0
+
+ echo
+ echo "Control path: (S, G) operations - IPv4 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------"
+
+ sg_common $ns1 $grp $src $vtep_ip $all_zeros_grp
+}
+
+sg_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+ local vtep_ip=198.51.100.100
+ local all_zeros_grp=::
+
+ echo
+ echo "Control path: (S, G) operations - IPv6 overlay / IPv4 underlay"
+ echo "--------------------------------------------------------------"
+
+ sg_common $ns1 $grp $src $vtep_ip $all_zeros_grp
+}
+
+sg_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local grp=239.1.1.1
+ local src=192.0.2.129
+ local vtep_ip=2001:db8:1000::1
+ local all_zeros_grp=0.0.0.0
+
+ echo
+ echo "Control path: (S, G) operations - IPv4 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------"
+
+ sg_common $ns1 $grp $src $vtep_ip $all_zeros_grp
+}
+
+sg_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+ local vtep_ip=2001:db8:1000::1
+ local all_zeros_grp=::
+
+ echo
+ echo "Control path: (S, G) operations - IPv6 overlay / IPv6 underlay"
+ echo "--------------------------------------------------------------"
+
+ sg_common $ns1 $grp $src $vtep_ip $all_zeros_grp
+}
+
+ipv4_grps_get()
+{
+ local max_grps=$1; shift
+ local i
+
+ for i in $(seq 0 $((max_grps - 1))); do
+ echo "239.1.1.$i"
+ done
+}
+
+ipv6_grps_get()
+{
+ local max_grps=$1; shift
+ local i
+
+ for i in $(seq 0 $((max_grps - 1))); do
+ echo "ff0e::$(printf %x $i)"
+ done
+}
+
+dump_common()
+{
+ local ns1=$1; shift
+ local local_addr=$1; shift
+ local remote_prefix=$1; shift
+ local fn=$1; shift
+ local max_vxlan_devs=2
+ local max_remotes=64
+ local max_grps=256
+ local num_entries
+ local batch_file
+ local grp
+ local i j
+
+ # The kernel maintains various markers for the MDB dump. Add a test for
+ # large scale MDB dump to make sure that all the configured entries are
+ # dumped and that the markers are used correctly.
+
+ # Create net devices.
+ for i in $(seq 1 $max_vxlan_devs); do
+ ip -n $ns1 link add name vx-test${i} up type vxlan \
+ local $local_addr dstport 4789 external vnifilter
+ done
+
+ # Create batch file with MDB entries.
+ batch_file=$(mktemp)
+ for i in $(seq 1 $max_vxlan_devs); do
+ for j in $(seq 1 $max_remotes); do
+ for grp in $($fn $max_grps); do
+ echo "mdb add dev vx-test${i} port vx-test${i} grp $grp permanent dst ${remote_prefix}${j}" >> $batch_file
+ done
+ done
+ done
+
+ # Program the batch file and check for expected number of entries.
+ bridge -n $ns1 -b $batch_file
+ for i in $(seq 1 $max_vxlan_devs); do
+ num_entries=$(bridge -n $ns1 mdb show dev vx-test${i} | grep "permanent" | wc -l)
+ [[ $num_entries -eq $((max_grps * max_remotes)) ]]
+ log_test $? 0 "Large scale dump - VXLAN device #$i"
+ done
+
+ rm -rf $batch_file
+}
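
The dump test above leans on "bridge -b" to program tens of thousands of entries through a single batch file. A hedged, standalone sketch of the same pattern (device name and scale are illustrative only):

    # Generate a batch of MDB entries, apply it in one go, then count them.
    batch=$(mktemp)
    for i in $(seq 0 255); do
            echo "mdb add dev vx-test1 port vx-test1 grp 239.1.1.$i permanent dst 198.51.100.1" >> $batch
    done
    bridge -b $batch
    bridge mdb show dev vx-test1 | grep -c permanent
    rm -f $batch
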
+
+dump_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local local_addr=192.0.2.1
+ local remote_prefix=198.51.100.
+ local fn=ipv4_grps_get
+
+ echo
+ echo "Control path: Large scale MDB dump - IPv4 overlay / IPv4 underlay"
+ echo "-----------------------------------------------------------------"
+
+ dump_common $ns1 $local_addr $remote_prefix $fn
+}
+
+dump_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local local_addr=192.0.2.1
+ local remote_prefix=198.51.100.
+ local fn=ipv6_grps_get
+
+ echo
+ echo "Control path: Large scale MDB dump - IPv6 overlay / IPv4 underlay"
+ echo "-----------------------------------------------------------------"
+
+ dump_common $ns1 $local_addr $remote_prefix $fn
+}
+
+dump_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local local_addr=2001:db8:1::1
+ local remote_prefix=2001:db8:1000::
+ local fn=ipv4_grps_get
+
+ echo
+ echo "Control path: Large scale MDB dump - IPv4 overlay / IPv6 underlay"
+ echo "-----------------------------------------------------------------"
+
+ dump_common $ns1 $local_addr $remote_prefix $fn
+}
+
+dump_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local local_addr=2001:db8:1::1
+ local remote_prefix=2001:db8:1000::
+ local fn=ipv6_grps_get
+
+ echo
+ echo "Control path: Large scale MDB dump - IPv6 overlay / IPv6 underlay"
+ echo "-----------------------------------------------------------------"
+
+ dump_common $ns1 $local_addr $remote_prefix $fn
+}
+
+################################################################################
+# Tests - Data path
+
+encap_params_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local plen=$1; shift
+ local enc_ethtype=$1; shift
+ local grp=$1; shift
+ local src=$1; shift
+ local mz=$1; shift
+
+ # Test that packets forwarded by the VXLAN MDB are encapsulated with
+ # the correct parameters. Transmit packets from the first namespace and
+ # check that they hit the corresponding filters on the ingress of the
+ # second namespace.
+
+ run_cmd "tc -n $ns2 qdisc replace dev veth0 clsact"
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "ip -n $ns2 address replace $vtep1_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep2_ip/$plen dev lo"
+
+ # Check destination IP.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep2_ip src_vni 10020"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Destination IP - match"
+
+ run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Destination IP - no match"
+
+ run_cmd "tc -n $ns2 filter del dev vx0 ingress pref 1 handle 101 flower"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep2_ip src_vni 10020"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10010"
+
+ # Check destination port.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip dst_port 1111 src_vni 10020"
+
+ run_cmd "tc -n $ns2 filter replace dev veth0 ingress pref 1 handle 101 proto $enc_ethtype flower ip_proto udp dst_port 4789 action pass"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev veth0 ingress" 101 1
+ log_test $? 0 "Default destination port - match"
+
+ run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev veth0 ingress" 101 1
+ log_test $? 0 "Default destination port - no match"
+
+ run_cmd "tc -n $ns2 filter replace dev veth0 ingress pref 1 handle 101 proto $enc_ethtype flower ip_proto udp dst_port 1111 action pass"
+ run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev veth0 ingress" 101 1
+ log_test $? 0 "Non-default destination port - match"
+
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev veth0 ingress" 101 1
+ log_test $? 0 "Non-default destination port - no match"
+
+ run_cmd "tc -n $ns2 filter del dev veth0 ingress pref 1 handle 101 flower"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10020"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10010"
+
+ # Check default VNI.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10020"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_key_id 10010 action pass"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Default destination VNI - match"
+
+ run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Default destination VNI - no match"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip vni 10020 src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip vni 10010 src_vni 10020"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_key_id 10020 action pass"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Non-default destination VNI - match"
+
+ run_cmd "ip netns exec $ns1 $mz br0.20 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Non-default destination VNI - no match"
+
+ run_cmd "tc -n $ns2 filter del dev vx0 ingress pref 1 handle 101 flower"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10020"
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10010"
+}
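
encap_params_common() validates each encapsulation attribute (destination IP, UDP destination port, VNI) by matching the tunnel metadata of decapsulated packets with tc flower on the receiving VXLAN device. A hedged sketch of one such check, plus an optional way to eyeball the outer headers on the wire (device and namespace names follow this script; the tcpdump invocation is illustrative):

    # Match decapsulated packets by their outer destination IP and VNI.
    tc -n ns2_v4 filter replace dev vx0 ingress pref 1 handle 101 proto all \
            flower enc_dst_ip 198.51.100.100 enc_key_id 10010 action pass
    # Optionally capture one encapsulated packet on the underlay link.
    ip netns exec ns2_v4 timeout 5 tcpdump -c 1 -ni veth0 'udp port 4789'
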
+
+encap_params_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local enc_ethtype="ip"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: Encapsulation parameters - IPv4 overlay / IPv4 underlay"
+ echo "------------------------------------------------------------------"
+
+ encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \
+ $grp $src "mausezahn"
+}
+
+encap_params_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local enc_ethtype="ip"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: Encapsulation parameters - IPv6 overlay / IPv4 underlay"
+ echo "------------------------------------------------------------------"
+
+ encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \
+ $grp $src "mausezahn -6"
+}
+
+encap_params_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local enc_ethtype="ipv6"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: Encapsulation parameters - IPv4 overlay / IPv6 underlay"
+ echo "------------------------------------------------------------------"
+
+ encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \
+ $grp $src "mausezahn"
+}
+
+encap_params_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local enc_ethtype="ipv6"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: Encapsulation parameters - IPv6 overlay / IPv6 underlay"
+ echo "------------------------------------------------------------------"
+
+ encap_params_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $enc_ethtype \
+ $grp $src "mausezahn -6"
+}
+
+starg_exclude_ir_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local plen=$1; shift
+ local grp=$1; shift
+ local valid_src=$1; shift
+ local invalid_src=$1; shift
+ local mz=$1; shift
+
+ # Install a (*, G) EXCLUDE MDB entry with one source and two remote
+ # VTEPs. Make sure that the source in the source list is not forwarded
+ # and that a source not in the list is forwarded. Remove one of the
+ # VTEPs from the entry and make sure that packets are only forwarded to
+ # the remaining VTEP.
+
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "ip -n $ns2 address replace $vtep1_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep2_ip/$plen dev lo"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 102 proto all flower enc_dst_ip $vtep2_ip action pass"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $invalid_src dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $invalid_src dst $vtep2_ip src_vni 10010"
+
+ # Check that invalid source is not forwarded to any VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 0
+ log_test $? 0 "Block excluded source - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 0
+ log_test $? 0 "Block excluded source - second VTEP"
+
+ # Check that valid source is forwarded to both VTEPs.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Forward valid source - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Forward valid source - second VTEP"
+
+ # Remove second VTEP.
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep2_ip src_vni 10010"
+
+ # Check that invalid source is not forwarded to any VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Block excluded source after removal - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Block excluded source after removal - second VTEP"
+
+ # Check that valid source is forwarded to the remaining VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Forward valid source after removal - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Forward valid source after removal - second VTEP"
+}
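
In EXCLUDE mode, traffic from the listed sources is dropped and traffic from any other source is forwarded to the entry's remote. A minimal sketch of the state that starg_exclude_ir_common() installs, with the addresses used by the IPv4 variant below:

    # (*, G) EXCLUDE: block 192.0.2.145, forward everything else to the VTEP.
    bridge -n ns1_v4 mdb replace dev vx0 port vx0 grp 239.1.1.1 permanent \
            filter_mode exclude source_list 192.0.2.145 \
            dst 198.51.100.100 src_vni 10010
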
+
+starg_exclude_ir_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - IR - IPv4 overlay / IPv4 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_exclude_ir_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - IR - IPv6 overlay / IPv4 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_exclude_ir_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - IR - IPv4 overlay / IPv6 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_exclude_ir_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - IR - IPv6 overlay / IPv6 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_exclude_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_include_ir_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local plen=$1; shift
+ local grp=$1; shift
+ local valid_src=$1; shift
+ local invalid_src=$1; shift
+ local mz=$1; shift
+
+ # Install a (*, G) INCLUDE MDB entry with one source and two remote
+ # VTEPs. Make sure that the source in the source list is forwarded and
+ # that a source not in the list is not forwarded. Remove one of the
+ # VTEPs from the entry and make sure that packets are only forwarded to
+ # the remaining VTEP.
+
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "ip -n $ns2 address replace $vtep1_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep2_ip/$plen dev lo"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 102 proto all flower enc_dst_ip $vtep2_ip action pass"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $valid_src dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $valid_src dst $vtep2_ip src_vni 10010"
+
+ # Check that invalid source is not forwarded to any VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 0
+ log_test $? 0 "Block excluded source - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 0
+ log_test $? 0 "Block excluded source - second VTEP"
+
+ # Check that valid source is forwarded to both VTEPs.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Forward valid source - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Forward valid source - second VTEP"
+
+ # Remove second VTEP.
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep2_ip src_vni 10010"
+
+ # Check that invalid source is not forwarded to any VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Block excluded source after removal - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Block excluded source after removal - second VTEP"
+
+ # Check that valid source is forwarded to the remaining VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Forward valid source after removal - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Forward valid source after removal - second VTEP"
+}
+
+starg_include_ir_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) INCLUDE - IR - IPv4 overlay / IPv4 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_include_ir_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) INCLUDE - IR - IPv6 overlay / IPv4 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_include_ir_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) INCLUDE - IR - IPv4 overlay / IPv6 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_include_ir_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) INCLUDE - IR - IPv6 overlay / IPv6 underlay"
+ echo "-------------------------------------------------------------"
+
+ starg_include_ir_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_exclude_p2mp_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local mcast_grp=$1; shift
+ local plen=$1; shift
+ local grp=$1; shift
+ local valid_src=$1; shift
+ local invalid_src=$1; shift
+ local mz=$1; shift
+
+ # Install a (*, G) EXCLUDE MDB entry with one source and one multicast
+ # group to which packets are sent. Make sure that the source in the
+ # source list is not forwarded and that a source not in the list is
+ # forwarded.
+
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "ip -n $ns2 address replace $mcast_grp/$plen dev veth0 autojoin"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $mcast_grp action pass"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $invalid_src dst $mcast_grp src_vni 10010 via veth0"
+
+ # Check that invalid source is not forwarded.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 0
+ log_test $? 0 "Block excluded source"
+
+ # Check that valid source is forwarded.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Forward valid source"
+
+ # Remove the VTEP from the multicast group.
+ run_cmd "ip -n $ns2 address del $mcast_grp/$plen dev veth0"
+
+ # Check that valid source is not received anymore.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Receive of valid source after removal from group"
+}
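
With P2MP replication, the entry's destination is an underlay multicast group rather than a unicast VTEP address, and "via" pins the outgoing device; receivers join that group, here by adding the address with the autojoin flag. A sketch of the plumbing used above, with the names and addresses of the IPv4 variant below:

    # Sender: point the MDB entry at an underlay multicast group via veth0.
    bridge -n ns1_v4 mdb replace dev vx0 port vx0 grp 239.1.1.1 permanent \
            filter_mode exclude source_list 192.0.2.145 \
            dst 238.1.1.1 src_vni 10010 via veth0
    # Receiver: join the underlay group so encapsulated packets are accepted.
    ip -n ns2_v4 address replace 238.1.1.1/32 dev veth0 autojoin
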
+
+starg_exclude_p2mp_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - P2MP - IPv4 overlay / IPv4 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_exclude_p2mp_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - P2MP - IPv6 overlay / IPv4 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_exclude_p2mp_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - P2MP - IPv4 overlay / IPv6 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_exclude_p2mp_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) EXCLUDE - P2MP - IPv6 overlay / IPv6 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_exclude_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_include_p2mp_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local mcast_grp=$1; shift
+ local plen=$1; shift
+ local grp=$1; shift
+ local valid_src=$1; shift
+ local invalid_src=$1; shift
+ local mz=$1; shift
+
+ # Install a (*, G) INCLUDE MDB entry with one source and one multicast
+ # group to which packets are sent. Make sure that the source in the
+ # source list is forwarded and that a source not in the list is not
+ # forwarded.
+
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "ip -n $ns2 address replace $mcast_grp/$plen dev veth0 autojoin"
+
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $mcast_grp action pass"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $valid_src dst $mcast_grp src_vni 10010 via veth0"
+
+ # Check that invalid source is not forwarded.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $invalid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 0
+ log_test $? 0 "Block excluded source"
+
+ # Check that valid source is forwarded.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Forward valid source"
+
+ # Remove the VTEP from the multicast group.
+ run_cmd "ip -n $ns2 address del $mcast_grp/$plen dev veth0"
+
+ # Check that valid source is not received anymore.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $valid_src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Receive of valid source after removal from group"
+}
+
+starg_include_p2mp_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) INCLUDE - P2MP - IPv4 overlay / IPv4 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_include_p2mp_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) INCLUDE - P2MP - IPv6 overlay / IPv4 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+starg_include_p2mp_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local grp=239.1.1.1
+ local valid_src=192.0.2.129
+ local invalid_src=192.0.2.145
+
+ echo
+ echo "Data path: (*, G) INCLUDE - P2MP - IPv4 overlay / IPv6 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn"
+}
+
+starg_include_p2mp_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local grp=ff0e::1
+ local valid_src=2001:db8:100::1
+ local invalid_src=2001:db8:200::1
+
+ echo
+ echo "Data path: (*, G) INCLUDE - P2MP - IPv6 overlay / IPv6 underlay"
+ echo "---------------------------------------------------------------"
+
+ starg_include_p2mp_common $ns1 $ns2 $mcast_grp $plen $grp \
+ $valid_src $invalid_src "mausezahn -6"
+}
+
+egress_vni_translation_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local mcast_grp=$1; shift
+ local plen=$1; shift
+ local proto=$1; shift
+ local grp=$1; shift
+ local src=$1; shift
+ local mz=$1; shift
+
+ # When P2MP tunnels are used with optimized inter-subnet multicast
+ # (OISM) [1], the ingress VTEP does not perform VNI translation and
+ # uses the VNI of the source broadcast domain (BD). If the egress VTEP
+ # is a member in the source BD, then no VNI translation is needed.
+ # Otherwise, the egress VTEP needs to translate the VNI to the
+ # supplementary broadcast domain (SBD) VNI, which is usually the L3VNI.
+ #
+ # In this test, remove the VTEP in the second namespace from VLAN 10
+ # (VNI 10010) and make sure that a packet sent from this VLAN on the
+ # first VTEP is received by the SVI corresponding to the L3VNI (14000 /
+ # VLAN 4000) on the second VTEP.
+ #
+ # The second VTEP will be able to decapsulate the packet with VNI 10010
+ # because this VNI is configured on its shared VXLAN device. Later,
+ # when ingressing the bridge, the VNI to VLAN lookup will fail because
+ # the VTEP is not a member in VLAN 10, which will cause the packet to
+ # be tagged with VLAN 4000 since it is configured as PVID.
+ #
+ # [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast
+
+ run_cmd "tc -n $ns2 qdisc replace dev br0.4000 clsact"
+ run_cmd "ip -n $ns2 address replace $mcast_grp/$plen dev veth0 autojoin"
+ run_cmd "tc -n $ns2 filter replace dev br0.4000 ingress pref 1 handle 101 proto $proto flower src_ip $src dst_ip $grp action pass"
+
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp src $src permanent dst $mcast_grp src_vni 10010 via veth0"
+
+ # Remove the second VTEP from VLAN 10.
+ run_cmd "bridge -n $ns2 vlan del vid 10 dev vx0"
+
+ # Make sure that packets sent from the first VTEP over VLAN 10 are
+ # received by the SVI corresponding to the L3VNI (14000 / VLAN 4000) on
+ # the second VTEP, since it is configured as PVID.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev br0.4000 ingress" 101 1
+ log_test $? 0 "Egress VNI translation - PVID configured"
+
+ # Remove PVID flag from VLAN 4000 on the second VTEP and make sure
+ # packets are no longer received by the SVI interface.
+ run_cmd "bridge -n $ns2 vlan add vid 4000 dev vx0"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev br0.4000 ingress" 101 1
+ log_test $? 0 "Egress VNI translation - no PVID configured"
+
+ # Reconfigure the PVID and make sure packets are received again.
+ run_cmd "bridge -n $ns2 vlan add vid 4000 dev vx0 pvid"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev br0.4000 ingress" 101 2
+ log_test $? 0 "Egress VNI translation - PVID reconfigured"
+}
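
As the OISM comment above explains, egress VNI translation falls out of ordinary bridge VLAN configuration on the shared VXLAN device: VLAN membership decides whether the original VNI maps back to a VLAN, and the PVID decides where untranslated traffic lands. A sketch of the toggling performed above on the egress VTEP (namespace name as in the IPv4 variant below):

    bridge -n ns2_v4 vlan del vid 10 dev vx0          # leave the source BD (VNI 10010)
    bridge -n ns2_v4 vlan add vid 4000 dev vx0        # keep SBD membership, clear the PVID flag
    bridge -n ns2_v4 vlan add vid 4000 dev vx0 pvid   # restore the PVID; traffic reaches br0.4000 again
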
+
+egress_vni_translation_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local proto="ipv4"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: Egress VNI translation - IPv4 overlay / IPv4 underlay"
+ echo "----------------------------------------------------------------"
+
+ egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \
+ $src "mausezahn"
+}
+
+egress_vni_translation_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local mcast_grp=238.1.1.1
+ local plen=32
+ local proto="ipv6"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: Egress VNI translation - IPv6 overlay / IPv4 underlay"
+ echo "----------------------------------------------------------------"
+
+ egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \
+ $src "mausezahn -6"
+}
+
+egress_vni_translation_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local proto="ipv4"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: Egress VNI translation - IPv4 overlay / IPv6 underlay"
+ echo "----------------------------------------------------------------"
+
+ egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \
+ $src "mausezahn"
+}
+
+egress_vni_translation_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local mcast_grp=ff0e::2
+ local plen=128
+ local proto="ipv6"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: Egress VNI translation - IPv6 overlay / IPv6 underlay"
+ echo "----------------------------------------------------------------"
+
+ egress_vni_translation_common $ns1 $ns2 $mcast_grp $plen $proto $grp \
+ $src "mausezahn -6"
+}
+
+all_zeros_mdb_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local vtep3_ip=$1; shift
+ local vtep4_ip=$1; shift
+ local plen=$1; shift
+ local ipv4_grp=239.1.1.1
+ local ipv4_unreg_grp=239.2.2.2
+ local ipv4_ll_grp=224.0.0.100
+ local ipv4_src=192.0.2.129
+ local ipv6_grp=ff0e::1
+ local ipv6_unreg_grp=ff0e::2
+ local ipv6_ll_grp=ff02::1
+ local ipv6_src=2001:db8:100::1
+
+ # Install all-zeros (catchall) MDB entries for IPv4 and IPv6 traffic
+ # and make sure they only forward unregistered IP multicast traffic
+ # which is not link-local. Also make sure that each entry only forwards
+ # traffic from the matching address family.
+
+ # Associate two different VTEPs with each all-zeros MDB entry: two with
+ # the IPv4 entry (0.0.0.0) and another two with the IPv6 one (::).
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp 0.0.0.0 permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp 0.0.0.0 permanent dst $vtep2_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp :: permanent dst $vtep3_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp :: permanent dst $vtep4_ip src_vni 10010"
+
+ # Associate one VTEP from each set with a regular MDB entry: One with
+ # an IPv4 entry and another with an IPv6 one.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $ipv4_grp permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $ipv6_grp permanent dst $vtep3_ip src_vni 10010"
+
+ # Add filters to match on decapsulated traffic in the second namespace.
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 102 proto all flower enc_dst_ip $vtep2_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 103 proto all flower enc_dst_ip $vtep3_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 104 proto all flower enc_dst_ip $vtep4_ip action pass"
+
+ # Configure the VTEP addresses in the second namespace to enable
+ # decapsulation.
+ run_cmd "ip -n $ns2 address replace $vtep1_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep2_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep3_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep4_ip/$plen dev lo"
+
+ # Send registered IPv4 multicast and make sure it only arrives to the
+ # first VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Registered IPv4 multicast - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 0
+ log_test $? 0 "Registered IPv4 multicast - second VTEP"
+
+ # Send unregistered IPv4 multicast that is not link-local and make sure
+ # it arrives to the first and second VTEPs.
+ run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Unregistered IPv4 multicast - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Unregistered IPv4 multicast - second VTEP"
+
+ # Send IPv4 link-local multicast traffic and make sure it does not
+ # arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn br0.10 -A $ipv4_src -B $ipv4_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Link-local IPv4 multicast - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Link-local IPv4 multicast - second VTEP"
+
+ # Send registered IPv4 multicast using a unicast MAC address and make
+ # sure it does not arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn br0.10 -a own -b 00:11:22:33:44:55 -A $ipv4_src -B $ipv4_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Registered IPv4 multicast with a unicast MAC - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Registered IPv4 multicast with a unicast MAC - second VTEP"
+
+ # Send registered IPv4 multicast using a broadcast MAC address and make
+ # sure it does not arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn br0.10 -a own -b bcast -A $ipv4_src -B $ipv4_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 2
+ log_test $? 0 "Registered IPv4 multicast with a broadcast MAC - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Registered IPv4 multicast with a broadcast MAC - second VTEP"
+
+ # Make sure IPv4 traffic did not reach the VTEPs associated with
+ # IPv6 entries.
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 0
+ log_test $? 0 "IPv4 traffic - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 0
+ log_test $? 0 "IPv4 traffic - fourth VTEP"
+
+ # Reset IPv4 filters before testing IPv6 traffic.
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto all flower enc_dst_ip $vtep1_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 102 proto all flower enc_dst_ip $vtep2_ip action pass"
+
+ # Send registered IPv6 multicast and make sure it only arrives to the
+ # third VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 1
+ log_test $? 0 "Registered IPv6 multicast - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 0
+ log_test $? 0 "Registered IPv6 multicast - fourth VTEP"
+
+ # Send unregistered IPv6 multicast that is not link-local and make sure
+ # it arrives to the third and fourth VTEPs.
+ run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_unreg_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 2
+ log_test $? 0 "Unregistered IPv6 multicast - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 1
+ log_test $? 0 "Unregistered IPv6 multicast - fourth VTEP"
+
+ # Send IPv6 link-local multicast traffic and make sure it does not
+ # arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -A $ipv6_src -B $ipv6_ll_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 2
+ log_test $? 0 "Link-local IPv6 multicast - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 1
+ log_test $? 0 "Link-local IPv6 multicast - fourth VTEP"
+
+ # Send registered IPv6 multicast using a unicast MAC address and make
+ # sure it does not arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -a own -b 00:11:22:33:44:55 -A $ipv6_src -B $ipv6_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 2
+ log_test $? 0 "Registered IPv6 multicast with a unicast MAC - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 1
+ log_test $? 0 "Registered IPv6 multicast with a unicast MAC - fourth VTEP"
+
+ # Send registered IPv6 multicast using a broadcast MAC address and make
+ # sure it does not arrive to any VTEP.
+ run_cmd "ip netns exec $ns1 mausezahn -6 br0.10 -a own -b bcast -A $ipv6_src -B $ipv6_grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 103 2
+ log_test $? 0 "Registered IPv6 multicast with a broadcast MAC - third VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 104 1
+ log_test $? 0 "Registered IPv6 multicast with a broadcast MAC - fourth VTEP"
+
+ # Make sure IPv6 traffic did not reach the VTEPs associated with
+ # IPv4 entries.
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 0
+ log_test $? 0 "IPv6 traffic - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 0
+ log_test $? 0 "IPv6 traffic - second VTEP"
+}
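
The all-zeros entries act as per-address-family catch-alls: 0.0.0.0 floods unregistered, non-link-local IPv4 multicast and :: does the same for IPv6, each only to its own set of VTEPs. A minimal sketch of the two entries, using the addresses of the IPv4 variant below:

    bridge -n ns1_v4 mdb replace dev vx0 port vx0 grp 0.0.0.0 permanent dst 198.51.100.101 src_vni 10010
    bridge -n ns1_v4 mdb replace dev vx0 port vx0 grp :: permanent dst 198.51.100.103 src_vni 10010
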
+
+all_zeros_mdb_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.101
+ local vtep2_ip=198.51.100.102
+ local vtep3_ip=198.51.100.103
+ local vtep4_ip=198.51.100.104
+ local plen=32
+
+ echo
+ echo "Data path: All-zeros MDB entry - IPv4 underlay"
+ echo "----------------------------------------------"
+
+ all_zeros_mdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $vtep3_ip \
+ $vtep4_ip $plen
+}
+
+all_zeros_mdb_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local vtep3_ip=2001:db8:3000::1
+ local vtep4_ip=2001:db8:4000::1
+ local plen=128
+
+ echo
+ echo "Data path: All-zeros MDB entry - IPv6 underlay"
+ echo "----------------------------------------------"
+
+ all_zeros_mdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $vtep3_ip \
+ $vtep4_ip $plen
+}
+
+mdb_fdb_common()
+{
+ local ns1=$1; shift
+ local ns2=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local plen=$1; shift
+ local proto=$1; shift
+ local grp=$1; shift
+ local src=$1; shift
+ local mz=$1; shift
+
+ # Install an MDB entry and an FDB entry and make sure that the FDB
+ # entry only forwards traffic that was not forwarded by the MDB.
+
+ # Associate the MDB entry with one VTEP and the FDB entry with another
+ # VTEP.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 fdb add 00:00:00:00:00:00 dev vx0 self static dst $vtep2_ip src_vni 10010"
+
+ # Add filters to match on decapsulated traffic in the second namespace.
+ run_cmd "tc -n $ns2 qdisc replace dev vx0 clsact"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 101 proto $proto flower ip_proto udp dst_port 54321 enc_dst_ip $vtep1_ip action pass"
+ run_cmd "tc -n $ns2 filter replace dev vx0 ingress pref 1 handle 102 proto $proto flower ip_proto udp dst_port 54321 enc_dst_ip $vtep2_ip action pass"
+
+ # Configure the VTEP addresses in the second namespace to enable
+ # decapsulation.
+ run_cmd "ip -n $ns2 address replace $vtep1_ip/$plen dev lo"
+ run_cmd "ip -n $ns2 address replace $vtep2_ip/$plen dev lo"
+
+ # Send IP multicast traffic and make sure it is forwarded by the MDB
+ # and only arrives to the first VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "IP multicast - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 0
+ log_test $? 0 "IP multicast - second VTEP"
+
+ # Send broadcast traffic and make sure it is forwarded by the FDB and
+ # only arrives to the second VTEP.
+ run_cmd "ip netns exec $ns1 $mz br0.10 -a own -b bcast -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "Broadcast - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 1
+ log_test $? 0 "Broadcast - second VTEP"
+
+ # Remove the MDB entry and make sure that IP multicast is now forwarded
+ # by the FDB to the second VTEP.
+ run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep1_ip src_vni 10010"
+ run_cmd "ip netns exec $ns1 $mz br0.10 -A $src -B $grp -t udp sp=12345,dp=54321 -p 100 -c 1 -q"
+ tc_check_packets "$ns2" "dev vx0 ingress" 101 1
+ log_test $? 0 "IP multicast after removal - first VTEP"
+ tc_check_packets "$ns2" "dev vx0 ingress" 102 2
+ log_test $? 0 "IP multicast after removal - second VTEP"
+}
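
MDB lookups take precedence for registered multicast; anything the MDB does not handle (broadcast, unknown unicast and unmatched multicast) falls back to the all-zeros FDB entry. A sketch of the split configured by mdb_fdb_common(), with the addresses of the IPv4 variant below:

    # Registered multicast follows the MDB entry to the first VTEP ...
    bridge -n ns1_v4 mdb replace dev vx0 port vx0 grp 239.1.1.1 permanent dst 198.51.100.100 src_vni 10010
    # ... everything else follows the all-zeros FDB entry to the second VTEP.
    bridge -n ns1_v4 fdb add 00:00:00:00:00:00 dev vx0 self static dst 198.51.100.200 src_vni 10010
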
+
+mdb_fdb_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local proto="ipv4"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: MDB with FDB - IPv4 overlay / IPv4 underlay"
+ echo "------------------------------------------------------"
+
+ mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \
+ "mausezahn"
+}
+
+mdb_fdb_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local ns2=ns2_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local plen=32
+ local proto="ipv6"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: MDB with FDB - IPv6 overlay / IPv4 underlay"
+ echo "------------------------------------------------------"
+
+ mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \
+ "mausezahn -6"
+}
+
+mdb_fdb_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local proto="ipv4"
+ local grp=239.1.1.1
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: MDB with FDB - IPv4 overlay / IPv6 underlay"
+ echo "------------------------------------------------------"
+
+ mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \
+ "mausezahn"
+}
+
+mdb_fdb_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local ns2=ns2_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local plen=128
+ local proto="ipv6"
+ local grp=ff0e::1
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: MDB with FDB - IPv6 overlay / IPv6 underlay"
+ echo "------------------------------------------------------"
+
+ mdb_fdb_common $ns1 $ns2 $vtep1_ip $vtep2_ip $plen $proto $grp $src \
+ "mausezahn -6"
+}
+
+mdb_grp1_loop()
+{
+ local ns1=$1; shift
+ local vtep1_ip=$1; shift
+ local grp1=$1; shift
+
+ while true; do
+ bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp1 dst $vtep1_ip src_vni 10010
+ bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp1 permanent dst $vtep1_ip src_vni 10010
+ done >/dev/null 2>&1
+}
+
+mdb_grp2_loop()
+{
+ local ns1=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local grp2=$1; shift
+
+ while true; do
+ bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp2 dst $vtep1_ip src_vni 10010
+ bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp2 permanent dst $vtep1_ip src_vni 10010
+ bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp2 permanent dst $vtep2_ip src_vni 10010
+ done >/dev/null 2>&1
+}
+
+mdb_torture_common()
+{
+ local ns1=$1; shift
+ local vtep1_ip=$1; shift
+ local vtep2_ip=$1; shift
+ local grp1=$1; shift
+ local grp2=$1; shift
+ local src=$1; shift
+ local mz=$1; shift
+ local pid1
+ local pid2
+ local pid3
+ local pid4
+
+ # Continuously send two streams that are forwarded by two different MDB
+ # entries. The first entry will be added and deleted in a loop. This
+ # allows us to test that the data path does not use freed MDB entry
+ # memory. The second entry will have two remotes, one that is added and
+ # deleted in a loop and another that is replaced in a loop. This allows
+ # us to test that the data path does not use freed remote entry memory.
+ # The test is considered successful if nothing crashed.
+
+ # Create the MDB entries that will be continuously deleted / replaced.
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp1 permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp2 permanent dst $vtep1_ip src_vni 10010"
+ run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp2 permanent dst $vtep2_ip src_vni 10010"
+
+ mdb_grp1_loop $ns1 $vtep1_ip $grp1 &
+ pid1=$!
+ mdb_grp2_loop $ns1 $vtep1_ip $vtep2_ip $grp2 &
+ pid2=$!
+ ip netns exec $ns1 $mz br0.10 -A $src -B $grp1 -t udp sp=12345,dp=54321 -p 100 -c 0 -q &
+ pid3=$!
+ ip netns exec $ns1 $mz br0.10 -A $src -B $grp2 -t udp sp=12345,dp=54321 -p 100 -c 0 -q &
+ pid4=$!
+
+ sleep 30
+ kill -9 $pid1 $pid2 $pid3 $pid4
+ wait $pid1 $pid2 $pid3 $pid4 2>/dev/null
+
+ log_test 0 0 "Torture test"
+}
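
The torture test is purely a crash detector for use-after-free in the data path; it has no pass/fail criterion beyond the machine surviving. A hedged way to run only these cases and scan for splats afterwards (the script name is assumed from this series):

    ./test_vxlan_mdb.sh -t "mdb_torture_ipv4_ipv4 mdb_torture_ipv6_ipv6"
    dmesg | grep -iE 'kasan|use-after-free|warning' || echo "no splats found"
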
+
+mdb_torture_ipv4_ipv4()
+{
+ local ns1=ns1_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local grp1=239.1.1.1
+ local grp2=239.2.2.2
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: MDB torture test - IPv4 overlay / IPv4 underlay"
+ echo "----------------------------------------------------------"
+
+ mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \
+ "mausezahn"
+}
+
+mdb_torture_ipv6_ipv4()
+{
+ local ns1=ns1_v4
+ local vtep1_ip=198.51.100.100
+ local vtep2_ip=198.51.100.200
+ local grp1=ff0e::1
+ local grp2=ff0e::2
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: MDB torture test - IPv6 overlay / IPv4 underlay"
+ echo "----------------------------------------------------------"
+
+ mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \
+ "mausezahn -6"
+}
+
+mdb_torture_ipv4_ipv6()
+{
+ local ns1=ns1_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local grp1=239.1.1.1
+ local grp2=239.2.2.2
+ local src=192.0.2.129
+
+ echo
+ echo "Data path: MDB torture test - IPv4 overlay / IPv6 underlay"
+ echo "----------------------------------------------------------"
+
+ mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \
+ "mausezahn"
+}
+
+mdb_torture_ipv6_ipv6()
+{
+ local ns1=ns1_v6
+ local vtep1_ip=2001:db8:1000::1
+ local vtep2_ip=2001:db8:2000::1
+ local grp1=ff0e::1
+ local grp2=ff0e::2
+ local src=2001:db8:100::1
+
+ echo
+ echo "Data path: MDB torture test - IPv6 overlay / IPv6 underlay"
+ echo "----------------------------------------------------------"
+
+ mdb_torture_common $ns1 $vtep1_ip $vtep2_ip $grp1 $grp2 $src \
+ "mausezahn -6"
+}
+
+################################################################################
+# Usage
+
+usage()
+{
+ cat <<EOF
+usage: ${0##*/} OPTS
+
+ -t <test> Test(s) to run (default: all)
+ (options: $TESTS)
+ -c Control path tests only
+ -d Data path tests only
+ -p Pause on fail
+ -P Pause after each test before cleanup
+ -v Verbose mode (show commands and output)
+EOF
+}
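
A few example invocations, assuming the script is installed as test_vxlan_mdb.sh alongside the other net selftests:

    ./test_vxlan_mdb.sh                                # run every control and data path test
    ./test_vxlan_mdb.sh -c                             # control path tests only
    ./test_vxlan_mdb.sh -t mdb_fdb_ipv4_ipv4 -p -v     # one test, pause on failure, verbose
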
+
+################################################################################
+# Main
+
+trap cleanup EXIT
+
+while getopts ":t:cdpPvh" opt; do
+ case $opt in
+ t) TESTS=$OPTARG;;
+ c) TESTS=${CONTROL_PATH_TESTS};;
+ d) TESTS=${DATA_PATH_TESTS};;
+ p) PAUSE_ON_FAIL=yes;;
+ P) PAUSE=yes;;
+ v) VERBOSE=$(($VERBOSE + 1));;
+ h) usage; exit 0;;
+ *) usage; exit 1;;
+ esac
+done
+
+# Make sure we don't pause twice.
+[ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no
+
+if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+ exit $ksft_skip;
+fi
+
+if [ ! -x "$(command -v ip)" ]; then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+if [ ! -x "$(command -v bridge)" ]; then
+ echo "SKIP: Could not run test without bridge tool"
+ exit $ksft_skip
+fi
+
+if [ ! -x "$(command -v mausezahn)" ]; then
+ echo "SKIP: Could not run test without mausezahn tool"
+ exit $ksft_skip
+fi
+
+if [ ! -x "$(command -v jq)" ]; then
+ echo "SKIP: Could not run test without jq tool"
+ exit $ksft_skip
+fi
+
+bridge mdb help 2>&1 | grep -q "src_vni"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 bridge too old, missing VXLAN MDB support"
+ exit $ksft_skip
+fi
+
+# Start clean.
+cleanup
+
+for t in $TESTS
+do
+ setup; $t; cleanup;
+done
+
+if [ "$TESTS" != "none" ]; then
+ printf "\nTests passed: %3d\n" ${nsuccess}
+ printf "Tests failed: %3d\n" ${nfail}
+fi
+
+exit $ret
close(sfd);
}
+TEST(prequeue) {
+ struct tls_crypto_info_keys tls12;
+ char buf[20000], buf2[20000];
+ struct sockaddr_in addr;
+ int sfd, cfd, ret, fd;
+ socklen_t len;
+
+ len = sizeof(addr);
+ memrnd(buf, sizeof(buf));
+
+ tls_crypto_info_init(TLS_1_2_VERSION, TLS_CIPHER_AES_GCM_256, &tls12);
+
+ addr.sin_family = AF_INET;
+ addr.sin_addr.s_addr = htonl(INADDR_ANY);
+ addr.sin_port = 0;
+
+ fd = socket(AF_INET, SOCK_STREAM, 0);
+ sfd = socket(AF_INET, SOCK_STREAM, 0);
+
+ ASSERT_EQ(bind(sfd, &addr, sizeof(addr)), 0);
+ ASSERT_EQ(listen(sfd, 10), 0);
+ ASSERT_EQ(getsockname(sfd, &addr, &len), 0);
+ ASSERT_EQ(connect(fd, &addr, sizeof(addr)), 0);
+ ASSERT_GE(cfd = accept(sfd, &addr, &len), 0);
+ close(sfd);
+
+ ret = setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls"));
+ if (ret) {
+ ASSERT_EQ(errno, ENOENT);
+ SKIP(return, "no TLS support");
+ }
+
+ ASSERT_EQ(setsockopt(fd, SOL_TLS, TLS_TX, &tls12, tls12.len), 0);
+ EXPECT_EQ(send(fd, buf, sizeof(buf), MSG_DONTWAIT), sizeof(buf));
+
+ ASSERT_EQ(setsockopt(cfd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")), 0);
+ ASSERT_EQ(setsockopt(cfd, SOL_TLS, TLS_RX, &tls12, tls12.len), 0);
+ EXPECT_EQ(recv(cfd, buf2, sizeof(buf2), MSG_WAITALL), sizeof(buf2));
+
+ EXPECT_EQ(memcmp(buf, buf2, sizeof(buf)), 0);
+
+ close(fd);
+ close(cfd);
+}
+
TEST_HARNESS_MAIN
this test case will still appear in the results output but
marked as skipped. This key can be placed anywhere inside the
test case at the top level.
+dependsOn: Same as 'skip', but the value is executed as a command. The test
+ is skipped when the command returns non-zero.
category: A list of single-word descriptions covering what the command
under test is testing. Example: filter, actions, u32, gact, etc.
setup: The list of commands required to ensure the command under test
"teardown": [
"$TC actions flush action tunnel_key"
]
+ },
+ {
+ "id": "6bda",
+ "name": "Add tunnel_key action with nofrag option",
+ "category": [
+ "actions",
+ "tunnel_key"
+ ],
+ "dependsOn": "$TC actions add action tunnel_key help 2>&1 | grep -q nofrag",
+ "setup": [
+ [
+ "$TC action flush action tunnel_key",
+ 0,
+ 1,
+ 255
+ ]
+ ],
+ "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 10.10.10.2 id 1111 nofrag index 222",
+ "expExitCode": "0",
+ "verifyCmd": "$TC actions get action tunnel_key index 222",
+ "matchPattern": "action order [0-9]+: tunnel_key.*src_ip 10.10.10.1.*dst_ip 10.10.10.2.*key_id 1111.*csum.*nofrag pipe.*index 222",
+ "matchCount": "1",
+ "teardown": [
+ "$TC actions flush action tunnel_key"
+ ]
}
]
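
The new "6bda" case is gated by dependsOn, so an iproute2 build whose tunnel_key action does not support nofrag skips the test rather than failing it. An illustrative way to exercise just this case with tdc (the -e flag selects tests by ID; treat the exact invocation as a sketch):

    cd tools/testing/selftests/tc-testing
    ./tdc.py -e 6bda
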
--- /dev/null
+[
+ {
+ "id": "abdc",
+ "name": "Reference pedit action object in filter",
+ "category": [
+ "infra",
+ "pedit"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC action add action pedit munge offset 0 u8 clear index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action pedit index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action pedit"
+ ]
+ },
+ {
+ "id": "7a70",
+ "name": "Reference mpls action object in filter",
+ "category": [
+ "infra",
+ "mpls"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC action add action mpls pop protocol ipv4 index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action mpls index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action mpls"
+ ]
+ },
+ {
+ "id": "d241",
+ "name": "Reference bpf action object in filter",
+ "category": [
+ "infra",
+ "bpf"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC action add action bpf bytecode '4,40 0 0 12,21 0 1 2048,6 0 0 262144,6 0 0 0' index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action bpf index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action bpf"
+ ]
+ },
+ {
+ "id": "383a",
+ "name": "Reference connmark action object in filter",
+ "category": [
+ "infra",
+ "connmark"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action connmark"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action connmark index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action connmark"
+ ]
+ },
+ {
+ "id": "c619",
+ "name": "Reference csum action object in filter",
+ "category": [
+ "infra",
+ "csum"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action csum ip4h index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action csum index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action csum"
+ ]
+ },
+ {
+ "id": "a93d",
+ "name": "Reference ct action object in filter",
+ "category": [
+ "infra",
+ "ct"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action ct index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action ct index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action ct"
+ ]
+ },
+ {
+ "id": "8bb5",
+ "name": "Reference ctinfo action object in filter",
+ "category": [
+ "infra",
+ "ctinfo"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC action add action ctinfo index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action ctinfo index 10",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action ctinfo"
+ ]
+ },
+ {
+ "id": "2241",
+ "name": "Reference gact action object in filter",
+ "category": [
+ "infra",
+ "gact"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action pass index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action gact index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action gact"
+ ]
+ },
+ {
+ "id": "35e9",
+ "name": "Reference gate action object in filter",
+ "category": [
+ "infra",
+ "gate"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC action add action gate priority 1 sched-entry close 100000000ns index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action gate index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action gate"
+ ]
+ },
+ {
+ "id": "b22e",
+ "name": "Reference ife action object in filter",
+ "category": [
+ "infra",
+ "ife"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action ife encode allow mark pass index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action ife index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action ife"
+ ]
+ },
+ {
+ "id": "ef74",
+ "name": "Reference mirred action object in filter",
+ "category": [
+ "infra",
+ "mirred"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action mirred egress mirror index 1 dev lo"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action mirred index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action mirred"
+ ]
+ },
+ {
+ "id": "2c81",
+ "name": "Reference nat action object in filter",
+ "category": [
+ "infra",
+ "nat"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action nat ingress 192.168.1.1 200.200.200.1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action nat index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action nat"
+ ]
+ },
+ {
+ "id": "ac9d",
+ "name": "Reference police action object in filter",
+ "category": [
+ "infra",
+ "police"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action police rate 1kbit burst 10k index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action police index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action police"
+ ]
+ },
+ {
+ "id": "68be",
+ "name": "Reference sample action object in filter",
+ "category": [
+ "infra",
+ "sample"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action sample rate 10 group 1 index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action sample index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action sample"
+ ]
+ },
+ {
+ "id": "cf01",
+ "name": "Reference skbedit action object in filter",
+ "category": [
+ "infra",
+ "skbedit"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action skbedit mark 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action skbedit index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action skbedit"
+ ]
+ },
+ {
+ "id": "c109",
+ "name": "Reference skbmod action object in filter",
+ "category": [
+ "infra",
+ "skbmod"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action skbmod set dmac 11:22:33:44:55:66 index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action skbmod index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action skbmod"
+ ]
+ },
+ {
+ "id": "4abc",
+ "name": "Reference tunnel_key action object in filter",
+ "category": [
+ "infra",
+ "tunnel_key"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 id 1 index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action tunnel_key index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action tunnel_key"
+ ]
+ },
+ {
+ "id": "dadd",
+ "name": "Reference vlan action object in filter",
+ "category": [
+ "infra",
+ "tunnel_key"
+ ],
+ "setup": [
+ "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$TC qdisc add dev $DUMMY ingress",
+ "$TC actions add action vlan pop pipe index 1"
+ ],
+ "cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action vlan index 1",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DUMMY parent ffff: handle 1 prio 1 protocol ip matchall",
+ "matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY ingress",
+ "$IP link del dev $DUMMY type dummy",
+ "$TC actions flush action vlan"
+ ]
+ }
+]
pm.call_post_execute()
return res
+    if 'dependsOn' in tidx:
+        if (args.verbose > 0):
+            print('probe command for test skip')
+        (p, procout) = exec_cmd(args, pm, 'execute', tidx['dependsOn'])
+        if p:
+            if (p.returncode != 0):
+                res = TestResult(tidx['id'], tidx['name'])
+                res.set_result(ResultState.skip)
+                res.set_errormsg('probe command: test skipped.')
+                pm.call_pre_case(tidx, test_skip=True)
+                pm.call_post_execute()
+                return res
+
# populate NAMES with TESTID for this test
NAMES['TESTID'] = tidx['id']
*.d
vsock_test
vsock_diag_test
+vsock_perf
exit(EXIT_FAILURE);
}
- if (errno != ENOMEM) {
+ if (errno != EFAULT) {
perror("unexpected errno of 'broken_buf'");
exit(EXIT_FAILURE);
}
exit(EXIT_FAILURE);
}
- if (errno != ENOMEM) {
+ if (errno != EFAULT) {
fprintf(stderr, "unexpected recv(2) errno %d\n", errno);
exit(EXIT_FAILURE);
}