Merge branch 'pm-cpufreq'
author	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	Mon, 24 Apr 2023 16:24:47 +0000 (18:24 +0200)
committer	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	Mon, 24 Apr 2023 16:24:47 +0000 (18:24 +0200)
Merge cpufreq updates for 6.4-rc1:

 - Fix the frequency unit in cpufreq_verify_current_freq() checks
   (Sanjay Chandrashekara).
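
   The fix above is about comparing frequencies in a consistent unit.
   As a minimal illustrative sketch (names and tolerance are hypothetical,
   not the kernel's code): cpufreq tracks frequencies in kHz, so a hardware
   read-back in Hz must be converted before the drift check, otherwise the
   comparison is off by a factor of 1000.

```c
#include <stdbool.h>

#define HZ_PER_KHZ 1000UL

/* Convert a raw hardware read-back in Hz to the kHz unit cpufreq uses. */
static unsigned long hw_freq_khz(unsigned long hw_freq_hz)
{
	return hw_freq_hz / HZ_PER_KHZ;
}

/*
 * Return true when the cached policy frequency (kHz) still matches the
 * hardware, within an illustrative 1 MHz tolerance.
 */
static bool cur_freq_matches(unsigned long policy_khz, unsigned long hw_hz)
{
	unsigned long hw_khz = hw_freq_khz(hw_hz);
	unsigned long diff = policy_khz > hw_khz ? policy_khz - hw_khz
						 : hw_khz - policy_khz;

	return diff <= 1000; /* 1000 kHz = 1 MHz */
}
```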

 - Make mode_state_machine in amd-pstate static (Tom Rix).

 - Make the cpufreq core require drivers with target_index() to set
   freq_table (Viresh Kumar).
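
   The contract enforced by that change can be sketched as follows
   (simplified stand-in types, not the kernel's structures): a driver
   implementing ->target_index() selects entries out of a frequency
   table, so registering one without a table is rejected up front.

```c
#include <stddef.h>

struct freq_entry { unsigned int khz; };

struct cpufreq_drv {
	int (*target_index)(unsigned int idx);
	const struct freq_entry *freq_table;
};

/* Dummy callback and table for demonstration purposes only. */
static int demo_target(unsigned int idx) { (void)idx; return 0; }
static const struct freq_entry demo_table[] = { { 1000000 } };

/*
 * Returns 0 on success, -22 (EINVAL) when a ->target_index() driver
 * forgot to provide its frequency table.
 */
static int register_drv(const struct cpufreq_drv *drv)
{
	if (drv->target_index && !drv->freq_table)
		return -22;
	return 0;
}
```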

 - Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry (Jingyu Wang).

 - Use of_property_read_bool() for boolean properties in the pmac32
   cpufreq driver (Rob Herring).
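
   The point of of_property_read_bool() is that for boolean devicetree
   properties only *presence* matters, not the value. A toy model of that
   semantic (the struct and lookup are illustrative, not the kernel's
   OF implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct dt_prop {
	const char *name;
};

struct dt_node {
	const struct dt_prop *props;
	size_t nprops;
};

/*
 * A boolean property is true if and only if it exists on the node;
 * an empty property still reads as true.
 */
static bool dt_property_read_bool(const struct dt_node *np, const char *name)
{
	for (size_t i = 0; i < np->nprops; i++)
		if (strcmp(np->props[i].name, name) == 0)
			return true;
	return false;
}
```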

 - Make the cpufreq sysfs interface return proper error codes on
   obviously invalid input (qinyu).
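
   The kernel's kstrto*() helpers implement this by rejecting any input
   that is not entirely a number. A hedged userspace sketch of the same
   idea (parse_freq_khz is a hypothetical name, not a kernel function):
   a sysfs store callback should return -EINVAL for input such as
   "200a0000" instead of silently truncating it.

```c
#include <stdlib.h>
#include <errno.h>

static int parse_freq_khz(const char *buf, unsigned int *out)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(buf, &end, 10);
	if (errno || end == buf)
		return -EINVAL;
	/* Allow one trailing newline, as sysfs writes usually carry one. */
	if (*end == '\n')
		end++;
	/* Anything left over means the string was not a plain number. */
	if (*end != '\0')
		return -EINVAL;
	*out = (unsigned int)val;
	return 0;
}
```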

 - Add guided autonomous mode support to the AMD P-state driver (Wyes
   Karny).

 - Make the Intel P-state driver enable HWP IO boost on all server
   platforms (Srinivas Pandruvada).

 - Add OPP and bandwidth support to the tegra194 cpufreq driver (Sumit
   Gupta).

 - Use of_property_present() for testing DT property presence (Rob
   Herring).

 - Remove MODULE_LICENSE in non-modules (Nick Alcock).

 - Add SM7225 to cpufreq-dt-platdev blocklist (Luca Weiss).

 - Optimizations and fixes for qcom-cpufreq-hw driver (Krzysztof
   Kozlowski, Konrad Dybcio, and Bjorn Andersson).

 - DT binding updates for qcom-cpufreq-hw driver (Konrad Dybcio and
   Bartosz Golaszewski).

 - Updates and fixes for mediatek driver (Jia-Wei Chang and
   AngeloGioacchino Del Regno).

* pm-cpufreq: (29 commits)
  cpufreq: use correct unit when verify cur freq
  cpufreq: tegra194: add OPP support and set bandwidth
  cpufreq: amd-pstate: Make varaiable mode_state_machine static
  cpufreq: drivers with target_index() must set freq_table
  cpufreq: qcom-cpufreq-hw: Revert adding cpufreq qos
  dt-bindings: cpufreq: cpufreq-qcom-hw: Add QCM2290
  dt-bindings: cpufreq: cpufreq-qcom-hw: Sanitize data per compatible
  dt-bindings: cpufreq: cpufreq-qcom-hw: Allow just 1 frequency domain
  cpufreq: Add SM7225 to cpufreq-dt-platdev blocklist
  cpufreq: qcom-cpufreq-hw: fix double IO unmap and resource release on exit
  cpufreq: mediatek: Raise proc and sram max voltage for MT7622/7623
  cpufreq: mediatek: raise proc/sram max voltage for MT8516
  cpufreq: mediatek: fix KP caused by handler usage after regulator_put/clk_put
  cpufreq: mediatek: fix passing zero to 'PTR_ERR'
  cpufreq: pmac32: Use of_property_read_bool() for boolean properties
  cpufreq: Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry
  cpufreq: warn about invalid vals to scaling_max/min_freq interfaces
  Documentation: cpufreq: amd-pstate: Update amd_pstate status sysfs for guided
  cpufreq: amd-pstate: Add guided mode control support via sysfs
  cpufreq: amd-pstate: Add guided autonomous mode
  ...

180 files changed:
Documentation/devicetree/bindings/interrupt-controller/loongarch,cpu-interrupt-controller.yaml [deleted file]
Documentation/devicetree/bindings/interrupt-controller/loongson,cpu-interrupt-controller.yaml [new file with mode: 0644]
Documentation/kbuild/llvm.rst
Documentation/networking/ip-sysctl.rst
Documentation/riscv/vm-layout.rst
Documentation/sound/hd-audio/models.rst
MAINTAINERS
Makefile
arch/arm64/kvm/arm.c
arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
arch/arm64/kvm/hyp/nvhe/sys_regs.c
arch/arm64/kvm/pmu-emul.c
arch/arm64/kvm/sys_regs.c
arch/arm64/net/bpf_jit.h
arch/arm64/net/bpf_jit_comp.c
arch/loongarch/net/bpf_jit.c
arch/powerpc/mm/numa.c
arch/powerpc/platforms/pseries/papr_scm.c
arch/riscv/include/asm/fixmap.h
arch/riscv/include/asm/pgtable.h
arch/riscv/kernel/setup.c
arch/riscv/kernel/signal.c
arch/riscv/mm/init.c
arch/riscv/purgatory/Makefile
arch/x86/Makefile.um
arch/x86/kernel/x86_init.c
arch/x86/pci/fixup.c
arch/x86/purgatory/Makefile
drivers/acpi/resource.c
drivers/acpi/x86/utils.c
drivers/block/virtio_blk.c
drivers/bluetooth/btbcm.c
drivers/bluetooth/btsdio.c
drivers/bus/imx-weim.c
drivers/clk/clk-renesas-pcie.c
drivers/clk/imx/clk-imx6ul.c
drivers/clk/sprd/common.c
drivers/devfreq/Kconfig
drivers/devfreq/event/exynos-ppmu.c
drivers/devfreq/exynos-bus.c
drivers/dma/apple-admac.c
drivers/dma/dmaengine.c
drivers/dma/xilinx/xdma.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
drivers/gpu/drm/armada/armada_drv.c
drivers/gpu/drm/i915/display/icl_dsi.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf108.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk110.c
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm107.c
drivers/gpu/drm/scheduler/sched_entity.c
drivers/hid/Kconfig
drivers/hid/hid-ids.h
drivers/hid/hid-input.c
drivers/hid/hid-sensor-custom.c
drivers/hid/hid-topre.c
drivers/hid/intel-ish-hid/ishtp/bus.c
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
drivers/i2c/busses/i2c-ocores.c
drivers/i2c/i2c-core-of.c
drivers/infiniband/core/cma.c
drivers/infiniband/core/verbs.c
drivers/infiniband/hw/erdma/erdma_cq.c
drivers/infiniband/hw/erdma/erdma_hw.h
drivers/infiniband/hw/erdma/erdma_main.c
drivers/infiniband/hw/erdma/erdma_qp.c
drivers/infiniband/hw/erdma/erdma_verbs.h
drivers/infiniband/hw/irdma/cm.c
drivers/infiniband/hw/irdma/cm.h
drivers/infiniband/hw/irdma/hw.c
drivers/infiniband/hw/irdma/utils.c
drivers/infiniband/hw/mlx5/main.c
drivers/mtd/mtdblock.c
drivers/mtd/nand/raw/meson_nand.c
drivers/mtd/nand/raw/stm32_fmc2_nand.c
drivers/mtd/ubi/build.c
drivers/mtd/ubi/wl.c
drivers/net/bonding/bond_main.c
drivers/net/ethernet/cadence/macb_main.c
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
drivers/net/ethernet/intel/iavf/iavf.h
drivers/net/ethernet/intel/iavf/iavf_main.c
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
drivers/net/ethernet/mellanox/mlx4/en_rx.c
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
drivers/net/ethernet/sun/niu.c
drivers/net/ethernet/ti/cpsw.c
drivers/net/ethernet/ti/cpsw_new.c
drivers/net/phy/nxp-c45-tja11xx.c
drivers/net/phy/sfp.c
drivers/net/usb/r8152.c
drivers/net/veth.c
drivers/net/wwan/iosm/iosm_ipc_pcie.c
drivers/nvme/host/pci.c
drivers/of/dynamic.c
drivers/of/platform.c
drivers/pci/remove.c
drivers/pinctrl/pinctrl-amd.c
drivers/scsi/ses.c
drivers/spi/spi.c
drivers/thermal/intel/therm_throt.c
drivers/vdpa/mlx5/net/mlx5_vnet.c
drivers/vdpa/vdpa_sim/vdpa_sim_net.c
drivers/vhost/scsi.c
drivers/video/fbdev/core/fbcon.c
drivers/video/fbdev/core/fbmem.c
fs/9p/xattr.c
fs/btrfs/disk-io.c
fs/btrfs/super.c
fs/cifs/smb2pdu.c
fs/ksmbd/smb2pdu.c
fs/netfs/iterator.c
include/linux/mlx5/device.h
include/linux/netdevice.h
include/linux/pci.h
include/linux/rtnetlink.h
include/net/bluetooth/hci_core.h
include/net/bonding.h
include/net/xdp.h
include/uapi/linux/virtio_blk.h
init/initramfs.c
io_uring/io_uring.c
kernel/cgroup/cpuset.c
kernel/cgroup/legacy_freezer.c
kernel/cgroup/rstat.c
kernel/rcu/tree.c
kernel/sched/fair.c
net/9p/trans_xen.c
net/bluetooth/hci_conn.c
net/bluetooth/hci_event.c
net/bluetooth/hci_sync.c
net/bluetooth/hidp/core.c
net/bluetooth/l2cap_core.c
net/bluetooth/sco.c
net/core/dev.c
net/core/rtnetlink.c
net/core/skbuff.c
net/core/xdp.c
net/ipv4/sysctl_net_ipv4.c
net/ipv4/tcp_ipv4.c
net/ipv6/udp.c
net/mptcp/fastopen.c
net/mptcp/options.c
net/mptcp/protocol.c
net/mptcp/subflow.c
net/openvswitch/actions.c
net/qrtr/af_qrtr.c
net/sctp/stream_interleave.c
net/smc/af_smc.c
scripts/Makefile.package
scripts/package/gen-diff-patch
scripts/package/mkdebian
scripts/package/mkspec
sound/firewire/tascam/tascam-stream.c
sound/i2c/cs8427.c
sound/pci/emu10k1/emupcm.c
sound/pci/hda/patch_hdmi.c
sound/pci/hda/patch_realtek.c
sound/pci/hda/patch_sigmatel.c
tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
tools/testing/selftests/bpf/progs/xdp_hw_metadata.c
tools/testing/selftests/bpf/progs/xdp_metadata.c
tools/testing/selftests/bpf/progs/xdp_metadata2.c
tools/testing/selftests/bpf/xdp_hw_metadata.c
tools/testing/selftests/bpf/xdp_metadata.h
tools/testing/selftests/drivers/net/bonding/Makefile
tools/testing/selftests/drivers/net/bonding/bond_options.sh [new file with mode: 0755]
tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh [new file with mode: 0644]
tools/testing/selftests/drivers/net/bonding/option_prio.sh [deleted file]
tools/testing/selftests/net/config
tools/testing/selftests/net/mptcp/userspace_pm.sh
tools/testing/selftests/net/openvswitch/ovs-dpctl.py
tools/virtio/virtio-trace/README
usr/gen_init_cpio.c

diff --git a/Documentation/devicetree/bindings/interrupt-controller/loongarch,cpu-interrupt-controller.yaml b/Documentation/devicetree/bindings/interrupt-controller/loongarch,cpu-interrupt-controller.yaml
deleted file mode 100644 (file)
index 2a1cf88..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
-%YAML 1.2
----
-$id: http://devicetree.org/schemas/interrupt-controller/loongarch,cpu-interrupt-controller.yaml#
-$schema: http://devicetree.org/meta-schemas/core.yaml#
-
-title: LoongArch CPU Interrupt Controller
-
-maintainers:
-  - Liu Peibao <liupeibao@loongson.cn>
-
-properties:
-  compatible:
-    const: loongarch,cpu-interrupt-controller
-
-  '#interrupt-cells':
-    const: 1
-
-  interrupt-controller: true
-
-additionalProperties: false
-
-required:
-  - compatible
-  - '#interrupt-cells'
-  - interrupt-controller
-
-examples:
-  - |
-    interrupt-controller {
-      compatible = "loongarch,cpu-interrupt-controller";
-      #interrupt-cells = <1>;
-      interrupt-controller;
-    };
diff --git a/Documentation/devicetree/bindings/interrupt-controller/loongson,cpu-interrupt-controller.yaml b/Documentation/devicetree/bindings/interrupt-controller/loongson,cpu-interrupt-controller.yaml
new file mode 100644 (file)
index 0000000..adf9899
--- /dev/null
@@ -0,0 +1,34 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interrupt-controller/loongson,cpu-interrupt-controller.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: LoongArch CPU Interrupt Controller
+
+maintainers:
+  - Liu Peibao <liupeibao@loongson.cn>
+
+properties:
+  compatible:
+    const: loongson,cpu-interrupt-controller
+
+  '#interrupt-cells':
+    const: 1
+
+  interrupt-controller: true
+
+additionalProperties: false
+
+required:
+  - compatible
+  - '#interrupt-cells'
+  - interrupt-controller
+
+examples:
+  - |
+    interrupt-controller {
+      compatible = "loongson,cpu-interrupt-controller";
+      #interrupt-cells = <1>;
+      interrupt-controller;
+    };
index bfb51685073cb6c5dd09d2617d15912b49573ce9..c3851fe1900da15e47883e1d7d5292075cfaee48 100644 (file)
@@ -171,6 +171,10 @@ Getting Help
 Getting LLVM
 -------------
 
+We provide prebuilt stable versions of LLVM on `kernel.org <https://kernel.org/pub/tools/llvm/>`_.
+Below are links that may be useful for building LLVM from source or procuring
+it through a distribution's package manager.
+
 - https://releases.llvm.org/download.html
 - https://github.com/llvm/llvm-project
 - https://llvm.org/docs/GettingStarted.html
index 87dd1c5283e61c03b9508a2d3e77bd82722ed176..58a78a3166978bd6957e043511e95e455e018f1d 100644 (file)
@@ -340,6 +340,8 @@ tcp_app_win - INTEGER
        Reserve max(window/2^tcp_app_win, mss) of window for application
        buffer. Value 0 is special, it means that nothing is reserved.
 
+       Possible values are [0, 31], inclusive.
+
        Default: 31
 
 tcp_autocorking - BOOLEAN
index 3be44e74ec5d6b81006e2d0e9d3011e0ce464d2d..5462c84f4723ff0836ba62f44775280f13e45e8f 100644 (file)
@@ -47,7 +47,7 @@ RISC-V Linux Kernel SV39
                                                               | Kernel-space virtual memory, shared between all processes:
   ____________________________________________________________|___________________________________________________________
                     |            |                  |         |
-   ffffffc6fee00000 | -228    GB | ffffffc6feffffff |    2 MB | fixmap
+   ffffffc6fea00000 | -228    GB | ffffffc6feffffff |    6 MB | fixmap
    ffffffc6ff000000 | -228    GB | ffffffc6ffffffff |   16 MB | PCI io
    ffffffc700000000 | -228    GB | ffffffc7ffffffff |    4 GB | vmemmap
    ffffffc800000000 | -224    GB | ffffffd7ffffffff |   64 GB | vmalloc/ioremap space
@@ -83,7 +83,7 @@ RISC-V Linux Kernel SV48
                                                               | Kernel-space virtual memory, shared between all processes:
   ____________________________________________________________|___________________________________________________________
                     |            |                  |         |
-   ffff8d7ffee00000 |  -114.5 TB | ffff8d7ffeffffff |    2 MB | fixmap
+   ffff8d7ffea00000 |  -114.5 TB | ffff8d7ffeffffff |    6 MB | fixmap
    ffff8d7fff000000 |  -114.5 TB | ffff8d7fffffffff |   16 MB | PCI io
    ffff8d8000000000 |  -114.5 TB | ffff8f7fffffffff |    2 TB | vmemmap
    ffff8f8000000000 |  -112.5 TB | ffffaf7fffffffff |   32 TB | vmalloc/ioremap space
@@ -119,7 +119,7 @@ RISC-V Linux Kernel SV57
                                                               | Kernel-space virtual memory, shared between all processes:
   ____________________________________________________________|___________________________________________________________
                     |            |                  |         |
-   ff1bfffffee00000 | -57     PB | ff1bfffffeffffff |    2 MB | fixmap
+   ff1bfffffea00000 | -57     PB | ff1bfffffeffffff |    6 MB | fixmap
    ff1bffffff000000 | -57     PB | ff1bffffffffffff |   16 MB | PCI io
    ff1c000000000000 | -57     PB | ff1fffffffffffff |    1 PB | vmemmap
    ff20000000000000 | -56     PB | ff5fffffffffffff |   16 PB | vmalloc/ioremap space
index 9b52f50a68542b932879f054bdd6b436ee7658a1..1204304500147637407240907078f17029999614 100644 (file)
@@ -704,7 +704,7 @@ ref
 no-jd
     BIOS setup but without jack-detection
 intel
-    Intel DG45* mobos
+    Intel D*45* mobos
 dell-m6-amic
     Dell desktops/laptops with analog mics
 dell-m6-dmic
index 90abe83c02f3bca36979d6c8d6f27fb5690af621..0e64787aace84000d521d749c44966c8126eb56d 100644 (file)
@@ -224,13 +224,13 @@ S:        Orphan / Obsolete
 F:     drivers/net/ethernet/8390/
 
 9P FILE SYSTEM
-M:     Eric Van Hensbergen <ericvh@gmail.com>
+M:     Eric Van Hensbergen <ericvh@kernel.org>
 M:     Latchesar Ionkov <lucho@ionkov.net>
 M:     Dominique Martinet <asmadeus@codewreck.org>
 R:     Christian Schoenebeck <linux_oss@crudebyte.com>
-L:     v9fs-developer@lists.sourceforge.net
+L:     v9fs@lists.linux.dev
 S:     Maintained
-W:     http://swik.net/v9fs
+W:     http://github.com/v9fs
 Q:     http://patchwork.kernel.org/project/v9fs-devel/list/
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
 T:     git git://github.com/martinetd/linux.git
@@ -4461,14 +4461,14 @@ F:      Documentation/devicetree/bindings/net/ieee802154/ca8210.txt
 F:     drivers/net/ieee802154/ca8210.c
 
 CANAAN/KENDRYTE K210 SOC FPIOA DRIVER
-M:     Damien Le Moal <damien.lemoal@wdc.com>
+M:     Damien Le Moal <dlemoal@kernel.org>
 L:     linux-riscv@lists.infradead.org
 L:     linux-gpio@vger.kernel.org (pinctrl driver)
 F:     Documentation/devicetree/bindings/pinctrl/canaan,k210-fpioa.yaml
 F:     drivers/pinctrl/pinctrl-k210.c
 
 CANAAN/KENDRYTE K210 SOC RESET CONTROLLER DRIVER
-M:     Damien Le Moal <damien.lemoal@wdc.com>
+M:     Damien Le Moal <dlemoal@kernel.org>
 L:     linux-kernel@vger.kernel.org
 L:     linux-riscv@lists.infradead.org
 S:     Maintained
@@ -4476,7 +4476,7 @@ F:        Documentation/devicetree/bindings/reset/canaan,k210-rst.yaml
 F:     drivers/reset/reset-k210.c
 
 CANAAN/KENDRYTE K210 SOC SYSTEM CONTROLLER DRIVER
-M:     Damien Le Moal <damien.lemoal@wdc.com>
+M:     Damien Le Moal <dlemoal@kernel.org>
 L:     linux-riscv@lists.infradead.org
 S:     Maintained
 F:      Documentation/devicetree/bindings/mfd/canaan,k210-sysctl.yaml
@@ -11758,7 +11758,7 @@ T:      git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
 F:     drivers/ata/sata_promise.*
 
 LIBATA SUBSYSTEM (Serial and Parallel ATA drivers)
-M:     Damien Le Moal <damien.lemoal@opensource.wdc.com>
+M:     Damien Le Moal <dlemoal@kernel.org>
 L:     linux-ide@vger.kernel.org
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata.git
@@ -23115,7 +23115,7 @@ S:      Maintained
 F:     arch/x86/kernel/cpu/zhaoxin.c
 
 ZONEFS FILESYSTEM
-M:     Damien Le Moal <damien.lemoal@opensource.wdc.com>
+M:     Damien Le Moal <dlemoal@kernel.org>
 M:     Naohiro Aota <naohiro.aota@wdc.com>
 R:     Johannes Thumshirn <jth@kernel.org>
 L:     linux-fsdevel@vger.kernel.org
index 5aeea3d98fc0c4d2293454d26d9cb87ea249d38b..b5c48e3c935aeea7e415b2c9c893f467090f19e0 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 3
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
index 3f6a5efdbcf0d69c8db41503d35283ca64c1e1f1..4b2e16e696a807cb6328892082ff71bcad90d1ca 100644 (file)
@@ -1890,9 +1890,33 @@ static int __init do_pkvm_init(u32 hyp_va_bits)
        return ret;
 }
 
+static u64 get_hyp_id_aa64pfr0_el1(void)
+{
+       /*
+        * Track whether the system isn't affected by spectre/meltdown in the
+        * hypervisor's view of id_aa64pfr0_el1, used for protected VMs.
+        * Although this is per-CPU, we make it global for simplicity, e.g., not
+        * to have to worry about vcpu migration.
+        *
+        * Unlike for non-protected VMs, userspace cannot override this for
+        * protected VMs.
+        */
+       u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+       val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
+                ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
+
+       val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
+                         arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED);
+       val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
+                         arm64_get_meltdown_state() == SPECTRE_UNAFFECTED);
+
+       return val;
+}
+
 static void kvm_hyp_init_symbols(void)
 {
-       kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+       kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = get_hyp_id_aa64pfr0_el1();
        kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
        kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1);
        kvm_nvhe_sym(id_aa64isar1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
index 07edfc7524c942eb2e199578b306bf92b869acbe..37440e1dda9306f7abde4cd24cc32c0d229b81ce 100644 (file)
  * Allow for protected VMs:
  * - Floating-point and Advanced SIMD
  * - Data Independent Timing
+ * - Spectre/Meltdown Mitigation
  */
 #define PVM_ID_AA64PFR0_ALLOW (\
        ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
        ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-       ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) \
+       ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
+       ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
+       ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) \
        )
 
 /*
index 08d2b004f4b73cd61bd80f5b10b6749fd1052459..edd969a1f36b54bfe14b8e9a02a4bc35939f0118 100644 (file)
@@ -85,19 +85,12 @@ static u64 get_restricted_features_unsigned(u64 sys_reg_val,
 
 static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
 {
-       const struct kvm *kvm = (const struct kvm *)kern_hyp_va(vcpu->kvm);
        u64 set_mask = 0;
        u64 allow_mask = PVM_ID_AA64PFR0_ALLOW;
 
        set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
                PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
 
-       /* Spectre and Meltdown mitigation in KVM */
-       set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
-                              (u64)kvm->arch.pfr0_csv2);
-       set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
-                              (u64)kvm->arch.pfr0_csv3);
-
        return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
 }
 
index c243b10f3e1507530b51128330267fe9591be880..5eca0cdd961df8410161e35a154b17a1583b7f9e 100644 (file)
@@ -558,6 +558,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
                for_each_set_bit(i, &mask, 32)
                        kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
        }
+       kvm_vcpu_pmu_restore_guest(vcpu);
 }
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
index 1b2c161120bead331d12e48413895b5b8a3a2bf7..34688918c81134b8df65f5612f46321d674b670f 100644 (file)
@@ -794,7 +794,6 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                if (!kvm_supports_32bit_el0())
                        val |= ARMV8_PMU_PMCR_LC;
                kvm_pmu_handle_pmcr(vcpu, val);
-               kvm_vcpu_pmu_restore_guest(vcpu);
        } else {
                /* PMCR.P & PMCR.C are RAZ */
                val = __vcpu_sys_reg(vcpu, PMCR_EL0)
index a6acb94ea3d63370bec139fa5ff757206e1b1d6e..c2edadb8ec6a30de7788a6b67fdce762711b9475 100644 (file)
 /* DMB */
 #define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)
 
+/* ADR */
+#define A64_ADR(Rd, offset) \
+       aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)
+
 #endif /* _BPF_JIT_H */
index 62f805f427b79fd1ce10abdee53f8b1283dc0c1e..b26da8efa616ec133b23b0d4cacd54192ea7c6af 100644 (file)
@@ -1900,7 +1900,8 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
                restore_args(ctx, args_off, nargs);
                /* call original func */
                emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx);
-               emit(A64_BLR(A64_R(10)), ctx);
+               emit(A64_ADR(A64_LR, AARCH64_INSN_SIZE * 2), ctx);
+               emit(A64_RET(A64_R(10)), ctx);
                /* store return value */
                emit(A64_STR64I(A64_R(0), A64_SP, retval_off), ctx);
                /* reserve a nop for bpf_tramp_image_put */
index 288003a9f0cae478a058102a6413e15a07585b29..d586df48ecc6432b94d034fcdc7641c9d1580794 100644 (file)
@@ -1022,6 +1022,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
                emit_atomic(insn, ctx);
                break;
 
+       /* Speculation barrier */
+       case BPF_ST | BPF_NOSPEC:
+               break;
+
        default:
                pr_err("bpf_jit: unknown opcode %02x\n", code);
                return -EINVAL;
index b44ce71917d75a9668d42e0fe6fe328e653a2790..16cfe56be05bb28723a065daf1dbf2de91d29aac 100644 (file)
@@ -366,6 +366,7 @@ void update_numa_distance(struct device_node *node)
        WARN(numa_distance_table[nid][nid] == -1,
             "NUMA distance details for node %d not provided\n", nid);
 }
+EXPORT_SYMBOL_GPL(update_numa_distance);
 
 /*
  * ibm,numa-lookup-index-table= {N, domainid1, domainid2, ..... domainidN}
index 2f8385523a1320047925af07a7219618a48c60d5..1a53e048ceb768f175237361018d6177092b4220 100644 (file)
@@ -1428,6 +1428,13 @@ static int papr_scm_probe(struct platform_device *pdev)
                return -ENODEV;
        }
 
+       /*
+        * open firmware platform device create won't update the NUMA 
+        * distance table. For PAPR SCM devices we use numa_map_to_online_node()
+        * to find the nearest online NUMA node and that requires correct
+        * distance table information.
+        */
+       update_numa_distance(dn);
 
        p = kzalloc(sizeof(*p), GFP_KERNEL);
        if (!p)
index 5c3e7b97fcc6f6356b5a11bb7476609d90dd713a..0a55099bb7349d0adbc9acbabe95441b4c2991e0 100644 (file)
  */
 enum fixed_addresses {
        FIX_HOLE,
+       /*
+        * The fdt fixmap mapping must be PMD aligned and will be mapped
+        * using PMD entries in fixmap_pmd in 64-bit and a PGD entry in 32-bit.
+        */
+       FIX_FDT_END,
+       FIX_FDT = FIX_FDT_END + FIX_FDT_SIZE / PAGE_SIZE - 1,
+
+       /* Below fixmaps will be mapped using fixmap_pte */
        FIX_PTE,
        FIX_PMD,
        FIX_PUD,
index ab05f892d317a82d808d3d8e058f441a44bc45a2..f641837ccf31d62d104d6798a63b3b1e3e8d4383 100644 (file)
 
 #define FIXADDR_TOP      PCI_IO_START
 #ifdef CONFIG_64BIT
-#define FIXADDR_SIZE     PMD_SIZE
+#define MAX_FDT_SIZE    PMD_SIZE
+#define FIX_FDT_SIZE    (MAX_FDT_SIZE + SZ_2M)
+#define FIXADDR_SIZE     (PMD_SIZE + FIX_FDT_SIZE)
 #else
-#define FIXADDR_SIZE     PGDIR_SIZE
+#define MAX_FDT_SIZE    PGDIR_SIZE
+#define FIX_FDT_SIZE    MAX_FDT_SIZE
+#define FIXADDR_SIZE     (PGDIR_SIZE + FIX_FDT_SIZE)
 #endif
 #define FIXADDR_START    (FIXADDR_TOP - FIXADDR_SIZE)
 
index 376d2827e7365af086c0c2b7d40f9ec2507d999b..a059b73f4ddb263744e47774d0549efe24e5933c 100644 (file)
@@ -278,12 +278,8 @@ void __init setup_arch(char **cmdline_p)
 #if IS_ENABLED(CONFIG_BUILTIN_DTB)
        unflatten_and_copy_device_tree();
 #else
-       if (early_init_dt_verify(__va(XIP_FIXUP(dtb_early_pa))))
-               unflatten_device_tree();
-       else
-               pr_err("No DTB found in kernel mappings\n");
+       unflatten_device_tree();
 #endif
-       early_init_fdt_scan_reserved_mem();
        misc_mem_init();
 
        init_resources();
index bfb2afa4135f89690b90da20d281252a539d8cda..dee66c9290ccee95e20f93517e47d16a133b02bb 100644 (file)
@@ -19,6 +19,7 @@
 #include <asm/signal32.h>
 #include <asm/switch_to.h>
 #include <asm/csr.h>
+#include <asm/cacheflush.h>
 
 extern u32 __user_rt_sigreturn[2];
 
@@ -181,6 +182,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
 {
        struct rt_sigframe __user *frame;
        long err = 0;
+       unsigned long __maybe_unused addr;
 
        frame = get_sigframe(ksig, regs, sizeof(*frame));
        if (!access_ok(frame, sizeof(*frame)))
@@ -209,7 +211,12 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
        if (copy_to_user(&frame->sigreturn_code, __user_rt_sigreturn,
                         sizeof(frame->sigreturn_code)))
                return -EFAULT;
-       regs->ra = (unsigned long)&frame->sigreturn_code;
+
+       addr = (unsigned long)&frame->sigreturn_code;
+       /* Make sure the two instructions are pushed to icache. */
+       flush_icache_range(addr, addr + sizeof(frame->sigreturn_code));
+
+       regs->ra = addr;
 #endif /* CONFIG_MMU */
 
        /*
index 478d6763a01a1ebde626635a2f7703f4415649a7..0f14f4a8d179a64f9b75541cefe3409d0cb22a12 100644 (file)
@@ -57,7 +57,6 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 EXPORT_SYMBOL(empty_zero_page);
 
 extern char _start[];
-#define DTB_EARLY_BASE_VA      PGDIR_SIZE
 void *_dtb_early_va __initdata;
 uintptr_t _dtb_early_pa __initdata;
 
@@ -236,31 +235,22 @@ static void __init setup_bootmem(void)
        set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
 
        reserve_initrd_mem();
+
+       /*
+        * No allocation should be done before reserving the memory as defined
+        * in the device tree, otherwise the allocation could end up in a
+        * reserved region.
+        */
+       early_init_fdt_scan_reserved_mem();
+
        /*
         * If DTB is built in, no need to reserve its memblock.
         * Otherwise, do reserve it but avoid using
         * early_init_fdt_reserve_self() since __pa() does
         * not work for DTB pointers that are fixmap addresses
         */
-       if (!IS_ENABLED(CONFIG_BUILTIN_DTB)) {
-               /*
-                * In case the DTB is not located in a memory region we won't
-                * be able to locate it later on via the linear mapping and
-                * get a segfault when accessing it via __va(dtb_early_pa).
-                * To avoid this situation copy DTB to a memory region.
-                * Note that memblock_phys_alloc will also reserve DTB region.
-                */
-               if (!memblock_is_memory(dtb_early_pa)) {
-                       size_t fdt_size = fdt_totalsize(dtb_early_va);
-                       phys_addr_t new_dtb_early_pa = memblock_phys_alloc(fdt_size, PAGE_SIZE);
-                       void *new_dtb_early_va = early_memremap(new_dtb_early_pa, fdt_size);
-
-                       memcpy(new_dtb_early_va, dtb_early_va, fdt_size);
-                       early_memunmap(new_dtb_early_va, fdt_size);
-                       _dtb_early_pa = new_dtb_early_pa;
-               } else
-                       memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
-       }
+       if (!IS_ENABLED(CONFIG_BUILTIN_DTB))
+               memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
 
        dma_contiguous_reserve(dma32_phys_limit);
        if (IS_ENABLED(CONFIG_64BIT))
@@ -279,9 +269,6 @@ pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
 static pte_t fixmap_pte[PTRS_PER_PTE] __page_aligned_bss;
 
 pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
-static p4d_t __maybe_unused early_dtb_p4d[PTRS_PER_P4D] __initdata __aligned(PAGE_SIZE);
-static pud_t __maybe_unused early_dtb_pud[PTRS_PER_PUD] __initdata __aligned(PAGE_SIZE);
-static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);
 
 #ifdef CONFIG_XIP_KERNEL
 #define pt_ops                 (*(struct pt_alloc_ops *)XIP_FIXUP(&pt_ops))
@@ -626,9 +613,6 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
 #define trampoline_pgd_next    (pgtable_l5_enabled ?                   \
                (uintptr_t)trampoline_p4d : (pgtable_l4_enabled ?       \
                (uintptr_t)trampoline_pud : (uintptr_t)trampoline_pmd))
-#define early_dtb_pgd_next     (pgtable_l5_enabled ?                   \
-               (uintptr_t)early_dtb_p4d : (pgtable_l4_enabled ?        \
-               (uintptr_t)early_dtb_pud : (uintptr_t)early_dtb_pmd))
 #else
 #define pgd_next_t             pte_t
 #define alloc_pgd_next(__va)   pt_ops.alloc_pte(__va)
@@ -636,7 +620,6 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
 #define create_pgd_next_mapping(__nextp, __va, __pa, __sz, __prot)     \
        create_pte_mapping(__nextp, __va, __pa, __sz, __prot)
 #define fixmap_pgd_next                ((uintptr_t)fixmap_pte)
-#define early_dtb_pgd_next     ((uintptr_t)early_dtb_pmd)
 #define create_p4d_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
 #define create_pud_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
 #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
@@ -860,32 +843,28 @@ static void __init create_kernel_page_table(pgd_t *pgdir, bool early)
  * this means 2 PMD entries whereas for 32-bit kernel, this is only 1 PGDIR
  * entry.
  */
-static void __init create_fdt_early_page_table(pgd_t *pgdir, uintptr_t dtb_pa)
+static void __init create_fdt_early_page_table(pgd_t *pgdir,
+                                              uintptr_t fix_fdt_va,
+                                              uintptr_t dtb_pa)
 {
-#ifndef CONFIG_BUILTIN_DTB
        uintptr_t pa = dtb_pa & ~(PMD_SIZE - 1);
 
-       create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA,
-                          IS_ENABLED(CONFIG_64BIT) ? early_dtb_pgd_next : pa,
-                          PGDIR_SIZE,
-                          IS_ENABLED(CONFIG_64BIT) ? PAGE_TABLE : PAGE_KERNEL);
-
-       if (pgtable_l5_enabled)
-               create_p4d_mapping(early_dtb_p4d, DTB_EARLY_BASE_VA,
-                                  (uintptr_t)early_dtb_pud, P4D_SIZE, PAGE_TABLE);
-
-       if (pgtable_l4_enabled)
-               create_pud_mapping(early_dtb_pud, DTB_EARLY_BASE_VA,
-                                  (uintptr_t)early_dtb_pmd, PUD_SIZE, PAGE_TABLE);
+#ifndef CONFIG_BUILTIN_DTB
+       /* Make sure the fdt fixmap address is always aligned on PMD size */
+       BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE));
 
-       if (IS_ENABLED(CONFIG_64BIT)) {
-               create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA,
+       /* In 32-bit only, the fdt lies in its own PGD */
+       if (!IS_ENABLED(CONFIG_64BIT)) {
+               create_pgd_mapping(early_pg_dir, fix_fdt_va,
+                                  pa, MAX_FDT_SIZE, PAGE_KERNEL);
+       } else {
+               create_pmd_mapping(fixmap_pmd, fix_fdt_va,
                                   pa, PMD_SIZE, PAGE_KERNEL);
-               create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA + PMD_SIZE,
+               create_pmd_mapping(fixmap_pmd, fix_fdt_va + PMD_SIZE,
                                   pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL);
        }
 
-       dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1));
+       dtb_early_va = (void *)fix_fdt_va + (dtb_pa & (PMD_SIZE - 1));
 #else
        /*
         * For 64-bit kernel, __va can't be used since it would return a linear
@@ -1055,7 +1034,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
        create_kernel_page_table(early_pg_dir, true);
 
        /* Setup early mapping for FDT early scan */
-       create_fdt_early_page_table(early_pg_dir, dtb_pa);
+       create_fdt_early_page_table(early_pg_dir,
+                                   __fix_to_virt(FIX_FDT), dtb_pa);
 
        /*
        * Boottime fixmap can only handle PMD_SIZE mappings. Thus, boot-ioremap
@@ -1097,6 +1077,16 @@ static void __init setup_vm_final(void)
        u64 i;
 
        /* Setup swapper PGD for fixmap */
+#if !defined(CONFIG_64BIT)
+       /*
+        * In 32-bit, the device tree lies in a pgd entry, so it must be copied
+        * directly in swapper_pg_dir in addition to the pgd entry that points
+        * to fixmap_pte.
+        */
+       unsigned long idx = pgd_index(__fix_to_virt(FIX_FDT));
+
+       set_pgd(&swapper_pg_dir[idx], early_pg_dir[idx]);
+#endif
        create_pgd_mapping(swapper_pg_dir, FIXADDR_START,
                           __pa_symbol(fixmap_pgd_next),
                           PGDIR_SIZE, PAGE_TABLE);
index d16bf715a586bb5595e5017da7c99870b096d36e..5730797a6b402c39c96cd0360486b3548631b389 100644
@@ -84,12 +84,7 @@ CFLAGS_string.o                      += $(PURGATORY_CFLAGS)
 CFLAGS_REMOVE_ctype.o          += $(PURGATORY_CFLAGS_REMOVE)
 CFLAGS_ctype.o                 += $(PURGATORY_CFLAGS)
 
-AFLAGS_REMOVE_entry.o          += -Wa,-gdwarf-2
-AFLAGS_REMOVE_memcpy.o         += -Wa,-gdwarf-2
-AFLAGS_REMOVE_memset.o         += -Wa,-gdwarf-2
-AFLAGS_REMOVE_strcmp.o         += -Wa,-gdwarf-2
-AFLAGS_REMOVE_strlen.o         += -Wa,-gdwarf-2
-AFLAGS_REMOVE_strncmp.o                += -Wa,-gdwarf-2
+asflags-remove-y               += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x))
 
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
                $(call if_changed,ld)
index b70559b821df80a1737d913c04a434d1b6511f79..2106a2bd152bfaf1edcc7d58ebfcf5548e76d14e 100644
@@ -3,9 +3,14 @@ core-y += arch/x86/crypto/
 
 #
 # Disable SSE and other FP/SIMD instructions to match normal x86
+# This is required to work around issues in older LLVM versions, but breaks
+# GCC versions < 11. See:
+# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
 #
+ifeq ($(CONFIG_CC_IS_CLANG),y)
 KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
 KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
+endif
 
 ifeq ($(CONFIG_X86_32),y)
 START := 0x8048000
index ef80d361b4632ec64bb8aacd0f0bf6e88acb1021..10622cf2b30f4335f8b8a8bb0e9a8056065b823d 100644
@@ -33,8 +33,8 @@ static int __init iommu_init_noop(void) { return 0; }
 static void iommu_shutdown_noop(void) { }
 bool __init bool_x86_init_noop(void) { return false; }
 void x86_op_int_noop(int cpu) { }
-static __init int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; }
-static __init void get_rtc_noop(struct timespec64 *now) { }
+static int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; }
+static void get_rtc_noop(struct timespec64 *now) { }
 
 static __initconst const struct of_device_id of_cmos_match[] = {
        { .compatible = "motorola,mc146818" },
index 615a76d70019470b286d90fa91079836d46ea0ce..bf5161dcf89e7ebf9c454456252a856c8aa9bfab 100644
@@ -7,6 +7,7 @@
 #include <linux/dmi.h>
 #include <linux/pci.h>
 #include <linux/vgaarb.h>
+#include <asm/amd_nb.h>
 #include <asm/hpet.h>
 #include <asm/pci_x86.h>
 
@@ -824,3 +825,23 @@ static void rs690_fix_64bit_dma(struct pci_dev *pdev)
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7910, rs690_fix_64bit_dma);
 
 #endif
+
+#ifdef CONFIG_AMD_NB
+
+#define AMD_15B8_RCC_DEV2_EPF0_STRAP2                                  0x10136008
+#define AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK       0x00000080L
+
+static void quirk_clear_strap_no_soft_reset_dev2_f0(struct pci_dev *dev)
+{
+       u32 data;
+
+       if (!amd_smn_read(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, &data)) {
+               data &= ~AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK;
+               if (amd_smn_write(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, data))
+                       pci_err(dev, "Failed to write data 0x%x\n", data);
+       } else {
+               pci_err(dev, "Failed to read data\n");
+       }
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15b8, quirk_clear_strap_no_soft_reset_dev2_f0);
+#endif
index 17f09dc263811ab18f949af7303189263763a0e5..82fec66d46d29ee7e41e392fd5f22c524ad4f727 100644
@@ -69,8 +69,7 @@ CFLAGS_sha256.o                       += $(PURGATORY_CFLAGS)
 CFLAGS_REMOVE_string.o         += $(PURGATORY_CFLAGS_REMOVE)
 CFLAGS_string.o                        += $(PURGATORY_CFLAGS)
 
-AFLAGS_REMOVE_setup-x86_$(BITS).o      += -Wa,-gdwarf-2
-AFLAGS_REMOVE_entry64.o                        += -Wa,-gdwarf-2
+asflags-remove-y               += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x))
 
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
                $(call if_changed,ld)
index 7b4801ce62d6bff936128c4581349e7342669c07..e8492b3a393ab6932c1f6c791bee1c442688fb64 100644
@@ -439,6 +439,13 @@ static const struct dmi_system_id asus_laptop[] = {
                        DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
                },
        },
+       {
+               .ident = "Asus ExpertBook B1502CBA",
+               .matches = {
+                       DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+                       DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"),
+               },
+       },
        {
                .ident = "Asus ExpertBook B2402CBA",
                .matches = {
index da5727069d851e1a7f0f799d536759d5ab8a473d..ba420a28a4aadce7c087183d2cb4deb178e3bd08 100644
@@ -213,6 +213,7 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
       disk in the system.
  */
 static const struct x86_cpu_id storage_d3_cpu_ids[] = {
+       X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 24, NULL),  /* Picasso */
        X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL),  /* Renoir */
        X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */
        X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL),  /* Cezanne */
index 2723eede6f217ea1f8fdf8e46a7965d12fe22ba9..2b918e28acaacd973221e56344977053072f7f7d 100644
@@ -96,16 +96,14 @@ struct virtblk_req {
 
                /*
                 * The zone append command has an extended in header.
-                * The status field in zone_append_in_hdr must have
-                * the same offset in virtblk_req as the non-zoned
-                * status field above.
+                * The status field in zone_append_in_hdr must always
+                * be the last byte.
                 */
                struct {
+                       __virtio64 sector;
                        u8 status;
-                       u8 reserved[7];
-                       __le64 append_sector;
-               } zone_append_in_hdr;
-       };
+               } zone_append;
+       } in_hdr;
 
        size_t in_hdr_len;
 
@@ -154,7 +152,7 @@ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr)
                        sgs[num_out + num_in++] = vbr->sg_table.sgl;
        }
 
-       sg_init_one(&in_hdr, &vbr->status, vbr->in_hdr_len);
+       sg_init_one(&in_hdr, &vbr->in_hdr.status, vbr->in_hdr_len);
        sgs[num_out + num_in++] = &in_hdr;
 
        return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
@@ -242,11 +240,14 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
                                      struct request *req,
                                      struct virtblk_req *vbr)
 {
-       size_t in_hdr_len = sizeof(vbr->status);
+       size_t in_hdr_len = sizeof(vbr->in_hdr.status);
        bool unmap = false;
        u32 type;
        u64 sector = 0;
 
+       if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) && op_is_zone_mgmt(req_op(req)))
+               return BLK_STS_NOTSUPP;
+
        /* Set fields for all request types */
        vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
 
@@ -287,7 +288,7 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
        case REQ_OP_ZONE_APPEND:
                type = VIRTIO_BLK_T_ZONE_APPEND;
                sector = blk_rq_pos(req);
-               in_hdr_len = sizeof(vbr->zone_append_in_hdr);
+               in_hdr_len = sizeof(vbr->in_hdr.zone_append);
                break;
        case REQ_OP_ZONE_RESET:
                type = VIRTIO_BLK_T_ZONE_RESET;
@@ -297,7 +298,10 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
                type = VIRTIO_BLK_T_ZONE_RESET_ALL;
                break;
        case REQ_OP_DRV_IN:
-               /* Out header already filled in, nothing to do */
+               /*
+                * Out header has already been prepared by the caller (virtblk_get_id()
+                * or virtblk_submit_zone_report()), nothing to do here.
+                */
                return 0;
        default:
                WARN_ON_ONCE(1);
@@ -318,16 +322,28 @@ static blk_status_t virtblk_setup_cmd(struct virtio_device *vdev,
        return 0;
 }
 
+/*
+ * The status byte is always the last byte of the virtblk request
+ * in-header. This helper fetches its value for all in-header formats
+ * that are currently defined.
+ */
+static inline u8 virtblk_vbr_status(struct virtblk_req *vbr)
+{
+       return *((u8 *)&vbr->in_hdr + vbr->in_hdr_len - 1);
+}
+
 static inline void virtblk_request_done(struct request *req)
 {
        struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
-       blk_status_t status = virtblk_result(vbr->status);
+       blk_status_t status = virtblk_result(virtblk_vbr_status(vbr));
+       struct virtio_blk *vblk = req->mq_hctx->queue->queuedata;
 
        virtblk_unmap_data(req, vbr);
        virtblk_cleanup_cmd(req);
 
        if (req_op(req) == REQ_OP_ZONE_APPEND)
-               req->__sector = le64_to_cpu(vbr->zone_append_in_hdr.append_sector);
+               req->__sector = virtio64_to_cpu(vblk->vdev,
+                                               vbr->in_hdr.zone_append.sector);
 
        blk_mq_end_request(req, status);
 }
@@ -355,7 +371,7 @@ static int virtblk_handle_req(struct virtio_blk_vq *vq,
 
                if (likely(!blk_should_fake_timeout(req->q)) &&
                    !blk_mq_complete_request_remote(req) &&
-                   !blk_mq_add_to_batch(req, iob, vbr->status,
+                   !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
                                         virtblk_complete_batch))
                        virtblk_request_done(req);
                req_done++;
@@ -550,7 +566,6 @@ static void virtio_queue_rqs(struct request **rqlist)
 #ifdef CONFIG_BLK_DEV_ZONED
 static void *virtblk_alloc_report_buffer(struct virtio_blk *vblk,
                                          unsigned int nr_zones,
-                                         unsigned int zone_sectors,
                                          size_t *buflen)
 {
        struct request_queue *q = vblk->disk->queue;
@@ -558,7 +573,7 @@ static void *virtblk_alloc_report_buffer(struct virtio_blk *vblk,
        void *buf;
 
        nr_zones = min_t(unsigned int, nr_zones,
-                        get_capacity(vblk->disk) >> ilog2(zone_sectors));
+                        get_capacity(vblk->disk) >> ilog2(vblk->zone_sectors));
 
        bufsize = sizeof(struct virtio_blk_zone_report) +
                nr_zones * sizeof(struct virtio_blk_zone_descriptor);
@@ -592,7 +607,7 @@ static int virtblk_submit_zone_report(struct virtio_blk *vblk,
                return PTR_ERR(req);
 
        vbr = blk_mq_rq_to_pdu(req);
-       vbr->in_hdr_len = sizeof(vbr->status);
+       vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
        vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_ZONE_REPORT);
        vbr->out_hdr.sector = cpu_to_virtio64(vblk->vdev, sector);
 
@@ -601,7 +616,7 @@ static int virtblk_submit_zone_report(struct virtio_blk *vblk,
                goto out;
 
        blk_execute_rq(req, false);
-       err = blk_status_to_errno(virtblk_result(vbr->status));
+       err = blk_status_to_errno(virtblk_result(vbr->in_hdr.status));
 out:
        blk_mq_free_request(req);
        return err;
@@ -609,29 +624,72 @@ out:
 
 static int virtblk_parse_zone(struct virtio_blk *vblk,
                               struct virtio_blk_zone_descriptor *entry,
-                              unsigned int idx, unsigned int zone_sectors,
-                              report_zones_cb cb, void *data)
+                              unsigned int idx, report_zones_cb cb, void *data)
 {
        struct blk_zone zone = { };
 
-       if (entry->z_type != VIRTIO_BLK_ZT_SWR &&
-           entry->z_type != VIRTIO_BLK_ZT_SWP &&
-           entry->z_type != VIRTIO_BLK_ZT_CONV) {
-               dev_err(&vblk->vdev->dev, "invalid zone type %#x\n",
-                       entry->z_type);
-               return -EINVAL;
+       zone.start = virtio64_to_cpu(vblk->vdev, entry->z_start);
+       if (zone.start + vblk->zone_sectors <= get_capacity(vblk->disk))
+               zone.len = vblk->zone_sectors;
+       else
+               zone.len = get_capacity(vblk->disk) - zone.start;
+       zone.capacity = virtio64_to_cpu(vblk->vdev, entry->z_cap);
+       zone.wp = virtio64_to_cpu(vblk->vdev, entry->z_wp);
+
+       switch (entry->z_type) {
+       case VIRTIO_BLK_ZT_SWR:
+               zone.type = BLK_ZONE_TYPE_SEQWRITE_REQ;
+               break;
+       case VIRTIO_BLK_ZT_SWP:
+               zone.type = BLK_ZONE_TYPE_SEQWRITE_PREF;
+               break;
+       case VIRTIO_BLK_ZT_CONV:
+               zone.type = BLK_ZONE_TYPE_CONVENTIONAL;
+               break;
+       default:
+               dev_err(&vblk->vdev->dev, "zone %llu: invalid type %#x\n",
+                       zone.start, entry->z_type);
+               return -EIO;
        }
 
-       zone.type = entry->z_type;
-       zone.cond = entry->z_state;
-       zone.len = zone_sectors;
-       zone.capacity = le64_to_cpu(entry->z_cap);
-       zone.start = le64_to_cpu(entry->z_start);
-       if (zone.cond == BLK_ZONE_COND_FULL)
+       switch (entry->z_state) {
+       case VIRTIO_BLK_ZS_EMPTY:
+               zone.cond = BLK_ZONE_COND_EMPTY;
+               break;
+       case VIRTIO_BLK_ZS_CLOSED:
+               zone.cond = BLK_ZONE_COND_CLOSED;
+               break;
+       case VIRTIO_BLK_ZS_FULL:
+               zone.cond = BLK_ZONE_COND_FULL;
                zone.wp = zone.start + zone.len;
-       else
-               zone.wp = le64_to_cpu(entry->z_wp);
+               break;
+       case VIRTIO_BLK_ZS_EOPEN:
+               zone.cond = BLK_ZONE_COND_EXP_OPEN;
+               break;
+       case VIRTIO_BLK_ZS_IOPEN:
+               zone.cond = BLK_ZONE_COND_IMP_OPEN;
+               break;
+       case VIRTIO_BLK_ZS_NOT_WP:
+               zone.cond = BLK_ZONE_COND_NOT_WP;
+               break;
+       case VIRTIO_BLK_ZS_RDONLY:
+               zone.cond = BLK_ZONE_COND_READONLY;
+               zone.wp = ULONG_MAX;
+               break;
+       case VIRTIO_BLK_ZS_OFFLINE:
+               zone.cond = BLK_ZONE_COND_OFFLINE;
+               zone.wp = ULONG_MAX;
+               break;
+       default:
+               dev_err(&vblk->vdev->dev, "zone %llu: invalid condition %#x\n",
+                       zone.start, entry->z_state);
+               return -EIO;
+       }
 
+       /*
+        * The callback below checks the validity of the reported
+        * entry data, no need to further validate it here.
+        */
        return cb(&zone, idx, data);
 }
 
@@ -641,39 +699,47 @@ static int virtblk_report_zones(struct gendisk *disk, sector_t sector,
 {
        struct virtio_blk *vblk = disk->private_data;
        struct virtio_blk_zone_report *report;
-       unsigned int zone_sectors = vblk->zone_sectors;
-       unsigned int nz, i;
-       int ret, zone_idx = 0;
+       unsigned long long nz, i;
        size_t buflen;
+       unsigned int zone_idx = 0;
+       int ret;
 
        if (WARN_ON_ONCE(!vblk->zone_sectors))
                return -EOPNOTSUPP;
 
-       report = virtblk_alloc_report_buffer(vblk, nr_zones,
-                                            zone_sectors, &buflen);
+       report = virtblk_alloc_report_buffer(vblk, nr_zones, &buflen);
        if (!report)
                return -ENOMEM;
 
+       mutex_lock(&vblk->vdev_mutex);
+
+       if (!vblk->vdev) {
+               ret = -ENXIO;
+               goto fail_report;
+       }
+
        while (zone_idx < nr_zones && sector < get_capacity(vblk->disk)) {
                memset(report, 0, buflen);
 
                ret = virtblk_submit_zone_report(vblk, (char *)report,
                                                 buflen, sector);
-               if (ret) {
-                       if (ret > 0)
-                               ret = -EIO;
-                       goto out_free;
-               }
-               nz = min((unsigned int)le64_to_cpu(report->nr_zones), nr_zones);
+               if (ret)
+                       goto fail_report;
+
+               nz = min_t(u64, virtio64_to_cpu(vblk->vdev, report->nr_zones),
+                          nr_zones);
                if (!nz)
                        break;
 
                for (i = 0; i < nz && zone_idx < nr_zones; i++) {
                        ret = virtblk_parse_zone(vblk, &report->zones[i],
-                                                zone_idx, zone_sectors, cb, data);
+                                                zone_idx, cb, data);
                        if (ret)
-                               goto out_free;
-                       sector = le64_to_cpu(report->zones[i].z_start) + zone_sectors;
+                               goto fail_report;
+
+                       sector = virtio64_to_cpu(vblk->vdev,
+                                                report->zones[i].z_start) +
+                                vblk->zone_sectors;
                        zone_idx++;
                }
        }
@@ -682,7 +748,8 @@ static int virtblk_report_zones(struct gendisk *disk, sector_t sector,
                ret = zone_idx;
        else
                ret = -EINVAL;
-out_free:
+fail_report:
+       mutex_unlock(&vblk->vdev_mutex);
        kvfree(report);
        return ret;
 }
@@ -691,20 +758,28 @@ static void virtblk_revalidate_zones(struct virtio_blk *vblk)
 {
        u8 model;
 
-       if (!vblk->zone_sectors)
-               return;
-
        virtio_cread(vblk->vdev, struct virtio_blk_config,
                     zoned.model, &model);
-       if (!blk_revalidate_disk_zones(vblk->disk, NULL))
-               set_capacity_and_notify(vblk->disk, 0);
+       switch (model) {
+       default:
+               dev_err(&vblk->vdev->dev, "unknown zone model %d\n", model);
+               fallthrough;
+       case VIRTIO_BLK_Z_NONE:
+       case VIRTIO_BLK_Z_HA:
+               disk_set_zoned(vblk->disk, BLK_ZONED_NONE);
+               return;
+       case VIRTIO_BLK_Z_HM:
+               WARN_ON_ONCE(!vblk->zone_sectors);
+               if (!blk_revalidate_disk_zones(vblk->disk, NULL))
+                       set_capacity_and_notify(vblk->disk, 0);
+       }
 }
 
 static int virtblk_probe_zoned_device(struct virtio_device *vdev,
                                       struct virtio_blk *vblk,
                                       struct request_queue *q)
 {
-       u32 v;
+       u32 v, wg;
        u8 model;
        int ret;
 
@@ -713,16 +788,11 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 
        switch (model) {
        case VIRTIO_BLK_Z_NONE:
+       case VIRTIO_BLK_Z_HA:
+               /* Present the host-aware device as non-zoned */
                return 0;
        case VIRTIO_BLK_Z_HM:
                break;
-       case VIRTIO_BLK_Z_HA:
-               /*
-                * Present the host-aware device as a regular drive.
-                * TODO It is possible to add an option to make it appear
-                * in the system as a zoned drive.
-                */
-               return 0;
        default:
                dev_err(&vdev->dev, "unsupported zone model %d\n", model);
                return -EINVAL;
@@ -735,32 +805,31 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 
        virtio_cread(vdev, struct virtio_blk_config,
                     zoned.max_open_zones, &v);
-       disk_set_max_open_zones(vblk->disk, le32_to_cpu(v));
-
-       dev_dbg(&vdev->dev, "max open zones = %u\n", le32_to_cpu(v));
+       disk_set_max_open_zones(vblk->disk, v);
+       dev_dbg(&vdev->dev, "max open zones = %u\n", v);
 
        virtio_cread(vdev, struct virtio_blk_config,
                     zoned.max_active_zones, &v);
-       disk_set_max_active_zones(vblk->disk, le32_to_cpu(v));
-       dev_dbg(&vdev->dev, "max active zones = %u\n", le32_to_cpu(v));
+       disk_set_max_active_zones(vblk->disk, v);
+       dev_dbg(&vdev->dev, "max active zones = %u\n", v);
 
        virtio_cread(vdev, struct virtio_blk_config,
-                    zoned.write_granularity, &v);
-       if (!v) {
+                    zoned.write_granularity, &wg);
+       if (!wg) {
                dev_warn(&vdev->dev, "zero write granularity reported\n");
                return -ENODEV;
        }
-       blk_queue_physical_block_size(q, le32_to_cpu(v));
-       blk_queue_io_min(q, le32_to_cpu(v));
+       blk_queue_physical_block_size(q, wg);
+       blk_queue_io_min(q, wg);
 
-       dev_dbg(&vdev->dev, "write granularity = %u\n", le32_to_cpu(v));
+       dev_dbg(&vdev->dev, "write granularity = %u\n", wg);
 
        /*
         * virtio ZBD specification doesn't require zones to be a power of
         * two sectors in size, but the code in this driver expects that.
         */
-       virtio_cread(vdev, struct virtio_blk_config, zoned.zone_sectors, &v);
-       vblk->zone_sectors = le32_to_cpu(v);
+       virtio_cread(vdev, struct virtio_blk_config, zoned.zone_sectors,
+                    &vblk->zone_sectors);
        if (vblk->zone_sectors == 0 || !is_power_of_2(vblk->zone_sectors)) {
                dev_err(&vdev->dev,
                        "zoned device with non power of two zone size %u\n",
@@ -783,36 +852,46 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
                        dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
                        return -ENODEV;
                }
-               blk_queue_max_zone_append_sectors(q, le32_to_cpu(v));
-               dev_dbg(&vdev->dev, "max append sectors = %u\n", le32_to_cpu(v));
+               if ((v << SECTOR_SHIFT) < wg) {
+                       dev_err(&vdev->dev,
+                               "write granularity %u exceeds max_append_sectors %u limit\n",
+                               wg, v);
+                       return -ENODEV;
+               }
+
+               blk_queue_max_zone_append_sectors(q, v);
+               dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
        }
 
        return ret;
 }
 
-static inline bool virtblk_has_zoned_feature(struct virtio_device *vdev)
-{
-       return virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED);
-}
 #else
 
 /*
  * Zoned block device support is not configured in this kernel.
- * We only need to define a few symbols to avoid compilation errors.
+ * Host-managed zoned devices can't be supported, but others are
+ * good to go as regular block devices.
  */
 #define virtblk_report_zones       NULL
+
 static inline void virtblk_revalidate_zones(struct virtio_blk *vblk)
 {
 }
+
 static inline int virtblk_probe_zoned_device(struct virtio_device *vdev,
                        struct virtio_blk *vblk, struct request_queue *q)
 {
-       return -EOPNOTSUPP;
-}
+       u8 model;
 
-static inline bool virtblk_has_zoned_feature(struct virtio_device *vdev)
-{
-       return false;
+       virtio_cread(vdev, struct virtio_blk_config, zoned.model, &model);
+       if (model == VIRTIO_BLK_Z_HM) {
+               dev_err(&vdev->dev,
+                       "virtio_blk: zoned devices are not supported");
+               return -EOPNOTSUPP;
+       }
+
+       return 0;
 }
 #endif /* CONFIG_BLK_DEV_ZONED */
 
@@ -831,7 +910,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
                return PTR_ERR(req);
 
        vbr = blk_mq_rq_to_pdu(req);
-       vbr->in_hdr_len = sizeof(vbr->status);
+       vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
        vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_GET_ID);
        vbr->out_hdr.sector = 0;
 
@@ -840,7 +919,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
                goto out;
 
        blk_execute_rq(req, false);
-       err = blk_status_to_errno(virtblk_result(vbr->status));
+       err = blk_status_to_errno(virtblk_result(vbr->in_hdr.status));
 out:
        blk_mq_free_request(req);
        return err;
@@ -1498,15 +1577,16 @@ static int virtblk_probe(struct virtio_device *vdev)
        virtblk_update_capacity(vblk, false);
        virtio_device_ready(vdev);
 
-       if (virtblk_has_zoned_feature(vdev)) {
+       /*
+        * All steps that follow use the VQs, so they need to be placed
+        * after the virtio_device_ready() call above.
+        */
+       if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) {
                err = virtblk_probe_zoned_device(vdev, vblk, q);
                if (err)
                        goto out_cleanup_disk;
        }
 
-       dev_info(&vdev->dev, "blk config size: %zu\n",
-               sizeof(struct virtio_blk_config));
-
        err = device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);
        if (err)
                goto out_cleanup_disk;
@@ -1607,10 +1687,7 @@ static unsigned int features[] = {
        VIRTIO_BLK_F_RO, VIRTIO_BLK_F_BLK_SIZE,
        VIRTIO_BLK_F_FLUSH, VIRTIO_BLK_F_TOPOLOGY, VIRTIO_BLK_F_CONFIG_WCE,
        VIRTIO_BLK_F_MQ, VIRTIO_BLK_F_DISCARD, VIRTIO_BLK_F_WRITE_ZEROES,
-       VIRTIO_BLK_F_SECURE_ERASE,
-#ifdef CONFIG_BLK_DEV_ZONED
-       VIRTIO_BLK_F_ZONED,
-#endif /* CONFIG_BLK_DEV_ZONED */
+       VIRTIO_BLK_F_SECURE_ERASE, VIRTIO_BLK_F_ZONED,
 };
 
 static struct virtio_driver virtio_blk = {
index 3006e2a0f37e1fa960f454ea0376f7e4e758c539..43e98a598bd9a51994bbbed072ad98b0f1a696b2 100644
@@ -511,7 +511,7 @@ static const char *btbcm_get_board_name(struct device *dev)
        len = strlen(tmp) + 1;
        board_type = devm_kzalloc(dev, len, GFP_KERNEL);
        strscpy(board_type, tmp, len);
-       for (i = 0; i < board_type[i]; i++) {
+       for (i = 0; i < len; i++) {
                if (board_type[i] == '/')
                        board_type[i] = '-';
        }
index 02893600db390402858ebf65618832f79cadc375..51000320e1ea89442af6a46d03974749e9f80fed 100644
@@ -358,6 +358,7 @@ static void btsdio_remove(struct sdio_func *func)
        if (!data)
                return;
 
+       cancel_work_sync(&data->work);
        hdev = data->hdev;
 
        sdio_set_drvdata(func, NULL);
index 36d42484142aede2a409279f3626f3ffc3e959e9..cf463c1d2102c6fb9bb13bbaea33f1844c9daa17 100644
@@ -329,6 +329,12 @@ static int of_weim_notify(struct notifier_block *nb, unsigned long action,
                                 "Failed to setup timing for '%pOF'\n", rd->dn);
 
                if (!of_node_check_flag(rd->dn, OF_POPULATED)) {
+                       /*
+                        * Clear the flag before adding the device so that
+                        * fw_devlink doesn't skip adding consumers to this
+                        * device.
+                        */
+                       rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
                        if (!of_platform_device_create(rd->dn, NULL, &pdev->dev)) {
                                dev_err(&pdev->dev,
                                        "Failed to create child device '%pOF'\n",
index f91f30560820d529e035c2e62efee3b659acbe6c..ff3a52d48479092257a98ddaaf3b2af701a6bd99 100644
@@ -143,8 +143,9 @@ static int rs9_regmap_i2c_read(void *context,
 static const struct regmap_config rs9_regmap_config = {
        .reg_bits = 8,
        .val_bits = 8,
-       .cache_type = REGCACHE_NONE,
+       .cache_type = REGCACHE_FLAT,
        .max_register = RS9_REG_BCP,
+       .num_reg_defaults_raw = 0x8,
        .rd_table = &rs9_readable_table,
        .wr_table = &rs9_writeable_table,
        .reg_write = rs9_regmap_i2c_write,
index 2836adb817b70e85c01009926003355506e07ce5..e3696a88b5a36af229370e8b7a996d60a987dcb4 100644
@@ -95,14 +95,16 @@ static const struct clk_div_table video_div_table[] = {
        { }
 };
 
-static const char * enet1_ref_sels[] = { "enet1_ref_125m", "enet1_ref_pad", };
+static const char * enet1_ref_sels[] = { "enet1_ref_125m", "enet1_ref_pad", "dummy", "dummy"};
 static const u32 enet1_ref_sels_table[] = { IMX6UL_GPR1_ENET1_TX_CLK_DIR,
-                                           IMX6UL_GPR1_ENET1_CLK_SEL };
+                                           IMX6UL_GPR1_ENET1_CLK_SEL, 0,
+                                           IMX6UL_GPR1_ENET1_TX_CLK_DIR | IMX6UL_GPR1_ENET1_CLK_SEL };
 static const u32 enet1_ref_sels_table_mask = IMX6UL_GPR1_ENET1_TX_CLK_DIR |
                                             IMX6UL_GPR1_ENET1_CLK_SEL;
-static const char * enet2_ref_sels[] = { "enet2_ref_125m", "enet2_ref_pad", };
+static const char * enet2_ref_sels[] = { "enet2_ref_125m", "enet2_ref_pad", "dummy", "dummy"};
 static const u32 enet2_ref_sels_table[] = { IMX6UL_GPR1_ENET2_TX_CLK_DIR,
-                                           IMX6UL_GPR1_ENET2_CLK_SEL };
+                                           IMX6UL_GPR1_ENET2_CLK_SEL, 0,
+                                           IMX6UL_GPR1_ENET2_TX_CLK_DIR | IMX6UL_GPR1_ENET2_CLK_SEL };
 static const u32 enet2_ref_sels_table_mask = IMX6UL_GPR1_ENET2_TX_CLK_DIR |
                                             IMX6UL_GPR1_ENET2_CLK_SEL;
 
index ce81e4087a8fce2e8d2239e6d706de7f33b24c7d..2bfbab8db94bf54dcb164012d79d45b0b0df3d2f 100644
@@ -17,7 +17,6 @@ static const struct regmap_config sprdclk_regmap_config = {
        .reg_bits       = 32,
        .reg_stride     = 4,
        .val_bits       = 32,
-       .max_register   = 0xffff,
        .fast_io        = true,
 };
 
@@ -43,6 +42,8 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
        struct device *dev = &pdev->dev;
        struct device_node *node = dev->of_node, *np;
        struct regmap *regmap;
+       struct resource *res;
+       struct regmap_config reg_config = sprdclk_regmap_config;
 
        if (of_find_property(node, "sprd,syscon", NULL)) {
                regmap = syscon_regmap_lookup_by_phandle(node, "sprd,syscon");
@@ -59,12 +60,14 @@ int sprd_clk_regmap_init(struct platform_device *pdev,
                        return PTR_ERR(regmap);
                }
        } else {
-               base = devm_platform_ioremap_resource(pdev, 0);
+               base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
                if (IS_ERR(base))
                        return PTR_ERR(base);
 
+               reg_config.max_register = resource_size(res) - reg_config.reg_stride;
+
                regmap = devm_regmap_init_mmio(&pdev->dev, base,
-                                              &sprdclk_regmap_config);
+                                              &reg_config);
                if (IS_ERR(regmap)) {
                        pr_err("failed to init regmap\n");
                        return PTR_ERR(regmap);
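The hunk above replaces the fixed `.max_register = 0xffff` with a value derived from the actual MMIO window: the highest valid register offset is the window size minus one register stride. A minimal sketch of that computation:

```c
#include <assert.h>
#include <stddef.h>

/* Highest valid register offset for an MMIO window of `size` bytes with
 * registers every `stride` bytes: the last register starts at
 * size - stride (e.g. a 0x1000-byte window of 32-bit registers ends
 * at offset 0xffc). */
static unsigned int max_register(size_t size, unsigned int stride)
{
    return (unsigned int)(size - stride);
}
```

Sizing the regmap to the real resource lets regmap reject out-of-window accesses instead of silently allowing offsets up to 0xffff.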
index 9754d8b31621168e249dbdab3e33714ba83fb7b9..3c4862a752b5a3b8186f34fab44325d1f8d78bb8 100644 (file)
@@ -1,7 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 menuconfig PM_DEVFREQ
        bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
-       select SRCU
        select PM_OPP
        help
          A device may have a list of frequencies and voltages available.
index a443e7c42dafa050d097dc68446125f43a47ddd7..896a6cc93b00fac3d176e8cd6aa23af67b892c68 100644 (file)
@@ -621,8 +621,7 @@ static int exynos_ppmu_parse_dt(struct platform_device *pdev,
        }
 
        /* Maps the memory mapped IO to control PPMU register */
-       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       base = devm_ioremap_resource(dev, res);
+       base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(base))
                return PTR_ERR(base);
 
index 027e8f336acc96317a41a08293df582c61f7d5bd..88414445adf3a65c66a2a955746811b4d8ec65c1 100644 (file)
@@ -432,7 +432,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
                goto err;
 
        /* Create child platform device for the interconnect provider */
-       if (of_get_property(dev->of_node, "#interconnect-cells", NULL)) {
+       if (of_property_present(dev->of_node, "#interconnect-cells")) {
                bus->icc_pdev = platform_device_register_data(
                                                dev, "exynos-generic-icc",
                                                PLATFORM_DEVID_AUTO, NULL, 0);
@@ -513,7 +513,7 @@ static struct platform_driver exynos_bus_platdrv = {
        .driver = {
                .name   = "exynos-bus",
                .pm     = &exynos_bus_pm,
-               .of_match_table = of_match_ptr(exynos_bus_of_match),
+               .of_match_table = exynos_bus_of_match,
        },
 };
 module_platform_driver(exynos_bus_platdrv);
index 90f28bda29c8bd41e19b340e779c351634d29283..4cf8da77bdd91329012579de26bb7aa01f7979f4 100644 (file)
@@ -75,6 +75,7 @@
 
 #define REG_TX_INTSTATE(idx)           (0x0030 + (idx) * 4)
 #define REG_RX_INTSTATE(idx)           (0x0040 + (idx) * 4)
+#define REG_GLOBAL_INTSTATE(idx)       (0x0050 + (idx) * 4)
 #define REG_CHAN_INTSTATUS(ch, idx)    (0x8010 + (ch) * 0x200 + (idx) * 4)
 #define REG_CHAN_INTMASK(ch, idx)      (0x8020 + (ch) * 0x200 + (idx) * 4)
 
@@ -511,7 +512,10 @@ static int admac_terminate_all(struct dma_chan *chan)
        admac_stop_chan(adchan);
        admac_reset_rings(adchan);
 
-       adchan->current_tx = NULL;
+       if (adchan->current_tx) {
+               list_add_tail(&adchan->current_tx->node, &adchan->to_free);
+               adchan->current_tx = NULL;
+       }
        /*
         * Descriptors can only be freed after the tasklet
         * has been killed (in admac_synchronize).
@@ -672,13 +676,14 @@ static void admac_handle_chan_int(struct admac_data *ad, int no)
 static irqreturn_t admac_interrupt(int irq, void *devid)
 {
        struct admac_data *ad = devid;
-       u32 rx_intstate, tx_intstate;
+       u32 rx_intstate, tx_intstate, global_intstate;
        int i;
 
        rx_intstate = readl_relaxed(ad->base + REG_RX_INTSTATE(ad->irq_index));
        tx_intstate = readl_relaxed(ad->base + REG_TX_INTSTATE(ad->irq_index));
+       global_intstate = readl_relaxed(ad->base + REG_GLOBAL_INTSTATE(ad->irq_index));
 
-       if (!tx_intstate && !rx_intstate)
+       if (!tx_intstate && !rx_intstate && !global_intstate)
                return IRQ_NONE;
 
        for (i = 0; i < ad->nchannels; i += 2) {
@@ -693,6 +698,12 @@ static irqreturn_t admac_interrupt(int irq, void *devid)
                rx_intstate >>= 1;
        }
 
+       if (global_intstate) {
+               dev_warn(ad->dev, "clearing unknown global interrupt flag: %x\n",
+                        global_intstate);
+               writel_relaxed(~(u32) 0, ad->base + REG_GLOBAL_INTSTATE(ad->irq_index));
+       }
+
        return IRQ_HANDLED;
 }
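The interrupt-handler change folds the new global status word into the "was this our interrupt?" decision, and write-1-to-clear acknowledges any unknown global bits so the line is not left stuck. A sketch of just that decision logic (register semantics as described in the patch, not verified against hardware):

```c
#include <assert.h>
#include <stdint.h>

#define IRQ_NONE    0
#define IRQ_HANDLED 1

/* Decision logic shaped like the patched admac_interrupt(): claim the
 * interrupt if any of the per-direction or global status words is
 * nonzero; a nonzero global word would then be acknowledged by writing
 * all-ones back (write-1-to-clear). */
static int irq_result(uint32_t tx, uint32_t rx, uint32_t global)
{
    if (!tx && !rx && !global)
        return IRQ_NONE;
    return IRQ_HANDLED;
}
```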
 
@@ -850,6 +861,9 @@ static int admac_probe(struct platform_device *pdev)
 
        dma->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
        dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+       dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+                       BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+                       BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
        dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                        BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
                        BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
index c24bca210104c1e92a46da5ef8340489e27d5da7..826b98284fa1f845698cc996035d5f6bb56fdcc5 100644 (file)
@@ -1342,7 +1342,7 @@ int dmaenginem_async_device_register(struct dma_device *device)
        if (ret)
                return ret;
 
-       return devm_add_action(device->dev, dmaenginem_async_device_unregister, device);
+       return devm_add_action_or_reset(device->dev, dmaenginem_async_device_unregister, device);
 }
 EXPORT_SYMBOL(dmaenginem_async_device_register);
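The one-line fix above matters because `devm_add_action()` can fail after the device was already registered, leaking it. The `_or_reset` variant runs the action immediately on registration failure. A sketch of that semantic difference:

```c
#include <assert.h>
#include <stddef.h>

static int cleaned_up;
static void unregister_action(void *data) { (void)data; cleaned_up = 1; }

/* Semantics of devm_add_action_or_reset(): if the cleanup action cannot
 * be registered, run it immediately so the just-acquired resource is
 * released instead of leaked. */
static int add_action_or_reset(int register_fails,
                               void (*action)(void *), void *data)
{
    if (register_fails) {
        action(data); /* undo right away */
        return -1;    /* stand-in for -ENOMEM */
    }
    return 0;         /* action runs later, at device detach */
}

/* Helper exercising both paths: did the cleanup run synchronously? */
static int cleanup_ran(int register_fails)
{
    cleaned_up = 0;
    add_action_or_reset(register_fails, unregister_action, NULL);
    return cleaned_up;
}
```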
 
index 462109c61653752b698558421a0192124ae0e20c..93ee298d52b894f1a7200844168c2baaf0406763 100644 (file)
@@ -277,7 +277,7 @@ failed:
 
 /**
  * xdma_xfer_start - Start DMA transfer
- * @xdma_chan: DMA channel pointer
+ * @xchan: DMA channel pointer
  */
 static int xdma_xfer_start(struct xdma_chan *xchan)
 {
index 1583157da355b2c7c500ee4b9fb51e7298cd4e71..efd025d8961e682812de5b748e61795e45b7f1cb 100644 (file)
@@ -177,6 +177,40 @@ void dm_helpers_dp_update_branch_info(
        const struct dc_link *link)
 {}
 
+static void dm_helpers_construct_old_payload(
+                       struct dc_link *link,
+                       int pbn_per_slot,
+                       struct drm_dp_mst_atomic_payload *new_payload,
+                       struct drm_dp_mst_atomic_payload *old_payload)
+{
+       struct link_mst_stream_allocation_table current_link_table =
+                                                                       link->mst_stream_alloc_table;
+       struct link_mst_stream_allocation *dc_alloc;
+       int i;
+
+       *old_payload = *new_payload;
+
+       /* Set correct time_slots/PBN of old payload.
+        * other fields (delete & dsc_enabled) in
+        * struct drm_dp_mst_atomic_payload are don't care fields
+        * while calling drm_dp_remove_payload()
+        */
+       for (i = 0; i < current_link_table.stream_count; i++) {
+               dc_alloc =
+                       &current_link_table.stream_allocations[i];
+
+               if (dc_alloc->vcp_id == new_payload->vcpi) {
+                       old_payload->time_slots = dc_alloc->slot_count;
+                       old_payload->pbn = dc_alloc->slot_count * pbn_per_slot;
+                       break;
+               }
+       }
+
+       /* make sure there is an old payload */
+       ASSERT(i != current_link_table.stream_count);
+
+}
+
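The helper added above rebuilds the pre-update payload by searching the link's stream allocation table for the matching VCPI and deriving time slots and PBN from it. A simplified userspace model of that lookup (struct layouts reduced to the fields used here):

```c
#include <assert.h>

struct stream_alloc { int vcp_id; int slot_count; };
struct payload { int vcpi; int time_slots; int pbn; };

/* Rebuild the old payload from the allocation table: copy the new
 * payload (other fields are don't-care for removal), then overwrite
 * time_slots/PBN from the entry whose VCPI matches. */
static struct payload construct_old_payload(const struct stream_alloc *table,
                                            int count, int pbn_per_slot,
                                            struct payload new_payload)
{
    struct payload old = new_payload;

    for (int i = 0; i < count; i++) {
        if (table[i].vcp_id == new_payload.vcpi) {
            old.time_slots = table[i].slot_count;
            old.pbn = table[i].slot_count * pbn_per_slot;
            break;
        }
    }
    return old;
}

/* Canned table for demonstration: two streams with 8 and 12 slots. */
static const struct stream_alloc demo_table[] = { { 1, 8 }, { 2, 12 } };

static struct payload demo_old(int vcpi)
{
    struct payload np = { .vcpi = vcpi };
    return construct_old_payload(demo_table, 2, 60 /* pbn_per_slot */, np);
}
```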
 /*
  * Writes payload allocation table in immediate downstream device.
  */
@@ -188,7 +222,7 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
 {
        struct amdgpu_dm_connector *aconnector;
        struct drm_dp_mst_topology_state *mst_state;
-       struct drm_dp_mst_atomic_payload *payload;
+       struct drm_dp_mst_atomic_payload *target_payload, *new_payload, old_payload;
        struct drm_dp_mst_topology_mgr *mst_mgr;
 
        aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
@@ -204,17 +238,26 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
        mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
 
        /* It's OK for this to fail */
-       payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port);
-       if (enable)
-               drm_dp_add_payload_part1(mst_mgr, mst_state, payload);
-       else
-               drm_dp_remove_payload(mst_mgr, mst_state, payload, payload);
+       new_payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port);
+
+       if (enable) {
+               target_payload = new_payload;
+
+               drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload);
+       } else {
+               /* construct old payload by VCPI */
+               dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div,
+                                               new_payload, &old_payload);
+               target_payload = &old_payload;
+
+               drm_dp_remove_payload(mst_mgr, mst_state, &old_payload, new_payload);
+       }
 
        /* mst_mgr->->payloads are VC payload notify MST branch using DPCD or
         * AUX message. The sequence is slot 1-63 allocated sequence for each
         * stream. AMD ASIC stream slot allocation should follow the same
         * sequence. copy DRM MST allocation to dc */
-       fill_dc_mst_payload_table_from_drm(stream->link, enable, payload, proposed_table);
+       fill_dc_mst_payload_table_from_drm(stream->link, enable, target_payload, proposed_table);
 
        return true;
 }
index f085cb97a62060ab730d44aaa52f7fcde9080566..85a090b9e3d9745da9e520988f289550c4f2b659 100644 (file)
 #define CTF_OFFSET_HOTSPOT             5
 #define CTF_OFFSET_MEM                 5
 
+static const int pmfw_decoded_link_speed[5] = {1, 2, 3, 4, 5};
+static const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
+
+#define DECODE_GEN_SPEED(gen_speed_idx)                (pmfw_decoded_link_speed[gen_speed_idx])
+#define DECODE_LANE_WIDTH(lane_width_idx)      (pmfw_decoded_link_width[lane_width_idx])
+
 struct smu_13_0_max_sustainable_clocks {
        uint32_t display_clock;
        uint32_t phy_clock;
index 27448ffe60a43950591badc8a16cbc38ad214fd1..a5c97d61e92a67115208c83e0c59a07c15fdf183 100644 (file)
@@ -1144,8 +1144,8 @@ static int smu_v13_0_0_print_clk_levels(struct smu_context *smu,
                                        (pcie_table->pcie_lane[i] == 5) ? "x12" :
                                        (pcie_table->pcie_lane[i] == 6) ? "x16" : "",
                                        pcie_table->clk_freq[i],
-                                       ((gen_speed - 1) == pcie_table->pcie_gen[i]) &&
-                                       (lane_width == link_width[pcie_table->pcie_lane[i]]) ?
+                                       (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) &&
+                                       (lane_width == DECODE_LANE_WIDTH(link_width[pcie_table->pcie_lane[i]])) ?
                                        "*" : "");
                break;
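The PMFW reports gen speed and lane width as table indices rather than literal values, so comparing the raw index against the decoded width made the active-level "*" marker wrong. The decode tables added above fix that; a self-contained copy of the mapping:

```c
#include <assert.h>

/* PMFW encodes link speed/width as indices into these tables
 * (copied from the patch). */
static const int pmfw_decoded_link_speed[5] = { 1, 2, 3, 4, 5 };
static const int pmfw_decoded_link_width[7] = { 0, 1, 2, 4, 8, 12, 16 };

static int decode_gen_speed(int idx)  { return pmfw_decoded_link_speed[idx]; }
static int decode_lane_width(int idx) { return pmfw_decoded_link_width[idx]; }
```

For example, a reported lane index of 6 decodes to a x16 link, and a gen index of 3 decodes to Gen 4.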
 
index 9e1967d8049e3a02db79a68563810f713b3292f4..4399416dd9b8f47cadd52b6641edae45a1c6588b 100644 (file)
@@ -575,6 +575,14 @@ static int smu_v13_0_7_set_default_dpm_table(struct smu_context *smu)
                                                     dpm_table);
                if (ret)
                        return ret;
+
+               if (skutable->DriverReportedClocks.GameClockAc &&
+                       (dpm_table->dpm_levels[dpm_table->count - 1].value >
+                       skutable->DriverReportedClocks.GameClockAc)) {
+                       dpm_table->dpm_levels[dpm_table->count - 1].value =
+                               skutable->DriverReportedClocks.GameClockAc;
+                       dpm_table->max = skutable->DriverReportedClocks.GameClockAc;
+               }
        } else {
                dpm_table->count = 1;
                dpm_table->dpm_levels[0].value = smu->smu_table.boot_values.gfxclk / 100;
@@ -828,6 +836,57 @@ static int smu_v13_0_7_get_smu_metrics_data(struct smu_context *smu,
        return ret;
 }
 
+static int smu_v13_0_7_get_dpm_ultimate_freq(struct smu_context *smu,
+                                            enum smu_clk_type clk_type,
+                                            uint32_t *min,
+                                            uint32_t *max)
+{
+       struct smu_13_0_dpm_context *dpm_context =
+               smu->smu_dpm.dpm_context;
+       struct smu_13_0_dpm_table *dpm_table;
+
+       switch (clk_type) {
+       case SMU_MCLK:
+       case SMU_UCLK:
+               /* uclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.uclk_table;
+               break;
+       case SMU_GFXCLK:
+       case SMU_SCLK:
+               /* gfxclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.gfx_table;
+               break;
+       case SMU_SOCCLK:
+               /* socclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.soc_table;
+               break;
+       case SMU_FCLK:
+               /* fclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.fclk_table;
+               break;
+       case SMU_VCLK:
+       case SMU_VCLK1:
+               /* vclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.vclk_table;
+               break;
+       case SMU_DCLK:
+       case SMU_DCLK1:
+               /* dclk dpm table */
+               dpm_table = &dpm_context->dpm_tables.dclk_table;
+               break;
+       default:
+               dev_err(smu->adev->dev, "Unsupported clock type!\n");
+               return -EINVAL;
+       }
+
+       if (min)
+               *min = dpm_table->min;
+       if (max)
+               *max = dpm_table->max;
+
+       return 0;
+}
+
 static int smu_v13_0_7_read_sensor(struct smu_context *smu,
                                   enum amd_pp_sensors sensor,
                                   void *data,
@@ -1074,8 +1133,8 @@ static int smu_v13_0_7_print_clk_levels(struct smu_context *smu,
                                        (pcie_table->pcie_lane[i] == 5) ? "x12" :
                                        (pcie_table->pcie_lane[i] == 6) ? "x16" : "",
                                        pcie_table->clk_freq[i],
-                                       (gen_speed == pcie_table->pcie_gen[i]) &&
-                                       (lane_width == pcie_table->pcie_lane[i]) ?
+                                       (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) &&
+                                       (lane_width == DECODE_LANE_WIDTH(pcie_table->pcie_lane[i])) ?
                                        "*" : "");
                break;
 
@@ -1329,9 +1388,17 @@ static int smu_v13_0_7_populate_umd_state_clk(struct smu_context *smu)
                                &dpm_context->dpm_tables.fclk_table;
        struct smu_umd_pstate_table *pstate_table =
                                &smu->pstate_table;
+       struct smu_table_context *table_context = &smu->smu_table;
+       PPTable_t *pptable = table_context->driver_pptable;
+       DriverReportedClocks_t driver_clocks =
+               pptable->SkuTable.DriverReportedClocks;
 
        pstate_table->gfxclk_pstate.min = gfx_table->min;
-       pstate_table->gfxclk_pstate.peak = gfx_table->max;
+       if (driver_clocks.GameClockAc &&
+               (driver_clocks.GameClockAc < gfx_table->max))
+               pstate_table->gfxclk_pstate.peak = driver_clocks.GameClockAc;
+       else
+               pstate_table->gfxclk_pstate.peak = gfx_table->max;
 
        pstate_table->uclk_pstate.min = mem_table->min;
        pstate_table->uclk_pstate.peak = mem_table->max;
@@ -1348,12 +1415,12 @@ static int smu_v13_0_7_populate_umd_state_clk(struct smu_context *smu)
        pstate_table->fclk_pstate.min = fclk_table->min;
        pstate_table->fclk_pstate.peak = fclk_table->max;
 
-       /*
-        * For now, just use the mininum clock frequency.
-        * TODO: update them when the real pstate settings available
-        */
-       pstate_table->gfxclk_pstate.standard = gfx_table->min;
-       pstate_table->uclk_pstate.standard = mem_table->min;
+       if (driver_clocks.BaseClockAc &&
+               driver_clocks.BaseClockAc < gfx_table->max)
+               pstate_table->gfxclk_pstate.standard = driver_clocks.BaseClockAc;
+       else
+               pstate_table->gfxclk_pstate.standard = gfx_table->max;
+       pstate_table->uclk_pstate.standard = mem_table->max;
        pstate_table->socclk_pstate.standard = soc_table->min;
        pstate_table->vclk_pstate.standard = vclk_table->min;
        pstate_table->dclk_pstate.standard = dclk_table->min;
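Both hunks above apply the same rule: prefer the firmware-reported clock (GameClockAc for peak, BaseClockAc for standard) when it is nonzero and below the DPM table maximum, otherwise fall back to the table value. A sketch of that selection:

```c
#include <assert.h>
#include <stdint.h>

/* Clock selection from the patch: a nonzero firmware-reported clock
 * below the DPM table maximum wins; otherwise use the table maximum. */
static uint32_t pstate_clock(uint32_t table_max, uint32_t reported_clock)
{
    if (reported_clock && reported_clock < table_max)
        return reported_clock;
    return table_max;
}
```

A zero reported clock means the firmware did not provide one, which is why the nonzero check comes first.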
@@ -1676,7 +1743,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
        .dpm_set_jpeg_enable = smu_v13_0_set_jpeg_enable,
        .init_pptable_microcode = smu_v13_0_init_pptable_microcode,
        .populate_umd_state_clk = smu_v13_0_7_populate_umd_state_clk,
-       .get_dpm_ultimate_freq = smu_v13_0_get_dpm_ultimate_freq,
+       .get_dpm_ultimate_freq = smu_v13_0_7_get_dpm_ultimate_freq,
        .get_vbios_bootup_values = smu_v13_0_get_vbios_bootup_values,
        .read_sensor = smu_v13_0_7_read_sensor,
        .feature_is_enabled = smu_cmn_feature_is_enabled,
index 0643887800b4d33cea277876b604e44a18f76c5c..142668cd6d7cdd079d893a81aa97a459f5838dcc 100644 (file)
@@ -99,7 +99,6 @@ static int armada_drm_bind(struct device *dev)
        if (ret) {
                dev_err(dev, "[" DRM_NAME ":%s] can't kick out simple-fb: %d\n",
                        __func__, ret);
-               kfree(priv);
                return ret;
        }
 
index 468a792e6a405a7f3b18bec607704d1e0c7740ef..fc0eaf40dc941745d43181d60f27db0b6a0f9d64 100644 (file)
@@ -300,9 +300,21 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
 {
        struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
        struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+       i915_reg_t dss_ctl1_reg, dss_ctl2_reg;
        u32 dss_ctl1;
 
-       dss_ctl1 = intel_de_read(dev_priv, DSS_CTL1);
+       /* FIXME: Move all DSS handling to intel_vdsc.c */
+       if (DISPLAY_VER(dev_priv) >= 12) {
+               struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
+
+               dss_ctl1_reg = ICL_PIPE_DSS_CTL1(crtc->pipe);
+               dss_ctl2_reg = ICL_PIPE_DSS_CTL2(crtc->pipe);
+       } else {
+               dss_ctl1_reg = DSS_CTL1;
+               dss_ctl2_reg = DSS_CTL2;
+       }
+
+       dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg);
        dss_ctl1 |= SPLITTER_ENABLE;
        dss_ctl1 &= ~OVERLAP_PIXELS_MASK;
        dss_ctl1 |= OVERLAP_PIXELS(intel_dsi->pixel_overlap);
@@ -323,16 +335,16 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
 
                dss_ctl1 &= ~LEFT_DL_BUF_TARGET_DEPTH_MASK;
                dss_ctl1 |= LEFT_DL_BUF_TARGET_DEPTH(dl_buffer_depth);
-               dss_ctl2 = intel_de_read(dev_priv, DSS_CTL2);
+               dss_ctl2 = intel_de_read(dev_priv, dss_ctl2_reg);
                dss_ctl2 &= ~RIGHT_DL_BUF_TARGET_DEPTH_MASK;
                dss_ctl2 |= RIGHT_DL_BUF_TARGET_DEPTH(dl_buffer_depth);
-               intel_de_write(dev_priv, DSS_CTL2, dss_ctl2);
+               intel_de_write(dev_priv, dss_ctl2_reg, dss_ctl2);
        } else {
                /* Interleave */
                dss_ctl1 |= DUAL_LINK_MODE_INTERLEAVE;
        }
 
-       intel_de_write(dev_priv, DSS_CTL1, dss_ctl1);
+       intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1);
 }
 
 /* aka DSI 8X clock */
index 76678dd60f93f9fd7373615776d73266abbe7344..c4c6f67af7ccc817a48e6dd84d3faee7d3c45e05 100644 (file)
@@ -31,6 +31,7 @@ gf108_fb = {
        .init = gf100_fb_init,
        .init_page = gf100_fb_init_page,
        .intr = gf100_fb_intr,
+       .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init,
        .ram_new = gf108_ram_new,
        .default_bigpage = 17,
 };
index f73442ccb424b491461054cb30abe87aaf6f8520..433fa966ba2319119d1140eb7cba2d7e02cc7483 100644 (file)
@@ -77,6 +77,7 @@ gk104_fb = {
        .init = gf100_fb_init,
        .init_page = gf100_fb_init_page,
        .intr = gf100_fb_intr,
+       .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init,
        .ram_new = gk104_ram_new,
        .default_bigpage = 17,
        .clkgate_pack = gk104_fb_clkgate_pack,
index 45d6cdffafeedc5f326bc45556357d882531f7d9..4dc283dedf8b5b383446ccdfa560bd15ca558297 100644 (file)
@@ -59,6 +59,7 @@ gk110_fb = {
        .init = gf100_fb_init,
        .init_page = gf100_fb_init_page,
        .intr = gf100_fb_intr,
+       .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init,
        .ram_new = gk104_ram_new,
        .default_bigpage = 17,
        .clkgate_pack = gk110_fb_clkgate_pack,
index de52462a92bf0ecd2b85a509091800e7b9761444..90bfff616d35bb539efd77d8fc49eab163fee089 100644 (file)
@@ -31,6 +31,7 @@ gm107_fb = {
        .init = gf100_fb_init,
        .init_page = gf100_fb_init_page,
        .intr = gf100_fb_intr,
+       .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init,
        .ram_new = gm107_ram_new,
        .default_bigpage = 17,
 };
index 15d04a0ec623469d6bd89c8460a335b9b22cfd8f..e0a8890a62e23a933c853b73ef27b5ee31acbbc2 100644 (file)
@@ -507,12 +507,19 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 {
        struct drm_sched_entity *entity = sched_job->entity;
        bool first;
+       ktime_t submit_ts;
 
        trace_drm_sched_job(sched_job, entity);
        atomic_inc(entity->rq->sched->score);
        WRITE_ONCE(entity->last_user, current->group_leader);
+
+       /*
+        * After the sched_job is pushed into the entity queue, it may be
+        * completed and freed up at any time. We can no longer access it.
+        * Make sure to set the submit_ts first, to avoid a race.
+        */
+       sched_job->submit_ts = submit_ts = ktime_get();
        first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
-       sched_job->submit_ts = ktime_get();
 
        /* first job wakes up scheduler */
        if (first) {
@@ -529,7 +536,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
                spin_unlock(&entity->rq_lock);
 
                if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-                       drm_sched_rq_update_fifo(entity, sched_job->submit_ts);
+                       drm_sched_rq_update_fifo(entity, submit_ts);
 
                drm_sched_wakeup(entity->rq->sched);
        }
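The scheduler fix above is a publish-then-don't-touch pattern: once the job is pushed to the entity queue it may complete and be freed at any moment, so the submit timestamp is copied into a local before the push and only the local is used afterwards. A minimal model of the pattern (clock stubbed out):

```c
#include <assert.h>
#include <stdint.h>

struct sched_job { int64_t submit_ts; };

/* Stand-in for ktime_get(). */
static int64_t fake_clock(void) { return 100; }

/* Record the timestamp in a local *before* publishing the job; after
 * the (elided) queue push the job may already be freed by a concurrent
 * completion, so only the local copy is returned for later use. */
static int64_t push_job(struct sched_job *job)
{
    int64_t submit_ts;

    job->submit_ts = submit_ts = fake_clock();
    /* spsc_queue_push(job) would happen here; job is now off-limits. */
    return submit_ts; /* e.g. handed to the FIFO rq update */
}

static int64_t demo(void)
{
    struct sched_job job;

    return push_job(&job);
}
```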
index 82f64fb31fdab669bf81ccb415ed230d5734b13e..4ce012f83253ec9f12e3c12b5f95c12142cb4e8c 100644 (file)
@@ -1122,7 +1122,7 @@ config HID_TOPRE
        tristate "Topre REALFORCE keyboards"
        depends on HID
        help
-         Say Y for N-key rollover support on Topre REALFORCE R2 108 key keyboards.
+         Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key keyboards.
 
 config HID_THINGM
        tristate "ThingM blink(1) USB RGB LED"
index 63545cd307e5f8805672301dac5adbfa0025f5f2..c2e9b6d1fd7d3eda054e33cf73a47a3b1f4deea4 100644 (file)
 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN   0x261A
 #define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN  0x2A1C
 #define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN     0x279F
+#define I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100        0x29F5
+#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1     0x2BED
+#define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2     0x2BEE
 
 #define USB_VENDOR_ID_ELECOM           0x056e
 #define USB_DEVICE_ID_ELECOM_BM084     0x0061
 
 #define USB_VENDOR_ID_TOPRE                    0x0853
 #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_108                   0x0148
+#define USB_DEVICE_ID_TOPRE_REALFORCE_R2_87                    0x0146
 
 #define USB_VENDOR_ID_TOPSEED          0x0766
 #define USB_DEVICE_ID_TOPSEED_CYBERLINK        0x0204
index 7fc967964dd8167de925c977e2207b8cb82b4603..5c65a584b3fa00492112309e8b3454feec66e70e 100644 (file)
@@ -398,6 +398,12 @@ static const struct hid_device_id hid_battery_quirks[] = {
          HID_BATTERY_QUIRK_IGNORE },
        { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN),
          HID_BATTERY_QUIRK_IGNORE },
+       { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100),
+         HID_BATTERY_QUIRK_IGNORE },
+       { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1),
+         HID_BATTERY_QUIRK_IGNORE },
+       { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2),
+         HID_BATTERY_QUIRK_IGNORE },
        {}
 };
 
index 3e3f89e01d819bb30b2eb30c36adc1d0ba89df42..d85398721659ab6ac90ddb9e48fea4c89fd90e38 100644 (file)
@@ -940,7 +940,7 @@ hid_sensor_register_platform_device(struct platform_device *pdev,
                                    struct hid_sensor_hub_device *hsdev,
                                    const struct hid_sensor_custom_match *match)
 {
-       char real_usage[HID_SENSOR_USAGE_LENGTH];
+       char real_usage[HID_SENSOR_USAGE_LENGTH] = { 0 };
        struct platform_device *custom_pdev;
        const char *dev_name;
        char *c;
index 88a91cdad5f800e951121b6e252fa7780629df22..d1d5ca310eadc0fd7b4d5a44c050efffe7b96e69 100644 (file)
@@ -36,6 +36,8 @@ static __u8 *topre_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 static const struct hid_device_id topre_id_table[] = {
        { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE,
                         USB_DEVICE_ID_TOPRE_REALFORCE_R2_108) },
+       { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE,
+                        USB_DEVICE_ID_TOPRE_REALFORCE_R2_87) },
        { }
 };
 MODULE_DEVICE_TABLE(hid, topre_id_table);
index 81385ab37fa9a0dc40a703e6a4594501aa12c236..7fc738a223755601e50e51cecbf8f985e6241f01 100644 (file)
@@ -241,8 +241,8 @@ static int ishtp_cl_bus_match(struct device *dev, struct device_driver *drv)
        struct ishtp_cl_device *device = to_ishtp_cl_device(dev);
        struct ishtp_cl_driver *driver = to_ishtp_cl_driver(drv);
 
-       return guid_equal(&driver->id[0].guid,
-                         &device->fw_client->props.protocol_name);
+       return(device->fw_client ? guid_equal(&driver->id[0].guid,
+              &device->fw_client->props.protocol_name) : 0);
 }
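The bus-match change above guards against a device whose firmware client properties were never populated: such a device now simply fails to match instead of the old code dereferencing a NULL `fw_client`. A sketch of the guarded comparison:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct guid { unsigned char b[16]; };

static int guid_equal(const struct guid *a, const struct guid *b)
{
    return memcmp(a->b, b->b, sizeof(a->b)) == 0;
}

/* Match logic after the fix: no firmware client info (NULL) means no
 * match, rather than a NULL-pointer dereference. */
static int bus_match(const struct guid *driver_guid,
                     const struct guid *fw_client_guid /* may be NULL */)
{
    return fw_client_guid ? guid_equal(driver_guid, fw_client_guid) : 0;
}

static const struct guid demo_guid = { { 0xde, 0xad, 0xbe, 0xef } };
```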
 
 /**
index 09af7592114789a924c8bef1cc095df192367151..b21ffd6df9276b34ec14267d6db6fe9cd93c9c66 100644 (file)
@@ -48,9 +48,9 @@
  * SR_HOLD_TIME_XK_TICKS field will indicate the number of ticks of the
  * baud clock required to program 'Hold Time' at X KHz.
  */
-#define SR_HOLD_TIME_100K_TICKS        133
-#define SR_HOLD_TIME_400K_TICKS        20
-#define SR_HOLD_TIME_1000K_TICKS       11
+#define SR_HOLD_TIME_100K_TICKS                150
+#define SR_HOLD_TIME_400K_TICKS                20
+#define SR_HOLD_TIME_1000K_TICKS       12
 
 #define SMB_CORE_COMPLETION_REG_OFF3   (SMBUS_MAST_CORE_ADDR_BASE + 0x23)
 
  * the baud clock required to program 'fair idle delay' at X KHz. Fair idle
  * delay establishes the MCTP T(IDLE_DELAY) period.
  */
-#define FAIR_BUS_IDLE_MIN_100K_TICKS           969
-#define FAIR_BUS_IDLE_MIN_400K_TICKS           157
-#define FAIR_BUS_IDLE_MIN_1000K_TICKS          157
+#define FAIR_BUS_IDLE_MIN_100K_TICKS           992
+#define FAIR_BUS_IDLE_MIN_400K_TICKS           500
+#define FAIR_BUS_IDLE_MIN_1000K_TICKS          500
 
 /*
  * FAIR_IDLE_DELAY_XK_TICKS field will indicate the number of ticks of the
  * baud clock required to satisfy the fairness protocol at X KHz.
  */
-#define FAIR_IDLE_DELAY_100K_TICKS     1000
-#define FAIR_IDLE_DELAY_400K_TICKS     500
-#define FAIR_IDLE_DELAY_1000K_TICKS    500
+#define FAIR_IDLE_DELAY_100K_TICKS     963
+#define FAIR_IDLE_DELAY_400K_TICKS     156
+#define FAIR_IDLE_DELAY_1000K_TICKS    156
 
 #define SMB_IDLE_SCALING_100K          \
        ((FAIR_IDLE_DELAY_100K_TICKS << 16) | FAIR_BUS_IDLE_MIN_100K_TICKS)
  */
 #define BUS_CLK_100K_LOW_PERIOD_TICKS          156
 #define BUS_CLK_400K_LOW_PERIOD_TICKS          41
-#define BUS_CLK_1000K_LOW_PERIOD_TICKS 15
+#define BUS_CLK_1000K_LOW_PERIOD_TICKS         15
 
 /*
  * BUS_CLK_XK_HIGH_PERIOD_TICKS field defines the number of I2C Baud Clock
  */
 #define CLK_SYNC_100K                  4
 #define CLK_SYNC_400K                  4
-#define CLK_SYNC_1000K         4
+#define CLK_SYNC_1000K                 4
 
 #define SMB_CORE_DATA_TIMING_REG_OFF   (SMBUS_MAST_CORE_ADDR_BASE + 0x40)
 
  * determines the SCLK hold time following SDAT driven low during the first
  * START bit in a transfer.
  */
-#define FIRST_START_HOLD_100K_TICKS    22
-#define FIRST_START_HOLD_400K_TICKS    16
-#define FIRST_START_HOLD_1000K_TICKS   6
+#define FIRST_START_HOLD_100K_TICKS    23
+#define FIRST_START_HOLD_400K_TICKS    8
+#define FIRST_START_HOLD_1000K_TICKS   12
 
 /*
  * STOP_SETUP_XK_TICKS will indicate the number of ticks of the baud clock
  * required to program 'STOP_SETUP' timer at X KHz. This timer determines the
  * SDAT setup time from the rising edge of SCLK for a STOP condition.
  */
-#define STOP_SETUP_100K_TICKS          157
+#define STOP_SETUP_100K_TICKS          150
 #define STOP_SETUP_400K_TICKS          20
-#define STOP_SETUP_1000K_TICKS 12
+#define STOP_SETUP_1000K_TICKS         12
 
 /*
  * RESTART_SETUP_XK_TICKS will indicate the number of ticks of the baud clock
  * required to program 'RESTART_SETUP' timer at X KHz. This timer determines the
  * SDAT setup time from the rising edge of SCLK for a repeated START condition.
  */
-#define RESTART_SETUP_100K_TICKS       157
+#define RESTART_SETUP_100K_TICKS       156
 #define RESTART_SETUP_400K_TICKS       20
 #define RESTART_SETUP_1000K_TICKS      12
 
  * required to program 'DATA_HOLD' timer at X KHz. This timer determines the
  * SDAT hold time following SCLK driven low.
  */
-#define DATA_HOLD_100K_TICKS           2
+#define DATA_HOLD_100K_TICKS           12
 #define DATA_HOLD_400K_TICKS           2
 #define DATA_HOLD_1000K_TICKS          2
 
  * Bus Idle Minimum time = BUS_IDLE_MIN[7:0] x Baud_Clock_Period x
  * (BUS_IDLE_MIN_XK_TICKS[7] ? 4,1)
  */
-#define BUS_IDLE_MIN_100K_TICKS                167UL
-#define BUS_IDLE_MIN_400K_TICKS                139UL
-#define BUS_IDLE_MIN_1000K_TICKS               133UL
+#define BUS_IDLE_MIN_100K_TICKS                36UL
+#define BUS_IDLE_MIN_400K_TICKS                10UL
+#define BUS_IDLE_MIN_1000K_TICKS       4UL
 
 /*
  * CTRL_CUM_TIME_OUT_XK_TICKS defines SMBus Controller Cumulative Time-Out.
  * SMBus Controller Cumulative Time-Out duration =
  * CTRL_CUM_TIME_OUT_XK_TICKS[7:0] x Baud_Clock_Period x 2048
  */
-#define CTRL_CUM_TIME_OUT_100K_TICKS           159
-#define CTRL_CUM_TIME_OUT_400K_TICKS           159
-#define CTRL_CUM_TIME_OUT_1000K_TICKS          159
+#define CTRL_CUM_TIME_OUT_100K_TICKS           76
+#define CTRL_CUM_TIME_OUT_400K_TICKS           76
+#define CTRL_CUM_TIME_OUT_1000K_TICKS          76
 
 /*
  * TARGET_CUM_TIME_OUT_XK_TICKS defines SMBus Target Cumulative Time-Out duration.
  * SMBus Target Cumulative Time-Out duration = TARGET_CUM_TIME_OUT_XK_TICKS[7:0] x
  * Baud_Clock_Period x 4096
  */
-#define TARGET_CUM_TIME_OUT_100K_TICKS 199
-#define TARGET_CUM_TIME_OUT_400K_TICKS 199
-#define TARGET_CUM_TIME_OUT_1000K_TICKS        199
+#define TARGET_CUM_TIME_OUT_100K_TICKS 95
+#define TARGET_CUM_TIME_OUT_400K_TICKS 95
+#define TARGET_CUM_TIME_OUT_1000K_TICKS        95
 
 /*
  * CLOCK_HIGH_TIME_OUT_XK defines Clock High time out period.
  * Clock High time out period = CLOCK_HIGH_TIME_OUT_XK[7:0] x Baud_Clock_Period x 8
  */
-#define CLOCK_HIGH_TIME_OUT_100K_TICKS 204
-#define CLOCK_HIGH_TIME_OUT_400K_TICKS 204
-#define CLOCK_HIGH_TIME_OUT_1000K_TICKS        204
+#define CLOCK_HIGH_TIME_OUT_100K_TICKS 97
+#define CLOCK_HIGH_TIME_OUT_400K_TICKS 97
+#define CLOCK_HIGH_TIME_OUT_1000K_TICKS        97
 
 #define TO_SCALING_100K                \
        ((BUS_IDLE_MIN_100K_TICKS << 24) | (CTRL_CUM_TIME_OUT_100K_TICKS << 16) | \
index a0af027db04c11b98fef28be555ceb7804d92eb6..2e575856c5cd5dfa57a7fa28bce0ac17d62bfa30 100644 (file)
@@ -342,18 +342,18 @@ static int ocores_poll_wait(struct ocores_i2c *i2c)
  * ocores_isr(), we just add our polling code around it.
  *
  * It can run in atomic context
+ *
+ * Return: 0 on success, -ETIMEDOUT on timeout
  */
-static void ocores_process_polling(struct ocores_i2c *i2c)
+static int ocores_process_polling(struct ocores_i2c *i2c)
 {
-       while (1) {
-               irqreturn_t ret;
-               int err;
+       irqreturn_t ret;
+       int err = 0;
 
+       while (1) {
                err = ocores_poll_wait(i2c);
-               if (err) {
-                       i2c->state = STATE_ERROR;
+               if (err)
                        break; /* timeout */
-               }
 
                ret = ocores_isr(-1, i2c);
                if (ret == IRQ_NONE)
@@ -364,13 +364,15 @@ static void ocores_process_polling(struct ocores_i2c *i2c)
                                        break;
                }
        }
+
+       return err;
 }
 
 static int ocores_xfer_core(struct ocores_i2c *i2c,
                            struct i2c_msg *msgs, int num,
                            bool polling)
 {
-       int ret;
+       int ret = 0;
        u8 ctrl;
 
        ctrl = oc_getreg(i2c, OCI2C_CONTROL);
@@ -388,15 +390,16 @@ static int ocores_xfer_core(struct ocores_i2c *i2c,
        oc_setreg(i2c, OCI2C_CMD, OCI2C_CMD_START);
 
        if (polling) {
-               ocores_process_polling(i2c);
+               ret = ocores_process_polling(i2c);
        } else {
-               ret = wait_event_timeout(i2c->wait,
-                                        (i2c->state == STATE_ERROR) ||
-                                        (i2c->state == STATE_DONE), HZ);
-               if (ret == 0) {
-                       ocores_process_timeout(i2c);
-                       return -ETIMEDOUT;
-               }
+               if (wait_event_timeout(i2c->wait,
+                                      (i2c->state == STATE_ERROR) ||
+                                      (i2c->state == STATE_DONE), HZ) == 0)
+                       ret = -ETIMEDOUT;
+       }
+       if (ret) {
+               ocores_process_timeout(i2c);
+               return ret;
        }
 
        return (i2c->state == STATE_DONE) ? num : -EIO;
index bce6b796e04c2ca0523bf1fc83117b13b4bc2c43..545436b7dd5355a5505f1c4348241d58a6157470 100644 (file)
@@ -178,6 +178,11 @@ static int of_i2c_notify(struct notifier_block *nb, unsigned long action,
                        return NOTIFY_OK;
                }
 
+               /*
+                * Clear the flag before adding the device so that fw_devlink
+                * doesn't skip adding consumers to this device.
+                */
+               rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
                client = of_i2c_register_device(adap, rd->dn);
                if (IS_ERR(client)) {
                        dev_err(&adap->dev, "failed to create client for '%pOF'\n",
index 3081559377133e4df5f55789435130e48c415d52..6b9563d4f23c94dfab6dbff02a601942a7b315d4 100644 (file)
@@ -624,22 +624,11 @@ static inline unsigned short cma_family(struct rdma_id_private *id_priv)
        return id_priv->id.route.addr.src_addr.ss_family;
 }
 
-static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
+static int cma_set_default_qkey(struct rdma_id_private *id_priv)
 {
        struct ib_sa_mcmember_rec rec;
        int ret = 0;
 
-       if (id_priv->qkey) {
-               if (qkey && id_priv->qkey != qkey)
-                       return -EINVAL;
-               return 0;
-       }
-
-       if (qkey) {
-               id_priv->qkey = qkey;
-               return 0;
-       }
-
        switch (id_priv->id.ps) {
        case RDMA_PS_UDP:
        case RDMA_PS_IB:
@@ -659,6 +648,16 @@ static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
        return ret;
 }
 
+static int cma_set_qkey(struct rdma_id_private *id_priv, u32 qkey)
+{
+       if (!qkey ||
+           (id_priv->qkey && (id_priv->qkey != qkey)))
+               return -EINVAL;
+
+       id_priv->qkey = qkey;
+       return 0;
+}
+
 static void cma_translate_ib(struct sockaddr_ib *sib, struct rdma_dev_addr *dev_addr)
 {
        dev_addr->dev_type = ARPHRD_INFINIBAND;
@@ -1229,7 +1228,7 @@ static int cma_ib_init_qp_attr(struct rdma_id_private *id_priv,
        *qp_attr_mask = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT;
 
        if (id_priv->id.qp_type == IB_QPT_UD) {
-               ret = cma_set_qkey(id_priv, 0);
+               ret = cma_set_default_qkey(id_priv);
                if (ret)
                        return ret;
 
@@ -4569,7 +4568,10 @@ static int cma_send_sidr_rep(struct rdma_id_private *id_priv,
        memset(&rep, 0, sizeof rep);
        rep.status = status;
        if (status == IB_SIDR_SUCCESS) {
-               ret = cma_set_qkey(id_priv, qkey);
+               if (qkey)
+                       ret = cma_set_qkey(id_priv, qkey);
+               else
+                       ret = cma_set_default_qkey(id_priv);
                if (ret)
                        return ret;
                rep.qp_num = id_priv->qp_num;
@@ -4774,9 +4776,7 @@ static void cma_make_mc_event(int status, struct rdma_id_private *id_priv,
        enum ib_gid_type gid_type;
        struct net_device *ndev;
 
-       if (!status)
-               status = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey));
-       else
+       if (status)
                pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to join multicast. status %d\n",
                                     status);
 
@@ -4804,7 +4804,7 @@ static void cma_make_mc_event(int status, struct rdma_id_private *id_priv,
        }
 
        event->param.ud.qp_num = 0xFFFFFF;
-       event->param.ud.qkey = be32_to_cpu(multicast->rec.qkey);
+       event->param.ud.qkey = id_priv->qkey;
 
 out:
        if (ndev)
@@ -4823,8 +4823,11 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
            READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING)
                goto out;
 
-       cma_make_mc_event(status, id_priv, multicast, &event, mc);
-       ret = cma_cm_event_handler(id_priv, &event);
+       ret = cma_set_qkey(id_priv, be32_to_cpu(multicast->rec.qkey));
+       if (!ret) {
+               cma_make_mc_event(status, id_priv, multicast, &event, mc);
+               ret = cma_cm_event_handler(id_priv, &event);
+       }
        rdma_destroy_ah_attr(&event.param.ud.ah_attr);
        WARN_ON(ret);
 
@@ -4877,9 +4880,11 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
        if (ret)
                return ret;
 
-       ret = cma_set_qkey(id_priv, 0);
-       if (ret)
-               return ret;
+       if (!id_priv->qkey) {
+               ret = cma_set_default_qkey(id_priv);
+               if (ret)
+                       return ret;
+       }
 
        cma_set_mgid(id_priv, (struct sockaddr *) &mc->addr, &rec.mgid);
        rec.qkey = cpu_to_be32(id_priv->qkey);
@@ -4956,9 +4961,6 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
        cma_iboe_set_mgid(addr, &ib.rec.mgid, gid_type);
 
        ib.rec.pkey = cpu_to_be16(0xffff);
-       if (id_priv->id.ps == RDMA_PS_UDP)
-               ib.rec.qkey = cpu_to_be32(RDMA_UDP_QKEY);
-
        if (dev_addr->bound_dev_if)
                ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
        if (!ndev)
@@ -4984,6 +4986,9 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
        if (err || !ib.rec.mtu)
                return err ?: -EINVAL;
 
+       if (!id_priv->qkey)
+               cma_set_default_qkey(id_priv);
+
        rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr,
                    &ib.rec.port_gid);
        INIT_WORK(&mc->iboe_join.work, cma_iboe_join_work_handler);
@@ -5009,6 +5014,9 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
                            READ_ONCE(id_priv->state) != RDMA_CM_ADDR_RESOLVED))
                return -EINVAL;
 
+       if (id_priv->id.qp_type != IB_QPT_UD)
+               return -EINVAL;
+
        mc = kzalloc(sizeof(*mc), GFP_KERNEL);
        if (!mc)
                return -ENOMEM;
index 11b1c1603aeb44e6d0dcbc3904a57a5bf314d03e..b99b3cc283b650b9d001bccbfe0bcfdee6100573 100644 (file)
@@ -532,6 +532,8 @@ static struct ib_ah *_rdma_create_ah(struct ib_pd *pd,
        else
                ret = device->ops.create_ah(ah, &init_attr, NULL);
        if (ret) {
+               if (ah->sgid_attr)
+                       rdma_put_gid_attr(ah->sgid_attr);
                kfree(ah);
                return ERR_PTR(ret);
        }
index cabd8678b3558963b3069304ba5372ea0233b3a0..7bc354273d4ec00542efeeca47c384aa7431ad43 100644 (file)
@@ -65,7 +65,7 @@ static const enum ib_wc_opcode wc_mapping_table[ERDMA_NUM_OPCODES] = {
        [ERDMA_OP_LOCAL_INV] = IB_WC_LOCAL_INV,
        [ERDMA_OP_READ_WITH_INV] = IB_WC_RDMA_READ,
        [ERDMA_OP_ATOMIC_CAS] = IB_WC_COMP_SWAP,
-       [ERDMA_OP_ATOMIC_FAD] = IB_WC_FETCH_ADD,
+       [ERDMA_OP_ATOMIC_FAA] = IB_WC_FETCH_ADD,
 };
 
 static const struct {
index 4c38d99c73f1cb1b7044e94d9999da51605fe5d7..37ad1bb1917c4931dd74b1fb3e2ab55784aa8d73 100644 (file)
@@ -441,7 +441,7 @@ struct erdma_reg_mr_sqe {
 };
 
 /* EQ related. */
-#define ERDMA_DEFAULT_EQ_DEPTH 256
+#define ERDMA_DEFAULT_EQ_DEPTH 4096
 
 /* ceqe */
 #define ERDMA_CEQE_HDR_DB_MASK BIT_ULL(63)
@@ -491,7 +491,7 @@ enum erdma_opcode {
        ERDMA_OP_LOCAL_INV = 15,
        ERDMA_OP_READ_WITH_INV = 16,
        ERDMA_OP_ATOMIC_CAS = 17,
-       ERDMA_OP_ATOMIC_FAD = 18,
+       ERDMA_OP_ATOMIC_FAA = 18,
        ERDMA_NUM_OPCODES = 19,
        ERDMA_OP_INVALID = ERDMA_NUM_OPCODES + 1
 };
index 5dc31e5df5cba78ca1025120186fbe891e4551c3..4a29a53a6652eacde0bf5619a3b63850cf9faf2a 100644 (file)
@@ -56,7 +56,7 @@ done:
 static int erdma_enum_and_get_netdev(struct erdma_dev *dev)
 {
        struct net_device *netdev;
-       int ret = -ENODEV;
+       int ret = -EPROBE_DEFER;
 
        /* Already bound to a net_device, so we skip. */
        if (dev->netdev)
index d088d6bef431afa8c6936219dd2f60a6ce46942e..44923c51a01b44d7dc322fd472e0a0988d8dec2b 100644 (file)
@@ -405,7 +405,7 @@ static int erdma_push_one_sqe(struct erdma_qp *qp, u16 *pi,
                        FIELD_PREP(ERDMA_SQE_MR_MTT_CNT_MASK,
                                   mr->mem.mtt_nents);
 
-               if (mr->mem.mtt_nents < ERDMA_MAX_INLINE_MTT_ENTRIES) {
+               if (mr->mem.mtt_nents <= ERDMA_MAX_INLINE_MTT_ENTRIES) {
                        attrs |= FIELD_PREP(ERDMA_SQE_MR_MTT_TYPE_MASK, 0);
                        /* Copy SGLs to SQE content to accelerate */
                        memcpy(get_queue_entry(qp->kern_qp.sq_buf, idx + 1,
@@ -439,7 +439,7 @@ static int erdma_push_one_sqe(struct erdma_qp *qp, u16 *pi,
                                cpu_to_le64(atomic_wr(send_wr)->compare_add);
                } else {
                        wqe_hdr |= FIELD_PREP(ERDMA_SQE_HDR_OPCODE_MASK,
-                                             ERDMA_OP_ATOMIC_FAD);
+                                             ERDMA_OP_ATOMIC_FAA);
                        atomic_sqe->fetchadd_swap_data =
                                cpu_to_le64(atomic_wr(send_wr)->compare_add);
                }
index e0a993bc032a44aaa76e399ead33ccf8f6f0734e..131cf5f409822cb50daa57a8a24b17d4136dd393 100644 (file)
@@ -11,7 +11,7 @@
 
 /* RDMA Capability. */
 #define ERDMA_MAX_PD (128 * 1024)
-#define ERDMA_MAX_SEND_WR 4096
+#define ERDMA_MAX_SEND_WR 8192
 #define ERDMA_MAX_ORD 128
 #define ERDMA_MAX_IRD 128
 #define ERDMA_MAX_SGE_RD 1
index 195aa9ea18b6ca74c24dc7c45a24d693cedece9b..8817864154af1f079eda127f648085b2d9107999 100644 (file)
@@ -1458,13 +1458,15 @@ static int irdma_send_fin(struct irdma_cm_node *cm_node)
  * irdma_find_listener - find a cm node listening on this addr-port pair
  * @cm_core: cm's core
  * @dst_addr: listener ip addr
+ * @ipv4: flag indicating IPv4 when true
  * @dst_port: listener tcp port num
  * @vlan_id: virtual LAN ID
  * @listener_state: state to match with listen node's
  */
 static struct irdma_cm_listener *
-irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, u16 dst_port,
-                   u16 vlan_id, enum irdma_cm_listener_state listener_state)
+irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, bool ipv4,
+                   u16 dst_port, u16 vlan_id,
+                   enum irdma_cm_listener_state listener_state)
 {
        struct irdma_cm_listener *listen_node;
        static const u32 ip_zero[4] = { 0, 0, 0, 0 };
@@ -1477,7 +1479,7 @@ irdma_find_listener(struct irdma_cm_core *cm_core, u32 *dst_addr, u16 dst_port,
        list_for_each_entry (listen_node, &cm_core->listen_list, list) {
                memcpy(listen_addr, listen_node->loc_addr, sizeof(listen_addr));
                listen_port = listen_node->loc_port;
-               if (listen_port != dst_port ||
+               if (listen_node->ipv4 != ipv4 || listen_port != dst_port ||
                    !(listener_state & listen_node->listener_state))
                        continue;
                /* compare node pair, return node handle if a match */
@@ -2902,9 +2904,10 @@ irdma_make_listen_node(struct irdma_cm_core *cm_core,
        unsigned long flags;
 
        /* cannot have multiple matching listeners */
-       listener = irdma_find_listener(cm_core, cm_info->loc_addr,
-                                      cm_info->loc_port, cm_info->vlan_id,
-                                      IRDMA_CM_LISTENER_EITHER_STATE);
+       listener =
+               irdma_find_listener(cm_core, cm_info->loc_addr, cm_info->ipv4,
+                                   cm_info->loc_port, cm_info->vlan_id,
+                                   IRDMA_CM_LISTENER_EITHER_STATE);
        if (listener &&
            listener->listener_state == IRDMA_CM_LISTENER_ACTIVE_STATE) {
                refcount_dec(&listener->refcnt);
@@ -3153,6 +3156,7 @@ void irdma_receive_ilq(struct irdma_sc_vsi *vsi, struct irdma_puda_buf *rbuf)
 
                listener = irdma_find_listener(cm_core,
                                               cm_info.loc_addr,
+                                              cm_info.ipv4,
                                               cm_info.loc_port,
                                               cm_info.vlan_id,
                                               IRDMA_CM_LISTENER_ACTIVE_STATE);
index 19c284975fc7c874cc62d1253d4b60664c1a8aa9..7feadb3e1eda343c1fcfc1d6a47ff035317ae94c 100644 (file)
@@ -41,7 +41,7 @@
 #define TCP_OPTIONS_PADDING    3
 
 #define IRDMA_DEFAULT_RETRYS   64
-#define IRDMA_DEFAULT_RETRANS  8
+#define IRDMA_DEFAULT_RETRANS  32
 #define IRDMA_DEFAULT_TTL              0x40
 #define IRDMA_DEFAULT_RTT_VAR          6
 #define IRDMA_DEFAULT_SS_THRESH                0x3fffffff
index 2e1e2bad04011a42e4e025eb46774d2feaf7307b..43dfa4761f0698e5530ae403dcc68a9cff85a9cf 100644 (file)
@@ -41,6 +41,7 @@ static enum irdma_hmc_rsrc_type iw_hmc_obj_types[] = {
        IRDMA_HMC_IW_XFFL,
        IRDMA_HMC_IW_Q1,
        IRDMA_HMC_IW_Q1FL,
+       IRDMA_HMC_IW_PBLE,
        IRDMA_HMC_IW_TIMER,
        IRDMA_HMC_IW_FSIMC,
        IRDMA_HMC_IW_FSIAV,
@@ -827,6 +828,8 @@ static int irdma_create_hmc_objs(struct irdma_pci_f *rf, bool privileged,
        info.entry_type = rf->sd_type;
 
        for (i = 0; i < IW_HMC_OBJ_TYPE_NUM; i++) {
+               if (iw_hmc_obj_types[i] == IRDMA_HMC_IW_PBLE)
+                       continue;
                if (dev->hmc_info->hmc_obj[iw_hmc_obj_types[i]].cnt) {
                        info.rsrc_type = iw_hmc_obj_types[i];
                        info.count = dev->hmc_info->hmc_obj[info.rsrc_type].cnt;
index 445e69e864097ef85adbf351387eccd08150ffcb..7887230c867b1efd784ff5415e9704eff96044c6 100644 (file)
@@ -2595,7 +2595,10 @@ void irdma_generate_flush_completions(struct irdma_qp *iwqp)
                        /* remove the SQ WR by moving SQ tail */
                        IRDMA_RING_SET_TAIL(*sq_ring,
                                sq_ring->tail + qp->sq_wrtrk_array[sq_ring->tail].quanta);
-
+                       if (cmpl->cpi.op_type == IRDMAQP_OP_NOP) {
+                               kfree(cmpl);
+                               continue;
+                       }
                        ibdev_dbg(iwqp->iwscq->ibcq.device,
                                  "DEV: %s: adding wr_id = 0x%llx SQ Completion to list qp_id=%d\n",
                                  __func__, cmpl->cpi.wr_id, qp->qp_id);
index 5b988db66b8fdb90c8c9303bbe29d4d3ec6f0a54..5d45de223c43a9afed62a70265b9e32d75db4e10 100644 (file)
@@ -442,6 +442,10 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
                *active_width = IB_WIDTH_2X;
                *active_speed = IB_SPEED_NDR;
                break;
+       case MLX5E_PROT_MASK(MLX5E_400GAUI_8):
+               *active_width = IB_WIDTH_8X;
+               *active_speed = IB_SPEED_HDR;
+               break;
        case MLX5E_PROT_MASK(MLX5E_400GAUI_4_400GBASE_CR4_KR4):
                *active_width = IB_WIDTH_4X;
                *active_speed = IB_SPEED_NDR;
index 1e94e7d10b8be64172d222b3136948133391d1d3..a0a1194dc1d902579f79b7befdad1d61b4a671a4 100644 (file)
@@ -153,7 +153,7 @@ static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos,
                                mtdblk->cache_state = STATE_EMPTY;
                                ret = mtd_read(mtd, sect_start, sect_size,
                                               &retlen, mtdblk->cache_data);
-                               if (ret)
+                               if (ret && !mtd_is_bitflip(ret))
                                        return ret;
                                if (retlen != sect_size)
                                        return -EIO;
@@ -188,8 +188,12 @@ static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos,
        pr_debug("mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n",
                        mtd->name, pos, len);
 
-       if (!sect_size)
-               return mtd_read(mtd, pos, len, &retlen, buf);
+       if (!sect_size) {
+               ret = mtd_read(mtd, pos, len, &retlen, buf);
+               if (ret && !mtd_is_bitflip(ret))
+                       return ret;
+               return 0;
+       }
 
        while (len > 0) {
                unsigned long sect_start = (pos/sect_size)*sect_size;
@@ -209,7 +213,7 @@ static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos,
                        memcpy (buf, mtdblk->cache_data + offset, size);
                } else {
                        ret = mtd_read(mtd, pos, size, &retlen, buf);
-                       if (ret)
+                       if (ret && !mtd_is_bitflip(ret))
                                return ret;
                        if (retlen != size)
                                return -EIO;
index a28574c009003f74b96edd0f0e31b5a4aec846d5..074e14225c06a07e0ceba491d14b5272472860d7 100644 (file)
@@ -280,7 +280,7 @@ static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir,
 
        if (raw) {
                len = mtd->writesize + mtd->oobsize;
-               cmd = (len & GENMASK(5, 0)) | scrambler | DMA_DIR(dir);
+               cmd = (len & GENMASK(13, 0)) | scrambler | DMA_DIR(dir);
                writel(cmd, nfc->reg_base + NFC_REG_CMD);
                return;
        }
@@ -544,7 +544,7 @@ static int meson_nfc_read_buf(struct nand_chip *nand, u8 *buf, int len)
        if (ret)
                goto out;
 
-       cmd = NFC_CMD_N2M | (len & GENMASK(5, 0));
+       cmd = NFC_CMD_N2M | (len & GENMASK(13, 0));
        writel(cmd, nfc->reg_base + NFC_REG_CMD);
 
        meson_nfc_drain_cmd(nfc);
@@ -568,7 +568,7 @@ static int meson_nfc_write_buf(struct nand_chip *nand, u8 *buf, int len)
        if (ret)
                return ret;
 
-       cmd = NFC_CMD_M2N | (len & GENMASK(5, 0));
+       cmd = NFC_CMD_M2N | (len & GENMASK(13, 0));
        writel(cmd, nfc->reg_base + NFC_REG_CMD);
 
        meson_nfc_drain_cmd(nfc);
index 5d627048c420de5d2516b2135736bb2aec65780b..9e74bcd90aaa2e216a74fee29476bc0b7f209e2c 100644 (file)
@@ -1531,6 +1531,9 @@ static int stm32_fmc2_nfc_setup_interface(struct nand_chip *chip, int chipnr,
        if (IS_ERR(sdrt))
                return PTR_ERR(sdrt);
 
+       if (conf->timings.mode > 3)
+               return -EOPNOTSUPP;
+
        if (chipnr == NAND_DATA_IFACE_CHECK_ONLY)
                return 0;
 
index 0904eb40c95fa133c1cdc4454beb6d4d3d90fc6a..ad025b2ee41773397f7dac35d34dbcba763eb5cf 100644 (file)
@@ -666,12 +666,6 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
        ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size);
        ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size);
 
-       if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) >
-           ubi->vid_hdr_alsize)) {
-               ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset);
-               return -EINVAL;
-       }
-
        dbg_gen("min_io_size      %d", ubi->min_io_size);
        dbg_gen("max_write_size   %d", ubi->max_write_size);
        dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size);
@@ -689,6 +683,21 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
                                                ubi->vid_hdr_aloffset;
        }
 
+       /*
+        * The VID header buffer is allocated with size ubi->vid_hdr_alsize,
+        * as described in the comments in io.c.
+        * Make sure the VID header shift + UBI_VID_HDR_SIZE does not exceed
+        * ubi->vid_hdr_alsize, so that VID header operations
+        * cannot access memory out of bounds.
+        */
+       if ((ubi->vid_hdr_shift + UBI_VID_HDR_SIZE) > ubi->vid_hdr_alsize) {
+               ubi_err(ubi, "Invalid VID header offset %d, VID header shift(%d)"
+                       " + VID header size(%zu) > VID header aligned size(%d).",
+                       ubi->vid_hdr_offset, ubi->vid_hdr_shift,
+                       UBI_VID_HDR_SIZE, ubi->vid_hdr_alsize);
+               return -EINVAL;
+       }
+
        /* Similar for the data offset */
        ubi->leb_start = ubi->vid_hdr_offset + UBI_VID_HDR_SIZE;
        ubi->leb_start = ALIGN(ubi->leb_start, ubi->min_io_size);
index 40f39e5d6dfcc068519598452bd8787c7b143c39..26a214f016c18448469c4bb3988ac09af224ab95 100644 (file)
@@ -575,7 +575,7 @@ static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
  * @vol_id: the volume ID that last used this PEB
  * @lnum: the last used logical eraseblock number for the PEB
  * @torture: if the physical eraseblock has to be tortured
- * @nested: denotes whether the work_sem is already held in read mode
+ * @nested: denotes whether the work_sem is already held
  *
  * This function returns zero in case of success and a %-ENOMEM in case of
  * failure.
@@ -1131,7 +1131,7 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
                int err1;
 
                /* Re-schedule the LEB for erasure */
-               err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
+               err1 = schedule_erase(ubi, e, vol_id, lnum, 0, true);
                if (err1) {
                        spin_lock(&ubi->wl_lock);
                        wl_entry_destroy(ubi, e);
index 236e5219c8112ce615d49c61efb1a9e88df7b39f..8cc9a74789b799da9d3764bcf1a0a6d6d087b45d 100644 (file)
@@ -3269,7 +3269,8 @@ static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond,
 
        combined = skb_header_pointer(skb, 0, sizeof(_combined), &_combined);
        if (!combined || combined->ip6.nexthdr != NEXTHDR_ICMP ||
-           combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT)
+           (combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_SOLICITATION &&
+            combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT))
                goto out;
 
        saddr = &combined->ip6.saddr;
@@ -3291,7 +3292,7 @@ static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond,
        else if (curr_active_slave &&
                 time_after(slave_last_rx(bond, curr_active_slave),
                            curr_active_slave->last_link_up))
-               bond_validate_na(bond, slave, saddr, daddr);
+               bond_validate_na(bond, slave, daddr, saddr);
        else if (curr_arp_slave &&
                 bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1))
                bond_validate_na(bond, slave, saddr, daddr);
index 66e30561569eb3aa660fdf79a7c5b2f8af49e76f..e43d99ec50ba2c9c547bdfa1daed5b894fee2992 100644 (file)
@@ -1064,6 +1064,10 @@ static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
        }
 #endif
        addr |= MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
+#ifdef CONFIG_MACB_USE_HWSTAMP
+       if (bp->hw_dma_cap & HW_DMA_CAP_PTP)
+               addr &= ~GEM_BIT(DMA_RXVALID);
+#endif
        return addr;
 }
 
index da9d4b310fcdd9b4211c649328ca5dbf8ba2ba49..838750a03cf68a3d4da7d6c3be7cc45fc3c1a708 100644 (file)
@@ -989,6 +989,20 @@ static int enetc_get_mm(struct net_device *ndev, struct ethtool_mm_state *state)
        return 0;
 }
 
+/* FIXME: Workaround for the link partner's verification failing if ENETC
+ * previously received too much express traffic. The documentation doesn't
+ * suggest this is needed.
+ */
+ */
+static void enetc_restart_emac_rx(struct enetc_si *si)
+{
+       u32 val = enetc_port_rd(&si->hw, ENETC_PM0_CMD_CFG);
+
+       enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val & ~ENETC_PM0_RX_EN);
+
+       if (val & ENETC_PM0_RX_EN)
+               enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val);
+}
+
 static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
                        struct netlink_ext_ack *extack)
 {
@@ -1040,6 +1054,8 @@ static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg,
 
        enetc_port_wr(hw, ENETC_MMCSR, val);
 
+       enetc_restart_emac_rx(priv->si);
+
        mutex_unlock(&priv->mm_lock);
 
        return 0;
index 232bc61d9eee9ce6c2eb7a6a3c758ac22cfe20b3..746ff76f2fb1e648882b9a393415823be12d0c9c 100644 (file)
@@ -59,8 +59,6 @@ enum iavf_vsi_state_t {
 struct iavf_vsi {
        struct iavf_adapter *back;
        struct net_device *netdev;
-       unsigned long active_cvlans[BITS_TO_LONGS(VLAN_N_VID)];
-       unsigned long active_svlans[BITS_TO_LONGS(VLAN_N_VID)];
        u16 seid;
        u16 id;
        DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__);
@@ -158,15 +156,20 @@ struct iavf_vlan {
        u16 tpid;
 };
 
+enum iavf_vlan_state_t {
+       IAVF_VLAN_INVALID,
+       IAVF_VLAN_ADD,          /* filter needs to be added */
+       IAVF_VLAN_IS_NEW,       /* filter is new, wait for PF answer */
+       IAVF_VLAN_ACTIVE,       /* filter is accepted by PF */
+       IAVF_VLAN_DISABLE,      /* filter needs to be deleted by PF, then marked INACTIVE */
+       IAVF_VLAN_INACTIVE,     /* filter is inactive, we are in IFF_DOWN */
+       IAVF_VLAN_REMOVE,       /* filter needs to be removed from list */
+};
+
 struct iavf_vlan_filter {
        struct list_head list;
        struct iavf_vlan vlan;
-       struct {
-               u8 is_new_vlan:1;       /* filter is new, wait for PF answer */
-               u8 remove:1;            /* filter needs to be removed */
-               u8 add:1;               /* filter needs to be added */
-               u8 padding:5;
-       };
+       enum iavf_vlan_state_t state;
 };
 
 #define IAVF_MAX_TRAFFIC_CLASS 4
@@ -258,6 +261,7 @@ struct iavf_adapter {
        wait_queue_head_t vc_waitqueue;
        struct iavf_q_vector *q_vectors;
        struct list_head vlan_filter_list;
+       int num_vlan_filters;
        struct list_head mac_filter_list;
        struct mutex crit_lock;
        struct mutex client_lock;
index 095201e83c9db002fc0667c24635dd68790f1a9a..2de4baff4c20501baab55b7ffddb84b6f8e20aee 100644 (file)
@@ -791,7 +791,8 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
                f->vlan = vlan;
 
                list_add_tail(&f->list, &adapter->vlan_filter_list);
-               f->add = true;
+               f->state = IAVF_VLAN_ADD;
+               adapter->num_vlan_filters++;
                adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
        }
 
@@ -813,7 +814,7 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
 
        f = iavf_find_vlan(adapter, vlan);
        if (f) {
-               f->remove = true;
+               f->state = IAVF_VLAN_REMOVE;
                adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
        }
 
@@ -828,14 +829,18 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
  **/
 static void iavf_restore_filters(struct iavf_adapter *adapter)
 {
-       u16 vid;
+       struct iavf_vlan_filter *f;
 
        /* re-add all VLAN filters */
-       for_each_set_bit(vid, adapter->vsi.active_cvlans, VLAN_N_VID)
-               iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021Q));
+       spin_lock_bh(&adapter->mac_vlan_list_lock);
 
-       for_each_set_bit(vid, adapter->vsi.active_svlans, VLAN_N_VID)
-               iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD));
+       list_for_each_entry(f, &adapter->vlan_filter_list, list) {
+               if (f->state == IAVF_VLAN_INACTIVE)
+                       f->state = IAVF_VLAN_ADD;
+       }
+
+       spin_unlock_bh(&adapter->mac_vlan_list_lock);
+       adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
 }
 
 /**
@@ -844,8 +849,7 @@ static void iavf_restore_filters(struct iavf_adapter *adapter)
  */
 u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter)
 {
-       return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) +
-               bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID);
+       return adapter->num_vlan_filters;
 }
 
 /**
@@ -928,11 +932,6 @@ static int iavf_vlan_rx_kill_vid(struct net_device *netdev,
                return 0;
 
        iavf_del_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)));
-       if (proto == cpu_to_be16(ETH_P_8021Q))
-               clear_bit(vid, adapter->vsi.active_cvlans);
-       else
-               clear_bit(vid, adapter->vsi.active_svlans);
-
        return 0;
 }
 
@@ -1293,16 +1292,11 @@ static void iavf_clear_mac_vlan_filters(struct iavf_adapter *adapter)
                }
        }
 
-       /* remove all VLAN filters */
+       /* disable all VLAN filters */
        list_for_each_entry_safe(vlf, vlftmp, &adapter->vlan_filter_list,
-                                list) {
-               if (vlf->add) {
-                       list_del(&vlf->list);
-                       kfree(vlf);
-               } else {
-                       vlf->remove = true;
-               }
-       }
+                                list)
+               vlf->state = IAVF_VLAN_DISABLE;
+
        spin_unlock_bh(&adapter->mac_vlan_list_lock);
 }
 
@@ -2914,6 +2908,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
                list_del(&fv->list);
                kfree(fv);
        }
+       adapter->num_vlan_filters = 0;
 
        spin_unlock_bh(&adapter->mac_vlan_list_lock);
 
@@ -3131,9 +3126,6 @@ continue_reset:
        adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER;
        iavf_misc_irq_enable(adapter);
 
-       bitmap_clear(adapter->vsi.active_cvlans, 0, VLAN_N_VID);
-       bitmap_clear(adapter->vsi.active_svlans, 0, VLAN_N_VID);
-
        mod_delayed_work(adapter->wq, &adapter->watchdog_task, 2);
 
        /* We were running when the reset started, so we need to restore some
index 4e17d006c52d46930a5a349928ddf7c8a6c81af5..9afbbdac35903f1ae018c00fe28213ece3227d3f 100644 (file)
@@ -642,16 +642,10 @@ static void iavf_vlan_add_reject(struct iavf_adapter *adapter)
 
        spin_lock_bh(&adapter->mac_vlan_list_lock);
        list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
-               if (f->is_new_vlan) {
-                       if (f->vlan.tpid == ETH_P_8021Q)
-                               clear_bit(f->vlan.vid,
-                                         adapter->vsi.active_cvlans);
-                       else
-                               clear_bit(f->vlan.vid,
-                                         adapter->vsi.active_svlans);
-
+               if (f->state == IAVF_VLAN_IS_NEW) {
                        list_del(&f->list);
                        kfree(f);
+                       adapter->num_vlan_filters--;
                }
        }
        spin_unlock_bh(&adapter->mac_vlan_list_lock);
@@ -679,7 +673,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
        spin_lock_bh(&adapter->mac_vlan_list_lock);
 
        list_for_each_entry(f, &adapter->vlan_filter_list, list) {
-               if (f->add)
+               if (f->state == IAVF_VLAN_ADD)
                        count++;
        }
        if (!count || !VLAN_FILTERING_ALLOWED(adapter)) {
@@ -710,11 +704,10 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
                vvfl->vsi_id = adapter->vsi_res->vsi_id;
                vvfl->num_elements = count;
                list_for_each_entry(f, &adapter->vlan_filter_list, list) {
-                       if (f->add) {
+                       if (f->state == IAVF_VLAN_ADD) {
                                vvfl->vlan_id[i] = f->vlan.vid;
                                i++;
-                               f->add = false;
-                               f->is_new_vlan = true;
+                               f->state = IAVF_VLAN_IS_NEW;
                                if (i == count)
                                        break;
                        }
@@ -760,7 +753,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
                vvfl_v2->vport_id = adapter->vsi_res->vsi_id;
                vvfl_v2->num_elements = count;
                list_for_each_entry(f, &adapter->vlan_filter_list, list) {
-                       if (f->add) {
+                       if (f->state == IAVF_VLAN_ADD) {
                                struct virtchnl_vlan_supported_caps *filtering_support =
                                        &adapter->vlan_v2_caps.filtering.filtering_support;
                                struct virtchnl_vlan *vlan;
@@ -778,8 +771,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter)
                                vlan->tpid = f->vlan.tpid;
 
                                i++;
-                               f->add = false;
-                               f->is_new_vlan = true;
+                               f->state = IAVF_VLAN_IS_NEW;
                        }
                }
 
@@ -822,10 +814,16 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
                 * filters marked for removal to enable bailing out before
                 * sending a virtchnl message
                 */
-               if (f->remove && !VLAN_FILTERING_ALLOWED(adapter)) {
+               if (f->state == IAVF_VLAN_REMOVE &&
+                   !VLAN_FILTERING_ALLOWED(adapter)) {
                        list_del(&f->list);
                        kfree(f);
-               } else if (f->remove) {
+                       adapter->num_vlan_filters--;
+               } else if (f->state == IAVF_VLAN_DISABLE &&
+                   !VLAN_FILTERING_ALLOWED(adapter)) {
+                       f->state = IAVF_VLAN_INACTIVE;
+               } else if (f->state == IAVF_VLAN_REMOVE ||
+                          f->state == IAVF_VLAN_DISABLE) {
                        count++;
                }
        }
@@ -857,11 +855,18 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
                vvfl->vsi_id = adapter->vsi_res->vsi_id;
                vvfl->num_elements = count;
                list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
-                       if (f->remove) {
+                       if (f->state == IAVF_VLAN_DISABLE) {
                                vvfl->vlan_id[i] = f->vlan.vid;
+                               f->state = IAVF_VLAN_INACTIVE;
                                i++;
+                               if (i == count)
+                                       break;
+                       } else if (f->state == IAVF_VLAN_REMOVE) {
+                               vvfl->vlan_id[i] = f->vlan.vid;
                                list_del(&f->list);
                                kfree(f);
+                               adapter->num_vlan_filters--;
+                               i++;
                                if (i == count)
                                        break;
                        }
@@ -901,7 +906,8 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
                vvfl_v2->vport_id = adapter->vsi_res->vsi_id;
                vvfl_v2->num_elements = count;
                list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
-                       if (f->remove) {
+                       if (f->state == IAVF_VLAN_DISABLE ||
+                           f->state == IAVF_VLAN_REMOVE) {
                                struct virtchnl_vlan_supported_caps *filtering_support =
                                        &adapter->vlan_v2_caps.filtering.filtering_support;
                                struct virtchnl_vlan *vlan;
@@ -915,8 +921,13 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
                                vlan->tci = f->vlan.vid;
                                vlan->tpid = f->vlan.tpid;
 
-                               list_del(&f->list);
-                               kfree(f);
+                               if (f->state == IAVF_VLAN_DISABLE) {
+                                       f->state = IAVF_VLAN_INACTIVE;
+                               } else {
+                                       list_del(&f->list);
+                                       kfree(f);
+                                       adapter->num_vlan_filters--;
+                               }
                                i++;
                                if (i == count)
                                        break;
@@ -2192,7 +2203,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                                list_for_each_entry(vlf,
                                                    &adapter->vlan_filter_list,
                                                    list)
-                                       vlf->add = true;
+                                       vlf->state = IAVF_VLAN_ADD;
 
                                adapter->aq_required |=
                                        IAVF_FLAG_AQ_ADD_VLAN_FILTER;
@@ -2260,7 +2271,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
                                list_for_each_entry(vlf,
                                                    &adapter->vlan_filter_list,
                                                    list)
-                                       vlf->add = true;
+                                       vlf->state = IAVF_VLAN_ADD;
 
                                aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
                        }
@@ -2444,15 +2455,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
 
                spin_lock_bh(&adapter->mac_vlan_list_lock);
                list_for_each_entry(f, &adapter->vlan_filter_list, list) {
-                       if (f->is_new_vlan) {
-                               f->is_new_vlan = false;
-                               if (f->vlan.tpid == ETH_P_8021Q)
-                                       set_bit(f->vlan.vid,
-                                               adapter->vsi.active_cvlans);
-                               else
-                                       set_bit(f->vlan.vid,
-                                               adapter->vsi.active_svlans);
-                       }
+                       if (f->state == IAVF_VLAN_IS_NEW)
+                               f->state = IAVF_VLAN_ACTIVE;
                }
                spin_unlock_bh(&adapter->mac_vlan_list_lock);
                }
index 4b5e459b6d49fee01508e0fff1e203ba3d1cee74..332472fe49902e3f5ee17faf8ca4b5bcd2ac4cf7 100644 (file)
@@ -681,14 +681,32 @@ int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
        return 0;
 }
 
-int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
+                       enum xdp_rss_hash_type *rss_type)
 {
        struct mlx4_en_xdp_buff *_ctx = (void *)ctx;
+       struct mlx4_cqe *cqe = _ctx->cqe;
+       enum xdp_rss_hash_type xht = 0;
+       __be16 status;
 
        if (unlikely(!(_ctx->dev->features & NETIF_F_RXHASH)))
                return -ENODATA;
 
-       *hash = be32_to_cpu(_ctx->cqe->immed_rss_invalid);
+       *hash = be32_to_cpu(cqe->immed_rss_invalid);
+       status = cqe->status;
+       if (status & cpu_to_be16(MLX4_CQE_STATUS_TCP))
+               xht = XDP_RSS_L4_TCP;
+       if (status & cpu_to_be16(MLX4_CQE_STATUS_UDP))
+               xht = XDP_RSS_L4_UDP;
+       if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV4 | MLX4_CQE_STATUS_IPV4F))
+               xht |= XDP_RSS_L3_IPV4;
+       if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV6)) {
+               xht |= XDP_RSS_L3_IPV6;
+               if (cqe->ipv6_ext_mask)
+                       xht |= XDP_RSS_L3_DYNHDR;
+       }
+       *rss_type = xht;
+
        return 0;
 }
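The mlx4 hunk above builds an `xdp_rss_hash_type` by OR-ing independent L3 and L4 flag groups derived from CQE status bits. A minimal standalone sketch of that bit-composition idea, using hypothetical flag and status values rather than the real mlx4/XDP definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the XDP_RSS_* flags: the L4 protocol and
 * L3 family occupy independent bit groups that are OR'ed together.
 */
enum rss_type {
	RSS_L4_TCP  = 1u << 0,
	RSS_L4_UDP  = 1u << 1,
	RSS_L3_IPV4 = 1u << 4,
	RSS_L3_IPV6 = 1u << 5,
};

/* Hypothetical CQE status bits (the real ones live in mlx4 headers). */
#define STATUS_TCP  (1u << 0)
#define STATUS_UDP  (1u << 1)
#define STATUS_IPV4 (1u << 2)
#define STATUS_IPV6 (1u << 3)

static uint32_t rss_type_from_status(uint16_t status)
{
	uint32_t xht = 0;

	/* L4 protocols are mutually exclusive, so assign rather than OR. */
	if (status & STATUS_TCP)
		xht = RSS_L4_TCP;
	if (status & STATUS_UDP)
		xht = RSS_L4_UDP;
	/* The L3 family is OR'ed on top of the L4 protocol. */
	if (status & STATUS_IPV4)
		xht |= RSS_L3_IPV4;
	if (status & STATUS_IPV6)
		xht |= RSS_L3_IPV6;
	return xht;
}
```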
 
index 544e09b97483cb4a09ff8957f12e35001c802d77..4ac4d883047b1c7b12317c62e705382c7e4d2d12 100644 (file)
@@ -798,7 +798,8 @@ int mlx4_en_netdev_event(struct notifier_block *this,
 
 struct xdp_md;
 int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp);
-int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash);
+int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
+                       enum xdp_rss_hash_type *rss_type);
 
 /*
  * Functions for time stamping
index c5dae48b7932f7a6216f55cb3b14faa7d167e1c8..d9d3b9e1f15aa7e31a63df3226cb66b65062d2e3 100644 (file)
@@ -34,6 +34,7 @@
 #include <net/xdp_sock_drv.h>
 #include "en/xdp.h"
 #include "en/params.h"
+#include <linux/bitfield.h>
 
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
 {
@@ -169,14 +170,72 @@ static int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
        return 0;
 }
 
-static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+/* Map the HW RSS type bits CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4 into 4 bits */
+#define RSS_TYPE_MAX_TABLE     16 /* 4 bits, max 16 entries */
+#define RSS_L4         GENMASK(1, 0)
+#define RSS_L3         GENMASK(3, 2) /* Same as CQE_RSS_HTYPE_IP */
+
+/* Valid combinations of CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4, sorted numerically */
+enum mlx5_rss_hash_type {
+       RSS_TYPE_NO_HASH        = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IP_NONE) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
+       RSS_TYPE_L3_IPV4        = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
+       RSS_TYPE_L4_IPV4_TCP    = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)),
+       RSS_TYPE_L4_IPV4_UDP    = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)),
+       RSS_TYPE_L4_IPV4_IPSEC  = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)),
+       RSS_TYPE_L3_IPV6        = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)),
+       RSS_TYPE_L4_IPV6_TCP    = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)),
+       RSS_TYPE_L4_IPV6_UDP    = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)),
+       RSS_TYPE_L4_IPV6_IPSEC  = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) |
+                                  FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)),
+};
+
+/* Invalid combinations simply return zero, so no bounds checking is needed */
+static const enum xdp_rss_hash_type mlx5_xdp_rss_type[RSS_TYPE_MAX_TABLE] = {
+       [RSS_TYPE_NO_HASH]       = XDP_RSS_TYPE_NONE,
+       [1]                      = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [2]                      = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [3]                      = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [RSS_TYPE_L3_IPV4]       = XDP_RSS_TYPE_L3_IPV4,
+       [RSS_TYPE_L4_IPV4_TCP]   = XDP_RSS_TYPE_L4_IPV4_TCP,
+       [RSS_TYPE_L4_IPV4_UDP]   = XDP_RSS_TYPE_L4_IPV4_UDP,
+       [RSS_TYPE_L4_IPV4_IPSEC] = XDP_RSS_TYPE_L4_IPV4_IPSEC,
+       [RSS_TYPE_L3_IPV6]       = XDP_RSS_TYPE_L3_IPV6,
+       [RSS_TYPE_L4_IPV6_TCP]   = XDP_RSS_TYPE_L4_IPV6_TCP,
+       [RSS_TYPE_L4_IPV6_UDP]   = XDP_RSS_TYPE_L4_IPV6_UDP,
+       [RSS_TYPE_L4_IPV6_IPSEC] = XDP_RSS_TYPE_L4_IPV6_IPSEC,
+       [12]                     = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [13]                     = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [14]                     = XDP_RSS_TYPE_NONE, /* Implicit zero */
+       [15]                     = XDP_RSS_TYPE_NONE, /* Implicit zero */
+};
+
+static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
+                            enum xdp_rss_hash_type *rss_type)
 {
        const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
+       const struct mlx5_cqe64 *cqe = _ctx->cqe;
+       u32 hash_type, l4_type, ip_type, lookup;
 
        if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH)))
                return -ENODATA;
 
-       *hash = be32_to_cpu(_ctx->cqe->rss_hash_result);
+       *hash = be32_to_cpu(cqe->rss_hash_result);
+
+       hash_type = cqe->rss_hash_type;
+       BUILD_BUG_ON(CQE_RSS_HTYPE_IP != RSS_L3); /* same mask */
+       ip_type = hash_type & CQE_RSS_HTYPE_IP;
+       l4_type = FIELD_GET(CQE_RSS_HTYPE_L4, hash_type);
+       lookup = ip_type | l4_type;
+       *rss_type = mlx5_xdp_rss_type[lookup];
+
        return 0;
 }
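The mlx5 hunk avoids branching by packing the two CQE hash-type fields into a 4-bit key and indexing a 16-entry table, where invalid keys fall through to zero. A userspace sketch of that lookup pattern, with simplified `GENMASK()`/`FIELD_GET()` stand-ins and hypothetical field values and result codes:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's GENMASK()/FIELD_GET() helpers. */
#define GENMASK(h, l)		((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, val)	(((val) & (mask)) >> __builtin_ctz(mask))

#define RSS_L4	GENMASK(1, 0)	/* two bits of L4 type */
#define RSS_L3	GENMASK(3, 2)	/* two bits of L3 type */

/* Hypothetical field encodings, mirroring the CQE_RSS_* idea. */
enum { L4_NONE = 0, L4_TCP = 1, L4_UDP = 2 };
enum { IP_NONE = 0, IPV4 = 1, IPV6 = 2 };

/* 4-bit key -> result; unlisted (invalid) keys are implicitly zero, so
 * no bounds check is needed as long as the key fits in GENMASK(3, 0).
 */
static const int rss_table[16] = {
	[(IPV4 << 2) | L4_NONE] = 100,	/* hypothetical result codes */
	[(IPV4 << 2) | L4_TCP]  = 101,
	[(IPV6 << 2) | L4_UDP]  = 202,
};

static int lookup_rss(uint32_t hash_type)
{
	uint32_t ip = hash_type & RSS_L3;           /* already in place */
	uint32_t l4 = FIELD_GET(RSS_L4, hash_type); /* shift down to LSBs */

	return rss_table[ip | l4];
}
```

This mirrors why the hunk asserts `BUILD_BUG_ON(CQE_RSS_HTYPE_IP != RSS_L3)`: the L3 field is reused in place, only the L4 field is shifted.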
 
index 87f76bac2e463af4162eefb6eec6c39ac4add088..eb827b86ecae8130a0565cd75db3a729991c3a42 100644 (file)
@@ -628,7 +628,13 @@ int qlcnic_fw_create_ctx(struct qlcnic_adapter *dev)
        int i, err, ring;
 
        if (dev->flags & QLCNIC_NEED_FLR) {
-               pci_reset_function(dev->pdev);
+               err = pci_reset_function(dev->pdev);
+               if (err) {
+                       dev_err(&dev->pdev->dev,
+                               "Adapter reset failed (%d). Please reboot\n",
+                               err);
+                       return err;
+               }
                dev->flags &= ~QLCNIC_NEED_FLR;
        }
 
index ab8b09a9ef61d62dbd71c0ee14332dde971a7e26..7a2e767762974cf9dc63d03c17a822630f8783ca 100644 (file)
@@ -4522,7 +4522,7 @@ static int niu_alloc_channels(struct niu *np)
 
                err = niu_rbr_fill(np, rp, GFP_KERNEL);
                if (err)
-                       return err;
+                       goto out_err;
        }
 
        tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),
index 37f0b62ec5d6a3bc99914fcee1681d9e3ff1bfdb..f9cd566d1c9b588e5cd547ca4109567dc50114e4 100644 (file)
@@ -27,7 +27,7 @@
 #include <linux/of.h>
 #include <linux/of_mdio.h>
 #include <linux/of_net.h>
-#include <linux/of_device.h>
+#include <linux/of_platform.h>
 #include <linux/if_vlan.h>
 #include <linux/kmemleak.h>
 #include <linux/sys_soc.h>
index 35128dd45ffceb27a6dafea19a2c9d342feb95c1..c61e4e44a78f06ddbd28e94ce65e2af410291fe7 100644 (file)
@@ -7,6 +7,7 @@
 
 #include <linux/io.h>
 #include <linux/clk.h>
+#include <linux/platform_device.h>
 #include <linux/timer.h>
 #include <linux/module.h>
 #include <linux/irqreturn.h>
@@ -23,7 +24,7 @@
 #include <linux/of.h>
 #include <linux/of_mdio.h>
 #include <linux/of_net.h>
-#include <linux/of_device.h>
+#include <linux/of_platform.h>
 #include <linux/if_vlan.h>
 #include <linux/kmemleak.h>
 #include <linux/sys_soc.h>
index 5813b07242ce16486aea67f42cd86ffbc49d7cf9..029875a59ff89083e82e9f2b735b9585a25cea26 100644 (file)
 #define MAX_ID_PS                      2260U
 #define DEFAULT_ID_PS                  2000U
 
-#define PPM_TO_SUBNS_INC(ppb)  div_u64(GENMASK(31, 0) * (ppb) * \
+#define PPM_TO_SUBNS_INC(ppb)  div_u64(GENMASK_ULL(31, 0) * (ppb) * \
                                        PTP_CLK_PERIOD_100BT1, NSEC_PER_SEC)
 
 #define NXP_C45_SKB_CB(skb)    ((struct nxp_c45_skb_cb *)(skb)->cb)
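The nxp-c45 change above widens `GENMASK(31, 0)` to `GENMASK_ULL(31, 0)` so the multiplication in `PPM_TO_SUBNS_INC()` is done in 64 bits: on a 32-bit kernel `GENMASK()` yields a 32-bit `unsigned long`, and multiplying it by `ppb` wraps before the division. A sketch of the two behaviors using fixed-width types (the constants stand in for the macro, not the real PTP math):

```c
#include <assert.h>
#include <stdint.h>

/* Old, truncating arithmetic: 0xffffffff * ppb wraps in 32 bits. */
static uint32_t scale32(uint32_t ppb)
{
	return 0xffffffffu * ppb; /* wraps for any ppb > 1 */
}

/* GENMASK_ULL-style arithmetic: the ull constant promotes the whole
 * expression to 64 bits, so the product survives intact.
 */
static uint64_t scale64(uint32_t ppb)
{
	return 0xffffffffull * ppb;
}
```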
@@ -1337,6 +1337,17 @@ no_ptp_support:
        return ret;
 }
 
+static void nxp_c45_remove(struct phy_device *phydev)
+{
+       struct nxp_c45_phy *priv = phydev->priv;
+
+       if (priv->ptp_clock)
+               ptp_clock_unregister(priv->ptp_clock);
+
+       skb_queue_purge(&priv->tx_queue);
+       skb_queue_purge(&priv->rx_queue);
+}
+
 static struct phy_driver nxp_c45_driver[] = {
        {
                PHY_ID_MATCH_MODEL(PHY_ID_TJA_1103),
@@ -1359,6 +1370,7 @@ static struct phy_driver nxp_c45_driver[] = {
                .set_loopback           = genphy_c45_loopback,
                .get_sqi                = nxp_c45_get_sqi,
                .get_sqi_max            = nxp_c45_get_sqi_max,
+               .remove                 = nxp_c45_remove,
        },
 };
 
index 8af10bb53e57b697d13ad0a6a1c9be9f0bad3615..bf345032d450c9ae3949d1a4485c1b76bfd30f1f 100644 (file)
@@ -210,6 +210,12 @@ static const enum gpiod_flags gpio_flags[] = {
 #define SFP_PHY_ADDR           22
 #define SFP_PHY_ADDR_ROLLBALL  17
 
+/* SFP_EEPROM_BLOCK_SIZE is the number of bytes read from the EEPROM at a
+ * time. Some SFP modules, and also some Linux I2C drivers, do not like
+ * reads longer than 16 bytes.
+ */
+#define SFP_EEPROM_BLOCK_SIZE  16
+
 struct sff_data {
        unsigned int gpios;
        bool (*module_supported)(const struct sfp_eeprom_id *id);
@@ -1929,11 +1935,7 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
        u8 check;
        int ret;
 
-       /* Some SFP modules and also some Linux I2C drivers do not like reads
-        * longer than 16 bytes, so read the EEPROM in chunks of 16 bytes at
-        * a time.
-        */
-       sfp->i2c_block_size = 16;
+       sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
 
        ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base));
        if (ret < 0) {
@@ -2485,6 +2487,9 @@ static int sfp_module_eeprom(struct sfp *sfp, struct ethtool_eeprom *ee,
        unsigned int first, last, len;
        int ret;
 
+       if (!(sfp->state & SFP_F_PRESENT))
+               return -ENODEV;
+
        if (ee->len == 0)
                return -EINVAL;
 
@@ -2517,6 +2522,9 @@ static int sfp_module_eeprom_by_page(struct sfp *sfp,
                                     const struct ethtool_module_eeprom *page,
                                     struct netlink_ext_ack *extack)
 {
+       if (!(sfp->state & SFP_F_PRESENT))
+               return -ENODEV;
+
        if (page->bank) {
                NL_SET_ERR_MSG(extack, "Banks not supported");
                return -EOPNOTSUPP;
@@ -2621,6 +2629,7 @@ static struct sfp *sfp_alloc(struct device *dev)
                return ERR_PTR(-ENOMEM);
 
        sfp->dev = dev;
+       sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE;
 
        mutex_init(&sfp->sm_mutex);
        mutex_init(&sfp->st_mutex);
index decb5ba56a25941d1b5f58495d26e8320607bf43..0fc4b959edc18e6bdd156ff1ec642a7dce8c343e 100644 (file)
@@ -1943,7 +1943,7 @@ static struct rx_agg *alloc_rx_agg(struct r8152 *tp, gfp_t mflags)
        if (!rx_agg)
                return NULL;
 
-       rx_agg->page = alloc_pages(mflags | __GFP_COMP, order);
+       rx_agg->page = alloc_pages(mflags | __GFP_COMP | __GFP_NOWARN, order);
        if (!rx_agg->page)
                goto free_rx;
 
index c1178915496d80da7e8c63c91bf395444ce3194f..e1b38fbf1dd95f82085b23751a818c267f99c49e 100644 (file)
@@ -1648,14 +1648,18 @@ static int veth_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
        return 0;
 }
 
-static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash,
+                           enum xdp_rss_hash_type *rss_type)
 {
        struct veth_xdp_buff *_ctx = (void *)ctx;
+       struct sk_buff *skb = _ctx->skb;
 
-       if (!_ctx->skb)
+       if (!skb)
                return -ENODATA;
 
-       *hash = skb_get_hash(_ctx->skb);
+       *hash = skb_get_hash(skb);
+       *rss_type = skb->l4_hash ? XDP_RSS_TYPE_L4_ANY : XDP_RSS_TYPE_NONE;
+
        return 0;
 }
 
index 5bf5a93937c9c4dd898d41407f26606742735a3a..04517bd3325a2abb3871f640aab0666d53467352 100644 (file)
@@ -295,7 +295,7 @@ static int ipc_pcie_probe(struct pci_dev *pci,
        ret = dma_set_mask(ipc_pcie->dev, DMA_BIT_MASK(64));
        if (ret) {
                dev_err(ipc_pcie->dev, "Could not set PCI DMA mask: %d", ret);
-               return ret;
+               goto set_mask_fail;
        }
 
        ipc_pcie_config_aspm(ipc_pcie);
@@ -323,6 +323,7 @@ static int ipc_pcie_probe(struct pci_dev *pci,
 imem_init_fail:
        ipc_pcie_resources_release(ipc_pcie);
 resources_req_fail:
+set_mask_fail:
        pci_disable_device(pci);
 pci_enable_fail:
        kfree(ipc_pcie);
index 282d808400c5bee4dc599468148e5c86c79ee950..cd7873de312154696079e390e83439983bc0762e 100644 (file)
@@ -3443,6 +3443,8 @@ static const struct pci_device_id nvme_id_table[] = {
        { PCI_DEVICE(0x1d97, 0x2269), /* Lexar NM760 */
                .driver_data = NVME_QUIRK_BOGUS_NID |
                                NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+       { PCI_DEVICE(0x10ec, 0x5763), /* TEAMGROUP T-FORCE CARDEA ZERO Z330 SSD */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
                .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
index 07d93753b12f5f4d2ccc0b6f00fa1e2aefef5bd6..e311d406b17053065b2f0699e92cda1c2e5db481 100644 (file)
@@ -226,6 +226,7 @@ static void __of_attach_node(struct device_node *np)
        np->sibling = np->parent->child;
        np->parent->child = np;
        of_node_clear_flag(np, OF_DETACHED);
+       np->fwnode.flags |= FWNODE_FLAG_NOT_DEVICE;
 }
 
 /**
index b2bd2e783445dd78e40beefb31d83bfa8c7eac6d..78ae8418744905c9b038378e78699a1aeb111a1c 100644 (file)
@@ -737,6 +737,11 @@ static int of_platform_notify(struct notifier_block *nb,
                if (of_node_check_flag(rd->dn, OF_POPULATED))
                        return NOTIFY_OK;
 
+               /*
+                * Clear the flag before adding the device so that fw_devlink
+                * doesn't skip adding consumers to this device.
+                */
+               rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
                /* pdev_parent may be NULL when no bus platform device */
                pdev_parent = of_find_device_by_node(rd->dn->parent);
                pdev = of_platform_device_create(rd->dn, NULL,
index 0145aef1b9301823e3a406611353b596036e8a77..22d39e12b236a69901628e580bccc4b8ee403bcd 100644 (file)
@@ -157,8 +157,6 @@ void pci_remove_root_bus(struct pci_bus *bus)
        list_for_each_entry_safe(child, tmp,
                                 &bus->devices, bus_list)
                pci_remove_bus_device(child);
-       pci_remove_bus(bus);
-       host_bridge->bus = NULL;
 
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
        /* Release domain_nr if it was dynamically allocated */
@@ -166,6 +164,9 @@ void pci_remove_root_bus(struct pci_bus *bus)
                pci_bus_release_domain_nr(bus, host_bridge->dev.parent);
 #endif
 
+       pci_remove_bus(bus);
+       host_bridge->bus = NULL;
+
        /* remove the host bridge */
        device_del(&host_bridge->dev);
 }
index 609821b756c2fa868d769a54c0f15225342a7cbb..9236a132c7bab6358105ed92a3dd525384a98d50 100644 (file)
@@ -872,34 +872,32 @@ static const struct pinconf_ops amd_pinconf_ops = {
        .pin_config_group_set = amd_pinconf_group_set,
 };
 
-static void amd_gpio_irq_init_pin(struct amd_gpio *gpio_dev, int pin)
+static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
 {
-       const struct pin_desc *pd;
+       struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
        unsigned long flags;
        u32 pin_reg, mask;
+       int i;
 
        mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
                BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
                BIT(WAKE_CNTRL_OFF_S4);
 
-       pd = pin_desc_get(gpio_dev->pctrl, pin);
-       if (!pd)
-               return;
+       for (i = 0; i < desc->npins; i++) {
+               int pin = desc->pins[i].number;
+               const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
 
-       raw_spin_lock_irqsave(&gpio_dev->lock, flags);
-       pin_reg = readl(gpio_dev->base + pin * 4);
-       pin_reg &= ~mask;
-       writel(pin_reg, gpio_dev->base + pin * 4);
-       raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
-}
+               if (!pd)
+                       continue;
 
-static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
-{
-       struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
-       int i;
+               raw_spin_lock_irqsave(&gpio_dev->lock, flags);
 
-       for (i = 0; i < desc->npins; i++)
-               amd_gpio_irq_init_pin(gpio_dev, i);
+               pin_reg = readl(gpio_dev->base + i * 4);
+               pin_reg &= ~mask;
+               writel(pin_reg, gpio_dev->base + i * 4);
+
+               raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+       }
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -952,10 +950,8 @@ static int amd_gpio_resume(struct device *dev)
        for (i = 0; i < desc->npins; i++) {
                int pin = desc->pins[i].number;
 
-               if (!amd_gpio_should_save(gpio_dev, pin)) {
-                       amd_gpio_irq_init_pin(gpio_dev, pin);
+               if (!amd_gpio_should_save(gpio_dev, pin))
                        continue;
-               }
 
                raw_spin_lock_irqsave(&gpio_dev->lock, flags);
                gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
index b11a9162e73aaeed994667a3ba04f2fa59cae240..b54f2c6c08c362799e31141b9abbaf3a4667ed71 100644 (file)
@@ -509,9 +509,6 @@ static int ses_enclosure_find_by_addr(struct enclosure_device *edev,
        int i;
        struct ses_component *scomp;
 
-       if (!edev->component[0].scratch)
-               return 0;
-
        for (i = 0; i < edev->components; i++) {
                scomp = edev->component[i].scratch;
                if (scomp->addr != efd->addr)
@@ -602,8 +599,10 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
                                                components++,
                                                type_ptr[0],
                                                name);
-                               else
+                               else if (components < edev->components)
                                        ecomp = &edev->component[components++];
+                               else
+                                       ecomp = ERR_PTR(-EINVAL);
 
                                if (!IS_ERR(ecomp)) {
                                        if (addl_desc_ptr) {
@@ -734,11 +733,6 @@ static int ses_intf_add(struct device *cdev,
                        components += type_ptr[1];
        }
 
-       if (components == 0) {
-               sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n");
-               goto err_free;
-       }
-
        ses_dev->page1 = buf;
        ses_dev->page1_len = len;
        buf = NULL;
@@ -780,9 +774,11 @@ static int ses_intf_add(struct device *cdev,
                buf = NULL;
        }
 page2_not_supported:
-       scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
-       if (!scomp)
-               goto err_free;
+       if (components > 0) {
+               scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
+               if (!scomp)
+                       goto err_free;
+       }
 
        edev = enclosure_register(cdev->parent, dev_name(&sdev->sdev_gendev),
                                  components, &ses_enclosure_callbacks);
index 44b85a8d47f112f795fb0a8a46397daff7999bcf..7bc14fb309a69970ea92902e4f1ec88b32a41e8c 100644 (file)
@@ -4456,6 +4456,11 @@ static int of_spi_notify(struct notifier_block *nb, unsigned long action,
                        return NOTIFY_OK;
                }
 
+               /*
+                * Clear the flag before adding the device so that fw_devlink
+                * doesn't skip adding consumers to this device.
+                */
+               rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE;
                spi = of_register_spi_device(ctlr, rd->dn);
                put_device(&ctlr->dev);
 
index 2e22bb82b7389e740c70dc66de24351372861bf0..e69868e868eb9e9abd0a2e826dbf75913605b4bc 100644 (file)
@@ -193,8 +193,67 @@ static const struct attribute_group thermal_attr_group = {
 #define THERM_THROT_POLL_INTERVAL      HZ
 #define THERM_STATUS_PROCHOT_LOG       BIT(1)
 
-#define THERM_STATUS_CLEAR_CORE_MASK (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11) | BIT(13) | BIT(15))
-#define THERM_STATUS_CLEAR_PKG_MASK  (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11))
+static u64 therm_intr_core_clear_mask;
+static u64 therm_intr_pkg_clear_mask;
+
+static void thermal_intr_init_core_clear_mask(void)
+{
+       if (therm_intr_core_clear_mask)
+               return;
+
+       /*
+        * Reference: Intel SDM Volume 4
+        * "Table 2-2. IA-32 Architectural MSRs", MSR 0x19C
+        * IA32_THERM_STATUS.
+        */
+
+       /*
+        * Bits 1, 3, 5: valid when CPUID.01H:EDX[22] = 1. This driver does
+        * not enable interrupts when it is 0, as it checks X86_FEATURE_ACPI.
+        */
+       therm_intr_core_clear_mask = (BIT(1) | BIT(3) | BIT(5));
+
+       /*
+        * Bit 7 and 9: Thermal Threshold #1 and #2 log
+        * If CPUID.01H:ECX[8] = 1
+        */
+       if (boot_cpu_has(X86_FEATURE_TM2))
+               therm_intr_core_clear_mask |= (BIT(7) | BIT(9));
+
+       /* Bit 11: Power Limitation log (R/WC0) If CPUID.06H:EAX[4] = 1 */
+       if (boot_cpu_has(X86_FEATURE_PLN))
+               therm_intr_core_clear_mask |= BIT(11);
+
+       /*
+        * Bit 13: Current Limit log (R/WC0) If CPUID.06H:EAX[7] = 1
+        * Bit 15: Cross Domain Limit log (R/WC0) If CPUID.06H:EAX[7] = 1
+        */
+       if (boot_cpu_has(X86_FEATURE_HWP))
+               therm_intr_core_clear_mask |= (BIT(13) | BIT(15));
+}
+
+static void thermal_intr_init_pkg_clear_mask(void)
+{
+       if (therm_intr_pkg_clear_mask)
+               return;
+
+       /*
+        * Reference: Intel SDM Volume 4
+        * "Table 2-2. IA-32 Architectural MSRs", MSR 0x1B1
+        * IA32_PACKAGE_THERM_STATUS.
+        */
+
+       /* All bits except BIT 26 depend on CPUID.06H: EAX[6] = 1 */
+       if (boot_cpu_has(X86_FEATURE_PTS))
+               therm_intr_pkg_clear_mask = (BIT(1) | BIT(3) | BIT(5) | BIT(7) | BIT(9) | BIT(11));
+
+       /*
+        * Intel SDM Volume 2A: Thermal and Power Management Leaf
+        * Bit 26: CPUID.06H: EAX[19] = 1
+        */
+       if (boot_cpu_has(X86_FEATURE_HFI))
+               therm_intr_pkg_clear_mask |= BIT(26);
+}
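The thermal hunk above replaces fixed clear masks with masks built once from CPU feature flags and cached in statics. A minimal sketch of that compute-once pattern, with illustrative bit positions and feature parameters standing in for `boot_cpu_has()` checks (not the real MSR layout):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t core_clear_mask; /* zero means "not yet initialized" */

static uint64_t build_core_clear_mask(int has_tm2, int has_pln)
{
	if (core_clear_mask)	/* compute once, then reuse */
		return core_clear_mask;

	/* Baseline bits, unconditional in this sketch. */
	core_clear_mask = (1ull << 1) | (1ull << 3) | (1ull << 5);
	if (has_tm2)		/* thermal threshold #1/#2 log bits */
		core_clear_mask |= (1ull << 7) | (1ull << 9);
	if (has_pln)		/* power limitation log bit */
		core_clear_mask |= 1ull << 11;
	return core_clear_mask;
}
```

Note the same trade-off as the driver: because a zero mask doubles as the "uninitialized" sentinel, the first caller's feature flags win and later calls return the cached value unchanged.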
 
 /*
  * Clear the bits in package thermal status register for bit = 1
@@ -207,13 +266,10 @@ void thermal_clear_package_intr_status(int level, u64 bit_mask)
 
        if (level == CORE_LEVEL) {
                msr  = MSR_IA32_THERM_STATUS;
-               msr_val = THERM_STATUS_CLEAR_CORE_MASK;
+               msr_val = therm_intr_core_clear_mask;
        } else {
                msr  = MSR_IA32_PACKAGE_THERM_STATUS;
-               msr_val = THERM_STATUS_CLEAR_PKG_MASK;
-               if (boot_cpu_has(X86_FEATURE_HFI))
-                       msr_val |= BIT(26);
-
+               msr_val = therm_intr_pkg_clear_mask;
        }
 
        msr_val &= ~bit_mask;
@@ -708,6 +764,9 @@ void intel_init_thermal(struct cpuinfo_x86 *c)
        h = THERMAL_APIC_VECTOR | APIC_DM_FIXED | APIC_LVT_MASKED;
        apic_write(APIC_LVTTHMR, h);
 
+       thermal_intr_init_core_clear_mask();
+       thermal_intr_init_pkg_clear_mask();
+
        rdmsr(MSR_IA32_THERM_INTERRUPT, l, h);
        if (cpu_has(c, X86_FEATURE_PLN) && !int_pln_enable)
                wrmsr(MSR_IA32_THERM_INTERRUPT,
index 520646ae7fa013b2612c8ec240f4585fab669f92..195963b82b636340d4a1b73e15e25a6c5face806 100644 (file)
@@ -2467,10 +2467,11 @@ static int setup_driver(struct mlx5_vdpa_dev *mvdev)
                err = 0;
                goto out;
        }
+       mlx5_vdpa_add_debugfs(ndev);
        err = setup_virtqueues(mvdev);
        if (err) {
                mlx5_vdpa_warn(mvdev, "setup_virtqueues\n");
-               goto out;
+               goto err_setup;
        }
 
        err = create_rqt(ndev);
@@ -2500,6 +2501,8 @@ err_tir:
        destroy_rqt(ndev);
 err_rqt:
        teardown_virtqueues(ndev);
+err_setup:
+       mlx5_vdpa_remove_debugfs(ndev->debugfs);
 out:
        return err;
 }
@@ -2513,6 +2516,8 @@ static void teardown_driver(struct mlx5_vdpa_net *ndev)
        if (!ndev->setup)
                return;
 
+       mlx5_vdpa_remove_debugfs(ndev->debugfs);
+       ndev->debugfs = NULL;
        teardown_steering(ndev);
        destroy_tir(ndev);
        destroy_rqt(ndev);
@@ -3261,7 +3266,6 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
        if (err)
                goto err_reg;
 
-       mlx5_vdpa_add_debugfs(ndev);
        mgtdev->ndev = ndev;
        return 0;
 
index 862f405362de27dc3ca88eefa3d3ac334a154fa5..dfe2ce34180356e408cb129a9ce0be77461bd64d 100644 (file)
@@ -466,16 +466,21 @@ static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
 
        vdpasim_net_setup_config(simdev, config);
 
-       ret = _vdpa_register_device(&simdev->vdpa, VDPASIM_NET_VQ_NUM);
-       if (ret)
-               goto reg_err;
-
        net = sim_to_net(simdev);
 
        u64_stats_init(&net->tx_stats.syncp);
        u64_stats_init(&net->rx_stats.syncp);
        u64_stats_init(&net->cq_stats.syncp);
 
+       /*
+        * Initialization must be completed before this call, since it can
+        * connect the device to the vDPA bus, so requests can arrive after
+        * this call.
+        */
+       ret = _vdpa_register_device(&simdev->vdpa, VDPASIM_NET_VQ_NUM);
+       if (ret)
+               goto reg_err;
+
        return 0;
 
 reg_err:
index b244e7c0f514ca3efdb676b771a4457780a2b7a2..32d0be96810359572aeafa556d08c5a5e98140b1 100644 (file)
@@ -125,7 +125,6 @@ struct vhost_scsi_tpg {
        struct se_portal_group se_tpg;
        /* Pointer back to vhost_scsi, protected by tv_tpg_mutex */
        struct vhost_scsi *vhost_scsi;
-       struct list_head tmf_queue;
 };
 
 struct vhost_scsi_tport {
@@ -206,10 +205,8 @@ struct vhost_scsi {
 
 struct vhost_scsi_tmf {
        struct vhost_work vwork;
-       struct vhost_scsi_tpg *tpg;
        struct vhost_scsi *vhost;
        struct vhost_scsi_virtqueue *svq;
-       struct list_head queue_entry;
 
        struct se_cmd se_cmd;
        u8 scsi_resp;
@@ -352,12 +349,9 @@ static void vhost_scsi_release_cmd_res(struct se_cmd *se_cmd)
 
 static void vhost_scsi_release_tmf_res(struct vhost_scsi_tmf *tmf)
 {
-       struct vhost_scsi_tpg *tpg = tmf->tpg;
        struct vhost_scsi_inflight *inflight = tmf->inflight;
 
-       mutex_lock(&tpg->tv_tpg_mutex);
-       list_add_tail(&tpg->tmf_queue, &tmf->queue_entry);
-       mutex_unlock(&tpg->tv_tpg_mutex);
+       kfree(tmf);
        vhost_scsi_put_inflight(inflight);
 }
 
@@ -1194,19 +1188,11 @@ vhost_scsi_handle_tmf(struct vhost_scsi *vs, struct vhost_scsi_tpg *tpg,
                goto send_reject;
        }
 
-       mutex_lock(&tpg->tv_tpg_mutex);
-       if (list_empty(&tpg->tmf_queue)) {
-               pr_err("Missing reserve TMF. Could not handle LUN RESET.\n");
-               mutex_unlock(&tpg->tv_tpg_mutex);
+       tmf = kzalloc(sizeof(*tmf), GFP_KERNEL);
+       if (!tmf)
                goto send_reject;
-       }
-
-       tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf,
-                              queue_entry);
-       list_del_init(&tmf->queue_entry);
-       mutex_unlock(&tpg->tv_tpg_mutex);
 
-       tmf->tpg = tpg;
+       vhost_work_init(&tmf->vwork, vhost_scsi_tmf_resp_work);
        tmf->vhost = vs;
        tmf->svq = svq;
        tmf->resp_iov = vq->iov[vc->out];
@@ -1658,7 +1644,10 @@ undepend:
        for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
                tpg = vs_tpg[i];
                if (tpg) {
+                       mutex_lock(&tpg->tv_tpg_mutex);
+                       tpg->vhost_scsi = NULL;
                        tpg->tv_tpg_vhost_count--;
+                       mutex_unlock(&tpg->tv_tpg_mutex);
                        target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
                }
        }
@@ -2032,19 +2021,11 @@ static int vhost_scsi_port_link(struct se_portal_group *se_tpg,
 {
        struct vhost_scsi_tpg *tpg = container_of(se_tpg,
                                struct vhost_scsi_tpg, se_tpg);
-       struct vhost_scsi_tmf *tmf;
-
-       tmf = kzalloc(sizeof(*tmf), GFP_KERNEL);
-       if (!tmf)
-               return -ENOMEM;
-       INIT_LIST_HEAD(&tmf->queue_entry);
-       vhost_work_init(&tmf->vwork, vhost_scsi_tmf_resp_work);
 
        mutex_lock(&vhost_scsi_mutex);
 
        mutex_lock(&tpg->tv_tpg_mutex);
        tpg->tv_tpg_port_count++;
-       list_add_tail(&tmf->queue_entry, &tpg->tmf_queue);
        mutex_unlock(&tpg->tv_tpg_mutex);
 
        vhost_scsi_hotplug(tpg, lun);
@@ -2059,16 +2040,11 @@ static void vhost_scsi_port_unlink(struct se_portal_group *se_tpg,
 {
        struct vhost_scsi_tpg *tpg = container_of(se_tpg,
                                struct vhost_scsi_tpg, se_tpg);
-       struct vhost_scsi_tmf *tmf;
 
        mutex_lock(&vhost_scsi_mutex);
 
        mutex_lock(&tpg->tv_tpg_mutex);
        tpg->tv_tpg_port_count--;
-       tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf,
-                              queue_entry);
-       list_del(&tmf->queue_entry);
-       kfree(tmf);
        mutex_unlock(&tpg->tv_tpg_mutex);
 
        vhost_scsi_hotunplug(tpg, lun);
@@ -2329,7 +2305,6 @@ vhost_scsi_make_tpg(struct se_wwn *wwn, const char *name)
        }
        mutex_init(&tpg->tv_tpg_mutex);
        INIT_LIST_HEAD(&tpg->tv_tpg_list);
-       INIT_LIST_HEAD(&tpg->tmf_queue);
        tpg->tport = tport;
        tpg->tport_tpgt = tpgt;
 
index 0a2c47df01f402b2221777a2d5ba78e748ceb43c..eb565a10e5cda306fb3f12b837d57a311599fb95 100644 (file)
@@ -823,7 +823,7 @@ static int set_con2fb_map(int unit, int newidx, int user)
        int oldidx = con2fb_map[unit];
        struct fb_info *info = fbcon_registered_fb[newidx];
        struct fb_info *oldinfo = NULL;
-       int found, err = 0, show_logo;
+       int err = 0, show_logo;
 
        WARN_CONSOLE_UNLOCKED();
 
@@ -841,26 +841,26 @@ static int set_con2fb_map(int unit, int newidx, int user)
        if (oldidx != -1)
                oldinfo = fbcon_registered_fb[oldidx];
 
-       found = search_fb_in_map(newidx);
-
-       if (!err && !found) {
+       if (!search_fb_in_map(newidx)) {
                err = con2fb_acquire_newinfo(vc, info, unit);
-               if (!err)
-                       con2fb_map[unit] = newidx;
+               if (err)
+                       return err;
+
+               fbcon_add_cursor_work(info);
        }
 
+       con2fb_map[unit] = newidx;
+
        /*
         * If old fb is not mapped to any of the consoles,
         * fbcon should release it.
         */
-       if (!err && oldinfo && !search_fb_in_map(oldidx))
+       if (oldinfo && !search_fb_in_map(oldidx))
                con2fb_release_oldinfo(vc, oldinfo, info);
 
        show_logo = (fg_console == 0 && !user &&
                         logo_shown != FBCON_LOGO_DONTSHOW);
 
-       if (!found)
-               fbcon_add_cursor_work(info);
        con2fb_map_boot[unit] = newidx;
        con2fb_init_display(vc, info, unit, show_logo);
 
index 875541ff185bf365f2a046e55c2421d50790649a..3fd95a79e4c334674e2bd6fdcecda747cdcd6c8c 100644 (file)
@@ -1116,6 +1116,8 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
        case FBIOPUT_VSCREENINFO:
                if (copy_from_user(&var, argp, sizeof(var)))
                        return -EFAULT;
+               /* only for kernel-internal use */
+               var.activate &= ~FB_ACTIVATE_KD_TEXT;
                console_lock();
                lock_fb_info(info);
                ret = fbcon_modechange_possible(info, &var);
index 50f7f3f6b55e9a7853cc1f6e4d8d14238274379a..1974a38bce206fa862c01a7ea0f82079642c9c5d 100644 (file)
@@ -35,10 +35,12 @@ ssize_t v9fs_fid_xattr_get(struct p9_fid *fid, const char *name,
                return retval;
        }
        if (attr_size > buffer_size) {
-               if (!buffer_size) /* request to get the attr_size */
-                       retval = attr_size;
-               else
+               if (buffer_size)
                        retval = -ERANGE;
+               else if (attr_size > SSIZE_MAX)
+                       retval = -EOVERFLOW;
+               else /* request to get the attr_size */
+                       retval = attr_size;
        } else {
                iov_iter_truncate(&to, attr_size);
                retval = p9_client_read(attr_fid, 0, &to, &err);
index b53f0e30ce2b3bbb2e40baa1163ee57e29e942b7..9e1596bb208db09ff69d441c9ca3330ea7b940c2 100644 (file)
@@ -2250,6 +2250,20 @@ static int btrfs_init_csum_hash(struct btrfs_fs_info *fs_info, u16 csum_type)
 
        fs_info->csum_shash = csum_shash;
 
+       /*
+        * Check if the checksum implementation is a fast accelerated one.
+        * As-is this is a bit of a hack and should be replaced once the csum
+        * implementations provide that information themselves.
+        */
+       switch (csum_type) {
+       case BTRFS_CSUM_TYPE_CRC32:
+               if (!strstr(crypto_shash_driver_name(csum_shash), "generic"))
+                       set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
+               break;
+       default:
+               break;
+       }
+
        btrfs_info(fs_info, "using %s (%s) checksum algorithm",
                        btrfs_super_csum_name(csum_type),
                        crypto_shash_driver_name(csum_shash));
index 581845bc206ad28b403d52f7b8dc421982078222..366fb4cde14584b5c1477869c064364c4a1c03fa 100644 (file)
@@ -1516,8 +1516,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
                shrinker_debugfs_rename(&s->s_shrink, "sb-%s:%s", fs_type->name,
                                        s->s_id);
                btrfs_sb(s)->bdev_holder = fs_type;
-               if (!strstr(crc32c_impl(), "generic"))
-                       set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags);
                error = btrfs_fill_super(s, fs_devices, data);
        }
        if (!error)
@@ -1631,6 +1629,8 @@ static void btrfs_resize_thread_pool(struct btrfs_fs_info *fs_info,
        btrfs_workqueue_set_max(fs_info->hipri_workers, new_pool_size);
        btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size);
        btrfs_workqueue_set_max(fs_info->caching_workers, new_pool_size);
+       workqueue_set_max_active(fs_info->endio_workers, new_pool_size);
+       workqueue_set_max_active(fs_info->endio_meta_workers, new_pool_size);
        btrfs_workqueue_set_max(fs_info->endio_write_workers, new_pool_size);
        btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size);
        btrfs_workqueue_set_max(fs_info->delayed_workers, new_pool_size);
index 2b92132097dc89363f4470b54a3355f61e75109e..4245249dbba86ab247f27f1f75f36348eaa23a4d 100644 (file)
@@ -587,11 +587,15 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
 
 }
 
+/* If the preauth context is invalid, warn but use what we requested, SHA-512 */
 static void decode_preauth_context(struct smb2_preauth_neg_context *ctxt)
 {
        unsigned int len = le16_to_cpu(ctxt->DataLength);
 
-       /* If invalid preauth context warn but use what we requested, SHA-512 */
+       /*
+        * Caller checked that DataLength remains within SMB boundary. We still
+        * need to confirm that one HashAlgorithms member is accounted for.
+        */
        if (len < MIN_PREAUTH_CTXT_DATA_LEN) {
                pr_warn_once("server sent bad preauth context\n");
                return;
@@ -610,7 +614,11 @@ static void decode_compress_ctx(struct TCP_Server_Info *server,
 {
        unsigned int len = le16_to_cpu(ctxt->DataLength);
 
-       /* sizeof compress context is a one element compression capbility struct */
+       /*
+        * Caller checked that DataLength remains within SMB boundary. We still
+        * need to confirm that one CompressionAlgorithms member is accounted
+        * for.
+        */
        if (len < 10) {
                pr_warn_once("server sent bad compression cntxt\n");
                return;
@@ -632,6 +640,11 @@ static int decode_encrypt_ctx(struct TCP_Server_Info *server,
        unsigned int len = le16_to_cpu(ctxt->DataLength);
 
        cifs_dbg(FYI, "decode SMB3.11 encryption neg context of len %d\n", len);
+       /*
+        * Caller checked that DataLength remains within SMB boundary. We still
+        * need to confirm that one Cipher flexible array member is accounted
+        * for.
+        */
        if (len < MIN_ENCRYPT_CTXT_DATA_LEN) {
                pr_warn_once("server sent bad crypto ctxt len\n");
                return -EINVAL;
@@ -678,6 +691,11 @@ static void decode_signing_ctx(struct TCP_Server_Info *server,
 {
        unsigned int len = le16_to_cpu(pctxt->DataLength);
 
+       /*
+        * Caller checked that DataLength remains within SMB boundary. We still
+        * need to confirm that one SigningAlgorithms flexible array member is
+        * accounted for.
+        */
        if ((len < 4) || (len > 16)) {
                pr_warn_once("server sent bad signing negcontext\n");
                return;
@@ -719,14 +737,19 @@ static int smb311_decode_neg_context(struct smb2_negotiate_rsp *rsp,
        for (i = 0; i < ctxt_cnt; i++) {
                int clen;
                /* check that offset is not beyond end of SMB */
-               if (len_of_ctxts == 0)
-                       break;
-
                if (len_of_ctxts < sizeof(struct smb2_neg_context))
                        break;
 
                pctx = (struct smb2_neg_context *)(offset + (char *)rsp);
-               clen = le16_to_cpu(pctx->DataLength);
+               clen = sizeof(struct smb2_neg_context)
+                       + le16_to_cpu(pctx->DataLength);
+               /*
+                * 2.2.4 SMB2 NEGOTIATE Response
+                * Subsequent negotiate contexts MUST appear at the first 8-byte
+                * aligned offset following the previous negotiate context.
+                */
+               if (i + 1 != ctxt_cnt)
+                       clen = ALIGN(clen, 8);
                if (clen > len_of_ctxts)
                        break;
 
@@ -747,12 +770,10 @@ static int smb311_decode_neg_context(struct smb2_negotiate_rsp *rsp,
                else
                        cifs_server_dbg(VFS, "unknown negcontext of type %d ignored\n",
                                le16_to_cpu(pctx->ContextType));
-
                if (rc)
                        break;
-               /* offsets must be 8 byte aligned */
-               clen = ALIGN(clen, 8);
-               offset += clen + sizeof(struct smb2_neg_context);
+
+               offset += clen;
                len_of_ctxts -= clen;
        }
        return rc;
index 8af939a181be588f932cdca6419a1ac36047a128..67b7e766a06bace88ef65d2afdb7dff514f9f364 100644 (file)
@@ -876,17 +876,21 @@ static void assemble_neg_contexts(struct ksmbd_conn *conn,
 }
 
 static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn,
-                                 struct smb2_preauth_neg_context *pneg_ctxt)
+                                 struct smb2_preauth_neg_context *pneg_ctxt,
+                                 int len_of_ctxts)
 {
-       __le32 err = STATUS_NO_PREAUTH_INTEGRITY_HASH_OVERLAP;
+       /*
+        * sizeof(smb2_preauth_neg_context) assumes SMB311_SALT_SIZE Salt,
+        * which may not be present. Only check for used HashAlgorithms[1].
+        */
+       if (len_of_ctxts < MIN_PREAUTH_CTXT_DATA_LEN)
+               return STATUS_INVALID_PARAMETER;
 
-       if (pneg_ctxt->HashAlgorithms == SMB2_PREAUTH_INTEGRITY_SHA512) {
-               conn->preauth_info->Preauth_HashId =
-                       SMB2_PREAUTH_INTEGRITY_SHA512;
-               err = STATUS_SUCCESS;
-       }
+       if (pneg_ctxt->HashAlgorithms != SMB2_PREAUTH_INTEGRITY_SHA512)
+               return STATUS_NO_PREAUTH_INTEGRITY_HASH_OVERLAP;
 
-       return err;
+       conn->preauth_info->Preauth_HashId = SMB2_PREAUTH_INTEGRITY_SHA512;
+       return STATUS_SUCCESS;
 }
 
 static void decode_encrypt_ctxt(struct ksmbd_conn *conn,
@@ -1014,7 +1018,8 @@ static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn,
                                break;
 
                        status = decode_preauth_ctxt(conn,
-                                                    (struct smb2_preauth_neg_context *)pctx);
+                                                    (struct smb2_preauth_neg_context *)pctx,
+                                                    len_of_ctxts);
                        if (status != STATUS_SUCCESS)
                                break;
                } else if (pctx->ContextType == SMB2_ENCRYPTION_CAPABILITIES) {
index e9a45dea748a2494707e851735deeafa1b89fdd2..8a4c866874297c18bdf63ebcf878d8e5b313b586 100644 (file)
@@ -139,7 +139,7 @@ static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter,
                        size_t seg = min_t(size_t, PAGE_SIZE - off, len);
 
                        *pages++ = NULL;
-                       sg_set_page(sg, page, len, off);
+                       sg_set_page(sg, page, seg, off);
                        sgtable->nents++;
                        sg++;
                        len -= seg;
index 71b06ebad4024c429bf6b33cf3637e9e0c7fc61e..1db19a9d26e324a86ac0a5b9e4dfeea409ac1dfd 100644 (file)
@@ -36,6 +36,7 @@
 #include <linux/types.h>
 #include <rdma/ib_verbs.h>
 #include <linux/mlx5/mlx5_ifc.h>
+#include <linux/bitfield.h>
 
 #if defined(__LITTLE_ENDIAN)
 #define MLX5_SET_HOST_ENDIANNESS       0
@@ -980,14 +981,23 @@ enum {
 };
 
 enum {
-       CQE_RSS_HTYPE_IP        = 0x3 << 2,
+       CQE_RSS_HTYPE_IP        = GENMASK(3, 2),
        /* cqe->rss_hash_type[3:2] - IP destination selected for hash
         * (00 = none,  01 = IPv4, 10 = IPv6, 11 = Reserved)
         */
-       CQE_RSS_HTYPE_L4        = 0x3 << 6,
+       CQE_RSS_IP_NONE         = 0x0,
+       CQE_RSS_IPV4            = 0x1,
+       CQE_RSS_IPV6            = 0x2,
+       CQE_RSS_RESERVED        = 0x3,
+
+       CQE_RSS_HTYPE_L4        = GENMASK(7, 6),
        /* cqe->rss_hash_type[7:6] - L4 destination selected for hash
         * (00 = none, 01 = TCP. 10 = UDP, 11 = IPSEC.SPI
         */
+       CQE_RSS_L4_NONE         = 0x0,
+       CQE_RSS_L4_TCP          = 0x1,
+       CQE_RSS_L4_UDP          = 0x2,
+       CQE_RSS_L4_IPSEC        = 0x3,
 };
 
 enum {
index 470085b121d3c241e8045416e9d0426eb8a43850..c35f04f636f15a2a034baedd0b60b0857b96f0e0 100644 (file)
@@ -1624,7 +1624,8 @@ struct net_device_ops {
 
 struct xdp_metadata_ops {
        int     (*xmo_rx_timestamp)(const struct xdp_md *ctx, u64 *timestamp);
-       int     (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash);
+       int     (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash,
+                              enum xdp_rss_hash_type *rss_type);
 };
 
 /**
index b50e5c79f7e32fd9fb9c4b118677bda46b387be7..a5dda515fcd1d4f7fe2cd4b6e384797c0083027d 100644 (file)
@@ -1624,6 +1624,8 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
                                              flags, NULL);
 }
 
+static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev)
+{ return false; }
 static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
                                                   const struct irq_affinity_desc *affdesc)
 {
index 92ad75549e9cdbabb78cd9c6cc7e4c2356cf850a..b6e6378dcbbd7070200a709e32fe7553d614d9f9 100644 (file)
@@ -25,7 +25,8 @@ void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change,
 struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
                                       unsigned change, u32 event,
                                       gfp_t flags, int *new_nsid,
-                                      int new_ifindex, u32 portid, u32 seq);
+                                      int new_ifindex, u32 portid,
+                                      const struct nlmsghdr *nlh);
 void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev,
                       gfp_t flags, u32 portid, const struct nlmsghdr *nlh);
 
index 6ed9b4d546a7ac897dd4ba90d72240182c60f2e1..d5311ceb21c62f30d302ecf8aa4ac16bc2edc062 100644 (file)
@@ -954,6 +954,7 @@ enum {
        HCI_CONN_STK_ENCRYPT,
        HCI_CONN_AUTH_INITIATOR,
        HCI_CONN_DROP,
+       HCI_CONN_CANCEL,
        HCI_CONN_PARAM_REMOVAL_PEND,
        HCI_CONN_NEW_LINK_KEY,
        HCI_CONN_SCANNING,
index ea36ab7f9e724bd3650d7502c969ab1b8ba2f7ba..c3843239517d539c15985f38b4887a888c940d9d 100644 (file)
@@ -761,13 +761,17 @@ static inline int bond_get_targets_ip(__be32 *targets, __be32 ip)
 #if IS_ENABLED(CONFIG_IPV6)
 static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr *ip)
 {
+       struct in6_addr mcaddr;
        int i;
 
-       for (i = 0; i < BOND_MAX_NS_TARGETS; i++)
-               if (ipv6_addr_equal(&targets[i], ip))
+       for (i = 0; i < BOND_MAX_NS_TARGETS; i++) {
+               addrconf_addr_solict_mult(&targets[i], &mcaddr);
+               if ((ipv6_addr_equal(&targets[i], ip)) ||
+                   (ipv6_addr_equal(&mcaddr, ip)))
                        return i;
                else if (ipv6_addr_any(&targets[i]))
                        break;
+       }
 
        return -1;
 }
index 41c57b8b167147bb4544a9bdcb92e7a61459fc40..76aa748e792374abc91166561c072b18833e1a63 100644 (file)
@@ -8,6 +8,7 @@
 
 #include <linux/skbuff.h> /* skb_shared_info */
 #include <uapi/linux/netdev.h>
+#include <linux/bitfield.h>
 
 /**
  * DOC: XDP RX-queue information
@@ -425,6 +426,52 @@ XDP_METADATA_KFUNC_xxx
 MAX_XDP_METADATA_KFUNC,
 };
 
+enum xdp_rss_hash_type {
+       /* First part: Individual bits for L3/L4 types */
+       XDP_RSS_L3_IPV4         = BIT(0),
+       XDP_RSS_L3_IPV6         = BIT(1),
+
+       /* The fixed (L3) IPv4 and IPv6 headers can both be followed by
+        * variable/dynamic headers, called Options for IPv4 and Extension
+        * Headers for IPv6. The HW RSS type can contain this info.
+        */
+       XDP_RSS_L3_DYNHDR       = BIT(2),
+
+       /* When RSS hash covers L4 then drivers MUST set XDP_RSS_L4 bit in
+        * addition to the protocol specific bit.  This eases interaction with
+        * SKBs and avoids reserving a fixed mask for future L4 protocol bits.
+        */
+       XDP_RSS_L4              = BIT(3), /* L4 based hash, proto can be unknown */
+       XDP_RSS_L4_TCP          = BIT(4),
+       XDP_RSS_L4_UDP          = BIT(5),
+       XDP_RSS_L4_SCTP         = BIT(6),
+       XDP_RSS_L4_IPSEC        = BIT(7), /* L4 based hash includes IPSEC SPI */
+
+       /* Second part: RSS hash type combinations used for driver HW mapping */
+       XDP_RSS_TYPE_NONE            = 0,
+       XDP_RSS_TYPE_L2              = XDP_RSS_TYPE_NONE,
+
+       XDP_RSS_TYPE_L3_IPV4         = XDP_RSS_L3_IPV4,
+       XDP_RSS_TYPE_L3_IPV6         = XDP_RSS_L3_IPV6,
+       XDP_RSS_TYPE_L3_IPV4_OPT     = XDP_RSS_L3_IPV4 | XDP_RSS_L3_DYNHDR,
+       XDP_RSS_TYPE_L3_IPV6_EX      = XDP_RSS_L3_IPV6 | XDP_RSS_L3_DYNHDR,
+
+       XDP_RSS_TYPE_L4_ANY          = XDP_RSS_L4,
+       XDP_RSS_TYPE_L4_IPV4_TCP     = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_TCP,
+       XDP_RSS_TYPE_L4_IPV4_UDP     = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_UDP,
+       XDP_RSS_TYPE_L4_IPV4_SCTP    = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_SCTP,
+       XDP_RSS_TYPE_L4_IPV4_IPSEC   = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC,
+
+       XDP_RSS_TYPE_L4_IPV6_TCP     = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_TCP,
+       XDP_RSS_TYPE_L4_IPV6_UDP     = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_UDP,
+       XDP_RSS_TYPE_L4_IPV6_SCTP    = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_SCTP,
+       XDP_RSS_TYPE_L4_IPV6_IPSEC   = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC,
+
+       XDP_RSS_TYPE_L4_IPV6_TCP_EX  = XDP_RSS_TYPE_L4_IPV6_TCP  | XDP_RSS_L3_DYNHDR,
+       XDP_RSS_TYPE_L4_IPV6_UDP_EX  = XDP_RSS_TYPE_L4_IPV6_UDP  | XDP_RSS_L3_DYNHDR,
+       XDP_RSS_TYPE_L4_IPV6_SCTP_EX = XDP_RSS_TYPE_L4_IPV6_SCTP | XDP_RSS_L3_DYNHDR,
+};
+
 #ifdef CONFIG_NET
 u32 bpf_xdp_metadata_kfunc_id(int id);
 bool bpf_dev_bound_kfunc_id(u32 btf_id);
index 5af2a0300bb9d59a48beba17b79964af76985590..3744e4da1b2a7d1121b6873a3839ed962fa4898f 100644 (file)
@@ -140,11 +140,11 @@ struct virtio_blk_config {
 
        /* Zoned block device characteristics (if VIRTIO_BLK_F_ZONED) */
        struct virtio_blk_zoned_characteristics {
-               __le32 zone_sectors;
-               __le32 max_open_zones;
-               __le32 max_active_zones;
-               __le32 max_append_sectors;
-               __le32 write_granularity;
+               __virtio32 zone_sectors;
+               __virtio32 max_open_zones;
+               __virtio32 max_active_zones;
+               __virtio32 max_append_sectors;
+               __virtio32 write_granularity;
                __u8 model;
                __u8 unused2[3];
        } zoned;
@@ -241,11 +241,11 @@ struct virtio_blk_outhdr {
  */
 struct virtio_blk_zone_descriptor {
        /* Zone capacity */
-       __le64 z_cap;
+       __virtio64 z_cap;
        /* The starting sector of the zone */
-       __le64 z_start;
+       __virtio64 z_start;
        /* Zone write pointer position in sectors */
-       __le64 z_wp;
+       __virtio64 z_wp;
        /* Zone type */
        __u8 z_type;
        /* Zone state */
@@ -254,7 +254,7 @@ struct virtio_blk_zone_descriptor {
 };
 
 struct virtio_blk_zone_report {
-       __le64 nr_zones;
+       __virtio64 nr_zones;
        __u8 reserved[56];
        struct virtio_blk_zone_descriptor zones[];
 };
index f6c112e30bd47d9363086ea6f2154b1c7e2a0ccf..e7a01c2ccd1b0c33f795dd94a0c58ab57092bb93 100644 (file)
@@ -60,15 +60,8 @@ static void __init error(char *x)
                message = x;
 }
 
-static void panic_show_mem(const char *fmt, ...)
-{
-       va_list args;
-
-       show_mem(0, NULL);
-       va_start(args, fmt);
-       panic(fmt, args);
-       va_end(args);
-}
+#define panic_show_mem(fmt, ...) \
+       ({ show_mem(0, NULL); panic(fmt, ##__VA_ARGS__); })
 
 /* link hash */
 
index 2a8b8c304d2ab1c0f32ece06236145899834c589..4a865f0e85d0b8116c6ef3bd37b5f6db4af29835 100644 (file)
@@ -998,7 +998,7 @@ static void __io_req_complete_post(struct io_kiocb *req)
 
 void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 {
-       if (req->ctx->task_complete && (issue_flags & IO_URING_F_IOWQ)) {
+       if (req->ctx->task_complete && req->ctx->submitter_task != current) {
                req->io_task_work.func = io_req_task_complete;
                io_req_task_work_add(req);
        } else if (!(issue_flags & IO_URING_F_UNLOCKED) ||
index 636f1c682ac07a86668b186e808c5b1b98f70a6b..505d86b166426e46dd3549683c7b74d614d02af7 100644 (file)
@@ -1513,7 +1513,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
        spin_unlock_irq(&callback_lock);
 
        if (adding || deleting)
-               update_tasks_cpumask(parent, tmp->new_cpus);
+               update_tasks_cpumask(parent, tmp->addmask);
 
        /*
         * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary.
@@ -1770,10 +1770,13 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
        /*
         * Use the cpumasks in trialcs for tmpmasks when they are pointers
         * to allocated cpumasks.
+        *
+        * Note that update_parent_subparts_cpumask() uses only addmask &
+        * delmask, but not new_cpus.
         */
        tmp.addmask  = trialcs->subparts_cpus;
        tmp.delmask  = trialcs->effective_cpus;
-       tmp.new_cpus = trialcs->cpus_allowed;
+       tmp.new_cpus = NULL;
 #endif
 
        retval = validate_change(cs, trialcs);
@@ -1838,6 +1841,11 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
        }
        spin_unlock_irq(&callback_lock);
 
+#ifdef CONFIG_CPUMASK_OFFSTACK
+       /* Now trialcs->cpus_allowed is available */
+       tmp.new_cpus = trialcs->cpus_allowed;
+#endif
+
        /* effective_cpus will be updated here */
        update_cpumasks_hier(cs, &tmp, false);
 
@@ -2445,6 +2453,20 @@ static int fmeter_getrate(struct fmeter *fmp)
 
 static struct cpuset *cpuset_attach_old_cs;
 
+/*
+ * Check to see if a cpuset can accept a new task
+ * For v1, cpus_allowed and mems_allowed can't be empty.
+ * For v2, effective_cpus can't be empty.
+ * Note that in v1, effective_cpus = cpus_allowed.
+ */
+static int cpuset_can_attach_check(struct cpuset *cs)
+{
+       if (cpumask_empty(cs->effective_cpus) ||
+          (!is_in_v2_mode() && nodes_empty(cs->mems_allowed)))
+               return -ENOSPC;
+       return 0;
+}
+
 /* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */
 static int cpuset_can_attach(struct cgroup_taskset *tset)
 {
@@ -2459,16 +2481,9 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
 
        percpu_down_write(&cpuset_rwsem);
 
-       /* allow moving tasks into an empty cpuset if on default hierarchy */
-       ret = -ENOSPC;
-       if (!is_in_v2_mode() &&
-           (cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed)))
-               goto out_unlock;
-
-       /*
-        * Task cannot be moved to a cpuset with empty effective cpus.
-        */
-       if (cpumask_empty(cs->effective_cpus))
+       /* Check to see if task is allowed in the cpuset */
+       ret = cpuset_can_attach_check(cs);
+       if (ret)
                goto out_unlock;
 
        cgroup_taskset_for_each(task, css, tset) {
@@ -2485,7 +2500,6 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
         * changes which zero cpus/mems_allowed.
         */
        cs->attach_in_progress++;
-       ret = 0;
 out_unlock:
        percpu_up_write(&cpuset_rwsem);
        return ret;
@@ -2494,25 +2508,47 @@ out_unlock:
 static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 {
        struct cgroup_subsys_state *css;
+       struct cpuset *cs;
 
        cgroup_taskset_first(tset, &css);
+       cs = css_cs(css);
 
        percpu_down_write(&cpuset_rwsem);
-       css_cs(css)->attach_in_progress--;
+       cs->attach_in_progress--;
+       if (!cs->attach_in_progress)
+               wake_up(&cpuset_attach_wq);
        percpu_up_write(&cpuset_rwsem);
 }
 
 /*
- * Protected by cpuset_rwsem.  cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task()
  * but we can't allocate it dynamically there.  Define it global and
  * allocate from cpuset_init().
  */
 static cpumask_var_t cpus_attach;
+static nodemask_t cpuset_attach_nodemask_to;
+
+static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
+{
+       percpu_rwsem_assert_held(&cpuset_rwsem);
+
+       if (cs != &top_cpuset)
+               guarantee_online_cpus(task, cpus_attach);
+       else
+               cpumask_andnot(cpus_attach, task_cpu_possible_mask(task),
+                              cs->subparts_cpus);
+       /*
+        * can_attach beforehand should guarantee that this doesn't
+        * fail.  TODO: have a better way to handle failure here
+        */
+       WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+
+       cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
+       cpuset_update_task_spread_flags(cs, task);
+}
 
 static void cpuset_attach(struct cgroup_taskset *tset)
 {
-       /* static buf protected by cpuset_rwsem */
-       static nodemask_t cpuset_attach_nodemask_to;
        struct task_struct *task;
        struct task_struct *leader;
        struct cgroup_subsys_state *css;
@@ -2543,20 +2579,8 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 
        guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 
-       cgroup_taskset_for_each(task, css, tset) {
-               if (cs != &top_cpuset)
-                       guarantee_online_cpus(task, cpus_attach);
-               else
-                       cpumask_copy(cpus_attach, task_cpu_possible_mask(task));
-               /*
-                * can_attach beforehand should guarantee that this doesn't
-                * fail.  TODO: have a better way to handle failure here
-                */
-               WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
-
-               cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
-               cpuset_update_task_spread_flags(cs, task);
-       }
+       cgroup_taskset_for_each(task, css, tset)
+               cpuset_attach_task(cs, task);
 
        /*
         * Change mm for all threadgroup leaders. This is expensive and may
@@ -3247,6 +3271,68 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
        percpu_up_write(&cpuset_rwsem);
 }
 
+/*
+ * In case the child is cloned into a cpuset different from its parent,
+ * additional checks are done to see if the move is allowed.
+ */
+static int cpuset_can_fork(struct task_struct *task, struct css_set *cset)
+{
+       struct cpuset *cs = css_cs(cset->subsys[cpuset_cgrp_id]);
+       bool same_cs;
+       int ret;
+
+       rcu_read_lock();
+       same_cs = (cs == task_cs(current));
+       rcu_read_unlock();
+
+       if (same_cs)
+               return 0;
+
+       lockdep_assert_held(&cgroup_mutex);
+       percpu_down_write(&cpuset_rwsem);
+
+       /* Check to see if task is allowed in the cpuset */
+       ret = cpuset_can_attach_check(cs);
+       if (ret)
+               goto out_unlock;
+
+       ret = task_can_attach(task, cs->effective_cpus);
+       if (ret)
+               goto out_unlock;
+
+       ret = security_task_setscheduler(task);
+       if (ret)
+               goto out_unlock;
+
+       /*
+        * Mark attach is in progress.  This makes validate_change() fail
+        * changes which zero cpus/mems_allowed.
+        */
+       cs->attach_in_progress++;
+out_unlock:
+       percpu_up_write(&cpuset_rwsem);
+       return ret;
+}
+
+static void cpuset_cancel_fork(struct task_struct *task, struct css_set *cset)
+{
+       struct cpuset *cs = css_cs(cset->subsys[cpuset_cgrp_id]);
+       bool same_cs;
+
+       rcu_read_lock();
+       same_cs = (cs == task_cs(current));
+       rcu_read_unlock();
+
+       if (same_cs)
+               return;
+
+       percpu_down_write(&cpuset_rwsem);
+       cs->attach_in_progress--;
+       if (!cs->attach_in_progress)
+               wake_up(&cpuset_attach_wq);
+       percpu_up_write(&cpuset_rwsem);
+}
+
 /*
  * Make sure the new task conform to the current state of its parent,
  * which could have been changed by cpuset just after it inherits the
@@ -3254,11 +3340,33 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
  */
 static void cpuset_fork(struct task_struct *task)
 {
-       if (task_css_is_root(task, cpuset_cgrp_id))
+       struct cpuset *cs;
+       bool same_cs;
+
+       rcu_read_lock();
+       cs = task_cs(task);
+       same_cs = (cs == task_cs(current));
+       rcu_read_unlock();
+
+       if (same_cs) {
+               if (cs == &top_cpuset)
+                       return;
+
+               set_cpus_allowed_ptr(task, current->cpus_ptr);
+               task->mems_allowed = current->mems_allowed;
                return;
+       }
+
+       /* CLONE_INTO_CGROUP */
+       percpu_down_write(&cpuset_rwsem);
+       guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
+       cpuset_attach_task(cs, task);
+
+       cs->attach_in_progress--;
+       if (!cs->attach_in_progress)
+               wake_up(&cpuset_attach_wq);
 
-       set_cpus_allowed_ptr(task, current->cpus_ptr);
-       task->mems_allowed = current->mems_allowed;
+       percpu_up_write(&cpuset_rwsem);
 }
 
 struct cgroup_subsys cpuset_cgrp_subsys = {
@@ -3271,6 +3379,8 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
        .attach         = cpuset_attach,
        .post_attach    = cpuset_post_attach,
        .bind           = cpuset_bind,
+       .can_fork       = cpuset_can_fork,
+       .cancel_fork    = cpuset_cancel_fork,
        .fork           = cpuset_fork,
        .legacy_cftypes = legacy_files,
        .dfl_cftypes    = dfl_files,
index 1b6b21851e9d47daa3456123d604ab4d5e0f3b9c..936473203a6b511c2fa095a04ba219282913ab6a 100644
@@ -22,6 +22,7 @@
 #include <linux/freezer.h>
 #include <linux/seq_file.h>
 #include <linux/mutex.h>
+#include <linux/cpu.h>
 
 /*
  * A cgroup is freezing if any FREEZING flags are set.  FREEZING_SELF is
@@ -350,7 +351,7 @@ static void freezer_apply_state(struct freezer *freezer, bool freeze,
 
        if (freeze) {
                if (!(freezer->state & CGROUP_FREEZING))
-                       static_branch_inc(&freezer_active);
+                       static_branch_inc_cpuslocked(&freezer_active);
                freezer->state |= state;
                freeze_cgroup(freezer);
        } else {
@@ -361,7 +362,7 @@ static void freezer_apply_state(struct freezer *freezer, bool freeze,
                if (!(freezer->state & CGROUP_FREEZING)) {
                        freezer->state &= ~CGROUP_FROZEN;
                        if (was_freezing)
-                               static_branch_dec(&freezer_active);
+                               static_branch_dec_cpuslocked(&freezer_active);
                        unfreeze_cgroup(freezer);
                }
        }
@@ -379,6 +380,7 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
 {
        struct cgroup_subsys_state *pos;
 
+       cpus_read_lock();
        /*
         * Update all its descendants in pre-order traversal.  Each
         * descendant will try to inherit its parent's FREEZING state as
@@ -407,6 +409,7 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
        }
        rcu_read_unlock();
        mutex_unlock(&freezer_mutex);
+       cpus_read_unlock();
 }
 
 static ssize_t freezer_write(struct kernfs_open_file *of,
index 831f1f472bb814c945006f996d978c545e7a8759..0a2b4967e3334ca54003725c0aec38fd169c435e 100644
@@ -457,9 +457,7 @@ static void root_cgroup_cputime(struct cgroup_base_stat *bstat)
        struct task_cputime *cputime = &bstat->cputime;
        int i;
 
-       cputime->stime = 0;
-       cputime->utime = 0;
-       cputime->sum_exec_runtime = 0;
+       memset(bstat, 0, sizeof(*bstat));
        for_each_possible_cpu(i) {
                struct kernel_cpustat kcpustat;
                u64 *cpustat = kcpustat.cpustat;
index 8e880c09ab59ed0fab53abd7954e60eaaa2c25f2..7b95ee98a1a53daa162396e737356c951acf51d8 100644
@@ -3024,6 +3024,18 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
        return !!READ_ONCE(krcp->head);
 }
 
+static bool
+need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp)
+{
+       int i;
+
+       for (i = 0; i < FREE_N_CHANNELS; i++)
+               if (!list_empty(&krwp->bulk_head_free[i]))
+                       return true;
+
+       return !!krwp->head_free;
+}
+
 static int krc_count(struct kfree_rcu_cpu *krcp)
 {
        int sum = atomic_read(&krcp->head_count);
@@ -3107,15 +3119,14 @@ static void kfree_rcu_monitor(struct work_struct *work)
        for (i = 0; i < KFREE_N_BATCHES; i++) {
                struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
 
-               // Try to detach bulk_head or head and attach it over any
-               // available corresponding free channel. It can be that
-               // a previous RCU batch is in progress, it means that
-               // immediately to queue another one is not possible so
-               // in that case the monitor work is rearmed.
-               if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
-                       (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
-                               (READ_ONCE(krcp->head) && !krwp->head_free)) {
+               // Try to detach bulk_head or head and attach it, only when
+               // all channels are free.  If any channel is not free, it means
+               // that krwp still has on-going RCU work handling its frees.
+               if (need_wait_for_krwp_work(krwp))
+                       continue;
 
+               // kvfree_rcu_drain_ready() might handle this krcp, if so give up.
+               if (need_offload_krc(krcp)) {
                        // Channel 1 corresponds to the SLAB-pointer bulk path.
                        // Channel 2 corresponds to vmalloc-pointer bulk path.
                        for (j = 0; j < FREE_N_CHANNELS; j++) {
index 6986ea31c9844719cf083ee1b60e3163add9c9db..5f6587d94c1dd692d2d0cbcf64151e66e690de55 100644
@@ -10238,6 +10238,16 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
                sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
                                sds->total_capacity;
+
+               /*
+                * If the local group is more loaded than the average system
+                * load, don't try to pull any tasks.
+                */
+               if (local->avg_load >= sds->avg_load) {
+                       env->imbalance = 0;
+                       return;
+               }
+
        }
 
        /*
index c64050e839ac6faccd7f9741654cfa74bf661836..1fffe2bed5b02f3480b9f074d8b472016708729f 100644
@@ -280,6 +280,10 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
        write_unlock(&xen_9pfs_lock);
 
        for (i = 0; i < priv->num_rings; i++) {
+               struct xen_9pfs_dataring *ring = &priv->rings[i];
+
+               cancel_work_sync(&ring->work);
+
                if (!priv->rings[i].intf)
                        break;
                if (priv->rings[i].irq > 0)
index 17b946f9ba317c7c04222379f95b12e104f88a87..8455ba141ee6192cb039020e58de6c662e4d34f9 100644
@@ -68,7 +68,7 @@ static const struct sco_param esco_param_msbc[] = {
 };
 
 /* This function requires the caller holds hdev->lock */
-static void hci_connect_le_scan_cleanup(struct hci_conn *conn)
+static void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
 {
        struct hci_conn_params *params;
        struct hci_dev *hdev = conn->hdev;
@@ -88,9 +88,28 @@ static void hci_connect_le_scan_cleanup(struct hci_conn *conn)
 
        params = hci_pend_le_action_lookup(&hdev->pend_le_conns, bdaddr,
                                           bdaddr_type);
-       if (!params || !params->explicit_connect)
+       if (!params)
                return;
 
+       if (params->conn) {
+               hci_conn_drop(params->conn);
+               hci_conn_put(params->conn);
+               params->conn = NULL;
+       }
+
+       if (!params->explicit_connect)
+               return;
+
+       /* If the status indicates successful cancellation of
+        * the attempt (i.e. Unknown Connection Id) there's no point in
+        * notifying failure since we'll go back to keep trying to
+        * connect. The only exception is explicit connect requests
+        * where a timeout + cancel does indicate an actual failure.
+        */
+       if (status && status != HCI_ERROR_UNKNOWN_CONN_ID)
+               mgmt_connect_failed(hdev, &conn->dst, conn->type,
+                                   conn->dst_type, status);
+
        /* The connection attempt was doing scan for new RPA, and is
         * in scan phase. If params are not associated with any other
         * autoconnect action, remove them completely. If they are, just unmark
@@ -178,7 +197,7 @@ static void le_scan_cleanup(struct work_struct *work)
        rcu_read_unlock();
 
        if (c == conn) {
-               hci_connect_le_scan_cleanup(conn);
+               hci_connect_le_scan_cleanup(conn, 0x00);
                hci_conn_cleanup(conn);
        }
 
@@ -1049,6 +1068,17 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
        return conn;
 }
 
+static bool hci_conn_unlink(struct hci_conn *conn)
+{
+       if (!conn->link)
+               return false;
+
+       conn->link->link = NULL;
+       conn->link = NULL;
+
+       return true;
+}
+
 int hci_conn_del(struct hci_conn *conn)
 {
        struct hci_dev *hdev = conn->hdev;
@@ -1060,15 +1090,16 @@ int hci_conn_del(struct hci_conn *conn)
        cancel_delayed_work_sync(&conn->idle_work);
 
        if (conn->type == ACL_LINK) {
-               struct hci_conn *sco = conn->link;
-               if (sco) {
-                       sco->link = NULL;
+               struct hci_conn *link = conn->link;
+
+               if (link) {
+                       hci_conn_unlink(conn);
                        /* Due to race, SCO connection might be not established
                         * yet at this point. Delete it now, otherwise it is
                         * possible for it to be stuck and can't be deleted.
                         */
-                       if (sco->handle == HCI_CONN_HANDLE_UNSET)
-                               hci_conn_del(sco);
+                       if (link->handle == HCI_CONN_HANDLE_UNSET)
+                               hci_conn_del(link);
                }
 
                /* Unacked frames */
@@ -1084,7 +1115,7 @@ int hci_conn_del(struct hci_conn *conn)
                struct hci_conn *acl = conn->link;
 
                if (acl) {
-                       acl->link = NULL;
+                       hci_conn_unlink(conn);
                        hci_conn_drop(acl);
                }
 
@@ -1179,31 +1210,8 @@ EXPORT_SYMBOL(hci_get_route);
 static void hci_le_conn_failed(struct hci_conn *conn, u8 status)
 {
        struct hci_dev *hdev = conn->hdev;
-       struct hci_conn_params *params;
 
-       params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
-                                          conn->dst_type);
-       if (params && params->conn) {
-               hci_conn_drop(params->conn);
-               hci_conn_put(params->conn);
-               params->conn = NULL;
-       }
-
-       /* If the status indicates successful cancellation of
-        * the attempt (i.e. Unknown Connection Id) there's no point of
-        * notifying failure since we'll go back to keep trying to
-        * connect. The only exception is explicit connect requests
-        * where a timeout + cancel does indicate an actual failure.
-        */
-       if (status != HCI_ERROR_UNKNOWN_CONN_ID ||
-           (params && params->explicit_connect))
-               mgmt_connect_failed(hdev, &conn->dst, conn->type,
-                                   conn->dst_type, status);
-
-       /* Since we may have temporarily stopped the background scanning in
-        * favor of connection establishment, we should restart it.
-        */
-       hci_update_passive_scan(hdev);
+       hci_connect_le_scan_cleanup(conn, status);
 
        /* Enable advertising in case this was a failed connection
         * attempt as a peripheral.
@@ -1237,15 +1245,15 @@ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err)
 {
        struct hci_conn *conn = data;
 
+       bt_dev_dbg(hdev, "err %d", err);
+
        hci_dev_lock(hdev);
 
        if (!err) {
-               hci_connect_le_scan_cleanup(conn);
+               hci_connect_le_scan_cleanup(conn, 0x00);
                goto done;
        }
 
-       bt_dev_err(hdev, "request failed to create LE connection: err %d", err);
-
        /* Check if connection is still pending */
        if (conn != hci_lookup_le_connect(hdev))
                goto done;
@@ -2438,6 +2446,12 @@ void hci_conn_hash_flush(struct hci_dev *hdev)
                c->state = BT_CLOSED;
 
                hci_disconn_cfm(c, HCI_ERROR_LOCAL_HOST_TERM);
+
+               /* Unlink before deleting otherwise it is possible that
+                * hci_conn_del removes the link which may cause the list to
+                * contain items already freed.
+                */
+               hci_conn_unlink(c);
                hci_conn_del(c);
        }
 }
@@ -2775,6 +2789,9 @@ int hci_abort_conn(struct hci_conn *conn, u8 reason)
 {
        int r = 0;
 
+       if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
+               return 0;
+
        switch (conn->state) {
        case BT_CONNECTED:
        case BT_CONFIG:
index ad92a4be5851739cba345c629690b9274e154db5..e87c928c9e17ae5608bd4614ed6f59edf680bcf4 100644
@@ -2881,16 +2881,6 @@ static void cs_le_create_conn(struct hci_dev *hdev, bdaddr_t *peer_addr,
 
        conn->resp_addr_type = peer_addr_type;
        bacpy(&conn->resp_addr, peer_addr);
-
-       /* We don't want the connection attempt to stick around
-        * indefinitely since LE doesn't have a page timeout concept
-        * like BR/EDR. Set a timer for any connection that doesn't use
-        * the accept list for connecting.
-        */
-       if (filter_policy == HCI_LE_USE_PEER_ADDR)
-               queue_delayed_work(conn->hdev->workqueue,
-                                  &conn->le_conn_timeout,
-                                  conn->conn_timeout);
 }
 
 static void hci_cs_le_create_conn(struct hci_dev *hdev, u8 status)
@@ -5902,6 +5892,12 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
        if (status)
                goto unlock;
 
+       /* Drop the connection if it has been aborted */
+       if (test_bit(HCI_CONN_CANCEL, &conn->flags)) {
+               hci_conn_drop(conn);
+               goto unlock;
+       }
+
        if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
                addr_type = BDADDR_LE_PUBLIC;
        else
@@ -6995,7 +6991,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
                bis->iso_qos.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
                bis->iso_qos.in.sdu = le16_to_cpu(ev->max_pdu);
 
-               hci_connect_cfm(bis, ev->status);
+               hci_iso_setup_path(bis);
        }
 
        hci_dev_unlock(hdev);
index 5a6aa1627791b530abf33f06be87b465e7d40af3..632be12672887ae0fdb8d4df67de4f63aa855f4e 100644
@@ -246,8 +246,9 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
 
        skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk);
        if (IS_ERR(skb)) {
-               bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
-                               PTR_ERR(skb));
+               if (!event)
+                       bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
+                                  PTR_ERR(skb));
                return PTR_ERR(skb);
        }
 
@@ -5126,8 +5127,11 @@ static int hci_le_connect_cancel_sync(struct hci_dev *hdev,
        if (test_bit(HCI_CONN_SCANNING, &conn->flags))
                return 0;
 
+       if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
+               return 0;
+
        return __hci_cmd_sync_status(hdev, HCI_OP_LE_CREATE_CONN_CANCEL,
-                                    6, &conn->dst, HCI_CMD_TIMEOUT);
+                                    0, NULL, HCI_CMD_TIMEOUT);
 }
 
 static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn)
@@ -6102,6 +6106,9 @@ int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn)
                                       conn->conn_timeout, NULL);
 
 done:
+       if (err == -ETIMEDOUT)
+               hci_le_connect_cancel_sync(hdev, conn);
+
        /* Re-enable advertising after the connection attempt is finished. */
        hci_resume_advertising_sync(hdev);
        return err;
index bed1a7b9205c20cceacd45a529b50f9b378dff9c..707f229f896a1fb01195b3001fd1629cbf16d0d9 100644
@@ -433,7 +433,7 @@ static void hidp_set_timer(struct hidp_session *session)
 static void hidp_del_timer(struct hidp_session *session)
 {
        if (session->idle_to > 0)
-               del_timer(&session->timer);
+               del_timer_sync(&session->timer);
 }
 
 static void hidp_process_report(struct hidp_session *session, int type,
index 49926f59cc1230286deaa9c28d6da9eeb6b42352..55a7226233f96df0836e5945be61e9ec3fed8880 100644
@@ -4652,33 +4652,27 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
 
        BT_DBG("scid 0x%4.4x dcid 0x%4.4x", scid, dcid);
 
-       mutex_lock(&conn->chan_lock);
-
-       chan = __l2cap_get_chan_by_scid(conn, dcid);
+       chan = l2cap_get_chan_by_scid(conn, dcid);
        if (!chan) {
-               mutex_unlock(&conn->chan_lock);
                cmd_reject_invalid_cid(conn, cmd->ident, dcid, scid);
                return 0;
        }
 
-       l2cap_chan_hold(chan);
-       l2cap_chan_lock(chan);
-
        rsp.dcid = cpu_to_le16(chan->scid);
        rsp.scid = cpu_to_le16(chan->dcid);
        l2cap_send_cmd(conn, cmd->ident, L2CAP_DISCONN_RSP, sizeof(rsp), &rsp);
 
        chan->ops->set_shutdown(chan);
 
+       mutex_lock(&conn->chan_lock);
        l2cap_chan_del(chan, ECONNRESET);
+       mutex_unlock(&conn->chan_lock);
 
        chan->ops->close(chan);
 
        l2cap_chan_unlock(chan);
        l2cap_chan_put(chan);
 
-       mutex_unlock(&conn->chan_lock);
-
        return 0;
 }
 
@@ -4698,33 +4692,27 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
 
        BT_DBG("dcid 0x%4.4x scid 0x%4.4x", dcid, scid);
 
-       mutex_lock(&conn->chan_lock);
-
-       chan = __l2cap_get_chan_by_scid(conn, scid);
+       chan = l2cap_get_chan_by_scid(conn, scid);
        if (!chan) {
-               mutex_unlock(&conn->chan_lock);
                return 0;
        }
 
-       l2cap_chan_hold(chan);
-       l2cap_chan_lock(chan);
-
        if (chan->state != BT_DISCONN) {
                l2cap_chan_unlock(chan);
                l2cap_chan_put(chan);
-               mutex_unlock(&conn->chan_lock);
                return 0;
        }
 
+       mutex_lock(&conn->chan_lock);
        l2cap_chan_del(chan, 0);
+       mutex_unlock(&conn->chan_lock);
 
        chan->ops->close(chan);
 
        l2cap_chan_unlock(chan);
        l2cap_chan_put(chan);
 
-       mutex_unlock(&conn->chan_lock);
-
        return 0;
 }
 
index 1111da4e2f2bd533572ee82a1b42ffd725a2d593..cd1a27ac555d009a65dbebb5a5ea125b9261f147 100644
@@ -235,27 +235,41 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
        return err;
 }
 
-static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+static int sco_connect(struct sock *sk)
 {
        struct sco_conn *conn;
        struct hci_conn *hcon;
+       struct hci_dev  *hdev;
        int err, type;
 
        BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
 
+       hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
+       if (!hdev)
+               return -EHOSTUNREACH;
+
+       hci_dev_lock(hdev);
+
        if (lmp_esco_capable(hdev) && !disable_esco)
                type = ESCO_LINK;
        else
                type = SCO_LINK;
 
        if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
-           (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
-               return -EOPNOTSUPP;
+           (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+               err = -EOPNOTSUPP;
+               goto unlock;
+       }
 
        hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
                               sco_pi(sk)->setting, &sco_pi(sk)->codec);
-       if (IS_ERR(hcon))
-               return PTR_ERR(hcon);
+       if (IS_ERR(hcon)) {
+               err = PTR_ERR(hcon);
+               goto unlock;
+       }
+
+       hci_dev_unlock(hdev);
+       hci_dev_put(hdev);
 
        conn = sco_conn_add(hcon);
        if (!conn) {
@@ -263,13 +277,15 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
                return -ENOMEM;
        }
 
-       /* Update source addr of the socket */
-       bacpy(&sco_pi(sk)->src, &hcon->src);
-
        err = sco_chan_add(conn, sk, NULL);
        if (err)
                return err;
 
+       lock_sock(sk);
+
+       /* Update source addr of the socket */
+       bacpy(&sco_pi(sk)->src, &hcon->src);
+
        if (hcon->state == BT_CONNECTED) {
                sco_sock_clear_timer(sk);
                sk->sk_state = BT_CONNECTED;
@@ -278,6 +294,13 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
                sco_sock_set_timer(sk, sk->sk_sndtimeo);
        }
 
+       release_sock(sk);
+
+       return err;
+
+unlock:
+       hci_dev_unlock(hdev);
+       hci_dev_put(hdev);
        return err;
 }
 
@@ -565,7 +588,6 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
 {
        struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
        struct sock *sk = sock->sk;
-       struct hci_dev  *hdev;
        int err;
 
        BT_DBG("sk %p", sk);
@@ -574,37 +596,26 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
            addr->sa_family != AF_BLUETOOTH)
                return -EINVAL;
 
-       lock_sock(sk);
-       if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
-               err = -EBADFD;
-               goto done;
-       }
+       if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
+               return -EBADFD;
 
-       if (sk->sk_type != SOCK_SEQPACKET) {
-               err = -EINVAL;
-               goto done;
-       }
+       if (sk->sk_type != SOCK_SEQPACKET)
+               return -EINVAL;
-       }
-
-       hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
-       if (!hdev) {
-               err = -EHOSTUNREACH;
-               goto done;
-       }
-       hci_dev_lock(hdev);
 
+       lock_sock(sk);
        /* Set destination address and psm */
        bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+       release_sock(sk);
 
-       err = sco_connect(hdev, sk);
-       hci_dev_unlock(hdev);
-       hci_dev_put(hdev);
+       err = sco_connect(sk);
        if (err)
-               goto done;
+               return err;
+
+       lock_sock(sk);
 
        err = bt_sock_wait_state(sk, BT_CONNECTED,
                                 sock_sndtimeo(sk, flags & O_NONBLOCK));
 
-done:
        release_sock(sk);
        return err;
 }
@@ -1129,6 +1140,8 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
                        break;
                }
 
+               release_sock(sk);
+
                /* find total buffer size required to copy codec + caps */
                hci_dev_lock(hdev);
                list_for_each_entry(c, &hdev->local_codecs, list) {
@@ -1146,15 +1159,13 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
                buf_len += sizeof(struct bt_codecs);
                if (buf_len > len) {
                        hci_dev_put(hdev);
-                       err = -ENOBUFS;
-                       break;
+                       return -ENOBUFS;
                }
                ptr = optval;
 
                if (put_user(num_codecs, ptr)) {
                        hci_dev_put(hdev);
-                       err = -EFAULT;
-                       break;
+                       return -EFAULT;
                }
                ptr += sizeof(num_codecs);
 
@@ -1194,12 +1205,14 @@ static int sco_sock_getsockopt(struct socket *sock, int level, int optname,
                        ptr += len;
                }
 
-               if (!err && put_user(buf_len, optlen))
-                       err = -EFAULT;
-
                hci_dev_unlock(hdev);
                hci_dev_put(hdev);
 
+               lock_sock(sk);
+
+               if (!err && put_user(buf_len, optlen))
+                       err = -EFAULT;
+
                break;
 
        default:
index 253584777101f2e6af3fc30107516f1e1197f8d3..1488f700bf819a9ee0b8cb59cdfa501075b2e74a 100644
@@ -3199,6 +3199,7 @@ static u16 skb_tx_hash(const struct net_device *dev,
        }
 
        if (skb_rx_queue_recorded(skb)) {
+               DEBUG_NET_WARN_ON_ONCE(qcount == 0);
                hash = skb_get_rx_queue(skb);
                if (hash >= qoffset)
                        hash -= qoffset;
@@ -10846,7 +10847,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
                    dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
                        skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
                                                     GFP_KERNEL, NULL, 0,
-                                                    portid, nlmsg_seq(nlh));
+                                                    portid, nlh);
 
                /*
                 *      Flush the unicast and multicast chains
index 5d8eb57867a96fa1730fcafd95f0801fbfc76188..6e44e92ebdf5dc6aaa91b32c3c439b15044024ef 100644
@@ -3972,16 +3972,23 @@ static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb)
 struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
                                       unsigned int change,
                                       u32 event, gfp_t flags, int *new_nsid,
-                                      int new_ifindex, u32 portid, u32 seq)
+                                      int new_ifindex, u32 portid,
+                                      const struct nlmsghdr *nlh)
 {
        struct net *net = dev_net(dev);
        struct sk_buff *skb;
        int err = -ENOBUFS;
+       u32 seq = 0;
 
        skb = nlmsg_new(if_nlmsg_size(dev, 0), flags);
        if (skb == NULL)
                goto errout;
 
+       if (nlmsg_report(nlh))
+               seq = nlmsg_seq(nlh);
+       else
+               portid = 0;
+
        err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
                               type, portid, seq, change, 0, 0, event,
                               new_nsid, new_ifindex, -1, flags);
@@ -4017,7 +4024,7 @@ static void rtmsg_ifinfo_event(int type, struct net_device *dev,
                return;
 
        skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid,
-                                    new_ifindex, portid, nlmsg_seq(nlh));
+                                    new_ifindex, portid, nlh);
        if (skb)
                rtmsg_ifinfo_send(skb, dev, flags, portid, nlh);
 }
index 1a31815104d617a66fee5e981131ceeffaa67128..4c0879798eb8a830abf748bdada7790d06154f7a 100644
@@ -5599,18 +5599,18 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
        if (skb_cloned(to))
                return false;
 
-       /* In general, avoid mixing slab allocated and page_pool allocated
-        * pages within the same SKB. However when @to is not pp_recycle and
-        * @from is cloned, we can transition frag pages from page_pool to
-        * reference counted.
-        *
-        * On the other hand, don't allow coalescing two pp_recycle SKBs if
-        * @from is cloned, in case the SKB is using page_pool fragment
+       /* In general, avoid mixing page_pool and non-page_pool allocated
+        * pages within the same SKB. Additionally avoid dealing with clones
+        * with page_pool pages, in case the SKB is using page_pool fragment
         * references (PP_FLAG_PAGE_FRAG). Since we only take full page
         * references for cloned SKBs at the moment that would result in
         * inconsistent reference counts.
+        * In theory we could take full references if @from is cloned and
+        * !@to->pp_recycle, but it's tricky (due to a potential race with
+        * the clone disappearing) and rare, so not worth dealing with.
         */
-       if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
+       if (to->pp_recycle != from->pp_recycle ||
+           (from->pp_recycle && skb_cloned(from)))
                return false;
 
        if (len <= skb_tailroom(to)) {
index 528d4b37983df8c61bd3f6f50b9f4ccbe8cc7132..fb85aca819619b344bdbb74f72e1e0a0d4c3e5b2 100644 (file)
@@ -734,13 +734,21 @@ __bpf_kfunc int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, u64 *tim
  * bpf_xdp_metadata_rx_hash - Read XDP frame RX hash.
  * @ctx: XDP context pointer.
  * @hash: Return value pointer.
+ * @rss_type: Return value pointer for RSS type.
+ *
+ * The RSS hash type (@rss_type) specifies which portion of the packet headers
+ * the NIC hardware used when calculating the RSS hash value. The RSS type can
+ * be decoded via &enum xdp_rss_hash_type, either by matching on individual
+ * L3/L4 bits ``XDP_RSS_L*`` or via the combined traditional *RSS Hashing
+ * Types* ``XDP_RSS_TYPE_L*``.
  *
  * Return:
  * * Returns 0 on success or ``-errno`` on error.
  * * ``-EOPNOTSUPP`` : means device driver doesn't implement kfunc
  * * ``-ENODATA``    : means no RX-hash available for this frame
  */
-__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash)
+__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
+                                        enum xdp_rss_hash_type *rss_type)
 {
        return -EOPNOTSUPP;
 }
index 0d0cc4ef2b85a453582d3301f62d7d25d439e592..40fe70fc2015d5ce0a8d2791a714f1324ecb231e 100644 (file)
@@ -25,6 +25,7 @@ static int ip_local_port_range_min[] = { 1, 1 };
 static int ip_local_port_range_max[] = { 65535, 65535 };
 static int tcp_adv_win_scale_min = -31;
 static int tcp_adv_win_scale_max = 31;
+static int tcp_app_win_max = 31;
 static int tcp_min_snd_mss_min = TCP_MIN_SND_MSS;
 static int tcp_min_snd_mss_max = 65535;
 static int ip_privileged_port_min;
@@ -1198,6 +1199,8 @@ static struct ctl_table ipv4_net_table[] = {
                .maxlen         = sizeof(u8),
                .mode           = 0644,
                .proc_handler   = proc_dou8vec_minmax,
+               .extra1         = SYSCTL_ZERO,
+               .extra2         = &tcp_app_win_max,
        },
        {
                .procname       = "tcp_adv_win_scale",
index ea370afa70ed979266dbeea474b034e833b15db4..b9d55277cb858a3a288e1ae800c9f507444f0be1 100644 (file)
@@ -2780,7 +2780,7 @@ static int tcp_prog_seq_show(struct bpf_prog *prog, struct bpf_iter_meta *meta,
 static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter)
 {
        while (iter->cur_sk < iter->end_sk)
-               sock_put(iter->batch[iter->cur_sk++]);
+               sock_gen_put(iter->batch[iter->cur_sk++]);
 }
 
 static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter,
@@ -2941,7 +2941,7 @@ static void *bpf_iter_tcp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
                 * st->bucket.  See tcp_seek_last_pos().
                 */
                st->offset++;
-               sock_put(iter->batch[iter->cur_sk++]);
+               sock_gen_put(iter->batch[iter->cur_sk++]);
        }
 
        if (iter->cur_sk < iter->end_sk)
index 9fb2f33ee3a76a09bbe15a9aaf1371a804f91ee2..a675acfb901d102ce56563b1d50ae827d9e04859 100644 (file)
@@ -1395,9 +1395,11 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
                        msg->msg_name = &sin;
                        msg->msg_namelen = sizeof(sin);
 do_udp_sendmsg:
-                       if (ipv6_only_sock(sk))
-                               return -ENETUNREACH;
-                       return udp_sendmsg(sk, msg, len);
+                       err = ipv6_only_sock(sk) ?
+                               -ENETUNREACH : udp_sendmsg(sk, msg, len);
+                       msg->msg_name = sin6;
+                       msg->msg_namelen = addr_len;
+                       return err;
                }
        }
 
index d237d142171c5a96b13ef459832fc988fa2f5401..bceaab8dd8e460e7a3459e12af6089ab6bb6b2b7 100644 (file)
@@ -9,11 +9,18 @@
 void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
                                              struct request_sock *req)
 {
-       struct sock *ssk = subflow->tcp_sock;
-       struct sock *sk = subflow->conn;
+       struct sock *sk, *ssk;
        struct sk_buff *skb;
        struct tcp_sock *tp;
 
+       /* on early fallback the subflow context is deleted by
+        * subflow_syn_recv_sock()
+        */
+       if (!subflow)
+               return;
+
+       ssk = subflow->tcp_sock;
+       sk = subflow->conn;
        tp = tcp_sk(ssk);
 
        subflow->is_mptfo = 1;
index b30cea2fbf3fd0d4706573c4c41624ca3d9cfe26..355f798d575a40542195fcc724819e1de9f2e4e7 100644 (file)
@@ -1192,9 +1192,8 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
         */
        if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
                if (mp_opt.data_fin && mp_opt.data_len == 1 &&
-                   mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64) &&
-                   schedule_work(&msk->work))
-                       sock_hold(subflow->conn);
+                   mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64))
+                       mptcp_schedule_work((struct sock *)msk);
 
                return true;
        }
index 60b23b2716c4083349f3f68655d243398bc31776..06c5872e3b0036ae5b70c9124e0ee8add38c003f 100644 (file)
@@ -2626,7 +2626,7 @@ static void mptcp_worker(struct work_struct *work)
 
        lock_sock(sk);
        state = sk->sk_state;
-       if (unlikely(state == TCP_CLOSE))
+       if (unlikely((1 << state) & (TCPF_CLOSE | TCPF_LISTEN)))
                goto unlock;
 
        mptcp_check_data_fin_ack(sk);
index a0041360ee9d95b0cf85845e98c0f157a578e59d..d34588850545703deea6b1b7864c9c90c5d63e96 100644 (file)
@@ -408,9 +408,8 @@ void mptcp_subflow_reset(struct sock *ssk)
 
        tcp_send_active_reset(ssk, GFP_ATOMIC);
        tcp_done(ssk);
-       if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&
-           schedule_work(&mptcp_sk(sk)->work))
-               return; /* worker will put sk for us */
+       if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags))
+               mptcp_schedule_work(sk);
 
        sock_put(sk);
 }
@@ -1118,8 +1117,8 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
                                skb_ext_del(skb, SKB_EXT_MPTCP);
                                return MAPPING_OK;
                        } else {
-                               if (updated && schedule_work(&msk->work))
-                                       sock_hold((struct sock *)msk);
+                               if (updated)
+                                       mptcp_schedule_work((struct sock *)msk);
 
                                return MAPPING_DATA_FIN;
                        }
@@ -1222,17 +1221,12 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
 /* sched mptcp worker to remove the subflow if no more data is pending */
 static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
 {
-       struct sock *sk = (struct sock *)msk;
-
        if (likely(ssk->sk_state != TCP_CLOSE))
                return;
 
        if (skb_queue_empty(&ssk->sk_receive_queue) &&
-           !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags)) {
-               sock_hold(sk);
-               if (!schedule_work(&msk->work))
-                       sock_put(sk);
-       }
+           !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
+               mptcp_schedule_work((struct sock *)msk);
 }
 
 static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
index ca3ebfdb30231dd48b22cc36d8890c5e4b39223d..a8cf9a88758ef555743e3873648a1ea14b07c0a2 100644 (file)
@@ -913,7 +913,7 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
 {
        struct vport *vport = ovs_vport_rcu(dp, out_port);
 
-       if (likely(vport)) {
+       if (likely(vport && netif_carrier_ok(vport->dev))) {
                u16 mru = OVS_CB(skb)->mru;
                u32 cutlen = OVS_CB(skb)->cutlen;
 
index 3a70255c8d02fa1b66efb063aabf4a2b8dbcd305..76f0434d3d06a48b80d80c54241440ff4ce8ea9c 100644 (file)
@@ -498,6 +498,11 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
        if (!size || len != ALIGN(size, 4) + hdrlen)
                goto err;
 
+       if ((cb->type == QRTR_TYPE_NEW_SERVER ||
+            cb->type == QRTR_TYPE_RESUME_TX) &&
+           size < sizeof(struct qrtr_ctrl_pkt))
+               goto err;
+
        if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
            cb->type != QRTR_TYPE_RESUME_TX)
                goto err;
@@ -510,9 +515,6 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
                /* Remote node endpoint can bridge other distant nodes */
                const struct qrtr_ctrl_pkt *pkt;
 
-               if (size < sizeof(*pkt))
-                       goto err;
-
                pkt = data + hdrlen;
                qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
        }
index 94727feb07b3e0ebd80e88acf5d08ed19f24da0f..b046b11200c93e53f1df4a91d061449772a643d1 100644 (file)
@@ -1154,7 +1154,8 @@ static void sctp_generate_iftsn(struct sctp_outq *q, __u32 ctsn)
 
 #define _sctp_walk_ifwdtsn(pos, chunk, end) \
        for (pos = chunk->subh.ifwdtsn_hdr->skip; \
-            (void *)pos < (void *)chunk->subh.ifwdtsn_hdr->skip + (end); pos++)
+            (void *)pos <= (void *)chunk->subh.ifwdtsn_hdr->skip + (end) - \
+                           sizeof(struct sctp_ifwdtsn_skip); pos++)
 
 #define sctp_walk_ifwdtsn(pos, ch) \
        _sctp_walk_ifwdtsn((pos), (ch), ntohs((ch)->chunk_hdr->length) - \
index c6b4a62276f6d8e16d58c6156d6f042c54fed355..50c38b624f772c05d8ecd2745da8fa719d6dc4b1 100644 (file)
@@ -3270,6 +3270,17 @@ static int __smc_create(struct net *net, struct socket *sock, int protocol,
                        sk_common_release(sk);
                        goto out;
                }
+
+               /* smc_clcsock_release() does not wait for smc->clcsock->sk's
+                * destruction; its sk_state might not be TCP_CLOSE after
+                * smc->sk is close()d, and TCP timers can fire later,
+                * which need a net ref.
+                */
+               sk = smc->clcsock->sk;
+               __netns_tracker_free(net, &sk->ns_tracker, false);
+               sk->sk_net_refcnt = 1;
+               get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+               sock_inuse_add(net, 1);
        } else {
                smc->clcsock = clcsock;
        }
index 61f72eb8d9be7bc5de132b4304c5ebffbfe96341..4d90691505b12f6b3bcfc050e56e5452ce45b0d4 100644 (file)
@@ -27,21 +27,6 @@ fi ; \
 tar -I $(KGZIP) -c $(RCS_TAR_IGNORE) -f $(2).tar.gz \
        --transform 's:^:$(2)/:S' $(TAR_CONTENT) $(3)
 
-# tarball compression
-# ---------------------------------------------------------------------------
-
-%.tar.gz: %.tar
-       $(call cmd,gzip)
-
-%.tar.bz2: %.tar
-       $(call cmd,bzip2)
-
-%.tar.xz: %.tar
-       $(call cmd,xzmisc)
-
-%.tar.zst: %.tar
-       $(call cmd,zstd)
-
 # Git
 # ---------------------------------------------------------------------------
 
@@ -57,16 +42,24 @@ check-git:
                false; \
        fi
 
+git-config-tar.gz  = -c tar.tar.gz.command="$(KGZIP)"
+git-config-tar.bz2 = -c tar.tar.bz2.command="$(KBZIP2)"
+git-config-tar.xz  = -c tar.tar.xz.command="$(XZ)"
+git-config-tar.zst = -c tar.tar.zst.command="$(ZSTD)"
+
+quiet_cmd_archive = ARCHIVE $@
+      cmd_archive = git -C $(srctree) $(git-config-tar$(suffix $@)) archive \
+                    --output=$$(realpath $@) --prefix=$(basename $@)/ $(archive-args)
+
 # Linux source tarball
 # ---------------------------------------------------------------------------
 
-quiet_cmd_archive_linux = ARCHIVE $@
-      cmd_archive_linux = \
-       git -C $(srctree) archive --output=$$(realpath $@) --prefix=$(basename $@)/ $$(cat $<)
+linux-tarballs := $(addprefix linux, .tar.gz)
 
-targets += linux.tar
-linux.tar: .tmp_HEAD FORCE
-       $(call if_changed,archive_linux)
+targets += $(linux-tarballs)
+$(linux-tarballs): archive-args = $$(cat $<)
+$(linux-tarballs): .tmp_HEAD FORCE
+       $(call if_changed,archive)
 
 # rpm-pkg
 # ---------------------------------------------------------------------------
@@ -94,7 +87,7 @@ binrpm-pkg:
                $(UTS_MACHINE)-linux -bb $(objtree)/binkernel.spec
 
 quiet_cmd_debianize = GEN     $@
-      cmd_debianize = $(srctree)/scripts/package/mkdebian
+      cmd_debianize = $(srctree)/scripts/package/mkdebian $(mkdebian-opts)
 
 debian: FORCE
        $(call cmd,debianize)
@@ -103,6 +96,7 @@ PHONY += debian-orig
 debian-orig: private source = $(shell dpkg-parsechangelog -S Source)
 debian-orig: private version = $(shell dpkg-parsechangelog -S Version | sed 's/-[^-]*$$//')
 debian-orig: private orig-name = $(source)_$(version).orig.tar.gz
+debian-orig: mkdebian-opts = --need-source
 debian-orig: linux.tar.gz debian
        $(Q)if [ "$(df  --output=target .. 2>/dev/null)" = "$(df --output=target $< 2>/dev/null)" ]; then \
                ln -f $< ../$(orig-name); \
@@ -145,10 +139,17 @@ tar-install: FORCE
        $(Q)$(MAKE) -f $(srctree)/Makefile
        +$(Q)$(srctree)/scripts/package/buildtar $@
 
+compress-tar.gz  = -I "$(KGZIP)"
+compress-tar.bz2 = -I "$(KBZIP2)"
+compress-tar.xz  = -I "$(XZ)"
+compress-tar.zst = -I "$(ZSTD)"
+
 quiet_cmd_tar = TAR     $@
-      cmd_tar = cd $<; tar cf ../$@ --owner=root --group=root --sort=name *
+      cmd_tar = cd $<; tar cf ../$@ $(compress-tar$(suffix $@)) --owner=root --group=root --sort=name *
 
-linux-$(KERNELRELEASE)-$(ARCH).tar: tar-install
+dir-tarballs := $(addprefix linux-$(KERNELRELEASE)-$(ARCH), .tar .tar.gz .tar.bz2 .tar.xz .tar.zst)
+
+$(dir-tarballs): tar-install
        $(call cmd,tar)
 
 PHONY += dir-pkg
@@ -180,16 +181,17 @@ quiet_cmd_perf_version_file = GEN     $@
 .tmp_perf/PERF-VERSION-FILE: .tmp_HEAD $(srctree)/tools/perf/util/PERF-VERSION-GEN | .tmp_perf
        $(call cmd,perf_version_file)
 
-quiet_cmd_archive_perf = ARCHIVE $@
-      cmd_archive_perf = \
-       git -C $(srctree) archive --output=$$(realpath $@) --prefix=$(basename $@)/ \
-       --add-file=$$(realpath $(word 2, $^)) \
+perf-archive-args = --add-file=$$(realpath $(word 2, $^)) \
        --add-file=$$(realpath $(word 3, $^)) \
        $$(cat $(word 2, $^))^{tree} $$(cat $<)
 
-targets += perf-$(KERNELVERSION).tar
-perf-$(KERNELVERSION).tar: tools/perf/MANIFEST .tmp_perf/HEAD .tmp_perf/PERF-VERSION-FILE FORCE
-       $(call if_changed,archive_perf)
+
+perf-tarballs := $(addprefix perf-$(KERNELVERSION), .tar .tar.gz .tar.bz2 .tar.xz .tar.zst)
+
+targets += $(perf-tarballs)
+$(perf-tarballs): archive-args = $(perf-archive-args)
+$(perf-tarballs): tools/perf/MANIFEST .tmp_perf/HEAD .tmp_perf/PERF-VERSION-FILE FORCE
+       $(call if_changed,archive)
 
 PHONY += perf-tar-src-pkg
 perf-tar-src-pkg: perf-$(KERNELVERSION).tar
index f842ab50a780c657b7d2755c7c8a089393414474..8a98b7bb78a0c39d71d60cdb760d1d31b6ba9bab 100755 (executable)
@@ -1,44 +1,36 @@
 #!/bin/sh
 # SPDX-License-Identifier: GPL-2.0-only
 
-diff_patch="${1}"
-untracked_patch="${2}"
-srctree=$(dirname $0)/../..
+diff_patch=$1
 
-rm -f ${diff_patch} ${untracked_patch}
+mkdir -p "$(dirname "${diff_patch}")"
 
-if ! ${srctree}/scripts/check-git; then
-       exit
-fi
-
-mkdir -p "$(dirname ${diff_patch})" "$(dirname ${untracked_patch})"
+git -C "${srctree:-.}" diff HEAD > "${diff_patch}"
 
-git -C "${srctree}" diff HEAD > "${diff_patch}"
-
-if [ ! -s "${diff_patch}" ]; then
-       rm -f "${diff_patch}"
+if [ ! -s "${diff_patch}" ] ||
+   [ -z "$(git -C "${srctree:-.}" ls-files --other --exclude-standard | head -n1)" ]; then
        exit
 fi
 
-git -C ${srctree} status --porcelain --untracked-files=all |
-while read stat path
-do
-       if [ "${stat}" = '??' ]; then
-
-               if ! diff -u /dev/null "${srctree}/${path}" > .tmp_diff &&
-                       ! head -n1 .tmp_diff | grep -q "Binary files"; then
-                       {
-                               echo "--- /dev/null"
-                               echo "+++ linux/$path"
-                               cat .tmp_diff | tail -n +3
-                       } >> ${untracked_patch}
-               fi
-       fi
-done
-
-rm -f .tmp_diff
-
-if [ ! -s "${diff_patch}" ]; then
-       rm -f "${diff_patch}"
-       exit
-fi
+# The source tarball, which is generated by 'git archive', contains everything
+# you committed in the repository. If you have a local diff ('git diff HEAD'),
+# it goes into ${diff_patch}. If untracked files remain, the resulting
+# source package may not be correct.
+#
+# Examples:
+#  - You modified a source file to add #include "new-header.h"
+#    but forgot to add new-header.h
+#  - You modified a Makefile to add 'obj-$(CONFIG_FOO) += new-driver.o'
+#    but forgot to add new-driver.c
+#
+# You need to commit them, or at least stage them by 'git add'.
+#
+# This script does not take care of untracked files because doing so would
+# introduce additional complexity. Instead, print a warning message here if
+# untracked files are found.
+# If all untracked files are just garbage, you can ignore this warning.
+echo >&2 "============================ WARNING ============================"
echo >&2 "Your working tree has a diff from HEAD, and also untracked file(s)."
+echo >&2 "Please make sure you did 'git add' for all new files you need in"
+echo >&2 "the source package."
+echo >&2 "================================================================="
index e20a2b5be9eb29c4af62227750e170b9dabb36f6..a4c2c2276223fa1de435344c80e74a7afdbbe389 100755 (executable)
@@ -84,7 +84,66 @@ set_debarch() {
        fi
 }
 
+# Create debian/source/ if it is a source package build
+gen_source ()
+{
+       mkdir -p debian/source
+
+       echo "3.0 (quilt)" > debian/source/format
+
+       {
+               echo "diff-ignore"
+               echo "extend-diff-ignore = .*"
+       } > debian/source/local-options
+
+       # Add .config as a patch
+       mkdir -p debian/patches
+       {
+               echo "Subject: Add .config"
+               echo "Author: ${maintainer}"
+               echo
+               echo "--- /dev/null"
+               echo "+++ linux/.config"
+               diff -u /dev/null "${KCONFIG_CONFIG}" | tail -n +3
+       } > debian/patches/config.patch
+       echo config.patch > debian/patches/series
+
+       "${srctree}/scripts/package/gen-diff-patch" debian/patches/diff.patch
+       if [ -s debian/patches/diff.patch ]; then
+               sed -i "
+                       1iSubject: Add local diff
+                       1iAuthor: ${maintainer}
+                       1i
+               " debian/patches/diff.patch
+
+               echo diff.patch >> debian/patches/series
+       else
+               rm -f debian/patches/diff.patch
+       fi
+}
+
 rm -rf debian
+mkdir debian
+
+email=${DEBEMAIL-$EMAIL}
+
+# use email string directly if it contains <email>
+if echo "${email}" | grep -q '<.*>'; then
+       maintainer=${email}
+else
+       # or construct the maintainer string
+       user=${KBUILD_BUILD_USER-$(id -nu)}
+       name=${DEBFULLNAME-${user}}
+       if [ -z "${email}" ]; then
+               buildhost=${KBUILD_BUILD_HOST-$(hostname -f 2>/dev/null || hostname)}
+               email="${user}@${buildhost}"
+       fi
+       maintainer="${name} <${email}>"
+fi
+
+if [ "$1" = --need-source ]; then
+       gen_source
+fi
 
 # Some variables and settings used throughout the script
 version=$KERNELRELEASE
@@ -104,22 +163,6 @@ fi
 debarch=
 set_debarch
 
-email=${DEBEMAIL-$EMAIL}
-
-# use email string directly if it contains <email>
-if echo $email | grep -q '<.*>'; then
-       maintainer=$email
-else
-       # or construct the maintainer string
-       user=${KBUILD_BUILD_USER-$(id -nu)}
-       name=${DEBFULLNAME-$user}
-       if [ -z "$email" ]; then
-               buildhost=${KBUILD_BUILD_HOST-$(hostname -f 2>/dev/null || hostname)}
-               email="$user@$buildhost"
-       fi
-       maintainer="$name <$email>"
-fi
-
 # Try to determine distribution
 if [ -n "$KDEB_CHANGELOG_DIST" ]; then
         distribution=$KDEB_CHANGELOG_DIST
@@ -132,34 +175,6 @@ else
         echo >&2 "Install lsb-release or set \$KDEB_CHANGELOG_DIST explicitly"
 fi
 
-mkdir -p debian/source/
-echo "3.0 (quilt)" > debian/source/format
-
-{
-       echo "diff-ignore"
-       echo "extend-diff-ignore = .*"
-} > debian/source/local-options
-
-# Add .config as a patch
-mkdir -p debian/patches
-{
-       echo "Subject: Add .config"
-       echo "Author: ${maintainer}"
-       echo
-       echo "--- /dev/null"
-       echo "+++ linux/.config"
-       diff -u /dev/null "${KCONFIG_CONFIG}" | tail -n +3
-} > debian/patches/config
-echo config > debian/patches/series
-
-$(dirname $0)/gen-diff-patch debian/patches/diff.patch debian/patches/untracked.patch
-if [ -f debian/patches/diff.patch ]; then
-       echo diff.patch >> debian/patches/series
-fi
-if [ -f debian/patches/untracked.patch ]; then
-       echo untracked.patch >> debian/patches/series
-fi
-
 echo $debarch > debian/arch
 extra_build_depends=", $(if_enabled_echo CONFIG_UNWINDER_ORC libelf-dev:native)"
 extra_build_depends="$extra_build_depends, $(if_enabled_echo CONFIG_SYSTEM_TRUSTED_KEYRING libssl-dev:native)"
index b7d1dc28a5d6de457e988021785a18e0a7310762..fc8ad3fbc0a95a96717dc8fd21a81c6ca77502b2 100755 (executable)
@@ -19,8 +19,7 @@ else
        mkdir -p rpmbuild/SOURCES
        cp linux.tar.gz rpmbuild/SOURCES
        cp "${KCONFIG_CONFIG}" rpmbuild/SOURCES/config
-       $(dirname $0)/gen-diff-patch rpmbuild/SOURCES/diff.patch rpmbuild/SOURCES/untracked.patch
-       touch rpmbuild/SOURCES/diff.patch rpmbuild/SOURCES/untracked.patch
+       "${srctree}/scripts/package/gen-diff-patch" rpmbuild/SOURCES/diff.patch
 fi
 
 if grep -q CONFIG_MODULES=y include/config/auto.conf; then
@@ -56,7 +55,6 @@ sed -e '/^DEL/d' -e 's/^\t*//' <<EOF
 $S     Source0: linux.tar.gz
 $S     Source1: config
 $S     Source2: diff.patch
-$S     Source3: untracked.patch
        Provides: $PROVIDES
 $S     BuildRequires: bc binutils bison dwarves
 $S     BuildRequires: (elfutils-libelf-devel or libelf-devel) flex
@@ -94,12 +92,7 @@ $S$M
 $S     %prep
 $S     %setup -q -n linux
 $S     cp %{SOURCE1} .config
-$S     if [ -s %{SOURCE2} ]; then
-$S             patch -p1 < %{SOURCE2}
-$S     fi
-$S     if [ -s %{SOURCE3} ]; then
-$S             patch -p1 < %{SOURCE3}
-$S     fi
+$S     patch -p1 < %{SOURCE2}
 $S
 $S     %build
 $S     $MAKE %{?_smp_mflags} KERNELRELEASE=$KERNELRELEASE KBUILD_BUILD_VERSION=%{release}
index 53e094cc411f8676678f424e321f16c66dabeb21..dfe783d01d7d20cb4295625cecc8d5953f3607b3 100644 (file)
@@ -490,7 +490,7 @@ int snd_tscm_stream_start_duplex(struct snd_tscm *tscm, unsigned int rate)
                // packet is important for media clock recovery.
                err = amdtp_domain_start(&tscm->domain, tx_init_skip_cycles, true, true);
                if (err < 0)
-                       return err;
+                       goto error;
 
                if (!amdtp_domain_wait_ready(&tscm->domain, READY_TIMEOUT_MS)) {
                        err = -ETIMEDOUT;
index 65012af6a36e4f0c3307340de764cb8f426d4bb2..f58b14b490455040a4dd8449cd6768fc990ab421 100644 (file)
@@ -561,10 +561,13 @@ int snd_cs8427_iec958_active(struct snd_i2c_device *cs8427, int active)
        if (snd_BUG_ON(!cs8427))
                return -ENXIO;
        chip = cs8427->private_data;
-       if (active)
+       if (active) {
                memcpy(chip->playback.pcm_status,
                       chip->playback.def_status, 24);
-       chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE;
+               chip->playback.pcm_ctl->vd[0].access &= ~SNDRV_CTL_ELEM_ACCESS_INACTIVE;
+       } else {
+               chip->playback.pcm_ctl->vd[0].access |= SNDRV_CTL_ELEM_ACCESS_INACTIVE;
+       }
        snd_ctl_notify(cs8427->bus->card,
                       SNDRV_CTL_EVENT_MASK_VALUE | SNDRV_CTL_EVENT_MASK_INFO,
                       &chip->playback.pcm_ctl->id);
index 48af77ae8020f5a5c2ecca1c2bd974cc186f1779..6ec394fb1846845464b5e80158ce401ea4c03fcd 100644 (file)
@@ -1236,7 +1236,7 @@ static int snd_emu10k1_capture_mic_close(struct snd_pcm_substream *substream)
 {
        struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream);
 
-       emu->capture_interrupt = NULL;
+       emu->capture_mic_interrupt = NULL;
        emu->pcm_capture_mic_substream = NULL;
        return 0;
 }
@@ -1344,7 +1344,7 @@ static int snd_emu10k1_capture_efx_close(struct snd_pcm_substream *substream)
 {
        struct snd_emu10k1 *emu = snd_pcm_substream_chip(substream);
 
-       emu->capture_interrupt = NULL;
+       emu->capture_efx_interrupt = NULL;
        emu->pcm_capture_efx_substream = NULL;
        return 0;
 }
@@ -1781,17 +1781,21 @@ int snd_emu10k1_pcm_efx(struct snd_emu10k1 *emu, int device)
        struct snd_kcontrol *kctl;
        int err;
 
-       err = snd_pcm_new(emu->card, "emu10k1 efx", device, 8, 1, &pcm);
+       err = snd_pcm_new(emu->card, "emu10k1 efx", device, emu->audigy ? 0 : 8, 1, &pcm);
        if (err < 0)
                return err;
 
        pcm->private_data = emu;
 
-       snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1_fx8010_playback_ops);
+       if (!emu->audigy)
+               snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1_fx8010_playback_ops);
        snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_emu10k1_capture_efx_ops);
 
        pcm->info_flags = 0;
-       strcpy(pcm->name, "Multichannel Capture/PT Playback");
+       if (emu->audigy)
+               strcpy(pcm->name, "Multichannel Capture");
+       else
+               strcpy(pcm->name, "Multichannel Capture/PT Playback");
        emu->pcm_efx = pcm;
 
        /* EFX capture - record the "FXBUS2" channels, by default we connect the EXTINs 
index 4ffa3a59f419fa80d3a32856a4cb8a32157c248d..5c6980394dcec2e2d92730e0cbb54eb7b73f2277 100644 (file)
@@ -4604,7 +4604,7 @@ HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI",   patch_i915_tgl_hdmi),
 HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI",  patch_i915_tgl_hdmi),
 HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi),
 HDA_CODEC_ENTRY(0x80862818, "Raptorlake HDMI", patch_i915_tgl_hdmi),
-HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI",        patch_i915_adlp_hdmi),
+HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI",        patch_i915_tgl_hdmi),
 HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi),
 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI",        patch_i915_icl_hdmi),
 HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_adlp_hdmi),
index 26187f5d56b59ccf44455cab1fdb3f6d0c0fc365..3b9f077a227f7ea6ba24de147bc7a6dc9037428d 100644 (file)
@@ -6960,6 +6960,8 @@ enum {
        ALC269_FIXUP_DELL_M101Z,
        ALC269_FIXUP_SKU_IGNORE,
        ALC269_FIXUP_ASUS_G73JW,
+       ALC269_FIXUP_ASUS_N7601ZM_PINS,
+       ALC269_FIXUP_ASUS_N7601ZM,
        ALC269_FIXUP_LENOVO_EAPD,
        ALC275_FIXUP_SONY_HWEQ,
        ALC275_FIXUP_SONY_DISABLE_AAMIX,
@@ -7256,6 +7258,29 @@ static const struct hda_fixup alc269_fixups[] = {
                        { }
                }
        },
+       [ALC269_FIXUP_ASUS_N7601ZM_PINS] = {
+               .type = HDA_FIXUP_PINS,
+               .v.pins = (const struct hda_pintbl[]) {
+                       { 0x19, 0x03A11050 },
+                       { 0x1a, 0x03A11C30 },
+                       { 0x21, 0x03211420 },
+                       { }
+               }
+       },
+       [ALC269_FIXUP_ASUS_N7601ZM] = {
+               .type = HDA_FIXUP_VERBS,
+               .v.verbs = (const struct hda_verb[]) {
+                       {0x20, AC_VERB_SET_COEF_INDEX, 0x62},
+                       {0x20, AC_VERB_SET_PROC_COEF, 0xa007},
+                       {0x20, AC_VERB_SET_COEF_INDEX, 0x10},
+                       {0x20, AC_VERB_SET_PROC_COEF, 0x8420},
+                       {0x20, AC_VERB_SET_COEF_INDEX, 0x0f},
+                       {0x20, AC_VERB_SET_PROC_COEF, 0x7774},
+                       { }
+               },
+               .chained = true,
+               .chain_id = ALC269_FIXUP_ASUS_N7601ZM_PINS,
+       },
        [ALC269_FIXUP_LENOVO_EAPD] = {
                .type = HDA_FIXUP_VERBS,
                .v.verbs = (const struct hda_verb[]) {
@@ -9466,6 +9491,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+       SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM),
        SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
        SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
        SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
@@ -9663,6 +9689,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x17aa, 0x22f1, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
        SND_PCI_QUIRK(0x17aa, 0x22f2, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
        SND_PCI_QUIRK(0x17aa, 0x22f3, "Thinkpad", ALC287_FIXUP_CS35L41_I2C_2),
+       SND_PCI_QUIRK(0x17aa, 0x2318, "Thinkpad Z13 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+       SND_PCI_QUIRK(0x17aa, 0x2319, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
+       SND_PCI_QUIRK(0x17aa, 0x231a, "Thinkpad Z16 Gen2", ALC287_FIXUP_CS35L41_I2C_2),
        SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
        SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
        SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
index a794a01a68ca60e55bcd0ab653a0db3e6c3ffcc5..61258b0aac8d65a51206c2fd4f269e197003eec5 100644 (file)
@@ -1707,6 +1707,7 @@ static const struct snd_pci_quirk stac925x_fixup_tbl[] = {
 };
 
 static const struct hda_pintbl ref92hd73xx_pin_configs[] = {
+       // Port A-H
        { 0x0a, 0x02214030 },
        { 0x0b, 0x02a19040 },
        { 0x0c, 0x01a19020 },
@@ -1715,9 +1716,12 @@ static const struct hda_pintbl ref92hd73xx_pin_configs[] = {
        { 0x0f, 0x01014010 },
        { 0x10, 0x01014020 },
        { 0x11, 0x01014030 },
+       // CD in
        { 0x12, 0x02319040 },
+       // Digital Mic ins
        { 0x13, 0x90a000f0 },
        { 0x14, 0x90a000f0 },
+       // Digital outs
        { 0x22, 0x01452050 },
        { 0x23, 0x01452050 },
        {}
@@ -1758,6 +1762,7 @@ static const struct hda_pintbl alienware_m17x_pin_configs[] = {
 };
 
 static const struct hda_pintbl intel_dg45id_pin_configs[] = {
+       // Analog outputs
        { 0x0a, 0x02214230 },
        { 0x0b, 0x02A19240 },
        { 0x0c, 0x01013214 },
@@ -1765,6 +1770,9 @@ static const struct hda_pintbl intel_dg45id_pin_configs[] = {
        { 0x0e, 0x01A19250 },
        { 0x0f, 0x01011212 },
        { 0x10, 0x01016211 },
+       // Digital output
+       { 0x22, 0x01451380 },
+       { 0x23, 0x40f000f0 },
        {}
 };
 
@@ -1955,6 +1963,8 @@ static const struct snd_pci_quirk stac92hd73xx_fixup_tbl[] = {
                                "DFI LanParty", STAC_92HD73XX_REF),
        SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101,
                                "DFI LanParty", STAC_92HD73XX_REF),
+       SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5001,
+                               "Intel DP45SG", STAC_92HD73XX_INTEL),
        SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5002,
                                "Intel DG45ID", STAC_92HD73XX_INTEL),
        SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x5003,
index 7271a18ab3e22d0d637ef0e00518a9afdee6539c..8251a0fc6ee94d80403298b9cb69fabf4c0bc4dc 100644 (file)
@@ -167,8 +167,7 @@ void test_xdp_do_redirect(void)
 
        if (!ASSERT_EQ(query_opts.feature_flags,
                       NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
-                      NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
-                      NETDEV_XDP_ACT_NDO_XMIT_SG,
+                      NETDEV_XDP_ACT_RX_SG,
                       "veth_src query_opts.feature_flags"))
                goto out;
 
@@ -176,11 +175,36 @@ void test_xdp_do_redirect(void)
        if (!ASSERT_OK(err, "veth_dst bpf_xdp_query"))
                goto out;
 
+       if (!ASSERT_EQ(query_opts.feature_flags,
+                      NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+                      NETDEV_XDP_ACT_RX_SG,
+                      "veth_dst query_opts.feature_flags"))
+               goto out;
+
+       /* Enable GRO */
+       SYS("ethtool -K veth_src gro on");
+       SYS("ethtool -K veth_dst gro on");
+
+       err = bpf_xdp_query(ifindex_src, XDP_FLAGS_DRV_MODE, &query_opts);
+       if (!ASSERT_OK(err, "veth_src bpf_xdp_query gro on"))
+               goto out;
+
        if (!ASSERT_EQ(query_opts.feature_flags,
                       NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
                       NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
                       NETDEV_XDP_ACT_NDO_XMIT_SG,
-                      "veth_dst query_opts.feature_flags"))
+                      "veth_src query_opts.feature_flags gro on"))
+               goto out;
+
+       err = bpf_xdp_query(ifindex_dst, XDP_FLAGS_DRV_MODE, &query_opts);
+       if (!ASSERT_OK(err, "veth_dst bpf_xdp_query gro on"))
+               goto out;
+
+       if (!ASSERT_EQ(query_opts.feature_flags,
+                      NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+                      NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
+                      NETDEV_XDP_ACT_NDO_XMIT_SG,
+                      "veth_dst query_opts.feature_flags gro on"))
                goto out;
 
        memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN);
index aa4beae99f4f6ed6df6f2c60d9b8470f229dd1b7..8c5e98da9ae9f036889eda11508573b8e27fe751 100644 (file)
@@ -273,6 +273,8 @@ static int verify_xsk_metadata(struct xsk *xsk)
        if (!ASSERT_NEQ(meta->rx_hash, 0, "rx_hash"))
                return -1;
 
+       ASSERT_EQ(meta->rx_hash_type, 0, "rx_hash_type");
+
        xsk_ring_cons__release(&xsk->rx, 1);
        refill_rx(xsk, comp_addr);
 
index 4c55b4d79d3d44744b52b6485e0ecaa26e4a9b69..e1c787815e44bba5a9f1e41d4649f84f16f1510f 100644 (file)
@@ -12,10 +12,14 @@ struct {
        __type(value, __u32);
 } xsk SEC(".maps");
 
+__u64 pkts_skip = 0;
+__u64 pkts_fail = 0;
+__u64 pkts_redir = 0;
+
 extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
                                         __u64 *timestamp) __ksym;
-extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
-                                   __u32 *hash) __ksym;
+extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
+                                   enum xdp_rss_hash_type *rss_type) __ksym;
 
 SEC("xdp")
 int rx(struct xdp_md *ctx)
@@ -26,7 +30,7 @@ int rx(struct xdp_md *ctx)
        struct udphdr *udp = NULL;
        struct iphdr *iph = NULL;
        struct xdp_meta *meta;
-       int ret;
+       int err;
 
        data = (void *)(long)ctx->data;
        data_end = (void *)(long)ctx->data_end;
@@ -46,17 +50,20 @@ int rx(struct xdp_md *ctx)
                        udp = NULL;
        }
 
-       if (!udp)
+       if (!udp) {
+               __sync_add_and_fetch(&pkts_skip, 1);
                return XDP_PASS;
+       }
 
-       if (udp->dest != bpf_htons(9091))
+       /* Forwarding UDP:9091 to AF_XDP */
+       if (udp->dest != bpf_htons(9091)) {
+               __sync_add_and_fetch(&pkts_skip, 1);
                return XDP_PASS;
+       }
 
-       bpf_printk("forwarding UDP:9091 to AF_XDP");
-
-       ret = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta));
-       if (ret != 0) {
-               bpf_printk("bpf_xdp_adjust_meta returned %d", ret);
+       err = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta));
+       if (err) {
+               __sync_add_and_fetch(&pkts_fail, 1);
                return XDP_PASS;
        }
 
@@ -65,20 +72,19 @@ int rx(struct xdp_md *ctx)
        meta = data_meta;
 
        if (meta + 1 > data) {
-               bpf_printk("bpf_xdp_adjust_meta doesn't appear to work");
+               __sync_add_and_fetch(&pkts_fail, 1);
                return XDP_PASS;
        }
 
-       if (!bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp))
-               bpf_printk("populated rx_timestamp with %llu", meta->rx_timestamp);
-       else
+       err = bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp);
+       if (err)
                meta->rx_timestamp = 0; /* Used by AF_XDP as not avail signal */
 
-       if (!bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash))
-               bpf_printk("populated rx_hash with %u", meta->rx_hash);
-       else
-               meta->rx_hash = 0; /* Used by AF_XDP as not avail signal */
+       err = bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type);
+       if (err < 0)
+               meta->rx_hash_err = err; /* Used by AF_XDP as no hash signal */
 
+       __sync_add_and_fetch(&pkts_redir, 1);
        return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS);
 }
 
index 77678b03438970c7bbc6c1b3102739d9c9baf71f..d151d406a123efc004c1fb09f28844adccade711 100644 (file)
@@ -21,8 +21,8 @@ struct {
 
 extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
                                         __u64 *timestamp) __ksym;
-extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
-                                   __u32 *hash) __ksym;
+extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
+                                   enum xdp_rss_hash_type *rss_type) __ksym;
 
 SEC("xdp")
 int rx(struct xdp_md *ctx)
@@ -56,7 +56,7 @@ int rx(struct xdp_md *ctx)
        if (timestamp == 0)
                meta->rx_timestamp = 1;
 
-       bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash);
+       bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type);
 
        return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS);
 }
index cf69d05451c39b5873fb823491a1a8c4010f9b5d..85f88d9d7a78565d4239a6fb6a18ae6a36d03f6e 100644 (file)
@@ -5,17 +5,18 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_endian.h>
 
-extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
-                                   __u32 *hash) __ksym;
+extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
+                                   enum xdp_rss_hash_type *rss_type) __ksym;
 
 int called;
 
 SEC("freplace/rx")
 int freplace_rx(struct xdp_md *ctx)
 {
+       enum xdp_rss_hash_type type = 0;
        u32 hash = 0;
        /* Call _any_ metadata function to make sure we don't crash. */
-       bpf_xdp_metadata_rx_hash(ctx, &hash);
+       bpf_xdp_metadata_rx_hash(ctx, &hash, &type);
        called++;
        return XDP_PASS;
 }
index 1c8acb68b977cd04bda4096fdde12344b1a3c54a..987cf0db5ebc80f70b3c62d641bc878ed8ee1805 100644 (file)
@@ -141,7 +141,11 @@ static void verify_xdp_metadata(void *data)
        meta = data - sizeof(*meta);
 
        printf("rx_timestamp: %llu\n", meta->rx_timestamp);
-       printf("rx_hash: %u\n", meta->rx_hash);
+       if (meta->rx_hash_err < 0)
+               printf("No rx_hash err=%d\n", meta->rx_hash_err);
+       else
+               printf("rx_hash: 0x%X with RSS type:0x%X\n",
+                      meta->rx_hash, meta->rx_hash_type);
 }
 
 static void verify_skb_metadata(int fd)
@@ -212,7 +216,9 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd)
        while (true) {
                errno = 0;
                ret = poll(fds, rxq + 1, 1000);
-               printf("poll: %d (%d)\n", ret, errno);
+               printf("poll: %d (%d) skip=%llu fail=%llu redir=%llu\n",
+                      ret, errno, bpf_obj->bss->pkts_skip,
+                      bpf_obj->bss->pkts_fail, bpf_obj->bss->pkts_redir);
                if (ret < 0)
                        break;
                if (ret == 0)
index f6780fbb0a214765a14a293cf7fb0654e4a32297..0c4624dc6f2f719183b21e7e0bd1fbc9e6b72164 100644 (file)
@@ -12,4 +12,8 @@
 struct xdp_meta {
        __u64 rx_timestamp;
        __u32 rx_hash;
+       union {
+               __u32 rx_hash_type;
+               __s32 rx_hash_err;
+       };
 };
index a39bb2560d9bfe88473d2434dacbf4f2ac9fce8f..03f92d7aeb19b970acfdb2d28ad882f630c12567 100644 (file)
@@ -8,11 +8,12 @@ TEST_PROGS := \
        dev_addr_lists.sh \
        mode-1-recovery-updelay.sh \
        mode-2-recovery-updelay.sh \
-       option_prio.sh \
+       bond_options.sh \
        bond-eth-type-change.sh
 
 TEST_FILES := \
        lag_lib.sh \
+       bond_topo_3d1c.sh \
        net_forwarding_lib.sh
 
 include ../../../lib.mk
diff --git a/tools/testing/selftests/drivers/net/bonding/bond_options.sh b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
new file mode 100755 (executable)
index 0000000..db29a31
--- /dev/null
@@ -0,0 +1,264 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Test bonding options with modes 1, 5 and 6
+
+ALL_TESTS="
+       prio
+       arp_validate
+"
+
+REQUIRE_MZ=no
+NUM_NETIFS=0
+lib_dir=$(dirname "$0")
+source ${lib_dir}/net_forwarding_lib.sh
+source ${lib_dir}/bond_topo_3d1c.sh
+
+skip_prio()
+{
+       local skip=1
+
+       # check if iproute supports the prio option
+       ip -n ${s_ns} link set eth0 type bond_slave prio 10
+       [[ $? -ne 0 ]] && skip=0
+
+       # check if kernel support prio option
+       ip -n ${s_ns} -d link show eth0 | grep -q "prio 10"
+       [[ $? -ne 0 ]] && skip=0
+
+       return $skip
+}
+
+skip_ns()
+{
+       local skip=1
+
+       # check if iproute supports the ns_ip6_target option
+       ip -n ${s_ns} link add bond1 type bond ns_ip6_target ${g_ip6}
+       [[ $? -ne 0 ]] && skip=0
+
+       # check if kernel support ns_ip6_target option
+       ip -n ${s_ns} -d link show bond1 | grep -q "ns_ip6_target ${g_ip6}"
+       [[ $? -ne 0 ]] && skip=0
+
+       ip -n ${s_ns} link del bond1
+
+       return $skip
+}
+
+active_slave=""
+check_active_slave()
+{
+       local target_active_slave=$1
+       active_slave=$(cmd_jq "ip -n ${s_ns} -d -j link show bond0" ".[].linkinfo.info_data.active_slave")
+       test "$active_slave" = "$target_active_slave"
+       check_err $? "Current active slave is $active_slave, expected $target_active_slave"
+}
+
+
+# Test bonding prio option
+prio_test()
+{
+       local param="$1"
+       RET=0
+
+       # create bond
+       bond_reset "${param}"
+
+       # check bonding member prio value
+       ip -n ${s_ns} link set eth0 type bond_slave prio 0
+       ip -n ${s_ns} link set eth1 type bond_slave prio 10
+       ip -n ${s_ns} link set eth2 type bond_slave prio 11
+       cmd_jq "ip -n ${s_ns} -d -j link show eth0" \
+               ".[].linkinfo.info_slave_data | select (.prio == 0)" "-e" &> /dev/null
+       check_err $? "eth0 prio is not 0"
+       cmd_jq "ip -n ${s_ns} -d -j link show eth1" \
+               ".[].linkinfo.info_slave_data | select (.prio == 10)" "-e" &> /dev/null
+       check_err $? "eth1 prio is not 10"
+       cmd_jq "ip -n ${s_ns} -d -j link show eth2" \
+               ".[].linkinfo.info_slave_data | select (.prio == 11)" "-e" &> /dev/null
+       check_err $? "eth2 prio is not 11"
+
+       bond_check_connection "setup"
+
+       # active slave should be the primary slave
+       check_active_slave eth1
+
+       # active slave should be the higher prio slave
+       ip -n ${s_ns} link set $active_slave down
+       bond_check_connection "fail over"
+       check_active_slave eth2
+
+       # when only 1 slave is up
+       ip -n ${s_ns} link set $active_slave down
+       bond_check_connection "only 1 slave up"
+       check_active_slave eth0
+
+       # when a higher prio slave comes back up
+       ip -n ${s_ns} link set eth2 up
+       bond_check_connection "higher prio slave up"
+       case $primary_reselect in
+               "0")
+                       check_active_slave "eth2"
+                       ;;
+               "1")
+                       check_active_slave "eth0"
+                       ;;
+               "2")
+                       check_active_slave "eth0"
+                       ;;
+       esac
+       local pre_active_slave=$active_slave
+
+       # when the primary slave comes back up
+       ip -n ${s_ns} link set eth1 up
+       bond_check_connection "primary slave up"
+       case $primary_reselect in
+               "0")
+                       check_active_slave "eth1"
+                       ;;
+               "1")
+                       check_active_slave "$pre_active_slave"
+                       ;;
+               "2")
+                       check_active_slave "$pre_active_slave"
+                       ip -n ${s_ns} link set $active_slave down
+                       bond_check_connection "pre_active slave down"
+                       check_active_slave "eth1"
+                       ;;
+       esac
+
+       # Test changing bond slave prio
+       if [[ "$primary_reselect" == "0" ]];then
+               ip -n ${s_ns} link set eth0 type bond_slave prio 1000000
+               ip -n ${s_ns} link set eth1 type bond_slave prio 0
+               ip -n ${s_ns} link set eth2 type bond_slave prio -50
+               ip -n ${s_ns} -d link show eth0 | grep -q 'prio 1000000'
+               check_err $? "eth0 prio is not 1000000"
+               ip -n ${s_ns} -d link show eth1 | grep -q 'prio 0'
+               check_err $? "eth1 prio is not 0"
+               ip -n ${s_ns} -d link show eth2 | grep -q 'prio -50'
+               check_err $? "eth2 prio is not -50"
+               check_active_slave "eth1"
+
+               ip -n ${s_ns} link set $active_slave down
+               bond_check_connection "change slave prio"
+               check_active_slave "eth0"
+       fi
+}
+
+prio_miimon()
+{
+       local primary_reselect
+       local mode=$1
+
+       for primary_reselect in 0 1 2; do
+               prio_test "mode $mode miimon 100 primary eth1 primary_reselect $primary_reselect"
+               log_test "prio" "$mode miimon primary_reselect $primary_reselect"
+       done
+}
+
+prio_arp()
+{
+       local primary_reselect
+       local mode=$1
+
+       for primary_reselect in 0 1 2; do
+               prio_test "mode active-backup arp_interval 100 arp_ip_target ${g_ip4} primary eth1 primary_reselect $primary_reselect"
+               log_test "prio" "$mode arp_ip_target primary_reselect $primary_reselect"
+       done
+}
+
+prio_ns()
+{
+       local primary_reselect
+       local mode=$1
+
+       if skip_ns; then
+               log_test_skip "prio ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'."
+               return 0
+       fi
+
+       for primary_reselect in 0 1 2; do
+               prio_test "mode active-backup arp_interval 100 ns_ip6_target ${g_ip6} primary eth1 primary_reselect $primary_reselect"
+               log_test "prio" "$mode ns_ip6_target primary_reselect $primary_reselect"
+       done
+}
+
+prio()
+{
+       local mode modes="active-backup balance-tlb balance-alb"
+
+       if skip_prio; then
+               log_test_skip "prio" "Current iproute or kernel doesn't support bond option 'prio'."
+               return 0
+       fi
+
+       for mode in $modes; do
+               prio_miimon $mode
+               prio_arp $mode
+               prio_ns $mode
+       done
+}
+
+arp_validate_test()
+{
+       local param="$1"
+       RET=0
+
+       # create bond
+       bond_reset "${param}"
+
+       bond_check_connection
+       [ $RET -ne 0 ] && log_test "arp_validate" "$retmsg"
+
+       # wait a while to make sure the mii status is stable
+       sleep 5
+       for i in $(seq 0 2); do
+               mii_status=$(cmd_jq "ip -n ${s_ns} -j -d link show eth$i" ".[].linkinfo.info_slave_data.mii_status")
+               if [ ${mii_status} != "UP" ]; then
+                       RET=1
+                       log_test "arp_validate" "interface eth$i mii_status $mii_status"
+               fi
+       done
+}
+
+arp_validate_arp()
+{
+       local mode=$1
+       local val
+       for val in $(seq 0 6); do
+               arp_validate_test "mode $mode arp_interval 100 arp_ip_target ${g_ip4} arp_validate $val"
+               log_test "arp_validate" "$mode arp_ip_target arp_validate $val"
+       done
+}
+
+arp_validate_ns()
+{
+       local mode=$1
+       local val
+
+       if skip_ns; then
+               log_test_skip "arp_validate ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'."
+               return 0
+       fi
+
+       for val in $(seq 0 6); do
+               arp_validate_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6} arp_validate $val"
+               log_test "arp_validate" "$mode ns_ip6_target arp_validate $val"
+       done
+}
+
+arp_validate()
+{
+       arp_validate_arp "active-backup"
+       arp_validate_ns "active-backup"
+}
+
+trap cleanup EXIT
+
+setup_prepare
+setup_wait
+tests_run
+
+exit $EXIT_STATUS
diff --git a/tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh b/tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh
new file mode 100644 (file)
index 0000000..4045ca9
--- /dev/null
@@ -0,0 +1,143 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Topology for bond modes 1, 5 and 6 testing
+#
+#  +-------------------------------------+
+#  |                bond0                |
+#  |                  +                  |  Server
+#  |      eth0        | eth1   eth2      |  192.0.2.1/24
+#  |        +-------------------+        |  2001:db8::1/24
+#  |        |         |         |        |
+#  +-------------------------------------+
+#           |         |         |
+#  +-------------------------------------+
+#  |        |         |         |        |
+#  |    +---+---------+---------+---+    |  Gateway
+#  |    |            br0            |    |  192.0.2.254/24
+#  |    +-------------+-------------+    |  2001:db8::254/24
+#  |                  |                  |
+#  +-------------------------------------+
+#                     |
+#  +-------------------------------------+
+#  |                  |                  |  Client
+#  |                  +                  |  192.0.2.10/24
+#  |                eth0                 |  2001:db8::10/24
+#  +-------------------------------------+
+
+s_ns="s-$(mktemp -u XXXXXX)"
+c_ns="c-$(mktemp -u XXXXXX)"
+g_ns="g-$(mktemp -u XXXXXX)"
+s_ip4="192.0.2.1"
+c_ip4="192.0.2.10"
+g_ip4="192.0.2.254"
+s_ip6="2001:db8::1"
+c_ip6="2001:db8::10"
+g_ip6="2001:db8::254"
+
+gateway_create()
+{
+       ip netns add ${g_ns}
+       ip -n ${g_ns} link add br0 type bridge
+       ip -n ${g_ns} link set br0 up
+       ip -n ${g_ns} addr add ${g_ip4}/24 dev br0
+       ip -n ${g_ns} addr add ${g_ip6}/24 dev br0
+}
+
+gateway_destroy()
+{
+       ip -n ${g_ns} link del br0
+       ip netns del ${g_ns}
+}
+
+server_create()
+{
+       ip netns add ${s_ns}
+       ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100
+
+       for i in $(seq 0 2); do
+               ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns}
+
+               ip -n ${g_ns} link set s${i} up
+               ip -n ${g_ns} link set s${i} master br0
+               ip -n ${s_ns} link set eth${i} master bond0
+       done
+
+       ip -n ${s_ns} link set bond0 up
+       ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
+       ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
+       sleep 2
+}
+
+# Reset bond with new mode and options
+bond_reset()
+{
+       local param="$1"
+
+       ip -n ${s_ns} link set bond0 down
+       ip -n ${s_ns} link del bond0
+
+       ip -n ${s_ns} link add bond0 type bond $param
+       for i in $(seq 0 2); do
+               ip -n ${s_ns} link set eth$i master bond0
+       done
+
+       ip -n ${s_ns} link set bond0 up
+       ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0
+       ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0
+       sleep 2
+}
+
+server_destroy()
+{
+       for i in $(seq 0 2); do
+               ip -n ${s_ns} link del eth${i}
+       done
+       ip netns del ${s_ns}
+}
+
+client_create()
+{
+       ip netns add ${c_ns}
+       ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns}
+
+       ip -n ${g_ns} link set c0 up
+       ip -n ${g_ns} link set c0 master br0
+
+       ip -n ${c_ns} link set eth0 up
+       ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0
+       ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0
+}
+
+client_destroy()
+{
+       ip -n ${c_ns} link del eth0
+       ip netns del ${c_ns}
+}
+
+setup_prepare()
+{
+       gateway_create
+       server_create
+       client_create
+}
+
+cleanup()
+{
+       pre_cleanup
+
+       client_destroy
+       server_destroy
+       gateway_destroy
+}
+
+bond_check_connection()
+{
+       local msg=${1:-"check connection"}
+
+       sleep 2
+       ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null
+       check_err $? "${msg}: ping failed"
+       ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null
+       check_err $? "${msg}: ping6 failed"
+}
diff --git a/tools/testing/selftests/drivers/net/bonding/option_prio.sh b/tools/testing/selftests/drivers/net/bonding/option_prio.sh
deleted file mode 100755 (executable)
index c32eebf..0000000
+++ /dev/null
@@ -1,245 +0,0 @@
-#!/bin/bash
-# SPDX-License-Identifier: GPL-2.0
-#
-# Test bonding option prio
-#
-
-ALL_TESTS="
-       prio_arp_ip_target_test
-       prio_miimon_test
-"
-
-REQUIRE_MZ=no
-REQUIRE_JQ=no
-NUM_NETIFS=0
-lib_dir=$(dirname "$0")
-source "$lib_dir"/net_forwarding_lib.sh
-
-destroy()
-{
-       ip link del bond0 &>/dev/null
-       ip link del br0 &>/dev/null
-       ip link del veth0 &>/dev/null
-       ip link del veth1 &>/dev/null
-       ip link del veth2 &>/dev/null
-       ip netns del ns1 &>/dev/null
-       ip link del veth3 &>/dev/null
-}
-
-cleanup()
-{
-       pre_cleanup
-
-       destroy
-}
-
-skip()
-{
-        local skip=1
-       ip link add name bond0 type bond mode 1 miimon 100 &>/dev/null
-       ip link add name veth0 type veth peer name veth0_p
-       ip link set veth0 master bond0
-
-       # check if iproute support prio option
-       ip link set dev veth0 type bond_slave prio 10
-       [[ $? -ne 0 ]] && skip=0
-
-       # check if bonding support prio option
-       ip -d link show veth0 | grep -q "prio 10"
-       [[ $? -ne 0 ]] && skip=0
-
-       ip link del bond0 &>/dev/null
-       ip link del veth0
-
-       return $skip
-}
-
-active_slave=""
-check_active_slave()
-{
-       local target_active_slave=$1
-       active_slave="$(cat /sys/class/net/bond0/bonding/active_slave)"
-       test "$active_slave" = "$target_active_slave"
-       check_err $? "Current active slave is $active_slave but not $target_active_slave"
-}
-
-
-# Test bonding prio option with mode=$mode monitor=$monitor
-# and primary_reselect=$primary_reselect
-prio_test()
-{
-       RET=0
-
-       local monitor=$1
-       local mode=$2
-       local primary_reselect=$3
-
-       local bond_ip4="192.169.1.2"
-       local peer_ip4="192.169.1.1"
-       local bond_ip6="2009:0a:0b::02"
-       local peer_ip6="2009:0a:0b::01"
-
-
-       # create veths
-       ip link add name veth0 type veth peer name veth0_p
-       ip link add name veth1 type veth peer name veth1_p
-       ip link add name veth2 type veth peer name veth2_p
-
-       # create bond
-       if [[ "$monitor" == "miimon" ]];then
-               ip link add name bond0 type bond mode $mode miimon 100 primary veth1 primary_reselect $primary_reselect
-       elif [[ "$monitor" == "arp_ip_target" ]];then
-               ip link add name bond0 type bond mode $mode arp_interval 1000 arp_ip_target $peer_ip4 primary veth1 primary_reselect $primary_reselect
-       elif [[ "$monitor" == "ns_ip6_target" ]];then
-               ip link add name bond0 type bond mode $mode arp_interval 1000 ns_ip6_target $peer_ip6 primary veth1 primary_reselect $primary_reselect
-       fi
-       ip link set bond0 up
-       ip link set veth0 master bond0
-       ip link set veth1 master bond0
-       ip link set veth2 master bond0
-       # check bonding member prio value
-       ip link set dev veth0 type bond_slave prio 0
-       ip link set dev veth1 type bond_slave prio 10
-       ip link set dev veth2 type bond_slave prio 11
-       ip -d link show veth0 | grep -q 'prio 0'
-       check_err $? "veth0 prio is not 0"
-       ip -d link show veth1 | grep -q 'prio 10'
-       check_err $? "veth0 prio is not 10"
-       ip -d link show veth2 | grep -q 'prio 11'
-       check_err $? "veth0 prio is not 11"
-
-       ip link set veth0 up
-       ip link set veth1 up
-       ip link set veth2 up
-       ip link set veth0_p up
-       ip link set veth1_p up
-       ip link set veth2_p up
-
-       # prepare ping target
-       ip link add name br0 type bridge
-       ip link set br0 up
-       ip link set veth0_p master br0
-       ip link set veth1_p master br0
-       ip link set veth2_p master br0
-       ip link add name veth3 type veth peer name veth3_p
-       ip netns add ns1
-       ip link set veth3_p master br0 up
-       ip link set veth3 netns ns1 up
-       ip netns exec ns1 ip addr add $peer_ip4/24 dev veth3
-       ip netns exec ns1 ip addr add $peer_ip6/64 dev veth3
-       ip addr add $bond_ip4/24 dev bond0
-       ip addr add $bond_ip6/64 dev bond0
-       sleep 5
-
-       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-       check_err $? "ping failed 1."
-       ping6 $peer_ip6 -c5 -I bond0 &>/dev/null
-       check_err $? "ping6 failed 1."
-
-       # active salve should be the primary slave
-       check_active_slave veth1
-
-       # active slave should be the higher prio slave
-       ip link set $active_slave down
-       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-       check_err $? "ping failed 2."
-       check_active_slave veth2
-
-       # when only 1 slave is up
-       ip link set $active_slave down
-       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-       check_err $? "ping failed 3."
-       check_active_slave veth0
-
-       # when a higher prio slave change to up
-       ip link set veth2 up
-       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-       check_err $? "ping failed 4."
-       case $primary_reselect in
-               "0")
-                       check_active_slave "veth2"
-                       ;;
-               "1")
-                       check_active_slave "veth0"
-                       ;;
-               "2")
-                       check_active_slave "veth0"
-                       ;;
-       esac
-       local pre_active_slave=$active_slave
-
-       # when the primary slave change to up
-       ip link set veth1 up
-       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-       check_err $? "ping failed 5."
-       case $primary_reselect in
-               "0")
-                       check_active_slave "veth1"
-                       ;;
-               "1")
-                       check_active_slave "$pre_active_slave"
-                       ;;
-               "2")
-                       check_active_slave "$pre_active_slave"
-                       ip link set $active_slave down
-                       ping $peer_ip4 -c5 -I bond0 &>/dev/null
-                       check_err $? "ping failed 6."
-                       check_active_slave "veth1"
-                       ;;
-       esac
-
-       # Test changing bond salve prio
-       if [[ "$primary_reselect" == "0" ]];then
-               ip link set dev veth0 type bond_slave prio 1000000
-               ip link set dev veth1 type bond_slave prio 0
-               ip link set dev veth2 type bond_slave prio -50
-               ip -d link show veth0 | grep -q 'prio 1000000'
-               check_err $? "veth0 prio is not 1000000"
-               ip -d link show veth1 | grep -q 'prio 0'
-               check_err $? "veth1 prio is not 0"
-               ip -d link show veth2 | grep -q 'prio -50'
-               check_err $? "veth3 prio is not -50"
-               check_active_slave "veth1"
-
-               ip link set $active_slave down
-               ping $peer_ip4 -c5 -I bond0 &>/dev/null
-               check_err $? "ping failed 7."
-               check_active_slave "veth0"
-       fi
-
-       cleanup
-
-       log_test "prio_test" "Test bonding option 'prio' with mode=$mode monitor=$monitor and primary_reselect=$primary_reselect"
-}
-
-prio_miimon_test()
-{
-       local mode
-       local primary_reselect
-
-       for mode in 1 5 6; do
-               for primary_reselect in 0 1 2; do
-                       prio_test "miimon" $mode $primary_reselect
-               done
-       done
-}
-
-prio_arp_ip_target_test()
-{
-       local primary_reselect
-
-       for primary_reselect in 0 1 2; do
-               prio_test "arp_ip_target" 1 $primary_reselect
-       done
-}
-
-if skip;then
-       log_test_skip "option_prio.sh" "Current iproute doesn't support 'prio'."
-       exit 0
-fi
-
-trap cleanup EXIT
-
-tests_run
-
-exit "$EXIT_STATUS"
index cc9fd55ab8699b0c3092ea2708ee2a363e846195..2529226ce87ca2afc86b6d96d9d0d5886864a0cd 100644 (file)
@@ -48,3 +48,4 @@ CONFIG_BAREUDP=m
 CONFIG_IPV6_IOAM6_LWTUNNEL=y
 CONFIG_CRYPTO_SM4_GENERIC=y
 CONFIG_AMT=m
+CONFIG_IP_SCTP=m
index 48e52f995a98c84c7c94c0c20cf87ff66fac2c1f..b1eb7bce599dc0e76078df68df84c9ce117eac8e 100755 (executable)
@@ -913,6 +913,7 @@ test_listener()
                $client4_port > /dev/null 2>&1 &
        local listener_pid=$!
 
+       sleep 0.5
        verify_listener_events $client_evts $LISTENER_CREATED $AF_INET 10.0.2.2 $client4_port
 
        # ADD_ADDR from client to server machine reusing the subflow port
@@ -928,6 +929,7 @@ test_listener()
        # Delete the listener from the client ns, if one was created
        kill_wait $listener_pid
 
+       sleep 0.5
        verify_listener_events $client_evts $LISTENER_CLOSED $AF_INET 10.0.2.2 $client4_port
 }
 
index 3243c90d449e6ec2bea681c37974287b7939f55a..5d467d1993cb12a8d225654b4dde833fd23755ba 100644 (file)
@@ -62,7 +62,7 @@ class OvsDatapath(GenericNetlinkSocket):
         nla_map = (
             ("OVS_DP_ATTR_UNSPEC", "none"),
             ("OVS_DP_ATTR_NAME", "asciiz"),
-            ("OVS_DP_ATTR_UPCALL_PID", "uint32"),
+            ("OVS_DP_ATTR_UPCALL_PID", "array(uint32)"),
             ("OVS_DP_ATTR_STATS", "dpstats"),
             ("OVS_DP_ATTR_MEGAFLOW_STATS", "megaflowstats"),
             ("OVS_DP_ATTR_USER_FEATURES", "uint32"),
index b64845b823abde33b051563585d4c718a051a501..4fb9368bf7519b36e8c49ac78ad3ffa9ef15d496 100644 (file)
@@ -61,7 +61,7 @@ and
       id=channel0,name=agent-ctl-path\
  ##data path##
      -chardev pipe,id=charchannel1,path=/tmp/virtio-trace/trace-path-cpu0\
-     -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel0,\
+     -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,\
       id=channel1,name=trace-path-cpu0\
       ...
 
index ee01e40e8bc65e06ba3b74e3f50d5aaa485bcea1..61230532fef10f7261db75e5757a8b2c4366d2d9 100644 (file)
@@ -353,6 +353,12 @@ static int cpio_mkfile(const char *name, const char *location,
                buf.st_mtime = 0xffffffff;
        }
 
+       if (buf.st_mtime < 0) {
+               fprintf(stderr, "%s: Timestamp negative, clipping.\n",
+                       location);
+               buf.st_mtime = 0;
+       }
+
        if (buf.st_size > 0xffffffff) {
                fprintf(stderr, "%s: Size exceeds maximum cpio file size\n",
                        location);
@@ -602,10 +608,10 @@ int main (int argc, char *argv[])
        /*
         * Timestamps after 2106-02-07 06:28:15 UTC have an ascii hex time_t
         * representation that exceeds 8 chars and breaks the cpio header
-        * specification.
+        * specification. Negative timestamps similarly exceed 8 chars.
         */
-       if (default_mtime > 0xffffffff) {
-               fprintf(stderr, "ERROR: Timestamp too large for cpio format\n");
+       if (default_mtime > 0xffffffff || default_mtime < 0) {
+               fprintf(stderr, "ERROR: Timestamp out of range for cpio format\n");
                exit(1);
        }