Merge git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
author David S. Miller <davem@davemloft.net>
Wed, 26 Oct 2022 12:46:38 +0000 (13:46 +0100)
committer David S. Miller <davem@davemloft.net>
Wed, 26 Oct 2022 12:46:38 +0000 (13:46 +0100)
Marc Kleine-Budde says:

====================
This is a pull request of 29 patches for net-next/master.

The first patch is by Daniel S. Trevitz and adds documentation for
switchable termination resistors.

Zhang Changzhong's patch fixes a debug output in the j1939 stack.

Oliver Hartkopp finally removes the pch_can driver, which is
superseded by the generic c_can driver.

Gustavo A. R. Silva replaces a zero-length array with
DECLARE_FLEX_ARRAY() in the ucan driver.

Kees Cook's patch removes a no longer needed silencing of
"-Warray-bounds" warnings for the kvaser_usb driver.

The next 2 patches target the m_can driver. The first, by me, cleans
up the LEC error handling; the second, by Vivek Yadav, extends the
LEC error handling to the data phase of CAN-FD frames.

The next 9 patches all target the gs_usb driver. The first 5 patches
are by me and improve the Kconfig prompt and help text, set
netdev->dev_id to distinguish multi CAN channel devices, allow
loopback and listen only at the same time, and clean up the
gs_can_open() function a bit. The remaining 4 patches are by Jeroen
Hofstee and add support for 2 new features: Bus Error Reporting and
Get State.

Jimmy Assarsson and Anssi Hannula contribute 10 patches for the
kvaser_usb driver. They add Listen Only and Bus Error Reporting
support, handle CMD_ERROR_EVENT errors, and improve CAN state
handling, restart events, and the configuration of the bit timing
parameters.

Another patch by me fixes the indentation in the m_can driver.

A patch by Dongliang Mu cleans up the ucan_disconnect() function in
the ucan driver.

The last patch by Biju Das is for the rcar_canfd driver and cleans up
the reset handling.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
1176 files changed:
.mailmap
Documentation/admin-guide/acpi/index.rst
Documentation/admin-guide/cgroup-v2.rst
Documentation/admin-guide/device-mapper/verity.rst
Documentation/arm64/silicon-errata.rst
Documentation/block/ublk.rst
Documentation/devicetree/bindings/interrupt-controller/sifive,plic-1.0.0.yaml
Documentation/devicetree/bindings/leds/common.yaml
Documentation/devicetree/bindings/leds/mediatek,mt6370-indicator.yaml
Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt [deleted file]
Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml [new file with mode: 0644]
Documentation/devicetree/bindings/mfd/mediatek,mt6370.yaml
Documentation/devicetree/bindings/net/adi,adin1110.yaml
Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
Documentation/devicetree/bindings/net/sff,sfp.yaml
Documentation/devicetree/bindings/pinctrl/xlnx,zynqmp-pinctrl.yaml
Documentation/devicetree/bindings/riscv/cpus.yaml
Documentation/devicetree/bindings/riscv/microchip.yaml
Documentation/devicetree/bindings/riscv/sifive,ccache0.yaml [moved from Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml with 83% similarity]
Documentation/devicetree/bindings/timer/sifive,clint.yaml
Documentation/driver-api/media/mc-core.rst
Documentation/filesystems/ubifs.rst
Documentation/hwmon/corsair-psu.rst
Documentation/mm/page_owner.rst
Documentation/networking/filter.rst
Documentation/networking/index.rst
Documentation/networking/tc-queue-filters.rst [new file with mode: 0644]
Documentation/process/howto.rst
Documentation/process/maintainer-netdev.rst
Documentation/riscv/index.rst
Documentation/riscv/uabi.rst [new file with mode: 0644]
Documentation/tools/rtla/rtla-timerlat-top.rst
Documentation/trace/ftrace.rst
Documentation/translations/it_IT/process/howto.rst
Documentation/translations/ja_JP/howto.rst
Documentation/translations/ko_KR/howto.rst
Documentation/translations/zh_CN/arch.rst [new file with mode: 0644]
Documentation/translations/zh_CN/devicetree/changesets.rst
Documentation/translations/zh_CN/devicetree/dynamic-resolution-notes.rst
Documentation/translations/zh_CN/devicetree/kernel-api.rst
Documentation/translations/zh_CN/devicetree/overlay-notes.rst
Documentation/translations/zh_CN/index.rst
Documentation/translations/zh_CN/mm/ksm.rst
Documentation/translations/zh_CN/mm/page_owner.rst
Documentation/translations/zh_CN/process/howto.rst
Documentation/translations/zh_CN/process/index.rst
Documentation/translations/zh_TW/process/howto.rst
Documentation/userspace-api/media/cec.h.rst.exceptions
Documentation/userspace-api/media/v4l/libv4l-introduction.rst
MAINTAINERS
Makefile
arch/alpha/kernel/core_marvel.c
arch/arm/kernel/process.c
arch/arm/kernel/signal.c
arch/arm/mach-mmp/devices.c
arch/arm/mach-spear/generic.h
arch/arm/mach-spear/spear3xx.c
arch/arm/mach-spear/spear6xx.c
arch/arm64/Kconfig
arch/arm64/include/asm/cputype.h
arch/arm64/include/asm/kvm_pgtable.h
arch/arm64/include/asm/stage2_pgtable.h
arch/arm64/kernel/cpu_errata.c
arch/arm64/kernel/entry-ftrace.S
arch/arm64/kernel/mte.c
arch/arm64/kernel/process.c
arch/arm64/kernel/proton-pack.c
arch/arm64/kernel/syscall.c
arch/arm64/kvm/hyp/Makefile
arch/arm64/kvm/hyp/nvhe/Makefile
arch/arm64/kvm/mmu.c
arch/arm64/kvm/vgic/vgic-its.c
arch/arm64/mm/mteswap.c
arch/arm64/tools/sysreg
arch/loongarch/include/asm/pgtable.h
arch/loongarch/kernel/process.c
arch/loongarch/kernel/vdso.c
arch/mips/kernel/process.c
arch/mips/kernel/vdso.c
arch/openrisc/kernel/dma.c
arch/parisc/include/asm/alternative.h
arch/parisc/include/asm/pdc.h
arch/parisc/include/asm/pgtable.h
arch/parisc/kernel/alternative.c
arch/parisc/kernel/entry.S
arch/parisc/kernel/pdc_cons.c
arch/parisc/kernel/process.c
arch/parisc/kernel/setup.c
arch/parisc/kernel/sys_parisc.c
arch/parisc/kernel/traps.c
arch/parisc/kernel/vdso.c
arch/powerpc/crypto/crc-vpmsum_test.c
arch/powerpc/include/asm/syscalls.h
arch/powerpc/kernel/Makefile
arch/powerpc/kernel/interrupt_64.S
arch/powerpc/kernel/process.c
arch/powerpc/kernel/sys_ppc32.c
arch/powerpc/kernel/syscalls/syscall.tbl
arch/powerpc/kvm/book3s_hv_uvmem.c
arch/powerpc/platforms/pseries/Makefile
arch/powerpc/platforms/pseries/dtl.c
arch/riscv/Kconfig
arch/riscv/Makefile
arch/riscv/boot/dts/microchip/Makefile
arch/riscv/boot/dts/microchip/mpfs-icicle-kit-fabric.dtsi
arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts
arch/riscv/boot/dts/microchip/mpfs-m100pfs-fabric.dtsi [new file with mode: 0644]
arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts [new file with mode: 0644]
arch/riscv/boot/dts/microchip/mpfs-polarberry-fabric.dtsi
arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi [new file with mode: 0644]
arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts [new file with mode: 0644]
arch/riscv/boot/dts/microchip/mpfs.dtsi
arch/riscv/errata/thead/errata.c
arch/riscv/include/asm/cacheflush.h
arch/riscv/include/asm/elf.h
arch/riscv/include/asm/io.h
arch/riscv/include/asm/kvm_vcpu_timer.h
arch/riscv/include/asm/mmu.h
arch/riscv/include/uapi/asm/auxvec.h
arch/riscv/kernel/cpu.c
arch/riscv/kernel/cpufeature.c
arch/riscv/kernel/setup.c
arch/riscv/kernel/sys_riscv.c
arch/riscv/kernel/traps.c
arch/riscv/kernel/vdso.c
arch/riscv/kvm/vcpu.c
arch/riscv/kvm/vcpu_timer.c
arch/riscv/mm/cacheflush.c
arch/riscv/mm/dma-noncoherent.c
arch/riscv/mm/fault.c
arch/s390/kernel/process.c
arch/s390/kernel/vdso.c
arch/s390/mm/mmap.c
arch/sparc/vdso/vma.c
arch/um/drivers/chan.h
arch/um/drivers/mconsole_kern.c
arch/um/drivers/mmapper_kern.c
arch/um/drivers/net_kern.c
arch/um/drivers/ssl.c
arch/um/drivers/stdio_console.c
arch/um/drivers/ubd_kern.c
arch/um/drivers/vector_kern.c
arch/um/drivers/virt-pci.c
arch/um/drivers/virtio_uml.c
arch/um/kernel/physmem.c
arch/um/kernel/process.c
arch/um/kernel/um_arch.c
arch/um/kernel/umid.c
arch/x86/Kconfig
arch/x86/entry/vdso/vma.c
arch/x86/events/intel/lbr.c
arch/x86/include/asm/iommu.h
arch/x86/kernel/cpu/amd.c
arch/x86/kernel/cpu/microcode/amd.c
arch/x86/kernel/cpu/resctrl/core.c
arch/x86/kernel/cpu/topology.c
arch/x86/kernel/fpu/init.c
arch/x86/kernel/fpu/xstate.c
arch/x86/kernel/ftrace_64.S
arch/x86/kernel/module.c
arch/x86/kernel/process.c
arch/x86/kernel/unwind_orc.c
arch/x86/kvm/x86.c
arch/x86/mm/pat/cpa-test.c
arch/x86/net/bpf_jit_comp.c
block/bfq-iosched.h
block/bio.c
block/blk-crypto-fallback.c
block/blk-mq.c
block/blk-wbt.c
block/genhd.c
crypto/async_tx/raid6test.c
crypto/testmgr.c
drivers/acpi/acpi_extlog.c
drivers/acpi/apei/ghes.c
drivers/acpi/arm64/iort.c
drivers/acpi/pci_root.c
drivers/acpi/resource.c
drivers/acpi/scan.c
drivers/ata/ahci.h
drivers/ata/ahci_brcm.c
drivers/ata/ahci_imx.c
drivers/ata/ahci_qoriq.c
drivers/ata/ahci_st.c
drivers/ata/ahci_xgene.c
drivers/ata/sata_rcar.c
drivers/block/drbd/drbd_receiver.c
drivers/block/drbd/drbd_req.c
drivers/block/ublk_drv.c
drivers/block/zram/zram_drv.c
drivers/char/hw_random/bcm2835-rng.c
drivers/char/random.c
drivers/clk/at91/clk-generated.c
drivers/clk/at91/clk-master.c
drivers/clk/at91/clk-peripheral.c
drivers/clk/clk-composite.c
drivers/clk/clk-divider.c
drivers/clk/clk.c
drivers/clk/clk_test.c
drivers/clk/mediatek/clk-mux.c
drivers/clk/qcom/clk-rcg2.c
drivers/clk/qcom/gcc-msm8660.c
drivers/clk/spear/spear3xx_clock.c
drivers/clk/spear/spear6xx_clock.c
drivers/clk/tegra/clk-tegra114.c
drivers/clk/tegra/clk-tegra124.c
drivers/clk/tegra/clk-tegra20.c
drivers/clk/tegra/clk-tegra210.c
drivers/clk/tegra/clk-tegra30.c
drivers/cpufreq/cpufreq-dt.c
drivers/cpufreq/imx6q-cpufreq.c
drivers/cpufreq/qcom-cpufreq-nvmem.c
drivers/cpufreq/sun50i-cpufreq-nvmem.c
drivers/cpufreq/tegra194-cpufreq.c
drivers/dax/hmem/device.c
drivers/dax/super.c
drivers/dma/dmatest.c
drivers/edac/Kconfig
drivers/edac/sifive_edac.c
drivers/firmware/efi/Kconfig
drivers/firmware/efi/arm-runtime.c
drivers/firmware/efi/efi.c
drivers/firmware/efi/libstub/Makefile.zboot
drivers/firmware/efi/libstub/fdt.c
drivers/firmware/efi/libstub/x86-stub.c
drivers/firmware/efi/libstub/zboot.lds
drivers/firmware/efi/riscv-runtime.c
drivers/firmware/efi/vars.c
drivers/gpu/drm/amd/amdgpu/amdgpu.h
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.h
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
drivers/gpu/drm/amd/amdgpu/cik_sdma.c
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
drivers/gpu/drm/amd/amdgpu/si_dma.c
drivers/gpu/drm/amd/amdgpu/sienna_cichlid.c
drivers/gpu/drm/amd/amdgpu/soc15.c
drivers/gpu/drm/amd/amdgpu/soc21.c
drivers/gpu/drm/amd/amdgpu/umc_v6_1.c
drivers/gpu/drm/amd/amdgpu/umc_v6_7.c
drivers/gpu/drm/amd/amdgpu/umc_v8_10.c
drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
drivers/gpu/drm/amd/display/dc/core/dc.c
drivers/gpu/drm/amd/display/dc/core/dc_link.c
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
drivers/gpu/drm/amd/display/dc/core/dc_stream.c
drivers/gpu/drm/amd/display/dc/dc.h
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
drivers/gpu/drm/amd/display/dc/dc_link.h
drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c
drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c
drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
drivers/gpu/drm/amd/display/dc/dml/Makefile
drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h
drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
drivers/gpu/drm/amd/display/dc/inc/core_types.h
drivers/gpu/drm/amd/display/dc/inc/dcn_calcs.h
drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
drivers/gpu/drm/amd/display/dc/inc/hw/cursor_reg_cache.h [new file with mode: 0644]
drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
drivers/gpu/drm/amd/display/dc/inc/resource.h
drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c
drivers/gpu/drm/amd/display/dc/virtual/virtual_link_hwss.c
drivers/gpu/drm/amd/display/dmub/dmub_srv.h
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
drivers/gpu/drm/amd/display/modules/color/color_gamma.c
drivers/gpu/drm/amd/include/asic_reg/umc/umc_8_10_0_offset.h
drivers/gpu/drm/amd/include/asic_reg/umc/umc_8_10_0_sh_mask.h
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
drivers/gpu/drm/amd/pm/amdgpu_pm.c
drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu13_driver_if_v13_0_4.h
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
drivers/gpu/drm/drm_connector.c
drivers/gpu/drm/i915/display/g4x_hdmi.c
drivers/gpu/drm/i915/display/intel_display.c
drivers/gpu/drm/i915/display/intel_fb_pin.c
drivers/gpu/drm/i915/display/intel_psr.c
drivers/gpu/drm/i915/display/skl_watermark.c
drivers/gpu/drm/i915/gem/i915_gem_context.c
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
drivers/gpu/drm/i915/gem/i915_gem_object.c
drivers/gpu/drm/i915/gem/i915_gem_object.h
drivers/gpu/drm/i915/gem/i915_gem_object_types.h
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
drivers/gpu/drm/i915/gt/intel_context.c
drivers/gpu/drm/i915/gt/intel_context.h
drivers/gpu/drm/i915/gt/intel_ggtt.c
drivers/gpu/drm/i915/gt/intel_mocs.c
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
drivers/gpu/drm/i915/i915_gem_gtt.c
drivers/gpu/drm/i915/i915_reg.h
drivers/gpu/drm/i915/selftests/i915_selftest.c
drivers/gpu/drm/nouveau/nouveau_dmem.c
drivers/gpu/drm/panfrost/panfrost_dump.c
drivers/gpu/drm/scheduler/sched_entity.c
drivers/gpu/drm/tests/drm_buddy_test.c
drivers/gpu/drm/tests/drm_format_helper_test.c
drivers/gpu/drm/tests/drm_mm_test.c
drivers/gpu/drm/vc4/vc4_drv.c
drivers/gpu/drm/vc4/vc4_hdmi.c
drivers/hid/hid-ids.h
drivers/hid/hid-lenovo.c
drivers/hid/hid-magicmouse.c
drivers/hid/hid-playstation.c
drivers/hid/hid-quirks.c
drivers/hid/hid-saitek.c
drivers/hwmon/coretemp.c
drivers/hwmon/corsair-psu.c
drivers/hwmon/pwm-fan.c
drivers/i2c/busses/Kconfig
drivers/i2c/busses/i2c-mlxbf.c
drivers/i2c/busses/i2c-mlxcpld.c
drivers/i2c/busses/i2c-qcom-cci.c
drivers/i2c/busses/i2c-sis630.c
drivers/i2c/busses/i2c-xiic.c
drivers/i3c/master.c
drivers/infiniband/core/cma.c
drivers/infiniband/hw/cxgb4/cm.c
drivers/infiniband/hw/cxgb4/id_table.c
drivers/infiniband/hw/hfi1/tid_rdma.c
drivers/infiniband/hw/hns/hns_roce_ah.c
drivers/infiniband/hw/mlx4/mad.c
drivers/infiniband/ulp/ipoib/ipoib_cm.c
drivers/infiniband/ulp/rtrs/rtrs-clt.c
drivers/iommu/amd/iommu.c
drivers/iommu/apple-dart.c
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
drivers/iommu/arm/arm-smmu/arm-smmu.c
drivers/iommu/intel/iommu.c
drivers/iommu/iommu.c
drivers/iommu/mtk_iommu.c
drivers/iommu/virtio-iommu.c
drivers/leds/leds-pca963x.c
drivers/md/bcache/request.c
drivers/md/dm-bufio.c
drivers/md/dm-cache-policy.h
drivers/md/dm-clone-target.c
drivers/md/dm-ioctl.c
drivers/md/dm-raid.c
drivers/md/dm-rq.c
drivers/md/dm-stats.c
drivers/md/dm-table.c
drivers/md/dm-verity-target.c
drivers/md/dm.c
drivers/md/raid5-cache.c
drivers/media/Kconfig
drivers/media/cec/core/cec-adap.c
drivers/media/cec/platform/cros-ec/cros-ec-cec.c
drivers/media/cec/platform/s5p/s5p_cec.c
drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
drivers/media/dvb-frontends/drxk_hard.c
drivers/media/i2c/ar0521.c
drivers/media/i2c/ir-kbd-i2c.c
drivers/media/i2c/isl7998x.c
drivers/media/i2c/mt9v111.c
drivers/media/i2c/ov5640.c
drivers/media/i2c/ov8865.c
drivers/media/mc/mc-device.c
drivers/media/mc/mc-entity.c
drivers/media/pci/cx18/cx18-av-core.c
drivers/media/pci/cx88/cx88-input.c
drivers/media/pci/cx88/cx88-video.c
drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
drivers/media/platform/amphion/vpu_v4l2.c
drivers/media/platform/chips-media/coda-jpeg.c
drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c
drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c
drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
drivers/media/platform/mediatek/mdp3/mtk-mdp3-vpu.c
drivers/media/platform/nxp/dw100/dw100.c
drivers/media/platform/qcom/camss/camss-video.c
drivers/media/platform/qcom/venus/helpers.c
drivers/media/platform/qcom/venus/hfi.c
drivers/media/platform/qcom/venus/vdec.c
drivers/media/platform/qcom/venus/venc.c
drivers/media/platform/qcom/venus/venc_ctrls.c
drivers/media/platform/renesas/rcar-vin/rcar-core.c
drivers/media/platform/renesas/rcar-vin/rcar-dma.c
drivers/media/platform/renesas/vsp1/vsp1_video.c
drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
drivers/media/platform/rockchip/rkisp1/rkisp1-regs.h
drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
drivers/media/platform/samsung/exynos4-is/fimc-capture.c
drivers/media/platform/samsung/exynos4-is/fimc-isp-video.c
drivers/media/platform/samsung/exynos4-is/fimc-lite.c
drivers/media/platform/samsung/s3c-camif/camif-capture.c
drivers/media/platform/st/stm32/stm32-dcmi.c
drivers/media/platform/sunxi/sun4i-csi/Kconfig
drivers/media/platform/sunxi/sun4i-csi/sun4i_dma.c
drivers/media/platform/sunxi/sun6i-csi/Kconfig
drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.h
drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
drivers/media/platform/sunxi/sun6i-csi/sun6i_video.h
drivers/media/platform/sunxi/sun6i-mipi-csi2/Kconfig
drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c
drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/Kconfig
drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c
drivers/media/platform/sunxi/sun8i-di/Kconfig
drivers/media/platform/sunxi/sun8i-rotate/Kconfig
drivers/media/platform/ti/cal/cal-video.c
drivers/media/platform/ti/cal/cal.h
drivers/media/platform/ti/omap3isp/isp.c
drivers/media/platform/ti/omap3isp/ispvideo.c
drivers/media/platform/ti/omap3isp/ispvideo.h
drivers/media/platform/verisilicon/hantro_drv.c
drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
drivers/media/platform/verisilicon/hantro_hevc.c
drivers/media/platform/verisilicon/hantro_postproc.c
drivers/media/platform/verisilicon/imx8m_vpu_hw.c
drivers/media/platform/xilinx/xilinx-dma.c
drivers/media/platform/xilinx/xilinx-dma.h
drivers/media/radio/radio-si476x.c
drivers/media/radio/si4713/si4713.c
drivers/media/rc/imon.c
drivers/media/rc/mceusb.c
drivers/media/test-drivers/vimc/vimc-capture.c
drivers/media/test-drivers/vivid/vivid-radio-rx.c
drivers/media/test-drivers/vivid/vivid-touch-cap.c
drivers/media/tuners/xc4000.c
drivers/media/usb/au0828/au0828-core.c
drivers/media/usb/dvb-usb-v2/af9035.c
drivers/media/usb/msi2500/msi2500.c
drivers/media/v4l2-core/v4l2-ctrls-api.c
drivers/media/v4l2-core/v4l2-ctrls-core.c
drivers/media/v4l2-core/v4l2-dev.c
drivers/mfd/syscon.c
drivers/misc/habanalabs/gaudi2/gaudi2.c
drivers/mmc/core/block.c
drivers/mmc/core/card.h
drivers/mmc/core/core.c
drivers/mmc/core/quirks.h
drivers/mmc/host/dw_mmc.c
drivers/mmc/host/renesas_sdhi_core.c
drivers/mmc/host/sdhci-sprd.c
drivers/mmc/host/sdhci-tegra.c
drivers/mtd/nand/raw/nandsim.c
drivers/mtd/tests/mtd_nandecctest.c
drivers/mtd/tests/speedtest.c
drivers/mtd/tests/stresstest.c
drivers/mtd/ubi/block.c
drivers/mtd/ubi/build.c
drivers/mtd/ubi/cdev.c
drivers/mtd/ubi/debug.c
drivers/mtd/ubi/debug.h
drivers/mtd/ubi/eba.c
drivers/mtd/ubi/fastmap.c
drivers/mtd/ubi/io.c
drivers/mtd/ubi/ubi-media.h
drivers/mtd/ubi/ubi.h
drivers/mtd/ubi/vmt.c
drivers/mtd/ubi/wl.c
drivers/net/bonding/bond_3ad.c
drivers/net/bonding/bond_main.c
drivers/net/dsa/qca/qca8k-8xxx.c
drivers/net/ethernet/adi/adin1110.c
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
drivers/net/ethernet/amd/xgbe/xgbe.h
drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
drivers/net/ethernet/broadcom/bnx2.c
drivers/net/ethernet/broadcom/bnxt/bnxt.c
drivers/net/ethernet/broadcom/bnxt/bnxt.h
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
drivers/net/ethernet/broadcom/cnic.c
drivers/net/ethernet/broadcom/genet/bcmgenet.c
drivers/net/ethernet/brocade/bna/bfa_msgq.c
drivers/net/ethernet/cadence/macb_main.c
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
drivers/net/ethernet/dlink/dl2k.c
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
drivers/net/ethernet/freescale/dpaa2/Makefile
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c [new file with mode: 0644]
drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
drivers/net/ethernet/freescale/dpaa2/dpni.c
drivers/net/ethernet/freescale/dpaa2/dpni.h
drivers/net/ethernet/freescale/fec.h
drivers/net/ethernet/freescale/fec_ptp.c
drivers/net/ethernet/freescale/fman/mac.c
drivers/net/ethernet/freescale/fman/mac.h
drivers/net/ethernet/hisilicon/hns/hnae.c
drivers/net/ethernet/huawei/hinic/hinic_debugfs.c
drivers/net/ethernet/huawei/hinic/hinic_dev.h
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
drivers/net/ethernet/huawei/hinic/hinic_main.c
drivers/net/ethernet/huawei/hinic/hinic_port.c
drivers/net/ethernet/huawei/hinic/hinic_sriov.c
drivers/net/ethernet/ibm/ibmveth.c
drivers/net/ethernet/ibm/ibmveth.h
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
drivers/net/ethernet/intel/i40e/i40e_main.c
drivers/net/ethernet/intel/i40e/i40e_txrx.c
drivers/net/ethernet/intel/i40e/i40e_txrx.h
drivers/net/ethernet/intel/i40e/i40e_xsk.c
drivers/net/ethernet/intel/i40e/i40e_xsk.h
drivers/net/ethernet/intel/ice/ice.h
drivers/net/ethernet/intel/ice/ice_main.c
drivers/net/ethernet/intel/ice/ice_tc_lib.c
drivers/net/ethernet/intel/ice/ice_tc_lib.h
drivers/net/ethernet/lantiq_etop.c
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
drivers/net/ethernet/mediatek/mtk_eth_soc.c
drivers/net/ethernet/mediatek/mtk_ppe.c
drivers/net/ethernet/mediatek/mtk_wed.c
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
drivers/net/ethernet/mellanox/mlxsw/reg.h
drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
drivers/net/ethernet/microchip/Kconfig
drivers/net/ethernet/microchip/Makefile
drivers/net/ethernet/microchip/lan743x_ethtool.c
drivers/net/ethernet/microchip/lan743x_main.c
drivers/net/ethernet/microchip/lan743x_main.h
drivers/net/ethernet/microchip/lan966x/lan966x_ethtool.c
drivers/net/ethernet/microchip/sparx5/Kconfig
drivers/net/ethernet/microchip/sparx5/Makefile
drivers/net/ethernet/microchip/sparx5/sparx5_main.c
drivers/net/ethernet/microchip/sparx5/sparx5_main.h
drivers/net/ethernet/microchip/sparx5/sparx5_main_regs.h
drivers/net/ethernet/microchip/sparx5/sparx5_tc.c
drivers/net/ethernet/microchip/sparx5/sparx5_tc.h
drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c [new file with mode: 0644]
drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c [new file with mode: 0644]
drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.h [new file with mode: 0644]
drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c [new file with mode: 0644]
drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/Kconfig [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/Makefile [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_ag_api.h [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_ag_api_kunit.h [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_api.c [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_api.h [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_api_client.h [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c [new file with mode: 0644]
drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h [new file with mode: 0644]
drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
drivers/net/ethernet/netronome/nfp/flower/main.c
drivers/net/ethernet/netronome/nfp/flower/main.h
drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
drivers/net/ethernet/netronome/nfp/nfp_main.c
drivers/net/ethernet/pensando/ionic/ionic_lif.c
drivers/net/ethernet/rocker/rocker_main.c
drivers/net/ethernet/sfc/ef10.c
drivers/net/ethernet/sfc/ef100_ethtool.c
drivers/net/ethernet/sfc/ethtool_common.c
drivers/net/ethernet/sfc/ethtool_common.h
drivers/net/ethernet/sfc/filter.h
drivers/net/ethernet/sfc/mae.c
drivers/net/ethernet/sfc/net_driver.h
drivers/net/ethernet/sfc/rx_common.c
drivers/net/ethernet/sfc/tc.c
drivers/net/ethernet/sfc/tc.h
drivers/net/ethernet/socionext/netsec.c
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
drivers/net/ethernet/sun/sunhme.c
drivers/net/hamradio/baycom_epp.c
drivers/net/hamradio/hdlcdrv.c
drivers/net/hamradio/yam.c
drivers/net/hyperv/rndis_filter.c
drivers/net/ipa/gsi_trans.c
drivers/net/ipa/ipa_cmd.c
drivers/net/ipa/ipa_cmd.h
drivers/net/ipa/ipa_mem.c
drivers/net/ipa/ipa_qmi_msg.c
drivers/net/ipa/ipa_qmi_msg.h
drivers/net/ipa/ipa_table.c
drivers/net/ipa/ipa_table.h
drivers/net/macvlan.c
drivers/net/phy/at803x.c
drivers/net/phy/dp83822.c
drivers/net/phy/dp83867.c
drivers/net/phy/micrel.c
drivers/net/phy/phy-core.c
drivers/net/phy/phylink.c
drivers/net/phy/sfp.c
drivers/net/wireguard/selftest/allowedips.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
drivers/net/wireless/marvell/mwifiex/cfg80211.c
drivers/net/wireless/microchip/wilc1000/cfg80211.c
drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
drivers/net/wireless/st/cw1200/wsm.c
drivers/net/wireless/ti/wlcore/main.c
drivers/net/wwan/wwan_hwsim.c
drivers/nfc/virtual_ncidev.c
drivers/nvdimm/namespace_devs.c
drivers/nvdimm/region_devs.c
drivers/nvdimm/security.c
drivers/nvme/common/auth.c
drivers/nvme/host/apple.c
drivers/nvme/host/core.c
drivers/nvme/host/hwmon.c
drivers/nvme/host/multipath.c
drivers/nvme/host/pci.c
drivers/nvme/host/rdma.c
drivers/nvme/host/tcp.c
drivers/nvme/target/configfs.c
drivers/nvme/target/core.c
drivers/parisc/eisa_enumerator.c
drivers/pci/controller/pci-tegra.c
drivers/pci/setup-bus.c
drivers/perf/Kconfig
drivers/perf/alibaba_uncore_drw_pmu.c
drivers/perf/riscv_pmu_sbi.c
drivers/pinctrl/pinctrl-ingenic.c
drivers/pinctrl/pinctrl-ocelot.c
drivers/pinctrl/pinctrl-zynqmp.c
drivers/pinctrl/qcom/pinctrl-msm.c
drivers/ptp/ptp_ocp.c
drivers/rtc/Kconfig
drivers/rtc/rtc-cmos.c
drivers/rtc/rtc-ds1685.c
drivers/rtc/rtc-gamecube.c
drivers/rtc/rtc-isl12022.c
drivers/rtc/rtc-jz4740.c
drivers/rtc/rtc-mpfs.c
drivers/rtc/rtc-mxc.c
drivers/rtc/rtc-rv3028.c
drivers/rtc/rtc-stmp3xxx.c
drivers/rtc/rtc-ti-k3.c
drivers/s390/char/vmur.c
drivers/s390/char/vmur.h
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
drivers/scsi/fcoe/fcoe_ctlr.c
drivers/scsi/lpfc/lpfc_hbadisc.c
drivers/scsi/lpfc/lpfc_init.c
drivers/scsi/qedi/qedi_main.c
drivers/scsi/scsi_sysfs.c
drivers/soc/sifive/Kconfig
drivers/soc/sifive/Makefile
drivers/soc/sifive/sifive_ccache.c [new file with mode: 0644]
drivers/soc/sifive/sifive_l2_cache.c [deleted file]
drivers/staging/media/atomisp/Makefile
drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
drivers/staging/media/atomisp/include/hmm/hmm_bo.h
drivers/staging/media/atomisp/include/linux/atomisp.h
drivers/staging/media/atomisp/include/linux/atomisp_gmin_platform.h
drivers/staging/media/atomisp/include/linux/atomisp_platform.h
drivers/staging/media/atomisp/notes.txt
drivers/staging/media/atomisp/pci/atomisp_cmd.c
drivers/staging/media/atomisp/pci/atomisp_cmd.h
drivers/staging/media/atomisp/pci/atomisp_compat.h
drivers/staging/media/atomisp/pci/atomisp_compat_css20.c
drivers/staging/media/atomisp/pci/atomisp_file.c [deleted file]
drivers/staging/media/atomisp/pci/atomisp_file.h [deleted file]
drivers/staging/media/atomisp/pci/atomisp_fops.c
drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
drivers/staging/media/atomisp/pci/atomisp_internal.h
drivers/staging/media/atomisp/pci/atomisp_ioctl.c
drivers/staging/media/atomisp/pci/atomisp_ioctl.h
drivers/staging/media/atomisp/pci/atomisp_subdev.c
drivers/staging/media/atomisp/pci/atomisp_subdev.h
drivers/staging/media/atomisp/pci/atomisp_v4l2.c
drivers/staging/media/atomisp/pci/atomisp_v4l2.h
drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
drivers/staging/media/atomisp/pci/sh_css_params.c
drivers/staging/media/imx/imx-media-utils.c
drivers/staging/media/imx/imx7-media-csi.c
drivers/staging/media/ipu3/include/uapi/intel-ipu3.h
drivers/staging/media/ipu3/ipu3-v4l2.c
drivers/staging/media/meson/vdec/vdec.c
drivers/staging/media/omap4iss/iss.c
drivers/staging/media/omap4iss/iss_video.c
drivers/staging/media/omap4iss/iss_video.h
drivers/staging/media/sunxi/cedrus/Kconfig
drivers/staging/media/tegra-video/tegra210.c
drivers/target/iscsi/cxgbit/cxgbit_cm.c
drivers/thermal/intel/intel_powerclamp.c
drivers/thunderbolt/xdomain.c
drivers/tty/serial/Kconfig
drivers/video/fbdev/stifb.c
drivers/video/fbdev/uvesafb.c
drivers/watchdog/watchdog_core.c
drivers/watchdog/watchdog_dev.c
drivers/xen/grant-dma-ops.c
fs/btrfs/backref.c
fs/btrfs/backref.h
fs/btrfs/block-group.c
fs/btrfs/extent-io-tree.c
fs/btrfs/send.c
fs/btrfs/send.h
fs/ceph/inode.c
fs/ceph/mdsmap.c
fs/cifs/cached_dir.c
fs/cifs/cached_dir.h
fs/cifs/cifs_ioctl.h
fs/cifs/cifsfs.c
fs/cifs/cifsfs.h
fs/cifs/cifsglob.h
fs/cifs/cifsproto.h
fs/cifs/cifssmb.c
fs/cifs/connect.c
fs/cifs/dir.c
fs/cifs/file.c
fs/cifs/inode.c
fs/cifs/ioctl.c
fs/cifs/link.c
fs/cifs/readdir.c
fs/cifs/sess.c
fs/cifs/smb1ops.c
fs/cifs/smb2file.c
fs/cifs/smb2inode.c
fs/cifs/smb2misc.c
fs/cifs/smb2ops.c
fs/cifs/smb2pdu.c
fs/cifs/smb2pdu.h
fs/cifs/smb2proto.h
fs/efivarfs/vars.c
fs/erofs/fscache.c
fs/erofs/zdata.c
fs/erofs/zdata.h
fs/erofs/zmap.c
fs/exfat/inode.c
fs/ext2/ialloc.c
fs/ext4/ialloc.c
fs/ext4/ioctl.c
fs/ext4/mmp.c
fs/ext4/super.c
fs/ext4/verity.c
fs/f2fs/gc.c
fs/f2fs/namei.c
fs/f2fs/segment.c
fs/f2fs/verity.c
fs/fat/inode.c
fs/hostfs/hostfs_kern.c
fs/nfsd/nfs4state.c
fs/nfsd/nfsctl.c
fs/nfsd/nfsfh.c
fs/ntfs3/fslog.c
fs/ocfs2/namei.c
fs/proc/task_mmu.c
fs/ubifs/crypto.c
fs/ubifs/debug.c
fs/ubifs/dir.c
fs/ubifs/journal.c
fs/ubifs/lpt_commit.c
fs/ubifs/tnc_commit.c
fs/ubifs/ubifs.h
fs/ubifs/xattr.c
fs/xfs/libxfs/xfs_alloc.c
fs/xfs/libxfs/xfs_ialloc.c
fs/xfs/xfs_error.c
fs/xfs/xfs_icache.c
fs/xfs/xfs_log.c
include/acpi/ghes.h
include/asm-generic/vmlinux.lds.h
include/drm/gpu_scheduler.h
include/linux/blkdev.h
include/linux/bpf.h
include/linux/cgroup-defs.h
include/linux/cgroup.h
include/linux/clk-provider.h
include/linux/clk.h
include/linux/clk/at91_pmc.h
include/linux/clk/spear.h
include/linux/cpumask.h
include/linux/damon.h
include/linux/dsa/tag_qca.h
include/linux/efi.h
include/linux/io_uring_types.h
include/linux/iommu.h
include/linux/kvm_host.h
include/linux/memremap.h
include/linux/migrate.h
include/linux/mmc/card.h
include/linux/net.h
include/linux/netdevice.h
include/linux/netlink.h
include/linux/nodemask.h
include/linux/perf_event.h
include/linux/phylink.h
include/linux/prandom.h
include/linux/psi.h
include/linux/psi_types.h
include/linux/random.h
include/linux/sched.h
include/linux/sfp.h
include/linux/skbuff.h
include/linux/slab_def.h
include/linux/socket.h
include/linux/udp.h
include/linux/utsname.h
include/media/i2c/ir-kbd-i2c.h
include/media/media-device.h
include/media/media-entity.h
include/media/v4l2-common.h
include/media/v4l2-ctrls.h
include/media/v4l2-dev.h
include/media/v4l2-fwnode.h
include/media/v4l2-subdev.h
include/net/act_api.h
include/net/flow_offload.h
include/net/genetlink.h
include/net/net_namespace.h
include/net/netfilter/nf_queue.h
include/net/red.h
include/net/sctp/ulpqueue.h
include/net/sock.h
include/net/sock_reuseport.h
include/net/tc_act/tc_skbedit.h
include/net/transp_v6.h
include/net/udp.h
include/soc/sifive/sifive_ccache.h [new file with mode: 0644]
include/soc/sifive/sifive_l2_cache.h [deleted file]
include/sound/hdaudio.h
include/trace/events/watchdog.h [new file with mode: 0644]
include/uapi/drm/panfrost_drm.h
include/uapi/linux/cec-funcs.h
include/uapi/linux/cec.h
include/uapi/linux/ethtool.h
include/uapi/linux/rkisp1-config.h
include/uapi/mtd/ubi-user.h
init/Kconfig
io_uring/fdinfo.c
io_uring/filetable.h
io_uring/io-wq.c
io_uring/io_uring.c
io_uring/io_uring.h
io_uring/msg_ring.c
io_uring/net.c
io_uring/opdef.c
io_uring/rsrc.c
io_uring/rsrc.h
io_uring/rw.c
io_uring/tctx.c
io_uring/tctx.h
kernel/bpf/bloom_filter.c
kernel/bpf/btf.c
kernel/bpf/cgroup_iter.c
kernel/bpf/core.c
kernel/bpf/dispatcher.c
kernel/bpf/hashtab.c
kernel/bpf/memalloc.c
kernel/bpf/verifier.c
kernel/cgroup/cgroup.c
kernel/events/core.c
kernel/events/ring_buffer.c
kernel/gcov/gcc_4_7.c
kernel/kcsan/selftest.c
kernel/locking/test-ww_mutex.c
kernel/rcu/tree.c
kernel/sched/core.c
kernel/sched/deadline.c
kernel/sched/psi.c
kernel/sched/rt.c
kernel/sched/sched.h
kernel/sched/stats.h
kernel/time/clocksource.c
kernel/trace/blktrace.c
kernel/trace/bpf_trace.c
kernel/utsname_sysctl.c
lib/Kconfig.debug
lib/Kconfig.kgdb
lib/cmdline_kunit.c
lib/fault-inject.c
lib/find_bit_benchmark.c
lib/kobject.c
lib/kunit/string-stream.c
lib/kunit/test.c
lib/random32.c
lib/reed_solomon/test_rslib.c
lib/sbitmap.c
lib/test-string_helpers.c
lib/test_fprobe.c
lib/test_hexdump.c
lib/test_hmm.c
lib/test_hmm_uapi.h
lib/test_kprobes.c
lib/test_list_sort.c
lib/test_meminit.c
lib/test_min_heap.c
lib/test_objagg.c
lib/test_rhashtable.c
lib/test_vmalloc.c
lib/uuid.c
mm/compaction.c
mm/damon/core.c
mm/damon/vaddr.c
mm/highmem.c
mm/huge_memory.c
mm/hugetlb.c
mm/kasan/kasan_test.c
mm/memory.c
mm/mempolicy.c
mm/memremap.c
mm/migrate.c
mm/migrate_device.c
mm/mmap.c
mm/mmu_gather.c
mm/mprotect.c
mm/page_alloc.c
mm/shmem.c
mm/slab.c
mm/slub.c
mm/zsmalloc.c
net/802/garp.c
net/802/mrp.c
net/atm/mpoa_proc.c
net/ceph/mon_client.c
net/ceph/osd_client.c
net/core/dev.c
net/core/dev_ioctl.c
net/core/neighbour.c
net/core/net_namespace.c
net/core/pktgen.c
net/core/skbuff.c
net/core/skmsg.c
net/core/sock.c
net/core/sock_reuseport.c
net/core/stream.c
net/dccp/dccp.h
net/dccp/ipv4.c
net/dccp/ipv6.c
net/dccp/proto.c
net/dsa/slave.c
net/ethtool/common.c
net/ethtool/pse-pd.c
net/hsr/hsr_forward.c
net/ipv4/datagram.c
net/ipv4/igmp.c
net/ipv4/inet_connection_sock.c
net/ipv4/inet_hashtables.c
net/ipv4/ip_output.c
net/ipv4/ip_sockglue.c
net/ipv4/netfilter/ipt_rpfilter.c
net/ipv4/netfilter/nft_fib_ipv4.c
net/ipv4/route.c
net/ipv4/tcp.c
net/ipv4/tcp_cdg.c
net/ipv4/tcp_input.c
net/ipv4/tcp_ipv4.c
net/ipv4/udp.c
net/ipv6/addrconf.c
net/ipv6/af_inet6.c
net/ipv6/datagram.c
net/ipv6/ip6_flowlabel.c
net/ipv6/ipv6_sockglue.c
net/ipv6/mcast.c
net/ipv6/netfilter/ip6t_rpfilter.c
net/ipv6/netfilter/nft_fib_ipv6.c
net/ipv6/output_core.c
net/ipv6/ping.c
net/ipv6/raw.c
net/ipv6/tcp_ipv6.c
net/ipv6/udp.c
net/kcm/kcmsock.c
net/l2tp/l2tp_ip6.c
net/mac80211/rc80211_minstrel_ht.c
net/mac80211/scan.c
net/mptcp/protocol.c
net/mptcp/sockopt.c
net/netfilter/ipvs/ip_vs_conn.c
net/netfilter/ipvs/ip_vs_twos.c
net/netfilter/nf_nat_core.c
net/netfilter/nf_tables_api.c
net/netfilter/xt_statistic.c
net/netlink/af_netlink.c
net/openvswitch/actions.c
net/openvswitch/flow_netlink.c
net/packet/af_packet.c
net/rds/bind.c
net/rds/tcp.c
net/sched/act_gact.c
net/sched/act_sample.c
net/sched/act_skbedit.c
net/sched/cls_api.c
net/sched/sch_api.c
net/sched/sch_cake.c
net/sched/sch_fq_codel.c
net/sched/sch_netem.c
net/sched/sch_pie.c
net/sched/sch_sfb.c
net/sctp/associola.c
net/sctp/socket.c
net/sctp/stream_interleave.c
net/sctp/ulpqueue.c
net/smc/smc_core.c
net/socket.c
net/sunrpc/auth_gss/gss_krb5_wrap.c
net/sunrpc/cache.c
net/sunrpc/xprt.c
net/sunrpc/xprtsock.c
net/tipc/discover.c
net/tipc/socket.c
net/tipc/topsrv.c
net/tls/tls_strp.c
net/unix/af_unix.c
net/unix/garbage.c
net/xfrm/xfrm_state.c
scripts/Makefile.build
scripts/Makefile.modpost
scripts/clang-tools/run-clang-tools.py
scripts/package/mkspec
security/selinux/ss/services.c
security/selinux/ss/sidtab.c
security/selinux/ss/sidtab.h
sound/core/rawmidi.c
sound/core/sound_oss.c
sound/pci/hda/cs35l41_hda.c
sound/pci/hda/hda_component.h
sound/pci/hda/hda_cs_dsp_ctl.c
sound/pci/hda/hda_cs_dsp_ctl.h
sound/pci/hda/patch_realtek.c
sound/usb/card.h
sound/usb/endpoint.c
tools/arch/x86/include/asm/msr-index.h
tools/include/uapi/linux/kvm.h
tools/lib/perf/include/perf/event.h
tools/perf/arch/arm/util/auxtrace.c
tools/perf/arch/arm/util/pmu.c
tools/perf/arch/arm64/annotate/instructions.c
tools/perf/arch/arm64/util/Build
tools/perf/arch/arm64/util/hisi-ptt.c [new file with mode: 0644]
tools/perf/arch/x86/util/intel-pt.c
tools/perf/builtin-list.c
tools/perf/builtin-mem.c
tools/perf/tests/attr/base-record
tools/perf/tests/attr/system-wide-dummy
tools/perf/tests/attr/test-record-group
tools/perf/tests/attr/test-record-group-sampling
tools/perf/tests/attr/test-record-group1
tools/perf/tests/attr/test-record-group2
tools/perf/tests/shell/stat+csv_output.sh
tools/perf/tests/shell/stat+json_output.sh
tools/perf/tests/shell/test_arm_coresight.sh
tools/perf/tests/shell/test_intel_pt.sh
tools/perf/util/Build
tools/perf/util/auxtrace.c
tools/perf/util/auxtrace.h
tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
tools/perf/util/genelf.h
tools/perf/util/hisi-ptt-decoder/Build [new file with mode: 0644]
tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.c [new file with mode: 0644]
tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.h [new file with mode: 0644]
tools/perf/util/hisi-ptt.c [new file with mode: 0644]
tools/perf/util/hisi-ptt.h [new file with mode: 0644]
tools/perf/util/intel-pt.c
tools/perf/util/parse-events.c
tools/perf/util/pmu.c
tools/perf/util/pmu.h
tools/perf/util/pmu.l
tools/perf/util/pmu.y
tools/testing/selftests/bpf/prog_tests/btf.c
tools/testing/selftests/bpf/progs/user_ringbuf_success.c
tools/testing/selftests/drivers/net/bonding/Makefile
tools/testing/selftests/drivers/net/bonding/dev_addr_lists.sh
tools/testing/selftests/drivers/net/bonding/net_forwarding_lib.sh [new symlink]
tools/testing/selftests/drivers/net/dsa/test_bridge_fdb_stress.sh
tools/testing/selftests/drivers/net/team/Makefile
tools/testing/selftests/drivers/net/team/dev_addr_lists.sh
tools/testing/selftests/drivers/net/team/lag_lib.sh [new symlink]
tools/testing/selftests/drivers/net/team/net_forwarding_lib.sh [new symlink]
tools/testing/selftests/ftrace/test.d/dynevent/test_duplicates.tc
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-eprobe.tc
tools/testing/selftests/futex/functional/Makefile
tools/testing/selftests/intel_pstate/Makefile
tools/testing/selftests/kexec/Makefile
tools/testing/selftests/kvm/aarch64/vgic_init.c
tools/testing/selftests/kvm/memslot_modification_stress_test.c
tools/testing/selftests/lib.mk
tools/testing/selftests/memory-hotplug/mem-on-off-test.sh
tools/testing/selftests/net/.gitignore
tools/testing/selftests/net/Makefile
tools/testing/selftests/net/so_incoming_cpu.c [new file with mode: 0644]
tools/testing/selftests/net/test_ingress_egress_chaining.sh [new file with mode: 0644]
tools/testing/selftests/perf_events/sigtrap_threads.c
tools/testing/selftests/vm/hmm-tests.c
tools/testing/selftests/vm/userfaultfd.c
tools/verification/dot2/dot2c.py
virt/kvm/kvm_main.c

index 380378e..fdd7989 100644
--- a/.mailmap
+++ b/.mailmap
@@ -104,6 +104,7 @@ Christoph Hellwig <hch@lst.de>
 Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
 Corey Minyard <minyard@acm.org>
 Damian Hobson-Garcia <dhobsong@igel.co.jp>
+Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
 Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
 Daniel Borkmann <daniel@iogearbox.net> <danborkmann@iogearbox.net>
 Daniel Borkmann <daniel@iogearbox.net> <daniel.borkmann@tik.ee.ethz.ch>
@@ -353,7 +354,8 @@ Peter Oruba <peter@oruba.de>
 Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
 Praveen BP <praveenbp@ti.com>
 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
-Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
 Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
 Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
 Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
index 7127768..b078fdb 100644
@@ -9,7 +9,6 @@ the Linux ACPI support.
    :maxdepth: 1
 
    initrd_table_override
-   dsdt-override
    ssdt-overlays
    cppc_sysfs
    fan_performance_states
index 7bcfb38..dc254a3 100644
@@ -976,6 +976,29 @@ All cgroup core files are prefixed with "cgroup."
        killing cgroups is a process directed operation, i.e. it affects
        the whole thread-group.
 
+  cgroup.pressure
+       A read-write single value file whose allowed values are "0" and "1".
+       The default is "1".
+
+       Writing "0" to the file will disable the cgroup PSI accounting.
+       Writing "1" to the file will re-enable the cgroup PSI accounting.
+
+       This control attribute is not hierarchical, so disabling or enabling
+       PSI accounting in a cgroup does not affect PSI accounting in descendants
+       and doesn't need to pass enablement via ancestors from root.
+
+       The reason this control attribute exists is that PSI accounts stalls for
+       each cgroup separately and aggregates them at each level of the hierarchy.
+       This may cause non-negligible overhead for some workloads deep in
+       the hierarchy, in which case this control attribute can be used to
+       disable PSI accounting in the non-leaf cgroups.
+
+  irq.pressure
+       A read-write nested-keyed file.
+
+       Shows pressure stall information for IRQ/SOFTIRQ. See
+       :ref:`Documentation/accounting/psi.rst <psi>` for details.
+
 Controllers
 ===========
 
index 1a6b913..a65c160 100644
@@ -141,6 +141,10 @@ root_hash_sig_key_desc <key_description>
     also gain new certificates at run time if they are signed by a certificate
     already in the secondary trusted keyring.
 
+try_verify_in_tasklet
+    If verity hashes are in cache, verify data blocks in a kernel tasklet
+    instead of a workqueue. This option can reduce IO latency.
+
 Theory of operation
 ===================
 
index 17d9fc5..808ade4 100644
@@ -76,6 +76,8 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A55      | #1530923        | ARM64_ERRATUM_1530923       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A55      | #2441007        | ARM64_ERRATUM_2441007       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075        |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A57      | #852523         | N/A                         |
index 2122d1a..ba45c46 100644
@@ -144,6 +144,42 @@ managing and controlling ublk devices with help of several control commands:
   For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
   responsibility to save IO target specific info in userspace.
 
+- ``UBLK_CMD_START_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled.
+  This command is accepted after the old process has exited, the ublk device
+  is quiesced and ``/dev/ublkc*`` is released. The user should send this
+  command before starting a new process which re-opens ``/dev/ublkc*``. When
+  this command returns, the ublk device is ready for the new process.
+
+- ``UBLK_CMD_END_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled.
+  This command is accepted after the ublk device is quiesced and a new
+  process has opened ``/dev/ublkc*`` and made all ublk queues ready. When
+  this command returns, the ublk device is unquiesced and new I/O requests
+  are passed to the new process.
+
+- user recovery feature description
+
+  Two new features are added for user recovery: ``UBLK_F_USER_RECOVERY`` and
+  ``UBLK_F_USER_RECOVERY_REISSUE``.
+
+  With ``UBLK_F_USER_RECOVERY`` set, after a ubq_daemon (the ublk server's
+  I/O handler) dies, ublk does not delete ``/dev/ublkb*`` during the whole
+  recovery stage and the ublk device ID is kept. It is the ublk server's
+  responsibility to recover the device context with its own knowledge.
+  Requests which have not been issued to userspace are requeued. Requests
+  which have been issued to userspace are aborted.
+
+  With ``UBLK_F_USER_RECOVERY_REISSUE`` set, after a ubq_daemon (the ublk
+  server's I/O handler) dies, contrary to ``UBLK_F_USER_RECOVERY``,
+  requests which have been issued to userspace are requeued and will be
+  re-issued to the new process after handling ``UBLK_CMD_END_USER_RECOVERY``.
+  ``UBLK_F_USER_RECOVERY_REISSUE`` is designed for backends that tolerate
+  double-writes, since the driver may issue the same I/O request twice. It
+  might be useful for a read-only FS or a VM backend.
+
 Data plane
 ----------
 
index 92e0f8c..99e01f4 100644
@@ -66,6 +66,11 @@ properties:
           - enum:
               - allwinner,sun20i-d1-plic
           - const: thead,c900-plic
+      - items:
+          - const: sifive,plic-1.0.0
+          - const: riscv,plic0
+        deprecated: true
+        description: For the QEMU virt machine only
 
   reg:
     maxItems: 1
index 328952d..3c14a98 100644
@@ -79,24 +79,27 @@ properties:
       the LED.
     $ref: /schemas/types.yaml#/definitions/string
 
-    enum:
-        # LED will act as a back-light, controlled by the framebuffer system
-      - backlight
-        # LED will turn on (but for leds-gpio see "default-state" property in
-        # Documentation/devicetree/bindings/leds/leds-gpio.yaml)
-      - default-on
-        # LED "double" flashes at a load average based rate
-      - heartbeat
-        # LED indicates disk activity
-      - disk-activity
-        # LED indicates IDE disk activity (deprecated), in new implementations
-        # use "disk-activity"
-      - ide-disk
-        # LED flashes at a fixed, configurable rate
-      - timer
-        # LED alters the brightness for the specified duration with one software
-        # timer (requires "led-pattern" property)
-      - pattern
+    oneOf:
+      - enum:
+            # LED will act as a back-light, controlled by the framebuffer system
+          - backlight
+            # LED will turn on (but for leds-gpio see "default-state" property in
+            # Documentation/devicetree/bindings/leds/leds-gpio.yaml)
+          - default-on
+            # LED "double" flashes at a load average based rate
+          - heartbeat
+            # LED indicates disk activity
+          - disk-activity
+            # LED indicates IDE disk activity (deprecated), in new implementations
+            # use "disk-activity"
+          - ide-disk
+            # LED flashes at a fixed, configurable rate
+          - timer
+            # LED alters the brightness for the specified duration with one software
+            # timer (requires "led-pattern" property)
+          - pattern
+        # LED is triggered by SD/MMC activity
+      - pattern: "^mmc[0-9]+$"
 
   led-pattern:
     description: |
index 204b103..16b3abc 100644
@@ -13,9 +13,6 @@ description: |
   This module is part of the MT6370 MFD device.
   Add MT6370 LED driver include 4-channel RGB LED support Register/PWM/Breath Mode
 
-allOf:
-  - $ref: leds-class-multicolor.yaml#
-
 properties:
   compatible:
     const: mediatek,mt6370-indicator
@@ -29,6 +26,8 @@ properties:
 patternProperties:
   "^multi-led@[0-3]$":
     type: object
+    $ref: leds-class-multicolor.yaml#
+    unevaluatedProperties: false
 
     properties:
       reg:
diff --git a/Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt b/Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
deleted file mode 100644
index b88dcdd..0000000
+++ /dev/null
@@ -1,9 +0,0 @@
-Dongwoon Anatech DW9714 camera voice coil lens driver
-
-DW9174 is a 10-bit DAC with current sink capability. It is intended
-for driving voice coil lenses in camera modules.
-
-Mandatory properties:
-
-- compatible: "dongwoon,dw9714"
-- reg: I²C slave address
diff --git a/Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml b/Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
new file mode 100644
index 0000000..66229a3
--- /dev/null
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/i2c/dongwoon,dw9714.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Dongwoon Anatech DW9714 camera voice coil lens driver
+
+maintainers:
+  - Krzysztof Kozlowski <krzk@kernel.org>
+
+description:
+  DW9174 is a 10-bit DAC with current sink capability. It is intended for
+  driving voice coil lenses in camera modules.
+
+properties:
+  compatible:
+    const: dongwoon,dw9714
+
+  reg:
+    maxItems: 1
+
+  powerdown-gpios:
+    description:
+      XSD pin for shutdown (active low)
+
+  vcc-supply:
+    description: VDD power supply
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    i2c {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        camera-lens@c {
+            compatible = "dongwoon,dw9714";
+            reg = <0x0c>;
+            vcc-supply = <&reg_csi_1v8>;
+        };
+    };
index 250484d..5644882 100644
@@ -139,8 +139,8 @@ examples:
 
         charger {
           compatible = "mediatek,mt6370-charger";
-          interrupts = <48>, <68>, <6>;
-          interrupt-names = "attach_i", "uvp_d_evt", "mivr";
+          interrupts = <68>, <48>, <6>;
+          interrupt-names = "uvp_d_evt", "attach_i", "mivr";
           io-channels = <&mt6370_adc MT6370_CHAN_IBUS>;
 
           mt6370_otg_vbus: usb-otg-vbus-regulator {
index b6bd8ee..9de8652 100644
@@ -46,6 +46,10 @@ properties:
   interrupts:
     maxItems: 1
 
+  reset-gpios:
+    maxItems: 1
+    description: GPIO connected to active low reset
+
 required:
   - compatible
   - reg
index 64995cb..41c9760 100644
@@ -8,7 +8,6 @@ title: Samsung S3FWRN5 NCI NFC Controller
 
 maintainers:
   - Krzysztof Kozlowski <krzk@kernel.org>
-  - Krzysztof Opasiak <k.opasiak@samsung.com>
 
 properties:
   compatible:
index 06c66ab..231c4d7 100644
@@ -22,7 +22,8 @@ properties:
       phandle of an I2C bus controller for the SFP two wire serial
 
   maximum-power-milliwatt:
-    maxItems: 1
+    minimum: 1000
+    default: 1000
     description:
       Maximum module power consumption Specifies the maximum power consumption
       allowable by a module in the slot, in milli-Watts. Presently, modules can
index 1e2b9b6..2722dc7 100644
@@ -274,10 +274,6 @@ patternProperties:
           slew-rate:
             enum: [0, 1]
 
-          output-enable:
-            description:
-              This will internally disable the tri-state for MIO pins.
-
           drive-strength:
             description:
               Selects the drive strength for MIO pins, in mA.
index 873dd12..90a7cab 100644
@@ -9,6 +9,7 @@ title: RISC-V bindings for 'cpus' DT nodes
 maintainers:
   - Paul Walmsley <paul.walmsley@sifive.com>
   - Palmer Dabbelt <palmer@sifive.com>
+  - Conor Dooley <conor@kernel.org>
 
 description: |
   This document uses some terminology common to the RISC-V community
@@ -79,9 +80,7 @@ properties:
       insensitive, letters in the riscv,isa string must be all
       lowercase to simplify parsing.
     $ref: "/schemas/types.yaml#/definitions/string"
-    enum:
-      - rv64imac
-      - rv64imafdc
+    pattern: ^rv(?:64|32)imaf?d?q?c?b?v?k?h?(?:_[hsxz](?:[a-z])+)*$
 
   # RISC-V requires 'timebase-frequency' in /cpus, so disallow it here
   timebase-frequency: false
index 37f97ee..714d0fc 100644
@@ -7,8 +7,8 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Microchip PolarFire SoC-based boards
 
 maintainers:
-  - Cyril Jean <Cyril.Jean@microchip.com>
-  - Lewis Hanly <lewis.hanly@microchip.com>
+  - Conor Dooley <conor.dooley@microchip.com>
+  - Daire McNamara <daire.mcnamara@microchip.com>
 
 description:
   Microchip PolarFire SoC-based boards
@@ -17,12 +17,20 @@ properties:
   $nodename:
     const: '/'
   compatible:
-    items:
-      - enum:
-          - microchip,mpfs-icicle-kit
-          - microchip,mpfs-icicle-reference-rtlv2203
-          - sundance,polarberry
-      - const: microchip,mpfs
+    oneOf:
+      - items:
+          - enum:
+              - microchip,mpfs-icicle-reference-rtlv2203
+              - microchip,mpfs-icicle-reference-rtlv2210
+          - const: microchip,mpfs-icicle-kit
+          - const: microchip,mpfs
+
+      - items:
+          - enum:
+              - aries,m100pfsevp
+              - microchip,mpfs-sev-kit
+              - sundance,polarberry
+          - const: microchip,mpfs
 
 additionalProperties: true
 
@@ -2,18 +2,18 @@
 # Copyright (C) 2020 SiFive, Inc.
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/riscv/sifive-l2-cache.yaml#
+$id: http://devicetree.org/schemas/riscv/sifive,ccache0.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: SiFive L2 Cache Controller
+title: SiFive Composable Cache Controller
 
 maintainers:
   - Sagar Kadam <sagar.kadam@sifive.com>
   - Paul Walmsley  <paul.walmsley@sifive.com>
 
 description:
-  The SiFive Level 2 Cache Controller is used to provide access to fast copies
-  of memory for masters in a Core Complex. The Level 2 Cache Controller also
+  The SiFive Composable Cache Controller is used to provide access to fast copies
+  of memory for masters in a Core Complex. The Composable Cache Controller also
   acts as directory-based coherency manager.
   All the properties in ePAPR/DeviceTree specification applies for this platform.
 
@@ -22,6 +22,7 @@ select:
     compatible:
       contains:
         enum:
+          - sifive,ccache0
           - sifive,fu540-c000-ccache
           - sifive,fu740-c000-ccache
 
@@ -33,6 +34,7 @@ properties:
     oneOf:
       - items:
           - enum:
+              - sifive,ccache0
               - sifive,fu540-c000-ccache
               - sifive,fu740-c000-ccache
           - const: cache
@@ -45,7 +47,7 @@ properties:
     const: 64
 
   cache-level:
-    const: 2
+    enum: [2, 3]
 
   cache-sets:
     enum: [1024, 2048]
@@ -115,6 +117,22 @@ allOf:
         cache-sets:
           const: 1024
 
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: sifive,ccache0
+
+    then:
+      properties:
+        cache-level:
+          enum: [2, 3]
+
+    else:
+      properties:
+        cache-level:
+          const: 2
+
 additionalProperties: false
 
 required:
index e64f463..bbad241 100644
@@ -22,12 +22,18 @@ description:
 
 properties:
   compatible:
-    items:
-      - enum:
-          - sifive,fu540-c000-clint
-          - starfive,jh7100-clint
-          - canaan,k210-clint
-      - const: sifive,clint0
+    oneOf:
+      - items:
+          - enum:
+              - sifive,fu540-c000-clint
+              - starfive,jh7100-clint
+              - canaan,k210-clint
+          - const: sifive,clint0
+      - items:
+          - const: sifive,clint0
+          - const: riscv,clint0
+        deprecated: true
+        description: For the QEMU virt machine only
 
     description:
       Should be "<vendor>,<chip>-clint" and "sifive,clint<version>".
index 84aa7cd..400b8ca 100644
@@ -214,18 +214,29 @@ Link properties can be modified at runtime by calling
 Pipelines and media streams
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
+A media stream is a stream of pixels or metadata originating from one or more
+source devices (such as sensors) and flowing through media entity pads
+towards the final sinks. The stream can be modified along the route by the
+devices (e.g. scaling or pixel format conversions), or it can be split into
+multiple branches, or multiple branches can be merged.
+
+A media pipeline is a set of media streams which are interdependent. This
+interdependency can be caused by the hardware (e.g. configuration of a second
+stream cannot be changed if the first stream has been enabled) or by the driver
+due to the software design. Most commonly a media pipeline consists of a single
+stream which does not branch.
+
 When starting streaming, drivers must notify all entities in the pipeline to
 prevent link states from being modified during streaming by calling
 :c:func:`media_pipeline_start()`.
 
-The function will mark all entities connected to the given entity through
-enabled links, either directly or indirectly, as streaming.
+The function will mark all the pads which are part of the pipeline as streaming.
 
 The struct media_pipeline instance pointed to by
-the pipe argument will be stored in every entity in the pipeline.
+the pipe argument will be stored in every pad in the pipeline.
 Drivers should embed the struct media_pipeline
 in higher-level pipeline structures and can then access the
-pipeline through the struct media_entity
+pipeline through the struct media_pad
 pipe field.
 
 Calls to :c:func:`media_pipeline_start()` can be nested.
index e6ee997..ced2f76 100644
@@ -59,7 +59,7 @@ differences.
 * JFFS2 is a write-through file-system, while UBIFS supports write-back,
   which makes UBIFS much faster on writes.
 
-Similarly to JFFS2, UBIFS supports on-the-flight compression which makes
+Similarly to JFFS2, UBIFS supports on-the-fly compression which makes
 it possible to fit quite a lot of data to the flash.
 
 Similarly to JFFS2, UBIFS is tolerant of unclean reboots and power-cuts.
index 3c1b164..6a03edb 100644
@@ -19,6 +19,8 @@ Supported devices:
 
   Corsair HX1200i
 
+  Corsair HX1500i
+
   Corsair RM550i
 
   Corsair RM650i
index f18fd89..1275149 100644
@@ -38,22 +38,10 @@ not affect to allocation performance, especially if the static keys jump
 label patching functionality is available. Following is the kernel's code
 size change due to this facility.
 
-- Without page owner::
-
-   text    data     bss     dec     hex filename
-   48392   2333     644   51369    c8a9 mm/page_alloc.o
-
-- With page owner::
-
-   text    data     bss     dec     hex filename
-   48800   2445     644   51889    cab1 mm/page_alloc.o
-   6662     108      29    6799    1a8f mm/page_owner.o
-   1025       8       8    1041     411 mm/page_ext.o
-
-Although, roughly, 8 KB code is added in total, page_alloc.o increase by
-520 bytes and less than half of it is in hotpath. Building the kernel with
-page owner and turning it on if needed would be great option to debug
-kernel memory problem.
+Although enabling page owner increases kernel size by several kilobytes,
+most of this code is outside the page allocator and its hot path. Building
+the kernel with page owner and turning it on if needed would be a great
+option to debug kernel memory problems.
 
 There is one notice that is caused by implementation detail. page owner
 stores information into the memory from struct page extension. This memory
index 43cdc4d..f69da50 100644
@@ -305,7 +305,7 @@ Possible BPF extensions are shown in the following table:
   vlan_tci                              skb_vlan_tag_get(skb)
   vlan_avail                            skb_vlan_tag_present(skb)
   vlan_tpid                             skb->vlan_proto
-  rand                                  prandom_u32()
+  rand                                  get_random_u32()
   ===================================   =================================================
 
 These extensions can also be prefixed with '#'.
index 16a153b..4f2d1f6 100644 (file)
@@ -104,6 +104,7 @@ Contents:
    switchdev
    sysfs-tagging
    tc-actions-env-rules
+   tc-queue-filters
    tcp-thin
    team
    timestamping
diff --git a/Documentation/networking/tc-queue-filters.rst b/Documentation/networking/tc-queue-filters.rst
new file mode 100644 (file)
index 0000000..6b41709
--- /dev/null
@@ -0,0 +1,37 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================
+TC queue based filtering
+=========================
+
+TC can be used for directing traffic to either a set of queues or
+to a single queue on both the transmit and receive side.
+
+On the transmit side:
+
+1) TC filter directing traffic to a set of queues is achieved
+   using the action skbedit priority for Tx priority selection,
+   the priority maps to a traffic class (set of queues) when
+   the queue-sets are configured using mqprio.
+
+2) TC filter directs traffic to a transmit queue with the action
+   skbedit queue_mapping $tx_qid. The action skbedit queue_mapping
+   for transmit queue is executed in software only and cannot be
+   offloaded.
+
+Likewise, on the receive side, the two filters for selecting set of
+queues and/or a single queue are supported as below:
+
+1) TC flower filter directs incoming traffic to a set of queues using
+   the 'hw_tc' option.
+   hw_tc $TCID - Specify a hardware traffic class to pass matching
+   packets on to. TCID is in the range 0 through 15.
+
+2) TC filter with action skbedit queue_mapping $rx_qid selects a
+   receive queue. The action skbedit queue_mapping for receive queue
+   is supported only in hardware. Multiple filters may compete in
+   the hardware for queue selection. In such case, the hardware
+   pipeline resolves conflicts based on priority. On Intel E810
+   devices, TC filter directing traffic to a queue have higher
+   priority over flow director filter assigning a queue. The hash
+   filter has lowest priority.
index cd6997a..bd15c39 100644 (file)
@@ -379,7 +379,7 @@ to subscribe and unsubscribe from the list can be found at:
 There are archives of the mailing list on the web in many different
 places.  Use a search engine to find these archives.  For example:
 
-       http://dir.gmane.org/gmane.linux.kernel
+       https://lore.kernel.org/lkml/
 
 It is highly recommended that you search the archives about the topic
 you want to bring up, before you post it to the list. A lot of things
index d140070..1fa5ab8 100644 (file)
@@ -319,3 +319,13 @@ unpatched tree to confirm infrastructure didn't mangle it.
 Finally, go back and read
 :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
 to be sure you are not repeating some common mistake documented there.
+
+My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
+---------------------------------------------------------------------------------------------------------
+
+Yes, especially if you spend significant amount of time reviewing code
+and go out of your way to improve shared infrastructure.
+
+The feedback must be requested by you, the contributor, and will always
+be shared with you (even if you request for it to be submitted to your
+manager).
index e23b876..2e5b18f 100644 (file)
@@ -8,6 +8,7 @@ RISC-V architecture
     boot-image-header
     vm-layout
     patch-acceptance
+    uabi
 
     features
 
diff --git a/Documentation/riscv/uabi.rst b/Documentation/riscv/uabi.rst
new file mode 100644 (file)
index 0000000..21a82cf
--- /dev/null
@@ -0,0 +1,6 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+RISC-V Linux User ABI
+=====================
+
+Misaligned accesses are supported in userspace, but they may perform poorly.
index 1c321de..7c4e4b1 100644 (file)
@@ -39,7 +39,7 @@ higher than *30 us*. It is also set to stop the session if a *Thread* timer
 latency higher than *30 us* is hit. Finally, it is set to save the trace
 buffer if the stop condition is hit::
 
-  [root@alien ~]# rtla timerlat top -s 30 -t 30 -T
+  [root@alien ~]# rtla timerlat top -s 30 -T 30 -t
                    Timer Latency
     0 00:00:59   |          IRQ Timer Latency (us)        |         Thread Timer Latency (us)
   CPU COUNT      |      cur       min       avg       max |      cur       min       avg       max
index b37dc19..60bceb0 100644 (file)
@@ -564,7 +564,7 @@ of ftrace. Here is a list of some of the key files:
 
        start::
 
-               trace_fd = open("trace_marker", WR_ONLY);
+               trace_fd = open("trace_marker", O_WRONLY);
 
        Note: Writing into the trace_marker file can also initiate triggers
              that are written into /sys/kernel/tracing/events/ftrace/print/trigger
index 16ad562..15c08ae 100644 (file)
@@ -394,7 +394,7 @@ trovati al sito:
 Ci sono diversi archivi della lista di discussione. Usate un qualsiasi motore
 di ricerca per trovarli. Per esempio:
 
-       http://dir.gmane.org/gmane.linux.kernel
+       https://lore.kernel.org/lkml/
 
 É caldamente consigliata una ricerca in questi archivi sul tema che volete
 sollevare, prima di pubblicarlo sulla lista. Molte cose sono già state
index 649e2ff..b47a682 100644 (file)
@@ -410,7 +410,7 @@ https://bugzilla.kernel.org に行ってください。もし今後のバグレ
 このメーリングリストのアーカイブは web 上の多数の場所に存在します。こ
 れらのアーカイブを探すにはサーチエンジンを使いましょう。例えば-
 
-       http://dir.gmane.org/gmane.linux.kernel
+       https://lore.kernel.org/lkml/
 
 リストに投稿する前にすでにその話題がアーカイブに存在するかどうかを検索
 することを是非やってください。多数の事がすでに詳細に渡って議論されてお
index e439705..df53faf 100644 (file)
@@ -386,7 +386,7 @@ https://bugzilla.kernel.org 를 체크하고자 할 수도 있다; 소수의 커
 웹상의 많은 다른 곳에도 메일링 리스트의 아카이브들이 있다.
 이러한 아카이브들을 찾으려면 검색 엔진을 사용하라. 예를 들어:
 
-      http://dir.gmane.org/gmane.linux.kernel
+      https://lore.kernel.org/lkml/
 
 여러분이 새로운 문제에 관해 리스트에 올리기 전에 말하고 싶은 주제에 관한
 것을 아카이브에서 먼저 찾아보기를 강력히 권장한다. 이미 상세하게 토론된 많은
diff --git a/Documentation/translations/zh_CN/arch.rst b/Documentation/translations/zh_CN/arch.rst
new file mode 100644 (file)
index 0000000..690e173
--- /dev/null
@@ -0,0 +1,29 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+处理器体系结构
+==============
+
+以下文档提供了具体架构实现的编程细节。
+
+.. toctree::
+   :maxdepth: 2
+
+   mips/index
+   arm64/index
+   riscv/index
+   openrisc/index
+   parisc/index
+   loongarch/index
+
+TODOList:
+
+* arm/index
+* ia64/index
+* m68k/index
+* nios2/index
+* powerpc/index
+* s390/index
+* sh/index
+* sparc/index
+* x86/index
+* xtensa/index
index 2ace05f..3df1b03 100644 (file)
@@ -1,7 +1,7 @@
 .. SPDX-License-Identifier: GPL-2.0
 .. include:: ../disclaimer-zh_CN.rst
 
-:Original: Documentation/Devicetree/changesets.rst
+:Original: Documentation/devicetree/changesets.rst
 
 :翻译:
 
index 1151903..6dfd946 100644 (file)
@@ -1,7 +1,7 @@
 .. SPDX-License-Identifier: GPL-2.0
 .. include:: ../disclaimer-zh_CN.rst
 
-:Original: Documentation/Devicetree/dynamic-resolution-notes.rst
+:Original: Documentation/devicetree/dynamic-resolution-notes.rst
 
 :翻译:
 
index 6aa3b68..2fb7293 100644 (file)
@@ -1,7 +1,7 @@
 .. SPDX-License-Identifier: GPL-2.0
 .. include:: ../disclaimer-zh_CN.rst
 
-:Original: Documentation/Devicetree/kernel-api.rst
+:Original: Documentation/devicetree/kernel-api.rst
 
 :翻译:
 
index 1bd482c..43e3c0b 100644 (file)
@@ -1,7 +1,7 @@
 .. SPDX-License-Identifier: GPL-2.0
 .. include:: ../disclaimer-zh_CN.rst
 
-:Original: Documentation/Devicetree/overlay-notes.rst
+:Original: Documentation/devicetree/overlay-notes.rst
 
 :翻译:
 
index 2fc60e6..ec99ef5 100644 (file)
 顺便说下,中文文档也需要遵守内核编码风格,风格中中文和英文的主要不同就是中文
 的字符标点占用两个英文字符宽度, 所以,当英文要求不要超过每行100个字符时,
 中文就不要超过50个字符。另外,也要注意'-','=' 等符号与相关标题的对齐。在将
-补丁提交到社区之前,一定要进行必要的checkpatch.pl检查和编译测试。
+补丁提交到社区之前,一定要进行必要的 ``checkpatch.pl`` 检查和编译测试。
 
-许可证文档
-----------
-
-下面的文档介绍了Linux内核源代码的许可证(GPLv2)、如何在源代码树中正确标记
-单个文件的许可证、以及指向完整许可证文本的链接。
+与Linux 内核社区一起工作
+------------------------
 
-* Documentation/translations/zh_CN/process/license-rules.rst
-
-用户文档
---------
-
-下面的手册是为内核用户编写的——即那些试图让它在给定系统上以最佳方式工作的
-用户。
+与内核开发社区进行协作并将工作推向上游的基本指南。
 
 .. toctree::

-   :maxdepth: 2
+   :maxdepth: 1
 
-   admin-guide/index
-
-TODOList:
-
-* kbuild/index
+   process/development-process
+   process/submitting-patches
+   行为准则 <process/code-of-conduct>
+   maintainer/index
+   完整开发流程文档 <process/index>

-固件相关文档
--------------
+内部API文档
+-----------
 
-下列文档描述了内核需要的平台固件相关信息
+开发人员使用的内核内部交互接口手册
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
-   devicetree/index
+   core-api/index
+   driver-api/index
+   内核中的锁 <locking/index>
 
 TODOList:
 
-* firmware-guide/index
-
-应用程序开发人员文档
---------------------
-
-用户空间API手册涵盖了描述应用程序开发人员可见内核接口方面的文档。
+* subsystem-apis
 
-TODOlist:
+开发工具和流程
+--------------
 
-* userspace-api/index
-
-内核开发简介
-------------
-
-这些手册包含有关如何开发内核的整体信息。内核社区非常庞大,一年下来有数千名
-开发人员做出贡献。与任何大型社区一样,知道如何完成任务将使得更改合并的过程
-变得更加容易。
+为所有内核开发人员提供有用信息的各种其他手册。
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
-   process/index
-   dev-tools/index
+   process/license-rules
    doc-guide/index
+   dev-tools/index
+   dev-tools/testing-overview
    kernel-hacking/index
-   maintainer/index
 
 TODOList:
 
 * trace/index
 * fault-injection/index
 * livepatch/index
-* rust/index
 
-内核API文档
------------
+面向用户的文档
+--------------

-以下手册从内核开发人员的角度详细介绍了特定的内核子系统是如何工作的。这里的
-大部分信息都是直接从内核源代码获取的,并根据需要添加补充材料(或者至少是在
-我们设法添加的时候——可能不是所有的都是有需要的)
+下列手册针对
+希望内核在给定系统上以最佳方式工作的*用户*,
+和查找内核用户空间API信息的程序开发人员
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
-   core-api/index
-   driver-api/index
-   locking/index
-   accounting/index
-   cpu-freq/index
-   iio/index
-   infiniband/index
-   power/index
-   virt/index
-   sound/index
-   filesystems/index
-   scheduler/index
-   mm/index
-   peci/index
-   PCI/index
+   admin-guide/index
+   admin-guide/reporting-issues.rst
 
 TODOList:
 
-* block/index
-* cdrom/index
-* ide/index
-* fb/index
-* fpga/index
-* hid/index
-* i2c/index
-* isdn/index
-* leds/index
-* netlabel/index
-* networking/index
-* pcmcia/index
-* target/index
-* timers/index
-* spi/index
-* w1/index
-* watchdog/index
-* input/index
-* hwmon/index
-* gpu/index
-* security/index
-* crypto/index
-* bpf/index
-* usb/index
-* scsi/index
-* misc-devices/index
-* mhi/index
-
-体系结构无关文档
-----------------
+* 内核构建系统 <kbuild/index>
+* 用户空间工具 <tools/index>
+* userspace-api/index
 
-TODOList:
+也可参考独立于内核文档的 `Linux 手册页 <https://www.kernel.org/doc/man-pages/>`_ 。
 
-* asm-annotations
+固件相关文档
+------------
 
-特定体系结构文档
-----------------
+下列文档描述了内核需要的平台固件相关信息。
 
 .. toctree::
    :maxdepth: 2
 
-   mips/index
-   arm64/index
-   riscv/index
-   openrisc/index
-   parisc/index
-   loongarch/index
+   devicetree/index
 
 TODOList:
 
-* arm/index
-* ia64/index
-* m68k/index
-* nios2/index
-* powerpc/index
-* s390/index
-* sh/index
-* sparc/index
-* x86/index
-* xtensa/index
+* firmware-guide/index
+
+体系结构文档
+------------
+
+.. toctree::
+   :maxdepth: 2
+
+   arch
 
 其他文档
 --------
@@ -195,9 +130,9 @@ TODOList:
 TODOList:
 
 * staging/index
-* watch_queue
 
-目录和表格
+
+索引和表格
 ----------
 
 * :ref:`genindex`
index d1f82e8..f0f4587 100644 (file)
@@ -30,7 +30,7 @@ KSM的用户空间的接口在Documentation/translations/zh_CN/admin-guide/mm/ks
 KSM维护着稳定树中的KSM页的逆映射信息。
 
 当KSM页面的共享数小于 ``max_page_sharing`` 的虚拟内存区域(VMAs)时,则代表了
-KSM页的稳定树其中的节点指向了一个rmap_item结构体类型的列表。同时,这个KSM页
+KSM页的稳定树其中的节点指向了一个ksm_rmap_item结构体类型的列表。同时,这个KSM页
 的 ``page->mapping`` 指向了该稳定树节点。
 
 如果共享数超过了阈值,KSM将给稳定树添加第二个维度。稳定树就变成链接一个或多
index b7f81d7..21a6a08 100644 (file)
@@ -74,15 +74,19 @@ page owner在默认情况下是禁用的。所以,如果你想使用它,你
        cat /sys/kernel/debug/page_owner > page_owner_full.txt
        ./page_owner_sort page_owner_full.txt sorted_page_owner.txt
 
-   ``page_owner_full.txt`` 的一般输出情况如下(输出信息无翻译价值)::
+   ``page_owner_full.txt`` 的一般输出情况如下::
 
        Page allocated via order XXX, ...
        PFN XXX ...
-       // Detailed stack
+       // 栈详情
 
        Page allocated via order XXX, ...
        PFN XXX ...
-       // Detailed stack
+       // 栈详情
+    默认情况下,它将以一个给定的pfn开始,做完整的pfn转储,且page_owner支持fseek。
+
+    FILE *fp = fopen("/sys/kernel/debug/page_owner", "r");
+    fseek(fp, pfn_start, SEEK_SET);
 
    ``page_owner_sort`` 工具忽略了 ``PFN`` 行,将剩余的行放在buf中,使用regexp提
    取页序值,计算buf的次数和页数,最后根据参数进行排序。
index 1455190..5bf9531 100644 (file)
@@ -306,7 +306,7 @@ bugzilla.kernel.org是Linux内核开发者们用来跟踪内核Bug的网站。
 网上很多地方都有这个邮件列表的存档(archive)。可以使用搜索引擎来找到这些
 存档。比如:
 
-       http://dir.gmane.org/gmane.linux.kernel
+       https://lore.kernel.org/lkml/
 
 在发信之前,我们强烈建议你先在存档中搜索你想要讨论的问题。很多已经被详细
 讨论过的问题只在邮件列表的存档中可以找到。
index a683dbe..a1a35f8 100644 (file)
@@ -10,6 +10,7 @@
 
 .. _cn_process_index:
 
+========================
 与Linux 内核社区一起工作
 ========================
 
index 68ae441..86b0d4c 100644 (file)
@@ -309,7 +309,7 @@ bugzilla.kernel.org是Linux內核開發者們用來跟蹤內核Bug的網站。
 網上很多地方都有這個郵件列表的存檔(archive)。可以使用搜尋引擎來找到這些
 存檔。比如:
 
-       http://dir.gmane.org/gmane.linux.kernel
+       https://lore.kernel.org/lkml/
 
 在發信之前,我們強烈建議你先在存檔中搜索你想要討論的問題。很多已經被詳細
 討論過的問題只在郵件列表的存檔中可以找到。
index 13de01d..15fa175 100644 (file)
@@ -239,6 +239,7 @@ ignore define CEC_OP_FEAT_DEV_HAS_DECK_CONTROL
 ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE
 ignore define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX
 ignore define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX
+ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL
 
 ignore define CEC_MSG_GIVE_FEATURES
 
@@ -487,6 +488,7 @@ ignore define CEC_OP_SYS_AUD_STATUS_ON
 
 ignore define CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST
 ignore define CEC_MSG_SYSTEM_AUDIO_MODE_STATUS
+ignore define CEC_MSG_SET_AUDIO_VOLUME_LEVEL
 
 ignore define CEC_OP_AUD_FMT_ID_CEA861
 ignore define CEC_OP_AUD_FMT_ID_CEA861_CXT
index 9021531..7c8bf16 100644 (file)
@@ -136,9 +136,9 @@ V4L2 functions
 
    operates like the :c:func:`read()` function.
 
-.. c:function:: void v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
+.. c:function:: void *v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
 
-   operates like the :c:func:`munmap()` function.
+   operates like the :c:func:`mmap()` function.
 
 .. c:function:: int v4l2_munmap(void *_start, size_t length);
 
index 5c6ce09..657b223 100644 (file)
@@ -752,7 +752,7 @@ ALIBABA PMU DRIVER
 M:     Shuai Xue <xueshuai@linux.alibaba.com>
 S:     Supported
 F:     Documentation/admin-guide/perf/alibaba_pmu.rst
-F:     drivers/perf/alibaba_uncore_dwr_pmu.c
+F:     drivers/perf/alibaba_uncore_drw_pmu.c
 
 ALIENWARE WMI DRIVER
 L:     Dell.Client.Kernel@dell.com
@@ -2439,6 +2439,7 @@ L:        linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:     Supported
 T:     git git://github.com/microchip-ung/linux-upstream.git
 F:     arch/arm64/boot/dts/microchip/
+F:     drivers/net/ethernet/microchip/vcap/
 F:     drivers/pinctrl/pinctrl-microchip-sgpio.c
 N:     sparx5
 
@@ -4459,13 +4460,15 @@ M:      Josef Bacik <josef@toxicpanda.com>
 M:     David Sterba <dsterba@suse.com>
 L:     linux-btrfs@vger.kernel.org
 S:     Maintained
-W:     http://btrfs.wiki.kernel.org/
-Q:     http://patchwork.kernel.org/project/linux-btrfs/list/
+W:     https://btrfs.readthedocs.io
+W:     https://btrfs.wiki.kernel.org/
+Q:     https://patchwork.kernel.org/project/linux-btrfs/list/
 C:     irc://irc.libera.chat/btrfs
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
 F:     Documentation/filesystems/btrfs.rst
 F:     fs/btrfs/
 F:     include/linux/btrfs*
+F:     include/trace/events/btrfs.h
 F:     include/uapi/linux/btrfs*
 
 BTTV VIDEO4LINUX DRIVER
@@ -5266,6 +5269,7 @@ F:        tools/testing/selftests/cgroup/
 
 CONTROL GROUP - BLOCK IO CONTROLLER (BLKIO)
 M:     Tejun Heo <tj@kernel.org>
+M:     Josef Bacik <josef@toxicpanda.com>
 M:     Jens Axboe <axboe@kernel.dk>
 L:     cgroups@vger.kernel.org
 L:     linux-block@vger.kernel.org
@@ -5273,6 +5277,7 @@ T:        git git://git.kernel.dk/linux-block
 F:     Documentation/admin-guide/cgroup-v1/blkio-controller.rst
 F:     block/bfq-cgroup.c
 F:     block/blk-cgroup.c
+F:     block/blk-iocost.c
 F:     block/blk-iolatency.c
 F:     block/blk-throttle.c
 F:     include/linux/blk-cgroup.h
@@ -6280,7 +6285,7 @@ M:        Sakari Ailus <sakari.ailus@linux.intel.com>
 L:     linux-media@vger.kernel.org
 S:     Maintained
 T:     git git://linuxtv.org/media_tree.git
-F:     Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
+F:     Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
 F:     drivers/media/i2c/dw9714.c
 
 DONGWOON DW9768 LENS VOICE COIL DRIVER
@@ -6322,6 +6327,7 @@ F:        drivers/net/ethernet/freescale/dpaa2/Kconfig
 F:     drivers/net/ethernet/freescale/dpaa2/Makefile
 F:     drivers/net/ethernet/freescale/dpaa2/dpaa2-eth*
 F:     drivers/net/ethernet/freescale/dpaa2/dpaa2-mac*
+F:     drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk*
 F:     drivers/net/ethernet/freescale/dpaa2/dpkg.h
 F:     drivers/net/ethernet/freescale/dpaa2/dpmac*
 F:     drivers/net/ethernet/freescale/dpaa2/dpni*
@@ -14709,6 +14715,12 @@ F:     drivers/nvme/target/auth.c
 F:     drivers/nvme/target/fabrics-cmd-auth.c
 F:     include/linux/nvme-auth.h
 
+NVM EXPRESS HARDWARE MONITORING SUPPORT
+M:     Guenter Roeck <linux@roeck-us.net>
+L:     linux-nvme@lists.infradead.org
+S:     Supported
+F:     drivers/nvme/host/hwmon.c
+
 NVM EXPRESS FC TRANSPORT DRIVERS
 M:     James Smart <james.smart@broadcom.com>
 L:     linux-nvme@lists.infradead.org
@@ -15359,17 +15371,6 @@ L:     linux-rdma@vger.kernel.org
 S:     Supported
 F:     drivers/infiniband/ulp/opa_vnic
 
-OPEN FIRMWARE AND DEVICE TREE OVERLAYS
-M:     Pantelis Antoniou <pantelis.antoniou@konsulko.com>
-M:     Frank Rowand <frowand.list@gmail.com>
-L:     devicetree@vger.kernel.org
-S:     Maintained
-F:     Documentation/devicetree/dynamic-resolution-notes.rst
-F:     Documentation/devicetree/overlay-notes.rst
-F:     drivers/of/overlay.c
-F:     drivers/of/resolver.c
-K:     of_overlay_notifier_
-
 OPEN FIRMWARE AND FLATTENED DEVICE TREE
 M:     Rob Herring <robh+dt@kernel.org>
 M:     Frank Rowand <frowand.list@gmail.com>
@@ -15382,6 +15383,9 @@ F:      Documentation/ABI/testing/sysfs-firmware-ofw
 F:     drivers/of/
 F:     include/linux/of*.h
 F:     scripts/dtc/
+K:     of_overlay_notifier_
+K:     of_overlay_fdt_apply
+K:     of_overlay_remove
 
 OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
 M:     Rob Herring <robh+dt@kernel.org>
@@ -15419,7 +15423,7 @@ M:      Stafford Horne <shorne@gmail.com>
 L:     openrisc@lists.librecores.org
 S:     Maintained
 W:     http://openrisc.io
-T:     git git://github.com/openrisc/linux.git
+T:     git https://github.com/openrisc/linux.git
 F:     Documentation/devicetree/bindings/openrisc/
 F:     Documentation/openrisc/
 F:     arch/openrisc/
@@ -15847,7 +15851,7 @@ F:      Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
 F:     drivers/pci/controller/dwc/*designware*
 
 PCI DRIVER FOR TI DRA7XX/J721E
-M:     Kishon Vijay Abraham I <kishon@ti.com>
+M:     Vignesh Raghavendra <vigneshr@ti.com>
 L:     linux-omap@vger.kernel.org
 L:     linux-pci@vger.kernel.org
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -15864,10 +15868,10 @@ F:    Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
 F:     drivers/pci/controller/pci-v3-semi.c
 
 PCI ENDPOINT SUBSYSTEM
-M:     Kishon Vijay Abraham I <kishon@ti.com>
 M:     Lorenzo Pieralisi <lpieralisi@kernel.org>
 R:     Krzysztof Wilczyński <kw@linux.com>
 R:     Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+R:     Kishon Vijay Abraham I <kishon@kernel.org>
 L:     linux-pci@vger.kernel.org
 S:     Supported
 Q:     https://patchwork.kernel.org/project/linux-pci/list/
@@ -16673,6 +16677,7 @@ F:      Documentation/driver-api/ptp.rst
 F:     drivers/net/phy/dp83640*
 F:     drivers/ptp/*
 F:     include/linux/ptp_cl*
+K:     (?:\b|_)ptp(?:\b|_)
 
 PTP VIRTUAL CLOCK SUPPORT
 M:     Yangbo Lu <yangbo.lu@nxp.com>
@@ -17710,6 +17715,7 @@ M:      Palmer Dabbelt <palmer@dabbelt.com>
 M:     Albert Ou <aou@eecs.berkeley.edu>
 L:     linux-riscv@lists.infradead.org
 S:     Supported
+Q:     https://patchwork.kernel.org/project/linux-riscv/list/
 P:     Documentation/riscv/patch-acceptance.rst
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
 F:     arch/riscv/
@@ -17721,12 +17727,13 @@ M:    Conor Dooley <conor.dooley@microchip.com>
 M:     Daire McNamara <daire.mcnamara@microchip.com>
 L:     linux-riscv@lists.infradead.org
 S:     Supported
-F:     Documentation/devicetree/bindings/clock/microchip,mpfs.yaml
+F:     Documentation/devicetree/bindings/clock/microchip,mpfs*.yaml
 F:     Documentation/devicetree/bindings/gpio/microchip,mpfs-gpio.yaml
 F:     Documentation/devicetree/bindings/i2c/microchip,corei2c.yaml
 F:     Documentation/devicetree/bindings/mailbox/microchip,mpfs-mailbox.yaml
 F:     Documentation/devicetree/bindings/net/can/microchip,mpfs-can.yaml
 F:     Documentation/devicetree/bindings/pwm/microchip,corepwm.yaml
+F:     Documentation/devicetree/bindings/riscv/microchip.yaml
 F:     Documentation/devicetree/bindings/soc/microchip/microchip,mpfs-sys-controller.yaml
 F:     Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
 F:     Documentation/devicetree/bindings/usb/microchip,mpfs-musb.yaml
@@ -18137,7 +18144,6 @@ L:      linux-media@vger.kernel.org
 S:     Maintained
 T:     git git://linuxtv.org/media_tree.git
 F:     drivers/staging/media/deprecated/saa7146/
-F:     include/media/drv-intf/saa7146*
 
 SAFESETID SECURITY MODULE
 M:     Micah Morton <mortonm@chromium.org>
@@ -18217,7 +18223,6 @@ F:      include/media/drv-intf/s3c_camif.h
 
 SAMSUNG S3FWRN5 NFC DRIVER
 M:     Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
-M:     Krzysztof Opasiak <k.opasiak@samsung.com>
 L:     linux-nfc@lists.01.org (subscribers-only)
 S:     Maintained
 F:     Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml
@@ -18523,8 +18528,7 @@ S:      Maintained
 F:     drivers/mmc/host/sdhci-esdhc-imx.c
 
 SECURE ENCRYPTING DEVICE (SED) OPAL DRIVER
-M:     Jonathan Derrick <jonathan.derrick@intel.com>
-M:     Revanth Rajashekar <revanth.rajashekar@intel.com>
+M:     Jonathan Derrick <jonathan.derrick@linux.dev>
 L:     linux-block@vger.kernel.org
 S:     Supported
 F:     block/opal_proto.h
@@ -21300,7 +21304,7 @@ L:      linux-usb@vger.kernel.org
 L:     netdev@vger.kernel.org
 S:     Maintained
 W:     https://github.com/petkan/pegasus
-T:     git git://github.com/petkan/pegasus.git
+T:     git https://github.com/petkan/pegasus.git
 F:     drivers/net/usb/pegasus.*
 
 USB PHY LAYER
@@ -21337,7 +21341,7 @@ L:      linux-usb@vger.kernel.org
 L:     netdev@vger.kernel.org
 S:     Maintained
 W:     https://github.com/petkan/rtl8150
-T:     git git://github.com/petkan/rtl8150.git
+T:     git https://github.com/petkan/rtl8150.git
 F:     drivers/net/usb/rtl8150.c
 
 USB SERIAL SUBSYSTEM
@@ -22128,6 +22132,7 @@ F:      Documentation/watchdog/
 F:     drivers/watchdog/
 F:     include/linux/watchdog.h
 F:     include/uapi/linux/watchdog.h
+F:     include/trace/events/watchdog.h
 
 WHISKEYCOVE PMIC GPIO DRIVER
 M:     Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
@@ -22768,7 +22773,7 @@ S:      Maintained
 W:     http://mjpeg.sourceforge.net/driver-zoran/
 Q:     https://patchwork.linuxtv.org/project/linux-media/list/
 F:     Documentation/driver-api/media/drivers/zoran.rst
-F:     drivers/staging/media/zoran/
+F:     drivers/media/pci/zoran/
 
 ZRAM COMPRESSED RAM BLOCK DEVICE DRVIER
 M:     Minchan Kim <minchan@kernel.org>
index cfbe6a7..d148a55 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -1,8 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
-PATCHLEVEL = 0
+PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION =
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
@@ -1979,6 +1979,8 @@ endif
 
 single-goals := $(addprefix $(build-dir)/, $(single-no-ko))
 
+KBUILD_MODULES := 1
+
 endif
 
 # Preset locale variables to speed up the build process. Limit locale
index 6d0b3ba..e9348ae 100644 (file)
@@ -803,7 +803,7 @@ void __iomem *marvel_ioportmap (unsigned long addr)
        return (void __iomem *)addr;
 }
 
-unsigned u8
+u8
 marvel_ioread8(const void __iomem *xaddr)
 {
        unsigned long addr = (unsigned long) xaddr;
index fc30df8..a2b31d9 100644 (file)
@@ -371,7 +371,7 @@ static unsigned long sigpage_addr(const struct mm_struct *mm,
 
        slots = ((last - first) >> PAGE_SHIFT) + 1;
 
-       offset = get_random_int() % slots;
+       offset = prandom_u32_max(slots);
 
        addr = first + (offset << PAGE_SHIFT);
 
index ea128e3..e07f359 100644 (file)
@@ -655,7 +655,7 @@ struct page *get_signal_page(void)
                 PAGE_SIZE / sizeof(u32));
 
        /* Give the signal return code some randomness */
-       offset = 0x200 + (get_random_int() & 0x7fc);
+       offset = 0x200 + (get_random_u16() & 0x7fc);
        signal_return_offset = offset;
 
        /* Copy signal return handlers into the page */
index 79f4a2a..9968239 100644 (file)
@@ -238,7 +238,7 @@ void pxa_usb_phy_deinit(void __iomem *phy_reg)
 static u64 __maybe_unused usb_dma_mask = ~(u32)0;
 
 #if IS_ENABLED(CONFIG_PHY_PXA_USB)
-struct resource pxa168_usb_phy_resources[] = {
+static struct resource pxa168_usb_phy_resources[] = {
        [0] = {
                .start  = PXA168_U2O_PHYBASE,
                .end    = PXA168_U2O_PHYBASE + USB_PHY_RANGE,
@@ -259,7 +259,7 @@ struct platform_device pxa168_device_usb_phy = {
 #endif /* CONFIG_PHY_PXA_USB */
 
 #if IS_ENABLED(CONFIG_USB_MV_UDC)
-struct resource pxa168_u2o_resources[] = {
+static struct resource pxa168_u2o_resources[] = {
        /* regbase */
        [0] = {
                .start  = PXA168_U2O_REGBASE + U2x_CAPREGS_OFFSET,
@@ -294,7 +294,7 @@ struct platform_device pxa168_device_u2o = {
 #endif /* CONFIG_USB_MV_UDC */
 
 #if IS_ENABLED(CONFIG_USB_EHCI_MV_U2O)
-struct resource pxa168_u2oehci_resources[] = {
+static struct resource pxa168_u2oehci_resources[] = {
        [0] = {
                .start  = PXA168_U2O_REGBASE,
                .end    = PXA168_U2O_REGBASE + USB_REG_RANGE,
@@ -321,7 +321,7 @@ struct platform_device pxa168_device_u2oehci = {
 #endif
 
 #if IS_ENABLED(CONFIG_USB_MV_OTG)
-struct resource pxa168_u2ootg_resources[] = {
+static struct resource pxa168_u2ootg_resources[] = {
        /* regbase */
        [0] = {
                .start  = PXA168_U2O_REGBASE + U2x_CAPREGS_OFFSET,
index 43b7996..9e36920 100644 (file)
@@ -25,11 +25,8 @@ extern struct pl022_ssp_controller pl022_plat_data;
 extern struct pl08x_platform_data pl080_plat_data;
 
 void __init spear_setup_of_timer(void);
-void __init spear3xx_clk_init(void __iomem *misc_base,
-                             void __iomem *soc_config_base);
 void __init spear3xx_map_io(void);
 void __init spear3xx_dt_init_irq(void);
-void __init spear6xx_clk_init(void __iomem *misc_base);
 void __init spear13xx_map_io(void);
 void __init spear13xx_l2x0_init(void);
 
index 2ba406e..7ef9670 100644 (file)
@@ -13,6 +13,7 @@
 #include <linux/amba/pl022.h>
 #include <linux/amba/pl080.h>
 #include <linux/clk.h>
+#include <linux/clk/spear.h>
 #include <linux/io.h>
 #include <asm/mach/map.h>
 #include "pl080.h"
index 5818349..f0a1e70 100644 (file)
@@ -12,6 +12,7 @@
 
 #include <linux/amba/pl08x.h>
 #include <linux/clk.h>
+#include <linux/clk/spear.h>
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
@@ -339,7 +340,7 @@ static struct pl08x_platform_data spear6xx_pl080_plat_data = {
  * 0xD0000000          0xFD000000
  * 0xFC000000          0xFC000000
  */
-struct map_desc spear6xx_io_desc[] __initdata = {
+static struct map_desc spear6xx_io_desc[] __initdata = {
        {
                .virtual        = (unsigned long)VA_SPEAR6XX_ML_CPU_BASE,
                .pfn            = __phys_to_pfn(SPEAR_ICM3_ML1_2_BASE),
@@ -359,12 +360,12 @@ struct map_desc spear6xx_io_desc[] __initdata = {
 };
 
 /* This will create static memory mapping for selected devices */
-void __init spear6xx_map_io(void)
+static void __init spear6xx_map_io(void)
 {
        iotable_init(spear6xx_io_desc, ARRAY_SIZE(spear6xx_io_desc));
 }
 
-void __init spear6xx_timer_init(void)
+static void __init spear6xx_timer_init(void)
 {
        char pclk_name[] = "pll3_clk";
        struct clk *gpt_clk, *pclk;
@@ -394,7 +395,7 @@ void __init spear6xx_timer_init(void)
 }
 
 /* Add auxdata to pass platform data */
-struct of_dev_auxdata spear6xx_auxdata_lookup[] __initdata = {
+static struct of_dev_auxdata spear6xx_auxdata_lookup[] __initdata = {
        OF_DEV_AUXDATA("arm,pl080", SPEAR_ICM3_DMA_BASE, NULL,
                        &spear6xx_pl080_plat_data),
        {}
index f6737d2..505c8a1 100644 (file)
@@ -632,6 +632,23 @@ config ARM64_ERRATUM_1530923
 config ARM64_WORKAROUND_REPEAT_TLBI
        bool
 
+config ARM64_ERRATUM_2441007
+       bool "Cortex-A55: Completion of affected memory accesses might not be guaranteed by completion of a TLBI"
+       default y
+       select ARM64_WORKAROUND_REPEAT_TLBI
+       help
+         This option adds a workaround for ARM Cortex-A55 erratum #2441007.
+
+         Under very rare circumstances, affected Cortex-A55 CPUs
+         may not handle a race between a break-before-make sequence on one
+         CPU, and another CPU accessing the same page. This could allow a
+         store to a page that has been unmapped.
+
+         Work around this by adding the affected CPUs to the list that needs
+         TLB sequences to be done twice.
+
+         If unsure, say Y.
+
 config ARM64_ERRATUM_1286807
        bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
        default y
index 8aa0d27..abc4186 100644 (file)
@@ -60,6 +60,7 @@
 #define ARM_CPU_IMP_FUJITSU            0x46
 #define ARM_CPU_IMP_HISI               0x48
 #define ARM_CPU_IMP_APPLE              0x61
+#define ARM_CPU_IMP_AMPERE             0xC0
 
 #define ARM_CPU_PART_AEM_V8            0xD0F
 #define ARM_CPU_PART_FOUNDATION                0xD00
 #define APPLE_CPU_PART_M1_ICESTORM_MAX 0x028
 #define APPLE_CPU_PART_M1_FIRESTORM_MAX        0x029
 
+#define AMPERE_CPU_PART_AMPERE1                0xAC3
+
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_APPLE_M1_FIRESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_PRO)
 #define MIDR_APPLE_M1_ICESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_MAX)
 #define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
+#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
 
 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
 #define MIDR_FUJITSU_ERRATUM_010001            MIDR_FUJITSU_A64FX
index 1b098bd..3252eb5 100644 (file)
 
 #define KVM_PGTABLE_MAX_LEVELS         4U
 
+/*
+ * The largest supported block sizes for KVM (no 52-bit PA support):
+ *  - 4K (level 1):    1GB
+ *  - 16K (level 2):   32MB
+ *  - 64K (level 2):   512MB
+ */
+#ifdef CONFIG_ARM64_4K_PAGES
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL    1U
+#else
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL    2U
+#endif
+
 static inline u64 kvm_get_parange(u64 mmfr0)
 {
        u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
@@ -58,11 +70,7 @@ static inline u64 kvm_granule_size(u32 level)
 
 static inline bool kvm_level_supports_block_mapping(u32 level)
 {
-       /*
-        * Reject invalid block mappings and don't bother with 4TB mappings for
-        * 52-bit PAs.
-        */
-       return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
+       return level >= KVM_PGTABLE_MIN_BLOCK_LEVEL;
 }
 
 /**
index fe341a6..c8dca8a 100644
 #include <linux/pgtable.h>
 
 /*
- * PGDIR_SHIFT determines the size a top-level page table entry can map
- * and depends on the number of levels in the page table. Compute the
- * PGDIR_SHIFT for a given number of levels.
- */
-#define pt_levels_pgdir_shift(lvls)    ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (lvls))
-
-/*
  * The hardware supports concatenation of up to 16 tables at stage2 entry
  * level and we use the feature whenever possible, which means we resolve 4
  * additional bits of address at the entry level.
 #define stage2_pgtable_levels(ipa)     ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
 #define kvm_stage2_levels(kvm)         VTCR_EL2_LVLS(kvm->arch.vtcr)
 
-/* stage2_pgdir_shift() is the size mapped by top-level stage2 entry for the VM */
-#define stage2_pgdir_shift(kvm)                pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
-#define stage2_pgdir_size(kvm)         (1ULL << stage2_pgdir_shift(kvm))
-#define stage2_pgdir_mask(kvm)         ~(stage2_pgdir_size(kvm) - 1)
-
 /*
 * kvm_mmu_cache_min_pages() is the number of pages required to install
  * a stage-2 translation. We pre-allocate the entry level page table at
  */
 #define kvm_mmu_cache_min_pages(kvm)   (kvm_stage2_levels(kvm) - 1)
 
-static inline phys_addr_t
-stage2_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
-{
-       phys_addr_t boundary = (addr + stage2_pgdir_size(kvm)) & stage2_pgdir_mask(kvm);
-
-       return (boundary - 1 < end - 1) ? boundary : end;
-}
-
 #endif /* __ARM64_S2_PGTABLE_H_ */
index 58ca4f6..89ac000 100644
@@ -230,6 +230,11 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
                ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
        },
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_2441007
+       {
+               ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+       },
+#endif
 #ifdef CONFIG_ARM64_ERRATUM_2441009
        {
                /* Cortex-A510 r0p0 -> r1p1. Fixed in r1p2 */
index bd5df50..795344a 100644
@@ -7,6 +7,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
@@ -294,10 +295,14 @@ SYM_FUNC_END(ftrace_graph_caller)
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
 
-SYM_FUNC_START(ftrace_stub)
+SYM_TYPED_FUNC_START(ftrace_stub)
        ret
 SYM_FUNC_END(ftrace_stub)
 
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+       ret
+SYM_FUNC_END(ftrace_stub_graph)
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
  * void return_to_handler(void)
index aca8847..7467217 100644
@@ -48,7 +48,12 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
        if (!pte_is_tagged)
                return;
 
-       mte_clear_page_tags(page_address(page));
+       /*
+        * Test PG_mte_tagged again in case it was racing with another
+        * set_pte_at().
+        */
+       if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+               mte_clear_page_tags(page_address(page));
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -64,7 +69,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
        /* if PG_mte_tagged is set, tags have already been initialised */
        for (i = 0; i < nr_pages; i++, page++) {
-               if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+               if (!test_bit(PG_mte_tagged, &page->flags))
                        mte_sync_page_tags(page, old_pte, check_swap,
                                           pte_is_tagged);
        }
index 9015f49..044a7d7 100644
@@ -591,7 +591,7 @@ unsigned long __get_wchan(struct task_struct *p)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() & ~PAGE_MASK;
+               sp -= prandom_u32_max(PAGE_SIZE);
        return sp & ~0xf;
 }
 
index a8ea163..bfce41c 100644
@@ -868,6 +868,10 @@ u8 spectre_bhb_loop_affected(int scope)
                        MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
                        {},
                };
+               static const struct midr_range spectre_bhb_k11_list[] = {
+                       MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+                       {},
+               };
                static const struct midr_range spectre_bhb_k8_list[] = {
                        MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
                        MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
@@ -878,6 +882,8 @@ u8 spectre_bhb_loop_affected(int scope)
                        k = 32;
                else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
                        k = 24;
+               else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
+                       k = 11;
                else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
                        k =  8;
 
index 733451f..d72e8f2 100644
@@ -67,7 +67,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
         *
         * The resulting 5 bits of entropy are seen in SP[8:4].
         */
-       choose_random_kstack_offset(get_random_int() & 0x1FF);
+       choose_random_kstack_offset(get_random_u16() & 0x1FF);
 }
 
 static inline bool has_syscall_work(unsigned long flags)
index 687598e..a38dea6 100644
@@ -5,9 +5,6 @@
 
 incdir := $(srctree)/$(src)/include
 subdir-asflags-y := -I$(incdir)
-subdir-ccflags-y := -I$(incdir)                                \
-                   -fno-stack-protector                \
-                   -DDISABLE_BRANCH_PROFILING          \
-                   $(DISABLE_STACKLEAK_PLUGIN)
+subdir-ccflags-y := -I$(incdir)
 
 obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o
index b5c5119..be0a2bc 100644
@@ -10,6 +10,9 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
 # will explode instantly (Words of Marc Zyngier). So introduce a generic flag
 # __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
+ccflags-y += -fno-stack-protector      \
+            -DDISABLE_BRANCH_PROFILING \
+            $(DISABLE_STACKLEAK_PLUGIN)
 
 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include
@@ -89,6 +92,10 @@ quiet_cmd_hypcopy = HYPCOPY $@
 # Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
 # This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
+# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
+# when profile optimization is applied. gen-hyprel does not support SHT_REL and
+# causes a build failure. Remove profile optimization flags.
+KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS))
 
 # KVM nVHE code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
index 34c5fee..60ee3d9 100644
@@ -31,6 +31,13 @@ static phys_addr_t hyp_idmap_vector;
 
 static unsigned long io_map_base;
 
+static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+       phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
+       phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
+
+       return (boundary - 1 < end - 1) ? boundary : end;
+}
 
 /*
  * Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
@@ -52,7 +59,7 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
                if (!pgt)
                        return -EINVAL;
 
-               next = stage2_pgd_addr_end(kvm, addr, end);
+               next = stage2_range_addr_end(addr, end);
                ret = fn(pgt, addr, next - addr);
                if (ret)
                        break;
index 24d7778..733b530 100644
@@ -2149,7 +2149,7 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
 
        memset(entry, 0, esz);
 
-       while (len > 0) {
+       while (true) {
                int next_offset;
                size_t byte_offset;
 
@@ -2162,6 +2162,9 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
                        return next_offset;
 
                byte_offset = next_offset * esz;
+               if (byte_offset >= len)
+                       break;
+
                id += next_offset;
                gpa += byte_offset;
                len -= byte_offset;
index 4334dec..bed803d 100644
@@ -53,7 +53,12 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
        if (!tags)
                return false;
 
-       mte_restore_page_tags(page_address(page), tags);
+       /*
+        * Test PG_mte_tagged again in case it was racing with another
+        * set_pte_at().
+        */
+       if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+               mte_restore_page_tags(page_address(page), tags);
 
        return true;
 }
index 7f1fb36..384757a 100644
@@ -732,7 +732,7 @@ EndSysreg
 
 Sysreg SCTLR_EL1       3       0       1       0       0
 Field  63      TIDCP
-Field  62      SPINMASK
+Field  62      SPINTMASK
 Field  61      NMI
 Field  60      EnTP2
 Res0   59:58
index 8ea57e2..946704b 100644
@@ -412,6 +412,9 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
        __update_tlb(vma, address, ptep);
 }
 
+#define __HAVE_ARCH_UPDATE_MMU_TLB
+#define update_mmu_tlb update_mmu_cache
+
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
                        unsigned long address, pmd_t *pmdp)
 {
index 660492f..1256e35 100644
@@ -293,7 +293,7 @@ unsigned long stack_top(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() & ~PAGE_MASK;
+               sp -= prandom_u32_max(PAGE_SIZE);
 
        return sp & STACK_ALIGN;
 }
index f32c38a..8c98260 100644
@@ -78,7 +78,7 @@ static unsigned long vdso_base(void)
        unsigned long base = STACK_TOP;
 
        if (current->flags & PF_RANDOMIZE) {
-               base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
+               base += prandom_u32_max(VDSO_RANDOMIZE_SIZE);
                base = PAGE_ALIGN(base);
        }
 
index 35b912b..bbe9ce4 100644
@@ -711,7 +711,7 @@ unsigned long mips_stack_top(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() & ~PAGE_MASK;
+               sp -= prandom_u32_max(PAGE_SIZE);
 
        return sp & ALMASK;
 }
index b2cc2c2..5fd9bf1 100644
@@ -79,7 +79,7 @@ static unsigned long vdso_base(void)
        }
 
        if (current->flags & PF_RANDOMIZE) {
-               base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
+               base += prandom_u32_max(VDSO_RANDOMIZE_SIZE);
                base = PAGE_ALIGN(base);
        }
 
index a82b2ca..b3edbb3 100644
@@ -74,10 +74,10 @@ void *arch_dma_set_uncached(void *cpu_addr, size_t size)
         * We need to iterate through the pages, clearing the dcache for
         * them and setting the cache-inhibit bit.
         */
-       mmap_read_lock(&init_mm);
-       error = walk_page_range(&init_mm, va, va + size, &set_nocache_walk_ops,
-                       NULL);
-       mmap_read_unlock(&init_mm);
+       mmap_write_lock(&init_mm);
+       error = walk_page_range_novma(&init_mm, va, va + size,
+                       &set_nocache_walk_ops, NULL, NULL);
+       mmap_write_unlock(&init_mm);
 
        if (error)
                return ERR_PTR(error);
@@ -88,11 +88,11 @@ void arch_dma_clear_uncached(void *cpu_addr, size_t size)
 {
        unsigned long va = (unsigned long)cpu_addr;
 
-       mmap_read_lock(&init_mm);
+       mmap_write_lock(&init_mm);
        /* walk_page_range shouldn't be able to fail here */
-       WARN_ON(walk_page_range(&init_mm, va, va + size,
-                       &clear_nocache_walk_ops, NULL));
-       mmap_read_unlock(&init_mm);
+       WARN_ON(walk_page_range_novma(&init_mm, va, va + size,
+                       &clear_nocache_walk_ops, NULL, NULL));
+       mmap_write_unlock(&init_mm);
 }
 
 void arch_sync_dma_for_device(phys_addr_t addr, size_t size,
index 0ec54f4..1ed45fd 100644
 
 struct alt_instr {
        s32 orig_offset;        /* offset to original instructions */
-       s32 len;                /* end of original instructions */
-       u32 cond;               /* see ALT_COND_XXX */
+       s16 len;                /* end of original instructions */
+       u16 cond;               /* see ALT_COND_XXX */
        u32 replacement;        /* replacement instruction or code */
-};
+} __packed;
 
 void set_kernel_text_rw(int enable_read_write);
 void apply_alternatives_all(void);
@@ -35,8 +35,9 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
 /* Alternative SMP implementation. */
 #define ALTERNATIVE(cond, replacement)         "!0:"   \
        ".section .altinstructions, \"aw\"      !"      \
-       ".word (0b-4-.), 1, " __stringify(cond) ","     \
-               __stringify(replacement) "      !"      \
+       ".word (0b-4-.)                         !"      \
+       ".hword 1, " __stringify(cond) "        !"      \
+       ".word " __stringify(replacement) "     !"      \
        ".previous"
 
 #else
@@ -44,15 +45,17 @@ void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
 /* to replace one single instructions by a new instruction */
 #define ALTERNATIVE(from, to, cond, replacement)\
        .section .altinstructions, "aw" !       \
-       .word (from - .), (to - from)/4 !       \
-       .word cond, replacement         !       \
+       .word (from - .)                !       \
+       .hword (to - from)/4, cond      !       \
+       .word replacement               !       \
        .previous
 
 /* to replace multiple instructions by new code */
 #define ALTERNATIVE_CODE(from, num_instructions, cond, new_instr_ptr)\
        .section .altinstructions, "aw" !       \
-       .word (from - .), -num_instructions !   \
-       .word cond, (new_instr_ptr - .) !       \
+       .word (from - .)                !       \
+       .hword -num_instructions, cond  !       \
+       .word (new_instr_ptr - .)       !       \
        .previous
 
 #endif  /*  __ASSEMBLY__  */
index b643092..fcbcf9a 100644
@@ -19,9 +19,6 @@ extern unsigned long parisc_pat_pdc_cap; /* PDC capabilities (PAT) */
 #define PDC_TYPE_SYSTEM_MAP     1 /* 32-bit, but supports PDC_SYSTEM_MAP */
 #define PDC_TYPE_SNAKE          2 /* Doesn't support SYSTEM_MAP */
 
-void pdc_console_init(void);   /* in pdc_console.c */
-void pdc_console_restart(void);
-
 void setup_pdc(void);          /* in inventory.c */
 
 /* wrapper-functions from pdc.c */
index df7b931..ecd0288 100644
@@ -192,6 +192,11 @@ extern void __update_cache(pte_t pte);
 #define _PAGE_PRESENT_BIT  22   /* (0x200) Software: translation valid */
 #define _PAGE_HPAGE_BIT    21   /* (0x400) Software: Huge Page */
 #define _PAGE_USER_BIT     20   /* (0x800) Software: User accessible page */
+#ifdef CONFIG_HUGETLB_PAGE
+#define _PAGE_SPECIAL_BIT  _PAGE_DMB_BIT  /* DMB feature is currently unused */
+#else
+#define _PAGE_SPECIAL_BIT  _PAGE_HPAGE_BIT /* use unused HUGE PAGE bit */
+#endif
 
 /* N.B. The bits are defined in terms of a 32 bit word above, so the */
 /*      following macro is ok for both 32 and 64 bit.                */
@@ -219,7 +224,7 @@ extern void __update_cache(pte_t pte);
 #define _PAGE_PRESENT  (1 << xlate_pabit(_PAGE_PRESENT_BIT))
 #define _PAGE_HUGE     (1 << xlate_pabit(_PAGE_HPAGE_BIT))
 #define _PAGE_USER     (1 << xlate_pabit(_PAGE_USER_BIT))
-#define _PAGE_SPECIAL  (_PAGE_DMB)
+#define _PAGE_SPECIAL  (1 << xlate_pabit(_PAGE_SPECIAL_BIT))
 
 #define _PAGE_TABLE    (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED)
 #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)
index daa1e90..66f5672 100644
@@ -26,7 +26,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
        struct alt_instr *entry;
        int index = 0, applied = 0;
        int num_cpus = num_online_cpus();
-       u32 cond_check;
+       u16 cond_check;
 
        cond_check = ALT_COND_ALWAYS |
                ((num_cpus == 1) ? ALT_COND_NO_SMP : 0) |
@@ -45,8 +45,9 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
 
        for (entry = start; entry < end; entry++, index++) {
 
-               u32 *from, cond, replacement;
-               s32 len;
+               u32 *from, replacement;
+               u16 cond;
+               s16 len;
 
                from = (u32 *)((ulong)&entry->orig_offset + entry->orig_offset);
                len = entry->len;
index df8102f..0e5ebfe 100644
         * Finally, _PAGE_READ goes in the top bit of PL1 (so we
         * trigger an access rights trap in user space if the user
         * tries to read an unreadable page */
+#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
+       /* need to drop DMB bit, as it's used as SPECIAL flag */
+       depi            0,_PAGE_SPECIAL_BIT,1,\pte
+#endif
        depd            \pte,8,7,\prot
 
        /* PAGE_USER indicates the page can be read with user privileges,
         * makes the tlb entry for the differently formatted pa11
         * insertion instructions */
        .macro          make_insert_tlb_11      spc,pte,prot
+#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
+       /* need to drop DMB bit, as it's used as SPECIAL flag */
+       depi            0,_PAGE_SPECIAL_BIT,1,\pte
+#endif
        zdep            \spc,30,15,\prot
        dep             \pte,8,7,\prot
        extru,=         \pte,_PAGE_NO_CACHE_BIT,1,%r0
index 2661cdd..7d0989f 100644
@@ -1,46 +1,18 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /* 
- *    PDC Console support - ie use firmware to dump text via boot console
+ *    PDC early console support - use PDC firmware to dump text via boot console
  *
- *    Copyright (C) 1999-2003 Matthew Wilcox <willy at parisc-linux.org>
- *    Copyright (C) 2000 Martin K Petersen <mkp at mkp.net>
- *    Copyright (C) 2000 John Marvin <jsm at parisc-linux.org>
- *    Copyright (C) 2000-2003 Paul Bame <bame at parisc-linux.org>
- *    Copyright (C) 2000 Philipp Rumpf <prumpf with tux.org>
- *    Copyright (C) 2000 Michael Ang <mang with subcarrier.org>
- *    Copyright (C) 2000 Grant Grundler <grundler with parisc-linux.org>
- *    Copyright (C) 2001-2002 Ryan Bradetich <rbrad at parisc-linux.org>
- *    Copyright (C) 2001 Helge Deller <deller at parisc-linux.org>
- *    Copyright (C) 2001 Thomas Bogendoerfer <tsbogend at parisc-linux.org>
- *    Copyright (C) 2002 Randolph Chung <tausq with parisc-linux.org>
- *    Copyright (C) 2010 Guy Martin <gmsoft at tuxicoman.be>
+ *    Copyright (C) 2001-2022 Helge Deller <deller@gmx.de>
  */
 
-/*
- *  The PDC console is a simple console, which can be used for debugging 
- *  boot related problems on HP PA-RISC machines. It is also useful when no
- *  other console works.
- *
- *  This code uses the ROM (=PDC) based functions to read and write characters
- *  from and to PDC's boot path.
- */
-
-/* Define EARLY_BOOTUP_DEBUG to debug kernel related boot problems. 
- * On production kernels EARLY_BOOTUP_DEBUG should be undefined. */
-#define EARLY_BOOTUP_DEBUG
-
-
-#include <linux/kernel.h>
 #include <linux/console.h>
-#include <linux/string.h>
 #include <linux/init.h>
-#include <linux/major.h>
-#include <linux/tty.h>
+#include <linux/serial_core.h>
+#include <linux/kgdb.h>
 #include <asm/page.h>          /* for PAGE0 */
 #include <asm/pdc.h>           /* for iodc_call() proto and friends */
 
 static DEFINE_SPINLOCK(pdc_console_lock);
-static struct console pdc_cons;
 
 static void pdc_console_write(struct console *co, const char *s, unsigned count)
 {
@@ -54,7 +26,8 @@ static void pdc_console_write(struct console *co, const char *s, unsigned count)
        spin_unlock_irqrestore(&pdc_console_lock, flags);
 }
 
-int pdc_console_poll_key(struct console *co)
+#ifdef CONFIG_KGDB
+static int kgdb_pdc_read_char(void)
 {
        int c;
        unsigned long flags;
@@ -63,201 +36,40 @@ int pdc_console_poll_key(struct console *co)
        c = pdc_iodc_getc();
        spin_unlock_irqrestore(&pdc_console_lock, flags);
 
-       return c;
-}
-
-static int pdc_console_setup(struct console *co, char *options)
-{
-       return 0;
-}
-
-#if defined(CONFIG_PDC_CONSOLE)
-#include <linux/vt_kern.h>
-#include <linux/tty_flip.h>
-
-#define PDC_CONS_POLL_DELAY (30 * HZ / 1000)
-
-static void pdc_console_poll(struct timer_list *unused);
-static DEFINE_TIMER(pdc_console_timer, pdc_console_poll);
-static struct tty_port tty_port;
-
-static int pdc_console_tty_open(struct tty_struct *tty, struct file *filp)
-{
-       tty_port_tty_set(&tty_port, tty);
-       mod_timer(&pdc_console_timer, jiffies + PDC_CONS_POLL_DELAY);
-
-       return 0;
+       return (c <= 0) ? NO_POLL_CHAR : c;
 }
 
-static void pdc_console_tty_close(struct tty_struct *tty, struct file *filp)
+static void kgdb_pdc_write_char(u8 chr)
 {
-       if (tty->count == 1) {
-               del_timer_sync(&pdc_console_timer);
-               tty_port_tty_set(&tty_port, NULL);
-       }
+       if (PAGE0->mem_cons.cl_class != CL_DUPLEX)
+               pdc_console_write(NULL, &chr, 1);
 }
 
-static int pdc_console_tty_write(struct tty_struct *tty, const unsigned char *buf, int count)
-{
-       pdc_console_write(NULL, buf, count);
-       return count;
-}
-
-static unsigned int pdc_console_tty_write_room(struct tty_struct *tty)
-{
-       return 32768; /* no limit, no buffer used */
-}
-
-static const struct tty_operations pdc_console_tty_ops = {
-       .open = pdc_console_tty_open,
-       .close = pdc_console_tty_close,
-       .write = pdc_console_tty_write,
-       .write_room = pdc_console_tty_write_room,
+static struct kgdb_io kgdb_pdc_io_ops = {
+       .name = "kgdb_pdc",
+       .read_char = kgdb_pdc_read_char,
+       .write_char = kgdb_pdc_write_char,
 };
-
-static void pdc_console_poll(struct timer_list *unused)
-{
-       int data, count = 0;
-
-       while (1) {
-               data = pdc_console_poll_key(NULL);
-               if (data == -1)
-                       break;
-               tty_insert_flip_char(&tty_port, data & 0xFF, TTY_NORMAL);
-               count ++;
-       }
-
-       if (count)
-               tty_flip_buffer_push(&tty_port);
-
-       if (pdc_cons.flags & CON_ENABLED)
-               mod_timer(&pdc_console_timer, jiffies + PDC_CONS_POLL_DELAY);
-}
-
-static struct tty_driver *pdc_console_tty_driver;
-
-static int __init pdc_console_tty_driver_init(void)
-{
-       struct tty_driver *driver;
-       int err;
-
-       /* Check if the console driver is still registered.
-        * It is unregistered if the pdc console was not selected as the
-        * primary console. */
-
-       struct console *tmp;
-
-       console_lock();
-       for_each_console(tmp)
-               if (tmp == &pdc_cons)
-                       break;
-       console_unlock();
-
-       if (!tmp) {
-               printk(KERN_INFO "PDC console driver not registered anymore, not creating %s\n", pdc_cons.name);
-               return -ENODEV;
-       }
-
-       printk(KERN_INFO "The PDC console driver is still registered, removing CON_BOOT flag\n");
-       pdc_cons.flags &= ~CON_BOOT;
-
-       driver = tty_alloc_driver(1, TTY_DRIVER_REAL_RAW |
-                       TTY_DRIVER_RESET_TERMIOS);
-       if (IS_ERR(driver))
-               return PTR_ERR(driver);
-
-       tty_port_init(&tty_port);
-
-       driver->driver_name = "pdc_cons";
-       driver->name = "ttyB";
-       driver->major = MUX_MAJOR;
-       driver->minor_start = 0;
-       driver->type = TTY_DRIVER_TYPE_SYSTEM;
-       driver->init_termios = tty_std_termios;
-       tty_set_operations(driver, &pdc_console_tty_ops);
-       tty_port_link_device(&tty_port, driver, 0);
-
-       err = tty_register_driver(driver);
-       if (err) {
-               printk(KERN_ERR "Unable to register the PDC console TTY driver\n");
-               tty_port_destroy(&tty_port);
-               tty_driver_kref_put(driver);
-               return err;
-       }
-
-       pdc_console_tty_driver = driver;
-
-       return 0;
-}
-device_initcall(pdc_console_tty_driver_init);
-
-static struct tty_driver * pdc_console_device (struct console *c, int *index)
-{
-       *index = c->index;
-       return pdc_console_tty_driver;
-}
-#else
-#define pdc_console_device NULL
 #endif
 
-static struct console pdc_cons = {
-       .name =         "ttyB",
-       .write =        pdc_console_write,
-       .device =       pdc_console_device,
-       .setup =        pdc_console_setup,
-       .flags =        CON_BOOT | CON_PRINTBUFFER,
-       .index =        -1,
-};
-
-static int pdc_console_initialized;
-
-static void pdc_console_init_force(void)
+static int __init pdc_earlycon_setup(struct earlycon_device *device,
+                                    const char *opt)
 {
-       if (pdc_console_initialized)
-               return;
-       ++pdc_console_initialized;
-       
+       struct console *earlycon_console;
+
        /* If the console is duplex then copy the COUT parameters to CIN. */
        if (PAGE0->mem_cons.cl_class == CL_DUPLEX)
                memcpy(&PAGE0->mem_kbd, &PAGE0->mem_cons, sizeof(PAGE0->mem_cons));
 
-       /* register the pdc console */
-       register_console(&pdc_cons);
-}
+       earlycon_console = device->con;
+       earlycon_console->write = pdc_console_write;
+       device->port.iotype = UPIO_MEM32BE;
 
-void __init pdc_console_init(void)
-{
-#if defined(EARLY_BOOTUP_DEBUG) || defined(CONFIG_PDC_CONSOLE)
-       pdc_console_init_force();
+#ifdef CONFIG_KGDB
+       kgdb_register_io_module(&kgdb_pdc_io_ops);
 #endif
-#ifdef EARLY_BOOTUP_DEBUG
-       printk(KERN_INFO "Initialized PDC Console for debugging.\n");
-#endif
-}
-
-
-/*
- * Used for emergencies. Currently only used if an HPMC occurs. If an
- * HPMC occurs, it is possible that the current console may not be
- * properly initialised after the PDC IO reset. This routine unregisters
- * all of the current consoles, reinitializes the pdc console and
- * registers it.
- */
-
-void pdc_console_restart(void)
-{
-       struct console *console;
-
-       if (pdc_console_initialized)
-               return;
 
-       /* If we've already seen the output, don't bother to print it again */
-       if (console_drivers != NULL)
-               pdc_cons.flags &= ~CON_PRINTBUFFER;
-
-       while ((console = console_drivers) != NULL)
-               unregister_console(console_drivers);
-
-       /* force registering the pdc console */
-       pdc_console_init_force();
+       return 0;
 }
+
+EARLYCON_DECLARE(pdc, pdc_earlycon_setup);
index 3db0e97..c4f8374 100644
@@ -284,7 +284,7 @@ __get_wchan(struct task_struct *p)
 
 static inline unsigned long brk_rnd(void)
 {
-       return (get_random_int() & BRK_RND_MASK) << PAGE_SHIFT;
+       return (get_random_u32() & BRK_RND_MASK) << PAGE_SHIFT;
 }
 
 unsigned long arch_randomize_brk(struct mm_struct *mm)
index f005dde..375f38d 100644
@@ -70,6 +70,10 @@ void __init setup_cmdline(char **cmdline_p)
                        strlcat(p, "tty0", COMMAND_LINE_SIZE);
        }
 
+       /* default to use early console */
+       if (!strstr(p, "earlycon"))
+               strlcat(p, " earlycon=pdc", COMMAND_LINE_SIZE);
+
 #ifdef CONFIG_BLK_DEV_INITRD
                if (boot_args[2] != 0) /* did palo pass us a ramdisk? */
                {
@@ -139,8 +143,6 @@ void __init setup_arch(char **cmdline_p)
        if (__pa((unsigned long) &_end) >= KERNEL_INITIAL_SIZE)
                panic("KERNEL_INITIAL_ORDER too small!");
 
-       pdc_console_init();
-
 #ifdef CONFIG_64BIT
        if(parisc_narrow_firmware) {
                printk(KERN_INFO "Kernel is using PDC in 32-bit mode.\n");
index 2b34294..848b070 100644
@@ -239,14 +239,14 @@ static unsigned long mmap_rnd(void)
        unsigned long rnd = 0;
 
        if (current->flags & PF_RANDOMIZE)
-               rnd = get_random_int() & MMAP_RND_MASK;
+               rnd = get_random_u32() & MMAP_RND_MASK;
 
        return rnd << PAGE_SHIFT;
 }
 
 unsigned long arch_mmap_rnd(void)
 {
-       return (get_random_int() & MMAP_RND_MASK) << PAGE_SHIFT;
+       return (get_random_u32() & MMAP_RND_MASK) << PAGE_SHIFT;
 }
 
 static unsigned long mmap_legacy_base(void)
index b78f1b9..f9696fb 100644
@@ -239,13 +239,6 @@ void die_if_kernel(char *str, struct pt_regs *regs, long err)
        /* unlock the pdc lock if necessary */
        pdc_emergency_unlock();
 
-       /* maybe the kernel hasn't booted very far yet and hasn't been able 
-        * to initialize the serial or STI console. In that case we should 
-        * re-enable the pdc console, so that the user will be able to 
-        * identify the problem. */
-       if (!console_drivers)
-               pdc_console_restart();
-       
        if (err)
                printk(KERN_CRIT "%s (pid %d): %s (code %ld)\n",
                        current->comm, task_pid_nr(current), str, err);
@@ -429,10 +422,6 @@ void parisc_terminate(char *msg, struct pt_regs *regs, int code, unsigned long o
        /* unlock the pdc lock if necessary */
        pdc_emergency_unlock();
 
-       /* restart pdc console if necessary */
-       if (!console_drivers)
-               pdc_console_restart();
-
        /* Not all paths will gutter the processor... */
        switch(code){
 
@@ -482,9 +471,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
        unsigned long fault_space = 0;
        int si_code;
 
-       if (code == 1)
-           pdc_console_restart();  /* switch back to pdc if HPMC */
-       else if (!irqs_disabled_flags(regs->gr[0]))
+       if (!irqs_disabled_flags(regs->gr[0]))
            local_irq_enable();
 
        /* Security check:
index 63dc44c..47e5960 100644
@@ -75,7 +75,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 
        map_base = mm->mmap_base;
        if (current->flags & PF_RANDOMIZE)
-               map_base -= (get_random_int() & 0x1f) * PAGE_SIZE;
+               map_base -= prandom_u32_max(0x20) * PAGE_SIZE;
 
        vdso_text_start = get_unmapped_area(NULL, map_base, vdso_text_len, 0, 0);
 
index c1c1ef9..273c527 100644
@@ -82,7 +82,7 @@ static int __init crc_test_init(void)
 
                        if (len <= offset)
                                continue;
-                       prandom_bytes(data, len);
+                       get_random_bytes(data, len);
                        len -= offset;
 
                        crypto_shash_update(crct10dif_shash, data+offset, len);
index 9840d57..a114249 100644
@@ -89,6 +89,22 @@ long compat_sys_rt_sigreturn(void);
  * responsible for combining parameter pairs.
  */
 
+#ifdef CONFIG_PPC32
+long sys_ppc_pread64(unsigned int fd,
+                    char __user *ubuf, compat_size_t count,
+                    u32 reg6, u32 pos1, u32 pos2);
+long sys_ppc_pwrite64(unsigned int fd,
+                     const char __user *ubuf, compat_size_t count,
+                     u32 reg6, u32 pos1, u32 pos2);
+long sys_ppc_readahead(int fd, u32 r4,
+                      u32 offset1, u32 offset2, u32 count);
+long sys_ppc_truncate64(const char __user *path, u32 reg4,
+                       unsigned long len1, unsigned long len2);
+long sys_ppc_ftruncate64(unsigned int fd, u32 reg4,
+                        unsigned long len1, unsigned long len2);
+long sys_ppc32_fadvise64(int fd, u32 unused, u32 offset1, u32 offset2,
+                        size_t len, int advice);
+#endif
 #ifdef CONFIG_COMPAT
 long compat_sys_mmap2(unsigned long addr, size_t len,
                      unsigned long prot, unsigned long flags,
index ee2d76c..9b61460 100644 (file)
@@ -73,6 +73,7 @@ obj-y                         := cputable.o syscalls.o \
 obj-y                          += ptrace/
 obj-$(CONFIG_PPC64)            += setup_64.o irq_64.o\
                                   paca.o nvram_64.o note.o
+obj-$(CONFIG_PPC32)            += sys_ppc32.o
 obj-$(CONFIG_COMPAT)           += sys_ppc32.o signal_32.o
 obj-$(CONFIG_VDSO32)           += vdso32_wrapper.o
 obj-$(CONFIG_PPC_WATCHDOG)     += watchdog.o
index 904a560..978a173 100644 (file)
@@ -538,7 +538,7 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel)
        beq     .Lfast_kernel_interrupt_return_\srr\() // EE already disabled
        lbz     r11,PACAIRQHAPPENED(r13)
        andi.   r10,r11,PACA_IRQ_MUST_HARD_MASK
-       beq     1f // No HARD_MASK pending
+       beq     .Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending
 
        /* Must clear MSR_EE from _MSR */
 #ifdef CONFIG_PPC_BOOK3S
@@ -555,12 +555,23 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel)
        b       .Lfast_kernel_interrupt_return_\srr\()
 
 .Linterrupt_return_\srr\()_soft_enabled:
+       /*
+        * In the soft-enabled case, we need to double-check that we have no
+        * pending interrupts that might have come in before we reached the
+        * restart section of code, and restart the exit so those can be
+        * handled.
+        *
+        * If there are none, it is possible that the interrupt still
+        * has PACA_IRQ_HARD_DIS set, which needs to be cleared for the
+        * interrupted context. This clear will not clobber a new pending
+        * interrupt coming in, because we're in the restart section, so
+        * such would return to the restart location.
+        */
 #ifdef CONFIG_PPC_BOOK3S
        lbz     r11,PACAIRQHAPPENED(r13)
        andi.   r11,r11,(~PACA_IRQ_HARD_DIS)@l
        bne-    interrupt_return_\srr\()_kernel_restart
 #endif
-1:
        li      r11,0
        stb     r11,PACAIRQHAPPENED(r13) // clear the possible HARD_DIS
 
index 40834ef..67da147 100644 (file)
@@ -2303,6 +2303,6 @@ void notrace __ppc64_runlatch_off(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() & ~PAGE_MASK;
+               sp -= prandom_u32_max(PAGE_SIZE);
        return sp & ~0xf;
 }
index dcc3c9f..1ab4a4d 100644 (file)
@@ -1,13 +1,23 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /*
- * sys_ppc32.c: Conversion between 32bit and 64bit native syscalls.
+ * sys_ppc32.c: 32-bit system calls with complex calling conventions.
  *
  * Copyright (C) 2001 IBM
  * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
  * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
  *
- * These routines maintain argument size conversion between 32bit and 64bit
- * environment.
+ * 32-bit system calls with 64-bit arguments pass those in register pairs.
+ * This must be specially dealt with on 64-bit kernels. The compat_arg_u64_dual
+ * in generic compat syscalls is not always usable because the register
+ * pairing is constrained depending on preceding arguments.
+ *
+ * An analogous problem exists on 32-bit kernels with ARCH_HAS_SYSCALL_WRAPPER,
+ * the defined system call functions take the pt_regs as an argument, and there
+ * is a mapping macro which maps registers to arguments
+ * (SC_POWERPC_REGS_TO_ARGS) which also does not deal with these 64-bit
+ * arguments.
+ *
+ * This file contains these system calls.
  */
 
 #include <linux/kernel.h>
 #include <asm/syscalls.h>
 #include <asm/switch_to.h>
 
-COMPAT_SYSCALL_DEFINE6(ppc_pread64,
+#ifdef CONFIG_PPC32
+#define PPC32_SYSCALL_DEFINE4  SYSCALL_DEFINE4
+#define PPC32_SYSCALL_DEFINE5  SYSCALL_DEFINE5
+#define PPC32_SYSCALL_DEFINE6  SYSCALL_DEFINE6
+#else
+#define PPC32_SYSCALL_DEFINE4  COMPAT_SYSCALL_DEFINE4
+#define PPC32_SYSCALL_DEFINE5  COMPAT_SYSCALL_DEFINE5
+#define PPC32_SYSCALL_DEFINE6  COMPAT_SYSCALL_DEFINE6
+#endif
+
+PPC32_SYSCALL_DEFINE6(ppc_pread64,
                       unsigned int, fd,
                       char __user *, ubuf, compat_size_t, count,
                       u32, reg6, u32, pos1, u32, pos2)
@@ -55,7 +75,7 @@ COMPAT_SYSCALL_DEFINE6(ppc_pread64,
        return ksys_pread64(fd, ubuf, count, merge_64(pos1, pos2));
 }
 
-COMPAT_SYSCALL_DEFINE6(ppc_pwrite64,
+PPC32_SYSCALL_DEFINE6(ppc_pwrite64,
                       unsigned int, fd,
                       const char __user *, ubuf, compat_size_t, count,
                       u32, reg6, u32, pos1, u32, pos2)
@@ -63,28 +83,28 @@ COMPAT_SYSCALL_DEFINE6(ppc_pwrite64,
        return ksys_pwrite64(fd, ubuf, count, merge_64(pos1, pos2));
 }
 
-COMPAT_SYSCALL_DEFINE5(ppc_readahead,
+PPC32_SYSCALL_DEFINE5(ppc_readahead,
                       int, fd, u32, r4,
                       u32, offset1, u32, offset2, u32, count)
 {
        return ksys_readahead(fd, merge_64(offset1, offset2), count);
 }
 
-COMPAT_SYSCALL_DEFINE4(ppc_truncate64,
+PPC32_SYSCALL_DEFINE4(ppc_truncate64,
                       const char __user *, path, u32, reg4,
                       unsigned long, len1, unsigned long, len2)
 {
        return ksys_truncate(path, merge_64(len1, len2));
 }
 
-COMPAT_SYSCALL_DEFINE4(ppc_ftruncate64,
+PPC32_SYSCALL_DEFINE4(ppc_ftruncate64,
                       unsigned int, fd, u32, reg4,
                       unsigned long, len1, unsigned long, len2)
 {
        return ksys_ftruncate(fd, merge_64(len1, len2));
 }
 
-COMPAT_SYSCALL_DEFINE6(ppc32_fadvise64,
+PPC32_SYSCALL_DEFINE6(ppc32_fadvise64,
                       int, fd, u32, unused, u32, offset1, u32, offset2,
                       size_t, len, int, advice)
 {
index 2bca64f..e9e0df4 100644 (file)
 176    64      rt_sigtimedwait                 sys_rt_sigtimedwait
 177    nospu   rt_sigqueueinfo                 sys_rt_sigqueueinfo             compat_sys_rt_sigqueueinfo
 178    nospu   rt_sigsuspend                   sys_rt_sigsuspend               compat_sys_rt_sigsuspend
-179    common  pread64                         sys_pread64                     compat_sys_ppc_pread64
-180    common  pwrite64                        sys_pwrite64                    compat_sys_ppc_pwrite64
+179    32      pread64                         sys_ppc_pread64                 compat_sys_ppc_pread64
+179    64      pread64                         sys_pread64
+180    32      pwrite64                        sys_ppc_pwrite64                compat_sys_ppc_pwrite64
+180    64      pwrite64                        sys_pwrite64
 181    common  chown                           sys_chown
 182    common  getcwd                          sys_getcwd
 183    common  capget                          sys_capget
 188    common  putpmsg                         sys_ni_syscall
 189    nospu   vfork                           sys_vfork
 190    common  ugetrlimit                      sys_getrlimit                   compat_sys_getrlimit
-191    common  readahead                       sys_readahead                   compat_sys_ppc_readahead
+191    32      readahead                       sys_ppc_readahead               compat_sys_ppc_readahead
+191    64      readahead                       sys_readahead
 192    32      mmap2                           sys_mmap2                       compat_sys_mmap2
-193    32      truncate64                      sys_truncate64                  compat_sys_ppc_truncate64
-194    32      ftruncate64                     sys_ftruncate64                 compat_sys_ppc_ftruncate64
+193    32      truncate64                      sys_ppc_truncate64              compat_sys_ppc_truncate64
+194    32      ftruncate64                     sys_ppc_ftruncate64             compat_sys_ppc_ftruncate64
 195    32      stat64                          sys_stat64
 196    32      lstat64                         sys_lstat64
 197    32      fstat64                         sys_fstat64
 230    common  io_submit                       sys_io_submit                   compat_sys_io_submit
 231    common  io_cancel                       sys_io_cancel
 232    nospu   set_tid_address                 sys_set_tid_address
-233    common  fadvise64                       sys_fadvise64                   compat_sys_ppc32_fadvise64
+233    32      fadvise64                       sys_ppc32_fadvise64             compat_sys_ppc32_fadvise64
+233    64      fadvise64                       sys_fadvise64
 234    nospu   exit_group                      sys_exit_group
 235    nospu   lookup_dcookie                  sys_lookup_dcookie              compat_sys_lookup_dcookie
 236    common  epoll_create                    sys_epoll_create
index 5980063..e2f11f9 100644 (file)
@@ -508,10 +508,10 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
                unsigned long start,
                unsigned long end, unsigned long page_shift,
-               struct kvm *kvm, unsigned long gpa)
+               struct kvm *kvm, unsigned long gpa, struct page *fault_page)
 {
        unsigned long src_pfn, dst_pfn = 0;
-       struct migrate_vma mig;
+       struct migrate_vma mig = { 0 };
        struct page *dpage, *spage;
        struct kvmppc_uvmem_page_pvt *pvt;
        unsigned long pfn;
@@ -525,6 +525,7 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
        mig.dst = &dst_pfn;
        mig.pgmap_owner = &kvmppc_uvmem_pgmap;
        mig.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
+       mig.fault_page = fault_page;
 
        /* The requested page is already paged-out, nothing to do */
        if (!kvmppc_gfn_is_uvmem_pfn(gpa >> page_shift, kvm, NULL))
@@ -580,12 +581,14 @@ out_finalize:
 static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end,
                                      unsigned long page_shift,
-                                     struct kvm *kvm, unsigned long gpa)
+                                     struct kvm *kvm, unsigned long gpa,
+                                     struct page *fault_page)
 {
        int ret;
 
        mutex_lock(&kvm->arch.uvmem_lock);
-       ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa);
+       ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa,
+                               fault_page);
        mutex_unlock(&kvm->arch.uvmem_lock);
 
        return ret;
@@ -634,7 +637,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
                        pvt->remove_gfn = true;
 
                        if (__kvmppc_svm_page_out(vma, addr, addr + PAGE_SIZE,
-                                                 PAGE_SHIFT, kvm, pvt->gpa))
+                                                 PAGE_SHIFT, kvm, pvt->gpa, NULL))
                                pr_err("Can't page out gpa:0x%lx addr:0x%lx\n",
                                       pvt->gpa, addr);
                } else {
@@ -715,7 +718,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
 
        dpage = pfn_to_page(uvmem_pfn);
        dpage->zone_device_data = pvt;
-       lock_page(dpage);
+       zone_device_page_init(dpage);
        return dpage;
 out_clear:
        spin_lock(&kvmppc_uvmem_bitmap_lock);
@@ -736,7 +739,7 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma,
                bool pagein)
 {
        unsigned long src_pfn, dst_pfn = 0;
-       struct migrate_vma mig;
+       struct migrate_vma mig = { 0 };
        struct page *spage;
        unsigned long pfn;
        struct page *dpage;
@@ -994,7 +997,7 @@ static vm_fault_t kvmppc_uvmem_migrate_to_ram(struct vm_fault *vmf)
 
        if (kvmppc_svm_page_out(vmf->vma, vmf->address,
                                vmf->address + PAGE_SIZE, PAGE_SHIFT,
-                               pvt->kvm, pvt->gpa))
+                               pvt->kvm, pvt->gpa, vmf->page))
                return VM_FAULT_SIGBUS;
        else
                return 0;
@@ -1065,7 +1068,7 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
        if (!vma || vma->vm_start > start || vma->vm_end < end)
                goto out;
 
-       if (!kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa))
+       if (!kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa, NULL))
                ret = H_SUCCESS;
 out:
        mmap_read_unlock(kvm->mm);
index 14e143b..9231020 100644 (file)
@@ -7,7 +7,7 @@ obj-y                   := lpar.o hvCall.o nvram.o reconfig.o \
                           setup.o iommu.o event_sources.o ras.o \
                           firmware.o power.o dlpar.o mobility.o rng.o \
                           pci.o pci_dlpar.o eeh_pseries.o msi.o \
-                          papr_platform_attributes.o
+                          papr_platform_attributes.o dtl.o
 obj-$(CONFIG_SMP)      += smp.o
 obj-$(CONFIG_KEXEC_CORE)       += kexec.o
 obj-$(CONFIG_PSERIES_ENERGY)   += pseries_energy.o
@@ -19,7 +19,6 @@ obj-$(CONFIG_HVC_CONSOLE)     += hvconsole.o
 obj-$(CONFIG_HVCS)             += hvcserver.o
 obj-$(CONFIG_HCALL_STATS)      += hvCall_inst.o
 obj-$(CONFIG_CMM)              += cmm.o
-obj-$(CONFIG_DTL)              += dtl.o
 obj-$(CONFIG_IO_EVENT_IRQ)     += io_event_irq.o
 obj-$(CONFIG_LPARCFG)          += lparcfg.o
 obj-$(CONFIG_IBMVIO)           += vio.o
index 1b1977b..3f1cdcc 100644 (file)
@@ -18,6 +18,7 @@
 #include <asm/plpar_wrappers.h>
 #include <asm/machdep.h>
 
+#ifdef CONFIG_DTL
 struct dtl {
        struct dtl_entry        *buf;
        int                     cpu;
@@ -58,78 +59,6 @@ static DEFINE_PER_CPU(struct dtl_ring, dtl_rings);
 static atomic_t dtl_count;
 
 /*
- * Scan the dispatch trace log and count up the stolen time.
- * Should be called with interrupts disabled.
- */
-static notrace u64 scan_dispatch_log(u64 stop_tb)
-{
-       u64 i = local_paca->dtl_ridx;
-       struct dtl_entry *dtl = local_paca->dtl_curr;
-       struct dtl_entry *dtl_end = local_paca->dispatch_log_end;
-       struct lppaca *vpa = local_paca->lppaca_ptr;
-       u64 tb_delta;
-       u64 stolen = 0;
-       u64 dtb;
-
-       if (!dtl)
-               return 0;
-
-       if (i == be64_to_cpu(vpa->dtl_idx))
-               return 0;
-       while (i < be64_to_cpu(vpa->dtl_idx)) {
-               dtb = be64_to_cpu(dtl->timebase);
-               tb_delta = be32_to_cpu(dtl->enqueue_to_dispatch_time) +
-                       be32_to_cpu(dtl->ready_to_enqueue_time);
-               barrier();
-               if (i + N_DISPATCH_LOG < be64_to_cpu(vpa->dtl_idx)) {
-                       /* buffer has overflowed */
-                       i = be64_to_cpu(vpa->dtl_idx) - N_DISPATCH_LOG;
-                       dtl = local_paca->dispatch_log + (i % N_DISPATCH_LOG);
-                       continue;
-               }
-               if (dtb > stop_tb)
-                       break;
-               if (dtl_consumer)
-                       dtl_consumer(dtl, i);
-               stolen += tb_delta;
-               ++i;
-               ++dtl;
-               if (dtl == dtl_end)
-                       dtl = local_paca->dispatch_log;
-       }
-       local_paca->dtl_ridx = i;
-       local_paca->dtl_curr = dtl;
-       return stolen;
-}
-
-/*
- * Accumulate stolen time by scanning the dispatch trace log.
- * Called on entry from user mode.
- */
-void notrace pseries_accumulate_stolen_time(void)
-{
-       u64 sst, ust;
-       struct cpu_accounting_data *acct = &local_paca->accounting;
-
-       sst = scan_dispatch_log(acct->starttime_user);
-       ust = scan_dispatch_log(acct->starttime);
-       acct->stime -= sst;
-       acct->utime -= ust;
-       acct->steal_time += ust + sst;
-}
-
-u64 pseries_calculate_stolen_time(u64 stop_tb)
-{
-       if (!firmware_has_feature(FW_FEATURE_SPLPAR))
-               return 0;
-
-       if (get_paca()->dtl_ridx != be64_to_cpu(get_lppaca()->dtl_idx))
-               return scan_dispatch_log(stop_tb);
-
-       return 0;
-}
-
-/*
  * The cpu accounting code controls the DTL ring buffer, and we get
  * given entries as they are processed.
  */
@@ -436,3 +365,81 @@ static int dtl_init(void)
        return 0;
 }
 machine_arch_initcall(pseries, dtl_init);
+#endif /* CONFIG_DTL */
+
+#ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+/*
+ * Scan the dispatch trace log and count up the stolen time.
+ * Should be called with interrupts disabled.
+ */
+static notrace u64 scan_dispatch_log(u64 stop_tb)
+{
+       u64 i = local_paca->dtl_ridx;
+       struct dtl_entry *dtl = local_paca->dtl_curr;
+       struct dtl_entry *dtl_end = local_paca->dispatch_log_end;
+       struct lppaca *vpa = local_paca->lppaca_ptr;
+       u64 tb_delta;
+       u64 stolen = 0;
+       u64 dtb;
+
+       if (!dtl)
+               return 0;
+
+       if (i == be64_to_cpu(vpa->dtl_idx))
+               return 0;
+       while (i < be64_to_cpu(vpa->dtl_idx)) {
+               dtb = be64_to_cpu(dtl->timebase);
+               tb_delta = be32_to_cpu(dtl->enqueue_to_dispatch_time) +
+                       be32_to_cpu(dtl->ready_to_enqueue_time);
+               barrier();
+               if (i + N_DISPATCH_LOG < be64_to_cpu(vpa->dtl_idx)) {
+                       /* buffer has overflowed */
+                       i = be64_to_cpu(vpa->dtl_idx) - N_DISPATCH_LOG;
+                       dtl = local_paca->dispatch_log + (i % N_DISPATCH_LOG);
+                       continue;
+               }
+               if (dtb > stop_tb)
+                       break;
+#ifdef CONFIG_DTL
+               if (dtl_consumer)
+                       dtl_consumer(dtl, i);
+#endif
+               stolen += tb_delta;
+               ++i;
+               ++dtl;
+               if (dtl == dtl_end)
+                       dtl = local_paca->dispatch_log;
+       }
+       local_paca->dtl_ridx = i;
+       local_paca->dtl_curr = dtl;
+       return stolen;
+}
+
+/*
+ * Accumulate stolen time by scanning the dispatch trace log.
+ * Called on entry from user mode.
+ */
+void notrace pseries_accumulate_stolen_time(void)
+{
+       u64 sst, ust;
+       struct cpu_accounting_data *acct = &local_paca->accounting;
+
+       sst = scan_dispatch_log(acct->starttime_user);
+       ust = scan_dispatch_log(acct->starttime);
+       acct->stime -= sst;
+       acct->utime -= ust;
+       acct->steal_time += ust + sst;
+}
+
+u64 pseries_calculate_stolen_time(u64 stop_tb)
+{
+       if (!firmware_has_feature(FW_FEATURE_SPLPAR))
+               return 0;
+
+       if (get_paca()->dtl_ridx != be64_to_cpu(get_lppaca()->dtl_idx))
+               return scan_dispatch_log(stop_tb);
+
+       return 0;
+}
+
+#endif
index 56976e5..6b48a3a 100644 (file)
@@ -70,6 +70,7 @@ config RISCV
        select GENERIC_SMP_IDLE_THREAD
        select GENERIC_TIME_VSYSCALL if MMU && 64BIT
        select GENERIC_VDSO_TIME_NS if HAVE_GENERIC_VDSO
+       select HARDIRQS_SW_RESEND
        select HAVE_ARCH_AUDITSYSCALL
        select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
        select HAVE_ARCH_JUMP_LABEL_RELATIVE if !XIP_KERNEL
index e9bd218..1c8ec65 100644 (file)
@@ -37,6 +37,7 @@ else
 endif
 
 ifeq ($(CONFIG_LD_IS_LLD),y)
+ifeq ($(shell test $(CONFIG_LLD_VERSION) -lt 150000; echo $$?),0)
        KBUILD_CFLAGS += -mno-relax
        KBUILD_AFLAGS += -mno-relax
 ifndef CONFIG_AS_IS_LLVM
@@ -44,6 +45,7 @@ ifndef CONFIG_AS_IS_LLVM
        KBUILD_AFLAGS += -Wa,-mno-relax
 endif
 endif
+endif
 
 # ISA string setting
 riscv-march-$(CONFIG_ARCH_RV32I)       := rv32ima
index 39aae7b..7427a20 100644 (file)
@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 dtb-$(CONFIG_SOC_MICROCHIP_POLARFIRE) += mpfs-icicle-kit.dtb
+dtb-$(CONFIG_SOC_MICROCHIP_POLARFIRE) += mpfs-m100pfsevp.dtb
 dtb-$(CONFIG_SOC_MICROCHIP_POLARFIRE) += mpfs-polarberry.dtb
+dtb-$(CONFIG_SOC_MICROCHIP_POLARFIRE) += mpfs-sev-kit.dtb
 obj-$(CONFIG_BUILTIN_DTB) += $(addsuffix .o, $(dtb-y))
index 0d28858..24b1cfb 100644 (file)
@@ -2,20 +2,21 @@
 /* Copyright (c) 2020-2021 Microchip Technology Inc */
 
 / {
-       compatible = "microchip,mpfs-icicle-reference-rtlv2203", "microchip,mpfs";
+       compatible = "microchip,mpfs-icicle-reference-rtlv2210", "microchip,mpfs-icicle-kit",
+                    "microchip,mpfs";
 
-       core_pwm0: pwm@41000000 {
+       core_pwm0: pwm@40000000 {
                compatible = "microchip,corepwm-rtl-v4";
-               reg = <0x0 0x41000000 0x0 0xF0>;
+               reg = <0x0 0x40000000 0x0 0xF0>;
                microchip,sync-update-mask = /bits/ 32 <0>;
                #pwm-cells = <2>;
                clocks = <&fabric_clk3>;
                status = "disabled";
        };
 
-       i2c2: i2c@44000000 {
+       i2c2: i2c@40000200 {
                compatible = "microchip,corei2c-rtl-v7";
-               reg = <0x0 0x44000000 0x0 0x1000>;
+               reg = <0x0 0x40000200 0x0 0x100>;
                #address-cells = <1>;
                #size-cells = <0>;
                clocks = <&fabric_clk3>;
@@ -28,7 +29,7 @@
        fabric_clk3: fabric-clk3 {
                compatible = "fixed-clock";
                #clock-cells = <0>;
-               clock-frequency = <62500000>;
+               clock-frequency = <50000000>;
        };
 
        fabric_clk1: fabric-clk1 {
                #clock-cells = <0>;
                clock-frequency = <125000000>;
        };
+
+       pcie: pcie@3000000000 {
+               compatible = "microchip,pcie-host-1.0";
+               #address-cells = <0x3>;
+               #interrupt-cells = <0x1>;
+               #size-cells = <0x2>;
+               device_type = "pci";
+               reg = <0x30 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
+               reg-names = "cfg", "apb";
+               bus-range = <0x0 0x7f>;
+               interrupt-parent = <&plic>;
+               interrupts = <119>;
+               interrupt-map = <0 0 0 1 &pcie_intc 0>,
+                               <0 0 0 2 &pcie_intc 1>,
+                               <0 0 0 3 &pcie_intc 2>,
+                               <0 0 0 4 &pcie_intc 3>;
+               interrupt-map-mask = <0 0 0 7>;
+               clocks = <&fabric_clk1>, <&fabric_clk3>;
+               clock-names = "fic1", "fic3";
+               ranges = <0x3000000 0x0 0x8000000 0x30 0x8000000 0x0 0x80000000>;
+               dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000 0x1 0x00000000>;
+               msi-parent = <&pcie>;
+               msi-controller;
+               status = "disabled";
+               pcie_intc: interrupt-controller {
+                       #address-cells = <0>;
+                       #interrupt-cells = <1>;
+                       interrupt-controller;
+               };
+       };
 };
index f3f87ed..ec7b7c2 100644 (file)
@@ -11,7 +11,8 @@
 
 / {
        model = "Microchip PolarFire-SoC Icicle Kit";
-       compatible = "microchip,mpfs-icicle-kit", "microchip,mpfs";
+       compatible = "microchip,mpfs-icicle-reference-rtlv2210", "microchip,mpfs-icicle-kit",
+                    "microchip,mpfs";
 
        aliases {
                ethernet0 = &mac1;
 
        ddrc_cache_lo: memory@80000000 {
                device_type = "memory";
-               reg = <0x0 0x80000000 0x0 0x2e000000>;
+               reg = <0x0 0x80000000 0x0 0x40000000>;
                status = "okay";
        };
 
        ddrc_cache_hi: memory@1000000000 {
                device_type = "memory";
-               reg = <0x10 0x0 0x0 0x40000000>;
+               reg = <0x10 0x40000000 0x0 0x40000000>;
                status = "okay";
        };
+
+       reserved-memory {
+               #address-cells = <2>;
+               #size-cells = <2>;
+               ranges;
+
+               hss_payload: region@BFC00000 {
+                       reg = <0x0 0xBFC00000 0x0 0x400000>;
+                       no-map;
+               };
+       };
 };
 
 &core_pwm0 {
diff --git a/arch/riscv/boot/dts/microchip/mpfs-m100pfs-fabric.dtsi b/arch/riscv/boot/dts/microchip/mpfs-m100pfs-fabric.dtsi
new file mode 100644 (file)
index 0000000..7b9ee13
--- /dev/null
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Copyright (c) 2022 Microchip Technology Inc */
+
+/ {
+       fabric_clk3: fabric-clk3 {
+               compatible = "fixed-clock";
+               #clock-cells = <0>;
+               clock-frequency = <62500000>;
+       };
+
+       fabric_clk1: fabric-clk1 {
+               compatible = "fixed-clock";
+               #clock-cells = <0>;
+               clock-frequency = <125000000>;
+       };
+
+       pcie: pcie@2000000000 {
+               compatible = "microchip,pcie-host-1.0";
+               #address-cells = <0x3>;
+               #interrupt-cells = <0x1>;
+               #size-cells = <0x2>;
+               device_type = "pci";
+               reg = <0x20 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
+               reg-names = "cfg", "apb";
+               bus-range = <0x0 0x7f>;
+               interrupt-parent = <&plic>;
+               interrupts = <119>;
+               interrupt-map = <0 0 0 1 &pcie_intc 0>,
+                               <0 0 0 2 &pcie_intc 1>,
+                               <0 0 0 3 &pcie_intc 2>,
+                               <0 0 0 4 &pcie_intc 3>;
+               interrupt-map-mask = <0 0 0 7>;
+               clocks = <&fabric_clk1>, <&fabric_clk1>, <&fabric_clk3>;
+               clock-names = "fic0", "fic1", "fic3";
+               ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
+               msi-parent = <&pcie>;
+               msi-controller;
+               status = "disabled";
+               pcie_intc: interrupt-controller {
+                       #address-cells = <0>;
+                       #interrupt-cells = <1>;
+                       interrupt-controller;
+               };
+       };
+};
diff --git a/arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts b/arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts
new file mode 100644 (file)
index 0000000..184cb36
--- /dev/null
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Original all-in-one devicetree:
+ * Copyright (C) 2021-2022 - Wolfgang Grandegger <wg@aries-embedded.de>
+ * Rewritten to use includes:
+ * Copyright (C) 2022 - Conor Dooley <conor.dooley@microchip.com>
+ */
+/dts-v1/;
+
+#include "mpfs.dtsi"
+#include "mpfs-m100pfs-fabric.dtsi"
+
+/* Clock frequency (in Hz) of the rtcclk */
+#define MTIMER_FREQ    1000000
+
+/ {
+       model = "Aries Embedded M100PFEVPS";
+       compatible = "aries,m100pfsevp", "microchip,mpfs";
+
+       aliases {
+               ethernet0 = &mac0;
+               ethernet1 = &mac1;
+               serial0 = &mmuart0;
+               serial1 = &mmuart1;
+               serial2 = &mmuart2;
+               serial3 = &mmuart3;
+               serial4 = &mmuart4;
+               gpio0 = &gpio0;
+               gpio1 = &gpio2;
+       };
+
+       chosen {
+               stdout-path = "serial1:115200n8";
+       };
+
+       cpus {
+               timebase-frequency = <MTIMER_FREQ>;
+       };
+
+       ddrc_cache_lo: memory@80000000 {
+               device_type = "memory";
+               reg = <0x0 0x80000000 0x0 0x40000000>;
+       };
+       ddrc_cache_hi: memory@1040000000 {
+               device_type = "memory";
+               reg = <0x10 0x40000000 0x0 0x40000000>;
+       };
+};
+
+&can0 {
+       status = "okay";
+};
+
+&i2c0 {
+       status = "okay";
+};
+
+&i2c1 {
+       status = "okay";
+};
+
+&gpio0 {
+       interrupts = <13>, <14>, <15>, <16>,
+                    <17>, <18>, <19>, <20>,
+                    <21>, <22>, <23>, <24>,
+                    <25>, <26>;
+       ngpios = <14>;
+       status = "okay";
+
+       pmic-irq-hog {
+               gpio-hog;
+               gpios = <13 0>;
+               input;
+       };
+
+       /* Set to low for eMMC, high for SD-card */
+       mmc-sel-hog {
+               gpio-hog;
+               gpios = <12 0>;
+               output-high;
+       };
+};
+
+&gpio2 {
+       interrupts = <13>, <14>, <15>, <16>,
+                    <17>, <18>, <19>, <20>,
+                    <21>, <22>, <23>, <24>,
+                    <25>, <26>, <27>, <28>,
+                    <29>, <30>, <31>, <32>,
+                    <33>, <34>, <35>, <36>,
+                    <37>, <38>, <39>, <40>,
+                    <41>, <42>, <43>, <44>;
+       status = "okay";
+};
+
+&mac0 {
+       status = "okay";
+       phy-mode = "gmii";
+       phy-handle = <&phy0>;
+       phy0: ethernet-phy@0 {
+               reg = <0>;
+       };
+};
+
+&mac1 {
+       status = "okay";
+       phy-mode = "gmii";
+       phy-handle = <&phy1>;
+       phy1: ethernet-phy@0 {
+               reg = <0>;
+       };
+};
+
+&mbox {
+       status = "okay";
+};
+
+&mmc {
+       max-frequency = <50000000>;
+       bus-width = <4>;
+       cap-mmc-highspeed;
+       cap-sd-highspeed;
+       no-1-8-v;
+       sd-uhs-sdr12;
+       sd-uhs-sdr25;
+       sd-uhs-sdr50;
+       sd-uhs-sdr104;
+       disable-wp;
+       status = "okay";
+};
+
+&mmuart1 {
+       status = "okay";
+};
+
+&mmuart2 {
+       status = "okay";
+};
+
+&mmuart3 {
+       status = "okay";
+};
+
+&mmuart4 {
+       status = "okay";
+};
+
+&pcie {
+       status = "okay";
+};
+
+&qspi {
+       status = "okay";
+};
+
+&refclk {
+       clock-frequency = <125000000>;
+};
+
+&rtc {
+       status = "okay";
+};
+
+&spi0 {
+       status = "okay";
+};
+
+&spi1 {
+       status = "okay";
+};
+
+&syscontroller {
+       status = "okay";
+};
+
+&usb {
+       status = "okay";
+       dr_mode = "host";
+};
index 49380c4..67303bc 100644 (file)
                #clock-cells = <0>;
                clock-frequency = <125000000>;
        };
+
+       pcie: pcie@2000000000 {
+               compatible = "microchip,pcie-host-1.0";
+               #address-cells = <0x3>;
+               #interrupt-cells = <0x1>;
+               #size-cells = <0x2>;
+               device_type = "pci";
+               reg = <0x20 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
+               reg-names = "cfg", "apb";
+               bus-range = <0x0 0x7f>;
+               interrupt-parent = <&plic>;
+               interrupts = <119>;
+               interrupt-map = <0 0 0 1 &pcie_intc 0>,
+                               <0 0 0 2 &pcie_intc 1>,
+                               <0 0 0 3 &pcie_intc 2>,
+                               <0 0 0 4 &pcie_intc 3>;
+               interrupt-map-mask = <0 0 0 7>;
+               clocks = <&fabric_clk1>, <&fabric_clk1>, <&fabric_clk3>;
+               clock-names = "fic0", "fic1", "fic3";
+               ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
+               msi-parent = <&pcie>;
+               msi-controller;
+               status = "disabled";
+               pcie_intc: interrupt-controller {
+                       #address-cells = <0>;
+                       #interrupt-cells = <1>;
+                       interrupt-controller;
+               };
+       };
 };
diff --git a/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi b/arch/riscv/boot/dts/microchip/mpfs-sev-kit-fabric.dtsi
new file mode 100644 (file)
index 0000000..8545baf
--- /dev/null
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Copyright (c) 2022 Microchip Technology Inc */
+
+/ {
+       fabric_clk3: fabric-clk3 {
+               compatible = "fixed-clock";
+               #clock-cells = <0>;
+               clock-frequency = <0>;
+       };
+
+       fabric_clk1: fabric-clk1 {
+               compatible = "fixed-clock";
+               #clock-cells = <0>;
+               clock-frequency = <125000000>;
+       };
+
+       pcie: pcie@2000000000 {
+               compatible = "microchip,pcie-host-1.0";
+               #address-cells = <0x3>;
+               #interrupt-cells = <0x1>;
+               #size-cells = <0x2>;
+               device_type = "pci";
+               reg = <0x20 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
+               reg-names = "cfg", "apb";
+               bus-range = <0x0 0x7f>;
+               interrupt-parent = <&plic>;
+               interrupts = <119>;
+               interrupt-map = <0 0 0 1 &pcie_intc 0>,
+                               <0 0 0 2 &pcie_intc 1>,
+                               <0 0 0 3 &pcie_intc 2>,
+                               <0 0 0 4 &pcie_intc 3>;
+               interrupt-map-mask = <0 0 0 7>;
+               clocks = <&fabric_clk1>, <&fabric_clk1>, <&fabric_clk3>;
+               clock-names = "fic0", "fic1", "fic3";
+               ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
+               msi-parent = <&pcie>;
+               msi-controller;
+               status = "disabled";
+               pcie_intc: interrupt-controller {
+                       #address-cells = <0>;
+                       #interrupt-cells = <1>;
+                       interrupt-controller;
+               };
+       };
+};
diff --git a/arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts b/arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts
new file mode 100644 (file)
index 0000000..013cb66
--- /dev/null
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Copyright (c) 2022 Microchip Technology Inc */
+
+/dts-v1/;
+
+#include "mpfs.dtsi"
+#include "mpfs-sev-kit-fabric.dtsi"
+
+/* Clock frequency (in Hz) of the rtcclk */
+#define MTIMER_FREQ            1000000
+
+/ {
+       #address-cells = <2>;
+       #size-cells = <2>;
+       model = "Microchip PolarFire-SoC SEV Kit";
+       compatible = "microchip,mpfs-sev-kit", "microchip,mpfs";
+
+       aliases {
+               ethernet0 = &mac1;
+               serial0 = &mmuart0;
+               serial1 = &mmuart1;
+               serial2 = &mmuart2;
+               serial3 = &mmuart3;
+               serial4 = &mmuart4;
+       };
+
+       chosen {
+               stdout-path = "serial1:115200n8";
+       };
+
+       cpus {
+               timebase-frequency = <MTIMER_FREQ>;
+       };
+
+       reserved-memory {
+               #address-cells = <2>;
+               #size-cells = <2>;
+               ranges;
+
+               fabricbuf0ddrc: buffer@80000000 {
+                       compatible = "shared-dma-pool";
+                       reg = <0x0 0x80000000 0x0 0x2000000>;
+               };
+
+               fabricbuf1ddrnc: buffer@c4000000 {
+                       compatible = "shared-dma-pool";
+                       reg = <0x0 0xc4000000 0x0 0x4000000>;
+               };
+
+               fabricbuf2ddrncwcb: buffer@d4000000 {
+                       compatible = "shared-dma-pool";
+                       reg = <0x0 0xd4000000 0x0 0x4000000>;
+               };
+       };
+
+       ddrc_cache: memory@1000000000 {
+               device_type = "memory";
+               reg = <0x10 0x0 0x0 0x76000000>;
+       };
+};
+
+&i2c0 {
+       status = "okay";
+};
+
+&gpio2 {
+       interrupts = <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>,
+                    <53>, <53>, <53>, <53>;
+       status = "okay";
+};
+
+&mac0 {
+       status = "okay";
+       phy-mode = "sgmii";
+       phy-handle = <&phy0>;
+       phy1: ethernet-phy@9 {
+               reg = <9>;
+       };
+       phy0: ethernet-phy@8 {
+               reg = <8>;
+       };
+};
+
+&mac1 {
+       status = "okay";
+       phy-mode = "sgmii";
+       phy-handle = <&phy1>;
+};
+
+&mbox {
+       status = "okay";
+};
+
+&mmc {
+       status = "okay";
+       bus-width = <4>;
+       disable-wp;
+       cap-sd-highspeed;
+       cap-mmc-highspeed;
+       mmc-ddr-1_8v;
+       mmc-hs200-1_8v;
+       sd-uhs-sdr12;
+       sd-uhs-sdr25;
+       sd-uhs-sdr50;
+       sd-uhs-sdr104;
+};
+
+&mmuart1 {
+       status = "okay";
+};
+
+&mmuart2 {
+       status = "okay";
+};
+
+&mmuart3 {
+       status = "okay";
+};
+
+&mmuart4 {
+       status = "okay";
+};
+
+&refclk {
+       clock-frequency = <125000000>;
+};
+
+&rtc {
+       status = "okay";
+};
+
+&syscontroller {
+       status = "okay";
+};
+
+&usb {
+       status = "okay";
+       dr_mode = "otg";
+};
index 6d9d455..8f46339 100644 (file)
                };
 
                qspi: spi@21000000 {
-                       compatible = "microchip,mpfs-qspi";
+                       compatible = "microchip,mpfs-qspi", "microchip,coreqspi-rtl-v2";
                        #address-cells = <1>;
                        #size-cells = <0>;
                        reg = <0x0 0x21000000 0x0 0x1000>;
                        status = "disabled";
                };
 
-               pcie: pcie@2000000000 {
-                       compatible = "microchip,pcie-host-1.0";
-                       #address-cells = <0x3>;
-                       #interrupt-cells = <0x1>;
-                       #size-cells = <0x2>;
-                       device_type = "pci";
-                       reg = <0x20 0x0 0x0 0x8000000>, <0x0 0x43000000 0x0 0x10000>;
-                       reg-names = "cfg", "apb";
-                       bus-range = <0x0 0x7f>;
-                       interrupt-parent = <&plic>;
-                       interrupts = <119>;
-                       interrupt-map = <0 0 0 1 &pcie_intc 0>,
-                                       <0 0 0 2 &pcie_intc 1>,
-                                       <0 0 0 3 &pcie_intc 2>,
-                                       <0 0 0 4 &pcie_intc 3>;
-                       interrupt-map-mask = <0 0 0 7>;
-                       clocks = <&fabric_clk1>, <&fabric_clk1>, <&fabric_clk3>;
-                       clock-names = "fic0", "fic1", "fic3";
-                       ranges = <0x3000000 0x0 0x8000000 0x20 0x8000000 0x0 0x80000000>;
-                       msi-parent = <&pcie>;
-                       msi-controller;
-                       status = "disabled";
-                       pcie_intc: interrupt-controller {
-                               #address-cells = <0>;
-                               #interrupt-cells = <1>;
-                               interrupt-controller;
-                       };
-               };
-
                mbox: mailbox@37020000 {
                        compatible = "microchip,mpfs-mailbox";
                        reg = <0x0 0x37020000 0x0 0x1000>, <0x0 0x2000318C 0x0 0x40>;
index 96648c1..2154693 100644 (file)
@@ -17,6 +17,9 @@
 static bool errata_probe_pbmt(unsigned int stage,
                              unsigned long arch_id, unsigned long impid)
 {
+       if (!IS_ENABLED(CONFIG_ERRATA_THEAD_PBMT))
+               return false;
+
        if (arch_id != 0 || impid != 0)
                return false;
 
@@ -30,7 +33,9 @@ static bool errata_probe_pbmt(unsigned int stage,
 static bool errata_probe_cmo(unsigned int stage,
                             unsigned long arch_id, unsigned long impid)
 {
-#ifdef CONFIG_ERRATA_THEAD_CMO
+       if (!IS_ENABLED(CONFIG_ERRATA_THEAD_CMO))
+               return false;
+
        if (arch_id != 0 || impid != 0)
                return false;
 
@@ -40,9 +45,6 @@ static bool errata_probe_cmo(unsigned int stage,
        riscv_cbom_block_size = L1_CACHE_BYTES;
        riscv_noncoherent_supported();
        return true;
-#else
-       return false;
-#endif
 }
 
 static u32 thead_errata_probe(unsigned int stage,
@@ -51,10 +53,10 @@ static u32 thead_errata_probe(unsigned int stage,
        u32 cpu_req_errata = 0;
 
        if (errata_probe_pbmt(stage, archid, impid))
-               cpu_req_errata |= (1U << ERRATA_THEAD_PBMT);
+               cpu_req_errata |= BIT(ERRATA_THEAD_PBMT);
 
        if (errata_probe_cmo(stage, archid, impid))
-               cpu_req_errata |= (1U << ERRATA_THEAD_CMO);
+               cpu_req_errata |= BIT(ERRATA_THEAD_CMO);
 
        return cpu_req_errata;
 }
index 273ece6..f6fbe70 100644 (file)
@@ -42,19 +42,13 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
 
 #endif /* CONFIG_SMP */
 
-/*
- * The T-Head CMO errata internally probe the CBOM block size, but otherwise
- * don't depend on Zicbom.
- */
 extern unsigned int riscv_cbom_block_size;
-#ifdef CONFIG_RISCV_ISA_ZICBOM
 void riscv_init_cbom_blocksize(void);
-#else
-static inline void riscv_init_cbom_blocksize(void) { }
-#endif
 
 #ifdef CONFIG_RISCV_DMA_NONCOHERENT
 void riscv_noncoherent_supported(void);
+#else
+static inline void riscv_noncoherent_supported(void) {}
 #endif
 
 /*
index 14fc734..e7acffd 100644 (file)
@@ -99,6 +99,10 @@ do {                                                         \
                get_cache_size(2, CACHE_TYPE_UNIFIED));         \
        NEW_AUX_ENT(AT_L2_CACHEGEOMETRY,                        \
                get_cache_geometry(2, CACHE_TYPE_UNIFIED));     \
+       NEW_AUX_ENT(AT_L3_CACHESIZE,                            \
+               get_cache_size(3, CACHE_TYPE_UNIFIED));         \
+       NEW_AUX_ENT(AT_L3_CACHEGEOMETRY,                        \
+               get_cache_geometry(3, CACHE_TYPE_UNIFIED));     \
 } while (0)
 #define ARCH_HAS_SETUP_ADDITIONAL_PAGES
 struct linux_binprm;
index 69605a4..92080a2 100644 (file)
@@ -101,9 +101,9 @@ __io_reads_ins(reads, u32, l, __io_br(), __io_ar(addr))
 __io_reads_ins(ins,  u8, b, __io_pbr(), __io_par(addr))
 __io_reads_ins(ins, u16, w, __io_pbr(), __io_par(addr))
 __io_reads_ins(ins, u32, l, __io_pbr(), __io_par(addr))
-#define insb(addr, buffer, count) __insb((void __iomem *)(long)addr, buffer, count)
-#define insw(addr, buffer, count) __insw((void __iomem *)(long)addr, buffer, count)
-#define insl(addr, buffer, count) __insl((void __iomem *)(long)addr, buffer, count)
+#define insb(addr, buffer, count) __insb(PCI_IOBASE + (addr), buffer, count)
+#define insw(addr, buffer, count) __insw(PCI_IOBASE + (addr), buffer, count)
+#define insl(addr, buffer, count) __insl(PCI_IOBASE + (addr), buffer, count)
 
 __io_writes_outs(writes,  u8, b, __io_bw(), __io_aw())
 __io_writes_outs(writes, u16, w, __io_bw(), __io_aw())
@@ -115,22 +115,22 @@ __io_writes_outs(writes, u32, l, __io_bw(), __io_aw())
 __io_writes_outs(outs,  u8, b, __io_pbw(), __io_paw())
 __io_writes_outs(outs, u16, w, __io_pbw(), __io_paw())
 __io_writes_outs(outs, u32, l, __io_pbw(), __io_paw())
-#define outsb(addr, buffer, count) __outsb((void __iomem *)(long)addr, buffer, count)
-#define outsw(addr, buffer, count) __outsw((void __iomem *)(long)addr, buffer, count)
-#define outsl(addr, buffer, count) __outsl((void __iomem *)(long)addr, buffer, count)
+#define outsb(addr, buffer, count) __outsb(PCI_IOBASE + (addr), buffer, count)
+#define outsw(addr, buffer, count) __outsw(PCI_IOBASE + (addr), buffer, count)
+#define outsl(addr, buffer, count) __outsl(PCI_IOBASE + (addr), buffer, count)
 
 #ifdef CONFIG_64BIT
 __io_reads_ins(reads, u64, q, __io_br(), __io_ar(addr))
 #define readsq(addr, buffer, count) __readsq(addr, buffer, count)
 
 __io_reads_ins(ins, u64, q, __io_pbr(), __io_par(addr))
-#define insq(addr, buffer, count) __insq((void __iomem *)addr, buffer, count)
+#define insq(addr, buffer, count) __insq(PCI_IOBASE + (addr), buffer, count)
 
 __io_writes_outs(writes, u64, q, __io_bw(), __io_aw())
 #define writesq(addr, buffer, count) __writesq(addr, buffer, count)
 
 __io_writes_outs(outs, u64, q, __io_pbr(), __io_paw())
-#define outsq(addr, buffer, count) __outsq((void __iomem *)addr, buffer, count)
+#define outsq(addr, buffer, count) __outsq(PCI_IOBASE + (addr), buffer, count)
 #endif
 
 #include <asm-generic/io.h>
index 0d8fdb8..82f7260 100644 (file)
@@ -45,6 +45,7 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
 void kvm_riscv_guest_timer_init(struct kvm *kvm);
+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
 bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);
 
index cedcf8e..0099dc1 100644 (file)
@@ -16,7 +16,6 @@ typedef struct {
        atomic_long_t id;
 #endif
        void *vdso;
-       void *vdso_info;
 #ifdef CONFIG_SMP
        /* A local icache flush is needed before user execution can resume. */
        cpumask_t icache_stale_mask;
index 32c73ba..fb187a3 100644 (file)
 #define AT_L1D_CACHEGEOMETRY   43
 #define AT_L2_CACHESIZE                44
 #define AT_L2_CACHEGEOMETRY    45
+#define AT_L3_CACHESIZE                46
+#define AT_L3_CACHEGEOMETRY    47
 
 /* entries in ARCH_DLINFO */
-#define AT_VECTOR_SIZE_ARCH    7
+#define AT_VECTOR_SIZE_ARCH    9
 
 #endif /* _UAPI_ASM_RISCV_AUXVEC_H */
index 4d0dece..fa427bd 100644 (file)
@@ -3,10 +3,13 @@
  * Copyright (C) 2012 Regents of the University of California
  */
 
+#include <linux/cpu.h>
 #include <linux/init.h>
 #include <linux/seq_file.h>
 #include <linux/of.h>
+#include <asm/csr.h>
 #include <asm/hwcap.h>
+#include <asm/sbi.h>
 #include <asm/smp.h>
 #include <asm/pgtable.h>
 
@@ -68,6 +71,50 @@ int riscv_of_parent_hartid(struct device_node *node, unsigned long *hartid)
 }
 
 #ifdef CONFIG_PROC_FS
+
+struct riscv_cpuinfo {
+       unsigned long mvendorid;
+       unsigned long marchid;
+       unsigned long mimpid;
+};
+static DEFINE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);
+
+static int riscv_cpuinfo_starting(unsigned int cpu)
+{
+       struct riscv_cpuinfo *ci = this_cpu_ptr(&riscv_cpuinfo);
+
+#if IS_ENABLED(CONFIG_RISCV_SBI)
+       ci->mvendorid = sbi_spec_is_0_1() ? 0 : sbi_get_mvendorid();
+       ci->marchid = sbi_spec_is_0_1() ? 0 : sbi_get_marchid();
+       ci->mimpid = sbi_spec_is_0_1() ? 0 : sbi_get_mimpid();
+#elif IS_ENABLED(CONFIG_RISCV_M_MODE)
+       ci->mvendorid = csr_read(CSR_MVENDORID);
+       ci->marchid = csr_read(CSR_MARCHID);
+       ci->mimpid = csr_read(CSR_MIMPID);
+#else
+       ci->mvendorid = 0;
+       ci->marchid = 0;
+       ci->mimpid = 0;
+#endif
+
+       return 0;
+}
+
+static int __init riscv_cpuinfo_init(void)
+{
+       int ret;
+
+       ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "riscv/cpuinfo:starting",
+                               riscv_cpuinfo_starting, NULL);
+       if (ret < 0) {
+               pr_err("cpuinfo: failed to register hotplug callbacks.\n");
+               return ret;
+       }
+
+       return 0;
+}
+device_initcall(riscv_cpuinfo_init);
+
 #define __RISCV_ISA_EXT_DATA(UPROP, EXTID) \
        {                                                       \
                .uprop = #UPROP,                                \
@@ -186,6 +233,7 @@ static int c_show(struct seq_file *m, void *v)
 {
        unsigned long cpu_id = (unsigned long)v - 1;
        struct device_node *node = of_get_cpu_node(cpu_id, NULL);
+       struct riscv_cpuinfo *ci = per_cpu_ptr(&riscv_cpuinfo, cpu_id);
        const char *compat, *isa;
 
        seq_printf(m, "processor\t: %lu\n", cpu_id);
@@ -196,6 +244,9 @@ static int c_show(struct seq_file *m, void *v)
        if (!of_property_read_string(node, "compatible", &compat)
            && strcmp(compat, "riscv"))
                seq_printf(m, "uarch\t\t: %s\n", compat);
+       seq_printf(m, "mvendorid\t: 0x%lx\n", ci->mvendorid);
+       seq_printf(m, "marchid\t\t: 0x%lx\n", ci->marchid);
+       seq_printf(m, "mimpid\t\t: 0x%lx\n", ci->mimpid);
        seq_puts(m, "\n");
        of_node_put(node);
 
index 9774f12..694267d 100644 (file)
@@ -254,35 +254,28 @@ void __init riscv_fill_hwcap(void)
 #ifdef CONFIG_RISCV_ALTERNATIVE
 static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage)
 {
-#ifdef CONFIG_RISCV_ISA_SVPBMT
-       switch (stage) {
-       case RISCV_ALTERNATIVES_EARLY_BOOT:
+       if (!IS_ENABLED(CONFIG_RISCV_ISA_SVPBMT))
+               return false;
+
+       if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
                return false;
-       default:
-               return riscv_isa_extension_available(NULL, SVPBMT);
-       }
-#endif
 
-       return false;
+       return riscv_isa_extension_available(NULL, SVPBMT);
 }
 
 static bool __init_or_module cpufeature_probe_zicbom(unsigned int stage)
 {
-#ifdef CONFIG_RISCV_ISA_ZICBOM
-       switch (stage) {
-       case RISCV_ALTERNATIVES_EARLY_BOOT:
+       if (!IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM))
+               return false;
+
+       if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
+               return false;
+
+       if (!riscv_isa_extension_available(NULL, ZICBOM))
                return false;
-       default:
-               if (riscv_isa_extension_available(NULL, ZICBOM)) {
-                       riscv_noncoherent_supported();
-                       return true;
-               } else {
-                       return false;
-               }
-       }
-#endif
 
-       return false;
+       riscv_noncoherent_supported();
+       return true;
 }
 
 /*
@@ -297,10 +290,10 @@ static u32 __init_or_module cpufeature_probe(unsigned int stage)
        u32 cpu_req_feature = 0;
 
        if (cpufeature_probe_svpbmt(stage))
-               cpu_req_feature |= (1U << CPUFEATURE_SVPBMT);
+               cpu_req_feature |= BIT(CPUFEATURE_SVPBMT);
 
        if (cpufeature_probe_zicbom(stage))
-               cpu_req_feature |= (1U << CPUFEATURE_ZICBOM);
+               cpu_req_feature |= BIT(CPUFEATURE_ZICBOM);
 
        return cpu_req_feature;
 }
index 2dfc463..ad76bb5 100644 (file)
@@ -252,10 +252,10 @@ static void __init parse_dtb(void)
                        pr_info("Machine model: %s\n", name);
                        dump_stack_set_arch_desc("%s (DT)", name);
                }
-               return;
+       } else {
+               pr_err("No DTB passed to the kernel\n");
        }
 
-       pr_err("No DTB passed to the kernel\n");
 #ifdef CONFIG_CMDLINE_FORCE
        strscpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
        pr_info("Forcing kernel command line to: %s\n", boot_command_line);
index 571556b..5d3f2fb 100644 (file)
@@ -18,9 +18,6 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
        if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
                return -EINVAL;
 
-       if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
-               return -EINVAL;
-
        return ksys_mmap_pgoff(addr, len, prot, flags, fd,
                               offset >> (PAGE_SHIFT - page_shift_offset));
 }
index 635e6ec..f3e96d6 100644 (file)
@@ -33,6 +33,7 @@ void die(struct pt_regs *regs, const char *str)
 {
        static int die_counter;
        int ret;
+       long cause;
 
        oops_enter();
 
@@ -42,11 +43,13 @@ void die(struct pt_regs *regs, const char *str)
 
        pr_emerg("%s [#%d]\n", str, ++die_counter);
        print_modules();
-       show_regs(regs);
+       if (regs)
+               show_regs(regs);
 
-       ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+       cause = regs ? regs->cause : -1;
+       ret = notify_die(DIE_OOPS, str, regs, 0, cause, SIGSEGV);
 
-       if (regs && kexec_should_crash(current))
+       if (kexec_should_crash(current))
                crash_kexec(regs);
 
        bust_spinlocks(0);
index 692e7ae..123d052 100644 (file)
@@ -60,6 +60,11 @@ struct __vdso_info {
        struct vm_special_mapping *cm;
 };
 
+static struct __vdso_info vdso_info;
+#ifdef CONFIG_COMPAT
+static struct __vdso_info compat_vdso_info;
+#endif
+
 static int vdso_mremap(const struct vm_special_mapping *sm,
                       struct vm_area_struct *new_vma)
 {
@@ -115,15 +120,18 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
        struct mm_struct *mm = task->mm;
        struct vm_area_struct *vma;
        VMA_ITERATOR(vmi, mm, 0);
-       struct __vdso_info *vdso_info = mm->context.vdso_info;
 
        mmap_read_lock(mm);
 
        for_each_vma(vmi, vma) {
                unsigned long size = vma->vm_end - vma->vm_start;
 
-               if (vma_is_special_mapping(vma, vdso_info->dm))
+               if (vma_is_special_mapping(vma, vdso_info.dm))
                        zap_page_range(vma, vma->vm_start, size);
+#ifdef CONFIG_COMPAT
+               if (vma_is_special_mapping(vma, compat_vdso_info.dm))
+                       zap_page_range(vma, vma->vm_start, size);
+#endif
        }
 
        mmap_read_unlock(mm);
@@ -265,7 +273,6 @@ static int __setup_additional_pages(struct mm_struct *mm,
 
        vdso_base += VVAR_SIZE;
        mm->context.vdso = (void *)vdso_base;
-       mm->context.vdso_info = (void *)vdso_info;
 
        ret =
           _install_special_mapping(mm, vdso_base, vdso_text_len,
index a032c4f..71ebbc4 100644 (file)
@@ -708,6 +708,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
                                clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
                }
        }
+
+       /* Sync-up timer CSRs */
+       kvm_riscv_vcpu_timer_sync(vcpu);
 }
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
index 185f238..ad34519 100644 (file)
@@ -320,20 +320,33 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
        kvm_riscv_vcpu_timer_unblocking(vcpu);
 }
 
-void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu)
 {
        struct kvm_vcpu_timer *t = &vcpu->arch.timer;
 
        if (!t->sstc_enabled)
                return;
 
-       t = &vcpu->arch.timer;
 #if defined(CONFIG_32BIT)
        t->next_cycles = csr_read(CSR_VSTIMECMP);
        t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
 #else
        t->next_cycles = csr_read(CSR_VSTIMECMP);
 #endif
+}
+
+void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
+{
+       struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+       if (!t->sstc_enabled)
+               return;
+
+       /*
+        * The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
+        * upon every VM exit so no need to save here.
+        */
+
        /* timer should be enabled for the remaining operations */
        if (unlikely(!t->init_done))
                return;
index 6cb7d96..57b40a3 100644 (file)
@@ -3,6 +3,7 @@
  * Copyright (C) 2017 SiFive
  */
 
+#include <linux/of.h>
 #include <asm/cacheflush.h>
 
 #ifdef CONFIG_SMP
@@ -86,3 +87,40 @@ void flush_icache_pte(pte_t pte)
                flush_icache_all();
 }
 #endif /* CONFIG_MMU */
+
+unsigned int riscv_cbom_block_size;
+EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
+
+void riscv_init_cbom_blocksize(void)
+{
+       struct device_node *node;
+       unsigned long cbom_hartid;
+       u32 val, probed_block_size;
+       int ret;
+
+       probed_block_size = 0;
+       for_each_of_cpu_node(node) {
+               unsigned long hartid;
+
+               ret = riscv_of_processor_hartid(node, &hartid);
+               if (ret)
+                       continue;
+
+               /* set block-size for cbom extension if available */
+               ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
+               if (ret)
+                       continue;
+
+               if (!probed_block_size) {
+                       probed_block_size = val;
+                       cbom_hartid = hartid;
+               } else {
+                       if (probed_block_size != val)
+                               pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
+                                       cbom_hartid, hartid);
+               }
+       }
+
+       if (probed_block_size)
+               riscv_cbom_block_size = probed_block_size;
+}
index b0add98..d919efa 100644 (file)
@@ -8,13 +8,8 @@
 #include <linux/dma-direct.h>
 #include <linux/dma-map-ops.h>
 #include <linux/mm.h>
-#include <linux/of.h>
-#include <linux/of_device.h>
 #include <asm/cacheflush.h>
 
-unsigned int riscv_cbom_block_size;
-EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
-
 static bool noncoherent_supported;
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@@ -77,42 +72,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
        dev->dma_coherent = coherent;
 }
 
-#ifdef CONFIG_RISCV_ISA_ZICBOM
-void riscv_init_cbom_blocksize(void)
-{
-       struct device_node *node;
-       unsigned long cbom_hartid;
-       u32 val, probed_block_size;
-       int ret;
-
-       probed_block_size = 0;
-       for_each_of_cpu_node(node) {
-               unsigned long hartid;
-
-               ret = riscv_of_processor_hartid(node, &hartid);
-               if (ret)
-                       continue;
-
-               /* set block-size for cbom extension if available */
-               ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
-               if (ret)
-                       continue;
-
-               if (!probed_block_size) {
-                       probed_block_size = val;
-                       cbom_hartid = hartid;
-               } else {
-                       if (probed_block_size != val)
-                               pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
-                                       cbom_hartid, hartid);
-               }
-       }
-
-       if (probed_block_size)
-               riscv_cbom_block_size = probed_block_size;
-}
-#endif
-
 void riscv_noncoherent_supported(void)
 {
        WARN(!riscv_cbom_block_size,
index f2fbd14..d86f7ce 100644 (file)
@@ -184,7 +184,8 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
                }
                break;
        case EXC_LOAD_PAGE_FAULT:
-               if (!(vma->vm_flags & VM_READ)) {
+               /* Write implies read */
+               if (!(vma->vm_flags & (VM_READ | VM_WRITE))) {
                        return true;
                }
                break;
index d5119e0..42af4b3 100644 (file)
@@ -224,13 +224,13 @@ unsigned long __get_wchan(struct task_struct *p)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() & ~PAGE_MASK;
+               sp -= prandom_u32_max(PAGE_SIZE);
        return sp & ~0xf;
 }
 
 static inline unsigned long brk_rnd(void)
 {
-       return (get_random_int() & BRK_RND_MASK) << PAGE_SHIFT;
+       return (get_random_u16() & BRK_RND_MASK) << PAGE_SHIFT;
 }
 
 unsigned long arch_randomize_brk(struct mm_struct *mm)
index 535099f..3105ca5 100644 (file)
@@ -227,7 +227,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned long len)
        end -= len;
 
        if (end > start) {
-               offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+               offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1);
                addr = start + (offset << PAGE_SHIFT);
        } else {
                addr = start;
index 5980ce3..3327c47 100644 (file)
@@ -37,7 +37,7 @@ static inline int mmap_is_legacy(struct rlimit *rlim_stack)
 
 unsigned long arch_mmap_rnd(void)
 {
-       return (get_random_int() & MMAP_RND_MASK) << PAGE_SHIFT;
+       return (get_random_u32() & MMAP_RND_MASK) << PAGE_SHIFT;
 }
 
 static unsigned long mmap_base_legacy(unsigned long rnd)
index cc19e09..ae9a86c 100644 (file)
@@ -354,7 +354,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned int len)
        unsigned int offset;
 
        /* This loses some more bits than a modulo, but is cheaper */
-       offset = get_random_int() & (PTRS_PER_PTE - 1);
+       offset = prandom_u32_max(PTRS_PER_PTE);
        return start + (offset << PAGE_SHIFT);
 }
 
index c37cc4f..3fec3b8 100644 (file)
@@ -36,7 +36,6 @@ extern int console_write_chan(struct chan *chan, const char *buf,
                              int len);
 extern int console_open_chan(struct line *line, struct console *co);
 extern void deactivate_chan(struct chan *chan, int irq);
-extern void reactivate_chan(struct chan *chan, int irq);
 extern void chan_enable_winch(struct chan *chan, struct tty_port *port);
 extern int enable_chan(struct line *line);
 extern void close_chan(struct line *line);
index 8ca67a6..5026e7b 100644 (file)
@@ -283,7 +283,7 @@ struct unplugged_pages {
 };
 
 static DEFINE_MUTEX(plug_mem_mutex);
-static unsigned long long unplugged_pages_count = 0;
+static unsigned long long unplugged_pages_count;
 static LIST_HEAD(unplugged_pages);
 static int unplug_index = UNPLUGGED_PER_PAGE;
 
@@ -846,13 +846,12 @@ static int notify_panic(struct notifier_block *self, unsigned long unused1,
 
        mconsole_notify(notify_socket, MCONSOLE_PANIC, message,
                        strlen(message) + 1);
-       return 0;
+       return NOTIFY_DONE;
 }
 
 static struct notifier_block panic_exit_notifier = {
-       .notifier_call          = notify_panic,
-       .next                   = NULL,
-       .priority               = 1
+       .notifier_call  = notify_panic,
+       .priority       = INT_MAX, /* run as soon as possible */
 };
 
 static int add_notifier(void)
index 0bf78ff..807cd33 100644 (file)
@@ -122,7 +122,7 @@ static int __init mmapper_init(void)
        return 0;
 }
 
-static void mmapper_exit(void)
+static void __exit mmapper_exit(void)
 {
        misc_deregister(&mmapper_dev);
 }
index 5933138..3d7836c 100644 (file)
@@ -265,7 +265,7 @@ static void uml_net_poll_controller(struct net_device *dev)
 static void uml_net_get_drvinfo(struct net_device *dev,
                                struct ethtool_drvinfo *info)
 {
-       strlcpy(info->driver, DRIVER_NAME, sizeof(info->driver));
+       strscpy(info->driver, DRIVER_NAME, sizeof(info->driver));
 }
 
 static const struct ethtool_ops uml_net_ethtool_ops = {
index 8514966..277cea3 100644 (file)
@@ -106,7 +106,7 @@ static const struct tty_operations ssl_ops = {
 /* Changed by ssl_init and referenced by ssl_exit, which are both serialized
  * by being an initcall and exitcall, respectively.
  */
-static int ssl_init_done = 0;
+static int ssl_init_done;
 
 static void ssl_console_write(struct console *c, const char *string,
                              unsigned len)
index 489d5a7..1c23973 100644 (file)
@@ -88,7 +88,7 @@ static int con_remove(int n, char **error_out)
 }
 
 /* Set in an initcall, checked in an exitcall */
-static int con_init_done = 0;
+static int con_init_done;
 
 static int con_install(struct tty_driver *driver, struct tty_struct *tty)
 {
index eb2d2f0..f4c1e6e 100644 (file)
@@ -1555,7 +1555,7 @@ static void do_io(struct io_thread_req *req, struct io_desc *desc)
 int kernel_fd = -1;
 
 /* Only changed by the io thread. XXX: currently unused. */
-static int io_count = 0;
+static int io_count;
 
 int io_thread(void *arg)
 {
index 5482653..ded7c47 100644 (file)
@@ -1372,7 +1372,7 @@ static void vector_net_poll_controller(struct net_device *dev)
 static void vector_net_get_drvinfo(struct net_device *dev,
                                struct ethtool_drvinfo *info)
 {
-       strlcpy(info->driver, DRIVER_NAME, sizeof(info->driver));
+       strscpy(info->driver, DRIVER_NAME, sizeof(info->driver));
 }
 
 static int vector_net_load_bpf_flash(struct net_device *dev,
index 0278470..acb55b3 100644 (file)
@@ -857,7 +857,7 @@ void *pci_root_bus_fwnode(struct pci_bus *bus)
        return um_pci_fwnode;
 }
 
-static int um_pci_init(void)
+static int __init um_pci_init(void)
 {
        int err, i;
 
@@ -940,7 +940,7 @@ free:
 }
 module_init(um_pci_init);
 
-static void um_pci_exit(void)
+static void __exit um_pci_exit(void)
 {
        unregister_virtio_driver(&um_pci_virtio_driver);
        irq_domain_remove(um_pci_msi_domain);
index e719af8..588930a 100644 (file)
@@ -374,45 +374,48 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
                u8 extra_payload[512];
        } msg;
        int rc;
+       irqreturn_t irq_rc = IRQ_NONE;
 
-       rc = vhost_user_recv_req(vu_dev, &msg.msg,
-                                sizeof(msg.msg.payload) +
-                                sizeof(msg.extra_payload));
-
-       vu_dev->recv_rc = rc;
-       if (rc)
-               return IRQ_NONE;
-
-       switch (msg.msg.header.request) {
-       case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
-               vu_dev->config_changed_irq = true;
-               response = 0;
-               break;
-       case VHOST_USER_SLAVE_VRING_CALL:
-               virtio_device_for_each_vq((&vu_dev->vdev), vq) {
-                       if (vq->index == msg.msg.payload.vring_state.index) {
-                               response = 0;
-                               vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
-                               break;
+       while (1) {
+               rc = vhost_user_recv_req(vu_dev, &msg.msg,
+                                        sizeof(msg.msg.payload) +
+                                        sizeof(msg.extra_payload));
+               if (rc)
+                       break;
+
+               switch (msg.msg.header.request) {
+               case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
+                       vu_dev->config_changed_irq = true;
+                       response = 0;
+                       break;
+               case VHOST_USER_SLAVE_VRING_CALL:
+                       virtio_device_for_each_vq((&vu_dev->vdev), vq) {
+                               if (vq->index == msg.msg.payload.vring_state.index) {
+                                       response = 0;
+                                       vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
+                                       break;
+                               }
                        }
+                       break;
+               case VHOST_USER_SLAVE_IOTLB_MSG:
+                       /* not supported - VIRTIO_F_ACCESS_PLATFORM */
+               case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
+                       /* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
+               default:
+                       vu_err(vu_dev, "unexpected slave request %d\n",
+                              msg.msg.header.request);
                }
-               break;
-       case VHOST_USER_SLAVE_IOTLB_MSG:
-               /* not supported - VIRTIO_F_ACCESS_PLATFORM */
-       case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
-               /* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
-       default:
-               vu_err(vu_dev, "unexpected slave request %d\n",
-                      msg.msg.header.request);
-       }
-
-       if (ev && !vu_dev->suspended)
-               time_travel_add_irq_event(ev);
 
-       if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
-               vhost_user_reply(vu_dev, &msg.msg, response);
+               if (ev && !vu_dev->suspended)
+                       time_travel_add_irq_event(ev);
 
-       return IRQ_HANDLED;
+               if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
+                       vhost_user_reply(vu_dev, &msg.msg, response);
+               irq_rc = IRQ_HANDLED;
+       };
+       /* mask EAGAIN as we try non-blocking read until socket is empty */
+       vu_dev->recv_rc = (rc == -EAGAIN) ? 0 : rc;
+       return irq_rc;
 }
 
 static irqreturn_t vu_req_interrupt(int irq, void *data)
index e7c7b53..9148511 100644 (file)
@@ -169,7 +169,7 @@ __uml_setup("iomem=", parse_iomem,
 );
 
 /*
- * This list is constructed in parse_iomem and addresses filled in in
+ * This list is constructed in parse_iomem and addresses filled in
  * setup_iomem, both of which run during early boot.  Afterwards, it's
  * unchanged.
  */
index 80b90b1..010bc42 100644 (file)
@@ -356,7 +356,7 @@ int singlestepping(void * t)
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() % 8192;
+               sp -= prandom_u32_max(8192);
        return sp & ~0xf;
 }
 #endif
index d9e023c..8adf8e8 100644 (file)
@@ -96,7 +96,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 
 static void *c_start(struct seq_file *m, loff_t *pos)
 {
-       return *pos < NR_CPUS ? cpu_data + *pos : NULL;
+       return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
 }
 
 static void *c_next(struct seq_file *m, void *v, loff_t *pos)
@@ -132,7 +132,7 @@ static int have_root __initdata;
 static int have_console __initdata;
 
 /* Set in uml_mem_setup and modified in linux_main */
-long long physmem_size = 32 * 1024 * 1024;
+long long physmem_size = 64 * 1024 * 1024;
 EXPORT_SYMBOL(physmem_size);
 
 static const char *usage_string =
@@ -247,13 +247,13 @@ static int panic_exit(struct notifier_block *self, unsigned long unused1,
        bust_spinlocks(0);
        uml_exitcode = 1;
        os_dump_core();
-       return 0;
+
+       return NOTIFY_DONE;
 }
 
 static struct notifier_block panic_exit_notifier = {
-       .notifier_call          = panic_exit,
-       .next                   = NULL,
-       .priority               = 0
+       .notifier_call  = panic_exit,
+       .priority       = INT_MAX - 1, /* run as 2nd notifier, won't return */
 };
 
 void uml_finishsetup(void)
@@ -416,7 +416,7 @@ void __init setup_arch(char **cmdline_p)
        read_initrd();
 
        paging_init();
-       strlcpy(boot_command_line, command_line, COMMAND_LINE_SIZE);
+       strscpy(boot_command_line, command_line, COMMAND_LINE_SIZE);
        *cmdline_p = command_line;
        setup_hostinfo(host_info, sizeof host_info);
 
index 8031a03..72bc60a 100644 (file)
@@ -9,7 +9,7 @@
 #include <os.h>
 
 /* Changed by set_umid_arg */
-static int umid_inited = 0;
+static int umid_inited;
 
 static int __init set_umid_arg(char *name, int *add)
 {
index 6d1879e..67745ce 100644 (file)
@@ -1973,7 +1973,6 @@ config EFI
 config EFI_STUB
        bool "EFI stub support"
        depends on EFI
-       depends on $(cc-option,-mabi=ms) || X86_32
        select RELOCATABLE
        help
          This kernel feature allows a bzImage to be loaded directly
index 6292b96..311eae3 100644 (file)
@@ -327,7 +327,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
        end -= len;
 
        if (end > start) {
-               offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+               offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1);
                addr = start + (offset << PAGE_SHIFT);
        } else {
                addr = start;
index 4fce1a4..8259d72 100644 (file)
@@ -1596,7 +1596,7 @@ void __init intel_pmu_arch_lbr_init(void)
        return;
 
 clear_arch_lbr:
-       clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
+       setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
 }
 
 /**
index 0bef44d..2fd52b6 100644 (file)
@@ -25,8 +25,10 @@ arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr)
 {
        u64 start = rmrr->base_address;
        u64 end = rmrr->end_address + 1;
+       int entry_type;
 
-       if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
+       entry_type = e820__get_entry_type(start, end);
+       if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
                return 0;
 
        pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",
index 48276c0..860b602 100644 (file)
@@ -503,7 +503,7 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
                va_align.flags    = ALIGN_VA_32 | ALIGN_VA_64;
 
                /* A random value per boot for bit slice [12:upper_bit) */
-               va_align.bits = get_random_int() & va_align.mask;
+               va_align.bits = get_random_u32() & va_align.mask;
        }
 
        if (cpu_has(c, X86_FEATURE_MWAITX))
index e7410e9..3a35dec 100644 (file)
@@ -440,7 +440,13 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
                return ret;
 
        native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-       if (rev >= mc->hdr.patch_id)
+
+       /*
+        * Allow application of the same revision to pick up SMT-specific
+        * changes even if the revision of the other SMT thread is already
+        * up-to-date.
+        */
+       if (rev > mc->hdr.patch_id)
                return ret;
 
        if (!__apply_microcode_amd(mc)) {
@@ -528,8 +534,12 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
 
        native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
 
-       /* Check whether we have saved a new patch already: */
-       if (*new_rev && rev < mc->hdr.patch_id) {
+       /*
+        * Check whether a new patch has been saved already. Also, allow application of
+        * the same revision in order to pick up SMT-thread-specific configuration even
+        * if the sibling SMT thread already has an up-to-date revision.
+        */
+       if (*new_rev && rev <= mc->hdr.patch_id) {
                if (!__apply_microcode_amd(mc)) {
                        *new_rev = mc->hdr.patch_id;
                        return;
index de62b0b..3266ea3 100644 (file)
@@ -66,9 +66,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
                        .rid                    = RDT_RESOURCE_L3,
                        .name                   = "L3",
                        .cache_level            = 3,
-                       .cache = {
-                               .min_cbm_bits   = 1,
-                       },
                        .domains                = domain_init(RDT_RESOURCE_L3),
                        .parse_ctrlval          = parse_cbm,
                        .format_str             = "%d=%0*x",
@@ -83,9 +80,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
                        .rid                    = RDT_RESOURCE_L2,
                        .name                   = "L2",
                        .cache_level            = 2,
-                       .cache = {
-                               .min_cbm_bits   = 1,
-                       },
                        .domains                = domain_init(RDT_RESOURCE_L2),
                        .parse_ctrlval          = parse_cbm,
                        .format_str             = "%d=%0*x",
@@ -836,6 +830,7 @@ static __init void rdt_init_res_defs_intel(void)
                        r->cache.arch_has_sparse_bitmaps = false;
                        r->cache.arch_has_empty_bitmaps = false;
                        r->cache.arch_has_per_cpu_cfg = false;
+                       r->cache.min_cbm_bits = 1;
                } else if (r->rid == RDT_RESOURCE_MBA) {
                        hw_res->msr_base = MSR_IA32_MBA_THRTL_BASE;
                        hw_res->msr_update = mba_wrmsr_intel;
@@ -856,6 +851,7 @@ static __init void rdt_init_res_defs_amd(void)
                        r->cache.arch_has_sparse_bitmaps = true;
                        r->cache.arch_has_empty_bitmaps = true;
                        r->cache.arch_has_per_cpu_cfg = true;
+                       r->cache.min_cbm_bits = 0;
                } else if (r->rid == RDT_RESOURCE_MBA) {
                        hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
                        hw_res->msr_update = mba_wrmsr_amd;
index 132a2de..5e868b6 100644 (file)
@@ -96,6 +96,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
        unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
        unsigned int core_select_mask, core_level_siblings;
        unsigned int die_select_mask, die_level_siblings;
+       unsigned int pkg_mask_width;
        bool die_level_present = false;
        int leaf;
 
@@ -111,10 +112,10 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
        core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
        core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
        die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-       die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+       pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
 
        sub_index = 1;
-       do {
+       while (true) {
                cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
 
                /*
@@ -132,10 +133,15 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
                        die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
                }
 
+               if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
+                       pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+               else
+                       break;
+
                sub_index++;
-       } while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
+       }
 
-       core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
+       core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
        die_select_mask = (~(-1 << die_plus_mask_width)) >>
                                core_plus_mask_width;
 
@@ -148,7 +154,7 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
        }
 
        c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
-                               die_plus_mask_width);
+                               pkg_mask_width);
        /*
         * Reinit the apicid, now that we have extended initial_apicid.
         */
index 621f4b6..8946f89 100644 (file)
@@ -210,13 +210,6 @@ static void __init fpu__init_system_xstate_size_legacy(void)
        fpstate_reset(&current->thread.fpu);
 }
 
-static void __init fpu__init_init_fpstate(void)
-{
-       /* Bring init_fpstate size and features up to date */
-       init_fpstate.size               = fpu_kernel_cfg.max_size;
-       init_fpstate.xfeatures          = fpu_kernel_cfg.max_features;
-}
-
 /*
  * Called on the boot CPU once per system bootup, to set up the initial
  * FPU state that is later cloned into all processes:
@@ -236,5 +229,4 @@ void __init fpu__init_system(struct cpuinfo_x86 *c)
        fpu__init_system_xstate_size_legacy();
        fpu__init_system_xstate(fpu_kernel_cfg.max_size);
        fpu__init_task_struct_size();
-       fpu__init_init_fpstate();
 }
index c834015..59e543b 100644 (file)
@@ -360,7 +360,7 @@ static void __init setup_init_fpu_buf(void)
 
        print_xstate_features();
 
-       xstate_init_xcomp_bv(&init_fpstate.regs.xsave, fpu_kernel_cfg.max_features);
+       xstate_init_xcomp_bv(&init_fpstate.regs.xsave, init_fpstate.xfeatures);
 
        /*
         * Init all the features state with header.xfeatures being 0x0
@@ -678,20 +678,6 @@ static unsigned int __init get_xsave_size_user(void)
        return ebx;
 }
 
-/*
- * Will the runtime-enumerated 'xstate_size' fit in the init
- * task's statically-allocated buffer?
- */
-static bool __init is_supported_xstate_size(unsigned int test_xstate_size)
-{
-       if (test_xstate_size <= sizeof(init_fpstate.regs))
-               return true;
-
-       pr_warn("x86/fpu: xstate buffer too small (%zu < %d), disabling xsave\n",
-                       sizeof(init_fpstate.regs), test_xstate_size);
-       return false;
-}
-
 static int __init init_xstate_size(void)
 {
        /* Recompute the context size for enabled features: */
@@ -717,10 +703,6 @@ static int __init init_xstate_size(void)
        kernel_default_size =
                xstate_calculate_size(fpu_kernel_cfg.default_features, compacted);
 
-       /* Ensure we have the space to store all default enabled features. */
-       if (!is_supported_xstate_size(kernel_default_size))
-               return -EINVAL;
-
        if (!paranoid_xstate_size_valid(kernel_size))
                return -EINVAL;
 
@@ -875,6 +857,19 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
        update_regset_xstate_info(fpu_user_cfg.max_size,
                                  fpu_user_cfg.max_features);
 
+       /*
+        * init_fpstate excludes dynamic states as they are large but init
+        * state is zero.
+        */
+       init_fpstate.size               = fpu_kernel_cfg.default_size;
+       init_fpstate.xfeatures          = fpu_kernel_cfg.default_features;
+
+       if (init_fpstate.size > sizeof(init_fpstate.regs)) {
+               pr_warn("x86/fpu: init_fpstate buffer too small (%zu < %d), disabling XSAVE\n",
+                       sizeof(init_fpstate.regs), init_fpstate.size);
+               goto out_disable;
+       }
+
        setup_init_fpu_buf();
 
        /*
@@ -1130,6 +1125,15 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
         */
        mask = fpstate->user_xfeatures;
 
+       /*
+        * Dynamic features are not present in init_fpstate. When they are
+        * in an all zeros init state, remove those from 'mask' to zero
+        * those features in the user buffer instead of retrieving them
+        * from init_fpstate.
+        */
+       if (fpu_state_size_dynamic())
+               mask &= (header.xfeatures | xinit->header.xcomp_bv);
+
        for_each_extended_xfeature(i, mask) {
                /*
                 * If there was a feature or alignment gap, zero the space
index dfeb227..2a4be92 100644 (file)
@@ -4,6 +4,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/ptrace.h>
 #include <asm/ftrace.h>
 #include <asm/export.h>
 
        .endm
 
+SYM_TYPED_FUNC_START(ftrace_stub)
+       RET
+SYM_FUNC_END(ftrace_stub)
+
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+       RET
+SYM_FUNC_END(ftrace_stub_graph)
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 SYM_FUNC_START(__fentry__)
@@ -172,21 +181,10 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
         */
 SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
        ANNOTATE_NOENDBR
-
-       jmp ftrace_epilogue
+       RET
 SYM_FUNC_END(ftrace_caller);
 STACK_FRAME_NON_STANDARD_FP(ftrace_caller)
 
-SYM_FUNC_START(ftrace_epilogue)
-/*
- * This is weak to keep gas from relaxing the jumps.
- */
-SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
-       UNWIND_HINT_FUNC
-       ENDBR
-       RET
-SYM_FUNC_END(ftrace_epilogue)
-
 SYM_FUNC_START(ftrace_regs_caller)
        /* Save the current flags before any operations that can change them */
        pushfq
@@ -262,14 +260,11 @@ SYM_INNER_LABEL(ftrace_regs_caller_jmp, SYM_L_GLOBAL)
        popfq
 
        /*
-        * As this jmp to ftrace_epilogue can be a short jump
-        * it must not be copied into the trampoline.
-        * The trampoline will add the code to jump
-        * to the return.
+        * The trampoline will add the return.
         */
 SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
        ANNOTATE_NOENDBR
-       jmp ftrace_epilogue
+       RET
 
        /* Swap the flags with orig_rax */
 1:     movq MCOUNT_REG_SIZE(%rsp), %rdi
@@ -280,7 +275,7 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
        /* Restore flags */
        popfq
        UNWIND_HINT_FUNC
-       jmp     ftrace_epilogue
+       RET
 
 SYM_FUNC_END(ftrace_regs_caller)
 STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
@@ -291,9 +286,6 @@ STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
 SYM_FUNC_START(__fentry__)
        cmpq $ftrace_stub, ftrace_trace_function
        jnz trace
-
-SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
-       ENDBR
        RET
 
 trace:
index b1abf66..c032edc 100644 (file)
@@ -53,7 +53,7 @@ static unsigned long int get_module_load_offset(void)
                 */
                if (module_load_offset == 0)
                        module_load_offset =
-                               (get_random_int() % 1024 + 1) * PAGE_SIZE;
+                               (prandom_u32_max(1024) + 1) * PAGE_SIZE;
                mutex_unlock(&module_kaslr_mutex);
        }
        return module_load_offset;
index 58a6ea4..c21b734 100644 (file)
@@ -965,7 +965,7 @@ early_param("idle", idle_setup);
 unsigned long arch_align_stack(unsigned long sp)
 {
        if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-               sp -= get_random_int() % 8192;
+               sp -= prandom_u32_max(8192);
        return sp & ~0xf;
 }
 
index 0ea57da..c059820 100644 (file)
@@ -713,7 +713,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
        /* Otherwise, skip ahead to the user-specified starting frame: */
        while (!unwind_done(state) &&
               (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
-                       state->sp < (unsigned long)first_frame))
+                       state->sp <= (unsigned long)first_frame))
                unwind_next_frame(state);
 
        return;
index 4bd5f8a..9cf1ba8 100644 (file)
@@ -6442,26 +6442,22 @@ static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter,
        return 0;
 }
 
-static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
+static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm,
+                                      struct kvm_msr_filter *filter)
 {
-       struct kvm_msr_filter __user *user_msr_filter = argp;
        struct kvm_x86_msr_filter *new_filter, *old_filter;
-       struct kvm_msr_filter filter;
        bool default_allow;
        bool empty = true;
        int r = 0;
        u32 i;
 
-       if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
-               return -EFAULT;
-
-       if (filter.flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
+       if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY)
                return -EINVAL;
 
-       for (i = 0; i < ARRAY_SIZE(filter.ranges); i++)
-               empty &= !filter.ranges[i].nmsrs;
+       for (i = 0; i < ARRAY_SIZE(filter->ranges); i++)
+               empty &= !filter->ranges[i].nmsrs;
 
-       default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY);
+       default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY);
        if (empty && !default_allow)
                return -EINVAL;
 
@@ -6469,8 +6465,8 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
        if (!new_filter)
                return -ENOMEM;
 
-       for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
-               r = kvm_add_msr_filter(new_filter, &filter.ranges[i]);
+       for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) {
+               r = kvm_add_msr_filter(new_filter, &filter->ranges[i]);
                if (r) {
                        kvm_free_msr_filter(new_filter);
                        return r;
@@ -6493,6 +6489,62 @@ static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp)
        return 0;
 }
 
+#ifdef CONFIG_KVM_COMPAT
+/* for KVM_X86_SET_MSR_FILTER */
+struct kvm_msr_filter_range_compat {
+       __u32 flags;
+       __u32 nmsrs;
+       __u32 base;
+       __u32 bitmap;
+};
+
+struct kvm_msr_filter_compat {
+       __u32 flags;
+       struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES];
+};
+
+#define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat)
+
+long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+                             unsigned long arg)
+{
+       void __user *argp = (void __user *)arg;
+       struct kvm *kvm = filp->private_data;
+       long r = -ENOTTY;
+
+       switch (ioctl) {
+       case KVM_X86_SET_MSR_FILTER_COMPAT: {
+               struct kvm_msr_filter __user *user_msr_filter = argp;
+               struct kvm_msr_filter_compat filter_compat;
+               struct kvm_msr_filter filter;
+               int i;
+
+               if (copy_from_user(&filter_compat, user_msr_filter,
+                                  sizeof(filter_compat)))
+                       return -EFAULT;
+
+               filter.flags = filter_compat.flags;
+               for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) {
+                       struct kvm_msr_filter_range_compat *cr;
+
+                       cr = &filter_compat.ranges[i];
+                       filter.ranges[i] = (struct kvm_msr_filter_range) {
+                               .flags = cr->flags,
+                               .nmsrs = cr->nmsrs,
+                               .base = cr->base,
+                               .bitmap = (__u8 *)(ulong)cr->bitmap,
+                       };
+               }
+
+               r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
+               break;
+       }
+       }
+
+       return r;
+}
+#endif
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 static int kvm_arch_suspend_notifier(struct kvm *kvm)
 {
@@ -6915,9 +6967,16 @@ set_pit2_out:
        case KVM_SET_PMU_EVENT_FILTER:
                r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
                break;
-       case KVM_X86_SET_MSR_FILTER:
-               r = kvm_vm_ioctl_set_msr_filter(kvm, argp);
+       case KVM_X86_SET_MSR_FILTER: {
+               struct kvm_msr_filter __user *user_msr_filter = argp;
+               struct kvm_msr_filter filter;
+
+               if (copy_from_user(&filter, user_msr_filter, sizeof(filter)))
+                       return -EFAULT;
+
+               r = kvm_vm_ioctl_set_msr_filter(kvm, &filter);
                break;
+       }
        default:
                r = -ENOTTY;
        }
index 0612a73..423b21e 100644 (file)
@@ -136,10 +136,10 @@ static int pageattr_test(void)
        failed += print_split(&sa);
 
        for (i = 0; i < NTEST; i++) {
-               unsigned long pfn = prandom_u32() % max_pfn_mapped;
+               unsigned long pfn = prandom_u32_max(max_pfn_mapped);
 
                addr[i] = (unsigned long)__va(pfn << PAGE_SHIFT);
-               len[i] = prandom_u32() % NPAGES;
+               len[i] = prandom_u32_max(NPAGES);
                len[i] = min_t(unsigned long, len[i], max_pfn_mapped - pfn - 1);
 
                if (len[i] == 0)
index 0abd082..51afd6d 100644 (file)
@@ -11,6 +11,7 @@
 #include <linux/bpf.h>
 #include <linux/memory.h>
 #include <linux/sort.h>
+#include <linux/init.h>
 #include <asm/extable.h>
 #include <asm/set_memory.h>
 #include <asm/nospec-branch.h>
@@ -388,6 +389,18 @@ out:
        return ret;
 }
 
+int __init bpf_arch_init_dispatcher_early(void *ip)
+{
+       const u8 *nop_insn = x86_nops[5];
+
+       if (is_endbr(*(u32 *)ip))
+               ip += ENDBR_INSN_SIZE;
+
+       if (memcmp(ip, nop_insn, X86_PATCH_SIZE))
+               text_poke_early(ip, nop_insn, X86_PATCH_SIZE);
+       return 0;
+}
+
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
                       void *old_addr, void *new_addr)
 {
index 64ee618..71f7216 100644 (file)
@@ -369,12 +369,8 @@ struct bfq_queue {
        unsigned long split_time; /* time of last split */
 
        unsigned long first_IO_time; /* time of first I/O for this queue */
-
        unsigned long creation_time; /* when this queue is created */
 
-       /* max service rate measured so far */
-       u32 max_service_rate;
-
        /*
         * Pointer to the waker queue for this queue, i.e., to the
         * queue Q such that this queue happens to get new I/O right
index ec350e0..57c2f32 100644 (file)
@@ -567,7 +567,7 @@ EXPORT_SYMBOL(bio_alloc_bioset);
  * be reused by calling bio_uninit() before calling bio_init() again.
  *
  * Note that unlike bio_alloc() or bio_alloc_bioset() allocations from this
- * function are not backed by a mempool can can fail.  Do not use this function
+ * function are not backed by a mempool and can fail.  Do not use this function
  * for allocations in the file system I/O path.
  *
  * Returns: Pointer to new bio on success, NULL on failure.
@@ -741,7 +741,7 @@ void bio_put(struct bio *bio)
                        return;
        }
 
-       if (bio->bi_opf & REQ_ALLOC_CACHE) {
+       if ((bio->bi_opf & REQ_ALLOC_CACHE) && !WARN_ON_ONCE(in_interrupt())) {
                struct bio_alloc_cache *cache;
 
                bio_uninit(bio);
index 621abd1..ad9844c 100644 (file)
@@ -539,7 +539,7 @@ static int blk_crypto_fallback_init(void)
        if (blk_crypto_fallback_inited)
                return 0;
 
-       prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+       get_random_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
 
        err = bioset_init(&crypto_bio_split, 64, 0, 0);
        if (err)
index 8070b6c..33292c0 100644 (file)
@@ -3112,8 +3112,11 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
        struct page *page;
        unsigned long flags;
 
-       /* There is no need to clear a driver tags own mapping */
-       if (drv_tags == tags)
+       /*
+        * There is no need to clear mapping if driver tags is not initialized
+        * or the mapping belongs to the driver tags.
+        */
+       if (!drv_tags || drv_tags == tags)
                return;
 
        list_for_each_entry(page, &tags->page_list, lru) {
index 2464679..c293e08 100644 (file)
@@ -841,12 +841,11 @@ int wbt_init(struct request_queue *q)
        rwb->last_comp = rwb->last_issue = jiffies;
        rwb->win_nsec = RWB_WINDOW_NSEC;
        rwb->enable_state = WBT_STATE_ON_DEFAULT;
-       rwb->wc = 1;
+       rwb->wc = test_bit(QUEUE_FLAG_WC, &q->queue_flags);
        rwb->rq_depth.default_depth = RWB_DEF_DEPTH;
        rwb->min_lat_nsec = wbt_default_latency_nsec(q);
 
        wbt_queue_depth_changed(&rwb->rqos);
-       wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
 
        /*
         * Assign rwb and add the stats callback.
index 5143953..17b33c6 100644 (file)
@@ -507,6 +507,13 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
                 */
                dev_set_uevent_suppress(ddev, 0);
                disk_uevent(disk, KOBJ_ADD);
+       } else {
+               /*
+                * Even if the block_device for a hidden gendisk is not
+                * registered, it needs to have a valid bd_dev so that the
+                * freeing of the dynamic major works.
+                */
+               disk->part0->bd_dev = MKDEV(disk->major, disk->first_minor);
        }
 
        disk_update_readahead(disk);
index 9719c75..d3fbee1 100644 (file)
@@ -37,7 +37,7 @@ static void makedata(int disks)
        int i;
 
        for (i = 0; i < disks; i++) {
-               prandom_bytes(page_address(data[i]), PAGE_SIZE);
+               get_random_bytes(page_address(data[i]), PAGE_SIZE);
                dataptrs[i] = data[i];
                dataoffs[i] = 0;
        }
index e4bb03b..bcd059c 100644 (file)
@@ -855,9 +855,9 @@ static int prepare_keybuf(const u8 *key, unsigned int ksize,
 /* Generate a random length in range [0, max_len], but prefer smaller values */
 static unsigned int generate_random_length(unsigned int max_len)
 {
-       unsigned int len = prandom_u32() % (max_len + 1);
+       unsigned int len = prandom_u32_max(max_len + 1);
 
-       switch (prandom_u32() % 4) {
+       switch (prandom_u32_max(4)) {
        case 0:
                return len % 64;
        case 1:
@@ -874,14 +874,14 @@ static void flip_random_bit(u8 *buf, size_t size)
 {
        size_t bitpos;
 
-       bitpos = prandom_u32() % (size * 8);
+       bitpos = prandom_u32_max(size * 8);
        buf[bitpos / 8] ^= 1 << (bitpos % 8);
 }
 
 /* Flip a random byte in the given nonempty data buffer */
 static void flip_random_byte(u8 *buf, size_t size)
 {
-       buf[prandom_u32() % size] ^= 0xff;
+       buf[prandom_u32_max(size)] ^= 0xff;
 }
 
 /* Sometimes make some random changes to the given nonempty data buffer */
@@ -891,15 +891,15 @@ static void mutate_buffer(u8 *buf, size_t size)
        size_t i;
 
        /* Sometimes flip some bits */
-       if (prandom_u32() % 4 == 0) {
-               num_flips = min_t(size_t, 1 << (prandom_u32() % 8), size * 8);
+       if (prandom_u32_max(4) == 0) {
+               num_flips = min_t(size_t, 1 << prandom_u32_max(8), size * 8);
                for (i = 0; i < num_flips; i++)
                        flip_random_bit(buf, size);
        }
 
        /* Sometimes flip some bytes */
-       if (prandom_u32() % 4 == 0) {
-               num_flips = min_t(size_t, 1 << (prandom_u32() % 8), size);
+       if (prandom_u32_max(4) == 0) {
+               num_flips = min_t(size_t, 1 << prandom_u32_max(8), size);
                for (i = 0; i < num_flips; i++)
                        flip_random_byte(buf, size);
        }
@@ -915,11 +915,11 @@ static void generate_random_bytes(u8 *buf, size_t count)
        if (count == 0)
                return;
 
-       switch (prandom_u32() % 8) { /* Choose a generation strategy */
+       switch (prandom_u32_max(8)) { /* Choose a generation strategy */
        case 0:
        case 1:
                /* All the same byte, plus optional mutations */
-               switch (prandom_u32() % 4) {
+               switch (prandom_u32_max(4)) {
                case 0:
                        b = 0x00;
                        break;
@@ -927,7 +927,7 @@ static void generate_random_bytes(u8 *buf, size_t count)
                        b = 0xff;
                        break;
                default:
-                       b = (u8)prandom_u32();
+                       b = get_random_u8();
                        break;
                }
                memset(buf, b, count);
@@ -935,8 +935,8 @@ static void generate_random_bytes(u8 *buf, size_t count)
                break;
        case 2:
                /* Ascending or descending bytes, plus optional mutations */
-               increment = (u8)prandom_u32();
-               b = (u8)prandom_u32();
+               increment = get_random_u8();
+               b = get_random_u8();
                for (i = 0; i < count; i++, b += increment)
                        buf[i] = b;
                mutate_buffer(buf, count);
@@ -944,7 +944,7 @@ static void generate_random_bytes(u8 *buf, size_t count)
        default:
                /* Fully random bytes */
                for (i = 0; i < count; i++)
-                       buf[i] = (u8)prandom_u32();
+                       buf[i] = get_random_u8();
        }
 }
 
@@ -959,24 +959,24 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
                unsigned int this_len;
                const char *flushtype_str;
 
-               if (div == &divs[max_divs - 1] || prandom_u32() % 2 == 0)
+               if (div == &divs[max_divs - 1] || prandom_u32_max(2) == 0)
                        this_len = remaining;
                else
-                       this_len = 1 + (prandom_u32() % remaining);
+                       this_len = 1 + prandom_u32_max(remaining);
                div->proportion_of_total = this_len;
 
-               if (prandom_u32() % 4 == 0)
-                       div->offset = (PAGE_SIZE - 128) + (prandom_u32() % 128);
-               else if (prandom_u32() % 2 == 0)
-                       div->offset = prandom_u32() % 32;
+               if (prandom_u32_max(4) == 0)
+                       div->offset = (PAGE_SIZE - 128) + prandom_u32_max(128);
+               else if (prandom_u32_max(2) == 0)
+                       div->offset = prandom_u32_max(32);
                else
-                       div->offset = prandom_u32() % PAGE_SIZE;
-               if (prandom_u32() % 8 == 0)
+                       div->offset = prandom_u32_max(PAGE_SIZE);
+               if (prandom_u32_max(8) == 0)
                        div->offset_relative_to_alignmask = true;
 
                div->flush_type = FLUSH_TYPE_NONE;
                if (gen_flushes) {
-                       switch (prandom_u32() % 4) {
+                       switch (prandom_u32_max(4)) {
                        case 0:
                                div->flush_type = FLUSH_TYPE_REIMPORT;
                                break;
@@ -988,7 +988,7 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
 
                if (div->flush_type != FLUSH_TYPE_NONE &&
                    !(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
-                   prandom_u32() % 2 == 0)
+                   prandom_u32_max(2) == 0)
                        div->nosimd = true;
 
                switch (div->flush_type) {
@@ -1035,7 +1035,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 
        p += scnprintf(p, end - p, "random:");
 
-       switch (prandom_u32() % 4) {
+       switch (prandom_u32_max(4)) {
        case 0:
        case 1:
                cfg->inplace_mode = OUT_OF_PLACE;
@@ -1050,12 +1050,12 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
                break;
        }
 
-       if (prandom_u32() % 2 == 0) {
+       if (prandom_u32_max(2) == 0) {
                cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
                p += scnprintf(p, end - p, " may_sleep");
        }
 
-       switch (prandom_u32() % 4) {
+       switch (prandom_u32_max(4)) {
        case 0:
                cfg->finalization_type = FINALIZATION_TYPE_FINAL;
                p += scnprintf(p, end - p, " use_final");
@@ -1071,7 +1071,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
        }
 
        if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
-           prandom_u32() % 2 == 0) {
+           prandom_u32_max(2) == 0) {
                cfg->nosimd = true;
                p += scnprintf(p, end - p, " nosimd");
        }
@@ -1084,7 +1084,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
                                          cfg->req_flags);
        p += scnprintf(p, end - p, "]");
 
-       if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32() % 2 == 0) {
+       if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32_max(2) == 0) {
                p += scnprintf(p, end - p, " dst_divs=[");
                p = generate_random_sgl_divisions(cfg->dst_divs,
                                                  ARRAY_SIZE(cfg->dst_divs),
@@ -1093,13 +1093,13 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
                p += scnprintf(p, end - p, "]");
        }
 
-       if (prandom_u32() % 2 == 0) {
-               cfg->iv_offset = 1 + (prandom_u32() % MAX_ALGAPI_ALIGNMASK);
+       if (prandom_u32_max(2) == 0) {
+               cfg->iv_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
                p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
        }
 
-       if (prandom_u32() % 2 == 0) {
-               cfg->key_offset = 1 + (prandom_u32() % MAX_ALGAPI_ALIGNMASK);
+       if (prandom_u32_max(2) == 0) {
+               cfg->key_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
                p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset);
        }
 
@@ -1652,8 +1652,8 @@ static void generate_random_hash_testvec(struct shash_desc *desc,
        vec->ksize = 0;
        if (maxkeysize) {
                vec->ksize = maxkeysize;
-               if (prandom_u32() % 4 == 0)
-                       vec->ksize = 1 + (prandom_u32() % maxkeysize);
+               if (prandom_u32_max(4) == 0)
+                       vec->ksize = 1 + prandom_u32_max(maxkeysize);
                generate_random_bytes((u8 *)vec->key, vec->ksize);
 
                vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
@@ -2218,13 +2218,13 @@ static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
        const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
        const unsigned int authsize = vec->clen - vec->plen;
 
-       if (prandom_u32() % 2 == 0 && vec->alen > aad_tail_size) {
+       if (prandom_u32_max(2) == 0 && vec->alen > aad_tail_size) {
                 /* Mutate the AAD */
                flip_random_bit((u8 *)vec->assoc, vec->alen - aad_tail_size);
-               if (prandom_u32() % 2 == 0)
+               if (prandom_u32_max(2) == 0)
                        return;
        }
-       if (prandom_u32() % 2 == 0) {
+       if (prandom_u32_max(2) == 0) {
                /* Mutate auth tag (assuming it's at the end of ciphertext) */
                flip_random_bit((u8 *)vec->ctext + vec->plen, authsize);
        } else {
@@ -2249,7 +2249,7 @@ static void generate_aead_message(struct aead_request *req,
        const unsigned int ivsize = crypto_aead_ivsize(tfm);
        const unsigned int authsize = vec->clen - vec->plen;
        const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) &&
-                                (prefer_inauthentic || prandom_u32() % 4 == 0);
+                                (prefer_inauthentic || prandom_u32_max(4) == 0);
 
        /* Generate the AAD. */
        generate_random_bytes((u8 *)vec->assoc, vec->alen);
@@ -2257,7 +2257,7 @@ static void generate_aead_message(struct aead_request *req,
                /* Avoid implementation-defined behavior. */
                memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);
 
-       if (inauthentic && prandom_u32() % 2 == 0) {
+       if (inauthentic && prandom_u32_max(2) == 0) {
                /* Generate a random ciphertext. */
                generate_random_bytes((u8 *)vec->ctext, vec->clen);
        } else {
@@ -2321,8 +2321,8 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
        /* Key: length in [0, maxkeysize], but usually choose maxkeysize */
        vec->klen = maxkeysize;
-       if (prandom_u32() % 4 == 0)
-               vec->klen = prandom_u32() % (maxkeysize + 1);
+       if (prandom_u32_max(4) == 0)
+               vec->klen = prandom_u32_max(maxkeysize + 1);
        generate_random_bytes((u8 *)vec->key, vec->klen);
        vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen);
 
@@ -2331,8 +2331,8 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
        /* Tag length: in [0, maxauthsize], but usually choose maxauthsize */
        authsize = maxauthsize;
-       if (prandom_u32() % 4 == 0)
-               authsize = prandom_u32() % (maxauthsize + 1);
+       if (prandom_u32_max(4) == 0)
+               authsize = prandom_u32_max(maxauthsize + 1);
        if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE)
                authsize = MIN_COLLISION_FREE_AUTHSIZE;
        if (WARN_ON(authsize > maxdatasize))
@@ -2342,7 +2342,7 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
        /* AAD, plaintext, and ciphertext lengths */
        total_len = generate_random_length(maxdatasize);
-       if (prandom_u32() % 4 == 0)
+       if (prandom_u32_max(4) == 0)
                vec->alen = 0;
        else
                vec->alen = generate_random_length(total_len);
@@ -2958,8 +2958,8 @@ static void generate_random_cipher_testvec(struct skcipher_request *req,
 
        /* Key: length in [0, maxkeysize], but usually choose maxkeysize */
        vec->klen = maxkeysize;
-       if (prandom_u32() % 4 == 0)
-               vec->klen = prandom_u32() % (maxkeysize + 1);
+       if (prandom_u32_max(4) == 0)
+               vec->klen = prandom_u32_max(maxkeysize + 1);
        generate_random_bytes((u8 *)vec->key, vec->klen);
        vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen);
 
index 72f1fb7..e648158 100644 (file)
@@ -12,6 +12,7 @@
 #include <linux/ratelimit.h>
 #include <linux/edac.h>
 #include <linux/ras.h>
+#include <acpi/ghes.h>
 #include <asm/cpu.h>
 #include <asm/mce.h>
 
@@ -138,8 +139,8 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
        int     cpu = mce->extcpu;
        struct acpi_hest_generic_status *estatus, *tmp;
        struct acpi_hest_generic_data *gdata;
-       const guid_t *fru_id = &guid_null;
-       char *fru_text = "";
+       const guid_t *fru_id;
+       char *fru_text;
        guid_t *sec_type;
        static u32 err_seq;
 
@@ -160,17 +161,23 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
 
        /* log event via trace */
        err_seq++;
-       gdata = (struct acpi_hest_generic_data *)(tmp + 1);
-       if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
-               fru_id = (guid_t *)gdata->fru_id;
-       if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
-               fru_text = gdata->fru_text;
-       sec_type = (guid_t *)gdata->section_type;
-       if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
-               struct cper_sec_mem_err *mem = (void *)(gdata + 1);
-               if (gdata->error_data_length >= sizeof(*mem))
-                       trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
-                                              (u8)gdata->error_severity);
+       apei_estatus_for_each_section(tmp, gdata) {
+               if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID)
+                       fru_id = (guid_t *)gdata->fru_id;
+               else
+                       fru_id = &guid_null;
+               if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
+                       fru_text = gdata->fru_text;
+               else
+                       fru_text = "";
+               sec_type = (guid_t *)gdata->section_type;
+               if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
+                       struct cper_sec_mem_err *mem = (void *)(gdata + 1);
+
+                       if (gdata->error_data_length >= sizeof(*mem))
+                               trace_extlog_mem_event(mem, err_seq, fru_id, fru_text,
+                                                      (u8)gdata->error_severity);
+               }
        }
 
 out:
index 80ad530..9952f3a 100644 (file)
@@ -163,7 +163,7 @@ static void ghes_unmap(void __iomem *vaddr, enum fixed_addresses fixmap_idx)
        clear_fixmap(fixmap_idx);
 }
 
-int ghes_estatus_pool_init(int num_ghes)
+int ghes_estatus_pool_init(unsigned int num_ghes)
 {
        unsigned long addr, len;
        int rc;
index ca2aed8..8059baf 100644 (file)
@@ -1142,7 +1142,8 @@ static void iort_iommu_msi_get_resv_regions(struct device *dev,
                        struct iommu_resv_region *region;
 
                        region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K,
-                                                        prot, IOMMU_RESV_MSI);
+                                                        prot, IOMMU_RESV_MSI,
+                                                        GFP_KERNEL);
                        if (region)
                                list_add_tail(&region->list, head);
                }
index c8385ef..4e3db20 100644 (file)
@@ -323,6 +323,7 @@ struct pci_dev *acpi_get_pci_dev(acpi_handle handle)
 
        list_for_each_entry(pn, &adev->physical_node_list, node) {
                if (dev_is_pci(pn->dev)) {
+                       get_device(pn->dev);
                        pci_dev = to_pci_dev(pn->dev);
                        break;
                }
index 6f9489e..78c2804 100644 (file)
@@ -428,17 +428,31 @@ static const struct dmi_system_id asus_laptop[] = {
        { }
 };
 
+static const struct dmi_system_id lenovo_82ra[] = {
+       {
+               .ident = "LENOVO IdeaPad Flex 5 16ALC7",
+               .matches = {
+                       DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+                       DMI_MATCH(DMI_PRODUCT_NAME, "82RA"),
+               },
+       },
+       { }
+};
+
 struct irq_override_cmp {
        const struct dmi_system_id *system;
        unsigned char irq;
        unsigned char triggering;
        unsigned char polarity;
        unsigned char shareable;
+       bool override;
 };
 
-static const struct irq_override_cmp skip_override_table[] = {
-       { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
-       { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
+static const struct irq_override_cmp override_table[] = {
+       { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+       { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+       { lenovo_82ra, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+       { lenovo_82ra, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
 };
 
 static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
@@ -446,6 +460,17 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
 {
        int i;
 
+       for (i = 0; i < ARRAY_SIZE(override_table); i++) {
+               const struct irq_override_cmp *entry = &override_table[i];
+
+               if (dmi_check_system(entry->system) &&
+                   entry->irq == gsi &&
+                   entry->triggering == triggering &&
+                   entry->polarity == polarity &&
+                   entry->shareable == shareable)
+                       return entry->override;
+       }
+
 #ifdef CONFIG_X86
        /*
         * IRQ override isn't needed on modern AMD Zen systems and
@@ -456,17 +481,6 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
                return false;
 #endif
 
-       for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
-               const struct irq_override_cmp *entry = &skip_override_table[i];
-
-               if (dmi_check_system(entry->system) &&
-                   entry->irq == gsi &&
-                   entry->triggering == triggering &&
-                   entry->polarity == polarity &&
-                   entry->shareable == shareable)
-                       return false;
-       }
-
        return true;
 }
 
@@ -498,8 +512,11 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
                u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
 
                if (triggering != trig || polarity != pol) {
-                       pr_warn("ACPI: IRQ %d override to %s, %s\n", gsi,
-                               t ? "level" : "edge", p ? "low" : "high");
+                       pr_warn("ACPI: IRQ %d override to %s%s, %s%s\n", gsi,
+                               t ? "level" : "edge",
+                               trig == triggering ? "" : "(!)",
+                               p ? "low" : "high",
+                               pol == polarity ? "" : "(!)");
                        triggering = trig;
                        polarity = pol;
                }
index 558664d..024cc37 100644 (file)
@@ -1509,9 +1509,12 @@ int acpi_dma_get_range(struct device *dev, const struct bus_dma_region **map)
                        goto out;
                }
 
+               *map = r;
+
                list_for_each_entry(rentry, &list, node) {
                        if (rentry->res->start >= rentry->res->end) {
-                               kfree(r);
+                               kfree(*map);
+                               *map = NULL;
                                ret = -EINVAL;
                                dev_dbg(dma_dev, "Invalid DMA regions configuration\n");
                                goto out;
@@ -1523,8 +1526,6 @@ int acpi_dma_get_range(struct device *dev, const struct bus_dma_region **map)
                        r->offset = rentry->offset;
                        r++;
                }
-
-               *map = r;
        }
  out:
        acpi_dev_free_resource_list(&list);
index da7ee8b..7add8e7 100644 (file)
@@ -257,7 +257,7 @@ enum {
        PCS_7                           = 0x94, /* 7+ port PCS (Denverton) */
 
        /* em constants */
-       EM_MAX_SLOTS                    = 8,
+       EM_MAX_SLOTS                    = SATA_PMP_MAX_PORTS,
        EM_MAX_RETRY                    = 5,
 
        /* em_ctl bits */
index f61795c..6f216eb 100644 (file)
@@ -448,7 +448,7 @@ static int brcm_ahci_probe(struct platform_device *pdev)
        if (!of_id)
                return -ENODEV;
 
-       priv->version = (enum brcm_ahci_version)of_id->data;
+       priv->version = (unsigned long)of_id->data;
        priv->dev = dev;
 
        res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "top-ctrl");
index b734e06..a950767 100644 (file)
@@ -1067,7 +1067,7 @@ static int imx_ahci_probe(struct platform_device *pdev)
        imxpriv->ahci_pdev = pdev;
        imxpriv->no_device = false;
        imxpriv->first_time = true;
-       imxpriv->type = (enum ahci_imx_type)of_id->data;
+       imxpriv->type = (unsigned long)of_id->data;
 
        imxpriv->sata_clk = devm_clk_get(dev, "sata");
        if (IS_ERR(imxpriv->sata_clk)) {
@@ -1235,4 +1235,4 @@ module_platform_driver(imx_ahci_driver);
 MODULE_DESCRIPTION("Freescale i.MX AHCI SATA platform driver");
 MODULE_AUTHOR("Richard Zhu <Hong-Xing.Zhu@freescale.com>");
 MODULE_LICENSE("GPL");
-MODULE_ALIAS("ahci:imx");
+MODULE_ALIAS("platform:" DRV_NAME);
index 6cd6184..9cf9bf3 100644 (file)
@@ -280,7 +280,7 @@ static int ahci_qoriq_probe(struct platform_device *pdev)
                return -ENOMEM;
 
        if (of_id)
-               qoriq_priv->type = (enum ahci_qoriq_type)of_id->data;
+               qoriq_priv->type = (unsigned long)of_id->data;
        else
                qoriq_priv->type = (enum ahci_qoriq_type)acpi_id->driver_data;
 
index 5a2cac6..8607b68 100644 (file)
@@ -236,7 +236,7 @@ static struct platform_driver st_ahci_driver = {
        .driver = {
                .name = DRV_NAME,
                .pm = &st_ahci_pm_ops,
-               .of_match_table = of_match_ptr(st_ahci_match),
+               .of_match_table = st_ahci_match,
        },
        .probe = st_ahci_probe,
        .remove = ata_platform_remove_one,
index 7bb5db1..1e08704 100644 (file)
@@ -785,7 +785,7 @@ static int xgene_ahci_probe(struct platform_device *pdev)
        of_devid = of_match_device(xgene_ahci_of_match, dev);
        if (of_devid) {
                if (of_devid->data)
-                       version = (enum xgene_ahci_version) of_devid->data;
+                       version = (unsigned long) of_devid->data;
        }
 #ifdef CONFIG_ACPI
        else {
index 590ebea..0195eb2 100644 (file)
@@ -875,7 +875,7 @@ static int sata_rcar_probe(struct platform_device *pdev)
        if (!priv)
                return -ENOMEM;
 
-       priv->type = (enum sata_rcar_type)of_device_get_match_data(dev);
+       priv->type = (unsigned long)of_device_get_match_data(dev);
 
        pm_runtime_enable(dev);
        ret = pm_runtime_get_sync(dev);
index c897c45..ee69d50 100644 (file)
@@ -781,7 +781,7 @@ static struct socket *drbd_wait_for_connect(struct drbd_connection *connection,
 
        timeo = connect_int * HZ;
        /* 28.5% random jitter */
-       timeo += (prandom_u32() & 1) ? timeo / 7 : -timeo / 7;
+       timeo += prandom_u32_max(2) ? timeo / 7 : -timeo / 7;
 
        err = wait_for_completion_interruptible_timeout(&ad->door_bell, timeo);
        if (err <= 0)
@@ -1004,7 +1004,7 @@ retry:
                                drbd_warn(connection, "Error receiving initial packet\n");
                                sock_release(s);
 randomize:
-                               if (prandom_u32() & 1)
+                               if (prandom_u32_max(2))
                                        goto retry;
                        }
                }
index 8f7f144..7f9bcc8 100644 (file)
@@ -30,11 +30,6 @@ static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio
                return NULL;
        memset(req, 0, sizeof(*req));
 
-       req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio_src,
-                                          GFP_NOIO, &drbd_io_bio_set);
-       req->private_bio->bi_private = req;
-       req->private_bio->bi_end_io = drbd_request_endio;
-
        req->rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0)
                      | (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0)
                      | (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0);
@@ -1219,9 +1214,12 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio)
        /* Update disk stats */
        req->start_jif = bio_start_io_acct(req->master_bio);
 
-       if (!get_ldev(device)) {
-               bio_put(req->private_bio);
-               req->private_bio = NULL;
+       if (get_ldev(device)) {
+               req->private_bio = bio_alloc_clone(device->ldev->backing_bdev,
+                                                  bio, GFP_NOIO,
+                                                  &drbd_io_bio_set);
+               req->private_bio->bi_private = req;
+               req->private_bio->bi_end_io = drbd_request_endio;
        }
 
        /* process discards always from our submitter thread */
index 2651bf4..5afce6f 100644 (file)
@@ -124,7 +124,7 @@ struct ublk_queue {
        bool force_abort;
        unsigned short nr_io_ready;     /* how many ios setup */
        struct ublk_device *dev;
-       struct ublk_io ios[0];
+       struct ublk_io ios[];
 };
 
 #define UBLK_DAEMON_MONITOR_PERIOD     (5 * HZ)
index 7c74d8c..966aab9 100644 (file)
@@ -52,9 +52,6 @@ static unsigned int num_devices = 1;
 static size_t huge_class_size;
 
 static const struct block_device_operations zram_devops;
-#ifdef CONFIG_ZRAM_WRITEBACK
-static const struct block_device_operations zram_wb_devops;
-#endif
 
 static void zram_free_page(struct zram *zram, size_t index);
 static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
@@ -546,17 +543,6 @@ static ssize_t backing_dev_store(struct device *dev,
        zram->backing_dev = backing_dev;
        zram->bitmap = bitmap;
        zram->nr_pages = nr_pages;
-       /*
-        * With writeback feature, zram does asynchronous IO so it's no longer
-        * synchronous device so let's remove synchronous io flag. Othewise,
-        * upper layer(e.g., swap) could wait IO completion rather than
-        * (submit and return), which will cause system sluggish.
-        * Furthermore, when the IO function returns(e.g., swap_readpage),
-        * upper layer expects IO was done so it could deallocate the page
-        * freely but in fact, IO is going on so finally could cause
-        * use-after-free when the IO is really done.
-        */
-       zram->disk->fops = &zram_wb_devops;
        up_write(&zram->init_lock);
 
        pr_info("setup backing device %s\n", file_name);
@@ -1270,6 +1256,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
                struct bio_vec bvec;
 
                zram_slot_unlock(zram, index);
+               /* A null bio means rw_page was used, we must fallback to bio */
+               if (!bio)
+                       return -EOPNOTSUPP;
 
                bvec.bv_page = page;
                bvec.bv_len = PAGE_SIZE;
@@ -1856,15 +1845,6 @@ static const struct block_device_operations zram_devops = {
        .owner = THIS_MODULE
 };
 
-#ifdef CONFIG_ZRAM_WRITEBACK
-static const struct block_device_operations zram_wb_devops = {
-       .open = zram_open,
-       .submit_bio = zram_submit_bio,
-       .swap_slot_free_notify = zram_slot_free_notify,
-       .owner = THIS_MODULE
-};
-#endif
-
 static DEVICE_ATTR_WO(compact);
 static DEVICE_ATTR_RW(disksize);
 static DEVICE_ATTR_RO(initstate);
index e7dd457..e98fcac 100644 (file)
@@ -71,7 +71,7 @@ static int bcm2835_rng_read(struct hwrng *rng, void *buf, size_t max,
        while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) {
                if (!wait)
                        return 0;
-               cpu_relax();
+               hwrng_msleep(rng, 1000);
        }
 
        num_words = rng_readl(priv, RNG_STATUS) >> 24;
index 01acf23..2fe28ee 100644 (file)
@@ -97,7 +97,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
  * Returns whether or not the input pool has been seeded and thus guaranteed
  * to supply cryptographically secure random numbers. This applies to: the
  * /dev/urandom device, the get_random_bytes function, and the get_random_{u8,
- * u16,u32,u64,int,long} family of functions.
+ * u16,u32,u64,long} family of functions.
  *
  * Returns: true if the input pool has been seeded.
  *          false if the input pool has not been seeded.
@@ -161,15 +161,14 @@ EXPORT_SYMBOL(wait_for_random_bytes);
  *     u16 get_random_u16()
  *     u32 get_random_u32()
  *     u64 get_random_u64()
- *     unsigned int get_random_int()
  *     unsigned long get_random_long()
  *
  * These interfaces will return the requested number of random bytes
  * into the given buffer or as a return value. This is equivalent to
- * a read from /dev/urandom. The u8, u16, u32, u64, int, and long
- * family of functions may be higher performance for one-off random
- * integers, because they do a bit of buffering and do not invoke
- * reseeding until the buffer is emptied.
+ * a read from /dev/urandom. The u8, u16, u32, u64, long family of
+ * functions may be higher performance for one-off random integers,
+ * because they do a bit of buffering and do not invoke reseeding
+ * until the buffer is emptied.
  *
  *********************************************************************/
 
index d429ba5..943ea67 100644 (file)
@@ -136,7 +136,6 @@ static int clk_generated_determine_rate(struct clk_hw *hw,
 {
        struct clk_generated *gck = to_clk_generated(hw);
        struct clk_hw *parent = NULL;
-       struct clk_rate_request req_parent = *req;
        long best_rate = -EINVAL;
        unsigned long min_rate, parent_rate;
        int best_diff = -1;
@@ -192,7 +191,9 @@ static int clk_generated_determine_rate(struct clk_hw *hw,
                goto end;
 
        for (div = 1; div < GENERATED_MAX_DIV + 2; div++) {
-               req_parent.rate = req->rate * div;
+               struct clk_rate_request req_parent;
+
+               clk_hw_forward_rate_request(hw, req, parent, &req_parent, req->rate * div);
                if (__clk_determine_rate(parent, &req_parent))
                        continue;
                clk_generated_best_diff(req, parent, req_parent.rate, div,
index 164e295..b7cd192 100644 (file)
@@ -581,7 +581,6 @@ static int clk_sama7g5_master_determine_rate(struct clk_hw *hw,
                                             struct clk_rate_request *req)
 {
        struct clk_master *master = to_clk_master(hw);
-       struct clk_rate_request req_parent = *req;
        struct clk_hw *parent;
        long best_rate = LONG_MIN, best_diff = LONG_MIN;
        unsigned long parent_rate;
@@ -618,11 +617,15 @@ static int clk_sama7g5_master_determine_rate(struct clk_hw *hw,
                goto end;
 
        for (div = 0; div < MASTER_PRES_MAX + 1; div++) {
+               struct clk_rate_request req_parent;
+               unsigned long req_rate;
+
                if (div == MASTER_PRES_MAX)
-                       req_parent.rate = req->rate * 3;
+                       req_rate = req->rate * 3;
                else
-                       req_parent.rate = req->rate << div;
+                       req_rate = req->rate << div;
 
+               clk_hw_forward_rate_request(hw, req, parent, &req_parent, req_rate);
                if (__clk_determine_rate(parent, &req_parent))
                        continue;
 
index e14fa5a..5104d40 100644 (file)
@@ -269,7 +269,6 @@ static int clk_sam9x5_peripheral_determine_rate(struct clk_hw *hw,
 {
        struct clk_sam9x5_peripheral *periph = to_clk_sam9x5_peripheral(hw);
        struct clk_hw *parent = clk_hw_get_parent(hw);
-       struct clk_rate_request req_parent = *req;
        unsigned long parent_rate = clk_hw_get_rate(parent);
        unsigned long tmp_rate;
        long best_rate = LONG_MIN;
@@ -302,8 +301,9 @@ static int clk_sam9x5_peripheral_determine_rate(struct clk_hw *hw,
                goto end;
 
        for (shift = 0; shift <= PERIPHERAL_MAX_SHIFT; shift++) {
-               req_parent.rate = req->rate << shift;
+               struct clk_rate_request req_parent;
 
+               clk_hw_forward_rate_request(hw, req, parent, &req_parent, req->rate << shift);
                if (__clk_determine_rate(parent, &req_parent))
                        continue;
 
index b9c5f90..edfa946 100644 (file)
@@ -85,10 +85,11 @@ static int clk_composite_determine_rate(struct clk_hw *hw,
                req->best_parent_hw = NULL;
 
                if (clk_hw_get_flags(hw) & CLK_SET_RATE_NO_REPARENT) {
-                       struct clk_rate_request tmp_req = *req;
+                       struct clk_rate_request tmp_req;
 
                        parent = clk_hw_get_parent(mux_hw);
 
+                       clk_hw_forward_rate_request(hw, req, parent, &tmp_req, req->rate);
                        ret = clk_composite_determine_rate_for_parent(rate_hw,
                                                                      &tmp_req,
                                                                      parent,
@@ -104,12 +105,13 @@ static int clk_composite_determine_rate(struct clk_hw *hw,
                }
 
                for (i = 0; i < clk_hw_get_num_parents(mux_hw); i++) {
-                       struct clk_rate_request tmp_req = *req;
+                       struct clk_rate_request tmp_req;
 
                        parent = clk_hw_get_parent_by_index(mux_hw, i);
                        if (!parent)
                                continue;
 
+                       clk_hw_forward_rate_request(hw, req, parent, &tmp_req, req->rate);
                        ret = clk_composite_determine_rate_for_parent(rate_hw,
                                                                      &tmp_req,
                                                                      parent,
index f6b2bf5..a2c2b52 100644
@@ -386,13 +386,13 @@ long divider_round_rate_parent(struct clk_hw *hw, struct clk_hw *parent,
                               const struct clk_div_table *table,
                               u8 width, unsigned long flags)
 {
-       struct clk_rate_request req = {
-               .rate = rate,
-               .best_parent_rate = *prate,
-               .best_parent_hw = parent,
-       };
+       struct clk_rate_request req;
        int ret;
 
+       clk_hw_init_rate_request(hw, &req, rate);
+       req.best_parent_rate = *prate;
+       req.best_parent_hw = parent;
+
        ret = divider_determine_rate(hw, &req, table, width, flags);
        if (ret)
                return ret;
@@ -408,13 +408,13 @@ long divider_ro_round_rate_parent(struct clk_hw *hw, struct clk_hw *parent,
                                  const struct clk_div_table *table, u8 width,
                                  unsigned long flags, unsigned int val)
 {
-       struct clk_rate_request req = {
-               .rate = rate,
-               .best_parent_rate = *prate,
-               .best_parent_hw = parent,
-       };
+       struct clk_rate_request req;
        int ret;
 
+       clk_hw_init_rate_request(hw, &req, rate);
+       req.best_parent_rate = *prate;
+       req.best_parent_hw = parent;
+
        ret = divider_ro_determine_rate(hw, &req, table, width, flags, val);
        if (ret)
                return ret;
index dd810bc..c3c3f8c 100644
@@ -536,6 +536,53 @@ static bool mux_is_better_rate(unsigned long rate, unsigned long now,
        return now <= rate && now > best;
 }
 
+static void clk_core_init_rate_req(struct clk_core * const core,
+                                  struct clk_rate_request *req,
+                                  unsigned long rate);
+
+static int clk_core_round_rate_nolock(struct clk_core *core,
+                                     struct clk_rate_request *req);
+
+static bool clk_core_has_parent(struct clk_core *core, const struct clk_core *parent)
+{
+       struct clk_core *tmp;
+       unsigned int i;
+
+       /* Optimize for the case where the requested parent is already the parent. */
+       if (core->parent == parent)
+               return true;
+
+       for (i = 0; i < core->num_parents; i++) {
+               tmp = clk_core_get_parent_by_index(core, i);
+               if (!tmp)
+                       continue;
+
+               if (tmp == parent)
+                       return true;
+       }
+
+       return false;
+}
+
+static void
+clk_core_forward_rate_req(struct clk_core *core,
+                         const struct clk_rate_request *old_req,
+                         struct clk_core *parent,
+                         struct clk_rate_request *req,
+                         unsigned long parent_rate)
+{
+       if (WARN_ON(!clk_core_has_parent(core, parent)))
+               return;
+
+       clk_core_init_rate_req(parent, req, parent_rate);
+
+       if (req->min_rate < old_req->min_rate)
+               req->min_rate = old_req->min_rate;
+
+       if (req->max_rate > old_req->max_rate)
+               req->max_rate = old_req->max_rate;
+}
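Forwarding a request to a parent keeps the tighter of the two boundary sets: the parent starts from its own limits, then each bound is narrowed to what the child request already demanded. A userspace sketch of that policy (struct and function names are hypothetical, not the kernel API; the parent is assumed unconstrained here):

```c
#include <assert.h>

struct rate_req {
	unsigned long rate;
	unsigned long min_rate;
	unsigned long max_rate;
};

static void forward_req(const struct rate_req *old_req,
			struct rate_req *req, unsigned long parent_rate)
{
	/* Start from the parent's own (here: unconstrained) boundaries. */
	req->rate = parent_rate;
	req->min_rate = 0;
	req->max_rate = ~0UL;

	/* Tighten each bound to what the child request already demanded. */
	if (req->min_rate < old_req->min_rate)
		req->min_rate = old_req->min_rate;
	if (req->max_rate > old_req->max_rate)
		req->max_rate = old_req->max_rate;
}
```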
+
 int clk_mux_determine_rate_flags(struct clk_hw *hw,
                                 struct clk_rate_request *req,
                                 unsigned long flags)
@@ -543,14 +590,20 @@ int clk_mux_determine_rate_flags(struct clk_hw *hw,
        struct clk_core *core = hw->core, *parent, *best_parent = NULL;
        int i, num_parents, ret;
        unsigned long best = 0;
-       struct clk_rate_request parent_req = *req;
 
        /* if NO_REPARENT flag set, pass through to current parent */
        if (core->flags & CLK_SET_RATE_NO_REPARENT) {
                parent = core->parent;
                if (core->flags & CLK_SET_RATE_PARENT) {
-                       ret = __clk_determine_rate(parent ? parent->hw : NULL,
-                                                  &parent_req);
+                       struct clk_rate_request parent_req;
+
+                       if (!parent) {
+                               req->rate = 0;
+                               return 0;
+                       }
+
+                       clk_core_forward_rate_req(core, req, parent, &parent_req, req->rate);
+                       ret = clk_core_round_rate_nolock(parent, &parent_req);
                        if (ret)
                                return ret;
 
@@ -567,23 +620,29 @@ int clk_mux_determine_rate_flags(struct clk_hw *hw,
        /* find the parent that can provide the fastest rate <= rate */
        num_parents = core->num_parents;
        for (i = 0; i < num_parents; i++) {
+               unsigned long parent_rate;
+
                parent = clk_core_get_parent_by_index(core, i);
                if (!parent)
                        continue;
 
                if (core->flags & CLK_SET_RATE_PARENT) {
-                       parent_req = *req;
-                       ret = __clk_determine_rate(parent->hw, &parent_req);
+                       struct clk_rate_request parent_req;
+
+                       clk_core_forward_rate_req(core, req, parent, &parent_req, req->rate);
+                       ret = clk_core_round_rate_nolock(parent, &parent_req);
                        if (ret)
                                continue;
+
+                       parent_rate = parent_req.rate;
                } else {
-                       parent_req.rate = clk_core_get_rate_nolock(parent);
+                       parent_rate = clk_core_get_rate_nolock(parent);
                }
 
-               if (mux_is_better_rate(req->rate, parent_req.rate,
+               if (mux_is_better_rate(req->rate, parent_rate,
                                       best, flags)) {
                        best_parent = parent;
-                       best = parent_req.rate;
+                       best = parent_rate;
                }
        }
 
@@ -625,6 +684,22 @@ static void clk_core_get_boundaries(struct clk_core *core,
                *max_rate = min(*max_rate, clk_user->max_rate);
 }
 
+/**
+ * clk_hw_get_rate_range() - returns the clock rate range for a hw clk
+ * @hw: the hw clk we want to get the range from
+ * @min_rate: pointer to the variable that will hold the minimum
+ * @max_rate: pointer to the variable that will hold the maximum
+ *
+ * Fills the @min_rate and @max_rate variables with the minimum and
+ * maximum that the clock can reach.
+ */
+void clk_hw_get_rate_range(struct clk_hw *hw, unsigned long *min_rate,
+                          unsigned long *max_rate)
+{
+       clk_core_get_boundaries(hw->core, min_rate, max_rate);
+}
+EXPORT_SYMBOL_GPL(clk_hw_get_rate_range);
+
 static bool clk_core_check_boundaries(struct clk_core *core,
                                      unsigned long min_rate,
                                      unsigned long max_rate)
@@ -1340,7 +1415,19 @@ static int clk_core_determine_round_nolock(struct clk_core *core,
        if (!core)
                return 0;
 
-       req->rate = clamp(req->rate, req->min_rate, req->max_rate);
+       /*
+        * Some clock providers hand-craft their clk_rate_requests and
+        * might not fill min_rate and max_rate.
+        *
+        * If that is the case, clamping the rate is equivalent to setting
+        * the rate to 0, which is bad. Skip the clamping but complain so
+        * that it gets fixed, hopefully.
+        */
+       if (!req->min_rate && !req->max_rate)
+               pr_warn("%s: %s: clk_rate_request has uninitialized min and max rate.\n",
+                       __func__, core->name);
+       else
+               req->rate = clamp(req->rate, req->min_rate, req->max_rate);
 
        /*
         * At this point, core protection will be disabled
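The guard above exists because a zero-initialized, hand-crafted request would otherwise clamp every rate to zero. A minimal userspace illustration (`clamp` here is a local macro standing in for the kernel's, and `maybe_clamp` is a hypothetical helper):

```c
#include <assert.h>

/* Local stand-in for the kernel's clamp() macro. */
#define clamp(val, lo, hi) ((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

/*
 * Sketch of the guard: only clamp when the boundaries were actually
 * initialized; a zeroed (min, max) pair means the request was hand-crafted.
 */
static unsigned long maybe_clamp(unsigned long rate,
				 unsigned long min_rate, unsigned long max_rate)
{
	if (!min_rate && !max_rate)
		return rate;

	return clamp(rate, min_rate, max_rate);
}
```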
@@ -1367,13 +1454,19 @@ static int clk_core_determine_round_nolock(struct clk_core *core,
 }
 
 static void clk_core_init_rate_req(struct clk_core * const core,
-                                  struct clk_rate_request *req)
+                                  struct clk_rate_request *req,
+                                  unsigned long rate)
 {
        struct clk_core *parent;
 
        if (WARN_ON(!core || !req))
                return;
 
+       memset(req, 0, sizeof(*req));
+
+       req->rate = rate;
+       clk_core_get_boundaries(core, &req->min_rate, &req->max_rate);
+
        parent = core->parent;
        if (parent) {
                req->best_parent_hw = parent->hw;
@@ -1384,6 +1477,51 @@ static void clk_core_init_rate_req(struct clk_core * const core,
        }
 }
 
+/**
+ * clk_hw_init_rate_request - Initializes a clk_rate_request
+ * @hw: the clk for which we want to submit a rate request
+ * @req: the clk_rate_request structure we want to initialise
+ * @rate: the rate which is to be requested
+ *
+ * Initializes a clk_rate_request structure to submit to
+ * __clk_determine_rate() or similar functions.
+ */
+void clk_hw_init_rate_request(const struct clk_hw *hw,
+                             struct clk_rate_request *req,
+                             unsigned long rate)
+{
+       if (WARN_ON(!hw || !req))
+               return;
+
+       clk_core_init_rate_req(hw->core, req, rate);
+}
+EXPORT_SYMBOL_GPL(clk_hw_init_rate_request);
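The contract the helper gives providers is: the request is zeroed first, then the rate and the clock's own boundaries are filled in, so no stale stack contents leak into a later determine_rate call. A userspace sketch of that contract (types, names, and the boundary arguments are hypothetical stand-ins):

```c
#include <assert.h>
#include <string.h>

struct fake_rate_req {
	unsigned long rate;
	unsigned long min_rate;
	unsigned long max_rate;
	unsigned long best_parent_rate;
};

static void fake_init_rate_req(struct fake_rate_req *req, unsigned long rate,
			       unsigned long clk_min, unsigned long clk_max)
{
	/* Stale stack contents must not leak into the request. */
	memset(req, 0, sizeof(*req));

	req->rate = rate;
	req->min_rate = clk_min;
	req->max_rate = clk_max;
}
```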
+
+/**
+ * clk_hw_forward_rate_request - Forwards a clk_rate_request to a clock's parent
+ * @hw: the original clock that got the rate request
+ * @old_req: the original clk_rate_request structure we want to forward
+ * @parent: the clk we want to forward @old_req to
+ * @req: the clk_rate_request structure we want to initialise
+ * @parent_rate: The rate which is to be requested to @parent
+ *
+ * Initializes a clk_rate_request structure to submit to a clock parent
+ * in __clk_determine_rate() or similar functions.
+ */
+void clk_hw_forward_rate_request(const struct clk_hw *hw,
+                                const struct clk_rate_request *old_req,
+                                const struct clk_hw *parent,
+                                struct clk_rate_request *req,
+                                unsigned long parent_rate)
+{
+       if (WARN_ON(!hw || !old_req || !parent || !req))
+               return;
+
+       clk_core_forward_rate_req(hw->core, old_req,
+                                 parent->core, req,
+                                 parent_rate);
+}
+
 static bool clk_core_can_round(struct clk_core * const core)
 {
        return core->ops->determine_rate || core->ops->round_rate;
@@ -1392,6 +1530,8 @@ static bool clk_core_can_round(struct clk_core * const core)
 static int clk_core_round_rate_nolock(struct clk_core *core,
                                      struct clk_rate_request *req)
 {
+       int ret;
+
        lockdep_assert_held(&prepare_lock);
 
        if (!core) {
@@ -1399,12 +1539,22 @@ static int clk_core_round_rate_nolock(struct clk_core *core,
                return 0;
        }
 
-       clk_core_init_rate_req(core, req);
-
        if (clk_core_can_round(core))
                return clk_core_determine_round_nolock(core, req);
-       else if (core->flags & CLK_SET_RATE_PARENT)
-               return clk_core_round_rate_nolock(core->parent, req);
+
+       if (core->flags & CLK_SET_RATE_PARENT) {
+               struct clk_rate_request parent_req;
+
+               clk_core_forward_rate_req(core, req, core->parent, &parent_req, req->rate);
+               ret = clk_core_round_rate_nolock(core->parent, &parent_req);
+               if (ret)
+                       return ret;
+
+               req->best_parent_rate = parent_req.rate;
+               req->rate = parent_req.rate;
+
+               return 0;
+       }
 
        req->rate = core->rate;
        return 0;
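The new CLK_SET_RATE_PARENT branch builds a fresh request for the parent and copies the parent's answer back into both `rate` and `best_parent_rate`. A userspace sketch of that pass-through behavior, with a parent that rounds down to a multiple of its resolution (all names and the 25 MHz step are hypothetical):

```c
#include <assert.h>

struct fake_req {
	unsigned long rate;
	unsigned long best_parent_rate;
};

/* The parent rounds down to a multiple of its resolution. */
static unsigned long parent_round(unsigned long rate)
{
	return rate - (rate % 25000000UL);
}

static int passthrough_round(struct fake_req *req)
{
	/* Build a fresh request for the parent from the child's rate. */
	struct fake_req parent_req = { .rate = req->rate };

	parent_req.rate = parent_round(parent_req.rate);

	/* Copy the parent's result back into the original request. */
	req->best_parent_rate = parent_req.rate;
	req->rate = parent_req.rate;

	return 0;
}
```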
@@ -1448,8 +1598,7 @@ unsigned long clk_hw_round_rate(struct clk_hw *hw, unsigned long rate)
        int ret;
        struct clk_rate_request req;
 
-       clk_core_get_boundaries(hw->core, &req.min_rate, &req.max_rate);
-       req.rate = rate;
+       clk_core_init_rate_req(hw->core, &req, rate);
 
        ret = clk_core_round_rate_nolock(hw->core, &req);
        if (ret)
@@ -1481,8 +1630,7 @@ long clk_round_rate(struct clk *clk, unsigned long rate)
        if (clk->exclusive_count)
                clk_core_rate_unprotect(clk->core);
 
-       clk_core_get_boundaries(clk->core, &req.min_rate, &req.max_rate);
-       req.rate = rate;
+       clk_core_init_rate_req(clk->core, &req, rate);
 
        ret = clk_core_round_rate_nolock(clk->core, &req);
 
@@ -1611,6 +1759,7 @@ static unsigned long clk_recalc(struct clk_core *core,
 /**
  * __clk_recalc_rates
  * @core: first clk in the subtree
+ * @update_req: Whether req_rate should be updated with the new rate
  * @msg: notification type (see include/linux/clk.h)
  *
  * Walks the subtree of clks starting with clk and recalculates rates as it
@@ -1620,7 +1769,8 @@ static unsigned long clk_recalc(struct clk_core *core,
  * clk_recalc_rates also propagates the POST_RATE_CHANGE notification,
  * if necessary.
  */
-static void __clk_recalc_rates(struct clk_core *core, unsigned long msg)
+static void __clk_recalc_rates(struct clk_core *core, bool update_req,
+                              unsigned long msg)
 {
        unsigned long old_rate;
        unsigned long parent_rate = 0;
@@ -1634,6 +1784,8 @@ static void __clk_recalc_rates(struct clk_core *core, unsigned long msg)
                parent_rate = core->parent->rate;
 
        core->rate = clk_recalc(core, parent_rate);
+       if (update_req)
+               core->req_rate = core->rate;
 
        /*
         * ignore NOTIFY_STOP and NOTIFY_BAD return values for POST_RATE_CHANGE
@@ -1643,13 +1795,13 @@ static void __clk_recalc_rates(struct clk_core *core, unsigned long msg)
                __clk_notify(core, msg, old_rate, core->rate);
 
        hlist_for_each_entry(child, &core->children, child_node)
-               __clk_recalc_rates(child, msg);
+               __clk_recalc_rates(child, update_req, msg);
 }
 
 static unsigned long clk_core_get_rate_recalc(struct clk_core *core)
 {
        if (core && (core->flags & CLK_GET_RATE_NOCACHE))
-               __clk_recalc_rates(core, 0);
+               __clk_recalc_rates(core, false, 0);
 
        return clk_core_get_rate_nolock(core);
 }
@@ -1659,8 +1811,9 @@ static unsigned long clk_core_get_rate_recalc(struct clk_core *core)
  * @clk: the clk whose rate is being returned
  *
  * Simply returns the cached rate of the clk, unless CLK_GET_RATE_NOCACHE flag
- * is set, which means a recalc_rate will be issued.
- * If clk is NULL then returns 0.
+ * is set, which means a recalc_rate will be issued. Can be called regardless
+ * of whether the clock is enabled. If clk is NULL, or if an error occurred,
+ * then returns 0.
  */
 unsigned long clk_get_rate(struct clk *clk)
 {
@@ -1864,6 +2017,7 @@ static int __clk_set_parent(struct clk_core *core, struct clk_core *parent,
                flags = clk_enable_lock();
                clk_reparent(core, old_parent);
                clk_enable_unlock(flags);
+
                __clk_set_parent_after(core, old_parent, parent);
 
                return ret;
@@ -1969,11 +2123,7 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
        if (clk_core_can_round(core)) {
                struct clk_rate_request req;
 
-               req.rate = rate;
-               req.min_rate = min_rate;
-               req.max_rate = max_rate;
-
-               clk_core_init_rate_req(core, &req);
+               clk_core_init_rate_req(core, &req, rate);
 
                ret = clk_core_determine_round_nolock(core, &req);
                if (ret < 0)
@@ -2172,8 +2322,7 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
        if (cnt < 0)
                return cnt;
 
-       clk_core_get_boundaries(core, &req.min_rate, &req.max_rate);
-       req.rate = req_rate;
+       clk_core_init_rate_req(core, &req, req_rate);
 
        ret = clk_core_round_rate_nolock(core, &req);
 
@@ -2324,19 +2473,15 @@ int clk_set_rate_exclusive(struct clk *clk, unsigned long rate)
 }
 EXPORT_SYMBOL_GPL(clk_set_rate_exclusive);
 
-/**
- * clk_set_rate_range - set a rate range for a clock source
- * @clk: clock source
- * @min: desired minimum clock rate in Hz, inclusive
- * @max: desired maximum clock rate in Hz, inclusive
- *
- * Returns success (0) or negative errno.
- */
-int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+static int clk_set_rate_range_nolock(struct clk *clk,
+                                    unsigned long min,
+                                    unsigned long max)
 {
        int ret = 0;
        unsigned long old_min, old_max, rate;
 
+       lockdep_assert_held(&prepare_lock);
+
        if (!clk)
                return 0;
 
@@ -2349,8 +2494,6 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
                return -EINVAL;
        }
 
-       clk_prepare_lock();
-
        if (clk->exclusive_count)
                clk_core_rate_unprotect(clk->core);
 
@@ -2365,6 +2508,10 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
                goto out;
        }
 
+       rate = clk->core->req_rate;
+       if (clk->core->flags & CLK_GET_RATE_NOCACHE)
+               rate = clk_core_get_rate_recalc(clk->core);
+
        /*
         * Since the boundaries have been changed, let's give the
         * opportunity to the provider to adjust the clock rate based on
@@ -2382,7 +2529,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
         * - the determine_rate() callback does not really check for
         *   this corner case when determining the rate
         */
-       rate = clamp(clk->core->req_rate, min, max);
+       rate = clamp(rate, min, max);
        ret = clk_core_set_rate_nolock(clk->core, rate);
        if (ret) {
                /* rollback the changes */
@@ -2394,6 +2541,28 @@ out:
        if (clk->exclusive_count)
                clk_core_rate_protect(clk->core);
 
+       return ret;
+}
+
+/**
+ * clk_set_rate_range - set a rate range for a clock source
+ * @clk: clock source
+ * @min: desired minimum clock rate in Hz, inclusive
+ * @max: desired maximum clock rate in Hz, inclusive
+ *
+ * Return: 0 for success or negative errno on failure.
+ */
+int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+{
+       int ret;
+
+       if (!clk)
+               return 0;
+
+       clk_prepare_lock();
+
+       ret = clk_set_rate_range_nolock(clk, min, max);
+
        clk_prepare_unlock();
 
        return ret;
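After new boundaries are installed, the nolock helper re-sets the clock to its requested rate clamped into the new range. The clamp policy can be sketched in isolation (hypothetical helper, not the kernel API):

```c
#include <assert.h>

/*
 * Sketch of the range-update policy: the previously requested rate is
 * pulled into the new [min, max] window before being re-applied.
 */
static unsigned long apply_range(unsigned long req_rate,
				 unsigned long min, unsigned long max)
{
	if (req_rate < min)
		return min;
	if (req_rate > max)
		return max;
	return req_rate;
}
```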
@@ -2473,7 +2642,7 @@ static void clk_core_reparent(struct clk_core *core,
 {
        clk_reparent(core, new_parent);
        __clk_recalc_accuracies(core);
-       __clk_recalc_rates(core, POST_RATE_CHANGE);
+       __clk_recalc_rates(core, true, POST_RATE_CHANGE);
 }
 
 void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent)
@@ -2494,27 +2663,13 @@ void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent)
  *
  * Returns true if @parent is a possible parent for @clk, false otherwise.
  */
-bool clk_has_parent(struct clk *clk, struct clk *parent)
+bool clk_has_parent(const struct clk *clk, const struct clk *parent)
 {
-       struct clk_core *core, *parent_core;
-       int i;
-
        /* NULL clocks should be nops, so return success if either is NULL. */
        if (!clk || !parent)
                return true;
 
-       core = clk->core;
-       parent_core = parent->core;
-
-       /* Optimize for the case where the parent is already the parent. */
-       if (core->parent == parent_core)
-               return true;
-
-       for (i = 0; i < core->num_parents; i++)
-               if (!strcmp(core->parents[i].name, parent_core->name))
-                       return true;
-
-       return false;
+       return clk_core_has_parent(clk->core, parent->core);
 }
 EXPORT_SYMBOL_GPL(clk_has_parent);
 
@@ -2571,9 +2726,9 @@ static int clk_core_set_parent_nolock(struct clk_core *core,
 
        /* propagate rate and accuracy recalculation accordingly */
        if (ret) {
-               __clk_recalc_rates(core, ABORT_RATE_CHANGE);
+               __clk_recalc_rates(core, true, ABORT_RATE_CHANGE);
        } else {
-               __clk_recalc_rates(core, POST_RATE_CHANGE);
+               __clk_recalc_rates(core, true, POST_RATE_CHANGE);
                __clk_recalc_accuracies(core);
        }
 
@@ -3470,7 +3625,7 @@ static void clk_core_reparent_orphans_nolock(void)
                        __clk_set_parent_before(orphan, parent);
                        __clk_set_parent_after(orphan, parent, NULL);
                        __clk_recalc_accuracies(orphan);
-                       __clk_recalc_rates(orphan, 0);
+                       __clk_recalc_rates(orphan, true, 0);
 
                        /*
                         * __clk_init_parent() will set the initial req_rate to
@@ -4346,9 +4501,10 @@ void __clk_put(struct clk *clk)
        }
 
        hlist_del(&clk->clks_node);
-       if (clk->min_rate > clk->core->req_rate ||
-           clk->max_rate < clk->core->req_rate)
-               clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
+
+       /* If we had any boundaries on that clock, let's drop them. */
+       if (clk->min_rate > 0 || clk->max_rate < ULONG_MAX)
+               clk_set_rate_range_nolock(clk, 0, ULONG_MAX);
 
        owner = clk->core->owner;
        kref_put(&clk->core->ref, __clk_release);
index 6731a82..f9a5c29 100644
@@ -108,6 +108,39 @@ static const struct clk_ops clk_dummy_single_parent_ops = {
        .get_parent = clk_dummy_single_get_parent,
 };
 
+struct clk_multiple_parent_ctx {
+       struct clk_dummy_context parents_ctx[2];
+       struct clk_hw hw;
+       u8 current_parent;
+};
+
+static int clk_multiple_parents_mux_set_parent(struct clk_hw *hw, u8 index)
+{
+       struct clk_multiple_parent_ctx *ctx =
+               container_of(hw, struct clk_multiple_parent_ctx, hw);
+
+       if (index >= clk_hw_get_num_parents(hw))
+               return -EINVAL;
+
+       ctx->current_parent = index;
+
+       return 0;
+}
+
+static u8 clk_multiple_parents_mux_get_parent(struct clk_hw *hw)
+{
+       struct clk_multiple_parent_ctx *ctx =
+               container_of(hw, struct clk_multiple_parent_ctx, hw);
+
+       return ctx->current_parent;
+}
+
+static const struct clk_ops clk_multiple_parents_mux_ops = {
+       .get_parent = clk_multiple_parents_mux_get_parent,
+       .set_parent = clk_multiple_parents_mux_set_parent,
+       .determine_rate = __clk_mux_determine_rate_closest,
+};
+
 static int clk_test_init_with_ops(struct kunit *test, const struct clk_ops *ops)
 {
        struct clk_dummy_context *ctx;
@@ -160,12 +193,14 @@ static void clk_test_get_rate(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, ctx->rate);
+
+       clk_put(clk);
 }
 
 /*
@@ -179,7 +214,7 @@ static void clk_test_set_get_rate(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -189,6 +224,8 @@ static void clk_test_set_get_rate(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(clk);
 }
 
 /*
@@ -202,7 +239,7 @@ static void clk_test_set_set_get_rate(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -216,6 +253,8 @@ static void clk_test_set_set_get_rate(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -226,7 +265,7 @@ static void clk_test_round_set_get_rate(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rounded_rate, set_rate;
 
        rounded_rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_1);
@@ -240,6 +279,8 @@ static void clk_test_round_set_get_rate(struct kunit *test)
        set_rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, set_rate, 0);
        KUNIT_EXPECT_EQ(test, rounded_rate, set_rate);
+
+       clk_put(clk);
 }
 
 static struct kunit_case clk_test_cases[] = {
@@ -250,6 +291,11 @@ static struct kunit_case clk_test_cases[] = {
        {}
 };
 
+/*
+ * Test suite for a basic rate clock, without any parent.
+ *
+ * These tests exercise the rate API with simple scenarios.
+ */
 static struct kunit_suite clk_test_suite = {
        .name = "clk-test",
        .init = clk_test_init,
@@ -257,16 +303,132 @@ static struct kunit_suite clk_test_suite = {
        .test_cases = clk_test_cases,
 };
 
-struct clk_single_parent_ctx {
-       struct clk_dummy_context parent_ctx;
-       struct clk_hw hw;
+static int clk_uncached_test_init(struct kunit *test)
+{
+       struct clk_dummy_context *ctx;
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       ctx->rate = DUMMY_CLOCK_INIT_RATE;
+       ctx->hw.init = CLK_HW_INIT_NO_PARENT("test-clk",
+                                            &clk_dummy_rate_ops,
+                                            CLK_GET_RATE_NOCACHE);
+
+       ret = clk_hw_register(NULL, &ctx->hw);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+/*
+ * Test that for an uncached clock, the clock framework doesn't cache
+ * the rate and clk_get_rate() will return the underlying clock rate
+ * even if it changed.
+ */
+static void clk_test_uncached_get_rate(struct kunit *test)
+{
+       struct clk_dummy_context *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_INIT_RATE);
+
+       /* We change the rate behind the clock framework's back */
+       ctx->rate = DUMMY_CLOCK_RATE_1;
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for an uncached clock, clk_set_rate_range() will work
+ * properly if the rate hasn't changed.
+ */
+static void clk_test_uncached_set_range(struct kunit *test)
+{
+       struct clk_dummy_context *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(clk,
+                                          DUMMY_CLOCK_RATE_1,
+                                          DUMMY_CLOCK_RATE_2),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for an uncached clock, clk_set_rate_range() will work
+ * properly if the rate has changed in hardware.
+ *
+ * In this case, it means that if the rate wasn't initially in the range
+ * we're trying to set, but got changed at some point into the range
+ * without the kernel knowing about it, its rate shouldn't be affected.
+ */
+static void clk_test_uncached_updated_rate_set_range(struct kunit *test)
+{
+       struct clk_dummy_context *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+
+       /* We change the rate behind the clock framework's back */
+       ctx->rate = DUMMY_CLOCK_RATE_1 + 1000;
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(clk,
+                                          DUMMY_CLOCK_RATE_1,
+                                          DUMMY_CLOCK_RATE_2),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+
+       clk_put(clk);
+}
+
+static struct kunit_case clk_uncached_test_cases[] = {
+       KUNIT_CASE(clk_test_uncached_get_rate),
+       KUNIT_CASE(clk_test_uncached_set_range),
+       KUNIT_CASE(clk_test_uncached_updated_rate_set_range),
+       {}
 };
 
-static int clk_orphan_transparent_single_parent_mux_test_init(struct kunit *test)
+/*
+ * Test suite for a basic, uncached rate clock, without any parent.
+ *
+ * These tests exercise the rate API with simple scenarios.
+ */
+static struct kunit_suite clk_uncached_test_suite = {
+       .name = "clk-uncached-test",
+       .init = clk_uncached_test_init,
+       .exit = clk_test_exit,
+       .test_cases = clk_uncached_test_cases,
+};
+
+static int
+clk_multiple_parents_mux_test_init(struct kunit *test)
 {
-       struct clk_single_parent_ctx *ctx;
-       struct clk_init_data init = { };
-       const char * const parents[] = { "orphan_parent" };
+       struct clk_multiple_parent_ctx *ctx;
+       const char *parents[2] = { "parent-0", "parent-1"};
        int ret;
 
        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
@@ -274,73 +436,993 @@ static int clk_orphan_transparent_single_parent_mux_test_init(struct kunit *test
                return -ENOMEM;
        test->priv = ctx;
 
-       init.name = "test_orphan_dummy_parent";
-       init.ops = &clk_dummy_single_parent_ops;
-       init.parent_names = parents;
-       init.num_parents = ARRAY_SIZE(parents);
-       init.flags = CLK_SET_RATE_PARENT;
-       ctx->hw.init = &init;
+       ctx->parents_ctx[0].hw.init = CLK_HW_INIT_NO_PARENT("parent-0",
+                                                           &clk_dummy_rate_ops,
+                                                           0);
+       ctx->parents_ctx[0].rate = DUMMY_CLOCK_RATE_1;
+       ret = clk_hw_register(NULL, &ctx->parents_ctx[0].hw);
+       if (ret)
+               return ret;
+
+       ctx->parents_ctx[1].hw.init = CLK_HW_INIT_NO_PARENT("parent-1",
+                                                           &clk_dummy_rate_ops,
+                                                           0);
+       ctx->parents_ctx[1].rate = DUMMY_CLOCK_RATE_2;
+       ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
+       if (ret)
+               return ret;
 
+       ctx->current_parent = 0;
+       ctx->hw.init = CLK_HW_INIT_PARENTS("test-mux", parents,
+                                          &clk_multiple_parents_mux_ops,
+                                          CLK_SET_RATE_PARENT);
        ret = clk_hw_register(NULL, &ctx->hw);
        if (ret)
                return ret;
 
-       memset(&init, 0, sizeof(init));
-       init.name = "orphan_parent";
-       init.ops = &clk_dummy_rate_ops;
-       ctx->parent_ctx.hw.init = &init;
-       ctx->parent_ctx.rate = DUMMY_CLOCK_INIT_RATE;
+       return 0;
+}
 
-       ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
+static void
+clk_multiple_parents_mux_test_exit(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+
+       clk_hw_unregister(&ctx->hw);
+       clk_hw_unregister(&ctx->parents_ctx[0].hw);
+       clk_hw_unregister(&ctx->parents_ctx[1].hw);
+}
+
+/*
+ * Test that for a clock with multiple parents, clk_get_parent()
+ * actually returns the current one.
+ */
+static void
+clk_test_multiple_parents_mux_get_parent(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL);
+
+       KUNIT_EXPECT_TRUE(test, clk_is_match(clk_get_parent(clk), parent));
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock with multiple parents, clk_has_parent()
+ * actually reports all of them as parents.
+ */
+static void
+clk_test_multiple_parents_mux_has_parent(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL);
+       KUNIT_EXPECT_TRUE(test, clk_has_parent(clk, parent));
+       clk_put(parent);
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_EXPECT_TRUE(test, clk_has_parent(clk, parent));
+       clk_put(parent);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock with multiple parents, if we set a range on
+ * that clock and the parent is changed, its rate after the reparenting
+ * is still within the range we asked for.
+ *
+ * FIXME: clk_set_parent() only does the reparenting but doesn't
+ * reevaluate whether the new clock rate is within its boundaries or
+ * not.
+ */
+static void
+clk_test_multiple_parents_mux_set_range_set_parent_get_rate(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent1, *parent2;
+       unsigned long rate;
+       int ret;
+
+       kunit_skip(test, "This needs to be fixed in the core.");
+
+       parent1 = clk_hw_get_clk(&ctx->parents_ctx[0].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent1);
+       KUNIT_ASSERT_TRUE(test, clk_is_match(clk_get_parent(clk), parent1));
+
+       parent2 = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent2);
+
+       ret = clk_set_rate(parent1, DUMMY_CLOCK_RATE_1);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate(parent2, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk,
+                                DUMMY_CLOCK_RATE_1 - 1000,
+                                DUMMY_CLOCK_RATE_1 + 1000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_parent(clk, parent2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 - 1000);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+
+       clk_put(parent2);
+       clk_put(parent1);
+       clk_put(clk);
+}
+
+static struct kunit_case clk_multiple_parents_mux_test_cases[] = {
+       KUNIT_CASE(clk_test_multiple_parents_mux_get_parent),
+       KUNIT_CASE(clk_test_multiple_parents_mux_has_parent),
+       KUNIT_CASE(clk_test_multiple_parents_mux_set_range_set_parent_get_rate),
+       {}
+};
+
+/*
+ * Test suite for a basic mux clock with two parents, with
+ * CLK_SET_RATE_PARENT on the child.
+ *
+ * These tests exercise the consumer API and check that the states of
+ * the child and parents are sane and consistent.
+ */
+static struct kunit_suite
+clk_multiple_parents_mux_test_suite = {
+       .name = "clk-multiple-parents-mux-test",
+       .init = clk_multiple_parents_mux_test_init,
+       .exit = clk_multiple_parents_mux_test_exit,
+       .test_cases = clk_multiple_parents_mux_test_cases,
+};
+
+static int
+clk_orphan_transparent_multiple_parent_mux_test_init(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx;
+       const char *parents[2] = { "missing-parent", "proper-parent"};
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       ctx->parents_ctx[1].hw.init = CLK_HW_INIT_NO_PARENT("proper-parent",
+                                                           &clk_dummy_rate_ops,
+                                                           0);
+       ctx->parents_ctx[1].rate = DUMMY_CLOCK_INIT_RATE;
+       ret = clk_hw_register(NULL, &ctx->parents_ctx[1].hw);
+       if (ret)
+               return ret;
+
+       ctx->hw.init = CLK_HW_INIT_PARENTS("test-orphan-mux", parents,
+                                          &clk_multiple_parents_mux_ops,
+                                          CLK_SET_RATE_PARENT);
+       ret = clk_hw_register(NULL, &ctx->hw);
        if (ret)
                return ret;
 
        return 0;
 }
 
-static void clk_orphan_transparent_single_parent_mux_test_exit(struct kunit *test)
+static void
+clk_orphan_transparent_multiple_parent_mux_test_exit(struct kunit *test)
 {
-       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_multiple_parent_ctx *ctx = test->priv;
 
        clk_hw_unregister(&ctx->hw);
-       clk_hw_unregister(&ctx->parent_ctx.hw);
+       clk_hw_unregister(&ctx->parents_ctx[1].hw);
 }
 
 /*
- * Test that a mux-only clock, with an initial rate within a range,
- * will still have the same rate after the range has been enforced.
+ * Test that, for a mux whose current parent hasn't been registered yet and is
+ * thus orphan, clk_get_parent() will return NULL.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_get_parent(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+
+       KUNIT_EXPECT_PTR_EQ(test, clk_get_parent(clk), NULL);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux whose current parent hasn't been registered yet,
+ * calling clk_set_parent() to a valid parent will properly update the
+ * mux parent and its orphan status.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent, *new_parent;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       new_parent = clk_get_parent(clk);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, new_parent);
+       KUNIT_EXPECT_TRUE(test, clk_is_match(parent, new_parent));
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux that started orphan but got switched to a valid
+ * parent, calling clk_drop_range() on the mux won't affect the parent
+ * rate.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent_drop_range(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long parent_rate, new_parent_rate;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, parent_rate, 0);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_drop_range(clk);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       new_parent_rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, new_parent_rate, 0);
+       KUNIT_EXPECT_EQ(test, parent_rate, new_parent_rate);
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux that started orphan but got switched to a valid
+ * parent, the rate of the mux and its new parent are consistent.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent_get_rate(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long parent_rate, rate;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, parent_rate, 0);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, parent_rate, rate);
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux that started orphan but got switched to a valid
+ * parent, calling clk_put() on the mux won't affect the parent rate.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent_put(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk *clk, *parent;
+       unsigned long parent_rate, new_parent_rate;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       clk = clk_hw_get_clk(&ctx->hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, clk);
+
+       parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, parent_rate, 0);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       clk_put(clk);
+
+       new_parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, new_parent_rate, 0);
+       KUNIT_EXPECT_EQ(test, parent_rate, new_parent_rate);
+
+       clk_put(parent);
+}
+
+/*
+ * Test that, for a mux that started orphan but got switched to a valid
+ * parent, calling clk_set_rate_range() will affect the parent state if
+ * its rate is out of range.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent_set_range_modified(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long rate;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux that started orphan but got switched to a valid
+ * parent, calling clk_set_rate_range() won't affect the parent state if
+ * its rate is within range.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_parent_set_range_untouched(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long parent_rate, new_parent_rate;
+       int ret;
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, parent_rate, 0);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk,
+                                DUMMY_CLOCK_INIT_RATE - 1000,
+                                DUMMY_CLOCK_INIT_RATE + 1000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       new_parent_rate = clk_get_rate(parent);
+       KUNIT_ASSERT_GT(test, new_parent_rate, 0);
+       KUNIT_EXPECT_EQ(test, parent_rate, new_parent_rate);
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux whose current parent hasn't been registered yet,
+ * calling clk_set_rate_range() will succeed, and will be taken into
+ * account when rounding a rate.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_range_round_rate(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+       int ret;
+
+       ret = clk_set_rate_range(clk, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_1 - 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a mux that started orphan, was assigned a rate and
+ * then got switched to a valid parent, its rate is eventually within
+ * range.
+ *
+ * FIXME: Even though we update the rate as part of clk_set_parent(), we
+ * don't evaluate whether that new rate is within range and needs to be
+ * adjusted.
+ */
+static void
+clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate(struct kunit *test)
+{
+       struct clk_multiple_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long rate;
+       int ret;
+
+       kunit_skip(test, "This needs to be fixed in the core.");
+
+       clk_hw_set_rate_range(hw, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+
+       parent = clk_hw_get_clk(&ctx->parents_ctx[1].hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, parent);
+
+       ret = clk_set_parent(clk, parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+static struct kunit_case clk_orphan_transparent_multiple_parent_mux_test_cases[] = {
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_get_parent),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent_drop_range),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent_get_rate),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent_put),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent_set_range_modified),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_parent_set_range_untouched),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_range_round_rate),
+       KUNIT_CASE(clk_test_orphan_transparent_multiple_parent_mux_set_range_set_parent_get_rate),
+       {}
+};
+
+/*
+ * Test suite for a basic mux clock with two parents. The default parent
+ * isn't registered, only the second parent is. By default, the clock
+ * will thus be orphan.
+ *
+ * These tests exercise the behaviour of the consumer API when dealing
+ * with an orphan clock, and how we deal with the transition to a valid
+ * parent.
+ */
+static struct kunit_suite clk_orphan_transparent_multiple_parent_mux_test_suite = {
+       .name = "clk-orphan-transparent-multiple-parent-mux-test",
+       .init = clk_orphan_transparent_multiple_parent_mux_test_init,
+       .exit = clk_orphan_transparent_multiple_parent_mux_test_exit,
+       .test_cases = clk_orphan_transparent_multiple_parent_mux_test_cases,
+};
+
+struct clk_single_parent_ctx {
+       struct clk_dummy_context parent_ctx;
+       struct clk_hw hw;
+};
+
+static int clk_single_parent_mux_test_init(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx;
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       ctx->parent_ctx.rate = DUMMY_CLOCK_INIT_RATE;
+       ctx->parent_ctx.hw.init =
+               CLK_HW_INIT_NO_PARENT("parent-clk",
+                                     &clk_dummy_rate_ops,
+                                     0);
+
+       ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
+       if (ret)
+               return ret;
+
+       ctx->hw.init = CLK_HW_INIT("test-clk", "parent-clk",
+                                  &clk_dummy_single_parent_ops,
+                                  CLK_SET_RATE_PARENT);
+
+       ret = clk_hw_register(NULL, &ctx->hw);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+static void
+clk_single_parent_mux_test_exit(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+
+       clk_hw_unregister(&ctx->hw);
+       clk_hw_unregister(&ctx->parent_ctx.hw);
+}
+
+/*
+ * Test that for a clock with a single parent, clk_get_parent() actually
+ * returns the parent.
+ */
+static void
+clk_test_single_parent_mux_get_parent(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent = clk_hw_get_clk(&ctx->parent_ctx.hw, NULL);
+
+       KUNIT_EXPECT_TRUE(test, clk_is_match(clk_get_parent(clk), parent));
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock with a single parent, clk_has_parent() actually
+ * reports it as a parent.
+ */
+static void
+clk_test_single_parent_mux_has_parent(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent = clk_hw_get_clk(&ctx->parent_ctx.hw, NULL);
+
+       KUNIT_EXPECT_TRUE(test, clk_has_parent(clk, parent));
+
+       clk_put(parent);
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock that can't modify its rate and with a single
+ * parent, if we set disjoint ranges on the parent and then the child,
+ * the second call will return an error.
+ *
+ * FIXME: clk_set_rate_range() only considers the current clock when
+ * evaluating whether ranges are disjoint and not the upstream clocks'
+ * ranges.
+ */
+static void
+clk_test_single_parent_mux_set_range_disjoint_child_last(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       int ret;
+
+       kunit_skip(test, "This needs to be fixed in the core.");
+
+       parent = clk_get_parent(clk);
+       KUNIT_ASSERT_PTR_NE(test, parent, NULL);
+
+       ret = clk_set_rate_range(parent, 1000, 2000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk, 3000, 4000);
+       KUNIT_EXPECT_LT(test, ret, 0);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock that can't modify its rate and with a single
+ * parent, if we set disjoint ranges on the child and then the parent,
+ * the second call will return an error.
+ *
+ * FIXME: clk_set_rate_range() only considers the current clock when
+ * evaluating whether ranges are disjoint and not the downstream clocks'
+ * ranges.
+ */
+static void
+clk_test_single_parent_mux_set_range_disjoint_parent_last(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       int ret;
+
+       kunit_skip(test, "This needs to be fixed in the core.");
+
+       parent = clk_get_parent(clk);
+       KUNIT_ASSERT_PTR_NE(test, parent, NULL);
+
+       ret = clk_set_rate_range(clk, 1000, 2000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(parent, 3000, 4000);
+       KUNIT_EXPECT_LT(test, ret, 0);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock that can't modify its rate and with a single
+ * parent, if we set a range on the parent and then call
+ * clk_round_rate(), the boundaries of the parent are taken into
+ * account.
+ */
+static void
+clk_test_single_parent_mux_set_range_round_rate_parent_only(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long rate;
+       int ret;
+
+       parent = clk_get_parent(clk);
+       KUNIT_ASSERT_PTR_NE(test, parent, NULL);
+
+       ret = clk_set_rate_range(parent, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_1 - 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock that can't modify its rate and with a single
+ * parent, if we set a range on the parent and a more restrictive one on
+ * the child, and then call clk_round_rate(), the boundaries of the
+ * two clocks are taken into account.
+ */
+static void
+clk_test_single_parent_mux_set_range_round_rate_child_smaller(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long rate;
+       int ret;
+
+       parent = clk_get_parent(clk);
+       KUNIT_ASSERT_PTR_NE(test, parent, NULL);
+
+       ret = clk_set_rate_range(parent, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk, DUMMY_CLOCK_RATE_1 + 1000, DUMMY_CLOCK_RATE_2 - 1000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_1 - 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2 - 1000);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_2 + 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2 - 1000);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that for a clock that can't modify its rate and with a single
+ * parent, if we set a range on the child and a more restrictive one on
+ * the parent, and then call clk_round_rate(), the boundaries of the
+ * two clocks are taken into account.
+ */
+static void
+clk_test_single_parent_mux_set_range_round_rate_parent_smaller(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *parent;
+       unsigned long rate;
+       int ret;
+
+       parent = clk_get_parent(clk);
+       KUNIT_ASSERT_PTR_NE(test, parent, NULL);
+
+       ret = clk_set_rate_range(parent, DUMMY_CLOCK_RATE_1 + 1000, DUMMY_CLOCK_RATE_2 - 1000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = clk_set_rate_range(clk, DUMMY_CLOCK_RATE_1, DUMMY_CLOCK_RATE_2);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_1 - 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2 - 1000);
+
+       rate = clk_round_rate(clk, DUMMY_CLOCK_RATE_2 + 1000);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1 + 1000);
+       KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2 - 1000);
+
+       clk_put(clk);
+}
+
+static struct kunit_case clk_single_parent_mux_test_cases[] = {
+       KUNIT_CASE(clk_test_single_parent_mux_get_parent),
+       KUNIT_CASE(clk_test_single_parent_mux_has_parent),
+       KUNIT_CASE(clk_test_single_parent_mux_set_range_disjoint_child_last),
+       KUNIT_CASE(clk_test_single_parent_mux_set_range_disjoint_parent_last),
+       KUNIT_CASE(clk_test_single_parent_mux_set_range_round_rate_child_smaller),
+       KUNIT_CASE(clk_test_single_parent_mux_set_range_round_rate_parent_only),
+       KUNIT_CASE(clk_test_single_parent_mux_set_range_round_rate_parent_smaller),
+       {}
+};
+
+/*
+ * Test suite for a basic mux clock with one parent, with
+ * CLK_SET_RATE_PARENT on the child.
+ *
+ * These tests exercise the consumer API and check that the states of
+ * the child and parent are sane and consistent.
+ */
+static struct kunit_suite
+clk_single_parent_mux_test_suite = {
+       .name = "clk-single-parent-mux-test",
+       .init = clk_single_parent_mux_test_init,
+       .exit = clk_single_parent_mux_test_exit,
+       .test_cases = clk_single_parent_mux_test_cases,
+};
+
+static int clk_orphan_transparent_single_parent_mux_test_init(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx;
+       struct clk_init_data init = { };
+       const char * const parents[] = { "orphan_parent" };
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       init.name = "test_orphan_dummy_parent";
+       init.ops = &clk_dummy_single_parent_ops;
+       init.parent_names = parents;
+       init.num_parents = ARRAY_SIZE(parents);
+       init.flags = CLK_SET_RATE_PARENT;
+       ctx->hw.init = &init;
+
+       ret = clk_hw_register(NULL, &ctx->hw);
+       if (ret)
+               return ret;
+
+       memset(&init, 0, sizeof(init));
+       init.name = "orphan_parent";
+       init.ops = &clk_dummy_rate_ops;
+       ctx->parent_ctx.hw.init = &init;
+       ctx->parent_ctx.rate = DUMMY_CLOCK_INIT_RATE;
+
+       ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+/*
+ * Test that a mux-only clock, with an initial rate within a range,
+ * will still have the same rate after the range has been enforced.
+ *
+ * See:
+ * https://lore.kernel.org/linux-clk/7720158d-10a7-a17b-73a4-a8615c9c6d5c@collabora.com/
+ */
+static void clk_test_orphan_transparent_parent_mux_set_range(struct kunit *test)
+{
+       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate, new_rate;
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(clk,
+                                          ctx->parent_ctx.rate - 1000,
+                                          ctx->parent_ctx.rate + 1000),
+                       0);
+
+       new_rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, new_rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, new_rate);
+
+       clk_put(clk);
+}
+
+static struct kunit_case clk_orphan_transparent_single_parent_mux_test_cases[] = {
+       KUNIT_CASE(clk_test_orphan_transparent_parent_mux_set_range),
+       {}
+};
+
+/*
+ * Test suite for a basic mux clock with one parent. The parent is
+ * registered after its child. The clock will thus be an orphan when
+ * registered, but will no longer be when the tests run.
+ *
+ * These tests make sure a clock that used to be orphan has a sane,
+ * consistent behaviour.
+ */
+static struct kunit_suite clk_orphan_transparent_single_parent_test_suite = {
+       .name = "clk-orphan-transparent-single-parent-test",
+       .init = clk_orphan_transparent_single_parent_mux_test_init,
+       .exit = clk_single_parent_mux_test_exit,
+       .test_cases = clk_orphan_transparent_single_parent_mux_test_cases,
+};
+
+struct clk_single_parent_two_lvl_ctx {
+       struct clk_dummy_context parent_parent_ctx;
+       struct clk_dummy_context parent_ctx;
+       struct clk_hw hw;
+};
+
+static int
+clk_orphan_two_level_root_last_test_init(struct kunit *test)
+{
+       struct clk_single_parent_two_lvl_ctx *ctx;
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       ctx->parent_ctx.hw.init =
+               CLK_HW_INIT("intermediate-parent",
+                           "root-parent",
+                           &clk_dummy_single_parent_ops,
+                           CLK_SET_RATE_PARENT);
+       ret = clk_hw_register(NULL, &ctx->parent_ctx.hw);
+       if (ret)
+               return ret;
+
+       ctx->hw.init =
+               CLK_HW_INIT("test-clk", "intermediate-parent",
+                           &clk_dummy_single_parent_ops,
+                           CLK_SET_RATE_PARENT);
+       ret = clk_hw_register(NULL, &ctx->hw);
+       if (ret)
+               return ret;
+
+       ctx->parent_parent_ctx.rate = DUMMY_CLOCK_INIT_RATE;
+       ctx->parent_parent_ctx.hw.init =
+               CLK_HW_INIT_NO_PARENT("root-parent",
+                                     &clk_dummy_rate_ops,
+                                     0);
+       ret = clk_hw_register(NULL, &ctx->parent_parent_ctx.hw);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+static void
+clk_orphan_two_level_root_last_test_exit(struct kunit *test)
+{
+       struct clk_single_parent_two_lvl_ctx *ctx = test->priv;
+
+       clk_hw_unregister(&ctx->hw);
+       clk_hw_unregister(&ctx->parent_ctx.hw);
+       clk_hw_unregister(&ctx->parent_parent_ctx.hw);
+}
+
+/*
+ * Test that, for a clock whose parent used to be orphan, clk_get_rate()
+ * will return the proper rate.
+ */
+static void
+clk_orphan_two_level_root_last_test_get_rate(struct kunit *test)
+{
+       struct clk_single_parent_two_lvl_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+
+       rate = clk_get_rate(clk);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_INIT_RATE);
+
+       clk_put(clk);
+}
+
+/*
+ * Test that, for a clock whose parent used to be orphan,
+ * clk_set_rate_range() won't affect its rate if it is already within
+ * range.
+ *
+ * See (for Exynos 4210):
+ * https://lore.kernel.org/linux-clk/366a0232-bb4a-c357-6aa8-636e398e05eb@samsung.com/
  */
-static void clk_test_orphan_transparent_parent_mux_set_range(struct kunit *test)
+static void
+clk_orphan_two_level_root_last_test_set_range(struct kunit *test)
 {
-       struct clk_single_parent_ctx *ctx = test->priv;
+       struct clk_single_parent_two_lvl_ctx *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
-       unsigned long rate, new_rate;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       unsigned long rate;
+       int ret;
+
+       ret = clk_set_rate_range(clk,
+                                DUMMY_CLOCK_INIT_RATE - 1000,
+                                DUMMY_CLOCK_INIT_RATE + 1000);
+       KUNIT_ASSERT_EQ(test, ret, 0);
 
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_INIT_RATE);
 
-       KUNIT_ASSERT_EQ(test,
-                       clk_set_rate_range(clk,
-                                          ctx->parent_ctx.rate - 1000,
-                                          ctx->parent_ctx.rate + 1000),
-                       0);
-
-       new_rate = clk_get_rate(clk);
-       KUNIT_ASSERT_GT(test, new_rate, 0);
-       KUNIT_EXPECT_EQ(test, rate, new_rate);
+       clk_put(clk);
 }
 
-static struct kunit_case clk_orphan_transparent_single_parent_mux_test_cases[] = {
-       KUNIT_CASE(clk_test_orphan_transparent_parent_mux_set_range),
+static struct kunit_case
+clk_orphan_two_level_root_last_test_cases[] = {
+       KUNIT_CASE(clk_orphan_two_level_root_last_test_get_rate),
+       KUNIT_CASE(clk_orphan_two_level_root_last_test_set_range),
        {}
 };
 
-static struct kunit_suite clk_orphan_transparent_single_parent_test_suite = {
-       .name = "clk-orphan-transparent-single-parent-test",
-       .init = clk_orphan_transparent_single_parent_mux_test_init,
-       .exit = clk_orphan_transparent_single_parent_mux_test_exit,
-       .test_cases = clk_orphan_transparent_single_parent_mux_test_cases,
+/*
+ * Test suite for a basic, transparent clock with a parent that is also
+ * such a clock. The root (the parent's parent) is registered last,
+ * after the intermediate parent and its child, in that order. The
+ * intermediate and leaf clocks are thus orphans when registered, but
+ * the leaf clock always has its parent and is never reparented: it is
+ * only an orphan because its parent is.
+ *
+ * These tests exercise the behaviour of the consumer API when dealing
+ * with an orphan clock, and how we deal with the transition to a valid
+ * parent.
+ */
+static struct kunit_suite
+clk_orphan_two_level_root_last_test_suite = {
+       .name = "clk-orphan-two-level-root-last-test",
+       .init = clk_orphan_two_level_root_last_test_init,
+       .exit = clk_orphan_two_level_root_last_test_exit,
+       .test_cases = clk_orphan_two_level_root_last_test_cases,
 };
 
 /*
@@ -352,7 +1434,7 @@ static void clk_range_test_set_range(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -365,6 +1447,8 @@ static void clk_range_test_set_range(struct kunit *test)
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
        KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -375,13 +1459,15 @@ static void clk_range_test_set_range_invalid(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
 
        KUNIT_EXPECT_LT(test,
                        clk_set_rate_range(clk,
                                           DUMMY_CLOCK_RATE_1 + 1000,
                                           DUMMY_CLOCK_RATE_1),
                        0);
+
+       clk_put(clk);
 }
 
 /*
@@ -420,7 +1506,7 @@ static void clk_range_test_set_range_round_rate_lower(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -433,6 +1519,8 @@ static void clk_range_test_set_range_round_rate_lower(struct kunit *test)
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
        KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -443,7 +1531,7 @@ static void clk_range_test_set_range_set_rate_lower(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -460,6 +1548,8 @@ static void clk_range_test_set_range_set_rate_lower(struct kunit *test)
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
        KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -472,7 +1562,7 @@ static void clk_range_test_set_range_set_round_rate_consistent_lower(struct kuni
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        long rounded;
 
        KUNIT_ASSERT_EQ(test,
@@ -489,6 +1579,8 @@ static void clk_range_test_set_range_set_round_rate_consistent_lower(struct kuni
                        0);
 
        KUNIT_EXPECT_EQ(test, rounded, clk_get_rate(clk));
+
+       clk_put(clk);
 }
 
 /*
@@ -499,7 +1591,7 @@ static void clk_range_test_set_range_round_rate_higher(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -512,6 +1604,8 @@ static void clk_range_test_set_range_round_rate_higher(struct kunit *test)
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
        KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -522,7 +1616,7 @@ static void clk_range_test_set_range_set_rate_higher(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -539,6 +1633,8 @@ static void clk_range_test_set_range_set_rate_higher(struct kunit *test)
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_GE(test, rate, DUMMY_CLOCK_RATE_1);
        KUNIT_EXPECT_LE(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -551,7 +1647,7 @@ static void clk_range_test_set_range_set_round_rate_consistent_higher(struct kun
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        long rounded;
 
        KUNIT_ASSERT_EQ(test,
@@ -568,6 +1664,8 @@ static void clk_range_test_set_range_set_round_rate_consistent_higher(struct kun
                        0);
 
        KUNIT_EXPECT_EQ(test, rounded, clk_get_rate(clk));
+
+       clk_put(clk);
 }
 
 /*
@@ -582,7 +1680,7 @@ static void clk_range_test_set_range_get_rate_raised(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -598,6 +1696,8 @@ static void clk_range_test_set_range_get_rate_raised(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(clk);
 }
 
 /*
@@ -612,7 +1712,7 @@ static void clk_range_test_set_range_get_rate_lowered(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -628,6 +1728,8 @@ static void clk_range_test_set_range_get_rate_lowered(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 static struct kunit_case clk_range_test_cases[] = {
@@ -645,6 +1747,12 @@ static struct kunit_case clk_range_test_cases[] = {
        {}
 };
 
+/*
+ * Test suite for a basic rate clock, without any parent.
+ *
+ * These tests exercise the rate range API: clk_set_rate_range(),
+ * clk_set_min_rate(), clk_set_max_rate(), clk_drop_range().
+ */
 static struct kunit_suite clk_range_test_suite = {
        .name = "clk-range-test",
        .init = clk_test_init,
@@ -664,7 +1772,7 @@ static void clk_range_test_set_range_rate_maximized(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -700,6 +1808,8 @@ static void clk_range_test_set_range_rate_maximized(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(clk);
 }
 
 /*
@@ -714,7 +1824,7 @@ static void clk_range_test_multiple_set_range_rate_maximized(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        struct clk *user1, *user2;
        unsigned long rate;
 
@@ -758,14 +1868,79 @@ static void clk_range_test_multiple_set_range_rate_maximized(struct kunit *test)
 
        clk_put(user2);
        clk_put(user1);
+       clk_put(clk);
+}
+
+/*
+ * Test that if we have several subsequent calls to
+ * clk_set_rate_range(), across multiple users, the core will reevaluate
+ * whether a new rate is needed, including when a user drops its clock.
+ *
+ * With clk_dummy_maximize_rate_ops, this means that the rate will
+ * trail along the maximum as it evolves.
+ */
+static void clk_range_test_multiple_set_range_rate_put_maximized(struct kunit *test)
+{
+       struct clk_dummy_context *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *user1, *user2;
+       unsigned long rate;
+
+       user1 = clk_hw_get_clk(hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user1);
+
+       user2 = clk_hw_get_clk(hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user2);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate(clk, DUMMY_CLOCK_RATE_2 + 1000),
+                       0);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(user1,
+                                          0,
+                                          DUMMY_CLOCK_RATE_2),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(user2,
+                                          0,
+                                          DUMMY_CLOCK_RATE_1),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(user2);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(user1);
+       clk_put(clk);
 }
 
 static struct kunit_case clk_range_maximize_test_cases[] = {
        KUNIT_CASE(clk_range_test_set_range_rate_maximized),
        KUNIT_CASE(clk_range_test_multiple_set_range_rate_maximized),
+       KUNIT_CASE(clk_range_test_multiple_set_range_rate_put_maximized),
        {}
 };
 
+/*
+ * Test suite for a basic rate clock, without any parent.
+ *
+ * These tests exercise the rate range API: clk_set_rate_range(),
+ * clk_set_min_rate(), clk_set_max_rate(), clk_drop_range(), with a
+ * driver that will always try to run at the highest possible rate.
+ */
 static struct kunit_suite clk_range_maximize_test_suite = {
        .name = "clk-range-maximize-test",
        .init = clk_maximize_test_init,
@@ -785,7 +1960,7 @@ static void clk_range_test_set_range_rate_minimized(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        unsigned long rate;
 
        KUNIT_ASSERT_EQ(test,
@@ -821,6 +1996,8 @@ static void clk_range_test_set_range_rate_minimized(struct kunit *test)
        rate = clk_get_rate(clk);
        KUNIT_ASSERT_GT(test, rate, 0);
        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(clk);
 }
 
 /*
@@ -835,7 +2012,7 @@ static void clk_range_test_multiple_set_range_rate_minimized(struct kunit *test)
 {
        struct clk_dummy_context *ctx = test->priv;
        struct clk_hw *hw = &ctx->hw;
-       struct clk *clk = hw->clk;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
        struct clk *user1, *user2;
        unsigned long rate;
 
@@ -875,14 +2052,75 @@ static void clk_range_test_multiple_set_range_rate_minimized(struct kunit *test)
 
        clk_put(user2);
        clk_put(user1);
+       clk_put(clk);
+}
+
+/*
+ * Test that if we have several subsequent calls to
+ * clk_set_rate_range(), across multiple users, the core will reevaluate
+ * whether a new rate is needed, including when a user drops its clock.
+ *
+ * With clk_dummy_minimize_rate_ops, this means that the rate will
+ * trail along the minimum as it evolves.
+ */
+static void clk_range_test_multiple_set_range_rate_put_minimized(struct kunit *test)
+{
+       struct clk_dummy_context *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *user1, *user2;
+       unsigned long rate;
+
+       user1 = clk_hw_get_clk(hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user1);
+
+       user2 = clk_hw_get_clk(hw, NULL);
+       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user2);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(user1,
+                                          DUMMY_CLOCK_RATE_1,
+                                          ULONG_MAX),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       KUNIT_ASSERT_EQ(test,
+                       clk_set_rate_range(user2,
+                                          DUMMY_CLOCK_RATE_2,
+                                          ULONG_MAX),
+                       0);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(user2);
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_GT(test, rate, 0);
+       KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_put(user1);
+       clk_put(clk);
 }
 
 static struct kunit_case clk_range_minimize_test_cases[] = {
        KUNIT_CASE(clk_range_test_set_range_rate_minimized),
        KUNIT_CASE(clk_range_test_multiple_set_range_rate_minimized),
+       KUNIT_CASE(clk_range_test_multiple_set_range_rate_put_minimized),
        {}
 };
 
+/*
+ * Test suite for a basic rate clock, without any parent.
+ *
+ * These tests exercise the rate range API: clk_set_rate_range(),
+ * clk_set_min_rate(), clk_set_max_rate(), clk_drop_range(), with a
+ * driver that will always try to run at the lowest possible rate.
+ */
 static struct kunit_suite clk_range_minimize_test_suite = {
        .name = "clk-range-minimize-test",
        .init = clk_minimize_test_init,
@@ -890,11 +2128,284 @@ static struct kunit_suite clk_range_minimize_test_suite = {
        .test_cases = clk_range_minimize_test_cases,
 };
 
+struct clk_leaf_mux_ctx {
+       struct clk_multiple_parent_ctx mux_ctx;
+       struct clk_hw hw;
+};
+
+static int
+clk_leaf_mux_set_rate_parent_test_init(struct kunit *test)
+{
+       struct clk_leaf_mux_ctx *ctx;
+       const char *top_parents[2] = { "parent-0", "parent-1" };
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+
+       ctx->mux_ctx.parents_ctx[0].hw.init = CLK_HW_INIT_NO_PARENT("parent-0",
+                                                                   &clk_dummy_rate_ops,
+                                                                   0);
+       ctx->mux_ctx.parents_ctx[0].rate = DUMMY_CLOCK_RATE_1;
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.parents_ctx[0].hw);
+       if (ret)
+               return ret;
+
+       ctx->mux_ctx.parents_ctx[1].hw.init = CLK_HW_INIT_NO_PARENT("parent-1",
+                                                                   &clk_dummy_rate_ops,
+                                                                   0);
+       ctx->mux_ctx.parents_ctx[1].rate = DUMMY_CLOCK_RATE_2;
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.parents_ctx[1].hw);
+       if (ret)
+               return ret;
+
+       ctx->mux_ctx.current_parent = 0;
+       ctx->mux_ctx.hw.init = CLK_HW_INIT_PARENTS("test-mux", top_parents,
+                                                  &clk_multiple_parents_mux_ops,
+                                                  0);
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.hw);
+       if (ret)
+               return ret;
+
+       ctx->hw.init = CLK_HW_INIT_HW("test-clock", &ctx->mux_ctx.hw,
+                                     &clk_dummy_single_parent_ops,
+                                     CLK_SET_RATE_PARENT);
+       ret = clk_hw_register(NULL, &ctx->hw);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+static void clk_leaf_mux_set_rate_parent_test_exit(struct kunit *test)
+{
+       struct clk_leaf_mux_ctx *ctx = test->priv;
+
+       clk_hw_unregister(&ctx->hw);
+       clk_hw_unregister(&ctx->mux_ctx.hw);
+       clk_hw_unregister(&ctx->mux_ctx.parents_ctx[0].hw);
+       clk_hw_unregister(&ctx->mux_ctx.parents_ctx[1].hw);
+}
+
+/*
+ * Test that, for a clock that will forward any rate request to its
+ * parent, the rate request structure returned by __clk_determine_rate
+ * is sane and contains the values we expect.
+ */
+static void clk_leaf_mux_set_rate_parent_determine_rate(struct kunit *test)
+{
+       struct clk_leaf_mux_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk_rate_request req;
+       unsigned long rate;
+       int ret;
+
+       rate = clk_get_rate(clk);
+       KUNIT_ASSERT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+       clk_hw_init_rate_request(hw, &req, DUMMY_CLOCK_RATE_2);
+
+       ret = __clk_determine_rate(hw, &req);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       KUNIT_EXPECT_EQ(test, req.rate, DUMMY_CLOCK_RATE_2);
+       KUNIT_EXPECT_EQ(test, req.best_parent_rate, DUMMY_CLOCK_RATE_2);
+       KUNIT_EXPECT_PTR_EQ(test, req.best_parent_hw, &ctx->mux_ctx.hw);
+
+       clk_put(clk);
+}
+
+static struct kunit_case clk_leaf_mux_set_rate_parent_test_cases[] = {
+       KUNIT_CASE(clk_leaf_mux_set_rate_parent_determine_rate),
+       {}
+};
+
+/*
+ * Test suite for a clock whose parent is a mux with multiple parents.
+ * The leaf clock has CLK_SET_RATE_PARENT, and will forward rate
+ * requests to the mux, which will then select which parent is the best
+ * fit for a given rate.
+ *
+ * These tests exercise the behaviour of muxes, and the proper selection
+ * of parents.
+ */
+static struct kunit_suite clk_leaf_mux_set_rate_parent_test_suite = {
+       .name = "clk-leaf-mux-set-rate-parent",
+       .init = clk_leaf_mux_set_rate_parent_test_init,
+       .exit = clk_leaf_mux_set_rate_parent_test_exit,
+       .test_cases = clk_leaf_mux_set_rate_parent_test_cases,
+};
+
+struct clk_mux_notifier_rate_change {
+       bool done;
+       unsigned long old_rate;
+       unsigned long new_rate;
+       wait_queue_head_t wq;
+};
+
+struct clk_mux_notifier_ctx {
+       struct clk_multiple_parent_ctx mux_ctx;
+       struct clk *clk;
+       struct notifier_block clk_nb;
+       struct clk_mux_notifier_rate_change pre_rate_change;
+       struct clk_mux_notifier_rate_change post_rate_change;
+};
+
+#define NOTIFIER_TIMEOUT_MS 100
+
+static int clk_mux_notifier_callback(struct notifier_block *nb,
+                                    unsigned long action, void *data)
+{
+       struct clk_notifier_data *clk_data = data;
+       struct clk_mux_notifier_ctx *ctx = container_of(nb,
+                                                       struct clk_mux_notifier_ctx,
+                                                       clk_nb);
+
+       if (action & PRE_RATE_CHANGE) {
+               ctx->pre_rate_change.old_rate = clk_data->old_rate;
+               ctx->pre_rate_change.new_rate = clk_data->new_rate;
+               ctx->pre_rate_change.done = true;
+               wake_up_interruptible(&ctx->pre_rate_change.wq);
+       }
+
+       if (action & POST_RATE_CHANGE) {
+               ctx->post_rate_change.old_rate = clk_data->old_rate;
+               ctx->post_rate_change.new_rate = clk_data->new_rate;
+               ctx->post_rate_change.done = true;
+               wake_up_interruptible(&ctx->post_rate_change.wq);
+       }
+
+       return 0;
+}
+
+static int clk_mux_notifier_test_init(struct kunit *test)
+{
+       struct clk_mux_notifier_ctx *ctx;
+       const char *top_parents[2] = { "parent-0", "parent-1" };
+       int ret;
+
+       ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
+       if (!ctx)
+               return -ENOMEM;
+       test->priv = ctx;
+       ctx->clk_nb.notifier_call = clk_mux_notifier_callback;
+       init_waitqueue_head(&ctx->pre_rate_change.wq);
+       init_waitqueue_head(&ctx->post_rate_change.wq);
+
+       ctx->mux_ctx.parents_ctx[0].hw.init = CLK_HW_INIT_NO_PARENT("parent-0",
+                                                                   &clk_dummy_rate_ops,
+                                                                   0);
+       ctx->mux_ctx.parents_ctx[0].rate = DUMMY_CLOCK_RATE_1;
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.parents_ctx[0].hw);
+       if (ret)
+               return ret;
+
+       ctx->mux_ctx.parents_ctx[1].hw.init = CLK_HW_INIT_NO_PARENT("parent-1",
+                                                                   &clk_dummy_rate_ops,
+                                                                   0);
+       ctx->mux_ctx.parents_ctx[1].rate = DUMMY_CLOCK_RATE_2;
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.parents_ctx[1].hw);
+       if (ret)
+               return ret;
+
+       ctx->mux_ctx.current_parent = 0;
+       ctx->mux_ctx.hw.init = CLK_HW_INIT_PARENTS("test-mux", top_parents,
+                                                  &clk_multiple_parents_mux_ops,
+                                                  0);
+       ret = clk_hw_register(NULL, &ctx->mux_ctx.hw);
+       if (ret)
+               return ret;
+
+       ctx->clk = clk_hw_get_clk(&ctx->mux_ctx.hw, NULL);
+       ret = clk_notifier_register(ctx->clk, &ctx->clk_nb);
+       if (ret)
+               return ret;
+
+       return 0;
+}
+
+static void clk_mux_notifier_test_exit(struct kunit *test)
+{
+       struct clk_mux_notifier_ctx *ctx = test->priv;
+       struct clk *clk = ctx->clk;
+
+       clk_notifier_unregister(clk, &ctx->clk_nb);
+       clk_put(clk);
+
+       clk_hw_unregister(&ctx->mux_ctx.hw);
+       clk_hw_unregister(&ctx->mux_ctx.parents_ctx[0].hw);
+       clk_hw_unregister(&ctx->mux_ctx.parents_ctx[1].hw);
+}
+
+/*
+ * Test that if we have a notifier registered on a mux, the core
+ * will notify us when we switch to another parent, and with the proper
+ * old and new rates.
+ */
+static void clk_mux_notifier_set_parent_test(struct kunit *test)
+{
+       struct clk_mux_notifier_ctx *ctx = test->priv;
+       struct clk_hw *hw = &ctx->mux_ctx.hw;
+       struct clk *clk = clk_hw_get_clk(hw, NULL);
+       struct clk *new_parent = clk_hw_get_clk(&ctx->mux_ctx.parents_ctx[1].hw, NULL);
+       int ret;
+
+       ret = clk_set_parent(clk, new_parent);
+       KUNIT_ASSERT_EQ(test, ret, 0);
+
+       ret = wait_event_interruptible_timeout(ctx->pre_rate_change.wq,
+                                              ctx->pre_rate_change.done,
+                                              msecs_to_jiffies(NOTIFIER_TIMEOUT_MS));
+       KUNIT_ASSERT_GT(test, ret, 0);
+
+       KUNIT_EXPECT_EQ(test, ctx->pre_rate_change.old_rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_EQ(test, ctx->pre_rate_change.new_rate, DUMMY_CLOCK_RATE_2);
+
+       ret = wait_event_interruptible_timeout(ctx->post_rate_change.wq,
+                                              ctx->post_rate_change.done,
+                                              msecs_to_jiffies(NOTIFIER_TIMEOUT_MS));
+       KUNIT_ASSERT_GT(test, ret, 0);
+
+       KUNIT_EXPECT_EQ(test, ctx->post_rate_change.old_rate, DUMMY_CLOCK_RATE_1);
+       KUNIT_EXPECT_EQ(test, ctx->post_rate_change.new_rate, DUMMY_CLOCK_RATE_2);
+
+       clk_put(new_parent);
+       clk_put(clk);
+}
+
+static struct kunit_case clk_mux_notifier_test_cases[] = {
+       KUNIT_CASE(clk_mux_notifier_set_parent_test),
+       {}
+};
+
+/*
+ * Test suite for a mux with multiple parents, and a notifier registered
+ * on the mux.
+ *
+ * These tests exercise the behaviour of notifiers.
+ */
+static struct kunit_suite clk_mux_notifier_test_suite = {
+       .name = "clk-mux-notifier",
+       .init = clk_mux_notifier_test_init,
+       .exit = clk_mux_notifier_test_exit,
+       .test_cases = clk_mux_notifier_test_cases,
+};
+
 kunit_test_suites(
+       &clk_leaf_mux_set_rate_parent_test_suite,
        &clk_test_suite,
+       &clk_multiple_parents_mux_test_suite,
+       &clk_mux_notifier_test_suite,
+       &clk_orphan_transparent_multiple_parent_mux_test_suite,
        &clk_orphan_transparent_single_parent_test_suite,
+       &clk_orphan_two_level_root_last_test_suite,
        &clk_range_test_suite,
        &clk_range_maximize_test_suite,
-       &clk_range_minimize_test_suite
+       &clk_range_minimize_test_suite,
+       &clk_single_parent_mux_test_suite,
+       &clk_uncached_test_suite
 );
 MODULE_LICENSE("GPL v2");
index 4421e48..ba1720b 100644 (file)
@@ -129,9 +129,18 @@ static int mtk_clk_mux_set_parent_setclr_lock(struct clk_hw *hw, u8 index)
        return 0;
 }
 
+static int mtk_clk_mux_determine_rate(struct clk_hw *hw,
+                                     struct clk_rate_request *req)
+{
+       struct mtk_clk_mux *mux = to_mtk_clk_mux(hw);
+
+       return clk_mux_determine_rate_flags(hw, req, mux->data->flags);
+}
+
 const struct clk_ops mtk_mux_clr_set_upd_ops = {
        .get_parent = mtk_clk_mux_get_parent,
        .set_parent = mtk_clk_mux_set_parent_setclr_lock,
+       .determine_rate = mtk_clk_mux_determine_rate,
 };
 EXPORT_SYMBOL_GPL(mtk_mux_clr_set_upd_ops);
 
@@ -141,6 +150,7 @@ const struct clk_ops mtk_mux_gate_clr_set_upd_ops  = {
        .is_enabled = mtk_clk_mux_is_enabled,
        .get_parent = mtk_clk_mux_get_parent,
        .set_parent = mtk_clk_mux_set_parent_setclr_lock,
+       .determine_rate = mtk_clk_mux_determine_rate,
 };
 EXPORT_SYMBOL_GPL(mtk_mux_gate_clr_set_upd_ops);
 
index 609c10f..7655153 100644 (file)
@@ -915,6 +915,15 @@ static int clk_gfx3d_determine_rate(struct clk_hw *hw,
                req->best_parent_hw = p2;
        }
 
+       clk_hw_get_rate_range(req->best_parent_hw,
+                             &parent_req.min_rate, &parent_req.max_rate);
+
+       if (req->min_rate > parent_req.min_rate)
+               parent_req.min_rate = req->min_rate;
+
+       if (req->max_rate < parent_req.max_rate)
+               parent_req.max_rate = req->max_rate;
+
        ret = __clk_determine_rate(req->best_parent_hw, &parent_req);
        if (ret)
                return ret;
index 657e115..a9eb6a9 100644 (file)
@@ -2767,17 +2767,6 @@ MODULE_DEVICE_TABLE(of, gcc_msm8660_match_table);
 
 static int gcc_msm8660_probe(struct platform_device *pdev)
 {
-       int ret;
-       struct device *dev = &pdev->dev;
-
-       ret = qcom_cc_register_board_clk(dev, "cxo_board", "cxo", 19200000);
-       if (ret)
-               return ret;
-
-       ret = qcom_cc_register_board_clk(dev, "pxo_board", "pxo", 27000000);
-       if (ret)
-               return ret;
-
        return qcom_cc_probe(pdev, &gcc_msm8660_desc);
 }
 
index 41717ff..ba87913 100644 (file)
@@ -8,6 +8,7 @@
 
 #include <linux/clk.h>
 #include <linux/clkdev.h>
+#include <linux/clk/spear.h>
 #include <linux/err.h>
 #include <linux/io.h>
 #include <linux/of_platform.h>
index 490701a..c192a91 100644 (file)
@@ -7,6 +7,7 @@
  */
 
 #include <linux/clkdev.h>
+#include <linux/clk/spear.h>
 #include <linux/io.h>
 #include <linux/spinlock_types.h>
 #include "clk.h"
index f7405a5..7330345 100644 (file)
@@ -1166,6 +1166,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
        { TEGRA114_CLK_I2S3_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
        { TEGRA114_CLK_I2S4_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
        { TEGRA114_CLK_VIMCLK_SYNC, TEGRA114_CLK_CLK_MAX, 24000000, 0 },
+       { TEGRA114_CLK_PWM, TEGRA114_CLK_PLL_P, 408000000, 0 },
        /* must be the last entry */
        { TEGRA114_CLK_CLK_MAX, TEGRA114_CLK_CLK_MAX, 0, 0 },
 };
index a9d4efc..6c46592 100644 (file)
@@ -1330,6 +1330,7 @@ static struct tegra_clk_init_table common_init_table[] __initdata = {
        { TEGRA124_CLK_I2S3_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
        { TEGRA124_CLK_I2S4_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
        { TEGRA124_CLK_VIMCLK_SYNC, TEGRA124_CLK_CLK_MAX, 24576000, 0 },
+       { TEGRA124_CLK_PWM, TEGRA124_CLK_PLL_P, 408000000, 0 },
        /* must be the last entry */
        { TEGRA124_CLK_CLK_MAX, TEGRA124_CLK_CLK_MAX, 0, 0 },
 };
index 8a4514f..422d782 100644 (file)
@@ -1044,6 +1044,7 @@ static struct tegra_clk_init_table init_table[] = {
        { TEGRA20_CLK_GR2D, TEGRA20_CLK_PLL_C, 300000000, 0 },
        { TEGRA20_CLK_GR3D, TEGRA20_CLK_PLL_C, 300000000, 0 },
        { TEGRA20_CLK_VDE, TEGRA20_CLK_PLL_C, 300000000, 0 },
+       { TEGRA20_CLK_PWM, TEGRA20_CLK_PLL_P, 48000000, 0 },
        /* must be the last entry */
        { TEGRA20_CLK_CLK_MAX, TEGRA20_CLK_CLK_MAX, 0, 0 },
 };
index 499f999..a3488aa 100644 (file)
@@ -3597,6 +3597,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
        { TEGRA210_CLK_VIMCLK_SYNC, TEGRA210_CLK_CLK_MAX, 24576000, 0 },
        { TEGRA210_CLK_HDA, TEGRA210_CLK_PLL_P, 51000000, 0 },
        { TEGRA210_CLK_HDA2CODEC_2X, TEGRA210_CLK_PLL_P, 48000000, 0 },
+       { TEGRA210_CLK_PWM, TEGRA210_CLK_PLL_P, 48000000, 0 },
        /* This MUST be the last entry. */
        { TEGRA210_CLK_CLK_MAX, TEGRA210_CLK_CLK_MAX, 0, 0 },
 };
index 168c07d..60f1534 100644 (file)
@@ -1237,6 +1237,7 @@ static struct tegra_clk_init_table init_table[] = {
        { TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
        { TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 },
        { TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 },
+       { TEGRA30_CLK_PWM, TEGRA30_CLK_PLL_P, 48000000, 0 },
        /* must be the last entry */
        { TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
 };
index d69d13a..4aec4b2 100644 (file)
@@ -222,10 +222,8 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
        if (reg_name[0]) {
                priv->opp_token = dev_pm_opp_set_regulators(cpu_dev, reg_name);
                if (priv->opp_token < 0) {
-                       ret = priv->opp_token;
-                       if (ret != -EPROBE_DEFER)
-                               dev_err(cpu_dev, "failed to set regulators: %d\n",
-                                       ret);
+                       ret = dev_err_probe(cpu_dev, priv->opp_token,
+                                           "failed to set regulators\n");
                        goto free_cpumask;
                }
        }
index 90beb26..ad4ce84 100644
@@ -396,9 +396,7 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
                ret = imx6q_opp_check_speed_grading(cpu_dev);
        }
        if (ret) {
-               if (ret != -EPROBE_DEFER)
-                       dev_err(cpu_dev, "failed to read ocotp: %d\n",
-                               ret);
+               dev_err_probe(cpu_dev, ret, "failed to read ocotp\n");
                goto out_free_opp;
        }
 
index 863548f..a577586 100644
@@ -64,7 +64,7 @@ static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev;
 
 static void get_krait_bin_format_a(struct device *cpu_dev,
                                          int *speed, int *pvs, int *pvs_ver,
-                                         struct nvmem_cell *pvs_nvmem, u8 *buf)
+                                         u8 *buf)
 {
        u32 pte_efuse;
 
@@ -95,7 +95,7 @@ static void get_krait_bin_format_a(struct device *cpu_dev,
 
 static void get_krait_bin_format_b(struct device *cpu_dev,
                                          int *speed, int *pvs, int *pvs_ver,
-                                         struct nvmem_cell *pvs_nvmem, u8 *buf)
+                                         u8 *buf)
 {
        u32 pte_efuse, redundant_sel;
 
@@ -213,6 +213,7 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
        int speed = 0, pvs = 0, pvs_ver = 0;
        u8 *speedbin;
        size_t len;
+       int ret = 0;
 
        speedbin = nvmem_cell_read(speedbin_nvmem, &len);
 
@@ -222,15 +223,16 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
        switch (len) {
        case 4:
                get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver,
-                                      speedbin_nvmem, speedbin);
+                                      speedbin);
                break;
        case 8:
                get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver,
-                                      speedbin_nvmem, speedbin);
+                                      speedbin);
                break;
        default:
                dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n");
-               return -ENODEV;
+               ret = -ENODEV;
+               goto len_error;
        }
 
        snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d",
@@ -238,8 +240,9 @@ static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
 
        drv->versions = (1 << speed);
 
+len_error:
        kfree(speedbin);
-       return 0;
+       return ret;
 }
 
 static const struct qcom_cpufreq_match_data match_data_kryo = {
@@ -262,7 +265,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
        struct nvmem_cell *speedbin_nvmem;
        struct device_node *np;
        struct device *cpu_dev;
-       char *pvs_name = "speedXX-pvsXX-vXX";
+       char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
+       char *pvs_name = pvs_name_buffer;
        unsigned cpu;
        const struct of_device_id *match;
        int ret;
@@ -295,11 +299,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
        if (drv->data->get_version) {
                speedbin_nvmem = of_nvmem_cell_get(np, NULL);
                if (IS_ERR(speedbin_nvmem)) {
-                       if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
-                               dev_err(cpu_dev,
-                                       "Could not get nvmem cell: %ld\n",
-                                       PTR_ERR(speedbin_nvmem));
-                       ret = PTR_ERR(speedbin_nvmem);
+                       ret = dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+                                           "Could not get nvmem cell\n");
                        goto free_drv;
                }
 
index a492258..1583a37 100644
@@ -56,12 +56,9 @@ static int sun50i_cpufreq_get_efuse(u32 *versions)
 
        speedbin_nvmem = of_nvmem_cell_get(np, NULL);
        of_node_put(np);
-       if (IS_ERR(speedbin_nvmem)) {
-               if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
-                       pr_err("Could not get nvmem cell: %ld\n",
-                              PTR_ERR(speedbin_nvmem));
-               return PTR_ERR(speedbin_nvmem);
-       }
+       if (IS_ERR(speedbin_nvmem))
+               return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+                                    "Could not get nvmem cell\n");
 
        speedbin = nvmem_cell_read(speedbin_nvmem, &len);
        nvmem_cell_put(speedbin_nvmem);
index c2004ca..4596c3e 100644
@@ -589,6 +589,7 @@ static const struct of_device_id tegra194_cpufreq_of_match[] = {
        { .compatible = "nvidia,tegra239-ccplex-cluster", .data = &tegra239_cpufreq_soc },
        { /* sentinel */ }
 };
+MODULE_DEVICE_TABLE(of, tegra194_cpufreq_of_match);
 
 static struct platform_driver tegra194_ccplex_driver = {
        .driver = {
index acf31cc..97086fa 100644
@@ -48,7 +48,7 @@ void hmem_register_device(int target_nid, struct resource *r)
        rc = platform_device_add_data(pdev, &info, sizeof(info));
        if (rc < 0) {
                pr_err("hmem memregion_info allocation failure for %pr\n", &res);
-               goto out_pdev;
+               goto out_resource;
        }
 
        rc = platform_device_add_resources(pdev, &res, 1);
@@ -66,7 +66,7 @@ void hmem_register_device(int target_nid, struct resource *r)
        return;
 
 out_resource:
-       put_device(&pdev->dev);
+       platform_device_put(pdev);
 out_pdev:
        memregion_free(id);
 }
index 9b5e2a5..da4438f 100644
@@ -363,7 +363,7 @@ static void dax_free_inode(struct inode *inode)
 {
        struct dax_device *dax_dev = to_dax_dev(inode);
        if (inode->i_rdev)
-               ida_simple_remove(&dax_minor_ida, iminor(inode));
+               ida_free(&dax_minor_ida, iminor(inode));
        kmem_cache_free(dax_cache, dax_dev);
 }
 
@@ -445,7 +445,7 @@ struct dax_device *alloc_dax(void *private, const struct dax_operations *ops)
        if (WARN_ON_ONCE(ops && !ops->zero_page_range))
                return ERR_PTR(-EINVAL);
 
-       minor = ida_simple_get(&dax_minor_ida, 0, MINORMASK+1, GFP_KERNEL);
+       minor = ida_alloc_max(&dax_minor_ida, MINORMASK, GFP_KERNEL);
        if (minor < 0)
                return ERR_PTR(-ENOMEM);
 
@@ -459,7 +459,7 @@ struct dax_device *alloc_dax(void *private, const struct dax_operations *ops)
        return dax_dev;
 
  err_dev:
-       ida_simple_remove(&dax_minor_ida, minor);
+       ida_free(&dax_minor_ida, minor);
        return ERR_PTR(-ENOMEM);
 }
 EXPORT_SYMBOL_GPL(alloc_dax);
index 9fe2ae7..ffe6216 100644
@@ -312,7 +312,7 @@ static unsigned long dmatest_random(void)
 {
        unsigned long buf;
 
-       prandom_bytes(&buf, sizeof(buf));
+       get_random_bytes(&buf, sizeof(buf));
        return buf;
 }
 
index 17562cf..456602d 100644
@@ -473,7 +473,7 @@ config EDAC_ALTERA_SDMMC
 
 config EDAC_SIFIVE
        bool "Sifive platform EDAC driver"
-       depends on EDAC=y && SIFIVE_L2
+       depends on EDAC=y && SIFIVE_CCACHE
        help
          Support for error detection and correction on the SiFive SoCs.
 
index ee800ae..b844e26 100644
@@ -2,7 +2,7 @@
 /*
  * SiFive Platform EDAC Driver
  *
- * Copyright (C) 2018-2019 SiFive, Inc.
+ * Copyright (C) 2018-2022 SiFive, Inc.
  *
  * This driver is partially based on octeon_edac-pc.c
  *
@@ -10,7 +10,7 @@
 #include <linux/edac.h>
 #include <linux/platform_device.h>
 #include "edac_module.h"
-#include <soc/sifive/sifive_l2_cache.h>
+#include <soc/sifive/sifive_ccache.h>
 
 #define DRVNAME "sifive_edac"
 
@@ -32,9 +32,9 @@ int ecc_err_event(struct notifier_block *this, unsigned long event, void *ptr)
 
        p = container_of(this, struct sifive_edac_priv, notifier);
 
-       if (event == SIFIVE_L2_ERR_TYPE_UE)
+       if (event == SIFIVE_CCACHE_ERR_TYPE_UE)
                edac_device_handle_ue(p->dci, 0, 0, msg);
-       else if (event == SIFIVE_L2_ERR_TYPE_CE)
+       else if (event == SIFIVE_CCACHE_ERR_TYPE_CE)
                edac_device_handle_ce(p->dci, 0, 0, msg);
 
        return NOTIFY_OK;
@@ -67,7 +67,7 @@ static int ecc_register(struct platform_device *pdev)
                goto err;
        }
 
-       register_sifive_l2_error_notifier(&p->notifier);
+       register_sifive_ccache_error_notifier(&p->notifier);
 
        return 0;
 
@@ -81,7 +81,7 @@ static int ecc_unregister(struct platform_device *pdev)
 {
        struct sifive_edac_priv *p = platform_get_drvdata(pdev);
 
-       unregister_sifive_l2_error_notifier(&p->notifier);
+       unregister_sifive_ccache_error_notifier(&p->notifier);
        edac_device_del_device(&pdev->dev);
        edac_device_free_ctl_info(p->dci);
 
index 5b79a4a..6787ed8 100644
@@ -124,28 +124,6 @@ config EFI_ZBOOT
          is supported by the encapsulated image. (The compression algorithm
          used is described in the zboot image header)
 
-config EFI_ZBOOT_SIGNED
-       def_bool y
-       depends on EFI_ZBOOT_SIGNING_CERT != ""
-       depends on EFI_ZBOOT_SIGNING_KEY != ""
-
-config EFI_ZBOOT_SIGNING
-       bool "Sign the EFI decompressor for UEFI secure boot"
-       depends on EFI_ZBOOT
-       help
-         Use the 'sbsign' command line tool (which must exist on the host
-         path) to sign both the EFI decompressor PE/COFF image, as well as the
-         encapsulated PE/COFF image, which is subsequently compressed and
-         wrapped by the former image.
-
-config EFI_ZBOOT_SIGNING_CERT
-       string "Certificate to use for signing the compressed EFI boot image"
-       depends on EFI_ZBOOT_SIGNING
-
-config EFI_ZBOOT_SIGNING_KEY
-       string "Private key to use for signing the compressed EFI boot image"
-       depends on EFI_ZBOOT_SIGNING
-
 config EFI_ARMSTUB_DTB_LOADER
        bool "Enable the DTB loader"
        depends on EFI_GENERIC_STUB && !RISCV && !LOONGARCH
index 3359ae2..7c48c38 100644
@@ -63,7 +63,7 @@ static bool __init efi_virtmap_init(void)
 
                if (!(md->attribute & EFI_MEMORY_RUNTIME))
                        continue;
-               if (md->virt_addr == 0)
+               if (md->virt_addr == U64_MAX)
                        return false;
 
                ret = efi_create_mapping(&efi_mm, md);
index 9624735..3ecdc43 100644
@@ -271,6 +271,8 @@ static __init int efivar_ssdt_load(void)
                        acpi_status ret = acpi_load_table(data, NULL);
                        if (ret)
                                pr_err("failed to load table: %u\n", ret);
+                       else
+                               continue;
                } else {
                        pr_err("failed to get var data: 0x%lx\n", status);
                }
index 35f234a..3340b38 100644
@@ -20,22 +20,11 @@ zboot-size-len-y                    := 4
 zboot-method-$(CONFIG_KERNEL_GZIP)     := gzip
 zboot-size-len-$(CONFIG_KERNEL_GZIP)   := 0
 
-quiet_cmd_sbsign = SBSIGN  $@
-      cmd_sbsign = sbsign --out $@ $< \
-                  --key $(CONFIG_EFI_ZBOOT_SIGNING_KEY) \
-                  --cert $(CONFIG_EFI_ZBOOT_SIGNING_CERT)
-
-$(obj)/$(EFI_ZBOOT_PAYLOAD).signed: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
-       $(call if_changed,sbsign)
-
-ZBOOT_PAYLOAD-y                                 := $(EFI_ZBOOT_PAYLOAD)
-ZBOOT_PAYLOAD-$(CONFIG_EFI_ZBOOT_SIGNED) := $(EFI_ZBOOT_PAYLOAD).signed
-
-$(obj)/vmlinuz: $(obj)/$(ZBOOT_PAYLOAD-y) FORCE
+$(obj)/vmlinuz: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
        $(call if_changed,$(zboot-method-y))
 
 OBJCOPYFLAGS_vmlinuz.o := -I binary -O $(EFI_ZBOOT_BFD_TARGET) \
-                        --rename-section .data=.gzdata,load,alloc,readonly,contents
+                         --rename-section .data=.gzdata,load,alloc,readonly,contents
 $(obj)/vmlinuz.o: $(obj)/vmlinuz FORCE
        $(call if_changed,objcopy)
 
@@ -53,18 +42,8 @@ LDFLAGS_vmlinuz.efi.elf := -T $(srctree)/drivers/firmware/efi/libstub/zboot.lds
 $(obj)/vmlinuz.efi.elf: $(obj)/vmlinuz.o $(ZBOOT_DEPS) FORCE
        $(call if_changed,ld)
 
-ZBOOT_EFI-y                            := vmlinuz.efi
-ZBOOT_EFI-$(CONFIG_EFI_ZBOOT_SIGNED)   := vmlinuz.efi.unsigned
-
-OBJCOPYFLAGS_$(ZBOOT_EFI-y) := -O binary
-$(obj)/$(ZBOOT_EFI-y): $(obj)/vmlinuz.efi.elf FORCE
+OBJCOPYFLAGS_vmlinuz.efi := -O binary
+$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.elf FORCE
        $(call if_changed,objcopy)
 
 targets += zboot-header.o vmlinuz vmlinuz.o vmlinuz.efi.elf vmlinuz.efi
-
-ifneq ($(CONFIG_EFI_ZBOOT_SIGNED),)
-$(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.unsigned FORCE
-       $(call if_changed,sbsign)
-endif
-
-targets += $(EFI_ZBOOT_PAYLOAD).signed vmlinuz.efi.unsigned
index 4f4d98e..70e9789 100644
@@ -313,16 +313,16 @@ efi_status_t allocate_new_fdt_and_exit_boot(void *handle,
 
                        /*
                         * Set the virtual address field of all
-                        * EFI_MEMORY_RUNTIME entries to 0. This will signal
-                        * the incoming kernel that no virtual translation has
-                        * been installed.
+                        * EFI_MEMORY_RUNTIME entries to U64_MAX. This will
+                        * signal the incoming kernel that no virtual
+                        * translation has been installed.
                         */
                        for (l = 0; l < priv.boot_memmap->map_size;
                             l += priv.boot_memmap->desc_size) {
                                p = (void *)priv.boot_memmap->map + l;
 
                                if (p->attribute & EFI_MEMORY_RUNTIME)
-                                       p->virt_addr = 0;
+                                       p->virt_addr = U64_MAX;
                        }
                }
                return EFI_SUCCESS;
index b9ce639..33a7811 100644
@@ -765,9 +765,9 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle)
  * relocated by efi_relocate_kernel.
  * On failure, we exit to the firmware via efi_exit instead of returning.
  */
-unsigned long efi_main(efi_handle_t handle,
-                            efi_system_table_t *sys_table_arg,
-                            struct boot_params *boot_params)
+asmlinkage unsigned long efi_main(efi_handle_t handle,
+                                 efi_system_table_t *sys_table_arg,
+                                 struct boot_params *boot_params)
 {
        unsigned long bzimage_addr = (unsigned long)startup_32;
        unsigned long buffer_start, buffer_end;
index 87a6276..93d33f6 100644
@@ -38,7 +38,8 @@ SECTIONS
        }
 }
 
-PROVIDE(__efistub__gzdata_size = ABSOLUTE(. - __efistub__gzdata_start));
+PROVIDE(__efistub__gzdata_size =
+               ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start));
 
 PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext));
 PROVIDE(__data_size = ABSOLUTE(_end - _etext));
index d28e715..d0daacd 100644
@@ -41,7 +41,7 @@ static bool __init efi_virtmap_init(void)
 
                if (!(md->attribute & EFI_MEMORY_RUNTIME))
                        continue;
-               if (md->virt_addr == 0)
+               if (md->virt_addr == U64_MAX)
                        return false;
 
                ret = efi_create_mapping(&efi_mm, md);
index dd74d2a..433b615 100644
@@ -7,6 +7,7 @@
  */
 
 #include <linux/types.h>
+#include <linux/sizes.h>
 #include <linux/errno.h>
 #include <linux/init.h>
 #include <linux/module.h>
@@ -20,19 +21,19 @@ static struct efivars *__efivars;
 
 static DEFINE_SEMAPHORE(efivars_lock);
 
-efi_status_t check_var_size(u32 attributes, unsigned long size)
+static efi_status_t check_var_size(u32 attributes, unsigned long size)
 {
        const struct efivar_operations *fops;
 
        fops = __efivars->ops;
 
        if (!fops->query_variable_store)
-               return EFI_UNSUPPORTED;
+               return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
 
        return fops->query_variable_store(attributes, size, false);
 }
-EXPORT_SYMBOL_NS_GPL(check_var_size, EFIVAR);
 
+static
 efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
 {
        const struct efivar_operations *fops;
@@ -40,11 +41,10 @@ efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size)
        fops = __efivars->ops;
 
        if (!fops->query_variable_store)
-               return EFI_UNSUPPORTED;
+               return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
 
        return fops->query_variable_store(attributes, size, true);
 }
-EXPORT_SYMBOL_NS_GPL(check_var_size_nonblocking, EFIVAR);
 
 /**
  * efivars_kobject - get the kobject for the registered efivars
index ae9371b..8639a4f 100644
@@ -274,9 +274,6 @@ extern int amdgpu_vcnfw_log;
 #define AMDGPU_RESET_VCE                       (1 << 13)
 #define AMDGPU_RESET_VCE1                      (1 << 14)
 
-#define AMDGPU_RESET_LEVEL_SOFT_RECOVERY (1 << 0)
-#define AMDGPU_RESET_LEVEL_MODE2 (1 << 1)
-
 /* max cursor sizes (in pixels) */
 #define CIK_CURSOR_WIDTH 128
 #define CIK_CURSOR_HEIGHT 128
@@ -1065,7 +1062,6 @@ struct amdgpu_device {
 
        struct work_struct              reset_work;
 
-       uint32_t                                                amdgpu_reset_level_mask;
        bool                            job_hang;
 };
 
index 9e98f38..0561812 100644
@@ -75,9 +75,6 @@ void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev)
                return;
 
        adev->kfd.dev = kgd2kfd_probe(adev, vf);
-
-       if (adev->kfd.dev)
-               amdgpu_amdkfd_total_mem_size += adev->gmc.real_vram_size;
 }
 
 /**
@@ -137,7 +134,6 @@ static void amdgpu_amdkfd_reset_work(struct work_struct *work)
        reset_context.method = AMD_RESET_METHOD_NONE;
        reset_context.reset_req_dev = adev;
        clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-       clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
        amdgpu_device_gpu_recover(adev, NULL, &reset_context);
 }
@@ -201,6 +197,8 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
                adev->kfd.init_complete = kgd2kfd_device_init(adev->kfd.dev,
                                                adev_to_drm(adev), &gpu_resources);
 
+               amdgpu_amdkfd_total_mem_size += adev->gmc.real_vram_size;
+
                INIT_WORK(&adev->kfd.reset_work, amdgpu_amdkfd_reset_work);
        }
 }
@@ -210,6 +208,7 @@ void amdgpu_amdkfd_device_fini_sw(struct amdgpu_device *adev)
        if (adev->kfd.dev) {
                kgd2kfd_device_exit(adev->kfd.dev);
                adev->kfd.dev = NULL;
+               amdgpu_amdkfd_total_mem_size -= adev->gmc.real_vram_size;
        }
 }
 
index 0b0a72c..7e80caa 100644
@@ -111,7 +111,7 @@ static int init_interrupts_v11(struct amdgpu_device *adev, uint32_t pipe_id)
 
        lock_srbm(adev, mec, pipe, 0, 0);
 
-       WREG32(SOC15_REG_OFFSET(GC, 0, regCPC_INT_CNTL),
+       WREG32_SOC15(GC, 0, regCPC_INT_CNTL,
                CP_INT_CNTL_RING0__TIME_STAMP_INT_ENABLE_MASK |
                CP_INT_CNTL_RING0__OPCODE_ERROR_INT_ENABLE_MASK);
 
index 6066aeb..de61a85 100644
@@ -1954,8 +1954,6 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
                return PTR_ERR(ent);
        }
 
-       debugfs_create_u32("amdgpu_reset_level", 0600, root, &adev->amdgpu_reset_level_mask);
-
        /* Register debugfs entries for amdgpu_ttm */
        amdgpu_ttm_debugfs_init(adev);
        amdgpu_debugfs_pm_init(adev);
index ab8f970..e0445e8 100644
@@ -2928,6 +2928,14 @@ static int amdgpu_device_ip_suspend_phase1(struct amdgpu_device *adev)
        amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
        amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
 
+       /*
+        * Per PMFW team's suggestion, driver needs to handle gfxoff
+        * and df cstate features disablement for gpu reset(e.g. Mode1Reset)
+        * scenario. Add the missing df cstate disablement here.
+        */
+       if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW))
+               dev_warn(adev->dev, "Failed to disallow df cstate");
+
        for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
                if (!adev->ip_blocks[i].status.valid)
                        continue;
@@ -5210,7 +5218,6 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 
        reset_context->job = job;
        reset_context->hive = hive;
-
        /*
         * Build list of devices to reset.
         * In case we are in XGMI hive mode, resort the device list
@@ -5337,11 +5344,8 @@ retry:   /* Rest of adevs pre asic reset from XGMI hive. */
                        amdgpu_ras_resume(adev);
        } else {
                r = amdgpu_do_asic_reset(device_list_handle, reset_context);
-               if (r && r == -EAGAIN) {
-                       set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags);
-                       adev->asic_reset_res = 0;
+               if (r && r == -EAGAIN)
                        goto retry;
-               }
 
                if (!r && gpu_reset_for_dev_remove)
                        goto recover_end;
@@ -5777,7 +5781,6 @@ pci_ers_result_t amdgpu_pci_slot_reset(struct pci_dev *pdev)
        reset_context.reset_req_dev = adev;
        set_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
        set_bit(AMDGPU_SKIP_HW_RESET, &reset_context.flags);
-       set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
        adev->no_hw_access = true;
        r = amdgpu_device_pre_asic_reset(adev, &reset_context);
index 23998f7..1a06b8d 100644
@@ -38,8 +38,6 @@
 #include <linux/pci.h>
 #include <linux/pm_runtime.h>
 #include <drm/drm_crtc_helper.h>
-#include <drm/drm_damage_helper.h>
-#include <drm/drm_drv.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_fb_helper.h>
@@ -500,12 +498,6 @@ static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
        .create_handle = drm_gem_fb_create_handle,
 };
 
-static const struct drm_framebuffer_funcs amdgpu_fb_funcs_atomic = {
-       .destroy = drm_gem_fb_destroy,
-       .create_handle = drm_gem_fb_create_handle,
-       .dirty = drm_atomic_helper_dirtyfb,
-};
-
 uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
                                          uint64_t bo_flags)
 {
@@ -1108,10 +1100,8 @@ static int amdgpu_display_gem_fb_verify_and_init(struct drm_device *dev,
        if (ret)
                goto err;
 
-       if (drm_drv_uses_atomic_modeset(dev))
-               ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs_atomic);
-       else
-               ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
+       ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
+
        if (ret)
                goto err;
 
index 46c9933..cd968e7 100644
@@ -72,7 +72,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
                reset_context.method = AMD_RESET_METHOD_NONE;
                reset_context.reset_req_dev = adev;
                clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-               clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
                r = amdgpu_device_gpu_recover(ring->adev, job, &reset_context);
                if (r)
index e6a9b9f..2e8f6cd 100644
@@ -688,13 +688,16 @@ int amdgpu_bo_create_vm(struct amdgpu_device *adev,
         * num of amdgpu_vm_pt entries.
         */
        BUG_ON(bp->bo_ptr_size < sizeof(struct amdgpu_bo_vm));
-       bp->destroy = &amdgpu_bo_vm_destroy;
        r = amdgpu_bo_create(adev, bp, &bo_ptr);
        if (r)
                return r;
 
        *vmbo_ptr = to_amdgpu_bo_vm(bo_ptr);
        INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list);
+       /* Set destroy callback to amdgpu_bo_vm_destroy after vmbo->shadow_list
+        * is initialized.
+        */
+       bo_ptr->tbo.destroy = &amdgpu_bo_vm_destroy;
        return r;
 }
 
index ccebd8e..a4b47e1 100644
@@ -1950,7 +1950,6 @@ static void amdgpu_ras_do_recovery(struct work_struct *work)
                reset_context.method = AMD_RESET_METHOD_NONE;
                reset_context.reset_req_dev = adev;
                clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-               clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
                amdgpu_device_gpu_recover(ras->adev, NULL, &reset_context);
        }
@@ -2268,6 +2267,25 @@ static int amdgpu_ras_recovery_fini(struct amdgpu_device *adev)
 
 static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
 {
+       if (amdgpu_sriov_vf(adev)) {
+               switch (adev->ip_versions[MP0_HWIP][0]) {
+               case IP_VERSION(13, 0, 2):
+                       return true;
+               default:
+                       return false;
+               }
+       }
+
+       if (adev->asic_type == CHIP_IP_DISCOVERY) {
+               switch (adev->ip_versions[MP0_HWIP][0]) {
+               case IP_VERSION(13, 0, 0):
+               case IP_VERSION(13, 0, 10):
+                       return true;
+               default:
+                       return false;
+               }
+       }
+
        return adev->asic_type == CHIP_VEGA10 ||
                adev->asic_type == CHIP_VEGA20 ||
                adev->asic_type == CHIP_ARCTURUS ||
@@ -2311,11 +2329,6 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev)
            !amdgpu_ras_asic_supported(adev))
                return;
 
-       /* If driver run on sriov guest side, only enable ras for aldebaran */
-       if (amdgpu_sriov_vf(adev) &&
-               adev->ip_versions[MP1_HWIP][0] != IP_VERSION(13, 0, 2))
-               return;
-
        if (!adev->gmc.xgmi.connected_to_cpu) {
                if (amdgpu_atomfirmware_mem_ecc_supported(adev)) {
                        dev_info(adev->dev, "MEM ECC is active.\n");
@@ -2877,9 +2890,9 @@ static int amdgpu_bad_page_notifier(struct notifier_block *nb,
        err_data.err_addr =
                kcalloc(adev->umc.max_ras_err_cnt_per_query,
                        sizeof(struct eeprom_table_record), GFP_KERNEL);
-       if(!err_data.err_addr) {
-               dev_warn(adev->dev, "Failed to alloc memory for "
-                               "umc error address record in mca notifier!\n");
+       if (!err_data.err_addr) {
+               dev_warn(adev->dev,
+                       "Failed to alloc memory for umc error record in mca notifier!\n");
                return NOTIFY_DONE;
        }
 
@@ -2889,7 +2902,7 @@ static int amdgpu_bad_page_notifier(struct notifier_block *nb,
        if (adev->umc.ras &&
            adev->umc.ras->convert_ras_error_address)
                adev->umc.ras->convert_ras_error_address(adev,
-                       &err_data, 0, ch_inst, umc_inst, m->addr);
+                       &err_data, m->addr, ch_inst, umc_inst);
 
        if (amdgpu_bad_page_threshold != 0) {
                amdgpu_ras_add_bad_pages(adev, err_data.err_addr,
index 9da5ead..f778466 100644
@@ -37,8 +37,6 @@ int amdgpu_reset_init(struct amdgpu_device *adev)
 {
        int ret = 0;
 
-       adev->amdgpu_reset_level_mask = 0x1;
-
        switch (adev->ip_versions[MP1_HWIP][0]) {
        case IP_VERSION(13, 0, 2):
                ret = aldebaran_reset_init(adev);
@@ -76,12 +74,6 @@ int amdgpu_reset_prepare_hwcontext(struct amdgpu_device *adev,
 {
        struct amdgpu_reset_handler *reset_handler = NULL;
 
-       if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2))
-               return -ENOSYS;
-
-       if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags))
-               return -ENOSYS;
-
        if (adev->reset_cntl && adev->reset_cntl->get_reset_handler)
                reset_handler = adev->reset_cntl->get_reset_handler(
                        adev->reset_cntl, reset_context);
@@ -98,12 +90,6 @@ int amdgpu_reset_perform_reset(struct amdgpu_device *adev,
        int ret;
        struct amdgpu_reset_handler *reset_handler = NULL;
 
-       if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2))
-               return -ENOSYS;
-
-       if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags))
-               return -ENOSYS;
-
        if (adev->reset_cntl)
                reset_handler = adev->reset_cntl->get_reset_handler(
                        adev->reset_cntl, reset_context);
index f5318fe..f4a501f 100644
@@ -30,8 +30,7 @@ enum AMDGPU_RESET_FLAGS {
 
        AMDGPU_NEED_FULL_RESET = 0,
        AMDGPU_SKIP_HW_RESET = 1,
-       AMDGPU_SKIP_MODE2_RESET = 2,
-       AMDGPU_RESET_FOR_DEVICE_REMOVE = 3,
+       AMDGPU_RESET_FOR_DEVICE_REMOVE = 2,
 };
 
 struct amdgpu_reset_context {
index 3e316b0..d3558c3 100644
@@ -405,9 +405,6 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
 {
        ktime_t deadline = ktime_add_us(ktime_get(), 10000);
 
-       if (!(ring->adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_SOFT_RECOVERY))
-               return false;
-
        if (amdgpu_sriov_vf(ring->adev) || !ring->funcs->soft_recovery || !fence)
                return false;
 
index 3949b7e..ea5278f 100644
@@ -222,8 +222,10 @@ int amdgpu_sdma_init_microcode(struct amdgpu_device *adev,
                adev->sdma.instance[instance].fw->data;
        version_major = le16_to_cpu(header->header_version_major);
 
-       if ((duplicate && instance) || (!duplicate && version_major > 1))
-               return -EINVAL;
+       if ((duplicate && instance) || (!duplicate && version_major > 1)) {
+               err = -EINVAL;
+               goto out;
+       }
 
        err = amdgpu_sdma_init_inst_ctx(&adev->sdma.instance[instance]);
        if (err)
@@ -272,7 +274,7 @@ int amdgpu_sdma_init_microcode(struct amdgpu_device *adev,
                                ALIGN(le32_to_cpu(sdma_hdr->ctl_ucode_size_bytes), PAGE_SIZE);
                        break;
                default:
-                       return -EINVAL;
+                       err = -EINVAL;
                }
        }
 
@@ -283,3 +285,24 @@ out:
        }
        return err;
 }
+
+void amdgpu_sdma_unset_buffer_funcs_helper(struct amdgpu_device *adev)
+{
+       struct amdgpu_ring *sdma;
+       int i;
+
+       for (i = 0; i < adev->sdma.num_instances; i++) {
+               if (adev->sdma.has_page_queue) {
+                       sdma = &adev->sdma.instance[i].page;
+                       if (adev->mman.buffer_funcs_ring == sdma) {
+                               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+                               break;
+                       }
+               }
+               sdma = &adev->sdma.instance[i].ring;
+               if (adev->mman.buffer_funcs_ring == sdma) {
+                       amdgpu_ttm_set_buffer_funcs_status(adev, false);
+                       break;
+               }
+       }
+}
index d2d8827..7d99205 100644
@@ -128,4 +128,6 @@ int amdgpu_sdma_init_microcode(struct amdgpu_device *adev,
         char *fw_name, u32 instance, bool duplicate);
 void amdgpu_sdma_destroy_inst_ctx(struct amdgpu_device *adev,
         bool duplicate);
+void amdgpu_sdma_unset_buffer_funcs_helper(struct amdgpu_device *adev);
+
 #endif
index b1c4553..57277b1 100644
@@ -424,8 +424,9 @@ error:
 static bool amdgpu_mem_visible(struct amdgpu_device *adev,
                               struct ttm_resource *mem)
 {
-       uint64_t mem_size = (u64)mem->num_pages << PAGE_SHIFT;
+       u64 mem_size = (u64)mem->num_pages << PAGE_SHIFT;
        struct amdgpu_res_cursor cursor;
+       u64 end;
 
        if (mem->mem_type == TTM_PL_SYSTEM ||
            mem->mem_type == TTM_PL_TT)
@@ -434,12 +435,21 @@ static bool amdgpu_mem_visible(struct amdgpu_device *adev,
                return false;
 
        amdgpu_res_first(mem, 0, mem_size, &cursor);
+       end = cursor.start + cursor.size;
+       while (cursor.remaining) {
+               amdgpu_res_next(&cursor, cursor.size);
 
-       /* ttm_resource_ioremap only supports contiguous memory */
-       if (cursor.size != mem_size)
-               return false;
+               if (!cursor.remaining)
+                       break;
+
+               /* ttm_resource_ioremap only supports contiguous memory */
+               if (end != cursor.start)
+                       return false;
+
+               end = cursor.start + cursor.size;
+       }
 
-       return cursor.start + cursor.size <= adev->gmc.visible_vram_size;
+       return end <= adev->gmc.visible_vram_size;
 }
 
 /*
index 2fb4951..e464392 100644 (file)
@@ -22,8 +22,6 @@
 #define __AMDGPU_UMC_H__
 #include "amdgpu_ras.h"
 
-#define UMC_INVALID_ADDR 0x1ULL
-
 /*
  * (addr / 256) * 4096, the higher 26 bits in ErrorAddr
  * is the index of 4KB block
@@ -54,9 +52,8 @@ struct amdgpu_umc_ras {
        void (*err_cnt_init)(struct amdgpu_device *adev);
        bool (*query_ras_poison_mode)(struct amdgpu_device *adev);
        void (*convert_ras_error_address)(struct amdgpu_device *adev,
-                                                struct ras_err_data *err_data,
-                                                uint32_t umc_reg_offset, uint32_t ch_inst,
-                                                uint32_t umc_inst, uint64_t mca_addr);
+                               struct ras_err_data *err_data, uint64_t err_addr,
+                               uint32_t ch_inst, uint32_t umc_inst);
        void (*ecc_info_query_ras_error_count)(struct amdgpu_device *adev,
                                      void *ras_error_status);
        void (*ecc_info_query_ras_error_address)(struct amdgpu_device *adev,
index e4af40b..9c765b0 100644 (file)
@@ -726,6 +726,12 @@ void amdgpu_detect_virtualization(struct amdgpu_device *adev)
                        adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
        }
 
+       if (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)
+               /* VF MMIO access (except mailbox range) from CPU
+                * will be blocked during sriov runtime
+                */
+               adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECT;
+
        /* we have the ability to check now */
        if (amdgpu_sriov_vf(adev)) {
                switch (adev->asic_type) {
index d94c31e..49c4347 100644 (file)
@@ -31,6 +31,7 @@
 #define AMDGPU_SRIOV_CAPS_IS_VF        (1 << 2) /* this GPU is a virtual function */
 #define AMDGPU_PASSTHROUGH_MODE        (1 << 3) /* the whole GPU is pass through for VM */
 #define AMDGPU_SRIOV_CAPS_RUNTIME      (1 << 4) /* is out of full access mode */
+#define AMDGPU_VF_MMIO_ACCESS_PROTECT  (1 << 5) /* MMIO write access is not allowed in sriov runtime */
 
 /* flags for indirect register access path supported by rlcg for sriov */
 #define AMDGPU_RLCG_GC_WRITE_LEGACY    (0x8 << 28)
@@ -297,6 +298,9 @@ struct amdgpu_video_codec_info;
 #define amdgpu_passthrough(adev) \
 ((adev)->virt.caps & AMDGPU_PASSTHROUGH_MODE)
 
+#define amdgpu_sriov_vf_mmio_access_protection(adev) \
+((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECT)
+
 static inline bool is_virtual_machine(void)
 {
 #if defined(CONFIG_X86)
index 83b0c5d..2291aa1 100644 (file)
@@ -2338,7 +2338,11 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
         */
 #ifdef CONFIG_X86_64
        if (amdgpu_vm_update_mode == -1) {
-               if (amdgpu_gmc_vram_full_visible(&adev->gmc))
+               /* For ASICs with VF MMIO access protection,
+                * avoid using the CPU for VM table updates
+                */
+               if (amdgpu_gmc_vram_full_visible(&adev->gmc) &&
+                   !amdgpu_sriov_vf_mmio_access_protection(adev))
                        adev->vm_manager.vm_update_mode =
                                AMDGPU_VM_USE_CPU_FOR_COMPUTE;
                else
index 2b0669c..69e105f 100644 (file)
@@ -116,8 +116,15 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
                                   DMA_RESV_USAGE_BOOKKEEP);
        }
 
-       if (fence && !p->immediate)
+       if (fence && !p->immediate) {
+               /*
+                * Most hw generations now have a separate queue for page table
+                * updates, but when the queue is shared with userspace we need
+                * the extra CPU round trip to correctly flush the TLB.
+                */
+               set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &f->flags);
                swap(*fence, f);
+       }
        dma_fence_put(f);
        return 0;
 
index 5647f13..cbca986 100644 (file)
@@ -309,14 +309,10 @@ static void cik_sdma_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq
  */
 static void cik_sdma_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
        u32 rb_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1))
-                       amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32(mmSDMA0_GFX_RB_CNTL + sdma_offsets[i]);
index 2511097..671ca5a 100644 (file)
@@ -1571,7 +1571,7 @@ static void gfx_v11_0_init_compute_vmid(struct amdgpu_device *adev)
                WREG32_SOC15(GC, 0, regSH_MEM_BASES, sh_mem_bases);
 
                /* Enable trap for each kfd vmid. */
-               data = RREG32(SOC15_REG_OFFSET(GC, 0, regSPI_GDBG_PER_VMID_CNTL));
+               data = RREG32_SOC15(GC, 0, regSPI_GDBG_PER_VMID_CNTL);
                data = REG_SET_FIELD(data, SPI_GDBG_PER_VMID_CNTL, TRAP_EN, 1);
        }
        soc21_grbm_select(adev, 0, 0, 0, 0);
@@ -5076,6 +5076,7 @@ static int gfx_v11_0_set_clockgating_state(void *handle,
        case IP_VERSION(11, 0, 0):
        case IP_VERSION(11, 0, 1):
        case IP_VERSION(11, 0, 2):
+       case IP_VERSION(11, 0, 3):
                gfx_v11_0_update_gfx_clock_gating(adev,
                                state ==  AMD_CG_STATE_GATE);
                break;
index 846ccb6..66dfb57 100644 (file)
@@ -186,6 +186,10 @@ static void gmc_v11_0_flush_vm_hub(struct amdgpu_device *adev, uint32_t vmid,
        /* Use register 17 for GART */
        const unsigned eng = 17;
        unsigned int i;
+       unsigned char hub_ip = 0;
+
+       hub_ip = (vmhub == AMDGPU_GFXHUB_0) ?
+                  GC_HWIP : MMHUB_HWIP;
 
        spin_lock(&adev->gmc.invalidate_lock);
        /*
@@ -199,8 +203,8 @@ static void gmc_v11_0_flush_vm_hub(struct amdgpu_device *adev, uint32_t vmid,
        if (use_semaphore) {
                for (i = 0; i < adev->usec_timeout; i++) {
                        /* a read return value of 1 means semaphore acquire */
-                       tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_sem +
-                                           hub->eng_distance * eng);
+                       tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem +
+                                           hub->eng_distance * eng, hub_ip);
                        if (tmp & 0x1)
                                break;
                        udelay(1);
@@ -210,12 +214,12 @@ static void gmc_v11_0_flush_vm_hub(struct amdgpu_device *adev, uint32_t vmid,
                        DRM_ERROR("Timeout waiting for sem acquire in VM flush!\n");
        }
 
-       WREG32_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req);
+       WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req, hub_ip);
 
        /* Wait for ACK with a delay.*/
        for (i = 0; i < adev->usec_timeout; i++) {
-               tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack +
-                                   hub->eng_distance * eng);
+               tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_ack +
+                                   hub->eng_distance * eng, hub_ip);
                tmp &= 1 << vmid;
                if (tmp)
                        break;
@@ -229,8 +233,8 @@ static void gmc_v11_0_flush_vm_hub(struct amdgpu_device *adev, uint32_t vmid,
                 * add semaphore release after invalidation,
                 * write with 0 means semaphore release
                 */
-               WREG32_NO_KIQ(hub->vm_inv_eng0_sem +
-                             hub->eng_distance * eng, 0);
+               WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem +
+                             hub->eng_distance * eng, 0, hub_ip);
 
        /* Issue additional private vm invalidation to MMHUB */
        if ((vmhub != AMDGPU_GFXHUB_0) &&
index 5cec6b2..fef7d02 100644 (file)
@@ -1156,6 +1156,42 @@ static int mes_v11_0_sw_fini(void *handle)
        return 0;
 }
 
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+       uint32_t data;
+       int i;
+
+       mutex_lock(&adev->srbm_mutex);
+       soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+       /* disable the queue if it's active */
+       if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+               WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+               for (i = 0; i < adev->usec_timeout; i++) {
+                       if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+                               break;
+                       udelay(1);
+               }
+       }
+       data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+       data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+                               DOORBELL_EN, 0);
+       data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+                               DOORBELL_HIT, 1);
+       WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+       WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+       WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+       WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+       WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+       soc21_grbm_select(adev, 0, 0, 0, 0);
+       mutex_unlock(&adev->srbm_mutex);
+
+       adev->mes.ring.sched.ready = false;
+}
+
 static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
 {
        uint32_t tmp;
@@ -1207,6 +1243,9 @@ failure:
 
 static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev)
 {
+       if (adev->mes.ring.sched.ready)
+               mes_v11_0_kiq_dequeue_sched(adev);
+
        mes_v11_0_enable(adev, false);
        return 0;
 }
@@ -1262,9 +1301,6 @@ failure:
 
 static int mes_v11_0_hw_fini(void *handle)
 {
-       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
-       adev->mes.ring.sched.ready = false;
        return 0;
 }
 
@@ -1296,7 +1332,8 @@ static int mes_v11_0_late_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       if (!amdgpu_in_reset(adev))
+       if (!amdgpu_in_reset(adev) &&
+           (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3)))
                amdgpu_mes_self_test(adev);
 
        return 0;
index a2f04b2..12906ba 100644 (file)
@@ -290,7 +290,6 @@ flr_done:
                reset_context.method = AMD_RESET_METHOD_NONE;
                reset_context.reset_req_dev = adev;
                clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-               clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
                amdgpu_device_gpu_recover(adev, NULL, &reset_context);
        }
index a977f00..e07757e 100644 (file)
@@ -317,7 +317,6 @@ flr_done:
                reset_context.method = AMD_RESET_METHOD_NONE;
                reset_context.reset_req_dev = adev;
                clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-               clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
                amdgpu_device_gpu_recover(adev, NULL, &reset_context);
        }
index fd14fa9..288c414 100644 (file)
@@ -529,7 +529,6 @@ static void xgpu_vi_mailbox_flr_work(struct work_struct *work)
                reset_context.method = AMD_RESET_METHOD_NONE;
                reset_context.reset_req_dev = adev;
                clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
-               clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags);
 
                amdgpu_device_gpu_recover(adev, NULL, &reset_context);
        }
index 6bdffdc..c52d246 100644 (file)
@@ -342,14 +342,10 @@ static void sdma_v2_4_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v2_4_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
        u32 rb_cntl, ib_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1))
-               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32(mmSDMA0_GFX_RB_CNTL + sdma_offsets[i]);
index 2584fa3..486d9b5 100644 (file)
@@ -516,14 +516,10 @@ static void sdma_v3_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v3_0_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
        u32 rb_cntl, ib_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1))
-               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32(mmSDMA0_GFX_RB_CNTL + sdma_offsets[i]);
index 7241a9f..1122bd4 100644 (file)
@@ -915,18 +915,12 @@ static void sdma_v4_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
        u32 rb_cntl, ib_cntl;
-       int i, unset = 0;
-
-       for (i = 0; i < adev->sdma.num_instances; i++) {
-               sdma[i] = &adev->sdma.instance[i].ring;
+       int i;
 
-               if ((adev->mman.buffer_funcs_ring == sdma[i]) && unset != 1) {
-                       amdgpu_ttm_set_buffer_funcs_status(adev, false);
-                       unset = 1;
-               }
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
+       for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL);
                rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 0);
                WREG32_SDMA(i, mmSDMA0_GFX_RB_CNTL, rb_cntl);
@@ -957,20 +951,12 @@ static void sdma_v4_0_rlc_stop(struct amdgpu_device *adev)
  */
 static void sdma_v4_0_page_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
        u32 rb_cntl, ib_cntl;
        int i;
-       bool unset = false;
 
-       for (i = 0; i < adev->sdma.num_instances; i++) {
-               sdma[i] = &adev->sdma.instance[i].page;
-
-               if ((adev->mman.buffer_funcs_ring == sdma[i]) &&
-                       (!unset)) {
-                       amdgpu_ttm_set_buffer_funcs_status(adev, false);
-                       unset = true;
-               }
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
+       for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32_SDMA(i, mmSDMA0_PAGE_RB_CNTL);
                rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_PAGE_RB_CNTL,
                                        RB_ENABLE, 0);
@@ -1431,11 +1417,6 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
                WREG32_SDMA(i, mmSDMA0_CNTL, temp);
 
                if (!amdgpu_sriov_vf(adev)) {
-                       ring = &adev->sdma.instance[i].ring;
-                       adev->nbio.funcs->sdma_doorbell_range(adev, i,
-                               ring->use_doorbell, ring->doorbell_index,
-                               adev->doorbell_index.sdma_doorbell_range);
-
                        /* unhalt engine */
                        temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
                        temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
@@ -1954,8 +1935,11 @@ static int sdma_v4_0_hw_fini(void *handle)
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
        int i;
 
-       if (amdgpu_sriov_vf(adev))
+       if (amdgpu_sriov_vf(adev)) {
+               /* disable the scheduler for SDMA */
+               amdgpu_sdma_unset_buffer_funcs_helper(adev);
                return 0;
+       }
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
index c05c3ee..d4d9f19 100644 (file)
@@ -584,14 +584,10 @@ static void sdma_v5_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v5_0_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
        u32 rb_cntl, ib_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1))
-               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32_SOC15_IP(GC, sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL));
@@ -1460,8 +1456,11 @@ static int sdma_v5_0_hw_fini(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       if (amdgpu_sriov_vf(adev))
+       if (amdgpu_sriov_vf(adev)) {
+               /* disable the scheduler for SDMA */
+               amdgpu_sdma_unset_buffer_funcs_helper(adev);
                return 0;
+       }
 
        sdma_v5_0_ctx_switch_enable(adev, false);
        sdma_v5_0_enable(adev, false);
index f136fec..809eca5 100644 (file)
@@ -414,18 +414,10 @@ static void sdma_v5_2_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v5_2_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
-       struct amdgpu_ring *sdma2 = &adev->sdma.instance[2].ring;
-       struct amdgpu_ring *sdma3 = &adev->sdma.instance[3].ring;
        u32 rb_cntl, ib_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1) ||
-           (adev->mman.buffer_funcs_ring == sdma2) ||
-           (adev->mman.buffer_funcs_ring == sdma3))
-               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_CNTL));
@@ -1357,8 +1349,11 @@ static int sdma_v5_2_hw_fini(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       if (amdgpu_sriov_vf(adev))
+       if (amdgpu_sriov_vf(adev)) {
+               /* disable the scheduler for SDMA */
+               amdgpu_sdma_unset_buffer_funcs_helper(adev);
                return 0;
+       }
 
        sdma_v5_2_ctx_switch_enable(adev, false);
        sdma_v5_2_enable(adev, false);
index db51230..da3beb0 100644 (file)
@@ -398,14 +398,10 @@ static void sdma_v6_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 se
  */
 static void sdma_v6_0_gfx_stop(struct amdgpu_device *adev)
 {
-       struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
-       struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
        u32 rb_cntl, ib_cntl;
        int i;
 
-       if ((adev->mman.buffer_funcs_ring == sdma0) ||
-           (adev->mman.buffer_funcs_ring == sdma1))
-               amdgpu_ttm_set_buffer_funcs_status(adev, false);
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
 
        for (i = 0; i < adev->sdma.num_instances; i++) {
                rb_cntl = RREG32_SOC15_IP(GC, sdma_v6_0_get_reg_offset(adev, i, regSDMA0_QUEUE0_RB_CNTL));
@@ -415,9 +411,6 @@ static void sdma_v6_0_gfx_stop(struct amdgpu_device *adev)
                ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_QUEUE0_IB_CNTL, IB_ENABLE, 0);
                WREG32_SOC15_IP(GC, sdma_v6_0_get_reg_offset(adev, i, regSDMA0_QUEUE0_IB_CNTL), ib_cntl);
        }
-
-       sdma0->sched.ready = false;
-       sdma1->sched.ready = false;
 }
 
 /**
@@ -846,7 +839,8 @@ static int sdma_v6_0_mqd_init(struct amdgpu_device *adev, void *mqd,
        m->sdmax_rlcx_rb_cntl =
                order_base_2(prop->queue_size / 4) << SDMA0_QUEUE0_RB_CNTL__RB_SIZE__SHIFT |
                1 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_ENABLE__SHIFT |
-               4 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT;
+               4 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT |
+               1 << SDMA0_QUEUE0_RB_CNTL__F32_WPTR_POLL_ENABLE__SHIFT;
 
        m->sdmax_rlcx_rb_base = lower_32_bits(prop->hqd_base_gpu_addr >> 8);
        m->sdmax_rlcx_rb_base_hi = upper_32_bits(prop->hqd_base_gpu_addr >> 8);
@@ -1317,8 +1311,11 @@ static int sdma_v6_0_hw_fini(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       if (amdgpu_sriov_vf(adev))
+       if (amdgpu_sriov_vf(adev)) {
+               /* disable the scheduler for SDMA */
+               amdgpu_sdma_unset_buffer_funcs_helper(adev);
                return 0;
+       }
 
        sdma_v6_0_ctx_switch_enable(adev, false);
        sdma_v6_0_enable(adev, false);
index f675111..4d5e718 100644 (file)
@@ -116,15 +116,14 @@ static void si_dma_stop(struct amdgpu_device *adev)
        u32 rb_cntl;
        unsigned i;
 
+       amdgpu_sdma_unset_buffer_funcs_helper(adev);
+
        for (i = 0; i < adev->sdma.num_instances; i++) {
                ring = &adev->sdma.instance[i].ring;
                /* dma0 */
                rb_cntl = RREG32(DMA_RB_CNTL + sdma_offsets[i]);
                rb_cntl &= ~DMA_RB_ENABLE;
                WREG32(DMA_RB_CNTL + sdma_offsets[i], rb_cntl);
-
-               if (adev->mman.buffer_funcs_ring == ring)
-                       amdgpu_ttm_set_buffer_funcs_status(adev, false);
        }
 }
 
index 7aa570c..81a6d5b 100644 (file)
 #include "amdgpu_psp.h"
 #include "amdgpu_xgmi.h"
 
+static bool sienna_cichlid_is_mode2_default(struct amdgpu_reset_control *reset_ctl)
+{
+#if 0
+       struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
+
+       if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 7) &&
+           adev->pm.fw_version >= 0x3a5500 && !amdgpu_sriov_vf(adev))
+               return true;
+#endif
+       return false;
+}
+
 static struct amdgpu_reset_handler *
 sienna_cichlid_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
                            struct amdgpu_reset_context *reset_context)
 {
        struct amdgpu_reset_handler *handler;
-       struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
 
        if (reset_context->method != AMD_RESET_METHOD_NONE) {
                list_for_each_entry(handler, &reset_ctl->reset_handlers,
@@ -44,15 +55,13 @@ sienna_cichlid_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
                        if (handler->reset_method == reset_context->method)
                                return handler;
                }
-       } else {
-               list_for_each_entry(handler, &reset_ctl->reset_handlers,
+       }
+
+       if (sienna_cichlid_is_mode2_default(reset_ctl)) {
+               list_for_each_entry (handler, &reset_ctl->reset_handlers,
                                     handler_list) {
-                       if (handler->reset_method == AMD_RESET_METHOD_MODE2 &&
-                           adev->pm.fw_version >= 0x3a5500 &&
-                           !amdgpu_sriov_vf(adev)) {
-                               reset_context->method = AMD_RESET_METHOD_MODE2;
+                       if (handler->reset_method == AMD_RESET_METHOD_MODE2)
                                return handler;
-                       }
                }
        }
 
index 183024d..e3b2b6b 100644 (file)
@@ -1211,6 +1211,20 @@ static int soc15_common_sw_fini(void *handle)
        return 0;
 }
 
+static void soc15_sdma_doorbell_range_init(struct amdgpu_device *adev)
+{
+       int i;
+
+       /* sdma doorbell range is programmed by hypervisor */
+       if (!amdgpu_sriov_vf(adev)) {
+               for (i = 0; i < adev->sdma.num_instances; i++) {
+                       adev->nbio.funcs->sdma_doorbell_range(adev, i,
+                               true, adev->doorbell_index.sdma_engine[i] << 1,
+                               adev->doorbell_index.sdma_doorbell_range);
+               }
+       }
+}
+
 static int soc15_common_hw_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -1230,6 +1244,13 @@ static int soc15_common_hw_init(void *handle)
 
        /* enable the doorbell aperture */
        soc15_enable_doorbell_aperture(adev, true);
+       /* HW doorbell routing policy: doorbell writes not
+        * in the SDMA/IH/MM/ACV range will be routed to CP,
+        * so we need to init the SDMA doorbell range prior
+        * to CP IP block init and ring test. IH init already
+        * happens before CP.
+        */
+       soc15_sdma_doorbell_range_init(adev);
 
        return 0;
 }
index 16b7576..e080440 100644 (file)
@@ -423,6 +423,7 @@ static bool soc21_need_full_reset(struct amdgpu_device *adev)
        case IP_VERSION(11, 0, 0):
                return amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC);
        case IP_VERSION(11, 0, 2):
+       case IP_VERSION(11, 0, 3):
                return false;
        default:
                return true;
@@ -629,13 +630,18 @@ static int soc21_common_early_init(void *handle)
                        AMD_CG_SUPPORT_JPEG_MGCG;
                adev->pg_flags =
                        AMD_PG_SUPPORT_GFX_PG |
+                       AMD_PG_SUPPORT_VCN |
                        AMD_PG_SUPPORT_VCN_DPG |
                        AMD_PG_SUPPORT_JPEG;
                adev->external_rev_id = adev->rev_id + 0x1;
                break;
        case IP_VERSION(11, 0, 3):
                adev->cg_flags = AMD_CG_SUPPORT_VCN_MGCG |
-                       AMD_CG_SUPPORT_JPEG_MGCG;
+                       AMD_CG_SUPPORT_JPEG_MGCG |
+                       AMD_CG_SUPPORT_GFX_CGCG |
+                       AMD_CG_SUPPORT_GFX_CGLS |
+                       AMD_CG_SUPPORT_REPEATER_FGCG |
+                       AMD_CG_SUPPORT_GFX_MGCG;
                adev->pg_flags = AMD_PG_SUPPORT_VCN |
                        AMD_PG_SUPPORT_VCN_DPG |
                        AMD_PG_SUPPORT_JPEG;
index 939cb20..f17d297 100644 (file)
@@ -327,10 +327,9 @@ static void umc_v6_1_query_error_address(struct amdgpu_device *adev,
                return;
        }
 
-       /* calculate error address if ue/ce error is detected */
+       /* calculate error address if ue error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) {
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
 
                err_addr = RREG64_PCIE((mc_umc_addrt0 + umc_reg_offset) * 4);
                /* the lowest lsb bits should be ignored */
@@ -343,10 +342,7 @@ static void umc_v6_1_query_error_address(struct amdgpu_device *adev,
                                ADDR_OF_256B_BLOCK(channel_index) |
                                OFFSET_IN_256B_BLOCK(err_addr);
 
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC)
-                               == 1)
-                       amdgpu_umc_fill_error_record(err_data, err_addr,
+               amdgpu_umc_fill_error_record(err_data, err_addr,
                                        retired_page, channel_index, umc_inst);
        }
 
index a0d19b7..5d5d031 100644 (file)
@@ -187,20 +187,51 @@ static void umc_v6_7_ecc_info_query_ras_error_count(struct amdgpu_device *adev,
        }
 }
 
+static void umc_v6_7_convert_error_address(struct amdgpu_device *adev,
+                                       struct ras_err_data *err_data, uint64_t err_addr,
+                                       uint32_t ch_inst, uint32_t umc_inst)
+{
+       uint32_t channel_index;
+       uint64_t soc_pa, retired_page, column;
+
+       channel_index =
+               adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
+       /* translate umc channel address to soc pa, 3 parts are included */
+       soc_pa = ADDR_OF_8KB_BLOCK(err_addr) |
+                       ADDR_OF_256B_BLOCK(channel_index) |
+                       OFFSET_IN_256B_BLOCK(err_addr);
+
+       /* The umc channel bits are not original values, they are hashed */
+       SET_CHANNEL_HASH(channel_index, soc_pa);
+
+       /* clear [C4 C3 C2] in soc physical address */
+       soc_pa &= ~(0x7ULL << UMC_V6_7_PA_C2_BIT);
+
+       /* loop for all possibilities of [C4 C3 C2] */
+       for (column = 0; column < UMC_V6_7_NA_MAP_PA_NUM; column++) {
+               retired_page = soc_pa | (column << UMC_V6_7_PA_C2_BIT);
+               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
+               amdgpu_umc_fill_error_record(err_data, err_addr,
+                       retired_page, channel_index, umc_inst);
+
+               /* shift R14 bit */
+               retired_page ^= (0x1ULL << UMC_V6_7_PA_R14_BIT);
+               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
+               amdgpu_umc_fill_error_record(err_data, err_addr,
+                       retired_page, channel_index, umc_inst);
+       }
+}
+
 static void umc_v6_7_ecc_info_query_error_address(struct amdgpu_device *adev,
                                         struct ras_err_data *err_data,
                                         uint32_t ch_inst,
                                         uint32_t umc_inst)
 {
-       uint64_t mc_umc_status, err_addr, soc_pa, retired_page, column;
-       uint32_t channel_index;
+       uint64_t mc_umc_status, err_addr;
        uint32_t eccinfo_table_idx;
        struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
 
        eccinfo_table_idx = umc_inst * adev->umc.channel_inst_num + ch_inst;
-       channel_index =
-               adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
-
        mc_umc_status = ras->umc_ecc.ecc[eccinfo_table_idx].mca_umc_status;
 
        if (mc_umc_status == 0)
@@ -209,42 +240,15 @@ static void umc_v6_7_ecc_info_query_error_address(struct amdgpu_device *adev,
        if (!err_data->err_addr)
                return;
 
-       /* calculate error address if ue/ce error is detected */
+       /* calculate error address if ue error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) {
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
 
                err_addr = ras->umc_ecc.ecc[eccinfo_table_idx].mca_umc_addr;
                err_addr = REG_GET_FIELD(err_addr, MCA_UMC_UMC0_MCUMC_ADDRT0, ErrorAddr);
 
-               /* translate umc channel address to soc pa, 3 parts are included */
-               soc_pa = ADDR_OF_8KB_BLOCK(err_addr) |
-                               ADDR_OF_256B_BLOCK(channel_index) |
-                               OFFSET_IN_256B_BLOCK(err_addr);
-
-               /* The umc channel bits are not original values, they are hashed */
-               SET_CHANNEL_HASH(channel_index, soc_pa);
-
-               /* clear [C4 C3 C2] in soc physical address */
-               soc_pa &= ~(0x7ULL << UMC_V6_7_PA_C2_BIT);
-
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC)
-                               == 1) {
-                       /* loop for all possibilities of [C4 C3 C2] */
-                       for (column = 0; column < UMC_V6_7_NA_MAP_PA_NUM; column++) {
-                               retired_page = soc_pa | (column << UMC_V6_7_PA_C2_BIT);
-                               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
-                               amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
-
-                               /* shift R14 bit */
-                               retired_page ^= (0x1ULL << UMC_V6_7_PA_R14_BIT);
-                               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
-                               amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
-                       }
-               }
+               umc_v6_7_convert_error_address(adev, err_data, err_addr,
+                                       ch_inst, umc_inst);
        }
 }
 
@@ -453,81 +457,40 @@ static void umc_v6_7_query_ras_error_count(struct amdgpu_device *adev,
 static void umc_v6_7_query_error_address(struct amdgpu_device *adev,
                                         struct ras_err_data *err_data,
                                         uint32_t umc_reg_offset, uint32_t ch_inst,
-                                        uint32_t umc_inst, uint64_t mca_addr)
+                                        uint32_t umc_inst)
 {
        uint32_t mc_umc_status_addr;
-       uint32_t channel_index;
-       uint64_t mc_umc_status = 0, mc_umc_addrt0;
-       uint64_t err_addr, soc_pa, retired_page, column;
+       uint64_t mc_umc_status = 0, mc_umc_addrt0, err_addr;
 
-       if (mca_addr == UMC_INVALID_ADDR) {
-               mc_umc_status_addr =
-                       SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_STATUST0);
-               mc_umc_addrt0 =
-                       SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_ADDRT0);
+       mc_umc_status_addr =
+               SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_STATUST0);
+       mc_umc_addrt0 =
+               SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_ADDRT0);
 
-               mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
+       mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
 
-               if (mc_umc_status == 0)
-                       return;
+       if (mc_umc_status == 0)
+               return;
 
-               if (!err_data->err_addr) {
-                       /* clear umc status */
-                       WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
-                       return;
-               }
+       if (!err_data->err_addr) {
+               /* clear umc status */
+               WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
+               return;
        }
 
-       channel_index =
-               adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
-
-       /* calculate error address if ue/ce error is detected */
-       if ((REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) ||
-           mca_addr != UMC_INVALID_ADDR) {
-               if (mca_addr == UMC_INVALID_ADDR) {
-                       err_addr = RREG64_PCIE((mc_umc_addrt0 + umc_reg_offset) * 4);
-                       err_addr =
-                               REG_GET_FIELD(err_addr, MCA_UMC_UMC0_MCUMC_ADDRT0, ErrorAddr);
-               } else {
-                       err_addr = mca_addr;
-               }
+       /* calculate error address if ue error is detected */
+       if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
+               err_addr = RREG64_PCIE((mc_umc_addrt0 + umc_reg_offset) * 4);
+               err_addr =
+                       REG_GET_FIELD(err_addr, MCA_UMC_UMC0_MCUMC_ADDRT0, ErrorAddr);
 
-               /* translate umc channel address to soc pa, 3 parts are included */
-               soc_pa = ADDR_OF_8KB_BLOCK(err_addr) |
-                               ADDR_OF_256B_BLOCK(channel_index) |
-                               OFFSET_IN_256B_BLOCK(err_addr);
-
-               /* The umc channel bits are not original values, they are hashed */
-               SET_CHANNEL_HASH(channel_index, soc_pa);
-
-               /* clear [C4 C3 C2] in soc physical address */
-               soc_pa &= ~(0x7ULL << UMC_V6_7_PA_C2_BIT);
-
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC)
-                               == 1 ||
-                   mca_addr != UMC_INVALID_ADDR) {
-                       /* loop for all possibilities of [C4 C3 C2] */
-                       for (column = 0; column < UMC_V6_7_NA_MAP_PA_NUM; column++) {
-                               retired_page = soc_pa | (column << UMC_V6_7_PA_C2_BIT);
-                               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
-                               amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
-
-                               /* shift R14 bit */
-                               retired_page ^= (0x1ULL << UMC_V6_7_PA_R14_BIT);
-                               dev_info(adev->dev, "Error Address(PA): 0x%llx\n", retired_page);
-                               amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
-                       }
-               }
+               umc_v6_7_convert_error_address(adev, err_data, err_addr,
+                                       ch_inst, umc_inst);
        }
 
        /* clear umc status */
-       if (mca_addr == UMC_INVALID_ADDR)
-               WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
+       WREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4, 0x0ULL);
 }
 
 static void umc_v6_7_query_ras_error_address(struct amdgpu_device *adev,
@@ -549,7 +512,7 @@ static void umc_v6_7_query_ras_error_address(struct amdgpu_device *adev,
                umc_v6_7_query_error_address(adev,
                                             err_data,
                                             umc_reg_offset, ch_inst,
-                                            umc_inst, UMC_INVALID_ADDR);
+                                            umc_inst);
        }
 }
 
@@ -590,5 +553,5 @@ struct amdgpu_umc_ras umc_v6_7_ras = {
        .query_ras_poison_mode = umc_v6_7_query_ras_poison_mode,
        .ecc_info_query_ras_error_count = umc_v6_7_ecc_info_query_ras_error_count,
        .ecc_info_query_ras_error_address = umc_v6_7_ecc_info_query_ras_error_address,
-       .convert_ras_error_address = umc_v6_7_query_error_address,
+       .convert_ras_error_address = umc_v6_7_convert_error_address,
 };
index a8cbda8..91235df 100644
@@ -208,7 +208,10 @@ static void umc_v8_10_query_error_address(struct amdgpu_device *adev,
 {
        uint64_t mc_umc_status_addr;
        uint64_t mc_umc_status, err_addr;
-       uint32_t channel_index;
+       uint64_t mc_umc_addrt0, na_err_addr_base;
+       uint64_t na_err_addr, retired_page_addr;
+       uint32_t channel_index, addr_lsb, col = 0;
+       int ret = 0;
 
        mc_umc_status_addr =
                SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_STATUST0);
@@ -229,13 +232,10 @@ static void umc_v8_10_query_error_address(struct amdgpu_device *adev,
                                        umc_inst * adev->umc.channel_inst_num +
                                        ch_inst];
 
-       /* calculate error address if ue/ce error is detected */
+       /* calculate error address if ue error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
            REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, AddrV) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-            REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) {
-               uint32_t addr_lsb;
-               uint64_t mc_umc_addrt0;
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
 
                mc_umc_addrt0 = SOC15_REG_OFFSET(UMC, 0, regMCA_UMC_UMC0_MCUMC_ADDRT0);
                err_addr = RREG64_PCIE((mc_umc_addrt0 + umc_reg_offset) * 4);
@@ -243,32 +243,24 @@ static void umc_v8_10_query_error_address(struct amdgpu_device *adev,
 
                /* the lowest lsb bits should be ignored */
                addr_lsb = REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, AddrLsb);
-
                err_addr &= ~((0x1ULL << addr_lsb) - 1);
-
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
-                       uint64_t na_err_addr_base = err_addr & ~(0x3ULL << UMC_V8_10_NA_C5_BIT);
-                       uint64_t na_err_addr, retired_page_addr;
-                       uint32_t col = 0;
-                       int ret = 0;
-
-                       /* loop for all possibilities of [C6 C5] in normal address. */
-                       for (col = 0; col < UMC_V8_10_NA_COL_2BITS_POWER_OF_2_NUM; col++) {
-                               na_err_addr = na_err_addr_base | (col << UMC_V8_10_NA_C5_BIT);
-
-                               /* Mapping normal error address to retired soc physical address. */
-                               ret = umc_v8_10_swizzle_mode_na_to_pa(adev, channel_index,
-                                                               na_err_addr, &retired_page_addr);
-                               if (ret) {
-                                       dev_err(adev->dev, "Failed to map pa from umc na.\n");
-                                       break;
-                               }
-                               dev_info(adev->dev, "Error Address(PA): 0x%llx\n",
-                                       retired_page_addr);
-                               amdgpu_umc_fill_error_record(err_data, na_err_addr,
-                                               retired_page_addr, channel_index, umc_inst);
+               na_err_addr_base = err_addr & ~(0x3ULL << UMC_V8_10_NA_C5_BIT);
+
+               /* loop for all possibilities of [C6 C5] in normal address. */
+               for (col = 0; col < UMC_V8_10_NA_COL_2BITS_POWER_OF_2_NUM; col++) {
+                       na_err_addr = na_err_addr_base | (col << UMC_V8_10_NA_C5_BIT);
+
+                       /* Mapping normal error address to retired soc physical address. */
+                       ret = umc_v8_10_swizzle_mode_na_to_pa(adev, channel_index,
+                                                       na_err_addr, &retired_page_addr);
+                       if (ret) {
+                               dev_err(adev->dev, "Failed to map pa from umc na.\n");
+                               break;
                        }
+                       dev_info(adev->dev, "Error Address(PA): 0x%llx\n",
+                               retired_page_addr);
+                       amdgpu_umc_fill_error_record(err_data, na_err_addr,
+                                       retired_page_addr, channel_index, umc_inst);
                }
        }
 
@@ -338,6 +330,31 @@ static void umc_v8_10_err_cnt_init(struct amdgpu_device *adev)
        }
 }
 
+static uint32_t umc_v8_10_query_ras_poison_mode_per_channel(
+                                               struct amdgpu_device *adev,
+                                               uint32_t umc_reg_offset)
+{
+       uint32_t ecc_ctrl_addr, ecc_ctrl;
+
+       ecc_ctrl_addr =
+               SOC15_REG_OFFSET(UMC, 0, regUMCCH0_0_GeccCtrl);
+       ecc_ctrl = RREG32_PCIE((ecc_ctrl_addr +
+                                       umc_reg_offset) * 4);
+
+       return REG_GET_FIELD(ecc_ctrl, UMCCH0_0_GeccCtrl, UCFatalEn);
+}
+
+static bool umc_v8_10_query_ras_poison_mode(struct amdgpu_device *adev)
+{
+       uint32_t umc_reg_offset = 0;
+
+       /* If fatal error reporting is enabled on umc node0 instance0
+        * channel0, the whole device is considered to be in fatal error
+        * mode.
+        */
+       umc_reg_offset = get_umc_v8_10_reg_offset(adev, 0, 0, 0);
+       return !umc_v8_10_query_ras_poison_mode_per_channel(adev, umc_reg_offset);
+}
+
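The new poison-mode query boils down to reading a single field: if UCFatalEn is set on the representative node0/instance0/channel0, the device is in fatal-error mode, otherwise in poison (non-fatal) mode. A minimal sketch, with the field position assumed for illustration (the driver uses REG_GET_FIELD on UMCCH0_0_GeccCtrl):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed field layout for illustration only. */
#define UCFATALEN_SHIFT 0
#define UCFATALEN_MASK  0x1u

/* Poison mode is simply the inverse of the fatal-error enable bit. */
static int poison_mode_enabled(uint32_t gecc_ctrl)
{
	return !((gecc_ctrl >> UCFATALEN_SHIFT) & UCFATALEN_MASK);
}
```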
 const struct amdgpu_ras_block_hw_ops umc_v8_10_ras_hw_ops = {
        .query_ras_error_count = umc_v8_10_query_ras_error_count,
        .query_ras_error_address = umc_v8_10_query_ras_error_address,
@@ -348,4 +365,5 @@ struct amdgpu_umc_ras umc_v8_10_ras = {
                .hw_ops = &umc_v8_10_ras_hw_ops,
        },
        .err_cnt_init = umc_v8_10_err_cnt_init,
+       .query_ras_poison_mode = umc_v8_10_query_ras_poison_mode,
 };
index f35253e..b717fda 100644
@@ -108,20 +108,35 @@ static void umc_v8_7_ecc_info_query_ras_error_count(struct amdgpu_device *adev,
        }
 }
 
+static void umc_v8_7_convert_error_address(struct amdgpu_device *adev,
+                                       struct ras_err_data *err_data, uint64_t err_addr,
+                                       uint32_t ch_inst, uint32_t umc_inst)
+{
+       uint64_t retired_page;
+       uint32_t channel_index;
+
+       channel_index =
+               adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
+
+       /* translate umc channel address to soc pa, which consists of 3 parts */
+       retired_page = ADDR_OF_4KB_BLOCK(err_addr) |
+                       ADDR_OF_256B_BLOCK(channel_index) |
+                       OFFSET_IN_256B_BLOCK(err_addr);
+
+       amdgpu_umc_fill_error_record(err_data, err_addr,
+                               retired_page, channel_index, umc_inst);
+}
+
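The v8.7 helper above splices the retired page together from three parts: the 4KB block of the error address, the 256B channel slot, and the offset within that 256B block. A hedged userspace sketch of such a splice, under an assumed 32-channel/256B interleave (the real ADDR_OF_4KB_BLOCK / ADDR_OF_256B_BLOCK / OFFSET_IN_256B_BLOCK macros encode the actual hardware layout):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical interleave: 32 channels at 256B granularity. */
#define CH_BITS 5

static uint64_t na_to_soc_pa(uint64_t err_addr, uint32_t channel_index)
{
	uint64_t block  = (err_addr >> 12) << (12 + CH_BITS); /* 4KB block */
	uint64_t chan   = (uint64_t)channel_index << 8;       /* 256B slot */
	uint64_t offset = err_addr & 0xFF;                    /* within 256B */

	return block | chan | offset;
}
```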
 static void umc_v8_7_ecc_info_query_error_address(struct amdgpu_device *adev,
                                        struct ras_err_data *err_data,
                                        uint32_t ch_inst,
                                        uint32_t umc_inst)
 {
-       uint64_t mc_umc_status, err_addr, retired_page;
-       uint32_t channel_index;
+       uint64_t mc_umc_status, err_addr;
        uint32_t eccinfo_table_idx;
        struct amdgpu_ras *ras = amdgpu_ras_get_context(adev);
 
        eccinfo_table_idx = umc_inst * adev->umc.channel_inst_num + ch_inst;
-       channel_index =
-               adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
-
        mc_umc_status = ras->umc_ecc.ecc[eccinfo_table_idx].mca_umc_status;
 
        if (mc_umc_status == 0)
@@ -130,24 +145,15 @@ static void umc_v8_7_ecc_info_query_error_address(struct amdgpu_device *adev,
        if (!err_data->err_addr)
                return;
 
-       /* calculate error address if ue/ce error is detected */
+       /* calculate error address if ue error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) {
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
 
                err_addr = ras->umc_ecc.ecc[eccinfo_table_idx].mca_umc_addr;
                err_addr = REG_GET_FIELD(err_addr, MCA_UMC_UMC0_MCUMC_ADDRT0, ErrorAddr);
 
-               /* translate umc channel address to soc pa, 3 parts are included */
-               retired_page = ADDR_OF_4KB_BLOCK(err_addr) |
-                               ADDR_OF_256B_BLOCK(channel_index) |
-                               OFFSET_IN_256B_BLOCK(err_addr);
-
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC)
-                               == 1)
-                       amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
+               umc_v8_7_convert_error_address(adev, err_data, err_addr,
+                                               ch_inst, umc_inst);
        }
 }
 
@@ -324,14 +330,12 @@ static void umc_v8_7_query_error_address(struct amdgpu_device *adev,
                                         uint32_t umc_inst)
 {
        uint32_t lsb, mc_umc_status_addr;
-       uint64_t mc_umc_status, err_addr, retired_page, mc_umc_addrt0;
-       uint32_t channel_index = adev->umc.channel_idx_tbl[umc_inst * adev->umc.channel_inst_num + ch_inst];
+       uint64_t mc_umc_status, err_addr, mc_umc_addrt0;
 
        mc_umc_status_addr =
                SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_STATUST0);
        mc_umc_addrt0 =
                SOC15_REG_OFFSET(UMC, 0, mmMCA_UMC_UMC0_MCUMC_ADDRT0);
-
        mc_umc_status = RREG64_PCIE((mc_umc_status_addr + umc_reg_offset) * 4);
 
        if (mc_umc_status == 0)
@@ -343,10 +347,9 @@ static void umc_v8_7_query_error_address(struct amdgpu_device *adev,
                return;
        }
 
-       /* calculate error address if ue/ce error is detected */
+       /* calculate error address if ue error is detected */
        if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, Val) == 1 &&
-           (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1 ||
-           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, CECC) == 1)) {
+           REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC) == 1) {
 
                err_addr = RREG64_PCIE((mc_umc_addrt0 + umc_reg_offset) * 4);
                /* the lowest lsb bits should be ignored */
@@ -354,16 +357,8 @@ static void umc_v8_7_query_error_address(struct amdgpu_device *adev,
                err_addr = REG_GET_FIELD(err_addr, MCA_UMC_UMC0_MCUMC_ADDRT0, ErrorAddr);
                err_addr &= ~((0x1ULL << lsb) - 1);
 
-               /* translate umc channel address to soc pa, 3 parts are included */
-               retired_page = ADDR_OF_4KB_BLOCK(err_addr) |
-                               ADDR_OF_256B_BLOCK(channel_index) |
-                               OFFSET_IN_256B_BLOCK(err_addr);
-
-               /* we only save ue error information currently, ce is skipped */
-               if (REG_GET_FIELD(mc_umc_status, MCA_UMC_UMC0_MCUMC_STATUST0, UECC)
-                               == 1)
-                       amdgpu_umc_fill_error_record(err_data, err_addr,
-                                       retired_page, channel_index, umc_inst);
+               umc_v8_7_convert_error_address(adev, err_data, err_addr,
+                                                               ch_inst, umc_inst);
        }
 
        /* clear umc status */
index c70c026..2797029 100644
@@ -223,7 +223,7 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
        page = pfn_to_page(pfn);
        svm_range_bo_ref(prange->svm_bo);
        page->zone_device_data = prange->svm_bo;
-       lock_page(page);
+       zone_device_page_init(page);
 }
 
 static void
@@ -410,7 +410,7 @@ svm_migrate_vma_to_vram(struct amdgpu_device *adev, struct svm_range *prange,
        uint64_t npages = (end - start) >> PAGE_SHIFT;
        struct kfd_process_device *pdd;
        struct dma_fence *mfence = NULL;
-       struct migrate_vma migrate;
+       struct migrate_vma migrate = { 0 };
        unsigned long cpages = 0;
        dma_addr_t *scratch;
        void *buf;
@@ -666,7 +666,7 @@ out_oom:
 static long
 svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
                       struct vm_area_struct *vma, uint64_t start, uint64_t end,
-                      uint32_t trigger)
+                      uint32_t trigger, struct page *fault_page)
 {
        struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
        uint64_t npages = (end - start) >> PAGE_SHIFT;
@@ -674,7 +674,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
        unsigned long cpages = 0;
        struct kfd_process_device *pdd;
        struct dma_fence *mfence = NULL;
-       struct migrate_vma migrate;
+       struct migrate_vma migrate = { 0 };
        dma_addr_t *scratch;
        void *buf;
        int r = -ENOMEM;
@@ -697,6 +697,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
 
        migrate.src = buf;
        migrate.dst = migrate.src + npages;
+       migrate.fault_page = fault_page;
        scratch = (dma_addr_t *)(migrate.dst + npages);
 
        kfd_smi_event_migration_start(adev->kfd.dev, p->lead_thread->pid,
@@ -764,7 +765,7 @@ out:
  * 0 - OK, otherwise error code
  */
 int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
-                           uint32_t trigger)
+                           uint32_t trigger, struct page *fault_page)
 {
        struct amdgpu_device *adev;
        struct vm_area_struct *vma;
@@ -805,7 +806,8 @@ int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
                }
 
                next = min(vma->vm_end, end);
-               r = svm_migrate_vma_to_ram(adev, prange, vma, addr, next, trigger);
+               r = svm_migrate_vma_to_ram(adev, prange, vma, addr, next, trigger,
+                       fault_page);
                if (r < 0) {
                        pr_debug("failed %ld to migrate prange %p\n", r, prange);
                        break;
@@ -849,7 +851,7 @@ svm_migrate_vram_to_vram(struct svm_range *prange, uint32_t best_loc,
        pr_debug("from gpu 0x%x to gpu 0x%x\n", prange->actual_loc, best_loc);
 
        do {
-               r = svm_migrate_vram_to_ram(prange, mm, trigger);
+               r = svm_migrate_vram_to_ram(prange, mm, trigger, NULL);
                if (r)
                        return r;
        } while (prange->actual_loc && --retries);
@@ -950,7 +952,8 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
        }
 
        r = svm_migrate_vram_to_ram(prange, vmf->vma->vm_mm,
-                                   KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU);
+                                   KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU,
+                                   vmf->page);
        if (r)
                pr_debug("failed %d migrate svms 0x%p range 0x%p [0x%lx 0x%lx]\n",
                         r, prange->svms, prange, prange->start, prange->last);
index b3f0754..a5d7e6d 100644
@@ -43,7 +43,7 @@ enum MIGRATION_COPY_DIR {
 int svm_migrate_to_vram(struct svm_range *prange,  uint32_t best_loc,
                        struct mm_struct *mm, uint32_t trigger);
 int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
-                           uint32_t trigger);
+                           uint32_t trigger, struct page *fault_page);
 unsigned long
 svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr);
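The signature change above threads the faulting page from the CPU page-fault handler down into the migrate_vma request; every non-fault caller passes NULL. A userspace mock of the plumbing (the types here are stand-ins, not the kernel's struct page / struct migrate_vma):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types for illustration only. */
struct page { int id; };
struct migrate_vma { struct page *fault_page; };

/* Mirrors the new plumbing: the CPU page-fault path passes vmf->page,
 * other callers pass NULL, and the pointer lands in the migrate request. */
static int vma_to_ram(struct migrate_vma *migrate, struct page *fault_page)
{
	migrate->fault_page = fault_page;
	return 0;
}
```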
 
index 26b53b6..4f6390f 100644
@@ -333,7 +333,8 @@ static void update_mqd_sdma(struct mqd_manager *mm, void *mqd,
                << SDMA0_QUEUE0_RB_CNTL__RB_SIZE__SHIFT |
                q->vmid << SDMA0_QUEUE0_RB_CNTL__RB_VMID__SHIFT |
                1 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_ENABLE__SHIFT |
-               6 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT;
+               6 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT |
+               1 << SDMA0_QUEUE0_RB_CNTL__F32_WPTR_POLL_ENABLE__SHIFT;
 
        m->sdmax_rlcx_rb_base = lower_32_bits(q->queue_address >> 8);
        m->sdmax_rlcx_rb_base_hi = upper_32_bits(q->queue_address >> 8);
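The hunk above adds one more field to the OR-composed ring-buffer control word. A sketch of that composition with illustrative shift values (the real SDMA0_QUEUE0_RB_CNTL__*__SHIFT constants come from the register headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field positions only. */
#define RB_SIZE_SHIFT               1
#define RPTR_WRITEBACK_ENABLE_SHIFT 12
#define F32_WPTR_POLL_ENABLE_SHIFT  13
#define RPTR_WRITEBACK_TIMER_SHIFT  16
#define RB_VMID_SHIFT               24

/* Compose the control word as update_mqd_sdma does, now with F32 wptr
 * polling enabled as in the hunk above. */
static uint32_t build_rb_cntl(uint32_t rb_size_log2, uint32_t vmid)
{
	return rb_size_log2 << RB_SIZE_SHIFT |
	       vmid << RB_VMID_SHIFT |
	       1u << RPTR_WRITEBACK_ENABLE_SHIFT |
	       6u << RPTR_WRITEBACK_TIMER_SHIFT |
	       1u << F32_WPTR_POLL_ENABLE_SHIFT;
}
```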
index f5913ba..64fdf63 100644
@@ -2913,13 +2913,15 @@ retry_write_locked:
                                 */
                                if (prange->actual_loc)
                                        r = svm_migrate_vram_to_ram(prange, mm,
-                                          KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU);
+                                          KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU,
+                                          NULL);
                                else
                                        r = 0;
                        }
                } else {
                        r = svm_migrate_vram_to_ram(prange, mm,
-                                       KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU);
+                                       KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU,
+                                       NULL);
                }
                if (r) {
                        pr_debug("failed %d to migrate svms %p [0x%lx 0x%lx]\n",
@@ -3278,7 +3280,8 @@ svm_range_trigger_migration(struct mm_struct *mm, struct svm_range *prange,
                return 0;
 
        if (!best_loc) {
-               r = svm_migrate_vram_to_ram(prange, mm, KFD_MIGRATE_TRIGGER_PREFETCH);
+               r = svm_migrate_vram_to_ram(prange, mm,
+                                       KFD_MIGRATE_TRIGGER_PREFETCH, NULL);
                *migrated = !r;
                return r;
        }
@@ -3339,7 +3342,7 @@ static void svm_range_evict_svm_bo_worker(struct work_struct *work)
                mutex_lock(&prange->migrate_mutex);
                do {
                        r = svm_migrate_vram_to_ram(prange, mm,
-                                               KFD_MIGRATE_TRIGGER_TTM_EVICTION);
+                                       KFD_MIGRATE_TRIGGER_TTM_EVICTION, NULL);
                } while (!r && prange->actual_loc && --retries);
 
                if (!r && prange->actual_loc)
index 4c73727..c053cb7 100644
@@ -1110,7 +1110,8 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
                hw_params.fb[i] = &fb_info->fb[i];
 
        switch (adev->ip_versions[DCE_HWIP][0]) {
-       case IP_VERSION(3, 1, 3): /* Only for this asic hw internal rev B0 */
+       case IP_VERSION(3, 1, 3):
+       case IP_VERSION(3, 1, 4):
                hw_params.dpia_supported = true;
                hw_params.disable_dpia = adev->dm.dc->debug.dpia_debug.bits.disable_dpia;
                break;
@@ -7478,15 +7479,15 @@ static void amdgpu_dm_handle_vrr_transition(struct dm_crtc_state *old_state,
                 * We also need vupdate irq for the actual core vblank handling
                 * at end of vblank.
                 */
-               dm_set_vupdate_irq(new_state->base.crtc, true);
-               drm_crtc_vblank_get(new_state->base.crtc);
+               WARN_ON(dm_set_vupdate_irq(new_state->base.crtc, true) != 0);
+               WARN_ON(drm_crtc_vblank_get(new_state->base.crtc) != 0);
                DRM_DEBUG_DRIVER("%s: crtc=%u VRR off->on: Get vblank ref\n",
                                 __func__, new_state->base.crtc->base.id);
        } else if (old_vrr_active && !new_vrr_active) {
                /* Transition VRR active -> inactive:
                 * Allow vblank irq disable again for fixed refresh rate.
                 */
-               dm_set_vupdate_irq(new_state->base.crtc, false);
+               WARN_ON(dm_set_vupdate_irq(new_state->base.crtc, false) != 0);
                drm_crtc_vblank_put(new_state->base.crtc);
                DRM_DEBUG_DRIVER("%s: crtc=%u VRR on->off: Drop vblank ref\n",
                                 __func__, new_state->base.crtc->base.id);
@@ -8242,23 +8243,6 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
                mutex_unlock(&dm->dc_lock);
        }
 
-       /* Count number of newly disabled CRTCs for dropping PM refs later. */
-       for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
-                                     new_crtc_state, i) {
-               if (old_crtc_state->active && !new_crtc_state->active)
-                       crtc_disable_count++;
-
-               dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
-               dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
-
-               /* For freesync config update on crtc state and params for irq */
-               update_stream_irq_parameters(dm, dm_new_crtc_state);
-
-               /* Handle vrr on->off / off->on transitions */
-               amdgpu_dm_handle_vrr_transition(dm_old_crtc_state,
-                                               dm_new_crtc_state);
-       }
-
        /**
         * Enable interrupts for CRTCs that are newly enabled or went through
         * a modeset. It was intentionally deferred until after the front end
@@ -8268,16 +8252,29 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
        for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
                struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
 #ifdef CONFIG_DEBUG_FS
-               bool configure_crc = false;
                enum amdgpu_dm_pipe_crc_source cur_crc_src;
 #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
-               struct crc_rd_work *crc_rd_wrk = dm->crc_rd_wrk;
+               struct crc_rd_work *crc_rd_wrk;
+#endif
+#endif
+               /* Count number of newly disabled CRTCs for dropping PM refs later. */
+               if (old_crtc_state->active && !new_crtc_state->active)
+                       crtc_disable_count++;
+
+               dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+               dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+
+               /* For freesync config update on crtc state and params for irq */
+               update_stream_irq_parameters(dm, dm_new_crtc_state);
+
+#ifdef CONFIG_DEBUG_FS
+#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+               crc_rd_wrk = dm->crc_rd_wrk;
 #endif
                spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
                cur_crc_src = acrtc->dm_irq_params.crc_src;
                spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
 #endif
-               dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
 
                if (new_crtc_state->active &&
                    (!old_crtc_state->active ||
@@ -8285,16 +8282,19 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
                        dc_stream_retain(dm_new_crtc_state->stream);
                        acrtc->dm_irq_params.stream = dm_new_crtc_state->stream;
                        manage_dm_interrupts(adev, acrtc, true);
+               }
+               /* Handle vrr on->off / off->on transitions */
+               amdgpu_dm_handle_vrr_transition(dm_old_crtc_state, dm_new_crtc_state);
 
 #ifdef CONFIG_DEBUG_FS
+               if (new_crtc_state->active &&
+                   (!old_crtc_state->active ||
+                    drm_atomic_crtc_needs_modeset(new_crtc_state))) {
                        /**
                         * Frontend may have changed so reapply the CRC capture
                         * settings for the stream.
                         */
-                       dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
-
                        if (amdgpu_dm_is_valid_crc_source(cur_crc_src)) {
-                               configure_crc = true;
 #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
                                if (amdgpu_dm_crc_window_is_activated(crtc)) {
                                        spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
@@ -8306,14 +8306,12 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
                                        spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
                                }
 #endif
-                       }
-
-                       if (configure_crc)
                                if (amdgpu_dm_crtc_configure_crc_source(
                                        crtc, dm_new_crtc_state, cur_crc_src))
                                        DRM_DEBUG_DRIVER("Failed to configure crc source");
-#endif
+                       }
                }
+#endif
        }
 
        for_each_new_crtc_in_state(state, crtc, new_crtc_state, j)
@@ -9392,10 +9390,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
                                }
                        }
                }
-               if (!pre_validate_dsc(state, &dm_state, vars)) {
-                       ret = -EINVAL;
-                       goto fail;
-               }
        }
 #endif
        for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
@@ -9529,6 +9523,15 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
                }
        }
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+       if (dc_resource_is_dsc_encoding_supported(dc)) {
+               if (!pre_validate_dsc(state, &dm_state, vars)) {
+                       ret = -EINVAL;
+                       goto fail;
+               }
+       }
+#endif
+
        /* Run this here since we want to validate the streams we created */
        ret = drm_atomic_helper_check_planes(dev, state);
        if (ret) {
index 8ca10ab..26291db 100644
@@ -60,11 +60,15 @@ static bool link_supports_psrsu(struct dc_link *link)
  */
 void amdgpu_dm_set_psr_caps(struct dc_link *link)
 {
-       if (!(link->connector_signal & SIGNAL_TYPE_EDP))
+       if (!(link->connector_signal & SIGNAL_TYPE_EDP)) {
+               link->psr_settings.psr_feature_enabled = false;
                return;
+       }
 
-       if (link->type == dc_connection_none)
+       if (link->type == dc_connection_none) {
+               link->psr_settings.psr_feature_enabled = false;
                return;
+       }
 
        if (link->dpcd_caps.psr_info.psr_version == 0) {
                link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
index 53b077b..ee0456b 100644
 #define LAST_RECORD_TYPE 0xff
 #define SMU9_SYSPLL0_ID  0
 
-struct i2c_id_config_access {
-       uint8_t bfI2C_LineMux:4;
-       uint8_t bfHW_EngineID:3;
-       uint8_t bfHW_Capable:1;
-       uint8_t ucAccess;
-};
-
 static enum bp_result get_gpio_i2c_info(struct bios_parser *bp,
        struct atom_i2c_record *record,
        struct graphics_object_i2c_info *info);
index 0d30d1d..650f3b4 100644
@@ -179,7 +179,7 @@ void dcn20_update_clocks_update_dentist(struct clk_mgr_internal *clk_mgr, struct
        } else if (dispclk_wdivider == 127 && current_dispclk_wdivider != 127) {
                REG_UPDATE(DENTIST_DISPCLK_CNTL,
                                DENTIST_DISPCLK_WDIVIDER, 126);
-               REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 100);
+               REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 2000);
                for (i = 0; i < clk_mgr->base.ctx->dc->res_pool->pipe_count; i++) {
                        struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
                        struct dccg *dccg = clk_mgr->base.ctx->dc->res_pool->dccg;
@@ -206,7 +206,7 @@ void dcn20_update_clocks_update_dentist(struct clk_mgr_internal *clk_mgr, struct
 
        REG_UPDATE(DENTIST_DISPCLK_CNTL,
                        DENTIST_DISPCLK_WDIVIDER, dispclk_wdivider);
-       REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 1000);
+       REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 2000);
        REG_UPDATE(DENTIST_DISPCLK_CNTL,
                        DENTIST_DPPCLK_WDIVIDER, dppclk_wdivider);
        REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_CHG_DONE, 1, 5, 100);
index 897105d..ef0795b 100644
@@ -339,29 +339,24 @@ void dcn314_smu_set_zstate_support(struct clk_mgr_internal *clk_mgr, enum dcn_zs
        if (!clk_mgr->smu_present)
                return;
 
-       if (!clk_mgr->base.ctx->dc->debug.enable_z9_disable_interface &&
-                       (support == DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY))
-               support = DCN_ZSTATE_SUPPORT_DISALLOW;
-
-
        // Arg[15:0] = 8/9/0 for Z8/Z9/disallow -> existing bits
        // Arg[16] = Disallow Z9 -> new bit
        switch (support) {
 
        case DCN_ZSTATE_SUPPORT_ALLOW:
                msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
-               param = 9;
+               param = (1 << 10) | (1 << 9) | (1 << 8);
                break;
 
        case DCN_ZSTATE_SUPPORT_DISALLOW:
                msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
-               param = 8;
+               param = 0;
                break;
 
 
        case DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY:
                msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
-               param = 0x00010008;
+               param = (1 << 10);
                break;
 
        default: //DCN_ZSTATE_SUPPORT_UNKNOWN
index f0f3f66..1c612cc 100644
@@ -156,7 +156,7 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
 {
        struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
        unsigned int num_levels;
-       unsigned int num_dcfclk_levels, num_dtbclk_levels, num_dispclk_levels;
+       struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
 
        memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
        clk_mgr_base->clks.p_state_change_support = true;
@@ -180,27 +180,28 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
        /* DCFCLK */
        dcn32_init_single_clock(clk_mgr, PPCLK_DCFCLK,
                        &clk_mgr_base->bw_params->clk_table.entries[0].dcfclk_mhz,
-                       &num_levels);
-       num_dcfclk_levels = num_levels;
+                       &num_entries_per_clk->num_dcfclk_levels);
 
        /* SOCCLK */
        dcn32_init_single_clock(clk_mgr, PPCLK_SOCCLK,
                                        &clk_mgr_base->bw_params->clk_table.entries[0].socclk_mhz,
-                                       &num_levels);
+                                       &num_entries_per_clk->num_socclk_levels);
+
        /* DTBCLK */
        if (!clk_mgr->base.ctx->dc->debug.disable_dtb_ref_clk_switch)
                dcn32_init_single_clock(clk_mgr, PPCLK_DTBCLK,
                                &clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz,
-                               &num_levels);
-       num_dtbclk_levels = num_levels;
+                               &num_entries_per_clk->num_dtbclk_levels);
 
        /* DISPCLK */
        dcn32_init_single_clock(clk_mgr, PPCLK_DISPCLK,
                        &clk_mgr_base->bw_params->clk_table.entries[0].dispclk_mhz,
-                       &num_levels);
-       num_dispclk_levels = num_levels;
+                       &num_entries_per_clk->num_dispclk_levels);
+       num_levels = num_entries_per_clk->num_dispclk_levels;
 
-       if (num_dcfclk_levels && num_dtbclk_levels && num_dispclk_levels)
+       if (num_entries_per_clk->num_dcfclk_levels &&
+                       num_entries_per_clk->num_dtbclk_levels &&
+                       num_entries_per_clk->num_dispclk_levels)
                clk_mgr->dpm_present = true;
 
        if (clk_mgr_base->ctx->dc->debug.min_disp_clk_khz) {
@@ -333,6 +334,21 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
                if (enter_display_off == safe_to_lower)
                        dcn30_smu_set_num_of_displays(clk_mgr, display_count);
 
+               clk_mgr_base->clks.fclk_prev_p_state_change_support = clk_mgr_base->clks.fclk_p_state_change_support;
+
+               total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+               fclk_p_state_change_support = new_clocks->fclk_p_state_change_support || (total_plane_count == 0);
+
+               if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
+                       clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
+
+                       /* To enable FCLK P-state switching, send FCLK_PSTATE_SUPPORTED message to PMFW */
+                       if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && clk_mgr_base->clks.fclk_p_state_change_support) {
+                               /* Send message to PMFW that FCLK P-state change is supported */
+                               dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
+                       }
+               }
+
                if (dc->debug.force_min_dcfclk_mhz > 0)
                        new_clocks->dcfclk_khz = (new_clocks->dcfclk_khz > (dc->debug.force_min_dcfclk_mhz * 1000)) ?
                                        new_clocks->dcfclk_khz : (dc->debug.force_min_dcfclk_mhz * 1000);
@@ -352,7 +368,6 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
                        clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
 
                clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
-               clk_mgr_base->clks.fclk_prev_p_state_change_support = clk_mgr_base->clks.fclk_p_state_change_support;
                clk_mgr_base->clks.prev_num_ways = clk_mgr_base->clks.num_ways;
 
                if (clk_mgr_base->clks.num_ways != new_clocks->num_ways &&
@@ -361,27 +376,25 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
                        dcn32_smu_send_cab_for_uclk_message(clk_mgr, clk_mgr_base->clks.num_ways);
                }
 
-               total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+
                p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
-               fclk_p_state_change_support = new_clocks->fclk_p_state_change_support || (total_plane_count == 0);
                if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
                        clk_mgr_base->clks.p_state_change_support = p_state_change_support;
 
                        /* to disable P-State switching, set UCLK min = max */
                        if (!clk_mgr_base->clks.p_state_change_support)
                                dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
-                                               clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+                                               clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
                }
 
-               if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support) &&
-                               clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21) {
-                       clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
+               /* Check safe_to_lower for FCLK: flag an FCLK update when P-state change support has toggled */
+               if (safe_to_lower && (clk_mgr_base->clks.fclk_p_state_change_support != clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
+                       update_fclk = true;
+               }
 
-                       /* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
-                       if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && !clk_mgr_base->clks.fclk_p_state_change_support) {
-                               /* Handle code for sending a message to PMFW that FCLK P-state change is not supported */
-                               dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_NOTSUPPORTED);
-                       }
+               if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && !clk_mgr_base->clks.fclk_p_state_change_support && update_fclk) {
+                       /* Send message to PMFW that FCLK P-state change is not supported */
+                       dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_NOTSUPPORTED);
                }
 
                /* Always update saved value, even if new value not set due to P-State switching unsupported */
@@ -390,21 +403,11 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
                        update_uclk = true;
                }
 
-               /* Always update saved value, even if new value not set due to P-State switching unsupported. Also check safe_to_lower for FCLK */
-               if (safe_to_lower && (clk_mgr_base->clks.fclk_p_state_change_support != clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
-                       update_fclk = true;
-               }
-
                /* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
                if (clk_mgr_base->clks.p_state_change_support &&
                                (update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
                        dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
 
-               if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && clk_mgr_base->clks.fclk_p_state_change_support && update_fclk) {
-                       /* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
-                       dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
-               }
-
                if (clk_mgr_base->clks.num_ways != new_clocks->num_ways &&
                                clk_mgr_base->clks.num_ways > new_clocks->num_ways) {
                        clk_mgr_base->clks.num_ways = new_clocks->num_ways;
@@ -632,7 +635,7 @@ static void dcn32_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current
                                        khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
                else
                        dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
-                                       clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+                                       clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
        } else {
                dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
                                clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
@@ -648,22 +651,34 @@ static void dcn32_set_hard_max_memclk(struct clk_mgr *clk_mgr_base)
                return;
 
        dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK,
-                       clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+                       clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries_per_clk.num_memclk_levels - 1].memclk_mhz);
 }
 
 /* Get current memclk states, update bounding box */
 static void dcn32_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
 {
        struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+       struct clk_limit_num_entries *num_entries_per_clk = &clk_mgr_base->bw_params->clk_table.num_entries_per_clk;
        unsigned int num_levels;
 
        if (!clk_mgr->smu_present)
                return;
 
-       /* Refresh memclk states */
+       /* Refresh memclk and fclk states */
        dcn32_init_single_clock(clk_mgr, PPCLK_UCLK,
                        &clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz,
-                       &num_levels);
+                       &num_entries_per_clk->num_memclk_levels);
+
+       dcn32_init_single_clock(clk_mgr, PPCLK_FCLK,
+                       &clk_mgr_base->bw_params->clk_table.entries[0].fclk_mhz,
+                       &num_entries_per_clk->num_fclk_levels);
+
+       if (num_entries_per_clk->num_memclk_levels >= num_entries_per_clk->num_fclk_levels) {
+               num_levels = num_entries_per_clk->num_memclk_levels;
+       } else {
+               num_levels = num_entries_per_clk->num_fclk_levels;
+       }
+
        clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
 
        if (clk_mgr->dpm_present && !num_levels)
index 258ba5a..997ab03 100644
@@ -1734,10 +1734,20 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
        int i, k, l;
        struct dc_stream_state *dc_streams[MAX_STREAMS] = {0};
        struct dc_state *old_state;
+       bool subvp_prev_use = false;
 
        dc_z10_restore(dc);
        dc_allow_idle_optimizations(dc, false);
 
+       for (i = 0; i < dc->res_pool->pipe_count; i++) {
+               struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+
+               /* Check old context for SubVP */
+               subvp_prev_use |= (old_pipe->stream && old_pipe->stream->mall_stream_config.type == SUBVP_PHANTOM);
+               if (subvp_prev_use)
+                       break;
+       }
+
        for (i = 0; i < context->stream_count; i++)
                dc_streams[i] =  context->streams[i];
 
@@ -1777,6 +1787,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
                dc->hwss.wait_for_mpcc_disconnect(dc, dc->res_pool, pipe);
        }
 
+       if (dc->hwss.subvp_pipe_control_lock)
+               dc->hwss.subvp_pipe_control_lock(dc, context, true, true, NULL, subvp_prev_use);
+
        result = dc->hwss.apply_ctx_to_hw(dc, context);
 
        if (result != DC_OK) {
@@ -1794,6 +1807,12 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
                dc->hwss.interdependent_update_lock(dc, context, false);
                dc->hwss.post_unlock_program_front_end(dc, context);
        }
+
+       if (dc->hwss.commit_subvp_config)
+               dc->hwss.commit_subvp_config(dc, context);
+       if (dc->hwss.subvp_pipe_control_lock)
+               dc->hwss.subvp_pipe_control_lock(dc, context, false, true, NULL, subvp_prev_use);
+
        for (i = 0; i < context->stream_count; i++) {
                const struct dc_link *link = context->streams[i]->link;
 
@@ -2927,6 +2946,12 @@ static bool update_planes_and_stream_state(struct dc *dc,
                dc_resource_state_copy_construct(
                                dc->current_state, context);
 
+               /* For each full update, remove all existing phantom pipes first.
+                * This ensures there are enough pipes for newly added MPO planes.
+                */
+               if (dc->res_pool->funcs->remove_phantom_pipes)
+                       dc->res_pool->funcs->remove_phantom_pipes(dc, context);
+
                /*remove old surfaces from context */
                if (!dc_rem_all_planes_for_stream(dc, stream, context)) {
 
@@ -3334,8 +3359,14 @@ static void commit_planes_for_stream(struct dc *dc,
                /* Since phantom pipe programming is moved to post_unlock_program_front_end,
                 * move the SubVP lock to after the phantom pipes have been setup
                 */
-               if (dc->hwss.subvp_pipe_control_lock)
-                       dc->hwss.subvp_pipe_control_lock(dc, context, false, should_lock_all_pipes, NULL, subvp_prev_use);
+               if (should_lock_all_pipes && dc->hwss.interdependent_update_lock) {
+                       if (dc->hwss.subvp_pipe_control_lock)
+                               dc->hwss.subvp_pipe_control_lock(dc, context, false, should_lock_all_pipes, NULL, subvp_prev_use);
+               } else {
+                       if (dc->hwss.subvp_pipe_control_lock)
+                               dc->hwss.subvp_pipe_control_lock(dc, context, false, should_lock_all_pipes, NULL, subvp_prev_use);
+               }
+
                return;
        }
 
@@ -3495,6 +3526,9 @@ static void commit_planes_for_stream(struct dc *dc,
 
        if (update_type != UPDATE_TYPE_FAST)
                dc->hwss.post_unlock_program_front_end(dc, context);
+       if (update_type != UPDATE_TYPE_FAST)
+               if (dc->hwss.commit_subvp_config)
+                       dc->hwss.commit_subvp_config(dc, context);
 
        if (update_type != UPDATE_TYPE_FAST)
                if (dc->hwss.commit_subvp_config)
@@ -3542,6 +3576,7 @@ static bool could_mpcc_tree_change_for_active_pipes(struct dc *dc,
 
        struct dc_stream_status *cur_stream_status = stream_get_status(dc->current_state, stream);
        bool force_minimal_pipe_splitting = false;
+       uint32_t i;
 
        *is_plane_addition = false;
 
@@ -3573,6 +3608,36 @@ static bool could_mpcc_tree_change_for_active_pipes(struct dc *dc,
                }
        }
 
+       /* For SubVP pipe split case when adding MPO video
+        * we need to add a minimal transition. In this case
+        * there will be 2 streams (1 main stream, 1 phantom
+        * stream).
+        */
+       if (cur_stream_status &&
+                       dc->current_state->stream_count == 2 &&
+                       stream->mall_stream_config.type == SUBVP_MAIN) {
+               bool is_pipe_split = false;
+
+               for (i = 0; i < dc->res_pool->pipe_count; i++) {
+                       if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream &&
+                                       (dc->current_state->res_ctx.pipe_ctx[i].bottom_pipe ||
+                                       dc->current_state->res_ctx.pipe_ctx[i].next_odm_pipe)) {
+                               is_pipe_split = true;
+                               break;
+                       }
+               }
+
+               /* determine if minimal transition is required due to SubVP */
+               if (surface_count > 0 && is_pipe_split) {
+                       if (cur_stream_status->plane_count > surface_count) {
+                               force_minimal_pipe_splitting = true;
+                       } else if (cur_stream_status->plane_count < surface_count) {
+                               force_minimal_pipe_splitting = true;
+                               *is_plane_addition = true;
+                       }
+               }
+       }
+
        return force_minimal_pipe_splitting;
 }
 
@@ -3582,6 +3647,7 @@ static bool commit_minimal_transition_state(struct dc *dc,
        struct dc_state *transition_context = dc_create_state(dc);
        enum pipe_split_policy tmp_mpc_policy;
        bool temp_dynamic_odm_policy;
+       bool temp_subvp_policy;
        enum dc_status ret = DC_ERROR_UNEXPECTED;
        unsigned int i, j;
 
@@ -3596,6 +3662,9 @@ static bool commit_minimal_transition_state(struct dc *dc,
        temp_dynamic_odm_policy = dc->debug.enable_single_display_2to1_odm_policy;
        dc->debug.enable_single_display_2to1_odm_policy = false;
 
+       temp_subvp_policy = dc->debug.force_disable_subvp;
+       dc->debug.force_disable_subvp = true;
+
        dc_resource_state_copy_construct(transition_base_context, transition_context);
 
        //commit minimal state
@@ -3624,6 +3693,7 @@ static bool commit_minimal_transition_state(struct dc *dc,
                dc->debug.pipe_split_policy = tmp_mpc_policy;
 
        dc->debug.enable_single_display_2to1_odm_policy = temp_dynamic_odm_policy;
+       dc->debug.force_disable_subvp = temp_subvp_policy;
 
        if (ret != DC_OK) {
                /*this should never happen*/
@@ -4587,6 +4657,27 @@ enum dc_status dc_process_dmub_set_mst_slots(const struct dc *dc,
 }
 
 /**
+ * dc_process_dmub_dpia_hpd_int_enable - submit dpia hpd int enable command to dmub via inbox message
+ * @dc: dc structure
+ * @hpd_int_enable: 1 for hpd int enable, 0 to disable
+ */
+void dc_process_dmub_dpia_hpd_int_enable(const struct dc *dc,
+                               uint32_t hpd_int_enable)
+{
+       union dmub_rb_cmd cmd = {0};
+       struct dc_dmub_srv *dmub_srv = dc->ctx->dmub_srv;
+
+       cmd.dpia_hpd_int_enable.header.type = DMUB_CMD__DPIA_HPD_INT_ENABLE;
+       cmd.dpia_hpd_int_enable.enable = hpd_int_enable;
+
+       dc_dmub_srv_cmd_queue(dmub_srv, &cmd);
+       dc_dmub_srv_cmd_execute(dmub_srv);
+       dc_dmub_srv_wait_idle(dmub_srv);
+
+       DC_LOG_DEBUG("%s: hpd_int_enable(%d)\n", __func__, hpd_int_enable);
+}
+
+/**
  * dc_disable_accelerated_mode - disable accelerated mode
  * @dc: dc structure
  */
index 3d19fb9..d7b1ace 100644
@@ -1307,7 +1307,10 @@ static bool detect_link_and_local_sink(struct dc_link *link,
                }
 
                if (link->connector_signal == SIGNAL_TYPE_EDP) {
-                       // Init dc_panel_config
+                       /* Init dc_panel_config by HW config */
+                       if (dc_ctx->dc->res_pool->funcs->get_panel_config_defaults)
+                               dc_ctx->dc->res_pool->funcs->get_panel_config_defaults(&link->panel_config);
+                       /* Pickup base DM settings */
                        dm_helpers_init_panel_settings(dc_ctx, &link->panel_config, sink);
                        // Override dc_panel_config if system has specific settings
                        dm_helpers_override_panel_settings(dc_ctx, &link->panel_config);
@@ -3143,7 +3146,7 @@ bool dc_link_set_psr_allow_active(struct dc_link *link, const bool *allow_active
        if (!dc_get_edp_link_panel_inst(dc, link, &panel_inst))
                return false;
 
-       if (allow_active && link->type == dc_connection_none) {
+       if (allow_active && *allow_active && link->type == dc_connection_none) {
                // Don't enter PSR if panel is not connected
                return false;
        }
@@ -3375,8 +3378,8 @@ bool dc_link_setup_psr(struct dc_link *link,
                case FAMILY_YELLOW_CARP:
                case AMDGPU_FAMILY_GC_10_3_6:
                case AMDGPU_FAMILY_GC_11_0_1:
-                       if(!dc->debug.disable_z10)
-                               psr_context->psr_level.bits.SKIP_CRTC_DISABLE = false;
+                       if (dc->debug.disable_z10)
+                               psr_context->psr_level.bits.SKIP_CRTC_DISABLE = true;
                        break;
                default:
                        psr_context->psr_level.bits.SKIP_CRTC_DISABLE = true;
index c57df45..1254d38 100644
@@ -944,6 +944,23 @@ enum dc_status dp_get_lane_status_and_lane_adjust(
        return status;
 }
 
+static enum dc_status dpcd_128b_132b_set_lane_settings(
+               struct dc_link *link,
+               const struct link_training_settings *link_training_setting)
+{
+       enum dc_status status = core_link_write_dpcd(link,
+                       DP_TRAINING_LANE0_SET,
+                       (uint8_t *)(link_training_setting->dpcd_lane_settings),
+                       sizeof(link_training_setting->dpcd_lane_settings));
+
+       DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X TX_FFE_PRESET_VALUE = %x\n",
+                       __func__,
+                       DP_TRAINING_LANE0_SET,
+                       link_training_setting->dpcd_lane_settings[0].tx_ffe.PRESET_VALUE);
+       return status;
+}
+
+
 enum dc_status dpcd_set_lane_settings(
        struct dc_link *link,
        const struct link_training_settings *link_training_setting,
@@ -964,16 +981,6 @@ enum dc_status dpcd_set_lane_settings(
                link_training_setting->link_settings.lane_count);
 
        if (is_repeater(link_training_setting, offset)) {
-               if (dp_get_link_encoding_format(&link_training_setting->link_settings) ==
-                               DP_128b_132b_ENCODING)
-                       DC_LOG_HW_LINK_TRAINING("%s:\n LTTPR Repeater ID: %d\n"
-                                       " 0x%X TX_FFE_PRESET_VALUE = %x\n",
-                                       __func__,
-                                       offset,
-                                       lane0_set_address,
-                                       link_training_setting->dpcd_lane_settings[0].tx_ffe.PRESET_VALUE);
-               else if (dp_get_link_encoding_format(&link_training_setting->link_settings) ==
-                               DP_8b_10b_ENCODING)
                DC_LOG_HW_LINK_TRAINING("%s\n LTTPR Repeater ID: %d\n"
                                " 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
                        __func__,
@@ -985,14 +992,6 @@ enum dc_status dpcd_set_lane_settings(
                        link_training_setting->dpcd_lane_settings[0].bits.MAX_PRE_EMPHASIS_REACHED);
 
        } else {
-               if (dp_get_link_encoding_format(&link_training_setting->link_settings) ==
-                               DP_128b_132b_ENCODING)
-                       DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X TX_FFE_PRESET_VALUE = %x\n",
-                                       __func__,
-                                       lane0_set_address,
-                                       link_training_setting->dpcd_lane_settings[0].tx_ffe.PRESET_VALUE);
-               else if (dp_get_link_encoding_format(&link_training_setting->link_settings) ==
-                               DP_8b_10b_ENCODING)
                DC_LOG_HW_LINK_TRAINING("%s\n 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
                        __func__,
                        lane0_set_address,
@@ -2023,7 +2022,7 @@ static enum link_training_result dp_perform_128b_132b_channel_eq_done_sequence(
                        result = DP_128b_132b_LT_FAILED;
                } else {
                        dp_set_hw_lane_settings(link, link_res, lt_settings, DPRX);
-                       dpcd_set_lane_settings(link, lt_settings, DPRX);
+                       dpcd_128b_132b_set_lane_settings(link, lt_settings);
                }
                loop_count++;
        }
@@ -5090,6 +5089,7 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
                        (dp_convert_to_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
                ASSERT(0);
                link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
+               DC_LOG_DC("lttpr_caps forced phy_repeater_cnt = %d\n", link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
        }
 
        /* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
@@ -5098,6 +5098,7 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
        if (is_lttpr_present)
                CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
 
+       DC_LOG_DC("is_lttpr_present = %d\n", is_lttpr_present);
        return is_lttpr_present;
 }
 
@@ -5134,6 +5135,7 @@ void dp_get_lttpr_mode_override(struct dc_link *link, enum lttpr_mode *override)
        } else if (link->dc->debug.lttpr_mode_override == LTTPR_MODE_NON_LTTPR) {
                *override = LTTPR_MODE_NON_LTTPR;
        }
+       DC_LOG_DC("lttpr_mode_override chose LTTPR_MODE = %d\n", (uint8_t)(*override));
 }
 
 enum lttpr_mode dp_decide_8b_10b_lttpr_mode(struct dc_link *link)
@@ -5146,22 +5148,34 @@ enum lttpr_mode dp_decide_8b_10b_lttpr_mode(struct dc_link *link)
                return LTTPR_MODE_NON_LTTPR;
 
        if (vbios_lttpr_aware) {
-               if (vbios_lttpr_force_non_transparent)
+               if (vbios_lttpr_force_non_transparent) {
+                       DC_LOG_DC("chose LTTPR_MODE_NON_TRANSPARENT because VBIOS DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE is set to 1.\n");
                        return LTTPR_MODE_NON_TRANSPARENT;
-               else
+               } else {
+                       DC_LOG_DC("chose LTTPR_MODE_TRANSPARENT because VBIOS DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE is not set to 1.\n");
                        return LTTPR_MODE_TRANSPARENT;
+               }
        }
 
        if (link->dc->config.allow_lttpr_non_transparent_mode.bits.DP1_4A &&
-                       link->dc->caps.extended_aux_timeout_support)
+                       link->dc->caps.extended_aux_timeout_support) {
+               DC_LOG_DC("chose LTTPR_MODE_NON_TRANSPARENT because dc->config.allow_lttpr_non_transparent_mode.bits.DP1_4A is set and extended AUX timeout is supported.\n");
                return LTTPR_MODE_NON_TRANSPARENT;
+       }
 
+       DC_LOG_DC("chose LTTPR_MODE_NON_LTTPR.\n");
        return LTTPR_MODE_NON_LTTPR;
 }
 
 enum lttpr_mode dp_decide_128b_132b_lttpr_mode(struct dc_link *link)
 {
-       return dp_is_lttpr_present(link) ? LTTPR_MODE_NON_TRANSPARENT : LTTPR_MODE_NON_LTTPR;
+       enum lttpr_mode mode = LTTPR_MODE_NON_LTTPR;
+
+       if (dp_is_lttpr_present(link))
+               mode = LTTPR_MODE_NON_TRANSPARENT;
+
+       DC_LOG_DC("128b_132b chose LTTPR_MODE %d.\n", mode);
+       return mode;
 }
 
 static bool get_usbc_cable_id(struct dc_link *link, union dp_cable_id *cable_id)
@@ -5179,9 +5193,10 @@ static bool get_usbc_cable_id(struct dc_link *link, union dp_cable_id *cable_id)
        cmd.cable_id.data.input.phy_inst = resource_transmitter_to_phy_idx(
                        link->dc, link->link_enc->transmitter);
        if (dc_dmub_srv_cmd_with_reply_data(link->ctx->dmub_srv, &cmd) &&
-                       cmd.cable_id.header.ret_status == 1)
+                       cmd.cable_id.header.ret_status == 1) {
                cable_id->raw = cmd.cable_id.data.output_raw;
-
+               DC_LOG_DC("usbc_cable_id = %d.\n", cable_id->raw);
+       }
        return cmd.cable_id.header.ret_status == 1;
 }
 
@@ -5228,6 +5243,7 @@ static enum dc_status wa_try_to_wake_dprx(struct dc_link *link, uint64_t timeout
 
        lttpr_present = dp_is_lttpr_present(link) ||
                        (!vbios_lttpr_interop || !link->dc->caps.extended_aux_timeout_support);
+       DC_LOG_DC("lttpr_present = %d.\n", lttpr_present ? 1 : 0);
 
        /* Issue an AUX read to test DPRX responsiveness. If LTTPR is supported the first read is expected to
         * be to determine LTTPR capabilities. Otherwise trying to read power state should be an innocuous AUX read.
@@ -5795,7 +5811,7 @@ void detect_edp_sink_caps(struct dc_link *link)
         * Per VESA eDP spec, "The DPCD revision for eDP v1.4 is 13h"
         */
        if (link->dpcd_caps.dpcd_rev.raw >= DPCD_REV_13 &&
-                       (link->dc->debug.optimize_edp_link_rate ||
+                       (link->panel_config.ilr.optimize_edp_link_rate ||
                        link->reported_link_cap.link_rate == LINK_RATE_UNKNOWN)) {
                // Read DPCD 00010h - 0001Fh 16 bytes at one shot
                core_link_read_dpcd(link, DP_SUPPORTED_LINK_RATES,
@@ -6744,7 +6760,7 @@ bool is_edp_ilr_optimization_required(struct dc_link *link, struct dc_crtc_timin
        ASSERT(link || crtc_timing); // invalid input
 
        if (link->dpcd_caps.edp_supported_link_rates_count == 0 ||
-                       !link->dc->debug.optimize_edp_link_rate)
+                       !link->panel_config.ilr.optimize_edp_link_rate)
                return false;
 
 
index 8ee0d94..fd8db48 100644 (file)
@@ -1747,7 +1747,6 @@ bool dc_remove_plane_from_context(
 
        for (i = 0; i < stream_status->plane_count; i++) {
                if (stream_status->plane_states[i] == plane_state) {
-
                        dc_plane_state_release(stream_status->plane_states[i]);
                        break;
                }
@@ -3683,4 +3682,56 @@ bool is_h_timing_divisible_by_2(struct dc_stream_state *stream)
                                (stream->timing.h_sync_width % 2 == 0);
        }
        return divisible;
+}
+
+bool dc_resource_acquire_secondary_pipe_for_mpc_odm(
+               const struct dc *dc,
+               struct dc_state *state,
+               struct pipe_ctx *pri_pipe,
+               struct pipe_ctx *sec_pipe,
+               bool odm)
+{
+       int pipe_idx = sec_pipe->pipe_idx;
+       struct pipe_ctx *sec_top, *sec_bottom, *sec_next, *sec_prev;
+       const struct resource_pool *pool = dc->res_pool;
+
+       sec_top = sec_pipe->top_pipe;
+       sec_bottom = sec_pipe->bottom_pipe;
+       sec_next = sec_pipe->next_odm_pipe;
+       sec_prev = sec_pipe->prev_odm_pipe;
+
+       *sec_pipe = *pri_pipe;
+
+       sec_pipe->top_pipe = sec_top;
+       sec_pipe->bottom_pipe = sec_bottom;
+       sec_pipe->next_odm_pipe = sec_next;
+       sec_pipe->prev_odm_pipe = sec_prev;
+
+       sec_pipe->pipe_idx = pipe_idx;
+       sec_pipe->plane_res.mi = pool->mis[pipe_idx];
+       sec_pipe->plane_res.hubp = pool->hubps[pipe_idx];
+       sec_pipe->plane_res.ipp = pool->ipps[pipe_idx];
+       sec_pipe->plane_res.xfm = pool->transforms[pipe_idx];
+       sec_pipe->plane_res.dpp = pool->dpps[pipe_idx];
+       sec_pipe->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+       sec_pipe->stream_res.dsc = NULL;
+       if (odm) {
+               if (!sec_pipe->top_pipe)
+                       sec_pipe->stream_res.opp = pool->opps[pipe_idx];
+               else
+                       sec_pipe->stream_res.opp = sec_pipe->top_pipe->stream_res.opp;
+               if (sec_pipe->stream->timing.flags.DSC == 1) {
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+                       dcn20_acquire_dsc(dc, &state->res_ctx, &sec_pipe->stream_res.dsc, pipe_idx);
+#endif
+                       ASSERT(sec_pipe->stream_res.dsc);
+                       if (sec_pipe->stream_res.dsc == NULL)
+                               return false;
+               }
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+               dcn20_build_mapped_resource(dc, state, sec_pipe->stream);
+#endif
+       }
+
+       return true;
 }
\ No newline at end of file
index ae13887..38d71b5 100644 (file)
@@ -276,6 +276,8 @@ static void program_cursor_attributes(
                }
 
                dc->hwss.set_cursor_attribute(pipe_ctx);
+
+               dc_send_update_cursor_info_to_dmu(pipe_ctx, i);
                if (dc->hwss.set_cursor_sdr_white_level)
                        dc->hwss.set_cursor_sdr_white_level(pipe_ctx);
        }
@@ -382,6 +384,8 @@ static void program_cursor_position(
                }
 
                dc->hwss.set_cursor_position(pipe_ctx);
+
+               dc_send_update_cursor_info_to_dmu(pipe_ctx, i);
        }
 
        if (pipe_to_program)
@@ -520,9 +524,9 @@ bool dc_stream_remove_writeback(struct dc *dc,
        }
 
        /* remove writeback info for disabled writeback pipes from stream */
-       for (i = 0, j = 0; i < stream->num_wb_info && j < MAX_DWB_PIPES; i++) {
+       for (i = 0, j = 0; i < stream->num_wb_info; i++) {
                if (stream->writeback_info[i].wb_enabled) {
-                       if (i != j)
+                       if (j < i)
                                /* trim the array */
                                stream->writeback_info[j] = stream->writeback_info[i];
                        j++;
index 2ecf36e..bfc5474 100644 (file)
@@ -47,7 +47,7 @@ struct aux_payload;
 struct set_config_cmd_payload;
 struct dmub_notification;
 
-#define DC_VER "3.2.205"
+#define DC_VER "3.2.207"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
@@ -821,7 +821,6 @@ struct dc_debug_options {
        /* Enable dmub aux for legacy ddc */
        bool enable_dmub_aux_for_legacy_ddc;
        bool disable_fams;
-       bool optimize_edp_link_rate; /* eDP ILR */
        /* FEC/PSR1 sequence enable delay in 100us */
        uint8_t fec_enable_delay_in100us;
        bool enable_driver_sequence_debug;
@@ -1192,6 +1191,8 @@ struct dc_plane_state {
        enum dc_irq_source irq_source;
        struct kref refcount;
        struct tg_color visual_confirm_color;
+
+       bool is_statically_allocated;
 };
 
 struct dc_plane_info {
@@ -1611,6 +1612,9 @@ enum dc_status dc_process_dmub_set_mst_slots(const struct dc *dc,
                                uint8_t mst_alloc_slots,
                                uint8_t *mst_slots_in_use);
 
+void dc_process_dmub_dpia_hpd_int_enable(const struct dc *dc,
+                               uint32_t hpd_int_enable);
+
 /*******************************************************************************
  * DSC Interfaces
  ******************************************************************************/
index 89d7d3f..0541e87 100644 (file)
@@ -30,6 +30,7 @@
 #include "dc_hw_types.h"
 #include "core_types.h"
 #include "../basics/conversion.h"
+#include "cursor_reg_cache.h"
 
 #define CTX dc_dmub_srv->ctx
 #define DC_LOGGER CTX->logger
@@ -780,7 +781,7 @@ void dc_dmub_setup_subvp_dmub_command(struct dc *dc,
                // Store the original watermark value for this SubVP config so we can lower it when the
                // MCLK switch starts
                wm_val_refclk = context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns *
-                               dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000 / 1000;
+                               (dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000) / 1000;
 
                cmd.fw_assisted_mclk_switch_v2.config_data.watermark_a_cache = wm_val_refclk < 0xFFFF ? wm_val_refclk : 0xFFFF;
        }
@@ -880,3 +881,147 @@ void dc_dmub_srv_log_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv)
                diag_data.is_cw0_enabled,
                diag_data.is_cw6_enabled);
 }
+
+static bool dc_dmub_should_update_cursor_data(struct pipe_ctx *pipe_ctx)
+{
+       if (pipe_ctx->plane_state != NULL) {
+               if (pipe_ctx->plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
+                       return false;
+       }
+
+       if ((pipe_ctx->stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
+               pipe_ctx->stream->link->psr_settings.psr_version == DC_PSR_VERSION_1) &&
+               pipe_ctx->stream->ctx->dce_version >= DCN_VERSION_3_1)
+               return true;
+
+       return false;
+}
+
+static void dc_build_cursor_update_payload0(
+               struct pipe_ctx *pipe_ctx, uint8_t p_idx,
+               struct dmub_cmd_update_cursor_payload0 *payload)
+{
+       struct hubp *hubp = pipe_ctx->plane_res.hubp;
+       unsigned int panel_inst = 0;
+
+       if (!dc_get_edp_link_panel_inst(hubp->ctx->dc,
+               pipe_ctx->stream->link, &panel_inst))
+               return;
+
+       /* Payload: Cursor Rect is built from position & attribute
+        * x & y are obtained from position
+        */
+       payload->cursor_rect.x = hubp->cur_rect.x;
+       payload->cursor_rect.y = hubp->cur_rect.y;
+       /* w & h are obtained from attribute */
+       payload->cursor_rect.width  = hubp->cur_rect.w;
+       payload->cursor_rect.height = hubp->cur_rect.h;
+
+       payload->enable      = hubp->pos.cur_ctl.bits.cur_enable;
+       payload->pipe_idx    = p_idx;
+       payload->cmd_version = DMUB_CMD_PSR_CONTROL_VERSION_1;
+       payload->panel_inst  = panel_inst;
+}
+
+static void dc_send_cmd_to_dmu(struct dc_dmub_srv *dmub_srv,
+               union dmub_rb_cmd *cmd)
+{
+       dc_dmub_srv_cmd_queue(dmub_srv, cmd);
+       dc_dmub_srv_cmd_execute(dmub_srv);
+       dc_dmub_srv_wait_idle(dmub_srv);
+}
+
+static void dc_build_cursor_position_update_payload0(
+               struct dmub_cmd_update_cursor_payload0 *pl, const uint8_t p_idx,
+               const struct hubp *hubp, const struct dpp *dpp)
+{
+       /* Hubp */
+       pl->position_cfg.pHubp.cur_ctl.raw  = hubp->pos.cur_ctl.raw;
+       pl->position_cfg.pHubp.position.raw = hubp->pos.position.raw;
+       pl->position_cfg.pHubp.hot_spot.raw = hubp->pos.hot_spot.raw;
+       pl->position_cfg.pHubp.dst_offset.raw = hubp->pos.dst_offset.raw;
+
+       /* dpp */
+       pl->position_cfg.pDpp.cur0_ctl.raw = dpp->pos.cur0_ctl.raw;
+       pl->position_cfg.pipe_idx = p_idx;
+}
+
+static void dc_build_cursor_attribute_update_payload1(
+               struct dmub_cursor_attributes_cfg *pl_A, const uint8_t p_idx,
+               const struct hubp *hubp, const struct dpp *dpp)
+{
+       /* Hubp */
+       pl_A->aHubp.SURFACE_ADDR_HIGH = hubp->att.SURFACE_ADDR_HIGH;
+       pl_A->aHubp.SURFACE_ADDR = hubp->att.SURFACE_ADDR;
+       pl_A->aHubp.cur_ctl.raw  = hubp->att.cur_ctl.raw;
+       pl_A->aHubp.size.raw     = hubp->att.size.raw;
+       pl_A->aHubp.settings.raw = hubp->att.settings.raw;
+
+       /* dpp */
+       pl_A->aDpp.cur0_ctl.raw = dpp->att.cur0_ctl.raw;
+}
+
+/**
+ * dc_send_update_cursor_info_to_dmu() - populate the DMCUB cursor update info command
+ * @pCtx: pipe context
+ * @pipe_idx: pipe index
+ *
+ * Store the cursor-related information and pass it to the DMUB as two chained
+ * commands: cursor position first, then cursor attributes.
+ */
+void dc_send_update_cursor_info_to_dmu(
+               struct pipe_ctx *pCtx, uint8_t pipe_idx)
+{
+       union dmub_rb_cmd cmd = { 0 };
+       union dmub_cmd_update_cursor_info_data *update_cursor_info =
+                                       &cmd.update_cursor_info.update_cursor_info_data;
+
+       if (!dc_dmub_should_update_cursor_data(pCtx))
+               return;
+       /*
+        * Since we use multi_cmd_pending for the dmub command, the 2nd command
+        * is used only to store the cursor attributes info.
+        * The 1st command can be viewed as 2 parts: one for the PSR/Replay
+        * data, the other to store the cursor position info.
+        *
+        * The command header type must be the same when using
+        * multi_cmd_pending. Besides, while processing the 2nd command in the
+        * DMU the sub-type is ignored, so passing a sub-type header with a
+        * different type is meaningless.
+        */
+
+       {
+               /* Build Payload#0 Header */
+               cmd.update_cursor_info.header.type = DMUB_CMD__UPDATE_CURSOR_INFO;
+               cmd.update_cursor_info.header.payload_bytes =
+                               sizeof(cmd.update_cursor_info.update_cursor_info_data);
+               cmd.update_cursor_info.header.multi_cmd_pending = 1; /* 1st command of the combined sequence */
+
+               /* Prepare Payload */
+               dc_build_cursor_update_payload0(pCtx, pipe_idx, &update_cursor_info->payload0);
+
+               dc_build_cursor_position_update_payload0(&update_cursor_info->payload0, pipe_idx,
+                               pCtx->plane_res.hubp, pCtx->plane_res.dpp);
+               /* Queue the update_cursor_info command */
+               dc_dmub_srv_cmd_queue(pCtx->stream->ctx->dmub_srv, &cmd);
+       }
+       {
+               /* Build Payload#1 Header */
+               memset(update_cursor_info, 0, sizeof(union dmub_cmd_update_cursor_info_data));
+               cmd.update_cursor_info.header.type = DMUB_CMD__UPDATE_CURSOR_INFO;
+               cmd.update_cursor_info.header.payload_bytes = sizeof(struct cursor_attributes_cfg);
+               cmd.update_cursor_info.header.multi_cmd_pending = 0; /* Indicate it's the last command. */
+
+               dc_build_cursor_attribute_update_payload1(
+                               &cmd.update_cursor_info.update_cursor_info_data.payload1.attribute_cfg,
+                               pipe_idx, pCtx->plane_res.hubp, pCtx->plane_res.dpp);
+
+               /* Send the 2nd update_cursor_info command to the DMU */
+               dc_send_cmd_to_dmu(pCtx->stream->ctx->dmub_srv, &cmd);
+       }
+}
index 7e43834..d34f556 100644 (file)
@@ -88,4 +88,5 @@ bool dc_dmub_srv_get_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv, struct dmu
 void dc_dmub_setup_subvp_dmub_command(struct dc *dc, struct dc_state *context, bool enable);
 void dc_dmub_srv_log_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv);
 
+void dc_send_update_cursor_info_to_dmu(struct pipe_ctx *pCtx, uint8_t pipe_idx);
 #endif /* _DMUB_DC_SRV_H_ */
index bf5f9e2..caf0c7a 100644 (file)
@@ -138,6 +138,10 @@ struct dc_panel_config {
                bool disable_dsc_edp;
                unsigned int force_dsc_edp_policy;
        } dsc;
+       /* eDP ILR */
+       struct ilr {
+               bool optimize_edp_link_rate; /* eDP ILR */
+       } ilr;
 };
 /*
  * A link contains one or more sinks and their connected status.
index 32782ef..140297c 100644 (file)
@@ -942,10 +942,6 @@ bool dce_aux_transfer_with_retries(struct ddc_service *ddc,
                case AUX_RET_ERROR_ENGINE_ACQUIRE:
                case AUX_RET_ERROR_UNKNOWN:
                default:
-                       DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
-                                               LOG_FLAG_I2cAux_DceAux,
-                                               "dce_aux_transfer_with_retries: Failure: operation_result=%d",
-                                               (int)operation_result);
                        goto fail;
                }
        }
@@ -953,14 +949,11 @@ bool dce_aux_transfer_with_retries(struct ddc_service *ddc,
 fail:
        DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
                                LOG_FLAG_Error_I2cAux,
-                               "dce_aux_transfer_with_retries: FAILURE");
+                               "%s: Failure: operation_result=%d",
+                               __func__,
+                               (int)operation_result);
        if (!payload_reply)
                payload->reply = NULL;
 
-       DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
-                               WPP_BIT_FLAG_DC_ERROR,
-                               "AUX transaction failed. Result: %d",
-                               operation_result);
-
        return false;
 }
index 897f412..b9765b3 100644 (file)
@@ -469,6 +469,7 @@ void dpp1_set_cursor_position(
        REG_UPDATE(CURSOR0_CONTROL,
                        CUR0_ENABLE, cur_en);
 
+       dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
 }
 
 void dpp1_cnv_set_optional_cursor_attributes(
index 7252174..11e4c4e 100644 (file)
@@ -2244,6 +2244,9 @@ void dcn10_enable_timing_synchronization(
        DC_SYNC_INFO("Setting up OTG reset trigger\n");
 
        for (i = 1; i < group_size; i++) {
+               if (grouped_pipes[i]->stream && grouped_pipes[i]->stream->mall_stream_config.type == SUBVP_PHANTOM)
+                       continue;
+
                opp = grouped_pipes[i]->stream_res.opp;
                tg = grouped_pipes[i]->stream_res.tg;
                tg->funcs->get_otg_active_size(tg, &width, &height);
@@ -2254,13 +2257,21 @@ void dcn10_enable_timing_synchronization(
        for (i = 0; i < group_size; i++) {
                if (grouped_pipes[i]->stream == NULL)
                        continue;
+
+               if (grouped_pipes[i]->stream && grouped_pipes[i]->stream->mall_stream_config.type == SUBVP_PHANTOM)
+                       continue;
+
                grouped_pipes[i]->stream->vblank_synchronized = false;
        }
 
-       for (i = 1; i < group_size; i++)
+       for (i = 1; i < group_size; i++) {
+               if (grouped_pipes[i]->stream && grouped_pipes[i]->stream->mall_stream_config.type == SUBVP_PHANTOM)
+                       continue;
+
                grouped_pipes[i]->stream_res.tg->funcs->enable_reset_trigger(
                                grouped_pipes[i]->stream_res.tg,
                                grouped_pipes[0]->stream_res.tg->inst);
+       }
 
        DC_SYNC_INFO("Waiting for trigger\n");
 
@@ -2268,12 +2279,21 @@ void dcn10_enable_timing_synchronization(
         * synchronized. Look at last pipe programmed to reset.
         */
 
-       wait_for_reset_trigger_to_occur(dc_ctx, grouped_pipes[1]->stream_res.tg);
-       for (i = 1; i < group_size; i++)
+       if (grouped_pipes[1]->stream && grouped_pipes[1]->stream->mall_stream_config.type != SUBVP_PHANTOM)
+               wait_for_reset_trigger_to_occur(dc_ctx, grouped_pipes[1]->stream_res.tg);
+
+       for (i = 1; i < group_size; i++) {
+               if (grouped_pipes[i]->stream && grouped_pipes[i]->stream->mall_stream_config.type == SUBVP_PHANTOM)
+                       continue;
+
                grouped_pipes[i]->stream_res.tg->funcs->disable_reset_trigger(
                                grouped_pipes[i]->stream_res.tg);
+       }
 
        for (i = 1; i < group_size; i++) {
+               if (grouped_pipes[i]->stream && grouped_pipes[i]->stream->mall_stream_config.type == SUBVP_PHANTOM)
+                       continue;
+
                opp = grouped_pipes[i]->stream_res.opp;
                tg = grouped_pipes[i]->stream_res.tg;
                tg->funcs->get_otg_active_size(tg, &width, &height);
@@ -3005,6 +3025,7 @@ void dcn10_prepare_bandwidth(
 {
        struct dce_hwseq *hws = dc->hwseq;
        struct hubbub *hubbub = dc->res_pool->hubbub;
+       int min_fclk_khz, min_dcfclk_khz, socclk_khz;
 
        if (dc->debug.sanity_checks)
                hws->funcs.verify_allow_pstate_change_high(dc);
@@ -3027,8 +3048,11 @@ void dcn10_prepare_bandwidth(
 
        if (dc->debug.pplib_wm_report_mode == WM_REPORT_OVERRIDE) {
                DC_FP_START();
-               dcn_bw_notify_pplib_of_wm_ranges(dc);
+               dcn_get_soc_clks(
+                       dc, &min_fclk_khz, &min_dcfclk_khz, &socclk_khz);
                DC_FP_END();
+               dcn_bw_notify_pplib_of_wm_ranges(
+                       dc, min_fclk_khz, min_dcfclk_khz, socclk_khz);
        }
 
        if (dc->debug.sanity_checks)
@@ -3041,6 +3065,7 @@ void dcn10_optimize_bandwidth(
 {
        struct dce_hwseq *hws = dc->hwseq;
        struct hubbub *hubbub = dc->res_pool->hubbub;
+       int min_fclk_khz, min_dcfclk_khz, socclk_khz;
 
        if (dc->debug.sanity_checks)
                hws->funcs.verify_allow_pstate_change_high(dc);
@@ -3064,8 +3089,11 @@ void dcn10_optimize_bandwidth(
 
        if (dc->debug.pplib_wm_report_mode == WM_REPORT_OVERRIDE) {
                DC_FP_START();
-               dcn_bw_notify_pplib_of_wm_ranges(dc);
+               dcn_get_soc_clks(
+                       dc, &min_fclk_khz, &min_dcfclk_khz, &socclk_khz);
                DC_FP_END();
+               dcn_bw_notify_pplib_of_wm_ranges(
+                       dc, min_fclk_khz, min_dcfclk_khz, socclk_khz);
        }
 
        if (dc->debug.sanity_checks)
@@ -3344,127 +3372,6 @@ static bool dcn10_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
        return false;
 }
 
-static bool dcn10_dmub_should_update_cursor_data(
-               struct pipe_ctx *pipe_ctx,
-               struct dc_debug_options *debug)
-{
-       if (pipe_ctx->plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
-               return false;
-
-       if (dcn10_can_pipe_disable_cursor(pipe_ctx))
-               return false;
-
-       if ((pipe_ctx->stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 || pipe_ctx->stream->link->psr_settings.psr_version == DC_PSR_VERSION_1)
-                       && pipe_ctx->stream->ctx->dce_version >= DCN_VERSION_3_1)
-               return true;
-
-       return false;
-}
-
-static void dcn10_dmub_update_cursor_data(
-               struct pipe_ctx *pipe_ctx,
-               struct hubp *hubp,
-               const struct dc_cursor_mi_param *param,
-               const struct dc_cursor_position *cur_pos,
-               const struct dc_cursor_attributes *cur_attr)
-{
-       union dmub_rb_cmd cmd;
-       struct dmub_cmd_update_cursor_info_data *update_cursor_info;
-       const struct dc_cursor_position *pos;
-       const struct dc_cursor_attributes *attr;
-       int src_x_offset = 0;
-       int src_y_offset = 0;
-       int x_hotspot = 0;
-       int cursor_height = 0;
-       int cursor_width = 0;
-       uint32_t cur_en = 0;
-       unsigned int panel_inst = 0;
-
-       struct dc_debug_options *debug = &hubp->ctx->dc->debug;
-
-       if (!dcn10_dmub_should_update_cursor_data(pipe_ctx, debug))
-               return;
-       /**
-        * if cur_pos == NULL means the caller is from cursor_set_attribute
-        * then driver use previous cursor position data
-        * if cur_attr == NULL means the caller is from cursor_set_position
-        * then driver use previous cursor attribute
-        * if cur_pos or cur_attr is not NULL then update it
-        */
-       if (cur_pos != NULL)
-               pos = cur_pos;
-       else
-               pos = &hubp->curs_pos;
-
-       if (cur_attr != NULL)
-               attr = cur_attr;
-       else
-               attr = &hubp->curs_attr;
-
-       if (!dc_get_edp_link_panel_inst(hubp->ctx->dc, pipe_ctx->stream->link, &panel_inst))
-               return;
-
-       src_x_offset = pos->x - pos->x_hotspot - param->viewport.x;
-       src_y_offset = pos->y - pos->y_hotspot - param->viewport.y;
-       x_hotspot = pos->x_hotspot;
-       cursor_height = (int)attr->height;
-       cursor_width = (int)attr->width;
-       cur_en = pos->enable ? 1:0;
-
-       // Rotated cursor width/height and hotspots tweaks for offset calculation
-       if (param->rotation == ROTATION_ANGLE_90 || param->rotation == ROTATION_ANGLE_270) {
-               swap(cursor_height, cursor_width);
-               if (param->rotation == ROTATION_ANGLE_90) {
-                       src_x_offset = pos->x - pos->y_hotspot - param->viewport.x;
-                       src_y_offset = pos->y - pos->x_hotspot - param->viewport.y;
-               }
-       } else if (param->rotation == ROTATION_ANGLE_180) {
-               src_x_offset = pos->x - param->viewport.x;
-               src_y_offset = pos->y - param->viewport.y;
-       }
-
-       if (param->mirror) {
-               x_hotspot = param->viewport.width - x_hotspot;
-               src_x_offset = param->viewport.x + param->viewport.width - src_x_offset;
-       }
-
-       if (src_x_offset >= (int)param->viewport.width)
-               cur_en = 0;  /* not visible beyond right edge*/
-
-       if (src_x_offset + cursor_width <= 0)
-               cur_en = 0;  /* not visible beyond left edge*/
-
-       if (src_y_offset >= (int)param->viewport.height)
-               cur_en = 0;  /* not visible beyond bottom edge*/
-
-       if (src_y_offset + cursor_height <= 0)
-               cur_en = 0;  /* not visible beyond top edge*/
-
-       // Cursor bitmaps have different hotspot values
-       // There's a possibility that the above logic returns a negative value, so we clamp them to 0
-       if (src_x_offset < 0)
-               src_x_offset = 0;
-       if (src_y_offset < 0)
-               src_y_offset = 0;
-
-       memset(&cmd, 0x0, sizeof(cmd));
-       cmd.update_cursor_info.header.type = DMUB_CMD__UPDATE_CURSOR_INFO;
-       cmd.update_cursor_info.header.payload_bytes =
-                       sizeof(cmd.update_cursor_info.update_cursor_info_data);
-       update_cursor_info = &cmd.update_cursor_info.update_cursor_info_data;
-       update_cursor_info->cursor_rect.x = src_x_offset + param->viewport.x;
-       update_cursor_info->cursor_rect.y = src_y_offset + param->viewport.y;
-       update_cursor_info->cursor_rect.width = attr->width;
-       update_cursor_info->cursor_rect.height = attr->height;
-       update_cursor_info->enable = cur_en;
-       update_cursor_info->pipe_idx = pipe_ctx->pipe_idx;
-       update_cursor_info->cmd_version = DMUB_CMD_PSR_CONTROL_VERSION_1;
-       update_cursor_info->panel_inst = panel_inst;
-       dc_dmub_srv_cmd_queue(pipe_ctx->stream->ctx->dmub_srv, &cmd);
-       dc_dmub_srv_cmd_execute(pipe_ctx->stream->ctx->dmub_srv);
-       dc_dmub_srv_wait_idle(pipe_ctx->stream->ctx->dmub_srv);
-}
-
 void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
 {
        struct dc_cursor_position pos_cpy = pipe_ctx->stream->cursor_position;
@@ -3699,7 +3606,6 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
                        pipe_ctx->plane_res.scl_data.viewport.height - pos_cpy.y;
        }
 
-       dcn10_dmub_update_cursor_data(pipe_ctx, hubp, &param, &pos_cpy, NULL);
        hubp->funcs->set_cursor_position(hubp, &pos_cpy, &param);
        dpp->funcs->set_cursor_position(dpp, &pos_cpy, &param, hubp->curs_attr.width, hubp->curs_attr.height);
 }
@@ -3707,25 +3613,6 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
 void dcn10_set_cursor_attribute(struct pipe_ctx *pipe_ctx)
 {
        struct dc_cursor_attributes *attributes = &pipe_ctx->stream->cursor_attributes;
-       struct dc_cursor_mi_param param = { 0 };
-
-       /**
-        * If enter PSR without cursor attribute update
-        * the cursor attribute of dmub_restore_plane
-        * are initial value. call dmub to exit PSR and
-        * restore plane then update cursor attribute to
-        * avoid override with initial value
-        */
-       if (pipe_ctx->plane_state != NULL) {
-               param.pixel_clk_khz = pipe_ctx->stream->timing.pix_clk_100hz / 10;
-               param.ref_clk_khz = pipe_ctx->stream->ctx->dc->res_pool->ref_clocks.dchub_ref_clock_inKhz;
-               param.viewport = pipe_ctx->plane_res.scl_data.viewport;
-               param.h_scale_ratio = pipe_ctx->plane_res.scl_data.ratios.horz;
-               param.v_scale_ratio = pipe_ctx->plane_res.scl_data.ratios.vert;
-               param.rotation = pipe_ctx->plane_state->rotation;
-               param.mirror = pipe_ctx->plane_state->horizontal_mirror;
-               dcn10_dmub_update_cursor_data(pipe_ctx, pipe_ctx->plane_res.hubp, &param, NULL, attributes);
-       }
 
        pipe_ctx->plane_res.hubp->funcs->set_cursor_attributes(
                        pipe_ctx->plane_res.hubp, attributes);
@@ -3810,28 +3697,14 @@ void dcn10_calc_vupdate_position(
                uint32_t *start_line,
                uint32_t *end_line)
 {
-       const struct dc_crtc_timing *dc_crtc_timing = &pipe_ctx->stream->timing;
-       int vline_int_offset_from_vupdate =
-                       pipe_ctx->stream->periodic_interrupt.lines_offset;
-       int vupdate_offset_from_vsync = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
-       int start_position;
-
-       if (vline_int_offset_from_vupdate > 0)
-               vline_int_offset_from_vupdate--;
-       else if (vline_int_offset_from_vupdate < 0)
-               vline_int_offset_from_vupdate++;
-
-       start_position = vline_int_offset_from_vupdate + vupdate_offset_from_vsync;
+       const struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
+       int vupdate_pos = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
 
-       if (start_position >= 0)
-               *start_line = start_position;
+       if (vupdate_pos >= 0)
+               *start_line = vupdate_pos - ((vupdate_pos / timing->v_total) * timing->v_total);
        else
-               *start_line = dc_crtc_timing->v_total + start_position - 1;
-
-       *end_line = *start_line + 2;
-
-       if (*end_line >= dc_crtc_timing->v_total)
-               *end_line = 2;
+               *start_line = vupdate_pos + ((-vupdate_pos / timing->v_total) + 1) * timing->v_total - 1;
+       *end_line = (*start_line + 2) % timing->v_total;
 }
 
 static void dcn10_cal_vline_position(
@@ -3840,23 +3713,27 @@ static void dcn10_cal_vline_position(
                uint32_t *start_line,
                uint32_t *end_line)
 {
-       switch (pipe_ctx->stream->periodic_interrupt.ref_point) {
-       case START_V_UPDATE:
-               dcn10_calc_vupdate_position(
-                               dc,
-                               pipe_ctx,
-                               start_line,
-                               end_line);
-               break;
-       case START_V_SYNC:
+       const struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
+       int vline_pos = pipe_ctx->stream->periodic_interrupt.lines_offset;
+
+       if (pipe_ctx->stream->periodic_interrupt.ref_point == START_V_UPDATE) {
+               if (vline_pos > 0)
+                       vline_pos--;
+               else if (vline_pos < 0)
+                       vline_pos++;
+
+               vline_pos += dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
+               if (vline_pos >= 0)
+                       *start_line = vline_pos - ((vline_pos / timing->v_total) * timing->v_total);
+               else
+                       *start_line = vline_pos + ((-vline_pos / timing->v_total) + 1) * timing->v_total - 1;
+               *end_line = (*start_line + 2) % timing->v_total;
+       } else if (pipe_ctx->stream->periodic_interrupt.ref_point == START_V_SYNC) {
                // vsync is line 0 so start_line is just the requested line offset
-               *start_line = pipe_ctx->stream->periodic_interrupt.lines_offset;
-               *end_line = *start_line + 2;
-               break;
-       default:
+               *start_line = vline_pos;
+               *end_line = (*start_line + 2) % timing->v_total;
+       } else
                ASSERT(0);
-               break;
-       }
 }
 
 void dcn10_setup_periodic_interrupt(
index ea77392..33d7802 100644
@@ -207,10 +207,7 @@ void optc1_program_timing(
        /* In case of V_TOTAL_CONTROL is on, make sure OTG_V_TOTAL_MAX and
         * OTG_V_TOTAL_MIN are equal to V_TOTAL.
         */
-       REG_SET(OTG_V_TOTAL_MAX, 0,
-               OTG_V_TOTAL_MAX, v_total);
-       REG_SET(OTG_V_TOTAL_MIN, 0,
-               OTG_V_TOTAL_MIN, v_total);
+       optc->funcs->set_vtotal_min_max(optc, v_total, v_total);
 
        /* v_sync_start = 0, v_sync_end = v_sync_width */
        v_sync_end = patched_crtc_timing.v_sync_width;
@@ -649,13 +646,6 @@ uint32_t optc1_get_vblank_counter(struct timing_generator *optc)
 void optc1_lock(struct timing_generator *optc)
 {
        struct optc *optc1 = DCN10TG_FROM_TG(optc);
-       uint32_t regval = 0;
-
-       regval = REG_READ(OTG_CONTROL);
-
-       /* otg is not running, do not need to be locked */
-       if ((regval & 0x1) == 0x0)
-               return;
 
        REG_SET(OTG_GLOBAL_CONTROL0, 0,
                        OTG_MASTER_UPDATE_LOCK_SEL, optc->inst);
@@ -663,12 +653,10 @@ void optc1_lock(struct timing_generator *optc)
                        OTG_MASTER_UPDATE_LOCK, 1);
 
        /* Should be fast, status does not update on maximus */
-       if (optc->ctx->dce_environment != DCE_ENV_FPGA_MAXIMUS) {
-
+       if (optc->ctx->dce_environment != DCE_ENV_FPGA_MAXIMUS)
                REG_WAIT(OTG_MASTER_UPDATE_LOCK,
                                UPDATE_LOCK_STATUS, 1,
                                1, 10);
-       }
 }
 
 void optc1_unlock(struct timing_generator *optc)
@@ -679,16 +667,6 @@ void optc1_unlock(struct timing_generator *optc)
                        OTG_MASTER_UPDATE_LOCK, 0);
 }
 
-bool optc1_is_locked(struct timing_generator *optc)
-{
-       struct optc *optc1 = DCN10TG_FROM_TG(optc);
-       uint32_t locked;
-
-       REG_GET(OTG_MASTER_UPDATE_LOCK, UPDATE_LOCK_STATUS, &locked);
-
-       return (locked == 1);
-}
-
 void optc1_get_position(struct timing_generator *optc,
                struct crtc_position *position)
 {
@@ -941,11 +919,7 @@ void optc1_set_drr(
 
                }
 
-               REG_SET(OTG_V_TOTAL_MAX, 0,
-                       OTG_V_TOTAL_MAX, params->vertical_total_max - 1);
-
-               REG_SET(OTG_V_TOTAL_MIN, 0,
-                       OTG_V_TOTAL_MIN, params->vertical_total_min - 1);
+               optc->funcs->set_vtotal_min_max(optc, params->vertical_total_min - 1, params->vertical_total_max - 1);
 
                REG_UPDATE_5(OTG_V_TOTAL_CONTROL,
                                OTG_V_TOTAL_MIN_SEL, 1,
@@ -964,11 +938,7 @@ void optc1_set_drr(
                                OTG_V_TOTAL_MAX_SEL, 0,
                                OTG_FORCE_LOCK_ON_EVENT, 0);
 
-               REG_SET(OTG_V_TOTAL_MIN, 0,
-                       OTG_V_TOTAL_MIN, 0);
-
-               REG_SET(OTG_V_TOTAL_MAX, 0,
-                       OTG_V_TOTAL_MAX, 0);
+               optc->funcs->set_vtotal_min_max(optc, 0, 0);
        }
 }
 
@@ -1583,11 +1553,11 @@ static const struct timing_generator_funcs dcn10_tg_funcs = {
                .enable_crtc_reset = optc1_enable_crtc_reset,
                .disable_reset_trigger = optc1_disable_reset_trigger,
                .lock = optc1_lock,
-               .is_locked = optc1_is_locked,
                .unlock = optc1_unlock,
                .enable_optc_clock = optc1_enable_optc_clock,
                .set_drr = optc1_set_drr,
                .get_last_used_drr_vtotal = NULL,
+               .set_vtotal_min_max = optc1_set_vtotal_min_max,
                .set_static_screen_control = optc1_set_static_screen_control,
                .set_test_pattern = optc1_set_test_pattern,
                .program_stereo = optc1_program_stereo,
index 6323ca6..88ac5f6 100644
@@ -654,7 +654,6 @@ void optc1_set_blank(struct timing_generator *optc,
                bool enable_blanking);
 
 bool optc1_is_blanked(struct timing_generator *optc);
-bool optc1_is_locked(struct timing_generator *optc);
 
 void optc1_program_blank_color(
                struct timing_generator *optc,
index 831080b..56d30ba 100644
@@ -1336,6 +1336,21 @@ static noinline void dcn10_resource_construct_fp(
        }
 }
 
+static bool verify_clock_values(struct dm_pp_clock_levels_with_voltage *clks)
+{
+       int i;
+
+       if (clks->num_levels == 0)
+               return false;
+
+       for (i = 0; i < clks->num_levels; i++)
+               /* Ensure that the result is sane */
+               if (clks->data[i].clocks_in_khz == 0)
+                       return false;
+
+       return true;
+}
+
 static bool dcn10_resource_construct(
        uint8_t num_virtual_links,
        struct dc *dc,
@@ -1345,6 +1360,9 @@ static bool dcn10_resource_construct(
        int j;
        struct dc_context *ctx = dc->ctx;
        uint32_t pipe_fuses = read_pipe_fuses(ctx);
+       struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
+       int min_fclk_khz, min_dcfclk_khz, socclk_khz;
+       bool res;
 
        ctx->dc_bios->regs = &bios_regs;
 
@@ -1523,15 +1541,53 @@ static bool dcn10_resource_construct(
                        && pool->base.pp_smu->rv_funcs.set_pme_wa_enable != NULL)
                dc->debug.az_endpoint_mute_only = false;
 
-       DC_FP_START();
-       if (!dc->debug.disable_pplib_clock_request)
-               dcn_bw_update_from_pplib(dc);
+
+       if (!dc->debug.disable_pplib_clock_request) {
+               /*
+                * TODO: This is not the proper way to obtain
+                * fabric_and_dram_bandwidth, should be min(fclk, memclk).
+                */
+               res = dm_pp_get_clock_levels_by_type_with_voltage(
+                               ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
+
+               DC_FP_START();
+
+               if (res)
+                       res = verify_clock_values(&fclks);
+
+               if (res)
+                       dcn_bw_update_from_pplib_fclks(dc, &fclks);
+               else
+                       BREAK_TO_DEBUGGER();
+
+               DC_FP_END();
+
+               res = dm_pp_get_clock_levels_by_type_with_voltage(
+                       ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
+
+               DC_FP_START();
+
+               if (res)
+                       res = verify_clock_values(&dcfclks);
+
+               if (res)
+                       dcn_bw_update_from_pplib_dcfclks(dc, &dcfclks);
+               else
+                       BREAK_TO_DEBUGGER();
+
+               DC_FP_END();
+       }
+
        dcn_bw_sync_calcs_and_dml(dc);
        if (!dc->debug.disable_pplib_wm_range) {
                dc->res_pool = &pool->base;
-               dcn_bw_notify_pplib_of_wm_ranges(dc);
+               DC_FP_START();
+               dcn_get_soc_clks(
+                       dc, &min_fclk_khz, &min_dcfclk_khz, &socclk_khz);
+               DC_FP_END();
+               dcn_bw_notify_pplib_of_wm_ranges(
+                       dc, min_fclk_khz, min_dcfclk_khz, socclk_khz);
        }
-       DC_FP_END();
 
        {
                struct irq_service_init_data init_data;
index b1ec0e6..4996d28 100644
@@ -617,6 +617,17 @@ void hubp2_cursor_set_attributes(
                        CURSOR0_DST_Y_OFFSET, 0,
                         /* used to shift the cursor chunk request deadline */
                        CURSOR0_CHUNK_HDL_ADJUST, 3);
+
+       hubp->att.SURFACE_ADDR_HIGH  = attr->address.high_part;
+       hubp->att.SURFACE_ADDR       = attr->address.low_part;
+       hubp->att.size.bits.width    = attr->width;
+       hubp->att.size.bits.height   = attr->height;
+       hubp->att.cur_ctl.bits.mode  = attr->color_format;
+       hubp->att.cur_ctl.bits.pitch = hw_pitch;
+       hubp->att.cur_ctl.bits.line_per_chunk = lpc;
+       hubp->att.cur_ctl.bits.cur_2x_magnify = attr->attribute_flags.bits.ENABLE_MAGNIFICATION;
+       hubp->att.settings.bits.dst_y_offset  = 0;
+       hubp->att.settings.bits.chunk_hdl_adjust = 3;
 }
 
 void hubp2_dmdata_set_attributes(
@@ -1033,6 +1044,25 @@ void hubp2_cursor_set_position(
        REG_SET(CURSOR_DST_OFFSET, 0,
                        CURSOR_DST_X_OFFSET, dst_x_offset);
        /* TODO Handle surface pixel formats other than 4:4:4 */
+       /* Cursor Position Register Config */
+       hubp->pos.cur_ctl.bits.cur_enable = cur_en;
+       hubp->pos.position.bits.x_pos = pos->x;
+       hubp->pos.position.bits.y_pos = pos->y;
+       hubp->pos.hot_spot.bits.x_hot = x_hotspot;
+       hubp->pos.hot_spot.bits.y_hot = y_hotspot;
+       hubp->pos.dst_offset.bits.dst_x_offset = dst_x_offset;
+       /* Cursor Rectangle Cache
+        * Cursor bitmaps have different hotspot values
+        * There's a possibility that the above logic returns a negative value,
+        * so we clamp them to 0
+        */
+       if (src_x_offset < 0)
+               src_x_offset = 0;
+       if (src_y_offset < 0)
+               src_y_offset = 0;
+       /* Save necessary cursor info x, y position. w, h is saved in attribute func. */
+       hubp->cur_rect.x = src_x_offset + param->viewport.x;
+       hubp->cur_rect.y = src_y_offset + param->viewport.y;
 }
 
 void hubp2_clk_cntl(struct hubp *hubp, bool enable)
index e1d271f..d732b6f 100644
@@ -1862,24 +1862,6 @@ void dcn20_post_unlock_program_front_end(
 
        for (i = 0; i < dc->res_pool->pipe_count; i++) {
                struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
-               struct pipe_ctx *mpcc_pipe;
-
-               if (pipe->vtp_locked) {
-                       dc->hwseq->funcs.wait_for_blank_complete(pipe->stream_res.opp);
-                       pipe->plane_res.hubp->funcs->set_blank(pipe->plane_res.hubp, true);
-                       pipe->vtp_locked = false;
-
-                       for (mpcc_pipe = pipe->bottom_pipe; mpcc_pipe; mpcc_pipe = mpcc_pipe->bottom_pipe)
-                               mpcc_pipe->plane_res.hubp->funcs->set_blank(mpcc_pipe->plane_res.hubp, true);
-
-                       for (i = 0; i < dc->res_pool->pipe_count; i++)
-                               if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable)
-                                       dc->hwss.disable_plane(dc, &dc->current_state->res_ctx.pipe_ctx[i]);
-               }
-       }
-
-       for (i = 0; i < dc->res_pool->pipe_count; i++) {
-               struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
                struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
 
                /* If an active, non-phantom pipe is being transitioned into a phantom
@@ -2018,6 +2000,10 @@ void dcn20_optimize_bandwidth(
                                context->bw_ctx.bw.dcn.clk.dramclk_khz <= dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000)
                        dc->clk_mgr->funcs->set_max_memclk(dc->clk_mgr, dc->clk_mgr->bw_params->dc_mode_softmax_memclk);
 
+       /* increase compbuf size */
+       if (hubbub->funcs->program_compbuf_size)
+               hubbub->funcs->program_compbuf_size(hubbub, context->bw_ctx.bw.dcn.compbuf_size_kb, true);
+
        dc->clk_mgr->funcs->update_clocks(
                        dc->clk_mgr,
                        context,
@@ -2033,9 +2019,6 @@ void dcn20_optimize_bandwidth(
                                                pipe_ctx->dlg_regs.optimized_min_dst_y_next_start);
                }
        }
-       /* increase compbuf size */
-       if (hubbub->funcs->program_compbuf_size)
-               hubbub->funcs->program_compbuf_size(hubbub, context->bw_ctx.bw.dcn.compbuf_size_kb, true);
 }
 
 bool dcn20_update_bandwidth(
index 0340fdd..a08c335 100644
@@ -529,6 +529,7 @@ static struct timing_generator_funcs dcn20_tg_funcs = {
                .enable_optc_clock = optc1_enable_optc_clock,
                .set_drr = optc1_set_drr,
                .get_last_used_drr_vtotal = optc2_get_last_used_drr_vtotal,
+               .set_vtotal_min_max = optc1_set_vtotal_min_max,
                .set_static_screen_control = optc1_set_static_screen_control,
                .program_stereo = optc1_program_stereo,
                .is_stereo_left_eye = optc1_is_stereo_left_eye,
index 5752271..c5e200d 100644
@@ -67,15 +67,9 @@ static uint32_t convert_and_clamp(
 void dcn21_dchvm_init(struct hubbub *hubbub)
 {
        struct dcn20_hubbub *hubbub1 = TO_DCN20_HUBBUB(hubbub);
-       uint32_t riommu_active, prefetch_done;
+       uint32_t riommu_active;
        int i;
 
-       REG_GET(DCHVM_RIOMMU_STAT0, HOSTVM_PREFETCH_DONE, &prefetch_done);
-
-       if (prefetch_done) {
-               hubbub->riommu_active = true;
-               return;
-       }
        //Init DCHVM block
        REG_UPDATE(DCHVM_CTRL0, HOSTVM_INIT_REQ, 1);
 
index 7cb35bb..8870814 100644
@@ -657,7 +657,6 @@ static const struct dc_debug_options debug_defaults_drv = {
                .usbc_combo_phy_reset_wa = true,
                .dmub_command_table = true,
                .use_max_lb = true,
-               .optimize_edp_link_rate = true
 };
 
 static const struct dc_debug_options debug_defaults_diags = {
@@ -677,6 +676,12 @@ static const struct dc_debug_options debug_defaults_diags = {
                .use_max_lb = true
 };
 
+static const struct dc_panel_config panel_config_defaults = {
+               .ilr = {
+                       .optimize_edp_link_rate = true,
+               },
+};
+
 enum dcn20_clk_src_array_id {
        DCN20_CLK_SRC_PLL0,
        DCN20_CLK_SRC_PLL1,
@@ -1367,6 +1372,11 @@ static struct panel_cntl *dcn21_panel_cntl_create(const struct panel_cntl_init_d
        return &panel_cntl->base;
 }
 
+static void dcn21_get_panel_config_defaults(struct dc_panel_config *panel_config)
+{
+       *panel_config = panel_config_defaults;
+}
+
 #define CTX ctx
 
 #define REG(reg_name) \
@@ -1408,6 +1418,7 @@ static const struct resource_funcs dcn21_res_pool_funcs = {
        .set_mcif_arb_params = dcn20_set_mcif_arb_params,
        .find_first_free_match_stream_enc_for_link = dcn10_find_first_free_match_stream_enc_for_link,
        .update_bw_bounding_box = dcn21_update_bw_bounding_box,
+       .get_panel_config_defaults = dcn21_get_panel_config_defaults,
 };
 
 static bool dcn21_resource_construct(
index 4a668d6..e5b7ef7 100644
@@ -372,6 +372,10 @@ void dpp3_set_cursor_attributes(
                REG_UPDATE(CURSOR0_COLOR1,
                                CUR0_COLOR1, 0xFFFFFFFF);
        }
+
+       dpp_base->att.cur0_ctl.bits.expansion_mode = 0;
+       dpp_base->att.cur0_ctl.bits.cur0_rom_en = cur_rom_en;
+       dpp_base->att.cur0_ctl.bits.mode = color_format;
 }
 
 
index 1782b9c..892d3c4 100644
@@ -319,13 +319,13 @@ static struct timing_generator_funcs dcn30_tg_funcs = {
                .enable_crtc_reset = optc1_enable_crtc_reset,
                .disable_reset_trigger = optc1_disable_reset_trigger,
                .lock = optc3_lock,
-               .is_locked = optc1_is_locked,
                .unlock = optc1_unlock,
                .lock_doublebuffer_enable = optc3_lock_doublebuffer_enable,
                .lock_doublebuffer_disable = optc3_lock_doublebuffer_disable,
                .enable_optc_clock = optc1_enable_optc_clock,
                .set_drr = optc1_set_drr,
                .get_last_used_drr_vtotal = optc2_get_last_used_drr_vtotal,
+               .set_vtotal_min_max = optc3_set_vtotal_min_max,
                .set_static_screen_control = optc1_set_static_screen_control,
                .program_stereo = optc1_program_stereo,
                .is_stereo_left_eye = optc1_is_stereo_left_eye,
@@ -366,4 +366,3 @@ void dcn30_timing_generator_init(struct optc *optc1)
        optc1->min_h_sync_width = 4;
        optc1->min_v_sync_width = 1;
 }
-
index 3a3b2ac..020f512 100644
@@ -1655,6 +1655,9 @@ noinline bool dcn30_internal_validate_bw(
        if (!pipes)
                return false;
 
+       context->bw_ctx.dml.vba.maxMpcComb = 0;
+       context->bw_ctx.dml.vba.VoltageLevel = 0;
+       context->bw_ctx.dml.vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive;
        dc->res_pool->funcs->update_soc_for_wm_a(dc, context);
        pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
 
@@ -1873,6 +1876,7 @@ noinline bool dcn30_internal_validate_bw(
 
        if (repopulate_pipes)
                pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
+       context->bw_ctx.dml.vba.VoltageLevel = vlevel;
        *vlevel_out = vlevel;
        *pipe_cnt_out = pipe_cnt;
 
index 559e563..f04595b 100644
@@ -852,7 +852,7 @@ static struct hubbub *dcn301_hubbub_create(struct dc_context *ctx)
                vmid->masks = &vmid_masks;
        }
 
-        hubbub3->num_vmid = res_cap_dcn301.num_vmid;
+       hubbub3->num_vmid = res_cap_dcn301.num_vmid;
 
        return &hubbub3->base;
 }
index 52fb2bf..814f401 100644
@@ -197,7 +197,7 @@ static void dcn31_hpo_dp_stream_enc_set_stream_attribute(
        uint32_t h_back_porch;
        uint32_t h_width;
        uint32_t v_height;
-       unsigned long long v_freq;
+       uint64_t v_freq;
        uint8_t misc0 = 0;
        uint8_t misc1 = 0;
        uint8_t hsp;
@@ -360,7 +360,7 @@ static void dcn31_hpo_dp_stream_enc_set_stream_attribute(
        v_height = hw_crtc_timing.v_border_top + hw_crtc_timing.v_addressable + hw_crtc_timing.v_border_bottom;
        hsp = hw_crtc_timing.flags.HSYNC_POSITIVE_POLARITY ? 0 : 0x80;
        vsp = hw_crtc_timing.flags.VSYNC_POSITIVE_POLARITY ? 0 : 0x80;
-       v_freq = hw_crtc_timing.pix_clk_100hz * 100;
+       v_freq = (uint64_t)hw_crtc_timing.pix_clk_100hz * 100;
 
        /*   MSA Packet Mapping to 32-bit Link Symbols - DP2 spec, section 2.7.4.1
         *
@@ -436,32 +436,28 @@ static void dcn31_hpo_dp_stream_enc_update_dp_info_packets(
 {
        struct dcn31_hpo_dp_stream_encoder *enc3 = DCN3_1_HPO_DP_STREAM_ENC_FROM_HPO_STREAM_ENC(enc);
        uint32_t dmdata_packet_enabled = 0;
-       bool sdp_stream_enable = false;
 
-       if (info_frame->vsc.valid) {
+       if (info_frame->vsc.valid)
                enc->vpg->funcs->update_generic_info_packet(
                                enc->vpg,
                                0,  /* packetIndex */
                                &info_frame->vsc,
                                true);
-               sdp_stream_enable = true;
-       }
-       if (info_frame->spd.valid) {
+
+       if (info_frame->spd.valid)
                enc->vpg->funcs->update_generic_info_packet(
                                enc->vpg,
                                2,  /* packetIndex */
                                &info_frame->spd,
                                true);
-               sdp_stream_enable = true;
-       }
-       if (info_frame->hdrsmd.valid) {
+
+       if (info_frame->hdrsmd.valid)
                enc->vpg->funcs->update_generic_info_packet(
                                enc->vpg,
                                3,  /* packetIndex */
                                &info_frame->hdrsmd,
                                true);
-               sdp_stream_enable = true;
-       }
+
        /* enable/disable transmission of packet(s).
         * If enabled, packet transmission begins on the next frame
         */
index 2f7404a..63a677c 100644
@@ -201,7 +201,6 @@ void optc31_set_drr(
 
                // Setup manual flow control for EOF via TRIG_A
                optc->funcs->setup_manual_trigger(optc);
-
        } else {
                REG_UPDATE_4(OTG_V_TOTAL_CONTROL,
                                OTG_SET_V_TOTAL_MIN_MASK, 0,
@@ -260,7 +259,6 @@ static struct timing_generator_funcs dcn31_tg_funcs = {
                .enable_crtc_reset = optc1_enable_crtc_reset,
                .disable_reset_trigger = optc1_disable_reset_trigger,
                .lock = optc3_lock,
-               .is_locked = optc1_is_locked,
                .unlock = optc1_unlock,
                .lock_doublebuffer_enable = optc3_lock_doublebuffer_enable,
                .lock_doublebuffer_disable = optc3_lock_doublebuffer_disable,
index 8c1a6fb..fddc21a 100644
@@ -888,9 +888,8 @@ static const struct dc_debug_options debug_defaults_drv = {
                }
        },
        .disable_z10 = true,
-       .optimize_edp_link_rate = true,
        .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/
-       .dml_hostvm_override = DML_HOSTVM_NO_OVERRIDE,
+       .dml_hostvm_override = DML_HOSTVM_OVERRIDE_FALSE,
 };
 
 static const struct dc_debug_options debug_defaults_diags = {
@@ -911,6 +910,12 @@ static const struct dc_debug_options debug_defaults_diags = {
        .use_max_lb = true
 };
 
+static const struct dc_panel_config panel_config_defaults = {
+       .ilr = {
+               .optimize_edp_link_rate = true,
+       },
+};
+
 static void dcn31_dpp_destroy(struct dpp **dpp)
 {
        kfree(TO_DCN20_DPP(*dpp));
@@ -1803,6 +1808,11 @@ validate_out:
        return out;
 }
 
+static void dcn31_get_panel_config_defaults(struct dc_panel_config *panel_config)
+{
+       *panel_config = panel_config_defaults;
+}
+
 static struct dc_cap_funcs cap_funcs = {
        .get_dcc_compression_cap = dcn20_get_dcc_compression_cap
 };
@@ -1829,6 +1839,7 @@ static struct resource_funcs dcn31_res_pool_funcs = {
        .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut,
        .update_bw_bounding_box = dcn31_update_bw_bounding_box,
        .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+       .get_panel_config_defaults = dcn31_get_panel_config_defaults,
 };
 
 static struct clock_source *dcn30_clock_source_create(
index 0d2ffb6..7e773bf 100644
@@ -262,7 +262,7 @@ static bool is_two_pixels_per_containter(const struct dc_crtc_timing *timing)
        return two_pix;
 }
 
-void enc314_stream_encoder_dp_blank(
+static void enc314_stream_encoder_dp_blank(
        struct dc_link *link,
        struct stream_encoder *enc)
 {
index 24ec71c..d0ad72c 100644
@@ -881,7 +881,8 @@ static const struct dc_plane_cap plane_cap = {
 };
 
 static const struct dc_debug_options debug_defaults_drv = {
-       .disable_z10 = true, /*hw not support it*/
+       .disable_z10 = false,
+       .enable_z9_disable_interface = true,
        .disable_dmcu = true,
        .force_abm_enable = false,
        .timing_trace = false,
@@ -914,7 +915,6 @@ static const struct dc_debug_options debug_defaults_drv = {
                        .afmt = true,
                }
        },
-       .optimize_edp_link_rate = true,
        .seamless_boot_odm_combine = true
 };
 
@@ -936,6 +936,12 @@ static const struct dc_debug_options debug_defaults_diags = {
        .use_max_lb = true
 };
 
+static const struct dc_panel_config panel_config_defaults = {
+       .ilr = {
+               .optimize_edp_link_rate = true,
+       },
+};
+
 static void dcn31_dpp_destroy(struct dpp **dpp)
 {
        kfree(TO_DCN20_DPP(*dpp));
@@ -1675,6 +1681,11 @@ static void dcn314_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
        DC_FP_END();
 }
 
+static void dcn314_get_panel_config_defaults(struct dc_panel_config *panel_config)
+{
+       *panel_config = panel_config_defaults;
+}
+
 static struct resource_funcs dcn314_res_pool_funcs = {
        .destroy = dcn314_destroy_resource_pool,
        .link_enc_create = dcn31_link_encoder_create,
@@ -1697,6 +1708,7 @@ static struct resource_funcs dcn314_res_pool_funcs = {
        .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut,
        .update_bw_bounding_box = dcn314_update_bw_bounding_box,
        .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+       .get_panel_config_defaults = dcn314_get_panel_config_defaults,
 };
 
 static struct clock_source *dcn30_clock_source_create(
index eebb42c..58746c4 100644
@@ -885,7 +885,6 @@ static const struct dc_debug_options debug_defaults_drv = {
                        .afmt = true,
                }
        },
-       .optimize_edp_link_rate = true,
        .psr_power_use_phy_fsm = 0,
 };
 
@@ -907,6 +906,12 @@ static const struct dc_debug_options debug_defaults_diags = {
        .use_max_lb = true
 };
 
+static const struct dc_panel_config panel_config_defaults = {
+       .ilr = {
+               .optimize_edp_link_rate = true,
+       },
+};
+
 static void dcn31_dpp_destroy(struct dpp **dpp)
 {
        kfree(TO_DCN20_DPP(*dpp));
@@ -1708,6 +1713,11 @@ static int dcn315_populate_dml_pipes_from_context(
        return pipe_cnt;
 }
 
+static void dcn315_get_panel_config_defaults(struct dc_panel_config *panel_config)
+{
+       *panel_config = panel_config_defaults;
+}
+
 static struct dc_cap_funcs cap_funcs = {
        .get_dcc_compression_cap = dcn20_get_dcc_compression_cap
 };
@@ -1721,7 +1731,7 @@ static struct resource_funcs dcn315_res_pool_funcs = {
        .panel_cntl_create = dcn31_panel_cntl_create,
        .validate_bandwidth = dcn31_validate_bandwidth,
        .calculate_wm_and_dlg = dcn31_calculate_wm_and_dlg,
-       .update_soc_for_wm_a = dcn31_update_soc_for_wm_a,
+       .update_soc_for_wm_a = dcn315_update_soc_for_wm_a,
        .populate_dml_pipes = dcn315_populate_dml_pipes_from_context,
        .acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
        .add_stream_to_ctx = dcn30_add_stream_to_ctx,
@@ -1734,6 +1744,7 @@ static struct resource_funcs dcn315_res_pool_funcs = {
        .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut,
        .update_bw_bounding_box = dcn315_update_bw_bounding_box,
        .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+       .get_panel_config_defaults = dcn315_get_panel_config_defaults,
 };
 
 static bool dcn315_resource_construct(
index f4b52a3..6b40a11 100644
@@ -885,7 +885,6 @@ static const struct dc_debug_options debug_defaults_drv = {
                        .afmt = true,
                }
        },
-       .optimize_edp_link_rate = true,
 };
 
 static const struct dc_debug_options debug_defaults_diags = {
@@ -906,6 +905,12 @@ static const struct dc_debug_options debug_defaults_diags = {
        .use_max_lb = true
 };
 
+static const struct dc_panel_config panel_config_defaults = {
+       .ilr = {
+               .optimize_edp_link_rate = true,
+       },
+};
+
 static void dcn31_dpp_destroy(struct dpp **dpp)
 {
        kfree(TO_DCN20_DPP(*dpp));
@@ -1710,6 +1715,11 @@ static int dcn316_populate_dml_pipes_from_context(
        return pipe_cnt;
 }
 
+static void dcn316_get_panel_config_defaults(struct dc_panel_config *panel_config)
+{
+       *panel_config = panel_config_defaults;
+}
+
 static struct dc_cap_funcs cap_funcs = {
        .get_dcc_compression_cap = dcn20_get_dcc_compression_cap
 };
@@ -1736,6 +1746,7 @@ static struct resource_funcs dcn316_res_pool_funcs = {
        .release_post_bldn_3dlut = dcn30_release_post_bldn_3dlut,
        .update_bw_bounding_box = dcn316_update_bw_bounding_box,
        .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+       .get_panel_config_defaults = dcn316_get_panel_config_defaults,
 };
 
 static bool dcn316_resource_construct(
index fdae6aa..076969d 100644
@@ -150,12 +150,6 @@ static void dcn32_link_encoder_get_max_link_cap(struct link_encoder *enc,
 
 }
 
-void enc32_set_dig_output_mode(struct link_encoder *enc, uint8_t pix_per_container)
-{
-       struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
-       REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_OUTPUT_PIXEL_MODE, pix_per_container);
-}
 static const struct link_encoder_funcs dcn32_link_enc_funcs = {
        .read_state = link_enc2_read_state,
        .validate_output_with_stream =
@@ -186,7 +180,6 @@ static const struct link_encoder_funcs dcn32_link_enc_funcs = {
        .is_in_alt_mode = dcn32_link_encoder_is_in_alt_mode,
        .get_max_link_cap = dcn32_link_encoder_get_max_link_cap,
        .set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
-       .set_dig_output_mode = enc32_set_dig_output_mode,
 };
 
 void dcn32_link_encoder_construct(
index 749a1e8..bbcfce0 100644
@@ -53,8 +53,4 @@ void dcn32_link_encoder_enable_dp_output(
        const struct dc_link_settings *link_settings,
        enum clock_source_id clock_source);
 
-void enc32_set_dig_output_mode(
-               struct link_encoder *enc,
-               uint8_t pix_per_container);
-
 #endif /* __DC_LINK_ENCODER__DCN32_H__ */
index 0e9dce4..d19fc93 100644
@@ -243,6 +243,39 @@ static bool is_two_pixels_per_containter(const struct dc_crtc_timing *timing)
        return two_pix;
 }
 
+static bool is_h_timing_divisible_by_2(const struct dc_crtc_timing *timing)
+{
+       /* math borrowed from function of same name in inc/resource
+        * checks if h_timing is divisible by 2
+        */
+
+       bool divisible = false;
+       uint16_t h_blank_start = 0;
+       uint16_t h_blank_end = 0;
+
+       if (timing) {
+               h_blank_start = timing->h_total - timing->h_front_porch;
+               h_blank_end = h_blank_start - timing->h_addressable;
+
+               /* HTOTAL, Hblank start/end, and Hsync start/end all must be
+                * divisible by 2 in order for the horizontal timing params
+                * to be considered divisible by 2. Hsync start is always 0.
+                */
+               divisible = (timing->h_total % 2 == 0) &&
+                               (h_blank_start % 2 == 0) &&
+                               (h_blank_end % 2 == 0) &&
+                               (timing->h_sync_width % 2 == 0);
+       }
+       return divisible;
+}
+
+static bool is_dp_dig_pixel_rate_div_policy(struct dc *dc, const struct dc_crtc_timing *timing)
+{
+       /* should be functionally the same as dcn32_is_dp_dig_pixel_rate_div_policy for DP encoders*/
+       return is_h_timing_divisible_by_2(timing) &&
+               dc->debug.enable_dp_dig_pixel_rate_div_policy;
+}
+
 static void enc32_stream_encoder_dp_unblank(
         struct dc_link *link,
                struct stream_encoder *enc,
@@ -259,7 +292,7 @@ static void enc32_stream_encoder_dp_unblank(
 
                /* YCbCr 4:2:0 : Computed VID_M will be 2X the input rate */
                if (is_two_pixels_per_containter(&param->timing) || param->opp_cnt > 1
-                       || dc->debug.enable_dp_dig_pixel_rate_div_policy) {
+                       || is_dp_dig_pixel_rate_div_policy(dc, &param->timing)) {
                        /*this logic should be the same in get_pixel_clock_parameters() */
                        n_multiply = 1;
                }
@@ -355,7 +388,7 @@ static void enc32_dp_set_dsc_config(struct stream_encoder *enc,
 {
        struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
 
-       REG_UPDATE(DP_DSC_CNTL, DP_DSC_MODE, dsc_mode);
+       REG_UPDATE(DP_DSC_CNTL, DP_DSC_MODE, dsc_mode == OPTC_DSC_DISABLED ? 0 : 1);
 }
 
 /* this function read dsc related register fields to be logged later in dcn10_log_hw_state
@@ -378,24 +411,6 @@ static void enc32_read_state(struct stream_encoder *enc, struct enc_state *s)
        }
 }
 
-static void enc32_stream_encoder_reset_fifo(struct stream_encoder *enc)
-{
-       struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
-       uint32_t fifo_enabled;
-
-       REG_GET(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, &fifo_enabled);
-
-       if (fifo_enabled == 0) {
-               /* reset DIG resync FIFO */
-               REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 1);
-               /* TODO: fix timeout when wait for DIG_FIFO_RESET_DONE */
-               //REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 1, 1, 100);
-               udelay(1);
-               REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 0);
-               REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 0, 1, 100);
-       }
-}
-
 static void enc32_set_dig_input_mode(struct stream_encoder *enc, unsigned int pix_per_container)
 {
        struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
@@ -425,8 +440,6 @@ static const struct stream_encoder_funcs dcn32_str_enc_funcs = {
                enc3_stream_encoder_update_dp_info_packets,
        .stop_dp_info_packets =
                enc1_stream_encoder_stop_dp_info_packets,
-       .reset_fifo =
-               enc32_stream_encoder_reset_fifo,
        .dp_blank =
                enc1_stream_encoder_dp_blank,
        .dp_unblank =
index 250d9a3..ecd041a 100644 (file)
@@ -71,7 +71,9 @@
        SRI(DP_MSE_RATE_UPDATE, DP, id), \
        SRI(DP_PIXEL_FORMAT, DP, id), \
        SRI(DP_SEC_CNTL, DP, id), \
+       SRI(DP_SEC_CNTL1, DP, id), \
        SRI(DP_SEC_CNTL2, DP, id), \
+       SRI(DP_SEC_CNTL5, DP, id), \
        SRI(DP_SEC_CNTL6, DP, id), \
        SRI(DP_STEER_FIFO, DP, id), \
        SRI(DP_VID_M, DP, id), \
@@ -93,7 +95,7 @@
        SRI(DIG_FIFO_CTRL0, DIG, id)
 
 
-#define SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh)\
+#define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
        SE_SF(DP0_DP_PIXEL_FORMAT, DP_PIXEL_ENCODING, mask_sh),\
        SE_SF(DP0_DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, mask_sh),\
        SE_SF(DP0_DP_PIXEL_FORMAT, DP_PIXEL_PER_CYCLE_PROCESSING_MODE, mask_sh),\
        SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_CONT, mask_sh),\
        SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_SEND, mask_sh),\
        SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_NULL_SEND, mask_sh),\
+       SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, mask_sh),\
        SE_SF(DIG0_HDMI_INFOFRAME_CONTROL0, HDMI_AUDIO_INFO_SEND, mask_sh),\
        SE_SF(DIG0_HDMI_INFOFRAME_CONTROL1, HDMI_AUDIO_INFO_LINE, mask_sh),\
        SE_SF(DIG0_HDMI_GC, HDMI_GC_AVMUTE, mask_sh),\
        SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, mask_sh),\
        SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_OUTPUT_PIXEL_MODE, mask_sh)
 
-#if defined(CONFIG_DRM_AMD_DC_HDCP)
-#define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
-       SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh),\
-       SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, mask_sh)
-#else
-#define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
-       SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh)
-#endif
-
 void dcn32_dio_stream_encoder_construct(
        struct dcn10_stream_encoder *enc1,
        struct dc_context *ctx,
index 9db1323..176b153 100644 (file)
@@ -47,6 +47,7 @@
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL1, mask_sh),\
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL2, mask_sh),\
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL3, mask_sh),\
+       SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_SQ_PULSE, TP_SQ_PULSE_WIDTH, mask_sh),\
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_SAT_VC0, SAT_STREAM_SOURCE, mask_sh),\
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_SAT_VC0, SAT_SLOT_COUNT, mask_sh),\
        SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_VC_RATE_CNTL0, STREAM_VC_RATE_X, mask_sh),\
index f6d3da4..9fbb723 100644 (file)
@@ -936,6 +936,7 @@ static const struct hubbub_funcs hubbub32_funcs = {
        .program_watermarks = hubbub32_program_watermarks,
        .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
        .is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
+       .verify_allow_pstate_change_high = hubbub1_verify_allow_pstate_change_high,
        .force_wm_propagate_to_pipes = hubbub32_force_wm_propagate_to_pipes,
        .force_pstate_change_control = hubbub3_force_pstate_change_control,
        .init_watermarks = hubbub32_init_watermarks,
index 2038cbd..ac1c645 100644 (file)
@@ -79,6 +79,8 @@ void hubp32_phantom_hubp_post_enable(struct hubp *hubp)
        uint32_t reg_val;
        struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
 
+       /* For phantom pipe enable, disable GSL */
+       REG_UPDATE(DCSURF_FLIP_CONTROL2, SURFACE_GSL_ENABLE, 0);
        REG_UPDATE(DCHUBP_CNTL, HUBP_BLANK_EN, 1);
        reg_val = REG_READ(DCHUBP_CNTL);
        if (reg_val) {
@@ -179,12 +181,12 @@ static struct hubp_funcs dcn32_hubp_funcs = {
        .hubp_init = hubp3_init,
        .set_unbounded_requesting = hubp31_set_unbounded_requesting,
        .hubp_soft_reset = hubp31_soft_reset,
+       .hubp_set_flip_int = hubp1_set_flip_int,
        .hubp_in_blank = hubp1_in_blank,
        .hubp_update_force_pstate_disallow = hubp32_update_force_pstate_disallow,
        .phantom_hubp_post_enable = hubp32_phantom_hubp_post_enable,
        .hubp_update_mall_sel = hubp32_update_mall_sel,
-       .hubp_prepare_subvp_buffering = hubp32_prepare_subvp_buffering,
-       .hubp_set_flip_int = hubp1_set_flip_int
+       .hubp_prepare_subvp_buffering = hubp32_prepare_subvp_buffering
 };
 
 bool hubp32_construct(
index a750343..cf5bd97 100644 (file)
@@ -206,8 +206,7 @@ static bool dcn32_check_no_memory_request_for_cab(struct dc *dc)
  */
 static uint32_t dcn32_calculate_cab_allocation(struct dc *dc, struct dc_state *ctx)
 {
-       uint8_t i;
-       int j;
+       int i, j;
        struct dc_stream_state *stream = NULL;
        struct dc_plane_state *plane = NULL;
        uint32_t cursor_size = 0;
@@ -630,10 +629,9 @@ bool dcn32_set_input_transfer_func(struct dc *dc,
                        params = &dpp_base->degamma_params;
        }
 
-       result = dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
+       dpp_base->funcs->dpp_program_gamcor_lut(dpp_base, params);
 
-       if (result &&
-                       pipe_ctx->stream_res.opp &&
+       if (pipe_ctx->stream_res.opp &&
                        pipe_ctx->stream_res.opp->ctx &&
                        hws->funcs.set_mcm_luts)
                result = hws->funcs.set_mcm_luts(pipe_ctx, plane_state);
@@ -991,6 +989,10 @@ void dcn32_init_hw(struct dc *dc)
                dc_dmub_srv_query_caps_cmd(dc->ctx->dmub_srv->dmub);
                dc->caps.dmub_caps.psr = dc->ctx->dmub_srv->dmub->feature_caps.psr;
        }
+
+       /* Enable support for ODM and windowed MPO if policy flag is set */
+       if (dc->debug.enable_single_display_2to1_odm_policy)
+               dc->config.enable_windowed_mpo_odm = true;
 }
 
 static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
@@ -1145,23 +1147,25 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
                                true);
        }
 
-       // Don't program pixel clock after link is already enabled
-/*     if (false == pipe_ctx->clock_source->funcs->program_pix_clk(
-                       pipe_ctx->clock_source,
-                       &pipe_ctx->stream_res.pix_clk_params,
-                       &pipe_ctx->pll_settings)) {
-               BREAK_TO_DEBUGGER();
-       }*/
+       if (pipe_ctx->stream_res.dsc) {
+               struct pipe_ctx *current_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx];
 
-       if (pipe_ctx->stream_res.dsc)
                update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
+
+               /* Check if no longer using pipe for ODM, then need to disconnect DSC for that pipe */
+               if (!pipe_ctx->next_odm_pipe && current_pipe_ctx->next_odm_pipe &&
+                               current_pipe_ctx->next_odm_pipe->stream_res.dsc) {
+                       struct display_stream_compressor *dsc = current_pipe_ctx->next_odm_pipe->stream_res.dsc;
+                       /* disconnect DSC block from stream */
+                       dsc->funcs->dsc_disconnect(dsc);
+               }
+       }
 }
 
 unsigned int dcn32_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsigned int *k1_div, unsigned int *k2_div)
 {
        struct dc_stream_state *stream = pipe_ctx->stream;
        unsigned int odm_combine_factor = 0;
-       struct dc *dc = pipe_ctx->stream->ctx->dc;
        bool two_pix_per_container = false;
 
        // For phantom pipes, use the same programming as the main pipes
@@ -1189,7 +1193,7 @@ unsigned int dcn32_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsign
                } else {
                        *k1_div = PIXEL_RATE_DIV_BY_1;
                        *k2_div = PIXEL_RATE_DIV_BY_4;
-                       if ((odm_combine_factor == 2) || dc->debug.enable_dp_dig_pixel_rate_div_policy)
+                       if ((odm_combine_factor == 2) || dcn32_is_dp_dig_pixel_rate_div_policy(pipe_ctx))
                                *k2_div = PIXEL_RATE_DIV_BY_2;
                }
        }
@@ -1226,7 +1230,6 @@ void dcn32_unblank_stream(struct pipe_ctx *pipe_ctx,
        struct dc_link *link = stream->link;
        struct dce_hwseq *hws = link->dc->hwseq;
        struct pipe_ctx *odm_pipe;
-       struct dc *dc = pipe_ctx->stream->ctx->dc;
        uint32_t pix_per_cycle = 1;
 
        params.opp_cnt = 1;
@@ -1245,7 +1248,7 @@ void dcn32_unblank_stream(struct pipe_ctx *pipe_ctx,
                                pipe_ctx->stream_res.tg->inst);
        } else if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
                if (optc2_is_two_pixels_per_containter(&stream->timing) || params.opp_cnt > 1
-                       || dc->debug.enable_dp_dig_pixel_rate_div_policy) {
+                       || dcn32_is_dp_dig_pixel_rate_div_policy(pipe_ctx)) {
                        params.timing.pix_clk_100hz /= 2;
                        pix_per_cycle = 2;
                }
@@ -1262,6 +1265,9 @@ bool dcn32_is_dp_dig_pixel_rate_div_policy(struct pipe_ctx *pipe_ctx)
 {
        struct dc *dc = pipe_ctx->stream->ctx->dc;
 
+       if (!is_h_timing_divisible_by_2(pipe_ctx->stream))
+               return false;
+
        if (dc_is_dp_signal(pipe_ctx->stream->signal) && !is_dp_128b_132b_signal(pipe_ctx) &&
                dc->debug.enable_dp_dig_pixel_rate_div_policy)
                return true;
@@ -1394,7 +1400,7 @@ bool dcn32_dsc_pg_status(
                break;
        }
 
-       return pwr_status == 0 ? true : false;
+       return pwr_status == 0;
 }
 
 void dcn32_update_dsc_pg(struct dc *dc,
index ec3989d..2b33eeb 100644 (file)
@@ -151,7 +151,7 @@ static bool optc32_disable_crtc(struct timing_generator *optc)
        /* CRTC disabled, so disable  clock. */
        REG_WAIT(OTG_CLOCK_CONTROL,
                        OTG_BUSY, 0,
-                       1, 100000);
+                       1, 150000);
 
        return true;
 }
index 05de97e..a88dd7b 100644 (file)
@@ -1680,6 +1680,8 @@ static void dcn32_enable_phantom_plane(struct dc *dc,
                phantom_plane->clip_rect.y = 0;
                phantom_plane->clip_rect.height = phantom_stream->timing.v_addressable;
 
+               phantom_plane->is_phantom = true;
+
                dc_add_plane_to_context(dc, phantom_stream, phantom_plane, context);
 
                curr_pipe = curr_pipe->bottom_pipe;
@@ -1749,6 +1751,10 @@ bool dcn32_remove_phantom_pipes(struct dc *dc, struct dc_state *context)
                        pipe->stream->mall_stream_config.type = SUBVP_NONE;
                        pipe->stream->mall_stream_config.paired_stream = NULL;
                }
+
+               if (pipe->plane_state) {
+                       pipe->plane_state->is_phantom = false;
+               }
        }
        return removed_pipe;
 }
@@ -1798,14 +1804,39 @@ bool dcn32_validate_bandwidth(struct dc *dc,
        int vlevel = 0;
        int pipe_cnt = 0;
        display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
+       struct mall_temp_config mall_temp_config;
+
+       /* To handle Freesync properly, setting FreeSync DML parameters
+        * to its default state for the first stage of validation
+        */
+       context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching = false;
+       context->bw_ctx.dml.soc.dram_clock_change_requirement_final = true;
+
        DC_LOGGER_INIT(dc->ctx->logger);
 
+       /* For fast validation, there are situations where a shallow copy
+        * of the dc->current_state is created for the validation. In this case
+        * we want to save and restore the mall config because we always
+        * teardown subvp at the beginning of validation (and don't attempt
+        * to add it back if it's fast validation). If we don't restore the
+        * subvp config in cases of fast validation + shallow copy of the
+        * dc->current_state, the dc->current_state will have a partially
+        * removed subvp state when we did not intend to remove it.
+        */
+       if (fast_validate) {
+               memset(&mall_temp_config, 0, sizeof(mall_temp_config));
+               dcn32_save_mall_state(dc, context, &mall_temp_config);
+       }
+
        BW_VAL_TRACE_COUNT();
 
        DC_FP_START();
        out = dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
        DC_FP_END();
 
+       if (fast_validate)
+               dcn32_restore_mall_state(dc, context, &mall_temp_config);
+
        if (pipe_cnt == 0)
                goto validate_out;
 
index 55945cc..f76120e 100644 (file)
 extern struct _vcs_dpi_ip_params_st dcn3_2_ip;
 extern struct _vcs_dpi_soc_bounding_box_st dcn3_2_soc;
 
+/* Temp struct used to save and restore MALL config
+ * during validation.
+ *
+ * TODO: Move MALL config into dc_state instead of stream struct
+ * to avoid needing to save/restore.
+ */
+struct mall_temp_config {
+       struct mall_stream_config mall_stream_config[MAX_PIPES];
+       bool is_phantom_plane[MAX_PIPES];
+};
+
 struct dcn32_resource_pool {
        struct resource_pool base;
 };
@@ -108,6 +119,8 @@ bool dcn32_subvp_in_use(struct dc *dc,
 
 bool dcn32_mpo_in_use(struct dc_state *context);
 
+bool dcn32_any_surfaces_rotated(struct dc *dc, struct dc_state *context);
+
 struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
                struct dc_state *state,
                const struct resource_pool *pool,
@@ -120,6 +133,15 @@ void dcn32_determine_det_override(struct dc *dc,
 
 void dcn32_set_det_allocations(struct dc *dc, struct dc_state *context,
        display_e2e_pipe_params_st *pipes);
+
+void dcn32_save_mall_state(struct dc *dc,
+               struct dc_state *context,
+               struct mall_temp_config *temp_config);
+
+void dcn32_restore_mall_state(struct dc *dc,
+               struct dc_state *context,
+               struct mall_temp_config *temp_config);
+
 /* definitions for run time init of reg offsets */
 
 /* CLK SRC */
index a2a70a1..d51d0c4 100644 (file)
@@ -233,6 +233,23 @@ bool dcn32_mpo_in_use(struct dc_state *context)
        return false;
 }
 
+
+bool dcn32_any_surfaces_rotated(struct dc *dc, struct dc_state *context)
+{
+       uint32_t i;
+
+       for (i = 0; i < dc->res_pool->pipe_count; i++) {
+               struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+               if (!pipe->stream)
+                       continue;
+
+               if (pipe->plane_state && pipe->plane_state->rotation != ROTATION_ANGLE_0)
+                       return true;
+       }
+       return false;
+}
+
 /**
  * *******************************************************************************************
  * dcn32_determine_det_override: Determine DET allocation for each pipe
@@ -363,3 +380,74 @@ void dcn32_set_det_allocations(struct dc *dc, struct dc_state *context,
        } else
                dcn32_determine_det_override(dc, context, pipes);
 }
+
+/**
+ * *******************************************************************************************
+ * dcn32_save_mall_state: Save MALL (SubVP) state for fast validation cases
+ *
+ * This function saves the MALL (SubVP) state for fast validation cases. For fast validation,
+ * there are situations where a shallow copy of the dc->current_state is created for the
+ * validation. In this case we want to save and restore the mall config because we always
+ * teardown subvp at the beginning of validation (and don't attempt to add it back if it's
+ * fast validation). If we don't restore the subvp config in cases of fast validation +
+ * shallow copy of the dc->current_state, the dc->current_state will have a partially
+ * removed subvp state when we did not intend to remove it.
+ *
+ * NOTE: This function ONLY works if the streams are not moved to a different pipe in the
+ *       validation. We don't expect this to happen in fast_validation=1 cases.
+ *
+ * @param [in]: dc: Current DC state
+ * @param [in]: context: New DC state to be programmed
+ * @param [out]: temp_config: struct used to cache the existing MALL state
+ *
+ * @return: void
+ *
+ * *******************************************************************************************
+ */
+void dcn32_save_mall_state(struct dc *dc,
+               struct dc_state *context,
+               struct mall_temp_config *temp_config)
+{
+       uint32_t i;
+
+       for (i = 0; i < dc->res_pool->pipe_count; i++) {
+               struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+               if (pipe->stream)
+                       temp_config->mall_stream_config[i] = pipe->stream->mall_stream_config;
+
+               if (pipe->plane_state)
+                       temp_config->is_phantom_plane[i] = pipe->plane_state->is_phantom;
+       }
+}
+
+/**
+ * *******************************************************************************************
+ * dcn32_restore_mall_state: Restore MALL (SubVP) state for fast validation cases
+ *
+ * Restore the MALL state based on the previously saved state from dcn32_save_mall_state
+ *
+ * @param [in]: dc: Current DC state
+ * @param [in/out]: context: New DC state to be programmed, restore MALL state into here
+ * @param [in]: temp_config: struct that has the cached MALL state
+ *
+ * @return: void
+ *
+ * *******************************************************************************************
+ */
+void dcn32_restore_mall_state(struct dc *dc,
+               struct dc_state *context,
+               struct mall_temp_config *temp_config)
+{
+       uint32_t i;
+
+       for (i = 0; i < dc->res_pool->pipe_count; i++) {
+               struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+               if (pipe->stream)
+                       pipe->stream->mall_stream_config = temp_config->mall_stream_config[i];
+
+               if (pipe->plane_state)
+                       pipe->plane_state->is_phantom = temp_config->is_phantom_plane[i];
+       }
+}
index 49682a3..fa9b660 100644 (file)
@@ -91,7 +91,6 @@ static const struct link_encoder_funcs dcn321_link_enc_funcs = {
        .is_in_alt_mode = dcn20_link_encoder_is_in_alt_mode,
        .get_max_link_cap = dcn20_link_encoder_get_max_link_cap,
        .set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
-       .set_dig_output_mode = enc32_set_dig_output_mode,
 };
 
 void dcn321_link_encoder_construct(
index aed0f68..61087f2 100644 (file)
@@ -94,8 +94,6 @@
 #include "dcn20/dcn20_vmid.h"
 
 #define DC_LOGGER_INIT(logger)
-#define fixed16_to_double(x) (((double)x) / ((double) (1 << 16)))
-#define fixed16_to_double_to_cpu(x) fixed16_to_double(le32_to_cpu(x))
 
 enum dcn321_clk_src_array_id {
        DCN321_CLK_SRC_PLL0,
@@ -1606,7 +1604,7 @@ static struct resource_funcs dcn321_res_pool_funcs = {
        .validate_bandwidth = dcn32_validate_bandwidth,
        .calculate_wm_and_dlg = dcn32_calculate_wm_and_dlg,
        .populate_dml_pipes = dcn32_populate_dml_pipes_from_context,
-       .acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
+       .acquire_idle_pipe_for_head_pipe_in_layer = dcn32_acquire_idle_pipe_for_head_pipe_in_layer,
        .add_stream_to_ctx = dcn30_add_stream_to_ctx,
        .add_dsc_to_stream_resource = dcn20_add_dsc_to_stream_resource,
        .remove_stream_from_ctx = dcn20_remove_stream_from_ctx,
@@ -1656,7 +1654,7 @@ static bool dcn321_resource_construct(
 
 #undef REG_STRUCT
 #define REG_STRUCT dccg_regs
-               dccg_regs_init();
+       dccg_regs_init();
 
 
        ctx->dc_bios->regs = &bios_regs;
index d70838e..ca7d240 100644 (file)
@@ -77,7 +77,7 @@ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/dcn30_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/dcn32_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_ccflags) $(frame_warn_flag)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_ccflags)
-CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags)
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags) $(frame_warn_flag)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn321/dcn321_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/dcn31_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_ccflags)
index d46adc8..e73f089 100644 (file)
@@ -1444,81 +1444,67 @@ unsigned int dcn_find_dcfclk_suits_all(
        return dcf_clk;
 }
 
-static bool verify_clock_values(struct dm_pp_clock_levels_with_voltage *clks)
+void dcn_bw_update_from_pplib_fclks(
+       struct dc *dc,
+       struct dm_pp_clock_levels_with_voltage *fclks)
 {
-       int i;
-
-       if (clks->num_levels == 0)
-               return false;
-
-       for (i = 0; i < clks->num_levels; i++)
-               /* Ensure that the result is sane */
-               if (clks->data[i].clocks_in_khz == 0)
-                       return false;
+       unsigned vmin0p65_idx, vmid0p72_idx, vnom0p8_idx, vmax0p9_idx;
 
-       return true;
+       ASSERT(fclks->num_levels);
+
+       vmin0p65_idx = 0;
+       vmid0p72_idx = fclks->num_levels -
+               (fclks->num_levels > 2 ? 3 : (fclks->num_levels > 1 ? 2 : 1));
+       vnom0p8_idx = fclks->num_levels - (fclks->num_levels > 1 ? 2 : 1);
+       vmax0p9_idx = fclks->num_levels - 1;
+
+       dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 =
+               32 * (fclks->data[vmin0p65_idx].clocks_in_khz / 1000.0) / 1000.0;
+       dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72 =
+               dc->dcn_soc->number_of_channels *
+               (fclks->data[vmid0p72_idx].clocks_in_khz / 1000.0)
+               * ddr4_dram_factor_single_Channel / 1000.0;
+       dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8 =
+               dc->dcn_soc->number_of_channels *
+               (fclks->data[vnom0p8_idx].clocks_in_khz / 1000.0)
+               * ddr4_dram_factor_single_Channel / 1000.0;
+       dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 =
+               dc->dcn_soc->number_of_channels *
+               (fclks->data[vmax0p9_idx].clocks_in_khz / 1000.0)
+               * ddr4_dram_factor_single_Channel / 1000.0;
 }
 
-void dcn_bw_update_from_pplib(struct dc *dc)
+void dcn_bw_update_from_pplib_dcfclks(
+       struct dc *dc,
+       struct dm_pp_clock_levels_with_voltage *dcfclks)
 {
-       struct dc_context *ctx = dc->ctx;
-       struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
-       bool res;
-       unsigned vmin0p65_idx, vmid0p72_idx, vnom0p8_idx, vmax0p9_idx;
-
-       /* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */
-       res = dm_pp_get_clock_levels_by_type_with_voltage(
-                       ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
-
-       if (res)
-               res = verify_clock_values(&fclks);
-
-       if (res) {
-               ASSERT(fclks.num_levels);
-
-               vmin0p65_idx = 0;
-               vmid0p72_idx = fclks.num_levels -
-                       (fclks.num_levels > 2 ? 3 : (fclks.num_levels > 1 ? 2 : 1));
-               vnom0p8_idx = fclks.num_levels - (fclks.num_levels > 1 ? 2 : 1);
-               vmax0p9_idx = fclks.num_levels - 1;
-
-               dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 =
-                       32 * (fclks.data[vmin0p65_idx].clocks_in_khz / 1000.0) / 1000.0;
-               dc->dcn_soc->fabric_and_dram_bandwidth_vmid0p72 =
-                       dc->dcn_soc->number_of_channels *
-                       (fclks.data[vmid0p72_idx].clocks_in_khz / 1000.0)
-                       * ddr4_dram_factor_single_Channel / 1000.0;
-               dc->dcn_soc->fabric_and_dram_bandwidth_vnom0p8 =
-                       dc->dcn_soc->number_of_channels *
-                       (fclks.data[vnom0p8_idx].clocks_in_khz / 1000.0)
-                       * ddr4_dram_factor_single_Channel / 1000.0;
-               dc->dcn_soc->fabric_and_dram_bandwidth_vmax0p9 =
-                       dc->dcn_soc->number_of_channels *
-                       (fclks.data[vmax0p9_idx].clocks_in_khz / 1000.0)
-                       * ddr4_dram_factor_single_Channel / 1000.0;
-       } else
-               BREAK_TO_DEBUGGER();
-
-       res = dm_pp_get_clock_levels_by_type_with_voltage(
-                       ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
-
-       if (res)
-               res = verify_clock_values(&dcfclks);
+       if (dcfclks->num_levels >= 3) {
+               dc->dcn_soc->dcfclkv_min0p65 = dcfclks->data[0].clocks_in_khz / 1000.0;
+               dc->dcn_soc->dcfclkv_mid0p72 = dcfclks->data[dcfclks->num_levels - 3].clocks_in_khz / 1000.0;
+               dc->dcn_soc->dcfclkv_nom0p8 = dcfclks->data[dcfclks->num_levels - 2].clocks_in_khz / 1000.0;
+               dc->dcn_soc->dcfclkv_max0p9 = dcfclks->data[dcfclks->num_levels - 1].clocks_in_khz / 1000.0;
+       }
+}
 
-       if (res && dcfclks.num_levels >= 3) {
-               dc->dcn_soc->dcfclkv_min0p65 = dcfclks.data[0].clocks_in_khz / 1000.0;
-               dc->dcn_soc->dcfclkv_mid0p72 = dcfclks.data[dcfclks.num_levels - 3].clocks_in_khz / 1000.0;
-               dc->dcn_soc->dcfclkv_nom0p8 = dcfclks.data[dcfclks.num_levels - 2].clocks_in_khz / 1000.0;
-               dc->dcn_soc->dcfclkv_max0p9 = dcfclks.data[dcfclks.num_levels - 1].clocks_in_khz / 1000.0;
-       } else
-               BREAK_TO_DEBUGGER();
+void dcn_get_soc_clks(
+       struct dc *dc,
+       int *min_fclk_khz,
+       int *min_dcfclk_khz,
+       int *socclk_khz)
+{
+       *min_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 * 1000000 / 32;
+       *min_dcfclk_khz = dc->dcn_soc->dcfclkv_min0p65 * 1000;
+       *socclk_khz = dc->dcn_soc->socclk * 1000;
 }
 
-void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
+void dcn_bw_notify_pplib_of_wm_ranges(
+       struct dc *dc,
+       int min_fclk_khz,
+       int min_dcfclk_khz,
+       int socclk_khz)
 {
        struct pp_smu_funcs_rv *pp = NULL;
        struct pp_smu_wm_range_sets ranges = {0};
-       int min_fclk_khz, min_dcfclk_khz, socclk_khz;
        const int overdrive = 5000000; /* 5 GHz to cover Overdrive */
 
        if (dc->res_pool->pp_smu)
@@ -1526,10 +1512,6 @@ void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc)
        if (!pp || !pp->set_wm_ranges)
                return;
 
-       min_fclk_khz = dc->dcn_soc->fabric_and_dram_bandwidth_vmin0p65 * 1000000 / 32;
-       min_dcfclk_khz = dc->dcn_soc->dcfclkv_min0p65 * 1000;
-       socclk_khz = dc->dcn_soc->socclk * 1000;
-
        /* Now notify PPLib/SMU about which Watermarks sets they should select
         * depending on DPM state they are in. And update BW MGR GFX Engine and
         * Memory clock member variables for Watermarks calculations for each
index b6e99ee..7dd0845 100644 (file)
@@ -292,6 +292,7 @@ static struct _vcs_dpi_soc_bounding_box_st dcn3_15_soc = {
        .urgent_latency_adjustment_fabric_clock_component_us = 0,
        .urgent_latency_adjustment_fabric_clock_reference_mhz = 0,
        .num_chans = 4,
+       .dummy_pstate_latency_us = 10.0
 };
 
 struct _vcs_dpi_ip_params_st dcn3_16_ip = {
@@ -459,13 +460,30 @@ void dcn31_update_soc_for_wm_a(struct dc *dc, struct dc_state *context)
        }
 }
 
+void dcn315_update_soc_for_wm_a(struct dc *dc, struct dc_state *context)
+{
+       dc_assert_fp_enabled();
+
+       if (dc->clk_mgr->bw_params->wm_table.entries[WM_A].valid) {
+               /* For 315 pstate change is only supported if possible in vactive */
+               if (context->bw_ctx.dml.vba.DRAMClockChangeSupport[context->bw_ctx.dml.vba.VoltageLevel][context->bw_ctx.dml.vba.maxMpcComb] != dm_dram_clock_change_vactive)
+                       context->bw_ctx.dml.soc.dram_clock_change_latency_us = context->bw_ctx.dml.soc.dummy_pstate_latency_us;
+               else
+                       context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.entries[WM_A].pstate_latency_us;
+               context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us =
+                               dc->clk_mgr->bw_params->wm_table.entries[WM_A].sr_enter_plus_exit_time_us;
+               context->bw_ctx.dml.soc.sr_exit_time_us =
+                               dc->clk_mgr->bw_params->wm_table.entries[WM_A].sr_exit_time_us;
+       }
+}
+
 void dcn31_calculate_wm_and_dlg_fp(
                struct dc *dc, struct dc_state *context,
                display_e2e_pipe_params_st *pipes,
                int pipe_cnt,
                int vlevel)
 {
-       int i, pipe_idx;
+       int i, pipe_idx, active_dpp_count = 0;
        double dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
 
        dc_assert_fp_enabled();
@@ -486,72 +504,6 @@ void dcn31_calculate_wm_and_dlg_fp(
        pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
        pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel].socclk_mhz;
 
-#if 0 // TODO
-       /* Set B:
-        * TODO
-        */
-       if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].valid) {
-               if (vlevel == 0) {
-                       pipes[0].clks_cfg.voltage = 1;
-                       pipes[0].clks_cfg.dcfclk_mhz = context->bw_ctx.dml.soc.clock_limits[0].dcfclk_mhz;
-               }
-               context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.pstate_latency_us;
-               context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.sr_enter_plus_exit_time_us;
-               context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.sr_exit_time_us;
-       }
-       context->bw_ctx.bw.dcn.watermarks.b.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_enter_plus_exit_z8_ns = get_wm_z8_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_exit_z8_ns = get_wm_z8_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.b.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-
-       pipes[0].clks_cfg.voltage = vlevel;
-       pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
-
-       /* Set C:
-        * TODO
-        */
-       if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].valid) {
-               context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.pstate_latency_us;
-               context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_enter_plus_exit_time_us;
-               context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_exit_time_us;
-       }
-       context->bw_ctx.bw.dcn.watermarks.c.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_enter_plus_exit_z8_ns = get_wm_z8_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_exit_z8_ns = get_wm_z8_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.c.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-
-       /* Set D:
-        * TODO
-        */
-       if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].valid) {
-               context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.pstate_latency_us;
-               context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.sr_enter_plus_exit_time_us;
-               context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.sr_exit_time_us;
-       }
-       context->bw_ctx.bw.dcn.watermarks.d.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_enter_plus_exit_z8_ns = get_wm_z8_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_exit_z8_ns = get_wm_z8_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       context->bw_ctx.bw.dcn.watermarks.d.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-#endif
-
        /* Set A:
         * All clocks min required
         *
@@ -568,16 +520,17 @@ void dcn31_calculate_wm_and_dlg_fp(
        context->bw_ctx.bw.dcn.watermarks.a.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
        context->bw_ctx.bw.dcn.watermarks.a.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
        context->bw_ctx.bw.dcn.watermarks.a.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-       /* TODO: remove: */
        context->bw_ctx.bw.dcn.watermarks.b = context->bw_ctx.bw.dcn.watermarks.a;
        context->bw_ctx.bw.dcn.watermarks.c = context->bw_ctx.bw.dcn.watermarks.a;
        context->bw_ctx.bw.dcn.watermarks.d = context->bw_ctx.bw.dcn.watermarks.a;
-       /* end remove*/
 
        for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
                if (!context->res_ctx.pipe_ctx[i].stream)
                        continue;
 
+               if (context->res_ctx.pipe_ctx[i].plane_state)
+                       active_dpp_count++;
+
                pipes[pipe_idx].clks_cfg.dispclk_mhz = get_dispclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt);
                pipes[pipe_idx].clks_cfg.dppclk_mhz = get_dppclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
 
@@ -594,6 +547,9 @@ void dcn31_calculate_wm_and_dlg_fp(
        }
 
        dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
+       /* For 31x APUs, pstate change is only supported if possible in vactive or if there are no active DPPs */
+       context->bw_ctx.bw.dcn.clk.p_state_change_support =
+                       context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] == dm_dram_clock_change_vactive || !active_dpp_count;
 }
 
 void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
@@ -739,7 +695,7 @@ void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param
        }
 
        if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
-               dml_init_instance(&dc->dml, &dcn3_15_soc, &dcn3_15_ip, DML_PROJECT_DCN31);
+               dml_init_instance(&dc->dml, &dcn3_15_soc, &dcn3_15_ip, DML_PROJECT_DCN315);
        else
                dml_init_instance(&dc->dml, &dcn3_15_soc, &dcn3_15_ip, DML_PROJECT_DCN31_FPGA);
 }
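The P-state condition added above for DCN31x APUs reduces to a simple predicate: the change is honored when DML reports vactive support at the chosen voltage level, or when no pipe has a plane attached. A minimal standalone sketch (the enum and function names here are illustrative, not the driver's):

```c
#include <stdbool.h>

/* Mirrors the dcn31_calculate_wm_and_dlg_fp() change: P-state switching
 * is supported only in vactive, or trivially when there are no active
 * DPPs (no pipe carries a plane). */
enum dram_change_support {
	DRAM_CHANGE_UNSUPPORTED,
	DRAM_CHANGE_VBLANK,
	DRAM_CHANGE_VACTIVE,
};

static bool pstate_change_supported(enum dram_change_support dml_support,
				    int active_dpp_count)
{
	return dml_support == DRAM_CHANGE_VACTIVE || active_dpp_count == 0;
}
```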
index 4372f17..fd58b25 100644 (file)
@@ -35,6 +35,7 @@ void dcn31_zero_pipe_dcc_fraction(display_e2e_pipe_params_st *pipes,
                                  int pipe_cnt);
 
 void dcn31_update_soc_for_wm_a(struct dc *dc, struct dc_state *context);
+void dcn315_update_soc_for_wm_a(struct dc *dc, struct dc_state *context);
 
 void dcn31_calculate_wm_and_dlg_fp(
                struct dc *dc, struct dc_state *context,
index 8dfe639..b612edb 100644 (file)
@@ -43,6 +43,8 @@
 #define BPP_BLENDED_PIPE 0xffffffff
 #define DCN31_MAX_DSC_IMAGE_WIDTH 5184
 #define DCN31_MAX_FMT_420_BUFFER_WIDTH 4096
+#define DCN3_15_MIN_COMPBUF_SIZE_KB 128
+#define DCN3_15_MAX_DET_SIZE 384
 
 // For DML-C changes that haven't been propagated to VBA yet
 //#define __DML_VBA_ALLOW_DELTA__
@@ -3775,6 +3777,17 @@ static noinline void CalculatePrefetchSchedulePerPlane(
                &v->VReadyOffsetPix[k]);
 }
 
+static void PatchDETBufferSizeInKByte(unsigned int NumberOfActivePlanes, int NoOfDPPThisState[], unsigned int config_return_buffer_size_in_kbytes, unsigned int *DETBufferSizeInKByte)
+{
+       int i, total_pipes = 0;
+       for (i = 0; i < NumberOfActivePlanes; i++)
+               total_pipes += NoOfDPPThisState[i];
+       *DETBufferSizeInKByte = ((config_return_buffer_size_in_kbytes - DCN3_15_MIN_COMPBUF_SIZE_KB) / 64 / total_pipes) * 64;
+       if (*DETBufferSizeInKByte > DCN3_15_MAX_DET_SIZE)
+               *DETBufferSizeInKByte = DCN3_15_MAX_DET_SIZE;
+}
+
+
 void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_lib)
 {
        struct vba_vars_st *v = &mode_lib->vba;
@@ -4533,6 +4546,8 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                                v->ODMCombineEnableThisState[k] = v->ODMCombineEnablePerState[i][k];
                        }
 
+                       if (v->NumberOfActivePlanes > 1 && mode_lib->project == DML_PROJECT_DCN315)
+                               PatchDETBufferSizeInKByte(v->NumberOfActivePlanes, v->NoOfDPPThisState, v->ip.config_return_buffer_size_in_kbytes, &v->DETBufferSizeInKByte[0]);
                        CalculateSwathAndDETConfiguration(
                                        false,
                                        v->NumberOfActivePlanes,
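The `PatchDETBufferSizeInKByte()` helper added in this hunk can be restated standalone: reserve the minimum compressed buffer, split the remainder evenly across all DPPs, round down to a 64 KB granule, and clamp to the per-pipe maximum. The constants mirror the DCN3.15 values defined above; the return-buffer size in the test below is an invented example, not the real SoC value.

```c
#define MIN_COMPBUF_SIZE_KB 128	/* DCN3_15_MIN_COMPBUF_SIZE_KB */
#define MAX_DET_SIZE_KB     384	/* DCN3_15_MAX_DET_SIZE */

/* Evenly sized, 64 KB aligned DET allocation per pipe, after carving
 * out the minimum compressed buffer from the return buffer. */
static unsigned int det_buffer_size_kb(unsigned int config_return_buffer_kb,
				       int total_pipes)
{
	unsigned int kb =
		((config_return_buffer_kb - MIN_COMPBUF_SIZE_KB) / 64 / total_pipes) * 64;

	return kb > MAX_DET_SIZE_KB ? MAX_DET_SIZE_KB : kb;
}
```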
index 0571700..819de0f 100644 (file)
@@ -243,7 +243,7 @@ void dcn32_build_wm_range_table_fpu(struct clk_mgr_internal *clk_mgr)
        clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.max_uclk = 0xFFFF;
 }
 
-/**
+/*
  * Finds dummy_latency_index when MCLK switching using firmware based
  * vblank stretch is enabled. This function will iterate through the
  * table of dummy pstate latencies until the lowest value that allows
@@ -290,15 +290,14 @@ int dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(struct dc *dc,
 /**
  * dcn32_helper_populate_phantom_dlg_params - Get DLG params for phantom pipes
  * and populate pipe_ctx with those params.
- *
- * This function must be called AFTER the phantom pipes are added to context
- * and run through DML (so that the DLG params for the phantom pipes can be
- * populated), and BEFORE we program the timing for the phantom pipes.
- *
  * @dc: [in] current dc state
  * @context: [in] new dc state
  * @pipes: [in] DML pipe params array
  * @pipe_cnt: [in] DML pipe count
+ *
+ * This function must be called AFTER the phantom pipes are added to context
+ * and run through DML (so that the DLG params for the phantom pipes can be
+ * populated), and BEFORE we program the timing for the phantom pipes.
  */
 void dcn32_helper_populate_phantom_dlg_params(struct dc *dc,
                                              struct dc_state *context,
@@ -331,8 +330,9 @@ void dcn32_helper_populate_phantom_dlg_params(struct dc *dc,
 }
 
 /**
- * *******************************************************************************************
- * dcn32_predict_pipe_split: Predict if pipe split will occur for a given DML pipe
+ * dcn32_predict_pipe_split - Predict if pipe split will occur for a given DML pipe
+ * @context: [in] New DC state to be programmed
+ * @pipe_e2e: [in] DML pipe end to end context
  *
  * This function takes in a DML pipe (pipe_e2e) and predicts if pipe split is required (both
  * ODM and MPC). For pipe split, ODM combine is determined by the ODM mode, and MPC combine is
@@ -343,12 +343,7 @@ void dcn32_helper_populate_phantom_dlg_params(struct dc *dc,
  * - MPC combine is only chosen if there is no ODM combine requirements / policy in place, and
  *   MPC is required
  *
- * @param [in]: context: New DC state to be programmed
- * @param [in]: pipe_e2e: DML pipe end to end context
- *
- * @return: Number of splits expected (1 for 2:1 split, 3 for 4:1 split, 0 for no splits).
- *
- * *******************************************************************************************
+ * Return: Number of splits expected (1 for 2:1 split, 3 for 4:1 split, 0 for no splits).
  */
 uint8_t dcn32_predict_pipe_split(struct dc_state *context,
                                  display_e2e_pipe_params_st *pipe_e2e)
@@ -504,7 +499,14 @@ void insert_entry_into_table_sorted(struct _vcs_dpi_voltage_scaling_st *table,
 }
 
 /**
- * dcn32_set_phantom_stream_timing: Set timing params for the phantom stream
+ * dcn32_set_phantom_stream_timing - Set timing params for the phantom stream
+ * @dc: current dc state
+ * @context: new dc state
+ * @ref_pipe: Main pipe for the phantom stream
+ * @phantom_stream: target phantom stream state
+ * @pipes: DML pipe params
+ * @pipe_cnt: number of DML pipes
+ * @dc_pipe_idx: DC pipe index for the main pipe (i.e. ref_pipe)
  *
  * Set timing params of the phantom stream based on calculated output from DML.
  * This function first gets the DML pipe index using the DC pipe index, then
@@ -517,13 +519,6 @@ void insert_entry_into_table_sorted(struct _vcs_dpi_voltage_scaling_st *table,
  * that separately.
  *
  * - Set phantom backporch = vstartup of main pipe
- *
- * @dc: current dc state
- * @context: new dc state
- * @ref_pipe: Main pipe for the phantom stream
- * @pipes: DML pipe params
- * @pipe_cnt: number of DML pipes
- * @dc_pipe_idx: DC pipe index for the main pipe (i.e. ref_pipe)
  */
 void dcn32_set_phantom_stream_timing(struct dc *dc,
                                     struct dc_state *context,
@@ -592,16 +587,14 @@ void dcn32_set_phantom_stream_timing(struct dc *dc,
 }
 
 /**
- * dcn32_get_num_free_pipes: Calculate number of free pipes
+ * dcn32_get_num_free_pipes - Calculate number of free pipes
+ * @dc: current dc state
+ * @context: new dc state
  *
  * This function assumes that a "used" pipe is a pipe that has
  * both a stream and a plane assigned to it.
  *
- * @dc: current dc state
- * @context: new dc state
- *
- * Return:
- * Number of free pipes available in the context
+ * Return: Number of free pipes available in the context
  */
 static unsigned int dcn32_get_num_free_pipes(struct dc *dc, struct dc_state *context)
 {
@@ -625,7 +618,10 @@ static unsigned int dcn32_get_num_free_pipes(struct dc *dc, struct dc_state *con
 }
 
 /**
- * dcn32_assign_subvp_pipe: Function to decide which pipe will use Sub-VP.
+ * dcn32_assign_subvp_pipe - Function to decide which pipe will use Sub-VP.
+ * @dc: current dc state
+ * @context: new dc state
+ * @index: [out] dc pipe index for the pipe chosen to have phantom pipes assigned
  *
  * We enter this function if we are Sub-VP capable (i.e. enough pipes available)
  * and regular P-State switching (i.e. VACTIVE/VBLANK) is not supported, or if
@@ -639,12 +635,7 @@ static unsigned int dcn32_get_num_free_pipes(struct dc *dc, struct dc_state *con
  * for determining which should be the SubVP pipe (need a way to determine if a pipe / plane doesn't
  * support MCLK switching naturally [i.e. ACTIVE or VBLANK]).
  *
- * @param dc: current dc state
- * @param context: new dc state
- * @param index: [out] dc pipe index for the pipe chosen to have phantom pipes assigned
- *
- * Return:
- * True if a valid pipe assignment was found for Sub-VP. Otherwise false.
+ * Return: True if a valid pipe assignment was found for Sub-VP. Otherwise false.
  */
 static bool dcn32_assign_subvp_pipe(struct dc *dc,
                                    struct dc_state *context,
@@ -711,7 +702,9 @@ static bool dcn32_assign_subvp_pipe(struct dc *dc,
 }
 
 /**
- * dcn32_enough_pipes_for_subvp: Function to check if there are "enough" pipes for SubVP.
+ * dcn32_enough_pipes_for_subvp - Function to check if there are "enough" pipes for SubVP.
+ * @dc: current dc state
+ * @context: new dc state
  *
  * This function returns true if there are enough free pipes
  * to create the required phantom pipes for any given stream
@@ -723,9 +716,6 @@ static bool dcn32_assign_subvp_pipe(struct dc *dc,
  * pipe which can be used as the phantom pipe for the non pipe
  * split pipe.
  *
- * @dc: current dc state
- * @context: new dc state
- *
  * Return:
  * True if there are enough free pipes to assign phantom pipes to at least one
  * stream that does not already have phantom pipes assigned. Otherwise false.
@@ -764,7 +754,9 @@ static bool dcn32_enough_pipes_for_subvp(struct dc *dc, struct dc_state *context
 }
 
 /**
- * subvp_subvp_schedulable: Determine if SubVP + SubVP config is schedulable
+ * subvp_subvp_schedulable - Determine if SubVP + SubVP config is schedulable
+ * @dc: current dc state
+ * @context: new dc state
  *
  * High level algorithm:
  * 1. Find longest microschedule length (in us) between the two SubVP pipes
@@ -772,11 +764,7 @@ static bool dcn32_enough_pipes_for_subvp(struct dc *dc, struct dc_state *context
  * pipes still allows for the maximum microschedule to fit in the active
  * region for both pipes.
  *
- * @dc: current dc state
- * @context: new dc state
- *
- * Return:
- * bool - True if the SubVP + SubVP config is schedulable, false otherwise
+ * Return: True if the SubVP + SubVP config is schedulable, false otherwise
  */
 static bool subvp_subvp_schedulable(struct dc *dc, struct dc_state *context)
 {
@@ -836,7 +824,10 @@ static bool subvp_subvp_schedulable(struct dc *dc, struct dc_state *context)
 }
 
 /**
- * subvp_drr_schedulable: Determine if SubVP + DRR config is schedulable
+ * subvp_drr_schedulable - Determine if SubVP + DRR config is schedulable
+ * @dc: current dc state
+ * @context: new dc state
+ * @drr_pipe: DRR pipe_ctx for the SubVP + DRR config
  *
  * High level algorithm:
  * 1. Get timing for SubVP pipe, phantom pipe, and DRR pipe
@@ -845,12 +836,7 @@ static bool subvp_subvp_schedulable(struct dc *dc, struct dc_state *context)
  * 3. If (SubVP Active - Prefetch > Stretched DRR frame + max(MALL region, Stretched DRR frame))
  * then report the configuration as supported
  *
- * @dc: current dc state
- * @context: new dc state
- * @drr_pipe: DRR pipe_ctx for the SubVP + DRR config
- *
- * Return:
- * bool - True if the SubVP + DRR config is schedulable, false otherwise
+ * Return: True if the SubVP + DRR config is schedulable, false otherwise
  */
 static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context, struct pipe_ctx *drr_pipe)
 {
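Step 3 of the comment above is a single inequality, and a hedged standalone sketch makes it concrete (all durations in microseconds; the parameter names are illustrative, not the driver's): the SubVP active region, minus prefetch time, must cover a stretched DRR frame plus the larger of the MALL region and that stretched frame.

```c
#include <stdbool.h>

/* SubVP + DRR static schedulability check from the comment:
 * SubVP Active - Prefetch > Stretched DRR frame
 *                           + max(MALL region, Stretched DRR frame) */
static bool subvp_drr_fits(double subvp_active_us, double prefetch_us,
			   double stretched_drr_frame_us, double mall_region_us)
{
	double worst = mall_region_us > stretched_drr_frame_us ?
			mall_region_us : stretched_drr_frame_us;

	return subvp_active_us - prefetch_us > stretched_drr_frame_us + worst;
}
```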
@@ -914,7 +900,9 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context, struc
 
 
 /**
- * subvp_vblank_schedulable: Determine if SubVP + VBLANK config is schedulable
+ * subvp_vblank_schedulable - Determine if SubVP + VBLANK config is schedulable
+ * @dc: current dc state
+ * @context: new dc state
  *
  * High level algorithm:
  * 1. Get timing for SubVP pipe, phantom pipe, and VBLANK pipe
@@ -922,11 +910,7 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context, struc
  * then report the configuration as supported
  * 3. If the VBLANK display is DRR, then take the DRR static schedulability path
  *
- * @dc: current dc state
- * @context: new dc state
- *
- * Return:
- * bool - True if the SubVP + VBLANK/DRR config is schedulable, false otherwise
+ * Return: True if the SubVP + VBLANK/DRR config is schedulable, false otherwise
  */
 static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
 {
@@ -1003,20 +987,18 @@ static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
 }
 
 /**
- * subvp_validate_static_schedulability: Check which SubVP case is calculated and handle
- * static analysis based on the case.
+ * subvp_validate_static_schedulability - Check which SubVP case is calculated
+ * and handle static analysis based on the case.
+ * @dc: current dc state
+ * @context: new dc state
+ * @vlevel: Voltage level calculated by DML
  *
  * Three cases:
  * 1. SubVP + SubVP
  * 2. SubVP + VBLANK (DRR checked internally)
  * 3. SubVP + VACTIVE (currently unsupported)
  *
- * @dc: current dc state
- * @context: new dc state
- * @vlevel: Voltage level calculated by DML
- *
- * Return:
- * bool - True if statically schedulable, false otherwise
+ * Return: True if statically schedulable, false otherwise
  */
 static bool subvp_validate_static_schedulability(struct dc *dc,
                                struct dc_state *context,
@@ -1115,7 +1097,8 @@ static void dcn32_full_validate_bw_helper(struct dc *dc,
         * 5. (Config doesn't support MCLK in VACTIVE/VBLANK || dc->debug.force_subvp_mclk_switch)
         */
        if (!dc->debug.force_disable_subvp && dcn32_all_pipes_have_stream_and_plane(dc, context) &&
-           !dcn32_mpo_in_use(context) && (*vlevel == context->bw_ctx.dml.soc.num_states ||
+           !dcn32_mpo_in_use(context) && !dcn32_any_surfaces_rotated(dc, context) &&
+               (*vlevel == context->bw_ctx.dml.soc.num_states ||
            vba->DRAMClockChangeSupport[*vlevel][vba->maxMpcComb] == dm_dram_clock_change_unsupported ||
            dc->debug.force_subvp_mclk_switch)) {
 
@@ -1597,6 +1580,9 @@ bool dcn32_internal_validate_bw(struct dc *dc,
                                        /*MPC split rules will handle this case*/
                                        pipe->bottom_pipe->top_pipe = NULL;
                                } else {
+                                       /* when merging ODM pipes, the bottom MPC pipe must now point to
+                                        * the previous ODM pipe and its associated stream assets
+                                        */
                                        if (pipe->prev_odm_pipe->bottom_pipe) {
                                                /* 3 plane MPO*/
                                                pipe->bottom_pipe->top_pipe = pipe->prev_odm_pipe->bottom_pipe;
@@ -1606,6 +1592,8 @@ bool dcn32_internal_validate_bw(struct dc *dc,
                                                pipe->bottom_pipe->top_pipe = pipe->prev_odm_pipe;
                                                pipe->prev_odm_pipe->bottom_pipe = pipe->bottom_pipe;
                                        }
+
+                                       memcpy(&pipe->bottom_pipe->stream_res, &pipe->bottom_pipe->top_pipe->stream_res, sizeof(struct stream_resource));
                                }
                        }
 
@@ -1781,6 +1769,7 @@ void dcn32_calculate_wm_and_dlg_fpu(struct dc *dc, struct dc_state *context,
        int i, pipe_idx, vlevel_temp = 0;
        double dcfclk = dcn3_2_soc.clock_limits[0].dcfclk_mhz;
        double dcfclk_from_validation = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
+       double dcfclk_from_fw_based_mclk_switching = dcfclk_from_validation;
        bool pstate_en = context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] !=
                        dm_dram_clock_change_unsupported;
        unsigned int dummy_latency_index = 0;
@@ -1816,7 +1805,7 @@ void dcn32_calculate_wm_and_dlg_fpu(struct dc *dc, struct dc_state *context,
                                        dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
                        dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, false);
                        maxMpcComb = context->bw_ctx.dml.vba.maxMpcComb;
-                       dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
+                       dcfclk_from_fw_based_mclk_switching = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
                        pstate_en = context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][maxMpcComb] !=
                                        dm_dram_clock_change_unsupported;
                }
@@ -1902,6 +1891,10 @@ void dcn32_calculate_wm_and_dlg_fpu(struct dc *dc, struct dc_state *context,
        pipes[0].clks_cfg.dcfclk_mhz = dcfclk_from_validation;
        pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel].socclk_mhz;
 
+       if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) {
+               pipes[0].clks_cfg.dcfclk_mhz = dcfclk_from_fw_based_mclk_switching;
+       }
+
        if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].valid) {
                min_dram_speed_mts = context->bw_ctx.dml.vba.DRAMSpeed;
                min_dram_speed_mts_margin = 160;
@@ -2275,7 +2268,7 @@ static int build_synthetic_soc_states(struct clk_bw_params *bw_params,
        return 0;
 }
 
-/**
+/*
  * dcn32_update_bw_bounding_box
  *
  * This would override some dcn3_2 ip_or_soc initial parameters hardcoded from
index 75be1e1..5b91660 100644 (file)
@@ -733,6 +733,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
                                mode_lib->vba.FCLKChangeLatency, v->UrgentLatency,
                                mode_lib->vba.SREnterPlusExitTime);
 
+                       memset(&v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.myPipe, 0, sizeof(DmlPipe));
+
                        v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.myPipe.Dppclk = mode_lib->vba.DPPCLK[k];
                        v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.myPipe.Dispclk = mode_lib->vba.DISPCLK;
                        v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.myPipe.PixelClock = mode_lib->vba.PixelClock[k];
@@ -2252,9 +2254,8 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
        for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
                if (!(mode_lib->vba.DSCInputBitPerComponent[k] == 12.0
                                || mode_lib->vba.DSCInputBitPerComponent[k] == 10.0
-                               || mode_lib->vba.DSCInputBitPerComponent[k] == 8.0
-                               || mode_lib->vba.DSCInputBitPerComponent[k] >
-                               mode_lib->vba.MaximumDSCBitsPerComponent)) {
+                               || mode_lib->vba.DSCInputBitPerComponent[k] == 8.0)
+                               || mode_lib->vba.DSCInputBitPerComponent[k] > mode_lib->vba.MaximumDSCBitsPerComponent) {
                        mode_lib->vba.NonsupportedDSCInputBPC = true;
                }
        }
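The DSC bits-per-component hunk above is a precedence fix: the `> MaximumDSCBitsPerComponent` term was previously swallowed inside the negated OR chain, so the check now rejects a depth that is either not one of {8, 10, 12} bpc or above the configured maximum. A standalone restatement of the corrected predicate:

```c
#include <stdbool.h>

/* True when the DSC input component depth is unsupported: it must be
 * exactly 8, 10, or 12 bpc AND must not exceed the configured maximum. */
static bool dsc_input_bpc_unsupported(double bpc, double max_bpc)
{
	bool known_depth = bpc == 12.0 || bpc == 10.0 || bpc == 8.0;

	return !known_depth || bpc > max_bpc;
}
```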
@@ -2330,16 +2331,15 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                                if (mode_lib->vba.OutputMultistreamId[k] == k && mode_lib->vba.ForcedOutputLinkBPP[k] == 0)
                                        mode_lib->vba.BPPForMultistreamNotIndicated = true;
                                for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
-                                       if (mode_lib->vba.OutputMultistreamId[k] == j && mode_lib->vba.OutputMultistreamEn[k]
+                                       if (mode_lib->vba.OutputMultistreamId[k] == j
                                                && mode_lib->vba.ForcedOutputLinkBPP[k] == 0)
                                                mode_lib->vba.BPPForMultistreamNotIndicated = true;
                                }
                        }
 
                        if ((mode_lib->vba.Output[k] == dm_edp || mode_lib->vba.Output[k] == dm_hdmi)) {
-                               if (mode_lib->vba.OutputMultistreamId[k] == k && mode_lib->vba.OutputMultistreamEn[k])
+                               if (mode_lib->vba.OutputMultistreamEn[k] == true && mode_lib->vba.OutputMultistreamId[k] == k)
                                        mode_lib->vba.MultistreamWithHDMIOreDP = true;
-
                                for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
                                        if (mode_lib->vba.OutputMultistreamEn[k] == true && mode_lib->vba.OutputMultistreamId[k] == j)
                                                mode_lib->vba.MultistreamWithHDMIOreDP = true;
@@ -2478,8 +2478,6 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                                        mode_lib->vba.PixelClock[k], mode_lib->vba.PixelClockBackEnd[k]);
                }
 
-               m = 0;
-
                for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
                        for (m = 0; m <= mode_lib->vba.NumberOfActiveSurfaces - 1; m++) {
                                for (j = 0; j <= mode_lib->vba.NumberOfActiveSurfaces - 1; j++) {
@@ -2856,8 +2854,6 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                }
        }
 
-       m = 0;
-
        //Calculate Return BW
        for (i = 0; i < (int) v->soc.num_states; ++i) {
                for (j = 0; j <= 1; ++j) {
@@ -3618,11 +3614,10 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                        mode_lib->vba.ModeIsSupported = mode_lib->vba.ModeSupport[i][0] == true
                                        || mode_lib->vba.ModeSupport[i][1] == true;
 
-                       if (mode_lib->vba.ModeSupport[i][0] == true) {
+                       if (mode_lib->vba.ModeSupport[i][0] == true)
                                MaximumMPCCombine = 0;
-                       } else {
+                       else
                                MaximumMPCCombine = 1;
-                       }
                }
        }
 
index f5400ed..4125d3d 100644 (file)
@@ -114,6 +114,7 @@ void dml_init_instance(struct display_mode_lib *lib,
                break;
        case DML_PROJECT_DCN31:
        case DML_PROJECT_DCN31_FPGA:
+       case DML_PROJECT_DCN315:
                lib->funcs = dml31_funcs;
                break;
        case DML_PROJECT_DCN314:
index b1878a1..3d643d5 100644 (file)
@@ -40,6 +40,7 @@ enum dml_project {
        DML_PROJECT_DCN21,
        DML_PROJECT_DCN30,
        DML_PROJECT_DCN31,
+       DML_PROJECT_DCN315,
        DML_PROJECT_DCN31_FPGA,
        DML_PROJECT_DCN314,
        DML_PROJECT_DCN32,
index 8919a20..9498105 100644 (file)
@@ -39,6 +39,8 @@
 #include "panel_cntl.h"
 
 #define MAX_CLOCK_SOURCES 7
+#define MAX_SVP_PHANTOM_STREAMS 2
+#define MAX_SVP_PHANTOM_PLANES 2
 
 void enable_surface_flip_reporting(struct dc_plane_state *plane_state,
                uint32_t controller_id);
@@ -232,6 +234,7 @@ struct resource_funcs {
             unsigned int index);
 
        bool (*remove_phantom_pipes)(struct dc *dc, struct dc_state *context);
+       void (*get_panel_config_defaults)(struct dc_panel_config *panel_config);
 };
 
 struct audio_support{
@@ -438,7 +441,6 @@ struct pipe_ctx {
        union pipe_update_flags update_flags;
        struct dwbc *dwbc;
        struct mcif_wb *mcif_wb;
-       bool vtp_locked;
 };
 
 /* Data used for dynamic link encoder assignment.
@@ -492,6 +494,8 @@ struct dcn_bw_output {
        struct dcn_watermark_set watermarks;
        struct dcn_bw_writeback bw_writeback;
        int compbuf_size_kb;
+       unsigned int legacy_svp_drr_stream_index;
+       bool legacy_svp_drr_stream_index_valid;
 };
 
 union bw_output {
index 806f304..9e4ddc9 100644 (file)
@@ -628,8 +628,23 @@ unsigned int dcn_find_dcfclk_suits_all(
        const struct dc *dc,
        struct dc_clocks *clocks);
 
-void dcn_bw_update_from_pplib(struct dc *dc);
-void dcn_bw_notify_pplib_of_wm_ranges(struct dc *dc);
+void dcn_get_soc_clks(
+               struct dc *dc,
+               int *min_fclk_khz,
+               int *min_dcfclk_khz,
+               int *socclk_khz);
+
+void dcn_bw_update_from_pplib_fclks(
+               struct dc *dc,
+               struct dm_pp_clock_levels_with_voltage *fclks);
+void dcn_bw_update_from_pplib_dcfclks(
+               struct dc *dc,
+               struct dm_pp_clock_levels_with_voltage *dcfclks);
+void dcn_bw_notify_pplib_of_wm_ranges(
+               struct dc *dc,
+               int min_fclk_khz,
+               int min_dcfclk_khz,
+               int socclk_khz);
 void dcn_bw_sync_calcs_and_dml(struct dc *dc);
 
 enum source_macro_tile_size swizzle_mode_to_macro_tile_size(enum swizzle_mode_values sw_mode);
index d9f1b0a..591ab13 100644
@@ -95,10 +95,23 @@ struct clk_limit_table_entry {
        unsigned int wck_ratio;
 };
 
+struct clk_limit_num_entries {
+       unsigned int num_dcfclk_levels;
+       unsigned int num_fclk_levels;
+       unsigned int num_memclk_levels;
+       unsigned int num_socclk_levels;
+       unsigned int num_dtbclk_levels;
+       unsigned int num_dispclk_levels;
+       unsigned int num_dppclk_levels;
+       unsigned int num_phyclk_levels;
+       unsigned int num_phyclk_d18_levels;
+};
+
 /* This table is contiguous */
 struct clk_limit_table {
        struct clk_limit_table_entry entries[MAX_NUM_DPM_LVL];
-       unsigned int num_entries;
+       struct clk_limit_num_entries num_entries_per_clk;
+       unsigned int num_entries; /* highest populated dpm level for back compatibility */
 };
 
 struct wm_range_table_entry {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/cursor_reg_cache.h b/drivers/gpu/drm/amd/display/dc/inc/hw/cursor_reg_cache.h
new file mode 100644
index 0000000..45645f9
--- /dev/null
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: MIT */
+/* Copyright © 2022 Advanced Micro Devices, Inc. All rights reserved. */
+
+#ifndef __DAL_CURSOR_CACHE_H__
+#define __DAL_CURSOR_CACHE_H__
+
+union reg_cursor_control_cfg {
+       struct {
+               uint32_t     cur_enable: 1;
+               uint32_t         reser0: 3;
+               uint32_t cur_2x_magnify: 1;
+               uint32_t         reser1: 3;
+               uint32_t           mode: 3;
+               uint32_t         reser2: 5;
+               uint32_t          pitch: 2;
+               uint32_t         reser3: 6;
+               uint32_t line_per_chunk: 5;
+               uint32_t         reser4: 3;
+       } bits;
+       uint32_t raw;
+};
+struct cursor_position_cache_hubp {
+       union reg_cursor_control_cfg cur_ctl;
+       union reg_position_cfg {
+               struct {
+                       uint32_t x_pos: 16;
+                       uint32_t y_pos: 16;
+               } bits;
+               uint32_t raw;
+       } position;
+       union reg_hot_spot_cfg {
+               struct {
+                       uint32_t x_hot: 16;
+                       uint32_t y_hot: 16;
+               } bits;
+               uint32_t raw;
+       } hot_spot;
+       union reg_dst_offset_cfg {
+               struct {
+                       uint32_t dst_x_offset: 13;
+                       uint32_t     reserved: 19;
+               } bits;
+               uint32_t raw;
+       } dst_offset;
+};
+
+struct cursor_attribute_cache_hubp {
+       uint32_t SURFACE_ADDR_HIGH;
+       uint32_t SURFACE_ADDR;
+       union    reg_cursor_control_cfg  cur_ctl;
+       union    reg_cursor_size_cfg {
+               struct {
+                       uint32_t  width: 16;
+                       uint32_t height: 16;
+               } bits;
+               uint32_t raw;
+       } size;
+       union    reg_cursor_settings_cfg {
+               struct {
+                       uint32_t     dst_y_offset: 8;
+                       uint32_t chunk_hdl_adjust: 2;
+                       uint32_t         reserved: 22;
+               } bits;
+               uint32_t raw;
+       } settings;
+};
+
+struct cursor_rect {
+       uint32_t x;
+       uint32_t y;
+       uint32_t w;
+       uint32_t h;
+};
+
+union reg_cur0_control_cfg {
+       struct {
+               uint32_t     cur0_enable: 1;
+               uint32_t  expansion_mode: 1;
+               uint32_t          reser0: 1;
+               uint32_t     cur0_rom_en: 1;
+               uint32_t            mode: 3;
+               uint32_t        reserved: 25;
+       } bits;
+       uint32_t raw;
+};
+struct cursor_position_cache_dpp {
+       union reg_cur0_control_cfg cur0_ctl;
+};
+
+struct cursor_attribute_cache_dpp {
+       union reg_cur0_control_cfg cur0_ctl;
+};
+
+struct cursor_attributes_cfg {
+       struct  cursor_attribute_cache_hubp aHubp;
+       struct  cursor_attribute_cache_dpp  aDpp;
+};
+
+#endif
index 3ef7faa..dcb80c4 100644
@@ -28,6 +28,7 @@
 #define __DAL_DPP_H__
 
 #include "transform.h"
+#include "cursor_reg_cache.h"
 
 union defer_reg_writes {
        struct {
@@ -58,6 +59,9 @@ struct dpp {
 
        struct pwl_params shaper_params;
        bool cm_bypass_mode;
+
+       struct cursor_position_cache_dpp  pos;
+       struct cursor_attribute_cache_dpp att;
 };
 
 struct dpp_input_csc_matrix {
index 44c4578..d5ea754 100644
@@ -27,6 +27,7 @@
 #define __DAL_HUBP_H__
 
 #include "mem_input.h"
+#include "cursor_reg_cache.h"
 
 #define OPP_ID_INVALID 0xf
 #define MAX_TTU 0xffffff
@@ -65,6 +66,10 @@ struct hubp {
        struct dc_cursor_attributes curs_attr;
        struct dc_cursor_position curs_pos;
        bool power_gated;
+
+       struct cursor_position_cache_hubp  pos;
+       struct cursor_attribute_cache_hubp att;
+       struct cursor_rect cur_rect;
 };
 
 struct surface_flip_registers {
index 72eef7a..25a1df4 100644
@@ -209,7 +209,6 @@ struct timing_generator_funcs {
        void (*set_blank)(struct timing_generator *tg,
                                        bool enable_blanking);
        bool (*is_blanked)(struct timing_generator *tg);
-       bool (*is_locked)(struct timing_generator *tg);
        void (*set_overscan_blank_color) (struct timing_generator *tg, const struct tg_color *color);
        void (*set_blank_color)(struct timing_generator *tg, const struct tg_color *color);
        void (*set_colors)(struct timing_generator *tg,
index c37d114..5040836 100644
@@ -230,4 +230,10 @@ const struct link_hwss *get_link_hwss(const struct dc_link *link,
 
 bool is_h_timing_divisible_by_2(struct dc_stream_state *stream);
 
+bool dc_resource_acquire_secondary_pipe_for_mpc_odm(
+               const struct dc *dc,
+               struct dc_state *state,
+               struct pipe_ctx *pri_pipe,
+               struct pipe_ctx *sec_pipe,
+               bool odm);
 #endif /* DRIVERS_GPU_DRM_AMD_DC_DEV_DC_INC_RESOURCE_H_ */
index 7d31471..153a883 100644
@@ -111,7 +111,7 @@ static void setup_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
        enum phyd32clk_clock_source phyd32clk = get_phyd32clk_src(pipe_ctx->stream->link);
 
        dto_params.otg_inst = tg->inst;
-       dto_params.pixclk_khz = pipe_ctx->stream->phy_pix_clk;
+       dto_params.pixclk_khz = pipe_ctx->stream->timing.pix_clk_100hz / 10;
        dto_params.num_odm_segments = get_odm_segment_count(pipe_ctx);
        dto_params.timing = &pipe_ctx->stream->timing;
        dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
index 9522fe0..4f7f991 100644
@@ -37,7 +37,7 @@ void virtual_reset_stream_encoder(struct pipe_ctx *pipe_ctx)
 {
 }
 
-void virtual_disable_link_output(struct dc_link *link,
+static void virtual_disable_link_output(struct dc_link *link,
        const struct link_resource *link_res,
        enum signal_type signal)
 {
index f34c45b..eb5b7eb 100644
@@ -248,6 +248,7 @@ struct dmub_srv_hw_params {
        bool disable_dpia;
        bool usb4_cm_version;
        bool fw_in_system_memory;
+       bool dpia_hpd_int_enable_supported;
 };
 
 /**
index 5d1aada..7a8f615 100644
@@ -400,8 +400,9 @@ union dmub_fw_boot_options {
                uint32_t diag_env: 1; /* 1 if diagnostic environment */
                uint32_t gpint_scratch8: 1; /* 1 if GPINT is in scratch8*/
                uint32_t usb4_cm_version: 1; /**< 1 CM support */
+               uint32_t dpia_hpd_int_enable_supported: 1; /* 1 if dpia hpd int enable supported */
 
-               uint32_t reserved : 17; /**< reserved */
+               uint32_t reserved : 16; /**< reserved */
        } bits; /**< boot bits */
        uint32_t all; /**< 32-bit access to bits */
 };
@@ -728,6 +729,12 @@ enum dmub_cmd_type {
        /**
         * Command type used for all VBIOS interface commands.
         */
+
+       /**
+        * Command type used to set DPIA HPD interrupt state
+        */
+       DMUB_CMD__DPIA_HPD_INT_ENABLE = 86,
+
        DMUB_CMD__VBIOS = 128,
 };
 
@@ -760,11 +767,6 @@ enum dmub_cmd_dpia_type {
        DMUB_CMD__DPIA_MST_ALLOC_SLOTS = 2,
 };
 
-enum dmub_cmd_header_sub_type {
-       DMUB_CMD__SUB_TYPE_GENERAL         = 0,
-       DMUB_CMD__SUB_TYPE_CURSOR_POSITION = 1
-};
-
 #pragma pack(push, 1)
 
 /**
@@ -1261,6 +1263,14 @@ struct dmub_rb_cmd_set_mst_alloc_slots {
 };
 
 /**
+ * DMUB command structure for DPIA HPD int enable control.
+ */
+struct dmub_rb_cmd_dpia_hpd_int_enable {
+       struct dmub_cmd_header header; /* header */
+       uint32_t enable; /* dpia hpd interrupt enable */
+};
+
+/**
  * struct dmub_rb_cmd_dpphy_init - DPPHY init.
  */
 struct dmub_rb_cmd_dpphy_init {
@@ -2089,7 +2099,99 @@ struct dmub_rb_cmd_update_dirty_rect {
 /**
  * Data passed from driver to FW in a DMUB_CMD__UPDATE_CURSOR_INFO command.
  */
-struct dmub_cmd_update_cursor_info_data {
+union dmub_reg_cursor_control_cfg {
+       struct {
+               uint32_t     cur_enable: 1;
+               uint32_t         reser0: 3;
+               uint32_t cur_2x_magnify: 1;
+               uint32_t         reser1: 3;
+               uint32_t           mode: 3;
+               uint32_t         reser2: 5;
+               uint32_t          pitch: 2;
+               uint32_t         reser3: 6;
+               uint32_t line_per_chunk: 5;
+               uint32_t         reser4: 3;
+       } bits;
+       uint32_t raw;
+};
+struct dmub_cursor_position_cache_hubp {
+       union dmub_reg_cursor_control_cfg cur_ctl;
+       union dmub_reg_position_cfg {
+               struct {
+                       uint32_t cur_x_pos: 16;
+                       uint32_t cur_y_pos: 16;
+               } bits;
+               uint32_t raw;
+       } position;
+       union dmub_reg_hot_spot_cfg {
+               struct {
+                       uint32_t hot_x: 16;
+                       uint32_t hot_y: 16;
+               } bits;
+               uint32_t raw;
+       } hot_spot;
+       union dmub_reg_dst_offset_cfg {
+               struct {
+                       uint32_t dst_x_offset: 13;
+                       uint32_t reserved: 19;
+               } bits;
+               uint32_t raw;
+       } dst_offset;
+};
+
+union dmub_reg_cur0_control_cfg {
+       struct {
+               uint32_t     cur0_enable: 1;
+               uint32_t  expansion_mode: 1;
+               uint32_t          reser0: 1;
+               uint32_t     cur0_rom_en: 1;
+               uint32_t            mode: 3;
+               uint32_t        reserved: 25;
+       } bits;
+       uint32_t raw;
+};
+struct dmub_cursor_position_cache_dpp {
+       union dmub_reg_cur0_control_cfg cur0_ctl;
+};
+struct dmub_cursor_position_cfg {
+       struct  dmub_cursor_position_cache_hubp pHubp;
+       struct  dmub_cursor_position_cache_dpp  pDpp;
+       uint8_t pipe_idx;
+       /*
+        * Padding is required. To be 4 Bytes Aligned.
+        */
+       uint8_t padding[3];
+};
+
+struct dmub_cursor_attribute_cache_hubp {
+       uint32_t SURFACE_ADDR_HIGH;
+       uint32_t SURFACE_ADDR;
+       union    dmub_reg_cursor_control_cfg  cur_ctl;
+       union    dmub_reg_cursor_size_cfg {
+               struct {
+                       uint32_t width: 16;
+                       uint32_t height: 16;
+               } bits;
+               uint32_t raw;
+       } size;
+       union    dmub_reg_cursor_settings_cfg {
+               struct {
+                       uint32_t     dst_y_offset: 8;
+                       uint32_t chunk_hdl_adjust: 2;
+                       uint32_t         reserved: 22;
+               } bits;
+               uint32_t raw;
+       } settings;
+};
+struct dmub_cursor_attribute_cache_dpp {
+       union dmub_reg_cur0_control_cfg cur0_ctl;
+};
+struct dmub_cursor_attributes_cfg {
+       struct  dmub_cursor_attribute_cache_hubp aHubp;
+       struct  dmub_cursor_attribute_cache_dpp  aDpp;
+};
+
+struct dmub_cmd_update_cursor_payload0 {
        /**
         * Cursor dirty rects.
         */
@@ -2116,6 +2218,20 @@ struct dmub_cmd_update_cursor_info_data {
         * Currently the support is only for 0 or 1
         */
        uint8_t panel_inst;
+       /**
+        * Cursor Position Register.
+        * Registers contains Hubp & Dpp modules
+        */
+       struct dmub_cursor_position_cfg position_cfg;
+};
+
+struct dmub_cmd_update_cursor_payload1 {
+       struct dmub_cursor_attributes_cfg attribute_cfg;
+};
+
+union dmub_cmd_update_cursor_info_data {
+       struct dmub_cmd_update_cursor_payload0 payload0;
+       struct dmub_cmd_update_cursor_payload1 payload1;
 };
 /**
  * Definition of a DMUB_CMD__UPDATE_CURSOR_INFO command.
@@ -2128,7 +2244,7 @@ struct dmub_rb_cmd_update_cursor_info {
        /**
         * Data passed from driver to FW in a DMUB_CMD__UPDATE_CURSOR_INFO command.
         */
-       struct dmub_cmd_update_cursor_info_data update_cursor_info_data;
+       union dmub_cmd_update_cursor_info_data update_cursor_info_data;
 };
 
 /**
@@ -2825,11 +2941,7 @@ struct dmub_rb_cmd_get_visual_confirm_color {
 struct dmub_optc_state {
        uint32_t v_total_max;
        uint32_t v_total_min;
-       uint32_t v_total_mid;
-       uint32_t v_total_mid_frame_num;
        uint32_t tg_inst;
-       uint32_t enable_manual_trigger;
-       uint32_t clear_force_vsync;
 };
 
 struct dmub_rb_cmd_drr_update {
@@ -3235,6 +3347,10 @@ union dmub_rb_cmd {
         * Definition of a DMUB_CMD__QUERY_HPD_STATE command.
         */
        struct dmub_rb_cmd_query_hpd_state query_hpd;
+       /**
+        * Definition of a DMUB_CMD__DPIA_HPD_INT_ENABLE command.
+        */
+       struct dmub_rb_cmd_dpia_hpd_int_enable dpia_hpd_int_enable;
 };
 
 /**
index c7bd7e2..c90b9ee 100644
@@ -350,6 +350,7 @@ void dmub_dcn31_enable_dmub_boot_options(struct dmub_srv *dmub, const struct dmu
        boot_options.bits.dpia_supported = params->dpia_supported;
        boot_options.bits.enable_dpia = params->disable_dpia ? 0 : 1;
        boot_options.bits.usb4_cm_version = params->usb4_cm_version;
+       boot_options.bits.dpia_hpd_int_enable_supported = params->dpia_hpd_int_enable_supported;
        boot_options.bits.power_optimization = params->power_optimization;
 
        boot_options.bits.sel_mux_phy_c_d_phy_f_g = (dmub->asic == DMUB_ASIC_DCN31B) ? 1 : 0;
index 04f7656..447a0ec 100644
@@ -1692,7 +1692,7 @@ static void apply_degamma_for_user_regamma(struct pwl_float_data_ex *rgb_regamma
        struct pwl_float_data_ex *rgb = rgb_regamma;
        const struct hw_x_point *coord_x = coordinates_x;
 
-       build_coefficients(&coeff, true);
+       build_coefficients(&coeff, TRANSFER_FUNCTION_SRGB);
 
        i = 0;
        while (i != hw_points_num + 1) {
index b798cf5..38adde3 100644
@@ -29,5 +29,7 @@
 #define regMCA_UMC_UMC0_MCUMC_STATUST0_BASE_IDX  2
 #define regMCA_UMC_UMC0_MCUMC_ADDRT0             0x03c4
 #define regMCA_UMC_UMC0_MCUMC_ADDRT0_BASE_IDX    2
+#define regUMCCH0_0_GeccCtrl                     0x0053
+#define regUMCCH0_0_GeccCtrl_BASE_IDX            2
 
 #endif
index bd99b43..4dbec52 100644
@@ -90,5 +90,8 @@
 #define MCA_UMC_UMC0_MCUMC_ADDRT0__ErrorAddr__SHIFT        0x0
 #define MCA_UMC_UMC0_MCUMC_ADDRT0__Reserved__SHIFT         0x38
 #define MCA_UMC_UMC0_MCUMC_ADDRT0__ErrorAddr_MASK          0x00FFFFFFFFFFFFFFL
+//UMCCH0_0_GeccCtrl
+#define UMCCH0_0_GeccCtrl__UCFatalEn__SHIFT                0xd
+#define UMCCH0_0_GeccCtrl__UCFatalEn_MASK                  0x00002000L
 
 #endif
index e85364d..5cb3e86 100644
@@ -262,8 +262,9 @@ struct kfd2kgd_calls {
                                uint32_t queue_id);
 
        int (*hqd_destroy)(struct amdgpu_device *adev, void *mqd,
-                               uint32_t reset_type, unsigned int timeout,
-                               uint32_t pipe_id, uint32_t queue_id);
+                               enum kfd_preempt_type reset_type,
+                               unsigned int timeout, uint32_t pipe_id,
+                               uint32_t queue_id);
 
        bool (*hqd_sdma_is_occupied)(struct amdgpu_device *adev, void *mqd);
 
index 948cc75..236657e 100644
@@ -3362,11 +3362,11 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
        if (adev->pm.sysfs_initialized)
                return 0;
 
+       INIT_LIST_HEAD(&adev->pm.pm_attr_list);
+
        if (adev->pm.dpm_enabled == 0)
                return 0;
 
-       INIT_LIST_HEAD(&adev->pm.pm_attr_list);
-
        adev->pm.int_hwmon_dev = hwmon_device_register_with_groups(adev->dev,
                                                                   DRIVER_NAME, adev,
                                                                   hwmon_groups);
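The amdgpu_pm_sysfs_init() hunk above hoists `INIT_LIST_HEAD(&adev->pm.pm_attr_list)` above the `dpm_enabled == 0` early return, so the list is valid even when DPM is disabled and any later code that walks it sees an empty list rather than uninitialized pointers. A hedged sketch of the pattern, using minimal stand-ins for the kernel's list helpers:

```c
/* Minimal stand-in for the kernel's struct list_head. */
struct list_head { struct list_head *next, *prev; };

static void init_list_head(struct list_head *l) { l->next = l->prev = l; }
static int list_empty(const struct list_head *l) { return l->next == l; }

/* Mirrors the reordering above: initialize the attribute list before
 * any early return, so a teardown path that iterates the list never
 * dereferences garbage when setup bailed out early. */
static void sysfs_init(struct list_head *attrs, int dpm_enabled)
{
	init_list_head(attrs);  /* now done unconditionally */

	if (!dpm_enabled)
		return;         /* attrs is a valid empty list here */

	/* ... register attributes onto the list ... */
}
```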
index 8fd0782..f5e08b6 100644
@@ -1384,13 +1384,16 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
 static void kv_dpm_disable(struct amdgpu_device *adev)
 {
        struct kv_power_info *pi = kv_get_pi(adev);
+       int err;
 
        amdgpu_irq_put(adev, &adev->pm.dpm.thermal.irq,
                       AMDGPU_THERMAL_IRQ_LOW_TO_HIGH);
        amdgpu_irq_put(adev, &adev->pm.dpm.thermal.irq,
                       AMDGPU_THERMAL_IRQ_HIGH_TO_LOW);
 
-       amdgpu_kv_smc_bapm_enable(adev, false);
+       err = amdgpu_kv_smc_bapm_enable(adev, false);
+       if (err)
+               DRM_ERROR("amdgpu_kv_smc_bapm_enable failed\n");
 
        if (adev->asic_type == CHIP_MULLINS)
                kv_enable_nb_dpm(adev, false);
index e4fcbf8..7ef7e81 100644
@@ -3603,7 +3603,7 @@ static int smu7_get_pp_table_entry_callback_func_v1(struct pp_hwmgr *hwmgr,
                        return -EINVAL);
 
        PP_ASSERT_WITH_CODE(
-                       (smu7_power_state->performance_level_count <=
+                       (smu7_power_state->performance_level_count <
                                        hwmgr->platform_descriptor.hardwareActivityPerformanceLevels),
                        "Performance levels exceeds Driver limit!",
                        return -EINVAL);
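This smu7 hunk (and the matching vega10 one that follows) tightens the bounds assert from `<=` to `<`. The likely rationale: the count is checked before another level is recorded into a table of `hardwareActivityPerformanceLevels` entries, so a count already equal to the capacity must fail the check. An illustrative version of that off-by-one, under that assumption:

```c
#include <stdbool.h>

/* Appending is only safe while count is strictly below capacity.
 * With `<=`, a table already holding `capacity` entries would still
 * pass, and the next write would run one element past the end. */
static bool can_add_level(unsigned int count, unsigned int capacity)
{
	return count < capacity;
}
```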
index 99bfe5e..c8c9fb8 100644
@@ -3155,7 +3155,7 @@ static int vega10_get_pp_table_entry_callback_func(struct pp_hwmgr *hwmgr,
                        return -1);
 
        PP_ASSERT_WITH_CODE(
-                       (vega10_ps->performance_level_count <=
+                       (vega10_ps->performance_level_count <
                                        hwmgr->platform_descriptor.
                                        hardwareActivityPerformanceLevels),
                        "Performance levels exceeds Driver limit!",
index 190af79..dad3e37 100644
@@ -67,21 +67,22 @@ int vega10_fan_ctrl_get_fan_speed_info(struct pp_hwmgr *hwmgr,
 int vega10_fan_ctrl_get_fan_speed_pwm(struct pp_hwmgr *hwmgr,
                uint32_t *speed)
 {
-       struct amdgpu_device *adev = hwmgr->adev;
-       uint32_t duty100, duty;
-       uint64_t tmp64;
+       uint32_t current_rpm;
+       uint32_t percent = 0;
 
-       duty100 = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL1),
-                               CG_FDO_CTRL1, FMAX_DUTY100);
-       duty = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_THERMAL_STATUS),
-                               CG_THERMAL_STATUS, FDO_PWM_DUTY);
+       if (hwmgr->thermal_controller.fanInfo.bNoFan)
+               return 0;
 
-       if (!duty100)
-               return -EINVAL;
+       if (vega10_get_current_rpm(hwmgr, &current_rpm))
+               return -1;
+
+       if (hwmgr->thermal_controller.
+                       advanceFanControlParameters.usMaxFanRPM != 0)
+               percent = current_rpm * 255 /
+                       hwmgr->thermal_controller.
+                       advanceFanControlParameters.usMaxFanRPM;
 
-       tmp64 = (uint64_t)duty * 255;
-       do_div(tmp64, duty100);
-       *speed = MIN((uint32_t)tmp64, 255);
+       *speed = MIN(percent, 255);
 
        return 0;
 }
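The rewritten `vega10_fan_ctrl_get_fan_speed_pwm()` above derives the 0–255 PWM readout from the measured fan RPM and `usMaxFanRPM` instead of the FDO duty registers. A standalone sketch of just that arithmetic (names are illustrative, not the driver's):

```c
#include <stdint.h>

/* Scale current RPM into the 8-bit PWM range, mirroring the new
 * percent = rpm * 255 / max_rpm computation with its MIN(..., 255)
 * clamp; a zero max RPM yields 0, as in the driver's guarded path. */
static uint32_t rpm_to_pwm(uint32_t current_rpm, uint32_t max_rpm)
{
	uint32_t percent = 0;

	if (max_rpm != 0)
		percent = current_rpm * 255 / max_rpm;

	return percent < 255 ? percent : 255; /* clamp to 8-bit range */
}
```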
index 13c5c7f..4fe75dd 100644
@@ -1314,8 +1314,8 @@ static int smu_smc_hw_setup(struct smu_context *smu)
 
        ret = smu_enable_thermal_alert(smu);
        if (ret) {
-               dev_err(adev->dev, "Failed to enable thermal alert!\n");
-               return ret;
+         dev_err(adev->dev, "Failed to enable thermal alert!\n");
+         return ret;
        }
 
        ret = smu_notify_display_change(smu);
index ae2d337..f774017 100644
@@ -27,7 +27,7 @@
 // *** IMPORTANT ***
 // SMU TEAM: Always increment the interface version if
 // any structure is changed in this file
-#define PMFW_DRIVER_IF_VERSION 5
+#define PMFW_DRIVER_IF_VERSION 7
 
 typedef struct {
   int32_t value;
@@ -163,8 +163,8 @@ typedef struct {
   uint16_t DclkFrequency;               //[MHz]
   uint16_t MemclkFrequency;             //[MHz]
   uint16_t spare;                       //[centi]
-  uint16_t UvdActivity;                 //[centi]
   uint16_t GfxActivity;                 //[centi]
+  uint16_t UvdActivity;                 //[centi]
 
   uint16_t Voltage[2];                  //[mV] indices: VDDCR_VDD, VDDCR_SOC
   uint16_t Current[2];                  //[mA] indices: VDDCR_VDD, VDDCR_SOC
@@ -199,6 +199,19 @@ typedef struct {
   uint16_t DeviceState;
   uint16_t CurTemp;                     //[centi-Celsius]
   uint16_t spare2;
+
+  uint16_t AverageGfxclkFrequency;
+  uint16_t AverageFclkFrequency;
+  uint16_t AverageGfxActivity;
+  uint16_t AverageSocclkFrequency;
+  uint16_t AverageVclkFrequency;
+  uint16_t AverageVcnActivity;
+  uint16_t AverageDRAMReads;          //Filtered DF Bandwidth::DRAM Reads
+  uint16_t AverageDRAMWrites;         //Filtered DF Bandwidth::DRAM Writes
+  uint16_t AverageSocketPower;        //Filtered value of CurrentSocketPower
+  uint16_t AverageCorePower;          //Filtered of [sum of CorePower[8]])
+  uint16_t AverageCoreC0Residency[8]; //Filtered of [average C0 residency %  per core]
+  uint32_t MetricsCounter;            //Counts the # of metrics table parameter reads per update to the metrics table, i.e. if the metrics table update happens every 1 second, this value could be up to 1000 if the smu collected metrics data every cycle, or as low as 0 if the smu was asleep the whole time. Reset to 0 after writing.
 } SmuMetrics_t;
 
 typedef struct {
index 9d62ea2..8f72202 100644
@@ -28,7 +28,7 @@
 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF
 #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04
 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08
-#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x05
+#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x30
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x2C
index 4450055..9cd0051 100644
@@ -2242,9 +2242,17 @@ static void arcturus_get_unique_id(struct smu_context *smu)
 static int arcturus_set_df_cstate(struct smu_context *smu,
                                  enum pp_df_cstate state)
 {
+       struct amdgpu_device *adev = smu->adev;
        uint32_t smu_version;
        int ret;
 
+       /*
+        * Arcturus does not need the cstate disablement
+        * prerequisite for gpu reset.
+        */
+       if (amdgpu_in_reset(adev) || adev->in_suspend)
+               return 0;
+
        ret = smu_cmn_get_smc_version(smu, NULL, &smu_version);
        if (ret) {
                dev_err(smu->adev->dev, "Failed to get smu version!\n");
index 619aee5..d30ec30 100644
@@ -1640,6 +1640,15 @@ static bool aldebaran_is_baco_supported(struct smu_context *smu)
 static int aldebaran_set_df_cstate(struct smu_context *smu,
                                   enum pp_df_cstate state)
 {
+       struct amdgpu_device *adev = smu->adev;
+
+       /*
+        * Aldebaran does not need the cstate disablement
+        * prerequisite for gpu reset.
+        */
+       if (amdgpu_in_reset(adev) || adev->in_suspend)
+               return 0;
+
        return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_DFCstateControl, state, NULL);
 }
 
index 93fffdb..c4552ad 100644
@@ -211,7 +211,8 @@ int smu_v13_0_init_pptable_microcode(struct smu_context *smu)
                return 0;
 
        if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) ||
-           (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)))
+           (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) ||
+           (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10)))
                return 0;
 
        /* override pptable_id from driver parameter */
@@ -454,9 +455,6 @@ int smu_v13_0_setup_pptable(struct smu_context *smu)
                dev_info(adev->dev, "override pptable id %d\n", pptable_id);
        } else {
                pptable_id = smu->smu_table.boot_values.pp_table_id;
-
-               if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10))
-                       pptable_id = 6666;
        }
 
        /* force using vbios pptable in sriov mode */
index 1d45448..2952932 100644
@@ -119,6 +119,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_0_message_map[SMU_MSG_MAX_COUNT] =
        MSG_MAP(NotifyPowerSource,              PPSMC_MSG_NotifyPowerSource,           0),
        MSG_MAP(Mode1Reset,                     PPSMC_MSG_Mode1Reset,                  0),
        MSG_MAP(PrepareMp1ForUnload,            PPSMC_MSG_PrepareMp1ForUnload,         0),
+       MSG_MAP(DFCstateControl,                PPSMC_MSG_SetExternalClientDfCstateAllow, 0),
 };
 
 static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = {
@@ -1753,6 +1754,15 @@ static int smu_v13_0_0_set_mp1_state(struct smu_context *smu,
        return ret;
 }
 
+static int smu_v13_0_0_set_df_cstate(struct smu_context *smu,
+                                    enum pp_df_cstate state)
+{
+       return smu_cmn_send_smc_msg_with_param(smu,
+                                              SMU_MSG_DFCstateControl,
+                                              state,
+                                              NULL);
+}
+
 static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
        .get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask,
        .set_default_dpm_table = smu_v13_0_0_set_default_dpm_table,
@@ -1822,6 +1832,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
        .mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported,
        .mode1_reset = smu_v13_0_mode1_reset,
        .set_mp1_state = smu_v13_0_0_set_mp1_state,
+       .set_df_cstate = smu_v13_0_0_set_df_cstate,
 };
 
 void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
index c422bf8..c4102cf 100644
@@ -121,6 +121,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] =
        MSG_MAP(Mode1Reset,             PPSMC_MSG_Mode1Reset,                  0),
        MSG_MAP(PrepareMp1ForUnload,            PPSMC_MSG_PrepareMp1ForUnload,         0),
        MSG_MAP(SetMGpuFanBoostLimitRpm,        PPSMC_MSG_SetMGpuFanBoostLimitRpm,     0),
+       MSG_MAP(DFCstateControl,                PPSMC_MSG_SetExternalClientDfCstateAllow, 0),
 };
 
 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
@@ -1587,6 +1588,16 @@ static bool smu_v13_0_7_is_mode1_reset_supported(struct smu_context *smu)
 
        return true;
 }
+
+static int smu_v13_0_7_set_df_cstate(struct smu_context *smu,
+                                    enum pp_df_cstate state)
+{
+       return smu_cmn_send_smc_msg_with_param(smu,
+                                              SMU_MSG_DFCstateControl,
+                                              state,
+                                              NULL);
+}
+
 static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
        .get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask,
        .set_default_dpm_table = smu_v13_0_7_set_default_dpm_table,
@@ -1649,6 +1660,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
        .mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported,
        .mode1_reset = smu_v13_0_mode1_reset,
        .set_mp1_state = smu_v13_0_7_set_mp1_state,
+       .set_df_cstate = smu_v13_0_7_set_df_cstate,
 };
 
 void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
index e3142c8..61c29ce 100644
@@ -435,7 +435,7 @@ int drmm_connector_init(struct drm_device *dev,
        if (drm_WARN_ON(dev, funcs && funcs->destroy))
                return -EINVAL;
 
-       ret = __drm_connector_init(dev, connector, funcs, connector_type, NULL);
+       ret = __drm_connector_init(dev, connector, funcs, connector_type, ddc);
        if (ret)
                return ret;
 
index 5fbd2ae..2b73f5f 100644
@@ -120,7 +120,7 @@ static void intel_hdmi_get_config(struct intel_encoder *encoder,
        pipe_config->hw.adjusted_mode.flags |= flags;
 
        if ((tmp & SDVO_COLOR_FORMAT_MASK) == HDMI_COLOR_FORMAT_12bpc)
-               dotclock = pipe_config->port_clock * 2 / 3;
+               dotclock = DIV_ROUND_CLOSEST(pipe_config->port_clock * 2, 3);
        else
                dotclock = pipe_config->port_clock;
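For 12bpc HDMI the dotclock is 2/3 of the port clock; the hunk above swaps a truncating division for the kernel's `DIV_ROUND_CLOSEST` so the readout rounds to the nearest kHz instead of always rounding down. A sketch of the difference, valid for non-negative operands:

```c
/* Round-to-nearest integer division, as the kernel's DIV_ROUND_CLOSEST
 * does for non-negative operands. */
static int div_round_closest(int x, int divisor)
{
	return (x + divisor / 2) / divisor;
}

/* 12bpc HDMI: dotclock = port_clock * 2 / 3. Plain truncation can be
 * off by one kHz whenever port_clock * 2 is not a multiple of 3. */
static int dotclock_12bpc(int port_clock)
{
	return div_round_closest(port_clock * 2, 3);
}
```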
 
index dd008ba..461c62c 100644
@@ -8130,6 +8130,17 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
        drm_helper_move_panel_connectors_to_head(&dev_priv->drm);
 }
 
+static int max_dotclock(struct drm_i915_private *i915)
+{
+       int max_dotclock = i915->max_dotclk_freq;
+
+       /* icl+ might use bigjoiner */
+       if (DISPLAY_VER(i915) >= 11)
+               max_dotclock *= 2;
+
+       return max_dotclock;
+}
+
 static enum drm_mode_status
 intel_mode_valid(struct drm_device *dev,
                 const struct drm_display_mode *mode)
@@ -8167,6 +8178,13 @@ intel_mode_valid(struct drm_device *dev,
                           DRM_MODE_FLAG_CLKDIV2))
                return MODE_BAD;
 
+       /*
+        * Reject clearly excessive dotclocks early to
+        * avoid having to worry about huge integers later.
+        */
+       if (mode->clock > max_dotclock(dev_priv))
+               return MODE_CLOCK_HIGH;
+
        /* Transcoder timing limits */
        if (DISPLAY_VER(dev_priv) >= 11) {
                hdisplay_max = 16384;
index c86e5d4..1dddd6a 100644
@@ -26,10 +26,17 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
        struct drm_device *dev = fb->dev;
        struct drm_i915_private *dev_priv = to_i915(dev);
        struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+       struct i915_gem_ww_ctx ww;
        struct i915_vma *vma;
        u32 alignment;
        int ret;
 
+       /*
+        * We are not syncing against the binding (and potential migrations)
+        * below, so this vm must never be async.
+        */
+       GEM_WARN_ON(vm->bind_async_flags);
+
        if (WARN_ON(!i915_gem_object_is_framebuffer(obj)))
                return ERR_PTR(-EINVAL);
 
@@ -37,29 +44,48 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
 
        atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
 
-       ret = i915_gem_object_lock_interruptible(obj, NULL);
-       if (!ret) {
+       for_i915_gem_ww(&ww, ret, true) {
+               ret = i915_gem_object_lock(obj, &ww);
+               if (ret)
+                       continue;
+
+               if (HAS_LMEM(dev_priv)) {
+                       unsigned int flags = obj->flags;
+
+                       /*
+                        * For this type of buffer we need to be able to read from the CPU
+                        * the clear color value found in the buffer, hence we need to
+                        * ensure it is always in the mappable part of lmem, if this is
+                        * a small-bar device.
+                        */
+                       if (intel_fb_rc_ccs_cc_plane(fb) >= 0)
+                               flags &= ~I915_BO_ALLOC_GPU_ONLY;
+                       ret = __i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM_0,
+                                                       flags);
+                       if (ret)
+                               continue;
+               }
+
                ret = i915_gem_object_set_cache_level(obj, I915_CACHE_NONE);
-               i915_gem_object_unlock(obj);
-       }
-       if (ret) {
-               vma = ERR_PTR(ret);
-               goto err;
-       }
+               if (ret)
+                       continue;
 
-       vma = i915_vma_instance(obj, vm, view);
-       if (IS_ERR(vma))
-               goto err;
+               vma = i915_vma_instance(obj, vm, view);
+               if (IS_ERR(vma)) {
+                       ret = PTR_ERR(vma);
+                       continue;
+               }
 
-       if (i915_vma_misplaced(vma, 0, alignment, 0)) {
-               ret = i915_vma_unbind_unlocked(vma);
-               if (ret) {
-                       vma = ERR_PTR(ret);
-                       goto err;
+               if (i915_vma_misplaced(vma, 0, alignment, 0)) {
+                       ret = i915_vma_unbind(vma);
+                       if (ret)
+                               continue;
                }
-       }
 
-       ret = i915_vma_pin(vma, 0, alignment, PIN_GLOBAL);
+               ret = i915_vma_pin_ww(vma, &ww, 0, alignment, PIN_GLOBAL);
+               if (ret)
+                       continue;
+       }
        if (ret) {
                vma = ERR_PTR(ret);
                goto err;
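The rewrite above converts intel_pin_fb_obj_dpt() to the for_i915_gem_ww() pattern: every failing step does `continue`, which on a would-deadlock error drops the locks and retries the whole sequence, while any other error exits the loop. A simplified, hypothetical model of that control flow, driven by a scripted result sequence instead of real lock acquisition:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical sketch of the ww-mutex retry pattern: each element of
 * results[] stands for the outcome of one full attempt (lock + migrate +
 * pin). -EDEADLK means "back off and retry from the top"; anything else,
 * including 0, ends the loop. Real code drops and reacquires locks here. */
static int ww_retry(const int *results, int n_attempts)
{
	int i;

	for (i = 0; i < n_attempts; i++) {
		if (results[i] != -EDEADLK)
			return results[i];	/* success or hard failure */
		/* -EDEADLK: locks released (elided), loop retries */
	}
	return -EDEADLK;
}
```

The benefit over the old goto-based error paths is that a mid-sequence deadlock no longer needs bespoke unwind code at each step.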
index 9def8d9..d4cce62 100644 (file)
@@ -116,34 +116,56 @@ static bool psr2_global_enabled(struct intel_dp *intel_dp)
        }
 }
 
+static u32 psr_irq_psr_error_bit_get(struct intel_dp *intel_dp)
+{
+       struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+       return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_ERROR :
+               EDP_PSR_ERROR(intel_dp->psr.transcoder);
+}
+
+static u32 psr_irq_post_exit_bit_get(struct intel_dp *intel_dp)
+{
+       struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+       return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_POST_EXIT :
+               EDP_PSR_POST_EXIT(intel_dp->psr.transcoder);
+}
+
+static u32 psr_irq_pre_entry_bit_get(struct intel_dp *intel_dp)
+{
+       struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+       return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_PRE_ENTRY :
+               EDP_PSR_PRE_ENTRY(intel_dp->psr.transcoder);
+}
+
+static u32 psr_irq_mask_get(struct intel_dp *intel_dp)
+{
+       struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+       return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_MASK :
+               EDP_PSR_MASK(intel_dp->psr.transcoder);
+}
+
 static void psr_irq_control(struct intel_dp *intel_dp)
 {
        struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
-       enum transcoder trans_shift;
        i915_reg_t imr_reg;
        u32 mask, val;
 
-       /*
-        * gen12+ has registers relative to transcoder and one per transcoder
-        * using the same bit definition: handle it as TRANSCODER_EDP to force
-        * 0 shift in bit definition
-        */
-       if (DISPLAY_VER(dev_priv) >= 12) {
-               trans_shift = 0;
+       if (DISPLAY_VER(dev_priv) >= 12)
                imr_reg = TRANS_PSR_IMR(intel_dp->psr.transcoder);
-       } else {
-               trans_shift = intel_dp->psr.transcoder;
+       else
                imr_reg = EDP_PSR_IMR;
-       }
 
-       mask = EDP_PSR_ERROR(trans_shift);
+       mask = psr_irq_psr_error_bit_get(intel_dp);
        if (intel_dp->psr.debug & I915_PSR_DEBUG_IRQ)
-               mask |= EDP_PSR_POST_EXIT(trans_shift) |
-                       EDP_PSR_PRE_ENTRY(trans_shift);
+               mask |= psr_irq_post_exit_bit_get(intel_dp) |
+                       psr_irq_pre_entry_bit_get(intel_dp);
 
-       /* Warning: it is masking/setting reserved bits too */
        val = intel_de_read(dev_priv, imr_reg);
-       val &= ~EDP_PSR_TRANS_MASK(trans_shift);
+       val &= ~psr_irq_mask_get(intel_dp);
        val |= ~mask;
        intel_de_write(dev_priv, imr_reg, val);
 }
@@ -191,25 +213,21 @@ void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir)
        enum transcoder cpu_transcoder = intel_dp->psr.transcoder;
        struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
        ktime_t time_ns =  ktime_get();
-       enum transcoder trans_shift;
        i915_reg_t imr_reg;
 
-       if (DISPLAY_VER(dev_priv) >= 12) {
-               trans_shift = 0;
+       if (DISPLAY_VER(dev_priv) >= 12)
                imr_reg = TRANS_PSR_IMR(intel_dp->psr.transcoder);
-       } else {
-               trans_shift = intel_dp->psr.transcoder;
+       else
                imr_reg = EDP_PSR_IMR;
-       }
 
-       if (psr_iir & EDP_PSR_PRE_ENTRY(trans_shift)) {
+       if (psr_iir & psr_irq_pre_entry_bit_get(intel_dp)) {
                intel_dp->psr.last_entry_attempt = time_ns;
                drm_dbg_kms(&dev_priv->drm,
                            "[transcoder %s] PSR entry attempt in 2 vblanks\n",
                            transcoder_name(cpu_transcoder));
        }
 
-       if (psr_iir & EDP_PSR_POST_EXIT(trans_shift)) {
+       if (psr_iir & psr_irq_post_exit_bit_get(intel_dp)) {
                intel_dp->psr.last_exit = time_ns;
                drm_dbg_kms(&dev_priv->drm,
                            "[transcoder %s] PSR exit completed\n",
@@ -226,7 +244,7 @@ void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir)
                }
        }
 
-       if (psr_iir & EDP_PSR_ERROR(trans_shift)) {
+       if (psr_iir & psr_irq_psr_error_bit_get(intel_dp)) {
                u32 val;
 
                drm_warn(&dev_priv->drm, "[transcoder %s] PSR aux error\n",
@@ -243,7 +261,7 @@ void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir)
                 * or unset irq_aux_error.
                 */
                val = intel_de_read(dev_priv, imr_reg);
-               val |= EDP_PSR_ERROR(trans_shift);
+               val |= psr_irq_psr_error_bit_get(intel_dp);
                intel_de_write(dev_priv, imr_reg, val);
 
                schedule_work(&intel_dp->psr.work);
@@ -1194,14 +1212,12 @@ static bool psr_interrupt_error_check(struct intel_dp *intel_dp)
         * first time that PSR HW tries to activate so lets keep PSR disabled
         * to avoid any rendering problems.
         */
-       if (DISPLAY_VER(dev_priv) >= 12) {
+       if (DISPLAY_VER(dev_priv) >= 12)
                val = intel_de_read(dev_priv,
                                    TRANS_PSR_IIR(intel_dp->psr.transcoder));
-               val &= EDP_PSR_ERROR(0);
-       } else {
+       else
                val = intel_de_read(dev_priv, EDP_PSR_IIR);
-               val &= EDP_PSR_ERROR(intel_dp->psr.transcoder);
-       }
+       val &= psr_irq_psr_error_bit_get(intel_dp);
        if (val) {
                intel_dp->psr.sink_not_reliable = true;
                drm_dbg_kms(&dev_priv->drm,
index 01b0932..18178b0 100644 (file)
@@ -1710,10 +1710,22 @@ skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
                      modifier == I915_FORMAT_MOD_4_TILED ||
                      modifier == I915_FORMAT_MOD_Yf_TILED ||
                      modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
-                     modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
+                     modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
+                     modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
+                     modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS ||
+                     modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC ||
+                     modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS ||
+                     modifier == I915_FORMAT_MOD_4_TILED_DG2_MC_CCS ||
+                     modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC;
        wp->x_tiled = modifier == I915_FORMAT_MOD_X_TILED;
        wp->rc_surface = modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
-                        modifier == I915_FORMAT_MOD_Yf_TILED_CCS;
+                        modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
+                        modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS ||
+                        modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS ||
+                        modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC ||
+                        modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS ||
+                        modifier == I915_FORMAT_MOD_4_TILED_DG2_MC_CCS ||
+                        modifier == I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC;
        wp->is_planar = intel_format_info_is_yuv_semiplanar(format, modifier);
 
        wp->width = width;
index 0bcde53..1e29b1e 100644 (file)
@@ -1387,14 +1387,8 @@ kill_engines(struct i915_gem_engines *engines, bool exit, bool persistent)
         */
        for_each_gem_engine(ce, engines, it) {
                struct intel_engine_cs *engine;
-               bool skip = false;
 
-               if (exit)
-                       skip = intel_context_set_exiting(ce);
-               else if (!persistent)
-                       skip = intel_context_exit_nonpersistent(ce, NULL);
-
-               if (skip)
+               if ((exit || !persistent) && intel_context_revoke(ce))
                        continue; /* Already marked. */
 
                /*
index cd75b0c..845023c 100644 (file)
@@ -2424,7 +2424,7 @@ gen8_dispatch_bsd_engine(struct drm_i915_private *dev_priv,
        /* Check whether the file_priv has already selected one ring. */
        if ((int)file_priv->bsd_engine < 0)
                file_priv->bsd_engine =
-                       get_random_int() % num_vcs_engines(dev_priv);
+                       prandom_u32_max(num_vcs_engines(dev_priv));
 
        return file_priv->bsd_engine;
 }
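The hunk above swaps `get_random_int() % n` for `prandom_u32_max(n)`, which maps a full-range 32-bit value into [0, n) with a multiply-and-shift instead of a division. A sketch of that bound, shown with fixed "random" inputs rather than a real RNG:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the multiply-and-shift bound used by the kernel's
 * prandom_u32_max(): (rnd * range) occupies 64 bits, and the top 32 bits
 * are a value uniformly-ish distributed in [0, range). No modulo needed. */
static uint32_t bounded(uint32_t rnd, uint32_t range)
{
	return (uint32_t)(((uint64_t)rnd * range) >> 32);
}
```

Besides avoiding a hardware divide, this also sidesteps the slight modulo bias that `% n` has when n does not evenly divide 2^32.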
index 7ff9c78..369006c 100644 (file)
@@ -653,6 +653,41 @@ int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
                            struct i915_gem_ww_ctx *ww,
                            enum intel_region_id id)
 {
+       return __i915_gem_object_migrate(obj, ww, id, obj->flags);
+}
+
+/**
+ * __i915_gem_object_migrate - Migrate an object to the desired region id, with
+ * control of the extra flags
+ * @obj: The object to migrate.
+ * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
+ * not be successful in evicting other objects to make room for this object.
+ * @id: The region id to migrate to.
+ * @flags: The object flags. Normally just obj->flags.
+ *
+ * Attempt to migrate the object to the desired memory region. The
+ * object backend must support migration and the object may not be
+ * pinned (explicitly pinned pages or pinned vmas). The object must
+ * be locked.
+ * On successful completion, the object will have pages pointing to
+ * memory in the new region, but an async migration task may not have
+ * completed yet; to wait for completion, i915_gem_object_wait_migration()
+ * must be called.
+ *
+ * Note: the @ww parameter is not used yet, but included to make sure
+ * callers put some effort into obtaining a valid ww ctx if one is
+ * available.
+ *
+ * Return: 0 on success. Negative error code on failure. In particular may
+ * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
+ * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
+ * -EBUSY if the object is pinned.
+ */
+int __i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+                             struct i915_gem_ww_ctx *ww,
+                             enum intel_region_id id,
+                             unsigned int flags)
+{
        struct drm_i915_private *i915 = to_i915(obj->base.dev);
        struct intel_memory_region *mr;
 
@@ -672,7 +707,7 @@ int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
                return 0;
        }
 
-       return obj->ops->migrate(obj, mr);
+       return obj->ops->migrate(obj, mr, flags);
 }
 
 /**
index 7317d41..1723af9 100644 (file)
@@ -608,6 +608,10 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
                            struct i915_gem_ww_ctx *ww,
                            enum intel_region_id id);
+int __i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+                             struct i915_gem_ww_ctx *ww,
+                             enum intel_region_id id,
+                             unsigned int flags);
 
 bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
                                 enum intel_region_id id);
index 40305e2..d0d6772 100644 (file)
@@ -107,7 +107,8 @@ struct drm_i915_gem_object_ops {
         * pinning or for as long as the object lock is held.
         */
        int (*migrate)(struct drm_i915_gem_object *obj,
-                      struct intel_memory_region *mr);
+                      struct intel_memory_region *mr,
+                      unsigned int flags);
 
        void (*release)(struct drm_i915_gem_object *obj);
 
index e3fc38d..4f86178 100644 (file)
@@ -848,9 +848,10 @@ static int __i915_ttm_migrate(struct drm_i915_gem_object *obj,
 }
 
 static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
-                           struct intel_memory_region *mr)
+                           struct intel_memory_region *mr,
+                           unsigned int flags)
 {
-       return __i915_ttm_migrate(obj, mr, obj->flags);
+       return __i915_ttm_migrate(obj, mr, flags);
 }
 
 static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
index 654a092..e94365b 100644 (file)
@@ -614,13 +614,12 @@ bool intel_context_ban(struct intel_context *ce, struct i915_request *rq)
        return ret;
 }
 
-bool intel_context_exit_nonpersistent(struct intel_context *ce,
-                                     struct i915_request *rq)
+bool intel_context_revoke(struct intel_context *ce)
 {
        bool ret = intel_context_set_exiting(ce);
 
        if (ce->ops->revoke)
-               ce->ops->revoke(ce, rq, ce->engine->props.preempt_timeout_ms);
+               ce->ops->revoke(ce, NULL, ce->engine->props.preempt_timeout_ms);
 
        return ret;
 }
index 8e2d706..be09fb2 100644 (file)
@@ -329,8 +329,7 @@ static inline bool intel_context_set_exiting(struct intel_context *ce)
        return test_and_set_bit(CONTEXT_EXITING, &ce->flags);
 }
 
-bool intel_context_exit_nonpersistent(struct intel_context *ce,
-                                     struct i915_request *rq);
+bool intel_context_revoke(struct intel_context *ce);
 
 static inline bool
 intel_context_force_single_submission(const struct intel_context *ce)
index 30cf5c3..2049a00 100644 (file)
@@ -1275,10 +1275,16 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
                        atomic_read(&vma->flags) & I915_VMA_BIND_MASK;
 
                GEM_BUG_ON(!was_bound);
-               if (!retained_ptes)
+               if (!retained_ptes) {
+                       /*
+                        * Clear the bound flags of the vma resource to allow
+                        * ptes to be repopulated.
+                        */
+                       vma->resource->bound_flags = 0;
                        vma->ops->bind_vma(vm, NULL, vma->resource,
                                           obj ? obj->cache_level : 0,
                                           was_bound);
+               }
                if (obj) { /* only used during resume => exclusive access */
                        write_domain_objs |= fetch_and_zero(&obj->write_domain);
                        obj->read_domains |= I915_GEM_DOMAIN_GTT;
index c6ebe27..152244d 100644 (file)
@@ -207,6 +207,14 @@ static const struct drm_i915_mocs_entry broxton_mocs_table[] = {
        MOCS_ENTRY(15, \
                   LE_3_WB | LE_TC_1_LLC | LE_LRUM(2) | LE_AOM(1), \
                   L3_3_WB), \
+       /* Bypass LLC - Uncached (EHL+) */ \
+       MOCS_ENTRY(16, \
+                  LE_1_UC | LE_TC_1_LLC | LE_SCF(1), \
+                  L3_1_UC), \
+       /* Bypass LLC - L3 (Read-Only) (EHL+) */ \
+       MOCS_ENTRY(17, \
+                  LE_1_UC | LE_TC_1_LLC | LE_SCF(1), \
+                  L3_3_WB), \
        /* Self-Snoop - L3 + LLC */ \
        MOCS_ENTRY(18, \
                   LE_3_WB | LE_TC_1_LLC | LE_LRUM(3) | LE_SSE(3), \
index 22ba66e..1db59ee 100644 (file)
@@ -684,7 +684,7 @@ static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq)
         * Corner case where requests were sitting in the priority list or a
         * request resubmitted after the context was banned.
         */
-       if (unlikely(intel_context_is_banned(ce))) {
+       if (unlikely(!intel_context_is_schedulable(ce))) {
                i915_request_put(i915_request_mark_eio(rq));
                intel_engine_signal_breadcrumbs(ce->engine);
                return 0;
@@ -870,15 +870,15 @@ static int guc_wq_item_append(struct intel_guc *guc,
                              struct i915_request *rq)
 {
        struct intel_context *ce = request_to_scheduling_context(rq);
-       int ret = 0;
+       int ret;
 
-       if (likely(!intel_context_is_banned(ce))) {
-               ret = __guc_wq_item_append(rq);
+       if (unlikely(!intel_context_is_schedulable(ce)))
+               return 0;
 
-               if (unlikely(ret == -EBUSY)) {
-                       guc->stalled_request = rq;
-                       guc->submission_stall_reason = STALL_MOVE_LRC_TAIL;
-               }
+       ret = __guc_wq_item_append(rq);
+       if (unlikely(ret == -EBUSY)) {
+               guc->stalled_request = rq;
+               guc->submission_stall_reason = STALL_MOVE_LRC_TAIL;
        }
 
        return ret;
@@ -897,7 +897,7 @@ static bool multi_lrc_submit(struct i915_request *rq)
         * submitting all the requests generated in parallel.
         */
        return test_bit(I915_FENCE_FLAG_SUBMIT_PARALLEL, &rq->fence.flags) ||
-               intel_context_is_banned(ce);
+              !intel_context_is_schedulable(ce);
 }
 
 static int guc_dequeue_one_context(struct intel_guc *guc)
@@ -966,7 +966,7 @@ register_context:
                struct intel_context *ce = request_to_scheduling_context(last);
 
                if (unlikely(!ctx_id_mapped(guc, ce->guc_id.id) &&
-                            !intel_context_is_banned(ce))) {
+                            intel_context_is_schedulable(ce))) {
                        ret = try_context_registration(ce, false);
                        if (unlikely(ret == -EPIPE)) {
                                goto deadlk;
@@ -1576,7 +1576,7 @@ static void guc_reset_state(struct intel_context *ce, u32 head, bool scrub)
 {
        struct intel_engine_cs *engine = __context_to_physical_engine(ce);
 
-       if (intel_context_is_banned(ce))
+       if (!intel_context_is_schedulable(ce))
                return;
 
        GEM_BUG_ON(!intel_context_is_pinned(ce));
@@ -4424,12 +4424,12 @@ static void guc_handle_context_reset(struct intel_guc *guc,
 {
        trace_intel_context_reset(ce);
 
-       if (likely(!intel_context_is_banned(ce))) {
+       if (likely(intel_context_is_schedulable(ce))) {
                capture_error_state(guc, ce);
                guc_context_replay(ce);
        } else {
                drm_info(&guc_to_gt(guc)->i915->drm,
-                        "Ignoring context reset notification of banned context 0x%04X on %s",
+                        "Ignoring context reset notification of exiting context 0x%04X on %s",
                         ce->guc_id.id, ce->engine->name);
        }
 }
index 329ff75..7bd1861 100644 (file)
@@ -137,12 +137,12 @@ static u64 random_offset(u64 start, u64 end, u64 len, u64 align)
        range = round_down(end - len, align) - round_up(start, align);
        if (range) {
                if (sizeof(unsigned long) == sizeof(u64)) {
-                       addr = get_random_long();
+                       addr = get_random_u64();
                } else {
-                       addr = get_random_int();
+                       addr = get_random_u32();
                        if (range > U32_MAX) {
                                addr <<= 32;
-                               addr |= get_random_int();
+                               addr |= get_random_u32();
                        }
                }
                div64_u64_rem(addr, range, &addr);
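random_offset() above composes a 64-bit value from two get_random_u32() draws when `unsigned long` is only 32 bits wide and the range exceeds U32_MAX. A simplified sketch, with the two draws passed in explicitly instead of calling an RNG:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the 64-bit composition in random_offset():
 * hi_draw and lo_draw stand in for two get_random_u32() results. Only
 * when the range needs more than 32 bits is the second draw consumed. */
static uint64_t wide_random(uint64_t range, uint32_t hi_draw, uint32_t lo_draw)
{
	uint64_t addr = hi_draw;

	if (range > UINT32_MAX) {
		addr <<= 32;
		addr |= lo_draw;
	}
	return addr % range;	/* the kernel uses div64_u64_rem() here */
}
```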
index 1a9bd82..0b287a5 100644 (file)
 #define TRANS_PSR_IIR(tran)                    _MMIO_TRANS2(tran, _PSR_IIR_A)
 #define   _EDP_PSR_TRANS_SHIFT(trans)          ((trans) == TRANSCODER_EDP ? \
                                                 0 : ((trans) - TRANSCODER_A + 1) * 8)
-#define   EDP_PSR_TRANS_MASK(trans)            (0x7 << _EDP_PSR_TRANS_SHIFT(trans))
-#define   EDP_PSR_ERROR(trans)                 (0x4 << _EDP_PSR_TRANS_SHIFT(trans))
-#define   EDP_PSR_POST_EXIT(trans)             (0x2 << _EDP_PSR_TRANS_SHIFT(trans))
-#define   EDP_PSR_PRE_ENTRY(trans)             (0x1 << _EDP_PSR_TRANS_SHIFT(trans))
+#define   TGL_PSR_MASK                 REG_GENMASK(2, 0)
+#define   TGL_PSR_ERROR                        REG_BIT(2)
+#define   TGL_PSR_POST_EXIT            REG_BIT(1)
+#define   TGL_PSR_PRE_ENTRY            REG_BIT(0)
+#define   EDP_PSR_MASK(trans)          (TGL_PSR_MASK <<                \
+                                        _EDP_PSR_TRANS_SHIFT(trans))
+#define   EDP_PSR_ERROR(trans)         (TGL_PSR_ERROR <<               \
+                                        _EDP_PSR_TRANS_SHIFT(trans))
+#define   EDP_PSR_POST_EXIT(trans)     (TGL_PSR_POST_EXIT <<           \
+                                        _EDP_PSR_TRANS_SHIFT(trans))
+#define   EDP_PSR_PRE_ENTRY(trans)     (TGL_PSR_PRE_ENTRY <<           \
+                                        _EDP_PSR_TRANS_SHIFT(trans))
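The PSR register rework above defines fixed TGL_PSR_* bits with REG_BIT()/REG_GENMASK() and derives the pre-gen12 per-transcoder EDP_PSR_* variants by shifting them. A minimal stand-in for those helpers, to show how the two families relate (the real i915 macros add compile-time argument checks):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the i915 REG_BIT()/REG_GENMASK() helpers. */
#define REG_BIT(n)		(1u << (n))
#define REG_GENMASK(h, l)	(((~0u) >> (31 - (h))) & ((~0u) << (l)))

/* gen12+: one PSR register per transcoder, bits always at shift 0. */
#define TGL_PSR_MASK		REG_GENMASK(2, 0)
#define TGL_PSR_ERROR		REG_BIT(2)
#define TGL_PSR_POST_EXIT	REG_BIT(1)
#define TGL_PSR_PRE_ENTRY	REG_BIT(0)

/* Pre-gen12: one shared register packs a 3-bit field per transcoder;
 * shift mimics _EDP_PSR_TRANS_SHIFT(), with the EDP transcoder at 0. */
#define EDP_PSR_ERROR(shift)	(TGL_PSR_ERROR << (shift))
#define EDP_PSR_MASK(shift)	(TGL_PSR_MASK << (shift))
```

Expressing the shifted variants in terms of the TGL bits, as the hunk does, keeps the two definitions from drifting apart.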
 
 #define _SRD_AUX_DATA_A                                0x60814
 #define _SRD_AUX_DATA_EDP                      0x6f814
index c4e9323..39da0fb 100644 (file)
@@ -135,7 +135,7 @@ static int __run_selftests(const char *name,
        int err = 0;
 
        while (!i915_selftest.random_seed)
-               i915_selftest.random_seed = get_random_int();
+               i915_selftest.random_seed = get_random_u32();
 
        i915_selftest.timeout_jiffies =
                i915_selftest.timeout_ms ?
index 1635661..20fe538 100644 (file)
@@ -139,44 +139,24 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
        }
 }
 
-static vm_fault_t nouveau_dmem_fault_copy_one(struct nouveau_drm *drm,
-               struct vm_fault *vmf, struct migrate_vma *args,
-               dma_addr_t *dma_addr)
+static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
+                               struct page *dpage, dma_addr_t *dma_addr)
 {
        struct device *dev = drm->dev->dev;
-       struct page *dpage, *spage;
-       struct nouveau_svmm *svmm;
-
-       spage = migrate_pfn_to_page(args->src[0]);
-       if (!spage || !(args->src[0] & MIGRATE_PFN_MIGRATE))
-               return 0;
 
-       dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
-       if (!dpage)
-               return VM_FAULT_SIGBUS;
        lock_page(dpage);
 
        *dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, *dma_addr))
-               goto error_free_page;
+               return -EIO;
 
-       svmm = spage->zone_device_data;
-       mutex_lock(&svmm->mutex);
-       nouveau_svmm_invalidate(svmm, args->start, args->end);
        if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
-                       NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage)))
-               goto error_dma_unmap;
-       mutex_unlock(&svmm->mutex);
+                                        NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage))) {
+               dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+               return -EIO;
+       }
 
-       args->dst[0] = migrate_pfn(page_to_pfn(dpage));
        return 0;
-
-error_dma_unmap:
-       mutex_unlock(&svmm->mutex);
-       dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
-error_free_page:
-       __free_page(dpage);
-       return VM_FAULT_SIGBUS;
 }
 
 static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
@@ -184,9 +164,11 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
        struct nouveau_drm *drm = page_to_drm(vmf->page);
        struct nouveau_dmem *dmem = drm->dmem;
        struct nouveau_fence *fence;
+       struct nouveau_svmm *svmm;
+       struct page *spage, *dpage;
        unsigned long src = 0, dst = 0;
        dma_addr_t dma_addr = 0;
-       vm_fault_t ret;
+       vm_fault_t ret = 0;
        struct migrate_vma args = {
                .vma            = vmf->vma,
                .start          = vmf->address,
@@ -194,6 +176,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
                .src            = &src,
                .dst            = &dst,
                .pgmap_owner    = drm->dev,
+               .fault_page     = vmf->page,
                .flags          = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
        };
 
@@ -207,9 +190,25 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
        if (!args.cpages)
                return 0;
 
-       ret = nouveau_dmem_fault_copy_one(drm, vmf, &args, &dma_addr);
-       if (ret || dst == 0)
+       spage = migrate_pfn_to_page(src);
+       if (!spage || !(src & MIGRATE_PFN_MIGRATE))
+               goto done;
+
+       dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
+       if (!dpage)
+               goto done;
+
+       dst = migrate_pfn(page_to_pfn(dpage));
+
+       svmm = spage->zone_device_data;
+       mutex_lock(&svmm->mutex);
+       nouveau_svmm_invalidate(svmm, args.start, args.end);
+       ret = nouveau_dmem_copy_one(drm, spage, dpage, &dma_addr);
+       mutex_unlock(&svmm->mutex);
+       if (ret) {
+               ret = VM_FAULT_SIGBUS;
                goto done;
+       }
 
        nouveau_fence_new(dmem->migrate.chan, false, &fence);
        migrate_vma_pages(&args);
@@ -326,7 +325,7 @@ nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
                        return NULL;
        }
 
-       lock_page(page);
+       zone_device_page_init(page);
        return page;
 }
 
@@ -369,6 +368,52 @@ nouveau_dmem_suspend(struct nouveau_drm *drm)
        mutex_unlock(&drm->dmem->mutex);
 }
 
+/*
+ * Evict all pages mapping a chunk.
+ */
+static void
+nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
+{
+       unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
+       unsigned long *src_pfns, *dst_pfns;
+       dma_addr_t *dma_addrs;
+       struct nouveau_fence *fence;
+
+       src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+       dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+       dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
+
+       migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
+                       npages);
+
+       for (i = 0; i < npages; i++) {
+               if (src_pfns[i] & MIGRATE_PFN_MIGRATE) {
+                       struct page *dpage;
+
+                       /*
+                        * __GFP_NOFAIL because the GPU is going away and there
+                        * is nothing sensible we can do if we can't copy the
+                        * data back.
+                        */
+                       dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
+                       dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+                       nouveau_dmem_copy_one(chunk->drm,
+                                       migrate_pfn_to_page(src_pfns[i]), dpage,
+                                       &dma_addrs[i]);
+               }
+       }
+
+       nouveau_fence_new(chunk->drm->dmem->migrate.chan, false, &fence);
+       migrate_device_pages(src_pfns, dst_pfns, npages);
+       nouveau_dmem_fence_done(&fence);
+       migrate_device_finalize(src_pfns, dst_pfns, npages);
+       kfree(src_pfns);
+       kfree(dst_pfns);
+       for (i = 0; i < npages; i++)
+               dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
+       kfree(dma_addrs);
+}
+
 void
 nouveau_dmem_fini(struct nouveau_drm *drm)
 {
@@ -380,8 +425,10 @@ nouveau_dmem_fini(struct nouveau_drm *drm)
        mutex_lock(&drm->dmem->mutex);
 
        list_for_each_entry_safe(chunk, tmp, &drm->dmem->chunks, list) {
+               nouveau_dmem_evict_chunk(chunk);
                nouveau_bo_unpin(chunk->bo);
                nouveau_bo_ref(NULL, &chunk->bo);
+               WARN_ON(chunk->callocated);
                list_del(&chunk->list);
                memunmap_pages(&chunk->pagemap);
                release_mem_region(chunk->pagemap.range.start,
index 89056a1..6bd0634 100644 (file)
@@ -63,13 +63,13 @@ static void panfrost_core_dump_header(struct panfrost_dump_iterator *iter,
 {
        struct panfrost_dump_object_header *hdr = iter->hdr;
 
-       hdr->magic = cpu_to_le32(PANFROSTDUMP_MAGIC);
-       hdr->type = cpu_to_le32(type);
-       hdr->file_offset = cpu_to_le32(iter->data - iter->start);
-       hdr->file_size = cpu_to_le32(data_end - iter->data);
+       hdr->magic = PANFROSTDUMP_MAGIC;
+       hdr->type = type;
+       hdr->file_offset = iter->data - iter->start;
+       hdr->file_size = data_end - iter->data;
 
        iter->hdr++;
-       iter->data += le32_to_cpu(hdr->file_size);
+       iter->data += hdr->file_size;
 }
 
 static void
@@ -93,8 +93,8 @@ panfrost_core_dump_registers(struct panfrost_dump_iterator *iter,
 
                reg = panfrost_dump_registers[i] + js_as_offset;
 
-               dumpreg->reg = cpu_to_le32(reg);
-               dumpreg->value = cpu_to_le32(gpu_read(pfdev, reg));
+               dumpreg->reg = reg;
+               dumpreg->value = gpu_read(pfdev, reg);
        }
 
        panfrost_core_dump_header(iter, PANFROSTDUMP_BUF_REG, dumpreg);
@@ -106,7 +106,7 @@ void panfrost_core_dump(struct panfrost_job *job)
        struct panfrost_dump_iterator iter;
        struct drm_gem_object *dbo;
        unsigned int n_obj, n_bomap_pages;
-       __le64 *bomap, *bomap_start;
+       u64 *bomap, *bomap_start;
        size_t file_size;
        u32 as_nr;
        int slot;
@@ -177,11 +177,11 @@ void panfrost_core_dump(struct panfrost_job *job)
         * For now, we write the job identifier in the register dump header,
         * so that we can decode the entire dump later with pandecode
         */
-       iter.hdr->reghdr.jc = cpu_to_le64(job->jc);
-       iter.hdr->reghdr.major = cpu_to_le32(PANFROSTDUMP_MAJOR);
-       iter.hdr->reghdr.minor = cpu_to_le32(PANFROSTDUMP_MINOR);
-       iter.hdr->reghdr.gpu_id = cpu_to_le32(pfdev->features.id);
-       iter.hdr->reghdr.nbos = cpu_to_le64(job->bo_count);
+       iter.hdr->reghdr.jc = job->jc;
+       iter.hdr->reghdr.major = PANFROSTDUMP_MAJOR;
+       iter.hdr->reghdr.minor = PANFROSTDUMP_MINOR;
+       iter.hdr->reghdr.gpu_id = pfdev->features.id;
+       iter.hdr->reghdr.nbos = job->bo_count;
 
        panfrost_core_dump_registers(&iter, pfdev, as_nr, slot);
 
@@ -218,27 +218,27 @@ void panfrost_core_dump(struct panfrost_job *job)
 
                WARN_ON(!mapping->active);
 
-               iter.hdr->bomap.data[0] = cpu_to_le32((bomap - bomap_start));
+               iter.hdr->bomap.data[0] = bomap - bomap_start;
 
                for_each_sgtable_page(bo->base.sgt, &page_iter, 0) {
                        struct page *page = sg_page_iter_page(&page_iter);
 
                        if (!IS_ERR(page)) {
-                               *bomap++ = cpu_to_le64(page_to_phys(page));
+                               *bomap++ = page_to_phys(page);
                        } else {
                                dev_err(pfdev->dev, "Panfrost Dump: wrong page\n");
-                               *bomap++ = ~cpu_to_le64(0);
+                               *bomap++ = 0;
                        }
                }
 
-               iter.hdr->bomap.iova = cpu_to_le64(mapping->mmnode.start << PAGE_SHIFT);
+               iter.hdr->bomap.iova = mapping->mmnode.start << PAGE_SHIFT;
 
                vaddr = map.vaddr;
                memcpy(iter.data, vaddr, bo->base.base.size);
 
                drm_gem_shmem_vunmap(&bo->base, &map);
 
-               iter.hdr->bomap.valid = cpu_to_le32(1);
+               iter.hdr->bomap.valid = 1;
 
 dump_header:   panfrost_core_dump_header(&iter, PANFROSTDUMP_BUF_BO, iter.data +
                                          bo->base.base.size);
index 6b25b2f..6137537 100644 (file)
@@ -385,7 +385,8 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
        }
 
        s_fence = to_drm_sched_fence(fence);
-       if (s_fence && s_fence->sched == sched) {
+       if (s_fence && s_fence->sched == sched &&
+           !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) {
 
                /*
                 * Fence is from the same scheduler, only need to wait for
index 7a2b2d6..62f6958 100644 (file)
@@ -729,7 +729,7 @@ static void drm_test_buddy_alloc_limit(struct kunit *test)
 static int drm_buddy_init_test(struct kunit *test)
 {
        while (!random_seed)
-               random_seed = get_random_int();
+               random_seed = get_random_u32();
 
        return 0;
 }
index 8d86c25..2191e57 100644 (file)
@@ -438,7 +438,7 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
        iosys_map_set_vaddr(&src, xrgb8888);
 
        drm_fb_xrgb8888_to_xrgb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip);
-       buf = le32buf_to_cpu(test, buf, TEST_BUF_SIZE);
+       buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
        KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
 }
 
index 659d1af..c4b66ee 100644 (file)
@@ -2212,7 +2212,7 @@ err_nodes:
 static int drm_mm_init_test(struct kunit *test)
 {
        while (!random_seed)
-               random_seed = get_random_int();
+               random_seed = get_random_u32();
 
        return 0;
 }
index ffbbb45..2027063 100644 (file)
@@ -490,6 +490,7 @@ module_init(vc4_drm_register);
 module_exit(vc4_drm_unregister);
 
 MODULE_ALIAS("platform:vc4-drm");
+MODULE_SOFTDEP("pre: snd-soc-hdmi-codec");
 MODULE_DESCRIPTION("Broadcom VC4 DRM Driver");
 MODULE_AUTHOR("Eric Anholt <eric@anholt.net>");
 MODULE_LICENSE("GPL v2");
index 64f9fea..596e311 100644 (file)
@@ -3318,12 +3318,37 @@ static int vc4_hdmi_runtime_resume(struct device *dev)
        struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
        unsigned long __maybe_unused flags;
        u32 __maybe_unused value;
+       unsigned long rate;
        int ret;
 
+       /*
+        * The HSM clock is in the HDMI power domain, so we need to set
+        * its frequency while the power domain is active so that it
+        * keeps its rate.
+        */
+       ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ);
+       if (ret)
+               return ret;
+
        ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
        if (ret)
                return ret;
 
+       /*
+        * Whenever the Raspberry Pi boots without an HDMI monitor
+        * plugged in, the firmware won't have initialized the HSM clock
+        * rate and it will be reported as 0.
+        *
+        * If we try to access a register of the controller in such a
+        * case, it will lead to a silent CPU stall. Let's make sure we
+        * prevent such a case.
+        */
+       rate = clk_get_rate(vc4_hdmi->hsm_clock);
+       if (!rate) {
+               ret = -EINVAL;
+               goto err_disable_clk;
+       }
+
        if (vc4_hdmi->variant->reset)
                vc4_hdmi->variant->reset(vc4_hdmi);
 
@@ -3345,6 +3370,10 @@ static int vc4_hdmi_runtime_resume(struct device *dev)
 #endif
 
        return 0;
+
+err_disable_clk:
+       clk_disable_unprepare(vc4_hdmi->hsm_clock);
+       return ret;
 }
 
 static void vc4_hdmi_put_ddc_device(void *ptr)
index da86565..dad953f 100644 (file)
 #define USB_DEVICE_ID_MADCATZ_BEATPAD  0x4540
 #define USB_DEVICE_ID_MADCATZ_RAT5     0x1705
 #define USB_DEVICE_ID_MADCATZ_RAT9     0x1709
+#define USB_DEVICE_ID_MADCATZ_MMO7     0x1713
 
 #define USB_VENDOR_ID_MCC              0x09db
 #define USB_DEVICE_ID_MCC_PMD1024LS    0x0076
 #define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2    0x09cc
 #define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE       0x0ba0
 #define USB_DEVICE_ID_SONY_PS5_CONTROLLER      0x0ce6
+#define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2    0x0df2
 #define USB_DEVICE_ID_SONY_MOTION_CONTROLLER   0x03d5
 #define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER       0x042f
 #define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER             0x0002
index 9dabd63..44763c0 100644 (file)
@@ -985,7 +985,7 @@ static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
        struct device *dev = led_cdev->dev->parent;
        struct hid_device *hdev = to_hid_device(dev);
        struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
-       u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
+       static const u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
        int led_nr = 0;
        int ret = 0;
 
index 664a624..c9c968d 100644 (file)
@@ -480,7 +480,7 @@ static int magicmouse_raw_event(struct hid_device *hdev,
                magicmouse_raw_event(hdev, report, data + 2, data[1]);
                magicmouse_raw_event(hdev, report, data + 2 + data[1],
                        size - 2 - data[1]);
-               break;
+               return 0;
        default:
                return 0;
        }
index 40050eb..0b58763 100644 (file)
@@ -46,6 +46,7 @@ struct ps_device {
        uint32_t fw_version;
 
        int (*parse_report)(struct ps_device *dev, struct hid_report *report, u8 *data, int size);
+       void (*remove)(struct ps_device *dev);
 };
 
 /* Calibration data for playstation motion sensors. */
@@ -107,6 +108,9 @@ struct ps_led_info {
 #define DS_STATUS_CHARGING             GENMASK(7, 4)
 #define DS_STATUS_CHARGING_SHIFT       4
 
+/* Feature version from DualSense Firmware Info report. */
+#define DS_FEATURE_VERSION(major, minor) ((((major) & 0xff) << 8) | ((minor) & 0xff))
+
 /*
  * Status of a DualSense touch point contact.
  * Contact IDs, with highest bit set are 'inactive'
@@ -125,6 +129,7 @@ struct ps_led_info {
 #define DS_OUTPUT_VALID_FLAG1_RELEASE_LEDS BIT(3)
 #define DS_OUTPUT_VALID_FLAG1_PLAYER_INDICATOR_CONTROL_ENABLE BIT(4)
 #define DS_OUTPUT_VALID_FLAG2_LIGHTBAR_SETUP_CONTROL_ENABLE BIT(1)
+#define DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2 BIT(2)
 #define DS_OUTPUT_POWER_SAVE_CONTROL_MIC_MUTE BIT(4)
 #define DS_OUTPUT_LIGHTBAR_SETUP_LIGHT_OUT BIT(1)
 
@@ -142,6 +147,9 @@ struct dualsense {
        struct input_dev *sensors;
        struct input_dev *touchpad;
 
+       /* Update version is used as a feature/capability version. */
+       uint16_t update_version;
+
        /* Calibration data for accelerometer and gyroscope. */
        struct ps_calibration_data accel_calib_data[3];
        struct ps_calibration_data gyro_calib_data[3];
@@ -152,6 +160,7 @@ struct dualsense {
        uint32_t sensor_timestamp_us;
 
        /* Compatible rumble state */
+       bool use_vibration_v2;
        bool update_rumble;
        uint8_t motor_left;
        uint8_t motor_right;
@@ -174,6 +183,7 @@ struct dualsense {
        struct led_classdev player_leds[5];
 
        struct work_struct output_worker;
+       bool output_worker_initialized;
        void *output_report_dmabuf;
        uint8_t output_seq; /* Sequence number for output report. */
 };
@@ -299,6 +309,7 @@ static const struct {int x; int y; } ps_gamepad_hat_mapping[] = {
        {0, 0},
 };
 
+static inline void dualsense_schedule_work(struct dualsense *ds);
 static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue);
 
 /*
@@ -789,6 +800,7 @@ err_free:
        return ret;
 }
 
+
 static int dualsense_get_firmware_info(struct dualsense *ds)
 {
        uint8_t *buf;
@@ -808,6 +820,15 @@ static int dualsense_get_firmware_info(struct dualsense *ds)
        ds->base.hw_version = get_unaligned_le32(&buf[24]);
        ds->base.fw_version = get_unaligned_le32(&buf[28]);
 
+       /* The update version acts as a feature/capability version. It is distinct
+        * from the firmware version, as there can be many variations of a controller
+        * over time with the same physical shell but different PCBs and other
+        * internal changes. The update version (the vendor-internal name) is used
+        * to detect which features are available and to adjust behavior accordingly.
+        * Note: the version is different between DualSense and DualSense Edge.
+        */
+       ds->update_version = get_unaligned_le16(&buf[44]);
+
 err_free:
        kfree(buf);
        return ret;
@@ -878,7 +899,7 @@ static int dualsense_player_led_set_brightness(struct led_classdev *led, enum le
        ds->update_player_leds = true;
        spin_unlock_irqrestore(&ds->base.lock, flags);
 
-       schedule_work(&ds->output_worker);
+       dualsense_schedule_work(ds);
 
        return 0;
 }
@@ -922,6 +943,16 @@ static void dualsense_init_output_report(struct dualsense *ds, struct dualsense_
        }
 }
 
+static inline void dualsense_schedule_work(struct dualsense *ds)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&ds->base.lock, flags);
+       if (ds->output_worker_initialized)
+               schedule_work(&ds->output_worker);
+       spin_unlock_irqrestore(&ds->base.lock, flags);
+}
+
 /*
  * Helper function to send DualSense output reports. Applies a CRC at the end of a report
  * for Bluetooth reports.
@@ -960,7 +991,10 @@ static void dualsense_output_worker(struct work_struct *work)
        if (ds->update_rumble) {
                /* Select classic rumble style haptics and enable it. */
                common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_HAPTICS_SELECT;
-               common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
+               if (ds->use_vibration_v2)
+                       common->valid_flag2 |= DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2;
+               else
+                       common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
                common->motor_left = ds->motor_left;
                common->motor_right = ds->motor_right;
                ds->update_rumble = false;
@@ -1082,7 +1116,7 @@ static int dualsense_parse_report(struct ps_device *ps_dev, struct hid_report *r
                spin_unlock_irqrestore(&ps_dev->lock, flags);
 
                /* Schedule updating of microphone state at hardware level. */
-               schedule_work(&ds->output_worker);
+               dualsense_schedule_work(ds);
        }
        ds->last_btn_mic_state = btn_mic_state;
 
@@ -1197,10 +1231,22 @@ static int dualsense_play_effect(struct input_dev *dev, void *data, struct ff_ef
        ds->motor_right = effect->u.rumble.weak_magnitude / 256;
        spin_unlock_irqrestore(&ds->base.lock, flags);
 
-       schedule_work(&ds->output_worker);
+       dualsense_schedule_work(ds);
        return 0;
 }
 
+static void dualsense_remove(struct ps_device *ps_dev)
+{
+       struct dualsense *ds = container_of(ps_dev, struct dualsense, base);
+       unsigned long flags;
+
+       spin_lock_irqsave(&ds->base.lock, flags);
+       ds->output_worker_initialized = false;
+       spin_unlock_irqrestore(&ds->base.lock, flags);
+
+       cancel_work_sync(&ds->output_worker);
+}
+
 static int dualsense_reset_leds(struct dualsense *ds)
 {
        struct dualsense_output_report report;
@@ -1237,7 +1283,7 @@ static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t gr
        ds->lightbar_blue = blue;
        spin_unlock_irqrestore(&ds->base.lock, flags);
 
-       schedule_work(&ds->output_worker);
+       dualsense_schedule_work(ds);
 }
 
 static void dualsense_set_player_leds(struct dualsense *ds)
@@ -1260,7 +1306,7 @@ static void dualsense_set_player_leds(struct dualsense *ds)
 
        ds->update_player_leds = true;
        ds->player_leds_state = player_ids[player_id];
-       schedule_work(&ds->output_worker);
+       dualsense_schedule_work(ds);
 }
 
 static struct ps_device *dualsense_create(struct hid_device *hdev)
@@ -1299,7 +1345,9 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
        ps_dev->battery_capacity = 100; /* initial value until parse_report. */
        ps_dev->battery_status = POWER_SUPPLY_STATUS_UNKNOWN;
        ps_dev->parse_report = dualsense_parse_report;
+       ps_dev->remove = dualsense_remove;
        INIT_WORK(&ds->output_worker, dualsense_output_worker);
+       ds->output_worker_initialized = true;
        hid_set_drvdata(hdev, ds);
 
        max_output_report_size = sizeof(struct dualsense_output_report_bt);
@@ -1320,6 +1368,21 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
                return ERR_PTR(ret);
        }
 
+       /* Original DualSense firmware simulated classic controller rumble through
+        * its new haptics hardware. It felt different from the classic rumble users
+        * were used to. Since then, new firmware versions were introduced to change
+        * this and make the new 'v2' behavior the default on PlayStation and other
+        * platforms. The original DualSense requires firmware as bundled with PS5
+        * software released in 2021; the DualSense Edge supports it out of the box.
+        * Both devices also support the old mode, but it is not really used.
+        */
+       if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
+               /* Feature version 2.21 introduced new vibration method. */
+               ds->use_vibration_v2 = ds->update_version >= DS_FEATURE_VERSION(2, 21);
+       } else if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
+               ds->use_vibration_v2 = true;
+       }
+
        ret = ps_devices_list_add(ps_dev);
        if (ret)
                return ERR_PTR(ret);
@@ -1436,7 +1499,8 @@ static int ps_probe(struct hid_device *hdev, const struct hid_device_id *id)
                goto err_stop;
        }
 
-       if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
+       if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER ||
+               hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
                dev = dualsense_create(hdev);
                if (IS_ERR(dev)) {
                        hid_err(hdev, "Failed to create dualsense.\n");
@@ -1461,6 +1525,9 @@ static void ps_remove(struct hid_device *hdev)
        ps_devices_list_remove(dev);
        ps_device_release_player_id(dev);
 
+       if (dev->remove)
+               dev->remove(dev);
+
        hid_hw_close(hdev);
        hid_hw_stop(hdev);
 }
@@ -1468,6 +1535,8 @@ static void ps_remove(struct hid_device *hdev)
 static const struct hid_device_id ps_devices[] = {
        { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
        { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
+       { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
+       { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
        { }
 };
 MODULE_DEVICE_TABLE(hid, ps_devices);
index 70f602c..50e1c71 100644 (file)
@@ -620,6 +620,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
        { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) },
        { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) },
        { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) },
+       { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) },
 #endif
 #if IS_ENABLED(CONFIG_HID_SAMSUNG)
        { HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },
index c7bf14c..b84e975 100644 (file)
@@ -187,6 +187,8 @@ static const struct hid_device_id saitek_devices[] = {
                .driver_data = SAITEK_RELEASE_MODE_RAT7 },
        { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7),
                .driver_data = SAITEK_RELEASE_MODE_MMO7 },
+       { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7),
+               .driver_data = SAITEK_RELEASE_MODE_MMO7 },
        { }
 };
 
index ccf0af5..8bf32c6 100644 (file)
@@ -46,9 +46,6 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
 #define TOTAL_ATTRS            (MAX_CORE_ATTRS + 1)
 #define MAX_CORE_DATA          (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
 
-#define TO_CORE_ID(cpu)                (cpu_data(cpu).cpu_core_id)
-#define TO_ATTR_NO(cpu)                (TO_CORE_ID(cpu) + BASE_SYSFS_ATTR_NO)
-
 #ifdef CONFIG_SMP
 #define for_each_sibling(i, cpu) \
        for_each_cpu(i, topology_sibling_cpumask(cpu))
@@ -91,6 +88,8 @@ struct temp_data {
 struct platform_data {
        struct device           *hwmon_dev;
        u16                     pkg_id;
+       u16                     cpu_map[NUM_REAL_CORES];
+       struct ida              ida;
        struct cpumask          cpumask;
        struct temp_data        *core_data[MAX_CORE_DATA];
        struct device_attribute name_attr;
@@ -441,7 +440,7 @@ static struct temp_data *init_temp_data(unsigned int cpu, int pkg_flag)
                                                        MSR_IA32_THERM_STATUS;
        tdata->is_pkg_data = pkg_flag;
        tdata->cpu = cpu;
-       tdata->cpu_core_id = TO_CORE_ID(cpu);
+       tdata->cpu_core_id = topology_core_id(cpu);
        tdata->attr_size = MAX_CORE_ATTRS;
        mutex_init(&tdata->update_lock);
        return tdata;
@@ -454,7 +453,7 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
        struct platform_data *pdata = platform_get_drvdata(pdev);
        struct cpuinfo_x86 *c = &cpu_data(cpu);
        u32 eax, edx;
-       int err, attr_no;
+       int err, index, attr_no;
 
        /*
         * Find attr number for sysfs:
@@ -462,14 +461,26 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
         * The attr number is always core id + 2
         * The Pkgtemp will always show up as temp1_*, if available
         */
-       attr_no = pkg_flag ? PKG_SYSFS_ATTR_NO : TO_ATTR_NO(cpu);
+       if (pkg_flag) {
+               attr_no = PKG_SYSFS_ATTR_NO;
+       } else {
+               index = ida_alloc(&pdata->ida, GFP_KERNEL);
+               if (index < 0)
+                       return index;
+               pdata->cpu_map[index] = topology_core_id(cpu);
+               attr_no = index + BASE_SYSFS_ATTR_NO;
+       }
 
-       if (attr_no > MAX_CORE_DATA - 1)
-               return -ERANGE;
+       if (attr_no > MAX_CORE_DATA - 1) {
+               err = -ERANGE;
+               goto ida_free;
+       }
 
        tdata = init_temp_data(cpu, pkg_flag);
-       if (!tdata)
-               return -ENOMEM;
+       if (!tdata) {
+               err = -ENOMEM;
+               goto ida_free;
+       }
 
        /* Test if we can access the status register */
        err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx);
@@ -505,6 +516,9 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
 exit_free:
        pdata->core_data[attr_no] = NULL;
        kfree(tdata);
+ida_free:
+       if (!pkg_flag)
+               ida_free(&pdata->ida, index);
        return err;
 }
 
@@ -524,6 +538,9 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
 
        kfree(pdata->core_data[indx]);
        pdata->core_data[indx] = NULL;
+
+       if (indx >= BASE_SYSFS_ATTR_NO)
+               ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
 }
 
 static int coretemp_probe(struct platform_device *pdev)
@@ -537,6 +554,7 @@ static int coretemp_probe(struct platform_device *pdev)
                return -ENOMEM;
 
        pdata->pkg_id = pdev->id;
+       ida_init(&pdata->ida);
        platform_set_drvdata(pdev, pdata);
 
        pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
@@ -553,6 +571,7 @@ static int coretemp_remove(struct platform_device *pdev)
                if (pdata->core_data[i])
                        coretemp_remove_core(pdata, i);
 
+       ida_destroy(&pdata->ida);
        return 0;
 }
 
@@ -647,7 +666,7 @@ static int coretemp_cpu_offline(unsigned int cpu)
        struct platform_device *pdev = coretemp_get_pdev(cpu);
        struct platform_data *pd;
        struct temp_data *tdata;
-       int indx, target;
+       int i, indx = -1, target;
 
        /*
         * Don't execute this on suspend as the device remove locks
@@ -660,12 +679,19 @@ static int coretemp_cpu_offline(unsigned int cpu)
        if (!pdev)
                return 0;
 
-       /* The core id is too big, just return */
-       indx = TO_ATTR_NO(cpu);
-       if (indx > MAX_CORE_DATA - 1)
+       pd = platform_get_drvdata(pdev);
+
+       for (i = 0; i < NUM_REAL_CORES; i++) {
+               if (pd->cpu_map[i] == topology_core_id(cpu)) {
+                       indx = i + BASE_SYSFS_ATTR_NO;
+                       break;
+               }
+       }
+
+       /* Too many cores and this core is not populated, just return */
+       if (indx < 0)
                return 0;
 
-       pd = platform_get_drvdata(pdev);
        tdata = pd->core_data[indx];
 
        cpumask_clear_cpu(cpu, &pd->cpumask);
index 345d883..2210aa6 100644 (file)
@@ -820,7 +820,8 @@ static const struct hid_device_id corsairpsu_idtable[] = {
        { HID_USB_DEVICE(0x1b1c, 0x1c0b) }, /* Corsair RM750i */
        { HID_USB_DEVICE(0x1b1c, 0x1c0c) }, /* Corsair RM850i */
        { HID_USB_DEVICE(0x1b1c, 0x1c0d) }, /* Corsair RM1000i */
-       { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsaur HX1000i revision 2 */
+       { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsair HX1000i revision 2 */
+       { HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i */
        { },
 };
 MODULE_DEVICE_TABLE(hid, corsairpsu_idtable);
index dc3d9a2..83a347c 100644 (file)
@@ -257,7 +257,10 @@ static int pwm_fan_update_enable(struct pwm_fan_ctx *ctx, long val)
 
        if (val == 0) {
                /* Disable pwm-fan unconditionally */
-               ret = __set_pwm(ctx, 0);
+               if (ctx->enabled)
+                       ret = __set_pwm(ctx, 0);
+               else
+                       ret = pwm_fan_switch_power(ctx, false);
                if (ret)
                        ctx->enable_mode = old_val;
                pwm_fan_update_state(ctx, 0);
index 264e780..e50f960 100644 (file)
@@ -764,6 +764,7 @@ config I2C_LPC2K
 config I2C_MLXBF
         tristate "Mellanox BlueField I2C controller"
         depends on MELLANOX_PLATFORM && ARM64
+       depends on ACPI
        select I2C_SLAVE
         help
           Enabling this option will add I2C SMBus support for Mellanox BlueField
index e68e775..1810d57 100644 (file)
@@ -2247,7 +2247,6 @@ static struct i2c_adapter_quirks mlxbf_i2c_quirks = {
        .max_write_len = MLXBF_I2C_MASTER_DATA_W_LENGTH,
 };
 
-#ifdef CONFIG_ACPI
 static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = {
        { "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] },
        { "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] },
@@ -2282,12 +2281,6 @@ static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
 
        return 0;
 }
-#else
-static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv)
-{
-       return -ENOENT;
-}
-#endif /* CONFIG_ACPI */
 
 static int mlxbf_i2c_probe(struct platform_device *pdev)
 {
@@ -2490,9 +2483,7 @@ static struct platform_driver mlxbf_i2c_driver = {
        .remove = mlxbf_i2c_remove,
        .driver = {
                .name = "i2c-mlxbf",
-#ifdef CONFIG_ACPI
                .acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids),
-#endif /* CONFIG_ACPI  */
        },
 };
 
index 72fcfb1..081f51e 100644 (file)
@@ -40,7 +40,7 @@
 #define MLXCPLD_LPCI2C_STATUS_REG      0x9
 #define MLXCPLD_LPCI2C_DATA_REG                0xa
 
-/* LPC I2C masks and parametres */
+/* LPC I2C masks and parameters */
 #define MLXCPLD_LPCI2C_RST_SEL_MASK    0x1
 #define MLXCPLD_LPCI2C_TRANS_END       0x1
 #define MLXCPLD_LPCI2C_STATUS_NACK     0x10
index 87739fb..a4b97fe 100644 (file)
@@ -639,6 +639,11 @@ static int cci_probe(struct platform_device *pdev)
        if (ret < 0)
                goto error;
 
+       pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
+       pm_runtime_use_autosuspend(dev);
+       pm_runtime_set_active(dev);
+       pm_runtime_enable(dev);
+
        for (i = 0; i < cci->data->num_masters; i++) {
                if (!cci->master[i].cci)
                        continue;
@@ -650,14 +655,12 @@ static int cci_probe(struct platform_device *pdev)
                }
        }
 
-       pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);
-       pm_runtime_use_autosuspend(dev);
-       pm_runtime_set_active(dev);
-       pm_runtime_enable(dev);
-
        return 0;
 
 error_i2c:
+       pm_runtime_disable(dev);
+       pm_runtime_dont_use_autosuspend(dev);
+
        for (--i ; i >= 0; i--) {
                if (cci->master[i].cci) {
                        i2c_del_adapter(&cci->master[i].adap);
index cfb8e04..87d5625 100644 (file)
@@ -97,7 +97,7 @@ MODULE_PARM_DESC(high_clock,
 module_param(force, bool, 0);
 MODULE_PARM_DESC(force, "Forcibly enable the SIS630. DANGEROUS!");
 
-/* SMBus base adress */
+/* SMBus base address */
 static unsigned short smbus_base;
 
 /* supported chips */
index b3fe6b2..277a024 100644 (file)
@@ -920,6 +920,7 @@ static struct platform_driver xiic_i2c_driver = {
 
 module_platform_driver(xiic_i2c_driver);
 
+MODULE_ALIAS("platform:" DRIVER_NAME);
 MODULE_AUTHOR("info@mocean-labs.com");
 MODULE_DESCRIPTION("Xilinx I2C bus driver");
 MODULE_LICENSE("GPL v2");
index 7850287..351c81a 100644 (file)
@@ -1379,6 +1379,9 @@ static int i3c_master_reattach_i3c_dev(struct i3c_dev_desc *dev,
                i3c_bus_set_addr_slot_status(&master->bus,
                                             dev->info.dyn_addr,
                                             I3C_ADDR_SLOT_I3C_DEV);
+               if (old_dyn_addr)
+                       i3c_bus_set_addr_slot_status(&master->bus, old_dyn_addr,
+                                                    I3C_ADDR_SLOT_FREE);
        }
 
        if (master->ops->reattach_i3c_dev) {
@@ -1908,10 +1911,6 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
                i3c_master_free_i3c_dev(olddev);
        }
 
-       ret = i3c_master_reattach_i3c_dev(newdev, old_dyn_addr);
-       if (ret)
-               goto err_detach_dev;
-
        /*
         * Depending on our previous state, the expected dynamic address might
         * differ:
index 70da57e..cc2222b 100644 (file)
@@ -3807,7 +3807,7 @@ static int cma_alloc_any_port(enum rdma_ucm_port_space ps,
 
        inet_get_local_port_range(net, &low, &high);
        remaining = (high - low) + 1;
-       rover = prandom_u32() % remaining + low;
+       rover = prandom_u32_max(remaining) + low;
 retry:
        if (last_used_port != rover) {
                struct rdma_bind_list *bind_list;
index 14392c9..499a425 100644 (file)
@@ -734,7 +734,7 @@ static int send_connect(struct c4iw_ep *ep)
                                   &ep->com.remote_addr;
        int ret;
        enum chip_type adapter_type = ep->com.dev->rdev.lldi.adapter_type;
-       u32 isn = (prandom_u32() & ~7UL) - 1;
+       u32 isn = (get_random_u32() & ~7UL) - 1;
        struct net_device *netdev;
        u64 params;
 
@@ -2469,7 +2469,7 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
        }
 
        if (!is_t4(adapter_type)) {
-               u32 isn = (prandom_u32() & ~7UL) - 1;
+               u32 isn = (get_random_u32() & ~7UL) - 1;
 
                skb = get_skb(skb, roundup(sizeof(*rpl5), 16), GFP_KERNEL);
                rpl5 = __skb_put_zero(skb, roundup(sizeof(*rpl5), 16));
index f64e7e0..280d614 100644 (file)
@@ -54,7 +54,7 @@ u32 c4iw_id_alloc(struct c4iw_id_table *alloc)
 
        if (obj < alloc->max) {
                if (alloc->flags & C4IW_ID_TABLE_F_RANDOM)
-                       alloc->last += prandom_u32() % RANDOM_SKIP;
+                       alloc->last += prandom_u32_max(RANDOM_SKIP);
                else
                        alloc->last = obj + 1;
                if (alloc->last >= alloc->max)
@@ -85,7 +85,7 @@ int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num,
        alloc->start = start;
        alloc->flags = flags;
        if (flags & C4IW_ID_TABLE_F_RANDOM)
-               alloc->last = prandom_u32() % RANDOM_SKIP;
+               alloc->last = prandom_u32_max(RANDOM_SKIP);
        else
                alloc->last = 0;
        alloc->max = num;
index 2a7abf7..18b05ff 100644 (file)
@@ -850,7 +850,7 @@ void hfi1_kern_init_ctxt_generations(struct hfi1_ctxtdata *rcd)
        int i;
 
        for (i = 0; i < RXE_NUM_TID_FLOWS; i++) {
-               rcd->flows[i].generation = mask_generation(prandom_u32());
+               rcd->flows[i].generation = mask_generation(get_random_u32());
                kern_set_hw_flow(rcd, KERN_GENERATION_RESERVED, i);
        }
 }
index 492b122..480c062 100644 (file)
@@ -41,9 +41,8 @@ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
        u16 sport;
 
        if (!fl)
-               sport = get_random_u32() %
-                       (IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 -
-                        IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) +
+               sport = prandom_u32_max(IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 -
+                                       IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) +
                        IB_ROCE_UDP_ENCAP_VALID_PORT_MIN;
        else
                sport = rdma_flow_label_to_udp_sport(fl);
index d13ecbd..a37cfac 100644 (file)
@@ -96,7 +96,7 @@ static void __propagate_pkey_ev(struct mlx4_ib_dev *dev, int port_num,
 __be64 mlx4_ib_gen_node_guid(void)
 {
 #define NODE_GUID_HI   ((u64) (((u64)IB_OPENIB_OUI) << 40))
-       return cpu_to_be64(NODE_GUID_HI | prandom_u32());
+       return cpu_to_be64(NODE_GUID_HI | get_random_u32());
 }
 
 __be64 mlx4_ib_get_new_demux_tid(struct mlx4_ib_demux_ctx *ctx)
index ebb35b8..b610d36 100644 (file)
@@ -465,7 +465,7 @@ static int ipoib_cm_req_handler(struct ib_cm_id *cm_id,
                goto err_qp;
        }
 
-       psn = prandom_u32() & 0xffffff;
+       psn = get_random_u32() & 0xffffff;
        ret = ipoib_cm_modify_rx_qp(dev, cm_id, p->qp, psn);
        if (ret)
                goto err_modify;
index 758e1d7..8546b88 100644 (file)
@@ -1517,8 +1517,7 @@ static void rtrs_clt_err_recovery_work(struct work_struct *work)
        rtrs_clt_stop_and_destroy_conns(clt_path);
        queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
                           msecs_to_jiffies(delay_ms +
-                                           prandom_u32() %
-                                           RTRS_RECONNECT_SEED));
+                                           prandom_u32_max(RTRS_RECONNECT_SEED)));
 }
 
 static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt,
index 65856e4..d3b39d0 100644 (file)
@@ -2330,7 +2330,8 @@ static void amd_iommu_get_resv_regions(struct device *dev,
                        type = IOMMU_RESV_RESERVED;
 
                region = iommu_alloc_resv_region(entry->address_start,
-                                                length, prot, type);
+                                                length, prot, type,
+                                                GFP_KERNEL);
                if (!region) {
                        dev_err(dev, "Out of memory allocating dm-regions\n");
                        return;
@@ -2340,14 +2341,14 @@ static void amd_iommu_get_resv_regions(struct device *dev,
 
        region = iommu_alloc_resv_region(MSI_RANGE_START,
                                         MSI_RANGE_END - MSI_RANGE_START + 1,
-                                        0, IOMMU_RESV_MSI);
+                                        0, IOMMU_RESV_MSI, GFP_KERNEL);
        if (!region)
                return;
        list_add_tail(&region->list, head);
 
        region = iommu_alloc_resv_region(HT_RANGE_START,
                                         HT_RANGE_END - HT_RANGE_START + 1,
-                                        0, IOMMU_RESV_RESERVED);
+                                        0, IOMMU_RESV_RESERVED, GFP_KERNEL);
        if (!region)
                return;
        list_add_tail(&region->list, head);
index 4526575..4f4a323 100644 (file)
@@ -758,7 +758,7 @@ static void apple_dart_get_resv_regions(struct device *dev,
 
                region = iommu_alloc_resv_region(DOORBELL_ADDR,
                                                 PAGE_SIZE, prot,
-                                                IOMMU_RESV_MSI);
+                                                IOMMU_RESV_MSI, GFP_KERNEL);
                if (!region)
                        return;
 
index ba47c73..6d5df91 100644 (file)
@@ -2757,7 +2757,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,
        int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
        region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
-                                        prot, IOMMU_RESV_SW_MSI);
+                                        prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
        if (!region)
                return;
 
index 6c1114a..30dab14 100644 (file)
@@ -1534,7 +1534,7 @@ static void arm_smmu_get_resv_regions(struct device *dev,
        int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
        region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
-                                        prot, IOMMU_RESV_SW_MSI);
+                                        prot, IOMMU_RESV_SW_MSI, GFP_KERNEL);
        if (!region)
                return;
 
index a8b36c3..48cdcd0 100644 (file)
@@ -2410,6 +2410,7 @@ static int __init si_domain_init(int hw)
 
        if (md_domain_init(si_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
                domain_exit(si_domain);
+               si_domain = NULL;
                return -EFAULT;
        }
 
@@ -3052,6 +3053,10 @@ free_iommu:
                disable_dmar_iommu(iommu);
                free_dmar_iommu(iommu);
        }
+       if (si_domain) {
+               domain_exit(si_domain);
+               si_domain = NULL;
+       }
 
        return ret;
 }
@@ -4534,7 +4539,7 @@ static void intel_iommu_get_resv_regions(struct device *device,
        struct device *i_dev;
        int i;
 
-       down_read(&dmar_global_lock);
+       rcu_read_lock();
        for_each_rmrr_units(rmrr) {
                for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt,
                                          i, i_dev) {
@@ -4552,14 +4557,15 @@ static void intel_iommu_get_resv_regions(struct device *device,
                                IOMMU_RESV_DIRECT_RELAXABLE : IOMMU_RESV_DIRECT;
 
                        resv = iommu_alloc_resv_region(rmrr->base_address,
-                                                      length, prot, type);
+                                                      length, prot, type,
+                                                      GFP_ATOMIC);
                        if (!resv)
                                break;
 
                        list_add_tail(&resv->list, head);
                }
        }
-       up_read(&dmar_global_lock);
+       rcu_read_unlock();
 
 #ifdef CONFIG_INTEL_IOMMU_FLOPPY_WA
        if (dev_is_pci(device)) {
@@ -4567,7 +4573,8 @@ static void intel_iommu_get_resv_regions(struct device *device,
 
                if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) {
                        reg = iommu_alloc_resv_region(0, 1UL << 24, prot,
-                                                  IOMMU_RESV_DIRECT_RELAXABLE);
+                                       IOMMU_RESV_DIRECT_RELAXABLE,
+                                       GFP_KERNEL);
                        if (reg)
                                list_add_tail(&reg->list, head);
                }
@@ -4576,7 +4583,7 @@ static void intel_iommu_get_resv_regions(struct device *device,
 
        reg = iommu_alloc_resv_region(IOAPIC_RANGE_START,
                                      IOAPIC_RANGE_END - IOAPIC_RANGE_START + 1,
-                                     0, IOMMU_RESV_MSI);
+                                     0, IOMMU_RESV_MSI, GFP_KERNEL);
        if (!reg)
                return;
        list_add_tail(&reg->list, head);
index 4893c24..65a3b3d 100644 (file)
@@ -504,7 +504,7 @@ static int iommu_insert_resv_region(struct iommu_resv_region *new,
        LIST_HEAD(stack);
 
        nr = iommu_alloc_resv_region(new->start, new->length,
-                                    new->prot, new->type);
+                                    new->prot, new->type, GFP_KERNEL);
        if (!nr)
                return -ENOMEM;
 
@@ -2579,11 +2579,12 @@ EXPORT_SYMBOL(iommu_put_resv_regions);
 
 struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
                                                  size_t length, int prot,
-                                                 enum iommu_resv_type type)
+                                                 enum iommu_resv_type type,
+                                                 gfp_t gfp)
 {
        struct iommu_resv_region *region;
 
-       region = kzalloc(sizeof(*region), GFP_KERNEL);
+       region = kzalloc(sizeof(*region), gfp);
        if (!region)
                return NULL;
 
index 5a4e00e..2ab2ecf 100644 (file)
@@ -917,7 +917,8 @@ static void mtk_iommu_get_resv_regions(struct device *dev,
                        continue;
 
                region = iommu_alloc_resv_region(resv->iova_base, resv->size,
-                                                prot, IOMMU_RESV_RESERVED);
+                                                prot, IOMMU_RESV_RESERVED,
+                                                GFP_KERNEL);
                if (!region)
                        return;
 
index b7c2280..8b1b5c2 100644 (file)
@@ -490,11 +490,13 @@ static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
                fallthrough;
        case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
                region = iommu_alloc_resv_region(start, size, 0,
-                                                IOMMU_RESV_RESERVED);
+                                                IOMMU_RESV_RESERVED,
+                                                GFP_KERNEL);
                break;
        case VIRTIO_IOMMU_RESV_MEM_T_MSI:
                region = iommu_alloc_resv_region(start, size, prot,
-                                                IOMMU_RESV_MSI);
+                                                IOMMU_RESV_MSI,
+                                                GFP_KERNEL);
                break;
        }
        if (!region)
@@ -909,7 +911,8 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
         */
        if (!msi) {
                msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
-                                             prot, IOMMU_RESV_SW_MSI);
+                                             prot, IOMMU_RESV_SW_MSI,
+                                             GFP_KERNEL);
                if (!msi)
                        return;
 
index 00aecd6..a7e052c 100644 (file)
@@ -101,6 +101,7 @@ struct pca963x_led {
        struct pca963x *chip;
        struct led_classdev led_cdev;
        int led_num; /* 0 .. 15 potentially */
+       bool blinking;
        u8 gdc;
        u8 gfrq;
 };
@@ -129,12 +130,21 @@ static int pca963x_brightness(struct pca963x_led *led,
 
        switch (brightness) {
        case LED_FULL:
-               val = (ledout & ~mask) | (PCA963X_LED_ON << shift);
+               if (led->blinking) {
+                       val = (ledout & ~mask) | (PCA963X_LED_GRP_PWM << shift);
+                       ret = i2c_smbus_write_byte_data(client,
+                                               PCA963X_PWM_BASE +
+                                               led->led_num,
+                                               LED_FULL);
+               } else {
+                       val = (ledout & ~mask) | (PCA963X_LED_ON << shift);
+               }
                ret = i2c_smbus_write_byte_data(client, ledout_addr, val);
                break;
        case LED_OFF:
                val = ledout & ~mask;
                ret = i2c_smbus_write_byte_data(client, ledout_addr, val);
+               led->blinking = false;
                break;
        default:
                ret = i2c_smbus_write_byte_data(client,
@@ -144,7 +154,11 @@ static int pca963x_brightness(struct pca963x_led *led,
                if (ret < 0)
                        return ret;
 
-               val = (ledout & ~mask) | (PCA963X_LED_PWM << shift);
+               if (led->blinking)
+                       val = (ledout & ~mask) | (PCA963X_LED_GRP_PWM << shift);
+               else
+                       val = (ledout & ~mask) | (PCA963X_LED_PWM << shift);
+
                ret = i2c_smbus_write_byte_data(client, ledout_addr, val);
                break;
        }
@@ -181,6 +195,7 @@ static void pca963x_blink(struct pca963x_led *led)
        }
 
        mutex_unlock(&led->chip->mutex);
+       led->blinking = true;
 }
 
 static int pca963x_power_state(struct pca963x_led *led)
@@ -275,6 +290,8 @@ static int pca963x_blink_set(struct led_classdev *led_cdev,
        led->gfrq = gfrq;
 
        pca963x_blink(led);
+       led->led_cdev.brightness = LED_FULL;
+       pca963x_led_set(led_cdev, LED_FULL);
 
        *delay_on = time_on;
        *delay_off = time_off;
@@ -337,6 +354,7 @@ static int pca963x_register_leds(struct i2c_client *client,
                led->led_cdev.brightness_set_blocking = pca963x_led_set;
                if (hw_blink)
                        led->led_cdev.blink_set = pca963x_blink_set;
+               led->blinking = false;
 
                init_data.fwnode = child;
                /* for backwards compatibility */
index f2c5a7e..3427555 100644 (file)
@@ -401,7 +401,7 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
        }
 
        if (bypass_torture_test(dc)) {
-               if ((get_random_int() & 3) == 3)
+               if (prandom_u32_max(4) == 3)
                        goto skip;
                else
                        goto rescale;
index 09c7ed2..9c5ef81 100644 (file)
@@ -795,7 +795,8 @@ static void __make_buffer_clean(struct dm_buffer *b)
 {
        BUG_ON(b->hold_count);
 
-       if (!b->state)  /* fast case */
+       /* smp_load_acquire() pairs with read_endio()'s smp_mb__before_atomic() */
+       if (!smp_load_acquire(&b->state))       /* fast case */
                return;
 
        wait_on_bit_io(&b->state, B_READING, TASK_UNINTERRUPTIBLE);
@@ -816,7 +817,7 @@ static struct dm_buffer *__get_unclaimed_buffer(struct dm_bufio_client *c)
                BUG_ON(test_bit(B_DIRTY, &b->state));
 
                if (static_branch_unlikely(&no_sleep_enabled) && c->no_sleep &&
-                   unlikely(test_bit(B_READING, &b->state)))
+                   unlikely(test_bit_acquire(B_READING, &b->state)))
                        continue;
 
                if (!b->hold_count) {
@@ -1058,7 +1059,7 @@ found_buffer:
         * If the user called both dm_bufio_prefetch and dm_bufio_get on
         * the same buffer, it would deadlock if we waited.
         */
-       if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
+       if (nf == NF_GET && unlikely(test_bit_acquire(B_READING, &b->state)))
                return NULL;
 
        b->hold_count++;
@@ -1218,7 +1219,7 @@ void dm_bufio_release(struct dm_buffer *b)
                 * invalid buffer.
                 */
                if ((b->read_error || b->write_error) &&
-                   !test_bit(B_READING, &b->state) &&
+                   !test_bit_acquire(B_READING, &b->state) &&
                    !test_bit(B_WRITING, &b->state) &&
                    !test_bit(B_DIRTY, &b->state)) {
                        __unlink_buffer(b);
@@ -1479,7 +1480,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_release_move);
 
 static void forget_buffer_locked(struct dm_buffer *b)
 {
-       if (likely(!b->hold_count) && likely(!b->state)) {
+       if (likely(!b->hold_count) && likely(!smp_load_acquire(&b->state))) {
                __unlink_buffer(b);
                __free_buffer_wake(b);
        }
@@ -1639,7 +1640,7 @@ static bool __try_evict_buffer(struct dm_buffer *b, gfp_t gfp)
 {
        if (!(gfp & __GFP_FS) ||
            (static_branch_unlikely(&no_sleep_enabled) && b->c->no_sleep)) {
-               if (test_bit(B_READING, &b->state) ||
+               if (test_bit_acquire(B_READING, &b->state) ||
                    test_bit(B_WRITING, &b->state) ||
                    test_bit(B_DIRTY, &b->state))
                        return false;
index c05fc34..06eb31a 100644 (file)
@@ -166,7 +166,7 @@ struct dm_cache_policy_type {
        struct dm_cache_policy_type *real;
 
        /*
-        * Policies may store a hint for each each cache block.
+        * Policies may store a hint for each cache block.
         * Currently the size of this hint must be 0 or 4 bytes but we
         * expect to relax this in future.
         */
index 811b0a5..2f1cc66 100644 (file)
@@ -2035,7 +2035,7 @@ static void disable_passdown_if_not_supported(struct clone *clone)
                reason = "max discard sectors smaller than a region";
 
        if (reason) {
-               DMWARN("Destination device (%pd) %s: Disabling discard passdown.",
+               DMWARN("Destination device (%pg) %s: Disabling discard passdown.",
                       dest_dev, reason);
                clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
        }
index 98976aa..6b3f867 100644 (file)
@@ -434,10 +434,10 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
                hc = __get_name_cell(new);
 
        if (hc) {
-               DMWARN("Unable to change %s on mapped device %s to one that "
-                      "already exists: %s",
-                      change_uuid ? "uuid" : "name",
-                      param->name, new);
+               DMERR("Unable to change %s on mapped device %s to one that "
+                     "already exists: %s",
+                     change_uuid ? "uuid" : "name",
+                     param->name, new);
                dm_put(hc->md);
                up_write(&_hash_lock);
                kfree(new_data);
@@ -449,8 +449,8 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
         */
        hc = __get_name_cell(param->name);
        if (!hc) {
-               DMWARN("Unable to rename non-existent device, %s to %s%s",
-                      param->name, change_uuid ? "uuid " : "", new);
+               DMERR("Unable to rename non-existent device, %s to %s%s",
+                     param->name, change_uuid ? "uuid " : "", new);
                up_write(&_hash_lock);
                kfree(new_data);
                return ERR_PTR(-ENXIO);
@@ -460,9 +460,9 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
         * Does this device already have a uuid?
         */
        if (change_uuid && hc->uuid) {
-               DMWARN("Unable to change uuid of mapped device %s to %s "
-                      "because uuid is already set to %s",
-                      param->name, new, hc->uuid);
+               DMERR("Unable to change uuid of mapped device %s to %s "
+                     "because uuid is already set to %s",
+                     param->name, new, hc->uuid);
                dm_put(hc->md);
                up_write(&_hash_lock);
                kfree(new_data);
@@ -750,7 +750,7 @@ static int get_target_version(struct file *filp, struct dm_ioctl *param, size_t
 static int check_name(const char *name)
 {
        if (strchr(name, '/')) {
-               DMWARN("invalid device name");
+               DMERR("invalid device name");
                return -EINVAL;
        }
 
@@ -773,7 +773,7 @@ static struct dm_table *dm_get_inactive_table(struct mapped_device *md, int *src
        down_read(&_hash_lock);
        hc = dm_get_mdptr(md);
        if (!hc || hc->md != md) {
-               DMWARN("device has been removed from the dev hash table.");
+               DMERR("device has been removed from the dev hash table.");
                goto out;
        }
 
@@ -1026,7 +1026,7 @@ static int dev_rename(struct file *filp, struct dm_ioctl *param, size_t param_si
        if (new_data < param->data ||
            invalid_str(new_data, (void *) param + param_size) || !*new_data ||
            strlen(new_data) > (change_uuid ? DM_UUID_LEN - 1 : DM_NAME_LEN - 1)) {
-               DMWARN("Invalid new mapped device name or uuid string supplied.");
+               DMERR("Invalid new mapped device name or uuid string supplied.");
                return -EINVAL;
        }
 
@@ -1061,7 +1061,7 @@ static int dev_set_geometry(struct file *filp, struct dm_ioctl *param, size_t pa
 
        if (geostr < param->data ||
            invalid_str(geostr, (void *) param + param_size)) {
-               DMWARN("Invalid geometry supplied.");
+               DMERR("Invalid geometry supplied.");
                goto out;
        }
 
@@ -1069,13 +1069,13 @@ static int dev_set_geometry(struct file *filp, struct dm_ioctl *param, size_t pa
                   indata + 1, indata + 2, indata + 3, &dummy);
 
        if (x != 4) {
-               DMWARN("Unable to interpret geometry settings.");
+               DMERR("Unable to interpret geometry settings.");
                goto out;
        }
 
        if (indata[0] > 65535 || indata[1] > 255 ||
            indata[2] > 255 || indata[3] > ULONG_MAX) {
-               DMWARN("Geometry exceeds range limits.");
+               DMERR("Geometry exceeds range limits.");
                goto out;
        }
 
@@ -1387,7 +1387,7 @@ static int populate_table(struct dm_table *table,
        char *target_params;
 
        if (!param->target_count) {
-               DMWARN("populate_table: no targets specified");
+               DMERR("populate_table: no targets specified");
                return -EINVAL;
        }
 
@@ -1395,7 +1395,7 @@ static int populate_table(struct dm_table *table,
 
                r = next_target(spec, next, end, &spec, &target_params);
                if (r) {
-                       DMWARN("unable to find target");
+                       DMERR("unable to find target");
                        return r;
                }
 
@@ -1404,7 +1404,7 @@ static int populate_table(struct dm_table *table,
                                        (sector_t) spec->length,
                                        target_params);
                if (r) {
-                       DMWARN("error adding target to table");
+                       DMERR("error adding target to table");
                        return r;
                }
 
@@ -1451,8 +1451,8 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
        if (immutable_target_type &&
            (immutable_target_type != dm_table_get_immutable_target_type(t)) &&
            !dm_table_get_wildcard_target(t)) {
-               DMWARN("can't replace immutable target type %s",
-                      immutable_target_type->name);
+               DMERR("can't replace immutable target type %s",
+                     immutable_target_type->name);
                r = -EINVAL;
                goto err_unlock_md_type;
        }
@@ -1461,12 +1461,12 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
                /* setup md->queue to reflect md's type (may block) */
                r = dm_setup_md_queue(md, t);
                if (r) {
-                       DMWARN("unable to set up device queue for new table.");
+                       DMERR("unable to set up device queue for new table.");
                        goto err_unlock_md_type;
                }
        } else if (!is_valid_type(dm_get_md_type(md), dm_table_get_type(t))) {
-               DMWARN("can't change device type (old=%u vs new=%u) after initial table load.",
-                      dm_get_md_type(md), dm_table_get_type(t));
+               DMERR("can't change device type (old=%u vs new=%u) after initial table load.",
+                     dm_get_md_type(md), dm_table_get_type(t));
                r = -EINVAL;
                goto err_unlock_md_type;
        }
@@ -1477,7 +1477,7 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
        down_write(&_hash_lock);
        hc = dm_get_mdptr(md);
        if (!hc || hc->md != md) {
-               DMWARN("device has been removed from the dev hash table.");
+               DMERR("device has been removed from the dev hash table.");
                up_write(&_hash_lock);
                r = -ENXIO;
                goto err_destroy_table;
@@ -1686,19 +1686,19 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
 
        if (tmsg < (struct dm_target_msg *) param->data ||
            invalid_str(tmsg->message, (void *) param + param_size)) {
-               DMWARN("Invalid target message parameters.");
+               DMERR("Invalid target message parameters.");
                r = -EINVAL;
                goto out;
        }
 
        r = dm_split_args(&argc, &argv, tmsg->message);
        if (r) {
-               DMWARN("Failed to split target message parameters");
+               DMERR("Failed to split target message parameters");
                goto out;
        }
 
        if (!argc) {
-               DMWARN("Empty message received.");
+               DMERR("Empty message received.");
                r = -EINVAL;
                goto out_argv;
        }
@@ -1718,12 +1718,12 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
 
        ti = dm_table_find_target(table, tmsg->sector);
        if (!ti) {
-               DMWARN("Target message sector outside device.");
+               DMERR("Target message sector outside device.");
                r = -EINVAL;
        } else if (ti->type->message)
                r = ti->type->message(ti, argc, argv, result, maxlen);
        else {
-               DMWARN("Target type does not support messages");
+               DMERR("Target type does not support messages");
                r = -EINVAL;
        }
 
@@ -1814,11 +1814,11 @@ static int check_version(unsigned int cmd, struct dm_ioctl __user *user)
 
        if ((DM_VERSION_MAJOR != version[0]) ||
            (DM_VERSION_MINOR < version[1])) {
-               DMWARN("ioctl interface mismatch: "
-                      "kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
-                      DM_VERSION_MAJOR, DM_VERSION_MINOR,
-                      DM_VERSION_PATCHLEVEL,
-                      version[0], version[1], version[2], cmd);
+               DMERR("ioctl interface mismatch: "
+                     "kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
+                     DM_VERSION_MAJOR, DM_VERSION_MINOR,
+                     DM_VERSION_PATCHLEVEL,
+                     version[0], version[1], version[2], cmd);
                r = -EINVAL;
        }
 
@@ -1927,11 +1927,11 @@ static int validate_params(uint cmd, struct dm_ioctl *param)
 
        if (cmd == DM_DEV_CREATE_CMD) {
                if (!*param->name) {
-                       DMWARN("name not supplied when creating device");
+                       DMERR("name not supplied when creating device");
                        return -EINVAL;
                }
        } else if (*param->uuid && *param->name) {
-               DMWARN("only supply one of name or uuid, cmd(%u)", cmd);
+               DMERR("only supply one of name or uuid, cmd(%u)", cmd);
                return -EINVAL;
        }
 
@@ -1978,7 +1978,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 
        fn = lookup_ioctl(cmd, &ioctl_flags);
        if (!fn) {
-               DMWARN("dm_ctl_ioctl: unknown command 0x%x", command);
+               DMERR("dm_ctl_ioctl: unknown command 0x%x", command);
                return -ENOTTY;
        }
 
@@ -2203,7 +2203,7 @@ int __init dm_early_create(struct dm_ioctl *dmi,
                                        (sector_t) spec_array[i]->length,
                                        target_params_array[i]);
                if (r) {
-                       DMWARN("error adding target to table");
+                       DMERR("error adding target to table");
                        goto err_destroy_table;
                }
        }
@@ -2216,7 +2216,7 @@ int __init dm_early_create(struct dm_ioctl *dmi,
        /* setup md->queue to reflect md's type (may block) */
        r = dm_setup_md_queue(md, t);
        if (r) {
-               DMWARN("unable to set up device queue for new table.");
+               DMERR("unable to set up device queue for new table.");
                goto err_destroy_table;
        }
 
index c640be4..5426367 100644 (file)
@@ -2529,7 +2529,7 @@ static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
                 * of the "sync" directive.
                 *
                 * With reshaping capability added, we must ensure that
-                * that the "sync" directive is disallowed during the reshape.
+                * the "sync" directive is disallowed during the reshape.
                 */
                if (test_bit(__CTR_FLAG_SYNC, &rs->ctr_flags))
                        continue;
@@ -2590,7 +2590,7 @@ static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
 
 /*
  * Adjust data_offset and new_data_offset on all disk members of @rs
- * for out of place reshaping if requested by contructor
+ * for out of place reshaping if requested by constructor
  *
  * We need free space at the beginning of each raid disk for forward
  * and at the end for backward reshapes which userspace has to provide
index 3001b10..a41209a 100644 (file)
@@ -238,7 +238,7 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
                dm_requeue_original_request(tio, true);
                break;
        default:
-               DMWARN("unimplemented target endio return value: %d", r);
+               DMCRIT("unimplemented target endio return value: %d", r);
                BUG();
        }
 }
@@ -409,7 +409,7 @@ static int map_request(struct dm_rq_target_io *tio)
                dm_kill_unmapped_request(rq, BLK_STS_IOERR);
                break;
        default:
-               DMWARN("unimplemented target map return value: %d", r);
+               DMCRIT("unimplemented target map return value: %d", r);
                BUG();
        }
 
index 8326f9f..f105a71 100644 (file)
@@ -1220,7 +1220,7 @@ int dm_stats_message(struct mapped_device *md, unsigned argc, char **argv,
                return 2; /* this wasn't a stats message */
 
        if (r == -EINVAL)
-               DMWARN("Invalid parameters for message %s", argv[0]);
+               DMCRIT("Invalid parameters for message %s", argv[0]);
 
        return r;
 }
index d8034ff..078da18 100644 (file)
@@ -234,12 +234,12 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
                return 0;
 
        if ((start >= dev_size) || (start + len > dev_size)) {
-               DMWARN("%s: %pg too small for target: "
-                      "start=%llu, len=%llu, dev_size=%llu",
-                      dm_device_name(ti->table->md), bdev,
-                      (unsigned long long)start,
-                      (unsigned long long)len,
-                      (unsigned long long)dev_size);
+               DMERR("%s: %pg too small for target: "
+                     "start=%llu, len=%llu, dev_size=%llu",
+                     dm_device_name(ti->table->md), bdev,
+                     (unsigned long long)start,
+                     (unsigned long long)len,
+                     (unsigned long long)dev_size);
                return 1;
        }
 
@@ -251,10 +251,10 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
                unsigned int zone_sectors = bdev_zone_sectors(bdev);
 
                if (start & (zone_sectors - 1)) {
-                       DMWARN("%s: start=%llu not aligned to h/w zone size %u of %pg",
-                              dm_device_name(ti->table->md),
-                              (unsigned long long)start,
-                              zone_sectors, bdev);
+                       DMERR("%s: start=%llu not aligned to h/w zone size %u of %pg",
+                             dm_device_name(ti->table->md),
+                             (unsigned long long)start,
+                             zone_sectors, bdev);
                        return 1;
                }
 
@@ -268,10 +268,10 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
                 * the sector range.
                 */
                if (len & (zone_sectors - 1)) {
-                       DMWARN("%s: len=%llu not aligned to h/w zone size %u of %pg",
-                              dm_device_name(ti->table->md),
-                              (unsigned long long)len,
-                              zone_sectors, bdev);
+                       DMERR("%s: len=%llu not aligned to h/w zone size %u of %pg",
+                             dm_device_name(ti->table->md),
+                             (unsigned long long)len,
+                             zone_sectors, bdev);
                        return 1;
                }
        }
@@ -280,20 +280,20 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
                return 0;
 
        if (start & (logical_block_size_sectors - 1)) {
-               DMWARN("%s: start=%llu not aligned to h/w "
-                      "logical block size %u of %pg",
-                      dm_device_name(ti->table->md),
-                      (unsigned long long)start,
-                      limits->logical_block_size, bdev);
+               DMERR("%s: start=%llu not aligned to h/w "
+                     "logical block size %u of %pg",
+                     dm_device_name(ti->table->md),
+                     (unsigned long long)start,
+                     limits->logical_block_size, bdev);
                return 1;
        }
 
        if (len & (logical_block_size_sectors - 1)) {
-               DMWARN("%s: len=%llu not aligned to h/w "
-                      "logical block size %u of %pg",
-                      dm_device_name(ti->table->md),
-                      (unsigned long long)len,
-                      limits->logical_block_size, bdev);
+               DMERR("%s: len=%llu not aligned to h/w "
+                     "logical block size %u of %pg",
+                     dm_device_name(ti->table->md),
+                     (unsigned long long)len,
+                     limits->logical_block_size, bdev);
                return 1;
        }
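The two checks above use the classic power-of-two alignment test: `x & (size - 1)` is non-zero exactly when `x` is not a multiple of `size`. A minimal userspace sketch of the same test (function name is illustrative, not from the kernel):

```c
#include <stdbool.h>
#include <stdint.h>

/* The same power-of-two alignment test as `len & (zone_sectors - 1)`
 * above: the low bits are all zero exactly when value is a multiple of
 * size. Valid only for power-of-two sizes, as h/w block and zone sizes
 * are. */
static bool is_aligned_pow2(uint64_t value, uint32_t size)
{
    return (value & ((uint64_t)size - 1)) == 0;
}
```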
 
@@ -434,8 +434,8 @@ void dm_put_device(struct dm_target *ti, struct dm_dev *d)
                }
        }
        if (!found) {
-               DMWARN("%s: device %s not in table devices list",
-                      dm_device_name(ti->table->md), d->name);
+               DMERR("%s: device %s not in table devices list",
+                     dm_device_name(ti->table->md), d->name);
                return;
        }
        if (refcount_dec_and_test(&dd->count)) {
@@ -618,12 +618,12 @@ static int validate_hardware_logical_block_alignment(struct dm_table *t,
        }
 
        if (remaining) {
-               DMWARN("%s: table line %u (start sect %llu len %llu) "
-                      "not aligned to h/w logical block size %u",
-                      dm_device_name(t->md), i,
-                      (unsigned long long) ti->begin,
-                      (unsigned long long) ti->len,
-                      limits->logical_block_size);
+               DMERR("%s: table line %u (start sect %llu len %llu) "
+                     "not aligned to h/w logical block size %u",
+                     dm_device_name(t->md), i,
+                     (unsigned long long) ti->begin,
+                     (unsigned long long) ti->len,
+                     limits->logical_block_size);
                return -EINVAL;
        }
 
@@ -1008,7 +1008,7 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
        struct dm_md_mempools *pools;
 
        if (unlikely(type == DM_TYPE_NONE)) {
-               DMWARN("no table type is set, can't allocate mempools");
+               DMERR("no table type is set, can't allocate mempools");
                return -EINVAL;
        }
 
@@ -1112,7 +1112,7 @@ static bool integrity_profile_exists(struct gendisk *disk)
  * Get a disk whose integrity profile reflects the table's profile.
  * Returns NULL if integrity support was inconsistent or unavailable.
  */
-static struct gendisk * dm_table_get_integrity_disk(struct dm_table *t)
+static struct gendisk *dm_table_get_integrity_disk(struct dm_table *t)
 {
        struct list_head *devices = dm_table_get_devices(t);
        struct dm_dev_internal *dd = NULL;
@@ -1185,10 +1185,10 @@ static int dm_table_register_integrity(struct dm_table *t)
         * profile the new profile should not conflict.
         */
        if (blk_integrity_compare(dm_disk(md), template_disk) < 0) {
-               DMWARN("%s: conflict with existing integrity profile: "
-                      "%s profile mismatch",
-                      dm_device_name(t->md),
-                      template_disk->disk_name);
+               DMERR("%s: conflict with existing integrity profile: "
+                     "%s profile mismatch",
+                     dm_device_name(t->md),
+                     template_disk->disk_name);
                return 1;
        }
 
@@ -1327,7 +1327,7 @@ static int dm_table_construct_crypto_profile(struct dm_table *t)
        if (t->md->queue &&
            !blk_crypto_has_capabilities(profile,
                                         t->md->queue->crypto_profile)) {
-               DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
+               DMERR("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
                dm_destroy_crypto_profile(profile);
                return -EINVAL;
        }
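The hunks above promote table-validation failures from DMWARN to DMERR (and, later in the series, BUG()-adjacent paths to DMCRIT). The DM logging macros wrap pr_warn()/pr_err()/pr_crit(), which follow the syslog convention that a numerically lower level is more severe. A toy sketch of that ordering (enum names are illustrative):

```c
/* Syslog severity numbers (lower = more severe), which the kernel's
 * printk levels follow: DMCRIT -> 2, DMERR -> 3, DMWARN -> 4. */
enum log_level { LVL_CRIT = 2, LVL_ERR = 3, LVL_WARNING = 4 };

/* Nonzero when a is more severe than b. */
static int more_severe(enum log_level a, enum log_level b)
{
    return a < b;
}
```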
index 8a00cc4..ccf5b85 100644
@@ -1401,14 +1401,16 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 
        /* WQ_UNBOUND greatly improves performance when running on ramdisk */
        wq_flags = WQ_MEM_RECLAIM | WQ_UNBOUND;
-       if (v->use_tasklet) {
-               /*
-                * Allow verify_wq to preempt softirq since verification in
-                * tasklet will fall-back to using it for error handling
-                * (or if the bufio cache doesn't have required hashes).
-                */
-               wq_flags |= WQ_HIGHPRI;
-       }
+       /*
+        * Using WQ_HIGHPRI improves throughput and completion latency by
+        * reducing wait times when reading from a dm-verity device.
+        *
+        * Also as required for the "try_verify_in_tasklet" feature: WQ_HIGHPRI
+        * allows verify_wq to preempt softirq since verification in tasklet
+        * will fall-back to using it for error handling (or if the bufio cache
+        * doesn't have required hashes).
+        */
+       wq_flags |= WQ_HIGHPRI;
        v->verify_wq = alloc_workqueue("kverityd", wq_flags, num_online_cpus());
        if (!v->verify_wq) {
                ti->error = "Cannot allocate workqueue";
index 60549b6..95a1ee3 100644
@@ -864,7 +864,7 @@ int dm_set_geometry(struct mapped_device *md, struct hd_geometry *geo)
        sector_t sz = (sector_t)geo->cylinders * geo->heads * geo->sectors;
 
        if (geo->start > sz) {
-               DMWARN("Start sector is beyond the geometry limits.");
+               DMERR("Start sector is beyond the geometry limits.");
                return -EINVAL;
        }
 
@@ -1149,7 +1149,7 @@ static void clone_endio(struct bio *bio)
                        /* The target will handle the io */
                        return;
                default:
-                       DMWARN("unimplemented target endio return value: %d", r);
+                       DMCRIT("unimplemented target endio return value: %d", r);
                        BUG();
                }
        }
@@ -1455,7 +1455,7 @@ static void __map_bio(struct bio *clone)
                        dm_io_dec_pending(io, BLK_STS_DM_REQUEUE);
                break;
        default:
-               DMWARN("unimplemented target map return value: %d", r);
+               DMCRIT("unimplemented target map return value: %d", r);
                BUG();
        }
 }
@@ -2005,7 +2005,7 @@ static struct mapped_device *alloc_dev(int minor)
 
        md = kvzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id);
        if (!md) {
-               DMWARN("unable to allocate device, out of memory.");
+               DMERR("unable to allocate device, out of memory.");
                return NULL;
        }
 
@@ -2065,7 +2065,6 @@ static struct mapped_device *alloc_dev(int minor)
        md->disk->minors = 1;
        md->disk->flags |= GENHD_FL_NO_PART;
        md->disk->fops = &dm_blk_dops;
-       md->disk->queue = md->queue;
        md->disk->private_data = md;
        sprintf(md->disk->disk_name, "dm-%d", minor);
 
index 79c7333..832d856 100644
@@ -2994,7 +2994,7 @@ static int r5l_load_log(struct r5l_log *log)
        }
 create:
        if (create_super) {
-               log->last_cp_seq = prandom_u32();
+               log->last_cp_seq = get_random_u32();
                cp = 0;
                r5l_log_write_empty_meta_block(log, cp, log->last_cp_seq);
                /*
index ba6592b..283b78b 100644
@@ -24,7 +24,7 @@ if MEDIA_SUPPORT
 
 config MEDIA_SUPPORT_FILTER
        bool "Filter media drivers"
-       default y if !EMBEDDED && !EXPERT
+       default y if !EXPERT
        help
           Configuring the media subsystem can be complex, as there are
           hundreds of drivers and other config options.
index 41a7929..4f5ab3c 100644
@@ -1027,6 +1027,7 @@ static const u8 cec_msg_size[256] = {
        [CEC_MSG_REPORT_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
        [CEC_MSG_REQUEST_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED,
        [CEC_MSG_SET_SYSTEM_AUDIO_MODE] = 3 | BOTH,
+       [CEC_MSG_SET_AUDIO_VOLUME_LEVEL] = 3 | DIRECTED,
        [CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST] = 2 | DIRECTED,
        [CEC_MSG_SYSTEM_AUDIO_MODE_STATUS] = 3 | DIRECTED,
        [CEC_MSG_SET_AUDIO_RATE] = 3 | DIRECTED,
index 3b583ed..6ebedc7 100644
@@ -44,6 +44,8 @@ static void handle_cec_message(struct cros_ec_cec *cros_ec_cec)
        uint8_t *cec_message = cros_ec->event_data.data.cec_message;
        unsigned int len = cros_ec->event_size;
 
+       if (len > CEC_MAX_MSG_SIZE)
+               len = CEC_MAX_MSG_SIZE;
        cros_ec_cec->rx_msg.len = len;
        memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);
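The added clamp above bounds the reported length before the memcpy() into the fixed-size rx_msg buffer. A userspace sketch of the same defensive copy (MSG_MAX stands in for CEC_MAX_MSG_SIZE, which is 16 in the uapi headers; the helper name is hypothetical):

```c
#include <stddef.h>
#include <string.h>

#define MSG_MAX 16 /* stands in for CEC_MAX_MSG_SIZE */

/* Defensive copy as in the hunk above: clamp an externally reported
 * length to the destination buffer size before memcpy(), and return
 * the number of bytes actually copied. */
static size_t copy_clamped(unsigned char *dst, const unsigned char *src,
                           size_t len)
{
    if (len > MSG_MAX)
        len = MSG_MAX;
    memcpy(dst, src, len);
    return len;
}
```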
 
@@ -221,6 +223,8 @@ static const struct cec_dmi_match cec_dmi_match_table[] = {
        { "Google", "Moli", "0000:00:02.0", "Port B" },
        /* Google Kinox */
        { "Google", "Kinox", "0000:00:02.0", "Port B" },
+       /* Google Kuldax */
+       { "Google", "Kuldax", "0000:00:02.0", "Port B" },
 };
 
 static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
index ce9a9d9..0a30e7a 100644
@@ -115,6 +115,8 @@ static irqreturn_t s5p_cec_irq_handler(int irq, void *priv)
                                dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n");
                        cec->rx = STATE_BUSY;
                        cec->msg.len = status >> 24;
+                       if (cec->msg.len > CEC_MAX_MSG_SIZE)
+                               cec->msg.len = CEC_MAX_MSG_SIZE;
                        cec->msg.rx_status = CEC_RX_STATUS_OK;
                        s5p_cec_get_rx_buf(cec, cec->msg.len,
                                        cec->msg.msg);
index 9b7bcdc..303d02b 100644
@@ -870,7 +870,7 @@ static void precalculate_color(struct tpg_data *tpg, int k)
                g = tpg_colors[col].g;
                b = tpg_colors[col].b;
        } else if (tpg->pattern == TPG_PAT_NOISE) {
-               r = g = b = prandom_u32_max(256);
+               r = g = b = get_random_u8();
        } else if (k == TPG_COLOR_RANDOM) {
                r = g = b = tpg->qual_offset + prandom_u32_max(196);
        } else if (k >= TPG_COLOR_RAMP) {
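The hunk above can replace prandom_u32_max(256) with get_random_u8() because a value uniform in [0, 256) is exactly one random byte. The bounded-draw equivalence can be sketched in userspace, with rand() standing in for the kernel RNG:

```c
#include <stdint.h>
#include <stdlib.h>

/* A value in [0, bound); for bound == 256 this is exactly one random
 * byte, which is why get_random_u8() can replace prandom_u32_max(256)
 * above. (Modulo bias is ignored in this sketch.) */
static uint32_t bounded_random(uint32_t bound)
{
    return (uint32_t)rand() % bound;
}
```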
index 47d83e0..9807f54 100644
@@ -6660,7 +6660,7 @@ static int drxk_read_snr(struct dvb_frontend *fe, u16 *snr)
 static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
 {
        struct drxk_state *state = fe->demodulator_priv;
-       u16 err;
+       u16 err = 0;
 
        dprintk(1, "\n");
 
index c6ab531..e408049 100644
@@ -406,7 +406,6 @@ static int ar0521_set_fmt(struct v4l2_subdev *sd,
                          struct v4l2_subdev_format *format)
 {
        struct ar0521_dev *sensor = to_ar0521_dev(sd);
-       int ret = 0;
 
        ar0521_adj_fmt(&format->format);
 
@@ -423,7 +422,7 @@ static int ar0521_set_fmt(struct v4l2_subdev *sd,
        }
 
        mutex_unlock(&sensor->lock);
-       return ret;
+       return 0;
 }
 
 static int ar0521_s_ctrl(struct v4l2_ctrl *ctrl)
@@ -756,10 +755,12 @@ static int ar0521_power_on(struct device *dev)
                gpiod_set_value(sensor->reset_gpio, 0);
        usleep_range(4500, 5000); /* min 45000 clocks */
 
-       for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++)
-               if (ar0521_write_regs(sensor, initial_regs[cnt].data,
-                                     initial_regs[cnt].count))
+       for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) {
+               ret = ar0521_write_regs(sensor, initial_regs[cnt].data,
+                                       initial_regs[cnt].count);
+               if (ret)
                        goto off;
+       }
 
        ret = ar0521_write_reg(sensor, AR0521_REG_SERIAL_FORMAT,
                               AR0521_REG_SERIAL_FORMAT_MIPI |
index ee6bbbb..25bf113 100644
@@ -238,6 +238,43 @@ static int get_key_knc1(struct IR_i2c *ir, enum rc_proto *protocol,
        return 1;
 }
 
+static int get_key_geniatech(struct IR_i2c *ir, enum rc_proto *protocol,
+                            u32 *scancode, u8 *toggle)
+{
+       int i, rc;
+       unsigned char b;
+
+       /* poll IR chip */
+       for (i = 0; i < 4; i++) {
+               rc = i2c_master_recv(ir->c, &b, 1);
+               if (rc == 1)
+                       break;
+               msleep(20);
+       }
+       if (rc != 1) {
+               dev_dbg(&ir->rc->dev, "read error\n");
+               if (rc < 0)
+                       return rc;
+               return -EIO;
+       }
+
+       /* don't repeat the key */
+       if (ir->old == b)
+               return 0;
+       ir->old = b;
+
+       /* decode to RC5 */
+       b &= 0x7f;
+       b = (b - 1) / 2;
+
+       dev_dbg(&ir->rc->dev, "key %02x\n", b);
+
+       *protocol = RC_PROTO_RC5;
+       *scancode = b;
+       *toggle = ir->old >> 7;
+       return 1;
+}
+
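The decode step in get_key_geniatech() splits the raw byte into an RC5 toggle bit and a scancode. Pulled out as pure arithmetic (helper names are illustrative):

```c
#include <stdint.h>

/* The decode from get_key_geniatech() above: bit 7 of the raw byte is
 * the RC5 toggle, and the low 7 bits encode the key as
 * 2 * scancode + 1. */
static uint8_t geniatech_scancode(uint8_t raw)
{
    return (uint8_t)(((raw & 0x7f) - 1) / 2);
}

static uint8_t geniatech_toggle(uint8_t raw)
{
    return raw >> 7;
}
```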
 static int get_key_avermedia_cardbus(struct IR_i2c *ir, enum rc_proto *protocol,
                                     u32 *scancode, u8 *toggle)
 {
@@ -766,6 +803,13 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
                rc_proto    = RC_PROTO_BIT_OTHER;
                ir_codes    = RC_MAP_EMPTY;
                break;
+       case 0x33:
+               name        = "Geniatech";
+               ir->get_key = get_key_geniatech;
+               rc_proto    = RC_PROTO_BIT_RC5;
+               ir_codes    = RC_MAP_TOTAL_MEDIA_IN_HAND_02;
+               ir->old     = 0xfc;
+               break;
        case 0x6b:
                name        = "FusionHDTV";
                ir->get_key = get_key_fusionhdtv;
@@ -825,6 +869,9 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
                case IR_KBD_GET_KEY_KNC1:
                        ir->get_key = get_key_knc1;
                        break;
+               case IR_KBD_GET_KEY_GENIATECH:
+                       ir->get_key = get_key_geniatech;
+                       break;
                case IR_KBD_GET_KEY_FUSIONHDTV:
                        ir->get_key = get_key_fusionhdtv;
                        break;
index 246d8d1..20f548a 100644
@@ -8,7 +8,7 @@
 
 #include <linux/bitfield.h>
 #include <linux/delay.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
 #include <linux/module.h>
 #include <linux/of_graph.h>
index fe18e52..46d91cd 100644
@@ -633,7 +633,7 @@ static int mt9v111_hw_config(struct mt9v111_dev *mt9v111)
 
        /*
         * Set pixel integration time to the whole frame time.
-        * This value controls the the shutter delay when running with AE
+        * This value controls the shutter delay when running with AE
         * disabled. If longer than frame time, it affects the output
         * frame rate.
         */
index 1852e1c..2d74039 100644
@@ -15,6 +15,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/of_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/regulator/consumer.h>
 #include <linux/slab.h>
 #include <linux/types.h>
@@ -447,8 +448,6 @@ struct ov5640_dev {
        /* lock to protect all members below */
        struct mutex lock;
 
-       int power_count;
-
        struct v4l2_mbus_framefmt fmt;
        bool pending_fmt_change;
 
@@ -2696,39 +2695,24 @@ power_off:
        return ret;
 }
 
-/* --------------- Subdev Operations --------------- */
-
-static int ov5640_s_power(struct v4l2_subdev *sd, int on)
+static int ov5640_sensor_suspend(struct device *dev)
 {
-       struct ov5640_dev *sensor = to_ov5640_dev(sd);
-       int ret = 0;
-
-       mutex_lock(&sensor->lock);
-
-       /*
-        * If the power count is modified from 0 to != 0 or from != 0 to 0,
-        * update the power state.
-        */
-       if (sensor->power_count == !on) {
-               ret = ov5640_set_power(sensor, !!on);
-               if (ret)
-                       goto out;
-       }
+       struct v4l2_subdev *sd = dev_get_drvdata(dev);
+       struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
 
-       /* Update the power count. */
-       sensor->power_count += on ? 1 : -1;
-       WARN_ON(sensor->power_count < 0);
-out:
-       mutex_unlock(&sensor->lock);
+       return ov5640_set_power(ov5640, false);
+}
 
-       if (on && !ret && sensor->power_count == 1) {
-               /* restore controls */
-               ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
-       }
+static int ov5640_sensor_resume(struct device *dev)
+{
+       struct v4l2_subdev *sd = dev_get_drvdata(dev);
+       struct ov5640_dev *ov5640 = to_ov5640_dev(sd);
 
-       return ret;
+       return ov5640_set_power(ov5640, true);
 }
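The conversion above deletes the driver's hand-rolled power_count and delegates to runtime PM, where the PM core keeps the usage count and invokes the suspend/resume callbacks on the 1->0 and 0->1 transitions. A toy userspace model of that counting discipline (all names hypothetical, not the real pm_runtime API):

```c
#include <stdbool.h>

/* Toy model of the runtime-PM usage count the patch delegates to the
 * PM core: power on at the 0->1 transition, off at 1->0. */
struct toy_pm {
    int usage;
    bool powered;
};

static void toy_pm_get(struct toy_pm *pm)
{
    if (pm->usage++ == 0)
        pm->powered = true;  /* would call the resume callback */
}

static void toy_pm_put(struct toy_pm *pm)
{
    if (--pm->usage == 0)
        pm->powered = false; /* would call the suspend callback */
}
```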
 
+/* --------------- Subdev Operations --------------- */
+
 static int ov5640_try_frame_interval(struct ov5640_dev *sensor,
                                     struct v4l2_fract *fi,
                                     u32 width, u32 height)
@@ -3314,6 +3298,9 @@ static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
 
        /* v4l2_ctrl_lock() locks our own mutex */
 
+       if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
+               return 0;
+
        switch (ctrl->id) {
        case V4L2_CID_AUTOGAIN:
                val = ov5640_get_gain(sensor);
@@ -3329,6 +3316,8 @@ static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
                break;
        }
 
+       pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
        return 0;
 }
 
@@ -3358,9 +3347,9 @@ static int ov5640_s_ctrl(struct v4l2_ctrl *ctrl)
        /*
         * If the device is not powered up by the host driver do
         * not apply any controls to H/W at this time. Instead
-        * the controls will be restored right after power-up.
+        * the controls will be restored at start streaming time.
         */
-       if (sensor->power_count == 0)
+       if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev))
                return 0;
 
        switch (ctrl->id) {
@@ -3402,6 +3391,8 @@ static int ov5640_s_ctrl(struct v4l2_ctrl *ctrl)
                break;
        }
 
+       pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
        return ret;
 }
 
@@ -3677,6 +3668,18 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
        struct ov5640_dev *sensor = to_ov5640_dev(sd);
        int ret = 0;
 
+       if (enable) {
+               ret = pm_runtime_resume_and_get(&sensor->i2c_client->dev);
+               if (ret < 0)
+                       return ret;
+
+               ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+               if (ret) {
+                       pm_runtime_put(&sensor->i2c_client->dev);
+                       return ret;
+               }
+       }
+
        mutex_lock(&sensor->lock);
 
        if (sensor->streaming == !enable) {
@@ -3701,8 +3704,13 @@ static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
                if (!ret)
                        sensor->streaming = enable;
        }
+
 out:
        mutex_unlock(&sensor->lock);
+
+       if (!enable || ret)
+               pm_runtime_put_autosuspend(&sensor->i2c_client->dev);
+
        return ret;
 }
 
@@ -3724,7 +3732,6 @@ static int ov5640_init_cfg(struct v4l2_subdev *sd,
 }
 
 static const struct v4l2_subdev_core_ops ov5640_core_ops = {
-       .s_power = ov5640_s_power,
        .log_status = v4l2_ctrl_subdev_log_status,
        .subscribe_event = v4l2_ctrl_subdev_subscribe_event,
        .unsubscribe_event = v4l2_event_subdev_unsubscribe,
@@ -3770,26 +3777,20 @@ static int ov5640_check_chip_id(struct ov5640_dev *sensor)
        int ret = 0;
        u16 chip_id;
 
-       ret = ov5640_set_power_on(sensor);
-       if (ret)
-               return ret;
-
        ret = ov5640_read_reg16(sensor, OV5640_REG_CHIP_ID, &chip_id);
        if (ret) {
                dev_err(&client->dev, "%s: failed to read chip identifier\n",
                        __func__);
-               goto power_off;
+               return ret;
        }
 
        if (chip_id != 0x5640) {
                dev_err(&client->dev, "%s: wrong chip identifier, expected 0x5640, got 0x%x\n",
                        __func__, chip_id);
-               ret = -ENXIO;
+               return -ENXIO;
        }
 
-power_off:
-       ov5640_set_power_off(sensor);
-       return ret;
+       return 0;
 }
 
 static int ov5640_probe(struct i2c_client *client)
@@ -3880,26 +3881,43 @@ static int ov5640_probe(struct i2c_client *client)
 
        ret = ov5640_get_regulators(sensor);
        if (ret)
-               return ret;
+               goto entity_cleanup;
 
        mutex_init(&sensor->lock);
 
-       ret = ov5640_check_chip_id(sensor);
+       ret = ov5640_init_controls(sensor);
        if (ret)
                goto entity_cleanup;
 
-       ret = ov5640_init_controls(sensor);
-       if (ret)
+       ret = ov5640_sensor_resume(dev);
+       if (ret) {
+               dev_err(dev, "failed to power on\n");
                goto entity_cleanup;
+       }
+
+       pm_runtime_set_active(dev);
+       pm_runtime_get_noresume(dev);
+       pm_runtime_enable(dev);
+
+       ret = ov5640_check_chip_id(sensor);
+       if (ret)
+               goto err_pm_runtime;
 
        ret = v4l2_async_register_subdev_sensor(&sensor->sd);
        if (ret)
-               goto free_ctrls;
+               goto err_pm_runtime;
+
+       pm_runtime_set_autosuspend_delay(dev, 1000);
+       pm_runtime_use_autosuspend(dev);
+       pm_runtime_put_autosuspend(dev);
 
        return 0;
 
-free_ctrls:
+err_pm_runtime:
+       pm_runtime_put_noidle(dev);
+       pm_runtime_disable(dev);
        v4l2_ctrl_handler_free(&sensor->ctrls.handler);
+       ov5640_sensor_suspend(dev);
 entity_cleanup:
        media_entity_cleanup(&sensor->sd.entity);
        mutex_destroy(&sensor->lock);
@@ -3910,6 +3928,12 @@ static void ov5640_remove(struct i2c_client *client)
 {
        struct v4l2_subdev *sd = i2c_get_clientdata(client);
        struct ov5640_dev *sensor = to_ov5640_dev(sd);
+       struct device *dev = &client->dev;
+
+       pm_runtime_disable(dev);
+       if (!pm_runtime_status_suspended(dev))
+               ov5640_sensor_suspend(dev);
+       pm_runtime_set_suspended(dev);
 
        v4l2_async_unregister_subdev(&sensor->sd);
        media_entity_cleanup(&sensor->sd.entity);
@@ -3917,6 +3941,10 @@ static void ov5640_remove(struct i2c_client *client)
        mutex_destroy(&sensor->lock);
 }
 
+static const struct dev_pm_ops ov5640_pm_ops = {
+       SET_RUNTIME_PM_OPS(ov5640_sensor_suspend, ov5640_sensor_resume, NULL)
+};
+
 static const struct i2c_device_id ov5640_id[] = {
        {"ov5640", 0},
        {},
@@ -3933,6 +3961,7 @@ static struct i2c_driver ov5640_i2c_driver = {
        .driver = {
                .name  = "ov5640",
                .of_match_table = ov5640_dt_ids,
+               .pm = &ov5640_pm_ops,
        },
        .id_table = ov5640_id,
        .probe_new = ov5640_probe,
index a233c34..cae1866 100644
@@ -3034,11 +3034,13 @@ static int ov8865_probe(struct i2c_client *client)
                                       &rate);
        if (!ret && sensor->extclk) {
                ret = clk_set_rate(sensor->extclk, rate);
-               if (ret)
-                       return dev_err_probe(dev, ret,
-                                            "failed to set clock rate\n");
+               if (ret) {
+                       dev_err_probe(dev, ret, "failed to set clock rate\n");
+                       goto error_endpoint;
+               }
        } else if (ret && !sensor->extclk) {
-               return dev_err_probe(dev, ret, "invalid clock config\n");
+               dev_err_probe(dev, ret, "invalid clock config\n");
+               goto error_endpoint;
        }
 
        sensor->extclk_rate = rate ? rate : clk_get_rate(sensor->extclk);
index b8176a3..25020d5 100644
@@ -581,7 +581,7 @@ static void __media_device_unregister_entity(struct media_entity *entity)
        struct media_device *mdev = entity->graph_obj.mdev;
        struct media_link *link, *tmp;
        struct media_interface *intf;
-       unsigned int i;
+       struct media_pad *iter;
 
        ida_free(&mdev->entity_internal_idx, entity->internal_idx);
 
@@ -597,8 +597,8 @@ static void __media_device_unregister_entity(struct media_entity *entity)
        __media_entity_remove_links(entity);
 
        /* Remove all pads that belong to this entity */
-       for (i = 0; i < entity->num_pads; i++)
-               media_gobj_destroy(&entity->pads[i].graph_obj);
+       media_entity_for_each_pad(entity, iter)
+               media_gobj_destroy(&iter->graph_obj);
 
        /* Remove the entity */
        media_gobj_destroy(&entity->graph_obj);
@@ -610,7 +610,7 @@ int __must_check media_device_register_entity(struct media_device *mdev,
                                              struct media_entity *entity)
 {
        struct media_entity_notify *notify, *next;
-       unsigned int i;
+       struct media_pad *iter;
        int ret;
 
        if (entity->function == MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN ||
@@ -639,9 +639,8 @@ int __must_check media_device_register_entity(struct media_device *mdev,
        media_gobj_create(mdev, MEDIA_GRAPH_ENTITY, &entity->graph_obj);
 
        /* Initialize objects at the pads */
-       for (i = 0; i < entity->num_pads; i++)
-               media_gobj_create(mdev, MEDIA_GRAPH_PAD,
-                              &entity->pads[i].graph_obj);
+       media_entity_for_each_pad(entity, iter)
+               media_gobj_create(mdev, MEDIA_GRAPH_PAD, &iter->graph_obj);
 
        /* invoke entity_notify callbacks */
        list_for_each_entry_safe(notify, next, &mdev->entity_notify, list)
index afd1bd7..b8bcbc7 100644
@@ -59,10 +59,12 @@ static inline const char *link_type_name(struct media_link *link)
        }
 }
 
-__must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
-                                         int idx_max)
+__must_check int media_entity_enum_init(struct media_entity_enum *ent_enum,
+                                       struct media_device *mdev)
 {
-       idx_max = ALIGN(idx_max, BITS_PER_LONG);
+       int idx_max;
+
+       idx_max = ALIGN(mdev->entity_internal_idx_max + 1, BITS_PER_LONG);
        ent_enum->bmap = bitmap_zalloc(idx_max, GFP_KERNEL);
        if (!ent_enum->bmap)
                return -ENOMEM;
@@ -71,7 +73,7 @@ __must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
 
        return 0;
 }
-EXPORT_SYMBOL_GPL(__media_entity_enum_init);
+EXPORT_SYMBOL_GPL(media_entity_enum_init);
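The rewritten media_entity_enum_init() now computes idx_max itself, rounding entity_internal_idx_max + 1 up to a multiple of BITS_PER_LONG so the bitmap covers whole words. For a power-of-two boundary, the kernel's ALIGN() reduces to a mask trick, sketched here in userspace:

```c
#include <stdint.h>

/* Userspace equivalent of the kernel's ALIGN(x, a): round x up to the
 * next multiple of a, where a must be a power of two (BITS_PER_LONG
 * qualifies). */
static uint64_t align_up(uint64_t x, uint64_t a)
{
    return (x + a - 1) & ~(a - 1);
}
```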
 
 void media_entity_enum_cleanup(struct media_entity_enum *ent_enum)
 {
@@ -193,7 +195,8 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
                           struct media_pad *pads)
 {
        struct media_device *mdev = entity->graph_obj.mdev;
-       unsigned int i;
+       struct media_pad *iter;
+       unsigned int i = 0;
 
        if (num_pads >= MEDIA_ENTITY_MAX_PADS)
                return -E2BIG;
@@ -204,12 +207,12 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
        if (mdev)
                mutex_lock(&mdev->graph_mutex);
 
-       for (i = 0; i < num_pads; i++) {
-               pads[i].entity = entity;
-               pads[i].index = i;
+       media_entity_for_each_pad(entity, iter) {
+               iter->entity = entity;
+               iter->index = i++;
                if (mdev)
                        media_gobj_create(mdev, MEDIA_GRAPH_PAD,
-                                       &entity->pads[i].graph_obj);
+                                         &iter->graph_obj);
        }
 
        if (mdev)
@@ -223,6 +226,33 @@ EXPORT_SYMBOL_GPL(media_entity_pads_init);
  * Graph traversal
  */
 
+/*
+ * This function checks the interdependency inside the entity between @pad0
+ * and @pad1. If two pads are interdependent they are part of the same pipeline
+ * and enabling one of the pads means that the other pad will become "locked"
+ * and doesn't allow configuration changes.
+ *
+ * This function uses the &media_entity_operations.has_pad_interdep() operation
+ * to check the dependency inside the entity between @pad0 and @pad1. If the
+ * has_pad_interdep operation is not implemented, all pads of the entity are
+ * considered to be interdependent.
+ */
+static bool media_entity_has_pad_interdep(struct media_entity *entity,
+                                         unsigned int pad0, unsigned int pad1)
+{
+       if (pad0 >= entity->num_pads || pad1 >= entity->num_pads)
+               return false;
+
+       if (entity->pads[pad0].flags & entity->pads[pad1].flags &
+           (MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_SOURCE))
+               return false;
+
+       if (!entity->ops || !entity->ops->has_pad_interdep)
+               return true;
+
+       return entity->ops->has_pad_interdep(entity, pad0, pad1);
+}
+
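The helper above rules out interdependency when both pads have the same direction (both sinks or both sources), then falls back to "all pads interdependent" when the entity provides no has_pad_interdep op. The direction test is a single bitwise AND; a hedged userspace sketch with stand-in flag values (the real MEDIA_PAD_FL_* constants live in the media controller uapi):

```c
#include <stdbool.h>

#define PAD_FL_SINK   (1U << 0) /* stand-in for MEDIA_PAD_FL_SINK */
#define PAD_FL_SOURCE (1U << 1) /* stand-in for MEDIA_PAD_FL_SOURCE */

/* Two pads with the same direction (both sinks or both sources) cannot
 * be interdependent, mirroring the flags test in
 * media_entity_has_pad_interdep() above. */
static bool same_direction(unsigned int flags0, unsigned int flags1)
{
    return flags0 & flags1 & (PAD_FL_SINK | PAD_FL_SOURCE);
}
```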
 static struct media_entity *
 media_entity_other(struct media_entity *entity, struct media_link *link)
 {
@@ -367,139 +397,435 @@ struct media_entity *media_graph_walk_next(struct media_graph *graph)
 }
 EXPORT_SYMBOL_GPL(media_graph_walk_next);
 
-int media_entity_get_fwnode_pad(struct media_entity *entity,
-                               struct fwnode_handle *fwnode,
-                               unsigned long direction_flags)
+/* -----------------------------------------------------------------------------
+ * Pipeline management
+ */
+
+/*
+ * The pipeline traversal stack stores pads that are reached during graph
+ * traversal, with a list of links to be visited to continue the traversal.
+ * When a new pad is reached, an entry is pushed on the top of the stack and
+ * points to the incoming pad and the first link of the entity.
+ *
+ * To find further pads in the pipeline, the traversal algorithm follows
+ * internal pad dependencies in the entity, and then links in the graph. It
+ * does so by iterating over all links of the entity, and following enabled
+ * links that originate from a pad that is internally connected to the incoming
+ * pad, as reported by the media_entity_has_pad_interdep() function.
+ */
+
+/**
+ * struct media_pipeline_walk_entry - Entry in the pipeline traversal stack
+ *
+ * @pad: The media pad being visited
+ * @links: Links left to be visited
+ */
+struct media_pipeline_walk_entry {
+       struct media_pad *pad;
+       struct list_head *links;
+};
+
+/**
+ * struct media_pipeline_walk - State used by the media pipeline traversal
+ *                             algorithm
+ *
+ * @mdev: The media device
+ * @stack: Depth-first search stack
+ * @stack.size: Number of allocated entries in @stack.entries
+ * @stack.top: Index of the top stack entry (-1 if the stack is empty)
+ * @stack.entries: Stack entries
+ */
+struct media_pipeline_walk {
+       struct media_device *mdev;
+
+       struct {
+               unsigned int size;
+               int top;
+               struct media_pipeline_walk_entry *entries;
+       } stack;
+};
+
+#define MEDIA_PIPELINE_STACK_GROW_STEP         16
+
+static struct media_pipeline_walk_entry *
+media_pipeline_walk_top(struct media_pipeline_walk *walk)
 {
-       struct fwnode_endpoint endpoint;
-       unsigned int i;
+       return &walk->stack.entries[walk->stack.top];
+}
+
+static bool media_pipeline_walk_empty(struct media_pipeline_walk *walk)
+{
+       return walk->stack.top == -1;
+}
+
+/* Increase the stack size by MEDIA_PIPELINE_STACK_GROW_STEP elements. */
+static int media_pipeline_walk_resize(struct media_pipeline_walk *walk)
+{
+       struct media_pipeline_walk_entry *entries;
+       unsigned int new_size;
+
+       /* Safety check, to avoid stack overflows in case of bugs. */
+       if (walk->stack.size >= 256)
+               return -E2BIG;
+
+       new_size = walk->stack.size + MEDIA_PIPELINE_STACK_GROW_STEP;
+
+       entries = krealloc(walk->stack.entries,
+                          new_size * sizeof(*walk->stack.entries),
+                          GFP_KERNEL);
+       if (!entries)
+               return -ENOMEM;
+
+       walk->stack.entries = entries;
+       walk->stack.size = new_size;
+
+       return 0;
+}
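media_pipeline_walk_resize() above grows the traversal stack in fixed steps of 16 entries via krealloc(), with a 256-entry safety cap against traversal bugs. The same step-growth pattern in userspace (realloc() standing in for krealloc(); names are illustrative):

```c
#include <stdlib.h>

#define GROW_STEP 16
#define MAX_SIZE  256

/* Grow *entries by GROW_STEP elements of elem_size. Returns the new
 * element count, or 0 on allocation failure or when the safety cap is
 * reached, leaving *entries untouched in either failure case. */
static size_t grow_stack(void **entries, size_t size, size_t elem_size)
{
    void *p;

    if (size >= MAX_SIZE)
        return 0;

    p = realloc(*entries, (size + GROW_STEP) * elem_size);
    if (!p)
        return 0;

    *entries = p;
    return size + GROW_STEP;
}
```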
+
+/* Push a new entry on the stack. */
+static int media_pipeline_walk_push(struct media_pipeline_walk *walk,
+                                   struct media_pad *pad)
+{
+       struct media_pipeline_walk_entry *entry;
        int ret;
 
-       if (!entity->ops || !entity->ops->get_fwnode_pad) {
-               for (i = 0; i < entity->num_pads; i++) {
-                       if (entity->pads[i].flags & direction_flags)
-                               return i;
+       if (walk->stack.top + 1 >= walk->stack.size) {
+               ret = media_pipeline_walk_resize(walk);
+               if (ret)
+                       return ret;
+       }
+
+       walk->stack.top++;
+       entry = media_pipeline_walk_top(walk);
+       entry->pad = pad;
+       entry->links = pad->entity->links.next;
+
+       dev_dbg(walk->mdev->dev,
+               "media pipeline: pushed entry %u: '%s':%u\n",
+               walk->stack.top, pad->entity->name, pad->index);
+
+       return 0;
+}
+
+/*
+ * Move the top entry link cursor to the next link. If all links of the entry
+ * have been visited, pop the entry itself.
+ */
+static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
+{
+       struct media_pipeline_walk_entry *entry;
+
+       if (WARN_ON(walk->stack.top < 0))
+               return;
+
+       entry = media_pipeline_walk_top(walk);
+
+       if (entry->links->next == &entry->pad->entity->links) {
+               dev_dbg(walk->mdev->dev,
+                       "media pipeline: entry %u has no more links, popping\n",
+                       walk->stack.top);
+
+               walk->stack.top--;
+               return;
+       }
+
+       entry->links = entry->links->next;
+
+       dev_dbg(walk->mdev->dev,
+               "media pipeline: moved entry %u to next link\n",
+               walk->stack.top);
+}
+
+/* Free all memory allocated while walking the pipeline. */
+static void media_pipeline_walk_destroy(struct media_pipeline_walk *walk)
+{
+       kfree(walk->stack.entries);
+}
+
+/* Add a pad to the pipeline and push it to the stack. */
+static int media_pipeline_add_pad(struct media_pipeline *pipe,
+                                 struct media_pipeline_walk *walk,
+                                 struct media_pad *pad)
+{
+       struct media_pipeline_pad *ppad;
+
+       list_for_each_entry(ppad, &pipe->pads, list) {
+               if (ppad->pad == pad) {
+                       dev_dbg(pad->graph_obj.mdev->dev,
+                               "media pipeline: already contains pad '%s':%u\n",
+                               pad->entity->name, pad->index);
+                       return 0;
                }
+       }
 
-               return -ENXIO;
+       ppad = kzalloc(sizeof(*ppad), GFP_KERNEL);
+       if (!ppad)
+               return -ENOMEM;
+
+       ppad->pipe = pipe;
+       ppad->pad = pad;
+
+       list_add_tail(&ppad->list, &pipe->pads);
+
+       dev_dbg(pad->graph_obj.mdev->dev,
+               "media pipeline: added pad '%s':%u\n",
+               pad->entity->name, pad->index);
+
+       return media_pipeline_walk_push(walk, pad);
+}
+
+/* Explore the next link of the entity at the top of the stack. */
+static int media_pipeline_explore_next_link(struct media_pipeline *pipe,
+                                           struct media_pipeline_walk *walk)
+{
+       struct media_pipeline_walk_entry *entry = media_pipeline_walk_top(walk);
+       struct media_pad *pad;
+       struct media_link *link;
+       struct media_pad *local;
+       struct media_pad *remote;
+       int ret;
+
+       pad = entry->pad;
+       link = list_entry(entry->links, typeof(*link), list);
+       media_pipeline_walk_pop(walk);
+
+       dev_dbg(walk->mdev->dev,
+               "media pipeline: exploring link '%s':%u -> '%s':%u\n",
+               link->source->entity->name, link->source->index,
+               link->sink->entity->name, link->sink->index);
+
+       /* Skip links that are not enabled. */
+       if (!(link->flags & MEDIA_LNK_FL_ENABLED)) {
+               dev_dbg(walk->mdev->dev,
+                       "media pipeline: skipping link (disabled)\n");
+               return 0;
        }
 
-       ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
+       /* Get the local pad and remote pad. */
+       if (link->source->entity == pad->entity) {
+               local = link->source;
+               remote = link->sink;
+       } else {
+               local = link->sink;
+               remote = link->source;
+       }
+
+       /*
+        * Skip links whose local pad is not the incoming pad and is not
+        * connected internally in the entity to the incoming pad.
+        */
+       if (pad != local &&
+           !media_entity_has_pad_interdep(pad->entity, pad->index, local->index)) {
+               dev_dbg(walk->mdev->dev,
+                       "media pipeline: skipping link (no route)\n");
+               return 0;
+       }
+
+       /*
+        * Add the local and remote pads of the link to the pipeline and push
+        * them to the stack, if they're not already present.
+        */
+       ret = media_pipeline_add_pad(pipe, walk, local);
        if (ret)
                return ret;
 
-       ret = entity->ops->get_fwnode_pad(entity, &endpoint);
-       if (ret < 0)
+       ret = media_pipeline_add_pad(pipe, walk, remote);
+       if (ret)
                return ret;
 
-       if (ret >= entity->num_pads)
-               return -ENXIO;
+       return 0;
+}
 
-       if (!(entity->pads[ret].flags & direction_flags))
-               return -ENXIO;
+static void media_pipeline_cleanup(struct media_pipeline *pipe)
+{
+       while (!list_empty(&pipe->pads)) {
+               struct media_pipeline_pad *ppad;
 
-       return ret;
+               ppad = list_first_entry(&pipe->pads, typeof(*ppad), list);
+               list_del(&ppad->list);
+               kfree(ppad);
+       }
 }
-EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
 
-/* -----------------------------------------------------------------------------
- * Pipeline management
- */
+static int media_pipeline_populate(struct media_pipeline *pipe,
+                                  struct media_pad *pad)
+{
+       struct media_pipeline_walk walk = { };
+       struct media_pipeline_pad *ppad;
+       int ret;
+
+       /*
+        * Populate the media pipeline by walking the media graph, starting
+        * from @pad.
+        */
+       INIT_LIST_HEAD(&pipe->pads);
+       pipe->mdev = pad->graph_obj.mdev;
+
+       walk.mdev = pipe->mdev;
+       walk.stack.top = -1;
+       ret = media_pipeline_add_pad(pipe, &walk, pad);
+       if (ret)
+               goto done;
+
+       /*
+        * Use a depth-first search algorithm: as long as the stack is not
+        * empty, explore the next link of the top entry. The
+        * media_pipeline_explore_next_link() function will either move to the
+        * next link, pop the entry if fully visited, or add new entries on
+        * top.
+        */
+       while (!media_pipeline_walk_empty(&walk)) {
+               ret = media_pipeline_explore_next_link(pipe, &walk);
+               if (ret)
+                       goto done;
+       }
+
+       dev_dbg(pad->graph_obj.mdev->dev,
+               "media pipeline populated, found pads:\n");
+
+       list_for_each_entry(ppad, &pipe->pads, list)
+               dev_dbg(pad->graph_obj.mdev->dev, "- '%s':%u\n",
+                       ppad->pad->entity->name, ppad->pad->index);
+
+       WARN_ON(walk.stack.top != -1);
 
-__must_check int __media_pipeline_start(struct media_entity *entity,
+       ret = 0;
+
+done:
+       media_pipeline_walk_destroy(&walk);
+
+       if (ret)
+               media_pipeline_cleanup(pipe);
+
+       return ret;
+}
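The traversal shape of `media_pipeline_populate()` is an explicit-stack depth-first search where each stack entry carries a cursor into its node's link list, so every edge is examined exactly once. A hypothetical userspace sketch of the same shape (graph stored as `-1`-terminated adjacency lists; `walk_entry` and `dfs_count` are invented names):

```c
#include <assert.h>

#define MAX_NODES 8

struct walk_entry {
	int node;
	int next_link; /* cursor into adj[node], like entry->links */
};

/* Count the nodes reachable from @start. adj[n] is -1-terminated. */
int dfs_count(const int adj[][MAX_NODES], int start)
{
	struct walk_entry stack[MAX_NODES];
	int visited[MAX_NODES] = {0};
	int top = -1, count = 0;

	/* Like media_pipeline_add_pad() on the starting pad. */
	stack[++top] = (struct walk_entry){start, 0};
	visited[start] = 1;
	count++;

	while (top != -1) {
		struct walk_entry *e = &stack[top];
		int next = adj[e->node][e->next_link];

		if (next < 0) {       /* links exhausted: pop, like walk_pop() */
			top--;
			continue;
		}
		e->next_link++;       /* advance the per-entry link cursor */

		if (!visited[next]) { /* discover, like media_pipeline_add_pad() */
			visited[next] = 1;
			count++;
			stack[++top] = (struct walk_entry){next, 0};
		}
	}
	return count;
}
```

Keeping the link cursor inside the stack entry is the key trick: popping back to a node resumes its iteration exactly where it left off, so no edge is revisited.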
+
+__must_check int __media_pipeline_start(struct media_pad *pad,
                                        struct media_pipeline *pipe)
 {
-       struct media_device *mdev = entity->graph_obj.mdev;
-       struct media_graph *graph = &pipe->graph;
-       struct media_entity *entity_err = entity;
-       struct media_link *link;
+       struct media_device *mdev = pad->entity->graph_obj.mdev;
+       struct media_pipeline_pad *err_ppad;
+       struct media_pipeline_pad *ppad;
        int ret;
 
-       if (pipe->streaming_count) {
-               pipe->streaming_count++;
+       lockdep_assert_held(&mdev->graph_mutex);
+
+       /*
+        * If the pad is already part of a pipeline, that pipeline must
+        * be the same as the pipe given to media_pipeline_start().
+        */
+       if (WARN_ON(pad->pipe && pad->pipe != pipe))
+               return -EINVAL;
+
+       /*
+        * If the pipeline has already been started, it is guaranteed to be
+        * valid, so just increase the start count.
+        */
+       if (pipe->start_count) {
+               pipe->start_count++;
                return 0;
        }
 
-       ret = media_graph_walk_init(&pipe->graph, mdev);
+       /*
+        * Populate the pipeline. This populates the media_pipeline pads list
+        * with media_pipeline_pad instances for each pad found during graph
+        * walk.
+        */
+       ret = media_pipeline_populate(pipe, pad);
        if (ret)
                return ret;
 
-       media_graph_walk_start(&pipe->graph, entity);
+       /*
+        * Now that all the pads in the pipeline have been gathered, perform
+        * the validation steps.
+        */
+
+       list_for_each_entry(ppad, &pipe->pads, list) {
+               struct media_pad *pad = ppad->pad;
+               struct media_entity *entity = pad->entity;
+               bool has_enabled_link = false;
+               bool has_link = false;
+               struct media_link *link;
 
-       while ((entity = media_graph_walk_next(graph))) {
-               DECLARE_BITMAP(active, MEDIA_ENTITY_MAX_PADS);
-               DECLARE_BITMAP(has_no_links, MEDIA_ENTITY_MAX_PADS);
+               dev_dbg(mdev->dev, "Validating pad '%s':%u\n", pad->entity->name,
+                       pad->index);
 
-               if (entity->pipe && entity->pipe != pipe) {
-                       pr_err("Pipe active for %s. Can't start for %s\n",
-                               entity->name,
-                               entity_err->name);
+               /*
+                * 1. Ensure that the pad doesn't already belong to a different
+                * pipeline.
+                */
+               if (pad->pipe) {
+                       dev_dbg(mdev->dev, "Failed to start pipeline: pad '%s':%u busy\n",
+                               pad->entity->name, pad->index);
                        ret = -EBUSY;
                        goto error;
                }
 
-               /* Already streaming --- no need to check. */
-               if (entity->pipe)
-                       continue;
-
-               entity->pipe = pipe;
-
-               if (!entity->ops || !entity->ops->link_validate)
-                       continue;
-
-               bitmap_zero(active, entity->num_pads);
-               bitmap_fill(has_no_links, entity->num_pads);
-
+               /*
+                * 2. Validate all active links whose sink is the current pad.
+                * Validation of the source pads is performed in the context of
+                * the connected sink pad to avoid duplicating checks.
+                */
                for_each_media_entity_data_link(entity, link) {
-                       struct media_pad *pad = link->sink->entity == entity
-                                               ? link->sink : link->source;
+                       /* Skip links unrelated to the current pad. */
+                       if (link->sink != pad && link->source != pad)
+                               continue;
 
-                       /* Mark that a pad is connected by a link. */
-                       bitmap_clear(has_no_links, pad->index, 1);
+                       /* Record if the pad has links and enabled links. */
+                       if (link->flags & MEDIA_LNK_FL_ENABLED)
+                               has_enabled_link = true;
+                       has_link = true;
 
                        /*
-                        * Pads that either do not need to connect or
-                        * are connected through an enabled link are
-                        * fine.
+                        * Validate the link if it's enabled and has the
+                        * current pad as its sink.
                         */
-                       if (!(pad->flags & MEDIA_PAD_FL_MUST_CONNECT) ||
-                           link->flags & MEDIA_LNK_FL_ENABLED)
-                               bitmap_set(active, pad->index, 1);
+                       if (!(link->flags & MEDIA_LNK_FL_ENABLED))
+                               continue;
 
-                       /*
-                        * Link validation will only take place for
-                        * sink ends of the link that are enabled.
-                        */
-                       if (link->sink != pad ||
-                           !(link->flags & MEDIA_LNK_FL_ENABLED))
+                       if (link->sink != pad)
+                               continue;
+
+                       if (!entity->ops || !entity->ops->link_validate)
                                continue;
 
                        ret = entity->ops->link_validate(link);
-                       if (ret < 0 && ret != -ENOIOCTLCMD) {
-                               dev_dbg(entity->graph_obj.mdev->dev,
-                                       "link validation failed for '%s':%u -> '%s':%u, error %d\n",
+                       if (ret) {
+                               dev_dbg(mdev->dev,
+                                       "Link '%s':%u -> '%s':%u failed validation: %d\n",
                                        link->source->entity->name,
                                        link->source->index,
-                                       entity->name, link->sink->index, ret);
+                                       link->sink->entity->name,
+                                       link->sink->index, ret);
                                goto error;
                        }
-               }
 
-               /* Either no links or validated links are fine. */
-               bitmap_or(active, active, has_no_links, entity->num_pads);
+                       dev_dbg(mdev->dev,
+                               "Link '%s':%u -> '%s':%u is valid\n",
+                               link->source->entity->name,
+                               link->source->index,
+                               link->sink->entity->name,
+                               link->sink->index);
+               }
 
-               if (!bitmap_full(active, entity->num_pads)) {
+               /*
+                * 3. If the pad has the MEDIA_PAD_FL_MUST_CONNECT flag set,
+                * ensure that it has either no link or an enabled link.
+                */
+               if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) && has_link &&
+                   !has_enabled_link) {
+                       dev_dbg(mdev->dev,
+                               "Pad '%s':%u must be connected by an enabled link\n",
+                               pad->entity->name, pad->index);
                        ret = -ENOLINK;
-                       dev_dbg(entity->graph_obj.mdev->dev,
-                               "'%s':%u must be connected by an enabled link\n",
-                               entity->name,
-                               (unsigned)find_first_zero_bit(
-                                       active, entity->num_pads));
                        goto error;
                }
+
+               /* Validation passed, store the pipe pointer in the pad. */
+               pad->pipe = pipe;
        }
 
-       pipe->streaming_count++;
+       pipe->start_count++;
 
        return 0;
 
@@ -508,42 +834,37 @@ error:
        * Link validation on the graph failed. Revert what was done and
        * return the error.
         */
-       media_graph_walk_start(graph, entity_err);
 
-       while ((entity_err = media_graph_walk_next(graph))) {
-               entity_err->pipe = NULL;
-
-               /*
-                * We haven't started entities further than this so we quit
-                * here.
-                */
-               if (entity_err == entity)
+       list_for_each_entry(err_ppad, &pipe->pads, list) {
+               if (err_ppad == ppad)
                        break;
+
+               err_ppad->pad->pipe = NULL;
        }
 
-       media_graph_walk_cleanup(graph);
+       media_pipeline_cleanup(pipe);
 
        return ret;
 }
 EXPORT_SYMBOL_GPL(__media_pipeline_start);
 
-__must_check int media_pipeline_start(struct media_entity *entity,
+__must_check int media_pipeline_start(struct media_pad *pad,
                                      struct media_pipeline *pipe)
 {
-       struct media_device *mdev = entity->graph_obj.mdev;
+       struct media_device *mdev = pad->entity->graph_obj.mdev;
        int ret;
 
        mutex_lock(&mdev->graph_mutex);
-       ret = __media_pipeline_start(entity, pipe);
+       ret = __media_pipeline_start(pad, pipe);
        mutex_unlock(&mdev->graph_mutex);
        return ret;
 }
 EXPORT_SYMBOL_GPL(media_pipeline_start);
 
-void __media_pipeline_stop(struct media_entity *entity)
+void __media_pipeline_stop(struct media_pad *pad)
 {
-       struct media_graph *graph = &entity->pipe->graph;
-       struct media_pipeline *pipe = entity->pipe;
+       struct media_pipeline *pipe = pad->pipe;
+       struct media_pipeline_pad *ppad;
 
        /*
         * If the following check fails, the driver has performed an
@@ -552,29 +873,65 @@ void __media_pipeline_stop(struct media_entity *entity)
        if (WARN_ON(!pipe))
                return;
 
-       if (--pipe->streaming_count)
+       if (--pipe->start_count)
                return;
 
-       media_graph_walk_start(graph, entity);
-
-       while ((entity = media_graph_walk_next(graph)))
-               entity->pipe = NULL;
+       list_for_each_entry(ppad, &pipe->pads, list)
+               ppad->pad->pipe = NULL;
 
-       media_graph_walk_cleanup(graph);
+       media_pipeline_cleanup(pipe);
 
+       if (pipe->allocated)
+               kfree(pipe);
 }
 EXPORT_SYMBOL_GPL(__media_pipeline_stop);
 
-void media_pipeline_stop(struct media_entity *entity)
+void media_pipeline_stop(struct media_pad *pad)
 {
-       struct media_device *mdev = entity->graph_obj.mdev;
+       struct media_device *mdev = pad->entity->graph_obj.mdev;
 
        mutex_lock(&mdev->graph_mutex);
-       __media_pipeline_stop(entity);
+       __media_pipeline_stop(pad);
        mutex_unlock(&mdev->graph_mutex);
 }
 EXPORT_SYMBOL_GPL(media_pipeline_stop);
 
+__must_check int media_pipeline_alloc_start(struct media_pad *pad)
+{
+       struct media_device *mdev = pad->entity->graph_obj.mdev;
+       struct media_pipeline *new_pipe = NULL;
+       struct media_pipeline *pipe;
+       int ret;
+
+       mutex_lock(&mdev->graph_mutex);
+
+       /*
+        * Is the pad already part of a pipeline? If not, we need to allocate
+        * a pipe.
+        */
+       pipe = media_pad_pipeline(pad);
+       if (!pipe) {
+               new_pipe = kzalloc(sizeof(*new_pipe), GFP_KERNEL);
+               if (!new_pipe) {
+                       ret = -ENOMEM;
+                       goto out;
+               }
+
+               pipe = new_pipe;
+               pipe->allocated = true;
+       }
+
+       ret = __media_pipeline_start(pad, pipe);
+       if (ret)
+               kfree(new_pipe);
+
+out:
+       mutex_unlock(&mdev->graph_mutex);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(media_pipeline_alloc_start);
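The `pipe->start_count` reference-counting scheme used by `__media_pipeline_start()` and `__media_pipeline_stop()` can be reduced to a small sketch: the first start populates and validates, later starts only take a reference, and teardown happens when the last user stops. The following userspace model is hypothetical (`pipe_ref`, `pipe_start`, and `pipe_stop` are invented names, and `populated` stands in for the pads list):

```c
#include <assert.h>

struct pipe_ref {
	int start_count;
	int populated; /* models the pipeline's pads list */
};

/* First start populates; later starts just take a reference. */
int pipe_start(struct pipe_ref *p)
{
	if (p->start_count) {
		p->start_count++;
		return 0;
	}
	p->populated = 1; /* populate + validate on first start */
	p->start_count = 1;
	return 0;
}

/* Last stop tears the pipeline down. */
void pipe_stop(struct pipe_ref *p)
{
	if (--p->start_count) /* other users still streaming */
		return;
	p->populated = 0;     /* cleanup on last stop */
}
```

This is why a second `media_pipeline_start()` on an already-running pipe can skip validation entirely: a non-zero start count guarantees the pipeline was validated when it was first started.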
+
 /* -----------------------------------------------------------------------------
  * Links management
  */
@@ -829,7 +1186,7 @@ int __media_entity_setup_link(struct media_link *link, u32 flags)
 {
        const u32 mask = MEDIA_LNK_FL_ENABLED;
        struct media_device *mdev;
-       struct media_entity *source, *sink;
+       struct media_pad *source, *sink;
        int ret = -EBUSY;
 
        if (link == NULL)
@@ -845,12 +1202,11 @@ int __media_entity_setup_link(struct media_link *link, u32 flags)
        if (link->flags == flags)
                return 0;
 
-       source = link->source->entity;
-       sink = link->sink->entity;
+       source = link->source;
+       sink = link->sink;
 
        if (!(link->flags & MEDIA_LNK_FL_DYNAMIC) &&
-           (media_entity_is_streaming(source) ||
-            media_entity_is_streaming(sink)))
+           (media_pad_is_streaming(source) || media_pad_is_streaming(sink)))
                return -EBUSY;
 
        mdev = source->graph_obj.mdev;
@@ -991,6 +1347,60 @@ struct media_pad *media_pad_remote_pad_unique(const struct media_pad *pad)
 }
 EXPORT_SYMBOL_GPL(media_pad_remote_pad_unique);
 
+int media_entity_get_fwnode_pad(struct media_entity *entity,
+                               struct fwnode_handle *fwnode,
+                               unsigned long direction_flags)
+{
+       struct fwnode_endpoint endpoint;
+       unsigned int i;
+       int ret;
+
+       if (!entity->ops || !entity->ops->get_fwnode_pad) {
+               for (i = 0; i < entity->num_pads; i++) {
+                       if (entity->pads[i].flags & direction_flags)
+                               return i;
+               }
+
+               return -ENXIO;
+       }
+
+       ret = fwnode_graph_parse_endpoint(fwnode, &endpoint);
+       if (ret)
+               return ret;
+
+       ret = entity->ops->get_fwnode_pad(entity, &endpoint);
+       if (ret < 0)
+               return ret;
+
+       if (ret >= entity->num_pads)
+               return -ENXIO;
+
+       if (!(entity->pads[ret].flags & direction_flags))
+               return -ENXIO;
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad);
+
+struct media_pipeline *media_entity_pipeline(struct media_entity *entity)
+{
+       struct media_pad *pad;
+
+       media_entity_for_each_pad(entity, pad) {
+               if (pad->pipe)
+                       return pad->pipe;
+       }
+
+       return NULL;
+}
+EXPORT_SYMBOL_GPL(media_entity_pipeline);
+
+struct media_pipeline *media_pad_pipeline(struct media_pad *pad)
+{
+       return pad->pipe;
+}
+EXPORT_SYMBOL_GPL(media_pad_pipeline);
+
 static void media_interface_init(struct media_device *mdev,
                                 struct media_interface *intf,
                                 u32 gobj_type,
index d335864..ee6e711 100644 (file)
@@ -339,7 +339,7 @@ void cx18_av_std_setup(struct cx18 *cx)
 
                /*
                 * For a 13.5 Mpps clock and 15,625 Hz line rate, a line is
-                * is 864 pixels = 720 active + 144 blanking.  ITU-R BT.601
+                * 864 pixels = 720 active + 144 blanking.  ITU-R BT.601
                 * specifies 12 luma clock periods or ~ 0.9 * 13.5 Mpps after
                 * the end of active video to start a horizontal line, so that
                 * leaves 132 pixels of hblank to ignore.
@@ -399,7 +399,7 @@ void cx18_av_std_setup(struct cx18 *cx)
 
                /*
                 * For a 13.5 Mpps clock and 15,734.26 Hz line rate, a line is
-                * is 858 pixels = 720 active + 138 blanking.  The Hsync leading
+                * 858 pixels = 720 active + 138 blanking.  The Hsync leading
                 * edge should happen 1.2 us * 13.5 Mpps ~= 16 pixels after the
                 * end of active video, leaving 122 pixels of hblank to ignore
                 * before active video starts.
index ce0ef0b..a04a1d3 100644 (file)
@@ -586,7 +586,7 @@ void cx88_i2c_init_ir(struct cx88_core *core)
 {
        struct i2c_board_info info;
        static const unsigned short default_addr_list[] = {
-               0x18, 0x6b, 0x71,
+               0x18, 0x33, 0x6b, 0x71,
                I2C_CLIENT_END
        };
        static const unsigned short pvr2000_addr_list[] = {
index b509c2a..c0ef03e 100644 (file)
@@ -1388,6 +1388,7 @@ static int cx8800_initdev(struct pci_dev *pci_dev,
        }
                fallthrough;
        case CX88_BOARD_DVICO_FUSIONHDTV_5_PCI_NANO:
+       case CX88_BOARD_NOTONLYTV_LV3H:
                request_module("ir-kbd-i2c");
        }
 
index a3fe547..390bd5e 100644 (file)
@@ -989,7 +989,7 @@ static int cio2_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
                return r;
        }
 
-       r = media_pipeline_start(&q->vdev.entity, &q->pipe);
+       r = video_device_pipeline_start(&q->vdev, &q->pipe);
        if (r)
                goto fail_pipeline;
 
@@ -1009,7 +1009,7 @@ static int cio2_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
 fail_csi2_subdev:
        cio2_hw_exit(cio2, q);
 fail_hw:
-       media_pipeline_stop(&q->vdev.entity);
+       video_device_pipeline_stop(&q->vdev);
 fail_pipeline:
        dev_dbg(dev, "failed to start streaming (%d)\n", r);
        cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_QUEUED);
@@ -1030,7 +1030,7 @@ static void cio2_vb2_stop_streaming(struct vb2_queue *vq)
        cio2_hw_exit(cio2, q);
        synchronize_irq(cio2->pci_dev->irq);
        cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_ERROR);
-       media_pipeline_stop(&q->vdev.entity);
+       video_device_pipeline_stop(&q->vdev);
        pm_runtime_put(dev);
        cio2->streaming = false;
 }
index 8a3eed9..b779e0b 100644 (file)
@@ -603,6 +603,10 @@ static int vpu_v4l2_release(struct vpu_inst *inst)
                inst->workqueue = NULL;
        }
 
+       if (inst->fh.m2m_ctx) {
+               v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
+               inst->fh.m2m_ctx = NULL;
+       }
        v4l2_ctrl_handler_free(&inst->ctrl_handler);
        mutex_destroy(&inst->lock);
        v4l2_fh_del(&inst->fh);
@@ -685,13 +689,6 @@ int vpu_v4l2_close(struct file *file)
 
        vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst);
 
-       vpu_inst_lock(inst);
-       if (inst->fh.m2m_ctx) {
-               v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
-               inst->fh.m2m_ctx = NULL;
-       }
-       vpu_inst_unlock(inst);
-
        call_void_vop(inst, release);
        vpu_inst_unregister(inst);
        vpu_inst_put(inst);
index a0b22b0..435e703 100644 (file)
@@ -421,7 +421,7 @@ static inline void coda9_jpeg_write_huff_values(struct coda_dev *dev, u8 *bits,
                coda_write(dev, (s32)values[i], CODA9_REG_JPEG_HUFF_DATA);
 }
 
-static int coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx)
+static void coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx)
 {
        struct coda_huff_tab *huff_tab = ctx->params.jpeg_huff_tab;
        struct coda_dev *dev = ctx->dev;
@@ -455,7 +455,6 @@ static int coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx)
        coda9_jpeg_write_huff_values(dev, huff_tab->luma_ac, 162);
        coda9_jpeg_write_huff_values(dev, huff_tab->chroma_ac, 162);
        coda_write(dev, 0x000, CODA9_REG_JPEG_HUFF_CTRL);
-       return 0;
 }
 
 static inline void coda9_jpeg_write_qmat_tab(struct coda_dev *dev,
@@ -1394,14 +1393,8 @@ static int coda9_jpeg_prepare_decode(struct coda_ctx *ctx)
        coda_write(dev, ctx->params.jpeg_restart_interval,
                        CODA9_REG_JPEG_RST_INTVAL);
 
-       if (ctx->params.jpeg_huff_tab) {
-               ret = coda9_jpeg_dec_huff_setup(ctx);
-               if (ret < 0) {
-                       v4l2_err(&dev->v4l2_dev,
-                                "failed to set up Huffman tables: %d\n", ret);
-                       return ret;
-               }
-       }
+       if (ctx->params.jpeg_huff_tab)
+               coda9_jpeg_dec_huff_setup(ctx);
 
        coda9_jpeg_qmat_setup(ctx);
 
index 29f6c1c..86c0546 100644 (file)
@@ -457,7 +457,7 @@ err_cmdq_data:
        kfree(path);
        atomic_dec(&mdp->job_count);
        wake_up(&mdp->callback_wq);
-       if (cmd->pkt.buf_size > 0)
+       if (cmd && cmd->pkt.buf_size > 0)
                mdp_cmdq_pkt_destroy(&cmd->pkt);
        kfree(comps);
        kfree(cmd);
index e62abf3..d3eaf88 100644 (file)
@@ -682,7 +682,7 @@ int mdp_comp_clock_on(struct device *dev, struct mdp_comp *comp)
        int i, ret;
 
        if (comp->comp_dev) {
-               ret = pm_runtime_get_sync(comp->comp_dev);
+               ret = pm_runtime_resume_and_get(comp->comp_dev);
                if (ret < 0) {
                        dev_err(dev,
                                "Failed to get power, err %d. type:%d id:%d\n",
@@ -699,6 +699,7 @@ int mdp_comp_clock_on(struct device *dev, struct mdp_comp *comp)
                        dev_err(dev,
                                "Failed to enable clk %d. type:%d id:%d\n",
                                i, comp->type, comp->id);
+                       pm_runtime_put(comp->comp_dev);
                        return ret;
                }
        }
@@ -869,7 +870,7 @@ static struct mdp_comp *mdp_comp_create(struct mdp_dev *mdp,
 
        ret = mdp_comp_init(mdp, node, comp, id);
        if (ret) {
-               kfree(comp);
+               devm_kfree(dev, comp);
                return ERR_PTR(ret);
        }
        mdp->comp[id] = comp;
@@ -930,7 +931,7 @@ void mdp_comp_destroy(struct mdp_dev *mdp)
                if (mdp->comp[i]) {
                        pm_runtime_disable(mdp->comp[i]->comp_dev);
                        mdp_comp_deinit(mdp->comp[i]);
-                       kfree(mdp->comp[i]);
+                       devm_kfree(mdp->comp[i]->comp_dev, mdp->comp[i]);
                        mdp->comp[i] = NULL;
                }
        }
index cde5957..c413e59 100644 (file)
@@ -289,7 +289,8 @@ err_deinit_comp:
        mdp_comp_destroy(mdp);
 err_return:
        for (i = 0; i < MDP_PIPE_MAX; i++)
-               mtk_mutex_put(mdp->mdp_mutex[i]);
+               if (mdp)
+                       mtk_mutex_put(mdp->mdp_mutex[i]);
        kfree(mdp);
        dev_dbg(dev, "Errno %d\n", ret);
        return ret;
index 9f58443..a72bed9 100644 (file)
@@ -173,7 +173,8 @@ int mdp_vpu_dev_init(struct mdp_vpu_dev *vpu, struct mtk_scp *scp,
        /* vpu work_size was set in mdp_vpu_ipi_handle_init_ack */
 
        mem_size = vpu_alloc_size;
-       if (mdp_vpu_shared_mem_alloc(vpu)) {
+       err = mdp_vpu_shared_mem_alloc(vpu);
+       if (err) {
                dev_err(&mdp->pdev->dev, "VPU memory alloc fail!");
                goto err_mem_alloc;
        }
index b3b0577..f6d48c3 100644 (file)
@@ -373,7 +373,7 @@ static const struct v4l2_ctrl_ops dw100_ctrl_ops = {
  * The coordinates are saved in UQ12.4 fixed point format.
  */
 static void dw100_ctrl_dewarping_map_init(const struct v4l2_ctrl *ctrl,
-                                         u32 from_idx, u32 elems,
+                                         u32 from_idx,
                                          union v4l2_ctrl_ptr ptr)
 {
        struct dw100_ctx *ctx =
@@ -398,7 +398,7 @@ static void dw100_ctrl_dewarping_map_init(const struct v4l2_ctrl *ctrl,
        ctx->map_height = mh;
        ctx->map_size = mh * mw * sizeof(u32);
 
-       for (idx = from_idx; idx < elems; idx++) {
+       for (idx = from_idx; idx < ctrl->elems; idx++) {
                qy = min_t(u32, (idx / mw) * qdy, qsh);
                qx = min_t(u32, (idx % mw) * qdx, qsw);
                map[idx] = dw100_map_format_coordinates(qx, qy);
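The dw100 hunk above stores map coordinates in UQ12.4 fixed point: 12 integer bits and 4 fractional bits, i.e. a coordinate scaled by 16. A hypothetical pair of conversion helpers (not part of the driver) illustrates the encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Encode a coordinate as UQ12.4: round(coord * 2^4). */
uint16_t to_uq12_4(double coord)
{
	return (uint16_t)(coord * 16.0 + 0.5);
}

/* Decode a UQ12.4 value back to a coordinate. */
double from_uq12_4(uint16_t v)
{
	return v / 16.0;
}
```

With 4 fractional bits, the smallest representable step is 1/16 of a pixel, which bounds the precision of the dewarping map.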
index 290df04..81fb3a5 100644 (file)
@@ -493,7 +493,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
        struct v4l2_subdev *subdev;
        int ret;
 
-       ret = media_pipeline_start(&vdev->entity, &video->pipe);
+       ret = video_device_pipeline_start(vdev, &video->pipe);
        if (ret < 0)
                return ret;
 
@@ -522,7 +522,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
        return 0;
 
 error:
-       media_pipeline_stop(&vdev->entity);
+       video_device_pipeline_stop(vdev);
 
        video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED);
 
@@ -553,7 +553,7 @@ static void video_stop_streaming(struct vb2_queue *q)
                v4l2_subdev_call(subdev, video, s_stream, 0);
        }
 
-       media_pipeline_stop(&vdev->entity);
+       video_device_pipeline_stop(vdev);
 
        video->ops->flush_buffers(video, VB2_BUF_STATE_ERROR);
 }
index 60de420..ab6a29f 100644 (file)
@@ -1800,7 +1800,7 @@ bool venus_helper_check_format(struct venus_inst *inst, u32 v4l2_pixfmt)
        struct venus_core *core = inst->core;
        u32 fmt = to_hfi_raw_fmt(v4l2_pixfmt);
        struct hfi_plat_caps *caps;
-       u32 buftype;
+       bool found;
 
        if (!fmt)
                return false;
@@ -1809,12 +1809,13 @@ bool venus_helper_check_format(struct venus_inst *inst, u32 v4l2_pixfmt)
        if (!caps)
                return false;
 
-       if (inst->session_type == VIDC_SESSION_TYPE_DEC)
-               buftype = HFI_BUFFER_OUTPUT2;
-       else
-               buftype = HFI_BUFFER_OUTPUT;
+       found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT, fmt);
+       if (found)
+               goto done;
 
-       return find_fmt_from_caps(caps, buftype, fmt);
+       found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT2, fmt);
+done:
+       return found;
 }
 EXPORT_SYMBOL_GPL(venus_helper_check_format);
 
index 1968f09..e00aedb 100644 (file)
@@ -569,8 +569,6 @@ irqreturn_t hfi_isr(int irq, void *dev)
 
 int hfi_create(struct venus_core *core, const struct hfi_core_ops *ops)
 {
-       int ret;
-
        if (!ops)
                return -EINVAL;
 
@@ -579,9 +577,8 @@ int hfi_create(struct venus_core *core, const struct hfi_core_ops *ops)
        core->state = CORE_UNINIT;
        init_completion(&core->done);
        pkt_set_version(core->res->hfi_version);
-       ret = venus_hfi_create(core);
 
-       return ret;
+       return venus_hfi_create(core);
 }
 
 void hfi_destroy(struct venus_core *core)
index ac0bb45..4ceaba3 100644 (file)
@@ -183,6 +183,8 @@ vdec_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
                else
                        return NULL;
                fmt = find_format(inst, pixmp->pixelformat, f->type);
+               if (!fmt)
+                       return NULL;
        }
 
        pixmp->width = clamp(pixmp->width, frame_width_min(inst),
index 86918ae..cdb1254 100644 (file)
@@ -192,10 +192,8 @@ venc_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
        pixmp->height = clamp(pixmp->height, frame_height_min(inst),
                              frame_height_max(inst));
 
-       if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
-               pixmp->width = ALIGN(pixmp->width, 128);
-               pixmp->height = ALIGN(pixmp->height, 32);
-       }
+       pixmp->width = ALIGN(pixmp->width, 128);
+       pixmp->height = ALIGN(pixmp->height, 32);
 
        pixmp->width = ALIGN(pixmp->width, 2);
        pixmp->height = ALIGN(pixmp->height, 2);
@@ -392,7 +390,7 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a)
        struct v4l2_fract *timeperframe = &out->timeperframe;
        u64 us_per_frame, fps;
 
-       if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
+       if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT &&
            a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
                return -EINVAL;
 
@@ -424,7 +422,7 @@ static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *a)
 {
        struct venus_inst *inst = to_inst(file);
 
-       if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
+       if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT &&
            a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
                return -EINVAL;
 
@@ -509,6 +507,19 @@ static int venc_enum_frameintervals(struct file *file, void *fh,
        return 0;
 }
 
+static int venc_subscribe_event(struct v4l2_fh *fh,
+                               const struct v4l2_event_subscription *sub)
+{
+       switch (sub->type) {
+       case V4L2_EVENT_EOS:
+               return v4l2_event_subscribe(fh, sub, 2, NULL);
+       case V4L2_EVENT_CTRL:
+               return v4l2_ctrl_subscribe_event(fh, sub);
+       default:
+               return -EINVAL;
+       }
+}
+
 static const struct v4l2_ioctl_ops venc_ioctl_ops = {
        .vidioc_querycap = venc_querycap,
        .vidioc_enum_fmt_vid_cap = venc_enum_fmt,
@@ -534,8 +545,9 @@ static const struct v4l2_ioctl_ops venc_ioctl_ops = {
        .vidioc_g_parm = venc_g_parm,
        .vidioc_enum_framesizes = venc_enum_framesizes,
        .vidioc_enum_frameintervals = venc_enum_frameintervals,
-       .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+       .vidioc_subscribe_event = venc_subscribe_event,
        .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+       .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
 };
 
 static int venc_pm_get(struct venus_inst *inst)
@@ -686,7 +698,8 @@ static int venc_set_properties(struct venus_inst *inst)
                        return ret;
        }
 
-       if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC) {
+       if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC &&
+           ctr->profile.hevc == V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10) {
                struct hfi_hdr10_pq_sei hdr10;
                unsigned int c;
 
index ed44e58..7468e43 100644 (file)
@@ -8,6 +8,7 @@
 
 #include "core.h"
 #include "venc.h"
+#include "helpers.h"
 
 #define BITRATE_MIN            32000
 #define BITRATE_MAX            160000000
@@ -336,8 +337,6 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
                 * if we disable 8x8 transform for HP.
                 */
 
-               if (ctrl->val == 0)
-                       return -EINVAL;
 
                ctr->h264_8x8_transform = ctrl->val;
                break;
@@ -348,15 +347,41 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
        return 0;
 }
 
+static int venc_op_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+{
+       struct venus_inst *inst = ctrl_to_inst(ctrl);
+       struct hfi_buffer_requirements bufreq;
+       enum hfi_version ver = inst->core->res->hfi_version;
+       int ret;
+
+       switch (ctrl->id) {
+       case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
+               ret = venus_helper_get_bufreq(inst, HFI_BUFFER_INPUT, &bufreq);
+               if (!ret)
+                       ctrl->val = HFI_BUFREQ_COUNT_MIN(&bufreq, ver);
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
 static const struct v4l2_ctrl_ops venc_ctrl_ops = {
        .s_ctrl = venc_op_s_ctrl,
+       .g_volatile_ctrl = venc_op_g_volatile_ctrl,
 };
 
 int venc_ctrl_init(struct venus_inst *inst)
 {
        int ret;
+       struct v4l2_ctrl_hdr10_mastering_display p_hdr10_mastering = {
+               { 34000, 13250, 7500 },
+               { 16000, 34500, 3000 }, 15635, 16450, 10000000, 500,
+       };
+       struct v4l2_ctrl_hdr10_cll_info p_hdr10_cll = { 1000, 400 };
 
-       ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 58);
+       ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 59);
        if (ret)
                return ret;
 
@@ -437,6 +462,9 @@ int venc_ctrl_init(struct venus_inst *inst)
                0, V4L2_MPEG_VIDEO_VP8_PROFILE_0);
 
        v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+                         V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 4, 11, 1, 4);
+
+       v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
                V4L2_CID_MPEG_VIDEO_BITRATE, BITRATE_MIN, BITRATE_MAX,
                BITRATE_STEP, BITRATE_DEFAULT);
 
@@ -579,11 +607,11 @@ int venc_ctrl_init(struct venus_inst *inst)
 
        v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops,
                                   V4L2_CID_COLORIMETRY_HDR10_CLL_INFO,
-                                  v4l2_ctrl_ptr_create(NULL));
+                                  v4l2_ctrl_ptr_create(&p_hdr10_cll));
 
        v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops,
                                   V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY,
-                                  v4l2_ctrl_ptr_create(NULL));
+                                  v4l2_ctrl_ptr_create((void *)&p_hdr10_mastering));
 
        v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
                               V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE,
index 968a742..2f7daa8 100644 (file)
@@ -786,9 +786,8 @@ static int rvin_csi2_link_notify(struct media_link *link, u32 flags,
                return 0;
 
        /*
-        * Don't allow link changes if any entity in the graph is
-        * streaming, modifying the CHSEL register fields can disrupt
-        * running streams.
+        * Don't allow link changes if any stream in the graph is active as
+        * modifying the CHSEL register fields can disrupt running streams.
         */
        media_device_for_each_entity(entity, &group->mdev)
                if (media_entity_is_streaming(entity))
index 8d37fbd..3aea96d 100644 (file)
@@ -1244,8 +1244,6 @@ static int rvin_mc_validate_format(struct rvin_dev *vin, struct v4l2_subdev *sd,
 
 static int rvin_set_stream(struct rvin_dev *vin, int on)
 {
-       struct media_pipeline *pipe;
-       struct media_device *mdev;
        struct v4l2_subdev *sd;
        struct media_pad *pad;
        int ret;
@@ -1265,7 +1263,7 @@ static int rvin_set_stream(struct rvin_dev *vin, int on)
        sd = media_entity_to_v4l2_subdev(pad->entity);
 
        if (!on) {
-               media_pipeline_stop(&vin->vdev.entity);
+               video_device_pipeline_stop(&vin->vdev);
                return v4l2_subdev_call(sd, video, s_stream, 0);
        }
 
@@ -1273,17 +1271,7 @@ static int rvin_set_stream(struct rvin_dev *vin, int on)
        if (ret)
                return ret;
 
-       /*
-        * The graph lock needs to be taken to protect concurrent
-        * starts of multiple VIN instances as they might share
-        * a common subdevice down the line and then should use
-        * the same pipe.
-        */
-       mdev = vin->vdev.entity.graph_obj.mdev;
-       mutex_lock(&mdev->graph_mutex);
-       pipe = sd->entity.pipe ? sd->entity.pipe : &vin->vdev.pipe;
-       ret = __media_pipeline_start(&vin->vdev.entity, pipe);
-       mutex_unlock(&mdev->graph_mutex);
+       ret = video_device_pipeline_alloc_start(&vin->vdev);
        if (ret)
                return ret;
 
@@ -1291,7 +1279,7 @@ static int rvin_set_stream(struct rvin_dev *vin, int on)
        if (ret == -ENOIOCTLCMD)
                ret = 0;
        if (ret)
-               media_pipeline_stop(&vin->vdev.entity);
+               video_device_pipeline_stop(&vin->vdev);
 
        return ret;
 }
index df1606b..9d24647 100644 (file)
@@ -927,7 +927,7 @@ static void vsp1_video_stop_streaming(struct vb2_queue *vq)
        }
        mutex_unlock(&pipe->lock);
 
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
        vsp1_video_release_buffers(video);
        vsp1_video_pipeline_put(pipe);
 }
@@ -1046,7 +1046,7 @@ vsp1_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
                return PTR_ERR(pipe);
        }
 
-       ret = __media_pipeline_start(&video->video.entity, &pipe->pipe);
+       ret = __video_device_pipeline_start(&video->video, &pipe->pipe);
        if (ret < 0) {
                mutex_unlock(&mdev->graph_mutex);
                goto err_pipe;
@@ -1070,7 +1070,7 @@ vsp1_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
        return 0;
 
 err_stop:
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
 err_pipe:
        vsp1_video_pipeline_put(pipe);
        return ret;
index d5904c9..d454068 100644 (file)
@@ -913,7 +913,7 @@ static void rkisp1_cap_stream_disable(struct rkisp1_capture *cap)
  *
  * Call s_stream(false) in the reverse order from
  * rkisp1_pipeline_stream_enable() and disable the DMA engine.
- * Should be called before media_pipeline_stop()
+ * Should be called before video_device_pipeline_stop()
  */
 static void rkisp1_pipeline_stream_disable(struct rkisp1_capture *cap)
        __must_hold(&cap->rkisp1->stream_lock)
@@ -926,7 +926,7 @@ static void rkisp1_pipeline_stream_disable(struct rkisp1_capture *cap)
         * If the other capture is streaming, isp and sensor nodes shouldn't
         * be disabled, skip them.
         */
-       if (rkisp1->pipe.streaming_count < 2)
+       if (rkisp1->pipe.start_count < 2)
                v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, false);
 
        v4l2_subdev_call(&rkisp1->resizer_devs[cap->id].sd, video, s_stream,
@@ -937,7 +937,7 @@ static void rkisp1_pipeline_stream_disable(struct rkisp1_capture *cap)
  * rkisp1_pipeline_stream_enable - enable nodes in the pipeline
  *
  * Enable the DMA Engine and call s_stream(true) through the pipeline.
- * Should be called after media_pipeline_start()
+ * Should be called after video_device_pipeline_start()
  */
 static int rkisp1_pipeline_stream_enable(struct rkisp1_capture *cap)
        __must_hold(&cap->rkisp1->stream_lock)
@@ -956,7 +956,7 @@ static int rkisp1_pipeline_stream_enable(struct rkisp1_capture *cap)
         * If the other capture is streaming, isp and sensor nodes are already
         * enabled, skip them.
         */
-       if (rkisp1->pipe.streaming_count > 1)
+       if (rkisp1->pipe.start_count > 1)
                return 0;
 
        ret = v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, true);
@@ -994,7 +994,7 @@ static void rkisp1_vb2_stop_streaming(struct vb2_queue *queue)
 
        rkisp1_dummy_buf_destroy(cap);
 
-       media_pipeline_stop(&node->vdev.entity);
+       video_device_pipeline_stop(&node->vdev);
 
        mutex_unlock(&cap->rkisp1->stream_lock);
 }
@@ -1008,7 +1008,7 @@ rkisp1_vb2_start_streaming(struct vb2_queue *queue, unsigned int count)
 
        mutex_lock(&cap->rkisp1->stream_lock);
 
-       ret = media_pipeline_start(entity, &cap->rkisp1->pipe);
+       ret = video_device_pipeline_start(&cap->vnode.vdev, &cap->rkisp1->pipe);
        if (ret) {
                dev_err(cap->rkisp1->dev, "start pipeline failed %d\n", ret);
                goto err_ret_buffers;
@@ -1044,7 +1044,7 @@ err_pipe_pm_put:
 err_destroy_dummy:
        rkisp1_dummy_buf_destroy(cap);
 err_pipeline_stop:
-       media_pipeline_stop(entity);
+       video_device_pipeline_stop(&cap->vnode.vdev);
 err_ret_buffers:
        rkisp1_return_all_buffers(cap, VB2_BUF_STATE_QUEUED);
        mutex_unlock(&cap->rkisp1->stream_lock);
@@ -1273,11 +1273,12 @@ static int rkisp1_capture_link_validate(struct media_link *link)
        struct rkisp1_capture *cap = video_get_drvdata(vdev);
        const struct rkisp1_capture_fmt_cfg *fmt =
                rkisp1_find_fmt_cfg(cap, cap->pix.fmt.pixelformat);
-       struct v4l2_subdev_format sd_fmt;
+       struct v4l2_subdev_format sd_fmt = {
+               .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+               .pad = link->source->index,
+       };
        int ret;
 
-       sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
-       sd_fmt.pad = link->source->index;
        ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sd_fmt);
        if (ret)
                return ret;
index 8056997..a1293c4 100644 (file)
@@ -378,6 +378,7 @@ struct rkisp1_params {
        struct v4l2_format vdev_fmt;
 
        enum v4l2_quantization quantization;
+       enum v4l2_ycbcr_encoding ycbcr_encoding;
        enum rkisp1_fmt_raw_pat_type raw_type;
 };
 
@@ -556,17 +557,32 @@ void rkisp1_sd_adjust_crop(struct v4l2_rect *crop,
  */
 const struct rkisp1_mbus_info *rkisp1_mbus_info_get_by_code(u32 mbus_code);
 
-/* rkisp1_params_configure - configure the params when stream starts.
- *                          This function is called by the isp entity upon stream starts.
- *                          The function applies the initial configuration of the parameters.
+/*
+ * rkisp1_params_pre_configure - Configure the params before stream start
  *
- * @params:      pointer to rkisp1_params.
+ * @params:      pointer to rkisp1_params
  * @bayer_pat:   the bayer pattern on the isp video sink pad
  * @quantization: the quantization configured on the isp's src pad
+ * @ycbcr_encoding: the ycbcr_encoding configured on the isp's src pad
+ *
+ * This function is called by the ISP entity just before the ISP gets started.
+ * It applies the initial ISP parameters from the first params buffer, but
+ * skips LSC as it needs to be configured after the ISP is started.
+ */
+void rkisp1_params_pre_configure(struct rkisp1_params *params,
+                                enum rkisp1_fmt_raw_pat_type bayer_pat,
+                                enum v4l2_quantization quantization,
+                                enum v4l2_ycbcr_encoding ycbcr_encoding);
+
+/*
+ * rkisp1_params_post_configure - Configure the params after stream start
+ *
+ * @params:      pointer to rkisp1_params
+ *
+ * This function is called by the ISP entity just after the ISP gets started.
+ * It applies the initial ISP LSC parameters from the first params buffer.
  */
-void rkisp1_params_configure(struct rkisp1_params *params,
-                            enum rkisp1_fmt_raw_pat_type bayer_pat,
-                            enum v4l2_quantization quantization);
+void rkisp1_params_post_configure(struct rkisp1_params *params);
 
 /* rkisp1_params_disable - disable all parameters.
  *                        This function is called by the isp entity upon stream start
index 383a3ec..585cf3f 100644 (file)
@@ -231,10 +231,11 @@ static int rkisp1_config_isp(struct rkisp1_isp *isp,
                struct v4l2_mbus_framefmt *src_frm;
 
                src_frm = rkisp1_isp_get_pad_fmt(isp, NULL,
-                                                RKISP1_ISP_PAD_SINK_VIDEO,
+                                                RKISP1_ISP_PAD_SOURCE_VIDEO,
                                                 V4L2_SUBDEV_FORMAT_ACTIVE);
-               rkisp1_params_configure(&rkisp1->params, sink_fmt->bayer_pat,
-                                       src_frm->quantization);
+               rkisp1_params_pre_configure(&rkisp1->params, sink_fmt->bayer_pat,
+                                           src_frm->quantization,
+                                           src_frm->ycbcr_enc);
        }
 
        return 0;
@@ -340,6 +341,9 @@ static void rkisp1_isp_start(struct rkisp1_isp *isp)
               RKISP1_CIF_ISP_CTRL_ISP_ENABLE |
               RKISP1_CIF_ISP_CTRL_ISP_INFORM_ENABLE;
        rkisp1_write(rkisp1, RKISP1_CIF_ISP_CTRL, val);
+
+       if (isp->src_fmt->pixel_enc != V4L2_PIXEL_ENC_BAYER)
+               rkisp1_params_post_configure(&rkisp1->params);
 }
 
 /* ----------------------------------------------------------------------------
@@ -431,12 +435,17 @@ static int rkisp1_isp_init_config(struct v4l2_subdev *sd,
        struct v4l2_mbus_framefmt *sink_fmt, *src_fmt;
        struct v4l2_rect *sink_crop, *src_crop;
 
+       /* Video. */
        sink_fmt = v4l2_subdev_get_try_format(sd, sd_state,
                                              RKISP1_ISP_PAD_SINK_VIDEO);
        sink_fmt->width = RKISP1_DEFAULT_WIDTH;
        sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
        sink_fmt->field = V4L2_FIELD_NONE;
        sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT;
+       sink_fmt->colorspace = V4L2_COLORSPACE_RAW;
+       sink_fmt->xfer_func = V4L2_XFER_FUNC_NONE;
+       sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+       sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
 
        sink_crop = v4l2_subdev_get_try_crop(sd, sd_state,
                                             RKISP1_ISP_PAD_SINK_VIDEO);
@@ -449,11 +458,16 @@ static int rkisp1_isp_init_config(struct v4l2_subdev *sd,
                                             RKISP1_ISP_PAD_SOURCE_VIDEO);
        *src_fmt = *sink_fmt;
        src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
+       src_fmt->colorspace = V4L2_COLORSPACE_SRGB;
+       src_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
+       src_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+       src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
 
        src_crop = v4l2_subdev_get_try_crop(sd, sd_state,
                                            RKISP1_ISP_PAD_SOURCE_VIDEO);
        *src_crop = *sink_crop;
 
+       /* Parameters and statistics. */
        sink_fmt = v4l2_subdev_get_try_format(sd, sd_state,
                                              RKISP1_ISP_PAD_SINK_PARAMS);
        src_fmt = v4l2_subdev_get_try_format(sd, sd_state,
@@ -472,40 +486,105 @@ static void rkisp1_isp_set_src_fmt(struct rkisp1_isp *isp,
                                   struct v4l2_mbus_framefmt *format,
                                   unsigned int which)
 {
-       const struct rkisp1_mbus_info *mbus_info;
+       const struct rkisp1_mbus_info *sink_info;
+       const struct rkisp1_mbus_info *src_info;
+       struct v4l2_mbus_framefmt *sink_fmt;
        struct v4l2_mbus_framefmt *src_fmt;
        const struct v4l2_rect *src_crop;
+       bool set_csc;
 
+       sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
+                                         RKISP1_ISP_PAD_SINK_VIDEO, which);
        src_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
                                         RKISP1_ISP_PAD_SOURCE_VIDEO, which);
        src_crop = rkisp1_isp_get_pad_crop(isp, sd_state,
                                           RKISP1_ISP_PAD_SOURCE_VIDEO, which);
 
+       /*
+        * Media bus code. The ISP can operate in pass-through mode (Bayer in,
+        * Bayer out or YUV in, YUV out) or process Bayer data to YUV, but
+        * can't convert from YUV to Bayer.
+        */
+       sink_info = rkisp1_mbus_info_get_by_code(sink_fmt->code);
+
        src_fmt->code = format->code;
-       mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
-       if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) {
+       src_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
+       if (!src_info || !(src_info->direction & RKISP1_ISP_SD_SRC)) {
                src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
-               mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
+               src_info = rkisp1_mbus_info_get_by_code(src_fmt->code);
        }
-       if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
-               isp->src_fmt = mbus_info;
+
+       if (sink_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
+           src_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) {
+               src_fmt->code = sink_fmt->code;
+               src_info = sink_info;
+       }
+
+       /*
+        * The source width and height must be identical to the source crop
+        * size.
+        */
        src_fmt->width  = src_crop->width;
        src_fmt->height = src_crop->height;
 
        /*
-        * The CSC API is used to allow userspace to force full
-        * quantization on YUV formats.
+        * Copy the color space for the sink pad. When converting from Bayer to
+        * YUV, default to a limited quantization range.
         */
-       if (format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC &&
-           format->quantization == V4L2_QUANTIZATION_FULL_RANGE &&
-           mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
-               src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
-       else if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
+       src_fmt->colorspace = sink_fmt->colorspace;
+       src_fmt->xfer_func = sink_fmt->xfer_func;
+       src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc;
+
+       if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER &&
+           src_info->pixel_enc == V4L2_PIXEL_ENC_YUV)
                src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
        else
-               src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+               src_fmt->quantization = sink_fmt->quantization;
+
+       /*
+        * Allow setting the source color space fields when the SET_CSC flag is
+        * set and the source format is YUV. If the sink format is YUV, don't
+        * set the color primaries, transfer function or YCbCr encoding as the
+        * ISP is bypassed in that case and passes YUV data through without
+        * modifications.
+        *
+        * The color primaries and transfer function are configured through the
+        * cross-talk matrix and tone curve respectively. Settings for those
+        * hardware blocks are conveyed through the ISP parameters buffer, as
+        * they need to combine color space information with other image tuning
+        * characteristics and can't thus be computed by the kernel based on the
+        * color space. The source pad colorspace and xfer_func fields are thus
+        * ignored by the driver, but can be set by userspace to propagate
+        * accurate color space information down the pipeline.
+        */
+       set_csc = format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC;
+
+       if (set_csc && src_info->pixel_enc == V4L2_PIXEL_ENC_YUV) {
+               if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) {
+                       if (format->colorspace != V4L2_COLORSPACE_DEFAULT)
+                               src_fmt->colorspace = format->colorspace;
+                       if (format->xfer_func != V4L2_XFER_FUNC_DEFAULT)
+                               src_fmt->xfer_func = format->xfer_func;
+                       if (format->ycbcr_enc != V4L2_YCBCR_ENC_DEFAULT)
+                               src_fmt->ycbcr_enc = format->ycbcr_enc;
+               }
+
+               if (format->quantization != V4L2_QUANTIZATION_DEFAULT)
+                       src_fmt->quantization = format->quantization;
+       }
 
        *format = *src_fmt;
+
+       /*
+        * Restore the SET_CSC flag if it was set to indicate support for the
+        * CSC setting API.
+        */
+       if (set_csc)
+               format->flags |= V4L2_MBUS_FRAMEFMT_SET_CSC;
+
+       /* Store the source format info when setting the active format. */
+       if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
+               isp->src_fmt = src_info;
 }
 
 static void rkisp1_isp_set_src_crop(struct rkisp1_isp *isp,
@@ -573,6 +652,7 @@ static void rkisp1_isp_set_sink_fmt(struct rkisp1_isp *isp,
        const struct rkisp1_mbus_info *mbus_info;
        struct v4l2_mbus_framefmt *sink_fmt;
        struct v4l2_rect *sink_crop;
+       bool is_yuv;
 
        sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state,
                                          RKISP1_ISP_PAD_SINK_VIDEO,
@@ -593,6 +673,36 @@ static void rkisp1_isp_set_sink_fmt(struct rkisp1_isp *isp,
                                   RKISP1_ISP_MIN_HEIGHT,
                                   RKISP1_ISP_MAX_HEIGHT);
 
+       /*
+        * Adjust the color space fields. Accept any color primaries and
+        * transfer function for both YUV and Bayer. For YUV any YCbCr encoding
+        * and quantization range is also accepted. For Bayer formats, the YCbCr
+        * encoding isn't applicable, and the quantization range can only be
+        * full.
+        */
+       is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV;
+
+       sink_fmt->colorspace = format->colorspace ? :
+                              (is_yuv ? V4L2_COLORSPACE_SRGB :
+                               V4L2_COLORSPACE_RAW);
+       sink_fmt->xfer_func = format->xfer_func ? :
+                             V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace);
+       if (is_yuv) {
+               sink_fmt->ycbcr_enc = format->ycbcr_enc ? :
+                       V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace);
+               sink_fmt->quantization = format->quantization ? :
+                       V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace,
+                                                     sink_fmt->ycbcr_enc);
+       } else {
+               /*
+                * The YCbCr encoding isn't applicable for non-YUV formats, but
+                * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it
+                * should be ignored by userspace.
+                */
+               sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+               sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+       }
+
        *format = *sink_fmt;
 
        /* Propagate to in crop */
index 9da7dc1..d8731eb 100644 (file)
@@ -18,6 +18,8 @@
 #define RKISP1_ISP_PARAMS_REQ_BUFS_MIN 2
 #define RKISP1_ISP_PARAMS_REQ_BUFS_MAX 8
 
+#define RKISP1_ISP_DPCC_METHODS_SET(n) \
+                       (RKISP1_CIF_ISP_DPCC_METHODS_SET_1 + 0x4 * (n))
 #define RKISP1_ISP_DPCC_LINE_THRESH(n) \
                        (RKISP1_CIF_ISP_DPCC_LINE_THRESH_1 + 0x14 * (n))
 #define RKISP1_ISP_DPCC_LINE_MAD_FAC(n) \
@@ -56,39 +58,47 @@ static void rkisp1_dpcc_config(struct rkisp1_params *params,
        unsigned int i;
        u32 mode;
 
-       /* avoid to override the old enable value */
+       /*
+        * The enable bit is controlled in rkisp1_isp_isr_other_config() and
+        * must be preserved. The grayscale mode should be configured
+        * automatically based on the media bus code on the ISP sink pad, so
+        * only the STAGE1_ENABLE bit can be set by userspace.
+        */
        mode = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE);
-       mode &= RKISP1_CIF_ISP_DPCC_ENA;
-       mode |= arg->mode & ~RKISP1_CIF_ISP_DPCC_ENA;
+       mode &= RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE;
+       mode |= arg->mode & RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE;
        rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE, mode);
+
        rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_OUTPUT_MODE,
-                    arg->output_mode);
+                    arg->output_mode & RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK);
        rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_SET_USE,
-                    arg->set_use);
-
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_1,
-                    arg->methods[0].method);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_2,
-                    arg->methods[1].method);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_3,
-                    arg->methods[2].method);
+                    arg->set_use & RKISP1_CIF_ISP_DPCC_SET_USE_MASK);
+
        for (i = 0; i < RKISP1_CIF_ISP_DPCC_METHODS_MAX; i++) {
+               rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_METHODS_SET(i),
+                            arg->methods[i].method &
+                            RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK);
                rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_THRESH(i),
-                            arg->methods[i].line_thresh);
+                            arg->methods[i].line_thresh &
+                            RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK);
                rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_MAD_FAC(i),
-                            arg->methods[i].line_mad_fac);
+                            arg->methods[i].line_mad_fac &
+                            RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK);
                rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_PG_FAC(i),
-                            arg->methods[i].pg_fac);
+                            arg->methods[i].pg_fac &
+                            RKISP1_CIF_ISP_DPCC_PG_FAC_MASK);
                rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RND_THRESH(i),
-                            arg->methods[i].rnd_thresh);
+                            arg->methods[i].rnd_thresh &
+                            RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK);
                rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RG_FAC(i),
-                            arg->methods[i].rg_fac);
+                            arg->methods[i].rg_fac &
+                            RKISP1_CIF_ISP_DPCC_RG_FAC_MASK);
        }
 
        rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RND_OFFS,
-                    arg->rnd_offs);
+                    arg->rnd_offs & RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK);
        rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RO_LIMITS,
-                    arg->ro_limits);
+                    arg->ro_limits & RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK);
 }
 
 /* ISP black level subtraction interface function */
@@ -188,149 +198,131 @@ static void
 rkisp1_lsc_matrix_config_v10(struct rkisp1_params *params,
                             const struct rkisp1_cif_isp_lsc_config *pconfig)
 {
-       unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data;
+       struct rkisp1_device *rkisp1 = params->rkisp1;
+       u32 lsc_status, sram_addr, lsc_table_sel;
+       unsigned int i, j;
 
-       isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
+       lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
 
        /* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */
-       sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
+       sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
                    RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
                    RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
 
        /* program data tables (table size is 9 * 17 = 153) */
        for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) {
+               const __u16 *r_tbl = pconfig->r_data_tbl[i];
+               const __u16 *gr_tbl = pconfig->gr_data_tbl[i];
+               const __u16 *gb_tbl = pconfig->gb_data_tbl[i];
+               const __u16 *b_tbl = pconfig->b_data_tbl[i];
+
                /*
                 * 17 sectors with 2 values in one DWORD = 9
                 * DWORDs (2nd value of last DWORD unused)
                 */
                for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) {
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j],
-                                                                pconfig->r_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j],
-                                                                pconfig->gr_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j],
-                                                                pconfig->gb_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j],
-                                                                pconfig->b_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data);
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+                                       r_tbl[j], r_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+                                       gr_tbl[j], gr_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+                                       gb_tbl[j], gb_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(
+                                       b_tbl[j], b_tbl[j + 1]));
                }
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
-                            data);
 
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
-                            data);
-
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
-                            data);
-
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
-                            data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(r_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gr_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gb_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(b_tbl[j], 0));
        }
-       isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
-                           RKISP1_CIF_ISP_LSC_TABLE_0 :
-                           RKISP1_CIF_ISP_LSC_TABLE_1;
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL,
-                    isp_lsc_table_sel);
+
+       lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+                       RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1;
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel);
 }
 
 static void
 rkisp1_lsc_matrix_config_v12(struct rkisp1_params *params,
                             const struct rkisp1_cif_isp_lsc_config *pconfig)
 {
-       unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data;
+       struct rkisp1_device *rkisp1 = params->rkisp1;
+       u32 lsc_status, sram_addr, lsc_table_sel;
+       unsigned int i, j;
 
-       isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
+       lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS);
 
        /* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */
-       sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
-                    RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
-                    RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
+       sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+                   RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 :
+                   RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153;
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr);
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr);
 
        /* program data tables (table size is 9 * 17 = 153) */
        for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) {
+               const __u16 *r_tbl = pconfig->r_data_tbl[i];
+               const __u16 *gr_tbl = pconfig->gr_data_tbl[i];
+               const __u16 *gb_tbl = pconfig->gb_data_tbl[i];
+               const __u16 *b_tbl = pconfig->b_data_tbl[i];
+
                /*
                 * 17 sectors with 2 values in one DWORD = 9
                 * DWORDs (2nd value of last DWORD unused)
                 */
                for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) {
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
-                                       pconfig->r_data_tbl[i][j],
-                                       pconfig->r_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
-                                       pconfig->gr_data_tbl[i][j],
-                                       pconfig->gr_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
-                                       pconfig->gb_data_tbl[i][j],
-                                       pconfig->gb_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data);
-
-                       data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
-                                       pconfig->b_data_tbl[i][j],
-                                       pconfig->b_data_tbl[i][j + 1]);
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data);
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+                                       r_tbl[j], r_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+                                       gr_tbl[j], gr_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+                                       gb_tbl[j], gb_tbl[j + 1]));
+                       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+                                    RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(
+                                       b_tbl[j], b_tbl[j + 1]));
                }
 
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->r_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
-                            data);
-
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gr_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
-                            data);
-
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gb_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
-                            data);
-
-               data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->b_data_tbl[i][j], 0);
-               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
-                            data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(r_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gr_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gb_tbl[j], 0));
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA,
+                            RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(b_tbl[j], 0));
        }
-       isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ?
-                           RKISP1_CIF_ISP_LSC_TABLE_0 :
-                           RKISP1_CIF_ISP_LSC_TABLE_1;
-       rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL,
-                    isp_lsc_table_sel);
+
+       lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ?
+                       RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1;
+       rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel);
 }
 
 static void rkisp1_lsc_config(struct rkisp1_params *params,
                              const struct rkisp1_cif_isp_lsc_config *arg)
 {
-       unsigned int i, data;
-       u32 lsc_ctrl;
+       struct rkisp1_device *rkisp1 = params->rkisp1;
+       u32 lsc_ctrl, data;
+       unsigned int i;
 
        /* To config must be off , store the current status firstly */
-       lsc_ctrl = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_CTRL);
+       lsc_ctrl = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_CTRL);
        rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
                                RKISP1_CIF_ISP_LSC_CTRL_ENA);
        params->ops->lsc_matrix_config(params, arg);
@@ -339,38 +331,31 @@ static void rkisp1_lsc_config(struct rkisp1_params *params,
                /* program x size tables */
                data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_size_tbl[i * 2],
                                                    arg->x_size_tbl[i * 2 + 1]);
-               rkisp1_write(params->rkisp1,
-                            RKISP1_CIF_ISP_LSC_XSIZE_01 + i * 4, data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XSIZE(i), data);
 
                /* program x grad tables */
-               data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_grad_tbl[i * 2],
+               data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->x_grad_tbl[i * 2],
                                                    arg->x_grad_tbl[i * 2 + 1]);
-               rkisp1_write(params->rkisp1,
-                            RKISP1_CIF_ISP_LSC_XGRAD_01 + i * 4, data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XGRAD(i), data);
 
                /* program y size tables */
                data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_size_tbl[i * 2],
                                                    arg->y_size_tbl[i * 2 + 1]);
-               rkisp1_write(params->rkisp1,
-                            RKISP1_CIF_ISP_LSC_YSIZE_01 + i * 4, data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YSIZE(i), data);
 
                /* program y grad tables */
-               data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_grad_tbl[i * 2],
+               data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->y_grad_tbl[i * 2],
                                                    arg->y_grad_tbl[i * 2 + 1]);
-               rkisp1_write(params->rkisp1,
-                            RKISP1_CIF_ISP_LSC_YGRAD_01 + i * 4, data);
+               rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YGRAD(i), data);
        }
 
        /* restore the lsc ctrl status */
-       if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA) {
-               rkisp1_param_set_bits(params,
-                                     RKISP1_CIF_ISP_LSC_CTRL,
+       if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA)
+               rkisp1_param_set_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
                                      RKISP1_CIF_ISP_LSC_CTRL_ENA);
-       } else {
-               rkisp1_param_clear_bits(params,
-                                       RKISP1_CIF_ISP_LSC_CTRL,
+       else
+               rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
                                        RKISP1_CIF_ISP_LSC_CTRL_ENA);
-       }
 }
 
 /* ISP Filtering function */
@@ -1066,39 +1051,96 @@ static void rkisp1_ie_enable(struct rkisp1_params *params, bool en)
        }
 }
 
-static void rkisp1_csm_config(struct rkisp1_params *params, bool full_range)
+static void rkisp1_csm_config(struct rkisp1_params *params)
 {
-       static const u16 full_range_coeff[] = {
-               0x0026, 0x004b, 0x000f,
-               0x01ea, 0x01d6, 0x0040,
-               0x0040, 0x01ca, 0x01f6
+       struct csm_coeffs {
+               u16 limited[9];
+               u16 full[9];
+       };
+       static const struct csm_coeffs rec601_coeffs = {
+               .limited = {
+                       0x0021, 0x0042, 0x000d,
+                       0x01ed, 0x01db, 0x0038,
+                       0x0038, 0x01d1, 0x01f7,
+               },
+               .full = {
+                       0x0026, 0x004b, 0x000f,
+                       0x01ea, 0x01d6, 0x0040,
+                       0x0040, 0x01ca, 0x01f6,
+               },
        };
-       static const u16 limited_range_coeff[] = {
-               0x0021, 0x0040, 0x000d,
-               0x01ed, 0x01db, 0x0038,
-               0x0038, 0x01d1, 0x01f7,
+       static const struct csm_coeffs rec709_coeffs = {
+               .limited = {
+                       0x0018, 0x0050, 0x0008,
+                       0x01f3, 0x01d5, 0x0038,
+                       0x0038, 0x01cd, 0x01fb,
+               },
+               .full = {
+                       0x001b, 0x005c, 0x0009,
+                       0x01f1, 0x01cf, 0x0040,
+                       0x0040, 0x01c6, 0x01fa,
+               },
        };
+       static const struct csm_coeffs rec2020_coeffs = {
+               .limited = {
+                       0x001d, 0x004c, 0x0007,
+                       0x01f0, 0x01d8, 0x0038,
+                       0x0038, 0x01cd, 0x01fb,
+               },
+               .full = {
+                       0x0022, 0x0057, 0x0008,
+                       0x01ee, 0x01d2, 0x0040,
+                       0x0040, 0x01c5, 0x01fb,
+               },
+       };
+       static const struct csm_coeffs smpte240m_coeffs = {
+               .limited = {
+                       0x0018, 0x004f, 0x000a,
+                       0x01f3, 0x01d5, 0x0038,
+                       0x0038, 0x01ce, 0x01fa,
+               },
+               .full = {
+                       0x001b, 0x005a, 0x000b,
+                       0x01f1, 0x01cf, 0x0040,
+                       0x0040, 0x01c7, 0x01f9,
+               },
+       };
+
+       const struct csm_coeffs *coeffs;
+       const u16 *csm;
        unsigned int i;
 
-       if (full_range) {
-               for (i = 0; i < ARRAY_SIZE(full_range_coeff); i++)
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
-                                    full_range_coeff[i]);
+       switch (params->ycbcr_encoding) {
+       case V4L2_YCBCR_ENC_601:
+       default:
+               coeffs = &rec601_coeffs;
+               break;
+       case V4L2_YCBCR_ENC_709:
+               coeffs = &rec709_coeffs;
+               break;
+       case V4L2_YCBCR_ENC_BT2020:
+               coeffs = &rec2020_coeffs;
+               break;
+       case V4L2_YCBCR_ENC_SMPTE240M:
+               coeffs = &smpte240m_coeffs;
+               break;
+       }
 
+       if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE) {
+               csm = coeffs->full;
                rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
                                      RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA |
                                      RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA);
        } else {
-               for (i = 0; i < ARRAY_SIZE(limited_range_coeff); i++)
-                       rkisp1_write(params->rkisp1,
-                                    RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
-                                    limited_range_coeff[i]);
-
+               csm = coeffs->limited;
                rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_CTRL,
                                        RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA |
                                        RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA);
        }
+
+       for (i = 0; i < 9; i++)
+               rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_CC_COEFF_0 + i * 4,
+                            csm[i]);
 }
 
 /* ISP De-noise Pre-Filter(DPF) function */
@@ -1216,11 +1258,11 @@ rkisp1_isp_isr_other_config(struct rkisp1_params *params,
                if (module_ens & RKISP1_CIF_ISP_MODULE_DPCC)
                        rkisp1_param_set_bits(params,
                                              RKISP1_CIF_ISP_DPCC_MODE,
-                                             RKISP1_CIF_ISP_DPCC_ENA);
+                                             RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
                else
                        rkisp1_param_clear_bits(params,
                                                RKISP1_CIF_ISP_DPCC_MODE,
-                                               RKISP1_CIF_ISP_DPCC_ENA);
+                                               RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
        }
 
        /* update bls config */
@@ -1255,22 +1297,6 @@ rkisp1_isp_isr_other_config(struct rkisp1_params *params,
                                                RKISP1_CIF_ISP_CTRL_ISP_GAMMA_IN_ENA);
        }
 
-       /* update lsc config */
-       if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC)
-               rkisp1_lsc_config(params,
-                                 &new_params->others.lsc_config);
-
-       if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) {
-               if (module_ens & RKISP1_CIF_ISP_MODULE_LSC)
-                       rkisp1_param_set_bits(params,
-                                             RKISP1_CIF_ISP_LSC_CTRL,
-                                             RKISP1_CIF_ISP_LSC_CTRL_ENA);
-               else
-                       rkisp1_param_clear_bits(params,
-                                               RKISP1_CIF_ISP_LSC_CTRL,
-                                               RKISP1_CIF_ISP_LSC_CTRL_ENA);
-       }
-
        /* update awb gains */
        if (module_cfg_update & RKISP1_CIF_ISP_MODULE_AWB_GAIN)
                params->ops->awb_gain_config(params, &new_params->others.awb_gain_config);
@@ -1387,6 +1413,33 @@ rkisp1_isp_isr_other_config(struct rkisp1_params *params,
        }
 }
 
+static void
+rkisp1_isp_isr_lsc_config(struct rkisp1_params *params,
+                         const struct rkisp1_params_cfg *new_params)
+{
+       unsigned int module_en_update, module_cfg_update, module_ens;
+
+       module_en_update = new_params->module_en_update;
+       module_cfg_update = new_params->module_cfg_update;
+       module_ens = new_params->module_ens;
+
+       /* update lsc config */
+       if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC)
+               rkisp1_lsc_config(params,
+                                 &new_params->others.lsc_config);
+
+       if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) {
+               if (module_ens & RKISP1_CIF_ISP_MODULE_LSC)
+                       rkisp1_param_set_bits(params,
+                                             RKISP1_CIF_ISP_LSC_CTRL,
+                                             RKISP1_CIF_ISP_LSC_CTRL_ENA);
+               else
+                       rkisp1_param_clear_bits(params,
+                                               RKISP1_CIF_ISP_LSC_CTRL,
+                                               RKISP1_CIF_ISP_LSC_CTRL_ENA);
+       }
+}
+
 static void rkisp1_isp_isr_meas_config(struct rkisp1_params *params,
                                       struct  rkisp1_params_cfg *new_params)
 {
@@ -1448,47 +1501,60 @@ static void rkisp1_isp_isr_meas_config(struct rkisp1_params *params,
        }
 }
 
-static void rkisp1_params_apply_params_cfg(struct rkisp1_params *params,
-                                          unsigned int frame_sequence)
+static bool rkisp1_params_get_buffer(struct rkisp1_params *params,
+                                    struct rkisp1_buffer **buf,
+                                    struct rkisp1_params_cfg **cfg)
 {
-       struct rkisp1_params_cfg *new_params;
-       struct rkisp1_buffer *cur_buf = NULL;
-
        if (list_empty(&params->params))
-               return;
-
-       cur_buf = list_first_entry(&params->params,
-                                  struct rkisp1_buffer, queue);
+               return false;
 
-       new_params = (struct rkisp1_params_cfg *)vb2_plane_vaddr(&cur_buf->vb.vb2_buf, 0);
+       *buf = list_first_entry(&params->params, struct rkisp1_buffer, queue);
+       *cfg = vb2_plane_vaddr(&(*buf)->vb.vb2_buf, 0);
 
-       rkisp1_isp_isr_other_config(params, new_params);
-       rkisp1_isp_isr_meas_config(params, new_params);
-
-       /* update shadow register immediately */
-       rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+       return true;
+}
 
-       list_del(&cur_buf->queue);
+static void rkisp1_params_complete_buffer(struct rkisp1_params *params,
+                                         struct rkisp1_buffer *buf,
+                                         unsigned int frame_sequence)
+{
+       list_del(&buf->queue);
 
-       cur_buf->vb.sequence = frame_sequence;
-       vb2_buffer_done(&cur_buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
+       buf->vb.sequence = frame_sequence;
+       vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
 }
 
 void rkisp1_params_isr(struct rkisp1_device *rkisp1)
 {
-       /*
-        * This isr is called when the ISR finishes processing a frame (RKISP1_CIF_ISP_FRAME).
-        * Configurations performed here will be applied on the next frame.
-        * Since frame_sequence is updated on the vertical sync signal, we should use
-        * frame_sequence + 1 here to indicate to userspace on which frame these parameters
-        * are being applied.
-        */
-       unsigned int frame_sequence = rkisp1->isp.frame_sequence + 1;
        struct rkisp1_params *params = &rkisp1->params;
+       struct rkisp1_params_cfg *new_params;
+       struct rkisp1_buffer *cur_buf;
 
        spin_lock(&params->config_lock);
-       rkisp1_params_apply_params_cfg(params, frame_sequence);
 
+       if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+               goto unlock;
+
+       rkisp1_isp_isr_other_config(params, new_params);
+       rkisp1_isp_isr_lsc_config(params, new_params);
+       rkisp1_isp_isr_meas_config(params, new_params);
+
+       /* update shadow register immediately */
+       rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+                             RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+       /*
+        * This ISR is called when the ISP finishes processing a frame
+        * (RKISP1_CIF_ISP_FRAME). Configurations performed here will be
+        * applied on the next frame. Since frame_sequence is updated on the
+        * vertical sync signal, we should use frame_sequence + 1 here to
+        * indicate to userspace on which frame these parameters are being
+        * applied.
+        */
+       rkisp1_params_complete_buffer(params, cur_buf,
+                                     rkisp1->isp.frame_sequence + 1);
+
+unlock:
        spin_unlock(&params->config_lock);
 }
 
@@ -1531,9 +1597,18 @@ static const struct rkisp1_cif_isp_afc_config rkisp1_afc_params_default_config =
        14
 };
 
-static void rkisp1_params_config_parameter(struct rkisp1_params *params)
+void rkisp1_params_pre_configure(struct rkisp1_params *params,
+                                enum rkisp1_fmt_raw_pat_type bayer_pat,
+                                enum v4l2_quantization quantization,
+                                enum v4l2_ycbcr_encoding ycbcr_encoding)
 {
        struct rkisp1_cif_isp_hst_config hst = rkisp1_hst_params_default_config;
+       struct rkisp1_params_cfg *new_params;
+       struct rkisp1_buffer *cur_buf;
+
+       params->quantization = quantization;
+       params->ycbcr_encoding = ycbcr_encoding;
+       params->raw_type = bayer_pat;
 
        params->ops->awb_meas_config(params, &rkisp1_awb_params_default_config);
        params->ops->awb_meas_enable(params, &rkisp1_awb_params_default_config,
@@ -1552,27 +1627,55 @@ static void rkisp1_params_config_parameter(struct rkisp1_params *params)
        rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP_V10,
                              rkisp1_hst_params_default_config.mode);
 
-       /* set the  range */
-       if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE)
-               rkisp1_csm_config(params, true);
-       else
-               rkisp1_csm_config(params, false);
+       rkisp1_csm_config(params);
 
        spin_lock_irq(&params->config_lock);
 
        /* apply the first buffer if there is one already */
-       rkisp1_params_apply_params_cfg(params, 0);
 
+       if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+               goto unlock;
+
+       rkisp1_isp_isr_other_config(params, new_params);
+       rkisp1_isp_isr_meas_config(params, new_params);
+
+       /* update shadow register immediately */
+       rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+                             RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+unlock:
        spin_unlock_irq(&params->config_lock);
 }
 
-void rkisp1_params_configure(struct rkisp1_params *params,
-                            enum rkisp1_fmt_raw_pat_type bayer_pat,
-                            enum v4l2_quantization quantization)
+void rkisp1_params_post_configure(struct rkisp1_params *params)
 {
-       params->quantization = quantization;
-       params->raw_type = bayer_pat;
-       rkisp1_params_config_parameter(params);
+       struct rkisp1_params_cfg *new_params;
+       struct rkisp1_buffer *cur_buf;
+
+       spin_lock_irq(&params->config_lock);
+
+       /*
+        * Apply LSC parameters from the first buffer (if any is already
+        * available). This must be done after the ISP gets started in the
+        * ISP8000Nano v18.02 (found in the i.MX8MP) as access to the LSC RAM
+        * is gated by the ISP_CTRL.ISP_ENABLE bit. As this initialization
+        * ordering doesn't affect other ISP versions negatively, do so
+        * unconditionally.
+        */
+
+       if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params))
+               goto unlock;
+
+       rkisp1_isp_isr_lsc_config(params, new_params);
+
+       /* update shadow register immediately */
+       rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL,
+                             RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD);
+
+       rkisp1_params_complete_buffer(params, cur_buf, 0);
+
+unlock:
+       spin_unlock_irq(&params->config_lock);
 }
 
 /*
@@ -1582,7 +1685,7 @@ void rkisp1_params_configure(struct rkisp1_params *params,
 void rkisp1_params_disable(struct rkisp1_params *params)
 {
        rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_DPCC_MODE,
-                               RKISP1_CIF_ISP_DPCC_ENA);
+                               RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE);
        rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL,
                                RKISP1_CIF_ISP_LSC_CTRL_ENA);
        rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_BLS_CTRL,
index dd3e6c3..421cc73 100644
        (((v0) & 0x1FFF) | (((v1) & 0x1FFF) << 13))
 #define RKISP1_CIF_ISP_LSC_SECT_SIZE(v0, v1)      \
        (((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
-#define RKISP1_CIF_ISP_LSC_GRAD_SIZE(v0, v1)      \
+#define RKISP1_CIF_ISP_LSC_SECT_GRAD(v0, v1)      \
        (((v0) & 0xFFF) | (((v1) & 0xFFF) << 16))
 
 /* LSC: ISP_LSC_TABLE_SEL */
 #define RKISP1_CIF_ISP_CTRL_ISP_GAMMA_OUT_ENA_READ(x)  (((x) >> 11) & 1)
 
 /* DPCC */
-/* ISP_DPCC_MODE */
-#define RKISP1_CIF_ISP_DPCC_ENA                                BIT(0)
-#define RKISP1_CIF_ISP_DPCC_MODE_MAX                   0x07
-#define RKISP1_CIF_ISP_DPCC_OUTPUTMODE_MAX             0x0F
-#define RKISP1_CIF_ISP_DPCC_SETUSE_MAX                 0x0F
-#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RESERVED       0xFFFFE000
-#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RESERVED       0xFFFF0000
-#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RESERVED      0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_PG_FAC_RESERVED            0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_RND_THRESH_RESERVED                0xFFFF0000
-#define RKISP1_CIF_ISP_DPCC_RG_FAC_RESERVED            0xFFFFC0C0
-#define RKISP1_CIF_ISP_DPCC_RO_LIMIT_RESERVED          0xFFFFF000
-#define RKISP1_CIF_ISP_DPCC_RND_OFFS_RESERVED          0xFFFFF000
+#define RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE           BIT(0)
+#define RKISP1_CIF_ISP_DPCC_MODE_GRAYSCALE_MODE                BIT(1)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK           GENMASK(3, 0)
+#define RKISP1_CIF_ISP_DPCC_SET_USE_MASK               GENMASK(3, 0)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK           0x00001f1f
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK           0x0000ffff
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK          0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_MASK                        0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK            0x0000ffff
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_MASK                        0x00003f3f
+#define RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK              0x00000fff
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK              0x00000fff
 
 /* BLS */
 /* ISP_BLS_CTRL */
 #define RKISP1_CIF_ISP_LSC_GR_TABLE_DATA       (RKISP1_CIF_ISP_LSC_BASE + 0x00000018)
 #define RKISP1_CIF_ISP_LSC_B_TABLE_DATA                (RKISP1_CIF_ISP_LSC_BASE + 0x0000001C)
 #define RKISP1_CIF_ISP_LSC_GB_TABLE_DATA       (RKISP1_CIF_ISP_LSC_BASE + 0x00000020)
-#define RKISP1_CIF_ISP_LSC_XGRAD_01            (RKISP1_CIF_ISP_LSC_BASE + 0x00000024)
-#define RKISP1_CIF_ISP_LSC_XGRAD_23            (RKISP1_CIF_ISP_LSC_BASE + 0x00000028)
-#define RKISP1_CIF_ISP_LSC_XGRAD_45            (RKISP1_CIF_ISP_LSC_BASE + 0x0000002C)
-#define RKISP1_CIF_ISP_LSC_XGRAD_67            (RKISP1_CIF_ISP_LSC_BASE + 0x00000030)
-#define RKISP1_CIF_ISP_LSC_YGRAD_01            (RKISP1_CIF_ISP_LSC_BASE + 0x00000034)
-#define RKISP1_CIF_ISP_LSC_YGRAD_23            (RKISP1_CIF_ISP_LSC_BASE + 0x00000038)
-#define RKISP1_CIF_ISP_LSC_YGRAD_45            (RKISP1_CIF_ISP_LSC_BASE + 0x0000003C)
-#define RKISP1_CIF_ISP_LSC_YGRAD_67            (RKISP1_CIF_ISP_LSC_BASE + 0x00000040)
-#define RKISP1_CIF_ISP_LSC_XSIZE_01            (RKISP1_CIF_ISP_LSC_BASE + 0x00000044)
-#define RKISP1_CIF_ISP_LSC_XSIZE_23            (RKISP1_CIF_ISP_LSC_BASE + 0x00000048)
-#define RKISP1_CIF_ISP_LSC_XSIZE_45            (RKISP1_CIF_ISP_LSC_BASE + 0x0000004C)
-#define RKISP1_CIF_ISP_LSC_XSIZE_67            (RKISP1_CIF_ISP_LSC_BASE + 0x00000050)
-#define RKISP1_CIF_ISP_LSC_YSIZE_01            (RKISP1_CIF_ISP_LSC_BASE + 0x00000054)
-#define RKISP1_CIF_ISP_LSC_YSIZE_23            (RKISP1_CIF_ISP_LSC_BASE + 0x00000058)
-#define RKISP1_CIF_ISP_LSC_YSIZE_45            (RKISP1_CIF_ISP_LSC_BASE + 0x0000005C)
-#define RKISP1_CIF_ISP_LSC_YSIZE_67            (RKISP1_CIF_ISP_LSC_BASE + 0x00000060)
+#define RKISP1_CIF_ISP_LSC_XGRAD(n)            (RKISP1_CIF_ISP_LSC_BASE + 0x00000024 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_YGRAD(n)            (RKISP1_CIF_ISP_LSC_BASE + 0x00000034 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_XSIZE(n)            (RKISP1_CIF_ISP_LSC_BASE + 0x00000044 + (n) * 4)
+#define RKISP1_CIF_ISP_LSC_YSIZE(n)            (RKISP1_CIF_ISP_LSC_BASE + 0x00000054 + (n) * 4)
 #define RKISP1_CIF_ISP_LSC_TABLE_SEL           (RKISP1_CIF_ISP_LSC_BASE + 0x00000064)
 #define RKISP1_CIF_ISP_LSC_STATUS              (RKISP1_CIF_ISP_LSC_BASE + 0x00000068)
 
index f4caa8f..f76afd8 100644
@@ -411,6 +411,10 @@ static int rkisp1_rsz_init_config(struct v4l2_subdev *sd,
        sink_fmt->height = RKISP1_DEFAULT_HEIGHT;
        sink_fmt->field = V4L2_FIELD_NONE;
        sink_fmt->code = RKISP1_DEF_FMT;
+       sink_fmt->colorspace = V4L2_COLORSPACE_SRGB;
+       sink_fmt->xfer_func = V4L2_XFER_FUNC_SRGB;
+       sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+       sink_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE;
 
        sink_crop = v4l2_subdev_get_try_crop(sd, sd_state,
                                             RKISP1_RSZ_PAD_SINK);
@@ -503,6 +507,7 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
        const struct rkisp1_mbus_info *mbus_info;
        struct v4l2_mbus_framefmt *sink_fmt, *src_fmt;
        struct v4l2_rect *sink_crop;
+       bool is_yuv;
 
        sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, sd_state, RKISP1_RSZ_PAD_SINK,
                                          which);
@@ -524,9 +529,6 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
        if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
                rsz->pixel_enc = mbus_info->pixel_enc;
 
-       /* Propagete to source pad */
-       src_fmt->code = sink_fmt->code;
-
        sink_fmt->width = clamp_t(u32, format->width,
                                  RKISP1_ISP_MIN_WIDTH,
                                  RKISP1_ISP_MAX_WIDTH);
@@ -534,8 +536,45 @@ static void rkisp1_rsz_set_sink_fmt(struct rkisp1_resizer *rsz,
                                   RKISP1_ISP_MIN_HEIGHT,
                                   RKISP1_ISP_MAX_HEIGHT);
 
+       /*
+        * Adjust the color space fields. Accept any color primaries and
+        * transfer function for both YUV and Bayer. For YUV any YCbCr encoding
+        * and quantization range is also accepted. For Bayer formats, the YCbCr
+        * encoding isn't applicable, and the quantization range can only be
+        * full.
+        */
+       is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV;
+
+       sink_fmt->colorspace = format->colorspace ? :
+                              (is_yuv ? V4L2_COLORSPACE_SRGB :
+                               V4L2_COLORSPACE_RAW);
+       sink_fmt->xfer_func = format->xfer_func ? :
+                             V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace);
+       if (is_yuv) {
+               sink_fmt->ycbcr_enc = format->ycbcr_enc ? :
+                       V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace);
+               sink_fmt->quantization = format->quantization ? :
+                       V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace,
+                                                     sink_fmt->ycbcr_enc);
+       } else {
+               /*
+                * The YCbCr encoding isn't applicable for non-YUV formats, but
+                * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it
+                * should be ignored by userspace.
+                */
+               sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601;
+               sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE;
+       }
+
        *format = *sink_fmt;
 
+       /* Propagate the media bus code and color space to the source pad. */
+       src_fmt->code = sink_fmt->code;
+       src_fmt->colorspace = sink_fmt->colorspace;
+       src_fmt->xfer_func = sink_fmt->xfer_func;
+       src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc;
+       src_fmt->quantization = sink_fmt->quantization;
+
        /* Update sink crop */
        rkisp1_rsz_set_sink_crop(rsz, sd_state, sink_crop, which);
 }
index 03638c8..e3b95a2 100644
@@ -524,7 +524,7 @@ static int fimc_capture_release(struct file *file)
        mutex_lock(&fimc->lock);
 
        if (close && vc->streaming) {
-               media_pipeline_stop(&vc->ve.vdev.entity);
+               video_device_pipeline_stop(&vc->ve.vdev);
                vc->streaming = false;
        }
 
@@ -1176,7 +1176,6 @@ static int fimc_cap_streamon(struct file *file, void *priv,
 {
        struct fimc_dev *fimc = video_drvdata(file);
        struct fimc_vid_cap *vc = &fimc->vid_cap;
-       struct media_entity *entity = &vc->ve.vdev.entity;
        struct fimc_source_info *si = NULL;
        struct v4l2_subdev *sd;
        int ret;
@@ -1184,7 +1183,7 @@ static int fimc_cap_streamon(struct file *file, void *priv,
        if (fimc_capture_active(fimc))
                return -EBUSY;
 
-       ret = media_pipeline_start(entity, &vc->ve.pipe->mp);
+       ret = video_device_pipeline_start(&vc->ve.vdev, &vc->ve.pipe->mp);
        if (ret < 0)
                return ret;
 
@@ -1218,7 +1217,7 @@ static int fimc_cap_streamon(struct file *file, void *priv,
        }
 
 err_p_stop:
-       media_pipeline_stop(entity);
+       video_device_pipeline_stop(&vc->ve.vdev);
        return ret;
 }
 
@@ -1234,7 +1233,7 @@ static int fimc_cap_streamoff(struct file *file, void *priv,
                return ret;
 
        if (vc->streaming) {
-               media_pipeline_stop(&vc->ve.vdev.entity);
+               video_device_pipeline_stop(&vc->ve.vdev);
                vc->streaming = false;
        }
 
index 8f12240..f6a302f 100644
@@ -312,7 +312,7 @@ static int isp_video_release(struct file *file)
        is_singular_file = v4l2_fh_is_singular_file(file);
 
        if (is_singular_file && ivc->streaming) {
-               media_pipeline_stop(entity);
+               video_device_pipeline_stop(&ivc->ve.vdev);
                ivc->streaming = 0;
        }
 
@@ -490,10 +490,9 @@ static int isp_video_streamon(struct file *file, void *priv,
 {
        struct fimc_isp *isp = video_drvdata(file);
        struct exynos_video_entity *ve = &isp->video_capture.ve;
-       struct media_entity *me = &ve->vdev.entity;
        int ret;
 
-       ret = media_pipeline_start(me, &ve->pipe->mp);
+       ret = video_device_pipeline_start(&ve->vdev, &ve->pipe->mp);
        if (ret < 0)
                return ret;
 
@@ -508,7 +507,7 @@ static int isp_video_streamon(struct file *file, void *priv,
        isp->video_capture.streaming = 1;
        return 0;
 p_stop:
-       media_pipeline_stop(me);
+       video_device_pipeline_stop(&ve->vdev);
        return ret;
 }
 
@@ -523,7 +522,7 @@ static int isp_video_streamoff(struct file *file, void *priv,
        if (ret < 0)
                return ret;
 
-       media_pipeline_stop(&video->ve.vdev.entity);
+       video_device_pipeline_stop(&video->ve.vdev);
        video->streaming = 0;
        return 0;
 }
index 41b0a4a..e185a40 100644
@@ -516,7 +516,7 @@ static int fimc_lite_release(struct file *file)
        if (v4l2_fh_is_singular_file(file) &&
            atomic_read(&fimc->out_path) == FIMC_IO_DMA) {
                if (fimc->streaming) {
-                       media_pipeline_stop(entity);
+                       video_device_pipeline_stop(&fimc->ve.vdev);
                        fimc->streaming = false;
                }
                fimc_lite_stop_capture(fimc, false);
@@ -812,13 +812,12 @@ static int fimc_lite_streamon(struct file *file, void *priv,
                              enum v4l2_buf_type type)
 {
        struct fimc_lite *fimc = video_drvdata(file);
-       struct media_entity *entity = &fimc->ve.vdev.entity;
        int ret;
 
        if (fimc_lite_active(fimc))
                return -EBUSY;
 
-       ret = media_pipeline_start(entity, &fimc->ve.pipe->mp);
+       ret = video_device_pipeline_start(&fimc->ve.vdev, &fimc->ve.pipe->mp);
        if (ret < 0)
                return ret;
 
@@ -835,7 +834,7 @@ static int fimc_lite_streamon(struct file *file, void *priv,
        }
 
 err_p_stop:
-       media_pipeline_stop(entity);
+       video_device_pipeline_stop(&fimc->ve.vdev);
        return 0;
 }
 
@@ -849,7 +848,7 @@ static int fimc_lite_streamoff(struct file *file, void *priv,
        if (ret < 0)
                return ret;
 
-       media_pipeline_stop(&fimc->ve.vdev.entity);
+       video_device_pipeline_stop(&fimc->ve.vdev);
        fimc->streaming = false;
        return 0;
 }
index c2d8f1e..db106eb 100644
@@ -848,13 +848,13 @@ static int s3c_camif_streamon(struct file *file, void *priv,
        if (s3c_vp_active(vp))
                return 0;
 
-       ret = media_pipeline_start(sensor, camif->m_pipeline);
+       ret = media_pipeline_start(sensor->pads, camif->m_pipeline);
        if (ret < 0)
                return ret;
 
        ret = camif_pipeline_validate(camif);
        if (ret < 0) {
-               media_pipeline_stop(sensor);
+               media_pipeline_stop(sensor->pads);
                return ret;
        }
 
@@ -878,7 +878,7 @@ static int s3c_camif_streamoff(struct file *file, void *priv,
 
        ret = vb2_streamoff(&vp->vb_queue, type);
        if (ret == 0)
-               media_pipeline_stop(&camif->sensor.sd->entity);
+               media_pipeline_stop(camif->sensor.sd->entity.pads);
        return ret;
 }
 
index 2ca95ab..37458d4 100644
@@ -751,7 +751,7 @@ static int dcmi_start_streaming(struct vb2_queue *vq, unsigned int count)
                goto err_unlocked;
        }
 
-       ret = media_pipeline_start(&dcmi->vdev->entity, &dcmi->pipeline);
+       ret = video_device_pipeline_start(dcmi->vdev, &dcmi->pipeline);
        if (ret < 0) {
                dev_err(dcmi->dev, "%s: Failed to start streaming, media pipeline start error (%d)\n",
                        __func__, ret);
@@ -865,7 +865,7 @@ err_pipeline_stop:
        dcmi_pipeline_stop(dcmi);
 
 err_media_pipeline_stop:
-       media_pipeline_stop(&dcmi->vdev->entity);
+       video_device_pipeline_stop(dcmi->vdev);
 
 err_pm_put:
        pm_runtime_put(dcmi->dev);
@@ -892,7 +892,7 @@ static void dcmi_stop_streaming(struct vb2_queue *vq)
 
        dcmi_pipeline_stop(dcmi);
 
-       media_pipeline_stop(&dcmi->vdev->entity);
+       video_device_pipeline_stop(dcmi->vdev);
 
        spin_lock_irq(&dcmi->irqlock);
 
index 7960e68..60610c0 100644
@@ -3,7 +3,7 @@
 config VIDEO_SUN4I_CSI
        tristate "Allwinner A10 CMOS Sensor Interface Support"
        depends on V4L_PLATFORM_DRIVERS
-       depends on VIDEO_DEV && COMMON_CLK  && HAS_DMA
+       depends on VIDEO_DEV && COMMON_CLK && RESET_CONTROLLER && HAS_DMA
        depends on ARCH_SUNXI || COMPILE_TEST
        select MEDIA_CONTROLLER
        select VIDEO_V4L2_SUBDEV_API
index 0912a1b..a3e826a 100644
@@ -266,7 +266,7 @@ static int sun4i_csi_start_streaming(struct vb2_queue *vq, unsigned int count)
                goto err_clear_dma_queue;
        }
 
-       ret = media_pipeline_start(&csi->vdev.entity, &csi->vdev.pipe);
+       ret = video_device_pipeline_alloc_start(&csi->vdev);
        if (ret < 0)
                goto err_free_scratch_buffer;
 
@@ -330,7 +330,7 @@ err_disable_device:
        sun4i_csi_capture_stop(csi);
 
 err_disable_pipeline:
-       media_pipeline_stop(&csi->vdev.entity);
+       video_device_pipeline_stop(&csi->vdev);
 
 err_free_scratch_buffer:
        dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr,
@@ -359,7 +359,7 @@ static void sun4i_csi_stop_streaming(struct vb2_queue *vq)
        return_all_buffers(csi, VB2_BUF_STATE_ERROR);
        spin_unlock_irqrestore(&csi->qlock, flags);
 
-       media_pipeline_stop(&csi->vdev.entity);
+       video_device_pipeline_stop(&csi->vdev);
 
        dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr,
                          csi->scratch.paddr);
index 0345901..886006f 100644
@@ -1,13 +1,15 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config VIDEO_SUN6I_CSI
-       tristate "Allwinner V3s Camera Sensor Interface driver"
-       depends on V4L_PLATFORM_DRIVERS
-       depends on VIDEO_DEV && COMMON_CLK  && HAS_DMA
+       tristate "Allwinner A31 Camera Sensor Interface (CSI) Driver"
+       depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
        depends on ARCH_SUNXI || COMPILE_TEST
+       depends on PM && COMMON_CLK && RESET_CONTROLLER && HAS_DMA
        select MEDIA_CONTROLLER
        select VIDEO_V4L2_SUBDEV_API
        select VIDEOBUF2_DMA_CONTIG
-       select REGMAP_MMIO
        select V4L2_FWNODE
+       select REGMAP_MMIO
        help
-          Support for the Allwinner Camera Sensor Interface Controller on V3s.
+          Support for the Allwinner A31 Camera Sensor Interface (CSI)
+          controller, also found on other platforms such as the A83T, H3,
+          V3/V3s or A64.
index a971587..8b99c17 100644
 #include <linux/sched.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
+#include <media/v4l2-mc.h>
 
 #include "sun6i_csi.h"
 #include "sun6i_csi_reg.h"
 
-#define MODULE_NAME    "sun6i-csi"
-
-struct sun6i_csi_dev {
-       struct sun6i_csi                csi;
-       struct device                   *dev;
-
-       struct regmap                   *regmap;
-       struct clk                      *clk_mod;
-       struct clk                      *clk_ram;
-       struct reset_control            *rstc_bus;
-
-       int                             planar_offset[3];
-};
-
-static inline struct sun6i_csi_dev *sun6i_csi_to_dev(struct sun6i_csi *csi)
-{
-       return container_of(csi, struct sun6i_csi_dev, csi);
-}
+/* Helpers */
 
 /* TODO add 10&12 bit YUV, RGB support */
-bool sun6i_csi_is_format_supported(struct sun6i_csi *csi,
+bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev,
                                   u32 pixformat, u32 mbus_code)
 {
-       struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
+       struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
 
        /*
         * Some video receivers have the ability to be compatible with
         * 8bit and 16bit bus width.
         * Identify the media bus format from device tree.
         */
-       if ((sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_PARALLEL
-            || sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_BT656)
-            && sdev->csi.v4l2_ep.bus.parallel.bus_width == 16) {
+       if ((v4l2->v4l2_ep.bus_type == V4L2_MBUS_PARALLEL
+            || v4l2->v4l2_ep.bus_type == V4L2_MBUS_BT656)
+            && v4l2->v4l2_ep.bus.parallel.bus_width == 16) {
                switch (pixformat) {
                case V4L2_PIX_FMT_NV12_16L16:
                case V4L2_PIX_FMT_NV12:
@@ -76,13 +60,14 @@ bool sun6i_csi_is_format_supported(struct sun6i_csi *csi,
                        case MEDIA_BUS_FMT_YVYU8_1X16:
                                return true;
                        default:
-                               dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n",
+                               dev_dbg(csi_dev->dev,
+                                       "Unsupported mbus code: 0x%x\n",
                                        mbus_code);
                                break;
                        }
                        break;
                default:
-                       dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n",
+                       dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n",
                                pixformat);
                        break;
                }
@@ -139,7 +124,7 @@ bool sun6i_csi_is_format_supported(struct sun6i_csi *csi,
                case MEDIA_BUS_FMT_YVYU8_2X8:
                        return true;
                default:
-                       dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n",
+                       dev_dbg(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
                                mbus_code);
                        break;
                }
@@ -154,67 +139,37 @@ bool sun6i_csi_is_format_supported(struct sun6i_csi *csi,
                return (mbus_code == MEDIA_BUS_FMT_JPEG_1X8);
 
        default:
-               dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
+               dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n",
+                       pixformat);
                break;
        }
 
        return false;
 }
 
-int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable)
+int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable)
 {
-       struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-       struct device *dev = sdev->dev;
-       struct regmap *regmap = sdev->regmap;
+       struct device *dev = csi_dev->dev;
+       struct regmap *regmap = csi_dev->regmap;
        int ret;
 
        if (!enable) {
                regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, 0);
+               pm_runtime_put(dev);
 
-               clk_disable_unprepare(sdev->clk_ram);
-               if (of_device_is_compatible(dev->of_node,
-                                           "allwinner,sun50i-a64-csi"))
-                       clk_rate_exclusive_put(sdev->clk_mod);
-               clk_disable_unprepare(sdev->clk_mod);
-               reset_control_assert(sdev->rstc_bus);
                return 0;
        }
 
-       ret = clk_prepare_enable(sdev->clk_mod);
-       if (ret) {
-               dev_err(sdev->dev, "Enable csi clk err %d\n", ret);
+       ret = pm_runtime_resume_and_get(dev);
+       if (ret < 0)
                return ret;
-       }
-
-       if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi"))
-               clk_set_rate_exclusive(sdev->clk_mod, 300000000);
-
-       ret = clk_prepare_enable(sdev->clk_ram);
-       if (ret) {
-               dev_err(sdev->dev, "Enable clk_dram_csi clk err %d\n", ret);
-               goto clk_mod_disable;
-       }
-
-       ret = reset_control_deassert(sdev->rstc_bus);
-       if (ret) {
-               dev_err(sdev->dev, "reset err %d\n", ret);
-               goto clk_ram_disable;
-       }
 
        regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, CSI_EN_CSI_EN);
 
        return 0;
-
-clk_ram_disable:
-       clk_disable_unprepare(sdev->clk_ram);
-clk_mod_disable:
-       if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi"))
-               clk_rate_exclusive_put(sdev->clk_mod);
-       clk_disable_unprepare(sdev->clk_mod);
-       return ret;
 }
 
-static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_dev *sdev,
+static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_device *csi_dev,
                                               u32 mbus_code, u32 pixformat)
 {
        /* non-YUV */
@@ -232,12 +187,13 @@ static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_dev *sdev,
        }
 
        /* not support YUV420 input format yet */
-       dev_dbg(sdev->dev, "Select YUV422 as default input format of CSI.\n");
+       dev_dbg(csi_dev->dev, "Select YUV422 as default input format of CSI.\n");
        return CSI_INPUT_FORMAT_YUV422;
 }
 
-static enum csi_output_fmt get_csi_output_format(struct sun6i_csi_dev *sdev,
-                                                u32 pixformat, u32 field)
+static enum csi_output_fmt
+get_csi_output_format(struct sun6i_csi_device *csi_dev, u32 pixformat,
+                     u32 field)
 {
        bool buf_interlaced = false;
 
@@ -296,14 +252,14 @@ static enum csi_output_fmt get_csi_output_format(struct sun6i_csi_dev *sdev,
                return buf_interlaced ? CSI_FRAME_RAW_8 : CSI_FIELD_RAW_8;
 
        default:
-               dev_warn(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
+               dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x\n", pixformat);
                break;
        }
 
        return CSI_FIELD_RAW_8;
 }
 
-static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
+static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_device *csi_dev,
                                            u32 mbus_code, u32 pixformat)
 {
        /* Input sequence does not apply to non-YUV formats */
@@ -330,7 +286,7 @@ static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
                case MEDIA_BUS_FMT_YVYU8_2X8:
                        return CSI_INPUT_SEQ_YVYU;
                default:
-                       dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n",
+                       dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
                                 mbus_code);
                        break;
                }
@@ -352,7 +308,7 @@ static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
                case MEDIA_BUS_FMT_YVYU8_2X8:
                        return CSI_INPUT_SEQ_YUYV;
                default:
-                       dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n",
+                       dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n",
                                 mbus_code);
                        break;
                }
@@ -362,7 +318,7 @@ static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
                return CSI_INPUT_SEQ_YUYV;
 
        default:
-               dev_warn(sdev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n",
+               dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n",
                         pixformat);
                break;
        }
@@ -370,23 +326,23 @@ static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev,
        return CSI_INPUT_SEQ_YUYV;
 }
 
-static void sun6i_csi_setup_bus(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_setup_bus(struct sun6i_csi_device *csi_dev)
 {
-       struct v4l2_fwnode_endpoint *endpoint = &sdev->csi.v4l2_ep;
-       struct sun6i_csi *csi = &sdev->csi;
+       struct v4l2_fwnode_endpoint *endpoint = &csi_dev->v4l2.v4l2_ep;
+       struct sun6i_csi_config *config = &csi_dev->config;
        unsigned char bus_width;
        u32 flags;
        u32 cfg;
        bool input_interlaced = false;
 
-       if (csi->config.field == V4L2_FIELD_INTERLACED
-           || csi->config.field == V4L2_FIELD_INTERLACED_TB
-           || csi->config.field == V4L2_FIELD_INTERLACED_BT)
+       if (config->field == V4L2_FIELD_INTERLACED
+           || config->field == V4L2_FIELD_INTERLACED_TB
+           || config->field == V4L2_FIELD_INTERLACED_BT)
                input_interlaced = true;
 
        bus_width = endpoint->bus.parallel.bus_width;
 
-       regmap_read(sdev->regmap, CSI_IF_CFG_REG, &cfg);
+       regmap_read(csi_dev->regmap, CSI_IF_CFG_REG, &cfg);
 
        cfg &= ~(CSI_IF_CFG_CSI_IF_MASK | CSI_IF_CFG_MIPI_IF_MASK |
                 CSI_IF_CFG_IF_DATA_WIDTH_MASK |
@@ -434,7 +390,7 @@ static void sun6i_csi_setup_bus(struct sun6i_csi_dev *sdev)
                        cfg |= CSI_IF_CFG_CLK_POL_FALLING_EDGE;
                break;
        default:
-               dev_warn(sdev->dev, "Unsupported bus type: %d\n",
+               dev_warn(csi_dev->dev, "Unsupported bus type: %d\n",
                         endpoint->bus_type);
                break;
        }
@@ -452,54 +408,54 @@ static void sun6i_csi_setup_bus(struct sun6i_csi_dev *sdev)
        case 16: /* No need to configure DATA_WIDTH for 16bit */
                break;
        default:
-               dev_warn(sdev->dev, "Unsupported bus width: %u\n", bus_width);
+               dev_warn(csi_dev->dev, "Unsupported bus width: %u\n", bus_width);
                break;
        }
 
-       regmap_write(sdev->regmap, CSI_IF_CFG_REG, cfg);
+       regmap_write(csi_dev->regmap, CSI_IF_CFG_REG, cfg);
 }
 
-static void sun6i_csi_set_format(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_set_format(struct sun6i_csi_device *csi_dev)
 {
-       struct sun6i_csi *csi = &sdev->csi;
+       struct sun6i_csi_config *config = &csi_dev->config;
        u32 cfg;
        u32 val;
 
-       regmap_read(sdev->regmap, CSI_CH_CFG_REG, &cfg);
+       regmap_read(csi_dev->regmap, CSI_CH_CFG_REG, &cfg);
 
        cfg &= ~(CSI_CH_CFG_INPUT_FMT_MASK |
                 CSI_CH_CFG_OUTPUT_FMT_MASK | CSI_CH_CFG_VFLIP_EN |
                 CSI_CH_CFG_HFLIP_EN | CSI_CH_CFG_FIELD_SEL_MASK |
                 CSI_CH_CFG_INPUT_SEQ_MASK);
 
-       val = get_csi_input_format(sdev, csi->config.code,
-                                  csi->config.pixelformat);
+       val = get_csi_input_format(csi_dev, config->code,
+                                  config->pixelformat);
        cfg |= CSI_CH_CFG_INPUT_FMT(val);
 
-       val = get_csi_output_format(sdev, csi->config.pixelformat,
-                                   csi->config.field);
+       val = get_csi_output_format(csi_dev, config->pixelformat,
+                                   config->field);
        cfg |= CSI_CH_CFG_OUTPUT_FMT(val);
 
-       val = get_csi_input_seq(sdev, csi->config.code,
-                               csi->config.pixelformat);
+       val = get_csi_input_seq(csi_dev, config->code,
+                               config->pixelformat);
        cfg |= CSI_CH_CFG_INPUT_SEQ(val);
 
-       if (csi->config.field == V4L2_FIELD_TOP)
+       if (config->field == V4L2_FIELD_TOP)
                cfg |= CSI_CH_CFG_FIELD_SEL_FIELD0;
-       else if (csi->config.field == V4L2_FIELD_BOTTOM)
+       else if (config->field == V4L2_FIELD_BOTTOM)
                cfg |= CSI_CH_CFG_FIELD_SEL_FIELD1;
        else
                cfg |= CSI_CH_CFG_FIELD_SEL_BOTH;
 
-       regmap_write(sdev->regmap, CSI_CH_CFG_REG, cfg);
+       regmap_write(csi_dev->regmap, CSI_CH_CFG_REG, cfg);
 }
 
-static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
+static void sun6i_csi_set_window(struct sun6i_csi_device *csi_dev)
 {
-       struct sun6i_csi_config *config = &sdev->csi.config;
+       struct sun6i_csi_config *config = &csi_dev->config;
        u32 bytesperline_y;
        u32 bytesperline_c;
-       int *planar_offset = sdev->planar_offset;
+       int *planar_offset = csi_dev->planar_offset;
        u32 width = config->width;
        u32 height = config->height;
        u32 hor_len = width;
@@ -509,7 +465,7 @@ static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
        case V4L2_PIX_FMT_YVYU:
        case V4L2_PIX_FMT_UYVY:
        case V4L2_PIX_FMT_VYUY:
-               dev_dbg(sdev->dev,
+               dev_dbg(csi_dev->dev,
                        "Horizontal length should be 2 times of width for packed YUV formats!\n");
                hor_len = width * 2;
                break;
@@ -517,10 +473,10 @@ static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
                break;
        }
 
-       regmap_write(sdev->regmap, CSI_CH_HSIZE_REG,
+       regmap_write(csi_dev->regmap, CSI_CH_HSIZE_REG,
                     CSI_CH_HSIZE_HOR_LEN(hor_len) |
                     CSI_CH_HSIZE_HOR_START(0));
-       regmap_write(sdev->regmap, CSI_CH_VSIZE_REG,
+       regmap_write(csi_dev->regmap, CSI_CH_VSIZE_REG,
                     CSI_CH_VSIZE_VER_LEN(height) |
                     CSI_CH_VSIZE_VER_START(0));
 
@@ -552,7 +508,7 @@ static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
                                bytesperline_c * height;
                break;
        default: /* raw */
-               dev_dbg(sdev->dev,
+               dev_dbg(csi_dev->dev,
                        "Calculating pixelformat(0x%x)'s bytesperline as a packed format\n",
                        config->pixelformat);
                bytesperline_y = (sun6i_csi_get_bpp(config->pixelformat) *
@@ -563,46 +519,42 @@ static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev)
                break;
        }
 
-       regmap_write(sdev->regmap, CSI_CH_BUF_LEN_REG,
+       regmap_write(csi_dev->regmap, CSI_CH_BUF_LEN_REG,
                     CSI_CH_BUF_LEN_BUF_LEN_C(bytesperline_c) |
                     CSI_CH_BUF_LEN_BUF_LEN_Y(bytesperline_y));
 }
 
-int sun6i_csi_update_config(struct sun6i_csi *csi,
+int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev,
                            struct sun6i_csi_config *config)
 {
-       struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-
        if (!config)
                return -EINVAL;
 
-       memcpy(&csi->config, config, sizeof(csi->config));
+       memcpy(&csi_dev->config, config, sizeof(csi_dev->config));
 
-       sun6i_csi_setup_bus(sdev);
-       sun6i_csi_set_format(sdev);
-       sun6i_csi_set_window(sdev);
+       sun6i_csi_setup_bus(csi_dev);
+       sun6i_csi_set_format(csi_dev);
+       sun6i_csi_set_window(csi_dev);
 
        return 0;
 }
 
-void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr)
+void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev,
+                              dma_addr_t addr)
 {
-       struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-
-       regmap_write(sdev->regmap, CSI_CH_F0_BUFA_REG,
-                    (addr + sdev->planar_offset[0]) >> 2);
-       if (sdev->planar_offset[1] != -1)
-               regmap_write(sdev->regmap, CSI_CH_F1_BUFA_REG,
-                            (addr + sdev->planar_offset[1]) >> 2);
-       if (sdev->planar_offset[2] != -1)
-               regmap_write(sdev->regmap, CSI_CH_F2_BUFA_REG,
-                            (addr + sdev->planar_offset[2]) >> 2);
+       regmap_write(csi_dev->regmap, CSI_CH_F0_BUFA_REG,
+                    (addr + csi_dev->planar_offset[0]) >> 2);
+       if (csi_dev->planar_offset[1] != -1)
+               regmap_write(csi_dev->regmap, CSI_CH_F1_BUFA_REG,
+                            (addr + csi_dev->planar_offset[1]) >> 2);
+       if (csi_dev->planar_offset[2] != -1)
+               regmap_write(csi_dev->regmap, CSI_CH_F2_BUFA_REG,
+                            (addr + csi_dev->planar_offset[2]) >> 2);
 }
 
-void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable)
+void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable)
 {
-       struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi);
-       struct regmap *regmap = sdev->regmap;
+       struct regmap *regmap = csi_dev->regmap;
 
        if (!enable) {
                regmap_update_bits(regmap, CSI_CAP_REG, CSI_CAP_CH0_VCAP_ON, 0);
@@ -623,10 +575,15 @@ void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable)
                           CSI_CAP_CH0_VCAP_ON);
 }
 
-/* -----------------------------------------------------------------------------
- * Media Controller and V4L2
- */
-static int sun6i_csi_link_entity(struct sun6i_csi *csi,
+/* Media */
+
+static const struct media_device_ops sun6i_csi_media_ops = {
+       .link_notify = v4l2_pipeline_link_notify,
+};
+
+/* V4L2 */
+
+static int sun6i_csi_link_entity(struct sun6i_csi_device *csi_dev,
                                 struct media_entity *entity,
                                 struct fwnode_handle *fwnode)
 {
@@ -637,24 +594,25 @@ static int sun6i_csi_link_entity(struct sun6i_csi *csi,
 
        ret = media_entity_get_fwnode_pad(entity, fwnode, MEDIA_PAD_FL_SOURCE);
        if (ret < 0) {
-               dev_err(csi->dev, "%s: no source pad in external entity %s\n",
-                       __func__, entity->name);
+               dev_err(csi_dev->dev,
+                       "%s: no source pad in external entity %s\n", __func__,
+                       entity->name);
                return -EINVAL;
        }
 
        src_pad_index = ret;
 
-       sink = &csi->video.vdev.entity;
-       sink_pad = &csi->video.pad;
+       sink = &csi_dev->video.video_dev.entity;
+       sink_pad = &csi_dev->video.pad;
 
-       dev_dbg(csi->dev, "creating %s:%u -> %s:%u link\n",
+       dev_dbg(csi_dev->dev, "creating %s:%u -> %s:%u link\n",
                entity->name, src_pad_index, sink->name, sink_pad->index);
        ret = media_create_pad_link(entity, src_pad_index, sink,
                                    sink_pad->index,
                                    MEDIA_LNK_FL_ENABLED |
                                    MEDIA_LNK_FL_IMMUTABLE);
        if (ret < 0) {
-               dev_err(csi->dev, "failed to create %s:%u -> %s:%u link\n",
+               dev_err(csi_dev->dev, "failed to create %s:%u -> %s:%u link\n",
                        entity->name, src_pad_index,
                        sink->name, sink_pad->index);
                return ret;
@@ -665,27 +623,29 @@ static int sun6i_csi_link_entity(struct sun6i_csi *csi,
 
 static int sun6i_subdev_notify_complete(struct v4l2_async_notifier *notifier)
 {
-       struct sun6i_csi *csi = container_of(notifier, struct sun6i_csi,
-                                            notifier);
-       struct v4l2_device *v4l2_dev = &csi->v4l2_dev;
+       struct sun6i_csi_device *csi_dev =
+               container_of(notifier, struct sun6i_csi_device,
+                            v4l2.notifier);
+       struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+       struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev;
        struct v4l2_subdev *sd;
        int ret;
 
-       dev_dbg(csi->dev, "notify complete, all subdevs registered\n");
+       dev_dbg(csi_dev->dev, "notify complete, all subdevs registered\n");
 
        sd = list_first_entry(&v4l2_dev->subdevs, struct v4l2_subdev, list);
        if (!sd)
                return -EINVAL;
 
-       ret = sun6i_csi_link_entity(csi, &sd->entity, sd->fwnode);
+       ret = sun6i_csi_link_entity(csi_dev, &sd->entity, sd->fwnode);
        if (ret < 0)
                return ret;
 
-       ret = v4l2_device_register_subdev_nodes(&csi->v4l2_dev);
+       ret = v4l2_device_register_subdev_nodes(v4l2_dev);
        if (ret < 0)
                return ret;
 
-       return media_device_register(&csi->media_dev);
+       return 0;
 }
 
 static const struct v4l2_async_notifier_operations sun6i_csi_async_ops = {
@@ -696,7 +656,7 @@ static int sun6i_csi_fwnode_parse(struct device *dev,
                                  struct v4l2_fwnode_endpoint *vep,
                                  struct v4l2_async_subdev *asd)
 {
-       struct sun6i_csi *csi = dev_get_drvdata(dev);
+       struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
 
        if (vep->base.port || vep->base.id) {
                dev_warn(dev, "Only support a single port with one endpoint\n");
@@ -706,7 +666,7 @@ static int sun6i_csi_fwnode_parse(struct device *dev,
        switch (vep->bus_type) {
        case V4L2_MBUS_PARALLEL:
        case V4L2_MBUS_BT656:
-               csi->v4l2_ep = *vep;
+               csi_dev->v4l2.v4l2_ep = *vep;
                return 0;
        default:
                dev_err(dev, "Unsupported media bus type\n");
@@ -714,87 +674,102 @@ static int sun6i_csi_fwnode_parse(struct device *dev,
        }
 }
 
-static void sun6i_csi_v4l2_cleanup(struct sun6i_csi *csi)
-{
-       media_device_unregister(&csi->media_dev);
-       v4l2_async_nf_unregister(&csi->notifier);
-       v4l2_async_nf_cleanup(&csi->notifier);
-       sun6i_video_cleanup(&csi->video);
-       v4l2_device_unregister(&csi->v4l2_dev);
-       v4l2_ctrl_handler_free(&csi->ctrl_handler);
-       media_device_cleanup(&csi->media_dev);
-}
-
-static int sun6i_csi_v4l2_init(struct sun6i_csi *csi)
+static int sun6i_csi_v4l2_setup(struct sun6i_csi_device *csi_dev)
 {
+       struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+       struct media_device *media_dev = &v4l2->media_dev;
+       struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev;
+       struct v4l2_async_notifier *notifier = &v4l2->notifier;
+       struct device *dev = csi_dev->dev;
        int ret;
 
-       csi->media_dev.dev = csi->dev;
-       strscpy(csi->media_dev.model, "Allwinner Video Capture Device",
-               sizeof(csi->media_dev.model));
-       csi->media_dev.hw_revision = 0;
+       /* Media Device */
+
+       strscpy(media_dev->model, SUN6I_CSI_DESCRIPTION,
+               sizeof(media_dev->model));
+       media_dev->hw_revision = 0;
+       media_dev->ops = &sun6i_csi_media_ops;
+       media_dev->dev = dev;
 
-       media_device_init(&csi->media_dev);
-       v4l2_async_nf_init(&csi->notifier);
+       media_device_init(media_dev);
 
-       ret = v4l2_ctrl_handler_init(&csi->ctrl_handler, 0);
+       ret = media_device_register(media_dev);
        if (ret) {
-               dev_err(csi->dev, "V4L2 controls handler init failed (%d)\n",
-                       ret);
-               goto clean_media;
+               dev_err(dev, "failed to register media device: %d\n", ret);
+               goto error_media;
        }
 
-       csi->v4l2_dev.mdev = &csi->media_dev;
-       csi->v4l2_dev.ctrl_handler = &csi->ctrl_handler;
-       ret = v4l2_device_register(csi->dev, &csi->v4l2_dev);
+       /* V4L2 Device */
+
+       v4l2_dev->mdev = media_dev;
+
+       ret = v4l2_device_register(dev, v4l2_dev);
        if (ret) {
-               dev_err(csi->dev, "V4L2 device registration failed (%d)\n",
-                       ret);
-               goto free_ctrl;
+               dev_err(dev, "failed to register v4l2 device: %d\n", ret);
+               goto error_media;
        }
 
-       ret = sun6i_video_init(&csi->video, csi, "sun6i-csi");
+       /* Video */
+
+       ret = sun6i_video_setup(csi_dev);
        if (ret)
-               goto unreg_v4l2;
+               goto error_v4l2_device;
 
-       ret = v4l2_async_nf_parse_fwnode_endpoints(csi->dev,
-                                                  &csi->notifier,
+       /* V4L2 Async */
+
+       v4l2_async_nf_init(notifier);
+       notifier->ops = &sun6i_csi_async_ops;
+
+       ret = v4l2_async_nf_parse_fwnode_endpoints(dev, notifier,
                                                   sizeof(struct
                                                          v4l2_async_subdev),
                                                   sun6i_csi_fwnode_parse);
        if (ret)
-               goto clean_video;
+               goto error_video;
 
-       csi->notifier.ops = &sun6i_csi_async_ops;
-
-       ret = v4l2_async_nf_register(&csi->v4l2_dev, &csi->notifier);
+       ret = v4l2_async_nf_register(v4l2_dev, notifier);
        if (ret) {
-               dev_err(csi->dev, "notifier registration failed\n");
-               goto clean_video;
+               dev_err(dev, "failed to register v4l2 async notifier: %d\n",
+                       ret);
+               goto error_v4l2_async_notifier;
        }
 
        return 0;
 
-clean_video:
-       sun6i_video_cleanup(&csi->video);
-unreg_v4l2:
-       v4l2_device_unregister(&csi->v4l2_dev);
-free_ctrl:
-       v4l2_ctrl_handler_free(&csi->ctrl_handler);
-clean_media:
-       v4l2_async_nf_cleanup(&csi->notifier);
-       media_device_cleanup(&csi->media_dev);
+error_v4l2_async_notifier:
+       v4l2_async_nf_cleanup(notifier);
+
+error_video:
+       sun6i_video_cleanup(csi_dev);
+
+error_v4l2_device:
+       v4l2_device_unregister(&v4l2->v4l2_dev);
+
+error_media:
+       media_device_unregister(media_dev);
+       media_device_cleanup(media_dev);
 
        return ret;
 }
 
-/* -----------------------------------------------------------------------------
- * Resources and IRQ
- */
-static irqreturn_t sun6i_csi_isr(int irq, void *dev_id)
+static void sun6i_csi_v4l2_cleanup(struct sun6i_csi_device *csi_dev)
 {
-       struct sun6i_csi_dev *sdev = (struct sun6i_csi_dev *)dev_id;
-       struct regmap *regmap = sdev->regmap;
+       struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2;
+
+       media_device_unregister(&v4l2->media_dev);
+       v4l2_async_nf_unregister(&v4l2->notifier);
+       v4l2_async_nf_cleanup(&v4l2->notifier);
+       sun6i_video_cleanup(csi_dev);
+       v4l2_device_unregister(&v4l2->v4l2_dev);
+       media_device_cleanup(&v4l2->media_dev);
+}
+
+/* Platform */
+
+static irqreturn_t sun6i_csi_interrupt(int irq, void *private)
+{
+       struct sun6i_csi_device *csi_dev = private;
+       struct regmap *regmap = csi_dev->regmap;
        u32 status;
 
        regmap_read(regmap, CSI_CH_INT_STA_REG, &status);
@@ -814,13 +789,63 @@ static irqreturn_t sun6i_csi_isr(int irq, void *dev_id)
        }
 
        if (status & CSI_CH_INT_STA_FD_PD)
-               sun6i_video_frame_done(&sdev->csi.video);
+               sun6i_video_frame_done(csi_dev);
 
        regmap_write(regmap, CSI_CH_INT_STA_REG, status);
 
        return IRQ_HANDLED;
 }
 
+static int sun6i_csi_suspend(struct device *dev)
+{
+       struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
+
+       reset_control_assert(csi_dev->reset);
+       clk_disable_unprepare(csi_dev->clock_ram);
+       clk_disable_unprepare(csi_dev->clock_mod);
+
+       return 0;
+}
+
+static int sun6i_csi_resume(struct device *dev)
+{
+       struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev);
+       int ret;
+
+       ret = reset_control_deassert(csi_dev->reset);
+       if (ret) {
+               dev_err(dev, "failed to deassert reset\n");
+               return ret;
+       }
+
+       ret = clk_prepare_enable(csi_dev->clock_mod);
+       if (ret) {
+               dev_err(dev, "failed to enable module clock\n");
+               goto error_reset;
+       }
+
+       ret = clk_prepare_enable(csi_dev->clock_ram);
+       if (ret) {
+               dev_err(dev, "failed to enable ram clock\n");
+               goto error_clock_mod;
+       }
+
+       return 0;
+
+error_clock_mod:
+       clk_disable_unprepare(csi_dev->clock_mod);
+
+error_reset:
+       reset_control_assert(csi_dev->reset);
+
+       return ret;
+}
+
+static const struct dev_pm_ops sun6i_csi_pm_ops = {
+       .runtime_suspend        = sun6i_csi_suspend,
+       .runtime_resume         = sun6i_csi_resume,
+};
+
 static const struct regmap_config sun6i_csi_regmap_config = {
        .reg_bits       = 32,
        .reg_stride     = 4,
@@ -828,106 +853,181 @@ static const struct regmap_config sun6i_csi_regmap_config = {
        .max_register   = 0x9c,
 };
 
-static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
-                                     struct platform_device *pdev)
+static int sun6i_csi_resources_setup(struct sun6i_csi_device *csi_dev,
+                                    struct platform_device *platform_dev)
 {
+       struct device *dev = csi_dev->dev;
+       const struct sun6i_csi_variant *variant;
        void __iomem *io_base;
        int ret;
        int irq;
 
-       io_base = devm_platform_ioremap_resource(pdev, 0);
+       variant = of_device_get_match_data(dev);
+       if (!variant)
+               return -EINVAL;
+
+       /* Registers */
+
+       io_base = devm_platform_ioremap_resource(platform_dev, 0);
        if (IS_ERR(io_base))
                return PTR_ERR(io_base);
 
-       sdev->regmap = devm_regmap_init_mmio_clk(&pdev->dev, "bus", io_base,
-                                                &sun6i_csi_regmap_config);
-       if (IS_ERR(sdev->regmap)) {
-               dev_err(&pdev->dev, "Failed to init register map\n");
-               return PTR_ERR(sdev->regmap);
+       csi_dev->regmap = devm_regmap_init_mmio_clk(dev, "bus", io_base,
+                                                   &sun6i_csi_regmap_config);
+       if (IS_ERR(csi_dev->regmap)) {
+               dev_err(dev, "failed to init register map\n");
+               return PTR_ERR(csi_dev->regmap);
        }
 
-       sdev->clk_mod = devm_clk_get(&pdev->dev, "mod");
-       if (IS_ERR(sdev->clk_mod)) {
-               dev_err(&pdev->dev, "Unable to acquire csi clock\n");
-               return PTR_ERR(sdev->clk_mod);
+       /* Clocks */
+
+       csi_dev->clock_mod = devm_clk_get(dev, "mod");
+       if (IS_ERR(csi_dev->clock_mod)) {
+               dev_err(dev, "failed to acquire module clock\n");
+               return PTR_ERR(csi_dev->clock_mod);
        }
 
-       sdev->clk_ram = devm_clk_get(&pdev->dev, "ram");
-       if (IS_ERR(sdev->clk_ram)) {
-               dev_err(&pdev->dev, "Unable to acquire dram-csi clock\n");
-               return PTR_ERR(sdev->clk_ram);
+       csi_dev->clock_ram = devm_clk_get(dev, "ram");
+       if (IS_ERR(csi_dev->clock_ram)) {
+               dev_err(dev, "failed to acquire ram clock\n");
+               return PTR_ERR(csi_dev->clock_ram);
        }
 
-       sdev->rstc_bus = devm_reset_control_get_shared(&pdev->dev, NULL);
-       if (IS_ERR(sdev->rstc_bus)) {
-               dev_err(&pdev->dev, "Cannot get reset controller\n");
-               return PTR_ERR(sdev->rstc_bus);
+       ret = clk_set_rate_exclusive(csi_dev->clock_mod,
+                                    variant->clock_mod_rate);
+       if (ret) {
+               dev_err(dev, "failed to set mod clock rate\n");
+               return ret;
+       }
+
+       /* Reset */
+
+       csi_dev->reset = devm_reset_control_get_shared(dev, NULL);
+       if (IS_ERR(csi_dev->reset)) {
+               dev_err(dev, "failed to acquire reset\n");
+               ret = PTR_ERR(csi_dev->reset);
+               goto error_clock_rate_exclusive;
        }
 
-       irq = platform_get_irq(pdev, 0);
-       if (irq < 0)
-               return -ENXIO;
+       /* Interrupt */
 
-       ret = devm_request_irq(&pdev->dev, irq, sun6i_csi_isr, 0, MODULE_NAME,
-                              sdev);
+       irq = platform_get_irq(platform_dev, 0);
+       if (irq < 0) {
+               dev_err(dev, "failed to get interrupt\n");
+               ret = -ENXIO;
+               goto error_clock_rate_exclusive;
+       }
+
+       ret = devm_request_irq(dev, irq, sun6i_csi_interrupt, 0, SUN6I_CSI_NAME,
+                              csi_dev);
        if (ret) {
-               dev_err(&pdev->dev, "Cannot request csi IRQ\n");
-               return ret;
+               dev_err(dev, "failed to request interrupt\n");
+               goto error_clock_rate_exclusive;
        }
 
+       /* Runtime PM */
+
+       pm_runtime_enable(dev);
+
        return 0;
+
+error_clock_rate_exclusive:
+       clk_rate_exclusive_put(csi_dev->clock_mod);
+
+       return ret;
+}
+
+static void sun6i_csi_resources_cleanup(struct sun6i_csi_device *csi_dev)
+{
+       pm_runtime_disable(csi_dev->dev);
+       clk_rate_exclusive_put(csi_dev->clock_mod);
 }
 
-static int sun6i_csi_probe(struct platform_device *pdev)
+static int sun6i_csi_probe(struct platform_device *platform_dev)
 {
-       struct sun6i_csi_dev *sdev;
+       struct sun6i_csi_device *csi_dev;
+       struct device *dev = &platform_dev->dev;
        int ret;
 
-       sdev = devm_kzalloc(&pdev->dev, sizeof(*sdev), GFP_KERNEL);
-       if (!sdev)
+       csi_dev = devm_kzalloc(dev, sizeof(*csi_dev), GFP_KERNEL);
+       if (!csi_dev)
                return -ENOMEM;
 
-       sdev->dev = &pdev->dev;
+       csi_dev->dev = &platform_dev->dev;
+       platform_set_drvdata(platform_dev, csi_dev);
 
-       ret = sun6i_csi_resource_request(sdev, pdev);
+       ret = sun6i_csi_resources_setup(csi_dev, platform_dev);
        if (ret)
                return ret;
 
-       platform_set_drvdata(pdev, sdev);
+       ret = sun6i_csi_v4l2_setup(csi_dev);
+       if (ret)
+               goto error_resources;
+
+       return 0;
 
-       sdev->csi.dev = &pdev->dev;
-       return sun6i_csi_v4l2_init(&sdev->csi);
+error_resources:
+       sun6i_csi_resources_cleanup(csi_dev);
+
+       return ret;
 }
 
 static int sun6i_csi_remove(struct platform_device *pdev)
 {
-       struct sun6i_csi_dev *sdev = platform_get_drvdata(pdev);
+       struct sun6i_csi_device *csi_dev = platform_get_drvdata(pdev);
 
-       sun6i_csi_v4l2_cleanup(&sdev->csi);
+       sun6i_csi_v4l2_cleanup(csi_dev);
+       sun6i_csi_resources_cleanup(csi_dev);
 
        return 0;
 }
 
+static const struct sun6i_csi_variant sun6i_a31_csi_variant = {
+       .clock_mod_rate = 297000000,
+};
+
+static const struct sun6i_csi_variant sun50i_a64_csi_variant = {
+       .clock_mod_rate = 300000000,
+};
+
 static const struct of_device_id sun6i_csi_of_match[] = {
-       { .compatible = "allwinner,sun6i-a31-csi", },
-       { .compatible = "allwinner,sun8i-a83t-csi", },
-       { .compatible = "allwinner,sun8i-h3-csi", },
-       { .compatible = "allwinner,sun8i-v3s-csi", },
-       { .compatible = "allwinner,sun50i-a64-csi", },
+       {
+               .compatible     = "allwinner,sun6i-a31-csi",
+               .data           = &sun6i_a31_csi_variant,
+       },
+       {
+               .compatible     = "allwinner,sun8i-a83t-csi",
+               .data           = &sun6i_a31_csi_variant,
+       },
+       {
+               .compatible     = "allwinner,sun8i-h3-csi",
+               .data           = &sun6i_a31_csi_variant,
+       },
+       {
+               .compatible     = "allwinner,sun8i-v3s-csi",
+               .data           = &sun6i_a31_csi_variant,
+       },
+       {
+               .compatible     = "allwinner,sun50i-a64-csi",
+               .data           = &sun50i_a64_csi_variant,
+       },
        {},
 };
+
 MODULE_DEVICE_TABLE(of, sun6i_csi_of_match);
 
 static struct platform_driver sun6i_csi_platform_driver = {
-       .probe = sun6i_csi_probe,
-       .remove = sun6i_csi_remove,
-       .driver = {
-               .name = MODULE_NAME,
-               .of_match_table = of_match_ptr(sun6i_csi_of_match),
+       .probe  = sun6i_csi_probe,
+       .remove = sun6i_csi_remove,
+       .driver = {
+               .name           = SUN6I_CSI_NAME,
+               .of_match_table = of_match_ptr(sun6i_csi_of_match),
+               .pm             = &sun6i_csi_pm_ops,
        },
 };
+
 module_platform_driver(sun6i_csi_platform_driver);
 
-MODULE_DESCRIPTION("Allwinner V3s Camera Sensor Interface driver");
+MODULE_DESCRIPTION("Allwinner A31 Camera Sensor Interface driver");
 MODULE_AUTHOR("Yong Deng <yong.deng@magewell.com>");
 MODULE_LICENSE("GPL");
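The `.c` diff above replaces the flat `of_device_id` table with per-SoC `sun6i_csi_variant` entries carried in the `.data` pointer, which `sun6i_csi_resources_setup()` retrieves via `of_device_get_match_data()` to pick the module clock rate. The pattern can be sketched in plain user-space C; the `struct of_id`, `get_match_data()` and `csi_variant` names here are stand-ins for the kernel types, not the real API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel's of_device_id / variant types. */
struct csi_variant {
	unsigned long clock_mod_rate;
};

struct of_id {
	const char *compatible;
	const struct csi_variant *data;
};

static const struct csi_variant a31_variant = { .clock_mod_rate = 297000000UL };
static const struct csi_variant a64_variant = { .clock_mod_rate = 300000000UL };

/* Mirrors sun6i_csi_of_match: several compatibles share one variant. */
static const struct of_id csi_of_match[] = {
	{ "allwinner,sun6i-a31-csi",  &a31_variant },
	{ "allwinner,sun8i-h3-csi",   &a31_variant },
	{ "allwinner,sun50i-a64-csi", &a64_variant },
	{ NULL, NULL },
};

/* Models what of_device_get_match_data() returns for the probing device. */
static const struct csi_variant *get_match_data(const char *compatible)
{
	const struct of_id *id;

	for (id = csi_of_match; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->data;

	return NULL;
}
```

Returning `-EINVAL` when the lookup yields `NULL`, as the patch does, keeps probe from continuing with an undefined clock rate.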
index 3a38d10..bab7056 100644
@@ -8,13 +8,22 @@
 #ifndef __SUN6I_CSI_H__
 #define __SUN6I_CSI_H__
 
-#include <media/v4l2-ctrls.h>
 #include <media/v4l2-device.h>
 #include <media/v4l2-fwnode.h>
+#include <media/videobuf2-v4l2.h>
 
 #include "sun6i_video.h"
 
-struct sun6i_csi;
+#define SUN6I_CSI_NAME         "sun6i-csi"
+#define SUN6I_CSI_DESCRIPTION  "Allwinner A31 CSI Device"
+
+struct sun6i_csi_buffer {
+       struct vb2_v4l2_buffer          v4l2_buffer;
+       struct list_head                list;
+
+       dma_addr_t                      dma_addr;
+       bool                            queued_to_csi;
+};
 
 /**
  * struct sun6i_csi_config - configs for sun6i csi
@@ -32,59 +41,78 @@ struct sun6i_csi_config {
        u32             height;
 };
 
-struct sun6i_csi {
-       struct device                   *dev;
-       struct v4l2_ctrl_handler        ctrl_handler;
+struct sun6i_csi_v4l2 {
        struct v4l2_device              v4l2_dev;
        struct media_device             media_dev;
 
        struct v4l2_async_notifier      notifier;
-
        /* video port settings */
        struct v4l2_fwnode_endpoint     v4l2_ep;
+};
 
-       struct sun6i_csi_config         config;
+struct sun6i_csi_device {
+       struct device                   *dev;
 
+       struct sun6i_csi_config         config;
+       struct sun6i_csi_v4l2           v4l2;
        struct sun6i_video              video;
+
+       struct regmap                   *regmap;
+       struct clk                      *clock_mod;
+       struct clk                      *clock_ram;
+       struct reset_control            *reset;
+
+       int                             planar_offset[3];
+};
+
+struct sun6i_csi_variant {
+       unsigned long   clock_mod_rate;
 };
 
 /**
  * sun6i_csi_is_format_supported() - check if the format supported by csi
- * @csi:       pointer to the csi
+ * @csi_dev:   pointer to the csi device
  * @pixformat: v4l2 pixel format (V4L2_PIX_FMT_*)
  * @mbus_code: media bus format code (MEDIA_BUS_FMT_*)
+ *
+ * Return: true if format is supported, false otherwise.
  */
-bool sun6i_csi_is_format_supported(struct sun6i_csi *csi, u32 pixformat,
-                                  u32 mbus_code);
+bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev,
+                                  u32 pixformat, u32 mbus_code);
 
 /**
  * sun6i_csi_set_power() - power on/off the csi
- * @csi:       pointer to the csi
+ * @csi_dev:   pointer to the csi device
  * @enable:    on/off
+ *
+ * Return: 0 if successful, error code otherwise.
  */
-int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable);
+int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable);
 
 /**
  * sun6i_csi_update_config() - update the csi register settings
- * @csi:       pointer to the csi
+ * @csi_dev:   pointer to the csi device
  * @config:    see struct sun6i_csi_config
+ *
+ * Return: 0 if successful, error code otherwise.
  */
-int sun6i_csi_update_config(struct sun6i_csi *csi,
+int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev,
                            struct sun6i_csi_config *config);
 
 /**
  * sun6i_csi_update_buf_addr() - update the csi frame buffer address
- * @csi:       pointer to the csi
+ * @csi_dev:   pointer to the csi device
  * @addr:      frame buffer's physical address
  */
-void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr);
+void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev,
+                              dma_addr_t addr);
 
 /**
  * sun6i_csi_set_stream() - start/stop csi streaming
- * @csi:       pointer to the csi
+ * @csi_dev:   pointer to the csi device
  * @enable:    start/stop
  */
-void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable);
+void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable);
 
 /* get bpp from v4l2 pixformat */
 static inline int sun6i_csi_get_bpp(unsigned int pixformat)
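The `sun6i_video.c` diff below renames `is_pixformat_valid()` to `sun6i_video_format_check()`; both are a plain linear scan over the supported-format table. A minimal runnable sketch of that check, using a local `FOURCC` helper and a two-entry table as assumptions (the real driver's table lists many more V4L2 fourccs):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Local equivalent of the kernel's v4l2_fourcc() macro. */
#define FOURCC(a, b, c, d)						\
	((uint32_t)(a) | ((uint32_t)(b) << 8) |				\
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Abbreviated stand-in for the driver's sun6i_video_formats[] table. */
static const uint32_t video_formats[] = {
	FOURCC('B', 'A', '8', '1'),	/* V4L2_PIX_FMT_SBGGR8 */
	FOURCC('J', 'P', 'E', 'G'),	/* V4L2_PIX_FMT_JPEG */
};

/* Same shape as sun6i_video_format_check(): linear scan of the table. */
static bool format_check(uint32_t format)
{
	size_t i;

	for (i = 0; i < sizeof(video_formats) / sizeof(video_formats[0]); i++)
		if (video_formats[i] == format)
			return true;

	return false;
}
```

A linear scan is fine here: the table is small and the check runs only on the ioctl path, not per frame.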
index 74d64a2..791583d 100644
 #define MAX_WIDTH      (4800)
 #define MAX_HEIGHT     (4800)
 
-struct sun6i_csi_buffer {
-       struct vb2_v4l2_buffer          vb;
-       struct list_head                list;
+/* Helpers */
 
-       dma_addr_t                      dma_addr;
-       bool                            queued_to_csi;
-};
+static struct v4l2_subdev *
+sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
+{
+       struct media_pad *remote;
+
+       remote = media_pad_remote_pad_first(&video->pad);
+
+       if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
+               return NULL;
+
+       if (pad)
+               *pad = remote->index;
 
-static const u32 supported_pixformats[] = {
+       return media_entity_to_v4l2_subdev(remote->entity);
+}
+
+/* Format */
+
+static const u32 sun6i_video_formats[] = {
        V4L2_PIX_FMT_SBGGR8,
        V4L2_PIX_FMT_SGBRG8,
        V4L2_PIX_FMT_SGRBG8,
@@ -61,119 +73,138 @@ static const u32 supported_pixformats[] = {
        V4L2_PIX_FMT_JPEG,
 };
 
-static bool is_pixformat_valid(unsigned int pixformat)
+static bool sun6i_video_format_check(u32 format)
 {
        unsigned int i;
 
-       for (i = 0; i < ARRAY_SIZE(supported_pixformats); i++)
-               if (supported_pixformats[i] == pixformat)
+       for (i = 0; i < ARRAY_SIZE(sun6i_video_formats); i++)
+               if (sun6i_video_formats[i] == format)
                        return true;
 
        return false;
 }
 
-static struct v4l2_subdev *
-sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
-{
-       struct media_pad *remote;
+/* Video */
 
-       remote = media_pad_remote_pad_first(&video->pad);
+static void sun6i_video_buffer_configure(struct sun6i_csi_device *csi_dev,
+                                        struct sun6i_csi_buffer *csi_buffer)
+{
+       csi_buffer->queued_to_csi = true;
+       sun6i_csi_update_buf_addr(csi_dev, csi_buffer->dma_addr);
+}
 
-       if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
-               return NULL;
+static void sun6i_video_configure(struct sun6i_csi_device *csi_dev)
+{
+       struct sun6i_video *video = &csi_dev->video;
+       struct sun6i_csi_config config = { 0 };
 
-       if (pad)
-               *pad = remote->index;
+       config.pixelformat = video->format.fmt.pix.pixelformat;
+       config.code = video->mbus_code;
+       config.field = video->format.fmt.pix.field;
+       config.width = video->format.fmt.pix.width;
+       config.height = video->format.fmt.pix.height;
 
-       return media_entity_to_v4l2_subdev(remote->entity);
+       sun6i_csi_update_config(csi_dev, &config);
 }
 
-static int sun6i_video_queue_setup(struct vb2_queue *vq,
-                                  unsigned int *nbuffers,
-                                  unsigned int *nplanes,
+/* Queue */
+
+static int sun6i_video_queue_setup(struct vb2_queue *queue,
+                                  unsigned int *buffers_count,
+                                  unsigned int *planes_count,
                                   unsigned int sizes[],
                                   struct device *alloc_devs[])
 {
-       struct sun6i_video *video = vb2_get_drv_priv(vq);
-       unsigned int size = video->fmt.fmt.pix.sizeimage;
+       struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+       struct sun6i_video *video = &csi_dev->video;
+       unsigned int size = video->format.fmt.pix.sizeimage;
 
-       if (*nplanes)
+       if (*planes_count)
                return sizes[0] < size ? -EINVAL : 0;
 
-       *nplanes = 1;
+       *planes_count = 1;
        sizes[0] = size;
 
        return 0;
 }
 
-static int sun6i_video_buffer_prepare(struct vb2_buffer *vb)
+static int sun6i_video_buffer_prepare(struct vb2_buffer *buffer)
 {
-       struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
-       struct sun6i_csi_buffer *buf =
-                       container_of(vbuf, struct sun6i_csi_buffer, vb);
-       struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
-       unsigned long size = video->fmt.fmt.pix.sizeimage;
-
-       if (vb2_plane_size(vb, 0) < size) {
-               v4l2_err(video->vdev.v4l2_dev, "buffer too small (%lu < %lu)\n",
-                        vb2_plane_size(vb, 0), size);
+       struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
+       struct sun6i_video *video = &csi_dev->video;
+       struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
+       struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
+       struct sun6i_csi_buffer *csi_buffer =
+               container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
+       unsigned long size = video->format.fmt.pix.sizeimage;
+
+       if (vb2_plane_size(buffer, 0) < size) {
+               v4l2_err(v4l2_dev, "buffer too small (%lu < %lu)\n",
+                        vb2_plane_size(buffer, 0), size);
                return -EINVAL;
        }
 
-       vb2_set_plane_payload(vb, 0, size);
-
-       buf->dma_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
+       vb2_set_plane_payload(buffer, 0, size);
 
-       vbuf->field = video->fmt.fmt.pix.field;
+       csi_buffer->dma_addr = vb2_dma_contig_plane_dma_addr(buffer, 0);
+       v4l2_buffer->field = video->format.fmt.pix.field;
 
        return 0;
 }
 
-static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+static void sun6i_video_buffer_queue(struct vb2_buffer *buffer)
+{
+       struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
+       struct sun6i_video *video = &csi_dev->video;
+       struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
+       struct sun6i_csi_buffer *csi_buffer =
+               container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
+       unsigned long flags;
+
+       spin_lock_irqsave(&video->dma_queue_lock, flags);
+       csi_buffer->queued_to_csi = false;
+       list_add_tail(&csi_buffer->list, &video->dma_queue);
+       spin_unlock_irqrestore(&video->dma_queue_lock, flags);
+}
+
+static int sun6i_video_start_streaming(struct vb2_queue *queue,
+                                      unsigned int count)
 {
-       struct sun6i_video *video = vb2_get_drv_priv(vq);
+       struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+       struct sun6i_video *video = &csi_dev->video;
+       struct video_device *video_dev = &video->video_dev;
        struct sun6i_csi_buffer *buf;
        struct sun6i_csi_buffer *next_buf;
-       struct sun6i_csi_config config;
        struct v4l2_subdev *subdev;
        unsigned long flags;
        int ret;
 
        video->sequence = 0;
 
-       ret = media_pipeline_start(&video->vdev.entity, &video->vdev.pipe);
+       ret = video_device_pipeline_alloc_start(video_dev);
        if (ret < 0)
-               goto clear_dma_queue;
+               goto error_dma_queue_flush;
 
        if (video->mbus_code == 0) {
                ret = -EINVAL;
-               goto stop_media_pipeline;
+               goto error_media_pipeline;
        }
 
        subdev = sun6i_video_remote_subdev(video, NULL);
        if (!subdev) {
                ret = -EINVAL;
-               goto stop_media_pipeline;
+               goto error_media_pipeline;
        }
 
-       config.pixelformat = video->fmt.fmt.pix.pixelformat;
-       config.code = video->mbus_code;
-       config.field = video->fmt.fmt.pix.field;
-       config.width = video->fmt.fmt.pix.width;
-       config.height = video->fmt.fmt.pix.height;
-
-       ret = sun6i_csi_update_config(video->csi, &config);
-       if (ret < 0)
-               goto stop_media_pipeline;
+       sun6i_video_configure(csi_dev);
 
        spin_lock_irqsave(&video->dma_queue_lock, flags);
 
        buf = list_first_entry(&video->dma_queue,
                               struct sun6i_csi_buffer, list);
-       buf->queued_to_csi = true;
-       sun6i_csi_update_buf_addr(video->csi, buf->dma_addr);
+       sun6i_video_buffer_configure(csi_dev, buf);
 
-       sun6i_csi_set_stream(video->csi, true);
+       sun6i_csi_set_stream(csi_dev, true);
 
        /*
         * CSI will lookup the next dma buffer for next frame before the
@@ -193,34 +224,37 @@ static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
         * would also drop frame when lacking of queued buffer.
         */
        next_buf = list_next_entry(buf, list);
-       next_buf->queued_to_csi = true;
-       sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
+       sun6i_video_buffer_configure(csi_dev, next_buf);
 
        spin_unlock_irqrestore(&video->dma_queue_lock, flags);
 
        ret = v4l2_subdev_call(subdev, video, s_stream, 1);
        if (ret && ret != -ENOIOCTLCMD)
-               goto stop_csi_stream;
+               goto error_stream;
 
        return 0;
 
-stop_csi_stream:
-       sun6i_csi_set_stream(video->csi, false);
-stop_media_pipeline:
-       media_pipeline_stop(&video->vdev.entity);
-clear_dma_queue:
+error_stream:
+       sun6i_csi_set_stream(csi_dev, false);
+
+error_media_pipeline:
+       video_device_pipeline_stop(video_dev);
+
+error_dma_queue_flush:
        spin_lock_irqsave(&video->dma_queue_lock, flags);
        list_for_each_entry(buf, &video->dma_queue, list)
-               vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_QUEUED);
+               vb2_buffer_done(&buf->v4l2_buffer.vb2_buf,
+                               VB2_BUF_STATE_QUEUED);
        INIT_LIST_HEAD(&video->dma_queue);
        spin_unlock_irqrestore(&video->dma_queue_lock, flags);
 
        return ret;
 }
 
-static void sun6i_video_stop_streaming(struct vb2_queue *vq)
+static void sun6i_video_stop_streaming(struct vb2_queue *queue)
 {
-       struct sun6i_video *video = vb2_get_drv_priv(vq);
+       struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
+       struct sun6i_video *video = &csi_dev->video;
        struct v4l2_subdev *subdev;
        unsigned long flags;
        struct sun6i_csi_buffer *buf;
@@ -229,45 +263,32 @@ static void sun6i_video_stop_streaming(struct vb2_queue *vq)
        if (subdev)
                v4l2_subdev_call(subdev, video, s_stream, 0);
 
-       sun6i_csi_set_stream(video->csi, false);
+       sun6i_csi_set_stream(csi_dev, false);
 
-       media_pipeline_stop(&video->vdev.entity);
+       video_device_pipeline_stop(&video->video_dev);
 
        /* Release all active buffers */
        spin_lock_irqsave(&video->dma_queue_lock, flags);
        list_for_each_entry(buf, &video->dma_queue, list)
-               vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+               vb2_buffer_done(&buf->v4l2_buffer.vb2_buf, VB2_BUF_STATE_ERROR);
        INIT_LIST_HEAD(&video->dma_queue);
        spin_unlock_irqrestore(&video->dma_queue_lock, flags);
 }
 
-static void sun6i_video_buffer_queue(struct vb2_buffer *vb)
-{
-       struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
-       struct sun6i_csi_buffer *buf =
-                       container_of(vbuf, struct sun6i_csi_buffer, vb);
-       struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
-       unsigned long flags;
-
-       spin_lock_irqsave(&video->dma_queue_lock, flags);
-       buf->queued_to_csi = false;
-       list_add_tail(&buf->list, &video->dma_queue);
-       spin_unlock_irqrestore(&video->dma_queue_lock, flags);
-}
-
-void sun6i_video_frame_done(struct sun6i_video *video)
+void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev)
 {
+       struct sun6i_video *video = &csi_dev->video;
        struct sun6i_csi_buffer *buf;
        struct sun6i_csi_buffer *next_buf;
-       struct vb2_v4l2_buffer *vbuf;
+       struct vb2_v4l2_buffer *v4l2_buffer;
 
        spin_lock(&video->dma_queue_lock);
 
        buf = list_first_entry(&video->dma_queue,
                               struct sun6i_csi_buffer, list);
        if (list_is_last(&buf->list, &video->dma_queue)) {
-               dev_dbg(video->csi->dev, "Frame dropped!\n");
-               goto unlock;
+               dev_dbg(csi_dev->dev, "Frame dropped!\n");
+               goto complete;
        }
 
        next_buf = list_next_entry(buf, list);
@@ -277,200 +298,204 @@ void sun6i_video_frame_done(struct sun6i_video *video)
         * for next ISR call.
         */
        if (!next_buf->queued_to_csi) {
-               next_buf->queued_to_csi = true;
-               sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
-               dev_dbg(video->csi->dev, "Frame dropped!\n");
-               goto unlock;
+               sun6i_video_buffer_configure(csi_dev, next_buf);
+               dev_dbg(csi_dev->dev, "Frame dropped!\n");
+               goto complete;
        }
 
        list_del(&buf->list);
-       vbuf = &buf->vb;
-       vbuf->vb2_buf.timestamp = ktime_get_ns();
-       vbuf->sequence = video->sequence;
-       vb2_buffer_done(&vbuf->vb2_buf, VB2_BUF_STATE_DONE);
+       v4l2_buffer = &buf->v4l2_buffer;
+       v4l2_buffer->vb2_buf.timestamp = ktime_get_ns();
+       v4l2_buffer->sequence = video->sequence;
+       vb2_buffer_done(&v4l2_buffer->vb2_buf, VB2_BUF_STATE_DONE);
 
        /* Prepare buffer for next frame but one.  */
        if (!list_is_last(&next_buf->list, &video->dma_queue)) {
                next_buf = list_next_entry(next_buf, list);
-               next_buf->queued_to_csi = true;
-               sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
+               sun6i_video_buffer_configure(csi_dev, next_buf);
        } else {
-               dev_dbg(video->csi->dev, "Next frame will be dropped!\n");
+               dev_dbg(csi_dev->dev, "Next frame will be dropped!\n");
        }
 
-unlock:
+complete:
        video->sequence++;
        spin_unlock(&video->dma_queue_lock);
 }
 
-static const struct vb2_ops sun6i_csi_vb2_ops = {
+static const struct vb2_ops sun6i_video_queue_ops = {
        .queue_setup            = sun6i_video_queue_setup,
-       .wait_prepare           = vb2_ops_wait_prepare,
-       .wait_finish            = vb2_ops_wait_finish,
        .buf_prepare            = sun6i_video_buffer_prepare,
+       .buf_queue              = sun6i_video_buffer_queue,
        .start_streaming        = sun6i_video_start_streaming,
        .stop_streaming         = sun6i_video_stop_streaming,
-       .buf_queue              = sun6i_video_buffer_queue,
+       .wait_prepare           = vb2_ops_wait_prepare,
+       .wait_finish            = vb2_ops_wait_finish,
 };
 
-static int vidioc_querycap(struct file *file, void *priv,
-                          struct v4l2_capability *cap)
+/* V4L2 Device */
+
+static int sun6i_video_querycap(struct file *file, void *private,
+                               struct v4l2_capability *capability)
 {
-       struct sun6i_video *video = video_drvdata(file);
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct video_device *video_dev = &csi_dev->video.video_dev;
 
-       strscpy(cap->driver, "sun6i-video", sizeof(cap->driver));
-       strscpy(cap->card, video->vdev.name, sizeof(cap->card));
-       snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s",
-                video->csi->dev->of_node->name);
+       strscpy(capability->driver, SUN6I_CSI_NAME, sizeof(capability->driver));
+       strscpy(capability->card, video_dev->name, sizeof(capability->card));
+       snprintf(capability->bus_info, sizeof(capability->bus_info),
+                "platform:%s", dev_name(csi_dev->dev));
 
        return 0;
 }
 
-static int vidioc_enum_fmt_vid_cap(struct file *file, void *priv,
-                                  struct v4l2_fmtdesc *f)
+static int sun6i_video_enum_fmt(struct file *file, void *private,
+                               struct v4l2_fmtdesc *fmtdesc)
 {
-       u32 index = f->index;
+       u32 index = fmtdesc->index;
 
-       if (index >= ARRAY_SIZE(supported_pixformats))
+       if (index >= ARRAY_SIZE(sun6i_video_formats))
                return -EINVAL;
 
-       f->pixelformat = supported_pixformats[index];
+       fmtdesc->pixelformat = sun6i_video_formats[index];
 
        return 0;
 }
 
-static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
-                               struct v4l2_format *fmt)
+static int sun6i_video_g_fmt(struct file *file, void *private,
+                            struct v4l2_format *format)
 {
-       struct sun6i_video *video = video_drvdata(file);
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct sun6i_video *video = &csi_dev->video;
 
-       *fmt = video->fmt;
+       *format = video->format;
 
        return 0;
 }
 
-static int sun6i_video_try_fmt(struct sun6i_video *video,
-                              struct v4l2_format *f)
+static int sun6i_video_format_try(struct sun6i_video *video,
+                                 struct v4l2_format *format)
 {
-       struct v4l2_pix_format *pixfmt = &f->fmt.pix;
+       struct v4l2_pix_format *pix_format = &format->fmt.pix;
        int bpp;
 
-       if (!is_pixformat_valid(pixfmt->pixelformat))
-               pixfmt->pixelformat = supported_pixformats[0];
+       if (!sun6i_video_format_check(pix_format->pixelformat))
+               pix_format->pixelformat = sun6i_video_formats[0];
 
-       v4l_bound_align_image(&pixfmt->width, MIN_WIDTH, MAX_WIDTH, 1,
-                             &pixfmt->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
+       v4l_bound_align_image(&pix_format->width, MIN_WIDTH, MAX_WIDTH, 1,
+                             &pix_format->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
 
-       bpp = sun6i_csi_get_bpp(pixfmt->pixelformat);
-       pixfmt->bytesperline = (pixfmt->width * bpp) >> 3;
-       pixfmt->sizeimage = pixfmt->bytesperline * pixfmt->height;
+       bpp = sun6i_csi_get_bpp(pix_format->pixelformat);
+       pix_format->bytesperline = (pix_format->width * bpp) >> 3;
+       pix_format->sizeimage = pix_format->bytesperline * pix_format->height;
 
-       if (pixfmt->field == V4L2_FIELD_ANY)
-               pixfmt->field = V4L2_FIELD_NONE;
+       if (pix_format->field == V4L2_FIELD_ANY)
+               pix_format->field = V4L2_FIELD_NONE;
 
-       if (pixfmt->pixelformat == V4L2_PIX_FMT_JPEG)
-               pixfmt->colorspace = V4L2_COLORSPACE_JPEG;
+       if (pix_format->pixelformat == V4L2_PIX_FMT_JPEG)
+               pix_format->colorspace = V4L2_COLORSPACE_JPEG;
        else
-               pixfmt->colorspace = V4L2_COLORSPACE_SRGB;
+               pix_format->colorspace = V4L2_COLORSPACE_SRGB;
 
-       pixfmt->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
-       pixfmt->quantization = V4L2_QUANTIZATION_DEFAULT;
-       pixfmt->xfer_func = V4L2_XFER_FUNC_DEFAULT;
+       pix_format->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
+       pix_format->quantization = V4L2_QUANTIZATION_DEFAULT;
+       pix_format->xfer_func = V4L2_XFER_FUNC_DEFAULT;
 
        return 0;
 }
 
-static int sun6i_video_set_fmt(struct sun6i_video *video, struct v4l2_format *f)
+static int sun6i_video_format_set(struct sun6i_video *video,
+                                 struct v4l2_format *format)
 {
        int ret;
 
-       ret = sun6i_video_try_fmt(video, f);
+       ret = sun6i_video_format_try(video, format);
        if (ret)
                return ret;
 
-       video->fmt = *f;
+       video->format = *format;
 
        return 0;
 }
 
-static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
-                               struct v4l2_format *f)
+static int sun6i_video_s_fmt(struct file *file, void *private,
+                            struct v4l2_format *format)
 {
-       struct sun6i_video *video = video_drvdata(file);
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct sun6i_video *video = &csi_dev->video;
 
-       if (vb2_is_busy(&video->vb2_vidq))
+       if (vb2_is_busy(&video->queue))
                return -EBUSY;
 
-       return sun6i_video_set_fmt(video, f);
+       return sun6i_video_format_set(video, format);
 }
 
-static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
-                                 struct v4l2_format *f)
+static int sun6i_video_try_fmt(struct file *file, void *private,
+                              struct v4l2_format *format)
 {
-       struct sun6i_video *video = video_drvdata(file);
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct sun6i_video *video = &csi_dev->video;
 
-       return sun6i_video_try_fmt(video, f);
+       return sun6i_video_format_try(video, format);
 }
 
-static int vidioc_enum_input(struct file *file, void *fh,
-                            struct v4l2_input *inp)
+static int sun6i_video_enum_input(struct file *file, void *private,
+                                 struct v4l2_input *input)
 {
-       if (inp->index != 0)
+       if (input->index != 0)
                return -EINVAL;
 
-       strscpy(inp->name, "camera", sizeof(inp->name));
-       inp->type = V4L2_INPUT_TYPE_CAMERA;
+       input->type = V4L2_INPUT_TYPE_CAMERA;
+       strscpy(input->name, "Camera", sizeof(input->name));
 
        return 0;
 }
 
-static int vidioc_g_input(struct file *file, void *fh, unsigned int *i)
+static int sun6i_video_g_input(struct file *file, void *private,
+                              unsigned int *index)
 {
-       *i = 0;
+       *index = 0;
 
        return 0;
 }
 
-static int vidioc_s_input(struct file *file, void *fh, unsigned int i)
+static int sun6i_video_s_input(struct file *file, void *private,
+                              unsigned int index)
 {
-       if (i != 0)
+       if (index != 0)
                return -EINVAL;
 
        return 0;
 }
 
 static const struct v4l2_ioctl_ops sun6i_video_ioctl_ops = {
-       .vidioc_querycap                = vidioc_querycap,
-       .vidioc_enum_fmt_vid_cap        = vidioc_enum_fmt_vid_cap,
-       .vidioc_g_fmt_vid_cap           = vidioc_g_fmt_vid_cap,
-       .vidioc_s_fmt_vid_cap           = vidioc_s_fmt_vid_cap,
-       .vidioc_try_fmt_vid_cap         = vidioc_try_fmt_vid_cap,
+       .vidioc_querycap                = sun6i_video_querycap,
+
+       .vidioc_enum_fmt_vid_cap        = sun6i_video_enum_fmt,
+       .vidioc_g_fmt_vid_cap           = sun6i_video_g_fmt,
+       .vidioc_s_fmt_vid_cap           = sun6i_video_s_fmt,
+       .vidioc_try_fmt_vid_cap         = sun6i_video_try_fmt,
 
-       .vidioc_enum_input              = vidioc_enum_input,
-       .vidioc_s_input                 = vidioc_s_input,
-       .vidioc_g_input                 = vidioc_g_input,
+       .vidioc_enum_input              = sun6i_video_enum_input,
+       .vidioc_g_input                 = sun6i_video_g_input,
+       .vidioc_s_input                 = sun6i_video_s_input,
 
+       .vidioc_create_bufs             = vb2_ioctl_create_bufs,
+       .vidioc_prepare_buf             = vb2_ioctl_prepare_buf,
        .vidioc_reqbufs                 = vb2_ioctl_reqbufs,
        .vidioc_querybuf                = vb2_ioctl_querybuf,
-       .vidioc_qbuf                    = vb2_ioctl_qbuf,
        .vidioc_expbuf                  = vb2_ioctl_expbuf,
+       .vidioc_qbuf                    = vb2_ioctl_qbuf,
        .vidioc_dqbuf                   = vb2_ioctl_dqbuf,
-       .vidioc_create_bufs             = vb2_ioctl_create_bufs,
-       .vidioc_prepare_buf             = vb2_ioctl_prepare_buf,
        .vidioc_streamon                = vb2_ioctl_streamon,
        .vidioc_streamoff               = vb2_ioctl_streamoff,
-
-       .vidioc_log_status              = v4l2_ctrl_log_status,
-       .vidioc_subscribe_event         = v4l2_ctrl_subscribe_event,
-       .vidioc_unsubscribe_event       = v4l2_event_unsubscribe,
 };
 
-/* -----------------------------------------------------------------------------
- * V4L2 file operations
- */
+/* V4L2 File */
+
 static int sun6i_video_open(struct file *file)
 {
-       struct sun6i_video *video = video_drvdata(file);
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct sun6i_video *video = &csi_dev->video;
        int ret = 0;
 
        if (mutex_lock_interruptible(&video->lock))
@@ -478,45 +503,48 @@ static int sun6i_video_open(struct file *file)
 
        ret = v4l2_fh_open(file);
        if (ret < 0)
-               goto unlock;
+               goto error_lock;
 
-       ret = v4l2_pipeline_pm_get(&video->vdev.entity);
+       ret = v4l2_pipeline_pm_get(&video->video_dev.entity);
        if (ret < 0)
-               goto fh_release;
-
-       /* check if already powered */
-       if (!v4l2_fh_is_singular_file(file))
-               goto unlock;
+               goto error_v4l2_fh;
 
-       ret = sun6i_csi_set_power(video->csi, true);
-       if (ret < 0)
-               goto fh_release;
+       /* Power on at first open. */
+       if (v4l2_fh_is_singular_file(file)) {
+               ret = sun6i_csi_set_power(csi_dev, true);
+               if (ret < 0)
+                       goto error_v4l2_fh;
+       }
 
        mutex_unlock(&video->lock);
+
        return 0;
 
-fh_release:
+error_v4l2_fh:
        v4l2_fh_release(file);
-unlock:
+
+error_lock:
        mutex_unlock(&video->lock);
+
        return ret;
 }
 
 static int sun6i_video_close(struct file *file)
 {
-       struct sun6i_video *video = video_drvdata(file);
-       bool last_fh;
+       struct sun6i_csi_device *csi_dev = video_drvdata(file);
+       struct sun6i_video *video = &csi_dev->video;
+       bool last_close;
 
        mutex_lock(&video->lock);
 
-       last_fh = v4l2_fh_is_singular_file(file);
+       last_close = v4l2_fh_is_singular_file(file);
 
        _vb2_fop_release(file, NULL);
+       v4l2_pipeline_pm_put(&video->video_dev.entity);
 
-       v4l2_pipeline_pm_put(&video->vdev.entity);
-
-       if (last_fh)
-               sun6i_csi_set_power(video->csi, false);
+       /* Power off at last close. */
+       if (last_close)
+               sun6i_csi_set_power(csi_dev, false);
 
        mutex_unlock(&video->lock);
 
@@ -532,9 +560,8 @@ static const struct v4l2_file_operations sun6i_video_fops = {
        .poll           = vb2_fop_poll
 };
 
-/* -----------------------------------------------------------------------------
- * Media Operations
- */
+/* Media Entity */
+
 static int sun6i_video_link_validate_get_format(struct media_pad *pad,
                                                struct v4l2_subdev_format *fmt)
 {
@@ -554,15 +581,16 @@ static int sun6i_video_link_validate(struct media_link *link)
 {
        struct video_device *vdev = container_of(link->sink->entity,
                                                 struct video_device, entity);
-       struct sun6i_video *video = video_get_drvdata(vdev);
+       struct sun6i_csi_device *csi_dev = video_get_drvdata(vdev);
+       struct sun6i_video *video = &csi_dev->video;
        struct v4l2_subdev_format source_fmt;
        int ret;
 
        video->mbus_code = 0;
 
        if (!media_pad_remote_pad_first(link->sink->entity->pads)) {
-               dev_info(video->csi->dev,
-                        "video node %s pad not connected\n", vdev->name);
+               dev_info(csi_dev->dev, "video node %s pad not connected\n",
+                        vdev->name);
                return -ENOLINK;
        }
 
@@ -570,21 +598,21 @@ static int sun6i_video_link_validate(struct media_link *link)
        if (ret < 0)
                return ret;
 
-       if (!sun6i_csi_is_format_supported(video->csi,
-                                          video->fmt.fmt.pix.pixelformat,
+       if (!sun6i_csi_is_format_supported(csi_dev,
+                                          video->format.fmt.pix.pixelformat,
                                           source_fmt.format.code)) {
-               dev_err(video->csi->dev,
+               dev_err(csi_dev->dev,
                        "Unsupported pixformat: 0x%x with mbus code: 0x%x!\n",
-                       video->fmt.fmt.pix.pixelformat,
+                       video->format.fmt.pix.pixelformat,
                        source_fmt.format.code);
                return -EPIPE;
        }
 
-       if (source_fmt.format.width != video->fmt.fmt.pix.width ||
-           source_fmt.format.height != video->fmt.fmt.pix.height) {
-               dev_err(video->csi->dev,
+       if (source_fmt.format.width != video->format.fmt.pix.width ||
+           source_fmt.format.height != video->format.fmt.pix.height) {
+               dev_err(csi_dev->dev,
                        "Wrong width or height %ux%u (%ux%u expected)\n",
-                       video->fmt.fmt.pix.width, video->fmt.fmt.pix.height,
+                       video->format.fmt.pix.width, video->format.fmt.pix.height,
                        source_fmt.format.width, source_fmt.format.height);
                return -EPIPE;
        }
@@ -598,88 +626,108 @@ static const struct media_entity_operations sun6i_video_media_ops = {
        .link_validate = sun6i_video_link_validate
 };
 
-int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi,
-                    const char *name)
+/* Video */
+
+int sun6i_video_setup(struct sun6i_csi_device *csi_dev)
 {
-       struct video_device *vdev = &video->vdev;
-       struct vb2_queue *vidq = &video->vb2_vidq;
-       struct v4l2_format fmt = { 0 };
+       struct sun6i_video *video = &csi_dev->video;
+       struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
+       struct video_device *video_dev = &video->video_dev;
+       struct vb2_queue *queue = &video->queue;
+       struct media_pad *pad = &video->pad;
+       struct v4l2_format format = { 0 };
+       struct v4l2_pix_format *pix_format = &format.fmt.pix;
        int ret;
 
-       video->csi = csi;
+       /* Media Entity */
 
-       /* Initialize the media entity... */
-       video->pad.flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
-       vdev->entity.ops = &sun6i_video_media_ops;
-       ret = media_entity_pads_init(&vdev->entity, 1, &video->pad);
+       video_dev->entity.ops = &sun6i_video_media_ops;
+
+       /* Media Pad */
+
+       pad->flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
+
+       ret = media_entity_pads_init(&video_dev->entity, 1, pad);
        if (ret < 0)
                return ret;
 
-       mutex_init(&video->lock);
+       /* DMA queue */
 
        INIT_LIST_HEAD(&video->dma_queue);
        spin_lock_init(&video->dma_queue_lock);
 
        video->sequence = 0;
 
-       /* Setup default format */
-       fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-       fmt.fmt.pix.pixelformat = supported_pixformats[0];
-       fmt.fmt.pix.width = 1280;
-       fmt.fmt.pix.height = 720;
-       fmt.fmt.pix.field = V4L2_FIELD_NONE;
-       sun6i_video_set_fmt(video, &fmt);
-
-       /* Initialize videobuf2 queue */
-       vidq->type                      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-       vidq->io_modes                  = VB2_MMAP | VB2_DMABUF;
-       vidq->drv_priv                  = video;
-       vidq->buf_struct_size           = sizeof(struct sun6i_csi_buffer);
-       vidq->ops                       = &sun6i_csi_vb2_ops;
-       vidq->mem_ops                   = &vb2_dma_contig_memops;
-       vidq->timestamp_flags           = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
-       vidq->lock                      = &video->lock;
-       /* Make sure non-dropped frame */
-       vidq->min_buffers_needed        = 3;
-       vidq->dev                       = csi->dev;
-
-       ret = vb2_queue_init(vidq);
+       /* Queue */
+
+       mutex_init(&video->lock);
+
+       queue->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+       queue->io_modes = VB2_MMAP | VB2_DMABUF;
+       queue->buf_struct_size = sizeof(struct sun6i_csi_buffer);
+       queue->ops = &sun6i_video_queue_ops;
+       queue->mem_ops = &vb2_dma_contig_memops;
+       queue->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+       queue->lock = &video->lock;
+       queue->dev = csi_dev->dev;
+       queue->drv_priv = csi_dev;
+
+       /* Make sure non-dropped frame. */
+       queue->min_buffers_needed = 3;
+
+       ret = vb2_queue_init(queue);
        if (ret) {
-               v4l2_err(&csi->v4l2_dev, "vb2_queue_init failed: %d\n", ret);
-               goto clean_entity;
+               v4l2_err(v4l2_dev, "failed to initialize vb2 queue: %d\n", ret);
+               goto error_media_entity;
        }
 
-       /* Register video device */
-       strscpy(vdev->name, name, sizeof(vdev->name));
-       vdev->release           = video_device_release_empty;
-       vdev->fops              = &sun6i_video_fops;
-       vdev->ioctl_ops         = &sun6i_video_ioctl_ops;
-       vdev->vfl_type          = VFL_TYPE_VIDEO;
-       vdev->vfl_dir           = VFL_DIR_RX;
-       vdev->v4l2_dev          = &csi->v4l2_dev;
-       vdev->queue             = vidq;
-       vdev->lock              = &video->lock;
-       vdev->device_caps       = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
-       video_set_drvdata(vdev, video);
-
-       ret = video_register_device(vdev, VFL_TYPE_VIDEO, -1);
+       /* V4L2 Format */
+
+       format.type = queue->type;
+       pix_format->pixelformat = sun6i_video_formats[0];
+       pix_format->width = 1280;
+       pix_format->height = 720;
+       pix_format->field = V4L2_FIELD_NONE;
+
+       sun6i_video_format_set(video, &format);
+
+       /* Video Device */
+
+       strscpy(video_dev->name, SUN6I_CSI_NAME, sizeof(video_dev->name));
+       video_dev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+       video_dev->vfl_dir = VFL_DIR_RX;
+       video_dev->release = video_device_release_empty;
+       video_dev->fops = &sun6i_video_fops;
+       video_dev->ioctl_ops = &sun6i_video_ioctl_ops;
+       video_dev->v4l2_dev = v4l2_dev;
+       video_dev->queue = queue;
+       video_dev->lock = &video->lock;
+
+       video_set_drvdata(video_dev, csi_dev);
+
+       ret = video_register_device(video_dev, VFL_TYPE_VIDEO, -1);
        if (ret < 0) {
-               v4l2_err(&csi->v4l2_dev,
-                        "video_register_device failed: %d\n", ret);
-               goto clean_entity;
+               v4l2_err(v4l2_dev, "failed to register video device: %d\n",
+                        ret);
+               goto error_media_entity;
        }
 
        return 0;
 
-clean_entity:
-       media_entity_cleanup(&video->vdev.entity);
+error_media_entity:
+       media_entity_cleanup(&video_dev->entity);
+
        mutex_destroy(&video->lock);
+
        return ret;
 }
 
-void sun6i_video_cleanup(struct sun6i_video *video)
+void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev)
 {
-       vb2_video_unregister_device(&video->vdev);
-       media_entity_cleanup(&video->vdev.entity);
+       struct sun6i_video *video = &csi_dev->video;
+       struct video_device *video_dev = &video->video_dev;
+
+       vb2_video_unregister_device(video_dev);
+       media_entity_cleanup(&video_dev->entity);
        mutex_destroy(&video->lock);
 }
index b9cd919..a917d2d 100644
 #include <media/v4l2-dev.h>
 #include <media/videobuf2-core.h>
 
-struct sun6i_csi;
+struct sun6i_csi_device;
 
 struct sun6i_video {
-       struct video_device             vdev;
+       struct video_device             video_dev;
+       struct vb2_queue                queue;
+       struct mutex                    lock; /* Queue lock. */
        struct media_pad                pad;
-       struct sun6i_csi                *csi;
 
-       struct mutex                    lock;
-
-       struct vb2_queue                vb2_vidq;
-       spinlock_t                      dma_queue_lock;
        struct list_head                dma_queue;
+       spinlock_t                      dma_queue_lock; /* DMA queue lock. */
 
-       unsigned int                    sequence;
-       struct v4l2_format              fmt;
+       struct v4l2_format              format;
        u32                             mbus_code;
+       unsigned int                    sequence;
 };
 
-int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi,
-                    const char *name);
-void sun6i_video_cleanup(struct sun6i_video *video);
+int sun6i_video_setup(struct sun6i_csi_device *csi_dev);
+void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev);
 
-void sun6i_video_frame_done(struct sun6i_video *video);
+void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev);
 
 #endif /* __SUN6I_VIDEO_H__ */
index eb98246..08852f6 100644
@@ -3,11 +3,11 @@ config VIDEO_SUN6I_MIPI_CSI2
        tristate "Allwinner A31 MIPI CSI-2 Controller Driver"
        depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
        depends on ARCH_SUNXI || COMPILE_TEST
-       depends on PM && COMMON_CLK
+       depends on PM && COMMON_CLK && RESET_CONTROLLER
+       depends on PHY_SUN6I_MIPI_DPHY
        select MEDIA_CONTROLLER
        select VIDEO_V4L2_SUBDEV_API
        select V4L2_FWNODE
-       select PHY_SUN6I_MIPI_DPHY
        select GENERIC_PHY_MIPI_DPHY
        select REGMAP_MMIO
        help
index a4e3f9a..30d6c0c 100644
@@ -661,7 +661,8 @@ sun6i_mipi_csi2_resources_setup(struct sun6i_mipi_csi2_device *csi2_dev,
        csi2_dev->reset = devm_reset_control_get_shared(dev, NULL);
        if (IS_ERR(csi2_dev->reset)) {
                dev_err(dev, "failed to get reset controller\n");
-               return PTR_ERR(csi2_dev->reset);
+               ret = PTR_ERR(csi2_dev->reset);
+               goto error_clock_rate_exclusive;
        }
 
        /* D-PHY */
@@ -669,13 +670,14 @@ sun6i_mipi_csi2_resources_setup(struct sun6i_mipi_csi2_device *csi2_dev,
        csi2_dev->dphy = devm_phy_get(dev, "dphy");
        if (IS_ERR(csi2_dev->dphy)) {
                dev_err(dev, "failed to get MIPI D-PHY\n");
-               return PTR_ERR(csi2_dev->dphy);
+               ret = PTR_ERR(csi2_dev->dphy);
+               goto error_clock_rate_exclusive;
        }
 
        ret = phy_init(csi2_dev->dphy);
        if (ret) {
                dev_err(dev, "failed to initialize MIPI D-PHY\n");
-               return ret;
+               goto error_clock_rate_exclusive;
        }
 
        /* Runtime PM */
@@ -683,6 +685,11 @@ sun6i_mipi_csi2_resources_setup(struct sun6i_mipi_csi2_device *csi2_dev,
        pm_runtime_enable(dev);
 
        return 0;
+
+error_clock_rate_exclusive:
+       clk_rate_exclusive_put(csi2_dev->clock_mod);
+
+       return ret;
 }
 
 static void
@@ -712,9 +719,14 @@ static int sun6i_mipi_csi2_probe(struct platform_device *platform_dev)
 
        ret = sun6i_mipi_csi2_bridge_setup(csi2_dev);
        if (ret)
-               return ret;
+               goto error_resources;
 
        return 0;
+
+error_resources:
+       sun6i_mipi_csi2_resources_cleanup(csi2_dev);
+
+       return ret;
 }
 
 static int sun6i_mipi_csi2_remove(struct platform_device *platform_dev)
index 789d58e..47a8c0f 100644
@@ -3,7 +3,7 @@ config VIDEO_SUN8I_A83T_MIPI_CSI2
        tristate "Allwinner A83T MIPI CSI-2 Controller and D-PHY Driver"
        depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV
        depends on ARCH_SUNXI || COMPILE_TEST
-       depends on PM && COMMON_CLK
+       depends on PM && COMMON_CLK && RESET_CONTROLLER
        select MEDIA_CONTROLLER
        select VIDEO_V4L2_SUBDEV_API
        select V4L2_FWNODE
index d052ee7..b032ec1 100644
@@ -719,13 +719,15 @@ sun8i_a83t_mipi_csi2_resources_setup(struct sun8i_a83t_mipi_csi2_device *csi2_de
        csi2_dev->clock_mipi = devm_clk_get(dev, "mipi");
        if (IS_ERR(csi2_dev->clock_mipi)) {
                dev_err(dev, "failed to acquire mipi clock\n");
-               return PTR_ERR(csi2_dev->clock_mipi);
+               ret = PTR_ERR(csi2_dev->clock_mipi);
+               goto error_clock_rate_exclusive;
        }
 
        csi2_dev->clock_misc = devm_clk_get(dev, "misc");
        if (IS_ERR(csi2_dev->clock_misc)) {
                dev_err(dev, "failed to acquire misc clock\n");
-               return PTR_ERR(csi2_dev->clock_misc);
+               ret = PTR_ERR(csi2_dev->clock_misc);
+               goto error_clock_rate_exclusive;
        }
 
        /* Reset */
@@ -733,7 +735,8 @@ sun8i_a83t_mipi_csi2_resources_setup(struct sun8i_a83t_mipi_csi2_device *csi2_de
        csi2_dev->reset = devm_reset_control_get_shared(dev, NULL);
        if (IS_ERR(csi2_dev->reset)) {
                dev_err(dev, "failed to get reset controller\n");
-               return PTR_ERR(csi2_dev->reset);
+               ret = PTR_ERR(csi2_dev->reset);
+               goto error_clock_rate_exclusive;
        }
 
        /* D-PHY */
@@ -741,7 +744,7 @@ sun8i_a83t_mipi_csi2_resources_setup(struct sun8i_a83t_mipi_csi2_device *csi2_de
        ret = sun8i_a83t_dphy_register(csi2_dev);
        if (ret) {
                dev_err(dev, "failed to initialize MIPI D-PHY\n");
-               return ret;
+               goto error_clock_rate_exclusive;
        }
 
        /* Runtime PM */
@@ -749,6 +752,11 @@ sun8i_a83t_mipi_csi2_resources_setup(struct sun8i_a83t_mipi_csi2_device *csi2_de
        pm_runtime_enable(dev);
 
        return 0;
+
+error_clock_rate_exclusive:
+       clk_rate_exclusive_put(csi2_dev->clock_mod);
+
+       return ret;
 }
 
 static void
@@ -778,9 +786,14 @@ static int sun8i_a83t_mipi_csi2_probe(struct platform_device *platform_dev)
 
        ret = sun8i_a83t_mipi_csi2_bridge_setup(csi2_dev);
        if (ret)
-               return ret;
+               goto error_resources;
 
        return 0;
+
+error_resources:
+       sun8i_a83t_mipi_csi2_resources_cleanup(csi2_dev);
+
+       return ret;
 }
 
 static int sun8i_a83t_mipi_csi2_remove(struct platform_device *platform_dev)
index ff71e06..f688396 100644 (file)
@@ -4,7 +4,7 @@ config VIDEO_SUN8I_DEINTERLACE
        depends on V4L_MEM2MEM_DRIVERS
        depends on VIDEO_DEV
        depends on ARCH_SUNXI || COMPILE_TEST
-       depends on COMMON_CLK && OF
+       depends on COMMON_CLK && RESET_CONTROLLER && OF
        depends on PM
        select VIDEOBUF2_DMA_CONTIG
        select V4L2_MEM2MEM_DEV
index cfba290..ee2c1f2 100644 (file)
@@ -5,7 +5,7 @@ config VIDEO_SUN8I_ROTATE
        depends on V4L_MEM2MEM_DRIVERS
        depends on VIDEO_DEV
        depends on ARCH_SUNXI || COMPILE_TEST
-       depends on COMMON_CLK && OF
+       depends on COMMON_CLK && RESET_CONTROLLER && OF
        depends on PM
        select VIDEOBUF2_DMA_CONTIG
        select V4L2_MEM2MEM_DEV
index 21e3d0a..4eade40 100644 (file)
@@ -708,7 +708,7 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
        dma_addr_t addr;
        int ret;
 
-       ret = media_pipeline_start(&ctx->vdev.entity, &ctx->phy->pipe);
+       ret = video_device_pipeline_alloc_start(&ctx->vdev);
        if (ret < 0) {
                ctx_err(ctx, "Failed to start media pipeline: %d\n", ret);
                goto error_release_buffers;
@@ -761,7 +761,7 @@ error_stop:
        cal_ctx_unprepare(ctx);
 
 error_pipeline:
-       media_pipeline_stop(&ctx->vdev.entity);
+       video_device_pipeline_stop(&ctx->vdev);
 error_release_buffers:
        cal_release_buffers(ctx, VB2_BUF_STATE_QUEUED);
 
@@ -782,7 +782,7 @@ static void cal_stop_streaming(struct vb2_queue *vq)
 
        cal_release_buffers(ctx, VB2_BUF_STATE_ERROR);
 
-       media_pipeline_stop(&ctx->vdev.entity);
+       video_device_pipeline_stop(&ctx->vdev);
 }
 
 static const struct vb2_ops cal_video_qops = {
index 80f2c9c..de73d6d 100644 (file)
@@ -174,7 +174,6 @@ struct cal_camerarx {
        struct device_node      *source_ep_node;
        struct device_node      *source_node;
        struct v4l2_subdev      *source;
-       struct media_pipeline   pipe;
 
        struct v4l2_subdev      subdev;
        struct media_pad        pads[CAL_CAMERARX_NUM_PADS];
index a6052df..24d2383 100644 (file)
@@ -937,10 +937,8 @@ static int isp_pipeline_is_last(struct media_entity *me)
        struct isp_pipeline *pipe;
        struct media_pad *pad;
 
-       if (!me->pipe)
-               return 0;
        pipe = to_isp_pipeline(me);
-       if (pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED)
+       if (!pipe || pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED)
                return 0;
        pad = media_pad_remote_pad_first(&pipe->output->pad);
        return pad->entity == me;
index cc9a97d..3e5348c 100644 (file)
@@ -1093,8 +1093,7 @@ isp_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
        /* Start streaming on the pipeline. No link touching an entity in the
         * pipeline can be activated or deactivated once streaming is started.
         */
-       pipe = video->video.entity.pipe
-            ? to_isp_pipeline(&video->video.entity) : &video->pipe;
+       pipe = to_isp_pipeline(&video->video.entity) ? : &video->pipe;
 
        ret = media_entity_enum_init(&pipe->ent_enum, &video->isp->media_dev);
        if (ret)
@@ -1104,7 +1103,7 @@ isp_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
        pipe->l3_ick = clk_get_rate(video->isp->clock[ISP_CLK_L3_ICK]);
        pipe->max_rate = pipe->l3_ick;
 
-       ret = media_pipeline_start(&video->video.entity, &pipe->pipe);
+       ret = video_device_pipeline_start(&video->video, &pipe->pipe);
        if (ret < 0)
                goto err_pipeline_start;
 
@@ -1161,7 +1160,7 @@ isp_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
        return 0;
 
 err_check_format:
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
 err_pipeline_start:
        /* TODO: Implement PM QoS */
        /* The DMA queue must be emptied here, otherwise CCDC interrupts that
@@ -1228,7 +1227,7 @@ isp_video_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
        video->error = false;
 
        /* TODO: Implement PM QoS */
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
 
        media_entity_enum_cleanup(&pipe->ent_enum);
 
index a090867..1d23df5 100644 (file)
@@ -99,8 +99,15 @@ struct isp_pipeline {
        unsigned int external_width;
 };
 
-#define to_isp_pipeline(__e) \
-       container_of((__e)->pipe, struct isp_pipeline, pipe)
+static inline struct isp_pipeline *to_isp_pipeline(struct media_entity *entity)
+{
+       struct media_pipeline *pipe = media_entity_pipeline(entity);
+
+       if (!pipe)
+               return NULL;
+
+       return container_of(pipe, struct isp_pipeline, pipe);
+}
 
 static inline int isp_pipeline_ready(struct isp_pipeline *pipe)
 {
index 2036f72..8cb4a68 100644 (file)
@@ -251,6 +251,11 @@ queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq)
 
 static int hantro_try_ctrl(struct v4l2_ctrl *ctrl)
 {
+       struct hantro_ctx *ctx;
+
+       ctx = container_of(ctrl->handler,
+                          struct hantro_ctx, ctrl_handler);
+
        if (ctrl->id == V4L2_CID_STATELESS_H264_SPS) {
                const struct v4l2_ctrl_h264_sps *sps = ctrl->p_new.p_h264_sps;
 
@@ -266,12 +271,11 @@ static int hantro_try_ctrl(struct v4l2_ctrl *ctrl)
        } else if (ctrl->id == V4L2_CID_STATELESS_HEVC_SPS) {
                const struct v4l2_ctrl_hevc_sps *sps = ctrl->p_new.p_hevc_sps;
 
-               if (sps->bit_depth_luma_minus8 != sps->bit_depth_chroma_minus8)
-                       /* Luma and chroma bit depth mismatch */
-                       return -EINVAL;
-               if (sps->bit_depth_luma_minus8 != 0)
-                       /* Only 8-bit is supported */
+               if (sps->bit_depth_luma_minus8 != 0 && sps->bit_depth_luma_minus8 != 2)
+                       /* Only 8-bit and 10-bit are supported */
                        return -EINVAL;
+
+               ctx->bit_depth = sps->bit_depth_luma_minus8 + 8;
        } else if (ctrl->id == V4L2_CID_STATELESS_VP9_FRAME) {
                const struct v4l2_ctrl_vp9_frame *dec_params = ctrl->p_new.p_vp9_frame;
 
index 233ecd8..a9d4ac8 100644 (file)
@@ -12,7 +12,7 @@
 
 static size_t hantro_hevc_chroma_offset(struct hantro_ctx *ctx)
 {
-       return ctx->dst_fmt.width * ctx->dst_fmt.height;
+       return ctx->dst_fmt.width * ctx->dst_fmt.height * ctx->bit_depth / 8;
 }
 
 static size_t hantro_hevc_motion_vectors_offset(struct hantro_ctx *ctx)
@@ -167,8 +167,6 @@ static void set_params(struct hantro_ctx *ctx)
        hantro_reg_write(vpu, &g2_bit_depth_y_minus8, sps->bit_depth_luma_minus8);
        hantro_reg_write(vpu, &g2_bit_depth_c_minus8, sps->bit_depth_chroma_minus8);
 
-       hantro_reg_write(vpu, &g2_output_8_bits, 0);
-
        hantro_reg_write(vpu, &g2_hdr_skip_length, compute_header_skip_length(ctx));
 
        min_log2_cb_size = sps->log2_min_luma_coding_block_size_minus3 + 3;
index b990bc9..9383fb7 100644 (file)
@@ -104,7 +104,7 @@ static int tile_buffer_reallocate(struct hantro_ctx *ctx)
                hevc_dec->tile_bsd.cpu = NULL;
        }
 
-       size = VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1);
+       size = (VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8;
        hevc_dec->tile_filter.cpu = dma_alloc_coherent(vpu->dev, size,
                                                       &hevc_dec->tile_filter.dma,
                                                       GFP_KERNEL);
@@ -112,7 +112,7 @@ static int tile_buffer_reallocate(struct hantro_ctx *ctx)
                goto err_free_tile_buffers;
        hevc_dec->tile_filter.size = size;
 
-       size = VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1);
+       size = (VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8;
        hevc_dec->tile_sao.cpu = dma_alloc_coherent(vpu->dev, size,
                                                    &hevc_dec->tile_sao.dma,
                                                    GFP_KERNEL);
index a0928c5..09d8cf9 100644 (file)
@@ -114,6 +114,7 @@ static void hantro_postproc_g2_enable(struct hantro_ctx *ctx)
        struct hantro_dev *vpu = ctx->dev;
        struct vb2_v4l2_buffer *dst_buf;
        int down_scale = down_scale_factor(ctx);
+       int out_depth;
        size_t chroma_offset;
        dma_addr_t dst_dma;
 
@@ -132,8 +133,9 @@ static void hantro_postproc_g2_enable(struct hantro_ctx *ctx)
                hantro_write_addr(vpu, G2_RS_OUT_LUMA_ADDR, dst_dma);
                hantro_write_addr(vpu, G2_RS_OUT_CHROMA_ADDR, dst_dma + chroma_offset);
        }
+
+       out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat);
        if (ctx->dev->variant->legacy_regs) {
-               int out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat);
                u8 pp_shift = 0;
 
                if (out_depth > 8)
@@ -141,6 +143,9 @@ static void hantro_postproc_g2_enable(struct hantro_ctx *ctx)
 
                hantro_reg_write(ctx->dev, &g2_rs_out_bit_depth, out_depth);
                hantro_reg_write(ctx->dev, &g2_pp_pix_shift, pp_shift);
+       } else {
+               hantro_reg_write(vpu, &g2_output_8_bits, out_depth > 8 ? 0 : 1);
+               hantro_reg_write(vpu, &g2_output_format, out_depth > 8 ? 1 : 0);
        }
        hantro_reg_write(vpu, &g2_out_rs_e, 1);
 }
index 77f574f..b390228 100644 (file)
@@ -162,12 +162,39 @@ static const struct hantro_fmt imx8m_vpu_g2_postproc_fmts[] = {
                        .step_height = MB_DIM,
                },
        },
+       {
+               .fourcc = V4L2_PIX_FMT_P010,
+               .codec_mode = HANTRO_MODE_NONE,
+               .postprocessed = true,
+               .frmsize = {
+                       .min_width = FMT_MIN_WIDTH,
+                       .max_width = FMT_UHD_WIDTH,
+                       .step_width = MB_DIM,
+                       .min_height = FMT_MIN_HEIGHT,
+                       .max_height = FMT_UHD_HEIGHT,
+                       .step_height = MB_DIM,
+               },
+       },
 };
 
 static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = {
        {
                .fourcc = V4L2_PIX_FMT_NV12_4L4,
                .codec_mode = HANTRO_MODE_NONE,
+               .match_depth = true,
+               .frmsize = {
+                       .min_width = FMT_MIN_WIDTH,
+                       .max_width = FMT_UHD_WIDTH,
+                       .step_width = TILE_MB_DIM,
+                       .min_height = FMT_MIN_HEIGHT,
+                       .max_height = FMT_UHD_HEIGHT,
+                       .step_height = TILE_MB_DIM,
+               },
+       },
+       {
+               .fourcc = V4L2_PIX_FMT_P010_4L4,
+               .codec_mode = HANTRO_MODE_NONE,
+               .match_depth = true,
                .frmsize = {
                        .min_width = FMT_MIN_WIDTH,
                        .max_width = FMT_UHD_WIDTH,
index 2d1ef7a..0a7fd86 100644 (file)
@@ -402,10 +402,9 @@ static int xvip_dma_start_streaming(struct vb2_queue *vq, unsigned int count)
         * Use the pipeline object embedded in the first DMA object that starts
         * streaming.
         */
-       pipe = dma->video.entity.pipe
-            ? to_xvip_pipeline(&dma->video.entity) : &dma->pipe;
+       pipe = to_xvip_pipeline(&dma->video) ? : &dma->pipe;
 
-       ret = media_pipeline_start(&dma->video.entity, &pipe->pipe);
+       ret = video_device_pipeline_start(&dma->video, &pipe->pipe);
        if (ret < 0)
                goto error;
 
@@ -431,7 +430,7 @@ static int xvip_dma_start_streaming(struct vb2_queue *vq, unsigned int count)
        return 0;
 
 error_stop:
-       media_pipeline_stop(&dma->video.entity);
+       video_device_pipeline_stop(&dma->video);
 
 error:
        /* Give back all queued buffers to videobuf2. */
@@ -448,7 +447,7 @@ error:
 static void xvip_dma_stop_streaming(struct vb2_queue *vq)
 {
        struct xvip_dma *dma = vb2_get_drv_priv(vq);
-       struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video.entity);
+       struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video);
        struct xvip_dma_buffer *buf, *nbuf;
 
        /* Stop the pipeline. */
@@ -459,7 +458,7 @@ static void xvip_dma_stop_streaming(struct vb2_queue *vq)
 
        /* Cleanup the pipeline and mark it as being stopped. */
        xvip_pipeline_cleanup(pipe);
-       media_pipeline_stop(&dma->video.entity);
+       video_device_pipeline_stop(&dma->video);
 
        /* Give back all queued buffers to videobuf2. */
        spin_lock_irq(&dma->queued_lock);
index 2378bda..9c6d4c1 100644 (file)
@@ -45,9 +45,14 @@ struct xvip_pipeline {
        struct xvip_dma *output;
 };
 
-static inline struct xvip_pipeline *to_xvip_pipeline(struct media_entity *e)
+static inline struct xvip_pipeline *to_xvip_pipeline(struct video_device *vdev)
 {
-       return container_of(e->pipe, struct xvip_pipeline, pipe);
+       struct media_pipeline *pipe = video_device_pipeline(vdev);
+
+       if (!pipe)
+               return NULL;
+
+       return container_of(pipe, struct xvip_pipeline, pipe);
 }
 
 /**
index 0bf99e1..171f9cc 100644 (file)
@@ -1072,7 +1072,6 @@ done:
 
 static int si476x_radio_fops_release(struct file *file)
 {
-       int err;
        struct si476x_radio *radio = video_drvdata(file);
 
        if (v4l2_fh_is_singular_file(file) &&
@@ -1080,9 +1079,7 @@ static int si476x_radio_fops_release(struct file *file)
                si476x_core_set_power_state(radio->core,
                                            SI476X_POWER_DOWN);
 
-       err = v4l2_fh_release(file);
-
-       return err;
+       return v4l2_fh_release(file);
 }
 
 static ssize_t si476x_radio_fops_read(struct file *file, char __user *buf,
index 2aec642..93d847c 100644 (file)
@@ -14,7 +14,7 @@
 #include <linux/interrupt.h>
 #include <linux/i2c.h>
 #include <linux/slab.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/module.h>
 #include <media/v4l2-device.h>
 #include <media/v4l2-ioctl.h>
index 735b925..5edfd8a 100644 (file)
@@ -684,7 +684,6 @@ static int send_packet(struct imon_context *ictx)
  */
 static int send_associate_24g(struct imon_context *ictx)
 {
-       int retval;
        const unsigned char packet[8] = { 0x01, 0x00, 0x00, 0x00,
                                          0x00, 0x00, 0x00, 0x20 };
 
@@ -699,9 +698,8 @@ static int send_associate_24g(struct imon_context *ictx)
        }
 
        memcpy(ictx->usb_tx_buf, packet, sizeof(packet));
-       retval = send_packet(ictx);
 
-       return retval;
+       return send_packet(ictx);
 }
 
 /*
index 39d2b03..c76ba24 100644 (file)
@@ -1077,7 +1077,7 @@ static int mceusb_set_timeout(struct rc_dev *dev, unsigned int timeout)
        struct mceusb_dev *ir = dev->priv;
        unsigned int units;
 
-       units = DIV_ROUND_CLOSEST(timeout, MCE_TIME_UNIT);
+       units = DIV_ROUND_UP(timeout, MCE_TIME_UNIT);
 
        cmdbuf[2] = units >> 8;
        cmdbuf[3] = units;
index 6c43780..aa94427 100644 (file)
@@ -241,13 +241,12 @@ static void vimc_capture_return_all_buffers(struct vimc_capture_device *vcapture
 static int vimc_capture_start_streaming(struct vb2_queue *vq, unsigned int count)
 {
        struct vimc_capture_device *vcapture = vb2_get_drv_priv(vq);
-       struct media_entity *entity = &vcapture->vdev.entity;
        int ret;
 
        vcapture->sequence = 0;
 
        /* Start the media pipeline */
-       ret = media_pipeline_start(entity, &vcapture->stream.pipe);
+       ret = video_device_pipeline_start(&vcapture->vdev, &vcapture->stream.pipe);
        if (ret) {
                vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED);
                return ret;
@@ -255,7 +254,7 @@ static int vimc_capture_start_streaming(struct vb2_queue *vq, unsigned int count
 
        ret = vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 1);
        if (ret) {
-               media_pipeline_stop(entity);
+               video_device_pipeline_stop(&vcapture->vdev);
                vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED);
                return ret;
        }
@@ -274,7 +273,7 @@ static void vimc_capture_stop_streaming(struct vb2_queue *vq)
        vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 0);
 
        /* Stop the media pipeline */
-       media_pipeline_stop(&vcapture->vdev.entity);
+       video_device_pipeline_stop(&vcapture->vdev);
 
        /* Release all active buffers */
        vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_ERROR);
index 232cab5..8bd0958 100644 (file)
@@ -104,8 +104,8 @@ retry:
                                break;
                        case 2:
                                rds.block |= V4L2_RDS_BLOCK_ERROR;
-                               rds.lsb = prandom_u32_max(256);
-                               rds.msb = prandom_u32_max(256);
+                               rds.lsb = get_random_u8();
+                               rds.msb = get_random_u8();
                                break;
                        case 3: /* Skip block altogether */
                                if (i)
index 64e3e4c..6cc32eb 100644 (file)
@@ -210,7 +210,7 @@ static void vivid_fill_buff_noise(__s16 *tch_buf, int size)
 
        /* Fill 10% of the values within range -3 and 3, zero the others */
        for (i = 0; i < size; i++) {
-               unsigned int rand = get_random_int();
+               unsigned int rand = get_random_u32();
 
                if (rand % 10)
                        tch_buf[i] = 0;
@@ -221,7 +221,7 @@ static void vivid_fill_buff_noise(__s16 *tch_buf, int size)
 
 static inline int get_random_pressure(void)
 {
-       return get_random_int() % VIVID_PRESSURE_LIMIT;
+       return prandom_u32_max(VIVID_PRESSURE_LIMIT);
 }
 
 static void vivid_tch_buf_set(struct v4l2_pix_format *f,
@@ -272,7 +272,7 @@ void vivid_fillbuff_tch(struct vivid_dev *dev, struct vivid_buffer *buf)
                return;
 
        if (test_pat_idx == 0)
-               dev->tch_pat_random = get_random_int();
+               dev->tch_pat_random = get_random_u32();
        rand = dev->tch_pat_random;
 
        switch (test_pattern) {
index a04dfd5..d59b4ab 100644 (file)
@@ -282,15 +282,13 @@ static int xc4000_tuner_reset(struct dvb_frontend *fe)
 static int xc_write_reg(struct xc4000_priv *priv, u16 regAddr, u16 i2cData)
 {
        u8 buf[4];
-       int result;
 
        buf[0] = (regAddr >> 8) & 0xFF;
        buf[1] = regAddr & 0xFF;
        buf[2] = (i2cData >> 8) & 0xFF;
        buf[3] = i2cData & 0xFF;
-       result = xc_send_i2c_data(priv, buf, 4);
 
-       return result;
+       return xc_send_i2c_data(priv, buf, 4);
 }
 
 static int xc_load_i2c_sequence(struct dvb_frontend *fe, const u8 *i2c_sequence)
index caefac0..877e85a 100644 (file)
@@ -410,7 +410,7 @@ static int au0828_enable_source(struct media_entity *entity,
                goto end;
        }
 
-       ret = __media_pipeline_start(entity, pipe);
+       ret = __media_pipeline_start(entity->pads, pipe);
        if (ret) {
                pr_err("Start Pipeline: %s->%s Error %d\n",
                        source->name, entity->name, ret);
@@ -501,12 +501,12 @@ static void au0828_disable_source(struct media_entity *entity)
                                return;
 
                        /* stop pipeline */
-                       __media_pipeline_stop(dev->active_link_owner);
+                       __media_pipeline_stop(dev->active_link_owner->pads);
                        pr_debug("Pipeline stop for %s\n",
                                dev->active_link_owner->name);
 
                        ret = __media_pipeline_start(
-                                       dev->active_link_user,
+                                       dev->active_link_user->pads,
                                        dev->active_link_user_pipe);
                        if (ret) {
                                pr_err("Start Pipeline: %s->%s %d\n",
@@ -532,7 +532,7 @@ static void au0828_disable_source(struct media_entity *entity)
                        return;
 
                /* stop pipeline */
-               __media_pipeline_stop(dev->active_link_owner);
+               __media_pipeline_stop(dev->active_link_owner->pads);
                pr_debug("Pipeline stop for %s\n",
                        dev->active_link_owner->name);
 
index 5eef37b..1e9c8d0 100644 (file)
@@ -1497,7 +1497,7 @@ static int af9035_tuner_attach(struct dvb_usb_adapter *adap)
                /*
                 * AF9035 gpiot2 = FC0012 enable
                 * XXX: there seems to be something on gpioh8 too, but on my
-                * my test I didn't find any difference.
+                * test I didn't find any difference.
                 */
 
                if (adap->id == 0) {
index 5a1f269..9759996 100644 (file)
@@ -209,7 +209,7 @@ leave:
  *
  * Control bits for previous samples is 32-bit field, containing 16 x 2-bit
  * numbers. This results one 2-bit number for 8 samples. It is likely used for
- * for bit shifting sample by given bits, increasing actual sampling resolution.
+ * bit shifting sample by given bits, increasing actual sampling resolution.
  * Number 2 (0b10) was never seen.
  *
  * 6 * 16 * 2 * 4 = 768 samples. 768 * 4 = 3072 bytes
index a8c354a..d0a3aa3 100644 (file)
@@ -89,7 +89,7 @@ static int req_to_user(struct v4l2_ext_control *c,
 /* Helper function: copy the initial control value back to the caller */
 static int def_to_user(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
 {
-       ctrl->type_ops->init(ctrl, 0, ctrl->elems, ctrl->p_new);
+       ctrl->type_ops->init(ctrl, 0, ctrl->p_new);
 
        return ptr_to_user(c, ctrl, ctrl->p_new);
 }
@@ -126,7 +126,7 @@ static int user_to_new(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
                if (ctrl->is_dyn_array)
                        ctrl->new_elems = elems;
                else if (ctrl->is_array)
-                       ctrl->type_ops->init(ctrl, elems, ctrl->elems, ctrl->p_new);
+                       ctrl->type_ops->init(ctrl, elems, ctrl->p_new);
                return 0;
        }
 
@@ -494,7 +494,7 @@ EXPORT_SYMBOL(v4l2_g_ext_ctrls);
 /* Validate a new control */
 static int validate_new(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr p_new)
 {
-       return ctrl->type_ops->validate(ctrl, ctrl->new_elems, p_new);
+       return ctrl->type_ops->validate(ctrl, p_new);
 }
 
 /* Validate controls. */
@@ -1007,7 +1007,7 @@ int __v4l2_ctrl_modify_dimensions(struct v4l2_ctrl *ctrl,
        ctrl->p_cur.p = p_array + elems * ctrl->elem_size;
        for (i = 0; i < ctrl->nr_of_dims; i++)
                ctrl->dims[i] = dims[i];
-       ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur);
+       ctrl->type_ops->init(ctrl, 0, ctrl->p_cur);
        cur_to_new(ctrl);
        send_event(NULL, ctrl, V4L2_EVENT_CTRL_CH_VALUE |
                               V4L2_EVENT_CTRL_CH_DIMENSIONS);
index 01f0009..0dab1d7 100644 (file)
@@ -65,7 +65,7 @@ void send_event(struct v4l2_fh *fh, struct v4l2_ctrl *ctrl, u32 changes)
                        v4l2_event_queue_fh(sev->fh, &ev);
 }
 
-bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
+bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl,
                             union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2)
 {
        unsigned int i;
@@ -74,7 +74,7 @@ bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
        case V4L2_CTRL_TYPE_BUTTON:
                return false;
        case V4L2_CTRL_TYPE_STRING:
-               for (i = 0; i < elems; i++) {
+               for (i = 0; i < ctrl->elems; i++) {
                        unsigned int idx = i * ctrl->elem_size;
 
                        /* strings are always 0-terminated */
@@ -84,7 +84,7 @@ bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
                return true;
        default:
                return !memcmp(ptr1.p_const, ptr2.p_const,
-                              elems * ctrl->elem_size);
+                              ctrl->elems * ctrl->elem_size);
        }
 }
 EXPORT_SYMBOL(v4l2_ctrl_type_op_equal);
@@ -178,9 +178,10 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
 }
 
 void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx,
-                           u32 tot_elems, union v4l2_ctrl_ptr ptr)
+                           union v4l2_ctrl_ptr ptr)
 {
        unsigned int i;
+       u32 tot_elems = ctrl->elems;
        u32 elems = tot_elems - from_idx;
 
        if (from_idx >= tot_elems)
@@ -995,7 +996,7 @@ static int std_validate_elem(const struct v4l2_ctrl *ctrl, u32 idx,
        }
 }
 
-int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems,
+int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl,
                               union v4l2_ctrl_ptr ptr)
 {
        unsigned int i;
@@ -1017,11 +1018,11 @@ int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems,
 
        case V4L2_CTRL_TYPE_BUTTON:
        case V4L2_CTRL_TYPE_CTRL_CLASS:
-               memset(ptr.p_s32, 0, elems * sizeof(s32));
+               memset(ptr.p_s32, 0, ctrl->new_elems * sizeof(s32));
                return 0;
        }
 
-       for (i = 0; !ret && i < elems; i++)
+       for (i = 0; !ret && i < ctrl->new_elems; i++)
                ret = std_validate_elem(ctrl, i, ptr);
        return ret;
 }
@@ -1724,7 +1725,7 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
                memcpy(ctrl->p_def.p, p_def.p_const, elem_size);
        }
 
-       ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur);
+       ctrl->type_ops->init(ctrl, 0, ctrl->p_cur);
        cur_to_new(ctrl);
 
        if (handler_new_ref(hdl, ctrl, NULL, false, false)) {
@@ -2069,7 +2070,7 @@ static int cluster_changed(struct v4l2_ctrl *master)
                        ctrl_changed = true;
                if (!ctrl_changed)
                        ctrl_changed = !ctrl->type_ops->equal(ctrl,
-                               ctrl->elems, ctrl->p_cur, ctrl->p_new);
+                               ctrl->p_cur, ctrl->p_new);
                ctrl->has_changed = ctrl_changed;
                changed |= ctrl->has_changed;
        }
index d00237e..397d553 100644 (file)
@@ -1095,6 +1095,78 @@ void video_unregister_device(struct video_device *vdev)
 }
 EXPORT_SYMBOL(video_unregister_device);
 
+#if defined(CONFIG_MEDIA_CONTROLLER)
+
+__must_check int video_device_pipeline_start(struct video_device *vdev,
+                                            struct media_pipeline *pipe)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (entity->num_pads != 1)
+               return -ENODEV;
+
+       return media_pipeline_start(&entity->pads[0], pipe);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_start);
+
+__must_check int __video_device_pipeline_start(struct video_device *vdev,
+                                              struct media_pipeline *pipe)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (entity->num_pads != 1)
+               return -ENODEV;
+
+       return __media_pipeline_start(&entity->pads[0], pipe);
+}
+EXPORT_SYMBOL_GPL(__video_device_pipeline_start);
+
+void video_device_pipeline_stop(struct video_device *vdev)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (WARN_ON(entity->num_pads != 1))
+               return;
+
+       return media_pipeline_stop(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_stop);
+
+void __video_device_pipeline_stop(struct video_device *vdev)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (WARN_ON(entity->num_pads != 1))
+               return;
+
+       return __media_pipeline_stop(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(__video_device_pipeline_stop);
+
+__must_check int video_device_pipeline_alloc_start(struct video_device *vdev)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (entity->num_pads != 1)
+               return -ENODEV;
+
+       return media_pipeline_alloc_start(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline_alloc_start);
+
+struct media_pipeline *video_device_pipeline(struct video_device *vdev)
+{
+       struct media_entity *entity = &vdev->entity;
+
+       if (WARN_ON(entity->num_pads != 1))
+               return NULL;
+
+       return media_pad_pipeline(&entity->pads[0]);
+}
+EXPORT_SYMBOL_GPL(video_device_pipeline);
+
+#endif /* CONFIG_MEDIA_CONTROLLER */
+
 /*
  *     Initialise video for linux
  */
index 9489e80..bdb2ce7 100644 (file)
@@ -66,6 +66,14 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_clk)
                goto err_map;
        }
 
+       /* Parse the device's DT node for an endianness specification */
+       if (of_property_read_bool(np, "big-endian"))
+               syscon_config.val_format_endian = REGMAP_ENDIAN_BIG;
+       else if (of_property_read_bool(np, "little-endian"))
+               syscon_config.val_format_endian = REGMAP_ENDIAN_LITTLE;
+       else if (of_property_read_bool(np, "native-endian"))
+               syscon_config.val_format_endian = REGMAP_ENDIAN_NATIVE;
+
        /*
         * search for reg-io-width property in DT. If it is not provided,
         * default to 4 bytes. regmap_init_mmio will return an error if values
index 75c4bef..65e6cae 100644 (file)
@@ -2948,7 +2948,7 @@ static void gaudi2_user_interrupt_setup(struct hl_device *hdev)
 
 static inline int gaudi2_get_non_zero_random_int(void)
 {
-       int rand = get_random_int();
+       int rand = get_random_u32();
 
        return rand ? rand : 1;
 }
index ce89611..54cd009 100644 (file)
@@ -1140,8 +1140,12 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 {
        struct mmc_blk_data *md = mq->blkdata;
        struct mmc_card *card = md->queue.card;
+       unsigned int arg = card->erase_arg;
 
-       mmc_blk_issue_erase_rq(mq, req, MMC_BLK_DISCARD, card->erase_arg);
+       if (mmc_card_broken_sd_discard(card))
+               arg = SD_ERASE_ARG;
+
+       mmc_blk_issue_erase_rq(mq, req, MMC_BLK_DISCARD, arg);
 }
 
 static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
index 99045e1..cfdd1ff 100644 (file)
@@ -73,6 +73,7 @@ struct mmc_fixup {
 #define EXT_CSD_REV_ANY (-1u)
 
 #define CID_MANFID_SANDISK      0x2
+#define CID_MANFID_SANDISK_SD   0x3
 #define CID_MANFID_ATP          0x9
 #define CID_MANFID_TOSHIBA      0x11
 #define CID_MANFID_MICRON       0x13
@@ -258,4 +259,9 @@ static inline int mmc_card_broken_hpi(const struct mmc_card *c)
        return c->quirks & MMC_QUIRK_BROKEN_HPI;
 }
 
+static inline int mmc_card_broken_sd_discard(const struct mmc_card *c)
+{
+       return c->quirks & MMC_QUIRK_BROKEN_SD_DISCARD;
+}
+
 #endif
index ef53a25..95fa8fb 100644 (file)
@@ -97,8 +97,8 @@ static void mmc_should_fail_request(struct mmc_host *host,
            !should_fail(&host->fail_mmc_request, data->blksz * data->blocks))
                return;
 
-       data->error = data_errors[prandom_u32() % ARRAY_SIZE(data_errors)];
-       data->bytes_xfered = (prandom_u32() % (data->bytes_xfered >> 9)) << 9;
+       data->error = data_errors[prandom_u32_max(ARRAY_SIZE(data_errors))];
+       data->bytes_xfered = prandom_u32_max(data->bytes_xfered >> 9) << 9;
 }
 
 #else /* CONFIG_FAIL_MMC_REQUEST */
index be43939..29b9497 100644 (file)
@@ -100,6 +100,12 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
        MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
                  MMC_QUIRK_TRIM_BROKEN),
 
+       /*
+        * Some SD cards report discard support even though they don't
+        */
+       MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
+                 MMC_QUIRK_BROKEN_SD_DISCARD),
+
        END_FIXUP
 };
 
index 5816141..c78bbc2 100644 (file)
@@ -1858,7 +1858,7 @@ static void dw_mci_start_fault_timer(struct dw_mci *host)
         * Try to inject the error at random points during the data transfer.
         */
        hrtimer_start(&host->fault_timer,
-                     ms_to_ktime(prandom_u32() % 25),
+                     ms_to_ktime(prandom_u32_max(25)),
                      HRTIMER_MODE_REL);
 }
 
index 6edbf5c..b970699 100644 (file)
@@ -128,6 +128,7 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
        struct clk *ref_clk = priv->clk;
        unsigned int freq, diff, best_freq = 0, diff_min = ~0;
        unsigned int new_clock, clkh_shift = 0;
+       unsigned int new_upper_limit;
        int i;
 
        /*
@@ -153,13 +154,20 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
         * greater than, new_clock.  As we can divide by 1 << i for
         * any i in [0, 9] we want the input clock to be as close as
         * possible, but no greater than, new_clock << i.
+        *
+        * Add an upper limit of 1/1024 of the clock rate to avoid the clk
+        * rate jumping to a lower rate due to rounding errors (e.g. RZ/G2L
+        * has 3 clk sources: 533.333333 MHz, 400 MHz and 266.666666 MHz. A
+        * request for 533.333333 MHz would select the slower 400 MHz due to
+        * a rounding error (533333333 Hz / 4 * 4 = 533333332 Hz < 533333333 Hz)).
         */
        for (i = min(9, ilog2(UINT_MAX / new_clock)); i >= 0; i--) {
                freq = clk_round_rate(ref_clk, new_clock << i);
-               if (freq > (new_clock << i)) {
+               new_upper_limit = (new_clock << i) + ((new_clock << i) >> 10);
+               if (freq > new_upper_limit) {
                        /* Too fast; look for a slightly slower option */
                        freq = clk_round_rate(ref_clk, (new_clock << i) / 4 * 3);
-                       if (freq > (new_clock << i))
+                       if (freq > new_upper_limit)
                                continue;
                }
 
@@ -181,6 +189,7 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
 static void renesas_sdhi_set_clock(struct tmio_mmc_host *host,
                                   unsigned int new_clock)
 {
+       unsigned int clk_margin;
        u32 clk = 0, clock;
 
        sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
@@ -194,7 +203,13 @@ static void renesas_sdhi_set_clock(struct tmio_mmc_host *host,
        host->mmc->actual_clock = renesas_sdhi_clk_update(host, new_clock);
        clock = host->mmc->actual_clock / 512;
 
-       for (clk = 0x80000080; new_clock >= (clock << 1); clk >>= 1)
+       /*
+        * Add a margin of 1/1024 of the clock rate in order to avoid the
+        * clk variable being set to 0 due to the margin provided for
+        * actual_clock in renesas_sdhi_clk_update().
+        */
+       clk_margin = new_clock >> 10;
+       for (clk = 0x80000080; new_clock + clk_margin >= (clock << 1); clk >>= 1)
                clock <<= 1;
 
        /* 1/1 clock is option */
index 46c55ab..b92a408 100644 (file)
@@ -309,7 +309,7 @@ static unsigned int sdhci_sprd_get_max_clock(struct sdhci_host *host)
 
 static unsigned int sdhci_sprd_get_min_clock(struct sdhci_host *host)
 {
-       return 400000;
+       return 100000;
 }
 
 static void sdhci_sprd_set_uhs_signaling(struct sdhci_host *host,
index 2d2d826..413925b 100644 (file)
@@ -773,7 +773,7 @@ static void tegra_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
                dev_err(dev, "failed to set clk rate to %luHz: %d\n",
                        host_clk, err);
 
-       tegra_host->curr_clk_rate = host_clk;
+       tegra_host->curr_clk_rate = clk_get_rate(pltfm_host->clk);
        if (tegra_host->ddr_signaling)
                host->max_clk = host_clk;
        else
index 24beade..6727190 100644 (file)
@@ -1393,7 +1393,7 @@ static int ns_do_read_error(struct nandsim *ns, int num)
        unsigned int page_no = ns->regs.row;
 
        if (ns_read_error(page_no)) {
-               prandom_bytes(ns->buf.byte, num);
+               get_random_bytes(ns->buf.byte, num);
                NS_WARN("simulating read error in page %u\n", page_no);
                return 1;
        }
@@ -1402,12 +1402,12 @@ static int ns_do_read_error(struct nandsim *ns, int num)
 
 static void ns_do_bit_flips(struct nandsim *ns, int num)
 {
-       if (bitflips && prandom_u32() < (1 << 22)) {
+       if (bitflips && get_random_u16() < (1 << 6)) {
                int flips = 1;
                if (bitflips > 1)
-                       flips = (prandom_u32() % (int) bitflips) + 1;
+                       flips = prandom_u32_max(bitflips) + 1;
                while (flips--) {
-                       int pos = prandom_u32() % (num * 8);
+                       int pos = prandom_u32_max(num * 8);
                        ns->buf.byte[pos / 8] ^= (1 << (pos % 8));
                        NS_WARN("read_page: flipping bit %d in page %d "
                                "reading from %d ecc: corrected=%u failed=%u\n",
index c4f2713..4409885 100644 (file)
@@ -47,7 +47,7 @@ struct nand_ecc_test {
 static void single_bit_error_data(void *error_data, void *correct_data,
                                size_t size)
 {
-       unsigned int offset = prandom_u32() % (size * BITS_PER_BYTE);
+       unsigned int offset = prandom_u32_max(size * BITS_PER_BYTE);
 
        memcpy(error_data, correct_data, size);
        __change_bit_le(offset, error_data);
@@ -58,9 +58,9 @@ static void double_bit_error_data(void *error_data, void *correct_data,
 {
        unsigned int offset[2];
 
-       offset[0] = prandom_u32() % (size * BITS_PER_BYTE);
+       offset[0] = prandom_u32_max(size * BITS_PER_BYTE);
        do {
-               offset[1] = prandom_u32() % (size * BITS_PER_BYTE);
+               offset[1] = prandom_u32_max(size * BITS_PER_BYTE);
        } while (offset[0] == offset[1]);
 
        memcpy(error_data, correct_data, size);
@@ -71,7 +71,7 @@ static void double_bit_error_data(void *error_data, void *correct_data,
 
 static unsigned int random_ecc_bit(size_t size)
 {
-       unsigned int offset = prandom_u32() % (3 * BITS_PER_BYTE);
+       unsigned int offset = prandom_u32_max(3 * BITS_PER_BYTE);
 
        if (size == 256) {
                /*
@@ -79,7 +79,7 @@ static unsigned int random_ecc_bit(size_t size)
                 * and 17th bit) in ECC code for 256 byte data block
                 */
                while (offset == 16 || offset == 17)
-                       offset = prandom_u32() % (3 * BITS_PER_BYTE);
+                       offset = prandom_u32_max(3 * BITS_PER_BYTE);
        }
 
        return offset;
@@ -266,7 +266,7 @@ static int nand_ecc_test_run(const size_t size)
                goto error;
        }
 
-       prandom_bytes(correct_data, size);
+       get_random_bytes(correct_data, size);
        ecc_sw_hamming_calculate(correct_data, size, correct_ecc, sm_order);
        for (i = 0; i < ARRAY_SIZE(nand_ecc_test); i++) {
                nand_ecc_test[i].prepare(error_data, error_ecc,
index c9ec708..075bce3 100644 (file)
@@ -223,7 +223,7 @@ static int __init mtd_speedtest_init(void)
        if (!iobuf)
                goto out;
 
-       prandom_bytes(iobuf, mtd->erasesize);
+       get_random_bytes(iobuf, mtd->erasesize);
 
        bbt = kzalloc(ebcnt, GFP_KERNEL);
        if (!bbt)
index cb29c8c..75b6ddc 100644 (file)
@@ -45,9 +45,8 @@ static int rand_eb(void)
        unsigned int eb;
 
 again:
-       eb = prandom_u32();
        /* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
-       eb %= (ebcnt - 1);
+       eb = prandom_u32_max(ebcnt - 1);
        if (bbt[eb])
                goto again;
        return eb;
@@ -55,20 +54,12 @@ again:
 
 static int rand_offs(void)
 {
-       unsigned int offs;
-
-       offs = prandom_u32();
-       offs %= bufsize;
-       return offs;
+       return prandom_u32_max(bufsize);
 }
 
 static int rand_len(int offs)
 {
-       unsigned int len;
-
-       len = prandom_u32();
-       len %= (bufsize - offs);
-       return len;
+       return prandom_u32_max(bufsize - offs);
 }
 
 static int do_read(void)
@@ -127,7 +118,7 @@ static int do_write(void)
 
 static int do_operation(void)
 {
-       if (prandom_u32() & 1)
+       if (prandom_u32_max(2))
                return do_read();
        else
                return do_write();
@@ -192,7 +183,7 @@ static int __init mtd_stresstest_init(void)
                goto out;
        for (i = 0; i < ebcnt; i++)
                offsets[i] = mtd->erasesize;
-       prandom_bytes(writebuf, bufsize);
+       get_random_bytes(writebuf, bufsize);
 
        bbt = kzalloc(ebcnt, GFP_KERNEL);
        if (!bbt)
index 4cf67a2..75eaecc 100644 (file)
@@ -409,7 +409,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
        ret = blk_mq_alloc_tag_set(&dev->tag_set);
        if (ret) {
                dev_err(disk_to_dev(dev->gd), "blk_mq_alloc_tag_set failed");
-               goto out_free_dev;;
+               goto out_free_dev;
        }
 
 
@@ -441,7 +441,7 @@ int ubiblock_create(struct ubi_volume_info *vi)
 
        /*
         * Create one workqueue per volume (per registered block device).
-        * Rembember workqueues are cheap, they're not threads.
+        * Remember workqueues are cheap, they're not threads.
         */
        dev->wq = alloc_workqueue("%s", 0, 0, gd->disk_name);
        if (!dev->wq) {
index a32050f..a901f8e 100644 (file)
@@ -807,6 +807,7 @@ static int autoresize(struct ubi_device *ubi, int vol_id)
  * @ubi_num: number to assign to the new UBI device
  * @vid_hdr_offset: VID header offset
  * @max_beb_per1024: maximum expected number of bad PEB per 1024 PEBs
+ * @disable_fm: whether to disable fastmap
  *
  * This function attaches MTD device @mtd_dev to UBI and assign @ubi_num number
  * to the newly created UBI device, unless @ubi_num is %UBI_DEV_NUM_AUTO, in
@@ -814,11 +815,15 @@ static int autoresize(struct ubi_device *ubi, int vol_id)
  * automatically. Returns the new UBI device number in case of success and a
  * negative error code in case of failure.
  *
+ * If @disable_fm is true, UBI doesn't create a new fastmap even if the module
+ * param 'fm_autoconvert' is set, and any existing old fastmap will be
+ * destroyed after doing a full scan.
+ *
  * Note, the invocations of this function has to be serialized by the
  * @ubi_devices_mutex.
  */
 int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
-                      int vid_hdr_offset, int max_beb_per1024)
+                      int vid_hdr_offset, int max_beb_per1024, bool disable_fm)
 {
        struct ubi_device *ubi;
        int i, err;
@@ -921,7 +926,7 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
                UBI_FM_MIN_POOL_SIZE);
 
        ubi->fm_wl_pool.max_size = ubi->fm_pool.max_size / 2;
-       ubi->fm_disabled = !fm_autoconvert;
+       ubi->fm_disabled = (!fm_autoconvert || disable_fm) ? 1 : 0;
        if (fm_debug)
                ubi_enable_dbg_chk_fastmap(ubi);
 
@@ -962,7 +967,7 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
        if (!ubi->fm_buf)
                goto out_free;
 #endif
-       err = ubi_attach(ubi, 0);
+       err = ubi_attach(ubi, disable_fm ? 1 : 0);
        if (err) {
                ubi_err(ubi, "failed to attach mtd%d, error %d",
                        mtd->index, err);
@@ -1242,7 +1247,8 @@ static int __init ubi_init(void)
 
                mutex_lock(&ubi_devices_mutex);
                err = ubi_attach_mtd_dev(mtd, p->ubi_num,
-                                        p->vid_hdr_offs, p->max_beb_per1024);
+                                        p->vid_hdr_offs, p->max_beb_per1024,
+                                        false);
                mutex_unlock(&ubi_devices_mutex);
                if (err < 0) {
                        pr_err("UBI error: cannot attach mtd%d\n",
index cc9a28c..f43430b 100644 (file)
@@ -672,7 +672,7 @@ static int verify_rsvol_req(const struct ubi_device *ubi,
  * @req: volumes re-name request
  *
  * This is a helper function for the volume re-name IOCTL which validates the
- * the request, opens the volume and calls corresponding volumes management
+ * request, opens the volume and calls corresponding volumes management
  * function. Returns zero in case of success and a negative error code in case
  * of failure.
  */
@@ -1041,7 +1041,7 @@ static long ctrl_cdev_ioctl(struct file *file, unsigned int cmd,
                 */
                mutex_lock(&ubi_devices_mutex);
                err = ubi_attach_mtd_dev(mtd, req.ubi_num, req.vid_hdr_offset,
-                                        req.max_beb_per1024);
+                                        req.max_beb_per1024, !!req.disable_fm);
                mutex_unlock(&ubi_devices_mutex);
                if (err < 0)
                        put_mtd_device(mtd);
index 31d427e..908d0e0 100644 (file)
@@ -590,7 +590,7 @@ int ubi_dbg_power_cut(struct ubi_device *ubi, int caller)
 
                if (ubi->dbg.power_cut_max > ubi->dbg.power_cut_min) {
                        range = ubi->dbg.power_cut_max - ubi->dbg.power_cut_min;
-                       ubi->dbg.power_cut_counter += prandom_u32() % range;
+                       ubi->dbg.power_cut_counter += prandom_u32_max(range);
                }
                return 0;
        }
index 118248a..dc8d8f8 100644 (file)
@@ -73,7 +73,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 {
        if (ubi->dbg.emulate_bitflips)
-               return !(prandom_u32() % 200);
+               return !prandom_u32_max(200);
        return 0;
 }
 
@@ -87,7 +87,7 @@ static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 {
        if (ubi->dbg.emulate_io_failures)
-               return !(prandom_u32() % 500);
+               return !prandom_u32_max(500);
        return 0;
 }
 
@@ -101,7 +101,7 @@ static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_erase_failure(const struct ubi_device *ubi)
 {
        if (ubi->dbg.emulate_io_failures)
-               return !(prandom_u32() % 400);
+               return !prandom_u32_max(400);
        return 0;
 }
 
index ccc5979..09c408c 100644 (file)
@@ -377,7 +377,7 @@ static int leb_write_lock(struct ubi_device *ubi, int vol_id, int lnum)
  *
  * This function locks a logical eraseblock for writing if there is no
  * contention and does nothing if there is contention. Returns %0 in case of
- * success, %1 in case of contention, and and a negative error code in case of
+ * success, %1 in case of contention, and a negative error code in case of
  * failure.
  */
 static int leb_write_trylock(struct ubi_device *ubi, int vol_id, int lnum)
index 6e95c4b..ca2d9ef 100644 (file)
@@ -20,8 +20,7 @@ static inline unsigned long *init_seen(struct ubi_device *ubi)
        if (!ubi_dbg_chk_fastmap(ubi))
                return NULL;
 
-       ret = kcalloc(BITS_TO_LONGS(ubi->peb_count), sizeof(unsigned long),
-                     GFP_KERNEL);
+       ret = bitmap_zalloc(ubi->peb_count, GFP_KERNEL);
        if (!ret)
                return ERR_PTR(-ENOMEM);
 
@@ -34,7 +33,7 @@ static inline unsigned long *init_seen(struct ubi_device *ubi)
  */
 static inline void free_seen(unsigned long *seen)
 {
-       kfree(seen);
+       bitmap_free(seen);
 }
 
 /**
@@ -1108,8 +1107,7 @@ int ubi_fastmap_init_checkmap(struct ubi_volume *vol, int leb_count)
        if (!ubi->fast_attach)
                return 0;
 
-       vol->checkmap = kcalloc(BITS_TO_LONGS(leb_count), sizeof(unsigned long),
-                               GFP_KERNEL);
+       vol->checkmap = bitmap_zalloc(leb_count, GFP_KERNEL);
        if (!vol->checkmap)
                return -ENOMEM;
 
@@ -1118,7 +1116,7 @@ int ubi_fastmap_init_checkmap(struct ubi_volume *vol, int leb_count)
 
 void ubi_fastmap_destroy_checkmap(struct ubi_volume *vol)
 {
-       kfree(vol->checkmap);
+       bitmap_free(vol->checkmap);
 }
 
 /**
index 8a7306c..01b6448 100644 (file)
@@ -1147,7 +1147,7 @@ fail:
  * @ubi: UBI device description object
  * @pnum: the physical eraseblock number to check
  *
- * This function returns zero if the erase counter header is all right and and
+ * This function returns zero if the erase counter header is all right and
  * a negative error code if not or if an error occurred.
  */
 static int self_check_peb_ec_hdr(const struct ubi_device *ubi, int pnum)
index 386db05..2c9cd3b 100644 (file)
@@ -131,7 +131,7 @@ enum {
  * is changed radically. This field is duplicated in the volume identifier
  * header.
  *
- * The @vid_hdr_offset and @data_offset fields contain the offset of the the
+ * The @vid_hdr_offset and @data_offset fields contain the offset of the
  * volume identifier header and user data, relative to the beginning of the
  * physical eraseblock. These values have to be the same for all physical
  * eraseblocks.
index 078112e..c8f1bd4 100644 (file)
@@ -86,7 +86,7 @@ void ubi_err(const struct ubi_device *ubi, const char *fmt, ...);
  * Error codes returned by the I/O sub-system.
  *
  * UBI_IO_FF: the read region of flash contains only 0xFFs
- * UBI_IO_FF_BITFLIPS: the same as %UBI_IO_FF, but also also there was a data
+ * UBI_IO_FF_BITFLIPS: the same as %UBI_IO_FF, but also there was a data
  *                     integrity error reported by the MTD driver
  *                     (uncorrectable ECC error in case of NAND)
  * UBI_IO_BAD_HDR: the EC or VID header is corrupted (bad magic or CRC)
@@ -281,7 +281,7 @@ struct ubi_eba_leb_desc {
 
 /**
  * struct ubi_volume - UBI volume description data structure.
- * @dev: device object to make use of the the Linux device model
+ * @dev: device object to make use of the Linux device model
  * @cdev: character device object to create character device
  * @ubi: reference to the UBI device description object
  * @vol_id: volume ID
@@ -439,7 +439,7 @@ struct ubi_debug_info {
 
 /**
  * struct ubi_device - UBI device description structure
- * @dev: UBI device object to use the the Linux device model
+ * @dev: UBI device object to use the Linux device model
  * @cdev: character device object to create character device
  * @ubi_num: UBI device number
  * @ubi_name: UBI device name
@@ -937,7 +937,8 @@ int ubi_io_write_vid_hdr(struct ubi_device *ubi, int pnum,
 
 /* build.c */
 int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
-                      int vid_hdr_offset, int max_beb_per1024);
+                      int vid_hdr_offset, int max_beb_per1024,
+                      bool disable_fm);
 int ubi_detach_mtd_dev(int ubi_num, int anyway);
 struct ubi_device *ubi_get_device(int ubi_num);
 void ubi_put_device(struct ubi_device *ubi);
index 6ea95ad..8fcc0bd 100644 (file)
@@ -623,7 +623,7 @@ void ubi_free_volume(struct ubi_device *ubi, struct ubi_volume *vol)
  * @ubi: UBI device description object
  * @vol_id: volume ID
  *
- * Returns zero if volume is all right and a negative error code if not.
+ * Returns zero if volume is all right and a negative error code if not.
  */
 static int self_check_volume(struct ubi_device *ubi, int vol_id)
 {
@@ -776,7 +776,7 @@ fail:
  * self_check_volumes - check information about all volumes.
  * @ubi: UBI device description object
  *
- * Returns zero if volumes are all right and a negative error code if not.
+ * Returns zero if volumes are all right and a negative error code if not.
  */
 static int self_check_volumes(struct ubi_device *ubi)
 {
index 55bae06..68eb0f2 100644 (file)
@@ -376,7 +376,7 @@ static struct ubi_wl_entry *find_mean_wl_entry(struct ubi_device *ubi,
  * refill_wl_user_pool().
  * @ubi: UBI device description object
  *
- * This function returns a wear leveling entry in case of success and
+ * This function returns a wear leveling entry in case of success and
  * NULL in case of failure.
  */
 static struct ubi_wl_entry *wl_get_wle(struct ubi_device *ubi)
@@ -429,7 +429,7 @@ static int prot_queue_del(struct ubi_device *ubi, int pnum)
 /**
  * sync_erase - synchronously erase a physical eraseblock.
  * @ubi: UBI device description object
- * @e: the the physical eraseblock to erase
+ * @e: the physical eraseblock to erase
  * @torture: if the physical eraseblock has to be tortured
  *
  * This function returns zero in case of success and a negative error code in
@@ -1016,7 +1016,7 @@ static int ensure_wear_leveling(struct ubi_device *ubi, int nested)
 
        /*
         * If the ubi->scrub tree is not empty, scrubbing is needed, and the
-        * the WL worker has to be scheduled anyway.
+        * WL worker has to be scheduled anyway.
         */
        if (!ubi->scrub.rb_node) {
 #ifdef CONFIG_MTD_UBI_FASTMAP
@@ -1464,7 +1464,7 @@ static bool scrub_possible(struct ubi_device *ubi, struct ubi_wl_entry *e)
  * ubi_bitflip_check - Check an eraseblock for bitflips and scrub it if needed.
  * @ubi: UBI device description object
  * @pnum: the physical eraseblock to schedule
- * @force: dont't read the block, assume bitflips happened and take action.
+ * @force: don't read the block, assume bitflips happened and take action.
  *
  * This function reads the given eraseblock and checks if bitflips occured.
  * In case of bitflips, the eraseblock is scheduled for scrubbing.
index e58a1e0..455b555 100644 (file)
@@ -75,6 +75,7 @@ enum ad_link_speed_type {
        AD_LINK_SPEED_100000MBPS,
        AD_LINK_SPEED_200000MBPS,
        AD_LINK_SPEED_400000MBPS,
+       AD_LINK_SPEED_800000MBPS,
 };
 
 /* compare MAC addresses */
@@ -251,6 +252,7 @@ static inline int __check_agg_selection_timer(struct port *port)
  *     %AD_LINK_SPEED_100000MBPS
  *     %AD_LINK_SPEED_200000MBPS
  *     %AD_LINK_SPEED_400000MBPS
+ *     %AD_LINK_SPEED_800000MBPS
  */
 static u16 __get_link_speed(struct port *port)
 {
@@ -326,6 +328,10 @@ static u16 __get_link_speed(struct port *port)
                        speed = AD_LINK_SPEED_400000MBPS;
                        break;
 
+               case SPEED_800000:
+                       speed = AD_LINK_SPEED_800000MBPS;
+                       break;
+
                default:
                        /* unknown speed value from ethtool. shouldn't happen */
                        if (slave->speed != SPEED_UNKNOWN)
@@ -753,6 +759,9 @@ static u32 __get_agg_bandwidth(struct aggregator *aggregator)
                case AD_LINK_SPEED_400000MBPS:
                        bandwidth = nports * 400000;
                        break;
+               case AD_LINK_SPEED_800000MBPS:
+                       bandwidth = nports * 800000;
+                       break;
                default:
                        bandwidth = 0; /* to silence the compiler */
                }
index 24bb50d..e84c49b 100644 (file)
@@ -4806,7 +4806,7 @@ static u32 bond_rr_gen_slave_id(struct bonding *bond)
 
        switch (packets_per_slave) {
        case 0:
-               slave_id = prandom_u32();
+               slave_id = get_random_u32();
                break;
        case 1:
                slave_id = this_cpu_inc_return(*bond->rr_tx_counter);
index 5669c92..c5c3b4e 100644 (file)
@@ -137,27 +137,42 @@ static void qca8k_rw_reg_ack_handler(struct dsa_switch *ds, struct sk_buff *skb)
        struct qca8k_mgmt_eth_data *mgmt_eth_data;
        struct qca8k_priv *priv = ds->priv;
        struct qca_mgmt_ethhdr *mgmt_ethhdr;
+       u32 command;
        u8 len, cmd;
+       int i;
 
        mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb_mac_header(skb);
        mgmt_eth_data = &priv->mgmt_eth_data;
 
-       cmd = FIELD_GET(QCA_HDR_MGMT_CMD, mgmt_ethhdr->command);
-       len = FIELD_GET(QCA_HDR_MGMT_LENGTH, mgmt_ethhdr->command);
+       command = get_unaligned_le32(&mgmt_ethhdr->command);
+       cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command);
+       len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command);
 
        /* Make sure the seq match the requested packet */
-       if (mgmt_ethhdr->seq == mgmt_eth_data->seq)
+       if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq)
                mgmt_eth_data->ack = true;
 
        if (cmd == MDIO_READ) {
-               mgmt_eth_data->data[0] = mgmt_ethhdr->mdio_data;
+               u32 *val = mgmt_eth_data->data;
+
+               *val = get_unaligned_le32(&mgmt_ethhdr->mdio_data);
 
                /* Get the rest of the 12 byte of data.
                 * The read/write function will extract the requested data.
                 */
-               if (len > QCA_HDR_MGMT_DATA1_LEN)
-                       memcpy(mgmt_eth_data->data + 1, skb->data,
-                              QCA_HDR_MGMT_DATA2_LEN);
+               if (len > QCA_HDR_MGMT_DATA1_LEN) {
+                       __le32 *data2 = (__le32 *)skb->data;
+                       int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+                                            len - QCA_HDR_MGMT_DATA1_LEN);
+
+                       val++;
+
+                       for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+                               *val = get_unaligned_le32(data2);
+                               val++;
+                               data2++;
+                       }
+               }
        }
 
        complete(&mgmt_eth_data->rw_done);
@@ -169,8 +184,10 @@ static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *
        struct qca_mgmt_ethhdr *mgmt_ethhdr;
        unsigned int real_len;
        struct sk_buff *skb;
-       u32 *data2;
+       __le32 *data2;
+       u32 command;
        u16 hdr;
+       int i;
 
        skb = dev_alloc_skb(QCA_HDR_MGMT_PKT_LEN);
        if (!skb)
@@ -199,20 +216,32 @@ static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *
        hdr |= FIELD_PREP(QCA_HDR_XMIT_DP_BIT, BIT(0));
        hdr |= FIELD_PREP(QCA_HDR_XMIT_CONTROL, QCA_HDR_XMIT_TYPE_RW_REG);
 
-       mgmt_ethhdr->command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
-       mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
-       mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
-       mgmt_ethhdr->command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
+       command = FIELD_PREP(QCA_HDR_MGMT_ADDR, reg);
+       command |= FIELD_PREP(QCA_HDR_MGMT_LENGTH, real_len);
+       command |= FIELD_PREP(QCA_HDR_MGMT_CMD, cmd);
+       command |= FIELD_PREP(QCA_HDR_MGMT_CHECK_CODE,
                                           QCA_HDR_MGMT_CHECK_CODE_VAL);
 
+       put_unaligned_le32(command, &mgmt_ethhdr->command);
+
        if (cmd == MDIO_WRITE)
-               mgmt_ethhdr->mdio_data = *val;
+               put_unaligned_le32(*val, &mgmt_ethhdr->mdio_data);
 
        mgmt_ethhdr->hdr = htons(hdr);
 
        data2 = skb_put_zero(skb, QCA_HDR_MGMT_DATA2_LEN + QCA_HDR_MGMT_PADDING_LEN);
-       if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN)
-               memcpy(data2, val + 1, len - QCA_HDR_MGMT_DATA1_LEN);
+       if (cmd == MDIO_WRITE && len > QCA_HDR_MGMT_DATA1_LEN) {
+               int data_len = min_t(int, QCA_HDR_MGMT_DATA2_LEN,
+                                    len - QCA_HDR_MGMT_DATA1_LEN);
+
+               val++;
+
+               for (i = sizeof(u32); i <= data_len; i += sizeof(u32)) {
+                       put_unaligned_le32(*val, data2);
+                       data2++;
+                       val++;
+               }
+       }
 
        return skb;
 }
@@ -220,9 +249,11 @@ static struct sk_buff *qca8k_alloc_mdio_header(enum mdio_cmd cmd, u32 reg, u32 *
 static void qca8k_mdio_header_fill_seq_num(struct sk_buff *skb, u32 seq_num)
 {
        struct qca_mgmt_ethhdr *mgmt_ethhdr;
+       u32 seq;
 
+       seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
        mgmt_ethhdr = (struct qca_mgmt_ethhdr *)skb->data;
-       mgmt_ethhdr->seq = FIELD_PREP(QCA_HDR_MGMT_SEQ_NUM, seq_num);
+       put_unaligned_le32(seq, &mgmt_ethhdr->seq);
 }
 
 static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
@@ -1487,9 +1518,9 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
        struct qca8k_priv *priv = ds->priv;
        const struct qca8k_mib_desc *mib;
        struct mib_ethhdr *mib_ethhdr;
-       int i, mib_len, offset = 0;
-       u64 *data;
+       __le32 *data2;
        u8 port;
+       int i;
 
        mib_ethhdr = (struct mib_ethhdr *)skb_mac_header(skb);
        mib_eth_data = &priv->mib_eth_data;
@@ -1501,28 +1532,24 @@ static void qca8k_mib_autocast_handler(struct dsa_switch *ds, struct sk_buff *sk
        if (port != mib_eth_data->req_port)
                goto exit;
 
-       data = mib_eth_data->data;
+       data2 = (__le32 *)skb->data;
 
        for (i = 0; i < priv->info->mib_count; i++) {
                mib = &ar8327_mib[i];
 
                /* First 3 mib are present in the skb head */
                if (i < 3) {
-                       data[i] = mib_ethhdr->data[i];
+                       mib_eth_data->data[i] = get_unaligned_le32(mib_ethhdr->data + i);
                        continue;
                }
 
-               mib_len = sizeof(uint32_t);
-
                /* Some mib are 64 bit wide */
                if (mib->size == 2)
-                       mib_len = sizeof(uint64_t);
-
-               /* Copy the mib value from packet to the */
-               memcpy(data + i, skb->data + offset, mib_len);
+                       mib_eth_data->data[i] = get_unaligned_le64((__le64 *)data2);
+               else
+                       mib_eth_data->data[i] = get_unaligned_le32(data2);
 
-               /* Set the offset for the next mib */
-               offset += mib_len;
+               data2 += mib->size;
        }
 
 exit:
index 086aa9c..1c0015b 100644 (file)
@@ -1082,9 +1082,30 @@ static void adin1110_adjust_link(struct net_device *dev)
  */
 static int adin1110_check_spi(struct adin1110_priv *priv)
 {
+       struct gpio_desc *reset_gpio;
        int ret;
        u32 val;
 
+       reset_gpio = devm_gpiod_get_optional(&priv->spidev->dev, "reset",
+                                            GPIOD_OUT_LOW);
+       if (reset_gpio) {
+               /* MISO pin is used for internal configuration, can't have
+                * anyone else disturbing the SDO line.
+                */
+               spi_bus_lock(priv->spidev->controller);
+
+               gpiod_set_value(reset_gpio, 1);
+               fsleep(10000);
+               gpiod_set_value(reset_gpio, 0);
+
+               /* Need to wait 90 ms before interacting with
+                * the MAC after a HW reset.
+                */
+               fsleep(90000);
+
+               spi_bus_unlock(priv->spidev->controller);
+       }
+
        ret = adin1110_read_reg(priv, ADIN1110_PHY_ID, &val);
        if (ret < 0)
                return ret;
index 2af3da4..f409d7b 100644
@@ -285,6 +285,9 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
                /* Yellow Carp devices do not need cdr workaround */
                pdata->vdata->an_cdr_workaround = 0;
+
+               /* Yellow Carp devices do not need rrc */
+               pdata->vdata->enable_rrc = 0;
        } else {
                pdata->xpcs_window_def_reg = PCS_V2_WINDOW_DEF;
                pdata->xpcs_window_sel_reg = PCS_V2_WINDOW_SELECT;
@@ -483,6 +486,7 @@ static struct xgbe_version_data xgbe_v2a = {
        .tx_desc_prefetch               = 5,
        .rx_desc_prefetch               = 5,
        .an_cdr_workaround              = 1,
+       .enable_rrc                     = 1,
 };
 
 static struct xgbe_version_data xgbe_v2b = {
@@ -498,6 +502,7 @@ static struct xgbe_version_data xgbe_v2b = {
        .tx_desc_prefetch               = 5,
        .rx_desc_prefetch               = 5,
        .an_cdr_workaround              = 1,
+       .enable_rrc                     = 1,
 };
 
 static const struct pci_device_id xgbe_pci_table[] = {
index 2156600..4064c3e 100644
@@ -239,6 +239,7 @@ enum xgbe_sfp_speed {
 #define XGBE_SFP_BASE_BR_1GBE_MAX              0x0d
 #define XGBE_SFP_BASE_BR_10GBE_MIN             0x64
 #define XGBE_SFP_BASE_BR_10GBE_MAX             0x68
+#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX       0x78
 
 #define XGBE_SFP_BASE_CU_CABLE_LEN             18
 
@@ -284,6 +285,8 @@ struct xgbe_sfp_eeprom {
 #define XGBE_BEL_FUSE_VENDOR   "BEL-FUSE        "
 #define XGBE_BEL_FUSE_PARTNO   "1GBT-SFP06      "
 
+#define XGBE_MOLEX_VENDOR      "Molex Inc.      "
+
 struct xgbe_sfp_ascii {
        union {
                char vendor[XGBE_SFP_BASE_VENDOR_NAME_LEN + 1];
@@ -834,7 +837,11 @@ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
                break;
        case XGBE_SFP_SPEED_10000:
                min = XGBE_SFP_BASE_BR_10GBE_MIN;
-               max = XGBE_SFP_BASE_BR_10GBE_MAX;
+               if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+                          XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+                       max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+               else
+                       max = XGBE_SFP_BASE_BR_10GBE_MAX;
                break;
        default:
                return false;
@@ -1151,7 +1158,10 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
        }
 
        /* Determine the type of SFP */
-       if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+       if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
+           xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+               phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+       else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
                phy_data->sfp_base = XGBE_SFP_BASE_10000_SR;
        else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR)
                phy_data->sfp_base = XGBE_SFP_BASE_10000_LR;
@@ -1167,9 +1177,6 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
                phy_data->sfp_base = XGBE_SFP_BASE_1000_CX;
        else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T)
                phy_data->sfp_base = XGBE_SFP_BASE_1000_T;
-       else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
-                xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
-               phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
 
        switch (phy_data->sfp_base) {
        case XGBE_SFP_BASE_1000_T:
@@ -1979,6 +1986,10 @@ static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
 
 static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
 {
+       /* PLL_CTRL feature needs to be enabled for fixed PHY modes (Non-Autoneg) only */
+       if (pdata->phy.autoneg != AUTONEG_DISABLE)
+               return;
+
        XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0,
                         XGBE_PMA_PLL_CTRL_MASK,
                         enable ? XGBE_PMA_PLL_CTRL_ENABLE
@@ -1989,7 +2000,7 @@ static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
 }
 
 static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
-                                       unsigned int cmd, unsigned int sub_cmd)
+                                       enum xgbe_mb_cmd cmd, enum xgbe_mb_subcmd sub_cmd)
 {
        unsigned int s0 = 0;
        unsigned int wait;
@@ -2029,14 +2040,16 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
        xgbe_phy_rx_reset(pdata);
 
 reenable_pll:
-       /* Enable PLL re-initialization */
-       xgbe_phy_pll_ctrl(pdata, true);
+       /* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */
+       if (cmd != XGBE_MB_CMD_POWER_OFF &&
+           cmd != XGBE_MB_CMD_RRC)
+               xgbe_phy_pll_ctrl(pdata, true);
 }
 
 static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
 {
        /* Receiver Reset Cycle */
-       xgbe_phy_perform_ratechange(pdata, 5, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_RRC, XGBE_MB_SUBCMD_NONE);
 
        netif_dbg(pdata, link, pdata->netdev, "receiver reset complete\n");
 }
@@ -2046,7 +2059,7 @@ static void xgbe_phy_power_off(struct xgbe_prv_data *pdata)
        struct xgbe_phy_data *phy_data = pdata->phy_data;
 
        /* Power off */
-       xgbe_phy_perform_ratechange(pdata, 0, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_POWER_OFF, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_UNKNOWN;
 
@@ -2061,14 +2074,17 @@ static void xgbe_phy_sfi_mode(struct xgbe_prv_data *pdata)
 
        /* 10G/SFI */
        if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) {
-               xgbe_phy_perform_ratechange(pdata, 3, 0);
+               xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE);
        } else {
                if (phy_data->sfp_cable_len <= 1)
-                       xgbe_phy_perform_ratechange(pdata, 3, 1);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_1M);
                else if (phy_data->sfp_cable_len <= 3)
-                       xgbe_phy_perform_ratechange(pdata, 3, 2);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_3M);
                else
-                       xgbe_phy_perform_ratechange(pdata, 3, 3);
+                       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI,
+                                                   XGBE_MB_SUBCMD_PASSIVE_OTHER);
        }
 
        phy_data->cur_mode = XGBE_MODE_SFI;
@@ -2083,7 +2099,7 @@ static void xgbe_phy_x_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/X */
-       xgbe_phy_perform_ratechange(pdata, 1, 3);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
 
        phy_data->cur_mode = XGBE_MODE_X;
 
@@ -2097,7 +2113,7 @@ static void xgbe_phy_sgmii_1000_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/SGMII */
-       xgbe_phy_perform_ratechange(pdata, 1, 2);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_SGMII);
 
        phy_data->cur_mode = XGBE_MODE_SGMII_1000;
 
@@ -2111,7 +2127,7 @@ static void xgbe_phy_sgmii_100_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 100M/SGMII */
-       xgbe_phy_perform_ratechange(pdata, 1, 1);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_100MBITS);
 
        phy_data->cur_mode = XGBE_MODE_SGMII_100;
 
@@ -2125,7 +2141,7 @@ static void xgbe_phy_kr_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 10G/KR */
-       xgbe_phy_perform_ratechange(pdata, 4, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_KR;
 
@@ -2139,7 +2155,7 @@ static void xgbe_phy_kx_2500_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 2.5G/KX */
-       xgbe_phy_perform_ratechange(pdata, 2, 0);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_2_5G, XGBE_MB_SUBCMD_NONE);
 
        phy_data->cur_mode = XGBE_MODE_KX_2500;
 
@@ -2153,7 +2169,7 @@ static void xgbe_phy_kx_1000_mode(struct xgbe_prv_data *pdata)
        xgbe_phy_set_redrv_mode(pdata);
 
        /* 1G/KX */
-       xgbe_phy_perform_ratechange(pdata, 1, 3);
+       xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX);
 
        phy_data->cur_mode = XGBE_MODE_KX_1000;
 
@@ -2640,7 +2656,7 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
        }
 
        /* No link, attempt a receiver reset cycle */
-       if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
+       if (pdata->vdata->enable_rrc && phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
                phy_data->rrc_count = 0;
                xgbe_phy_rrc(pdata);
        }
index b875c43..71f24cb 100644
@@ -611,6 +611,31 @@ enum xgbe_mdio_mode {
        XGBE_MDIO_MODE_CL45,
 };
 
+enum xgbe_mb_cmd {
+       XGBE_MB_CMD_POWER_OFF = 0,
+       XGBE_MB_CMD_SET_1G,
+       XGBE_MB_CMD_SET_2_5G,
+       XGBE_MB_CMD_SET_10G_SFI,
+       XGBE_MB_CMD_SET_10G_KR,
+       XGBE_MB_CMD_RRC
+};
+
+enum xgbe_mb_subcmd {
+       XGBE_MB_SUBCMD_NONE = 0,
+
+       /* 10GbE SFP subcommands */
+       XGBE_MB_SUBCMD_ACTIVE = 0,
+       XGBE_MB_SUBCMD_PASSIVE_1M,
+       XGBE_MB_SUBCMD_PASSIVE_3M,
+       XGBE_MB_SUBCMD_PASSIVE_OTHER,
+
+       /* 1GbE Mode subcommands */
+       XGBE_MB_SUBCMD_10MBITS = 0,
+       XGBE_MB_SUBCMD_100MBITS,
+       XGBE_MB_SUBCMD_1G_SGMII,
+       XGBE_MB_SUBCMD_1G_KX
+};
+
 struct xgbe_phy {
        struct ethtool_link_ksettings lks;
 
@@ -1013,6 +1038,7 @@ struct xgbe_version_data {
        unsigned int tx_desc_prefetch;
        unsigned int rx_desc_prefetch;
        unsigned int an_cdr_workaround;
+       unsigned int enable_rrc;
 };
 
 struct xgbe_prv_data {
index 3d0e167..a018081 100644
@@ -1394,26 +1394,57 @@ static void aq_check_txsa_expiration(struct aq_nic_s *nic)
                        egress_sa_threshold_expired);
 }
 
+#define AQ_LOCKED_MDO_DEF(mdo)                                         \
+static int aq_locked_mdo_##mdo(struct macsec_context *ctx)             \
+{                                                                      \
+       struct aq_nic_s *nic = netdev_priv(ctx->netdev);                \
+       int ret;                                                        \
+       mutex_lock(&nic->macsec_mutex);                                 \
+       ret = aq_mdo_##mdo(ctx);                                        \
+       mutex_unlock(&nic->macsec_mutex);                               \
+       return ret;                                                     \
+}
+
+AQ_LOCKED_MDO_DEF(dev_open)
+AQ_LOCKED_MDO_DEF(dev_stop)
+AQ_LOCKED_MDO_DEF(add_secy)
+AQ_LOCKED_MDO_DEF(upd_secy)
+AQ_LOCKED_MDO_DEF(del_secy)
+AQ_LOCKED_MDO_DEF(add_rxsc)
+AQ_LOCKED_MDO_DEF(upd_rxsc)
+AQ_LOCKED_MDO_DEF(del_rxsc)
+AQ_LOCKED_MDO_DEF(add_rxsa)
+AQ_LOCKED_MDO_DEF(upd_rxsa)
+AQ_LOCKED_MDO_DEF(del_rxsa)
+AQ_LOCKED_MDO_DEF(add_txsa)
+AQ_LOCKED_MDO_DEF(upd_txsa)
+AQ_LOCKED_MDO_DEF(del_txsa)
+AQ_LOCKED_MDO_DEF(get_dev_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sa_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sa_stats)
+
 const struct macsec_ops aq_macsec_ops = {
-       .mdo_dev_open = aq_mdo_dev_open,
-       .mdo_dev_stop = aq_mdo_dev_stop,
-       .mdo_add_secy = aq_mdo_add_secy,
-       .mdo_upd_secy = aq_mdo_upd_secy,
-       .mdo_del_secy = aq_mdo_del_secy,
-       .mdo_add_rxsc = aq_mdo_add_rxsc,
-       .mdo_upd_rxsc = aq_mdo_upd_rxsc,
-       .mdo_del_rxsc = aq_mdo_del_rxsc,
-       .mdo_add_rxsa = aq_mdo_add_rxsa,
-       .mdo_upd_rxsa = aq_mdo_upd_rxsa,
-       .mdo_del_rxsa = aq_mdo_del_rxsa,
-       .mdo_add_txsa = aq_mdo_add_txsa,
-       .mdo_upd_txsa = aq_mdo_upd_txsa,
-       .mdo_del_txsa = aq_mdo_del_txsa,
-       .mdo_get_dev_stats = aq_mdo_get_dev_stats,
-       .mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
-       .mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
-       .mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
-       .mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
+       .mdo_dev_open = aq_locked_mdo_dev_open,
+       .mdo_dev_stop = aq_locked_mdo_dev_stop,
+       .mdo_add_secy = aq_locked_mdo_add_secy,
+       .mdo_upd_secy = aq_locked_mdo_upd_secy,
+       .mdo_del_secy = aq_locked_mdo_del_secy,
+       .mdo_add_rxsc = aq_locked_mdo_add_rxsc,
+       .mdo_upd_rxsc = aq_locked_mdo_upd_rxsc,
+       .mdo_del_rxsc = aq_locked_mdo_del_rxsc,
+       .mdo_add_rxsa = aq_locked_mdo_add_rxsa,
+       .mdo_upd_rxsa = aq_locked_mdo_upd_rxsa,
+       .mdo_del_rxsa = aq_locked_mdo_del_rxsa,
+       .mdo_add_txsa = aq_locked_mdo_add_txsa,
+       .mdo_upd_txsa = aq_locked_mdo_upd_txsa,
+       .mdo_del_txsa = aq_locked_mdo_del_txsa,
+       .mdo_get_dev_stats = aq_locked_mdo_get_dev_stats,
+       .mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats,
+       .mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats,
+       .mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats,
+       .mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats,
 };
 
 int aq_macsec_init(struct aq_nic_s *nic)
@@ -1435,6 +1466,7 @@ int aq_macsec_init(struct aq_nic_s *nic)
 
        nic->ndev->features |= NETIF_F_HW_MACSEC;
        nic->ndev->macsec_ops = &aq_macsec_ops;
+       mutex_init(&nic->macsec_mutex);
 
        return 0;
 }
@@ -1458,7 +1490,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
        if (!nic->macsec_cfg)
                return 0;
 
-       rtnl_lock();
+       mutex_lock(&nic->macsec_mutex);
 
        if (nic->aq_fw_ops->send_macsec_req) {
                struct macsec_cfg_request cfg = { 0 };
@@ -1507,7 +1539,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
        ret = aq_apply_macsec_cfg(nic);
 
 unlock:
-       rtnl_unlock();
+       mutex_unlock(&nic->macsec_mutex);
        return ret;
 }
 
@@ -1519,9 +1551,9 @@ void aq_macsec_work(struct aq_nic_s *nic)
        if (!netif_carrier_ok(nic->ndev))
                return;
 
-       rtnl_lock();
+       mutex_lock(&nic->macsec_mutex);
        aq_check_txsa_expiration(nic);
-       rtnl_unlock();
+       mutex_unlock(&nic->macsec_mutex);
 }
 
 int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
@@ -1532,21 +1564,30 @@ int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
        if (!cfg)
                return 0;
 
+       mutex_lock(&nic->macsec_mutex);
+
        for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
                if (!test_bit(i, &cfg->rxsc_idx_busy))
                        continue;
                cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
        }
 
+       mutex_unlock(&nic->macsec_mutex);
        return cnt;
 }
 
 int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
 {
+       int cnt;
+
        if (!nic->macsec_cfg)
                return 0;
 
-       return hweight_long(nic->macsec_cfg->txsc_idx_busy);
+       mutex_lock(&nic->macsec_mutex);
+       cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy);
+       mutex_unlock(&nic->macsec_mutex);
+
+       return cnt;
 }
 
 int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
@@ -1557,12 +1598,15 @@ int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
        if (!cfg)
                return 0;
 
+       mutex_lock(&nic->macsec_mutex);
+
        for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
                if (!test_bit(i, &cfg->txsc_idx_busy))
                        continue;
                cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
        }
 
+       mutex_unlock(&nic->macsec_mutex);
        return cnt;
 }
 
@@ -1634,6 +1678,8 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
        if (!cfg)
                return data;
 
+       mutex_lock(&nic->macsec_mutex);
+
        aq_macsec_update_stats(nic);
 
        common_stats = &cfg->stats;
@@ -1716,5 +1762,7 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
 
        data += i;
 
+       mutex_unlock(&nic->macsec_mutex);
+
        return data;
 }
index 935ba88..ad33f85 100644
@@ -157,6 +157,8 @@ struct aq_nic_s {
        struct mutex fwreq_mutex;
 #if IS_ENABLED(CONFIG_MACSEC)
        struct aq_macsec_cfg *macsec_cfg;
+       /* mutex to protect data in macsec_cfg */
+       struct mutex macsec_mutex;
 #endif
        /* PTP support */
        struct aq_ptp_s *aq_ptp;
index fec57f1..dbe3101 100644
@@ -5415,8 +5415,9 @@ bnx2_set_rx_ring_size(struct bnx2 *bp, u32 size)
 
        bp->rx_buf_use_size = rx_size;
        /* hw alignment + build_skb() overhead*/
-       bp->rx_buf_size = SKB_DATA_ALIGN(bp->rx_buf_use_size + BNX2_RX_ALIGN) +
-               NET_SKB_PAD + SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+       bp->rx_buf_size = kmalloc_size_roundup(
+               SKB_DATA_ALIGN(bp->rx_buf_use_size + BNX2_RX_ALIGN) +
+               NET_SKB_PAD + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
        bp->rx_jumbo_thresh = rx_size - BNX2_RX_OFFSET;
        bp->rx_ring_size = size;
        bp->rx_max_ring = bnx2_find_max_ring(size, BNX2_MAX_RX_RINGS);
index eed98c1..04cf768 100644
@@ -3874,7 +3874,7 @@ static void bnxt_init_vnics(struct bnxt *bp)
 
                if (bp->vnic_info[i].rss_hash_key) {
                        if (i == 0)
-                               prandom_bytes(vnic->rss_hash_key,
+                               get_random_bytes(vnic->rss_hash_key,
                                              HW_HASH_KEY_SIZE);
                        else
                                memcpy(vnic->rss_hash_key,
index b1b17f9..91a1ba0 100644
@@ -2116,6 +2116,7 @@ struct bnxt {
 #define BNXT_PHY_FL_NO_FCS             PORT_PHY_QCAPS_RESP_FLAGS_NO_FCS
 #define BNXT_PHY_FL_NO_PAUSE           (PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED << 8)
 #define BNXT_PHY_FL_NO_PFC             (PORT_PHY_QCAPS_RESP_FLAGS2_PFC_UNSUPPORTED << 8)
+#define BNXT_PHY_FL_BANK_SEL           (PORT_PHY_QCAPS_RESP_FLAGS2_BANK_ADDR_SUPPORTED << 8)
 
        u8                      num_tests;
        struct bnxt_test_info   *test_info;
index a36803e..8a6f788 100644
@@ -613,6 +613,7 @@ static int bnxt_dl_reload_up(struct devlink *dl, enum devlink_reload_action acti
 
 static bool bnxt_nvm_test(struct bnxt *bp, struct netlink_ext_ack *extack)
 {
+       bool rc = false;
        u32 datalen;
        u16 index;
        u8 *buf;
@@ -632,20 +633,20 @@ static bool bnxt_nvm_test(struct bnxt *bp, struct netlink_ext_ack *extack)
 
        if (bnxt_get_nvram_item(bp->dev, index, 0, datalen, buf)) {
                NL_SET_ERR_MSG_MOD(extack, "nvm test vpd read error");
-               goto err;
+               goto done;
        }
 
        if (bnxt_flash_nvram(bp->dev, BNX_DIR_TYPE_VPD, BNX_DIR_ORDINAL_FIRST,
                             BNX_DIR_EXT_NONE, 0, 0, buf, datalen)) {
                NL_SET_ERR_MSG_MOD(extack, "nvm test vpd write error");
-               goto err;
+               goto done;
        }
 
-       return true;
+       rc = true;
 
-err:
+done:
        kfree(buf);
-       return false;
+       return rc;
 }
 
 static bool bnxt_dl_selftest_check(struct devlink *dl, unsigned int id,
index f57e524..cc89e5e 100644
@@ -2514,6 +2514,7 @@ static int bnxt_flash_firmware_from_file(struct net_device *dev,
 #define MSG_INTERNAL_ERR "PKG install error : Internal error"
 #define MSG_NO_PKG_UPDATE_AREA_ERR "PKG update area not created in nvram"
 #define MSG_NO_SPACE_ERR "PKG insufficient update area in nvram"
+#define MSG_RESIZE_UPDATE_ERR "Resize UPDATE entry error"
 #define MSG_ANTI_ROLLBACK_ERR "HWRM_NVM_INSTALL_UPDATE failure due to Anti-rollback detected"
 #define MSG_GENERIC_FAILURE_ERR "HWRM_NVM_INSTALL_UPDATE failure"
 
@@ -2564,6 +2565,32 @@ static int nvm_update_err_to_stderr(struct net_device *dev, u8 result,
 #define BNXT_NVM_MORE_FLAG     (cpu_to_le16(NVM_MODIFY_REQ_FLAGS_BATCH_MODE))
 #define BNXT_NVM_LAST_FLAG     (cpu_to_le16(NVM_MODIFY_REQ_FLAGS_BATCH_LAST))
 
+static int bnxt_resize_update_entry(struct net_device *dev, size_t fw_size,
+                                   struct netlink_ext_ack *extack)
+{
+       u32 item_len;
+       int rc;
+
+       rc = bnxt_find_nvram_item(dev, BNX_DIR_TYPE_UPDATE,
+                                 BNX_DIR_ORDINAL_FIRST, BNX_DIR_EXT_NONE, NULL,
+                                 &item_len, NULL);
+       if (rc) {
+               BNXT_NVM_ERR_MSG(dev, extack, MSG_NO_PKG_UPDATE_AREA_ERR);
+               return rc;
+       }
+
+       if (fw_size > item_len) {
+               rc = bnxt_flash_nvram(dev, BNX_DIR_TYPE_UPDATE,
+                                     BNX_DIR_ORDINAL_FIRST, 0, 1,
+                                     round_up(fw_size, 4096), NULL, 0);
+               if (rc) {
+                       BNXT_NVM_ERR_MSG(dev, extack, MSG_RESIZE_UPDATE_ERR);
+                       return rc;
+               }
+       }
+       return 0;
+}
+
 int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware *fw,
                                   u32 install_type, struct netlink_ext_ack *extack)
 {
@@ -2580,6 +2607,11 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware
        u16 index;
        int rc;
 
+       /* resize before flashing larger image than available space */
+       rc = bnxt_resize_update_entry(dev, fw->size, extack);
+       if (rc)
+               return rc;
+
        bnxt_hwrm_fw_set_time(bp);
 
        rc = hwrm_req_init(bp, modify, HWRM_NVM_MODIFY);
@@ -3146,8 +3178,9 @@ static int bnxt_get_eee(struct net_device *dev, struct ethtool_eee *edata)
 }
 
 static int bnxt_read_sfp_module_eeprom_info(struct bnxt *bp, u16 i2c_addr,
-                                           u16 page_number, u16 start_addr,
-                                           u16 data_length, u8 *buf)
+                                           u16 page_number, u8 bank,
+                                           u16 start_addr, u16 data_length,
+                                           u8 *buf)
 {
        struct hwrm_port_phy_i2c_read_output *output;
        struct hwrm_port_phy_i2c_read_input *req;
@@ -3168,8 +3201,13 @@ static int bnxt_read_sfp_module_eeprom_info(struct bnxt *bp, u16 i2c_addr,
                data_length -= xfer_size;
                req->page_offset = cpu_to_le16(start_addr + byte_offset);
                req->data_length = xfer_size;
-               req->enables = cpu_to_le32(start_addr + byte_offset ?
-                                PORT_PHY_I2C_READ_REQ_ENABLES_PAGE_OFFSET : 0);
+               req->enables =
+                       cpu_to_le32((start_addr + byte_offset ?
+                                    PORT_PHY_I2C_READ_REQ_ENABLES_PAGE_OFFSET :
+                                    0) |
+                                   (bank ?
+                                    PORT_PHY_I2C_READ_REQ_ENABLES_BANK_NUMBER :
+                                    0));
                rc = hwrm_req_send(bp, req);
                if (!rc)
                        memcpy(buf + byte_offset, output->data, xfer_size);
@@ -3199,7 +3237,7 @@ static int bnxt_get_module_info(struct net_device *dev,
        if (bp->hwrm_spec_code < 0x10202)
                return -EOPNOTSUPP;
 
-       rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A0, 0, 0,
+       rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A0, 0, 0, 0,
                                              SFF_DIAG_SUPPORT_OFFSET + 1,
                                              data);
        if (!rc) {
@@ -3244,7 +3282,7 @@ static int bnxt_get_module_eeprom(struct net_device *dev,
        if (start < ETH_MODULE_SFF_8436_LEN) {
                if (start + eeprom->len > ETH_MODULE_SFF_8436_LEN)
                        length = ETH_MODULE_SFF_8436_LEN - start;
-               rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A0, 0,
+               rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A0, 0, 0,
                                                      start, length, data);
                if (rc)
                        return rc;
@@ -3256,12 +3294,68 @@ static int bnxt_get_module_eeprom(struct net_device *dev,
        /* Read A2 portion of the EEPROM */
        if (length) {
                start -= ETH_MODULE_SFF_8436_LEN;
-               rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A2, 0,
+               rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A2, 0, 0,
                                                      start, length, data);
        }
        return rc;
 }
 
+static int bnxt_get_module_status(struct bnxt *bp, struct netlink_ext_ack *extack)
+{
+       if (bp->link_info.module_status <=
+           PORT_PHY_QCFG_RESP_MODULE_STATUS_WARNINGMSG)
+               return 0;
+
+       switch (bp->link_info.module_status) {
+       case PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN:
+               NL_SET_ERR_MSG_MOD(extack, "Transceiver module is powering down");
+               break;
+       case PORT_PHY_QCFG_RESP_MODULE_STATUS_NOTINSERTED:
+               NL_SET_ERR_MSG_MOD(extack, "Transceiver module not inserted");
+               break;
+       case PORT_PHY_QCFG_RESP_MODULE_STATUS_CURRENTFAULT:
+               NL_SET_ERR_MSG_MOD(extack, "Transceiver module disabled due to current fault");
+               break;
+       default:
+               NL_SET_ERR_MSG_MOD(extack, "Unknown error");
+               break;
+       }
+       return -EINVAL;
+}
+
+static int bnxt_get_module_eeprom_by_page(struct net_device *dev,
+                                         const struct ethtool_module_eeprom *page_data,
+                                         struct netlink_ext_ack *extack)
+{
+       struct bnxt *bp = netdev_priv(dev);
+       int rc;
+
+       rc = bnxt_get_module_status(bp, extack);
+       if (rc)
+               return rc;
+
+       if (bp->hwrm_spec_code < 0x10202) {
+               NL_SET_ERR_MSG_MOD(extack, "Firmware version too old");
+               return -EINVAL;
+       }
+
+       if (page_data->bank && !(bp->phy_flags & BNXT_PHY_FL_BANK_SEL)) {
+               NL_SET_ERR_MSG_MOD(extack, "Firmware not capable for bank selection");
+               return -EINVAL;
+       }
+
+       rc = bnxt_read_sfp_module_eeprom_info(bp, page_data->i2c_address << 1,
+                                             page_data->page, page_data->bank,
+                                             page_data->offset,
+                                             page_data->length,
+                                             page_data->data);
+       if (rc) {
+               NL_SET_ERR_MSG_MOD(extack, "Module`s eeprom read failed");
+               return rc;
+       }
+       return page_data->length;
+}
+
 static int bnxt_nway_reset(struct net_device *dev)
 {
        int rc = 0;
@@ -4071,6 +4165,7 @@ const struct ethtool_ops bnxt_ethtool_ops = {
        .set_eee                = bnxt_set_eee,
        .get_module_info        = bnxt_get_module_info,
        .get_module_eeprom      = bnxt_get_module_eeprom,
+       .get_module_eeprom_by_page = bnxt_get_module_eeprom_by_page,
        .nway_reset             = bnxt_nway_reset,
        .set_phys_id            = bnxt_set_phys_id,
        .self_test              = bnxt_self_test,
index b753032..184dd8d 100644
@@ -254,6 +254,8 @@ struct cmd_nums {
        #define HWRM_PORT_DSC_DUMP                        0xd9UL
        #define HWRM_PORT_EP_TX_QCFG                      0xdaUL
        #define HWRM_PORT_EP_TX_CFG                       0xdbUL
+       #define HWRM_PORT_CFG                             0xdcUL
+       #define HWRM_PORT_QCFG                            0xddUL
        #define HWRM_TEMP_MONITOR_QUERY                   0xe0UL
        #define HWRM_REG_POWER_QUERY                      0xe1UL
        #define HWRM_CORE_FREQUENCY_QUERY                 0xe2UL
@@ -379,6 +381,8 @@ struct cmd_nums {
        #define HWRM_FUNC_BACKING_STORE_QCAPS_V2          0x1a8UL
        #define HWRM_FUNC_DBR_PACING_NQLIST_QUERY         0x1a9UL
        #define HWRM_FUNC_DBR_RECOVERY_COMPLETED          0x1aaUL
+       #define HWRM_FUNC_SYNCE_CFG                       0x1abUL
+       #define HWRM_FUNC_SYNCE_QCFG                      0x1acUL
        #define HWRM_SELFTEST_QLIST                       0x200UL
        #define HWRM_SELFTEST_EXEC                        0x201UL
        #define HWRM_SELFTEST_IRQ                         0x202UL
@@ -417,6 +421,8 @@ struct cmd_nums {
        #define HWRM_TF_SESSION_RESC_FREE                 0x2ceUL
        #define HWRM_TF_SESSION_RESC_FLUSH                0x2cfUL
        #define HWRM_TF_SESSION_RESC_INFO                 0x2d0UL
+       #define HWRM_TF_SESSION_HOTUP_STATE_SET           0x2d1UL
+       #define HWRM_TF_SESSION_HOTUP_STATE_GET           0x2d2UL
        #define HWRM_TF_TBL_TYPE_GET                      0x2daUL
        #define HWRM_TF_TBL_TYPE_SET                      0x2dbUL
        #define HWRM_TF_TBL_TYPE_BULK_GET                 0x2dcUL
@@ -440,6 +446,25 @@ struct cmd_nums {
        #define HWRM_TF_GLOBAL_CFG_GET                    0x2fdUL
        #define HWRM_TF_IF_TBL_SET                        0x2feUL
        #define HWRM_TF_IF_TBL_GET                        0x2ffUL
+       #define HWRM_TFC_TBL_SCOPE_QCAPS                  0x380UL
+       #define HWRM_TFC_TBL_SCOPE_ID_ALLOC               0x381UL
+       #define HWRM_TFC_TBL_SCOPE_CONFIG                 0x382UL
+       #define HWRM_TFC_TBL_SCOPE_DECONFIG               0x383UL
+       #define HWRM_TFC_TBL_SCOPE_FID_ADD                0x384UL
+       #define HWRM_TFC_TBL_SCOPE_FID_REM                0x385UL
+       #define HWRM_TFC_TBL_SCOPE_POOL_ALLOC             0x386UL
+       #define HWRM_TFC_TBL_SCOPE_POOL_FREE              0x387UL
+       #define HWRM_TFC_SESSION_ID_ALLOC                 0x388UL
+       #define HWRM_TFC_SESSION_FID_ADD                  0x389UL
+       #define HWRM_TFC_SESSION_FID_REM                  0x38aUL
+       #define HWRM_TFC_IDENT_ALLOC                      0x38bUL
+       #define HWRM_TFC_IDENT_FREE                       0x38cUL
+       #define HWRM_TFC_IDX_TBL_ALLOC                    0x38dUL
+       #define HWRM_TFC_IDX_TBL_ALLOC_SET                0x38eUL
+       #define HWRM_TFC_IDX_TBL_SET                      0x38fUL
+       #define HWRM_TFC_IDX_TBL_GET                      0x390UL
+       #define HWRM_TFC_IDX_TBL_FREE                     0x391UL
+       #define HWRM_TFC_GLOBAL_ID_ALLOC                  0x392UL
        #define HWRM_SV                                   0x400UL
        #define HWRM_DBG_READ_DIRECT                      0xff10UL
        #define HWRM_DBG_READ_INDIRECT                    0xff11UL
@@ -546,8 +571,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MAJOR 1
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 2
-#define HWRM_VERSION_RSVD 95
-#define HWRM_VERSION_STR "1.10.2.95"
+#define HWRM_VERSION_RSVD 118
+#define HWRM_VERSION_STR "1.10.2.118"
 
 /* hwrm_ver_get_input (size:192b/24B) */
 struct hwrm_ver_get_input {
@@ -1657,6 +1682,10 @@ struct hwrm_func_qcaps_output {
        #define FUNC_QCAPS_RESP_FLAGS_EXT2_DBR_PACING_EXT_SUPPORTED             0x8UL
        #define FUNC_QCAPS_RESP_FLAGS_EXT2_SW_DBR_DROP_RECOVERY_SUPPORTED       0x10UL
        #define FUNC_QCAPS_RESP_FLAGS_EXT2_GENERIC_STATS_SUPPORTED              0x20UL
+       #define FUNC_QCAPS_RESP_FLAGS_EXT2_UDP_GSO_SUPPORTED                    0x40UL
+       #define FUNC_QCAPS_RESP_FLAGS_EXT2_SYNCE_SUPPORTED                      0x80UL
+       #define FUNC_QCAPS_RESP_FLAGS_EXT2_DBR_PACING_V0_SUPPORTED              0x100UL
+       #define FUNC_QCAPS_RESP_FLAGS_EXT2_TX_PKT_TS_CMPL_SUPPORTED             0x200UL
        __le16  tunnel_disable_flag;
        #define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_VXLAN      0x1UL
        #define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_NGE        0x2UL
@@ -1804,7 +1833,20 @@ struct hwrm_func_qcfg_output {
        #define FUNC_QCFG_RESP_MPC_CHNLS_TE_CFA_ENABLED      0x4UL
        #define FUNC_QCFG_RESP_MPC_CHNLS_RE_CFA_ENABLED      0x8UL
        #define FUNC_QCFG_RESP_MPC_CHNLS_PRIMATE_ENABLED     0x10UL
-       u8      unused_2[3];
+       u8      db_page_size;
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_4KB   0x0UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_8KB   0x1UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_16KB  0x2UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_32KB  0x3UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_64KB  0x4UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_128KB 0x5UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_256KB 0x6UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_512KB 0x7UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_1MB   0x8UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_2MB   0x9UL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_4MB   0xaUL
+       #define FUNC_QCFG_RESP_DB_PAGE_SIZE_LAST FUNC_QCFG_RESP_DB_PAGE_SIZE_4MB
+       u8      unused_2[2];
        __le32  partition_min_bw;
        #define FUNC_QCFG_RESP_PARTITION_MIN_BW_BW_VALUE_MASK             0xfffffffUL
        #define FUNC_QCFG_RESP_PARTITION_MIN_BW_BW_VALUE_SFT              0
@@ -1876,6 +1918,7 @@ struct hwrm_func_cfg_input {
        #define FUNC_CFG_REQ_FLAGS_PPP_PUSH_MODE_DISABLE          0x10000000UL
        #define FUNC_CFG_REQ_FLAGS_BD_METADATA_ENABLE             0x20000000UL
        #define FUNC_CFG_REQ_FLAGS_BD_METADATA_DISABLE            0x40000000UL
+       #define FUNC_CFG_REQ_FLAGS_KEY_CTX_ASSETS_TEST            0x80000000UL
        __le32  enables;
        #define FUNC_CFG_REQ_ENABLES_ADMIN_MTU                0x1UL
        #define FUNC_CFG_REQ_ENABLES_MRU                      0x2UL
@@ -2021,12 +2064,26 @@ struct hwrm_func_cfg_input {
        __le16  num_tx_key_ctxs;
        __le16  num_rx_key_ctxs;
        __le32  enables2;
-       #define FUNC_CFG_REQ_ENABLES2_KDNET     0x1UL
+       #define FUNC_CFG_REQ_ENABLES2_KDNET            0x1UL
+       #define FUNC_CFG_REQ_ENABLES2_DB_PAGE_SIZE     0x2UL
        u8      port_kdnet_mode;
        #define FUNC_CFG_REQ_PORT_KDNET_MODE_DISABLED 0x0UL
        #define FUNC_CFG_REQ_PORT_KDNET_MODE_ENABLED  0x1UL
        #define FUNC_CFG_REQ_PORT_KDNET_MODE_LAST    FUNC_CFG_REQ_PORT_KDNET_MODE_ENABLED
-       u8      unused_0[7];
+       u8      db_page_size;
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_4KB   0x0UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_8KB   0x1UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_16KB  0x2UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_32KB  0x3UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_64KB  0x4UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_128KB 0x5UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_256KB 0x6UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_512KB 0x7UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_1MB   0x8UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_2MB   0x9UL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_4MB   0xaUL
+       #define FUNC_CFG_REQ_DB_PAGE_SIZE_LAST FUNC_CFG_REQ_DB_PAGE_SIZE_4MB
+       u8      unused_0[6];
 };
 
 /* hwrm_func_cfg_output (size:128b/16B) */
@@ -2060,10 +2117,9 @@ struct hwrm_func_qstats_input {
        __le64  resp_addr;
        __le16  fid;
        u8      flags;
-       #define FUNC_QSTATS_REQ_FLAGS_UNUSED       0x0UL
-       #define FUNC_QSTATS_REQ_FLAGS_ROCE_ONLY    0x1UL
-       #define FUNC_QSTATS_REQ_FLAGS_COUNTER_MASK 0x2UL
-       #define FUNC_QSTATS_REQ_FLAGS_LAST        FUNC_QSTATS_REQ_FLAGS_COUNTER_MASK
+       #define FUNC_QSTATS_REQ_FLAGS_ROCE_ONLY        0x1UL
+       #define FUNC_QSTATS_REQ_FLAGS_COUNTER_MASK     0x2UL
+       #define FUNC_QSTATS_REQ_FLAGS_L2_ONLY          0x4UL
        u8      unused_0[5];
 };
 
@@ -2093,7 +2149,8 @@ struct hwrm_func_qstats_output {
        __le64  rx_agg_bytes;
        __le64  rx_agg_events;
        __le64  rx_agg_aborts;
-       u8      unused_0[7];
+       u8      clear_seq;
+       u8      unused_0[6];
        u8      valid;
 };
 
@@ -2106,10 +2163,8 @@ struct hwrm_func_qstats_ext_input {
        __le64  resp_addr;
        __le16  fid;
        u8      flags;
-       #define FUNC_QSTATS_EXT_REQ_FLAGS_UNUSED       0x0UL
-       #define FUNC_QSTATS_EXT_REQ_FLAGS_ROCE_ONLY    0x1UL
-       #define FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK 0x2UL
-       #define FUNC_QSTATS_EXT_REQ_FLAGS_LAST        FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK
+       #define FUNC_QSTATS_EXT_REQ_FLAGS_ROCE_ONLY        0x1UL
+       #define FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK     0x2UL
        u8      unused_0[1];
        __le32  enables;
        #define FUNC_QSTATS_EXT_REQ_ENABLES_SCHQ_ID     0x1UL
@@ -2210,6 +2265,7 @@ struct hwrm_func_drv_rgtr_input {
        #define FUNC_DRV_RGTR_REQ_FLAGS_FAST_RESET_SUPPORT               0x80UL
        #define FUNC_DRV_RGTR_REQ_FLAGS_RSS_STRICT_HASH_TYPE_SUPPORT     0x100UL
        #define FUNC_DRV_RGTR_REQ_FLAGS_NPAR_1_2_SUPPORT                 0x200UL
+       #define FUNC_DRV_RGTR_REQ_FLAGS_ASYM_QUEUE_CFG_SUPPORT           0x400UL
        __le32  enables;
        #define FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE             0x1UL
        #define FUNC_DRV_RGTR_REQ_ENABLES_VER                 0x2UL
@@ -3155,19 +3211,23 @@ struct hwrm_func_ptp_pin_qcfg_output {
        #define FUNC_PTP_PIN_QCFG_RESP_PIN1_USAGE_SYNC_OUT 0x4UL
        #define FUNC_PTP_PIN_QCFG_RESP_PIN1_USAGE_LAST    FUNC_PTP_PIN_QCFG_RESP_PIN1_USAGE_SYNC_OUT
        u8      pin2_usage;
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_NONE     0x0UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_PPS_IN   0x1UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_PPS_OUT  0x2UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNC_IN  0x3UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNC_OUT 0x4UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_LAST    FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNC_OUT
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_NONE                      0x0UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_PPS_IN                    0x1UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_PPS_OUT                   0x2UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNC_IN                   0x3UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNC_OUT                  0x4UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNCE_PRIMARY_CLOCK_OUT   0x5UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNCE_SECONDARY_CLOCK_OUT 0x6UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_LAST                     FUNC_PTP_PIN_QCFG_RESP_PIN2_USAGE_SYNCE_SECONDARY_CLOCK_OUT
        u8      pin3_usage;
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_NONE     0x0UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_PPS_IN   0x1UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_PPS_OUT  0x2UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNC_IN  0x3UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNC_OUT 0x4UL
-       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_LAST    FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNC_OUT
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_NONE                      0x0UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_PPS_IN                    0x1UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_PPS_OUT                   0x2UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNC_IN                   0x3UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNC_OUT                  0x4UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNCE_PRIMARY_CLOCK_OUT   0x5UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNCE_SECONDARY_CLOCK_OUT 0x6UL
+       #define FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_LAST                     FUNC_PTP_PIN_QCFG_RESP_PIN3_USAGE_SYNCE_SECONDARY_CLOCK_OUT
        u8      unused_0;
        u8      valid;
 };
@@ -3215,23 +3275,27 @@ struct hwrm_func_ptp_pin_cfg_input {
        #define FUNC_PTP_PIN_CFG_REQ_PIN2_STATE_ENABLED  0x1UL
        #define FUNC_PTP_PIN_CFG_REQ_PIN2_STATE_LAST    FUNC_PTP_PIN_CFG_REQ_PIN2_STATE_ENABLED
        u8      pin2_usage;
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_NONE     0x0UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_PPS_IN   0x1UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_PPS_OUT  0x2UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNC_IN  0x3UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNC_OUT 0x4UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_LAST    FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNC_OUT
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_NONE                      0x0UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_PPS_IN                    0x1UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_PPS_OUT                   0x2UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNC_IN                   0x3UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNC_OUT                  0x4UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNCE_PRIMARY_CLOCK_OUT   0x5UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNCE_SECONDARY_CLOCK_OUT 0x6UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_LAST                     FUNC_PTP_PIN_CFG_REQ_PIN2_USAGE_SYNCE_SECONDARY_CLOCK_OUT
        u8      pin3_state;
        #define FUNC_PTP_PIN_CFG_REQ_PIN3_STATE_DISABLED 0x0UL
        #define FUNC_PTP_PIN_CFG_REQ_PIN3_STATE_ENABLED  0x1UL
        #define FUNC_PTP_PIN_CFG_REQ_PIN3_STATE_LAST    FUNC_PTP_PIN_CFG_REQ_PIN3_STATE_ENABLED
        u8      pin3_usage;
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_NONE     0x0UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_PPS_IN   0x1UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_PPS_OUT  0x2UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNC_IN  0x3UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNC_OUT 0x4UL
-       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_LAST    FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNC_OUT
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_NONE                      0x0UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_PPS_IN                    0x1UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_PPS_OUT                   0x2UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNC_IN                   0x3UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNC_OUT                  0x4UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNCE_PRIMARY_CLOCK_OUT   0x5UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNCE_SECONDARY_CLOCK_OUT 0x6UL
+       #define FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_LAST                     FUNC_PTP_PIN_CFG_REQ_PIN3_USAGE_SYNCE_SECONDARY_CLOCK_OUT
        u8      unused_0[4];
 };
 
@@ -3319,9 +3383,9 @@ struct hwrm_func_ptp_ts_query_output {
        __le16  seq_id;
        __le16  resp_len;
        __le64  pps_event_ts;
-       __le64  ptm_res_local_ts;
-       __le64  ptm_pmstr_ts;
-       __le32  ptm_mstr_prop_dly;
+       __le64  ptm_local_ts;
+       __le64  ptm_system_ts;
+       __le32  ptm_link_delay;
        u8      unused_0[3];
        u8      valid;
 };
@@ -3417,7 +3481,9 @@ struct hwrm_func_backing_store_cfg_v2_input {
        #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_LAST         FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_INVALID
        __le16  instance;
        __le32  flags;
-       #define FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_PREBOOT_MODE     0x1UL
+       #define FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_PREBOOT_MODE        0x1UL
+       #define FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_BS_CFG_ALL_DONE     0x2UL
+       #define FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_BS_EXTEND           0x4UL
        __le64  page_dir;
        __le32  num_entries;
        __le16  entry_size;
@@ -3853,7 +3919,7 @@ struct hwrm_port_phy_qcfg_input {
        u8      unused_0[6];
 };
 
-/* hwrm_port_phy_qcfg_output (size:768b/96B) */
+/* hwrm_port_phy_qcfg_output (size:832b/104B) */
 struct hwrm_port_phy_qcfg_output {
        __le16  error_code;
        __le16  req_type;
@@ -4150,6 +4216,9 @@ struct hwrm_port_phy_qcfg_output {
        #define PORT_PHY_QCFG_RESP_LINK_PARTNER_PAM4_ADV_SPEEDS_50GB      0x1UL
        #define PORT_PHY_QCFG_RESP_LINK_PARTNER_PAM4_ADV_SPEEDS_100GB     0x2UL
        #define PORT_PHY_QCFG_RESP_LINK_PARTNER_PAM4_ADV_SPEEDS_200GB     0x4UL
+       u8      link_down_reason;
+       #define PORT_PHY_QCFG_RESP_LINK_DOWN_REASON_RF     0x1UL
+       u8      unused_0[7];
        u8      valid;
 };
 
@@ -4422,9 +4491,7 @@ struct hwrm_port_qstats_input {
        __le64  resp_addr;
        __le16  port_id;
        u8      flags;
-       #define PORT_QSTATS_REQ_FLAGS_UNUSED       0x0UL
-       #define PORT_QSTATS_REQ_FLAGS_COUNTER_MASK 0x1UL
-       #define PORT_QSTATS_REQ_FLAGS_LAST        PORT_QSTATS_REQ_FLAGS_COUNTER_MASK
+       #define PORT_QSTATS_REQ_FLAGS_COUNTER_MASK     0x1UL
        u8      unused_0[5];
        __le64  tx_stat_host_addr;
        __le64  rx_stat_host_addr;
@@ -4552,9 +4619,7 @@ struct hwrm_port_qstats_ext_input {
        __le16  tx_stat_size;
        __le16  rx_stat_size;
        u8      flags;
-       #define PORT_QSTATS_EXT_REQ_FLAGS_UNUSED       0x0UL
-       #define PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK 0x1UL
-       #define PORT_QSTATS_EXT_REQ_FLAGS_LAST        PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK
+       #define PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK     0x1UL
        u8      unused_0;
        __le64  tx_stat_host_addr;
        __le64  rx_stat_host_addr;
@@ -4613,9 +4678,7 @@ struct hwrm_port_ecn_qstats_input {
        __le16  port_id;
        __le16  ecn_stat_buf_size;
        u8      flags;
-       #define PORT_ECN_QSTATS_REQ_FLAGS_UNUSED       0x0UL
-       #define PORT_ECN_QSTATS_REQ_FLAGS_COUNTER_MASK 0x1UL
-       #define PORT_ECN_QSTATS_REQ_FLAGS_LAST        PORT_ECN_QSTATS_REQ_FLAGS_COUNTER_MASK
+       #define PORT_ECN_QSTATS_REQ_FLAGS_COUNTER_MASK     0x1UL
        u8      unused_0[3];
        __le64  ecn_stat_host_addr;
 };
@@ -4814,8 +4877,9 @@ struct hwrm_port_phy_qcaps_output {
        #define PORT_PHY_QCAPS_RESP_SUPPORTED_PAM4_SPEEDS_FORCE_MODE_100G     0x2UL
        #define PORT_PHY_QCAPS_RESP_SUPPORTED_PAM4_SPEEDS_FORCE_MODE_200G     0x4UL
        __le16  flags2;
-       #define PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED     0x1UL
-       #define PORT_PHY_QCAPS_RESP_FLAGS2_PFC_UNSUPPORTED       0x2UL
+       #define PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED       0x1UL
+       #define PORT_PHY_QCAPS_RESP_FLAGS2_PFC_UNSUPPORTED         0x2UL
+       #define PORT_PHY_QCAPS_RESP_FLAGS2_BANK_ADDR_SUPPORTED     0x4UL
        u8      internal_port_cnt;
        u8      valid;
 };
@@ -4830,9 +4894,10 @@ struct hwrm_port_phy_i2c_read_input {
        __le32  flags;
        __le32  enables;
        #define PORT_PHY_I2C_READ_REQ_ENABLES_PAGE_OFFSET     0x1UL
+       #define PORT_PHY_I2C_READ_REQ_ENABLES_BANK_NUMBER     0x2UL
        __le16  port_id;
        u8      i2c_slave_addr;
-       u8      unused_0;
+       u8      bank_number;
        __le16  page_number;
        __le16  page_offset;
        u8      data_length;
@@ -6537,6 +6602,7 @@ struct hwrm_vnic_qcaps_output {
        #define VNIC_QCAPS_RESP_FLAGS_RSS_IPSEC_ESP_SPI_IPV4_CAP              0x400000UL
        #define VNIC_QCAPS_RESP_FLAGS_RSS_IPSEC_AH_SPI_IPV6_CAP               0x800000UL
        #define VNIC_QCAPS_RESP_FLAGS_RSS_IPSEC_ESP_SPI_IPV6_CAP              0x1000000UL
+       #define VNIC_QCAPS_RESP_FLAGS_OUTERMOST_RSS_TRUSTED_VF_CAP            0x2000000UL
        __le16  max_aggs_supported;
        u8      unused_1[5];
        u8      valid;
@@ -6827,6 +6893,7 @@ struct hwrm_ring_alloc_input {
        #define RING_ALLOC_REQ_FLAGS_RX_SOP_PAD                        0x1UL
        #define RING_ALLOC_REQ_FLAGS_DISABLE_CQ_OVERFLOW_DETECTION     0x2UL
        #define RING_ALLOC_REQ_FLAGS_NQ_DBR_PACING                     0x4UL
+       #define RING_ALLOC_REQ_FLAGS_TX_PKT_TS_CMPL_ENABLE             0x8UL
        __le64  page_tbl_addr;
        __le32  fbo;
        u8      page_size;
@@ -7626,7 +7693,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
        #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_UNKNOWN 0x0UL
        #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_TCP     0x6UL
        #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_UDP     0x11UL
-       #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_LAST   CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_UDP
+       #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_ICMP    0x1UL
+       #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_ICMPV6  0x3aUL
+       #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_RSVD    0xffUL
+       #define CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_LAST   CFA_NTUPLE_FILTER_ALLOC_REQ_IP_PROTOCOL_RSVD
        __le16  dst_id;
        __le16  mirror_vnic_id;
        u8      tunnel_type;
@@ -8337,6 +8407,7 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
        #define CFA_ADV_FLOW_MGNT_QCAPS_RESP_FLAGS_LAG_SUPPORTED                                0x20000UL
        #define CFA_ADV_FLOW_MGNT_QCAPS_RESP_FLAGS_NTUPLE_FLOW_NO_L2CTX_SUPPORTED               0x40000UL
        #define CFA_ADV_FLOW_MGNT_QCAPS_RESP_FLAGS_NIC_FLOW_STATS_SUPPORTED                     0x80000UL
+       #define CFA_ADV_FLOW_MGNT_QCAPS_RESP_FLAGS_NTUPLE_FLOW_RX_EXT_IP_PROTO_SUPPORTED        0x100000UL
        u8      unused_0[3];
        u8      valid;
 };
@@ -8355,7 +8426,9 @@ struct hwrm_tunnel_dst_port_query_input {
        #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1     0xaUL
        #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE     0xbUL
        #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
-       #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6
+       #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_CUSTOM_GRE   0xdUL
+       #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI        0xeUL
+       #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI
        u8      unused_0[7];
 };
 
@@ -8367,7 +8440,16 @@ struct hwrm_tunnel_dst_port_query_output {
        __le16  resp_len;
        __le16  tunnel_dst_port_id;
        __be16  tunnel_dst_port_val;
-       u8      unused_0[3];
+       u8      upar_in_use;
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR0     0x1UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR1     0x2UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR2     0x4UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR3     0x8UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR4     0x10UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR5     0x20UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR6     0x40UL
+       #define TUNNEL_DST_PORT_QUERY_RESP_UPAR_IN_USE_UPAR7     0x80UL
+       u8      unused_0[2];
        u8      valid;
 };
 
@@ -8385,7 +8467,9 @@ struct hwrm_tunnel_dst_port_alloc_input {
        #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1     0xaUL
        #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE     0xbUL
        #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
-       #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6
+       #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_CUSTOM_GRE   0xdUL
+       #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI        0xeUL
+       #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI
        u8      unused_0;
        __be16  tunnel_dst_port_val;
        u8      unused_1[4];
@@ -8398,7 +8482,21 @@ struct hwrm_tunnel_dst_port_alloc_output {
        __le16  seq_id;
        __le16  resp_len;
        __le16  tunnel_dst_port_id;
-       u8      unused_0[5];
+       u8      error_info;
+       #define TUNNEL_DST_PORT_ALLOC_RESP_ERROR_INFO_SUCCESS         0x0UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_ERROR_INFO_ERR_ALLOCATED   0x1UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_ERROR_INFO_ERR_NO_RESOURCE 0x2UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_ERROR_INFO_LAST           TUNNEL_DST_PORT_ALLOC_RESP_ERROR_INFO_ERR_NO_RESOURCE
+       u8      upar_in_use;
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR0     0x1UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR1     0x2UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR2     0x4UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR3     0x8UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR4     0x10UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR5     0x20UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR6     0x40UL
+       #define TUNNEL_DST_PORT_ALLOC_RESP_UPAR_IN_USE_UPAR7     0x80UL
+       u8      unused_0[3];
        u8      valid;
 };
 
@@ -8416,7 +8514,9 @@ struct hwrm_tunnel_dst_port_free_input {
        #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1     0xaUL
        #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE     0xbUL
        #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
-       #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6
+       #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_CUSTOM_GRE   0xdUL
+       #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI        0xeUL
+       #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST        TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI
        u8      unused_0;
        __le16  tunnel_dst_port_id;
        u8      unused_1[4];
@@ -8428,7 +8528,12 @@ struct hwrm_tunnel_dst_port_free_output {
        __le16  req_type;
        __le16  seq_id;
        __le16  resp_len;
-       u8      unused_1[7];
+       u8      error_info;
+       #define TUNNEL_DST_PORT_FREE_RESP_ERROR_INFO_SUCCESS           0x0UL
+       #define TUNNEL_DST_PORT_FREE_RESP_ERROR_INFO_ERR_NOT_OWNER     0x1UL
+       #define TUNNEL_DST_PORT_FREE_RESP_ERROR_INFO_ERR_NOT_ALLOCATED 0x2UL
+       #define TUNNEL_DST_PORT_FREE_RESP_ERROR_INFO_LAST             TUNNEL_DST_PORT_FREE_RESP_ERROR_INFO_ERR_NOT_ALLOCATED
+       u8      unused_1[6];
        u8      valid;
 };
 
@@ -8686,9 +8791,7 @@ struct hwrm_stat_generic_qstats_input {
        __le64  resp_addr;
        __le16  generic_stat_size;
        u8      flags;
-       #define STAT_GENERIC_QSTATS_REQ_FLAGS_COUNTER      0x0UL
-       #define STAT_GENERIC_QSTATS_REQ_FLAGS_COUNTER_MASK 0x1UL
-       #define STAT_GENERIC_QSTATS_REQ_FLAGS_LAST        STAT_GENERIC_QSTATS_REQ_FLAGS_COUNTER_MASK
+       #define STAT_GENERIC_QSTATS_REQ_FLAGS_COUNTER_MASK     0x1UL
        u8      unused_0[5];
        __le64  generic_stat_host_addr;
 };
@@ -10202,6 +10305,7 @@ struct fw_status_reg {
        #define FW_STATUS_REG_SHUTDOWN               0x100000UL
        #define FW_STATUS_REG_CRASHED_NO_MASTER      0x200000UL
        #define FW_STATUS_REG_RECOVERING             0x400000UL
+       #define FW_STATUS_REG_MANU_DEBUG_STATUS      0x800000UL
 };
 
 /* hcomm_status (size:64b/8B) */
index e86503d..2198e35 100644
@@ -4105,8 +4105,7 @@ static int cnic_cm_alloc_mem(struct cnic_dev *dev)
        for (i = 0; i < MAX_CM_SK_TBL_SZ; i++)
                atomic_set(&cp->csk_tbl[i].ref_count, 0);
 
-       port_id = prandom_u32();
-       port_id %= CNIC_LOCAL_PORT_RANGE;
+       port_id = prandom_u32_max(CNIC_LOCAL_PORT_RANGE);
        if (cnic_init_id_tbl(&cp->csk_port_tbl, CNIC_LOCAL_PORT_RANGE,
                             CNIC_LOCAL_PORT_MIN, port_id)) {
                cnic_cm_free_mem(dev);
@@ -4165,7 +4164,7 @@ static int cnic_cm_init_bnx2_hw(struct cnic_dev *dev)
 {
        u32 seed;
 
-       seed = prandom_u32();
+       seed = get_random_u32();
        cnic_ctx_wr(dev, 45, 0, seed);
        return 0;
 }
index 25c4506..a8ce8d0 100644
@@ -1387,7 +1387,8 @@ static int bcmgenet_validate_flow(struct net_device *dev,
        struct ethtool_usrip4_spec *l4_mask;
        struct ethhdr *eth_mask;
 
-       if (cmd->fs.location >= MAX_NUM_OF_FS_RULES) {
+       if (cmd->fs.location >= MAX_NUM_OF_FS_RULES &&
+           cmd->fs.location != RX_CLS_LOC_ANY) {
                netdev_err(dev, "rxnfc: Invalid location (%d)\n",
                           cmd->fs.location);
                return -EINVAL;
@@ -1452,7 +1453,7 @@ static int bcmgenet_insert_flow(struct net_device *dev,
 {
        struct bcmgenet_priv *priv = netdev_priv(dev);
        struct bcmgenet_rxnfc_rule *loc_rule;
-       int err;
+       int err, i;
 
        if (priv->hw_params->hfb_filter_size < 128) {
                netdev_err(dev, "rxnfc: Not supported by this device\n");
@@ -1470,7 +1471,29 @@ static int bcmgenet_insert_flow(struct net_device *dev,
        if (err)
                return err;
 
-       loc_rule = &priv->rxnfc_rules[cmd->fs.location];
+       if (cmd->fs.location == RX_CLS_LOC_ANY) {
+               list_for_each_entry(loc_rule, &priv->rxnfc_list, list) {
+                       cmd->fs.location = loc_rule->fs.location;
+                       err = memcmp(&loc_rule->fs, &cmd->fs,
+                                    sizeof(struct ethtool_rx_flow_spec));
+                       if (!err)
+                               /* rule exists so return current location */
+                               return 0;
+               }
+               for (i = 0; i < MAX_NUM_OF_FS_RULES; i++) {
+                       loc_rule = &priv->rxnfc_rules[i];
+                       if (loc_rule->state == BCMGENET_RXNFC_STATE_UNUSED) {
+                               cmd->fs.location = i;
+                               break;
+                       }
+               }
+               if (i == MAX_NUM_OF_FS_RULES) {
+                       cmd->fs.location = RX_CLS_LOC_ANY;
+                       return -ENOSPC;
+               }
+       } else {
+               loc_rule = &priv->rxnfc_rules[cmd->fs.location];
+       }
        if (loc_rule->state == BCMGENET_RXNFC_STATE_ENABLED)
                bcmgenet_hfb_disable_filter(priv, cmd->fs.location);
        if (loc_rule->state != BCMGENET_RXNFC_STATE_UNUSED) {
@@ -1583,7 +1606,7 @@ static int bcmgenet_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
                break;
        case ETHTOOL_GRXCLSRLCNT:
                cmd->rule_cnt = bcmgenet_get_num_flows(priv);
-               cmd->data = MAX_NUM_OF_FS_RULES;
+               cmd->data = MAX_NUM_OF_FS_RULES | RX_CLS_LOC_SPECIAL;
                break;
        case ETHTOOL_GRXCLSRULE:
                err = bcmgenet_get_flow(dev, cmd, cmd->fs.location);
index 47125f4..fa40d5e 100644
@@ -202,7 +202,6 @@ static void
 __cmd_copy(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq_cmd_entry *cmd)
 {
        size_t len = cmd->msg_size;
-       int num_entries = 0;
        size_t to_copy;
        u8 *src, *dst;
 
@@ -219,7 +218,6 @@ __cmd_copy(struct bfa_msgq_cmdq *cmdq, struct bfa_msgq_cmd_entry *cmd)
                BFA_MSGQ_INDX_ADD(cmdq->producer_index, 1, cmdq->depth);
                dst = (u8 *)cmdq->addr.kva;
                dst += (cmdq->producer_index * BFI_MSGQ_CMD_ENTRY_SIZE);
-               num_entries++;
        }
 
 }
index 51c9fd6..4f63f1b 100644
@@ -806,6 +806,7 @@ static int macb_mii_probe(struct net_device *dev)
 
        bp->phylink_config.dev = &dev->dev;
        bp->phylink_config.type = PHYLINK_NETDEV;
+       bp->phylink_config.mac_managed_pm = true;
 
        if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
                bp->phylink_config.poll_fixed_state = true;
index f90bfba..c2e7037 100644
@@ -1063,7 +1063,7 @@ static void chtls_pass_accept_rpl(struct sk_buff *skb,
        opt2 |= WND_SCALE_EN_V(WSCALE_OK(tp));
        rpl5->opt0 = cpu_to_be64(opt0);
        rpl5->opt2 = cpu_to_be32(opt2);
-       rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1);
+       rpl5->iss = cpu_to_be32((get_random_u32() & ~7UL) - 1);
        set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
        t4_set_arp_err_handler(skb, sk, chtls_accept_rpl_arp_failure);
        cxgb4_l2t_send(csk->egress_dev, skb, csk->l2t_entry);
@@ -1466,7 +1466,7 @@ static void make_established(struct sock *sk, u32 snd_isn, unsigned int opt)
        tp->write_seq = snd_isn;
        tp->snd_nxt = snd_isn;
        tp->snd_una = snd_isn;
-       inet_sk(sk)->inet_id = prandom_u32();
+       inet_sk(sk)->inet_id = get_random_u16();
        assign_rxopt(sk, opt);
 
        if (tp->rcv_wnd > (RCV_BUFSIZ_M << 10))
index 539992d..a425608 100644
@@ -919,8 +919,8 @@ static int csk_wait_memory(struct chtls_dev *cdev,
        current_timeo = *timeo_p;
        noblock = (*timeo_p ? false : true);
        if (csk_mem_free(cdev, sk)) {
-               current_timeo = (prandom_u32() % (HZ / 5)) + 2;
-               vm_wait = (prandom_u32() % (HZ / 5)) + 2;
+               current_timeo = prandom_u32_max(HZ / 5) + 2;
+               vm_wait = prandom_u32_max(HZ / 5) + 2;
        }
 
        add_wait_queue(sk_sleep(sk), &wait);
index 2c67a85..db6615a 100644
@@ -814,7 +814,6 @@ rio_free_tx (struct net_device *dev, int irq)
 {
        struct netdev_private *np = netdev_priv(dev);
        int entry = np->old_tx % TX_RING_SIZE;
-       int tx_use = 0;
        unsigned long flag = 0;
 
        if (irq)
@@ -839,7 +838,6 @@ rio_free_tx (struct net_device *dev, int irq)
 
                np->tx_skbuff[entry] = NULL;
                entry = (entry + 1) % TX_RING_SIZE;
-               tx_use++;
        }
        if (irq)
                spin_unlock(&np->tx_lock);
index 021ba99..3f80329 100644
@@ -221,8 +221,8 @@ static int dpaa_netdev_init(struct net_device *net_dev,
        net_dev->netdev_ops = dpaa_ops;
        mac_addr = mac_dev->addr;
 
-       net_dev->mem_start = (unsigned long)mac_dev->vaddr;
-       net_dev->mem_end = (unsigned long)mac_dev->vaddr_end;
+       net_dev->mem_start = (unsigned long)priv->mac_dev->res->start;
+       net_dev->mem_end = (unsigned long)priv->mac_dev->res->end;
 
        net_dev->min_mtu = ETH_MIN_MTU;
        net_dev->max_mtu = dpaa_get_max_mtu();
index 258eb6c..4fee74c 100644
@@ -18,7 +18,7 @@ static ssize_t dpaa_eth_show_addr(struct device *dev,
 
        if (mac_dev)
                return sprintf(buf, "%llx",
-                               (unsigned long long)mac_dev->vaddr);
+                               (unsigned long long)mac_dev->res->start);
        else
                return sprintf(buf, "none");
 }
index 3d9842a..1b05ba8 100644
@@ -7,7 +7,7 @@ obj-$(CONFIG_FSL_DPAA2_ETH)             += fsl-dpaa2-eth.o
 obj-$(CONFIG_FSL_DPAA2_PTP_CLOCK)      += fsl-dpaa2-ptp.o
 obj-$(CONFIG_FSL_DPAA2_SWITCH)         += fsl-dpaa2-switch.o
 
-fsl-dpaa2-eth-objs     := dpaa2-eth.o dpaa2-ethtool.o dpni.o dpaa2-mac.o dpmac.o dpaa2-eth-devlink.o
+fsl-dpaa2-eth-objs     := dpaa2-eth.o dpaa2-ethtool.o dpni.o dpaa2-mac.o dpmac.o dpaa2-eth-devlink.o dpaa2-xsk.o
 fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DCB} += dpaa2-eth-dcb.o
 fsl-dpaa2-eth-${CONFIG_DEBUG_FS} += dpaa2-eth-debugfs.o
 fsl-dpaa2-ptp-objs     := dpaa2-ptp.o dprtc.o
index 8356af4..1af254c 100644
@@ -98,14 +98,14 @@ static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset)
        int i;
 
        seq_printf(file, "Channel stats for %s:\n", priv->net_dev->name);
-       seq_printf(file, "%s%16s%16s%16s%16s%16s%16s\n",
-                  "CHID", "CPU", "Deq busy", "Frames", "CDANs",
+       seq_printf(file, "%s  %5s%16s%16s%16s%16s%16s%16s\n",
+                  "IDX", "CHID", "CPU", "Deq busy", "Frames", "CDANs",
                   "Avg Frm/CDAN", "Buf count");
 
        for (i = 0; i < priv->num_channels; i++) {
                ch = priv->channel[i];
-               seq_printf(file, "%4d%16d%16llu%16llu%16llu%16llu%16d\n",
-                          ch->ch_id,
+               seq_printf(file, "%3s%d%6d%16d%16llu%16llu%16llu%16llu%16d\n",
+                          "CH#", i, ch->ch_id,
                           ch->nctx.desired_cpu,
                           ch->stats.dequeue_portal_busy,
                           ch->stats.frames,
@@ -119,6 +119,51 @@ static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset)
 
 DEFINE_SHOW_ATTRIBUTE(dpaa2_dbg_ch);
 
+static int dpaa2_dbg_bp_show(struct seq_file *file, void *offset)
+{
+       struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
+       int i, j, num_queues, buf_cnt;
+       struct dpaa2_eth_bp *bp;
+       char ch_name[10];
+       int err;
+
+       /* Print out the header */
+       seq_printf(file, "Buffer pool info for %s:\n", priv->net_dev->name);
+       seq_printf(file, "%s  %10s%15s", "IDX", "BPID", "Buf count");
+       num_queues = dpaa2_eth_queue_count(priv);
+       for (i = 0; i < num_queues; i++) {
+               snprintf(ch_name, sizeof(ch_name), "CH#%d", i);
+               seq_printf(file, "%10s", ch_name);
+       }
+       seq_printf(file, "\n");
+
+       /* For each buffer pool, print out its BPID, the number of buffers in
+        * that buffer pool and the channels which are using it.
+        */
+       for (i = 0; i < priv->num_bps; i++) {
+               bp = priv->bp[i];
+
+               err = dpaa2_io_query_bp_count(NULL, bp->bpid, &buf_cnt);
+               if (err) {
+                       netdev_warn(priv->net_dev, "Buffer count query error %d\n", err);
+                       return err;
+               }
+
+               seq_printf(file, "%3s%d%10d%15d", "BP#", i, bp->bpid, buf_cnt);
+               for (j = 0; j < num_queues; j++) {
+                       if (priv->channel[j]->bp == bp)
+                               seq_printf(file, "%10s", "x");
+                       else
+                               seq_printf(file, "%10s", "");
+               }
+               seq_printf(file, "\n");
+       }
+
+       return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(dpaa2_dbg_bp);
+
 void dpaa2_dbg_add(struct dpaa2_eth_priv *priv)
 {
        struct fsl_mc_device *dpni_dev;
@@ -139,6 +184,10 @@ void dpaa2_dbg_add(struct dpaa2_eth_priv *priv)
 
        /* per-fq stats file */
        debugfs_create_file("ch_stats", 0444, dir, priv, &dpaa2_dbg_ch_fops);
+
+       /* per buffer pool stats file */
+       debugfs_create_file("bp_stats", 0444, dir, priv, &dpaa2_dbg_bp_fops);
+
 }
 
 void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv)
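The new `dpaa2_dbg_bp_show()` above prints, for each buffer pool, an "x" in the column of every channel that uses it. The per-row logic reduces to matching channels against a pool; a userspace sketch, where the mapping array is a hypothetical stand-in for `priv->channel[j]->bp`:

```c
/* Count how many channels reference buffer pool `bp` -- the number of
 * 'x' marks on one row of the bp_stats debugfs output. `ch_bp[j]` is a
 * stand-in for the pool id of channel j.
 */
static int channels_using_bp(const int ch_bp[], int num_ch, int bp)
{
	int j, n = 0;

	for (j = 0; j < num_ch; j++)
		if (ch_bp[j] == bp)
			n++;
	return n;
}
```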
index 5fb5f14..9b43fad 100644
@@ -73,6 +73,14 @@ DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
             TP_ARGS(netdev, fd)
 );
 
+/* Tx (egress) XSK fd */
+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_xsk_fd,
+            TP_PROTO(struct net_device *netdev,
+                     const struct dpaa2_fd *fd),
+
+            TP_ARGS(netdev, fd)
+);
+
 /* Rx fd */
 DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
             TP_PROTO(struct net_device *netdev,
@@ -81,6 +89,14 @@ DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
             TP_ARGS(netdev, fd)
 );
 
+/* Rx XSK fd */
+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_xsk_fd,
+            TP_PROTO(struct net_device *netdev,
+                     const struct dpaa2_fd *fd),
+
+            TP_ARGS(netdev, fd)
+);
+
 /* Tx confirmation fd */
 DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
             TP_PROTO(struct net_device *netdev,
@@ -90,57 +106,81 @@ DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
 );
 
 /* Log data about raw buffers. Useful for tracing DPBP content. */
-TRACE_EVENT(dpaa2_eth_buf_seed,
-           /* Trace function prototype */
-           TP_PROTO(struct net_device *netdev,
-                    /* virtual address and size */
-                    void *vaddr,
-                    size_t size,
-                    /* dma map address and size */
-                    dma_addr_t dma_addr,
-                    size_t map_size,
-                    /* buffer pool id, if relevant */
-                    u16 bpid),
-
-           /* Repeat argument list here */
-           TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
-
-           /* A structure containing the relevant information we want
-            * to record. Declare name and type for each normal element,
-            * name, type and size for arrays. Use __string for variable
-            * length strings.
-            */
-           TP_STRUCT__entry(
-                            __field(void *, vaddr)
-                            __field(size_t, size)
-                            __field(dma_addr_t, dma_addr)
-                            __field(size_t, map_size)
-                            __field(u16, bpid)
-                            __string(name, netdev->name)
-           ),
-
-           /* The function that assigns values to the above declared
-            * fields
-            */
-           TP_fast_assign(
-                          __entry->vaddr = vaddr;
-                          __entry->size = size;
-                          __entry->dma_addr = dma_addr;
-                          __entry->map_size = map_size;
-                          __entry->bpid = bpid;
-                          __assign_str(name, netdev->name);
-           ),
-
-           /* This is what gets printed when the trace event is
-            * triggered.
-            */
-           TP_printk(TR_BUF_FMT,
-                     __get_str(name),
-                     __entry->vaddr,
-                     __entry->size,
-                     &__entry->dma_addr,
-                     __entry->map_size,
-                     __entry->bpid)
+DECLARE_EVENT_CLASS(dpaa2_eth_buf,
+                   /* Trace function prototype */
+                   TP_PROTO(struct net_device *netdev,
+                            /* virtual address and size */
+                           void *vaddr,
+                           size_t size,
+                           /* dma map address and size */
+                           dma_addr_t dma_addr,
+                           size_t map_size,
+                           /* buffer pool id, if relevant */
+                           u16 bpid),
+
+                   /* Repeat argument list here */
+                   TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
+
+                   /* A structure containing the relevant information we want
+                    * to record. Declare name and type for each normal element,
+                    * name, type and size for arrays. Use __string for variable
+                    * length strings.
+                    */
+                   TP_STRUCT__entry(
+                                     __field(void *, vaddr)
+                                     __field(size_t, size)
+                                     __field(dma_addr_t, dma_addr)
+                                     __field(size_t, map_size)
+                                     __field(u16, bpid)
+                                     __string(name, netdev->name)
+                   ),
+
+                   /* The function that assigns values to the above declared
+                    * fields
+                    */
+                   TP_fast_assign(
+                                  __entry->vaddr = vaddr;
+                                  __entry->size = size;
+                                  __entry->dma_addr = dma_addr;
+                                  __entry->map_size = map_size;
+                                  __entry->bpid = bpid;
+                                  __assign_str(name, netdev->name);
+                   ),
+
+                   /* This is what gets printed when the trace event is
+                    * triggered.
+                    */
+                   TP_printk(TR_BUF_FMT,
+                             __get_str(name),
+                             __entry->vaddr,
+                             __entry->size,
+                             &__entry->dma_addr,
+                             __entry->map_size,
+                             __entry->bpid)
+);
+
+/* Main memory buff seeding */
+DEFINE_EVENT(dpaa2_eth_buf, dpaa2_eth_buf_seed,
+            TP_PROTO(struct net_device *netdev,
+                     void *vaddr,
+                     size_t size,
+                     dma_addr_t dma_addr,
+                     size_t map_size,
+                     u16 bpid),
+
+            TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid)
+);
+
+/* UMEM buff seeding on AF_XDP fast path */
+DEFINE_EVENT(dpaa2_eth_buf, dpaa2_xsk_buf_seed,
+            TP_PROTO(struct net_device *netdev,
+                     void *vaddr,
+                     size_t size,
+                     dma_addr_t dma_addr,
+                     size_t map_size,
+                     u16 bpid),
+
+            TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid)
 );
 
 /* If only one event of a certain type needs to be declared, use TRACE_EVENT().
index 8d029ad..281d7e3 100644
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
 /* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2022 NXP
  */
 #include <linux/init.h>
 #include <linux/module.h>
@@ -19,6 +19,7 @@
 #include <net/pkt_cls.h>
 #include <net/sock.h>
 #include <net/tso.h>
+#include <net/xdp_sock_drv.h>
 
 #include "dpaa2-eth.h"
 
@@ -104,8 +105,8 @@ static void dpaa2_ptp_onestep_reg_update_method(struct dpaa2_eth_priv *priv)
        priv->dpaa2_set_onestep_params_cb = dpaa2_update_ptp_onestep_direct;
 }
 
-static void *dpaa2_iova_to_virt(struct iommu_domain *domain,
-                               dma_addr_t iova_addr)
+void *dpaa2_iova_to_virt(struct iommu_domain *domain,
+                        dma_addr_t iova_addr)
 {
        phys_addr_t phys_addr;
 
@@ -279,23 +280,33 @@ static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
  * be released in the pool
  */
 static void dpaa2_eth_free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array,
-                               int count)
+                               int count, bool xsk_zc)
 {
        struct device *dev = priv->net_dev->dev.parent;
+       struct dpaa2_eth_swa *swa;
+       struct xdp_buff *xdp_buff;
        void *vaddr;
        int i;
 
        for (i = 0; i < count; i++) {
                vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
-               dma_unmap_page(dev, buf_array[i], priv->rx_buf_size,
-                              DMA_BIDIRECTIONAL);
-               free_pages((unsigned long)vaddr, 0);
+
+               if (!xsk_zc) {
+                       dma_unmap_page(dev, buf_array[i], priv->rx_buf_size,
+                                      DMA_BIDIRECTIONAL);
+                       free_pages((unsigned long)vaddr, 0);
+               } else {
+                       swa = (struct dpaa2_eth_swa *)
+                               (vaddr + DPAA2_ETH_RX_HWA_SIZE);
+                       xdp_buff = swa->xsk.xdp_buff;
+                       xsk_buff_free(xdp_buff);
+               }
        }
 }
 
-static void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
-                                 struct dpaa2_eth_channel *ch,
-                                 dma_addr_t addr)
+void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          dma_addr_t addr)
 {
        int retries = 0;
        int err;
@@ -304,7 +315,7 @@ static void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
        if (ch->recycled_bufs_cnt < DPAA2_ETH_BUFS_PER_CMD)
                return;
 
-       while ((err = dpaa2_io_service_release(ch->dpio, priv->bpid,
+       while ((err = dpaa2_io_service_release(ch->dpio, ch->bp->bpid,
                                               ch->recycled_bufs,
                                               ch->recycled_bufs_cnt)) == -EBUSY) {
                if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
@@ -313,7 +324,8 @@ static void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
        }
 
        if (err) {
-               dpaa2_eth_free_bufs(priv, ch->recycled_bufs, ch->recycled_bufs_cnt);
+               dpaa2_eth_free_bufs(priv, ch->recycled_bufs,
+                                   ch->recycled_bufs_cnt, ch->xsk_zc);
                ch->buf_count -= ch->recycled_bufs_cnt;
        }
 
@@ -377,10 +389,10 @@ static void dpaa2_eth_xdp_tx_flush(struct dpaa2_eth_priv *priv,
        fq->xdp_tx_fds.num = 0;
 }
 
-static void dpaa2_eth_xdp_enqueue(struct dpaa2_eth_priv *priv,
-                                 struct dpaa2_eth_channel *ch,
-                                 struct dpaa2_fd *fd,
-                                 void *buf_start, u16 queue_id)
+void dpaa2_eth_xdp_enqueue(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          struct dpaa2_fd *fd,
+                          void *buf_start, u16 queue_id)
 {
        struct dpaa2_faead *faead;
        struct dpaa2_fd *dest_fd;
@@ -485,19 +497,15 @@ out:
        return xdp_act;
 }
 
-static struct sk_buff *dpaa2_eth_copybreak(struct dpaa2_eth_channel *ch,
-                                          const struct dpaa2_fd *fd,
-                                          void *fd_vaddr)
+struct sk_buff *dpaa2_eth_alloc_skb(struct dpaa2_eth_priv *priv,
+                                   struct dpaa2_eth_channel *ch,
+                                   const struct dpaa2_fd *fd, u32 fd_length,
+                                   void *fd_vaddr)
 {
        u16 fd_offset = dpaa2_fd_get_offset(fd);
-       struct dpaa2_eth_priv *priv = ch->priv;
-       u32 fd_length = dpaa2_fd_get_len(fd);
        struct sk_buff *skb = NULL;
        unsigned int skb_len;
 
-       if (fd_length > priv->rx_copybreak)
-               return NULL;
-
        skb_len = fd_length + dpaa2_eth_needed_headroom(NULL);
 
        skb = napi_alloc_skb(&ch->napi, skb_len);
@@ -514,11 +522,66 @@ static struct sk_buff *dpaa2_eth_copybreak(struct dpaa2_eth_channel *ch,
        return skb;
 }
 
+static struct sk_buff *dpaa2_eth_copybreak(struct dpaa2_eth_channel *ch,
+                                          const struct dpaa2_fd *fd,
+                                          void *fd_vaddr)
+{
+       struct dpaa2_eth_priv *priv = ch->priv;
+       u32 fd_length = dpaa2_fd_get_len(fd);
+
+       if (fd_length > priv->rx_copybreak)
+               return NULL;
+
+       return dpaa2_eth_alloc_skb(priv, ch, fd, fd_length, fd_vaddr);
+}
+
+void dpaa2_eth_receive_skb(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          const struct dpaa2_fd *fd, void *vaddr,
+                          struct dpaa2_eth_fq *fq,
+                          struct rtnl_link_stats64 *percpu_stats,
+                          struct sk_buff *skb)
+{
+       struct dpaa2_fas *fas;
+       u32 status = 0;
+
+       fas = dpaa2_get_fas(vaddr, false);
+       prefetch(fas);
+       prefetch(skb->data);
+
+       /* Get the timestamp value */
+       if (priv->rx_tstamp) {
+               struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
+               __le64 *ts = dpaa2_get_ts(vaddr, false);
+               u64 ns;
+
+               memset(shhwtstamps, 0, sizeof(*shhwtstamps));
+
+               ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
+               shhwtstamps->hwtstamp = ns_to_ktime(ns);
+       }
+
+       /* Check if we need to validate the L4 csum */
+       if (likely(dpaa2_fd_get_frc(fd) & DPAA2_FD_FRC_FASV)) {
+               status = le32_to_cpu(fas->status);
+               dpaa2_eth_validate_rx_csum(priv, status, skb);
+       }
+
+       skb->protocol = eth_type_trans(skb, priv->net_dev);
+       skb_record_rx_queue(skb, fq->flowid);
+
+       percpu_stats->rx_packets++;
+       percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
+       ch->stats.bytes_per_cdan += dpaa2_fd_get_len(fd);
+
+       list_add_tail(&skb->list, ch->rx_list);
+}
+
 /* Main Rx frame processing routine */
-static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
-                        struct dpaa2_eth_channel *ch,
-                        const struct dpaa2_fd *fd,
-                        struct dpaa2_eth_fq *fq)
+void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+                 struct dpaa2_eth_channel *ch,
+                 const struct dpaa2_fd *fd,
+                 struct dpaa2_eth_fq *fq)
 {
        dma_addr_t addr = dpaa2_fd_get_addr(fd);
        u8 fd_format = dpaa2_fd_get_format(fd);
@@ -527,9 +590,7 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
        struct rtnl_link_stats64 *percpu_stats;
        struct dpaa2_eth_drv_stats *percpu_extras;
        struct device *dev = priv->net_dev->dev.parent;
-       struct dpaa2_fas *fas;
        void *buf_data;
-       u32 status = 0;
        u32 xdp_act;
 
        /* Tracing point */
@@ -539,8 +600,6 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
        dma_sync_single_for_cpu(dev, addr, priv->rx_buf_size,
                                DMA_BIDIRECTIONAL);
 
-       fas = dpaa2_get_fas(vaddr, false);
-       prefetch(fas);
        buf_data = vaddr + dpaa2_fd_get_offset(fd);
        prefetch(buf_data);
 
@@ -578,35 +637,7 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
        if (unlikely(!skb))
                goto err_build_skb;
 
-       prefetch(skb->data);
-
-       /* Get the timestamp value */
-       if (priv->rx_tstamp) {
-               struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
-               __le64 *ts = dpaa2_get_ts(vaddr, false);
-               u64 ns;
-
-               memset(shhwtstamps, 0, sizeof(*shhwtstamps));
-
-               ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
-               shhwtstamps->hwtstamp = ns_to_ktime(ns);
-       }
-
-       /* Check if we need to validate the L4 csum */
-       if (likely(dpaa2_fd_get_frc(fd) & DPAA2_FD_FRC_FASV)) {
-               status = le32_to_cpu(fas->status);
-               dpaa2_eth_validate_rx_csum(priv, status, skb);
-       }
-
-       skb->protocol = eth_type_trans(skb, priv->net_dev);
-       skb_record_rx_queue(skb, fq->flowid);
-
-       percpu_stats->rx_packets++;
-       percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
-       ch->stats.bytes_per_cdan += dpaa2_fd_get_len(fd);
-
-       list_add_tail(&skb->list, ch->rx_list);
-
+       dpaa2_eth_receive_skb(priv, ch, fd, vaddr, fq, percpu_stats, skb);
        return;
 
 err_build_skb:
@@ -827,7 +858,7 @@ static void dpaa2_eth_enable_tx_tstamp(struct dpaa2_eth_priv *priv,
        }
 }
 
-static void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv)
+void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv)
 {
        struct dpaa2_eth_sgt_cache *sgt_cache;
        void *sgt_buf = NULL;
@@ -849,7 +880,7 @@ static void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv)
        return sgt_buf;
 }
 
-static void dpaa2_eth_sgt_recycle(struct dpaa2_eth_priv *priv, void *sgt_buf)
+void dpaa2_eth_sgt_recycle(struct dpaa2_eth_priv *priv, void *sgt_buf)
 {
        struct dpaa2_eth_sgt_cache *sgt_cache;
 
@@ -1084,9 +1115,10 @@ static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
  * This can be called either from dpaa2_eth_tx_conf() or on the error path of
  * dpaa2_eth_tx().
  */
-static void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
-                                struct dpaa2_eth_fq *fq,
-                                const struct dpaa2_fd *fd, bool in_napi)
+void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
+                         struct dpaa2_eth_channel *ch,
+                         struct dpaa2_eth_fq *fq,
+                         const struct dpaa2_fd *fd, bool in_napi)
 {
        struct device *dev = priv->net_dev->dev.parent;
        dma_addr_t fd_addr, sg_addr;
@@ -1153,6 +1185,10 @@ static void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
 
                        if (!swa->tso.is_last_fd)
                                should_free_skb = 0;
+               } else if (swa->type == DPAA2_ETH_SWA_XSK) {
+                       /* Unmap the SGT Buffer */
+                       dma_unmap_single(dev, fd_addr, swa->xsk.sgt_size,
+                                        DMA_BIDIRECTIONAL);
                } else {
                        skb = swa->single.skb;
 
@@ -1170,6 +1206,12 @@ static void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
                return;
        }
 
+       if (swa->type == DPAA2_ETH_SWA_XSK) {
+               ch->xsk_tx_pkts_sent++;
+               dpaa2_eth_sgt_recycle(priv, buffer_start);
+               return;
+       }
+
        if (swa->type != DPAA2_ETH_SWA_XDP && in_napi) {
                fq->dq_frames++;
                fq->dq_bytes += fd_len;
@@ -1344,7 +1386,7 @@ err_alloc_tso_hdr:
 err_sgt_get:
        /* Free all the other FDs that were already fully created */
        for (i = 0; i < index; i++)
-               dpaa2_eth_free_tx_fd(priv, NULL, &fd_start[i], false);
+               dpaa2_eth_free_tx_fd(priv, NULL, NULL, &fd_start[i], false);
 
        return err;
 }
@@ -1460,7 +1502,7 @@ static netdev_tx_t __dpaa2_eth_tx(struct sk_buff *skb,
        if (unlikely(err < 0)) {
                percpu_stats->tx_errors++;
                /* Clean up everything, including freeing the skb */
-               dpaa2_eth_free_tx_fd(priv, fq, fd, false);
+               dpaa2_eth_free_tx_fd(priv, NULL, fq, fd, false);
                netdev_tx_completed_queue(nq, 1, fd_len);
        } else {
                percpu_stats->tx_packets += total_enqueued;
@@ -1553,7 +1595,7 @@ static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
 
        /* Check frame errors in the FD field */
        fd_errors = dpaa2_fd_get_ctrl(fd) & DPAA2_FD_TX_ERR_MASK;
-       dpaa2_eth_free_tx_fd(priv, fq, fd, true);
+       dpaa2_eth_free_tx_fd(priv, ch, fq, fd, true);
 
        if (likely(!fd_errors))
                return;
@@ -1631,44 +1673,76 @@ static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
  * to the specified buffer pool
  */
 static int dpaa2_eth_add_bufs(struct dpaa2_eth_priv *priv,
-                             struct dpaa2_eth_channel *ch, u16 bpid)
+                             struct dpaa2_eth_channel *ch)
 {
+       struct xdp_buff *xdp_buffs[DPAA2_ETH_BUFS_PER_CMD];
        struct device *dev = priv->net_dev->dev.parent;
        u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
+       struct dpaa2_eth_swa *swa;
        struct page *page;
        dma_addr_t addr;
        int retries = 0;
-       int i, err;
-
-       for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
-               /* Allocate buffer visible to WRIOP + skb shared info +
-                * alignment padding
-                */
-               /* allocate one page for each Rx buffer. WRIOP sees
-                * the entire page except for a tailroom reserved for
-                * skb shared info
+       int i = 0, err;
+       u32 batch;
+
+       /* Allocate buffers visible to WRIOP */
+       if (!ch->xsk_zc) {
+               for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
+                       /* Also allocate skb shared info and alignment padding.
+                        * There is one page for each Rx buffer. WRIOP sees
+                        * the entire page except for a tailroom reserved for
+                        * skb shared info
+                        */
+                       page = dev_alloc_pages(0);
+                       if (!page)
+                               goto err_alloc;
+
+                       addr = dma_map_page(dev, page, 0, priv->rx_buf_size,
+                                           DMA_BIDIRECTIONAL);
+                       if (unlikely(dma_mapping_error(dev, addr)))
+                               goto err_map;
+
+                       buf_array[i] = addr;
+
+                       /* tracing point */
+                       trace_dpaa2_eth_buf_seed(priv->net_dev,
+                                                page_address(page),
+                                                DPAA2_ETH_RX_BUF_RAW_SIZE,
+                                                addr, priv->rx_buf_size,
+                                                ch->bp->bpid);
+               }
+       } else if (xsk_buff_can_alloc(ch->xsk_pool, DPAA2_ETH_BUFS_PER_CMD)) {
+               /* Allocate XSK buffers for AF_XDP fast path in batches
+                * of DPAA2_ETH_BUFS_PER_CMD. Bail out if the UMEM cannot
+                * provide enough buffers at the moment
                 */
-               page = dev_alloc_pages(0);
-               if (!page)
+               batch = xsk_buff_alloc_batch(ch->xsk_pool, xdp_buffs,
+                                            DPAA2_ETH_BUFS_PER_CMD);
+               if (!batch)
                        goto err_alloc;
 
-               addr = dma_map_page(dev, page, 0, priv->rx_buf_size,
-                                   DMA_BIDIRECTIONAL);
-               if (unlikely(dma_mapping_error(dev, addr)))
-                       goto err_map;
+               for (i = 0; i < batch; i++) {
+                       swa = (struct dpaa2_eth_swa *)(xdp_buffs[i]->data_hard_start +
+                                                      DPAA2_ETH_RX_HWA_SIZE);
+                       swa->xsk.xdp_buff = xdp_buffs[i];
+
+                       addr = xsk_buff_xdp_get_frame_dma(xdp_buffs[i]);
+                       if (unlikely(dma_mapping_error(dev, addr)))
+                               goto err_map;
 
-               buf_array[i] = addr;
+                       buf_array[i] = addr;
 
-               /* tracing point */
-               trace_dpaa2_eth_buf_seed(priv->net_dev, page_address(page),
-                                        DPAA2_ETH_RX_BUF_RAW_SIZE,
-                                        addr, priv->rx_buf_size,
-                                        bpid);
+                       trace_dpaa2_xsk_buf_seed(priv->net_dev,
+                                                xdp_buffs[i]->data_hard_start,
+                                                DPAA2_ETH_RX_BUF_RAW_SIZE,
+                                                addr, priv->rx_buf_size,
+                                                ch->bp->bpid);
+               }
        }
 
 release_bufs:
        /* In case the portal is busy, retry until successful */
-       while ((err = dpaa2_io_service_release(ch->dpio, bpid,
+       while ((err = dpaa2_io_service_release(ch->dpio, ch->bp->bpid,
                                               buf_array, i)) == -EBUSY) {
                if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
                        break;
@@ -1679,14 +1753,19 @@ release_bufs:
         * not much else we can do about it
         */
        if (err) {
-               dpaa2_eth_free_bufs(priv, buf_array, i);
+               dpaa2_eth_free_bufs(priv, buf_array, i, ch->xsk_zc);
                return 0;
        }
 
        return i;
 
 err_map:
-       __free_pages(page, 0);
+       if (!ch->xsk_zc) {
+               __free_pages(page, 0);
+       } else {
+               for (; i < batch; i++)
+                       xsk_buff_free(xdp_buffs[i]);
+       }
 err_alloc:
        /* If we managed to allocate at least some buffers,
         * release them to hardware
@@ -1697,39 +1776,64 @@ err_alloc:
        return 0;
 }
 
-static int dpaa2_eth_seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
+static int dpaa2_eth_seed_pool(struct dpaa2_eth_priv *priv,
+                              struct dpaa2_eth_channel *ch)
 {
-       int i, j;
+       int i;
        int new_count;
 
-       for (j = 0; j < priv->num_channels; j++) {
-               for (i = 0; i < DPAA2_ETH_NUM_BUFS;
-                    i += DPAA2_ETH_BUFS_PER_CMD) {
-                       new_count = dpaa2_eth_add_bufs(priv, priv->channel[j], bpid);
-                       priv->channel[j]->buf_count += new_count;
+       for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += DPAA2_ETH_BUFS_PER_CMD) {
+               new_count = dpaa2_eth_add_bufs(priv, ch);
+               ch->buf_count += new_count;
 
-                       if (new_count < DPAA2_ETH_BUFS_PER_CMD) {
-                               return -ENOMEM;
-                       }
-               }
+               if (new_count < DPAA2_ETH_BUFS_PER_CMD)
+                       return -ENOMEM;
        }
 
        return 0;
 }
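After the refactor, `dpaa2_eth_seed_pool()` fills one channel's pool in `DPAA2_ETH_BUFS_PER_CMD`-sized commands and reports `-ENOMEM` as soon as a command comes up short. A self-contained sketch of that loop, with a stand-in allocator drawing from a finite buffer budget (the constants and the `add_bufs()` helper are illustrative, not the driver's real ones):

```c
#include <errno.h>

#define DPAA2_ETH_NUM_BUFS	1024
#define DPAA2_ETH_BUFS_PER_CMD	7

/* Stand-in for dpaa2_eth_add_bufs(): hand out up to one command's
 * worth of buffers from a finite budget.
 */
static int add_bufs(int *avail)
{
	int n = *avail < DPAA2_ETH_BUFS_PER_CMD ? *avail
						: DPAA2_ETH_BUFS_PER_CMD;

	*avail -= n;
	return n;
}

/* The seeding loop above: bail out with -ENOMEM the first time a
 * command returns fewer buffers than requested.
 */
static int seed_pool(int *buf_count, int *avail)
{
	int i, new_count;

	for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += DPAA2_ETH_BUFS_PER_CMD) {
		new_count = add_bufs(avail);
		*buf_count += new_count;

		if (new_count < DPAA2_ETH_BUFS_PER_CMD)
			return -ENOMEM;
	}
	return 0;
}
```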
 
+static void dpaa2_eth_seed_pools(struct dpaa2_eth_priv *priv)
+{
+       struct net_device *net_dev = priv->net_dev;
+       struct dpaa2_eth_channel *channel;
+       int i, err = 0;
+
+       for (i = 0; i < priv->num_channels; i++) {
+               channel = priv->channel[i];
+
+               err = dpaa2_eth_seed_pool(priv, channel);
+
+               /* Not much to do; the buffer pool, though not filled up,
+                * may still contain some buffers which would enable us
+                * to limp on.
+                */
+               if (err)
+                       netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
+                                  channel->bp->dev->obj_desc.id,
+                                  channel->bp->bpid);
+       }
+}
+
 /*
- * Drain the specified number of buffers from the DPNI's private buffer pool.
+ * Drain the specified number of buffers from one of the DPNI's private buffer
+ * pools.
 * @count must not exceed DPAA2_ETH_BUFS_PER_CMD
  */
-static void dpaa2_eth_drain_bufs(struct dpaa2_eth_priv *priv, int count)
+static void dpaa2_eth_drain_bufs(struct dpaa2_eth_priv *priv, int bpid,
+                                int count)
 {
        u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
+       bool xsk_zc = false;
        int retries = 0;
-       int ret;
+       int i, ret;
+
+       for (i = 0; i < priv->num_channels; i++)
+               if (priv->channel[i]->bp->bpid == bpid)
+                       xsk_zc = priv->channel[i]->xsk_zc;
 
        do {
-               ret = dpaa2_io_service_acquire(NULL, priv->bpid,
-                                              buf_array, count);
+               ret = dpaa2_io_service_acquire(NULL, bpid, buf_array, count);
                if (ret < 0) {
                        if (ret == -EBUSY &&
                            retries++ < DPAA2_ETH_SWP_BUSY_RETRIES)
@@ -1737,28 +1841,40 @@ static void dpaa2_eth_drain_bufs(struct dpaa2_eth_priv *priv, int count)
                        netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
                        return;
                }
-               dpaa2_eth_free_bufs(priv, buf_array, ret);
+               dpaa2_eth_free_bufs(priv, buf_array, ret, xsk_zc);
                retries = 0;
        } while (ret);
 }
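The release path in `dpaa2_eth_add_bufs()` and the acquire path in `dpaa2_eth_drain_bufs()` above wrap the QBMan portal call in the same pattern: retry while it reports `-EBUSY`, give up after a bounded number of attempts. A sketch with a fake portal standing in for `dpaa2_io_service_release()`/`dpaa2_io_service_acquire()` (the retry bound is illustrative):

```c
#include <errno.h>

#define DPAA2_ETH_SWP_BUSY_RETRIES	1000

/* Fake software portal: busy for the first `*busy_left` calls, then
 * successful. Purely for illustration.
 */
static int fake_portal_op(int *busy_left)
{
	if (*busy_left > 0) {
		(*busy_left)--;
		return -EBUSY;
	}
	return 0;
}

/* The bounded busy-retry loop used around the portal calls in the
 * hunks above: keep retrying on -EBUSY, then give up and report the
 * last error.
 */
static int portal_op_with_retry(int *busy_left)
{
	int retries = 0;
	int err;

	while ((err = fake_portal_op(busy_left)) == -EBUSY) {
		if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
			break;
	}
	return err;
}
```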
 
-static void dpaa2_eth_drain_pool(struct dpaa2_eth_priv *priv)
+static void dpaa2_eth_drain_pool(struct dpaa2_eth_priv *priv, int bpid)
 {
        int i;
 
-       dpaa2_eth_drain_bufs(priv, DPAA2_ETH_BUFS_PER_CMD);
-       dpaa2_eth_drain_bufs(priv, 1);
+       /* Drain the buffer pool */
+       dpaa2_eth_drain_bufs(priv, bpid, DPAA2_ETH_BUFS_PER_CMD);
+       dpaa2_eth_drain_bufs(priv, bpid, 1);
 
+       /* Zero out the buffer count of all channels which were using
+        * this buffer pool.
+        */
        for (i = 0; i < priv->num_channels; i++)
-               priv->channel[i]->buf_count = 0;
+               if (priv->channel[i]->bp->bpid == bpid)
+                       priv->channel[i]->buf_count = 0;
+}
+
+static void dpaa2_eth_drain_pools(struct dpaa2_eth_priv *priv)
+{
+       int i;
+
+       for (i = 0; i < priv->num_bps; i++)
+               dpaa2_eth_drain_pool(priv, priv->bp[i]->bpid);
 }
 
 /* Function is called from softirq context only, so we don't need to guard
  * the access to percpu count
  */
 static int dpaa2_eth_refill_pool(struct dpaa2_eth_priv *priv,
-                                struct dpaa2_eth_channel *ch,
-                                u16 bpid)
+                                struct dpaa2_eth_channel *ch)
 {
        int new_count;
 
@@ -1766,7 +1882,7 @@ static int dpaa2_eth_refill_pool(struct dpaa2_eth_priv *priv,
                return 0;
 
        do {
-               new_count = dpaa2_eth_add_bufs(priv, ch, bpid);
+               new_count = dpaa2_eth_add_bufs(priv, ch);
                if (unlikely(!new_count)) {
                        /* Out of memory; abort for now, we'll try later on */
                        break;
@@ -1830,6 +1946,7 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
        struct dpaa2_eth_fq *fq, *txc_fq = NULL;
        struct netdev_queue *nq;
        int store_cleaned, work_done;
+       bool work_done_zc = false;
        struct list_head rx_list;
        int retries = 0;
        u16 flowid;
@@ -1842,13 +1959,22 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
        INIT_LIST_HEAD(&rx_list);
        ch->rx_list = &rx_list;
 
+       if (ch->xsk_zc) {
+               work_done_zc = dpaa2_xsk_tx(priv, ch);
+               /* If we reached the XSK Tx per NAPI threshold, we're done */
+               if (work_done_zc) {
+                       work_done = budget;
+                       goto out;
+               }
+       }
+
        do {
                err = dpaa2_eth_pull_channel(ch);
                if (unlikely(err))
                        break;
 
                /* Refill pool if appropriate */
-               dpaa2_eth_refill_pool(priv, ch, priv->bpid);
+               dpaa2_eth_refill_pool(priv, ch);
 
                store_cleaned = dpaa2_eth_consume_frames(ch, &fq);
                if (store_cleaned <= 0)
@@ -1894,6 +2020,11 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
 out:
        netif_receive_skb_list(ch->rx_list);
 
+       if (ch->xsk_tx_pkts_sent) {
+               xsk_tx_completed(ch->xsk_pool, ch->xsk_tx_pkts_sent);
+               ch->xsk_tx_pkts_sent = 0;
+       }
+
        if (txc_fq && txc_fq->dq_frames) {
                nq = netdev_get_tx_queue(priv->net_dev, txc_fq->flowid);
                netdev_tx_completed_queue(nq, txc_fq->dq_frames,
@@ -2047,15 +2178,7 @@ static int dpaa2_eth_open(struct net_device *net_dev)
        struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
        int err;
 
-       err = dpaa2_eth_seed_pool(priv, priv->bpid);
-       if (err) {
-               /* Not much to do; the buffer pool, though not filled up,
-                * may still contain some buffers which would enable us
-                * to limp on.
-                */
-               netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
-                          priv->dpbp_dev->obj_desc.id, priv->bpid);
-       }
+       dpaa2_eth_seed_pools(priv);
 
        if (!dpaa2_eth_is_type_phy(priv)) {
                /* We'll only start the txqs when the link is actually ready;
@@ -2088,7 +2211,7 @@ static int dpaa2_eth_open(struct net_device *net_dev)
 
 enable_err:
        dpaa2_eth_disable_ch_napi(priv);
-       dpaa2_eth_drain_pool(priv);
+       dpaa2_eth_drain_pools(priv);
        return err;
 }
 
@@ -2193,7 +2316,7 @@ static int dpaa2_eth_stop(struct net_device *net_dev)
        dpaa2_eth_disable_ch_napi(priv);
 
        /* Empty the buffer pool */
-       dpaa2_eth_drain_pool(priv);
+       dpaa2_eth_drain_pools(priv);
 
        /* Empty the Scatter-Gather Buffer cache */
        dpaa2_eth_sgt_cache_drain(priv);
@@ -2602,7 +2725,7 @@ static int dpaa2_eth_setup_xdp(struct net_device *dev, struct bpf_prog *prog)
        need_update = (!!priv->xdp_prog != !!prog);
 
        if (up)
-               dpaa2_eth_stop(dev);
+               dev_close(dev);
 
        /* While in xdp mode, enforce a maximum Rx frame size based on MTU.
         * Also, when switching between xdp/non-xdp modes we need to reconfigure
@@ -2630,7 +2753,7 @@ static int dpaa2_eth_setup_xdp(struct net_device *dev, struct bpf_prog *prog)
        }
 
        if (up) {
-               err = dpaa2_eth_open(dev);
+               err = dev_open(dev, NULL);
                if (err)
                        return err;
        }
@@ -2641,7 +2764,7 @@ out_err:
        if (prog)
                bpf_prog_sub(prog, priv->num_channels);
        if (up)
-               dpaa2_eth_open(dev);
+               dev_open(dev, NULL);
 
        return err;
 }
@@ -2651,6 +2774,8 @@ static int dpaa2_eth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
        switch (xdp->command) {
        case XDP_SETUP_PROG:
                return dpaa2_eth_setup_xdp(dev, xdp->prog);
+       case XDP_SETUP_XSK_POOL:
+               return dpaa2_xsk_setup_pool(dev, xdp->xsk.pool, xdp->xsk.queue_id);
        default:
                return -EINVAL;
        }
@@ -2881,6 +3006,7 @@ static const struct net_device_ops dpaa2_eth_ops = {
        .ndo_change_mtu = dpaa2_eth_change_mtu,
        .ndo_bpf = dpaa2_eth_xdp,
        .ndo_xdp_xmit = dpaa2_eth_xdp_xmit,
+       .ndo_xsk_wakeup = dpaa2_xsk_wakeup,
        .ndo_setup_tc = dpaa2_eth_setup_tc,
        .ndo_vlan_rx_add_vid = dpaa2_eth_rx_add_vid,
        .ndo_vlan_rx_kill_vid = dpaa2_eth_rx_kill_vid
@@ -2895,7 +3021,11 @@ static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
        /* Update NAPI statistics */
        ch->stats.cdan++;
 
-       napi_schedule(&ch->napi);
+       /* NAPI can also be scheduled from the AF_XDP Tx path. If it is
+        * already scheduled, just mark it as missed so that it gets
+        * rescheduled once the current poll run completes.
+        */
+       if (!napi_if_scheduled_mark_missed(&ch->napi))
+               napi_schedule(&ch->napi);
 }
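
The CDAN callback above depends on NAPI's "missed" state: when the instance is already scheduled (for example, kicked from the AF_XDP Tx path), it is only marked missed and is rerun after the current poll completes. A toy userspace model of that state machine, with invented names rather than the kernel's NAPI API, might look like:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the schedule/missed interplay: a second schedule
 * request while polling only sets "missed"; poll completion then
 * turns a pending miss into a fresh schedule.
 */
struct toy_napi {
	bool scheduled;
	bool missed;
	int schedules;	/* how many times the poll loop was actually kicked */
};

static bool mark_missed_if_scheduled(struct toy_napi *n)
{
	if (!n->scheduled)
		return false;
	n->missed = true;
	return true;
}

static void toy_schedule(struct toy_napi *n)
{
	if (!mark_missed_if_scheduled(n)) {
		n->scheduled = true;
		n->schedules++;
	}
}

/* Poll completion: a pending miss reschedules the instance */
static void toy_poll_done(struct toy_napi *n)
{
	n->scheduled = false;
	if (n->missed) {
		n->missed = false;
		toy_schedule(n);
	}
}
```

The point of the pattern is to avoid losing a hardware notification that arrives while the poller is already running, without double-scheduling the same NAPI instance.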
 
 /* Allocate and configure a DPCON object */
@@ -3204,13 +3334,14 @@ static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
        dpaa2_eth_set_fq_affinity(priv);
 }
 
-/* Allocate and configure one buffer pool for each interface */
-static int dpaa2_eth_setup_dpbp(struct dpaa2_eth_priv *priv)
+/* Allocate and configure a buffer pool */
+struct dpaa2_eth_bp *dpaa2_eth_allocate_dpbp(struct dpaa2_eth_priv *priv)
 {
-       int err;
-       struct fsl_mc_device *dpbp_dev;
        struct device *dev = priv->net_dev->dev.parent;
+       struct fsl_mc_device *dpbp_dev;
        struct dpbp_attr dpbp_attrs;
+       struct dpaa2_eth_bp *bp;
+       int err;
 
        err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
                                     &dpbp_dev);
@@ -3219,12 +3350,16 @@ static int dpaa2_eth_setup_dpbp(struct dpaa2_eth_priv *priv)
                        err = -EPROBE_DEFER;
                else
                        dev_err(dev, "DPBP device allocation failed\n");
-               return err;
+               return ERR_PTR(err);
        }
 
-       priv->dpbp_dev = dpbp_dev;
+       bp = kzalloc(sizeof(*bp), GFP_KERNEL);
+       if (!bp) {
+               err = -ENOMEM;
+               goto err_alloc;
+       }
 
-       err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
+       err = dpbp_open(priv->mc_io, 0, dpbp_dev->obj_desc.id,
                        &dpbp_dev->mc_handle);
        if (err) {
                dev_err(dev, "dpbp_open() failed\n");
@@ -3249,9 +3384,11 @@ static int dpaa2_eth_setup_dpbp(struct dpaa2_eth_priv *priv)
                dev_err(dev, "dpbp_get_attributes() failed\n");
                goto err_get_attr;
        }
-       priv->bpid = dpbp_attrs.bpid;
 
-       return 0;
+       bp->dev = dpbp_dev;
+       bp->bpid = dpbp_attrs.bpid;
+
+       return bp;
 
 err_get_attr:
        dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
@@ -3259,17 +3396,58 @@ err_enable:
 err_reset:
        dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
 err_open:
+       kfree(bp);
+err_alloc:
        fsl_mc_object_free(dpbp_dev);
 
-       return err;
+       return ERR_PTR(err);
 }
 
-static void dpaa2_eth_free_dpbp(struct dpaa2_eth_priv *priv)
+static int dpaa2_eth_setup_default_dpbp(struct dpaa2_eth_priv *priv)
 {
-       dpaa2_eth_drain_pool(priv);
-       dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-       dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-       fsl_mc_object_free(priv->dpbp_dev);
+       struct dpaa2_eth_bp *bp;
+       int i;
+
+       bp = dpaa2_eth_allocate_dpbp(priv);
+       if (IS_ERR(bp))
+               return PTR_ERR(bp);
+
+       priv->bp[DPAA2_ETH_DEFAULT_BP_IDX] = bp;
+       priv->num_bps++;
+
+       for (i = 0; i < priv->num_channels; i++)
+               priv->channel[i]->bp = bp;
+
+       return 0;
+}
+
+void dpaa2_eth_free_dpbp(struct dpaa2_eth_priv *priv, struct dpaa2_eth_bp *bp)
+{
+       int idx_bp;
+
+       /* Find the index at which this BP is stored */
+       for (idx_bp = 0; idx_bp < priv->num_bps; idx_bp++)
+               if (priv->bp[idx_bp] == bp)
+                       break;
+
+       /* Drain the pool and disable the associated MC object */
+       dpaa2_eth_drain_pool(priv, bp->bpid);
+       dpbp_disable(priv->mc_io, 0, bp->dev->mc_handle);
+       dpbp_close(priv->mc_io, 0, bp->dev->mc_handle);
+       fsl_mc_object_free(bp->dev);
+       kfree(bp);
+
+       /* Move the last in-use DPBP into this position */
+       priv->bp[idx_bp] = priv->bp[priv->num_bps - 1];
+       priv->num_bps--;
+}
+
+static void dpaa2_eth_free_dpbps(struct dpaa2_eth_priv *priv)
+{
+       /* dpaa2_eth_free_dpbp() compacts the array by moving the last
+        * in-use entry into the freed slot, so always free index 0
+        * instead of iterating by index (which would skip the moved
+        * entries).
+        */
+       while (priv->num_bps)
+               dpaa2_eth_free_dpbp(priv, priv->bp[0]);
+}
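
dpaa2_eth_free_dpbp() keeps priv->bp[] dense by moving the last in-use entry into the freed slot. That swap-with-last removal can be sketched standalone (placeholder types, not the driver's structures):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_BPS 9

struct bp { int bpid; };

struct pool_set {
	struct bp *bp[MAX_BPS];
	int num_bps;
};

/* Remove @target and keep the array dense: the last in-use entry
 * is moved into the freed slot, so lookups never see a hole.
 */
static void swap_remove(struct pool_set *ps, struct bp *target)
{
	int i;

	for (i = 0; i < ps->num_bps; i++)
		if (ps->bp[i] == target)
			break;
	if (i == ps->num_bps)
		return;		/* not found */

	ps->bp[i] = ps->bp[ps->num_bps - 1];
	ps->num_bps--;
}
```

One subtlety of this pattern: if entries are removed during a forward index-based iteration, the element swapped into the current slot is skipped, so bulk teardown should repeatedly remove index 0 (or iterate backwards).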
 
 static int dpaa2_eth_set_buffer_layout(struct dpaa2_eth_priv *priv)
@@ -4154,15 +4332,16 @@ out:
  */
 static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv)
 {
+       struct dpaa2_eth_bp *bp = priv->bp[DPAA2_ETH_DEFAULT_BP_IDX];
        struct net_device *net_dev = priv->net_dev;
+       struct dpni_pools_cfg pools_params = { 0 };
        struct device *dev = net_dev->dev.parent;
-       struct dpni_pools_cfg pools_params;
        struct dpni_error_cfg err_cfg;
        int err = 0;
        int i;
 
        pools_params.num_dpbp = 1;
-       pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
+       pools_params.pools[0].dpbp_id = bp->dev->obj_desc.id;
        pools_params.pools[0].backup_pool = 0;
        pools_params.pools[0].buffer_size = priv->rx_buf_size;
        err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
@@ -4641,7 +4820,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
 
        dpaa2_eth_setup_fqs(priv);
 
-       err = dpaa2_eth_setup_dpbp(priv);
+       err = dpaa2_eth_setup_default_dpbp(priv);
        if (err)
                goto err_dpbp_setup;
 
@@ -4777,7 +4956,7 @@ err_alloc_percpu_extras:
 err_alloc_percpu_stats:
        dpaa2_eth_del_ch_napi(priv);
 err_bind:
-       dpaa2_eth_free_dpbp(priv);
+       dpaa2_eth_free_dpbps(priv);
 err_dpbp_setup:
        dpaa2_eth_free_dpio(priv);
 err_dpio_setup:
@@ -4830,7 +5009,7 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
        free_percpu(priv->percpu_extras);
 
        dpaa2_eth_del_ch_napi(priv);
-       dpaa2_eth_free_dpbp(priv);
+       dpaa2_eth_free_dpbps(priv);
        dpaa2_eth_free_dpio(priv);
        dpaa2_eth_free_dpni(priv);
        if (priv->onestep_reg_base)
index 4477184..5d0fc43 100644 (file)
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
 /* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2022 NXP
  */
 
 #ifndef __DPAA2_ETH_H
  */
 #define DPAA2_ETH_TXCONF_PER_NAPI      256
 
+/* Maximum number of Tx frames to be processed in a single NAPI
+ * call when AF_XDP is running. Bind it to DPAA2_ETH_TXCONF_PER_NAPI
+ * to maximize the throughput.
+ */
+#define DPAA2_ETH_TX_ZC_PER_NAPI       DPAA2_ETH_TXCONF_PER_NAPI
+
 /* Buffer quota per channel. We want to keep the number of in-flight ingress
  * frames in check: for small sized frames, congestion group taildrop may kick in
  * first; for large sizes, Rx FQ taildrop threshold will ensure only a
 #define DPAA2_ETH_RX_BUF_ALIGN_REV1    256
 #define DPAA2_ETH_RX_BUF_ALIGN         64
 
+/* The firmware allows assigning multiple buffer pools to a single DPNI -
+ * maximum 8 DPBP objects. By default, only the first DPBP (idx 0) is used for
+ * all queues. Thus, when enabling AF_XDP we must accommodate up to 9 DPBP
+ * objects: the default one and 8 other distinct buffer pools, one per queue.
+ */
+#define DPAA2_ETH_DEFAULT_BP_IDX       0
+#define DPAA2_ETH_MAX_BPS              9
+
 /* We are accommodating a skb backpointer and some S/G info
  * in the frame's software annotation. The hardware
  * options are either 0 or 64, so we choose the latter.
@@ -122,6 +136,7 @@ enum dpaa2_eth_swa_type {
        DPAA2_ETH_SWA_SINGLE,
        DPAA2_ETH_SWA_SG,
        DPAA2_ETH_SWA_XDP,
+       DPAA2_ETH_SWA_XSK,
        DPAA2_ETH_SWA_SW_TSO,
 };
 
@@ -144,6 +159,10 @@ struct dpaa2_eth_swa {
                        struct xdp_frame *xdpf;
                } xdp;
                struct {
+                       struct xdp_buff *xdp_buff;
+                       int sgt_size;
+               } xsk;
+               struct {
                        struct sk_buff *skb;
                        int num_sg;
                        int sgt_size;
@@ -421,12 +440,19 @@ enum dpaa2_eth_fq_type {
 };
 
 struct dpaa2_eth_priv;
+struct dpaa2_eth_channel;
+struct dpaa2_eth_fq;
 
 struct dpaa2_eth_xdp_fds {
        struct dpaa2_fd fds[DEV_MAP_BULK_SIZE];
        ssize_t num;
 };
 
+typedef void dpaa2_eth_consume_cb_t(struct dpaa2_eth_priv *priv,
+                                   struct dpaa2_eth_channel *ch,
+                                   const struct dpaa2_fd *fd,
+                                   struct dpaa2_eth_fq *fq);
+
 struct dpaa2_eth_fq {
        u32 fqid;
        u32 tx_qdbin;
@@ -439,10 +465,7 @@ struct dpaa2_eth_fq {
        struct dpaa2_eth_channel *channel;
        enum dpaa2_eth_fq_type type;
 
-       void (*consume)(struct dpaa2_eth_priv *priv,
-                       struct dpaa2_eth_channel *ch,
-                       const struct dpaa2_fd *fd,
-                       struct dpaa2_eth_fq *fq);
+       dpaa2_eth_consume_cb_t *consume;
        struct dpaa2_eth_fq_stats stats;
 
        struct dpaa2_eth_xdp_fds xdp_redirect_fds;
@@ -454,6 +477,11 @@ struct dpaa2_eth_ch_xdp {
        unsigned int res;
 };
 
+struct dpaa2_eth_bp {
+       struct fsl_mc_device *dev;
+       int bpid;
+};
+
 struct dpaa2_eth_channel {
        struct dpaa2_io_notification_ctx nctx;
        struct fsl_mc_device *dpcon;
@@ -472,6 +500,11 @@ struct dpaa2_eth_channel {
        /* Buffers to be recycled back in the buffer pool */
        u64 recycled_bufs[DPAA2_ETH_BUFS_PER_CMD];
        int recycled_bufs_cnt;
+
+       bool xsk_zc;
+       int xsk_tx_pkts_sent;
+       struct xsk_buff_pool *xsk_pool;
+       struct dpaa2_eth_bp *bp;
 };
 
 struct dpaa2_eth_dist_fields {
@@ -506,7 +539,7 @@ struct dpaa2_eth_trap_data {
 
 #define DPAA2_ETH_DEFAULT_COPYBREAK    512
 
-#define DPAA2_ETH_ENQUEUE_MAX_FDS      200
+#define DPAA2_ETH_ENQUEUE_MAX_FDS      256
 struct dpaa2_eth_fds {
        struct dpaa2_fd array[DPAA2_ETH_ENQUEUE_MAX_FDS];
 };
@@ -535,14 +568,16 @@ struct dpaa2_eth_priv {
        u8 ptp_correction_off;
        void (*dpaa2_set_onestep_params_cb)(struct dpaa2_eth_priv *priv,
                                            u32 offset, u8 udp);
-       struct fsl_mc_device *dpbp_dev;
        u16 rx_buf_size;
-       u16 bpid;
        struct iommu_domain *iommu_domain;
 
        enum hwtstamp_tx_types tx_tstamp_type;  /* Tx timestamping type */
        bool rx_tstamp;                         /* Rx timestamping enabled */
 
+       /* Buffer pool management */
+       struct dpaa2_eth_bp *bp[DPAA2_ETH_MAX_BPS];
+       int num_bps;
+
        u16 tx_qdid;
        struct fsl_mc_io *mc_io;
        /* Cores which have an affine DPIO/DPCON.
@@ -771,4 +806,54 @@ void dpaa2_eth_dl_traps_unregister(struct dpaa2_eth_priv *priv);
 
 struct dpaa2_eth_trap_item *dpaa2_eth_dl_get_trap(struct dpaa2_eth_priv *priv,
                                                  struct dpaa2_fapr *fapr);
+
+struct dpaa2_eth_bp *dpaa2_eth_allocate_dpbp(struct dpaa2_eth_priv *priv);
+void dpaa2_eth_free_dpbp(struct dpaa2_eth_priv *priv, struct dpaa2_eth_bp *bp);
+
+struct sk_buff *dpaa2_eth_alloc_skb(struct dpaa2_eth_priv *priv,
+                                   struct dpaa2_eth_channel *ch,
+                                   const struct dpaa2_fd *fd, u32 fd_length,
+                                   void *fd_vaddr);
+
+void dpaa2_eth_receive_skb(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          const struct dpaa2_fd *fd, void *vaddr,
+                          struct dpaa2_eth_fq *fq,
+                          struct rtnl_link_stats64 *percpu_stats,
+                          struct sk_buff *skb);
+
+void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+                 struct dpaa2_eth_channel *ch,
+                 const struct dpaa2_fd *fd,
+                 struct dpaa2_eth_fq *fq);
+
+void *dpaa2_iova_to_virt(struct iommu_domain *domain, dma_addr_t iova_addr);
+void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          dma_addr_t addr);
+
+void dpaa2_eth_xdp_enqueue(struct dpaa2_eth_priv *priv,
+                          struct dpaa2_eth_channel *ch,
+                          struct dpaa2_fd *fd,
+                          void *buf_start, u16 queue_id);
+
+int dpaa2_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags);
+int dpaa2_xsk_setup_pool(struct net_device *dev, struct xsk_buff_pool *pool, u16 qid);
+
+void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
+                         struct dpaa2_eth_channel *ch,
+                         struct dpaa2_eth_fq *fq,
+                         const struct dpaa2_fd *fd, bool in_napi);
+bool dpaa2_xsk_tx(struct dpaa2_eth_priv *priv,
+                 struct dpaa2_eth_channel *ch);
+
+/* SGT (Scatter-Gather Table) cache management */
+void *dpaa2_eth_sgt_get(struct dpaa2_eth_priv *priv);
+
+void dpaa2_eth_sgt_recycle(struct dpaa2_eth_priv *priv, void *sgt_buf);
+
 #endif /* __DPAA2_ETH_H */
index eea7d7a..32a38a0 100644 (file)
@@ -1,7 +1,6 @@
 // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
 /* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- * Copyright 2020 NXP
+ * Copyright 2016-2022 NXP
  */
 
 #include <linux/net_tstamp.h>
@@ -227,17 +226,8 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
                                        struct ethtool_stats *stats,
                                        u64 *data)
 {
-       int i = 0;
-       int j, k, err;
-       int num_cnt;
-       union dpni_statistics dpni_stats;
-       u32 fcnt, bcnt;
-       u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
-       u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
-       u32 buf_cnt;
        struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-       struct dpaa2_eth_drv_stats *extras;
-       struct dpaa2_eth_ch_stats *ch_stats;
+       union dpni_statistics dpni_stats;
        int dpni_stats_page_size[DPNI_STATISTICS_CNT] = {
                sizeof(dpni_stats.page_0),
                sizeof(dpni_stats.page_1),
@@ -247,6 +237,13 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
                sizeof(dpni_stats.page_5),
                sizeof(dpni_stats.page_6),
        };
+       u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
+       u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
+       struct dpaa2_eth_ch_stats *ch_stats;
+       struct dpaa2_eth_drv_stats *extras;
+       u32 buf_cnt, buf_cnt_total = 0;
+       int j, k, err, num_cnt, i = 0;
+       u32 fcnt, bcnt;
 
        memset(data, 0,
               sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
@@ -308,12 +305,15 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
        *(data + i++) = fcnt_tx_total;
        *(data + i++) = bcnt_tx_total;
 
-       err = dpaa2_io_query_bp_count(NULL, priv->bpid, &buf_cnt);
-       if (err) {
-               netdev_warn(net_dev, "Buffer count query error %d\n", err);
-               return;
+       for (j = 0; j < priv->num_bps; j++) {
+               err = dpaa2_io_query_bp_count(NULL, priv->bp[j]->bpid, &buf_cnt);
+               if (err) {
+                       netdev_warn(net_dev, "Buffer count query error %d\n", err);
+                       return;
+               }
+               buf_cnt_total += buf_cnt;
        }
-       *(data + i++) = buf_cnt;
+       *(data + i++) = buf_cnt_total;
 
        if (dpaa2_eth_has_mac(priv))
                dpaa2_mac_get_ethtool_stats(priv->mac, data + i);
@@ -876,6 +876,29 @@ restore_rx_usecs:
        return err;
 }
 
+static void dpaa2_eth_get_channels(struct net_device *net_dev,
+                                  struct ethtool_channels *channels)
+{
+       struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+       int queue_count = dpaa2_eth_queue_count(priv);
+
+       channels->max_rx = queue_count;
+       channels->max_tx = queue_count;
+       channels->rx_count = queue_count;
+       channels->tx_count = queue_count;
+
+       /* Tx confirmation and Rx error */
+       channels->max_other = queue_count + 1;
+       channels->max_combined = channels->max_rx +
+                                channels->max_tx +
+                                channels->max_other;
+       /* Tx conf and Rx err */
+       channels->other_count = queue_count + 1;
+       channels->combined_count = channels->rx_count +
+                                  channels->tx_count +
+                                  channels->other_count;
+}
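
The channel counts reported above follow a fixed shape: one Rx and one Tx channel per queue, plus "other" channels covering the per-queue Tx-confirmation queues and the single Rx-error queue. A minimal sketch of the arithmetic, with `queue_count` standing in for `dpaa2_eth_queue_count()`:

```c
#include <assert.h>

struct toy_channels {
	int rx, tx, other, combined;
};

/* "other" = one Tx-confirmation queue per queue pair + the Rx-error
 * queue; "combined" is simply the sum of the three categories.
 */
static struct toy_channels toy_get_channels(int queue_count)
{
	struct toy_channels c;

	c.rx = queue_count;
	c.tx = queue_count;
	c.other = queue_count + 1;
	c.combined = c.rx + c.tx + c.other;
	return c;
}
```
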
+
 const struct ethtool_ops dpaa2_ethtool_ops = {
        .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS |
                                     ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
@@ -896,4 +919,5 @@ const struct ethtool_ops dpaa2_ethtool_ops = {
        .set_tunable = dpaa2_eth_set_tunable,
        .get_coalesce = dpaa2_eth_get_coalesce,
        .set_coalesce = dpaa2_eth_set_coalesce,
+       .get_channels = dpaa2_eth_get_channels,
 };
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
new file mode 100644 (file)
index 0000000..567f52a
--- /dev/null
@@ -0,0 +1,454 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/* Copyright 2022 NXP
+ */
+#include <linux/filter.h>
+#include <linux/compiler.h>
+#include <linux/bpf_trace.h>
+#include <net/xdp.h>
+#include <net/xdp_sock_drv.h>
+
+#include "dpaa2-eth.h"
+
+static void dpaa2_eth_setup_consume_func(struct dpaa2_eth_priv *priv,
+                                        struct dpaa2_eth_channel *ch,
+                                        enum dpaa2_eth_fq_type type,
+                                        dpaa2_eth_consume_cb_t *consume)
+{
+       struct dpaa2_eth_fq *fq;
+       int i;
+
+       for (i = 0; i < priv->num_fqs; i++) {
+               fq = &priv->fq[i];
+
+               if (fq->type != type)
+                       continue;
+               if (fq->channel != ch)
+                       continue;
+
+               fq->consume = consume;
+       }
+}
+
+static u32 dpaa2_xsk_run_xdp(struct dpaa2_eth_priv *priv,
+                            struct dpaa2_eth_channel *ch,
+                            struct dpaa2_eth_fq *rx_fq,
+                            struct dpaa2_fd *fd, void *vaddr)
+{
+       dma_addr_t addr = dpaa2_fd_get_addr(fd);
+       struct bpf_prog *xdp_prog;
+       struct xdp_buff *xdp_buff;
+       struct dpaa2_eth_swa *swa;
+       u32 xdp_act = XDP_PASS;
+       int err;
+
+       xdp_prog = READ_ONCE(ch->xdp.prog);
+       if (!xdp_prog)
+               goto out;
+
+       swa = (struct dpaa2_eth_swa *)(vaddr + DPAA2_ETH_RX_HWA_SIZE +
+                                      ch->xsk_pool->umem->headroom);
+       xdp_buff = swa->xsk.xdp_buff;
+
+       xdp_buff->data_hard_start = vaddr;
+       xdp_buff->data = vaddr + dpaa2_fd_get_offset(fd);
+       xdp_buff->data_end = xdp_buff->data + dpaa2_fd_get_len(fd);
+       xdp_set_data_meta_invalid(xdp_buff);
+       xdp_buff->rxq = &ch->xdp_rxq;
+
+       xsk_buff_dma_sync_for_cpu(xdp_buff, ch->xsk_pool);
+       xdp_act = bpf_prog_run_xdp(xdp_prog, xdp_buff);
+
+       /* xdp.data pointer may have changed */
+       dpaa2_fd_set_offset(fd, xdp_buff->data - vaddr);
+       dpaa2_fd_set_len(fd, xdp_buff->data_end - xdp_buff->data);
+
+       if (likely(xdp_act == XDP_REDIRECT)) {
+               err = xdp_do_redirect(priv->net_dev, xdp_buff, xdp_prog);
+               if (unlikely(err)) {
+                       ch->stats.xdp_drop++;
+                       dpaa2_eth_recycle_buf(priv, ch, addr);
+               } else {
+                       ch->buf_count--;
+                       ch->stats.xdp_redirect++;
+               }
+
+               goto xdp_redir;
+       }
+
+       switch (xdp_act) {
+       case XDP_PASS:
+               break;
+       case XDP_TX:
+               dpaa2_eth_xdp_enqueue(priv, ch, fd, vaddr, rx_fq->flowid);
+               break;
+       default:
+               bpf_warn_invalid_xdp_action(priv->net_dev, xdp_prog, xdp_act);
+               fallthrough;
+       case XDP_ABORTED:
+               trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act);
+               fallthrough;
+       case XDP_DROP:
+               dpaa2_eth_recycle_buf(priv, ch, addr);
+               ch->stats.xdp_drop++;
+               break;
+       }
+
+xdp_redir:
+       ch->xdp.res |= xdp_act;
+out:
+       return xdp_act;
+}
+
+/* Rx frame processing routine for the AF_XDP fast path */
+static void dpaa2_xsk_rx(struct dpaa2_eth_priv *priv,
+                        struct dpaa2_eth_channel *ch,
+                        const struct dpaa2_fd *fd,
+                        struct dpaa2_eth_fq *fq)
+{
+       dma_addr_t addr = dpaa2_fd_get_addr(fd);
+       u8 fd_format = dpaa2_fd_get_format(fd);
+       struct rtnl_link_stats64 *percpu_stats;
+       u32 fd_length = dpaa2_fd_get_len(fd);
+       struct sk_buff *skb;
+       void *vaddr;
+       u32 xdp_act;
+
+       trace_dpaa2_rx_xsk_fd(priv->net_dev, fd);
+
+       vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+       percpu_stats = this_cpu_ptr(priv->percpu_stats);
+
+       if (fd_format != dpaa2_fd_single) {
+               WARN_ON(priv->xdp_prog);
+               /* AF_XDP doesn't support any other formats */
+               goto err_frame_format;
+       }
+
+       xdp_act = dpaa2_xsk_run_xdp(priv, ch, fq, (struct dpaa2_fd *)fd, vaddr);
+       if (xdp_act != XDP_PASS) {
+               percpu_stats->rx_packets++;
+               percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
+               return;
+       }
+
+       /* Build skb */
+       skb = dpaa2_eth_alloc_skb(priv, ch, fd, fd_length, vaddr);
+       if (!skb)
+               /* Nothing else we can do, recycle the buffer and
+                * drop the frame.
+                */
+               goto err_alloc_skb;
+
+       /* Send the skb to the Linux networking stack */
+       dpaa2_eth_receive_skb(priv, ch, fd, vaddr, fq, percpu_stats, skb);
+
+       return;
+
+err_alloc_skb:
+       dpaa2_eth_recycle_buf(priv, ch, addr);
+err_frame_format:
+       percpu_stats->rx_dropped++;
+}
+
+static void dpaa2_xsk_set_bp_per_qdbin(struct dpaa2_eth_priv *priv,
+                                      struct dpni_pools_cfg *pools_params)
+{
+       int curr_bp = 0, i, j;
+
+       pools_params->pool_options = DPNI_POOL_ASSOC_QDBIN;
+       for (i = 0; i < priv->num_bps; i++) {
+               for (j = 0; j < priv->num_channels; j++)
+                       if (priv->bp[i] == priv->channel[j]->bp)
+                               pools_params->pools[curr_bp].priority_mask |= (1 << j);
+               if (!pools_params->pools[curr_bp].priority_mask)
+                       continue;
+
+               pools_params->pools[curr_bp].dpbp_id = priv->bp[i]->bpid;
+               pools_params->pools[curr_bp].buffer_size = priv->rx_buf_size;
+               pools_params->pools[curr_bp++].backup_pool = 0;
+       }
+       pools_params->num_dpbp = curr_bp;
+}
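
The per-qdbin mapping built above can be exercised in isolation: for every buffer pool, collect a bitmask of the channels currently assigned to it and skip pools with no users. The sketch below mirrors that loop structure with simplified stand-ins for the driver and MC firmware structures:

```c
#include <assert.h>

#define MAX_POOLS 9

struct pool_cfg {
	int dpbp_id;
	unsigned int priority_mask;	/* bit j set => qdbin j uses this pool */
};

/* chan_pool[j] holds the pool id channel j is assigned to; emit one
 * pool_cfg per pool that has at least one user and return the count.
 */
static int build_pool_masks(const int chan_pool[], int num_channels,
			    const int pool_ids[], int num_pools,
			    struct pool_cfg out[])
{
	int curr = 0, i, j;

	for (i = 0; i < num_pools; i++) {
		unsigned int mask = 0;

		for (j = 0; j < num_channels; j++)
			if (chan_pool[j] == pool_ids[i])
				mask |= 1u << j;
		if (!mask)
			continue;	/* pool currently unused, skip it */

		out[curr].dpbp_id = pool_ids[i];
		out[curr].priority_mask = mask;
		curr++;
	}
	return curr;
}
```

This mirrors why the function recomputes the whole mapping on every enable/disable: each AF_XDP socket moves one channel to a private pool, so the channel-to-pool bitmasks must be rebuilt from scratch.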
+
+static int dpaa2_xsk_disable_pool(struct net_device *dev, u16 qid)
+{
+       struct xsk_buff_pool *pool = xsk_get_pool_from_qid(dev, qid);
+       struct dpaa2_eth_priv *priv = netdev_priv(dev);
+       struct dpni_pools_cfg pools_params = { 0 };
+       struct dpaa2_eth_channel *ch;
+       int err;
+       bool up;
+
+       ch = priv->channel[qid];
+       if (!ch->xsk_pool)
+               return -EINVAL;
+
+       up = netif_running(dev);
+       if (up)
+               dev_close(dev);
+
+       xsk_pool_dma_unmap(pool, 0);
+       err = xdp_rxq_info_reg_mem_model(&ch->xdp_rxq,
+                                        MEM_TYPE_PAGE_ORDER0, NULL);
+       if (err)
+               netdev_err(dev, "xdp_rxq_info_reg_mem_model() failed (err = %d)\n",
+                          err);
+
+       dpaa2_eth_free_dpbp(priv, ch->bp);
+
+       ch->xsk_zc = false;
+       ch->xsk_pool = NULL;
+       ch->xsk_tx_pkts_sent = 0;
+       ch->bp = priv->bp[DPAA2_ETH_DEFAULT_BP_IDX];
+
+       dpaa2_eth_setup_consume_func(priv, ch, DPAA2_RX_FQ, dpaa2_eth_rx);
+
+       dpaa2_xsk_set_bp_per_qdbin(priv, &pools_params);
+       err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
+       if (err)
+               netdev_err(dev, "dpni_set_pools() failed\n");
+
+       if (up) {
+               err = dev_open(dev, NULL);
+               if (err)
+                       return err;
+       }
+
+       return 0;
+}
+
+static int dpaa2_xsk_enable_pool(struct net_device *dev,
+                                struct xsk_buff_pool *pool,
+                                u16 qid)
+{
+       struct dpaa2_eth_priv *priv = netdev_priv(dev);
+       struct dpni_pools_cfg pools_params = { 0 };
+       struct dpaa2_eth_channel *ch;
+       int err, err2;
+       bool up;
+
+       if (priv->dpni_attrs.wriop_version < DPAA2_WRIOP_VERSION(3, 0, 0)) {
+               netdev_err(dev, "AF_XDP zero-copy not supported on devices older than WRIOP(3, 0, 0)\n");
+               return -EOPNOTSUPP;
+       }
+
+       if (priv->dpni_attrs.num_queues > 8) {
+               netdev_err(dev, "AF_XDP zero-copy not supported on DPNI with more than 8 queues\n");
+               return -EOPNOTSUPP;
+       }
+
+       up = netif_running(dev);
+       if (up)
+               dev_close(dev);
+
+       err = xsk_pool_dma_map(pool, priv->net_dev->dev.parent, 0);
+       if (err) {
+               netdev_err(dev, "xsk_pool_dma_map() failed (err = %d)\n",
+                          err);
+               goto err_dma_unmap;
+       }
+
+       ch = priv->channel[qid];
+       err = xdp_rxq_info_reg_mem_model(&ch->xdp_rxq, MEM_TYPE_XSK_BUFF_POOL, NULL);
+       if (err) {
+               netdev_err(dev, "xdp_rxq_info_reg_mem_model() failed (err = %d)\n", err);
+               goto err_mem_model;
+       }
+       xsk_pool_set_rxq_info(pool, &ch->xdp_rxq);
+
+       priv->bp[priv->num_bps] = dpaa2_eth_allocate_dpbp(priv);
+       if (IS_ERR(priv->bp[priv->num_bps])) {
+               err = PTR_ERR(priv->bp[priv->num_bps]);
+               goto err_bp_alloc;
+       }
+       ch->xsk_zc = true;
+       ch->xsk_pool = pool;
+       ch->bp = priv->bp[priv->num_bps++];
+
+       dpaa2_eth_setup_consume_func(priv, ch, DPAA2_RX_FQ, dpaa2_xsk_rx);
+
+       dpaa2_xsk_set_bp_per_qdbin(priv, &pools_params);
+       err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
+       if (err) {
+               netdev_err(dev, "dpni_set_pools() failed\n");
+               goto err_set_pools;
+       }
+
+       if (up) {
+               err = dev_open(dev, NULL);
+               if (err)
+                       return err;
+       }
+
+       return 0;
+
+err_set_pools:
+       err2 = dpaa2_xsk_disable_pool(dev, qid);
+       if (err2)
+               netdev_err(dev, "dpaa2_xsk_disable_pool() failed %d\n", err2);
+err_bp_alloc:
+       err2 = xdp_rxq_info_reg_mem_model(&priv->channel[qid]->xdp_rxq,
+                                         MEM_TYPE_PAGE_ORDER0, NULL);
+       if (err2)
+               netdev_err(dev, "xdp_rxq_info_reg_mem_model() failed with %d\n", err2);
+err_mem_model:
+       xsk_pool_dma_unmap(pool, 0);
+err_dma_unmap:
+       if (up)
+               dev_open(dev, NULL);
+
+       return err;
+}
+
+int dpaa2_xsk_setup_pool(struct net_device *dev, struct xsk_buff_pool *pool, u16 qid)
+{
+       return pool ? dpaa2_xsk_enable_pool(dev, pool, qid) :
+                     dpaa2_xsk_disable_pool(dev, qid);
+}
+
+int dpaa2_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+{
+       struct dpaa2_eth_priv *priv = netdev_priv(dev);
+       struct dpaa2_eth_channel *ch = priv->channel[qid];
+
+       if (!priv->link_state.up)
+               return -ENETDOWN;
+
+       if (!priv->xdp_prog)
+               return -EINVAL;
+
+       if (!ch->xsk_zc)
+               return -EINVAL;
+
+       /* We do not have access to a per channel SW interrupt, so instead we
+        * schedule a NAPI instance.
+        */
+       if (!napi_if_scheduled_mark_missed(&ch->napi))
+               napi_schedule(&ch->napi);
+
+       return 0;
+}
+
+static int dpaa2_xsk_tx_build_fd(struct dpaa2_eth_priv *priv,
+                                struct dpaa2_eth_channel *ch,
+                                struct dpaa2_fd *fd,
+                                struct xdp_desc *xdp_desc)
+{
+       struct device *dev = priv->net_dev->dev.parent;
+       struct dpaa2_sg_entry *sgt;
+       struct dpaa2_eth_swa *swa;
+       void *sgt_buf = NULL;
+       dma_addr_t sgt_addr;
+       int sgt_buf_size;
+       dma_addr_t addr;
+       int err = 0;
+
+       /* Prepare the HW SGT structure */
+       sgt_buf_size = priv->tx_data_offset + sizeof(struct dpaa2_sg_entry);
+       sgt_buf = dpaa2_eth_sgt_get(priv);
+       if (unlikely(!sgt_buf))
+               return -ENOMEM;
+       sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
+
+       /* Get the address of the XSK Tx buffer */
+       addr = xsk_buff_raw_get_dma(ch->xsk_pool, xdp_desc->addr);
+       xsk_buff_raw_dma_sync_for_device(ch->xsk_pool, addr, xdp_desc->len);
+
+       /* Fill in the HW SGT structure */
+       dpaa2_sg_set_addr(sgt, addr);
+       dpaa2_sg_set_len(sgt, xdp_desc->len);
+       dpaa2_sg_set_final(sgt, true);
+
+       /* Store the necessary info in the SGT buffer */
+       swa = (struct dpaa2_eth_swa *)sgt_buf;
+       swa->type = DPAA2_ETH_SWA_XSK;
+       swa->xsk.sgt_size = sgt_buf_size;
+
+       /* Separately map the SGT buffer */
+       sgt_addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
+       if (unlikely(dma_mapping_error(dev, sgt_addr))) {
+               err = -ENOMEM;
+               goto sgt_map_failed;
+       }
+
+       /* Initialize FD fields */
+       memset(fd, 0, sizeof(struct dpaa2_fd));
+       dpaa2_fd_set_offset(fd, priv->tx_data_offset);
+       dpaa2_fd_set_format(fd, dpaa2_fd_sg);
+       dpaa2_fd_set_addr(fd, sgt_addr);
+       dpaa2_fd_set_len(fd, xdp_desc->len);
+       dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA);
+
+       return 0;
+
+sgt_map_failed:
+       dpaa2_eth_sgt_recycle(priv, sgt_buf);
+
+       return err;
+}
+
+bool dpaa2_xsk_tx(struct dpaa2_eth_priv *priv,
+                 struct dpaa2_eth_channel *ch)
+{
+       struct xdp_desc *xdp_descs = ch->xsk_pool->tx_descs;
+       struct dpaa2_eth_drv_stats *percpu_extras;
+       struct rtnl_link_stats64 *percpu_stats;
+       int budget = DPAA2_ETH_TX_ZC_PER_NAPI;
+       int total_enqueued, enqueued;
+       int retries, max_retries;
+       struct dpaa2_eth_fq *fq;
+       struct dpaa2_fd *fds;
+       int batch, i, err;
+
+       percpu_stats = this_cpu_ptr(priv->percpu_stats);
+       percpu_extras = this_cpu_ptr(priv->percpu_extras);
+       fds = (this_cpu_ptr(priv->fd))->array;
+
+       /* Use the FQ with the same idx as the affine CPU */
+       fq = &priv->fq[ch->nctx.desired_cpu];
+
+       batch = xsk_tx_peek_release_desc_batch(ch->xsk_pool, budget);
+       if (!batch)
+               return false;
+
+       /* Create an FD for each XSK frame to be sent */
+       for (i = 0; i < batch; i++) {
+               err = dpaa2_xsk_tx_build_fd(priv, ch, &fds[i], &xdp_descs[i]);
+               if (err) {
+                       batch = i;
+                       break;
+               }
+
+               trace_dpaa2_tx_xsk_fd(priv->net_dev, &fds[i]);
+       }
+
+       /* Enqueue all the created FDs */
+       max_retries = batch * DPAA2_ETH_ENQUEUE_RETRIES;
+       total_enqueued = 0;
+       enqueued = 0;
+       retries = 0;
+       while (total_enqueued < batch && retries < max_retries) {
+               err = priv->enqueue(priv, fq, &fds[total_enqueued], 0,
+                                   batch - total_enqueued, &enqueued);
+               if (err == -EBUSY) {
+                       retries++;
+                       continue;
+               }
+
+               total_enqueued += enqueued;
+       }
+       percpu_extras->tx_portal_busy += retries;
+
+       /* Update statistics */
+       percpu_stats->tx_packets += total_enqueued;
+       for (i = 0; i < total_enqueued; i++)
+               percpu_stats->tx_bytes += dpaa2_fd_get_len(&fds[i]);
+       for (i = total_enqueued; i < batch; i++) {
+               dpaa2_eth_free_tx_fd(priv, ch, fq, &fds[i], false);
+               percpu_stats->tx_errors++;
+       }
+
+       xsk_tx_release(ch->xsk_pool);
+
+       return total_enqueued == budget;
+}
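The enqueue loop in dpaa2_xsk_tx() above bounds portal-busy retries by `batch * DPAA2_ETH_ENQUEUE_RETRIES` and accounts partial enqueues across calls. A minimal standalone model of just that control flow, assuming a hypothetical `mock_portal`/`mock_enqueue` in place of the real QBMan enqueue callback (the retry constant and `-16` for `-EBUSY` are illustrative stand-ins, not driver API):

```c
#include <assert.h>

#define ENQUEUE_RETRIES 10  /* stands in for DPAA2_ETH_ENQUEUE_RETRIES */

/* Mock of priv->enqueue(): accepts at most 'room' frames per successful
 * call and returns -EBUSY while the (simulated) portal is busy. */
struct mock_portal {
	int room;       /* frames accepted per successful call */
	int busy_left;  /* number of calls that still return -EBUSY */
};

static int mock_enqueue(struct mock_portal *p, int num, int *enqueued)
{
	if (p->busy_left > 0) {
		p->busy_left--;
		return -16; /* -EBUSY */
	}
	*enqueued = num < p->room ? num : p->room;
	return 0;
}

/* Same shape as the dpaa2_xsk_tx() enqueue loop: keep calling until the
 * whole batch is in or the retry budget is exhausted; partial successes
 * advance total_enqueued, busy returns only burn retries. */
static int enqueue_batch(struct mock_portal *p, int batch, int *retries_out)
{
	int max_retries = batch * ENQUEUE_RETRIES;
	int total_enqueued = 0, enqueued = 0, retries = 0;

	while (total_enqueued < batch && retries < max_retries) {
		int err = mock_enqueue(p, batch - total_enqueued, &enqueued);

		if (err == -16) {
			retries++;
			continue;
		}
		total_enqueued += enqueued;
	}
	*retries_out = retries;
	return total_enqueued;
}
```

With a portal that accepts 4 frames per call after 2 busy returns, a batch of 10 completes in 3 successful calls and records 2 retries.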
index 828f538..be9492b 100644 (file)
 #define DPNI_VER_MINOR                         0
 #define DPNI_CMD_BASE_VERSION                  1
 #define DPNI_CMD_2ND_VERSION                   2
+#define DPNI_CMD_3RD_VERSION                   3
 #define DPNI_CMD_ID_OFFSET                     4
 
 #define DPNI_CMD(id)   (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
 #define DPNI_CMD_V2(id)        (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_2ND_VERSION)
+#define DPNI_CMD_V3(id)        (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_3RD_VERSION)
 
 #define DPNI_CMDID_OPEN                                        DPNI_CMD(0x801)
 #define DPNI_CMDID_CLOSE                               DPNI_CMD(0x800)
@@ -39,7 +41,7 @@
 #define DPNI_CMDID_GET_IRQ_STATUS                      DPNI_CMD(0x016)
 #define DPNI_CMDID_CLEAR_IRQ_STATUS                    DPNI_CMD(0x017)
 
-#define DPNI_CMDID_SET_POOLS                           DPNI_CMD(0x200)
+#define DPNI_CMDID_SET_POOLS                           DPNI_CMD_V3(0x200)
 #define DPNI_CMDID_SET_ERRORS_BEHAVIOR                 DPNI_CMD(0x20B)
 
 #define DPNI_CMDID_GET_QDID                            DPNI_CMD(0x210)
@@ -115,14 +117,19 @@ struct dpni_cmd_open {
 };
 
 #define DPNI_BACKUP_POOL(val, order)   (((val) & 0x1) << (order))
+
+struct dpni_cmd_pool {
+       __le16 dpbp_id;
+       u8 priority_mask;
+       u8 pad;
+};
+
 struct dpni_cmd_set_pools {
-       /* cmd word 0 */
        u8 num_dpbp;
        u8 backup_pool_mask;
-       __le16 pad;
-       /* cmd word 0..4 */
-       __le32 dpbp_id[DPNI_MAX_DPBP];
-       /* cmd word 4..6 */
+       u8 pad;
+       u8 pool_options;
+       struct dpni_cmd_pool pool[DPNI_MAX_DPBP];
        __le16 buffer_size[DPNI_MAX_DPBP];
 };
 
index 6c3b36f..02601a2 100644 (file)
@@ -173,8 +173,12 @@ int dpni_set_pools(struct fsl_mc_io *mc_io,
                                          token);
        cmd_params = (struct dpni_cmd_set_pools *)cmd.params;
        cmd_params->num_dpbp = cfg->num_dpbp;
+       cmd_params->pool_options = cfg->pool_options;
        for (i = 0; i < DPNI_MAX_DPBP; i++) {
-               cmd_params->dpbp_id[i] = cpu_to_le32(cfg->pools[i].dpbp_id);
+               cmd_params->pool[i].dpbp_id =
+                       cpu_to_le16(cfg->pools[i].dpbp_id);
+               cmd_params->pool[i].priority_mask =
+                       cfg->pools[i].priority_mask;
                cmd_params->buffer_size[i] =
                        cpu_to_le16(cfg->pools[i].buffer_size);
                cmd_params->backup_pool_mask |=
index 6fffd51..5c0a1d5 100644 (file)
@@ -92,19 +92,28 @@ int dpni_close(struct fsl_mc_io     *mc_io,
               u32              cmd_flags,
               u16              token);
 
+#define DPNI_POOL_ASSOC_QPRI   0
+#define DPNI_POOL_ASSOC_QDBIN  1
+
 /**
  * struct dpni_pools_cfg - Structure representing buffer pools configuration
  * @num_dpbp: Number of DPBPs
+ * @pool_options: Buffer assignment options.
+ *     This field is a combination of the DPNI_POOL_ASSOC_* flags
  * @pools: Array of buffer pools parameters; The number of valid entries
  *     must match 'num_dpbp' value
  * @pools.dpbp_id: DPBP object ID
+ * @pools.priority_mask: Priority mask that indicates the TCs used with this
+ *     buffer. If set to 0x00, the MC will assume the value 0xff.
  * @pools.buffer_size: Buffer size
  * @pools.backup_pool: Backup pool
  */
 struct dpni_pools_cfg {
        u8              num_dpbp;
+       u8              pool_options;
        struct {
                int     dpbp_id;
+               u8      priority_mask;
                u16     buffer_size;
                int     backup_pool;
        } pools[DPNI_MAX_DPBP];
index 33f84a3..476e386 100644 (file)
@@ -658,6 +658,8 @@ struct fec_enet_private {
        unsigned int reload_period;
        int pps_enable;
        unsigned int next_counter;
+       struct hrtimer perout_timer;
+       u64 perout_stime;
 
        struct imx_sc_ipc *ipc_handle;
 
index cffd9ad..67aa694 100644 (file)
@@ -88,6 +88,9 @@
 #define FEC_CHANNLE_0          0
 #define DEFAULT_PPS_CHANNEL    FEC_CHANNLE_0
 
+#define FEC_PTP_MAX_NSEC_PERIOD                4000000000ULL
+#define FEC_PTP_MAX_NSEC_COUNTER       0x80000000ULL
+
 /**
  * fec_ptp_enable_pps
  * @fep: the fec_enet_private structure handle
@@ -198,6 +201,78 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
        return 0;
 }
 
+static int fec_ptp_pps_perout(struct fec_enet_private *fep)
+{
+       u32 compare_val, ptp_hc, temp_val;
+       u64 curr_time;
+       unsigned long flags;
+
+       spin_lock_irqsave(&fep->tmreg_lock, flags);
+
+       /* Update time counter */
+       timecounter_read(&fep->tc);
+
+       /* Get the current ptp hardware time counter */
+       temp_val = readl(fep->hwp + FEC_ATIME_CTRL);
+       temp_val |= FEC_T_CTRL_CAPTURE;
+       writel(temp_val, fep->hwp + FEC_ATIME_CTRL);
+       if (fep->quirks & FEC_QUIRK_BUG_CAPTURE)
+               udelay(1);
+
+       ptp_hc = readl(fep->hwp + FEC_ATIME);
+
+       /* Convert the ptp local counter to 1588 timestamp */
+       curr_time = timecounter_cyc2time(&fep->tc, ptp_hc);
+
+       /* If the PPS start time is less than the current time plus 100ms,
+        * just return. The software might not be able to set the comparison
+        * time into the FEC_TCCR register in time and would miss the start
+        * time.
+        */
+       if (fep->perout_stime < curr_time + 100 * NSEC_PER_MSEC) {
+               dev_err(&fep->pdev->dev, "Current time is too close to the start time!\n");
+               spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+               return -1;
+       }
+
+       compare_val = fep->perout_stime - curr_time + ptp_hc;
+       compare_val &= fep->cc.mask;
+
+       writel(compare_val, fep->hwp + FEC_TCCR(fep->pps_channel));
+       fep->next_counter = (compare_val + fep->reload_period) & fep->cc.mask;
+
+       /* Enable compare event when overflow */
+       temp_val = readl(fep->hwp + FEC_ATIME_CTRL);
+       temp_val |= FEC_T_CTRL_PINPER;
+       writel(temp_val, fep->hwp + FEC_ATIME_CTRL);
+
+       /* Compare channel setting. */
+       temp_val = readl(fep->hwp + FEC_TCSR(fep->pps_channel));
+       temp_val |= (1 << FEC_T_TF_OFFSET | 1 << FEC_T_TIE_OFFSET);
+       temp_val &= ~(1 << FEC_T_TDRE_OFFSET);
+       temp_val &= ~(FEC_T_TMODE_MASK);
+       temp_val |= (FEC_TMODE_TOGGLE << FEC_T_TMODE_OFFSET);
+       writel(temp_val, fep->hwp + FEC_TCSR(fep->pps_channel));
+
+       /* Write the second compare event timestamp and calculate
+        * the third timestamp. Refer to the TCCR register details in the spec.
+        */
+       writel(fep->next_counter, fep->hwp + FEC_TCCR(fep->pps_channel));
+       fep->next_counter = (fep->next_counter + fep->reload_period) & fep->cc.mask;
+       spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+
+       return 0;
+}
+
+static enum hrtimer_restart fec_ptp_pps_perout_handler(struct hrtimer *timer)
+{
+       struct fec_enet_private *fep = container_of(timer,
+                                       struct fec_enet_private, perout_timer);
+
+       fec_ptp_pps_perout(fep);
+
+       return HRTIMER_NORESTART;
+}
+
 /**
  * fec_ptp_read - read raw cycle counter (to be used by time counter)
  * @cc: the cyclecounter structure
@@ -425,6 +500,17 @@ static int fec_ptp_settime(struct ptp_clock_info *ptp,
        return 0;
 }
 
+static int fec_ptp_pps_disable(struct fec_enet_private *fep, uint channel)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&fep->tmreg_lock, flags);
+       writel(0, fep->hwp + FEC_TCSR(channel));
+       spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+
+       return 0;
+}
+
 /**
  * fec_ptp_enable
  * @ptp: the ptp clock structure
@@ -437,14 +523,84 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
 {
        struct fec_enet_private *fep =
            container_of(ptp, struct fec_enet_private, ptp_caps);
+       ktime_t timeout;
+       struct timespec64 start_time, period;
+       u64 curr_time, delta, period_ns;
+       unsigned long flags;
        int ret = 0;
 
        if (rq->type == PTP_CLK_REQ_PPS) {
                ret = fec_ptp_enable_pps(fep, on);
 
                return ret;
+       } else if (rq->type == PTP_CLK_REQ_PEROUT) {
+               /* Reject requests with unsupported flags */
+               if (rq->perout.flags)
+                       return -EOPNOTSUPP;
+
+               if (rq->perout.index != DEFAULT_PPS_CHANNEL)
+                       return -EOPNOTSUPP;
+
+               fep->pps_channel = DEFAULT_PPS_CHANNEL;
+               period.tv_sec = rq->perout.period.sec;
+               period.tv_nsec = rq->perout.period.nsec;
+               period_ns = timespec64_to_ns(&period);
+
+               /* The FEC PTP timer only has 31 bits, so a period
+                * exceeding 4s is not supported.
+                */
+               if (period_ns > FEC_PTP_MAX_NSEC_PERIOD) {
+                       dev_err(&fep->pdev->dev, "The period must be equal to or less than 4s!\n");
+                       return -EOPNOTSUPP;
+               }
+
+               fep->reload_period = div_u64(period_ns, 2);
+               if (on && fep->reload_period) {
+                       /* Convert 1588 timestamp to ns */
+                       start_time.tv_sec = rq->perout.start.sec;
+                       start_time.tv_nsec = rq->perout.start.nsec;
+                       fep->perout_stime = timespec64_to_ns(&start_time);
+
+                       mutex_lock(&fep->ptp_clk_mutex);
+                       if (!fep->ptp_clk_on) {
+                               dev_err(&fep->pdev->dev, "Error: PTP clock is closed!\n");
+                               mutex_unlock(&fep->ptp_clk_mutex);
+                               return -EOPNOTSUPP;
+                       }
+                       spin_lock_irqsave(&fep->tmreg_lock, flags);
+                       /* Read current timestamp */
+                       curr_time = timecounter_read(&fep->tc);
+                       spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+                       mutex_unlock(&fep->ptp_clk_mutex);
+
+                       if (fep->perout_stime <= curr_time) {
+                               dev_err(&fep->pdev->dev, "Start time must be larger than current time!\n");
+                               return -EINVAL;
+                       }
+
+                       /* Calculate time difference */
+                       delta = fep->perout_stime - curr_time;
+
+                       /* Because the FEC timer counter only has 31 bits,
+                        * correspondingly only the low 31 bits of the time
+                        * comparison register FEC_TCCR can be set. If the
+                        * start time of the pps signal exceeds the current
+                        * time by more than 0x80000000 ns, a software timer
+                        * is used; it expires about 1 second before the
+                        * start time so that FEC_TCCR can be set in time.
+                        */
+                       if (delta > FEC_PTP_MAX_NSEC_COUNTER) {
+                               timeout = ns_to_ktime(delta - NSEC_PER_SEC);
+                               hrtimer_start(&fep->perout_timer, timeout, HRTIMER_MODE_REL);
+                       } else {
+                               return fec_ptp_pps_perout(fep);
+                       }
+               } else {
+                       fec_ptp_pps_disable(fep, fep->pps_channel);
+               }
+
+               return 0;
+       } else {
+               return -EOPNOTSUPP;
        }
-       return -EOPNOTSUPP;
 }
 
 /**
@@ -583,7 +739,7 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
        fep->ptp_caps.max_adj = 250000000;
        fep->ptp_caps.n_alarm = 0;
        fep->ptp_caps.n_ext_ts = 0;
-       fep->ptp_caps.n_per_out = 0;
+       fep->ptp_caps.n_per_out = 1;
        fep->ptp_caps.n_pins = 0;
        fep->ptp_caps.pps = 1;
        fep->ptp_caps.adjfreq = fec_ptp_adjfreq;
@@ -605,6 +761,9 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
 
        INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
 
+       hrtimer_init(&fep->perout_timer, CLOCK_REALTIME, HRTIMER_MODE_REL);
+       fep->perout_timer.function = fec_ptp_pps_perout_handler;
+
        irq = platform_get_irq_byname_optional(pdev, "pps");
        if (irq < 0)
                irq = platform_get_irq_optional(pdev, irq_idx);
@@ -634,6 +793,7 @@ void fec_ptp_stop(struct platform_device *pdev)
        struct fec_enet_private *fep = netdev_priv(ndev);
 
        cancel_delayed_work_sync(&fep->time_keep);
+       hrtimer_cancel(&fep->perout_timer);
        if (fep->ptp_clock)
                ptp_clock_unregister(fep->ptp_clock);
 }
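The fec_ptp_pps_perout() path above programs FEC_TCCR by offsetting the raw hardware counter with how far in the future the start time lies, then wrapping to the 31-bit counter width; the second toggle event is one half-period later. The wrap-around arithmetic can be sketched standalone (the mask mirrors what fep->cc.mask would be for a 31-bit counter; the input values are purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define FEC_CC_MASK 0x7fffffffULL /* 31-bit counter width */

/* Translate an absolute 1588 start time into a FEC_TCCR compare value:
 * add the distance to the start time onto the captured raw counter and
 * wrap to the counter width. */
static uint32_t perout_compare_val(uint64_t perout_stime, uint64_t curr_time,
				   uint32_t ptp_hc)
{
	return (uint32_t)((perout_stime - curr_time + ptp_hc) & FEC_CC_MASK);
}

/* The next toggle event lands one half-period later, again wrapped. */
static uint32_t perout_next_counter(uint32_t compare_val,
				    uint32_t reload_period)
{
	return (compare_val + reload_period) & FEC_CC_MASK;
}
```

Near the top of the counter range the addition overflows bit 31 and the masked result wraps to a small value, which is exactly why the driver masks both the compare value and next_counter.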
index 2b0a30f..c6496a4 100644 (file)
@@ -158,7 +158,6 @@ static int mac_probe(struct platform_device *_of_dev)
        struct device_node      *mac_node, *dev_node;
        struct mac_device       *mac_dev;
        struct platform_device  *of_dev;
-       struct resource         *res;
        struct mac_priv_s       *priv;
        struct fman_mac_params   params;
        u32                      val;
@@ -218,24 +217,25 @@ static int mac_probe(struct platform_device *_of_dev)
        of_node_put(dev_node);
 
        /* Get the address of the memory mapped registers */
-       res = platform_get_mem_or_io(_of_dev, 0);
-       if (!res) {
+       mac_dev->res = platform_get_mem_or_io(_of_dev, 0);
+       if (!mac_dev->res) {
                dev_err(dev, "could not get registers\n");
                return -EINVAL;
        }
 
-       err = devm_request_resource(dev, fman_get_mem_region(priv->fman), res);
+       err = devm_request_resource(dev, fman_get_mem_region(priv->fman),
+                                   mac_dev->res);
        if (err) {
                dev_err_probe(dev, err, "could not request resource\n");
                return err;
        }
 
-       mac_dev->vaddr = devm_ioremap(dev, res->start, resource_size(res));
+       mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,
+                                     resource_size(mac_dev->res));
        if (!mac_dev->vaddr) {
                dev_err(dev, "devm_ioremap() failed\n");
                return -EIO;
        }
-       mac_dev->vaddr_end = mac_dev->vaddr + resource_size(res);
 
        if (!of_device_is_available(mac_node))
                return -ENODEV;
index 5bf03e1..ad06f8d 100644 (file)
@@ -21,8 +21,8 @@ struct mac_priv_s;
 
 struct mac_device {
        void __iomem            *vaddr;
-       void __iomem            *vaddr_end;
        struct device           *dev;
+       struct resource         *res;
        u8                       addr[ETH_ALEN];
        struct fman_port        *port[2];
        struct phylink          *phylink;
index 00fafc0..430ecce 100644 (file)
@@ -419,8 +419,10 @@ int hnae_ae_register(struct hnae_ae_dev *hdev, struct module *owner)
        hdev->cls_dev.release = hnae_release;
        (void)dev_set_name(&hdev->cls_dev, "hnae%d", hdev->id);
        ret = device_register(&hdev->cls_dev);
-       if (ret)
+       if (ret) {
+               put_device(&hdev->cls_dev);
                return ret;
+       }
 
        __module_get(THIS_MODULE);
 
index 19eb839..061952c 100644 (file)
@@ -85,6 +85,7 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
        struct tag_sml_funcfg_tbl *funcfg_table_elem;
        struct hinic_cmd_lt_rd *read_data;
        u16 out_size = sizeof(*read_data);
+       int ret = ~0;
        int err;
 
        read_data = kzalloc(sizeof(*read_data), GFP_KERNEL);
@@ -111,20 +112,25 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
 
        switch (idx) {
        case VALID:
-               return funcfg_table_elem->dw0.bs.valid;
+               ret = funcfg_table_elem->dw0.bs.valid;
+               break;
        case RX_MODE:
-               return funcfg_table_elem->dw0.bs.nic_rx_mode;
+               ret = funcfg_table_elem->dw0.bs.nic_rx_mode;
+               break;
        case MTU:
-               return funcfg_table_elem->dw1.bs.mtu;
+               ret = funcfg_table_elem->dw1.bs.mtu;
+               break;
        case RQ_DEPTH:
-               return funcfg_table_elem->dw13.bs.cfg_rq_depth;
+               ret = funcfg_table_elem->dw13.bs.cfg_rq_depth;
+               break;
        case QUEUE_NUM:
-               return funcfg_table_elem->dw13.bs.cfg_q_num;
+               ret = funcfg_table_elem->dw13.bs.cfg_q_num;
+               break;
        }
 
        kfree(read_data);
 
-       return ~0;
+       return ret;
 }
 
 static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,
index a4fbf44..52ea97c 100644 (file)
 
 #define LP_PKT_CNT             64
 
+#define HINIC_MAX_JUMBO_FRAME_SIZE      15872
+#define HINIC_MAX_MTU_SIZE      (HINIC_MAX_JUMBO_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN)
+#define HINIC_MIN_MTU_SIZE      256
+
 enum hinic_flags {
        HINIC_LINK_UP = BIT(0),
        HINIC_INTF_UP = BIT(1),
index 78190e8..d39eec9 100644 (file)
@@ -924,7 +924,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
 
 err_set_cmdq_depth:
        hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
-
+       free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
 err_cmdq_ctxt:
        hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
                            HINIC_MAX_CMDQ_TYPES);
index 94f4705..2779528 100644 (file)
@@ -877,7 +877,7 @@ int hinic_set_interrupt_cfg(struct hinic_hwdev *hwdev,
        if (err)
                return -EINVAL;
 
-       interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
+       interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
        interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;
 
        err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
index e1f54a2..9d4d795 100644 (file)
@@ -1187,7 +1187,8 @@ static int nic_dev_init(struct pci_dev *pdev)
        else
                netdev->netdev_ops = &hinicvf_netdev_ops;
 
-       netdev->max_mtu = ETH_MAX_MTU;
+       netdev->max_mtu = HINIC_MAX_MTU_SIZE;
+       netdev->min_mtu = HINIC_MIN_MTU_SIZE;
 
        nic_dev = netdev_priv(netdev);
        nic_dev->netdev = netdev;
index 28ae6f1..0a39c3d 100644 (file)
@@ -17,9 +17,6 @@
 #include "hinic_port.h"
 #include "hinic_dev.h"
 
-#define HINIC_MIN_MTU_SIZE              256
-#define HINIC_MAX_JUMBO_FRAME_SIZE      15872
-
 enum mac_op {
        MAC_DEL,
        MAC_SET,
@@ -147,24 +144,12 @@ int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr)
  **/
 int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu)
 {
-       struct net_device *netdev = nic_dev->netdev;
        struct hinic_hwdev *hwdev = nic_dev->hwdev;
        struct hinic_port_mtu_cmd port_mtu_cmd;
        struct hinic_hwif *hwif = hwdev->hwif;
        u16 out_size = sizeof(port_mtu_cmd);
        struct pci_dev *pdev = hwif->pdev;
-       int err, max_frame;
-
-       if (new_mtu < HINIC_MIN_MTU_SIZE) {
-               netif_err(nic_dev, drv, netdev, "mtu < MIN MTU size");
-               return -EINVAL;
-       }
-
-       max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
-       if (max_frame > HINIC_MAX_JUMBO_FRAME_SIZE) {
-               netif_err(nic_dev, drv, netdev, "mtu > MAX MTU size");
-               return -EINVAL;
-       }
+       int err;
 
        port_mtu_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
        port_mtu_cmd.mtu = new_mtu;
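With the inline range checks removed from hinic_port_set_mtu() above, MTU validation moves to the net core via the netdev->min_mtu/max_mtu bounds set in nic_dev_init(). The bound derivation is easy to sanity-check standalone (constants copied from the hinic headers; mtu_valid() is a hypothetical helper mirroring what dev_set_mtu() now enforces):

```c
#include <assert.h>

/* Same derivation as the new hinic macros: max MTU is the max jumbo
 * frame minus the Ethernet header (14 bytes) and FCS (4 bytes). */
#define ETH_HLEN    14
#define ETH_FCS_LEN  4
#define HINIC_MAX_JUMBO_FRAME_SIZE 15872
#define HINIC_MAX_MTU_SIZE (HINIC_MAX_JUMBO_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN)
#define HINIC_MIN_MTU_SIZE 256

/* Hypothetical stand-in for the core's min_mtu/max_mtu range check. */
static int mtu_valid(int new_mtu)
{
	return new_mtu >= HINIC_MIN_MTU_SIZE && new_mtu <= HINIC_MAX_MTU_SIZE;
}
```

The resulting window is 256..15854 bytes, identical to what the driver's hand-rolled checks enforced before.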
index a5f08b9..f7e05b4 100644 (file)
@@ -1174,7 +1174,6 @@ int hinic_vf_func_init(struct hinic_hwdev *hwdev)
                        dev_err(&hwdev->hwif->pdev->dev,
                                "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
                                err, register_info.status, out_size);
-                       hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
                        return -EIO;
                }
        } else {
index 3b14dc9..7d79006 100644 (file)
@@ -690,8 +690,7 @@ static int ibmveth_close(struct net_device *netdev)
 
        napi_disable(&adapter->napi);
 
-       if (!adapter->pool_config)
-               netif_tx_stop_all_queues(netdev);
+       netif_tx_stop_all_queues(netdev);
 
        h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE);
 
@@ -799,9 +798,7 @@ static int ibmveth_set_csum_offload(struct net_device *dev, u32 data)
 
        if (netif_running(dev)) {
                restart = 1;
-               adapter->pool_config = 1;
                ibmveth_close(dev);
-               adapter->pool_config = 0;
        }
 
        set_attr = 0;
@@ -883,9 +880,7 @@ static int ibmveth_set_tso(struct net_device *dev, u32 data)
 
        if (netif_running(dev)) {
                restart = 1;
-               adapter->pool_config = 1;
                ibmveth_close(dev);
-               adapter->pool_config = 0;
        }
 
        set_attr = 0;
@@ -1535,9 +1530,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
           only the buffer pools necessary to hold the new MTU */
        if (netif_running(adapter->netdev)) {
                need_restart = 1;
-               adapter->pool_config = 1;
                ibmveth_close(adapter->netdev);
-               adapter->pool_config = 0;
        }
 
        /* Look for an active buffer pool that can hold the new MTU */
@@ -1701,7 +1694,6 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
        adapter->vdev = dev;
        adapter->netdev = netdev;
        adapter->mcastFilterSize = be32_to_cpu(*mcastFilterSize_p);
-       adapter->pool_config = 0;
        ibmveth_init_link_settings(netdev);
 
        netif_napi_add_weight(netdev, &adapter->napi, ibmveth_poll, 16);
@@ -1841,9 +1833,7 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                                        return -ENOMEM;
                                }
                                pool->active = 1;
-                               adapter->pool_config = 1;
                                ibmveth_close(netdev);
-                               adapter->pool_config = 0;
                                if ((rc = ibmveth_open(netdev)))
                                        return rc;
                        } else {
@@ -1869,10 +1859,8 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                        }
 
                        if (netif_running(netdev)) {
-                               adapter->pool_config = 1;
                                ibmveth_close(netdev);
                                pool->active = 0;
-                               adapter->pool_config = 0;
                                if ((rc = ibmveth_open(netdev)))
                                        return rc;
                        }
@@ -1883,9 +1871,7 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                        return -EINVAL;
                } else {
                        if (netif_running(netdev)) {
-                               adapter->pool_config = 1;
                                ibmveth_close(netdev);
-                               adapter->pool_config = 0;
                                pool->size = value;
                                if ((rc = ibmveth_open(netdev)))
                                        return rc;
@@ -1898,9 +1884,7 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                        return -EINVAL;
                } else {
                        if (netif_running(netdev)) {
-                               adapter->pool_config = 1;
                                ibmveth_close(netdev);
-                               adapter->pool_config = 0;
                                pool->buff_size = value;
                                if ((rc = ibmveth_open(netdev)))
                                        return rc;
index daf6f61..4f83571 100644 (file)
@@ -146,7 +146,6 @@ struct ibmveth_adapter {
     dma_addr_t filter_list_dma;
     struct ibmveth_buff_pool rx_buff_pool[IBMVETH_NUM_BUFF_POOLS];
     struct ibmveth_rx_q rx_queue;
-    int pool_config;
     int rx_csum;
     int large_send;
     bool is_active_trunk;
index 7e75706..87f36d1 100644 (file)
@@ -2183,9 +2183,6 @@ static int i40e_set_ringparam(struct net_device *netdev,
                        err = i40e_setup_rx_descriptors(&rx_rings[i]);
                        if (err)
                                goto rx_unwind;
-                       err = i40e_alloc_rx_bi(&rx_rings[i]);
-                       if (err)
-                               goto rx_unwind;
 
                        /* now allocate the Rx buffers to make sure the OS
                         * has enough memory, any failure here means abort
index 2c07fa8..b5dcd15 100644 (file)
@@ -3566,12 +3566,8 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
        if (ring->vsi->type == I40E_VSI_MAIN)
                xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
 
-       kfree(ring->rx_bi);
        ring->xsk_pool = i40e_xsk_pool(ring);
        if (ring->xsk_pool) {
-               ret = i40e_alloc_rx_bi_zc(ring);
-               if (ret)
-                       return ret;
                ring->rx_buf_len =
                  xsk_pool_get_rx_frame_size(ring->xsk_pool);
                /* For AF_XDP ZC, we disallow packets to span on
@@ -3589,9 +3585,6 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
                         ring->queue_index);
 
        } else {
-               ret = i40e_alloc_rx_bi(ring);
-               if (ret)
-                       return ret;
                ring->rx_buf_len = vsi->rx_buf_len;
                if (ring->vsi->type == I40E_VSI_MAIN) {
                        ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
@@ -13296,6 +13289,14 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog,
                i40e_reset_and_rebuild(pf, true, true);
        }
 
+       if (!i40e_enabled_xdp_vsi(vsi) && prog) {
+               if (i40e_realloc_rx_bi_zc(vsi, true))
+                       return -ENOMEM;
+       } else if (i40e_enabled_xdp_vsi(vsi) && !prog) {
+               if (i40e_realloc_rx_bi_zc(vsi, false))
+                       return -ENOMEM;
+       }
+
        for (i = 0; i < vsi->num_queue_pairs; i++)
                WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, vsi->xdp_prog);
 
@@ -13528,6 +13529,7 @@ int i40e_queue_pair_disable(struct i40e_vsi *vsi, int queue_pair)
 
        i40e_queue_pair_disable_irq(vsi, queue_pair);
        err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
+       i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
        i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
        i40e_queue_pair_clean_rings(vsi, queue_pair);
        i40e_queue_pair_reset_stats(vsi, queue_pair);
index 69e67eb..b97c95f 100644 (file)
@@ -1457,14 +1457,6 @@ err:
        return -ENOMEM;
 }
 
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring)
-{
-       unsigned long sz = sizeof(*rx_ring->rx_bi) * rx_ring->count;
-
-       rx_ring->rx_bi = kzalloc(sz, GFP_KERNEL);
-       return rx_ring->rx_bi ? 0 : -ENOMEM;
-}
-
 static void i40e_clear_rx_bi(struct i40e_ring *rx_ring)
 {
        memset(rx_ring->rx_bi, 0, sizeof(*rx_ring->rx_bi) * rx_ring->count);
@@ -1593,6 +1585,11 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
 
        rx_ring->xdp_prog = rx_ring->vsi->xdp_prog;
 
+       rx_ring->rx_bi =
+               kcalloc(rx_ring->count, sizeof(*rx_ring->rx_bi), GFP_KERNEL);
+       if (!rx_ring->rx_bi)
+               return -ENOMEM;
+
        return 0;
 }
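The change above moves the rx_bi allocation into i40e_setup_rx_descriptors() and switches from an open-coded `kzalloc(sizeof(elem) * count)` to `kcalloc(count, sizeof(elem))`, which fails cleanly when the product overflows instead of allocating a too-small buffer. A minimal userspace sketch of that overflow guard (`checked_calloc` is our illustrative name, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the kernel's kcalloc(): zeroed array allocation that
 * fails cleanly when n * size would overflow size_t, rather than
 * silently allocating a truncated buffer. */
static void *checked_calloc(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* multiplication would overflow */
	return calloc(n, size);	/* calloc() also zeroes the memory */
}
```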
 
index 41f86e9..768290d 100644 (file)
@@ -469,7 +469,6 @@ int __i40e_maybe_stop_tx(struct i40e_ring *tx_ring, int size);
 bool __i40e_chk_linearize(struct sk_buff *skb);
 int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
                  u32 flags);
-int i40e_alloc_rx_bi(struct i40e_ring *rx_ring);
 
 /**
  * i40e_get_head - Retrieve head from head writeback
index 6d4009e..cd7b52f 100644 (file)
 #include "i40e_txrx_common.h"
 #include "i40e_xsk.h"
 
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring)
-{
-       unsigned long sz = sizeof(*rx_ring->rx_bi_zc) * rx_ring->count;
-
-       rx_ring->rx_bi_zc = kzalloc(sz, GFP_KERNEL);
-       return rx_ring->rx_bi_zc ? 0 : -ENOMEM;
-}
-
 void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring)
 {
        memset(rx_ring->rx_bi_zc, 0,
@@ -30,6 +22,58 @@ static struct xdp_buff **i40e_rx_bi(struct i40e_ring *rx_ring, u32 idx)
 }
 
 /**
+ * i40e_realloc_rx_xdp_bi - reallocate SW ring for either XSK or normal buffer
+ * @rx_ring: Current rx ring
+ * @pool_present: is pool for XSK present
+ *
+ * Try to allocate memory and return -ENOMEM if the allocation fails.
+ * If the allocation succeeds, substitute the buffer with the newly
+ * allocated one.
+ * Returns 0 on success, negative on failure
+ */
+static int i40e_realloc_rx_xdp_bi(struct i40e_ring *rx_ring, bool pool_present)
+{
+       size_t elem_size = pool_present ? sizeof(*rx_ring->rx_bi_zc) :
+                                         sizeof(*rx_ring->rx_bi);
+       void *sw_ring = kcalloc(rx_ring->count, elem_size, GFP_KERNEL);
+
+       if (!sw_ring)
+               return -ENOMEM;
+
+       if (pool_present) {
+               kfree(rx_ring->rx_bi);
+               rx_ring->rx_bi = NULL;
+               rx_ring->rx_bi_zc = sw_ring;
+       } else {
+               kfree(rx_ring->rx_bi_zc);
+               rx_ring->rx_bi_zc = NULL;
+               rx_ring->rx_bi = sw_ring;
+       }
+       return 0;
+}
+
+/**
+ * i40e_realloc_rx_bi_zc - reallocate rx SW rings
+ * @vsi: Current VSI
+ * @zc: is zero copy set
+ *
+ * Reallocate the buffers for the rx_rings that might be used by XSK.
+ * XDP requires more memory than rx_buf provides.
+ * Returns 0 on success, negative on failure
+ */
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc)
+{
+       struct i40e_ring *rx_ring;
+       unsigned long q;
+
+       for_each_set_bit(q, vsi->af_xdp_zc_qps, vsi->alloc_queue_pairs) {
+               rx_ring = vsi->rx_rings[q];
+               if (i40e_realloc_rx_xdp_bi(rx_ring, zc))
+                       return -ENOMEM;
+       }
+       return 0;
+}
+
+/**
  * i40e_xsk_pool_enable - Enable/associate an AF_XDP buffer pool to a
  * certain ring/qid
  * @vsi: Current VSI
@@ -69,6 +113,10 @@ static int i40e_xsk_pool_enable(struct i40e_vsi *vsi,
                if (err)
                        return err;
 
+               err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], true);
+               if (err)
+                       return err;
+
                err = i40e_queue_pair_enable(vsi, qid);
                if (err)
                        return err;
@@ -113,6 +161,9 @@ static int i40e_xsk_pool_disable(struct i40e_vsi *vsi, u16 qid)
        xsk_pool_dma_unmap(pool, I40E_RX_DMA_ATTR);
 
        if (if_running) {
+               err = i40e_realloc_rx_xdp_bi(vsi->rx_rings[qid], false);
+               if (err)
+                       return err;
                err = i40e_queue_pair_enable(vsi, qid);
                if (err)
                        return err;
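The patches above collapse the separate rx_bi/rx_bi_zc allocators into i40e_realloc_rx_xdp_bi(), which allocates the replacement software ring before freeing the old one, so a failed allocation leaves the previous state fully intact. A hedged userspace sketch of that swap pattern (struct and function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the i40e_realloc_rx_xdp_bi() pattern: allocate the
 * replacement ring first; only on success free the old ring and swap
 * pointers. On failure the previous software ring is untouched. */
struct sw_rings {
	void *rx_bi;	/* normal-buffer ring */
	void *rx_bi_zc;	/* AF_XDP zero-copy ring */
};

static int realloc_sw_ring(struct sw_rings *r, size_t count,
			   size_t elem_normal, size_t elem_zc,
			   int pool_present)
{
	size_t elem = pool_present ? elem_zc : elem_normal;
	void *ring = calloc(count, elem);

	if (!ring)
		return -1;	/* old state untouched on failure */

	if (pool_present) {
		free(r->rx_bi);
		r->rx_bi = NULL;
		r->rx_bi_zc = ring;
	} else {
		free(r->rx_bi_zc);
		r->rx_bi_zc = NULL;
		r->rx_bi = ring;
	}
	return 0;
}
```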
index bb96298..821df24 100644 (file)
@@ -32,7 +32,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget);
 
 bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring);
 int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
-int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring);
+int i40e_realloc_rx_bi_zc(struct i40e_vsi *vsi, bool zc);
 void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring);
 
 #endif /* _I40E_XSK_H_ */
index 001500a..d75c8d1 100644 (file)
  */
 #define ICE_BW_KBPS_DIVISOR            125
 
+/* Default recipes have priority 4 and below, hence priority values between 5..7
+ * can be used as filter priority for advanced switch filter (advanced switch
+ * filters need new recipe to be created for specified extraction sequence
+ * because default recipe extraction sequence does not represent custom
+ * extraction)
+ */
+#define ICE_SWITCH_FLTR_PRIO_QUEUE     7
+/* prio 6 is reserved for future use (e.g. switch filter with L3 fields +
+ * (Optional: IP TOS/TTL) + L4 fields + (optionally: TCP fields such as
+ * SYN/FIN/RST))
+ */
+#define ICE_SWITCH_FLTR_PRIO_RSVD      6
+#define ICE_SWITCH_FLTR_PRIO_VSI       5
+#define ICE_SWITCH_FLTR_PRIO_QGRP      ICE_SWITCH_FLTR_PRIO_VSI
+
 /* Macro for each VSI in a PF */
 #define ice_for_each_vsi(pf, i) \
        for ((i) = 0; (i) < (pf)->num_alloc_vsi; (i)++)
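The new ICE_SWITCH_FLTR_PRIO_* macros carve the 5..7 range above the default recipes (priority 4 and below) into per-action priorities: forward-to-queue gets the highest, forward-to-VSI and queue-group share the lowest, and 6 is held in reserve. A small illustrative sketch of how a forward action might map onto these values (the enum and helper are ours, not driver code):

```c
#include <assert.h>

/* Illustrative mapping from a forward action to a switch-filter
 * priority in the 5..7 band above the default recipes (prio <= 4). */
enum fwd_action { FWD_TO_VSI, FWD_TO_QGRP, FWD_TO_QUEUE };

#define SWITCH_FLTR_PRIO_QUEUE	7
#define SWITCH_FLTR_PRIO_RSVD	6	/* reserved for future use */
#define SWITCH_FLTR_PRIO_VSI	5
#define SWITCH_FLTR_PRIO_QGRP	SWITCH_FLTR_PRIO_VSI

static int fltr_priority(enum fwd_action act)
{
	switch (act) {
	case FWD_TO_QUEUE:
		return SWITCH_FLTR_PRIO_QUEUE;
	case FWD_TO_QGRP:
		return SWITCH_FLTR_PRIO_QGRP;
	default:
		return SWITCH_FLTR_PRIO_VSI;
	}
}
```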
index 0f67187..df65e82 100644 (file)
@@ -8283,7 +8283,7 @@ static void ice_rem_all_chnl_fltrs(struct ice_pf *pf)
 
                rule.rid = fltr->rid;
                rule.rule_id = fltr->rule_id;
-               rule.vsi_handle = fltr->dest_id;
+               rule.vsi_handle = fltr->dest_vsi_handle;
                status = ice_rem_adv_rule_by_id(&pf->hw, &rule);
                if (status) {
                        if (status == -ENOENT)
index f68c555..faba0f8 100644 (file)
@@ -724,7 +724,7 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
         */
        fltr->rid = rule_added.rid;
        fltr->rule_id = rule_added.rule_id;
-       fltr->dest_id = rule_added.vsi_handle;
+       fltr->dest_vsi_handle = rule_added.vsi_handle;
 
 exit:
        kfree(list);
@@ -732,6 +732,116 @@ exit:
 }
 
 /**
+ * ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action)
+ * @vsi: Pointer to VSI
+ * @tc_fltr: Pointer to tc_flower_filter
+ *
+ * Locate the VSI using specified queue. When ADQ is not enabled, always
+ * return input VSI, otherwise locate corresponding VSI based on per channel
+ * offset and qcount
+ */
+static struct ice_vsi *
+ice_locate_vsi_using_queue(struct ice_vsi *vsi,
+                          struct ice_tc_flower_fltr *tc_fltr)
+{
+       int num_tc, tc, queue;
+
+       /* if ADQ is not active, passed VSI is the candidate VSI */
+       if (!ice_is_adq_active(vsi->back))
+               return vsi;
+
+       /* Locate the VSI (it could still be main PF VSI or CHNL_VSI depending
+        * upon queue number)
+        */
+       num_tc = vsi->mqprio_qopt.qopt.num_tc;
+       queue = tc_fltr->action.fwd.q.queue;
+
+       for (tc = 0; tc < num_tc; tc++) {
+               int qcount = vsi->mqprio_qopt.qopt.count[tc];
+               int offset = vsi->mqprio_qopt.qopt.offset[tc];
+
+               if (queue >= offset && queue < offset + qcount) {
+                       /* for non-ADQ TCs, passed VSI is the candidate VSI */
+                       if (tc < ICE_CHNL_START_TC)
+                               return vsi;
+                       else
+                               return vsi->tc_map_vsi[tc];
+               }
+       }
+       return NULL;
+}
+
+static struct ice_rx_ring *
+ice_locate_rx_ring_using_queue(struct ice_vsi *vsi,
+                              struct ice_tc_flower_fltr *tc_fltr)
+{
+       u16 queue = tc_fltr->action.fwd.q.queue;
+
+       return queue < vsi->num_rxq ? vsi->rx_rings[queue] : NULL;
+}
+
+/**
+ * ice_tc_forward_action - Determine destination VSI and queue for the action
+ * @vsi: Pointer to VSI
+ * @tc_fltr: Pointer to TC flower filter structure
+ *
+ * Validates the tc forward action and determines the destination VSI and queue
+ * for the forward action.
+ */
+static struct ice_vsi *
+ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
+{
+       struct ice_rx_ring *ring = NULL;
+       struct ice_vsi *ch_vsi = NULL;
+       struct ice_pf *pf = vsi->back;
+       struct device *dev;
+       u32 tc_class;
+
+       dev = ice_pf_to_dev(pf);
+
+       /* Get the destination VSI and/or destination queue and validate them */
+       switch (tc_fltr->action.fltr_act) {
+       case ICE_FWD_TO_VSI:
+               tc_class = tc_fltr->action.fwd.tc.tc_class;
+               /* Select the destination VSI */
+               if (tc_class < ICE_CHNL_START_TC) {
+                       NL_SET_ERR_MSG_MOD(tc_fltr->extack,
+                                          "Unable to add filter because of unsupported destination");
+                       return ERR_PTR(-EOPNOTSUPP);
+               }
+               /* Locate ADQ VSI depending on hw_tc number */
+               ch_vsi = vsi->tc_map_vsi[tc_class];
+               break;
+       case ICE_FWD_TO_Q:
+               /* Locate the Rx queue */
+               ring = ice_locate_rx_ring_using_queue(vsi, tc_fltr);
+               if (!ring) {
+                       dev_err(dev,
+                               "Unable to locate Rx queue for action fwd_to_queue: %u\n",
+                               tc_fltr->action.fwd.q.queue);
+                       return ERR_PTR(-EINVAL);
+               }
+               /* Determine destination VSI even though the action is
+                * FWD_TO_QUEUE, because QUEUE is associated with VSI
+                */
+               ch_vsi = tc_fltr->dest_vsi;
+               break;
+       default:
+               dev_err(dev,
+                       "Unable to add filter because of unsupported action %u (supported actions: fwd to tc, fwd to queue)\n",
+                       tc_fltr->action.fltr_act);
+               return ERR_PTR(-EINVAL);
+       }
+       /* Must have valid ch_vsi (it could be main VSI or ADQ VSI) */
+       if (!ch_vsi) {
+               dev_err(dev,
+                       "Unable to add filter because specified destination VSI doesn't exist\n");
+               return ERR_PTR(-EINVAL);
+       }
+       return ch_vsi;
+}
+
+/**
  * ice_add_tc_flower_adv_fltr - add appropriate filter rules
  * @vsi: Pointer to VSI
  * @tc_fltr: Pointer to TC flower filter structure
@@ -772,11 +882,10 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
                return -EOPNOTSUPP;
        }
 
-       /* get the channel (aka ADQ VSI) */
-       if (tc_fltr->dest_vsi)
-               ch_vsi = tc_fltr->dest_vsi;
-       else
-               ch_vsi = vsi->tc_map_vsi[tc_fltr->action.tc_class];
+       /* validate forwarding action VSI and queue */
+       ch_vsi = ice_tc_forward_action(vsi, tc_fltr);
+       if (IS_ERR(ch_vsi))
+               return PTR_ERR(ch_vsi);
 
        lkups_cnt = ice_tc_count_lkups(flags, headers, tc_fltr);
        list = kcalloc(lkups_cnt, sizeof(*list), GFP_ATOMIC);
@@ -790,30 +899,40 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
        }
 
        rule_info.sw_act.fltr_act = tc_fltr->action.fltr_act;
-       if (tc_fltr->action.tc_class >= ICE_CHNL_START_TC) {
-               if (!ch_vsi) {
-                       NL_SET_ERR_MSG_MOD(tc_fltr->extack, "Unable to add filter because specified destination doesn't exist");
-                       ret = -EINVAL;
-                       goto exit;
-               }
+       /* specify the cookie as filter_rule_id */
+       rule_info.fltr_rule_id = tc_fltr->cookie;
 
-               rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
+       switch (tc_fltr->action.fltr_act) {
+       case ICE_FWD_TO_VSI:
                rule_info.sw_act.vsi_handle = ch_vsi->idx;
-               rule_info.priority = 7;
+               rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI;
                rule_info.sw_act.src = hw->pf_id;
                rule_info.rx = true;
                dev_dbg(dev, "add switch rule for TC:%u vsi_idx:%u, lkups_cnt:%u\n",
-                       tc_fltr->action.tc_class,
+                       tc_fltr->action.fwd.tc.tc_class,
                        rule_info.sw_act.vsi_handle, lkups_cnt);
-       } else {
+               break;
+       case ICE_FWD_TO_Q:
+               /* HW queue number in global space */
+               rule_info.sw_act.fwd_id.q_id = tc_fltr->action.fwd.q.hw_queue;
+               rule_info.sw_act.vsi_handle = ch_vsi->idx;
+               rule_info.priority = ICE_SWITCH_FLTR_PRIO_QUEUE;
+               rule_info.sw_act.src = hw->pf_id;
+               rule_info.rx = true;
+               dev_dbg(dev, "add switch rule action to forward to queue:%u (HW queue %u), lkups_cnt:%u\n",
+                       tc_fltr->action.fwd.q.queue,
+                       tc_fltr->action.fwd.q.hw_queue, lkups_cnt);
+               break;
+       default:
                rule_info.sw_act.flag |= ICE_FLTR_TX;
+               /* In case of Tx (LOOKUP_TX), src needs to be src VSI */
                rule_info.sw_act.src = vsi->idx;
+               /* 'Rx' is false, direction of rule (LOOKUP_TX) */
                rule_info.rx = false;
+               rule_info.priority = ICE_SWITCH_FLTR_PRIO_VSI;
+               break;
        }
 
-       /* specify the cookie as filter_rule_id */
-       rule_info.fltr_rule_id = tc_fltr->cookie;
-
        ret = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, &rule_added);
        if (ret == -EEXIST) {
                NL_SET_ERR_MSG_MOD(tc_fltr->extack,
@@ -831,19 +950,14 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
         */
        tc_fltr->rid = rule_added.rid;
        tc_fltr->rule_id = rule_added.rule_id;
-       if (tc_fltr->action.tc_class > 0 && ch_vsi) {
-               /* For PF ADQ, VSI type is set as ICE_VSI_CHNL, and
-                * for PF ADQ filter, it is not yet set in tc_fltr,
-                * hence store the dest_vsi ptr in tc_fltr
-                */
-               if (ch_vsi->type == ICE_VSI_CHNL)
-                       tc_fltr->dest_vsi = ch_vsi;
+       tc_fltr->dest_vsi_handle = rule_added.vsi_handle;
+       if (tc_fltr->action.fltr_act == ICE_FWD_TO_VSI ||
+           tc_fltr->action.fltr_act == ICE_FWD_TO_Q) {
+               tc_fltr->dest_vsi = ch_vsi;
                /* keep track of advanced switch filter for
-                * destination VSI (channel VSI)
+                * destination VSI
                 */
                ch_vsi->num_chnl_fltr++;
-               /* in this case, dest_id is VSI handle (sw handle) */
-               tc_fltr->dest_id = rule_added.vsi_handle;
 
                /* keeps track of channel filters for PF VSI */
                if (vsi->type == ICE_VSI_PF &&
@@ -851,10 +965,22 @@ ice_add_tc_flower_adv_fltr(struct ice_vsi *vsi,
                              ICE_TC_FLWR_FIELD_ENC_DST_MAC)))
                        pf->num_dmac_chnl_fltrs++;
        }
-       dev_dbg(dev, "added switch rule (lkups_cnt %u, flags 0x%x) for TC %u, rid %u, rule_id %u, vsi_idx %u\n",
-               lkups_cnt, flags,
-               tc_fltr->action.tc_class, rule_added.rid,
-               rule_added.rule_id, rule_added.vsi_handle);
+       switch (tc_fltr->action.fltr_act) {
+       case ICE_FWD_TO_VSI:
+               dev_dbg(dev, "added switch rule (lkups_cnt %u, flags 0x%x), action is forward to TC %u, rid %u, rule_id %u, vsi_idx %u\n",
+                       lkups_cnt, flags,
+                       tc_fltr->action.fwd.tc.tc_class, rule_added.rid,
+                       rule_added.rule_id, rule_added.vsi_handle);
+               break;
+       case ICE_FWD_TO_Q:
+               dev_dbg(dev, "added switch rule (lkups_cnt %u, flags 0x%x), action is forward to queue: %u (HW queue %u), rid %u, rule_id %u\n",
+                       lkups_cnt, flags, tc_fltr->action.fwd.q.queue,
+                       tc_fltr->action.fwd.q.hw_queue, rule_added.rid,
+                       rule_added.rule_id);
+               break;
+       default:
+               break;
+       }
 exit:
        kfree(list);
        return ret;
@@ -1455,43 +1581,15 @@ ice_add_switch_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 }
 
 /**
- * ice_handle_tclass_action - Support directing to a traffic class
+ * ice_prep_adq_filter - Prepare ADQ filter with the required additional headers
  * @vsi: Pointer to VSI
- * @cls_flower: Pointer to TC flower offload structure
  * @fltr: Pointer to TC flower filter structure
  *
- * Support directing traffic to a traffic class
+ * Prepare ADQ filter with the required additional header fields
  */
 static int
-ice_handle_tclass_action(struct ice_vsi *vsi,
-                        struct flow_cls_offload *cls_flower,
-                        struct ice_tc_flower_fltr *fltr)
+ice_prep_adq_filter(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 {
-       int tc = tc_classid_to_hwtc(vsi->netdev, cls_flower->classid);
-       struct ice_vsi *main_vsi;
-
-       if (tc < 0) {
-               NL_SET_ERR_MSG_MOD(fltr->extack, "Unable to add filter because specified destination is invalid");
-               return -EINVAL;
-       }
-       if (!tc) {
-               NL_SET_ERR_MSG_MOD(fltr->extack, "Unable to add filter because of invalid destination");
-               return -EINVAL;
-       }
-
-       if (!(vsi->all_enatc & BIT(tc))) {
-               NL_SET_ERR_MSG_MOD(fltr->extack, "Unable to add filter because of non-existence destination");
-               return -EINVAL;
-       }
-
-       /* Redirect to a TC class or Queue Group */
-       main_vsi = ice_get_main_vsi(vsi->back);
-       if (!main_vsi || !main_vsi->netdev) {
-               NL_SET_ERR_MSG_MOD(fltr->extack,
-                                  "Unable to add filter because of invalid netdevice");
-               return -EINVAL;
-       }
-
        if ((fltr->flags & ICE_TC_FLWR_FIELD_TENANT_ID) &&
            (fltr->flags & (ICE_TC_FLWR_FIELD_DST_MAC |
                           ICE_TC_FLWR_FIELD_SRC_MAC))) {
@@ -1503,9 +1601,8 @@ ice_handle_tclass_action(struct ice_vsi *vsi,
        /* For ADQ, filter must include dest MAC address, otherwise unwanted
         * packets with unrelated MAC address get delivered to ADQ VSIs as long
         * as remaining filter criteria is satisfied such as dest IP address
-        * and dest/src L4 port. Following code is trying to handle:
-        * 1. For non-tunnel, if user specify MAC addresses, use them (means
-        * this code won't do anything
+        * and dest/src L4 port. The code below handles the following cases:
+        * 1. For non-tunnel, if the user specifies MAC addresses, use them.
         * 2. For non-tunnel, if user didn't specify MAC address, add implicit
         * dest MAC to be lower netdev's active unicast MAC address
         * 3. For tunnel,  as of now TC-filter through flower classifier doesn't
@@ -1528,35 +1625,97 @@ ice_handle_tclass_action(struct ice_vsi *vsi,
                eth_broadcast_addr(fltr->outer_headers.l2_mask.dst_mac);
        }
 
-       /* validate specified dest MAC address, make sure either it belongs to
-        * lower netdev or any of MACVLAN. MACVLANs MAC address are added as
-        * unicast MAC filter destined to main VSI.
-        */
-       if (!ice_mac_fltr_exist(&main_vsi->back->hw,
-                               fltr->outer_headers.l2_key.dst_mac,
-                               main_vsi->idx)) {
-               NL_SET_ERR_MSG_MOD(fltr->extack,
-                                  "Unable to add filter because legacy MAC filter for specified destination doesn't exist");
-               return -EINVAL;
-       }
-
        /* Make sure VLAN is already added to main VSI, before allowing ADQ to
         * add a VLAN based filter such as MAC + VLAN + L4 port.
         */
        if (fltr->flags & ICE_TC_FLWR_FIELD_VLAN) {
                u16 vlan_id = be16_to_cpu(fltr->outer_headers.vlan_hdr.vlan_id);
 
-               if (!ice_vlan_fltr_exist(&main_vsi->back->hw, vlan_id,
-                                        main_vsi->idx)) {
+               if (!ice_vlan_fltr_exist(&vsi->back->hw, vlan_id, vsi->idx)) {
                        NL_SET_ERR_MSG_MOD(fltr->extack,
                                           "Unable to add filter because legacy VLAN filter for specified destination doesn't exist");
                        return -EINVAL;
                }
        }
+       return 0;
+}
+
+/**
+ * ice_handle_tclass_action - Support directing to a traffic class
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to TC flower offload structure
+ * @fltr: Pointer to TC flower filter structure
+ *
+ * Support directing traffic to a traffic class/queue-set
+ */
+static int
+ice_handle_tclass_action(struct ice_vsi *vsi,
+                        struct flow_cls_offload *cls_flower,
+                        struct ice_tc_flower_fltr *fltr)
+{
+       int tc = tc_classid_to_hwtc(vsi->netdev, cls_flower->classid);
+
+       /* user specified hw_tc (must be non-zero for ADQ TC), action is forward
+        * to hw_tc (i.e. ADQ channel number)
+        */
+       if (tc < ICE_CHNL_START_TC) {
+               NL_SET_ERR_MSG_MOD(fltr->extack,
+                                  "Unable to add filter because of unsupported destination");
+               return -EOPNOTSUPP;
+       }
+       if (!(vsi->all_enatc & BIT(tc))) {
+               NL_SET_ERR_MSG_MOD(fltr->extack,
+                                  "Unable to add filter because of non-existent destination");
+               return -EINVAL;
+       }
        fltr->action.fltr_act = ICE_FWD_TO_VSI;
-       fltr->action.tc_class = tc;
+       fltr->action.fwd.tc.tc_class = tc;
 
-       return 0;
+       return ice_prep_adq_filter(vsi, fltr);
+}
+
+static int
+ice_tc_forward_to_queue(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
+                       struct flow_action_entry *act)
+{
+       struct ice_vsi *ch_vsi = NULL;
+       u16 queue = act->rx_queue;
+
+       if (queue >= vsi->num_rxq) {
+               NL_SET_ERR_MSG_MOD(fltr->extack,
+                                  "Unable to add filter because specified queue is invalid");
+               return -EINVAL;
+       }
+       fltr->action.fltr_act = ICE_FWD_TO_Q;
+       fltr->action.fwd.q.queue = queue;
+       /* determine corresponding HW queue */
+       fltr->action.fwd.q.hw_queue = vsi->rxq_map[queue];
+
+       /* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare
+        * ADQ switch filter
+        */
+       ch_vsi = ice_locate_vsi_using_queue(vsi, fltr);
+       if (!ch_vsi)
+               return -EINVAL;
+       fltr->dest_vsi = ch_vsi;
+       if (!ice_is_chnl_fltr(fltr))
+               return 0;
+
+       return ice_prep_adq_filter(vsi, fltr);
+}
+
+static int
+ice_tc_parse_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
+                   struct flow_action_entry *act)
+{
+       switch (act->id) {
+       case FLOW_ACTION_RX_QUEUE_MAPPING:
+               /* forward to queue */
+               return ice_tc_forward_to_queue(vsi, fltr, act);
+       default:
+               NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported TC action");
+               return -EOPNOTSUPP;
+       }
 }
 
 /**
@@ -1575,7 +1734,7 @@ ice_parse_tc_flower_actions(struct ice_vsi *vsi,
        struct flow_rule *rule = flow_cls_offload_flow_rule(cls_flower);
        struct flow_action *flow_action = &rule->action;
        struct flow_action_entry *act;
-       int i;
+       int i, err;
 
        if (cls_flower->classid)
                return ice_handle_tclass_action(vsi, cls_flower, fltr);
@@ -1584,21 +1743,13 @@ ice_parse_tc_flower_actions(struct ice_vsi *vsi,
                return -EINVAL;
 
        flow_action_for_each(i, act, flow_action) {
-               if (ice_is_eswitch_mode_switchdev(vsi->back)) {
-                       int err = ice_eswitch_tc_parse_action(fltr, act);
-
-                       if (err)
-                               return err;
-                       continue;
-               }
-               /* Allow only one rule per filter */
-
-               /* Drop action */
-               if (act->id == FLOW_ACTION_DROP) {
-                       NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported action DROP");
-                       return -EINVAL;
-               }
-               fltr->action.fltr_act = ICE_FWD_TO_VSI;
+               if (ice_is_eswitch_mode_switchdev(vsi->back))
+                       err = ice_eswitch_tc_parse_action(fltr, act);
+               else
+                       err = ice_tc_parse_action(vsi, fltr, act);
+               if (err)
+                       return err;
+               continue;
        }
        return 0;
 }
@@ -1618,7 +1769,7 @@ static int ice_del_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 
        rule_rem.rid = fltr->rid;
        rule_rem.rule_id = fltr->rule_id;
-       rule_rem.vsi_handle = fltr->dest_id;
+       rule_rem.vsi_handle = fltr->dest_vsi_handle;
        err = ice_rem_adv_rule_by_id(&pf->hw, &rule_rem);
        if (err) {
                if (err == -ENOENT) {
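ice_locate_vsi_using_queue() above resolves a queue number to its owning traffic class by scanning the per-TC (offset, count) ranges from the mqprio configuration. A standalone sketch of that range lookup (illustrative data, not driver state):

```c
#include <assert.h>

/* Given per-TC (offset, count) pairs from an mqprio-style config,
 * return the traffic class that owns a queue index, or -1 when the
 * queue falls outside every range. */
static int tc_for_queue(const int *offset, const int *count,
			int num_tc, int queue)
{
	int tc;

	for (tc = 0; tc < num_tc; tc++)
		if (queue >= offset[tc] && queue < offset[tc] + count[tc])
			return tc;
	return -1;
}
```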
index 92642fa..d916d1e 100644 (file)
@@ -45,7 +45,20 @@ struct ice_indr_block_priv {
 };
 
 struct ice_tc_flower_action {
-       u32 tc_class;
+       /* forward action specific params */
+       union {
+               struct {
+                       u32 tc_class; /* forward to hw_tc */
+                       u32 rsvd;
+               } tc;
+               struct {
+                       u16 queue; /* forward to queue */
+                       /* To add filter in HW, absolute queue number in global
+                        * space of queues (between 0...N) is needed
+                        */
+                       u16 hw_queue;
+               } q;
+       } fwd;
        enum ice_sw_fwd_act_type fltr_act;
 };
 
@@ -131,11 +144,11 @@ struct ice_tc_flower_fltr {
         */
        u16 rid;
        u16 rule_id;
-       /* this could be queue/vsi_idx (sw handle)/queue_group, depending upon
-        * destination type
+       /* VSI handle of the destination VSI (it could be main PF VSI, CHNL_VSI,
+        * VF VSI)
         */
-       u16 dest_id;
-       /* if dest_id is vsi_idx, then need to store destination VSI ptr */
+       u16 dest_vsi_handle;
+       /* ptr to destination VSI */
        struct ice_vsi *dest_vsi;
        /* direction of fltr for eswitch use case */
        enum ice_eswitch_fltr_direction direction;
@@ -162,12 +175,23 @@ struct ice_tc_flower_fltr {
  * @f: Pointer to tc-flower filter
  *
  * Criteria to determine if a given filter is a valid channel filter
- * or not is based on its "destination". If destination is hw_tc (aka tc_class)
- * and it is non-zero, then it is valid channel (aka ADQ) filter
+ * or not is based on its destination.
+ * For forward to VSI action, if destination is valid hw_tc (aka tc_class)
+ * and in supported range of TCs for ADQ, then return true.
+ * For forward to queue, as long as dest_vsi is valid and it is of type
+ * VSI_CHNL (PF ADQ VSI is of type VSI_CHNL), return true.
+ * NOTE: For forward to queue, correct dest_vsi is still set in tc_fltr based
+ * on destination queue specified.
  */
 static inline bool ice_is_chnl_fltr(struct ice_tc_flower_fltr *f)
 {
-       return !!f->action.tc_class;
+       if (f->action.fltr_act == ICE_FWD_TO_VSI)
+               return f->action.fwd.tc.tc_class >= ICE_CHNL_START_TC &&
+                      f->action.fwd.tc.tc_class < ICE_CHNL_MAX_TC;
+       else if (f->action.fltr_act == ICE_FWD_TO_Q)
+               return f->dest_vsi && f->dest_vsi->type == ICE_VSI_CHNL;
+
+       return false;
 }
 
 /**
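The reworked ice_is_chnl_fltr() above treats a filter as a channel (ADQ) filter either when it forwards to a TC inside the supported ADQ range, or when it forwards to a queue whose resolved destination VSI is a channel VSI. A userspace mirror of that predicate (types and constants here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mirror of the reworked ice_is_chnl_fltr() logic. */
#define CHNL_START_TC	1
#define CHNL_MAX_TC	16

enum act { ACT_FWD_TO_VSI, ACT_FWD_TO_Q, ACT_OTHER };
enum vsi_type { VSI_PF, VSI_CHNL };

struct fltr {
	enum act fltr_act;
	int tc_class;			/* for ACT_FWD_TO_VSI */
	const enum vsi_type *dest_vsi;	/* for ACT_FWD_TO_Q, may be NULL */
};

static bool is_chnl_fltr(const struct fltr *f)
{
	if (f->fltr_act == ACT_FWD_TO_VSI)
		return f->tc_class >= CHNL_START_TC &&
		       f->tc_class < CHNL_MAX_TC;
	if (f->fltr_act == ACT_FWD_TO_Q)
		return f->dest_vsi && *f->dest_vsi == VSI_CHNL;
	return false;
}
```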
index 59aab40..f5961bd 100644 (file)
@@ -485,7 +485,6 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
        len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
 
        if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
-               dev_kfree_skb_any(skb);
                netdev_err(dev, "tx ring full\n");
                netif_tx_stop_queue(txq);
                return NETDEV_TX_BUSY;
index 9809f55..9ec5f38 100644 (file)
@@ -815,6 +815,7 @@ free_flowid:
        cn10k_mcs_free_rsrc(pfvf, MCS_TX, MCS_RSRC_TYPE_FLOWID,
                            txsc->hw_flow_id, false);
 fail:
+       kfree(txsc);
        return ERR_PTR(ret);
 }
 
@@ -870,6 +871,7 @@ free_flowid:
        cn10k_mcs_free_rsrc(pfvf, MCS_RX, MCS_RSRC_TYPE_FLOWID,
                            rxsc->hw_flow_id, false);
 fail:
+       kfree(rxsc);
        return ERR_PTR(ret);
 }
 
index 4fba7cb..7cd3815 100644 (file)
@@ -4060,19 +4060,23 @@ static int mtk_probe(struct platform_device *pdev)
                        eth->irq[i] = platform_get_irq(pdev, i);
                if (eth->irq[i] < 0) {
                        dev_err(&pdev->dev, "no IRQ%d resource found\n", i);
-                       return -ENXIO;
+                       err = -ENXIO;
+                       goto err_wed_exit;
                }
        }
        for (i = 0; i < ARRAY_SIZE(eth->clks); i++) {
                eth->clks[i] = devm_clk_get(eth->dev,
                                            mtk_clks_source_name[i]);
                if (IS_ERR(eth->clks[i])) {
-                       if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER)
-                               return -EPROBE_DEFER;
+                       if (PTR_ERR(eth->clks[i]) == -EPROBE_DEFER) {
+                               err = -EPROBE_DEFER;
+                               goto err_wed_exit;
+                       }
                        if (eth->soc->required_clks & BIT(i)) {
                                dev_err(&pdev->dev, "clock %s not found\n",
                                        mtk_clks_source_name[i]);
-                               return -EINVAL;
+                               err = -EINVAL;
+                               goto err_wed_exit;
                        }
                        eth->clks[i] = NULL;
                }
@@ -4083,7 +4087,7 @@ static int mtk_probe(struct platform_device *pdev)
 
        err = mtk_hw_init(eth);
        if (err)
-               return err;
+               goto err_wed_exit;
 
        eth->hwlro = MTK_HAS_CAPS(eth->soc->caps, MTK_HWLRO);
 
@@ -4179,6 +4183,8 @@ err_free_dev:
        mtk_free_dev(eth);
 err_deinit_hw:
        mtk_hw_deinit(eth);
+err_wed_exit:
+       mtk_wed_exit();
 
        return err;
 }
@@ -4198,6 +4204,7 @@ static int mtk_remove(struct platform_device *pdev)
                phylink_disconnect_phy(mac->phylink);
        }
 
+       mtk_wed_exit();
        mtk_hw_deinit(eth);
 
        netif_napi_del(&eth->tx_napi);
index ae00e57..2d8ca99 100644
@@ -397,12 +397,6 @@ int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry,
        return 0;
 }
 
-static inline bool mtk_foe_entry_usable(struct mtk_foe_entry *entry)
-{
-       return !(entry->ib1 & MTK_FOE_IB1_STATIC) &&
-              FIELD_GET(MTK_FOE_IB1_STATE, entry->ib1) != MTK_FOE_STATE_BIND;
-}
-
 static bool
 mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry,
                     struct mtk_foe_entry *data)
index 099b6e0..65e01bf 100644
@@ -1072,16 +1072,16 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
 
        pdev = of_find_device_by_node(np);
        if (!pdev)
-               return;
+               goto err_of_node_put;
 
        get_device(&pdev->dev);
        irq = platform_get_irq(pdev, 0);
        if (irq < 0)
-               return;
+               goto err_put_device;
 
        regs = syscon_regmap_lookup_by_phandle(np, NULL);
        if (IS_ERR(regs))
-               return;
+               goto err_put_device;
 
        rcu_assign_pointer(mtk_soc_wed_ops, &wed_ops);
 
@@ -1124,8 +1124,16 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
 
        hw_list[index] = hw;
 
+       mutex_unlock(&hw_lock);
+
+       return;
+
 unlock:
        mutex_unlock(&hw_lock);
+err_put_device:
+       put_device(&pdev->dev);
+err_of_node_put:
+       of_node_put(np);
 }
 
 void mtk_wed_exit(void)
@@ -1146,6 +1154,7 @@ void mtk_wed_exit(void)
                hw_list[i] = NULL;
                debugfs_remove(hw->debugfs_dir);
                put_device(hw->dev);
+               of_node_put(hw->node);
                kfree(hw);
        }
 }
index 4197006..4331235 100644
@@ -1846,25 +1846,16 @@ err_hash:
 void mlx5e_macsec_cleanup(struct mlx5e_priv *priv)
 {
        struct mlx5e_macsec *macsec = priv->macsec;
-       struct mlx5_core_dev *mdev = macsec->mdev;
+       struct mlx5_core_dev *mdev = priv->mdev;
 
        if (!macsec)
                return;
 
        mlx5_notifier_unregister(mdev, &macsec->nb);
-
        mlx5e_macsec_fs_cleanup(macsec->macsec_fs);
-
-       /* Cleanup workqueue */
        destroy_workqueue(macsec->wq);
-
        mlx5e_macsec_aso_cleanup(&macsec->aso, mdev);
-
-       priv->macsec = NULL;
-
        rhashtable_destroy(&macsec->sci_hash);
-
        mutex_destroy(&macsec->lock);
-
        kfree(macsec);
 }
index 0777bed..b74f30e 100644
@@ -4620,6 +4620,7 @@ MLXSW_ITEM32(reg, ptys, an_status, 0x04, 28, 4);
 #define MLXSW_REG_PTYS_EXT_ETH_SPEED_100GAUI_2_100GBASE_CR2_KR2                BIT(10)
 #define MLXSW_REG_PTYS_EXT_ETH_SPEED_200GAUI_4_200GBASE_CR4_KR4                BIT(12)
 #define MLXSW_REG_PTYS_EXT_ETH_SPEED_400GAUI_8                         BIT(15)
+#define MLXSW_REG_PTYS_EXT_ETH_SPEED_800GAUI_8                         BIT(19)
 
 /* reg_ptys_ext_eth_proto_cap
  * Extended Ethernet port supported speeds and protocols.
index dcd79d7..472830d 100644
@@ -1672,6 +1672,19 @@ mlxsw_sp2_mask_ethtool_400gaui_8[] = {
 #define MLXSW_SP2_MASK_ETHTOOL_400GAUI_8_LEN \
        ARRAY_SIZE(mlxsw_sp2_mask_ethtool_400gaui_8)
 
+static const enum ethtool_link_mode_bit_indices
+mlxsw_sp2_mask_ethtool_800gaui_8[] = {
+       ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT,
+       ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT,
+       ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT,
+       ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT,
+       ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT,
+       ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT,
+};
+
+#define MLXSW_SP2_MASK_ETHTOOL_800GAUI_8_LEN \
+       ARRAY_SIZE(mlxsw_sp2_mask_ethtool_800gaui_8)
+
 #define MLXSW_SP_PORT_MASK_WIDTH_1X    BIT(0)
 #define MLXSW_SP_PORT_MASK_WIDTH_2X    BIT(1)
 #define MLXSW_SP_PORT_MASK_WIDTH_4X    BIT(2)
@@ -1820,6 +1833,14 @@ static const struct mlxsw_sp2_port_link_mode mlxsw_sp2_port_link_mode[] = {
                .speed          = SPEED_400000,
                .width          = 8,
        },
+       {
+               .mask           = MLXSW_REG_PTYS_EXT_ETH_SPEED_800GAUI_8,
+               .mask_ethtool   = mlxsw_sp2_mask_ethtool_800gaui_8,
+               .m_ethtool_len  = MLXSW_SP2_MASK_ETHTOOL_800GAUI_8_LEN,
+               .mask_sup_width = MLXSW_SP_PORT_MASK_WIDTH_8X,
+               .speed          = SPEED_800000,
+               .width          = 8,
+       },
 };
 
 #define MLXSW_SP2_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp2_port_link_mode)
index ed7a35c..24c994b 100644
@@ -57,5 +57,6 @@ config LAN743X
 
 source "drivers/net/ethernet/microchip/lan966x/Kconfig"
 source "drivers/net/ethernet/microchip/sparx5/Kconfig"
+source "drivers/net/ethernet/microchip/vcap/Kconfig"
 
 endif # NET_VENDOR_MICROCHIP
index 9faa414..bbd3492 100644
@@ -11,3 +11,4 @@ lan743x-objs := lan743x_main.o lan743x_ethtool.o lan743x_ptp.o
 
 obj-$(CONFIG_LAN966X_SWITCH) += lan966x/
 obj-$(CONFIG_SPARX5_SWITCH) += sparx5/
+obj-$(CONFIG_VCAP) += vcap/
index c739d60..88f9484 100644
@@ -1233,6 +1233,50 @@ static void lan743x_get_regs(struct net_device *dev,
        lan743x_common_regs(dev, regs, p);
 }
 
+static void lan743x_get_pauseparam(struct net_device *dev,
+                                  struct ethtool_pauseparam *pause)
+{
+       struct lan743x_adapter *adapter = netdev_priv(dev);
+       struct lan743x_phy *phy = &adapter->phy;
+
+       if (phy->fc_request_control & FLOW_CTRL_TX)
+               pause->tx_pause = 1;
+       if (phy->fc_request_control & FLOW_CTRL_RX)
+               pause->rx_pause = 1;
+       pause->autoneg = phy->fc_autoneg;
+}
+
+static int lan743x_set_pauseparam(struct net_device *dev,
+                                 struct ethtool_pauseparam *pause)
+{
+       struct lan743x_adapter *adapter = netdev_priv(dev);
+       struct phy_device *phydev = dev->phydev;
+       struct lan743x_phy *phy = &adapter->phy;
+
+       if (!phydev)
+               return -ENODEV;
+
+       if (!phy_validate_pause(phydev, pause))
+               return -EINVAL;
+
+       phy->fc_request_control = 0;
+       if (pause->rx_pause)
+               phy->fc_request_control |= FLOW_CTRL_RX;
+
+       if (pause->tx_pause)
+               phy->fc_request_control |= FLOW_CTRL_TX;
+
+       phy->fc_autoneg = pause->autoneg;
+
+       if (pause->autoneg == AUTONEG_DISABLE)
+               lan743x_mac_flow_ctrl_set_enables(adapter, pause->tx_pause,
+                                                 pause->rx_pause);
+       else
+               phy_set_asym_pause(phydev, pause->rx_pause,  pause->tx_pause);
+
+       return 0;
+}
+
 const struct ethtool_ops lan743x_ethtool_ops = {
        .get_drvinfo = lan743x_ethtool_get_drvinfo,
        .get_msglevel = lan743x_ethtool_get_msglevel,
@@ -1259,6 +1303,8 @@ const struct ethtool_ops lan743x_ethtool_ops = {
        .set_link_ksettings = phy_ethtool_set_link_ksettings,
        .get_regs_len = lan743x_get_regs_len,
        .get_regs = lan743x_get_regs,
+       .get_pauseparam = lan743x_get_pauseparam,
+       .set_pauseparam = lan743x_set_pauseparam,
 #ifdef CONFIG_PM
        .get_wol = lan743x_ethtool_get_wol,
        .set_wol = lan743x_ethtool_set_wol,
index 50eeecb..c0f8ba6 100644
@@ -1326,8 +1326,8 @@ static void lan743x_mac_close(struct lan743x_adapter *adapter)
                                 1, 1000, 20000, 100);
 }
 
-static void lan743x_mac_flow_ctrl_set_enables(struct lan743x_adapter *adapter,
-                                             bool tx_enable, bool rx_enable)
+void lan743x_mac_flow_ctrl_set_enables(struct lan743x_adapter *adapter,
+                                      bool tx_enable, bool rx_enable)
 {
        u32 flow_setting = 0;
 
index 67877d3..bc5eea4 100644
@@ -1159,5 +1159,7 @@ u32 lan743x_csr_read(struct lan743x_adapter *adapter, int offset);
 void lan743x_csr_write(struct lan743x_adapter *adapter, int offset, u32 data);
 int lan743x_hs_syslock_acquire(struct lan743x_adapter *adapter, u16 timeout);
 void lan743x_hs_syslock_release(struct lan743x_adapter *adapter);
+void lan743x_mac_flow_ctrl_set_enables(struct lan743x_adapter *adapter,
+                                      bool tx_enable, bool rx_enable);
 
 #endif /* _LAN743X_H */
index e58a27f..fea4254 100644
@@ -656,7 +656,15 @@ void lan966x_stats_get(struct net_device *dev,
        stats->rx_dropped = dev->stats.rx_dropped +
                lan966x->stats[idx + SYS_COUNT_RX_LONG] +
                lan966x->stats[idx + SYS_COUNT_DR_LOCAL] +
-               lan966x->stats[idx + SYS_COUNT_DR_TAIL];
+               lan966x->stats[idx + SYS_COUNT_DR_TAIL] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_0] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_1] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_2] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_3] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_4] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_5] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_6] +
+               lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_7];
 
        for (i = 0; i < LAN966X_NUM_TC; i++) {
                stats->rx_dropped +=
index cc5e48e..98e2753 100644
@@ -9,5 +9,6 @@ config SPARX5_SWITCH
        select PHYLINK
        select PHY_SPARX5_SERDES
        select RESET_CONTROLLER
+       select VCAP
        help
          This driver supports the Sparx5 network switch device.
index d1c6ad9..ee2c42f 100644
@@ -5,7 +5,11 @@
 
 obj-$(CONFIG_SPARX5_SWITCH) += sparx5-switch.o
 
-sparx5-switch-objs  := sparx5_main.o sparx5_packet.o \
+sparx5-switch-y  := sparx5_main.o sparx5_packet.o \
  sparx5_netdev.o sparx5_phylink.o sparx5_port.o sparx5_mactable.o sparx5_vlan.o \
  sparx5_switchdev.o sparx5_calendar.o sparx5_ethtool.o sparx5_fdma.o \
- sparx5_ptp.o sparx5_pgid.o sparx5_tc.o sparx5_qos.o
+ sparx5_ptp.o sparx5_pgid.o sparx5_tc.o sparx5_qos.o \
+ sparx5_vcap_impl.o sparx5_vcap_ag_api.o sparx5_tc_flower.o
+
+# Provide include files
+ccflags-y += -I$(srctree)/drivers/net/ethernet/microchip/vcap
index 62a325e..0b70c00 100644
@@ -672,6 +672,14 @@ static int sparx5_start(struct sparx5 *sparx5)
 
        sparx5_board_init(sparx5);
        err = sparx5_register_notifier_blocks(sparx5);
+       if (err)
+               return err;
+
+       err = sparx5_vcap_init(sparx5);
+       if (err) {
+               sparx5_unregister_notifier_blocks(sparx5);
+               return err;
+       }
 
        /* Start Frame DMA with fallback to register based INJ/XTR */
        err = -ENXIO;
@@ -906,6 +914,7 @@ static int mchp_sparx5_remove(struct platform_device *pdev)
        sparx5_ptp_deinit(sparx5);
        sparx5_fdma_stop(sparx5);
        sparx5_cleanup_ports(sparx5);
+       sparx5_vcap_destroy(sparx5);
        /* Unregister netdevs */
        sparx5_unregister_notifier_blocks(sparx5);
 
index 7a83222..2ab22a7 100644
@@ -288,6 +288,8 @@ struct sparx5 {
        struct mutex ptp_lock; /* lock for ptp interface state */
        u16 ptp_skbs;
        int ptp_irq;
+       /* VCAP */
+       struct vcap_control *vcap_ctrl;
        /* PGID allocation map */
        u8 pgid_map[PGID_TABLE_SIZE];
 };
@@ -382,6 +384,10 @@ void sparx5_ptp_txtstamp_release(struct sparx5_port *port,
                                 struct sk_buff *skb);
 irqreturn_t sparx5_ptp_irq_handler(int irq, void *args);
 
+/* sparx5_vcap_impl.c */
+int sparx5_vcap_init(struct sparx5 *sparx5);
+void sparx5_vcap_destroy(struct sparx5 *sparx5);
+
 /* sparx5_pgid.c */
 enum sparx5_pgid_type {
        SPX5_PGID_FREE,
index fa2eb70..c42195f 100644
@@ -4,8 +4,8 @@
  * Copyright (c) 2021 Microchip Technology Inc.
  */
 
-/* This file is autogenerated by cml-utils 2022-02-26 14:15:01 +0100.
- * Commit ID: 98bdd3d171cc2a1afd30d241d41a4281d471a48c (dirty)
+/* This file is autogenerated by cml-utils 2022-09-12 14:22:42 +0200.
+ * Commit ID: 06aecbca4eab6e85d87f665fe6b6348c48146245
  */
 
 #ifndef _SPARX5_MAIN_REGS_H_
@@ -171,6 +171,162 @@ enum sparx5_target {
 /*      ANA_AC:STAT_CNT_CFG_PORT:STAT_LSB_CNT */
 #define ANA_AC_PORT_STAT_LSB_CNT(g, r) __REG(TARGET_ANA_AC, 0, 1, 843776, g, 70, 64, 20, r, 4, 4)
 
+/*      ANA_ACL:COMMON:VCAP_S2_CFG */
+#define ANA_ACL_VCAP_S2_CFG(r)    __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 0, r, 70, 4)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA BIT(28)
+#define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_ROUTE_HANDLING_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_OAM_ENA     GENMASK(27, 26)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_OAM_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_OAM_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_OAM_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_OAM_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_OTHER_ENA GENMASK(25, 24)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_OTHER_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_OTHER_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_OTHER_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_OTHER_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_VID_ENA GENMASK(23, 22)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_VID_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_VID_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_VID_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_VID_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_STD_ENA GENMASK(21, 20)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_STD_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_STD_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_STD_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_STD_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_ENA GENMASK(19, 18)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP6_TCPUDP_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP_7TUPLE_ENA GENMASK(17, 16)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP_7TUPLE_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP_7TUPLE_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP_7TUPLE_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP_7TUPLE_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_VID_ENA GENMASK(15, 14)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_VID_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_VID_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_VID_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_VID_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_TCPUDP_ENA GENMASK(13, 12)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_TCPUDP_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_TCPUDP_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_TCPUDP_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_TCPUDP_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_OTHER_ENA GENMASK(11, 10)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_OTHER_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_OTHER_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_OTHER_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_IP4_OTHER_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_ARP_ENA     GENMASK(9, 8)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_ARP_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_ARP_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_ARP_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_ARP_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_SNAP_ENA GENMASK(7, 6)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_SNAP_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_SNAP_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_SNAP_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_SNAP_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_LLC_ENA GENMASK(5, 4)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_LLC_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_LLC_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_LLC_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_TYPE_MAC_LLC_ENA, x)
+
+#define ANA_ACL_VCAP_S2_CFG_SEC_ENA              GENMASK(3, 0)
+#define ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_CFG_SEC_ENA, x)
+#define ANA_ACL_VCAP_S2_CFG_SEC_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_CFG_SEC_ENA, x)
+
+/*      ANA_ACL:COMMON:SWAP_IP_CTRL */
+#define ANA_ACL_SWAP_IP_CTRL      __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 412, 0, 1, 4)
+
+#define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL GENMASK(23, 18)
+#define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL_SET(x)\
+       FIELD_PREP(ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL, x)
+#define ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL_GET(x)\
+       FIELD_GET(ANA_ACL_SWAP_IP_CTRL_DMAC_REPL_OFFSET_VAL, x)
+
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_VAL GENMASK(17, 10)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_VAL_SET(x)\
+       FIELD_PREP(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_VAL, x)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_VAL_GET(x)\
+       FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_VAL, x)
+
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_VAL GENMASK(9, 2)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_VAL_SET(x)\
+       FIELD_PREP(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_VAL, x)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_VAL_GET(x)\
+       FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_VAL, x)
+
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_ENA BIT(1)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_ENA, x)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP6_HOPC_ENA, x)
+
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA BIT(0)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA, x)
+#define ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_SWAP_IP_CTRL_IP_SWAP_IP4_TTL_ENA, x)
+
+/*      ANA_ACL:COMMON:VCAP_S2_RLEG_STAT */
+#define ANA_ACL_VCAP_S2_RLEG_STAT(r) __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 424, r, 4, 4)
+
+#define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK GENMASK(12, 6)
+#define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK, x)
+#define ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_RLEG_STAT_IRLEG_STAT_MASK, x)
+
+#define ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK GENMASK(5, 0)
+#define ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK, x)
+#define ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_RLEG_STAT_ERLEG_STAT_MASK, x)
+
+/*      ANA_ACL:COMMON:VCAP_S2_FRAGMENT_CFG */
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 440, 0, 1, 4)
+
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN  GENMASK(9, 5)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN, x)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_FRAGMENT_CFG_L4_MIN_LEN, x)
+
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_DIS BIT(4)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_DIS_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_DIS, x)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_DIS_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_DIS, x)
+
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES GENMASK(3, 0)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES, x)
+#define ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_FRAGMENT_CFG_FRAGMENT_OFFSET_THRES, x)
+
 /*      ANA_ACL:COMMON:OWN_UPSID */
 #define ANA_ACL_OWN_UPSID(r)      __REG(TARGET_ANA_ACL, 0, 1, 32768, 0, 1, 592, 580, r, 3, 4)
 
@@ -180,6 +336,174 @@ enum sparx5_target {
 #define ANA_ACL_OWN_UPSID_OWN_UPSID_GET(x)\
        FIELD_GET(ANA_ACL_OWN_UPSID_OWN_UPSID, x)
 
+/*      ANA_ACL:KEY_SEL:VCAP_S2_KEY_SEL */
+#define ANA_ACL_VCAP_S2_KEY_SEL(g, r) __REG(TARGET_ANA_ACL, 0, 1, 34200, g, 134, 16, 0, r, 4, 4)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA      BIT(13)
+#define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_IGR_PORT_MASK_SEL BIT(12)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IGR_PORT_MASK_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_IGR_PORT_MASK_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IGR_PORT_MASK_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_IGR_PORT_MASK_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL  GENMASK(11, 10)
+#define ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL   GENMASK(9, 8)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL   GENMASK(7, 6)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL   GENMASK(5, 3)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL   GENMASK(2, 1)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL, x)
+
+#define ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL      BIT(0)
+#define ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_SET(x)\
+       FIELD_PREP(ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL, x)
+#define ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_GET(x)\
+       FIELD_GET(ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL, x)
+
+/*      ANA_ACL:CNT_A:CNT_A */
+#define ANA_ACL_CNT_A(g)          __REG(TARGET_ANA_ACL, 0, 1, 0, g, 4096, 4, 0, 0, 1, 4)
+
+/*      ANA_ACL:CNT_B:CNT_B */
+#define ANA_ACL_CNT_B(g)          __REG(TARGET_ANA_ACL, 0, 1, 16384, g, 4096, 4, 0, 0, 1, 4)
+
+/*      ANA_ACL:STICKY:SEC_LOOKUP_STICKY */
+#define ANA_ACL_SEC_LOOKUP_STICKY(r) __REG(TARGET_ANA_ACL, 0, 1, 36408, 0, 1, 16, 0, r, 4, 4)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY BIT(17)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_CLM_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_IRLEG_STICKY BIT(16)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_IRLEG_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_IRLEG_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_IRLEG_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_IRLEG_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_ERLEG_STICKY BIT(15)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_ERLEG_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_ERLEG_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_ERLEG_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_ERLEG_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_PORT_STICKY BIT(14)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_PORT_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_PORT_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_PORT_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_KEY_SEL_PORT_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM2_STICKY BIT(13)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM2_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM2_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM2_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM2_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM1_STICKY BIT(12)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM1_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM1_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM1_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_CUSTOM1_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_OAM_STICKY BIT(11)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_OAM_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_OAM_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_OAM_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_OAM_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY BIT(10)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_VID_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY BIT(9)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_STD_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_TCPUDP_STICKY BIT(8)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_TCPUDP_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_TCPUDP_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_TCPUDP_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP6_TCPUDP_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY BIT(7)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP_7TUPLE_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY BIT(6)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_VID_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY BIT(5)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_TCPUDP_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY BIT(4)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_IP4_OTHER_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY BIT(3)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_ARP_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_SNAP_STICKY BIT(2)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_SNAP_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_SNAP_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_SNAP_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_SNAP_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_LLC_STICKY BIT(1)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_LLC_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_LLC_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_LLC_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_LLC_STICKY, x)
+
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY BIT(0)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_SET(x)\
+       FIELD_PREP(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x)
+#define ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY_GET(x)\
+       FIELD_GET(ANA_ACL_SEC_LOOKUP_STICKY_SEC_TYPE_MAC_ETYPE_STICKY, x)
+
 /*      ANA_AC_POL:POL_ALL_CFG:POL_UPD_INT_CFG */
 #define ANA_AC_POL_POL_UPD_INT_CFG __REG(TARGET_ANA_AC_POL, 0, 1, 75968, 0, 1, 1160, 1148, 0, 1, 4)
 
@@ -5039,6 +5363,138 @@ enum sparx5_target {
 #define REW_RAM_INIT_RAM_CFG_HOOK_GET(x)\
        FIELD_GET(REW_RAM_INIT_RAM_CFG_HOOK, x)
 
+/*      VCAP_SUPER:VCAP_CORE_CFG:VCAP_UPDATE_CTRL */
+#define VCAP_SUPER_CTRL           __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_SUPER_CTRL_UPDATE_CMD               GENMASK(24, 22)
+#define VCAP_SUPER_CTRL_UPDATE_CMD_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_CMD, x)
+#define VCAP_SUPER_CTRL_UPDATE_CMD_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_CMD, x)
+
+#define VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS         BIT(21)
+#define VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS, x)
+#define VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS, x)
+
+#define VCAP_SUPER_CTRL_UPDATE_ACTION_DIS        BIT(20)
+#define VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_ACTION_DIS, x)
+#define VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_ACTION_DIS, x)
+
+#define VCAP_SUPER_CTRL_UPDATE_CNT_DIS           BIT(19)
+#define VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_CNT_DIS, x)
+#define VCAP_SUPER_CTRL_UPDATE_CNT_DIS_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_CNT_DIS, x)
+
+#define VCAP_SUPER_CTRL_UPDATE_ADDR              GENMASK(18, 3)
+#define VCAP_SUPER_CTRL_UPDATE_ADDR_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_ADDR, x)
+#define VCAP_SUPER_CTRL_UPDATE_ADDR_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_ADDR, x)
+
+#define VCAP_SUPER_CTRL_UPDATE_SHOT              BIT(2)
+#define VCAP_SUPER_CTRL_UPDATE_SHOT_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_UPDATE_SHOT, x)
+#define VCAP_SUPER_CTRL_UPDATE_SHOT_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_UPDATE_SHOT, x)
+
+#define VCAP_SUPER_CTRL_CLEAR_CACHE              BIT(1)
+#define VCAP_SUPER_CTRL_CLEAR_CACHE_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_CLEAR_CACHE, x)
+#define VCAP_SUPER_CTRL_CLEAR_CACHE_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_CLEAR_CACHE, x)
+
+#define VCAP_SUPER_CTRL_MV_TRAFFIC_IGN           BIT(0)
+#define VCAP_SUPER_CTRL_MV_TRAFFIC_IGN_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CTRL_MV_TRAFFIC_IGN, x)
+#define VCAP_SUPER_CTRL_MV_TRAFFIC_IGN_GET(x)\
+       FIELD_GET(VCAP_SUPER_CTRL_MV_TRAFFIC_IGN, x)
+
+/*      VCAP_SUPER:VCAP_CORE_CFG:VCAP_MV_CFG */
+#define VCAP_SUPER_CFG            __REG(TARGET_VCAP_SUPER, 0, 1, 0, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_SUPER_CFG_MV_NUM_POS                GENMASK(31, 16)
+#define VCAP_SUPER_CFG_MV_NUM_POS_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CFG_MV_NUM_POS, x)
+#define VCAP_SUPER_CFG_MV_NUM_POS_GET(x)\
+       FIELD_GET(VCAP_SUPER_CFG_MV_NUM_POS, x)
+
+#define VCAP_SUPER_CFG_MV_SIZE                   GENMASK(15, 0)
+#define VCAP_SUPER_CFG_MV_SIZE_SET(x)\
+       FIELD_PREP(VCAP_SUPER_CFG_MV_SIZE, x)
+#define VCAP_SUPER_CFG_MV_SIZE_GET(x)\
+       FIELD_GET(VCAP_SUPER_CFG_MV_SIZE, x)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ENTRY_DAT */
+#define VCAP_SUPER_VCAP_ENTRY_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 0, r, 64, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_MASK_DAT */
+#define VCAP_SUPER_VCAP_MASK_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 256, r, 64, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_ACTION_DAT */
+#define VCAP_SUPER_VCAP_ACTION_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 512, r, 64, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_DAT */
+#define VCAP_SUPER_VCAP_CNT_DAT(r) __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 768, r, 32, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_CNT_FW_DAT */
+#define VCAP_SUPER_VCAP_CNT_FW_DAT __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 896, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_CACHE:VCAP_TG_DAT */
+#define VCAP_SUPER_VCAP_TG_DAT    __REG(TARGET_VCAP_SUPER, 0, 1, 8, 0, 1, 904, 900, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_IDX */
+#define VCAP_SUPER_IDX            __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 0, 0, 1, 4)
+
+#define VCAP_SUPER_IDX_CORE_IDX                  GENMASK(3, 0)
+#define VCAP_SUPER_IDX_CORE_IDX_SET(x)\
+       FIELD_PREP(VCAP_SUPER_IDX_CORE_IDX, x)
+#define VCAP_SUPER_IDX_CORE_IDX_GET(x)\
+       FIELD_GET(VCAP_SUPER_IDX_CORE_IDX, x)
+
+/*      VCAP_SUPER:VCAP_CORE_MAP:VCAP_CORE_MAP */
+#define VCAP_SUPER_MAP            __REG(TARGET_VCAP_SUPER, 0, 1, 912, 0, 1, 8, 4, 0, 1, 4)
+
+#define VCAP_SUPER_MAP_CORE_MAP                  GENMASK(2, 0)
+#define VCAP_SUPER_MAP_CORE_MAP_SET(x)\
+       FIELD_PREP(VCAP_SUPER_MAP_CORE_MAP, x)
+#define VCAP_SUPER_MAP_CORE_MAP_GET(x)\
+       FIELD_GET(VCAP_SUPER_MAP_CORE_MAP, x)
+
+/*      VCAP_SUPER:VCAP_CONST:VCAP_VER */
+#define VCAP_SUPER_VCAP_VER       __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 0, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ENTRY_WIDTH */
+#define VCAP_SUPER_ENTRY_WIDTH    __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 4, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ENTRY_CNT */
+#define VCAP_SUPER_ENTRY_CNT      __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 8, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ENTRY_SWCNT */
+#define VCAP_SUPER_ENTRY_SWCNT    __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 12, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ENTRY_TG_WIDTH */
+#define VCAP_SUPER_ENTRY_TG_WIDTH __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 16, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ACTION_DEF_CNT */
+#define VCAP_SUPER_ACTION_DEF_CNT __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 20, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:ACTION_WIDTH */
+#define VCAP_SUPER_ACTION_WIDTH   __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 24, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:CNT_WIDTH */
+#define VCAP_SUPER_CNT_WIDTH      __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 28, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:CORE_CNT */
+#define VCAP_SUPER_CORE_CNT       __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 32, 0, 1, 4)
+
+/*      VCAP_SUPER:VCAP_CONST:IF_CNT */
+#define VCAP_SUPER_IF_CNT         __REG(TARGET_VCAP_SUPER, 0, 1, 924, 0, 1, 40, 36, 0, 1, 4)
+
 /*      VCAP_SUPER:RAM_CTRL:RAM_INIT */
 #define VCAP_SUPER_RAM_INIT       __REG(TARGET_VCAP_SUPER, 0, 1, 1120, 0, 1, 4, 0, 0, 1, 4)
 
index e05429c..9432251 100644 (file)
 #include "sparx5_main.h"
 #include "sparx5_qos.h"
 
+/* tc block handling */
+static LIST_HEAD(sparx5_block_cb_list);
+
+static int sparx5_tc_block_cb(enum tc_setup_type type,
+                             void *type_data,
+                             void *cb_priv, bool ingress)
+{
+       struct net_device *ndev = cb_priv;
+
+       if (type == TC_SETUP_CLSFLOWER)
+               return sparx5_tc_flower(ndev, type_data, ingress);
+       return -EOPNOTSUPP;
+}
+
+static int sparx5_tc_block_cb_ingress(enum tc_setup_type type,
+                                     void *type_data,
+                                     void *cb_priv)
+{
+       return sparx5_tc_block_cb(type, type_data, cb_priv, true);
+}
+
+static int sparx5_tc_block_cb_egress(enum tc_setup_type type,
+                                    void *type_data,
+                                    void *cb_priv)
+{
+       return sparx5_tc_block_cb(type, type_data, cb_priv, false);
+}
+
+static int sparx5_tc_setup_block(struct net_device *ndev,
+                                struct flow_block_offload *fbo)
+{
+       flow_setup_cb_t *cb;
+
+       if (fbo->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+               cb = sparx5_tc_block_cb_ingress;
+       else if (fbo->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+               cb = sparx5_tc_block_cb_egress;
+       else
+               return -EOPNOTSUPP;
+
+       return flow_block_cb_setup_simple(fbo, &sparx5_block_cb_list,
+                                         cb, ndev, ndev, false);
+}
+
 static void sparx5_tc_get_layer_and_idx(u32 parent, u32 portno, u32 *layer,
                                        u32 *idx)
 {
@@ -111,6 +155,8 @@ int sparx5_port_setup_tc(struct net_device *ndev, enum tc_setup_type type,
                         void *type_data)
 {
        switch (type) {
+       case TC_SETUP_BLOCK:
+               return sparx5_tc_setup_block(ndev, type_data);
        case TC_SETUP_QDISC_MQPRIO:
                return sparx5_tc_setup_qdisc_mqprio(ndev, type_data);
        case TC_SETUP_QDISC_TBF:
index 5b55e11..2b07a93 100644 (file)
@@ -7,9 +7,23 @@
 #ifndef __SPARX5_TC_H__
 #define __SPARX5_TC_H__
 
+#include <net/flow_offload.h>
 #include <linux/netdevice.h>
 
+/* Controls how PORT_MASK is applied */
+enum SPX5_PORT_MASK_MODE {
+       SPX5_PMM_OR_DSTMASK,
+       SPX5_PMM_AND_VLANMASK,
+       SPX5_PMM_REPLACE_PGID,
+       SPX5_PMM_REPLACE_ALL,
+       SPX5_PMM_REDIR_PGID,
+       SPX5_PMM_OR_PGID_MASK,
+};
+
 int sparx5_port_setup_tc(struct net_device *ndev, enum tc_setup_type type,
                         void *type_data);
 
+int sparx5_tc_flower(struct net_device *ndev, struct flow_cls_offload *fco,
+                    bool ingress);
+
 #endif /* __SPARX5_TC_H__ */
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c
new file mode 100644 (file)
index 0000000..626558a
--- /dev/null
@@ -0,0 +1,217 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip VCAP API
+ *
+ * Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include <net/tcp.h>
+
+#include "sparx5_tc.h"
+#include "vcap_api.h"
+#include "vcap_api_client.h"
+#include "sparx5_main.h"
+#include "sparx5_vcap_impl.h"
+
+struct sparx5_tc_flower_parse_usage {
+       struct flow_cls_offload *fco;
+       struct flow_rule *frule;
+       struct vcap_rule *vrule;
+       unsigned int used_keys;
+};
+
+static int sparx5_tc_flower_handler_ethaddr_usage(struct sparx5_tc_flower_parse_usage *st)
+{
+       enum vcap_key_field smac_key = VCAP_KF_L2_SMAC;
+       enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC;
+       struct flow_match_eth_addrs match;
+       struct vcap_u48_key smac, dmac;
+       int err = 0;
+
+       flow_rule_match_eth_addrs(st->frule, &match);
+
+       if (!is_zero_ether_addr(match.mask->src)) {
+               vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN);
+               vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN);
+               err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac);
+               if (err)
+                       goto out;
+       }
+
+       if (!is_zero_ether_addr(match.mask->dst)) {
+               vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN);
+               vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN);
+               err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac);
+               if (err)
+                       goto out;
+       }
+
+       st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS);
+
+       return err;
+
+out:
+       NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error");
+       return err;
+}
+
+static int (*sparx5_tc_flower_usage_handlers[])(struct sparx5_tc_flower_parse_usage *st) = {
+       /* More dissector handlers will be added here later */
+       [FLOW_DISSECTOR_KEY_ETH_ADDRS] = sparx5_tc_flower_handler_ethaddr_usage,
+};
+
+static int sparx5_tc_use_dissectors(struct flow_cls_offload *fco,
+                                   struct vcap_admin *admin,
+                                   struct vcap_rule *vrule)
+{
+       struct sparx5_tc_flower_parse_usage state = {
+               .fco = fco,
+               .vrule = vrule,
+       };
+       int idx, err = 0;
+
+       state.frule = flow_cls_offload_flow_rule(fco);
+       for (idx = 0; idx < ARRAY_SIZE(sparx5_tc_flower_usage_handlers); ++idx) {
+               if (!flow_rule_match_key(state.frule, idx))
+                       continue;
+               if (!sparx5_tc_flower_usage_handlers[idx])
+                       continue;
+               err = sparx5_tc_flower_usage_handlers[idx](&state);
+               if (err)
+                       return err;
+       }
+       return err;
+}
+
+static int sparx5_tc_flower_replace(struct net_device *ndev,
+                                   struct flow_cls_offload *fco,
+                                   struct vcap_admin *admin)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct flow_action_entry *act;
+       struct vcap_control *vctrl;
+       struct flow_rule *frule;
+       struct vcap_rule *vrule;
+       int err, idx;
+
+       frule = flow_cls_offload_flow_rule(fco);
+       if (!flow_action_has_entries(&frule->action)) {
+               NL_SET_ERR_MSG_MOD(fco->common.extack, "No actions");
+               return -EINVAL;
+       }
+
+       if (!flow_action_basic_hw_stats_check(&frule->action, fco->common.extack))
+               return -EOPNOTSUPP;
+
+       vctrl = port->sparx5->vcap_ctrl;
+       vrule = vcap_alloc_rule(vctrl, ndev, fco->common.chain_index, VCAP_USER_TC,
+                               fco->common.prio, 0);
+       if (IS_ERR(vrule))
+               return PTR_ERR(vrule);
+
+       vrule->cookie = fco->cookie;
+       sparx5_tc_use_dissectors(fco, admin, vrule);
+       flow_action_for_each(idx, act, &frule->action) {
+               switch (act->id) {
+               case FLOW_ACTION_TRAP:
+                       err = vcap_rule_add_action_bit(vrule,
+                                                      VCAP_AF_CPU_COPY_ENA,
+                                                      VCAP_BIT_1);
+                       if (err)
+                               goto out;
+                       err = vcap_rule_add_action_u32(vrule,
+                                                      VCAP_AF_CPU_QUEUE_NUM, 0);
+                       if (err)
+                               goto out;
+                       err = vcap_rule_add_action_u32(vrule, VCAP_AF_MASK_MODE,
+                                                      SPX5_PMM_REPLACE_ALL);
+                       if (err)
+                               goto out;
+                       /* For now the actionset is hardcoded */
+                       err = vcap_set_rule_set_actionset(vrule,
+                                                         VCAP_AFS_BASE_TYPE);
+                       if (err)
+                               goto out;
+                       break;
+               case FLOW_ACTION_ACCEPT:
+                       /* For now the actionset is hardcoded */
+                       err = vcap_set_rule_set_actionset(vrule,
+                                                         VCAP_AFS_BASE_TYPE);
+                       if (err)
+                               goto out;
+                       break;
+               default:
+                       NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                          "Unsupported TC action");
+                       err = -EOPNOTSUPP;
+                       goto out;
+               }
+       }
+       /* For now the keyset is hardcoded */
+       err = vcap_set_rule_set_keyset(vrule, VCAP_KFS_MAC_ETYPE);
+       if (err) {
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "No matching port keyset for filter protocol and keys");
+               goto out;
+       }
+       err = vcap_val_rule(vrule, ETH_P_ALL);
+       if (err) {
+               vcap_set_tc_exterr(fco, vrule);
+               goto out;
+       }
+       err = vcap_add_rule(vrule);
+       if (err)
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "Could not add the filter");
+out:
+       vcap_free_rule(vrule);
+       return err;
+}
+
+static int sparx5_tc_flower_destroy(struct net_device *ndev,
+                                   struct flow_cls_offload *fco,
+                                   struct vcap_admin *admin)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct vcap_control *vctrl;
+       int err = -ENOENT, rule_id;
+
+       vctrl = port->sparx5->vcap_ctrl;
+       while (true) {
+               rule_id = vcap_lookup_rule_by_cookie(vctrl, fco->cookie);
+               if (rule_id <= 0)
+                       break;
+               err = vcap_del_rule(vctrl, ndev, rule_id);
+               if (err) {
+                       pr_err("%s:%d: could not delete rule %d\n",
+                              __func__, __LINE__, rule_id);
+                       break;
+               }
+       }
+       return err;
+}
+
+int sparx5_tc_flower(struct net_device *ndev, struct flow_cls_offload *fco,
+                    bool ingress)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct vcap_control *vctrl;
+       struct vcap_admin *admin;
+       int err = -EINVAL;
+
+       /* Get vcap instance from the chain id */
+       vctrl = port->sparx5->vcap_ctrl;
+       admin = vcap_find_admin(vctrl, fco->common.chain_index);
+       if (!admin) {
+               NL_SET_ERR_MSG_MOD(fco->common.extack, "Invalid chain");
+               return err;
+       }
+
+       switch (fco->command) {
+       case FLOW_CLS_REPLACE:
+               return sparx5_tc_flower_replace(ndev, fco, admin);
+       case FLOW_CLS_DESTROY:
+               return sparx5_tc_flower_destroy(ndev, fco, admin);
+       default:
+               return -EOPNOTSUPP;
+       }
+}
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.c
new file mode 100644 (file)
index 0000000..1bd987c
--- /dev/null
@@ -0,0 +1,1351 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API
+ */
+
+/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200.
+ * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include "vcap_api.h"
+#include "sparx5_vcap_ag_api.h"
+
+/* keyfields */
+static const struct vcap_field is2_mac_etype_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 90,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 138,
+               .width = 48,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 186,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 187,
+               .width = 16,
+       },
+       [VCAP_KF_L2_PAYLOAD_ETYPE] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 203,
+               .width = 64,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 267,
+               .width = 16,
+       },
+       [VCAP_KF_OAM_CCM_CNTS_EQ0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 283,
+               .width = 1,
+       },
+       [VCAP_KF_OAM_Y1731_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 284,
+               .width = 1,
+       },
+};
+
+static const struct vcap_field is2_arp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 86,
+               .width = 48,
+       },
+       [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 134,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 135,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_LEN_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 136,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_TGT_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 137,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_SENDER_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 138,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE_UNKNOWN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 139,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 140,
+               .width = 2,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 142,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 174,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 206,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 207,
+               .width = 16,
+       },
+};
+
+static const struct vcap_field is2_ip4_tcp_udp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 93,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 136,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 168,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 169,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 170,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 186,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 202,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 219,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 220,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 221,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 222,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 223,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 224,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 225,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 226,
+               .width = 64,
+       },
+};
+
+static const struct vcap_field is2_ip4_other_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 93,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 136,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 168,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 169,
+               .width = 8,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 177,
+               .width = 16,
+       },
+       [VCAP_KF_L3_PAYLOAD] = {
+               .type = VCAP_FIELD_U112,
+               .offset = 193,
+               .width = 96,
+       },
+};
+
+static const struct vcap_field is2_ip6_std_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 91,
+               .width = 128,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 219,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 220,
+               .width = 8,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 228,
+               .width = 16,
+       },
+       [VCAP_KF_L3_PAYLOAD] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 244,
+               .width = 40,
+       },
+};
+
+static const struct vcap_field is2_ip_7tuple_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 11,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 18,
+               .width = 65,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 83,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 99,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 112,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 113,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 116,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 119,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 120,
+               .width = 1,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 121,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 169,
+               .width = 48,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 217,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 219,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 227,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 355,
+               .width = 128,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 483,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 484,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 485,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 486,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 502,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 518,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 534,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 535,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 536,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 537,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 538,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 539,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 540,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 541,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 542,
+               .width = 64,
+       },
+};
+
+/* keyfield_set */
+static const struct vcap_set is2_keyfield_set[] = {
+       [VCAP_KFS_MAC_ETYPE] = {
+               .type_id = 0,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_ARP] = {
+               .type_id = 3,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_TCP_UDP] = {
+               .type_id = 4,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_OTHER] = {
+               .type_id = 5,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP6_STD] = {
+               .type_id = 6,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP_7TUPLE] = {
+               .type_id = 1,
+               .sw_per_item = 12,
+               .sw_cnt = 1,
+       },
+};
+
+/* keyfield_set map */
+static const struct vcap_field *is2_keyfield_set_map[] = {
+       [VCAP_KFS_MAC_ETYPE] = is2_mac_etype_keyfield,
+       [VCAP_KFS_ARP] = is2_arp_keyfield,
+       [VCAP_KFS_IP4_TCP_UDP] = is2_ip4_tcp_udp_keyfield,
+       [VCAP_KFS_IP4_OTHER] = is2_ip4_other_keyfield,
+       [VCAP_KFS_IP6_STD] = is2_ip6_std_keyfield,
+       [VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield,
+};
+
+/* keyfield_set map sizes */
+static int is2_keyfield_set_map_size[] = {
+       [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(is2_mac_etype_keyfield),
+       [VCAP_KFS_ARP] = ARRAY_SIZE(is2_arp_keyfield),
+       [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(is2_ip4_tcp_udp_keyfield),
+       [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(is2_ip4_other_keyfield),
+       [VCAP_KFS_IP6_STD] = ARRAY_SIZE(is2_ip6_std_keyfield),
+       [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield),
+};
+
+/* actionfields */
+static const struct vcap_field is2_base_type_actionfield[] = {
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 5,
+       },
+       [VCAP_AF_HIT_ME_ONCE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 7,
+               .width = 1,
+       },
+       [VCAP_AF_INTR_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 8,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_COPY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_QUEUE_NUM] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 3,
+       },
+       [VCAP_AF_LRN_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_AF_RT_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 16,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 6,
+       },
+       [VCAP_AF_IGNORE_PIPELINE_CTRL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 23,
+               .width = 1,
+       },
+       [VCAP_AF_MASK_MODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 3,
+       },
+       [VCAP_AF_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 30,
+               .width = 68,
+       },
+       [VCAP_AF_MIRROR_PROBE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 111,
+               .width = 2,
+       },
+       [VCAP_AF_MATCH_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 159,
+               .width = 16,
+       },
+       [VCAP_AF_MATCH_ID_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 175,
+               .width = 16,
+       },
+       [VCAP_AF_CNT_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 191,
+               .width = 12,
+       },
+};
+
+/* actionfield_set */
+static const struct vcap_set is2_actionfield_set[] = {
+       [VCAP_AFS_BASE_TYPE] = {
+               .type_id = -1,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+};
+
+/* actionfield_set map */
+static const struct vcap_field *is2_actionfield_set_map[] = {
+       [VCAP_AFS_BASE_TYPE] = is2_base_type_actionfield,
+};
+
+/* actionfield_set map size */
+static int is2_actionfield_set_map_size[] = {
+       [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(is2_base_type_actionfield),
+};
+
+/* Type Groups */
+static const struct vcap_typegroup is2_x12_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 3,
+               .value = 4,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 312,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 468,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x6_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x3_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup is2_x1_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = {
+       [12] = is2_x12_keyfield_set_typegroups,
+       [6] = is2_x6_keyfield_set_typegroups,
+       [3] = is2_x3_keyfield_set_typegroups,
+       [1] = is2_x1_keyfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup is2_x3_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 110,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 220,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x1_actionfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup *is2_actionfield_set_typegroups[] = {
+       [3] = is2_x3_actionfield_set_typegroups,
+       [1] = is2_x1_actionfield_set_typegroups,
+       [13] = NULL,
+};
+
+/* Keyfieldset names */
+static const char * const vcap_keyfield_set_names[] = {
+       [VCAP_KFS_NO_VALUE]                      =  "(None)",
+       [VCAP_KFS_ARP]                           =  "VCAP_KFS_ARP",
+       [VCAP_KFS_IP4_OTHER]                     =  "VCAP_KFS_IP4_OTHER",
+       [VCAP_KFS_IP4_TCP_UDP]                   =  "VCAP_KFS_IP4_TCP_UDP",
+       [VCAP_KFS_IP6_STD]                       =  "VCAP_KFS_IP6_STD",
+       [VCAP_KFS_IP_7TUPLE]                     =  "VCAP_KFS_IP_7TUPLE",
+       [VCAP_KFS_MAC_ETYPE]                     =  "VCAP_KFS_MAC_ETYPE",
+};
+
+/* Actionfieldset names */
+static const char * const vcap_actionfield_set_names[] = {
+       [VCAP_AFS_NO_VALUE]                      =  "(None)",
+       [VCAP_AFS_BASE_TYPE]                     =  "VCAP_AFS_BASE_TYPE",
+};
+
+/* Keyfield names */
+static const char * const vcap_keyfield_names[] = {
+       [VCAP_KF_NO_VALUE]                       =  "(None)",
+       [VCAP_KF_8021Q_DEI_CLS]                  =  "8021Q_DEI_CLS",
+       [VCAP_KF_8021Q_PCP_CLS]                  =  "8021Q_PCP_CLS",
+       [VCAP_KF_8021Q_VID_CLS]                  =  "8021Q_VID_CLS",
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS]           =  "8021Q_VLAN_TAGGED_IS",
+       [VCAP_KF_ARP_ADDR_SPACE_OK_IS]           =  "ARP_ADDR_SPACE_OK_IS",
+       [VCAP_KF_ARP_LEN_OK_IS]                  =  "ARP_LEN_OK_IS",
+       [VCAP_KF_ARP_OPCODE]                     =  "ARP_OPCODE",
+       [VCAP_KF_ARP_OPCODE_UNKNOWN_IS]          =  "ARP_OPCODE_UNKNOWN_IS",
+       [VCAP_KF_ARP_PROTO_SPACE_OK_IS]          =  "ARP_PROTO_SPACE_OK_IS",
+       [VCAP_KF_ARP_SENDER_MATCH_IS]            =  "ARP_SENDER_MATCH_IS",
+       [VCAP_KF_ARP_TGT_MATCH_IS]               =  "ARP_TGT_MATCH_IS",
+       [VCAP_KF_ETYPE]                          =  "ETYPE",
+       [VCAP_KF_ETYPE_LEN_IS]                   =  "ETYPE_LEN_IS",
+       [VCAP_KF_IF_IGR_PORT_MASK]               =  "IF_IGR_PORT_MASK",
+       [VCAP_KF_IF_IGR_PORT_MASK_L3]            =  "IF_IGR_PORT_MASK_L3",
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG]           =  "IF_IGR_PORT_MASK_RNG",
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL]           =  "IF_IGR_PORT_MASK_SEL",
+       [VCAP_KF_IP4_IS]                         =  "IP4_IS",
+       [VCAP_KF_ISDX_CLS]                       =  "ISDX_CLS",
+       [VCAP_KF_ISDX_GT0_IS]                    =  "ISDX_GT0_IS",
+       [VCAP_KF_L2_BC_IS]                       =  "L2_BC_IS",
+       [VCAP_KF_L2_DMAC]                        =  "L2_DMAC",
+       [VCAP_KF_L2_FWD_IS]                      =  "L2_FWD_IS",
+       [VCAP_KF_L2_MC_IS]                       =  "L2_MC_IS",
+       [VCAP_KF_L2_PAYLOAD_ETYPE]               =  "L2_PAYLOAD_ETYPE",
+       [VCAP_KF_L2_SMAC]                        =  "L2_SMAC",
+       [VCAP_KF_L3_DIP_EQ_SIP_IS]               =  "L3_DIP_EQ_SIP_IS",
+       [VCAP_KF_L3_DST_IS]                      =  "L3_DST_IS",
+       [VCAP_KF_L3_FRAGMENT_TYPE]               =  "L3_FRAGMENT_TYPE",
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN]           =  "L3_FRAG_INVLD_L4_LEN",
+       [VCAP_KF_L3_IP4_DIP]                     =  "L3_IP4_DIP",
+       [VCAP_KF_L3_IP4_SIP]                     =  "L3_IP4_SIP",
+       [VCAP_KF_L3_IP6_DIP]                     =  "L3_IP6_DIP",
+       [VCAP_KF_L3_IP6_SIP]                     =  "L3_IP6_SIP",
+       [VCAP_KF_L3_IP_PROTO]                    =  "L3_IP_PROTO",
+       [VCAP_KF_L3_OPTIONS_IS]                  =  "L3_OPTIONS_IS",
+       [VCAP_KF_L3_PAYLOAD]                     =  "L3_PAYLOAD",
+       [VCAP_KF_L3_RT_IS]                       =  "L3_RT_IS",
+       [VCAP_KF_L3_TOS]                         =  "L3_TOS",
+       [VCAP_KF_L3_TTL_GT0]                     =  "L3_TTL_GT0",
+       [VCAP_KF_L4_ACK]                         =  "L4_ACK",
+       [VCAP_KF_L4_DPORT]                       =  "L4_DPORT",
+       [VCAP_KF_L4_FIN]                         =  "L4_FIN",
+       [VCAP_KF_L4_PAYLOAD]                     =  "L4_PAYLOAD",
+       [VCAP_KF_L4_PSH]                         =  "L4_PSH",
+       [VCAP_KF_L4_RNG]                         =  "L4_RNG",
+       [VCAP_KF_L4_RST]                         =  "L4_RST",
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS]             =  "L4_SEQUENCE_EQ0_IS",
+       [VCAP_KF_L4_SPORT]                       =  "L4_SPORT",
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS]           =  "L4_SPORT_EQ_DPORT_IS",
+       [VCAP_KF_L4_SYN]                         =  "L4_SYN",
+       [VCAP_KF_L4_URG]                         =  "L4_URG",
+       [VCAP_KF_LOOKUP_FIRST_IS]                =  "LOOKUP_FIRST_IS",
+       [VCAP_KF_LOOKUP_PAG]                     =  "LOOKUP_PAG",
+       [VCAP_KF_OAM_CCM_CNTS_EQ0]               =  "OAM_CCM_CNTS_EQ0",
+       [VCAP_KF_OAM_Y1731_IS]                   =  "OAM_Y1731_IS",
+       [VCAP_KF_TCP_IS]                         =  "TCP_IS",
+       [VCAP_KF_TCP_UDP_IS]                     =  "TCP_UDP_IS",
+       [VCAP_KF_TYPE]                           =  "TYPE",
+};
+
+/* Actionfield names */
+static const char * const vcap_actionfield_names[] = {
+       [VCAP_AF_NO_VALUE]                       =  "(None)",
+       [VCAP_AF_CNT_ID]                         =  "CNT_ID",
+       [VCAP_AF_CPU_COPY_ENA]                   =  "CPU_COPY_ENA",
+       [VCAP_AF_CPU_QUEUE_NUM]                  =  "CPU_QUEUE_NUM",
+       [VCAP_AF_HIT_ME_ONCE]                    =  "HIT_ME_ONCE",
+       [VCAP_AF_IGNORE_PIPELINE_CTRL]           =  "IGNORE_PIPELINE_CTRL",
+       [VCAP_AF_INTR_ENA]                       =  "INTR_ENA",
+       [VCAP_AF_LRN_DIS]                        =  "LRN_DIS",
+       [VCAP_AF_MASK_MODE]                      =  "MASK_MODE",
+       [VCAP_AF_MATCH_ID]                       =  "MATCH_ID",
+       [VCAP_AF_MATCH_ID_MASK]                  =  "MATCH_ID_MASK",
+       [VCAP_AF_MIRROR_PROBE]                   =  "MIRROR_PROBE",
+       [VCAP_AF_PIPELINE_FORCE_ENA]             =  "PIPELINE_FORCE_ENA",
+       [VCAP_AF_PIPELINE_PT]                    =  "PIPELINE_PT",
+       [VCAP_AF_POLICE_ENA]                     =  "POLICE_ENA",
+       [VCAP_AF_POLICE_IDX]                     =  "POLICE_IDX",
+       [VCAP_AF_PORT_MASK]                      =  "PORT_MASK",
+       [VCAP_AF_RT_DIS]                         =  "RT_DIS",
+};
+
+/* VCAPs */
+const struct vcap_info sparx5_vcaps[] = {
+       [VCAP_TYPE_IS2] = {
+               .name = "is2",
+               .rows = 256,
+               .sw_count = 12,
+               .sw_width = 52,
+               .sticky_width = 1,
+               .act_width = 110,
+               .default_cnt = 73,
+               .require_cnt_dis = 0,
+               .version = 1,
+               .keyfield_set = is2_keyfield_set,
+               .keyfield_set_size = ARRAY_SIZE(is2_keyfield_set),
+               .actionfield_set = is2_actionfield_set,
+               .actionfield_set_size = ARRAY_SIZE(is2_actionfield_set),
+               .keyfield_set_map = is2_keyfield_set_map,
+               .keyfield_set_map_size = is2_keyfield_set_map_size,
+               .actionfield_set_map = is2_actionfield_set_map,
+               .actionfield_set_map_size = is2_actionfield_set_map_size,
+               .keyfield_set_typegroups = is2_keyfield_set_typegroups,
+               .actionfield_set_typegroups = is2_actionfield_set_typegroups,
+       },
+};
+
+const struct vcap_statistics sparx5_vcap_stats = {
+       .name = "sparx5",
+       .count = 1,
+       .keyfield_set_names = vcap_keyfield_set_names,
+       .actionfield_set_names = vcap_actionfield_set_names,
+       .keyfield_names = vcap_keyfield_names,
+       .actionfield_names = vcap_actionfield_names,
+};
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.h b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_ag_api.h
new file mode 100644 (file)
index 0000000..7d106f1
--- /dev/null
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API
+ */
+
+/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200.
+ * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86
+ */
+
+#ifndef __SPARX5_VCAP_AG_API_H__
+#define __SPARX5_VCAP_AG_API_H__
+
+/* VCAPs */
+extern const struct vcap_info sparx5_vcaps[];
+extern const struct vcap_statistics sparx5_vcap_stats;
+
+#endif /* __SPARX5_VCAP_AG_API_H__ */
+
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.c
new file mode 100644 (file)
index 0000000..5015326
--- /dev/null
@@ -0,0 +1,527 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip Sparx5 Switch driver VCAP implementation
+ *
+ * Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries.
+ *
+ * The Sparx5 Chip Register Model can be browsed at this location:
+ * https://github.com/microchip-ung/sparx-5_reginfo
+ */
+
+#include <linux/types.h>
+#include <linux/list.h>
+
+#include "vcap_api.h"
+#include "vcap_api_client.h"
+#include "sparx5_main_regs.h"
+#include "sparx5_main.h"
+#include "sparx5_vcap_impl.h"
+#include "sparx5_vcap_ag_api.h"
+
+#define SUPER_VCAP_BLK_SIZE 3072 /* addresses per Super VCAP block */
+#define STREAMSIZE (64 * 4)  /* bytes in the VCAP cache area */
+
+#define SPARX5_IS2_LOOKUPS 4
+
+/* IS2 port keyset selection control */
+
+/* IS2 non-ethernet traffic type keyset generation */
+enum vcap_is2_port_sel_noneth {
+       VCAP_IS2_PS_NONETH_MAC_ETYPE,
+       VCAP_IS2_PS_NONETH_CUSTOM_1,
+       VCAP_IS2_PS_NONETH_CUSTOM_2,
+       VCAP_IS2_PS_NONETH_NO_LOOKUP
+};
+
+/* IS2 IPv4 unicast traffic type keyset generation */
+enum vcap_is2_port_sel_ipv4_uc {
+       VCAP_IS2_PS_IPV4_UC_MAC_ETYPE,
+       VCAP_IS2_PS_IPV4_UC_IP4_TCP_UDP_OTHER,
+       VCAP_IS2_PS_IPV4_UC_IP_7TUPLE,
+};
+
+/* IS2 IPv4 multicast traffic type keyset generation */
+enum vcap_is2_port_sel_ipv4_mc {
+       VCAP_IS2_PS_IPV4_MC_MAC_ETYPE,
+       VCAP_IS2_PS_IPV4_MC_IP4_TCP_UDP_OTHER,
+       VCAP_IS2_PS_IPV4_MC_IP_7TUPLE,
+       VCAP_IS2_PS_IPV4_MC_IP4_VID,
+};
+
+/* IS2 IPv6 unicast traffic type keyset generation */
+enum vcap_is2_port_sel_ipv6_uc {
+       VCAP_IS2_PS_IPV6_UC_MAC_ETYPE,
+       VCAP_IS2_PS_IPV6_UC_IP_7TUPLE,
+       VCAP_IS2_PS_IPV6_UC_IP6_STD,
+       VCAP_IS2_PS_IPV6_UC_IP4_TCP_UDP_OTHER,
+};
+
+/* IS2 IPv6 multicast traffic type keyset generation */
+enum vcap_is2_port_sel_ipv6_mc {
+       VCAP_IS2_PS_IPV6_MC_MAC_ETYPE,
+       VCAP_IS2_PS_IPV6_MC_IP_7TUPLE,
+       VCAP_IS2_PS_IPV6_MC_IP6_VID,
+       VCAP_IS2_PS_IPV6_MC_IP6_STD,
+       VCAP_IS2_PS_IPV6_MC_IP4_TCP_UDP_OTHER,
+};
+
+/* IS2 ARP traffic type keyset generation */
+enum vcap_is2_port_sel_arp {
+       VCAP_IS2_PS_ARP_MAC_ETYPE,
+       VCAP_IS2_PS_ARP_ARP,
+};
+
+static struct sparx5_vcap_inst {
+       enum vcap_type vtype; /* type of vcap */
+       int vinst; /* instance number within the same type */
+       int lookups; /* number of lookups in this vcap type */
+       int lookups_per_instance; /* number of lookups in this instance */
+       int first_cid; /* first chain id in this vcap */
+       int last_cid; /* last chain id in this vcap */
+       int count; /* number of available addresses, not in super vcap */
+       int map_id; /* id in the super vcap block mapping (if applicable) */
+       int blockno; /* starting block in super vcap (if applicable) */
+       int blocks; /* number of blocks in super vcap (if applicable) */
+} sparx5_vcap_inst_cfg[] = {
+       {
+               .vtype = VCAP_TYPE_IS2, /* IS2-0 */
+               .vinst = 0,
+               .map_id = 4,
+               .lookups = SPARX5_IS2_LOOKUPS,
+               .lookups_per_instance = SPARX5_IS2_LOOKUPS / 2,
+               .first_cid = SPARX5_VCAP_CID_IS2_L0,
+               .last_cid = SPARX5_VCAP_CID_IS2_L2 - 1,
+               .blockno = 0, /* Maps block 0-1 */
+               .blocks = 2,
+       },
+       {
+               .vtype = VCAP_TYPE_IS2, /* IS2-1 */
+               .vinst = 1,
+               .map_id = 5,
+               .lookups = SPARX5_IS2_LOOKUPS,
+               .lookups_per_instance = SPARX5_IS2_LOOKUPS / 2,
+               .first_cid = SPARX5_VCAP_CID_IS2_L2,
+               .last_cid = SPARX5_VCAP_CID_IS2_MAX,
+               .blockno = 2, /* Maps block 2-3 */
+               .blocks = 2,
+       },
+};
+
+/* Await the super VCAP completion of the current operation */
+static void sparx5_vcap_wait_super_update(struct sparx5 *sparx5)
+{
+       u32 value;
+
+       read_poll_timeout(spx5_rd, value,
+                         !VCAP_SUPER_CTRL_UPDATE_SHOT_GET(value), 500, 10000,
+                         false, sparx5, VCAP_SUPER_CTRL);
+}
+
+/* Initializing a VCAP address range: only IS2 for now */
+static void _sparx5_vcap_range_init(struct sparx5 *sparx5,
+                                   struct vcap_admin *admin,
+                                   u32 addr, u32 count)
+{
+       u32 size = count - 1;
+
+       spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) |
+               VCAP_SUPER_CFG_MV_SIZE_SET(size),
+               sparx5, VCAP_SUPER_CFG);
+       spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(VCAP_CMD_INITIALIZE) |
+               VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET(0) |
+               VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET(0) |
+               VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET(0) |
+               VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) |
+               VCAP_SUPER_CTRL_CLEAR_CACHE_SET(true) |
+               VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true),
+               sparx5, VCAP_SUPER_CTRL);
+       sparx5_vcap_wait_super_update(sparx5);
+}
+
+/* Initializing VCAP rule data area */
+static void sparx5_vcap_block_init(struct sparx5 *sparx5,
+                                  struct vcap_admin *admin)
+{
+       _sparx5_vcap_range_init(sparx5, admin, admin->first_valid_addr,
+                               admin->last_valid_addr -
+                                       admin->first_valid_addr);
+}
+
+/* Get the keyset name from the sparx5 VCAP model */
+static const char *sparx5_vcap_keyset_name(struct net_device *ndev,
+                                          enum vcap_keyfield_set keyset)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+
+       return port->sparx5->vcap_ctrl->stats->keyfield_set_names[keyset];
+}
+
+/* Check if this is the first lookup of IS2 */
+static bool sparx5_vcap_is2_is_first_chain(struct vcap_rule *rule)
+{
+       return (rule->vcap_chain_id >= SPARX5_VCAP_CID_IS2_L0 &&
+               rule->vcap_chain_id < SPARX5_VCAP_CID_IS2_L1) ||
+               ((rule->vcap_chain_id >= SPARX5_VCAP_CID_IS2_L2 &&
+                 rule->vcap_chain_id < SPARX5_VCAP_CID_IS2_L3));
+}
+
+/* Set the narrow range ingress port mask on a rule */
+static void sparx5_vcap_add_range_port_mask(struct vcap_rule *rule,
+                                           struct net_device *ndev)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       u32 port_mask;
+       u32 range;
+
+       range = port->portno / BITS_PER_TYPE(u32);
+       /* Port bit set to match-any */
+       port_mask = ~BIT(port->portno % BITS_PER_TYPE(u32));
+       vcap_rule_add_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK_SEL, 0, 0xf);
+       vcap_rule_add_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK_RNG, range, 0xf);
+       vcap_rule_add_key_u32(rule, VCAP_KF_IF_IGR_PORT_MASK, 0, port_mask);
+}
+
+/* Set the wide range ingress port mask on a rule */
+static void sparx5_vcap_add_wide_port_mask(struct vcap_rule *rule,
+                                          struct net_device *ndev)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct vcap_u72_key port_mask;
+       u32 range;
+
+       /* Port bit set to match-any */
+       memset(port_mask.value, 0, sizeof(port_mask.value));
+       memset(port_mask.mask, 0xff, sizeof(port_mask.mask));
+       range = port->portno / BITS_PER_BYTE;
+       port_mask.mask[range] = ~BIT(port->portno % BITS_PER_BYTE);
+       vcap_rule_add_key_u72(rule, VCAP_KF_IF_IGR_PORT_MASK, &port_mask);
+}
+
+/* API callback used for validating a field keyset (check the port keysets) */
+static enum vcap_keyfield_set
+sparx5_vcap_validate_keyset(struct net_device *ndev,
+                           struct vcap_admin *admin,
+                           struct vcap_rule *rule,
+                           struct vcap_keyset_list *kslist,
+                           u16 l3_proto)
+{
+       if (!kslist || kslist->cnt == 0)
+               return VCAP_KFS_NO_VALUE;
+       /* for now just return whatever the API suggests */
+       return kslist->keysets[0];
+}
+
+/* API callback used for adding default fields to a rule */
+static void sparx5_vcap_add_default_fields(struct net_device *ndev,
+                                          struct vcap_admin *admin,
+                                          struct vcap_rule *rule)
+{
+       const struct vcap_field *field;
+
+       field = vcap_lookup_keyfield(rule, VCAP_KF_IF_IGR_PORT_MASK);
+       if (field && field->width == SPX5_PORTS)
+               sparx5_vcap_add_wide_port_mask(rule, ndev);
+       else if (field && field->width == BITS_PER_TYPE(u32))
+               sparx5_vcap_add_range_port_mask(rule, ndev);
+       else
+               pr_err("%s:%d: %s: could not add an ingress port mask for: %s\n",
+                      __func__, __LINE__, netdev_name(ndev),
+                      sparx5_vcap_keyset_name(ndev, rule->keyset));
+       /* add the lookup bit */
+       if (sparx5_vcap_is2_is_first_chain(rule))
+               vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1);
+       else
+               vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_0);
+}
+
+/* API callback used for erasing the vcap cache area (not the register area) */
+static void sparx5_vcap_cache_erase(struct vcap_admin *admin)
+{
+       memset(admin->cache.keystream, 0, STREAMSIZE);
+       memset(admin->cache.maskstream, 0, STREAMSIZE);
+       memset(admin->cache.actionstream, 0, STREAMSIZE);
+       memset(&admin->cache.counter, 0, sizeof(admin->cache.counter));
+}
+
+/* API callback used for writing to the VCAP cache */
+static void sparx5_vcap_cache_write(struct net_device *ndev,
+                                   struct vcap_admin *admin,
+                                   enum vcap_selection sel,
+                                   u32 start,
+                                   u32 count)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct sparx5 *sparx5 = port->sparx5;
+       u32 *keystr, *mskstr, *actstr;
+       int idx;
+
+       keystr = &admin->cache.keystream[start];
+       mskstr = &admin->cache.maskstream[start];
+       actstr = &admin->cache.actionstream[start];
+       switch (sel) {
+       case VCAP_SEL_ENTRY:
+               for (idx = 0; idx < count; ++idx) {
+                       /* Avoid 'match-off' by setting value & mask */
+                       spx5_wr(keystr[idx] & mskstr[idx], sparx5,
+                               VCAP_SUPER_VCAP_ENTRY_DAT(idx));
+                       spx5_wr(~mskstr[idx], sparx5,
+                               VCAP_SUPER_VCAP_MASK_DAT(idx));
+               }
+               break;
+       case VCAP_SEL_ACTION:
+               for (idx = 0; idx < count; ++idx)
+                       spx5_wr(actstr[idx], sparx5,
+                               VCAP_SUPER_VCAP_ACTION_DAT(idx));
+               break;
+       case VCAP_SEL_ALL:
+               pr_err("%s:%d: cannot write all streams at once\n",
+                      __func__, __LINE__);
+               break;
+       default:
+               break;
+       }
+}
+
+/* API callback used for reading from the VCAP into the VCAP cache */
+static void sparx5_vcap_cache_read(struct net_device *ndev,
+                                  struct vcap_admin *admin,
+                                  enum vcap_selection sel, u32 start,
+                                  u32 count)
+{
+       /* this will be added later */
+}
+
+/* API callback used for initializing a VCAP address range */
+static void sparx5_vcap_range_init(struct net_device *ndev,
+                                  struct vcap_admin *admin, u32 addr,
+                                  u32 count)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct sparx5 *sparx5 = port->sparx5;
+
+       _sparx5_vcap_range_init(sparx5, admin, addr, count);
+}
+
+/* API callback used for updating the VCAP cache */
+static void sparx5_vcap_update(struct net_device *ndev,
+                              struct vcap_admin *admin, enum vcap_command cmd,
+                              enum vcap_selection sel, u32 addr)
+{
+       struct sparx5_port *port = netdev_priv(ndev);
+       struct sparx5 *sparx5 = port->sparx5;
+       bool clear;
+
+       clear = (cmd == VCAP_CMD_INITIALIZE);
+       spx5_wr(VCAP_SUPER_CFG_MV_NUM_POS_SET(0) |
+               VCAP_SUPER_CFG_MV_SIZE_SET(0), sparx5, VCAP_SUPER_CFG);
+       spx5_wr(VCAP_SUPER_CTRL_UPDATE_CMD_SET(cmd) |
+               VCAP_SUPER_CTRL_UPDATE_ENTRY_DIS_SET((VCAP_SEL_ENTRY & sel) == 0) |
+               VCAP_SUPER_CTRL_UPDATE_ACTION_DIS_SET((VCAP_SEL_ACTION & sel) == 0) |
+               VCAP_SUPER_CTRL_UPDATE_CNT_DIS_SET((VCAP_SEL_COUNTER & sel) == 0) |
+               VCAP_SUPER_CTRL_UPDATE_ADDR_SET(addr) |
+               VCAP_SUPER_CTRL_CLEAR_CACHE_SET(clear) |
+               VCAP_SUPER_CTRL_UPDATE_SHOT_SET(true),
+               sparx5, VCAP_SUPER_CTRL);
+       sparx5_vcap_wait_super_update(sparx5);
+}
+
+/* API callback used for moving a block of rules in the VCAP */
+static void sparx5_vcap_move(struct net_device *ndev, struct vcap_admin *admin,
+                            u32 addr, int offset, int count)
+{
+       /* this will be added later */
+}
+
+/* Provide port information via a callback interface */
+static int sparx5_port_info(struct net_device *ndev, enum vcap_type vtype,
+                           int (*pf)(void *out, int arg, const char *fmt, ...),
+                           void *out, int arg)
+{
+       /* this will be added later */
+       return 0;
+}
+
+/* API callback operations: only IS2 is supported for now */
+static struct vcap_operations sparx5_vcap_ops = {
+       .validate_keyset = sparx5_vcap_validate_keyset,
+       .add_default_fields = sparx5_vcap_add_default_fields,
+       .cache_erase = sparx5_vcap_cache_erase,
+       .cache_write = sparx5_vcap_cache_write,
+       .cache_read = sparx5_vcap_cache_read,
+       .init = sparx5_vcap_range_init,
+       .update = sparx5_vcap_update,
+       .move = sparx5_vcap_move,
+       .port_info = sparx5_port_info,
+};
+
+/* Enable lookups per port and set the keyset generation: only IS2 for now */
+static void sparx5_vcap_port_key_selection(struct sparx5 *sparx5,
+                                          struct vcap_admin *admin)
+{
+       int portno, lookup;
+       u32 keysel;
+
+       /* enable all 4 lookups on all ports */
+       for (portno = 0; portno < SPX5_PORTS; ++portno)
+               spx5_wr(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0xf), sparx5,
+                       ANA_ACL_VCAP_S2_CFG(portno));
+
+       /* all traffic types generate the MAC_ETYPE keyset for now in all
+        * lookups on all ports
+        */
+       keysel = ANA_ACL_VCAP_S2_KEY_SEL_KEY_SEL_ENA_SET(true) |
+               ANA_ACL_VCAP_S2_KEY_SEL_NON_ETH_KEY_SEL_SET(VCAP_IS2_PS_NONETH_MAC_ETYPE) |
+               ANA_ACL_VCAP_S2_KEY_SEL_IP4_MC_KEY_SEL_SET(VCAP_IS2_PS_IPV4_MC_MAC_ETYPE) |
+               ANA_ACL_VCAP_S2_KEY_SEL_IP4_UC_KEY_SEL_SET(VCAP_IS2_PS_IPV4_UC_MAC_ETYPE) |
+               ANA_ACL_VCAP_S2_KEY_SEL_IP6_MC_KEY_SEL_SET(VCAP_IS2_PS_IPV6_MC_MAC_ETYPE) |
+               ANA_ACL_VCAP_S2_KEY_SEL_IP6_UC_KEY_SEL_SET(VCAP_IS2_PS_IPV6_UC_MAC_ETYPE) |
+               ANA_ACL_VCAP_S2_KEY_SEL_ARP_KEY_SEL_SET(VCAP_IS2_PS_ARP_MAC_ETYPE);
+       for (lookup = 0; lookup < admin->lookups; ++lookup) {
+               for (portno = 0; portno < SPX5_PORTS; ++portno) {
+                       spx5_wr(keysel, sparx5,
+                               ANA_ACL_VCAP_S2_KEY_SEL(portno, lookup));
+               }
+       }
+}
+
+/* Disable lookups per port and set the keyset generation: only IS2 for now */
+static void sparx5_vcap_port_key_deselection(struct sparx5 *sparx5,
+                                            struct vcap_admin *admin)
+{
+       int portno;
+
+       for (portno = 0; portno < SPX5_PORTS; ++portno)
+               spx5_rmw(ANA_ACL_VCAP_S2_CFG_SEC_ENA_SET(0),
+                        ANA_ACL_VCAP_S2_CFG_SEC_ENA,
+                        sparx5,
+                        ANA_ACL_VCAP_S2_CFG(portno));
+}
+
+static void sparx5_vcap_admin_free(struct vcap_admin *admin)
+{
+       if (!admin)
+               return;
+       kfree(admin->cache.keystream);
+       kfree(admin->cache.maskstream);
+       kfree(admin->cache.actionstream);
+       kfree(admin);
+}
+
+/* Allocate a vcap instance with a rule list and a cache area */
+static struct vcap_admin *
+sparx5_vcap_admin_alloc(struct sparx5 *sparx5, struct vcap_control *ctrl,
+                       const struct sparx5_vcap_inst *cfg)
+{
+       struct vcap_admin *admin;
+
+       admin = kzalloc(sizeof(*admin), GFP_KERNEL);
+       if (!admin)
+               return ERR_PTR(-ENOMEM);
+       INIT_LIST_HEAD(&admin->list);
+       INIT_LIST_HEAD(&admin->rules);
+       admin->vtype = cfg->vtype;
+       admin->vinst = cfg->vinst;
+       admin->lookups = cfg->lookups;
+       admin->lookups_per_instance = cfg->lookups_per_instance;
+       admin->first_cid = cfg->first_cid;
+       admin->last_cid = cfg->last_cid;
+       admin->cache.keystream =
+               kzalloc(STREAMSIZE, GFP_KERNEL);
+       admin->cache.maskstream =
+               kzalloc(STREAMSIZE, GFP_KERNEL);
+       admin->cache.actionstream =
+               kzalloc(STREAMSIZE, GFP_KERNEL);
+       if (!admin->cache.keystream || !admin->cache.maskstream ||
+           !admin->cache.actionstream) {
+               sparx5_vcap_admin_free(admin);
+               return ERR_PTR(-ENOMEM);
+       }
+       return admin;
+}
+
+/* Do block allocations and provide addresses for VCAP instances */
+static void sparx5_vcap_block_alloc(struct sparx5 *sparx5,
+                                   struct vcap_admin *admin,
+                                   const struct sparx5_vcap_inst *cfg)
+{
+       int idx;
+
+       /* Super VCAP block mapping and address configuration. Block 0
+        * is assigned addresses 0 through 3071, block 1 is assigned
+        * addresses 3072 through 6143, and so on.
+        */
+       for (idx = cfg->blockno; idx < cfg->blockno + cfg->blocks; ++idx) {
+               spx5_wr(VCAP_SUPER_IDX_CORE_IDX_SET(idx), sparx5,
+                       VCAP_SUPER_IDX);
+               spx5_wr(VCAP_SUPER_MAP_CORE_MAP_SET(cfg->map_id), sparx5,
+                       VCAP_SUPER_MAP);
+       }
+       admin->first_valid_addr = cfg->blockno * SUPER_VCAP_BLK_SIZE;
+       admin->last_used_addr = admin->first_valid_addr +
+               cfg->blocks * SUPER_VCAP_BLK_SIZE;
+       admin->last_valid_addr = admin->last_used_addr - 1;
+}
+
+/* Allocate a vcap control and vcap instances and configure the system */
+int sparx5_vcap_init(struct sparx5 *sparx5)
+{
+       const struct sparx5_vcap_inst *cfg;
+       struct vcap_control *ctrl;
+       struct vcap_admin *admin;
+       int err = 0, idx;
+
+       /* Create a VCAP control instance that owns the platform specific VCAP
+        * model with VCAP instances and information about keysets, keys,
+        * actionsets and actions
+        * - Create administrative state for each available VCAP
+        *   - Lists of rules
+        *   - Address information
+        *   - Initialize VCAP blocks
+        *   - Configure port keysets
+        */
+       ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
+       if (!ctrl)
+               return -ENOMEM;
+
+       sparx5->vcap_ctrl = ctrl;
+       /* select the sparx5 VCAP model */
+       ctrl->vcaps = sparx5_vcaps;
+       ctrl->stats = &sparx5_vcap_stats;
+       /* Setup callbacks to allow the API to use the VCAP HW */
+       ctrl->ops = &sparx5_vcap_ops;
+
+       INIT_LIST_HEAD(&ctrl->list);
+       for (idx = 0; idx < ARRAY_SIZE(sparx5_vcap_inst_cfg); ++idx) {
+               cfg = &sparx5_vcap_inst_cfg[idx];
+               admin = sparx5_vcap_admin_alloc(sparx5, ctrl, cfg);
+               if (IS_ERR(admin)) {
+                       err = PTR_ERR(admin);
+                       pr_err("%s:%d: vcap allocation failed: %d\n",
+                              __func__, __LINE__, err);
+                       return err;
+               }
+               sparx5_vcap_block_alloc(sparx5, admin, cfg);
+               sparx5_vcap_block_init(sparx5, admin);
+               if (cfg->vinst == 0)
+                       sparx5_vcap_port_key_selection(sparx5, admin);
+               list_add_tail(&admin->list, &ctrl->list);
+       }
+
+       return err;
+}
+
+void sparx5_vcap_destroy(struct sparx5 *sparx5)
+{
+       struct vcap_control *ctrl = sparx5->vcap_ctrl;
+       struct vcap_admin *admin, *admin_next;
+
+       if (!ctrl)
+               return;
+
+       list_for_each_entry_safe(admin, admin_next, &ctrl->list, list) {
+               sparx5_vcap_port_key_deselection(sparx5, admin);
+               vcap_del_rules(ctrl, admin);
+               list_del(&admin->list);
+               sparx5_vcap_admin_free(admin);
+       }
+       kfree(ctrl);
+}
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h b/drivers/net/ethernet/microchip/sparx5/sparx5_vcap_impl.h
new file mode 100644 (file)
index 0000000..8e44ebd
--- /dev/null
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Microchip Sparx5 Switch driver VCAP implementation
+ *
+ * Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries.
+ *
+ * The Sparx5 Chip Register Model can be browsed at this location:
+ * https://github.com/microchip-ung/sparx-5_reginfo
+ */
+
+#ifndef __SPARX5_VCAP_IMPL_H__
+#define __SPARX5_VCAP_IMPL_H__
+
+#define SPARX5_VCAP_CID_IS2_L0 VCAP_CID_INGRESS_STAGE2_L0 /* IS2 lookup 0 */
+#define SPARX5_VCAP_CID_IS2_L1 VCAP_CID_INGRESS_STAGE2_L1 /* IS2 lookup 1 */
+#define SPARX5_VCAP_CID_IS2_L2 VCAP_CID_INGRESS_STAGE2_L2 /* IS2 lookup 2 */
+#define SPARX5_VCAP_CID_IS2_L3 VCAP_CID_INGRESS_STAGE2_L3 /* IS2 lookup 3 */
+#define SPARX5_VCAP_CID_IS2_MAX \
+       (VCAP_CID_INGRESS_STAGE2_L3 + VCAP_CID_LOOKUP_SIZE - 1) /* IS2 Max */
+
+#endif /* __SPARX5_VCAP_IMPL_H__ */
diff --git a/drivers/net/ethernet/microchip/vcap/Kconfig b/drivers/net/ethernet/microchip/vcap/Kconfig
new file mode 100644 (file)
index 0000000..1af30a3
--- /dev/null
@@ -0,0 +1,52 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Microchip VCAP API configuration
+#
+
+if NET_VENDOR_MICROCHIP
+
+config VCAP
+       bool "VCAP (Versatile Content-Aware Processor) library"
+       help
+         Provides the basic VCAP functionality for multiple Microchip switchcores
+
+         A VCAP is essentially a TCAM with rules consisting of
+
+           - Programmable key fields
+           - Programmable action fields
+           - A counter (which may be only one bit wide)
+
+         Besides this, each VCAP has:
+
+           - A number of lookups
+           - A keyset configuration per port per lookup
+
+         The VCAP implementation provides switchcore independent handling of rules
+         and supports:
+
+           - Creating and deleting rules
+           - Updating and getting rules
+
+         The platform specific configuration as well as the platform specific model
+         of the VCAP instances are attached to the VCAP API and a client can then
+         access rules via the API in a platform independent way, with the
+         limitations that each VCAP has in terms of its supported keys and actions.
+
+         Different switchcores will have different VCAP instances with different
+         characteristics. Look in the datasheet for the VCAP specifications for the
+         specific switchcore.
+
+config VCAP_KUNIT_TEST
+       bool "KUnit test for VCAP library" if !KUNIT_ALL_TESTS
+       depends on KUNIT
+       depends on VCAP=y
+       default KUNIT_ALL_TESTS
+       help
+         This builds unit tests for the VCAP library.
+
+         For more information on KUnit and unit tests in general, please refer
+         to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+         If unsure, say N.
+
+endif # NET_VENDOR_MICROCHIP
diff --git a/drivers/net/ethernet/microchip/vcap/Makefile b/drivers/net/ethernet/microchip/vcap/Makefile
new file mode 100644 (file)
index 0000000..b377569
--- /dev/null
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for the Microchip VCAP API
+#
+
+obj-$(CONFIG_VCAP) += vcap.o
+obj-$(CONFIG_VCAP_KUNIT_TEST) += vcap_model_kunit.o
+
+vcap-y += vcap_api.o
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h b/drivers/net/ethernet/microchip/vcap/vcap_ag_api.h
new file mode 100644 (file)
index 0000000..804d57b
--- /dev/null
@@ -0,0 +1,326 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API
+ */
+
+/* This file is autogenerated by cml-utils 2022-10-13 10:04:41 +0200.
+ * Commit ID: fd7cafd175899f0672c73afb3a30fc872500ae86
+ */
+
+#ifndef __VCAP_AG_API__
+#define __VCAP_AG_API__
+
+enum vcap_type {
+       VCAP_TYPE_IS2,
+       VCAP_TYPE_MAX
+};
+
+/* Keyfieldset names with origin information */
+enum vcap_keyfield_set {
+       VCAP_KFS_NO_VALUE,          /* initial value */
+       VCAP_KFS_ARP,               /* sparx5 is2 X6 */
+       VCAP_KFS_IP4_OTHER,         /* sparx5 is2 X6 */
+       VCAP_KFS_IP4_TCP_UDP,       /* sparx5 is2 X6 */
+       VCAP_KFS_IP6_STD,           /* sparx5 is2 X6 */
+       VCAP_KFS_IP_7TUPLE,         /* sparx5 is2 X12 */
+       VCAP_KFS_MAC_ETYPE,         /* sparx5 is2 X6 */
+};
+
+/* List of keyfields with description
+ *
+ * Keys ending in _IS are booleans derived from frame data
+ * Keys ending in _CLS are classified frame data
+ *
+ * VCAP_KF_8021Q_DEI_CLS: W1, sparx5: is2
+ *   Classified DEI
+ * VCAP_KF_8021Q_PCP_CLS: W3, sparx5: is2
+ *   Classified PCP
+ * VCAP_KF_8021Q_VID_CLS: W13, sparx5: is2
+ *   Classified VID
+ * VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2
+ *   Sparx5: Set if frame was received with a VLAN tag, LAN966x: Set if frame has
+ *   one or more Q-tags. Independent of port VLAN awareness
+ * VCAP_KF_ARP_ADDR_SPACE_OK_IS: W1, sparx5: is2
+ *   Set if hardware address is Ethernet
+ * VCAP_KF_ARP_LEN_OK_IS: W1, sparx5: is2
+ *   Set if hardware address length = 6 (Ethernet) and IP address length = 4 (IP).
+ * VCAP_KF_ARP_OPCODE: W2, sparx5: is2
+ *   ARP opcode
+ * VCAP_KF_ARP_OPCODE_UNKNOWN_IS: W1, sparx5: is2
+ *   Set if not one of the codes defined in VCAP_KF_ARP_OPCODE
+ * VCAP_KF_ARP_PROTO_SPACE_OK_IS: W1, sparx5: is2
+ *   Set if protocol address space is 0x0800
+ * VCAP_KF_ARP_SENDER_MATCH_IS: W1, sparx5: is2
+ *   Sender Hardware Address = SMAC (ARP)
+ * VCAP_KF_ARP_TGT_MATCH_IS: W1, sparx5: is2
+ *   Target Hardware Address = SMAC (RARP)
+ * VCAP_KF_ETYPE: W16, sparx5: is2
+ *   Ethernet type
+ * VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is2
+ *   Set if frame has EtherType >= 0x600
+ * VCAP_KF_IF_IGR_PORT_MASK: sparx5 is2 W32, sparx5 is2 W65
+ *   Ingress port mask, one bit per port/erleg
+ * VCAP_KF_IF_IGR_PORT_MASK_L3: W1, sparx5: is2
+ *   If set, IF_IGR_PORT_MASK, IF_IGR_PORT_MASK_RNG, and IF_IGR_PORT_MASK_SEL are
+ *   used to specify L3 interfaces
+ * VCAP_KF_IF_IGR_PORT_MASK_RNG: W4, sparx5: is2
+ *   Range selector for IF_IGR_PORT_MASK. Specifies which group of 32 ports is
+ *   available in IF_IGR_PORT_MASK
+ * VCAP_KF_IF_IGR_PORT_MASK_SEL: W2, sparx5: is2
+ *   Mode selector for IF_IGR_PORT_MASK, applicable when IF_IGR_PORT_MASK_L3 == 0.
+ *   Mapping: 0: DEFAULT 1: LOOPBACK 2: MASQUERADE 3: CPU_VD
+ * VCAP_KF_IP4_IS: W1, sparx5: is2
+ *   Set if frame has EtherType = 0x800 and IP version = 4
+ * VCAP_KF_ISDX_CLS: W12, sparx5: is2
+ *   Classified ISDX
+ * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2
+ *   Set if classified ISDX > 0
+ * VCAP_KF_L2_BC_IS: W1, sparx5: is2
+ *   Set if frame’s destination MAC address is the broadcast address
+ *   (FF-FF-FF-FF-FF-FF).
+ * VCAP_KF_L2_DMAC: W48, sparx5: is2
+ *   Destination MAC address
+ * VCAP_KF_L2_FWD_IS: W1, sparx5: is2
+ *   Set if the frame is allowed to be forwarded to front ports
+ * VCAP_KF_L2_MC_IS: W1, sparx5: is2
+ *   Set if frame’s destination MAC address is a multicast address (bit 40 = 1).
+ * VCAP_KF_L2_PAYLOAD_ETYPE: W64, sparx5: is2
+ *   Byte 0-7 of L2 payload after Type/Len field and overloading for OAM
+ * VCAP_KF_L2_SMAC: W48, sparx5: is2
+ *   Source MAC address
+ * VCAP_KF_L3_DIP_EQ_SIP_IS: W1, sparx5: is2
+ *   Set if Src IP matches Dst IP address
+ * VCAP_KF_L3_DST_IS: W1, sparx5: is2
+ *   Set if lookup is done for egress router leg
+ * VCAP_KF_L3_FRAGMENT_TYPE: W2, sparx5: is2
+ *   L3 Fragmentation type (none, initial, suspicious, valid follow up)
+ * VCAP_KF_L3_FRAG_INVLD_L4_LEN: W1, sparx5: is2
+ *   Set if frame's L4 length is less than
+ *   ANA_CL:COMMON:CLM_FRAGMENT_CFG.L4_MIN_LEN
+ * VCAP_KF_L3_IP4_DIP: W32, sparx5: is2
+ *   Destination IPv4 Address
+ * VCAP_KF_L3_IP4_SIP: W32, sparx5: is2
+ *   Source IPv4 Address
+ * VCAP_KF_L3_IP6_DIP: W128, sparx5: is2
+ *   Sparx5: Full IPv6 DIP, LAN966x: Either Full IPv6 DIP or a subset depending on
+ *   frame type
+ * VCAP_KF_L3_IP6_SIP: W128, sparx5: is2
+ *   Sparx5: Full IPv6 SIP, LAN966x: Either Full IPv6 SIP or a subset depending on
+ *   frame type
+ * VCAP_KF_L3_IP_PROTO: W8, sparx5: is2
+ *   IPv4 frames: IP protocol. IPv6 frames: Next header, same as for IPV4
+ * VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is2
+ *   Set if IPv4 frame contains options (IP len > 5)
+ * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40
+ *   Sparx5: Payload bytes after IP header. IPv4: IPv4 options are not parsed so
+ *   payload is always taken 20 bytes after the start of the IPv4 header, LAN966x:
+ *   Bytes 0-6 after IP header
+ * VCAP_KF_L3_RT_IS: W1, sparx5: is2
+ *   Set if frame has hit a router leg
+ * VCAP_KF_L3_TOS: W8, sparx5: is2
+ *   Sparx5: Frame's IPv4/IPv6 DSCP and ECN fields, LAN966x: IP TOS field
+ * VCAP_KF_L3_TTL_GT0: W1, sparx5: is2
+ *   Set if IPv4 TTL / IPv6 hop limit is greater than 0
+ * VCAP_KF_L4_ACK: W1, sparx5: is2
+ *   Sparx5 and LAN966x: TCP flag ACK, LAN966x only: PTP over UDP: flagField bit 2
+ *   (unicastFlag)
+ * VCAP_KF_L4_DPORT: W16, sparx5: is2
+ *   Sparx5: TCP/UDP destination port. Overloading for IP_7TUPLE: Non-TCP/UDP IP
+ *   frames: L4_DPORT = L3_IP_PROTO, LAN966x: TCP/UDP destination port
+ * VCAP_KF_L4_FIN: W1, sparx5: is2
+ *   TCP flag FIN, LAN966x: TCP flag FIN, and for PTP over UDP: messageType bit 1
+ * VCAP_KF_L4_PAYLOAD: W64, sparx5: is2
+ *   Payload bytes after TCP/UDP header Overloading for IP_7TUPLE: Non TCP/UDP
+ *   frames: Payload bytes 0–7 after IP header. IPv4 options are not parsed so
+ *   payload is always taken 20 bytes after the start of the IPv4 header for non
+ *   TCP/UDP IPv4 frames
+ * VCAP_KF_L4_PSH: W1, sparx5: is2
+ *   Sparx5: TCP flag PSH, LAN966x: TCP: TCP flag PSH. PTP over UDP: flagField bit
+ *   1 (twoStepFlag)
+ * VCAP_KF_L4_RNG: W16, sparx5: is2
+ *   Range checker bitmask (one for each range checker). Input into range checkers
+ *   is taken from classified results (VID, DSCP) and frame (SPORT, DPORT, ETYPE,
+ *   outer VID, inner VID)
+ * VCAP_KF_L4_RST: W1, sparx5: is2
+ *   Sparx5: TCP flag RST, LAN966x: TCP: TCP flag RST. PTP over UDP: messageType
+ *   bit 3
+ * VCAP_KF_L4_SEQUENCE_EQ0_IS: W1, sparx5: is2
+ *   Set if TCP sequence number is 0, LAN966x: Overlayed with PTP over UDP:
+ *   messageType bit 0
+ * VCAP_KF_L4_SPORT: W16, sparx5: is2
+ *   TCP/UDP source port
+ * VCAP_KF_L4_SPORT_EQ_DPORT_IS: W1, sparx5: is2
+ *   Set if UDP or TCP source port equals UDP or TCP destination port
+ * VCAP_KF_L4_SYN: W1, sparx5: is2
+ *   Sparx5: TCP flag SYN, LAN966x: TCP: TCP flag SYN. PTP over UDP: messageType
+ *   bit 2
+ * VCAP_KF_L4_URG: W1, sparx5: is2
+ *   Sparx5: TCP flag URG, LAN966x: TCP: TCP flag URG. PTP over UDP: flagField bit
+ *   7 (reserved)
+ * VCAP_KF_LOOKUP_FIRST_IS: W1, sparx5: is2
+ *   Selects between entries relevant for first and second lookup. Set for first
+ *   lookup, cleared for second lookup.
+ * VCAP_KF_LOOKUP_PAG: W8, sparx5: is2
+ *   Classified Policy Association Group: chains rules from IS1/CLM to IS2
+ * VCAP_KF_OAM_CCM_CNTS_EQ0: W1, sparx5: is2
+ *   Dual-ended loss measurement counters in CCM frames are all zero
+ * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is2
+ *   Set if frame’s EtherType = 0x8902
+ * VCAP_KF_TCP_IS: W1, sparx5: is2
+ *   Set if frame is IPv4 TCP frame (IP protocol = 6) or IPv6 TCP frames (Next
+ *   header = 6)
+ * VCAP_KF_TCP_UDP_IS: W1, sparx5: is2
+ *   Set if frame is IPv4/IPv6 TCP or UDP frame (IP protocol/next header equals 6
+ *   or 17)
+ * VCAP_KF_TYPE: sparx5 is2 W4, sparx5 is2 W2
+ *   Keyset type id - set by the API
+ */
+
+/* Keyfield names */
+enum vcap_key_field {
+       VCAP_KF_NO_VALUE,  /* initial value */
+       VCAP_KF_8021Q_DEI_CLS,
+       VCAP_KF_8021Q_PCP_CLS,
+       VCAP_KF_8021Q_VID_CLS,
+       VCAP_KF_8021Q_VLAN_TAGGED_IS,
+       VCAP_KF_ARP_ADDR_SPACE_OK_IS,
+       VCAP_KF_ARP_LEN_OK_IS,
+       VCAP_KF_ARP_OPCODE,
+       VCAP_KF_ARP_OPCODE_UNKNOWN_IS,
+       VCAP_KF_ARP_PROTO_SPACE_OK_IS,
+       VCAP_KF_ARP_SENDER_MATCH_IS,
+       VCAP_KF_ARP_TGT_MATCH_IS,
+       VCAP_KF_ETYPE,
+       VCAP_KF_ETYPE_LEN_IS,
+       VCAP_KF_IF_IGR_PORT_MASK,
+       VCAP_KF_IF_IGR_PORT_MASK_L3,
+       VCAP_KF_IF_IGR_PORT_MASK_RNG,
+       VCAP_KF_IF_IGR_PORT_MASK_SEL,
+       VCAP_KF_IP4_IS,
+       VCAP_KF_ISDX_CLS,
+       VCAP_KF_ISDX_GT0_IS,
+       VCAP_KF_L2_BC_IS,
+       VCAP_KF_L2_DMAC,
+       VCAP_KF_L2_FWD_IS,
+       VCAP_KF_L2_MC_IS,
+       VCAP_KF_L2_PAYLOAD_ETYPE,
+       VCAP_KF_L2_SMAC,
+       VCAP_KF_L3_DIP_EQ_SIP_IS,
+       VCAP_KF_L3_DST_IS,
+       VCAP_KF_L3_FRAGMENT_TYPE,
+       VCAP_KF_L3_FRAG_INVLD_L4_LEN,
+       VCAP_KF_L3_IP4_DIP,
+       VCAP_KF_L3_IP4_SIP,
+       VCAP_KF_L3_IP6_DIP,
+       VCAP_KF_L3_IP6_SIP,
+       VCAP_KF_L3_IP_PROTO,
+       VCAP_KF_L3_OPTIONS_IS,
+       VCAP_KF_L3_PAYLOAD,
+       VCAP_KF_L3_RT_IS,
+       VCAP_KF_L3_TOS,
+       VCAP_KF_L3_TTL_GT0,
+       VCAP_KF_L4_ACK,
+       VCAP_KF_L4_DPORT,
+       VCAP_KF_L4_FIN,
+       VCAP_KF_L4_PAYLOAD,
+       VCAP_KF_L4_PSH,
+       VCAP_KF_L4_RNG,
+       VCAP_KF_L4_RST,
+       VCAP_KF_L4_SEQUENCE_EQ0_IS,
+       VCAP_KF_L4_SPORT,
+       VCAP_KF_L4_SPORT_EQ_DPORT_IS,
+       VCAP_KF_L4_SYN,
+       VCAP_KF_L4_URG,
+       VCAP_KF_LOOKUP_FIRST_IS,
+       VCAP_KF_LOOKUP_PAG,
+       VCAP_KF_OAM_CCM_CNTS_EQ0,
+       VCAP_KF_OAM_Y1731_IS,
+       VCAP_KF_TCP_IS,
+       VCAP_KF_TCP_UDP_IS,
+       VCAP_KF_TYPE,
+};
+
+/* Actionset names with origin information */
+enum vcap_actionfield_set {
+       VCAP_AFS_NO_VALUE,          /* initial value */
+       VCAP_AFS_BASE_TYPE,         /* sparx5 is2 X3 */
+};
+
+/* List of actionfields with description
+ *
+ * VCAP_AF_CNT_ID: W12, sparx5: is2
+ *   Counter ID, used per lookup to index the 4K frame counters (ANA_ACL:CNT_TBL).
+ *   Multiple VCAP IS2 entries can use the same counter.
+ * VCAP_AF_CPU_COPY_ENA: W1, sparx5: is2
+ *   Setting this bit to 1 causes all frames that hit this action to be copied to
+ *   the CPU extraction queue specified in CPU_QUEUE_NUM.
+ * VCAP_AF_CPU_QUEUE_NUM: W3, sparx5: is2
+ *   CPU queue number. Used when CPU_COPY_ENA is set.
+ * VCAP_AF_HIT_ME_ONCE: W1, sparx5: is2
+ *   Setting this bit to 1 causes the first frame that hits this action where the
+ *   HIT_CNT counter is zero to be copied to the CPU extraction queue specified in
+ *   CPU_QUEUE_NUM. The HIT_CNT counter is then incremented and any frames that
+ *   hit this action later are not copied to the CPU. To re-enable the HIT_ME_ONCE
+ *   functionality, the HIT_CNT counter must be cleared.
+ * VCAP_AF_IGNORE_PIPELINE_CTRL: W1, sparx5: is2
+ *   Ignore ingress pipeline control. This enforces the use of the VCAP IS2 action
+ *   even when the pipeline control has terminated the frame before VCAP IS2.
+ * VCAP_AF_INTR_ENA: W1, sparx5: is2
+ *   If set, an interrupt is triggered when this rule is hit
+ * VCAP_AF_LRN_DIS: W1, sparx5: is2
+ *   Setting this bit to 1 disables learning of frames hitting this action.
+ * VCAP_AF_MASK_MODE: W3, sparx5: is2
+ *   Controls the PORT_MASK use. Sparx5: 0: OR_DSTMASK, 1: AND_VLANMASK, 2:
+ *   REPLACE_PGID, 3: REPLACE_ALL, 4: REDIR_PGID, 5: OR_PGID_MASK, 6: VSTAX, 7:
+ *   Not applicable. LAN966X: 0: No action, 1: Permit/deny (AND), 2: Policy
+ *   forwarding (DMAC lookup), 3: Redirect. The CPU port is untouched by
+ *   MASK_MODE.
+ * VCAP_AF_MATCH_ID: W16, sparx5: is2
+ *   Logical ID for the entry. The MATCH_ID is extracted together with the frame
+ *   if the frame is forwarded to the CPU (CPU_COPY_ENA). The result is placed in
+ *   IFH.CL_RSLT.
+ * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is2
+ *   Mask used by MATCH_ID.
+ * VCAP_AF_MIRROR_PROBE: W2, sparx5: is2
+ *   Mirroring performed according to configuration of a mirror probe. 0: No
+ *   mirroring. 1: Mirror probe 0. 2: Mirror probe 1. 3: Mirror probe 2
+ * VCAP_AF_PIPELINE_FORCE_ENA: W1, sparx5: is2
+ *   If set, use PIPELINE_PT unconditionally and set PIPELINE_ACT = NONE if
+ *   PIPELINE_PT == NONE. Overrules previous settings of pipeline point.
+ * VCAP_AF_PIPELINE_PT: W5, sparx5: is2
+ *   Pipeline point used if PIPELINE_FORCE_ENA is set
+ * VCAP_AF_POLICE_ENA: W1, sparx5: is2
+ *   Setting this bit to 1 causes frames that hit this action to be policed by the
+ *   ACL policer specified in POLICE_IDX. Only applies to the first lookup.
+ * VCAP_AF_POLICE_IDX: W6, sparx5: is2
+ *   Selects VCAP policer used when policing frames (POLICE_ENA)
+ * VCAP_AF_PORT_MASK: W68, sparx5: is2
+ *   Port mask applied to the forwarding decision based on MASK_MODE.
+ * VCAP_AF_RT_DIS: W1, sparx5: is2
+ *   If set, routing is disallowed. Only applies when IS_INNER_ACL is 0. See also
+ *   IGR_ACL_ENA, EGR_ACL_ENA, and RLEG_STAT_IDX.
+ */
+
+/* Actionfield names */
+enum vcap_action_field {
+       VCAP_AF_NO_VALUE,  /* initial value */
+       VCAP_AF_CNT_ID,
+       VCAP_AF_CPU_COPY_ENA,
+       VCAP_AF_CPU_QUEUE_NUM,
+       VCAP_AF_HIT_ME_ONCE,
+       VCAP_AF_IGNORE_PIPELINE_CTRL,
+       VCAP_AF_INTR_ENA,
+       VCAP_AF_LRN_DIS,
+       VCAP_AF_MASK_MODE,
+       VCAP_AF_MATCH_ID,
+       VCAP_AF_MATCH_ID_MASK,
+       VCAP_AF_MIRROR_PROBE,
+       VCAP_AF_PIPELINE_FORCE_ENA,
+       VCAP_AF_PIPELINE_PT,
+       VCAP_AF_POLICE_ENA,
+       VCAP_AF_POLICE_IDX,
+       VCAP_AF_PORT_MASK,
+       VCAP_AF_RT_DIS,
+};
+
+#endif /* __VCAP_AG_API__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_ag_api_kunit.h b/drivers/net/ethernet/microchip/vcap/vcap_ag_api_kunit.h
new file mode 100644 (file)
index 0000000..e538ca7
--- /dev/null
@@ -0,0 +1,643 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API interface for kunit testing
+ * This is a different interface, to be able to include different VCAPs
+ */
+
+/* Use same include guard as the official API to be able to override it */
+#ifndef __VCAP_AG_API__
+#define __VCAP_AG_API__
+
+enum vcap_type {
+       VCAP_TYPE_ES2,
+       VCAP_TYPE_IS0,
+       VCAP_TYPE_IS2,
+       VCAP_TYPE_MAX
+};
+
+/* Keyfieldset names with origin information */
+enum vcap_keyfield_set {
+       VCAP_KFS_NO_VALUE,          /* initial value */
+       VCAP_KFS_ARP,               /* sparx5 is2 X6, sparx5 es2 X6 */
+       VCAP_KFS_ETAG,              /* sparx5 is0 X2 */
+       VCAP_KFS_IP4_OTHER,         /* sparx5 is2 X6, sparx5 es2 X6 */
+       VCAP_KFS_IP4_TCP_UDP,       /* sparx5 is2 X6, sparx5 es2 X6 */
+       VCAP_KFS_IP4_VID,           /* sparx5 es2 X3 */
+       VCAP_KFS_IP6_STD,           /* sparx5 is2 X6 */
+       VCAP_KFS_IP6_VID,           /* sparx5 is2 X6, sparx5 es2 X6 */
+       VCAP_KFS_IP_7TUPLE,         /* sparx5 is2 X12, sparx5 es2 X12 */
+       VCAP_KFS_LL_FULL,           /* sparx5 is0 X6 */
+       VCAP_KFS_MAC_ETYPE,         /* sparx5 is2 X6, sparx5 es2 X6 */
+       VCAP_KFS_MLL,               /* sparx5 is0 X3 */
+       VCAP_KFS_NORMAL,            /* sparx5 is0 X6 */
+       VCAP_KFS_NORMAL_5TUPLE_IP4,  /* sparx5 is0 X6 */
+       VCAP_KFS_NORMAL_7TUPLE,     /* sparx5 is0 X12 */
+       VCAP_KFS_PURE_5TUPLE_IP4,   /* sparx5 is0 X3 */
+       VCAP_KFS_TRI_VID,           /* sparx5 is0 X2 */
+};
+
+/* List of keyfields with description
+ *
+ * Keys ending in _IS are booleans derived from frame data
+ * Keys ending in _CLS are classified frame data
+ *
+ * VCAP_KF_8021BR_ECID_BASE: W12, sparx5: is0
+ *   Used by 802.1BR Bridge Port Extension in an E-Tag
+ * VCAP_KF_8021BR_ECID_EXT: W8, sparx5: is0
+ *   Used by 802.1BR Bridge Port Extension in an E-Tag
+ * VCAP_KF_8021BR_E_TAGGED: W1, sparx5: is0
+ *   Set for frames containing an E-TAG (802.1BR Ethertype 893f)
+ * VCAP_KF_8021BR_GRP: W2, sparx5: is0
+ *   E-Tag group bits in 802.1BR Bridge Port Extension
+ * VCAP_KF_8021BR_IGR_ECID_BASE: W12, sparx5: is0
+ *   Used by 802.1BR Bridge Port Extension in an E-Tag
+ * VCAP_KF_8021BR_IGR_ECID_EXT: W8, sparx5: is0
+ *   Used by 802.1BR Bridge Port Extension in an E-Tag
+ * VCAP_KF_8021Q_DEI0: W1, sparx5: is0
+ *   First DEI in multiple vlan tags (outer tag or default port tag)
+ * VCAP_KF_8021Q_DEI1: W1, sparx5: is0
+ *   Second DEI in multiple vlan tags (inner tag)
+ * VCAP_KF_8021Q_DEI2: W1, sparx5: is0
+ *   Third DEI in multiple vlan tags (not always available)
+ * VCAP_KF_8021Q_DEI_CLS: W1, sparx5: is2/es2
+ *   Classified DEI
+ * VCAP_KF_8021Q_PCP0: W3, sparx5: is0
+ *   First PCP in multiple vlan tags (outer tag or default port tag)
+ * VCAP_KF_8021Q_PCP1: W3, sparx5: is0
+ *   Second PCP in multiple vlan tags (inner tag)
+ * VCAP_KF_8021Q_PCP2: W3, sparx5: is0
+ *   Third PCP in multiple vlan tags (not always available)
+ * VCAP_KF_8021Q_PCP_CLS: W3, sparx5: is2/es2
+ *   Classified PCP
+ * VCAP_KF_8021Q_TPID0: W3, sparx5: is0
+ *   First TPID in multiple vlan tags (outer tag or default port tag)
+ * VCAP_KF_8021Q_TPID1: W3, sparx5: is0
+ *   Second TPID in multiple vlan tags (inner tag)
+ * VCAP_KF_8021Q_TPID2: W3, sparx5: is0
+ *   Third TPID in multiple vlan tags (not always available)
+ * VCAP_KF_8021Q_VID0: W12, sparx5: is0
+ *   First VID in multiple vlan tags (outer tag or default port tag)
+ * VCAP_KF_8021Q_VID1: W12, sparx5: is0
+ *   Second VID in multiple vlan tags (inner tag)
+ * VCAP_KF_8021Q_VID2: W12, sparx5: is0
+ *   Third VID in multiple vlan tags (not always available)
+ * VCAP_KF_8021Q_VID_CLS: W13, sparx5: is2/es2
+ *   Classified VID
+ * VCAP_KF_8021Q_VLAN_TAGGED_IS: W1, sparx5: is2/es2
+ *   Sparx5: Set if frame was received with a VLAN tag, LAN966x: Set if frame has
+ *   one or more Q-tags. Independent of port VLAN awareness
+ * VCAP_KF_8021Q_VLAN_TAGS: W3, sparx5: is0
+ *   Number of VLAN tags in frame: 0: Untagged, 1: Single tagged, 3: Double
+ *   tagged, 7: Triple tagged
+ * VCAP_KF_ACL_GRP_ID: W8, sparx5: es2
+ *   Used in interface map table
+ * VCAP_KF_ARP_ADDR_SPACE_OK_IS: W1, sparx5: is2/es2
+ *   Set if hardware address is Ethernet
+ * VCAP_KF_ARP_LEN_OK_IS: W1, sparx5: is2/es2
+ *   Set if hardware address length = 6 (Ethernet) and IP address length = 4 (IP).
+ * VCAP_KF_ARP_OPCODE: W2, sparx5: is2/es2
+ *   ARP opcode
+ * VCAP_KF_ARP_OPCODE_UNKNOWN_IS: W1, sparx5: is2/es2
+ *   Set if not one of the codes defined in VCAP_KF_ARP_OPCODE
+ * VCAP_KF_ARP_PROTO_SPACE_OK_IS: W1, sparx5: is2/es2
+ *   Set if protocol address space is 0x0800
+ * VCAP_KF_ARP_SENDER_MATCH_IS: W1, sparx5: is2/es2
+ *   Sender Hardware Address = SMAC (ARP)
+ * VCAP_KF_ARP_TGT_MATCH_IS: W1, sparx5: is2/es2
+ *   Target Hardware Address = SMAC (RARP)
+ * VCAP_KF_COSID_CLS: W3, sparx5: es2
+ *   Class of service
+ * VCAP_KF_DST_ENTRY: W1, sparx5: is0
+ *   Selects whether the frame’s destination or source information is used for
+ *   fields L2_SMAC and L3_IP4_SIP
+ * VCAP_KF_ES0_ISDX_KEY_ENA: W1, sparx5: es2
+ *   The value taken from the IFH .FWD.ES0_ISDX_KEY_ENA
+ * VCAP_KF_ETYPE: W16, sparx5: is0/is2/es2
+ *   Ethernet type
+ * VCAP_KF_ETYPE_LEN_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame has EtherType >= 0x600
+ * VCAP_KF_ETYPE_MPLS: W2, sparx5: is0
+ *   Type of MPLS Ethertype (or not)
+ * VCAP_KF_IF_EGR_PORT_MASK: W32, sparx5: es2
+ *   Egress port mask, one bit per port
+ * VCAP_KF_IF_EGR_PORT_MASK_RNG: W3, sparx5: es2
+ *   Selects which group of 32 ports is available in IF_EGR_PORT (or virtual
+ *   ports or CPU queue)
+ * VCAP_KF_IF_IGR_PORT: sparx5 is0 W7, sparx5 es2 W9
+ *   Sparx5: Logical ingress port number retrieved from
+ *   ANA_CL::PORT_ID_CFG.LPORT_NUM or ERLEG, LAN966x: ingress port number
+ * VCAP_KF_IF_IGR_PORT_MASK: sparx5 is0 W65, sparx5 is2 W32, sparx5 is2 W65
+ *   Ingress port mask, one bit per port/erleg
+ * VCAP_KF_IF_IGR_PORT_MASK_L3: W1, sparx5: is2
+ *   If set, IF_IGR_PORT_MASK, IF_IGR_PORT_MASK_RNG, and IF_IGR_PORT_MASK_SEL are
+ *   used to specify L3 interfaces
+ * VCAP_KF_IF_IGR_PORT_MASK_RNG: W4, sparx5: is2
+ *   Range selector for IF_IGR_PORT_MASK.  Specifies which group of 32 ports are
+ *   available in IF_IGR_PORT_MASK
+ * VCAP_KF_IF_IGR_PORT_MASK_SEL: W2, sparx5: is0/is2
+ *   Mode selector for IF_IGR_PORT_MASK, applicable when IF_IGR_PORT_MASK_L3 == 0.
+ *   Mapping: 0: DEFAULT 1: LOOPBACK 2: MASQUERADE 3: CPU_VD
+ * VCAP_KF_IF_IGR_PORT_SEL: W1, sparx5: es2
+ *   Selector for IF_IGR_PORT: physical port number or ERLEG
+ * VCAP_KF_IP4_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame has EtherType = 0x800 and IP version = 4
+ * VCAP_KF_IP_MC_IS: W1, sparx5: is0
+ *   Set if frame is IPv4 frame and frame’s destination MAC address is an IPv4
+ *   multicast address (0x01005E0 /25). Set if frame is IPv6 frame and frame’s
+ *   destination MAC address is an IPv6 multicast address (0x3333/16).
+ * VCAP_KF_IP_PAYLOAD_5TUPLE: W32, sparx5: is0
+ *   Payload bytes after IP header
+ * VCAP_KF_IP_SNAP_IS: W1, sparx5: is0
+ *   Set if frame is IPv4, IPv6, or SNAP frame
+ * VCAP_KF_ISDX_CLS: W12, sparx5: is2/es2
+ *   Classified ISDX
+ * VCAP_KF_ISDX_GT0_IS: W1, sparx5: is2/es2
+ *   Set if classified ISDX > 0
+ * VCAP_KF_L2_BC_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame’s destination MAC address is the broadcast address
+ *   (FF-FF-FF-FF-FF-FF).
+ * VCAP_KF_L2_DMAC: W48, sparx5: is0/is2/es2
+ *   Destination MAC address
+ * VCAP_KF_L2_FWD_IS: W1, sparx5: is2
+ *   Set if the frame is allowed to be forwarded to front ports
+ * VCAP_KF_L2_MC_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame’s destination MAC address is a multicast address (bit 40 = 1).
+ * VCAP_KF_L2_PAYLOAD_ETYPE: W64, sparx5: is2/es2
+ *   Bytes 0-7 of L2 payload after Type/Len field and overloading for OAM
+ * VCAP_KF_L2_SMAC: W48, sparx5: is0/is2/es2
+ *   Source MAC address
+ * VCAP_KF_L3_DIP_EQ_SIP_IS: W1, sparx5: is2/es2
+ *   Set if Src IP matches Dst IP address
+ * VCAP_KF_L3_DMAC_DIP_MATCH: W1, sparx5: is2
+ *   Match found in DIP security lookup in ANA_L3
+ * VCAP_KF_L3_DPL_CLS: W1, sparx5: es2
+ *   The frames drop precedence level
+ * VCAP_KF_L3_DSCP: W6, sparx5: is0
+ *   Frame’s DSCP value
+ * VCAP_KF_L3_DST_IS: W1, sparx5: is2
+ *   Set if lookup is done for egress router leg
+ * VCAP_KF_L3_FRAGMENT_TYPE: W2, sparx5: is0/is2/es2
+ *   L3 Fragmentation type (none, initial, suspicious, valid follow up)
+ * VCAP_KF_L3_FRAG_INVLD_L4_LEN: W1, sparx5: is0/is2
+ *   Set if frame's L4 length is less than ANA_CL:COMMON:CLM_FRAGMENT_CFG.L4_MIN_L
+ *   EN
+ * VCAP_KF_L3_IP4_DIP: W32, sparx5: is0/is2/es2
+ *   Destination IPv4 Address
+ * VCAP_KF_L3_IP4_SIP: W32, sparx5: is0/is2/es2
+ *   Source IPv4 Address
+ * VCAP_KF_L3_IP6_DIP: W128, sparx5: is0/is2/es2
+ *   Sparx5: Full IPv6 DIP, LAN966x: Either Full IPv6 DIP or a subset depending on
+ *   frame type
+ * VCAP_KF_L3_IP6_SIP: W128, sparx5: is0/is2/es2
+ *   Sparx5: Full IPv6 SIP, LAN966x: Either Full IPv6 SIP or a subset depending on
+ *   frame type
+ * VCAP_KF_L3_IP_PROTO: W8, sparx5: is0/is2/es2
+ *   IPv4 frames: IP protocol. IPv6 frames: Next header, same as for IPV4
+ * VCAP_KF_L3_OPTIONS_IS: W1, sparx5: is0/is2/es2
+ *   Set if IPv4 frame contains options (IP len > 5)
+ * VCAP_KF_L3_PAYLOAD: sparx5 is2 W96, sparx5 is2 W40, sparx5 es2 W96
+ *   Sparx5: Payload bytes after IP header. IPv4: IPv4 options are not parsed so
+ *   payload is always taken 20 bytes after the start of the IPv4 header, LAN966x:
+ *   Bytes 0-6 after IP header
+ * VCAP_KF_L3_RT_IS: W1, sparx5: is2/es2
+ *   Set if frame has hit a router leg
+ * VCAP_KF_L3_SMAC_SIP_MATCH: W1, sparx5: is2
+ *   Match found in SIP security lookup in ANA_L3
+ * VCAP_KF_L3_TOS: W8, sparx5: is2/es2
+ *   Sparx5: Frame's IPv4/IPv6 DSCP and ECN fields, LAN966x: IP TOS field
+ * VCAP_KF_L3_TTL_GT0: W1, sparx5: is2/es2
+ *   Set if IPv4 TTL / IPv6 hop limit is greater than 0
+ * VCAP_KF_L4_ACK: W1, sparx5: is2/es2
+ *   Sparx5 and LAN966x: TCP flag ACK, LAN966x only: PTP over UDP: flagField bit 2
+ *   (unicastFlag)
+ * VCAP_KF_L4_DPORT: W16, sparx5: is2/es2
+ *   Sparx5: TCP/UDP destination port. Overloading for IP_7TUPLE: Non-TCP/UDP IP
+ *   frames: L4_DPORT = L3_IP_PROTO, LAN966x: TCP/UDP destination port
+ * VCAP_KF_L4_FIN: W1, sparx5: is2/es2
+ *   TCP flag FIN, LAN966x: TCP flag FIN, and for PTP over UDP: messageType bit 1
+ * VCAP_KF_L4_PAYLOAD: W64, sparx5: is2/es2
+ *   Payload bytes after TCP/UDP header. Overloading for IP_7TUPLE: Non TCP/UDP
+ *   frames: Payload bytes 0-7 after IP header. IPv4 options are not parsed so
+ *   payload is always taken 20 bytes after the start of the IPv4 header for non
+ *   TCP/UDP IPv4 frames
+ * VCAP_KF_L4_PSH: W1, sparx5: is2/es2
+ *   Sparx5: TCP flag PSH, LAN966x: TCP: TCP flag PSH. PTP over UDP: flagField bit
+ *   1 (twoStepFlag)
+ * VCAP_KF_L4_RNG: sparx5 is0 W8, sparx5 is2 W16, sparx5 es2 W16
+ *   Range checker bitmask (one for each range checker). Input into range checkers
+ *   is taken from classified results (VID, DSCP) and frame (SPORT, DPORT, ETYPE,
+ *   outer VID, inner VID)
+ * VCAP_KF_L4_RST: W1, sparx5: is2/es2
+ *   Sparx5: TCP flag RST, LAN966x: TCP: TCP flag RST. PTP over UDP: messageType
+ *   bit 3
+ * VCAP_KF_L4_SEQUENCE_EQ0_IS: W1, sparx5: is2/es2
+ *   Set if TCP sequence number is 0, LAN966x: Overlayed with PTP over UDP:
+ *   messageType bit 0
+ * VCAP_KF_L4_SPORT: W16, sparx5: is0/is2/es2
+ *   TCP/UDP source port
+ * VCAP_KF_L4_SPORT_EQ_DPORT_IS: W1, sparx5: is2/es2
+ *   Set if UDP or TCP source port equals UDP or TCP destination port
+ * VCAP_KF_L4_SYN: W1, sparx5: is2/es2
+ *   Sparx5: TCP flag SYN, LAN966x: TCP: TCP flag SYN. PTP over UDP: messageType
+ *   bit 2
+ * VCAP_KF_L4_URG: W1, sparx5: is2/es2
+ *   Sparx5: TCP flag URG, LAN966x: TCP: TCP flag URG. PTP over UDP: flagField bit
+ *   7 (reserved)
+ * VCAP_KF_LOOKUP_FIRST_IS: W1, sparx5: is0/is2/es2
+ *   Selects between entries relevant for first and second lookup. Set for first
+ *   lookup, cleared for second lookup.
+ * VCAP_KF_LOOKUP_GEN_IDX: W12, sparx5: is0
+ *   Generic index - for chaining CLM instances
+ * VCAP_KF_LOOKUP_GEN_IDX_SEL: W2, sparx5: is0
+ *   Select the mode of the Generic Index
+ * VCAP_KF_LOOKUP_PAG: W8, sparx5: is2
+ *   Classified Policy Association Group: chains rules from IS1/CLM to IS2
+ * VCAP_KF_OAM_CCM_CNTS_EQ0: W1, sparx5: is2/es2
+ *   Dual-ended loss measurement counters in CCM frames are all zero
+ * VCAP_KF_OAM_MEL_FLAGS: W7, sparx5: is0
+ *   Encoding of MD level/MEG level (MEL)
+ * VCAP_KF_OAM_Y1731_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame’s EtherType = 0x8902
+ * VCAP_KF_PROT_ACTIVE: W1, sparx5: es2
+ *   Protection is active
+ * VCAP_KF_TCP_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame is IPv4 TCP frame (IP protocol = 6) or IPv6 TCP frames (Next
+ *   header = 6)
+ * VCAP_KF_TCP_UDP_IS: W1, sparx5: is0/is2/es2
+ *   Set if frame is IPv4/IPv6 TCP or UDP frame (IP protocol/next header equals 6
+ *   or 17)
+ * VCAP_KF_TYPE: sparx5 is0 W2, sparx5 is0 W1, sparx5 is2 W4, sparx5 is2 W2,
+ *   sparx5 es2 W3
+ *   Keyset type id - set by the API
+ */
+
+/* Keyfield names */
+enum vcap_key_field {
+       VCAP_KF_NO_VALUE,  /* initial value */
+       VCAP_KF_8021BR_ECID_BASE,
+       VCAP_KF_8021BR_ECID_EXT,
+       VCAP_KF_8021BR_E_TAGGED,
+       VCAP_KF_8021BR_GRP,
+       VCAP_KF_8021BR_IGR_ECID_BASE,
+       VCAP_KF_8021BR_IGR_ECID_EXT,
+       VCAP_KF_8021Q_DEI0,
+       VCAP_KF_8021Q_DEI1,
+       VCAP_KF_8021Q_DEI2,
+       VCAP_KF_8021Q_DEI_CLS,
+       VCAP_KF_8021Q_PCP0,
+       VCAP_KF_8021Q_PCP1,
+       VCAP_KF_8021Q_PCP2,
+       VCAP_KF_8021Q_PCP_CLS,
+       VCAP_KF_8021Q_TPID0,
+       VCAP_KF_8021Q_TPID1,
+       VCAP_KF_8021Q_TPID2,
+       VCAP_KF_8021Q_VID0,
+       VCAP_KF_8021Q_VID1,
+       VCAP_KF_8021Q_VID2,
+       VCAP_KF_8021Q_VID_CLS,
+       VCAP_KF_8021Q_VLAN_TAGGED_IS,
+       VCAP_KF_8021Q_VLAN_TAGS,
+       VCAP_KF_ACL_GRP_ID,
+       VCAP_KF_ARP_ADDR_SPACE_OK_IS,
+       VCAP_KF_ARP_LEN_OK_IS,
+       VCAP_KF_ARP_OPCODE,
+       VCAP_KF_ARP_OPCODE_UNKNOWN_IS,
+       VCAP_KF_ARP_PROTO_SPACE_OK_IS,
+       VCAP_KF_ARP_SENDER_MATCH_IS,
+       VCAP_KF_ARP_TGT_MATCH_IS,
+       VCAP_KF_COSID_CLS,
+       VCAP_KF_DST_ENTRY,
+       VCAP_KF_ES0_ISDX_KEY_ENA,
+       VCAP_KF_ETYPE,
+       VCAP_KF_ETYPE_LEN_IS,
+       VCAP_KF_ETYPE_MPLS,
+       VCAP_KF_IF_EGR_PORT_MASK,
+       VCAP_KF_IF_EGR_PORT_MASK_RNG,
+       VCAP_KF_IF_IGR_PORT,
+       VCAP_KF_IF_IGR_PORT_MASK,
+       VCAP_KF_IF_IGR_PORT_MASK_L3,
+       VCAP_KF_IF_IGR_PORT_MASK_RNG,
+       VCAP_KF_IF_IGR_PORT_MASK_SEL,
+       VCAP_KF_IF_IGR_PORT_SEL,
+       VCAP_KF_IP4_IS,
+       VCAP_KF_IP_MC_IS,
+       VCAP_KF_IP_PAYLOAD_5TUPLE,
+       VCAP_KF_IP_SNAP_IS,
+       VCAP_KF_ISDX_CLS,
+       VCAP_KF_ISDX_GT0_IS,
+       VCAP_KF_L2_BC_IS,
+       VCAP_KF_L2_DMAC,
+       VCAP_KF_L2_FWD_IS,
+       VCAP_KF_L2_MC_IS,
+       VCAP_KF_L2_PAYLOAD_ETYPE,
+       VCAP_KF_L2_SMAC,
+       VCAP_KF_L3_DIP_EQ_SIP_IS,
+       VCAP_KF_L3_DMAC_DIP_MATCH,
+       VCAP_KF_L3_DPL_CLS,
+       VCAP_KF_L3_DSCP,
+       VCAP_KF_L3_DST_IS,
+       VCAP_KF_L3_FRAGMENT_TYPE,
+       VCAP_KF_L3_FRAG_INVLD_L4_LEN,
+       VCAP_KF_L3_IP4_DIP,
+       VCAP_KF_L3_IP4_SIP,
+       VCAP_KF_L3_IP6_DIP,
+       VCAP_KF_L3_IP6_SIP,
+       VCAP_KF_L3_IP_PROTO,
+       VCAP_KF_L3_OPTIONS_IS,
+       VCAP_KF_L3_PAYLOAD,
+       VCAP_KF_L3_RT_IS,
+       VCAP_KF_L3_SMAC_SIP_MATCH,
+       VCAP_KF_L3_TOS,
+       VCAP_KF_L3_TTL_GT0,
+       VCAP_KF_L4_ACK,
+       VCAP_KF_L4_DPORT,
+       VCAP_KF_L4_FIN,
+       VCAP_KF_L4_PAYLOAD,
+       VCAP_KF_L4_PSH,
+       VCAP_KF_L4_RNG,
+       VCAP_KF_L4_RST,
+       VCAP_KF_L4_SEQUENCE_EQ0_IS,
+       VCAP_KF_L4_SPORT,
+       VCAP_KF_L4_SPORT_EQ_DPORT_IS,
+       VCAP_KF_L4_SYN,
+       VCAP_KF_L4_URG,
+       VCAP_KF_LOOKUP_FIRST_IS,
+       VCAP_KF_LOOKUP_GEN_IDX,
+       VCAP_KF_LOOKUP_GEN_IDX_SEL,
+       VCAP_KF_LOOKUP_PAG,
+       VCAP_KF_MIRROR_ENA,
+       VCAP_KF_OAM_CCM_CNTS_EQ0,
+       VCAP_KF_OAM_MEL_FLAGS,
+       VCAP_KF_OAM_Y1731_IS,
+       VCAP_KF_PROT_ACTIVE,
+       VCAP_KF_TCP_IS,
+       VCAP_KF_TCP_UDP_IS,
+       VCAP_KF_TYPE,
+};
+
+/* Actionset names with origin information */
+enum vcap_actionfield_set {
+       VCAP_AFS_NO_VALUE,          /* initial value */
+       VCAP_AFS_BASE_TYPE,         /* sparx5 is2 X3, sparx5 es2 X3 */
+       VCAP_AFS_CLASSIFICATION,    /* sparx5 is0 X2 */
+       VCAP_AFS_CLASS_REDUCED,     /* sparx5 is0 X1 */
+       VCAP_AFS_FULL,              /* sparx5 is0 X3 */
+       VCAP_AFS_MLBS,              /* sparx5 is0 X2 */
+       VCAP_AFS_MLBS_REDUCED,      /* sparx5 is0 X1 */
+};
+
+/* List of actionfields with description
+ *
+ * VCAP_AF_CLS_VID_SEL: W3, sparx5: is0
+ *   Controls the classified VID: 0: VID_NONE: No action. 1: VID_ADD: New VID =
+ *   old VID + VID_VAL. 2: VID_REPLACE: New VID = VID_VAL. 3: VID_FIRST_TAG: New
+ *   VID = VID from frame's first tag (outer tag) if available, otherwise VID_VAL.
+ *   4: VID_SECOND_TAG: New VID = VID from frame's second tag (middle tag) if
+ *   available, otherwise VID_VAL. 5: VID_THIRD_TAG: New VID = VID from frame's
+ *   third tag (inner tag) if available, otherwise VID_VAL.
+ * VCAP_AF_CNT_ID: sparx5 is2 W12, sparx5 es2 W11
+ *   Counter ID, used per lookup to index the 4K frame counters (ANA_ACL:CNT_TBL).
+ *   Multiple VCAP IS2 entries can use the same counter.
+ * VCAP_AF_COPY_PORT_NUM: W7, sparx5: es2
+ *   QSYS port number when FWD_MODE is redirect or copy
+ * VCAP_AF_COPY_QUEUE_NUM: W16, sparx5: es2
+ *   QSYS queue number when FWD_MODE is redirect or copy
+ * VCAP_AF_CPU_COPY_ENA: W1, sparx5: is2/es2
+ *   Setting this bit to 1 causes all frames that hit this action to be copied to
+ *   the CPU extraction queue specified in CPU_QUEUE_NUM.
+ * VCAP_AF_CPU_QUEUE_NUM: W3, sparx5: is2/es2
+ *   CPU queue number. Used when CPU_COPY_ENA is set.
+ * VCAP_AF_DEI_ENA: W1, sparx5: is0
+ *   If set, use DEI_VAL as classified DEI value. Otherwise, DEI from basic
+ *   classification is used
+ * VCAP_AF_DEI_VAL: W1, sparx5: is0
+ *   See DEI_ENA
+ * VCAP_AF_DP_ENA: W1, sparx5: is0
+ *   If set, use DP_VAL as classified drop precedence level. Otherwise, drop
+ *   precedence level from basic classification is used.
+ * VCAP_AF_DP_VAL: W2, sparx5: is0
+ *   See DP_ENA.
+ * VCAP_AF_DSCP_ENA: W1, sparx5: is0
+ *   If set, use DSCP_VAL as classified DSCP value. Otherwise, DSCP value from
+ *   basic classification is used.
+ * VCAP_AF_DSCP_VAL: W6, sparx5: is0
+ *   See DSCP_ENA.
+ * VCAP_AF_ES2_REW_CMD: W3, sparx5: es2
+ *   Command forwarded to REW: 0: No action. 1: SWAP MAC addresses. 2: Do L2CP
+ *   DMAC translation when entering or leaving a tunnel.
+ * VCAP_AF_FWD_MODE: W2, sparx5: es2
+ *   Forward selector: 0: Forward. 1: Discard. 2: Redirect. 3: Copy.
+ * VCAP_AF_HIT_ME_ONCE: W1, sparx5: is2/es2
+ *   Setting this bit to 1 causes the first frame that hits this action where the
+ *   HIT_CNT counter is zero to be copied to the CPU extraction queue specified in
+ *   CPU_QUEUE_NUM. The HIT_CNT counter is then incremented and any frames that
+ *   hit this action later are not copied to the CPU. To re-enable the HIT_ME_ONCE
+ *   functionality, the HIT_CNT counter must be cleared.
+ * VCAP_AF_IGNORE_PIPELINE_CTRL: W1, sparx5: is2/es2
+ *   Ignore ingress pipeline control. This enforces the use of the VCAP IS2 action
+ *   even when the pipeline control has terminated the frame before VCAP IS2.
+ * VCAP_AF_INTR_ENA: W1, sparx5: is2/es2
+ *   If set, an interrupt is triggered when this rule is hit
+ * VCAP_AF_ISDX_ADD_REPLACE_SEL: W1, sparx5: is0
+ *   Controls the classified ISDX. 0: New ISDX = old ISDX + ISDX_VAL. 1: New ISDX
+ *   = ISDX_VAL.
+ * VCAP_AF_ISDX_VAL: W12, sparx5: is0
+ *   See isdx_add_replace_sel
+ * VCAP_AF_LRN_DIS: W1, sparx5: is2
+ *   Setting this bit to 1 disables learning of frames hitting this action.
+ * VCAP_AF_MAP_IDX: W9, sparx5: is0
+ *   Index for QoS mapping table lookup
+ * VCAP_AF_MAP_KEY: W3, sparx5: is0
+ *   Key type for QoS mapping table lookup. 0: DEI0, PCP0 (outer tag). 1: DEI1,
+ *   PCP1 (middle tag). 2: DEI2, PCP2 (inner tag). 3: MPLS TC. 4: PCP0 (outer
+ *   tag). 5: E-DEI, E-PCP (E-TAG). 6: DSCP if available, otherwise none. 7: DSCP
+ *   if available, otherwise DEI0, PCP0 (outer tag) if available using MAP_IDX+8,
+ *   otherwise none
+ * VCAP_AF_MAP_LOOKUP_SEL: W2, sparx5: is0
+ *   Selects which of the two QoS Mapping Table lookups that MAP_KEY and MAP_IDX
+ *   are applied to. 0: No changes to the QoS Mapping Table lookup. 1: Update key
+ *   type and index for QoS Mapping Table lookup #0. 2: Update key type and index
+ *   for QoS Mapping Table lookup #1. 3: Reserved.
+ * VCAP_AF_MASK_MODE: W3, sparx5: is0/is2
+ *   Controls the PORT_MASK use. Sparx5: 0: OR_DSTMASK, 1: AND_VLANMASK, 2:
+ *   REPLACE_PGID, 3: REPLACE_ALL, 4: REDIR_PGID, 5: OR_PGID_MASK, 6: VSTAX, 7:
+ *   Not applicable. LAN966X: 0: No action, 1: Permit/deny (AND), 2: Policy
+ *   forwarding (DMAC lookup), 3: Redirect. The CPU port is untouched by
+ *   MASK_MODE.
+ * VCAP_AF_MATCH_ID: W16, sparx5: is0/is2
+ *   Logical ID for the entry. The MATCH_ID is extracted together with the frame
+ *   if the frame is forwarded to the CPU (CPU_COPY_ENA). The result is placed in
+ *   IFH.CL_RSLT.
+ * VCAP_AF_MATCH_ID_MASK: W16, sparx5: is0/is2
+ *   Mask used by MATCH_ID.
+ * VCAP_AF_MIRROR_PROBE: W2, sparx5: is2
+ *   Mirroring performed according to configuration of a mirror probe. 0: No
+ *   mirroring. 1: Mirror probe 0. 2: Mirror probe 1. 3: Mirror probe 2
+ * VCAP_AF_MIRROR_PROBE_ID: W2, sparx5: es2
+ *   Signals a mirror probe to be placed in the IFH. Only possible when FWD_MODE
+ *   is copy. 0: No mirroring. 1-3: Use mirror probe 0-2.
+ * VCAP_AF_NXT_IDX: W12, sparx5: is0
+ *   Index used as part of key (field G_IDX) in the next lookup.
+ * VCAP_AF_NXT_IDX_CTRL: W3, sparx5: is0
+ *   Controls the generation of the G_IDX used in the VCAP CLM next lookup
+ * VCAP_AF_PAG_OVERRIDE_MASK: W8, sparx5: is0
+ *   Bits set in this mask will override PAG_VAL from port profile.  New PAG =
+ *   (PAG (input) AND ~PAG_OVERRIDE_MASK) OR (PAG_VAL AND PAG_OVERRIDE_MASK)
+ * VCAP_AF_PAG_VAL: W8, sparx5: is0
+ *   See PAG_OVERRIDE_MASK.
+ * VCAP_AF_PCP_ENA: W1, sparx5: is0
+ *   If set, use PCP_VAL as classified PCP value. Otherwise, PCP from basic
+ *   classification is used.
+ * VCAP_AF_PCP_VAL: W3, sparx5: is0
+ *   See PCP_ENA.
+ * VCAP_AF_PIPELINE_FORCE_ENA: sparx5 is0 W2, sparx5 is2 W1
+ *   If set, use PIPELINE_PT unconditionally and set PIPELINE_ACT = NONE if
+ *   PIPELINE_PT == NONE. Overrules previous settings of pipeline point.
+ * VCAP_AF_PIPELINE_PT: W5, sparx5: is0/is2
+ *   Pipeline point used if PIPELINE_FORCE_ENA is set
+ * VCAP_AF_POLICE_ENA: W1, sparx5: is2/es2
+ *   Setting this bit to 1 causes frames that hit this action to be policed by the
+ *   ACL policer specified in POLICE_IDX. Only applies to the first lookup.
+ * VCAP_AF_POLICE_IDX: W6, sparx5: is2/es2
+ *   Selects VCAP policer used when policing frames (POLICE_ENA)
+ * VCAP_AF_POLICE_REMARK: W1, sparx5: es2
+ *   If set, frames exceeding policer rates are marked as yellow but not
+ *   discarded.
+ * VCAP_AF_PORT_MASK: sparx5 is0 W65, sparx5 is2 W68
+ *   Port mask applied to the forwarding decision based on MASK_MODE.
+ * VCAP_AF_QOS_ENA: W1, sparx5: is0
+ *   If set, use QOS_VAL as classified QoS class. Otherwise, QoS class from basic
+ *   classification is used.
+ * VCAP_AF_QOS_VAL: W3, sparx5: is0
+ *   See QOS_ENA.
+ * VCAP_AF_RT_DIS: W1, sparx5: is2
+ *   If set, routing is disallowed. Only applies when IS_INNER_ACL is 0. See also
+ *   IGR_ACL_ENA, EGR_ACL_ENA, and RLEG_STAT_IDX.
+ * VCAP_AF_TYPE: W1, sparx5: is0
+ *   Actionset type id - Set by the API
+ * VCAP_AF_VID_VAL: W13, sparx5: is0
+ *   New VID Value
+ */
+
+/* Actionfield names */
+enum vcap_action_field {
+       VCAP_AF_NO_VALUE,  /* initial value */
+       VCAP_AF_ACL_MAC,
+       VCAP_AF_ACL_RT_MODE,
+       VCAP_AF_CLS_VID_SEL,
+       VCAP_AF_CNT_ID,
+       VCAP_AF_COPY_PORT_NUM,
+       VCAP_AF_COPY_QUEUE_NUM,
+       VCAP_AF_COSID_ENA,
+       VCAP_AF_COSID_VAL,
+       VCAP_AF_CPU_COPY_ENA,
+       VCAP_AF_CPU_DIS,
+       VCAP_AF_CPU_ENA,
+       VCAP_AF_CPU_Q,
+       VCAP_AF_CPU_QUEUE_NUM,
+       VCAP_AF_CUSTOM_ACE_ENA,
+       VCAP_AF_CUSTOM_ACE_OFFSET,
+       VCAP_AF_DEI_ENA,
+       VCAP_AF_DEI_VAL,
+       VCAP_AF_DLB_OFFSET,
+       VCAP_AF_DMAC_OFFSET_ENA,
+       VCAP_AF_DP_ENA,
+       VCAP_AF_DP_VAL,
+       VCAP_AF_DSCP_ENA,
+       VCAP_AF_DSCP_VAL,
+       VCAP_AF_EGR_ACL_ENA,
+       VCAP_AF_ES2_REW_CMD,
+       VCAP_AF_FWD_DIS,
+       VCAP_AF_FWD_MODE,
+       VCAP_AF_FWD_TYPE,
+       VCAP_AF_GVID_ADD_REPLACE_SEL,
+       VCAP_AF_HIT_ME_ONCE,
+       VCAP_AF_IGNORE_PIPELINE_CTRL,
+       VCAP_AF_IGR_ACL_ENA,
+       VCAP_AF_INJ_MASQ_ENA,
+       VCAP_AF_INJ_MASQ_LPORT,
+       VCAP_AF_INJ_MASQ_PORT,
+       VCAP_AF_INTR_ENA,
+       VCAP_AF_ISDX_ADD_REPLACE_SEL,
+       VCAP_AF_ISDX_VAL,
+       VCAP_AF_IS_INNER_ACL,
+       VCAP_AF_L3_MAC_UPDATE_DIS,
+       VCAP_AF_LOG_MSG_INTERVAL,
+       VCAP_AF_LPM_AFFIX_ENA,
+       VCAP_AF_LPM_AFFIX_VAL,
+       VCAP_AF_LPORT_ENA,
+       VCAP_AF_LRN_DIS,
+       VCAP_AF_MAP_IDX,
+       VCAP_AF_MAP_KEY,
+       VCAP_AF_MAP_LOOKUP_SEL,
+       VCAP_AF_MASK_MODE,
+       VCAP_AF_MATCH_ID,
+       VCAP_AF_MATCH_ID_MASK,
+       VCAP_AF_MIP_SEL,
+       VCAP_AF_MIRROR_PROBE,
+       VCAP_AF_MIRROR_PROBE_ID,
+       VCAP_AF_MPLS_IP_CTRL_ENA,
+       VCAP_AF_MPLS_MEP_ENA,
+       VCAP_AF_MPLS_MIP_ENA,
+       VCAP_AF_MPLS_OAM_FLAVOR,
+       VCAP_AF_MPLS_OAM_TYPE,
+       VCAP_AF_NUM_VLD_LABELS,
+       VCAP_AF_NXT_IDX,
+       VCAP_AF_NXT_IDX_CTRL,
+       VCAP_AF_NXT_KEY_TYPE,
+       VCAP_AF_NXT_NORMALIZE,
+       VCAP_AF_NXT_NORM_W16_OFFSET,
+       VCAP_AF_NXT_NORM_W32_OFFSET,
+       VCAP_AF_NXT_OFFSET_FROM_TYPE,
+       VCAP_AF_NXT_TYPE_AFTER_OFFSET,
+       VCAP_AF_OAM_IP_BFD_ENA,
+       VCAP_AF_OAM_TWAMP_ENA,
+       VCAP_AF_OAM_Y1731_SEL,
+       VCAP_AF_PAG_OVERRIDE_MASK,
+       VCAP_AF_PAG_VAL,
+       VCAP_AF_PCP_ENA,
+       VCAP_AF_PCP_VAL,
+       VCAP_AF_PIPELINE_ACT_SEL,
+       VCAP_AF_PIPELINE_FORCE_ENA,
+       VCAP_AF_PIPELINE_PT,
+       VCAP_AF_PIPELINE_PT_REDUCED,
+       VCAP_AF_POLICE_ENA,
+       VCAP_AF_POLICE_IDX,
+       VCAP_AF_POLICE_REMARK,
+       VCAP_AF_PORT_MASK,
+       VCAP_AF_PTP_MASTER_SEL,
+       VCAP_AF_QOS_ENA,
+       VCAP_AF_QOS_VAL,
+       VCAP_AF_REW_CMD,
+       VCAP_AF_RLEG_DMAC_CHK_DIS,
+       VCAP_AF_RLEG_STAT_IDX,
+       VCAP_AF_RSDX_ENA,
+       VCAP_AF_RSDX_VAL,
+       VCAP_AF_RSVD_LBL_VAL,
+       VCAP_AF_RT_DIS,
+       VCAP_AF_RT_SEL,
+       VCAP_AF_S2_KEY_SEL_ENA,
+       VCAP_AF_S2_KEY_SEL_IDX,
+       VCAP_AF_SAM_SEQ_ENA,
+       VCAP_AF_SIP_IDX,
+       VCAP_AF_SWAP_MAC_ENA,
+       VCAP_AF_TCP_UDP_DPORT,
+       VCAP_AF_TCP_UDP_ENA,
+       VCAP_AF_TCP_UDP_SPORT,
+       VCAP_AF_TC_ENA,
+       VCAP_AF_TC_LABEL,
+       VCAP_AF_TPID_SEL,
+       VCAP_AF_TTL_DECR_DIS,
+       VCAP_AF_TTL_ENA,
+       VCAP_AF_TTL_LABEL,
+       VCAP_AF_TTL_UPDATE_ENA,
+       VCAP_AF_TYPE,
+       VCAP_AF_VID_VAL,
+       VCAP_AF_VLAN_POP_CNT,
+       VCAP_AF_VLAN_POP_CNT_ENA,
+       VCAP_AF_VLAN_PUSH_CNT,
+       VCAP_AF_VLAN_PUSH_CNT_ENA,
+       VCAP_AF_VLAN_WAS_TAGGED,
+};
+
+#endif /* __VCAP_AG_API__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.c b/drivers/net/ethernet/microchip/vcap/vcap_api.c
new file mode 100644 (file)
index 0000000..d255bc7
--- /dev/null
@@ -0,0 +1,1184 @@
+// SPDX-License-Identifier: GPL-2.0+
+/* Microchip VCAP API
+ *
+ * Copyright (c) 2022 Microchip Technology Inc. and its subsidiaries.
+ */
+
+#include <linux/types.h>
+
+#include "vcap_api.h"
+#include "vcap_api_client.h"
+
+#define to_intrule(rule) container_of((rule), struct vcap_rule_internal, data)
+
+/* Private VCAP API rule data */
+struct vcap_rule_internal {
+       struct vcap_rule data; /* provided by the client */
+       struct list_head list; /* for insertion in the vcap admin list of rules */
+       struct vcap_admin *admin; /* vcap hw instance */
+       struct net_device *ndev;  /* the interface that the rule applies to */
+       struct vcap_control *vctrl; /* the client control */
+       u32 sort_key;  /* defines the position in the VCAP */
+       int keyset_sw;  /* subwords in a keyset */
+       int actionset_sw;  /* subwords in an actionset */
+       int keyset_sw_regs;  /* registers in a subword in a keyset */
+       int actionset_sw_regs;  /* registers in a subword in an actionset */
+       int size; /* the size of the rule: max(entry, action) */
+       u32 addr; /* address in the VCAP at insertion */
+};
+
+/* Moving a rule in the VCAP address space */
+struct vcap_rule_move {
+       int addr; /* address to move */
+       int offset; /* change in address */
+       int count; /* blocksize of addresses to move */
+};
+
+/* Bit iterator for the VCAP cache streams */
+struct vcap_stream_iter {
+       u32 offset; /* bit offset from the stream start */
+       u32 sw_width; /* subword width in bits */
+       u32 regs_per_sw; /* registers per subword */
+       u32 reg_idx; /* current register index */
+       u32 reg_bitpos; /* bit offset in current register */
+       const struct vcap_typegroup *tg; /* current typegroup */
+};
+
+static void vcap_iter_set(struct vcap_stream_iter *itr, int sw_width,
+                         const struct vcap_typegroup *tg, u32 offset)
+{
+       memset(itr, 0, sizeof(*itr));
+       itr->offset = offset;
+       itr->sw_width = sw_width;
+       itr->regs_per_sw = DIV_ROUND_UP(sw_width, 32);
+       itr->tg = tg;
+}
+
+static void vcap_iter_skip_tg(struct vcap_stream_iter *itr)
+{
+       /* Compensate the field offset for preceding typegroups.
+        * A typegroup table ends with an all-zero terminator.
+        */
+       while (itr->tg->width && itr->offset >= itr->tg->offset) {
+               itr->offset += itr->tg->width;
+               itr->tg++; /* next typegroup */
+       }
+}
+
+static void vcap_iter_update(struct vcap_stream_iter *itr)
+{
+       int sw_idx, sw_bitpos;
+
+       /* Calculate the subword index and bitposition for current bit */
+       sw_idx = itr->offset / itr->sw_width;
+       sw_bitpos = itr->offset % itr->sw_width;
+       /* Calculate the register index and bitposition for current bit */
+       itr->reg_idx = (sw_idx * itr->regs_per_sw) + (sw_bitpos / 32);
+       itr->reg_bitpos = sw_bitpos % 32;
+}
+
+static void vcap_iter_init(struct vcap_stream_iter *itr, int sw_width,
+                          const struct vcap_typegroup *tg, u32 offset)
+{
+       vcap_iter_set(itr, sw_width, tg, offset);
+       vcap_iter_skip_tg(itr);
+       vcap_iter_update(itr);
+}
+
+static void vcap_iter_next(struct vcap_stream_iter *itr)
+{
+       itr->offset++;
+       vcap_iter_skip_tg(itr);
+       vcap_iter_update(itr);
+}
+
+static void vcap_set_bit(u32 *stream, struct vcap_stream_iter *itr, bool value)
+{
+       u32 mask = BIT(itr->reg_bitpos);
+       u32 *p = &stream[itr->reg_idx];
+
+       if (value)
+               *p |= mask;
+       else
+               *p &= ~mask;
+}
+
+static void vcap_encode_bit(u32 *stream, struct vcap_stream_iter *itr, bool val)
+{
+       /* When intersected by a type group field, stream the type group bits
+        * before continuing with the value bit
+        */
+       while (itr->tg->width &&
+              itr->offset >= itr->tg->offset &&
+              itr->offset < itr->tg->offset + itr->tg->width) {
+               int tg_bitpos = itr->tg->offset - itr->offset;
+
+               vcap_set_bit(stream, itr, (itr->tg->value >> tg_bitpos) & 0x1);
+               itr->offset++;
+               vcap_iter_update(itr);
+       }
+       vcap_set_bit(stream, itr, val);
+}
+
+static void vcap_encode_field(u32 *stream, struct vcap_stream_iter *itr,
+                             int width, const u8 *value)
+{
+       int idx;
+
+       /* Loop over the field value bits and add the value bits one by one to
+        * the output stream.
+        */
+       for (idx = 0; idx < width; idx++) {
+               u8 bidx = idx & GENMASK(2, 0);
+
+               /* Encode one field value bit */
+               vcap_encode_bit(stream, itr, (value[idx / 8] >> bidx) & 0x1);
+               vcap_iter_next(itr);
+       }
+}
+
+static void vcap_encode_typegroups(u32 *stream, int sw_width,
+                                  const struct vcap_typegroup *tg,
+                                  bool mask)
+{
+       struct vcap_stream_iter iter;
+       int idx;
+
+       /* Mask bits must be set to zeros (inverted later when writing to the
+        * mask cache register), so that the mask typegroup bits consist of
+        * match-1 or match-0, or both
+        */
+       vcap_iter_set(&iter, sw_width, tg, 0);
+       while (iter.tg->width) {
+               /* Set position to current typegroup bit */
+               iter.offset = iter.tg->offset;
+               vcap_iter_update(&iter);
+               for (idx = 0; idx < iter.tg->width; idx++) {
+                       /* Iterate over current typegroup bits. Mask typegroup
+                        * bits are always set
+                        */
+                       if (mask)
+                               vcap_set_bit(stream, &iter, 0x1);
+                       else
+                               vcap_set_bit(stream, &iter,
+                                            (iter.tg->value >> idx) & 0x1);
+                       iter.offset++;
+                       vcap_iter_update(&iter);
+               }
+               iter.tg++; /* next typegroup */
+       }
+}
+
+/* Return the list of keyfields for the keyset */
+static const struct vcap_field *vcap_keyfields(struct vcap_control *vctrl,
+                                              enum vcap_type vt,
+                                              enum vcap_keyfield_set keyset)
+{
+       /* Check that the keyset exists in the vcap keyset list */
+       if (keyset >= vctrl->vcaps[vt].keyfield_set_size)
+               return NULL;
+       return vctrl->vcaps[vt].keyfield_set_map[keyset];
+}
+
+/* Return the keyset information for the keyset */
+static const struct vcap_set *vcap_keyfieldset(struct vcap_control *vctrl,
+                                              enum vcap_type vt,
+                                              enum vcap_keyfield_set keyset)
+{
+       const struct vcap_set *kset;
+
+       /* Check that the keyset exists in the vcap keyset list */
+       if (keyset >= vctrl->vcaps[vt].keyfield_set_size)
+               return NULL;
+       kset = &vctrl->vcaps[vt].keyfield_set[keyset];
+       if (kset->sw_per_item == 0 || kset->sw_per_item > vctrl->vcaps[vt].sw_count)
+               return NULL;
+       return kset;
+}
+
+/* Return the typegroup table for the matching keyset (using subword size) */
+static const struct vcap_typegroup *
+vcap_keyfield_typegroup(struct vcap_control *vctrl,
+                       enum vcap_type vt, enum vcap_keyfield_set keyset)
+{
+       const struct vcap_set *kset = vcap_keyfieldset(vctrl, vt, keyset);
+
+       /* Check that the keyset is valid */
+       if (!kset)
+               return NULL;
+       return vctrl->vcaps[vt].keyfield_set_typegroups[kset->sw_per_item];
+}
+
+/* Return the number of keyfields in the keyset */
+static int vcap_keyfield_count(struct vcap_control *vctrl,
+                              enum vcap_type vt, enum vcap_keyfield_set keyset)
+{
+       /* Check that the keyset exists in the vcap keyset list */
+       if (keyset >= vctrl->vcaps[vt].keyfield_set_size)
+               return 0;
+       return vctrl->vcaps[vt].keyfield_set_map_size[keyset];
+}
+
+static void vcap_encode_keyfield(struct vcap_rule_internal *ri,
+                                const struct vcap_client_keyfield *kf,
+                                const struct vcap_field *rf,
+                                const struct vcap_typegroup *tgt)
+{
+       int sw_width = ri->vctrl->vcaps[ri->admin->vtype].sw_width;
+       struct vcap_cache_data *cache = &ri->admin->cache;
+       struct vcap_stream_iter iter;
+       const u8 *value, *mask;
+
+       /* Encode the fields for the key and the mask in their respective
+        * streams, respecting the subword width.
+        */
+       switch (kf->ctrl.type) {
+       case VCAP_FIELD_BIT:
+               value = &kf->data.u1.value;
+               mask = &kf->data.u1.mask;
+               break;
+       case VCAP_FIELD_U32:
+               value = (const u8 *)&kf->data.u32.value;
+               mask = (const u8 *)&kf->data.u32.mask;
+               break;
+       case VCAP_FIELD_U48:
+               value = kf->data.u48.value;
+               mask = kf->data.u48.mask;
+               break;
+       case VCAP_FIELD_U56:
+               value = kf->data.u56.value;
+               mask = kf->data.u56.mask;
+               break;
+       case VCAP_FIELD_U64:
+               value = kf->data.u64.value;
+               mask = kf->data.u64.mask;
+               break;
+       case VCAP_FIELD_U72:
+               value = kf->data.u72.value;
+               mask = kf->data.u72.mask;
+               break;
+       case VCAP_FIELD_U112:
+               value = kf->data.u112.value;
+               mask = kf->data.u112.mask;
+               break;
+       case VCAP_FIELD_U128:
+               value = kf->data.u128.value;
+               mask = kf->data.u128.mask;
+               break;
+       }
+       vcap_iter_init(&iter, sw_width, tgt, rf->offset);
+       vcap_encode_field(cache->keystream, &iter, rf->width, value);
+       vcap_iter_init(&iter, sw_width, tgt, rf->offset);
+       vcap_encode_field(cache->maskstream, &iter, rf->width, mask);
+}
+
+static void vcap_encode_keyfield_typegroups(struct vcap_control *vctrl,
+                                           struct vcap_rule_internal *ri,
+                                           const struct vcap_typegroup *tgt)
+{
+       int sw_width = vctrl->vcaps[ri->admin->vtype].sw_width;
+       struct vcap_cache_data *cache = &ri->admin->cache;
+
+       /* Encode the typegroup bits for the key and the mask in their streams,
+        * respecting the subword width.
+        */
+       vcap_encode_typegroups(cache->keystream, sw_width, tgt, false);
+       vcap_encode_typegroups(cache->maskstream, sw_width, tgt, true);
+}
+
+static int vcap_encode_rule_keyset(struct vcap_rule_internal *ri)
+{
+       const struct vcap_client_keyfield *ckf;
+       const struct vcap_typegroup *tg_table;
+       const struct vcap_field *kf_table;
+       int keyset_size;
+
+       /* Get a valid set of fields for the specific keyset */
+       kf_table = vcap_keyfields(ri->vctrl, ri->admin->vtype, ri->data.keyset);
+       if (!kf_table) {
+               pr_err("%s:%d: no fields available for this keyset: %d\n",
+                      __func__, __LINE__, ri->data.keyset);
+               return -EINVAL;
+       }
+       /* Get a valid typegroup for the specific keyset */
+       tg_table = vcap_keyfield_typegroup(ri->vctrl, ri->admin->vtype,
+                                          ri->data.keyset);
+       if (!tg_table) {
+               pr_err("%s:%d: no typegroups available for this keyset: %d\n",
+                      __func__, __LINE__, ri->data.keyset);
+               return -EINVAL;
+       }
+       /* Get a valid size for the specific keyset */
+       keyset_size = vcap_keyfield_count(ri->vctrl, ri->admin->vtype,
+                                         ri->data.keyset);
+       if (keyset_size == 0) {
+               pr_err("%s:%d: zero field count for this keyset: %d\n",
+                      __func__, __LINE__, ri->data.keyset);
+               return -EINVAL;
+       }
+       /* Iterate over the keyfields (key, mask) in the rule
+        * and encode these bits
+        */
+       if (list_empty(&ri->data.keyfields)) {
+               pr_err("%s:%d: no keyfields in the rule\n", __func__, __LINE__);
+               return -EINVAL;
+       }
+       list_for_each_entry(ckf, &ri->data.keyfields, ctrl.list) {
+               /* Check that the client entry exists in the keyset */
+               if (ckf->ctrl.key >= keyset_size) {
+                       pr_err("%s:%d: key %d is not in vcap\n",
+                              __func__, __LINE__, ckf->ctrl.key);
+                       return -EINVAL;
+               }
+               vcap_encode_keyfield(ri, ckf, &kf_table[ckf->ctrl.key], tg_table);
+       }
+       /* Add typegroup bits to the key/mask bitstreams */
+       vcap_encode_keyfield_typegroups(ri->vctrl, ri, tg_table);
+       return 0;
+}
+
+/* Return the list of actionfields for the actionset */
+static const struct vcap_field *
+vcap_actionfields(struct vcap_control *vctrl,
+                 enum vcap_type vt, enum vcap_actionfield_set actionset)
+{
+       /* Check that the actionset exists in the vcap actionset list */
+       if (actionset >= vctrl->vcaps[vt].actionfield_set_size)
+               return NULL;
+       return vctrl->vcaps[vt].actionfield_set_map[actionset];
+}
+
+static const struct vcap_set *
+vcap_actionfieldset(struct vcap_control *vctrl,
+                   enum vcap_type vt, enum vcap_actionfield_set actionset)
+{
+       const struct vcap_set *aset;
+
+       /* Check that the actionset exists in the vcap actionset list */
+       if (actionset >= vctrl->vcaps[vt].actionfield_set_size)
+               return NULL;
+       aset = &vctrl->vcaps[vt].actionfield_set[actionset];
+       if (aset->sw_per_item == 0 || aset->sw_per_item > vctrl->vcaps[vt].sw_count)
+               return NULL;
+       return aset;
+}
+
+/* Return the typegroup table for the matching actionset (using subword size) */
+static const struct vcap_typegroup *
+vcap_actionfield_typegroup(struct vcap_control *vctrl,
+                          enum vcap_type vt, enum vcap_actionfield_set actionset)
+{
+       const struct vcap_set *aset = vcap_actionfieldset(vctrl, vt, actionset);
+
+       /* Check that the actionset is valid */
+       if (!aset)
+               return NULL;
+       return vctrl->vcaps[vt].actionfield_set_typegroups[aset->sw_per_item];
+}
+
+/* Return the number of actionfields in the actionset */
+static int vcap_actionfield_count(struct vcap_control *vctrl,
+                                 enum vcap_type vt,
+                                 enum vcap_actionfield_set actionset)
+{
+       /* Check that the actionset exists in the vcap actionset list */
+       if (actionset >= vctrl->vcaps[vt].actionfield_set_size)
+               return 0;
+       return vctrl->vcaps[vt].actionfield_set_map_size[actionset];
+}
+
+static void vcap_encode_actionfield(struct vcap_rule_internal *ri,
+                                   const struct vcap_client_actionfield *af,
+                                   const struct vcap_field *rf,
+                                   const struct vcap_typegroup *tgt)
+{
+       int act_width = ri->vctrl->vcaps[ri->admin->vtype].act_width;
+       struct vcap_cache_data *cache = &ri->admin->cache;
+       struct vcap_stream_iter iter;
+       const u8 *value;
+
+       /* Encode the action field in the stream, respecting the subword width */
+       switch (af->ctrl.type) {
+       case VCAP_FIELD_BIT:
+               value = &af->data.u1.value;
+               break;
+       case VCAP_FIELD_U32:
+               value = (const u8 *)&af->data.u32.value;
+               break;
+       case VCAP_FIELD_U48:
+               value = af->data.u48.value;
+               break;
+       case VCAP_FIELD_U56:
+               value = af->data.u56.value;
+               break;
+       case VCAP_FIELD_U64:
+               value = af->data.u64.value;
+               break;
+       case VCAP_FIELD_U72:
+               value = af->data.u72.value;
+               break;
+       case VCAP_FIELD_U112:
+               value = af->data.u112.value;
+               break;
+       case VCAP_FIELD_U128:
+               value = af->data.u128.value;
+               break;
+       }
+       vcap_iter_init(&iter, act_width, tgt, rf->offset);
+       vcap_encode_field(cache->actionstream, &iter, rf->width, value);
+}
+
+static void vcap_encode_actionfield_typegroups(struct vcap_rule_internal *ri,
+                                              const struct vcap_typegroup *tgt)
+{
+       int act_width = ri->vctrl->vcaps[ri->admin->vtype].act_width;
+       struct vcap_cache_data *cache = &ri->admin->cache;
+
+       /* Encode the typegroup bits for the actionstream respecting the subword
+        * width.
+        */
+       vcap_encode_typegroups(cache->actionstream, act_width, tgt, false);
+}
+
+static int vcap_encode_rule_actionset(struct vcap_rule_internal *ri)
+{
+       const struct vcap_client_actionfield *caf;
+       const struct vcap_typegroup *tg_table;
+       const struct vcap_field *af_table;
+       int actionset_size;
+
+       /* Get a valid set of actionset fields for the specific actionset */
+       af_table = vcap_actionfields(ri->vctrl, ri->admin->vtype,
+                                    ri->data.actionset);
+       if (!af_table) {
+               pr_err("%s:%d: no fields available for this actionset: %d\n",
+                      __func__, __LINE__, ri->data.actionset);
+               return -EINVAL;
+       }
+       /* Get a valid typegroup for the specific actionset */
+       tg_table = vcap_actionfield_typegroup(ri->vctrl, ri->admin->vtype,
+                                             ri->data.actionset);
+       if (!tg_table) {
+               pr_err("%s:%d: no typegroups available for this actionset: %d\n",
+                      __func__, __LINE__, ri->data.actionset);
+               return -EINVAL;
+       }
+       /* Get a valid actionset size for the specific actionset */
+       actionset_size = vcap_actionfield_count(ri->vctrl, ri->admin->vtype,
+                                               ri->data.actionset);
+       if (actionset_size == 0) {
+               pr_err("%s:%d: zero field count for this actionset: %d\n",
+                      __func__, __LINE__, ri->data.actionset);
+               return -EINVAL;
+       }
+       /* Iterate over the actionfields in the rule
+        * and encode these bits
+        */
+       if (list_empty(&ri->data.actionfields))
+               pr_warn("%s:%d: no actionfields in the rule\n",
+                       __func__, __LINE__);
+       list_for_each_entry(caf, &ri->data.actionfields, ctrl.list) {
+               /* Check that the client action exists in the actionset */
+               if (caf->ctrl.action >= actionset_size) {
+                       pr_err("%s:%d: action %d is not in vcap\n",
+                              __func__, __LINE__, caf->ctrl.action);
+                       return -EINVAL;
+               }
+               vcap_encode_actionfield(ri, caf, &af_table[caf->ctrl.action],
+                                       tg_table);
+       }
+       /* Add typegroup bits to the entry bitstreams */
+       vcap_encode_actionfield_typegroups(ri, tg_table);
+       return 0;
+}
+
+static int vcap_encode_rule(struct vcap_rule_internal *ri)
+{
+       int err;
+
+       err = vcap_encode_rule_keyset(ri);
+       if (err)
+               return err;
+       err = vcap_encode_rule_actionset(ri);
+       if (err)
+               return err;
+       return 0;
+}
+
+static int vcap_api_check(struct vcap_control *ctrl)
+{
+       if (!ctrl) {
+               pr_err("%s:%d: vcap control is missing\n", __func__, __LINE__);
+               return -EINVAL;
+       }
+       if (!ctrl->ops || !ctrl->ops->validate_keyset ||
+           !ctrl->ops->add_default_fields || !ctrl->ops->cache_erase ||
+           !ctrl->ops->cache_write || !ctrl->ops->cache_read ||
+           !ctrl->ops->init || !ctrl->ops->update || !ctrl->ops->move ||
+           !ctrl->ops->port_info) {
+               pr_err("%s:%d: client operations are missing\n",
+                      __func__, __LINE__);
+               return -ENOENT;
+       }
+       return 0;
+}
+
+static void vcap_erase_cache(struct vcap_rule_internal *ri)
+{
+       ri->vctrl->ops->cache_erase(ri->admin);
+}
+
+/* Update the keyset for the rule */
+int vcap_set_rule_set_keyset(struct vcap_rule *rule,
+                            enum vcap_keyfield_set keyset)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       const struct vcap_set *kset;
+       int sw_width;
+
+       kset = vcap_keyfieldset(ri->vctrl, ri->admin->vtype, keyset);
+       /* Check that the keyset is valid */
+       if (!kset)
+               return -EINVAL;
+       ri->keyset_sw = kset->sw_per_item;
+       sw_width = ri->vctrl->vcaps[ri->admin->vtype].sw_width;
+       ri->keyset_sw_regs = DIV_ROUND_UP(sw_width, 32);
+       ri->data.keyset = keyset;
+       return 0;
+}
+EXPORT_SYMBOL_GPL(vcap_set_rule_set_keyset);
+
+/* Update the actionset for the rule */
+int vcap_set_rule_set_actionset(struct vcap_rule *rule,
+                               enum vcap_actionfield_set actionset)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       const struct vcap_set *aset;
+       int act_width;
+
+       aset = vcap_actionfieldset(ri->vctrl, ri->admin->vtype, actionset);
+       /* Check that the actionset is valid */
+       if (!aset)
+               return -EINVAL;
+       ri->actionset_sw = aset->sw_per_item;
+       act_width = ri->vctrl->vcaps[ri->admin->vtype].act_width;
+       ri->actionset_sw_regs = DIV_ROUND_UP(act_width, 32);
+       ri->data.actionset = actionset;
+       return 0;
+}
+EXPORT_SYMBOL_GPL(vcap_set_rule_set_actionset);
+
+/* Find a rule with a provided rule id */
+static struct vcap_rule_internal *vcap_lookup_rule(struct vcap_control *vctrl,
+                                                  u32 id)
+{
+       struct vcap_rule_internal *ri;
+       struct vcap_admin *admin;
+
+       /* Look for the rule id in all vcaps */
+       list_for_each_entry(admin, &vctrl->list, list)
+               list_for_each_entry(ri, &admin->rules, list)
+                       if (ri->data.id == id)
+                               return ri;
+       return NULL;
+}
+
+/* Find a rule id with a provided cookie */
+int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie)
+{
+       struct vcap_rule_internal *ri;
+       struct vcap_admin *admin;
+
+       /* Look for the rule id in all vcaps */
+       list_for_each_entry(admin, &vctrl->list, list)
+               list_for_each_entry(ri, &admin->rules, list)
+                       if (ri->data.cookie == cookie)
+                               return ri->data.id;
+       return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(vcap_lookup_rule_by_cookie);
+
+/* Make a shallow copy of the rule without the fields */
+static struct vcap_rule_internal *vcap_dup_rule(struct vcap_rule_internal *ri)
+{
+       struct vcap_rule_internal *duprule;
+
+       /* Allocate the client part */
+       duprule = kzalloc(sizeof(*duprule), GFP_KERNEL);
+       if (!duprule)
+               return ERR_PTR(-ENOMEM);
+       *duprule = *ri;
+       /* Not inserted in the VCAP */
+       INIT_LIST_HEAD(&duprule->list);
+       /* No elements in these lists */
+       INIT_LIST_HEAD(&duprule->data.keyfields);
+       INIT_LIST_HEAD(&duprule->data.actionfields);
+       return duprule;
+}
+
+/* Write VCAP cache content to the VCAP HW instance */
+static int vcap_write_rule(struct vcap_rule_internal *ri)
+{
+       struct vcap_admin *admin = ri->admin;
+       int sw_idx, ent_idx = 0, act_idx = 0;
+       u32 addr = ri->addr;
+
+       if (!ri->size || !ri->keyset_sw_regs || !ri->actionset_sw_regs) {
+               pr_err("%s:%d: rule is empty\n", __func__, __LINE__);
+               return -EINVAL;
+       }
+       /* Use the values in the streams to write the VCAP cache */
+       for (sw_idx = 0; sw_idx < ri->size; sw_idx++, addr++) {
+               ri->vctrl->ops->cache_write(ri->ndev, admin,
+                                           VCAP_SEL_ENTRY, ent_idx,
+                                           ri->keyset_sw_regs);
+               ri->vctrl->ops->cache_write(ri->ndev, admin,
+                                           VCAP_SEL_ACTION, act_idx,
+                                           ri->actionset_sw_regs);
+               ri->vctrl->ops->update(ri->ndev, admin, VCAP_CMD_WRITE,
+                                      VCAP_SEL_ALL, addr);
+               ent_idx += ri->keyset_sw_regs;
+               act_idx += ri->actionset_sw_regs;
+       }
+       return 0;
+}
+
+/* Lookup a vcap instance using chain id */
+struct vcap_admin *vcap_find_admin(struct vcap_control *vctrl, int cid)
+{
+       struct vcap_admin *admin;
+
+       if (vcap_api_check(vctrl))
+               return NULL;
+
+       list_for_each_entry(admin, &vctrl->list, list) {
+               if (cid >= admin->first_cid && cid <= admin->last_cid)
+                       return admin;
+       }
+       return NULL;
+}
+EXPORT_SYMBOL_GPL(vcap_find_admin);
+
+/* Check if there is room for a new rule */
+static int vcap_rule_space(struct vcap_admin *admin, int size)
+{
+       if (admin->last_used_addr - size < admin->first_valid_addr) {
+               pr_err("%s:%d: No room for rule size: %d, %u\n",
+                      __func__, __LINE__, size, admin->first_valid_addr);
+               return -ENOSPC;
+       }
+       return 0;
+}
+
+/* Add the keyset typefield to the list of rule keyfields */
+static int vcap_add_type_keyfield(struct vcap_rule *rule)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       enum vcap_keyfield_set keyset = rule->keyset;
+       enum vcap_type vt = ri->admin->vtype;
+       const struct vcap_field *fields;
+       const struct vcap_set *kset;
+       int ret = -EINVAL;
+
+       kset = vcap_keyfieldset(ri->vctrl, vt, keyset);
+       if (!kset)
+               return ret;
+       if (kset->type_id == (u8)-1)  /* No type field is needed */
+               return 0;
+
+       fields = vcap_keyfields(ri->vctrl, vt, keyset);
+       if (!fields)
+               return -EINVAL;
+       if (fields[VCAP_KF_TYPE].width > 1) {
+               ret = vcap_rule_add_key_u32(rule, VCAP_KF_TYPE,
+                                           kset->type_id, 0xff);
+       } else {
+               if (kset->type_id)
+                       ret = vcap_rule_add_key_bit(rule, VCAP_KF_TYPE,
+                                                   VCAP_BIT_1);
+               else
+                       ret = vcap_rule_add_key_bit(rule, VCAP_KF_TYPE,
+                                                   VCAP_BIT_0);
+       }
+       return ret;
+}
+
+/* Validate a rule with respect to available port keys */
+int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       enum vcap_keyfield_set keysets[10];
+       struct vcap_keyset_list kslist;
+       int ret;
+
+       /* This validation will be much expanded later */
+       ret = vcap_api_check(ri->vctrl);
+       if (ret)
+               return ret;
+       if (!ri->admin) {
+               ri->data.exterr = VCAP_ERR_NO_ADMIN;
+               return -EINVAL;
+       }
+       if (!ri->ndev) {
+               ri->data.exterr = VCAP_ERR_NO_NETDEV;
+               return -EINVAL;
+       }
+       if (ri->data.keyset == VCAP_KFS_NO_VALUE) {
+               ri->data.exterr = VCAP_ERR_NO_KEYSET_MATCH;
+               return -EINVAL;
+       }
+       /* prepare for keyset validation */
+       keysets[0] = ri->data.keyset;
+       kslist.keysets = keysets;
+       kslist.cnt = 1;
+       /* Pick a keyset that is supported in the port lookups */
+       ret = ri->vctrl->ops->validate_keyset(ri->ndev, ri->admin, rule, &kslist,
+                                             l3_proto);
+       if (ret < 0) {
+               pr_err("%s:%d: keyset validation failed: %d\n",
+                      __func__, __LINE__, ret);
+               ri->data.exterr = VCAP_ERR_NO_PORT_KEYSET_MATCH;
+               return ret;
+       }
+       if (ri->data.actionset == VCAP_AFS_NO_VALUE) {
+               ri->data.exterr = VCAP_ERR_NO_ACTIONSET_MATCH;
+               return -EINVAL;
+       }
+       ret = vcap_add_type_keyfield(rule);
+       if (ret)
+               return ret;
+       /* Add default fields to this rule */
+       ri->vctrl->ops->add_default_fields(ri->ndev, ri->admin, rule);
+
+       /* Rule size is the maximum of the entry and action subword count */
+       ri->size = max(ri->keyset_sw, ri->actionset_sw);
+
+       /* Finally check if there is room for the rule in the VCAP */
+       return vcap_rule_space(ri->admin, ri->size);
+}
+EXPORT_SYMBOL_GPL(vcap_val_rule);
+
+/* Calculate the address of the next rule after this (lower address and prio) */
+static u32 vcap_next_rule_addr(u32 addr, struct vcap_rule_internal *ri)
+{
+       return ((addr - ri->size) / ri->size) * ri->size;
+}
+
+/* Assign a unique rule id and autogenerate one if id == 0 */
+static u32 vcap_set_rule_id(struct vcap_rule_internal *ri)
+{
+       u32 next_id;
+
+       if (ri->data.id != 0)
+               return ri->data.id;
+
+       for (next_id = ri->vctrl->rule_id + 1; next_id < ~0; ++next_id) {
+               if (!vcap_lookup_rule(ri->vctrl, next_id)) {
+                       ri->data.id = next_id;
+                       ri->vctrl->rule_id = next_id;
+                       break;
+               }
+       }
+       return ri->data.id;
+}
+
+static int vcap_insert_rule(struct vcap_rule_internal *ri,
+                           struct vcap_rule_move *move)
+{
+       struct vcap_admin *admin = ri->admin;
+       struct vcap_rule_internal *duprule;
+
+       /* Only support appending rules for now */
+       ri->addr = vcap_next_rule_addr(admin->last_used_addr, ri);
+       admin->last_used_addr = ri->addr;
+       /* Add a shallow copy of the rule to the VCAP list */
+       duprule = vcap_dup_rule(ri);
+       if (IS_ERR(duprule))
+               return PTR_ERR(duprule);
+       list_add_tail(&duprule->list, &admin->rules);
+       return 0;
+}
+
+static void vcap_move_rules(struct vcap_rule_internal *ri,
+                           struct vcap_rule_move *move)
+{
+       ri->vctrl->ops->move(ri->ndev, ri->admin, move->addr,
+                            move->offset, move->count);
+}
+
+/* Encode and write a validated rule to the VCAP */
+int vcap_add_rule(struct vcap_rule *rule)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       struct vcap_rule_move move = {0};
+       int ret;
+
+       ret = vcap_api_check(ri->vctrl);
+       if (ret)
+               return ret;
+       /* Insert the new rule in the list of vcap rules */
+       ret = vcap_insert_rule(ri, &move);
+       if (ret < 0) {
+               pr_err("%s:%d: could not insert rule in vcap list: %d\n",
+                      __func__, __LINE__, ret);
+               goto out;
+       }
+       if (move.count > 0)
+               vcap_move_rules(ri, &move);
+       ret = vcap_encode_rule(ri);
+       if (ret) {
+               pr_err("%s:%d: rule encoding error: %d\n", __func__, __LINE__, ret);
+               goto out;
+       }
+
+       ret = vcap_write_rule(ri);
+       if (ret)
+               pr_err("%s:%d: rule write error: %d\n", __func__, __LINE__, ret);
+out:
+       return ret;
+}
+EXPORT_SYMBOL_GPL(vcap_add_rule);
+
+/* Allocate a new rule with the provided arguments */
+struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl,
+                                 struct net_device *ndev, int vcap_chain_id,
+                                 enum vcap_user user, u16 priority,
+                                 u32 id)
+{
+       struct vcap_rule_internal *ri;
+       struct vcap_admin *admin;
+       int maxsize;
+
+       if (!ndev)
+               return ERR_PTR(-ENODEV);
+       /* Get the VCAP instance */
+       admin = vcap_find_admin(vctrl, vcap_chain_id);
+       if (!admin)
+               return ERR_PTR(-ENOENT);
+       /* Sanity check that this VCAP is supported on this platform */
+       if (vctrl->vcaps[admin->vtype].rows == 0)
+               return ERR_PTR(-EINVAL);
+       /* Check if a rule with this id already exists */
+       if (vcap_lookup_rule(vctrl, id))
+               return ERR_PTR(-EEXIST);
+       /* Check if there is room for the rule in the block(s) of the VCAP */
+       maxsize = vctrl->vcaps[admin->vtype].sw_count; /* worst case rule size */
+       if (vcap_rule_space(admin, maxsize))
+               return ERR_PTR(-ENOSPC);
+       /* Create a container for the rule and return it */
+       ri = kzalloc(sizeof(*ri), GFP_KERNEL);
+       if (!ri)
+               return ERR_PTR(-ENOMEM);
+       ri->data.vcap_chain_id = vcap_chain_id;
+       ri->data.user = user;
+       ri->data.priority = priority;
+       ri->data.id = id;
+       ri->data.keyset = VCAP_KFS_NO_VALUE;
+       ri->data.actionset = VCAP_AFS_NO_VALUE;
+       INIT_LIST_HEAD(&ri->list);
+       INIT_LIST_HEAD(&ri->data.keyfields);
+       INIT_LIST_HEAD(&ri->data.actionfields);
+       ri->ndev = ndev;
+       ri->admin = admin; /* refer to the vcap instance */
+       ri->vctrl = vctrl; /* refer to the client */
+       if (vcap_set_rule_id(ri) == 0)
+               goto out_free;
+       vcap_erase_cache(ri);
+       return (struct vcap_rule *)ri;
+
+out_free:
+       kfree(ri);
+       return ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(vcap_alloc_rule);
+
+/* Free memory of a rule owned by the client after the rule has been added to the VCAP */
+void vcap_free_rule(struct vcap_rule *rule)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       struct vcap_client_actionfield *caf, *next_caf;
+       struct vcap_client_keyfield *ckf, *next_ckf;
+
+       /* Deallocate the list of keys and actions */
+       list_for_each_entry_safe(ckf, next_ckf, &ri->data.keyfields, ctrl.list) {
+               list_del(&ckf->ctrl.list);
+               kfree(ckf);
+       }
+       list_for_each_entry_safe(caf, next_caf, &ri->data.actionfields, ctrl.list) {
+               list_del(&caf->ctrl.list);
+               kfree(caf);
+       }
+       /* Deallocate the rule */
+       kfree(rule);
+}
+EXPORT_SYMBOL_GPL(vcap_free_rule);
+
+/* Delete rule in a VCAP instance */
+int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id)
+{
+       struct vcap_rule_internal *ri, *elem;
+       struct vcap_admin *admin;
+       int err;
+
+       /* This will later also handle rule moving */
+       if (!ndev)
+               return -ENODEV;
+       err = vcap_api_check(vctrl);
+       if (err)
+               return err;
+       /* Look for the rule id in all vcaps */
+       ri = vcap_lookup_rule(vctrl, id);
+       if (!ri)
+               return -EINVAL;
+       admin = ri->admin;
+       list_del(&ri->list);
+
+       /* delete the rule in the cache */
+       vctrl->ops->init(ndev, admin, ri->addr, ri->size);
+       if (list_empty(&admin->rules)) {
+               admin->last_used_addr = admin->last_valid_addr;
+       } else {
+               /* update the address range end marker from the last rule in the list */
+               elem = list_last_entry(&admin->rules, struct vcap_rule_internal, list);
+               admin->last_used_addr = elem->addr;
+       }
+       kfree(ri);
+       return 0;
+}
+EXPORT_SYMBOL_GPL(vcap_del_rule);
+
+/* Delete all rules in the VCAP instance */
+int vcap_del_rules(struct vcap_control *vctrl, struct vcap_admin *admin)
+{
+       struct vcap_rule_internal *ri, *next_ri;
+       int ret = vcap_api_check(vctrl);
+
+       if (ret)
+               return ret;
+       list_for_each_entry_safe(ri, next_ri, &admin->rules, list) {
+               vctrl->ops->init(ri->ndev, admin, ri->addr, ri->size);
+               list_del(&ri->list);
+               kfree(ri);
+       }
+       admin->last_used_addr = admin->last_valid_addr;
+       return 0;
+}
+EXPORT_SYMBOL_GPL(vcap_del_rules);
+
+/* Find information on a key field in a rule */
+const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule,
+                                             enum vcap_key_field key)
+{
+       struct vcap_rule_internal *ri = to_intrule(rule);
+       enum vcap_keyfield_set keyset = rule->keyset;
+       enum vcap_type vt = ri->admin->vtype;
+       const struct vcap_field *fields;
+
+       if (keyset == VCAP_KFS_NO_VALUE)
+               return NULL;
+       fields = vcap_keyfields(ri->vctrl, vt, keyset);
+       if (!fields)
+               return NULL;
+       return &fields[key];
+}
+EXPORT_SYMBOL_GPL(vcap_lookup_keyfield);
+
+static void vcap_copy_from_client_keyfield(struct vcap_rule *rule,
+                                          struct vcap_client_keyfield *field,
+                                          struct vcap_client_keyfield_data *data)
+{
+       /* This will be expanded later to handle different vcap memory layouts */
+       memcpy(&field->data, data, sizeof(field->data));
+}
+
+static int vcap_rule_add_key(struct vcap_rule *rule,
+                            enum vcap_key_field key,
+                            enum vcap_field_type ftype,
+                            struct vcap_client_keyfield_data *data)
+{
+       struct vcap_client_keyfield *field;
+
+       /* More validation will be added here later */
+       field = kzalloc(sizeof(*field), GFP_KERNEL);
+       if (!field)
+               return -ENOMEM;
+       field->ctrl.key = key;
+       field->ctrl.type = ftype;
+       vcap_copy_from_client_keyfield(rule, field, data);
+       list_add_tail(&field->ctrl.list, &rule->keyfields);
+       return 0;
+}
+
+static void vcap_rule_set_key_bitsize(struct vcap_u1_key *u1, enum vcap_bit val)
+{
+       switch (val) {
+       case VCAP_BIT_0:
+               u1->value = 0;
+               u1->mask = 1;
+               break;
+       case VCAP_BIT_1:
+               u1->value = 1;
+               u1->mask = 1;
+               break;
+       case VCAP_BIT_ANY:
+               u1->value = 0;
+               u1->mask = 0;
+               break;
+       }
+}
+
+/* Add a bit key with value and mask to the rule */
+int vcap_rule_add_key_bit(struct vcap_rule *rule, enum vcap_key_field key,
+                         enum vcap_bit val)
+{
+       struct vcap_client_keyfield_data data;
+
+       vcap_rule_set_key_bitsize(&data.u1, val);
+       return vcap_rule_add_key(rule, key, VCAP_FIELD_BIT, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_key_bit);
+
+/* Add a 32 bit key field with value and mask to the rule */
+int vcap_rule_add_key_u32(struct vcap_rule *rule, enum vcap_key_field key,
+                         u32 value, u32 mask)
+{
+       struct vcap_client_keyfield_data data;
+
+       data.u32.value = value;
+       data.u32.mask = mask;
+       return vcap_rule_add_key(rule, key, VCAP_FIELD_U32, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_key_u32);
+
+/* Add a 48 bit key with value and mask to the rule */
+int vcap_rule_add_key_u48(struct vcap_rule *rule, enum vcap_key_field key,
+                         struct vcap_u48_key *fieldval)
+{
+       struct vcap_client_keyfield_data data;
+
+       memcpy(&data.u48, fieldval, sizeof(data.u48));
+       return vcap_rule_add_key(rule, key, VCAP_FIELD_U48, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_key_u48);
+
+/* Add a 72 bit key with value and mask to the rule */
+int vcap_rule_add_key_u72(struct vcap_rule *rule, enum vcap_key_field key,
+                         struct vcap_u72_key *fieldval)
+{
+       struct vcap_client_keyfield_data data;
+
+       memcpy(&data.u72, fieldval, sizeof(data.u72));
+       return vcap_rule_add_key(rule, key, VCAP_FIELD_U72, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_key_u72);
+
+static void vcap_copy_from_client_actionfield(struct vcap_rule *rule,
+                                             struct vcap_client_actionfield *field,
+                                             struct vcap_client_actionfield_data *data)
+{
+       /* This will be expanded later to handle different vcap memory layouts */
+       memcpy(&field->data, data, sizeof(field->data));
+}
+
+static int vcap_rule_add_action(struct vcap_rule *rule,
+                               enum vcap_action_field action,
+                               enum vcap_field_type ftype,
+                               struct vcap_client_actionfield_data *data)
+{
+       struct vcap_client_actionfield *field;
+
+       /* More validation will be added here later */
+       field = kzalloc(sizeof(*field), GFP_KERNEL);
+       if (!field)
+               return -ENOMEM;
+       field->ctrl.action = action;
+       field->ctrl.type = ftype;
+       vcap_copy_from_client_actionfield(rule, field, data);
+       list_add_tail(&field->ctrl.list, &rule->actionfields);
+       return 0;
+}
+
+static void vcap_rule_set_action_bitsize(struct vcap_u1_action *u1,
+                                        enum vcap_bit val)
+{
+       switch (val) {
+       case VCAP_BIT_0:
+               u1->value = 0;
+               break;
+       case VCAP_BIT_1:
+               u1->value = 1;
+               break;
+       case VCAP_BIT_ANY:
+               u1->value = 0;
+               break;
+       }
+}
+
+/* Add a bit action with value to the rule */
+int vcap_rule_add_action_bit(struct vcap_rule *rule,
+                            enum vcap_action_field action,
+                            enum vcap_bit val)
+{
+       struct vcap_client_actionfield_data data;
+
+       vcap_rule_set_action_bitsize(&data.u1, val);
+       return vcap_rule_add_action(rule, action, VCAP_FIELD_BIT, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_action_bit);
+
+/* Add a 32 bit action field with value to the rule */
+int vcap_rule_add_action_u32(struct vcap_rule *rule,
+                            enum vcap_action_field action,
+                            u32 value)
+{
+       struct vcap_client_actionfield_data data;
+
+       data.u32.value = value;
+       return vcap_rule_add_action(rule, action, VCAP_FIELD_U32, &data);
+}
+EXPORT_SYMBOL_GPL(vcap_rule_add_action_u32);
+
+/* Copy a byte array from network byte order to host byte order */
+void vcap_netbytes_copy(u8 *dst, u8 *src, int count)
+{
+       int idx;
+
+       for (idx = 0; idx < count; ++idx, ++dst)
+               *dst = src[count - idx - 1];
+}
+EXPORT_SYMBOL_GPL(vcap_netbytes_copy);
+
+/* Convert validation error code into tc extack error message */
+void vcap_set_tc_exterr(struct flow_cls_offload *fco, struct vcap_rule *vrule)
+{
+       switch (vrule->exterr) {
+       case VCAP_ERR_NONE:
+               break;
+       case VCAP_ERR_NO_ADMIN:
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "Missing VCAP instance");
+               break;
+       case VCAP_ERR_NO_NETDEV:
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "Missing network interface");
+               break;
+       case VCAP_ERR_NO_KEYSET_MATCH:
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "No keyset matched the filter keys");
+               break;
+       case VCAP_ERR_NO_ACTIONSET_MATCH:
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "No actionset matched the filter actions");
+               break;
+       case VCAP_ERR_NO_PORT_KEYSET_MATCH:
+               NL_SET_ERR_MSG_MOD(fco->common.extack,
+                                  "No port keyset matched the filter keys");
+               break;
+       }
+}
+EXPORT_SYMBOL_GPL(vcap_set_tc_exterr);
+
+#ifdef CONFIG_VCAP_KUNIT_TEST
+#include "vcap_api_kunit.c"
+#endif
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api.h b/drivers/net/ethernet/microchip/vcap/vcap_api.h
new file mode 100644 (file)
index 0000000..eb2eae7
--- /dev/null
@@ -0,0 +1,272 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API
+ */
+
+#ifndef __VCAP_API__
+#define __VCAP_API__
+
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+
+/* Use the generated API model */
+#ifdef CONFIG_VCAP_KUNIT_TEST
+#include "vcap_ag_api_kunit.h"
+#endif
+#include "vcap_ag_api.h"
+
+#define VCAP_CID_LOOKUP_SIZE          100000 /* Chains in a lookup */
+#define VCAP_CID_INGRESS_L0          1000000 /* Ingress Stage 1 Lookup 0 */
+#define VCAP_CID_INGRESS_L1          1100000 /* Ingress Stage 1 Lookup 1 */
+#define VCAP_CID_INGRESS_L2          1200000 /* Ingress Stage 1 Lookup 2 */
+#define VCAP_CID_INGRESS_L3          1300000 /* Ingress Stage 1 Lookup 3 */
+#define VCAP_CID_INGRESS_L4          1400000 /* Ingress Stage 1 Lookup 4 */
+#define VCAP_CID_INGRESS_L5          1500000 /* Ingress Stage 1 Lookup 5 */
+
+#define VCAP_CID_PREROUTING_IPV6     3000000 /* Prerouting Stage */
+#define VCAP_CID_PREROUTING          6000000 /* Prerouting Stage */
+
+#define VCAP_CID_INGRESS_STAGE2_L0   8000000 /* Ingress Stage 2 Lookup 0 */
+#define VCAP_CID_INGRESS_STAGE2_L1   8100000 /* Ingress Stage 2 Lookup 1 */
+#define VCAP_CID_INGRESS_STAGE2_L2   8200000 /* Ingress Stage 2 Lookup 2 */
+#define VCAP_CID_INGRESS_STAGE2_L3   8300000 /* Ingress Stage 2 Lookup 3 */
+
+#define VCAP_CID_EGRESS_L0           10000000 /* Egress Lookup 0 */
+#define VCAP_CID_EGRESS_L1           10100000 /* Egress Lookup 1 */
+
+#define VCAP_CID_EGRESS_STAGE2_L0    20000000 /* Egress Stage 2 Lookup 0 */
+#define VCAP_CID_EGRESS_STAGE2_L1    20100000 /* Egress Stage 2 Lookup 1 */
+
+/* Known users of the VCAP API */
+enum vcap_user {
+       VCAP_USER_PTP,
+       VCAP_USER_MRP,
+       VCAP_USER_CFM,
+       VCAP_USER_VLAN,
+       VCAP_USER_QOS,
+       VCAP_USER_VCAP_UTIL,
+       VCAP_USER_TC,
+       VCAP_USER_TC_EXTRA,
+
+       /* add new users above here */
+
+       /* used to define VCAP_USER_MAX below */
+       __VCAP_USER_AFTER_LAST,
+       VCAP_USER_MAX = __VCAP_USER_AFTER_LAST - 1,
+};
+
+/* VCAP information used for displaying data */
+struct vcap_statistics {
+       char *name;
+       int count;
+       const char * const *keyfield_set_names;
+       const char * const *actionfield_set_names;
+       const char * const *keyfield_names;
+       const char * const *actionfield_names;
+};
+
+/* VCAP key/action field type, position and width */
+struct vcap_field {
+       u16 type;
+       u16 width;
+       u16 offset;
+};
+
+/* VCAP keyset or actionset type and width */
+struct vcap_set {
+       u8 type_id;
+       u8 sw_per_item;
+       u8 sw_cnt;
+};
+
+/* VCAP typegroup position and bitvalue */
+struct vcap_typegroup {
+       u16 offset;
+       u16 width;
+       u16 value;
+};
+
+/* VCAP model data */
+struct vcap_info {
+       char *name; /* user-friendly name */
+       u16 rows; /* number of rows in the instance */
+       u16 sw_count; /* maximum subwords used per rule */
+       u16 sw_width; /* bits per subword in a keyset */
+       u16 sticky_width; /* sticky bits per rule */
+       u16 act_width;  /* bits per subword in an actionset */
+       u16 default_cnt; /* number of default rules */
+       u16 require_cnt_dis; /* not used */
+       u16 version; /* vcap rtl version */
+       const struct vcap_set *keyfield_set; /* keysets */
+       int keyfield_set_size; /* number of keysets */
+       const struct vcap_set *actionfield_set; /* actionsets */
+       int actionfield_set_size; /* number of actionsets */
+       /* map of keys per keyset */
+       const struct vcap_field **keyfield_set_map;
+       /* number of entries in the above map */
+       int *keyfield_set_map_size;
+       /* map of actions per actionset */
+       const struct vcap_field **actionfield_set_map;
+       /* number of entries in the above map */
+       int *actionfield_set_map_size;
+       /* map of keyset typegroups per subword size */
+       const struct vcap_typegroup **keyfield_set_typegroups;
+       /* map of actionset typegroups per subword size */
+       const struct vcap_typegroup **actionfield_set_typegroups;
+};
+
+enum vcap_field_type {
+       VCAP_FIELD_BIT,
+       VCAP_FIELD_U32,
+       VCAP_FIELD_U48,
+       VCAP_FIELD_U56,
+       VCAP_FIELD_U64,
+       VCAP_FIELD_U72,
+       VCAP_FIELD_U112,
+       VCAP_FIELD_U128,
+};
+
+/* VCAP rule data towards the VCAP cache */
+struct vcap_cache_data {
+       u32 *keystream;
+       u32 *maskstream;
+       u32 *actionstream;
+       u32 counter;
+       bool sticky;
+};
+
+/* Selects which part of the rule must be updated */
+enum vcap_selection {
+       VCAP_SEL_ENTRY = 0x01,
+       VCAP_SEL_ACTION = 0x02,
+       VCAP_SEL_COUNTER = 0x04,
+       VCAP_SEL_ALL = 0xff,
+};
+
+/* Commands towards the VCAP cache */
+enum vcap_command {
+       VCAP_CMD_WRITE = 0,
+       VCAP_CMD_READ = 1,
+       VCAP_CMD_MOVE_DOWN = 2,
+       VCAP_CMD_MOVE_UP = 3,
+       VCAP_CMD_INITIALIZE = 4,
+};
+
+enum vcap_rule_error {
+       VCAP_ERR_NONE = 0,  /* No known error */
+       VCAP_ERR_NO_ADMIN,  /* No admin instance */
+       VCAP_ERR_NO_NETDEV,  /* No netdev instance */
+       VCAP_ERR_NO_KEYSET_MATCH, /* No keyset matched the rule keys */
+       VCAP_ERR_NO_ACTIONSET_MATCH, /* No actionset matched the rule actions */
+       VCAP_ERR_NO_PORT_KEYSET_MATCH, /* No port keyset matched the rule keys */
+};
+
+/* Administration of each VCAP instance */
+struct vcap_admin {
+       struct list_head list; /* for insertion in vcap_control */
+       struct list_head rules; /* list of rules */
+       enum vcap_type vtype;  /* type of vcap */
+       int vinst; /* instance number within the same type */
+       int first_cid; /* first chain id in this vcap */
+       int last_cid; /* last chain id in this vcap */
+       int tgt_inst; /* hardware instance number */
+       int lookups; /* number of lookups in this vcap type */
+       int lookups_per_instance; /* number of lookups in this instance */
+       int last_valid_addr; /* top of address range to be used */
+       int first_valid_addr; /* bottom of address range to be used */
+       int last_used_addr;  /* address of lowest added rule */
+       bool w32be; /* vcap uses "32bit-word big-endian" encoding */
+       struct vcap_cache_data cache; /* encoded rule data */
+};
+
+/* Client supplied VCAP rule data */
+struct vcap_rule {
+       int vcap_chain_id; /* chain used for this rule */
+       enum vcap_user user; /* rule owner */
+       u16 priority;
+       u32 id;  /* vcap rule id, must be unique, 0 will auto-generate a value */
+       u64 cookie;  /* used by the client to identify the rule */
+       struct list_head keyfields;  /* list of vcap_client_keyfield */
+       struct list_head actionfields;  /* list of vcap_client_actionfield */
+       enum vcap_keyfield_set keyset; /* keyset used: may be derived from fields */
+       enum vcap_actionfield_set actionset; /* actionset used: may be derived from fields */
+       enum vcap_rule_error exterr; /* extended error - used by TC */
+       u64 client; /* space for client defined data */
+};
+
+/* List of keysets */
+struct vcap_keyset_list {
+       int max; /* size of the keyset list */
+       int cnt; /* count of keysets actually in the list */
+       enum vcap_keyfield_set *keysets; /* the list of keysets */
+};
+
+/* Client supplied VCAP callback operations */
+struct vcap_operations {
+       /* validate port keyset operation */
+       enum vcap_keyfield_set (*validate_keyset)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                struct vcap_rule *rule,
+                struct vcap_keyset_list *kslist,
+                u16 l3_proto);
+       /* add default rule fields for the selected keyset operations */
+       void (*add_default_fields)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                struct vcap_rule *rule);
+       /* cache operations */
+       void (*cache_erase)
+               (struct vcap_admin *admin);
+       void (*cache_write)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                enum vcap_selection sel,
+                u32 idx, u32 count);
+       void (*cache_read)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                enum vcap_selection sel,
+                u32 idx,
+                u32 count);
+       /* block operations */
+       void (*init)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                u32 addr,
+                u32 count);
+       void (*update)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                enum vcap_command cmd,
+                enum vcap_selection sel,
+                u32 addr);
+       void (*move)
+               (struct net_device *ndev,
+                struct vcap_admin *admin,
+                u32 addr,
+                int offset,
+                int count);
+       /* informational */
+       int (*port_info)
+               (struct net_device *ndev,
+                enum vcap_type vtype,
+                int (*pf)(void *out, int arg, const char *fmt, ...),
+                void *out,
+                int arg);
+};
+
+/* VCAP API Client control interface */
+struct vcap_control {
+       u32 rule_id; /* last used rule id (unique across VCAP instances) */
+       struct vcap_operations *ops;  /* client supplied operations */
+       const struct vcap_info *vcaps; /* client supplied vcap models */
+       const struct vcap_statistics *stats; /* client supplied vcap stats */
+       struct list_head list; /* list of vcap instances */
+};
+
+/* Set client control interface on the API */
+int vcap_api_set_client(struct vcap_control *vctrl);
+
+#endif /* __VCAP_API__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_client.h b/drivers/net/ethernet/microchip/vcap/vcap_api_client.h
new file mode 100644 (file)
index 0000000..5df6808
--- /dev/null
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API
+ */
+
+#ifndef __VCAP_API_CLIENT__
+#define __VCAP_API_CLIENT__
+
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <net/flow_offload.h>
+
+#include "vcap_api.h"
+
+/* Client supplied VCAP rule key control part */
+struct vcap_client_keyfield_ctrl {
+       struct list_head list;  /* For insertion into a rule */
+       enum vcap_key_field key;
+       enum vcap_field_type type;
+};
+
+struct vcap_u1_key {
+       u8 value;
+       u8 mask;
+};
+
+struct vcap_u32_key {
+       u32 value;
+       u32 mask;
+};
+
+struct vcap_u48_key {
+       u8 value[6];
+       u8 mask[6];
+};
+
+struct vcap_u56_key {
+       u8 value[7];
+       u8 mask[7];
+};
+
+struct vcap_u64_key {
+       u8 value[8];
+       u8 mask[8];
+};
+
+struct vcap_u72_key {
+       u8 value[9];
+       u8 mask[9];
+};
+
+struct vcap_u112_key {
+       u8 value[14];
+       u8 mask[14];
+};
+
+struct vcap_u128_key {
+       u8 value[16];
+       u8 mask[16];
+};
+
+/* Client supplied VCAP rule field data */
+struct vcap_client_keyfield_data {
+       union {
+               struct vcap_u1_key u1;
+               struct vcap_u32_key u32;
+               struct vcap_u48_key u48;
+               struct vcap_u56_key u56;
+               struct vcap_u64_key u64;
+               struct vcap_u72_key u72;
+               struct vcap_u112_key u112;
+               struct vcap_u128_key u128;
+       };
+};
+
+/* Client supplied VCAP rule key (value, mask) */
+struct vcap_client_keyfield {
+       struct vcap_client_keyfield_ctrl ctrl;
+       struct vcap_client_keyfield_data data;
+};
+
+/* Client supplied VCAP rule action control part */
+struct vcap_client_actionfield_ctrl {
+       struct list_head list;  /* For insertion into a rule */
+       enum vcap_action_field action;
+       enum vcap_field_type type;
+};
+
+struct vcap_u1_action {
+       u8 value;
+};
+
+struct vcap_u32_action {
+       u32 value;
+};
+
+struct vcap_u48_action {
+       u8 value[6];
+};
+
+struct vcap_u56_action {
+       u8 value[7];
+};
+
+struct vcap_u64_action {
+       u8 value[8];
+};
+
+struct vcap_u72_action {
+       u8 value[9];
+};
+
+struct vcap_u112_action {
+       u8 value[14];
+};
+
+struct vcap_u128_action {
+       u8 value[16];
+};
+
+struct vcap_client_actionfield_data {
+       union {
+               struct vcap_u1_action u1;
+               struct vcap_u32_action u32;
+               struct vcap_u48_action u48;
+               struct vcap_u56_action u56;
+               struct vcap_u64_action u64;
+               struct vcap_u72_action u72;
+               struct vcap_u112_action u112;
+               struct vcap_u128_action u128;
+       };
+};
+
+struct vcap_client_actionfield {
+       struct vcap_client_actionfield_ctrl ctrl;
+       struct vcap_client_actionfield_data data;
+};
+
+enum vcap_bit {
+       VCAP_BIT_ANY,
+       VCAP_BIT_0,
+       VCAP_BIT_1
+};
+
+/* VCAP rule operations */
+/* Allocate a rule and fill in the basic information */
+struct vcap_rule *vcap_alloc_rule(struct vcap_control *vctrl,
+                                 struct net_device *ndev,
+                                 int vcap_chain_id,
+                                 enum vcap_user user,
+                                 u16 priority,
+                                 u32 id);
+/* Free mem of a rule owned by client */
+void vcap_free_rule(struct vcap_rule *rule);
+/* Validate a rule before adding it to the VCAP */
+int vcap_val_rule(struct vcap_rule *rule, u16 l3_proto);
+/* Add rule to a VCAP instance */
+int vcap_add_rule(struct vcap_rule *rule);
+/* Delete rule in a VCAP instance */
+int vcap_del_rule(struct vcap_control *vctrl, struct net_device *ndev, u32 id);
+
+/* Update the keyset for the rule */
+int vcap_set_rule_set_keyset(struct vcap_rule *rule,
+                            enum vcap_keyfield_set keyset);
+/* Update the actionset for the rule */
+int vcap_set_rule_set_actionset(struct vcap_rule *rule,
+                               enum vcap_actionfield_set actionset);
+
+/* VCAP rule field operations */
+int vcap_rule_add_key_bit(struct vcap_rule *rule, enum vcap_key_field key,
+                         enum vcap_bit val);
+int vcap_rule_add_key_u32(struct vcap_rule *rule, enum vcap_key_field key,
+                         u32 value, u32 mask);
+int vcap_rule_add_key_u48(struct vcap_rule *rule, enum vcap_key_field key,
+                         struct vcap_u48_key *fieldval);
+int vcap_rule_add_key_u72(struct vcap_rule *rule, enum vcap_key_field key,
+                         struct vcap_u72_key *fieldval);
+int vcap_rule_add_action_bit(struct vcap_rule *rule,
+                            enum vcap_action_field action, enum vcap_bit val);
+int vcap_rule_add_action_u32(struct vcap_rule *rule,
+                            enum vcap_action_field action, u32 value);
+
+/* VCAP lookup operations */
+/* Lookup a vcap instance using chain id */
+struct vcap_admin *vcap_find_admin(struct vcap_control *vctrl, int cid);
+/* Find information on a key field in a rule */
+const struct vcap_field *vcap_lookup_keyfield(struct vcap_rule *rule,
+                                             enum vcap_key_field key);
+/* Find a rule id with a provided cookie */
+int vcap_lookup_rule_by_cookie(struct vcap_control *vctrl, u64 cookie);
+
+/* Copy a byte array from network byte order to host byte order */
+void vcap_netbytes_copy(u8 *dst, u8 *src, int count);
+
+/* Convert validation error code into tc extack error message */
+void vcap_set_tc_exterr(struct flow_cls_offload *fco, struct vcap_rule *vrule);
+
+/* Cleanup a VCAP instance */
+int vcap_del_rules(struct vcap_control *vctrl, struct vcap_admin *admin);
+
+#endif /* __VCAP_API_CLIENT__ */
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
new file mode 100644 (file)
index 0000000..b01a6e5
--- /dev/null
@@ -0,0 +1,933 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API kunit test suite
+ */
+
+#include <kunit/test.h>
+#include "vcap_api.h"
+#include "vcap_api_client.h"
+#include "vcap_model_kunit.h"
+
+/* First we have the test infrastructure that emulates the platform
+ * implementation
+ */
+#define TEST_BUF_CNT 100
+#define TEST_BUF_SZ  350
+#define STREAMWSIZE 64
+
+static u32 test_updateaddr[STREAMWSIZE] = {};
+static int test_updateaddridx;
+static int test_cache_erase_count;
+static u32 test_init_start;
+static u32 test_init_count;
+static u32 test_hw_counter_id;
+static struct vcap_cache_data test_hw_cache;
+
+/* Callback used by the VCAP API */
+static enum vcap_keyfield_set test_val_keyset(struct net_device *ndev,
+                                             struct vcap_admin *admin,
+                                             struct vcap_rule *rule,
+                                             struct vcap_keyset_list *kslist,
+                                             u16 l3_proto)
+{
+       int idx;
+
+       if (kslist->cnt > 0) {
+               switch (admin->vtype) {
+               case VCAP_TYPE_IS0:
+                       for (idx = 0; idx < kslist->cnt; idx++) {
+                               if (kslist->keysets[idx] == VCAP_KFS_ETAG)
+                                       return kslist->keysets[idx];
+                               if (kslist->keysets[idx] == VCAP_KFS_PURE_5TUPLE_IP4)
+                                       return kslist->keysets[idx];
+                               if (kslist->keysets[idx] == VCAP_KFS_NORMAL_5TUPLE_IP4)
+                                       return kslist->keysets[idx];
+                               if (kslist->keysets[idx] == VCAP_KFS_NORMAL_7TUPLE)
+                                       return kslist->keysets[idx];
+                       }
+                       break;
+               case VCAP_TYPE_IS2:
+                       for (idx = 0; idx < kslist->cnt; idx++) {
+                               if (kslist->keysets[idx] == VCAP_KFS_MAC_ETYPE)
+                                       return kslist->keysets[idx];
+                               if (kslist->keysets[idx] == VCAP_KFS_ARP)
+                                       return kslist->keysets[idx];
+                               if (kslist->keysets[idx] == VCAP_KFS_IP_7TUPLE)
+                                       return kslist->keysets[idx];
+                       }
+                       break;
+               default:
+                       pr_info("%s:%d: no validation for VCAP %d\n",
+                               __func__, __LINE__, admin->vtype);
+                       break;
+               }
+       }
+       return VCAP_KFS_NO_VALUE;
+}
+
+/* Callback used by the VCAP API */
+static void test_add_def_fields(struct net_device *ndev,
+                               struct vcap_admin *admin,
+                               struct vcap_rule *rule)
+{
+       if (admin->vinst == 0 || admin->vinst == 2)
+               vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1);
+       else
+               vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_0);
+}
+
+/* Callback used by the VCAP API */
+static void test_cache_erase(struct vcap_admin *admin)
+{
+       if (test_cache_erase_count) {
+               memset(admin->cache.keystream, 0, test_cache_erase_count);
+               memset(admin->cache.maskstream, 0, test_cache_erase_count);
+               memset(admin->cache.actionstream, 0, test_cache_erase_count);
+               test_cache_erase_count = 0;
+       }
+}
+
+/* Callback used by the VCAP API */
+static void test_cache_init(struct net_device *ndev, struct vcap_admin *admin,
+                           u32 start, u32 count)
+{
+       test_init_start = start;
+       test_init_count = count;
+}
+
+/* Callback used by the VCAP API */
+static void test_cache_read(struct net_device *ndev, struct vcap_admin *admin,
+                           enum vcap_selection sel, u32 start, u32 count)
+{
+       u32 *keystr, *mskstr, *actstr;
+       int idx;
+
+       pr_debug("%s:%d: %d %d\n", __func__, __LINE__, start, count);
+       switch (sel) {
+       case VCAP_SEL_ENTRY:
+               keystr = &admin->cache.keystream[start];
+               mskstr = &admin->cache.maskstream[start];
+               for (idx = 0; idx < count; ++idx) {
+                       pr_debug("%s:%d: keydata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, keystr[idx]);
+               }
+               for (idx = 0; idx < count; ++idx) {
+                       /* Invert the mask before decoding starts */
+                       mskstr[idx] = ~mskstr[idx];
+                       pr_debug("%s:%d: mskdata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, mskstr[idx]);
+               }
+               break;
+       case VCAP_SEL_ACTION:
+               actstr = &admin->cache.actionstream[start];
+               for (idx = 0; idx < count; ++idx) {
+                       pr_debug("%s:%d: actdata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, actstr[idx]);
+               }
+               break;
+       case VCAP_SEL_COUNTER:
+               pr_debug("%s:%d\n", __func__, __LINE__);
+               test_hw_counter_id = start;
+               admin->cache.counter = test_hw_cache.counter;
+               admin->cache.sticky = test_hw_cache.sticky;
+               break;
+       case VCAP_SEL_ALL:
+               pr_debug("%s:%d\n", __func__, __LINE__);
+               break;
+       }
+}
+
+/* Callback used by the VCAP API */
+static void test_cache_write(struct net_device *ndev, struct vcap_admin *admin,
+                            enum vcap_selection sel, u32 start, u32 count)
+{
+       u32 *keystr, *mskstr, *actstr;
+       int idx;
+
+       switch (sel) {
+       case VCAP_SEL_ENTRY:
+               keystr = &admin->cache.keystream[start];
+               mskstr = &admin->cache.maskstream[start];
+               for (idx = 0; idx < count; ++idx) {
+                       pr_debug("%s:%d: keydata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, keystr[idx]);
+               }
+               for (idx = 0; idx < count; ++idx) {
+                       /* Invert the mask before encoding starts */
+                       mskstr[idx] = ~mskstr[idx];
+                       pr_debug("%s:%d: mskdata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, mskstr[idx]);
+               }
+               break;
+       case VCAP_SEL_ACTION:
+               actstr = &admin->cache.actionstream[start];
+               for (idx = 0; idx < count; ++idx) {
+                       pr_debug("%s:%d: actdata[%02d]: 0x%08x\n", __func__,
+                                __LINE__, start + idx, actstr[idx]);
+               }
+               break;
+       case VCAP_SEL_COUNTER:
+               pr_debug("%s:%d\n", __func__, __LINE__);
+               test_hw_counter_id = start;
+               test_hw_cache.counter = admin->cache.counter;
+               test_hw_cache.sticky = admin->cache.sticky;
+               break;
+       case VCAP_SEL_ALL:
+               pr_err("%s:%d: cannot write all streams at once\n",
+                      __func__, __LINE__);
+               break;
+       }
+}
+
+/* Callback used by the VCAP API */
+static void test_cache_update(struct net_device *ndev, struct vcap_admin *admin,
+                             enum vcap_command cmd,
+                             enum vcap_selection sel, u32 addr)
+{
+       if (test_updateaddridx < ARRAY_SIZE(test_updateaddr))
+               test_updateaddr[test_updateaddridx] = addr;
+       else
+                       pr_err("%s:%d: overflow: %d\n", __func__, __LINE__,
+                              test_updateaddridx);
+       test_updateaddridx++;
+}
+
+static void test_cache_move(struct net_device *ndev, struct vcap_admin *admin,
+                           u32 addr, int offset, int count)
+{
+}
+
+/* Provide port information via a callback interface */
+static int vcap_test_port_info(struct net_device *ndev, enum vcap_type vtype,
+                              int (*pf)(void *out, int arg, const char *fmt, ...),
+                              void *out, int arg)
+{
+       return 0;
+}
+
+struct vcap_operations test_callbacks = {
+       .validate_keyset = test_val_keyset,
+       .add_default_fields = test_add_def_fields,
+       .cache_erase = test_cache_erase,
+       .cache_write = test_cache_write,
+       .cache_read = test_cache_read,
+       .init = test_cache_init,
+       .update = test_cache_update,
+       .move = test_cache_move,
+       .port_info = vcap_test_port_info,
+};
+
+struct vcap_control test_vctrl = {
+       .vcaps = kunit_test_vcaps,
+       .stats = &kunit_test_vcap_stats,
+       .ops = &test_callbacks,
+};
+
+static void vcap_test_api_init(struct vcap_admin *admin)
+{
+       /* Initialize the shared objects */
+       INIT_LIST_HEAD(&test_vctrl.list);
+       INIT_LIST_HEAD(&admin->list);
+       INIT_LIST_HEAD(&admin->rules);
+       list_add_tail(&admin->list, &test_vctrl.list);
+       memset(test_updateaddr, 0, sizeof(test_updateaddr));
+       test_updateaddridx = 0;
+}
+
+/* Define the test cases. */
+
+static void vcap_api_set_bit_1_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter = {
+               .offset = 35,
+               .sw_width = 52,
+               .reg_idx = 1,
+               .reg_bitpos = 20,
+               .tg = 0
+       };
+       u32 stream[2] = {0};
+
+       vcap_set_bit(stream, &iter, 1);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)BIT(20), stream[1]);
+}
+
+static void vcap_api_set_bit_0_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter = {
+               .offset = 35,
+               .sw_width = 52,
+               .reg_idx = 2,
+               .reg_bitpos = 11,
+               .tg = 0
+       };
+       u32 stream[3] = {~0, ~0, ~0};
+
+       vcap_set_bit(stream, &iter, 0);
+
+       KUNIT_EXPECT_EQ(test, (u32)~0, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)~0, stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)~BIT(11), stream[2]);
+}
+
+static void vcap_api_iterator_init_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter;
+       struct vcap_typegroup typegroups[] = {
+               { .offset = 0, .width = 2, .value = 2, },
+               { .offset = 156, .width = 1, .value = 0, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+       struct vcap_typegroup typegroups2[] = {
+               { .offset = 0, .width = 3, .value = 4, },
+               { .offset = 49, .width = 2, .value = 0, },
+               { .offset = 98, .width = 2, .value = 0, },
+       };
+
+       vcap_iter_init(&iter, 52, typegroups, 86);
+
+       KUNIT_EXPECT_EQ(test, 52, iter.sw_width);
+       KUNIT_EXPECT_EQ(test, 86 + 2, iter.offset);
+       KUNIT_EXPECT_EQ(test, 3, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 4, iter.reg_bitpos);
+
+       vcap_iter_init(&iter, 49, typegroups2, 134);
+
+       KUNIT_EXPECT_EQ(test, 49, iter.sw_width);
+       KUNIT_EXPECT_EQ(test, 134 + 7, iter.offset);
+       KUNIT_EXPECT_EQ(test, 5, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 11, iter.reg_bitpos);
+}
+
+static void vcap_api_iterator_next_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter;
+       struct vcap_typegroup typegroups[] = {
+               { .offset = 0, .width = 4, .value = 8, },
+               { .offset = 49, .width = 1, .value = 0, },
+               { .offset = 98, .width = 2, .value = 0, },
+               { .offset = 147, .width = 3, .value = 0, },
+               { .offset = 196, .width = 2, .value = 0, },
+               { .offset = 245, .width = 1, .value = 0, },
+       };
+       int idx;
+
+       vcap_iter_init(&iter, 49, typegroups, 86);
+
+       KUNIT_EXPECT_EQ(test, 49, iter.sw_width);
+       KUNIT_EXPECT_EQ(test, 86 + 5, iter.offset);
+       KUNIT_EXPECT_EQ(test, 3, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 10, iter.reg_bitpos);
+
+       vcap_iter_next(&iter);
+
+       KUNIT_EXPECT_EQ(test, 91 + 1, iter.offset);
+       KUNIT_EXPECT_EQ(test, 3, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 11, iter.reg_bitpos);
+
+       for (idx = 0; idx < 6; idx++)
+               vcap_iter_next(&iter);
+
+       KUNIT_EXPECT_EQ(test, 92 + 6 + 2, iter.offset);
+       KUNIT_EXPECT_EQ(test, 4, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 2, iter.reg_bitpos);
+}
+
+static void vcap_api_encode_typegroups_test(struct kunit *test)
+{
+       u32 stream[12] = {0};
+       struct vcap_typegroup typegroups[] = {
+               { .offset = 0, .width = 4, .value = 8, },
+               { .offset = 49, .width = 1, .value = 1, },
+               { .offset = 98, .width = 2, .value = 3, },
+               { .offset = 147, .width = 3, .value = 5, },
+               { .offset = 196, .width = 2, .value = 2, },
+               { .offset = 245, .width = 5, .value = 27, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+
+       vcap_encode_typegroups(stream, 49, typegroups, false);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x8, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x1, stream[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x3, stream[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x5, stream[6]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[7]);
+       KUNIT_EXPECT_EQ(test, (u32)0x2, stream[8]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[9]);
+       KUNIT_EXPECT_EQ(test, (u32)27, stream[10]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[11]);
+}
+
+static void vcap_api_encode_bit_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter;
+       u32 stream[4] = {0};
+       struct vcap_typegroup typegroups[] = {
+               { .offset = 0, .width = 4, .value = 8, },
+               { .offset = 49, .width = 1, .value = 1, },
+               { .offset = 98, .width = 2, .value = 3, },
+               { .offset = 147, .width = 3, .value = 5, },
+               { .offset = 196, .width = 2, .value = 2, },
+               { .offset = 245, .width = 1, .value = 0, },
+       };
+
+       vcap_iter_init(&iter, 49, typegroups, 44);
+
+       KUNIT_EXPECT_EQ(test, 48, iter.offset);
+       KUNIT_EXPECT_EQ(test, 1, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 16, iter.reg_bitpos);
+
+       vcap_encode_bit(stream, &iter, 1);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)BIT(16), stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[2]);
+}
+
+static void vcap_api_encode_field_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter;
+       u32 stream[16] = {0};
+       struct vcap_typegroup typegroups[] = {
+               { .offset = 0, .width = 4, .value = 8, },
+               { .offset = 49, .width = 1, .value = 1, },
+               { .offset = 98, .width = 2, .value = 3, },
+               { .offset = 147, .width = 3, .value = 5, },
+               { .offset = 196, .width = 2, .value = 2, },
+               { .offset = 245, .width = 5, .value = 27, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+       struct vcap_field rf = {
+               .type = VCAP_FIELD_U32,
+               .offset = 86,
+               .width = 4,
+       };
+       u8 value[] = {0x5};
+
+       vcap_iter_init(&iter, 49, typegroups, rf.offset);
+
+       KUNIT_EXPECT_EQ(test, 91, iter.offset);
+       KUNIT_EXPECT_EQ(test, 3, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 10, iter.reg_bitpos);
+
+       vcap_encode_field(stream, &iter, rf.width, value);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[2]);
+       KUNIT_EXPECT_EQ(test, (u32)(0x5 << 10), stream[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[4]);
+
+       vcap_encode_typegroups(stream, 49, typegroups, false);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x8, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x1, stream[2]);
+       KUNIT_EXPECT_EQ(test, (u32)(0x5 << 10), stream[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x3, stream[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x5, stream[6]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[7]);
+       KUNIT_EXPECT_EQ(test, (u32)0x2, stream[8]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[9]);
+       KUNIT_EXPECT_EQ(test, (u32)27, stream[10]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[11]);
+}
+
+/* In this test case the subword is smaller than a register */
+static void vcap_api_encode_short_field_test(struct kunit *test)
+{
+       struct vcap_stream_iter iter;
+       int sw_width = 21;
+       u32 stream[6] = {0};
+       struct vcap_typegroup tgt[] = {
+               { .offset = 0, .width = 3, .value = 7, },
+               { .offset = 21, .width = 2, .value = 3, },
+               { .offset = 42, .width = 1, .value = 1, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+       struct vcap_field rf = {
+               .type = VCAP_FIELD_U32,
+               .offset = 25,
+               .width = 4,
+       };
+       u8 value[] = {0x5};
+
+       vcap_iter_init(&iter, sw_width, tgt, rf.offset);
+
+       KUNIT_EXPECT_EQ(test, 1, iter.regs_per_sw);
+       KUNIT_EXPECT_EQ(test, 21, iter.sw_width);
+       KUNIT_EXPECT_EQ(test, 25 + 3 + 2, iter.offset);
+       KUNIT_EXPECT_EQ(test, 1, iter.reg_idx);
+       KUNIT_EXPECT_EQ(test, 25 + 3 + 2 - sw_width, iter.reg_bitpos);
+
+       vcap_encode_field(stream, &iter, rf.width, value);
+
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)(0x5 << (25 + 3 + 2 - sw_width)), stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, stream[5]);
+
+       vcap_encode_typegroups(stream, sw_width, tgt, false);
+
+       KUNIT_EXPECT_EQ(test, (u32)7, stream[0]);
+       KUNIT_EXPECT_EQ(test, (u32)((0x5 << (25 + 3 + 2 - sw_width)) + 3), stream[1]);
+       KUNIT_EXPECT_EQ(test, (u32)1, stream[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0, stream[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0, stream[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0, stream[5]);
+}
+
+static void vcap_api_encode_keyfield_test(struct kunit *test)
+{
+       u32 keywords[16] = {0};
+       u32 maskwords[16] = {0};
+       struct vcap_admin admin = {
+               .vtype = VCAP_TYPE_IS2,
+               .cache = {
+                       .keystream = keywords,
+                       .maskstream = maskwords,
+                       .actionstream = keywords,
+               },
+       };
+       struct vcap_rule_internal rule = {
+               .admin = &admin,
+               .data = {
+                       .keyset = VCAP_KFS_MAC_ETYPE,
+               },
+               .vctrl = &test_vctrl,
+       };
+       struct vcap_client_keyfield ckf = {
+               .ctrl.list = {},
+               .ctrl.key = VCAP_KF_ISDX_CLS,
+               .ctrl.type = VCAP_FIELD_U32,
+               .data.u32.value = 0xeef014a1,
+               .data.u32.mask = 0xfff,
+       };
+       struct vcap_field rf = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       };
+       struct vcap_typegroup tgt[] = {
+               { .offset = 0, .width = 2, .value = 2, },
+               { .offset = 156, .width = 1, .value = 1, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+
+       vcap_test_api_init(&admin);
+       vcap_encode_keyfield(&rule, &ckf, &rf, tgt);
+
+       /* Key */
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)(0x04a1 << 6), keywords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, keywords[6]);
+
+       /* Mask */
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)(0x0fff << 6), maskwords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, maskwords[6]);
+}
+
+static void vcap_api_encode_max_keyfield_test(struct kunit *test)
+{
+       int idx;
+       u32 keywords[6] = {0};
+       u32 maskwords[6] = {0};
+       struct vcap_admin admin = {
+               .vtype = VCAP_TYPE_IS2,
+               /* IS2 sw_width = 52 bit */
+               .cache = {
+                       .keystream = keywords,
+                       .maskstream = maskwords,
+                       .actionstream = keywords,
+               },
+       };
+       struct vcap_rule_internal rule = {
+               .admin = &admin,
+               .data = {
+                       .keyset = VCAP_KFS_IP_7TUPLE,
+               },
+               .vctrl = &test_vctrl,
+       };
+       struct vcap_client_keyfield ckf = {
+               .ctrl.list = {},
+               .ctrl.key = VCAP_KF_L3_IP6_DIP,
+               .ctrl.type = VCAP_FIELD_U128,
+               .data.u128.value = { 0xa1, 0xa2, 0xa3, 0xa4, 0, 0, 0x43, 0,
+                       0, 0, 0, 0, 0, 0, 0x78, 0x8e, },
+               .data.u128.mask =  { 0xff, 0xff, 0xff, 0xff, 0, 0, 0xff, 0,
+                       0, 0, 0, 0, 0, 0, 0xff, 0xff },
+       };
+       struct vcap_field rf = {
+               .type = VCAP_FIELD_U128,
+               .offset = 0,
+               .width = 128,
+       };
+       struct vcap_typegroup tgt[] = {
+               { .offset = 0, .width = 2, .value = 2, },
+               { .offset = 156, .width = 1, .value = 1, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+       u32 keyres[] = {
+               0x928e8a84,
+               0x000c0002,
+               0x00000010,
+               0x00000000,
+               0x0239e000,
+               0x00000000,
+       };
+       u32 mskres[] = {
+               0xfffffffc,
+               0x000c0003,
+               0x0000003f,
+               0x00000000,
+               0x03fffc00,
+               0x00000000,
+       };
+
+       vcap_encode_keyfield(&rule, &ckf, &rf, tgt);
+
+       /* Key */
+       for (idx = 0; idx < ARRAY_SIZE(keyres); ++idx)
+               KUNIT_EXPECT_EQ(test, keyres[idx], keywords[idx]);
+       /* Mask */
+       for (idx = 0; idx < ARRAY_SIZE(mskres); ++idx)
+               KUNIT_EXPECT_EQ(test, mskres[idx], maskwords[idx]);
+}
+
+static void vcap_api_encode_actionfield_test(struct kunit *test)
+{
+       u32 actwords[16] = {0};
+       int sw_width = 21;
+       struct vcap_admin admin = {
+               .vtype = VCAP_TYPE_ES2, /* act_width = 21 */
+               .cache = {
+                       .actionstream = actwords,
+               },
+       };
+       struct vcap_rule_internal rule = {
+               .admin = &admin,
+               .data = {
+                       .actionset = VCAP_AFS_BASE_TYPE,
+               },
+               .vctrl = &test_vctrl,
+       };
+       struct vcap_client_actionfield caf = {
+               .ctrl.list = {},
+               .ctrl.action = VCAP_AF_POLICE_IDX,
+               .ctrl.type = VCAP_FIELD_U32,
+               .data.u32.value = 0x67908032,
+       };
+       struct vcap_field rf = {
+               .type = VCAP_FIELD_U32,
+               .offset = 35,
+               .width = 6,
+       };
+       struct vcap_typegroup tgt[] = {
+               { .offset = 0, .width = 2, .value = 2, },
+               { .offset = 21, .width = 1, .value = 1, },
+               { .offset = 42, .width = 1, .value = 0, },
+               { .offset = 0, .width = 0, .value = 0, },
+       };
+
+       vcap_encode_actionfield(&rule, &caf, &rf, tgt);
+
+       /* Action */
+       KUNIT_EXPECT_EQ(test, (u32)0x0, actwords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)((0x32 << (35 + 2 + 1 - sw_width)) & 0x1fffff), actwords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)((0x32 >> ((2 * sw_width) - 38 - 1))), actwords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, actwords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, actwords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, actwords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0, actwords[6]);
+}
+
+static void vcap_api_keyfield_typegroup_test(struct kunit *test)
+{
+       const struct vcap_typegroup *tg;
+
+       tg = vcap_keyfield_typegroup(&test_vctrl, VCAP_TYPE_IS2, VCAP_KFS_MAC_ETYPE);
+       KUNIT_EXPECT_PTR_NE(test, NULL, tg);
+       KUNIT_EXPECT_EQ(test, 0, tg[0].offset);
+       KUNIT_EXPECT_EQ(test, 2, tg[0].width);
+       KUNIT_EXPECT_EQ(test, 2, tg[0].value);
+       KUNIT_EXPECT_EQ(test, 156, tg[1].offset);
+       KUNIT_EXPECT_EQ(test, 1, tg[1].width);
+       KUNIT_EXPECT_EQ(test, 0, tg[1].value);
+       KUNIT_EXPECT_EQ(test, 0, tg[2].offset);
+       KUNIT_EXPECT_EQ(test, 0, tg[2].width);
+       KUNIT_EXPECT_EQ(test, 0, tg[2].value);
+
+       tg = vcap_keyfield_typegroup(&test_vctrl, VCAP_TYPE_ES2, VCAP_KFS_LL_FULL);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, tg);
+}
+
+static void vcap_api_actionfield_typegroup_test(struct kunit *test)
+{
+       const struct vcap_typegroup *tg;
+
+       tg = vcap_actionfield_typegroup(&test_vctrl, VCAP_TYPE_IS0, VCAP_AFS_FULL);
+       KUNIT_EXPECT_PTR_NE(test, NULL, tg);
+       KUNIT_EXPECT_EQ(test, 0, tg[0].offset);
+       KUNIT_EXPECT_EQ(test, 3, tg[0].width);
+       KUNIT_EXPECT_EQ(test, 4, tg[0].value);
+       KUNIT_EXPECT_EQ(test, 110, tg[1].offset);
+       KUNIT_EXPECT_EQ(test, 2, tg[1].width);
+       KUNIT_EXPECT_EQ(test, 0, tg[1].value);
+       KUNIT_EXPECT_EQ(test, 220, tg[2].offset);
+       KUNIT_EXPECT_EQ(test, 2, tg[2].width);
+       KUNIT_EXPECT_EQ(test, 0, tg[2].value);
+       KUNIT_EXPECT_EQ(test, 0, tg[3].offset);
+       KUNIT_EXPECT_EQ(test, 0, tg[3].width);
+       KUNIT_EXPECT_EQ(test, 0, tg[3].value);
+
+       tg = vcap_actionfield_typegroup(&test_vctrl, VCAP_TYPE_IS2, VCAP_AFS_CLASSIFICATION);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, tg);
+}
+
+static void vcap_api_vcap_keyfields_test(struct kunit *test)
+{
+       const struct vcap_field *ft;
+
+       ft = vcap_keyfields(&test_vctrl, VCAP_TYPE_IS2, VCAP_KFS_MAC_ETYPE);
+       KUNIT_EXPECT_PTR_NE(test, NULL, ft);
+
+       /* Keyset that is not available and within the maximum keyset enum value */
+       ft = vcap_keyfields(&test_vctrl, VCAP_TYPE_ES2, VCAP_KFS_PURE_5TUPLE_IP4);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, ft);
+
+       /* Keyset that is not available and beyond the maximum keyset enum value */
+       ft = vcap_keyfields(&test_vctrl, VCAP_TYPE_ES2, VCAP_KFS_LL_FULL);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, ft);
+}
+
+static void vcap_api_vcap_actionfields_test(struct kunit *test)
+{
+       const struct vcap_field *ft;
+
+       ft = vcap_actionfields(&test_vctrl, VCAP_TYPE_IS0, VCAP_AFS_FULL);
+       KUNIT_EXPECT_PTR_NE(test, NULL, ft);
+
+       ft = vcap_actionfields(&test_vctrl, VCAP_TYPE_IS2, VCAP_AFS_FULL);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, ft);
+
+       ft = vcap_actionfields(&test_vctrl, VCAP_TYPE_IS2, VCAP_AFS_CLASSIFICATION);
+       KUNIT_EXPECT_PTR_EQ(test, NULL, ft);
+}
+
+static void vcap_api_encode_rule_keyset_test(struct kunit *test)
+{
+       u32 keywords[16] = {0};
+       u32 maskwords[16] = {0};
+       struct vcap_admin admin = {
+               .vtype = VCAP_TYPE_IS2,
+               .cache = {
+                       .keystream = keywords,
+                       .maskstream = maskwords,
+               },
+       };
+       struct vcap_rule_internal rule = {
+               .admin = &admin,
+               .data = {
+                       .keyset = VCAP_KFS_MAC_ETYPE,
+               },
+               .vctrl = &test_vctrl,
+       };
+       struct vcap_client_keyfield ckf[] = {
+               {
+                       .ctrl.key = VCAP_KF_TYPE,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0x00,
+                       .data.u32.mask = 0x0f,
+               },
+               {
+                       .ctrl.key = VCAP_KF_LOOKUP_FIRST_IS,
+                       .ctrl.type = VCAP_FIELD_BIT,
+                       .data.u1.value = 0x01,
+                       .data.u1.mask = 0x01,
+               },
+               {
+                       .ctrl.key = VCAP_KF_IF_IGR_PORT_MASK_L3,
+                       .ctrl.type = VCAP_FIELD_BIT,
+                       .data.u1.value = 0x00,
+                       .data.u1.mask = 0x01,
+               },
+               {
+                       .ctrl.key = VCAP_KF_IF_IGR_PORT_MASK_RNG,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0x00,
+                       .data.u32.mask = 0x0f,
+               },
+               {
+                       .ctrl.key = VCAP_KF_IF_IGR_PORT_MASK,
+                       .ctrl.type = VCAP_FIELD_U72,
+                       .data.u72.value = {0x0, 0x00, 0x00, 0x00},
+                       .data.u72.mask = {0xfd, 0xff, 0xff, 0xff},
+               },
+               {
+                       .ctrl.key = VCAP_KF_L2_DMAC,
+                       .ctrl.type = VCAP_FIELD_U48,
+                       /* Opposite endianness */
+                       .data.u48.value = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06},
+                       .data.u48.mask = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+               },
+               {
+                       .ctrl.key = VCAP_KF_ETYPE_LEN_IS,
+                       .ctrl.type = VCAP_FIELD_BIT,
+                       .data.u1.value = 0x01,
+                       .data.u1.mask = 0x01,
+               },
+               {
+                       .ctrl.key = VCAP_KF_ETYPE,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0xaabb,
+                       .data.u32.mask = 0xffff,
+               },
+       };
+       int idx;
+       int ret;
+
+       /* Empty entry list */
+       INIT_LIST_HEAD(&rule.data.keyfields);
+       ret = vcap_encode_rule_keyset(&rule);
+       KUNIT_EXPECT_EQ(test, -EINVAL, ret);
+
+       for (idx = 0; idx < ARRAY_SIZE(ckf); idx++)
+               list_add_tail(&ckf[idx].ctrl.list, &rule.data.keyfields);
+       ret = vcap_encode_rule_keyset(&rule);
+       KUNIT_EXPECT_EQ(test, 0, ret);
+
+       /* The key and mask values below are from an actual Sparx5 rule config */
+       /* Key */
+       KUNIT_EXPECT_EQ(test, (u32)0x00000042, keywords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00020100, keywords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x60504030, keywords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[6]);
+       KUNIT_EXPECT_EQ(test, (u32)0x0002aaee, keywords[7]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[8]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[9]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[10]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, keywords[11]);
+
+       /* Mask: they will be inverted when applied to the register */
+       KUNIT_EXPECT_EQ(test, (u32)~0x00b07f80, maskwords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfff00000, maskwords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfffffffc, maskwords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfff000ff, maskwords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)~0x00000000, maskwords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfffffff0, maskwords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfffffffe, maskwords[6]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xfffc0001, maskwords[7]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xffffffff, maskwords[8]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xffffffff, maskwords[9]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xffffffff, maskwords[10]);
+       KUNIT_EXPECT_EQ(test, (u32)~0xffffffff, maskwords[11]);
+}
+
+static void vcap_api_encode_rule_actionset_test(struct kunit *test)
+{
+       u32 actwords[16] = {0};
+       struct vcap_admin admin = {
+               .vtype = VCAP_TYPE_IS2,
+               .cache = {
+                       .actionstream = actwords,
+               },
+       };
+       struct vcap_rule_internal rule = {
+               .admin = &admin,
+               .data = {
+                       .actionset = VCAP_AFS_BASE_TYPE,
+               },
+               .vctrl = &test_vctrl,
+       };
+       struct vcap_client_actionfield caf[] = {
+               {
+                       .ctrl.action = VCAP_AF_MATCH_ID,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0x01,
+               },
+               {
+                       .ctrl.action = VCAP_AF_MATCH_ID_MASK,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0x01,
+               },
+               {
+                       .ctrl.action = VCAP_AF_CNT_ID,
+                       .ctrl.type = VCAP_FIELD_U32,
+                       .data.u32.value = 0x64,
+               },
+       };
+       int idx;
+       int ret;
+
+       /* Empty entry list */
+       INIT_LIST_HEAD(&rule.data.actionfields);
+       ret = vcap_encode_rule_actionset(&rule);
+       /* We allow rules with no actions */
+       KUNIT_EXPECT_EQ(test, 0, ret);
+
+       for (idx = 0; idx < ARRAY_SIZE(caf); idx++)
+               list_add_tail(&caf[idx].ctrl.list, &rule.data.actionfields);
+       ret = vcap_encode_rule_actionset(&rule);
+       KUNIT_EXPECT_EQ(test, 0, ret);
+
+       /* The action values below are from an actual Sparx5 rule config */
+       KUNIT_EXPECT_EQ(test, (u32)0x00000002, actwords[0]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[1]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[2]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[3]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[4]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00100000, actwords[5]);
+       KUNIT_EXPECT_EQ(test, (u32)0x06400010, actwords[6]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[7]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[8]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[9]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[10]);
+       KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[11]);
+}
+
+static struct kunit_case vcap_api_encoding_test_cases[] = {
+       KUNIT_CASE(vcap_api_set_bit_1_test),
+       KUNIT_CASE(vcap_api_set_bit_0_test),
+       KUNIT_CASE(vcap_api_iterator_init_test),
+       KUNIT_CASE(vcap_api_iterator_next_test),
+       KUNIT_CASE(vcap_api_encode_typegroups_test),
+       KUNIT_CASE(vcap_api_encode_bit_test),
+       KUNIT_CASE(vcap_api_encode_field_test),
+       KUNIT_CASE(vcap_api_encode_short_field_test),
+       KUNIT_CASE(vcap_api_encode_keyfield_test),
+       KUNIT_CASE(vcap_api_encode_max_keyfield_test),
+       KUNIT_CASE(vcap_api_encode_actionfield_test),
+       KUNIT_CASE(vcap_api_keyfield_typegroup_test),
+       KUNIT_CASE(vcap_api_actionfield_typegroup_test),
+       KUNIT_CASE(vcap_api_vcap_keyfields_test),
+       KUNIT_CASE(vcap_api_vcap_actionfields_test),
+       KUNIT_CASE(vcap_api_encode_rule_keyset_test),
+       KUNIT_CASE(vcap_api_encode_rule_actionset_test),
+       {}
+};
+
+static struct kunit_suite vcap_api_encoding_test_suite = {
+       .name = "VCAP_API_Encoding_Testsuite",
+       .test_cases = vcap_api_encoding_test_cases,
+};
+
+kunit_test_suite(vcap_api_encoding_test_suite);
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.c
new file mode 100644 (file)
index 0000000..5d681d2
--- /dev/null
@@ -0,0 +1,5570 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP API Test VCAP Model Data
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include "vcap_api.h"
+#include "vcap_model_kunit.h"
+
+/* keyfields */
+static const struct vcap_field is0_mll_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 7,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 28,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 31,
+               .width = 12,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 43,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 91,
+               .width = 48,
+       },
+       [VCAP_KF_ETYPE_MPLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 139,
+               .width = 2,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 141,
+               .width = 8,
+       },
+};
+
+static const struct vcap_field is0_tri_vid_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 7,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 24,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 30,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 33,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 34,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 46,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 49,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI1] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 53,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 65,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI2] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 71,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 72,
+               .width = 12,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 84,
+               .width = 8,
+       },
+       [VCAP_KF_OAM_Y1731_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 92,
+               .width = 1,
+       },
+       [VCAP_KF_OAM_MEL_FLAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 93,
+               .width = 7,
+       },
+};
+
+static const struct vcap_field is0_ll_full_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 7,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 19,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 32,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 35,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI1] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 38,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 39,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 51,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 54,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI2] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 57,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 58,
+               .width = 12,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 70,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 118,
+               .width = 48,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 166,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 167,
+               .width = 16,
+       },
+       [VCAP_KF_IP_SNAP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 183,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 184,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 185,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 187,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 188,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DSCP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 189,
+               .width = 6,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 195,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 227,
+               .width = 32,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 259,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 260,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 261,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 277,
+               .width = 8,
+       },
+};
+
+static const struct vcap_field is0_normal_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 12,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 19,
+               .width = 65,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 86,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 89,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 92,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 108,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 111,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI1] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 114,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 115,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 127,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 130,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI2] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 133,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 134,
+               .width = 12,
+       },
+       [VCAP_KF_DST_ENTRY] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 146,
+               .width = 1,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 147,
+               .width = 48,
+       },
+       [VCAP_KF_IP_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 195,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 196,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 197,
+               .width = 16,
+       },
+       [VCAP_KF_IP_SNAP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 213,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 214,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 215,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 217,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DSCP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 219,
+               .width = 6,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 225,
+               .width = 32,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 257,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 258,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 259,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 275,
+               .width = 8,
+       },
+};
+
+static const struct vcap_field is0_normal_7tuple_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 12,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 18,
+               .width = 65,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 83,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 85,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 88,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 95,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 107,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 110,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI1] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 113,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 114,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 126,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 129,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI2] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 132,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 133,
+               .width = 12,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 145,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 193,
+               .width = 48,
+       },
+       [VCAP_KF_IP_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 241,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 242,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 243,
+               .width = 16,
+       },
+       [VCAP_KF_IP_SNAP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 259,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 260,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 261,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 263,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 264,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DSCP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 265,
+               .width = 6,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 271,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 399,
+               .width = 128,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 527,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 528,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 529,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 545,
+               .width = 8,
+       },
+};
+
+static const struct vcap_field is0_normal_5tuple_ip4_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 12,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 19,
+               .width = 65,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 86,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_TPID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 89,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 92,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID0] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 108,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 111,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI1] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 114,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID1] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 115,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_TPID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 127,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_PCP2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 130,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI2] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 133,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID2] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 134,
+               .width = 12,
+       },
+       [VCAP_KF_IP_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 146,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 147,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 148,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 150,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 151,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DSCP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 152,
+               .width = 6,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 158,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 190,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 222,
+               .width = 8,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 230,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 231,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 232,
+               .width = 8,
+       },
+       [VCAP_KF_IP_PAYLOAD_5TUPLE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 240,
+               .width = 32,
+       },
+};
+
+static const struct vcap_field is0_pure_5tuple_ip4_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_GEN_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 12,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 19,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 20,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DSCP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 21,
+               .width = 6,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 59,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 8,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 99,
+               .width = 8,
+       },
+       [VCAP_KF_IP_PAYLOAD_5TUPLE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 107,
+               .width = 32,
+       },
+};
+
+static const struct vcap_field is0_etag_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 7,
+       },
+       [VCAP_KF_8021BR_E_TAGGED] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 10,
+               .width = 1,
+       },
+       [VCAP_KF_8021BR_GRP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 11,
+               .width = 2,
+       },
+       [VCAP_KF_8021BR_ECID_EXT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 8,
+       },
+       [VCAP_KF_8021BR_ECID_BASE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 21,
+               .width = 12,
+       },
+       [VCAP_KF_8021BR_IGR_ECID_EXT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 33,
+               .width = 8,
+       },
+       [VCAP_KF_8021BR_IGR_ECID_BASE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 41,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is2_mac_etype_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 87,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 90,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 138,
+               .width = 48,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 186,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 187,
+               .width = 16,
+       },
+       [VCAP_KF_L2_PAYLOAD_ETYPE] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 203,
+               .width = 64,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 267,
+               .width = 16,
+       },
+       [VCAP_KF_OAM_CCM_CNTS_EQ0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 283,
+               .width = 1,
+       },
+       [VCAP_KF_OAM_Y1731_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 284,
+               .width = 1,
+       },
+};
+
+static const struct vcap_field is2_arp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 86,
+               .width = 48,
+       },
+       [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 134,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 135,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_LEN_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 136,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_TGT_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 137,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_SENDER_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 138,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE_UNKNOWN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 139,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 140,
+               .width = 2,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 142,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 174,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 206,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 207,
+               .width = 16,
+       },
+};
+
+static const struct vcap_field is2_ip4_tcp_udp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 87,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 93,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 136,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 168,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 169,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 170,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 186,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 202,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 219,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 220,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 221,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 222,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 223,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 224,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 225,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 226,
+               .width = 64,
+       },
+};
+
+static const struct vcap_field is2_ip4_other_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 87,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 2,
+       },
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 93,
+               .width = 1,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 136,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 168,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 169,
+               .width = 8,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 177,
+               .width = 16,
+       },
+       [VCAP_KF_L3_PAYLOAD] = {
+               .type = VCAP_FIELD_U112,
+               .offset = 193,
+               .width = 96,
+       },
+};
+
+static const struct vcap_field is2_ip6_std_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 18,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 32,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 52,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 53,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 54,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 56,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 81,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 82,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 87,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 88,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 89,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 91,
+               .width = 128,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 219,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 220,
+               .width = 8,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 228,
+               .width = 16,
+       },
+       [VCAP_KF_L3_PAYLOAD] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 244,
+               .width = 40,
+       },
+};
+
+static const struct vcap_field is2_ip_7tuple_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 2,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 2,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 3,
+               .width = 8,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_L3] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 11,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 4,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 2,
+       },
+       [VCAP_KF_IF_IGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 18,
+               .width = 65,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 83,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 86,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 99,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 112,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 113,
+               .width = 3,
+       },
+       [VCAP_KF_L2_FWD_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 116,
+               .width = 1,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 117,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 118,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 119,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 120,
+               .width = 1,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 121,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 169,
+               .width = 48,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 217,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 219,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 227,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 355,
+               .width = 128,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 483,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 484,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 485,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 486,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 502,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 518,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 534,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 535,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 536,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 537,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 538,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 539,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 540,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 541,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 542,
+               .width = 64,
+       },
+};
+
+static const struct vcap_field is2_ip6_vid_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 4,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 4,
+               .width = 1,
+       },
+       [VCAP_KF_LOOKUP_PAG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 5,
+               .width = 8,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 26,
+               .width = 13,
+       },
+       [VCAP_KF_L3_SMAC_SIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 39,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DMAC_DIP_MATCH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 40,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 41,
+               .width = 1,
+       },
+       [VCAP_KF_L3_DST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 42,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 43,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 171,
+               .width = 128,
+       },
+};
+
+static const struct vcap_field es2_mac_etype_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 3,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 3,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 13,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 3,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 45,
+               .width = 32,
+       },
+       [VCAP_KF_IF_IGR_PORT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 77,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 78,
+               .width = 9,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 96,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 97,
+               .width = 2,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 99,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 147,
+               .width = 48,
+       },
+       [VCAP_KF_ETYPE_LEN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 195,
+               .width = 1,
+       },
+       [VCAP_KF_ETYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 196,
+               .width = 16,
+       },
+       [VCAP_KF_L2_PAYLOAD_ETYPE] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 212,
+               .width = 64,
+       },
+       [VCAP_KF_OAM_CCM_CNTS_EQ0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 276,
+               .width = 1,
+       },
+       [VCAP_KF_OAM_Y1731_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 277,
+               .width = 1,
+       },
+};
+
+static const struct vcap_field es2_arp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 3,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 3,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 13,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 3,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 45,
+               .width = 32,
+       },
+       [VCAP_KF_IF_IGR_PORT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 77,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 78,
+               .width = 9,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 96,
+               .width = 2,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 98,
+               .width = 48,
+       },
+       [VCAP_KF_ARP_ADDR_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 146,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_PROTO_SPACE_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 147,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_LEN_OK_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 148,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_TGT_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 149,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_SENDER_MATCH_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 150,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE_UNKNOWN_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 151,
+               .width = 1,
+       },
+       [VCAP_KF_ARP_OPCODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 152,
+               .width = 2,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 154,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 186,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 218,
+               .width = 1,
+       },
+};
+
+static const struct vcap_field es2_ip4_tcp_udp_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 3,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 3,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 13,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 3,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 45,
+               .width = 32,
+       },
+       [VCAP_KF_IF_IGR_PORT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 77,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 78,
+               .width = 9,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 96,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 97,
+               .width = 2,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 99,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 100,
+               .width = 2,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 102,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 103,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 112,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 144,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 176,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 177,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 178,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 194,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 210,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 226,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 227,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 228,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 229,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 230,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 231,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 232,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 233,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 234,
+               .width = 64,
+       },
+};
+
+static const struct vcap_field es2_ip4_other_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 3,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 3,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 13,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 3,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 45,
+               .width = 32,
+       },
+       [VCAP_KF_IF_IGR_PORT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 77,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 78,
+               .width = 9,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 87,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 90,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 91,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 96,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 97,
+               .width = 2,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 99,
+               .width = 1,
+       },
+       [VCAP_KF_L3_FRAGMENT_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 100,
+               .width = 2,
+       },
+       [VCAP_KF_L3_OPTIONS_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 102,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 103,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 112,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 144,
+               .width = 32,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 176,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP_PROTO] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 177,
+               .width = 8,
+       },
+       [VCAP_KF_L3_PAYLOAD] = {
+               .type = VCAP_FIELD_U112,
+               .offset = 185,
+               .width = 96,
+       },
+};
+
+static const struct vcap_field es2_ip_7tuple_keyfield[] = {
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 1,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 10,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 11,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 25,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 26,
+               .width = 13,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 39,
+               .width = 3,
+       },
+       [VCAP_KF_IF_EGR_PORT_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 32,
+       },
+       [VCAP_KF_IF_IGR_PORT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 74,
+               .width = 1,
+       },
+       [VCAP_KF_IF_IGR_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 75,
+               .width = 9,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 84,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 87,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 88,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 91,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 92,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 93,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 94,
+               .width = 2,
+       },
+       [VCAP_KF_L2_DMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 96,
+               .width = 48,
+       },
+       [VCAP_KF_L2_SMAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 144,
+               .width = 48,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 192,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TTL_GT0] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 193,
+               .width = 1,
+       },
+       [VCAP_KF_L3_TOS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 194,
+               .width = 8,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 202,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 330,
+               .width = 128,
+       },
+       [VCAP_KF_L3_DIP_EQ_SIP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 458,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_UDP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 459,
+               .width = 1,
+       },
+       [VCAP_KF_TCP_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 460,
+               .width = 1,
+       },
+       [VCAP_KF_L4_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 461,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 477,
+               .width = 16,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 493,
+               .width = 16,
+       },
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 509,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 510,
+               .width = 1,
+       },
+       [VCAP_KF_L4_FIN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 511,
+               .width = 1,
+       },
+       [VCAP_KF_L4_SYN] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 512,
+               .width = 1,
+       },
+       [VCAP_KF_L4_RST] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 513,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PSH] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 514,
+               .width = 1,
+       },
+       [VCAP_KF_L4_ACK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 515,
+               .width = 1,
+       },
+       [VCAP_KF_L4_URG] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 516,
+               .width = 1,
+       },
+       [VCAP_KF_L4_PAYLOAD] = {
+               .type = VCAP_FIELD_U64,
+               .offset = 517,
+               .width = 64,
+       },
+};
+
+static const struct vcap_field es2_ip4_vid_keyfield[] = {
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 1,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 10,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 11,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 25,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 26,
+               .width = 13,
+       },
+       [VCAP_KF_8021Q_PCP_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 39,
+               .width = 3,
+       },
+       [VCAP_KF_8021Q_DEI_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 42,
+               .width = 1,
+       },
+       [VCAP_KF_COSID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 43,
+               .width = 3,
+       },
+       [VCAP_KF_L3_DPL_CLS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 46,
+               .width = 1,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 47,
+               .width = 1,
+       },
+       [VCAP_KF_ES0_ISDX_KEY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 48,
+               .width = 1,
+       },
+       [VCAP_KF_MIRROR_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 49,
+               .width = 2,
+       },
+       [VCAP_KF_IP4_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 51,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP4_DIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 52,
+               .width = 32,
+       },
+       [VCAP_KF_L3_IP4_SIP] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 84,
+               .width = 32,
+       },
+       [VCAP_KF_L4_RNG] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 116,
+               .width = 16,
+       },
+};
+
+static const struct vcap_field es2_ip6_vid_keyfield[] = {
+       [VCAP_KF_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 0,
+               .width = 3,
+       },
+       [VCAP_KF_LOOKUP_FIRST_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 3,
+               .width = 1,
+       },
+       [VCAP_KF_ACL_GRP_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 8,
+       },
+       [VCAP_KF_PROT_ACTIVE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_KF_L2_MC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_KF_L2_BC_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_GT0_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_KF_ISDX_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 12,
+       },
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_KF_8021Q_VID_CLS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 13,
+       },
+       [VCAP_KF_L3_RT_IS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 42,
+               .width = 1,
+       },
+       [VCAP_KF_L3_IP6_DIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 43,
+               .width = 128,
+       },
+       [VCAP_KF_L3_IP6_SIP] = {
+               .type = VCAP_FIELD_U128,
+               .offset = 171,
+               .width = 128,
+       },
+};
+
+/* keyfield_set */
+static const struct vcap_set is0_keyfield_set[] = {
+       [VCAP_KFS_MLL] = {
+               .type_id = 0,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+       [VCAP_KFS_TRI_VID] = {
+               .type_id = 0,
+               .sw_per_item = 2,
+               .sw_cnt = 6,
+       },
+       [VCAP_KFS_LL_FULL] = {
+               .type_id = 0,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_NORMAL] = {
+               .type_id = 1,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_NORMAL_7TUPLE] = {
+               .type_id = 0,
+               .sw_per_item = 12,
+               .sw_cnt = 1,
+       },
+       [VCAP_KFS_NORMAL_5TUPLE_IP4] = {
+               .type_id = 2,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_PURE_5TUPLE_IP4] = {
+               .type_id = 2,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+       [VCAP_KFS_ETAG] = {
+               .type_id = 3,
+               .sw_per_item = 2,
+               .sw_cnt = 6,
+       },
+};
+
+static const struct vcap_set is2_keyfield_set[] = {
+       [VCAP_KFS_MAC_ETYPE] = {
+               .type_id = 0,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_ARP] = {
+               .type_id = 3,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_TCP_UDP] = {
+               .type_id = 4,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_OTHER] = {
+               .type_id = 5,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP6_STD] = {
+               .type_id = 6,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP_7TUPLE] = {
+               .type_id = 1,
+               .sw_per_item = 12,
+               .sw_cnt = 1,
+       },
+       [VCAP_KFS_IP6_VID] = {
+               .type_id = 9,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+};
+
+static const struct vcap_set es2_keyfield_set[] = {
+       [VCAP_KFS_MAC_ETYPE] = {
+               .type_id = 0,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_ARP] = {
+               .type_id = 1,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_TCP_UDP] = {
+               .type_id = 2,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP4_OTHER] = {
+               .type_id = 3,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+       [VCAP_KFS_IP_7TUPLE] = {
+               .type_id = -1,
+               .sw_per_item = 12,
+               .sw_cnt = 1,
+       },
+       [VCAP_KFS_IP4_VID] = {
+               .type_id = -1,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+       [VCAP_KFS_IP6_VID] = {
+               .type_id = 5,
+               .sw_per_item = 6,
+               .sw_cnt = 2,
+       },
+};
+
+/* keyfield_set map */
+static const struct vcap_field *is0_keyfield_set_map[] = {
+       [VCAP_KFS_MLL] = is0_mll_keyfield,
+       [VCAP_KFS_TRI_VID] = is0_tri_vid_keyfield,
+       [VCAP_KFS_LL_FULL] = is0_ll_full_keyfield,
+       [VCAP_KFS_NORMAL] = is0_normal_keyfield,
+       [VCAP_KFS_NORMAL_7TUPLE] = is0_normal_7tuple_keyfield,
+       [VCAP_KFS_NORMAL_5TUPLE_IP4] = is0_normal_5tuple_ip4_keyfield,
+       [VCAP_KFS_PURE_5TUPLE_IP4] = is0_pure_5tuple_ip4_keyfield,
+       [VCAP_KFS_ETAG] = is0_etag_keyfield,
+};
+
+static const struct vcap_field *is2_keyfield_set_map[] = {
+       [VCAP_KFS_MAC_ETYPE] = is2_mac_etype_keyfield,
+       [VCAP_KFS_ARP] = is2_arp_keyfield,
+       [VCAP_KFS_IP4_TCP_UDP] = is2_ip4_tcp_udp_keyfield,
+       [VCAP_KFS_IP4_OTHER] = is2_ip4_other_keyfield,
+       [VCAP_KFS_IP6_STD] = is2_ip6_std_keyfield,
+       [VCAP_KFS_IP_7TUPLE] = is2_ip_7tuple_keyfield,
+       [VCAP_KFS_IP6_VID] = is2_ip6_vid_keyfield,
+};
+
+static const struct vcap_field *es2_keyfield_set_map[] = {
+       [VCAP_KFS_MAC_ETYPE] = es2_mac_etype_keyfield,
+       [VCAP_KFS_ARP] = es2_arp_keyfield,
+       [VCAP_KFS_IP4_TCP_UDP] = es2_ip4_tcp_udp_keyfield,
+       [VCAP_KFS_IP4_OTHER] = es2_ip4_other_keyfield,
+       [VCAP_KFS_IP_7TUPLE] = es2_ip_7tuple_keyfield,
+       [VCAP_KFS_IP4_VID] = es2_ip4_vid_keyfield,
+       [VCAP_KFS_IP6_VID] = es2_ip6_vid_keyfield,
+};
+
+/* keyfield_set map sizes */
+static int is0_keyfield_set_map_size[] = {
+       [VCAP_KFS_MLL] = ARRAY_SIZE(is0_mll_keyfield),
+       [VCAP_KFS_TRI_VID] = ARRAY_SIZE(is0_tri_vid_keyfield),
+       [VCAP_KFS_LL_FULL] = ARRAY_SIZE(is0_ll_full_keyfield),
+       [VCAP_KFS_NORMAL] = ARRAY_SIZE(is0_normal_keyfield),
+       [VCAP_KFS_NORMAL_7TUPLE] = ARRAY_SIZE(is0_normal_7tuple_keyfield),
+       [VCAP_KFS_NORMAL_5TUPLE_IP4] = ARRAY_SIZE(is0_normal_5tuple_ip4_keyfield),
+       [VCAP_KFS_PURE_5TUPLE_IP4] = ARRAY_SIZE(is0_pure_5tuple_ip4_keyfield),
+       [VCAP_KFS_ETAG] = ARRAY_SIZE(is0_etag_keyfield),
+};
+
+static int is2_keyfield_set_map_size[] = {
+       [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(is2_mac_etype_keyfield),
+       [VCAP_KFS_ARP] = ARRAY_SIZE(is2_arp_keyfield),
+       [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(is2_ip4_tcp_udp_keyfield),
+       [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(is2_ip4_other_keyfield),
+       [VCAP_KFS_IP6_STD] = ARRAY_SIZE(is2_ip6_std_keyfield),
+       [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(is2_ip_7tuple_keyfield),
+       [VCAP_KFS_IP6_VID] = ARRAY_SIZE(is2_ip6_vid_keyfield),
+};
+
+static int es2_keyfield_set_map_size[] = {
+       [VCAP_KFS_MAC_ETYPE] = ARRAY_SIZE(es2_mac_etype_keyfield),
+       [VCAP_KFS_ARP] = ARRAY_SIZE(es2_arp_keyfield),
+       [VCAP_KFS_IP4_TCP_UDP] = ARRAY_SIZE(es2_ip4_tcp_udp_keyfield),
+       [VCAP_KFS_IP4_OTHER] = ARRAY_SIZE(es2_ip4_other_keyfield),
+       [VCAP_KFS_IP_7TUPLE] = ARRAY_SIZE(es2_ip_7tuple_keyfield),
+       [VCAP_KFS_IP4_VID] = ARRAY_SIZE(es2_ip4_vid_keyfield),
+       [VCAP_KFS_IP6_VID] = ARRAY_SIZE(es2_ip6_vid_keyfield),
+};
+
+/* actionfields */
+static const struct vcap_field is0_mlbs_actionfield[] = {
+       [VCAP_AF_TYPE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 3,
+       },
+       [VCAP_AF_QOS_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 5,
+               .width = 1,
+       },
+       [VCAP_AF_QOS_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 6,
+               .width = 3,
+       },
+       [VCAP_AF_DP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_AF_DP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_LOOKUP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_KEY] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 3,
+       },
+       [VCAP_AF_MAP_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 9,
+       },
+       [VCAP_AF_CLS_VID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 26,
+               .width = 3,
+       },
+       [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 3,
+       },
+       [VCAP_AF_VID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 32,
+               .width = 13,
+       },
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 45,
+               .width = 1,
+       },
+       [VCAP_AF_ISDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 46,
+               .width = 12,
+       },
+       [VCAP_AF_FWD_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 58,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 59,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_Q] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 60,
+               .width = 3,
+       },
+       [VCAP_AF_OAM_Y1731_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 63,
+               .width = 3,
+       },
+       [VCAP_AF_OAM_TWAMP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 66,
+               .width = 1,
+       },
+       [VCAP_AF_OAM_IP_BFD_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 67,
+               .width = 1,
+       },
+       [VCAP_AF_TC_LABEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 3,
+       },
+       [VCAP_AF_TTL_LABEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 71,
+               .width = 3,
+       },
+       [VCAP_AF_NUM_VLD_LABELS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 74,
+               .width = 2,
+       },
+       [VCAP_AF_FWD_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 76,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_OAM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 79,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_MEP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 82,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_MIP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 83,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_OAM_FLAVOR] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 84,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_IP_CTRL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 85,
+               .width = 1,
+       },
+       [VCAP_AF_PAG_OVERRIDE_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 86,
+               .width = 8,
+       },
+       [VCAP_AF_PAG_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 94,
+               .width = 8,
+       },
+       [VCAP_AF_S2_KEY_SEL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 102,
+               .width = 1,
+       },
+       [VCAP_AF_S2_KEY_SEL_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 103,
+               .width = 6,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 109,
+               .width = 2,
+       },
+       [VCAP_AF_PIPELINE_ACT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 111,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 112,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_KEY_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 117,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_NORM_W16_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 122,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 127,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 129,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_NORMALIZE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 131,
+               .width = 1,
+       },
+       [VCAP_AF_NXT_IDX_CTRL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 132,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 135,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is0_mlbs_reduced_actionfield[] = {
+       [VCAP_AF_TYPE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 3,
+       },
+       [VCAP_AF_QOS_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 5,
+               .width = 1,
+       },
+       [VCAP_AF_QOS_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 6,
+               .width = 3,
+       },
+       [VCAP_AF_DP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_AF_DP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_LOOKUP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 2,
+       },
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_AF_ISDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 15,
+               .width = 12,
+       },
+       [VCAP_AF_FWD_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 27,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 28,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_Q] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 3,
+       },
+       [VCAP_AF_TC_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 32,
+               .width = 1,
+       },
+       [VCAP_AF_TTL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 33,
+               .width = 1,
+       },
+       [VCAP_AF_FWD_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 34,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_OAM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 37,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_MEP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 40,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_MIP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 41,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_OAM_FLAVOR] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 42,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_IP_CTRL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 43,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 44,
+               .width = 2,
+       },
+       [VCAP_AF_PIPELINE_ACT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 46,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT_REDUCED] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 47,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_KEY_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 50,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_NORM_W32_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 55,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 57,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_NORMALIZE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 59,
+               .width = 1,
+       },
+       [VCAP_AF_NXT_IDX_CTRL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 60,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 63,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is0_classification_actionfield[] = {
+       [VCAP_AF_TYPE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_DSCP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_DSCP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 6,
+       },
+       [VCAP_AF_COSID_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 8,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 9,
+               .width = 3,
+       },
+       [VCAP_AF_QOS_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 12,
+               .width = 1,
+       },
+       [VCAP_AF_QOS_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 13,
+               .width = 3,
+       },
+       [VCAP_AF_DP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 16,
+               .width = 1,
+       },
+       [VCAP_AF_DP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 2,
+       },
+       [VCAP_AF_DEI_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 19,
+               .width = 1,
+       },
+       [VCAP_AF_DEI_VAL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 20,
+               .width = 1,
+       },
+       [VCAP_AF_PCP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 21,
+               .width = 1,
+       },
+       [VCAP_AF_PCP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 22,
+               .width = 3,
+       },
+       [VCAP_AF_MAP_LOOKUP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 25,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_KEY] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 3,
+       },
+       [VCAP_AF_MAP_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 30,
+               .width = 9,
+       },
+       [VCAP_AF_CLS_VID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 39,
+               .width = 3,
+       },
+       [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 3,
+       },
+       [VCAP_AF_VID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 45,
+               .width = 13,
+       },
+       [VCAP_AF_VLAN_POP_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 58,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_POP_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 59,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 61,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 62,
+               .width = 2,
+       },
+       [VCAP_AF_TPID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 64,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_WAS_TAGGED] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 66,
+               .width = 2,
+       },
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 68,
+               .width = 1,
+       },
+       [VCAP_AF_ISDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 69,
+               .width = 12,
+       },
+       [VCAP_AF_RT_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 81,
+               .width = 2,
+       },
+       [VCAP_AF_LPM_AFFIX_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 83,
+               .width = 1,
+       },
+       [VCAP_AF_LPM_AFFIX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 84,
+               .width = 10,
+       },
+       [VCAP_AF_RLEG_DMAC_CHK_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 94,
+               .width = 1,
+       },
+       [VCAP_AF_TTL_DECR_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 95,
+               .width = 1,
+       },
+       [VCAP_AF_L3_MAC_UPDATE_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 96,
+               .width = 1,
+       },
+       [VCAP_AF_FWD_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 97,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 98,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_Q] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 99,
+               .width = 3,
+       },
+       [VCAP_AF_MIP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 102,
+               .width = 2,
+       },
+       [VCAP_AF_OAM_Y1731_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 104,
+               .width = 3,
+       },
+       [VCAP_AF_OAM_TWAMP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 107,
+               .width = 1,
+       },
+       [VCAP_AF_OAM_IP_BFD_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 108,
+               .width = 1,
+       },
+       [VCAP_AF_PAG_OVERRIDE_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 109,
+               .width = 8,
+       },
+       [VCAP_AF_PAG_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 117,
+               .width = 8,
+       },
+       [VCAP_AF_S2_KEY_SEL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 125,
+               .width = 1,
+       },
+       [VCAP_AF_S2_KEY_SEL_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 126,
+               .width = 6,
+       },
+       [VCAP_AF_INJ_MASQ_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 132,
+               .width = 1,
+       },
+       [VCAP_AF_INJ_MASQ_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 133,
+               .width = 7,
+       },
+       [VCAP_AF_LPORT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 140,
+               .width = 1,
+       },
+       [VCAP_AF_INJ_MASQ_LPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 141,
+               .width = 7,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 148,
+               .width = 2,
+       },
+       [VCAP_AF_PIPELINE_ACT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 150,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 151,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_KEY_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 156,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_NORM_W16_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 161,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 166,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 168,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_NORMALIZE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 170,
+               .width = 1,
+       },
+       [VCAP_AF_NXT_IDX_CTRL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 171,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 174,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is0_full_actionfield[] = {
+       [VCAP_AF_DSCP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_DSCP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 1,
+               .width = 6,
+       },
+       [VCAP_AF_COSID_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 7,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 8,
+               .width = 3,
+       },
+       [VCAP_AF_QOS_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 11,
+               .width = 1,
+       },
+       [VCAP_AF_QOS_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 3,
+       },
+       [VCAP_AF_DP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_AF_DP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 16,
+               .width = 2,
+       },
+       [VCAP_AF_DEI_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 18,
+               .width = 1,
+       },
+       [VCAP_AF_DEI_VAL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 19,
+               .width = 1,
+       },
+       [VCAP_AF_PCP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 20,
+               .width = 1,
+       },
+       [VCAP_AF_PCP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 21,
+               .width = 3,
+       },
+       [VCAP_AF_MAP_LOOKUP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 24,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_KEY] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 26,
+               .width = 3,
+       },
+       [VCAP_AF_MAP_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 29,
+               .width = 9,
+       },
+       [VCAP_AF_CLS_VID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 38,
+               .width = 3,
+       },
+       [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 41,
+               .width = 3,
+       },
+       [VCAP_AF_VID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 44,
+               .width = 13,
+       },
+       [VCAP_AF_VLAN_POP_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 57,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_POP_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 58,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 60,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 61,
+               .width = 2,
+       },
+       [VCAP_AF_TPID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 63,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_WAS_TAGGED] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 65,
+               .width = 2,
+       },
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 67,
+               .width = 1,
+       },
+       [VCAP_AF_ISDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 68,
+               .width = 12,
+       },
+       [VCAP_AF_MASK_MODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 80,
+               .width = 3,
+       },
+       [VCAP_AF_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 83,
+               .width = 65,
+       },
+       [VCAP_AF_RT_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 148,
+               .width = 2,
+       },
+       [VCAP_AF_LPM_AFFIX_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 150,
+               .width = 1,
+       },
+       [VCAP_AF_LPM_AFFIX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 151,
+               .width = 10,
+       },
+       [VCAP_AF_RLEG_DMAC_CHK_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 161,
+               .width = 1,
+       },
+       [VCAP_AF_TTL_DECR_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 162,
+               .width = 1,
+       },
+       [VCAP_AF_L3_MAC_UPDATE_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 163,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 164,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_Q] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 165,
+               .width = 3,
+       },
+       [VCAP_AF_MIP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 168,
+               .width = 2,
+       },
+       [VCAP_AF_OAM_Y1731_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 170,
+               .width = 3,
+       },
+       [VCAP_AF_OAM_TWAMP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 173,
+               .width = 1,
+       },
+       [VCAP_AF_OAM_IP_BFD_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 174,
+               .width = 1,
+       },
+       [VCAP_AF_RSVD_LBL_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 175,
+               .width = 4,
+       },
+       [VCAP_AF_TC_LABEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 179,
+               .width = 3,
+       },
+       [VCAP_AF_TTL_LABEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 182,
+               .width = 3,
+       },
+       [VCAP_AF_NUM_VLD_LABELS] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 185,
+               .width = 2,
+       },
+       [VCAP_AF_FWD_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 187,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_OAM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 190,
+               .width = 3,
+       },
+       [VCAP_AF_MPLS_MEP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 193,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_MIP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 194,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_OAM_FLAVOR] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 195,
+               .width = 1,
+       },
+       [VCAP_AF_MPLS_IP_CTRL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 196,
+               .width = 1,
+       },
+       [VCAP_AF_CUSTOM_ACE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 197,
+               .width = 5,
+       },
+       [VCAP_AF_CUSTOM_ACE_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 202,
+               .width = 2,
+       },
+       [VCAP_AF_PAG_OVERRIDE_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 204,
+               .width = 8,
+       },
+       [VCAP_AF_PAG_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 212,
+               .width = 8,
+       },
+       [VCAP_AF_S2_KEY_SEL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 220,
+               .width = 1,
+       },
+       [VCAP_AF_S2_KEY_SEL_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 221,
+               .width = 6,
+       },
+       [VCAP_AF_INJ_MASQ_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 227,
+               .width = 1,
+       },
+       [VCAP_AF_INJ_MASQ_PORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 228,
+               .width = 7,
+       },
+       [VCAP_AF_LPORT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 235,
+               .width = 1,
+       },
+       [VCAP_AF_INJ_MASQ_LPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 236,
+               .width = 7,
+       },
+       [VCAP_AF_MATCH_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 243,
+               .width = 16,
+       },
+       [VCAP_AF_MATCH_ID_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 259,
+               .width = 16,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 275,
+               .width = 2,
+       },
+       [VCAP_AF_PIPELINE_ACT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 277,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 278,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_KEY_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 283,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_NORM_W16_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 288,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_OFFSET_FROM_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 293,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_TYPE_AFTER_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 295,
+               .width = 2,
+       },
+       [VCAP_AF_NXT_NORMALIZE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 297,
+               .width = 1,
+       },
+       [VCAP_AF_NXT_IDX_CTRL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 298,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 301,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is0_class_reduced_actionfield[] = {
+       [VCAP_AF_TYPE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_COSID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 3,
+       },
+       [VCAP_AF_QOS_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 5,
+               .width = 1,
+       },
+       [VCAP_AF_QOS_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 6,
+               .width = 3,
+       },
+       [VCAP_AF_DP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_AF_DP_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_LOOKUP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 12,
+               .width = 2,
+       },
+       [VCAP_AF_MAP_KEY] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 14,
+               .width = 3,
+       },
+       [VCAP_AF_CLS_VID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 3,
+       },
+       [VCAP_AF_GVID_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 3,
+       },
+       [VCAP_AF_VID_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 23,
+               .width = 13,
+       },
+       [VCAP_AF_VLAN_POP_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 36,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_POP_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 37,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 39,
+               .width = 1,
+       },
+       [VCAP_AF_VLAN_PUSH_CNT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 40,
+               .width = 2,
+       },
+       [VCAP_AF_TPID_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 42,
+               .width = 2,
+       },
+       [VCAP_AF_VLAN_WAS_TAGGED] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 44,
+               .width = 2,
+       },
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 46,
+               .width = 1,
+       },
+       [VCAP_AF_ISDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 47,
+               .width = 12,
+       },
+       [VCAP_AF_FWD_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 59,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 60,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_Q] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 61,
+               .width = 3,
+       },
+       [VCAP_AF_MIP_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 64,
+               .width = 2,
+       },
+       [VCAP_AF_OAM_Y1731_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 66,
+               .width = 3,
+       },
+       [VCAP_AF_LPORT_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 69,
+               .width = 1,
+       },
+       [VCAP_AF_INJ_MASQ_LPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 70,
+               .width = 7,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 77,
+               .width = 2,
+       },
+       [VCAP_AF_PIPELINE_ACT_SEL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 79,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 80,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_KEY_TYPE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 85,
+               .width = 5,
+       },
+       [VCAP_AF_NXT_IDX_CTRL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 90,
+               .width = 3,
+       },
+       [VCAP_AF_NXT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 93,
+               .width = 12,
+       },
+};
+
+static const struct vcap_field is2_base_type_actionfield[] = {
+       [VCAP_AF_IS_INNER_ACL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_FORCE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_PIPELINE_PT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 5,
+       },
+       [VCAP_AF_HIT_ME_ONCE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 7,
+               .width = 1,
+       },
+       [VCAP_AF_INTR_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 8,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_COPY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 9,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_QUEUE_NUM] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 10,
+               .width = 3,
+       },
+       [VCAP_AF_CPU_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 13,
+               .width = 1,
+       },
+       [VCAP_AF_LRN_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 14,
+               .width = 1,
+       },
+       [VCAP_AF_RT_DIS] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 15,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 16,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 17,
+               .width = 6,
+       },
+       [VCAP_AF_IGNORE_PIPELINE_CTRL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 23,
+               .width = 1,
+       },
+       [VCAP_AF_DLB_OFFSET] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 24,
+               .width = 3,
+       },
+       [VCAP_AF_MASK_MODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 3,
+       },
+       [VCAP_AF_PORT_MASK] = {
+               .type = VCAP_FIELD_U72,
+               .offset = 30,
+               .width = 68,
+       },
+       [VCAP_AF_RSDX_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 98,
+               .width = 1,
+       },
+       [VCAP_AF_RSDX_VAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 99,
+               .width = 12,
+       },
+       [VCAP_AF_MIRROR_PROBE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 111,
+               .width = 2,
+       },
+       [VCAP_AF_REW_CMD] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 113,
+               .width = 11,
+       },
+       [VCAP_AF_TTL_UPDATE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 124,
+               .width = 1,
+       },
+       [VCAP_AF_SAM_SEQ_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 125,
+               .width = 1,
+       },
+       [VCAP_AF_TCP_UDP_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 126,
+               .width = 1,
+       },
+       [VCAP_AF_TCP_UDP_DPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 127,
+               .width = 16,
+       },
+       [VCAP_AF_TCP_UDP_SPORT] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 143,
+               .width = 16,
+       },
+       [VCAP_AF_MATCH_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 159,
+               .width = 16,
+       },
+       [VCAP_AF_MATCH_ID_MASK] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 175,
+               .width = 16,
+       },
+       [VCAP_AF_CNT_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 191,
+               .width = 12,
+       },
+       [VCAP_AF_SWAP_MAC_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 203,
+               .width = 1,
+       },
+       [VCAP_AF_ACL_RT_MODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 204,
+               .width = 4,
+       },
+       [VCAP_AF_ACL_MAC] = {
+               .type = VCAP_FIELD_U48,
+               .offset = 208,
+               .width = 48,
+       },
+       [VCAP_AF_DMAC_OFFSET_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 256,
+               .width = 1,
+       },
+       [VCAP_AF_PTP_MASTER_SEL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 257,
+               .width = 2,
+       },
+       [VCAP_AF_LOG_MSG_INTERVAL] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 259,
+               .width = 4,
+       },
+       [VCAP_AF_SIP_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 263,
+               .width = 5,
+       },
+       [VCAP_AF_RLEG_STAT_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 268,
+               .width = 3,
+       },
+       [VCAP_AF_IGR_ACL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 271,
+               .width = 1,
+       },
+       [VCAP_AF_EGR_ACL_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 272,
+               .width = 1,
+       },
+};
+
+static const struct vcap_field es2_base_type_actionfield[] = {
+       [VCAP_AF_HIT_ME_ONCE] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 0,
+               .width = 1,
+       },
+       [VCAP_AF_INTR_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 1,
+               .width = 1,
+       },
+       [VCAP_AF_FWD_MODE] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 2,
+               .width = 2,
+       },
+       [VCAP_AF_COPY_QUEUE_NUM] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 4,
+               .width = 16,
+       },
+       [VCAP_AF_COPY_PORT_NUM] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 20,
+               .width = 7,
+       },
+       [VCAP_AF_MIRROR_PROBE_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 27,
+               .width = 2,
+       },
+       [VCAP_AF_CPU_COPY_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 29,
+               .width = 1,
+       },
+       [VCAP_AF_CPU_QUEUE_NUM] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 30,
+               .width = 3,
+       },
+       [VCAP_AF_POLICE_ENA] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 33,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_REMARK] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 34,
+               .width = 1,
+       },
+       [VCAP_AF_POLICE_IDX] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 35,
+               .width = 6,
+       },
+       [VCAP_AF_ES2_REW_CMD] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 41,
+               .width = 3,
+       },
+       [VCAP_AF_CNT_ID] = {
+               .type = VCAP_FIELD_U32,
+               .offset = 44,
+               .width = 11,
+       },
+       [VCAP_AF_IGNORE_PIPELINE_CTRL] = {
+               .type = VCAP_FIELD_BIT,
+               .offset = 55,
+               .width = 1,
+       },
+};
+
+/* actionfield_set */
+static const struct vcap_set is0_actionfield_set[] = {
+       [VCAP_AFS_MLBS] = {
+               .type_id = 0,
+               .sw_per_item = 2,
+               .sw_cnt = 6,
+       },
+       [VCAP_AFS_MLBS_REDUCED] = {
+               .type_id = 0,
+               .sw_per_item = 1,
+               .sw_cnt = 12,
+       },
+       [VCAP_AFS_CLASSIFICATION] = {
+               .type_id = 1,
+               .sw_per_item = 2,
+               .sw_cnt = 6,
+       },
+       [VCAP_AFS_FULL] = {
+               .type_id = -1,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+       [VCAP_AFS_CLASS_REDUCED] = {
+               .type_id = 1,
+               .sw_per_item = 1,
+               .sw_cnt = 12,
+       },
+};
+
+static const struct vcap_set is2_actionfield_set[] = {
+       [VCAP_AFS_BASE_TYPE] = {
+               .type_id = -1,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+};
+
+static const struct vcap_set es2_actionfield_set[] = {
+       [VCAP_AFS_BASE_TYPE] = {
+               .type_id = -1,
+               .sw_per_item = 3,
+               .sw_cnt = 4,
+       },
+};
+
+/* actionfield_set map */
+static const struct vcap_field *is0_actionfield_set_map[] = {
+       [VCAP_AFS_MLBS] = is0_mlbs_actionfield,
+       [VCAP_AFS_MLBS_REDUCED] = is0_mlbs_reduced_actionfield,
+       [VCAP_AFS_CLASSIFICATION] = is0_classification_actionfield,
+       [VCAP_AFS_FULL] = is0_full_actionfield,
+       [VCAP_AFS_CLASS_REDUCED] = is0_class_reduced_actionfield,
+};
+
+static const struct vcap_field *is2_actionfield_set_map[] = {
+       [VCAP_AFS_BASE_TYPE] = is2_base_type_actionfield,
+};
+
+static const struct vcap_field *es2_actionfield_set_map[] = {
+       [VCAP_AFS_BASE_TYPE] = es2_base_type_actionfield,
+};
+
+/* actionfield_set map size */
+static int is0_actionfield_set_map_size[] = {
+       [VCAP_AFS_MLBS] = ARRAY_SIZE(is0_mlbs_actionfield),
+       [VCAP_AFS_MLBS_REDUCED] = ARRAY_SIZE(is0_mlbs_reduced_actionfield),
+       [VCAP_AFS_CLASSIFICATION] = ARRAY_SIZE(is0_classification_actionfield),
+       [VCAP_AFS_FULL] = ARRAY_SIZE(is0_full_actionfield),
+       [VCAP_AFS_CLASS_REDUCED] = ARRAY_SIZE(is0_class_reduced_actionfield),
+};
+
+static int is2_actionfield_set_map_size[] = {
+       [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(is2_base_type_actionfield),
+};
+
+static int es2_actionfield_set_map_size[] = {
+       [VCAP_AFS_BASE_TYPE] = ARRAY_SIZE(es2_base_type_actionfield),
+};
+
+/* Type Groups */
+static const struct vcap_typegroup is0_x12_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 5,
+               .value = 16,
+       },
+       {
+               .offset = 52,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 104,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 156,
+               .width = 3,
+               .value = 0,
+       },
+       {
+               .offset = 208,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 260,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 312,
+               .width = 4,
+               .value = 0,
+       },
+       {
+               .offset = 364,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 416,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 468,
+               .width = 3,
+               .value = 0,
+       },
+       {
+               .offset = 520,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 572,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x6_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 4,
+               .value = 8,
+       },
+       {
+               .offset = 52,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 104,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 156,
+               .width = 3,
+               .value = 0,
+       },
+       {
+               .offset = 208,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 260,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x3_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 3,
+               .value = 4,
+       },
+       {
+               .offset = 52,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 104,
+               .width = 2,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x2_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 52,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x1_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup is2_x12_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 3,
+               .value = 4,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 312,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 468,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x6_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x3_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup is2_x1_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup es2_x12_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 3,
+               .value = 4,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 312,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 468,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup es2_x6_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 156,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup es2_x3_keyfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 1,
+               .value = 1,
+       },
+       {}
+};
+
+static const struct vcap_typegroup es2_x1_keyfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup *is0_keyfield_set_typegroups[] = {
+       [12] = is0_x12_keyfield_set_typegroups,
+       [6] = is0_x6_keyfield_set_typegroups,
+       [3] = is0_x3_keyfield_set_typegroups,
+       [2] = is0_x2_keyfield_set_typegroups,
+       [1] = is0_x1_keyfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup *is2_keyfield_set_typegroups[] = {
+       [12] = is2_x12_keyfield_set_typegroups,
+       [6] = is2_x6_keyfield_set_typegroups,
+       [3] = is2_x3_keyfield_set_typegroups,
+       [1] = is2_x1_keyfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup *es2_keyfield_set_typegroups[] = {
+       [12] = es2_x12_keyfield_set_typegroups,
+       [6] = es2_x6_keyfield_set_typegroups,
+       [3] = es2_x3_keyfield_set_typegroups,
+       [1] = es2_x1_keyfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup is0_x3_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 3,
+               .value = 4,
+       },
+       {
+               .offset = 110,
+               .width = 2,
+               .value = 0,
+       },
+       {
+               .offset = 220,
+               .width = 2,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x2_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 110,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is0_x1_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 1,
+               .value = 1,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x3_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 110,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 220,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup is2_x1_actionfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup es2_x3_actionfield_set_typegroups[] = {
+       {
+               .offset = 0,
+               .width = 2,
+               .value = 2,
+       },
+       {
+               .offset = 21,
+               .width = 1,
+               .value = 0,
+       },
+       {
+               .offset = 42,
+               .width = 1,
+               .value = 0,
+       },
+       {}
+};
+
+static const struct vcap_typegroup es2_x1_actionfield_set_typegroups[] = {
+       {}
+};
+
+static const struct vcap_typegroup *is0_actionfield_set_typegroups[] = {
+       [3] = is0_x3_actionfield_set_typegroups,
+       [2] = is0_x2_actionfield_set_typegroups,
+       [1] = is0_x1_actionfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup *is2_actionfield_set_typegroups[] = {
+       [3] = is2_x3_actionfield_set_typegroups,
+       [1] = is2_x1_actionfield_set_typegroups,
+       [13] = NULL,
+};
+
+static const struct vcap_typegroup *es2_actionfield_set_typegroups[] = {
+       [3] = es2_x3_actionfield_set_typegroups,
+       [1] = es2_x1_actionfield_set_typegroups,
+       [13] = NULL,
+};
+
+/* Keyfieldset names */
+static const char * const vcap_keyfield_set_names[] = {
+       [VCAP_KFS_NO_VALUE]                      =  "(None)",
+       [VCAP_KFS_ARP]                           =  "VCAP_KFS_ARP",
+       [VCAP_KFS_ETAG]                          =  "VCAP_KFS_ETAG",
+       [VCAP_KFS_IP4_OTHER]                     =  "VCAP_KFS_IP4_OTHER",
+       [VCAP_KFS_IP4_TCP_UDP]                   =  "VCAP_KFS_IP4_TCP_UDP",
+       [VCAP_KFS_IP4_VID]                       =  "VCAP_KFS_IP4_VID",
+       [VCAP_KFS_IP6_STD]                       =  "VCAP_KFS_IP6_STD",
+       [VCAP_KFS_IP6_VID]                       =  "VCAP_KFS_IP6_VID",
+       [VCAP_KFS_IP_7TUPLE]                     =  "VCAP_KFS_IP_7TUPLE",
+       [VCAP_KFS_LL_FULL]                       =  "VCAP_KFS_LL_FULL",
+       [VCAP_KFS_MAC_ETYPE]                     =  "VCAP_KFS_MAC_ETYPE",
+       [VCAP_KFS_MLL]                           =  "VCAP_KFS_MLL",
+       [VCAP_KFS_NORMAL]                        =  "VCAP_KFS_NORMAL",
+       [VCAP_KFS_NORMAL_5TUPLE_IP4]             =  "VCAP_KFS_NORMAL_5TUPLE_IP4",
+       [VCAP_KFS_NORMAL_7TUPLE]                 =  "VCAP_KFS_NORMAL_7TUPLE",
+       [VCAP_KFS_PURE_5TUPLE_IP4]               =  "VCAP_KFS_PURE_5TUPLE_IP4",
+       [VCAP_KFS_TRI_VID]                       =  "VCAP_KFS_TRI_VID",
+};
+
+/* Actionfieldset names */
+static const char * const vcap_actionfield_set_names[] = {
+       [VCAP_AFS_NO_VALUE]                      =  "(None)",
+       [VCAP_AFS_BASE_TYPE]                     =  "VCAP_AFS_BASE_TYPE",
+       [VCAP_AFS_CLASSIFICATION]                =  "VCAP_AFS_CLASSIFICATION",
+       [VCAP_AFS_CLASS_REDUCED]                 =  "VCAP_AFS_CLASS_REDUCED",
+       [VCAP_AFS_FULL]                          =  "VCAP_AFS_FULL",
+       [VCAP_AFS_MLBS]                          =  "VCAP_AFS_MLBS",
+       [VCAP_AFS_MLBS_REDUCED]                  =  "VCAP_AFS_MLBS_REDUCED",
+};
+
+/* Keyfield names */
+static const char * const vcap_keyfield_names[] = {
+       [VCAP_KF_NO_VALUE]                       =  "(None)",
+       [VCAP_KF_8021BR_ECID_BASE]               =  "8021BR_ECID_BASE",
+       [VCAP_KF_8021BR_ECID_EXT]                =  "8021BR_ECID_EXT",
+       [VCAP_KF_8021BR_E_TAGGED]                =  "8021BR_E_TAGGED",
+       [VCAP_KF_8021BR_GRP]                     =  "8021BR_GRP",
+       [VCAP_KF_8021BR_IGR_ECID_BASE]           =  "8021BR_IGR_ECID_BASE",
+       [VCAP_KF_8021BR_IGR_ECID_EXT]            =  "8021BR_IGR_ECID_EXT",
+       [VCAP_KF_8021Q_DEI0]                     =  "8021Q_DEI0",
+       [VCAP_KF_8021Q_DEI1]                     =  "8021Q_DEI1",
+       [VCAP_KF_8021Q_DEI2]                     =  "8021Q_DEI2",
+       [VCAP_KF_8021Q_DEI_CLS]                  =  "8021Q_DEI_CLS",
+       [VCAP_KF_8021Q_PCP0]                     =  "8021Q_PCP0",
+       [VCAP_KF_8021Q_PCP1]                     =  "8021Q_PCP1",
+       [VCAP_KF_8021Q_PCP2]                     =  "8021Q_PCP2",
+       [VCAP_KF_8021Q_PCP_CLS]                  =  "8021Q_PCP_CLS",
+       [VCAP_KF_8021Q_TPID0]                    =  "8021Q_TPID0",
+       [VCAP_KF_8021Q_TPID1]                    =  "8021Q_TPID1",
+       [VCAP_KF_8021Q_TPID2]                    =  "8021Q_TPID2",
+       [VCAP_KF_8021Q_VID0]                     =  "8021Q_VID0",
+       [VCAP_KF_8021Q_VID1]                     =  "8021Q_VID1",
+       [VCAP_KF_8021Q_VID2]                     =  "8021Q_VID2",
+       [VCAP_KF_8021Q_VID_CLS]                  =  "8021Q_VID_CLS",
+       [VCAP_KF_8021Q_VLAN_TAGGED_IS]           =  "8021Q_VLAN_TAGGED_IS",
+       [VCAP_KF_8021Q_VLAN_TAGS]                =  "8021Q_VLAN_TAGS",
+       [VCAP_KF_ACL_GRP_ID]                     =  "ACL_GRP_ID",
+       [VCAP_KF_ARP_ADDR_SPACE_OK_IS]           =  "ARP_ADDR_SPACE_OK_IS",
+       [VCAP_KF_ARP_LEN_OK_IS]                  =  "ARP_LEN_OK_IS",
+       [VCAP_KF_ARP_OPCODE]                     =  "ARP_OPCODE",
+       [VCAP_KF_ARP_OPCODE_UNKNOWN_IS]          =  "ARP_OPCODE_UNKNOWN_IS",
+       [VCAP_KF_ARP_PROTO_SPACE_OK_IS]          =  "ARP_PROTO_SPACE_OK_IS",
+       [VCAP_KF_ARP_SENDER_MATCH_IS]            =  "ARP_SENDER_MATCH_IS",
+       [VCAP_KF_ARP_TGT_MATCH_IS]               =  "ARP_TGT_MATCH_IS",
+       [VCAP_KF_COSID_CLS]                      =  "COSID_CLS",
+       [VCAP_KF_DST_ENTRY]                      =  "DST_ENTRY",
+       [VCAP_KF_ES0_ISDX_KEY_ENA]               =  "ES0_ISDX_KEY_ENA",
+       [VCAP_KF_ETYPE]                          =  "ETYPE",
+       [VCAP_KF_ETYPE_LEN_IS]                   =  "ETYPE_LEN_IS",
+       [VCAP_KF_ETYPE_MPLS]                     =  "ETYPE_MPLS",
+       [VCAP_KF_IF_EGR_PORT_MASK]               =  "IF_EGR_PORT_MASK",
+       [VCAP_KF_IF_EGR_PORT_MASK_RNG]           =  "IF_EGR_PORT_MASK_RNG",
+       [VCAP_KF_IF_IGR_PORT]                    =  "IF_IGR_PORT",
+       [VCAP_KF_IF_IGR_PORT_MASK]               =  "IF_IGR_PORT_MASK",
+       [VCAP_KF_IF_IGR_PORT_MASK_L3]            =  "IF_IGR_PORT_MASK_L3",
+       [VCAP_KF_IF_IGR_PORT_MASK_RNG]           =  "IF_IGR_PORT_MASK_RNG",
+       [VCAP_KF_IF_IGR_PORT_MASK_SEL]           =  "IF_IGR_PORT_MASK_SEL",
+       [VCAP_KF_IF_IGR_PORT_SEL]                =  "IF_IGR_PORT_SEL",
+       [VCAP_KF_IP4_IS]                         =  "IP4_IS",
+       [VCAP_KF_IP_MC_IS]                       =  "IP_MC_IS",
+       [VCAP_KF_IP_PAYLOAD_5TUPLE]              =  "IP_PAYLOAD_5TUPLE",
+       [VCAP_KF_IP_SNAP_IS]                     =  "IP_SNAP_IS",
+       [VCAP_KF_ISDX_CLS]                       =  "ISDX_CLS",
+       [VCAP_KF_ISDX_GT0_IS]                    =  "ISDX_GT0_IS",
+       [VCAP_KF_L2_BC_IS]                       =  "L2_BC_IS",
+       [VCAP_KF_L2_DMAC]                        =  "L2_DMAC",
+       [VCAP_KF_L2_FWD_IS]                      =  "L2_FWD_IS",
+       [VCAP_KF_L2_MC_IS]                       =  "L2_MC_IS",
+       [VCAP_KF_L2_PAYLOAD_ETYPE]               =  "L2_PAYLOAD_ETYPE",
+       [VCAP_KF_L2_SMAC]                        =  "L2_SMAC",
+       [VCAP_KF_L3_DIP_EQ_SIP_IS]               =  "L3_DIP_EQ_SIP_IS",
+       [VCAP_KF_L3_DMAC_DIP_MATCH]              =  "L3_DMAC_DIP_MATCH",
+       [VCAP_KF_L3_DPL_CLS]                     =  "L3_DPL_CLS",
+       [VCAP_KF_L3_DSCP]                        =  "L3_DSCP",
+       [VCAP_KF_L3_DST_IS]                      =  "L3_DST_IS",
+       [VCAP_KF_L3_FRAGMENT_TYPE]               =  "L3_FRAGMENT_TYPE",
+       [VCAP_KF_L3_FRAG_INVLD_L4_LEN]           =  "L3_FRAG_INVLD_L4_LEN",
+       [VCAP_KF_L3_IP4_DIP]                     =  "L3_IP4_DIP",
+       [VCAP_KF_L3_IP4_SIP]                     =  "L3_IP4_SIP",
+       [VCAP_KF_L3_IP6_DIP]                     =  "L3_IP6_DIP",
+       [VCAP_KF_L3_IP6_SIP]                     =  "L3_IP6_SIP",
+       [VCAP_KF_L3_IP_PROTO]                    =  "L3_IP_PROTO",
+       [VCAP_KF_L3_OPTIONS_IS]                  =  "L3_OPTIONS_IS",
+       [VCAP_KF_L3_PAYLOAD]                     =  "L3_PAYLOAD",
+       [VCAP_KF_L3_RT_IS]                       =  "L3_RT_IS",
+       [VCAP_KF_L3_SMAC_SIP_MATCH]              =  "L3_SMAC_SIP_MATCH",
+       [VCAP_KF_L3_TOS]                         =  "L3_TOS",
+       [VCAP_KF_L3_TTL_GT0]                     =  "L3_TTL_GT0",
+       [VCAP_KF_L4_ACK]                         =  "L4_ACK",
+       [VCAP_KF_L4_DPORT]                       =  "L4_DPORT",
+       [VCAP_KF_L4_FIN]                         =  "L4_FIN",
+       [VCAP_KF_L4_PAYLOAD]                     =  "L4_PAYLOAD",
+       [VCAP_KF_L4_PSH]                         =  "L4_PSH",
+       [VCAP_KF_L4_RNG]                         =  "L4_RNG",
+       [VCAP_KF_L4_RST]                         =  "L4_RST",
+       [VCAP_KF_L4_SEQUENCE_EQ0_IS]             =  "L4_SEQUENCE_EQ0_IS",
+       [VCAP_KF_L4_SPORT]                       =  "L4_SPORT",
+       [VCAP_KF_L4_SPORT_EQ_DPORT_IS]           =  "L4_SPORT_EQ_DPORT_IS",
+       [VCAP_KF_L4_SYN]                         =  "L4_SYN",
+       [VCAP_KF_L4_URG]                         =  "L4_URG",
+       [VCAP_KF_LOOKUP_FIRST_IS]                =  "LOOKUP_FIRST_IS",
+       [VCAP_KF_LOOKUP_GEN_IDX]                 =  "LOOKUP_GEN_IDX",
+       [VCAP_KF_LOOKUP_GEN_IDX_SEL]             =  "LOOKUP_GEN_IDX_SEL",
+       [VCAP_KF_LOOKUP_PAG]                     =  "LOOKUP_PAG",
+       [VCAP_KF_MIRROR_ENA]                     =  "MIRROR_ENA",
+       [VCAP_KF_OAM_CCM_CNTS_EQ0]               =  "OAM_CCM_CNTS_EQ0",
+       [VCAP_KF_OAM_MEL_FLAGS]                  =  "OAM_MEL_FLAGS",
+       [VCAP_KF_OAM_Y1731_IS]                   =  "OAM_Y1731_IS",
+       [VCAP_KF_PROT_ACTIVE]                    =  "PROT_ACTIVE",
+       [VCAP_KF_TCP_IS]                         =  "TCP_IS",
+       [VCAP_KF_TCP_UDP_IS]                     =  "TCP_UDP_IS",
+       [VCAP_KF_TYPE]                           =  "TYPE",
+};
+
+/* Actionfield names */
+static const char * const vcap_actionfield_names[] = {
+       [VCAP_AF_NO_VALUE]                       =  "(None)",
+       [VCAP_AF_ACL_MAC]                        =  "ACL_MAC",
+       [VCAP_AF_ACL_RT_MODE]                    =  "ACL_RT_MODE",
+       [VCAP_AF_CLS_VID_SEL]                    =  "CLS_VID_SEL",
+       [VCAP_AF_CNT_ID]                         =  "CNT_ID",
+       [VCAP_AF_COPY_PORT_NUM]                  =  "COPY_PORT_NUM",
+       [VCAP_AF_COPY_QUEUE_NUM]                 =  "COPY_QUEUE_NUM",
+       [VCAP_AF_COSID_ENA]                      =  "COSID_ENA",
+       [VCAP_AF_COSID_VAL]                      =  "COSID_VAL",
+       [VCAP_AF_CPU_COPY_ENA]                   =  "CPU_COPY_ENA",
+       [VCAP_AF_CPU_DIS]                        =  "CPU_DIS",
+       [VCAP_AF_CPU_ENA]                        =  "CPU_ENA",
+       [VCAP_AF_CPU_Q]                          =  "CPU_Q",
+       [VCAP_AF_CPU_QUEUE_NUM]                  =  "CPU_QUEUE_NUM",
+       [VCAP_AF_CUSTOM_ACE_ENA]                 =  "CUSTOM_ACE_ENA",
+       [VCAP_AF_CUSTOM_ACE_OFFSET]              =  "CUSTOM_ACE_OFFSET",
+       [VCAP_AF_DEI_ENA]                        =  "DEI_ENA",
+       [VCAP_AF_DEI_VAL]                        =  "DEI_VAL",
+       [VCAP_AF_DLB_OFFSET]                     =  "DLB_OFFSET",
+       [VCAP_AF_DMAC_OFFSET_ENA]                =  "DMAC_OFFSET_ENA",
+       [VCAP_AF_DP_ENA]                         =  "DP_ENA",
+       [VCAP_AF_DP_VAL]                         =  "DP_VAL",
+       [VCAP_AF_DSCP_ENA]                       =  "DSCP_ENA",
+       [VCAP_AF_DSCP_VAL]                       =  "DSCP_VAL",
+       [VCAP_AF_EGR_ACL_ENA]                    =  "EGR_ACL_ENA",
+       [VCAP_AF_ES2_REW_CMD]                    =  "ES2_REW_CMD",
+       [VCAP_AF_FWD_DIS]                        =  "FWD_DIS",
+       [VCAP_AF_FWD_MODE]                       =  "FWD_MODE",
+       [VCAP_AF_FWD_TYPE]                       =  "FWD_TYPE",
+       [VCAP_AF_GVID_ADD_REPLACE_SEL]           =  "GVID_ADD_REPLACE_SEL",
+       [VCAP_AF_HIT_ME_ONCE]                    =  "HIT_ME_ONCE",
+       [VCAP_AF_IGNORE_PIPELINE_CTRL]           =  "IGNORE_PIPELINE_CTRL",
+       [VCAP_AF_IGR_ACL_ENA]                    =  "IGR_ACL_ENA",
+       [VCAP_AF_INJ_MASQ_ENA]                   =  "INJ_MASQ_ENA",
+       [VCAP_AF_INJ_MASQ_LPORT]                 =  "INJ_MASQ_LPORT",
+       [VCAP_AF_INJ_MASQ_PORT]                  =  "INJ_MASQ_PORT",
+       [VCAP_AF_INTR_ENA]                       =  "INTR_ENA",
+       [VCAP_AF_ISDX_ADD_REPLACE_SEL]           =  "ISDX_ADD_REPLACE_SEL",
+       [VCAP_AF_ISDX_VAL]                       =  "ISDX_VAL",
+       [VCAP_AF_IS_INNER_ACL]                   =  "IS_INNER_ACL",
+       [VCAP_AF_L3_MAC_UPDATE_DIS]              =  "L3_MAC_UPDATE_DIS",
+       [VCAP_AF_LOG_MSG_INTERVAL]               =  "LOG_MSG_INTERVAL",
+       [VCAP_AF_LPM_AFFIX_ENA]                  =  "LPM_AFFIX_ENA",
+       [VCAP_AF_LPM_AFFIX_VAL]                  =  "LPM_AFFIX_VAL",
+       [VCAP_AF_LPORT_ENA]                      =  "LPORT_ENA",
+       [VCAP_AF_LRN_DIS]                        =  "LRN_DIS",
+       [VCAP_AF_MAP_IDX]                        =  "MAP_IDX",
+       [VCAP_AF_MAP_KEY]                        =  "MAP_KEY",
+       [VCAP_AF_MAP_LOOKUP_SEL]                 =  "MAP_LOOKUP_SEL",
+       [VCAP_AF_MASK_MODE]                      =  "MASK_MODE",
+       [VCAP_AF_MATCH_ID]                       =  "MATCH_ID",
+       [VCAP_AF_MATCH_ID_MASK]                  =  "MATCH_ID_MASK",
+       [VCAP_AF_MIP_SEL]                        =  "MIP_SEL",
+       [VCAP_AF_MIRROR_PROBE]                   =  "MIRROR_PROBE",
+       [VCAP_AF_MIRROR_PROBE_ID]                =  "MIRROR_PROBE_ID",
+       [VCAP_AF_MPLS_IP_CTRL_ENA]               =  "MPLS_IP_CTRL_ENA",
+       [VCAP_AF_MPLS_MEP_ENA]                   =  "MPLS_MEP_ENA",
+       [VCAP_AF_MPLS_MIP_ENA]                   =  "MPLS_MIP_ENA",
+       [VCAP_AF_MPLS_OAM_FLAVOR]                =  "MPLS_OAM_FLAVOR",
+       [VCAP_AF_MPLS_OAM_TYPE]                  =  "MPLS_OAM_TYPE",
+       [VCAP_AF_NUM_VLD_LABELS]                 =  "NUM_VLD_LABELS",
+       [VCAP_AF_NXT_IDX]                        =  "NXT_IDX",
+       [VCAP_AF_NXT_IDX_CTRL]                   =  "NXT_IDX_CTRL",
+       [VCAP_AF_NXT_KEY_TYPE]                   =  "NXT_KEY_TYPE",
+       [VCAP_AF_NXT_NORMALIZE]                  =  "NXT_NORMALIZE",
+       [VCAP_AF_NXT_NORM_W16_OFFSET]            =  "NXT_NORM_W16_OFFSET",
+       [VCAP_AF_NXT_NORM_W32_OFFSET]            =  "NXT_NORM_W32_OFFSET",
+       [VCAP_AF_NXT_OFFSET_FROM_TYPE]           =  "NXT_OFFSET_FROM_TYPE",
+       [VCAP_AF_NXT_TYPE_AFTER_OFFSET]          =  "NXT_TYPE_AFTER_OFFSET",
+       [VCAP_AF_OAM_IP_BFD_ENA]                 =  "OAM_IP_BFD_ENA",
+       [VCAP_AF_OAM_TWAMP_ENA]                  =  "OAM_TWAMP_ENA",
+       [VCAP_AF_OAM_Y1731_SEL]                  =  "OAM_Y1731_SEL",
+       [VCAP_AF_PAG_OVERRIDE_MASK]              =  "PAG_OVERRIDE_MASK",
+       [VCAP_AF_PAG_VAL]                        =  "PAG_VAL",
+       [VCAP_AF_PCP_ENA]                        =  "PCP_ENA",
+       [VCAP_AF_PCP_VAL]                        =  "PCP_VAL",
+       [VCAP_AF_PIPELINE_ACT_SEL]               =  "PIPELINE_ACT_SEL",
+       [VCAP_AF_PIPELINE_FORCE_ENA]             =  "PIPELINE_FORCE_ENA",
+       [VCAP_AF_PIPELINE_PT]                    =  "PIPELINE_PT",
+       [VCAP_AF_PIPELINE_PT_REDUCED]            =  "PIPELINE_PT_REDUCED",
+       [VCAP_AF_POLICE_ENA]                     =  "POLICE_ENA",
+       [VCAP_AF_POLICE_IDX]                     =  "POLICE_IDX",
+       [VCAP_AF_POLICE_REMARK]                  =  "POLICE_REMARK",
+       [VCAP_AF_PORT_MASK]                      =  "PORT_MASK",
+       [VCAP_AF_PTP_MASTER_SEL]                 =  "PTP_MASTER_SEL",
+       [VCAP_AF_QOS_ENA]                        =  "QOS_ENA",
+       [VCAP_AF_QOS_VAL]                        =  "QOS_VAL",
+       [VCAP_AF_REW_CMD]                        =  "REW_CMD",
+       [VCAP_AF_RLEG_DMAC_CHK_DIS]              =  "RLEG_DMAC_CHK_DIS",
+       [VCAP_AF_RLEG_STAT_IDX]                  =  "RLEG_STAT_IDX",
+       [VCAP_AF_RSDX_ENA]                       =  "RSDX_ENA",
+       [VCAP_AF_RSDX_VAL]                       =  "RSDX_VAL",
+       [VCAP_AF_RSVD_LBL_VAL]                   =  "RSVD_LBL_VAL",
+       [VCAP_AF_RT_DIS]                         =  "RT_DIS",
+       [VCAP_AF_RT_SEL]                         =  "RT_SEL",
+       [VCAP_AF_S2_KEY_SEL_ENA]                 =  "S2_KEY_SEL_ENA",
+       [VCAP_AF_S2_KEY_SEL_IDX]                 =  "S2_KEY_SEL_IDX",
+       [VCAP_AF_SAM_SEQ_ENA]                    =  "SAM_SEQ_ENA",
+       [VCAP_AF_SIP_IDX]                        =  "SIP_IDX",
+       [VCAP_AF_SWAP_MAC_ENA]                   =  "SWAP_MAC_ENA",
+       [VCAP_AF_TCP_UDP_DPORT]                  =  "TCP_UDP_DPORT",
+       [VCAP_AF_TCP_UDP_ENA]                    =  "TCP_UDP_ENA",
+       [VCAP_AF_TCP_UDP_SPORT]                  =  "TCP_UDP_SPORT",
+       [VCAP_AF_TC_ENA]                         =  "TC_ENA",
+       [VCAP_AF_TC_LABEL]                       =  "TC_LABEL",
+       [VCAP_AF_TPID_SEL]                       =  "TPID_SEL",
+       [VCAP_AF_TTL_DECR_DIS]                   =  "TTL_DECR_DIS",
+       [VCAP_AF_TTL_ENA]                        =  "TTL_ENA",
+       [VCAP_AF_TTL_LABEL]                      =  "TTL_LABEL",
+       [VCAP_AF_TTL_UPDATE_ENA]                 =  "TTL_UPDATE_ENA",
+       [VCAP_AF_TYPE]                           =  "TYPE",
+       [VCAP_AF_VID_VAL]                        =  "VID_VAL",
+       [VCAP_AF_VLAN_POP_CNT]                   =  "VLAN_POP_CNT",
+       [VCAP_AF_VLAN_POP_CNT_ENA]               =  "VLAN_POP_CNT_ENA",
+       [VCAP_AF_VLAN_PUSH_CNT]                  =  "VLAN_PUSH_CNT",
+       [VCAP_AF_VLAN_PUSH_CNT_ENA]              =  "VLAN_PUSH_CNT_ENA",
+       [VCAP_AF_VLAN_WAS_TAGGED]                =  "VLAN_WAS_TAGGED",
+};
+
+/* VCAPs */
+const struct vcap_info kunit_test_vcaps[] = {
+       [VCAP_TYPE_IS0] = {
+               .name = "is0",
+               .rows = 1024,
+               .sw_count = 12,
+               .sw_width = 52,
+               .sticky_width = 1,
+               .act_width = 110,
+               .default_cnt = 140,
+               .require_cnt_dis = 0,
+               .version = 1,
+               .keyfield_set = is0_keyfield_set,
+               .keyfield_set_size = ARRAY_SIZE(is0_keyfield_set),
+               .actionfield_set = is0_actionfield_set,
+               .actionfield_set_size = ARRAY_SIZE(is0_actionfield_set),
+               .keyfield_set_map = is0_keyfield_set_map,
+               .keyfield_set_map_size = is0_keyfield_set_map_size,
+               .actionfield_set_map = is0_actionfield_set_map,
+               .actionfield_set_map_size = is0_actionfield_set_map_size,
+               .keyfield_set_typegroups = is0_keyfield_set_typegroups,
+               .actionfield_set_typegroups = is0_actionfield_set_typegroups,
+       },
+       [VCAP_TYPE_IS2] = {
+               .name = "is2",
+               .rows = 256,
+               .sw_count = 12,
+               .sw_width = 52,
+               .sticky_width = 1,
+               .act_width = 110,
+               .default_cnt = 73,
+               .require_cnt_dis = 0,
+               .version = 1,
+               .keyfield_set = is2_keyfield_set,
+               .keyfield_set_size = ARRAY_SIZE(is2_keyfield_set),
+               .actionfield_set = is2_actionfield_set,
+               .actionfield_set_size = ARRAY_SIZE(is2_actionfield_set),
+               .keyfield_set_map = is2_keyfield_set_map,
+               .keyfield_set_map_size = is2_keyfield_set_map_size,
+               .actionfield_set_map = is2_actionfield_set_map,
+               .actionfield_set_map_size = is2_actionfield_set_map_size,
+               .keyfield_set_typegroups = is2_keyfield_set_typegroups,
+               .actionfield_set_typegroups = is2_actionfield_set_typegroups,
+       },
+       [VCAP_TYPE_ES2] = {
+               .name = "es2",
+               .rows = 1024,
+               .sw_count = 12,
+               .sw_width = 52,
+               .sticky_width = 1,
+               .act_width = 21,
+               .default_cnt = 74,
+               .require_cnt_dis = 0,
+               .version = 1,
+               .keyfield_set = es2_keyfield_set,
+               .keyfield_set_size = ARRAY_SIZE(es2_keyfield_set),
+               .actionfield_set = es2_actionfield_set,
+               .actionfield_set_size = ARRAY_SIZE(es2_actionfield_set),
+               .keyfield_set_map = es2_keyfield_set_map,
+               .keyfield_set_map_size = es2_keyfield_set_map_size,
+               .actionfield_set_map = es2_actionfield_set_map,
+               .actionfield_set_map_size = es2_actionfield_set_map_size,
+               .keyfield_set_typegroups = es2_keyfield_set_typegroups,
+               .actionfield_set_typegroups = es2_actionfield_set_typegroups,
+       },
+};
+
+const struct vcap_statistics kunit_test_vcap_stats = {
+       .name = "kunit_test",
+       .count = 3,
+       .keyfield_set_names = vcap_keyfield_set_names,
+       .actionfield_set_names = vcap_actionfield_set_names,
+       .keyfield_names = vcap_keyfield_names,
+       .actionfield_names = vcap_actionfield_names,
+};
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h b/drivers/net/ethernet/microchip/vcap/vcap_model_kunit.h
new file mode 100644 (file)
index 0000000..b5a74f0
--- /dev/null
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (C) 2022 Microchip Technology Inc. and its subsidiaries.
+ * Microchip VCAP test model interface for kunit testing
+ */
+
+#ifndef __VCAP_MODEL_KUNIT_H__
+#define __VCAP_MODEL_KUNIT_H__
+extern const struct vcap_info kunit_test_vcaps[];
+extern const struct vcap_statistics kunit_test_vcap_stats;
+#endif /* __VCAP_MODEL_KUNIT_H__ */
index e92860e..88d6d99 100644 (file)
@@ -154,10 +154,11 @@ nfp_fl_lag_find_group_for_master_with_lag(struct nfp_fl_lag *lag,
        return NULL;
 }
 
-int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
-                                      struct net_device *master,
-                                      struct nfp_fl_pre_lag *pre_act,
-                                      struct netlink_ext_ack *extack)
+static int nfp_fl_lag_get_group_info(struct nfp_app *app,
+                                    struct net_device *netdev,
+                                    __be16 *group_id,
+                                    u8 *batch_ver,
+                                    u8 *group_inst)
 {
        struct nfp_flower_priv *priv = app->priv;
        struct nfp_fl_lag_group *group = NULL;
@@ -165,23 +166,52 @@ int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
 
        mutex_lock(&priv->nfp_lag.lock);
        group = nfp_fl_lag_find_group_for_master_with_lag(&priv->nfp_lag,
-                                                         master);
+                                                         netdev);
        if (!group) {
                mutex_unlock(&priv->nfp_lag.lock);
-               NL_SET_ERR_MSG_MOD(extack, "invalid entry: group does not exist for LAG action");
                return -ENOENT;
        }
 
-       pre_act->group_id = cpu_to_be16(group->group_id);
-       temp_vers = cpu_to_be32(priv->nfp_lag.batch_ver <<
-                               NFP_FL_PRE_LAG_VER_OFF);
-       memcpy(pre_act->lag_version, &temp_vers, 3);
-       pre_act->instance = group->group_inst;
+       if (group_id)
+               *group_id = cpu_to_be16(group->group_id);
+
+       if (batch_ver) {
+               temp_vers = cpu_to_be32(priv->nfp_lag.batch_ver <<
+                                       NFP_FL_PRE_LAG_VER_OFF);
+               memcpy(batch_ver, &temp_vers, 3);
+       }
+
+       if (group_inst)
+               *group_inst = group->group_inst;
+
        mutex_unlock(&priv->nfp_lag.lock);
 
        return 0;
 }
 
+int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
+                                      struct net_device *master,
+                                      struct nfp_fl_pre_lag *pre_act,
+                                      struct netlink_ext_ack *extack)
+{
+       if (nfp_fl_lag_get_group_info(app, master, &pre_act->group_id,
+                                     pre_act->lag_version,
+                                     &pre_act->instance)) {
+               NL_SET_ERR_MSG_MOD(extack, "invalid entry: group does not exist for LAG action");
+               return -ENOENT;
+       }
+
+       return 0;
+}
+
+void nfp_flower_lag_get_info_from_netdev(struct nfp_app *app,
+                                        struct net_device *netdev,
+                                        struct nfp_tun_neigh_lag *lag)
+{
+       nfp_fl_lag_get_group_info(app, netdev, NULL,
+                                 lag->lag_version, &lag->lag_instance);
+}
+
 int nfp_flower_lag_get_output_id(struct nfp_app *app, struct net_device *master)
 {
        struct nfp_flower_priv *priv = app->priv;
index 4d960a9..83eaa5a 100644 (file)
@@ -76,7 +76,9 @@ nfp_flower_get_internal_port_id(struct nfp_app *app, struct net_device *netdev)
 u32 nfp_flower_get_port_id_from_netdev(struct nfp_app *app,
                                       struct net_device *netdev)
 {
+       struct nfp_flower_priv *priv = app->priv;
        int ext_port;
+       int gid;
 
        if (nfp_netdev_is_nfp_repr(netdev)) {
                return nfp_repr_get_port_id(netdev);
@@ -86,6 +88,13 @@ u32 nfp_flower_get_port_id_from_netdev(struct nfp_app *app,
                        return 0;
 
                return nfp_flower_internal_port_get_port_id(ext_port);
+       } else if (netif_is_lag_master(netdev) &&
+                  priv->flower_ext_feats & NFP_FL_FEATS_TUNNEL_NEIGH_LAG) {
+               gid = nfp_flower_lag_get_output_id(app, netdev);
+               if (gid < 0)
+                       return 0;
+
+               return (NFP_FL_LAG_OUT | gid);
        }
 
        return 0;
index cb799d1..4037254 100644 (file)
@@ -52,6 +52,7 @@ struct nfp_app;
 #define NFP_FL_FEATS_QOS_PPS           BIT(9)
 #define NFP_FL_FEATS_QOS_METER         BIT(10)
 #define NFP_FL_FEATS_DECAP_V2          BIT(11)
+#define NFP_FL_FEATS_TUNNEL_NEIGH_LAG  BIT(12)
 #define NFP_FL_FEATS_HOST_ACK          BIT(31)
 
 #define NFP_FL_ENABLE_FLOW_MERGE       BIT(0)
@@ -69,7 +70,8 @@ struct nfp_app;
        NFP_FL_FEATS_VLAN_QINQ | \
        NFP_FL_FEATS_QOS_PPS | \
        NFP_FL_FEATS_QOS_METER | \
-       NFP_FL_FEATS_DECAP_V2)
+       NFP_FL_FEATS_DECAP_V2 | \
+       NFP_FL_FEATS_TUNNEL_NEIGH_LAG)
 
 struct nfp_fl_mask_id {
        struct circ_buf mask_id_free_list;
@@ -104,6 +106,16 @@ struct nfp_fl_tunnel_offloads {
 };
 
 /**
+ * struct nfp_tun_neigh_lag - lag info
+ * @lag_version:       lag version
+ * @lag_instance:      lag instance
+ */
+struct nfp_tun_neigh_lag {
+       u8 lag_version[3];
+       u8 lag_instance;
+};
+
+/**
  * struct nfp_tun_neigh - basic neighbour data
  * @dst_addr:  Destination MAC address
  * @src_addr:  Source MAC address
@@ -133,12 +145,14 @@ struct nfp_tun_neigh_ext {
  * @src_ipv4:  Source IPv4 address
  * @common:    Neighbour/route common info
  * @ext:       Neighbour/route extended info
+ * @lag:       lag port info
  */
 struct nfp_tun_neigh_v4 {
        __be32 dst_ipv4;
        __be32 src_ipv4;
        struct nfp_tun_neigh common;
        struct nfp_tun_neigh_ext ext;
+       struct nfp_tun_neigh_lag lag;
 };
 
 /**
@@ -147,12 +161,14 @@ struct nfp_tun_neigh_v4 {
  * @src_ipv6:  Source IPv6 address
  * @common:    Neighbour/route common info
  * @ext:       Neighbour/route extended info
+ * @lag:       lag port info
  */
 struct nfp_tun_neigh_v6 {
        struct in6_addr dst_ipv6;
        struct in6_addr src_ipv6;
        struct nfp_tun_neigh common;
        struct nfp_tun_neigh_ext ext;
+       struct nfp_tun_neigh_lag lag;
 };
 
 /**
@@ -647,6 +663,9 @@ int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
                                       struct netlink_ext_ack *extack);
 int nfp_flower_lag_get_output_id(struct nfp_app *app,
                                 struct net_device *master);
+void nfp_flower_lag_get_info_from_netdev(struct nfp_app *app,
+                                        struct net_device *netdev,
+                                        struct nfp_tun_neigh_lag *lag);
 void nfp_flower_qos_init(struct nfp_app *app);
 void nfp_flower_qos_cleanup(struct nfp_app *app);
 int nfp_flower_setup_qos_offload(struct nfp_app *app, struct net_device *netdev,
index 52f6715..a8678d5 100644 (file)
@@ -290,6 +290,11 @@ nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata,
             mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6))
                plen -= sizeof(struct nfp_tun_neigh_ext);
 
+       if (!(priv->flower_ext_feats & NFP_FL_FEATS_TUNNEL_NEIGH_LAG) &&
+           (mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH ||
+            mtype == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6))
+               plen -= sizeof(struct nfp_tun_neigh_lag);
+
        skb = nfp_flower_cmsg_alloc(app, plen, mtype, flag);
        if (!skb)
                return -ENOMEM;
@@ -468,6 +473,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
                                          neigh_table_params);
        if (!nn_entry && !neigh_invalid) {
                struct nfp_tun_neigh_ext *ext;
+               struct nfp_tun_neigh_lag *lag;
                struct nfp_tun_neigh *common;
 
                nn_entry = kzalloc(sizeof(*nn_entry) + neigh_size,
@@ -488,6 +494,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
                        payload->dst_ipv6 = flowi6->daddr;
                        common = &payload->common;
                        ext = &payload->ext;
+                       lag = &payload->lag;
                        mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6;
                } else {
                        struct flowi4 *flowi4 = (struct flowi4 *)flow;
@@ -498,6 +505,7 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
                        payload->dst_ipv4 = flowi4->daddr;
                        common = &payload->common;
                        ext = &payload->ext;
+                       lag = &payload->lag;
                        mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH;
                }
                ext->host_ctx = cpu_to_be32(U32_MAX);
@@ -505,6 +513,9 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
                ext->vlan_tci = cpu_to_be16(U16_MAX);
                ether_addr_copy(common->src_addr, netdev->dev_addr);
                neigh_ha_snapshot(common->dst_addr, neigh, netdev);
+
+               if ((port_id & NFP_FL_LAG_OUT) == NFP_FL_LAG_OUT)
+                       nfp_flower_lag_get_info_from_netdev(app, netdev, lag);
                common->port_id = cpu_to_be32(port_id);
 
                if (rhashtable_insert_fast(&priv->neigh_table,
@@ -547,13 +558,38 @@ nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
                if (nn_entry->flow)
                        list_del(&nn_entry->list_head);
                kfree(nn_entry);
-       } else if (nn_entry && !neigh_invalid && override) {
-               mtype = is_ipv6 ? NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6 :
-                               NFP_FLOWER_CMSG_TYPE_TUN_NEIGH;
-               nfp_tun_link_predt_entries(app, nn_entry);
-               nfp_flower_xmit_tun_conf(app, mtype, neigh_size,
-                                        nn_entry->payload,
-                                        GFP_ATOMIC);
+       } else if (nn_entry && !neigh_invalid) {
+               struct nfp_tun_neigh *common;
+               u8 dst_addr[ETH_ALEN];
+               bool is_mac_change;
+
+               if (is_ipv6) {
+                       struct nfp_tun_neigh_v6 *payload;
+
+                       payload = (struct nfp_tun_neigh_v6 *)nn_entry->payload;
+                       common = &payload->common;
+                       mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6;
+               } else {
+                       struct nfp_tun_neigh_v4 *payload;
+
+                       payload = (struct nfp_tun_neigh_v4 *)nn_entry->payload;
+                       common = &payload->common;
+                       mtype = NFP_FLOWER_CMSG_TYPE_TUN_NEIGH;
+               }
+
+               ether_addr_copy(dst_addr, common->dst_addr);
+               neigh_ha_snapshot(common->dst_addr, neigh, netdev);
+               is_mac_change = !ether_addr_equal(dst_addr, common->dst_addr);
+               if (override || is_mac_change) {
+                       if (is_mac_change && nn_entry->flow) {
+                               list_del(&nn_entry->list_head);
+                               nn_entry->flow = NULL;
+                       }
+                       nfp_tun_link_predt_entries(app, nn_entry);
+                       nfp_flower_xmit_tun_conf(app, mtype, neigh_size,
+                                                nn_entry->payload,
+                                                GFP_ATOMIC);
+               }
        }
 
        spin_unlock_bh(&priv->predt_lock);
@@ -593,8 +629,7 @@ nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
        app_priv = container_of(nb, struct nfp_flower_priv, tun.neigh_nb);
        app = app_priv->app;
 
-       if (!nfp_netdev_is_nfp_repr(n->dev) &&
-           !nfp_flower_internal_port_can_offload(app, n->dev))
+       if (!nfp_flower_get_port_id_from_netdev(app, n->dev))
                return NOTIFY_DONE;
 
 #if IS_ENABLED(CONFIG_INET)
index e66e548..71301db 100644 (file)
@@ -716,16 +716,26 @@ static u64 nfp_net_pf_get_app_cap(struct nfp_pf *pf)
        return val;
 }
 
-static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff)
+static void nfp_pf_cfg_hwinfo(struct nfp_pf *pf)
 {
        struct nfp_nsp *nsp;
        char hwinfo[32];
+       bool sp_indiff;
        int err;
 
        nsp = nfp_nsp_open(pf->cpp);
        if (IS_ERR(nsp))
-               return PTR_ERR(nsp);
+               return;
+
+       if (!nfp_nsp_has_hwinfo_set(nsp))
+               goto end;
 
+       sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
+                   (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
+
+       /* No need to clean `sp_indiff` in the driver; the management
+        * firmware will do it when the application firmware is unloaded.
+        */
        snprintf(hwinfo, sizeof(hwinfo), "sp_indiff=%d", sp_indiff);
        err = nfp_nsp_hwinfo_set(nsp, hwinfo, sizeof(hwinfo));
        /* Not a fatal error, no need to return error to stop driver from loading */
@@ -739,21 +749,8 @@ static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff)
                pf->eth_tbl = __nfp_eth_read_ports(pf->cpp, nsp);
        }
 
+end:
        nfp_nsp_close(nsp);
-       return 0;
-}
-
-static int nfp_pf_nsp_cfg(struct nfp_pf *pf)
-{
-       bool sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) ||
-                        (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF);
-
-       return nfp_pf_cfg_hwinfo(pf, sp_indiff);
-}
-
-static void nfp_pf_nsp_clean(struct nfp_pf *pf)
-{
-       nfp_pf_cfg_hwinfo(pf, false);
 }
 
 static int nfp_pci_probe(struct pci_dev *pdev,
@@ -856,13 +853,11 @@ static int nfp_pci_probe(struct pci_dev *pdev,
                goto err_fw_unload;
        }
 
-       err = nfp_pf_nsp_cfg(pf);
-       if (err)
-               goto err_fw_unload;
+       nfp_pf_cfg_hwinfo(pf);
 
        err = nfp_net_pci_probe(pf);
        if (err)
-               goto err_nsp_clean;
+               goto err_fw_unload;
 
        err = nfp_hwmon_register(pf);
        if (err) {
@@ -874,8 +869,6 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 
 err_net_remove:
        nfp_net_pci_remove(pf);
-err_nsp_clean:
-       nfp_pf_nsp_clean(pf);
 err_fw_unload:
        kfree(pf->rtbl);
        nfp_mip_close(pf->mip);
@@ -915,7 +908,6 @@ static void __nfp_pci_shutdown(struct pci_dev *pdev, bool unload_fw)
 
        nfp_net_pci_remove(pf);
 
-       nfp_pf_nsp_clean(pf);
        vfree(pf->dumpspec);
        kfree(pf->rtbl);
        nfp_mip_close(pf->mip);
index 5d58fd9..19d4848 100644 (file)
@@ -2817,11 +2817,15 @@ err_out:
         * than the full array, but leave the qcq shells in place
         */
        for (i = lif->nxqs; i < lif->ionic->ntxqs_per_lif; i++) {
-               lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
-               ionic_qcq_free(lif, lif->txqcqs[i]);
+               if (lif->txqcqs && lif->txqcqs[i]) {
+                       lif->txqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+                       ionic_qcq_free(lif, lif->txqcqs[i]);
+               }
 
-               lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
-               ionic_qcq_free(lif, lif->rxqcqs[i]);
+               if (lif->rxqcqs && lif->rxqcqs[i]) {
+                       lif->rxqcqs[i]->flags &= ~IONIC_QCQ_F_INTR;
+                       ionic_qcq_free(lif, lif->rxqcqs[i]);
+               }
        }
 
        if (err)
index 023682c..9e59669 100644 (file)
@@ -129,7 +129,7 @@ static int rocker_reg_test(const struct rocker *rocker)
        u64 test_reg;
        u64 rnd;
 
-       rnd = prandom_u32();
+       rnd = get_random_u32();
        rnd >>= 1;
        rocker_write32(rocker, TEST_REG, rnd);
        test_reg = rocker_read32(rocker, TEST_REG);
@@ -139,9 +139,9 @@ static int rocker_reg_test(const struct rocker *rocker)
                return -EIO;
        }
 
-       rnd = prandom_u32();
+       rnd = get_random_u32();
        rnd <<= 31;
-       rnd |= prandom_u32();
+       rnd |= get_random_u32();
        rocker_write64(rocker, TEST_REG64, rnd);
        test_reg = rocker_read64(rocker, TEST_REG64);
        if (test_reg != rnd * 2) {
@@ -224,7 +224,7 @@ static int rocker_dma_test_offset(const struct rocker *rocker,
        if (err)
                goto unmap;
 
-       prandom_bytes(buf, ROCKER_TEST_DMA_BUF_SIZE);
+       get_random_bytes(buf, ROCKER_TEST_DMA_BUF_SIZE);
        for (i = 0; i < ROCKER_TEST_DMA_BUF_SIZE; i++)
                expect[i] = ~buf[i];
        err = rocker_dma_test_one(rocker, wait, ROCKER_TEST_DMA_CTRL_INVERT,
index d1e1aa1..7022fb2 100644 (file)
@@ -3277,6 +3277,30 @@ static int efx_ef10_set_mac_address(struct efx_nic *efx)
        bool was_enabled = efx->port_enabled;
        int rc;
 
+#ifdef CONFIG_SFC_SRIOV
+       /* If this function is a VF and we have access to the parent PF,
+        * then use the PF control path to attempt to change the VF MAC address.
+        */
+       if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
+               struct efx_nic *efx_pf = pci_get_drvdata(efx->pci_dev->physfn);
+               struct efx_ef10_nic_data *nic_data = efx->nic_data;
+               u8 mac[ETH_ALEN];
+
+               /* net_dev->dev_addr can be zeroed by efx_net_stop in
+                * efx_ef10_sriov_set_vf_mac, so pass in a copy.
+                */
+               ether_addr_copy(mac, efx->net_dev->dev_addr);
+
+               rc = efx_ef10_sriov_set_vf_mac(efx_pf, nic_data->vf_index, mac);
+               if (!rc)
+                       return 0;
+
+               netif_dbg(efx, drv, efx->net_dev,
+                         "Updating VF mac via PF failed (%d), setting directly\n",
+                         rc);
+       }
+#endif
+
        efx_device_detach_sync(efx);
        efx_net_stop(efx->net_dev);
 
@@ -3297,40 +3321,6 @@ static int efx_ef10_set_mac_address(struct efx_nic *efx)
                efx_net_open(efx->net_dev);
        efx_device_attach_if_not_resetting(efx);
 
-#ifdef CONFIG_SFC_SRIOV
-       if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) {
-               struct efx_ef10_nic_data *nic_data = efx->nic_data;
-               struct pci_dev *pci_dev_pf = efx->pci_dev->physfn;
-
-               if (rc == -EPERM) {
-                       struct efx_nic *efx_pf;
-
-                       /* Switch to PF and change MAC address on vport */
-                       efx_pf = pci_get_drvdata(pci_dev_pf);
-
-                       rc = efx_ef10_sriov_set_vf_mac(efx_pf,
-                                                      nic_data->vf_index,
-                                                      efx->net_dev->dev_addr);
-               } else if (!rc) {
-                       struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf);
-                       struct efx_ef10_nic_data *nic_data = efx_pf->nic_data;
-                       unsigned int i;
-
-                       /* MAC address successfully changed by VF (with MAC
-                        * spoofing) so update the parent PF if possible.
-                        */
-                       for (i = 0; i < efx_pf->vf_count; ++i) {
-                               struct ef10_vf *vf = nic_data->vf + i;
-
-                               if (vf->efx == efx) {
-                                       ether_addr_copy(vf->mac,
-                                                       efx->net_dev->dev_addr);
-                                       return 0;
-                               }
-                       }
-               }
-       } else
-#endif
        if (rc == -EPERM) {
                netif_err(efx, drv, efx->net_dev,
                          "Cannot change MAC address; use sfboot to enable"
index 135ece2..702abbe 100644 (file)
@@ -43,8 +43,6 @@ const struct ethtool_ops ef100_ethtool_ops = {
        .get_pauseparam         = efx_ethtool_get_pauseparam,
        .set_pauseparam         = efx_ethtool_set_pauseparam,
        .get_sset_count         = efx_ethtool_get_sset_count,
-       .get_priv_flags         = efx_ethtool_get_priv_flags,
-       .set_priv_flags         = efx_ethtool_set_priv_flags,
        .self_test              = efx_ethtool_self_test,
        .get_strings            = efx_ethtool_get_strings,
        .get_link_ksettings     = efx_ethtool_get_link_ksettings,
index 6649a23..a8cbcee 100644 (file)
@@ -101,14 +101,6 @@ static const struct efx_sw_stat_desc efx_sw_stat_desc[] = {
 
 #define EFX_ETHTOOL_SW_STAT_COUNT ARRAY_SIZE(efx_sw_stat_desc)
 
-static const char efx_ethtool_priv_flags_strings[][ETH_GSTRING_LEN] = {
-       "log-tc-errors",
-};
-
-#define EFX_ETHTOOL_PRIV_FLAGS_LOG_TC_ERRS             BIT(0)
-
-#define EFX_ETHTOOL_PRIV_FLAGS_COUNT ARRAY_SIZE(efx_ethtool_priv_flags_strings)
-
 void efx_ethtool_get_drvinfo(struct net_device *net_dev,
                             struct ethtool_drvinfo *info)
 {
@@ -460,8 +452,6 @@ int efx_ethtool_get_sset_count(struct net_device *net_dev, int string_set)
                       efx_ptp_describe_stats(efx, NULL);
        case ETH_SS_TEST:
                return efx_ethtool_fill_self_tests(efx, NULL, NULL, NULL);
-       case ETH_SS_PRIV_FLAGS:
-               return EFX_ETHTOOL_PRIV_FLAGS_COUNT;
        default:
                return -EINVAL;
        }
@@ -488,39 +478,12 @@ void efx_ethtool_get_strings(struct net_device *net_dev,
        case ETH_SS_TEST:
                efx_ethtool_fill_self_tests(efx, NULL, strings, NULL);
                break;
-       case ETH_SS_PRIV_FLAGS:
-               for (i = 0; i < EFX_ETHTOOL_PRIV_FLAGS_COUNT; i++)
-                       strscpy(strings + i * ETH_GSTRING_LEN,
-                               efx_ethtool_priv_flags_strings[i],
-                               ETH_GSTRING_LEN);
-               break;
        default:
                /* No other string sets */
                break;
        }
 }
 
-u32 efx_ethtool_get_priv_flags(struct net_device *net_dev)
-{
-       struct efx_nic *efx = efx_netdev_priv(net_dev);
-       u32 ret_flags = 0;
-
-       if (efx->log_tc_errs)
-               ret_flags |= EFX_ETHTOOL_PRIV_FLAGS_LOG_TC_ERRS;
-
-       return ret_flags;
-}
-
-int efx_ethtool_set_priv_flags(struct net_device *net_dev, u32 flags)
-{
-       struct efx_nic *efx = efx_netdev_priv(net_dev);
-
-       efx->log_tc_errs =
-               !!(flags & EFX_ETHTOOL_PRIV_FLAGS_LOG_TC_ERRS);
-
-       return 0;
-}
-
 void efx_ethtool_get_stats(struct net_device *net_dev,
                           struct ethtool_stats *stats,
                           u64 *data)
index 0afc740..6594919 100644 (file)
@@ -27,8 +27,6 @@ int efx_ethtool_fill_self_tests(struct efx_nic *efx,
 int efx_ethtool_get_sset_count(struct net_device *net_dev, int string_set);
 void efx_ethtool_get_strings(struct net_device *net_dev, u32 string_set,
                             u8 *strings);
-u32 efx_ethtool_get_priv_flags(struct net_device *net_dev);
-int efx_ethtool_set_priv_flags(struct net_device *net_dev, u32 flags);
 void efx_ethtool_get_stats(struct net_device *net_dev,
                           struct ethtool_stats *stats __attribute__ ((unused)),
                           u64 *data);
index be72e71..5f201a5 100644 (file)
@@ -162,9 +162,9 @@ struct efx_filter_spec {
        u32     priority:2;
        u32     flags:6;
        u32     dmaq_id:12;
-       u32     vport_id;
        u32     rss_context;
-       __be16  outer_vid __aligned(4); /* allow jhash2() of match values */
+       u32     vport_id;
+       __be16  outer_vid;
        __be16  inner_vid;
        u8      loc_mac[ETH_ALEN];
        u8      rem_mac[ETH_ALEN];
index 874c765..6f472ea 100644 (file)
@@ -265,9 +265,8 @@ int efx_mae_match_check_caps(struct efx_nic *efx,
        rc = efx_mae_match_check_cap_typ(supported_fields[MAE_FIELD_INGRESS_PORT],
                                         ingress_port_mask_type);
        if (rc) {
-               efx_tc_err(efx, "No support for %s mask in field ingress_port\n",
-                          mask_type_name(ingress_port_mask_type));
-               NL_SET_ERR_MSG_MOD(extack, "Unsupported mask type for ingress_port");
+               NL_SET_ERR_MSG_FMT_MOD(extack, "No support for %s mask in field ingress_port",
+                                      mask_type_name(ingress_port_mask_type));
                return rc;
        }
        return 0;
index 2e9ba0c..7ef823d 100644 (file)
@@ -855,7 +855,6 @@ enum efx_xdp_tx_queues_mode {
  * @timer_max_ns: Interrupt timer maximum value, in nanoseconds
  * @irq_rx_adaptive: Adaptive IRQ moderation enabled for RX event queues
  * @irqs_hooked: Channel interrupts are hooked
- * @log_tc_errs: Error logging for TC filter insertion is enabled
  * @irq_rx_mod_step_us: Step size for IRQ moderation for RX event queues
  * @irq_rx_moderation_us: IRQ moderation time for RX event queues
  * @msg_enable: Log message enable flags
@@ -1018,7 +1017,6 @@ struct efx_nic {
        unsigned int timer_max_ns;
        bool irq_rx_adaptive;
        bool irqs_hooked;
-       bool log_tc_errs;
        unsigned int irq_mod_step_us;
        unsigned int irq_rx_moderation_us;
        u32 msg_enable;
index 4826e6a..9220afe 100644 (file)
@@ -660,17 +660,17 @@ bool efx_filter_spec_equal(const struct efx_filter_spec *left,
             (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
                return false;
 
-       return memcmp(&left->outer_vid, &right->outer_vid,
+       return memcmp(&left->vport_id, &right->vport_id,
                      sizeof(struct efx_filter_spec) -
-                     offsetof(struct efx_filter_spec, outer_vid)) == 0;
+                     offsetof(struct efx_filter_spec, vport_id)) == 0;
 }
 
 u32 efx_filter_spec_hash(const struct efx_filter_spec *spec)
 {
-       BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
-       return jhash2((const u32 *)&spec->outer_vid,
+       BUILD_BUG_ON(offsetof(struct efx_filter_spec, vport_id) & 3);
+       return jhash2((const u32 *)&spec->vport_id,
                      (sizeof(struct efx_filter_spec) -
-                      offsetof(struct efx_filter_spec, outer_vid)) / 4,
+                      offsetof(struct efx_filter_spec, vport_id)) / 4,
                      0);
 }
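The field reorder above moves `vport_id` ahead of `outer_vid` so that both `efx_filter_spec_equal()` and `efx_filter_spec_hash()` operate on one contiguous tail of the struct starting at `vport_id`. A userspace sketch of this pattern, with a simplified struct and FNV-1a standing in for the kernel's `jhash2()` (all names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for struct efx_filter_spec: everything from
 * vport_id to the end participates in matching, so equality and
 * hashing can both run over that tail as a single block.
 */
struct filter_spec {
	uint32_t priority;	/* not part of the match */
	uint32_t vport_id;	/* first matched field */
	uint16_t outer_vid;
	uint16_t inner_vid;
	uint8_t  loc_mac[6];
	uint8_t  rem_mac[6];
};

/* Mirrors the BUILD_BUG_ON in the patch: the tail must start on a
 * 4-byte boundary so it can be hashed as an array of 32-bit words.
 */
_Static_assert(offsetof(struct filter_spec, vport_id) % 4 == 0,
	       "hash tail must be 32-bit aligned");

/* FNV-1a over 32-bit words: a userspace stand-in for jhash2(). */
static uint32_t hash_words(const uint32_t *w, size_t n)
{
	uint32_t h = 2166136261u;

	while (n--) {
		h ^= *w++;
		h *= 16777619u;
	}
	return h;
}

static uint32_t filter_spec_hash(const struct filter_spec *spec)
{
	size_t tail = sizeof(*spec) - offsetof(struct filter_spec, vport_id);

	return hash_words((const uint32_t *)&spec->vport_id, tail / 4);
}

static int filter_spec_equal(const struct filter_spec *a,
			     const struct filter_spec *b)
{
	return memcmp(&a->vport_id, &b->vport_id,
		      sizeof(*a) - offsetof(struct filter_spec, vport_id)) == 0;
}
```

Because `priority` sits before `vport_id`, it is deliberately outside both the comparison and the hash window, just as the non-match fields are in the real driver.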
 
index 3478860..b21a961 100644 (file)
@@ -137,17 +137,16 @@ static int efx_tc_flower_parse_match(struct efx_nic *efx,
                flow_rule_match_control(rule, &fm);
 
                if (fm.mask->flags) {
-                       efx_tc_err(efx, "Unsupported match on control.flags %#x\n",
-                                  fm.mask->flags);
-                       NL_SET_ERR_MSG_MOD(extack, "Unsupported match on control.flags");
+                       NL_SET_ERR_MSG_FMT_MOD(extack, "Unsupported match on control.flags %#x",
+                                              fm.mask->flags);
                        return -EOPNOTSUPP;
                }
        }
        if (dissector->used_keys &
            ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
              BIT(FLOW_DISSECTOR_KEY_BASIC))) {
-               efx_tc_err(efx, "Unsupported flower keys %#x\n", dissector->used_keys);
-               NL_SET_ERR_MSG_MOD(extack, "Unsupported flower keys encountered");
+               NL_SET_ERR_MSG_FMT_MOD(extack, "Unsupported flower keys %#x",
+                                      dissector->used_keys);
                return -EOPNOTSUPP;
        }
 
@@ -156,11 +155,11 @@ static int efx_tc_flower_parse_match(struct efx_nic *efx,
 
                flow_rule_match_basic(rule, &fm);
                if (fm.mask->n_proto) {
-                       EFX_TC_ERR_MSG(efx, extack, "Unsupported eth_proto match\n");
+                       NL_SET_ERR_MSG_MOD(extack, "Unsupported eth_proto match");
                        return -EOPNOTSUPP;
                }
                if (fm.mask->ip_proto) {
-                       EFX_TC_ERR_MSG(efx, extack, "Unsupported ip_proto match\n");
+                       NL_SET_ERR_MSG_MOD(extack, "Unsupported ip_proto match");
                        return -EOPNOTSUPP;
                }
        }
@@ -200,13 +199,9 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
 
        if (efv != from_efv) {
                /* can't happen */
-               efx_tc_err(efx, "for %s efv is %snull but from_efv is %snull\n",
-                          netdev_name(net_dev), efv ? "non-" : "",
-                          from_efv ? "non-" : "");
-               if (efv)
-                       NL_SET_ERR_MSG_MOD(extack, "vfrep filter has PF net_dev (can't happen)");
-               else
-                       NL_SET_ERR_MSG_MOD(extack, "PF filter has vfrep net_dev (can't happen)");
+               NL_SET_ERR_MSG_FMT_MOD(extack, "for %s efv is %snull but from_efv is %snull (can't happen)",
+                                      netdev_name(net_dev), efv ? "non-" : "",
+                                      from_efv ? "non-" : "");
                return -EINVAL;
        }
 
@@ -214,7 +209,7 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
        memset(&match, 0, sizeof(match));
        rc = efx_tc_flower_external_mport(efx, from_efv);
        if (rc < 0) {
-               EFX_TC_ERR_MSG(efx, extack, "Failed to identify ingress m-port");
+               NL_SET_ERR_MSG_MOD(extack, "Failed to identify ingress m-port");
                return rc;
        }
        match.value.ingress_port = rc;
@@ -224,7 +219,7 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
                return rc;
 
        if (tc->common.chain_index) {
-               EFX_TC_ERR_MSG(efx, extack, "No support for nonzero chain_index");
+               NL_SET_ERR_MSG_MOD(extack, "No support for nonzero chain_index");
                return -EOPNOTSUPP;
        }
        match.mask.recirc_id = 0xff;
@@ -261,7 +256,7 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
 
                if (!act) {
                        /* more actions after a non-pipe action */
-                       EFX_TC_ERR_MSG(efx, extack, "Action follows non-pipe action");
+                       NL_SET_ERR_MSG_MOD(extack, "Action follows non-pipe action");
                        rc = -EINVAL;
                        goto release;
                }
@@ -270,7 +265,7 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
                case FLOW_ACTION_DROP:
                        rc = efx_mae_alloc_action_set(efx, act);
                        if (rc) {
-                               EFX_TC_ERR_MSG(efx, extack, "Failed to write action set to hw (drop)");
+                               NL_SET_ERR_MSG_MOD(extack, "Failed to write action set to hw (drop)");
                                goto release;
                        }
                        list_add_tail(&act->list, &rule->acts.list);
@@ -281,20 +276,20 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
                        save = *act;
                        to_efv = efx_tc_flower_lookup_efv(efx, fa->dev);
                        if (IS_ERR(to_efv)) {
-                               EFX_TC_ERR_MSG(efx, extack, "Mirred egress device not on switch");
+                               NL_SET_ERR_MSG_MOD(extack, "Mirred egress device not on switch");
                                rc = PTR_ERR(to_efv);
                                goto release;
                        }
                        rc = efx_tc_flower_external_mport(efx, to_efv);
                        if (rc < 0) {
-                               EFX_TC_ERR_MSG(efx, extack, "Failed to identify egress m-port");
+                               NL_SET_ERR_MSG_MOD(extack, "Failed to identify egress m-port");
                                goto release;
                        }
                        act->dest_mport = rc;
                        act->deliver = 1;
                        rc = efx_mae_alloc_action_set(efx, act);
                        if (rc) {
-                               EFX_TC_ERR_MSG(efx, extack, "Failed to write action set to hw (mirred)");
+                               NL_SET_ERR_MSG_MOD(extack, "Failed to write action set to hw (mirred)");
                                goto release;
                        }
                        list_add_tail(&act->list, &rule->acts.list);
@@ -310,9 +305,9 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
                        *act = save;
                        break;
                default:
-                       efx_tc_err(efx, "Unhandled action %u\n", fa->id);
+                       NL_SET_ERR_MSG_FMT_MOD(extack, "Unhandled action %u",
+                                              fa->id);
                        rc = -EOPNOTSUPP;
-                       NL_SET_ERR_MSG_MOD(extack, "Unsupported action");
                        goto release;
                }
        }
@@ -334,7 +329,7 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
                act->deliver = 1;
                rc = efx_mae_alloc_action_set(efx, act);
                if (rc) {
-                       EFX_TC_ERR_MSG(efx, extack, "Failed to write action set to hw (deliver)");
+                       NL_SET_ERR_MSG_MOD(extack, "Failed to write action set to hw (deliver)");
                        goto release;
                }
                list_add_tail(&act->list, &rule->acts.list);
@@ -349,13 +344,13 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
 
        rc = efx_mae_alloc_action_set_list(efx, &rule->acts);
        if (rc) {
-               EFX_TC_ERR_MSG(efx, extack, "Failed to write action set list to hw");
+               NL_SET_ERR_MSG_MOD(extack, "Failed to write action set list to hw");
                goto release;
        }
        rc = efx_mae_insert_rule(efx, &rule->match, EFX_TC_PRIO_TC,
                                 rule->acts.fw_id, &rule->fw_id);
        if (rc) {
-               EFX_TC_ERR_MSG(efx, extack, "Failed to insert rule in hw");
+               NL_SET_ERR_MSG_MOD(extack, "Failed to insert rule in hw");
                goto release_acts;
        }
        return 0;
index 196fd74..4373c32 100644 (file)
 #include <linux/rhashtable.h>
 #include "net_driver.h"
 
-/* Error reporting: convenience macros.  For indicating why a given filter
- * insertion is not supported; errors in internal operation or in the
- * hardware should be netif_err()s instead.
- */
-/* Used when error message is constant. */
-#define EFX_TC_ERR_MSG(efx, extack, message)   do {                    \
-       NL_SET_ERR_MSG_MOD(extack, message);                            \
-       if (efx->log_tc_errs)                                           \
-               netif_info(efx, drv, efx->net_dev, "%s\n", message);    \
-} while (0)
-/* Used when error message is not constant; caller should also supply a
- * constant extack message with NL_SET_ERR_MSG_MOD().
- */
-#define efx_tc_err(efx, fmt, args...)  do {            \
-if (efx->log_tc_errs)                                  \
-       netif_info(efx, drv, efx->net_dev, fmt, ##args);\
-} while (0)
-
 struct efx_tc_action_set {
        u16 deliver:1;
        u32 dest_mport;
index 2240f6d..9b46579 100644 (file)
@@ -1961,11 +1961,13 @@ static int netsec_register_mdio(struct netsec_priv *priv, u32 phy_addr)
                        ret = PTR_ERR(priv->phydev);
                        dev_err(priv->dev, "get_phy_device err(%d)\n", ret);
                        priv->phydev = NULL;
+                       mdiobus_unregister(bus);
                        return -ENODEV;
                }
 
                ret = phy_device_register(priv->phydev);
                if (ret) {
+                       phy_device_free(priv->phydev);
                        mdiobus_unregister(bus);
                        dev_err(priv->dev,
                                "phy_device_register err(%d)\n", ret);
index 65c9677..8273e6a 100644 (file)
@@ -1214,6 +1214,7 @@ static int stmmac_phy_setup(struct stmmac_priv *priv)
        if (priv->plat->tx_queues_to_use > 1)
                priv->phylink_config.mac_capabilities &=
                        ~(MAC_10HD | MAC_100HD | MAC_1000HD);
+       priv->phylink_config.mac_managed_pm = true;
 
        phylink = phylink_create(&priv->phylink_config, fwnode,
                                 mode, &stmmac_phylink_mac_ops);
index 91f10f7..1c16548 100644 (file)
@@ -1328,7 +1328,7 @@ static int happy_meal_init(struct happy_meal *hp)
        void __iomem *erxregs      = hp->erxregs;
        void __iomem *bregs        = hp->bigmacregs;
        void __iomem *tregs        = hp->tcvregs;
-       const char *bursts;
+       const char *bursts = "64";
        u32 regtmp, rxcfg;
 
        /* If auto-negotiation timer is running, kill it. */
index 3e69079..791b4a5 100644 (file)
@@ -438,7 +438,7 @@ static int transmit(struct baycom_state *bc, int cnt, unsigned char stat)
                        if ((--bc->hdlctx.slotcnt) > 0)
                                return 0;
                        bc->hdlctx.slotcnt = bc->ch_params.slottime;
-                       if ((prandom_u32() % 256) > bc->ch_params.ppersist)
+                       if (get_random_u8() > bc->ch_params.ppersist)
                                return 0;
                }
        }
index a6184d6..2263029 100644 (file)
@@ -377,7 +377,7 @@ void hdlcdrv_arbitrate(struct net_device *dev, struct hdlcdrv_state *s)
        if ((--s->hdlctx.slotcnt) > 0)
                return;
        s->hdlctx.slotcnt = s->ch_params.slottime;
-       if ((prandom_u32() % 256) > s->ch_params.ppersist)
+       if (get_random_u8() > s->ch_params.ppersist)
                return;
        start_tx(dev, s);
 }
index 980f2be..2ed2f83 100644 (file)
@@ -626,7 +626,7 @@ static void yam_arbitrate(struct net_device *dev)
        yp->slotcnt = yp->slot / 10;
 
        /* is random > persist ? */
-       if ((prandom_u32() % 256) > yp->pers)
+       if (get_random_u8() > yp->pers)
                return;
 
        yam_start_tx(dev, yp);
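The three ham-radio drivers above (baycom_epp, hdlcdrv, yam) all make the same p-persistence decision: draw a uniform random byte and transmit only if it does not exceed the configured persistence value, so `get_random_u8()` replaces the `prandom_u32() % 256` truncation directly. A minimal userspace sketch of the decision (function name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* p-persistent CSMA decision.  The kernel code reads
 *     if (get_random_u8() > ppersist)
 *             return;          // back off this slot
 * i.e. transmission proceeds only when the random byte is less
 * than or equal to the persistence parameter (0..255).
 */
static int may_transmit(uint8_t random_byte, uint8_t ppersist)
{
	return random_byte <= ppersist;
}
```

With `ppersist = 255` the station always transmits when the channel is free; with `ppersist = 0` it transmits roughly once in 256 slots.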
index 11f767a..eea777e 100644 (file)
@@ -20,6 +20,7 @@
 #include <linux/vmalloc.h>
 #include <linux/rtnetlink.h>
 #include <linux/ucs2_string.h>
+#include <linux/string.h>
 
 #include "hyperv_net.h"
 #include "netvsc_trace.h"
@@ -335,9 +336,10 @@ static void rndis_filter_receive_response(struct net_device *ndev,
                if (resp->msg_len <=
                    sizeof(struct rndis_message) + RNDIS_EXT_LEN) {
                        memcpy(&request->response_msg, resp, RNDIS_HEADER_SIZE + sizeof(*req_id));
-                       memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
+                       unsafe_memcpy((void *)&request->response_msg + RNDIS_HEADER_SIZE + sizeof(*req_id),
                               data + RNDIS_HEADER_SIZE + sizeof(*req_id),
-                              resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id));
+                              resp->msg_len - RNDIS_HEADER_SIZE - sizeof(*req_id),
+                              "request->response_msg is followed by a padding of RNDIS_EXT_LEN inside rndis_request");
                        if (request->request_msg.ndis_msg_type ==
                            RNDIS_MSG_QUERY && request->request_msg.msg.
                            query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS)
index 26b7f68..0f52c06 100644 (file)
@@ -87,6 +87,7 @@ struct gsi_tre {
 int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
                        u32 max_alloc)
 {
+       size_t alloc_size;
        void *virt;
 
        if (!size)
@@ -103,13 +104,15 @@ int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
         * If there aren't enough entries starting at the free index,
         * we just allocate free entries from the beginning of the pool.
         */
-       virt = kcalloc(count + max_alloc - 1, size, GFP_KERNEL);
+       alloc_size = size_mul(count + max_alloc - 1, size);
+       alloc_size = kmalloc_size_roundup(alloc_size);
+       virt = kzalloc(alloc_size, GFP_KERNEL);
        if (!virt)
                return -ENOMEM;
 
        pool->base = virt;
        /* If the allocator gave us any extra memory, use it */
-       pool->count = ksize(pool->base) / size;
+       pool->count = alloc_size / size;
        pool->free = 0;
        pool->max_alloc = max_alloc;
        pool->size = size;
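The `gsi_trans_pool_init()` change above drops the `ksize()` probe in favor of an overflow-checked multiply plus an up-front round-up of the allocation size, then derives the usable entry count from the rounded size. A userspace approximation of the arithmetic — `size_mul_checked()` and `size_roundup_pow2()` are simplified stand-ins for the kernel's `size_mul()` and `kmalloc_size_roundup()` (the real slab buckets also include 96- and 192-byte sizes):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Overflow-checked multiply: saturates to SIZE_MAX, like size_mul(). */
static size_t size_mul_checked(size_t a, size_t b)
{
	size_t r = a * b;

	if (a && r / a != b)
		return SIZE_MAX;
	return r;
}

/* Round up to the next power-of-two allocation bucket; a rough
 * model of kmalloc_size_roundup().
 */
static size_t size_roundup_pow2(size_t size)
{
	size_t bucket = 8;

	while (bucket < size)
		bucket <<= 1;
	return bucket;
}

/* How many fixed-size entries fit after rounding, mirroring
 * pool->count = alloc_size / size in the patch.
 */
static size_t pool_entry_count(size_t count, size_t max_alloc, size_t size)
{
	size_t alloc_size = size_mul_checked(count + max_alloc - 1, size);

	return size_roundup_pow2(alloc_size) / size;
}
```

The point of the patch is that the "use any extra memory the allocator gave us" logic now works from a size the driver computed itself, instead of querying `ksize()` after the fact.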
index 26c3db9..de2cd86 100644 (file)
@@ -171,7 +171,8 @@ static void ipa_cmd_validate_build(void)
 }
 
 /* Validate a memory region holding a table */
-bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, bool route)
+bool ipa_cmd_table_init_valid(struct ipa *ipa, const struct ipa_mem *mem,
+                             bool route)
 {
        u32 offset_max = field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
        u32 size_max = field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
@@ -197,21 +198,11 @@ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, bool route)
                return false;
        }
 
-       /* Entire memory range must fit within IPA-local memory */
-       if (mem->offset > ipa->mem_size ||
-           mem->size > ipa->mem_size - mem->offset) {
-               dev_err(dev, "%s table region out of range\n", table);
-               dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
-                       mem->offset, mem->size, ipa->mem_size);
-
-               return false;
-       }
-
        return true;
 }
 
 /* Validate the memory region that holds headers */
-static bool ipa_cmd_header_valid(struct ipa *ipa)
+static bool ipa_cmd_header_init_local_valid(struct ipa *ipa)
 {
        struct device *dev = &ipa->pdev->dev;
        const struct ipa_mem *mem;
@@ -257,15 +248,6 @@ static bool ipa_cmd_header_valid(struct ipa *ipa)
                return false;
        }
 
-       /* Make sure the entire combined area fits in IPA memory */
-       if (size > ipa->mem_size || offset > ipa->mem_size - size) {
-               dev_err(dev, "header table region out of range\n");
-               dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
-                       offset, size, ipa->mem_size);
-
-               return false;
-       }
-
        return true;
 }
 
@@ -336,26 +318,11 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
        return true;
 }
 
-bool ipa_cmd_data_valid(struct ipa *ipa)
-{
-       if (!ipa_cmd_header_valid(ipa))
-               return false;
-
-       if (!ipa_cmd_register_write_valid(ipa))
-               return false;
-
-       return true;
-}
-
-
 int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
 {
        struct gsi_trans_info *trans_info = &channel->trans_info;
        struct device *dev = channel->gsi->dev;
 
-       /* This is as good a place as any to validate build constants */
-       ipa_cmd_validate_build();
-
        /* Command payloads are allocated one at a time, but a single
         * transaction can require up to the maximum supported by the
         * channel; treat them as if they were allocated all at once.
@@ -655,3 +622,17 @@ struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
        return gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
                                       tre_count, DMA_NONE);
 }
+
+/* Init function for immediate commands; there is no ipa_cmd_exit() */
+int ipa_cmd_init(struct ipa *ipa)
+{
+       ipa_cmd_validate_build();
+
+       if (!ipa_cmd_header_init_local_valid(ipa))
+               return -EINVAL;
+
+       if (!ipa_cmd_register_write_valid(ipa))
+               return -EINVAL;
+
+       return 0;
+}
index 8e4243c..e2cf1c2 100644 (file)
@@ -47,15 +47,15 @@ enum ipa_cmd_opcode {
 };
 
 /**
- * ipa_cmd_table_valid() - Validate a memory region holding a table
+ * ipa_cmd_table_init_valid() - Validate a memory region holding a table
  * @ipa:       - IPA pointer
  * @mem:       - IPA memory region descriptor
  * @route:     - Whether the region holds a route or filter table
  *
  * Return:     true if region is valid, false otherwise
  */
-bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
-                           bool route);
+bool ipa_cmd_table_init_valid(struct ipa *ipa, const struct ipa_mem *mem,
+                             bool route);
 
 /**
  * ipa_cmd_data_valid() - Validate command-related configuration is valid
@@ -162,4 +162,14 @@ void ipa_cmd_pipeline_clear_wait(struct ipa *ipa);
  */
 struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
 
+/**
+ * ipa_cmd_init() - Initialize IPA immediate commands
+ * @ipa:       - IPA pointer
+ *
+ * Return:     0 if successful, or a negative error code
+ *
+ * There is no need for a matching ipa_cmd_exit() function.
+ */
+int ipa_cmd_init(struct ipa *ipa);
+
 #endif /* _IPA_CMD_H_ */
index f84c683..a3d2317 100644 (file)
@@ -366,14 +366,6 @@ int ipa_mem_config(struct ipa *ipa)
                while (--canary_count);
        }
 
-       /* Make sure filter and route table memory regions are valid */
-       if (!ipa_table_valid(ipa))
-               goto err_dma_free;
-
-       /* Validate memory-related properties relevant to immediate commands */
-       if (!ipa_cmd_data_valid(ipa))
-               goto err_dma_free;
-
        /* Verify the microcontroller ring alignment (if defined) */
        mem = ipa_mem_find(ipa, IPA_MEM_UC_EVENT_RING);
        if (mem && mem->offset % 1024) {
@@ -625,6 +617,12 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
        ipa->mem_count = mem_data->local_count;
        ipa->mem = mem_data->local;
 
+       /* Check the route and filter table memory regions */
+       if (!ipa_table_mem_valid(ipa, 0))
+               return -EINVAL;
+       if (!ipa_table_mem_valid(ipa, IPA_ROUTE_MODEM_COUNT))
+               return -EINVAL;
+
        ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
        if (ret) {
                dev_err(dev, "error %d setting DMA mask\n", ret);
index 97c0bef..894f995 100644 (file)
@@ -9,7 +9,7 @@
 #include "ipa_qmi_msg.h"
 
 /* QMI message structure definition for struct ipa_indication_register_req */
-struct qmi_elem_info ipa_indication_register_req_ei[] = {
+const struct qmi_elem_info ipa_indication_register_req_ei[] = {
        {
                .data_type      = QMI_OPT_FLAG,
                .elem_len       = 1,
@@ -116,7 +116,7 @@ struct qmi_elem_info ipa_indication_register_req_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_indication_register_rsp */
-struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
+const struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
        {
                .data_type      = QMI_STRUCT,
                .elem_len       = 1,
@@ -134,7 +134,7 @@ struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_driver_init_complete_req */
-struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
+const struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
        {
                .data_type      = QMI_UNSIGNED_1_BYTE,
                .elem_len       = 1,
@@ -151,7 +151,7 @@ struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_driver_init_complete_rsp */
-struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
+const struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
        {
                .data_type      = QMI_STRUCT,
                .elem_len       = 1,
@@ -169,7 +169,7 @@ struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_init_complete_ind */
-struct qmi_elem_info ipa_init_complete_ind_ei[] = {
+const struct qmi_elem_info ipa_init_complete_ind_ei[] = {
        {
                .data_type      = QMI_STRUCT,
                .elem_len       = 1,
@@ -187,7 +187,7 @@ struct qmi_elem_info ipa_init_complete_ind_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_mem_bounds */
-struct qmi_elem_info ipa_mem_bounds_ei[] = {
+const struct qmi_elem_info ipa_mem_bounds_ei[] = {
        {
                .data_type      = QMI_UNSIGNED_4_BYTE,
                .elem_len       = 1,
@@ -208,7 +208,7 @@ struct qmi_elem_info ipa_mem_bounds_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_mem_array */
-struct qmi_elem_info ipa_mem_array_ei[] = {
+const struct qmi_elem_info ipa_mem_array_ei[] = {
        {
                .data_type      = QMI_UNSIGNED_4_BYTE,
                .elem_len       = 1,
@@ -229,7 +229,7 @@ struct qmi_elem_info ipa_mem_array_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_mem_range */
-struct qmi_elem_info ipa_mem_range_ei[] = {
+const struct qmi_elem_info ipa_mem_range_ei[] = {
        {
                .data_type      = QMI_UNSIGNED_4_BYTE,
                .elem_len       = 1,
@@ -250,7 +250,7 @@ struct qmi_elem_info ipa_mem_range_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_init_modem_driver_req */
-struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+const struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
        {
                .data_type      = QMI_OPT_FLAG,
                .elem_len       = 1,
@@ -645,7 +645,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
 };
 
 /* QMI message structure definition for struct ipa_init_modem_driver_rsp */
-struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = {
+const struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = {
        {
                .data_type      = QMI_STRUCT,
                .elem_len       = 1,
index e296639..b735035 100644 (file)
@@ -247,15 +247,15 @@ struct ipa_init_modem_driver_rsp {
 };
 
 /* Message structure definitions defined in "ipa_qmi_msg.c" */
-extern struct qmi_elem_info ipa_indication_register_req_ei[];
-extern struct qmi_elem_info ipa_indication_register_rsp_ei[];
-extern struct qmi_elem_info ipa_driver_init_complete_req_ei[];
-extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[];
-extern struct qmi_elem_info ipa_init_complete_ind_ei[];
-extern struct qmi_elem_info ipa_mem_bounds_ei[];
-extern struct qmi_elem_info ipa_mem_array_ei[];
-extern struct qmi_elem_info ipa_mem_range_ei[];
-extern struct qmi_elem_info ipa_init_modem_driver_req_ei[];
-extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[];
+extern const struct qmi_elem_info ipa_indication_register_req_ei[];
+extern const struct qmi_elem_info ipa_indication_register_rsp_ei[];
+extern const struct qmi_elem_info ipa_driver_init_complete_req_ei[];
+extern const struct qmi_elem_info ipa_driver_init_complete_rsp_ei[];
+extern const struct qmi_elem_info ipa_init_complete_ind_ei[];
+extern const struct qmi_elem_info ipa_mem_bounds_ei[];
+extern const struct qmi_elem_info ipa_mem_array_ei[];
+extern const struct qmi_elem_info ipa_mem_range_ei[];
+extern const struct qmi_elem_info ipa_init_modem_driver_req_ei[];
+extern const struct qmi_elem_info ipa_init_modem_driver_rsp_ei[];
 
 #endif /* !_IPA_QMI_MSG_H_ */
index 510ff2d..58a1a9d 100644 (file)
  *                 ----------------------
  */
 
-/* Assignment of route table entries to the modem and AP */
-#define IPA_ROUTE_MODEM_MIN            0
-#define IPA_ROUTE_AP_MIN               IPA_ROUTE_MODEM_COUNT
-#define IPA_ROUTE_AP_COUNT \
-               (IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
-
 /* Filter or route rules consist of a set of 32-bit values followed by a
  * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
  * all-zero rule followed by the list terminator.
@@ -140,63 +134,25 @@ static void ipa_table_validate_build(void)
        BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
        /* The modem must be allotted at least one route table entry */
        BUILD_BUG_ON(!IPA_ROUTE_MODEM_COUNT);
-       /* But it can't have more than what is available */
-       BUILD_BUG_ON(IPA_ROUTE_MODEM_COUNT > IPA_ROUTE_COUNT_MAX);
-
+       /* AP must too, but we can't use more than what is available */
+       BUILD_BUG_ON(IPA_ROUTE_MODEM_COUNT >= IPA_ROUTE_COUNT_MAX);
 }
 
-static bool
-ipa_table_valid_one(struct ipa *ipa, enum ipa_mem_id mem_id, bool route)
+static const struct ipa_mem *
+ipa_table_mem(struct ipa *ipa, bool filter, bool hashed, bool ipv6)
 {
-       const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
-       struct device *dev = &ipa->pdev->dev;
-       u32 size;
-
-       if (route)
-               size = IPA_ROUTE_COUNT_MAX * sizeof(__le64);
-       else
-               size = (1 + IPA_FILTER_COUNT_MAX) * sizeof(__le64);
-
-       if (!ipa_cmd_table_valid(ipa, mem, route))
-               return false;
-
-       /* mem->size >= size is sufficient, but we'll demand more */
-       if (mem->size == size)
-               return true;
-
-       /* Hashed table regions can be zero size if hashing is not supported */
-       if (ipa_table_hash_support(ipa) && !mem->size)
-               return true;
-
-       dev_err(dev, "%s table region %u size 0x%02x, expected 0x%02x\n",
-               route ? "route" : "filter", mem_id, mem->size, size);
-
-       return false;
-}
-
-/* Verify the filter and route table memory regions are the expected size */
-bool ipa_table_valid(struct ipa *ipa)
-{
-       bool valid;
-
-       valid = ipa_table_valid_one(ipa, IPA_MEM_V4_FILTER, false);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_FILTER, false);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_ROUTE, true);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_ROUTE, true);
-
-       if (!ipa_table_hash_support(ipa))
-               return valid;
-
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_FILTER_HASHED,
-                                            false);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_FILTER_HASHED,
-                                            false);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_ROUTE_HASHED,
-                                            true);
-       valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_ROUTE_HASHED,
-                                            true);
-
-       return valid;
+       enum ipa_mem_id mem_id;
+
+       mem_id = filter ? hashed ? ipv6 ? IPA_MEM_V6_FILTER_HASHED
+                                       : IPA_MEM_V4_FILTER_HASHED
+                                : ipv6 ? IPA_MEM_V6_FILTER
+                                       : IPA_MEM_V4_FILTER
+                       : hashed ? ipv6 ? IPA_MEM_V6_ROUTE_HASHED
+                                       : IPA_MEM_V4_ROUTE_HASHED
+                                : ipv6 ? IPA_MEM_V6_ROUTE
+                                       : IPA_MEM_V4_ROUTE;
+
+       return ipa_mem_find(ipa, mem_id);
 }
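The nested conditional in `ipa_table_mem()` above encodes three booleans (filter/route, hashed/plain, IPv4/IPv6) into one of eight memory-region IDs. The same mapping can be expressed as a flag-indexed lookup table; a userspace sketch with hypothetical enum values:

```c
#include <assert.h>

enum mem_id {
	V4_FILTER, V6_FILTER, V4_FILTER_HASHED, V6_FILTER_HASHED,
	V4_ROUTE, V6_ROUTE, V4_ROUTE_HASHED, V6_ROUTE_HASHED,
};

/* Equivalent to the nested ?: chain in ipa_table_mem(): select the
 * region ID from the (filter, hashed, ipv6) flag triple.
 */
static enum mem_id table_mem_id(int filter, int hashed, int ipv6)
{
	static const enum mem_id ids[2][2][2] = {
		[0][0][0] = V4_ROUTE,		[0][0][1] = V6_ROUTE,
		[0][1][0] = V4_ROUTE_HASHED,	[0][1][1] = V6_ROUTE_HASHED,
		[1][0][0] = V4_FILTER,		[1][0][1] = V6_FILTER,
		[1][1][0] = V4_FILTER_HASHED,	[1][1][1] = V6_FILTER_HASHED,
	};

	return ids[!!filter][!!hashed][!!ipv6];
}
```

Both forms are branch-free in effect; the ternary chain keeps the kernel function free of a static table, while the array makes the eight-way mapping easier to audit at a glance.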
 
 bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
@@ -342,11 +298,11 @@ static int ipa_route_reset(struct ipa *ipa, bool modem)
        }
 
        if (modem) {
-               first = IPA_ROUTE_MODEM_MIN;
+               first = 0;
                count = IPA_ROUTE_MODEM_COUNT;
        } else {
-               first = IPA_ROUTE_AP_MIN;
-               count = IPA_ROUTE_AP_COUNT;
+               first = IPA_ROUTE_MODEM_COUNT;
+               count = IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT;
        }
 
        ipa_table_reset_add(trans, false, first, count, IPA_MEM_V4_ROUTE);
@@ -561,8 +517,7 @@ static void ipa_filter_config(struct ipa *ipa, bool modem)
 
 static bool ipa_route_id_modem(u32 route_id)
 {
-       return route_id >= IPA_ROUTE_MODEM_MIN &&
-               route_id <= IPA_ROUTE_MODEM_MIN + IPA_ROUTE_MODEM_COUNT - 1;
+       return route_id < IPA_ROUTE_MODEM_COUNT;
 }
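With `IPA_ROUTE_MODEM_MIN` and friends removed, the route table split becomes implicit: the modem owns IDs `[0, IPA_ROUTE_MODEM_COUNT)` and the AP owns the remainder up to `IPA_ROUTE_COUNT_MAX`, so the old min/max range check collapses to one comparison. A sketch of the partition with illustrative counts (the real values come from the driver, not these defines):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values only. */
#define ROUTE_MODEM_COUNT	8
#define ROUTE_COUNT_MAX		15

/* Modem route IDs start at zero, as in ipa_route_id_modem(). */
static int route_id_is_modem(uint32_t route_id)
{
	return route_id < ROUTE_MODEM_COUNT;
}

/* AP partition, as now computed inline in ipa_route_reset(). */
static uint32_t ap_first(void) { return ROUTE_MODEM_COUNT; }
static uint32_t ap_count(void) { return ROUTE_COUNT_MAX - ROUTE_MODEM_COUNT; }
```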
 
 /**
@@ -611,8 +566,81 @@ void ipa_table_config(struct ipa *ipa)
        ipa_route_config(ipa, true);
 }
 
-/*
- * Initialize a coherent DMA allocation containing initialized filter and
+/* Zero modem_route_count means filter table memory check */
+bool ipa_table_mem_valid(struct ipa *ipa, bool modem_route_count)
+{
+       bool hash_support = ipa_table_hash_support(ipa);
+       bool filter = !modem_route_count;
+       const struct ipa_mem *mem_hashed;
+       const struct ipa_mem *mem_ipv4;
+       const struct ipa_mem *mem_ipv6;
+       u32 count;
+
+       /* IPv4 and IPv6 non-hashed tables are expected to be defined and
+        * have the same size.  Both must have at least two entries (and
+        * would normally have more than that).
+        */
+       mem_ipv4 = ipa_table_mem(ipa, filter, false, false);
+       if (!mem_ipv4)
+               return false;
+
+       mem_ipv6 = ipa_table_mem(ipa, filter, false, true);
+       if (!mem_ipv6)
+               return false;
+
+       if (mem_ipv4->size != mem_ipv6->size)
+               return false;
+
+       /* Table offset and size must fit in TABLE_INIT command fields */
+       if (!ipa_cmd_table_init_valid(ipa, mem_ipv4, !filter))
+               return false;
+
+       /* Make sure the regions are big enough */
+       count = mem_ipv4->size / sizeof(__le64);
+       if (count < 2)
+               return false;
+       if (filter) {
+               /* Filter tables must be able to hold the endpoint bitmap plus
+                * an entry for each endpoint that supports filtering
+                */
+               if (count < 1 + hweight32(ipa->filter_map))
+                       return false;
+       } else {
+               /* Routing tables must be able to hold all modem entries,
+                * plus at least one entry for the AP.
+                */
+               if (count < modem_route_count + 1)
+                       return false;
+       }
+
+       /* If hashing is supported, hashed tables are expected to be defined,
+        * and have the same size as non-hashed tables.  If hashing is not
+        * supported, hashed tables are expected to have zero size (or not
+        * be defined).
+        */
+       mem_hashed = ipa_table_mem(ipa, filter, true, false);
+       if (hash_support) {
+               if (!mem_hashed || mem_hashed->size != mem_ipv4->size)
+                       return false;
+       } else {
+               if (mem_hashed && mem_hashed->size)
+                       return false;
+       }
+
+       /* Same check for IPv6 tables */
+       mem_hashed = ipa_table_mem(ipa, filter, true, true);
+       if (hash_support) {
+               if (!mem_hashed || mem_hashed->size != mem_ipv6->size)
+                       return false;
+       } else {
+               if (mem_hashed && mem_hashed->size)
+                       return false;
+       }
+
+       return true;
+}
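The entry-count checks in ipa_table_mem_valid() above can be exercised in isolation. A minimal userspace sketch, assuming 8-byte (__le64) entries and modeling hweight32() with __builtin_popcount(); region sizes and the filter bitmap below are made-up values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the count checks above: a region must hold at least two
 * 64-bit entries; a filter region additionally needs the endpoint bitmap
 * plus one slot per filtering endpoint; a route region needs all modem
 * entries plus at least one AP entry.
 */
static bool table_count_valid(uint32_t region_size, bool filter,
			      uint32_t filter_map, uint32_t modem_route_count)
{
	uint32_t count = region_size / sizeof(uint64_t);

	if (count < 2)
		return false;
	if (filter)
		return count >= 1 + (uint32_t)__builtin_popcount(filter_map);
	return count >= modem_route_count + 1;
}
```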
+
+/* Initialize a coherent DMA allocation containing initialized filter and
  * route table data.  This is used when initializing or resetting the IPA
  * filter or route table.
  *
index 395189f..65d96de 100644 (file)
@@ -20,14 +20,6 @@ struct ipa;
 #define IPA_ROUTE_COUNT_MAX    15
 
 /**
- * ipa_table_valid() - Validate route and filter table memory regions
- * @ipa:       IPA pointer
- *
- * Return:     true if all regions are valid, false otherwise
- */
-bool ipa_table_valid(struct ipa *ipa);
-
-/**
  * ipa_filter_map_valid() - Validate a filter table endpoint bitmap
  * @ipa:       IPA pointer
  * @filter_mask: Filter table endpoint bitmap to check
@@ -86,4 +78,11 @@ int ipa_table_init(struct ipa *ipa);
  */
 void ipa_table_exit(struct ipa *ipa);
 
+/**
+ * ipa_table_mem_valid() - Validate sizes of table memory regions
+ * @ipa:       IPA pointer
+ * @modem_route_count: Number of modem route table entries
+ *
+ * Return:     true if the table memory regions are valid, false otherwise
+ */
+bool ipa_table_mem_valid(struct ipa *ipa, u32 modem_route_count);
+
 #endif /* _IPA_TABLE_H_ */
index 8f8f730..c5cfe85 100644 (file)
@@ -361,7 +361,7 @@ static void macvlan_broadcast_enqueue(struct macvlan_port *port,
        }
        spin_unlock(&port->bc_queue.lock);
 
-       schedule_work(&port->bc_work);
+       queue_work(system_unbound_wq, &port->bc_work);
 
        if (err)
                goto free_nskb;
index 9e9adde..349b7b1 100644 (file)
@@ -1758,7 +1758,7 @@ static int qca808x_phy_fast_retrain_config(struct phy_device *phydev)
 
 static int qca808x_phy_ms_random_seed_set(struct phy_device *phydev)
 {
-       u16 seed_value = (prandom_u32() % QCA808X_MASTER_SLAVE_SEED_RANGE);
+       u16 seed_value = prandom_u32_max(QCA808X_MASTER_SLAVE_SEED_RANGE);
 
        return at803x_debug_reg_mask(phydev, QCA808X_PHY_DEBUG_LOCAL_SEED,
                        QCA808X_MASTER_SLAVE_SEED_CFG,
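The prandom_u32_max() conversions in this series bound a random value without the modulo bias of `prandom_u32() % range`: the kernel helper takes the high 32 bits of a widening multiply. A userspace sketch of that reduction (not the kernel source itself):

```c
#include <assert.h>
#include <stdint.h>

/* Map a full-range 32-bit random value x into [0, range) by taking the
 * high 32 bits of the 64-bit product x * range, as prandom_u32_max() does.
 */
static uint32_t bounded_u32(uint32_t x, uint32_t range)
{
	return (uint32_t)(((uint64_t)x * range) >> 32);
}
```

The result is uniformly scaled rather than wrapped, so small ranges are not skewed toward low values the way `% range` is.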
index 8549e0e..b60db8b 100644 (file)
@@ -254,8 +254,7 @@ static int dp83822_config_intr(struct phy_device *phydev)
                                DP83822_EEE_ERROR_CHANGE_INT_EN);
 
                if (!dp83822->fx_enabled)
-                       misr_status |= DP83822_MDI_XOVER_INT_EN |
-                                      DP83822_ANEG_ERR_INT_EN |
+                       misr_status |= DP83822_ANEG_ERR_INT_EN |
                                       DP83822_WOL_PKT_INT_EN;
 
                err = phy_write(phydev, MII_DP83822_MISR2, misr_status);
index 6939563..417527f 100644 (file)
@@ -853,6 +853,14 @@ static int dp83867_config_init(struct phy_device *phydev)
                else
                        val &= ~DP83867_SGMII_TYPE;
                phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_SGMIICTL, val);
+
+               /* This is a SW workaround for link instability if RX_CTRL is
+                * not strapped to mode 3 or 4 in HW. This is required for SGMII
+                * in addition to clearing bit 7, handled above.
+                */
+               if (dp83867->rxctrl_strap_quirk)
+                       phy_set_bits_mmd(phydev, DP83867_DEVADDR, DP83867_CFG4,
+                                        BIT(8));
        }
 
        val = phy_read(phydev, DP83867_CFG3);
index 54a17b5..26ce0c5 100644 (file)
@@ -1295,6 +1295,81 @@ static int ksz9131_config_init(struct phy_device *phydev)
        return 0;
 }
 
+#define MII_KSZ9131_AUTO_MDIX          0x1C
+#define MII_KSZ9131_AUTO_MDI_SET       BIT(7)
+#define MII_KSZ9131_AUTO_MDIX_SWAP_OFF BIT(6)
+
+static int ksz9131_mdix_update(struct phy_device *phydev)
+{
+       int ret;
+
+       ret = phy_read(phydev, MII_KSZ9131_AUTO_MDIX);
+       if (ret < 0)
+               return ret;
+
+       if (ret & MII_KSZ9131_AUTO_MDIX_SWAP_OFF) {
+               if (ret & MII_KSZ9131_AUTO_MDI_SET)
+                       phydev->mdix_ctrl = ETH_TP_MDI;
+               else
+                       phydev->mdix_ctrl = ETH_TP_MDI_X;
+       } else {
+               phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+       }
+
+       if (ret & MII_KSZ9131_AUTO_MDI_SET)
+               phydev->mdix = ETH_TP_MDI;
+       else
+               phydev->mdix = ETH_TP_MDI_X;
+
+       return 0;
+}
+
+static int ksz9131_config_mdix(struct phy_device *phydev, u8 ctrl)
+{
+       u16 val;
+
+       switch (ctrl) {
+       case ETH_TP_MDI:
+               val = MII_KSZ9131_AUTO_MDIX_SWAP_OFF |
+                     MII_KSZ9131_AUTO_MDI_SET;
+               break;
+       case ETH_TP_MDI_X:
+               val = MII_KSZ9131_AUTO_MDIX_SWAP_OFF;
+               break;
+       case ETH_TP_MDI_AUTO:
+               val = 0;
+               break;
+       default:
+               return 0;
+       }
+
+       return phy_modify(phydev, MII_KSZ9131_AUTO_MDIX,
+                         MII_KSZ9131_AUTO_MDIX_SWAP_OFF |
+                         MII_KSZ9131_AUTO_MDI_SET, val);
+}
+
+static int ksz9131_read_status(struct phy_device *phydev)
+{
+       int ret;
+
+       ret = ksz9131_mdix_update(phydev);
+       if (ret < 0)
+               return ret;
+
+       return genphy_read_status(phydev);
+}
+
+static int ksz9131_config_aneg(struct phy_device *phydev)
+{
+       int ret;
+
+       ret = ksz9131_config_mdix(phydev, phydev->mdix_ctrl);
+       if (ret)
+               return ret;
+
+       return genphy_config_aneg(phydev);
+}
+
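The ksz9131_config_mdix() switch above maps the three ethtool MDI-X control values onto two register bits: forcing either polarity first disables auto crossover, then the MDI bit picks the polarity. A sketch with stand-in bit names mirroring MII_KSZ9131_AUTO_MDI_SET and MII_KSZ9131_AUTO_MDIX_SWAP_OFF:

```c
#include <assert.h>
#include <stdint.h>

#define AUTO_MDI_SET		(1u << 7)	/* select MDI when swap is off */
#define AUTO_MDIX_SWAP_OFF	(1u << 6)	/* disable auto crossover */

enum mdix_ctrl { TP_MDI, TP_MDI_X, TP_MDI_AUTO };

/* Same mapping as ksz9131_config_mdix(): forced MDI sets both bits,
 * forced MDI-X sets only the swap-off bit, auto clears both.
 */
static uint16_t mdix_bits(enum mdix_ctrl ctrl)
{
	switch (ctrl) {
	case TP_MDI:
		return AUTO_MDIX_SWAP_OFF | AUTO_MDI_SET;
	case TP_MDI_X:
		return AUTO_MDIX_SWAP_OFF;
	default:
		return 0;
	}
}
```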
 #define KSZ8873MLL_GLOBAL_CONTROL_4    0x06
 #define KSZ8873MLL_GLOBAL_CONTROL_4_DUPLEX     BIT(6)
 #define KSZ8873MLL_GLOBAL_CONTROL_4_SPEED      BIT(4)
@@ -3304,6 +3379,8 @@ static struct phy_driver ksphy_driver[] = {
        .probe          = kszphy_probe,
        .config_init    = ksz9131_config_init,
        .config_intr    = kszphy_config_intr,
+       .config_aneg    = ksz9131_config_aneg,
+       .read_status    = ksz9131_read_status,
        .handle_interrupt = kszphy_handle_interrupt,
        .get_sset_count = kszphy_get_sset_count,
        .get_strings    = kszphy_get_strings,
index 2c8bf43..5d08c62 100644 (file)
@@ -13,7 +13,7 @@
  */
 const char *phy_speed_to_str(int speed)
 {
-       BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 93,
+       BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 99,
                "Enum ethtool_link_mode_bit_indices and phylib are out of sync. "
                "If a speed or mode has been added please update phy_speed_to_str "
                "and the PHY settings array.\n");
@@ -49,6 +49,8 @@ const char *phy_speed_to_str(int speed)
                return "200Gbps";
        case SPEED_400000:
                return "400Gbps";
+       case SPEED_800000:
+               return "800Gbps";
        case SPEED_UNKNOWN:
                return "Unknown";
        default:
@@ -157,6 +159,13 @@ EXPORT_SYMBOL_GPL(phy_interface_num_ports);
                               .bit = ETHTOOL_LINK_MODE_ ## b ## _BIT}
 
 static const struct phy_setting settings[] = {
+       /* 800G */
+       PHY_SETTING( 800000, FULL, 800000baseCR8_Full           ),
+       PHY_SETTING( 800000, FULL, 800000baseKR8_Full           ),
+       PHY_SETTING( 800000, FULL, 800000baseDR8_Full           ),
+       PHY_SETTING( 800000, FULL, 800000baseDR8_2_Full         ),
+       PHY_SETTING( 800000, FULL, 800000baseSR8_Full           ),
+       PHY_SETTING( 800000, FULL, 800000baseVR8_Full           ),
        /* 400G */
        PHY_SETTING( 400000, FULL, 400000baseCR8_Full           ),
        PHY_SETTING( 400000, FULL, 400000baseKR8_Full           ),
index ef10f5a..62106c9 100644 (file)
@@ -1678,6 +1678,9 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
        if (phy_interrupt_is_valid(phy))
                phy_request_interrupt(phy);
 
+       if (pl->config->mac_managed_pm)
+               phy->mac_managed_pm = true;
+
        return 0;
 }
 
index 40c9a64..39fd181 100644 (file)
@@ -608,6 +608,22 @@ static int sfp_write(struct sfp *sfp, bool a2, u8 addr, void *buf, size_t len)
        return sfp->write(sfp, a2, addr, buf, len);
 }
 
+static int sfp_modify_u8(struct sfp *sfp, bool a2, u8 addr, u8 mask, u8 val)
+{
+       int ret;
+       u8 old, v;
+
+       ret = sfp_read(sfp, a2, addr, &old, sizeof(old));
+       if (ret != sizeof(old))
+               return ret;
+
+       v = (old & ~mask) | (val & mask);
+       if (v == old)
+               return sizeof(v);
+
+       return sfp_write(sfp, a2, addr, &v, sizeof(v));
+}
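The new sfp_modify_u8() helper is a read-modify-write over one EEPROM byte that also skips the bus write when the merged value is unchanged. The merge step itself reduces to a pure function, sketched here:

```c
#include <assert.h>
#include <stdint.h>

/* Merge val into old under mask, as sfp_modify_u8() does before deciding
 * whether a write is needed: bits in mask come from val, the rest from old.
 */
static uint8_t modify_u8(uint8_t old, uint8_t mask, uint8_t val)
{
	return (uint8_t)((old & ~mask) | (val & mask));
}
```

In the driver, comparing the merged value against the old one before writing avoids a redundant I2C transaction when nothing changed.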
+
 static unsigned int sfp_soft_get_state(struct sfp *sfp)
 {
        unsigned int state = 0;
@@ -633,17 +649,14 @@ static unsigned int sfp_soft_get_state(struct sfp *sfp)
 
 static void sfp_soft_set_state(struct sfp *sfp, unsigned int state)
 {
-       u8 status;
+       u8 mask = SFP_STATUS_TX_DISABLE_FORCE;
+       u8 val = 0;
 
-       if (sfp_read(sfp, true, SFP_STATUS, &status, sizeof(status)) ==
-                    sizeof(status)) {
-               if (state & SFP_F_TX_DISABLE)
-                       status |= SFP_STATUS_TX_DISABLE_FORCE;
-               else
-                       status &= ~SFP_STATUS_TX_DISABLE_FORCE;
+       if (state & SFP_F_TX_DISABLE)
+               val |= SFP_STATUS_TX_DISABLE_FORCE;
 
-               sfp_write(sfp, true, SFP_STATUS, &status, sizeof(status));
-       }
+
+       sfp_modify_u8(sfp, true, SFP_STATUS, mask, val);
 }
 
 static void sfp_soft_start_poll(struct sfp *sfp)
@@ -1761,11 +1774,20 @@ static int sfp_module_parse_power(struct sfp *sfp)
        u32 power_mW = 1000;
        bool supports_a2;
 
-       if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
+       if (sfp->id.ext.sff8472_compliance >= SFP_SFF8472_COMPLIANCE_REV10_2 &&
+           sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL))
                power_mW = 1500;
-       if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
+       /* Added in Rev 11.9, but there is no compliance code for this */
+       if (sfp->id.ext.sff8472_compliance >= SFP_SFF8472_COMPLIANCE_REV11_4 &&
+           sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL))
                power_mW = 2000;
 
+       /* Power level 1 modules (max. 1W) are always supported. */
+       if (power_mW <= 1000) {
+               sfp->module_power_mW = power_mW;
+               return 0;
+       }
+
        supports_a2 = sfp->id.ext.sff8472_compliance !=
                                SFP_SFF8472_COMPLIANCE_NONE ||
                      sfp->id.ext.diagmon & SFP_DIAGMON_DDM;
@@ -1789,12 +1811,6 @@ static int sfp_module_parse_power(struct sfp *sfp)
                }
        }
 
-       if (power_mW <= 1000) {
-               /* Modules below 1W do not require a power change sequence */
-               sfp->module_power_mW = power_mW;
-               return 0;
-       }
-
        if (!supports_a2) {
                /* The module power level is below the host maximum and the
                 * module appears not to implement bus address 0xa2, so assume
@@ -1821,31 +1837,14 @@ static int sfp_module_parse_power(struct sfp *sfp)
 
 static int sfp_sm_mod_hpower(struct sfp *sfp, bool enable)
 {
-       u8 val;
        int err;
 
-       err = sfp_read(sfp, true, SFP_EXT_STATUS, &val, sizeof(val));
-       if (err != sizeof(val)) {
-               dev_err(sfp->dev, "Failed to read EEPROM: %pe\n", ERR_PTR(err));
-               return -EAGAIN;
-       }
-
-       /* DM7052 reports as a high power module, responds to reads (with
-        * all bytes 0xff) at 0x51 but does not accept writes.  In any case,
-        * if the bit is already set, we're already in high power mode.
-        */
-       if (!!(val & BIT(0)) == enable)
-               return 0;
-
-       if (enable)
-               val |= BIT(0);
-       else
-               val &= ~BIT(0);
-
-       err = sfp_write(sfp, true, SFP_EXT_STATUS, &val, sizeof(val));
-       if (err != sizeof(val)) {
-               dev_err(sfp->dev, "Failed to write EEPROM: %pe\n",
-                       ERR_PTR(err));
+       err = sfp_modify_u8(sfp, true, SFP_EXT_STATUS,
+                           SFP_EXT_STATUS_PWRLVL_SELECT,
+                           enable ? SFP_EXT_STATUS_PWRLVL_SELECT : 0);
+       if (err != sizeof(u8)) {
+               dev_err(sfp->dev, "failed to %sable high power: %pe\n",
+                       enable ? "en" : "dis", ERR_PTR(err));
                return -EAGAIN;
        }
 
@@ -2729,8 +2728,12 @@ static int sfp_probe(struct platform_device *pdev)
 
        device_property_read_u32(&pdev->dev, "maximum-power-milliwatt",
                                 &sfp->max_power_mW);
-       if (!sfp->max_power_mW)
+       if (sfp->max_power_mW < 1000) {
+               if (sfp->max_power_mW)
+                       dev_warn(sfp->dev,
+                                "Firmware bug: host maximum power should be at least 1W\n");
                sfp->max_power_mW = 1000;
+       }
 
        dev_info(sfp->dev, "Host maximum power %u.%uW\n",
                 sfp->max_power_mW / 1000, (sfp->max_power_mW / 100) % 10);
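The probe change above raises any sub-1W (or unset) "maximum-power-milliwatt" value to the 1W baseline, and the dev_info() format derives the tenths digit with integer arithmetic. Both pieces in a small sketch:

```c
#include <assert.h>

/* Clamp the host power budget as sfp_probe() now does: anything below
 * 1000 mW (including an unset 0) is raised to the 1 W baseline.
 */
static unsigned int clamp_host_power(unsigned int mw)
{
	return mw < 1000 ? 1000 : mw;
}

/* Tenths-of-a-watt digit used by the "%u.%uW" format: (mW / 100) % 10 */
static unsigned int mw_tenth(unsigned int mw)
{
	return (mw / 100) % 10;
}
```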
index 41db10f..19eac00 100644 (file)
@@ -284,7 +284,7 @@ static __init bool randomized_test(void)
        mutex_lock(&mutex);
 
        for (i = 0; i < NUM_RAND_ROUTES; ++i) {
-               prandom_bytes(ip, 4);
+               get_random_bytes(ip, 4);
                cidr = prandom_u32_max(32) + 1;
                peer = peers[prandom_u32_max(NUM_PEERS)];
                if (wg_allowedips_insert_v4(&t, (struct in_addr *)ip, cidr,
@@ -299,7 +299,7 @@ static __init bool randomized_test(void)
                }
                for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
                        memcpy(mutated, ip, 4);
-                       prandom_bytes(mutate_mask, 4);
+                       get_random_bytes(mutate_mask, 4);
                        mutate_amount = prandom_u32_max(32);
                        for (k = 0; k < mutate_amount / 8; ++k)
                                mutate_mask[k] = 0xff;
@@ -310,7 +310,7 @@ static __init bool randomized_test(void)
                        for (k = 0; k < 4; ++k)
                                mutated[k] = (mutated[k] & mutate_mask[k]) |
                                             (~mutate_mask[k] &
-                                             prandom_u32_max(256));
+                                             get_random_u8());
                        cidr = prandom_u32_max(32) + 1;
                        peer = peers[prandom_u32_max(NUM_PEERS)];
                        if (wg_allowedips_insert_v4(&t,
@@ -328,7 +328,7 @@ static __init bool randomized_test(void)
        }
 
        for (i = 0; i < NUM_RAND_ROUTES; ++i) {
-               prandom_bytes(ip, 16);
+               get_random_bytes(ip, 16);
                cidr = prandom_u32_max(128) + 1;
                peer = peers[prandom_u32_max(NUM_PEERS)];
                if (wg_allowedips_insert_v6(&t, (struct in6_addr *)ip, cidr,
@@ -343,7 +343,7 @@ static __init bool randomized_test(void)
                }
                for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
                        memcpy(mutated, ip, 16);
-                       prandom_bytes(mutate_mask, 16);
+                       get_random_bytes(mutate_mask, 16);
                        mutate_amount = prandom_u32_max(128);
                        for (k = 0; k < mutate_amount / 8; ++k)
                                mutate_mask[k] = 0xff;
@@ -354,7 +354,7 @@ static __init bool randomized_test(void)
                        for (k = 0; k < 4; ++k)
                                mutated[k] = (mutated[k] & mutate_mask[k]) |
                                             (~mutate_mask[k] &
-                                             prandom_u32_max(256));
+                                             get_random_u8());
                        cidr = prandom_u32_max(128) + 1;
                        peer = peers[prandom_u32_max(NUM_PEERS)];
                        if (wg_allowedips_insert_v6(&t,
@@ -381,13 +381,13 @@ static __init bool randomized_test(void)
 
        for (j = 0;; ++j) {
                for (i = 0; i < NUM_QUERIES; ++i) {
-                       prandom_bytes(ip, 4);
+                       get_random_bytes(ip, 4);
                        if (lookup(t.root4, 32, ip) != horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) {
                                horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip);
                                pr_err("allowedips random v4 self-test: FAIL\n");
                                goto free;
                        }
-                       prandom_bytes(ip, 16);
+                       get_random_bytes(ip, 16);
                        if (lookup(t.root6, 128, ip) != horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) {
                                pr_err("allowedips random v6 self-test: FAIL\n");
                                goto free;
index 479041f..10d9d9c 100644 (file)
@@ -1128,7 +1128,7 @@ static void brcmf_p2p_afx_handler(struct work_struct *work)
        if (afx_hdl->is_listen && afx_hdl->my_listen_chan)
                /* 100ms ~ 300ms */
                err = brcmf_p2p_discover_listen(p2p, afx_hdl->my_listen_chan,
-                                               100 * (1 + prandom_u32() % 3));
+                                               100 * (1 + prandom_u32_max(3)));
        else
                err = brcmf_p2p_act_frm_search(p2p, afx_hdl->peer_listen_chan);
 
index d0a7465..170c61c 100644 (file)
@@ -177,7 +177,7 @@ static int brcmf_pno_set_random(struct brcmf_if *ifp, struct brcmf_pno_info *pi)
        memcpy(pfn_mac.mac, mac_addr, ETH_ALEN);
        for (i = 0; i < ETH_ALEN; i++) {
                pfn_mac.mac[i] &= mac_mask[i];
-               pfn_mac.mac[i] |= get_random_int() & ~(mac_mask[i]);
+               pfn_mac.mac[i] |= get_random_u8() & ~(mac_mask[i]);
        }
        /* Clear multi bit */
        pfn_mac.mac[0] &= 0xFE;
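The loop above builds a randomized MAC by keeping mask bits from the base address and filling the rest from get_random_u8(), then clears the multicast bit of the first octet. The per-octet mixing step as a pure sketch:

```c
#include <assert.h>
#include <stdint.h>

/* One octet of the randomized MAC: masked bits come from the base
 * address, the remaining bits from the random byte, as in the loop above.
 */
static uint8_t mix_octet(uint8_t base, uint8_t mask, uint8_t rnd)
{
	return (uint8_t)((base & mask) | (rnd & ~mask));
}
```

After mixing all six octets, the driver additionally does `mac[0] &= 0xFE` so the result can never be a multicast/group address.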
index ed586e6..de0c545 100644 (file)
@@ -1099,7 +1099,7 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm,
                        iwl_mvm_mac_ap_iterator, &data);
 
                if (data.beacon_device_ts) {
-                       u32 rand = (prandom_u32() % (64 - 36)) + 36;
+                       u32 rand = prandom_u32_max(64 - 36) + 36;
                        mvmvif->ap_beacon_time = data.beacon_device_ts +
                                ieee80211_tu_to_usec(data.beacon_int * rand /
                                                     100);
index 535995e..bcd564d 100644 (file)
@@ -239,7 +239,7 @@ mwifiex_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
        tx_info->pkt_len = pkt_len;
 
        mwifiex_form_mgmt_frame(skb, buf, len);
-       *cookie = prandom_u32() | 1;
+       *cookie = get_random_u32() | 1;
 
        if (ieee80211_is_action(mgmt->frame_control))
                skb = mwifiex_clone_skb_for_tx_status(priv,
@@ -303,7 +303,7 @@ mwifiex_cfg80211_remain_on_channel(struct wiphy *wiphy,
                                         duration);
 
        if (!ret) {
-               *cookie = prandom_u32() | 1;
+               *cookie = get_random_u32() | 1;
                priv->roc_cfg.cookie = *cookie;
                priv->roc_cfg.chan = *chan;
 
index b890479..9bbfff8 100644 (file)
@@ -1161,7 +1161,7 @@ static int mgmt_tx(struct wiphy *wiphy,
        const u8 *vendor_ie;
        int ret = 0;
 
-       *cookie = prandom_u32();
+       *cookie = get_random_u32();
        priv->tx_cookie = *cookie;
        mgmt = (const struct ieee80211_mgmt *)buf;
 
index bfdf03b..73e6f94 100644 (file)
@@ -449,7 +449,7 @@ qtnf_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
 {
        struct qtnf_vif *vif = qtnf_netdev_get_priv(wdev->netdev);
        const struct ieee80211_mgmt *mgmt_frame = (void *)params->buf;
-       u32 short_cookie = prandom_u32();
+       u32 short_cookie = get_random_u32();
        u16 flags = 0;
        u16 freq;
 
index 5a3e7a6..4a9e4b5 100644 (file)
@@ -1594,7 +1594,7 @@ static int cw1200_get_prio_queue(struct cw1200_common *priv,
                edca = &priv->edca.params[i];
                score = ((edca->aifns + edca->cwmin) << 16) +
                        ((edca->cwmax - edca->cwmin) *
-                        (get_random_int() & 0xFFFF));
+                        get_random_u16());
                if (score < best && (winner < 0 || i != 3)) {
                        best = score;
                        winner = i;
index 3e3922d..28c0f06 100644 (file)
@@ -6100,7 +6100,7 @@ static int wl1271_register_hw(struct wl1271 *wl)
                        wl1271_warning("Fuse mac address is zero. using random mac");
                        /* Use TI oui and a random nic */
                        oui_addr = WLCORE_TI_OUI_ADDRESS;
-                       nic_addr = get_random_int();
+                       nic_addr = get_random_u32();
                } else {
                        oui_addr = wl->fuse_oui_addr;
                        /* fuse has the BD_ADDR, the WLAN addresses are the next two */
index ff09a8c..2397a90 100644 (file)
@@ -311,7 +311,7 @@ err_unreg_dev:
        return ERR_PTR(err);
 
 err_free_dev:
-       kfree(dev);
+       put_device(&dev->dev);
 
        return ERR_PTR(err);
 }
index f577449..85c06db 100644 (file)
@@ -54,16 +54,19 @@ static int virtual_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
        mutex_lock(&nci_mutex);
        if (state != virtual_ncidev_enabled) {
                mutex_unlock(&nci_mutex);
+               kfree_skb(skb);
                return 0;
        }
 
        if (send_buff) {
                mutex_unlock(&nci_mutex);
+               kfree_skb(skb);
                return -1;
        }
        send_buff = skb_copy(skb, GFP_KERNEL);
        mutex_unlock(&nci_mutex);
        wake_up_interruptible(&wq);
+       consume_skb(skb);
 
        return 0;
 }
index bbe5099..c60ec0b 100644 (file)
@@ -170,15 +170,12 @@ EXPORT_SYMBOL(nvdimm_namespace_disk_name);
 
 const uuid_t *nd_dev_to_uuid(struct device *dev)
 {
-       if (!dev)
-               return &uuid_null;
-
-       if (is_namespace_pmem(dev)) {
+       if (dev && is_namespace_pmem(dev)) {
                struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
 
                return nspm->uuid;
-       } else
-               return &uuid_null;
+       }
+       return &uuid_null;
 }
 EXPORT_SYMBOL(nd_dev_to_uuid);
 
@@ -388,7 +385,7 @@ static resource_size_t init_dpa_allocation(struct nd_label_id *label_id,
  *
  * BLK-space is valid as long as it does not precede a PMEM
  * allocation in a given region. PMEM-space must be contiguous
- * and adjacent to an existing existing allocation (if one
+ * and adjacent to an existing allocation (if one
  * exists).  If reserving PMEM any space is valid.
  */
 static void space_valid(struct nd_region *nd_region, struct nvdimm_drvdata *ndd,
@@ -839,7 +836,6 @@ static ssize_t size_store(struct device *dev,
 {
        struct nd_region *nd_region = to_nd_region(dev->parent);
        unsigned long long val;
-       uuid_t **uuid = NULL;
        int rc;
 
        rc = kstrtoull(buf, 0, &val);
@@ -853,16 +849,12 @@ static ssize_t size_store(struct device *dev,
        if (rc >= 0)
                rc = nd_namespace_label_update(nd_region, dev);
 
-       if (is_namespace_pmem(dev)) {
+       /* setting size zero == 'delete namespace' */
+       if (rc == 0 && val == 0 && is_namespace_pmem(dev)) {
                struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
 
-               uuid = &nspm->uuid;
-       }
-
-       if (rc == 0 && val == 0 && uuid) {
-               /* setting size zero == 'delete namespace' */
-               kfree(*uuid);
-               *uuid = NULL;
+               kfree(nspm->uuid);
+               nspm->uuid = NULL;
        }
 
        dev_dbg(dev, "%llx %s (%d)\n", val, rc < 0 ? "fail" : "success", rc);
index 473a71b..e0875d3 100644 (file)
@@ -509,16 +509,13 @@ static ssize_t align_store(struct device *dev,
 {
        struct nd_region *nd_region = to_nd_region(dev);
        unsigned long val, dpa;
-       u32 remainder;
+       u32 mappings, remainder;
        int rc;
 
        rc = kstrtoul(buf, 0, &val);
        if (rc)
                return rc;
 
-       if (!nd_region->ndr_mappings)
-               return -ENXIO;
-
        /*
         * Ensure space-align is evenly divisible by the region
         * interleave-width because the kernel typically has no facility
@@ -526,7 +523,8 @@ static ssize_t align_store(struct device *dev,
         * contribute to the tail capacity in system-physical-address
         * space for the namespace.
         */
-       dpa = div_u64_rem(val, nd_region->ndr_mappings, &remainder);
+       mappings = max_t(u32, 1, nd_region->ndr_mappings);
+       dpa = div_u64_rem(val, mappings, &remainder);
        if (!is_power_of_2(dpa) || dpa < PAGE_SIZE
                        || val > region_size(nd_region) || remainder)
                return -EINVAL;
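The align_store() checks above split the requested alignment across the interleave set (now treating zero mappings as one) and require each DIMM's share to be a page-aligned power of two with no remainder. A userspace sketch of the same validation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static bool is_power_of_2(uint64_t n)
{
	return n && !(n & (n - 1));
}

/* Mirror of the align_store() checks: mappings is clamped to at least 1
 * (the max_t() above), the value must divide evenly across the set, and
 * each share must be a power of two no smaller than a page.
 */
static bool align_valid(uint64_t val, uint32_t mappings, uint64_t region_size)
{
	uint64_t dpa;

	if (mappings < 1)
		mappings = 1;
	if (val % mappings)
		return false;
	dpa = val / mappings;
	return is_power_of_2(dpa) && dpa >= PAGE_SIZE && val <= region_size;
}
```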
@@ -1096,7 +1094,7 @@ int nvdimm_flush(struct nd_region *nd_region, struct bio *bio)
        return rc;
 }
 /**
- * nvdimm_flush - flush any posted write queues between the cpu and pmem media
+ * generic_nvdimm_flush() - flush any posted write queues between the cpu and pmem media
  * @nd_region: interleaved pmem region
  */
 int generic_nvdimm_flush(struct nd_region *nd_region)
index b5aa55c..8aefb60 100644 (file)
@@ -408,7 +408,7 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
        return rc;
 }
 
-void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
+static void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
 {
        struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(&nvdimm->dev);
        int rc;
index 04bd28f..d90e4f0 100644 (file)
@@ -23,7 +23,7 @@ u32 nvme_auth_get_seqnum(void)
 
        mutex_lock(&nvme_dhchap_mutex);
        if (!nvme_dhchap_seqnum)
-               nvme_dhchap_seqnum = prandom_u32();
+               nvme_dhchap_seqnum = get_random_u32();
        else {
                nvme_dhchap_seqnum++;
                if (!nvme_dhchap_seqnum)
index 5fc5ea1..ff8b083 100644 (file)
@@ -1039,6 +1039,8 @@ static void apple_nvme_reset_work(struct work_struct *work)
                                         dma_max_mapping_size(anv->dev) >> 9);
        anv->ctrl.max_segments = NVME_MAX_SEGS;
 
+       dma_set_max_seg_size(anv->dev, 0xffffffff);
+
        /*
         * Enable NVMMU and linear submission queues.
         * While we could keep those disabled and pretend this is slightly
index 059737c..dc42206 100644 (file)
@@ -3262,8 +3262,12 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
                return ret;
 
        if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) {
+               /*
+                * Do not return errors unless we are in a controller reset,
+                * the controller works perfectly fine without hwmon.
+                */
                ret = nvme_hwmon_init(ctrl);
-               if (ret < 0)
+               if (ret == -EINTR)
                        return ret;
        }
 
@@ -4846,7 +4850,7 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
        return 0;
 
 out_cleanup_admin_q:
-       blk_mq_destroy_queue(ctrl->fabrics_q);
+       blk_mq_destroy_queue(ctrl->admin_q);
 out_free_tagset:
        blk_mq_free_tag_set(ctrl->admin_tagset);
        return ret;
index 0a586d7..9e6e56c 100644 (file)
@@ -12,7 +12,7 @@
 
 struct nvme_hwmon_data {
        struct nvme_ctrl *ctrl;
-       struct nvme_smart_log log;
+       struct nvme_smart_log *log;
        struct mutex read_lock;
 };
 
@@ -60,14 +60,14 @@ static int nvme_set_temp_thresh(struct nvme_ctrl *ctrl, int sensor, bool under,
 static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data)
 {
        return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,
-                          NVME_CSI_NVM, &data->log, sizeof(data->log), 0);
+                          NVME_CSI_NVM, data->log, sizeof(*data->log), 0);
 }
 
 static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
                           u32 attr, int channel, long *val)
 {
        struct nvme_hwmon_data *data = dev_get_drvdata(dev);
-       struct nvme_smart_log *log = &data->log;
+       struct nvme_smart_log *log = data->log;
        int temp;
        int err;
 
@@ -163,7 +163,7 @@ static umode_t nvme_hwmon_is_visible(const void *_data,
        case hwmon_temp_max:
        case hwmon_temp_min:
                if ((!channel && data->ctrl->wctemp) ||
-                   (channel && data->log.temp_sensor[channel - 1])) {
+                   (channel && data->log->temp_sensor[channel - 1])) {
                        if (data->ctrl->quirks &
                            NVME_QUIRK_NO_TEMP_THRESH_CHANGE)
                                return 0444;
@@ -176,7 +176,7 @@ static umode_t nvme_hwmon_is_visible(const void *_data,
                break;
        case hwmon_temp_input:
        case hwmon_temp_label:
-               if (!channel || data->log.temp_sensor[channel - 1])
+               if (!channel || data->log->temp_sensor[channel - 1])
                        return 0444;
                break;
        default:
@@ -230,7 +230,13 @@ int nvme_hwmon_init(struct nvme_ctrl *ctrl)
 
        data = kzalloc(sizeof(*data), GFP_KERNEL);
        if (!data)
-               return 0;
+               return -ENOMEM;
+
+       data->log = kzalloc(sizeof(*data->log), GFP_KERNEL);
+       if (!data->log) {
+               err = -ENOMEM;
+               goto err_free_data;
+       }
 
        data->ctrl = ctrl;
        mutex_init(&data->read_lock);
@@ -238,8 +244,7 @@ int nvme_hwmon_init(struct nvme_ctrl *ctrl)
        err = nvme_hwmon_get_smart_log(data);
        if (err) {
                dev_warn(dev, "Failed to read smart log (error %d)\n", err);
-               kfree(data);
-               return err;
+               goto err_free_log;
        }
 
        hwmon = hwmon_device_register_with_info(dev, "nvme",
@@ -247,11 +252,17 @@ int nvme_hwmon_init(struct nvme_ctrl *ctrl)
                                                NULL);
        if (IS_ERR(hwmon)) {
                dev_warn(dev, "Failed to instantiate hwmon device\n");
-               kfree(data);
-               return PTR_ERR(hwmon);
+               err = PTR_ERR(hwmon);
+               goto err_free_log;
        }
        ctrl->hwmon_device = hwmon;
        return 0;
+
+err_free_log:
+       kfree(data->log);
+err_free_data:
+       kfree(data);
+       return err;
 }
 
 void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
@@ -262,6 +273,7 @@ void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
 
                hwmon_device_unregister(ctrl->hwmon_device);
                ctrl->hwmon_device = NULL;
+               kfree(data->log);
                kfree(data);
        }
 }
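The reworked nvme_hwmon_init() above follows the usual kernel pattern of unwinding allocations with goto labels in reverse order, so every failure path frees exactly what was allocated before it. A minimal userspace sketch of the same pattern (names and the simulated failure flag are illustrative, not kernel API):

```c
#include <stdlib.h>
#include <errno.h>

struct hwmon_data {
	void *log;	/* stands in for the separately allocated smart log */
};

/* Allocate data and its log; on any failure, free in reverse order. */
static int hwmon_init_sketch(struct hwmon_data **out, int fail_log_read)
{
	struct hwmon_data *data;
	int err;

	data = calloc(1, sizeof(*data));
	if (!data)
		return -ENOMEM;

	data->log = calloc(1, 512);
	if (!data->log) {
		err = -ENOMEM;
		goto err_free_data;
	}

	if (fail_log_read) {	/* simulates nvme_hwmon_get_smart_log() failing */
		err = -EIO;
		goto err_free_log;
	}

	*out = data;
	return 0;

err_free_log:
	free(data->log);
err_free_data:
	free(data);
	return err;
}
```

The labels are ordered so that jumping to any one of them frees only the objects allocated up to that point, which is why the patch replaces the duplicated kfree()+return pairs with two labels.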
index 00f2f81..0ea7e44 100644 (file)
@@ -182,6 +182,7 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
 
        for_each_node(node)
                rcu_assign_pointer(head->current_path[node], NULL);
+       kblockd_schedule_work(&head->requeue_work);
 }
 
 static bool nvme_path_is_disabled(struct nvme_ns *ns)
index 5b796ef..31e577b 100644 (file)
@@ -3511,6 +3511,16 @@ static const struct pci_device_id nvme_id_table[] = {
                .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
        { PCI_DEVICE(0x2646, 0x2263),   /* KINGSTON A2000 NVMe SSD  */
                .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+       { PCI_DEVICE(0x2646, 0x5018),   /* KINGSTON OM8SFP4xxxxP OS21012 NVMe SSD */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x2646, 0x5016),   /* KINGSTON OM3PGP4xxxxP OS21011 NVMe SSD */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x2646, 0x501A),   /* KINGSTON OM8PGP4xxxxP OS21005 NVMe SSD */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x2646, 0x501B),   /* KINGSTON OM8PGP4xxxxQ OS21005 NVMe SSD */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+       { PCI_DEVICE(0x2646, 0x501E),   /* KINGSTON OM3PGP4xxxxQ OS21011 NVMe SSD */
+               .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
        { PCI_DEVICE(0x1e4B, 0x1001),   /* MAXIO MAP1001 */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1e4B, 0x1002),   /* MAXIO MAP1002 */
@@ -3521,12 +3531,16 @@ static const struct pci_device_id nvme_id_table[] = {
                .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1dbe, 0x5236),   /* ADATA XPG GAMMIX S70 */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
+       { PCI_DEVICE(0x1e49, 0x0021),   /* ZHITAI TiPro5000 NVMe SSD */
+               .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
        { PCI_DEVICE(0x1e49, 0x0041),   /* ZHITAI TiPro7000 NVMe SSD */
                .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
        { PCI_DEVICE(0xc0a9, 0x540a),   /* Crucial P2 */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(0x1d97, 0x2263), /* Lexar NM610 */
                .driver_data = NVME_QUIRK_BOGUS_NID, },
+       { PCI_DEVICE(0x1d97, 0x2269), /* Lexar NM760 */
+               .driver_data = NVME_QUIRK_BOGUS_NID, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
                .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
        { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
index 5ad0ab2..6e079ab 100644 (file)
@@ -996,7 +996,7 @@ static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl)
 {
        struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl);
 
-       cancel_work_sync(&ctrl->err_work);
+       flush_work(&ctrl->err_work);
        cancel_delayed_work_sync(&ctrl->reconnect_work);
 }
 
index 93e2e31..1eed0fc 100644 (file)
@@ -2181,7 +2181,7 @@ out_fail:
 
 static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl)
 {
-       cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work);
+       flush_work(&to_tcp_ctrl(ctrl)->err_work);
        cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work);
 }
 
index e34a289..9443ee1 100644 (file)
@@ -1290,12 +1290,8 @@ static ssize_t nvmet_subsys_attr_qid_max_show(struct config_item *item,
 static ssize_t nvmet_subsys_attr_qid_max_store(struct config_item *item,
                                               const char *page, size_t cnt)
 {
-       struct nvmet_port *port = to_nvmet_port(item);
        u16 qid_max;
 
-       if (nvmet_is_port_enabled(port, __func__))
-               return -EACCES;
-
        if (sscanf(page, "%hu\n", &qid_max) != 1)
                return -EINVAL;
 
index 1467714..aecb585 100644 (file)
@@ -1176,7 +1176,7 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
         * reset the keep alive timer when the controller is enabled.
         */
        if (ctrl->kato)
-               mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
+               mod_delayed_work(nvmet_wq, &ctrl->ka_work, ctrl->kato * HZ);
 }
 
 static void nvmet_clear_ctrl(struct nvmet_ctrl *ctrl)
index f54a6f4..f0cb311 100644 (file)
@@ -393,7 +393,7 @@ static int parse_slot_config(int slot,
                }
                
                if (p0 + function_len < pos) {
-                       printk(KERN_ERR "eisa_enumerator: function %d length mis-match "
+                       printk(KERN_ERR "eisa_enumerator: function %d length mismatch "
                               "got %d, expected %d\n",
                               num_func, pos-p0, function_len);
                        res=-1;
@@ -407,13 +407,13 @@ static int parse_slot_config(int slot,
        }
        
        if (pos != es->config_data_length) {
-               printk(KERN_ERR "eisa_enumerator: config data length mis-match got %d, expected %d\n",
+               printk(KERN_ERR "eisa_enumerator: config data length mismatch got %d, expected %d\n",
                        pos, es->config_data_length);
                res=-1;
        }
        
        if (num_func != es->num_functions) {
-               printk(KERN_ERR "eisa_enumerator: number of functions mis-match got %d, expected %d\n",
+               printk(KERN_ERR "eisa_enumerator: number of functions mismatch got %d, expected %d\n",
                        num_func, es->num_functions);
                res=-2;
        }
@@ -451,7 +451,7 @@ static int init_slot(int slot, struct eeprom_eisa_slot_info *es)
                }
                if (es->eisa_slot_id != id) {
                        print_eisa_id(id_string, id);
-                       printk(KERN_ERR "EISA slot %d id mis-match: got %s", 
+                       printk(KERN_ERR "EISA slot %d id mismatch: got %s",
                               slot, id_string);
                        
                        print_eisa_id(id_string, es->eisa_slot_id);
index 24478ae..8e323e9 100644 (file)
@@ -415,6 +415,13 @@ static inline u32 pads_readl(struct tegra_pcie *pcie, unsigned long offset)
  * address (access to which generates correct config transaction) falls in
  * this 4 KiB region.
  */
+static unsigned int tegra_pcie_conf_offset(u8 bus, unsigned int devfn,
+                                          unsigned int where)
+{
+       return ((where & 0xf00) << 16) | (bus << 16) | (PCI_SLOT(devfn) << 11) |
+              (PCI_FUNC(devfn) << 8) | (where & 0xff);
+}
+
 static void __iomem *tegra_pcie_map_bus(struct pci_bus *bus,
                                        unsigned int devfn,
                                        int where)
@@ -436,9 +443,7 @@ static void __iomem *tegra_pcie_map_bus(struct pci_bus *bus,
                unsigned int offset;
                u32 base;
 
-               offset = PCI_CONF1_EXT_ADDRESS(bus->number, PCI_SLOT(devfn),
-                                              PCI_FUNC(devfn), where) &
-                        ~PCI_CONF1_ENABLE;
+               offset = tegra_pcie_conf_offset(bus->number, devfn, where);
 
                /* move 4 KiB window to offset within the FPCI region */
                base = 0xfe100000 + ((offset & ~(SZ_4K - 1)) >> 8);
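The new tegra_pcie_conf_offset() helper packs bus, device, function and register number into Tegra's FPCI config address, with the extended register bits (where[11:8]) placed at bits 27:24 rather than the standard CF8 layout. A standalone check of that bit packing, with the devfn accessors redefined locally for illustration:

```c
#include <stdint.h>

/* Local copies of the kernel's devfn accessors, for illustration only. */
#define PCI_SLOT(devfn)	(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)	((devfn) & 0x07)

/* Same packing as tegra_pcie_conf_offset(): extended register bits
 * (where[11:8]) land at bits 27:24, bus at 23:16, device at 15:11,
 * function at 10:8, and the low register byte at 7:0. */
static unsigned int tegra_conf_offset(uint8_t bus, unsigned int devfn,
				      unsigned int where)
{
	return ((where & 0xf00) << 16) | ((unsigned int)bus << 16) |
	       (PCI_SLOT(devfn) << 11) | (PCI_FUNC(devfn) << 8) |
	       (where & 0xff);
}
```

For bus 1, slot 2, function 3, register 0x104 this yields 0x01011304: the 0x100 extended bits move up to bit 24 so they do not collide with the bus number at bit 16.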
index dc6a30e..b409659 100644 (file)
@@ -1768,10 +1768,7 @@ static void adjust_bridge_window(struct pci_dev *bridge, struct resource *res,
        }
 
        res->end = res->start + new_size - 1;
-
-       /* If the resource is part of the add_list remove it now */
-       if (add_list)
-               remove_from_list(add_list, res);
+       remove_from_list(add_list, res);
 }
 
 static void pci_bus_distribute_available_resources(struct pci_bus *bus,
@@ -1926,8 +1923,6 @@ static void pci_bridge_distribute_available_resources(struct pci_dev *bridge,
        if (!bridge->is_hotplug_bridge)
                return;
 
-       pci_dbg(bridge, "distributing available resources\n");
-
        /* Take the initial extra resources from the hotplug port */
        available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW];
        available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW];
@@ -1939,59 +1934,6 @@ static void pci_bridge_distribute_available_resources(struct pci_dev *bridge,
                                               available_mmio_pref);
 }
 
-static bool pci_bridge_resources_not_assigned(struct pci_dev *dev)
-{
-       const struct resource *r;
-
-       /*
-        * Check the child device's resources and if they are not yet
-        * assigned it means we are configuring them (not the boot
-        * firmware) so we should be able to extend the upstream
-        * bridge's (that's the hotplug downstream PCIe port) resources
-        * in the same way we do with the normal hotplug case.
-        */
-       r = &dev->resource[PCI_BRIDGE_IO_WINDOW];
-       if (!r->flags || !(r->flags & IORESOURCE_STARTALIGN))
-               return false;
-       r = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
-       if (!r->flags || !(r->flags & IORESOURCE_STARTALIGN))
-               return false;
-       r = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
-       if (!r->flags || !(r->flags & IORESOURCE_STARTALIGN))
-               return false;
-
-       return true;
-}
-
-static void pci_root_bus_distribute_available_resources(struct pci_bus *bus,
-                                                       struct list_head *add_list)
-{
-       struct pci_dev *dev, *bridge = bus->self;
-
-       for_each_pci_bridge(dev, bus) {
-               struct pci_bus *b;
-
-               b = dev->subordinate;
-               if (!b)
-                       continue;
-
-               /*
-                * Need to check "bridge" here too because it is NULL
-                * in case of root bus.
-                */
-               if (bridge && pci_bridge_resources_not_assigned(dev)) {
-                       pci_bridge_distribute_available_resources(bridge, add_list);
-                       /*
-                        * There is only PCIe upstream port on the bus
-                        * so we don't need to go futher.
-                        */
-                       return;
-               }
-
-               pci_root_bus_distribute_available_resources(b, add_list);
-       }
-}
-
 /*
  * First try will not touch PCI bridge res.
  * Second and later try will clear small leaf bridge res.
@@ -2031,8 +1973,6 @@ again:
         */
        __pci_bus_size_bridges(bus, add_list);
 
-       pci_root_bus_distribute_available_resources(bus, add_list);
-
        /* Depth last, allocate resources and update the hardware. */
        __pci_bus_assign_resources(bus, add_list, &fail_head);
        if (add_list)
index 44c07ea..341010f 100644 (file)
@@ -185,7 +185,7 @@ config APPLE_M1_CPU_PMU
 
 config ALIBABA_UNCORE_DRW_PMU
        tristate "Alibaba T-Head Yitian 710 DDR Sub-system Driveway PMU driver"
-       depends on ARM64 || COMPILE_TEST
+       depends on (ARM64 && ACPI) || COMPILE_TEST
        help
          Support for Driveway PMU events monitoring on Yitian 710 DDR
          Sub-system.
index 82729b8..a7689fe 100644 (file)
@@ -658,8 +658,8 @@ static int ali_drw_pmu_probe(struct platform_device *pdev)
 
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        drw_pmu->cfg_base = devm_ioremap_resource(&pdev->dev, res);
-       if (!drw_pmu->cfg_base)
-               return -ENOMEM;
+       if (IS_ERR(drw_pmu->cfg_base))
+               return PTR_ERR(drw_pmu->cfg_base);
 
        name = devm_kasprintf(drw_pmu->dev, GFP_KERNEL, "ali_drw_%llx",
                              (u64) (res->start >> ALI_DRW_PMU_PA_SHIFT));
index 15e5a47..3852c18 100644 (file)
@@ -652,8 +652,11 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
        struct riscv_pmu *pmu = hlist_entry_safe(node, struct riscv_pmu, node);
        struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
 
-       /* Enable the access for TIME csr only from the user mode now */
-       csr_write(CSR_SCOUNTEREN, 0x2);
+       /*
+        * Enable the access for CYCLE, TIME, and INSTRET CSRs from userspace,
+        * as is necessary to maintain uABI compatibility.
+        */
+       csr_write(CSR_SCOUNTEREN, 0x7);
 
        /* Stop all the counters so that they can be enabled from perf */
        pmu_sbi_stop_all(pmu);
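The scounteren change above widens the counters delegated to userspace from TIME only (0x2) to CYCLE, TIME and INSTRET (0x7). The bit positions follow the RISC-V privileged specification; the enum names below are illustrative mnemonics, not kernel identifiers:

```c
#include <stdint.h>

/* scounteren bit positions per the RISC-V privileged spec. */
enum {
	SCOUNTEREN_CY = 1u << 0,	/* cycle   */
	SCOUNTEREN_TM = 1u << 1,	/* time    */
	SCOUNTEREN_IR = 1u << 2,	/* instret */
};

/* Counters userspace may read directly, matching the uABI expectation
 * that rdcycle/rdtime/rdinstret do not trap. */
static uint32_t scounteren_uabi_mask(void)
{
	return SCOUNTEREN_CY | SCOUNTEREN_TM | SCOUNTEREN_IR;
}
```

The old value 0x2 corresponds to TM alone, which is why rdcycle/rdinstret from userspace trapped before this patch.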
index 7e73207..9e46d83 100644 (file)
@@ -667,7 +667,7 @@ static u8 jz4755_lcd_24bit_funcs[] = { 1, 1, 1, 1, 0, 0, };
 static const struct group_desc jz4755_groups[] = {
        INGENIC_PIN_GROUP("uart0-data", jz4755_uart0_data, 0),
        INGENIC_PIN_GROUP("uart0-hwflow", jz4755_uart0_hwflow, 0),
-       INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 0),
+       INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 1),
        INGENIC_PIN_GROUP("uart2-data", jz4755_uart2_data, 1),
        INGENIC_PIN_GROUP("ssi-dt-b", jz4755_ssi_dt_b, 0),
        INGENIC_PIN_GROUP("ssi-dt-f", jz4755_ssi_dt_f, 0),
@@ -721,7 +721,7 @@ static const char *jz4755_ssi_groups[] = {
        "ssi-ce1-b", "ssi-ce1-f",
 };
 static const char *jz4755_mmc0_groups[] = { "mmc0-1bit", "mmc0-4bit", };
-static const char *jz4755_mmc1_groups[] = { "mmc0-1bit", "mmc0-4bit", };
+static const char *jz4755_mmc1_groups[] = { "mmc1-1bit", "mmc1-4bit", };
 static const char *jz4755_i2c_groups[] = { "i2c-data", };
 static const char *jz4755_cim_groups[] = { "cim-data", };
 static const char *jz4755_lcd_groups[] = {
index 62ce395..687aaa6 100644 (file)
@@ -1864,19 +1864,28 @@ static void ocelot_irq_unmask_level(struct irq_data *data)
        if (val & bit)
                ack = true;
 
+       /* Try to clear any rising edges */
+       if (!active && ack)
+               regmap_write_bits(info->map, REG(OCELOT_GPIO_INTR, info, gpio),
+                                 bit, bit);
+
        /* Enable the interrupt now */
        gpiochip_enable_irq(chip, gpio);
        regmap_update_bits(info->map, REG(OCELOT_GPIO_INTR_ENA, info, gpio),
                           bit, bit);
 
        /*
-        * In case the interrupt line is still active and the interrupt
-        * controller has not seen any changes in the interrupt line, then it
-        * means that there happen another interrupt while the line was active.
+        * In case the interrupt line is still active, it means that
+        * another interrupt occurred while the line was active.
         * So we missed that one, and we need to kick the interrupt
         * handler again.
         */
-       if (active && !ack) {
+       regmap_read(info->map, REG(OCELOT_GPIO_IN, info, gpio), &val);
+       if ((!(val & bit) && trigger_level == IRQ_TYPE_LEVEL_LOW) ||
+             (val & bit && trigger_level == IRQ_TYPE_LEVEL_HIGH))
+               active = true;
+
+       if (active) {
                struct ocelot_irq_work *work;
 
                work = kmalloc(sizeof(*work), GFP_ATOMIC);
index 7d2fbf8..c98f35a 100644 (file)
@@ -412,10 +412,6 @@ static int zynqmp_pinconf_cfg_set(struct pinctrl_dev *pctldev,
 
                        break;
                case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
-                       param = PM_PINCTRL_CONFIG_TRI_STATE;
-                       arg = PM_PINCTRL_TRI_STATE_ENABLE;
-                       ret = zynqmp_pm_pinctrl_set_config(pin, param, arg);
-                       break;
                case PIN_CONFIG_MODE_LOW_POWER:
                        /*
                         * These cases are mentioned in dts but configurable
@@ -424,11 +420,6 @@ static int zynqmp_pinconf_cfg_set(struct pinctrl_dev *pctldev,
                         */
                        ret = 0;
                        break;
-               case PIN_CONFIG_OUTPUT_ENABLE:
-                       param = PM_PINCTRL_CONFIG_TRI_STATE;
-                       arg = PM_PINCTRL_TRI_STATE_DISABLE;
-                       ret = zynqmp_pm_pinctrl_set_config(pin, param, arg);
-                       break;
                default:
                        dev_warn(pctldev->dev,
                                 "unsupported configuration parameter '%u'\n",
index a2abfe9..8bf8b21 100644 (file)
@@ -51,6 +51,7 @@
  *                  detection.
  * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller
  * @disabled_for_mux: These IRQs were disabled because we muxed away.
+ * @ever_gpio:      This bit is set the first time we mux a pin to gpio_func.
  * @soc:            Reference to soc_data of platform specific data.
  * @regs:           Base addresses for the TLMM tiles.
  * @phys_base:      Physical base address
@@ -72,6 +73,7 @@ struct msm_pinctrl {
        DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO);
        DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO);
        DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO);
+       DECLARE_BITMAP(ever_gpio, MAX_NR_GPIO);
 
        const struct msm_pinctrl_soc_data *soc;
        void __iomem *regs[MAX_NR_TILES];
@@ -218,6 +220,25 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
 
        val = msm_readl_ctl(pctrl, g);
 
+       /*
+        * If this is the first time muxing to GPIO and the direction is
+        * output, make sure that we're not going to be glitching the pin
+        * by reading the current state of the pin and setting it as the
+        * output.
+        */
+       if (i == gpio_func && (val & BIT(g->oe_bit)) &&
+           !test_and_set_bit(group, pctrl->ever_gpio)) {
+               u32 io_val = msm_readl_io(pctrl, g);
+
+               if (io_val & BIT(g->in_bit)) {
+                       if (!(io_val & BIT(g->out_bit)))
+                               msm_writel_io(io_val | BIT(g->out_bit), pctrl, g);
+               } else {
+                       if (io_val & BIT(g->out_bit))
+                               msm_writel_io(io_val & ~BIT(g->out_bit), pctrl, g);
+               }
+       }
+
        if (egpio_func && i == egpio_func) {
                if (val & BIT(g->egpio_present))
                        val &= ~BIT(g->egpio_enable);
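The glitch-avoidance logic added to msm_pinmux_set_mux() copies the pin's current input level into its output latch before the mux switches to GPIO, so an output pin keeps driving the level it was already at. A sketch of that read-modify-write on plain words (the kernel operates on MMIO registers; the bit positions here are assumptions for illustration):

```c
#include <stdint.h>

#define OE_BIT	9u	/* output-enable, in the ctl register (assumed position) */
#define IN_BIT	0u	/* current pin level, in the io register */
#define OUT_BIT	1u	/* output latch, in the io register */

/* Mirror the pin's input level into its output latch so that muxing to
 * GPIO while the pin is an output does not glitch the line. */
static uint32_t glitch_free_out(uint32_t ctl_val, uint32_t io_val)
{
	if (!(ctl_val & (1u << OE_BIT)))
		return io_val;			/* pin is an input: nothing to do */

	if (io_val & (1u << IN_BIT))
		io_val |= 1u << OUT_BIT;	/* pin reads high: drive high */
	else
		io_val &= ~(1u << OUT_BIT);	/* pin reads low: drive low */
	return io_val;
}
```

The test_and_set_bit() on ever_gpio in the patch ensures this fixup runs only on the first mux to GPIO, so later deliberate output changes are not overwritten.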
index a48d9b7..154d58c 100644 (file)
 #include <linux/clk-provider.h>
 #include <linux/platform_device.h>
 #include <linux/platform_data/i2c-xiic.h>
+#include <linux/platform_data/i2c-ocores.h>
 #include <linux/ptp_clock_kernel.h>
 #include <linux/spi/spi.h>
 #include <linux/spi/xilinx_spi.h>
+#include <linux/spi/altera.h>
 #include <net/devlink.h>
 #include <linux/i2c.h>
 #include <linux/mtd/mtd.h>
@@ -28,6 +30,9 @@
 #define PCI_VENDOR_ID_CELESTICA                        0x18d4
 #define PCI_DEVICE_ID_CELESTICA_TIMECARD       0x1008
 
+#define PCI_VENDOR_ID_OROLIA                   0x1ad7
+#define PCI_DEVICE_ID_OROLIA_ARTCARD           0xa000
+
 static struct class timecard_class = {
        .owner          = THIS_MODULE,
        .name           = "timecard",
@@ -203,6 +208,11 @@ struct frequency_reg {
        u32     ctrl;
        u32     status;
 };
+
+struct board_config_reg {
+       u32 mro50_serial_activate;
+};
+
 #define FREQ_STATUS_VALID      BIT(31)
 #define FREQ_STATUS_ERROR      BIT(30)
 #define FREQ_STATUS_OVERRUN    BIT(29)
@@ -278,6 +288,11 @@ struct ptp_ocp_signal {
        bool            running;
 };
 
+struct ptp_ocp_serial_port {
+       int line;
+       int baud;
+};
+
 #define OCP_BOARD_ID_LEN               13
 #define OCP_SERIAL_LEN                 6
 
@@ -289,6 +304,7 @@ struct ptp_ocp {
        struct tod_reg __iomem  *tod;
        struct pps_reg __iomem  *pps_to_ext;
        struct pps_reg __iomem  *pps_to_clk;
+       struct board_config_reg __iomem *board_config;
        struct gpio_reg __iomem *pps_select;
        struct gpio_reg __iomem *sma_map1;
        struct gpio_reg __iomem *sma_map2;
@@ -305,6 +321,7 @@ struct ptp_ocp {
        struct ptp_ocp_ext_src  *ts2;
        struct ptp_ocp_ext_src  *ts3;
        struct ptp_ocp_ext_src  *ts4;
+       struct ocp_art_gpio_reg __iomem *art_sma;
        struct img_reg __iomem  *image;
        struct ptp_clock        *ptp;
        struct ptp_clock_info   ptp_info;
@@ -318,10 +335,10 @@ struct ptp_ocp {
        time64_t                gnss_lost;
        int                     id;
        int                     n_irqs;
-       int                     gnss_port;
-       int                     gnss2_port;
-       int                     mac_port;       /* miniature atomic clock */
-       int                     nmea_port;
+       struct ptp_ocp_serial_port      gnss_port;
+       struct ptp_ocp_serial_port      gnss2_port;
+       struct ptp_ocp_serial_port      mac_port;   /* miniature atomic clock */
+       struct ptp_ocp_serial_port      nmea_port;
        bool                    fw_loader;
        u8                      fw_tag;
        u16                     fw_version;
@@ -365,8 +382,12 @@ static int ptp_ocp_signal_from_perout(struct ptp_ocp *bp, int gen,
 static int ptp_ocp_signal_enable(void *priv, u32 req, bool enable);
 static int ptp_ocp_sma_store(struct ptp_ocp *bp, const char *buf, int sma_nr);
 
+static int ptp_ocp_art_board_init(struct ptp_ocp *bp, struct ocp_resource *r);
+
 static const struct ocp_attr_group fb_timecard_groups[];
 
+static const struct ocp_attr_group art_timecard_groups[];
+
 struct ptp_ocp_eeprom_map {
        u16     off;
        u16     len;
@@ -389,6 +410,12 @@ static struct ptp_ocp_eeprom_map fb_eeprom_map[] = {
        { }
 };
 
+static struct ptp_ocp_eeprom_map art_eeprom_map[] = {
+       { EEPROM_ENTRY(0x200 + 0x43, board_id) },
+       { EEPROM_ENTRY(0x200 + 0x63, serial) },
+       { }
+};
+
 #define bp_assign_entry(bp, res, val) ({                               \
        uintptr_t addr = (uintptr_t)(bp) + (res)->bp_offset;            \
        *(typeof(val) *)addr = val;                                     \
@@ -430,6 +457,13 @@ static struct ptp_ocp_eeprom_map fb_eeprom_map[] = {
  * 14: Signal Generator 4
  * 15: TS3
  * 16: TS4
+ --
+ * 8: Orolia TS1
+ * 10: Orolia TS2
+ * 11: Orolia TS0 (GNSS)
+ * 12: Orolia PPS
+ * 14: Orolia TS3
+ * 15: Orolia TS4
  */
 
 static struct ocp_resource ocp_fb_resource[] = {
@@ -596,14 +630,23 @@ static struct ocp_resource ocp_fb_resource[] = {
        {
                OCP_SERIAL_RESOURCE(gnss_port),
                .offset = 0x00160000 + 0x1000, .irq_vec = 3,
+               .extra = &(struct ptp_ocp_serial_port) {
+                       .baud = 115200,
+               },
        },
        {
                OCP_SERIAL_RESOURCE(gnss2_port),
                .offset = 0x00170000 + 0x1000, .irq_vec = 4,
+               .extra = &(struct ptp_ocp_serial_port) {
+                       .baud = 115200,
+               },
        },
        {
                OCP_SERIAL_RESOURCE(mac_port),
                .offset = 0x00180000 + 0x1000, .irq_vec = 5,
+               .extra = &(struct ptp_ocp_serial_port) {
+                       .baud = 57600,
+               },
        },
        {
                OCP_SERIAL_RESOURCE(nmea_port),
@@ -647,9 +690,141 @@ static struct ocp_resource ocp_fb_resource[] = {
        { }
 };
 
+#define OCP_ART_CONFIG_SIZE            144
+#define OCP_ART_TEMP_TABLE_SIZE                368
+
+struct ocp_art_gpio_reg {
+       struct {
+               u32     gpio;
+               u32     __pad[3];
+       } map[4];
+};
+
+static struct ocp_resource ocp_art_resource[] = {
+       {
+               OCP_MEM_RESOURCE(reg),
+               .offset = 0x01000000, .size = 0x10000,
+       },
+       {
+               OCP_SERIAL_RESOURCE(gnss_port),
+               .offset = 0x00160000 + 0x1000, .irq_vec = 3,
+               .extra = &(struct ptp_ocp_serial_port) {
+                       .baud = 115200,
+               },
+       },
+       {
+               OCP_MEM_RESOURCE(art_sma),
+               .offset = 0x003C0000, .size = 0x1000,
+       },
+       /* Timestamp associated with GNSS1 receiver PPS */
+       {
+               OCP_EXT_RESOURCE(ts0),
+               .offset = 0x360000, .size = 0x20, .irq_vec = 12,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 0,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       {
+               OCP_EXT_RESOURCE(ts1),
+               .offset = 0x380000, .size = 0x20, .irq_vec = 8,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 1,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       {
+               OCP_EXT_RESOURCE(ts2),
+               .offset = 0x390000, .size = 0x20, .irq_vec = 10,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 2,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       {
+               OCP_EXT_RESOURCE(ts3),
+               .offset = 0x3A0000, .size = 0x20, .irq_vec = 14,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 3,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       {
+               OCP_EXT_RESOURCE(ts4),
+               .offset = 0x3B0000, .size = 0x20, .irq_vec = 15,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 4,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       /* Timestamp associated with Internal PPS of the card */
+       {
+               OCP_EXT_RESOURCE(pps),
+               .offset = 0x00330000, .size = 0x20, .irq_vec = 11,
+               .extra = &(struct ptp_ocp_ext_info) {
+                       .index = 5,
+                       .irq_fcn = ptp_ocp_ts_irq,
+                       .enable = ptp_ocp_ts_enable,
+               },
+       },
+       {
+               OCP_SPI_RESOURCE(spi_flash),
+               .offset = 0x00310000, .size = 0x10000, .irq_vec = 9,
+               .extra = &(struct ptp_ocp_flash_info) {
+                       .name = "spi_altera", .pci_offset = 0,
+                       .data_size = sizeof(struct altera_spi_platform_data),
+                       .data = &(struct altera_spi_platform_data) {
+                               .num_chipselect = 1,
+                               .num_devices = 1,
+                               .devices = &(struct spi_board_info) {
+                                       .modalias = "spi-nor",
+                               },
+                       },
+               },
+       },
+       {
+               OCP_I2C_RESOURCE(i2c_ctrl),
+               .offset = 0x350000, .size = 0x100, .irq_vec = 4,
+               .extra = &(struct ptp_ocp_i2c_info) {
+                       .name = "ocores-i2c",
+                       .fixed_rate = 400000,
+                       .data_size = sizeof(struct ocores_i2c_platform_data),
+                       .data = &(struct ocores_i2c_platform_data) {
+                               .clock_khz = 125000,
+                               .bus_khz = 400,
+                               .num_devices = 1,
+                               .devices = &(struct i2c_board_info) {
+                                       I2C_BOARD_INFO("24c08", 0x50),
+                               },
+                       },
+               },
+       },
+       {
+               OCP_SERIAL_RESOURCE(mac_port),
+               .offset = 0x00190000, .irq_vec = 7,
+               .extra = &(struct ptp_ocp_serial_port) {
+                       .baud = 9600,
+               },
+       },
+       {
+               OCP_MEM_RESOURCE(board_config),
+               .offset = 0x210000, .size = 0x1000,
+       },
+       {
+               .setup = ptp_ocp_art_board_init,
+       },
+       { }
+};
+
 static const struct pci_device_id ptp_ocp_pcidev_id[] = {
        { PCI_DEVICE_DATA(FACEBOOK, TIMECARD, &ocp_fb_resource) },
        { PCI_DEVICE_DATA(CELESTICA, TIMECARD, &ocp_fb_resource) },
+       { PCI_DEVICE_DATA(OROLIA, ARTCARD, &ocp_art_resource) },
        { }
 };
 MODULE_DEVICE_TABLE(pci, ptp_ocp_pcidev_id);
@@ -714,6 +889,19 @@ static const struct ocp_selector ptp_ocp_sma_out[] = {
        { }
 };
 
+static const struct ocp_selector ptp_ocp_art_sma_in[] = {
+       { .name = "PPS1",       .value = 0x0001 },
+       { .name = "10Mhz",      .value = 0x0008 },
+       { }
+};
+
+static const struct ocp_selector ptp_ocp_art_sma_out[] = {
+       { .name = "PHC",        .value = 0x0002 },
+       { .name = "GNSS",       .value = 0x0004 },
+       { .name = "10Mhz",      .value = 0x0010 },
+       { }
+};
+
 struct ocp_sma_op {
        const struct ocp_selector *tbl[2];
        void (*init)(struct ptp_ocp *bp);
@@ -1342,11 +1530,9 @@ ptp_ocp_devlink_fw_image(struct devlink *devlink, const struct firmware *fw,
        hdr = (const struct ptp_ocp_firmware_header *)fw->data;
        if (memcmp(hdr->magic, OCP_FIRMWARE_MAGIC_HEADER, 4)) {
                devlink_flash_update_status_notify(devlink,
-                       "No firmware header found, flashing raw image",
+                       "No firmware header found, cancel firmware upgrade",
                        NULL, 0, 0);
-               offset = 0;
-               length = fw->size;
-               goto out;
+               return -EINVAL;
        }
 
        if (be16_to_cpu(hdr->pci_vendor_id) != bp->pdev->vendor ||
@@ -1374,7 +1560,6 @@ ptp_ocp_devlink_fw_image(struct devlink *devlink, const struct firmware *fw,
                return -EINVAL;
        }
 
-out:
        *data = &fw->data[offset];
        *size = length;
 
@@ -1872,11 +2057,15 @@ ptp_ocp_serial_line(struct ptp_ocp *bp, struct ocp_resource *r)
 static int
 ptp_ocp_register_serial(struct ptp_ocp *bp, struct ocp_resource *r)
 {
-       int port;
+       struct ptp_ocp_serial_port *p = (struct ptp_ocp_serial_port *)r->extra;
+       struct ptp_ocp_serial_port port = {};
 
-       port = ptp_ocp_serial_line(bp, r);
-       if (port < 0)
-               return port;
+       port.line = ptp_ocp_serial_line(bp, r);
+       if (port.line < 0)
+               return port.line;
+
+       if (p)
+               port.baud = p->baud;
 
        bp_assign_entry(bp, r, port);
 
@@ -2257,6 +2446,121 @@ ptp_ocp_register_resources(struct ptp_ocp *bp, kernel_ulong_t driver_data)
        return err;
 }
 
+static void
+ptp_ocp_art_sma_init(struct ptp_ocp *bp)
+{
+       u32 reg;
+       int i;
+
+       /* defaults */
+       bp->sma[0].mode = SMA_MODE_IN;
+       bp->sma[1].mode = SMA_MODE_IN;
+       bp->sma[2].mode = SMA_MODE_OUT;
+       bp->sma[3].mode = SMA_MODE_OUT;
+
+       bp->sma[0].default_fcn = 0x08;  /* IN: 10Mhz */
+       bp->sma[1].default_fcn = 0x01;  /* IN: PPS1 */
+       bp->sma[2].default_fcn = 0x10;  /* OUT: 10Mhz */
+       bp->sma[3].default_fcn = 0x02;  /* OUT: PHC */
+
+       /* If no SMA map, the pin functions and directions are fixed. */
+       if (!bp->art_sma) {
+               for (i = 0; i < 4; i++) {
+                       bp->sma[i].fixed_fcn = true;
+                       bp->sma[i].fixed_dir = true;
+               }
+               return;
+       }
+
+       for (i = 0; i < 4; i++) {
+               reg = ioread32(&bp->art_sma->map[i].gpio);
+
+               switch (reg & 0xff) {
+               case 0:
+                       bp->sma[i].fixed_fcn = true;
+                       bp->sma[i].fixed_dir = true;
+                       break;
+               case 1:
+               case 8:
+                       bp->sma[i].mode = SMA_MODE_IN;
+                       break;
+               default:
+                       bp->sma[i].mode = SMA_MODE_OUT;
+                       break;
+               }
+       }
+}
+
+static u32
+ptp_ocp_art_sma_get(struct ptp_ocp *bp, int sma_nr)
+{
+       if (bp->sma[sma_nr - 1].fixed_fcn)
+               return bp->sma[sma_nr - 1].default_fcn;
+
+       return ioread32(&bp->art_sma->map[sma_nr - 1].gpio) & 0xff;
+}
+
+/* note: store 0 is considered invalid. */
+static int
+ptp_ocp_art_sma_set(struct ptp_ocp *bp, int sma_nr, u32 val)
+{
+       unsigned long flags;
+       u32 __iomem *gpio;
+       int err = 0;
+       u32 reg;
+
+       val &= SMA_SELECT_MASK;
+       if (hweight32(val) > 1)
+               return -EINVAL;
+
+       gpio = &bp->art_sma->map[sma_nr - 1].gpio;
+
+       spin_lock_irqsave(&bp->lock, flags);
+       reg = ioread32(gpio);
+       if (((reg >> 16) & val) == 0) {
+               err = -EOPNOTSUPP;
+       } else {
+               reg = (reg & 0xff00) | (val & 0xff);
+               iowrite32(reg, gpio);
+       }
+       spin_unlock_irqrestore(&bp->lock, flags);
+
+       return err;
+}
+
+static const struct ocp_sma_op ocp_art_sma_op = {
+       .tbl            = { ptp_ocp_art_sma_in, ptp_ocp_art_sma_out },
+       .init           = ptp_ocp_art_sma_init,
+       .get            = ptp_ocp_art_sma_get,
+       .set_inputs     = ptp_ocp_art_sma_set,
+       .set_output     = ptp_ocp_art_sma_set,
+};
+
+/* ART specific board initializers; last "resource" registered. */
+static int
+ptp_ocp_art_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
+{
+       int err;
+
+       bp->flash_start = 0x1000000;
+       bp->eeprom_map = art_eeprom_map;
+       bp->fw_cap = OCP_CAP_BASIC;
+       bp->fw_version = ioread32(&bp->reg->version);
+       bp->fw_tag = 2;
+       bp->sma_op = &ocp_art_sma_op;
+
+       /* Enable MAC serial port during initialisation */
+       iowrite32(1, &bp->board_config->mro50_serial_activate);
+
+       ptp_ocp_sma_init(bp);
+
+       err = ptp_ocp_attr_group_add(bp, art_timecard_groups);
+       if (err)
+               return err;
+
+       return ptp_ocp_init_clock(bp);
+}
+
 static ssize_t
 ptp_ocp_show_output(const struct ocp_selector *tbl, u32 val, char *buf,
                    int def_val)
@@ -3030,6 +3334,130 @@ DEVICE_FREQ_GROUP(freq2, 1);
 DEVICE_FREQ_GROUP(freq3, 2);
 DEVICE_FREQ_GROUP(freq4, 3);
 
+static ssize_t
+disciplining_config_read(struct file *filp, struct kobject *kobj,
+                        struct bin_attribute *bin_attr, char *buf,
+                        loff_t off, size_t count)
+{
+       struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
+       size_t size = OCP_ART_CONFIG_SIZE;
+       struct nvmem_device *nvmem;
+       ssize_t err;
+
+       nvmem = ptp_ocp_nvmem_device_get(bp, NULL);
+       if (IS_ERR(nvmem))
+               return PTR_ERR(nvmem);
+
+       if (off > size) {
+               err = 0;
+               goto out;
+       }
+
+       if (off + count > size)
+               count = size - off;
+
+       // the configuration is in the very beginning of the EEPROM
+       err = nvmem_device_read(nvmem, off, count, buf);
+       if (err != count) {
+               err = -EFAULT;
+               goto out;
+       }
+
+out:
+       ptp_ocp_nvmem_device_put(&nvmem);
+
+       return err;
+}
+
+static ssize_t
+disciplining_config_write(struct file *filp, struct kobject *kobj,
+                         struct bin_attribute *bin_attr, char *buf,
+                         loff_t off, size_t count)
+{
+       struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
+       struct nvmem_device *nvmem;
+       ssize_t err;
+
+       /* Allow write of the whole area only */
+       if (off || count != OCP_ART_CONFIG_SIZE)
+               return -EFAULT;
+
+       nvmem = ptp_ocp_nvmem_device_get(bp, NULL);
+       if (IS_ERR(nvmem))
+               return PTR_ERR(nvmem);
+
+       err = nvmem_device_write(nvmem, 0x00, count, buf);
+       if (err != count)
+               err = -EFAULT;
+
+       ptp_ocp_nvmem_device_put(&nvmem);
+
+       return err;
+}
+static BIN_ATTR_RW(disciplining_config, OCP_ART_CONFIG_SIZE);
+
+static ssize_t
+temperature_table_read(struct file *filp, struct kobject *kobj,
+                      struct bin_attribute *bin_attr, char *buf,
+                      loff_t off, size_t count)
+{
+       struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
+       size_t size = OCP_ART_TEMP_TABLE_SIZE;
+       struct nvmem_device *nvmem;
+       ssize_t err;
+
+       nvmem = ptp_ocp_nvmem_device_get(bp, NULL);
+       if (IS_ERR(nvmem))
+               return PTR_ERR(nvmem);
+
+       if (off > size) {
+               err = 0;
+               goto out;
+       }
+
+       if (off + count > size)
+               count = size - off;
+
+       // the temperature table is stored at offset 0x90 in the EEPROM
+       err = nvmem_device_read(nvmem, 0x90 + off, count, buf);
+       if (err != count) {
+               err = -EFAULT;
+               goto out;
+       }
+
+out:
+       ptp_ocp_nvmem_device_put(&nvmem);
+
+       return err;
+}
+
+static ssize_t
+temperature_table_write(struct file *filp, struct kobject *kobj,
+                       struct bin_attribute *bin_attr, char *buf,
+                       loff_t off, size_t count)
+{
+       struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
+       struct nvmem_device *nvmem;
+       ssize_t err;
+
+       /* Allow write of the whole area only */
+       if (off || count != OCP_ART_TEMP_TABLE_SIZE)
+               return -EFAULT;
+
+       nvmem = ptp_ocp_nvmem_device_get(bp, NULL);
+       if (IS_ERR(nvmem))
+               return PTR_ERR(nvmem);
+
+       err = nvmem_device_write(nvmem, 0x90, count, buf);
+       if (err != count)
+               err = -EFAULT;
+
+       ptp_ocp_nvmem_device_put(&nvmem);
+
+       return err;
+}
+static BIN_ATTR_RW(temperature_table, OCP_ART_TEMP_TABLE_SIZE);
+
 static struct attribute *fb_timecard_attrs[] = {
        &dev_attr_serialnum.attr,
        &dev_attr_gnss_sync.attr,
@@ -3049,9 +3477,11 @@ static struct attribute *fb_timecard_attrs[] = {
        &dev_attr_tod_correction.attr,
        NULL,
 };
+
 static const struct attribute_group fb_timecard_group = {
        .attrs = fb_timecard_attrs,
 };
+
 static const struct ocp_attr_group fb_timecard_groups[] = {
        { .cap = OCP_CAP_BASIC,     .group = &fb_timecard_group },
        { .cap = OCP_CAP_SIGNAL,    .group = &fb_timecard_signal0_group },
@@ -3065,6 +3495,37 @@ static const struct ocp_attr_group fb_timecard_groups[] = {
        { },
 };
 
+static struct attribute *art_timecard_attrs[] = {
+       &dev_attr_serialnum.attr,
+       &dev_attr_clock_source.attr,
+       &dev_attr_available_clock_sources.attr,
+       &dev_attr_utc_tai_offset.attr,
+       &dev_attr_ts_window_adjust.attr,
+       &dev_attr_sma1.attr,
+       &dev_attr_sma2.attr,
+       &dev_attr_sma3.attr,
+       &dev_attr_sma4.attr,
+       &dev_attr_available_sma_inputs.attr,
+       &dev_attr_available_sma_outputs.attr,
+       NULL,
+};
+
+static struct bin_attribute *bin_art_timecard_attrs[] = {
+       &bin_attr_disciplining_config,
+       &bin_attr_temperature_table,
+       NULL,
+};
+
+static const struct attribute_group art_timecard_group = {
+       .attrs = art_timecard_attrs,
+       .bin_attrs = bin_art_timecard_attrs,
+};
+
+static const struct ocp_attr_group art_timecard_groups[] = {
+       { .cap = OCP_CAP_BASIC,     .group = &art_timecard_group },
+       { },
+};
+
 static void
 gpio_input_map(char *buf, struct ptp_ocp *bp, u16 map[][2], u16 bit,
               const char *def)
@@ -3177,14 +3638,16 @@ ptp_ocp_summary_show(struct seq_file *s, void *data)
        bp = dev_get_drvdata(dev);
 
        seq_printf(s, "%7s: /dev/ptp%d\n", "PTP", ptp_clock_index(bp->ptp));
-       if (bp->gnss_port != -1)
-               seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1", bp->gnss_port);
-       if (bp->gnss2_port != -1)
-               seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2", bp->gnss2_port);
-       if (bp->mac_port != -1)
-               seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port);
-       if (bp->nmea_port != -1)
-               seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port);
+       if (bp->gnss_port.line != -1)
+               seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1",
+                          bp->gnss_port.line);
+       if (bp->gnss2_port.line != -1)
+               seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2",
+                          bp->gnss2_port.line);
+       if (bp->mac_port.line != -1)
+               seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port.line);
+       if (bp->nmea_port.line != -1)
+               seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port.line);
 
        memset(sma_val, 0xff, sizeof(sma_val));
        if (bp->sma_map1) {
@@ -3508,10 +3971,10 @@ ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
 
        bp->ptp_info = ptp_ocp_clock_info;
        spin_lock_init(&bp->lock);
-       bp->gnss_port = -1;
-       bp->gnss2_port = -1;
-       bp->mac_port = -1;
-       bp->nmea_port = -1;
+       bp->gnss_port.line = -1;
+       bp->gnss2_port.line = -1;
+       bp->mac_port.line = -1;
+       bp->nmea_port.line = -1;
        bp->pdev = pdev;
 
        device_initialize(&bp->dev);
@@ -3569,20 +4032,20 @@ ptp_ocp_complete(struct ptp_ocp *bp)
        struct pps_device *pps;
        char buf[32];
 
-       if (bp->gnss_port != -1) {
-               sprintf(buf, "ttyS%d", bp->gnss_port);
+       if (bp->gnss_port.line != -1) {
+               sprintf(buf, "ttyS%d", bp->gnss_port.line);
                ptp_ocp_link_child(bp, buf, "ttyGNSS");
        }
-       if (bp->gnss2_port != -1) {
-               sprintf(buf, "ttyS%d", bp->gnss2_port);
+       if (bp->gnss2_port.line != -1) {
+               sprintf(buf, "ttyS%d", bp->gnss2_port.line);
                ptp_ocp_link_child(bp, buf, "ttyGNSS2");
        }
-       if (bp->mac_port != -1) {
-               sprintf(buf, "ttyS%d", bp->mac_port);
+       if (bp->mac_port.line != -1) {
+               sprintf(buf, "ttyS%d", bp->mac_port.line);
                ptp_ocp_link_child(bp, buf, "ttyMAC");
        }
-       if (bp->nmea_port != -1) {
-               sprintf(buf, "ttyS%d", bp->nmea_port);
+       if (bp->nmea_port.line != -1) {
+               sprintf(buf, "ttyS%d", bp->nmea_port.line);
                ptp_ocp_link_child(bp, buf, "ttyNMEA");
        }
        sprintf(buf, "ptp%d", ptp_clock_index(bp->ptp));
@@ -3638,16 +4101,20 @@ ptp_ocp_info(struct ptp_ocp *bp)
 
        ptp_ocp_phc_info(bp);
 
-       ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port, 115200);
-       ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port, 115200);
-       ptp_ocp_serial_info(dev, "MAC", bp->mac_port, 57600);
-       if (bp->nmea_out && bp->nmea_port != -1) {
-               int baud = -1;
+       ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port.line,
+                           bp->gnss_port.baud);
+       ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port.line,
+                           bp->gnss2_port.baud);
+       ptp_ocp_serial_info(dev, "MAC", bp->mac_port.line, bp->mac_port.baud);
+       if (bp->nmea_out && bp->nmea_port.line != -1) {
+               bp->nmea_port.baud = -1;
 
                reg = ioread32(&bp->nmea_out->uart_baud);
                if (reg < ARRAY_SIZE(nmea_baud))
-                       baud = nmea_baud[reg];
-               ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port, baud);
+                       bp->nmea_port.baud = nmea_baud[reg];
+
+               ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port.line,
+                                   bp->nmea_port.baud);
        }
 }
 
@@ -3688,14 +4155,14 @@ ptp_ocp_detach(struct ptp_ocp *bp)
        for (i = 0; i < 4; i++)
                if (bp->signal_out[i])
                        ptp_ocp_unregister_ext(bp->signal_out[i]);
-       if (bp->gnss_port != -1)
-               serial8250_unregister_port(bp->gnss_port);
-       if (bp->gnss2_port != -1)
-               serial8250_unregister_port(bp->gnss2_port);
-       if (bp->mac_port != -1)
-               serial8250_unregister_port(bp->mac_port);
-       if (bp->nmea_port != -1)
-               serial8250_unregister_port(bp->nmea_port);
+       if (bp->gnss_port.line != -1)
+               serial8250_unregister_port(bp->gnss_port.line);
+       if (bp->gnss2_port.line != -1)
+               serial8250_unregister_port(bp->gnss2_port.line);
+       if (bp->mac_port.line != -1)
+               serial8250_unregister_port(bp->mac_port.line);
+       if (bp->nmea_port.line != -1)
+               serial8250_unregister_port(bp->nmea_port.line);
        platform_device_unregister(bp->spi_flash);
        platform_device_unregister(bp->i2c_ctrl);
        if (bp->i2c_clk)
index b8de251..bb63edb 100644 (file)
@@ -423,6 +423,7 @@ config RTC_DRV_ISL1208
 
 config RTC_DRV_ISL12022
        tristate "Intersil ISL12022"
+       select REGMAP_I2C
        help
          If you say yes here you get support for the
          Intersil ISL12022 RTC chip.
index bdb1df8..610413b 100644 (file)
@@ -1352,10 +1352,10 @@ static void cmos_check_acpi_rtc_status(struct device *dev,
 
 static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
 {
-       cmos_wake_setup(&pnp->dev);
+       int irq, ret;
 
        if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) {
-               unsigned int irq = 0;
+               irq = 0;
 #ifdef CONFIG_X86
                /* Some machines contain a PNP entry for the RTC, but
                 * don't define the IRQ. It should always be safe to
@@ -1364,13 +1364,17 @@ static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
                if (nr_legacy_irqs())
                        irq = RTC_IRQ;
 #endif
-               return cmos_do_probe(&pnp->dev,
-                               pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
        } else {
-               return cmos_do_probe(&pnp->dev,
-                               pnp_get_resource(pnp, IORESOURCE_IO, 0),
-                               pnp_irq(pnp, 0));
+               irq = pnp_irq(pnp, 0);
        }
+
+       ret = cmos_do_probe(&pnp->dev, pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
+       if (ret)
+               return ret;
+
+       cmos_wake_setup(&pnp->dev);
+
+       return 0;
 }
 
 static void cmos_pnp_remove(struct pnp_dev *pnp)
@@ -1454,10 +1458,9 @@ static inline void cmos_of_init(struct platform_device *pdev) {}
 static int __init cmos_platform_probe(struct platform_device *pdev)
 {
        struct resource *resource;
-       int irq;
+       int irq, ret;
 
        cmos_of_init(pdev);
-       cmos_wake_setup(&pdev->dev);
 
        if (RTC_IOMAPPED)
                resource = platform_get_resource(pdev, IORESOURCE_IO, 0);
@@ -1467,7 +1470,13 @@ static int __init cmos_platform_probe(struct platform_device *pdev)
        if (irq < 0)
                irq = -1;
 
-       return cmos_do_probe(&pdev->dev, resource, irq);
+       ret = cmos_do_probe(&pdev->dev, resource, irq);
+       if (ret)
+               return ret;
+
+       cmos_wake_setup(&pdev->dev);
+
+       return 0;
 }
 
 static int cmos_platform_remove(struct platform_device *pdev)
index a24331b..5db9c73 100644 (file)
@@ -132,7 +132,7 @@ ds1685_rtc_bin2bcd(struct ds1685_priv *rtc, u8 val, u8 bin_mask, u8 bcd_mask)
 }
 
 /**
- * s1685_rtc_check_mday - check validity of the day of month.
+ * ds1685_rtc_check_mday - check validity of the day of month.
  * @rtc: pointer to the ds1685 rtc structure.
  * @mday: day of month.
  *
index c2717bb..c828bc8 100644 (file)
@@ -265,18 +265,17 @@ static int gamecube_rtc_read_offset_from_sram(struct priv *d)
         * SRAM address as on previous consoles.
         */
        ret = regmap_read(d->regmap, RTC_SRAM_BIAS, &d->rtc_bias);
-       if (ret) {
-               pr_err("failed to get the RTC bias\n");
-               iounmap(hw_srnprot);
-               return -1;
-       }
 
        /* Reset SRAM access to how it was before, our job here is done. */
        if (old != 0x7bf)
                iowrite32be(old, hw_srnprot);
+
        iounmap(hw_srnprot);
 
-       return 0;
+       if (ret)
+               pr_err("failed to get the RTC bias\n");
+
+       return ret;
 }
 
 static const struct regmap_range rtc_rd_ranges[] = {
index 79461de..ca677c4 100644 (file)
@@ -16,6 +16,7 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/regmap.h>
 
 /* ISL register offsets */
 #define ISL12022_REG_SC                0x00
@@ -42,83 +43,32 @@ static struct i2c_driver isl12022_driver;
 
 struct isl12022 {
        struct rtc_device *rtc;
-
-       bool write_enabled;     /* true if write enable is set */
+       struct regmap *regmap;
 };
 
-
-static int isl12022_read_regs(struct i2c_client *client, uint8_t reg,
-                             uint8_t *data, size_t n)
-{
-       struct i2c_msg msgs[] = {
-               {
-                       .addr   = client->addr,
-                       .flags  = 0,
-                       .len    = 1,
-                       .buf    = data
-               },              /* setup read ptr */
-               {
-                       .addr   = client->addr,
-                       .flags  = I2C_M_RD,
-                       .len    = n,
-                       .buf    = data
-               }
-       };
-
-       int ret;
-
-       data[0] = reg;
-       ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
-       if (ret != ARRAY_SIZE(msgs)) {
-               dev_err(&client->dev, "%s: read error, ret=%d\n",
-                       __func__, ret);
-               return -EIO;
-       }
-
-       return 0;
-}
-
-
-static int isl12022_write_reg(struct i2c_client *client,
-                             uint8_t reg, uint8_t val)
-{
-       uint8_t data[2] = { reg, val };
-       int err;
-
-       err = i2c_master_send(client, data, sizeof(data));
-       if (err != sizeof(data)) {
-               dev_err(&client->dev,
-                       "%s: err=%d addr=%02x, data=%02x\n",
-                       __func__, err, data[0], data[1]);
-               return -EIO;
-       }
-
-       return 0;
-}
-
-
 /*
  * In the routines that deal directly with the isl12022 hardware, we use
  * rtc_time -- month 0-11, hour 0-23, yr = calendar year-epoch.
  */
 static int isl12022_rtc_read_time(struct device *dev, struct rtc_time *tm)
 {
-       struct i2c_client *client = to_i2c_client(dev);
+       struct isl12022 *isl12022 = dev_get_drvdata(dev);
+       struct regmap *regmap = isl12022->regmap;
        uint8_t buf[ISL12022_REG_INT + 1];
        int ret;
 
-       ret = isl12022_read_regs(client, ISL12022_REG_SC, buf, sizeof(buf));
+       ret = regmap_bulk_read(regmap, ISL12022_REG_SC, buf, sizeof(buf));
        if (ret)
                return ret;
 
        if (buf[ISL12022_REG_SR] & (ISL12022_SR_LBAT85 | ISL12022_SR_LBAT75)) {
-               dev_warn(&client->dev,
+               dev_warn(dev,
                         "voltage dropped below %u%%, "
                         "date and time is not reliable.\n",
                         buf[ISL12022_REG_SR] & ISL12022_SR_LBAT85 ? 85 : 75);
        }
 
-       dev_dbg(&client->dev,
+       dev_dbg(dev,
                "%s: raw data is sec=%02x, min=%02x, hr=%02x, "
                "mday=%02x, mon=%02x, year=%02x, wday=%02x, "
                "sr=%02x, int=%02x",
@@ -141,65 +91,25 @@ static int isl12022_rtc_read_time(struct device *dev, struct rtc_time *tm)
        tm->tm_mon = bcd2bin(buf[ISL12022_REG_MO] & 0x1F) - 1;
        tm->tm_year = bcd2bin(buf[ISL12022_REG_YR]) + 100;
 
-       dev_dbg(&client->dev, "%s: secs=%d, mins=%d, hours=%d, "
-               "mday=%d, mon=%d, year=%d, wday=%d\n",
-               __func__,
-               tm->tm_sec, tm->tm_min, tm->tm_hour,
-               tm->tm_mday, tm->tm_mon, tm->tm_year, tm->tm_wday);
+       dev_dbg(dev, "%s: %ptR\n", __func__, tm);
 
        return 0;
 }
 
 static int isl12022_rtc_set_time(struct device *dev, struct rtc_time *tm)
 {
-       struct i2c_client *client = to_i2c_client(dev);
-       struct isl12022 *isl12022 = i2c_get_clientdata(client);
-       size_t i;
+       struct isl12022 *isl12022 = dev_get_drvdata(dev);
+       struct regmap *regmap = isl12022->regmap;
        int ret;
        uint8_t buf[ISL12022_REG_DW + 1];
 
-       dev_dbg(&client->dev, "%s: secs=%d, mins=%d, hours=%d, "
-               "mday=%d, mon=%d, year=%d, wday=%d\n",
-               __func__,
-               tm->tm_sec, tm->tm_min, tm->tm_hour,
-               tm->tm_mday, tm->tm_mon, tm->tm_year, tm->tm_wday);
-
-       if (!isl12022->write_enabled) {
-
-               ret = isl12022_read_regs(client, ISL12022_REG_INT, buf, 1);
-               if (ret)
-                       return ret;
-
-               /* Check if WRTC (write rtc enable) is set factory default is
-                * 0 (not set) */
-               if (!(buf[0] & ISL12022_INT_WRTC)) {
-                       dev_info(&client->dev,
-                                "init write enable and 24 hour format\n");
-
-                       /* Set the write enable bit. */
-                       ret = isl12022_write_reg(client,
-                                                ISL12022_REG_INT,
-                                                buf[0] | ISL12022_INT_WRTC);
-                       if (ret)
-                               return ret;
-
-                       /* Write to any RTC register to start RTC, we use the
-                        * HR register, setting the MIL bit to use the 24 hour
-                        * format. */
-                       ret = isl12022_read_regs(client, ISL12022_REG_HR,
-                                                buf, 1);
-                       if (ret)
-                               return ret;
-
-                       ret = isl12022_write_reg(client,
-                                                ISL12022_REG_HR,
-                                                buf[0] | ISL12022_HR_MIL);
-                       if (ret)
-                               return ret;
-               }
-
-               isl12022->write_enabled = true;
-       }
+       dev_dbg(dev, "%s: %ptR\n", __func__, tm);
+
+       /* Ensure the write enable bit is set. */
+       ret = regmap_update_bits(regmap, ISL12022_REG_INT,
+                                ISL12022_INT_WRTC, ISL12022_INT_WRTC);
+       if (ret)
+               return ret;
 
        /* hours, minutes and seconds */
        buf[ISL12022_REG_SC] = bin2bcd(tm->tm_sec);
@@ -216,15 +126,8 @@ static int isl12022_rtc_set_time(struct device *dev, struct rtc_time *tm)
 
        buf[ISL12022_REG_DW] = tm->tm_wday & 0x07;
 
-       /* write register's data */
-       for (i = 0; i < ARRAY_SIZE(buf); i++) {
-               ret = isl12022_write_reg(client, ISL12022_REG_SC + i,
-                                        buf[ISL12022_REG_SC + i]);
-               if (ret)
-                       return -EIO;
-       }
-
-       return 0;
+       return regmap_bulk_write(isl12022->regmap, ISL12022_REG_SC,
+                                buf, sizeof(buf));
 }
 
 static const struct rtc_class_ops isl12022_rtc_ops = {
@@ -232,6 +135,12 @@ static const struct rtc_class_ops isl12022_rtc_ops = {
        .set_time       = isl12022_rtc_set_time,
 };
 
+static const struct regmap_config regmap_config = {
+       .reg_bits = 8,
+       .val_bits = 8,
+       .use_single_write = true,
+};
+
 static int isl12022_probe(struct i2c_client *client)
 {
        struct isl12022 *isl12022;
@@ -243,13 +152,23 @@ static int isl12022_probe(struct i2c_client *client)
                                GFP_KERNEL);
        if (!isl12022)
                return -ENOMEM;
+       dev_set_drvdata(&client->dev, isl12022);
+
+       isl12022->regmap = devm_regmap_init_i2c(client, &regmap_config);
+       if (IS_ERR(isl12022->regmap)) {
+               dev_err(&client->dev, "regmap allocation failed\n");
+               return PTR_ERR(isl12022->regmap);
+       }
+
+       isl12022->rtc = devm_rtc_allocate_device(&client->dev);
+       if (IS_ERR(isl12022->rtc))
+               return PTR_ERR(isl12022->rtc);
 
-       i2c_set_clientdata(client, isl12022);
+       isl12022->rtc->ops = &isl12022_rtc_ops;
+       isl12022->rtc->range_min = RTC_TIMESTAMP_BEGIN_2000;
+       isl12022->rtc->range_max = RTC_TIMESTAMP_END_2099;
 
-       isl12022->rtc = devm_rtc_device_register(&client->dev,
-                                       isl12022_driver.driver.name,
-                                       &isl12022_rtc_ops, THIS_MODULE);
-       return PTR_ERR_OR_ZERO(isl12022->rtc);
+       return devm_rtc_register_device(isl12022->rtc);
 }
 
 #ifdef CONFIG_OF
index 6e51df7..c383719 100644 (file)
@@ -257,11 +257,6 @@ static void jz4740_rtc_power_off(void)
        kernel_halt();
 }
 
-static void jz4740_rtc_clk_disable(void *data)
-{
-       clk_disable_unprepare(data);
-}
-
 static const struct of_device_id jz4740_rtc_of_match[] = {
        { .compatible = "ingenic,jz4740-rtc", .data = (void *)ID_JZ4740 },
        { .compatible = "ingenic,jz4760-rtc", .data = (void *)ID_JZ4760 },
@@ -329,23 +324,9 @@ static int jz4740_rtc_probe(struct platform_device *pdev)
        if (IS_ERR(rtc->base))
                return PTR_ERR(rtc->base);
 
-       clk = devm_clk_get(dev, "rtc");
-       if (IS_ERR(clk)) {
-               dev_err(dev, "Failed to get RTC clock\n");
-               return PTR_ERR(clk);
-       }
-
-       ret = clk_prepare_enable(clk);
-       if (ret) {
-               dev_err(dev, "Failed to enable clock\n");
-               return ret;
-       }
-
-       ret = devm_add_action_or_reset(dev, jz4740_rtc_clk_disable, clk);
-       if (ret) {
-               dev_err(dev, "Failed to register devm action\n");
-               return ret;
-       }
+       clk = devm_clk_get_enabled(dev, "rtc");
+       if (IS_ERR(clk))
+               return dev_err_probe(dev, PTR_ERR(clk), "Failed to get RTC clock\n");
 
        spin_lock_init(&rtc->lock);
 
index f14d192..2a479d4 100644 (file)
@@ -193,23 +193,6 @@ static int mpfs_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
        return 0;
 }
 
-static inline struct clk *mpfs_rtc_init_clk(struct device *dev)
-{
-       struct clk *clk;
-       int ret;
-
-       clk = devm_clk_get(dev, "rtc");
-       if (IS_ERR(clk))
-               return clk;
-
-       ret = clk_prepare_enable(clk);
-       if (ret)
-               return ERR_PTR(ret);
-
-       devm_add_action_or_reset(dev, (void (*) (void *))clk_disable_unprepare, clk);
-       return clk;
-}
-
 static irqreturn_t mpfs_rtc_wakeup_irq_handler(int irq, void *dev)
 {
        struct mpfs_rtc_dev *rtcdev = dev;
@@ -233,7 +216,7 @@ static int mpfs_rtc_probe(struct platform_device *pdev)
 {
        struct mpfs_rtc_dev *rtcdev;
        struct clk *clk;
-       u32 prescaler;
+       unsigned long prescaler;
        int wakeup_irq, ret;
 
        rtcdev = devm_kzalloc(&pdev->dev, sizeof(struct mpfs_rtc_dev), GFP_KERNEL);
@@ -251,7 +234,7 @@ static int mpfs_rtc_probe(struct platform_device *pdev)
        /* range is capped by alarm max, lower reg is 31:0 & upper is 10:0 */
        rtcdev->rtc->range_max = GENMASK_ULL(42, 0);
 
-       clk = mpfs_rtc_init_clk(&pdev->dev);
+       clk = devm_clk_get_enabled(&pdev->dev, "rtc");
        if (IS_ERR(clk))
                return PTR_ERR(clk);
 
@@ -275,14 +258,13 @@ static int mpfs_rtc_probe(struct platform_device *pdev)
 
        /* prescaler hardware adds 1 to reg value */
        prescaler = clk_get_rate(devm_clk_get(&pdev->dev, "rtcref")) - 1;
-
        if (prescaler > MAX_PRESCALER_COUNT) {
-               dev_dbg(&pdev->dev, "invalid prescaler %d\n", prescaler);
+               dev_dbg(&pdev->dev, "invalid prescaler %lu\n", prescaler);
                return -EINVAL;
        }
 
        writel(prescaler, rtcdev->base + PRESCALER_REG);
-       dev_info(&pdev->dev, "prescaler set to: 0x%X \r\n", prescaler);
+       dev_info(&pdev->dev, "prescaler set to: %lu\n", prescaler);
 
        device_init_wakeup(&pdev->dev, true);
        ret = dev_pm_set_wake_irq(&pdev->dev, wakeup_irq);
index 53d4e25..762cf03 100644 (file)
@@ -291,14 +291,6 @@ static const struct rtc_class_ops mxc_rtc_ops = {
        .alarm_irq_enable       = mxc_rtc_alarm_irq_enable,
 };
 
-static void mxc_rtc_action(void *p)
-{
-       struct rtc_plat_data *pdata = p;
-
-       clk_disable_unprepare(pdata->clk_ref);
-       clk_disable_unprepare(pdata->clk_ipg);
-}
-
 static int mxc_rtc_probe(struct platform_device *pdev)
 {
        struct rtc_device *rtc;
@@ -341,33 +333,18 @@ static int mxc_rtc_probe(struct platform_device *pdev)
                rtc->range_max = (1 << 16) * 86400ULL - 1;
        }
 
-       pdata->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
+       pdata->clk_ipg = devm_clk_get_enabled(&pdev->dev, "ipg");
        if (IS_ERR(pdata->clk_ipg)) {
                dev_err(&pdev->dev, "unable to get ipg clock!\n");
                return PTR_ERR(pdata->clk_ipg);
        }
 
-       ret = clk_prepare_enable(pdata->clk_ipg);
-       if (ret)
-               return ret;
-
-       pdata->clk_ref = devm_clk_get(&pdev->dev, "ref");
+       pdata->clk_ref = devm_clk_get_enabled(&pdev->dev, "ref");
        if (IS_ERR(pdata->clk_ref)) {
-               clk_disable_unprepare(pdata->clk_ipg);
                dev_err(&pdev->dev, "unable to get ref clock!\n");
                return PTR_ERR(pdata->clk_ref);
        }
 
-       ret = clk_prepare_enable(pdata->clk_ref);
-       if (ret) {
-               clk_disable_unprepare(pdata->clk_ipg);
-               return ret;
-       }
-
-       ret = devm_add_action_or_reset(&pdev->dev, mxc_rtc_action, pdata);
-       if (ret)
-               return ret;
-
        rate = clk_get_rate(pdata->clk_ref);
 
        if (rate == 32768)
index cdc623b..dd170e3 100644 (file)
@@ -521,10 +521,9 @@ static int rv3028_param_get(struct device *dev, struct rtc_param *param)
 {
        struct rv3028_data *rv3028 = dev_get_drvdata(dev);
        int ret;
+       u32 value;
 
        switch(param->param) {
-               u32 value;
-
        case RTC_PARAM_BACKUP_SWITCH_MODE:
                ret = regmap_read(rv3028->regmap, RV3028_BACKUP, &value);
                if (ret < 0)
@@ -554,9 +553,9 @@ static int rv3028_param_get(struct device *dev, struct rtc_param *param)
 static int rv3028_param_set(struct device *dev, struct rtc_param *param)
 {
        struct rv3028_data *rv3028 = dev_get_drvdata(dev);
+       u8 mode;
 
        switch(param->param) {
-               u8 mode;
        case RTC_PARAM_BACKUP_SWITCH_MODE:
                switch (param->uvalue) {
                case RTC_BSM_DISABLED:
index 40c0f7e..aae40d2 100644 (file)
@@ -107,6 +107,8 @@ static void stmp3xxx_wdt_register(struct platform_device *rtc_pdev)
                wdt_pdev->dev.parent = &rtc_pdev->dev;
                wdt_pdev->dev.platform_data = &wdt_pdata;
                rc = platform_device_add(wdt_pdev);
+               if (rc)
+                       platform_device_put(wdt_pdev);
        }
 
        if (rc)
index 7a0f181..ba23163 100644 (file)
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/sys_soc.h>
 #include <linux/property.h>
 #include <linux/regmap.h>
 #include <linux/rtc.h>
 #define K3RTC_MIN_OFFSET               (-277761)
 #define K3RTC_MAX_OFFSET               (277778)
 
-/**
- * struct ti_k3_rtc_soc_data - Private of compatible data for ti-k3-rtc
- * @unlock_irq_erratum:        Has erratum for unlock infinite IRQs (erratum i2327)
- */
-struct ti_k3_rtc_soc_data {
-       const bool unlock_irq_erratum;
-};
-
 static const struct regmap_config ti_k3_rtc_regmap_config = {
        .name = "peripheral-registers",
        .reg_bits = 32,
@@ -118,7 +111,6 @@ static const struct reg_field ti_rtc_reg_fields[] = {
  * @rtc_dev:           rtc device
  * @regmap:            rtc mmio regmap
  * @r_fields:          rtc register fields
- * @soc:               SoC compatible match data
  */
 struct ti_k3_rtc {
        unsigned int irq;
@@ -127,7 +119,6 @@ struct ti_k3_rtc {
        struct rtc_device *rtc_dev;
        struct regmap *regmap;
        struct regmap_field *r_fields[K3_RTC_MAX_FIELDS];
-       const struct ti_k3_rtc_soc_data *soc;
 };
 
 static int k3rtc_field_read(struct ti_k3_rtc *priv, enum ti_k3_rtc_fields f)
@@ -190,11 +181,22 @@ static int k3rtc_unlock_rtc(struct ti_k3_rtc *priv)
 
        /* Skip fence since we are going to check the unlock bit as fence */
        ret = regmap_field_read_poll_timeout(priv->r_fields[K3RTC_UNLOCK], ret,
-                                            !ret, 2, priv->sync_timeout_us);
+                                            ret, 2, priv->sync_timeout_us);
 
        return ret;
 }
 
+/*
+ * This is the list of SoCs affected by TI's i2327 erratum causing the RTC
+ * state-machine to break if not unlocked fast enough during boot. These
+ * SoCs must have the bootloader unlock this device very early in the
+ * boot-flow before we (Linux) can use this device.
+ */
+static const struct soc_device_attribute has_erratum_i2327[] = {
+       { .family = "AM62X", .revision = "SR1.0" },
+       { /* sentinel */ }
+};
+
 static int k3rtc_configure(struct device *dev)
 {
        int ret;
@@ -208,7 +210,7 @@ static int k3rtc_configure(struct device *dev)
         *
         * In such occurrence, it is assumed that the RTC module is unusable
         */
-       if (priv->soc->unlock_irq_erratum) {
+       if (soc_device_match(has_erratum_i2327)) {
                ret = k3rtc_check_unlocked(priv);
                /* If there is an error OR if we are locked, return error */
                if (ret) {
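The `has_erratum_i2327[]` table above follows the kernel's `soc_device_match()` convention: a sentinel-terminated array of `struct soc_device_attribute`, where the first entry whose non-NULL fields all match the running SoC wins. A userspace model of that matching (the struct is reduced to the two fields used here):

```c
#include <assert.h>
#include <string.h>

/* Simplified model of soc_device_match(): scan a sentinel-terminated
 * table; an entry matches if every non-NULL field matches the SoC. */
struct soc_attr {
	const char *family;
	const char *revision;
};

static const struct soc_attr has_erratum_i2327[] = {
	{ .family = "AM62X", .revision = "SR1.0" },
	{ /* sentinel */ }
};

static int soc_match(const struct soc_attr *table,
		     const char *family, const char *revision)
{
	for (; table->family || table->revision; table++) {
		if (table->family && strcmp(table->family, family))
			continue;
		if (table->revision && strcmp(table->revision, revision))
			continue;
		return 1;	/* matched */
	}
	return 0;
}
```

Replacing the per-compatible `ti_k3_rtc_soc_data` with such a table lets the quirk key off the actual silicon revision rather than the devicetree compatible string.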
@@ -513,21 +515,12 @@ static struct nvmem_config ti_k3_rtc_nvmem_config = {
 
 static int k3rtc_get_32kclk(struct device *dev, struct ti_k3_rtc *priv)
 {
-       int ret;
        struct clk *clk;
 
-       clk = devm_clk_get(dev, "osc32k");
+       clk = devm_clk_get_enabled(dev, "osc32k");
        if (IS_ERR(clk))
                return PTR_ERR(clk);
 
-       ret = clk_prepare_enable(clk);
-       if (ret)
-               return ret;
-
-       ret = devm_add_action_or_reset(dev, (void (*)(void *))clk_disable_unprepare, clk);
-       if (ret)
-               return ret;
-
        priv->rate_32k = clk_get_rate(clk);
 
        /* Make sure we are exact 32k clock. Else, try to compensate delay */
@@ -542,24 +535,19 @@ static int k3rtc_get_32kclk(struct device *dev, struct ti_k3_rtc *priv)
         */
        priv->sync_timeout_us = (u32)(DIV_ROUND_UP_ULL(1000000, priv->rate_32k) * 4);
 
-       return ret;
+       return 0;
 }
 
 static int k3rtc_get_vbusclk(struct device *dev, struct ti_k3_rtc *priv)
 {
-       int ret;
        struct clk *clk;
 
        /* Note: VBUS isn't a context clock, it is needed for hardware operation */
-       clk = devm_clk_get(dev, "vbus");
+       clk = devm_clk_get_enabled(dev, "vbus");
        if (IS_ERR(clk))
                return PTR_ERR(clk);
 
-       ret = clk_prepare_enable(clk);
-       if (ret)
-               return ret;
-
-       return devm_add_action_or_reset(dev, (void (*)(void *))clk_disable_unprepare, clk);
+       return 0;
 }
 
 static int ti_k3_rtc_probe(struct platform_device *pdev)
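The two clock hunks collapse `devm_clk_get()` + `clk_prepare_enable()` + `devm_add_action_or_reset()` into a single `devm_clk_get_enabled()`, which enables the clock and registers its disable as a device-managed cleanup in one call. A toy model of that devres idea (cleanup actions registered at acquisition, run in reverse at teardown); all names here are stand-ins, not kernel API:

```c
#include <assert.h>

#define MAX_ACTIONS 8

static void (*actions[MAX_ACTIONS])(int *);
static int n_actions;
static int clk_enabled;

static void clk_disable_model(int *state) { *state = 0; }

static int add_action(void (*fn)(int *))
{
	if (n_actions >= MAX_ACTIONS)
		return -1;
	actions[n_actions++] = fn;
	return 0;
}

/* Models devm_clk_get_enabled(): enable + register disable action. */
static int get_enabled_model(void)
{
	clk_enabled = 1;
	return add_action(clk_disable_model);
}

/* Models device teardown: run registered actions in reverse order. */
static void teardown(void)
{
	while (n_actions)
		actions[--n_actions](&clk_enabled);
}
```

Because the disable is tied to the device's lifetime, the probe-error and remove paths need no explicit `clk_disable_unprepare()`, which is exactly why the function-pointer casts in the removed lines could go away.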
@@ -602,8 +590,6 @@ static int ti_k3_rtc_probe(struct platform_device *pdev)
        if (IS_ERR(priv->rtc_dev))
                return PTR_ERR(priv->rtc_dev);
 
-       priv->soc = of_device_get_match_data(dev);
-
        priv->rtc_dev->ops = &ti_k3_rtc_ops;
        priv->rtc_dev->range_max = (1ULL << 48) - 1;    /* 48Bit seconds */
        ti_k3_rtc_nvmem_config.priv = priv;
@@ -635,12 +621,8 @@ static int ti_k3_rtc_probe(struct platform_device *pdev)
        return devm_rtc_nvmem_register(priv->rtc_dev, &ti_k3_rtc_nvmem_config);
 }
 
-static const struct ti_k3_rtc_soc_data ti_k3_am62_data = {
-       .unlock_irq_erratum = true,
-};
-
 static const struct of_device_id ti_k3_rtc_of_match_table[] = {
-       {.compatible = "ti,am62-rtc", .data = &ti_k3_am62_data},
+       {.compatible = "ti,am62-rtc" },
        {}
 };
 MODULE_DEVICE_TABLE(of, ti_k3_rtc_of_match_table);
index 68f49e2..131293f 100644 (file)
 #include <linux/cdev.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/kobject.h>
 
 #include <linux/uaccess.h>
 #include <asm/cio.h>
 #include <asm/ccwdev.h>
 #include <asm/debug.h>
 #include <asm/diag.h>
+#include <asm/scsw.h>
 
 #include "vmur.h"
 
@@ -78,6 +80,8 @@ static struct ccw_driver ur_driver = {
 
 static DEFINE_MUTEX(vmur_mutex);
 
+static void ur_uevent(struct work_struct *ws);
+
 /*
  * Allocation, freeing, getting and putting of urdev structures
  *
@@ -108,6 +112,7 @@ static struct urdev *urdev_alloc(struct ccw_device *cdev)
        ccw_device_get_id(cdev, &urd->dev_id);
        mutex_init(&urd->io_mutex);
        init_waitqueue_head(&urd->wait);
+       INIT_WORK(&urd->uevent_work, ur_uevent);
        spin_lock_init(&urd->open_lock);
        refcount_set(&urd->ref_count,  1);
        urd->cdev = cdev;
@@ -275,6 +280,18 @@ out:
        return rc;
 }
 
+static void ur_uevent(struct work_struct *ws)
+{
+       struct urdev *urd = container_of(ws, struct urdev, uevent_work);
+       char *envp[] = {
+               "EVENT=unsol_de",       /* Unsolicited device-end interrupt */
+               NULL
+       };
+
+       kobject_uevent_env(&urd->cdev->dev.kobj, KOBJ_CHANGE, envp);
+       urdev_put(urd);
+}
+
 /*
  * ur interrupt handler, called from the ccw_device layer
  */
@@ -288,12 +305,21 @@ static void ur_int_handler(struct ccw_device *cdev, unsigned long intparm,
                      intparm, irb->scsw.cmd.cstat, irb->scsw.cmd.dstat,
                      irb->scsw.cmd.count);
        }
+       urd = dev_get_drvdata(&cdev->dev);
        if (!intparm) {
                TRACE("ur_int_handler: unsolicited interrupt\n");
+
+               if (scsw_dstat(&irb->scsw) & DEV_STAT_DEV_END) {
+                       /*
+                        * Userspace might be interested in a transition to
+                        * device-ready state.
+                        */
+                       urdev_get(urd);
+                       schedule_work(&urd->uevent_work);
+               }
+
                return;
        }
-       urd = dev_get_drvdata(&cdev->dev);
-       BUG_ON(!urd);
        /* On special conditions irb is an error pointer */
        if (IS_ERR(irb))
                urd->io_request_rc = PTR_ERR(irb);
@@ -809,7 +835,6 @@ static int ur_probe(struct ccw_device *cdev)
                rc = -ENOMEM;
                goto fail_urdev_put;
        }
-       cdev->handler = ur_int_handler;
 
        /* validate virtual unit record device */
        urd->class = get_urd_class(urd);
@@ -823,6 +848,7 @@ static int ur_probe(struct ccw_device *cdev)
        }
        spin_lock_irq(get_ccwdev_lock(cdev));
        dev_set_drvdata(&cdev->dev, urd);
+       cdev->handler = ur_int_handler;
        spin_unlock_irq(get_ccwdev_lock(cdev));
 
        mutex_unlock(&vmur_mutex);
@@ -928,6 +954,10 @@ static int ur_set_offline_force(struct ccw_device *cdev, int force)
                rc = -EBUSY;
                goto fail_urdev_put;
        }
+       if (cancel_work_sync(&urd->uevent_work)) {
+               /* Work not run yet - need to release reference here */
+               urdev_put(urd);
+       }
        device_destroy(vmur_class, urd->char_device->dev);
        cdev_del(urd->char_device);
        urd->char_device = NULL;
@@ -963,6 +993,7 @@ static void ur_remove(struct ccw_device *cdev)
        spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
        urdev_put(dev_get_drvdata(&cdev->dev));
        dev_set_drvdata(&cdev->dev, NULL);
+       cdev->handler = NULL;
        spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags);
 
        mutex_unlock(&vmur_mutex);
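The vmur changes hand a device reference to the work item: `urdev_get()` before `schedule_work()`, `urdev_put()` inside the work function once the uevent is sent, and a compensating `urdev_put()` when `cancel_work_sync()` reports the work was cancelled before it ever ran. A minimal model of that handoff (plain counters stand in for the refcount and workqueue):

```c
#include <assert.h>

static int refs = 1;		/* urdev starts with one reference */
static int work_pending;

static void urdev_get_model(void) { refs++; }
static void urdev_put_model(void) { refs--; }

static void schedule_uevent(void)
{
	urdev_get_model();	/* reference travels with the work item */
	work_pending = 1;
}

static void run_work(void)
{
	if (work_pending) {
		work_pending = 0;
		urdev_put_model();	/* work ran: drop its reference */
	}
}

/* Returns 1 if pending work was cancelled before running, mirroring
 * cancel_work_sync(); the caller must then drop the reference. */
static int cancel_work_sync_model(void)
{
	if (work_pending) {
		work_pending = 0;
		return 1;
	}
	return 0;
}
```

Whichever side consumes the work item, exactly one `put` balances the `get`, so the urdev cannot be freed while the uevent work is still queued.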
index 608b071..92d17d7 100644 (file)
@@ -13,6 +13,7 @@
 #define _VMUR_H_
 
 #include <linux/refcount.h>
+#include <linux/workqueue.h>
 
 #define DEV_CLASS_UR_I 0x20 /* diag210 unit record input device class */
 #define DEV_CLASS_UR_O 0x10 /* diag210 unit record output device class */
@@ -76,6 +77,7 @@ struct urdev {
        wait_queue_head_t wait;         /* wait queue to serialize open */
        int open_flag;                  /* "urdev is open" flag */
        spinlock_t open_lock;           /* serialize critical sections */
+       struct work_struct uevent_work; /* work to send uevent */
 };
 
 /*
index 53d91bf..c07d2e3 100644 (file)
@@ -254,7 +254,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
        } else if (is_t5(lldi->adapter_type)) {
                struct cpl_t5_act_open_req *req =
                                (struct cpl_t5_act_open_req *)skb->head;
-               u32 isn = (prandom_u32() & ~7UL) - 1;
+               u32 isn = (get_random_u32() & ~7UL) - 1;
 
                INIT_TP_WR(req, 0);
                OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
@@ -282,7 +282,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
        } else {
                struct cpl_t6_act_open_req *req =
                                (struct cpl_t6_act_open_req *)skb->head;
-               u32 isn = (prandom_u32() & ~7UL) - 1;
+               u32 isn = (get_random_u32() & ~7UL) - 1;
 
                INIT_TP_WR(req, 0);
                OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
index 39e16ea..ddc0480 100644 (file)
@@ -2233,7 +2233,7 @@ static void fcoe_ctlr_vn_restart(struct fcoe_ctlr *fip)
 
        if (fip->probe_tries < FIP_VN_RLIM_COUNT) {
                fip->probe_tries++;
-               wait = prandom_u32() % FIP_VN_PROBE_WAIT;
+               wait = prandom_u32_max(FIP_VN_PROBE_WAIT);
        } else
                wait = FIP_VN_RLIM_INT;
        mod_timer(&fip->timer, jiffies + msecs_to_jiffies(wait));
@@ -3125,7 +3125,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
                                          fcoe_all_vn2vn, 0);
                        fip->port_ka_time = jiffies +
                                 msecs_to_jiffies(FIP_VN_BEACON_INT +
-                                       (prandom_u32() % FIP_VN_BEACON_FUZZ));
+                                       prandom_u32_max(FIP_VN_BEACON_FUZZ));
                }
                if (time_before(fip->port_ka_time, next_time))
                        next_time = fip->port_ka_time;
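The `prandom_u32() % N` call sites above become `prandom_u32_max(N)`, which maps a full 32-bit random value into `[0, N)` with a multiply-and-shift rather than the division implied by `%`. The reduction itself, extracted into userspace:

```c
#include <assert.h>
#include <stdint.h>

/* Multiply-shift bounded reduction, as used by prandom_u32_max():
 * scales rnd from [0, 2^32) down to [0, range). */
static uint32_t u32_max_model(uint32_t rnd, uint32_t range)
{
	return (uint32_t)(((uint64_t)rnd * range) >> 32);
}
```

Like the modulo form, this is slightly biased for ranges that do not divide 2^32, which is acceptable here since these are jitter/backoff values, not cryptographic material.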
index c7f834b..d38ebd7 100644 (file)
@@ -2156,8 +2156,8 @@ lpfc_check_pending_fcoe_event(struct lpfc_hba *phba, uint8_t unreg_fcf)
  * This function makes a running random selection decision on which FCF
  * record to use through a sequence of @fcf_cnt eligible FCF records with
  * equal probability. To perform integer manipulation of random numbers with
- * size unit32_t, the lower 16 bits of the 32-bit random number returned
- * from prandom_u32() are taken as the random random number generated.
+ * size uint32_t, a 16-bit random number returned from get_random_u16() is
+ * taken as the random number generated.
  *
  * Returns true when outcome is for the newly read FCF record should be
  * chosen; otherwise, return false when outcome is for keeping the previously
@@ -2169,7 +2169,7 @@ lpfc_sli4_new_fcf_random_select(struct lpfc_hba *phba, uint32_t fcf_cnt)
        uint32_t rand_num;
 
        /* Get 16-bit uniform random number */
-       rand_num = 0xFFFF & prandom_u32();
+       rand_num = get_random_u16();
 
        /* Decision with probability 1/fcf_cnt */
        if ((fcf_cnt * rand_num) < 0xFFFF)
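The decision above accepts the latest of `fcf_cnt` candidates with probability roughly `1/fcf_cnt` by testing `fcf_cnt * rand16 < 0xFFFF`; applied at each step, this is the reservoir-sampling rule that leaves every eligible record (almost) equally likely to be the final pick. The predicate in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Accept the new FCF record with probability ~1/fcf_cnt, given a
 * uniform 16-bit random number, as lpfc_sli4_new_fcf_random_select()
 * does above. */
static int select_new_fcf(uint32_t fcf_cnt, uint16_t rand16)
{
	return (fcf_cnt * (uint32_t)rand16) < 0xFFFF;
}
```

Switching from `0xFFFF & prandom_u32()` to `get_random_u16()` leaves this arithmetic unchanged; it just fetches the 16 random bits directly.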
index b49c395..b535f1f 100644 (file)
@@ -4812,7 +4812,7 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
        rc = lpfc_vmid_res_alloc(phba, vport);
 
        if (rc)
-               goto out;
+               goto out_put_shost;
 
        /* Initialize all internally managed lists. */
        INIT_LIST_HEAD(&vport->fc_nodes);
@@ -4830,16 +4830,17 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
 
        error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev);
        if (error)
-               goto out_put_shost;
+               goto out_free_vmid;
 
        spin_lock_irq(&phba->port_list_lock);
        list_add_tail(&vport->listentry, &phba->port_list);
        spin_unlock_irq(&phba->port_list_lock);
        return vport;
 
-out_put_shost:
+out_free_vmid:
        kfree(vport->vmid);
        bitmap_free(vport->vmid_priority_range);
+out_put_shost:
        scsi_host_put(shost);
 out:
        return NULL;
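The relabelled error path above restores the invariant of layered `goto` unwinding: each label undoes exactly the resources acquired before the jump, in reverse order, so a failed `lpfc_vmid_res_alloc()` no longer skips the `scsi_host_put()`. A compact model of the idiom (all names here are illustrative analogues, not lpfc API):

```c
#include <assert.h>

static int vmid_allocated, host_refcount;

/* fail_vmid / fail_add simulate the two failure points above. */
static int create_port(int fail_vmid, int fail_add)
{
	vmid_allocated = 0;
	host_refcount = 1;		/* host allocated and referenced */

	if (fail_vmid)			/* vmid allocation fails */
		goto out_put_shost;	/* only the host to undo */
	vmid_allocated = 1;

	if (fail_add)			/* host registration fails */
		goto out_free_vmid;	/* undo vmid, then the host */

	return 0;

out_free_vmid:
	vmid_allocated = 0;
out_put_shost:
	host_refcount = 0;		/* scsi_host_put() analogue */
	return -1;
}
```

The labels fall through top to bottom, so jumping to a later label releases strictly fewer resources; the bug was a jump to a label that released too few.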
index cecfb2c..df2fe7b 100644 (file)
@@ -618,7 +618,7 @@ static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
                                sizeof(struct qedi_endpoint *)), GFP_KERNEL);
        if (!qedi->ep_tbl)
                return -ENOMEM;
-       port_id = prandom_u32() % QEDI_LOCAL_PORT_RANGE;
+       port_id = prandom_u32_max(QEDI_LOCAL_PORT_RANGE);
        if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
                             QEDI_LOCAL_PORT_MIN, port_id)) {
                qedi_cm_free_mem(qedi);
index c95177c..cac7c90 100644 (file)
@@ -828,6 +828,14 @@ store_state_field(struct device *dev, struct device_attribute *attr,
        }
 
        mutex_lock(&sdev->state_mutex);
+       switch (sdev->sdev_state) {
+       case SDEV_RUNNING:
+       case SDEV_OFFLINE:
+               break;
+       default:
+               mutex_unlock(&sdev->state_mutex);
+               return -EINVAL;
+       }
        if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
                ret = 0;
        } else {
index 58cf8c4..ed4c571 100644 (file)
@@ -2,9 +2,9 @@
 
 if SOC_SIFIVE
 
-config SIFIVE_L2
-       bool "Sifive L2 Cache controller"
+config SIFIVE_CCACHE
+       bool "Sifive Composable Cache controller"
        help
-         Support for the L2 cache controller on SiFive platforms.
+         Support for the composable cache controller on SiFive platforms.
 
 endif
index b5caff7..1f5dc33 100644 (file)
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 
-obj-$(CONFIG_SIFIVE_L2)        += sifive_l2_cache.o
+obj-$(CONFIG_SIFIVE_CCACHE)    += sifive_ccache.o
diff --git a/drivers/soc/sifive/sifive_ccache.c b/drivers/soc/sifive/sifive_ccache.c
new file mode 100644 (file)
index 0000000..1c17115
--- /dev/null
@@ -0,0 +1,255 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SiFive composable cache controller driver
+ *
+ * Copyright (C) 2018-2022 SiFive, Inc.
+ *
+ */
+
+#define pr_fmt(fmt) "CCACHE: " fmt
+
+#include <linux/debugfs.h>
+#include <linux/interrupt.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
+#include <linux/device.h>
+#include <linux/bitfield.h>
+#include <asm/cacheinfo.h>
+#include <soc/sifive/sifive_ccache.h>
+
+#define SIFIVE_CCACHE_DIRECCFIX_LOW 0x100
+#define SIFIVE_CCACHE_DIRECCFIX_HIGH 0x104
+#define SIFIVE_CCACHE_DIRECCFIX_COUNT 0x108
+
+#define SIFIVE_CCACHE_DIRECCFAIL_LOW 0x120
+#define SIFIVE_CCACHE_DIRECCFAIL_HIGH 0x124
+#define SIFIVE_CCACHE_DIRECCFAIL_COUNT 0x128
+
+#define SIFIVE_CCACHE_DATECCFIX_LOW 0x140
+#define SIFIVE_CCACHE_DATECCFIX_HIGH 0x144
+#define SIFIVE_CCACHE_DATECCFIX_COUNT 0x148
+
+#define SIFIVE_CCACHE_DATECCFAIL_LOW 0x160
+#define SIFIVE_CCACHE_DATECCFAIL_HIGH 0x164
+#define SIFIVE_CCACHE_DATECCFAIL_COUNT 0x168
+
+#define SIFIVE_CCACHE_CONFIG 0x00
+#define SIFIVE_CCACHE_CONFIG_BANK_MASK GENMASK_ULL(7, 0)
+#define SIFIVE_CCACHE_CONFIG_WAYS_MASK GENMASK_ULL(15, 8)
+#define SIFIVE_CCACHE_CONFIG_SETS_MASK GENMASK_ULL(23, 16)
+#define SIFIVE_CCACHE_CONFIG_BLKS_MASK GENMASK_ULL(31, 24)
+
+#define SIFIVE_CCACHE_WAYENABLE 0x08
+#define SIFIVE_CCACHE_ECCINJECTERR 0x40
+
+#define SIFIVE_CCACHE_MAX_ECCINTR 4
+
+static void __iomem *ccache_base;
+static int g_irq[SIFIVE_CCACHE_MAX_ECCINTR];
+static struct riscv_cacheinfo_ops ccache_cache_ops;
+static int level;
+
+enum {
+       DIR_CORR = 0,
+       DATA_CORR,
+       DATA_UNCORR,
+       DIR_UNCORR,
+};
+
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *sifive_test;
+
+static ssize_t ccache_write(struct file *file, const char __user *data,
+                           size_t count, loff_t *ppos)
+{
+       unsigned int val;
+
+       if (kstrtouint_from_user(data, count, 0, &val))
+               return -EINVAL;
+       if ((val < 0xFF) || (val >= 0x10000 && val < 0x100FF))
+               writel(val, ccache_base + SIFIVE_CCACHE_ECCINJECTERR);
+       else
+               return -EINVAL;
+       return count;
+}
+
+static const struct file_operations ccache_fops = {
+       .owner = THIS_MODULE,
+       .open = simple_open,
+       .write = ccache_write
+};
+
+static void setup_sifive_debug(void)
+{
+       sifive_test = debugfs_create_dir("sifive_ccache_cache", NULL);
+
+       debugfs_create_file("sifive_debug_inject_error", 0200,
+                           sifive_test, NULL, &ccache_fops);
+}
+#endif
+
+static void ccache_config_read(void)
+{
+       u32 cfg;
+
+       cfg = readl(ccache_base + SIFIVE_CCACHE_CONFIG);
+       pr_info("%llu banks, %llu ways, sets/bank=%llu, bytes/block=%llu\n",
+               FIELD_GET(SIFIVE_CCACHE_CONFIG_BANK_MASK, cfg),
+               FIELD_GET(SIFIVE_CCACHE_CONFIG_WAYS_MASK, cfg),
+               BIT_ULL(FIELD_GET(SIFIVE_CCACHE_CONFIG_SETS_MASK, cfg)),
+               BIT_ULL(FIELD_GET(SIFIVE_CCACHE_CONFIG_BLKS_MASK, cfg)));
+
+       cfg = readl(ccache_base + SIFIVE_CCACHE_WAYENABLE);
+       pr_info("Index of the largest way enabled: %u\n", cfg);
+}
+
+static const struct of_device_id sifive_ccache_ids[] = {
+       { .compatible = "sifive,fu540-c000-ccache" },
+       { .compatible = "sifive,fu740-c000-ccache" },
+       { .compatible = "sifive,ccache0" },
+       { /* end of table */ }
+};
+
+static ATOMIC_NOTIFIER_HEAD(ccache_err_chain);
+
+int register_sifive_ccache_error_notifier(struct notifier_block *nb)
+{
+       return atomic_notifier_chain_register(&ccache_err_chain, nb);
+}
+EXPORT_SYMBOL_GPL(register_sifive_ccache_error_notifier);
+
+int unregister_sifive_ccache_error_notifier(struct notifier_block *nb)
+{
+       return atomic_notifier_chain_unregister(&ccache_err_chain, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_sifive_ccache_error_notifier);
+
+static int ccache_largest_wayenabled(void)
+{
+       return readl(ccache_base + SIFIVE_CCACHE_WAYENABLE) & 0xFF;
+}
+
+static ssize_t number_of_ways_enabled_show(struct device *dev,
+                                          struct device_attribute *attr,
+                                          char *buf)
+{
+       return sprintf(buf, "%u\n", ccache_largest_wayenabled());
+}
+
+static DEVICE_ATTR_RO(number_of_ways_enabled);
+
+static struct attribute *priv_attrs[] = {
+       &dev_attr_number_of_ways_enabled.attr,
+       NULL,
+};
+
+static const struct attribute_group priv_attr_group = {
+       .attrs = priv_attrs,
+};
+
+static const struct attribute_group *ccache_get_priv_group(struct cacheinfo
+                                                          *this_leaf)
+{
+       /* We want to use private group for composable cache only */
+       if (this_leaf->level == level)
+               return &priv_attr_group;
+       else
+               return NULL;
+}
+
+static irqreturn_t ccache_int_handler(int irq, void *device)
+{
+       unsigned int add_h, add_l;
+
+       if (irq == g_irq[DIR_CORR]) {
+               add_h = readl(ccache_base + SIFIVE_CCACHE_DIRECCFIX_HIGH);
+               add_l = readl(ccache_base + SIFIVE_CCACHE_DIRECCFIX_LOW);
+               pr_err("DirError @ 0x%08X.%08X\n", add_h, add_l);
+               /* Reading this register clears the DirError interrupt sig */
+               readl(ccache_base + SIFIVE_CCACHE_DIRECCFIX_COUNT);
+               atomic_notifier_call_chain(&ccache_err_chain,
+                                          SIFIVE_CCACHE_ERR_TYPE_CE,
+                                          "DirECCFix");
+       }
+       if (irq == g_irq[DIR_UNCORR]) {
+               add_h = readl(ccache_base + SIFIVE_CCACHE_DIRECCFAIL_HIGH);
+               add_l = readl(ccache_base + SIFIVE_CCACHE_DIRECCFAIL_LOW);
+               /* Reading this register clears the DirFail interrupt sig */
+               readl(ccache_base + SIFIVE_CCACHE_DIRECCFAIL_COUNT);
+               atomic_notifier_call_chain(&ccache_err_chain,
+                                          SIFIVE_CCACHE_ERR_TYPE_UE,
+                                          "DirECCFail");
+               panic("CCACHE: DirFail @ 0x%08X.%08X\n", add_h, add_l);
+       }
+       if (irq == g_irq[DATA_CORR]) {
+               add_h = readl(ccache_base + SIFIVE_CCACHE_DATECCFIX_HIGH);
+               add_l = readl(ccache_base + SIFIVE_CCACHE_DATECCFIX_LOW);
+               pr_err("DataError @ 0x%08X.%08X\n", add_h, add_l);
+               /* Reading this register clears the DataError interrupt sig */
+               readl(ccache_base + SIFIVE_CCACHE_DATECCFIX_COUNT);
+               atomic_notifier_call_chain(&ccache_err_chain,
+                                          SIFIVE_CCACHE_ERR_TYPE_CE,
+                                          "DatECCFix");
+       }
+       if (irq == g_irq[DATA_UNCORR]) {
+               add_h = readl(ccache_base + SIFIVE_CCACHE_DATECCFAIL_HIGH);
+               add_l = readl(ccache_base + SIFIVE_CCACHE_DATECCFAIL_LOW);
+               pr_err("DataFail @ 0x%08X.%08X\n", add_h, add_l);
+               /* Reading this register clears the DataFail interrupt sig */
+               readl(ccache_base + SIFIVE_CCACHE_DATECCFAIL_COUNT);
+               atomic_notifier_call_chain(&ccache_err_chain,
+                                          SIFIVE_CCACHE_ERR_TYPE_UE,
+                                          "DatECCFail");
+       }
+
+       return IRQ_HANDLED;
+}
+
+static int __init sifive_ccache_init(void)
+{
+       struct device_node *np;
+       struct resource res;
+       int i, rc, intr_num;
+
+       np = of_find_matching_node(NULL, sifive_ccache_ids);
+       if (!np)
+               return -ENODEV;
+
+       if (of_address_to_resource(np, 0, &res))
+               return -ENODEV;
+
+       ccache_base = ioremap(res.start, resource_size(&res));
+       if (!ccache_base)
+               return -ENOMEM;
+
+       if (of_property_read_u32(np, "cache-level", &level))
+               return -ENOENT;
+
+       intr_num = of_property_count_u32_elems(np, "interrupts");
+       if (!intr_num) {
+               pr_err("No interrupts property\n");
+               return -ENODEV;
+       }
+
+       for (i = 0; i < intr_num; i++) {
+               g_irq[i] = irq_of_parse_and_map(np, i);
+               rc = request_irq(g_irq[i], ccache_int_handler, 0, "ccache_ecc",
+                                NULL);
+               if (rc) {
+                       pr_err("Could not request IRQ %d\n", g_irq[i]);
+                       return rc;
+               }
+       }
+
+       ccache_config_read();
+
+       ccache_cache_ops.get_priv_group = ccache_get_priv_group;
+       riscv_set_cacheinfo_ops(&ccache_cache_ops);
+
+#ifdef CONFIG_DEBUG_FS
+       setup_sifive_debug();
+#endif
+       return 0;
+}
+
+device_initcall(sifive_ccache_init);
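`ccache_config_read()` in the new driver decodes the CONFIG register with `GENMASK_ULL()`/`FIELD_GET()` instead of the hand-rolled masks and shifts of the old `l2_config_read()`. A userspace rendition of the same decode, with 32-bit stand-ins for the kernel macros (`__builtin_ctz` is a GCC/Clang builtin):

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit stand-in for the kernel's GENMASK(): bits h..l set. */
#define GENMASK32(h, l)	((~0u >> (31 - (h))) & (~0u << (l)))

#define CONFIG_BANKS	GENMASK32(7, 0)
#define CONFIG_WAYS	GENMASK32(15, 8)
#define CONFIG_SETS	GENMASK32(23, 16)
#define CONFIG_BLKS	GENMASK32(31, 24)

/* Stand-in for FIELD_GET(): mask, then shift down by the mask's
 * lowest set bit. */
static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) >> __builtin_ctz(mask);
}
```

For a hypothetical `cfg = 0x06090F01` this yields 1 bank, 15 ways, `1 << 9 = 512` sets/bank and `1 << 6 = 64` bytes/block, matching the shape of the `pr_info()` format above.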
diff --git a/drivers/soc/sifive/sifive_l2_cache.c b/drivers/soc/sifive/sifive_l2_cache.c
deleted file mode 100644 (file)
index 59640a1..0000000
+++ /dev/null
@@ -1,237 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * SiFive L2 cache controller Driver
- *
- * Copyright (C) 2018-2019 SiFive, Inc.
- *
- */
-#include <linux/debugfs.h>
-#include <linux/interrupt.h>
-#include <linux/of_irq.h>
-#include <linux/of_address.h>
-#include <linux/device.h>
-#include <asm/cacheinfo.h>
-#include <soc/sifive/sifive_l2_cache.h>
-
-#define SIFIVE_L2_DIRECCFIX_LOW 0x100
-#define SIFIVE_L2_DIRECCFIX_HIGH 0x104
-#define SIFIVE_L2_DIRECCFIX_COUNT 0x108
-
-#define SIFIVE_L2_DIRECCFAIL_LOW 0x120
-#define SIFIVE_L2_DIRECCFAIL_HIGH 0x124
-#define SIFIVE_L2_DIRECCFAIL_COUNT 0x128
-
-#define SIFIVE_L2_DATECCFIX_LOW 0x140
-#define SIFIVE_L2_DATECCFIX_HIGH 0x144
-#define SIFIVE_L2_DATECCFIX_COUNT 0x148
-
-#define SIFIVE_L2_DATECCFAIL_LOW 0x160
-#define SIFIVE_L2_DATECCFAIL_HIGH 0x164
-#define SIFIVE_L2_DATECCFAIL_COUNT 0x168
-
-#define SIFIVE_L2_CONFIG 0x00
-#define SIFIVE_L2_WAYENABLE 0x08
-#define SIFIVE_L2_ECCINJECTERR 0x40
-
-#define SIFIVE_L2_MAX_ECCINTR 4
-
-static void __iomem *l2_base;
-static int g_irq[SIFIVE_L2_MAX_ECCINTR];
-static struct riscv_cacheinfo_ops l2_cache_ops;
-
-enum {
-       DIR_CORR = 0,
-       DATA_CORR,
-       DATA_UNCORR,
-       DIR_UNCORR,
-};
-
-#ifdef CONFIG_DEBUG_FS
-static struct dentry *sifive_test;
-
-static ssize_t l2_write(struct file *file, const char __user *data,
-                       size_t count, loff_t *ppos)
-{
-       unsigned int val;
-
-       if (kstrtouint_from_user(data, count, 0, &val))
-               return -EINVAL;
-       if ((val < 0xFF) || (val >= 0x10000 && val < 0x100FF))
-               writel(val, l2_base + SIFIVE_L2_ECCINJECTERR);
-       else
-               return -EINVAL;
-       return count;
-}
-
-static const struct file_operations l2_fops = {
-       .owner = THIS_MODULE,
-       .open = simple_open,
-       .write = l2_write
-};
-
-static void setup_sifive_debug(void)
-{
-       sifive_test = debugfs_create_dir("sifive_l2_cache", NULL);
-
-       debugfs_create_file("sifive_debug_inject_error", 0200,
-                           sifive_test, NULL, &l2_fops);
-}
-#endif
-
-static void l2_config_read(void)
-{
-       u32 regval, val;
-
-       regval = readl(l2_base + SIFIVE_L2_CONFIG);
-       val = regval & 0xFF;
-       pr_info("L2CACHE: No. of Banks in the cache: %d\n", val);
-       val = (regval & 0xFF00) >> 8;
-       pr_info("L2CACHE: No. of ways per bank: %d\n", val);
-       val = (regval & 0xFF0000) >> 16;
-       pr_info("L2CACHE: Sets per bank: %llu\n", (uint64_t)1 << val);
-       val = (regval & 0xFF000000) >> 24;
-       pr_info("L2CACHE: Bytes per cache block: %llu\n", (uint64_t)1 << val);
-
-       regval = readl(l2_base + SIFIVE_L2_WAYENABLE);
-       pr_info("L2CACHE: Index of the largest way enabled: %d\n", regval);
-}
-
-static const struct of_device_id sifive_l2_ids[] = {
-       { .compatible = "sifive,fu540-c000-ccache" },
-       { .compatible = "sifive,fu740-c000-ccache" },
-       { /* end of table */ },
-};
-
-static ATOMIC_NOTIFIER_HEAD(l2_err_chain);
-
-int register_sifive_l2_error_notifier(struct notifier_block *nb)
-{
-       return atomic_notifier_chain_register(&l2_err_chain, nb);
-}
-EXPORT_SYMBOL_GPL(register_sifive_l2_error_notifier);
-
-int unregister_sifive_l2_error_notifier(struct notifier_block *nb)
-{
-       return atomic_notifier_chain_unregister(&l2_err_chain, nb);
-}
-EXPORT_SYMBOL_GPL(unregister_sifive_l2_error_notifier);
-
-static int l2_largest_wayenabled(void)
-{
-       return readl(l2_base + SIFIVE_L2_WAYENABLE) & 0xFF;
-}
-
-static ssize_t number_of_ways_enabled_show(struct device *dev,
-                                          struct device_attribute *attr,
-                                          char *buf)
-{
-       return sprintf(buf, "%u\n", l2_largest_wayenabled());
-}
-
-static DEVICE_ATTR_RO(number_of_ways_enabled);
-
-static struct attribute *priv_attrs[] = {
-       &dev_attr_number_of_ways_enabled.attr,
-       NULL,
-};
-
-static const struct attribute_group priv_attr_group = {
-       .attrs = priv_attrs,
-};
-
-static const struct attribute_group *l2_get_priv_group(struct cacheinfo *this_leaf)
-{
-       /* We want to use private group for L2 cache only */
-       if (this_leaf->level == 2)
-               return &priv_attr_group;
-       else
-               return NULL;
-}
-
-static irqreturn_t l2_int_handler(int irq, void *device)
-{
-       unsigned int add_h, add_l;
-
-       if (irq == g_irq[DIR_CORR]) {
-               add_h = readl(l2_base + SIFIVE_L2_DIRECCFIX_HIGH);
-               add_l = readl(l2_base + SIFIVE_L2_DIRECCFIX_LOW);
-               pr_err("L2CACHE: DirError @ 0x%08X.%08X\n", add_h, add_l);
-               /* Reading this register clears the DirError interrupt sig */
-               readl(l2_base + SIFIVE_L2_DIRECCFIX_COUNT);
-               atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_CE,
-                                          "DirECCFix");
-       }
-       if (irq == g_irq[DIR_UNCORR]) {
-               add_h = readl(l2_base + SIFIVE_L2_DIRECCFAIL_HIGH);
-               add_l = readl(l2_base + SIFIVE_L2_DIRECCFAIL_LOW);
-               /* Reading this register clears the DirFail interrupt sig */
-               readl(l2_base + SIFIVE_L2_DIRECCFAIL_COUNT);
-               atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_UE,
-                                          "DirECCFail");
-               panic("L2CACHE: DirFail @ 0x%08X.%08X\n", add_h, add_l);
-       }
-       if (irq == g_irq[DATA_CORR]) {
-               add_h = readl(l2_base + SIFIVE_L2_DATECCFIX_HIGH);
-               add_l = readl(l2_base + SIFIVE_L2_DATECCFIX_LOW);
-               pr_err("L2CACHE: DataError @ 0x%08X.%08X\n", add_h, add_l);
-               /* Reading this register clears the DataError interrupt sig */
-               readl(l2_base + SIFIVE_L2_DATECCFIX_COUNT);
-               atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_CE,
-                                          "DatECCFix");
-       }
-       if (irq == g_irq[DATA_UNCORR]) {
-               add_h = readl(l2_base + SIFIVE_L2_DATECCFAIL_HIGH);
-               add_l = readl(l2_base + SIFIVE_L2_DATECCFAIL_LOW);
-               pr_err("L2CACHE: DataFail @ 0x%08X.%08X\n", add_h, add_l);
-               /* Reading this register clears the DataFail interrupt sig */
-               readl(l2_base + SIFIVE_L2_DATECCFAIL_COUNT);
-               atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_UE,
-                                          "DatECCFail");
-       }
-
-       return IRQ_HANDLED;
-}
-
-static int __init sifive_l2_init(void)
-{
-       struct device_node *np;
-       struct resource res;
-       int i, rc, intr_num;
-
-       np = of_find_matching_node(NULL, sifive_l2_ids);
-       if (!np)
-               return -ENODEV;
-
-       if (of_address_to_resource(np, 0, &res))
-               return -ENODEV;
-
-       l2_base = ioremap(res.start, resource_size(&res));
-       if (!l2_base)
-               return -ENOMEM;
-
-       intr_num = of_property_count_u32_elems(np, "interrupts");
-       if (!intr_num) {
-               pr_err("L2CACHE: no interrupts property\n");
-               return -ENODEV;
-       }
-
-       for (i = 0; i < intr_num; i++) {
-               g_irq[i] = irq_of_parse_and_map(np, i);
-               rc = request_irq(g_irq[i], l2_int_handler, 0, "l2_ecc", NULL);
-               if (rc) {
-                       pr_err("L2CACHE: Could not request IRQ %d\n", g_irq[i]);
-                       return rc;
-               }
-       }
-
-       l2_config_read();
-
-       l2_cache_ops.get_priv_group = l2_get_priv_group;
-       riscv_set_cacheinfo_ops(&l2_cache_ops);
-
-#ifdef CONFIG_DEBUG_FS
-       setup_sifive_debug();
-#endif
-       return 0;
-}
-device_initcall(sifive_l2_init);
index fb7b406..532e12e 100644 (file)
@@ -17,7 +17,6 @@ atomisp-objs += \
        pci/atomisp_compat_css20.o \
        pci/atomisp_csi2.o \
        pci/atomisp_drvfs.o \
-       pci/atomisp_file.o \
        pci/atomisp_fops.o \
        pci/atomisp_ioctl.o \
        pci/atomisp_subdev.o \
index 8f48b23..fa1de45 100644 (file)
@@ -841,8 +841,6 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd,
        if (!ov2680_info)
                return -EINVAL;
 
-       mutex_lock(&dev->input_lock);
-
        res = v4l2_find_nearest_size(ov2680_res_preview,
                                     ARRAY_SIZE(ov2680_res_preview), width,
                                     height, fmt->width, fmt->height);
@@ -855,19 +853,22 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd,
        fmt->code = MEDIA_BUS_FMT_SBGGR10_1X10;
        if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
                sd_state->pads->try_fmt = *fmt;
-               mutex_unlock(&dev->input_lock);
                return 0;
        }
 
        dev_dbg(&client->dev, "%s: %dx%d\n",
                __func__, fmt->width, fmt->height);
 
+       mutex_lock(&dev->input_lock);
+
        /* s_power has not been called yet for std v4l2 clients (camorama) */
        power_up(sd);
        ret = ov2680_write_reg_array(client, dev->res->regs);
-       if (ret)
+       if (ret) {
                dev_err(&client->dev,
                        "ov2680 write resolution register err: %d\n", ret);
+               goto err;
+       }
 
        vts = dev->res->lines_per_frame;
 
@@ -876,8 +877,10 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd,
                vts = dev->exposure + OV2680_INTEGRATION_TIME_MARGIN;
 
        ret = ov2680_write_reg(client, 2, OV2680_TIMING_VTS_H, vts);
-       if (ret)
+       if (ret) {
                dev_err(&client->dev, "ov2680 write vts err: %d\n", ret);
+               goto err;
+       }
 
        ret = ov2680_get_intg_factor(client, ov2680_info, res);
        if (ret) {
@@ -894,11 +897,7 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd,
        if (v_flag)
                ov2680_v_flip(sd, v_flag);
 
-       /*
-        * ret = startup(sd);
-        * if (ret)
-        * dev_err(&client->dev, "ov2680 startup err\n");
-        */
+       dev->res = res;
 err:
        mutex_unlock(&dev->input_lock);
        return ret;
index 385e22f..c5cbae1 100644
@@ -65,9 +65,6 @@
 #define        check_bo_null_return_void(bo)   \
        check_null_return_void(bo, "NULL hmm buffer object.\n")
 
-#define        HMM_MAX_ORDER           3
-#define        HMM_MIN_ORDER           0
-
 #define        ISP_VM_START    0x0
 #define        ISP_VM_SIZE     (0x7FFFFFFF)    /* 2G address space */
 #define        ISP_PTR_NULL    NULL
@@ -89,8 +86,6 @@ enum hmm_bo_type {
 #define        HMM_BO_VMAPED           0x10
 #define        HMM_BO_VMAPED_CACHED    0x20
 #define        HMM_BO_ACTIVE           0x1000
-#define        HMM_BO_MEM_TYPE_USER     0x1
-#define        HMM_BO_MEM_TYPE_PFN      0x2
 
 struct hmm_bo_device {
        struct isp_mmu          mmu;
@@ -126,7 +121,6 @@ struct hmm_buffer_object {
        enum hmm_bo_type        type;
        int             mmap_count;
        int             status;
-       int             mem_type;
        void            *vmap_addr; /* kernel virtual address by vmap */
 
        struct rb_node  node;
index f96f5ad..3f602b5 100644
@@ -740,20 +740,6 @@ enum atomisp_frame_status {
        ATOMISP_FRAME_STATUS_FLASH_FAILED,
 };
 
-/* ISP memories, isp2400 */
-enum atomisp_acc_memory {
-       ATOMISP_ACC_MEMORY_PMEM0 = 0,
-       ATOMISP_ACC_MEMORY_DMEM0,
-       /* for backward compatibility */
-       ATOMISP_ACC_MEMORY_DMEM = ATOMISP_ACC_MEMORY_DMEM0,
-       ATOMISP_ACC_MEMORY_VMEM0,
-       ATOMISP_ACC_MEMORY_VAMEM0,
-       ATOMISP_ACC_MEMORY_VAMEM1,
-       ATOMISP_ACC_MEMORY_VAMEM2,
-       ATOMISP_ACC_MEMORY_HMEM0,
-       ATOMISP_ACC_NR_MEMORY
-};
-
 enum atomisp_ext_isp_id {
        EXT_ISP_CID_ISO = 0,
        EXT_ISP_CID_CAPTURE_HDR,
index 58e0ea5..5463d11 100644
@@ -26,8 +26,6 @@ struct v4l2_subdev *atomisp_gmin_find_subdev(struct i2c_adapter *adapter,
 int atomisp_gmin_remove_subdev(struct v4l2_subdev *sd);
 int gmin_get_var_int(struct device *dev, bool is_gmin,
                     const char *var, int def);
-int camera_sensor_csi(struct v4l2_subdev *sd, u32 port,
-                     u32 lanes, u32 format, u32 bayer_order, int flag);
 struct camera_sensor_platform_data *
 gmin_camera_platform_data(
     struct v4l2_subdev *subdev,
index 8c65733..0253661 100644
@@ -141,23 +141,6 @@ struct atomisp_platform_data {
        struct intel_v4l2_subdev_table *subdevs;
 };
 
-/* Describe the capacities of one single sensor. */
-struct atomisp_sensor_caps {
-       /* The number of streams this sensor can output. */
-       int stream_num;
-       bool is_slave;
-};
-
-/* Describe the capacities of sensors connected to one camera port. */
-struct atomisp_camera_caps {
-       /* The number of sensors connected to this camera port. */
-       int sensor_num;
-       /* The capacities of each sensor. */
-       struct atomisp_sensor_caps sensor[MAX_SENSORS_PER_PORT];
-       /* Define whether stream control is required for multiple streams. */
-       bool multi_stream_ctrl;
-};
-
 /*
  *  Sensor of external ISP can send multiple steams with different mipi data
  * type in the same virtual channel. This information needs to come from the
@@ -235,7 +218,6 @@ struct camera_mipi_info {
 };
 
 const struct atomisp_platform_data *atomisp_get_platform_data(void);
-const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void);
 
 /* API from old platform_camera.h, new CPUID implementation */
 #define __IS_SOC(x) (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && \
index d128b79..d3cf6ed 100644
@@ -28,3 +28,22 @@ Since getting a picture requires multiple processing steps,
 this means that unlike in fixed pipelines the soft pipelines
 on the ISP can do multiple processing steps in a single pipeline
 element (in a single binary).
+
+###
+
+The sensor drivers make use of v4l2_get_subdev_hostdata(), which returns
+a camera_mipi_info struct. This struct is allocated/managed by
+the core atomisp code. The most important parts of the struct,
+e.g. the port number, are filled in by the atomisp core itself.
+
+On a set_fmt call the sensor drivers fill in camera_mipi_info.data,
+which is an atomisp_sensor_mode_data struct. This gets filled by
+a function called <sensor_name>_get_intg_factor(). This struct is not
+used by the atomisp code at all. It is returned to userspace by
+the ATOMISP_IOC_G_SENSOR_MODE_DATA ioctl and the Android userspace uses it.
+
+Other members of camera_mipi_info which are set by some drivers are:
+- metadata_width, metadata_height, metadata_effective_width, set by
+  the ov5693 driver (and used by the atomisp core)
+- raw_bayer_order, adjusted by the ov2680 driver when flipping, since
+  flipping can change the Bayer order
index c932f34..c72d0e3 100644
@@ -80,6 +80,8 @@ union host {
        } ptr;
 };
 
+static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id);
+
 /*
  * get sensor:dis71430/ov2720 related info from v4l2_subdev->priv data field.
  * subdev->priv is set in mrst.c
@@ -98,15 +100,6 @@ struct atomisp_video_pipe *atomisp_to_video_pipe(struct video_device *dev)
               container_of(dev, struct atomisp_video_pipe, vdev);
 }
 
-/*
- * get struct atomisp_acc_pipe from v4l2 video_device
- */
-struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev)
-{
-       return (struct atomisp_acc_pipe *)
-              container_of(dev, struct atomisp_acc_pipe, vdev);
-}
-
 static unsigned short atomisp_get_sensor_fps(struct atomisp_sub_device *asd)
 {
        struct v4l2_subdev_frame_interval fi = { 0 };
@@ -777,24 +770,6 @@ static struct atomisp_video_pipe *__atomisp_get_pipe(
     enum ia_css_pipe_id css_pipe_id,
     enum ia_css_buffer_type buf_type)
 {
-       struct atomisp_device *isp = asd->isp;
-
-       if (css_pipe_id == IA_CSS_PIPE_ID_COPY &&
-           isp->inputs[asd->input_curr].camera_caps->
-           sensor[asd->sensor_curr].stream_num > 1) {
-               switch (stream_id) {
-               case ATOMISP_INPUT_STREAM_PREVIEW:
-                       return &asd->video_out_preview;
-               case ATOMISP_INPUT_STREAM_POSTVIEW:
-                       return &asd->video_out_vf;
-               case ATOMISP_INPUT_STREAM_VIDEO:
-                       return &asd->video_out_video_capture;
-               case ATOMISP_INPUT_STREAM_CAPTURE:
-               default:
-                       return &asd->video_out_capture;
-               }
-       }
-
        /* video is same in online as in continuouscapture mode */
        if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT) {
                /*
@@ -906,7 +881,8 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
        enum atomisp_metadata_type md_type;
        struct atomisp_device *isp = asd->isp;
        struct v4l2_control ctrl;
-       bool reset_wdt_timer = false;
+
+       lockdep_assert_held(&isp->mutex);
 
        if (
            buf_type != IA_CSS_BUFFER_TYPE_METADATA &&
@@ -1013,9 +989,6 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
                break;
        case IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME:
        case IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME:
-               if (IS_ISP2401)
-                       reset_wdt_timer = true;
-
                pipe->buffers_in_css--;
                frame = buffer.css_buffer.data.frame;
                if (!frame) {
@@ -1068,9 +1041,6 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
                break;
        case IA_CSS_BUFFER_TYPE_OUTPUT_FRAME:
        case IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME:
-               if (IS_ISP2401)
-                       reset_wdt_timer = true;
-
                pipe->buffers_in_css--;
                frame = buffer.css_buffer.data.frame;
                if (!frame) {
@@ -1238,8 +1208,6 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
                 */
                wake_up(&vb->done);
        }
-       if (IS_ISP2401)
-               atomic_set(&pipe->wdt_count, 0);
 
        /*
         * Requeue should only be done for 3a and dis buffers.
@@ -1256,19 +1224,6 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
        }
        if (!error && q_buffers)
                atomisp_qbuffers_to_css(asd);
-
-       if (IS_ISP2401) {
-               /* If there are no buffers queued then
-               * delete wdt timer. */
-               if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-                       return;
-               if (!atomisp_buffers_queued_pipe(pipe))
-                       atomisp_wdt_stop_pipe(pipe, false);
-               else if (reset_wdt_timer)
-                       /* SOF irq should not reset wdt timer. */
-                       atomisp_wdt_refresh_pipe(pipe,
-                                               ATOMISP_WDT_KEEP_CURRENT_DELAY);
-       }
 }
 
 void atomisp_delayed_init_work(struct work_struct *work)
@@ -1307,10 +1262,14 @@ static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout)
        bool stream_restart[MAX_STREAM_NUM] = {0};
        bool depth_mode = false;
        int i, ret, depth_cnt = 0;
+       unsigned long flags;
 
-       if (!isp->sw_contex.file_input)
-               atomisp_css_irq_enable(isp,
-                                      IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
+       lockdep_assert_held(&isp->mutex);
+
+       if (!atomisp_streaming_count(isp))
+               return;
+
+       atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
 
        BUG_ON(isp->num_of_streams > MAX_STREAM_NUM);
 
@@ -1331,7 +1290,9 @@ static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout)
 
                stream_restart[asd->index] = true;
 
+               spin_lock_irqsave(&isp->lock, flags);
                asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
+               spin_unlock_irqrestore(&isp->lock, flags);
 
                /* stream off sensor */
                ret = v4l2_subdev_call(
@@ -1346,7 +1307,9 @@ static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout)
                css_pipe_id = atomisp_get_css_pipe_id(asd);
                atomisp_css_stop(asd, css_pipe_id, true);
 
+               spin_lock_irqsave(&isp->lock, flags);
                asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+               spin_unlock_irqrestore(&isp->lock, flags);
 
                asd->preview_exp_id = 1;
                asd->postview_exp_id = 1;
@@ -1387,25 +1350,23 @@ static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout)
                                                   IA_CSS_INPUT_MODE_BUFFERED_SENSOR);
 
                css_pipe_id = atomisp_get_css_pipe_id(asd);
-               if (atomisp_css_start(asd, css_pipe_id, true))
+               if (atomisp_css_start(asd, css_pipe_id, true)) {
                        dev_warn(isp->dev,
                                 "start SP failed, so do not set streaming to be enable!\n");
-               else
+               } else {
+                       spin_lock_irqsave(&isp->lock, flags);
                        asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED;
+                       spin_unlock_irqrestore(&isp->lock, flags);
+               }
 
                atomisp_csi2_configure(asd);
        }
 
-       if (!isp->sw_contex.file_input) {
-               atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
-                                      atomisp_css_valid_sof(isp));
+       atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
+                              atomisp_css_valid_sof(isp));
 
-               if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0)
-                       dev_dbg(isp->dev, "DFS auto failed while recovering!\n");
-       } else {
-               if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, true) < 0)
-                       dev_dbg(isp->dev, "DFS max failed while recovering!\n");
-       }
+       if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0)
+               dev_dbg(isp->dev, "DFS auto failed while recovering!\n");
 
        for (i = 0; i < isp->num_of_streams; i++) {
                struct atomisp_sub_device *asd;
@@ -1454,361 +1415,24 @@ static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout)
        }
 }
 
-void atomisp_wdt_work(struct work_struct *work)
+void atomisp_assert_recovery_work(struct work_struct *work)
 {
        struct atomisp_device *isp = container_of(work, struct atomisp_device,
-                                    wdt_work);
-       int i;
-       unsigned int pipe_wdt_cnt[MAX_STREAM_NUM][4] = { {0} };
-       bool css_recover = true;
-
-       rt_mutex_lock(&isp->mutex);
-       if (!atomisp_streaming_count(isp)) {
-               atomic_set(&isp->wdt_work_queued, 0);
-               rt_mutex_unlock(&isp->mutex);
-               return;
-       }
-
-       if (!IS_ISP2401) {
-               dev_err(isp->dev, "timeout %d of %d\n",
-                       atomic_read(&isp->wdt_count) + 1,
-                       ATOMISP_ISP_MAX_TIMEOUT_COUNT);
-       } else {
-               for (i = 0; i < isp->num_of_streams; i++) {
-                       struct atomisp_sub_device *asd = &isp->asd[i];
-
-                       pipe_wdt_cnt[i][0] +=
-                           atomic_read(&asd->video_out_capture.wdt_count);
-                       pipe_wdt_cnt[i][1] +=
-                           atomic_read(&asd->video_out_vf.wdt_count);
-                       pipe_wdt_cnt[i][2] +=
-                           atomic_read(&asd->video_out_preview.wdt_count);
-                       pipe_wdt_cnt[i][3] +=
-                           atomic_read(&asd->video_out_video_capture.wdt_count);
-                       css_recover =
-                           (pipe_wdt_cnt[i][0] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
-                           pipe_wdt_cnt[i][1] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
-                           pipe_wdt_cnt[i][2] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT &&
-                           pipe_wdt_cnt[i][3] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT)
-                           ? true : false;
-                       dev_err(isp->dev,
-                               "pipe on asd%d timeout cnt: (%d, %d, %d, %d) of %d, recover = %d\n",
-                               asd->index, pipe_wdt_cnt[i][0], pipe_wdt_cnt[i][1],
-                               pipe_wdt_cnt[i][2], pipe_wdt_cnt[i][3],
-                               ATOMISP_ISP_MAX_TIMEOUT_COUNT, css_recover);
-               }
-       }
-
-       if (css_recover) {
-               ia_css_debug_dump_sp_sw_debug_info();
-               ia_css_debug_dump_debug_info(__func__);
-               for (i = 0; i < isp->num_of_streams; i++) {
-                       struct atomisp_sub_device *asd = &isp->asd[i];
-
-                       if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-                               continue;
-                       dev_err(isp->dev, "%s, vdev %s buffers in css: %d\n",
-                               __func__,
-                               asd->video_out_capture.vdev.name,
-                               asd->video_out_capture.
-                               buffers_in_css);
-                       dev_err(isp->dev,
-                               "%s, vdev %s buffers in css: %d\n",
-                               __func__,
-                               asd->video_out_vf.vdev.name,
-                               asd->video_out_vf.
-                               buffers_in_css);
-                       dev_err(isp->dev,
-                               "%s, vdev %s buffers in css: %d\n",
-                               __func__,
-                               asd->video_out_preview.vdev.name,
-                               asd->video_out_preview.
-                               buffers_in_css);
-                       dev_err(isp->dev,
-                               "%s, vdev %s buffers in css: %d\n",
-                               __func__,
-                               asd->video_out_video_capture.vdev.name,
-                               asd->video_out_video_capture.
-                               buffers_in_css);
-                       dev_err(isp->dev,
-                               "%s, s3a buffers in css preview pipe:%d\n",
-                               __func__,
-                               asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_PREVIEW]);
-                       dev_err(isp->dev,
-                               "%s, s3a buffers in css capture pipe:%d\n",
-                               __func__,
-                               asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_CAPTURE]);
-                       dev_err(isp->dev,
-                               "%s, s3a buffers in css video pipe:%d\n",
-                               __func__,
-                               asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_VIDEO]);
-                       dev_err(isp->dev,
-                               "%s, dis buffers in css: %d\n",
-                               __func__, asd->dis_bufs_in_css);
-                       dev_err(isp->dev,
-                               "%s, metadata buffers in css preview pipe:%d\n",
-                               __func__,
-                               asd->metadata_bufs_in_css
-                               [ATOMISP_INPUT_STREAM_GENERAL]
-                               [IA_CSS_PIPE_ID_PREVIEW]);
-                       dev_err(isp->dev,
-                               "%s, metadata buffers in css capture pipe:%d\n",
-                               __func__,
-                               asd->metadata_bufs_in_css
-                               [ATOMISP_INPUT_STREAM_GENERAL]
-                               [IA_CSS_PIPE_ID_CAPTURE]);
-                       dev_err(isp->dev,
-                               "%s, metadata buffers in css video pipe:%d\n",
-                               __func__,
-                               asd->metadata_bufs_in_css
-                               [ATOMISP_INPUT_STREAM_GENERAL]
-                               [IA_CSS_PIPE_ID_VIDEO]);
-                       if (asd->enable_raw_buffer_lock->val) {
-                               unsigned int j;
-
-                               dev_err(isp->dev, "%s, raw_buffer_locked_count %d\n",
-                                       __func__, asd->raw_buffer_locked_count);
-                               for (j = 0; j <= ATOMISP_MAX_EXP_ID / 32; j++)
-                                       dev_err(isp->dev, "%s, raw_buffer_bitmap[%d]: 0x%x\n",
-                                               __func__, j,
-                                               asd->raw_buffer_bitmap[j]);
-                       }
-               }
-
-               /*sh_css_dump_sp_state();*/
-               /*sh_css_dump_isp_state();*/
-       } else {
-               for (i = 0; i < isp->num_of_streams; i++) {
-                       struct atomisp_sub_device *asd = &isp->asd[i];
-
-                       if (asd->streaming ==
-                           ATOMISP_DEVICE_STREAMING_ENABLED) {
-                               atomisp_clear_css_buffer_counters(asd);
-                               atomisp_flush_bufs_and_wakeup(asd);
-                               complete(&asd->init_done);
-                       }
-                       if (IS_ISP2401)
-                               atomisp_wdt_stop(asd, false);
-               }
-
-               if (!IS_ISP2401) {
-                       atomic_set(&isp->wdt_count, 0);
-               } else {
-                       isp->isp_fatal_error = true;
-                       atomic_set(&isp->wdt_work_queued, 0);
-
-                       rt_mutex_unlock(&isp->mutex);
-                       return;
-               }
-       }
+                                                 assert_recovery_work);
 
+       mutex_lock(&isp->mutex);
        __atomisp_css_recover(isp, true);
-       if (IS_ISP2401) {
-               for (i = 0; i < isp->num_of_streams; i++) {
-                       struct atomisp_sub_device *asd = &isp->asd[i];
-
-                       if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-                               continue;
-
-                       atomisp_wdt_refresh(asd,
-                                           isp->sw_contex.file_input ?
-                                           ATOMISP_ISP_FILE_TIMEOUT_DURATION :
-                                           ATOMISP_ISP_TIMEOUT_DURATION);
-               }
-       }
-
-       dev_err(isp->dev, "timeout recovery handling done\n");
-       atomic_set(&isp->wdt_work_queued, 0);
-
-       rt_mutex_unlock(&isp->mutex);
+       mutex_unlock(&isp->mutex);
 }
 
 void atomisp_css_flush(struct atomisp_device *isp)
 {
-       int i;
-
-       if (!atomisp_streaming_count(isp))
-               return;
-
-       /* Disable wdt */
-       for (i = 0; i < isp->num_of_streams; i++) {
-               struct atomisp_sub_device *asd = &isp->asd[i];
-
-               atomisp_wdt_stop(asd, true);
-       }
-
        /* Start recover */
        __atomisp_css_recover(isp, false);
-       /* Restore wdt */
-       for (i = 0; i < isp->num_of_streams; i++) {
-               struct atomisp_sub_device *asd = &isp->asd[i];
-
-               if (asd->streaming !=
-                   ATOMISP_DEVICE_STREAMING_ENABLED)
-                       continue;
 
-               atomisp_wdt_refresh(asd,
-                                   isp->sw_contex.file_input ?
-                                   ATOMISP_ISP_FILE_TIMEOUT_DURATION :
-                                   ATOMISP_ISP_TIMEOUT_DURATION);
-       }
        dev_dbg(isp->dev, "atomisp css flush done\n");
 }
 
-void atomisp_wdt(struct timer_list *t)
-{
-       struct atomisp_sub_device *asd;
-       struct atomisp_device *isp;
-
-       if (!IS_ISP2401) {
-               asd = from_timer(asd, t, wdt);
-               isp = asd->isp;
-       } else {
-               struct atomisp_video_pipe *pipe = from_timer(pipe, t, wdt);
-
-               asd = pipe->asd;
-               isp = asd->isp;
-
-               atomic_inc(&pipe->wdt_count);
-               dev_warn(isp->dev,
-                       "[WARNING]asd %d pipe %s ISP timeout %d!\n",
-                       asd->index, pipe->vdev.name,
-                       atomic_read(&pipe->wdt_count));
-       }
-
-       if (atomic_read(&isp->wdt_work_queued)) {
-               dev_dbg(isp->dev, "ISP watchdog was put into workqueue\n");
-               return;
-       }
-       atomic_set(&isp->wdt_work_queued, 1);
-       queue_work(isp->wdt_work_queue, &isp->wdt_work);
-}
-
-/* ISP2400 */
-void atomisp_wdt_start(struct atomisp_sub_device *asd)
-{
-       atomisp_wdt_refresh(asd, ATOMISP_ISP_TIMEOUT_DURATION);
-}
-
-/* ISP2401 */
-void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
-                             unsigned int delay)
-{
-       unsigned long next;
-
-       if (!pipe->asd) {
-               dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, pipe->vdev.name);
-               return;
-       }
-
-       if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
-               pipe->wdt_duration = delay;
-
-       next = jiffies + pipe->wdt_duration;
-
-       /* Override next if it has been pushed beyon the "next" time */
-       if (atomisp_is_wdt_running(pipe) && time_after(pipe->wdt_expires, next))
-               next = pipe->wdt_expires;
-
-       pipe->wdt_expires = next;
-
-       if (atomisp_is_wdt_running(pipe))
-               dev_dbg(pipe->asd->isp->dev, "WDT will hit after %d ms (%s)\n",
-                       ((int)(next - jiffies) * 1000 / HZ), pipe->vdev.name);
-       else
-               dev_dbg(pipe->asd->isp->dev, "WDT starts with %d ms period (%s)\n",
-                       ((int)(next - jiffies) * 1000 / HZ), pipe->vdev.name);
-
-       mod_timer(&pipe->wdt, next);
-}
-
-void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay)
-{
-       if (!IS_ISP2401) {
-               unsigned long next;
-
-               if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY)
-                       asd->wdt_duration = delay;
-
-               next = jiffies + asd->wdt_duration;
-
-               /* Override next if it has been pushed beyon the "next" time */
-               if (atomisp_is_wdt_running(asd) && time_after(asd->wdt_expires, next))
-                       next = asd->wdt_expires;
-
-               asd->wdt_expires = next;
-
-               if (atomisp_is_wdt_running(asd))
-                       dev_dbg(asd->isp->dev, "WDT will hit after %d ms\n",
-                               ((int)(next - jiffies) * 1000 / HZ));
-               else
-                       dev_dbg(asd->isp->dev, "WDT starts with %d ms period\n",
-                               ((int)(next - jiffies) * 1000 / HZ));
-
-               mod_timer(&asd->wdt, next);
-               atomic_set(&asd->isp->wdt_count, 0);
-       } else {
-               dev_dbg(asd->isp->dev, "WDT refresh all:\n");
-               if (atomisp_is_wdt_running(&asd->video_out_capture))
-                       atomisp_wdt_refresh_pipe(&asd->video_out_capture, delay);
-               if (atomisp_is_wdt_running(&asd->video_out_preview))
-                       atomisp_wdt_refresh_pipe(&asd->video_out_preview, delay);
-               if (atomisp_is_wdt_running(&asd->video_out_vf))
-                       atomisp_wdt_refresh_pipe(&asd->video_out_vf, delay);
-               if (atomisp_is_wdt_running(&asd->video_out_video_capture))
-                       atomisp_wdt_refresh_pipe(&asd->video_out_video_capture, delay);
-       }
-}
-
-/* ISP2401 */
-void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync)
-{
-       if (!pipe->asd) {
-               dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, pipe->vdev.name);
-               return;
-       }
-
-       if (!atomisp_is_wdt_running(pipe))
-               return;
-
-       dev_dbg(pipe->asd->isp->dev,
-               "WDT stop asd %d (%s)\n", pipe->asd->index, pipe->vdev.name);
-
-       if (sync) {
-               del_timer_sync(&pipe->wdt);
-               cancel_work_sync(&pipe->asd->isp->wdt_work);
-       } else {
-               del_timer(&pipe->wdt);
-       }
-}
-
-/* ISP 2401 */
-void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe)
-{
-       atomisp_wdt_refresh_pipe(pipe, ATOMISP_ISP_TIMEOUT_DURATION);
-}
-
-void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync)
-{
-       dev_dbg(asd->isp->dev, "WDT stop:\n");
-
-       if (!IS_ISP2401) {
-               if (sync) {
-                       del_timer_sync(&asd->wdt);
-                       cancel_work_sync(&asd->isp->wdt_work);
-               } else {
-                       del_timer(&asd->wdt);
-               }
-       } else {
-               atomisp_wdt_stop_pipe(&asd->video_out_capture, sync);
-               atomisp_wdt_stop_pipe(&asd->video_out_preview, sync);
-               atomisp_wdt_stop_pipe(&asd->video_out_vf, sync);
-               atomisp_wdt_stop_pipe(&asd->video_out_video_capture, sync);
-       }
-}
-
 void atomisp_setup_flash(struct atomisp_sub_device *asd)
 {
        struct atomisp_device *isp = asd->isp;
@@ -1884,7 +1508,7 @@ irqreturn_t atomisp_isr_thread(int irq, void *isp_ptr)
         * For CSS2.0: we change the way to not dequeue all the event at one
         * time, instead, dequue one and process one, then another
         */
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
        if (atomisp_css_isr_thread(isp, frame_done_found, css_pipe_done))
                goto out;
 
@@ -1895,15 +1519,7 @@ irqreturn_t atomisp_isr_thread(int irq, void *isp_ptr)
                atomisp_setup_flash(asd);
        }
 out:
-       rt_mutex_unlock(&isp->mutex);
-       for (i = 0; i < isp->num_of_streams; i++) {
-               asd = &isp->asd[i];
-               if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED
-                   && css_pipe_done[asd->index]
-                   && isp->sw_contex.file_input)
-                       v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
-                                        video, s_stream, 1);
-       }
+       mutex_unlock(&isp->mutex);
        dev_dbg(isp->dev, "<%s\n", __func__);
 
        return IRQ_HANDLED;
@@ -2322,7 +1938,6 @@ static void atomisp_update_grid_info(struct atomisp_sub_device *asd,
 {
        struct atomisp_device *isp = asd->isp;
        int err;
-       u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
 
        if (atomisp_css_get_grid_info(asd, pipe_id, source_pad))
                return;
@@ -2331,7 +1946,7 @@ static void atomisp_update_grid_info(struct atomisp_sub_device *asd,
           the grid size. */
        atomisp_css_free_stat_buffers(asd);
 
-       err = atomisp_alloc_css_stat_bufs(asd, stream_id);
+       err = atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL);
        if (err) {
                dev_err(isp->dev, "stat_buf allocate error\n");
                goto err;
@@ -4077,6 +3692,8 @@ void atomisp_handle_parameter_and_buffer(struct atomisp_video_pipe *pipe)
        unsigned long irqflags;
        bool need_to_enqueue_buffer = false;
 
+       lockdep_assert_held(&asd->isp->mutex);
+
        if (!asd) {
                dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
                        __func__, pipe->vdev.name);
@@ -4143,19 +3760,6 @@ void atomisp_handle_parameter_and_buffer(struct atomisp_video_pipe *pipe)
                return;
 
        atomisp_qbuffers_to_css(asd);
-
-       if (!IS_ISP2401) {
-               if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd))
-                       atomisp_wdt_start(asd);
-       } else {
-               if (atomisp_buffers_queued_pipe(pipe)) {
-                       if (!atomisp_is_wdt_running(pipe))
-                               atomisp_wdt_start_pipe(pipe);
-                       else
-                               atomisp_wdt_refresh_pipe(pipe,
-                                                       ATOMISP_WDT_KEEP_CURRENT_DELAY);
-               }
-       }
 }
 
 /*
@@ -4170,6 +3774,8 @@ int atomisp_set_parameters(struct video_device *vdev,
        struct atomisp_css_params *css_param = &asd->params.css_param;
        int ret;
 
+       lockdep_assert_held(&asd->isp->mutex);
+
        if (!asd) {
                dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
                        __func__, vdev->name);
@@ -4824,8 +4430,6 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
        const struct atomisp_format_bridge *fmt;
        struct atomisp_input_stream_info *stream_info =
            (struct atomisp_input_stream_info *)snr_mbus_fmt->reserved;
-       u16 stream_index;
-       int source_pad = atomisp_subdev_source_pad(vdev);
        int ret;
 
        if (!asd) {
@@ -4837,7 +4441,6 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
        if (!isp->inputs[asd->input_curr].camera)
                return -EINVAL;
 
-       stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
        fmt = atomisp_get_format_bridge(f->pixelformat);
        if (!fmt) {
                dev_err(isp->dev, "unsupported pixelformat!\n");
@@ -4851,7 +4454,7 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
        snr_mbus_fmt->width = f->width;
        snr_mbus_fmt->height = f->height;
 
-       __atomisp_init_stream_info(stream_index, stream_info);
+       __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info);
 
        dev_dbg(isp->dev, "try_mbus_fmt: asking for %ux%u\n",
                snr_mbus_fmt->width, snr_mbus_fmt->height);
@@ -4886,8 +4489,8 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
                return 0;
        }
 
-       if (snr_mbus_fmt->width < f->width
-           && snr_mbus_fmt->height < f->height) {
+       if (!res_overflow || (snr_mbus_fmt->width < f->width &&
+                             snr_mbus_fmt->height < f->height)) {
                f->width = snr_mbus_fmt->width;
                f->height = snr_mbus_fmt->height;
                /* Set the flag when resolution requested is
@@ -4906,41 +4509,6 @@ int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
        return 0;
 }
 
-static int
-atomisp_try_fmt_file(struct atomisp_device *isp, struct v4l2_format *f)
-{
-       u32 width = f->fmt.pix.width;
-       u32 height = f->fmt.pix.height;
-       u32 pixelformat = f->fmt.pix.pixelformat;
-       enum v4l2_field field = f->fmt.pix.field;
-       u32 depth;
-
-       if (!atomisp_get_format_bridge(pixelformat)) {
-               dev_err(isp->dev, "Wrong output pixelformat\n");
-               return -EINVAL;
-       }
-
-       depth = atomisp_get_pixel_depth(pixelformat);
-
-       if (field == V4L2_FIELD_ANY) {
-               field = V4L2_FIELD_NONE;
-       } else if (field != V4L2_FIELD_NONE) {
-               dev_err(isp->dev, "Wrong output field\n");
-               return -EINVAL;
-       }
-
-       f->fmt.pix.field = field;
-       f->fmt.pix.width = clamp_t(u32,
-                                  rounddown(width, (u32)ATOM_ISP_STEP_WIDTH),
-                                  ATOM_ISP_MIN_WIDTH, ATOM_ISP_MAX_WIDTH);
-       f->fmt.pix.height = clamp_t(u32, rounddown(height,
-                                   (u32)ATOM_ISP_STEP_HEIGHT),
-                                   ATOM_ISP_MIN_HEIGHT, ATOM_ISP_MAX_HEIGHT);
-       f->fmt.pix.bytesperline = (width * depth) >> 3;
-
-       return 0;
-}
-
 enum mipi_port_id __get_mipi_port(struct atomisp_device *isp,
                                  enum atomisp_camera_port port)
 {
@@ -5171,7 +4739,6 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
        int (*configure_pp_input)(struct atomisp_sub_device *asd,
                                  unsigned int width, unsigned int height) =
                                      configure_pp_input_nop;
-       u16 stream_index;
        const struct atomisp_in_fmt_conv *fc;
        int ret, i;
 
@@ -5180,7 +4747,6 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
                        __func__, vdev->name);
                return -EINVAL;
        }
-       stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
 
        v4l2_fh_init(&fh.vfh, vdev);
 
@@ -5200,7 +4766,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
                        dev_err(isp->dev, "mipi_info is NULL\n");
                        return -EINVAL;
                }
-               if (atomisp_set_sensor_mipi_to_isp(asd, stream_index,
+               if (atomisp_set_sensor_mipi_to_isp(asd, ATOMISP_INPUT_STREAM_GENERAL,
                                                   mipi_info))
                        return -EINVAL;
                fc = atomisp_find_in_fmt_conv_by_atomisp_in_fmt(
@@ -5284,7 +4850,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
        /* ISP2401 new input system need to use copy pipe */
        if (asd->copy_mode) {
                pipe_id = IA_CSS_PIPE_ID_COPY;
-               atomisp_css_capture_enable_online(asd, stream_index, false);
+               atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL, false);
        } else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
                /* video same in continuouscapture and online modes */
                configure_output = atomisp_css_video_configure_output;
@@ -5316,7 +4882,9 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
                                pipe_id = IA_CSS_PIPE_ID_CAPTURE;
 
                                atomisp_update_capture_mode(asd);
-                               atomisp_css_capture_enable_online(asd, stream_index, false);
+                               atomisp_css_capture_enable_online(asd,
+                                                                 ATOMISP_INPUT_STREAM_GENERAL,
+                                                                 false);
                        }
                }
        } else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
@@ -5341,7 +4909,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
 
                if (!asd->continuous_mode->val)
                        /* in case of ANR, force capture pipe to offline mode */
-                       atomisp_css_capture_enable_online(asd, stream_index,
+                       atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL,
                                                          asd->params.low_light ?
                                                          false : asd->params.online_process);
 
@@ -5372,7 +4940,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
                pipe_id = IA_CSS_PIPE_ID_YUVPP;
 
        if (asd->copy_mode)
-               ret = atomisp_css_copy_configure_output(asd, stream_index,
+               ret = atomisp_css_copy_configure_output(asd, ATOMISP_INPUT_STREAM_GENERAL,
                                                        pix->width, pix->height,
                                                        format->planar ? pix->bytesperline :
                                                        pix->bytesperline * 8 / format->depth,
@@ -5396,8 +4964,9 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
                return -EINVAL;
        }
        if (asd->copy_mode)
-               ret = atomisp_css_copy_get_output_frame_info(asd, stream_index,
-                       output_info);
+               ret = atomisp_css_copy_get_output_frame_info(asd,
+                                                            ATOMISP_INPUT_STREAM_GENERAL,
+                                                            output_info);
        else
                ret = get_frame_info(asd, output_info);
        if (ret) {
@@ -5412,8 +4981,7 @@ static int atomisp_set_fmt_to_isp(struct video_device *vdev,
        ia_css_frame_free(asd->raw_output_frame);
        asd->raw_output_frame = NULL;
 
-       if (!asd->continuous_mode->val &&
-           !asd->params.online_process && !isp->sw_contex.file_input &&
+       if (!asd->continuous_mode->val && !asd->params.online_process &&
            ia_css_frame_allocate_from_info(&asd->raw_output_frame,
                    raw_output_info))
                return -ENOMEM;
@@ -5462,12 +5030,7 @@ static void atomisp_check_copy_mode(struct atomisp_sub_device *asd,
        src = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
                                      V4L2_SUBDEV_FORMAT_ACTIVE, source_pad);
 
-       if ((sink->code == src->code &&
-            sink->width == f->width &&
-            sink->height == f->height) ||
-           ((asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) &&
-            (asd->isp->inputs[asd->input_curr].camera_caps->
-             sensor[asd->sensor_curr].stream_num > 1)))
+       if (sink->code == src->code && sink->width == f->width && sink->height == f->height)
                asd->copy_mode = true;
        else
                asd->copy_mode = false;
@@ -5495,7 +5058,6 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
        struct atomisp_device *isp;
        struct atomisp_input_stream_info *stream_info =
            (struct atomisp_input_stream_info *)ffmt->reserved;
-       u16 stream_index = ATOMISP_INPUT_STREAM_GENERAL;
        int source_pad = atomisp_subdev_source_pad(vdev);
        struct v4l2_subdev_fh fh;
        int ret;
@@ -5510,8 +5072,6 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
 
        v4l2_fh_init(&fh.vfh, vdev);
 
-       stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
-
        format = atomisp_get_format_bridge(pixelformat);
        if (!format)
                return -EINVAL;
@@ -5524,7 +5084,7 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
                ffmt->width, ffmt->height, padding_w, padding_h,
                dvs_env_w, dvs_env_h);
 
-       __atomisp_init_stream_info(stream_index, stream_info);
+       __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info);
 
        req_ffmt = ffmt;
 
@@ -5556,7 +5116,7 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
        if (ret)
                return ret;
 
-       __atomisp_update_stream_env(asd, stream_index, stream_info);
+       __atomisp_update_stream_env(asd, ATOMISP_INPUT_STREAM_GENERAL, stream_info);
 
        dev_dbg(isp->dev, "sensor width: %d, height: %d\n",
                ffmt->width, ffmt->height);
@@ -5580,8 +5140,9 @@ static int atomisp_set_fmt_to_snr(struct video_device *vdev,
        return css_input_resolution_changed(asd, ffmt);
 }
 
-int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
+int atomisp_set_fmt(struct file *file, void *unused, struct v4l2_format *f)
 {
+       struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
        struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
        struct atomisp_sub_device *asd = pipe->asd;
@@ -5604,20 +5165,13 @@ int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
        struct v4l2_subdev_fh fh;
        int ret;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
+       ret = atomisp_pipe_check(pipe, true);
+       if (ret)
+               return ret;
 
        if (source_pad >= ATOMISP_SUBDEV_PADS_NUM)
                return -EINVAL;
 
-       if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
-               dev_warn(isp->dev, "ISP does not support set format while at streaming!\n");
-               return -EBUSY;
-       }
-
        dev_dbg(isp->dev,
                "setting resolution %ux%u on pad %u for asd%d, bytesperline %u\n",
                f->fmt.pix.width, f->fmt.pix.height, source_pad,
@@ -5699,58 +5253,7 @@ int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f)
                        f->fmt.pix.height = r.height;
                }
 
-               if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
-                   (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) &&
-                   (asd->isp->inputs[asd->input_curr].camera_caps->
-                    sensor[asd->sensor_curr].stream_num > 1)) {
-                       /* For M10MO outputing YUV preview images. */
-                       u16 video_index =
-                           atomisp_source_pad_to_stream_id(asd,
-                                                           ATOMISP_SUBDEV_PAD_SOURCE_VIDEO);
-
-                       ret = atomisp_css_copy_get_output_frame_info(asd,
-                               video_index, &output_info);
-                       if (ret) {
-                               dev_err(isp->dev,
-                                       "copy_get_output_frame_info ret %i", ret);
-                               return -EINVAL;
-                       }
-                       if (!asd->yuvpp_mode) {
-                               /*
-                                * If viewfinder was configured into copy_mode,
-                                * we switch to using yuvpp pipe instead.
-                                */
-                               asd->yuvpp_mode = true;
-                               ret = atomisp_css_copy_configure_output(
-                                         asd, video_index, 0, 0, 0, 0);
-                               if (ret) {
-                                       dev_err(isp->dev,
-                                               "failed to disable copy pipe");
-                                       return -EINVAL;
-                               }
-                               ret = atomisp_css_yuvpp_configure_output(
-                                         asd, video_index,
-                                         output_info.res.width,
-                                         output_info.res.height,
-                                         output_info.padded_width,
-                                         output_info.format);
-                               if (ret) {
-                                       dev_err(isp->dev,
-                                               "failed to set up yuvpp pipe\n");
-                                       return -EINVAL;
-                               }
-                               atomisp_css_video_enable_online(asd, false);
-                               atomisp_css_preview_enable_online(asd,
-                                                                 ATOMISP_INPUT_STREAM_GENERAL, false);
-                       }
-                       atomisp_css_yuvpp_configure_viewfinder(asd, video_index,
-                                                              f->fmt.pix.width, f->fmt.pix.height,
-                                                              format_bridge->planar ? f->fmt.pix.bytesperline
-                                                              : f->fmt.pix.bytesperline * 8
-                                                              / format_bridge->depth, format_bridge->sh_fmt);
-                       atomisp_css_yuvpp_get_viewfinder_frame_info(
-                           asd, video_index, &output_info);
-               } else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
+               if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) {
                        atomisp_css_video_configure_viewfinder(asd,
                                                               f->fmt.pix.width, f->fmt.pix.height,
                                                               format_bridge->planar ? f->fmt.pix.bytesperline
@@ -6078,55 +5581,6 @@ done:
        return 0;
 }
 
-int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f)
-{
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-       struct atomisp_sub_device *asd = pipe->asd;
-       struct v4l2_mbus_framefmt ffmt = {0};
-       const struct atomisp_format_bridge *format_bridge;
-       struct v4l2_subdev_fh fh;
-       int ret;
-
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
-       v4l2_fh_init(&fh.vfh, vdev);
-
-       dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n",
-               f->fmt.pix.width, f->fmt.pix.height, f->fmt.pix.pixelformat);
-       ret = atomisp_try_fmt_file(isp, f);
-       if (ret) {
-               dev_err(isp->dev, "atomisp_try_fmt_file err: %d\n", ret);
-               return ret;
-       }
-
-       format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat);
-       if (!format_bridge) {
-               dev_dbg(isp->dev, "atomisp_get_format_bridge err! fmt:0x%x\n",
-                       f->fmt.pix.pixelformat);
-               return -EINVAL;
-       }
-
-       pipe->pix = f->fmt.pix;
-       atomisp_css_input_set_mode(asd, IA_CSS_INPUT_MODE_FIFO);
-       atomisp_css_input_configure_port(asd,
-                                        __get_mipi_port(isp, ATOMISP_CAMERA_PORT_PRIMARY), 2, 0xffff4,
-                                        0, 0, 0, 0);
-       ffmt.width = f->fmt.pix.width;
-       ffmt.height = f->fmt.pix.height;
-       ffmt.code = format_bridge->mbus_code;
-
-       atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
-                               V4L2_SUBDEV_FORMAT_ACTIVE,
-                               ATOMISP_SUBDEV_PAD_SINK, &ffmt);
-
-       return 0;
-}
-
 int atomisp_set_shading_table(struct atomisp_sub_device *asd,
                              struct atomisp_shading_table *user_shading_table)
 {
@@ -6275,6 +5729,8 @@ int atomisp_offline_capture_configure(struct atomisp_sub_device *asd,
 {
        struct v4l2_ctrl *c;
 
+       lockdep_assert_held(&asd->isp->mutex);
+
        /*
        * In case of M10MO ZSL capture case, we need to issue a separate
        * capture request to M10MO which will output captured jpeg image
@@ -6379,36 +5835,6 @@ int atomisp_flash_enable(struct atomisp_sub_device *asd, int num_frames)
        return 0;
 }
 
-int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd,
-                                   uint16_t source_pad)
-{
-       int stream_id;
-       struct atomisp_device *isp = asd->isp;
-
-       if (isp->inputs[asd->input_curr].camera_caps->
-           sensor[asd->sensor_curr].stream_num == 1)
-               return ATOMISP_INPUT_STREAM_GENERAL;
-
-       switch (source_pad) {
-       case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE:
-               stream_id = ATOMISP_INPUT_STREAM_CAPTURE;
-               break;
-       case ATOMISP_SUBDEV_PAD_SOURCE_VF:
-               stream_id = ATOMISP_INPUT_STREAM_POSTVIEW;
-               break;
-       case ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW:
-               stream_id = ATOMISP_INPUT_STREAM_PREVIEW;
-               break;
-       case ATOMISP_SUBDEV_PAD_SOURCE_VIDEO:
-               stream_id = ATOMISP_INPUT_STREAM_VIDEO;
-               break;
-       default:
-               stream_id = ATOMISP_INPUT_STREAM_GENERAL;
-       }
-
-       return stream_id;
-}
-
 bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe)
 {
        struct atomisp_sub_device *asd = pipe->asd;
@@ -6459,7 +5885,7 @@ void atomisp_init_raw_buffer_bitmap(struct atomisp_sub_device *asd)
        spin_unlock_irqrestore(&asd->raw_buffer_bitmap_lock, flags);
 }
 
-int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id)
+static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id)
 {
        int *bitmap, bit;
        unsigned long flags;
@@ -6549,6 +5975,8 @@ int atomisp_exp_id_capture(struct atomisp_sub_device *asd, int *exp_id)
        int value = *exp_id;
        int ret;
 
+       lockdep_assert_held(&isp->mutex);
+
        ret = __is_raw_buffer_locked(asd, value);
        if (ret) {
                dev_err(isp->dev, "%s exp_id %d invalid %d.\n", __func__, value, ret);
@@ -6570,6 +5998,8 @@ int atomisp_exp_id_unlock(struct atomisp_sub_device *asd, int *exp_id)
        int value = *exp_id;
        int ret;
 
+       lockdep_assert_held(&isp->mutex);
+
        ret = __clear_raw_buffer_bitmap(asd, value);
        if (ret) {
                dev_err(isp->dev, "%s exp_id %d invalid %d.\n", __func__, value, ret);
@@ -6605,6 +6035,8 @@ int atomisp_inject_a_fake_event(struct atomisp_sub_device *asd, int *event)
        if (!event || asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
                return -EINVAL;
 
+       lockdep_assert_held(&asd->isp->mutex);
+
        dev_dbg(asd->isp->dev, "%s: trying to inject a fake event 0x%x\n",
                __func__, *event);
 
@@ -6675,19 +6107,6 @@ int atomisp_get_invalid_frame_num(struct video_device *vdev,
        struct ia_css_pipe_info p_info;
        int ret;
 
-       if (!asd) {
-               dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
-       if (asd->isp->inputs[asd->input_curr].camera_caps->
-           sensor[asd->sensor_curr].stream_num > 1) {
-               /* External ISP */
-               *invalid_frame_num = 0;
-               return 0;
-       }
-
        pipe_id = atomisp_get_pipe_id(pipe);
        if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id]) {
                dev_warn(asd->isp->dev,
index ebc7294..c9f92f1 100644
@@ -54,7 +54,6 @@ void dump_sp_dmem(struct atomisp_device *isp, unsigned int addr,
                  unsigned int size);
 struct camera_mipi_info *atomisp_to_sensor_mipi_info(struct v4l2_subdev *sd);
 struct atomisp_video_pipe *atomisp_to_video_pipe(struct video_device *dev);
-struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev);
 int atomisp_reset(struct atomisp_device *isp);
 void atomisp_flush_bufs_and_wakeup(struct atomisp_sub_device *asd);
 void atomisp_clear_css_buffer_counters(struct atomisp_sub_device *asd);
@@ -66,8 +65,7 @@ bool atomisp_buffers_queued_pipe(struct atomisp_video_pipe *pipe);
 /* Interrupt functions */
 void atomisp_msi_irq_init(struct atomisp_device *isp);
 void atomisp_msi_irq_uninit(struct atomisp_device *isp);
-void atomisp_wdt_work(struct work_struct *work);
-void atomisp_wdt(struct timer_list *t);
+void atomisp_assert_recovery_work(struct work_struct *work);
 void atomisp_setup_flash(struct atomisp_sub_device *asd);
 irqreturn_t atomisp_isr(int irq, void *dev);
 irqreturn_t atomisp_isr_thread(int irq, void *isp_ptr);
@@ -268,8 +266,7 @@ int atomisp_get_sensor_mode_data(struct atomisp_sub_device *asd,
 int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f,
                    bool *res_overflow);
 
-int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f);
-int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f);
+int atomisp_set_fmt(struct file *file, void *fh, struct v4l2_format *f);
 
 int atomisp_set_shading_table(struct atomisp_sub_device *asd,
                              struct atomisp_shading_table *shading_table);
@@ -300,8 +297,6 @@ void atomisp_buf_done(struct atomisp_sub_device *asd, int error,
                      bool q_buffers, enum atomisp_input_stream_id stream_id);
 
 void atomisp_css_flush(struct atomisp_device *isp);
-int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd,
-                                   uint16_t source_pad);
 
 /* Events. Only one event has to be exported for now. */
 void atomisp_eof_event(struct atomisp_sub_device *asd, uint8_t exp_id);
@@ -324,8 +319,6 @@ void atomisp_flush_params_queue(struct atomisp_video_pipe *asd);
 int atomisp_exp_id_unlock(struct atomisp_sub_device *asd, int *exp_id);
 int atomisp_exp_id_capture(struct atomisp_sub_device *asd, int *exp_id);
 
-/* Function to update Raw Buffer bitmap */
-int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id);
 void atomisp_init_raw_buffer_bitmap(struct atomisp_sub_device *asd);
 
 /* Function to enable/disable zoom for capture pipe */
index 3393ae6..a6d85d0 100644
@@ -129,10 +129,6 @@ int atomisp_alloc_metadata_output_buf(struct atomisp_sub_device *asd);
 
 void atomisp_free_metadata_output_buf(struct atomisp_sub_device *asd);
 
-void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd,
-                                   struct atomisp_css_buffer *isp_css_buffer,
-                                   struct ia_css_isp_dvs_statistics_map *dvs_map);
-
 void atomisp_css_temp_pipe_to_pipe_id(struct atomisp_sub_device *asd,
                                      struct atomisp_css_event *current_event);
 
@@ -434,17 +430,11 @@ void atomisp_css_get_morph_table(struct atomisp_sub_device *asd,
 
 void atomisp_css_morph_table_free(struct ia_css_morph_table *table);
 
-void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp,
-       unsigned int overlap);
-
 int atomisp_css_get_dis_stat(struct atomisp_sub_device *asd,
                             struct atomisp_dis_statistics *stats);
 
 int atomisp_css_update_stream(struct atomisp_sub_device *asd);
 
-struct atomisp_acc_fw;
-int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw);
-
 int atomisp_css_isr_thread(struct atomisp_device *isp,
                           bool *frame_done_found,
                           bool *css_pipe_done);
index 5aa108a..fdc0554 100644
@@ -1427,7 +1427,6 @@ int atomisp_css_get_grid_info(struct atomisp_sub_device *asd,
        struct ia_css_pipe_info p_info;
        struct ia_css_grid_info old_info;
        struct atomisp_device *isp = asd->isp;
-       int stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
        int md_width = asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].
                       stream_config.metadata_config.resolution.width;
 
@@ -1435,7 +1434,7 @@ int atomisp_css_get_grid_info(struct atomisp_sub_device *asd,
        memset(&old_info, 0, sizeof(struct ia_css_grid_info));
 
        if (ia_css_pipe_get_info(
-               asd->stream_env[stream_index].pipes[pipe_id],
+               asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id],
                &p_info) != 0) {
                dev_err(isp->dev, "ia_css_pipe_get_info failed\n");
                return -EINVAL;
@@ -1574,20 +1573,6 @@ void atomisp_free_metadata_output_buf(struct atomisp_sub_device *asd)
        }
 }
 
-void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd,
-                                   struct atomisp_css_buffer *isp_css_buffer,
-                                   struct ia_css_isp_dvs_statistics_map *dvs_map)
-{
-       if (asd->params.dvs_stat) {
-               if (dvs_map)
-                       ia_css_translate_dvs2_statistics(
-                           asd->params.dvs_stat, dvs_map);
-               else
-                       ia_css_get_dvs2_statistics(asd->params.dvs_stat,
-                                                  isp_css_buffer->css_buffer.data.stats_dvs);
-       }
-}
-
 void atomisp_css_temp_pipe_to_pipe_id(struct atomisp_sub_device *asd,
                                      struct atomisp_css_event *current_event)
 {
@@ -2694,11 +2679,11 @@ int atomisp_get_css_frame_info(struct atomisp_sub_device *asd,
        struct atomisp_device *isp = asd->isp;
 
        if (ATOMISP_SOC_CAMERA(asd)) {
-               stream_index = atomisp_source_pad_to_stream_id(asd, source_pad);
+               stream_index = ATOMISP_INPUT_STREAM_GENERAL;
        } else {
                stream_index = (pipe_index == IA_CSS_PIPE_ID_YUVPP) ?
                               ATOMISP_INPUT_STREAM_VIDEO :
-                              atomisp_source_pad_to_stream_id(asd, source_pad);
+                              ATOMISP_INPUT_STREAM_GENERAL;
        }
 
        if (0 != ia_css_pipe_get_info(asd->stream_env[stream_index]
@@ -3626,6 +3611,8 @@ int atomisp_css_get_dis_stat(struct atomisp_sub_device *asd,
        struct atomisp_dis_buf *dis_buf;
        unsigned long flags;
 
+       lockdep_assert_held(&isp->mutex);
+
        if (!asd->params.dvs_stat->hor_prod.odd_real ||
            !asd->params.dvs_stat->hor_prod.odd_imag ||
            !asd->params.dvs_stat->hor_prod.even_real ||
@@ -3637,12 +3624,8 @@ int atomisp_css_get_dis_stat(struct atomisp_sub_device *asd,
                return -EINVAL;
 
        /* isp needs to be streaming to get DIS statistics */
-       spin_lock_irqsave(&isp->lock, flags);
-       if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) {
-               spin_unlock_irqrestore(&isp->lock, flags);
+       if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
                return -EINVAL;
-       }
-       spin_unlock_irqrestore(&isp->lock, flags);
 
        if (atomisp_compare_dvs_grid(asd, &stats->dvs2_stat.grid_info) != 0)
                /* If the grid info in the argument differs from the current
@@ -3763,32 +3746,6 @@ void atomisp_css_morph_table_free(struct ia_css_morph_table *table)
        ia_css_morph_table_free(table);
 }
 
-void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp,
-       unsigned int overlap)
-{
-       /* CSS 2.0 doesn't support this API. */
-       dev_dbg(isp->dev, "set cont prev start time is not supported.\n");
-       return;
-}
-
-/* Set the ACC binary arguments */
-int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw)
-{
-       unsigned int mem;
-
-       for (mem = 0; mem < ATOMISP_ACC_NR_MEMORY; mem++) {
-               if (acc_fw->args[mem].length == 0)
-                       continue;
-
-               ia_css_isp_param_set_css_mem_init(&acc_fw->fw->mem_initializers,
-                                                 IA_CSS_PARAM_CLASS_PARAM, mem,
-                                                 acc_fw->args[mem].css_ptr,
-                                                 acc_fw->args[mem].length);
-       }
-
-       return 0;
-}
-
 static struct atomisp_sub_device *__get_atomisp_subdev(
     struct ia_css_pipe *css_pipe,
     struct atomisp_device *isp,
@@ -3824,8 +3781,8 @@ int atomisp_css_isr_thread(struct atomisp_device *isp,
        enum atomisp_input_stream_id stream_id = 0;
        struct atomisp_css_event current_event;
        struct atomisp_sub_device *asd;
-       bool reset_wdt_timer[MAX_STREAM_NUM] = {false};
-       int i;
+
+       lockdep_assert_held(&isp->mutex);
 
        while (!ia_css_dequeue_psys_event(&current_event.event)) {
                if (current_event.event.type ==
@@ -3839,14 +3796,8 @@ int atomisp_css_isr_thread(struct atomisp_device *isp,
                                __func__,
                                current_event.event.fw_assert_module_id,
                                current_event.event.fw_assert_line_no);
-                       for (i = 0; i < isp->num_of_streams; i++)
-                               atomisp_wdt_stop(&isp->asd[i], 0);
-
-                       if (!IS_ISP2401)
-                               atomisp_wdt(&isp->asd[0].wdt);
-                       else
-                               queue_work(isp->wdt_work_queue, &isp->wdt_work);
 
+                       queue_work(system_long_wq, &isp->assert_recovery_work);
                        return -EINVAL;
                } else if (current_event.event.type == IA_CSS_EVENT_TYPE_FW_WARNING) {
                        dev_warn(isp->dev, "%s: ISP reports warning, code is %d, exp_id %d\n",
@@ -3875,20 +3826,12 @@ int atomisp_css_isr_thread(struct atomisp_device *isp,
                        frame_done_found[asd->index] = true;
                        atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_OUTPUT_FRAME,
                                         current_event.pipe, true, stream_id);
-
-                       if (!IS_ISP2401)
-                               reset_wdt_timer[asd->index] = true; /* ISP running */
-
                        break;
                case IA_CSS_EVENT_TYPE_SECOND_OUTPUT_FRAME_DONE:
                        dev_dbg(isp->dev, "event: Second output frame done");
                        frame_done_found[asd->index] = true;
                        atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME,
                                         current_event.pipe, true, stream_id);
-
-                       if (!IS_ISP2401)
-                               reset_wdt_timer[asd->index] = true; /* ISP running */
-
                        break;
                case IA_CSS_EVENT_TYPE_3A_STATISTICS_DONE:
                        dev_dbg(isp->dev, "event: 3A stats frame done");
@@ -3909,19 +3852,12 @@ int atomisp_css_isr_thread(struct atomisp_device *isp,
                        atomisp_buf_done(asd, 0,
                                         IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME,
                                         current_event.pipe, true, stream_id);
-
-                       if (!IS_ISP2401)
-                               reset_wdt_timer[asd->index] = true; /* ISP running */
-
                        break;
                case IA_CSS_EVENT_TYPE_SECOND_VF_OUTPUT_FRAME_DONE:
                        dev_dbg(isp->dev, "event: second VF output frame done");
                        atomisp_buf_done(asd, 0,
                                         IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME,
                                         current_event.pipe, true, stream_id);
-                       if (!IS_ISP2401)
-                               reset_wdt_timer[asd->index] = true; /* ISP running */
-
                        break;
                case IA_CSS_EVENT_TYPE_DIS_STATISTICS_DONE:
                        dev_dbg(isp->dev, "event: dis stats frame done");
@@ -3944,24 +3880,6 @@ int atomisp_css_isr_thread(struct atomisp_device *isp,
                }
        }
 
-       if (IS_ISP2401)
-               return 0;
-
-       /* ISP2400: If there are no buffers queued then delete wdt timer. */
-       for (i = 0; i < isp->num_of_streams; i++) {
-               asd = &isp->asd[i];
-               if (!asd)
-                       continue;
-               if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-                       continue;
-               if (!atomisp_buffers_queued(asd))
-                       atomisp_wdt_stop(asd, false);
-               else if (reset_wdt_timer[i])
-                       /* SOF irq should not reset wdt timer. */
-                       atomisp_wdt_refresh(asd,
-                                           ATOMISP_WDT_KEEP_CURRENT_DELAY);
-       }
-
        return 0;
 }
 
diff --git a/drivers/staging/media/atomisp/pci/atomisp_file.c b/drivers/staging/media/atomisp/pci/atomisp_file.c
deleted file mode 100644
index 4570a9a..0000000
+++ /dev/null
@@ -1,229 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Support for Medifield PNW Camera Imaging ISP subsystem.
- *
- * Copyright (c) 2010 Intel Corporation. All Rights Reserved.
- *
- * Copyright (c) 2010 Silicon Hive www.siliconhive.com.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License version
- * 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- *
- */
-
-#include <media/v4l2-event.h>
-#include <media/v4l2-mediabus.h>
-
-#include <media/videobuf-vmalloc.h>
-#include <linux/delay.h>
-
-#include "ia_css.h"
-
-#include "atomisp_cmd.h"
-#include "atomisp_common.h"
-#include "atomisp_file.h"
-#include "atomisp_internal.h"
-#include "atomisp_ioctl.h"
-
-static void file_work(struct work_struct *work)
-{
-       struct atomisp_file_device *file_dev =
-           container_of(work, struct atomisp_file_device, work);
-       struct atomisp_device *isp = file_dev->isp;
-       /* only support file injection on subdev0 */
-       struct atomisp_sub_device *asd = &isp->asd[0];
-       struct atomisp_video_pipe *out_pipe = &asd->video_in;
-       unsigned short *buf = videobuf_to_vmalloc(out_pipe->outq.bufs[0]);
-       struct v4l2_mbus_framefmt isp_sink_fmt;
-
-       if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-               return;
-
-       dev_dbg(isp->dev, ">%s: ready to start streaming\n", __func__);
-       isp_sink_fmt = *atomisp_subdev_get_ffmt(&asd->subdev, NULL,
-                                               V4L2_SUBDEV_FORMAT_ACTIVE,
-                                               ATOMISP_SUBDEV_PAD_SINK);
-
-       while (!ia_css_isp_has_started())
-               usleep_range(1000, 1500);
-
-       ia_css_stream_send_input_frame(asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream,
-                                      buf, isp_sink_fmt.width,
-                                      isp_sink_fmt.height);
-       dev_dbg(isp->dev, "<%s: streaming done\n", __func__);
-}
-
-static int file_input_s_stream(struct v4l2_subdev *sd, int enable)
-{
-       struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd);
-       struct atomisp_device *isp = file_dev->isp;
-       /* only support file injection on subdev0 */
-       struct atomisp_sub_device *asd = &isp->asd[0];
-
-       dev_dbg(isp->dev, "%s: enable %d\n", __func__, enable);
-       if (enable) {
-               if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED)
-                       return 0;
-
-               queue_work(file_dev->work_queue, &file_dev->work);
-               return 0;
-       }
-       cancel_work_sync(&file_dev->work);
-       return 0;
-}
-
-static int file_input_get_fmt(struct v4l2_subdev *sd,
-                             struct v4l2_subdev_state *sd_state,
-                             struct v4l2_subdev_format *format)
-{
-       struct v4l2_mbus_framefmt *fmt = &format->format;
-       struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd);
-       struct atomisp_device *isp = file_dev->isp;
-       /* only support file injection on subdev0 */
-       struct atomisp_sub_device *asd = &isp->asd[0];
-       struct v4l2_mbus_framefmt *isp_sink_fmt;
-
-       if (format->pad)
-               return -EINVAL;
-       isp_sink_fmt = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
-                                              V4L2_SUBDEV_FORMAT_ACTIVE,
-                                              ATOMISP_SUBDEV_PAD_SINK);
-
-       fmt->width = isp_sink_fmt->width;
-       fmt->height = isp_sink_fmt->height;
-       fmt->code = isp_sink_fmt->code;
-
-       return 0;
-}
-
-static int file_input_set_fmt(struct v4l2_subdev *sd,
-                             struct v4l2_subdev_state *sd_state,
-                             struct v4l2_subdev_format *format)
-{
-       struct v4l2_mbus_framefmt *fmt = &format->format;
-
-       if (format->pad)
-               return -EINVAL;
-       file_input_get_fmt(sd, sd_state, format);
-       if (format->which == V4L2_SUBDEV_FORMAT_TRY)
-               sd_state->pads->try_fmt = *fmt;
-       return 0;
-}
-
-static int file_input_log_status(struct v4l2_subdev *sd)
-{
-       /*to fake*/
-       return 0;
-}
-
-static int file_input_s_power(struct v4l2_subdev *sd, int on)
-{
-       /* to fake */
-       return 0;
-}
-
-static int file_input_enum_mbus_code(struct v4l2_subdev *sd,
-                                    struct v4l2_subdev_state *sd_state,
-                                    struct v4l2_subdev_mbus_code_enum *code)
-{
-       /*to fake*/
-       return 0;
-}
-
-static int file_input_enum_frame_size(struct v4l2_subdev *sd,
-                                     struct v4l2_subdev_state *sd_state,
-                                     struct v4l2_subdev_frame_size_enum *fse)
-{
-       /*to fake*/
-       return 0;
-}
-
-static int file_input_enum_frame_ival(struct v4l2_subdev *sd,
-                                     struct v4l2_subdev_state *sd_state,
-                                     struct v4l2_subdev_frame_interval_enum
-                                     *fie)
-{
-       /*to fake*/
-       return 0;
-}
-
-static const struct v4l2_subdev_video_ops file_input_video_ops = {
-       .s_stream = file_input_s_stream,
-};
-
-static const struct v4l2_subdev_core_ops file_input_core_ops = {
-       .log_status = file_input_log_status,
-       .s_power = file_input_s_power,
-};
-
-static const struct v4l2_subdev_pad_ops file_input_pad_ops = {
-       .enum_mbus_code = file_input_enum_mbus_code,
-       .enum_frame_size = file_input_enum_frame_size,
-       .enum_frame_interval = file_input_enum_frame_ival,
-       .get_fmt = file_input_get_fmt,
-       .set_fmt = file_input_set_fmt,
-};
-
-static const struct v4l2_subdev_ops file_input_ops = {
-       .core = &file_input_core_ops,
-       .video = &file_input_video_ops,
-       .pad = &file_input_pad_ops,
-};
-
-void
-atomisp_file_input_unregister_entities(struct atomisp_file_device *file_dev)
-{
-       media_entity_cleanup(&file_dev->sd.entity);
-       v4l2_device_unregister_subdev(&file_dev->sd);
-}
-
-int atomisp_file_input_register_entities(struct atomisp_file_device *file_dev,
-       struct v4l2_device *vdev)
-{
-       /* Register the subdev and video nodes. */
-       return  v4l2_device_register_subdev(vdev, &file_dev->sd);
-}
-
-void atomisp_file_input_cleanup(struct atomisp_device *isp)
-{
-       struct atomisp_file_device *file_dev = &isp->file_dev;
-
-       if (file_dev->work_queue) {
-               destroy_workqueue(file_dev->work_queue);
-               file_dev->work_queue = NULL;
-       }
-}
-
-int atomisp_file_input_init(struct atomisp_device *isp)
-{
-       struct atomisp_file_device *file_dev = &isp->file_dev;
-       struct v4l2_subdev *sd = &file_dev->sd;
-       struct media_pad *pads = file_dev->pads;
-       struct media_entity *me = &sd->entity;
-
-       file_dev->isp = isp;
-       file_dev->work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1);
-       if (!file_dev->work_queue) {
-               dev_err(isp->dev, "Failed to initialize file inject workq\n");
-               return -ENOMEM;
-       }
-
-       INIT_WORK(&file_dev->work, file_work);
-
-       v4l2_subdev_init(sd, &file_input_ops);
-       sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-       strscpy(sd->name, "file_input_subdev", sizeof(sd->name));
-       v4l2_set_subdevdata(sd, file_dev);
-
-       pads[0].flags = MEDIA_PAD_FL_SINK;
-       me->function = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN;
-
-       return media_entity_pads_init(me, 1, pads);
-}
diff --git a/drivers/staging/media/atomisp/pci/atomisp_file.h b/drivers/staging/media/atomisp/pci/atomisp_file.h
deleted file mode 100644
index f166a2a..0000000
+++ /dev/null
@@ -1,44 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Support for Medifield PNW Camera Imaging ISP subsystem.
- *
- * Copyright (c) 2010 Intel Corporation. All Rights Reserved.
- *
- * Copyright (c) 2010 Silicon Hive www.siliconhive.com.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License version
- * 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- *
- */
-
-#ifndef __ATOMISP_FILE_H__
-#define __ATOMISP_FILE_H__
-
-#include <media/media-entity.h>
-#include <media/v4l2-subdev.h>
-
-struct atomisp_device;
-
-struct atomisp_file_device {
-       struct v4l2_subdev sd;
-       struct atomisp_device *isp;
-       struct media_pad pads[1];
-
-       struct workqueue_struct *work_queue;
-       struct work_struct work;
-};
-
-void atomisp_file_input_cleanup(struct atomisp_device *isp);
-int atomisp_file_input_init(struct atomisp_device *isp);
-void atomisp_file_input_unregister_entities(
-    struct atomisp_file_device *file_dev);
-int atomisp_file_input_register_entities(struct atomisp_file_device *file_dev,
-       struct v4l2_device *vdev);
-#endif /* __ATOMISP_FILE_H__ */
index 77150e4..84a84e0 100644
@@ -369,45 +369,6 @@ static int atomisp_get_css_buf_type(struct atomisp_sub_device *asd,
                return IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME;
 }
 
-static int atomisp_qbuffers_to_css_for_all_pipes(struct atomisp_sub_device *asd)
-{
-       enum ia_css_buffer_type buf_type;
-       enum ia_css_pipe_id css_capture_pipe_id = IA_CSS_PIPE_ID_COPY;
-       enum ia_css_pipe_id css_preview_pipe_id = IA_CSS_PIPE_ID_COPY;
-       enum ia_css_pipe_id css_video_pipe_id = IA_CSS_PIPE_ID_COPY;
-       enum atomisp_input_stream_id input_stream_id;
-       struct atomisp_video_pipe *capture_pipe;
-       struct atomisp_video_pipe *preview_pipe;
-       struct atomisp_video_pipe *video_pipe;
-
-       capture_pipe = &asd->video_out_capture;
-       preview_pipe = &asd->video_out_preview;
-       video_pipe = &asd->video_out_video_capture;
-
-       buf_type = atomisp_get_css_buf_type(
-                      asd, css_preview_pipe_id,
-                      atomisp_subdev_source_pad(&preview_pipe->vdev));
-       input_stream_id = ATOMISP_INPUT_STREAM_PREVIEW;
-       atomisp_q_video_buffers_to_css(asd, preview_pipe,
-                                      input_stream_id,
-                                      buf_type, css_preview_pipe_id);
-
-       buf_type = atomisp_get_css_buf_type(asd, css_capture_pipe_id,
-                                           atomisp_subdev_source_pad(&capture_pipe->vdev));
-       input_stream_id = ATOMISP_INPUT_STREAM_GENERAL;
-       atomisp_q_video_buffers_to_css(asd, capture_pipe,
-                                      input_stream_id,
-                                      buf_type, css_capture_pipe_id);
-
-       buf_type = atomisp_get_css_buf_type(asd, css_video_pipe_id,
-                                           atomisp_subdev_source_pad(&video_pipe->vdev));
-       input_stream_id = ATOMISP_INPUT_STREAM_VIDEO;
-       atomisp_q_video_buffers_to_css(asd, video_pipe,
-                                      input_stream_id,
-                                      buf_type, css_video_pipe_id);
-       return 0;
-}
-
 /* queue all available buffers to css */
 int atomisp_qbuffers_to_css(struct atomisp_sub_device *asd)
 {
@@ -423,11 +384,6 @@ int atomisp_qbuffers_to_css(struct atomisp_sub_device *asd)
        bool raw_mode = atomisp_is_mbuscode_raw(
                            asd->fmt[asd->capture_pad].fmt.code);
 
-       if (asd->isp->inputs[asd->input_curr].camera_caps->
-           sensor[asd->sensor_curr].stream_num == 2 &&
-           !asd->yuvpp_mode)
-               return atomisp_qbuffers_to_css_for_all_pipes(asd);
-
        if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) {
                video_pipe = &asd->video_out_video_capture;
                css_video_pipe_id = IA_CSS_PIPE_ID_VIDEO;
@@ -593,47 +549,6 @@ static void atomisp_buf_release(struct videobuf_queue *vq,
        atomisp_videobuf_free_buf(vb);
 }
 
-static int atomisp_buf_setup_output(struct videobuf_queue *vq,
-                                   unsigned int *count, unsigned int *size)
-{
-       struct atomisp_video_pipe *pipe = vq->priv_data;
-
-       *size = pipe->pix.sizeimage;
-
-       return 0;
-}
-
-static int atomisp_buf_prepare_output(struct videobuf_queue *vq,
-                                     struct videobuf_buffer *vb,
-                                     enum v4l2_field field)
-{
-       struct atomisp_video_pipe *pipe = vq->priv_data;
-
-       vb->size = pipe->pix.sizeimage;
-       vb->width = pipe->pix.width;
-       vb->height = pipe->pix.height;
-       vb->field = field;
-       vb->state = VIDEOBUF_PREPARED;
-
-       return 0;
-}
-
-static void atomisp_buf_queue_output(struct videobuf_queue *vq,
-                                    struct videobuf_buffer *vb)
-{
-       struct atomisp_video_pipe *pipe = vq->priv_data;
-
-       list_add_tail(&vb->queue, &pipe->activeq_out);
-       vb->state = VIDEOBUF_QUEUED;
-}
-
-static void atomisp_buf_release_output(struct videobuf_queue *vq,
-                                      struct videobuf_buffer *vb)
-{
-       videobuf_vmalloc_free(vb);
-       vb->state = VIDEOBUF_NEEDS_INIT;
-}
-
 static const struct videobuf_queue_ops videobuf_qops = {
        .buf_setup      = atomisp_buf_setup,
        .buf_prepare    = atomisp_buf_prepare,
@@ -641,13 +556,6 @@ static const struct videobuf_queue_ops videobuf_qops = {
        .buf_release    = atomisp_buf_release,
 };
 
-static const struct videobuf_queue_ops videobuf_qops_output = {
-       .buf_setup      = atomisp_buf_setup_output,
-       .buf_prepare    = atomisp_buf_prepare_output,
-       .buf_queue      = atomisp_buf_queue_output,
-       .buf_release    = atomisp_buf_release_output,
-};
-
 static int atomisp_init_pipe(struct atomisp_video_pipe *pipe)
 {
        /* init locks */
@@ -660,15 +568,7 @@ static int atomisp_init_pipe(struct atomisp_video_pipe *pipe)
                                    sizeof(struct atomisp_buffer), pipe,
                                    NULL);      /* ext_lock: NULL */
 
-       videobuf_queue_vmalloc_init(&pipe->outq, &videobuf_qops_output, NULL,
-                                   &pipe->irq_lock,
-                                   V4L2_BUF_TYPE_VIDEO_OUTPUT,
-                                   V4L2_FIELD_NONE,
-                                   sizeof(struct atomisp_buffer), pipe,
-                                   NULL);      /* ext_lock: NULL */
-
        INIT_LIST_HEAD(&pipe->activeq);
-       INIT_LIST_HEAD(&pipe->activeq_out);
        INIT_LIST_HEAD(&pipe->buffers_waiting_for_param);
        INIT_LIST_HEAD(&pipe->per_frame_params);
        memset(pipe->frame_request_config_id, 0,
@@ -684,7 +584,6 @@ static void atomisp_dev_init_struct(struct atomisp_device *isp)
 {
        unsigned int i;
 
-       isp->sw_contex.file_input = false;
        isp->need_gfx_throttle = true;
        isp->isp_fatal_error = false;
        isp->mipi_frame_size = 0;
@@ -741,9 +640,7 @@ static unsigned int atomisp_subdev_users(struct atomisp_sub_device *asd)
        return asd->video_out_preview.users +
               asd->video_out_vf.users +
               asd->video_out_capture.users +
-              asd->video_out_video_capture.users +
-              asd->video_acc.users +
-              asd->video_in.users;
+              asd->video_out_video_capture.users;
 }
 
 unsigned int atomisp_dev_users(struct atomisp_device *isp)
@@ -760,48 +657,18 @@ static int atomisp_open(struct file *file)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_video_pipe *pipe = NULL;
-       struct atomisp_acc_pipe *acc_pipe = NULL;
-       struct atomisp_sub_device *asd;
-       bool acc_node = false;
+       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+       struct atomisp_sub_device *asd = pipe->asd;
        int ret;
 
        dev_dbg(isp->dev, "open device %s\n", vdev->name);
 
-       /*
-        * Ensure that if we are still loading we block. Once the loading
-        * is over we can proceed. We can't blindly hold the lock until
-        * that occurs as if the load fails we'll deadlock the unload
-        */
-       rt_mutex_lock(&isp->loading);
-       /*
-        * FIXME: revisit this with a better check once the code structure
-        * is cleaned up a bit more
-        */
        ret = v4l2_fh_open(file);
-       if (ret) {
-               dev_err(isp->dev,
-                       "%s: v4l2_fh_open() returned error %d\n",
-                      __func__, ret);
-               rt_mutex_unlock(&isp->loading);
+       if (ret)
                return ret;
-       }
-       if (!isp->ready) {
-               rt_mutex_unlock(&isp->loading);
-               return -ENXIO;
-       }
-       rt_mutex_unlock(&isp->loading);
 
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
 
-       acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
-       if (acc_node) {
-               acc_pipe = atomisp_to_acc_pipe(vdev);
-               asd = acc_pipe->asd;
-       } else {
-               pipe = atomisp_to_video_pipe(vdev);
-               asd = pipe->asd;
-       }
        asd->subdev.devnode = vdev;
        /* Deferred firmware loading case. */
        if (isp->css_env.isp_css_fw.bytes == 0) {
@@ -823,14 +690,6 @@ static int atomisp_open(struct file *file)
                isp->css_env.isp_css_fw.data = NULL;
        }
 
-       if (acc_node && acc_pipe->users) {
-               dev_dbg(isp->dev, "acc node already opened\n");
-               rt_mutex_unlock(&isp->mutex);
-               return -EBUSY;
-       } else if (acc_node) {
-               goto dev_init;
-       }
-
        if (!isp->input_cnt) {
                dev_err(isp->dev, "no camera attached\n");
                ret = -EINVAL;
@@ -842,7 +701,7 @@ static int atomisp_open(struct file *file)
         */
        if (pipe->users) {
                dev_dbg(isp->dev, "video node already opened\n");
-               rt_mutex_unlock(&isp->mutex);
+               mutex_unlock(&isp->mutex);
                return -EBUSY;
        }
 
@@ -850,7 +709,6 @@ static int atomisp_open(struct file *file)
        if (ret)
                goto error;
 
-dev_init:
        if (atomisp_dev_users(isp)) {
                dev_dbg(isp->dev, "skip init isp in open\n");
                goto init_subdev;
@@ -885,16 +743,11 @@ init_subdev:
        atomisp_subdev_init_struct(asd);
 
 done:
-
-       if (acc_node)
-               acc_pipe->users++;
-       else
-               pipe->users++;
-       rt_mutex_unlock(&isp->mutex);
+       pipe->users++;
+       mutex_unlock(&isp->mutex);
 
        /* Ensure that a mode is set */
-       if (!acc_node)
-               v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
+       v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
 
        return 0;
 
@@ -902,7 +755,8 @@ css_error:
        atomisp_css_uninit(isp);
        pm_runtime_put(vdev->v4l2_dev->dev);
 error:
-       rt_mutex_unlock(&isp->mutex);
+       mutex_unlock(&isp->mutex);
+       v4l2_fh_release(file);
        return ret;
 }
 
@@ -910,13 +764,12 @@ static int atomisp_release(struct file *file)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_video_pipe *pipe;
-       struct atomisp_acc_pipe *acc_pipe;
-       struct atomisp_sub_device *asd;
-       bool acc_node;
+       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+       struct atomisp_sub_device *asd = pipe->asd;
        struct v4l2_requestbuffers req;
        struct v4l2_subdev_fh fh;
        struct v4l2_rect clear_compose = {0};
+       unsigned long flags;
        int ret = 0;
 
        v4l2_fh_init(&fh.vfh, vdev);
@@ -925,23 +778,12 @@ static int atomisp_release(struct file *file)
        if (!isp)
                return -EBADF;
 
-       mutex_lock(&isp->streamoff_mutex);
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
 
        dev_dbg(isp->dev, "release device %s\n", vdev->name);
-       acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
-       if (acc_node) {
-               acc_pipe = atomisp_to_acc_pipe(vdev);
-               asd = acc_pipe->asd;
-       } else {
-               pipe = atomisp_to_video_pipe(vdev);
-               asd = pipe->asd;
-       }
+
        asd->subdev.devnode = vdev;
-       if (acc_node) {
-               acc_pipe->users--;
-               goto subdev_uninit;
-       }
+
        pipe->users--;
 
        if (pipe->capq.streaming)
@@ -950,27 +792,19 @@ static int atomisp_release(struct file *file)
                         __func__);
 
        if (pipe->capq.streaming &&
-           __atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) {
-               dev_err(isp->dev,
-                       "atomisp_streamoff failed on release, driver bug");
+           atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) {
+               dev_err(isp->dev, "atomisp_streamoff failed on release, driver bug");
                goto done;
        }
 
        if (pipe->users)
                goto done;
 
-       if (__atomisp_reqbufs(file, NULL, &req)) {
-               dev_err(isp->dev,
-                       "atomisp_reqbufs failed on release, driver bug");
+       if (atomisp_reqbufs(file, NULL, &req)) {
+               dev_err(isp->dev, "atomisp_reqbufs failed on release, driver bug");
                goto done;
        }
 
-       if (pipe->outq.bufs[0]) {
-               mutex_lock(&pipe->outq.vb_lock);
-               videobuf_queue_cancel(&pipe->outq);
-               mutex_unlock(&pipe->outq.vb_lock);
-       }
-
        /*
         * A little trick here:
         * file injection input resolution is recorded in the sink pad,
@@ -978,26 +812,17 @@ static int atomisp_release(struct file *file)
         * The sink pad setting can only be cleared when all device nodes
         * get released.
         */
-       if (!isp->sw_contex.file_input && asd->fmt_auto->val) {
+       if (asd->fmt_auto->val) {
                struct v4l2_mbus_framefmt isp_sink_fmt = { 0 };
 
                atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
                                        V4L2_SUBDEV_FORMAT_ACTIVE,
                                        ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt);
        }
-subdev_uninit:
+
        if (atomisp_subdev_users(asd))
                goto done;
 
-       /* clear the sink pad for file input */
-       if (isp->sw_contex.file_input && asd->fmt_auto->val) {
-               struct v4l2_mbus_framefmt isp_sink_fmt = { 0 };
-
-               atomisp_subdev_set_ffmt(&asd->subdev, fh.state,
-                                       V4L2_SUBDEV_FORMAT_ACTIVE,
-                                       ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt);
-       }
-
        atomisp_css_free_stat_buffers(asd);
        atomisp_free_internal_buffers(asd);
        ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
@@ -1007,7 +832,9 @@ subdev_uninit:
 
        /* clear the asd field to show this camera is not used */
        isp->inputs[asd->input_curr].asd = NULL;
+       spin_lock_irqsave(&isp->lock, flags);
        asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+       spin_unlock_irqrestore(&isp->lock, flags);
 
        if (atomisp_dev_users(isp))
                goto done;
@@ -1029,15 +856,12 @@ subdev_uninit:
                dev_err(isp->dev, "Failed to power off device\n");
 
 done:
-       if (!acc_node) {
-               atomisp_subdev_set_selection(&asd->subdev, fh.state,
-                                            V4L2_SUBDEV_FORMAT_ACTIVE,
-                                            atomisp_subdev_source_pad(vdev),
-                                            V4L2_SEL_TGT_COMPOSE, 0,
-                                            &clear_compose);
-       }
-       rt_mutex_unlock(&isp->mutex);
-       mutex_unlock(&isp->streamoff_mutex);
+       atomisp_subdev_set_selection(&asd->subdev, fh.state,
+                                    V4L2_SUBDEV_FORMAT_ACTIVE,
+                                    atomisp_subdev_source_pad(vdev),
+                                    V4L2_SEL_TGT_COMPOSE, 0,
+                                    &clear_compose);
+       mutex_unlock(&isp->mutex);
 
        return v4l2_fh_release(file);
 }
@@ -1194,7 +1018,7 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
        if (!(vma->vm_flags & (VM_WRITE | VM_READ)))
                return -EACCES;
 
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
 
        if (!(vma->vm_flags & VM_SHARED)) {
                /* Map private buffer.
@@ -1205,7 +1029,7 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
                 */
                vma->vm_flags |= VM_SHARED;
                ret = hmm_mmap(vma, vma->vm_pgoff << PAGE_SHIFT);
-               rt_mutex_unlock(&isp->mutex);
+               mutex_unlock(&isp->mutex);
                return ret;
        }
 
@@ -1248,7 +1072,7 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
                }
                raw_virt_addr->data_bytes = origin_size;
                vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
-               rt_mutex_unlock(&isp->mutex);
+               mutex_unlock(&isp->mutex);
                return 0;
        }
 
@@ -1260,24 +1084,16 @@ static int atomisp_mmap(struct file *file, struct vm_area_struct *vma)
                ret = -EINVAL;
                goto error;
        }
-       rt_mutex_unlock(&isp->mutex);
+       mutex_unlock(&isp->mutex);
 
        return atomisp_videobuf_mmap_mapper(&pipe->capq, vma);
 
 error:
-       rt_mutex_unlock(&isp->mutex);
+       mutex_unlock(&isp->mutex);
 
        return ret;
 }
 
-static int atomisp_file_mmap(struct file *file, struct vm_area_struct *vma)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
-       return videobuf_mmap_mapper(&pipe->outq, vma);
-}
-
 static __poll_t atomisp_poll(struct file *file,
                             struct poll_table_struct *pt)
 {
@@ -1285,12 +1101,12 @@ static __poll_t atomisp_poll(struct file *file,
        struct atomisp_device *isp = video_get_drvdata(vdev);
        struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
 
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
        if (pipe->capq.streaming != 1) {
-               rt_mutex_unlock(&isp->mutex);
+               mutex_unlock(&isp->mutex);
                return EPOLLERR;
        }
-       rt_mutex_unlock(&isp->mutex);
+       mutex_unlock(&isp->mutex);
 
        return videobuf_poll_stream(file, &pipe->capq, pt);
 }
@@ -1310,15 +1126,3 @@ const struct v4l2_file_operations atomisp_fops = {
 #endif
        .poll = atomisp_poll,
 };
-
-const struct v4l2_file_operations atomisp_file_fops = {
-       .owner = THIS_MODULE,
-       .open = atomisp_open,
-       .release = atomisp_release,
-       .mmap = atomisp_file_mmap,
-       .unlocked_ioctl = video_ioctl2,
-#ifdef CONFIG_COMPAT
-       /* .compat_ioctl32 = atomisp_compat_ioctl32, */
-#endif
-       .poll = atomisp_poll,
-};
index bf527b3..3d41fab 100644
@@ -134,24 +134,6 @@ static DEFINE_MUTEX(vcm_lock);
 
 static struct gmin_subdev *find_gmin_subdev(struct v4l2_subdev *subdev);
 
-/*
- * Legacy/stub behavior copied from upstream platform_camera.c.  The
- * atomisp driver relies on these values being non-NULL in a few
- * places, even though they are hard-coded in all current
- * implementations.
- */
-const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void)
-{
-       static const struct atomisp_camera_caps caps = {
-               .sensor_num = 1,
-               .sensor = {
-                       { .stream_num = 1, },
-               },
-       };
-       return &caps;
-}
-EXPORT_SYMBOL_GPL(atomisp_get_default_camera_caps);
-
 const struct atomisp_platform_data *atomisp_get_platform_data(void)
 {
        return &pdata;
@@ -1066,6 +1048,38 @@ static int gmin_flisclk_ctrl(struct v4l2_subdev *subdev, int on)
        return ret;
 }
 
+static int camera_sensor_csi_alloc(struct v4l2_subdev *sd, u32 port, u32 lanes,
+                                  u32 format, u32 bayer_order)
+{
+       struct i2c_client *client = v4l2_get_subdevdata(sd);
+       struct camera_mipi_info *csi;
+
+       csi = kzalloc(sizeof(*csi), GFP_KERNEL);
+       if (!csi)
+               return -ENOMEM;
+
+       csi->port = port;
+       csi->num_lanes = lanes;
+       csi->input_format = format;
+       csi->raw_bayer_order = bayer_order;
+       v4l2_set_subdev_hostdata(sd, csi);
+       csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED;
+       csi->metadata_effective_width = NULL;
+       dev_info(&client->dev,
+                "camera pdata: port: %d lanes: %d order: %8.8x\n",
+                port, lanes, bayer_order);
+
+       return 0;
+}
+
+static void camera_sensor_csi_free(struct v4l2_subdev *sd)
+{
+       struct camera_mipi_info *csi;
+
+       csi = v4l2_get_subdev_hostdata(sd);
+       kfree(csi);
+}
+
 static int gmin_csi_cfg(struct v4l2_subdev *sd, int flag)
 {
        struct i2c_client *client = v4l2_get_subdevdata(sd);
@@ -1074,8 +1088,11 @@ static int gmin_csi_cfg(struct v4l2_subdev *sd, int flag)
        if (!client || !gs)
                return -ENODEV;
 
-       return camera_sensor_csi(sd, gs->csi_port, gs->csi_lanes,
-                                gs->csi_fmt, gs->csi_bayer, flag);
+       if (flag)
+               return camera_sensor_csi_alloc(sd, gs->csi_port, gs->csi_lanes,
+                                              gs->csi_fmt, gs->csi_bayer);
+       camera_sensor_csi_free(sd);
+       return 0;
 }
 
 static struct camera_vcm_control *gmin_get_vcm_ctrl(struct v4l2_subdev *subdev,
@@ -1207,16 +1224,14 @@ static int gmin_get_config_dsm_var(struct device *dev,
        if (!strcmp(var, "CamClk"))
                return -EINVAL;
 
-       obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL);
+       /* Return on unexpected object type */
+       obj = acpi_evaluate_dsm_typed(handle, &atomisp_dsm_guid, 0, 0, NULL,
+                                     ACPI_TYPE_PACKAGE);
        if (!obj) {
                dev_info_once(dev, "Didn't find ACPI _DSM table.\n");
                return -EINVAL;
        }
 
-       /* Return on unexpected object type */
-       if (obj->type != ACPI_TYPE_PACKAGE)
-               return -EINVAL;
-
 #if 0 /* Just for debugging purposes */
        for (i = 0; i < obj->package.count; i++) {
                union acpi_object *cur = &obj->package.elements[i];
@@ -1360,35 +1375,6 @@ int gmin_get_var_int(struct device *dev, bool is_gmin, const char *var, int def)
 }
 EXPORT_SYMBOL_GPL(gmin_get_var_int);
 
-int camera_sensor_csi(struct v4l2_subdev *sd, u32 port,
-                     u32 lanes, u32 format, u32 bayer_order, int flag)
-{
-       struct i2c_client *client = v4l2_get_subdevdata(sd);
-       struct camera_mipi_info *csi = NULL;
-
-       if (flag) {
-               csi = kzalloc(sizeof(*csi), GFP_KERNEL);
-               if (!csi)
-                       return -ENOMEM;
-               csi->port = port;
-               csi->num_lanes = lanes;
-               csi->input_format = format;
-               csi->raw_bayer_order = bayer_order;
-               v4l2_set_subdev_hostdata(sd, (void *)csi);
-               csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED;
-               csi->metadata_effective_width = NULL;
-               dev_info(&client->dev,
-                        "camera pdata: port: %d lanes: %d order: %8.8x\n",
-                        port, lanes, bayer_order);
-       } else {
-               csi = v4l2_get_subdev_hostdata(sd);
-               kfree(csi);
-       }
-
-       return 0;
-}
-EXPORT_SYMBOL_GPL(camera_sensor_csi);
-
 /* PCI quirk: The BYT ISP advertises PCI runtime PM but it doesn't
  * work.  Disable so the kernel framework doesn't hang the device
  * trying.  The driver itself does direct calls to the PUNIT to manage
index f71ab1e..d9d158c 100644
@@ -34,7 +34,6 @@
 #include "sh_css_legacy.h"
 
 #include "atomisp_csi2.h"
-#include "atomisp_file.h"
 #include "atomisp_subdev.h"
 #include "atomisp_tpg.h"
 #include "atomisp_compat.h"
 #define ATOM_ISP_POWER_DOWN    0
 #define ATOM_ISP_POWER_UP      1
 
-#define ATOM_ISP_MAX_INPUTS    4
+#define ATOM_ISP_MAX_INPUTS    3
 
 #define ATOMISP_SC_TYPE_SIZE   2
 
 #define ATOMISP_ISP_TIMEOUT_DURATION           (2 * HZ)
 #define ATOMISP_EXT_ISP_TIMEOUT_DURATION        (6 * HZ)
-#define ATOMISP_ISP_FILE_TIMEOUT_DURATION      (60 * HZ)
 #define ATOMISP_WDT_KEEP_CURRENT_DELAY          0
 #define ATOMISP_ISP_MAX_TIMEOUT_COUNT  2
 #define ATOMISP_CSS_STOP_TIMEOUT_US    200000
 #define ATOMISP_DELAYED_INIT_QUEUED    1
 #define ATOMISP_DELAYED_INIT_DONE      2
 
-#define ATOMISP_CALC_CSS_PREV_OVERLAP(lines) \
-       ((lines) * 38 / 100 & 0xfffffe)
-
 /*
  * Define how fast CPU should be able to serve ISP interrupts.
  * The bigger the value, the higher risk that the ISP is not
  * Moorefield/Baytrail platform.
  */
 #define ATOMISP_SOC_CAMERA(asd)  \
-       (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA \
-       && asd->isp->inputs[asd->input_curr].camera_caps-> \
-          sensor[asd->sensor_curr].stream_num == 1)
+       (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA)
 
 #define ATOMISP_USE_YUVPP(asd)  \
        (ATOMISP_SOC_CAMERA(asd) && ATOMISP_CSS_SUPPORT_YUVPP && \
@@ -167,7 +160,6 @@ struct atomisp_input_subdev {
         */
        struct atomisp_sub_device *asd;
 
-       const struct atomisp_camera_caps *camera_caps;
        int sensor_index;
 };
 
@@ -203,7 +195,6 @@ struct atomisp_regs {
 };
 
 struct atomisp_sw_contex {
-       bool file_input;
        int power_state;
        int running_freq;
 };
@@ -241,24 +232,10 @@ struct atomisp_device {
 
        struct atomisp_mipi_csi2_device csi2_port[ATOMISP_CAMERA_NR_PORTS];
        struct atomisp_tpg_device tpg;
-       struct atomisp_file_device file_dev;
 
        /* Purpose of mutex is to protect and serialize use of isp data
         * structures and css API calls. */
-       struct rt_mutex mutex;
-       /*
-        * This mutex ensures that we don't allow an open to succeed while
-        * the initialization process is incomplete
-        */
-       struct rt_mutex loading;
-       /* Set once the ISP is ready to allow opens */
-       bool ready;
-       /*
-        * Serialise streamoff: mutex is dropped during streamoff to
-        * cancel the watchdog queue. MUST be acquired BEFORE
-        * "mutex".
-        */
-       struct mutex streamoff_mutex;
+       struct mutex mutex;
 
        unsigned int input_cnt;
        struct atomisp_input_subdev inputs[ATOM_ISP_MAX_INPUTS];
@@ -272,15 +249,9 @@ struct atomisp_device {
        /* isp timeout status flag */
        bool isp_timeout;
        bool isp_fatal_error;
-       struct workqueue_struct *wdt_work_queue;
-       struct work_struct wdt_work;
-
-       /* ISP2400 */
-       atomic_t wdt_count;
-
-       atomic_t wdt_work_queued;
+       struct work_struct assert_recovery_work;
 
-       spinlock_t lock; /* Just for streaming below */
+       spinlock_t lock; /* Protects asd[i].streaming */
 
        bool need_gfx_throttle;
 
@@ -296,20 +267,4 @@ struct atomisp_device {
 
 extern struct device *atomisp_dev;
 
-#define atomisp_is_wdt_running(a) timer_pending(&(a)->wdt)
-
-/* ISP2401 */
-void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe,
-                             unsigned int delay);
-void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay);
-
-/* ISP2400 */
-void atomisp_wdt_start(struct atomisp_sub_device *asd);
-
-/* ISP2401 */
-void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe);
-void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync);
-
-void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync);
-
 #endif /* __ATOMISP_INTERNAL_H__ */
index 459645c..0ddb0ed 100644
@@ -535,6 +535,32 @@ atomisp_get_format_bridge_from_mbus(u32 mbus_code)
        return NULL;
 }
 
+int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool settings_change)
+{
+       lockdep_assert_held(&pipe->isp->mutex);
+
+       if (pipe->isp->isp_fatal_error)
+               return -EIO;
+
+       switch (pipe->asd->streaming) {
+       case ATOMISP_DEVICE_STREAMING_DISABLED:
+               break;
+       case ATOMISP_DEVICE_STREAMING_ENABLED:
+               if (settings_change) {
+                       dev_err(pipe->isp->dev, "Set fmt/input IOCTL while streaming\n");
+                       return -EBUSY;
+               }
+               break;
+       case ATOMISP_DEVICE_STREAMING_STOPPING:
+               dev_err(pipe->isp->dev, "IOCTL issued while stopping\n");
+               return -EBUSY;
+       default:
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
 /*
  * v4l2 ioctls
  * return ISP capabilities
@@ -609,8 +635,7 @@ atomisp_subdev_streaming_count(struct atomisp_sub_device *asd)
        return asd->video_out_preview.capq.streaming
               + asd->video_out_capture.capq.streaming
               + asd->video_out_video_capture.capq.streaming
-              + asd->video_out_vf.capq.streaming
-              + asd->video_in.capq.streaming;
+              + asd->video_out_vf.capq.streaming;
 }
 
 unsigned int atomisp_streaming_count(struct atomisp_device *isp)
@@ -630,19 +655,9 @@ unsigned int atomisp_streaming_count(struct atomisp_device *isp)
 static int atomisp_g_input(struct file *file, void *fh, unsigned int *input)
 {
        struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
        struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
-       rt_mutex_lock(&isp->mutex);
        *input = asd->input_curr;
-       rt_mutex_unlock(&isp->mutex);
-
        return 0;
 }
 
@@ -653,22 +668,19 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
+       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
+       struct atomisp_sub_device *asd = pipe->asd;
        struct v4l2_subdev *camera = NULL;
        struct v4l2_subdev *motor;
        int ret;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
+       ret = atomisp_pipe_check(pipe, true);
+       if (ret)
+               return ret;
 
-       rt_mutex_lock(&isp->mutex);
        if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) {
                dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt);
-               ret = -EINVAL;
-               goto error;
+               return -EINVAL;
        }
 
        /*
@@ -680,22 +692,13 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
                dev_err(isp->dev,
                        "%s, camera is already used by stream: %d\n", __func__,
                        isp->inputs[input].asd->index);
-               ret = -EBUSY;
-               goto error;
+               return -EBUSY;
        }
 
        camera = isp->inputs[input].camera;
        if (!camera) {
                dev_err(isp->dev, "%s, no camera\n", __func__);
-               ret = -EINVAL;
-               goto error;
-       }
-
-       if (atomisp_subdev_streaming_count(asd)) {
-               dev_err(isp->dev,
-                       "ISP is still streaming, stop first\n");
-               ret = -EINVAL;
-               goto error;
+               return -EINVAL;
        }
 
        /* power off the current owned sensor, as it is not used this time */
@@ -714,7 +717,7 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
        ret = v4l2_subdev_call(isp->inputs[input].camera, core, s_power, 1);
        if (ret) {
                dev_err(isp->dev, "Failed to power-on sensor\n");
-               goto error;
+               return ret;
        }
        /*
         * Some sensor driver resets the run mode during power-on, thus force
@@ -727,7 +730,7 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
                               0, isp->inputs[input].sensor_index, 0);
        if (ret && (ret != -ENOIOCTLCMD)) {
                dev_err(isp->dev, "Failed to select sensor\n");
-               goto error;
+               return ret;
        }
 
        if (!IS_ISP2401) {
@@ -738,20 +741,14 @@ static int atomisp_s_input(struct file *file, void *fh, unsigned int input)
                        ret = v4l2_subdev_call(motor, core, s_power, 1);
        }
 
-       if (!isp->sw_contex.file_input && motor)
+       if (motor)
                ret = v4l2_subdev_call(motor, core, init, 1);
 
        asd->input_curr = input;
        /* mark this camera is used by the current stream */
        isp->inputs[input].asd = asd;
-       rt_mutex_unlock(&isp->mutex);
 
        return 0;
-
-error:
-       rt_mutex_unlock(&isp->mutex);
-
-       return ret;
 }
 
 static int atomisp_enum_framesizes(struct file *file, void *priv,
@@ -819,12 +816,6 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
        unsigned int i, fi = 0;
        int rval;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        camera = isp->inputs[asd->input_curr].camera;
        if(!camera) {
                dev_err(isp->dev, "%s(): camera is NULL, device is %s\n",
@@ -832,15 +823,12 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
                return -EINVAL;
        }
 
-       rt_mutex_lock(&isp->mutex);
-
        rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code);
        if (rval == -ENOIOCTLCMD) {
                dev_warn(isp->dev,
                         "enum_mbus_code pad op not supported by %s. Please fix your sensor driver!\n",
                         camera->name);
        }
-       rt_mutex_unlock(&isp->mutex);
 
        if (rval)
                return rval;
@@ -872,20 +860,6 @@ static int atomisp_enum_fmt_cap(struct file *file, void *fh,
        return -EINVAL;
 }
 
-static int atomisp_g_fmt_file(struct file *file, void *fh,
-                             struct v4l2_format *f)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
-       rt_mutex_lock(&isp->mutex);
-       f->fmt.pix = pipe->pix;
-       rt_mutex_unlock(&isp->mutex);
-
-       return 0;
-}
-
 static int atomisp_adjust_fmt(struct v4l2_format *f)
 {
        const struct atomisp_format_bridge *format_bridge;
@@ -957,13 +931,16 @@ static int atomisp_try_fmt_cap(struct file *file, void *fh,
                               struct v4l2_format *f)
 {
        struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
        int ret;
 
-       rt_mutex_lock(&isp->mutex);
-       ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL);
-       rt_mutex_unlock(&isp->mutex);
+       /*
+        * atomisp_try_fmt() gives results with padding included; note that
+        * this gets removed again by the atomisp_adjust_fmt() call below.
+        */
+       f->fmt.pix.width += pad_w;
+       f->fmt.pix.height += pad_h;
 
+       ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL);
        if (ret)
                return ret;
 
@@ -974,12 +951,9 @@ static int atomisp_g_fmt_cap(struct file *file, void *fh,
                             struct v4l2_format *f)
 {
        struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
        struct atomisp_video_pipe *pipe;
 
-       rt_mutex_lock(&isp->mutex);
        pipe = atomisp_to_video_pipe(vdev);
-       rt_mutex_unlock(&isp->mutex);
 
        f->fmt.pix = pipe->pix;
 
@@ -994,37 +968,6 @@ static int atomisp_g_fmt_cap(struct file *file, void *fh,
        return atomisp_try_fmt_cap(file, fh, f);
 }
 
-static int atomisp_s_fmt_cap(struct file *file, void *fh,
-                            struct v4l2_format *f)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       int ret;
-
-       rt_mutex_lock(&isp->mutex);
-       if (isp->isp_fatal_error) {
-               ret = -EIO;
-               rt_mutex_unlock(&isp->mutex);
-               return ret;
-       }
-       ret = atomisp_set_fmt(vdev, f);
-       rt_mutex_unlock(&isp->mutex);
-       return ret;
-}
-
-static int atomisp_s_fmt_file(struct file *file, void *fh,
-                             struct v4l2_format *f)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       int ret;
-
-       rt_mutex_lock(&isp->mutex);
-       ret = atomisp_set_fmt_file(vdev, f);
-       rt_mutex_unlock(&isp->mutex);
-       return ret;
-}
-
 /*
  * Free videobuffer buffer priv data
  */
@@ -1160,8 +1103,7 @@ error:
 /*
  * Initiate Memory Mapping or User Pointer I/O
  */
-int __atomisp_reqbufs(struct file *file, void *fh,
-                     struct v4l2_requestbuffers *req)
+int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
@@ -1170,16 +1112,8 @@ int __atomisp_reqbufs(struct file *file, void *fh,
        struct ia_css_frame *frame;
        struct videobuf_vmalloc_memory *vm_mem;
        u16 source_pad = atomisp_subdev_source_pad(vdev);
-       u16 stream_id;
        int ret = 0, i = 0;
 
-       if (!asd) {
-               dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-       stream_id = atomisp_source_pad_to_stream_id(asd, source_pad);
-
        if (req->count == 0) {
                mutex_lock(&pipe->capq.vb_lock);
                if (!list_empty(&pipe->capq.stream))
@@ -1200,7 +1134,7 @@ int __atomisp_reqbufs(struct file *file, void *fh,
        if (ret)
                return ret;
 
-       atomisp_alloc_css_stat_bufs(asd, stream_id);
+       atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL);
 
        /*
         * for user pointer type, buffers are not really allocated here,
@@ -1238,36 +1172,6 @@ error:
        return -ENOMEM;
 }
 
-int atomisp_reqbufs(struct file *file, void *fh,
-                   struct v4l2_requestbuffers *req)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       int ret;
-
-       rt_mutex_lock(&isp->mutex);
-       ret = __atomisp_reqbufs(file, fh, req);
-       rt_mutex_unlock(&isp->mutex);
-
-       return ret;
-}
-
-static int atomisp_reqbufs_file(struct file *file, void *fh,
-                               struct v4l2_requestbuffers *req)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
-       if (req->count == 0) {
-               mutex_lock(&pipe->outq.vb_lock);
-               atomisp_videobuf_free_queue(&pipe->outq);
-               mutex_unlock(&pipe->outq.vb_lock);
-               return 0;
-       }
-
-       return videobuf_reqbufs(&pipe->outq, req);
-}
-
 /* application query the status of a buffer */
 static int atomisp_querybuf(struct file *file, void *fh,
                            struct v4l2_buffer *buf)
@@ -1278,15 +1182,6 @@ static int atomisp_querybuf(struct file *file, void *fh,
        return videobuf_querybuf(&pipe->capq, buf);
 }
 
-static int atomisp_querybuf_file(struct file *file, void *fh,
-                                struct v4l2_buffer *buf)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-
-       return videobuf_querybuf(&pipe->outq, buf);
-}
-
 /*
  * Applications call the VIDIOC_QBUF ioctl to enqueue an empty (capturing) or
  * filled (output) buffer in the drivers incoming queue.
@@ -1305,32 +1200,16 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
        struct ia_css_frame *handle = NULL;
        u32 length;
        u32 pgnr;
-       int ret = 0;
-
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
-       rt_mutex_lock(&isp->mutex);
-       if (isp->isp_fatal_error) {
-               ret = -EIO;
-               goto error;
-       }
+       int ret;
 
-       if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
-               dev_err(isp->dev, "%s: reject, as ISP at stopping.\n",
-                       __func__);
-               ret = -EIO;
-               goto error;
-       }
+       ret = atomisp_pipe_check(pipe, false);
+       if (ret)
+               return ret;
 
        if (!buf || buf->index >= VIDEO_MAX_FRAME ||
            !pipe->capq.bufs[buf->index]) {
                dev_err(isp->dev, "Invalid index for qbuf.\n");
-               ret = -EINVAL;
-               goto error;
+               return -EINVAL;
        }
 
        /*
@@ -1338,12 +1217,15 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
         * address and reprograme out page table properly
         */
        if (buf->memory == V4L2_MEMORY_USERPTR) {
+               if (offset_in_page(buf->m.userptr)) {
+                       dev_err(isp->dev, "Error userptr is not page aligned.\n");
+                       return -EINVAL;
+               }
+
                vb = pipe->capq.bufs[buf->index];
                vm_mem = vb->priv;
-               if (!vm_mem) {
-                       ret = -EINVAL;
-                       goto error;
-               }
+               if (!vm_mem)
+                       return -EINVAL;
 
                length = vb->bsize;
                pgnr = (length + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
@@ -1352,17 +1234,15 @@ static int atomisp_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
                        goto done;
 
                if (atomisp_get_css_frame_info(asd,
-                                              atomisp_subdev_source_pad(vdev), &frame_info)) {
-                       ret = -EIO;
-                       goto error;
-               }
+                                              atomisp_subdev_source_pad(vdev), &frame_info))
+                       return -EIO;
 
                ret = ia_css_frame_map(&handle, &frame_info,
                                            (void __user *)buf->m.userptr,
                                            pgnr);
                if (ret) {
                        dev_err(isp->dev, "Failed to map user buffer\n");
-                       goto error;
+                       return ret;
                }
 
                if (vm_mem->vaddr) {
@@ -1406,12 +1286,11 @@ done:
 
        pipe->frame_params[buf->index] = NULL;
 
-       rt_mutex_unlock(&isp->mutex);
-
+       mutex_unlock(&isp->mutex);
        ret = videobuf_qbuf(&pipe->capq, buf);
-       rt_mutex_lock(&isp->mutex);
+       mutex_lock(&isp->mutex);
        if (ret)
-               goto error;
+               return ret;
 
        /* TODO: do this better, not best way to queue to css */
        if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
@@ -1419,15 +1298,6 @@ done:
                        atomisp_handle_parameter_and_buffer(pipe);
                } else {
                        atomisp_qbuffers_to_css(asd);
-
-                       if (!IS_ISP2401) {
-                               if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd))
-                                       atomisp_wdt_start(asd);
-                       } else {
-                               if (!atomisp_is_wdt_running(pipe) &&
-                                   atomisp_buffers_queued_pipe(pipe))
-                                       atomisp_wdt_start_pipe(pipe);
-                       }
                }
        }
 
@@ -1449,58 +1319,11 @@ done:
                        asd->pending_capture_request++;
                        dev_dbg(isp->dev, "Add one pending capture request.\n");
        }
-       rt_mutex_unlock(&isp->mutex);
 
        dev_dbg(isp->dev, "qbuf buffer %d (%s) for asd%d\n", buf->index,
                vdev->name, asd->index);
 
-       return ret;
-
-error:
-       rt_mutex_unlock(&isp->mutex);
-       return ret;
-}
-
-static int atomisp_qbuf_file(struct file *file, void *fh,
-                            struct v4l2_buffer *buf)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
-       int ret;
-
-       rt_mutex_lock(&isp->mutex);
-       if (isp->isp_fatal_error) {
-               ret = -EIO;
-               goto error;
-       }
-
-       if (!buf || buf->index >= VIDEO_MAX_FRAME ||
-           !pipe->outq.bufs[buf->index]) {
-               dev_err(isp->dev, "Invalid index for qbuf.\n");
-               ret = -EINVAL;
-               goto error;
-       }
-
-       if (buf->memory != V4L2_MEMORY_MMAP) {
-               dev_err(isp->dev, "Unsupported memory method\n");
-               ret = -EINVAL;
-               goto error;
-       }
-
-       if (buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
-               dev_err(isp->dev, "Unsupported buffer type\n");
-               ret = -EINVAL;
-               goto error;
-       }
-       rt_mutex_unlock(&isp->mutex);
-
-       return videobuf_qbuf(&pipe->outq, buf);
-
-error:
-       rt_mutex_unlock(&isp->mutex);
-
-       return ret;
+       return 0;
 }
 
 static int __get_frame_exp_id(struct atomisp_video_pipe *pipe,
@@ -1529,37 +1352,21 @@ static int atomisp_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
        struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev);
        struct atomisp_sub_device *asd = pipe->asd;
        struct atomisp_device *isp = video_get_drvdata(vdev);
-       int ret = 0;
-
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
-       rt_mutex_lock(&isp->mutex);
-
-       if (isp->isp_fatal_error) {
-               rt_mutex_unlock(&isp->mutex);
-               return -EIO;
-       }
-
-       if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
-               rt_mutex_unlock(&isp->mutex);
-               dev_err(isp->dev, "%s: reject, as ISP at stopping.\n",
-                       __func__);
-               return -EIO;
-       }
+       int ret;
 
-       rt_mutex_unlock(&isp->mutex);
+       ret = atomisp_pipe_check(pipe, false);
+       if (ret)
+               return ret;
 
+       mutex_unlock(&isp->mutex);
        ret = videobuf_dqbuf(&pipe->capq, buf, file->f_flags & O_NONBLOCK);
+       mutex_lock(&isp->mutex);
        if (ret) {
                if (ret != -EAGAIN)
                        dev_dbg(isp->dev, "<%s: %d\n", __func__, ret);
                return ret;
        }
-       rt_mutex_lock(&isp->mutex);
+
        buf->bytesused = pipe->pix.sizeimage;
        buf->reserved = asd->frame_status[buf->index];
 
@@ -1573,7 +1380,6 @@ static int atomisp_dqbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
        if (!(buf->flags & V4L2_BUF_FLAG_ERROR))
                buf->reserved |= __get_frame_exp_id(pipe, buf) << 16;
        buf->reserved2 = pipe->frame_config_id[buf->index];
-       rt_mutex_unlock(&isp->mutex);
 
        dev_dbg(isp->dev,
                "dqbuf buffer %d (%s) for asd%d with exp_id %d, isp_config_id %d\n",
@@ -1622,16 +1428,6 @@ enum ia_css_pipe_id atomisp_get_css_pipe_id(struct atomisp_sub_device *asd)
 
 static unsigned int atomisp_sensor_start_stream(struct atomisp_sub_device *asd)
 {
-       struct atomisp_device *isp = asd->isp;
-
-       if (isp->inputs[asd->input_curr].camera_caps->
-           sensor[asd->sensor_curr].stream_num > 1) {
-               if (asd->high_speed_mode)
-                       return 1;
-               else
-                       return 2;
-       }
-
        if (asd->vfpp->val != ATOMISP_VFPP_ENABLE ||
            asd->copy_mode)
                return 1;
@@ -1650,31 +1446,15 @@ static unsigned int atomisp_sensor_start_stream(struct atomisp_sub_device *asd)
 int atomisp_stream_on_master_slave_sensor(struct atomisp_device *isp,
        bool isp_timeout)
 {
-       unsigned int master = -1, slave = -1, delay_slave = 0;
-       int i, ret;
-
-       /*
-        * ISP only support 2 streams now so ignore multiple master/slave
-        * case to reduce the delay between 2 stream_on calls.
-        */
-       for (i = 0; i < isp->num_of_streams; i++) {
-               int sensor_index = isp->asd[i].input_curr;
-
-               if (isp->inputs[sensor_index].camera_caps->
-                   sensor[isp->asd[i].sensor_curr].is_slave)
-                       slave = sensor_index;
-               else
-                       master = sensor_index;
-       }
+       unsigned int master, slave, delay_slave = 0;
+       int ret;
 
-       if (master == -1 || slave == -1) {
-               master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR;
-               slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR;
-               dev_warn(isp->dev,
-                        "depth mode use default master=%s.slave=%s.\n",
-                        isp->inputs[master].camera->name,
-                        isp->inputs[slave].camera->name);
-       }
+       master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR;
+       slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR;
+       dev_warn(isp->dev,
+                "depth mode use default master=%s.slave=%s.\n",
+                isp->inputs[master].camera->name,
+                isp->inputs[slave].camera->name);
 
        ret = v4l2_subdev_call(isp->inputs[master].camera, core,
                               ioctl, ATOMISP_IOC_G_DEPTH_SYNC_COMP,
@@ -1708,51 +1488,6 @@ int atomisp_stream_on_master_slave_sensor(struct atomisp_device *isp,
        return 0;
 }
 
-/* FIXME! ISP2400 */
-static void __wdt_on_master_slave_sensor(struct atomisp_device *isp,
-                                        unsigned int wdt_duration)
-{
-       if (atomisp_buffers_queued(&isp->asd[0]))
-               atomisp_wdt_refresh(&isp->asd[0], wdt_duration);
-       if (atomisp_buffers_queued(&isp->asd[1]))
-               atomisp_wdt_refresh(&isp->asd[1], wdt_duration);
-}
-
-/* FIXME! ISP2401 */
-static void __wdt_on_master_slave_sensor_pipe(struct atomisp_video_pipe *pipe,
-                                             unsigned int wdt_duration,
-                                             bool enable)
-{
-       static struct atomisp_video_pipe *pipe0;
-
-       if (enable) {
-               if (atomisp_buffers_queued_pipe(pipe0))
-                       atomisp_wdt_refresh_pipe(pipe0, wdt_duration);
-               if (atomisp_buffers_queued_pipe(pipe))
-                       atomisp_wdt_refresh_pipe(pipe, wdt_duration);
-       } else {
-               pipe0 = pipe;
-       }
-}
-
-static void atomisp_pause_buffer_event(struct atomisp_device *isp)
-{
-       struct v4l2_event event = {0};
-       int i;
-
-       event.type = V4L2_EVENT_ATOMISP_PAUSE_BUFFER;
-
-       for (i = 0; i < isp->num_of_streams; i++) {
-               int sensor_index = isp->asd[i].input_curr;
-
-               if (isp->inputs[sensor_index].camera_caps->
-                   sensor[isp->asd[i].sensor_curr].is_slave) {
-                       v4l2_event_queue(isp->asd[i].subdev.devnode, &event);
-                       break;
-               }
-       }
-}
-
 /* Input system HW workaround */
 /* Input system address translation corrupts burst during */
 /* invalidate. SW workaround for this is to set burst length */
@@ -1784,15 +1519,8 @@ static int atomisp_streamon(struct file *file, void *fh,
        struct pci_dev *pdev = to_pci_dev(isp->dev);
        enum ia_css_pipe_id css_pipe_id;
        unsigned int sensor_start_stream;
-       unsigned int wdt_duration = ATOMISP_ISP_TIMEOUT_DURATION;
-       int ret = 0;
        unsigned long irqflags;
-
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
+       int ret;
 
        dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n",
                atomisp_subdev_source_pad(vdev), asd->index);
@@ -1802,19 +1530,12 @@ static int atomisp_streamon(struct file *file, void *fh,
                return -EINVAL;
        }
 
-       rt_mutex_lock(&isp->mutex);
-       if (isp->isp_fatal_error) {
-               ret = -EIO;
-               goto out;
-       }
-
-       if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) {
-               ret = -EBUSY;
-               goto out;
-       }
+       ret = atomisp_pipe_check(pipe, false);
+       if (ret)
+               return ret;
 
        if (pipe->capq.streaming)
-               goto out;
+               return 0;
 
        /* Input system HW workaround */
        atomisp_dma_burst_len_cfg(asd);
@@ -1829,20 +1550,18 @@ static int atomisp_streamon(struct file *file, void *fh,
        if (list_empty(&pipe->capq.stream)) {
                spin_unlock_irqrestore(&pipe->irq_lock, irqflags);
                dev_dbg(isp->dev, "no buffer in the queue\n");
-               ret = -EINVAL;
-               goto out;
+               return -EINVAL;
        }
        spin_unlock_irqrestore(&pipe->irq_lock, irqflags);
 
        ret = videobuf_streamon(&pipe->capq);
        if (ret)
-               goto out;
+               return ret;
 
        /* Reset pending capture request count. */
        asd->pending_capture_request = 0;
 
-       if ((atomisp_subdev_streaming_count(asd) > sensor_start_stream) &&
-           (!isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl)) {
+       if (atomisp_subdev_streaming_count(asd) > sensor_start_stream) {
                /* trigger still capture */
                if (asd->continuous_mode->val &&
                    atomisp_subdev_source_pad(vdev)
@@ -1856,11 +1575,11 @@ static int atomisp_streamon(struct file *file, void *fh,
 
                        if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) {
                                flush_work(&asd->delayed_init_work);
-                               rt_mutex_unlock(&isp->mutex);
-                               if (wait_for_completion_interruptible(
-                                       &asd->init_done) != 0)
+                               mutex_unlock(&isp->mutex);
+                               ret = wait_for_completion_interruptible(&asd->init_done);
+                               mutex_lock(&isp->mutex);
+                               if (ret != 0)
                                        return -ERESTARTSYS;
-                               rt_mutex_lock(&isp->mutex);
                        }
 
                        /* handle per_frame_setting parameter and buffers */
@@ -1882,16 +1601,12 @@ static int atomisp_streamon(struct file *file, void *fh,
                                        asd->params.offline_parm.num_captures,
                                        asd->params.offline_parm.skip_frames,
                                        asd->params.offline_parm.offset);
-                               if (ret) {
-                                       ret = -EINVAL;
-                                       goto out;
-                               }
-                               if (asd->depth_mode->val)
-                                       atomisp_pause_buffer_event(isp);
+                               if (ret)
+                                       return -EINVAL;
                        }
                }
                atomisp_qbuffers_to_css(asd);
-               goto out;
+               return 0;
        }
 
        if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
@@ -1917,14 +1632,14 @@ static int atomisp_streamon(struct file *file, void *fh,
 
        ret = atomisp_css_start(asd, css_pipe_id, false);
        if (ret)
-               goto out;
+               return ret;
 
+       spin_lock_irqsave(&isp->lock, irqflags);
        asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED;
+       spin_unlock_irqrestore(&isp->lock, irqflags);
        atomic_set(&asd->sof_count, -1);
        atomic_set(&asd->sequence, -1);
        atomic_set(&asd->sequence_temp, -1);
-       if (isp->sw_contex.file_input)
-               wdt_duration = ATOMISP_ISP_FILE_TIMEOUT_DURATION;
 
        asd->params.dis_proj_data_valid = false;
        asd->latest_preview_exp_id = 0;
@@ -1938,7 +1653,7 @@ static int atomisp_streamon(struct file *file, void *fh,
 
        /* Only start sensor when the last streaming instance started */
        if (atomisp_subdev_streaming_count(asd) < sensor_start_stream)
-               goto out;
+               return 0;
 
 start_sensor:
        if (isp->flash) {
@@ -1947,26 +1662,21 @@ start_sensor:
                atomisp_setup_flash(asd);
        }
 
-       if (!isp->sw_contex.file_input) {
-               atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
-                                      atomisp_css_valid_sof(isp));
-               atomisp_csi2_configure(asd);
-               /*
-                * set freq to max when streaming count > 1 which indicate
-                * dual camera would run
-                */
-               if (atomisp_streaming_count(isp) > 1) {
-                       if (atomisp_freq_scaling(isp,
-                                                ATOMISP_DFS_MODE_MAX, false) < 0)
-                               dev_dbg(isp->dev, "DFS max mode failed!\n");
-               } else {
-                       if (atomisp_freq_scaling(isp,
-                                                ATOMISP_DFS_MODE_AUTO, false) < 0)
-                               dev_dbg(isp->dev, "DFS auto mode failed!\n");
-               }
-       } else {
-               if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, false) < 0)
+       atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
+                              atomisp_css_valid_sof(isp));
+       atomisp_csi2_configure(asd);
+       /*
+        * set freq to max when streaming count > 1 which indicate
+        * dual camera would run
+        */
+       if (atomisp_streaming_count(isp) > 1) {
+               if (atomisp_freq_scaling(isp,
+                                        ATOMISP_DFS_MODE_MAX, false) < 0)
                        dev_dbg(isp->dev, "DFS max mode failed!\n");
+       } else {
+               if (atomisp_freq_scaling(isp,
+                                        ATOMISP_DFS_MODE_AUTO, false) < 0)
+                       dev_dbg(isp->dev, "DFS auto mode failed!\n");
        }
 
        if (asd->depth_mode->val && atomisp_streaming_count(isp) ==
@@ -1974,17 +1684,11 @@ start_sensor:
                ret = atomisp_stream_on_master_slave_sensor(isp, false);
                if (ret) {
                        dev_err(isp->dev, "master slave sensor stream on failed!\n");
-                       goto out;
+                       return ret;
                }
-               if (!IS_ISP2401)
-                       __wdt_on_master_slave_sensor(isp, wdt_duration);
-               else
-                       __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, true);
                goto start_delay_wq;
        } else if (asd->depth_mode->val && (atomisp_streaming_count(isp) <
                                            ATOMISP_DEPTH_SENSOR_STREAMON_COUNT)) {
-               if (IS_ISP2401)
-                       __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, false);
                goto start_delay_wq;
        }
 
@@ -1999,41 +1703,29 @@ start_sensor:
        ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
                               video, s_stream, 1);
        if (ret) {
+               spin_lock_irqsave(&isp->lock, irqflags);
                asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
-               ret = -EINVAL;
-               goto out;
-       }
-
-       if (!IS_ISP2401) {
-               if (atomisp_buffers_queued(asd))
-                       atomisp_wdt_refresh(asd, wdt_duration);
-       } else {
-               if (atomisp_buffers_queued_pipe(pipe))
-                       atomisp_wdt_refresh_pipe(pipe, wdt_duration);
+               spin_unlock_irqrestore(&isp->lock, irqflags);
+               return -EINVAL;
        }
 
 start_delay_wq:
        if (asd->continuous_mode->val) {
-               struct v4l2_mbus_framefmt *sink;
-
-               sink = atomisp_subdev_get_ffmt(&asd->subdev, NULL,
-                                              V4L2_SUBDEV_FORMAT_ACTIVE,
-                                              ATOMISP_SUBDEV_PAD_SINK);
+               atomisp_subdev_get_ffmt(&asd->subdev, NULL,
+                                       V4L2_SUBDEV_FORMAT_ACTIVE,
+                                       ATOMISP_SUBDEV_PAD_SINK);
 
                reinit_completion(&asd->init_done);
                asd->delayed_init = ATOMISP_DELAYED_INIT_QUEUED;
                queue_work(asd->delayed_init_workq, &asd->delayed_init_work);
-               atomisp_css_set_cont_prev_start_time(isp,
-                                                    ATOMISP_CALC_CSS_PREV_OVERLAP(sink->height));
        } else {
                asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED;
        }
-out:
-       rt_mutex_unlock(&isp->mutex);
-       return ret;
+
+       return 0;
 }
 
-int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
+int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
@@ -2050,17 +1742,10 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
        unsigned long flags;
        bool first_streamoff = false;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
                atomisp_subdev_source_pad(vdev), asd->index);
 
        lockdep_assert_held(&isp->mutex);
-       lockdep_assert_held(&isp->streamoff_mutex);
 
        if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
                dev_dbg(isp->dev, "unsupported v4l2 buf type\n");
@@ -2071,17 +1756,10 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
         * do only videobuf_streamoff for capture & vf pipes in
         * case of continuous capture
         */
-       if ((asd->continuous_mode->val ||
-            isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) &&
-           atomisp_subdev_source_pad(vdev) !=
-           ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
-           atomisp_subdev_source_pad(vdev) !=
-           ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) {
-               if (isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) {
-                       v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
-                                        video, s_stream, 0);
-               } else if (atomisp_subdev_source_pad(vdev)
-                          == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) {
+       if (asd->continuous_mode->val &&
+           atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW &&
+           atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) {
+               if (atomisp_subdev_source_pad(vdev) == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) {
                        /* stop continuous still capture if needed */
                        if (asd->params.offline_parm.num_captures == -1)
                                atomisp_css_offline_capture_configure(asd,
@@ -2118,32 +1796,14 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
        if (!pipe->capq.streaming)
                return 0;
 
-       spin_lock_irqsave(&isp->lock, flags);
-       if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) {
-               asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
+       if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED)
                first_streamoff = true;
-       }
-       spin_unlock_irqrestore(&isp->lock, flags);
-
-       if (first_streamoff) {
-               /* if other streams are running, should not disable watch dog */
-               rt_mutex_unlock(&isp->mutex);
-               atomisp_wdt_stop(asd, true);
-
-               /*
-                * must stop sending pixels into GP_FIFO before stop
-                * the pipeline.
-                */
-               if (isp->sw_contex.file_input)
-                       v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
-                                        video, s_stream, 0);
-
-               rt_mutex_lock(&isp->mutex);
-       }
 
        spin_lock_irqsave(&isp->lock, flags);
        if (atomisp_subdev_streaming_count(asd) == 1)
                asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED;
+       else
+               asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING;
        spin_unlock_irqrestore(&isp->lock, flags);
 
        if (!first_streamoff) {
@@ -2154,19 +1814,16 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
        }
 
        atomisp_clear_css_buffer_counters(asd);
-
-       if (!isp->sw_contex.file_input)
-               atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF,
-                                      false);
+       atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false);
 
        if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) {
                cancel_work_sync(&asd->delayed_init_work);
                asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED;
        }
-       if (first_streamoff) {
-               css_pipe_id = atomisp_get_css_pipe_id(asd);
-               atomisp_css_stop(asd, css_pipe_id, false);
-       }
+
+       css_pipe_id = atomisp_get_css_pipe_id(asd);
+       atomisp_css_stop(asd, css_pipe_id, false);
+
        /* cancel work queue*/
        if (asd->video_out_capture.users) {
                capture_pipe = &asd->video_out_capture;
@@ -2210,9 +1867,8 @@ stopsensor:
            != atomisp_sensor_start_stream(asd))
                return 0;
 
-       if (!isp->sw_contex.file_input)
-               ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
-                                      video, s_stream, 0);
+       ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera,
+                              video, s_stream, 0);
 
        if (isp->flash) {
                asd->params.num_flash_frames = 0;
@@ -2284,22 +1940,6 @@ stopsensor:
        return ret;
 }
 
-static int atomisp_streamoff(struct file *file, void *fh,
-                            enum v4l2_buf_type type)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-       int rval;
-
-       mutex_lock(&isp->streamoff_mutex);
-       rt_mutex_lock(&isp->mutex);
-       rval = __atomisp_streamoff(file, fh, type);
-       rt_mutex_unlock(&isp->mutex);
-       mutex_unlock(&isp->streamoff_mutex);
-
-       return rval;
-}
-
 /*
  * To get the current value of a control.
  * applications initialize the id field of a struct v4l2_control and
@@ -2313,12 +1953,6 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
        struct atomisp_device *isp = video_get_drvdata(vdev);
        int i, ret = -EINVAL;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        for (i = 0; i < ctrls_num; i++) {
                if (ci_v4l2_controls[i].id == control->id) {
                        ret = 0;
@@ -2329,8 +1963,6 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
        if (ret)
                return ret;
 
-       rt_mutex_lock(&isp->mutex);
-
        switch (control->id) {
        case V4L2_CID_IRIS_ABSOLUTE:
        case V4L2_CID_EXPOSURE_ABSOLUTE:
@@ -2352,7 +1984,6 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
        case V4L2_CID_TEST_PATTERN_COLOR_GR:
        case V4L2_CID_TEST_PATTERN_COLOR_GB:
        case V4L2_CID_TEST_PATTERN_COLOR_B:
-               rt_mutex_unlock(&isp->mutex);
                return v4l2_g_ctrl(isp->inputs[asd->input_curr].camera->
                                   ctrl_handler, control);
        case V4L2_CID_COLORFX:
@@ -2381,7 +2012,6 @@ static int atomisp_g_ctrl(struct file *file, void *fh,
                break;
        }
 
-       rt_mutex_unlock(&isp->mutex);
        return ret;
 }
 
@@ -2398,12 +2028,6 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
        struct atomisp_device *isp = video_get_drvdata(vdev);
        int i, ret = -EINVAL;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        for (i = 0; i < ctrls_num; i++) {
                if (ci_v4l2_controls[i].id == control->id) {
                        ret = 0;
@@ -2414,7 +2038,6 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
        if (ret)
                return ret;
 
-       rt_mutex_lock(&isp->mutex);
        switch (control->id) {
        case V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE:
        case V4L2_CID_EXPOSURE:
@@ -2435,7 +2058,6 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
        case V4L2_CID_TEST_PATTERN_COLOR_GR:
        case V4L2_CID_TEST_PATTERN_COLOR_GB:
        case V4L2_CID_TEST_PATTERN_COLOR_B:
-               rt_mutex_unlock(&isp->mutex);
                return v4l2_s_ctrl(NULL,
                                   isp->inputs[asd->input_curr].camera->
                                   ctrl_handler, control);
@@ -2467,7 +2089,6 @@ static int atomisp_s_ctrl(struct file *file, void *fh,
                ret = -EINVAL;
                break;
        }
-       rt_mutex_unlock(&isp->mutex);
        return ret;
 }
 
@@ -2485,12 +2106,6 @@ static int atomisp_queryctl(struct file *file, void *fh,
        struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
        struct atomisp_device *isp = video_get_drvdata(vdev);
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        switch (qc->id) {
        case V4L2_CID_FOCUS_ABSOLUTE:
        case V4L2_CID_FOCUS_RELATIVE:
@@ -2536,12 +2151,6 @@ static int atomisp_camera_g_ext_ctrls(struct file *file, void *fh,
        int i;
        int ret = 0;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        if (!IS_ISP2401)
                motor = isp->inputs[asd->input_curr].motor;
        else
@@ -2592,9 +2201,7 @@ static int atomisp_camera_g_ext_ctrls(struct file *file, void *fh,
                                                &ctrl);
                        break;
                case V4L2_CID_ZOOM_ABSOLUTE:
-                       rt_mutex_lock(&isp->mutex);
                        ret = atomisp_digital_zoom(asd, 0, &ctrl.value);
-                       rt_mutex_unlock(&isp->mutex);
                        break;
                case V4L2_CID_G_SKIP_FRAMES:
                        ret = v4l2_subdev_call(
@@ -2653,12 +2260,6 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
        int i;
        int ret = 0;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        if (!IS_ISP2401)
                motor = isp->inputs[asd->input_curr].motor;
        else
@@ -2707,7 +2308,6 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
                case V4L2_CID_FLASH_STROBE:
                case V4L2_CID_FLASH_MODE:
                case V4L2_CID_FLASH_STATUS_REGISTER:
-                       rt_mutex_lock(&isp->mutex);
                        if (isp->flash) {
                                ret =
                                    v4l2_s_ctrl(NULL, isp->flash->ctrl_handler,
@@ -2722,12 +2322,9 @@ static int atomisp_camera_s_ext_ctrls(struct file *file, void *fh,
                                        asd->params.num_flash_frames = 0;
                                }
                        }
-                       rt_mutex_unlock(&isp->mutex);
                        break;
                case V4L2_CID_ZOOM_ABSOLUTE:
-                       rt_mutex_lock(&isp->mutex);
                        ret = atomisp_digital_zoom(asd, 1, &ctrl.value);
-                       rt_mutex_unlock(&isp->mutex);
                        break;
                default:
                        ctr = v4l2_ctrl_find(&asd->ctrl_handler, ctrl.id);
@@ -2784,20 +2381,12 @@ static int atomisp_g_parm(struct file *file, void *fh,
        struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
        struct atomisp_device *isp = video_get_drvdata(vdev);
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
                dev_err(isp->dev, "unsupported v4l2 buf type\n");
                return -EINVAL;
        }
 
-       rt_mutex_lock(&isp->mutex);
        parm->parm.capture.capturemode = asd->run_mode->val;
-       rt_mutex_unlock(&isp->mutex);
 
        return 0;
 }
@@ -2812,19 +2401,11 @@ static int atomisp_s_parm(struct file *file, void *fh,
        int rval;
        int fps;
 
-       if (!asd) {
-               dev_err(isp->dev, "%s(): asd is NULL, device is %s\n",
-                       __func__, vdev->name);
-               return -EINVAL;
-       }
-
        if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
                dev_err(isp->dev, "unsupported v4l2 buf type\n");
                return -EINVAL;
        }
 
-       rt_mutex_lock(&isp->mutex);
-
        asd->high_speed_mode = false;
        switch (parm->parm.capture.capturemode) {
        case CI_MODE_NONE: {
@@ -2843,7 +2424,7 @@ static int atomisp_s_parm(struct file *file, void *fh,
                                asd->high_speed_mode = true;
                }
 
-               goto out;
+               return rval == -ENOIOCTLCMD ? 0 : rval;
        }
        case CI_MODE_VIDEO:
                mode = ATOMISP_RUN_MODE_VIDEO;
@@ -2858,76 +2439,29 @@ static int atomisp_s_parm(struct file *file, void *fh,
                mode = ATOMISP_RUN_MODE_PREVIEW;
                break;
        default:
-               rval = -EINVAL;
-               goto out;
+               return -EINVAL;
        }
 
        rval = v4l2_ctrl_s_ctrl(asd->run_mode, mode);
 
-out:
-       rt_mutex_unlock(&isp->mutex);
-
        return rval == -ENOIOCTLCMD ? 0 : rval;
 }
 
-static int atomisp_s_parm_file(struct file *file, void *fh,
-                              struct v4l2_streamparm *parm)
-{
-       struct video_device *vdev = video_devdata(file);
-       struct atomisp_device *isp = video_get_drvdata(vdev);
-
-       if (parm->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
-               dev_err(isp->dev, "unsupported v4l2 buf type for output\n");
-               return -EINVAL;
-       }
-
-       rt_mutex_lock(&isp->mutex);
-       isp->sw_contex.file_input = true;
-       rt_mutex_unlock(&isp->mutex);
-
-       return 0;
-}
-
 static long atomisp_vidioc_default(struct file *file, void *fh,
                                   bool valid_prio, unsigned int cmd, void *arg)
 {
        struct video_device *vdev = video_devdata(file);
        struct atomisp_device *isp = video_get_drvdata(vdev);
-       struct atomisp_sub_device *asd;
+       struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd;
        struct v4l2_subdev *motor;
-       bool acc_node;
        int err;
 
-       acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC");
-       if (acc_node)
-               asd = atomisp_to_acc_pipe(vdev)->asd;
-       else
-               asd = atomisp_to_video_pipe(vdev)->asd;
-
        if (!IS_ISP2401)
                motor = isp->inputs[asd->input_curr].motor;
        else
                motor = isp->motor;
 
        switch (cmd) {
-       case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA:
-       case ATOMISP_IOC_S_EXPOSURE:
-       case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP:
-       case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA:
-       case ATOMISP_IOC_EXT_ISP_CTRL:
-       case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO:
-       case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE:
-       case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE:
-       case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT:
-       case ATOMISP_IOC_S_SENSOR_EE_CONFIG:
-       case ATOMISP_IOC_G_UPDATE_EXPOSURE:
-               /* we do not need take isp->mutex for these IOCTLs */
-               break;
-       default:
-               rt_mutex_lock(&isp->mutex);
-               break;
-       }
-       switch (cmd) {
        case ATOMISP_IOC_S_SENSOR_RUNMODE:
                if (IS_ISP2401)
                        err = atomisp_set_sensor_runmode(asd, arg);
@@ -3173,22 +2707,6 @@ static long atomisp_vidioc_default(struct file *file, void *fh,
                break;
        }
 
-       switch (cmd) {
-       case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA:
-       case ATOMISP_IOC_S_EXPOSURE:
-       case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP:
-       case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA:
-       case ATOMISP_IOC_EXT_ISP_CTRL:
-       case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO:
-       case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE:
-       case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE:
-       case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT:
-       case ATOMISP_IOC_G_UPDATE_EXPOSURE:
-               break;
-       default:
-               rt_mutex_unlock(&isp->mutex);
-               break;
-       }
        return err;
 }
 
@@ -3207,7 +2725,7 @@ const struct v4l2_ioctl_ops atomisp_ioctl_ops = {
        .vidioc_enum_fmt_vid_cap = atomisp_enum_fmt_cap,
        .vidioc_try_fmt_vid_cap = atomisp_try_fmt_cap,
        .vidioc_g_fmt_vid_cap = atomisp_g_fmt_cap,
-       .vidioc_s_fmt_vid_cap = atomisp_s_fmt_cap,
+       .vidioc_s_fmt_vid_cap = atomisp_set_fmt,
        .vidioc_reqbufs = atomisp_reqbufs,
        .vidioc_querybuf = atomisp_querybuf,
        .vidioc_qbuf = atomisp_qbuf,
@@ -3218,13 +2736,3 @@ const struct v4l2_ioctl_ops atomisp_ioctl_ops = {
        .vidioc_s_parm = atomisp_s_parm,
        .vidioc_g_parm = atomisp_g_parm,
 };
-
-const struct v4l2_ioctl_ops atomisp_file_ioctl_ops = {
-       .vidioc_querycap = atomisp_querycap,
-       .vidioc_g_fmt_vid_out = atomisp_g_fmt_file,
-       .vidioc_s_fmt_vid_out = atomisp_s_fmt_file,
-       .vidioc_s_parm = atomisp_s_parm_file,
-       .vidioc_reqbufs = atomisp_reqbufs_file,
-       .vidioc_querybuf = atomisp_querybuf_file,
-       .vidioc_qbuf = atomisp_qbuf_file,
-};
index d85e0d6..c660f63 100644
@@ -34,27 +34,21 @@ atomisp_format_bridge *atomisp_get_format_bridge(unsigned int pixelformat);
 const struct
 atomisp_format_bridge *atomisp_get_format_bridge_from_mbus(u32 mbus_code);
 
+int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool streaming_ok);
+
 int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd,
                                uint16_t stream_id);
 
-int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type);
-int __atomisp_reqbufs(struct file *file, void *fh,
-                     struct v4l2_requestbuffers *req);
-
-int atomisp_reqbufs(struct file *file, void *fh,
-                   struct v4l2_requestbuffers *req);
+int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type);
+int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req);
 
 enum ia_css_pipe_id atomisp_get_css_pipe_id(struct atomisp_sub_device
        *asd);
 
 void atomisp_videobuf_free_buf(struct videobuf_buffer *vb);
 
-extern const struct v4l2_file_operations atomisp_file_fops;
-
 extern const struct v4l2_ioctl_ops atomisp_ioctl_ops;
 
-extern const struct v4l2_ioctl_ops atomisp_file_ioctl_ops;
-
 unsigned int atomisp_streaming_count(struct atomisp_device *isp);
 
 /* compat_ioctl for 32bit userland app and 64bit kernel */
index 394fe69..847dfee 100644
@@ -373,16 +373,12 @@ int atomisp_subdev_set_selection(struct v4l2_subdev *sd,
        struct atomisp_sub_device *isp_sd = v4l2_get_subdevdata(sd);
        struct atomisp_device *isp = isp_sd->isp;
        struct v4l2_mbus_framefmt *ffmt[ATOMISP_SUBDEV_PADS_NUM];
-       u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode);
        struct v4l2_rect *crop[ATOMISP_SUBDEV_PADS_NUM],
                       *comp[ATOMISP_SUBDEV_PADS_NUM];
-       enum atomisp_input_stream_id stream_id;
        unsigned int i;
        unsigned int padding_w = pad_w;
        unsigned int padding_h = pad_h;
 
-       stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad);
-
        isp_get_fmt_rect(sd, sd_state, which, ffmt, crop, comp);
 
        dev_dbg(isp->dev,
@@ -478,9 +474,10 @@ int atomisp_subdev_set_selection(struct v4l2_subdev *sd,
                        dvs_w = dvs_h = 0;
                }
                atomisp_css_video_set_dis_envelope(isp_sd, dvs_w, dvs_h);
-               atomisp_css_input_set_effective_resolution(isp_sd, stream_id,
-                       crop[pad]->width, crop[pad]->height);
-
+               atomisp_css_input_set_effective_resolution(isp_sd,
+                                                          ATOMISP_INPUT_STREAM_GENERAL,
+                                                          crop[pad]->width,
+                                                          crop[pad]->height);
                break;
        }
        case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE:
@@ -523,14 +520,14 @@ int atomisp_subdev_set_selection(struct v4l2_subdev *sd,
                if (r->width * crop[ATOMISP_SUBDEV_PAD_SINK]->height <
                    crop[ATOMISP_SUBDEV_PAD_SINK]->width * r->height)
                        atomisp_css_input_set_effective_resolution(isp_sd,
-                               stream_id,
+                               ATOMISP_INPUT_STREAM_GENERAL,
                                rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]->
                                          height * r->width / r->height,
                                          ATOM_ISP_STEP_WIDTH),
                                crop[ATOMISP_SUBDEV_PAD_SINK]->height);
                else
                        atomisp_css_input_set_effective_resolution(isp_sd,
-                               stream_id,
+                               ATOMISP_INPUT_STREAM_GENERAL,
                                crop[ATOMISP_SUBDEV_PAD_SINK]->width,
                                rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]->
                                          width * r->height / r->width,
@@ -620,16 +617,12 @@ void atomisp_subdev_set_ffmt(struct v4l2_subdev *sd,
        struct atomisp_device *isp = isp_sd->isp;
        struct v4l2_mbus_framefmt *__ffmt =
            atomisp_subdev_get_ffmt(sd, sd_state, which, pad);
-       u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode);
-       enum atomisp_input_stream_id stream_id;
 
        dev_dbg(isp->dev, "ffmt: pad %s w %d h %d code 0x%8.8x which %s\n",
                atomisp_pad_str(pad), ffmt->width, ffmt->height, ffmt->code,
                which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY"
                : "V4L2_SUBDEV_FORMAT_ACTIVE");
 
-       stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad);
-
        switch (pad) {
        case ATOMISP_SUBDEV_PAD_SINK: {
                const struct atomisp_in_fmt_conv *fc =
@@ -649,15 +642,15 @@ void atomisp_subdev_set_ffmt(struct v4l2_subdev *sd,
 
                if (which == V4L2_SUBDEV_FORMAT_ACTIVE) {
                        atomisp_css_input_set_resolution(isp_sd,
-                                                        stream_id, ffmt);
+                                                        ATOMISP_INPUT_STREAM_GENERAL, ffmt);
                        atomisp_css_input_set_binning_factor(isp_sd,
-                                                            stream_id,
+                                                            ATOMISP_INPUT_STREAM_GENERAL,
                                                             atomisp_get_sensor_bin_factor(isp_sd));
-                       atomisp_css_input_set_bayer_order(isp_sd, stream_id,
+                       atomisp_css_input_set_bayer_order(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
                                                          fc->bayer_order);
-                       atomisp_css_input_set_format(isp_sd, stream_id,
+                       atomisp_css_input_set_format(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
                                                     fc->atomisp_in_fmt);
-                       atomisp_css_set_default_isys_config(isp_sd, stream_id,
+                       atomisp_css_set_default_isys_config(isp_sd, ATOMISP_INPUT_STREAM_GENERAL,
                                                            ffmt);
                }
 
@@ -874,12 +867,18 @@ static int s_ctrl(struct v4l2_ctrl *ctrl)
 {
        struct atomisp_sub_device *asd = container_of(
                                             ctrl->handler, struct atomisp_sub_device, ctrl_handler);
+       unsigned int streaming;
+       unsigned long flags;
 
        switch (ctrl->id) {
        case V4L2_CID_RUN_MODE:
                return __atomisp_update_run_mode(asd);
        case V4L2_CID_DEPTH_MODE:
-               if (asd->streaming != ATOMISP_DEVICE_STREAMING_DISABLED) {
+               /* Use spinlock instead of mutex to avoid possible locking issues */
+               spin_lock_irqsave(&asd->isp->lock, flags);
+               streaming = asd->streaming;
+               spin_unlock_irqrestore(&asd->isp->lock, flags);
+               if (streaming != ATOMISP_DEVICE_STREAMING_DISABLED) {
                        dev_err(asd->isp->dev,
                                "ISP is streaming, it is not supported to change the depth mode\n");
                        return -EINVAL;
@@ -1066,7 +1065,6 @@ static void atomisp_init_subdev_pipe(struct atomisp_sub_device *asd,
        pipe->isp = asd->isp;
        spin_lock_init(&pipe->irq_lock);
        INIT_LIST_HEAD(&pipe->activeq);
-       INIT_LIST_HEAD(&pipe->activeq_out);
        INIT_LIST_HEAD(&pipe->buffers_waiting_for_param);
        INIT_LIST_HEAD(&pipe->per_frame_params);
        memset(pipe->frame_request_config_id,
@@ -1076,13 +1074,6 @@ static void atomisp_init_subdev_pipe(struct atomisp_sub_device *asd,
               sizeof(struct atomisp_css_params_with_list *));
 }
 
-static void atomisp_init_acc_pipe(struct atomisp_sub_device *asd,
-                                 struct atomisp_acc_pipe *pipe)
-{
-       pipe->asd = asd;
-       pipe->isp = asd->isp;
-}
-
 /*
  * isp_subdev_init_entities - Initialize V4L2 subdev and media entity
  * @asd: ISP CCDC module
@@ -1126,9 +1117,6 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
        if (ret < 0)
                return ret;
 
-       atomisp_init_subdev_pipe(asd, &asd->video_in,
-                                V4L2_BUF_TYPE_VIDEO_OUTPUT);
-
        atomisp_init_subdev_pipe(asd, &asd->video_out_preview,
                                 V4L2_BUF_TYPE_VIDEO_CAPTURE);
 
@@ -1141,13 +1129,6 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
        atomisp_init_subdev_pipe(asd, &asd->video_out_video_capture,
                                 V4L2_BUF_TYPE_VIDEO_CAPTURE);
 
-       atomisp_init_acc_pipe(asd, &asd->video_acc);
-
-       ret = atomisp_video_init(&asd->video_in, "MEMORY",
-                                ATOMISP_RUN_MODE_SDV);
-       if (ret < 0)
-               return ret;
-
        ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE",
                                 ATOMISP_RUN_MODE_STILL_CAPTURE);
        if (ret < 0)
@@ -1168,8 +1149,6 @@ static int isp_subdev_init_entities(struct atomisp_sub_device *asd)
        if (ret < 0)
                return ret;
 
-       atomisp_acc_init(&asd->video_acc, "ACC");
-
        ret = v4l2_ctrl_handler_init(&asd->ctrl_handler, 1);
        if (ret)
                return ret;
@@ -1226,7 +1205,11 @@ int atomisp_create_pads_links(struct atomisp_device *isp)
                                return ret;
                }
        }
-       for (i = 0; i < isp->input_cnt - 2; i++) {
+       for (i = 0; i < isp->input_cnt; i++) {
+               /* Don't create links for the test-pattern-generator */
+               if (isp->inputs[i].type == TEST_PATTERN)
+                       continue;
+
                ret = media_create_pad_link(&isp->inputs[i].camera->entity, 0,
                                            &isp->csi2_port[isp->inputs[i].
                                                    port].subdev.entity,
@@ -1262,17 +1245,6 @@ int atomisp_create_pads_links(struct atomisp_device *isp)
                                            entity, 0, 0);
                if (ret < 0)
                        return ret;
-               /*
-                * file input only supported on subdev0
-                * so do not create pad link for subdevs other then subdev0
-                */
-               if (asd->index)
-                       return 0;
-               ret = media_create_pad_link(&asd->video_in.vdev.entity,
-                                           0, &asd->subdev.entity,
-                                           ATOMISP_SUBDEV_PAD_SINK, 0);
-               if (ret < 0)
-                       return ret;
        }
        return 0;
 }
@@ -1302,87 +1274,55 @@ void atomisp_subdev_unregister_entities(struct atomisp_sub_device *asd)
 {
        atomisp_subdev_cleanup_entities(asd);
        v4l2_device_unregister_subdev(&asd->subdev);
-       atomisp_video_unregister(&asd->video_in);
        atomisp_video_unregister(&asd->video_out_preview);
        atomisp_video_unregister(&asd->video_out_vf);
        atomisp_video_unregister(&asd->video_out_capture);
        atomisp_video_unregister(&asd->video_out_video_capture);
-       atomisp_acc_unregister(&asd->video_acc);
 }
 
-int atomisp_subdev_register_entities(struct atomisp_sub_device *asd,
-                                    struct v4l2_device *vdev)
+int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd,
+                                  struct v4l2_device *vdev)
+{
+       return v4l2_device_register_subdev(vdev, &asd->subdev);
+}
+
+int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd,
+                                       struct v4l2_device *vdev)
 {
        int ret;
-       u32 device_caps;
 
        /*
         * FIXME: check if all device caps are properly initialized.
-        * Should any of those use V4L2_CAP_META_OUTPUT? Probably yes.
+        * Should any of those use V4L2_CAP_META_CAPTURE? Probably yes.
         */
 
-       device_caps = V4L2_CAP_VIDEO_CAPTURE |
-                     V4L2_CAP_STREAMING;
-
-       /* Register the subdev and video node. */
-
-       ret = v4l2_device_register_subdev(vdev, &asd->subdev);
-       if (ret < 0)
-               goto error;
-
        asd->video_out_preview.vdev.v4l2_dev = vdev;
-       asd->video_out_preview.vdev.device_caps = device_caps |
-                                                 V4L2_CAP_VIDEO_OUTPUT;
+       asd->video_out_preview.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        ret = video_register_device(&asd->video_out_preview.vdev,
                                    VFL_TYPE_VIDEO, -1);
        if (ret < 0)
                goto error;
 
        asd->video_out_capture.vdev.v4l2_dev = vdev;
-       asd->video_out_capture.vdev.device_caps = device_caps |
-                                                 V4L2_CAP_VIDEO_OUTPUT;
+       asd->video_out_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        ret = video_register_device(&asd->video_out_capture.vdev,
                                    VFL_TYPE_VIDEO, -1);
        if (ret < 0)
                goto error;
 
        asd->video_out_vf.vdev.v4l2_dev = vdev;
-       asd->video_out_vf.vdev.device_caps = device_caps |
-                                            V4L2_CAP_VIDEO_OUTPUT;
+       asd->video_out_vf.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        ret = video_register_device(&asd->video_out_vf.vdev,
                                    VFL_TYPE_VIDEO, -1);
        if (ret < 0)
                goto error;
 
        asd->video_out_video_capture.vdev.v4l2_dev = vdev;
-       asd->video_out_video_capture.vdev.device_caps = device_caps |
-                                                       V4L2_CAP_VIDEO_OUTPUT;
+       asd->video_out_video_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        ret = video_register_device(&asd->video_out_video_capture.vdev,
                                    VFL_TYPE_VIDEO, -1);
        if (ret < 0)
                goto error;
-       asd->video_acc.vdev.v4l2_dev = vdev;
-       asd->video_acc.vdev.device_caps = device_caps |
-                                         V4L2_CAP_VIDEO_OUTPUT;
-       ret = video_register_device(&asd->video_acc.vdev,
-                                   VFL_TYPE_VIDEO, -1);
-       if (ret < 0)
-               goto error;
-
-       /*
-        * file input only supported on subdev0
-        * so do not create video node for subdevs other then subdev0
-        */
-       if (asd->index)
-               return 0;
-
-       asd->video_in.vdev.v4l2_dev = vdev;
-       asd->video_in.vdev.device_caps = device_caps |
-                                         V4L2_CAP_VIDEO_CAPTURE;
-       ret = video_register_device(&asd->video_in.vdev,
-                                   VFL_TYPE_VIDEO, -1);
-       if (ret < 0)
-               goto error;
 
        return 0;
 
@@ -1415,7 +1355,6 @@ int atomisp_subdev_init(struct atomisp_device *isp)
                return -ENOMEM;
        for (i = 0; i < isp->num_of_streams; i++) {
                asd = &isp->asd[i];
-               spin_lock_init(&asd->lock);
                asd->isp = isp;
                isp_subdev_init_params(asd);
                asd->index = i;
index 798a937..a1f4da3 100644
@@ -70,9 +70,7 @@ struct atomisp_video_pipe {
        enum v4l2_buf_type type;
        struct media_pad pad;
        struct videobuf_queue capq;
-       struct videobuf_queue outq;
        struct list_head activeq;
-       struct list_head activeq_out;
        /*
         * the buffers waiting for per-frame parameters, this is only valid
         * in per-frame setting mode.
@@ -86,9 +84,10 @@ struct atomisp_video_pipe {
 
        unsigned int buffers_in_css;
 
-       /* irq_lock is used to protect video buffer state change operations and
-        * also to make activeq, activeq_out, capq and outq list
-        * operations atomic. */
+       /*
+        * irq_lock is used to protect video buffer state change operations and
+        * also to make activeq and capq operations atomic.
+        */
        spinlock_t irq_lock;
        unsigned int users;
 
@@ -109,23 +108,6 @@ struct atomisp_video_pipe {
         */
        unsigned int frame_request_config_id[VIDEO_MAX_FRAME];
        struct atomisp_css_params_with_list *frame_params[VIDEO_MAX_FRAME];
-
-       /*
-       * move wdt from asd struct to create wdt for each pipe
-       */
-       /* ISP2401 */
-       struct timer_list wdt;
-       unsigned int wdt_duration;      /* in jiffies */
-       unsigned long wdt_expires;
-       atomic_t wdt_count;
-};
-
-struct atomisp_acc_pipe {
-       struct video_device vdev;
-       unsigned int users;
-       bool running;
-       struct atomisp_sub_device *asd;
-       struct atomisp_device *isp;
 };
 
 struct atomisp_pad_format {
@@ -267,28 +249,6 @@ struct atomisp_css_params_with_list {
        struct list_head list;
 };
 
-struct atomisp_acc_fw {
-       struct ia_css_fw_info *fw;
-       unsigned int handle;
-       unsigned int flags;
-       unsigned int type;
-       struct {
-               size_t length;
-               unsigned long css_ptr;
-       } args[ATOMISP_ACC_NR_MEMORY];
-       struct list_head list;
-};
-
-struct atomisp_map {
-       ia_css_ptr ptr;
-       size_t length;
-       struct list_head list;
-       /* FIXME: should keep book which maps are currently used
-        * by binaries and not allow releasing those
-        * which are in use. Implement by reference counting.
-        */
-};
-
 struct atomisp_sub_device {
        struct v4l2_subdev subdev;
        struct media_pad pads[ATOMISP_SUBDEV_PADS_NUM];
@@ -297,15 +257,12 @@ struct atomisp_sub_device {
 
        enum atomisp_subdev_input_entity input;
        unsigned int output;
-       struct atomisp_video_pipe video_in;
        struct atomisp_video_pipe video_out_capture; /* capture output */
        struct atomisp_video_pipe video_out_vf;      /* viewfinder output */
        struct atomisp_video_pipe video_out_preview; /* preview output */
-       struct atomisp_acc_pipe video_acc;
        /* video pipe main output */
        struct atomisp_video_pipe video_out_video_capture;
        /* struct isp_subdev_params params; */
-       spinlock_t lock;
        struct atomisp_device *isp;
        struct v4l2_ctrl_handler ctrl_handler;
        struct v4l2_ctrl *fmt_auto;
@@ -356,15 +313,16 @@ struct atomisp_sub_device {
 
        /* This field specifies which camera (v4l2 input) is selected. */
        int input_curr;
-       /* This field specifies which sensor is being selected when there
-          are multiple sensors connected to the same MIPI port. */
-       int sensor_curr;
 
        atomic_t sof_count;
        atomic_t sequence;      /* Sequence value that is assigned to buffer. */
        atomic_t sequence_temp;
 
-       unsigned int streaming; /* Hold both mutex and lock to change this */
+       /*
+        * Writers of streaming must hold both isp->mutex and isp->lock.
+        * Readers of streaming need to hold only one of the two locks.
+        */
+       unsigned int streaming;
        bool stream_prepared; /* whether css stream is created */
 
        /* subdev index: will be used to show which subdev is holding the
@@ -390,11 +348,6 @@ struct atomisp_sub_device {
        int raw_buffer_locked_count;
        spinlock_t raw_buffer_bitmap_lock;
 
-       /* ISP 2400 */
-       struct timer_list wdt;
-       unsigned int wdt_duration;      /* in jiffies */
-       unsigned long wdt_expires;
-
        /* ISP2401 */
        bool re_trigger_capture;
 
@@ -450,8 +403,10 @@ int atomisp_update_run_mode(struct atomisp_sub_device *asd);
 void atomisp_subdev_cleanup_pending_events(struct atomisp_sub_device *asd);
 
 void atomisp_subdev_unregister_entities(struct atomisp_sub_device *asd);
-int atomisp_subdev_register_entities(struct atomisp_sub_device *asd,
-                                    struct v4l2_device *vdev);
+int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd,
+                                  struct v4l2_device *vdev);
+int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd,
+                                       struct v4l2_device *vdev);
 int atomisp_subdev_init(struct atomisp_device *isp);
 void atomisp_subdev_cleanup(struct atomisp_device *isp);
 int atomisp_create_pads_links(struct atomisp_device *isp);
index 643ba98..d5bb990 100644
@@ -34,7 +34,6 @@
 #include "atomisp_cmd.h"
 #include "atomisp_common.h"
 #include "atomisp_fops.h"
-#include "atomisp_file.h"
 #include "atomisp_ioctl.h"
 #include "atomisp_internal.h"
 #include "atomisp-regs.h"
@@ -442,12 +441,7 @@ int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
                video->pad.flags = MEDIA_PAD_FL_SINK;
                video->vdev.fops = &atomisp_fops;
                video->vdev.ioctl_ops = &atomisp_ioctl_ops;
-               break;
-       case V4L2_BUF_TYPE_VIDEO_OUTPUT:
-               direction = "input";
-               video->pad.flags = MEDIA_PAD_FL_SOURCE;
-               video->vdev.fops = &atomisp_file_fops;
-               video->vdev.ioctl_ops = &atomisp_file_ioctl_ops;
+               video->vdev.lock = &video->isp->mutex;
                break;
        default:
                return -EINVAL;
@@ -467,18 +461,6 @@ int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
        return 0;
 }
 
-void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name)
-{
-       video->vdev.fops = &atomisp_fops;
-       video->vdev.ioctl_ops = &atomisp_ioctl_ops;
-
-       /* Initialize the video device. */
-       snprintf(video->vdev.name, sizeof(video->vdev.name),
-                "ATOMISP ISP %s", name);
-       video->vdev.release = video_device_release_empty;
-       video_set_drvdata(&video->vdev, video->isp);
-}
-
 void atomisp_video_unregister(struct atomisp_video_pipe *video)
 {
        if (video_is_registered(&video->vdev)) {
@@ -487,12 +469,6 @@ void atomisp_video_unregister(struct atomisp_video_pipe *video)
        }
 }
 
-void atomisp_acc_unregister(struct atomisp_acc_pipe *video)
-{
-       if (video_is_registered(&video->vdev))
-               video_unregister_device(&video->vdev);
-}
-
 static int atomisp_save_iunit_reg(struct atomisp_device *isp)
 {
        struct pci_dev *pdev = to_pci_dev(isp->dev);
@@ -1031,7 +1007,6 @@ static int atomisp_subdev_probe(struct atomisp_device *isp)
                            &subdevs->v4l2_subdev.board_info;
                struct i2c_adapter *adapter =
                    i2c_get_adapter(subdevs->v4l2_subdev.i2c_adapter_id);
-               int sensor_num, i;
 
                dev_info(isp->dev, "Probing Subdev %s\n", board_info->type);
 
@@ -1090,22 +1065,7 @@ static int atomisp_subdev_probe(struct atomisp_device *isp)
                         * pixel_format.
                         */
                        isp->inputs[isp->input_cnt].frame_size.pixel_format = 0;
-                       isp->inputs[isp->input_cnt].camera_caps =
-                           atomisp_get_default_camera_caps();
-                       sensor_num = isp->inputs[isp->input_cnt]
-                                    .camera_caps->sensor_num;
                        isp->input_cnt++;
-                       for (i = 1; i < sensor_num; i++) {
-                               if (isp->input_cnt >= ATOM_ISP_MAX_INPUTS) {
-                                       dev_warn(isp->dev,
-                                                "atomisp inputs out of range\n");
-                                       break;
-                               }
-                               isp->inputs[isp->input_cnt] =
-                                   isp->inputs[isp->input_cnt - 1];
-                               isp->inputs[isp->input_cnt].sensor_index = i;
-                               isp->input_cnt++;
-                       }
                        break;
                case CAMERA_MOTOR:
                        if (isp->motor) {
@@ -1158,7 +1118,6 @@ static void atomisp_unregister_entities(struct atomisp_device *isp)
        for (i = 0; i < isp->num_of_streams; i++)
                atomisp_subdev_unregister_entities(&isp->asd[i]);
        atomisp_tpg_unregister_entities(&isp->tpg);
-       atomisp_file_input_unregister_entities(&isp->file_dev);
        for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++)
                atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]);
 
@@ -1210,13 +1169,6 @@ static int atomisp_register_entities(struct atomisp_device *isp)
                goto csi_and_subdev_probe_failed;
        }
 
-       ret =
-           atomisp_file_input_register_entities(&isp->file_dev, &isp->v4l2_dev);
-       if (ret < 0) {
-               dev_err(isp->dev, "atomisp_file_input_register_entities\n");
-               goto file_input_register_failed;
-       }
-
        ret = atomisp_tpg_register_entities(&isp->tpg, &isp->v4l2_dev);
        if (ret < 0) {
                dev_err(isp->dev, "atomisp_tpg_register_entities\n");
@@ -1226,10 +1178,9 @@ static int atomisp_register_entities(struct atomisp_device *isp)
        for (i = 0; i < isp->num_of_streams; i++) {
                struct atomisp_sub_device *asd = &isp->asd[i];
 
-               ret = atomisp_subdev_register_entities(asd, &isp->v4l2_dev);
+               ret = atomisp_subdev_register_subdev(asd, &isp->v4l2_dev);
                if (ret < 0) {
-                       dev_err(isp->dev,
-                               "atomisp_subdev_register_entities fail\n");
+                       dev_err(isp->dev, "atomisp_subdev_register_subdev fail\n");
                        for (; i > 0; i--)
                                atomisp_subdev_unregister_entities(
                                    &isp->asd[i - 1]);
@@ -1267,31 +1218,17 @@ static int atomisp_register_entities(struct atomisp_device *isp)
                }
        }
 
-       dev_dbg(isp->dev,
-               "FILE_INPUT enable, camera_cnt: %d\n", isp->input_cnt);
-       isp->inputs[isp->input_cnt].type = FILE_INPUT;
-       isp->inputs[isp->input_cnt].port = -1;
-       isp->inputs[isp->input_cnt].camera_caps =
-           atomisp_get_default_camera_caps();
-       isp->inputs[isp->input_cnt++].camera = &isp->file_dev.sd;
-
        if (isp->input_cnt < ATOM_ISP_MAX_INPUTS) {
                dev_dbg(isp->dev,
                        "TPG detected, camera_cnt: %d\n", isp->input_cnt);
                isp->inputs[isp->input_cnt].type = TEST_PATTERN;
                isp->inputs[isp->input_cnt].port = -1;
-               isp->inputs[isp->input_cnt].camera_caps =
-                   atomisp_get_default_camera_caps();
                isp->inputs[isp->input_cnt++].camera = &isp->tpg.sd;
        } else {
                dev_warn(isp->dev, "too many atomisp inputs, TPG ignored.\n");
        }
 
-       ret = v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
-       if (ret < 0)
-               goto link_failed;
-
-       return media_device_register(&isp->media_dev);
+       return 0;
 
 link_failed:
        for (i = 0; i < isp->num_of_streams; i++)
@@ -1304,8 +1241,6 @@ wq_alloc_failed:
 subdev_register_failed:
        atomisp_tpg_unregister_entities(&isp->tpg);
 tpg_register_failed:
-       atomisp_file_input_unregister_entities(&isp->file_dev);
-file_input_register_failed:
        for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++)
                atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]);
 csi_and_subdev_probe_failed:
@@ -1316,6 +1251,27 @@ v4l2_device_failed:
        return ret;
 }
 
+static int atomisp_register_device_nodes(struct atomisp_device *isp)
+{
+       int i, err;
+
+       for (i = 0; i < isp->num_of_streams; i++) {
+               err = atomisp_subdev_register_video_nodes(&isp->asd[i], &isp->v4l2_dev);
+               if (err)
+                       return err;
+       }
+
+       err = atomisp_create_pads_links(isp);
+       if (err)
+               return err;
+
+       err = v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
+       if (err)
+               return err;
+
+       return media_device_register(&isp->media_dev);
+}
+
 static int atomisp_initialize_modules(struct atomisp_device *isp)
 {
        int ret;
@@ -1326,13 +1282,6 @@ static int atomisp_initialize_modules(struct atomisp_device *isp)
                goto error_mipi_csi2;
        }
 
-       ret = atomisp_file_input_init(isp);
-       if (ret < 0) {
-               dev_err(isp->dev,
-                       "file input device initialization failed\n");
-               goto error_file_input;
-       }
-
        ret = atomisp_tpg_init(isp);
        if (ret < 0) {
                dev_err(isp->dev, "tpg initialization failed\n");
@@ -1350,8 +1299,6 @@ static int atomisp_initialize_modules(struct atomisp_device *isp)
 error_isp_subdev:
 error_tpg:
        atomisp_tpg_cleanup(isp);
-error_file_input:
-       atomisp_file_input_cleanup(isp);
 error_mipi_csi2:
        atomisp_mipi_csi2_cleanup(isp);
        return ret;
@@ -1360,7 +1307,6 @@ error_mipi_csi2:
 static void atomisp_uninitialize_modules(struct atomisp_device *isp)
 {
        atomisp_tpg_cleanup(isp);
-       atomisp_file_input_cleanup(isp);
        atomisp_mipi_csi2_cleanup(isp);
 }
 
@@ -1470,39 +1416,6 @@ static bool is_valid_device(struct pci_dev *pdev, const struct pci_device_id *id
        return true;
 }
 
-static int init_atomisp_wdts(struct atomisp_device *isp)
-{
-       int i, err;
-
-       atomic_set(&isp->wdt_work_queued, 0);
-       isp->wdt_work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1);
-       if (!isp->wdt_work_queue) {
-               dev_err(isp->dev, "Failed to initialize wdt work queue\n");
-               err = -ENOMEM;
-               goto alloc_fail;
-       }
-       INIT_WORK(&isp->wdt_work, atomisp_wdt_work);
-
-       for (i = 0; i < isp->num_of_streams; i++) {
-               struct atomisp_sub_device *asd = &isp->asd[i];
-
-               if (!IS_ISP2401) {
-                       timer_setup(&asd->wdt, atomisp_wdt, 0);
-               } else {
-                       timer_setup(&asd->video_out_capture.wdt,
-                                   atomisp_wdt, 0);
-                       timer_setup(&asd->video_out_preview.wdt,
-                                   atomisp_wdt, 0);
-                       timer_setup(&asd->video_out_vf.wdt, atomisp_wdt, 0);
-                       timer_setup(&asd->video_out_video_capture.wdt,
-                                   atomisp_wdt, 0);
-               }
-       }
-       return 0;
-alloc_fail:
-       return err;
-}
-
 #define ATOM_ISP_PCI_BAR       0
 
 static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -1551,9 +1464,7 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
 
        dev_dbg(&pdev->dev, "atomisp mmio base: %p\n", isp->base);
 
-       rt_mutex_init(&isp->mutex);
-       rt_mutex_init(&isp->loading);
-       mutex_init(&isp->streamoff_mutex);
+       mutex_init(&isp->mutex);
        spin_lock_init(&isp->lock);
 
        /* This is not a true PCI device on SoC, so the delay is not needed. */
@@ -1725,8 +1636,6 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
                pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, csi_afe_trim);
        }
 
-       rt_mutex_lock(&isp->loading);
-
        err = atomisp_initialize_modules(isp);
        if (err < 0) {
                dev_err(&pdev->dev, "atomisp_initialize_modules (%d)\n", err);
@@ -1738,13 +1647,8 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
                dev_err(&pdev->dev, "atomisp_register_entities failed (%d)\n", err);
                goto register_entities_fail;
        }
-       err = atomisp_create_pads_links(isp);
-       if (err < 0)
-               goto register_entities_fail;
-       /* init atomisp wdts */
-       err = init_atomisp_wdts(isp);
-       if (err != 0)
-               goto wdt_work_queue_fail;
+
+       INIT_WORK(&isp->assert_recovery_work, atomisp_assert_recovery_work);
 
        /* save the iunit context only once after all the values are init'ed. */
        atomisp_save_iunit_reg(isp);
@@ -1777,8 +1681,10 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
        release_firmware(isp->firmware);
        isp->firmware = NULL;
        isp->css_env.isp_css_fw.data = NULL;
-       isp->ready = true;
-       rt_mutex_unlock(&isp->loading);
+
+       err = atomisp_register_device_nodes(isp);
+       if (err)
+               goto css_init_fail;
 
        atomisp_drvfs_init(isp);
 
@@ -1789,13 +1695,10 @@ css_init_fail:
 request_irq_fail:
        hmm_cleanup();
        pm_runtime_get_noresume(&pdev->dev);
-       destroy_workqueue(isp->wdt_work_queue);
-wdt_work_queue_fail:
        atomisp_unregister_entities(isp);
 register_entities_fail:
        atomisp_uninitialize_modules(isp);
 initialize_modules_fail:
-       rt_mutex_unlock(&isp->loading);
        cpu_latency_qos_remove_request(&isp->pm_qos);
        atomisp_msi_irq_uninit(isp);
        pci_free_irq_vectors(pdev);
@@ -1851,9 +1754,6 @@ static void atomisp_pci_remove(struct pci_dev *pdev)
        atomisp_msi_irq_uninit(isp);
        atomisp_unregister_entities(isp);
 
-       destroy_workqueue(isp->wdt_work_queue);
-       atomisp_file_input_cleanup(isp);
-
        release_firmware(isp->firmware);
 }
 
index 72611b8..ccf1c0a 100644
 #define __ATOMISP_V4L2_H__
 
 struct atomisp_video_pipe;
-struct atomisp_acc_pipe;
 struct v4l2_device;
 struct atomisp_device;
 struct firmware;
 
 int atomisp_video_init(struct atomisp_video_pipe *video, const char *name,
                       unsigned int run_mode);
-void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name);
 void atomisp_video_unregister(struct atomisp_video_pipe *video);
-void atomisp_acc_unregister(struct atomisp_acc_pipe *video);
 const struct firmware *atomisp_load_firmware(struct atomisp_device *isp);
 int atomisp_csi_lane_config(struct atomisp_device *isp);
 
index f504941..a5fd6d3 100644
 #include "hmm/hmm_common.h"
 #include "hmm/hmm_bo.h"
 
-static unsigned int order_to_nr(unsigned int order)
-{
-       return 1U << order;
-}
-
-static unsigned int nr_to_order_bottom(unsigned int nr)
-{
-       return fls(nr) - 1;
-}
-
 static int __bo_init(struct hmm_bo_device *bdev, struct hmm_buffer_object *bo,
                     unsigned int pgnr)
 {
@@ -625,136 +615,40 @@ found:
        return bo;
 }
 
-static void free_private_bo_pages(struct hmm_buffer_object *bo,
-                                 int free_pgnr)
+static void free_pages_bulk_array(unsigned long nr_pages, struct page **page_array)
 {
-       int i, ret;
+       unsigned long i;
 
-       for (i = 0; i < free_pgnr; i++) {
-               ret = set_pages_wb(bo->pages[i], 1);
-               if (ret)
-                       dev_err(atomisp_dev,
-                               "set page to WB err ...ret = %d\n",
-                               ret);
-               /*
-               W/A: set_pages_wb seldom return value = -EFAULT
-               indicate that address of page is not in valid
-               range(0xffff880000000000~0xffffc7ffffffffff)
-               then, _free_pages would panic; Do not know why page
-               address be valid,it maybe memory corruption by lowmemory
-               */
-               if (!ret) {
-                       __free_pages(bo->pages[i], 0);
-               }
-       }
+       for (i = 0; i < nr_pages; i++)
+               __free_pages(page_array[i], 0);
+}
+
+static void free_private_bo_pages(struct hmm_buffer_object *bo)
+{
+       set_pages_array_wb(bo->pages, bo->pgnr);
+       free_pages_bulk_array(bo->pgnr, bo->pages);
 }
 
 /*Allocate pages which will be used only by ISP*/
 static int alloc_private_pages(struct hmm_buffer_object *bo)
 {
+       const gfp_t gfp = __GFP_NOWARN | __GFP_RECLAIM | __GFP_FS;
        int ret;
-       unsigned int pgnr, order, blk_pgnr, alloc_pgnr;
-       struct page *pages;
-       gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN; /* REVISIT: need __GFP_FS too? */
-       int i, j;
-       int failure_number = 0;
-       bool reduce_order = false;
-       bool lack_mem = true;
-
-       pgnr = bo->pgnr;
-
-       i = 0;
-       alloc_pgnr = 0;
-
-       while (pgnr) {
-               order = nr_to_order_bottom(pgnr);
-               /*
-                * if be short of memory, we will set order to 0
-                * everytime.
-                */
-               if (lack_mem)
-                       order = HMM_MIN_ORDER;
-               else if (order > HMM_MAX_ORDER)
-                       order = HMM_MAX_ORDER;
-retry:
-               /*
-                * When order > HMM_MIN_ORDER, for performance reasons we don't
-                * want alloc_pages() to sleep. In case it fails and fallbacks
-                * to HMM_MIN_ORDER or in case the requested order is originally
-                * the minimum value, we can allow alloc_pages() to sleep for
-                * robustness purpose.
-                *
-                * REVISIT: why __GFP_FS is necessary?
-                */
-               if (order == HMM_MIN_ORDER) {
-                       gfp &= ~GFP_NOWAIT;
-                       gfp |= __GFP_RECLAIM | __GFP_FS;
-               }
-
-               pages = alloc_pages(gfp, order);
-               if (unlikely(!pages)) {
-                       /*
-                        * in low memory case, if allocation page fails,
-                        * we turn to try if order=0 allocation could
-                        * succeed. if order=0 fails too, that means there is
-                        * no memory left.
-                        */
-                       if (order == HMM_MIN_ORDER) {
-                               dev_err(atomisp_dev,
-                                       "%s: cannot allocate pages\n",
-                                       __func__);
-                               goto cleanup;
-                       }
-                       order = HMM_MIN_ORDER;
-                       failure_number++;
-                       reduce_order = true;
-                       /*
-                        * if fail two times continuously, we think be short
-                        * of memory now.
-                        */
-                       if (failure_number == 2) {
-                               lack_mem = true;
-                               failure_number = 0;
-                       }
-                       goto retry;
-               } else {
-                       blk_pgnr = order_to_nr(order);
-
-                       /*
-                        * set memory to uncacheable -- UC_MINUS
-                        */
-                       ret = set_pages_uc(pages, blk_pgnr);
-                       if (ret) {
-                               dev_err(atomisp_dev,
-                                       "set page uncacheablefailed.\n");
-
-                               __free_pages(pages, order);
 
-                               goto cleanup;
-                       }
-
-                       for (j = 0; j < blk_pgnr; j++, i++) {
-                               bo->pages[i] = pages + j;
-                       }
-
-                       pgnr -= blk_pgnr;
+       ret = alloc_pages_bulk_array(gfp, bo->pgnr, bo->pages);
+       if (ret != bo->pgnr) {
+               free_pages_bulk_array(ret, bo->pages);
+               return -ENOMEM;
+       }
 
-                       /*
-                        * if order is not reduced this time, clear
-                        * failure_number.
-                        */
-                       if (reduce_order)
-                               reduce_order = false;
-                       else
-                               failure_number = 0;
-               }
+       ret = set_pages_array_uc(bo->pages, bo->pgnr);
+       if (ret) {
+               dev_err(atomisp_dev, "set pages uncacheable failed.\n");
+               free_pages_bulk_array(bo->pgnr, bo->pages);
+               return ret;
        }
 
        return 0;
-cleanup:
-       alloc_pgnr = i;
-       free_private_bo_pages(bo, alloc_pgnr);
-       return -ENOMEM;
 }
 
 static void free_user_pages(struct hmm_buffer_object *bo,
@@ -762,12 +656,8 @@ static void free_user_pages(struct hmm_buffer_object *bo,
 {
        int i;
 
-       if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
-               unpin_user_pages(bo->pages, page_nr);
-       } else {
-               for (i = 0; i < page_nr; i++)
-                       put_page(bo->pages[i]);
-       }
+       for (i = 0; i < page_nr; i++)
+               put_page(bo->pages[i]);
 }
 
 /*
@@ -777,43 +667,13 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
                            const void __user *userptr)
 {
        int page_nr;
-       struct vm_area_struct *vma;
-
-       mutex_unlock(&bo->mutex);
-       mmap_read_lock(current->mm);
-       vma = find_vma(current->mm, (unsigned long)userptr);
-       mmap_read_unlock(current->mm);
-       if (!vma) {
-               dev_err(atomisp_dev, "find_vma failed\n");
-               mutex_lock(&bo->mutex);
-               return -EFAULT;
-       }
-       mutex_lock(&bo->mutex);
-       /*
-        * Handle frame buffer allocated in other kerenl space driver
-        * and map to user space
-        */
 
        userptr = untagged_addr(userptr);
 
-       if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
-               page_nr = pin_user_pages((unsigned long)userptr, bo->pgnr,
-                                        FOLL_LONGTERM | FOLL_WRITE,
-                                        bo->pages, NULL);
-               bo->mem_type = HMM_BO_MEM_TYPE_PFN;
-       } else {
-               /*Handle frame buffer allocated in user space*/
-               mutex_unlock(&bo->mutex);
-               page_nr = get_user_pages_fast((unsigned long)userptr,
-                                             (int)(bo->pgnr), 1, bo->pages);
-               mutex_lock(&bo->mutex);
-               bo->mem_type = HMM_BO_MEM_TYPE_USER;
-       }
-
-       dev_dbg(atomisp_dev, "%s: %d %s pages were allocated as 0x%08x\n",
-               __func__,
-               bo->pgnr,
-               bo->mem_type == HMM_BO_MEM_TYPE_USER ? "user" : "pfn", page_nr);
+       /* Handle frame buffer allocated in user space */
+       mutex_unlock(&bo->mutex);
+       page_nr = get_user_pages_fast((unsigned long)userptr, bo->pgnr, 1, bo->pages);
+       mutex_lock(&bo->mutex);
 
        /* can be written by caller, not forced */
        if (page_nr != bo->pgnr) {
@@ -854,7 +714,7 @@ int hmm_bo_alloc_pages(struct hmm_buffer_object *bo,
        mutex_lock(&bo->mutex);
        check_bo_status_no_goto(bo, HMM_BO_PAGE_ALLOCED, status_err);
 
-       bo->pages = kmalloc_array(bo->pgnr, sizeof(struct page *), GFP_KERNEL);
+       bo->pages = kcalloc(bo->pgnr, sizeof(struct page *), GFP_KERNEL);
        if (unlikely(!bo->pages)) {
                ret = -ENOMEM;
                goto alloc_err;
@@ -910,7 +770,7 @@ void hmm_bo_free_pages(struct hmm_buffer_object *bo)
        bo->status &= (~HMM_BO_PAGE_ALLOCED);
 
        if (bo->type == HMM_BO_PRIVATE)
-               free_private_bo_pages(bo, bo->pgnr);
+               free_private_bo_pages(bo);
        else if (bo->type == HMM_BO_USER)
                free_user_pages(bo, bo->pgnr);
        else
index 0e7c38b..67915d7 100644 (file)
@@ -950,8 +950,8 @@ sh_css_set_black_frame(struct ia_css_stream *stream,
                params->fpn_config.data = NULL;
        }
        if (!params->fpn_config.data) {
-               params->fpn_config.data = kvmalloc(height * width *
-                                                  sizeof(short), GFP_KERNEL);
+               params->fpn_config.data = kvmalloc(array3_size(height, width, sizeof(short)),
+                                                  GFP_KERNEL);
                if (!params->fpn_config.data) {
                        IA_CSS_ERROR("out of memory");
                        IA_CSS_LEAVE_ERR_PRIVATE(-ENOMEM);
index 294c808..3e74621 100644 (file)
@@ -863,16 +863,16 @@ int imx_media_pipeline_set_stream(struct imx_media_dev *imxmd,
        mutex_lock(&imxmd->md.graph_mutex);
 
        if (on) {
-               ret = __media_pipeline_start(entity, &imxmd->pipe);
+               ret = __media_pipeline_start(entity->pads, &imxmd->pipe);
                if (ret)
                        goto out;
                ret = v4l2_subdev_call(sd, video, s_stream, 1);
                if (ret)
-                       __media_pipeline_stop(entity);
+                       __media_pipeline_stop(entity->pads);
        } else {
                v4l2_subdev_call(sd, video, s_stream, 0);
-               if (entity->pipe)
-                       __media_pipeline_stop(entity);
+               if (media_pad_pipeline(entity->pads))
+                       __media_pipeline_stop(entity->pads);
        }
 
 out:
index cbc66ef..e5b550c 100644 (file)
@@ -1360,7 +1360,7 @@ static int imx7_csi_video_start_streaming(struct vb2_queue *vq,
 
        mutex_lock(&csi->mdev.graph_mutex);
 
-       ret = __media_pipeline_start(&csi->sd.entity, &csi->pipe);
+       ret = __video_device_pipeline_start(csi->vdev, &csi->pipe);
        if (ret)
                goto err_unlock;
 
@@ -1373,7 +1373,7 @@ static int imx7_csi_video_start_streaming(struct vb2_queue *vq,
        return 0;
 
 err_stop:
-       __media_pipeline_stop(&csi->sd.entity);
+       __video_device_pipeline_stop(csi->vdev);
 err_unlock:
        mutex_unlock(&csi->mdev.graph_mutex);
        dev_err(csi->dev, "pipeline start failed with %d\n", ret);
@@ -1396,7 +1396,7 @@ static void imx7_csi_video_stop_streaming(struct vb2_queue *vq)
 
        mutex_lock(&csi->mdev.graph_mutex);
        v4l2_subdev_call(&csi->sd, video, s_stream, 0);
-       __media_pipeline_stop(&csi->sd.entity);
+       __video_device_pipeline_stop(csi->vdev);
        mutex_unlock(&csi->mdev.graph_mutex);
 
        /* release all active buffers */
index dbdd015..caa358e 100644 (file)
@@ -626,8 +626,11 @@ struct ipu3_uapi_stats_3a {
  * @b: white balance gain for B channel.
  * @gb:        white balance gain for Gb channel.
  *
- * Precision u3.13, range [0, 8). White balance correction is done by applying
- * a multiplicative gain to each color channels prior to BNR.
+ * For BNR parameters, the WB gain factor for the four channels [Ggr, Ggb, Gb, Gr].
+ * The precision is U3.13 and the range is (0, 8); the actual gain is Gx + 1,
+ * typically with Gx = 1.
+ *
+ * Pout = {Pin * (1 + Gx)}.
  */
 struct ipu3_uapi_bnr_static_config_wb_gains_config {
        __u16 gr;
index d1c539c..ce13e74 100644 (file)
@@ -192,33 +192,30 @@ static int imgu_subdev_get_selection(struct v4l2_subdev *sd,
                                     struct v4l2_subdev_state *sd_state,
                                     struct v4l2_subdev_selection *sel)
 {
-       struct v4l2_rect *try_sel, *r;
-       struct imgu_v4l2_subdev *imgu_sd = container_of(sd,
-                                                       struct imgu_v4l2_subdev,
-                                                       subdev);
+       struct imgu_v4l2_subdev *imgu_sd =
+               container_of(sd, struct imgu_v4l2_subdev, subdev);
 
        if (sel->pad != IMGU_NODE_IN)
                return -EINVAL;
 
        switch (sel->target) {
        case V4L2_SEL_TGT_CROP:
-               try_sel = v4l2_subdev_get_try_crop(sd, sd_state, sel->pad);
-               r = &imgu_sd->rect.eff;
-               break;
+               if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
+                       sel->r = *v4l2_subdev_get_try_crop(sd, sd_state,
+                                                          sel->pad);
+               else
+                       sel->r = imgu_sd->rect.eff;
+               return 0;
        case V4L2_SEL_TGT_COMPOSE:
-               try_sel = v4l2_subdev_get_try_compose(sd, sd_state, sel->pad);
-               r = &imgu_sd->rect.bds;
-               break;
+               if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
+                       sel->r = *v4l2_subdev_get_try_compose(sd, sd_state,
+                                                             sel->pad);
+               else
+                       sel->r = imgu_sd->rect.bds;
+               return 0;
        default:
                return -EINVAL;
        }
-
-       if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
-               sel->r = *try_sel;
-       else
-               sel->r = *r;
-
-       return 0;
 }
 
 static int imgu_subdev_set_selection(struct v4l2_subdev *sd,
@@ -486,7 +483,7 @@ static int imgu_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
        pipe = node->pipe;
        imgu_pipe = &imgu->imgu_pipe[pipe];
        atomic_set(&node->sequence, 0);
-       r = media_pipeline_start(&node->vdev.entity, &imgu_pipe->pipeline);
+       r = video_device_pipeline_start(&node->vdev, &imgu_pipe->pipeline);
        if (r < 0)
                goto fail_return_bufs;
 
@@ -511,7 +508,7 @@ static int imgu_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
        return 0;
 
 fail_stop_pipeline:
-       media_pipeline_stop(&node->vdev.entity);
+       video_device_pipeline_stop(&node->vdev);
 fail_return_bufs:
        imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_QUEUED);
 
@@ -551,7 +548,7 @@ static void imgu_vb2_stop_streaming(struct vb2_queue *vq)
        imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_ERROR);
        mutex_unlock(&imgu->streaming_lock);
 
-       media_pipeline_stop(&node->vdev.entity);
+       video_device_pipeline_stop(&node->vdev);
 }
 
 /******************** v4l2_ioctl_ops ********************/
index 8549d95..52f224d 100644 (file)
@@ -1102,6 +1102,7 @@ static int vdec_probe(struct platform_device *pdev)
 
 err_vdev_release:
        video_device_release(vdev);
+       v4l2_device_unregister(&core->v4l2_dev);
        return ret;
 }
 
@@ -1110,6 +1111,7 @@ static int vdec_remove(struct platform_device *pdev)
        struct amvdec_core *core = platform_get_drvdata(pdev);
 
        video_unregister_device(core->vdev_dec);
+       v4l2_device_unregister(&core->v4l2_dev);
 
        return 0;
 }
index 28aacda..fa2a36d 100644 (file)
@@ -548,10 +548,8 @@ static int iss_pipeline_is_last(struct media_entity *me)
        struct iss_pipeline *pipe;
        struct media_pad *pad;
 
-       if (!me->pipe)
-               return 0;
        pipe = to_iss_pipeline(me);
-       if (pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED)
+       if (!pipe || pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED)
                return 0;
        pad = media_pad_remote_pad_first(&pipe->output->pad);
        return pad->entity == me;
index 842509d..60f3d84 100644 (file)
@@ -870,8 +870,7 @@ iss_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
         * Start streaming on the pipeline. No link touching an entity in the
         * pipeline can be activated or deactivated once streaming is started.
         */
-       pipe = entity->pipe
-            ? to_iss_pipeline(entity) : &video->pipe;
+       pipe = to_iss_pipeline(&video->video.entity) ? : &video->pipe;
        pipe->external = NULL;
        pipe->external_rate = 0;
        pipe->external_bpp = 0;
@@ -887,7 +886,7 @@ iss_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
        if (video->iss->pdata->set_constraints)
                video->iss->pdata->set_constraints(video->iss, true);
 
-       ret = media_pipeline_start(entity, &pipe->pipe);
+       ret = video_device_pipeline_start(&video->video, &pipe->pipe);
        if (ret < 0)
                goto err_media_pipeline_start;
 
@@ -978,7 +977,7 @@ iss_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
 err_omap4iss_set_stream:
        vb2_streamoff(&vfh->queue, type);
 err_iss_video_check_format:
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
 err_media_pipeline_start:
        if (video->iss->pdata->set_constraints)
                video->iss->pdata->set_constraints(video->iss, false);
@@ -1032,7 +1031,7 @@ iss_video_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
 
        if (video->iss->pdata->set_constraints)
                video->iss->pdata->set_constraints(video->iss, false);
-       media_pipeline_stop(&video->video.entity);
+       video_device_pipeline_stop(&video->video);
 
 done:
        mutex_unlock(&video->stream_lock);
index 526281b..ca2d5ed 100644 (file)
@@ -90,8 +90,15 @@ struct iss_pipeline {
        int external_bpp;
 };
 
-#define to_iss_pipeline(__e) \
-       container_of((__e)->pipe, struct iss_pipeline, pipe)
+static inline struct iss_pipeline *to_iss_pipeline(struct media_entity *entity)
+{
+       struct media_pipeline *pipe = media_entity_pipeline(entity);
+
+       if (!pipe)
+               return NULL;
+
+       return container_of(pipe, struct iss_pipeline, pipe);
+}
 
 static inline int iss_pipeline_ready(struct iss_pipeline *pipe)
 {
index 21c13f9..621944f 100644 (file)
@@ -2,6 +2,7 @@
 config VIDEO_SUNXI_CEDRUS
        tristate "Allwinner Cedrus VPU driver"
        depends on VIDEO_DEV
+       depends on RESET_CONTROLLER
        depends on HAS_DMA
        depends on OF
        select MEDIA_CONTROLLER
index f10a041..d58370a 100644 (file)
@@ -547,7 +547,7 @@ static int tegra210_vi_start_streaming(struct vb2_queue *vq, u32 count)
                       VI_INCR_SYNCPT_NO_STALL);
 
        /* start the pipeline */
-       ret = media_pipeline_start(&chan->video.entity, pipe);
+       ret = video_device_pipeline_start(&chan->video, pipe);
        if (ret < 0)
                goto error_pipeline_start;
 
@@ -595,7 +595,7 @@ error_kthread_done:
 error_kthread_start:
        tegra_channel_set_stream(chan, false);
 error_set_stream:
-       media_pipeline_stop(&chan->video.entity);
+       video_device_pipeline_stop(&chan->video);
 error_pipeline_start:
        tegra_channel_release_buffers(chan, VB2_BUF_STATE_QUEUED);
        return ret;
@@ -617,7 +617,7 @@ static void tegra210_vi_stop_streaming(struct vb2_queue *vq)
 
        tegra_channel_release_buffers(chan, VB2_BUF_STATE_ERROR);
        tegra_channel_set_stream(chan, false);
-       media_pipeline_stop(&chan->video.entity);
+       video_device_pipeline_stop(&chan->video);
 }
 
 /*
index 3336d2b..d9204c5 100644 (file)
@@ -1202,7 +1202,7 @@ cxgbit_pass_accept_rpl(struct cxgbit_sock *csk, struct cpl_pass_accept_req *req)
        opt2 |= CONG_CNTRL_V(CONG_ALG_NEWRENO);
 
        opt2 |= T5_ISS_F;
-       rpl5->iss = cpu_to_be32((prandom_u32() & ~7UL) - 1);
+       rpl5->iss = cpu_to_be32((get_random_u32() & ~7UL) - 1);
 
        opt2 |= T5_OPT_2_VALID_F;
 
index 2a5570b..b80e25e 100644 (file)
@@ -516,11 +516,7 @@ static int start_power_clamp(void)
        cpus_read_lock();
 
        /* prefer BSP */
-       control_cpu = 0;
-       if (!cpu_online(control_cpu)) {
-               control_cpu = get_cpu();
-               put_cpu();
-       }
+       control_cpu = cpumask_first(cpu_online_mask);
 
        clamping = true;
        schedule_delayed_work(&poll_pkg_cstate_work, 0);
index bbb248a..f00b2f6 100644 (file)
@@ -2437,7 +2437,7 @@ int tb_xdomain_init(void)
        tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
        tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
 
-       xdomain_property_block_gen = prandom_u32();
+       xdomain_property_block_gen = get_random_u32();
        return 0;
 }
 
index 13cdd9d..434f831 100644 (file)
@@ -603,21 +603,6 @@ config SERIAL_MUX_CONSOLE
        select SERIAL_CORE_CONSOLE
        default y
 
-config PDC_CONSOLE
-       bool "PDC software console support"
-       depends on PARISC && !SERIAL_MUX && VT
-       help
-         Saying Y here will enable the software based PDC console to be 
-         used as the system console.  This is useful for machines in 
-         which the hardware based console has not been written yet.  The
-         following steps must be completed to use the PDC console:
-
-           1. create the device entry (mknod /dev/ttyB0 c 11 0)
-           2. Edit the /etc/inittab to start a getty listening on /dev/ttyB0
-           3. Add device ttyB0 to /etc/securetty (if you want to log on as
-                root on this console.)
-           4. Change the kernel command console parameter to: console=ttyB0
-
 config SERIAL_SUNSAB
        tristate "Sun Siemens SAB82532 serial support"
        depends on SPARC && PCI
index 38a861e..7753e58 100644 (file)
@@ -1298,7 +1298,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
        
        /* limit fbsize to max visible screen size */
        if (fix->smem_len > yres*fix->line_length)
-               fix->smem_len = yres*fix->line_length;
+               fix->smem_len = ALIGN(yres*fix->line_length, 4*1024*1024);
        
        fix->accel = FB_ACCEL_NONE;
 
index fd5d701..00d789b 100644 (file)
@@ -167,7 +167,7 @@ static int uvesafb_exec(struct uvesafb_ktask *task)
        memcpy(&m->id, &uvesafb_cn_id, sizeof(m->id));
        m->seq = seq;
        m->len = len;
-       m->ack = prandom_u32();
+       m->ack = get_random_u32();
 
        /* uvesafb_task structure */
        memcpy(m + 1, &task->t, sizeof(task->t));
index 3fe8a7e..c777a61 100644 (file)
@@ -38,6 +38,9 @@
 
 #include "watchdog_core.h"     /* For watchdog_dev_register/... */
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/watchdog.h>
+
 static DEFINE_IDA(watchdog_ida);
 
 static int stop_on_reboot = -1;
@@ -163,6 +166,7 @@ static int watchdog_reboot_notifier(struct notifier_block *nb,
                        int ret;
 
                        ret = wdd->ops->stop(wdd);
+                       trace_watchdog_stop(wdd, ret);
                        if (ret)
                                return NOTIFY_BAD;
                }
index 744b2ab..55574ed 100644 (file)
@@ -47,6 +47,8 @@
 #include "watchdog_core.h"
 #include "watchdog_pretimeout.h"
 
+#include <trace/events/watchdog.h>
+
 /* the dev_t structure to store the dynamically allocated watchdog devices */
 static dev_t watchdog_devt;
 /* Reference to watchdog device behind /dev/watchdog */
@@ -157,10 +159,13 @@ static int __watchdog_ping(struct watchdog_device *wdd)
 
        wd_data->last_hw_keepalive = now;
 
-       if (wdd->ops->ping)
+       if (wdd->ops->ping) {
                err = wdd->ops->ping(wdd);  /* ping the watchdog */
-       else
+               trace_watchdog_ping(wdd, err);
+       } else {
                err = wdd->ops->start(wdd); /* restart watchdog */
+               trace_watchdog_start(wdd, err);
+       }
 
        if (err == 0)
                watchdog_hrtimer_pretimeout_start(wdd);
@@ -259,6 +264,7 @@ static int watchdog_start(struct watchdog_device *wdd)
                }
        } else {
                err = wdd->ops->start(wdd);
+               trace_watchdog_start(wdd, err);
                if (err == 0) {
                        set_bit(WDOG_ACTIVE, &wdd->status);
                        wd_data->last_keepalive = started_at;
@@ -297,6 +303,7 @@ static int watchdog_stop(struct watchdog_device *wdd)
        if (wdd->ops->stop) {
                clear_bit(WDOG_HW_RUNNING, &wdd->status);
                err = wdd->ops->stop(wdd);
+               trace_watchdog_stop(wdd, err);
        } else {
                set_bit(WDOG_HW_RUNNING, &wdd->status);
        }
@@ -369,6 +376,7 @@ static int watchdog_set_timeout(struct watchdog_device *wdd,
 
        if (wdd->ops->set_timeout) {
                err = wdd->ops->set_timeout(wdd, timeout);
+               trace_watchdog_set_timeout(wdd, timeout, err);
        } else {
                wdd->timeout = timeout;
                /* Disable pretimeout if it doesn't fit the new timeout */
index 860f37c..daa525d 100644 (file)
@@ -31,12 +31,12 @@ static DEFINE_XARRAY_FLAGS(xen_grant_dma_devices, XA_FLAGS_LOCK_IRQ);
 
 static inline dma_addr_t grant_to_dma(grant_ref_t grant)
 {
-       return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
+       return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT);
 }
 
 static inline grant_ref_t dma_to_grant(dma_addr_t dma)
 {
-       return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
+       return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT);
 }
 
 static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
@@ -79,7 +79,7 @@ static void *xen_grant_dma_alloc(struct device *dev, size_t size,
                                 unsigned long attrs)
 {
        struct xen_grant_dma_data *data;
-       unsigned int i, n_pages = PFN_UP(size);
+       unsigned int i, n_pages = XEN_PFN_UP(size);
        unsigned long pfn;
        grant_ref_t grant;
        void *ret;
@@ -91,14 +91,14 @@ static void *xen_grant_dma_alloc(struct device *dev, size_t size,
        if (unlikely(data->broken))
                return NULL;
 
-       ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+       ret = alloc_pages_exact(n_pages * XEN_PAGE_SIZE, gfp);
        if (!ret)
                return NULL;
 
        pfn = virt_to_pfn(ret);
 
        if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
-               free_pages_exact(ret, n_pages * PAGE_SIZE);
+               free_pages_exact(ret, n_pages * XEN_PAGE_SIZE);
                return NULL;
        }
 
@@ -116,7 +116,7 @@ static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
                               dma_addr_t dma_handle, unsigned long attrs)
 {
        struct xen_grant_dma_data *data;
-       unsigned int i, n_pages = PFN_UP(size);
+       unsigned int i, n_pages = XEN_PFN_UP(size);
        grant_ref_t grant;
 
        data = find_xen_grant_dma_data(dev);
@@ -138,7 +138,7 @@ static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
 
        gnttab_free_grant_reference_seq(grant, n_pages);
 
-       free_pages_exact(vaddr, n_pages * PAGE_SIZE);
+       free_pages_exact(vaddr, n_pages * XEN_PAGE_SIZE);
 }
 
 static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
@@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
                                         unsigned long attrs)
 {
        struct xen_grant_dma_data *data;
-       unsigned int i, n_pages = PFN_UP(offset + size);
+       unsigned long dma_offset = xen_offset_in_page(offset),
+                       pfn_offset = XEN_PFN_DOWN(offset);
+       unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
        grant_ref_t grant;
        dma_addr_t dma_handle;
 
@@ -187,10 +189,11 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
 
        for (i = 0; i < n_pages; i++) {
                gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
-                               xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+                               pfn_to_gfn(page_to_xen_pfn(page) + i + pfn_offset),
+                               dir == DMA_TO_DEVICE);
        }
 
-       dma_handle = grant_to_dma(grant) + offset;
+       dma_handle = grant_to_dma(grant) + dma_offset;
 
        return dma_handle;
 }
@@ -200,8 +203,8 @@ static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
                                     unsigned long attrs)
 {
        struct xen_grant_dma_data *data;
-       unsigned long offset = dma_handle & (PAGE_SIZE - 1);
-       unsigned int i, n_pages = PFN_UP(offset + size);
+       unsigned long dma_offset = xen_offset_in_page(dma_handle);
+       unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
        grant_ref_t grant;
 
        if (WARN_ON(dir == DMA_NONE))
index dce3a16..4ec18ce 100644 (file)
@@ -138,6 +138,7 @@ struct share_check {
        u64 root_objectid;
        u64 inum;
        int share_count;
+       bool have_delayed_delete_refs;
 };
 
 static inline int extent_is_shared(struct share_check *sc)
@@ -820,16 +821,11 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
                            struct preftrees *preftrees, struct share_check *sc)
 {
        struct btrfs_delayed_ref_node *node;
-       struct btrfs_delayed_extent_op *extent_op = head->extent_op;
        struct btrfs_key key;
-       struct btrfs_key tmp_op_key;
        struct rb_node *n;
        int count;
        int ret = 0;
 
-       if (extent_op && extent_op->update_key)
-               btrfs_disk_key_to_cpu(&tmp_op_key, &extent_op->key);
-
        spin_lock(&head->lock);
        for (n = rb_first_cached(&head->ref_tree); n; n = rb_next(n)) {
                node = rb_entry(n, struct btrfs_delayed_ref_node,
@@ -855,10 +851,16 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
                case BTRFS_TREE_BLOCK_REF_KEY: {
                        /* NORMAL INDIRECT METADATA backref */
                        struct btrfs_delayed_tree_ref *ref;
+                       struct btrfs_key *key_ptr = NULL;
+
+                       if (head->extent_op && head->extent_op->update_key) {
+                               btrfs_disk_key_to_cpu(&key, &head->extent_op->key);
+                               key_ptr = &key;
+                       }
 
                        ref = btrfs_delayed_node_to_tree_ref(node);
                        ret = add_indirect_ref(fs_info, preftrees, ref->root,
-                                              &tmp_op_key, ref->level + 1,
+                                              key_ptr, ref->level + 1,
                                               node->bytenr, count, sc,
                                               GFP_ATOMIC);
                        break;
@@ -884,13 +886,22 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
                        key.offset = ref->offset;
 
                        /*
-                        * Found a inum that doesn't match our known inum, we
-                        * know it's shared.
+                        * If we have a share check context and a reference for
+                        * another inode, we can't exit immediately. This is
+                        * because even if this is a BTRFS_ADD_DELAYED_REF
+                        * reference we may find next a BTRFS_DROP_DELAYED_REF
+                        * which cancels out this ADD reference.
+                        *
+                        * If this is a DROP reference and there was no previous
+                        * ADD reference, then we need to signal that when we
+                        * process references from the extent tree (through
+                        * add_inline_refs() and add_keyed_refs()), we should
+                        * not exit early if we find a reference for another
+                        * inode, because one of the delayed DROP references
+                        * may cancel that reference in the extent tree.
                         */
-                       if (sc && sc->inum && ref->objectid != sc->inum) {
-                               ret = BACKREF_FOUND_SHARED;
-                               goto out;
-                       }
+                       if (sc && count < 0)
+                               sc->have_delayed_delete_refs = true;
 
                        ret = add_indirect_ref(fs_info, preftrees, ref->root,
                                               &key, 0, node->bytenr, count, sc,
@@ -920,7 +931,7 @@ static int add_delayed_refs(const struct btrfs_fs_info *fs_info,
        }
        if (!ret)
                ret = extent_is_shared(sc);
-out:
+
        spin_unlock(&head->lock);
        return ret;
 }
@@ -1023,7 +1034,8 @@ static int add_inline_refs(const struct btrfs_fs_info *fs_info,
                        key.type = BTRFS_EXTENT_DATA_KEY;
                        key.offset = btrfs_extent_data_ref_offset(leaf, dref);
 
-                       if (sc && sc->inum && key.objectid != sc->inum) {
+                       if (sc && sc->inum && key.objectid != sc->inum &&
+                           !sc->have_delayed_delete_refs) {
                                ret = BACKREF_FOUND_SHARED;
                                break;
                        }
@@ -1033,6 +1045,7 @@ static int add_inline_refs(const struct btrfs_fs_info *fs_info,
                        ret = add_indirect_ref(fs_info, preftrees, root,
                                               &key, 0, bytenr, count,
                                               sc, GFP_NOFS);
+
                        break;
                }
                default:
@@ -1122,7 +1135,8 @@ static int add_keyed_refs(struct btrfs_root *extent_root,
                        key.type = BTRFS_EXTENT_DATA_KEY;
                        key.offset = btrfs_extent_data_ref_offset(leaf, dref);
 
-                       if (sc && sc->inum && key.objectid != sc->inum) {
+                       if (sc && sc->inum && key.objectid != sc->inum &&
+                           !sc->have_delayed_delete_refs) {
                                ret = BACKREF_FOUND_SHARED;
                                break;
                        }
@@ -1522,6 +1536,9 @@ static bool lookup_backref_shared_cache(struct btrfs_backref_shared_cache *cache
 {
        struct btrfs_backref_shared_cache_entry *entry;
 
+       if (!cache->use_cache)
+               return false;
+
        if (WARN_ON_ONCE(level >= BTRFS_MAX_LEVEL))
                return false;
 
@@ -1557,6 +1574,19 @@ static bool lookup_backref_shared_cache(struct btrfs_backref_shared_cache *cache
                return false;
 
        *is_shared = entry->is_shared;
+       /*
+        * If the node at this level is shared, then all nodes below are also
+        * shared. Currently some of the nodes below may be marked as not shared
+        * because we have just switched from one leaf to another, and also
+        * switched other nodes above the leaf and below the current level, so
+        * mark them as shared.
+        */
+       if (*is_shared) {
+               for (int i = 0; i < level; i++) {
+                       cache->entries[i].is_shared = true;
+                       cache->entries[i].gen = entry->gen;
+               }
+       }
 
        return true;
 }
@@ -1573,6 +1603,9 @@ static void store_backref_shared_cache(struct btrfs_backref_shared_cache *cache,
        struct btrfs_backref_shared_cache_entry *entry;
        u64 gen;
 
+       if (!cache->use_cache)
+               return;
+
        if (WARN_ON_ONCE(level >= BTRFS_MAX_LEVEL))
                return;
 
@@ -1648,6 +1681,7 @@ int btrfs_is_data_extent_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
                .root_objectid = root->root_key.objectid,
                .inum = inum,
                .share_count = 0,
+               .have_delayed_delete_refs = false,
        };
        int level;
 
@@ -1669,6 +1703,7 @@ int btrfs_is_data_extent_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
        /* -1 means we are in the bytenr of the data extent. */
        level = -1;
        ULIST_ITER_INIT(&uiter);
+       cache->use_cache = true;
        while (1) {
                bool is_shared;
                bool cached;
@@ -1698,6 +1733,24 @@ int btrfs_is_data_extent_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
                    extent_gen > btrfs_root_last_snapshot(&root->root_item))
                        break;
 
+               /*
+                * If our data extent was not directly shared (without multiple
+                * reference items), then it might have a single reference item
+                * with a count > 1 for the same offset, which means there are 2
+                * (or more) file extent items that point to the data extent -
+                * this happens when a file extent item needs to be split and
+                * then one item gets moved to another leaf due to a b+tree leaf
+                * split when inserting some item. In this case the file extent
+                * items may be located in different leaves and therefore some
+                * of the leaves may be referenced through shared subtrees while
+                * others are not. Since our extent buffer cache only works for
+                * a single path (by far the most common case and simpler to
+                * deal with), we can not use it if we have multiple leaves
+                * (which implies multiple paths).
+                */
+               if (level == -1 && tmp->nnodes > 1)
+                       cache->use_cache = false;
+
                if (level >= 0)
                        store_backref_shared_cache(cache, root, bytenr,
                                                   level, false);
@@ -1713,6 +1766,7 @@ int btrfs_is_data_extent_shared(struct btrfs_root *root, u64 inum, u64 bytenr,
                        break;
                }
                shared.share_count = 0;
+               shared.have_delayed_delete_refs = false;
                cond_resched();
        }
 
index 52ae695..8e69584 100644
@@ -29,6 +29,7 @@ struct btrfs_backref_shared_cache {
         * a given data extent should never exceed the maximum b+tree height.
         */
        struct btrfs_backref_shared_cache_entry entries[BTRFS_MAX_LEVEL];
+       bool use_cache;
 };
 
 typedef int (iterate_extent_inodes_t)(u64 inum, u64 offset, u64 root,
index 32c415c..deebc8d 100644
@@ -774,10 +774,8 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
 
        btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
 out:
-       /* REVIEW */
        if (wait && caching_ctl)
                ret = btrfs_caching_ctl_wait_done(cache, caching_ctl);
-               /* wait_event(caching_ctl->wait, space_cache_v1_done(cache)); */
        if (caching_ctl)
                btrfs_put_caching_control(caching_ctl);
 
index 618275a..83cb037 100644
@@ -1641,16 +1641,17 @@ int lock_extent(struct extent_io_tree *tree, u64 start, u64 end,
        int err;
        u64 failed_start;
 
-       while (1) {
+       err = __set_extent_bit(tree, start, end, EXTENT_LOCKED, &failed_start,
+                              cached_state, NULL, GFP_NOFS);
+       while (err == -EEXIST) {
+               if (failed_start != start)
+                       clear_extent_bit(tree, start, failed_start - 1,
+                                        EXTENT_LOCKED, cached_state);
+
+               wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED);
                err = __set_extent_bit(tree, start, end, EXTENT_LOCKED,
                                       &failed_start, cached_state, NULL,
                                       GFP_NOFS);
-               if (err == -EEXIST) {
-                       wait_extent_bit(tree, failed_start, end, EXTENT_LOCKED);
-                       start = failed_start;
-               } else
-                       break;
-               WARN_ON(start > end);
        }
        return err;
 }
index 4ef4167..ec6e175 100644
@@ -348,6 +348,7 @@ static bool proto_cmd_ok(const struct send_ctx *sctx, int cmd)
        switch (sctx->proto) {
        case 1:  return cmd <= BTRFS_SEND_C_MAX_V1;
        case 2:  return cmd <= BTRFS_SEND_C_MAX_V2;
+       case 3:  return cmd <= BTRFS_SEND_C_MAX_V3;
        default: return false;
        }
 }
@@ -6469,7 +6470,9 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
                if (ret < 0)
                        goto out;
        }
-       if (sctx->cur_inode_needs_verity) {
+
+       if (proto_cmd_ok(sctx, BTRFS_SEND_C_ENABLE_VERITY)
+           && sctx->cur_inode_needs_verity) {
                ret = process_verity(sctx);
                if (ret < 0)
                        goto out;
index 0a45377..f7585cf 100644
 #include <linux/types.h>
 
 #define BTRFS_SEND_STREAM_MAGIC "btrfs-stream"
+/* Conditional support for the upcoming protocol version. */
+#ifdef CONFIG_BTRFS_DEBUG
+#define BTRFS_SEND_STREAM_VERSION 3
+#else
 #define BTRFS_SEND_STREAM_VERSION 2
+#endif
 
 /*
  * In send stream v1, no command is larger than 64K. In send stream v2, no limit
index 9ebb7ce..4af5e55 100644
@@ -362,7 +362,7 @@ static int ceph_fill_fragtree(struct inode *inode,
        if (nsplits != ci->i_fragtree_nsplits) {
                update = true;
        } else if (nsplits) {
-               i = prandom_u32() % nsplits;
+               i = prandom_u32_max(nsplits);
                id = le32_to_cpu(fragtree->splits[i].frag);
                if (!__ceph_find_frag(ci, id))
                        update = true;
index 8d0a6d2..3fbabc9 100644
@@ -29,7 +29,7 @@ static int __mdsmap_get_random_mds(struct ceph_mdsmap *m, bool ignore_laggy)
                return -1;
 
        /* pick */
-       n = prandom_u32() % n;
+       n = prandom_u32_max(n);
        for (j = 0, i = 0; i < m->possible_max_rank; i++) {
                if (CEPH_MDS_IS_READY(i, ignore_laggy))
                        j++;
index b705dac..6039908 100644
@@ -5,13 +5,98 @@
  *  Copyright (c) 2022, Ronnie Sahlberg <lsahlber@redhat.com>
  */
 
+#include <linux/namei.h>
 #include "cifsglob.h"
 #include "cifsproto.h"
 #include "cifs_debug.h"
 #include "smb2proto.h"
 #include "cached_dir.h"
 
-struct cached_fid *init_cached_dir(const char *path);
+static struct cached_fid *init_cached_dir(const char *path);
+static void free_cached_dir(struct cached_fid *cfid);
+
+static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+                                                   const char *path,
+                                                   bool lookup_only)
+{
+       struct cached_fid *cfid;
+
+       spin_lock(&cfids->cfid_list_lock);
+       list_for_each_entry(cfid, &cfids->entries, entry) {
+               if (!strcmp(cfid->path, path)) {
+                       /*
+                        * If it doesn't have a lease it is either not yet
+                        * fully cached or it may be in the process of
+                        * being deleted due to a lease break.
+                        */
+                       if (!cfid->has_lease) {
+                               spin_unlock(&cfids->cfid_list_lock);
+                               return NULL;
+                       }
+                       kref_get(&cfid->refcount);
+                       spin_unlock(&cfids->cfid_list_lock);
+                       return cfid;
+               }
+       }
+       if (lookup_only) {
+               spin_unlock(&cfids->cfid_list_lock);
+               return NULL;
+       }
+       if (cfids->num_entries >= MAX_CACHED_FIDS) {
+               spin_unlock(&cfids->cfid_list_lock);
+               return NULL;
+       }
+       cfid = init_cached_dir(path);
+       if (cfid == NULL) {
+               spin_unlock(&cfids->cfid_list_lock);
+               return NULL;
+       }
+       cfid->cfids = cfids;
+       cfids->num_entries++;
+       list_add(&cfid->entry, &cfids->entries);
+       cfid->on_list = true;
+       kref_get(&cfid->refcount);
+       spin_unlock(&cfids->cfid_list_lock);
+       return cfid;
+}
+
+static struct dentry *
+path_to_dentry(struct cifs_sb_info *cifs_sb, const char *path)
+{
+       struct dentry *dentry;
+       const char *s, *p;
+       char sep;
+
+       sep = CIFS_DIR_SEP(cifs_sb);
+       dentry = dget(cifs_sb->root);
+       s = path;
+
+       do {
+               struct inode *dir = d_inode(dentry);
+               struct dentry *child;
+
+               if (!S_ISDIR(dir->i_mode)) {
+                       dput(dentry);
+                       dentry = ERR_PTR(-ENOTDIR);
+                       break;
+               }
+
+               /* skip separators */
+               while (*s == sep)
+                       s++;
+               if (!*s)
+                       break;
+               p = s++;
+               /* next separator */
+               while (*s && *s != sep)
+                       s++;
+
+               child = lookup_positive_unlocked(p, dentry, s - p);
+               dput(dentry);
+               dentry = child;
+       } while (!IS_ERR(dentry));
+       return dentry;
+}
 
 /*
 * Open and cache a directory handle.
@@ -33,61 +118,57 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
        struct kvec open_iov[SMB2_CREATE_IOV_SIZE];
        struct kvec qi_iov[1];
        int rc, flags = 0;
-       __le16 utf16_path = 0; /* Null - since an open of top of share */
+       __le16 *utf16_path = NULL;
        u8 oplock = SMB2_OPLOCK_LEVEL_II;
        struct cifs_fid *pfid;
-       struct dentry *dentry;
+       struct dentry *dentry = NULL;
        struct cached_fid *cfid;
+       struct cached_fids *cfids;
 
-       if (tcon == NULL || tcon->nohandlecache ||
+       if (tcon == NULL || tcon->cfids == NULL || tcon->nohandlecache ||
            is_smb1_server(tcon->ses->server))
                return -EOPNOTSUPP;
 
        ses = tcon->ses;
        server = ses->server;
+       cfids = tcon->cfids;
+
+       if (!server->ops->new_lease_key)
+               return -EIO;
 
        if (cifs_sb->root == NULL)
                return -ENOENT;
 
-       if (!path[0])
-               dentry = cifs_sb->root;
-       else
-               return -ENOENT;
+       utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+       if (!utf16_path)
+               return -ENOMEM;
 
-       cfid = tcon->cfids->cfid;
+       cfid = find_or_create_cached_dir(cfids, path, lookup_only);
        if (cfid == NULL) {
-               cfid = init_cached_dir(path);
-               tcon->cfids->cfid = cfid;
+               kfree(utf16_path);
+               return -ENOENT;
        }
-       if (cfid == NULL)
-               return -ENOMEM;
-
-       mutex_lock(&cfid->fid_mutex);
-       if (cfid->is_valid) {
-               cifs_dbg(FYI, "found a cached root file handle\n");
+       /*
+        * At this point we either already have a lease, in which case we can
+        * just return it, or we are guaranteed to be the only thread
+        * accessing this cfid.
+        */
+       if (cfid->has_lease) {
                *ret_cfid = cfid;
-               kref_get(&cfid->refcount);
-               mutex_unlock(&cfid->fid_mutex);
+               kfree(utf16_path);
                return 0;
        }
 
        /*
         * We do not hold the lock for the open because in case
-        * SMB2_open needs to reconnect, it will end up calling
-        * cifs_mark_open_files_invalid() which takes the lock again
-        * thus causing a deadlock
+        * SMB2_open needs to reconnect.
+        * This is safe because no other thread will be able to get a ref
+        * to the cfid until we have finished opening the file and (possibly)
+        * acquired a lease.
         */
-       mutex_unlock(&cfid->fid_mutex);
-
-       if (lookup_only)
-               return -ENOENT;
-
        if (smb3_encryption_required(tcon))
                flags |= CIFS_TRANSFORM_REQ;
 
-       if (!server->ops->new_lease_key)
-               return -EIO;
-
        pfid = &cfid->fid;
        server->ops->new_lease_key(pfid);
 
@@ -108,7 +189,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
        oparms.reconnect = false;
 
        rc = SMB2_open_init(tcon, server,
-                           &rqst[0], &oplock, &oparms, &utf16_path);
+                           &rqst[0], &oplock, &oparms, utf16_path);
        if (rc)
                goto oshr_free;
        smb2_set_next_command(tcon, &rqst[0]);
@@ -131,47 +212,13 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
        rc = compound_send_recv(xid, ses, server,
                                flags, 2, rqst,
                                resp_buftype, rsp_iov);
-       mutex_lock(&cfid->fid_mutex);
-
-       /*
-        * Now we need to check again as the cached root might have
-        * been successfully re-opened from a concurrent process
-        */
-
-       if (cfid->is_valid) {
-               /* work was already done */
-
-               /* stash fids for close() later */
-               struct cifs_fid fid = {
-                       .persistent_fid = pfid->persistent_fid,
-                       .volatile_fid = pfid->volatile_fid,
-               };
-
-               /*
-                * caller expects this func to set the fid in cfid to valid
-                * cached root, so increment the refcount.
-                */
-               kref_get(&cfid->refcount);
-
-               mutex_unlock(&cfid->fid_mutex);
-
-               if (rc == 0) {
-                       /* close extra handle outside of crit sec */
-                       SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
-               }
-               rc = 0;
-               goto oshr_free;
-       }
-
-       /* Cached root is still invalid, continue normaly */
-
        if (rc) {
                if (rc == -EREMCHG) {
                        tcon->need_reconnect = true;
                        pr_warn_once("server share %s deleted\n",
                                     tcon->tree_name);
                }
-               goto oshr_exit;
+               goto oshr_free;
        }
 
        atomic_inc(&tcon->num_remote_opens);
@@ -183,31 +230,18 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
        oparms.fid->mid = le64_to_cpu(o_rsp->hdr.MessageId);
 #endif /* CIFS_DEBUG2 */
 
-       cfid->tcon = tcon;
-       cfid->is_valid = true;
-       cfid->dentry = dentry;
-       if (dentry)
-               dget(dentry);
-       kref_init(&cfid->refcount);
+       if (o_rsp->OplockLevel != SMB2_OPLOCK_LEVEL_LEASE)
+               goto oshr_free;
 
-       /* BB TBD check to see if oplock level check can be removed below */
-       if (o_rsp->OplockLevel == SMB2_OPLOCK_LEVEL_LEASE) {
-               /*
-                * See commit 2f94a3125b87. Increment the refcount when we
-                * get a lease for root, release it if lease break occurs
-                */
-               kref_get(&cfid->refcount);
-               cfid->has_lease = true;
-               smb2_parse_contexts(server, o_rsp,
-                               &oparms.fid->epoch,
-                                   oparms.fid->lease_key, &oplock,
-                                   NULL, NULL);
-       } else
-               goto oshr_exit;
+
+       smb2_parse_contexts(server, o_rsp,
+                           &oparms.fid->epoch,
+                           oparms.fid->lease_key, &oplock,
+                           NULL, NULL);
 
        qi_rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
        if (le32_to_cpu(qi_rsp->OutputBufferLength) < sizeof(struct smb2_file_all_info))
-               goto oshr_exit;
+               goto oshr_free;
        if (!smb2_validate_and_copy_iov(
                                le16_to_cpu(qi_rsp->OutputBufferOffset),
                                sizeof(struct smb2_file_all_info),
@@ -215,15 +249,42 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
                                (char *)&cfid->file_all_info))
                cfid->file_all_info_is_valid = true;
 
+       if (!path[0])
+               dentry = dget(cifs_sb->root);
+       else {
+               dentry = path_to_dentry(cifs_sb, path);
+               if (IS_ERR(dentry)) {
+                       rc = -ENOENT;
+                       goto oshr_free;
+               }
+       }
+       cfid->dentry = dentry;
+       cfid->tcon = tcon;
        cfid->time = jiffies;
+       cfid->is_open = true;
+       cfid->has_lease = true;
 
-oshr_exit:
-       mutex_unlock(&cfid->fid_mutex);
 oshr_free:
+       kfree(utf16_path);
        SMB2_open_free(&rqst[0]);
        SMB2_query_info_free(&rqst[1]);
        free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
        free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+       spin_lock(&cfids->cfid_list_lock);
+       if (!cfid->has_lease) {
+               if (cfid->on_list) {
+                       list_del(&cfid->entry);
+                       cfid->on_list = false;
+                       cfids->num_entries--;
+               }
+               rc = -ENOENT;
+       }
+       spin_unlock(&cfids->cfid_list_lock);
+       if (rc) {
+               free_cached_dir(cfid);
+               cfid = NULL;
+       }
+
        if (rc == 0)
                *ret_cfid = cfid;
 
@@ -235,20 +296,22 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
                              struct cached_fid **ret_cfid)
 {
        struct cached_fid *cfid;
+       struct cached_fids *cfids = tcon->cfids;
 
-       cfid = tcon->cfids->cfid;
-       if (cfid == NULL)
+       if (cfids == NULL)
                return -ENOENT;
 
-       mutex_lock(&cfid->fid_mutex);
-       if (cfid->dentry == dentry) {
-               cifs_dbg(FYI, "found a cached root file handle by dentry\n");
-               *ret_cfid = cfid;
-               kref_get(&cfid->refcount);
-               mutex_unlock(&cfid->fid_mutex);
-               return 0;
+       spin_lock(&cfids->cfid_list_lock);
+       list_for_each_entry(cfid, &cfids->entries, entry) {
+               if (dentry && cfid->dentry == dentry) {
+                       cifs_dbg(FYI, "found a cached root file handle by dentry\n");
+                       kref_get(&cfid->refcount);
+                       *ret_cfid = cfid;
+                       spin_unlock(&cfids->cfid_list_lock);
+                       return 0;
+               }
        }
-       mutex_unlock(&cfid->fid_mutex);
+       spin_unlock(&cfids->cfid_list_lock);
        return -ENOENT;
 }
 
@@ -257,63 +320,50 @@ smb2_close_cached_fid(struct kref *ref)
 {
        struct cached_fid *cfid = container_of(ref, struct cached_fid,
                                               refcount);
-       struct cached_dirent *dirent, *q;
 
-       if (cfid->is_valid) {
-               cifs_dbg(FYI, "clear cached root file handle\n");
-               SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
-                          cfid->fid.volatile_fid);
+       spin_lock(&cfid->cfids->cfid_list_lock);
+       if (cfid->on_list) {
+               list_del(&cfid->entry);
+               cfid->on_list = false;
+               cfid->cfids->num_entries--;
        }
+       spin_unlock(&cfid->cfids->cfid_list_lock);
 
-       /*
-        * We only check validity above to send SMB2_close,
-        * but we still need to invalidate these entries
-        * when this function is called
-        */
-       cfid->is_valid = false;
-       cfid->file_all_info_is_valid = false;
-       cfid->has_lease = false;
-       if (cfid->dentry) {
-               dput(cfid->dentry);
-               cfid->dentry = NULL;
-       }
-       /*
-        * Delete all cached dirent names
-        */
-       mutex_lock(&cfid->dirents.de_mutex);
-       list_for_each_entry_safe(dirent, q, &cfid->dirents.entries, entry) {
-               list_del(&dirent->entry);
-               kfree(dirent->name);
-               kfree(dirent);
+       dput(cfid->dentry);
+       cfid->dentry = NULL;
+
+       if (cfid->is_open) {
+               SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
+                          cfid->fid.volatile_fid);
        }
-       cfid->dirents.is_valid = 0;
-       cfid->dirents.is_failed = 0;
-       cfid->dirents.ctx = NULL;
-       cfid->dirents.pos = 0;
-       mutex_unlock(&cfid->dirents.de_mutex);
 
+       free_cached_dir(cfid);
 }
 
-void close_cached_dir(struct cached_fid *cfid)
+void drop_cached_dir_by_name(const unsigned int xid, struct cifs_tcon *tcon,
+                            const char *name, struct cifs_sb_info *cifs_sb)
 {
-       mutex_lock(&cfid->fid_mutex);
-       kref_put(&cfid->refcount, smb2_close_cached_fid);
-       mutex_unlock(&cfid->fid_mutex);
-}
+       struct cached_fid *cfid = NULL;
+       int rc;
 
-void close_cached_dir_lease_locked(struct cached_fid *cfid)
-{
+       rc = open_cached_dir(xid, tcon, name, cifs_sb, true, &cfid);
+       if (rc) {
+               cifs_dbg(FYI, "no cached dir found for rmdir(%s)\n", name);
+               return;
+       }
+       spin_lock(&cfid->cfids->cfid_list_lock);
        if (cfid->has_lease) {
                cfid->has_lease = false;
                kref_put(&cfid->refcount, smb2_close_cached_fid);
        }
+       spin_unlock(&cfid->cfids->cfid_list_lock);
+       close_cached_dir(cfid);
 }
 
-void close_cached_dir_lease(struct cached_fid *cfid)
+
+void close_cached_dir(struct cached_fid *cfid)
 {
-       mutex_lock(&cfid->fid_mutex);
-       close_cached_dir_lease_locked(cfid);
-       mutex_unlock(&cfid->fid_mutex);
+       kref_put(&cfid->refcount, smb2_close_cached_fid);
 }
 
 /*
@@ -326,41 +376,60 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
        struct cached_fid *cfid;
        struct cifs_tcon *tcon;
        struct tcon_link *tlink;
+       struct cached_fids *cfids;
 
        for (node = rb_first(root); node; node = rb_next(node)) {
                tlink = rb_entry(node, struct tcon_link, tl_rbnode);
                tcon = tlink_tcon(tlink);
                if (IS_ERR(tcon))
                        continue;
-               cfid = tcon->cfids->cfid;
-               if (cfid == NULL)
+               cfids = tcon->cfids;
+               if (cfids == NULL)
                        continue;
-               mutex_lock(&cfid->fid_mutex);
-               if (cfid->dentry) {
+               list_for_each_entry(cfid, &cfids->entries, entry) {
                        dput(cfid->dentry);
                        cfid->dentry = NULL;
                }
-               mutex_unlock(&cfid->fid_mutex);
        }
 }
 
 /*
- * Invalidate and close all cached dirs when a TCON has been reset
+ * Invalidate all cached dirs when a TCON has been reset
  * due to a session loss.
  */
 void invalidate_all_cached_dirs(struct cifs_tcon *tcon)
 {
-       struct cached_fid *cfid = tcon->cfids->cfid;
-
-       if (cfid == NULL)
-               return;
-
-       mutex_lock(&cfid->fid_mutex);
-       cfid->is_valid = false;
-       /* cached handle is not valid, so SMB2_CLOSE won't be sent below */
-       close_cached_dir_lease_locked(cfid);
-       memset(&cfid->fid, 0, sizeof(struct cifs_fid));
-       mutex_unlock(&cfid->fid_mutex);
+       struct cached_fids *cfids = tcon->cfids;
+       struct cached_fid *cfid, *q;
+       LIST_HEAD(entry);
+
+       spin_lock(&cfids->cfid_list_lock);
+       list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+               list_move(&cfid->entry, &entry);
+               cfids->num_entries--;
+               cfid->is_open = false;
+               cfid->on_list = false;
+               /* To prevent race with smb2_cached_lease_break() */
+               kref_get(&cfid->refcount);
+       }
+       spin_unlock(&cfids->cfid_list_lock);
+
+       list_for_each_entry_safe(cfid, q, &entry, entry) {
+               list_del(&cfid->entry);
+               cancel_work_sync(&cfid->lease_break);
+               if (cfid->has_lease) {
+                       /*
+                        * The lease was never cancelled from the server, so we
+                        * need to drop the reference.
+                        */
+                       spin_lock(&cfids->cfid_list_lock);
+                       cfid->has_lease = false;
+                       spin_unlock(&cfids->cfid_list_lock);
+                       kref_put(&cfid->refcount, smb2_close_cached_fid);
+               }
+               /* Drop the extra reference opened above */
+               kref_put(&cfid->refcount, smb2_close_cached_fid);
+       }
 }
 
 static void
@@ -369,51 +438,83 @@ smb2_cached_lease_break(struct work_struct *work)
        struct cached_fid *cfid = container_of(work,
                                struct cached_fid, lease_break);
 
-       close_cached_dir_lease(cfid);
+       spin_lock(&cfid->cfids->cfid_list_lock);
+       cfid->has_lease = false;
+       spin_unlock(&cfid->cfids->cfid_list_lock);
+       kref_put(&cfid->refcount, smb2_close_cached_fid);
 }
 
 int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
 {
-       struct cached_fid *cfid = tcon->cfids->cfid;
+       struct cached_fids *cfids = tcon->cfids;
+       struct cached_fid *cfid;
 
-       if (cfid == NULL)
+       if (cfids == NULL)
                return false;
 
-       if (cfid->is_valid &&
-           !memcmp(lease_key,
-                   cfid->fid.lease_key,
-                   SMB2_LEASE_KEY_SIZE)) {
-               cfid->time = 0;
-               INIT_WORK(&cfid->lease_break,
-                         smb2_cached_lease_break);
-               queue_work(cifsiod_wq,
-                          &cfid->lease_break);
-               return true;
+       spin_lock(&cfids->cfid_list_lock);
+       list_for_each_entry(cfid, &cfids->entries, entry) {
+               if (cfid->has_lease &&
+                   !memcmp(lease_key,
+                           cfid->fid.lease_key,
+                           SMB2_LEASE_KEY_SIZE)) {
+                       cfid->time = 0;
+                       /*
+                        * We found a lease; remove it from the list
+                        * so no other threads can access it.
+                        */
+                       list_del(&cfid->entry);
+                       cfid->on_list = false;
+                       cfids->num_entries--;
+
+                       queue_work(cifsiod_wq,
+                                  &cfid->lease_break);
+                       spin_unlock(&cfids->cfid_list_lock);
+                       return true;
+               }
        }
+       spin_unlock(&cfids->cfid_list_lock);
        return false;
 }
 
-struct cached_fid *init_cached_dir(const char *path)
+static struct cached_fid *init_cached_dir(const char *path)
 {
        struct cached_fid *cfid;
 
-       cfid = kzalloc(sizeof(*cfid), GFP_KERNEL);
+       cfid = kzalloc(sizeof(*cfid), GFP_ATOMIC);
        if (!cfid)
                return NULL;
-       cfid->path = kstrdup(path, GFP_KERNEL);
+       cfid->path = kstrdup(path, GFP_ATOMIC);
        if (!cfid->path) {
                kfree(cfid);
                return NULL;
        }
 
+       INIT_WORK(&cfid->lease_break, smb2_cached_lease_break);
+       INIT_LIST_HEAD(&cfid->entry);
        INIT_LIST_HEAD(&cfid->dirents.entries);
        mutex_init(&cfid->dirents.de_mutex);
-       mutex_init(&cfid->fid_mutex);
+       spin_lock_init(&cfid->fid_lock);
+       kref_init(&cfid->refcount);
        return cfid;
 }
 
-void free_cached_dir(struct cached_fid *cfid)
+static void free_cached_dir(struct cached_fid *cfid)
 {
+       struct cached_dirent *dirent, *q;
+
+       dput(cfid->dentry);
+       cfid->dentry = NULL;
+
+       /*
+        * Delete all cached dirent names
+        */
+       list_for_each_entry_safe(dirent, q, &cfid->dirents.entries, entry) {
+               list_del(&dirent->entry);
+               kfree(dirent->name);
+               kfree(dirent);
+       }
+
        kfree(cfid->path);
        cfid->path = NULL;
        kfree(cfid);
@@ -426,15 +527,32 @@ struct cached_fids *init_cached_dirs(void)
        cfids = kzalloc(sizeof(*cfids), GFP_KERNEL);
        if (!cfids)
                return NULL;
-       mutex_init(&cfids->cfid_list_mutex);
+       spin_lock_init(&cfids->cfid_list_lock);
+       INIT_LIST_HEAD(&cfids->entries);
        return cfids;
 }
 
+/*
+ * Called from tconInfoFree when we are tearing down the tcon.
+ * There are no active users or open files/directories at this point.
+ */
 void free_cached_dirs(struct cached_fids *cfids)
 {
-       if (cfids->cfid) {
-               free_cached_dir(cfids->cfid);
-               cfids->cfid = NULL;
+       struct cached_fid *cfid, *q;
+       LIST_HEAD(entry);
+
+       spin_lock(&cfids->cfid_list_lock);
+       list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+               cfid->on_list = false;
+               cfid->is_open = false;
+               list_move(&cfid->entry, &entry);
+       }
+       spin_unlock(&cfids->cfid_list_lock);
+
+       list_for_each_entry_safe(cfid, q, &entry, entry) {
+               list_del(&cfid->entry);
+               free_cached_dir(cfid);
        }
+
        kfree(cfids);
 }
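The rewritten free_cached_dirs() above uses a common kernel idiom: detach every entry onto a private list while holding the spinlock, then free the entries after dropping it, since freeing (which may sleep or do heavy work) must not happen under a spinlock. A minimal userspace sketch of that two-phase pattern, with illustrative names (`node`, `entries`, `drain_and_free`) that are not from the CIFS sources:

```c
/* Userspace sketch of the "move to a private list under the lock,
 * free outside it" pattern used by free_cached_dirs(). */
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct node {
        struct node *next;
        int on_list;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *entries;

static int drain_and_free(void)
{
        struct node *local, *n;
        int freed = 0;

        /* Phase 1: detach everything while holding the lock. */
        pthread_mutex_lock(&list_lock);
        local = entries;
        entries = NULL;
        for (n = local; n; n = n->next)
                n->on_list = 0;     /* mirrors cfid->on_list = false */
        pthread_mutex_unlock(&list_lock);

        /* Phase 2: free outside the lock; in the kernel analogue,
         * free_cached_dir() must not run under cfid_list_lock. */
        while (local) {
                n = local;
                local = local->next;
                free(n);
                freed++;
        }
        return freed;
}
```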
index bdf6c38..2f4e764 100644
@@ -31,14 +31,17 @@ struct cached_dirents {
 };
 
 struct cached_fid {
+       struct list_head entry;
+       struct cached_fids *cfids;
        const char *path;
-       bool is_valid:1;        /* Do we have a useable root fid */
-       bool file_all_info_is_valid:1;
        bool has_lease:1;
+       bool is_open:1;
+       bool on_list:1;
+       bool file_all_info_is_valid:1;
        unsigned long time; /* jiffies of when lease was taken */
        struct kref refcount;
        struct cifs_fid fid;
-       struct mutex fid_mutex;
+       spinlock_t fid_lock;
        struct cifs_tcon *tcon;
        struct dentry *dentry;
        struct work_struct lease_break;
@@ -46,9 +49,14 @@ struct cached_fid {
        struct cached_dirents dirents;
 };
 
+#define MAX_CACHED_FIDS 16
 struct cached_fids {
-       struct mutex cfid_list_mutex;
-       struct cached_fid *cfid;
+       /* Must be held when:
+        * - accessing the cfids->entries list
+        */
+       spinlock_t cfid_list_lock;
+       int num_entries;
+       struct list_head entries;
 };
 
 extern struct cached_fids *init_cached_dirs(void);
@@ -61,8 +69,10 @@ extern int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
                                     struct dentry *dentry,
                                     struct cached_fid **cfid);
 extern void close_cached_dir(struct cached_fid *cfid);
-extern void close_cached_dir_lease(struct cached_fid *cfid);
-extern void close_cached_dir_lease_locked(struct cached_fid *cfid);
+extern void drop_cached_dir_by_name(const unsigned int xid,
+                                   struct cifs_tcon *tcon,
+                                   const char *name,
+                                   struct cifs_sb_info *cifs_sb);
 extern void close_all_cached_dirs(struct cifs_sb_info *cifs_sb);
 extern void invalidate_all_cached_dirs(struct cifs_tcon *tcon);
 extern int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]);
index b87cbbe..d86d78d 100644
@@ -91,6 +91,13 @@ struct smb3_notify {
        bool    watch_tree;
 } __packed;
 
+struct smb3_notify_info {
+       __u32   completion_filter;
+       bool    watch_tree;
+       __u32   data_len; /* size of notify data below */
+       __u8    notify_data[];
+} __packed;
+
 #define CIFS_IOCTL_MAGIC       0xCF
 #define CIFS_IOC_COPYCHUNK_FILE        _IOW(CIFS_IOCTL_MAGIC, 3, int)
 #define CIFS_IOC_SET_INTEGRITY  _IO(CIFS_IOCTL_MAGIC, 4)
@@ -100,6 +107,7 @@ struct smb3_notify {
 #define CIFS_DUMP_KEY _IOWR(CIFS_IOCTL_MAGIC, 8, struct smb3_key_debug_info)
 #define CIFS_IOC_NOTIFY _IOW(CIFS_IOCTL_MAGIC, 9, struct smb3_notify)
 #define CIFS_DUMP_FULL_KEY _IOWR(CIFS_IOCTL_MAGIC, 10, struct smb3_full_key_debug_info)
+#define CIFS_IOC_NOTIFY_INFO _IOWR(CIFS_IOCTL_MAGIC, 11, struct smb3_notify_info)
 #define CIFS_IOC_SHUTDOWN _IOR ('X', 125, __u32)
 
 /*
index 8042d72..d0b9fec 100644
@@ -396,6 +396,7 @@ cifs_alloc_inode(struct super_block *sb)
        cifs_inode->epoch = 0;
        spin_lock_init(&cifs_inode->open_file_lock);
        generate_random_uuid(cifs_inode->lease_key);
+       cifs_inode->symlink_target = NULL;
 
        /*
         * Can not set i_flags here - they get immediately overwritten to zero
@@ -412,7 +413,11 @@ cifs_alloc_inode(struct super_block *sb)
 static void
 cifs_free_inode(struct inode *inode)
 {
-       kmem_cache_free(cifs_inode_cachep, CIFS_I(inode));
+       struct cifsInodeInfo *cinode = CIFS_I(inode);
+
+       if (S_ISLNK(inode->i_mode))
+               kfree(cinode->symlink_target);
+       kmem_cache_free(cifs_inode_cachep, cinode);
 }
 
 static void
@@ -1139,7 +1144,7 @@ const struct inode_operations cifs_file_inode_ops = {
 };
 
 const struct inode_operations cifs_symlink_inode_ops = {
-       .get_link = cifs_get_link,
+       .get_link = simple_get_link,
        .permission = cifs_permission,
        .listxattr = cifs_listxattr,
 };
@@ -1297,8 +1302,11 @@ static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off,
        ssize_t rc;
        struct cifsFileInfo *cfile = dst_file->private_data;
 
-       if (cfile->swapfile)
-               return -EOPNOTSUPP;
+       if (cfile->swapfile) {
+               rc = -EOPNOTSUPP;
+               free_xid(xid);
+               return rc;
+       }
 
        rc = cifs_file_copychunk_range(xid, src_file, off, dst_file, destoff,
                                        len, flags);
index 5b4a7a3..388b745 100644
@@ -153,6 +153,6 @@ extern const struct export_operations cifs_export_ops;
 #endif /* CONFIG_CIFS_NFSD_EXPORT */
 
 /* when changing internal version - update following two lines at same time */
-#define SMB3_PRODUCT_BUILD 39
-#define CIFS_VERSION   "2.39"
+#define SMB3_PRODUCT_BUILD 40
+#define CIFS_VERSION   "2.40"
 #endif                         /* _CIFSFS_H */
index 52ddf41..1420acf 100644
@@ -185,6 +185,19 @@ struct cifs_cred {
        struct cifs_ace *aces;
 };
 
+struct cifs_open_info_data {
+       char *symlink_target;
+       union {
+               struct smb2_file_all_info fi;
+               struct smb311_posix_qinfo posix_fi;
+       };
+};
+
+static inline void cifs_free_open_info(struct cifs_open_info_data *data)
+{
+       kfree(data->symlink_target);
+}
+
 /*
  *****************************************************************
  * Except the CIFS PDUs themselves all the
@@ -307,20 +320,20 @@ struct smb_version_operations {
        int (*is_path_accessible)(const unsigned int, struct cifs_tcon *,
                                  struct cifs_sb_info *, const char *);
        /* query path data from the server */
-       int (*query_path_info)(const unsigned int, struct cifs_tcon *,
-                              struct cifs_sb_info *, const char *,
-                              FILE_ALL_INFO *, bool *, bool *);
+       int (*query_path_info)(const unsigned int xid, struct cifs_tcon *tcon,
+                              struct cifs_sb_info *cifs_sb, const char *full_path,
+                              struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse);
        /* query file data from the server */
-       int (*query_file_info)(const unsigned int, struct cifs_tcon *,
-                              struct cifs_fid *, FILE_ALL_INFO *);
+       int (*query_file_info)(const unsigned int xid, struct cifs_tcon *tcon,
+                              struct cifsFileInfo *cfile, struct cifs_open_info_data *data);
        /* query reparse tag from srv to determine which type of special file */
        int (*query_reparse_tag)(const unsigned int xid, struct cifs_tcon *tcon,
                                struct cifs_sb_info *cifs_sb, const char *path,
                                __u32 *reparse_tag);
        /* get server index number */
-       int (*get_srv_inum)(const unsigned int, struct cifs_tcon *,
-                           struct cifs_sb_info *, const char *,
-                           u64 *uniqueid, FILE_ALL_INFO *);
+       int (*get_srv_inum)(const unsigned int xid, struct cifs_tcon *tcon,
+                           struct cifs_sb_info *cifs_sb, const char *full_path, u64 *uniqueid,
+                           struct cifs_open_info_data *data);
        /* set size by path */
        int (*set_path_size)(const unsigned int, struct cifs_tcon *,
                             const char *, __u64, struct cifs_sb_info *, bool);
@@ -369,8 +382,8 @@ struct smb_version_operations {
                             struct cifs_sb_info *, const char *,
                             char **, bool);
        /* open a file for non-posix mounts */
-       int (*open)(const unsigned int, struct cifs_open_parms *,
-                   __u32 *, FILE_ALL_INFO *);
+       int (*open)(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+                   void *buf);
        /* set fid protocol-specific info */
        void (*set_fid)(struct cifsFileInfo *, struct cifs_fid *, __u32);
        /* close a file */
@@ -441,7 +454,7 @@ struct smb_version_operations {
        int (*enum_snapshots)(const unsigned int xid, struct cifs_tcon *tcon,
                             struct cifsFileInfo *src_file, void __user *);
        int (*notify)(const unsigned int xid, struct file *pfile,
-                            void __user *pbuf);
+                            void __user *pbuf, bool return_changes);
        int (*query_mf_symlink)(unsigned int, struct cifs_tcon *,
                                struct cifs_sb_info *, const unsigned char *,
                                char *, unsigned int *);
@@ -1123,6 +1136,7 @@ struct cifs_fattr {
        struct timespec64 cf_mtime;
        struct timespec64 cf_ctime;
        u32             cf_cifstag;
+       char            *cf_symlink_target;
 };
 
 /*
@@ -1385,6 +1399,7 @@ struct cifsFileInfo {
        struct work_struct put; /* work for the final part of _put */
        struct delayed_work deferred;
        bool deferred_close_scheduled; /* Flag to indicate close is scheduled */
+       char *symlink_target;
 };
 
 struct cifs_io_parms {
@@ -1543,6 +1558,7 @@ struct cifsInodeInfo {
        struct list_head deferred_closes; /* list of deferred closes */
        spinlock_t deferred_lock; /* protection on deferred list */
        bool lease_granted; /* Flag to indicate whether lease or oplock is granted. */
+       char *symlink_target;
 };
 
 static inline struct cifsInodeInfo *
@@ -2111,4 +2127,14 @@ static inline size_t ntlmssp_workstation_name_size(const struct cifs_ses *ses)
        return sizeof(ses->workstation_name);
 }
 
+static inline void move_cifs_info_to_smb2(struct smb2_file_all_info *dst, const FILE_ALL_INFO *src)
+{
+       memcpy(dst, src, (size_t)((u8 *)&src->AccessFlags - (u8 *)src));
+       dst->AccessFlags = src->AccessFlags;
+       dst->CurrentByteOffset = src->CurrentByteOffset;
+       dst->Mode = src->Mode;
+       dst->AlignmentRequirement = src->AlignmentRequirement;
+       dst->FileNameLength = src->FileNameLength;
+}
+
 #endif /* _CIFS_GLOB_H */
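The new move_cifs_info_to_smb2() helper above copies the common leading layout of the two info structures in a single memcpy, bounded by the address of the first diverging member (`&src->AccessFlags - (u8 *)src`, i.e. an offsetof computation), then assigns the remaining members one by one. A sketch of that prefix-copy idiom with stand-in structs (these are not the real FILE_ALL_INFO/smb2_file_all_info layouts):

```c
/* Two structs sharing an identical leading layout: convert by memcpy'ing
 * everything before the first diverging member, then assign the rest. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct legacy_info {
        uint64_t creation_time;
        uint64_t end_of_file;
        uint32_t attributes;
        uint32_t access_flags;   /* layouts diverge from here on */
        uint16_t name_len;
};

struct smb2_info {
        uint64_t creation_time;
        uint64_t end_of_file;
        uint32_t attributes;
        uint32_t access_flags;
        uint32_t name_len;       /* wider than in legacy_info */
};

static void legacy_to_smb2(struct smb2_info *dst, const struct legacy_info *src)
{
        /* Copy the shared prefix in one shot... */
        memcpy(dst, src, offsetof(struct legacy_info, access_flags));
        /* ...then move the diverging tail member by member. */
        dst->access_flags = src->access_flags;
        dst->name_len = src->name_len;
}
```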
index 84ec71b..83e83d8 100644
@@ -182,10 +182,9 @@ extern int cifs_unlock_range(struct cifsFileInfo *cfile,
 extern int cifs_push_mandatory_locks(struct cifsFileInfo *cfile);
 
 extern void cifs_down_write(struct rw_semaphore *sem);
-extern struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid,
-                                             struct file *file,
-                                             struct tcon_link *tlink,
-                                             __u32 oplock);
+struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+                                      struct tcon_link *tlink, __u32 oplock,
+                                      const char *symlink_target);
 extern int cifs_posix_open(const char *full_path, struct inode **inode,
                           struct super_block *sb, int mode,
                           unsigned int f_flags, __u32 *oplock, __u16 *netfid,
@@ -200,9 +199,9 @@ extern int cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr);
 extern struct inode *cifs_iget(struct super_block *sb,
                               struct cifs_fattr *fattr);
 
-extern int cifs_get_inode_info(struct inode **inode, const char *full_path,
-                              FILE_ALL_INFO *data, struct super_block *sb,
-                              int xid, const struct cifs_fid *fid);
+int cifs_get_inode_info(struct inode **inode, const char *full_path,
+                       struct cifs_open_info_data *data, struct super_block *sb, int xid,
+                       const struct cifs_fid *fid);
 extern int smb311_posix_get_inode_info(struct inode **pinode, const char *search_path,
                        struct super_block *sb, unsigned int xid);
 extern int cifs_get_inode_info_unix(struct inode **pinode,
index 7a808e4..1724066 100644
@@ -2305,7 +2305,7 @@ int CIFSSMBRenameOpenFile(const unsigned int xid, struct cifs_tcon *pTcon,
                                        remap);
        }
        rename_info->target_name_len = cpu_to_le32(2 * len_of_str);
-       count = 12 /* sizeof(struct set_file_rename) */ + (2 * len_of_str);
+       count = sizeof(struct set_file_rename) + (2 * len_of_str);
        byte_count += count;
        pSMB->DataCount = cpu_to_le16(count);
        pSMB->TotalDataCount = pSMB->DataCount;
index 40900aa..ffb2915 100644
@@ -2832,9 +2832,12 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
         * sessinit is sent but no second negprot
         */
        struct rfc1002_session_packet *ses_init_buf;
+       unsigned int req_noscope_len;
        struct smb_hdr *smb_buf;
+
        ses_init_buf = kzalloc(sizeof(struct rfc1002_session_packet),
                               GFP_KERNEL);
+
        if (ses_init_buf) {
                ses_init_buf->trailer.session_req.called_len = 32;
 
@@ -2870,8 +2873,12 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
                ses_init_buf->trailer.session_req.scope2 = 0;
                smb_buf = (struct smb_hdr *)ses_init_buf;
 
-               /* sizeof RFC1002_SESSION_REQUEST with no scope */
-               smb_buf->smb_buf_length = cpu_to_be32(0x81000044);
+               /* sizeof RFC1002_SESSION_REQUEST with no scopes */
+               req_noscope_len = sizeof(struct rfc1002_session_packet) - 2;
+
+               /* == cpu_to_be32(0x81000044) */
+               smb_buf->smb_buf_length =
+                       cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | req_noscope_len);
                rc = smb_send(server, smb_buf, 0x44);
                kfree(ses_init_buf);
                /*
@@ -3922,12 +3929,11 @@ CIFSTCon(const unsigned int xid, struct cifs_ses *ses,
        pSMB->AndXCommand = 0xFF;
        pSMB->Flags = cpu_to_le16(TCON_EXTENDED_SECINFO);
        bcc_ptr = &pSMB->Password[0];
-       if (tcon->pipe || (ses->server->sec_mode & SECMODE_USER)) {
-               pSMB->PasswordLength = cpu_to_le16(1);  /* minimum */
-               *bcc_ptr = 0; /* password is null byte */
-               bcc_ptr++;              /* skip password */
-               /* already aligned so no need to do it below */
-       }
+
+       pSMB->PasswordLength = cpu_to_le16(1);  /* minimum */
+       *bcc_ptr = 0; /* password is null byte */
+       bcc_ptr++;              /* skip password */
+       /* already aligned so no need to do it below */
 
        if (ses->server->sign)
                smb_buffer->Flags2 |= SMBFLG2_SECURITY_SIGNATURE;
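The ip_rfc1001_connect() hunk above replaces the magic constant 0x81000044 with an expression built from the message type (RFC1002_SESSION_REQUEST, 0x81) in the top byte and the no-scope request length (0x44) in the low bits, which is the RFC 1002 session packet header layout. A sketch of that encoding; the 0x44 length assumes the driver's no-scope session request size:

```c
/* Rebuild the RFC 1002 session header word: type in the top byte,
 * payload length in the low 24 bits. */
#include <assert.h>
#include <stdint.h>

#define RFC1002_SESSION_REQUEST 0x81

static uint32_t rfc1002_hdr(uint8_t type, uint32_t len)
{
        return ((uint32_t)type << 24) | (len & 0x00ffffffu);
}
```

With type 0x81 and length 0x44 this reproduces the old constant, so the change is purely a readability fix, as the `/* == cpu_to_be32(0x81000044) */` comment in the patch notes.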
index f588693..8b1c371 100644
@@ -165,10 +165,9 @@ check_name(struct dentry *direntry, struct cifs_tcon *tcon)
 
 /* Inode operations in similar order to how they appear in Linux file fs.h */
 
-static int
-cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned int xid,
-              struct tcon_link *tlink, unsigned oflags, umode_t mode,
-              __u32 *oplock, struct cifs_fid *fid)
+static int cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned int xid,
+                         struct tcon_link *tlink, unsigned int oflags, umode_t mode, __u32 *oplock,
+                         struct cifs_fid *fid, struct cifs_open_info_data *buf)
 {
        int rc = -ENOENT;
        int create_options = CREATE_NOT_DIR;
@@ -177,7 +176,6 @@ cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned int xid,
        struct cifs_tcon *tcon = tlink_tcon(tlink);
        const char *full_path;
        void *page = alloc_dentry_path();
-       FILE_ALL_INFO *buf = NULL;
        struct inode *newinode = NULL;
        int disposition;
        struct TCP_Server_Info *server = tcon->ses->server;
@@ -290,12 +288,6 @@ cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned int xid,
                goto out;
        }
 
-       buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);
-       if (buf == NULL) {
-               rc = -ENOMEM;
-               goto out;
-       }
-
        /*
         * if we're not using unix extensions, see if we need to set
         * ATTR_READONLY on the create call
@@ -364,8 +356,7 @@ cifs_create_get_file_info:
        {
 #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
                /* TODO: Add support for calling POSIX query info here, but passing in fid */
-               rc = cifs_get_inode_info(&newinode, full_path, buf, inode->i_sb,
-                                        xid, fid);
+               rc = cifs_get_inode_info(&newinode, full_path, buf, inode->i_sb, xid, fid);
                if (newinode) {
                        if (server->ops->set_lease_key)
                                server->ops->set_lease_key(newinode, fid);
@@ -402,7 +393,6 @@ cifs_create_set_dentry:
        d_add(direntry, newinode);
 
 out:
-       kfree(buf);
        free_dentry_path(page);
        return rc;
 
@@ -423,10 +413,11 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
        struct tcon_link *tlink;
        struct cifs_tcon *tcon;
        struct TCP_Server_Info *server;
-       struct cifs_fid fid;
+       struct cifs_fid fid = {};
        struct cifs_pending_open open;
        __u32 oplock;
        struct cifsFileInfo *file_info;
+       struct cifs_open_info_data buf = {};
 
        if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb))))
                return -EIO;
@@ -484,8 +475,7 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
        cifs_add_pending_open(&fid, tlink, &open);
 
        rc = cifs_do_create(inode, direntry, xid, tlink, oflags, mode,
-                           &oplock, &fid);
-
+                           &oplock, &fid, &buf);
        if (rc) {
                cifs_del_pending_open(&open);
                goto out;
@@ -510,7 +500,7 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
                        file->f_op = &cifs_file_direct_ops;
                }
 
-       file_info = cifs_new_fileinfo(&fid, file, tlink, oplock);
+       file_info = cifs_new_fileinfo(&fid, file, tlink, oplock, buf.symlink_target);
        if (file_info == NULL) {
                if (server->ops->close)
                        server->ops->close(xid, tcon, &fid);
@@ -526,6 +516,7 @@ out:
        cifs_put_tlink(tlink);
 out_free_xid:
        free_xid(xid);
+       cifs_free_open_info(&buf);
        return rc;
 }
 
@@ -547,12 +538,15 @@ int cifs_create(struct user_namespace *mnt_userns, struct inode *inode,
        struct TCP_Server_Info *server;
        struct cifs_fid fid;
        __u32 oplock;
+       struct cifs_open_info_data buf = {};
 
        cifs_dbg(FYI, "cifs_create parent inode = 0x%p name is: %pd and dentry = 0x%p\n",
                 inode, direntry, direntry);
 
-       if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb))))
-               return -EIO;
+       if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb)))) {
+               rc = -EIO;
+               goto out_free_xid;
+       }
 
        tlink = cifs_sb_tlink(CIFS_SB(inode->i_sb));
        rc = PTR_ERR(tlink);
@@ -565,11 +559,11 @@ int cifs_create(struct user_namespace *mnt_userns, struct inode *inode,
        if (server->ops->new_lease_key)
                server->ops->new_lease_key(&fid);
 
-       rc = cifs_do_create(inode, direntry, xid, tlink, oflags, mode,
-                           &oplock, &fid);
+       rc = cifs_do_create(inode, direntry, xid, tlink, oflags, mode, &oplock, &fid, &buf);
        if (!rc && server->ops->close)
                server->ops->close(xid, tcon, &fid);
 
+       cifs_free_open_info(&buf);
        cifs_put_tlink(tlink);
 out_free_xid:
        free_xid(xid);
index 7d75672..5b3b308 100644
@@ -209,16 +209,14 @@ posix_open_ret:
 }
 #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
 
-static int
-cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_sb_info *cifs_sb,
-            struct cifs_tcon *tcon, unsigned int f_flags, __u32 *oplock,
-            struct cifs_fid *fid, unsigned int xid)
+static int cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_sb_info *cifs_sb,
+                       struct cifs_tcon *tcon, unsigned int f_flags, __u32 *oplock,
+                       struct cifs_fid *fid, unsigned int xid, struct cifs_open_info_data *buf)
 {
        int rc;
        int desired_access;
        int disposition;
        int create_options = CREATE_NOT_DIR;
-       FILE_ALL_INFO *buf;
        struct TCP_Server_Info *server = tcon->ses->server;
        struct cifs_open_parms oparms;
 
@@ -255,10 +253,6 @@ cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_sb_info *ci
 
        /* BB pass O_SYNC flag through on file attributes .. BB */
 
-       buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);
-       if (!buf)
-               return -ENOMEM;
-
        /* O_SYNC also has bit for O_DSYNC so following check picks up either */
        if (f_flags & O_SYNC)
                create_options |= CREATE_WRITE_THROUGH;
@@ -276,9 +270,8 @@ cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_sb_info *ci
        oparms.reconnect = false;
 
        rc = server->ops->open(xid, &oparms, oplock, buf);
-
        if (rc)
-               goto out;
+               return rc;
 
        /* TODO: Add support for calling posix query info but with passing in fid */
        if (tcon->unix_ext)
@@ -294,8 +287,6 @@ cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_sb_info *ci
                        rc = -EOPENSTALE;
        }
 
-out:
-       kfree(buf);
        return rc;
 }
 
@@ -325,9 +316,9 @@ cifs_down_write(struct rw_semaphore *sem)
 
 static void cifsFileInfo_put_work(struct work_struct *work);
 
-struct cifsFileInfo *
-cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
-                 struct tcon_link *tlink, __u32 oplock)
+struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
+                                      struct tcon_link *tlink, __u32 oplock,
+                                      const char *symlink_target)
 {
        struct dentry *dentry = file_dentry(file);
        struct inode *inode = d_inode(dentry);
@@ -347,6 +338,15 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file,
                return NULL;
        }
 
+       if (symlink_target) {
+               cfile->symlink_target = kstrdup(symlink_target, GFP_KERNEL);
+               if (!cfile->symlink_target) {
+                       kfree(fdlocks);
+                       kfree(cfile);
+                       return NULL;
+               }
+       }
+
        INIT_LIST_HEAD(&fdlocks->locks);
        fdlocks->cfile = cfile;
        cfile->llist = fdlocks;
@@ -440,6 +440,7 @@ static void cifsFileInfo_put_final(struct cifsFileInfo *cifs_file)
        cifs_put_tlink(cifs_file->tlink);
        dput(cifs_file->dentry);
        cifs_sb_deactive(sb);
+       kfree(cifs_file->symlink_target);
        kfree(cifs_file);
 }
 
@@ -488,7 +489,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file,
        struct cifsInodeInfo *cifsi = CIFS_I(inode);
        struct super_block *sb = inode->i_sb;
        struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
-       struct cifs_fid fid;
+       struct cifs_fid fid = {};
        struct cifs_pending_open open;
        bool oplock_break_cancelled;
 
@@ -570,8 +571,9 @@ int cifs_open(struct inode *inode, struct file *file)
        void *page;
        const char *full_path;
        bool posix_open_ok = false;
-       struct cifs_fid fid;
+       struct cifs_fid fid = {};
        struct cifs_pending_open open;
+       struct cifs_open_info_data data = {};
 
        xid = get_xid();
 
@@ -662,15 +664,15 @@ int cifs_open(struct inode *inode, struct file *file)
                if (server->ops->get_lease_key)
                        server->ops->get_lease_key(inode, &fid);
 
-               rc = cifs_nt_open(full_path, inode, cifs_sb, tcon,
-                                 file->f_flags, &oplock, &fid, xid);
+               rc = cifs_nt_open(full_path, inode, cifs_sb, tcon, file->f_flags, &oplock, &fid,
+                                 xid, &data);
                if (rc) {
                        cifs_del_pending_open(&open);
                        goto out;
                }
        }
 
-       cfile = cifs_new_fileinfo(&fid, file, tlink, oplock);
+       cfile = cifs_new_fileinfo(&fid, file, tlink, oplock, data.symlink_target);
        if (cfile == NULL) {
                if (server->ops->close)
                        server->ops->close(xid, tcon, &fid);
@@ -712,6 +714,7 @@ out:
        free_dentry_path(page);
        free_xid(xid);
        cifs_put_tlink(tlink);
+       cifs_free_open_info(&data);
        return rc;
 }
 
@@ -1882,11 +1885,13 @@ int cifs_flock(struct file *file, int cmd, struct file_lock *fl)
        struct cifsFileInfo *cfile;
        __u32 type;
 
-       rc = -EACCES;
        xid = get_xid();
 
-       if (!(fl->fl_flags & FL_FLOCK))
-               return -ENOLCK;
+       if (!(fl->fl_flags & FL_FLOCK)) {
+               rc = -ENOLCK;
+               free_xid(xid);
+               return rc;
+       }
 
        cfile = (struct cifsFileInfo *)file->private_data;
        tcon = tlink_tcon(cfile->tlink);
@@ -1905,8 +1910,9 @@ int cifs_flock(struct file *file, int cmd, struct file_lock *fl)
                 * if no lock or unlock then nothing to do since we do not
                 * know what it is
                 */
+               rc = -EOPNOTSUPP;
                free_xid(xid);
-               return -EOPNOTSUPP;
+               return rc;
        }
 
        rc = cifs_setlk(file, fl, type, wait_flag, posix_lck, lock, unlock,
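Several hunks in this file (cifs_flock(), and cifs_copy_file_range() earlier) fix the same leak: once get_xid() has been called, every return path, including early error returns, must call free_xid(). A toy sketch of that invariant, where get_xid()/free_xid() are stand-in counters rather than the real CIFS helpers:

```c
/* Error paths must release the xid acquired at function entry. */
#include <assert.h>

static int active_xids;

static int get_xid(void)      { return ++active_xids; }
static void free_xid(int xid) { (void)xid; --active_xids; }

static int do_op(int valid_flags)
{
        int xid = get_xid();
        int rc;

        if (!valid_flags) {
                rc = -1;            /* e.g. -ENOLCK */
                free_xid(xid);      /* the fix: release before returning */
                return rc;
        }

        rc = 0;                     /* normal path */
        free_xid(xid);
        return rc;
}
```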
index ad10c61..9bde08d 100644
@@ -210,6 +210,17 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
                 */
                inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9;
        }
+
+       if (S_ISLNK(fattr->cf_mode)) {
+               kfree(cifs_i->symlink_target);
+               cifs_i->symlink_target = fattr->cf_symlink_target;
+               fattr->cf_symlink_target = NULL;
+
+               if (unlikely(!cifs_i->symlink_target))
+                       inode->i_link = ERR_PTR(-EOPNOTSUPP);
+               else
+                       inode->i_link = cifs_i->symlink_target;
+       }
        spin_unlock(&inode->i_lock);
 
        if (fattr->cf_flags & CIFS_FATTR_DFS_REFERRAL)
@@ -347,13 +358,22 @@ cifs_get_file_info_unix(struct file *filp)
        int rc;
        unsigned int xid;
        FILE_UNIX_BASIC_INFO find_data;
-       struct cifs_fattr fattr;
+       struct cifs_fattr fattr = {};
        struct inode *inode = file_inode(filp);
        struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
        struct cifsFileInfo *cfile = filp->private_data;
        struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
 
        xid = get_xid();
+
+       if (cfile->symlink_target) {
+               fattr.cf_symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+               if (!fattr.cf_symlink_target) {
+                       rc = -ENOMEM;
+                       goto cifs_gfiunix_out;
+               }
+       }
+
        rc = CIFSSMBUnixQFileInfo(xid, tcon, cfile->fid.netfid, &find_data);
        if (!rc) {
                cifs_unix_basic_to_fattr(&fattr, &find_data, cifs_sb);
@@ -378,6 +398,7 @@ int cifs_get_inode_info_unix(struct inode **pinode,
        FILE_UNIX_BASIC_INFO find_data;
        struct cifs_fattr fattr;
        struct cifs_tcon *tcon;
+       struct TCP_Server_Info *server;
        struct tcon_link *tlink;
        struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
 
@@ -387,10 +408,12 @@ int cifs_get_inode_info_unix(struct inode **pinode,
        if (IS_ERR(tlink))
                return PTR_ERR(tlink);
        tcon = tlink_tcon(tlink);
+       server = tcon->ses->server;
 
        /* could have done a find first instead but this returns more info */
        rc = CIFSSMBUnixQPathInfo(xid, tcon, full_path, &find_data,
                                  cifs_sb->local_nls, cifs_remap(cifs_sb));
+       cifs_dbg(FYI, "%s: query path info: rc = %d\n", __func__, rc);
        cifs_put_tlink(tlink);
 
        if (!rc) {
@@ -410,6 +433,17 @@ int cifs_get_inode_info_unix(struct inode **pinode,
                        cifs_dbg(FYI, "check_mf_symlink: %d\n", tmprc);
        }
 
+       if (S_ISLNK(fattr.cf_mode) && !fattr.cf_symlink_target) {
+               if (!server->ops->query_symlink)
+                       return -EOPNOTSUPP;
+               rc = server->ops->query_symlink(xid, tcon, cifs_sb, full_path,
+                                               &fattr.cf_symlink_target, false);
+               if (rc) {
+                       cifs_dbg(FYI, "%s: query_symlink: %d\n", __func__, rc);
+                       goto cgiiu_exit;
+               }
+       }
+
        if (*pinode == NULL) {
                /* get new inode */
                cifs_fill_uniqueid(sb, &fattr);
@@ -432,6 +466,7 @@ int cifs_get_inode_info_unix(struct inode **pinode,
        }
 
 cgiiu_exit:
+       kfree(fattr.cf_symlink_target);
        return rc;
 }
 #else
@@ -601,10 +636,10 @@ static int cifs_sfu_mode(struct cifs_fattr *fattr, const unsigned char *path,
 }
 
 /* Fill a cifs_fattr struct with info from POSIX info struct */
-static void
-smb311_posix_info_to_fattr(struct cifs_fattr *fattr, struct smb311_posix_qinfo *info,
-                          struct super_block *sb, bool adjust_tz, bool symlink)
+static void smb311_posix_info_to_fattr(struct cifs_fattr *fattr, struct cifs_open_info_data *data,
+                                      struct super_block *sb, bool adjust_tz, bool symlink)
 {
+       struct smb311_posix_qinfo *info = &data->posix_fi;
        struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
        struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
 
@@ -639,6 +674,8 @@ smb311_posix_info_to_fattr(struct cifs_fattr *fattr, struct smb311_posix_qinfo *
        if (symlink) {
                fattr->cf_mode |= S_IFLNK;
                fattr->cf_dtype = DT_LNK;
+               fattr->cf_symlink_target = data->symlink_target;
+               data->symlink_target = NULL;
        } else if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
                fattr->cf_mode |= S_IFDIR;
                fattr->cf_dtype = DT_DIR;
@@ -655,13 +692,11 @@ smb311_posix_info_to_fattr(struct cifs_fattr *fattr, struct smb311_posix_qinfo *
                fattr->cf_mode, fattr->cf_uniqueid, fattr->cf_nlink);
 }
 
-
-/* Fill a cifs_fattr struct with info from FILE_ALL_INFO */
-static void
-cifs_all_info_to_fattr(struct cifs_fattr *fattr, FILE_ALL_INFO *info,
-                      struct super_block *sb, bool adjust_tz,
-                      bool symlink, u32 reparse_tag)
+static void cifs_open_info_to_fattr(struct cifs_fattr *fattr, struct cifs_open_info_data *data,
+                                   struct super_block *sb, bool adjust_tz, bool symlink,
+                                   u32 reparse_tag)
 {
+       struct smb2_file_all_info *info = &data->fi;
        struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
        struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
 
@@ -703,7 +738,8 @@ cifs_all_info_to_fattr(struct cifs_fattr *fattr, FILE_ALL_INFO *info,
        } else if (reparse_tag == IO_REPARSE_TAG_LX_BLK) {
                fattr->cf_mode |= S_IFBLK | cifs_sb->ctx->file_mode;
                fattr->cf_dtype = DT_BLK;
-       } else if (symlink) { /* TODO add more reparse tag checks */
+       } else if (symlink || reparse_tag == IO_REPARSE_TAG_SYMLINK ||
+                  reparse_tag == IO_REPARSE_TAG_NFS) {
                fattr->cf_mode = S_IFLNK;
                fattr->cf_dtype = DT_LNK;
        } else if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
@@ -735,6 +771,11 @@ cifs_all_info_to_fattr(struct cifs_fattr *fattr, FILE_ALL_INFO *info,
                }
        }
 
+       if (S_ISLNK(fattr->cf_mode)) {
+               fattr->cf_symlink_target = data->symlink_target;
+               data->symlink_target = NULL;
+       }
+
        fattr->cf_uid = cifs_sb->ctx->linux_uid;
        fattr->cf_gid = cifs_sb->ctx->linux_gid;
 }
@@ -744,23 +785,28 @@ cifs_get_file_info(struct file *filp)
 {
        int rc;
        unsigned int xid;
-       FILE_ALL_INFO find_data;
+       struct cifs_open_info_data data = {};
        struct cifs_fattr fattr;
        struct inode *inode = file_inode(filp);
        struct cifsFileInfo *cfile = filp->private_data;
        struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
        struct TCP_Server_Info *server = tcon->ses->server;
+       bool symlink = false;
+       u32 tag = 0;
 
        if (!server->ops->query_file_info)
                return -ENOSYS;
 
        xid = get_xid();
-       rc = server->ops->query_file_info(xid, tcon, &cfile->fid, &find_data);
+       rc = server->ops->query_file_info(xid, tcon, cfile, &data);
        switch (rc) {
        case 0:
                /* TODO: add support to query reparse tag */
-               cifs_all_info_to_fattr(&fattr, &find_data, inode->i_sb, false,
-                                      false, 0 /* no reparse tag */);
+               if (data.symlink_target) {
+                       symlink = true;
+                       tag = IO_REPARSE_TAG_SYMLINK;
+               }
+               cifs_open_info_to_fattr(&fattr, &data, inode->i_sb, false, symlink, tag);
                break;
        case -EREMOTE:
                cifs_create_dfs_fattr(&fattr, inode->i_sb);
@@ -789,6 +835,7 @@ cifs_get_file_info(struct file *filp)
        /* if filetype is different, return error */
        rc = cifs_fattr_to_inode(inode, &fattr);
 cgfi_exit:
+       cifs_free_open_info(&data);
        free_xid(xid);
        return rc;
 }
@@ -860,14 +907,9 @@ cifs_backup_query_path_info(int xid,
 }
 #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
 
-static void
-cifs_set_fattr_ino(int xid,
-                  struct cifs_tcon *tcon,
-                  struct super_block *sb,
-                  struct inode **inode,
-                  const char *full_path,
-                  FILE_ALL_INFO *data,
-                  struct cifs_fattr *fattr)
+static void cifs_set_fattr_ino(int xid, struct cifs_tcon *tcon, struct super_block *sb,
+                              struct inode **inode, const char *full_path,
+                              struct cifs_open_info_data *data, struct cifs_fattr *fattr)
 {
        struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
        struct TCP_Server_Info *server = tcon->ses->server;
@@ -885,11 +927,8 @@ cifs_set_fattr_ino(int xid,
         * If we have an inode pass a NULL tcon to ensure we don't
         * make a round trip to the server. This only works for SMB2+.
         */
-       rc = server->ops->get_srv_inum(xid,
-                                      *inode ? NULL : tcon,
-                                      cifs_sb, full_path,
-                                      &fattr->cf_uniqueid,
-                                      data);
+       rc = server->ops->get_srv_inum(xid, *inode ? NULL : tcon, cifs_sb, full_path,
+                                      &fattr->cf_uniqueid, data);
        if (rc) {
                /*
                 * If that fails reuse existing ino or generate one
@@ -923,14 +962,10 @@ static inline bool is_inode_cache_good(struct inode *ino)
        return ino && CIFS_CACHE_READ(CIFS_I(ino)) && CIFS_I(ino)->time != 0;
 }
 
-int
-cifs_get_inode_info(struct inode **inode,
-                   const char *full_path,
-                   FILE_ALL_INFO *in_data,
-                   struct super_block *sb, int xid,
-                   const struct cifs_fid *fid)
+int cifs_get_inode_info(struct inode **inode, const char *full_path,
+                       struct cifs_open_info_data *data, struct super_block *sb, int xid,
+                       const struct cifs_fid *fid)
 {
-
        struct cifs_tcon *tcon;
        struct TCP_Server_Info *server;
        struct tcon_link *tlink;
@@ -938,8 +973,7 @@ cifs_get_inode_info(struct inode **inode,
        bool adjust_tz = false;
        struct cifs_fattr fattr = {0};
        bool is_reparse_point = false;
-       FILE_ALL_INFO *data = in_data;
-       FILE_ALL_INFO *tmp_data = NULL;
+       struct cifs_open_info_data tmp_data = {};
        void *smb1_backup_rsp_buf = NULL;
        int rc = 0;
        int tmprc = 0;
@@ -960,21 +994,15 @@ cifs_get_inode_info(struct inode **inode,
                        cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
                        goto out;
                }
-               tmp_data = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);
-               if (!tmp_data) {
-                       rc = -ENOMEM;
-                       goto out;
-               }
-               rc = server->ops->query_path_info(xid, tcon, cifs_sb,
-                                                full_path, tmp_data,
-                                                &adjust_tz, &is_reparse_point);
+               rc = server->ops->query_path_info(xid, tcon, cifs_sb, full_path, &tmp_data,
+                                                 &adjust_tz, &is_reparse_point);
 #ifdef CONFIG_CIFS_DFS_UPCALL
                if (rc == -ENOENT && is_tcon_dfs(tcon))
                        rc = cifs_dfs_query_info_nonascii_quirk(xid, tcon,
                                                                cifs_sb,
                                                                full_path);
 #endif
-               data = tmp_data;
+               data = &tmp_data;
        }
 
        /*
@@ -988,14 +1016,24 @@ cifs_get_inode_info(struct inode **inode,
                 * since we have to check if its reparse tag matches a known
                 * special file type e.g. symlink or fifo or char etc.
                 */
-               if ((le32_to_cpu(data->Attributes) & ATTR_REPARSE) &&
-                   server->ops->query_reparse_tag) {
-                       rc = server->ops->query_reparse_tag(xid, tcon, cifs_sb,
-                                               full_path, &reparse_tag);
-                       cifs_dbg(FYI, "reparse tag 0x%x\n", reparse_tag);
+               if (is_reparse_point && data->symlink_target) {
+                       reparse_tag = IO_REPARSE_TAG_SYMLINK;
+               } else if ((le32_to_cpu(data->fi.Attributes) & ATTR_REPARSE) &&
+                          server->ops->query_reparse_tag) {
+                       tmprc = server->ops->query_reparse_tag(xid, tcon, cifs_sb, full_path,
+                                                           &reparse_tag);
+                       if (tmprc)
+                               cifs_dbg(FYI, "%s: query_reparse_tag: rc = %d\n", __func__, tmprc);
+                       if (server->ops->query_symlink) {
+                               tmprc = server->ops->query_symlink(xid, tcon, cifs_sb, full_path,
+                                                                  &data->symlink_target,
+                                                                  is_reparse_point);
+                               if (tmprc)
+                                       cifs_dbg(FYI, "%s: query_symlink: rc = %d\n", __func__,
+                                                tmprc);
+                       }
                }
-               cifs_all_info_to_fattr(&fattr, data, sb, adjust_tz,
-                                      is_reparse_point, reparse_tag);
+               cifs_open_info_to_fattr(&fattr, data, sb, adjust_tz, is_reparse_point, reparse_tag);
                break;
        case -EREMOTE:
                /* DFS link, no metadata available on this server */
@@ -1014,18 +1052,20 @@ cifs_get_inode_info(struct inode **inode,
                 */
                if (backup_cred(cifs_sb) && is_smb1_server(server)) {
                        /* for easier reading */
+                       FILE_ALL_INFO *fi;
                        FILE_DIRECTORY_INFO *fdi;
                        SEARCH_ID_FULL_DIR_INFO *si;
 
                        rc = cifs_backup_query_path_info(xid, tcon, sb,
                                                         full_path,
                                                         &smb1_backup_rsp_buf,
-                                                        &data);
+                                                        &fi);
                        if (rc)
                                goto out;
 
-                       fdi = (FILE_DIRECTORY_INFO *)data;
-                       si = (SEARCH_ID_FULL_DIR_INFO *)data;
+                       move_cifs_info_to_smb2(&data->fi, fi);
+                       fdi = (FILE_DIRECTORY_INFO *)fi;
+                       si = (SEARCH_ID_FULL_DIR_INFO *)fi;
 
                        cifs_dir_info_to_fattr(&fattr, fdi, cifs_sb);
                        fattr.cf_uniqueid = le64_to_cpu(si->UniqueId);
@@ -1123,7 +1163,8 @@ handle_mnt_opt:
 out:
        cifs_buf_release(smb1_backup_rsp_buf);
        cifs_put_tlink(tlink);
-       kfree(tmp_data);
+       cifs_free_open_info(&tmp_data);
+       kfree(fattr.cf_symlink_target);
        return rc;
 }
 
@@ -1138,7 +1179,7 @@ smb311_posix_get_inode_info(struct inode **inode,
        bool adjust_tz = false;
        struct cifs_fattr fattr = {0};
        bool symlink = false;
-       struct smb311_posix_qinfo *data = NULL;
+       struct cifs_open_info_data data = {};
        int rc = 0;
        int tmprc = 0;
 
@@ -1155,15 +1196,9 @@ smb311_posix_get_inode_info(struct inode **inode,
                cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
                goto out;
        }
-       data = kmalloc(sizeof(struct smb311_posix_qinfo), GFP_KERNEL);
-       if (!data) {
-               rc = -ENOMEM;
-               goto out;
-       }
 
-       rc = smb311_posix_query_path_info(xid, tcon, cifs_sb,
-                                                 full_path, data,
-                                                 &adjust_tz, &symlink);
+       rc = smb311_posix_query_path_info(xid, tcon, cifs_sb, full_path, &data, &adjust_tz,
+                                         &symlink);
 
        /*
         * 2. Convert it to internal cifs metadata (fattr)
@@ -1171,7 +1206,7 @@ smb311_posix_get_inode_info(struct inode **inode,
 
        switch (rc) {
        case 0:
-               smb311_posix_info_to_fattr(&fattr, data, sb, adjust_tz, symlink);
+               smb311_posix_info_to_fattr(&fattr, &data, sb, adjust_tz, symlink);
                break;
        case -EREMOTE:
                /* DFS link, no metadata available on this server */
@@ -1228,7 +1263,8 @@ smb311_posix_get_inode_info(struct inode **inode,
        }
 out:
        cifs_put_tlink(tlink);
-       kfree(data);
+       cifs_free_open_info(&data);
+       kfree(fattr.cf_symlink_target);
        return rc;
 }
 
@@ -2265,13 +2301,13 @@ cifs_dentry_needs_reval(struct dentry *dentry)
                return true;
 
        if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) {
-               mutex_lock(&cfid->fid_mutex);
+               spin_lock(&cfid->fid_lock);
                if (cfid->time && cifs_i->time > cfid->time) {
-                       mutex_unlock(&cfid->fid_mutex);
+                       spin_unlock(&cfid->fid_lock);
                        close_cached_dir(cfid);
                        return false;
                }
-               mutex_unlock(&cfid->fid_mutex);
+               spin_unlock(&cfid->fid_lock);
                close_cached_dir(cfid);
        }
        /*
index b6e6e5d..89d5fa8 100644
@@ -484,12 +484,35 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
                        tcon = tlink_tcon(tlink);
                        if (tcon && tcon->ses->server->ops->notify) {
                                rc = tcon->ses->server->ops->notify(xid,
-                                               filep, (void __user *)arg);
+                                               filep, (void __user *)arg,
+                                               false /* no ret data */);
                                cifs_dbg(FYI, "ioctl notify rc %d\n", rc);
                        } else
                                rc = -EOPNOTSUPP;
                        cifs_put_tlink(tlink);
                        break;
+               case CIFS_IOC_NOTIFY_INFO:
+                       if (!S_ISDIR(inode->i_mode)) {
+                               /* Notify can only be done on directories */
+                               rc = -EOPNOTSUPP;
+                               break;
+                       }
+                       cifs_sb = CIFS_SB(inode->i_sb);
+                       tlink = cifs_sb_tlink(cifs_sb);
+                       if (IS_ERR(tlink)) {
+                               rc = PTR_ERR(tlink);
+                               break;
+                       }
+                       tcon = tlink_tcon(tlink);
+                       if (tcon && tcon->ses->server->ops->notify) {
+                               rc = tcon->ses->server->ops->notify(xid,
+                                               filep, (void __user *)arg,
+                                               true /* return details */);
+                               cifs_dbg(FYI, "ioctl notify info rc %d\n", rc);
+                       } else
+                               rc = -EOPNOTSUPP;
+                       cifs_put_tlink(tlink);
+                       break;
                case CIFS_IOC_SHUTDOWN:
                        rc = cifs_shutdown(inode->i_sb, arg);
                        break;
index cd29c29..bd374fe 100644
@@ -201,40 +201,6 @@ out:
        return rc;
 }
 
-static int
-query_mf_symlink(const unsigned int xid, struct cifs_tcon *tcon,
-                struct cifs_sb_info *cifs_sb, const unsigned char *path,
-                char **symlinkinfo)
-{
-       int rc;
-       u8 *buf = NULL;
-       unsigned int link_len = 0;
-       unsigned int bytes_read = 0;
-
-       buf = kmalloc(CIFS_MF_SYMLINK_FILE_SIZE, GFP_KERNEL);
-       if (!buf)
-               return -ENOMEM;
-
-       if (tcon->ses->server->ops->query_mf_symlink)
-               rc = tcon->ses->server->ops->query_mf_symlink(xid, tcon,
-                                             cifs_sb, path, buf, &bytes_read);
-       else
-               rc = -ENOSYS;
-
-       if (rc)
-               goto out;
-
-       if (bytes_read == 0) { /* not a symlink */
-               rc = -EINVAL;
-               goto out;
-       }
-
-       rc = parse_mf_symlink(buf, bytes_read, &link_len, symlinkinfo);
-out:
-       kfree(buf);
-       return rc;
-}
-
 int
 check_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
                 struct cifs_sb_info *cifs_sb, struct cifs_fattr *fattr,
@@ -244,6 +210,7 @@ check_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
        u8 *buf = NULL;
        unsigned int link_len = 0;
        unsigned int bytes_read = 0;
+       char *symlink = NULL;
 
        if (!couldbe_mf_symlink(fattr))
                /* it's not a symlink */
@@ -265,7 +232,7 @@ check_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
        if (bytes_read == 0) /* not a symlink */
                goto out;
 
-       rc = parse_mf_symlink(buf, bytes_read, &link_len, NULL);
+       rc = parse_mf_symlink(buf, bytes_read, &link_len, &symlink);
        if (rc == -EINVAL) {
                /* it's not a symlink */
                rc = 0;
@@ -280,6 +247,7 @@ check_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
        fattr->cf_mode &= ~S_IFMT;
        fattr->cf_mode |= S_IFLNK | S_IRWXU | S_IRWXG | S_IRWXO;
        fattr->cf_dtype = DT_LNK;
+       fattr->cf_symlink_target = symlink;
 out:
        kfree(buf);
        return rc;
@@ -599,75 +567,6 @@ cifs_hl_exit:
        return rc;
 }
 
-const char *
-cifs_get_link(struct dentry *direntry, struct inode *inode,
-             struct delayed_call *done)
-{
-       int rc = -ENOMEM;
-       unsigned int xid;
-       const char *full_path;
-       void *page;
-       char *target_path = NULL;
-       struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
-       struct tcon_link *tlink = NULL;
-       struct cifs_tcon *tcon;
-       struct TCP_Server_Info *server;
-
-       if (!direntry)
-               return ERR_PTR(-ECHILD);
-
-       xid = get_xid();
-
-       tlink = cifs_sb_tlink(cifs_sb);
-       if (IS_ERR(tlink)) {
-               free_xid(xid);
-               return ERR_CAST(tlink);
-       }
-       tcon = tlink_tcon(tlink);
-       server = tcon->ses->server;
-
-       page = alloc_dentry_path();
-       full_path = build_path_from_dentry(direntry, page);
-       if (IS_ERR(full_path)) {
-               free_xid(xid);
-               cifs_put_tlink(tlink);
-               free_dentry_path(page);
-               return ERR_CAST(full_path);
-       }
-
-       cifs_dbg(FYI, "Full path: %s inode = 0x%p\n", full_path, inode);
-
-       rc = -EACCES;
-       /*
-        * First try Minshall+French Symlinks, if configured
-        * and fallback to UNIX Extensions Symlinks.
-        */
-       if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS)
-               rc = query_mf_symlink(xid, tcon, cifs_sb, full_path,
-                                     &target_path);
-
-       if (rc != 0 && server->ops->query_symlink) {
-               struct cifsInodeInfo *cifsi = CIFS_I(inode);
-               bool reparse_point = false;
-
-               if (cifsi->cifsAttrs & ATTR_REPARSE)
-                       reparse_point = true;
-
-               rc = server->ops->query_symlink(xid, tcon, cifs_sb, full_path,
-                                               &target_path, reparse_point);
-       }
-
-       free_dentry_path(page);
-       free_xid(xid);
-       cifs_put_tlink(tlink);
-       if (rc != 0) {
-               kfree(target_path);
-               return ERR_PTR(rc);
-       }
-       set_delayed_call(done, kfree_link, target_path);
-       return target_path;
-}
-
 int
 cifs_symlink(struct user_namespace *mnt_userns, struct inode *inode,
             struct dentry *direntry, const char *symname)
index 8e060c0..2d75ba5 100644
@@ -844,17 +844,34 @@ static bool emit_cached_dirents(struct cached_dirents *cde,
                                struct dir_context *ctx)
 {
        struct cached_dirent *dirent;
-       int rc;
+       bool rc;
 
        list_for_each_entry(dirent, &cde->entries, entry) {
-               if (ctx->pos >= dirent->pos)
+               /*
+                * Skip all early entries prior to the current lseek()
+                * position.
+                */
+               if (ctx->pos > dirent->pos)
                        continue;
+               /*
+                * We recorded the current ->pos value for the dirent
+                * when we stored it in the cache.
+                * However, this sequence of ->pos values may have holes
+                * in it, for example dot-dirs returned from the server
+                * are suppressed.
+                * Handle this by forcing ctx->pos to be the same as the
+                * ->pos of the current dirent we emit from the cache.
+                * This means that when we emit these entries from the cache
+                * we now emit them with the same ->pos value as in the
+                * initial scan.
+                */
                ctx->pos = dirent->pos;
                rc = dir_emit(ctx, dirent->name, dirent->namelen,
                              dirent->fattr.cf_uniqueid,
                              dirent->fattr.cf_dtype);
                if (!rc)
                        return rc;
+               ctx->pos++;
        }
        return true;
 }
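The hunk above changes emit_cached_dirents() to tolerate holes in the recorded ->pos sequence. A minimal userspace model of that logic (hypothetical `emit_cached_model` helper and simplified `struct cached_dirent_model`, not the kernel API) behaves like this:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical userspace model of the emit_cached_dirents() change
 * above -- not the kernel API. Each cached entry keeps the ->pos
 * recorded during the initial scan; the sequence may have holes
 * (e.g. suppressed dot-dirs), so ctx_pos is snapped to the entry's
 * recorded pos before emitting and advanced by one afterwards.
 */
struct cached_dirent_model {
	long pos;
	const char *name;
};

static size_t emit_cached_model(const struct cached_dirent_model *cde,
				size_t n, long *ctx_pos, const char **out)
{
	size_t emitted = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		/* skip entries before the current lseek() position */
		if (*ctx_pos > cde[i].pos)
			continue;
		/* force ctx_pos onto the pos recorded in the cache */
		*ctx_pos = cde[i].pos;
		out[emitted++] = cde[i].name;
		(*ctx_pos)++;
	}
	return emitted;
}
```

With entries recorded at pos 0, 1 and 3 (a hole at 2), a reader positioned at 2 resumes correctly at the entry recorded at 3, which the old `>=` comparison without the trailing increment could not guarantee.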
@@ -994,6 +1011,8 @@ static int cifs_filldir(char *find_entry, struct file *file,
                cifs_unix_basic_to_fattr(&fattr,
                                         &((FILE_UNIX_INFO *)find_entry)->basic,
                                         cifs_sb);
+               if (S_ISLNK(fattr.cf_mode))
+                       fattr.cf_flags |= CIFS_FATTR_NEED_REVAL;
                break;
        case SMB_FIND_FILE_INFO_STANDARD:
                cifs_std_info_to_fattr(&fattr,
@@ -1202,10 +1221,10 @@ int cifs_readdir(struct file *file, struct dir_context *ctx)
                                 ctx->pos, tmp_buf);
                        cifs_save_resume_key(current_entry, cifsFile);
                        break;
-               } else
-                       current_entry =
-                               nxt_dir_entry(current_entry, end_of_smb,
-                                       cifsFile->srch_inf.info_level);
+               }
+               current_entry =
+                       nxt_dir_entry(current_entry, end_of_smb,
+                                     cifsFile->srch_inf.info_level);
        }
        kfree(tmp_buf);
 
index f1c3c6d..92e4278 100644
@@ -496,6 +496,7 @@ out:
                cifs_put_tcp_session(chan->server, 0);
        }
 
+       free_xid(xid);
        return rc;
 }
 
@@ -601,11 +602,6 @@ static void unicode_ssetup_strings(char **pbcc_area, struct cifs_ses *ses,
        /* BB FIXME add check that strings total less
        than 335 or will need to send them as arrays */
 
-       /* unicode strings, must be word aligned before the call */
-/*     if ((long) bcc_ptr % 2) {
-               *bcc_ptr = 0;
-               bcc_ptr++;
-       } */
        /* copy user */
        if (ses->user_name == NULL) {
                /* null user mount */
@@ -1213,16 +1209,18 @@ out_free_smb_buf:
 static void
 sess_free_buffer(struct sess_data *sess_data)
 {
-       int i;
+       struct kvec *iov = sess_data->iov;
 
-       /* zero the session data before freeing, as it might contain sensitive info (keys, etc) */
-       for (i = 0; i < 3; i++)
-               if (sess_data->iov[i].iov_base)
-                       memzero_explicit(sess_data->iov[i].iov_base, sess_data->iov[i].iov_len);
+       /*
+        * Zero the session data before freeing, as it might contain sensitive info (keys, etc).
+        * Note that iov[1] is already freed by caller.
+        */
+       if (sess_data->buf0_type != CIFS_NO_BUFFER && iov[0].iov_base)
+               memzero_explicit(iov[0].iov_base, iov[0].iov_len);
 
-       free_rsp_buf(sess_data->buf0_type, sess_data->iov[0].iov_base);
+       free_rsp_buf(sess_data->buf0_type, iov[0].iov_base);
        sess_data->buf0_type = CIFS_NO_BUFFER;
-       kfree(sess_data->iov[2].iov_base);
+       kfree_sensitive(iov[2].iov_base);
 }
 
 static int
@@ -1324,7 +1322,7 @@ sess_auth_ntlmv2(struct sess_data *sess_data)
        }
 
        if (ses->capabilities & CAP_UNICODE) {
-               if (sess_data->iov[0].iov_len % 2) {
+               if (!IS_ALIGNED(sess_data->iov[0].iov_len, 2)) {
                        *bcc_ptr = 0;
                        bcc_ptr++;
                }
@@ -1364,7 +1362,7 @@ sess_auth_ntlmv2(struct sess_data *sess_data)
                /* no string area to decode, do nothing */
        } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) {
                /* unicode string area must be word-aligned */
-               if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) {
+               if (!IS_ALIGNED((unsigned long)bcc_ptr - (unsigned long)smb_buf, 2)) {
                        ++bcc_ptr;
                        --bytes_remaining;
                }
@@ -1448,8 +1446,7 @@ sess_auth_kerberos(struct sess_data *sess_data)
 
        if (ses->capabilities & CAP_UNICODE) {
                /* unicode strings must be word aligned */
-               if ((sess_data->iov[0].iov_len
-                       + sess_data->iov[1].iov_len) % 2) {
+               if (!IS_ALIGNED(sess_data->iov[0].iov_len + sess_data->iov[1].iov_len, 2)) {
                        *bcc_ptr = 0;
                        bcc_ptr++;
                }
@@ -1500,7 +1497,7 @@ sess_auth_kerberos(struct sess_data *sess_data)
                /* no string area to decode, do nothing */
        } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) {
                /* unicode string area must be word-aligned */
-               if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) {
+               if (!IS_ALIGNED((unsigned long)bcc_ptr - (unsigned long)smb_buf, 2)) {
                        ++bcc_ptr;
                        --bytes_remaining;
                }
@@ -1552,7 +1549,7 @@ _sess_auth_rawntlmssp_assemble_req(struct sess_data *sess_data)
 
        bcc_ptr = sess_data->iov[2].iov_base;
        /* unicode strings must be word aligned */
-       if ((sess_data->iov[0].iov_len + sess_data->iov[1].iov_len) % 2) {
+       if (!IS_ALIGNED(sess_data->iov[0].iov_len + sess_data->iov[1].iov_len, 2)) {
                *bcc_ptr = 0;
                bcc_ptr++;
        }
@@ -1753,7 +1750,7 @@ sess_auth_rawntlmssp_authenticate(struct sess_data *sess_data)
                /* no string area to decode, do nothing */
        } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) {
                /* unicode string area must be word-aligned */
-               if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) {
+               if (!IS_ALIGNED((unsigned long)bcc_ptr - (unsigned long)smb_buf, 2)) {
                        ++bcc_ptr;
                        --bytes_remaining;
                }
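The repeated `% 2` → `IS_ALIGNED(..., 2)` conversions above all implement the same rule: the unicode string area in an SMB1 session-setup blob must start on a 2-byte boundary, so a single zero pad byte is inserted when the running length is odd. A standalone sketch (`pad_to_word` is a hypothetical helper; `IS_ALIGNED` is redefined here for userspace, in the kernel it comes from the kernel headers):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the word-alignment rule behind the IS_ALIGNED()
 * conversions above. IS_ALIGNED(x, a) expands to a mask test,
 * which is equivalent to (x % a == 0) for power-of-two a but
 * reads as an alignment check rather than arithmetic.
 */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* writes at most one zero pad byte; returns how many were written */
static size_t pad_to_word(unsigned char *bcc_ptr, size_t used)
{
	if (!IS_ALIGNED(used, 2)) {
		*bcc_ptr = 0;
		return 1;
	}
	return 0;
}
```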
index f36b2d2..5048075 100644
@@ -542,31 +542,32 @@ cifs_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
        return rc;
 }
 
-static int
-cifs_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
-                    struct cifs_sb_info *cifs_sb, const char *full_path,
-                    FILE_ALL_INFO *data, bool *adjustTZ, bool *symlink)
+static int cifs_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+                               struct cifs_sb_info *cifs_sb, const char *full_path,
+                               struct cifs_open_info_data *data, bool *adjustTZ, bool *symlink)
 {
        int rc;
+       FILE_ALL_INFO fi = {};
 
        *symlink = false;
 
        /* could do find first instead but this returns more info */
-       rc = CIFSSMBQPathInfo(xid, tcon, full_path, data, 0 /* not legacy */,
-                             cifs_sb->local_nls, cifs_remap(cifs_sb));
+       rc = CIFSSMBQPathInfo(xid, tcon, full_path, &fi, 0 /* not legacy */, cifs_sb->local_nls,
+                             cifs_remap(cifs_sb));
        /*
         * BB optimize code so we do not make the above call when server claims
         * no NT SMB support and the above call failed at least once - set flag
         * in tcon or mount.
         */
        if ((rc == -EOPNOTSUPP) || (rc == -EINVAL)) {
-               rc = SMBQueryInformation(xid, tcon, full_path, data,
-                                        cifs_sb->local_nls,
+               rc = SMBQueryInformation(xid, tcon, full_path, &fi, cifs_sb->local_nls,
                                         cifs_remap(cifs_sb));
+               if (!rc)
+                       move_cifs_info_to_smb2(&data->fi, &fi);
                *adjustTZ = true;
        }
 
-       if (!rc && (le32_to_cpu(data->Attributes) & ATTR_REPARSE)) {
+       if (!rc && (le32_to_cpu(fi.Attributes) & ATTR_REPARSE)) {
                int tmprc;
                int oplock = 0;
                struct cifs_fid fid;
@@ -592,10 +593,9 @@ cifs_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
        return rc;
 }
 
-static int
-cifs_get_srv_inum(const unsigned int xid, struct cifs_tcon *tcon,
-                 struct cifs_sb_info *cifs_sb, const char *full_path,
-                 u64 *uniqueid, FILE_ALL_INFO *data)
+static int cifs_get_srv_inum(const unsigned int xid, struct cifs_tcon *tcon,
+                            struct cifs_sb_info *cifs_sb, const char *full_path,
+                            u64 *uniqueid, struct cifs_open_info_data *unused)
 {
        /*
         * We can not use the IndexNumber field by default from Windows or
@@ -613,11 +613,22 @@ cifs_get_srv_inum(const unsigned int xid, struct cifs_tcon *tcon,
                                     cifs_remap(cifs_sb));
 }
 
-static int
-cifs_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
-                    struct cifs_fid *fid, FILE_ALL_INFO *data)
+static int cifs_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
+                               struct cifsFileInfo *cfile, struct cifs_open_info_data *data)
 {
-       return CIFSSMBQFileInfo(xid, tcon, fid->netfid, data);
+       int rc;
+       FILE_ALL_INFO fi = {};
+
+       if (cfile->symlink_target) {
+               data->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+               if (!data->symlink_target)
+                       return -ENOMEM;
+       }
+
+       rc = CIFSSMBQFileInfo(xid, tcon, cfile->fid.netfid, &fi);
+       if (!rc)
+               move_cifs_info_to_smb2(&data->fi, &fi);
+       return rc;
 }
 
 static void
@@ -702,19 +713,20 @@ cifs_mkdir_setinfo(struct inode *inode, const char *full_path,
                cifsInode->cifsAttrs = dosattrs;
 }
 
-static int
-cifs_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
-              __u32 *oplock, FILE_ALL_INFO *buf)
+static int cifs_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+                         void *buf)
 {
+       FILE_ALL_INFO *fi = buf;
+
        if (!(oparms->tcon->ses->capabilities & CAP_NT_SMBS))
                return SMBLegacyOpen(xid, oparms->tcon, oparms->path,
                                     oparms->disposition,
                                     oparms->desired_access,
                                     oparms->create_options,
-                                    &oparms->fid->netfid, oplock, buf,
+                                    &oparms->fid->netfid, oplock, fi,
                                     oparms->cifs_sb->local_nls,
                                     cifs_remap(oparms->cifs_sb));
-       return CIFS_open(xid, oparms, oplock, buf);
+       return CIFS_open(xid, oparms, oplock, fi);
 }
 
 static void
index 9dfd2dd..ffbd9a9 100644
 #include "cifs_unicode.h"
 #include "fscache.h"
 #include "smb2proto.h"
+#include "smb2status.h"
 
-int
-smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
-              __u32 *oplock, FILE_ALL_INFO *buf)
+static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov)
+{
+       struct smb2_err_rsp *err = iov->iov_base;
+       struct smb2_symlink_err_rsp *sym = ERR_PTR(-EINVAL);
+       u32 len;
+
+       if (err->ErrorContextCount) {
+               struct smb2_error_context_rsp *p, *end;
+
+               len = (u32)err->ErrorContextCount * (offsetof(struct smb2_error_context_rsp,
+                                                             ErrorContextData) +
+                                                    sizeof(struct smb2_symlink_err_rsp));
+               if (le32_to_cpu(err->ByteCount) < len || iov->iov_len < len + sizeof(*err))
+                       return ERR_PTR(-EINVAL);
+
+               p = (struct smb2_error_context_rsp *)err->ErrorData;
+               end = (struct smb2_error_context_rsp *)((u8 *)err + iov->iov_len);
+               do {
+                       if (le32_to_cpu(p->ErrorId) == SMB2_ERROR_ID_DEFAULT) {
+                               sym = (struct smb2_symlink_err_rsp *)&p->ErrorContextData;
+                               break;
+                       }
+                       cifs_dbg(FYI, "%s: skipping unhandled error context: 0x%x\n",
+                                __func__, le32_to_cpu(p->ErrorId));
+
+                       len = ALIGN(le32_to_cpu(p->ErrorDataLength), 8);
+                       p = (struct smb2_error_context_rsp *)((u8 *)&p->ErrorContextData + len);
+               } while (p < end);
+       } else if (le32_to_cpu(err->ByteCount) >= sizeof(*sym) &&
+                  iov->iov_len >= SMB2_SYMLINK_STRUCT_SIZE) {
+               sym = (struct smb2_symlink_err_rsp *)err->ErrorData;
+       }
+
+       if (!IS_ERR(sym) && (le32_to_cpu(sym->SymLinkErrorTag) != SYMLINK_ERROR_TAG ||
+                            le32_to_cpu(sym->ReparseTag) != IO_REPARSE_TAG_SYMLINK))
+               sym = ERR_PTR(-EINVAL);
+
+       return sym;
+}
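The `symlink_data()` loop above walks a sequence of variable-length SMB2 error contexts, stepping over each record by its header plus the 8-byte-aligned `ErrorDataLength`. The same walk, as a minimal userspace sketch (the struct layout and names here are simplified illustrations, not the on-wire kernel definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ALIGN8(x) (((x) + 7u) & ~7u)

/* simplified stand-in for struct smb2_error_context_rsp */
struct err_ctx {
	uint32_t data_length; /* payload length, excluding this header */
	uint32_t error_id;
	uint8_t  data[];      /* flexible payload, padded to 8 bytes */
};

/* Return the first context whose id matches @want, or NULL. */
static const struct err_ctx *find_ctx(const uint8_t *buf, size_t len,
				      uint32_t want)
{
	const uint8_t *p = buf, *end = buf + len;

	while (p + sizeof(struct err_ctx) <= end) {
		const struct err_ctx *c = (const struct err_ctx *)p;

		if (c->error_id == want)
			return c;
		/* advance by header + aligned payload, as the kernel loop does */
		p += sizeof(struct err_ctx) + ALIGN8(c->data_length);
	}
	return NULL;
}
```

The alignment step matters: a context whose payload is, say, 3 bytes still occupies 8 payload bytes on the wire, so skipping by the raw length would land mid-record.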
+
+int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path)
+{
+       struct smb2_symlink_err_rsp *sym;
+       unsigned int sub_offs, sub_len;
+       unsigned int print_offs, print_len;
+       char *s;
+
+       if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path)
+               return -EINVAL;
+
+       sym = symlink_data(iov);
+       if (IS_ERR(sym))
+               return PTR_ERR(sym);
+
+       sub_len = le16_to_cpu(sym->SubstituteNameLength);
+       sub_offs = le16_to_cpu(sym->SubstituteNameOffset);
+       print_len = le16_to_cpu(sym->PrintNameLength);
+       print_offs = le16_to_cpu(sym->PrintNameOffset);
+
+       if (iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + sub_offs + sub_len ||
+           iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len)
+               return -EINVAL;
+
+       s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true,
+                                   cifs_sb->local_nls);
+       if (!s)
+               return -ENOMEM;
+       convert_delimiter(s, '/');
+       cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s);
+
+       *path = s;
+       return 0;
+}
+
+int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf)
 {
        int rc;
        __le16 *smb2_path;
-       struct smb2_file_all_info *smb2_data = NULL;
        __u8 smb2_oplock;
+       struct cifs_open_info_data *data = buf;
+       struct smb2_file_all_info file_info = {};
+       struct smb2_file_all_info *smb2_data = data ? &file_info : NULL;
+       struct kvec err_iov = {};
+       int err_buftype = CIFS_NO_BUFFER;
        struct cifs_fid *fid = oparms->fid;
        struct network_resiliency_req nr_ioctl_req;
 
        smb2_path = cifs_convert_path_to_utf16(oparms->path, oparms->cifs_sb);
-       if (smb2_path == NULL) {
-               rc = -ENOMEM;
-               goto out;
-       }
-
-       smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
-                           GFP_KERNEL);
-       if (smb2_data == NULL) {
-               rc = -ENOMEM;
-               goto out;
-       }
+       if (smb2_path == NULL)
+               return -ENOMEM;
 
        oparms->desired_access |= FILE_READ_ATTRIBUTES;
        smb2_oplock = SMB2_OPLOCK_LEVEL_BATCH;
 
-       rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL,
-                      NULL, NULL);
+       rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov,
+                      &err_buftype);
+       if (rc && data) {
+               struct smb2_hdr *hdr = err_iov.iov_base;
+
+               if (unlikely(!err_iov.iov_base || err_buftype == CIFS_NO_BUFFER))
+                       rc = -ENOMEM;
+               else if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
+                       rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
+                                                        &data->symlink_target);
+                       if (!rc) {
+                               memset(smb2_data, 0, sizeof(*smb2_data));
+                               oparms->create_options |= OPEN_REPARSE_POINT;
+                               rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data,
+                                              NULL, NULL, NULL);
+                               oparms->create_options &= ~OPEN_REPARSE_POINT;
+                       }
+               }
+       }
+
        if (rc)
                goto out;
 
-
        if (oparms->tcon->use_resilient) {
                /* default timeout is 0, servers pick default (120 seconds) */
                nr_ioctl_req.Timeout =
@@ -73,7 +158,7 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
                rc = 0;
        }
 
-       if (buf) {
+       if (smb2_data) {
                /* if open response does not have IndexNumber field - get it */
                if (smb2_data->IndexNumber == 0) {
                        rc = SMB2_get_srv_num(xid, oparms->tcon,
@@ -89,12 +174,12 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
                                rc = 0;
                        }
                }
-               move_smb2_info_to_cifs(buf, smb2_data);
+               memcpy(&data->fi, smb2_data, sizeof(data->fi));
        }
 
        *oplock = smb2_oplock;
 out:
-       kfree(smb2_data);
+       free_rsp_buf(err_buftype, err_iov.iov_base);
        kfree(smb2_path);
        return rc;
 }
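The reworked `smb2_open_file()` above implements a "retry with an extra flag on one specific error" pattern: when the server stops the create on a symlink, the symlink target is parsed out of the error response and the open is retried with `OPEN_REPARSE_POINT` set. A toy sketch of that control flow, with invented error codes and flag names standing in for the kernel's:

```c
#include <assert.h>

enum { E_OK = 0, E_SYMLINK = -2 };
#define FLAG_REPARSE 0x1

/* toy "server": refuses the open unless the reparse flag is set */
static int do_open(int flags)
{
	return (flags & FLAG_REPARSE) ? E_OK : E_SYMLINK;
}

static int open_with_symlink_retry(int flags, int *symlink_seen)
{
	int rc = do_open(flags);

	if (rc == E_SYMLINK) {
		*symlink_seen = 1;	/* kernel: parse target from err_iov */
		rc = do_open(flags | FLAG_REPARSE);
		/* kernel clears OPEN_REPARSE_POINT again after the retry */
	}
	return rc;
}
```

The key point mirrored from the diff is that the flag is only added for the retry and removed afterwards, so later opens through the same `oparms` are unaffected.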
index bb3e3d5..68e08c8 100644
@@ -24,6 +24,7 @@
 #include "smb2pdu.h"
 #include "smb2proto.h"
 #include "cached_dir.h"
+#include "smb2status.h"
 
 static void
 free_set_inf_compound(struct smb_rqst *rqst)
@@ -50,13 +51,15 @@ struct cop_vars {
 /*
  * note: If cfile is passed, the reference to it is dropped here.
  * So make sure that you do not reuse cfile after return from this func.
+ *
+ * If @err_iov and @err_buftype are passed, make sure both are large enough
+ * (>= 3 entries) to hold all error responses.  The caller is responsible for
+ * freeing them.
  */
-static int
-smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
-                struct cifs_sb_info *cifs_sb, const char *full_path,
-                __u32 desired_access, __u32 create_disposition,
-                __u32 create_options, umode_t mode, void *ptr, int command,
-                struct cifsFileInfo *cfile)
+static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+                           struct cifs_sb_info *cifs_sb, const char *full_path,
+                           __u32 desired_access, __u32 create_disposition, __u32 create_options,
+                           umode_t mode, void *ptr, int command, struct cifsFileInfo *cfile,
+                           struct kvec *err_iov, int *err_buftype)
 {
        struct cop_vars *vars = NULL;
        struct kvec *rsp_iov;
@@ -70,6 +73,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
        int num_rqst = 0;
        int resp_buftype[3];
        struct smb2_query_info_rsp *qi_rsp = NULL;
+       struct cifs_open_info_data *idata;
        int flags = 0;
        __u8 delete_pending[8] = {1, 0, 0, 0, 0, 0, 0, 0};
        unsigned int size[2];
@@ -385,14 +389,19 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
 
        switch (command) {
        case SMB2_OP_QUERY_INFO:
+               idata = ptr;
+               if (rc == 0 && cfile && cfile->symlink_target) {
+                       idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+                       if (!idata->symlink_target)
+                               rc = -ENOMEM;
+               }
                if (rc == 0) {
                        qi_rsp = (struct smb2_query_info_rsp *)
                                rsp_iov[1].iov_base;
                        rc = smb2_validate_and_copy_iov(
                                le16_to_cpu(qi_rsp->OutputBufferOffset),
                                le32_to_cpu(qi_rsp->OutputBufferLength),
-                               &rsp_iov[1], sizeof(struct smb2_file_all_info),
-                               ptr);
+                               &rsp_iov[1], sizeof(idata->fi), (char *)&idata->fi);
                }
                if (rqst[1].rq_iov)
                        SMB2_query_info_free(&rqst[1]);
@@ -406,13 +415,20 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
                                                tcon->tid);
                break;
        case SMB2_OP_POSIX_QUERY_INFO:
+               idata = ptr;
+               if (rc == 0 && cfile && cfile->symlink_target) {
+                       idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+                       if (!idata->symlink_target)
+                               rc = -ENOMEM;
+               }
                if (rc == 0) {
                        qi_rsp = (struct smb2_query_info_rsp *)
                                rsp_iov[1].iov_base;
                        rc = smb2_validate_and_copy_iov(
                                le16_to_cpu(qi_rsp->OutputBufferOffset),
                                le32_to_cpu(qi_rsp->OutputBufferLength),
-                               &rsp_iov[1], sizeof(struct smb311_posix_qinfo) /* add SIDs */, ptr);
+                               &rsp_iov[1], sizeof(idata->posix_fi) /* add SIDs */,
+                               (char *)&idata->posix_fi);
                }
                if (rqst[1].rq_iov)
                        SMB2_query_info_free(&rqst[1]);
@@ -477,42 +493,33 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
                free_set_inf_compound(rqst);
                break;
        }
-       free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
-       free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
-       free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+
+       if (rc && err_iov && err_buftype) {
+               memcpy(err_iov, rsp_iov, 3 * sizeof(*err_iov));
+               memcpy(err_buftype, resp_buftype, 3 * sizeof(*err_buftype));
+       } else {
+               free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+               free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+               free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+       }
        kfree(vars);
        return rc;
 }
 
-void
-move_smb2_info_to_cifs(FILE_ALL_INFO *dst, struct smb2_file_all_info *src)
-{
-       memcpy(dst, src, (size_t)(&src->CurrentByteOffset) - (size_t)src);
-       dst->CurrentByteOffset = src->CurrentByteOffset;
-       dst->Mode = src->Mode;
-       dst->AlignmentRequirement = src->AlignmentRequirement;
-       dst->IndexNumber1 = 0; /* we don't use it */
-}
-
-int
-smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
-                    struct cifs_sb_info *cifs_sb, const char *full_path,
-                    FILE_ALL_INFO *data, bool *adjust_tz, bool *reparse)
+int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+                        struct cifs_sb_info *cifs_sb, const char *full_path,
+                        struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse)
 {
        int rc;
-       struct smb2_file_all_info *smb2_data;
        __u32 create_options = 0;
        struct cifsFileInfo *cfile;
        struct cached_fid *cfid = NULL;
+       struct kvec err_iov[3] = {};
+       int err_buftype[3] = {};
 
        *adjust_tz = false;
        *reparse = false;
 
-       smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
-                           GFP_KERNEL);
-       if (smb2_data == NULL)
-               return -ENOMEM;
-
        if (strcmp(full_path, ""))
                rc = -ENOENT;
        else
@@ -520,63 +527,58 @@ smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
        /* If it is a root and its handle is cached then use it */
        if (!rc) {
                if (cfid->file_all_info_is_valid) {
-                       move_smb2_info_to_cifs(data,
-                                              &cfid->file_all_info);
+                       memcpy(&data->fi, &cfid->file_all_info, sizeof(data->fi));
                } else {
-                       rc = SMB2_query_info(xid, tcon,
-                                            cfid->fid.persistent_fid,
-                                            cfid->fid.volatile_fid, smb2_data);
-                       if (!rc)
-                               move_smb2_info_to_cifs(data, smb2_data);
+                       rc = SMB2_query_info(xid, tcon, cfid->fid.persistent_fid,
+                                            cfid->fid.volatile_fid, &data->fi);
                }
                close_cached_dir(cfid);
-               goto out;
+               return rc;
        }
 
        cifs_get_readable_path(tcon, full_path, &cfile);
-       rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
-                             FILE_READ_ATTRIBUTES, FILE_OPEN, create_options,
-                             ACL_NO_MODE, smb2_data, SMB2_OP_QUERY_INFO, cfile);
+       rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES, FILE_OPEN,
+                             create_options, ACL_NO_MODE, data, SMB2_OP_QUERY_INFO, cfile,
+                             err_iov, err_buftype);
        if (rc == -EOPNOTSUPP) {
+               if (err_iov[0].iov_base && err_buftype[0] != CIFS_NO_BUFFER &&
+                   ((struct smb2_hdr *)err_iov[0].iov_base)->Command == SMB2_CREATE &&
+                   ((struct smb2_hdr *)err_iov[0].iov_base)->Status == STATUS_STOPPED_ON_SYMLINK) {
+                       rc = smb2_parse_symlink_response(cifs_sb, err_iov, &data->symlink_target);
+                       if (rc)
+                               goto out;
+               }
                *reparse = true;
                create_options |= OPEN_REPARSE_POINT;
 
                /* Failed on a symbolic link - query a reparse point info */
                cifs_get_readable_path(tcon, full_path, &cfile);
-               rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
-                                     FILE_READ_ATTRIBUTES, FILE_OPEN,
-                                     create_options, ACL_NO_MODE,
-                                     smb2_data, SMB2_OP_QUERY_INFO, cfile);
+               rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES,
+                                     FILE_OPEN, create_options, ACL_NO_MODE, data,
+                                     SMB2_OP_QUERY_INFO, cfile, NULL, NULL);
        }
-       if (rc)
-               goto out;
 
-       move_smb2_info_to_cifs(data, smb2_data);
 out:
-       kfree(smb2_data);
+       free_rsp_buf(err_buftype[0], err_iov[0].iov_base);
+       free_rsp_buf(err_buftype[1], err_iov[1].iov_base);
+       free_rsp_buf(err_buftype[2], err_iov[2].iov_base);
        return rc;
 }
 
 
-int
-smb311_posix_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
-                    struct cifs_sb_info *cifs_sb, const char *full_path,
-                    struct smb311_posix_qinfo *data, bool *adjust_tz, bool *reparse)
+int smb311_posix_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+                                struct cifs_sb_info *cifs_sb, const char *full_path,
+                                struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse)
 {
        int rc;
        __u32 create_options = 0;
        struct cifsFileInfo *cfile;
-       struct smb311_posix_qinfo *smb2_data;
+       struct kvec err_iov[3] = {};
+       int err_buftype[3] = {};
 
        *adjust_tz = false;
        *reparse = false;
 
-       /* BB TODO: Make struct larger when add support for parsing owner SIDs */
-       smb2_data = kzalloc(sizeof(struct smb311_posix_qinfo),
-                           GFP_KERNEL);
-       if (smb2_data == NULL)
-               return -ENOMEM;
-
        /*
         * BB TODO: Add support for using the cached root handle.
         * Create SMB2_query_posix_info worker function to do non-compounded query
@@ -585,29 +587,32 @@ smb311_posix_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
         */
 
        cifs_get_readable_path(tcon, full_path, &cfile);
-       rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
-                             FILE_READ_ATTRIBUTES, FILE_OPEN, create_options,
-                             ACL_NO_MODE, smb2_data, SMB2_OP_POSIX_QUERY_INFO, cfile);
+       rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES, FILE_OPEN,
+                             create_options, ACL_NO_MODE, data, SMB2_OP_POSIX_QUERY_INFO, cfile,
+                             err_iov, err_buftype);
        if (rc == -EOPNOTSUPP) {
                /* BB TODO: When support for special files added to Samba re-verify this path */
+               if (err_iov[0].iov_base && err_buftype[0] != CIFS_NO_BUFFER &&
+                   ((struct smb2_hdr *)err_iov[0].iov_base)->Command == SMB2_CREATE &&
+                   ((struct smb2_hdr *)err_iov[0].iov_base)->Status == STATUS_STOPPED_ON_SYMLINK) {
+                       rc = smb2_parse_symlink_response(cifs_sb, err_iov, &data->symlink_target);
+                       if (rc)
+                               goto out;
+               }
                *reparse = true;
                create_options |= OPEN_REPARSE_POINT;
 
                /* Failed on a symbolic link - query a reparse point info */
                cifs_get_readable_path(tcon, full_path, &cfile);
-               rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
-                                     FILE_READ_ATTRIBUTES, FILE_OPEN,
-                                     create_options, ACL_NO_MODE,
-                                     smb2_data, SMB2_OP_POSIX_QUERY_INFO, cfile);
+               rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES,
+                                     FILE_OPEN, create_options, ACL_NO_MODE, data,
+                                     SMB2_OP_POSIX_QUERY_INFO, cfile, NULL, NULL);
        }
-       if (rc)
-               goto out;
-
-        /* TODO: will need to allow for the 2 SIDs when add support for getting owner UID/GID */
-       memcpy(data, smb2_data, sizeof(struct smb311_posix_qinfo));
 
 out:
-       kfree(smb2_data);
+       free_rsp_buf(err_buftype[0], err_iov[0].iov_base);
+       free_rsp_buf(err_buftype[1], err_iov[1].iov_base);
+       free_rsp_buf(err_buftype[2], err_iov[2].iov_base);
        return rc;
 }
 
@@ -619,7 +624,7 @@ smb2_mkdir(const unsigned int xid, struct inode *parent_inode, umode_t mode,
        return smb2_compound_op(xid, tcon, cifs_sb, name,
                                FILE_WRITE_ATTRIBUTES, FILE_CREATE,
                                CREATE_NOT_FILE, mode, NULL, SMB2_OP_MKDIR,
-                               NULL);
+                               NULL, NULL, NULL);
 }
 
 void
@@ -641,7 +646,7 @@ smb2_mkdir_setinfo(struct inode *inode, const char *name,
        tmprc = smb2_compound_op(xid, tcon, cifs_sb, name,
                                 FILE_WRITE_ATTRIBUTES, FILE_CREATE,
                                 CREATE_NOT_FILE, ACL_NO_MODE,
-                                &data, SMB2_OP_SET_INFO, cfile);
+                                &data, SMB2_OP_SET_INFO, cfile, NULL, NULL);
        if (tmprc == 0)
                cifs_i->cifsAttrs = dosattrs;
 }
@@ -650,9 +655,10 @@ int
 smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
           struct cifs_sb_info *cifs_sb)
 {
+       drop_cached_dir_by_name(xid, tcon, name, cifs_sb);
        return smb2_compound_op(xid, tcon, cifs_sb, name, DELETE, FILE_OPEN,
                                CREATE_NOT_FILE, ACL_NO_MODE,
-                               NULL, SMB2_OP_RMDIR, NULL);
+                               NULL, SMB2_OP_RMDIR, NULL, NULL, NULL);
 }
 
 int
@@ -661,7 +667,7 @@ smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
 {
        return smb2_compound_op(xid, tcon, cifs_sb, name, DELETE, FILE_OPEN,
                                CREATE_DELETE_ON_CLOSE | OPEN_REPARSE_POINT,
-                               ACL_NO_MODE, NULL, SMB2_OP_DELETE, NULL);
+                               ACL_NO_MODE, NULL, SMB2_OP_DELETE, NULL, NULL, NULL);
 }
 
 static int
@@ -680,7 +686,7 @@ smb2_set_path_attr(const unsigned int xid, struct cifs_tcon *tcon,
        }
        rc = smb2_compound_op(xid, tcon, cifs_sb, from_name, access,
                              FILE_OPEN, 0, ACL_NO_MODE, smb2_to_name,
-                             command, cfile);
+                             command, cfile, NULL, NULL);
 smb2_rename_path:
        kfree(smb2_to_name);
        return rc;
@@ -693,6 +699,7 @@ smb2_rename_path(const unsigned int xid, struct cifs_tcon *tcon,
 {
        struct cifsFileInfo *cfile;
 
+       drop_cached_dir_by_name(xid, tcon, from_name, cifs_sb);
        cifs_get_writable_path(tcon, from_name, FIND_WR_WITH_DELETE, &cfile);
 
        return smb2_set_path_attr(xid, tcon, from_name, to_name,
@@ -720,7 +727,7 @@ smb2_set_path_size(const unsigned int xid, struct cifs_tcon *tcon,
        cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
        return smb2_compound_op(xid, tcon, cifs_sb, full_path,
                                FILE_WRITE_DATA, FILE_OPEN, 0, ACL_NO_MODE,
-                               &eof, SMB2_OP_SET_EOF, cfile);
+                               &eof, SMB2_OP_SET_EOF, cfile, NULL, NULL);
 }
 
 int
@@ -746,7 +753,8 @@ smb2_set_file_info(struct inode *inode, const char *full_path,
        cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile);
        rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
                              FILE_WRITE_ATTRIBUTES, FILE_OPEN,
-                             0, ACL_NO_MODE, buf, SMB2_OP_SET_INFO, cfile);
+                             0, ACL_NO_MODE, buf, SMB2_OP_SET_INFO, cfile,
+                             NULL, NULL);
        cifs_put_tlink(tlink);
        return rc;
 }
index 7db5c09..a387204 100644
@@ -248,7 +248,7 @@ smb2_check_message(char *buf, unsigned int len, struct TCP_Server_Info *server)
                 * Some windows servers (win2016) will pad also the final
                 * PDU in a compound to 8 bytes.
                 */
-               if (((calc_len + 7) & ~7) == len)
+               if (ALIGN(calc_len, 8) == len)
                        return 0;
 
                /*
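The hunk above replaces the open-coded `((calc_len + 7) & ~7)` with the kernel's `ALIGN(calc_len, 8)` macro. The two are bit-for-bit identical for any power-of-two alignment; a quick equivalence check, with the macro's usual shape reproduced here rather than pulled from kernel headers:

```c
#include <assert.h>
#include <stdint.h>

/* same shape as the kernel's ALIGN(); valid only for power-of-two a */
#define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~(uint32_t)((a) - 1))

static uint32_t pad8_open_coded(uint32_t len)
{
	/* the expression the diff removes */
	return (len + 7u) & ~7u;
}
```

Using the named macro is purely a readability change: it states the intent (round up to an 8-byte boundary) instead of the bit trick.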
index 5187250..4f53fa0 100644
@@ -530,6 +530,7 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
        p = buf;
 
        spin_lock(&ses->iface_lock);
+       ses->iface_count = 0;
        /*
         * Go through iface_list and do kref_put to remove
         * any unused ifaces. ifaces in use will be removed
@@ -550,7 +551,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                /* avoid spamming logs every 10 minutes, so log only in mount */
                if ((ses->chan_max > 1) && in_mount)
                        cifs_dbg(VFS,
-                                "empty network interface list returned by server %s\n",
+                                "multichannel not available\n"
+                                "Empty network interface list returned by server %s\n",
                                 ses->server->hostname);
                rc = -EINVAL;
                goto out;
@@ -650,9 +652,9 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
                        kref_put(&iface->refcount, release_iface);
                } else
                        list_add_tail(&info->iface_head, &ses->iface_list);
-               spin_unlock(&ses->iface_lock);
 
                ses->iface_count++;
+               spin_unlock(&ses->iface_lock);
                ses->iface_last_update = jiffies;
 next_iface:
                nb_iface++;
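The two hunks above tighten the locking in `parse_server_interfaces()`: `ses->iface_count` is reset and incremented only while `iface_lock` is held, so a reader never observes a count that disagrees with the list it guards. The same discipline in a userspace sketch with pthreads (struct and function names invented for illustration):

```c
#include <assert.h>
#include <pthread.h>

struct iface_state {
	pthread_mutex_t lock;
	int count;		/* must only change under lock */
};

/* rebuild step: reset and repopulate entirely under one critical section */
static void rebuild(struct iface_state *s, int n)
{
	pthread_mutex_lock(&s->lock);
	s->count = 0;		/* kernel: ses->iface_count = 0 under iface_lock */
	for (int i = 0; i < n; i++)
		s->count++;	/* kernel: increment before unlocking */
	pthread_mutex_unlock(&s->lock);
}
```

The bug the diff fixes is the intermediate state: before, the lock was dropped between adding a list entry and bumping the counter, leaving a window where count and list disagreed.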
@@ -800,7 +802,7 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
 
        rc = open_cached_dir(xid, tcon, full_path, cifs_sb, true, &cfid);
        if (!rc) {
-               if (cfid->is_valid) {
+               if (cfid->has_lease) {
                        close_cached_dir(cfid);
                        return 0;
                }
@@ -830,33 +832,25 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
        return rc;
 }
 
-static int
-smb2_get_srv_inum(const unsigned int xid, struct cifs_tcon *tcon,
-                 struct cifs_sb_info *cifs_sb, const char *full_path,
-                 u64 *uniqueid, FILE_ALL_INFO *data)
+static int smb2_get_srv_inum(const unsigned int xid, struct cifs_tcon *tcon,
+                            struct cifs_sb_info *cifs_sb, const char *full_path,
+                            u64 *uniqueid, struct cifs_open_info_data *data)
 {
-       *uniqueid = le64_to_cpu(data->IndexNumber);
+       *uniqueid = le64_to_cpu(data->fi.IndexNumber);
        return 0;
 }
 
-static int
-smb2_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
-                    struct cifs_fid *fid, FILE_ALL_INFO *data)
+static int smb2_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
+                               struct cifsFileInfo *cfile, struct cifs_open_info_data *data)
 {
-       int rc;
-       struct smb2_file_all_info *smb2_data;
-
-       smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2,
-                           GFP_KERNEL);
-       if (smb2_data == NULL)
-               return -ENOMEM;
+       struct cifs_fid *fid = &cfile->fid;
 
-       rc = SMB2_query_info(xid, tcon, fid->persistent_fid, fid->volatile_fid,
-                            smb2_data);
-       if (!rc)
-               move_smb2_info_to_cifs(data, smb2_data);
-       kfree(smb2_data);
-       return rc;
+       if (cfile->symlink_target) {
+               data->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+               if (!data->symlink_target)
+                       return -ENOMEM;
+       }
+       return SMB2_query_info(xid, tcon, fid->persistent_fid, fid->volatile_fid, &data->fi);
 }
 
 #ifdef CONFIG_CIFS_XATTR
@@ -2025,9 +2019,10 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
 
 static int
 smb3_notify(const unsigned int xid, struct file *pfile,
-           void __user *ioc_buf)
+           void __user *ioc_buf, bool return_changes)
 {
-       struct smb3_notify notify;
+       struct smb3_notify_info notify;
+       struct smb3_notify_info __user *pnotify_buf;
        struct dentry *dentry = pfile->f_path.dentry;
        struct inode *inode = file_inode(pfile);
        struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
@@ -2035,10 +2030,12 @@ smb3_notify(const unsigned int xid, struct file *pfile,
        struct cifs_fid fid;
        struct cifs_tcon *tcon;
        const unsigned char *path;
+       char *returned_ioctl_info = NULL;
        void *page = alloc_dentry_path();
        __le16 *utf16_path = NULL;
        u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
        int rc = 0;
+       __u32 ret_len = 0;
 
        path = build_path_from_dentry(dentry, page);
        if (IS_ERR(path)) {
@@ -2052,9 +2049,17 @@ smb3_notify(const unsigned int xid, struct file *pfile,
                goto notify_exit;
        }
 
-       if (copy_from_user(&notify, ioc_buf, sizeof(struct smb3_notify))) {
-               rc = -EFAULT;
-               goto notify_exit;
+       if (return_changes) {
+               if (copy_from_user(&notify, ioc_buf, sizeof(struct smb3_notify_info))) {
+                       rc = -EFAULT;
+                       goto notify_exit;
+               }
+       } else {
+               if (copy_from_user(&notify, ioc_buf, sizeof(struct smb3_notify))) {
+                       rc = -EFAULT;
+                       goto notify_exit;
+               }
+               notify.data_len = 0;
        }
 
        tcon = cifs_sb_master_tcon(cifs_sb);
@@ -2071,12 +2076,22 @@ smb3_notify(const unsigned int xid, struct file *pfile,
                goto notify_exit;
 
        rc = SMB2_change_notify(xid, tcon, fid.persistent_fid, fid.volatile_fid,
-                               notify.watch_tree, notify.completion_filter);
+                               notify.watch_tree, notify.completion_filter,
+                               notify.data_len, &returned_ioctl_info, &ret_len);
 
        SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
 
        cifs_dbg(FYI, "change notify for path %s rc %d\n", path, rc);
-
+       if (return_changes && (ret_len > 0) && (notify.data_len > 0)) {
+               if (ret_len > notify.data_len)
+                       ret_len = notify.data_len;
+               pnotify_buf = (struct smb3_notify_info __user *)ioc_buf;
+               if (copy_to_user(pnotify_buf->notify_data, returned_ioctl_info, ret_len))
+                       rc = -EFAULT;
+               else if (copy_to_user(&pnotify_buf->data_len, &ret_len, sizeof(ret_len)))
+                       rc = -EFAULT;
+       }
+       kfree(returned_ioctl_info);
 notify_exit:
        free_dentry_path(page);
        kfree(utf16_path);
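With the `return_changes` path above, `smb3_notify()` copies the change data back to user space, first clamping the server's `ret_len` to the `data_len` the caller advertised. That clamp-then-copy rule in isolation, with a plain `memcpy()` standing in for `copy_to_user()`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Copy at most dst_cap bytes of src into dst and report how many were
 * actually copied -- the same truncation rule the ioctl applies to
 * notify.data_len vs. the length returned by the server.
 */
static uint32_t copy_clamped(void *dst, uint32_t dst_cap,
			     const void *src, uint32_t src_len)
{
	uint32_t n = src_len > dst_cap ? dst_cap : src_len;

	memcpy(dst, src, n);	/* kernel: copy_to_user() */
	return n;
}
```

Reporting the clamped length back (as the diff does via the second `copy_to_user()` into `data_len`) lets the caller detect that the buffer was too small for the full notification data.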
@@ -2827,9 +2842,6 @@ parse_reparse_point(struct reparse_data_buffer *buf,
        }
 }
 
-#define SMB2_SYMLINK_STRUCT_SIZE \
-       (sizeof(struct smb2_err_rsp) - 1 + sizeof(struct smb2_symlink_err_rsp))
-
 static int
 smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
                   struct cifs_sb_info *cifs_sb, const char *full_path,
@@ -2841,13 +2853,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
        struct cifs_open_parms oparms;
        struct cifs_fid fid;
        struct kvec err_iov = {NULL, 0};
-       struct smb2_err_rsp *err_buf = NULL;
-       struct smb2_symlink_err_rsp *symlink;
        struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
-       unsigned int sub_len;
-       unsigned int sub_offset;
-       unsigned int print_len;
-       unsigned int print_offset;
        int flags = CIFS_CP_CREATE_CLOSE_OP;
        struct smb_rqst rqst[3];
        int resp_buftype[3];
@@ -2964,47 +2970,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
                goto querty_exit;
        }
 
-       err_buf = err_iov.iov_base;
-       if (le32_to_cpu(err_buf->ByteCount) < sizeof(struct smb2_symlink_err_rsp) ||
-           err_iov.iov_len < SMB2_SYMLINK_STRUCT_SIZE) {
-               rc = -EINVAL;
-               goto querty_exit;
-       }
-
-       symlink = (struct smb2_symlink_err_rsp *)err_buf->ErrorData;
-       if (le32_to_cpu(symlink->SymLinkErrorTag) != SYMLINK_ERROR_TAG ||
-           le32_to_cpu(symlink->ReparseTag) != IO_REPARSE_TAG_SYMLINK) {
-               rc = -EINVAL;
-               goto querty_exit;
-       }
-
-       /* open must fail on symlink - reset rc */
-       rc = 0;
-       sub_len = le16_to_cpu(symlink->SubstituteNameLength);
-       sub_offset = le16_to_cpu(symlink->SubstituteNameOffset);
-       print_len = le16_to_cpu(symlink->PrintNameLength);
-       print_offset = le16_to_cpu(symlink->PrintNameOffset);
-
-       if (err_iov.iov_len < SMB2_SYMLINK_STRUCT_SIZE + sub_offset + sub_len) {
-               rc = -EINVAL;
-               goto querty_exit;
-       }
-
-       if (err_iov.iov_len <
-           SMB2_SYMLINK_STRUCT_SIZE + print_offset + print_len) {
-               rc = -EINVAL;
-               goto querty_exit;
-       }
-
-       *target_path = cifs_strndup_from_utf16(
-                               (char *)symlink->PathBuffer + sub_offset,
-                               sub_len, true, cifs_sb->local_nls);
-       if (!(*target_path)) {
-               rc = -ENOMEM;
-               goto querty_exit;
-       }
-       convert_delimiter(*target_path, '/');
-       cifs_dbg(FYI, "%s: target path: %s\n", __func__, *target_path);
+       rc = smb2_parse_symlink_response(cifs_sb, &err_iov, target_path);
 
  querty_exit:
        cifs_dbg(FYI, "query symlink rc %d\n", rc);
@@ -5114,7 +5080,7 @@ smb2_make_node(unsigned int xid, struct inode *inode,
 {
        struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
        int rc = -EPERM;
-       FILE_ALL_INFO *buf = NULL;
+       struct cifs_open_info_data buf = {};
        struct cifs_io_parms io_parms = {0};
        __u32 oplock = 0;
        struct cifs_fid fid;
@@ -5130,7 +5096,7 @@ smb2_make_node(unsigned int xid, struct inode *inode,
         * and was used by default in earlier versions of Windows
         */
        if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL))
-               goto out;
+               return rc;
 
        /*
         * TODO: Add ability to create instead via reparse point. Windows (e.g.
@@ -5139,16 +5105,10 @@ smb2_make_node(unsigned int xid, struct inode *inode,
         */
 
        if (!S_ISCHR(mode) && !S_ISBLK(mode))
-               goto out;
+               return rc;
 
        cifs_dbg(FYI, "sfu compat create special file\n");
 
-       buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);
-       if (buf == NULL) {
-               rc = -ENOMEM;
-               goto out;
-       }
-
        oparms.tcon = tcon;
        oparms.cifs_sb = cifs_sb;
        oparms.desired_access = GENERIC_WRITE;
@@ -5163,21 +5123,21 @@ smb2_make_node(unsigned int xid, struct inode *inode,
                oplock = REQ_OPLOCK;
        else
                oplock = 0;
-       rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, buf);
+       rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, &buf);
        if (rc)
-               goto out;
+               return rc;
 
        /*
         * BB Do not bother to decode buf since no local inode yet to put
         * timestamps in, but we can reuse it safely.
         */
 
-       pdev = (struct win_dev *)buf;
+       pdev = (struct win_dev *)&buf.fi;
        io_parms.pid = current->tgid;
        io_parms.tcon = tcon;
        io_parms.offset = 0;
        io_parms.length = sizeof(struct win_dev);
-       iov[1].iov_base = buf;
+       iov[1].iov_base = &buf.fi;
        iov[1].iov_len = sizeof(struct win_dev);
        if (S_ISCHR(mode)) {
                memcpy(pdev->type, "IntxCHR", 8);
@@ -5196,8 +5156,8 @@ smb2_make_node(unsigned int xid, struct inode *inode,
        d_drop(dentry);
 
        /* FIXME: add code here to set EAs */
-out:
-       kfree(buf);
+
+       cifs_free_open_info(&buf);
        return rc;
 }
 
index b3c4d2e..a569574 100644 (file)
@@ -466,15 +466,14 @@ build_signing_ctxt(struct smb2_signing_capabilities *pneg_ctxt)
        /*
         * Context Data length must be rounded to multiple of 8 for some servers
         */
-       pneg_ctxt->DataLength = cpu_to_le16(DIV_ROUND_UP(
-                               sizeof(struct smb2_signing_capabilities) -
-                               sizeof(struct smb2_neg_context) +
-                               (num_algs * 2 /* sizeof u16 */), 8) * 8);
+       pneg_ctxt->DataLength = cpu_to_le16(ALIGN(sizeof(struct smb2_signing_capabilities) -
+                                           sizeof(struct smb2_neg_context) +
+                                           (num_algs * sizeof(u16)), 8));
        pneg_ctxt->SigningAlgorithmCount = cpu_to_le16(num_algs);
        pneg_ctxt->SigningAlgorithms[0] = cpu_to_le16(SIGNING_ALG_AES_CMAC);
 
-       ctxt_len += 2 /* sizeof le16 */ * num_algs;
-       ctxt_len = DIV_ROUND_UP(ctxt_len, 8) * 8;
+       ctxt_len += sizeof(__le16) * num_algs;
+       ctxt_len = ALIGN(ctxt_len, 8);
        return ctxt_len;
        /* TBD add SIGNING_ALG_AES_GMAC and/or SIGNING_ALG_HMAC_SHA256 */
 }
@@ -511,8 +510,7 @@ build_netname_ctxt(struct smb2_netname_neg_context *pneg_ctxt, char *hostname)
        /* copy up to max of first 100 bytes of server name to NetName field */
        pneg_ctxt->DataLength = cpu_to_le16(2 * cifs_strtoUTF16(pneg_ctxt->NetName, hostname, 100, cp));
        /* context size is DataLength + minimal smb2_neg_context */
-       return DIV_ROUND_UP(le16_to_cpu(pneg_ctxt->DataLength) +
-                       sizeof(struct smb2_neg_context), 8) * 8;
+       return ALIGN(le16_to_cpu(pneg_ctxt->DataLength) + sizeof(struct smb2_neg_context), 8);
 }
 
 static void
@@ -557,18 +555,18 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
         * round up total_len of fixed part of SMB3 negotiate request to 8
         * byte boundary before adding negotiate contexts
         */
-       *total_len = roundup(*total_len, 8);
+       *total_len = ALIGN(*total_len, 8);
 
        pneg_ctxt = (*total_len) + (char *)req;
        req->NegotiateContextOffset = cpu_to_le32(*total_len);
 
        build_preauth_ctxt((struct smb2_preauth_neg_context *)pneg_ctxt);
-       ctxt_len = DIV_ROUND_UP(sizeof(struct smb2_preauth_neg_context), 8) * 8;
+       ctxt_len = ALIGN(sizeof(struct smb2_preauth_neg_context), 8);
        *total_len += ctxt_len;
        pneg_ctxt += ctxt_len;
 
        build_encrypt_ctxt((struct smb2_encryption_neg_context *)pneg_ctxt);
-       ctxt_len = DIV_ROUND_UP(sizeof(struct smb2_encryption_neg_context), 8) * 8;
+       ctxt_len = ALIGN(sizeof(struct smb2_encryption_neg_context), 8);
        *total_len += ctxt_len;
        pneg_ctxt += ctxt_len;
 
@@ -595,9 +593,7 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
        if (server->compress_algorithm) {
                build_compression_ctxt((struct smb2_compression_capabilities_context *)
                                pneg_ctxt);
-               ctxt_len = DIV_ROUND_UP(
-                       sizeof(struct smb2_compression_capabilities_context),
-                               8) * 8;
+               ctxt_len = ALIGN(sizeof(struct smb2_compression_capabilities_context), 8);
                *total_len += ctxt_len;
                pneg_ctxt += ctxt_len;
                neg_context_count++;
@@ -780,7 +776,7 @@ static int smb311_decode_neg_context(struct smb2_negotiate_rsp *rsp,
                if (rc)
                        break;
                /* offsets must be 8 byte aligned */
-               clen = (clen + 7) & ~0x7;
+               clen = ALIGN(clen, 8);
                offset += clen + sizeof(struct smb2_neg_context);
                len_of_ctxts -= clen;
        }
@@ -1345,14 +1341,13 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data)
 static void
 SMB2_sess_free_buffer(struct SMB2_sess_data *sess_data)
 {
-       int i;
+       struct kvec *iov = sess_data->iov;
 
-       /* zero the session data before freeing, as it might contain sensitive info (keys, etc) */
-       for (i = 0; i < 2; i++)
-               if (sess_data->iov[i].iov_base)
-                       memzero_explicit(sess_data->iov[i].iov_base, sess_data->iov[i].iov_len);
+       /* iov[1] is already freed by caller */
+       if (sess_data->buf0_type != CIFS_NO_BUFFER && iov[0].iov_base)
+               memzero_explicit(iov[0].iov_base, iov[0].iov_len);
 
-       free_rsp_buf(sess_data->buf0_type, sess_data->iov[0].iov_base);
+       free_rsp_buf(sess_data->buf0_type, iov[0].iov_base);
        sess_data->buf0_type = CIFS_NO_BUFFER;
 }
 
@@ -1535,7 +1530,7 @@ SMB2_sess_auth_rawntlmssp_negotiate(struct SMB2_sess_data *sess_data)
                                          &blob_length, ses, server,
                                          sess_data->nls_cp);
        if (rc)
-               goto out_err;
+               goto out;
 
        if (use_spnego) {
                /* BB eventually need to add this */
@@ -1582,7 +1577,7 @@ SMB2_sess_auth_rawntlmssp_negotiate(struct SMB2_sess_data *sess_data)
        }
 
 out:
-       memzero_explicit(ntlmssp_blob, blob_length);
+       kfree_sensitive(ntlmssp_blob);
        SMB2_sess_free_buffer(sess_data);
        if (!rc) {
                sess_data->result = 0;
@@ -1666,7 +1661,7 @@ SMB2_sess_auth_rawntlmssp_authenticate(struct SMB2_sess_data *sess_data)
        }
 #endif
 out:
-       memzero_explicit(ntlmssp_blob, blob_length);
+       kfree_sensitive(ntlmssp_blob);
        SMB2_sess_free_buffer(sess_data);
        kfree_sensitive(ses->ntlmssp);
        ses->ntlmssp = NULL;
@@ -2424,9 +2419,9 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
        unsigned int acelen, acl_size, ace_count;
        unsigned int owner_offset = 0;
        unsigned int group_offset = 0;
-       struct smb3_acl acl;
+       struct smb3_acl acl = {};
 
-       *len = roundup(sizeof(struct crt_sd_ctxt) + (sizeof(struct cifs_ace) * 4), 8);
+       *len = round_up(sizeof(struct crt_sd_ctxt) + (sizeof(struct cifs_ace) * 4), 8);
 
        if (set_owner) {
                /* sizeof(struct owner_group_sids) is already multiple of 8 so no need to round */
@@ -2497,10 +2492,11 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
        acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
        acl.AclSize = cpu_to_le16(acl_size);
        acl.AceCount = cpu_to_le16(ace_count);
+       /* acl.Sbz1 and Sbz2 MBZ so are not set here, but initialized above */
        memcpy(aclptr, &acl, sizeof(struct smb3_acl));
 
        buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
-       *len = roundup(ptr - (__u8 *)buf, 8);
+       *len = round_up((unsigned int)(ptr - (__u8 *)buf), 8);
 
        return buf;
 }
@@ -2594,7 +2590,7 @@ alloc_path_with_tree_prefix(__le16 **out_path, int *out_size, int *out_len,
         * final path needs to be 8-byte aligned as specified in
         * MS-SMB2 2.2.13 SMB2 CREATE Request.
         */
-       *out_size = roundup(*out_len * sizeof(__le16), 8);
+       *out_size = round_up(*out_len * sizeof(__le16), 8);
        *out_path = kzalloc(*out_size + sizeof(__le16) /* null */, GFP_KERNEL);
        if (!*out_path)
                return -ENOMEM;
@@ -2839,9 +2835,7 @@ SMB2_open_init(struct cifs_tcon *tcon, struct TCP_Server_Info *server,
                uni_path_len = (2 * UniStrnlen((wchar_t *)path, PATH_MAX)) + 2;
                /* MUST set path len (NameLength) to 0 opening root of share */
                req->NameLength = cpu_to_le16(uni_path_len - 2);
-               copy_size = uni_path_len;
-               if (copy_size % 8 != 0)
-                       copy_size = roundup(copy_size, 8);
+               copy_size = round_up(uni_path_len, 8);
                copy_path = kzalloc(copy_size, GFP_KERNEL);
                if (!copy_path)
                        return -ENOMEM;
@@ -3485,7 +3479,7 @@ smb2_validate_and_copy_iov(unsigned int offset, unsigned int buffer_length,
        if (rc)
                return rc;
 
-       memcpy(data, begin_of_buf, buffer_length);
+       memcpy(data, begin_of_buf, minbufsize);
 
        return 0;
 }
@@ -3609,7 +3603,7 @@ query_info(const unsigned int xid, struct cifs_tcon *tcon,
 
        rc = smb2_validate_and_copy_iov(le16_to_cpu(rsp->OutputBufferOffset),
                                        le32_to_cpu(rsp->OutputBufferLength),
-                                       &rsp_iov, min_len, *data);
+                                       &rsp_iov, dlen ? *dlen : min_len, *data);
        if (rc && allocated) {
                kfree(*data);
                *data = NULL;
@@ -3715,11 +3709,13 @@ SMB2_notify_init(const unsigned int xid, struct smb_rqst *rqst,
 int
 SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
                u64 persistent_fid, u64 volatile_fid, bool watch_tree,
-               u32 completion_filter)
+               u32 completion_filter, u32 max_out_data_len, char **out_data,
+               u32 *plen /* returned data len */)
 {
        struct cifs_ses *ses = tcon->ses;
        struct TCP_Server_Info *server = cifs_pick_channel(ses);
        struct smb_rqst rqst;
+       struct smb2_change_notify_rsp *smb_rsp;
        struct kvec iov[1];
        struct kvec rsp_iov = {NULL, 0};
        int resp_buftype = CIFS_NO_BUFFER;
@@ -3735,6 +3731,9 @@ SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
 
        memset(&rqst, 0, sizeof(struct smb_rqst));
        memset(&iov, 0, sizeof(iov));
+       if (plen)
+               *plen = 0;
+
        rqst.rq_iov = iov;
        rqst.rq_nvec = 1;
 
@@ -3753,9 +3752,28 @@ SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
                cifs_stats_fail_inc(tcon, SMB2_CHANGE_NOTIFY_HE);
                trace_smb3_notify_err(xid, persistent_fid, tcon->tid, ses->Suid,
                                (u8)watch_tree, completion_filter, rc);
-       } else
+       } else {
                trace_smb3_notify_done(xid, persistent_fid, tcon->tid,
-                               ses->Suid, (u8)watch_tree, completion_filter);
+                       ses->Suid, (u8)watch_tree, completion_filter);
+               /* validate that notify information is plausible */
+               if ((rsp_iov.iov_base == NULL) ||
+                   (rsp_iov.iov_len < sizeof(struct smb2_change_notify_rsp)))
+                       goto cnotify_exit;
+
+               smb_rsp = (struct smb2_change_notify_rsp *)rsp_iov.iov_base;
+
+               smb2_validate_iov(le16_to_cpu(smb_rsp->OutputBufferOffset),
+                               le32_to_cpu(smb_rsp->OutputBufferLength), &rsp_iov,
+                               sizeof(struct file_notify_information));
+
+               *out_data = kmemdup((char *)smb_rsp + le16_to_cpu(smb_rsp->OutputBufferOffset),
+                               le32_to_cpu(smb_rsp->OutputBufferLength), GFP_KERNEL);
+               if (*out_data == NULL) {
+                       rc = -ENOMEM;
+                       goto cnotify_exit;
+               } else
+                       *plen = le32_to_cpu(smb_rsp->OutputBufferLength);
+       }
 
  cnotify_exit:
        if (rqst.rq_iov)
@@ -4103,7 +4121,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
        if (request_type & CHAINED_REQUEST) {
                if (!(request_type & END_OF_CHAIN)) {
                        /* next 8-byte aligned request */
-                       *total_len = DIV_ROUND_UP(*total_len, 8) * 8;
+                       *total_len = ALIGN(*total_len, 8);
                        shdr->NextCommand = cpu_to_le32(*total_len);
                } else /* END_OF_CHAIN */
                        shdr->NextCommand = 0;
index f57881b..1237bb8 100644 (file)
@@ -56,6 +56,9 @@ struct smb2_rdma_crypto_transform {
 
 #define COMPOUND_FID 0xFFFFFFFFFFFFFFFFULL
 
+#define SMB2_SYMLINK_STRUCT_SIZE \
+       (sizeof(struct smb2_err_rsp) - 1 + sizeof(struct smb2_symlink_err_rsp))
+
 #define SYMLINK_ERROR_TAG 0x4c4d5953
 
 struct smb2_symlink_err_rsp {
index 3f740f2..be21b5d 100644 (file)
@@ -53,16 +53,12 @@ extern bool smb2_is_valid_oplock_break(char *buffer,
                                       struct TCP_Server_Info *srv);
 extern int smb3_handle_read_data(struct TCP_Server_Info *server,
                                 struct mid_q_entry *mid);
-
-extern void move_smb2_info_to_cifs(FILE_ALL_INFO *dst,
-                                  struct smb2_file_all_info *src);
 extern int smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
                                struct cifs_sb_info *cifs_sb, const char *path,
                                __u32 *reparse_tag);
-extern int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
-                               struct cifs_sb_info *cifs_sb,
-                               const char *full_path, FILE_ALL_INFO *data,
-                               bool *adjust_tz, bool *symlink);
+int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+                        struct cifs_sb_info *cifs_sb, const char *full_path,
+                        struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse);
 extern int smb2_set_path_size(const unsigned int xid, struct cifs_tcon *tcon,
                              const char *full_path, __u64 size,
                              struct cifs_sb_info *cifs_sb, bool set_alloc);
@@ -95,9 +91,9 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
                          struct cifs_sb_info *cifs_sb,
                          const unsigned char *path, char *pbuf,
                          unsigned int *pbytes_read);
-extern int smb2_open_file(const unsigned int xid,
-                         struct cifs_open_parms *oparms,
-                         __u32 *oplock, FILE_ALL_INFO *buf);
+int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path);
+int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+                  void *buf);
 extern int smb2_unlock_range(struct cifsFileInfo *cfile,
                             struct file_lock *flock, const unsigned int xid);
 extern int smb2_push_mandatory_locks(struct cifsFileInfo *cfile);
@@ -148,7 +144,8 @@ extern int SMB2_ioctl_init(struct cifs_tcon *tcon,
 extern void SMB2_ioctl_free(struct smb_rqst *rqst);
 extern int SMB2_change_notify(const unsigned int xid, struct cifs_tcon *tcon,
                        u64 persistent_fid, u64 volatile_fid, bool watch_tree,
-                       u32 completion_filter);
+                       u32 completion_filter, u32 max_out_data_len,
+                       char **out_data, u32 *plen /* returned data len */);
 
 extern int __SMB2_close(const unsigned int xid, struct cifs_tcon *tcon,
                        u64 persistent_fid, u64 volatile_fid,
@@ -278,9 +275,9 @@ extern int smb2_query_info_compound(const unsigned int xid,
                                    struct kvec *rsp, int *buftype,
                                    struct cifs_sb_info *cifs_sb);
 /* query path info from the server using SMB311 POSIX extensions*/
-extern int smb311_posix_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
-                       struct cifs_sb_info *sb, const char *path, struct smb311_posix_qinfo *qinf,
-                       bool *adjust_tx, bool *symlink);
+int smb311_posix_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+                                struct cifs_sb_info *cifs_sb, const char *full_path,
+                                struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse);
 int posix_info_parse(const void *beg, const void *end,
                     struct smb2_posix_info_parsed *out);
 int posix_info_sid_size(const void *beg, const void *end);
index a0ef63c..9e4f478 100644 (file)
@@ -651,22 +651,6 @@ int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes,
        if (err)
                return err;
 
-       /*
-        * Ensure that the available space hasn't shrunk below the safe level
-        */
-       status = check_var_size(attributes, *size + ucs2_strsize(name, 1024));
-       if (status != EFI_SUCCESS) {
-               if (status != EFI_UNSUPPORTED) {
-                       err = efi_status_to_err(status);
-                       goto out;
-               }
-
-               if (*size > 65536) {
-                       err = -ENOSPC;
-                       goto out;
-               }
-       }
-
        status = efivar_set_variable_locked(name, vendor, attributes, *size,
                                            data, false);
        if (status != EFI_SUCCESS) {
index 998cd26..fe05bc5 100644 (file)
@@ -590,14 +590,17 @@ struct erofs_fscache *erofs_domain_register_cookie(struct super_block *sb,
        struct super_block *psb = erofs_pseudo_mnt->mnt_sb;
 
        mutex_lock(&erofs_domain_cookies_lock);
+       spin_lock(&psb->s_inode_list_lock);
        list_for_each_entry(inode, &psb->s_inodes, i_sb_list) {
                ctx = inode->i_private;
                if (!ctx || ctx->domain != domain || strcmp(ctx->name, name))
                        continue;
                igrab(inode);
+               spin_unlock(&psb->s_inode_list_lock);
                mutex_unlock(&erofs_domain_cookies_lock);
                return ctx;
        }
+       spin_unlock(&psb->s_inode_list_lock);
        ctx = erofs_fscache_domain_init_cookie(sb, name, need_inode);
        mutex_unlock(&erofs_domain_cookies_lock);
        return ctx;
index 559380a..c7f24fc 100644 (file)
@@ -813,15 +813,14 @@ retry:
        ++spiltted;
        if (fe->pcl->pageofs_out != (map->m_la & ~PAGE_MASK))
                fe->pcl->multibases = true;
-
-       if ((map->m_flags & EROFS_MAP_FULL_MAPPED) &&
-           !(map->m_flags & EROFS_MAP_PARTIAL_REF) &&
-           fe->pcl->length == map->m_llen)
-               fe->pcl->partial = false;
        if (fe->pcl->length < offset + end - map->m_la) {
                fe->pcl->length = offset + end - map->m_la;
                fe->pcl->pageofs_out = map->m_la & ~PAGE_MASK;
        }
+       if ((map->m_flags & EROFS_MAP_FULL_MAPPED) &&
+           !(map->m_flags & EROFS_MAP_PARTIAL_REF) &&
+           fe->pcl->length == map->m_llen)
+               fe->pcl->partial = false;
 next_part:
        /* shorten the remaining extent to update progress */
        map->m_llen = offset + cur - map->m_la;
@@ -888,15 +887,13 @@ static void z_erofs_do_decompressed_bvec(struct z_erofs_decompress_backend *be,
 
        if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK)) {
                unsigned int pgnr;
-               struct page *oldpage;
 
                pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT;
                DBG_BUGON(pgnr >= be->nr_pages);
-               oldpage = be->decompressed_pages[pgnr];
-               be->decompressed_pages[pgnr] = bvec->page;
-
-               if (!oldpage)
+               if (!be->decompressed_pages[pgnr]) {
+                       be->decompressed_pages[pgnr] = bvec->page;
                        return;
+               }
        }
 
        /* (cold path) one pcluster is requested multiple times */
index e7f04c4..d98c952 100644 (file)
@@ -126,10 +126,10 @@ static inline unsigned int z_erofs_pclusterpages(struct z_erofs_pcluster *pcl)
 }
 
 /*
- * bit 31: I/O error occurred on this page
- * bit 0 - 30: remaining parts to complete this page
+ * bit 30: I/O error occurred on this page
+ * bit 0 - 29: remaining parts to complete this page
  */
-#define Z_EROFS_PAGE_EIO                       (1 << 31)
+#define Z_EROFS_PAGE_EIO                       (1 << 30)
 
 static inline void z_erofs_onlinepage_init(struct page *page)
 {
index 44c27ef..0bb6692 100644 (file)
@@ -57,8 +57,7 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
 
        pos = ALIGN(iloc(EROFS_SB(sb), vi->nid) + vi->inode_isize +
                    vi->xattr_isize, 8);
-       kaddr = erofs_read_metabuf(&buf, sb, erofs_blknr(pos),
-                                  EROFS_KMAP_ATOMIC);
+       kaddr = erofs_read_metabuf(&buf, sb, erofs_blknr(pos), EROFS_KMAP);
        if (IS_ERR(kaddr)) {
                err = PTR_ERR(kaddr);
                goto out_unlock;
@@ -73,7 +72,7 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
                vi->z_advise = Z_EROFS_ADVISE_FRAGMENT_PCLUSTER;
                vi->z_fragmentoff = le64_to_cpu(*(__le64 *)h) ^ (1ULL << 63);
                vi->z_tailextent_headlcn = 0;
-               goto unmap_done;
+               goto done;
        }
        vi->z_advise = le16_to_cpu(h->h_advise);
        vi->z_algorithmtype[0] = h->h_algorithmtype & 15;
@@ -85,7 +84,7 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
                erofs_err(sb, "unknown HEAD%u format %u for nid %llu, please upgrade kernel",
                          headnr + 1, vi->z_algorithmtype[headnr], vi->nid);
                err = -EOPNOTSUPP;
-               goto unmap_done;
+               goto out_put_metabuf;
        }
 
        vi->z_logical_clusterbits = LOG_BLOCK_SIZE + (h->h_clusterbits & 7);
@@ -95,7 +94,7 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
                erofs_err(sb, "per-inode big pcluster without sb feature for nid %llu",
                          vi->nid);
                err = -EFSCORRUPTED;
-               goto unmap_done;
+               goto out_put_metabuf;
        }
        if (vi->datalayout == EROFS_INODE_FLAT_COMPRESSION &&
            !(vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1) ^
@@ -103,12 +102,8 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
                erofs_err(sb, "big pcluster head1/2 of compact indexes should be consistent for nid %llu",
                          vi->nid);
                err = -EFSCORRUPTED;
-               goto unmap_done;
+               goto out_put_metabuf;
        }
-unmap_done:
-       erofs_put_metabuf(&buf);
-       if (err)
-               goto out_unlock;
 
        if (vi->z_advise & Z_EROFS_ADVISE_INLINE_PCLUSTER) {
                struct erofs_map_blocks map = {
@@ -127,7 +122,7 @@ unmap_done:
                        err = -EFSCORRUPTED;
                }
                if (err < 0)
-                       goto out_unlock;
+                       goto out_put_metabuf;
        }
 
        if (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER &&
@@ -141,11 +136,14 @@ unmap_done:
                                            EROFS_GET_BLOCKS_FINDTAIL);
                erofs_put_metabuf(&map.buf);
                if (err < 0)
-                       goto out_unlock;
+                       goto out_put_metabuf;
        }
+done:
        /* paired with smp_mb() at the beginning of the function */
        smp_mb();
        set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+out_put_metabuf:
+       erofs_put_metabuf(&buf);
 out_unlock:
        clear_and_wake_up_bit(EROFS_I_BL_Z_BIT, &vi->flags);
        return err;
index a795437..5590a1e 100644 (file)
@@ -552,7 +552,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
        inode->i_uid = sbi->options.fs_uid;
        inode->i_gid = sbi->options.fs_gid;
        inode_inc_iversion(inode);
-       inode->i_generation = prandom_u32();
+       inode->i_generation = get_random_u32();
 
        if (info->attr & ATTR_SUBDIR) { /* directory */
                inode->i_generation &= ~1;
index 998dd2a..f4944c4 100644 (file)
@@ -277,8 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
                int best_ndir = inodes_per_group;
                int best_group = -1;
 
-               group = prandom_u32();
-               parent_group = (unsigned)group % ngroups;
+               parent_group = prandom_u32_max(ngroups);
                for (i = 0; i < ngroups; i++) {
                        group = (parent_group + i) % ngroups;
                        desc = ext2_get_group_desc (sb, group, NULL);
index 208b87c..e9bc466 100644 (file)
@@ -463,10 +463,9 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
                        hinfo.hash_version = DX_HASH_HALF_MD4;
                        hinfo.seed = sbi->s_hash_seed;
                        ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo);
-                       grp = hinfo.hash;
+                       parent_group = hinfo.hash % ngroups;
                } else
-                       grp = prandom_u32();
-               parent_group = (unsigned)grp % ngroups;
+                       parent_group = prandom_u32_max(ngroups);
                for (i = 0; i < ngroups; i++) {
                        g = (parent_group + i) % ngroups;
                        get_orlov_stats(sb, g, flex_size, &stats);
@@ -1280,7 +1279,7 @@ got:
                                        EXT4_GROUP_INFO_IBITMAP_CORRUPT);
                goto out;
        }
-       inode->i_generation = prandom_u32();
+       inode->i_generation = get_random_u32();
 
        /* Precompute checksum seed for inode metadata */
        if (ext4_has_metadata_csum(sb)) {
index 4d49c5c..ded5355 100644 (file)
@@ -454,8 +454,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
        inode->i_ctime = inode_bl->i_ctime = current_time(inode);
        inode_inc_iversion(inode);
 
-       inode->i_generation = prandom_u32();
-       inode_bl->i_generation = prandom_u32();
+       inode->i_generation = get_random_u32();
+       inode_bl->i_generation = get_random_u32();
        ext4_reset_inode_seed(inode);
        ext4_reset_inode_seed(inode_bl);
 
index 9af68a7..588cb09 100644 (file)
@@ -265,7 +265,7 @@ static unsigned int mmp_new_seq(void)
        u32 new_seq;
 
        do {
-               new_seq = prandom_u32();
+               new_seq = get_random_u32();
        } while (new_seq > EXT4_MMP_SEQ_MAX);
 
        return new_seq;
index d733db8..989365b 100644 (file)
@@ -3782,8 +3782,7 @@ cont_thread:
                        }
                        if (!progress) {
                                elr->lr_next_sched = jiffies +
-                                       (prandom_u32()
-                                        % (EXT4_DEF_LI_MAX_START_DELAY * HZ));
+                                       prandom_u32_max(EXT4_DEF_LI_MAX_START_DELAY * HZ);
                        }
                        if (time_before(elr->lr_next_sched, next_wakeup))
                                next_wakeup = elr->lr_next_sched;
@@ -3930,8 +3929,8 @@ static struct ext4_li_request *ext4_li_request_new(struct super_block *sb,
         * spread the inode table initialization requests
         * better.
         */
-       elr->lr_next_sched = jiffies + (prandom_u32() %
-                               (EXT4_DEF_LI_MAX_START_DELAY * HZ));
+       elr->lr_next_sched = jiffies + prandom_u32_max(
+                               EXT4_DEF_LI_MAX_START_DELAY * HZ);
        return elr;
 }
 
index 20cadfb..3c640bd 100644 (file)
@@ -363,13 +363,14 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
                                               pgoff_t index,
                                               unsigned long num_ra_pages)
 {
-       DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
        struct page *page;
 
        index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
 
        page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
        if (!page || !PageUptodate(page)) {
+               DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
+
                if (page)
                        put_page(page);
                else if (num_ra_pages > 1)
index d36bcb2..4546e01 100644 (file)
@@ -282,7 +282,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
 
        /* let's select beginning hot/small space first in no_heap mode*/
        if (f2fs_need_rand_seg(sbi))
-               p->offset = prandom_u32() % (MAIN_SECS(sbi) * sbi->segs_per_sec);
+               p->offset = prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);
        else if (test_opt(sbi, NOHEAP) &&
                (type == CURSEG_HOT_DATA || IS_NODESEG(type)))
                p->offset = 0;
index d5065a5..a389772 100644 (file)
@@ -50,7 +50,7 @@ static struct inode *f2fs_new_inode(struct user_namespace *mnt_userns,
        inode->i_blocks = 0;
        inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
        F2FS_I(inode)->i_crtime = inode->i_mtime;
-       inode->i_generation = prandom_u32();
+       inode->i_generation = get_random_u32();
 
        if (S_ISDIR(inode->i_mode))
                F2FS_I(inode)->i_current_depth = 1;
index 289bcb7..acf3d3f 100644 (file)
@@ -2534,7 +2534,7 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
 
        sanity_check_seg_type(sbi, seg_type);
        if (f2fs_need_rand_seg(sbi))
-               return prandom_u32() % (MAIN_SECS(sbi) * sbi->segs_per_sec);
+               return prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);
 
        /* if segs_per_sec is large than 1, we need to keep original policy. */
        if (__is_large_section(sbi))
@@ -2588,7 +2588,7 @@ static void new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec)
        curseg->alloc_type = LFS;
        if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK)
                curseg->fragment_remained_chunk =
-                               prandom_u32() % sbi->max_fragment_chunk + 1;
+                               prandom_u32_max(sbi->max_fragment_chunk) + 1;
 }
 
 static int __next_free_blkoff(struct f2fs_sb_info *sbi,
@@ -2625,9 +2625,9 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
                        /* To allocate block chunks in different sizes, use random number */
                        if (--seg->fragment_remained_chunk <= 0) {
                                seg->fragment_remained_chunk =
-                                  prandom_u32() % sbi->max_fragment_chunk + 1;
+                                  prandom_u32_max(sbi->max_fragment_chunk) + 1;
                                seg->next_blkoff +=
-                                  prandom_u32() % sbi->max_fragment_hole + 1;
+                                  prandom_u32_max(sbi->max_fragment_hole) + 1;
                        }
                }
        }
index f0805e5..c352fff 100644 (file)
@@ -258,13 +258,14 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
                                               pgoff_t index,
                                               unsigned long num_ra_pages)
 {
-       DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
        struct page *page;
 
        index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
 
        page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
        if (!page || !PageUptodate(page)) {
+               DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
+
                if (page)
                        put_page(page);
                else if (num_ra_pages > 1)
index a38238d..1cbcc46 100644 (file)
@@ -523,7 +523,7 @@ int fat_fill_inode(struct inode *inode, struct msdos_dir_entry *de)
        inode->i_uid = sbi->options.fs_uid;
        inode->i_gid = sbi->options.fs_gid;
        inode_inc_iversion(inode);
-       inode->i_generation = prandom_u32();
+       inode->i_generation = get_random_u32();
 
        if ((de->attr & ATTR_DIR) && !IS_FREE(de->name)) {
                inode->i_generation &= ~1;
index 07881b7..2774687 100644 (file)
@@ -103,7 +103,7 @@ static char *__dentry_name(struct dentry *dentry, char *name)
         */
        BUG_ON(p + strlen(p) + 1 != name + PATH_MAX);
 
-       strlcpy(name, root, PATH_MAX);
+       strscpy(name, root, PATH_MAX);
        if (len > p - name) {
                __putname(name);
                return NULL;
index 198d7ab..4e71850 100644 (file)
@@ -4375,8 +4375,8 @@ nfsd4_init_leases_net(struct nfsd_net *nn)
        nn->nfsd4_grace = 90;
        nn->somebody_reclaimed = false;
        nn->track_reclaim_completes = false;
-       nn->clverifier_counter = prandom_u32();
-       nn->clientid_base = prandom_u32();
+       nn->clverifier_counter = get_random_u32();
+       nn->clientid_base = get_random_u32();
        nn->clientid_counter = nn->clientid_base + 1;
        nn->s2s_cp_cl_id = nn->clientid_counter++;
 
index 6a29bcf..dc74a94 100644 (file)
@@ -1458,12 +1458,14 @@ static __net_init int nfsd_init_net(struct net *net)
                goto out_drc_error;
        retval = nfsd_reply_cache_init(nn);
        if (retval)
-               goto out_drc_error;
+               goto out_cache_error;
        get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
        seqlock_init(&nn->writeverf_lock);
 
        return 0;
 
+out_cache_error:
+       nfsd4_leases_net_shutdown(nn);
 out_drc_error:
        nfsd_idmap_shutdown(net);
 out_idmap_error:
index d734342..8c52b6c 100644 (file)
@@ -392,8 +392,8 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
 skip_pseudoflavor_check:
        /* Finally, check access permissions. */
        error = nfsd_permission(rqstp, exp, dentry, access);
-       trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error);
 out:
+       trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error);
        if (error == nfserr_stale)
                nfsd_stats_fh_stale_inc(exp);
        return error;
index e7c4940..0d611a6 100644 (file)
@@ -3819,7 +3819,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
                }
 
                log_init_pg_hdr(log, page_size, page_size, 1, 1);
-               log_create(log, l_size, 0, get_random_int(), false, false);
+               log_create(log, l_size, 0, get_random_u32(), false, false);
 
                log->ra = ra;
 
@@ -3893,7 +3893,7 @@ check_restart_area:
 
                /* Do some checks based on whether we have a valid log page. */
                if (!rst_info.valid_page) {
-                       open_log_count = get_random_int();
+                       open_log_count = get_random_u32();
                        goto init_log_instance;
                }
                open_log_count = le32_to_cpu(ra2->open_log_count);
@@ -4044,7 +4044,7 @@ find_oldest:
                memcpy(ra->clients, Add2Ptr(ra2, t16),
                       le16_to_cpu(ra2->ra_len) - t16);
 
-               log->current_openlog_count = get_random_int();
+               log->current_openlog_count = get_random_u32();
                ra->open_log_count = cpu_to_le32(log->current_openlog_count);
                log->ra_size = offsetof(struct RESTART_AREA, clients) +
                               sizeof(struct CLIENT_REC);
index 961d1cf..05f3298 100644 (file)
@@ -232,6 +232,7 @@ static int ocfs2_mknod(struct user_namespace *mnt_userns,
        handle_t *handle = NULL;
        struct ocfs2_super *osb;
        struct ocfs2_dinode *dirfe;
+       struct ocfs2_dinode *fe = NULL;
        struct buffer_head *new_fe_bh = NULL;
        struct inode *inode = NULL;
        struct ocfs2_alloc_context *inode_ac = NULL;
@@ -382,6 +383,7 @@ static int ocfs2_mknod(struct user_namespace *mnt_userns,
                goto leave;
        }
 
+       fe = (struct ocfs2_dinode *) new_fe_bh->b_data;
        if (S_ISDIR(mode)) {
                status = ocfs2_fill_new_dir(osb, handle, dir, inode,
                                            new_fe_bh, data_ac, meta_ac);
@@ -454,8 +456,11 @@ roll_back:
 leave:
        if (status < 0 && did_quota_inode)
                dquot_free_inode(inode);
-       if (handle)
+       if (handle) {
+               if (status < 0 && fe)
+                       ocfs2_set_links_count(fe, 0);
                ocfs2_commit_trans(osb, handle);
+       }
 
        ocfs2_inode_unlock(dir, 1);
        if (did_block_signals)
@@ -632,18 +637,9 @@ static int ocfs2_mknod_locked(struct ocfs2_super *osb,
                return status;
        }
 
-       status = __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
+       return __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh,
                                    parent_fe_bh, handle, inode_ac,
                                    fe_blkno, suballoc_loc, suballoc_bit);
-       if (status < 0) {
-               u64 bg_blkno = ocfs2_which_suballoc_group(fe_blkno, suballoc_bit);
-               int tmp = ocfs2_free_suballoc_bits(handle, inode_ac->ac_inode,
-                               inode_ac->ac_bh, suballoc_bit, bg_blkno, 1);
-               if (tmp)
-                       mlog_errno(tmp);
-       }
-
-       return status;
 }
 
 static int ocfs2_mkdir(struct user_namespace *mnt_userns,
@@ -2028,8 +2024,11 @@ bail:
                                        ocfs2_clusters_to_bytes(osb->sb, 1));
        if (status < 0 && did_quota_inode)
                dquot_free_inode(inode);
-       if (handle)
+       if (handle) {
+               if (status < 0 && fe)
+                       ocfs2_set_links_count(fe, 0);
                ocfs2_commit_trans(osb, handle);
+       }
 
        ocfs2_inode_unlock(dir, 1);
        if (did_block_signals)
index 8b4f307..8a74cdc 100644 (file)
@@ -902,7 +902,7 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
                goto out_put_mm;
 
        hold_task_mempolicy(priv);
-       vma = mas_find(&mas, 0);
+       vma = mas_find(&mas, ULONG_MAX);
 
        if (unlikely(!vma))
                goto empty_set;
index c57b46a..3125e76 100644 (file)
@@ -24,6 +24,17 @@ static bool ubifs_crypt_empty_dir(struct inode *inode)
        return ubifs_check_dir_empty(inode) == 0;
 }
 
+/**
+ * ubifs_encrypt - Encrypt data.
+ * @inode: inode which refers to the data node
+ * @dn: data node to encrypt
+ * @in_len: length of data to be encrypted
+ * @out_len: allocated memory size for the data area of @dn
+ * @block: logical block number of the block
+ *
+ * This function encrypts the possibly-compressed data in the data node.
+ * The encrypted data length will be stored in @out_len.
+ */
 int ubifs_encrypt(const struct inode *inode, struct ubifs_data_node *dn,
                  unsigned int in_len, unsigned int *out_len, int block)
 {
index fc718f6..3f128b9 100644 (file)
@@ -2467,7 +2467,7 @@ error_dump:
 
 static inline int chance(unsigned int n, unsigned int out_of)
 {
-       return !!((prandom_u32() % out_of) + 1 <= n);
+       return !!(prandom_u32_max(out_of) + 1 <= n);
 
 }
 
@@ -2485,13 +2485,13 @@ static int power_cut_emulated(struct ubifs_info *c, int lnum, int write)
                        if (chance(1, 2)) {
                                d->pc_delay = 1;
                                /* Fail within 1 minute */
-                               delay = prandom_u32() % 60000;
+                               delay = prandom_u32_max(60000);
                                d->pc_timeout = jiffies;
                                d->pc_timeout += msecs_to_jiffies(delay);
                                ubifs_warn(c, "failing after %lums", delay);
                        } else {
                                d->pc_delay = 2;
-                               delay = prandom_u32() % 10000;
+                               delay = prandom_u32_max(10000);
                                /* Fail within 10000 operations */
                                d->pc_cnt_max = delay;
                                ubifs_warn(c, "failing after %lu calls", delay);
@@ -2571,7 +2571,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf,
        unsigned int from, to, ffs = chance(1, 2);
        unsigned char *p = (void *)buf;
 
-       from = prandom_u32() % len;
+       from = prandom_u32_max(len);
        /* Corruption span max to end of write unit */
        to = min(len, ALIGN(from + 1, c->max_write_size));
 
@@ -2581,7 +2581,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf,
        if (ffs)
                memset(p + from, 0xFF, to - from);
        else
-               prandom_bytes(p + from, to - from);
+               get_random_bytes(p + from, to - from);
 
        return to;
 }
index f59acd6..0f29cf2 100644 (file)
@@ -68,13 +68,14 @@ static int inherit_flags(const struct inode *dir, umode_t mode)
  * @c: UBIFS file-system description object
  * @dir: parent directory inode
  * @mode: inode mode flags
+ * @is_xattr: whether the inode is an xattr inode
  *
  * This function finds an unused inode number, allocates new inode and
  * initializes it. Returns new inode in case of success and an error code in
  * case of failure.
  */
 struct inode *ubifs_new_inode(struct ubifs_info *c, struct inode *dir,
-                             umode_t mode)
+                             umode_t mode, bool is_xattr)
 {
        int err;
        struct inode *inode;
@@ -99,10 +100,12 @@ struct inode *ubifs_new_inode(struct ubifs_info *c, struct inode *dir,
                         current_time(inode);
        inode->i_mapping->nrpages = 0;
 
-       err = fscrypt_prepare_new_inode(dir, inode, &encrypted);
-       if (err) {
-               ubifs_err(c, "fscrypt_prepare_new_inode failed: %i", err);
-               goto out_iput;
+       if (!is_xattr) {
+               err = fscrypt_prepare_new_inode(dir, inode, &encrypted);
+               if (err) {
+                       ubifs_err(c, "fscrypt_prepare_new_inode failed: %i", err);
+                       goto out_iput;
+               }
        }
 
        switch (mode & S_IFMT) {
@@ -309,7 +312,7 @@ static int ubifs_create(struct user_namespace *mnt_userns, struct inode *dir,
 
        sz_change = CALC_DENT_SIZE(fname_len(&nm));
 
-       inode = ubifs_new_inode(c, dir, mode);
+       inode = ubifs_new_inode(c, dir, mode, false);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_fname;
@@ -370,7 +373,7 @@ static struct inode *create_whiteout(struct inode *dir, struct dentry *dentry)
        if (err)
                return ERR_PTR(err);
 
-       inode = ubifs_new_inode(c, dir, mode);
+       inode = ubifs_new_inode(c, dir, mode, false);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_free;
@@ -463,7 +466,7 @@ static int ubifs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
                return err;
        }
 
-       inode = ubifs_new_inode(c, dir, mode);
+       inode = ubifs_new_inode(c, dir, mode, false);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_budg;
@@ -873,7 +876,7 @@ out_fname:
 }
 
 /**
- * check_dir_empty - check if a directory is empty or not.
+ * ubifs_check_dir_empty - check if a directory is empty or not.
  * @dir: VFS inode object of the directory to check
  *
  * This function checks if directory @dir is empty. Returns zero if the
@@ -1005,7 +1008,7 @@ static int ubifs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
 
        sz_change = CALC_DENT_SIZE(fname_len(&nm));
 
-       inode = ubifs_new_inode(c, dir, S_IFDIR | mode);
+       inode = ubifs_new_inode(c, dir, S_IFDIR | mode, false);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_fname;
@@ -1092,7 +1095,7 @@ static int ubifs_mknod(struct user_namespace *mnt_userns, struct inode *dir,
 
        sz_change = CALC_DENT_SIZE(fname_len(&nm));
 
-       inode = ubifs_new_inode(c, dir, mode);
+       inode = ubifs_new_inode(c, dir, mode, false);
        if (IS_ERR(inode)) {
                kfree(dev);
                err = PTR_ERR(inode);
@@ -1174,7 +1177,7 @@ static int ubifs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
 
        sz_change = CALC_DENT_SIZE(fname_len(&nm));
 
-       inode = ubifs_new_inode(c, dir, S_IFLNK | S_IRWXUGO);
+       inode = ubifs_new_inode(c, dir, S_IFLNK | S_IRWXUGO, false);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_fname;
index 75dab0a..d025099 100644 (file)
@@ -503,7 +503,7 @@ static void mark_inode_clean(struct ubifs_info *c, struct ubifs_inode *ui)
 static void set_dent_cookie(struct ubifs_info *c, struct ubifs_dent_node *dent)
 {
        if (c->double_hash)
-               dent->cookie = (__force __le32) prandom_u32();
+               dent->cookie = (__force __le32) get_random_u32();
        else
                dent->cookie = 0;
 }
@@ -1472,23 +1472,25 @@ out_free:
  * @block: data block number
  * @dn: data node to re-compress
  * @new_len: new length
+ * @dn_size: size of the data node @dn in memory
  *
  * This function is used when an inode is truncated and the last data node of
  * the inode has to be re-compressed/encrypted and re-written.
  */
 static int truncate_data_node(const struct ubifs_info *c, const struct inode *inode,
                              unsigned int block, struct ubifs_data_node *dn,
-                             int *new_len)
+                             int *new_len, int dn_size)
 {
        void *buf;
-       int err, dlen, compr_type, out_len, old_dlen;
+       int err, dlen, compr_type, out_len, data_size;
 
        out_len = le32_to_cpu(dn->size);
        buf = kmalloc_array(out_len, WORST_COMPR_FACTOR, GFP_NOFS);
        if (!buf)
                return -ENOMEM;
 
-       dlen = old_dlen = le32_to_cpu(dn->ch.len) - UBIFS_DATA_NODE_SZ;
+       dlen = le32_to_cpu(dn->ch.len) - UBIFS_DATA_NODE_SZ;
+       data_size = dn_size - UBIFS_DATA_NODE_SZ;
        compr_type = le16_to_cpu(dn->compr_type);
 
        if (IS_ENCRYPTED(inode)) {
@@ -1508,11 +1510,11 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
        }
 
        if (IS_ENCRYPTED(inode)) {
-               err = ubifs_encrypt(inode, dn, out_len, &old_dlen, block);
+               err = ubifs_encrypt(inode, dn, out_len, &data_size, block);
                if (err)
                        goto out;
 
-               out_len = old_dlen;
+               out_len = data_size;
        } else {
                dn->compr_size = 0;
        }
@@ -1550,6 +1552,7 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
        struct ubifs_trun_node *trun;
        struct ubifs_data_node *dn;
        int err, dlen, len, lnum, offs, bit, sz, sync = IS_SYNC(inode);
+       int dn_size;
        struct ubifs_inode *ui = ubifs_inode(inode);
        ino_t inum = inode->i_ino;
        unsigned int blk;
@@ -1562,10 +1565,13 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
        ubifs_assert(c, S_ISREG(inode->i_mode));
        ubifs_assert(c, mutex_is_locked(&ui->ui_mutex));
 
-       sz = UBIFS_TRUN_NODE_SZ + UBIFS_INO_NODE_SZ +
-            UBIFS_MAX_DATA_NODE_SZ * WORST_COMPR_FACTOR;
+       dn_size = COMPRESSED_DATA_NODE_BUF_SZ;
 
-       sz += ubifs_auth_node_sz(c);
+       if (IS_ENCRYPTED(inode))
+               dn_size += UBIFS_CIPHER_BLOCK_SIZE;
+
+       sz = UBIFS_TRUN_NODE_SZ + UBIFS_INO_NODE_SZ +
+               dn_size + ubifs_auth_node_sz(c);
 
        ino = kmalloc(sz, GFP_NOFS);
        if (!ino)
@@ -1596,15 +1602,15 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
                        if (dn_len <= 0 || dn_len > UBIFS_BLOCK_SIZE) {
                                ubifs_err(c, "bad data node (block %u, inode %lu)",
                                          blk, inode->i_ino);
-                               ubifs_dump_node(c, dn, sz - UBIFS_INO_NODE_SZ -
-                                               UBIFS_TRUN_NODE_SZ);
+                               ubifs_dump_node(c, dn, dn_size);
                                goto out_free;
                        }
 
                        if (dn_len <= dlen)
                                dlen = 0; /* Nothing to do */
                        else {
-                               err = truncate_data_node(c, inode, blk, dn, &dlen);
+                               err = truncate_data_node(c, inode, blk, dn,
+                                               &dlen, dn_size);
                                if (err)
                                        goto out_free;
                        }
index d76a19e..cfbc31f 100644 (file)
@@ -1970,28 +1970,28 @@ static int dbg_populate_lsave(struct ubifs_info *c)
 
        if (!dbg_is_chk_gen(c))
                return 0;
-       if (prandom_u32() & 3)
+       if (prandom_u32_max(4))
                return 0;
 
        for (i = 0; i < c->lsave_cnt; i++)
                c->lsave[i] = c->main_first;
 
        list_for_each_entry(lprops, &c->empty_list, list)
-               c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
        list_for_each_entry(lprops, &c->freeable_list, list)
-               c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
        list_for_each_entry(lprops, &c->frdi_idx_list, list)
-               c->lsave[prandom_u32() % c->lsave_cnt] = lprops->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
 
        heap = &c->lpt_heap[LPROPS_DIRTY_IDX - 1];
        for (i = 0; i < heap->cnt; i++)
-               c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
        heap = &c->lpt_heap[LPROPS_DIRTY - 1];
        for (i = 0; i < heap->cnt; i++)
-               c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
        heap = &c->lpt_heap[LPROPS_FREE - 1];
        for (i = 0; i < heap->cnt; i++)
-               c->lsave[prandom_u32() % c->lsave_cnt] = heap->arr[i]->lnum;
+               c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
 
        return 1;
 }
index 58c92c9..01362ad 100644 (file)
@@ -700,7 +700,7 @@ static int alloc_idx_lebs(struct ubifs_info *c, int cnt)
                c->ilebs[c->ileb_cnt++] = lnum;
                dbg_cmt("LEB %d", lnum);
        }
-       if (dbg_is_chk_index(c) && !(prandom_u32() & 7))
+       if (dbg_is_chk_index(c) && !prandom_u32_max(8))
                return -ENOSPC;
        return 0;
 }
index 7d6d2f1..478bbbb 100644 (file)
@@ -2026,7 +2026,7 @@ int ubifs_update_time(struct inode *inode, struct timespec64 *time, int flags);
 
 /* dir.c */
 struct inode *ubifs_new_inode(struct ubifs_info *c, struct inode *dir,
-                             umode_t mode);
+                             umode_t mode, bool is_xattr);
 int ubifs_getattr(struct user_namespace *mnt_userns, const struct path *path, struct kstat *stat,
                  u32 request_mask, unsigned int flags);
 int ubifs_check_dir_empty(struct inode *dir);
index e4c4761..3db8486 100644 (file)
@@ -110,7 +110,7 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
        if (err)
                return err;
 
-       inode = ubifs_new_inode(c, host, S_IFREG | S_IRWXUGO);
+       inode = ubifs_new_inode(c, host, S_IFREG | S_IRWXUGO, true);
        if (IS_ERR(inode)) {
                err = PTR_ERR(inode);
                goto out_budg;
index e2bdf08..6261599 100644 (file)
@@ -1520,7 +1520,7 @@ xfs_alloc_ag_vextent_lastblock(
 
 #ifdef DEBUG
        /* Randomly don't execute the first algorithm. */
-       if (prandom_u32() & 1)
+       if (prandom_u32_max(2))
                return 0;
 #endif
 
index 6cdfd64..94db50e 100644 (file)
@@ -636,7 +636,7 @@ xfs_ialloc_ag_alloc(
        /* randomly do sparse inode allocations */
        if (xfs_has_sparseinodes(tp->t_mountp) &&
            igeo->ialloc_min_blks < igeo->ialloc_blks)
-               do_sparse = prandom_u32() & 1;
+               do_sparse = prandom_u32_max(2);
 #endif
 
        /*
@@ -805,7 +805,7 @@ sparse_alloc:
         * number from being easily guessable.
         */
        error = xfs_ialloc_inode_init(args.mp, tp, NULL, newlen, pag->pag_agno,
-                       args.agbno, args.len, prandom_u32());
+                       args.agbno, args.len, get_random_u32());
 
        if (error)
                return error;
index 296faa4..7db588e 100644 (file)
@@ -274,7 +274,7 @@ xfs_errortag_test(
 
        ASSERT(error_tag < XFS_ERRTAG_MAX);
        randfactor = mp->m_errortag[error_tag];
-       if (!randfactor || prandom_u32() % randfactor)
+       if (!randfactor || prandom_u32_max(randfactor))
                return false;
 
        xfs_warn_ratelimited(mp,
index 2bbe791..eae7427 100644 (file)
@@ -596,7 +596,7 @@ xfs_iget_cache_miss(
         */
        if (xfs_has_v3inodes(mp) &&
            (flags & XFS_IGET_CREATE) && !xfs_has_ikeep(mp)) {
-               VFS_I(ip)->i_generation = prandom_u32();
+               VFS_I(ip)->i_generation = get_random_u32();
        } else {
                struct xfs_buf          *bp;
 
index f6e7e4f..f02a0dd 100644 (file)
@@ -3544,7 +3544,7 @@ xlog_ticket_alloc(
        tic->t_curr_res         = unit_res;
        tic->t_cnt              = cnt;
        tic->t_ocnt             = cnt;
-       tic->t_tid              = prandom_u32();
+       tic->t_tid              = get_random_u32();
        if (permanent)
                tic->t_flags |= XLOG_TIC_PERM_RESERV;
 
index 34fb343..292a5c4 100644 (file)
@@ -71,7 +71,7 @@ int ghes_register_vendor_record_notifier(struct notifier_block *nb);
 void ghes_unregister_vendor_record_notifier(struct notifier_block *nb);
 #endif
 
-int ghes_estatus_pool_init(int num_ghes);
+int ghes_estatus_pool_init(unsigned int num_ghes);
 
 /* From drivers/edac/ghes_edac.c */
 
index c15de16..d06ada2 100644 (file)
 #define PATCHABLE_DISCARDS     *(__patchable_function_entries)
 #endif
 
+#ifndef CONFIG_ARCH_SUPPORTS_CFI_CLANG
+/*
+ * Simply points to ftrace_stub, but with the proper protocol.
+ * Defined by the linker script in linux/vmlinux.lds.h
+ */
+#define        FTRACE_STUB_HACK        ftrace_stub_graph = ftrace_stub;
+#else
+#define FTRACE_STUB_HACK
+#endif
+
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 /*
  * The ftrace call sites are logged to a section whose name depends on the
  * FTRACE_CALLSITE_SECTION. We capture all of them here to avoid header
  * dependencies for FTRACE_CALLSITE_SECTION's definition.
  *
- * Need to also make ftrace_stub_graph point to ftrace_stub
- * so that the same stub location may have different protocols
- * and not mess up with C verifiers.
- *
  * ftrace_ops_list_func will be defined as arch_ftrace_ops_list_func
  * as some archs will have a different prototype for that function
  * but ftrace_ops_list_func() will have a single prototype.
                        KEEP(*(__mcount_loc))                   \
                        KEEP_PATCHABLE                          \
                        __stop_mcount_loc = .;                  \
-                       ftrace_stub_graph = ftrace_stub;        \
+                       FTRACE_STUB_HACK                        \
                        ftrace_ops_list_func = arch_ftrace_ops_list_func;
 #else
 # ifdef CONFIG_FUNCTION_TRACER
-#  define MCOUNT_REC() ftrace_stub_graph = ftrace_stub;        \
+#  define MCOUNT_REC() FTRACE_STUB_HACK                        \
                        ftrace_ops_list_func = arch_ftrace_ops_list_func;
 # else
 #  define MCOUNT_REC()
index 599855c..2ae4fd6 100644 (file)
 
 #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
 
+/**
+ * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining
+ *
+ * Setting this flag on a scheduler fence prevents pipelining of jobs depending
+ * on this fence. In other words, we always insert a full CPU round trip before
+ * dependent jobs are pushed to the hw queue.
+ */
+#define DRM_SCHED_FENCE_DONT_PIPELINE  DMA_FENCE_FLAG_USER_BITS
+
 struct drm_gem_object;
 
 struct drm_gpu_scheduler;
index 3e187a0..50e358a 100644 (file)
@@ -580,9 +580,9 @@ struct request_queue {
 #define QUEUE_FLAG_NOWAIT       29     /* device supports NOWAIT */
 #define QUEUE_FLAG_SQ_SCHED     30     /* single queue style io dispatch */
 
-#define QUEUE_FLAG_MQ_DEFAULT  ((1 << QUEUE_FLAG_IO_STAT) |            \
-                                (1 << QUEUE_FLAG_SAME_COMP) |          \
-                                (1 << QUEUE_FLAG_NOWAIT))
+#define QUEUE_FLAG_MQ_DEFAULT  ((1UL << QUEUE_FLAG_IO_STAT) |          \
+                                (1UL << QUEUE_FLAG_SAME_COMP) |        \
+                                (1UL << QUEUE_FLAG_NOWAIT))
 
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
 void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
index 9e7d46d..0566705 100644 (file)
@@ -27,6 +27,7 @@
 #include <linux/bpfptr.h>
 #include <linux/btf.h>
 #include <linux/rcupdate_trace.h>
+#include <linux/init.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -970,6 +971,8 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
                                          struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+int __init bpf_arch_init_dispatcher_early(void *ip);
+
 #define BPF_DISPATCHER_INIT(_name) {                           \
        .mutex = __MUTEX_INITIALIZER(_name.mutex),              \
        .func = &_name##_func,                                  \
@@ -983,6 +986,13 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
        },                                                      \
 }
 
+#define BPF_DISPATCHER_INIT_CALL(_name)                                        \
+       static int __init _name##_init(void)                            \
+       {                                                               \
+               return bpf_arch_init_dispatcher_early(_name##_func);    \
+       }                                                               \
+       early_initcall(_name##_init)
+
 #ifdef CONFIG_X86_64
 #define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5)))
 #else
@@ -1000,7 +1010,9 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
        }                                                               \
        EXPORT_SYMBOL(bpf_dispatcher_##name##_func);                    \
        struct bpf_dispatcher bpf_dispatcher_##name =                   \
-               BPF_DISPATCHER_INIT(bpf_dispatcher_##name);
+               BPF_DISPATCHER_INIT(bpf_dispatcher_##name);             \
+       BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name);
+
 #define DECLARE_BPF_DISPATCHER(name)                                   \
        unsigned int bpf_dispatcher_##name##_func(                      \
                const void *ctx,                                        \
index 8f481d1..6e01f10 100644 (file)
@@ -428,6 +428,9 @@ struct cgroup {
        struct cgroup_file procs_file;  /* handle for "cgroup.procs" */
        struct cgroup_file events_file; /* handle for "cgroup.events" */
 
+       /* handles for "{cpu,memory,io,irq}.pressure" */
+       struct cgroup_file psi_files[NR_PSI_RESOURCES];
+
        /*
         * The bitmask of subsystems enabled on the child cgroups.
         * ->subtree_control is the one configured through
index 23b102b..528bd44 100644 (file)
@@ -106,6 +106,7 @@ struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
 
 struct cgroup *cgroup_get_from_path(const char *path);
 struct cgroup *cgroup_get_from_fd(int fd);
+struct cgroup *cgroup_v1v2_get_from_fd(int fd);
 
 int cgroup_attach_task_all(struct task_struct *from, struct task_struct *);
 int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from);
@@ -682,11 +683,6 @@ static inline void pr_cont_cgroup_path(struct cgroup *cgrp)
        pr_cont_kernfs_path(cgrp->kn);
 }
 
-static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
-{
-       return cgrp->psi;
-}
-
 bool cgroup_psi_enabled(void);
 
 static inline void cgroup_init_kthreadd(void)
index 2108b56..267cd06 100644 (file)
@@ -42,6 +42,8 @@ struct dentry;
  * struct clk_rate_request - Structure encoding the clk constraints that
  * a clock user might require.
  *
+ * Should be initialized by calling clk_hw_init_rate_request().
+ *
  * @rate:              Requested clock rate. This field will be adjusted by
  *                     clock drivers according to hardware capabilities.
  * @min_rate:          Minimum rate imposed by clk users.
@@ -60,6 +62,15 @@ struct clk_rate_request {
        struct clk_hw *best_parent_hw;
 };
 
+void clk_hw_init_rate_request(const struct clk_hw *hw,
+                             struct clk_rate_request *req,
+                             unsigned long rate);
+void clk_hw_forward_rate_request(const struct clk_hw *core,
+                                const struct clk_rate_request *old_req,
+                                const struct clk_hw *parent,
+                                struct clk_rate_request *req,
+                                unsigned long parent_rate);
+
 /**
  * struct clk_duty - Struture encoding the duty cycle ratio of a clock
  *
@@ -118,8 +129,9 @@ struct clk_duty {
  *
  * @recalc_rate        Recalculate the rate of this clock, by querying hardware. The
  *             parent rate is an input parameter.  It is up to the caller to
- *             ensure that the prepare_mutex is held across this call.
- *             Returns the calculated rate.  Optional, but recommended - if
+ *             ensure that the prepare_mutex is held across this call. If the
+ *             driver cannot figure out a rate for this clock, it must return
+ *             0. Returns the calculated rate. Optional, but recommended - if
  *             this op is not set then clock rate will be initialized to 0.
  *
  * @round_rate:        Given a target rate as input, returns the closest rate actually
@@ -1303,6 +1315,8 @@ int clk_mux_determine_rate_flags(struct clk_hw *hw,
                                 struct clk_rate_request *req,
                                 unsigned long flags);
 void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent);
+void clk_hw_get_rate_range(struct clk_hw *hw, unsigned long *min_rate,
+                          unsigned long *max_rate);
 void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate,
                           unsigned long max_rate);
 
index c13061c..1ef0133 100644 (file)
@@ -799,7 +799,7 @@ int clk_set_rate_exclusive(struct clk *clk, unsigned long rate);
  *
  * Returns true if @parent is a possible parent for @clk, false otherwise.
  */
-bool clk_has_parent(struct clk *clk, struct clk *parent);
+bool clk_has_parent(const struct clk *clk, const struct clk *parent);
 
 /**
  * clk_set_rate_range - set a rate range for a clock source
index 3484309..7af499b 100644 (file)
@@ -12,6 +12,8 @@
 #ifndef AT91_PMC_H
 #define AT91_PMC_H
 
+#include <linux/bits.h>
+
 #define AT91_PMC_V1            (1)                     /* PMC version 1 */
 #define AT91_PMC_V2            (2)                     /* PMC version 2 [SAM9X60] */
 
@@ -45,8 +47,8 @@
 #define        AT91_PMC_PCSR           0x18                    /* Peripheral Clock Status Register */
 
 #define AT91_PMC_PLL_ACR       0x18                    /* PLL Analog Control Register [for SAM9X60] */
-#define                AT91_PMC_PLL_ACR_DEFAULT_UPLL   0x12020010UL    /* Default PLL ACR value for UPLL */
-#define                AT91_PMC_PLL_ACR_DEFAULT_PLLA   0x00020010UL    /* Default PLL ACR value for PLLA */
+#define                AT91_PMC_PLL_ACR_DEFAULT_UPLL   UL(0x12020010)  /* Default PLL ACR value for UPLL */
+#define                AT91_PMC_PLL_ACR_DEFAULT_PLLA   UL(0x00020010)  /* Default PLL ACR value for PLLA */
 #define                AT91_PMC_PLL_ACR_UTMIVR         (1 << 12)       /* UPLL Voltage regulator Control */
 #define                AT91_PMC_PLL_ACR_UTMIBG         (1 << 13)       /* UPLL Bandgap Control */
 
index a64d034..eaf95ca 100644 (file)
@@ -8,6 +8,20 @@
 #ifndef __LINUX_CLK_SPEAR_H
 #define __LINUX_CLK_SPEAR_H
 
+#ifdef CONFIG_ARCH_SPEAR3XX
+void __init spear3xx_clk_init(void __iomem *misc_base,
+                             void __iomem *soc_config_base);
+#else
+static inline void __init spear3xx_clk_init(void __iomem *misc_base,
+                                           void __iomem *soc_config_base) {}
+#endif
+
+#ifdef CONFIG_ARCH_SPEAR6XX
+void __init spear6xx_clk_init(void __iomem *misc_base);
+#else
+static inline void __init spear6xx_clk_init(void __iomem *misc_base) {}
+#endif
+
 #ifdef CONFIG_MACH_SPEAR1310
 void __init spear1310_clk_init(void __iomem *misc_base, void __iomem *ras_base);
 #else
index 2f065ad..c2aa0aa 100644 (file)
@@ -174,8 +174,9 @@ static inline unsigned int cpumask_last(const struct cpumask *srcp)
 static inline
 unsigned int cpumask_next(int n, const struct cpumask *srcp)
 {
-       /* n is a prior cpu */
-       cpumask_check(n + 1);
+       /* -1 is a legal arg here. */
+       if (n != -1)
+               cpumask_check(n);
        return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1);
 }
 
@@ -188,8 +189,9 @@ unsigned int cpumask_next(int n, const struct cpumask *srcp)
  */
 static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 {
-       /* n is a prior cpu */
-       cpumask_check(n + 1);
+       /* -1 is a legal arg here. */
+       if (n != -1)
+               cpumask_check(n);
        return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
@@ -229,8 +231,9 @@ static inline
 unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
                     const struct cpumask *src2p)
 {
-       /* n is a prior cpu */
-       cpumask_check(n + 1);
+       /* -1 is a legal arg here. */
+       if (n != -1)
+               cpumask_check(n);
        return find_next_and_bit(cpumask_bits(src1p), cpumask_bits(src2p),
                nr_cpumask_bits, n + 1);
 }
@@ -260,8 +263,8 @@ static inline
 unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
 {
        cpumask_check(start);
-       /* n is a prior cpu */
-       cpumask_check(n + 1);
+       if (n != -1)
+               cpumask_check(n);
 
        /*
         * Return the first available CPU when wrapping, or when starting before cpu0,
index ed5470f..620ada0 100644 (file)
@@ -484,6 +484,12 @@ static inline struct damon_region *damon_first_region(struct damon_target *t)
        return list_first_entry(&t->regions_list, struct damon_region, list);
 }
 
+static inline unsigned long damon_sz_region(struct damon_region *r)
+{
+       return r->ar.end - r->ar.start;
+}
+
 #define damon_for_each_region(r, t) \
        list_for_each_entry(r, &t->regions_list, list)
 
index 50be7cb..b1b5720 100644 (file)
@@ -61,9 +61,9 @@ struct sk_buff;
 
 /* Special struct emulating a Ethernet header */
 struct qca_mgmt_ethhdr {
-       u32 command;            /* command bit 31:0 */
-       u32 seq;                /* seq 63:32 */
-       u32 mdio_data;          /* first 4byte mdio */
+       __le32 command;         /* command bit 31:0 */
+       __le32 seq;             /* seq 63:32 */
+       __le32 mdio_data;               /* first 4byte mdio */
        __be16 hdr;             /* qca hdr */
 } __packed;
 
@@ -73,7 +73,7 @@ enum mdio_cmd {
 };
 
 struct mib_ethhdr {
-       u32 data[3];            /* first 3 mib counter */
+       __le32 data[3];         /* first 3 mib counter */
        __be16 hdr;             /* qca hdr */
 } __packed;
 
index da3974b..80f3c1c 100644 (file)
@@ -1085,9 +1085,6 @@ efi_status_t efivar_set_variable_locked(efi_char16_t *name, efi_guid_t *vendor,
 efi_status_t efivar_set_variable(efi_char16_t *name, efi_guid_t *vendor,
                                 u32 attr, unsigned long data_size, void *data);
 
-efi_status_t check_var_size(u32 attributes, unsigned long size);
-efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size);
-
 #if IS_ENABLED(CONFIG_EFI_CAPSULE_LOADER)
 extern bool efi_capsule_pending(int *reset_type);
 
index aa4d90a..f5b687a 100644 (file)
@@ -34,9 +34,6 @@ struct io_file_table {
        unsigned int alloc_hint;
 };
 
-struct io_notif;
-struct io_notif_slot;
-
 struct io_hash_bucket {
        spinlock_t              lock;
        struct hlist_head       list;
@@ -242,8 +239,6 @@ struct io_ring_ctx {
                unsigned                nr_user_files;
                unsigned                nr_user_bufs;
                struct io_mapped_ubuf   **user_bufs;
-               struct io_notif_slot    *notif_slots;
-               unsigned                nr_notif_slots;
 
                struct io_submit_state  submit_state;
 
index a325532..3c9da1f 100644 (file)
@@ -455,7 +455,7 @@ extern void iommu_set_default_translated(bool cmd_line);
 extern bool iommu_default_passthrough(void);
 extern struct iommu_resv_region *
 iommu_alloc_resv_region(phys_addr_t start, size_t length, int prot,
-                       enum iommu_resv_type type);
+                       enum iommu_resv_type type, gfp_t gfp);
 extern int iommu_get_group_resv_regions(struct iommu_group *group,
                                        struct list_head *head);
 
index 32f259f..00c3448 100644 (file)
@@ -1390,6 +1390,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
                            struct kvm_enable_cap *cap);
 long kvm_arch_vm_ioctl(struct file *filp,
                       unsigned int ioctl, unsigned long arg);
+long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+                             unsigned long arg);
 
 int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
 int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
index c3b4cc8..7fcaf31 100644 (file)
@@ -187,6 +187,7 @@ static inline bool folio_is_device_coherent(const struct folio *folio)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
+void zone_device_page_init(struct page *page);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
index 704a04f..3ef77f5 100644 (file)
@@ -62,6 +62,8 @@ extern const char *migrate_reason_names[MR_TYPES];
 #ifdef CONFIG_MIGRATION
 
 extern void putback_movable_pages(struct list_head *l);
+int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
+               struct folio *src, enum migrate_mode mode, int extra_count);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
                struct folio *src, enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
@@ -197,11 +199,24 @@ struct migrate_vma {
         */
        void                    *pgmap_owner;
        unsigned long           flags;
+
+       /*
+        * Set to vmf->page if this is being called to migrate a page as part of
+        * a migrate_to_ram() callback.
+        */
+       struct page             *fault_page;
 };
 
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
+int migrate_device_range(unsigned long *src_pfns, unsigned long start,
+                       unsigned long npages);
+void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
+                       unsigned long npages);
+void migrate_device_finalize(unsigned long *src_pfns,
+                       unsigned long *dst_pfns, unsigned long npages);
+
 #endif /* CONFIG_MIGRATION */
 
 #endif /* _LINUX_MIGRATE_H */
index 8a30de0..c726ea7 100644 (file)
@@ -293,6 +293,7 @@ struct mmc_card {
 #define MMC_QUIRK_BROKEN_IRQ_POLLING   (1<<11) /* Polling SDIO_CCCR_INTx could create a fake interrupt */
 #define MMC_QUIRK_TRIM_BROKEN  (1<<12)         /* Skip trim */
 #define MMC_QUIRK_BROKEN_HPI   (1<<13)         /* Disable broken HPI support */
+#define MMC_QUIRK_BROKEN_SD_DISCARD    (1<<14) /* Disable broken SD discard support */
 
        bool                    reenable_cmdq;  /* Re-enable Command Queue */
 
index 711c359..b73ad8e 100644 (file)
@@ -41,6 +41,8 @@ struct net;
 #define SOCK_NOSPACE           2
 #define SOCK_PASSCRED          3
 #define SOCK_PASSSEC           4
+#define SOCK_SUPPORT_ZC                5
+#define SOCK_CUSTOM_SOCKOPT    6
 
 #ifndef ARCH_HAS_SOCKET_TYPES
 /**
index a36edb0..eddf8ee 100644 (file)
@@ -3663,8 +3663,9 @@ static inline bool netif_attr_test_online(unsigned long j,
 static inline unsigned int netif_attrmask_next(int n, const unsigned long *srcp,
                                               unsigned int nr_bits)
 {
-       /* n is a prior cpu */
-       cpu_max_bits_warn(n + 1, nr_bits);
+       /* -1 is a legal arg here. */
+       if (n != -1)
+               cpu_max_bits_warn(n, nr_bits);
 
        if (srcp)
                return find_next_bit(srcp, nr_bits, n + 1);
@@ -3685,8 +3686,9 @@ static inline int netif_attrmask_next_and(int n, const unsigned long *src1p,
                                          const unsigned long *src2p,
                                          unsigned int nr_bits)
 {
-       /* n is a prior cpu */
-       cpu_max_bits_warn(n + 1, nr_bits);
+       /* -1 is a legal arg here. */
+       if (n != -1)
+               cpu_max_bits_warn(n, nr_bits);
 
        if (src1p && src2p)
                return find_next_and_bit(src1p, src2p, nr_bits, n + 1);
index d51e041..d81bde5 100644 (file)
@@ -64,6 +64,7 @@ netlink_kernel_create(struct net *net, int unit, struct netlink_kernel_cfg *cfg)
 
 /* this can be increased when necessary - don't expose to userland */
 #define NETLINK_MAX_COOKIE_LEN 20
+#define NETLINK_MAX_FMTMSG_LEN 80
 
 /**
  * struct netlink_ext_ack - netlink extended ACK report struct
@@ -75,6 +76,8 @@ netlink_kernel_create(struct net *net, int unit, struct netlink_kernel_cfg *cfg)
  * @miss_nest: nest missing an attribute (%NULL if missing top level attr)
  * @cookie: cookie data to return to userspace (for success)
  * @cookie_len: actual cookie data length
+ * @_msg_buf: output buffer for formatted message strings - don't access
+ *     directly, use %NL_SET_ERR_MSG_FMT
  */
 struct netlink_ext_ack {
        const char *_msg;
@@ -84,13 +87,13 @@ struct netlink_ext_ack {
        u16 miss_type;
        u8 cookie[NETLINK_MAX_COOKIE_LEN];
        u8 cookie_len;
+       char _msg_buf[NETLINK_MAX_FMTMSG_LEN];
 };
 
 /* Always use this macro, this allows later putting the
  * message into a separate section or such for things
  * like translation or listing all possible messages.
- * Currently string formatting is not supported (due
- * to the lack of an output buffer.)
+ * If string formatting is needed use NL_SET_ERR_MSG_FMT.
  */
 #define NL_SET_ERR_MSG(extack, msg) do {               \
        static const char __msg[] = msg;                \
@@ -102,9 +105,31 @@ struct netlink_ext_ack {
                __extack->_msg = __msg;                 \
 } while (0)
 
+/* We splice fmt with %s at each end even in the snprintf so that both calls
+ * can use the same string constant, avoiding its duplication in .ro
+ */
+#define NL_SET_ERR_MSG_FMT(extack, fmt, args...) do {                         \
+       struct netlink_ext_ack *__extack = (extack);                           \
+                                                                              \
+       if (!__extack)                                                         \
+               break;                                                         \
+       if (snprintf(__extack->_msg_buf, NETLINK_MAX_FMTMSG_LEN,               \
+                    "%s" fmt "%s", "", ##args, "") >=                         \
+           NETLINK_MAX_FMTMSG_LEN)                                            \
+               net_warn_ratelimited("%s" fmt "%s", "truncated extack: ",      \
+                                    ##args, "\n");                            \
+                                                                              \
+       do_trace_netlink_extack(__extack->_msg_buf);                           \
+                                                                              \
+       __extack->_msg = __extack->_msg_buf;                                   \
+} while (0)
+
 #define NL_SET_ERR_MSG_MOD(extack, msg)                        \
        NL_SET_ERR_MSG((extack), KBUILD_MODNAME ": " msg)
 
+#define NL_SET_ERR_MSG_FMT_MOD(extack, fmt, args...)   \
+       NL_SET_ERR_MSG_FMT((extack), KBUILD_MODNAME ": " fmt, ##args)
+
 #define NL_SET_BAD_ATTR_POLICY(extack, attr, pol) do { \
        if ((extack)) {                                 \
                (extack)->bad_attr = (attr);            \
index 378956c..efef68c 100644 (file)
@@ -516,7 +516,7 @@ static inline int node_random(const nodemask_t *maskp)
                bit = first_node(*maskp);
                break;
        default:
-               bit = find_nth_bit(maskp->bits, MAX_NUMNODES, get_random_int() % w);
+               bit = find_nth_bit(maskp->bits, MAX_NUMNODES, prandom_u32_max(w));
                break;
        }
        return bit;
index 853f64b..0031f7b 100644 (file)
@@ -756,11 +756,14 @@ struct perf_event {
        struct fasync_struct            *fasync;
 
        /* delayed work for NMIs and such */
-       int                             pending_wakeup;
-       int                             pending_kill;
-       int                             pending_disable;
+       unsigned int                    pending_wakeup;
+       unsigned int                    pending_kill;
+       unsigned int                    pending_disable;
+       unsigned int                    pending_sigtrap;
        unsigned long                   pending_addr;   /* SIGTRAP */
-       struct irq_work                 pending;
+       struct irq_work                 pending_irq;
+       struct callback_head            pending_task;
+       unsigned int                    pending_work;
 
        atomic_t                        event_limit;
 
@@ -877,6 +880,14 @@ struct perf_event_context {
 #endif
        void                            *task_ctx_data; /* pmu specific data */
        struct rcu_head                 rcu_head;
+
+       /*
+        * Sum (event->pending_sigtrap + event->pending_work)
+        *
+        * The SIGTRAP is targeted at ctx->task; as such, ctx->task must not
+        * change until the signal is delivered.
+        */
+       local_t                         nr_pending;
 };
 
 /*
index c29c3f1..63800bf 100644 (file)
@@ -122,6 +122,7 @@ enum phylink_op_type {
  *     (See commit 7cceb599d15d ("net: phylink: avoid mac_config calls")
  * @poll_fixed_state: if true, starts link_poll,
  *                   if MAC link is at %MLO_AN_FIXED mode.
+ * @mac_managed_pm: if true, indicate the MAC driver is responsible for PHY PM.
  * @ovr_an_inband: if true, override PCS to MLO_AN_INBAND
  * @get_fixed_state: callback to execute to determine the fixed link state,
  *                  if MAC link is at %MLO_AN_FIXED mode.
@@ -134,6 +135,7 @@ struct phylink_config {
        enum phylink_op_type type;
        bool legacy_pre_march2020;
        bool poll_fixed_state;
+       bool mac_managed_pm;
        bool ovr_an_inband;
        void (*get_fixed_state)(struct phylink_config *config,
                                struct phylink_link_state *state);
index 78db003..e0a0759 100644 (file)
 #include <linux/percpu.h>
 #include <linux/random.h>
 
-/* Deprecated: use get_random_u32 instead. */
-static inline u32 prandom_u32(void)
-{
-       return get_random_u32();
-}
-
-/* Deprecated: use get_random_bytes instead. */
-static inline void prandom_bytes(void *buf, size_t nbytes)
-{
-       return get_random_bytes(buf, nbytes);
-}
-
 struct rnd_state {
        __u32 s1, s2, s3, s4;
 };
index dd74411..b029a84 100644 (file)
@@ -7,6 +7,7 @@
 #include <linux/sched.h>
 #include <linux/poll.h>
 #include <linux/cgroup-defs.h>
+#include <linux/cgroup.h>
 
 struct seq_file;
 struct css_set;
@@ -18,10 +19,6 @@ extern struct psi_group psi_system;
 
 void psi_init(void);
 
-void psi_task_change(struct task_struct *task, int clear, int set);
-void psi_task_switch(struct task_struct *prev, struct task_struct *next,
-                    bool sleep);
-
 void psi_memstall_enter(unsigned long *flags);
 void psi_memstall_leave(unsigned long *flags);
 
@@ -34,9 +31,15 @@ __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
                        poll_table *wait);
 
 #ifdef CONFIG_CGROUPS
+static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
+{
+       return cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+}
+
 int psi_cgroup_alloc(struct cgroup *cgrp);
 void psi_cgroup_free(struct cgroup *cgrp);
 void cgroup_move_task(struct task_struct *p, struct css_set *to);
+void psi_cgroup_restart(struct psi_group *group);
 #endif
 
 #else /* CONFIG_PSI */
@@ -58,6 +61,7 @@ static inline void cgroup_move_task(struct task_struct *p, struct css_set *to)
 {
        rcu_assign_pointer(p->cgroups, to);
 }
+static inline void psi_cgroup_restart(struct psi_group *group) {}
 #endif
 
 #endif /* CONFIG_PSI */
index c7fe7c0..6e43727 100644 (file)
@@ -16,13 +16,6 @@ enum psi_task_count {
        NR_MEMSTALL,
        NR_RUNNING,
        /*
-        * This can't have values other than 0 or 1 and could be
-        * implemented as a bit flag. But for now we still have room
-        * in the first cacheline of psi_group_cpu, and this way we
-        * don't have to special case any state tracking for it.
-        */
-       NR_ONCPU,
-       /*
         * For IO and CPU stalls the presence of running/oncpu tasks
         * in the domain means a partial rather than a full stall.
         * For memory it's not so simple because of page reclaimers:
@@ -32,22 +25,27 @@ enum psi_task_count {
         * threads and memstall ones.
         */
        NR_MEMSTALL_RUNNING,
-       NR_PSI_TASK_COUNTS = 5,
+       NR_PSI_TASK_COUNTS = 4,
 };
 
 /* Task state bitmasks */
 #define TSK_IOWAIT     (1 << NR_IOWAIT)
 #define TSK_MEMSTALL   (1 << NR_MEMSTALL)
 #define TSK_RUNNING    (1 << NR_RUNNING)
-#define TSK_ONCPU      (1 << NR_ONCPU)
 #define TSK_MEMSTALL_RUNNING   (1 << NR_MEMSTALL_RUNNING)
 
+/* Only one task can be scheduled, no corresponding task count */
+#define TSK_ONCPU      (1 << NR_PSI_TASK_COUNTS)
+
 /* Resources that workloads could be stalled on */
 enum psi_res {
        PSI_IO,
        PSI_MEM,
        PSI_CPU,
-       NR_PSI_RESOURCES = 3,
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+       PSI_IRQ,
+#endif
+       NR_PSI_RESOURCES,
 };
 
 /*
@@ -63,11 +61,17 @@ enum psi_states {
        PSI_MEM_FULL,
        PSI_CPU_SOME,
        PSI_CPU_FULL,
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+       PSI_IRQ_FULL,
+#endif
        /* Only per-CPU, to weigh the CPU in the global average: */
        PSI_NONIDLE,
-       NR_PSI_STATES = 7,
+       NR_PSI_STATES,
 };
 
+/* Use one bit in the state mask to track TSK_ONCPU */
+#define PSI_ONCPU      (1 << NR_PSI_STATES)
+
 enum psi_aggregators {
        PSI_AVGS = 0,
        PSI_POLL,
@@ -147,6 +151,9 @@ struct psi_trigger {
 };
 
 struct psi_group {
+       struct psi_group *parent;
+       bool enabled;
+
        /* Protects data used by the aggregator */
        struct mutex avgs_lock;
 
@@ -188,6 +195,8 @@ struct psi_group {
 
 #else /* CONFIG_PSI */
 
+#define NR_PSI_RESOURCES       0
+
 struct psi_group { };
 
 #endif /* CONFIG_PSI */
index 08322f7..147a5e0 100644 (file)
@@ -42,10 +42,6 @@ u8 get_random_u8(void);
 u16 get_random_u16(void);
 u32 get_random_u32(void);
 u64 get_random_u64(void);
-static inline unsigned int get_random_int(void)
-{
-       return get_random_u32();
-}
 static inline unsigned long get_random_long(void)
 {
 #if BITS_PER_LONG == 64
@@ -100,7 +96,6 @@ declare_get_random_var_wait(u8, u8)
 declare_get_random_var_wait(u16, u16)
 declare_get_random_var_wait(u32, u32)
 declare_get_random_var_wait(u64, u32)
-declare_get_random_var_wait(int, unsigned int)
 declare_get_random_var_wait(long, unsigned long)
 #undef declare_get_random_var
 
index 77f68f8..ffb6eb5 100644 (file)
@@ -870,8 +870,6 @@ struct task_struct {
        struct mm_struct                *mm;
        struct mm_struct                *active_mm;
 
-       /* Per-thread vma caching: */
-
 #ifdef SPLIT_RSS_COUNTING
        struct task_rss_stat            rss_stat;
 #endif
index d1f3438..01ae9f1 100644 (file)
@@ -489,6 +489,8 @@ enum {
        SFP_WARN1_RXPWR_LOW             = BIT(6),
 
        SFP_EXT_STATUS                  = 0x76,
+       SFP_EXT_STATUS_PWRLVL_SELECT    = BIT(0),
+
        SFP_VSL                         = 0x78,
        SFP_PAGE                        = 0x7f,
 };
index 9fcf534..59c9fd5 100644 (file)
@@ -803,6 +803,7 @@ typedef unsigned char *sk_buff_data_t;
  *     @csum_level: indicates the number of consecutive checksums found in
  *             the packet minus one that have been verified as
  *             CHECKSUM_UNNECESSARY (max 3)
+ *     @scm_io_uring: SKB holds io_uring registered files
  *     @dst_pending_confirm: need to confirm neighbour
  *     @decrypted: Decrypted SKB
  *     @slow_gro: state present at GRO time, slower prepare step required
@@ -982,6 +983,7 @@ struct sk_buff {
 #endif
        __u8                    slow_gro:1;
        __u8                    csum_not_inet:1;
+       __u8                    scm_io_uring:1;
 
 #ifdef CONFIG_NET_SCHED
        __u16                   tc_index;       /* traffic control index */
@@ -5048,12 +5050,5 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
 }
 #endif
 
-static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
-{
-       if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
-               return false;
-       return page_pool_return_skb_page(virt_to_page(data));
-}
-
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
index e24c9af..f0ffad6 100644 (file)
@@ -33,7 +33,6 @@ struct kmem_cache {
 
        size_t colour;                  /* cache colouring range */
        unsigned int colour_off;        /* colour offset */
-       struct kmem_cache *freelist_cache;
        unsigned int freelist_size;
 
        /* constructor func */
index de3701a..13c3a23 100644 (file)
@@ -33,7 +33,10 @@ typedef __kernel_sa_family_t sa_family_t;
 
 struct sockaddr {
        sa_family_t     sa_family;      /* address family, AF_xxx       */
-       char            sa_data[14];    /* 14 bytes of protocol address */
+       union {
+               char sa_data_min[14];           /* Minimum 14 bytes of protocol address */
+               DECLARE_FLEX_ARRAY(char, sa_data);
+       };
 };
 
 struct linger {
index e96da41..5cdba00 100644 (file)
@@ -87,6 +87,9 @@ struct udp_sock {
 
        /* This field is dirtied by udp_recvmsg() */
        int             forward_deficit;
+
+       /* This field follows the rcvbuf value, and is touched by udp_recvmsg */
+       int             forward_threshold;
 };
 
 #define UDP_MAX_SEGMENTS       (1 << 6UL)
index 2b1737c..bf7613b 100644 (file)
@@ -10,6 +10,7 @@
 #include <uapi/linux/utsname.h>
 
 enum uts_proc {
+       UTS_PROC_ARCH,
        UTS_PROC_OSTYPE,
        UTS_PROC_OSRELEASE,
        UTS_PROC_VERSION,
index 9f47d6a..0b58f8b 100644 (file)
@@ -35,6 +35,7 @@ enum ir_kbd_get_key_fn {
        IR_KBD_GET_KEY_PIXELVIEW,
        IR_KBD_GET_KEY_HAUP,
        IR_KBD_GET_KEY_KNC1,
+       IR_KBD_GET_KEY_GENIATECH,
        IR_KBD_GET_KEY_FUSIONHDTV,
        IR_KBD_GET_KEY_HAUP_XVR,
        IR_KBD_GET_KEY_AVERMEDIA_CARDBUS,
index a10b305..86716ee 100644 (file)
@@ -192,21 +192,6 @@ struct usb_device;
 #define MEDIA_DEV_NOTIFY_POST_LINK_CH  1
 
 /**
- * media_entity_enum_init - Initialise an entity enumeration
- *
- * @ent_enum: Entity enumeration to be initialised
- * @mdev: The related media device
- *
- * Return: zero on success or a negative error code.
- */
-static inline __must_check int media_entity_enum_init(
-       struct media_entity_enum *ent_enum, struct media_device *mdev)
-{
-       return __media_entity_enum_init(ent_enum,
-                                       mdev->entity_internal_idx_max + 1);
-}
-
-/**
  * media_device_init() - Initializes a media device element
  *
  * @mdev:      pointer to struct &media_device
index f16ffe7..28c9de8 100644 (file)
@@ -17,6 +17,7 @@
 #include <linux/fwnode.h>
 #include <linux/list.h>
 #include <linux/media.h>
+#include <linux/minmax.h>
 #include <linux/types.h>
 
 /* Enums used internally at the media controller to represent graphs */
@@ -99,12 +100,34 @@ struct media_graph {
 /**
  * struct media_pipeline - Media pipeline related information
  *
- * @streaming_count:   Streaming start count - streaming stop count
- * @graph:             Media graph walk during pipeline start / stop
+ * @allocated:         Media pipeline allocated and freed by the framework
+ * @mdev:              The media device the pipeline is part of
+ * @pads:              List of media_pipeline_pad
+ * @start_count:       Media pipeline start - stop count
  */
 struct media_pipeline {
-       int streaming_count;
-       struct media_graph graph;
+       bool allocated;
+       struct media_device *mdev;
+       struct list_head pads;
+       int start_count;
+};
+
+/**
+ * struct media_pipeline_pad - A pad part of a media pipeline
+ *
+ * @list:              Entry in the media_pad pads list
+ * @pipe:              The media_pipeline that the pad is part of
+ * @pad:               The media pad
+ *
+ * This structure associates a pad with a media pipeline. Instances of
+ * media_pipeline_pad are created by media_pipeline_start() when it builds the
+ * pipeline, and stored in the &media_pad.pads list. media_pipeline_stop()
+ * removes the entries from the list and deletes them.
+ */
+struct media_pipeline_pad {
+       struct list_head list;
+       struct media_pipeline *pipe;
+       struct media_pad *pad;
 };
 
 /**
@@ -186,6 +209,8 @@ enum media_pad_signal_type {
  * @flags:     Pad flags, as defined in
  *             :ref:`include/uapi/linux/media.h <media_header>`
  *             (seek for ``MEDIA_PAD_FL_*``)
+ * @pipe:      Pipeline this pad belongs to. Use media_entity_pipeline() to
+ *             access this field.
  */
 struct media_pad {
        struct media_gobj graph_obj;    /* must be first field in struct */
@@ -193,6 +218,12 @@ struct media_pad {
        u16 index;
        enum media_pad_signal_type sig_type;
        unsigned long flags;
+
+       /*
+        * The fields below are private, and should only be accessed via
+        * appropriate functions.
+        */
+       struct media_pipeline *pipe;
 };
 
 /**
@@ -206,6 +237,14 @@ struct media_pad {
  * @link_validate:     Return whether a link is valid from the entity point of
  *                     view. The media_pipeline_start() function
  *                     validates all links by calling this operation. Optional.
+ * @has_pad_interdep:  Return whether two pads inside the entity are
+ *                     interdependent. If two pads are interdependent they are
+ *                     part of the same pipeline and enabling one of the pads
+ *                     means that the other pad will become "locked" and
+ *                     doesn't allow configuration changes. pad0 and pad1 are
+ *                     guaranteed to not both be sinks or sources.
+ *                     Optional: If the operation isn't implemented all pads
+ *                     will be considered as interdependent.
  *
  * .. note::
  *
@@ -219,6 +258,8 @@ struct media_entity_operations {
                          const struct media_pad *local,
                          const struct media_pad *remote, u32 flags);
        int (*link_validate)(struct media_link *link);
+       bool (*has_pad_interdep)(struct media_entity *entity, unsigned int pad0,
+                                unsigned int pad1);
 };
 
 /**
@@ -269,7 +310,6 @@ enum media_entity_type {
  * @links:     List of data links.
  * @ops:       Entity operations.
  * @use_count: Use count for the entity.
- * @pipe:      Pipeline this entity belongs to.
  * @info:      Union with devnode information.  Kept just for backward
  *             compatibility.
  * @info.dev:  Contains device major and minor info.
@@ -305,8 +345,6 @@ struct media_entity {
 
        int use_count;
 
-       struct media_pipeline *pipe;
-
        union {
                struct {
                        u32 major;
@@ -316,6 +354,18 @@ struct media_entity {
 };
 
 /**
+ * media_entity_for_each_pad - Iterate on all pads in an entity
+ * @entity: The entity the pads belong to
+ * @iter: The iterator pad
+ *
+ * Iterate on all pads in a media entity.
+ */
+#define media_entity_for_each_pad(entity, iter)                        \
+       for (iter = (entity)->pads;                             \
+            iter < &(entity)->pads[(entity)->num_pads];        \
+            ++iter)
+
+/**
  * struct media_interface - A media interface graph object.
  *
  * @graph_obj:         embedded graph object
@@ -426,15 +476,15 @@ static inline bool is_media_entity_v4l2_subdev(struct media_entity *entity)
 }
 
 /**
- * __media_entity_enum_init - Initialise an entity enumeration
+ * media_entity_enum_init - Initialise an entity enumeration
  *
  * @ent_enum: Entity enumeration to be initialised
- * @idx_max: Maximum number of entities in the enumeration
+ * @mdev: The related media device
  *
- * Return: Returns zero on success or a negative error code.
+ * Return: zero on success or a negative error code.
  */
-__must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum,
-                                         int idx_max);
+__must_check int media_entity_enum_init(struct media_entity_enum *ent_enum,
+                                       struct media_device *mdev);
 
 /**
  * media_entity_enum_cleanup - Release resources of an entity enumeration
@@ -924,6 +974,18 @@ media_entity_remote_source_pad_unique(const struct media_entity *entity)
 }
 
 /**
+ * media_pad_is_streaming - Test if a pad is part of a streaming pipeline
+ * @pad: The pad
+ *
+ * Return: True if the pad is part of a pipeline started with the
+ * media_pipeline_start() function, false otherwise.
+ */
+static inline bool media_pad_is_streaming(const struct media_pad *pad)
+{
+       return pad->pipe;
+}
+
+/**
  * media_entity_is_streaming - Test if an entity is part of a streaming pipeline
  * @entity: The entity
  *
@@ -932,10 +994,50 @@ media_entity_remote_source_pad_unique(const struct media_entity *entity)
  */
 static inline bool media_entity_is_streaming(const struct media_entity *entity)
 {
-       return entity->pipe;
+       struct media_pad *pad;
+
+       media_entity_for_each_pad(entity, pad) {
+               if (media_pad_is_streaming(pad))
+                       return true;
+       }
+
+       return false;
 }
 
 /**
+ * media_entity_pipeline - Get the media pipeline an entity is part of
+ * @entity: The entity
+ *
+ * DEPRECATED: use media_pad_pipeline() instead.
+ *
+ * This function returns the media pipeline that an entity has been associated
+ * with when constructing the pipeline with media_pipeline_start(). The pointer
+ * remains valid until media_pipeline_stop() is called.
+ *
+ * In general, entities can be part of multiple pipelines, when carrying
+ * multiple streams (either on different pads, or on the same pad using
+ * multiplexed streams). This function is to be used only for entities that
+ * do not support multiple pipelines.
+ *
+ * Return: The media_pipeline the entity is part of, or NULL if the entity is
+ * not part of any pipeline.
+ */
+struct media_pipeline *media_entity_pipeline(struct media_entity *entity);
+
+/**
+ * media_pad_pipeline - Get the media pipeline a pad is part of
+ * @pad: The pad
+ *
+ * This function returns the media pipeline that a pad has been associated
+ * with when constructing the pipeline with media_pipeline_start(). The pointer
+ * remains valid until media_pipeline_stop() is called.
+ *
+ * Return: The media_pipeline the pad is part of, or NULL if the pad is
+ * not part of any pipeline.
+ */
+struct media_pipeline *media_pad_pipeline(struct media_pad *pad);
+
+/**
  * media_entity_get_fwnode_pad - Get pad number from fwnode
  *
  * @entity: The entity
@@ -1013,53 +1115,66 @@ struct media_entity *media_graph_walk_next(struct media_graph *graph);
 
 /**
  * media_pipeline_start - Mark a pipeline as streaming
- * @entity: Starting entity
- * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ * @pad: Starting pad
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
  *
- * Mark all entities connected to a given entity through enabled links, either
+ * Mark all pads connected to a given pad through enabled links, either
  * directly or indirectly, as streaming. The given pipeline object is assigned
- * to every entity in the pipeline and stored in the media_entity pipe field.
+ * to every pad in the pipeline and stored in the media_pad pipe field.
  *
  * Calls to this function can be nested, in which case the same number of
  * media_pipeline_stop() calls will be required to stop streaming. The
  * pipeline pointer must be identical for all nested calls to
  * media_pipeline_start().
  */
-__must_check int media_pipeline_start(struct media_entity *entity,
+__must_check int media_pipeline_start(struct media_pad *pad,
                                      struct media_pipeline *pipe);
 /**
  * __media_pipeline_start - Mark a pipeline as streaming
  *
- * @entity: Starting entity
- * @pipe: Media pipeline to be assigned to all entities in the pipeline.
+ * @pad: Starting pad
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
  *
 * .. note:: This is the non-locking version of media_pipeline_start()
  */
-__must_check int __media_pipeline_start(struct media_entity *entity,
+__must_check int __media_pipeline_start(struct media_pad *pad,
                                        struct media_pipeline *pipe);
 
 /**
  * media_pipeline_stop - Mark a pipeline as not streaming
- * @entity: Starting entity
+ * @pad: Starting pad
  *
- * Mark all entities connected to a given entity through enabled links, either
- * directly or indirectly, as not streaming. The media_entity pipe field is
+ * Mark all pads connected to a given pad through enabled links, either
+ * directly or indirectly, as not streaming. The media_pad pipe field is
  * reset to %NULL.
  *
  * If multiple calls to media_pipeline_start() have been made, the same
  * number of calls to this function are required to mark the pipeline as not
  * streaming.
  */
-void media_pipeline_stop(struct media_entity *entity);
+void media_pipeline_stop(struct media_pad *pad);
 
 /**
  * __media_pipeline_stop - Mark a pipeline as not streaming
  *
- * @entity: Starting entity
+ * @pad: Starting pad
  *
  * .. note:: This is the non-locking version of media_pipeline_stop()
  */
-void __media_pipeline_stop(struct media_entity *entity);
+void __media_pipeline_stop(struct media_pad *pad);
+
+/**
+ * media_pipeline_alloc_start - Mark a pipeline as streaming
+ * @pad: Starting pad
+ *
+ * media_pipeline_alloc_start() is similar to media_pipeline_start() but instead
+ * of working on a given pipeline the function will use an existing pipeline if
+ * the pad is already part of a pipeline, or allocate a new pipeline.
+ *
+ * Calls to media_pipeline_alloc_start() must be matched with
+ * media_pipeline_stop().
+ */
+__must_check int media_pipeline_alloc_start(struct media_pad *pad);
 
 /**
  * media_devnode_create() - creates and initializes a device node interface
index 725ff91..1bdaea2 100644 (file)
@@ -175,7 +175,8 @@ struct v4l2_subdev *v4l2_i2c_new_subdev_board(struct v4l2_device *v4l2_dev,
  *
  * @sd: pointer to &struct v4l2_subdev
  * @client: pointer to struct i2c_client
- * @devname: the name of the device; if NULL, the I²C device's name will be used
+ * @devname: the name of the device; if NULL, the I²C device driver's name
+ *           will be used
  * @postfix: sub-device specific string to put right after the I²C device name;
  *          may be NULL
  */
index b76a071..e59d9a2 100644 (file)
@@ -121,21 +121,19 @@ struct v4l2_ctrl_ops {
  * struct v4l2_ctrl_type_ops - The control type operations that the driver
  *                            has to provide.
  *
- * @equal: return true if both values are equal.
- * @init: initialize the value.
+ * @equal: return true if all ctrl->elems array elements are equal.
+ * @init: initialize the value for array elements from from_idx to ctrl->elems.
  * @log: log the value.
- * @validate: validate the value. Return 0 on success and a negative value
- *     otherwise.
+ * @validate: validate the value for ctrl->new_elems array elements.
+ *     Return 0 on success and a negative value otherwise.
  */
 struct v4l2_ctrl_type_ops {
-       bool (*equal)(const struct v4l2_ctrl *ctrl, u32 elems,
-                     union v4l2_ctrl_ptr ptr1,
-                     union v4l2_ctrl_ptr ptr2);
-       void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx, u32 tot_elems,
+       bool (*equal)(const struct v4l2_ctrl *ctrl,
+                     union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2);
+       void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx,
                     union v4l2_ctrl_ptr ptr);
        void (*log)(const struct v4l2_ctrl *ctrl);
-       int (*validate)(const struct v4l2_ctrl *ctrl, u32 elems,
-                       union v4l2_ctrl_ptr ptr);
+       int (*validate)(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
 };
 
 /**
@@ -1543,13 +1541,12 @@ int v4l2_ctrl_new_fwnode_properties(struct v4l2_ctrl_handler *hdl,
  * v4l2_ctrl_type_op_equal - Default v4l2_ctrl_type_ops equal callback.
  *
  * @ctrl: The v4l2_ctrl pointer.
- * @elems: The number of elements to compare.
  * @ptr1: A v4l2 control value.
  * @ptr2: A v4l2 control value.
  *
  * Return: true if values are equal, otherwise false.
  */
-bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
+bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl,
                             union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2);
 
 /**
@@ -1557,13 +1554,12 @@ bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems,
  *
  * @ctrl: The v4l2_ctrl pointer.
  * @from_idx: Starting element index.
- * @elems: The number of elements to initialize.
  * @ptr: The v4l2 control value.
  *
  * Return: void
  */
 void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx,
-                           u32 elems, union v4l2_ctrl_ptr ptr);
+                           union v4l2_ctrl_ptr ptr);
 
 /**
  * v4l2_ctrl_type_op_log - Default v4l2_ctrl_type_ops log callback.
@@ -1578,12 +1574,10 @@ void v4l2_ctrl_type_op_log(const struct v4l2_ctrl *ctrl);
  * v4l2_ctrl_type_op_validate - Default v4l2_ctrl_type_ops validate callback.
  *
  * @ctrl: The v4l2_ctrl pointer.
- * @elems: The number of elements in the control.
  * @ptr: The v4l2 control value.
  *
  * Return: 0 on success, a negative error code on failure.
  */
-int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems,
-                              union v4l2_ctrl_ptr ptr);
+int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
 
 #endif
index 5cf1ede..e0a1350 100644 (file)
@@ -539,4 +539,106 @@ static inline int video_is_registered(struct video_device *vdev)
        return test_bit(V4L2_FL_REGISTERED, &vdev->flags);
 }
 
+#if defined(CONFIG_MEDIA_CONTROLLER)
+
+/**
+ * video_device_pipeline_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
+ *
+ * Mark all entities connected to a given video device through enabled links,
+ * either directly or indirectly, as streaming. The given pipeline object is
+ * assigned to every pad in the pipeline and stored in the media_pad pipe
+ * field.
+ *
+ * Calls to this function can be nested, in which case the same number of
+ * video_device_pipeline_stop() calls will be required to stop streaming. The
+ * pipeline pointer must be identical for all nested calls to
+ * video_device_pipeline_start().
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_pipeline_start().
+ */
+__must_check int video_device_pipeline_start(struct video_device *vdev,
+                                            struct media_pipeline *pipe);
+
+/**
+ * __video_device_pipeline_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ * @pipe: Media pipeline to be assigned to all pads in the pipeline.
+ *
+ * .. note:: This is the non-locking version of video_device_pipeline_start()
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around __media_pipeline_start().
+ */
+__must_check int __video_device_pipeline_start(struct video_device *vdev,
+                                              struct media_pipeline *pipe);
+
+/**
+ * video_device_pipeline_stop - Mark a pipeline as not streaming
+ * @vdev: Starting video device
+ *
+ * Mark all entities connected to a given video device through enabled links,
+ * either directly or indirectly, as not streaming. The media_pad pipe field
+ * is reset to %NULL.
+ *
+ * If multiple calls to media_pipeline_start() have been made, the same
+ * number of calls to this function are required to mark the pipeline as not
+ * streaming.
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_pipeline_stop().
+ */
+void video_device_pipeline_stop(struct video_device *vdev);
+
+/**
+ * __video_device_pipeline_stop - Mark a pipeline as not streaming
+ * @vdev: Starting video device
+ *
+ * .. note:: This is the non-locking version of video_device_pipeline_stop()
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around __media_pipeline_stop().
+ */
+void __video_device_pipeline_stop(struct video_device *vdev);
+
+/**
+ * video_device_pipeline_alloc_start - Mark a pipeline as streaming
+ * @vdev: Starting video device
+ *
+ * video_device_pipeline_alloc_start() is similar to video_device_pipeline_start()
+ * but instead of working on a given pipeline the function will use an
+ * existing pipeline if the video device is already part of a pipeline, or
+ * allocate a new pipeline.
+ *
+ * Calls to video_device_pipeline_alloc_start() must be matched with
+ * video_device_pipeline_stop().
+ */
+__must_check int video_device_pipeline_alloc_start(struct video_device *vdev);
+
+/**
+ * video_device_pipeline - Get the media pipeline a video device is part of
+ * @vdev: The video device
+ *
+ * This function returns the media pipeline that a video device has been
+ * associated with when constructing the pipeline with
+ * video_device_pipeline_start(). The pointer remains valid until
+ * video_device_pipeline_stop() is called.
+ *
+ * Return: The media_pipeline the video device is part of, or NULL if the video
+ * device is not part of any pipeline.
+ *
+ * The video device must contain a single pad.
+ *
+ * This is a convenience wrapper around media_entity_pipeline().
+ */
+struct media_pipeline *video_device_pipeline(struct video_device *vdev);
+
+#endif /* CONFIG_MEDIA_CONTROLLER */
+
 #endif /* _V4L2_DEV_H */
index 15e4ab6..394d798 100644 (file)
@@ -45,10 +45,6 @@ struct v4l2_async_subdev;
  */
 struct v4l2_fwnode_endpoint {
        struct fwnode_endpoint base;
-       /*
-        * Fields below this line will be zeroed by
-        * v4l2_fwnode_endpoint_parse()
-        */
        enum v4l2_mbus_type bus_type;
        struct {
                struct v4l2_mbus_config_parallel parallel;
index 9689f38..2f80c9c 100644 (file)
@@ -358,7 +358,11 @@ struct v4l2_mbus_frame_desc_entry {
        } bus;
 };
 
-#define V4L2_FRAME_DESC_ENTRY_MAX      4
+ /*
+  * If this number is too small, it should be dropped altogether and the
+  * API switched to a dynamic number of frame descriptor entries.
+  */
+#define V4L2_FRAME_DESC_ENTRY_MAX      8
 
 /**
  * enum v4l2_mbus_frame_desc_type - media bus frame description type
@@ -1046,6 +1050,8 @@ v4l2_subdev_get_pad_format(struct v4l2_subdev *sd,
                           struct v4l2_subdev_state *state,
                           unsigned int pad)
 {
+       if (WARN_ON(!state))
+               return NULL;
        if (WARN_ON(pad >= sd->entity.num_pads))
                pad = 0;
        return &state->pads[pad].try_fmt;
@@ -1064,6 +1070,8 @@ v4l2_subdev_get_pad_crop(struct v4l2_subdev *sd,
                         struct v4l2_subdev_state *state,
                         unsigned int pad)
 {
+       if (WARN_ON(!state))
+               return NULL;
        if (WARN_ON(pad >= sd->entity.num_pads))
                pad = 0;
        return &state->pads[pad].try_crop;
@@ -1082,6 +1090,8 @@ v4l2_subdev_get_pad_compose(struct v4l2_subdev *sd,
                            struct v4l2_subdev_state *state,
                            unsigned int pad)
 {
+       if (WARN_ON(!state))
+               return NULL;
        if (WARN_ON(pad >= sd->entity.num_pads))
                pad = 0;
        return &state->pads[pad].try_compose;
index 61f2ceb..c94ea1a 100644 (file)
@@ -67,6 +67,7 @@ struct tc_action {
 #define TCA_ACT_FLAGS_BIND     (1U << (TCA_ACT_FLAGS_USER_BITS + 1))
 #define TCA_ACT_FLAGS_REPLACE  (1U << (TCA_ACT_FLAGS_USER_BITS + 2))
 #define TCA_ACT_FLAGS_NO_RTNL  (1U << (TCA_ACT_FLAGS_USER_BITS + 3))
+#define TCA_ACT_FLAGS_AT_INGRESS       (1U << (TCA_ACT_FLAGS_USER_BITS + 4))
 
 /* Update lastuse only if needed, to avoid dirtying a cache line.
  * We use a temp variable to avoid fetching jiffies twice.
index e343f9f..7a60bc6 100644 (file)
@@ -155,6 +155,7 @@ enum flow_action_id {
        FLOW_ACTION_MARK,
        FLOW_ACTION_PTYPE,
        FLOW_ACTION_PRIORITY,
+       FLOW_ACTION_RX_QUEUE_MAPPING,
        FLOW_ACTION_WAKE,
        FLOW_ACTION_QUEUE,
        FLOW_ACTION_SAMPLE,
@@ -247,6 +248,7 @@ struct flow_action_entry {
                u32                     csum_flags;     /* FLOW_ACTION_CSUM */
                u32                     mark;           /* FLOW_ACTION_MARK */
                u16                     ptype;          /* FLOW_ACTION_PTYPE */
+               u16                     rx_queue;       /* FLOW_ACTION_RX_QUEUE_MAPPING */
                u32                     priority;       /* FLOW_ACTION_PRIORITY */
                struct {                                /* FLOW_ACTION_QUEUE */
                        u32             ctx;
index 8f78017..3d08e67 100644 (file)
@@ -37,6 +37,7 @@ struct genl_info;
  *     do additional, common, filtering and return an error
  * @post_doit: called after an operation's doit callback, it may
  *     undo operations done by pre_doit, for example release locks
+ * @module: pointer to the owning module (set to THIS_MODULE)
  * @mcgrps: multicast groups used by this family
  * @n_mcgrps: number of multicast groups
  * @resv_start_op: first operation for which reserved fields of the header
@@ -173,9 +174,9 @@ struct genl_ops {
 };
 
 /**
- * struct genl_info - info that is available during dumpit op call
+ * struct genl_dumpit_info - info that is available during dumpit op call
  * @family: generic netlink family - for internal genl code usage
- * @ops: generic netlink ops - for internal genl code usage
+ * @op: generic netlink ops - for internal genl code usage
  * @attrs: netlink attributes
  */
 struct genl_dumpit_info {
@@ -354,6 +355,7 @@ int genlmsg_multicast_allns(const struct genl_family *family,
 
 /**
  * genlmsg_unicast - unicast a netlink message
+ * @net: network namespace to look up @portid in
  * @skb: netlink message as socket buffer
  * @portid: netlink portid of the destination socket
  */
@@ -373,7 +375,7 @@ static inline int genlmsg_reply(struct sk_buff *skb, struct genl_info *info)
 }
 
 /**
- * gennlmsg_data - head of message payload
+ * genlmsg_data - head of message payload
  * @gnlh: genetlink message header
  */
 static inline void *genlmsg_data(const struct genlmsghdr *gnlh)
index 8c3587d..78beaa7 100644 (file)
@@ -92,7 +92,9 @@ struct net {
 
        struct ns_common        ns;
        struct ref_tracker_dir  refcnt_tracker;
-
+       struct ref_tracker_dir  notrefcnt_tracker; /* tracker for objects not
+                                                   * refcounted against netns
+                                                   */
        struct list_head        dev_base_head;
        struct proc_dir_entry   *proc_net;
        struct proc_dir_entry   *proc_net_stat;
@@ -320,19 +322,31 @@ static inline int check_net(const struct net *net)
 #endif
 
 
-static inline void netns_tracker_alloc(struct net *net,
-                                      netns_tracker *tracker, gfp_t gfp)
+static inline void __netns_tracker_alloc(struct net *net,
+                                        netns_tracker *tracker,
+                                        bool refcounted,
+                                        gfp_t gfp)
 {
 #ifdef CONFIG_NET_NS_REFCNT_TRACKER
-       ref_tracker_alloc(&net->refcnt_tracker, tracker, gfp);
+       ref_tracker_alloc(refcounted ? &net->refcnt_tracker :
+                                      &net->notrefcnt_tracker,
+                         tracker, gfp);
 #endif
 }
 
-static inline void netns_tracker_free(struct net *net,
-                                     netns_tracker *tracker)
+static inline void netns_tracker_alloc(struct net *net, netns_tracker *tracker,
+                                      gfp_t gfp)
+{
+       __netns_tracker_alloc(net, tracker, true, gfp);
+}
+
+static inline void __netns_tracker_free(struct net *net,
+                                       netns_tracker *tracker,
+                                       bool refcounted)
 {
 #ifdef CONFIG_NET_NS_REFCNT_TRACKER
-       ref_tracker_free(&net->refcnt_tracker, tracker);
+       ref_tracker_free(refcounted ? &net->refcnt_tracker :
+                                    &net->notrefcnt_tracker, tracker);
 #endif
 }
 
@@ -346,7 +360,7 @@ static inline struct net *get_net_track(struct net *net,
 
 static inline void put_net_track(struct net *net, netns_tracker *tracker)
 {
-       netns_tracker_free(net, tracker);
+       __netns_tracker_free(net, tracker, true);
        put_net(net);
 }
 
index 980daa6..c81021a 100644 (file)
@@ -43,7 +43,7 @@ void nf_queue_entry_free(struct nf_queue_entry *entry);
 static inline void init_hashrandom(u32 *jhash_initval)
 {
        while (*jhash_initval == 0)
-               *jhash_initval = prandom_u32();
+               *jhash_initval = get_random_u32();
 }
 
 static inline u32 hash_v4(const struct iphdr *iph, u32 initval)
index 454ac2b..425364d 100644 (file)
@@ -363,7 +363,7 @@ static inline unsigned long red_calc_qavg(const struct red_parms *p,
 
 static inline u32 red_random(const struct red_parms *p)
 {
-       return reciprocal_divide(prandom_u32(), p->max_P_reciprocal);
+       return reciprocal_divide(get_random_u32(), p->max_P_reciprocal);
 }
 
 static inline int red_mark_probability(const struct red_parms *p,
index 0eaf865..60f6641 100644 (file)
@@ -35,8 +35,7 @@ struct sctp_ulpq {
 };
 
 /* Prototypes. */
-struct sctp_ulpq *sctp_ulpq_init(struct sctp_ulpq *,
-                                struct sctp_association *);
+void sctp_ulpq_init(struct sctp_ulpq *ulpq, struct sctp_association *asoc);
 void sctp_ulpq_flush(struct sctp_ulpq *ulpq);
 void sctp_ulpq_free(struct sctp_ulpq *);
 
index 08038a3..928bb60 100644 (file)
@@ -1901,7 +1901,7 @@ static inline void sockcm_init(struct sockcm_cookie *sockc,
        *sockc = (struct sockcm_cookie) { .tsflags = sk->sk_tsflags };
 }
 
-int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg,
+int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
                     struct sockcm_cookie *sockc);
 int sock_cmsg_send(struct sock *sk, struct msghdr *msg,
                   struct sockcm_cookie *sockc);
@@ -2109,7 +2109,7 @@ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
 
 static inline u32 net_tx_rndhash(void)
 {
-       u32 v = prandom_u32();
+       u32 v = get_random_u32();
 
        return v ?: 1;
 }
@@ -2585,7 +2585,7 @@ static inline gfp_t gfp_any(void)
 
 static inline gfp_t gfp_memcg_charge(void)
 {
-       return in_softirq() ? GFP_NOWAIT : GFP_KERNEL;
+       return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
 }
 
 static inline long sock_rcvtimeo(const struct sock *sk, bool noblock)
index 473b0b0..6ec140b 100644 (file)
@@ -16,6 +16,7 @@ struct sock_reuseport {
        u16                     max_socks;              /* length of socks */
        u16                     num_socks;              /* elements in socks */
        u16                     num_closed_socks;       /* closed elements in socks */
+       u16                     incoming_cpu;
        /* The last synq overflow event timestamp of this
         * reuse->socks[] group.
         */
@@ -43,21 +44,21 @@ struct sock *reuseport_migrate_sock(struct sock *sk,
 extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
 extern int reuseport_detach_prog(struct sock *sk);
 
-static inline bool reuseport_has_conns(struct sock *sk, bool set)
+static inline bool reuseport_has_conns(struct sock *sk)
 {
        struct sock_reuseport *reuse;
        bool ret = false;
 
        rcu_read_lock();
        reuse = rcu_dereference(sk->sk_reuseport_cb);
-       if (reuse) {
-               if (set)
-                       reuse->has_conns = 1;
-               ret = reuse->has_conns;
-       }
+       if (reuse && reuse->has_conns)
+               ret = true;
        rcu_read_unlock();
 
        return ret;
 }
 
+void reuseport_has_conns_set(struct sock *sk);
+void reuseport_update_incoming_cpu(struct sock *sk, int val);
+
 #endif  /* _SOCK_REUSEPORT_H */
index dc1079f..9649600 100644 (file)
@@ -95,12 +95,41 @@ static inline u32 tcf_skbedit_priority(const struct tc_action *a)
        return priority;
 }
 
+static inline u16 tcf_skbedit_rx_queue_mapping(const struct tc_action *a)
+{
+       u16 rx_queue;
+
+       rcu_read_lock();
+       rx_queue = rcu_dereference(to_skbedit(a)->params)->queue_mapping;
+       rcu_read_unlock();
+
+       return rx_queue;
+}
+
 /* Return true iff action is queue_mapping */
 static inline bool is_tcf_skbedit_queue_mapping(const struct tc_action *a)
 {
        return is_tcf_skbedit_with_flag(a, SKBEDIT_F_QUEUE_MAPPING);
 }
 
+/* Return true if action is on ingress traffic */
+static inline bool is_tcf_skbedit_ingress(u32 flags)
+{
+       return flags & TCA_ACT_FLAGS_AT_INGRESS;
+}
+
+static inline bool is_tcf_skbedit_tx_queue_mapping(const struct tc_action *a)
+{
+       return is_tcf_skbedit_queue_mapping(a) &&
+              !is_tcf_skbedit_ingress(a->tcfa_flags);
+}
+
+static inline bool is_tcf_skbedit_rx_queue_mapping(const struct tc_action *a)
+{
+       return is_tcf_skbedit_queue_mapping(a) &&
+              is_tcf_skbedit_ingress(a->tcfa_flags);
+}
+
 /* Return true iff action is inheritdsfield */
 static inline bool is_tcf_skbedit_inheritdsfield(const struct tc_action *a)
 {
index b830463..d27b1ca 100644 (file)
@@ -58,8 +58,6 @@ ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, __u16 srcp,
 
 #define LOOPBACK4_IPV6 cpu_to_be32(0x7f000006)
 
-void inet6_destroy_sock(struct sock *sk);
-
 #define IPV6_SEQ_DGRAM_HEADER                                         \
        "  sl  "                                                       \
        "local_address                         "                       \
index fee053b..de4b528 100644 (file)
@@ -174,6 +174,15 @@ INDIRECT_CALLABLE_DECLARE(int udpv6_rcv(struct sk_buff *));
 struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
                                  netdev_features_t features, bool is_ipv6);
 
+static inline void udp_lib_init_sock(struct sock *sk)
+{
+       struct udp_sock *up = udp_sk(sk);
+
+       skb_queue_head_init(&up->reader_queue);
+       up->forward_threshold = sk->sk_rcvbuf >> 2;
+       set_bit(SOCK_CUSTOM_SOCKOPT, &sk->sk_socket->flags);
+}
+
 /* hash routines shared between UDPv4/6 and UDP-Litev4/6 */
 static inline int udp_lib_hash(struct sock *sk)
 {
diff --git a/include/soc/sifive/sifive_ccache.h b/include/soc/sifive/sifive_ccache.h
new file mode 100644 (file)
index 0000000..4d4ed49
--- /dev/null
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SiFive Composable Cache Controller header file
+ *
+ */
+
+#ifndef __SOC_SIFIVE_CCACHE_H
+#define __SOC_SIFIVE_CCACHE_H
+
+extern int register_sifive_ccache_error_notifier(struct notifier_block *nb);
+extern int unregister_sifive_ccache_error_notifier(struct notifier_block *nb);
+
+#define SIFIVE_CCACHE_ERR_TYPE_CE 0
+#define SIFIVE_CCACHE_ERR_TYPE_UE 1
+
+#endif /* __SOC_SIFIVE_CCACHE_H */
diff --git a/include/soc/sifive/sifive_l2_cache.h b/include/soc/sifive/sifive_l2_cache.h
deleted file mode 100644 (file)
index 92ade10..0000000
+++ /dev/null
@@ -1,16 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * SiFive L2 Cache Controller header file
- *
- */
-
-#ifndef __SOC_SIFIVE_L2_CACHE_H
-#define __SOC_SIFIVE_L2_CACHE_H
-
-extern int register_sifive_l2_error_notifier(struct notifier_block *nb);
-extern int unregister_sifive_l2_error_notifier(struct notifier_block *nb);
-
-#define SIFIVE_L2_ERR_TYPE_CE 0
-#define SIFIVE_L2_ERR_TYPE_UE 1
-
-#endif /* __SOC_SIFIVE_L2_CACHE_H */
index ddff03e..35778f9 100644 (file)
@@ -592,11 +592,11 @@ int snd_hdac_get_stream_stripe_ctl(struct hdac_bus *bus,
 #define snd_hdac_stream_readb(dev, reg) \
        snd_hdac_reg_readb((dev)->bus, (dev)->sd_addr + AZX_REG_ ## reg)
 #define snd_hdac_stream_readb_poll(dev, reg, val, cond, delay_us, timeout_us) \
-       readb_poll_timeout((dev)->sd_addr + AZX_REG_ ## reg, val, cond, \
-                          delay_us, timeout_us)
+       read_poll_timeout_atomic(snd_hdac_reg_readb, val, cond, delay_us, timeout_us, \
+                                false, (dev)->bus, (dev)->sd_addr + AZX_REG_ ## reg)
 #define snd_hdac_stream_readl_poll(dev, reg, val, cond, delay_us, timeout_us) \
-       readl_poll_timeout((dev)->sd_addr + AZX_REG_ ## reg, val, cond, \
-                          delay_us, timeout_us)
+       read_poll_timeout_atomic(snd_hdac_reg_readl, val, cond, delay_us, timeout_us, \
+                                false, (dev)->bus, (dev)->sd_addr + AZX_REG_ ## reg)
 
 /* update a register, pass without AZX_REG_ prefix */
 #define snd_hdac_stream_updatel(dev, reg, mask, val) \
diff --git a/include/trace/events/watchdog.h b/include/trace/events/watchdog.h
new file mode 100644 (file)
index 0000000..beb9bb3
--- /dev/null
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM watchdog
+
+#if !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_WATCHDOG_H
+
+#include <linux/watchdog.h>
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(watchdog_template,
+
+       TP_PROTO(struct watchdog_device *wdd, int err),
+
+       TP_ARGS(wdd, err),
+
+       TP_STRUCT__entry(
+               __field(int, id)
+               __field(int, err)
+       ),
+
+       TP_fast_assign(
+               __entry->id = wdd->id;
+               __entry->err = err;
+       ),
+
+       TP_printk("watchdog%d err=%d", __entry->id, __entry->err)
+);
+
+DEFINE_EVENT(watchdog_template, watchdog_start,
+       TP_PROTO(struct watchdog_device *wdd, int err),
+       TP_ARGS(wdd, err));
+
+DEFINE_EVENT(watchdog_template, watchdog_ping,
+       TP_PROTO(struct watchdog_device *wdd, int err),
+       TP_ARGS(wdd, err));
+
+DEFINE_EVENT(watchdog_template, watchdog_stop,
+       TP_PROTO(struct watchdog_device *wdd, int err),
+       TP_ARGS(wdd, err));
+
+TRACE_EVENT(watchdog_set_timeout,
+
+       TP_PROTO(struct watchdog_device *wdd, unsigned int timeout, int err),
+
+       TP_ARGS(wdd, timeout, err),
+
+       TP_STRUCT__entry(
+               __field(int, id)
+               __field(unsigned int, timeout)
+               __field(int, err)
+       ),
+
+       TP_fast_assign(
+               __entry->id = wdd->id;
+               __entry->timeout = timeout;
+               __entry->err = err;
+       ),
+
+       TP_printk("watchdog%d timeout=%u err=%d", __entry->id, __entry->timeout, __entry->err)
+);
+
+#endif /* !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ) */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
index eac8731..6f93c91 100644 (file)
@@ -235,25 +235,29 @@ struct drm_panfrost_madvise {
 #define PANFROSTDUMP_BUF_BO (PANFROSTDUMP_BUF_BOMAP + 1)
 #define PANFROSTDUMP_BUF_TRAILER (PANFROSTDUMP_BUF_BO + 1)
 
+/*
+ * This structure is in the native endianness of the dumping machine; tools can
+ * detect the endianness by looking at the value in 'magic'.
+ */
 struct panfrost_dump_object_header {
-       __le32 magic;
-       __le32 type;
-       __le32 file_size;
-       __le32 file_offset;
+       __u32 magic;
+       __u32 type;
+       __u32 file_size;
+       __u32 file_offset;
 
        union {
-               struct pan_reg_hdr {
-                       __le64 jc;
-                       __le32 gpu_id;
-                       __le32 major;
-                       __le32 minor;
-                       __le64 nbos;
+               struct {
+                       __u64 jc;
+                       __u32 gpu_id;
+                       __u32 major;
+                       __u32 minor;
+                       __u64 nbos;
                } reghdr;
 
                struct pan_bomap_hdr {
-                       __le32 valid;
-                       __le64 iova;
-                       __le32 data[2];
+                       __u32 valid;
+                       __u64 iova;
+                       __u32 data[2];
                } bomap;
 
                /*
@@ -261,14 +265,14 @@ struct panfrost_dump_object_header {
                 * with new fields and also keep it 512-byte aligned
                 */
 
-               __le32 sizer[496];
+               __u32 sizer[496];
        };
 };
 
 /* Registers object, an array of these */
 struct panfrost_dump_registers {
-       __le32 reg;
-       __le32 value;
+       __u32 reg;
+       __u32 value;
 };
 
 #if defined(__cplusplus)
index c3baaea..d58fa1c 100644 (file)
@@ -1568,6 +1568,20 @@ static inline void cec_ops_request_short_audio_descriptor(const struct cec_msg *
        }
 }
 
+static inline void cec_msg_set_audio_volume_level(struct cec_msg *msg,
+                                                 __u8 audio_volume_level)
+{
+       msg->len = 3;
+       msg->msg[1] = CEC_MSG_SET_AUDIO_VOLUME_LEVEL;
+       msg->msg[2] = audio_volume_level;
+}
+
+static inline void cec_ops_set_audio_volume_level(const struct cec_msg *msg,
+                                                 __u8 *audio_volume_level)
+{
+       *audio_volume_level = msg->msg[2];
+}
+
 
 /* Audio Rate Control Feature */
 static inline void cec_msg_set_audio_rate(struct cec_msg *msg,
index 1d48da9..b8e071a 100644 (file)
@@ -768,6 +768,7 @@ struct cec_event {
 #define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE             0x08
 #define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX                        0x04
 #define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX              0x02
+#define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL     0x01
 
 #define CEC_MSG_GIVE_FEATURES                          0xa5    /* HDMI 2.0 */
 
@@ -1059,6 +1060,7 @@ struct cec_event {
 #define CEC_OP_AUD_FMT_ID_CEA861                       0
 #define CEC_OP_AUD_FMT_ID_CEA861_CXT                   1
 
+#define CEC_MSG_SET_AUDIO_VOLUME_LEVEL                 0x73
 
 /* Audio Rate Control Feature */
 #define CEC_MSG_SET_AUDIO_RATE                         0x9a
index dc2aa3d..f341de2 100644 (file)
@@ -1737,6 +1737,13 @@ enum ethtool_link_mode_bit_indices {
        ETHTOOL_LINK_MODE_100baseFX_Half_BIT             = 90,
        ETHTOOL_LINK_MODE_100baseFX_Full_BIT             = 91,
        ETHTOOL_LINK_MODE_10baseT1L_Full_BIT             = 92,
+       ETHTOOL_LINK_MODE_800000baseCR8_Full_BIT         = 93,
+       ETHTOOL_LINK_MODE_800000baseKR8_Full_BIT         = 94,
+       ETHTOOL_LINK_MODE_800000baseDR8_Full_BIT         = 95,
+       ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT       = 96,
+       ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT         = 97,
+       ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT         = 98,
+
        /* must be last entry */
        __ETHTOOL_LINK_MODE_MASK_NBITS
 };
@@ -1848,6 +1855,7 @@ enum ethtool_link_mode_bit_indices {
 #define SPEED_100000           100000
 #define SPEED_200000           200000
 #define SPEED_400000           400000
+#define SPEED_800000           800000
 
 #define SPEED_UNKNOWN          -1
 
index 583ca0d..730673e 100644 (file)
 /*
  * Defect Pixel Cluster Correction
  */
-#define RKISP1_CIF_ISP_DPCC_METHODS_MAX       3
+#define RKISP1_CIF_ISP_DPCC_METHODS_MAX                                3
+
+#define RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE                 (1U << 2)
+
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_G_CENTER   (1U << 0)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_RB_CENTER  (1U << 1)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_G_3X3           (1U << 2)
+#define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_RB_3X3          (1U << 3)
+
+/* 0-2 for sets 1-3 */
+#define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_SET(n)          ((n) << 0)
+#define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_FIX_SET         (1U << 3)
+
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_GREEN_ENABLE                (1U << 0)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_GREEN_ENABLE                (1U << 1)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_GREEN_ENABLE                (1U << 2)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_GREEN_ENABLE       (1U << 3)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_GREEN_ENABLE                (1U << 4)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_RED_BLUE_ENABLE     (1U << 8)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_RED_BLUE_ENABLE     (1U << 9)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_RED_BLUE_ENABLE     (1U << 10)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_RED_BLUE_ENABLE    (1U << 11)
+#define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_RED_BLUE_ENABLE     (1U << 12)
+
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_G(v)                   ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RB(v)                  ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_G(v)                  ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RB(v)                 ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_G(v)                                ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_PG_FAC_RB(v)                       ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_G(v)                    ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_RND_THRESH_RB(v)                   ((v) << 8)
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_G(v)                                ((v) << 0)
+#define RKISP1_CIF_ISP_DPCC_RG_FAC_RB(v)                       ((v) << 8)
+
+#define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_G(n, v)                        ((v) << ((n) * 4))
+#define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_RB(n, v)               ((v) << ((n) * 4 + 2))
+
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_G(n, v)                 ((v) << ((n) * 4))
+#define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_RB(n, v)                        ((v) << ((n) * 4 + 2))
 
 /*
  * Denoising pre filter
@@ -249,16 +288,20 @@ struct rkisp1_cif_isp_bls_config {
 };
 
 /**
- * struct rkisp1_cif_isp_dpcc_methods_config - Methods Configuration used by DPCC
+ * struct rkisp1_cif_isp_dpcc_methods_config - DPCC methods set configuration
  *
- * Methods Configuration used by Defect Pixel Cluster Correction
+ * This structure stores the configuration of one set of methods for the DPCC
+ * algorithm. Multiple methods can be selected in each set (independently for
+ * the Green and Red/Blue components) through the @method field, the result is
+ * the logical AND of all enabled methods. The remaining fields set thresholds
+ * and factors for each method.
  *
- * @method: Method enable bits
- * @line_thresh: Line threshold
- * @line_mad_fac: Line MAD factor
- * @pg_fac: Peak gradient factor
- * @rnd_thresh: Rank Neighbor Difference threshold
- * @rg_fac: Rank gradient factor
+ * @method: Method enable bits (RKISP1_CIF_ISP_DPCC_METHODS_SET_*)
+ * @line_thresh: Line threshold (RKISP1_CIF_ISP_DPCC_LINE_THRESH_*)
+ * @line_mad_fac: Line Mean Absolute Difference factor (RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_*)
+ * @pg_fac: Peak gradient factor (RKISP1_CIF_ISP_DPCC_PG_FAC_*)
+ * @rnd_thresh: Rank Neighbor Difference threshold (RKISP1_CIF_ISP_DPCC_RND_THRESH_*)
+ * @rg_fac: Rank gradient factor (RKISP1_CIF_ISP_DPCC_RG_FAC_*)
  */
 struct rkisp1_cif_isp_dpcc_methods_config {
        __u32 method;
@@ -272,14 +315,16 @@ struct rkisp1_cif_isp_dpcc_methods_config {
 /**
  * struct rkisp1_cif_isp_dpcc_config - Configuration used by DPCC
  *
- * Configuration used by Defect Pixel Cluster Correction
+ * Configuration used by Defect Pixel Cluster Correction. Three sets of methods
+ * can be configured and selected through the @set_use field. The result is the
+ * logical OR of all enabled sets.
  *
- * @mode: dpcc output mode
- * @output_mode: whether use hard coded methods
- * @set_use: stage1 methods set
- * @methods: methods config
- * @ro_limits: rank order limits
- * @rnd_offs: differential rank offsets for rank neighbor difference
+ * @mode: DPCC mode (RKISP1_CIF_ISP_DPCC_MODE_*)
+ * @output_mode: Interpolation output mode (RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_*)
+ * @set_use: Methods sets selection (RKISP1_CIF_ISP_DPCC_SET_USE_*)
+ * @methods: Methods sets configuration
+ * @ro_limits: Rank order limits (RKISP1_CIF_ISP_DPCC_RO_LIMITS_*)
+ * @rnd_offs: Differential rank offsets for rank neighbor difference (RKISP1_CIF_ISP_DPCC_RND_OFFS_*)
  */
 struct rkisp1_cif_isp_dpcc_config {
        __u32 mode;
index b69e9ba..dcb179d 100644 (file)
@@ -247,6 +247,7 @@ enum {
  * @vid_hdr_offset: VID header offset (use defaults if %0)
  * @max_beb_per1024: maximum expected number of bad PEB per 1024 PEBs
  * @padding: reserved for future, not used, has to be zeroed
+ * @disable_fm: whether to disable fastmap
  *
  * This data structure is used to specify MTD device UBI has to attach and the
  * parameters it has to use. The number which should be assigned to the new UBI
@@ -281,13 +282,18 @@ enum {
  * eraseblocks for new bad eraseblocks, but attempts to use available
  * eraseblocks (if any). The accepted range is 0-768. If 0 is given, the
  * default kernel value of %CONFIG_MTD_UBI_BEB_LIMIT will be used.
+ *
+ * If @disable_fm is not zero, UBI doesn't create a new fastmap even if the
+ * module parameter 'fm_autoconvert' is set, and the existing old fastmap
+ * will be destroyed after a full scan.
  */
 struct ubi_attach_req {
        __s32 ubi_num;
        __s32 mtd_num;
        __s32 vid_hdr_offset;
        __s16 max_beb_per1024;
-       __s8 padding[10];
+       __s8 disable_fm;
+       __s8 padding[9];
 };
 
 /*
index 694f7c1..abf6509 100644 (file)
@@ -66,7 +66,7 @@ config RUST_IS_AVAILABLE
          This shows whether a suitable Rust toolchain is available (found).
 
          Please see Documentation/rust/quick-start.rst for instructions on how
-         to satify the build requirements of Rust support.
+         to satisfy the build requirements of Rust support.
 
          In particular, the Makefile target 'rustavailable' is useful to check
          why the Rust toolchain is not being detected.
index 4eae088..2e04850 100644 (file)
@@ -94,7 +94,7 @@ static __cold void __io_uring_show_fdinfo(struct io_ring_ctx *ctx,
                sq_idx = READ_ONCE(ctx->sq_array[entry & sq_mask]);
                if (sq_idx > sq_mask)
                        continue;
-               sqe = &ctx->sq_sqes[sq_idx << 1];
+               sqe = &ctx->sq_sqes[sq_idx << sq_shift];
                seq_printf(m, "%5u: opcode:%s, fd:%d, flags:%x, off:%llu, "
                              "addr:0x%llx, rw_flags:0x%x, buf_index:%d "
                              "user_data:%llu",
index ff3a712..351111f 100644 (file)
@@ -5,22 +5,9 @@
 #include <linux/file.h>
 #include <linux/io_uring_types.h>
 
-/*
- * FFS_SCM is only available on 64-bit archs, for 32-bit we just define it as 0
- * and define IO_URING_SCM_ALL. For this case, we use SCM for all files as we
- * can't safely always dereference the file when the task has exited and ring
- * cleanup is done. If a file is tracked and part of SCM, then unix gc on
- * process exit may reap it before __io_sqe_files_unregister() is run.
- */
 #define FFS_NOWAIT             0x1UL
 #define FFS_ISREG              0x2UL
-#if defined(CONFIG_64BIT)
-#define FFS_SCM                        0x4UL
-#else
-#define IO_URING_SCM_ALL
-#define FFS_SCM                        0x0UL
-#endif
-#define FFS_MASK               ~(FFS_NOWAIT|FFS_ISREG|FFS_SCM)
+#define FFS_MASK               ~(FFS_NOWAIT|FFS_ISREG)
 
 bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files);
 void io_free_file_tables(struct io_file_table *table);
@@ -38,6 +25,7 @@ unsigned int io_file_get_flags(struct file *file);
 
 static inline void io_file_bitmap_clear(struct io_file_table *table, int bit)
 {
+       WARN_ON_ONCE(!test_bit(bit, table->bitmap));
        __clear_bit(bit, table->bitmap);
        table->alloc_hint = bit;
 }
index c6536d4..6f1d0e5 100644 (file)
@@ -1164,10 +1164,10 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
                wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node);
                if (!wqe)
                        goto err;
+               wq->wqes[node] = wqe;
                if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL))
                        goto err;
                cpumask_copy(wqe->cpu_mask, cpumask_of_node(node));
-               wq->wqes[node] = wqe;
                wqe->node = alloc_node;
                wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded;
                wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
index 99a52f3..6cc16e3 100644 (file)
@@ -1106,6 +1106,8 @@ static void io_req_local_work_add(struct io_kiocb *req)
 
        if (!llist_add(&req->io_task_work.node, &ctx->work_llist))
                return;
+       /* need it for the following io_cqring_wake() */
+       smp_mb__after_atomic();
 
        if (unlikely(atomic_read(&req->task->io_uring->in_idle))) {
                io_move_task_work_from_local(ctx);
@@ -1117,8 +1119,7 @@ static void io_req_local_work_add(struct io_kiocb *req)
 
        if (ctx->has_evfd)
                io_eventfd_signal(ctx);
-       io_cqring_wake(ctx);
-
+       __io_cqring_wake(ctx);
 }
 
 static inline void __io_req_task_work_add(struct io_kiocb *req, bool allow_local)
@@ -1586,8 +1587,6 @@ unsigned int io_file_get_flags(struct file *file)
                res |= FFS_ISREG;
        if (__io_file_supports_nowait(file, mode))
                res |= FFS_NOWAIT;
-       if (io_file_need_scm(file))
-               res |= FFS_SCM;
        return res;
 }
 
@@ -1859,7 +1858,6 @@ inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
        /* mask in overlapping REQ_F and FFS bits */
        req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
        io_req_set_rsrc_node(req, ctx, 0);
-       WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap));
 out:
        io_ring_submit_unlock(ctx, issue_flags);
        return file;
@@ -2562,18 +2560,14 @@ static int io_eventfd_unregister(struct io_ring_ctx *ctx)
 
 static void io_req_caches_free(struct io_ring_ctx *ctx)
 {
-       struct io_submit_state *state = &ctx->submit_state;
        int nr = 0;
 
        mutex_lock(&ctx->uring_lock);
-       io_flush_cached_locked_reqs(ctx, state);
+       io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
 
        while (!io_req_cache_empty(ctx)) {
-               struct io_wq_work_node *node;
-               struct io_kiocb *req;
+               struct io_kiocb *req = io_alloc_req(ctx);
 
-               node = wq_stack_extract(&state->free_list);
-               req = container_of(node, struct io_kiocb, comp_list);
                kmem_cache_free(req_cachep, req);
                nr++;
        }
@@ -2585,12 +2579,6 @@ static void io_req_caches_free(struct io_ring_ctx *ctx)
 static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 {
        io_sq_thread_finish(ctx);
-
-       if (ctx->mm_account) {
-               mmdrop(ctx->mm_account);
-               ctx->mm_account = NULL;
-       }
-
        io_rsrc_refs_drop(ctx);
        /* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
        io_wait_rsrc_data(ctx->buf_data);
@@ -2631,8 +2619,11 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
        }
 #endif
        WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
-       WARN_ON_ONCE(ctx->notif_slots || ctx->nr_notif_slots);
 
+       if (ctx->mm_account) {
+               mmdrop(ctx->mm_account);
+               ctx->mm_account = NULL;
+       }
        io_mem_free(ctx->rings);
        io_mem_free(ctx->sq_sqes);
 
@@ -2813,15 +2804,12 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
                io_poll_remove_all(ctx, NULL, true);
        mutex_unlock(&ctx->uring_lock);
 
-       /* failed during ring init, it couldn't have issued any requests */
-       if (ctx->rings) {
+       /*
+        * If we failed setting up the ctx, we might not have any rings
+        * and therefore did not submit any requests
+        */
+       if (ctx->rings)
                io_kill_timeouts(ctx, NULL, true);
-               /* if we failed setting up the ctx, we might not have any rings */
-               io_iopoll_try_reap_events(ctx);
-               /* drop cached put refs after potentially doing completions */
-               if (current->io_uring)
-                       io_uring_drop_tctx_refs(current);
-       }
 
        INIT_WORK(&ctx->exit_work, io_ring_exit_work);
        /*
@@ -3229,8 +3217,16 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
                        mutex_unlock(&ctx->uring_lock);
                        goto out;
                }
-               if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
-                       goto iopoll_locked;
+               if (flags & IORING_ENTER_GETEVENTS) {
+                       if (ctx->syscall_iopoll)
+                               goto iopoll_locked;
+                       /*
+                        * Ignore errors, we'll soon call io_cqring_wait() and
+                        * it should handle ownership problems if any.
+                        */
+                       if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+                               (void)io_run_local_work_locked(ctx);
+               }
                mutex_unlock(&ctx->uring_lock);
        }
 
@@ -3355,7 +3351,7 @@ static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
        if (fd < 0)
                return fd;
 
-       ret = __io_uring_add_tctx_node(ctx, false);
+       ret = __io_uring_add_tctx_node(ctx);
        if (ret) {
                put_unused_fd(fd);
                return ret;
@@ -3890,6 +3886,9 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
        if (WARN_ON_ONCE(percpu_ref_is_dying(&ctx->refs)))
                return -ENXIO;
 
+       if (ctx->submitter_task && ctx->submitter_task != current)
+               return -EEXIST;
+
        if (ctx->restricted) {
                if (opcode >= IORING_REGISTER_LAST)
                        return -EINVAL;
index 48ce234..ef77d2a 100644 (file)
@@ -203,17 +203,24 @@ static inline void io_commit_cqring(struct io_ring_ctx *ctx)
        smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
 }
 
-static inline void io_cqring_wake(struct io_ring_ctx *ctx)
+/* requires smp_mb() prior, see wq_has_sleeper() */
+static inline void __io_cqring_wake(struct io_ring_ctx *ctx)
 {
        /*
         * wake_up_all() may seem excessive, but io_wake_function() and
         * io_should_wake() handle the termination of the loop and only
         * wake as many waiters as we need to.
         */
-       if (wq_has_sleeper(&ctx->cq_wait))
+       if (waitqueue_active(&ctx->cq_wait))
                wake_up_all(&ctx->cq_wait);
 }
 
+static inline void io_cqring_wake(struct io_ring_ctx *ctx)
+{
+       smp_mb();
+       __io_cqring_wake(ctx);
+}
+
 static inline bool io_sqring_full(struct io_ring_ctx *ctx)
 {
        struct io_rings *r = ctx->rings;
@@ -268,6 +275,13 @@ static inline int io_run_task_work_ctx(struct io_ring_ctx *ctx)
        return ret;
 }
 
+static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
+{
+       if (llist_empty(&ctx->work_llist))
+               return 0;
+       return __io_run_local_work(ctx, true);
+}
+
 static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
 {
        if (!*locked) {
index 4a7e5d0..90d2fc6 100644 (file)
@@ -95,6 +95,9 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
 
        msg->src_fd = array_index_nospec(msg->src_fd, ctx->nr_user_files);
        file_ptr = io_fixed_file_slot(&ctx->file_table, msg->src_fd)->file_ptr;
+       if (!file_ptr)
+               goto out_unlock;
+
        src_file = (struct file *) (file_ptr & FFS_MASK);
        get_file(src_file);
 
index caa6a80..15dea91 100644 (file)
@@ -46,6 +46,7 @@ struct io_connect {
        struct file                     *file;
        struct sockaddr __user          *addr;
        int                             addr_len;
+       bool                            in_progress;
 };
 
 struct io_sr_msg {
@@ -1055,6 +1056,8 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
        sock = sock_from_file(req->file);
        if (unlikely(!sock))
                return -ENOTSOCK;
+       if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
+               return -EOPNOTSUPP;
 
        msg.msg_name = NULL;
        msg.msg_control = NULL;
@@ -1150,6 +1153,8 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags)
        sock = sock_from_file(req->file);
        if (unlikely(!sock))
                return -ENOTSOCK;
+       if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
+               return -EOPNOTSUPP;
 
        if (req_has_async_data(req)) {
                kmsg = req->async_data;
@@ -1386,6 +1391,7 @@ int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
        conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
        conn->addr_len =  READ_ONCE(sqe->addr2);
+       conn->in_progress = false;
        return 0;
 }
 
@@ -1397,6 +1403,16 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
        int ret;
        bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
 
+       if (connect->in_progress) {
+               struct socket *socket;
+
+               ret = -ENOTSOCK;
+               socket = sock_from_file(req->file);
+               if (socket)
+                       ret = sock_error(socket->sk);
+               goto out;
+       }
+
        if (req_has_async_data(req)) {
                io = req->async_data;
        } else {
@@ -1413,13 +1429,17 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
        ret = __sys_connect_file(req->file, &io->address,
                                        connect->addr_len, file_flags);
        if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
-               if (req_has_async_data(req))
-                       return -EAGAIN;
-               if (io_alloc_async_data(req)) {
-                       ret = -ENOMEM;
-                       goto out;
+               if (ret == -EINPROGRESS) {
+                       connect->in_progress = true;
+               } else {
+                       if (req_has_async_data(req))
+                               return -EAGAIN;
+                       if (io_alloc_async_data(req)) {
+                               ret = -ENOMEM;
+                               goto out;
+                       }
+                       memcpy(req->async_data, &__io, sizeof(__io));
                }
-               memcpy(req->async_data, &__io, sizeof(__io));
                return -EAGAIN;
        }
        if (ret == -ERESTARTSYS)
index 2330f6d..83dc0f9 100644 (file)
@@ -510,7 +510,6 @@ const struct io_op_def io_op_defs[] = {
                .needs_file             = 1,
                .unbound_nonreg_file    = 1,
                .pollout                = 1,
-               .audit_skip             = 1,
                .ioprio                 = 1,
                .manual_alloc           = 1,
 #if defined(CONFIG_NET)
index 6f88ded..55d4ab9 100644 (file)
@@ -757,20 +757,17 @@ int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
 
 void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
 {
-#if !defined(IO_URING_SCM_ALL)
        int i;
 
        for (i = 0; i < ctx->nr_user_files; i++) {
                struct file *file = io_file_from_index(&ctx->file_table, i);
 
-               if (!file)
-                       continue;
-               if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM)
+               /* skip scm accounted files, they'll be freed by ->ring_sock */
+               if (!file || io_file_need_scm(file))
                        continue;
                io_file_bitmap_clear(&ctx->file_table, i);
                fput(file);
        }
-#endif
 
 #if defined(CONFIG_UNIX)
        if (ctx->ring_sock) {
@@ -855,6 +852,7 @@ int __io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
 
                UNIXCB(skb).fp = fpl;
                skb->sk = sk;
+               skb->scm_io_uring = 1;
                skb->destructor = unix_destruct_scm;
                refcount_add(skb->truesize, &sk->sk_wmem_alloc);
        }
index 9bce156..81445a4 100644 (file)
@@ -82,11 +82,7 @@ int __io_scm_file_account(struct io_ring_ctx *ctx, struct file *file);
 #if defined(CONFIG_UNIX)
 static inline bool io_file_need_scm(struct file *filp)
 {
-#if defined(IO_URING_SCM_ALL)
-       return true;
-#else
        return !!unix_get_socket(filp);
-#endif
 }
 #else
 static inline bool io_file_need_scm(struct file *filp)
index a25cd44..bb47cc4 100644 (file)
@@ -234,11 +234,32 @@ static void kiocb_end_write(struct io_kiocb *req)
        }
 }
 
+/*
+ * Trigger the notifications after having done some IO, and finish the write
+ * accounting, if any.
+ */
+static void io_req_io_end(struct io_kiocb *req)
+{
+       struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
+
+       if (rw->kiocb.ki_flags & IOCB_WRITE) {
+               kiocb_end_write(req);
+               fsnotify_modify(req->file);
+       } else {
+               fsnotify_access(req->file);
+       }
+}
+
 static bool __io_complete_rw_common(struct io_kiocb *req, long res)
 {
        if (unlikely(res != req->cqe.res)) {
                if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
                    io_rw_should_reissue(req)) {
+                       /*
+                        * Reissue will start accounting again, finish the
+                        * current cycle.
+                        */
+                       io_req_io_end(req);
                        req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
                        return true;
                }
@@ -264,15 +285,7 @@ static inline int io_fixup_rw_res(struct io_kiocb *req, long res)
 
 static void io_req_rw_complete(struct io_kiocb *req, bool *locked)
 {
-       struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
-
-       if (rw->kiocb.ki_flags & IOCB_WRITE) {
-               kiocb_end_write(req);
-               fsnotify_modify(req->file);
-       } else {
-               fsnotify_access(req->file);
-       }
-
+       io_req_io_end(req);
        io_req_task_complete(req, locked);
 }
 
@@ -317,6 +330,11 @@ static int kiocb_done(struct io_kiocb *req, ssize_t ret,
                req->file->f_pos = rw->kiocb.ki_pos;
        if (ret >= 0 && (rw->kiocb.ki_complete == io_complete_rw)) {
                if (!__io_complete_rw_common(req, ret)) {
+                       /*
+                        * Safe to call io_end from here as we're inline
+                        * from the submission path.
+                        */
+                       io_req_io_end(req);
                        io_req_set_res(req, final_ret,
                                       io_put_kbuf(req, issue_flags));
                        return IOU_OK;
@@ -916,7 +934,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
                        goto copy_iov;
 
                if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) {
-                       struct io_async_rw *rw;
+                       struct io_async_rw *io;
 
                        trace_io_uring_short_write(req->ctx, kiocb->ki_pos - ret2,
                                                req->cqe.res, ret2);
@@ -929,9 +947,9 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
                        iov_iter_save_state(&s->iter, &s->iter_state);
                        ret = io_setup_async_rw(req, iovec, s, true);
 
-                       rw = req->async_data;
-                       if (rw)
-                               rw->bytes_done += ret2;
+                       io = req->async_data;
+                       if (io)
+                               io->bytes_done += ret2;
 
                        if (kiocb->ki_flags & IOCB_WRITE)
                                kiocb_end_write(req);
index 7f97d97..4324b1c 100644 (file)
@@ -91,32 +91,12 @@ __cold int io_uring_alloc_task_context(struct task_struct *task,
        return 0;
 }
 
-static int io_register_submitter(struct io_ring_ctx *ctx)
-{
-       int ret = 0;
-
-       mutex_lock(&ctx->uring_lock);
-       if (!ctx->submitter_task)
-               ctx->submitter_task = get_task_struct(current);
-       else if (ctx->submitter_task != current)
-               ret = -EEXIST;
-       mutex_unlock(&ctx->uring_lock);
-
-       return ret;
-}
-
-int __io_uring_add_tctx_node(struct io_ring_ctx *ctx, bool submitter)
+int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
 {
        struct io_uring_task *tctx = current->io_uring;
        struct io_tctx_node *node;
        int ret;
 
-       if ((ctx->flags & IORING_SETUP_SINGLE_ISSUER) && submitter) {
-               ret = io_register_submitter(ctx);
-               if (ret)
-                       return ret;
-       }
-
        if (unlikely(!tctx)) {
                ret = io_uring_alloc_task_context(current, ctx);
                if (unlikely(ret))
@@ -150,8 +130,22 @@ int __io_uring_add_tctx_node(struct io_ring_ctx *ctx, bool submitter)
                list_add(&node->ctx_node, &ctx->tctx_list);
                mutex_unlock(&ctx->uring_lock);
        }
-       if (submitter)
-               tctx->last = ctx;
+       return 0;
+}
+
+int __io_uring_add_tctx_node_from_submit(struct io_ring_ctx *ctx)
+{
+       int ret;
+
+       if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
+           && ctx->submitter_task != current)
+               return -EEXIST;
+
+       ret = __io_uring_add_tctx_node(ctx);
+       if (ret)
+               return ret;
+
+       current->io_uring->last = ctx;
        return 0;
 }
 
@@ -259,7 +253,7 @@ int io_ringfd_register(struct io_ring_ctx *ctx, void __user *__arg,
                return -EINVAL;
 
        mutex_unlock(&ctx->uring_lock);
-       ret = __io_uring_add_tctx_node(ctx, false);
+       ret = __io_uring_add_tctx_node(ctx);
        mutex_lock(&ctx->uring_lock);
        if (ret)
                return ret;
index 25974be..608e96d 100644 (file)
@@ -9,7 +9,8 @@ struct io_tctx_node {
 int io_uring_alloc_task_context(struct task_struct *task,
                                struct io_ring_ctx *ctx);
 void io_uring_del_tctx_node(unsigned long index);
-int __io_uring_add_tctx_node(struct io_ring_ctx *ctx, bool submitter);
+int __io_uring_add_tctx_node(struct io_ring_ctx *ctx);
+int __io_uring_add_tctx_node_from_submit(struct io_ring_ctx *ctx);
 void io_uring_clean_tctx(struct io_uring_task *tctx);
 
 void io_uring_unreg_ringfd(void);
@@ -27,5 +28,6 @@ static inline int io_uring_add_tctx_node(struct io_ring_ctx *ctx)
 
        if (likely(tctx && tctx->last == ctx))
                return 0;
-       return __io_uring_add_tctx_node(ctx, true);
+
+       return __io_uring_add_tctx_node_from_submit(ctx);
 }
index b9ea539..48ee750 100644 (file)
@@ -158,7 +158,7 @@ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr)
                        attr->value_size / sizeof(u32);
 
        if (!(attr->map_flags & BPF_F_ZERO_SEED))
-               bloom->hash_seed = get_random_int();
+               bloom->hash_seed = get_random_u32();
 
        return &bloom->map;
 }
index eba603c..35c07af 100644 (file)
@@ -4436,6 +4436,11 @@ static int btf_func_proto_check(struct btf_verifier_env *env,
                        return -EINVAL;
                }
 
+               if (btf_type_is_resolve_source_only(ret_type)) {
+                       btf_verifier_log_type(env, t, "Invalid return type");
+                       return -EINVAL;
+               }
+
                if (btf_type_needs_resolve(ret_type) &&
                    !env_type_is_resolved(env, ret_type_id)) {
                        err = btf_resolve(env, ret_type, ret_type_id);
index 0d200a9..9fcf09f 100644 (file)
@@ -196,7 +196,7 @@ static int bpf_iter_attach_cgroup(struct bpf_prog *prog,
                return -EINVAL;
 
        if (fd)
-               cgrp = cgroup_get_from_fd(fd);
+               cgrp = cgroup_v1v2_get_from_fd(fd);
        else if (id)
                cgrp = cgroup_get_from_id(id);
        else /* walk the entire hierarchy by default. */
index a0e762a..9c16338 100644 (file)
@@ -1032,7 +1032,7 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
        hdr->size = size;
        hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
                     PAGE_SIZE - sizeof(*hdr));
-       start = (get_random_int() % hole) & ~(alignment - 1);
+       start = prandom_u32_max(hole) & ~(alignment - 1);
 
        /* Leave a random number of instructions before BPF code. */
        *image_ptr = &hdr->image[start];
@@ -1094,7 +1094,7 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr,
 
        hole = min_t(unsigned int, size - (proglen + sizeof(*ro_header)),
                     BPF_PROG_CHUNK_SIZE - sizeof(*ro_header));
-       start = (get_random_int() % hole) & ~(alignment - 1);
+       start = prandom_u32_max(hole) & ~(alignment - 1);
 
        *image_ptr = &ro_header->image[start];
        *rw_image = &(*rw_header)->image[start];
@@ -1216,7 +1216,7 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
                              bool emit_zext)
 {
        struct bpf_insn *to = to_buff;
-       u32 imm_rnd = get_random_int();
+       u32 imm_rnd = get_random_u32();
        s16 off;
 
        BUILD_BUG_ON(BPF_REG_AX  + 1 != MAX_BPF_JIT_REG);
index fa64b80..04f0a04 100644 (file)
@@ -4,6 +4,7 @@
 #include <linux/hash.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
+#include <linux/init.h>
 
 /* The BPF dispatcher is a multiway branch code generator. The
  * dispatcher is a mechanism to avoid the performance penalty of an
@@ -90,6 +91,11 @@ int __weak arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int n
        return -ENOTSUPP;
 }
 
+int __weak __init bpf_arch_init_dispatcher_early(void *ip)
+{
+       return -ENOTSUPP;
+}
+
 static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image, void *buf)
 {
        s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0];
index ed3f8a5..f39ee3e 100644 (file)
@@ -527,7 +527,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
        if (htab->map.map_flags & BPF_F_ZERO_SEED)
                htab->hashrnd = 0;
        else
-               htab->hashrnd = get_random_int();
+               htab->hashrnd = get_random_u32();
 
        htab_init_buckets(htab);
 
index 2433be5..8f0d65f 100644 (file)
@@ -423,14 +423,17 @@ static void drain_mem_cache(struct bpf_mem_cache *c)
        /* No progs are using this bpf_mem_cache, but htab_map_free() called
         * bpf_mem_cache_free() for all remaining elements and they can be in
         * free_by_rcu or in waiting_for_gp lists, so drain those lists now.
+        *
+        * Except for the waiting_for_gp list, there are no concurrent
+        * operations on these lists, so it is safe to use __llist_del_all().
         */
        llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu))
                free_one(c, llnode);
        llist_for_each_safe(llnode, t, llist_del_all(&c->waiting_for_gp))
                free_one(c, llnode);
-       llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist))
+       llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist))
                free_one(c, llnode);
-       llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra))
+       llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist_extra))
                free_one(c, llnode);
 }
 
@@ -498,6 +501,16 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
                rcu_in_progress = 0;
                for_each_possible_cpu(cpu) {
                        c = per_cpu_ptr(ma->cache, cpu);
+                       /*
+                        * refill_work may be unfinished on a PREEMPT_RT
+                        * kernel, where irq work is invoked in a per-CPU
+                        * RT thread. It is also possible on a kernel where
+                        * arch_irq_work_has_interrupt() is false and irq
+                        * work is invoked in the timer interrupt. So wait
+                        * for the completion of the irq work to ease the
+                        * handling of concurrency.
+                        */
+                       irq_work_sync(&c->refill_work);
                        drain_mem_cache(c);
                        rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
                }
@@ -512,6 +525,7 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
                        cc = per_cpu_ptr(ma->caches, cpu);
                        for (i = 0; i < NUM_CACHES; i++) {
                                c = &cc->cache[i];
+                               irq_work_sync(&c->refill_work);
                                drain_mem_cache(c);
                                rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
                        }
index 6f6d2d5..7f0a9f6 100644 (file)
@@ -6946,6 +6946,7 @@ static int set_user_ringbuf_callback_state(struct bpf_verifier_env *env,
        __mark_reg_not_init(env, &callee->regs[BPF_REG_5]);
 
        callee->in_callback_fn = true;
+       callee->callback_ret_range = tnum_range(0, 1);
        return 0;
 }
 
@@ -13350,7 +13351,7 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
                            aux[adj_idx].ptr_type == PTR_TO_CTX)
                                continue;
 
-                       imm_rnd = get_random_int();
+                       imm_rnd = get_random_u32();
                        rnd_hi32_patch[0] = insn;
                        rnd_hi32_patch[1].imm = imm_rnd;
                        rnd_hi32_patch[3].dst_reg = load_reg;
index 764bdd5..2319946 100644 (file)
@@ -1392,6 +1392,9 @@ static void cgroup_destroy_root(struct cgroup_root *root)
        cgroup_free_root(root);
 }
 
+/*
+ * The returned cgroup has no refcount taken, but is valid as long as cset pins it.
+ */
 static inline struct cgroup *__cset_cgroup_from_root(struct css_set *cset,
                                            struct cgroup_root *root)
 {
@@ -1403,6 +1406,7 @@ static inline struct cgroup *__cset_cgroup_from_root(struct css_set *cset,
                res_cgroup = cset->dfl_cgrp;
        } else {
                struct cgrp_cset_link *link;
+               lockdep_assert_held(&css_set_lock);
 
                list_for_each_entry(link, &cset->cgrp_links, cgrp_link) {
                        struct cgroup *c = link->cgrp;
@@ -1414,6 +1418,7 @@ static inline struct cgroup *__cset_cgroup_from_root(struct css_set *cset,
                }
        }
 
+       BUG_ON(!res_cgroup);
        return res_cgroup;
 }
 
@@ -1436,23 +1441,36 @@ current_cgns_cgroup_from_root(struct cgroup_root *root)
 
        rcu_read_unlock();
 
-       BUG_ON(!res);
        return res;
 }
 
+/*
+ * Look up cgroup associated with current task's cgroup namespace on the default
+ * hierarchy.
+ *
+ * Unlike current_cgns_cgroup_from_root(), this doesn't need locks:
+ * - Internal rcu_read_lock is unnecessary because we don't dereference any rcu
+ *   pointers.
+ * - css_set_lock is not needed because we just read cset->dfl_cgrp.
+ * - As a bonus, the returned cgrp is pinned via the current task, because a
+ *   task cannot switch its cgroup_ns asynchronously.
+ */
+static struct cgroup *current_cgns_cgroup_dfl(void)
+{
+       struct css_set *cset;
+
+       cset = current->nsproxy->cgroup_ns->root_cset;
+       return __cset_cgroup_from_root(cset, &cgrp_dfl_root);
+}
+
 /* look up cgroup associated with given css_set on the specified hierarchy */
 static struct cgroup *cset_cgroup_from_root(struct css_set *cset,
                                            struct cgroup_root *root)
 {
-       struct cgroup *res = NULL;
-
        lockdep_assert_held(&cgroup_mutex);
        lockdep_assert_held(&css_set_lock);
 
-       res = __cset_cgroup_from_root(cset, root);
-
-       BUG_ON(!res);
-       return res;
+       return __cset_cgroup_from_root(cset, root);
 }
 
 /*
@@ -3698,27 +3716,27 @@ static int cpu_stat_show(struct seq_file *seq, void *v)
 static int cgroup_io_pressure_show(struct seq_file *seq, void *v)
 {
        struct cgroup *cgrp = seq_css(seq)->cgroup;
-       struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+       struct psi_group *psi = cgroup_psi(cgrp);
 
        return psi_show(seq, psi, PSI_IO);
 }
 static int cgroup_memory_pressure_show(struct seq_file *seq, void *v)
 {
        struct cgroup *cgrp = seq_css(seq)->cgroup;
-       struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+       struct psi_group *psi = cgroup_psi(cgrp);
 
        return psi_show(seq, psi, PSI_MEM);
 }
 static int cgroup_cpu_pressure_show(struct seq_file *seq, void *v)
 {
        struct cgroup *cgrp = seq_css(seq)->cgroup;
-       struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+       struct psi_group *psi = cgroup_psi(cgrp);
 
        return psi_show(seq, psi, PSI_CPU);
 }
 
-static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
-                                         size_t nbytes, enum psi_res res)
+static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
+                             size_t nbytes, enum psi_res res)
 {
        struct cgroup_file_ctx *ctx = of->priv;
        struct psi_trigger *new;
@@ -3738,7 +3756,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
                return -EBUSY;
        }
 
-       psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+       psi = cgroup_psi(cgrp);
        new = psi_trigger_create(psi, buf, res);
        if (IS_ERR(new)) {
                cgroup_put(cgrp);
@@ -3755,21 +3773,86 @@ static ssize_t cgroup_io_pressure_write(struct kernfs_open_file *of,
                                          char *buf, size_t nbytes,
                                          loff_t off)
 {
-       return cgroup_pressure_write(of, buf, nbytes, PSI_IO);
+       return pressure_write(of, buf, nbytes, PSI_IO);
 }
 
 static ssize_t cgroup_memory_pressure_write(struct kernfs_open_file *of,
                                          char *buf, size_t nbytes,
                                          loff_t off)
 {
-       return cgroup_pressure_write(of, buf, nbytes, PSI_MEM);
+       return pressure_write(of, buf, nbytes, PSI_MEM);
 }
 
 static ssize_t cgroup_cpu_pressure_write(struct kernfs_open_file *of,
                                          char *buf, size_t nbytes,
                                          loff_t off)
 {
-       return cgroup_pressure_write(of, buf, nbytes, PSI_CPU);
+       return pressure_write(of, buf, nbytes, PSI_CPU);
+}
+
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+static int cgroup_irq_pressure_show(struct seq_file *seq, void *v)
+{
+       struct cgroup *cgrp = seq_css(seq)->cgroup;
+       struct psi_group *psi = cgroup_psi(cgrp);
+
+       return psi_show(seq, psi, PSI_IRQ);
+}
+
+static ssize_t cgroup_irq_pressure_write(struct kernfs_open_file *of,
+                                        char *buf, size_t nbytes,
+                                        loff_t off)
+{
+       return pressure_write(of, buf, nbytes, PSI_IRQ);
+}
+#endif
+
+static int cgroup_pressure_show(struct seq_file *seq, void *v)
+{
+       struct cgroup *cgrp = seq_css(seq)->cgroup;
+       struct psi_group *psi = cgroup_psi(cgrp);
+
+       seq_printf(seq, "%d\n", psi->enabled);
+
+       return 0;
+}
+
+static ssize_t cgroup_pressure_write(struct kernfs_open_file *of,
+                                    char *buf, size_t nbytes,
+                                    loff_t off)
+{
+       ssize_t ret;
+       int enable;
+       struct cgroup *cgrp;
+       struct psi_group *psi;
+
+       ret = kstrtoint(strstrip(buf), 0, &enable);
+       if (ret)
+               return ret;
+
+       if (enable < 0 || enable > 1)
+               return -ERANGE;
+
+       cgrp = cgroup_kn_lock_live(of->kn, false);
+       if (!cgrp)
+               return -ENOENT;
+
+       psi = cgroup_psi(cgrp);
+       if (psi->enabled != enable) {
+               int i;
+
+               /* show or hide {cpu,memory,io,irq}.pressure files */
+               for (i = 0; i < NR_PSI_RESOURCES; i++)
+                       cgroup_file_show(&cgrp->psi_files[i], enable);
+
+               psi->enabled = enable;
+               if (enable)
+                       psi_cgroup_restart(psi);
+       }
+
+       cgroup_kn_unlock(of->kn);
+
+       return nbytes;
 }
 
 static __poll_t cgroup_pressure_poll(struct kernfs_open_file *of,
@@ -3789,6 +3872,9 @@ static void cgroup_pressure_release(struct kernfs_open_file *of)
 
 bool cgroup_psi_enabled(void)
 {
+       if (static_branch_likely(&psi_disabled))
+               return false;
+
        return (cgroup_feature_disable_mask & (1 << OPT_FEATURE_PRESSURE)) == 0;
 }
 
@@ -5175,6 +5261,7 @@ static struct cftype cgroup_psi_files[] = {
 #ifdef CONFIG_PSI
        {
                .name = "io.pressure",
+               .file_offset = offsetof(struct cgroup, psi_files[PSI_IO]),
                .seq_show = cgroup_io_pressure_show,
                .write = cgroup_io_pressure_write,
                .poll = cgroup_pressure_poll,
@@ -5182,6 +5269,7 @@ static struct cftype cgroup_psi_files[] = {
        },
        {
                .name = "memory.pressure",
+               .file_offset = offsetof(struct cgroup, psi_files[PSI_MEM]),
                .seq_show = cgroup_memory_pressure_show,
                .write = cgroup_memory_pressure_write,
                .poll = cgroup_pressure_poll,
@@ -5189,11 +5277,27 @@ static struct cftype cgroup_psi_files[] = {
        },
        {
                .name = "cpu.pressure",
+               .file_offset = offsetof(struct cgroup, psi_files[PSI_CPU]),
                .seq_show = cgroup_cpu_pressure_show,
                .write = cgroup_cpu_pressure_write,
                .poll = cgroup_pressure_poll,
                .release = cgroup_pressure_release,
        },
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+       {
+               .name = "irq.pressure",
+               .file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
+               .seq_show = cgroup_irq_pressure_show,
+               .write = cgroup_irq_pressure_write,
+               .poll = cgroup_pressure_poll,
+               .release = cgroup_pressure_release,
+       },
+#endif
+       {
+               .name = "cgroup.pressure",
+               .seq_show = cgroup_pressure_show,
+               .write = cgroup_pressure_write,
+       },
 #endif /* CONFIG_PSI */
        { }     /* terminate */
 };
@@ -6105,9 +6209,7 @@ struct cgroup *cgroup_get_from_id(u64 id)
        if (!cgrp)
                return ERR_PTR(-ENOENT);
 
-       spin_lock_irq(&css_set_lock);
-       root_cgrp = current_cgns_cgroup_from_root(&cgrp_dfl_root);
-       spin_unlock_irq(&css_set_lock);
+       root_cgrp = current_cgns_cgroup_dfl();
        if (!cgroup_is_descendant(cgrp, root_cgrp)) {
                cgroup_put(cgrp);
                return ERR_PTR(-ENOENT);
@@ -6208,16 +6310,42 @@ void cgroup_fork(struct task_struct *child)
        INIT_LIST_HEAD(&child->cg_list);
 }
 
-static struct cgroup *cgroup_get_from_file(struct file *f)
+/**
+ * cgroup_v1v2_get_from_file - get a cgroup pointer from a file pointer
+ * @f: file corresponding to cgroup_dir
+ *
+ * Find the cgroup from a file pointer associated with a cgroup directory.
+ * Returns a pointer to the cgroup on success. ERR_PTR is returned if the
+ * cgroup cannot be found.
+ */
+static struct cgroup *cgroup_v1v2_get_from_file(struct file *f)
 {
        struct cgroup_subsys_state *css;
-       struct cgroup *cgrp;
 
        css = css_tryget_online_from_dir(f->f_path.dentry, NULL);
        if (IS_ERR(css))
                return ERR_CAST(css);
 
-       cgrp = css->cgroup;
+       return css->cgroup;
+}
+
+/**
+ * cgroup_get_from_file - same as cgroup_v1v2_get_from_file, but only supports
+ * cgroup2.
+ * @f: file corresponding to cgroup2_dir
+ */
+static struct cgroup *cgroup_get_from_file(struct file *f)
+{
+       struct cgroup *cgrp = cgroup_v1v2_get_from_file(f);
+
+       if (IS_ERR(cgrp))
+               return ERR_CAST(cgrp);
+
+       if (!cgroup_on_dfl(cgrp)) {
+               cgroup_put(cgrp);
+               return ERR_PTR(-EBADF);
+       }
+
        return cgrp;
 }
 
@@ -6686,10 +6814,8 @@ struct cgroup *cgroup_get_from_path(const char *path)
        struct cgroup *cgrp = ERR_PTR(-ENOENT);
        struct cgroup *root_cgrp;
 
-       spin_lock_irq(&css_set_lock);
-       root_cgrp = current_cgns_cgroup_from_root(&cgrp_dfl_root);
+       root_cgrp = current_cgns_cgroup_dfl();
        kn = kernfs_walk_and_get(root_cgrp->kn, path);
-       spin_unlock_irq(&css_set_lock);
        if (!kn)
                goto out;
 
@@ -6714,15 +6840,15 @@ out:
 EXPORT_SYMBOL_GPL(cgroup_get_from_path);
 
 /**
- * cgroup_get_from_fd - get a cgroup pointer from a fd
- * @fd: fd obtained by open(cgroup2_dir)
+ * cgroup_v1v2_get_from_fd - get a cgroup pointer from a fd
+ * @fd: fd obtained by open(cgroup_dir)
  *
  * Find the cgroup from a fd which should be obtained
  * by opening a cgroup directory.  Returns a pointer to the
  * cgroup on success. ERR_PTR is returned if the cgroup
  * cannot be found.
  */
-struct cgroup *cgroup_get_from_fd(int fd)
+struct cgroup *cgroup_v1v2_get_from_fd(int fd)
 {
        struct cgroup *cgrp;
        struct file *f;
@@ -6731,10 +6857,29 @@ struct cgroup *cgroup_get_from_fd(int fd)
        if (!f)
                return ERR_PTR(-EBADF);
 
-       cgrp = cgroup_get_from_file(f);
+       cgrp = cgroup_v1v2_get_from_file(f);
        fput(f);
        return cgrp;
 }
+
+/**
+ * cgroup_get_from_fd - same as cgroup_v1v2_get_from_fd, but only supports
+ * cgroup2.
+ * @fd: fd obtained by open(cgroup2_dir)
+ */
+struct cgroup *cgroup_get_from_fd(int fd)
+{
+       struct cgroup *cgrp = cgroup_v1v2_get_from_fd(fd);
+
+       if (IS_ERR(cgrp))
+               return ERR_CAST(cgrp);
+
+       if (!cgroup_on_dfl(cgrp)) {
+               cgroup_put(cgrp);
+               return ERR_PTR(-EBADF);
+       }
+       return cgrp;
+}
 EXPORT_SYMBOL_GPL(cgroup_get_from_fd);
 
 static u64 power_of_ten(int power)
index aefc1e0..01933db 100644 (file)
@@ -54,6 +54,7 @@
 #include <linux/highmem.h>
 #include <linux/pgtable.h>
 #include <linux/buildid.h>
+#include <linux/task_work.h>
 
 #include "internal.h"
 
@@ -2276,11 +2277,26 @@ event_sched_out(struct perf_event *event,
        event->pmu->del(event, 0);
        event->oncpu = -1;
 
-       if (READ_ONCE(event->pending_disable) >= 0) {
-               WRITE_ONCE(event->pending_disable, -1);
+       if (event->pending_disable) {
+               event->pending_disable = 0;
                perf_cgroup_event_disable(event, ctx);
                state = PERF_EVENT_STATE_OFF;
        }
+
+       if (event->pending_sigtrap) {
+               bool dec = true;
+
+               event->pending_sigtrap = 0;
+               if (state != PERF_EVENT_STATE_OFF &&
+                   !event->pending_work) {
+                       event->pending_work = 1;
+                       dec = false;
+                       task_work_add(current, &event->pending_task, TWA_RESUME);
+               }
+               if (dec)
+                       local_dec(&event->ctx->nr_pending);
+       }
+
        perf_event_set_state(event, state);
 
        if (!is_software_event(event))
@@ -2432,7 +2448,7 @@ static void __perf_event_disable(struct perf_event *event,
  * hold the top-level event's child_mutex, so any descendant that
  * goes to exit will block in perf_event_exit_event().
  *
- * When called from perf_pending_event it's OK because event->ctx
+ * When called from perf_pending_irq it's OK because event->ctx
  * is the current context on this CPU and preemption is disabled,
  * hence we can't get into perf_event_task_sched_out for this context.
  */
@@ -2471,9 +2487,8 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
 
 void perf_event_disable_inatomic(struct perf_event *event)
 {
-       WRITE_ONCE(event->pending_disable, smp_processor_id());
-       /* can fail, see perf_pending_event_disable() */
-       irq_work_queue(&event->pending);
+       event->pending_disable = 1;
+       irq_work_queue(&event->pending_irq);
 }
 
 #define MAX_INTERRUPTS (~0ULL)
@@ -3428,11 +3443,23 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
                raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING);
                if (context_equiv(ctx, next_ctx)) {
 
+                       perf_pmu_disable(pmu);
+
+                       /* PMIs are disabled; ctx->nr_pending is stable. */
+                       if (local_read(&ctx->nr_pending) ||
+                           local_read(&next_ctx->nr_pending)) {
+                               /*
+                                * Must not swap out ctx when there are pending
+                                * events that rely on the ctx->task relation.
+                                */
+                               raw_spin_unlock(&next_ctx->lock);
+                               rcu_read_unlock();
+                               goto inside_switch;
+                       }
+
                        WRITE_ONCE(ctx->task, next);
                        WRITE_ONCE(next_ctx->task, task);
 
-                       perf_pmu_disable(pmu);
-
                        if (cpuctx->sched_cb_usage && pmu->sched_task)
                                pmu->sched_task(ctx, false);
 
@@ -3473,6 +3500,7 @@ unlock:
                raw_spin_lock(&ctx->lock);
                perf_pmu_disable(pmu);
 
+inside_switch:
                if (cpuctx->sched_cb_usage && pmu->sched_task)
                        pmu->sched_task(ctx, false);
                task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
@@ -4939,7 +4967,7 @@ static void perf_addr_filters_splice(struct perf_event *event,
 
 static void _free_event(struct perf_event *event)
 {
-       irq_work_sync(&event->pending);
+       irq_work_sync(&event->pending_irq);
 
        unaccount_event(event);
 
@@ -6439,7 +6467,8 @@ static void perf_sigtrap(struct perf_event *event)
                return;
 
        /*
-        * perf_pending_event() can race with the task exiting.
+        * Both perf_pending_task() and perf_pending_irq() can race with the
+        * task exiting.
         */
        if (current->flags & PF_EXITING)
                return;
@@ -6448,23 +6477,33 @@ static void perf_sigtrap(struct perf_event *event)
                      event->attr.type, event->attr.sig_data);
 }
 
-static void perf_pending_event_disable(struct perf_event *event)
+/*
+ * Deliver the pending work in-event-context or follow the context.
+ */
+static void __perf_pending_irq(struct perf_event *event)
 {
-       int cpu = READ_ONCE(event->pending_disable);
+       int cpu = READ_ONCE(event->oncpu);
 
+       /*
+        * If the event isn't running, we're done; event_sched_out() will
+        * have taken care of things.
+        */
        if (cpu < 0)
                return;
 
+       /*
+        * Yay, we hit home and are in the context of the event.
+        */
        if (cpu == smp_processor_id()) {
-               WRITE_ONCE(event->pending_disable, -1);
-
-               if (event->attr.sigtrap) {
+               if (event->pending_sigtrap) {
+                       event->pending_sigtrap = 0;
                        perf_sigtrap(event);
-                       atomic_set_release(&event->event_limit, 1); /* rearm event */
-                       return;
+                       local_dec(&event->ctx->nr_pending);
+               }
+               if (event->pending_disable) {
+                       event->pending_disable = 0;
+                       perf_event_disable_local(event);
                }
-
-               perf_event_disable_local(event);
                return;
        }
 
@@ -6484,35 +6523,62 @@ static void perf_pending_event_disable(struct perf_event *event)
         *                                irq_work_queue(); // FAILS
         *
         *  irq_work_run()
-        *    perf_pending_event()
+        *    perf_pending_irq()
         *
         * But the event runs on CPU-B and wants disabling there.
         */
-       irq_work_queue_on(&event->pending, cpu);
+       irq_work_queue_on(&event->pending_irq, cpu);
 }
 
-static void perf_pending_event(struct irq_work *entry)
+static void perf_pending_irq(struct irq_work *entry)
 {
-       struct perf_event *event = container_of(entry, struct perf_event, pending);
+       struct perf_event *event = container_of(entry, struct perf_event, pending_irq);
        int rctx;
 
-       rctx = perf_swevent_get_recursion_context();
        /*
         * If we 'fail' here, that's OK, it means recursion is already disabled
         * and we won't recurse 'further'.
         */
+       rctx = perf_swevent_get_recursion_context();
 
-       perf_pending_event_disable(event);
-
+       /*
+        * The wakeup isn't bound to the context of the event -- it can happen
+        * irrespective of where the event is.
+        */
        if (event->pending_wakeup) {
                event->pending_wakeup = 0;
                perf_event_wakeup(event);
        }
 
+       __perf_pending_irq(event);
+
        if (rctx >= 0)
                perf_swevent_put_recursion_context(rctx);
 }
 
+static void perf_pending_task(struct callback_head *head)
+{
+       struct perf_event *event = container_of(head, struct perf_event, pending_task);
+       int rctx;
+
+       /*
+        * If we 'fail' here, that's OK, it means recursion is already disabled
+        * and we won't recurse 'further'.
+        */
+       preempt_disable_notrace();
+       rctx = perf_swevent_get_recursion_context();
+
+       if (event->pending_work) {
+               event->pending_work = 0;
+               perf_sigtrap(event);
+               local_dec(&event->ctx->nr_pending);
+       }
+
+       if (rctx >= 0)
+               perf_swevent_put_recursion_context(rctx);
+       preempt_enable_notrace();
+}
+
 #ifdef CONFIG_GUEST_PERF_EVENTS
 struct perf_guest_info_callbacks __rcu *perf_guest_cbs;
 
@@ -9212,8 +9278,8 @@ int perf_event_account_interrupt(struct perf_event *event)
  */
 
 static int __perf_event_overflow(struct perf_event *event,
-                                  int throttle, struct perf_sample_data *data,
-                                  struct pt_regs *regs)
+                                int throttle, struct perf_sample_data *data,
+                                struct pt_regs *regs)
 {
        int events = atomic_read(&event->event_limit);
        int ret = 0;
@@ -9236,24 +9302,36 @@ static int __perf_event_overflow(struct perf_event *event,
        if (events && atomic_dec_and_test(&event->event_limit)) {
                ret = 1;
                event->pending_kill = POLL_HUP;
-               event->pending_addr = data->addr;
-
                perf_event_disable_inatomic(event);
        }
 
+       if (event->attr.sigtrap) {
+               /*
+                * Should not be able to return to user space without processing
+                * pending_sigtrap (kernel events can overflow multiple times).
+                */
+               WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel);
+               if (!event->pending_sigtrap) {
+                       event->pending_sigtrap = 1;
+                       local_inc(&event->ctx->nr_pending);
+               }
+               event->pending_addr = data->addr;
+               irq_work_queue(&event->pending_irq);
+       }
+
        READ_ONCE(event->overflow_handler)(event, data, regs);
 
        if (*perf_event_fasync(event) && event->pending_kill) {
                event->pending_wakeup = 1;
-               irq_work_queue(&event->pending);
+               irq_work_queue(&event->pending_irq);
        }
 
        return ret;
 }
 
 int perf_event_overflow(struct perf_event *event,
-                         struct perf_sample_data *data,
-                         struct pt_regs *regs)
+                       struct perf_sample_data *data,
+                       struct pt_regs *regs)
 {
        return __perf_event_overflow(event, 1, data, regs);
 }
@@ -11570,8 +11648,8 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 
 
        init_waitqueue_head(&event->waitq);
-       event->pending_disable = -1;
-       init_irq_work(&event->pending, perf_pending_event);
+       init_irq_work(&event->pending_irq, perf_pending_irq);
+       init_task_work(&event->pending_task, perf_pending_task);
 
        mutex_init(&event->mmap_mutex);
        raw_spin_lock_init(&event->addr_filters.lock);
@@ -11593,9 +11671,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
        if (parent_event)
                event->event_caps = parent_event->event_caps;
 
-       if (event->attr.sigtrap)
-               atomic_set(&event->event_limit, 1);
-
        if (task) {
                event->attach_state = PERF_ATTACH_TASK;
                /*
index 7261320..273a0fe 100644 (file)
@@ -22,7 +22,7 @@ static void perf_output_wakeup(struct perf_output_handle *handle)
        atomic_set(&handle->rb->poll, EPOLLIN);
 
        handle->event->pending_wakeup = 1;
-       irq_work_queue(&handle->event->pending);
+       irq_work_queue(&handle->event->pending_irq);
 }
 
 /*
index 460c12b..7971e98 100644 (file)
 
 #define GCOV_TAG_FUNCTION_LENGTH       3
 
+/* Since GCC 12.1, sizes are in BYTES and not in WORDS (4B). */
+#if (__GNUC__ >= 12)
+#define GCOV_UNIT_SIZE                         4
+#else
+#define GCOV_UNIT_SIZE                         1
+#endif
+
 static struct gcov_info *gcov_info_head;
 
 /**
@@ -383,12 +390,18 @@ size_t convert_to_gcda(char *buffer, struct gcov_info *info)
        pos += store_gcov_u32(buffer, pos, info->version);
        pos += store_gcov_u32(buffer, pos, info->stamp);
 
+#if (__GNUC__ >= 12)
+       /* Use zero as checksum of the compilation unit. */
+       pos += store_gcov_u32(buffer, pos, 0);
+#endif
+
        for (fi_idx = 0; fi_idx < info->n_functions; fi_idx++) {
                fi_ptr = info->functions[fi_idx];
 
                /* Function record. */
                pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
-               pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION_LENGTH);
+               pos += store_gcov_u32(buffer, pos,
+                       GCOV_TAG_FUNCTION_LENGTH * GCOV_UNIT_SIZE);
                pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
                pos += store_gcov_u32(buffer, pos, fi_ptr->lineno_checksum);
                pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
@@ -402,7 +415,8 @@ size_t convert_to_gcda(char *buffer, struct gcov_info *info)
                        /* Counter record. */
                        pos += store_gcov_u32(buffer, pos,
                                              GCOV_TAG_FOR_COUNTER(ct_idx));
-                       pos += store_gcov_u32(buffer, pos, ci_ptr->num * 2);
+                       pos += store_gcov_u32(buffer, pos,
+                               ci_ptr->num * 2 * GCOV_UNIT_SIZE);
 
                        for (cv_idx = 0; cv_idx < ci_ptr->num; cv_idx++) {
                                pos += store_gcov_u64(buffer, pos,
index 7571295..00cdf8f 100644 (file)
@@ -26,7 +26,7 @@
 static bool __init test_requires(void)
 {
        /* random should be initialized for the below tests */
-       return prandom_u32() + prandom_u32() != 0;
+       return get_random_u32() + get_random_u32() != 0;
 }
 
 /*
@@ -46,7 +46,7 @@ static bool __init test_encode_decode(void)
                unsigned long addr;
                size_t verif_size;
 
-               prandom_bytes(&addr, sizeof(addr));
+               get_random_bytes(&addr, sizeof(addr));
                if (addr < PAGE_SIZE)
                        addr = PAGE_SIZE;
 
index 3530041..43efb2a 100644 (file)
@@ -399,7 +399,7 @@ static int *get_random_order(int count)
                order[n] = n;
 
        for (n = count - 1; n > 1; n--) {
-               r = get_random_int() % (n + 1);
+               r = prandom_u32_max(n + 1);
                if (r != n) {
                        tmp = order[n];
                        order[n] = order[r];
@@ -538,7 +538,7 @@ static void stress_one_work(struct work_struct *work)
 {
        struct stress *stress = container_of(work, typeof(*stress), work);
        const int nlocks = stress->nlocks;
-       struct ww_mutex *lock = stress->locks + (get_random_int() % nlocks);
+       struct ww_mutex *lock = stress->locks + prandom_u32_max(nlocks);
        int err;
 
        do {
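Both hunks in this file swap an open-coded `get_random_int() % (n + 1)` for `prandom_u32_max(n + 1)`, which maps a 32-bit random value into `[0, range)` with a multiply and a shift instead of a division. A stand-alone sketch of that construction:

```c
#include <stdint.h>

/* Same multiply-high construction as the kernel's prandom_u32_max():
 * scale a full-range 32-bit random value down to [0, range). */
static uint32_t bounded_rand(uint32_t rnd, uint32_t range)
{
        return (uint32_t)(((uint64_t)rnd * range) >> 32);
}
```

The advantage over the `%` form is avoiding a division on the fast path, not reduced bias; both forms carry the same tiny bias for ranges that do not divide 2^32.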
index 6bb8e72..93416af 100644 (file)
@@ -1403,30 +1403,32 @@ static void rcu_poll_gp_seq_end(unsigned long *snap)
 // where caller does not hold the root rcu_node structure's lock.
 static void rcu_poll_gp_seq_start_unlocked(unsigned long *snap)
 {
+       unsigned long flags;
        struct rcu_node *rnp = rcu_get_root();
 
        if (rcu_init_invoked()) {
                lockdep_assert_irqs_enabled();
-               raw_spin_lock_irq_rcu_node(rnp);
+               raw_spin_lock_irqsave_rcu_node(rnp, flags);
        }
        rcu_poll_gp_seq_start(snap);
        if (rcu_init_invoked())
-               raw_spin_unlock_irq_rcu_node(rnp);
+               raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 // Make the polled API aware of the end of a grace period, but where
 // caller does not hold the root rcu_node structure's lock.
 static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
 {
+       unsigned long flags;
        struct rcu_node *rnp = rcu_get_root();
 
        if (rcu_init_invoked()) {
                lockdep_assert_irqs_enabled();
-               raw_spin_lock_irq_rcu_node(rnp);
+               raw_spin_lock_irqsave_rcu_node(rnp, flags);
        }
        rcu_poll_gp_seq_end(snap);
        if (rcu_init_invoked())
-               raw_spin_unlock_irq_rcu_node(rnp);
+               raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
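The two hunks above switch the polled grace-period helpers from `raw_spin_lock_irq()` to `raw_spin_lock_irqsave()`: the `_irq` variants unconditionally re-enable interrupts on unlock, which is wrong when a caller already has interrupts disabled. A toy model of the difference; the names and the global flag are purely illustrative:

```c
#include <stdbool.h>

/* Toy model: the _irq unlock unconditionally re-enables "interrupts",
 * clobbering the caller's state; _irqsave/_irqrestore preserve it. */
static bool irqs_enabled = true;

static void lock_irq(void)   { irqs_enabled = false; }
static void unlock_irq(void) { irqs_enabled = true; }   /* clobbers state */

static unsigned long lock_irqsave(void)
{
        unsigned long flags = irqs_enabled;

        irqs_enabled = false;
        return flags;
}

static void unlock_irqrestore(unsigned long flags)
{
        irqs_enabled = flags;                           /* restores state */
}
```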
 
 /*
index e4ce124..cb2aa2b 100644 (file)
@@ -701,6 +701,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 
        rq->prev_irq_time += irq_delta;
        delta -= irq_delta;
+       psi_account_irqtime(rq->curr, irq_delta);
 #endif
 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
        if (static_key_false((&paravirt_steal_rq_enabled))) {
@@ -4822,10 +4823,10 @@ static inline void finish_task(struct task_struct *prev)
 
 #ifdef CONFIG_SMP
 
-static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
 {
        void (*func)(struct rq *rq);
-       struct callback_head *next;
+       struct balance_callback *next;
 
        lockdep_assert_rq_held(rq);
 
@@ -4852,15 +4853,15 @@ static void balance_push(struct rq *rq);
  * This abuse is tolerated because it places all the unlikely/odd cases behind
  * a single test, namely: rq->balance_callback == NULL.
  */
-struct callback_head balance_push_callback = {
+struct balance_callback balance_push_callback = {
        .next = NULL,
-       .func = (void (*)(struct callback_head *))balance_push,
+       .func = balance_push,
 };
 
-static inline struct callback_head *
+static inline struct balance_callback *
 __splice_balance_callbacks(struct rq *rq, bool split)
 {
-       struct callback_head *head = rq->balance_callback;
+       struct balance_callback *head = rq->balance_callback;
 
        if (likely(!head))
                return NULL;
@@ -4882,7 +4883,7 @@ __splice_balance_callbacks(struct rq *rq, bool split)
        return head;
 }
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
 {
        return __splice_balance_callbacks(rq, true);
 }
@@ -4892,7 +4893,7 @@ static void __balance_callbacks(struct rq *rq)
        do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
 }
 
-static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
 {
        unsigned long flags;
 
@@ -4909,12 +4910,12 @@ static inline void __balance_callbacks(struct rq *rq)
 {
 }
 
-static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
 {
        return NULL;
 }
 
-static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
 {
 }
 
@@ -6187,7 +6188,7 @@ static void sched_core_balance(struct rq *rq)
        preempt_enable();
 }
 
-static DEFINE_PER_CPU(struct callback_head, core_balance_head);
+static DEFINE_PER_CPU(struct balance_callback, core_balance_head);
 
 static void queue_core_balance(struct rq *rq)
 {
@@ -7418,7 +7419,7 @@ static int __sched_setscheduler(struct task_struct *p,
        int oldpolicy = -1, policy = attr->sched_policy;
        int retval, oldprio, newprio, queued, running;
        const struct sched_class *prev_class;
-       struct callback_head *head;
+       struct balance_callback *head;
        struct rq_flags rf;
        int reset_on_fork;
        int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
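The changes in this file replace the generic `struct callback_head`, whose `func` takes a `struct callback_head *`, with a dedicated `struct balance_callback` whose `func` takes a `struct rq *` directly, eliminating the function-pointer casts that indirect-call hardening such as CFI rejects. A minimal self-contained model of the typed callback list:

```c
#include <stddef.h>

/* Minimal model of the typed balance-callback list: because func takes
 * struct rq * directly, no cast through an incompatible function-pointer
 * type is needed when queueing or invoking a callback. */
struct rq;

struct balance_callback {
        struct balance_callback *next;
        void (*func)(struct rq *rq);
};

struct rq {
        int balance_runs;
        struct balance_callback *balance_callback;
};

static void queue_balance_callback(struct rq *rq,
                                   struct balance_callback *head,
                                   void (*func)(struct rq *rq))
{
        head->func = func;
        head->next = rq->balance_callback;
        rq->balance_callback = head;
}

static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
{
        while (head) {
                struct balance_callback *next = head->next;

                head->next = NULL;
                head->func(rq);
                head = next;
        }
}

static void bump(struct rq *rq) { rq->balance_runs++; }
```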
index 86dea6a..9ae8f41 100644 (file)
@@ -644,8 +644,8 @@ static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
        return rq->online && dl_task(prev);
 }
 
-static DEFINE_PER_CPU(struct callback_head, dl_push_head);
-static DEFINE_PER_CPU(struct callback_head, dl_pull_head);
+static DEFINE_PER_CPU(struct balance_callback, dl_push_head);
+static DEFINE_PER_CPU(struct balance_callback, dl_pull_head);
 
 static void push_dl_tasks(struct rq *);
 static void pull_dl_task(struct rq *);
index 7f60300..ee2ecc0 100644 (file)
@@ -181,6 +181,7 @@ static void group_init(struct psi_group *group)
 {
        int cpu;
 
+       group->enabled = true;
        for_each_possible_cpu(cpu)
                seqcount_init(&per_cpu_ptr(group->pcpu, cpu)->seq);
        group->avg_last_update = sched_clock();
@@ -201,6 +202,7 @@ void __init psi_init(void)
 {
        if (!psi_enable) {
                static_branch_enable(&psi_disabled);
+               static_branch_disable(&psi_cgroups_enabled);
                return;
        }
 
@@ -211,7 +213,7 @@ void __init psi_init(void)
        group_init(&psi_system);
 }
 
-static bool test_state(unsigned int *tasks, enum psi_states state)
+static bool test_state(unsigned int *tasks, enum psi_states state, bool oncpu)
 {
        switch (state) {
        case PSI_IO_SOME:
@@ -224,9 +226,9 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
                return unlikely(tasks[NR_MEMSTALL] &&
                        tasks[NR_RUNNING] == tasks[NR_MEMSTALL_RUNNING]);
        case PSI_CPU_SOME:
-               return unlikely(tasks[NR_RUNNING] > tasks[NR_ONCPU]);
+               return unlikely(tasks[NR_RUNNING] > oncpu);
        case PSI_CPU_FULL:
-               return unlikely(tasks[NR_RUNNING] && !tasks[NR_ONCPU]);
+               return unlikely(tasks[NR_RUNNING] && !oncpu);
        case PSI_NONIDLE:
                return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] ||
                        tasks[NR_RUNNING];
@@ -688,35 +690,53 @@ static void psi_group_change(struct psi_group *group, int cpu,
                             bool wake_clock)
 {
        struct psi_group_cpu *groupc;
-       u32 state_mask = 0;
        unsigned int t, m;
        enum psi_states s;
+       u32 state_mask;
 
        groupc = per_cpu_ptr(group->pcpu, cpu);
 
        /*
-        * First we assess the aggregate resource states this CPU's
-        * tasks have been in since the last change, and account any
-        * SOME and FULL time these may have resulted in.
-        *
-        * Then we update the task counts according to the state
+        * First we update the task counts according to the state
         * change requested through the @clear and @set bits.
+        *
+        * Then, if cgroup PSI stats accounting is enabled, we
+        * assess the aggregate resource states this CPU's tasks
+        * have been in since the last change, and account any
+        * SOME and FULL time these may have resulted in.
         */
        write_seqcount_begin(&groupc->seq);
 
-       record_times(groupc, now);
+       /*
+        * Start with TSK_ONCPU, which doesn't have a corresponding
+        * task count - it's just a boolean flag directly encoded in
+        * the state mask. Clear, set, or carry the current state if
+        * no changes are requested.
+        */
+       if (unlikely(clear & TSK_ONCPU)) {
+               state_mask = 0;
+               clear &= ~TSK_ONCPU;
+       } else if (unlikely(set & TSK_ONCPU)) {
+               state_mask = PSI_ONCPU;
+               set &= ~TSK_ONCPU;
+       } else {
+               state_mask = groupc->state_mask & PSI_ONCPU;
+       }
 
+       /*
+        * The rest of the state mask is calculated based on the task
+        * counts. Update those first, then construct the mask.
+        */
        for (t = 0, m = clear; m; m &= ~(1 << t), t++) {
                if (!(m & (1 << t)))
                        continue;
                if (groupc->tasks[t]) {
                        groupc->tasks[t]--;
                } else if (!psi_bug) {
-                       printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u %u] clear=%x set=%x\n",
+                       printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
                                        cpu, t, groupc->tasks[0],
                                        groupc->tasks[1], groupc->tasks[2],
-                                       groupc->tasks[3], groupc->tasks[4],
-                                       clear, set);
+                                       groupc->tasks[3], clear, set);
                        psi_bug = 1;
                }
        }
@@ -725,9 +745,25 @@ static void psi_group_change(struct psi_group *group, int cpu,
                if (set & (1 << t))
                        groupc->tasks[t]++;
 
-       /* Calculate state mask representing active states */
+       if (!group->enabled) {
+               /*
+                * On the first group change after disabling PSI, conclude
+                * the current state and flush its time. This is unlikely
+                * to matter to the user, but aggregation (get_recent_times)
+                * may have already incorporated the live state into times_prev;
+                * avoid a delta sample underflow when PSI is later re-enabled.
+                */
+               if (unlikely(groupc->state_mask & (1 << PSI_NONIDLE)))
+                       record_times(groupc, now);
+
+               groupc->state_mask = state_mask;
+
+               write_seqcount_end(&groupc->seq);
+               return;
+       }
+
        for (s = 0; s < NR_PSI_STATES; s++) {
-               if (test_state(groupc->tasks, s))
+               if (test_state(groupc->tasks, s, state_mask & PSI_ONCPU))
                        state_mask |= (1 << s);
        }
 
@@ -739,9 +775,11 @@ static void psi_group_change(struct psi_group *group, int cpu,
         * task in a cgroup is in_memstall, the corresponding groupc
         * on that cpu is in PSI_MEM_FULL state.
         */
-       if (unlikely(groupc->tasks[NR_ONCPU] && cpu_curr(cpu)->in_memstall))
+       if (unlikely((state_mask & PSI_ONCPU) && cpu_curr(cpu)->in_memstall))
                state_mask |= (1 << PSI_MEM_FULL);
 
+       record_times(groupc, now);
+
        groupc->state_mask = state_mask;
 
        write_seqcount_end(&groupc->seq);
@@ -753,27 +791,12 @@ static void psi_group_change(struct psi_group *group, int cpu,
                schedule_delayed_work(&group->avgs_work, PSI_FREQ);
 }
 
-static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
+static inline struct psi_group *task_psi_group(struct task_struct *task)
 {
-       if (*iter == &psi_system)
-               return NULL;
-
 #ifdef CONFIG_CGROUPS
-       if (static_branch_likely(&psi_cgroups_enabled)) {
-               struct cgroup *cgroup = NULL;
-
-               if (!*iter)
-                       cgroup = task->cgroups->dfl_cgrp;
-               else
-                       cgroup = cgroup_parent(*iter);
-
-               if (cgroup && cgroup_parent(cgroup)) {
-                       *iter = cgroup;
-                       return cgroup_psi(cgroup);
-               }
-       }
+       if (static_branch_likely(&psi_cgroups_enabled))
+               return cgroup_psi(task_dfl_cgroup(task));
 #endif
-       *iter = &psi_system;
        return &psi_system;
 }
 
@@ -796,8 +819,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 {
        int cpu = task_cpu(task);
        struct psi_group *group;
-       bool wake_clock = true;
-       void *iter = NULL;
        u64 now;
 
        if (!task->pid)
@@ -806,19 +827,11 @@ void psi_task_change(struct task_struct *task, int clear, int set)
        psi_flags_change(task, clear, set);
 
        now = cpu_clock(cpu);
-       /*
-        * Periodic aggregation shuts off if there is a period of no
-        * task changes, so we wake it back up if necessary. However,
-        * don't do this if the task change is the aggregation worker
-        * itself going to sleep, or we'll ping-pong forever.
-        */
-       if (unlikely((clear & TSK_RUNNING) &&
-                    (task->flags & PF_WQ_WORKER) &&
-                    wq_worker_last_func(task) == psi_avgs_work))
-               wake_clock = false;
 
-       while ((group = iterate_groups(task, &iter)))
-               psi_group_change(group, cpu, clear, set, now, wake_clock);
+       group = task_psi_group(task);
+       do {
+               psi_group_change(group, cpu, clear, set, now, true);
+       } while ((group = group->parent));
 }
 
 void psi_task_switch(struct task_struct *prev, struct task_struct *next,
@@ -826,34 +839,30 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 {
        struct psi_group *group, *common = NULL;
        int cpu = task_cpu(prev);
-       void *iter;
        u64 now = cpu_clock(cpu);
 
        if (next->pid) {
-               bool identical_state;
-
                psi_flags_change(next, 0, TSK_ONCPU);
                /*
-                * When switching between tasks that have an identical
-                * runtime state, the cgroup that contains both tasks
-                * we reach the first common ancestor. Iterate @next's
-                * ancestors only until we encounter @prev's ONCPU.
+                * Set TSK_ONCPU on @next's cgroups. If @next shares any
+                * ancestors with @prev, those will already have @prev's
+                * TSK_ONCPU bit set, and we can stop the iteration there.
                 */
-               identical_state = prev->psi_flags == next->psi_flags;
-               iter = NULL;
-               while ((group = iterate_groups(next, &iter))) {
-                       if (identical_state &&
-                           per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
+               group = task_psi_group(next);
+               do {
+                       if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
+                           PSI_ONCPU) {
                                common = group;
                                break;
                        }
 
                        psi_group_change(group, cpu, 0, TSK_ONCPU, now, true);
-               }
+               } while ((group = group->parent));
        }
 
        if (prev->pid) {
                int clear = TSK_ONCPU, set = 0;
+               bool wake_clock = true;
 
                /*
                 * When we're going to sleep, psi_dequeue() lets us
@@ -867,26 +876,74 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
                                clear |= TSK_MEMSTALL_RUNNING;
                        if (prev->in_iowait)
                                set |= TSK_IOWAIT;
+
+                       /*
+                        * Periodic aggregation shuts off if there is a period of no
+                        * task changes, so we wake it back up if necessary. However,
+                        * don't do this if the task change is the aggregation worker
+                        * itself going to sleep, or we'll ping-pong forever.
+                        */
+                       if (unlikely((prev->flags & PF_WQ_WORKER) &&
+                                    wq_worker_last_func(prev) == psi_avgs_work))
+                               wake_clock = false;
                }
 
                psi_flags_change(prev, clear, set);
 
-               iter = NULL;
-               while ((group = iterate_groups(prev, &iter)) && group != common)
-                       psi_group_change(group, cpu, clear, set, now, true);
+               group = task_psi_group(prev);
+               do {
+                       if (group == common)
+                               break;
+                       psi_group_change(group, cpu, clear, set, now, wake_clock);
+               } while ((group = group->parent));
 
                /*
-                * TSK_ONCPU is handled up to the common ancestor. If we're tasked
-                * with dequeuing too, finish that for the rest of the hierarchy.
+                * TSK_ONCPU is handled up to the common ancestor. If there are
+                * any other differences between the two tasks (e.g. prev goes
+                * to sleep, or only one task is memstall), finish propagating
+                * those differences all the way up to the root.
                 */
-               if (sleep) {
+               if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
                        clear &= ~TSK_ONCPU;
-                       for (; group; group = iterate_groups(prev, &iter))
-                               psi_group_change(group, cpu, clear, set, now, true);
+                       for (; group; group = group->parent)
+                               psi_group_change(group, cpu, clear, set, now, wake_clock);
                }
        }
 }
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+void psi_account_irqtime(struct task_struct *task, u32 delta)
+{
+       int cpu = task_cpu(task);
+       struct psi_group *group;
+       struct psi_group_cpu *groupc;
+       u64 now;
+
+       if (!task->pid)
+               return;
+
+       now = cpu_clock(cpu);
+
+       group = task_psi_group(task);
+       do {
+               if (!group->enabled)
+                       continue;
+
+               groupc = per_cpu_ptr(group->pcpu, cpu);
+
+               write_seqcount_begin(&groupc->seq);
+
+               record_times(groupc, now);
+               groupc->times[PSI_IRQ_FULL] += delta;
+
+               write_seqcount_end(&groupc->seq);
+
+               if (group->poll_states & (1 << PSI_IRQ_FULL))
+                       psi_schedule_poll_work(group, 1);
+       } while ((group = group->parent));
+}
+#endif
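`psi_account_irqtime()` above walks each group's `->parent` chain and skips disabled groups with `continue`; note that in a do/while loop, `continue` jumps to the condition, so `group = group->parent` is still evaluated and the walk advances. An illustrative model of that behavior (field names simplified):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of IRQ-time propagation: the delta is charged to
 * every *enabled* group from the task's group up to the root. The
 * continue in the do/while still advances to the parent, because the
 * loop condition performs the group = group->parent step. */
struct psi_group {
        struct psi_group *parent;
        bool enabled;
        uint64_t irq_full_time;
};

static void account_irqtime(struct psi_group *task_group, uint32_t delta)
{
        struct psi_group *group = task_group;

        do {
                if (!group->enabled)
                        continue;       /* still advances via the condition */
                group->irq_full_time += delta;
        } while ((group = group->parent));
}
```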
+
 /**
  * psi_memstall_enter - mark the beginning of a memory stall section
  * @flags: flags to handle nested sections
@@ -952,7 +1009,7 @@ EXPORT_SYMBOL_GPL(psi_memstall_leave);
 #ifdef CONFIG_CGROUPS
 int psi_cgroup_alloc(struct cgroup *cgroup)
 {
-       if (static_branch_likely(&psi_disabled))
+       if (!static_branch_likely(&psi_cgroups_enabled))
                return 0;
 
        cgroup->psi = kzalloc(sizeof(struct psi_group), GFP_KERNEL);
@@ -965,12 +1022,13 @@ int psi_cgroup_alloc(struct cgroup *cgroup)
                return -ENOMEM;
        }
        group_init(cgroup->psi);
+       cgroup->psi->parent = cgroup_psi(cgroup_parent(cgroup));
        return 0;
 }
 
 void psi_cgroup_free(struct cgroup *cgroup)
 {
-       if (static_branch_likely(&psi_disabled))
+       if (!static_branch_likely(&psi_cgroups_enabled))
                return;
 
        cancel_delayed_work_sync(&cgroup->psi->avgs_work);
@@ -998,7 +1056,7 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
        struct rq_flags rf;
        struct rq *rq;
 
-       if (static_branch_likely(&psi_disabled)) {
+       if (!static_branch_likely(&psi_cgroups_enabled)) {
                /*
                 * Lame to do this here, but the scheduler cannot be locked
                 * from the outside, so we move cgroups from inside sched/.
@@ -1046,10 +1104,45 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
 
        task_rq_unlock(rq, task, &rf);
 }
+
+void psi_cgroup_restart(struct psi_group *group)
+{
+       int cpu;
+
+       /*
+        * After clearing psi_group->enabled we don't actually stop
+        * per-CPU task accounting in each psi_group_cpu; we only skip
+        * the test_state() loop, record_times() and the averaging
+        * worker. See psi_group_change() for details.
+        *
+        * When cgroup PSI is disabled, this function has nothing to
+        * sync, since the cgroup pressure files are hidden and each
+        * per-CPU psi_group_cpu sees !psi_group->enabled and only does
+        * task accounting.
+        *
+        * When cgroup PSI is re-enabled, this function uses
+        * psi_group_change() to recompute the correct state mask from
+        * the test_state() loop over tasks[], and restarts
+        * groupc->state_start from now; .clear = .set = 0 because no
+        * task status really changed.
+        */
+       if (!group->enabled)
+               return;
+
+       for_each_possible_cpu(cpu) {
+               struct rq *rq = cpu_rq(cpu);
+               struct rq_flags rf;
+               u64 now;
+
+               rq_lock_irq(rq, &rf);
+               now = cpu_clock(cpu);
+               psi_group_change(group, cpu, 0, 0, now, true);
+               rq_unlock_irq(rq, &rf);
+       }
+}
 #endif /* CONFIG_CGROUPS */
 
 int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
 {
+       bool only_full = false;
        int full;
        u64 now;
 
@@ -1064,7 +1157,11 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
                group->avg_next_update = update_averages(group, now);
        mutex_unlock(&group->avgs_lock);
 
-       for (full = 0; full < 2; full++) {
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+       only_full = res == PSI_IRQ;
+#endif
+
+       for (full = 0; full < 2 - only_full; full++) {
                unsigned long avg[3] = { 0, };
                u64 total = 0;
                int w;
@@ -1078,7 +1175,7 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
                }
 
                seq_printf(m, "%s avg10=%lu.%02lu avg60=%lu.%02lu avg300=%lu.%02lu total=%llu\n",
-                          full ? "full" : "some",
+                          full || only_full ? "full" : "some",
                           LOAD_INT(avg[0]), LOAD_FRAC(avg[0]),
                           LOAD_INT(avg[1]), LOAD_FRAC(avg[1]),
                           LOAD_INT(avg[2]), LOAD_FRAC(avg[2]),
@@ -1106,6 +1203,11 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
        else
                return ERR_PTR(-EINVAL);
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+       if (res == PSI_IRQ && --state != PSI_IRQ_FULL)
+               return ERR_PTR(-EINVAL);
+#endif
+
        if (state >= PSI_NONIDLE)
                return ERR_PTR(-EINVAL);
 
@@ -1390,6 +1492,33 @@ static const struct proc_ops psi_cpu_proc_ops = {
        .proc_release   = psi_fop_release,
 };
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+static int psi_irq_show(struct seq_file *m, void *v)
+{
+       return psi_show(m, &psi_system, PSI_IRQ);
+}
+
+static int psi_irq_open(struct inode *inode, struct file *file)
+{
+       return psi_open(file, psi_irq_show);
+}
+
+static ssize_t psi_irq_write(struct file *file, const char __user *user_buf,
+                            size_t nbytes, loff_t *ppos)
+{
+       return psi_write(file, user_buf, nbytes, PSI_IRQ);
+}
+
+static const struct proc_ops psi_irq_proc_ops = {
+       .proc_open      = psi_irq_open,
+       .proc_read      = seq_read,
+       .proc_lseek     = seq_lseek,
+       .proc_write     = psi_irq_write,
+       .proc_poll      = psi_fop_poll,
+       .proc_release   = psi_fop_release,
+};
+#endif
+
 static int __init psi_proc_init(void)
 {
        if (psi_enable) {
@@ -1397,6 +1526,9 @@ static int __init psi_proc_init(void)
                proc_create("pressure/io", 0666, NULL, &psi_io_proc_ops);
                proc_create("pressure/memory", 0666, NULL, &psi_memory_proc_ops);
                proc_create("pressure/cpu", 0666, NULL, &psi_cpu_proc_ops);
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+               proc_create("pressure/irq", 0666, NULL, &psi_irq_proc_ops);
+#endif
        }
        return 0;
 }
index d869bcf..ed2a47e 100644 (file)
@@ -410,8 +410,8 @@ static inline int has_pushable_tasks(struct rq *rq)
        return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static DEFINE_PER_CPU(struct callback_head, rt_push_head);
-static DEFINE_PER_CPU(struct callback_head, rt_pull_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_push_head);
+static DEFINE_PER_CPU(struct balance_callback, rt_pull_head);
 
 static void push_rt_tasks(struct rq *);
 static void pull_rt_task(struct rq *);
index 1644242..a4a2004 100644 (file)
@@ -938,6 +938,12 @@ struct uclamp_rq {
 DECLARE_STATIC_KEY_FALSE(sched_uclamp_used);
 #endif /* CONFIG_UCLAMP_TASK */
 
+struct rq;
+struct balance_callback {
+       struct balance_callback *next;
+       void (*func)(struct rq *rq);
+};
+
 /*
  * This is the main, per-CPU runqueue data structure.
  *
@@ -1036,7 +1042,7 @@ struct rq {
        unsigned long           cpu_capacity;
        unsigned long           cpu_capacity_orig;
 
-       struct callback_head    *balance_callback;
+       struct balance_callback *balance_callback;
 
        unsigned char           nohz_idle_balance;
        unsigned char           idle_balance;
@@ -1182,6 +1188,14 @@ static inline bool is_migration_disabled(struct task_struct *p)
 #endif
 }
 
+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+
+#define cpu_rq(cpu)            (&per_cpu(runqueues, (cpu)))
+#define this_rq()              this_cpu_ptr(&runqueues)
+#define task_rq(p)             cpu_rq(task_cpu(p))
+#define cpu_curr(cpu)          (cpu_rq(cpu)->curr)
+#define raw_rq()               raw_cpu_ptr(&runqueues)
+
 struct sched_group;
 #ifdef CONFIG_SCHED_CORE
 static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -1269,7 +1283,7 @@ static inline bool sched_group_cookie_match(struct rq *rq,
                return true;
 
        for_each_cpu_and(cpu, sched_group_span(group), p->cpus_ptr) {
-               if (sched_core_cookie_match(rq, p))
+               if (sched_core_cookie_match(cpu_rq(cpu), p))
                        return true;
        }
        return false;
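The one-line fix above makes the predicate check the runqueue of each iterated CPU (`cpu_rq(cpu)`) instead of the caller's fixed `rq`, which previously made the loop compare the same runqueue on every iteration. The bug class, distilled into a self-contained form with hypothetical names:

```c
#include <stdbool.h>

/* Distilled form of the fix: the per-iteration predicate must be
 * evaluated against the iterated CPU's state, not against the fixed
 * runqueue the caller happened to pass in. All names are hypothetical. */
static bool cookie_match(int rq_cookie, int task_cookie)
{
        return rq_cookie == task_cookie;
}

static bool group_cookie_match(const int *rq_cookies, const int *cpus,
                               int ncpus, int task_cookie)
{
        for (int i = 0; i < ncpus; i++) {
                /* fixed: index by the iterated cpu, not the outer rq */
                if (cookie_match(rq_cookies[cpus[i]], task_cookie))
                        return true;
        }
        return false;
}
```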
@@ -1384,14 +1398,6 @@ static inline void update_idle_core(struct rq *rq)
 static inline void update_idle_core(struct rq *rq) { }
 #endif
 
-DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-
-#define cpu_rq(cpu)            (&per_cpu(runqueues, (cpu)))
-#define this_rq()              this_cpu_ptr(&runqueues)
-#define task_rq(p)             cpu_rq(task_cpu(p))
-#define cpu_curr(cpu)          (cpu_rq(cpu)->curr)
-#define raw_rq()               raw_cpu_ptr(&runqueues)
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static inline struct task_struct *task_of(struct sched_entity *se)
 {
@@ -1544,7 +1550,7 @@ struct rq_flags {
 #endif
 };
 
-extern struct callback_head balance_push_callback;
+extern struct balance_callback balance_push_callback;
 
 /*
  * Lockdep annotation that avoids accidental unlocks; it's like a
@@ -1724,7 +1730,7 @@ init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
 
 static inline void
 queue_balance_callback(struct rq *rq,
-                      struct callback_head *head,
+                      struct balance_callback *head,
                       void (*func)(struct rq *rq))
 {
        lockdep_assert_rq_held(rq);
@@ -1737,7 +1743,7 @@ queue_balance_callback(struct rq *rq,
        if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
                return;
 
-       head->func = (void (*)(struct callback_head *))func;
+       head->func = func;
        head->next = rq->balance_callback;
        rq->balance_callback = head;
 }
index baa839c..84a1889 100644 (file)
@@ -107,6 +107,11 @@ __schedstats_from_se(struct sched_entity *se)
 }
 
 #ifdef CONFIG_PSI
+void psi_task_change(struct task_struct *task, int clear, int set);
+void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+                    bool sleep);
+void psi_account_irqtime(struct task_struct *task, u32 delta);
+
 /*
  * PSI tracks state that persists across sleeps, such as iowaits and
  * memory stalls. As a result, it has to distinguish between sleeps,
@@ -201,6 +206,7 @@ static inline void psi_ttwu_dequeue(struct task_struct *p) {}
 static inline void psi_sched_switch(struct task_struct *prev,
                                    struct task_struct *next,
                                    bool sleep) {}
+static inline void psi_account_irqtime(struct task_struct *task, u32 delta) {}
 #endif /* CONFIG_PSI */
 
 #ifdef CONFIG_SCHED_INFO
index cee5da1..8058bec 100644 (file)
@@ -310,7 +310,7 @@ static void clocksource_verify_choose_cpus(void)
         * CPUs that are currently online.
         */
        for (i = 1; i < n; i++) {
-               cpu = prandom_u32() % nr_cpu_ids;
+               cpu = prandom_u32_max(nr_cpu_ids);
                cpu = cpumask_next(cpu - 1, cpu_online_mask);
                if (cpu >= nr_cpu_ids)
                        cpu = cpumask_first(cpu_online_mask);
index 7f5eb29..a995ea1 100644 (file)
@@ -346,8 +346,40 @@ static void put_probe_ref(void)
        mutex_unlock(&blk_probe_mutex);
 }
 
+static int blk_trace_start(struct blk_trace *bt)
+{
+       if (bt->trace_state != Blktrace_setup &&
+           bt->trace_state != Blktrace_stopped)
+               return -EINVAL;
+
+       blktrace_seq++;
+       smp_mb();
+       bt->trace_state = Blktrace_running;
+       raw_spin_lock_irq(&running_trace_lock);
+       list_add(&bt->running_list, &running_trace_list);
+       raw_spin_unlock_irq(&running_trace_lock);
+       trace_note_time(bt);
+
+       return 0;
+}
+
+static int blk_trace_stop(struct blk_trace *bt)
+{
+       if (bt->trace_state != Blktrace_running)
+               return -EINVAL;
+
+       bt->trace_state = Blktrace_stopped;
+       raw_spin_lock_irq(&running_trace_lock);
+       list_del_init(&bt->running_list);
+       raw_spin_unlock_irq(&running_trace_lock);
+       relay_flush(bt->rchan);
+
+       return 0;
+}
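The new `blk_trace_start()`/`blk_trace_stop()` helpers centralize the trace state machine: starting is only legal from `Blktrace_setup` or `Blktrace_stopped`, stopping only from `Blktrace_running`, and the ioctl, cleanup, and queue-removal paths now share one implementation instead of three open-coded copies. A reduced model of the transitions (`-1` standing in for `-EINVAL`):

```c
/* Reduced model of the blktrace state machine factored into helpers:
 * each helper validates the current state and rejects an invalid
 * transition, so every caller shares the same rules. */
enum trace_state { TRACE_SETUP, TRACE_RUNNING, TRACE_STOPPED };

static int trace_start(enum trace_state *st)
{
        if (*st != TRACE_SETUP && *st != TRACE_STOPPED)
                return -1;
        *st = TRACE_RUNNING;
        return 0;
}

static int trace_stop(enum trace_state *st)
{
        if (*st != TRACE_RUNNING)
                return -1;
        *st = TRACE_STOPPED;
        return 0;
}
```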
+
 static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt)
 {
+       blk_trace_stop(bt);
        synchronize_rcu();
        blk_trace_free(q, bt);
        put_probe_ref();
@@ -362,8 +394,7 @@ static int __blk_trace_remove(struct request_queue *q)
        if (!bt)
                return -EINVAL;
 
-       if (bt->trace_state != Blktrace_running)
-               blk_trace_cleanup(q, bt);
+       blk_trace_cleanup(q, bt);
 
        return 0;
 }
@@ -658,7 +689,6 @@ static int compat_blk_trace_setup(struct request_queue *q, char *name,
 
 static int __blk_trace_startstop(struct request_queue *q, int start)
 {
-       int ret;
        struct blk_trace *bt;
 
        bt = rcu_dereference_protected(q->blk_trace,
@@ -666,36 +696,10 @@ static int __blk_trace_startstop(struct request_queue *q, int start)
        if (bt == NULL)
                return -EINVAL;
 
-       /*
-        * For starting a trace, we can transition from a setup or stopped
-        * trace. For stopping a trace, the state must be running
-        */
-       ret = -EINVAL;
-       if (start) {
-               if (bt->trace_state == Blktrace_setup ||
-                   bt->trace_state == Blktrace_stopped) {
-                       blktrace_seq++;
-                       smp_mb();
-                       bt->trace_state = Blktrace_running;
-                       raw_spin_lock_irq(&running_trace_lock);
-                       list_add(&bt->running_list, &running_trace_list);
-                       raw_spin_unlock_irq(&running_trace_lock);
-
-                       trace_note_time(bt);
-                       ret = 0;
-               }
-       } else {
-               if (bt->trace_state == Blktrace_running) {
-                       bt->trace_state = Blktrace_stopped;
-                       raw_spin_lock_irq(&running_trace_lock);
-                       list_del_init(&bt->running_list);
-                       raw_spin_unlock_irq(&running_trace_lock);
-                       relay_flush(bt->rchan);
-                       ret = 0;
-               }
-       }
-
-       return ret;
+       if (start)
+               return blk_trace_start(bt);
+       else
+               return blk_trace_stop(bt);
 }
 
 int blk_trace_startstop(struct request_queue *q, int start)
@@ -772,10 +776,8 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 void blk_trace_shutdown(struct request_queue *q)
 {
        if (rcu_dereference_protected(q->blk_trace,
-                                     lockdep_is_held(&q->debugfs_mutex))) {
-               __blk_trace_startstop(q, 0);
+                                     lockdep_is_held(&q->debugfs_mutex)))
                __blk_trace_remove(q);
-       }
 }
 
 #ifdef CONFIG_BLK_CGROUP
@@ -1614,13 +1616,7 @@ static int blk_trace_remove_queue(struct request_queue *q)
        if (bt == NULL)
                return -EINVAL;
 
-       if (bt->trace_state == Blktrace_running) {
-               bt->trace_state = Blktrace_stopped;
-               raw_spin_lock_irq(&running_trace_lock);
-               list_del_init(&bt->running_list);
-               raw_spin_unlock_irq(&running_trace_lock);
-               relay_flush(bt->rchan);
-       }
+       blk_trace_stop(bt);
 
        put_probe_ref();
        synchronize_rcu();
index 49fb9ec..1ed0896 100644 (file)
@@ -687,6 +687,7 @@ BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
 
        perf_sample_data_init(sd, 0, 0);
        sd->raw = &raw;
+       sd->sample_flags |= PERF_SAMPLE_RAW;
 
        err = __bpf_perf_event_output(regs, map, flags, sd);
 
@@ -745,6 +746,7 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
        perf_fetch_caller_regs(regs);
        perf_sample_data_init(sd, 0, 0);
        sd->raw = &raw;
+       sd->sample_flags |= PERF_SAMPLE_RAW;
 
        ret = __bpf_perf_event_output(regs, map, flags, sd);
 out:
index 064072c..f50398c 100644 (file)
@@ -74,6 +74,7 @@ static int proc_do_uts_string(struct ctl_table *table, int write,
 static DEFINE_CTL_TABLE_POLL(hostname_poll);
 static DEFINE_CTL_TABLE_POLL(domainname_poll);
 
+/* Note: update 'enum uts_proc' to match any changes to this table */
 static struct ctl_table uts_kern_table[] = {
        {
                .procname       = "arch",
index 73178b0..3fc7abf 100644 (file)
@@ -231,6 +231,11 @@ config DEBUG_INFO
          in the "Debug information" choice below, indicating that debug
          information will be generated for build targets.
 
+# Clang is known to generate .{s,u}leb128 with symbol deltas with DWARF5, which
+# some targets may not support: https://sourceware.org/bugzilla/show_bug.cgi?id=27215
+config AS_HAS_NON_CONST_LEB128
+       def_bool $(as-instr,.uleb128 .Lexpr_end4 - .Lexpr_start3\n.Lexpr_start3:\n.Lexpr_end4:)
+
 choice
        prompt "Debug information"
        depends on DEBUG_KERNEL
@@ -253,6 +258,7 @@ config DEBUG_INFO_NONE
 config DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
        bool "Rely on the toolchain's implicit default DWARF version"
        select DEBUG_INFO
+       depends on !CC_IS_CLANG || AS_IS_LLVM || CLANG_VERSION < 140000 || (AS_IS_GNU && AS_VERSION >= 23502 && AS_HAS_NON_CONST_LEB128)
        help
          The implicit default version of DWARF debug info produced by a
          toolchain changes over time.
@@ -264,7 +270,7 @@ config DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
 config DEBUG_INFO_DWARF4
        bool "Generate DWARF Version 4 debuginfo"
        select DEBUG_INFO
-       depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
+       depends on !CC_IS_CLANG || AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)
        help
          Generate DWARF v4 debug info. This requires gcc 4.5+, binutils 2.35.2
          if using clang without clang's integrated assembler, and gdb 7.0+.
@@ -276,7 +282,7 @@ config DEBUG_INFO_DWARF4
 config DEBUG_INFO_DWARF5
        bool "Generate DWARF Version 5 debuginfo"
        select DEBUG_INFO
-       depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
+       depends on !CC_IS_CLANG || AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502 && AS_HAS_NON_CONST_LEB128)
        help
          Generate DWARF v5 debug info. Requires binutils 2.35.2, gcc 5.0+ (gcc
          5.0+ accepts the -gdwarf-5 flag but only had partial support for some
index 05dae05..3b9a440 100644 (file)
@@ -121,7 +121,7 @@ config KDB_DEFAULT_ENABLE
 
 config KDB_KEYBOARD
        bool "KGDB_KDB: keyboard as input device"
-       depends on VT && KGDB_KDB
+       depends on VT && KGDB_KDB && !PARISC
        default n
        help
          KDB can use a PS/2 type keyboard for an input device
index a72a2c1..d4572db 100644 (file)
@@ -76,7 +76,7 @@ static void cmdline_test_lead_int(struct kunit *test)
                int rc = cmdline_test_values[i];
                int offset;
 
-               sprintf(in, "%u%s", get_random_int() % 256, str);
+               sprintf(in, "%u%s", get_random_u8(), str);
                /* Only first '-' after the number will advance the pointer */
                offset = strlen(in) - strlen(str) + !!(rc == 2);
                cmdline_do_one_test(test, in, rc, offset);
@@ -94,7 +94,7 @@ static void cmdline_test_tail_int(struct kunit *test)
                int rc = strcmp(str, "") ? (strcmp(str, "-") ? 0 : 1) : 1;
                int offset;
 
-               sprintf(in, "%s%u", str, get_random_int() % 256);
+               sprintf(in, "%s%u", str, get_random_u8());
                /*
                 * Only first and leading '-' not followed by integer
                 * will advance the pointer.
index 423784d..96e092d 100644 (file)
@@ -139,7 +139,7 @@ bool should_fail(struct fault_attr *attr, ssize_t size)
                        return false;
        }
 
-       if (attr->probability <= prandom_u32() % 100)
+       if (attr->probability <= prandom_u32_max(100))
                return false;
 
        if (!fail_stacktrace(attr))
index 1075458..7c3c011 100644 (file)
@@ -174,8 +174,8 @@ static int __init find_bit_test(void)
        bitmap_zero(bitmap2, BITMAP_LEN);
 
        while (nbits--) {
-               __set_bit(prandom_u32() % BITMAP_LEN, bitmap);
-               __set_bit(prandom_u32() % BITMAP_LEN, bitmap2);
+               __set_bit(prandom_u32_max(BITMAP_LEN), bitmap);
+               __set_bit(prandom_u32_max(BITMAP_LEN), bitmap2);
        }
 
        test_find_next_bit(bitmap, BITMAP_LEN);
index 5f0e71a..a0b2dbf 100644 (file)
@@ -694,7 +694,7 @@ static void kobject_release(struct kref *kref)
 {
        struct kobject *kobj = container_of(kref, struct kobject, kref);
 #ifdef CONFIG_DEBUG_KOBJECT_RELEASE
-       unsigned long delay = HZ + HZ * (get_random_int() & 0x3);
+       unsigned long delay = HZ + HZ * prandom_u32_max(4);
        pr_info("kobject: '%s' (%p): %s, parent %p (delayed %ld)\n",
                 kobject_name(kobj), kobj, __func__, kobj->parent, delay);
        INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup);
index f5ae79c..a608746 100644 (file)
@@ -56,8 +56,8 @@ int string_stream_vadd(struct string_stream *stream,
        frag_container = alloc_string_stream_fragment(stream->test,
                                                      len,
                                                      stream->gfp);
-       if (!frag_container)
-               return -ENOMEM;
+       if (IS_ERR(frag_container))
+               return PTR_ERR(frag_container);
 
        len = vsnprintf(frag_container->fragment, len, fmt, args);
        spin_lock(&stream->lock);
index 90640a4..2a6992f 100644 (file)
@@ -265,7 +265,7 @@ static void kunit_fail(struct kunit *test, const struct kunit_loc *loc,
        kunit_set_failure(test);
 
        stream = alloc_string_stream(test, GFP_KERNEL);
-       if (!stream) {
+       if (IS_ERR(stream)) {
                WARN(true,
                     "Could not allocate stream to print failed assertion in %s:%d\n",
                     loc->file,
index d5d9029..32060b8 100644 (file)
@@ -47,7 +47,7 @@
  *     @state: pointer to state structure holding seeded state.
  *
  *     This is used for pseudo-randomness with no outside seeding.
- *     For more random results, use prandom_u32().
+ *     For more random results, use get_random_u32().
  */
 u32 prandom_u32_state(struct rnd_state *state)
 {
@@ -69,7 +69,7 @@ EXPORT_SYMBOL(prandom_u32_state);
  *     @bytes: the requested number of bytes
  *
  *     This is used for pseudo-randomness with no outside seeding.
- *     For more random results, use prandom_bytes().
+ *     For more random results, use get_random_bytes().
  */
 void prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
 {
index d9d1c33..848e7eb 100644 (file)
@@ -164,7 +164,7 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
 
        /* Load c with random data and encode */
        for (i = 0; i < dlen; i++)
-               c[i] = prandom_u32() & nn;
+               c[i] = get_random_u32() & nn;
 
        memset(c + dlen, 0, nroots * sizeof(*c));
        encode_rs16(rs, c, dlen, c + dlen, 0);
@@ -178,12 +178,12 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
        for (i = 0; i < errs; i++) {
                do {
                        /* Error value must be nonzero */
-                       errval = prandom_u32() & nn;
+                       errval = get_random_u32() & nn;
                } while (errval == 0);
 
                do {
                        /* Must not choose the same location twice */
-                       errloc = prandom_u32() % len;
+                       errloc = prandom_u32_max(len);
                } while (errlocs[errloc] != 0);
 
                errlocs[errloc] = 1;
@@ -194,19 +194,19 @@ static int get_rcw_we(struct rs_control *rs, struct wspace *ws,
        for (i = 0; i < eras; i++) {
                do {
                        /* Must not choose the same location twice */
-                       errloc = prandom_u32() % len;
+                       errloc = prandom_u32_max(len);
                } while (errlocs[errloc] != 0);
 
                derrlocs[i] = errloc;
 
-               if (ewsc && (prandom_u32() & 1)) {
+               if (ewsc && prandom_u32_max(2)) {
                        /* Erasure with the symbol intact */
                        errlocs[errloc] = 2;
                } else {
                        /* Erasure with corrupted symbol */
                        do {
                                /* Error value must be nonzero */
-                               errval = prandom_u32() & nn;
+                               errval = get_random_u32() & nn;
                        } while (errval == 0);
 
                        errlocs[errloc] = 1;
index a8108a9..7280ae8 100644 (file)
@@ -21,7 +21,7 @@ static int init_alloc_hint(struct sbitmap *sb, gfp_t flags)
                int i;
 
                for_each_possible_cpu(i)
-                       *per_cpu_ptr(sb->alloc_hint, i) = prandom_u32() % depth;
+                       *per_cpu_ptr(sb->alloc_hint, i) = prandom_u32_max(depth);
        }
        return 0;
 }
@@ -33,7 +33,7 @@ static inline unsigned update_alloc_hint_before_get(struct sbitmap *sb,
 
        hint = this_cpu_read(*sb->alloc_hint);
        if (unlikely(hint >= depth)) {
-               hint = depth ? prandom_u32() % depth : 0;
+               hint = depth ? prandom_u32_max(depth) : 0;
                this_cpu_write(*sb->alloc_hint, hint);
        }
 
index 437d8e6..86fadd3 100644 (file)
@@ -587,7 +587,7 @@ static int __init test_string_helpers_init(void)
        for (i = 0; i < UNESCAPE_ALL_MASK + 1; i++)
                test_string_unescape("unescape", i, false);
        test_string_unescape("unescape inplace",
-                            get_random_int() % (UNESCAPE_ANY + 1), true);
+                            prandom_u32_max(UNESCAPE_ANY + 1), true);
 
        /* Without dictionary */
        for (i = 0; i < ESCAPE_ALL_MASK + 1; i++)
index ed70637..e0381b3 100644 (file)
@@ -145,7 +145,7 @@ static unsigned long get_ftrace_location(void *func)
 static int fprobe_test_init(struct kunit *test)
 {
        do {
-               rand1 = prandom_u32();
+               rand1 = get_random_u32();
        } while (rand1 <= div_factor);
 
        target = fprobe_selftest_target;
index 5144899..0927f44 100644 (file)
@@ -149,7 +149,7 @@ static void __init test_hexdump(size_t len, int rowsize, int groupsize,
 static void __init test_hexdump_set(int rowsize, bool ascii)
 {
        size_t d = min_t(size_t, sizeof(data_b), rowsize);
-       size_t len = get_random_int() % d + 1;
+       size_t len = prandom_u32_max(d) + 1;
 
        test_hexdump(len, rowsize, 4, ascii);
        test_hexdump(len, rowsize, 2, ascii);
@@ -208,11 +208,11 @@ static void __init test_hexdump_overflow(size_t buflen, size_t len,
 static void __init test_hexdump_overflow_set(size_t buflen, bool ascii)
 {
        unsigned int i = 0;
-       int rs = (get_random_int() % 2 + 1) * 16;
+       int rs = (prandom_u32_max(2) + 1) * 16;
 
        do {
                int gs = 1 << i;
-               size_t len = get_random_int() % rs + gs;
+               size_t len = prandom_u32_max(rs) + gs;
 
                test_hexdump_overflow(buflen, rounddown(len, gs), rs, gs, ascii);
        } while (i++ < 3);
@@ -223,11 +223,11 @@ static int __init test_hexdump_init(void)
        unsigned int i;
        int rowsize;
 
-       rowsize = (get_random_int() % 2 + 1) * 16;
+       rowsize = (prandom_u32_max(2) + 1) * 16;
        for (i = 0; i < 16; i++)
                test_hexdump_set(rowsize, false);
 
-       rowsize = (get_random_int() % 2 + 1) * 16;
+       rowsize = (prandom_u32_max(2) + 1) * 16;
        for (i = 0; i < 16; i++)
                test_hexdump_set(rowsize, true);
 
index 6a33f6b..67e6f83 100644 (file)
@@ -100,6 +100,7 @@ struct dmirror {
 struct dmirror_chunk {
        struct dev_pagemap      pagemap;
        struct dmirror_device   *mdevice;
+       bool remove;
 };
 
 /*
@@ -192,11 +193,15 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)
        return 0;
 }
 
+static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
+{
+       return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+}
+
 static struct dmirror_device *dmirror_page_to_device(struct page *page)
 
 {
-       return container_of(page->pgmap, struct dmirror_chunk,
-                           pagemap)->mdevice;
+       return dmirror_page_to_chunk(page)->mdevice;
 }
 
 static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
@@ -627,8 +632,8 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
                        goto error;
        }
 
+       zone_device_page_init(dpage);
        dpage->zone_device_data = rpage;
-       lock_page(dpage);
        return dpage;
 
 error:
@@ -907,7 +912,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
        struct vm_area_struct *vma;
        unsigned long src_pfns[64] = { 0 };
        unsigned long dst_pfns[64] = { 0 };
-       struct migrate_vma args;
+       struct migrate_vma args = { 0 };
        unsigned long next;
        int ret;
 
@@ -968,7 +973,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
        unsigned long src_pfns[64] = { 0 };
        unsigned long dst_pfns[64] = { 0 };
        struct dmirror_bounce bounce;
-       struct migrate_vma args;
+       struct migrate_vma args = { 0 };
        unsigned long next;
        int ret;
 
@@ -1218,6 +1223,85 @@ static int dmirror_snapshot(struct dmirror *dmirror,
        return ret;
 }
 
+static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
+{
+       unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
+       unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
+       unsigned long npages = end_pfn - start_pfn + 1;
+       unsigned long i;
+       unsigned long *src_pfns;
+       unsigned long *dst_pfns;
+
+       src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+       dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+
+       migrate_device_range(src_pfns, start_pfn, npages);
+       for (i = 0; i < npages; i++) {
+               struct page *dpage, *spage;
+
+               spage = migrate_pfn_to_page(src_pfns[i]);
+               if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+                       continue;
+
+               if (WARN_ON(!is_device_private_page(spage) &&
+                           !is_device_coherent_page(spage)))
+                       continue;
+               spage = BACKING_PAGE(spage);
+               dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+               lock_page(dpage);
+               copy_highpage(dpage, spage);
+               dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+               if (src_pfns[i] & MIGRATE_PFN_WRITE)
+                       dst_pfns[i] |= MIGRATE_PFN_WRITE;
+       }
+       migrate_device_pages(src_pfns, dst_pfns, npages);
+       migrate_device_finalize(src_pfns, dst_pfns, npages);
+       kfree(src_pfns);
+       kfree(dst_pfns);
+}
+
+/* Removes free pages from the free list so they can't be re-allocated */
+static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
+{
+       struct dmirror_device *mdevice = devmem->mdevice;
+       struct page *page;
+
+       for (page = mdevice->free_pages; page; page = page->zone_device_data)
+               if (dmirror_page_to_chunk(page) == devmem)
+                       mdevice->free_pages = page->zone_device_data;
+}
+
+static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
+{
+       unsigned int i;
+
+       mutex_lock(&mdevice->devmem_lock);
+       if (mdevice->devmem_chunks) {
+               for (i = 0; i < mdevice->devmem_count; i++) {
+                       struct dmirror_chunk *devmem =
+                               mdevice->devmem_chunks[i];
+
+                       spin_lock(&mdevice->lock);
+                       devmem->remove = true;
+                       dmirror_remove_free_pages(devmem);
+                       spin_unlock(&mdevice->lock);
+
+                       dmirror_device_evict_chunk(devmem);
+                       memunmap_pages(&devmem->pagemap);
+                       if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
+                               release_mem_region(devmem->pagemap.range.start,
+                                                  range_len(&devmem->pagemap.range));
+                       kfree(devmem);
+               }
+               mdevice->devmem_count = 0;
+               mdevice->devmem_capacity = 0;
+               mdevice->free_pages = NULL;
+               kfree(mdevice->devmem_chunks);
+               mdevice->devmem_chunks = NULL;
+       }
+       mutex_unlock(&mdevice->devmem_lock);
+}
+
 static long dmirror_fops_unlocked_ioctl(struct file *filp,
                                        unsigned int command,
                                        unsigned long arg)
@@ -1272,6 +1356,11 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
                ret = dmirror_snapshot(dmirror, &cmd);
                break;
 
+       case HMM_DMIRROR_RELEASE:
+               dmirror_device_remove_chunks(dmirror->mdevice);
+               ret = 0;
+               break;
+
        default:
                return -EINVAL;
        }
@@ -1326,15 +1415,19 @@ static void dmirror_devmem_free(struct page *page)
 
        mdevice = dmirror_page_to_device(page);
        spin_lock(&mdevice->lock);
-       mdevice->cfree++;
-       page->zone_device_data = mdevice->free_pages;
-       mdevice->free_pages = page;
+
+       /* Return page to our allocator if not freeing the chunk */
+       if (!dmirror_page_to_chunk(page)->remove) {
+               mdevice->cfree++;
+               page->zone_device_data = mdevice->free_pages;
+               mdevice->free_pages = page;
+       }
        spin_unlock(&mdevice->lock);
 }
 
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
-       struct migrate_vma args;
+       struct migrate_vma args = { 0 };
        unsigned long src_pfns = 0;
        unsigned long dst_pfns = 0;
        struct page *rpage;
@@ -1357,6 +1450,7 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
        args.dst = &dst_pfns;
        args.pgmap_owner = dmirror->mdevice;
        args.flags = dmirror_select_device(dmirror);
+       args.fault_page = vmf->page;
 
        if (migrate_vma_setup(&args))
                return VM_FAULT_SIGBUS;
@@ -1407,22 +1501,7 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id)
 
 static void dmirror_device_remove(struct dmirror_device *mdevice)
 {
-       unsigned int i;
-
-       if (mdevice->devmem_chunks) {
-               for (i = 0; i < mdevice->devmem_count; i++) {
-                       struct dmirror_chunk *devmem =
-                               mdevice->devmem_chunks[i];
-
-                       memunmap_pages(&devmem->pagemap);
-                       if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
-                               release_mem_region(devmem->pagemap.range.start,
-                                                  range_len(&devmem->pagemap.range));
-                       kfree(devmem);
-               }
-               kfree(mdevice->devmem_chunks);
-       }
-
+       dmirror_device_remove_chunks(mdevice);
        cdev_device_del(&mdevice->cdevice, &mdevice->device);
 }
 
index e31d58c..8c818a2 100644 (file)
@@ -36,6 +36,7 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_SNAPSHOT           _IOWR('H', 0x04, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_EXCLUSIVE          _IOWR('H', 0x05, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_CHECK_EXCLUSIVE    _IOWR('H', 0x06, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_RELEASE            _IOWR('H', 0x07, struct hmm_dmirror_cmd)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
index a5edc2e..eeb1d72 100644 (file)
@@ -341,7 +341,7 @@ static int kprobes_test_init(struct kunit *test)
        stacktrace_driver = kprobe_stacktrace_driver;
 
        do {
-               rand1 = prandom_u32();
+               rand1 = get_random_u32();
        } while (rand1 <= div_factor);
        return 0;
 }
index ade7a1e..19ff229 100644 (file)
@@ -71,7 +71,7 @@ static void list_sort_test(struct kunit *test)
                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, el);
 
                 /* force some equivalencies */
-               el->value = prandom_u32() % (TEST_LIST_LEN / 3);
+               el->value = prandom_u32_max(TEST_LIST_LEN / 3);
                el->serial = i;
                el->poison1 = TEST_POISON1;
                el->poison2 = TEST_POISON2;
index c95db11..60e1984 100644 (file)
@@ -67,17 +67,24 @@ static int __init do_alloc_pages_order(int order, int *total_failures)
        size_t size = PAGE_SIZE << order;
 
        page = alloc_pages(GFP_KERNEL, order);
+       if (!page)
+               goto err;
        buf = page_address(page);
        fill_with_garbage(buf, size);
        __free_pages(page, order);
 
        page = alloc_pages(GFP_KERNEL, order);
+       if (!page)
+               goto err;
        buf = page_address(page);
        if (count_nonzero_bytes(buf, size))
                (*total_failures)++;
        fill_with_garbage(buf, size);
        __free_pages(page, order);
        return 1;
+err:
+       (*total_failures)++;
+       return 1;
 }
 
 /* Test the page allocator by calling alloc_pages with different orders. */
@@ -100,15 +107,22 @@ static int __init do_kmalloc_size(size_t size, int *total_failures)
        void *buf;
 
        buf = kmalloc(size, GFP_KERNEL);
+       if (!buf)
+               goto err;
        fill_with_garbage(buf, size);
        kfree(buf);
 
        buf = kmalloc(size, GFP_KERNEL);
+       if (!buf)
+               goto err;
        if (count_nonzero_bytes(buf, size))
                (*total_failures)++;
        fill_with_garbage(buf, size);
        kfree(buf);
        return 1;
+err:
+       (*total_failures)++;
+       return 1;
 }
 
 /* Test vmalloc() with given parameters. */
@@ -117,15 +131,22 @@ static int __init do_vmalloc_size(size_t size, int *total_failures)
        void *buf;
 
        buf = vmalloc(size);
+       if (!buf)
+               goto err;
        fill_with_garbage(buf, size);
        vfree(buf);
 
        buf = vmalloc(size);
+       if (!buf)
+               goto err;
        if (count_nonzero_bytes(buf, size))
                (*total_failures)++;
        fill_with_garbage(buf, size);
        vfree(buf);
        return 1;
+err:
+       (*total_failures)++;
+       return 1;
 }
 
 /* Test kmalloc()/vmalloc() by allocating objects of different sizes. */
index d19c808..7b01b43 100644 (file)
@@ -83,7 +83,7 @@ static __init int test_heapify_all(bool min_heap)
        /* Test with randomly generated values. */
        heap.nr = ARRAY_SIZE(values);
        for (i = 0; i < heap.nr; i++)
-               values[i] = get_random_int();
+               values[i] = get_random_u32();
 
        min_heapify_all(&heap, &funcs);
        err += pop_verify_heap(min_heap, &heap, &funcs);
@@ -116,7 +116,7 @@ static __init int test_heap_push(bool min_heap)
 
        /* Test with randomly generated values. */
        while (heap.nr < heap.size) {
-               temp = get_random_int();
+               temp = get_random_u32();
                min_heap_push(&heap, &temp, &funcs);
        }
        err += pop_verify_heap(min_heap, &heap, &funcs);
@@ -158,7 +158,7 @@ static __init int test_heap_pop_push(bool min_heap)
 
        /* Test with randomly generated values. */
        for (i = 0; i < ARRAY_SIZE(data); i++) {
-               temp = get_random_int();
+               temp = get_random_u32();
                min_heap_pop_push(&heap, &temp, &funcs);
        }
        err += pop_verify_heap(min_heap, &heap, &funcs);
index da13793..c0c957c 100644 (file)
@@ -157,7 +157,7 @@ static int test_nodelta_obj_get(struct world *world, struct objagg *objagg,
        int err;
 
        if (should_create_root)
-               prandom_bytes(world->next_root_buf,
+               get_random_bytes(world->next_root_buf,
                              sizeof(world->next_root_buf));
 
        objagg_obj = world_obj_get(world, objagg, key_id);
index 5a1dd47..b358a74 100644 (file)
@@ -291,7 +291,7 @@ static int __init test_rhltable(unsigned int entries)
        if (WARN_ON(err))
                goto out_free;
 
-       k = prandom_u32();
+       k = get_random_u32();
        ret = 0;
        for (i = 0; i < entries; i++) {
                rhl_test_objects[i].value.id = k;
@@ -369,12 +369,12 @@ static int __init test_rhltable(unsigned int entries)
        pr_info("test %d random rhlist add/delete operations\n", entries);
        for (j = 0; j < entries; j++) {
                u32 i = prandom_u32_max(entries);
-               u32 prand = prandom_u32();
+               u32 prand = get_random_u32();
 
                cond_resched();
 
                if (prand == 0)
-                       prand = prandom_u32();
+                       prand = get_random_u32();
 
                if (prand & 1) {
                        prand >>= 1;
index 4f2f2d1..cf77805 100644 (file)
@@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void)
        int i;
 
        for (i = 0; i < test_loop_count; i++) {
-               rnd = prandom_u32();
+               rnd = get_random_u8();
 
                /*
                 * Maximum 1024 pages, if PAGE_SIZE is 4096.
@@ -151,9 +151,7 @@ static int random_size_alloc_test(void)
        int i;
 
        for (i = 0; i < test_loop_count; i++) {
-               n = prandom_u32();
-               n = (n % 100) + 1;
-
+               n = prandom_u32_max(100) + 1;
                p = vmalloc(n * PAGE_SIZE);
 
                if (!p)
@@ -293,16 +291,12 @@ pcpu_alloc_test(void)
                return -1;
 
        for (i = 0; i < 35000; i++) {
-               unsigned int r;
-
-               r = prandom_u32();
-               size = (r % (PAGE_SIZE / 4)) + 1;
+               size = prandom_u32_max(PAGE_SIZE / 4) + 1;
 
                /*
                 * Maximum PAGE_SIZE
                 */
-               r = prandom_u32();
-               align = 1 << ((r % 11) + 1);
+               align = 1 << (prandom_u32_max(11) + 1);
 
                pcpu[i] = __alloc_percpu(size, align);
                if (!pcpu[i])
@@ -393,14 +387,11 @@ static struct test_driver {
 
 static void shuffle_array(int *arr, int n)
 {
-       unsigned int rnd;
        int i, j;
 
        for (i = n - 1; i > 0; i--)  {
-               rnd = prandom_u32();
-
                /* Cut the range. */
-               j = rnd % i;
+               j = prandom_u32_max(i);
 
                /* Swap indexes. */
                swap(arr[i], arr[j]);
index 562d539..e309b4c 100644 (file)
@@ -52,7 +52,7 @@ EXPORT_SYMBOL(generate_random_guid);
 
 static void __uuid_gen_common(__u8 b[16])
 {
-       prandom_bytes(b, 16);
+       get_random_bytes(b, 16);
        /* variant 0b10 (RFC 4122) */
        b[8] = (b[8] & 0x3F) | 0x80;
 }
index 2dd02c4..c51f7f5 100644 (file)
@@ -1847,7 +1847,6 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
                                        pfn = cc->zone->zone_start_pfn;
                                cc->fast_search_fail = 0;
                                found_block = true;
-                               set_pageblock_skip(freepage);
                                break;
                        }
                }
index 8e1ab38..36d098d 100644 (file)
@@ -491,7 +491,7 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 
        damon_for_each_target(t, ctx) {
                damon_for_each_region(r, t)
-                       sz += r->ar.end - r->ar.start;
+                       sz += damon_sz_region(r);
        }
 
        if (ctx->attrs.min_nr_regions)
@@ -674,7 +674,7 @@ static bool __damos_valid_target(struct damon_region *r, struct damos *s)
 {
        unsigned long sz;
 
-       sz = r->ar.end - r->ar.start;
+       sz = damon_sz_region(r);
        return s->pattern.min_sz_region <= sz &&
                sz <= s->pattern.max_sz_region &&
                s->pattern.min_nr_accesses <= r->nr_accesses &&
@@ -702,7 +702,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 
        damon_for_each_scheme(s, c) {
                struct damos_quota *quota = &s->quota;
-               unsigned long sz = r->ar.end - r->ar.start;
+               unsigned long sz = damon_sz_region(r);
                struct timespec64 begin, end;
                unsigned long sz_applied = 0;
 
@@ -731,14 +731,14 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
                                sz = ALIGN_DOWN(quota->charge_addr_from -
                                                r->ar.start, DAMON_MIN_REGION);
                                if (!sz) {
-                                       if (r->ar.end - r->ar.start <=
-                                                       DAMON_MIN_REGION)
+                                       if (damon_sz_region(r) <=
+                                           DAMON_MIN_REGION)
                                                continue;
                                        sz = DAMON_MIN_REGION;
                                }
                                damon_split_region_at(t, r, sz);
                                r = damon_next_region(r);
-                               sz = r->ar.end - r->ar.start;
+                               sz = damon_sz_region(r);
                        }
                        quota->charge_target_from = NULL;
                        quota->charge_addr_from = 0;
@@ -843,8 +843,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
                                        continue;
                                score = c->ops.get_scheme_score(
                                                c, t, r, s);
-                               quota->histogram[score] +=
-                                       r->ar.end - r->ar.start;
+                               quota->histogram[score] += damon_sz_region(r);
                                if (score > max_score)
                                        max_score = score;
                        }
@@ -865,18 +864,13 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
        }
 }
 
-static inline unsigned long sz_damon_region(struct damon_region *r)
-{
-       return r->ar.end - r->ar.start;
-}
-
 /*
  * Merge two adjacent regions into one region
  */
 static void damon_merge_two_regions(struct damon_target *t,
                struct damon_region *l, struct damon_region *r)
 {
-       unsigned long sz_l = sz_damon_region(l), sz_r = sz_damon_region(r);
+       unsigned long sz_l = damon_sz_region(l), sz_r = damon_sz_region(r);
 
        l->nr_accesses = (l->nr_accesses * sz_l + r->nr_accesses * sz_r) /
                        (sz_l + sz_r);
@@ -905,7 +899,7 @@ static void damon_merge_regions_of(struct damon_target *t, unsigned int thres,
 
                if (prev && prev->ar.end == r->ar.start &&
                    abs(prev->nr_accesses - r->nr_accesses) <= thres &&
-                   sz_damon_region(prev) + sz_damon_region(r) <= sz_limit)
+                   damon_sz_region(prev) + damon_sz_region(r) <= sz_limit)
                        damon_merge_two_regions(t, prev, r);
                else
                        prev = r;
@@ -963,7 +957,7 @@ static void damon_split_regions_of(struct damon_target *t, int nr_subs)
        int i;
 
        damon_for_each_region_safe(r, next, t) {
-               sz_region = r->ar.end - r->ar.start;
+               sz_region = damon_sz_region(r);
 
                for (i = 0; i < nr_subs - 1 &&
                                sz_region > 2 * DAMON_MIN_REGION; i++) {
index ea94e0b..15f03df 100644 (file)
@@ -72,7 +72,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
                return -EINVAL;
 
        orig_end = r->ar.end;
-       sz_orig = r->ar.end - r->ar.start;
+       sz_orig = damon_sz_region(r);
        sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);
 
        if (!sz_piece)
@@ -618,7 +618,7 @@ static unsigned long damos_madvise(struct damon_target *target,
 {
        struct mm_struct *mm;
        unsigned long start = PAGE_ALIGN(r->ar.start);
-       unsigned long len = PAGE_ALIGN(r->ar.end - r->ar.start);
+       unsigned long len = PAGE_ALIGN(damon_sz_region(r));
        unsigned long applied;
 
        mm = damon_get_mm(target);
index c707d72..db251e7 100644 (file)
 #include <asm/tlbflush.h>
 #include <linux/vmalloc.h>
 
+#ifdef CONFIG_KMAP_LOCAL
+static inline int kmap_local_calc_idx(int idx)
+{
+       return idx + KM_MAX_IDX * smp_processor_id();
+}
+
+#ifndef arch_kmap_local_map_idx
+#define arch_kmap_local_map_idx(idx, pfn)      kmap_local_calc_idx(idx)
+#endif
+#endif /* CONFIG_KMAP_LOCAL */
+
 /*
  * Virtual_count is not a pure "count".
  *  0 means that it is not mapped, and has not been mapped
@@ -142,12 +153,29 @@ pte_t *pkmap_page_table;
 
 struct page *__kmap_to_page(void *vaddr)
 {
+       unsigned long base = (unsigned long) vaddr & PAGE_MASK;
+       struct kmap_ctrl *kctrl = &current->kmap_ctrl;
        unsigned long addr = (unsigned long)vaddr;
+       int i;
+
+       /* kmap() mappings */
+       if (WARN_ON_ONCE(addr >= PKMAP_ADDR(0) &&
+                        addr < PKMAP_ADDR(LAST_PKMAP)))
+               return pte_page(pkmap_page_table[PKMAP_NR(addr)]);
 
-       if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) {
-               int i = PKMAP_NR(addr);
+       /* kmap_local_page() mappings */
+       if (WARN_ON_ONCE(base >= __fix_to_virt(FIX_KMAP_END) &&
+                        base < __fix_to_virt(FIX_KMAP_BEGIN))) {
+               for (i = 0; i < kctrl->idx; i++) {
+                       unsigned long base_addr;
+                       int idx;
 
-               return pte_page(pkmap_page_table[i]);
+                       idx = arch_kmap_local_map_idx(i, pte_pfn(kctrl->pteval[i]));
+                       base_addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+
+                       if (base_addr == base)
+                               return pte_page(kctrl->pteval[i]);
+               }
        }
 
        return virt_to_page(vaddr);
@@ -462,10 +490,6 @@ static inline void kmap_local_idx_pop(void)
 # define arch_kmap_local_post_unmap(vaddr)             do { } while (0)
 #endif
 
-#ifndef arch_kmap_local_map_idx
-#define arch_kmap_local_map_idx(idx, pfn)      kmap_local_calc_idx(idx)
-#endif
-
 #ifndef arch_kmap_local_unmap_idx
 #define arch_kmap_local_unmap_idx(idx, vaddr)  kmap_local_calc_idx(idx)
 #endif
@@ -494,11 +518,6 @@ static inline bool kmap_high_unmap_local(unsigned long vaddr)
        return false;
 }
 
-static inline int kmap_local_calc_idx(int idx)
-{
-       return idx + KM_MAX_IDX * smp_processor_id();
-}
-
 static pte_t *__kmap_pte;
 
 static pte_t *kmap_get_pte(unsigned long vaddr, int idx)
index 1cc4a5f..03fc7e5 100644 (file)
@@ -2455,7 +2455,16 @@ static void __split_huge_page_tail(struct page *head, int tail,
                        page_tail);
        page_tail->mapping = head->mapping;
        page_tail->index = head->index + tail;
-       page_tail->private = 0;
+
+       /*
+        * page->private should not be set in tail pages with the exception
+        * of swap cache pages that store the swp_entry_t in tail pages.
+        * Fix up and warn once if private is unexpectedly set.
+        */
+       if (!folio_test_swapcache(page_folio(head))) {
+               VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, head);
+               page_tail->private = 0;
+       }
 
        /* Page flags must be visible before we make the page non-compound. */
        smp_wmb();
index 57b7b0b..546df97 100644 (file)
@@ -1014,15 +1014,23 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
        VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
        /*
         * Clear vm_private_data
+        * - For shared mappings this is a per-vma semaphore that may be
+        *   allocated in a subsequent call to hugetlb_vm_op_open.
+        *   Before clearing, make sure pointer is not associated with vma
+        *   as this will leak the structure.  This is the case when called
+        *   via clear_vma_resv_huge_pages() and hugetlb_vm_op_open has already
+        *   been called to allocate a new structure.
         * - For MAP_PRIVATE mappings, this is the reserve map which does
         *   not apply to children.  Faults generated by the children are
         *   not guaranteed to succeed, even if read-only.
-        * - For shared mappings this is a per-vma semaphore that may be
-        *   allocated in a subsequent call to hugetlb_vm_op_open.
         */
-       vma->vm_private_data = (void *)0;
-       if (!(vma->vm_flags & VM_MAYSHARE))
-               return;
+       if (vma->vm_flags & VM_MAYSHARE) {
+               struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+
+               if (vma_lock && vma_lock->vma != vma)
+                       vma->vm_private_data = NULL;
+       } else
+               vma->vm_private_data = NULL;
 }
 
 /*
@@ -2924,11 +2932,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
                page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
                if (!page)
                        goto out_uncharge_cgroup;
+               spin_lock_irq(&hugetlb_lock);
                if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
                        SetHPageRestoreReserve(page);
                        h->resv_huge_pages--;
                }
-               spin_lock_irq(&hugetlb_lock);
                list_add(&page->lru, &h->hugepage_activelist);
                set_page_refcounted(page);
                /* Fall through */
@@ -4601,6 +4609,7 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
        struct resv_map *resv = vma_resv_map(vma);
 
        /*
+        * HPAGE_RESV_OWNER indicates a private mapping.
         * This new VMA should share its siblings reservation map if present.
         * The VMA will only ever have a valid reservation map pointer where
         * it is being copied for another still existing VMA.  As that VMA
@@ -4615,11 +4624,21 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
 
        /*
         * vma_lock structure for sharable mappings is vma specific.
-        * Clear old pointer (if copied via vm_area_dup) and create new.
+        * Clear old pointer (if copied via vm_area_dup) and allocate
+        * new structure.  Before clearing, make sure vma_lock is not
+        * for this vma.
         */
        if (vma->vm_flags & VM_MAYSHARE) {
-               vma->vm_private_data = NULL;
-               hugetlb_vma_lock_alloc(vma);
+               struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
+
+               if (vma_lock) {
+                       if (vma_lock->vma != vma) {
+                               vma->vm_private_data = NULL;
+                               hugetlb_vma_lock_alloc(vma);
+                       } else
+                               pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__);
+               } else
+                       hugetlb_vma_lock_alloc(vma);
        }
 }
 
@@ -5096,6 +5115,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
                 * unmapped and its refcount is dropped, so just clear pte here.
                 */
                if (unlikely(!pte_present(pte))) {
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
                        /*
                         * If the pte was wr-protected by uffd-wp in any of the
                         * swap forms, meanwhile the caller does not want to
@@ -5107,6 +5127,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
                                set_huge_pte_at(mm, address, ptep,
                                                make_pte_marker(PTE_MARKER_UFFD_WP));
                        else
+#endif
                                huge_pte_clear(mm, address, ptep, sz);
                        spin_unlock(ptl);
                        continue;
@@ -5135,11 +5156,13 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
                tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
                if (huge_pte_dirty(pte))
                        set_page_dirty(page);
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
                /* Leave a uffd-wp pte marker if needed */
                if (huge_pte_uffd_wp(pte) &&
                    !(zap_flags & ZAP_FLAG_DROP_MARKER))
                        set_huge_pte_at(mm, address, ptep,
                                        make_pte_marker(PTE_MARKER_UFFD_WP));
+#endif
                hugetlb_count_sub(pages_per_huge_page(h), mm);
                page_remove_rmap(page, vma, true);
 
@@ -5531,6 +5554,23 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_area_struct *vma,
        return handle_userfault(&vmf, reason);
 }
 
+/*
+ * Recheck pte with pgtable lock.  Returns true if pte didn't change, or
+ * false if pte changed or is changing.
+ */
+static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm,
+                              pte_t *ptep, pte_t old_pte)
+{
+       spinlock_t *ptl;
+       bool same;
+
+       ptl = huge_pte_lock(h, mm, ptep);
+       same = pte_same(huge_ptep_get(ptep), old_pte);
+       spin_unlock(ptl);
+
+       return same;
+}
+
 static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                        struct vm_area_struct *vma,
                        struct address_space *mapping, pgoff_t idx,
@@ -5571,10 +5611,33 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                if (idx >= size)
                        goto out;
                /* Check for page in userfault range */
-               if (userfaultfd_missing(vma))
-                       return hugetlb_handle_userfault(vma, mapping, idx,
-                                                      flags, haddr, address,
-                                                      VM_UFFD_MISSING);
+               if (userfaultfd_missing(vma)) {
+                       /*
+                        * Since hugetlb_no_page() was examining pte
+                        * without pgtable lock, we need to re-test under
+                        * lock because the pte may not be stable and could
+                        * have changed from under us.  Try to detect
+                        * either changed or during-changing ptes and retry
+                        * properly when needed.
+                        *
+                        * Note that userfaultfd is actually fine with
+                        * false positives (e.g. caused by pte changed),
+                        * but not wrong logical events (e.g. caused by
+                        * reading a pte during changing).  The latter can
+                        * confuse the userspace, so the strictness is very
+                        * much preferred.  E.g., MISSING event should
+                        * never happen on the page after UFFDIO_COPY has
+                        * correctly installed the page and returned.
+                        */
+                       if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+                               ret = 0;
+                               goto out;
+                       }
+
+                       return hugetlb_handle_userfault(vma, mapping, idx, flags,
+                                                       haddr, address,
+                                                       VM_UFFD_MISSING);
+               }
 
                page = alloc_huge_page(vma, haddr, 0);
                if (IS_ERR(page)) {
@@ -5590,11 +5653,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                         * here.  Before returning error, get ptl and make
                         * sure there really is no pte entry.
                         */
-                       ptl = huge_pte_lock(h, mm, ptep);
-                       ret = 0;
-                       if (huge_pte_none(huge_ptep_get(ptep)))
+                       if (hugetlb_pte_stable(h, mm, ptep, old_pte))
                                ret = vmf_error(PTR_ERR(page));
-                       spin_unlock(ptl);
+                       else
+                               ret = 0;
                        goto out;
                }
                clear_huge_page(page, address, pages_per_huge_page(h));
@@ -5640,9 +5702,14 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
                if (userfaultfd_minor(vma)) {
                        unlock_page(page);
                        put_page(page);
-                       return hugetlb_handle_userfault(vma, mapping, idx,
-                                                      flags, haddr, address,
-                                                      VM_UFFD_MINOR);
+                       /* See comment in userfaultfd_missing() block above */
+                       if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+                               ret = 0;
+                               goto out;
+                       }
+                       return hugetlb_handle_userfault(vma, mapping, idx, flags,
+                                                       haddr, address,
+                                                       VM_UFFD_MINOR);
                }
        }
 
@@ -6804,7 +6871,7 @@ void hugetlb_vma_lock_release(struct kref *kref)
        kfree(vma_lock);
 }
 
-void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
+static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
 {
        struct vm_area_struct *vma = vma_lock->vma;
 
index f25692d..0d59098 100644 (file)
@@ -295,6 +295,9 @@ static void krealloc_more_oob_helper(struct kunit *test,
        ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
 
+       /* Suppress -Warray-bounds warnings. */
+       OPTIMIZER_HIDE_VAR(ptr2);
+
        /* All offsets up to size2 must be accessible. */
        ptr2[size1 - 1] = 'x';
        ptr2[size1] = 'x';
@@ -327,6 +330,9 @@ static void krealloc_less_oob_helper(struct kunit *test,
        ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
 
+       /* Suppress -Warray-bounds warnings. */
+       OPTIMIZER_HIDE_VAR(ptr2);
+
        /* Must be accessible for all modes. */
        ptr2[size2 - 1] = 'x';
 
@@ -540,13 +546,14 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
 {
        char *ptr;
        size_t size = 64;
-       volatile size_t invalid_size = size;
+       size_t invalid_size = size;
 
        ptr = kmalloc(size, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
        memset((char *)ptr, 0, 64);
        OPTIMIZER_HIDE_VAR(ptr);
+       OPTIMIZER_HIDE_VAR(invalid_size);
        KUNIT_EXPECT_KASAN_FAIL(test,
                memmove((char *)ptr, (char *)ptr + 4, invalid_size));
        kfree(ptr);
@@ -1292,7 +1299,7 @@ static void match_all_not_assigned(struct kunit *test)
        KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
 
        for (i = 0; i < 256; i++) {
-               size = (get_random_int() % 1024) + 1;
+               size = prandom_u32_max(1024) + 1;
                ptr = kmalloc(size, GFP_KERNEL);
                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
                KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
@@ -1301,7 +1308,7 @@ static void match_all_not_assigned(struct kunit *test)
        }
 
        for (i = 0; i < 256; i++) {
-               order = (get_random_int() % 4) + 1;
+               order = prandom_u32_max(4) + 1;
                pages = alloc_pages(GFP_KERNEL, order);
                ptr = page_address(pages);
                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
@@ -1314,7 +1321,7 @@ static void match_all_not_assigned(struct kunit *test)
                return;
 
        for (i = 0; i < 256; i++) {
-               size = (get_random_int() % 1024) + 1;
+               size = prandom_u32_max(1024) + 1;
                ptr = vmalloc(size);
                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
                KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
index df678fa..f88c351 100644 (file)
@@ -1393,10 +1393,12 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
                              unsigned long addr, pte_t *pte,
                              struct zap_details *details, pte_t pteval)
 {
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
        if (zap_drop_file_uffd_wp(details))
                return;
 
        pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
+#endif
 }
 
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
@@ -3748,7 +3750,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                        ret = remove_device_exclusive_entry(vmf);
                } else if (is_device_private_entry(entry)) {
                        vmf->page = pfn_swap_entry_to_page(entry);
-                       ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
+                       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+                                       vmf->address, &vmf->ptl);
+                       if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
+                               spin_unlock(vmf->ptl);
+                               goto out;
+                       }
+
+                       /*
+                        * Get a page reference while we know the page can't be
+                        * freed.
+                        */
+                       get_page(vmf->page);
+                       pte_unmap_unlock(vmf->pte, vmf->ptl);
+                       vmf->page->pgmap->ops->migrate_to_ram(vmf);
+                       put_page(vmf->page);
                } else if (is_hwpoison_entry(entry)) {
                        ret = VM_FAULT_HWPOISON;
                } else if (is_swapin_error_entry(entry)) {
@@ -4118,7 +4134,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                        &vmf->ptl);
        if (!pte_none(*vmf->pte)) {
-               update_mmu_cache(vma, vmf->address, vmf->pte);
+               update_mmu_tlb(vma, vmf->address, vmf->pte);
                goto release;
        }
 
index a937eae..61aa9ae 100644 (file)
@@ -787,17 +787,22 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 static int mbind_range(struct mm_struct *mm, unsigned long start,
                       unsigned long end, struct mempolicy *new_pol)
 {
-       MA_STATE(mas, &mm->mm_mt, start - 1, start - 1);
+       MA_STATE(mas, &mm->mm_mt, start, start);
        struct vm_area_struct *prev;
        struct vm_area_struct *vma;
        int err = 0;
        pgoff_t pgoff;
 
-       prev = mas_find_rev(&mas, 0);
-       if (prev && (start < prev->vm_end))
-               vma = prev;
-       else
-               vma = mas_next(&mas, end - 1);
+       prev = mas_prev(&mas, 0);
+       if (unlikely(!prev))
+               mas_set(&mas, start);
+
+       vma = mas_find(&mas, end - 1);
+       if (WARN_ON(!vma))
+               return 0;
+
+       if (start > vma->vm_start)
+               prev = vma;
 
        for (; vma; vma = mas_next(&mas, end - 1)) {
                unsigned long vmstart = max(start, vma->vm_start);
index 25029a4..421bec3 100644 (file)
@@ -138,8 +138,11 @@ void memunmap_pages(struct dev_pagemap *pgmap)
        int i;
 
        percpu_ref_kill(&pgmap->ref);
-       for (i = 0; i < pgmap->nr_range; i++)
-               percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
+       if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+           pgmap->type != MEMORY_DEVICE_COHERENT)
+               for (i = 0; i < pgmap->nr_range; i++)
+                       percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
+
        wait_for_completion(&pgmap->done);
 
        for (i = 0; i < pgmap->nr_range; i++)
@@ -264,7 +267,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
        memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
                                PHYS_PFN(range->start),
                                PHYS_PFN(range_len(range)), pgmap);
-       percpu_ref_get_many(&pgmap->ref, pfn_len(pgmap, range_id));
+       if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+           pgmap->type != MEMORY_DEVICE_COHERENT)
+               percpu_ref_get_many(&pgmap->ref, pfn_len(pgmap, range_id));
        return 0;
 
 err_add_memory:
@@ -502,11 +507,28 @@ void free_zone_device_page(struct page *page)
        page->mapping = NULL;
        page->pgmap->ops->page_free(page);
 
+       if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
+           page->pgmap->type != MEMORY_DEVICE_COHERENT)
+               /*
+                * Reset the page count to 1 to prepare for handing out the page
+                * again.
+                */
+               set_page_count(page, 1);
+       else
+               put_dev_pagemap(page->pgmap);
+}
+
+void zone_device_page_init(struct page *page)
+{
        /*
-        * Reset the page count to 1 to prepare for handing out the page again.
+        * Drivers shouldn't be allocating pages after calling
+        * memunmap_pages().
         */
+       WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref));
        set_page_count(page, 1);
+       lock_page(page);
 }
+EXPORT_SYMBOL_GPL(zone_device_page_init);
 
 #ifdef CONFIG_FS_DAX
 bool __put_devmap_managed_page_refs(struct page *page, int refs)
index c228afb..1379e19 100644 (file)
@@ -625,6 +625,25 @@ EXPORT_SYMBOL(folio_migrate_copy);
  *                    Migration functions
  ***********************************************************/
 
+int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
+               struct folio *src, enum migrate_mode mode, int extra_count)
+{
+       int rc;
+
+       BUG_ON(folio_test_writeback(src));      /* Writeback must be complete */
+
+       rc = folio_migrate_mapping(mapping, dst, src, extra_count);
+
+       if (rc != MIGRATEPAGE_SUCCESS)
+               return rc;
+
+       if (mode != MIGRATE_SYNC_NO_COPY)
+               folio_migrate_copy(dst, src);
+       else
+               folio_migrate_flags(dst, src);
+       return MIGRATEPAGE_SUCCESS;
+}
+
 /**
  * migrate_folio() - Simple folio migration.
  * @mapping: The address_space containing the folio.
@@ -640,20 +659,7 @@ EXPORT_SYMBOL(folio_migrate_copy);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
                struct folio *src, enum migrate_mode mode)
 {
-       int rc;
-
-       BUG_ON(folio_test_writeback(src));      /* Writeback must be complete */
-
-       rc = folio_migrate_mapping(mapping, dst, src, 0);
-
-       if (rc != MIGRATEPAGE_SUCCESS)
-               return rc;
-
-       if (mode != MIGRATE_SYNC_NO_COPY)
-               folio_migrate_copy(dst, src);
-       else
-               folio_migrate_flags(dst, src);
-       return MIGRATEPAGE_SUCCESS;
+       return migrate_folio_extra(mapping, dst, src, mode, 0);
 }
 EXPORT_SYMBOL(migrate_folio);
 
index 5ab6ab9..6fa682e 100644 (file)
@@ -325,14 +325,14 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
  * folio_migrate_mapping(), except that here we allow migration of a
  * ZONE_DEVICE page.
  */
-static bool migrate_vma_check_page(struct page *page)
+static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 {
        /*
         * One extra ref because caller holds an extra reference, either from
         * isolate_lru_page() for a regular page, or migrate_vma_collect() for
         * a device page.
         */
-       int extra = 1;
+       int extra = 1 + (page == fault_page);
 
        /*
         * FIXME support THP (transparent huge page), it is bit more complex to
@@ -357,26 +357,20 @@ static bool migrate_vma_check_page(struct page *page)
 }
 
 /*
- * migrate_vma_unmap() - replace page mapping with special migration pte entry
- * @migrate: migrate struct containing all migration information
- *
- * Isolate pages from the LRU and replace mappings (CPU page table pte) with a
- * special migration pte entry and check if it has been pinned. Pinned pages are
- * restored because we cannot migrate them.
- *
- * This is the last step before we call the device driver callback to allocate
- * destination memory and copy contents of original page over to new page.
+ * Unmaps pages for migration. Returns number of unmapped pages.
  */
-static void migrate_vma_unmap(struct migrate_vma *migrate)
+static unsigned long migrate_device_unmap(unsigned long *src_pfns,
+                                         unsigned long npages,
+                                         struct page *fault_page)
 {
-       const unsigned long npages = migrate->npages;
        unsigned long i, restore = 0;
        bool allow_drain = true;
+       unsigned long unmapped = 0;
 
        lru_add_drain();
 
        for (i = 0; i < npages; i++) {
-               struct page *page = migrate_pfn_to_page(migrate->src[i]);
+               struct page *page = migrate_pfn_to_page(src_pfns[i]);
                struct folio *folio;
 
                if (!page)
@@ -391,8 +385,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
                        }
 
                        if (isolate_lru_page(page)) {
-                               migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
-                               migrate->cpages--;
+                               src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
                                restore++;
                                continue;
                        }
@@ -405,34 +398,55 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
                if (folio_mapped(folio))
                        try_to_migrate(folio, 0);
 
-               if (page_mapped(page) || !migrate_vma_check_page(page)) {
+               if (page_mapped(page) ||
+                   !migrate_vma_check_page(page, fault_page)) {
                        if (!is_zone_device_page(page)) {
                                get_page(page);
                                putback_lru_page(page);
                        }
 
-                       migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
-                       migrate->cpages--;
+                       src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
                        restore++;
                        continue;
                }
+
+               unmapped++;
        }
 
        for (i = 0; i < npages && restore; i++) {
-               struct page *page = migrate_pfn_to_page(migrate->src[i]);
+               struct page *page = migrate_pfn_to_page(src_pfns[i]);
                struct folio *folio;
 
-               if (!page || (migrate->src[i] & MIGRATE_PFN_MIGRATE))
+               if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE))
                        continue;
 
                folio = page_folio(page);
                remove_migration_ptes(folio, folio, false);
 
-               migrate->src[i] = 0;
+               src_pfns[i] = 0;
                folio_unlock(folio);
                folio_put(folio);
                restore--;
        }
+
+       return unmapped;
+}
+
+/*
+ * migrate_vma_unmap() - replace page mapping with special migration pte entry
+ * @migrate: migrate struct containing all migration information
+ *
+ * Isolate pages from the LRU and replace mappings (CPU page table pte) with a
+ * special migration pte entry and check if it has been pinned. Pinned pages are
+ * restored because we cannot migrate them.
+ *
+ * This is the last step before we call the device driver callback to allocate
+ * destination memory and copy contents of original page over to new page.
+ */
+static void migrate_vma_unmap(struct migrate_vma *migrate)
+{
+       migrate->cpages = migrate_device_unmap(migrate->src, migrate->npages,
+                                       migrate->fault_page);
 }
 
 /**
@@ -517,6 +531,8 @@ int migrate_vma_setup(struct migrate_vma *args)
                return -EINVAL;
        if (!args->src || !args->dst)
                return -EINVAL;
+       if (args->fault_page && !is_device_private_page(args->fault_page))
+               return -EINVAL;
 
        memset(args->src, 0, sizeof(*args->src) * nr_pages);
        args->cpages = 0;
@@ -677,42 +693,38 @@ abort:
        *src &= ~MIGRATE_PFN_MIGRATE;
 }
 
-/**
- * migrate_vma_pages() - migrate meta-data from src page to dst page
- * @migrate: migrate struct containing all migration information
- *
- * This migrates struct page meta-data from source struct page to destination
- * struct page. This effectively finishes the migration from source page to the
- * destination page.
- */
-void migrate_vma_pages(struct migrate_vma *migrate)
+static void __migrate_device_pages(unsigned long *src_pfns,
+                               unsigned long *dst_pfns, unsigned long npages,
+                               struct migrate_vma *migrate)
 {
-       const unsigned long npages = migrate->npages;
-       const unsigned long start = migrate->start;
        struct mmu_notifier_range range;
-       unsigned long addr, i;
+       unsigned long i;
        bool notified = false;
 
-       for (i = 0, addr = start; i < npages; addr += PAGE_SIZE, i++) {
-               struct page *newpage = migrate_pfn_to_page(migrate->dst[i]);
-               struct page *page = migrate_pfn_to_page(migrate->src[i]);
+       for (i = 0; i < npages; i++) {
+               struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
+               struct page *page = migrate_pfn_to_page(src_pfns[i]);
                struct address_space *mapping;
                int r;
 
                if (!newpage) {
-                       migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+                       src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
                        continue;
                }
 
                if (!page) {
+                       unsigned long addr;
+
+                       if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+                               continue;
+
                        /*
                         * The only time there is no vma is when called from
                         * migrate_device_coherent_page(). However this isn't
                         * called if the page could not be unmapped.
                         */
-                       VM_BUG_ON(!migrate->vma);
-                       if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE))
-                               continue;
+                       VM_BUG_ON(!migrate);
+                       addr = migrate->start + i*PAGE_SIZE;
                        if (!notified) {
                                notified = true;
 
@@ -723,7 +735,7 @@ void migrate_vma_pages(struct migrate_vma *migrate)
                                mmu_notifier_invalidate_range_start(&range);
                        }
                        migrate_vma_insert_page(migrate, addr, newpage,
-                                               &migrate->src[i]);
+                                               &src_pfns[i]);
                        continue;
                }
 
@@ -736,21 +748,26 @@ void migrate_vma_pages(struct migrate_vma *migrate)
                         * device private or coherent memory.
                         */
                        if (mapping) {
-                               migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+                               src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
                                continue;
                        }
                } else if (is_zone_device_page(newpage)) {
                        /*
                         * Other types of ZONE_DEVICE page are not supported.
                         */
-                       migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+                       src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
                        continue;
                }
 
-               r = migrate_folio(mapping, page_folio(newpage),
-                               page_folio(page), MIGRATE_SYNC_NO_COPY);
+               if (migrate && migrate->fault_page == page)
+                       r = migrate_folio_extra(mapping, page_folio(newpage),
+                                               page_folio(page),
+                                               MIGRATE_SYNC_NO_COPY, 1);
+               else
+                       r = migrate_folio(mapping, page_folio(newpage),
+                                       page_folio(page), MIGRATE_SYNC_NO_COPY);
                if (r != MIGRATEPAGE_SUCCESS)
-                       migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
+                       src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
        }
 
        /*
@@ -761,28 +778,56 @@ void migrate_vma_pages(struct migrate_vma *migrate)
        if (notified)
                mmu_notifier_invalidate_range_only_end(&range);
 }
-EXPORT_SYMBOL(migrate_vma_pages);
 
 /**
- * migrate_vma_finalize() - restore CPU page table entry
+ * migrate_device_pages() - migrate meta-data from src page to dst page
+ * @src_pfns: src_pfns returned from migrate_device_range()
+ * @dst_pfns: array of pfns allocated by the driver to migrate memory to
+ * @npages: number of pages in the range
+ *
+ * Equivalent to migrate_vma_pages(). This is called to migrate struct page
+ * meta-data from source struct page to destination.
+ */
+void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
+                       unsigned long npages)
+{
+       __migrate_device_pages(src_pfns, dst_pfns, npages, NULL);
+}
+EXPORT_SYMBOL(migrate_device_pages);
+
+/**
+ * migrate_vma_pages() - migrate meta-data from src page to dst page
  * @migrate: migrate struct containing all migration information
  *
- * This replaces the special migration pte entry with either a mapping to the
- * new page if migration was successful for that page, or to the original page
- * otherwise.
+ * This migrates struct page meta-data from source struct page to destination
+ * struct page. This effectively finishes the migration from source page to the
+ * destination page.
+ */
+void migrate_vma_pages(struct migrate_vma *migrate)
+{
+       __migrate_device_pages(migrate->src, migrate->dst, migrate->npages, migrate);
+}
+EXPORT_SYMBOL(migrate_vma_pages);
+
+/*
+ * migrate_device_finalize() - complete page migration
+ * @src_pfns: src_pfns returned from migrate_device_range()
+ * @dst_pfns: array of pfns allocated by the driver to migrate memory to
+ * @npages: number of pages in the range
  *
- * This also unlocks the pages and puts them back on the lru, or drops the extra
- * refcount, for device pages.
+ * Completes migration of the page by removing special migration entries.
+ * Drivers must ensure copying of page data is complete and visible to the CPU
+ * before calling this.
  */
-void migrate_vma_finalize(struct migrate_vma *migrate)
+void migrate_device_finalize(unsigned long *src_pfns,
+                       unsigned long *dst_pfns, unsigned long npages)
 {
-       const unsigned long npages = migrate->npages;
        unsigned long i;
 
        for (i = 0; i < npages; i++) {
                struct folio *dst, *src;
-               struct page *newpage = migrate_pfn_to_page(migrate->dst[i]);
-               struct page *page = migrate_pfn_to_page(migrate->src[i]);
+               struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
+               struct page *page = migrate_pfn_to_page(src_pfns[i]);
 
                if (!page) {
                        if (newpage) {
@@ -792,7 +837,7 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
                        continue;
                }
 
-               if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE) || !newpage) {
+               if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE) || !newpage) {
                        if (newpage) {
                                unlock_page(newpage);
                                put_page(newpage);
@@ -819,8 +864,72 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
                }
        }
 }
+EXPORT_SYMBOL(migrate_device_finalize);
+
+/**
+ * migrate_vma_finalize() - restore CPU page table entry
+ * @migrate: migrate struct containing all migration information
+ *
+ * This replaces the special migration pte entry with either a mapping to the
+ * new page if migration was successful for that page, or to the original page
+ * otherwise.
+ *
+ * This also unlocks the pages and puts them back on the lru, or drops the extra
+ * refcount, for device pages.
+ */
+void migrate_vma_finalize(struct migrate_vma *migrate)
+{
+       migrate_device_finalize(migrate->src, migrate->dst, migrate->npages);
+}
 EXPORT_SYMBOL(migrate_vma_finalize);
 
+/**
+ * migrate_device_range() - migrate device private pfns to normal memory.
+ * @src_pfns: array large enough to hold migrating source device private pfns.
+ * @start: starting pfn in the range to migrate.
+ * @npages: number of pages to migrate.
+ *
+ * migrate_device_range() is similar in concept to migrate_vma_setup() except
+ * that instead of looking up pages based on virtual address mappings, a range
+ * of device pfns that should be migrated to system memory is used.
+ *
+ * This is useful when a driver needs to free device memory but doesn't know the
+ * virtual mappings of every page that may be in device memory. For example this
+ * is often the case when a driver is being unloaded or unbound from a device.
+ *
+ * Like migrate_vma_setup() this function will take a reference and lock any
+ * migrating pages that aren't free before unmapping them. Drivers may then
+ * allocate destination pages and start copying data from the device to CPU
+ * memory before calling migrate_device_pages().
+ */
+int migrate_device_range(unsigned long *src_pfns, unsigned long start,
+                       unsigned long npages)
+{
+       unsigned long i, pfn;
+
+       for (pfn = start, i = 0; i < npages; pfn++, i++) {
+               struct page *page = pfn_to_page(pfn);
+
+               if (!get_page_unless_zero(page)) {
+                       src_pfns[i] = 0;
+                       continue;
+               }
+
+               if (!trylock_page(page)) {
+                       src_pfns[i] = 0;
+                       put_page(page);
+                       continue;
+               }
+
+               src_pfns[i] = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
+       }
+
+       migrate_device_unmap(src_pfns, npages, NULL);
+
+       return 0;
+}
+EXPORT_SYMBOL(migrate_device_range);
+
 /*
  * Migrate a device coherent page back to normal memory. The caller should have
  * a reference on page which will be copied to the new page if migration is
@@ -829,25 +938,19 @@ EXPORT_SYMBOL(migrate_vma_finalize);
 int migrate_device_coherent_page(struct page *page)
 {
        unsigned long src_pfn, dst_pfn = 0;
-       struct migrate_vma args;
        struct page *dpage;
 
        WARN_ON_ONCE(PageCompound(page));
 
        lock_page(page);
        src_pfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;
-       args.src = &src_pfn;
-       args.dst = &dst_pfn;
-       args.cpages = 1;
-       args.npages = 1;
-       args.vma = NULL;
 
        /*
         * We don't have a VMA and don't need to walk the page tables to find
         * the source page. So call migrate_vma_unmap() directly to unmap the
         * page as migrate_vma_setup() will fail if args.vma == NULL.
         */
-       migrate_vma_unmap(&args);
+       migrate_device_unmap(&src_pfn, 1, NULL);
        if (!(src_pfn & MIGRATE_PFN_MIGRATE))
                return -EBUSY;
 
@@ -857,10 +960,10 @@ int migrate_device_coherent_page(struct page *page)
                dst_pfn = migrate_pfn(page_to_pfn(dpage));
        }
 
-       migrate_vma_pages(&args);
+       migrate_device_pages(&src_pfn, &dst_pfn, 1);
        if (src_pfn & MIGRATE_PFN_MIGRATE)
                copy_highpage(dpage, page);
-       migrate_vma_finalize(&args);
+       migrate_device_finalize(&src_pfn, &dst_pfn, 1);
 
        if (src_pfn & MIGRATE_PFN_MIGRATE)
                return 0;
index 6e44754..e270057 100644 (file)
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -618,7 +618,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
        struct vm_area_struct *expand)
 {
        struct mm_struct *mm = vma->vm_mm;
-       struct vm_area_struct *next_next, *next = find_vma(mm, vma->vm_end);
+       struct vm_area_struct *next_next = NULL;        /* uninit var warning */
+       struct vm_area_struct *next = find_vma(mm, vma->vm_end);
        struct vm_area_struct *orig_vma = vma;
        struct address_space *mapping = NULL;
        struct rb_root_cached *root = NULL;
@@ -2625,14 +2626,14 @@ cannot_expand:
                if (error)
                        goto unmap_and_free_vma;
 
-               /* Can addr have changed??
-                *
-                * Answer: Yes, several device drivers can do it in their
-                *         f_op->mmap method. -DaveM
+               /*
+                * Expansion is handled above, merging is handled below.
+                * Drivers should not alter the address of the VMA.
                 */
-               WARN_ON_ONCE(addr != vma->vm_start);
-
-               addr = vma->vm_start;
+               if (WARN_ON((addr != vma->vm_start))) {
+                       error = -EINVAL;
+                       goto close_and_free_vma;
+               }
                mas_reset(&mas);
 
                /*
@@ -2654,7 +2655,6 @@ cannot_expand:
                                vm_area_free(vma);
                                vma = merge;
                                /* Update vm_flags to pick up the change. */
-                               addr = vma->vm_start;
                                vm_flags = vma->vm_flags;
                                goto unmap_writable;
                        }
@@ -2673,7 +2673,7 @@ cannot_expand:
        if (!arch_validate_flags(vma->vm_flags)) {
                error = -EINVAL;
                if (file)
-                       goto unmap_and_free_vma;
+                       goto close_and_free_vma;
                else
                        goto free_vma;
        }
@@ -2681,7 +2681,7 @@ cannot_expand:
        if (mas_preallocate(&mas, vma, GFP_KERNEL)) {
                error = -ENOMEM;
                if (file)
-                       goto unmap_and_free_vma;
+                       goto close_and_free_vma;
                else
                        goto free_vma;
        }
@@ -2742,6 +2742,9 @@ expanded:
        validate_mm(mm);
        return addr;
 
+close_and_free_vma:
+       if (vma->vm_ops && vma->vm_ops->close)
+               vma->vm_ops->close(vma);
 unmap_and_free_vma:
        fput(vma->vm_file);
        vma->vm_file = NULL;
@@ -2942,17 +2945,18 @@ static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
        if (vma &&
            (!vma->anon_vma || list_is_singular(&vma->anon_vma_chain)) &&
            ((vma->vm_flags & ~VM_SOFTDIRTY) == flags)) {
-               mas->index = vma->vm_start;
-               mas->last = addr + len - 1;
-               vma_adjust_trans_huge(vma, addr, addr + len, 0);
+               mas_set_range(mas, vma->vm_start, addr + len - 1);
+               if (mas_preallocate(mas, vma, GFP_KERNEL))
+                       return -ENOMEM;
+
+               vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0);
                if (vma->anon_vma) {
                        anon_vma_lock_write(vma->anon_vma);
                        anon_vma_interval_tree_pre_update_vma(vma);
                }
                vma->vm_end = addr + len;
                vma->vm_flags |= VM_SOFTDIRTY;
-               if (mas_store_gfp(mas, vma, GFP_KERNEL))
-                       goto mas_expand_failed;
+               mas_store_prealloc(mas, vma);
 
                if (vma->anon_vma) {
                        anon_vma_interval_tree_post_update_vma(vma);
@@ -2993,13 +2997,6 @@ mas_store_fail:
 vma_alloc_fail:
        vm_unacct_memory(len >> PAGE_SHIFT);
        return -ENOMEM;
-
-mas_expand_failed:
-       if (vma->anon_vma) {
-               anon_vma_interval_tree_post_update_vma(vma);
-               anon_vma_unlock_write(vma->anon_vma);
-       }
-       return -ENOMEM;
 }
 
 int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
@@ -3240,6 +3237,11 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 out_vma_link:
        if (new_vma->vm_ops && new_vma->vm_ops->close)
                new_vma->vm_ops->close(new_vma);
+
+       if (new_vma->vm_file)
+               fput(new_vma->vm_file);
+
+       unlink_anon_vmas(new_vma);
 out_free_mempol:
        mpol_put(vma_policy(new_vma));
 out_free_vma:
index a71924b..add4244 100644 (file)
@@ -1,6 +1,7 @@
 #include <linux/gfp.h>
 #include <linux/highmem.h>
 #include <linux/kernel.h>
+#include <linux/kmsan-checks.h>
 #include <linux/mmdebug.h>
 #include <linux/mm_types.h>
 #include <linux/mm_inline.h>
@@ -265,6 +266,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
 static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
                             bool fullmm)
 {
+       /*
+        * struct mmu_gather contains 7 1-bit fields packed into a 32-bit
+        * unsigned int value. The remaining 25 bits remain uninitialized
+        * and are never used, but KMSAN updates the origin for them in
+        * zap_pXX_range() in mm/memory.c, thus creating very long origin
+        * chains. This is technically correct, but consumes too much memory.
+        * Unpoisoning the whole structure will prevent creating such chains.
+        */
+       kmsan_unpoison_memory(tlb, sizeof(*tlb));
        tlb->mm = mm;
        tlb->fullmm = fullmm;
 
index 461dcbd..668bfaa 100644 (file)
@@ -267,6 +267,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
                } else {
                        /* It must be an none page, or what else?.. */
                        WARN_ON_ONCE(!pte_none(oldpte));
+#ifdef CONFIG_PTE_MARKER_UFFD_WP
                        if (unlikely(uffd_wp && !vma_is_anonymous(vma))) {
                                /*
                                 * For file-backed mem, we need to be able to
@@ -278,6 +279,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
                                           make_pte_marker(PTE_MARKER_UFFD_WP));
                                pages++;
                        }
+#endif
                }
        } while (pte++, addr += PAGE_SIZE, addr != end);
        arch_leave_lazy_mmu_mode();
index ac2c9f1..b5a6c81 100644 (file)
@@ -3446,7 +3446,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
        int pindex;
        bool free_high;
 
-       __count_vm_event(PGFREE);
+       __count_vm_events(PGFREE, 1 << order);
        pindex = order_to_pindex(migratetype, order);
        list_add(&page->pcp_list, &pcp->lists[pindex]);
        pcp->count += 1 << order;
@@ -3803,7 +3803,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
        pcp_spin_unlock_irqrestore(pcp, flags);
        pcp_trylock_finish(UP_flags);
        if (page) {
-               __count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
+               __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
                zone_statistics(preferred_zone, zone, 1);
        }
        return page;
@@ -5784,14 +5784,18 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
                size_t size)
 {
        if (addr) {
-               unsigned long alloc_end = addr + (PAGE_SIZE << order);
-               unsigned long used = addr + PAGE_ALIGN(size);
+               unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
+               struct page *page = virt_to_page((void *)addr);
+               struct page *last = page + nr;
 
-               split_page(virt_to_page((void *)addr), order);
-               while (used < alloc_end) {
-                       free_page(used);
-                       used += PAGE_SIZE;
-               }
+               split_page_owner(page, 1 << order);
+               split_page_memcg(page, 1 << order);
+               while (page < --last)
+                       set_page_refcounted(last);
+
+               last = page + (1UL << order);
+               for (page += nr; page < last; page++)
+                       __free_pages_ok(page, 0, FPI_TO_TAIL);
        }
        return (void *)addr;
 }
@@ -6823,6 +6827,14 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
                set_pageblock_migratetype(page, MIGRATE_MOVABLE);
                cond_resched();
        }
+
+       /*
+        * ZONE_DEVICE pages are released directly to the driver page allocator
+        * which will set the page count to 1 when allocating the page.
+        */
+       if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
+           pgmap->type == MEMORY_DEVICE_COHERENT)
+               set_page_count(page, 0);
 }
 
 /*
index 86214d4..8280a5c 100644 (file)
@@ -2332,7 +2332,7 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
                inode_init_owner(&init_user_ns, inode, dir, mode);
                inode->i_blocks = 0;
                inode->i_atime = inode->i_mtime = inode->i_ctime = current_time(inode);
-               inode->i_generation = prandom_u32();
+               inode->i_generation = get_random_u32();
                info = SHMEM_I(inode);
                memset(info, 0, (char *)inode - (char *)info);
                spin_lock_init(&info->lock);
index a5486ff..59c8e28 100644 (file)
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1619,7 +1619,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab)
         * although actual page can be freed in rcu context
         */
        if (OFF_SLAB(cachep))
-               kmem_cache_free(cachep->freelist_cache, freelist);
+               kfree(freelist);
 }
 
 /*
@@ -1671,21 +1671,27 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
                if (flags & CFLGS_OFF_SLAB) {
                        struct kmem_cache *freelist_cache;
                        size_t freelist_size;
+                       size_t freelist_cache_size;
 
                        freelist_size = num * sizeof(freelist_idx_t);
-                       freelist_cache = kmalloc_slab(freelist_size, 0u);
-                       if (!freelist_cache)
-                               continue;
-
-                       /*
-                        * Needed to avoid possible looping condition
-                        * in cache_grow_begin()
-                        */
-                       if (OFF_SLAB(freelist_cache))
-                               continue;
+                       if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
+                               freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
+                       } else {
+                               freelist_cache = kmalloc_slab(freelist_size, 0u);
+                               if (!freelist_cache)
+                                       continue;
+                               freelist_cache_size = freelist_cache->size;
+
+                               /*
+                                * Needed to avoid possible looping condition
+                                * in cache_grow_begin()
+                                */
+                               if (OFF_SLAB(freelist_cache))
+                                       continue;
+                       }
 
                        /* check if off slab has enough benefit */
-                       if (freelist_cache->size > cachep->size / 2)
+                       if (freelist_cache_size > cachep->size / 2)
                                continue;
                }
 
@@ -2061,11 +2067,6 @@ done:
                cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 #endif
 
-       if (OFF_SLAB(cachep)) {
-               cachep->freelist_cache =
-                       kmalloc_slab(cachep->freelist_size, 0u);
-       }
-
        err = setup_cpu_cache(cachep, gfp);
        if (err) {
                __kmem_cache_release(cachep);
@@ -2292,7 +2293,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
                freelist = NULL;
        else if (OFF_SLAB(cachep)) {
                /* Slab management obj is off-slab. */
-               freelist = kmem_cache_alloc_node(cachep->freelist_cache,
+               freelist = kmalloc_node(cachep->freelist_size,
                                              local_flags, nodeid);
        } else {
                /* We will use last bytes at the slab for freelist */
@@ -2380,7 +2381,7 @@ static bool freelist_state_initialize(union freelist_init_state *state,
        unsigned int rand;
 
        /* Use best entropy available to define a random shift */
-       rand = get_random_int();
+       rand = get_random_u32();
 
        /* Use a random state if the pre-computed list is not available */
        if (!cachep->random_seq) {
index 96dd392..157527d 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1881,7 +1881,7 @@ static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab)
                return false;
 
        freelist_count = oo_objects(s->oo);
-       pos = get_random_int() % freelist_count;
+       pos = prandom_u32_max(freelist_count);
 
        page_limit = slab->objects * s->size;
        start = fixup_red_left(s, slab_address(slab));
index 5257587..d03941c 100644 (file)
@@ -2311,6 +2311,9 @@ void zs_destroy_pool(struct zs_pool *pool)
                int fg;
                struct size_class *class = pool->size_class[i];
 
+               if (!class)
+                       continue;
+
                if (class->index != i)
                        continue;
 
index f6012f8..fc9eb02 100644 (file)
@@ -407,7 +407,7 @@ static void garp_join_timer_arm(struct garp_applicant *app)
 {
        unsigned long delay;
 
-       delay = (u64)msecs_to_jiffies(garp_join_time) * prandom_u32() >> 32;
+       delay = prandom_u32_max(msecs_to_jiffies(garp_join_time));
        mod_timer(&app->join_timer, jiffies + delay);
 }
 
index 35e04cc..155f74d 100644 (file)
@@ -592,7 +592,7 @@ static void mrp_join_timer_arm(struct mrp_applicant *app)
 {
        unsigned long delay;
 
-       delay = (u64)msecs_to_jiffies(mrp_join_time) * prandom_u32() >> 32;
+       delay = prandom_u32_max(msecs_to_jiffies(mrp_join_time));
        mod_timer(&app->join_timer, jiffies + delay);
 }
 
index 829db9e..aaf64b9 100644 (file)
@@ -219,11 +219,12 @@ static ssize_t proc_mpc_write(struct file *file, const char __user *buff,
        if (!page)
                return -ENOMEM;
 
-       for (p = page, len = 0; len < nbytes; p++, len++) {
+       for (p = page, len = 0; len < nbytes; p++) {
                if (get_user(*p, buff++)) {
                        free_page((unsigned long)page);
                        return -EFAULT;
                }
+               len += 1;
                if (*p == '\0' || *p == '\n')
                        break;
        }
index 6a6898e..db60217 100644 (file)
@@ -222,7 +222,7 @@ static void pick_new_mon(struct ceph_mon_client *monc)
                                max--;
                }
 
-               n = prandom_u32() % max;
+               n = prandom_u32_max(max);
                if (o >= 0 && n >= o)
                        n++;
 
index 87b883c..4e4f1e4 100644 (file)
@@ -1479,7 +1479,7 @@ static bool target_should_be_paused(struct ceph_osd_client *osdc,
 
 static int pick_random_replica(const struct ceph_osds *acting)
 {
-       int i = prandom_u32() % acting->size;
+       int i = prandom_u32_max(acting->size);
 
        dout("%s picked osd%d, primary osd%d\n", __func__,
             acting->osds[i], acting->primary);
index fa53830..fff6206 100644 (file)
@@ -5136,11 +5136,13 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
        case TC_ACT_SHOT:
                mini_qdisc_qstats_cpu_drop(miniq);
                kfree_skb_reason(skb, SKB_DROP_REASON_TC_INGRESS);
+               *ret = NET_RX_DROP;
                return NULL;
        case TC_ACT_STOLEN:
        case TC_ACT_QUEUED:
        case TC_ACT_TRAP:
                consume_skb(skb);
+               *ret = NET_RX_SUCCESS;
                return NULL;
        case TC_ACT_REDIRECT:
                /* skb_mac_header check was done by cls/act_bpf, so
@@ -5153,8 +5155,10 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
                        *another = true;
                        break;
                }
+               *ret = NET_RX_SUCCESS;
                return NULL;
        case TC_ACT_CONSUMED:
+               *ret = NET_RX_SUCCESS;
                return NULL;
        default:
                break;
@@ -8818,7 +8822,7 @@ EXPORT_SYMBOL(dev_set_mac_address_user);
 
 int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name)
 {
-       size_t size = sizeof(sa->sa_data);
+       size_t size = sizeof(sa->sa_data_min);
        struct net_device *dev;
        int ret = 0;
 
index 7674bb9..5cdbfbf 100644 (file)
@@ -342,7 +342,7 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
                if (ifr->ifr_hwaddr.sa_family != dev->type)
                        return -EINVAL;
                memcpy(dev->broadcast, ifr->ifr_hwaddr.sa_data,
-                      min(sizeof(ifr->ifr_hwaddr.sa_data),
+                      min(sizeof(ifr->ifr_hwaddr.sa_data_min),
                           (size_t)dev->addr_len));
                call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
                return 0;
index e93edb8..3c4786b 100644 (file)
@@ -111,7 +111,7 @@ static void neigh_cleanup_and_release(struct neighbour *neigh)
 
 unsigned long neigh_rand_reach_time(unsigned long base)
 {
-       return base ? (prandom_u32() % base) + (base >> 1) : 0;
+       return base ? prandom_u32_max(base) + (base >> 1) : 0;
 }
 EXPORT_SYMBOL(neigh_rand_reach_time);
 
index 0ec2f59..5581d22 100644 (file)
@@ -117,6 +117,7 @@ static int net_assign_generic(struct net *net, unsigned int id, void *data)
 
 static int ops_init(const struct pernet_operations *ops, struct net *net)
 {
+       struct net_generic *ng;
        int err = -ENOMEM;
        void *data = NULL;
 
@@ -135,7 +136,13 @@ static int ops_init(const struct pernet_operations *ops, struct net *net)
        if (!err)
                return 0;
 
+       if (ops->id && ops->size) {
 cleanup:
+               ng = rcu_dereference_protected(net->gen,
+                                              lockdep_is_held(&pernet_ops_rwsem));
+               ng->ptr[*ops->id] = NULL;
+       }
+
        kfree(data);
 
 out:
@@ -309,6 +316,7 @@ static __net_init int setup_net(struct net *net, struct user_namespace *user_ns)
 
        refcount_set(&net->ns.count, 1);
        ref_tracker_dir_init(&net->refcnt_tracker, 128);
+       ref_tracker_dir_init(&net->notrefcnt_tracker, 128);
 
        refcount_set(&net->passive, 1);
        get_random_bytes(&net->hash_mix, sizeof(u32));
@@ -429,6 +437,10 @@ static void net_free(struct net *net)
 {
        if (refcount_dec_and_test(&net->passive)) {
                kfree(rcu_access_pointer(net->gen));
+
+               /* There should not be any trackers left there. */
+               ref_tracker_dir_exit(&net->notrefcnt_tracker);
+
                kmem_cache_free(net_cachep, net);
        }
 }
index 88906ba..c376305 100644 (file)
@@ -2324,7 +2324,7 @@ static inline int f_pick(struct pktgen_dev *pkt_dev)
                                pkt_dev->curfl = 0; /*reset */
                }
        } else {
-               flow = prandom_u32() % pkt_dev->cflows;
+               flow = prandom_u32_max(pkt_dev->cflows);
                pkt_dev->curfl = flow;
 
                if (pkt_dev->flows[flow].count > pkt_dev->lflow) {
@@ -2380,10 +2380,9 @@ static void set_cur_queue_map(struct pktgen_dev *pkt_dev)
        else if (pkt_dev->queue_map_min <= pkt_dev->queue_map_max) {
                __u16 t;
                if (pkt_dev->flags & F_QUEUE_MAP_RND) {
-                       t = prandom_u32() %
-                               (pkt_dev->queue_map_max -
-                                pkt_dev->queue_map_min + 1)
-                               + pkt_dev->queue_map_min;
+                       t = prandom_u32_max(pkt_dev->queue_map_max -
+                                           pkt_dev->queue_map_min + 1) +
+                           pkt_dev->queue_map_min;
                } else {
                        t = pkt_dev->cur_queue_map + 1;
                        if (t > pkt_dev->queue_map_max)
@@ -2412,7 +2411,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                __u32 tmp;
 
                if (pkt_dev->flags & F_MACSRC_RND)
-                       mc = prandom_u32() % pkt_dev->src_mac_count;
+                       mc = prandom_u32_max(pkt_dev->src_mac_count);
                else {
                        mc = pkt_dev->cur_src_mac_offset++;
                        if (pkt_dev->cur_src_mac_offset >=
@@ -2438,7 +2437,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                __u32 tmp;
 
                if (pkt_dev->flags & F_MACDST_RND)
-                       mc = prandom_u32() % pkt_dev->dst_mac_count;
+                       mc = prandom_u32_max(pkt_dev->dst_mac_count);
 
                else {
                        mc = pkt_dev->cur_dst_mac_offset++;
@@ -2465,23 +2464,23 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                for (i = 0; i < pkt_dev->nr_labels; i++)
                        if (pkt_dev->labels[i] & MPLS_STACK_BOTTOM)
                                pkt_dev->labels[i] = MPLS_STACK_BOTTOM |
-                                            ((__force __be32)prandom_u32() &
+                                            ((__force __be32)get_random_u32() &
                                                      htonl(0x000fffff));
        }
 
        if ((pkt_dev->flags & F_VID_RND) && (pkt_dev->vlan_id != 0xffff)) {
-               pkt_dev->vlan_id = prandom_u32() & (4096 - 1);
+               pkt_dev->vlan_id = prandom_u32_max(4096);
        }
 
        if ((pkt_dev->flags & F_SVID_RND) && (pkt_dev->svlan_id != 0xffff)) {
-               pkt_dev->svlan_id = prandom_u32() & (4096 - 1);
+               pkt_dev->svlan_id = prandom_u32_max(4096);
        }
 
        if (pkt_dev->udp_src_min < pkt_dev->udp_src_max) {
                if (pkt_dev->flags & F_UDPSRC_RND)
-                       pkt_dev->cur_udp_src = prandom_u32() %
-                               (pkt_dev->udp_src_max - pkt_dev->udp_src_min)
-                               + pkt_dev->udp_src_min;
+                       pkt_dev->cur_udp_src = prandom_u32_max(
+                               pkt_dev->udp_src_max - pkt_dev->udp_src_min) +
+                               pkt_dev->udp_src_min;
 
                else {
                        pkt_dev->cur_udp_src++;
@@ -2492,9 +2491,9 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
 
        if (pkt_dev->udp_dst_min < pkt_dev->udp_dst_max) {
                if (pkt_dev->flags & F_UDPDST_RND) {
-                       pkt_dev->cur_udp_dst = prandom_u32() %
-                               (pkt_dev->udp_dst_max - pkt_dev->udp_dst_min)
-                               + pkt_dev->udp_dst_min;
+                       pkt_dev->cur_udp_dst = prandom_u32_max(
+                               pkt_dev->udp_dst_max - pkt_dev->udp_dst_min) +
+                               pkt_dev->udp_dst_min;
                } else {
                        pkt_dev->cur_udp_dst++;
                        if (pkt_dev->cur_udp_dst >= pkt_dev->udp_dst_max)
@@ -2509,7 +2508,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                if (imn < imx) {
                        __u32 t;
                        if (pkt_dev->flags & F_IPSRC_RND)
-                               t = prandom_u32() % (imx - imn) + imn;
+                               t = prandom_u32_max(imx - imn) + imn;
                        else {
                                t = ntohl(pkt_dev->cur_saddr);
                                t++;
@@ -2531,8 +2530,8 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                                if (pkt_dev->flags & F_IPDST_RND) {
 
                                        do {
-                                               t = prandom_u32() %
-                                                       (imx - imn) + imn;
+                                               t = prandom_u32_max(imx - imn) +
+                                                   imn;
                                                s = htonl(t);
                                        } while (ipv4_is_loopback(s) ||
                                                ipv4_is_multicast(s) ||
@@ -2569,7 +2568,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
 
                        for (i = 0; i < 4; i++) {
                                pkt_dev->cur_in6_daddr.s6_addr32[i] =
-                                   (((__force __be32)prandom_u32() |
+                                   (((__force __be32)get_random_u32() |
                                      pkt_dev->min_in6_daddr.s6_addr32[i]) &
                                     pkt_dev->max_in6_daddr.s6_addr32[i]);
                        }
@@ -2579,9 +2578,9 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
        if (pkt_dev->min_pkt_size < pkt_dev->max_pkt_size) {
                __u32 t;
                if (pkt_dev->flags & F_TXSIZE_RND) {
-                       t = prandom_u32() %
-                               (pkt_dev->max_pkt_size - pkt_dev->min_pkt_size)
-                               + pkt_dev->min_pkt_size;
+                       t = prandom_u32_max(pkt_dev->max_pkt_size -
+                                           pkt_dev->min_pkt_size) +
+                           pkt_dev->min_pkt_size;
                } else {
                        t = pkt_dev->cur_pkt_size + 1;
                        if (t > pkt_dev->max_pkt_size)
@@ -2590,7 +2589,7 @@ static void mod_cur_headers(struct pktgen_dev *pkt_dev)
                pkt_dev->cur_pkt_size = t;
        } else if (pkt_dev->n_imix_entries > 0) {
                struct imix_pkt *entry;
-               __u32 t = prandom_u32() % IMIX_PRECISION;
+               __u32 t = prandom_u32_max(IMIX_PRECISION);
                __u8 entry_index = pkt_dev->imix_distribution[t];
 
                entry = &pkt_dev->imix_entries[entry_index];
index 1d9719e..9b3b198 100644
@@ -748,6 +748,13 @@ static void skb_clone_fraglist(struct sk_buff *skb)
                skb_get(list);
 }
 
+static bool skb_pp_recycle(struct sk_buff *skb, void *data)
+{
+       if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
+               return false;
+       return page_pool_return_skb_page(virt_to_page(data));
+}
+
 static void skb_free_head(struct sk_buff *skb)
 {
        unsigned char *head = skb->head;
index ca70525..1efdc47 100644
@@ -500,11 +500,11 @@ bool sk_msg_is_readable(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_msg_is_readable);
 
-static struct sk_msg *alloc_sk_msg(void)
+static struct sk_msg *alloc_sk_msg(gfp_t gfp)
 {
        struct sk_msg *msg;
 
-       msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_KERNEL);
+       msg = kzalloc(sizeof(*msg), gfp | __GFP_NOWARN);
        if (unlikely(!msg))
                return NULL;
        sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
@@ -520,7 +520,7 @@ static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
        if (!sk_rmem_schedule(sk, skb, skb->truesize))
                return NULL;
 
-       return alloc_sk_msg();
+       return alloc_sk_msg(GFP_KERNEL);
 }
 
 static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
@@ -597,7 +597,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
                                     u32 off, u32 len)
 {
-       struct sk_msg *msg = alloc_sk_msg();
+       struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
        struct sock *sk = psock->sk;
        int err;
 
index a3ba035..4571914 100644
@@ -1436,7 +1436,7 @@ set_sndbuf:
                break;
                }
        case SO_INCOMING_CPU:
-               WRITE_ONCE(sk->sk_incoming_cpu, val);
+               reuseport_update_incoming_cpu(sk, val);
                break;
 
        case SO_CNX_ADVICE:
@@ -2094,6 +2094,9 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
                if (likely(sk->sk_net_refcnt)) {
                        get_net_track(net, &sk->ns_tracker, priority);
                        sock_inuse_add(net, 1);
+               } else {
+                       __netns_tracker_alloc(net, &sk->ns_tracker,
+                                             false, priority);
                }
 
                sock_net_set(sk, net);
@@ -2149,6 +2152,9 @@ static void __sk_destruct(struct rcu_head *head)
 
        if (likely(sk->sk_net_refcnt))
                put_net_track(sock_net(sk), &sk->ns_tracker);
+       else
+               __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
+
        sk_prot_free(sk->sk_prot_creator, sk);
 }
 
@@ -2237,6 +2243,14 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
        if (likely(newsk->sk_net_refcnt)) {
                get_net_track(sock_net(newsk), &newsk->ns_tracker, priority);
                sock_inuse_add(sock_net(newsk), 1);
+       } else {
+               /* Kernel sockets are not elevating the struct net refcount.
+                * Instead, use a tracker to more easily detect if a layer
+                * is not properly dismantling its kernel sockets at netns
+                * destroy time.
+                */
+               __netns_tracker_alloc(sock_net(newsk), &newsk->ns_tracker,
+                                     false, priority);
        }
        sk_node_init(&newsk->sk_node);
        sock_lock_init(newsk);
@@ -2730,7 +2744,7 @@ failure:
 }
 EXPORT_SYMBOL(sock_alloc_send_pskb);
 
-int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg,
+int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
                     struct sockcm_cookie *sockc)
 {
        u32 tsflags;
@@ -2784,7 +2798,7 @@ int sock_cmsg_send(struct sock *sk, struct msghdr *msg,
                        return -EINVAL;
                if (cmsg->cmsg_level != SOL_SOCKET)
                        continue;
-               ret = __sock_cmsg_send(sk, msg, cmsg, sockc);
+               ret = __sock_cmsg_send(sk, cmsg, sockc);
                if (ret)
                        return ret;
        }
index 5daa1fa..5a16528 100644
@@ -21,6 +21,86 @@ static DEFINE_IDA(reuseport_ida);
 static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
                               struct sock_reuseport *reuse, bool bind_inany);
 
+void reuseport_has_conns_set(struct sock *sk)
+{
+       struct sock_reuseport *reuse;
+
+       if (!rcu_access_pointer(sk->sk_reuseport_cb))
+               return;
+
+       spin_lock_bh(&reuseport_lock);
+       reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
+                                         lockdep_is_held(&reuseport_lock));
+       if (likely(reuse))
+               reuse->has_conns = 1;
+       spin_unlock_bh(&reuseport_lock);
+}
+EXPORT_SYMBOL(reuseport_has_conns_set);
+
+static void __reuseport_get_incoming_cpu(struct sock_reuseport *reuse)
+{
+       /* Paired with READ_ONCE() in reuseport_select_sock_by_hash(). */
+       WRITE_ONCE(reuse->incoming_cpu, reuse->incoming_cpu + 1);
+}
+
+static void __reuseport_put_incoming_cpu(struct sock_reuseport *reuse)
+{
+       /* Paired with READ_ONCE() in reuseport_select_sock_by_hash(). */
+       WRITE_ONCE(reuse->incoming_cpu, reuse->incoming_cpu - 1);
+}
+
+static void reuseport_get_incoming_cpu(struct sock *sk, struct sock_reuseport *reuse)
+{
+       if (sk->sk_incoming_cpu >= 0)
+               __reuseport_get_incoming_cpu(reuse);
+}
+
+static void reuseport_put_incoming_cpu(struct sock *sk, struct sock_reuseport *reuse)
+{
+       if (sk->sk_incoming_cpu >= 0)
+               __reuseport_put_incoming_cpu(reuse);
+}
+
+void reuseport_update_incoming_cpu(struct sock *sk, int val)
+{
+       struct sock_reuseport *reuse;
+       int old_sk_incoming_cpu;
+
+       if (unlikely(!rcu_access_pointer(sk->sk_reuseport_cb))) {
+               /* Paired with READ_ONCE() in sk_incoming_cpu_update()
+                * and compute_score().
+                */
+               WRITE_ONCE(sk->sk_incoming_cpu, val);
+               return;
+       }
+
+       spin_lock_bh(&reuseport_lock);
+
+       /* This must be done under reuseport_lock to avoid a race with
+        * reuseport_grow(), which accesses sk->sk_incoming_cpu without
+        * lock_sock() when detaching a shutdown()ed sk.
+        *
+        * Paired with READ_ONCE() in reuseport_select_sock_by_hash().
+        */
+       old_sk_incoming_cpu = sk->sk_incoming_cpu;
+       WRITE_ONCE(sk->sk_incoming_cpu, val);
+
+       reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
+                                         lockdep_is_held(&reuseport_lock));
+
+       /* reuseport_grow() has detached a closed sk. */
+       if (!reuse)
+               goto out;
+
+       if (old_sk_incoming_cpu < 0 && val >= 0)
+               __reuseport_get_incoming_cpu(reuse);
+       else if (old_sk_incoming_cpu >= 0 && val < 0)
+               __reuseport_put_incoming_cpu(reuse);
+
+out:
+       spin_unlock_bh(&reuseport_lock);
+}
+
 static int reuseport_sock_index(struct sock *sk,
                                const struct sock_reuseport *reuse,
                                bool closed)
@@ -48,6 +128,7 @@ static void __reuseport_add_sock(struct sock *sk,
        /* paired with smp_rmb() in reuseport_(select|migrate)_sock() */
        smp_wmb();
        reuse->num_socks++;
+       reuseport_get_incoming_cpu(sk, reuse);
 }
 
 static bool __reuseport_detach_sock(struct sock *sk,
@@ -60,6 +141,7 @@ static bool __reuseport_detach_sock(struct sock *sk,
 
        reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
        reuse->num_socks--;
+       reuseport_put_incoming_cpu(sk, reuse);
 
        return true;
 }
@@ -70,6 +152,7 @@ static void __reuseport_add_closed_sock(struct sock *sk,
        reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk;
        /* paired with READ_ONCE() in inet_csk_bind_conflict() */
        WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1);
+       reuseport_get_incoming_cpu(sk, reuse);
 }
 
 static bool __reuseport_detach_closed_sock(struct sock *sk,
@@ -83,6 +166,7 @@ static bool __reuseport_detach_closed_sock(struct sock *sk,
        reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
        /* paired with READ_ONCE() in inet_csk_bind_conflict() */
        WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1);
+       reuseport_put_incoming_cpu(sk, reuse);
 
        return true;
 }
@@ -150,6 +234,7 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
        reuse->bind_inany = bind_inany;
        reuse->socks[0] = sk;
        reuse->num_socks = 1;
+       reuseport_get_incoming_cpu(sk, reuse);
        rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
 
 out:
@@ -193,6 +278,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
        more_reuse->reuseport_id = reuse->reuseport_id;
        more_reuse->bind_inany = reuse->bind_inany;
        more_reuse->has_conns = reuse->has_conns;
+       more_reuse->incoming_cpu = reuse->incoming_cpu;
 
        memcpy(more_reuse->socks, reuse->socks,
               reuse->num_socks * sizeof(struct sock *));
@@ -442,18 +528,32 @@ static struct sock *run_bpf_filter(struct sock_reuseport *reuse, u16 socks,
 static struct sock *reuseport_select_sock_by_hash(struct sock_reuseport *reuse,
                                                  u32 hash, u16 num_socks)
 {
+       struct sock *first_valid_sk = NULL;
        int i, j;
 
        i = j = reciprocal_scale(hash, num_socks);
-       while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
+       do {
+               struct sock *sk = reuse->socks[i];
+
+               if (sk->sk_state != TCP_ESTABLISHED) {
+                       /* Paired with WRITE_ONCE() in __reuseport_(get|put)_incoming_cpu(). */
+                       if (!READ_ONCE(reuse->incoming_cpu))
+                               return sk;
+
+                       /* Paired with WRITE_ONCE() in reuseport_update_incoming_cpu(). */
+                       if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+                               return sk;
+
+                       if (!first_valid_sk)
+                               first_valid_sk = sk;
+               }
+
                i++;
                if (i >= num_socks)
                        i = 0;
-               if (i == j)
-                       return NULL;
-       }
+       } while (i != j);
 
-       return reuse->socks[i];
+       return first_valid_sk;
 }
 
 /**
index 1105057..75fded8 100644
@@ -123,7 +123,7 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
        DEFINE_WAIT_FUNC(wait, woken_wake_function);
 
        if (sk_stream_memory_free(sk))
-               current_timeo = vm_wait = (prandom_u32() % (HZ / 5)) + 2;
+               current_timeo = vm_wait = prandom_u32_max(HZ / 5) + 2;
 
        add_wait_queue(sk_sleep(sk), &wait);
 
index 7dfc00c..9ddc3a9 100644
@@ -278,6 +278,7 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
 int dccp_rcv_established(struct sock *sk, struct sk_buff *skb,
                         const struct dccp_hdr *dh, const unsigned int len);
 
+void dccp_destruct_common(struct sock *sk);
 int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized);
 void dccp_destroy_sock(struct sock *sk);
 
index 6a6e121..713b7b8 100644
@@ -144,7 +144,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
                                                    inet->inet_daddr,
                                                    inet->inet_sport,
                                                    inet->inet_dport);
-       inet->inet_id = prandom_u32();
+       inet->inet_id = get_random_u16();
 
        err = dccp_connect(sk);
        rt = NULL;
@@ -443,7 +443,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
        RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
        newinet->mc_index  = inet_iif(skb);
        newinet->mc_ttl    = ip_hdr(skb)->ttl;
-       newinet->inet_id   = prandom_u32();
+       newinet->inet_id   = get_random_u16();
 
        if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
                goto put_and_exit;
index e57b430..ae62b15 100644
@@ -1021,6 +1021,12 @@ static const struct inet_connection_sock_af_ops dccp_ipv6_mapped = {
        .sockaddr_len      = sizeof(struct sockaddr_in6),
 };
 
+static void dccp_v6_sk_destruct(struct sock *sk)
+{
+       dccp_destruct_common(sk);
+       inet6_sock_destruct(sk);
+}
+
 /* NOTE: A lot of things set to zero explicitly by call to
  *       sk_alloc() so need not be done here.
  */
@@ -1033,17 +1039,12 @@ static int dccp_v6_init_sock(struct sock *sk)
                if (unlikely(!dccp_v6_ctl_sock_initialized))
                        dccp_v6_ctl_sock_initialized = 1;
                inet_csk(sk)->icsk_af_ops = &dccp_ipv6_af_ops;
+               sk->sk_destruct = dccp_v6_sk_destruct;
        }
 
        return err;
 }
 
-static void dccp_v6_destroy_sock(struct sock *sk)
-{
-       dccp_destroy_sock(sk);
-       inet6_destroy_sock(sk);
-}
-
 static struct timewait_sock_ops dccp6_timewait_sock_ops = {
        .twsk_obj_size  = sizeof(struct dccp6_timewait_sock),
 };
@@ -1066,7 +1067,7 @@ static struct proto dccp_v6_prot = {
        .accept            = inet_csk_accept,
        .get_port          = inet_csk_get_port,
        .shutdown          = dccp_shutdown,
-       .destroy           = dccp_v6_destroy_sock,
+       .destroy           = dccp_destroy_sock,
        .orphan_count      = &dccp_orphan_count,
        .max_header        = MAX_DCCP_HEADER,
        .obj_size          = sizeof(struct dccp6_sock),
index c548ca3..9494b0d 100644
@@ -171,12 +171,18 @@ const char *dccp_packet_name(const int type)
 
 EXPORT_SYMBOL_GPL(dccp_packet_name);
 
-static void dccp_sk_destruct(struct sock *sk)
+void dccp_destruct_common(struct sock *sk)
 {
        struct dccp_sock *dp = dccp_sk(sk);
 
        ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
        dp->dccps_hc_tx_ccid = NULL;
+}
+EXPORT_SYMBOL_GPL(dccp_destruct_common);
+
+static void dccp_sk_destruct(struct sock *sk)
+{
+       dccp_destruct_common(sk);
        inet_sock_destruct(sk);
 }
 
index 1a59918..a9fde48 100644
@@ -3145,7 +3145,7 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
        case NETDEV_CHANGELOWERSTATE: {
                struct netdev_notifier_changelowerstate_info *info = ptr;
                struct dsa_port *dp;
-               int err;
+               int err = 0;
 
                if (dsa_slave_dev_check(dev)) {
                        dp = dsa_slave_to_port(dev);
index 566adf8..ee3e02d 100644
@@ -202,6 +202,12 @@ const char link_mode_names[][ETH_GSTRING_LEN] = {
        __DEFINE_LINK_MODE_NAME(100, FX, Half),
        __DEFINE_LINK_MODE_NAME(100, FX, Full),
        __DEFINE_LINK_MODE_NAME(10, T1L, Full),
+       __DEFINE_LINK_MODE_NAME(800000, CR8, Full),
+       __DEFINE_LINK_MODE_NAME(800000, KR8, Full),
+       __DEFINE_LINK_MODE_NAME(800000, DR8, Full),
+       __DEFINE_LINK_MODE_NAME(800000, DR8_2, Full),
+       __DEFINE_LINK_MODE_NAME(800000, SR8, Full),
+       __DEFINE_LINK_MODE_NAME(800000, VR8, Full),
 };
 static_assert(ARRAY_SIZE(link_mode_names) == __ETHTOOL_LINK_MODE_MASK_NBITS);
 
@@ -238,6 +244,8 @@ static_assert(ARRAY_SIZE(link_mode_names) == __ETHTOOL_LINK_MODE_MASK_NBITS);
 #define __LINK_MODE_LANES_X            1
 #define __LINK_MODE_LANES_FX           1
 #define __LINK_MODE_LANES_T1L          1
+#define __LINK_MODE_LANES_VR8          8
+#define __LINK_MODE_LANES_DR8_2                8
 
 #define __DEFINE_LINK_MODE_PARAMS(_speed, _type, _duplex)      \
        [ETHTOOL_LINK_MODE(_speed, _type, _duplex)] = {         \
@@ -352,6 +360,12 @@ const struct link_mode_info link_mode_params[] = {
        __DEFINE_LINK_MODE_PARAMS(100, FX, Half),
        __DEFINE_LINK_MODE_PARAMS(100, FX, Full),
        __DEFINE_LINK_MODE_PARAMS(10, T1L, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, CR8, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, KR8, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, DR8, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, DR8_2, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, SR8, Full),
+       __DEFINE_LINK_MODE_PARAMS(800000, VR8, Full),
 };
 static_assert(ARRAY_SIZE(link_mode_params) == __ETHTOOL_LINK_MODE_MASK_NBITS);
 
index 5a471e1..e8683e4 100644
@@ -64,7 +64,7 @@ static int pse_prepare_data(const struct ethnl_req_info *req_base,
        if (ret < 0)
                return ret;
 
-       ret = pse_get_pse_attributes(dev, info->extack, data);
+       ret = pse_get_pse_attributes(dev, info ? info->extack : NULL, data);
 
        ethnl_ops_complete(dev);
 
index 5bf3577..a50429a 100644
@@ -150,15 +150,15 @@ struct sk_buff *hsr_get_untagged_frame(struct hsr_frame_info *frame,
                                       struct hsr_port *port)
 {
        if (!frame->skb_std) {
-               if (frame->skb_hsr) {
+               if (frame->skb_hsr)
                        frame->skb_std =
                                create_stripped_skb_hsr(frame->skb_hsr, frame);
-               } else {
-                       /* Unexpected */
-                       WARN_ONCE(1, "%s:%d: Unexpected frame received (port_src %s)\n",
-                                 __FILE__, __LINE__, port->dev->name);
+               else
+                       netdev_warn_once(port->dev,
+                                        "Unexpected frame received in hsr_get_untagged_frame()\n");
+
+               if (!frame->skb_std)
                        return NULL;
-               }
        }
 
        return skb_clone(frame->skb_std, GFP_ATOMIC);
index 405a8c2..4d1af0c 100644
@@ -70,10 +70,10 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
        }
        inet->inet_daddr = fl4->daddr;
        inet->inet_dport = usin->sin_port;
-       reuseport_has_conns(sk, true);
+       reuseport_has_conns_set(sk);
        sk->sk_state = TCP_ESTABLISHED;
        sk_set_txhash(sk);
-       inet->inet_id = prandom_u32();
+       inet->inet_id = get_random_u16();
 
        sk_dst_set(sk, &rt->dst);
        err = 0;
index df0660d..81be3e0 100644
@@ -213,7 +213,7 @@ static void igmp_stop_timer(struct ip_mc_list *im)
 /* It must be called with locked im->lock */
 static void igmp_start_timer(struct ip_mc_list *im, int max_delay)
 {
-       int tv = prandom_u32() % max_delay;
+       int tv = prandom_u32_max(max_delay);
 
        im->tm_running = 1;
        if (!mod_timer(&im->timer, jiffies+tv+2))
@@ -222,7 +222,7 @@ static void igmp_start_timer(struct ip_mc_list *im, int max_delay)
 
 static void igmp_gq_start_timer(struct in_device *in_dev)
 {
-       int tv = prandom_u32() % in_dev->mr_maxdelay;
+       int tv = prandom_u32_max(in_dev->mr_maxdelay);
        unsigned long exp = jiffies + tv + 2;
 
        if (in_dev->mr_gq_running &&
@@ -236,7 +236,7 @@ static void igmp_gq_start_timer(struct in_device *in_dev)
 
 static void igmp_ifc_start_timer(struct in_device *in_dev, int delay)
 {
-       int tv = prandom_u32() % delay;
+       int tv = prandom_u32_max(delay);
 
        if (!mod_timer(&in_dev->mr_ifc_timer, jiffies+tv+2))
                in_dev_hold(in_dev);
index ebca860..4e84ed2 100644
@@ -314,7 +314,7 @@ other_half_scan:
        if (likely(remaining > 1))
                remaining &= ~1U;
 
-       offset = prandom_u32() % remaining;
+       offset = prandom_u32_max(remaining);
        /* __inet_hash_connect() favors ports having @low parity
         * We do the opposite to not pollute connect() users.
         */
index a0ad34e..d3dc281 100644
@@ -1037,7 +1037,7 @@ ok:
         * on low contention the randomness is maximal and on high contention
         * it may be inexistent.
         */
-       i = max_t(int, i, (prandom_u32() & 7) * 2);
+       i = max_t(int, i, prandom_u32_max(8) * 2);
        WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);
 
        /* Head lock still held and bh's disabled */
index 1ae83ad..922c87e 100644
@@ -172,7 +172,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
                 * Avoid using the hashed IP ident generator.
                 */
                if (sk->sk_protocol == IPPROTO_TCP)
-                       iph->id = (__force __be16)prandom_u32();
+                       iph->id = (__force __be16)get_random_u16();
                else
                        __ip_select_ident(net, iph, 1);
        }
index 6e19cad..5f16807 100644
@@ -267,7 +267,7 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc,
                }
 #endif
                if (cmsg->cmsg_level == SOL_SOCKET) {
-                       err = __sock_cmsg_send(sk, msg, cmsg, &ipc->sockc);
+                       err = __sock_cmsg_send(sk, cmsg, &ipc->sockc);
                        if (err)
                                return err;
                        continue;
index ff85db5..ded5bef 100644
@@ -78,6 +78,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
        flow.flowi4_tos = iph->tos & IPTOS_RT_MASK;
        flow.flowi4_scope = RT_SCOPE_UNIVERSE;
        flow.flowi4_l3mdev = l3mdev_master_ifindex_rcu(xt_in(par));
+       flow.flowi4_uid = sock_net_uid(xt_net(par), NULL);
 
        return rpfilter_lookup_reverse(xt_net(par), &flow, xt_in(par), info->flags) ^ invert;
 }
index e886147..fc65d69 100644
@@ -65,6 +65,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
        struct flowi4 fl4 = {
                .flowi4_scope = RT_SCOPE_UNIVERSE,
                .flowi4_iif = LOOPBACK_IFINDEX,
+               .flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
        };
        const struct net_device *oif;
        const struct net_device *found;
index 795cbe1..cd1fa9f 100644
@@ -3664,7 +3664,7 @@ static __net_init int rt_genid_init(struct net *net)
 {
        atomic_set(&net->ipv4.rt_genid, 0);
        atomic_set(&net->fnhe_genid, 0);
-       atomic_set(&net->ipv4.dev_addr_genid, get_random_int());
+       atomic_set(&net->ipv4.dev_addr_genid, get_random_u32());
        return 0;
 }
 
@@ -3719,7 +3719,7 @@ int __init ip_rt_init(void)
 
        ip_idents = idents_hash;
 
-       prandom_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
+       get_random_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
 
        ip_tstamps = idents_hash + (ip_idents_mask + 1) * sizeof(*ip_idents);
 
index f823281..ef14efa 100644
@@ -457,6 +457,7 @@ void tcp_init_sock(struct sock *sk)
        WRITE_ONCE(sk->sk_sndbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1]));
        WRITE_ONCE(sk->sk_rcvbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1]));
 
+       set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
        sk_sockets_allocated_inc(sk);
 }
 EXPORT_SYMBOL(tcp_init_sock);
index 112f28f..ba4d98e 100644
@@ -243,7 +243,7 @@ static bool tcp_cdg_backoff(struct sock *sk, u32 grad)
        struct cdg *ca = inet_csk_ca(sk);
        struct tcp_sock *tp = tcp_sk(sk);
 
-       if (prandom_u32() <= nexp_u32(grad * backoff_factor))
+       if (get_random_u32() <= nexp_u32(grad * backoff_factor))
                return false;
 
        if (use_ineff) {
index bc2ea12..0640453 100644
@@ -2192,7 +2192,8 @@ void tcp_enter_loss(struct sock *sk)
  */
 static bool tcp_check_sack_reneging(struct sock *sk, int flag)
 {
-       if (flag & FLAG_SACK_RENEGING) {
+       if (flag & FLAG_SACK_RENEGING &&
+           flag & FLAG_SND_UNA_ADVANCED) {
                struct tcp_sock *tp = tcp_sk(sk);
                unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4),
                                          msecs_to_jiffies(10));
index 6376ad9..87d440f 100644
@@ -323,7 +323,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
                                                 inet->inet_daddr);
        }
 
-       inet->inet_id = prandom_u32();
+       inet->inet_id = get_random_u16();
 
        if (tcp_fastopen_defer_connect(sk, &err))
                return err;
@@ -1543,7 +1543,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
        inet_csk(newsk)->icsk_ext_hdr_len = 0;
        if (inet_opt)
                inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen;
-       newinet->inet_id = prandom_u32();
+       newinet->inet_id = get_random_u16();
 
        /* Set ToS of the new socket based upon the value of incoming SYN.
         * ECT bits are set later in tcp_init_transfer().
@@ -1874,11 +1874,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
        __skb_push(skb, hdrlen);
 
 no_coalesce:
+       limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
        /* Only socket owner can try to collapse/prune rx queues
         * to reduce memory overhead, so add a little headroom here.
         * Few sockets backlog are possibly concurrently non empty.
         */
-       limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+       limit += 64 * 1024;
 
        if (unlikely(sk_add_backlog(sk, skb, limit))) {
                bh_unlock_sock(sk);
index 8126f67..89accc3 100644
@@ -246,7 +246,7 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
                inet_get_local_port_range(net, &low, &high);
                remaining = (high - low) + 1;
 
-               rand = prandom_u32();
+               rand = get_random_u32();
                first = reciprocal_scale(rand, remaining) + low;
                /*
                 * force rand to be an odd multiple of UDP_HTABLE_SIZE
@@ -448,7 +448,7 @@ static struct sock *udp4_lib_lookup2(struct net *net,
                        result = lookup_reuseport(net, sk, skb,
                                                  saddr, sport, daddr, hnum);
                        /* Fall back to scoring if group has connections */
-                       if (result && !reuseport_has_conns(sk, false))
+                       if (result && !reuseport_has_conns(sk))
                                return result;
 
                        result = result ? : sk;
@@ -1448,7 +1448,7 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
        if (likely(partial)) {
                up->forward_deficit += size;
                size = up->forward_deficit;
-               if (size < (sk->sk_rcvbuf >> 2) &&
+               if (size < READ_ONCE(up->forward_threshold) &&
                    !skb_queue_empty(&up->reader_queue))
                        return;
        } else {
@@ -1622,8 +1622,9 @@ static void udp_destruct_sock(struct sock *sk)
 
 int udp_init_sock(struct sock *sk)
 {
-       skb_queue_head_init(&udp_sk(sk)->reader_queue);
+       udp_lib_init_sock(sk);
        sk->sk_destruct = udp_destruct_sock;
+       set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
        return 0;
 }
 
@@ -2671,6 +2672,18 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
        int err = 0;
        int is_udplite = IS_UDPLITE(sk);
 
+       if (level == SOL_SOCKET) {
+               err = sk_setsockopt(sk, level, optname, optval, optlen);
+
+               if (optname == SO_RCVBUF || optname == SO_RCVBUFFORCE) {
+                       sockopt_lock_sock(sk);
+                       /* paired with READ_ONCE in udp_rmem_release() */
+                       WRITE_ONCE(up->forward_threshold, sk->sk_rcvbuf >> 2);
+                       sockopt_release_sock(sk);
+               }
+               return err;
+       }
+
        if (optlen < sizeof(int))
                return -EINVAL;
 
@@ -2784,7 +2797,7 @@ EXPORT_SYMBOL(udp_lib_setsockopt);
 int udp_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
                   unsigned int optlen)
 {
-       if (level == SOL_UDP  ||  level == SOL_UDPLITE)
+       if (level == SOL_UDP  ||  level == SOL_UDPLITE || level == SOL_SOCKET)
                return udp_lib_setsockopt(sk, level, optname,
                                          optval, optlen,
                                          udp_push_pending_frames);
index 10ce86b..9c3f520 100644 (file)
@@ -104,7 +104,7 @@ static inline u32 cstamp_delta(unsigned long cstamp)
 static inline s32 rfc3315_s14_backoff_init(s32 irt)
 {
        /* multiply 'initial retransmission time' by 0.9 .. 1.1 */
-       u64 tmp = (900000 + prandom_u32() % 200001) * (u64)irt;
+       u64 tmp = (900000 + prandom_u32_max(200001)) * (u64)irt;
        do_div(tmp, 1000000);
        return (s32)tmp;
 }
@@ -112,11 +112,11 @@ static inline s32 rfc3315_s14_backoff_init(s32 irt)
 static inline s32 rfc3315_s14_backoff_update(s32 rt, s32 mrt)
 {
        /* multiply 'retransmission timeout' by 1.9 .. 2.1 */
-       u64 tmp = (1900000 + prandom_u32() % 200001) * (u64)rt;
+       u64 tmp = (1900000 + prandom_u32_max(200001)) * (u64)rt;
        do_div(tmp, 1000000);
        if ((s32)tmp > mrt) {
                /* multiply 'maximum retransmission time' by 0.9 .. 1.1 */
-               tmp = (900000 + prandom_u32() % 200001) * (u64)mrt;
+               tmp = (900000 + prandom_u32_max(200001)) * (u64)mrt;
                do_div(tmp, 1000000);
        }
        return (s32)tmp;
@@ -3967,7 +3967,7 @@ static void addrconf_dad_kick(struct inet6_ifaddr *ifp)
        if (ifp->flags & IFA_F_OPTIMISTIC)
                rand_num = 0;
        else
-               rand_num = prandom_u32() % (idev->cnf.rtr_solicit_delay ? : 1);
+               rand_num = prandom_u32_max(idev->cnf.rtr_solicit_delay ?: 1);
 
        nonce = 0;
        if (idev->cnf.enhanced_dad ||
@@ -7214,9 +7214,11 @@ err_reg_dflt:
        __addrconf_sysctl_unregister(net, all, NETCONFA_IFINDEX_ALL);
 err_reg_all:
        kfree(dflt);
+       net->ipv6.devconf_dflt = NULL;
 #endif
 err_alloc_dflt:
        kfree(all);
+       net->ipv6.devconf_all = NULL;
 err_alloc_all:
        kfree(net->ipv6.inet6_addr_lst);
 err_alloc_addr:
index 0241910..6807529 100644 (file)
@@ -114,6 +114,7 @@ void inet6_sock_destruct(struct sock *sk)
        inet6_cleanup_sock(sk);
        inet_sock_destruct(sk);
 }
+EXPORT_SYMBOL_GPL(inet6_sock_destruct);
 
 static int inet6_create(struct net *net, struct socket *sock, int protocol,
                        int kern)
@@ -489,7 +490,7 @@ int inet6_release(struct socket *sock)
 }
 EXPORT_SYMBOL(inet6_release);
 
-void inet6_destroy_sock(struct sock *sk)
+void inet6_cleanup_sock(struct sock *sk)
 {
        struct ipv6_pinfo *np = inet6_sk(sk);
        struct sk_buff *skb;
@@ -514,12 +515,6 @@ void inet6_destroy_sock(struct sock *sk)
                txopt_put(opt);
        }
 }
-EXPORT_SYMBOL_GPL(inet6_destroy_sock);
-
-void inet6_cleanup_sock(struct sock *sk)
-{
-       inet6_destroy_sock(sk);
-}
 EXPORT_SYMBOL_GPL(inet6_cleanup_sock);
 
 /*
index df665d4..df7e032 100644 (file)
@@ -256,7 +256,7 @@ ipv4_connected:
                goto out;
        }
 
-       reuseport_has_conns(sk, true);
+       reuseport_has_conns_set(sk);
        sk->sk_state = TCP_ESTABLISHED;
        sk_set_txhash(sk);
 out:
@@ -771,7 +771,7 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
                }
 
                if (cmsg->cmsg_level == SOL_SOCKET) {
-                       err = __sock_cmsg_send(sk, msg, cmsg, &ipc6->sockc);
+                       err = __sock_cmsg_send(sk, cmsg, &ipc6->sockc);
                        if (err)
                                return err;
                        continue;
index ceb85c6..18481eb 100644 (file)
@@ -220,7 +220,7 @@ static struct ip6_flowlabel *fl_intern(struct net *net,
        spin_lock_bh(&ip6_fl_lock);
        if (label == 0) {
                for (;;) {
-                       fl->label = htonl(prandom_u32())&IPV6_FLOWLABEL_MASK;
+                       fl->label = htonl(get_random_u32())&IPV6_FLOWLABEL_MASK;
                        if (fl->label) {
                                lfl = __fl_lookup(net, fl->label);
                                if (!lfl)
index 532f447..9ce5168 100644 (file)
@@ -1005,10 +1005,8 @@ unlock:
        return retv;
 
 e_inval:
-       sockopt_release_sock(sk);
-       if (needs_rtnl)
-               rtnl_unlock();
-       return -EINVAL;
+       retv = -EINVAL;
+       goto unlock;
 }
 
 int ipv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
index 0566ab0..7860383 100644 (file)
@@ -1050,7 +1050,7 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group,
 /* called with mc_lock */
 static void mld_gq_start_work(struct inet6_dev *idev)
 {
-       unsigned long tv = prandom_u32() % idev->mc_maxdelay;
+       unsigned long tv = prandom_u32_max(idev->mc_maxdelay);
 
        idev->mc_gq_running = 1;
        if (!mod_delayed_work(mld_wq, &idev->mc_gq_work, tv + 2))
@@ -1068,7 +1068,7 @@ static void mld_gq_stop_work(struct inet6_dev *idev)
 /* called with mc_lock */
 static void mld_ifc_start_work(struct inet6_dev *idev, unsigned long delay)
 {
-       unsigned long tv = prandom_u32() % delay;
+       unsigned long tv = prandom_u32_max(delay);
 
        if (!mod_delayed_work(mld_wq, &idev->mc_ifc_work, tv + 2))
                in6_dev_hold(idev);
@@ -1085,7 +1085,7 @@ static void mld_ifc_stop_work(struct inet6_dev *idev)
 /* called with mc_lock */
 static void mld_dad_start_work(struct inet6_dev *idev, unsigned long delay)
 {
-       unsigned long tv = prandom_u32() % delay;
+       unsigned long tv = prandom_u32_max(delay);
 
        if (!mod_delayed_work(mld_wq, &idev->mc_dad_work, tv + 2))
                in6_dev_hold(idev);
@@ -1130,7 +1130,7 @@ static void igmp6_group_queried(struct ifmcaddr6 *ma, unsigned long resptime)
        }
 
        if (delay >= resptime)
-               delay = prandom_u32() % resptime;
+               delay = prandom_u32_max(resptime);
 
        if (!mod_delayed_work(mld_wq, &ma->mca_work, delay))
                refcount_inc(&ma->mca_refcnt);
@@ -2574,7 +2574,7 @@ static void igmp6_join_group(struct ifmcaddr6 *ma)
 
        igmp6_send(&ma->mca_addr, ma->idev->dev, ICMPV6_MGM_REPORT);
 
-       delay = prandom_u32() % unsolicited_report_interval(ma->idev);
+       delay = prandom_u32_max(unsolicited_report_interval(ma->idev));
 
        if (cancel_delayed_work(&ma->mca_work)) {
                refcount_dec(&ma->mca_refcnt);
index 69d86b0..a01d9b8 100644 (file)
@@ -40,6 +40,7 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
                .flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev),
                .flowlabel = (* (__be32 *) iph) & IPV6_FLOWINFO_MASK,
                .flowi6_proto = iph->nexthdr,
+               .flowi6_uid = sock_net_uid(net, NULL),
                .daddr = iph->saddr,
        };
        int lookup_flags;
index 91faac6..36dc14b 100644 (file)
@@ -66,6 +66,7 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
        struct flowi6 fl6 = {
                .flowi6_iif = LOOPBACK_IFINDEX,
                .flowi6_proto = pkt->tprot,
+               .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
        };
        u32 ret = 0;
 
@@ -163,6 +164,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
        struct flowi6 fl6 = {
                .flowi6_iif = LOOPBACK_IFINDEX,
                .flowi6_proto = pkt->tprot,
+               .flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
        };
        struct rt6_info *rt;
        int lookup_flags;
index 2880dc7..2685c3f 100644 (file)
@@ -18,7 +18,7 @@ static u32 __ipv6_select_ident(struct net *net,
        u32 id;
 
        do {
-               id = prandom_u32();
+               id = get_random_u32();
        } while (!id);
 
        return id;
index 86c26e4..808983b 100644 (file)
 #include <linux/bpf-cgroup.h>
 #include <net/ping.h>
 
-static void ping_v6_destroy(struct sock *sk)
-{
-       inet6_destroy_sock(sk);
-}
-
 /* Compatibility glue so we can support IPv6 when it's compiled as a module */
 static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
                                 int *addr_len)
@@ -205,7 +200,6 @@ struct proto pingv6_prot = {
        .owner =        THIS_MODULE,
        .init =         ping_init_sock,
        .close =        ping_close,
-       .destroy =      ping_v6_destroy,
        .pre_connect =  ping_v6_pre_connect,
        .connect =      ip6_datagram_connect_v6_only,
        .disconnect =   __udp_disconnect,
index 722de9d..a06a9f8 100644 (file)
@@ -1173,8 +1173,6 @@ static void raw6_destroy(struct sock *sk)
        lock_sock(sk);
        ip6_flush_pending_frames(sk);
        release_sock(sk);
-
-       inet6_destroy_sock(sk);
 }
 
 static int rawv6_init_sk(struct sock *sk)
index 2a3f929..f676be1 100644 (file)
@@ -1966,12 +1966,6 @@ static int tcp_v6_init_sock(struct sock *sk)
        return 0;
 }
 
-static void tcp_v6_destroy_sock(struct sock *sk)
-{
-       tcp_v4_destroy_sock(sk);
-       inet6_destroy_sock(sk);
-}
-
 #ifdef CONFIG_PROC_FS
 /* Proc filesystem TCPv6 sock list dumping. */
 static void get_openreq6(struct seq_file *seq,
@@ -2164,7 +2158,7 @@ struct proto tcpv6_prot = {
        .accept                 = inet_csk_accept,
        .ioctl                  = tcp_ioctl,
        .init                   = tcp_v6_init_sock,
-       .destroy                = tcp_v6_destroy_sock,
+       .destroy                = tcp_v4_destroy_sock,
        .shutdown               = tcp_shutdown,
        .setsockopt             = tcp_setsockopt,
        .getsockopt             = tcp_getsockopt,
index 8d09f0e..297f7cc 100644 (file)
@@ -64,7 +64,7 @@ static void udpv6_destruct_sock(struct sock *sk)
 
 int udpv6_init_sock(struct sock *sk)
 {
-       skb_queue_head_init(&udp_sk(sk)->reader_queue);
+       udp_lib_init_sock(sk);
        sk->sk_destruct = udpv6_destruct_sock;
        return 0;
 }
@@ -195,7 +195,7 @@ static struct sock *udp6_lib_lookup2(struct net *net,
                        result = lookup_reuseport(net, sk, skb,
                                                  saddr, sport, daddr, hnum);
                        /* Fall back to scoring if group has connections */
-                       if (result && !reuseport_has_conns(sk, false))
+                       if (result && !reuseport_has_conns(sk))
                                return result;
 
                        result = result ? : sk;
@@ -1661,8 +1661,6 @@ void udpv6_destroy_sock(struct sock *sk)
                        udp_encap_disable();
                }
        }
-
-       inet6_destroy_sock(sk);
 }
 
 /*
@@ -1671,7 +1669,7 @@ void udpv6_destroy_sock(struct sock *sk)
 int udpv6_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
                     unsigned int optlen)
 {
-       if (level == SOL_UDP  ||  level == SOL_UDPLITE)
+       if (level == SOL_UDP  ||  level == SOL_UDPLITE || level == SOL_SOCKET)
                return udp_lib_setsockopt(sk, level, optname,
                                          optval, optlen,
                                          udp_v6_push_pending_frames);
index 2772546..63e32f1 100644 (file)
@@ -162,7 +162,8 @@ static void kcm_rcv_ready(struct kcm_sock *kcm)
        /* Buffer limit is okay now, add to ready list */
        list_add_tail(&kcm->wait_rx_list,
                      &kcm->mux->kcm_rx_waiters);
-       kcm->rx_wait = true;
+       /* paired with lockless reads in kcm_rfree() */
+       WRITE_ONCE(kcm->rx_wait, true);
 }
 
 static void kcm_rfree(struct sk_buff *skb)
@@ -178,7 +179,7 @@ static void kcm_rfree(struct sk_buff *skb)
        /* For reading rx_wait and rx_psock without holding lock */
        smp_mb__after_atomic();
 
-       if (!kcm->rx_wait && !kcm->rx_psock &&
+       if (!READ_ONCE(kcm->rx_wait) && !READ_ONCE(kcm->rx_psock) &&
            sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) {
                spin_lock_bh(&mux->rx_lock);
                kcm_rcv_ready(kcm);
@@ -237,7 +238,8 @@ try_again:
                if (kcm_queue_rcv_skb(&kcm->sk, skb)) {
                        /* Should mean socket buffer full */
                        list_del(&kcm->wait_rx_list);
-                       kcm->rx_wait = false;
+                       /* paired with lockless reads in kcm_rfree() */
+                       WRITE_ONCE(kcm->rx_wait, false);
 
                        /* Commit rx_wait to read in kcm_free */
                        smp_wmb();
@@ -280,10 +282,12 @@ static struct kcm_sock *reserve_rx_kcm(struct kcm_psock *psock,
        kcm = list_first_entry(&mux->kcm_rx_waiters,
                               struct kcm_sock, wait_rx_list);
        list_del(&kcm->wait_rx_list);
-       kcm->rx_wait = false;
+       /* paired with lockless reads in kcm_rfree() */
+       WRITE_ONCE(kcm->rx_wait, false);
 
        psock->rx_kcm = kcm;
-       kcm->rx_psock = psock;
+       /* paired with lockless reads in kcm_rfree() */
+       WRITE_ONCE(kcm->rx_psock, psock);
 
        spin_unlock_bh(&mux->rx_lock);
 
@@ -310,7 +314,8 @@ static void unreserve_rx_kcm(struct kcm_psock *psock,
        spin_lock_bh(&mux->rx_lock);
 
        psock->rx_kcm = NULL;
-       kcm->rx_psock = NULL;
+       /* paired with lockless reads in kcm_rfree() */
+       WRITE_ONCE(kcm->rx_psock, NULL);
 
        /* Commit kcm->rx_psock before sk_rmem_alloc_get to sync with
         * kcm_rfree
@@ -1240,7 +1245,8 @@ static void kcm_recv_disable(struct kcm_sock *kcm)
        if (!kcm->rx_psock) {
                if (kcm->rx_wait) {
                        list_del(&kcm->wait_rx_list);
-                       kcm->rx_wait = false;
+                       /* paired with lockless reads in kcm_rfree() */
+                       WRITE_ONCE(kcm->rx_wait, false);
                }
 
                requeue_rx_msgs(mux, &kcm->sk.sk_receive_queue);
@@ -1793,7 +1799,8 @@ static void kcm_done(struct kcm_sock *kcm)
 
        if (kcm->rx_wait) {
                list_del(&kcm->wait_rx_list);
-               kcm->rx_wait = false;
+               /* paired with lockless reads in kcm_rfree() */
+               WRITE_ONCE(kcm->rx_wait, false);
        }
        /* Move any pending receive messages to other kcm sockets */
        requeue_rx_msgs(mux, &sk->sk_receive_queue);
index 9dbd801..2478aa6 100644 (file)
@@ -257,8 +257,6 @@ static void l2tp_ip6_destroy_sock(struct sock *sk)
 
        if (tunnel)
                l2tp_tunnel_delete(tunnel);
-
-       inet6_destroy_sock(sk);
 }
 
 static int l2tp_ip6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
index 7f3f5f5..3d91b98 100644 (file)
@@ -2036,7 +2036,7 @@ static void __init init_sample_table(void)
 
        memset(sample_table, 0xff, sizeof(sample_table));
        for (col = 0; col < SAMPLE_COLUMNS; col++) {
-               prandom_bytes(rnd, sizeof(rnd));
+               get_random_bytes(rnd, sizeof(rnd));
                for (i = 0; i < MCS_GROUP_RATES; i++) {
                        new_idx = (i + rnd[i]) % MCS_GROUP_RATES;
                        while (sample_table[col][new_idx] != 0xff)
index 0e8c4f4..dc3cdee 100644 (file)
@@ -641,7 +641,7 @@ static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
                if (flags & IEEE80211_PROBE_FLAG_RANDOM_SN) {
                        struct ieee80211_hdr *hdr = (void *)skb->data;
                        struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-                       u16 sn = get_random_u32();
+                       u16 sn = get_random_u16();
 
                        info->control.flags |= IEEE80211_TX_CTRL_NO_SEQNO;
                        hdr->seq_ctrl =
index f599ad4..e60d144 100644 (file)
@@ -2708,6 +2708,8 @@ static int mptcp_init_sock(struct sock *sk)
        if (ret)
                return ret;
 
+       set_bit(SOCK_CUSTOM_SOCKOPT, &sk->sk_socket->flags);
+
        /* fetch the ca name; do it outside __mptcp_init_sock(), so that clone will
         * propagate the correct value
         */
@@ -3684,6 +3686,8 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
                struct mptcp_subflow_context *subflow;
                struct sock *newsk = newsock->sk;
 
+               set_bit(SOCK_CUSTOM_SOCKOPT, &newsock->flags);
+
                lock_sock(newsk);
 
                /* PM/worker can now acquire the first subflow socket
@@ -3898,12 +3902,6 @@ static const struct proto_ops mptcp_v6_stream_ops = {
 
 static struct proto mptcp_v6_prot;
 
-static void mptcp_v6_destroy(struct sock *sk)
-{
-       mptcp_destroy(sk);
-       inet6_destroy_sock(sk);
-}
-
 static struct inet_protosw mptcp_v6_protosw = {
        .type           = SOCK_STREAM,
        .protocol       = IPPROTO_MPTCP,
@@ -3919,7 +3917,6 @@ int __init mptcp_proto_v6_init(void)
        mptcp_v6_prot = mptcp_prot;
        strcpy(mptcp_v6_prot.name, "MPTCPv6");
        mptcp_v6_prot.slab = NULL;
-       mptcp_v6_prot.destroy = mptcp_v6_destroy;
        mptcp_v6_prot.obj_size = sizeof(struct mptcp6_sock);
 
        err = proto_register(&mptcp_v6_prot, 1);
index c7cb68c..f85e9bb 100644 (file)
@@ -560,6 +560,7 @@ static bool mptcp_supported_sockopt(int level, int optname)
                case TCP_TX_DELAY:
                case TCP_INQ:
                case TCP_FASTOPEN_CONNECT:
+               case TCP_FASTOPEN_NO_COOKIE:
                        return true;
                }
 
@@ -568,8 +569,8 @@ static bool mptcp_supported_sockopt(int level, int optname)
                /* TCP_REPAIR, TCP_REPAIR_QUEUE, TCP_QUEUE_SEQ, TCP_REPAIR_OPTIONS,
                 * TCP_REPAIR_WINDOW are not supported, better avoid this mess
                 */
-               /* TCP_FASTOPEN_KEY, TCP_FASTOPEN, TCP_FASTOPEN_NO_COOKIE,
-                * are not supported fastopen is currently unsupported
+               /* TCP_FASTOPEN_KEY, TCP_FASTOPEN are not supported because
+                * fastopen for the listener side is currently unsupported
                 */
        }
        return false;
@@ -757,29 +758,17 @@ static int mptcp_setsockopt_v4(struct mptcp_sock *msk, int optname,
        return -EOPNOTSUPP;
 }
 
-static int mptcp_setsockopt_sol_tcp_defer(struct mptcp_sock *msk, sockptr_t optval,
-                                         unsigned int optlen)
-{
-       struct socket *listener;
-
-       listener = __mptcp_nmpc_socket(msk);
-       if (!listener)
-               return 0; /* TCP_DEFER_ACCEPT does not fail */
-
-       return tcp_setsockopt(listener->sk, SOL_TCP, TCP_DEFER_ACCEPT, optval, optlen);
-}
-
-static int mptcp_setsockopt_sol_tcp_fastopen_connect(struct mptcp_sock *msk, sockptr_t optval,
-                                                    unsigned int optlen)
+static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int optname,
+                                         sockptr_t optval, unsigned int optlen)
 {
        struct socket *sock;
 
-       /* Limit to first subflow */
+       /* Limit to first subflow, before the connection establishment */
        sock = __mptcp_nmpc_socket(msk);
        if (!sock)
                return -EINVAL;
 
-       return tcp_setsockopt(sock->sk, SOL_TCP, TCP_FASTOPEN_CONNECT, optval, optlen);
+       return tcp_setsockopt(sock->sk, level, optname, optval, optlen);
 }
 
 static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
@@ -809,9 +798,13 @@ static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
        case TCP_NODELAY:
                return mptcp_setsockopt_sol_tcp_nodelay(msk, optval, optlen);
        case TCP_DEFER_ACCEPT:
-               return mptcp_setsockopt_sol_tcp_defer(msk, optval, optlen);
+               /* See tcp.c: TCP_DEFER_ACCEPT does not fail */
+               mptcp_setsockopt_first_sf_only(msk, SOL_TCP, optname, optval, optlen);
+               return 0;
        case TCP_FASTOPEN_CONNECT:
-               return mptcp_setsockopt_sol_tcp_fastopen_connect(msk, optval, optlen);
+       case TCP_FASTOPEN_NO_COOKIE:
+               return mptcp_setsockopt_first_sf_only(msk, SOL_TCP, optname,
+                                                     optval, optlen);
        }
 
        return -EOPNOTSUPP;
@@ -1174,6 +1167,7 @@ static int mptcp_getsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
        case TCP_CC_INFO:
        case TCP_DEFER_ACCEPT:
        case TCP_FASTOPEN_CONNECT:
+       case TCP_FASTOPEN_NO_COOKIE:
                return mptcp_getsockopt_first_sf_only(msk, SOL_TCP, optname,
                                                      optval, optlen);
        case TCP_INQ:
index fb67f1c..8c04bb5 100644 (file)
@@ -1308,7 +1308,7 @@ void ip_vs_random_dropentry(struct netns_ipvs *ipvs)
         * Randomly scan 1/32 of the whole table every second
         */
        for (idx = 0; idx < (ip_vs_conn_tab_size>>5); idx++) {
-               unsigned int hash = prandom_u32() & ip_vs_conn_tab_mask;
+               unsigned int hash = get_random_u32() & ip_vs_conn_tab_mask;
 
                hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[hash], c_list) {
                        if (cp->ipvs != ipvs)
index acb55d8..f2579fc 100644 (file)
@@ -71,8 +71,8 @@ static struct ip_vs_dest *ip_vs_twos_schedule(struct ip_vs_service *svc,
         * from 0 to total_weight
         */
        total_weight += 1;
-       rweight1 = prandom_u32() % total_weight;
-       rweight2 = prandom_u32() % total_weight;
+       rweight1 = prandom_u32_max(total_weight);
+       rweight2 = prandom_u32_max(total_weight);
 
        /* Pick two weighted servers */
        list_for_each_entry_rcu(dest, &svc->destinations, n_list) {
index d8e6380..18319a6 100644 (file)
@@ -468,7 +468,7 @@ find_free_id:
        if (range->flags & NF_NAT_RANGE_PROTO_OFFSET)
                off = (ntohs(*keyptr) - ntohs(range->base_proto.all));
        else
-               off = prandom_u32();
+               off = get_random_u16();
 
        attempts = range_size;
        if (attempts > max_attempts)
@@ -490,7 +490,7 @@ another_round:
        if (attempts >= range_size || attempts < 16)
                return;
        attempts /= 2;
-       off = prandom_u32();
+       off = get_random_u16();
        goto another_round;
 }
 
index a0653a8..58d9cbc 100644 (file)
@@ -5865,8 +5865,9 @@ static bool nft_setelem_valid_key_end(const struct nft_set *set,
                          (NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
                if (flags & NFT_SET_ELEM_INTERVAL_END)
                        return false;
-               if (!nla[NFTA_SET_ELEM_KEY_END] &&
-                   !(flags & NFT_SET_ELEM_CATCHALL))
+
+               if (nla[NFTA_SET_ELEM_KEY_END] &&
+                   flags & NFT_SET_ELEM_CATCHALL)
                        return false;
        } else {
                if (nla[NFTA_SET_ELEM_KEY_END])
index 203e24a..b26c1dc 100644 (file)
@@ -34,7 +34,7 @@ statistic_mt(const struct sk_buff *skb, struct xt_action_param *par)
 
        switch (info->mode) {
        case XT_STATISTIC_MODE_RANDOM:
-               if ((prandom_u32() & 0x7FFFFFFF) < info->u.random.probability)
+               if ((get_random_u32() & 0x7FFFFFFF) < info->u.random.probability)
                        ret = !ret;
                break;
        case XT_STATISTIC_MODE_NTH:
index a662e8a..f0c94d3 100644 (file)
@@ -812,6 +812,17 @@ static int netlink_release(struct socket *sock)
        }
 
        sock_prot_inuse_add(sock_net(sk), &netlink_proto, -1);
+
+       /* Because struct net might disappear soon, do not keep a pointer. */
+       if (!sk->sk_net_refcnt && sock_net(sk) != &init_net) {
+               __netns_tracker_free(sock_net(sk), &sk->ns_tracker, false);
+               /* Because of deferred_put_nlk_sk and use of work queue,
+                * it is possible  netns will be freed before this socket.
+                */
+               sock_net_set(sk, &init_net);
+               __netns_tracker_alloc(&init_net, &sk->ns_tracker,
+                                     false, GFP_KERNEL);
+       }
        call_rcu(&nlk->rcu, deferred_put_nlk_sk);
        return 0;
 }
index 868db46..ca3ebfd 100644 (file)
@@ -1033,7 +1033,7 @@ static int sample(struct datapath *dp, struct sk_buff *skb,
        actions = nla_next(sample_arg, &rem);
 
        if ((arg->probability != U32_MAX) &&
-           (!arg->probability || prandom_u32() > arg->probability)) {
+           (!arg->probability || get_random_u32() > arg->probability)) {
                if (last)
                        consume_skb(skb);
                return 0;
index 4a07ab0..ead5418 100644 (file)
@@ -2309,7 +2309,7 @@ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
 
        WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE);
 
-       sfa = kmalloc(sizeof(*sfa) + size, GFP_KERNEL);
+       sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL);
        if (!sfa)
                return ERR_PTR(-ENOMEM);
 
index d3f6db3..8c5b3da 100644 (file)
@@ -1350,7 +1350,7 @@ static bool fanout_flow_is_huge(struct packet_sock *po, struct sk_buff *skb)
                if (READ_ONCE(history[i]) == rxhash)
                        count++;
 
-       victim = prandom_u32() % ROLLOVER_HLEN;
+       victim = prandom_u32_max(ROLLOVER_HLEN);
 
        /* Avoid dirtying the cache line if possible */
        if (READ_ONCE(history[victim]) != rxhash)
@@ -3277,7 +3277,7 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
                            int addr_len)
 {
        struct sock *sk = sock->sk;
-       char name[sizeof(uaddr->sa_data) + 1];
+       char name[sizeof(uaddr->sa_data_min) + 1];
 
        /*
         *      Check legality
@@ -3288,8 +3288,8 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
        /* uaddr->sa_data comes from the userspace, it's not guaranteed to be
         * zero-terminated.
         */
-       memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data));
-       name[sizeof(uaddr->sa_data)] = 0;
+       memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min));
+       name[sizeof(uaddr->sa_data_min)] = 0;
 
        return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
 }
@@ -3561,11 +3561,11 @@ static int packet_getname_spkt(struct socket *sock, struct sockaddr *uaddr,
                return -EOPNOTSUPP;
 
        uaddr->sa_family = AF_PACKET;
-       memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data));
+       memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data_min));
        rcu_read_lock();
        dev = dev_get_by_index_rcu(sock_net(sk), READ_ONCE(pkt_sk(sk)->ifindex));
        if (dev)
-               strscpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data));
+               strscpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data_min));
        rcu_read_unlock();
 
        return sizeof(*uaddr);
index 5b5fb4c..97a2917 100644 (file)
@@ -104,7 +104,7 @@ static int rds_add_bound(struct rds_sock *rs, const struct in6_addr *addr,
                        return -EINVAL;
                last = rover;
        } else {
-               rover = max_t(u16, prandom_u32(), 2);
+               rover = max_t(u16, get_random_u16(), 2);
                last = rover - 1;
        }
 
index 4444fd8..c5b8606 100644 (file)
@@ -503,6 +503,9 @@ bool rds_tcp_tune(struct socket *sock)
                        release_sock(sk);
                        return false;
                }
+               /* Update ns_tracker to current stack trace and refcounted tracker */
+               __netns_tracker_free(net, &sk->ns_tracker, false);
+
                sk->sk_net_refcnt = 1;
                netns_tracker_alloc(net, &sk->ns_tracker, GFP_KERNEL);
                sock_inuse_add(net, 1);
index abe1bcc..62d682b 100644 (file)
@@ -25,7 +25,7 @@ static struct tc_action_ops act_gact_ops;
 static int gact_net_rand(struct tcf_gact *gact)
 {
        smp_rmb(); /* coupled with smp_wmb() in tcf_gact_init() */
-       if (prandom_u32() % gact->tcfg_pval)
+       if (prandom_u32_max(gact->tcfg_pval))
                return gact->tcf_action;
        return gact->tcfg_paction;
 }
index 5ba36f7..7a25477 100644 (file)
@@ -168,7 +168,7 @@ static int tcf_sample_act(struct sk_buff *skb, const struct tc_action *a,
        psample_group = rcu_dereference_bh(s->psample_group);
 
        /* randomly sample packets according to rate */
-       if (psample_group && (prandom_u32() % s->rate == 0)) {
+       if (psample_group && (prandom_u32_max(s->rate) == 0)) {
                if (!skb_at_tc_ingress(skb)) {
                        md.in_ifindex = skb->skb_iif;
                        md.out_ifindex = skb->dev->ifindex;
index 7f59878..1710780 100644 (file)
@@ -148,6 +148,11 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
        }
 
        if (tb[TCA_SKBEDIT_QUEUE_MAPPING] != NULL) {
+               if (is_tcf_skbedit_ingress(act_flags) &&
+                   !(act_flags & TCA_ACT_FLAGS_SKIP_SW)) {
+                       NL_SET_ERR_MSG_MOD(extack, "\"queue_mapping\" option on receive side is hardware only, use skip_sw");
+                       return -EOPNOTSUPP;
+               }
                flags |= SKBEDIT_F_QUEUE_MAPPING;
                queue_mapping = nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING]);
        }
@@ -374,9 +379,12 @@ static int tcf_skbedit_offload_act_setup(struct tc_action *act, void *entry_data
                } else if (is_tcf_skbedit_priority(act)) {
                        entry->id = FLOW_ACTION_PRIORITY;
                        entry->priority = tcf_skbedit_priority(act);
-               } else if (is_tcf_skbedit_queue_mapping(act)) {
-                       NL_SET_ERR_MSG_MOD(extack, "Offload not supported when \"queue_mapping\" option is used");
+               } else if (is_tcf_skbedit_tx_queue_mapping(act)) {
+                       NL_SET_ERR_MSG_MOD(extack, "Offload not supported when \"queue_mapping\" option is used on transmit side");
                        return -EOPNOTSUPP;
+               } else if (is_tcf_skbedit_rx_queue_mapping(act)) {
+                       entry->id = FLOW_ACTION_RX_QUEUE_MAPPING;
+                       entry->rx_queue = tcf_skbedit_rx_queue_mapping(act);
                } else if (is_tcf_skbedit_inheritdsfield(act)) {
                        NL_SET_ERR_MSG_MOD(extack, "Offload not supported when \"inheritdsfield\" option is used");
                        return -EOPNOTSUPP;
@@ -394,6 +402,8 @@ static int tcf_skbedit_offload_act_setup(struct tc_action *act, void *entry_data
                        fl_action->id = FLOW_ACTION_PTYPE;
                else if (is_tcf_skbedit_priority(act))
                        fl_action->id = FLOW_ACTION_PRIORITY;
+               else if (is_tcf_skbedit_rx_queue_mapping(act))
+                       fl_action->id = FLOW_ACTION_RX_QUEUE_MAPPING;
                else
                        return -EOPNOTSUPP;
        }
index 50566db..23d1cfa 100644 (file)
@@ -1953,6 +1953,11 @@ static void tfilter_put(struct tcf_proto *tp, void *fh)
                tp->ops->put(tp, fh);
 }
 
+static bool is_qdisc_ingress(__u32 classid)
+{
+       return (TC_H_MIN(classid) == TC_H_MIN(TC_H_MIN_INGRESS));
+}
+
 static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
                          struct netlink_ext_ack *extack)
 {
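The `is_qdisc_ingress()` helper above keys off the minor half of the 32-bit classid: a tc handle packs a 16-bit major and a 16-bit minor, and ingress/clsact parents use a reserved minor. A sketch of that bit layout using the `TC_H_*` constant values from `include/uapi/linux/pkt_sched.h` (the Python helpers are illustrative, not kernel code):

```python
TC_H_INGRESS = 0xFFFFFFF1      # also TC_H_CLSACT
TC_H_MIN_INGRESS = 0xFFF2
TC_H_MIN_MASK = 0x0000FFFF

def tc_h_min(h):
    # minor number: low 16 bits of the handle
    return h & TC_H_MIN_MASK

def tc_h_make(maj, mino):
    # combine a major (high 16 bits) with a minor (low 16 bits)
    return (maj & 0xFFFF0000) | (mino & TC_H_MIN_MASK)

def is_qdisc_ingress(classid):
    return tc_h_min(classid) == tc_h_min(TC_H_MIN_INGRESS)

ingress_parent = tc_h_make(TC_H_INGRESS, TC_H_MIN_INGRESS)
```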
@@ -2144,6 +2149,8 @@ replay:
                flags |= TCA_ACT_FLAGS_REPLACE;
        if (!rtnl_held)
                flags |= TCA_ACT_FLAGS_NO_RTNL;
+       if (is_qdisc_ingress(parent))
+               flags |= TCA_ACT_FLAGS_AT_INGRESS;
        err = tp->ops->change(net, skb, tp, cl, t->tcm_handle, tca, &fh,
                              flags, extack);
        if (err == 0) {
index c98af0a..4a27dfb 100644 (file)
@@ -1099,12 +1099,13 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
 
 skip:
                if (!ingress) {
-                       notify_and_destroy(net, skb, n, classid,
-                                          rtnl_dereference(dev->qdisc), new);
+                       old = rtnl_dereference(dev->qdisc);
                        if (new && !new->ops->attach)
                                qdisc_refcount_inc(new);
                        rcu_assign_pointer(dev->qdisc, new ? : &noop_qdisc);
 
+                       notify_and_destroy(net, skb, n, classid, old, new);
+
                        if (new && new->ops->attach)
                                new->ops->attach(new);
                } else {
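The qdisc_graft() hunk above reorders the egress path so the new qdisc is published (`rcu_assign_pointer()`) before `notify_and_destroy()` runs on the old one, closing the window where a reader could observe a destroyed qdisc. A toy model of the publish-then-destroy ordering (names are illustrative):

```python
class Qdisc:
    def __init__(self, name):
        self.name = name
        self.destroyed = False

active = Qdisc("old")

def graft(new):
    """Save the old pointer, publish the new one, only then destroy."""
    global active
    old = active
    active = new                 # rcu_assign_pointer(dev->qdisc, new)
    old.destroyed = True         # notify_and_destroy() runs after publish
    return old

replaced = graft(Qdisc("new"))
```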
index 55c6879..3ed0c33 100644 (file)
@@ -573,7 +573,7 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,
 
        /* Simple BLUE implementation.  Lack of ECN is deliberate. */
        if (vars->p_drop)
-               drop |= (prandom_u32() < vars->p_drop);
+               drop |= (get_random_u32() < vars->p_drop);
 
        /* Overload the drop_next field as an activity timeout */
        if (!vars->count)
@@ -2092,11 +2092,11 @@ retry:
 
                WARN_ON(host_load > CAKE_QUEUES);
 
-               /* The shifted prandom_u32() is a way to apply dithering to
-                * avoid accumulating roundoff errors
+               /* The get_random_u16() is a way to apply dithering to avoid
+                * accumulating roundoff errors
                 */
                flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-                                 (prandom_u32() >> 16)) >> 16;
+                                 get_random_u16()) >> 16;
                list_move_tail(&flow->flowchain, &b->old_flows);
 
                goto retry;
@@ -2224,8 +2224,12 @@ retry:
 
 static void cake_reset(struct Qdisc *sch)
 {
+       struct cake_sched_data *q = qdisc_priv(sch);
        u32 c;
 
+       if (!q->tins)
+               return;
+
        for (c = 0; c < CAKE_MAX_TINS; c++)
                cake_clear_tin(sch, c);
 }
index 99d318b..8c4fee0 100644 (file)
@@ -478,24 +478,26 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt,
        if (opt) {
                err = fq_codel_change(sch, opt, extack);
                if (err)
-                       return err;
+                       goto init_failure;
        }
 
        err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
        if (err)
-               return err;
+               goto init_failure;
 
        if (!q->flows) {
                q->flows = kvcalloc(q->flows_cnt,
                                    sizeof(struct fq_codel_flow),
                                    GFP_KERNEL);
-               if (!q->flows)
-                       return -ENOMEM;
-
+               if (!q->flows) {
+                       err = -ENOMEM;
+                       goto init_failure;
+               }
                q->backlogs = kvcalloc(q->flows_cnt, sizeof(u32), GFP_KERNEL);
-               if (!q->backlogs)
-                       return -ENOMEM;
-
+               if (!q->backlogs) {
+                       err = -ENOMEM;
+                       goto alloc_failure;
+               }
                for (i = 0; i < q->flows_cnt; i++) {
                        struct fq_codel_flow *flow = q->flows + i;
 
@@ -508,6 +510,13 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt,
        else
                sch->flags &= ~TCQ_F_CAN_BYPASS;
        return 0;
+
+alloc_failure:
+       kvfree(q->flows);
+       q->flows = NULL;
+init_failure:
+       q->flows_cnt = 0;
+       return err;
 }
 
 static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
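The fq_codel_init() hunk is the classic kernel goto-unwind pattern: a failed `q->backlogs` allocation frees `q->flows` and zeroes `q->flows_cnt`, so later reset/destroy paths see a consistent state. The same unwind order sketched in Python, with `MemoryError` standing in for a failed `kvcalloc()` (helper names are hypothetical):

```python
def fq_codel_init(flows_cnt, alloc):
    """Model of the labeled error unwind; alloc(n) may raise MemoryError."""
    flows = backlogs = None
    try:
        flows = alloc(flows_cnt)
        try:
            backlogs = alloc(flows_cnt)
        except MemoryError:
            flows = None                 # alloc_failure: kvfree + NULL
            raise
    except MemoryError:
        # init_failure: q->flows_cnt = 0; return err (-ENOMEM == -12)
        return {"flows": None, "backlogs": None, "flows_cnt": 0, "err": -12}
    return {"flows": flows, "backlogs": backlogs,
            "flows_cnt": flows_cnt, "err": 0}
```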
index 18f4273..fb00ac4 100644 (file)
@@ -171,7 +171,7 @@ static inline struct netem_skb_cb *netem_skb_cb(struct sk_buff *skb)
 static void init_crandom(struct crndstate *state, unsigned long rho)
 {
        state->rho = rho;
-       state->last = prandom_u32();
+       state->last = get_random_u32();
 }
 
 /* get_crandom - correlated random number generator
@@ -184,9 +184,9 @@ static u32 get_crandom(struct crndstate *state)
        unsigned long answer;
 
        if (!state || state->rho == 0)  /* no correlation */
-               return prandom_u32();
+               return get_random_u32();
 
-       value = prandom_u32();
+       value = get_random_u32();
        rho = (u64)state->rho + 1;
        answer = (value * ((1ull<<32) - rho) + state->last * rho) >> 32;
        state->last = answer;
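get_crandom() above blends a fresh 32-bit random value with the previous output in 32.32 fixed point: `answer = (value * (2^32 - rho) + last * rho) >> 32`. One step of that recurrence, modeled exactly (function name is illustrative):

```python
def crandom_step(value, last, rho_state):
    """One step of the correlated generator: rho_state == 0 means no
    correlation (output tracks value), 0xFFFFFFFF means full
    correlation (output tracks last)."""
    rho = rho_state + 1                      # (u64)state->rho + 1
    return (value * ((1 << 32) - rho) + last * rho) >> 32
```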
@@ -200,7 +200,7 @@ static u32 get_crandom(struct crndstate *state)
 static bool loss_4state(struct netem_sched_data *q)
 {
        struct clgstate *clg = &q->clg;
-       u32 rnd = prandom_u32();
+       u32 rnd = get_random_u32();
 
        /*
         * Makes a comparison between rnd and the transition
@@ -268,15 +268,15 @@ static bool loss_gilb_ell(struct netem_sched_data *q)
 
        switch (clg->state) {
        case GOOD_STATE:
-               if (prandom_u32() < clg->a1)
+               if (get_random_u32() < clg->a1)
                        clg->state = BAD_STATE;
-               if (prandom_u32() < clg->a4)
+               if (get_random_u32() < clg->a4)
                        return true;
                break;
        case BAD_STATE:
-               if (prandom_u32() < clg->a2)
+               if (get_random_u32() < clg->a2)
                        clg->state = GOOD_STATE;
-               if (prandom_u32() > clg->a3)
+               if (get_random_u32() > clg->a3)
                        return true;
        }
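loss_gilb_ell() implements the two-state Gilbert-Elliott channel: the `a1..a4` parameters are probabilities scaled to the full u32 range, so `get_random_u32() < p` fires with probability `p / 2^32`. One transition step, with the two random draws passed in explicitly so it can be tested deterministically (a sketch, not the kernel function):

```python
GOOD, BAD = 0, 1

def gilb_ell_step(state, rnd1, rnd2, a1, a2, a3, a4):
    """Returns (new_state, lost). rnd1 drives the state transition,
    rnd2 the loss decision, mirroring the two get_random_u32() calls."""
    lost = False
    if state == GOOD:
        if rnd1 < a1:
            state = BAD
        if rnd2 < a4:          # loss probability while GOOD
            lost = True
    else:
        if rnd1 < a2:
            state = GOOD
        if rnd2 > a3:          # loss with probability 1 - a3/2^32 while BAD
            lost = True
    return state, lost
```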
 
@@ -513,8 +513,8 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                        goto finish_segs;
                }
 
-               skb->data[prandom_u32() % skb_headlen(skb)] ^=
-                       1<<(prandom_u32() % 8);
+               skb->data[prandom_u32_max(skb_headlen(skb))] ^=
+                       1<<prandom_u32_max(8);
        }
 
        if (unlikely(sch->q.qlen >= sch->limit)) {
@@ -632,7 +632,7 @@ static void get_slot_next(struct netem_sched_data *q, u64 now)
 
        if (!q->slot_dist)
                next_delay = q->slot_config.min_delay +
-                               (prandom_u32() *
+                               (get_random_u32() *
                                 (q->slot_config.max_delay -
                                  q->slot_config.min_delay) >> 32);
        else
index 974038b..265c238 100644 (file)
@@ -72,7 +72,7 @@ bool pie_drop_early(struct Qdisc *sch, struct pie_params *params,
        if (vars->accu_prob >= (MAX_PROB / 2) * 17)
                return true;
 
-       prandom_bytes(&rnd, 8);
+       get_random_bytes(&rnd, 8);
        if ((rnd >> BITS_PER_BYTE) < local_prob) {
                vars->accu_prob = 0;
                return true;
index e2389fa..1871a1c 100644 (file)
@@ -379,7 +379,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                goto enqueue;
        }
 
-       r = prandom_u32() & SFB_MAX_PROB;
+       r = get_random_u16() & SFB_MAX_PROB;
 
        if (unlikely(r < p_min)) {
                if (unlikely(p_min > SFB_MAX_PROB / 2)) {
@@ -455,7 +455,8 @@ static void sfb_reset(struct Qdisc *sch)
 {
        struct sfb_sched_data *q = qdisc_priv(sch);
 
-       qdisc_reset(q->qdisc);
+       if (likely(q->qdisc))
+               qdisc_reset(q->qdisc);
        q->slot = 0;
        q->double_buffering = false;
        sfb_zero_all_buckets(q);
index 3460abc..63ba555 100644 (file)
@@ -226,8 +226,7 @@ static struct sctp_association *sctp_association_init(
        /* Create an output queue.  */
        sctp_outq_init(asoc, &asoc->outqueue);
 
-       if (!sctp_ulpq_init(&asoc->ulpq, asoc))
-               goto fail_init;
+       sctp_ulpq_init(&asoc->ulpq, asoc);
 
        if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 0, gfp))
                goto stream_free;
@@ -277,7 +276,6 @@ static struct sctp_association *sctp_association_init(
 
 stream_free:
        sctp_stream_free(&asoc->stream);
-fail_init:
        sock_put(asoc->base.sk);
        sctp_endpoint_put(asoc->ep);
        return NULL;
index 171f1a3..3e83963 100644 (file)
@@ -5098,13 +5098,17 @@ static void sctp_destroy_sock(struct sock *sk)
 }
 
 /* Triggered when there are no references on the socket anymore */
-static void sctp_destruct_sock(struct sock *sk)
+static void sctp_destruct_common(struct sock *sk)
 {
        struct sctp_sock *sp = sctp_sk(sk);
 
        /* Free up the HMAC transform. */
        crypto_free_shash(sp->hmac);
+}
 
+static void sctp_destruct_sock(struct sock *sk)
+{
+       sctp_destruct_common(sk);
        inet_sock_destruct(sk);
 }
 
@@ -8319,7 +8323,7 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
 
                inet_get_local_port_range(net, &low, &high);
                remaining = (high - low) + 1;
-               rover = prandom_u32() % remaining + low;
+               rover = prandom_u32_max(remaining) + low;
 
                do {
                        rover++;
@@ -9427,7 +9431,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
        sctp_sk(newsk)->reuse = sp->reuse;
 
        newsk->sk_shutdown = sk->sk_shutdown;
-       newsk->sk_destruct = sctp_destruct_sock;
+       newsk->sk_destruct = sk->sk_destruct;
        newsk->sk_family = sk->sk_family;
        newsk->sk_protocol = IPPROTO_SCTP;
        newsk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;
@@ -9448,7 +9452,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk,
        newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
        newinet->inet_dport = htons(asoc->peer.port);
        newinet->pmtudisc = inet->pmtudisc;
-       newinet->inet_id = prandom_u32();
+       newinet->inet_id = get_random_u16();
 
        newinet->uc_ttl = inet->uc_ttl;
        newinet->mc_loop = 1;
@@ -9662,11 +9666,20 @@ struct proto sctp_prot = {
 
 #if IS_ENABLED(CONFIG_IPV6)
 
-#include <net/transp_v6.h>
-static void sctp_v6_destroy_sock(struct sock *sk)
+static void sctp_v6_destruct_sock(struct sock *sk)
+{
+       sctp_destruct_common(sk);
+       inet6_sock_destruct(sk);
+}
+
+static int sctp_v6_init_sock(struct sock *sk)
 {
-       sctp_destroy_sock(sk);
-       inet6_destroy_sock(sk);
+       int ret = sctp_init_sock(sk);
+
+       if (!ret)
+               sk->sk_destruct = sctp_v6_destruct_sock;
+
+       return ret;
 }
 
 struct proto sctpv6_prot = {
@@ -9676,8 +9689,8 @@ struct proto sctpv6_prot = {
        .disconnect     = sctp_disconnect,
        .accept         = sctp_accept,
        .ioctl          = sctp_ioctl,
-       .init           = sctp_init_sock,
-       .destroy        = sctp_v6_destroy_sock,
+       .init           = sctp_v6_init_sock,
+       .destroy        = sctp_destroy_sock,
        .shutdown       = sctp_shutdown,
        .setsockopt     = sctp_setsockopt,
        .getsockopt     = sctp_getsockopt,
index bb22b71..94727fe 100644 (file)
@@ -490,11 +490,8 @@ static int sctp_enqueue_event(struct sctp_ulpq *ulpq,
        if (!sctp_ulpevent_is_enabled(event, ulpq->asoc->subscribe))
                goto out_free;
 
-       if (skb_list)
-               skb_queue_splice_tail_init(skb_list,
-                                          &sk->sk_receive_queue);
-       else
-               __skb_queue_tail(&sk->sk_receive_queue, skb);
+       skb_queue_splice_tail_init(skb_list,
+                                  &sk->sk_receive_queue);
 
        if (!sp->data_ready_signalled) {
                sp->data_ready_signalled = 1;
@@ -504,10 +501,7 @@ static int sctp_enqueue_event(struct sctp_ulpq *ulpq,
        return 1;
 
 out_free:
-       if (skb_list)
-               sctp_queue_purge_ulpevents(skb_list);
-       else
-               sctp_ulpevent_free(event);
+       sctp_queue_purge_ulpevents(skb_list);
 
        return 0;
 }
index 0a8510a..b05daaf 100644 (file)
@@ -38,8 +38,7 @@ static void sctp_ulpq_reasm_drain(struct sctp_ulpq *ulpq);
 /* 1st Level Abstractions */
 
 /* Initialize a ULP queue from a block of memory.  */
-struct sctp_ulpq *sctp_ulpq_init(struct sctp_ulpq *ulpq,
-                                struct sctp_association *asoc)
+void sctp_ulpq_init(struct sctp_ulpq *ulpq, struct sctp_association *asoc)
 {
        memset(ulpq, 0, sizeof(struct sctp_ulpq));
 
@@ -48,8 +47,6 @@ struct sctp_ulpq *sctp_ulpq_init(struct sctp_ulpq *ulpq,
        skb_queue_head_init(&ulpq->reasm_uo);
        skb_queue_head_init(&ulpq->lobby);
        ulpq->pd_mode  = 0;
-
-       return ulpq;
 }
 
 
@@ -259,10 +256,7 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sk_buff_head *skb_list)
        return 1;
 
 out_free:
-       if (skb_list)
-               sctp_queue_purge_ulpevents(skb_list);
-       else
-               sctp_ulpevent_free(event);
+       sctp_queue_purge_ulpevents(skb_list);
 
        return 0;
 }
index e6ee797..c305d8d 100644 (file)
@@ -896,7 +896,8 @@ static int smc_lgr_create(struct smc_sock *smc, struct smc_init_info *ini)
                }
                memcpy(lgr->pnet_id, ibdev->pnetid[ibport - 1],
                       SMC_MAX_PNETID_LEN);
-               if (smc_wr_alloc_lgr_mem(lgr))
+               rc = smc_wr_alloc_lgr_mem(lgr);
+               if (rc)
                        goto free_wq;
                smc_llc_lgr_init(lgr, smc);
 
index 00da9ce..55c5d53 100644 (file)
@@ -2199,13 +2199,7 @@ SYSCALL_DEFINE4(recv, int, fd, void __user *, ubuf, size_t, size,
 
 static bool sock_use_custom_sol_socket(const struct socket *sock)
 {
-       const struct sock *sk = sock->sk;
-
-       /* Use sock->ops->setsockopt() for MPTCP */
-       return IS_ENABLED(CONFIG_MPTCP) &&
-              sk->sk_protocol == IPPROTO_MPTCP &&
-              sk->sk_type == SOCK_STREAM &&
-              (sk->sk_family == AF_INET || sk->sk_family == AF_INET6);
+       return test_bit(SOCK_CUSTOM_SOCKOPT, &sock->flags);
 }
 
 /*
index 5f96e75..4833768 100644 (file)
@@ -130,8 +130,8 @@ gss_krb5_make_confounder(char *p, u32 conflen)
 
        /* initialize to random value */
        if (i == 0) {
-               i = prandom_u32();
-               i = (i << 32) | prandom_u32();
+               i = get_random_u32();
+               i = (i << 32) | get_random_u32();
        }
 
        switch (conflen) {
index c3c693b..f075a9f 100644 (file)
@@ -677,7 +677,7 @@ static void cache_limit_defers(void)
 
        /* Consider removing either the first or the last */
        if (cache_defer_cnt > DFR_MAX) {
-               if (prandom_u32() & 1)
+               if (prandom_u32_max(2))
                        discard = list_entry(cache_defer_list.next,
                                             struct cache_deferred_req, recent);
                else
index 71dc263..656cec2 100644 (file)
@@ -1865,7 +1865,7 @@ xprt_alloc_xid(struct rpc_xprt *xprt)
 static void
 xprt_init_xid(struct rpc_xprt *xprt)
 {
-       xprt->xid = prandom_u32();
+       xprt->xid = get_random_u32();
 }
 
 static void
index f34d542..915b990 100644 (file)
@@ -1619,7 +1619,7 @@ static int xs_get_random_port(void)
        if (max < min)
                return -EADDRINUSE;
        range = max - min + 1;
-       rand = (unsigned short) prandom_u32() % range;
+       rand = prandom_u32_max(range);
        return rand + min;
 }
 
index da69e1a..e863070 100644 (file)
@@ -148,8 +148,8 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
 {
        struct net *net = d->net;
        struct tipc_net *tn = tipc_net(net);
-       bool trial = time_before(jiffies, tn->addr_trial_end);
        u32 self = tipc_own_addr(net);
+       bool trial = time_before(jiffies, tn->addr_trial_end) && !self;
 
        if (mtyp == DSC_TRIAL_FAIL_MSG) {
                if (!trial)
index f1c3b8e..e902b01 100644 (file)
@@ -3010,7 +3010,7 @@ static int tipc_sk_insert(struct tipc_sock *tsk)
        struct net *net = sock_net(sk);
        struct tipc_net *tn = net_generic(net, tipc_net_id);
        u32 remaining = (TIPC_MAX_PORT - TIPC_MIN_PORT) + 1;
-       u32 portid = prandom_u32() % remaining + TIPC_MIN_PORT;
+       u32 portid = prandom_u32_max(remaining) + TIPC_MIN_PORT;
 
        while (remaining--) {
                portid++;
index 5522865..d92ec92 100644 (file)
@@ -450,12 +450,19 @@ static void tipc_conn_data_ready(struct sock *sk)
 static void tipc_topsrv_accept(struct work_struct *work)
 {
        struct tipc_topsrv *srv = container_of(work, struct tipc_topsrv, awork);
-       struct socket *lsock = srv->listener;
-       struct socket *newsock;
+       struct socket *newsock, *lsock;
        struct tipc_conn *con;
        struct sock *newsk;
        int ret;
 
+       spin_lock_bh(&srv->idr_lock);
+       if (!srv->listener) {
+               spin_unlock_bh(&srv->idr_lock);
+               return;
+       }
+       lsock = srv->listener;
+       spin_unlock_bh(&srv->idr_lock);
+
        while (1) {
                ret = kernel_accept(lsock, &newsock, O_NONBLOCK);
                if (ret < 0)
@@ -489,7 +496,7 @@ static void tipc_topsrv_listener_data_ready(struct sock *sk)
 
        read_lock_bh(&sk->sk_callback_lock);
        srv = sk->sk_user_data;
-       if (srv->listener)
+       if (srv)
                queue_work(srv->rcv_wq, &srv->awork);
        read_unlock_bh(&sk->sk_callback_lock);
 }
@@ -568,7 +575,7 @@ bool tipc_topsrv_kern_subscr(struct net *net, u32 port, u32 type, u32 lower,
        sub.seq.upper = upper;
        sub.timeout = TIPC_WAIT_FOREVER;
        sub.filter = filter;
-       *(u32 *)&sub.usr_handle = port;
+       *(u64 *)&sub.usr_handle = (u64)port;
 
        con = tipc_conn_alloc(tipc_topsrv(net));
        if (IS_ERR(con))
@@ -699,8 +706,9 @@ static void tipc_topsrv_stop(struct net *net)
        __module_get(lsock->sk->sk_prot_creator->owner);
        srv->listener = NULL;
        spin_unlock_bh(&srv->idr_lock);
-       sock_release(lsock);
+
        tipc_topsrv_work_stop(srv);
+       sock_release(lsock);
        idr_destroy(&srv->conn_idr);
        kfree(srv);
 }
index 9b79e33..955ac3e 100644 (file)
@@ -273,7 +273,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
        return desc.error;
 }
 
-static int tls_strp_read_short(struct tls_strparser *strp)
+static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
 {
        struct skb_shared_info *shinfo;
        struct page *page;
@@ -283,7 +283,7 @@ static int tls_strp_read_short(struct tls_strparser *strp)
         * to read the data out. Otherwise the connection will stall.
         * Without pressure threshold of INT_MAX will never be ready.
         */
-       if (likely(!tcp_epollin_ready(strp->sk, INT_MAX)))
+       if (likely(qshort && !tcp_epollin_ready(strp->sk, INT_MAX)))
                return 0;
 
        shinfo = skb_shinfo(strp->anchor);
@@ -315,6 +315,27 @@ static int tls_strp_read_short(struct tls_strparser *strp)
        return 0;
 }
 
+static bool tls_strp_check_no_dup(struct tls_strparser *strp)
+{
+       unsigned int len = strp->stm.offset + strp->stm.full_len;
+       struct sk_buff *skb;
+       u32 seq;
+
+       skb = skb_shinfo(strp->anchor)->frag_list;
+       seq = TCP_SKB_CB(skb)->seq;
+
+       while (skb->len < len) {
+               seq += skb->len;
+               len -= skb->len;
+               skb = skb->next;
+
+               if (TCP_SKB_CB(skb)->seq != seq)
+                       return false;
+       }
+
+       return true;
+}
+
 static void tls_strp_load_anchor_with_queue(struct tls_strparser *strp, int len)
 {
        struct tcp_sock *tp = tcp_sk(strp->sk);
@@ -373,7 +394,7 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
                return tls_strp_read_copyin(strp);
 
        if (inq < strp->stm.full_len)
-               return tls_strp_read_short(strp);
+               return tls_strp_read_copy(strp, true);
 
        if (!strp->stm.full_len) {
                tls_strp_load_anchor_with_queue(strp, inq);
@@ -387,9 +408,12 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
                strp->stm.full_len = sz;
 
                if (!strp->stm.full_len || inq < strp->stm.full_len)
-                       return tls_strp_read_short(strp);
+                       return tls_strp_read_copy(strp, true);
        }
 
+       if (!tls_strp_check_no_dup(strp))
+               return tls_strp_read_copy(strp, false);
+
        strp->msg_ready = 1;
        tls_rx_msg_ready(strp);
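The new tls_strp_check_no_dup() walks the anchor's frag_list and verifies the TCP sequence numbers are contiguous across the bytes of the record, i.e. a retransmitted duplicate segment has not been queued over the span the strparser is about to parse in place. The contiguity walk, modeled over `(seq, length)` pairs (a sketch of the logic, not the kernel helper):

```python
def check_no_dup(segments, needed_len):
    """segments: list of (tcp_seq, seg_len) in queue order.
    True if the first needed_len bytes are sequence-contiguous."""
    seq, seg_len = segments[0]
    i = 0
    while seg_len < needed_len:
        seq += seg_len             # expected start of the next segment
        needed_len -= seg_len
        i += 1
        if segments[i][0] != seq:  # gap or duplicate/overlap
            return False
        seg_len = segments[i][1]
    return True
```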
 
index 15dbb39..b3545fc 100644 (file)
@@ -1147,7 +1147,7 @@ static int unix_autobind(struct sock *sk)
        addr->name->sun_family = AF_UNIX;
        refcount_set(&addr->refcnt, 1);
 
-       ordernum = prandom_u32();
+       ordernum = get_random_u32();
        lastnum = ordernum & 0xFFFFF;
 retry:
        ordernum = (ordernum + 1) & 0xFFFFF;
index d45d536..dc27635 100644 (file)
@@ -204,6 +204,7 @@ void wait_for_unix_gc(void)
 /* The external entry point: unix_gc() */
 void unix_gc(void)
 {
+       struct sk_buff *next_skb, *skb;
        struct unix_sock *u;
        struct unix_sock *next;
        struct sk_buff_head hitlist;
@@ -297,11 +298,30 @@ void unix_gc(void)
 
        spin_unlock(&unix_gc_lock);
 
+       /* We need io_uring to clean its registered files, ignore all io_uring
+        * originated skbs. It's fine as io_uring doesn't keep references to
+        * other io_uring instances and so killing all other files in the cycle
+        * will put all io_uring references forcing it to go through normal
+        * release path eventually putting registered files.
+        */
+       skb_queue_walk_safe(&hitlist, skb, next_skb) {
+               if (skb->scm_io_uring) {
+                       __skb_unlink(skb, &hitlist);
+                       skb_queue_tail(&skb->sk->sk_receive_queue, skb);
+               }
+       }
+
        /* Here we are. Hitlist is filled. Die. */
        __skb_queue_purge(&hitlist);
 
        spin_lock(&unix_gc_lock);
 
+       /* There could be io_uring registered files, just push them back to
+        * the inflight list
+        */
+       list_for_each_entry_safe(u, next, &gc_candidates, link)
+               list_move_tail(&u->link, &gc_inflight_list);
+
        /* All candidates should have been detached by now. */
        BUG_ON(!list_empty(&gc_candidates));
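The unix_gc() hunk relies on `skb_queue_walk_safe()`, which caches the next pointer so the current entry can be unlinked mid-iteration; flagged io_uring skbs are moved off the hitlist back to their receive queues before the purge. The same safe-iteration-with-unlink pattern in miniature (a sketch with dict-based stand-ins for skbs):

```python
def requeue_io_uring(hitlist, receive_queue):
    """Move scm_io_uring entries out of hitlist before it is purged."""
    for skb in list(hitlist):          # snapshot == walk_safe's saved next
        if skb.get("scm_io_uring"):
            hitlist.remove(skb)        # __skb_unlink()
            receive_queue.append(skb)  # skb_queue_tail()
```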
 
index 81df34b..3d2fe77 100644 (file)
@@ -2072,7 +2072,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high)
        } else {
                u32 spi = 0;
                for (h = 0; h < high-low+1; h++) {
-                       spi = low + prandom_u32()%(high-low+1);
+                       spi = low + prandom_u32_max(high - low + 1);
                        x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
                        if (x0 == NULL) {
                                newspi = htonl(spi);
index 22adbf8..41f3602 100644 (file)
@@ -140,7 +140,7 @@ $(obj)/%.symtypes : $(src)/%.c FORCE
 # LLVM assembly
 # Generate .ll files from .c
 quiet_cmd_cc_ll_c = CC $(quiet_modtag)  $@
-      cmd_cc_ll_c = $(CC) $(c_flags) -emit-llvm -S -o $@ $<
+      cmd_cc_ll_c = $(CC) $(c_flags) -emit-llvm -S -fno-discard-value-names -o $@ $<
 
 $(obj)/%.ll: $(src)/%.c FORCE
        $(call if_changed_dep,cc_ll_c)
index 7740ce3..8489a34 100644 (file)
@@ -119,7 +119,7 @@ quiet_cmd_modpost = MODPOST $@
                echo >&2 "WARNING: $(missing-input) is missing."; \
                echo >&2 "         Modules may not have dependencies or modversions."; \
                echo >&2 "         You may get many unresolved symbol warnings.";) \
-       sed 's/ko$$/o/' $(or $(modorder-if-needed), /dev/null) | $(MODPOST) $(modpost-args) $(vmlinux.o-if-present) -T -
+       sed 's/ko$$/o/' $(or $(modorder-if-needed), /dev/null) | $(MODPOST) $(modpost-args) -T - $(vmlinux.o-if-present)
 
 targets += $(output-symdump)
 $(output-symdump): $(modorder-if-needed) $(vmlinux.o-if-present) $(module.symvers-if-present) $(MODPOST) FORCE
index bb78c9b..56f2ec8 100755 (executable)
@@ -45,13 +45,14 @@ def init(l, a):
 
 def run_analysis(entry):
     # Disable all checks, then re-enable the ones we want
-    checks = "-checks=-*,"
+    checks = []
+    checks.append("-checks=-*")
     if args.type == "clang-tidy":
-        checks += "linuxkernel-*"
+        checks.append("linuxkernel-*")
     else:
-        checks += "clang-analyzer-*"
-        checks += ",-clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling"
-    p = subprocess.run(["clang-tidy", "-p", args.path, checks, entry["file"]],
+        checks.append("clang-analyzer-*")
+        checks.append("-clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling")
+    p = subprocess.run(["clang-tidy", "-p", args.path, ",".join(checks), entry["file"]],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        cwd=entry["directory"])
index c920c1b..70392fd 100755 (executable)
@@ -97,8 +97,6 @@ $M    $MAKE %{?_smp_mflags} INSTALL_MOD_PATH=%{buildroot} modules_install
        $MAKE %{?_smp_mflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install
        cp System.map %{buildroot}/boot/System.map-$KERNELRELEASE
        cp .config %{buildroot}/boot/config-$KERNELRELEASE
-       bzip2 -9 --keep vmlinux
-       mv vmlinux.bz2 %{buildroot}/boot/vmlinux-$KERNELRELEASE.bz2
 $S$M   rm -f %{buildroot}/lib/modules/$KERNELRELEASE/build
 $S$M   rm -f %{buildroot}/lib/modules/$KERNELRELEASE/source
 $S$M   mkdir -p %{buildroot}/usr/src/kernels/$KERNELRELEASE
index fe5fcf5..64a6a37 100644 (file)
@@ -2022,7 +2022,8 @@ static inline int convert_context_handle_invalid_context(
  * in `newc'.  Verify that the context is valid
  * under the new policy.
  */
-static int convert_context(struct context *oldc, struct context *newc, void *p)
+static int convert_context(struct context *oldc, struct context *newc, void *p,
+                          gfp_t gfp_flags)
 {
        struct convert_context_args *args;
        struct ocontext *oc;
@@ -2036,7 +2037,7 @@ static int convert_context(struct context *oldc, struct context *newc, void *p)
        args = p;
 
        if (oldc->str) {
-               s = kstrdup(oldc->str, GFP_KERNEL);
+               s = kstrdup(oldc->str, gfp_flags);
                if (!s)
                        return -ENOMEM;
 
index a54b865..db5cce3 100644 (file)
@@ -325,7 +325,7 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
                }
 
                rc = convert->func(context, &dst_convert->context,
-                                  convert->args);
+                                  convert->args, GFP_ATOMIC);
                if (rc) {
                        context_destroy(&dst->context);
                        goto out_unlock;
@@ -404,7 +404,7 @@ static int sidtab_convert_tree(union sidtab_entry_inner *edst,
                while (i < SIDTAB_LEAF_ENTRIES && *pos < count) {
                        rc = convert->func(&esrc->ptr_leaf->entries[i].context,
                                           &edst->ptr_leaf->entries[i].context,
-                                          convert->args);
+                                          convert->args, GFP_KERNEL);
                        if (rc)
                                return rc;
                        (*pos)++;
index 4eff0e4..9fce0d5 100644 (file)
@@ -65,7 +65,7 @@ struct sidtab_isid_entry {
 };
 
 struct sidtab_convert_params {
-       int (*func)(struct context *oldc, struct context *newc, void *args);
+       int (*func)(struct context *oldc, struct context *newc, void *args, gfp_t gfp_flags);
        void *args;
        struct sidtab *target;
 };
index 6963d5a..d8edb60 100644 (file)
@@ -1899,10 +1899,8 @@ static int snd_rawmidi_free(struct snd_rawmidi *rmidi)
 
        snd_info_free_entry(rmidi->proc_entry);
        rmidi->proc_entry = NULL;
-       mutex_lock(&register_mutex);
        if (rmidi->ops && rmidi->ops->dev_unregister)
                rmidi->ops->dev_unregister(rmidi);
-       mutex_unlock(&register_mutex);
 
        snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_INPUT]);
        snd_rawmidi_free_substreams(&rmidi->streams[SNDRV_RAWMIDI_STREAM_OUTPUT]);
index 7ed0a2a..2751bf2 100644 (file)
@@ -162,7 +162,6 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
                mutex_unlock(&sound_oss_mutex);
                return -ENOENT;
        }
-       unregister_sound_special(minor);
        switch (SNDRV_MINOR_OSS_DEVICE(minor)) {
        case SNDRV_MINOR_OSS_PCM:
                track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_AUDIO);
@@ -174,12 +173,18 @@ int snd_unregister_oss_device(int type, struct snd_card *card, int dev)
                track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_DMMIDI1);
                break;
        }
-       if (track2 >= 0) {
-               unregister_sound_special(track2);
+       if (track2 >= 0)
                snd_oss_minors[track2] = NULL;
-       }
        snd_oss_minors[minor] = NULL;
        mutex_unlock(&sound_oss_mutex);
+
+       /* call unregister_sound_special() outside sound_oss_mutex;
+        * otherwise may deadlock, as it can trigger the release of a card
+        */
+       unregister_sound_special(minor);
+       if (track2 >= 0)
+               unregister_sound_special(track2);
+
        kfree(mptr);
        return 0;
 }
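The snd_unregister_oss_device() fix above is the collect-under-lock, act-outside-lock pattern: the minor slots are cleared while holding sound_oss_mutex, but unregister_sound_special() is deferred until after the unlock because it can trigger a card release that takes the mutex again. A minimal model of that ordering (names and values are illustrative):

```python
import threading

minors = {14: "pcm", 30: "audio"}
lock = threading.Lock()

def unregister(dev_minors):
    """Clear table entries under the lock; run callback-heavy work after."""
    to_unregister = []
    with lock:
        for m in dev_minors:
            if m in minors:
                minors[m] = None          # snd_oss_minors[...] = NULL
                to_unregister.append(m)
    released = []
    for m in to_unregister:               # outside the lock: callee may
        with lock:                        # re-take it without deadlocking
            released.append(m)
    return released
```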
index 3952f28..e5f0549 100644 (file)
@@ -91,20 +91,18 @@ static const struct reg_sequence cs35l41_hda_mute[] = {
        { CS35L41_AMP_DIG_VOL_CTRL,     0x0000A678 }, // AMP_VOL_PCM Mute
 };
 
-static int cs35l41_control_add(struct cs_dsp_coeff_ctl *cs_ctl)
+static void cs35l41_add_controls(struct cs35l41_hda *cs35l41)
 {
-       struct cs35l41_hda *cs35l41 = container_of(cs_ctl->dsp, struct cs35l41_hda, cs_dsp);
        struct hda_cs_dsp_ctl_info info;
 
        info.device_name = cs35l41->amp_name;
        info.fw_type = cs35l41->firmware_type;
        info.card = cs35l41->codec->card;
 
-       return hda_cs_dsp_control_add(cs_ctl, &info);
+       hda_cs_dsp_add_controls(&cs35l41->cs_dsp, &info);
 }
 
 static const struct cs_dsp_client_ops client_ops = {
-       .control_add = cs35l41_control_add,
        .control_remove = hda_cs_dsp_control_remove,
 };
 
@@ -435,6 +433,8 @@ static int cs35l41_init_dsp(struct cs35l41_hda *cs35l41)
        if (ret)
                goto err_release;
 
+       cs35l41_add_controls(cs35l41);
+
        ret = cs35l41_save_calibration(cs35l41);
 
 err_release:
@@ -461,9 +461,12 @@ static void cs35l41_remove_dsp(struct cs35l41_hda *cs35l41)
        struct cs_dsp *dsp = &cs35l41->cs_dsp;
 
        cancel_work_sync(&cs35l41->fw_load_work);
+
+       mutex_lock(&cs35l41->fw_mutex);
        cs35l41_shutdown_dsp(cs35l41);
        cs_dsp_remove(dsp);
        cs35l41->halo_initialized = false;
+       mutex_unlock(&cs35l41->fw_mutex);
 }
 
 /* Protection release cycle to get the speaker out of Safe-Mode */
@@ -487,10 +490,10 @@ static void cs35l41_hda_playback_hook(struct device *dev, int action)
        struct regmap *reg = cs35l41->regmap;
        int ret = 0;
 
-       mutex_lock(&cs35l41->fw_mutex);
-
        switch (action) {
        case HDA_GEN_PCM_ACT_OPEN:
+               pm_runtime_get_sync(dev);
+               mutex_lock(&cs35l41->fw_mutex);
                cs35l41->playback_started = true;
                if (cs35l41->firmware_running) {
                        regmap_multi_reg_write(reg, cs35l41_hda_config_dsp,
@@ -508,15 +511,21 @@ static void cs35l41_hda_playback_hook(struct device *dev, int action)
                                         CS35L41_AMP_EN_MASK, 1 << CS35L41_AMP_EN_SHIFT);
                if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST)
                        regmap_write(reg, CS35L41_GPIO1_CTRL1, 0x00008001);
+               mutex_unlock(&cs35l41->fw_mutex);
                break;
        case HDA_GEN_PCM_ACT_PREPARE:
+               mutex_lock(&cs35l41->fw_mutex);
                ret = cs35l41_global_enable(reg, cs35l41->hw_cfg.bst_type, 1);
+               mutex_unlock(&cs35l41->fw_mutex);
                break;
        case HDA_GEN_PCM_ACT_CLEANUP:
+               mutex_lock(&cs35l41->fw_mutex);
                regmap_multi_reg_write(reg, cs35l41_hda_mute, ARRAY_SIZE(cs35l41_hda_mute));
                ret = cs35l41_global_enable(reg, cs35l41->hw_cfg.bst_type, 0);
+               mutex_unlock(&cs35l41->fw_mutex);
                break;
        case HDA_GEN_PCM_ACT_CLOSE:
+               mutex_lock(&cs35l41->fw_mutex);
                ret = regmap_update_bits(reg, CS35L41_PWR_CTRL2,
                                         CS35L41_AMP_EN_MASK, 0 << CS35L41_AMP_EN_SHIFT);
                if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST)
@@ -530,14 +539,16 @@ static void cs35l41_hda_playback_hook(struct device *dev, int action)
                }
                cs35l41_irq_release(cs35l41);
                cs35l41->playback_started = false;
+               mutex_unlock(&cs35l41->fw_mutex);
+
+               pm_runtime_mark_last_busy(dev);
+               pm_runtime_put_autosuspend(dev);
                break;
        default:
                dev_warn(cs35l41->dev, "Playback action not supported: %d\n", action);
                break;
        }
 
-       mutex_unlock(&cs35l41->fw_mutex);
-
        if (ret)
                dev_err(cs35l41->dev, "Regmap access fail: %d\n", ret);
 }
@@ -562,45 +573,148 @@ static int cs35l41_hda_channel_map(struct device *dev, unsigned int tx_num, unsi
                                    rx_slot);
 }
 
+static void cs35l41_ready_for_reset(struct cs35l41_hda *cs35l41)
+{
+       mutex_lock(&cs35l41->fw_mutex);
+       if (cs35l41->firmware_running) {
+
+               regcache_cache_only(cs35l41->regmap, false);
+
+               cs35l41_exit_hibernate(cs35l41->dev, cs35l41->regmap);
+               cs35l41_shutdown_dsp(cs35l41);
+               cs35l41_safe_reset(cs35l41->regmap, cs35l41->hw_cfg.bst_type);
+
+               regcache_cache_only(cs35l41->regmap, true);
+               regcache_mark_dirty(cs35l41->regmap);
+       }
+       mutex_unlock(&cs35l41->fw_mutex);
+}
+
+static int cs35l41_system_suspend(struct device *dev)
+{
+       struct cs35l41_hda *cs35l41 = dev_get_drvdata(dev);
+       int ret;
+
+       dev_dbg(cs35l41->dev, "System Suspend\n");
+
+       if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST_NO_VSPK_SWITCH) {
+               dev_err(cs35l41->dev, "System Suspend not supported\n");
+               return -EINVAL;
+       }
+
+       ret = pm_runtime_force_suspend(dev);
+       if (ret)
+               return ret;
+
+       /* Shutdown DSP before system suspend */
+       cs35l41_ready_for_reset(cs35l41);
+
+       /*
+        * Reset GPIO may be shared, so cannot reset here.
+        * However beyond this point, amps may be powered down.
+        */
+       return 0;
+}
+
+static int cs35l41_system_resume(struct device *dev)
+{
+       struct cs35l41_hda *cs35l41 = dev_get_drvdata(dev);
+       int ret;
+
+       dev_dbg(cs35l41->dev, "System Resume\n");
+
+       if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST_NO_VSPK_SWITCH) {
+               dev_err(cs35l41->dev, "System Resume not supported\n");
+               return -EINVAL;
+       }
+
+       if (cs35l41->reset_gpio) {
+               usleep_range(2000, 2100);
+               gpiod_set_value_cansleep(cs35l41->reset_gpio, 1);
+       }
+
+       usleep_range(2000, 2100);
+
+       ret = pm_runtime_force_resume(dev);
+
+       mutex_lock(&cs35l41->fw_mutex);
+       if (!ret && cs35l41->request_fw_load && !cs35l41->fw_request_ongoing) {
+               cs35l41->fw_request_ongoing = true;
+               schedule_work(&cs35l41->fw_load_work);
+       }
+       mutex_unlock(&cs35l41->fw_mutex);
+
+       return ret;
+}
+
 static int cs35l41_runtime_suspend(struct device *dev)
 {
        struct cs35l41_hda *cs35l41 = dev_get_drvdata(dev);
+       int ret = 0;
 
-       dev_dbg(cs35l41->dev, "Suspend\n");
+       dev_dbg(cs35l41->dev, "Runtime Suspend\n");
 
-       if (!cs35l41->firmware_running)
+       if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST_NO_VSPK_SWITCH) {
+               dev_dbg(cs35l41->dev, "Runtime Suspend not supported\n");
                return 0;
+       }
 
-       if (cs35l41_enter_hibernate(cs35l41->dev, cs35l41->regmap, cs35l41->hw_cfg.bst_type) < 0)
-               return 0;
+       mutex_lock(&cs35l41->fw_mutex);
+
+       if (cs35l41->playback_started) {
+               regmap_multi_reg_write(cs35l41->regmap, cs35l41_hda_mute,
+                                      ARRAY_SIZE(cs35l41_hda_mute));
+               cs35l41_global_enable(cs35l41->regmap, cs35l41->hw_cfg.bst_type, 0);
+               regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+                                  CS35L41_AMP_EN_MASK, 0 << CS35L41_AMP_EN_SHIFT);
+               if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST)
+                       regmap_write(cs35l41->regmap, CS35L41_GPIO1_CTRL1, 0x00000001);
+               regmap_update_bits(cs35l41->regmap, CS35L41_PWR_CTRL2,
+                                  CS35L41_VMON_EN_MASK | CS35L41_IMON_EN_MASK,
+                                  0 << CS35L41_VMON_EN_SHIFT | 0 << CS35L41_IMON_EN_SHIFT);
+               cs35l41->playback_started = false;
+       }
+
+       if (cs35l41->firmware_running) {
+               ret = cs35l41_enter_hibernate(cs35l41->dev, cs35l41->regmap,
+                                             cs35l41->hw_cfg.bst_type);
+               if (ret)
+                       goto err;
+       } else {
+               cs35l41_safe_reset(cs35l41->regmap, cs35l41->hw_cfg.bst_type);
+       }
 
        regcache_cache_only(cs35l41->regmap, true);
        regcache_mark_dirty(cs35l41->regmap);
 
-       return 0;
+err:
+       mutex_unlock(&cs35l41->fw_mutex);
+
+       return ret;
 }
 
 static int cs35l41_runtime_resume(struct device *dev)
 {
        struct cs35l41_hda *cs35l41 = dev_get_drvdata(dev);
-       int ret;
+       int ret = 0;
 
-       dev_dbg(cs35l41->dev, "Resume.\n");
+       dev_dbg(cs35l41->dev, "Runtime Resume\n");
 
        if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST_NO_VSPK_SWITCH) {
-               dev_dbg(cs35l41->dev, "System does not support Resume\n");
+               dev_dbg(cs35l41->dev, "Runtime Resume not supported\n");
                return 0;
        }
 
-       if (!cs35l41->firmware_running)
-               return 0;
+       mutex_lock(&cs35l41->fw_mutex);
 
        regcache_cache_only(cs35l41->regmap, false);
 
-       ret = cs35l41_exit_hibernate(cs35l41->dev, cs35l41->regmap);
-       if (ret) {
-               regcache_cache_only(cs35l41->regmap, true);
-               return ret;
+       if (cs35l41->firmware_running)  {
+               ret = cs35l41_exit_hibernate(cs35l41->dev, cs35l41->regmap);
+               if (ret) {
+                       dev_warn(cs35l41->dev, "Unable to exit Hibernate.");
+                       goto err;
+               }
        }
 
        /* Test key needs to be unlocked to allow the OTP settings to re-apply */
@@ -609,26 +723,16 @@ static int cs35l41_runtime_resume(struct device *dev)
        cs35l41_test_key_lock(cs35l41->dev, cs35l41->regmap);
        if (ret) {
                dev_err(cs35l41->dev, "Failed to restore register cache: %d\n", ret);
-               return ret;
+               goto err;
        }
 
        if (cs35l41->hw_cfg.bst_type == CS35L41_EXT_BOOST)
                cs35l41_init_boost(cs35l41->dev, cs35l41->regmap, &cs35l41->hw_cfg);
 
-       return 0;
-}
-
-static int cs35l41_hda_suspend_hook(struct device *dev)
-{
-       dev_dbg(dev, "Request Suspend\n");
-       pm_runtime_mark_last_busy(dev);
-       return pm_runtime_put_autosuspend(dev);
-}
+err:
+       mutex_unlock(&cs35l41->fw_mutex);
 
-static int cs35l41_hda_resume_hook(struct device *dev)
-{
-       dev_dbg(dev, "Request Resume\n");
-       return pm_runtime_get_sync(dev);
+       return ret;
 }
 
 static int cs35l41_smart_amp(struct cs35l41_hda *cs35l41)
@@ -678,8 +782,6 @@ clean_dsp:
 
 static void cs35l41_load_firmware(struct cs35l41_hda *cs35l41, bool load)
 {
-       pm_runtime_get_sync(cs35l41->dev);
-
        if (cs35l41->firmware_running && !load) {
                dev_dbg(cs35l41->dev, "Unloading Firmware\n");
                cs35l41_shutdown_dsp(cs35l41);
@@ -689,9 +791,6 @@ static void cs35l41_load_firmware(struct cs35l41_hda *cs35l41, bool load)
        } else {
                dev_dbg(cs35l41->dev, "Unable to Load firmware.\n");
        }
-
-       pm_runtime_mark_last_busy(cs35l41->dev);
-       pm_runtime_put_autosuspend(cs35l41->dev);
 }
 
 static int cs35l41_fw_load_ctl_get(struct snd_kcontrol *kcontrol,
@@ -707,16 +806,21 @@ static void cs35l41_fw_load_work(struct work_struct *work)
 {
        struct cs35l41_hda *cs35l41 = container_of(work, struct cs35l41_hda, fw_load_work);
 
+       pm_runtime_get_sync(cs35l41->dev);
+
        mutex_lock(&cs35l41->fw_mutex);
 
        /* Recheck if playback is ongoing, mutex will block playback during firmware loading */
        if (cs35l41->playback_started)
-               dev_err(cs35l41->dev, "Cannot Load/Unload firmware during Playback\n");
+               dev_err(cs35l41->dev, "Cannot Load/Unload firmware during Playback. Retrying...\n");
        else
                cs35l41_load_firmware(cs35l41, cs35l41->request_fw_load);
 
        cs35l41->fw_request_ongoing = false;
        mutex_unlock(&cs35l41->fw_mutex);
+
+       pm_runtime_mark_last_busy(cs35l41->dev);
+       pm_runtime_put_autosuspend(cs35l41->dev);
 }
 
 static int cs35l41_fw_load_ctl_put(struct snd_kcontrol *kcontrol,
@@ -840,6 +944,8 @@ static int cs35l41_hda_bind(struct device *dev, struct device *master, void *mas
 
        pm_runtime_get_sync(dev);
 
+       mutex_lock(&cs35l41->fw_mutex);
+
        comps->dev = dev;
        if (!cs35l41->acpi_subsystem_id)
                cs35l41->acpi_subsystem_id = kasprintf(GFP_KERNEL, "%.8x",
@@ -852,10 +958,8 @@ static int cs35l41_hda_bind(struct device *dev, struct device *master, void *mas
        if (firmware_autostart) {
                dev_dbg(cs35l41->dev, "Firmware Autostart.\n");
                cs35l41->request_fw_load = true;
-               mutex_lock(&cs35l41->fw_mutex);
                if (cs35l41_smart_amp(cs35l41) < 0)
                        dev_warn(cs35l41->dev, "Cannot Run Firmware, reverting to dsp bypass...\n");
-               mutex_unlock(&cs35l41->fw_mutex);
        } else {
                dev_dbg(cs35l41->dev, "Firmware Autostart is disabled.\n");
        }
@@ -863,8 +967,8 @@ static int cs35l41_hda_bind(struct device *dev, struct device *master, void *mas
        ret = cs35l41_create_controls(cs35l41);
 
        comps->playback_hook = cs35l41_hda_playback_hook;
-       comps->suspend_hook = cs35l41_hda_suspend_hook;
-       comps->resume_hook = cs35l41_hda_resume_hook;
+
+       mutex_unlock(&cs35l41->fw_mutex);
 
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
@@ -1433,6 +1537,7 @@ EXPORT_SYMBOL_NS_GPL(cs35l41_hda_remove, SND_HDA_SCODEC_CS35L41);
 
 const struct dev_pm_ops cs35l41_hda_pm_ops = {
        RUNTIME_PM_OPS(cs35l41_runtime_suspend, cs35l41_runtime_resume, NULL)
+       SYSTEM_SLEEP_PM_OPS(cs35l41_system_suspend, cs35l41_system_resume)
 };
 EXPORT_SYMBOL_NS_GPL(cs35l41_hda_pm_ops, SND_HDA_SCODEC_CS35L41);
 
index 1223621..534e845 100644
@@ -16,6 +16,4 @@ struct hda_component {
        char name[HDA_MAX_NAME_SIZE];
        struct hda_codec *codec;
        void (*playback_hook)(struct device *dev, int action);
-       int (*suspend_hook)(struct device *dev);
-       int (*resume_hook)(struct device *dev);
 };
index 89ee549..1622a22 100644
@@ -97,7 +97,7 @@ static unsigned int wmfw_convert_flags(unsigned int in)
        return out;
 }
 
-static int hda_cs_dsp_add_kcontrol(struct hda_cs_dsp_coeff_ctl *ctl, const char *name)
+static void hda_cs_dsp_add_kcontrol(struct hda_cs_dsp_coeff_ctl *ctl, const char *name)
 {
        struct cs_dsp_coeff_ctl *cs_ctl = ctl->cs_ctl;
        struct snd_kcontrol_new kcontrol = {0};
@@ -107,7 +107,7 @@ static int hda_cs_dsp_add_kcontrol(struct hda_cs_dsp_coeff_ctl *ctl, const char
        if (cs_ctl->len > ADSP_MAX_STD_CTRL_SIZE) {
                dev_err(cs_ctl->dsp->dev, "KControl %s: length %zu exceeds maximum %d\n", name,
                        cs_ctl->len, ADSP_MAX_STD_CTRL_SIZE);
-               return -EINVAL;
+               return;
        }
 
        kcontrol.name = name;
@@ -120,24 +120,21 @@ static int hda_cs_dsp_add_kcontrol(struct hda_cs_dsp_coeff_ctl *ctl, const char
        /* Save ctl inside private_data, ctl is owned by cs_dsp,
         * and will be freed when cs_dsp removes the control */
        kctl = snd_ctl_new1(&kcontrol, (void *)ctl);
-       if (!kctl) {
-               ret = -ENOMEM;
-               return ret;
-       }
+       if (!kctl)
+               return;
 
        ret = snd_ctl_add(ctl->card, kctl);
        if (ret) {
                dev_err(cs_ctl->dsp->dev, "Failed to add KControl %s = %d\n", kcontrol.name, ret);
-               return ret;
+               return;
        }
 
        dev_dbg(cs_ctl->dsp->dev, "Added KControl: %s\n", kcontrol.name);
        ctl->kctl = kctl;
-
-       return 0;
 }
 
-int hda_cs_dsp_control_add(struct cs_dsp_coeff_ctl *cs_ctl, struct hda_cs_dsp_ctl_info *info)
+static void hda_cs_dsp_control_add(struct cs_dsp_coeff_ctl *cs_ctl,
+                                  const struct hda_cs_dsp_ctl_info *info)
 {
        struct cs_dsp *cs_dsp = cs_ctl->dsp;
        char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
@@ -145,13 +142,10 @@ int hda_cs_dsp_control_add(struct cs_dsp_coeff_ctl *cs_ctl, struct hda_cs_dsp_ct
        const char *region_name;
        int ret;
 
-       if (cs_ctl->flags & WMFW_CTL_FLAG_SYS)
-               return 0;
-
        region_name = cs_dsp_mem_region_name(cs_ctl->alg_region.type);
        if (!region_name) {
-               dev_err(cs_dsp->dev, "Unknown region type: %d\n", cs_ctl->alg_region.type);
-               return -EINVAL;
+               dev_warn(cs_dsp->dev, "Unknown region type: %d\n", cs_ctl->alg_region.type);
+               return;
        }
 
        ret = scnprintf(name, SNDRV_CTL_ELEM_ID_NAME_MAXLEN, "%s %s %.12s %x", info->device_name,
@@ -171,22 +165,39 @@ int hda_cs_dsp_control_add(struct cs_dsp_coeff_ctl *cs_ctl, struct hda_cs_dsp_ct
 
        ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
        if (!ctl)
-               return -ENOMEM;
+               return;
 
        ctl->cs_ctl = cs_ctl;
        ctl->card = info->card;
        cs_ctl->priv = ctl;
 
-       ret = hda_cs_dsp_add_kcontrol(ctl, name);
-       if (ret) {
-               dev_err(cs_dsp->dev, "Error (%d) adding control %s\n", ret, name);
-               kfree(ctl);
-               return ret;
-       }
+       hda_cs_dsp_add_kcontrol(ctl, name);
+}
 
-       return 0;
+void hda_cs_dsp_add_controls(struct cs_dsp *dsp, const struct hda_cs_dsp_ctl_info *info)
+{
+       struct cs_dsp_coeff_ctl *cs_ctl;
+
+       /*
+        * pwr_lock would cause mutex inversion with ALSA control lock compared
+        * to the get/put functions.
+        * It is safe to walk the list without holding a mutex because entries
+        * are persistent and only cs_dsp_power_up() or cs_dsp_remove() can
+        * change the list.
+        */
+       lockdep_assert_not_held(&dsp->pwr_lock);
+
+       list_for_each_entry(cs_ctl, &dsp->ctl_list, list) {
+               if (cs_ctl->flags & WMFW_CTL_FLAG_SYS)
+                       continue;
+
+               if (cs_ctl->priv)
+                       continue;
+
+               hda_cs_dsp_control_add(cs_ctl, info);
+       }
 }
-EXPORT_SYMBOL_NS_GPL(hda_cs_dsp_control_add, SND_HDA_CS_DSP_CONTROLS);
+EXPORT_SYMBOL_NS_GPL(hda_cs_dsp_add_controls, SND_HDA_CS_DSP_CONTROLS);
 
 void hda_cs_dsp_control_remove(struct cs_dsp_coeff_ctl *cs_ctl)
 {
@@ -203,19 +214,18 @@ int hda_cs_dsp_write_ctl(struct cs_dsp *dsp, const char *name, int type,
        struct hda_cs_dsp_coeff_ctl *ctl;
        int ret;
 
+       mutex_lock(&dsp->pwr_lock);
        cs_ctl = cs_dsp_get_ctl(dsp, name, type, alg);
-       if (!cs_ctl)
-               return -EINVAL;
-
-       ctl = cs_ctl->priv;
-
        ret = cs_dsp_coeff_write_ctrl(cs_ctl, 0, buf, len);
+       mutex_unlock(&dsp->pwr_lock);
        if (ret)
                return ret;
 
        if (cs_ctl->flags & WMFW_CTL_FLAG_SYS)
                return 0;
 
+       ctl = cs_ctl->priv;
+
        snd_ctl_notify(ctl->card, SNDRV_CTL_EVENT_MASK_VALUE, &ctl->kctl->id);
 
        return 0;
@@ -225,13 +235,14 @@ EXPORT_SYMBOL_NS_GPL(hda_cs_dsp_write_ctl, SND_HDA_CS_DSP_CONTROLS);
 int hda_cs_dsp_read_ctl(struct cs_dsp *dsp, const char *name, int type,
                        unsigned int alg, void *buf, size_t len)
 {
-       struct cs_dsp_coeff_ctl *cs_ctl;
+       int ret;
 
-       cs_ctl = cs_dsp_get_ctl(dsp, name, type, alg);
-       if (!cs_ctl)
-               return -EINVAL;
+       mutex_lock(&dsp->pwr_lock);
+       ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(dsp, name, type, alg), 0, buf, len);
+       mutex_unlock(&dsp->pwr_lock);
+
+       return ret;
 
-       return cs_dsp_coeff_read_ctrl(cs_ctl, 0, buf, len);
 }
 EXPORT_SYMBOL_NS_GPL(hda_cs_dsp_read_ctl, SND_HDA_CS_DSP_CONTROLS);
 
index 4babc69..2cf9335 100644
@@ -29,7 +29,7 @@ struct hda_cs_dsp_ctl_info {
 
 extern const char * const hda_cs_dsp_fw_ids[HDA_CS_DSP_NUM_FW];
 
-int hda_cs_dsp_control_add(struct cs_dsp_coeff_ctl *cs_ctl, struct hda_cs_dsp_ctl_info *info);
+void hda_cs_dsp_add_controls(struct cs_dsp *dsp, const struct hda_cs_dsp_ctl_info *info);
 void hda_cs_dsp_control_remove(struct cs_dsp_coeff_ctl *cs_ctl);
 int hda_cs_dsp_write_ctl(struct cs_dsp *dsp, const char *name, int type,
                         unsigned int alg, const void *buf, size_t len);
index bce82b8..e6c4bb5 100644
@@ -4022,22 +4022,16 @@ static void alc5505_dsp_init(struct hda_codec *codec)
 static int alc269_suspend(struct hda_codec *codec)
 {
        struct alc_spec *spec = codec->spec;
-       int i;
 
        if (spec->has_alc5505_dsp)
                alc5505_dsp_suspend(codec);
 
-       for (i = 0; i < HDA_MAX_COMPONENTS; i++)
-               if (spec->comps[i].suspend_hook)
-                       spec->comps[i].suspend_hook(spec->comps[i].dev);
-
        return alc_suspend(codec);
 }
 
 static int alc269_resume(struct hda_codec *codec)
 {
        struct alc_spec *spec = codec->spec;
-       int i;
 
        if (spec->codec_variant == ALC269_TYPE_ALC269VB)
                alc269vb_toggle_power_output(codec, 0);
@@ -4068,10 +4062,6 @@ static int alc269_resume(struct hda_codec *codec)
        if (spec->has_alc5505_dsp)
                alc5505_dsp_resume(codec);
 
-       for (i = 0; i < HDA_MAX_COMPONENTS; i++)
-               if (spec->comps[i].resume_hook)
-                       spec->comps[i].resume_hook(spec->comps[i].dev);
-
        return 0;
 }
 #endif /* CONFIG_PM */
@@ -6664,19 +6654,12 @@ static int comp_bind(struct device *dev)
 {
        struct hda_codec *cdc = dev_to_hda_codec(dev);
        struct alc_spec *spec = cdc->spec;
-       int ret, i;
+       int ret;
 
        ret = component_bind_all(dev, spec->comps);
        if (ret)
                return ret;
 
-       if (snd_hdac_is_power_on(&cdc->core)) {
-               codec_dbg(cdc, "Resuming after bind.\n");
-               for (i = 0; i < HDA_MAX_COMPONENTS; i++)
-                       if (spec->comps[i].resume_hook)
-                               spec->comps[i].resume_hook(spec->comps[i].dev);
-       }
-
        return 0;
 }
 
@@ -8449,11 +8432,13 @@ static const struct hda_fixup alc269_fixups[] = {
        [ALC285_FIXUP_ASUS_G533Z_PINS] = {
                .type = HDA_FIXUP_PINS,
                .v.pins = (const struct hda_pintbl[]) {
-                       { 0x14, 0x90170120 },
+                       { 0x14, 0x90170152 }, /* Speaker Surround Playback Switch */
+                       { 0x19, 0x03a19020 }, /* Mic Boost Volume */
+                       { 0x1a, 0x03a11c30 }, /* Mic Boost Volume */
+                       { 0x1e, 0x90170151 }, /* Rear jack, IN OUT EAPD Detect */
+                       { 0x21, 0x03211420 },
                        { }
                },
-               .chained = true,
-               .chain_id = ALC294_FIXUP_ASUS_G513_PINS,
        },
        [ALC294_FIXUP_ASUS_COEF_1B] = {
                .type = HDA_FIXUP_VERBS,
@@ -9198,7 +9183,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
        SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
        SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
-       SND_PCI_QUIRK(0x1028, 0x087d, "Dell Precision 5530", ALC289_FIXUP_DUAL_SPK),
        SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
@@ -9422,6 +9406,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
        SND_PCI_QUIRK(0x1043, 0x1c52, "ASUS Zephyrus G15 2022", ALC289_FIXUP_ASUS_GA401),
        SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+       SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
        SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
        SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
        SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
@@ -9443,6 +9428,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
        SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
        SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+       SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
        SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
        SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
        SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
index ca75f22..4006155 100644
@@ -129,7 +129,8 @@ struct snd_usb_endpoint {
                                           in a stream */
        bool implicit_fb_sync;          /* syncs with implicit feedback */
        bool lowlatency_playback;       /* low-latency playback mode */
-       bool need_setup;                /* (re-)need for configure? */
+       bool need_setup;                /* (re-)need for hw_params? */
+       bool need_prepare;              /* (re-)need for prepare? */
 
        /* for hw constraints */
        const struct audioformat *cur_audiofmt;
index 48a3843..d0b8d61 100644
@@ -32,6 +32,7 @@ struct snd_usb_iface_ref {
        unsigned char iface;
        bool need_setup;
        int opened;
+       int altset;
        struct list_head list;
 };
 
@@ -823,6 +824,7 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip,
 
                ep->implicit_fb_sync = fp->implicit_fb;
                ep->need_setup = true;
+               ep->need_prepare = true;
 
                usb_audio_dbg(chip, "  channels=%d, rate=%d, format=%s, period_bytes=%d, periods=%d, implicit_fb=%d\n",
                              ep->cur_channels, ep->cur_rate,
@@ -899,6 +901,9 @@ static int endpoint_set_interface(struct snd_usb_audio *chip,
        int altset = set ? ep->altsetting : 0;
        int err;
 
+       if (ep->iface_ref->altset == altset)
+               return 0;
+
        usb_audio_dbg(chip, "Setting usb interface %d:%d for EP 0x%x\n",
                      ep->iface, altset, ep->ep_num);
        err = usb_set_interface(chip->dev, ep->iface, altset);
@@ -910,6 +915,7 @@ static int endpoint_set_interface(struct snd_usb_audio *chip,
 
        if (chip->quirk_flags & QUIRK_FLAG_IFACE_DELAY)
                msleep(50);
+       ep->iface_ref->altset = altset;
        return 0;
 }
 
@@ -947,7 +953,7 @@ void snd_usb_endpoint_close(struct snd_usb_audio *chip,
 /* Prepare for suspending EP, called from the main suspend handler */
 void snd_usb_endpoint_suspend(struct snd_usb_endpoint *ep)
 {
-       ep->need_setup = true;
+       ep->need_prepare = true;
        if (ep->iface_ref)
                ep->iface_ref->need_setup = true;
        if (ep->clock_ref)
@@ -1330,12 +1336,16 @@ int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
                                struct snd_usb_endpoint *ep)
 {
        const struct audioformat *fmt = ep->cur_audiofmt;
-       int err;
+       int err = 0;
+
+       mutex_lock(&chip->mutex);
+       if (!ep->need_setup)
+               goto unlock;
 
        /* release old buffers, if any */
        err = release_urbs(ep, false);
        if (err < 0)
-               return err;
+               goto unlock;
 
        ep->datainterval = fmt->datainterval;
        ep->maxpacksize = fmt->maxpacksize;
@@ -1373,13 +1383,21 @@ int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
        usb_audio_dbg(chip, "Set up %d URBS, ret=%d\n", ep->nurbs, err);
 
        if (err < 0)
-               return err;
+               goto unlock;
 
        /* some unit conversions in runtime */
        ep->maxframesize = ep->maxpacksize / ep->cur_frame_bytes;
        ep->curframesize = ep->curpacksize / ep->cur_frame_bytes;
 
-       return update_clock_ref_rate(chip, ep);
+       err = update_clock_ref_rate(chip, ep);
+       if (err >= 0) {
+               ep->need_setup = false;
+               err = 0;
+       }
+
+ unlock:
+       mutex_unlock(&chip->mutex);
+       return err;
 }
 
 static int init_sample_rate(struct snd_usb_audio *chip,
@@ -1426,7 +1444,7 @@ int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
        mutex_lock(&chip->mutex);
        if (WARN_ON(!ep->iface_ref))
                goto unlock;
-       if (!ep->need_setup)
+       if (!ep->need_prepare)
                goto unlock;
 
        /* If the interface has been already set up, just set EP parameters */
@@ -1480,7 +1498,7 @@ int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,
        ep->iface_ref->need_setup = false;
 
  done:
-       ep->need_setup = false;
+       ep->need_prepare = false;
        err = 1;
 
 unlock:
index 6674bdb..10ac527 100644
                                                 * Return Stack Buffer Predictions.
                                                 */
 
+#define ARCH_CAP_XAPIC_DISABLE         BIT(21) /*
+                                                * IA32_XAPIC_DISABLE_STATUS MSR
+                                                * supported
+                                                */
+
 #define MSR_IA32_FLUSH_CMD             0x0000010b
 #define L1D_FLUSH                      BIT(0)  /*
                                                 * Writeback and invalidate the
 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL         0xc0000301
 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR  0xc0000302
 
+/* AMD Last Branch Record MSRs */
+#define MSR_AMD64_LBR_SELECT                   0xc000010e
+
 /* Fam 17h MSRs */
 #define MSR_F17H_IRPERF                        0xc00000e9
 
 #define MSR_AMD_DBG_EXTN_CFG           0xc000010f
 #define MSR_AMD_SAMP_BR_FROM           0xc0010300
 
+#define DBG_EXTN_CFG_LBRV2EN           BIT_ULL(6)
+
 #define MSR_IA32_MPERF                 0x000000e7
 #define MSR_IA32_APERF                 0x000000e8
 
 #define MSR_IA32_HW_FEEDBACK_PTR        0x17d0
 #define MSR_IA32_HW_FEEDBACK_CONFIG     0x17d1
 
+/* x2APIC locked status */
+#define MSR_IA32_XAPIC_DISABLE_STATUS  0xBD
+#define LEGACY_XAPIC_DISABLED          BIT(0) /*
+                                               * x2APIC mode is locked and
+                                               * disabling x2APIC will cause
+                                               * a #GP
+                                               */
+
 #endif /* _ASM_X86_MSR_INDEX_H */
index eed0315..0d5d441 100644
@@ -1177,6 +1177,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
+#define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
index e282faf..ad47d7b 100644
@@ -6,7 +6,6 @@
 #include <linux/types.h>
 #include <linux/limits.h>
 #include <linux/bpf.h>
-#include <linux/compiler.h>
 #include <sys/types.h> /* pid_t */
 
 #define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))
@@ -207,7 +206,7 @@ struct perf_record_range_cpu_map {
        __u16 end_cpu;
 };
 
-struct __packed perf_record_cpu_map_data {
+struct perf_record_cpu_map_data {
        __u16                    type;
        union {
                /* Used when type == PERF_CPU_MAP__CPUS. */
@@ -219,7 +218,7 @@ struct __packed perf_record_cpu_map_data {
                /* Used when type == PERF_CPU_MAP__RANGE_CPUS. */
                struct perf_record_range_cpu_map range_cpu_data;
        };
-};
+} __attribute__((packed));
 
 #pragma GCC diagnostic pop
 
index 5fc6a2a..deeb163 100644
@@ -4,9 +4,11 @@
  * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
  */
 
+#include <dirent.h>
 #include <stdbool.h>
 #include <linux/coresight-pmu.h>
 #include <linux/zalloc.h>
+#include <api/fs/fs.h>
 
 #include "../../../util/auxtrace.h"
 #include "../../../util/debug.h"
@@ -14,6 +16,7 @@
 #include "../../../util/pmu.h"
 #include "cs-etm.h"
 #include "arm-spe.h"
+#include "hisi-ptt.h"
 
 static struct perf_pmu **find_all_arm_spe_pmus(int *nr_spes, int *err)
 {
@@ -50,42 +53,114 @@ static struct perf_pmu **find_all_arm_spe_pmus(int *nr_spes, int *err)
        return arm_spe_pmus;
 }
 
+static struct perf_pmu **find_all_hisi_ptt_pmus(int *nr_ptts, int *err)
+{
+       const char *sysfs = sysfs__mountpoint();
+       struct perf_pmu **hisi_ptt_pmus = NULL;
+       struct dirent *dent;
+       char path[PATH_MAX];
+       DIR *dir = NULL;
+       int idx = 0;
+
+       snprintf(path, PATH_MAX, "%s" EVENT_SOURCE_DEVICE_PATH, sysfs);
+       dir = opendir(path);
+       if (!dir) {
+               pr_err("can't read directory '%s'\n", EVENT_SOURCE_DEVICE_PATH);
+               *err = -EINVAL;
+               return NULL;
+       }
+
+       while ((dent = readdir(dir))) {
+               if (strstr(dent->d_name, HISI_PTT_PMU_NAME))
+                       (*nr_ptts)++;
+       }
+
+       if (!(*nr_ptts))
+               goto out;
+
+       hisi_ptt_pmus = zalloc(sizeof(struct perf_pmu *) * (*nr_ptts));
+       if (!hisi_ptt_pmus) {
+               pr_err("hisi_ptt alloc failed\n");
+               *err = -ENOMEM;
+               goto out;
+       }
+
+       rewinddir(dir);
+       while ((dent = readdir(dir))) {
+               if (strstr(dent->d_name, HISI_PTT_PMU_NAME) && idx < *nr_ptts) {
+                       hisi_ptt_pmus[idx] = perf_pmu__find(dent->d_name);
+                       if (hisi_ptt_pmus[idx])
+                               idx++;
+               }
+       }
+
+out:
+       closedir(dir);
+       return hisi_ptt_pmus;
+}
+
+static struct perf_pmu *find_pmu_for_event(struct perf_pmu **pmus,
+                                          int pmu_nr, struct evsel *evsel)
+{
+       int i;
+
+       if (!pmus)
+               return NULL;
+
+       for (i = 0; i < pmu_nr; i++) {
+               if (evsel->core.attr.type == pmus[i]->type)
+                       return pmus[i];
+       }
+
+       return NULL;
+}
+
 struct auxtrace_record
 *auxtrace_record__init(struct evlist *evlist, int *err)
 {
-       struct perf_pmu *cs_etm_pmu;
+       struct perf_pmu *cs_etm_pmu = NULL;
+       struct perf_pmu **arm_spe_pmus = NULL;
+       struct perf_pmu **hisi_ptt_pmus = NULL;
        struct evsel *evsel;
-       bool found_etm = false;
+       struct perf_pmu *found_etm = NULL;
        struct perf_pmu *found_spe = NULL;
-       struct perf_pmu **arm_spe_pmus = NULL;
+       struct perf_pmu *found_ptt = NULL;
+       int auxtrace_event_cnt = 0;
        int nr_spes = 0;
-       int i = 0;
+       int nr_ptts = 0;
 
        if (!evlist)
                return NULL;
 
        cs_etm_pmu = perf_pmu__find(CORESIGHT_ETM_PMU_NAME);
        arm_spe_pmus = find_all_arm_spe_pmus(&nr_spes, err);
+       hisi_ptt_pmus = find_all_hisi_ptt_pmus(&nr_ptts, err);
 
        evlist__for_each_entry(evlist, evsel) {
-               if (cs_etm_pmu &&
-                   evsel->core.attr.type == cs_etm_pmu->type)
-                       found_etm = true;
-
-               if (!nr_spes || found_spe)
-                       continue;
-
-               for (i = 0; i < nr_spes; i++) {
-                       if (evsel->core.attr.type == arm_spe_pmus[i]->type) {
-                               found_spe = arm_spe_pmus[i];
-                               break;
-                       }
-               }
+               if (cs_etm_pmu && !found_etm)
+                       found_etm = find_pmu_for_event(&cs_etm_pmu, 1, evsel);
+
+               if (arm_spe_pmus && !found_spe)
+                       found_spe = find_pmu_for_event(arm_spe_pmus, nr_spes, evsel);
+
+               if (hisi_ptt_pmus && !found_ptt)
+                       found_ptt = find_pmu_for_event(hisi_ptt_pmus, nr_ptts, evsel);
        }
+
        free(arm_spe_pmus);
+       free(hisi_ptt_pmus);
+
+       if (found_etm)
+               auxtrace_event_cnt++;
 
-       if (found_etm && found_spe) {
-               pr_err("Concurrent ARM Coresight ETM and SPE operation not currently supported\n");
+       if (found_spe)
+               auxtrace_event_cnt++;
+
+       if (found_ptt)
+               auxtrace_event_cnt++;
+
+       if (auxtrace_event_cnt > 1) {
+               pr_err("Concurrent AUX trace operation not currently supported\n");
                *err = -EOPNOTSUPP;
                return NULL;
        }
@@ -96,6 +171,9 @@ struct auxtrace_record
 #if defined(__aarch64__)
        if (found_spe)
                return arm_spe_recording_init(err, found_spe);
+
+       if (found_ptt)
+               return hisi_ptt_recording_init(err, found_ptt);
 #endif
 
        /*
index b8b23b9..887c8ad 100644 (file)
@@ -10,6 +10,7 @@
 #include <linux/string.h>
 
 #include "arm-spe.h"
+#include "hisi-ptt.h"
 #include "../../../util/pmu.h"
 
 struct perf_event_attr
@@ -22,6 +23,8 @@ struct perf_event_attr
 #if defined(__aarch64__)
        } else if (strstarts(pmu->name, ARM_SPE_PMU_NAME)) {
                return arm_spe_pmu_default_config(pmu);
+       } else if (strstarts(pmu->name, HISI_PTT_PMU_NAME)) {
+               pmu->selectable = true;
 #endif
        }
 
index 037e292..4af0c3a 100644 (file)
@@ -102,7 +102,7 @@ static int arm64__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
        if (err)
                goto out_free_arm;
        /* b, b.cond, br, cbz/cbnz, tbz/tbnz */
-       err = regcomp(&arm->jump_insn, "^[ct]?br?\\.?(cc|cs|eq|ge|gt|hi|le|ls|lt|mi|ne|pl)?n?z?$",
+       err = regcomp(&arm->jump_insn, "^[ct]?br?\\.?(cc|cs|eq|ge|gt|hi|hs|le|lo|ls|lt|mi|ne|pl|vc|vs)?n?z?$",
                      REG_EXTENDED);
        if (err)
                goto out_free_call;
index 9fcb4e6..337aa9b 100644 (file)
@@ -11,4 +11,4 @@ perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
 perf-$(CONFIG_AUXTRACE) += ../../arm/util/pmu.o \
                              ../../arm/util/auxtrace.o \
                              ../../arm/util/cs-etm.o \
-                             arm-spe.o mem-events.o
+                             arm-spe.o mem-events.o hisi-ptt.o
diff --git a/tools/perf/arch/arm64/util/hisi-ptt.c b/tools/perf/arch/arm64/util/hisi-ptt.c
new file mode 100644 (file)
index 0000000..ba97c8a
--- /dev/null
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HiSilicon PCIe Trace and Tuning (PTT) support
+ * Copyright (c) 2022 HiSilicon Technologies Co., Ltd.
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/bitops.h>
+#include <linux/log2.h>
+#include <linux/zalloc.h>
+#include <time.h>
+
+#include <internal/lib.h> // page_size
+#include "../../../util/auxtrace.h"
+#include "../../../util/cpumap.h"
+#include "../../../util/debug.h"
+#include "../../../util/event.h"
+#include "../../../util/evlist.h"
+#include "../../../util/evsel.h"
+#include "../../../util/hisi-ptt.h"
+#include "../../../util/pmu.h"
+#include "../../../util/record.h"
+#include "../../../util/session.h"
+#include "../../../util/tsc.h"
+
+#define KiB(x) ((x) * 1024)
+#define MiB(x) ((x) * 1024 * 1024)
+
+struct hisi_ptt_recording {
+       struct auxtrace_record  itr;
+       struct perf_pmu *hisi_ptt_pmu;
+       struct evlist *evlist;
+};
+
+static size_t
+hisi_ptt_info_priv_size(struct auxtrace_record *itr __maybe_unused,
+                       struct evlist *evlist __maybe_unused)
+{
+       return HISI_PTT_AUXTRACE_PRIV_SIZE;
+}
+
+static int hisi_ptt_info_fill(struct auxtrace_record *itr,
+                             struct perf_session *session,
+                             struct perf_record_auxtrace_info *auxtrace_info,
+                             size_t priv_size)
+{
+       struct hisi_ptt_recording *pttr =
+                       container_of(itr, struct hisi_ptt_recording, itr);
+       struct perf_pmu *hisi_ptt_pmu = pttr->hisi_ptt_pmu;
+
+       if (priv_size != HISI_PTT_AUXTRACE_PRIV_SIZE)
+               return -EINVAL;
+
+       if (!session->evlist->core.nr_mmaps)
+               return -EINVAL;
+
+       auxtrace_info->type = PERF_AUXTRACE_HISI_PTT;
+       auxtrace_info->priv[0] = hisi_ptt_pmu->type;
+
+       return 0;
+}
+
+static int hisi_ptt_set_auxtrace_mmap_page(struct record_opts *opts)
+{
+       bool privileged = perf_event_paranoid_check(-1);
+
+       if (!opts->full_auxtrace)
+               return 0;
+
+       if (opts->full_auxtrace && !opts->auxtrace_mmap_pages) {
+               if (privileged) {
+                       opts->auxtrace_mmap_pages = MiB(16) / page_size;
+               } else {
+                       opts->auxtrace_mmap_pages = KiB(128) / page_size;
+                       if (opts->mmap_pages == UINT_MAX)
+                               opts->mmap_pages = KiB(256) / page_size;
+               }
+       }
+
+       /* Validate auxtrace_mmap_pages */
+       if (opts->auxtrace_mmap_pages) {
+               size_t sz = opts->auxtrace_mmap_pages * (size_t)page_size;
+               size_t min_sz = KiB(8);
+
+               if (sz < min_sz || !is_power_of_2(sz)) {
+                       pr_err("Invalid mmap size for HISI PTT: must be at least %zuKiB and a power of 2\n",
+                              min_sz / 1024);
+                       return -EINVAL;
+               }
+       }
+
+       return 0;
+}
+
+static int hisi_ptt_recording_options(struct auxtrace_record *itr,
+                                     struct evlist *evlist,
+                                     struct record_opts *opts)
+{
+       struct hisi_ptt_recording *pttr =
+                       container_of(itr, struct hisi_ptt_recording, itr);
+       struct perf_pmu *hisi_ptt_pmu = pttr->hisi_ptt_pmu;
+       struct evsel *evsel, *hisi_ptt_evsel = NULL;
+       struct evsel *tracking_evsel;
+       int err;
+
+       pttr->evlist = evlist;
+       evlist__for_each_entry(evlist, evsel) {
+               if (evsel->core.attr.type == hisi_ptt_pmu->type) {
+                       if (hisi_ptt_evsel) {
+                               pr_err("There may be only one " HISI_PTT_PMU_NAME "x event\n");
+                               return -EINVAL;
+                       }
+                       evsel->core.attr.freq = 0;
+                       evsel->core.attr.sample_period = 1;
+                       evsel->needs_auxtrace_mmap = true;
+                       hisi_ptt_evsel = evsel;
+                       opts->full_auxtrace = true;
+               }
+       }
+
+       err = hisi_ptt_set_auxtrace_mmap_page(opts);
+       if (err)
+               return err;
+       /*
+        * To obtain the auxtrace buffer file descriptor, the auxtrace event
+        * must come first.
+        */
+       evlist__to_front(evlist, hisi_ptt_evsel);
+       evsel__set_sample_bit(hisi_ptt_evsel, TIME);
+
+       /* Add dummy event to keep tracking */
+       err = parse_event(evlist, "dummy:u");
+       if (err)
+               return err;
+
+       tracking_evsel = evlist__last(evlist);
+       evlist__set_tracking_event(evlist, tracking_evsel);
+
+       tracking_evsel->core.attr.freq = 0;
+       tracking_evsel->core.attr.sample_period = 1;
+       evsel__set_sample_bit(tracking_evsel, TIME);
+
+       return 0;
+}
+
+static u64 hisi_ptt_reference(struct auxtrace_record *itr __maybe_unused)
+{
+       return rdtsc();
+}
+
+static void hisi_ptt_recording_free(struct auxtrace_record *itr)
+{
+       struct hisi_ptt_recording *pttr =
+                       container_of(itr, struct hisi_ptt_recording, itr);
+
+       free(pttr);
+}
+
+struct auxtrace_record *hisi_ptt_recording_init(int *err,
+                                               struct perf_pmu *hisi_ptt_pmu)
+{
+       struct hisi_ptt_recording *pttr;
+
+       if (!hisi_ptt_pmu) {
+               *err = -ENODEV;
+               return NULL;
+       }
+
+       pttr = zalloc(sizeof(*pttr));
+       if (!pttr) {
+               *err = -ENOMEM;
+               return NULL;
+       }
+
+       pttr->hisi_ptt_pmu = hisi_ptt_pmu;
+       pttr->itr.pmu = hisi_ptt_pmu;
+       pttr->itr.recording_options = hisi_ptt_recording_options;
+       pttr->itr.info_priv_size = hisi_ptt_info_priv_size;
+       pttr->itr.info_fill = hisi_ptt_info_fill;
+       pttr->itr.free = hisi_ptt_recording_free;
+       pttr->itr.reference = hisi_ptt_reference;
+       pttr->itr.read_finish = auxtrace_record__read_finish;
+       pttr->itr.alignment = 0;
+
+       *err = 0;
+       return &pttr->itr;
+}
index 793b35f..af102f4 100644 (file)
@@ -866,7 +866,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
                 * User space tasks can migrate between CPUs, so when tracing
                 * selected CPUs, sideband for all CPUs is still needed.
                 */
-               need_system_wide_tracking = evlist->core.has_user_cpus &&
+               need_system_wide_tracking = opts->target.cpu_list &&
                                            !intel_pt_evsel->core.attr.exclude_user;
 
                tracking_evsel = evlist__add_aux_dummy(evlist, need_system_wide_tracking);
index 744dd35..58e1ec1 100644 (file)
@@ -60,7 +60,7 @@ int cmd_list(int argc, const char **argv)
        setup_pager();
 
        if (!raw_dump && pager_in_use())
-               printf("\nList of pre-defined events (to be used in -e):\n\n");
+               printf("\nList of pre-defined events (to be used in -e or -M):\n\n");
 
        if (hybrid_type) {
                pmu_name = perf_pmu__hybrid_type_to_pmu(hybrid_type);
index f7dd821..923fb83 100644 (file)
@@ -97,6 +97,9 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
        else
                rec_argc = argc + 9 * perf_pmu__hybrid_pmu_num();
 
+       if (mem->cpu_list)
+               rec_argc += 2;
+
        rec_argv = calloc(rec_argc + 1, sizeof(char *));
        if (!rec_argv)
                return -1;
@@ -159,6 +162,11 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
        if (all_kernel)
                rec_argv[i++] = "--all-kernel";
 
+       if (mem->cpu_list) {
+               rec_argv[i++] = "-C";
+               rec_argv[i++] = mem->cpu_list;
+       }
+
        for (j = 0; j < argc; j++, i++)
                rec_argv[i] = argv[j];
 
index 8c10955..3ef07a1 100644 (file)
@@ -9,7 +9,7 @@ size=128
 config=0
 sample_period=*
 sample_type=263
-read_format=0|4
+read_format=0|4|20
 disabled=1
 inherit=1
 pinned=0
index 86a15dd..8fec06e 100644 (file)
@@ -11,7 +11,7 @@ size=128
 config=9
 sample_period=4000
 sample_type=455
-read_format=4
+read_format=4|20
 # Event will be enabled right away.
 disabled=0
 inherit=1
index 14ee60f..6c1cff8 100644 (file)
@@ -7,14 +7,14 @@ ret     = 1
 fd=1
 group_fd=-1
 sample_type=327
-read_format=4
+read_format=4|20
 
 [event-2:base-record]
 fd=2
 group_fd=1
 config=1
 sample_type=327
-read_format=4
+read_format=4|20
 mmap=0
 comm=0
 task=0
index 300b9f7..97e7e64 100644 (file)
@@ -7,7 +7,7 @@ ret     = 1
 fd=1
 group_fd=-1
 sample_type=343
-read_format=12
+read_format=12|28
 inherit=0
 
 [event-2:base-record]
@@ -21,8 +21,8 @@ config=3
 # default | PERF_SAMPLE_READ
 sample_type=343
 
-# PERF_FORMAT_ID | PERF_FORMAT_GROUP
-read_format=12
+# PERF_FORMAT_ID | PERF_FORMAT_GROUP | PERF_FORMAT_LOST
+read_format=12|28
 task=0
 mmap=0
 comm=0
index 3ffe246..eeb1db3 100644 (file)
@@ -7,7 +7,7 @@ ret     = 1
 fd=1
 group_fd=-1
 sample_type=327
-read_format=4
+read_format=4|20
 
 [event-2:base-record]
 fd=2
@@ -15,7 +15,7 @@ group_fd=1
 type=0
 config=1
 sample_type=327
-read_format=4
+read_format=4|20
 mmap=0
 comm=0
 task=0
index 6b9f8d1..cebdaa8 100644 (file)
@@ -9,7 +9,7 @@ group_fd=-1
 config=0|1
 sample_period=1234000
 sample_type=87
-read_format=12
+read_format=12|28
 inherit=0
 freq=0
 
@@ -19,7 +19,7 @@ group_fd=1
 config=0|1
 sample_period=6789000
 sample_type=87
-read_format=12
+read_format=12|28
 disabled=0
 inherit=0
 mmap=0
index eb5196f..b7f050a 100755 (executable)
@@ -6,6 +6,8 @@
 
 set -e
 
+skip_test=0
+
 function commachecker()
 {
        local -i cnt=0
@@ -156,14 +158,47 @@ check_per_socket()
        echo "[Success]"
 }
 
+# The perf stat options for per-socket, per-core, per-die
+# and -A (no_aggr mode) use the info fetched from this
+# directory: "/sys/devices/system/cpu/cpu*/topology". For
+# example, the socket value is fetched from the
+# "physical_package_id" file in the topology directory.
+# Reference: cpu__get_topology_int in util/cpumap.c
+# If the platform doesn't expose topology information, values
+# will be set to -1. For example, in case of the pSeries
+# platform on powerpc, the value of "physical_package_id" is
+# restricted and set to -1. The check here validates the
+# socket-id read from the topology file before proceeding further.

+
+FILE_LOC="/sys/devices/system/cpu/cpu*/topology/"
+FILE_NAME="physical_package_id"
+
+check_for_topology()
+{
+       if ! ParanoidAndNotRoot 0
+       then
+               socket_file=`ls $FILE_LOC/$FILE_NAME | head -n 1`
+               [ -z $socket_file ] && return 0
+               socket_id=`cat $socket_file`
+               [ $socket_id == -1 ] && skip_test=1
+               return 0
+       fi
+}
+
+check_for_topology
 check_no_args
 check_system_wide
-check_system_wide_no_aggr
 check_interval
 check_event
-check_per_core
 check_per_thread
-check_per_die
 check_per_node
-check_per_socket
+if [ $skip_test -ne 1 ]
+then
+       check_system_wide_no_aggr
+       check_per_core
+       check_per_die
+       check_per_socket
+else
+       echo "[Skip] Skipping tests for system_wide_no_aggr, per_core, per_die and per_socket since socket id exposed via topology is invalid"
+fi
 exit 0
index ea8714a..2c4212c 100755 (executable)
@@ -6,6 +6,8 @@
 
 set -e
 
+skip_test=0
+
 pythonchecker=$(dirname $0)/lib/perf_json_output_lint.py
 if [ "x$PYTHON" == "x" ]
 then
@@ -134,14 +136,47 @@ check_per_socket()
        echo "[Success]"
 }
 
+# The perf stat options for per-socket, per-core, per-die
+# and -A (no_aggr mode) use the info fetched from this
+# directory: "/sys/devices/system/cpu/cpu*/topology". For
+# example, the socket value is fetched from the
+# "physical_package_id" file in the topology directory.
+# Reference: cpu__get_topology_int in util/cpumap.c
+# If the platform doesn't expose topology information, values
+# will be set to -1. For example, in case of the pSeries
+# platform on powerpc, the value of "physical_package_id" is
+# restricted and set to -1. The check here validates the
+# socket-id read from the topology file before proceeding further.
+
+FILE_LOC="/sys/devices/system/cpu/cpu*/topology/"
+FILE_NAME="physical_package_id"
+
+check_for_topology()
+{
+       if ! ParanoidAndNotRoot 0
+       then
+               socket_file=`ls $FILE_LOC/$FILE_NAME | head -n 1`
+               [ -z $socket_file ] && return 0
+               socket_id=`cat $socket_file`
+               [ $socket_id == -1 ] && skip_test=1
+               return 0
+       fi
+}
+
+check_for_topology
 check_no_args
 check_system_wide
-check_system_wide_no_aggr
 check_interval
 check_event
-check_per_core
 check_per_thread
-check_per_die
 check_per_node
-check_per_socket
+if [ $skip_test -ne 1 ]
+then
+       check_system_wide_no_aggr
+       check_per_core
+       check_per_die
+       check_per_socket
+else
+       echo "[Skip] Skipping tests for system_wide_no_aggr, per_core, per_die and per_socket since socket id exposed via topology is invalid"
+fi
 exit 0
index e4cb4f1..daad786 100755 (executable)
@@ -70,7 +70,7 @@ perf_report_instruction_samples() {
        #   68.12%  touch    libc-2.27.so   [.] _dl_addr
        #    5.80%  touch    libc-2.27.so   [.] getenv
        #    4.35%  touch    ld-2.27.so     [.] _dl_fixup
-       perf report --itrace=i1000i --stdio -i ${perfdata} 2>&1 | \
+       perf report --itrace=i20i --stdio -i ${perfdata} 2>&1 | \
                egrep " +[0-9]+\.[0-9]+% +$1" > /dev/null 2>&1
 }
 
index efaad95..4c0aabb 100755 (executable)
@@ -22,6 +22,8 @@ outfile="${temp_dir}/test-out.txt"
 errfile="${temp_dir}/test-err.txt"
 workload="${temp_dir}/workload"
 awkscript="${temp_dir}/awkscript"
+jitdump_workload="${temp_dir}/jitdump_workload"
+maxbrstack="${temp_dir}/maxbrstack.py"
 
 cleanup()
 {
@@ -42,6 +44,21 @@ trap_cleanup()
 
 trap trap_cleanup EXIT TERM INT
 
+# perf record for testing without decoding
+perf_record_no_decode()
+{
+       # Options to speed up recording: no post-processing, no build-id cache update,
+       # and no BPF events.
+       perf record -B -N --no-bpf-event "$@"
+}
+
+# perf record for testing should not need BPF events
+perf_record_no_bpf()
+{
+       # Options for no BPF events
+       perf record --no-bpf-event "$@"
+}
+
 have_workload=false
 cat << _end_of_file_ | /usr/bin/cc -o "${workload}" -xc - -pthread && have_workload=true
 #include <time.h>
@@ -76,7 +93,7 @@ _end_of_file_
 can_cpu_wide()
 {
        echo "Checking for CPU-wide recording on CPU $1"
-       if ! perf record -o "${tmpfile}" -B -N --no-bpf-event -e dummy:u -C "$1" true >/dev/null 2>&1 ; then
+       if ! perf_record_no_decode -o "${tmpfile}" -e dummy:u -C "$1" true >/dev/null 2>&1 ; then
                echo "No so skipping"
                return 2
        fi
@@ -93,7 +110,7 @@ test_system_wide_side_band()
        can_cpu_wide 1 || return $?
 
        # Record on CPU 0 a task running on CPU 1
-       perf record -B -N --no-bpf-event -o "${perfdatafile}" -e intel_pt//u -C 0 -- taskset --cpu-list 1 uname
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt//u -C 0 -- taskset --cpu-list 1 uname
 
        # Should get MMAP events from CPU 1 because they can be needed to decode
        mmap_cnt=$(perf script -i "${perfdatafile}" --no-itrace --show-mmap-events -C 1 2>/dev/null | grep -c MMAP)
@@ -109,7 +126,14 @@ test_system_wide_side_band()
 
 can_kernel()
 {
-       perf record -o "${tmpfile}" -B -N --no-bpf-event -e dummy:k true >/dev/null 2>&1 || return 2
+       if [ -z "${can_kernel_trace}" ] ; then
+               can_kernel_trace=0
+               perf_record_no_decode -o "${tmpfile}" -e dummy:k true >/dev/null 2>&1 && can_kernel_trace=1
+       fi
+       if [ ${can_kernel_trace} -eq 0 ] ; then
+               echo "SKIP: no kernel tracing"
+               return 2
+       fi
        return 0
 }
 
@@ -235,7 +259,7 @@ test_per_thread()
        wait_for_threads ${w1} 2
        wait_for_threads ${w2} 2
 
-       perf record -B -N --no-bpf-event -o "${perfdatafile}" -e intel_pt//u"${k}" -vvv --per-thread -p "${w1},${w2}" 2>"${errfile}" >"${outfile}" &
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt//u"${k}" -vvv --per-thread -p "${w1},${w2}" 2>"${errfile}" >"${outfile}" &
        ppid=$!
        echo "perf PID is $ppid"
        wait_for_perf_to_start ${ppid} "${errfile}" || return 1
@@ -254,6 +278,342 @@ test_per_thread()
        return 0
 }
 
+test_jitdump()
+{
+       echo "--- Test tracing self-modifying code that uses jitdump ---"
+
+       script_path=$(realpath "$0")
+       script_dir=$(dirname "$script_path")
+       jitdump_incl_dir="${script_dir}/../../util"
+       jitdump_h="${jitdump_incl_dir}/jitdump.h"
+
+       if [ ! -e "${jitdump_h}" ] ; then
+               echo "SKIP: Include file jitdump.h not found"
+               return 2
+       fi
+
+       if [ -z "${have_jitdump_workload}" ] ; then
+               have_jitdump_workload=false
+               # Create a workload that uses self-modifying code and generates its own jitdump file
+               cat <<- "_end_of_file_" | /usr/bin/cc -o "${jitdump_workload}" -I "${jitdump_incl_dir}" -xc - -pthread && have_jitdump_workload=true
+               #define _GNU_SOURCE
+               #include <sys/mman.h>
+               #include <sys/types.h>
+               #include <stddef.h>
+               #include <stdio.h>
+               #include <stdint.h>
+               #include <unistd.h>
+               #include <string.h>
+
+               #include "jitdump.h"
+
+               #define CHK_BYTE 0x5a
+
+               static inline uint64_t rdtsc(void)
+               {
+                       unsigned int low, high;
+
+                       asm volatile("rdtsc" : "=a" (low), "=d" (high));
+
+                       return low | ((uint64_t)high) << 32;
+               }
+
+               static FILE *open_jitdump(void)
+               {
+                       struct jitheader header = {
+                               .magic      = JITHEADER_MAGIC,
+                               .version    = JITHEADER_VERSION,
+                               .total_size = sizeof(header),
+                               .pid        = getpid(),
+                               .timestamp  = rdtsc(),
+                               .flags      = JITDUMP_FLAGS_ARCH_TIMESTAMP,
+                       };
+                       char filename[256];
+                       FILE *f;
+                       void *m;
+
+                       snprintf(filename, sizeof(filename), "jit-%d.dump", getpid());
+                       f = fopen(filename, "w+");
+                       if (!f)
+                               goto err;
+                       /* Create an MMAP event for the jitdump file. That is how perf tool finds it. */
+                       m = mmap(0, 4096, PROT_READ | PROT_EXEC, MAP_PRIVATE, fileno(f), 0);
+                       if (m == MAP_FAILED)
+                               goto err_close;
+                       munmap(m, 4096);
+                       if (fwrite(&header,sizeof(header),1,f) != 1)
+                               goto err_close;
+                       return f;
+
+               err_close:
+                       fclose(f);
+               err:
+                       return NULL;
+               }
+
+               static int write_jitdump(FILE *f, void *addr, const uint8_t *dat, size_t sz, uint64_t *idx)
+               {
+                       struct jr_code_load rec = {
+                               .p.id          = JIT_CODE_LOAD,
+                               .p.total_size  = sizeof(rec) + sz,
+                               .p.timestamp   = rdtsc(),
+                               .pid           = getpid(),
+                               .tid           = gettid(),
+                               .vma           = (unsigned long)addr,
+                               .code_addr     = (unsigned long)addr,
+                               .code_size     = sz,
+                               .code_index    = ++*idx,
+                       };
+
+                       if (fwrite(&rec,sizeof(rec),1,f) != 1 ||
+                       fwrite(dat, sz, 1, f) != 1)
+                               return -1;
+                       return 0;
+               }
+
+               static void close_jitdump(FILE *f)
+               {
+                       fclose(f);
+               }
+
+               int main()
+               {
+                       /* Get a memory page to store executable code */
+                       void *addr = mmap(0, 4096, PROT_WRITE | PROT_EXEC, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+                       /* Code to execute: mov CHK_BYTE, %eax ; ret */
+                       uint8_t dat[] = {0xb8, CHK_BYTE, 0x00, 0x00, 0x00, 0xc3};
+                       FILE *f = open_jitdump();
+                       uint64_t idx = 0;
+                       int ret = 1;
+
+                       if (!f)
+                               return 1;
+                       /* Copy executable code to executable memory page */
+                       memcpy(addr, dat, sizeof(dat));
+                       /* Record it in the jitdump file */
+                       if (write_jitdump(f, addr, dat, sizeof(dat), &idx))
+                               goto out_close;
+                       /* Call it */
+                       ret = ((int (*)(void))addr)() - CHK_BYTE;
+               out_close:
+                       close_jitdump(f);
+                       return ret;
+               }
+               _end_of_file_
+       fi
+
+       if ! $have_jitdump_workload ; then
+               echo "SKIP: No jitdump workload"
+               return 2
+       fi
+
+       # Change to temp_dir so jitdump collateral files go there
+       cd "${temp_dir}"
+       perf_record_no_bpf -o "${tmpfile}" -e intel_pt//u "${jitdump_workload}"
+       perf inject -i "${tmpfile}" -o "${perfdatafile}" --jit
+       decode_br_cnt=$(perf script -i "${perfdatafile}" --itrace=b | wc -l)
+       # Note that overflow and lost errors are suppressed for the error count
+       decode_err_cnt=$(perf script -i "${perfdatafile}" --itrace=e-o-l | grep -ci error)
+       cd -
+       # Should be thousands of branches
+       if [ "${decode_br_cnt}" -lt 1000 ] ; then
+               echo "Decode failed, only ${decode_br_cnt} branches"
+               return 1
+       fi
+       # Should be no errors
+       if [ "${decode_err_cnt}" -ne 0 ] ; then
+               echo "Decode failed, ${decode_err_cnt} errors"
+               perf script -i "${perfdatafile}" --itrace=e-o-l --show-mmap-events | cat
+               return 1
+       fi
+
+       echo OK
+       return 0
+}
+
+test_packet_filter()
+{
+       echo "--- Test with MTC and TSC disabled ---"
+       # Disable MTC and TSC
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt/mtc=0,tsc=0/u uname
+       # Should not get MTC packet
+       mtc_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "MTC 0x")
+       if [ "${mtc_cnt}" -ne 0 ] ; then
+               echo "Failed to filter with mtc=0"
+               return 1
+       fi
+       # Should not get TSC package
+       tsc_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "TSC 0x")
+       if [ "${tsc_cnt}" -ne 0 ] ; then
+               echo "Failed to filter with tsc=0"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_disable_branch()
+{
+       echo "--- Test with branches disabled ---"
+       # Disable branch
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt/branch=0/u uname
+       # Should not get branch related packets
+       tnt_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "TNT 0x")
+       tip_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "TIP 0x")
+       fup_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "FUP 0x")
+       if [ "${tnt_cnt}" -ne 0 ] || [ "${tip_cnt}" -ne 0 ] || [ "${fup_cnt}" -ne 0 ] ; then
+               echo "Failed to disable branches"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_time_cyc()
+{
+       echo "--- Test with/without CYC ---"
+       # Check if CYC is supported
+       cyc=$(cat /sys/bus/event_source/devices/intel_pt/caps/psb_cyc)
+       if [ "${cyc}" != "1" ] ; then
+               echo "SKIP: CYC is not supported"
+               return 2
+       fi
+       # Enable CYC
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt/cyc/u uname
+       # should get CYC packets
+       cyc_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "CYC 0x")
+       if [ "${cyc_cnt}" = "0" ] ; then
+               echo "Failed to get CYC packet"
+               return 1
+       fi
+       # Without CYC
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt//u uname
+       # Should not get CYC packets
+       cyc_cnt=$(perf script -i "${perfdatafile}" -D 2>/dev/null | grep -c "CYC 0x")
+       if [ "${cyc_cnt}" -gt 0 ] ; then
+               echo "Still get CYC packet without cyc"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_sample()
+{
+       echo "--- Test recording with sample mode ---"
+       # Check if recording with sample mode is working
+       if ! perf_record_no_decode -o "${perfdatafile}" --aux-sample=8192 -e '{intel_pt//u,branch-misses:u}' uname ; then
+               echo "perf record failed with --aux-sample"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_kernel_trace()
+{
+       echo "--- Test with kernel trace ---"
+       # Check if recording with kernel trace is working
+       can_kernel || return 2
+       if ! perf_record_no_decode -o "${perfdatafile}" -e intel_pt//k -m1,128 uname ; then
+               echo "perf record failed with intel_pt//k"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_virtual_lbr()
+{
+       echo "--- Test virtual LBR ---"
+
+       # Python script to determine the maximum size of branch stacks
+       cat << "_end_of_file_" > "${maxbrstack}"
+from __future__ import print_function
+
+bmax = 0
+
+def process_event(param_dict):
+       if "brstack" in param_dict:
+               brstack = param_dict["brstack"]
+               n = len(brstack)
+               global bmax
+               if n > bmax:
+                       bmax = n
+
+def trace_end():
+       print("max brstack", bmax)
+_end_of_file_
+
+       # Check if virtual lbr is working
+       perf_record_no_bpf -o "${perfdatafile}" --aux-sample -e '{intel_pt//,cycles}:u' uname
+       times_val=$(perf script -i "${perfdatafile}" --itrace=L -s "${maxbrstack}" 2>/dev/null | grep "max brstack " | cut -d " " -f 3)
+       case "${times_val}" in
+               [0-9]*) ;;
+               *)      times_val=0;;
+       esac
+       if [ "${times_val}" -lt 2 ] ; then
+               echo "Failed with virtual lbr"
+               return 1
+       fi
+       echo OK
+       return 0
+}
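
The `times_val` extraction above pulls the third space-separated field out of the Python script's `max brstack N` line and falls back to 0 when it is not numeric; the same logic in isolation (the sample line is made up):

```shell
line="max brstack 7"   # stand-in for the perf script output
val=$(printf '%s\n' "$line" | grep "max brstack " | cut -d " " -f 3)
case "${val}" in
        [0-9]*) ;;      # keep numeric values
        *)      val=0;; # anything else (e.g. empty) becomes 0
esac
echo "$val"             # prints 7
```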
+
+test_power_event()
+{
+       echo "--- Test power events ---"
+       # Check if power events are supported
+       power_event=$(cat /sys/bus/event_source/devices/intel_pt/caps/power_event_trace)
+       if [ "${power_event}" != "1" ] ; then
+               echo "SKIP: power_event_trace is not supported"
+               return 2
+       fi
+       if ! perf_record_no_decode -o "${perfdatafile}" -a -e intel_pt/pwr_evt/u uname ; then
+               echo "perf record failed with pwr_evt"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_no_tnt()
+{
+       echo "--- Test with TNT packets disabled ---"
+       # Check if TNT disable is supported
+       notnt=$(cat /sys/bus/event_source/devices/intel_pt/caps/tnt_disable)
+       if [ "${notnt}" != "1" ] ; then
+               echo "SKIP: tnt_disable is not supported"
+               return 2
+       fi
+       perf_record_no_decode -o "${perfdatafile}" -e intel_pt/notnt/u uname
+       # Should be no TNT packets
+       tnt_cnt=$(perf script -i "${perfdatafile}" -D | grep -c TNT)
+       if [ "${tnt_cnt}" -ne 0 ] ; then
+               echo "TNT packets still there after notnt"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
+test_event_trace()
+{
+       echo "--- Test with event_trace ---"
+       # Check if event_trace is supported
+       event_trace=$(cat /sys/bus/event_source/devices/intel_pt/caps/event_trace)
+       if [ "${event_trace}" != 1 ] ; then
+               echo "SKIP: event_trace is not supported"
+               return 2
+       fi
+       if ! perf_record_no_decode -o "${perfdatafile}" -e intel_pt/event/u uname ; then
+               echo "perf record failed with event trace"
+               return 1
+       fi
+       echo OK
+       return 0
+}
+
 count_result()
 {
        if [ "$1" -eq 2 ] ; then
@@ -265,13 +625,22 @@ count_result()
                return
        fi
        err_cnt=$((err_cnt + 1))
-       ret=0
 }
 
 ret=0
-test_system_wide_side_band || ret=$? ; count_result $ret
-test_per_thread "" "" || ret=$? ; count_result $ret
-test_per_thread "k" "(incl. kernel) " || ret=$? ; count_result $ret
+test_system_wide_side_band             || ret=$? ; count_result $ret ; ret=0
+test_per_thread "" ""                  || ret=$? ; count_result $ret ; ret=0
+test_per_thread "k" "(incl. kernel) "  || ret=$? ; count_result $ret ; ret=0
+test_jitdump                           || ret=$? ; count_result $ret ; ret=0
+test_packet_filter                     || ret=$? ; count_result $ret ; ret=0
+test_disable_branch                    || ret=$? ; count_result $ret ; ret=0
+test_time_cyc                          || ret=$? ; count_result $ret ; ret=0
+test_sample                            || ret=$? ; count_result $ret ; ret=0
+test_kernel_trace                      || ret=$? ; count_result $ret ; ret=0
+test_virtual_lbr                       || ret=$? ; count_result $ret ; ret=0
+test_power_event                       || ret=$? ; count_result $ret ; ret=0
+test_no_tnt                            || ret=$? ; count_result $ret ; ret=0
+test_event_trace                       || ret=$? ; count_result $ret ; ret=0
 
 cleanup
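
The `ret=0` appended after each `count_result` call above is not cosmetic: `cmd || ret=$?` leaves `ret` untouched when `cmd` succeeds, so without the reset a single failure's exit code would be re-counted on every subsequent line. A tiny sketch of that behaviour:

```shell
ret=0
false || ret=$?     # the "test" fails, ret becomes 1
count=$ret          # count_result would record this failure
ret=0               # reset, so the next line starts clean
true  || ret=$?     # this "test" passes, the || branch is skipped
echo "$count $ret"  # prints: 1 0
```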
 
index 815d235..e315eca 100644 (file)
@@ -118,6 +118,8 @@ perf-$(CONFIG_AUXTRACE) += intel-pt.o
 perf-$(CONFIG_AUXTRACE) += intel-bts.o
 perf-$(CONFIG_AUXTRACE) += arm-spe.o
 perf-$(CONFIG_AUXTRACE) += arm-spe-decoder/
+perf-$(CONFIG_AUXTRACE) += hisi-ptt.o
+perf-$(CONFIG_AUXTRACE) += hisi-ptt-decoder/
 perf-$(CONFIG_AUXTRACE) += s390-cpumsf.o
 
 ifdef CONFIG_LIBOPENCSD
index b59c278..60d8beb 100644 (file)
@@ -52,6 +52,7 @@
 #include "intel-pt.h"
 #include "intel-bts.h"
 #include "arm-spe.h"
+#include "hisi-ptt.h"
 #include "s390-cpumsf.h"
 #include "util/mmap.h"
 
@@ -1320,6 +1321,9 @@ int perf_event__process_auxtrace_info(struct perf_session *session,
        case PERF_AUXTRACE_S390_CPUMSF:
                err = s390_cpumsf_process_auxtrace_info(event, session);
                break;
+       case PERF_AUXTRACE_HISI_PTT:
+               err = hisi_ptt_process_auxtrace_info(event, session);
+               break;
        case PERF_AUXTRACE_UNKNOWN:
        default:
                return -EINVAL;
index cb8e0a0..6a0f9b9 100644 (file)
@@ -48,6 +48,7 @@ enum auxtrace_type {
        PERF_AUXTRACE_CS_ETM,
        PERF_AUXTRACE_ARM_SPE,
        PERF_AUXTRACE_S390_CPUMSF,
+       PERF_AUXTRACE_HISI_PTT,
 };
 
 enum itrace_period_type {
index 435a875..6a438e0 100644 (file)
@@ -43,6 +43,18 @@ struct {
        __uint(value_size, sizeof(struct bpf_perf_event_value));
 } cgrp_readings SEC(".maps");
 
+/* new kernel cgroup definition */
+struct cgroup___new {
+       int level;
+       struct cgroup *ancestors[];
+} __attribute__((preserve_access_index));
+
+/* old kernel cgroup definition */
+struct cgroup___old {
+       int level;
+       u64 ancestor_ids[];
+} __attribute__((preserve_access_index));
+
 const volatile __u32 num_events = 1;
 const volatile __u32 num_cpus = 1;
 
@@ -50,6 +62,21 @@ int enabled = 0;
 int use_cgroup_v2 = 0;
 int perf_subsys_id = -1;
 
+static inline __u64 get_cgroup_v1_ancestor_id(struct cgroup *cgrp, int level)
+{
+       /* recast pointer to capture new type for compiler */
+       struct cgroup___new *cgrp_new = (void *)cgrp;
+
+       if (bpf_core_field_exists(cgrp_new->ancestors)) {
+               return BPF_CORE_READ(cgrp_new, ancestors[level], kn, id);
+       } else {
+               /* recast pointer to capture old type for compiler */
+               struct cgroup___old *cgrp_old = (void *)cgrp;
+
+               return BPF_CORE_READ(cgrp_old, ancestor_ids[level]);
+       }
+}
+
 static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
 {
        struct task_struct *p = (void *)bpf_get_current_task();
@@ -77,7 +104,7 @@ static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
                        break;
 
                // convert cgroup-id to a map index
-               cgrp_id = BPF_CORE_READ(cgrp, ancestors[i], kn, id);
+               cgrp_id = get_cgroup_v1_ancestor_id(cgrp, i);
                elem = bpf_map_lookup_elem(&cgrp_idx, &cgrp_id);
                if (!elem)
                        continue;
index b5c9095..6af062d 100644 (file)
@@ -2,6 +2,8 @@
 #ifndef __GENELF_H__
 #define __GENELF_H__
 
+#include <linux/math.h>
+
 /* genelf.c */
 int jit_write_elf(int fd, uint64_t code_addr, const char *sym,
                  const void *code, int csize, void *debug, int nr_debug_entries,
@@ -76,6 +78,6 @@ int jit_add_debug_info(Elf *e, uint64_t code_addr, void *debug, int nr_debug_ent
 #endif
 
 /* The .text section is directly after the ELF header */
-#define GEN_ELF_TEXT_OFFSET sizeof(Elf_Ehdr)
+#define GEN_ELF_TEXT_OFFSET round_up(sizeof(Elf_Ehdr) + sizeof(Elf_Phdr), 16)
 
 #endif
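
The GEN_ELF_TEXT_OFFSET change above aligns the start of .text past both the ELF header and one program header. Assuming a 64-bit ELF (Elf64_Ehdr is 64 bytes, Elf64_Phdr is 56 bytes), the section lands at offset 128; the `round_up` arithmetic sketched in shell:

```shell
# round_up(x, align): smallest multiple of align that is >= x
round_up() { echo $(( ($1 + $2 - 1) / $2 * $2 )); }
round_up $(( 64 + 56 )) 16   # Elf64_Ehdr + Elf64_Phdr = 120 -> prints 128
```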
diff --git a/tools/perf/util/hisi-ptt-decoder/Build b/tools/perf/util/hisi-ptt-decoder/Build
new file mode 100644 (file)
index 0000000..db3db8b
--- /dev/null
@@ -0,0 +1 @@
+perf-$(CONFIG_AUXTRACE) += hisi-ptt-pkt-decoder.o
diff --git a/tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.c b/tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.c
new file mode 100644 (file)
index 0000000..a17c423
--- /dev/null
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HiSilicon PCIe Trace and Tuning (PTT) support
+ * Copyright (c) 2022 HiSilicon Technologies Co., Ltd.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <endian.h>
+#include <byteswap.h>
+#include <linux/bitops.h>
+#include <stdarg.h>
+
+#include "../color.h"
+#include "hisi-ptt-pkt-decoder.h"
+
+/*
+ * For 8DW format, the bit[31:11] of DW0 is always 0x1fffff, which can be
+ * used to distinguish the data format.
+ * 8DW format is like:
+ *   bits [                 31:11                 ][       10:0       ]
+ *        |---------------------------------------|-------------------|
+ *    DW0 [                0x1fffff               ][ Reserved (0x7ff) ]
+ *    DW1 [                       Prefix                              ]
+ *    DW2 [                     Header DW0                            ]
+ *    DW3 [                     Header DW1                            ]
+ *    DW4 [                     Header DW2                            ]
+ *    DW5 [                     Header DW3                            ]
+ *    DW6 [                   Reserved (0x0)                          ]
+ *    DW7 [                        Time                               ]
+ *
+ * 4DW format is like:
+ *   bits [31:30] [ 29:25 ][24][23][22][21][    20:11   ][    10:0    ]
+ *        |-----|---------|---|---|---|---|-------------|-------------|
+ *    DW0 [ Fmt ][  Type  ][T9][T8][TH][SO][   Length   ][    Time    ]
+ *    DW1 [                     Header DW1                            ]
+ *    DW2 [                     Header DW2                            ]
+ *    DW3 [                     Header DW3                            ]
+ */
+
+enum hisi_ptt_8dw_pkt_field_type {
+       HISI_PTT_8DW_CHK_AND_RSV0,
+       HISI_PTT_8DW_PREFIX,
+       HISI_PTT_8DW_HEAD0,
+       HISI_PTT_8DW_HEAD1,
+       HISI_PTT_8DW_HEAD2,
+       HISI_PTT_8DW_HEAD3,
+       HISI_PTT_8DW_RSV1,
+       HISI_PTT_8DW_TIME,
+       HISI_PTT_8DW_TYPE_MAX
+};
+
+enum hisi_ptt_4dw_pkt_field_type {
+       HISI_PTT_4DW_HEAD1,
+       HISI_PTT_4DW_HEAD2,
+       HISI_PTT_4DW_HEAD3,
+       HISI_PTT_4DW_TYPE_MAX
+};
+
+static const char * const hisi_ptt_8dw_pkt_field_name[] = {
+       [HISI_PTT_8DW_PREFIX]   = "Prefix",
+       [HISI_PTT_8DW_HEAD0]    = "Header DW0",
+       [HISI_PTT_8DW_HEAD1]    = "Header DW1",
+       [HISI_PTT_8DW_HEAD2]    = "Header DW2",
+       [HISI_PTT_8DW_HEAD3]    = "Header DW3",
+       [HISI_PTT_8DW_TIME]     = "Time"
+};
+
+static const char * const hisi_ptt_4dw_pkt_field_name[] = {
+       [HISI_PTT_4DW_HEAD1]    = "Header DW1",
+       [HISI_PTT_4DW_HEAD2]    = "Header DW2",
+       [HISI_PTT_4DW_HEAD3]    = "Header DW3",
+};
+
+union hisi_ptt_4dw {
+       struct {
+               uint32_t format : 2;
+               uint32_t type : 5;
+               uint32_t t9 : 1;
+               uint32_t t8 : 1;
+               uint32_t th : 1;
+               uint32_t so : 1;
+               uint32_t len : 10;
+               uint32_t time : 11;
+       };
+       uint32_t value;
+};
+
+static void hisi_ptt_print_pkt(const unsigned char *buf, int pos, const char *desc)
+{
+       const char *color = PERF_COLOR_BLUE;
+       int i;
+
+       printf(".");
+       color_fprintf(stdout, color, "  %08x: ", pos);
+       for (i = 0; i < HISI_PTT_FIELD_LENTH; i++)
+               color_fprintf(stdout, color, "%02x ", buf[pos + i]);
+       for (i = 0; i < HISI_PTT_MAX_SPACE_LEN; i++)
+               color_fprintf(stdout, color, "   ");
+       color_fprintf(stdout, color, "  %s\n", desc);
+}
+
+static int hisi_ptt_8dw_kpt_desc(const unsigned char *buf, int pos)
+{
+       int i;
+
+       for (i = 0; i < HISI_PTT_8DW_TYPE_MAX; i++) {
+               /* Do not show 8DW check field and reserved fields */
+               if (i == HISI_PTT_8DW_CHK_AND_RSV0 || i == HISI_PTT_8DW_RSV1) {
+                       pos += HISI_PTT_FIELD_LENTH;
+                       continue;
+               }
+
+               hisi_ptt_print_pkt(buf, pos, hisi_ptt_8dw_pkt_field_name[i]);
+               pos += HISI_PTT_FIELD_LENTH;
+       }
+
+       return hisi_ptt_pkt_size[HISI_PTT_8DW_PKT];
+}
+
+static void hisi_ptt_4dw_print_dw0(const unsigned char *buf, int pos)
+{
+       const char *color = PERF_COLOR_BLUE;
+       union hisi_ptt_4dw dw0;
+       int i;
+
+       dw0.value = *(uint32_t *)(buf + pos);
+       printf(".");
+       color_fprintf(stdout, color, "  %08x: ", pos);
+       for (i = 0; i < HISI_PTT_FIELD_LENTH; i++)
+               color_fprintf(stdout, color, "%02x ", buf[pos + i]);
+       for (i = 0; i < HISI_PTT_MAX_SPACE_LEN; i++)
+               color_fprintf(stdout, color, "   ");
+
+       color_fprintf(stdout, color,
+                     "  %s %x %s %x %s %x %s %x %s %x %s %x %s %x %s %x\n",
+                     "Format", dw0.format, "Type", dw0.type, "T9", dw0.t9,
+                     "T8", dw0.t8, "TH", dw0.th, "SO", dw0.so, "Length",
+                     dw0.len, "Time", dw0.time);
+}
+
+static int hisi_ptt_4dw_kpt_desc(const unsigned char *buf, int pos)
+{
+       int i;
+
+       hisi_ptt_4dw_print_dw0(buf, pos);
+       pos += HISI_PTT_FIELD_LENTH;
+
+       for (i = 0; i < HISI_PTT_4DW_TYPE_MAX; i++) {
+               hisi_ptt_print_pkt(buf, pos, hisi_ptt_4dw_pkt_field_name[i]);
+               pos += HISI_PTT_FIELD_LENTH;
+       }
+
+       return hisi_ptt_pkt_size[HISI_PTT_4DW_PKT];
+}
+
+int hisi_ptt_pkt_desc(const unsigned char *buf, int pos, enum hisi_ptt_pkt_type type)
+{
+       if (type == HISI_PTT_8DW_PKT)
+               return hisi_ptt_8dw_kpt_desc(buf, pos);
+
+       return hisi_ptt_4dw_kpt_desc(buf, pos);
+}
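
A quick sketch of the 8DW/4DW discrimination implemented above: a packet is 8DW exactly when bits [31:11] of its first DW are all ones, i.e. the masked value equals the mask itself (the DW0 value below is fabricated):

```shell
mask=$(( 0x1fffff << 11 ))   # GENMASK(31, 11) == 0xfffff800
dw0=$(( 0xfffff9ab ))        # made-up DW0 with the check field all ones
if [ $(( dw0 & mask )) -eq "$mask" ]; then
        echo "8DW packet (32 bytes)"
else
        echo "4DW packet (16 bytes)"
fi
```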
diff --git a/tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.h b/tools/perf/util/hisi-ptt-decoder/hisi-ptt-pkt-decoder.h
new file mode 100644 (file)
index 0000000..e78f1b5
--- /dev/null
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * HiSilicon PCIe Trace and Tuning (PTT) support
+ * Copyright (c) 2022 HiSilicon Technologies Co., Ltd.
+ */
+
+#ifndef INCLUDE__HISI_PTT_PKT_DECODER_H__
+#define INCLUDE__HISI_PTT_PKT_DECODER_H__
+
+#include <stddef.h>
+#include <stdint.h>
+
+#define HISI_PTT_8DW_CHECK_MASK                GENMASK(31, 11)
+#define HISI_PTT_IS_8DW_PKT            GENMASK(31, 11)
+#define HISI_PTT_MAX_SPACE_LEN         10
+#define HISI_PTT_FIELD_LENTH           4
+
+enum hisi_ptt_pkt_type {
+       HISI_PTT_4DW_PKT,
+       HISI_PTT_8DW_PKT,
+       HISI_PTT_PKT_MAX
+};
+
+static int hisi_ptt_pkt_size[] = {
+       [HISI_PTT_4DW_PKT]      = 16,
+       [HISI_PTT_8DW_PKT]      = 32,
+};
+
+int hisi_ptt_pkt_desc(const unsigned char *buf, int pos, enum hisi_ptt_pkt_type type);
+
+#endif
diff --git a/tools/perf/util/hisi-ptt.c b/tools/perf/util/hisi-ptt.c
new file mode 100644 (file)
index 0000000..45b614b
--- /dev/null
@@ -0,0 +1,192 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HiSilicon PCIe Trace and Tuning (PTT) support
+ * Copyright (c) 2022 HiSilicon Technologies Co., Ltd.
+ */
+
+#include <byteswap.h>
+#include <endian.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/log2.h>
+#include <linux/types.h>
+#include <linux/zalloc.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include "auxtrace.h"
+#include "color.h"
+#include "debug.h"
+#include "evsel.h"
+#include "hisi-ptt.h"
+#include "hisi-ptt-decoder/hisi-ptt-pkt-decoder.h"
+#include "machine.h"
+#include "session.h"
+#include "tool.h"
+#include <internal/lib.h>
+
+struct hisi_ptt {
+       struct auxtrace auxtrace;
+       u32 auxtrace_type;
+       struct perf_session *session;
+       struct machine *machine;
+       u32 pmu_type;
+};
+
+struct hisi_ptt_queue {
+       struct hisi_ptt *ptt;
+       struct auxtrace_buffer *buffer;
+};
+
+static enum hisi_ptt_pkt_type hisi_ptt_check_packet_type(unsigned char *buf)
+{
+       uint32_t head = *(uint32_t *)buf;
+
+       if ((HISI_PTT_8DW_CHECK_MASK & head) == HISI_PTT_IS_8DW_PKT)
+               return HISI_PTT_8DW_PKT;
+
+       return HISI_PTT_4DW_PKT;
+}
+
+static void hisi_ptt_dump(struct hisi_ptt *ptt __maybe_unused,
+                         unsigned char *buf, size_t len)
+{
+       const char *color = PERF_COLOR_BLUE;
+       enum hisi_ptt_pkt_type type;
+       size_t pos = 0;
+       int pkt_len;
+
+       type = hisi_ptt_check_packet_type(buf);
+       len = round_down(len, hisi_ptt_pkt_size[type]);
+       color_fprintf(stdout, color, ". ... HISI PTT data: size %zu bytes\n",
+                     len);
+
+       while (len > 0) {
+               pkt_len = hisi_ptt_pkt_desc(buf, pos, type);
+               if (!pkt_len)
+                       color_fprintf(stdout, color, " Bad packet!\n");
+
+               pos += pkt_len;
+               len -= pkt_len;
+       }
+}
+
+static void hisi_ptt_dump_event(struct hisi_ptt *ptt, unsigned char *buf,
+                               size_t len)
+{
+       printf(".\n");
+
+       hisi_ptt_dump(ptt, buf, len);
+}
+
+static int hisi_ptt_process_event(struct perf_session *session __maybe_unused,
+                                 union perf_event *event __maybe_unused,
+                                 struct perf_sample *sample __maybe_unused,
+                                 struct perf_tool *tool __maybe_unused)
+{
+       return 0;
+}
+
+static int hisi_ptt_process_auxtrace_event(struct perf_session *session,
+                                          union perf_event *event,
+                                          struct perf_tool *tool __maybe_unused)
+{
+       struct hisi_ptt *ptt = container_of(session->auxtrace, struct hisi_ptt,
+                                           auxtrace);
+       int fd = perf_data__fd(session->data);
+       int size = event->auxtrace.size;
+       void *data = malloc(size);
+       off_t data_offset;
+       int err;
+
+       if (!data)
+               return -errno;
+
+       if (perf_data__is_pipe(session->data)) {
+               data_offset = 0;
+       } else {
+               data_offset = lseek(fd, 0, SEEK_CUR);
+               if (data_offset == -1) {
+                       free(data);
+                       return -errno;
+               }
+       }
+
+       err = readn(fd, data, size);
+       if (err != (ssize_t)size) {
+               free(data);
+               return -errno;
+       }
+
+       if (dump_trace)
+               hisi_ptt_dump_event(ptt, data, size);
+
+       free(data);
+       return 0;
+}
+
+static int hisi_ptt_flush(struct perf_session *session __maybe_unused,
+                         struct perf_tool *tool __maybe_unused)
+{
+       return 0;
+}
+
+static void hisi_ptt_free_events(struct perf_session *session __maybe_unused)
+{
+}
+
+static void hisi_ptt_free(struct perf_session *session)
+{
+       struct hisi_ptt *ptt = container_of(session->auxtrace, struct hisi_ptt,
+                                           auxtrace);
+
+       session->auxtrace = NULL;
+       free(ptt);
+}
+
+static bool hisi_ptt_evsel_is_auxtrace(struct perf_session *session,
+                                      struct evsel *evsel)
+{
+       struct hisi_ptt *ptt = container_of(session->auxtrace, struct hisi_ptt, auxtrace);
+
+       return evsel->core.attr.type == ptt->pmu_type;
+}
+
+static void hisi_ptt_print_info(__u64 type)
+{
+       if (!dump_trace)
+               return;
+
+       fprintf(stdout, "  PMU Type           %" PRId64 "\n", (s64) type);
+}
+
+int hisi_ptt_process_auxtrace_info(union perf_event *event,
+                                  struct perf_session *session)
+{
+       struct perf_record_auxtrace_info *auxtrace_info = &event->auxtrace_info;
+       struct hisi_ptt *ptt;
+
+       if (auxtrace_info->header.size < HISI_PTT_AUXTRACE_PRIV_SIZE +
+                               sizeof(struct perf_record_auxtrace_info))
+               return -EINVAL;
+
+       ptt = zalloc(sizeof(*ptt));
+       if (!ptt)
+               return -ENOMEM;
+
+       ptt->session = session;
+       ptt->machine = &session->machines.host; /* No kvm support */
+       ptt->auxtrace_type = auxtrace_info->type;
+       ptt->pmu_type = auxtrace_info->priv[0];
+
+       ptt->auxtrace.process_event = hisi_ptt_process_event;
+       ptt->auxtrace.process_auxtrace_event = hisi_ptt_process_auxtrace_event;
+       ptt->auxtrace.flush_events = hisi_ptt_flush;
+       ptt->auxtrace.free_events = hisi_ptt_free_events;
+       ptt->auxtrace.free = hisi_ptt_free;
+       ptt->auxtrace.evsel_is_auxtrace = hisi_ptt_evsel_is_auxtrace;
+       session->auxtrace = &ptt->auxtrace;
+
+       hisi_ptt_print_info(auxtrace_info->priv[0]);
+
+       return 0;
+}
diff --git a/tools/perf/util/hisi-ptt.h b/tools/perf/util/hisi-ptt.h
new file mode 100644 (file)
index 0000000..2db9b40
--- /dev/null
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * HiSilicon PCIe Trace and Tuning (PTT) support
+ * Copyright (c) 2022 HiSilicon Technologies Co., Ltd.
+ */
+
+#ifndef INCLUDE__PERF_HISI_PTT_H__
+#define INCLUDE__PERF_HISI_PTT_H__
+
+#define HISI_PTT_PMU_NAME              "hisi_ptt"
+#define HISI_PTT_AUXTRACE_PRIV_SIZE    sizeof(u64)
+
+struct auxtrace_record *hisi_ptt_recording_init(int *err,
+                                               struct perf_pmu *hisi_ptt_pmu);
+
+int hisi_ptt_process_auxtrace_info(union perf_event *event,
+                                  struct perf_session *session);
+
+#endif
index b34cb3d..e3548dd 100644 (file)
@@ -4046,6 +4046,7 @@ static const char * const intel_pt_info_fmts[] = {
        [INTEL_PT_SNAPSHOT_MODE]        = "  Snapshot mode       %"PRId64"\n",
        [INTEL_PT_PER_CPU_MMAPS]        = "  Per-cpu maps        %"PRId64"\n",
        [INTEL_PT_MTC_BIT]              = "  MTC bit             %#"PRIx64"\n",
+       [INTEL_PT_MTC_FREQ_BITS]        = "  MTC freq bits       %#"PRIx64"\n",
        [INTEL_PT_TSC_CTC_N]            = "  TSC:CTC numerator   %"PRIu64"\n",
        [INTEL_PT_TSC_CTC_D]            = "  TSC:CTC denominator %"PRIu64"\n",
        [INTEL_PT_CYC_BIT]              = "  CYC bit             %#"PRIx64"\n",
@@ -4060,8 +4061,12 @@ static void intel_pt_print_info(__u64 *arr, int start, int finish)
        if (!dump_trace)
                return;
 
-       for (i = start; i <= finish; i++)
-               fprintf(stdout, intel_pt_info_fmts[i], arr[i]);
+       for (i = start; i <= finish; i++) {
+               const char *fmt = intel_pt_info_fmts[i];
+
+               if (fmt)
+                       fprintf(stdout, fmt, arr[i]);
+       }
 }
 
 static void intel_pt_print_info_str(const char *name, const char *str)
index 437389d..5973f46 100644 (file)
@@ -246,6 +246,9 @@ __add_event(struct list_head *list, int *idx,
        struct perf_cpu_map *cpus = pmu ? perf_cpu_map__get(pmu->cpus) :
                               cpu_list ? perf_cpu_map__new(cpu_list) : NULL;
 
+       if (pmu)
+               perf_pmu__warn_invalid_formats(pmu);
+
        if (pmu && attr->type == PERF_TYPE_RAW)
                perf_pmu__warn_invalid_config(pmu, attr->config, name);
 
index 74a2caf..0328405 100644 (file)
@@ -1005,6 +1005,23 @@ err:
        return NULL;
 }
 
+void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu)
+{
+       struct perf_pmu_format *format;
+
+       /* fake pmu doesn't have format list */
+       if (pmu == &perf_pmu__fake)
+               return;
+
+       list_for_each_entry(format, &pmu->format, list)
+               if (format->value >= PERF_PMU_FORMAT_VALUE_CONFIG_END) {
+                       pr_warning("WARNING: '%s' format '%s' requires 'perf_event_attr::config%d' "
+                                  "which is not supported by this version of perf!\n",
+                                  pmu->name, format->name, format->value);
+                       return;
+               }
+}
+
 static struct perf_pmu *pmu_find(const char *name)
 {
        struct perf_pmu *pmu;
index a7b0f95..68e15c3 100644 (file)
@@ -17,6 +17,7 @@ enum {
        PERF_PMU_FORMAT_VALUE_CONFIG,
        PERF_PMU_FORMAT_VALUE_CONFIG1,
        PERF_PMU_FORMAT_VALUE_CONFIG2,
+       PERF_PMU_FORMAT_VALUE_CONFIG_END,
 };
 
 #define PERF_PMU_FORMAT_BITS 64
@@ -139,6 +140,7 @@ int perf_pmu__caps_parse(struct perf_pmu *pmu);
 
 void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
                                   const char *name);
+void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
 
 bool perf_pmu__has_hybrid(void);
 int perf_pmu__match(char *pattern, char *name, char *tok);
index a15d9fb..58b4926 100644 (file)
@@ -27,8 +27,6 @@ num_dec         [0-9]+
 
 {num_dec}      { return value(10); }
 config         { return PP_CONFIG; }
-config1                { return PP_CONFIG1; }
-config2                { return PP_CONFIG2; }
 -              { return '-'; }
 :              { return ':'; }
 ,              { return ','; }
index 0dab0ec..e675d79 100644 (file)
@@ -18,7 +18,7 @@ do { \
 
 %}
 
-%token PP_CONFIG PP_CONFIG1 PP_CONFIG2
+%token PP_CONFIG
 %token PP_VALUE PP_ERROR
 %type <num> PP_VALUE
 %type <bits> bit_term
@@ -45,18 +45,11 @@ PP_CONFIG ':' bits
                                      $3));
 }
 |
-PP_CONFIG1 ':' bits
+PP_CONFIG PP_VALUE ':' bits
 {
        ABORT_ON(perf_pmu__new_format(format, name,
-                                     PERF_PMU_FORMAT_VALUE_CONFIG1,
-                                     $3));
-}
-|
-PP_CONFIG2 ':' bits
-{
-       ABORT_ON(perf_pmu__new_format(format, name,
-                                     PERF_PMU_FORMAT_VALUE_CONFIG2,
-                                     $3));
+                                     $2,
+                                     $4));
 }
 
 bits:
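
The grammar rewrite above collapses the dedicated PP_CONFIG1/PP_CONFIG2 tokens into a single `config<N>` form, where the number following `config` selects `perf_event_attr::configN` (a bare `config` means index 0). Splitting a format term the same way in shell (the term is illustrative):

```shell
term="config2:0-63"
idx=${term#config}               # strip the literal "config" prefix -> "2:0-63"
idx=${idx%%:*}                   # keep only the part before ':'     -> "2"
if [ -z "$idx" ]; then idx=0; fi # bare "config:bits" maps to index 0
echo "$idx"                      # prints 2
```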
index 127b8ca..24dd621 100644 (file)
@@ -3936,6 +3936,19 @@ static struct btf_raw_test raw_tests[] = {
        .err_str = "Invalid type_id",
 },
 {
+       .descr = "decl_tag test #16, func proto, return type",
+       .raw_types = {
+               BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),                          /* [1] */
+               BTF_VAR_ENC(NAME_TBD, 1, 0),                                            /* [2] */
+               BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DECL_TAG, 0, 0), 2), (-1), /* [3] */
+               BTF_FUNC_PROTO_ENC(3, 0),                                               /* [4] */
+               BTF_END_RAW,
+       },
+       BTF_STR_SEC("\0local\0tag1"),
+       .btf_load_err = true,
+       .err_str = "Invalid return type",
+},
+{
        .descr = "type_tag test #1",
        .raw_types = {
                BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),  /* [1] */
index 099c23d..b39093d 100644 (file)
@@ -47,14 +47,14 @@ record_sample(struct bpf_dynptr *dynptr, void *context)
                if (status) {
                        bpf_printk("bpf_dynptr_read() failed: %d\n", status);
                        err = 1;
-                       return 0;
+                       return 1;
                }
        } else {
                sample = bpf_dynptr_data(dynptr, 0, sizeof(*sample));
                if (!sample) {
                        bpf_printk("Unexpectedly failed to get sample\n");
                        err = 2;
-                       return 0;
+                       return 1;
                }
                stack_sample = *sample;
        }
index e9dab5f..6b8d2e2 100644 (file)
@@ -7,6 +7,8 @@ TEST_PROGS := \
        bond-lladdr-target.sh \
        dev_addr_lists.sh
 
-TEST_FILES := lag_lib.sh
+TEST_FILES := \
+       lag_lib.sh \
+       net_forwarding_lib.sh
 
 include ../../../lib.mk
index e6fa24e..5cfe7d8 100755 (executable)
@@ -14,7 +14,7 @@ ALL_TESTS="
 REQUIRE_MZ=no
 NUM_NETIFS=0
 lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
 
 source "$lib_dir"/lag_lib.sh
 
diff --git a/tools/testing/selftests/drivers/net/bonding/net_forwarding_lib.sh b/tools/testing/selftests/drivers/net/bonding/net_forwarding_lib.sh
new file mode 120000 (symlink)
index 0000000..39c9682
--- /dev/null
@@ -0,0 +1 @@
+../../../net/forwarding/lib.sh
\ No newline at end of file
index dca8be6..a1f269e 100755 (executable)
@@ -18,8 +18,8 @@ NUM_NETIFS=1
 REQUIRE_JQ="no"
 REQUIRE_MZ="no"
 NETIF_CREATE="no"
-lib_dir=$(dirname $0)/../../../net/forwarding
-source $lib_dir/lib.sh
+lib_dir=$(dirname "$0")
+source "$lib_dir"/lib.sh
 
 cleanup() {
        echo "Cleaning up"
index 642d8df..6a86e61 100644 (file)
@@ -3,4 +3,8 @@
 
 TEST_PROGS := dev_addr_lists.sh
 
+TEST_FILES := \
+       lag_lib.sh \
+       net_forwarding_lib.sh
+
 include ../../../lib.mk
index debda72..3391311 100755 (executable)
@@ -11,14 +11,14 @@ ALL_TESTS="
 REQUIRE_MZ=no
 NUM_NETIFS=0
 lib_dir=$(dirname "$0")
-source "$lib_dir"/../../../net/forwarding/lib.sh
+source "$lib_dir"/net_forwarding_lib.sh
 
-source "$lib_dir"/../bonding/lag_lib.sh
+source "$lib_dir"/lag_lib.sh
 
 
 destroy()
 {
-       local ifnames=(dummy0 dummy1 team0 mv0)
+       local ifnames=(dummy1 dummy2 team0 mv0)
        local ifname
 
        for ifname in "${ifnames[@]}"; do
diff --git a/tools/testing/selftests/drivers/net/team/lag_lib.sh b/tools/testing/selftests/drivers/net/team/lag_lib.sh
new file mode 120000 (symlink)
index 0000000..e1347a1
--- /dev/null
@@ -0,0 +1 @@
+../bonding/lag_lib.sh
\ No newline at end of file
diff --git a/tools/testing/selftests/drivers/net/team/net_forwarding_lib.sh b/tools/testing/selftests/drivers/net/team/net_forwarding_lib.sh
new file mode 120000 (symlink)
index 0000000..39c9682
--- /dev/null
@@ -0,0 +1 @@
+../../../net/forwarding/lib.sh
\ No newline at end of file
index db52257..d3a79da 100644 (file)
@@ -1,7 +1,7 @@
 #!/bin/sh
 # SPDX-License-Identifier: GPL-2.0
 # description: Generic dynamic event - check if duplicate events are caught
-# requires: dynamic_events "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README
+# requires: dynamic_events "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README
 
 echo 0 > events/enable
 
index 914fe2e..6461c37 100644 (file)
@@ -1,7 +1,7 @@
 #!/bin/sh
 # SPDX-License-Identifier: GPL-2.0
 # description: event trigger - test inter-event histogram trigger eprobe on synthetic event
-# requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README
+# requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README
 
 echo 0 > events/enable
 
index 7321490..5a0e0df 100644 (file)
@@ -3,11 +3,11 @@ INCLUDES := -I../include -I../../ -I../../../../../usr/include/
 CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES) $(KHDR_INCLUDES)
 LDLIBS := -lpthread -lrt
 
-HEADERS := \
+LOCAL_HDRS := \
        ../include/futextest.h \
        ../include/atomic.h \
        ../include/logging.h
-TEST_GEN_FILES := \
+TEST_GEN_PROGS := \
        futex_wait_timeout \
        futex_wait_wouldblock \
        futex_requeue_pi \
@@ -24,5 +24,3 @@ TEST_PROGS := run.sh
 top_srcdir = ../../../../..
 DEFAULT_INSTALL_HDR_PATH := 1
 include ../../lib.mk
-
-$(TEST_GEN_FILES): $(HEADERS)
index 39f0fa2..05d66ef 100644 (file)
@@ -2,10 +2,10 @@
 CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE
 LDLIBS += -lm
 
-uname_M := $(shell uname -m 2>/dev/null || echo not)
-ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+ARCH ?= $(shell uname -m 2>/dev/null || echo not)
+ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
 
-ifeq (x86,$(ARCH))
+ifeq (x86,$(ARCH_PROCESSED))
 TEST_GEN_FILES := msr aperf
 endif
 
index 806a150..67fe7a4 100644 (file)
@@ -1,10 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0-only
 # Makefile for kexec tests
 
-uname_M := $(shell uname -m 2>/dev/null || echo not)
-ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+ARCH ?= $(shell uname -m 2>/dev/null || echo not)
+ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
 
-ifeq ($(ARCH),$(filter $(ARCH),x86 ppc64le))
+ifeq ($(ARCH_PROCESSED),$(filter $(ARCH_PROCESSED),x86 ppc64le))
 TEST_PROGS := test_kexec_load.sh test_kexec_file_load.sh
 TEST_FILES := kexec_common_lib.sh
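
Both Makefile hunks above stop clobbering a user-supplied `ARCH` and instead normalize a copy into `ARCH_PROCESSED`; the sed mapping behaves like this (sample inputs only):

```shell
# i?86 and x86_64 both collapse to "x86"; everything else passes through
normalize() { echo "$1" | sed -e s/i.86/x86/ -e s/x86_64/x86/; }
normalize i686      # prints x86
normalize x86_64    # prints x86
normalize ppc64le   # prints ppc64le (left as-is)
```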
 
index e05ecb3..9c131d9 100644 (file)
@@ -662,8 +662,8 @@ int test_kvm_device(uint32_t gic_dev_type)
                                             : KVM_DEV_TYPE_ARM_VGIC_V2;
 
        if (!__kvm_test_create_device(v.vm, other)) {
-               ret = __kvm_test_create_device(v.vm, other);
-               TEST_ASSERT(ret && (errno == EINVAL || errno == EEXIST),
+               ret = __kvm_create_device(v.vm, other);
+               TEST_ASSERT(ret < 0 && (errno == EINVAL || errno == EEXIST),
                                "create GIC device while other version exists");
        }
 
index 6ee7e1d..bb1d17a 100644 (file)
@@ -67,7 +67,7 @@ struct memslot_antagonist_args {
 static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
                               uint64_t nr_modifications)
 {
-       const uint64_t pages = 1;
+       uint64_t pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
        uint64_t gpa;
        int i;
 
index 9d4cb94..a3ea3d4 100644 (file)
@@ -70,7 +70,7 @@ endef
 run_tests: all
 ifdef building_out_of_srctree
        @if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \
-               rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+               rsync -aLq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
        fi
        @if [ "X$(TEST_PROGS)" != "X" ]; then \
                $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \
@@ -84,7 +84,7 @@ endif
 
 define INSTALL_SINGLE_RULE
        $(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH))
-       $(if $(INSTALL_LIST),rsync -a $(INSTALL_LIST) $(INSTALL_PATH)/)
+       $(if $(INSTALL_LIST),rsync -aL $(INSTALL_LIST) $(INSTALL_PATH)/)
 endef
 
 define INSTALL_RULE
index 74ee506..611be86 100755 (executable)
@@ -138,7 +138,6 @@ online_all_offline_memory()
 {
        for memory in `hotpluggable_offline_memory`; do
                if ! online_memory_expect_success $memory; then
-                       echo "$FUNCNAME $memory: unexpected fail" >&2
                        retval=1
                fi
        done
index 3d7adee..ff8807c 100644 (file)
@@ -25,6 +25,7 @@ rxtimestamp
 sk_bind_sendto_listen
 sk_connect_zero_addr
 socket
+so_incoming_cpu
 so_netns_cookie
 so_txtime
 stress_reuseport_listen
index 2a6b0bc..cec4800 100644 (file)
@@ -70,6 +70,8 @@ TEST_PROGS += io_uring_zerocopy_tx.sh
 TEST_GEN_FILES += bind_bhash
 TEST_GEN_PROGS += sk_bind_sendto_listen
 TEST_GEN_PROGS += sk_connect_zero_addr
+TEST_PROGS += test_ingress_egress_chaining.sh
+TEST_GEN_PROGS += so_incoming_cpu
 
 TEST_FILES := settings
 
diff --git a/tools/testing/selftests/net/so_incoming_cpu.c b/tools/testing/selftests/net/so_incoming_cpu.c
new file mode 100644 (file)
index 0000000..0e04f9f
--- /dev/null
@@ -0,0 +1,242 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Amazon.com Inc. or its affiliates. */
+#define _GNU_SOURCE
+#include <sched.h>
+
+#include <netinet/in.h>
+#include <sys/socket.h>
+#include <sys/sysinfo.h>
+
+#include "../kselftest_harness.h"
+
+#define CLIENT_PER_SERVER      32 /* More sockets, more reliable */
+#define NR_SERVER              self->nproc
+#define NR_CLIENT              (CLIENT_PER_SERVER * NR_SERVER)
+
+FIXTURE(so_incoming_cpu)
+{
+       int nproc;
+       int *servers;
+       union {
+               struct sockaddr addr;
+               struct sockaddr_in in_addr;
+       };
+       socklen_t addrlen;
+};
+
+enum when_to_set {
+       BEFORE_REUSEPORT,
+       BEFORE_LISTEN,
+       AFTER_LISTEN,
+       AFTER_ALL_LISTEN,
+};
+
+FIXTURE_VARIANT(so_incoming_cpu)
+{
+       int when_to_set;
+};
+
+FIXTURE_VARIANT_ADD(so_incoming_cpu, before_reuseport)
+{
+       .when_to_set = BEFORE_REUSEPORT,
+};
+
+FIXTURE_VARIANT_ADD(so_incoming_cpu, before_listen)
+{
+       .when_to_set = BEFORE_LISTEN,
+};
+
+FIXTURE_VARIANT_ADD(so_incoming_cpu, after_listen)
+{
+       .when_to_set = AFTER_LISTEN,
+};
+
+FIXTURE_VARIANT_ADD(so_incoming_cpu, after_all_listen)
+{
+       .when_to_set = AFTER_ALL_LISTEN,
+};
+
+FIXTURE_SETUP(so_incoming_cpu)
+{
+       self->nproc = get_nprocs();
+       ASSERT_LE(2, self->nproc);
+
+       self->servers = malloc(sizeof(int) * NR_SERVER);
+       ASSERT_NE(self->servers, NULL);
+
+       self->in_addr.sin_family = AF_INET;
+       self->in_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+       self->in_addr.sin_port = htons(0);
+       self->addrlen = sizeof(struct sockaddr_in);
+}
+
+FIXTURE_TEARDOWN(so_incoming_cpu)
+{
+       int i;
+
+       for (i = 0; i < NR_SERVER; i++)
+               close(self->servers[i]);
+
+       free(self->servers);
+}
+
+void set_so_incoming_cpu(struct __test_metadata *_metadata, int fd, int cpu)
+{
+       int ret;
+
+       ret = setsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, sizeof(int));
+       ASSERT_EQ(ret, 0);
+}
+
+int create_server(struct __test_metadata *_metadata,
+                 FIXTURE_DATA(so_incoming_cpu) *self,
+                 const FIXTURE_VARIANT(so_incoming_cpu) *variant,
+                 int cpu)
+{
+       int fd, ret;
+
+       fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
+       ASSERT_NE(fd, -1);
+
+       if (variant->when_to_set == BEFORE_REUSEPORT)
+               set_so_incoming_cpu(_metadata, fd, cpu);
+
+       ret = setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &(int){1}, sizeof(int));
+       ASSERT_EQ(ret, 0);
+
+       ret = bind(fd, &self->addr, self->addrlen);
+       ASSERT_EQ(ret, 0);
+
+       if (variant->when_to_set == BEFORE_LISTEN)
+               set_so_incoming_cpu(_metadata, fd, cpu);
+
+       /* We don't use CLIENT_PER_SERVER here so that this test
+        * does not block at connect() if SO_INCOMING_CPU is broken.
+        */
+       ret = listen(fd, NR_CLIENT);
+       ASSERT_EQ(ret, 0);
+
+       if (variant->when_to_set == AFTER_LISTEN)
+               set_so_incoming_cpu(_metadata, fd, cpu);
+
+       return fd;
+}
+
+void create_servers(struct __test_metadata *_metadata,
+                   FIXTURE_DATA(so_incoming_cpu) *self,
+                   const FIXTURE_VARIANT(so_incoming_cpu) *variant)
+{
+       int i, ret;
+
+       for (i = 0; i < NR_SERVER; i++) {
+               self->servers[i] = create_server(_metadata, self, variant, i);
+
+               if (i == 0) {
+                       ret = getsockname(self->servers[i], &self->addr, &self->addrlen);
+                       ASSERT_EQ(ret, 0);
+               }
+       }
+
+       if (variant->when_to_set == AFTER_ALL_LISTEN) {
+               for (i = 0; i < NR_SERVER; i++)
+                       set_so_incoming_cpu(_metadata, self->servers[i], i);
+       }
+}
+
+void create_clients(struct __test_metadata *_metadata,
+                   FIXTURE_DATA(so_incoming_cpu) *self)
+{
+       cpu_set_t cpu_set;
+       int i, j, fd, ret;
+
+       for (i = 0; i < NR_SERVER; i++) {
+               CPU_ZERO(&cpu_set);
+
+               CPU_SET(i, &cpu_set);
+               ASSERT_EQ(CPU_COUNT(&cpu_set), 1);
+               ASSERT_NE(CPU_ISSET(i, &cpu_set), 0);
+
+               /* Make sure SYN will be processed on the i-th CPU
+                * and finally distributed to the i-th listener.
+                */
+               ret = sched_setaffinity(0, sizeof(cpu_set), &cpu_set);
+               ASSERT_EQ(ret, 0);
+
+               for (j = 0; j < CLIENT_PER_SERVER; j++) {
+                       fd  = socket(AF_INET, SOCK_STREAM, 0);
+                       ASSERT_NE(fd, -1);
+
+                       ret = connect(fd, &self->addr, self->addrlen);
+                       ASSERT_EQ(ret, 0);
+
+                       close(fd);
+               }
+       }
+}
+
+void verify_incoming_cpu(struct __test_metadata *_metadata,
+                        FIXTURE_DATA(so_incoming_cpu) *self)
+{
+       int i, j, fd, cpu, ret, total = 0;
+       socklen_t len = sizeof(int);
+
+       for (i = 0; i < NR_SERVER; i++) {
+               for (j = 0; j < CLIENT_PER_SERVER; j++) {
+                       /* If we see -EAGAIN here, SO_INCOMING_CPU is broken */
+                       fd = accept(self->servers[i], &self->addr, &self->addrlen);
+                       ASSERT_NE(fd, -1);
+
+                       ret = getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len);
+                       ASSERT_EQ(ret, 0);
+                       ASSERT_EQ(cpu, i);
+
+                       close(fd);
+                       total++;
+               }
+       }
+
+       ASSERT_EQ(total, NR_CLIENT);
+       TH_LOG("SO_INCOMING_CPU is very likely to be "
+              "working correctly with %d sockets.", total);
+}
+
+TEST_F(so_incoming_cpu, test1)
+{
+       create_servers(_metadata, self, variant);
+       create_clients(_metadata, self);
+       verify_incoming_cpu(_metadata, self);
+}
+
+TEST_F(so_incoming_cpu, test2)
+{
+       int server;
+
+       create_servers(_metadata, self, variant);
+
+       /* No CPU specified */
+       server = create_server(_metadata, self, variant, -1);
+       close(server);
+
+       create_clients(_metadata, self);
+       verify_incoming_cpu(_metadata, self);
+}
+
+TEST_F(so_incoming_cpu, test3)
+{
+       int server, client;
+
+       create_servers(_metadata, self, variant);
+
+       /* No CPU specified */
+       server = create_server(_metadata, self, variant, -1);
+
+       create_clients(_metadata, self);
+
+       /* Never receive any requests */
+       client = accept(server, &self->addr, &self->addrlen);
+       ASSERT_EQ(client, -1);
+
+       verify_incoming_cpu(_metadata, self);
+}
+
+TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/net/test_ingress_egress_chaining.sh b/tools/testing/selftests/net/test_ingress_egress_chaining.sh
new file mode 100644 (file)
index 0000000..08adff6
--- /dev/null
@@ -0,0 +1,79 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test runs a simple ingress tc setup between two veth pairs,
+# and chains a single egress rule to test ingress chaining to egress.
+#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+if [ "$(id -u)" -ne 0 ]; then
+       echo "SKIP: Need root privileges"
+       exit $ksft_skip
+fi
+
+needed_mods="act_mirred cls_flower sch_ingress"
+for mod in $needed_mods; do
+       modinfo $mod &>/dev/null || { echo "SKIP: Need $mod module"; exit $ksft_skip; }
+done
+
+ns="ns$((RANDOM%899+100))"
+veth1="veth1$((RANDOM%899+100))"
+veth2="veth2$((RANDOM%899+100))"
+peer1="peer1$((RANDOM%899+100))"
+peer2="peer2$((RANDOM%899+100))"
+ip_peer1=198.51.100.5
+ip_peer2=198.51.100.6
+
+function fail() {
+       echo "FAIL: $@" >&2
+       exit 1
+}
+
+function cleanup() {
+       killall -q -9 udpgso_bench_rx
+       ip link del $veth1 &> /dev/null
+       ip link del $veth2 &> /dev/null
+       ip netns del $ns &> /dev/null
+}
+trap cleanup EXIT
+
+function config() {
+       echo "Setup veth pairs [$veth1, $peer1], and veth pair [$veth2, $peer2]"
+       ip link add $veth1 type veth peer name $peer1
+       ip link add $veth2 type veth peer name $peer2
+       ip addr add $ip_peer1/24 dev $peer1
+       ip link set $peer1 up
+       ip netns add $ns
+       ip link set dev $peer2 netns $ns
+       ip netns exec $ns ip addr add $ip_peer2/24 dev $peer2
+       ip netns exec $ns ip link set $peer2 up
+       ip link set $veth1 up
+       ip link set $veth2 up
+
+       echo "Add tc filter ingress->egress forwarding $veth1 <-> $veth2"
+       tc qdisc add dev $veth2 ingress
+       tc qdisc add dev $veth1 ingress
+       tc filter add dev $veth2 ingress prio 1 proto all flower \
+               action mirred egress redirect dev $veth1
+       tc filter add dev $veth1 ingress prio 1 proto all flower \
+               action mirred egress redirect dev $veth2
+
+       echo "Add tc filter egress->ingress forwarding $peer1 -> $veth1, bypassing the veth pipe"
+       tc qdisc add dev $peer1 clsact
+       tc filter add dev $peer1 egress prio 20 proto ip flower \
+               action mirred ingress redirect dev $veth1
+}
+
+function test_run() {
+       echo "Run TCP traffic"
+       ./udpgso_bench_rx -t &
+       sleep 1
+       ip netns exec $ns timeout -k 2 10 ./udpgso_bench_tx -t -l 2 -4 -D $ip_peer1 || fail "traffic failed"
+       echo "Test passed"
+}
+
+config
+test_run
+trap - EXIT
+cleanup
index 6d849dc..d1d8483 100644 (file)
@@ -62,6 +62,8 @@ static struct perf_event_attr make_event_attr(bool enabled, volatile void *addr,
                .remove_on_exec = 1, /* Required by sigtrap. */
                .sigtrap        = 1, /* Request synchronous SIGTRAP on event. */
                .sig_data       = TEST_SIG_DATA(addr, id),
+               .exclude_kernel = 1, /* To allow */
+               .exclude_hv     = 1, /* running as !root */
        };
        return attr;
 }
@@ -93,9 +95,13 @@ static void *test_thread(void *arg)
 
        __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
        iter = ctx.iterate_on; /* read */
-       for (i = 0; i < iter - 1; i++) {
-               __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
-               ctx.iterate_on = iter; /* idempotent write */
+       if (iter >= 0) {
+               for (i = 0; i < iter - 1; i++) {
+                       __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
+                       ctx.iterate_on = iter; /* idempotent write */
+               }
+       } else {
+               while (ctx.iterate_on);
        }
 
        return NULL;
@@ -208,4 +214,27 @@ TEST_F(sigtrap_threads, signal_stress)
        EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0));
 }
 
+TEST_F(sigtrap_threads, signal_stress_with_disable)
+{
+       const int target_count = NUM_THREADS * 3000;
+       int i;
+
+       ctx.iterate_on = -1;
+
+       EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0);
+       pthread_barrier_wait(&self->barrier);
+       while (__atomic_load_n(&ctx.signal_count, __ATOMIC_RELAXED) < target_count) {
+               EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0);
+               EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0);
+       }
+       ctx.iterate_on = 0;
+       for (i = 0; i < NUM_THREADS; i++)
+               ASSERT_EQ(pthread_join(self->threads[i], NULL), 0);
+       EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0);
+
+       EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on);
+       EXPECT_EQ(ctx.first_siginfo.si_perf_type, PERF_TYPE_BREAKPOINT);
+       EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0));
+}
+
 TEST_HARNESS_MAIN
index 7d72226..4adaad1 100644 (file)
@@ -1054,6 +1054,55 @@ TEST_F(hmm, migrate_fault)
        hmm_buffer_free(buffer);
 }
 
+TEST_F(hmm, migrate_release)
+{
+       struct hmm_buffer *buffer;
+       unsigned long npages;
+       unsigned long size;
+       unsigned long i;
+       int *ptr;
+       int ret;
+
+       npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+       ASSERT_NE(npages, 0);
+       size = npages << self->page_shift;
+
+       buffer = malloc(sizeof(*buffer));
+       ASSERT_NE(buffer, NULL);
+
+       buffer->fd = -1;
+       buffer->size = size;
+       buffer->mirror = malloc(size);
+       ASSERT_NE(buffer->mirror, NULL);
+
+       buffer->ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
+                          MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+       ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+       /* Initialize buffer in system memory. */
+       for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+               ptr[i] = i;
+
+       /* Migrate memory to device. */
+       ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+       ASSERT_EQ(ret, 0);
+       ASSERT_EQ(buffer->cpages, npages);
+
+       /* Check what the device read. */
+       for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+               ASSERT_EQ(ptr[i], i);
+
+       /* Release device memory. */
+       ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_RELEASE, buffer, npages);
+       ASSERT_EQ(ret, 0);
+
+       /* Fault pages back to system memory and check them. */
+       for (i = 0, ptr = buffer->ptr; i < size / (2 * sizeof(*ptr)); ++i)
+               ASSERT_EQ(ptr[i], i);
+
+       hmm_buffer_free(buffer);
+}
+
 /*
  * Migrate anonymous shared memory to device private memory.
  */
index 74babdb..297f250 100644 (file)
@@ -774,7 +774,27 @@ static void uffd_handle_page_fault(struct uffd_msg *msg,
                continue_range(uffd, msg->arg.pagefault.address, page_size);
                stats->minor_faults++;
        } else {
-               /* Missing page faults */
+               /*
+                * Missing page faults.
+                *
+                * Here we force a write check for each of the missing mode
+                * faults.  It's guaranteed because the only threads that
+                * will trigger uffd faults are the locking threads, and
+                * their first instruction to touch the missing page will
+                * always be pthread_mutex_lock().
+                *
+                * Note that here we relied on an NPTL glibc impl detail to
+                * always read the lock type at the entry of the lock op
+                * (pthread_mutex_t.__data.__type, offset 0x10) before
+                * doing any locking operations to guarantee that.  It's
+                * actually not good to rely on this impl detail because
+                * logically a pthread-compatible lib can implement the
+                * locks without types and we can fail when linking with
+                * them.  However since we used to find bugs with this
+                * strict check we still keep it around.  Hopefully this
+                * could be a good hint when it fails again.  If one day
+                * it'll break on some other impl of glibc we'll revisit.
+                */
                if (msg->arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WRITE)
                        err("unexpected write fault");
 
index fa73353..be8a364 100644 (file)
@@ -111,7 +111,7 @@ class Dot2c(Automata):
 
     def format_aut_init_header(self):
         buff = []
-        buff.append("struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
+        buff.append("static struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
         return buff
 
     def __get_string_vector_per_line_content(self, buff):
index e30f1b4..1376a47 100644 (file)
@@ -4839,6 +4839,12 @@ struct compat_kvm_clear_dirty_log {
        };
 };
 
+long __weak kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl,
+                                    unsigned long arg)
+{
+       return -ENOTTY;
+}
+
 static long kvm_vm_compat_ioctl(struct file *filp,
                           unsigned int ioctl, unsigned long arg)
 {
@@ -4847,6 +4853,11 @@ static long kvm_vm_compat_ioctl(struct file *filp,
 
        if (kvm->mm != current->mm || kvm->vm_dead)
                return -EIO;
+
+       r = kvm_arch_vm_compat_ioctl(filp, ioctl, arg);
+       if (r != -ENOTTY)
+               return r;
+
        switch (ioctl) {
 #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
        case KVM_CLEAR_DIRTY_LOG: {